The post Teaching teamwork appeared first on OUPblog.

I think we can improve undergraduate and graduate students’ educational experiences by giving them the benefit of working in teams. This can be implemented in short-term (two-hour to two-week) or longer-term (2- to 12-week) projects. I believe that working on a larger project with 2-4 other students, for at least 15-35% of their coursework in several courses, would build essential professional and personal skills. I grant that it is easier to plan and execute team projects in smaller graduate courses than in larger undergraduate courses.

Unfortunately, many faculty members were trained through lecture, individual homework, and strictly solitary testing. They have weak teamwork skills and are little inclined to teach teamwork. In fact, they have many fears that increase their resistance. Some believe that teamwork takes extra effort for faculty or that teams naturally lead to one person doing most of the work.

Teamwork projects may require fresh thinking by faculty members, but it may be easier to supervise and grade ten teams of four students than to mentor and grade 40 individuals. Moreover, well-designed teamwork projects could lead to published papers or start-up companies in which faculty are included as co-authors or advisers. In my best semester, five of the seven teams in my graduate course on information visualization produced a final report that led to a publication in a refereed journal or conference.

Another possible payoff is that teamwork courses may create more engaged students with higher student retention rates. Of course teams can run into difficulties and conflict among students. These are teachable moments when students can learn lessons that will help them in their professional and personal lives. These difficulties and conflicts may be more visible than individual students failing or dropping out, but I think they are a preferable alternative.

So if faculty members are ready to move towards teaching with team projects, there are some key decisions to be made. Sometimes two-person teams are natural, but larger teams of 3-5 allow more ambitious projects, while increasing the coordination complexity. I’ve also run projects where the entire class acts as a single team, as with the Encyclopedia of Virtual Environments (EVE), in which the 20 students wrote about 100 web-based articles defining the topic. Colleagues have told me about teamwork projects in which their French students created an online newspaper for French alumni describing campus sports events, or a timeline of the European philosophical movements leading up to the framing of the US Constitution.

**Team formation:** I have moved to assigning team membership (rather than allowing self-formation) using a randomization strategy, which is recommended in the literature. This helps ensure diversity among the team members, speeds the process of getting teams started, and eliminates the problem of some students having a hard time finding a team to join.

**Project design (student-driven):** Well-designed team projects take on more ambitious efforts, giving students the chance to learn how to deal with a larger goal. I prefer student-designed projects with an outside mentor, where the goal is to produce an inspirational pilot project that benefits someone outside the classroom and survives beyond the semester. I’ve had student teams work on software to schedule the campus bus routes or support a local organization that brings hundreds of foreign students for summer visits in people’s homes. Other teams helped a marketing company to assess consumer behavior in a nearby shopping mall or an internet provider to develop a network security monitor. Two teams proposed novel visualizations for the monthly jobs report of the US Bureau of Labor Statistics, which they presented to the Commissioner and her staff. I give a single grade to the team, but do require that their report include a credits section in which the role of each person is described.

**Project design (faculty-driven):** Another approach is for the teacher to design the team projects, which might be the same ones for every team. With a four-person team, distinct roles can be assigned to each person, so it becomes easier to grade students individually. Just getting students to talk together, resolve differences, agree to schedules, etc. gives them valuable skills.

**Milestones:** Especially in longer projects, there should be deliverables every week, e.g. initial proposal, first designs, test cases, mid-course report, preliminary report, and final report.

**Deliverables:** With teams there can be multiple deliverables, e.g. in my graduate information visualization course, students produce a full conference paper, 3-5 minute YouTube video, working demo, and slide deck & presentation.

**Teamwork strategies:** For short-term teams (a few weeks to a semester), simple strategies are probably best. I use: (1) “Something small soon,” which asks students to make small efforts that validate concepts before committing greater energy, and (2) “Who does what by when,” which clarifies responsibilities on an hourly basis, such as “If Sam and I do the draft by 6pm Tuesday, will Jose and Marie give us feedback by noon on Wednesday?” Neither strategy requires any meetings at all; each is simply a management device for coordinating work among team members.

**Critiques and revisions:** I ask students to post their preliminary reports on the class’s shared website two weeks before the end of the semester. Then students sign up to read and critique one of the reports, sending their critique to me and to the report’s authors. They write one paragraph about what they learned and liked, then offer constructive suggestions ranging from improvements to the report’s overall structure, to proposed references and improved figures, to grammar and spelling fixes. When students realize that their work will be read by other students, they are likely to be more careful. When students read another team’s project report, they reflect on their own, possibly seeing ways to improve it. I grade the critiques, which count for 3-6% of the final grade. My goal is to help every team improve the quality of its work. The very process of preparing a preliminary report early and then revising it often does much to improve quality.

**Concerns:** I know that some faculty members worry that one person in a team will do the majority of the work, but if projects are ambitious enough then that possibility is reduced. Grading remains an issue that each faculty member has to decide on. I find that having students include a credits box in their final report helps, but other instructors require peer rating/reporting for team members.

In summary, anything novel takes some thinking, but embracing team projects could substantially improve education programs, engage more marginal students, and improve student retention rates. Learning to use teamwork tools such as email, videoconferencing, and shared documents provides students with valuable skills. Working in teams can be fun for students and satisfying for teachers.

*Featured image credit: Harvard Business School classroom by HBS 1908. CC BY-SA 3.0 via Wikimedia Commons.*

The post What is combinatorics? appeared first on OUPblog.

- How many possible sudoku puzzles are there?
- Do 37 Londoners exist with the same number of hairs on their head?
- In a lottery where 6 balls are selected from 49, how often do two winning balls have consecutive numbers?
- In how many ways can we give change for £1 using only 10p, 20p, and 50p pieces?
- Is there a systematic way of escaping from a maze?
- How many ways are there of rearranging the letters in the word “ABRACADABRA”?
- Can we construct a floor tiling from squares and regular hexagons?
- In a random group of 23 people, what is the chance that two have the same birthday?
- In chess, can a knight visit all the 64 squares of an 8 × 8 chessboard by knight’s moves and return to its starting point?
- If a number of letters are put at random into envelopes, what is the chance that no letter ends up in the right envelope?

What do you notice about these problems?

First of all, unlike many mathematical problems that involve much abstract and technical language, they’re all easy to understand – even though some of them turn out to be frustratingly difficult to solve. This is one of the main delights of the subject.

Secondly, although these problems may appear diverse and unrelated, they mainly involve selecting, arranging, and counting objects of various types. In particular, many of them take the following forms: Does such-and-such exist? If so, how can we construct it, and how many of them are there? And which one is the ‘best’?

The subject of combinatorial analysis or combinatorics (pronounced *com-bin-a-tor-ics*) is concerned with such questions. We may loosely describe it as the branch of mathematics concerned with selecting, arranging, constructing, classifying, and counting or listing things.
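
Several of the opening puzzles can in fact be settled by a few lines of brute-force counting. Here is a sketch in Python (the variable names are mine; the coin denominations, the word, and the group size come from the puzzles above, and the birthday calculation assumes a 365-day year):

```python
from math import factorial

# Change for £1 (100p) using only 10p, 20p, and 50p pieces: enumerate the
# 50p and 20p counts; any non-negative remainder is filled with 10p coins.
change_ways = sum(
    1
    for fifties in range(3)      # 0, 1, or 2 fifty-pence pieces
    for twenties in range(6)     # 0 to 5 twenty-pence pieces
    if 100 - 50 * fifties - 20 * twenties >= 0
)

# Rearrangements of "ABRACADABRA": the multinomial coefficient
# 11! / (5! * 2! * 2! * 1! * 1!), since A, B, R, C, D occur 5, 2, 2, 1, 1 times.
word = "ABRACADABRA"
anagrams = factorial(len(word))
for letter in set(word):
    anagrams //= factorial(word.count(letter))

# Birthday problem: the chance that 23 random people include a shared birthday
# is 1 minus the probability that all 23 birthdays are distinct.
p_distinct = 1.0
for i in range(23):
    p_distinct *= (365 - i) / 365
p_shared = 1 - p_distinct

print(change_ways, anagrams, round(p_shared, 3))  # 10 83160 0.507
```

Small programs like these answer the ‘how many’ questions directly; the charm of combinatorics is that each also has a pencil-and-paper argument explaining *why* the count comes out as it does.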

To clarify our ideas, let’s see how various sources define combinatorics.

Oxford Dictionaries describe it briefly as:

“The branch of mathematics dealing with combinations of objects belonging to a finite set in accordance with certain constraints, such as those of graph theory.”

The Collins dictionary presents it as:

“the branch of mathematics concerned with the theory of enumeration, or combinations and permutations, in order to solve problems about the possibility of constructing arrangements of objects which satisfy specified conditions.”

Wikipedia introduces a new idea, that combinatorics is:

“a branch of mathematics concerning the study of finite or countable discrete structures.”

So the subject involves finite sets or discrete elements that proceed in separate steps (such as the numbers 1, 2, 3 …), rather than continuous systems such as the totality of numbers (including π, √2, etc.) or ideas of gradual change such as are found in the calculus. The Encyclopaedia Britannica extends this distinction by defining combinatorics as:

“the field of mathematics concerned with problems of selection, arrangement, and operation within a finite or discrete system … One of the basic problems of combinatorics is to determine the number of possible configurations (e.g., graphs, designs, arrays) of a given type.”

Finally, Wolfram Research’s *MathWorld* presents it slightly differently as:

“the branch of mathematics studying the enumeration, combination, and permutation of sets of elements and the mathematical relations that characterize their properties,”

adding that:

“Mathematicians sometimes use the term ‘combinatorics’ to refer to a larger subset of discrete mathematics that includes graph theory. In that case, what is commonly called combinatorics is then referred to as ‘enumeration’.”

The subject of combinatorics can be traced back some 3000 years to ancient China and India. For many years, especially in the Middle Ages and the Renaissance, it consisted mainly of problems involving the permutations and combinations of certain objects. Indeed, one of the earliest works to introduce the word ‘combinatorial’ was a *Dissertation on the combinatorial art* by the 20-year-old Gottfried Wilhelm Leibniz in 1666. This work discussed permutations and combinations, even claiming on the front cover to ‘prove the existence of God with complete mathematical certainty’.

Over the succeeding centuries the range of combinatorial activity broadened greatly. Many new types of problem came under its umbrella, while combinatorial techniques were gradually developed for solving them. In particular, combinatorics now includes a wide range of topics, such as the geometry of tilings and polyhedra, the theory of graphs, magic squares and Latin squares, block designs and finite projective planes, and partitions of numbers.

Much of combinatorics originated in recreational pastimes, as illustrated by such well-known puzzles as the Königsberg bridges problem, the four-colour map problem, the Tower of Hanoi, the birthday paradox, and Fibonacci’s ‘rabbits’ problem. But in recent years the subject has developed in depth and variety and has increasingly become a part of mainstream mathematics. Prestigious mathematical awards such as the Fields Medal and the Abel Prize have been given for ground-breaking contributions to the subject, while a number of spectacular combinatorial advances have been reported in the national and international media.

Undoubtedly part of the reason for the subject’s recent importance has arisen from the growth of computer science and the increasing use of algorithmic methods for solving real-world practical problems. These have led to combinatorial applications in a wide range of subject areas, both within and outside mathematics, including network analysis, coding theory, probability, virology, experimental design, scheduling, and operations research.

*Featured image credit: ‘Sudoku’ by Gellinger. CC0 public domain via Pixabay.*

The post A brief history of crystallography appeared first on OUPblog.

So, what is crystallography? Put simply, it is the study of crystals. Now, let’s be careful here. I am not talking about all those silly websites advertising ways in which crystals act as magical healing agents, with their chakras, auras, and energy levels. No, this is a serious scientific subject, with some 26 Nobel Prizes to its credit. And yet, despite this, it remains a largely hidden subject, at least in the public mind.

Crystallography as a science has a long and venerable history going back to the 17th century, when the sheer beauty of the symmetry of crystals suggested an underlying order of some kind. For the next three centuries, our knowledge of what crystals actually were was based on conjecture and argument, with a few simple experiments thrown in. From their symmetry and shapes it was argued that crystals must consist of ordered arrangements of minute particles: today we know them as atoms and molecules.

But it was the discovery of X-rays in 1895 that changed all that, for a few years later, in 1912 in Germany, Max Laue, Walter Friedrich, and Paul Knipping showed that an X-ray beam incident on a crystal was scattered to form a regular pattern of spots on a film (we call this diffraction). This proved that X-rays consisted of waves, and furthermore it gave direct evidence of the underlying order of atoms in the crystal. Hence Nobel Prize number one went to Laue in 1914. However, it was William Lawrence Bragg (WLB) who in 1912, at the age of 22, showed how the observed diffraction pattern could be used to determine the positions of atoms in the crystal, thus launching a completely new scientific discipline: X-ray crystallography. Working with his father, William Henry Bragg (WHB), he quickly determined the crystal structures of several materials, starting with common salt and diamond. Father and son shared Nobel Prize number two in 1915. The Braggs went on to create world-class research groups working on a huge range of solid materials, and incidentally they were active in encouraging women into science.

Since then X-ray crystallography, which today is used throughout the world, has been the method of choice for determining the crystal structures of organic and inorganic solids, pharmaceuticals, biological substances such as proteins and viruses, and indeed all kinds of solid substances. Crick and Watson’s determination of the double helix of DNA is probably the best-known example of the use of crystallography, incidentally a discovery made in William Lawrence Bragg’s laboratory in Cambridge. Had it not been for X-ray (and later neutron and electron) crystallography, we probably would not have much of today’s electronics industry, computer technology, new pharmaceuticals, new materials of all sorts, nor the modern field of genetics. The Braggs left a huge legacy which today continues to yield astonishing progress.

*Featured image credit: Protein Crystals Use in XRay Crystallography by CSIRO. CC BY 3.0 via Wikimedia Commons.*

The post Today’s Forecast: Cloudy with a chance of seizures appeared first on OUPblog.

For many years, experts in neurology, computer science, and engineering have worked toward developing algorithms to predict a seizure before it occurs. If an algorithm could detect subtle changes in the electrical activity of a person’s brain (measured by electroencephalography, or EEG) before a seizure occurs, people with epilepsy could take medications only when needed, and possibly reclaim some of those daily activities many of us take for granted. But algorithm development and testing require substantial quantities of suitable data, and progress has been slow.

Many early research reports developed and tested algorithms on relatively short intracranial EEG data segments from patients with epilepsy undergoing intracranial EEG before surgery. There are a number of problems with this. First, patients undergoing pre-surgical monitoring for epilepsy typically have their medications reduced to encourage seizures to occur; the resulting progressive decrease in the blood levels of those medications has been shown to affect the normal baseline pattern of a patient’s EEG. Second, hospital stays for pre-surgical monitoring by necessity rarely last more than two weeks, providing a very limited amount of data for any single patient. These short data segments with changing baseline EEG characteristics are particularly problematic when algorithm scientists attempt to measure an algorithm’s false positive rate, that is, the number of false alarms that a seizure forecasting algorithm might raise.

Development of robust, reliable seizure prediction algorithms requires data on many seizures and many periods of baseline, non-seizure EEG, with enough time between the seizures to allow the brain to recover. In addition, researchers are often reluctant to share algorithm data and programs; privacy concerns and the high cost of sharing large data sets make testing and comparison very difficult.

In 2013, a group of physicians and scientists from Melbourne, Australia, reported a successful trial of an implanted device capable of measuring EEG from intracranial electrode strips and telemetering the EEG data to a small external device, about the size of a smartphone, that could run seizure forecasting algorithms and provide warnings of impending seizures. The device used a proprietary seizure forecasting algorithm that performed well enough to be helpful for some patients in the trial, raising hopes that seizure forecasting might soon become clinically possible.

We recently made an effort to use Kaggle.com — a website that runs data science competitions to develop algorithms to predict everything from insurance rates to the Higgs boson — to develop new algorithms for seizure forecasting. Our competition used intracranial EEG data from the same device as in the Australian trial (implanted in eight dogs with naturally occurring epilepsy), as well as data from two human patients undergoing intracranial monitoring. In the hope of winning $15,000 in prize money, plus bragging rights in elite data science circles, hundreds of algorithm developers, most with little or no experience with epilepsy or EEG, worked countless hours to build, test, and rebuild algorithms for seizure forecasting, testing them on nearly 350 seizures recorded over more than 1,500 days. After four months, over half of these “crowdsourced” algorithms performed better than random predictions, and the winning algorithms accurately predicted over 70% of seizures with a 25% false positive rate. The data remain available for researchers to continue developing new algorithms for predicting seizures, and can serve as a benchmark against which new algorithms can be compared directly to one another and to those developed in this competition. The best-performing algorithms used a mixture of conventional and complex approaches drawn from physics, engineering, and computer science, sometimes in unorthodox ways that proved surprisingly effective. The winning teams also made the source code for their algorithms publicly available, providing a benchmark and starting point for future algorithm developers.

While we applaud the talented algorithm scientists who took home the prize money, we hope the real winners of the contest will be our patients.

*Featured image: Sky cloudy. Uploaded by Carla Nunziata. CC-BY-SA-3.0 via Wikimedia Commons*

The post Doing it with sensitivity appeared first on OUPblog.

The most recent time this happened, it reminded me of a startling academic paper, first published in 1978 in the *New England Journal of Medicine*. Dr Ward Casscells and colleagues reported something very disturbing: that most doctors can’t calculate risks correctly.

The question they posed was this. Imagine a disease (let’s call it Gobble’s disease*), which has a prevalence of 1 in 1000 in your population. There is a test for Gobble’s disease, and you know it has a false positive rate of 5%. You meet a patient in your clinic, who has tested positive. What is the probability that the patient has Gobble’s disease?

A member of the public could be forgiven for thinking the answer is 100%. After all, medical tests are always reliable, right? Someone a bit savvier, say a doctor, might look at that 5% false positive rate, and decide the answer is 95%. That’s what most of the respondents in Casscells’ study said, and he offered his question to senior doctors, junior doctors, and medical students. (And, if you had offered it to me as a medical student or a junior doctor, that’s almost certainly what I would have said – even though, in all fairness, my medical school tried hard to teach us the truth).

But they would all be hopelessly wrong. A statistician would say this: suppose you test the whole population for Gobble’s disease; a 5% false positive rate means that 5% of your population will test positive for Gobble’s disease, *even when they don’t have it.* 5% of your population is 50 per 1000. But we know that only 1 in 1000 of the people in your population has Gobble’s disease; so for roughly every 51 people who test positive, only 1 actually has the disease. The probability of your patient – who tested positive for Gobble’s disease – actually having Gobble’s disease is therefore only about 1 in 51, or roughly 2%.
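
The statistician’s back-of-the-envelope argument is just Bayes’ theorem in disguise, and it is easy to check numerically. A sketch in Python (the function name is mine; the figures are the ones above, and, as discussed below, the Gobble’s test is assumed to be perfectly sensitive):

```python
def posterior(prevalence, sensitivity, specificity):
    """Probability of having the disease, given a positive test result."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Gobble's disease: prevalence 1 in 1000, a 5% false positive rate
# (i.e. 95% specificity), and assumed 100% sensitivity.
p = posterior(prevalence=0.001, sensitivity=1.0, specificity=0.95)
print(round(p, 3))  # 0.02, i.e. about 1 chance in 51
```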

This result is so unexpected, so counter-intuitive, that it’s worth looking at more closely.

All medical tests have two basic properties. These are known as *sensitivity* and *specificity*. The sensitivity is the probability that the patient will test positive for the disease, if they actually have it. Our fictitious Gobble’s test is, we assume, 100% sensitive: it will always detect someone with Gobble’s disease. In practice, few medical tests approach 100% sensitivity.

The specificity is the probability that the patient will test negative for the disease if they haven’t got the disease. Our Gobble’s test is 95% specific: if the patient doesn’t have Gobble’s disease, there is a 95% likelihood that they will test negative for the disease. That sounds great, until we remember that there’s a 5% likelihood they will test positive, which is the cause of all our problems. Sadly, in reality, few medical tests approach 95% specificity.

In reality, sensitivity and specificity are two sides of the same coin. One cannot improve the sensitivity of any test without including more false positives (which might, as we can see, drown out the true positives we are actually interested in). An extreme example is to make every test a positive result: you would never miss anyone with the disease, but there would be so many false positives that your test would be useless.

The reason our test for Gobble’s disease is so unhelpful is that Gobble’s disease is rare. The test becomes much more valuable if Gobble’s disease is more common. Therefore to make it more useful, we shouldn’t apply the test indiscriminately, but we should try to narrow down our focus to people with risk factors. If Gobble’s disease is rare in the young but gets more common in the elderly (as many cancers do), then we can improve the usefulness of the test by applying it only to the elderly.

The other way in which we can improve the usefulness of our test is to combine it with other tests. Say our test is quick and safe. We can apply it easily to a large number of people. But to those who test positive, we can then go on and apply a different test, perhaps one which is more invasive or more expensive. Patients who test positive for both are much more likely to actually *have* Gobble’s disease.
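
To see how much a second test helps, one can run the same calculation twice, with the output of the first pass used as the input prevalence of the second. A sketch in Python (the assumptions that the second test is independent of the first and shares its 95% specificity and perfect sensitivity are mine, purely for illustration):

```python
# Posterior after one positive test (prevalence 1 in 1000, 95% specific,
# assumed 100% sensitive), exactly as in the earlier calculation:
after_one = (1.0 * 0.001) / (1.0 * 0.001 + 0.05 * 0.999)  # roughly 0.02

# A second, independent positive test with the same characteristics:
# the posterior of test one becomes the prior of test two.
after_two = (1.0 * after_one) / (1.0 * after_one + 0.05 * (1 - after_one))
print(round(after_two, 2))  # 0.29
```

Two positive results raise the probability of disease from about 2% to nearly 30%, which is why screening programmes follow a quick, cheap test with a more invasive confirmatory one.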

That security guard, having a quick look through my bag, is applying a diagnostic test: do I have a dangerous item in there, or not? Unfortunately his test isn’t very sensitive, since he might easily miss something down at the bottom. And, since most people going to the concert are there to enjoy the music, the prevalence of miscreants is low. Therefore the simple mathematics of the test tells us it is likely to be worthless. The effectiveness of the test is multiplied by applying a different test: an X-ray scan of my bag, or even of my body. These are much more expensive than a quick visual check, but airports, understandably, are prepared to foot the bill.

There are powerful lessons to be learned here. The first is that applying a single test to a whole population is likely to be very unhelpful, especially if what you are looking for is rare. The second is that medical tests seldom give a clear-cut answer; instead they lengthen or shorten the odds of a particular diagnosis being true. Finally, quite a lot of other tests (such as concert security checks) are subject to exactly the same mathematical rules as medical tests. A thorough understanding of the mathematics of probability will help no end in this endeavour. In the words of William Osler (often described as the father of modern medicine), “Medicine is a science of uncertainty and an art of probability”!

*‘Gobble’s Disease’ is an invented illness from the *Oxford Handbook of Clinical Medicine*.

*Featured Image Credit: ‘Dice, Die, Probability’ by Jody Lehigh. CC0 Public Domain via Pixabay.*

The post Can design thinking challenge the scientific method? appeared first on OUPblog.

However, there is growing interest in design thinking, a research method which encourages practitioners to reformulate goals, question requirements, empathize with users, consider divergent solutions, and validate designs with real-world interventions. Design thinking promotes playful exploration and recognizes the profound influence that diverse contexts have on preferred solutions. Advocates believe that they are dealing with “wicked problems” situated in the real world, in which controlled experiments are of dubious value.

The design thinking community generally respects science, but resists pressures to be “scientized”, which they equate with relying on controlled laboratory experiments, reductionist approaches, traditional thinking, and toy problems. Similarly, many in the scientific community will grant that design thinking has benefits in coming up with better toothbrushes or attractive smartphones, but they see little relevance to research work that leads to discoveries.

The tension may be growing since design thinking is on the rise as a business necessity and as part of university education. Institutions as diverse as the Royal College of Art, Goldsmiths, University of London, Stanford University’s d.school, and the Singapore University of Technology and Design are leading a rapidly growing movement that is eagerly supported by business. Design thinking promoters see it as a new way of thinking about serious problems such as healthcare delivery, community safety, environmental preservation, and energy conservation.

The rising prominence of design thinking in public discourse is revealed by two graphs (see Figures 1 and 2).

Both sources appear to show that, after 1975, design overtook science and engineering in prominence.

Scientists and engineers might dismiss this data and the idea that design thinking could challenge the scientific method. They believe that controlled experiments with statistical tests for significant differences are the “gold standard” for collecting evidence to support hypotheses, which add to the trusted body of knowledge. Furthermore, they believe that the cumulative body of knowledge provides the foundations for solving the serious problems of our time.

By contrast, design thinking activists question the validity of controlled experiments in dealing with complex socio-technical problems such as healthcare delivery. They question the value of medical research by carefully controlled clinical trials because of the restricted selection criteria for participants, the higher rates of compliance during trials, and the focus on a limited set of treatment possibilities. Flawed clinical trials have resulted in harm, such as when a treatment is tested only on men but the results are then applied to women. Even respected members of the scientific community have made disturbing complaints about the scientific method, such as John Ioannidis’s now-famous 2005 paper “Why Most Published Research Findings Are False.”

Design thinking advocates do not promise truth, but they believe that valuable new ideas, services, and products can come from their methods. They are passionate about immersing themselves in problems, talking to real customers, patients, or students, considering a range of alternatives, and then testing carefully in realistic settings.

Of course, there is no need to choose between design thinking and the scientific method, when researchers can and should do both. The happy compromise may be to use design thinking methods at early stages to understand a problem, and then test some of the hypotheses with the scientific method. As solutions are fashioned they can be tested in the real world to gather data on what works and what doesn’t. Then more design thinking and more scientific research could provide still clearer insights and innovations.

Instead of seeing research as a single event, such as a controlled experiment, the British Design Council recommends the Double Diamond model which captures the idea of repeated cycles of divergent and then convergent thinking. In one formulation they describe a 4-step process: “Discover”, “Define”, “Develop”, and “Deliver.”

The spirited debates about which methods to use will continue, but as teachers we should ensure that our students are skilled in both the scientific method and design thinking. Similarly, as business leaders we should ensure that our employees are well trained in applying design thinking and the scientific method. When serious problems such as healthcare or environmental preservation are being addressed, we will all be better served if design thinking and the scientific method are combined.

*Featured image credit: library books education literature by Foundry. Public domain via Pixabay.*

The post The consistency of inconsistency claims appeared first on OUPblog.

In 1931 Kurt Gödel published one of the most important and most celebrated results in 20th-century mathematics: the incompleteness of arithmetic. Gödel’s work, however, actually contains two distinct incompleteness theorems. The first can be stated a bit loosely as follows:

*First Incompleteness Theorem*: If *T* is a consistent, sufficiently strong, recursively axiomatizable theory, then there is a sentence “P” in the language of arithmetic such that neither “P” nor “not: P” is provable in *T*.

A few terminological points: To say that a theory is *recursively axiomatizable* means, again loosely put, that there is an algorithm that allows us to decide, of any statement in the language, whether or not it is an axiom of the theory. Explicating what, exactly, is meant by saying a theory is *sufficiently strong* is a bit trickier, but for our purposes it suffices to note that a theory is sufficiently strong if it is at least as strong as standard theories of arithmetic, and that this isn’t actually very strong at all: the vast majority of mathematical and scientific theories studied in standard undergraduate courses are sufficiently strong in this sense. Thus, we can understand Gödel’s first incompleteness theorem as placing a limitation on how ‘good’ a scientific or mathematical theory *T* in a language *L* can be: if *T* is consistent and sufficiently strong, then there is a sentence *S* in language *L* such that *T* does not prove that *S* is true, but it also doesn’t prove that *S* is false.

The first incompleteness theorem has received a lot of attention in the philosophical and mathematical literature, appearing in arguments purporting to show that human minds are not equivalent to computers, or that mathematical truth is somehow ineffable, and the theorem has even been claimed as evidence that God exists. But here I want to draw attention to a less well-known, and very weird, consequence of Gödel’s other result, the second incompleteness theorem.

First, a final bit of terminology. Given any theory *T*, we will represent the claim that *T* is consistent as “Con(*T*)”. It is worth emphasizing that, if *T* is a theory expressed in language *L*, and *T* is sufficiently strong in the sense discussed above, then “Con(*T*)” is a sentence in the language *L* (for the cognoscenti: “Con(*T*)” is a very complex statement of arithmetic that is *equivalent* to the claim that *T* is consistent)! Now, Gödel’s second incompleteness theorem, loosely put, is as follows:

*Second Incompleteness Theorem*: If *T* is a consistent, sufficiently strong, recursively axiomatizable theory, then *T* does not prove “Con(*T*)”.

For our purposes, it will be easier to use an equivalent, but somewhat differently formulated, version of the theorem:

*Second Incompleteness Theorem*: If *T* is a consistent, sufficiently strong, recursively axiomatizable theory, then the theory:

*T* + not: Con(*T*)

is consistent.

In other words, if *T* is a consistent, sufficiently strong theory, then the theory that says everything *T* says but also includes the (false) claim that *T* is inconsistent is nevertheless consistent (although obviously not true!). It is important to note in what follows that the second incompleteness theorem does not guarantee that a consistent theory *T* does not prove “not: Con(*T*)”. In fact, as we shall see, some consistent (but false) theories allow us to prove that they are inconsistent even though they are not!

We are now (finally!) in a position to state the main result of this post:

*Theorem*: There exists a consistent theory *T* such that:

*T* + Con(*T*)

is inconsistent, yet:

*T* + not: Con(*T*)

is consistent.

In other words, there is a consistent theory *T* such that adding the __true__ claim “*T* is consistent” to *T* results in a contradiction, yet adding the __false__ claim “*T* is inconsistent” to *T* results in a (false but) consistent theory.

Here is the proof: Let *T*_{1} be any consistent, sufficiently strong theory (e.g. Peano arithmetic). So, by Gödel’s second incompleteness theorem:

*T*_{2} = *T*_{1} + not: Con(*T*_{1})

is a consistent theory. Hence “Con(*T*_{2})” is true. Now, consider the following theories:

(i) *T*_{2} + not: Con(*T*_{2})

(ii) *T*_{2} + Con(*T*_{2})

Since, as we have already seen, *T*_{2} is consistent, it follows, again, by the second incompleteness theorem, that the first theory:

*T*_{2} + not: Con(*T*_{2})

is consistent. But now consider the second theory (ii). This theory includes the claim that *T*_{2} does not prove a contradiction – that is, it contains “Con(*T*_{2})”. But it also contains every claim that *T*_{2} contains. And *T*_{2} contains the claim that *T*_{1} __does__ prove a contradiction – that is, it contains “not: Con(*T*_{1})”. But if *T*_{1} proves a contradiction, then *T*_{2} proves a contradiction (since everything contained in *T*_{1} is also contained in *T*_{2}). Further, any sufficiently strong theory is strong enough to show this, and hence, *T*_{2} proves “not: Con(*T*_{2})”. Thus, the second theory:

*T*_{2} + Con(*T*_{2})

is inconsistent, since it proves both “Con(*T*_{2})” and “not: Con(*T*_{2})”. QED.
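For readers who prefer it compressed, the skeleton of the proof is just four steps (a sketch, writing ⊢ for provability and ⊆ for theory containment):

```latex
\begin{enumerate}
  \item $T_1$ consistent $\;\Rightarrow\;$ $T_2 = T_1 + \neg\mathrm{Con}(T_1)$ consistent
        \hfill (2nd incompleteness theorem, applied to $T_1$)
  \item $T_2$ consistent $\;\Rightarrow\;$ $T_2 + \neg\mathrm{Con}(T_2)$ consistent
        \hfill (2nd incompleteness theorem, applied to $T_2$)
  \item $T_2 \vdash \neg\mathrm{Con}(T_1)$, and, since $T_1 \subseteq T_2$,
        $T_2 \vdash \bigl(\neg\mathrm{Con}(T_1) \rightarrow \neg\mathrm{Con}(T_2)\bigr)$;
        hence $T_2 \vdash \neg\mathrm{Con}(T_2)$
  \item therefore $T_2 + \mathrm{Con}(T_2)$ proves both $\mathrm{Con}(T_2)$ and
        $\neg\mathrm{Con}(T_2)$, and so is inconsistent.
\end{enumerate}
```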

Thus, there exist consistent theories, such as *T*_{2} above, such that adding the (true) claim that that theory is consistent to that theory results in inconsistency, while adding the (false) claim that the theory is inconsistent results in a consistent theory. It is worth noting that part of the trick is that the theory *T*_{2} we used in the proof is itself consistent but not true.

This, in turn, suggests the following: in some situations, when faced with a theory *T* where we believe *T* to be consistent, but where we are unsure as to whether *T* is true, it might be safer to add “not: Con(*T*)” to *T* than it is to add “Con(*T*)” to *T*. Given that the majority of our scientific theories are likely to be consistent, but many will turn out to be false as they are overturned by newer, better, theories, this then suggests that sometimes we might be better off believing that our scientific theories are inconsistent than believing that they are consistent (if we take a stand on their consistency at all). But how can this be right?

*Featured image credit: Random mathematical formulæ illustrating the field of pure mathematics. Public domain via Wikimedia Commons.*

The post Mary Somerville: the new face on Royal Bank of Scotland’s ten-pound note is worthy of international recognition appeared first on OUPblog.

In celebrating the good news that Somerville is the people’s choice for the new gig, we could do worse than listen to the accolade given to her writing by one of the men she defeated in the public poll: James Clerk Maxwell. Father of the wireless electromagnetic era, he no doubt studied *Mechanism of the Heavens* as a student at Cambridge – and he certainly knew of Somerville’s second book, *On the Connexion of the Physical Sciences*. This was popular science rather than an advanced textbook, but Maxwell described it as “one of those suggestive books, which put into definite, intelligible, and communicable form the guiding ideas that are already working in the minds of men of science… but which they cannot yet shape into a definite statement.” This is high praise indeed.

If Maxwell’s ‘men of science’ sounds sexist in hindsight, it is doubly important to remember that women were not allowed to join the academic academies – not even the Royal Society, whose aim was not so much the doing of science as promoting it. In other words, ‘men of science’ was fact, not opinion. Which makes Mary Somerville all the more remarkable. She went on to write two more science books – and a delightful memoir completed when she was 91 – but she was also a scientist in her own right. In 1826 she published a paper in the prestigious *Philosophical Transactions of the Royal Society*, based on her experiments on a possible connection between violet light and electromagnetism. Although her results were ultimately proved incorrect, initially such famous scientists as her friends John Herschel and William Wollaston had regarded her experiment as authoritative. Her friend Michael Faraday would find the first correct experimental connection between light and electromagnetism, and then Maxwell would complete the puzzle with his magnificent electromagnetic theory of light. But he had such respect for Somerville that nearly fifty years after her experiment, he took the trouble to analyse its underlying flaw, in his *Treatise on Electricity and Magnetism*.

Somerville corresponded with Faraday during her next series of experiments, in 1835. These involved testing the effects of different coloured light on photographic paper (photography was a new and fledgling invention at the time), and her paper was published by the French Academy of Sciences. The results of her third set of experiments – on the effect of different coloured light on organic matter – were published by the Royal Society in 1845.

In her book *Connexions*, she had also conjectured that observed discrepancies in the orbit of Uranus – which had been discovered by another friend of hers, William Herschel, father of John – might be due to the effects of another body as yet unseen. After John Couch Adams and Urbain Leverrier independently predicted the existence of Neptune in 1846, Adams told Somerville’s husband, William Somerville, that his search for the planet had been inspired by that passage in *Connexions.*

When Mary Somerville died in 1872, just before her 92^{nd} birthday, she was widely acknowledged as the nineteenth century’s ‘Queen of Science.’ The day before she died, she had been studying cutting edge mathematics (‘quaternions’, which Maxwell was also studying, as it happens – he discussed their application to electromagnetism in his *Treatise* of the following year). But what makes Mary Somerville’s story timeless is her monumental struggle to understand the mysteries of science in the first place. It might be tempting to think she owed her success to the support of all her famous friends – and indeed, they did support her. But she had gained entry to the society of ‘men of science’ in a most extraordinary way.

As a child in Burntisland, a village across the Firth of Forth from Edinburgh, Mary Fairfax had grown up ‘a wild creature’, as she put it. While her brothers were sent to school, she had been left free to roam along the seashore. Her mother had taught her enough literacy to read the Bible, and later she was taught some basic arithmetic. But everything changed when a friend showed fifteen-year-old Mary the needlework patterns in a women’s magazine. Leafing through it, Mary was mesmerized not by exquisite needlework but by a collection of x’s and y’s in strange, alluring patterns. Her friend knew only that “they call it algebra” (it was a worked solution to one of the magazine’s mathematical puzzles). Tantalized, Mary began studying mathematics in secret, reading under the covers at night. When the household stock of candles ran low too quickly, Mary’s secret was discovered and her candles confiscated – her father accepted the prevailing belief that intellectual study would send a girl mad or make her seriously ill.

Mary persevered for decades, teaching herself mathematics, Latin, and French. Eventually, she was able to read and understand both Newton’s *Principia* in Latin, and Newton’s disciple Pierre Laplace in French. She did it all alone, just for the love of knowledge. But when Britain’s ‘men of science’ and their wives finally discovered her learning, they were stunned. They quickly embraced her, but her public success was also possible because of the support of her three surviving children (three had died in childhood), and especially her medical doctor husband.

At 91, while studying quaternions, she revealed one of the secrets of her success: whenever she encountered a difficulty, she remained calm, never giving up, because “if I do not succeed today, I will attack [the problem] again on the morrow.” It is a great aphorism to remember her by, as we celebrate the accolades that are still coming her way: Oxford’s Somerville College was named in her honour soon after her death, and now RBS’s ten-pound note.

*Featured image credit: Somerville College by Philip Allfrey. CC-BY-SA-3.0 via Wikimedia Commons.*

The post Earth’s climate: a complex system with mysteries abound appeared first on OUPblog.

The climate has evolved through massive changes to where we are today. It continues to evolve on long time scales but is also impacted by two factors acting on human time scales. First, there is the ongoing internal variability resulting from a plethora of natural cycles. El Niño is an exemplar, but variability occurs at all levels of the system—in the atmosphere, ocean, cryosphere, biosphere, and through their connectedness—and spans a wide range of time scales from weeks to centuries. Moreover, modes of variability can conspire together to produce unanticipated and seemingly unrelated effects. Secondly, there are changes in radiative forcing, recently dominated by anthropogenic emissions, but also affected by other factors including land use, ocean carbon uptake, solar variability, and feedbacks such as impacts on albedo from melting ice and changing cloud patterns. It is this complex mixture in a dynamically evolving system that the scientific community is striving to unravel.

Climate science is in an unusual situation in that it is an experimental science but one in which the experiments are not restricted to a traditional laboratory. Because experiments cannot be carried out on the full climate system, mathematical replicas of the Earth have been developed in order to test scientific hypotheses about how the planet will react in different circumstances. These are the climate models hosted at around 30 or so climate centres around the world that provide the main source of predictive information in the Intergovernmental Panel on Climate Change assessment reports. They are each highly complex entities in themselves involving massive computational codes. The upkeep and development of these codes raises significant mathematical issues, but the involvement of the mathematical sciences in the study of climate goes far beyond these operational tasks.

The view complementary to treating the climate as a deterministic dynamical system involves compiling information from observational data. These observations, both from modern instrumental systems and from the distant past via palaeoclimate proxies, are uncertain, which means that we need sophisticated statistical methodology to estimate and map properties of the Earth’s climate system. The two viewpoints come together because significant uncertainty accompanies any model projection, and so model output is also properly regarded as statistical.

The mathematical sciences are playing a growing role in climate studies at all levels. Models of more modest dimension than the models residing at climate centres are gaining prominence. These conceptual models can help us see the relations between different internal mechanisms that can be hidden in the full model. Key processes can often be studied in isolation and their modelling brings considerable insight into the overall climate. Such processes include biogeochemical cycles, melting of sea and land ice and land use changes. The internal structure of such processes and their impact on other aspects of the climate are revealed by mathematical and statistical analyses. Such analysis is also critical in the proper inclusion of these processes through parameterization.

Ultimately, both understanding and prediction of the climate depend equally on models, which encode physical laws, and observations, which bring direct insight into the real world. Melding these together to tease out the optimal information is an extraordinary mathematical challenge that demands a blend of statistical and dynamical thinking, in both cases at the frontiers of these areas.
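The simplest concrete instance of such melding is the scalar Kalman update, which combines a model forecast with a noisy observation of the same quantity, weighting each by the inverse of its variance. The sketch below is a toy illustration only; the numbers are invented and real climate data assimilation works with vastly higher-dimensional versions of this idea.

```python
def kalman_update(forecast, forecast_var, obs, obs_var):
    """Combine a model forecast and a noisy observation of the same
    scalar quantity, weighting each by the inverse of its variance.
    Returns the updated ('analysis') estimate and its reduced variance."""
    gain = forecast_var / (forecast_var + obs_var)
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1.0 - gain) * forecast_var
    return analysis, analysis_var

# Model forecasts 15.0 C with variance 4.0; a thermometer reads 16.0 C
# with variance 1.0. The analysis sits nearer the more certain source,
# and its variance is smaller than either input's.
analysis, analysis_var = kalman_update(15.0, 4.0, 16.0, 1.0)
print(analysis, analysis_var)   # analysis ~ 15.8, variance ~ 0.8
```

The key point is the last line of the function: whatever the inputs, the analysis variance is smaller than both the forecast and observation variances, which is the mathematical sense in which models and observations together know more than either alone.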

*Featured image credit: Windräder by fill. Public Domain via Pixabay.*

The post What is information, and should it be free? appeared first on OUPblog.

Fortunately there are some bright spots, such as the fact that it is now possible to measure information. This is the result of the pioneering work of Claude Shannon in the 1940s and 1950s. Shannon’s definitions can be used to prove theorems in a mathematically precise way, and in practice they provide the foundation for the machines which handle the vast amounts of information that are now available to us. However, that is not the end of the story.
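To make "measuring information" concrete: Shannon's entropy assigns to a source whose outcomes occur with probabilities p_i the quantity H = -Σ p_i log2(p_i), the average number of bits needed per outcome. A minimal sketch in Python (the example probabilities are invented, purely for illustration):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: the average information content per
    outcome of a source with the given outcome probabilities
    (which should sum to 1). Zero-probability outcomes contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # fair coin: 1.0 bit per toss
print(shannon_entropy([0.9, 0.1]))   # biased coin: ~0.47 bits per toss
print(shannon_entropy([1/8] * 8))    # uniform 8-way choice: 3.0 bits
```

The biased coin illustrates the central insight: the more predictable a source is, the less information each outcome carries, which is exactly what makes compression possible.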

In 1738 Daniel Bernoulli pointed out that the mathematical measure of ‘expectation’ did not allow for the fact that different people can value the outcome of an event in different ways. This observation led him to introduce the idea of ‘utility’, which has come to pervade theoretical economics. In fact, Shannon’s measure of information can be thought of as a generalization of expectation, and it leads to similar difficulties. As far as I am aware, academic work on this subject has not yet found applications in practice.

In the absence of a theoretical model, many countries (including the UK) have tried to set up a legal framework for information. First we had the Data Protection Act (1998), intended to prevent the misuse of the large amounts of personal data stored on computers. But it is very loosely worded, and consequently open to many different interpretations. (In one case, a police force believed that the Act prevented them from passing on information about a person whom they suspected of being a serial sex offender. This person then obtained employment elsewhere as a school caretaker, and murdered two pupils.) The next step was the Freedom of Information (FoI) Act (2000), which now seems to be regarded as unsatisfactory – on all sides. Those who see themselves as guardians of our ‘right to know’ are dissatisfied with the wide range of circumstances which can be considered as exceptions. Those who see themselves as guardians of our ‘security’ are concerned that attempts to prevent terrorist activities may be compromised.

Recently I looked at a question which led me into these muddy waters. The question was a simple instance of a very general one. Suppose a piece of information is only partly revealed to us: it may have been corrupted by transmission through a ‘noisy channel’, or it may have been encrypted, or some important details may have been intentionally withheld. How much useful information can we deduce from the data that we do have?

My example came from the unlikely source of the popular BBC television programme, Strictly Come Dancing. The problem was as follows. In order to determine which contestants should be eliminated from the show, the programme’s creators have devised a complex voting algorithm. First the judges award scores, and these are converted into points. Then the public is invited to vote, and the result is also converted into points. Finally the two sets of points are combined to produce a ranking of all the contestants. But the public points and the final ranking are not revealed on the results show, only the identities of the two lowest-placed contestants. I was able to show that in some circumstances the revealed data can indeed provide a great deal of information about the public vote.
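The BBC does not publish the algorithm's details, so the following Python sketch is only a guess at the commonly described scheme: each set of raw numbers is converted to rank points (n points for first place down to 1 for last), the judges' and public points are added, and only the bottom two of the combined ranking are revealed. All names and numbers here are invented, and ties are ignored for simplicity.

```python
def combined_ranking(judges_scores, public_votes):
    """Rank contestants by judges' points plus public points.

    Each dict of raw numbers is converted to points: n for first
    place down to 1 for last (ties ignored for simplicity)."""
    n = len(judges_scores)

    def to_points(scores):
        ranked = sorted(scores, key=scores.get, reverse=True)
        return {name: n - i for i, name in enumerate(ranked)}

    judge_pts = to_points(judges_scores)
    public_pts = to_points(public_votes)
    total = {name: judge_pts[name] + public_pts[name] for name in judges_scores}
    return sorted(total, key=total.get, reverse=True)

# Invented judges' scores and public vote counts for four contestants.
judges = {"Ann": 36, "Bob": 32, "Cat": 28, "Dan": 25}
public = {"Ann": 1200, "Bob": 4100, "Cat": 900, "Dan": 2000}

ranking = combined_ranking(judges, public)
bottom_two = ranking[-2:]    # all that the results show reveals
print(ranking)               # ['Bob', 'Ann', 'Dan', 'Cat']
print(bottom_two)            # ['Dan', 'Cat']
```

Holding the judges' scores and the revealed bottom two fixed while varying the hypothetical public vote is exactly the kind of inference described above: many candidate public rankings are inconsistent with the revealed data and can be ruled out.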

Strictly Come Dancing arouses great passion among its followers, and the lack of transparency of the voting system has led to numerous requests under the FoI Act. Most of these requests have been refused by the BBC, on grounds that appear to be valid in law. It seems that the FoI Act was originally based on some rather idealistic notions. When the Act was passing into law, it had to be converted into a more realistic instrument, and the resulting form of words therefore provides for a large number of exceptions to the general principle. Specifically, the BBC is able to claim that the details of the voting are exempt, because they are being used ‘for the purposes of journalism, art, or literature.’

In my view the FoI Act is simply a shield that deflects attention from the heart of the matter. The public is invited to vote and therefore has good reason to be interested in the mechanics of the voting procedure and its outcome. In addition to details of the method used to combine the public ranking with the judges’ ranking, there are other causes for concern. For example, multiple voting is allowed, and this opens up the possibility of misuse by agents who have a vested interest in a particular contestant.

I began by remarking that there is a close relationship between money and information. We are expected to observe certain rules and regulations about how we use our money, but we are allowed to keep it safe. Will similar rules and regulations about information emerge, and when?

*Featured image credit: binary code by Christiaan Colen. CC-BY-SA 2.0 via Flickr.*

The post What is information, and should it be free? appeared first on OUPblog.

The post Conversations in computing: Q&A with Editor-in-Chief, Professor Steve Furber appeared first on OUPblog.

**Justin: Can you tell us a bit more about your current role?**

**Steve:** At Manchester I am a regular research professor and I’ve served my term as head of department, though that’s some time ago now. I lead a group of 40 or 50 staff and students and our general research area ranges from computer engineering to computer architecture. On the engineering side we’re interested in the design of silicon chips and how you can make the most of the enormous transistor resource that the manufacturing industries can now give us on a chip. On the architecture side we are interested in particular in how we exploit the many-core resources that are increasingly available in all computer products today.

**Justin: What was the topic of your recent Lovelace Lecture?**

**Steve:** The title is ‘Computers and Brains’ and basically this is a lecture which talks about some of the history of artificial intelligence from some of the early writings of Ada Lovelace herself – 2015 was the 200th anniversary of her birth – through to Alan Turing’s thoughts on AI and then onto the research that I’m leading today, which is building a very large parallel computer for real-time brain modelling applications.

**Justin: Can you tell us about your SpiNNaker project?**

**Steve:** SpiNNaker is the massively parallel computer for real-time brain modelling, and the name is a rather crude compression of Spiking Neural Network Architecture – it’s not quite an acronym. We’re using a million ARM processors – those are the processors that you find in your mobile phone, designed by a British company in Cambridge – in a single machine, and with a million ARM cores we can model about 1% of the scale of the network in the human brain. The brain is a very challenging modelling target; you can think of it as 1% of the human brain, but I sometimes prefer to think of it as ten whole mouse brains. The network we’re using is quite simplified, as there’s a lot about brain connectivity that’s still not known, so there’s a lot of guesswork in building any such model.
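To give a flavour of what modelling spiking networks involves, here is a single leaky integrate-and-fire neuron, one of the simplified neuron models commonly used in large-scale spiking simulations. This is an illustrative sketch only, not SpiNNaker's actual code, and the parameter values are typical textbook choices rather than anything from the project.

```python
def lif_step(v, input_current, dt=1.0, tau=20.0,
             v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Advance a leaky integrate-and-fire neuron by one timestep.

    The membrane potential v decays toward v_rest, is driven up by
    the input current, and on crossing v_thresh the neuron 'spikes'
    and resets. Returns the new potential and whether it spiked."""
    v += (-(v - v_rest) + input_current) * dt / tau
    if v >= v_thresh:
        return v_reset, True
    return v, False

# Drive a single neuron with a constant current and record spike times.
v, spike_times = -65.0, []
for t in range(100):
    v, fired = lif_step(v, input_current=20.0)
    if fired:
        spike_times.append(t)
print(spike_times)   # regular firing: [27, 55, 83]
```

A brain-scale simulation is, in essence, very many such update rules running in parallel, with each spike routed as a small message to thousands of downstream neurons, which is the communication problem SpiNNaker's architecture is designed around.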

**Justin: You’ve said in the past that accelerating our understanding of brain function would represent a major scientific breakthrough. Can you expand a little bit more on that thought?**

**Steve:** It is clear to anybody who uses a computer that they are incredibly fast and capable at the set of things that they are good at, but they really struggle with things that we humans find simple. Very young babies learn to recognise their mother, whereas programming a computer to recognise an individual human face is possible but extremely hard. My view is that if we understood more about how humans learn to recognise faces and solve similar problems then we’d be much better placed to build computers that could do this easily.

**Justin: Where do you see AI processing going in the next five to ten years?**

**Steve:** The big issue with AI is understanding what intelligence is in the first place. I think one of the reasons why we have found true AI so difficult to reproduce in machines is that we’ve not quite worked out how natural intelligence works, hence my interest in going back to look at the brain as the substrate from which human intelligence emerges. If we can understand that better then we might be able to reproduce it more faithfully in our computing machines.

**Justin: What about the ethics of AI?**

**Steve:** Ultimately AI will lead to ethical issues. Clearly if machines become sentient then the issue as to whether you can or can’t switch them off becomes an ethical consideration. I think we are a very long way from that at the moment, so that isn’t foremost among the ethical issues we have to consider. I think there are much more pragmatic engineering issues, for example to do with driverless cars. If a driverless car is involved in a crash, whose fault is it, who is responsible? If the crash turns out to be the result of a software bug, or the result of the human interfering with the car, there’s a whole set of issues that will have to be thought through there, and they come a long time before the issue of the machine itself having any kind of rights.

**Justin: What do you think are the biggest challenges the IT industry faces?**

**Steve:** I think high on the list is the issue of cybersecurity. We are seeing increasing numbers of attacks on IT systems, and it is a very technical challenge to work out how to build defences that don’t compromise the performance of the systems too much. So as consumers we install antivirus software on our PCs, but sometimes the antivirus software makes the PC almost useless. So there’s a compromise in security, always. Most of us live in houses where the front door will succumb to a few decent kicks, but the bank chooses something more substantial for its vault. Security has to be proportionate to the risk. But I think security is going to loom increasingly large in the IT industry.

**Justin: What do you see as the most exciting emerging technologies at the moment?**

**Steve:** The most exciting technologies around the corner, I think, are cognitive systems: machines becoming less passive, so that they don’t just sit waiting for human inputs but actually respond to the environment, interact with it, and engage with it, and that requires some degree of understanding. I don’t want understanding to be interpreted in too anthropomorphic a way – their understanding may be quite prosaic, it might be at the level of an insect. But an insect has an adequate understanding of its environment for its purposes. That’s how I would expect to see computers developing increasingly in the future.

**Justin: What do you think the IT industry as a whole should be doing to improve its image?**

**Steve:** I think the image of the industry is particularly important in the way it comes over in schools and in the choices that pupils make about their future careers. We certainly had a problem recently with the kind of exposure to IT that’s happened in a lot of schools being de-motivating; it has discouraged pupils from computing. I think the changes that are needed to remedy that are now in place, and it will take a little while for them to filter through, but of course BCS has played a very active role in seeing those changes through, so hopefully computing will have a better image where it matters most, which is in schools.

**Justin: Why do you think that we aren’t seeing so many women going into IT?**

**Steve:** If I knew why women did not find IT so attractive, then I’d do something about it. It’s a major problem that for some reason culturally we think IT and computers are a male preserve and of course if we talk numbers then they are predominately male. It’s a problem that we’ve been worrying about all the time I’ve been in the university and many things have been tried and nothing has really made much difference, so it concerns me hugely but I don’t know what to do about it. I don’t think there is any shortage of female role models, there are plenty of very high-powered women in the computing business. I really don’t understand why the subject is not attractive to girls at school, which is where the problem starts. I welcome any suggestions as to what we can do to remedy this.

**Justin: Talking of role models, did you have any of your own?**

**Steve:** My role models were probably not in computing, as I said I came through the mathematics and aerodynamics route at university and was really drawn into computing by what I saw as the new wave of computing based on the microprocessor, which in the late 1970s was a very new approach to building machines. So who do I hold up as a role model? Well, one of the lecturers at the university was John Conway who was always a very inspiring mathematician and it was great fun to listen to his lectures.

**Justin: Looking back at your career so far is there anything you would have done differently if you had your time again?**

**Steve:** I don’t think so, there are no decisions in my career path that I particularly regret and I think the advice I give to people is roughly the advice I follow myself, which is to make decisions that keep the maximum number of doors open. So look for opportunities, but when there’s nothing obvious staring you in the face then think about what subject creates the most possibilities in the area you’re interested in. Maximise the number of doors.

*The full interview between Justin and Steve was originally published in ITNOW, and may also be viewed on YouTube as a two-part recording. Watch Part One and Part Two online.*

*Featured image credit: Mother board by Magnascan. CC0 Public Domain via Pixabay.*

The post Addressing anxiety in the teaching room: techniques to enhance mathematics and statistics education appeared first on OUPblog.

Mathematics and statistics anxiety is one of the major challenges involved in communicating mathematics and statistics to non-specialists. Students enrolled on degree programmes in several areas other than mathematics or statistics are required to study mandatory courses in mathematics and statistics as core elements of their degree programmes. Academics, educators, and researchers presented papers on how they have addressed this anxiety: using the history of the subjects, enhancing students’ self-belief, offering individual support, demonstrating the relevance of the subjects to students’ degree work, and making the learning process enjoyable.

The general consensus from the session was that:

- Students with low confidence experience high levels of mathematics anxiety, which has an adverse impact on their academic performance;
- University students are far from resilient to experiencing mathematics anxiety;
- It is most common in non-specialist university students;
- It is not always related to students’ academic abilities but their prior learning experience of the subjects, self-efficacy and self-beliefs;
- The increasing diversity of the university student population, resulting from the high proportion of international students and from widening participation and access to higher education, adds new dimensions to this challenge;
- This range of cultural, socio-economic and academic backgrounds of students manifests itself through diverse expectations and individual learning requirements that need to be carefully considered.

Delegates agreed that if educators involved in designing and delivering mathematics and statistics courses for non-specialist university students are aware of the implications of this diversity in student backgrounds, they should be able to appreciate the indispensable role of using a variety of teaching and learning approaches.

My personal view is that thinking like social scientists would make higher education practitioners more empathetic towards students. I think making course delivery student-focused as well as student-led would encourage students to share responsibility for their education. Focusing on connecting with students, being perceptive as well as receptive to students’ feedback, and being willing to revise teaching delivery can enhance the learning climate in teaching rooms. This would promote student interaction and encourage active learning.

Undergraduates can face several issues during their transition to university education, such as key gaps in their mathematical skills despite having A-level Mathematics or equivalent. Effective practices were shared, including a blended learning project using online formative assessment followed by feedback, and encouraging students to work within their Zone of Proximal Development (Vygotsky, 1978). Delegates were informed about two innovative Mathematics Support Centres (MSCs) that facilitate distance learning. MSCs have become important features of universities in the UK as well as overseas.

Educators shared their projects on scenario-based training of statistics support teachers, instruction methods developed by mathematics teachers, and the use of census data as well as other publicly available large data sets to support statistics literacy. Social media was explored as a tool to facilitate deep learning, enhance student engagement in science as well as engineering, and improve students’ learning experience. There were presentations on the effective use of a virtual learning environment, audio feedback, and an online collaboration model to encourage students’ participation.

Delegates seemed to find online formative assessment practices worth incorporating into their teaching. The innovative Mathematics Support Centres (MSCs) that facilitate distance learning sounded appealing to several others. Delegates who had not experimented with Facebook were convinced after a paper presentation that it is an area worth exploring to enhance student engagement.

I have used Facebook for promoting scholarly dialogue and collaborative research, as well as enhancing student engagement with statistics and operational research methods, since 2012. My rationale is to address mathematics and statistics anxiety by connecting with students, which can be done without intruding into their personal territory, i.e. without becoming their Facebook friends. I would argue that Facebook is an excellent online system which academics can use for posting topics for discussion, promoting interaction, addressing students’ queries, uploading course material, and monitoring students’ progress. These study groups are easy to set up and promote inclusive education. It is a platform students are used to and view extremely positively.

Barriers to learning such as neurodiversity were also explored, focusing on the difficulties faced by visually-impaired and hearing-impaired learners. Other areas covered included language difficulties as a barrier to reading mathematics, dyslexia and dyscalculia, and teachers’ negative bias against students from certain backgrounds. Gender imbalance was also discussed as a significant barrier, with the general consensus being that more women should be encouraged, as well as supported, to pursue careers in mathematics.

In light of the existing literature and research relating to the difficulties blind learners face, it was agreed that this is an area that calls for further research to make mathematics more accessible to the blind. It was proposed that research on combining lexical rules, speech prosody and non-speech sounds would be desirable. Furthermore, providing tools for carrying out mathematical analysis may improve the situation for blind learners.

A critique of the fallacy of assuming a homogeneous student body and homogeneous teaching in a ‘what works’ approach introduced an interesting point of controversy in the midst of excitement and optimism about a range of initiatives. These exchanges of information on research, initiatives, and projects should promote multi-disciplinary research collaboration in mathematics and statistics education.

The conference might impact research in a variety of themes related to statistics and mathematics education, including mathematics anxiety, statistics anxiety, and inclusive practice.

*Featured image credit: calculator mathematics maths finance by Unsplash. Public domain via Pixabay.*

The post Is an engineering mind-set linked to violent terrorism? appeared first on OUPblog.

The process by which young people are radicalised is very complex and poorly understood. As Scott Atran has said, the “first step to combating Isis is to understand it. We have yet to do so … What inspires the most uncompromisingly lethal actors in the world today is not so much the Qur’an or religious teachings. It’s a thrilling cause that promises glory and esteem … Youth needs values and dreams.”

Rose notes that engineering, medicine, and other technical subjects are regarded as superior education in many MENA countries. These subjects may attract people with mind-sets that like simple solutions, with little ambiguity, nuance, or debate. Rose calls this an ‘engineering mind-set’. He says that these courses tend to concentrate on rote learning and exam-passing with little or no questioning, so such mind-sets may then be reinforced by the way students are taught. Rose emphasises that young people need to be taught how to think, to immunise their minds against ideologies that seek to teach them what to think. In other words, they need to be encouraged to think critically, as in the social sciences.

There are two main points I want to highlight here. First, as Rose states, the sparse data that we have indicates that there are a disproportionate number of STEM students and graduates recruited into Jihadist terrorism. That needs to be explained.

At least part, but only part, of the complex answer may rest on the second point. How do Rose and others characterise an engineering mind-set, and how does it relate to the way engineers actually think? For sure, the way Rose describes it needs unpacking. Rose quotes Diego Gambetta in 2007 (who in turn quotes the work of Seymour Lipset and Earl Raab, who wrote on right-wing and Islamic extremism in 1971). He says this mind-set has three components: (a) ‘monism’ – the idea that there exists one best solution to all problems; (b) ‘simplism’ – the idea that if only people were rational, problems would have single causes and simple, unambiguous remedies; (c) ‘preservatism’ – an underlying craving for a lost order of privileges and authority, as a backlash against deprivation in a period of sharp social change – in jihadist ideology, the theme of returning to the order of the prophet’s early community.

Rose’s characterisation, like many attempts to capture something complex and protean, contains some truth – but it is far from adequate.

A good start at an analysis would be the report by the Royal Academy of Engineering, ‘Thinking like an engineer’. One could easily counter Rose’s three components, and be nearer the mark, by using the trio pluralism, complexity, and sustainability. I believe that we should perhaps look for an explanation by examining the huge gap that has existed (and still exists – though reduced) between engineering science and practice. For example, theoretical engineering mechanics rests on the certainty of deterministic physics from Newton to Einstein, with its consequent time-invariant dynamics. Determinism means that all events have sufficient causes – literally, that the past decides the future. Einstein is reputed to have said “time is an illusion.” No practitioner takes these interpretations of certainty seriously, but she uses them as a model to make decisions because they are the best we have and they work. But there is one big and important proviso – they work in a context that must be understood. The Nobel Prize winner Ilya Prigogine has shown that evolutionary thermodynamics rests on complex processes far from equilibrium. Contrary to dynamical theories, the laws of thermodynamics show that time is an arrow going in only one direction.

Quantum physics has blown away all pretence at certainty. Practitioners intuitively know that their theories are human constructs – imperfect models built to provide us with meaning and guide our ways of behaving. They use them to help make safe and functional decisions, to provide systems of artefacts that are fit for purpose as set out in a specification. They use them, but they know there are risks. There is no certainty in engineering practice (whatever some seem to believe) – witness the few tragic engineering failures (like Chernobyl). Risk and uncertainty are managed by safe, dependable practice – always testing, always checking – taking a professional duty of care. Were it not for the creativity, dedication, and ingenuity of engineers, such disasters would be more frequent. Just think of the amazing complexity of building and maintaining the international space station – truly inspirational, and built by engineers. So yes, there is a paradox. Deterministic theory points to single solutions, to black and white answers – but in practice we use it only as a model – a human construct which has enabled us to achieve some incredible things.

Engineers use a plurality of methodologies and solutions. Anyone who has designed and made anything knows that there are multiple solutions. Any attempt at optimising a solution will only work in a context, and may be dangerously vulnerable outside of that context. Even a simple hinged double pendulum behaves in a complex, chaotic way: bifurcations in its trajectory make its actual performance very sensitive to initial conditions. All successful practitioners know that people make decisions for a variety of reasons – some rational, some not so rational. Only in theory does simplism apply. The financial crash of 2008 put paid to simplism in economic theory. Lastly, yes, engineers do like order. Like life itself, they create negentropy. They impose order on nature, but do it to improve the human condition. The modern challenge is to do it more sustainably and to create resilience in the face of climate change.

So these are the ideas that engineering educators work to. Most engineering courses include design projects. Students learn about understanding a need, turning it into a specification, and delivering a reality. To do so they must think creatively, consider the needs of multiple stakeholders, and think critically, exercising judgement to determine the criteria by which to make choices. Many undergraduate engineering courses (but admittedly not all) now include ethics. In practice, the products of the work of engineers are continually tested by use. If engineers didn’t think creatively and critically about such use they would soon be out of a job.

In summary, to characterise the engineering mind-set as one that thinks problems have single solutions devoid of ambiguity and uncertainty is derogatory and disparaging of our ingenious engineers. It is quite wrong to characterise the engineering mind-set as one that does not teach students how to think, though of course engineering educators are constantly striving to do better. Any claim that an engineering mind-set is linked to violent terrorism needs to be examined with great care.

*Featured image credit: Engineering, by wolter_tom. Public domain via Pixabay.*

The post How do people read mathematics? appeared first on OUPblog.

But it turns out that there are interesting questions here. There are, for instance, thousands of mathematics textbooks – many students own one and use it regularly. They might not use it in the way intended by the author; research indicates that some students – perhaps most – typically use their textbooks only as a source of problems, and essentially ignore the expository sections. That is a shame for textbook authors, whose months spent crafting those sections do not influence learning in the ways they intend. It is also a shame for students, especially for those who go on to more advanced, demanding study of upper-level university mathematics. In proof-based courses it is difficult to avoid learning by reading. Even successful students are unlikely to understand everything in lectures – the material is too challenging and the pace is too fast – and reading to learn is expected.

Because students are not typically experienced or trained in mathematical reading, this returns us to the opening questions. Does this lack of training matter? Undergraduate students can read, so can they not simply apply this skill to mathematical material? But it turns out that this is not as simple as it sounds, because mathematical reading is not like ordinary reading. Mathematicians have long known this (“you should read with a pencil in hand”), but the skills needed have recently been empirically documented in research studies conducted in the Mathematics Education Centre at Loughborough University. Matthew Inglis and I began with an expert/novice study contrasting the reading behaviours of professional mathematicians with those of undergraduate students. By using eye-movement analyses we found that, when reading purported mathematical proofs, undergraduates’ attention is drawn to the mathematical symbols. To the uninitiated that might sound fine, but it is not consistent with expert behaviour; the professional mathematicians attended proportionately more to the words, reflecting their knowledge that these capture much of the logical reasoning in any written mathematical argument.

Another difference appeared in patterns of behaviour, which can best be seen by watching the behaviour of one mathematician when reading a purported proof to decide upon its validity (see below). Ordinary reading, as you might expect, is fairly linear. But mathematical reading is not. When studying the purported proof, the mathematician makes a great many back-and-forth eye movements, and this is characteristic of professional reading: the mathematicians in our study did this significantly more than the undergraduate students, particularly when justifications for deductions were left implicit.

This work is captured in detail in our article “Expert and Novice Approaches to Reading Mathematical Proofs”. Since completing it, Matthew and I have worked with PhD and project students Mark Hodds, Somali Roy and Tom Kilbey to further investigate undergraduate mathematical reading. We have discovered that research-based Self-Explanation Training can render students’ reading more like that of mathematicians and can consequently improve their proof comprehension (see our paper Self-Explanation Training Improves Proof Comprehension); that multimedia resources designed to support effective reading can help too much, leading to poorer retention of the resulting knowledge; and that there is minimal link between reading time and consequent learning. Readers interested in this work might like to begin by reading our AMS Notices article, which summarises much of this work.

In the meantime, my own teaching has changed – I am now much more aware of the need to help students learn to read mathematics and to provide them with time to practice. And this research has influenced my own writing for students: there is no option to skip the expository text, because expository text is all there is. But this text is as much about the thinking as it is about the mathematics. It is necessary for mathematics textbooks to contain accessible text, explicit guidance on engaging with abstract mathematical information, and encouragement to recognise that mathematical reading is challenging but eminently possible for those who are willing to learn.

*Feature Image: Open book by Image Catalog. CC0 1.0 via Flickr.*

The post Very Short Resolutions: filling the gaps in our knowledge in 2016 appeared first on OUPblog.

“This year I’m going to read *Algebra: A Very Short Introduction*. An unlikely choice for a History grad, but author Peter M. Higgins convinced me of its importance in his article on mathematical literacy. Bad math can lead to silly mistakes and poor choices that are easily avoided otherwise!”

—*Katie Stileman, VSI Publicity*

“This year I’m going to read *Classical Mythology: A Very Short Introduction*. I have an embarrassing lack of knowledge in this area so it’s definitely time. It will also help me to hold my own in conversations with my classics loving chum Malcolm!”

—*Julie Gough, VSI Marketing*

“This year I’m going to brush up on my Shakespeare in time for the 400th anniversary of the Bard’s death by reading *William Shakespeare: A Very Short Introduction* by Stanley Wells and eagerly anticipating *Shakespeare’s Comedies: A Very Short Introduction* after enjoying *Much Ado About Nothing* at the Bodleian last summer.”

—*Amy Jelf, VSI Marketing*

“Next year I want to find the time to read *Buddhism: A Very Short Introduction*. My interest was first piqued by reading the top ten facts about Buddhism, and I look forward to learning more about meditation and mindfulness in the new year. Any tips I can glean to remove the stress from my life would be welcome too!”

—*Dan Parker, VSI Social Media*

“After working on VSIs for a number of years, not having them as part of my day-to-day life for the first time this year meant some pretty serious withdrawal symptoms from this incredible series. In 2016, I plan to fill the gap by reading *Circadian Rhythms: A Very Short Introduction*. The same author wrote the VSI to *Sleep*, which I think we’re all fascinated by – not getting enough, getting too much, and the quality of it.”

—*Chloe Foster, VSI Publicity (2012-15)*

“Next year I am going to read *Exploration: A Very Short Introduction*, which was recommended to me by Nancy Toff, who commissions VSIs from the US office. As a VSI commissioning editor in the UK, it’s really nice to read a VSI from the other side of the pond!”

—*Andrea Keegan, VSI Editorial*

Wishing you a happy new year from everyone in the VSI team!

*Featured image credit: VSIs, by the VSI team. Image used with permission.*

The post Can one hear the corners of a drum? appeared first on OUPblog.

At the heart of both of these equations is the Laplace operator, ∆, also known as the Laplacian, named for Pierre-Simon, marquis de Laplace (1749-1827). It turns out that if one can solve the Laplace equation ∆*f* = λ*f*, then one can solve both the wave and heat equations.

It is not necessary to understand these mathematical symbols and jargon because they are connected to something everyone understands: music. The sound produced by a stringed instrument is made by the vibration of the strings. The note one hears, such as an A, C, or B-flat, depends on how long the string is, and of what material it is composed. This note is also referred to as the fundamental tone or fundamental frequency.

The vibration of the string also produces overtones, known as harmonics, and these play an important role in creating the sound we hear. The collection of values obtained from solving the Laplace equation provide all these different frequencies: the fundamental frequency and all of the harmonics. Altogether these determine the sound of the string.

In the case of a string, the Laplace equation can be solved rather easily, and it turns out that all the harmonics are integer multiples of the fundamental frequency. This mathematical fact is one of the reasons that stringed instruments and pianos are so popular; it causes the sound that we hear from such instruments to have a pleasant and “clean” quality. In fact, every other instrument in a classical orchestra also has this property, with the exception of the percussion instruments.

Drums are fundamentally different. The sound created by beating a drum comes from the vibrations of the drumhead. Mathematically, this means that the Laplace equation is now in two dimensions. Acoustically, you may observe that the sound produced by vibrating drums is “messier” in a certain sense as compared to the sound produced by a vibrating string. The reason is that for drums it is no longer true that the harmonics are integer multiples of the fundamental frequency.
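To see the difference concretely, here is a small illustrative sketch in pure Python (my own construction, not from the post: the function names are invented, and the Bessel function is computed from its power series). For an ideal string, the overtones are exact integer multiples of the fundamental; for an ideal circular drumhead, the radially symmetric modes are proportional to the zeros of the Bessel function J0, and their ratios are not integers.

```python
def bessel_j0(x):
    """J0(x) from its power series: sum over k of (-1)^k (x/2)^(2k) / (k!)^2."""
    total, term = 0.0, 1.0
    for k in range(1, 60):
        total += term
        term *= -(x / 2.0) ** 2 / (k * k)  # next term of the alternating series
    return total

def first_j0_zeros(n):
    """Find the first n positive zeros of J0 by bracketing and bisection."""
    zeros, x = [], 0.5
    while len(zeros) < n:
        if bessel_j0(x) * bessel_j0(x + 0.5) < 0:  # sign change brackets a root
            lo, hi = x, x + 0.5
            for _ in range(60):
                mid = (lo + hi) / 2
                if bessel_j0(lo) * bessel_j0(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            zeros.append((lo + hi) / 2)
        x += 0.5
    return zeros

# A string's overtones relative to its fundamental: exactly 1, 2, 3, ...
string_ratios = [n for n in range(1, 4)]

# A circular drum's radially symmetric overtones relative to its
# fundamental: ratios of consecutive J0 zeros, which are not integers.
z = first_j0_zeros(3)
drum_ratios = [zi / z[0] for zi in z]
```

The second drum mode comes out near 2.295 times the fundamental rather than exactly 2, and the third near 3.598 rather than 3 – the “messier” sound described above.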

Although we can mathematically prove the preceding fact, we cannot, apart from a few notable exceptions, solve the Laplace equation in two dimensions. Facing this impasse, mathematicians have turned to investigate questions such as: if two drums sound the same, in the sense that their fundamental frequencies as well as *all* their harmonics are identical, then what geometric features do they have in common? Such features are known as *geometric spectral invariants*.

Hermann Weyl (1885-1955) discovered the first geometric spectral invariant: if two drums sound the same, then their drumheads have the same area. About a half century later, Åke Pleijel (1913-1989) proved that the perimeters of the drumheads must also be the same length. Shortly thereafter, Mark Kac (1914-1984) wrote the now famous paper, “Can one hear the shape of a drum?” He wanted to know whether or not two drums that sound the same must have the same shape. It took about a quarter century to solve the problem, which was achieved by Carolyn Gordon, David Webb, and Scott Wolpert in 1991. The answer is *no*.

In contrast to a nice round drumhead, the “identical sounding drums,” in Figure 1 both have corners. A natural question is therefore: can one hear the corners? This means, is it possible for two drums to sound the same, and one of them has a nicely rounded, but not necessarily circular, shape, whereas the other has at least one sharp corner? In other words, *can one hear the corners of a drum*? We have proven that the answer is *yes*. The sound produced by a drumhead with at least one sharp corner will always be different from the sound produced by any drumhead without corners. Mathematically, this marks the discovery of a new geometric spectral invariant.

Inquiring minds still have several questions to investigate. For example, if we now assume that both drums have nicely rounded, not necessarily circular shapes, and no sharp corners, is it possible that they can sound identical but be of different shapes? Can one hear the shape of a convex drum? What happens when we consider these types of problems for three-dimensional vibrating solids? We continue to work alongside our fellow mathematicians on problems such as these, and there is plenty of room for further investigation by young researchers.

*Image credit: Drum by PublicDomanImages, Public Domain via Pixabay.*

The post Predictive brains, sentient robots, and the embodied self appeared first on OUPblog.

My personal grail, though, was always something rather more systematic: a principled science of the embodied mind. I think we may now be glimpsing the shape of that science. It will be a science built around an emerging vision of the brain as a guessing engine – a multi-layer probabilistic prediction machine. This is an idea that, in one form or another, has been around for a long time. But exciting new developments are taking this vision to some brand-new places. In this short post, I highlight a few of those places. First though, what’s the basic vision of the predictive brain?

A prediction machine of the relevant stripe is a multi-layer neural network that uses rich downwards (and sideways) connectivity to try to perform a superficially simple, yet hugely empowering, task. That task is the ongoing prediction of its own evolving flows of sensory stimulation. When you see that steaming coffee-cup on the desk in front of you, your perceptual experience reflects the multi-level neural guess that best reduces visual prediction errors. To visually perceive the scene in front of you, your brain attempts to *predict* the scene in front of you, allowing the ensuing error signals to refine its guessing until a kind of equilibrium is achieved.
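As a very loose caricature (a toy sketch of my own, not a model from this post or from any neuroscience library): the loop just described issues a top-down prediction, measures the prediction error against the incoming signal, and nudges its hypothesis to reduce that error until equilibrium is reached.

```python
def refine_guess(sensory_input, weight=2.0, lr=0.1, steps=100):
    """Toy predictive loop: refine an estimate of a hidden cause until the
    top-down prediction matches the 'sensory' input."""
    guess = 0.0                              # initial top-down hypothesis
    for _ in range(steps):
        prediction = weight * guess          # top-down prediction of the input
        error = sensory_input - prediction   # bottom-up prediction error
        guess += lr * weight * error         # small step reducing squared error
    return guess

# The loop settles where prediction equals input: for an input of 4.2
# and a generative weight of 2.0, the estimate converges to 2.1.
estimate = refine_guess(4.2)
```

Nothing here is meant as neuroscience; it only illustrates the shape of the idea – error-driven refinement of a guess rather than passive registration of the input.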

Such an architecture makes full use of the huge amounts of downwards and recurrent connectivity that characterize advanced biological brains. This is important since the bulk of our actual neural connectivity is recurrent, involving loops in which information flows downwards and sideways. So much so that the AI pioneer Patrick Winston wrote, in a 2012 paper, that “Everything is all mixed up, with information flowing bottom to top and top to bottom and sideways too. It is a strange architecture about which we are nearly clueless”.

One key role of all that looping connectivity, it now seems, is to try to predict the streams of sensory stimulation before they arrive. Systems like that are most strongly impacted by sensed *deviations* from their predicted sensory states. It is these deviations from predicted states (known as prediction errors) that now bear much of the information-processing burden, informing us of what is salient and newsworthy within the dense sensory barrage.

Systems like this are already deep in the business of understanding. To perceive a hockey game using multi-level prediction machinery is to be able to predict distinctive sensory patterns as the play unfolds. And the more experience has taught you about the game and the teams, the better those predictions will be. What we quite literally see, as we watch a game, is here constantly informed and structured by what we know and what we are thus already busy (consciously and non-consciously) expecting.

This, as has recently been pointed out in a *New York Times* piece by Lisa Feldman Barrett, has real social and political implications. You might really seem to see your beloved but recently deceased pet start to enter the room, when the curtain moves in just the right way. The police officer might likewise really seem to see the outline of that gun in the hands of the unarmed, cellphone-wielding suspect. In such cases, the full swathe of good sensory evidence should soon turn the tables – but that might be too late for the unwitting suspect.

On the brighter side, a system that has learnt to predict and expect its own evolving flows of sensory activity in this way is one that is already positioned to imagine its world. For the self-same prediction machinery can also be run ‘offline’, generating the kinds of neuronal activity that would be expected (predicted) in some imaginary situation. Sometimes, however, the delicate balances between top-down prediction and the use of incoming sensory evidence are disturbed, and our grip on the world loosens in remarkable ways. Thinking about perception as tied intimately to multi-level prediction is thus also delivering new ways to think about the emergence of delusions, hallucinations, and psychoses, as well as the effects of various drugs, and the distinctive profiles of non-neurotypical (for example, autistic) agents.

The most tantalizing (but least developed) aspect of the emerging framework concerns the origins of conscious experience itself. To creep up on this suppose we ask: what might it take to build a sentient robot? By that I mean: what might it take to build a robot that begins to have some sense of *itself* as a material being, with its own concerns, encountering a structured and meaningful world?

A growing body of work by Professor Anil Seth (University of Sussex) and others may – and I say this with all due caution and trepidation – be suggesting a clue. That work involves the stream of interoceptive information specifying the physiological state of the body – the state of the gut and viscera, blood sugar levels, temperature, and much, much more (Bud Craig’s recent book *How Do You Feel* offers a wonderfully rich account of this).

What happens when a multi-level prediction engine crunches all that interoceptive information together with information specifying structure in the external world? Our multi-layered predictive grip on the external world is then superimposed upon another multi-layered predictive grip – a grip on the changing physiological state of our own body. And predictions along each of these dimensions will constantly interact with predictions along the other. To take a very simple case, the sight of water, when we are thirsty, should incline us to predict drinking in ways that the sight of water otherwise need not. Our predictive grip upon the external world thus becomes inflected, at every level, by an accompanying grip upon ‘how things are (physiologically) with us’. Might this be part of what enables a robot, animal, or machine to start to experience a low-grade sense of being-in-the-world? Such a system has, in some intuitive sense, a simple grip not just on the world, but on the world ‘as it matters, right here, right now, for the embodied being that is you’. Agents like that experience a structured and – dare I say it – meaningful world, a world where each perceptual moment presents salient affordances for action, permeated by a subtle sense of our own present and unfolding bodily states. A recipe for Sentient Robotics 101.beta perhaps?

There is much that I’ve left out from this post – most importantly, the crucial role of self-estimated sensory uncertainty (‘precision’), and the role of action and environmental structuring in altering the predictive tasks confronting the brain: changing what we need to predict, and when, in order to get things done. That’s where these stories score major points by dovetailing very neatly with large bodies of work in embodied cognition.

Nor have I mentioned the many outstanding problems and puzzles. For example, it is not known whether multi-level prediction machinery characterizes all, or even most, aspects of the neural economy. Most importantly of all, perhaps, it is not yet clear how best to factor human motivation into the overall story. Are human motivations (e.g. for play, novelty, and pleasure) best understood as disguised predictions – deep-seated expectations that there will be play, novelty, and pleasure? That is a challenging vision, but one that could offer a deeply unifying perspective indeed.

*Feature Image: “I love water,” by Derek Gavey. CC-BY-2.0 via Flickr. *

The post Why know any algebra? appeared first on OUPblog.

It may have been a joke, but some nevertheless found this argument compelling. Their mistake was quickly pointed out – to give everyone a million dollars would cost 317 million million dollars – but some persisted with the error. Their reasoning seemed to go “if you have 360 dollars you can give 317 people one dollar each, so if you have 360 million dollars you can give 317 million people a million dollars each!” Others explained: “No Joe, just imagine you have 360 boxes, each containing a million dollars cash. When you go to give each person one of them you will run out after 360 people, leaving the remaining 316,999,640 people empty handed.”
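The arithmetic is easy to check for yourself; a quick sketch (numbers taken from the story above):

```python
jackpot = 360_000_000          # dollars
population = 317_000_000       # people

# What the jackpot actually buys: a little over a dollar each.
per_person = jackpot / population

# What "a million dollars each" would actually cost:
# 317 million million dollars, nearly a million times the jackpot.
million_each = population * 1_000_000

print(round(per_person, 2))    # 1.14
print(million_each)            # 317000000000000
```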

Everyone would agree that adults need to have a grasp of numbers to the extent that they can spot nonsense arguments like this. At the same time however, it may be said that the x and y stuff can safely be forgotten once you leave school as it is practically never used. Anyone will forget the details of any subject if they never go back to it. That is why occasional reading about things mathematical renews confidence and allows you to question what is going on when a topic becomes complex.

For instance, it will give you the power to ask killer questions in pretentious presentations. Often a couple of graphs and equations flashed up on the screen will cow an audience into submissive silence. Never put up with that. Ask the presenter how that equation relates to the topic of their talk. Better still, ask what each of the symbols on the slide stands for. You don’t need to know anything in particular about maths to do that, and everyone will soon see how competent your presenter is.

Taking this a little further, it is genuinely useful to know some algebra as it lets you deal with simple mathematical problems and to know that you have them right. And this does happen in real life. A friend once gave a presentation pitch for a contract that involved two factors whose graph was a straight line. He laboured to explain this and an audience member lost patience and pointed out that the two quantities had an obvious linear relationship so of course the graph had to be a perfect straight line. My friend was made to look clueless and, not surprisingly, failed to land the contract. He explained to me later that what most annoyed him was that he had figured that out the night before, and had even written down the equation of the line and checked it was right. However, he lacked the nerve to say that in his presentation and so when it was pointed out to him, he looked stupid. With just a touch more algebraic confidence he could have carried the day.

It is a worthwhile skill just to be able to see an algebraic problem for what it is, even if your own attempts to solve it are a bit clumsy. A recent example concerned the controversial film *The Interview*, which is about a fictional plot to assassinate the North Korean leader, Kim Jong-un. A magazine article said that the film grossed $15 million on the weekend of its release, that the movie cost $15 to buy and $6 to rent, and that two million copies were distributed overall. The article went on to say, however, that the company did not state how many copies were rented and how many were bought. It seemed not to occur to anyone at the magazine that they ought to be able to figure that out. A person with some mathematical habits of mind, however, would at least pause to think, and then would get the answer somehow, as it is not difficult. Working in units of millions there are two equations here: r + s = 2 and 6r + 15s = 15; the first equation counts copies of r (rentals) and s (sales) while the second counts the money. From these we may deduce that there were 1/3 million sales and 5/3 million rentals overall. (Google ‘simultaneous equations’ for further details.)
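If you would rather let a computer keep the fractions exact, the two equations can be solved in a few lines with Python’s standard `fractions` module; the elimination step is spelled out in the comments:

```python
from fractions import Fraction

# r + s = 2       (copies, in millions: r rentals, s sales)
# 6r + 15s = 15   (revenue, in millions of dollars)
# Subtract 6 times the first equation from the second: (15 - 6)s = 15 - 12.
s = Fraction(15 - 6 * 2, 15 - 6)   # = 3/9 = 1/3 million sales
r = 2 - s                          # = 5/3 million rentals

assert 6 * r + 15 * s == 15        # the money adds up
print(s, r)                        # 1/3 5/3
```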

“I have a dream, which is that people will not run for cover whenever anything mathematical appears but rather will pause, think a little, ask a question or two and, if still out of their depth, seek a more qualified person to clear the matter up.”

The most salutary experiences of harm caused by mathematical ignorance, however, often stem from probability questions, which can fool even intelligent and educated people. It is one thing not to be able to do a problem but it is quite another to imagine that you can do it and be seduced into an utterly false conclusion. As an example, the following question was put to a large group of medical students. A certain condition affects one person in 1,000 and a particular medical test will certainly give a positive result if the person tested has the disease, but has a 5% probability of coming out positive for people who have not got the condition. A randomly chosen person tests positive. What is the probability that they have the disease?

The most popular response was that the test was 95% accurate so the probability that it was right was 95%. I’m afraid that answer is not only wrong but represents a mistake on a par with poor Joe’s analysis of the cost of ObamaCare. The correct answer is at the other end of the scale: the chance that the person actually has the condition is less than 2%.

This is a tricky question and even a mathematically aware person might find it difficult to answer. However, I hope that the same person would instinctively be sceptical of the guess of 95%, as that number takes no account of the prevalence of the condition in the population, which surely affects the answer. If the condition were very rare, then the chance that a positive result is a false positive must be high. A quick way to see the right answer is to note that for every 1,000 people in the general population, one person will have the condition but about 50 (5% of 1,000) will be patients who generate a false positive; for that reason a random positive test only has a chance of one in fifty-one of detecting a person with the disease.
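The counting argument translates directly into a short calculation. Here it is sketched in Python, using the numbers from the question:

```python
prevalence = 1 / 1000       # one person in 1,000 has the condition
sensitivity = 1.0           # the test is always positive for the diseased
false_positive_rate = 0.05  # 5% of the healthy also test positive

# Bayes' theorem: among 1,000 random people, 1 true positive
# and about 50 false positives.
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
p_disease_given_positive = (prevalence * sensitivity) / p_positive

print(round(p_disease_given_positive, 4))  # about 0.0196, i.e. roughly 1 in 51
```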

We might hope that qualified doctors would know how to interpret any test results that they call for. However, bad mistakes may still happen in serious situations such as court cases where an ‘expert’ witness makes a probability statement. Landmark cases involving cot deaths have led to gross miscarriages of justice. For example, once a probability statement on the likelihood of DNA matches is accepted as fact by the court, there may be only one verdict possible, and that may be the wrong one. I trust that lessons have been learnt from past errors but the risk of blunders remains unless any statement of probability is checked by a qualified statistician. Being an expert in the field of the testimony is not enough. Before a precise probability claim is admitted as evidence, it should be professionally scrutinised with the underlying assumptions, the calculation of the actual probability number and, just as importantly, its margin of error, all checked. I am not sure that would necessarily happen in a British law court.

In conclusion, I have a dream, which is that people will not run for cover whenever anything mathematical appears but rather will pause, think a little, ask a question or two and, if still out of their depth, seek a more qualified person to clear the matter up. It is a modest sounding dream but its realisation would make the world a better place.

*Featured image credit: Maths and calculator. Public domain via Pixabay.*

The post Why know any algebra? appeared first on OUPblog.


In this blog series, Leo Corry, author of *A Brief History of Numbers*, helps us understand how the history of mathematics can be harnessed to develop modern-day applications. The final post in the series takes a look at the history of factorization and its use in computer encryption.

The American Mathematical Society held its regular meeting in New York City in October 1903. The program announced a talk by Frank Nelson Cole (1861-1921), with the unpretentious title *On the factorization of large numbers*. In due course, Cole approached the board and, without saying a word, started to multiply the number 2 by itself, step after step, sixty-seven times. He then subtracted 1 from the result. He went on to multiply, by longhand, the following two large numbers:

193,707,721 × 761,838,257,287.

Realizing that the two calculations agreed, the astonished audience broke into enthusiastic applause, while Cole returned to his seat and remained silent. One can only guess how satisfied he was at having shown his colleagues this little gem of calculation, which he had discovered after much hard work.
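Today, Cole’s calculation can be replayed in an instant with arbitrary-precision integer arithmetic, as in this quick Python check:

```python
# Cole's 1903 result: M_67 = 2**67 - 1 is composite, with exactly these two factors.
m67 = 2**67 - 1
print(m67)  # 147573952589676412927
assert m67 == 193_707_721 * 761_838_257_287
```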

The number 2^{67} – 1 is commonly known as the Mersenne number *M*_{67}. The question whether a given Mersenne number of the form 2^{*n*} – 1 is prime had attracted attention since the seventeenth century, but it was only in the last third of the nineteenth century that Édouard Lucas (1842-1891) came up with an algorithm to test this property. The algorithm was improved in 1930 by Derrick Henry (Dick) Lehmer (1905-1991) and turned into a widely used tool for testing primality, the Lucas-Lehmer test. But it is one thing to know, with the help of this test, whether or not a certain Mersenne number is prime; finding the factors of such a large number, even if we know it to be composite, is a much more difficult task. When asked in 1911 how long it had taken to crack *M*_{67}, Cole reportedly answered: “three years of Sundays.”
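For the curious, the Lucas-Lehmer test is remarkably short. A minimal sketch, using the standard recurrence s → s² – 2 reduced modulo *M*_{p} for an odd prime exponent p; note that it reports *whether* a Mersenne number is prime without producing any factors:

```python
def lucas_lehmer(p):
    """Return True iff the Mersenne number M_p = 2**p - 1 is prime (p an odd prime)."""
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer(61))  # True: M_61 is prime
print(lucas_lehmer(67))  # False: M_67 is composite -- but the test names no factors
```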

Almost a hundred years later, another remarkable factorization was achieved, this one involving much larger numbers. In 1997 a team of computer scientists, led by Samuel Wagstaff at Purdue University, factorized a 167-digit number, (3^{349} – 1)/2, into two factors of eighty and eighty-seven digits respectively. According to Wagstaff’s report, the result required about 100,000 computer hours. Quite a bit more than in Cole’s story. Wagstaff had previously been involved in many other remarkable computations. For instance, in 1978 he used a digital computer to prove that Fermat’s last theorem (FLT) is valid for prime exponents up to 125,000.

Factorization results such as those of Cole and Wagstaff will at the very least elicit a smile of approval from anyone with a minimum of sympathy and appreciation for remarkable mathematical results. But when faced with the price tag (in terms of human time spent or computer resources used to achieve them), the same sympathetic listener (and by all means the cynical one) will immediately raise the question of whether all that time was worth spending.

Central to the mainstream conception of pure mathematics over the twentieth century was the idea that numerical calculation with individual cases is at best a preliminary exercise to warm up the mind and start getting a feeling for the situations to be investigated. Still nowadays, many a mathematician is proud of stressing his slowness in calculating and of pointing out the mistakes he makes in restaurants when splitting a bill among friends. David Hilbert (1862-1943), one of the most influential mathematicians at the turn of the twentieth century, was clear in stating that from “the highest peak reached on the mountain of today’s knowledge of arithmetic” one should “look out on the wide panorama of the whole explored domain” only with the help of elaborated, abstract theories. He consciously sought to avoid the use of any “elaborate computational machinery, so that … proofs can be completed not by calculations but purely by ideas.”

But with the rise of electronic computing a deep change has affected the status of time-consuming computational tasks from the time of Hilbert and Cole to the time of Wagstaff, via Lehmer and up to our own days. If in 1903 Cole found it appropriate to remain silent about his result and its significance, in 1997 the PR department of Purdue University rushed to publish a press release announcing Wagstaff’s factorization result: “Number crunchers zero in on record-large number”. Wagstaff cared to stress to the press the importance of knowing the limits of our abilities to perform such large factorizations, arguing that the latter are “essential to developing secure codes and ciphers.”

General perceptions about the need, and the appropriate ways, for public scrutiny of science, its tasks and its funding changed very much between 1903 and 1997, and this in itself would be enough to elicit different kinds of reactions to the two undertakings. But above all, it was the rise of e-commerce and the need for secure encryption techniques for the Internet that brought about a deep revolution in the self-definition of the discipline of number theory in the eyes of many of its practitioners, and in the ways it could be presented to the public. Whereas in the past this was a discipline that prided itself above all on its detachment from any real application to the affairs of the mundane world, over the last four decades it has turned into the showpiece of mathematics as applied to the brave new world of cyberspace security. The application of public-key encryption techniques, such as those based on the RSA cryptosystem, turned the entire field of factorization techniques and primality testing from an arcane, highly esoteric, purely mathematical pursuit into a most coveted area of intense investigation, with immediate practical applications and enormous expected economic gains for the experts in the field.
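To see why factorization matters for encryption, here is a toy sketch of textbook RSA. The primes, exponent, and message below are deliberately tiny and purely illustrative; real keys use primes hundreds of digits long:

```python
# Textbook RSA with toy numbers -- for illustration only, not for real security.
p, q = 61, 53
n = p * q                # public modulus; security rests on the difficulty of factoring n
phi = (p - 1) * (q - 1)  # Euler's totient of n
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message  # decryption recovers the message

# Anyone who can factor n recovers p and q, hence phi, hence d: the whole secret.
```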

*Featured image credit: Transmediale 2010 Ryoji Ikeda Data Tron 1 by Shervinafshar. CC BY-SA 3.0 via Wikimedia Commons.*

The post From number theory to e-commerce appeared first on OUPblog.


*Analyzing Wimbledon*, by Franc Klaassen and Jan R. Magnus

The world’s most famous tennis tournament offers statisticians insight into examining probabilities. This study attempts to answer many questions, including whether an advantage is given to the person who serves first, whether new balls influence gameplay, or whether previous champions win crucial points. Looking at a unique data set of 100,000 points played at Wimbledon, Klaassen and Magnus illustrate the amazing power of statistical reasoning.

‘**Asking About Numbers: Why and How**,’ by Stephen Ansolabehere, Marc Meredith, and Erik Snowberg, published in *Political Analysis*

How can designing quantitative standardized questions for surveys yield findings that can later be linked to statistical models? The authors offer a full analysis of why quantitative questions are feasible and useful, particularly for the study of economic voting.

*The Credit Scoring Toolkit: Theory and Practice for Retail Credit Risk Management and Decision Automation*, by Raymond Anderson

This textbook demonstrates how statistical models are used to evaluate retail credit risk and to generate automated decisions. Aimed at graduate students in business, statistics, economics, and finance, the book introduces likely situations where credit scoring might be applicable, before presenting a practical guide and real-life examples of how credit scoring can be implemented on the job. Little prior knowledge is assumed, making this textbook the first stop for anyone learning the intricacies of credit scoring.

‘**Big data and precision**’ by D. R. Cox, published in *Biometrika*

Professor D.R. Cox of Nuffield College, Oxford, explores issues around big data, statistical procedure, and precision, in addition to outlining a fairly general representation of the accretion of error in large systems.

*The New Statistics with R: An Introduction for Biologists*, by Andy Hector

This introductory text to statistical reasoning helps biologists learn how to manipulate their data sets in R. The text begins by explaining the classical techniques of linear model analysis and consequently provides real-world examples of its application. With all the analyses worked in R, the open source programming language for statistics and graphics, and the R scripts included as support material, Hector presents an easy-to-use textbook for students and professionals with all levels of understanding of statistics.

‘**Housing Wealth and Retirement Timing**’ by Martin Farnham and Purvi Sevak, published in *CESifo Economic Studies*

Having found that rising house prices cause people to revise their planned retirement age, Farnham and Sevak explore movements in the housing market and the implications for labour-supply.

*An Introduction to Medical Statistics*, by Martin Bland

Every medical student needs a firm understanding of medical statistics and its uses throughout training to become a doctor. The fourth edition of *An Introduction to Medical Statistics* aims to provide just that, summarising the key statistical methods by drawing on real-life examples and studies carried out in clinical practice. The textbook also includes exercises to aid learning, and illustrates how correctly employed medical data can improve the quality of research published today.

‘**Getting policy-makers to listen to field experiments**’ by Paul Dolan and Matteo M. Galizzi, published in *Oxford Review of Economic Policy*

On the premise that the greater use of field experiment findings would lead to more efficient use of scarce resources, this paper from Dolan and Galizzi considers what could be done to address this issue, including a consideration of current obstacles and misconceptions.

*Stochastic Analysis and Diffusion Processes*, by Gopinath Kallianpur and P. Sundar

Building the basic theory and offering examples of important research directions in stochastic analysis, this graduate textbook provides a mathematical introduction to stochastic calculus and its applications. Written as a guide to important topics in the field and including full proofs of all results, the book aims to render a complete understanding of the subject for the reader in preparation for research work.

‘**Statistical measures for evaluating protected group under-representation**’ by Joseph L. Gastwirth, Wenjing Xu, and Qing Pan, published in *Law, Probability & Risk*

The authors explore the conflicting inferences drawn from the same data in the cases of *People v. Bryant* and *Ambrose v. Booker*. Based on their full analysis, they argue that when assessing statistics on the demographic mix of jury pools for legal significance, courts should consider the possible reduction in minority representation that can occur in peremptory challenge proceedings.

**Bayesian Theory and Applications**, edited by Paul Damien, Petros Dellaportas, Nicholas G. Polson, and David A. Stephens

Beginning by introducing the foundations of Bayesian theory, this volume proceeds to detail developments in the field since the 1970s. It includes an explanatory chapter for each conceptual advance followed by journal-style chapters presenting applications, targeting those studying statistics at every level.

‘**Representative Surveys in Insecure Environments: A Case Study of Mogadishu, Somalia**,’ by Jesse Driscoll and Nicholai Lidow, published in *Journal of Survey Statistics and Methodology*

How do we get accurate statistics from politically unstable areas? This paper discusses the challenges of conducting a representative survey in Somalia and the opportunities for improving future data collection efforts in these insecure environments.

*Stochastic Population Processes: Analysis, Approximations, Simulations*, by Eric Renshaw

Talking about random processes in real life is tricky. This book concerns processes that have no memory: their behaviour depends only on the current state of the system and not on its previous history. Driven by the underlying Kolmogorov probability equations for population size, it is the first title on stochastic population processes that focuses on practical application. It is not intended as a text for pure-minded mathematicians who require deep theoretical understanding, but for researchers who want to answer real questions.

*Image Credit: Statistics by Simon Cunningham. CC BY 2.0 via Flickr.*

The post World Statistics Day: a reading list appeared first on OUPblog.


There were other changes. In 1972, only 58% of research papers in the *Lancet* and the *British Medical Journal* included the results of any statistical calculations (nearly all significance tests), and only three reports gave any reference to a statistical work, in each case to a textbook which was already out of date. In 2010, all research papers in these journals included details of their statistical analyses (in one case a paragraph in the Research Methods section; in all the others a subsection devoted to statistics), and most papers reported confidence intervals rather than, or in addition to, significance tests. I think that the increased sample sizes, the greater length of papers, and the increased statistical detail are all indicators of greatly increased research quality in the top medical journals.

What led to this research revolution? One force was the movement for evidence-based medicine, spreading the idea that treatment decisions should be based on objective evidence rather than on experience and authority. Such evidence would include statistics. Use of the term evidence-based medicine began in the 1990s with the work of Gordon Guyatt and David Sackett, but the ideas were around long before then. Statisticians, whose business was the evaluation of evidence, were enthusiastic cheerleaders. The demand for evidence led to systematic reviews, where we collect together all the trials of a therapy which had ever been carried out, and try to form a conclusion about effectiveness. Iain Chalmers led a huge project to assemble all the trials ever done in obstetrics. He went on to found the Cochrane Collaboration, which aims to do the same for all of medicine. The *Lancet* and the *British Medical Journal* now typically include a systematic review every week.

As an alternative solution to the problem of inadequate sample sizes, Richard Peto led the call for large, simple trials; his first being the ‘First International Study of Infarct Survival’. Published in 1986, the report of this trial included the sample size of 16,027 patients in the title. Unlike Guyatt, Sackett, and Chalmers, who are, or were doctors, Richard Peto is a statistician. Another statistically-led movement was to evaluate evidence using confidence intervals rather than significance tests, particularly for clinical trials. The idea was to estimate the plausible size of the difference between treatments rather than simply say whether there is evidence that a difference exists. A paper by Martin Gardner and Doug Altman in 1986 led to the *British Medical Journal* including this in its instructions for authors. Other journals, such as the *Lancet*, followed suit.

Reviews of the quality of statistical methods in medical journals began to sting journal editors into action and led to instructions to authors about statistical aspects of presentation of results. Following reviews of statistics, journals began to introduce statistical referees, with the systematic use of a panel of statisticians to check all research papers before they appeared in the journal. The main difficulty was finding enough statisticians. Finally, in 1996 the first consolidated statement on reporting trials (CONSORT) was published, giving guidelines for reporting trials, encouraging researchers to provide information about methods of determining sample size, allocation to treatments, statistical analysis, etc.

We cannot know which, if any, of these forces is responsible for improvements in the statistical quality of the top clinical literature. We should beware of the logical fallacy of *post hoc ergo propter hoc –* just because improvements followed all this activity does not necessarily imply that they were caused by it. As statisticians say, correlation does not imply causation. But I think that the combination of factors did matter and it was exciting to live through it, especially as I have known, and in some cases worked with, nearly all the major actors.

The quality of top clinical research has improved greatly, but does this matter? Has medicine improved? I wondered what might be a good indicator and settled on life expectancy at age 65, the average number of years 65-year-olds would live if the current death rates were to apply through their remaining time. I thought that the health of the old may respond particularly to improvements in medicine, as the old are its main consumers. I knew that from its first calculation for England and Wales in 1841, life expectancy at 65 changed very little for a century.

As the graph shows, in the 19^{th} century there was little difference between the life expectancy of men and women. For women, a slight increase began at the start of the 20^{th} century, which continued throughout the century. One possible explanation for this is that women in the 20^{th} century had far fewer pregnancies than women in the 19^{th}, and so arrived at age 65 healthier and fitter than previous generations. For men, life expectancy at 65 increased very little until 1971. Then it began to rise rapidly, faster than that for women, so that men have almost caught up. Women considerably outliving men may be a 20^{th} century phenomenon, because now expectation of life at age 65 is 18 years for men and 21 years for women. For both, the remaining years of life are half as much again as they were. This period of rapid improvement in statistical methods in the best medical research has coincided with a rapid improvement in the health of the older members of the population. People are living longer, healthier lives.

For the writer of medical statistical textbooks, these changes have required a lot of updating and expansion of succeeding editions, to accommodate the new methods and larger studies appearing in journals. As I am now over 65, I can look back and think that it was definitely worth it.

*Featured Image Credit: Running, Runner, Long Distance, by Skeeze. CC0 Public Domain via Pixabay.*

The post Better medical research for longer, healthier lives appeared first on OUPblog.


In this blog series, Leo Corry, author of *A Brief History of Numbers*, helps us understand how the history of mathematics can be harnessed to develop modern-day applications. In this second post, looking specifically at the impact of imaginary numbers on our comprehension of the physical world, Leo explores how theorems change over time.

Few elementary mathematical ideas arouse the kind of curiosity and astonishment among the uninitiated as does the idea of the ‘imaginary numbers’, an idea embodied in the somewhat mysterious number *i*. This symbol is used to denote the idea of the square root of -1, namely, a number that when multiplied by itself yields -1. How come? We were consistently told in the early years of secondary school that two numbers of the same sign, positive or negative, when multiplied by each other always yield a positive number. Now we are told that this is not always the case, and here we have this ‘number’ for which the rule does not hold? What kind of number is this if it breaks such a fundamental law about the multiplication of numbers? Is the mystery and the apparent contradiction solved just by calling *i* an ‘imaginary number’?
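As it happens, modern programming languages take the mysterious number entirely in stride. In Python, for instance, *i* is written `1j`, and its defining property can be checked directly:

```python
i = 1j        # Python's notation for the imaginary unit
print(i * i)  # (-1+0j): i times i really is -1
assert i * i == -1
```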

The truth is that when one is introduced to imaginary numbers, it’s not the first time that a fundamental idea previously taught is turned upside down. Think, in the first place, about the negative numbers. When learning subtraction in primary school, a typically curious child may ask the teacher how to subtract, say, six from four. A typically cautious teacher would answer, “well, … ehem, … that can’t be done.” Indeed, it makes a lot of pedagogical sense to let the pupil acquire good mechanical skills in performing the operations without having to worry about such nuances, and there is no immediate need to bring up confusing issues such as the idea of negative numbers. All of this can be clarified later on, simply by telling the child that, “well … yes … actually, we can subtract 6 from 4 with the help of a new idea, the idea of negative numbers.” Most children pass through this experience without a lasting negative impact, though perhaps for some it is the beginning of the kind of post-traumatic symptoms so commonly associated in our society with the study of mathematics.

At any rate, the difficulty that typically arises immediately after becoming aware of the existence of negative numbers relates to the rule ‘minus times minus yields plus’. It really takes time until one becomes used to this strange rule, and, let’s be frank, many intelligent people never really come to believe it, and much less to understand its justification. And then comes the news that the number *i* breaks that rule.

The story of imaginary numbers is interesting not only because it touches upon the most central topic of mathematics, numbers and their properties, but also because there is a dramatic parallel between, on the one hand, the path that the individual student crosses before reaching a clear understanding of the topic and, on the other hand, the historical path that the world of mathematics at large had to cross for the same purpose. It may sound strange at first, but in general, central developments in the history of mathematics happen as the more complex ideas give way to simpler ones. This is opposite to the way in which we are typically taught, namely from the simple to the complex.

Roots of negative numbers started to surface repeatedly when the great mathematicians of the Renaissance worked out solutions for equations involving cubes and fourth powers of the unknown. The most prominent of these was Girolamo Cardano (1501-1576). The techniques he developed led to correct answers to the problems that he investigated, but the intermediary stages often involved calculations with roots of negative numbers, such as the product (5 + √-15)(5 – √-15) = 40 that appears in his *Ars Magna*.

For Cardano, equations such as *x*^{2} + 1 = 0, or even the simpler *x* + 3 = 0, have no solutions. Just like we were initially told in school. But his techniques were making roots of negative numbers, as in the example above, ever more conspicuous and unavoidable. He continued to look at them as ‘sophistic’, ‘subtle’, and ‘useless.’ Nevertheless his mathematical curiosity did not let him simply ignore them. He searched for ways to apply to these numbers the same formal mathematical procedures he considered legitimate for integers and fractions, while at the same time “putting aside the mental tortures involved.”

The conceptual status of imaginary numbers was successfully clarified only slowly in the centuries following. They became central to our conception of mathematics at large by the mid-nineteenth century. No less interesting is the fact that this apparently artificially concocted idea also became fundamental for physics. Many of the central pillars of modern physics, such as electrodynamics, cannot even be conceived without imaginary numbers.

*Featured image credit: Formula mathematics psychics by markusspiske. CC0 public domain via Pixabay.*

The post The real charm of imaginary numbers appeared first on OUPblog.

The post Diamonds are forever, and so are mathematical truths? appeared first on OUPblog.

Over the next few weeks, Leo Corry, author of *A Brief History of Numbers*, helps us understand how the history of mathematics can be harnessed to develop modern-day applications. In this first post, he explores how the differences between perceptions of truth in maths and history affect the study of those subjects.

Try googling ‘mathematical gem.’ I just got 465,000 results. Quite a lot. Indeed, the metaphor of mathematical ideas as precious little gems is an old one, and it is well known to anyone with a zest for mathematics. A diamond is a little, fully transparent structure, all of whose parts can be observed with awe from any angle. It is breathtaking in its beauty, yet at the same time powerful and virtually indestructible. This description applies equally well to so many pieces of mathematical knowledge: proofs, formulae, and algorithms.

Leonhard Euler, for instance, was the greatest mathematician of the eighteenth century and we associate his name nowadays with many beautiful mathematical gems. Think of the so-called Euler Formula: *V* – *E* + *F* = 2. This concise expression embodies a surprising property of any convex polyhedron, where *V* represents the number of its vertices, *E* of its edges, and *F* of its faces. But probably the most famous gem associated with his name is the so-called ‘Euler identity’:
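If you want to see the formula in action, it can be checked in a few lines against the familiar solids; the counts below are the standard vertex, edge, and face numbers for each polyhedron:

```python
# (V, E, F) for three convex polyhedra
solids = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
}
for name, (v, e, f) in solids.items():
    assert v - e + f == 2, name   # Euler's formula: V - E + F = 2
print("Euler's formula holds for all three")
```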

*e*^{*i*π} + 1 = 0.

Beyond the mathematical importance of this identity it is remarkable how often it is known and praised, above all, for its beauty: “the most beautiful equation of maths”, we read in various places. A most impressive diamond!
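Sceptics can even confirm the identity numerically with Python’s standard `cmath` module; floating-point arithmetic leaves only a microscopic rounding residue:

```python
import cmath

# e^(i*pi) + 1 should be exactly zero; in floats it is zero up to rounding error.
value = cmath.exp(1j * cmath.pi) + 1
print(abs(value))  # about 1.2e-16
assert abs(value) < 1e-12
```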

But we can compare mathematical ideas to diamonds not only in terms of beauty. Diamonds are also, as you surely remember from the James Bond film, forever. And so are proved mathematical results. Indeed, the theme song of the Bond film defines very aptly, I think, the way in which mathematicians relate to those ideas with which they become involved and invest their best efforts for long periods of time:

Diamonds are forever,

Hold one up and then caress it,

Touch it, stroke it and undress it,

I can see every part,

Nothing hides in the heart to hurt me.

— Shirley Bassey, *Diamonds Are Forever.*

Of course, before reaching the point where mathematical ideas become diamonds, likely to remain forever, there is a period of groping in the dark. This period may sometimes be long and the dark may be deep, before light is finally turned on and the diamond becomes transparent. You can then touch it, stroke it and undress it, and you will truly understand the necessary interconnection between all of its parts.

In a recent TED video, the Spanish mathematician Eduardo Sáenz de Cabezón tells his audience that “if you want to tell someone that you will love her forever you can give her a diamond. But if you want to tell her that you’ll love her *forever and ever*, give her a theorem!” (Unfortunately, in spite of the accompanying English subtitles, his most successful jokes are lost in translation from Spanish.)

And so, it is the eternal character of mathematical truths and the unanimity of mathematicians about them that sets mathematics apart from almost all other endeavors of human knowledge. This unique character of mathematics as a system of knowledge may be stressed even more sharply by comparison to another discipline, like history for example. At its core, mathematical knowledge deals with certain, necessary, and universal truths. True mathematical statements do not depend on contextual considerations, either in time or in geographical location. Generally speaking, established mathematical statements are considered to be beyond dispute or interpretation.

The discipline of history, on the contrary, deals with the particular, the contingent, and the idiosyncratic. It deals with events that happened in a particular location at a particular point in time, and events that happened in a certain way but could have happened otherwise. Historical statements are always partial, debatable, and open to interpretation. Arguments put forward by historians keep changing with time. ‘Thinking historically’ and ‘thinking mathematically’, then, are clearly two different things.

For historians of mathematics, the comparison between mathematics and history, as two different ways of thought and as two different kinds of academic disciplines, is an important issue. Historians in general do not see just providing an account of “one damn thing after the other” (as the phrase often attributed to Arnold Toynbee goes) as the aim of their intellectual pursuit. Historians of mathematics, in turn, do not see the aims of their pursuits as just providing a chronology of discoveries. What does it mean, then, to think historically about the ways in which people have been ‘thinking mathematically’ throughout history, and about the processes of change that have affected these ways of thinking? If mathematics deals with universal truths, how can we speak about mathematics from a historical perspective (other than to establish the chronology of certain discoveries)? What is it that changes through time in a discipline whose truths are, apparently, eternal?

“Who was the first to discover the formula for the quadratic equation?” That's not really the kind of question historians of mathematics concern themselves with. In fact, rather than “Who was the first to discover X?” we may find it more interesting to investigate a question such as “Who was the *last* person to discover X?” This latter question involves the understanding that, in spite of the eternal character of mathematical results, there is still a lot to be said about the way in which mathematical ideas develop and are understood throughout history. It suggests that something that was mathematically proved at some point was not considered to be so at a later time, and that mathematicians are not always aware of, or do not always care about, the existence of a proof that later becomes interesting and relevant. It also suggests that it makes historical sense to try and understand the circumstances of this change in mathematical values. The question implies, too, that only at a certain point in time did a mathematical proof become so fundamentally convincing that it impressed upon that result a stamp of *eternal* validity. Or, at least, temporarily so.

*Featured image: Isfahan Lotfollah mosque ceiling symmetric by Phillip Maiwald. CC BY-SA 3.0 via Wikimedia Commons.*

The post Diamonds are forever, and so are mathematical truths? appeared first on OUPblog.


]]>Over the past weeks Snezana Lawrence, co-author of *Mathematicians and their Gods*, has taken us on a summer journey through the beauty of mathematics, including secret maths witches and wizards, and has tried to answer the question: will we ever need maths after school? In this last post, Snezana tells the story of bright amateurs in mathematics who had a great influence on scientific discoveries, from multidimensionality to the fourth dimension.

A friend of mine picked an argument with me the other day about how people go on about the beauty of mathematics, yet this beauty is not merely unobvious to non-mathematicians; it cannot be accessed by those outside the field at all. Unlike, for example, modern art, which is also not always obvious, mathematical beauty is elusive to all but mathematicians. Or so he said. He mused further that a non-mathematician can never bring anything new to mathematics, unlike art: in art, from time to time you get shifts, paradigm changes, contributions from people who don't necessarily belong to the old establishment, whose new insight may change art and influence further developments at a profound level. This is how movements in art happen, and so, the thinking goes, there is always the possibility of an outsider making a contribution that brings about such a change or shift. Furthermore, this in effect means that people generally engage with art on a much greater scale, as there is always the potential of contributing to it.

Is this really the case? Is mathematics really a discipline so insular that no amateur or admirer can ever play a role in its development? I racked my brain to come up with a counter-example, hoping that, at the least, I would be able to persuade him of the beauty of mathematical techniques. My master plan was to use an argument of the form *reductio ad absurdum*, an old trick that would finish with an 'aha!' on my part.

But I think it ended up more nicely. I came up with a concept I have investigated recently: that of amateurs not only making a huge contribution to the field, but actually enriching the view of mathematics itself. One such development took place at the turn of the twentieth century and involved mathematicians and non-mathematicians alike: the development of the concept of multidimensionality. I am talking about, on one side, mathematicians such as Bernhard Riemann and Hermann Günther Grassmann, whose work and lives were devoted to the development of this concept. On the other side were those whose work included philosophizing on multidimensionality: people such as Edwin Abbott Abbott, the Shakespearean scholar and London schoolmaster who wrote one of the most famous and popular novellas of all time, *Flatland*, and Alicia Boole-Stott, whose three-dimensional models of four-dimensional polytopes contributed immensely to the development of mathematics. Alicia had no official mathematical education, apart from being a daughter of the famous George Boole (but she earned an honorary doctorate from the University of Groningen in 1914).

How did this happen? By very unorthodox means, in fact. Abbott wrote *Flatland* while he was still working at the City of London School and living in Marylebone. Mary, Alicia's mother, also lived in Marylebone; she wrote mathematics books for children and contributed to theology and science, with interests extending to Darwinian theory, philosophy, and psychology, on all of which she organized discussion groups from time to time. Mary Boole was also at one time personal secretary to James Hinton, a famous spiritualist, whose son Charles wrote some very interesting books on the fourth dimension, invented the term 'tesseract,' and married Mary's oldest daughter, also named Mary. Charles apparently believed in the multi-dimensionality of time too, which may explain his bigamous second marriage within three years of marrying Mary (to whom he later returned); he also taught Alicia to visualize four-dimensional polytopes as they would pass through the third dimension (a polytope is the general analogue of a polygon or polyhedron in any number of dimensions: in two dimensions polytopes are triangles, squares, and so on; in three, cubes, octahedra, and the like). There is strong circumstantial evidence that spiritualism linked all of these people together: the belief that there is some other, higher dimension from which the dimensions of our world could be seen all at once. Mary Boole and Edwin Abbott Abbott even wrote apologias concerning spiritualism; none, of course, was forthcoming from Charles Hinton.

So there! I managed to come across a group of people who were not mathematicians – they had links to mathematics in their various ways, but none were actually mathematicians, apart from the weirdest of them all, Charles Hinton, who ended his life as a mathematics instructor at Princeton University. Yet their work, both their writing and their social involvement and communication, had a lasting influence on the development of the concept of the fourth dimension in mathematics, and from there, on the concept of multidimensionality.

Perhaps this type of mathematics comes very close to abstract art, but so be it. We can all enjoy the many representations of the tesseract, the word Charles Hinton coined and the object Alicia Boole-Stott so beautifully represented with her many models. And we can certainly attempt to venture from the world of *Flatland* to the world of the fourth, and many more, dimensions.

*Featured image credit: Math Castle by Gabriel Molina. CC-BY-2.0 via Flickr.*

The post Yes, maths can be for the amateur too appeared first on OUPblog.


]]>Over the next few weeks, Snezana Lawrence, co-author of *Mathematicians and their Gods*, introduces us to a summer journey around the beauty of mathematics, trying to answer the question: will we ever need maths after school? In this second post, Snezana discusses popular perceptions of mathematics, from the Pythagorean sect to the various interpretations of the supposed numerical values and hidden messages in the Bible.

As the summer is safely on its way (certainly in Sicily, where I write this on the terrace of one of its grand hotels), I think of topics to discuss with friends and acquaintances over a glass of Prosecco when the rest of the party joins me in a week. To start a conversation based on mathematics may seem to some a task inevitably converging towards the plot-line of *Mission Impossible*. Well, certainly there are more pressing things that would occupy people's minds, concerning international politics, the future of Europe, and the future of the Middle East. What's new? These topics have occupied people's minds for centuries. And inevitably there will be calls (as the amount of Prosecco increases) to discuss plots, point to patterns, and make assumptions.

But there is, unfortunately, a similarity in this particular sense with some popular perceptions of mathematics. From the Pythagorean sect, via various interpretations of the supposed numerical values and hidden messages in the Bible and the sacred texts of other major religions, to the phenomenon of 'sacred geometry' and the patterns upon which cities and institutions are built – the history of mathematics, too, has questions that speak to a similar kind of sentiment.

One response that answers all those questions wouldn't satisfy anyone. So what examples can one come up with? Without going into much detail, one can mention that mathematicians, too, have at times been wrong – not completely, but a bit. Take Johannes Kepler, whose mathematical model of the universe was, for want of a better word, perfected over a period of time. One of his most surprising inventions came about when he taught mathematics at a school in Graz early in his career. He pondered: what if the five Platonic solids are indeed some kind of blueprint upon which the universe is made, as Plato had suggested centuries earlier?

Plato, who discussed these solids in *Timaeus* (c. 360 BC), associated four of them with the classical elements – earth was represented by the cube, air by the octahedron, water by the icosahedron, and fire by the tetrahedron. The fifth solid, the dodecahedron (yes, just like *The Fifth Element*, the 1997 film by the French director Luc Besson), Plato thought must have been used for arranging the constellations of the heavens. Furthermore, in the history of mathematics and philosophy it was often identified as the element denoting the divine spark, the principle of attraction, and the force that made all other elements come to life.

Fast-forward to 1595, when Kepler worked on the Platonic solids and used them to make a model of the universe in his now famous book *Mysterium cosmographicum* (1596), illustrating the work with one of the most famous images in the history of science and mathematics. The image shows each Platonic solid encased in a sphere and inscribed in a further solid, itself encased in a sphere, a nesting that Kepler identified with the six planets then known: Mercury, Venus, Earth, Mars, Jupiter, and Saturn. Kepler proposed that the spheres containing the solids are placed at intervals corresponding to the sizes of each planet's path (as the paths were then known), assuming that the planets circled the Sun.

Of course Kepler later found, whilst he was living in Prague, that the orbital paths of the planets of the Solar System were not circular but elliptical; but this beautiful model, even though not perfectly accurate, gave him an impetus for further research.

Kepler's work on planetary motions and on modeling the Solar System was grounded in his deep religiosity and theological convictions, connecting spiritual and physical ideas and imagery throughout his work. A lesser-known work of Kepler's is *Somnium*, a novel about an imaginary journey to the Moon, guided by his mother. The novel was published posthumously – and not surprisingly, as his own mother, Katharina Kepler, was accused of witchcraft in 1617. She was imprisoned and then released in 1621, thanks partly to Kepler's own efforts and involvement in the trial.

So what is one to make of this information? Was Kepler's mother a witch, and was he a wizard of a kind? Is that how he worked out his laws of planetary motion? Of course our common sense takes over at this point. It is worth pointing out, however, that common sense comes much more easily with a few centuries between us and Kepler – centuries during which witches and wizards have safely made the transition from reality into fiction.

So, to get back to Sicily, my grand hotel's balcony, and the Prosecco reception: revisit the potential mathematical conversation, and compare the scheming and plotting that can be projected onto mathematics – from the secrecy of the Pythagorean sect, to the occult knowledge of Kepler, to the numerology of Newton – with what is currently happening in Europe, the Middle East, and the world generally. A safe bet would be to get to know the details, actually know the real maths, and make cautious calculations. The history of mathematics teaches us that models are prone to improvement over time, just like Kepler's model of the universe.

*Featured image credit: books, bookshelf, read. CC0 public domain via Pixabay.*

The post Sects, witches, and wizards-from Pythagoreans to Kepler appeared first on OUPblog.

]]>