The post Coincidences are underrated appeared first on OUPblog.

The unreasonable popularity of pseudosciences such as ESP or astrology often stems from personal experience. We’ve all had that “Ralph” phone call or some other happening that seems well beyond the range of normal probability, at least according to what we consider to be common sense. But how accurately does common sense forecast probabilities, and how much of it is fuzzy math? As we will see, fuzzy math holds its own.

Let’s try to de-fuzz the math a bit, starting with a classic example: the birthday problem. Perhaps you’ve encountered this problem in a math class that dealt with probabilities. In a group of people, what is the probability that two people would have the same birthday? Certainly it must depend on the size of the group. If we start with only two people, the chance would be one out of 365 – well, OK, one in 366 for leap years. If the group included more people, common sense might suggest that the probability would just increase linearly. So, to get a 50% chance, you might think it would take 183 people in the group. Wrong. That’s where common sense goes off the rails. It turns out that, in a group of only 23 people, the probability of two having the same birthday is 50%.

Details of the logic required to arrive at this result are unnecessary here, but a clue is given by a group of three. The third person might match the birthday of either of the first two, so you might think to just double the first probability. But think about this from the inverse point of view. The probability of the second person’s birthday NOT matching the first is 364/365. But the third person could match either of the first two, so the probability of NOT matching is only 363/365. Since NOT matching is thus less probable, matching becomes more probable. Working this out involves a bit of number crunching, but math classes have calculators galore, and since many classes have 23 or more members, real data are available to support the probability calculation. As you can see, what we take to be common sense often yields inaccurate solutions.
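The number crunching is short enough to script. A quick sketch (ignoring leap years and assuming all 365 birthdays are equally likely): multiply together the probabilities that each successive person misses all the birthdays seen so far, then subtract from 1.

```python
def p_shared_birthday(n):
    # Probability that at least two of n people share a birthday,
    # with 365 equally likely birthdays and leap years ignored.
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365  # person i+1 misses the first i birthdays
    return 1 - p_all_distinct

print(round(p_shared_birthday(23), 4))  # 0.5073 – past 50% with only 23 people
```

With 22 people the probability is still just under 50%, so 23 really is the crossover point.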

Meanwhile, back at the “Ralph” problem, a math textbook might tackle this problem in terms of drawing different colored pebbles from a large urn. Let’s forego that approach, and set the “Ralph” problem in more realistic terms. Suppose you know N people. During the course of a single day, a number of those people, k, cross your mind on a purely random basis. For this illustration, let’s agree to ignore close relatives and friends that you think about almost every day. Next, a certain number of people, L, contact you in a given day by any means, including phone calls, e-mails and electronic messages, social media, snail mail, and random meetings.

Working though this problem (actually kind of fun if you like mathematical puzzles) yields an equation for the number of days that will elapse before the probability of getting a contact from someone you thought about reaches a given level. Of course, it depends on the variables N, k, and L, not the easiest quantities to obtain.

An estimate of N, the number of people an average person knows, is available from various sources and ranges from 200 to 1,500. But k, the number of people who cross one’s mind in a day, is highly subjective, as is L, the number of contacts one receives in an average day. Yet all these numbers are necessary to estimate the time required for someone you thought about to contact you shortly afterward. Unscientific surveys of students, neighbors, and friends produced numbers of thoughts from 10 to 100 and contacts from 5 to 30. Substituting these numbers into the derived equation and requiring a 90% probability of at least one match yields a remarkable result: such coincidences would happen anywhere from once a week to once every other month. Most people’s fuzzy math would probably have estimated a much longer time period.
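The post leaves the derived equation to the reader, but under simple independence assumptions – each of your L daily contacts is equally likely to come from any of the N people you know, k of whom crossed your mind that day – one plausible reconstruction looks like this (the function name and the model itself are my sketch, not the author’s equation):

```python
import math

def days_until_coincidence(N, k, L, target=0.9):
    # N: people you know; k: how many of them cross your mind on a random day;
    # L: contacts you receive per day. All assumed independent and uniform.
    p_one = k / N                  # chance a given contact was thought about today
    p_day = 1 - (1 - p_one) ** L   # chance of at least one coincidence today
    # Days until at least one coincidence is 'target' probable:
    return math.ceil(math.log(1 - target) / math.log(1 - p_day))

print(days_until_coincidence(1500, 10, 5))   # 69 – pessimistic inputs: every other month
print(days_until_coincidence(200, 50, 30))   # 1 – generous inputs: near daily
```

Plugging in the survey ranges above reproduces the post’s “once a week to once every other month” spread under this model.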

If you are curious about how often you might expect such coincidences to occur, e-mail your numbers for N, k, and L to me and I’ll calculate the estimate for your case and send it to you.

Next time you get that “Ralph” call, rather than attributing it to ESP, you might tell Ralph: “Hey, I was just thinking about you, so you can consider yourself my coincidence of the month.”

*Featured image: Ancient Planet by PIRO4D. Public domain via **Pixabay**.*

The post Prime numbers and how to find them appeared first on OUPblog.

A prime number is always bigger than 1 and can only be divided by itself and 1 – no other whole number will divide into it. So the number 2 is the first prime number, then 3, 5, 7, and so on. Whole numbers greater than 1 that are not prime are called composite numbers (they are composed of smaller factors); the number 1 itself is neither prime nor composite.

Prime numbers are so tantalizing because they seem to be in never ending supply, and are distributed somewhat randomly throughout all the other numbers. Also, no-one has (yet) found a simple and quick way to find a specific (new) prime number.

Because of this, very large prime numbers are used every day when encrypting data to make the online world a safer place to communicate, move money, and control our households. But could we ever run out of prime numbers? And how can we find new, incredibly large ones?

This got us interested in learning more about primes, so we’ve collected together some facts about these elusive numbers:

- A simple way to find prime numbers is to write out a list of all numbers and then cross off the composite numbers as you find them – this is called the *Sieve of Eratosthenes*. However, this can take a long time!
- In 2002 a quicker way to test whether a number is prime was discovered – an algorithm called the ‘AKS primality test’, published by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena.
- Even though prime numbers seem to be randomly distributed, there are fewer large primes than smaller ones. This is logical, as there are more ways for large numbers to not be prime, but mathematicians ask: how much rarer are larger primes?
- In 2001 a group of computer scientists from IBM and Stanford University showed that a quantum computer could be programmed to find the prime factors of numbers.
- The RSA enciphering process, published in 1978 by Ron Rivest, Adi Shamir, and Leonard Adleman, is used to hide plaintext messages using prime numbers. In this process every person has a private key which is made up of three numbers, two of which are very large prime numbers.
- At any moment in time, the largest known prime number is also usually the largest known Mersenne prime.
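The sieve in the first bullet is simple enough to write down directly; a minimal sketch in Python:

```python
def sieve_of_eratosthenes(limit):
    # Write out every number up to limit, then cross off composites:
    # each multiple of each prime found is struck from the list.
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:                     # p survived the crossing-off, so it is prime
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False  # cross off p's multiples
    return [n for n in range(limit + 1) if is_prime[n]]

print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

As the first bullet warns, this takes a long time for large limits – a sieve finds every prime up to a bound, which is quite different from testing one enormous candidate; that is where a primality test such as AKS comes in.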

*Featured image credit: numbers by morebyless. CC-BY-2.0 via **Flickr**.*

The post Opening the door for the next Ramanujan appeared first on OUPblog.

It is still possible to learn mathematics to a high standard at a British university but there is no doubt that the fun and satisfaction the subject affords to those who naturally enjoy it has taken a hit. Students are constantly reminded about the lifelong debts they are incurring and how they need to be thoroughly aware of the demands of their future employers. The fretting over this within universities is relentless. To be fair, students generally report high to very high levels of satisfaction with their courses right throughout the university system. Certainly university staff are kept on their toes by being annually called to account by the National Student Survey, which is a data set that offers no hiding place. We should bear in mind, however, that this key performance indicator does not measure the extent to which students have mastered their degree subject. What is important here is getting everyone to say they are happy, which is quite another matter.

This all contrasts with the life of the main character, Srinivasa Ramanujan, in the recent film *The Man Who Knew Infinity*. The Indian genius of the early twentieth century had a reasonable high school education after which he was almost self-taught. It seems he got hold of a handful of British mathematics books, amongst them *Synopsis of Pure Mathematics* by G. S. Carr, written in 1886. I understand that this was not even a very good book in the ordinary sense for it merely listed around five thousand mathematical facts in a rather disjointed fashion with little in the way of example or proof. This compendium, however, suited the young Ramanujan perfectly for he devoured it, filling in the gaps and making new additions of his own. Through this process of learning he emerged as a master of deep and difficult aspects of mathematics, although inevitably he remained quite ignorant of some other important fields within the subject.

It would therefore be a very good thing if everyone had unfettered online access to the contents of a British general mathematics degree. Mathematics is the subject among the sciences that most lends itself to learning through books and online sources alone. There is nothing fake or phoney when it comes to maths. The content of the subject, being completely and undeniably true, does not date. Mathematics texts and lectures from many decades ago remain as valuable as ever. Indeed, older texts are often refreshing to read because they are so free from clutter. There are new developments of course but learning from high quality older material will never lead you astray.

I had thought this had already been taken care of: for ten years or more, many universities, for example MIT in the United States, have granted open online access to all their teaching materials, completely free of charge. There is no need even to register your interest – just go to their website and help yourself. Modern-day Ramanujans would seem not to have a problem coming to grips with the subject.

The reality, however, is somewhat different and softer barriers remain. The attitude of these admirable institutions is relaxed but not necessarily that helpful to the private student who is left very much to their own devices. There is little guidance as to what you need to know, and what is available online depends on the decisions of individual lecturers so there is no consistency of presentation. Acquiring an overall picture of mainstream mathematics is not as straightforward as one might expect. It would be a relatively easy thing to remedy this and the rather rigid framework of British degrees could be useful. In Britain, a degree normally consists of 24 modules (eight per year), each demanding a minimum of 50 hours of study (coffee breaks not included). If we were to set up a suite of 24 modules for a general mathematics degree that met the so-called QAA Benchmark and placed the collection online for anyone on the planet to access, it would be welcomed by poor would-be mathematicians from everywhere around the globe. The simplicity and clarity of that setting would be understood and appreciated.

This modern day Ramanujan Project would require some work by the mathematical community but it would largely be a one-off task. As I have explained, the basic content of a mathematical undergraduate degree has no need to change rapidly over time for here we are talking about fundamental advanced mathematics and not cutting-edge research. Everyone, even a Ramanujan, needs to learn to walk before they can run and the helping hand we will be offering will long be remembered with gratitude and be a force for good in the world.

*Featured image credit: Black-and-white by Pexels. CC0 public domain via Pixabay.*

The post The historian and the longitude appeared first on OUPblog.

When Harrison arrived in London in the 1730s with ambitions to build a successful longitude timepiece, he was supported and encouraged by Fellows of the Royal Society, who occasioned the very first meeting of the Board of Longitude (formed some 20 years previously), at which a clock presented by Harrison was the only item of business. He requested, and was granted, the very considerable sum of £500 to work on a second timepiece, to be finished in two years (the annual salary of the Astronomer Royal was £100). This was the first of a series of grants that had amounted to £4,000 by the time Harrison announced in 1760 that his third timepiece was ready for testing. It had taken 19 years to complete and the Board were, not unreasonably, becoming doubtful whether this was the route to a practical solution to the problem. To say that such a sum was inadequate is to ignore completely the simple fact that this was the 18th century, long before the accepted notion of government grants for research and development, but this is just one example of where a historian becomes frustrated with the popular narrative.

In the event, Harrison asked for a fourth timepiece – quite unlike the first three – to be given the statutory test of keeping time on a voyage to the West Indies. Many difficulties and arguments had to be overcome before a satisfactory test was completed in 1764, when everyone agreed that ‘the watch’ had kept time within the limits required for the maximum award of £20,000. It was now that the Board’s difficulties began in earnest. Faced with the real prospect of parting with their major award, they needed to know that the longitude problem really had been solved – anything less would have been a very public failure to fulfil their central responsibility. The original Act of Parliament of 1714 offered the reward for a method that was ‘Practicable and Useful at Sea’, while stipulating that the test was to be a single voyage. The Board were troubled over whether these two criteria were compatible, and such doubts were being aired in the popular press. Was the legislation itself inadequate?

So far the Board had not been given a detailed account of the watch’s manufacture and operation, and they wanted to know what principles or manufacturing procedures had resulted in its outstanding performance. Could these be explained and communicated to other makers? Could such watches be manufactured in numbers, in a reasonable time, at a reasonable cost, by moderately competent makers? Might the success of Harrison’s watch have been a matter of chance in a single instance? Had it depended on the achievement of a wholly exceptional, individual talent? All of these considerations were relevant to the question of a ‘practicable and useful’ method, notwithstanding the recent performance of the watch.

The Board decided to separate the components of the legislation by granting Harrison half the full reward, once he had explained the watch and its operation, while retaining the other half until it could be proved that watches of this type could go into routine production. Harrison did ‘discover’ his watch, as it was said (that is, literally, he removed the cover and explained its working), and so was granted £10,000, but gave up on the Board and appealed to Parliament and the King for the remainder.

In many ways the Board were left, as they had feared, without a practical solution. Harrison’s watch did not go into regular production. He had shown that a timepiece could keep time as required, but the design of the successful marine chronometer, as it emerged towards the end of the century, was quite different from his work. Other makers, in France for example, had been making independent advances, and two English makers, John Arnold and Thomas Earnshaw, brought the chronometer to a manageable and successful format. It is difficult to claim without important qualification that Harrison solved the longitude problem in a practical sense. In the broad sweep of the history of navigation, Harrison was not a major contributor.

The Harrison story seems to attract challenge and controversy. The longitude exhibition at the National Maritime Museum in 2014 was an attempt to offer a more balanced account than has been in vogue recently. The Astronomer Royal Nevil Maskelyne, for example, has been maligned without justification. A recent article in *The Horological Journal* takes a contrary view and offers ‘An Antidote to John Harrison’, and we seem set for another round of disputation. From a historian’s point of view, one of the casualties of the enthusiasm of recent years has been an appreciation of the context of the whole affair, while a degree of partisanship has obscured the legitimate positions of many of the characters involved. There is a much richer and more interesting story to be written than the one-dimensional tale of virtue and villainy.

*Featured image credit: Pocket watch time of Sand by annca. Public domain via Pixabay.*

The post In defense of mathematics [excerpt] appeared first on OUPblog.

Once reframed in its historical context, mathematics quickly loses its intimidating status. As a subject innately tied to culture, art, and philosophy, the study of mathematics leads to a clearer understanding of human culture and the world in which we live. In this shortened excerpt from *A Brief History of Mathematical Thought*, Luke Heaton discusses the reputation of mathematics and its significance to human life.

Mathematics is often praised (or ignored) on the grounds that it is far removed from the lives of ordinary people, but that assessment of the subject is utterly mistaken. As G. H. Hardy observed in *A Mathematician’s Apology*:

Most people have some appreciation of mathematics, just as most people can enjoy a pleasant tune; and there are probably more people really interested in mathematics than in music. Appearances suggest the contrary, but there are easy explanations. Music can be used to stimulate mass emotion, while mathematics cannot; and musical incapacity is recognized (no doubt rightly) as mildly discreditable, whereas most people are so frightened of the name of mathematics that they are ready, quite unaffectedly, to exaggerate their own mathematical stupidity.

The considerable popularity of sudoku is a case in point. These puzzles require nothing but the application of mathematical logic, and yet to avoid scaring people off, they often carry the disclaimer “no mathematical knowledge required!” The mathematics that we know shapes the way we see the world, not least because mathematics serves as “the handmaiden of the sciences.” For example, an economist, an engineer, or a biologist might measure something several times, and then summarize their measurements by finding the mean or average value. Because we have developed the symbolic techniques for calculating mean values, we can formulate the useful but highly abstract concept of “the mean value.” We can only do this because we have a mathematical system of symbols. Without those symbols we could not record our data, let alone define the mean.

Mathematicians are interested in concepts and patterns, not just computation. Nevertheless, it should be clear to everyone that computational techniques have been of vital importance for many millennia. For example, most forms of trade are literally inconceivable without the concept of number, and without mathematics you could not organize an empire, or develop modern science. More generally, mathematical ideas are not just practically important: the conceptual tools that we have at our disposal shape the way we approach the world. As the psychologist Abraham Maslow famously remarked, “If the only tool you have is a hammer, you tend to treat everything as if it were a nail.” Although our ability to count, calculate, and measure things in the world is practically and psychologically critical, it is important to emphasize that mathematicians do not spend their time making calculations.

The great edifice of mathematical theorems has a crystalline perfection, and it can seem far removed from the messy and contingent realities of the everyday world. Nevertheless, mathematics is a product of human culture, which has co-evolved with our attempts to comprehend the world. Rather than picturing mathematics as the study of “abstract” objects, we can describe it as a poetry of patterns, in which our language brings about the truth that it proclaims. The idea that mathematicians bring about the truths that they proclaim may sound rather mysterious, but as a simple example, just think about the game of chess. By describing the rules we can call the game of chess into being, complete with truths that we did not think of when we first invented it. For example, whether or not anyone has ever actually played the game, we can prove that you cannot force a competent player into checkmate if the only pieces at your disposal are a king and a pair of knights. Chess is clearly a human invention, but this fact about chess must be true in any world where the rules of chess are the same, and we cannot imagine a world where we could not decide to keep our familiar rules in place.

Mathematical language and methodology present and represent structures that we can study, and those structures or patterns are as much a human invention as the game of chess. However, mathematics as a whole is much more than an arbitrary game, as the linguistic technologies that we have developed are genuinely fit for human purpose. For example, people (and other animals) mentally gather objects into groups, and we have found that the process of counting really does elucidate the plurality of those groups. Furthermore, the many different branches of mathematics are profoundly interconnected – with one another, and with art and science.

In short, mathematics is a language and while we may be astounded that the universe is at all comprehensible, we should not be surprised that science is mathematical. Scientists need to be able to communicate their theories and when we have a rule-governed understanding, the instructions that a student can follow draw out patterns or structures that the mathematician can then study. When you understand it properly, the purely mathematical is not a distant abstraction – it is as close as the sense that we make of the world: what is seen right there in front of us. In my view, mathematics is not abstract because it has to be: right from the word go, it begins with linguistic practice of the simplest and most sensible kind. We only pursue greater levels of abstraction because doing so is a necessary step in achieving the noble goals of modern mathematicians.

In particular, making our mathematical language more abstract means that our conclusions hold more generally, as when children realize that it makes no difference whether they are counting apples, pears, or people. From generation to generation, people have found that numbers and other formal systems are deeply compelling: they can shape our imagination, and what is more, they can enable comprehension.

*Featured image credit: Image by Lum3n.com – Snufkin. CC0 public domain via Pexels.*

The post Alan Turing’s lost notebook appeared first on OUPblog.

The yellowing notebook — from Metcalfe and Son, just along the street from Turing’s rooms at King’s College in Cambridge — contains 39 pages in his handwriting. The auction catalogue (which inconsequentially inflated the page count) gave this description:

“Hitherto unknown wartime manuscript of the utmost rarity, consisting of 56 pages of mathematical notes by Alan Turing, likely the only extensive holograph manuscript by him in existence.”

A question uppermost in the minds of Turing fans will be whether the notebook gives new information about his famous code-cracking breakthroughs at Bletchley Park, or about the speech-enciphering device named “Delilah” that he invented later in the war at nearby Hanslope Park. The answer may disappoint. Although most probably written during the war, the notebook has no significant connection with Turing’s work for military intelligence. Nevertheless it makes fascinating reading: Turing titled it “Notes on Notations” and it consists of his commentaries on the symbolisms advocated by leading figures of twentieth century mathematics.

My interest in the notebook was first piqued more than 20 years ago. This was during a visit to Turing’s friend Robin Gandy, an amiable and irreverent mathematical logician. In 1944-5 Gandy and Turing had worked in the same Nissen hut at Hanslope Park. Gandy remembered thinking Turing austere at first, but soon found him enchanting — he discovered that Turing liked parties and was a little vain about his clothes and appearance. As we sat chatting in his house in Oxford, Gandy mentioned that upstairs he had one of Turing’s notebooks. For a moment I thought he was going to show it to me, but he added mysteriously that it contained some private notes of his own.

In his will Turing left all his mathematical papers to Gandy, who eventually passed them on to King’s College library — but not the notebook, which he kept with him up till his death in 1995. Subsequently the notebook passed into unknown hands, until its reappearance in 2015. Gandy’s private notes turned out to be a dream diary. During the summer and autumn of 1956, two years after Turing’s death, he had filled 33 blank pages in the center of the notebook with his own handwriting. What he said there was indeed personal.

Only a few years before Gandy wrote down these dreams and his autobiographical notes relating to them, Turing had been put on trial for being gay. Gandy began his concealed dream diary: “It seems a suitable disguise to write in between these notes of Alan’s on notation; but possibly a little sinister; a dead father figure and some of his thoughts which I most completely inherited.”

**Mathematical reformer**

Turing’s own writings in the notebook are entirely mathematical, forming a critical commentary on the notational practices of a number of famous mathematicians, including Courant, Eisenhart, Hilbert, Peano, Titchmarsh, Weyl, and others. Notation is an important matter to mathematicians. As Alfred North Whitehead — one of the founders of modern mathematical logic — said in his 1911 essay “The Symbolism of Mathematics”, a good notation “represents an analysis of the ideas of the subject and an almost pictorial representation of their relations to each other”. “By relieving the brain of all unnecessary work”, Whitehead remarked, “a good notation sets it free to concentrate on more advanced problems”. In a wartime typescript titled “The Reform of Mathematical Notation and Phraseology” Turing said that an ill-considered notation was a “handicap” that could create “trouble”; it could even lead to “a most unfortunate psychological effect”, namely a tendency “to suspect the soundness of our [mathematical] arguments all the time”.

This typescript, which according to Gandy was written at Hanslope Park in 1944 or 1945, provides a context for Turing’s notebook. In the typescript Turing proposed what he called a “programme” for “the reform of mathematical notation”. Based on mathematical logic, his programme would, he said, “help the mathematicians to improve their notations and phraseology, which are at present exceedingly unsystematic”. Turing’s programme called for “An extensive examination of current mathematical … books and papers with a view to listing all commonly used forms of notation”, together with an “[e]xamination of these notations to discover what they really mean”. His “Notes on Notations” formed part of this extensive investigation.

Key to Turing’s proposed reforms was what mathematical logicians call the “theory of types”. This reflects the commonsensical idea that numbers and bananas, for example, are entities of different types: there are things which make sense to say about a number — e.g. that it has a unique prime factorization — that cannot meaningfully be said of a banana. In emphasizing the importance of type theory for day-to-day mathematics, Turing was as usual ahead of his time. Today, virtually every computer programming language incorporates type-based distinctions.
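The number-versus-banana point is exactly what a programming language’s type system enforces. A small illustration (my example, using the unique-prime-factorization property mentioned above):

```python
def prime_factorization(n):
    # The unique prime factorization of an integer greater than 1,
    # e.g. 12 -> [2, 2, 3].
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factorization(12))  # [2, 2, 3] – a sensible question about a number
# prime_factorization("banana") raises a TypeError: the question is not
# meaningful for a value of that type – the distinction type theory makes precise.
```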

**Link to the real Turing**

Turing never displayed much respect for status and — despite the eminence of the mathematicians whose notations he was discussing — his tone in “Notes on Notations” is far from deferential. “I don’t like this” he wrote at one point, and at another “this is too subtle and makes an inconvenient definition”. His criticisms bristle with phrases like “there is obscurity”, “rather abortive”, “ugly”, “confusing”, and “somewhat to be deplored”. There is nothing quite like this blunt candor to be found elsewhere in Turing’s writings; and with these phrases we perhaps get a sense of what it would have been like to sit in his Cambridge study listening to him. This scruffy notebook gives us the plain unvarnished Turing.

*Featured image credit: Enigma by Tomasz_Mikolajczyk. CC0 Public domain via Pixabay.*

The post Really big numbers appeared first on OUPblog.

Of course, in real life you’ll die before you get to any really *big* numbers that way. So here’s a more interesting way of asking the question: what is the biggest whole number that you can uniquely describe on a standard sheet of paper (single spaced, 12 point type, etc.) or, perhaps more fittingly, in a single blog post?

In 2007 two philosophy professors – Adam Elga (Princeton) and Agustin Rayo (MIT) – asked essentially this question when they competed against each other in the *Big Number Duel*. The contest consisted of Elga and Rayo taking turns describing a whole number, where each number had to be larger than the number described previously. There were three additional rules:

- Any unusual notation had to be explained.
- No primitive semantic vocabulary was allowed (e.g. “the smallest number not mentioned up to now”).
- Each new answer had to involve some new notion – it couldn’t be reachable in principle using methods that appeared in previous answers (hence, after the second turn, you couldn’t just add 1 to the previous answer).

Elga began with “1”, Rayo countered with a string of “1”s, Elga then erased bits of some of those “1”s to turn them into factorials, and they raced off into the land of large whole numbers. Rayo eventually won with this description:

The least number that cannot be uniquely described by an expression of first-order set theory that contains no more than a googol (10^{100}) symbols.

A more detailed description of the *Duel*, along with some technical details about Rayo’s description, can be found online.

Fans of paradox will recognize that Rayo’s winning move was inspired by the Berry paradox:

The least number that cannot be described in less than twenty syllables.

This expression leads to paradox since it seems to name the least number that cannot be described in less than twenty syllables, and to do so using less than twenty syllables! Rayo’s description, however, is not paradoxical, since although it uses far fewer than a googol symbols to describe the number in English, this doesn’t contradict the fact that, in the expressively much less efficient language of set theory, the number cannot be described in fewer than a googol symbols.

The number picked out by Rayo’s description has come to be called, appropriately enough, Rayo’s number. And it is big – *really* big. But can we come up with short descriptions of even bigger numbers?

Notice that Rayo’s construction implicitly provides us with a description of a function:

*F*(*n*) = The least number that cannot be uniquely described by an expression of first-order set theory that contains no more than *n* symbols.

Rayo’s number is then just *F*(10^{100}). So one way to answer the question would be to construct a function *G*(*n*) such that *G*(*n*) grows more quickly than *F*(*n*). Here’s one way to do it.

First, we’ll define a two-place function *H*(*m*, *n*) as follows. We’ll just let *H*(0, 0) be 0. Now:

*H(0, n)* = The least number that cannot be uniquely described by an expression of first-order set theory that contains no more than *n* symbols.

So *H*(0, *n*) is just the Rayo function, and *H*(0, 10^{100}) is Rayo’s number. But now we let:

*H(m, n)* = The least number that cannot be uniquely described by an expression of first-order set theory supplemented with constant symbols for:

*H*(*m*-1, *n*), *H*(*m*-2, *n*),… *H*(1, *n*), *H*(0, *n*)

that contains no more than *n* symbols.

In other words, *H*(1, 10^{100}) is the least number that cannot be described in first-order set theory supplemented with a constant symbol that picks out Rayo’s number. Note that, in this new theory, Rayo’s number can now be described very briefly, in terms of this new constant! So *H*(1, 10^{100}) will be *much* larger than Rayo’s number.

But then we can consider *H*(2, 10^{100}), which is the least number that cannot be described in first-order set theory supplemented with a constant symbol that picks out Rayo’s number and a second constant symbol that picks out *H*(1, 10^{100}). This number is *much*, *much* bigger than *H*(1, 10^{100})!

And then we have *H*(3, 10^{100}), which is the least number that cannot be described in first-order set theory supplemented with a constant symbol that picks out *H*(0, 10^{100}), a second constant symbol that picks out *H*(1, 10^{100}) and a third constant symbol that picks out *H*(2, 10^{100}). This number is *much*, *much*, *much* bigger than *H*(2, 10^{100})!

And so on…

We can now get our quickly growing unary function *G*(*n*) by just identifying *m* and *n*:

*G*(*n*) = *H*(*n*, *n*).
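Neither *F* nor *H* is computable, but the diagonalization trick that takes us from *H* to *G* can be illustrated with a computable toy, an Ackermann-style hierarchy of my own (the names *h* and *g* below are hypothetical stand-ins, not Rayo’s functions): each row iterates the row before it, and the diagonal eventually outgrows every fixed row.

```python
def h(m, n):
    """A computable toy analogue of the H hierarchy (an Ackermann-style
    function, NOT Rayo's H): row 0 is the successor function, and each
    later row iterates the previous row n times, starting from n."""
    if m == 0:
        return n + 1
    result = n
    for _ in range(n):
        result = h(m - 1, result)
    return result

def g(n):
    """The diagonal of h, analogous to G(n) = H(n, n)."""
    return h(n, n)

# Each row grows faster than the last, and the diagonal outruns them all.
# (Values explode very quickly: even g(4) is far too large to compute.)
print(h(1, 3), h(2, 3), g(2))  # 6 24 8
```

Just as *G*(*n*) = *H*(*n*, *n*) dominates each *H*(*m*, ·), the diagonal *g* dominates each fixed row *h*(*m*, ·) for sufficiently large *n*.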

And finally, our big, huge, enormous, number is:

*G*(10^{100})

*G*(10^{100}) is the least number that cannot be described in first-order set theory supplemented with googol-many constant symbols – one for each of *H*(0, 10^{100}), *H*(1, 10^{100}), … *H*(10^{100}-1, 10^{100}).

This number really is big. Can you come up with a bigger one?

*Featured image: “Infant Stars in Orion”. Public domain via Wikimedia Commons.*

The post Really big numbers appeared first on OUPblog.

The post A person-less variant of the Bernadete paradox appeared first on OUPblog.

Imagine that Alice is walking towards a point – call it *A* – and will continue walking past *A* unless something prevents her from progressing further.

There is also an infinite series of gods, which we shall call *G*_{1}, *G*_{2}, *G*_{3}, and so on. Each god in the series intends to erect a magical barrier preventing Alice from progressing further if Alice reaches a certain point (and each god will do nothing otherwise):

(1) *G*_{1} will erect a barrier at exactly ½ meter past *A* if Alice reaches that point.

(2) *G*_{2} will erect a barrier at exactly ¼ meter past *A* if Alice reaches that point.

(3) *G*_{3} will erect a barrier at exactly ^{1}/_{8} meter past *A* if Alice reaches that point.

And so on.

Note that the possible barriers get arbitrarily close to *A*. Now, what happens when Alice approaches *A*?

Alice’s forward progress will be mysteriously halted at *A*, but no barriers will have been erected by any of the gods, and so there is no explanation for Alice’s inability to move forward. Proof: Imagine that Alice did travel past *A*. Then she would have had to go some finite distance past *A*. But, for any such distance, there is a god far enough along in the list who would have thrown up a barrier before Alice reached that point. So Alice can’t reach that point after all. Thus, Alice has to halt at *A*. But, since Alice doesn’t travel past *A*, none of the gods actually do anything.

Some responses to this paradox argue that the gods have individually consistent, but jointly inconsistent, intentions, and hence cannot actually promise to do what they promise to do. Other responses have suggested that the fusion of the individual intentions of the gods, or some similarly complex construction, is what blocks Alice’s path, even though no individual god actually erects a barrier. But it turns out that we can construct a version of the paradox that seems immune to both strategies.

Imagine that *A*, *B*, and *C* are points lying in a straight line (in that order), each exactly one meter from the next. A particle *p* leaves point *A*, and begins travelling towards point *B* at exactly one second before midnight. The particle *p* is travelling at exactly one meter per second. The particle *p* will pass through *B* (at exactly midnight) and continue on towards *C* unless something prevents it from progressing further.

There is also an infinite series of force-field generators, which we shall call *G*_{1}, *G*_{2}, *G*_{3}, and so on. Each force-field generator in the series will erect an impenetrable force-field at a certain point between *B* and *C*, and at a certain time. In particular:

(1) *G*_{1} will generate a force-field at exactly ½ meter past *B* at ¼ second past midnight, and take the force-field down at exactly 1 second past midnight.

(2) *G*_{2} will generate a force-field at exactly ¼ meter past *B* at exactly ^{1}/_{8} second past midnight, and take the force-field down at exactly ^{1}/_{2} second past midnight.

(3) *G*_{3} will generate a force-field at exactly ^{1}/_{8} meter past *B* at exactly ^{1}/_{16} second past midnight, and take the force-field down at exactly ^{1}/_{4} second past midnight.

And so on. In short, for each natural number *n*:

(n) *G*_{n} will generate a force-field at exactly ^{1}/_{2}^{n} meter past *B* at exactly ^{1}/_{2}^{n+1} second past midnight, and take the force-field down at exactly ^{1}/_{2}^{n-1} second past midnight.

Now, what happens when *p* approaches *B*?

Particle *p*’s forward progress will be mysteriously halted at *B*, but *p* will not have impacted any of the barriers, and so there is no explanation for *p*’s inability to move forward. Proof: Imagine that particle *p* did travel to some point *x* past *B*. Let *n* be the largest whole number such that ^{1}/_{2}^{n} is less than *x*. Then *p* would have travelled at a constant speed from ^{1}/_{2}^{n+2} meter past *B* to ^{1}/_{2}^{n} meter past *B* during the period from ^{1}/_{2}^{n+2} second past midnight to ^{1}/_{2}^{n} second past midnight. But there is a force-field at ^{1}/_{2}^{n+1} meter past *B* for this entire duration, so *p* cannot move uniformly from ^{1}/_{2}^{n+2} meter past *B* to ^{1}/_{2}^{n} meter past *B* during this period. Thus, *p* is halted at *B*. But *p* does not make contact with any of the force-fields, since the distance between the *m*^{th} force-field and *p* (when it stops at *B*) is ^{1}/_{2}^{m} meters, and the *m*^{th} force-field does not appear until ^{1}/_{2}^{m+1} second after the particle halts at *B*.
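The bookkeeping in this proof can be checked mechanically. The following sketch (my illustration, using exact rational arithmetic; the helper names are hypothetical, not part of the puzzle) takes any proposed stopping distance *x* past *B* and finds a force-field that is up for the entire interval during which the particle would have to cross it:

```python
from fractions import Fraction

def field_interval(k):
    """Field k sits at 1/2^k meter past B; it is up from 1/2^(k+1) second
    past midnight until 1/2^(k-1) second past midnight."""
    return Fraction(1, 2 ** (k + 1)), Fraction(1, 2 ** (k - 1))

def blocking_field(x):
    """Given a proposed distance x past B (a Fraction with 0 < x <= 1),
    return the index of a field the particle would meet while it is up.
    The particle moves at 1 m/s, so it is at distance d past B at time d
    past midnight."""
    n = 1
    while Fraction(1, 2 ** n) >= x:  # find the largest n with 1/2^n < x
        n += 1
    k = n + 1                        # candidate field at 1/2^(n+1) past B
    up, down = field_interval(k)
    crossing = Fraction(1, 2 ** k)   # p reaches 1/2^k meter at time 1/2^k
    assert up < crossing < down      # the field is up when p gets there
    return k

print(blocking_field(Fraction(1, 3)))  # field 3, at 1/8 meter past B
```

However far past *B* we suppose the particle gets, some field was up in its way, which is just the contradiction the proof above exploits.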

Notice that since there are no gods (or anyone else) in this version of the puzzle, no solution relying on facts about intentions will apply here. More generally, unlike the original puzzle, in this set-up the force-fields are generated at the appropriate places and times regardless of how the particle behaves – there are no instructions or outcomes that are dependent upon the particle’s behavior. In addition, arguing that, even though no individual force-field stops the particle, the fusion or union of the force-fields does stop the particle will be tricky, since although at any point during the first ½ second after midnight two different force-fields will exist, there is no time at which all of the force-fields exist.

Thanks go to the students in my Fall 2016 Paradoxes and Infinity course for the inspiration for this puzzle!

*Featured image credit: Photo by Nicolas Raymond, CC BY 2.0 via Flickr.*

The post A person-less variant of the Bernadete paradox appeared first on OUPblog.

The post Is elementary school mathematics “real” mathematics? appeared first on OUPblog.

There is little doubt that elementary students should know the multiplication tables, be able to do simple calculations mentally, develop fluency in using algorithms to carry out more complex calculations, and so on. Indeed, these topics are fundamental to students’ future learning of mathematics and important for everyday life. Yet, is elementary students’ engagement with these topics in itself engagement with “real” mathematics?

I suggest that classroom discourse in an elementary school classroom where students engage with “real” mathematics should satisfy two major considerations. First, it should be meaningful and important to the students. Elementary students’ engagement with the topics I mentioned earlier can offer a productive context in which to satisfy this first consideration, especially if students’ work is characterized by an emphasis not only on procedural fluency but also on conceptual understanding.

Second, the classroom discourse in an elementary school classroom where students engage with “real” mathematics should be a rudimentary but genuine reflection of the broader mathematical practice. One might interpret the second consideration as asking us to treat elementary students as little mathematicians. That would be a misinterpretation. The point is that some aspects of mathematicians’ work that are fundamental to what it means to do mathematics in the discipline should also be represented, in pedagogically and developmentally appropriate forms, in elementary students’ engagement with the subject matter.

In its typical form, classroom discourse in elementary school classrooms fails to satisfy the second consideration. A main reason for this is the limited attention it pays to issues concerning the epistemic basis of mathematics, including what counts as evidence in mathematics and how new mathematical knowledge is being validated and accepted. The notion of *proof* lies at the heart of these epistemic issues and is a defining feature of authentic mathematical work. Yet the notion of proof has a marginal place (if any at all) in many elementary school classrooms internationally, thus jeopardizing students’ opportunities to engage with “real” mathematics.

Consider, for example, a class of eight–nine-year-olds who have been writing number sentences for the number ten and have begun to develop the intuitive understanding that there are infinitely many number sentences for ten when subtracting two whole numbers (e.g., 15-5=10). In most elementary school classrooms the activity would finish here, possibly with the teacher ratifying students’ intuitive understanding thus giving it the status of public knowledge in the classroom. However, in a classroom that aspires to engage students with “real” mathematics, new mathematical knowledge isn’t established by appeal to the authority of the teacher, but rather on the basis of the logical structure of mathematics. Thus the teacher of this classroom may help the students think how they can prove their intuitive understanding.

Given appropriate instructional support, students of this age can prove that there are infinitely many number sentences for ten when subtracting two whole numbers. For example, a student called Andy in a class of eight–nine-year-olds I studied for my research generated an argument along the following lines:

To generate infinitely many subtraction number sentences for ten, you can start with 11-1=10. For each new number sentence you can add one to both terms of the previous subtraction sentence. This looks like this: 12-2=10, 13-3=10, 14-4=10, 15-5=10, and so on. This can go on forever and will maintain a constant difference of ten.

Andy’s argument used mathematically accepted ways of reasoning, which were also accessible to his peers, to establish convincingly the truth of an intuitive understanding. This argument illustrates what a proof can look like in the context of elementary school mathematics. The process of developing this argument also contributed a powerful element of mathematical sense-making to Andy’s work with number sentences for ten: as he carried out calculations to write the various number sentences, he thought deeply about key arithmetical properties (e.g., how to maintain a constant difference) and he put everything together in a coherent line of reasoning. Thus an elevated status of proof in elementary students’ work can play a pivotal role in students’ meaningful engagement with mathematics. This presents a connection with the first consideration I discussed earlier.
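Andy’s rule is also easy to mechanize. As a small aside (my sketch, not part of the classroom episode), the constant-difference construction can be written as a short generator:

```python
def sentences_for_ten(count):
    """Generate `count` subtraction number sentences for ten using Andy's
    rule: start from 11 - 1 and add one to both terms each time, which
    keeps the difference constant at ten."""
    a, b = 11, 1
    for _ in range(count):
        yield f"{a} - {b} = {a - b}"
        a, b = a + 1, b + 1

print(list(sentences_for_ten(4)))
# ['11 - 1 = 10', '12 - 2 = 10', '13 - 3 = 10', '14 - 4 = 10']
```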

To conclude, elementary school mathematics as reflected in typical classroom work internationally falls short of being “real.” Yet it has the potential to become “real” if the learning experiences currently offered to elementary students are transformed. A major part of this transformation needs to concern the epistemic basis of mathematics, with more opportunities offered for students to engage with proof in the context of mathematics as a sense-making activity. The teacher has an important role to play as the representative of the discipline of mathematics in the classroom and as the person with the responsibility to induct students into mathematically acceptable ways of reasoning and standards of evidence. This is a complex role that cannot be fully understood without a strong research basis about the kind of teaching practices and curricular materials that can facilitate elementary students’ access to “real” mathematics.

*Featured image credit: Math by Pixapopz. Public domain via Pixabay.*

The post Is elementary school mathematics “real” mathematics? appeared first on OUPblog.

The post Measuring up appeared first on OUPblog.

My interest was further aroused by complications arising from the interactions between statistics and the results of different kinds of measurement. Many textbooks say it’s meaningless to calculate the arithmetic mean of ordinal measurements — those where the numbers reflect only the order of the objects being measured — and yet a glance at scientific and medical practice shows that this is commonplace. Clearly, although measurement was ubiquitous throughout the entire world (or, as I have put it elsewhere, we view the world through the lens of measurement), there was more to it than met the eye. Things were not always as simple as they might seem. Indeed, it would not be stretching things to say that occasionally, consideration of measurement issues revealed apparent rips in the fabric of reality.

A simple example arises from the *Daily Telegraph* report of 8 February 1989, which said that “Temperatures in London were still three times the February average at 55 °F (13 °C) yesterday”, prompting the natural question: what is the average February temperature? The answer is obvious — we just divide the temperature by three. So the February average is a third of 55 °F, equal to 18⅓ °F. Alternatively, it is a third of 13 °C, equal to 4⅓ °C. But this is very odd, because these two results are different. Indeed, the first is below freezing, while the second is above. In fact, in this example a little thought shows where things have gone wrong, and which average temperature is right. But things are not always so straightforward, and occasionally deep thought about the nature of measurement is needed to work out what is going on. This reveals that there are different kinds of measurement. At one extreme we have so-called representational measurement, and at the other pragmatic measurement, with most being a mixture of the two extremes.
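The arithmetic trap is easy to make explicit. In the sketch below (my illustration, not from the report), dividing by three in each scale produces two numbers that are not even the same temperature, because the zero points of the Fahrenheit and Celsius scales are arbitrary, so ratios of their values carry no meaning:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

third_in_f = 55 / 3   # ~18.3 degrees F, below freezing
third_in_c = 13 / 3   # ~4.3 degrees C, above freezing

# Converting the Fahrenheit "average" shows the two answers disagree:
print(round(f_to_c(third_in_f), 1))  # -7.6 (degrees C)
print(round(third_in_c, 1))          # 4.3 (degrees C)
```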

The aim of representational measurement is to construct a simplified model of some aspect of the world. In particular, we assign numbers to objects so that the relations between the numbers correspond to the relations between the objects. This rock extends the spring further than that, so we say it is heavier, and assign it a larger weight number. These two rocks together stretch the spring the same distance as a third one alone, so we give them numbers which add up to the number we give the third rock. And so on.

Representational measurement is essentially based on certain symmetries in the mapping from the world to the numbers, and understanding of these symmetries can be very revealing about properties of the world — about the way the world works. A familiar example is through the use of dimensional analysis in physics, engineering, and elsewhere. In contrast, a provocative way of describing pragmatic measurement is that “we don’t know what we are talking about.” What this really means is that we must define the characteristic we aim to measure before we can measure it. Or, more precisely, we define it at the same time as we measure it. The definition is implicit in the measurement procedure, and it is only through the measurement procedure that we know precisely what it is we are talking about. At first this strikes some people as strange. But take the economic example of inflation rate. Inflation can be defined in various different ways. None is “right.” Rather, it depends what properties you want the measurement to have, and what questions you want to answer. It depends on what you want to use the concept and the measured numbers for.

The bottom line to all this is that decisions and understanding are (or at least should be!) based on evidence. Evidence comes from data. And data come from measurements. Given how central measurement is to our understanding of the universe about us, to education, to government, to medicine, to technology, and so on, it is entirely fitting that it should be the topic of the 500^{th} volume in the *Very Short Introduction *series.

*Featured image credit: Scale kitchen measure by Unsplash. CC0 Public Domain via Pixabay.*

The post Measuring up appeared first on OUPblog.

The post Very short facts about the Very Short Introductions appeared first on OUPblog.

- VSIs have been translated into 50 languages, including Gujarati (an Indo-Aryan language) and Belarusian. Arabic is the most popular translated language.
- At the time their VSIs were published, the oldest VSI author was Stanley Wells, at age 85, author of *William Shakespeare: A Very Short Introduction*.
- The first VSI, *Classics*, was published 21 years ago, in 1995, and remains in its first edition.
- The highest selling VSI is *Globalization*, which will soon be in its fourth edition! When it was first proposed, people were worried it might not be a success.
- Someone once wrote in suggesting we needed a VSI to Olivia Newton-John. Other suggestions have included a very short introduction to coconuts and a very short introduction to Harry Potter.
- One VSI author had a tie made to match his jacket cover. Unfortunately his cover then needed to be flipped around, so his tie is now upside down.
- There are 84 VSI titles starting with “The”.
- Discounting the word “The”, the most common initial letter of a VSI title is ‘A’ (55 titles), followed closely by ‘C’ and ‘M’ (52 titles each). Between them the VSI titles cover every letter of the alphabet, with the letter ‘e’ appearing over 600 times.
So, where’s the gap in your knowledge?

*Featured image credit: Very Short Introductions © Jack Campbell-Smith, for Oxford University Press.*

The post Very short facts about the Very Short Introductions appeared first on OUPblog.

The post Just because all philosophers are on Twitter… appeared first on OUPblog.

It is not all bad news of course. The expansion and ready availability of communication technologies has meant that it is far easier for serious ideas to be tested and refined, far easier to develop diverse communities of scholarship, and far easier for new discoveries, theories and data to permeate beyond ivory towers. A modern counterpoint is that it is also far easier to spread misleading and self-serving theories, far easier to spread messages of hate and violence, and far easier for discourse to polarise as, with so many options available, people gravitate towards those sources which reinforce and intensify existing prejudices.

This explosion of available information and opinions also presents a challenge to traditional notions of education and citizenry. There may have been a time when the purpose of education was primarily to create an informed citizenry – to give them the relevant information – but that time is certainly not now. Now, information is more freely available and a far more important skill is the ability to independently discern reliable from unreliable sources, fact from fiction, genuine authority from charlatanism, feline ecology from lolcats. Where once scholarship meant perseverance and a dedication to tracking down otherwise inaccessible information, an increasingly vital skill in modern scholarship is a well-tuned bullshit detector.

A useful distinction to bear in mind here is between message and medium. (Philosophers love distinctions!) Each has its own hype cycle, and they are not always in sync. At the top of each cycle is the peak of inflated expectations. It is at this point in the cycle of new media technologies that we hear grand transformational claims, such as the view that virtual reality will end inequality, that the internet will kill traditional publishers and bookshops, that social networking will scupper academic peer-review, or that Massive Open Online Courses (MOOCs) will turn University campuses into ghost towns. After a tough period of disillusionment when initial enthusiasts and investors come to terms with the failure of their over-inflated expectations, the cycle reaches a plateau of productivity where the new medium is embraced by an increasingly significant portion of the population who see genuine usefulness beyond the hype.

Though the cycle repeats with each new medium – from telephones to Twitter – it is not futile. For what is gained through each iteration is a deeper understanding of the phenomenon which the technology was due to replace. So, for example, we learn something about the true (and changing) value of publishers, bookshops, peer-review and universities by understanding that they cannot be wholly replaced by new technologies. A strong theme running through these particular values is the notion of a discerning eye. With so many pieces of information, opinions and lolcats out there, we would simply be lost if we did not have some way of filtering reliable from unreliable research, scientific from wishful thinking, well-reasoned interpretations from self-serving propaganda.

This is not to say, of course, that the best way to navigate modern media seas is by blind deference to authority. All authorities are fallible (with the notable exception of OUP, of course). Far more important is the ability to critically evaluate pedigree for oneself. This is where universities can come in. My own engagement with MOOCs (through the Open University’s FutureLearn platform) has taught me that while large online courses are fantastic at bringing together a diverse range of students, they work best when those students are encouraged to engage critically with the ideas and experience they and others bring to the community. Inculcating and refining these skills is something that smaller scale teaching and face-to-face education are, in my experience, uniquely placed to do. So while everyone being on Twitter might not mean that everyone has interesting things to say, the resulting flood of information and opinion does mean that educators still have interesting things to do.

*Featured image: Mobile Phone by geralt. Public domain via Pixabay.*

The post Just because all philosophers are on Twitter… appeared first on OUPblog.

The post Teaching teamwork appeared first on OUPblog.

I think we can improve undergraduate and graduate students’ educational experiences by giving them the benefit of working in teams. This can be implemented in short-term (two-hour to two-week) or longer-term (2-12 week) projects. I believe that working on a larger project with 2-4 other students, for at least 15-35% of their coursework in several courses, would build essential professional and personal skills. I agree that it is easier to plan and execute team projects in smaller graduate courses than larger undergraduate courses.

Unfortunately, many faculty members were trained through lecture, individual homework, and strictly solitary testing. They have weak teamwork skills and are little inclined to teach teamwork. In fact, they have many fears that increase their resistance. Some believe that teamwork takes extra effort for faculty or that teams naturally lead to one person doing most of the work.

Teamwork projects may require fresh thinking by faculty members, but it may be easier to supervise and grade ten teams of four students, than to mentor and grade 40 individuals. Moreover, well-designed teamwork projects could lead to published papers or start-up companies in which faculty are included as co-authors or advisers. In my best semester, five of the seven teams in my graduate course on information visualization produced a final report that led to a publication in a refereed journal or conference.

Another possible payoff is that teamwork courses may create more engaged students with higher student retention rates. Of course teams can run into difficulties and conflict among students. These are teachable moments when students can learn lessons that will help them in their professional and personal lives. These difficulties and conflicts may be more visible than individual students failing or dropping out, but I think they are a preferable alternative.

So if faculty members are ready to move towards teaching with team projects, there are some key decisions to be made. Sometimes two-person teams are natural, but larger teams of 3-5 allow more ambitious projects, while increasing the complexity. I’ve also run projects where the entire class acts as a team to produce a project such as the Encyclopedia of Virtual Environments (EVE), in which the 20 students wrote about 100 web-based articles defining the topic. Colleagues have told me about their teamwork projects that had their French students create an online newspaper for French alumni describing campus sports events or a timeline of the European philosophical movements leading up to the framing of the US Constitution.

**Team formation:** I have moved to assigning team membership (rather than allowing self-formation) using a randomization strategy, which is recommended in the literature. This helps ensure diversity among the team members, speeds the process of getting teams started, and eliminates the problem of some students having a hard time finding a team to join.

**Project design (student-driven):** Well-designed team projects take on more ambitious efforts, giving students the chance to learn how to deal with a larger goal. I prefer student designed projects with an outside mentor, where the goal is to produce an inspirational pilot project that benefits someone outside the classroom and survives beyond the semester. I’ve had student teams work on software to schedule the campus bus routes or support a local organization that brings hundreds of foreign students for summer visits in people’s homes. Other teams helped a marketing company to assess consumer behavior in a nearby shopping mall or an internet provider to develop a network security monitor. Two teams proposed novel visualizations for the monthly jobs report of the US Bureau of Labor Statistics, which they presented to the Commissioner and her staff. I give a single grade to the team, but do require that their report includes a credits section in which the role of each person is described.

**Project design (faculty-driven):** Another approach is for the teacher to design the team projects, which might be the same ones for every team. With a four-person team, distinct roles can be assigned to each person, so it becomes easier to grade students individually. Just getting students to talk together, resolve differences, agree to schedules, etc. gives them valuable skills.

**Milestones:** Especially in longer projects, there should be deliverables every week, e.g. initial proposal, first designs, test cases, mid-course report, preliminary report, and final report.

**Deliverables:** With teams there can be multiple deliverables, e.g. in my graduate information visualization course, students produce a full conference paper, 3-5 minute YouTube video, working demo, and slide deck & presentation.

**Teamwork strategies:** For short-term teams (a few weeks to a semester), simple strategies are probably best. I use: (1) “Something small soon,” which asks students to make small efforts that validate concepts before committing greater energy and (2) “Who does what by when,” which clarifies responsibilities on an hourly basis, such as “If Sam and I do the draft by 6pm Tuesday, will Jose and Marie give us feedback by noon on Wednesday?” Teamwork does not require any meetings at all; it is a management strategy to coordinate work among team members.

**Critiques and revisions:** I ask students to post their preliminary reports on the class’s shared website two weeks before the end of the semester. Then students sign up to read and critique one of the reports, which they send to me and the report authors. They write one paragraph about what they learned and liked, then offer constructive suggestions for improvement, from the report’s overall structure, to proposed references and improved figures, to grammar and spelling fixes. When students realize that their work will be read by other students, they are likely to be more careful. When students read another team’s project report, they reflect on their own project report, possibly seeing ways to improve it. I grade the critiques, which can be 3-6% of the final grade. My goal is to help every team improve the quality of its work. Sometimes the process of preparing their preliminary reports early and then revising does much to improve quality.

**Concerns:** I know that some faculty members worry that one person in a team will do the majority of the work, but if projects are ambitious enough then that possibility is reduced. Grading remains an issue that each faculty member has to decide on. I find that having students include a credits box in their final report helps, but other instructors require peer rating/reporting for team members.

In summary, anything novel takes some thinking, but embracing team projects could substantially improve education programs, engage more marginal students, and improve student retention rates. Learning to use teamwork tools such as email, videoconferencing, and shared documents provides students with valuable skills. Working in teams can be fun for students and satisfying for teachers.

*Featured image credit: Harvard Business School classroom by HBS 1908. CC BY-SA 3.0 via Wikimedia Commons.*

The post Teaching teamwork appeared first on OUPblog.

The post What is combinatorics? appeared first on OUPblog.

- How many possible sudoku puzzles are there?
- Do 37 Londoners exist with the same number of hairs on their head?
- In a lottery where 6 balls are selected from 49, how often do two winning balls have consecutive numbers?
- In how many ways can we give change for £1 using only 10p, 20p, and 50p pieces?
- Is there a systematic way of escaping from a maze?
- How many ways are there of rearranging the letters in the word “ABRACADABRA”?
- Can we construct a floor tiling from squares and regular hexagons?
- In a random group of 23 people, what is the chance that two have the same birthday?
- In chess, can a knight visit all the 64 squares of an 8 × 8 chessboard by knight’s moves and return to its starting point?
- If a number of letters are put at random into envelopes, what is the chance that no letter ends up in the right envelope?

What do you notice about these problems?

First of all, unlike many mathematical problems that involve much abstract and technical language, they’re all easy to understand – even though some of them turn out to be frustratingly difficult to solve. This is one of the main delights of the subject.
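Indeed, a couple of the questions above yield to a few lines of direct counting (a brute-force sketch of my own, not from the text):

```python
from math import prod

# Ways to give change for £1 using only 10p, 20p, and 50p pieces
# (the order of the coins does not matter):
ways = sum(1
           for fifties in range(3)    # 0, 1, or 2 fifty-pence pieces
           for twenties in range(6)   # 0 to 5 twenty-pence pieces
           for tens in range(11)      # 0 to 10 ten-pence pieces
           if 50 * fifties + 20 * twenties + 10 * tens == 100)
print(ways)  # 10

# Chance that at least two of 23 people share a birthday (365-day year):
p_shared = 1 - prod((365 - i) / 365 for i in range(23))
print(round(p_shared, 3))  # 0.507
```

The birthday figure agrees with the roughly 50% chance for a group of 23 people.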

Secondly, although these problems may appear diverse and unrelated, they mainly involve selecting, arranging, and counting objects of various types. In particular, many of them take the form: does such-and-such exist? If so, how can we construct it, and how many of them are there? And which one is the ‘best’?

The subject of combinatorial analysis or combinatorics (pronounced *com-bin-a-tor-ics*) is concerned with such questions. We may loosely describe it as the branch of mathematics concerned with selecting, arranging, constructing, classifying, and counting or listing things.
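Several of the opening questions are small enough to settle by direct computation. Here is a quick Python sketch (the function and variable names are my own) answering three of them:

```python
# Direct computation for three of the puzzles above.
from math import factorial, prod

# Birthday problem: probability that, among n people, at least two
# share a birthday (ignoring leap years).
def birthday_collision(n: int) -> float:
    p_all_distinct = prod((365 - i) / 365 for i in range(n))
    return 1 - p_all_distinct

# Rearrangements of "ABRACADABRA": 11 letters with repeats
# (5 A's, 2 B's, 2 R's), so 11! / (5! * 2! * 2!).
word = "ABRACADABRA"
counts = {c: word.count(c) for c in set(word)}
arrangements = factorial(len(word)) // prod(factorial(k) for k in counts.values())

# Ways to give change for £1 (100p) using only 10p, 20p, and 50p pieces:
# count the (fifties, twenties) pairs whose remainder is a multiple of 10p.
change_ways = sum(
    1
    for fifties in range(3)
    for twenties in range(6)
    if 100 - 50 * fifties - 20 * twenties >= 0
    and (100 - 50 * fifties - 20 * twenties) % 10 == 0
)

print(f"P(shared birthday among 23) = {birthday_collision(23):.4f}")  # ≈ 0.5073
print(f"Arrangements of ABRACADABRA = {arrangements}")                # 83160
print(f"Ways to change £1 = {change_ways}")                           # 10
```

The birthday calculation mirrors the classic argument: compute the probability that all birthdays are distinct, then subtract from one.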

To clarify our ideas, let’s see how various sources define combinatorics.

Oxford Dictionaries describe it briefly as:

“The branch of mathematics dealing with combinations of objects belonging to a finite set in accordance with certain constraints, such as those of graph theory.”

While the Collins dictionary presents it as:

“the branch of mathematics concerned with the theory of enumeration, or combinations and permutations, in order to solve problems about the possibility of constructing arrangements of objects which satisfy specified conditions.”

Wikipedia introduces a new idea, that combinatorics is:

“a branch of mathematics concerning the study of finite or countable discrete structures.”

So the subject involves finite sets or discrete elements that proceed in separate steps (such as the numbers 1, 2, 3 …), rather than continuous systems such as the totality of numbers (including π, √2, etc.) or ideas of gradual change such as are found in the calculus. The Encyclopaedia Britannica extends this distinction by defining combinatorics as:

“the field of mathematics concerned with problems of selection, arrangement, and operation within a finite or discrete system … One of the basic problems of combinatorics is to determine the number of possible configurations (e.g., graphs, designs, arrays) of a given type.”

Finally, Wolfram Research’s *MathWorld* presents it slightly differently as:

“the branch of mathematics studying the enumeration, combination, and permutation of sets of elements and the mathematical relations that characterize their properties,”

adding that:

“Mathematicians sometimes use the term ‘combinatorics’ to refer to a larger subset of discrete mathematics that includes graph theory. In that case, what is commonly called combinatorics is then referred to as ‘enumeration’.”

The subject of combinatorics can be dated back some 3000 years to ancient China and India. For many years, especially in the Middle Ages and the Renaissance, it consisted mainly of problems involving the permutations and combinations of certain objects. Indeed, one of the earliest works to introduce the word ‘combinatorial’ was a *Dissertation on the combinatorial art* by the 20-year-old Gottfried Wilhelm Leibniz in 1666. This work discussed permutations and combinations, even claiming on the front cover to ‘prove the existence of God with complete mathematical certainty’.

Over the succeeding centuries the range of combinatorial activity broadened greatly. Many new types of problem came under its umbrella, while combinatorial techniques were gradually developed for solving them. In particular, combinatorics now includes a wide range of topics, such as the geometry of tilings and polyhedra, the theory of graphs, magic squares and Latin squares, block designs and finite projective planes, and partitions of numbers.

Much of combinatorics originated in recreational pastimes, as illustrated by such well-known puzzles as the Königsberg bridges problem, the four-colour map problem, the Tower of Hanoi, the birthday paradox, and Fibonacci’s ‘rabbits’ problem. But in recent years the subject has developed in depth and variety and has increasingly become a part of mainstream mathematics. Prestigious mathematical awards such as the Fields Medal and the Abel Prize have been given for ground-breaking contributions to the subject, while a number of spectacular combinatorial advances have been reported in the national and international media.

Undoubtedly part of the reason for the subject’s recent importance has arisen from the growth of computer science and the increasing use of algorithmic methods for solving real-world practical problems. These have led to combinatorial applications in a wide range of subject areas, both within and outside mathematics, including network analysis, coding theory, probability, virology, experimental design, scheduling, and operations research.

*Featured image credit: ‘Sudoku’ by Gellinger. CC0 public domain via Pixabay.*

The post What is combinatorics? appeared first on OUPblog.


So, what is crystallography? Put simply, it is the study of crystals. Now, let’s be careful here. I am not talking about all those silly websites advertising ways in which crystals act as magical healing agents, with their chakras, auras, and energy levels. No, this is a serious scientific subject, with around 26 or so Nobel prizes to its credit. And yet, despite this, it remains a largely hidden subject, at least in the public mind.

Crystallography as a science has a long and venerable history going back to the 17^{th} century when the sheer beauty of the symmetry of crystals suggested an underlying order of some kind. For the next three centuries, our knowledge of what crystals actually were was based on conjecture and argument, with a few simple experiments thrown in. From their symmetry and shapes it was argued that crystals must consist of ordered arrangements of minute particles: today we know them as atoms and molecules.

But it was the discovery of X-rays in 1895 that changed all that, for a few years later, in 1912 in Germany, Max Laue, Walter Friedrich, and Paul Knipping showed that an X-ray beam incident on a crystal was scattered to form a regular pattern of spots on a film (we call this diffraction). This proved that X-rays consisted of waves and, furthermore, gave direct evidence of the underlying order of atoms in the crystal. Hence Nobel Prize number 1 went to Laue in 1914. However, it was William Lawrence Bragg (WLB) who in 1912, at the age of 22, showed how the observed diffraction pattern could be used to determine the positions of atoms in the crystal, thus launching a completely new scientific discipline: X-ray crystallography. Working with his father, William Henry Bragg (WHB), WLB quickly determined the crystal structures of several materials, starting with common salt and diamond. Father and son shared Nobel Prize number 2 in 1915, and both went on to create world-class research groups working on a huge range of solid materials; incidentally, they were also active in encouraging women into science.
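The quantitative heart of the younger Bragg's insight is what is now called Bragg's law (not spelled out in the post above, but worth recording): X-rays of wavelength λ reflected from parallel planes of atoms a distance d apart reinforce one another only at angles θ satisfying

```latex
n\lambda = 2d\sin\theta, \qquad n = 1, 2, 3, \ldots
```

Measuring the angles at which the spots appear, with λ known, therefore reveals the spacings d between atomic planes, and from many such spacings the arrangement of atoms in the crystal can be reconstructed.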

Since then X-ray crystallography, which today is used throughout the world, has been the method of choice for determining the crystal structures of organic and inorganic solids, pharmaceuticals, biological substances such as proteins and viruses, and indeed all kinds of solid substances. Crick and Watson’s determination of the double helix of DNA is probably the most well-known example of the use of crystallography, incidentally a discovery made in William Lawrence Bragg’s laboratory in Cambridge. Had it not been for X-ray (and later neutron and electron) crystallography we probably would not have today much of an electronics industry, computer technology, new pharmaceuticals, new materials of all sorts, nor the modern field of genetics. The Braggs left a huge legacy which today continues to make astonishing progress.

*Featured image credit: Protein Crystals Use in XRay Crystallography by CSIRO. CC BY 3.0 via Wikimedia Commons.*

The post A brief history of crystallography appeared first on OUPblog.


For many years, experts in neurology, computer science, and engineering have worked toward developing algorithms to predict a seizure before it occurs. If an algorithm could detect subtle changes in the electrical activity of a person’s brain (measured by electroencephalography, or EEG) before a seizure occurs, people with epilepsy could take medications only when needed, and possibly reclaim some of those daily activities many of us take for granted. But algorithm development and testing require substantial quantities of suitable data, and progress has been slow.

Many early research reports developed and tested algorithms on relatively short segments of intracranial EEG from patients with epilepsy undergoing monitoring before surgery. There are a number of problems with this. First, patients undergoing pre-surgical monitoring for epilepsy typically have their medications reduced to encourage seizures to occur; the resulting progressive decrease in blood levels of medication has been shown to affect the normal baseline pattern of a patient’s EEG. Second, hospital stays for pre-surgical monitoring by necessity rarely last more than two weeks, providing a very limited amount of data for any single patient. These short data segments with changing baseline EEG characteristics are particularly problematic when algorithm scientists attempt to measure an algorithm’s false positive rate, or the number of false alarms that a seizure forecasting algorithm might raise. Development of robust, reliable seizure prediction algorithms requires data on many seizures and many periods of baseline, non-seizure EEG, with enough time between the seizures to allow the brain to recover. In addition, researchers are often reluctant to share algorithm data and programs; privacy concerns and the high cost of sharing large data sets make testing and comparison very difficult.

In 2013 a group of physicians and scientists from Melbourne, Australia, reported a successful trial of an implanted device capable of measuring EEG from intracranial electrode strips and telemetering the EEG data to a small external device, about the size of a smartphone, that could run seizure forecasting algorithms and provide warnings of impending seizures. The device used a proprietary seizure forecasting algorithm that performed well enough to be helpful for some patients in the trial, raising hopes that seizure forecasting might soon become clinically possible.

We recently made an effort to use Kaggle.com — a website that runs data science competitions to develop algorithms to predict everything from insurance rates to the Higgs Boson — to develop new algorithms for seizure forecasting. Our competition used intracranial EEG data from the same device in the Australian trial (implanted in eight dogs with naturally occurring epilepsy) as well as data from two human patients undergoing intracranial monitoring. In hope of winning $15,000 in prize money, plus bragging rights among elite data science circles, hundreds of algorithm developers, most with little or no experience with epilepsy or EEG, worked countless hours to build, test, and rebuild algorithms for seizure forecasting, and tested their algorithms on nearly 350 seizures recorded over more than 1,500 days. After four months, over half of these “crowdsourced” algorithms performed better than random predictions, and the winning algorithms accurately predicted over 70% of seizures with a 25% false positive rate. The data are available for researchers to continue developing new algorithms for predicting seizures, and can serve as a benchmark for new algorithms to be compared directly to one another and to the algorithms developed in this competition. The best performing algorithms in the competition used a mixture of conventional and complex approaches drawn from physics, engineering, and computer science, sometimes in unorthodox ways that proved to be surprisingly effective. The winning teams also made the source code for their algorithms publicly available, providing a benchmark and starting point for future algorithm developers.

While we applaud the talented algorithm scientists who took home the prize money, we hope the real winners of the contest will be our patients.

*Featured image: Sky cloudy. Uploaded by Carla Nunziata. CC-BY-SA-3.0 via Wikimedia Commons*

The post Today’s Forecast: Cloudy with a chance of seizures appeared first on OUPblog.


The most recent time this happened, it reminded me of a startling academic paper, first published in 1978, in the *New England Journal of Medicine.* Dr Ward Casscells and colleagues reported something very disturbing: that most doctors can’t calculate risks correctly.

The question they posed was this. Imagine a disease (let’s call it Gobble’s disease*), which has a prevalence of 1 in 1000 in your population. There is a test for Gobble’s disease, and you know it has a false positive rate of 5%. You meet a patient in your clinic, who has tested positive. What is the probability that the patient has Gobble’s disease?

A member of the public could be forgiven for thinking the answer is 100%. After all, medical tests are always reliable, right? Someone a bit savvier, say a doctor, might look at that 5% false positive rate, and decide the answer is 95%. That’s what most of the respondents in Casscells’ study said, and he offered his question to senior doctors, junior doctors, and medical students. (And, if you had offered it to me as a medical student or a junior doctor, that’s almost certainly what I would have said – even though, in all fairness, my medical school tried hard to teach us the truth).

But they would all be hopelessly wrong. A statistician would say this: suppose you test the population for Gobble’s disease; a 5% false positive rate means that 5% of your population will test positive for Gobble’s disease, *even when they don’t have it.* 5% of your population is 50 per 1000. But we know that only 1 in 1000 of the people in your population has Gobble’s disease; therefore your test will be wrong for those 50 people, and right only for that last 1 person. So the probability of your patient – who tested positive for Gobble’s disease – actually having Gobble’s disease is only 1 in 50, or 2%.
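The statistician's argument can be checked in a few lines of Python (the variable names are mine, and like the argument above it assumes a perfectly sensitive test):

```python
# Checking the statistician's arithmetic with Bayes' theorem.
prevalence = 1 / 1000      # 1 person in 1000 has Gobble's disease
sensitivity = 1.00         # P(test positive | disease), assumed perfect
specificity = 0.95         # P(test negative | no disease): a 5% false positive rate

# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.4f}")  # ≈ 0.0196, i.e. about 2%
```

The exact figure, about 1.96%, is what the "1 in 50, or 2%" estimate above is approximating.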

This result is so unexpected, so counter-intuitive, that it’s worth looking at more closely.

All medical tests have two basic properties. These are known as *sensitivity* and *specificity*. The sensitivity is the probability that the patient will test positive for the disease if they actually have it. Our fictitious Gobble’s test is, we assume, 100% sensitive: it will always detect someone with Gobble’s disease. In practice, few medical tests approach 100% sensitivity.

The specificity is the probability that the patient will test negative for the disease if they haven’t got the disease. Our Gobble’s test is 95% specific: if the patient doesn’t have Gobble’s disease, there is a 95% likelihood that they will test negative for the disease. That sounds great, until we remember that there’s a 5% likelihood they will test positive, which is the cause of all our problems. Sadly, in reality, few medical tests approach 95% specificity.

Sensitivity and specificity are two sides of the same coin. One cannot improve the sensitivity of a test without also admitting more false positives (which might, as we have seen, drown out the true positives we are actually interested in). An extreme example is to declare every test result positive: you would never miss anyone with the disease, but there would be so many false positives that your test would be useless.

The reason our test for Gobble’s disease is so unhelpful is that Gobble’s disease is rare. The test becomes much more valuable if Gobble’s disease is more common. Therefore to make it more useful, we shouldn’t apply the test indiscriminately, but we should try to narrow down our focus to people with risk factors. If Gobble’s disease is rare in the young but gets more common in the elderly (as many cancers do), then we can improve the usefulness of the test by applying it only to the elderly.

The other way in which we can improve the usefulness of our test is to combine it with other tests. Say our test is quick and safe. We can apply it easily to a large number of people. But to those who test positive, we can then go on and apply a different test, perhaps one which is more invasive or more expensive. Patients who test positive for both are much more likely to actually *have* Gobble’s disease.
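Both remedies are easy to quantify. The sketch below uses illustrative numbers of my own (a high-risk group with 1-in-20 prevalence, and a second test with the same characteristics as the first, assumed to err independently):

```python
# Two ways to make the Gobble's test useful; the function is just Bayes' theorem.
def posterior(prior: float, sensitivity: float, specificity: float) -> float:
    """P(disease | positive result), given a prior probability of disease."""
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pos

sens, spec = 1.00, 0.95  # same assumptions as before

# 1. Narrow the focus: screen a high-risk group where prevalence is,
#    say, 1 in 20 rather than 1 in 1000.
high_risk = posterior(1 / 20, sens, spec)       # ≈ 0.51 rather than 0.02

# 2. Combine tests: the posterior after the first test becomes the prior
#    for the second (assuming the two tests err independently).
after_one = posterior(1 / 1000, sens, spec)     # ≈ 0.02
after_two = posterior(after_one, sens, spec)    # ≈ 0.29

print(f"high-risk group: {high_risk:.2f}, after two tests: {after_two:.2f}")
```

A positive result still falls well short of certainty in both cases, but the test has gone from nearly worthless to genuinely informative.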

That security guard, having a quick look through my bag, is applying a diagnostic test: do I have a dangerous item in there, or not? Unfortunately his test isn’t very sensitive, since he might easily miss something down at the bottom. And, since most people going to the concert are there to enjoy the music, the prevalence of miscreants is low. Therefore the simple mathematics of the test tells us it is likely to be worthless. The effectiveness of the test is multiplied by applying a different test: an X-ray scan of my bag, or even of my body. These are much more expensive than a quick visual check, but airports, understandably, are prepared to foot the bill.

There are powerful lessons to be learned here. The first is that applying a single test to a whole population is likely to be very unhelpful, especially if what you are looking for is rare. The second is that medical tests seldom give a clear-cut answer; instead they lengthen or shorten the odds of a particular diagnosis being true. Finally, quite a lot of other tests (such as concert security) are subject to exactly the same mathematical rules as medical tests. A thorough understanding of the mathematics of probability will help no end in this endeavour. In the words of William Osler (often described as the father of modern medicine), “Medicine is a science of uncertainty and an art of probability”!

*‘Gobble’s Disease’ is an invented illness from the *Oxford Handbook of Clinical Medicine*.

*Featured Image Credit: ‘Dice, Die, Probability’ by Jody Lehigh. CC0 Public Domain via Pixabay.*

The post Doing it with sensitivity appeared first on OUPblog.


However, there is growing interest in design thinking, a research method which encourages practitioners to reformulate goals, question requirements, empathize with users, consider divergent solutions, and validate designs with real-world interventions. Design thinking promotes playful exploration and recognizes the profound influence that diverse contexts have on preferred solutions. Advocates believe that they are dealing with “wicked problems” situated in the real world, in which controlled experiments are of dubious value.

The design thinking community generally respects science, but resists pressures to be “scientized”, which they equate with relying on controlled laboratory experiments, reductionist approaches, traditional thinking, and toy problems. Similarly, many in the scientific community will grant that design thinking has benefits in coming up with better toothbrushes or attractive smartphones, but they see little relevance to research work that leads to discoveries.

The tension may be growing since design thinking is on the rise as a business necessity and as part of university education. Institutions as diverse as the Royal College of Art, Goldsmiths at the University of London, Stanford University’s D-School, and Singapore University of Technology and Design are leading a rapidly growing movement that is eagerly supported by business. Design thinking promoters see it as a new way of thinking about serious problems such as healthcare delivery, community safety, environmental perseveration, and energy conservation.

The rising prominence of design thinking in public discourse is revealed by two graphs (see Figures 1 and 2).

Both sources appear to show that, after 1975, design overtook science and engineering in prominence.

Scientists and engineers might dismiss this data and the idea that design thinking could challenge the scientific method. They believe that controlled experiments with statistical tests for significant differences are the “gold standard” for collecting evidence to support hypotheses, which add to the trusted body of knowledge. Furthermore, they believe that the cumulative body of knowledge provides the foundations for solving the serious problems of our time.

By contrast, design thinking activists question the validity of controlled experiments in dealing with complex socio-technical problems such as healthcare delivery. They question the value of medical research by carefully controlled clinical trials because of the restricted selection criteria for participants, the higher rates of compliance during trials, and the focus on a limited set of treatment possibilities. Flawed clinical trials have resulted in harm, such as when a treatment is tested only on men but the results are applied to women. Even respected members of the scientific community have made disturbing complaints about the scientific method, such as John Ioannidis’s now-famous 2005 paper “Why Most Published Research Findings Are False.”

Design thinking advocates do not promise truth, but they believe that valuable new ideas, services, and products can come from their methods. They are passionate about immersing themselves in problems, talking to real customers, patients, or students, considering a range of alternatives, and then testing carefully in realistic settings.

Of course, there is no need to choose between design thinking and the scientific method, when researchers can and should do both. The happy compromise may be to use design thinking methods at early stages to understand a problem, and then test some of the hypotheses with the scientific method. As solutions are fashioned they can be tested in the real world to gather data on what works and what doesn’t. Then more design thinking and more scientific research could provide still clearer insights and innovations.

Instead of seeing research as a single event, such as a controlled experiment, the British Design Council recommends the Double Diamond model which captures the idea of repeated cycles of divergent and then convergent thinking. In one formulation they describe a 4-step process: “Discover”, “Define”, “Develop”, and “Deliver.”

The spirited debates about which methods to use will continue, but as teachers we should ensure that our students are skilled in both the scientific method and design thinking. Similarly, as business leaders we should ensure that our employees are well trained in applying both. When serious problems such as healthcare delivery or environmental preservation are being addressed, we will all be better served when design thinking and the scientific method are combined.

*Featured image credit: library books education literature by Foundry. Public domain via Pixabay.*

The post Can design thinking challenge the scientific method? appeared first on OUPblog.


In 1931 Kurt Gödel published one of the most important and most celebrated results in 20^{th} century mathematics: the incompleteness of arithmetic. Gödel’s work, however, actually contains two distinct incompleteness theorems. The first can be stated a bit loosely as follows:

*First Incompleteness Theorem*: If *T* is a consistent, sufficiently strong, recursively axiomatizable theory, then there is a sentence “P” in the language of arithmetic such that neither “P” nor “not: P” is provable in *T*.

A few terminological points: To say that a theory is *recursively axiomatizable* means, again loosely put, that there is an algorithm that allows us to decide, of any statement in the language, whether it is an axiom of the theory or not. Explicating what, exactly, is meant by saying a theory is *sufficiently strong* is a bit trickier, but it suffices for our purposes to note that a theory is sufficiently strong if it is at least as strong as standard theories of arithmetic, and by noting further that this isn’t actually very strong at all: the vast majority of mathematical and scientific theories studied in standard undergraduate courses are sufficiently strong in this sense. Thus, we can understand Gödel’s first incompleteness theorem as placing a limitation on how ‘good’ a scientific or mathematical theory *T* in a language *L* can be: if *T* is consistent, and if *T* is sufficiently strong, then there is a sentence *S* in language *L* such that *T* does not prove that *S* is true, but it also doesn’t prove that *S* is false.

The first incompleteness theorem has received a lot of attention in the philosophical and mathematical literature, appearing in arguments purporting to show that human minds are not equivalent to computers, or that mathematical truth is somehow ineffable, and the theorem has even been claimed as evidence that God exists. But here I want to draw attention to a less well-known, and very weird, consequence of Gödel’s other result, the second incompleteness theorem.

First, a final bit of terminology. Given any theory *T*, we will represent the claim that *T* is consistent as “Con(*T*)”. It is worth emphasizing that, if *T* is a theory expressed in language *L*, and *T* is sufficiently strong in the sense discussed above, then “Con(*T*)” is a sentence in the language *L* (for the cognoscenti: “Con(*T*)” is a very complex statement of arithmetic that is *equivalent* to the claim that *T* is consistent)! Now, Gödel’s second incompleteness theorem, loosely put, is as follows:

*Second Incompleteness Theorem*: If *T* is a consistent, sufficiently strong, recursively axiomatizable theory, then *T* does not prove “Con(*T*)”.

For our purposes, it will be easier to use an equivalent, but somewhat differently formulated, version of the theorem:

*Second Incompleteness Theorem*: If *T* is a consistent, sufficiently strong, recursively axiomatizable theory, then the theory:

*T* + not: Con(*T*)

is consistent.

In other words, if *T* is a consistent, sufficiently strong theory, then the theory that says everything that *T* says, but also includes the (false) claim that *T* is inconsistent is nevertheless consistent (although obviously not true!) It is important to note in what follows that the second incompleteness theorem does not guarantee that a consistent theory *T* does not prove “not: Con(*T*)”. In fact, as we shall see, some consistent (but false) theories allow us to prove that they are not consistent even though they are!

We are now (finally!) in a position to state the main result of this post:

*Theorem*: There exists a consistent theory *T* such that:

*T* + Con(*T*)

is inconsistent, yet:

*T* + not: Con(*T*)

is consistent.

In other words, there is a consistent theory *T* such that adding the __true__ claim “*T* is consistent” to *T* results in a contradiction, yet adding the __false__ claim “*T* is inconsistent” to *T* results in a (false but) consistent theory.

Here is the proof: Let *T*_{1} be any consistent, sufficiently strong theory (e.g. Peano arithmetic). So, by Gödel’s second incompleteness theorem:

*T*_{2} = *T*_{1} + not: Con(*T*_{1})

is a consistent theory. Hence “Con(*T*_{2})” is true. Now, consider the following theories:

(i) *T*_{2} + not: Con(*T*_{2})

(ii) *T*_{2} + Con(*T*_{2})

Since, as we have already seen, *T*_{2} is consistent, it follows, again, by the second incompleteness theorem, that the first theory:

*T*_{2} + not: Con(*T*_{2})

is consistent. But now consider the second theory (ii). This theory includes the claim that *T*_{2} does not prove a contradiction – that is, it contains “Con(*T*_{2})”. But it also contains every claim that *T*_{2} contains. And *T*_{2} contains the claim that *T*_{1} __does__ prove a contradiction – that is, it contains “not: Con(*T*_{1})”. But if *T*_{1} proves a contradiction, then *T*_{2} proves a contradiction (since everything contained in *T*_{1} is also contained in *T*_{2}). Further, any sufficiently strong theory is strong enough to show this, and hence, *T*_{2} proves “not: Con(*T*_{2})”. Thus, the second theory:

*T*_{2} + Con(*T*_{2})

is inconsistent, since it proves both “Con(*T*_{2})” and “not: Con(*T*_{2})”. QED.
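The proof can be compressed into a short chain (writing "G2" for the second incompleteness theorem; the symbolic layout is mine, not Gödel's):

```latex
\begin{align*}
&T_2 := T_1 + \neg\mathrm{Con}(T_1) \text{ is consistent} && \text{(G2 applied to } T_1\text{)}\\
&T_2 \vdash \neg\mathrm{Con}(T_1) && \text{(it is an axiom of } T_2\text{)}\\
&T_2 \vdash \neg\mathrm{Con}(T_1) \rightarrow \neg\mathrm{Con}(T_2) && \text{(since } T_1 \subseteq T_2\text{)}\\
&T_2 \vdash \neg\mathrm{Con}(T_2) && \text{(modus ponens)}\\
&\text{so } T_2 + \mathrm{Con}(T_2) \text{ is inconsistent, while } T_2 + \neg\mathrm{Con}(T_2) \text{ is consistent} && \text{(G2 applied to } T_2\text{)}
\end{align*}
```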

Thus, there exist consistent theories, such as *T*_{2} above, such that adding the (true) claim that that theory is consistent to that theory results in inconsistency, while adding the (false) claim that the theory is inconsistent results in a consistent theory. It is worth noting that part of the trick is that the theory *T*_{2} we used in the proof is itself consistent but not true.

This, in turn, suggests the following: in some situations, when faced with a theory *T* where we believe *T* to be consistent, but where we are unsure as to whether *T* is true, it might be safer to add “not: Con(*T*)” to *T* than it is to add “Con(*T*)” to *T*. Given that the majority of our scientific theories are likely to be consistent, but many will turn out to be false as they are overturned by newer, better, theories, this then suggests that sometimes we might be better off believing that our scientific theories are inconsistent than believing that they are consistent (if we take a stand on their consistency at all). But how can this be right?

*Featured image credit: Random mathematical formulæ illustrating the field of pure mathematics. Public domain via Wikimedia Commons.*

The post The consistency of inconsistency claims appeared first on OUPblog.

The post Mary Somerville: the new face on Royal Bank of Scotland’s ten-pound note is worthy of international recognition appeared first on OUPblog.

In celebrating the good news that Somerville is the people’s choice for the new gig, we could do worse than listen to the accolade given to her writing by one of the men she defeated in the public poll: James Clerk Maxwell. Father of the wireless electromagnetic era, he no doubt studied *Mechanism of the Heavens* as a student at Cambridge – and he certainly knew of Somerville’s second book, *On the Connexion of the Physical Sciences*. This was popular science rather than an advanced textbook, but Maxwell described it as “one of those suggestive books, which put into definite, intelligible, and communicable form the guiding ideas that are already working in the minds of men of science… but which they cannot yet shape into a definite statement.” This is high praise indeed.

If Maxwell’s ‘men of science’ sounds sexist in hindsight, it is doubly important to remember that women were not allowed to join the academic academies – not even the Royal Society, whose aim was not so much the doing of science as promoting it. In other words, ‘men of science’ was fact, not opinion. Which makes Mary Somerville all the more remarkable. She went on to write two more science books – and a delightful memoir completed when she was 91 – but she was also a scientist in her own right. In 1826 she published a paper in the prestigious *Philosophical Transactions of the Royal Society*, based on her experiments on a possible connection between violet light and electromagnetism. Although her results were ultimately proved incorrect, initially such famous scientists as her friends John Herschel and William Wollaston had regarded her experiment as authoritative. Her friend Michael Faraday would find the first correct experimental connection between light and electromagnetism, and then Maxwell would complete the puzzle with his magnificent electromagnetic theory of light. But he had such respect for Somerville that nearly fifty years after her experiment, he took the trouble to analyse its underlying flaw, in his *Treatise on Electricity and Magnetism*.

Somerville corresponded with Faraday during her next series of experiments, in 1835. These involved testing the effects of different coloured light on photographic paper (photography was a new and fledgling invention at the time), and her paper was published by the French Academy of Sciences. The results of her third set of experiments – on the effect of different coloured light on organic matter – were published by the Royal Society in 1845.

In her book *Connexions*, she had also conjectured that observed discrepancies in the orbit of Uranus – which had been discovered by another friend of hers, William Herschel, father of John – might be due to the effects of another body as yet unseen. After John Couch Adams and Urbain Le Verrier independently predicted the existence of Neptune, confirmed by observation in 1846, Adams told Somerville’s husband, William Somerville, that his search for the planet had been inspired by that passage in *Connexions.*

When Mary Somerville died in 1872, just before her 92nd birthday, she was widely acknowledged as the nineteenth century’s ‘Queen of Science.’ The day before she died, she had been studying cutting-edge mathematics (‘quaternions’, which Maxwell was also studying, as it happens – he discussed their application to electromagnetism in his *Treatise* of the following year). But what makes Mary Somerville’s story timeless is her monumental struggle to understand the mysteries of science in the first place. It might be tempting to think she owed her success to the support of all her famous friends – and indeed, they did support her. But she had gained entry to the society of ‘men of science’ in a most extraordinary way.

As a child in Burntisland, a village across the Firth of Forth from Edinburgh, Mary Fairfax had grown up ‘a wild creature’, as she put it. While her brothers were sent to school, she had been left free to roam along the seashore. Her mother had taught her enough literacy to read the Bible, and later she was taught some basic arithmetic. But everything changed when a friend showed fifteen-year-old Mary the needlework patterns in a women’s magazine. Leafing through it, Mary was mesmerized not by exquisite needlework but by a collection of x’s and y’s in strange, alluring patterns. Her friend knew only that “they call it algebra” (it was a worked solution to one of the magazine’s mathematical puzzles). Tantalized, Mary began studying mathematics in secret, reading under the covers at night. When the household stock of candles ran low too quickly, Mary’s secret was discovered and her candles confiscated – her father accepted the prevailing belief that intellectual study would send a girl mad or make her seriously ill.

Mary persevered for decades, teaching herself mathematics, Latin, and French. Eventually, she was able to read and understand both Newton’s *Principia* in Latin and the work of Newton’s disciple Pierre-Simon Laplace in French. She did it all alone, just for the love of knowledge. But when Britain’s ‘men of science’ and their wives finally discovered her learning, they were stunned. They quickly embraced her, but her public success was also possible because of the support of her three surviving children (three had died in childhood), and especially her medical doctor husband.

At 91, while studying quaternions, she revealed one of the secrets of her success: whenever she encountered a difficulty, she remained calm, never giving up, because “if I do not succeed today, I will attack [the problem] again on the morrow.” It is a great aphorism to remember her by, as we celebrate the accolades that are still coming her way: Oxford’s Somerville College was named in her honour soon after her death, and now RBS’s ten-pound note.

*Featured image credit: Somerville College by Philip Allfrey. CC-BY-SA-3.0 via Wikimedia Commons.*

The post Mary Somerville: the new face on Royal Bank of Scotland’s ten-pound note is worthy of international recognition appeared first on OUPblog.

The post Earth’s climate: a complex system with mysteries abound appeared first on OUPblog.

The climate has evolved through massive changes to where we are today. It continues to evolve on long time scales but is also impacted by two factors acting on human time scales. First, there is the ongoing internal variability resulting from a plethora of natural cycles. El Niño is an exemplar, but variability occurs at all levels of the system—in the atmosphere, ocean, cryosphere, biosphere, and through their connectedness—and spans a wide range of time scales from weeks to centuries. Moreover, modes of variability can conspire together to produce unanticipated and seemingly unrelated effects. Second, there are changes in radiative forcing, recently dominated by anthropogenic emissions, but also affected by other factors including land use, ocean carbon uptake, solar variability, and feedbacks such as impacts on albedo from melting ice and changing cloud patterns. It is this complex mixture in a dynamically evolving system that the scientific community is striving to unravel.

Climate science is in an unusual situation in that it is an experimental science but one in which the experiments are not restricted to a traditional laboratory. Because experiments cannot be carried out on the full climate system, mathematical replicas of the Earth have been developed in order to test scientific hypotheses about how the planet will react in different circumstances. These are the climate models hosted at around 30 climate centres around the world that provide the main source of predictive information in the Intergovernmental Panel on Climate Change assessment reports. Each is a highly complex entity in itself, involving massive computational codes. The upkeep and development of these codes raises significant mathematical issues, but the involvement of the mathematical sciences in the study of climate goes far beyond these operational tasks.

Complementary to the view of the climate as a deterministic dynamical system is the view built by compiling information from observational data. These observations, both from modern instrumental systems and from the distant past using palaeoclimate proxies, are uncertain, which means that we need sophisticated statistical methodology to estimate and map properties of the Earth’s climate system. The two viewpoints come together because significant uncertainty accompanies any model projection, so model output is also properly regarded as statistical.

The mathematical sciences are playing a growing role in climate studies at all levels. Models of more modest dimension than those residing at climate centres are gaining prominence. These conceptual models can help us see the relations between different internal mechanisms that can be hidden in the full model. Key processes can often be studied in isolation, and their modelling brings considerable insight into the overall climate. Such processes include biogeochemical cycles, the melting of sea and land ice, and land-use changes. The internal structure of such processes and their impact on other aspects of the climate are revealed by mathematical and statistical analyses. Such analysis is also critical in the proper inclusion of these processes through parameterization.
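
A concrete example of such a conceptual model – not discussed in the post itself, but a standard teaching example – is the zero-dimensional energy-balance model: treat the whole planet as a single point and balance absorbed sunlight against emitted thermal radiation. A minimal sketch in Python, using common textbook parameter values:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(solar=1361.0, albedo=0.30, emissivity=1.0):
    """Equilibrium temperature (K) of a planet treated as a single point.

    Absorbed sunlight, solar * (1 - albedo) / 4, is balanced against
    emitted thermal radiation, emissivity * SIGMA * T**4.
    """
    absorbed = solar * (1.0 - albedo) / 4.0   # averaged over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

bare = equilibrium_temp()                    # about 255 K: no greenhouse effect
warmed = equilibrium_temp(emissivity=0.612)  # about 288 K: crude greenhouse
```

Even this toy model makes the article’s point: a single internal mechanism (here, the albedo or emissivity feedback) can be studied in isolation before being folded into a full model.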

Ultimately, both understanding and prediction of the climate depend equally on models, which encode physical laws, and observations, which bring direct insight into the real world. Melding these together to tease out the optimal information is an extraordinary mathematical challenge that demands a blend of statistical and dynamical thinking, in both cases at the frontiers of those fields.
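
The simplest building block of that melding can be seen concretely. The sketch below is a scalar version of the Kalman-style update used in data assimilation (the numbers are purely illustrative, not from any operational system): a model forecast and a noisy observation are blended, each weighted by the inverse of its uncertainty.

```python
def assimilate(forecast, forecast_var, obs, obs_var):
    """Blend a model forecast with a noisy observation.

    The gain weights each source by the inverse of its variance, so the
    more trusted source pulls the analysis further towards itself.
    """
    gain = forecast_var / (forecast_var + obs_var)
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1.0 - gain) * forecast_var  # below either input variance
    return analysis, analysis_var

# Model forecasts 15.0 degC (variance 4.0); an observation says 16.0 (variance 1.0).
temp, var = assimilate(15.0, 4.0, 16.0, 1.0)   # analysis 15.8 degC, variance 0.8
```

The analysis sits closer to the more certain observation, and its variance is smaller than either input’s – the statistical and dynamical viewpoints reinforcing each other, exactly as the paragraph above describes.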

*Featured image credit: Windräder by fill. Public Domain via Pixabay.*

The post What is information, and should it be free? appeared first on OUPblog.

Fortunately there are some bright spots, such as the fact that it is now possible to measure information. This is the result of the pioneering work of Claude Shannon in the 1940s and 1950s. Shannon’s definitions can be used to prove theorems in a mathematically precise way, and in practice they provide the foundation for the machines which handle the vast amounts of information that are now available to us. However, that is not the end of the story.

In 1738 Daniel Bernoulli pointed out that the mathematical measure of ‘expectation’ did not allow for the fact that different people can value the outcome of an event in different ways. This observation led him to introduce the idea of ‘utility’, which has come to pervade theoretical economics. In fact, Shannon’s measure of information can be thought of as a generalization of expectation, and it leads to similar difficulties. As far as I am aware, academic work on this subject has not yet found applications in practice.
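
For reference, Shannon’s entropy is just the expected value of each outcome’s ‘surprisal’ – which is what makes it a generalization of expectation. A minimal sketch (my illustration, not from the post):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the expectation of the surprisal -log2(p)."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

entropy([0.5, 0.5])   # a fair coin: 1.0 bit per toss
entropy([0.9, 0.1])   # a biased coin: ~0.47 bits - it surprises us less
```

Bernoulli’s objection carries over directly: the formula weights outcomes only by their probabilities, not by how much any particular person values them.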

In the absence of a theoretical model, many countries (including the UK) have tried to set up a legal framework for information. First we had the Data Protection Act (1998), intended to prevent the misuse of the large amounts of personal data stored on computers. But it is very loosely worded, and consequently open to many different interpretations. (In one case, a police force believed that the Act prevented them from passing on information about a person whom they suspected of being a serial sex offender. This person then obtained employment elsewhere as a school caretaker, and murdered two pupils.) The next step was the Freedom of Information (FoI) Act (2000), which now seems to be regarded as unsatisfactory – on all sides. Those who see themselves as guardians of our ‘right to know’ are dissatisfied with the wide range of circumstances which can be considered as exceptions. Those who see themselves as guardians of our ‘security’ are concerned that attempts to prevent terrorist activities may be compromised.

Recently I looked at a question which led me into these muddy waters. The question was a simple instance of a very general one. Suppose a piece of information is only partly revealed to us: it may have been corrupted by transmission through a ‘noisy channel’, or it may have been encrypted, or some important details may have been intentionally withheld. How much useful information can we deduce from the data that we do have?

My example came from the unlikely source of the popular BBC television programme, Strictly Come Dancing. The problem was as follows. In order to determine which contestants should be eliminated from the show, the programme’s creators have devised a complex voting algorithm. First the judges award scores, and these are converted into points. Then the public is invited to vote, and the result is also converted into points. Finally the two sets of points are combined to produce a ranking of all the contestants. But the public points and the final ranking are not revealed on the results show, only the identity of the two lowest contestants. I was able to show that in some circumstances the revealed data can indeed provide a great deal of information about the public vote.
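
The flavour of that calculation can be shown with a toy model. The BBC’s actual combination rule is not public, so the rule below is a hypothetical stand-in: add judge points to public points, send the two lowest totals to the bottom, and break ties by the public vote. The point is simply that enumerating the public votes consistent with the revealed bottom two measures how much the revealed data tells us.

```python
from itertools import permutations

def consistent_public_rankings(judge_points, bottom_two):
    """Enumerate public-vote point orders consistent with a revealed result.

    Hypothetical combination rule (the real one is not public): each
    contestant's total is judge points plus public points, the two lowest
    totals land in the bottom two, and ties are broken by the public vote.
    """
    n = len(judge_points)
    consistent = []
    for public in permutations(range(1, n + 1)):   # public points, 1..n
        totals = [j + p for j, p in zip(judge_points, public)]
        order = sorted(range(n), key=lambda i: (totals[i], public[i]))
        if set(order[:2]) == set(bottom_two):
            consistent.append(public)
    return consistent

# Four contestants; the judges awarded 4, 3, 2, 1 points; contestants 2 and 3
# (zero-indexed) were revealed as the bottom two.
possible = consistent_public_rankings([4, 3, 2, 1], {2, 3})
```

The smaller the fraction `len(possible) / 24` of all public orderings that survive, the more information the revealed bottom two has leaked about the supposedly secret public vote.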

Strictly Come Dancing arouses great passion among its followers, and the lack of transparency of the voting system has led to numerous requests under the FoI Act. Most of these requests have been refused by the BBC, on grounds that appear to be valid in law. It seems that the FoI Act was originally based on some rather idealistic notions. When the Act was passing into law, it had to be converted into a more realistic instrument, and the resulting form of words therefore provides for a large number of exceptions to the general principle. Specifically, the BBC is able to claim that the details of the voting are exempt, because they are being used ‘for the purposes of journalism, art, or literature.’

In my view the FoI Act is simply a shield that deflects attention from the heart of the matter. The public is invited to vote and therefore has good reason to be interested in the mechanics of the voting procedure and its outcome. In addition to details of the method used to combine the public ranking with the judges’ ranking, there are other causes for concern. For example, multiple voting is allowed, and this opens up the possibility of misuse by agents who have a vested interest in a particular contestant.

I began by remarking that there is a close relationship between money and information. We are expected to observe certain rules and regulations about how we use our money, but we are allowed to keep it safe. Will similar rules and regulations about information emerge, and when?

*Featured image credit: binary code by Christiaan Colen. CC-BY-SA 2.0 via Flickr.*

The post Conversations in computing: Q&A with Editor-in-Chief, Professor Steve Furber appeared first on OUPblog.

**Justin: Can you tell us a bit more about your current role?**

**Steve:** At Manchester I am a regular research professor and I’ve served my term as head of department – that’s some time ago now. I lead a group of 40 or 50 staff and students, and our general research area ranges from computer engineering to computer architecture. On the engineering side we’re interested in the design of silicon chips and how you can make the most of the enormous transistor resource that the manufacturing industries can now give us on a chip. On the architecture side we are interested in particular in how we exploit the many-core resources that are increasingly available in all computer products today.

**Justin: What was the topic of your recent Lovelace Lecture?**

**Steve:** The title is ‘Computers and Brains’ and basically this is a lecture which talks about some of the history of artificial intelligence from some of the early writings of Ada Lovelace herself – 2015 was the 200th anniversary of her birth – through to Alan Turing’s thoughts on AI and then onto the research that I’m leading today, which is building a very large parallel computer for real-time brain modelling applications.

**Justin: Can you tell us about your SpiNNaker project?**

**Steve:** SpiNNaker is a massively parallel computer for real-time brain modelling, and the name is a rather crude compression of ‘spiking neural network architecture’ – it’s not quite an acronym. We’re using a million ARM processors – those are the processors that you find in your mobile phone, designed by a British company in Cambridge – in a single machine, and with a million ARM cores we can model about 1% of the scale of the network in the human brain. The brain is a very challenging modelling target. You can think of it as 1% of the human brain, but I sometimes prefer to think of it as ten whole mouse brains. The network we’re using is quite simplified, as there’s a lot about brain connectivity that’s still not known, so there’s a lot of guesswork in building any such model.

**Justin: You’ve said in the past that accelerating our understanding of brain function would represent a major scientific breakthrough. Can you expand a little bit more on that thought?**

**Steve:** It is clear to anybody who uses a computer that they are incredibly fast and capable at the set of things that they are good at, but they really struggle with things that we humans find simple. Very young babies learn to recognise their mother, whereas programming a computer to recognise an individual human face is possible but extremely hard. My view is that if we understood more about how humans learn to recognise faces and solve similar problems then we’d be much better placed to build computers that could do this easily.

**Justin: Where do you see AI processing going in the next five to ten years?**

**Steve:** The big issue with AI is understanding what intelligence is in the first place. I think one of the reasons why we have found true AI so difficult to reproduce in machines is that we’ve not quite worked out how natural intelligence works, hence my interest in going back to look at the brain as the substrate from which human intelligence emerges. If we can understand that better then we might be able to reproduce it more faithfully in our computing machines.

**Justin: What about the ethics of AI?**

**Steve:** Ultimately AI will lead to ethical issues. Clearly if machines become sentient then the issue as to whether you can or can’t switch them off becomes an ethical consideration. I think we are a very long way from that at the moment, so that isn’t foremost among the ethical issues we have to consider. I think there are much more pragmatic engineering issues, for example to do with driverless cars. If a driverless car is involved in a crash, whose fault is it – who is responsible? Whether the crash turns out to be the result of a software bug, or of the human interfering with the car, there’s a whole set of issues that will have to be thought through there, and they come a long time before the issue of the machine itself having any kind of rights.

**Justin: What do you think are the biggest challenges the IT industry faces?**

**Steve:** I think high on the list is the issue of cybersecurity. We are seeing increasing numbers of attacks on IT systems, and it’s a very technical problem to work out how to build defences that don’t compromise the performance of the systems too much. So as consumers we install antivirus software on our PCs, but sometimes the antivirus software makes the PC almost useless. So there’s always a compromise in security. Most of us live in houses where the front door will succumb to a few decent kicks, but the bank chooses something more substantial for its vault. Security has to be proportionate to the risk. But I think security is going to loom increasingly large in the IT industry.

**Justin: What do you see as the most exciting emerging technologies at the moment?**

**Steve:** The most exciting technologies around the corner I think are the cognitive systems: machines becoming less passive, so they don’t just sit waiting for human inputs but actually respond to the environment, interact with it, engage with it, and that requires some degree of understanding. I don’t want understanding to be interpreted in too anthropomorphic a way – their understanding may be quite prosaic, it might be at the level of an insect. But an insect has an adequate understanding of its environment for its purposes. That’s how I would expect to see computers developing increasingly in the future.

**Justin: What do you think the IT industry as a whole should be doing to improve its image?**

**Steve:** I think the image of the industry is particularly important in the way it comes over in schools and in the choices that pupils make about their future careers. We certainly had a problem recently with the kind of exposure to IT that’s happened in a lot of schools being de-motivating; it has discouraged pupils from computing. I think the changes that are needed to remedy that are now in place and it will take a little while for them to filter through, but of course BCS has played a very active role in seeing those changes through, so hopefully computing will have a better image where it matters most, which is in schools.

**Justin: Why do you think that we aren’t seeing so many women going into IT?**

**Steve:** If I knew why women did not find IT so attractive, then I’d do something about it. It’s a major problem that for some reason culturally we think IT and computers are a male preserve and of course if we talk numbers then they are predominately male. It’s a problem that we’ve been worrying about all the time I’ve been in the university and many things have been tried and nothing has really made much difference, so it concerns me hugely but I don’t know what to do about it. I don’t think there is any shortage of female role models, there are plenty of very high-powered women in the computing business. I really don’t understand why the subject is not attractive to girls at school, which is where the problem starts. I welcome any suggestions as to what we can do to remedy this.

**Justin: Talking of role models, did you have any of your own?**

**Steve:** My role models were probably not in computing, as I said I came through the mathematics and aerodynamics route at university and was really drawn into computing by what I saw as the new wave of computing based on the microprocessor, which in the late 1970s was a very new approach to building machines. So who do I hold up as a role model? Well, one of the lecturers at the university was John Conway who was always a very inspiring mathematician and it was great fun to listen to his lectures.

**Justin: Looking back at your career so far is there anything you would have done differently if you had your time again?**

**Steve:** I don’t think so, there are no decisions in my career path that I particularly regret and I think the advice I give to people is roughly the advice I follow myself, which is to make decisions that keep the maximum number of doors open. So look for opportunities, but when there’s nothing obvious staring you in the face then think about what subject creates the most possibilities in the area you’re interested in. Maximise the number of doors.

*The full interview between Justin and Steve was originally published in ITNOW, and may also be viewed on YouTube as a two part recording. Watch Part One and Part Two online.*

*Featured image credit: Mother board by Magnascan. CC0 Public Domain via Pixabay.*

The post Addressing anxiety in the teaching room: techniques to enhance mathematics and statistics education appeared first on OUPblog.

Mathematics and statistics anxiety is one of the major challenges involved in communicating mathematics and statistics to non-specialists. Students enrolled on degree programmes in several areas other than mathematics or statistics are required to study mandatory courses in mathematics and statistics as core elements of their degree programmes. Academics, educators, and researchers presented papers on how they have addressed this anxiety: using history, enhancing students’ self-belief, providing individual support, demonstrating the relevance of the subjects to students’ degree work, and making the learning process enjoyable.

The general consensus from the session was that:

- Students with low confidence experience high levels of mathematics anxiety, which has an adverse impact on their academic performance;
- University students are far from resilient to experiencing mathematics anxiety;
- It is most common in non-specialist university students;
- It is not always related to students’ academic abilities but their prior learning experience of the subjects, self-efficacy and self-beliefs;
- The increasing diversity of the university student population – a result of the high proportion of international students, widening participation, and access to higher education – adds new dimensions to this challenge;
- This range of cultural, socio-economic and academic backgrounds of students manifests itself through diverse expectations and individual learning requirements that need to be carefully considered.

Delegates agreed that if educators involved in designing and delivering mathematics and statistics courses for non-specialist university students are aware of the implications of this diversity in student backgrounds, they should be able to appreciate the indispensable role of using a variety of teaching and learning approaches.

My personal view is that thinking like social scientists would make higher education practitioners more empathetic towards students. I think making course delivery student-focused as well as student-led would encourage students to share responsibility for their education. Focusing on connecting with students, and being perceptive as well as receptive to students’ feedback and willing to revise teaching delivery, can enhance the learning climate in teaching rooms. This would promote student interaction and encourage active learning.

Undergraduates can face several issues during their transition to university education, such as key gaps in their mathematical skills despite having A-level Mathematics or equivalent. Effective practices shared included a blended learning project using online formative assessment followed by feedback, and encouraging students to work within their Zone of Proximal Development (Vygotsky, 1978). Delegates were informed about two innovative Mathematics Support Centres (MSCs) that facilitate distance learning. MSCs have become important features of universities in the UK as well as overseas.

Educators shared their projects on scenario based training of statistics support teachers, instruction methods developed by mathematics teachers and using census data as well as other publicly available large data sets to support statistics literacy. Social media was explored as a tool to facilitate deep learning, enhance student engagement in science as well as engineering and improve students’ learning experience. There were presentations on the effective use of a virtual learning environment, audio feedback, and an online collaboration model to encourage students’ participation.

Delegates seemed to find online formative assessment practices worth incorporating into their teaching. The innovative Mathematics Support Centres (MSCs) that facilitate distance learning sounded appealing to several others. Delegates who had not experimented with Facebook were convinced after a paper presentation that it is an area worth exploring to enhance student engagement.

I have used Facebook for promoting scholarly dialogue and collaborative research, as well as enhancing student engagement with statistics and operational research methods, since 2012. My rationale is to address mathematics and statistics anxiety by connecting with students, which can be done without intruding into their personal territory, i.e. without becoming their Facebook friends. I would argue that Facebook is an excellent online system which academics can use for posting topics for discussion, promoting interaction, addressing students’ queries, uploading course material, and monitoring students’ progress. These study groups are easy to set up and promote inclusive education. It is a platform students are used to and view extremely positively.

Barriers to learning such as neurodiversity were also explored, focusing on learning difficulties faced by visually-impaired and hearing-impaired learners. Other areas covered included language difficulties as a barrier to reading mathematics, dyslexia and dyscalculia, and teachers’ negative bias against students from certain backgrounds. Gender imbalance was also discussed as a significant barrier, with the general consensus being that more women should be encouraged, as well as supported, to pursue careers in mathematics.

In light of the existing literature and research relating to the difficulties blind learners face, it was agreed that this is an area that calls for further research to make mathematics more accessible to the blind. It was proposed that research on combining lexical rules, speech prosody and non-speech sounds would be desirable. Furthermore, providing tools for carrying out mathematical analysis may improve the situation for blind learners.

A critique on the fallacy of assuming a homogeneous student body and homogeneous teaching in a ‘what works’ approach introduced an interesting point of controversy in the midst of excitement and optimism about a range of initiatives. These exchanges of information on research, initiatives and projects should promote multi-disciplinary research collaboration in mathematics and statistics education.

The conference might impact research in a variety of themes related to statistics and mathematics education, including mathematics anxiety, statistics anxiety, and inclusive practice.

*Featured image credit: calculator mathematics maths finance by Unsplash. Public domain via Pixabay.*

The post Is an engineering mind-set linked to violent terrorism? appeared first on OUPblog.

The process by which young people are radicalised is very complex and poorly understood. As Scott Atran has said, the “first step to combating Isis is to understand it. We have yet to do so … What inspires the most uncompromisingly lethal actors in the world today is not so much the Qur’an or religious teachings. It’s a thrilling cause that promises glory and esteem … Youth needs values and dreams.”

Rose notes that engineering, medicine, and other technical subjects are regarded as superior education in many MENA countries. These subjects may attract people with mind-sets that like simple solutions, with little ambiguity, nuance, or debate. Rose calls this an ‘engineering mind-set’. He says that in these courses there is a tendency to concentrate on rote learning and exam-passing with little or no questioning. Those mind-sets may then be reinforced by the way students are taught. Rose emphasises that young people need to be taught how to think, to immunise their minds against ideologies that seek to teach them what to think. In other words, they need to be encouraged to think critically, as in the social sciences.

There are two main points I want to highlight here. First, as Rose states, the sparse data that we have indicates that there are a disproportionate number of STEM students and graduates recruited into Jihadist terrorism. That needs to be explained.

At least part, but only part, of the complex answer may rest on the second point. How do Rose and others characterise an engineering mind-set, and how does it relate to the way engineers actually think? For sure, the way Rose describes it needs unpacking. Rose quotes Diego Gambetta in 2007 (who in turn quotes the work of Seymour Lipset and Earl Raab, who wrote on right-wing and Islamic extremism in 1971). He says this mind-set has three components: (a) ‘monism’ – the idea that there exists one best solution to all problems; (b) ‘simplism’ – the idea that if only people were rational, problems would have simple remedies, with single causes and no ambiguity; (c) ‘preservatism’ – an underlying craving for a lost order of privileges and authority, as a backlash against deprivation in a period of sharp social change – in jihadist ideology, the theme of returning to the order of the prophet’s early community.

Rose’s characterisation, like many attempts to capture something complex and protean, contains some truth – but it is far from adequate.

A good start at an analysis would be the report by the Royal Academy of Engineering, ‘Thinking like an engineer’. One could easily counter Rose’s three components, and be nearer the mark, by using the trio pluralism, complexity, and sustainability. I believe that we should perhaps look for an explanation by examining the huge gap that has existed (and still exists – though reduced) between engineering science and practice. For example, theoretical engineering mechanics rests on the certainty of deterministic physics from Newton to Einstein, with its consequent time-invariant dynamics. Determinism means that all events have sufficient causes – literally, that the past decides the future. Einstein is reputed to have said “time is an illusion.” No practitioner takes these interpretations of certainty seriously, but she uses them as models to make decisions, because they are the best we have and they work. But there is one big and important proviso – they work in a context that must be understood. The Nobel Prize winner Ilya Prigogine has shown that evolutionary thermodynamics rests on complex processes far from equilibrium. Contrary to the theories of dynamics, the laws of thermodynamics show that time is an arrow going in only one direction.

Quantum physics has blown away all pretence of certainty. Practitioners intuitively know that their theories are human constructs – imperfect models built to give us meaning and guide our ways of behaving. They use them to help make safe and functional decisions, to provide systems of artefacts that are fit for purpose as set out in a specification. They use them, but they know there are risks. There is no certainty in engineering practice (whatever some seem to believe) – witness the few tragic engineering failures, such as Chernobyl. Risk and uncertainty are managed by safe, dependable practice – always testing, always checking, always taking a professional duty of care. Were it not for the creativity, dedication, and ingenuity of engineers, such disasters would be more frequent. Just think of the amazing complexity of building and maintaining the International Space Station – truly inspirational, and built by engineers. So yes, there is a paradox. Deterministic theory points to single solutions, to black-and-white answers – but in practice we use it only as a model, a human construct that has enabled us to achieve some incredible things.

Engineers use a plurality of methodologies and solutions. Anyone who has designed and made anything knows that there are multiple solutions. Any attempt at optimising a solution will work only in a context, and may be dangerously vulnerable outside that context. Even a simple hinged pendulum behaves in a complex, chaotic way: bifurcations in its trajectory make its actual performance very sensitive to initial conditions. All successful practitioners know that people make decisions for a variety of reasons – some rational, some not so rational. Only in theory does simplism apply; the financial crash of 2008 put paid to simplism in economic theory. Lastly, yes, engineers do like order. Like life itself, they create negentropy. They impose order on nature, but do it to improve the human condition. The modern challenge is to do it more sustainably, and to create resilience in the face of climate change.
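The pendulum’s sensitivity to initial conditions is easy to demonstrate numerically. As a minimal sketch (my own illustration, not from the original post), the following Python script integrates two double (‘hinged’) pendulums whose starting angles differ by a billionth of a radian; within seconds of simulated time their trajectories bear no resemblance to each other:

```python
import math

# Illustrative sketch: a double ("hinged") pendulum integrated with
# classical Runge-Kutta (RK4), showing that two trajectories starting
# almost identically diverge rapidly -- the sensitivity to initial
# conditions described in the text. Parameters are arbitrary choices.

G = 9.81          # gravitational acceleration (m/s^2)
L1 = L2 = 1.0     # arm lengths (m)
M1 = M2 = 1.0     # bob masses (kg)

def derivs(state):
    """Equations of motion; state = (theta1, omega1, theta2, omega2)."""
    t1, w1, t2, w2 = state
    d = t1 - t2
    den = 2 * M1 + M2 - M2 * math.cos(2 * d)
    a1 = (-G * (2 * M1 + M2) * math.sin(t1)
          - M2 * G * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * M2 * (w2 ** 2 * L2 + w1 ** 2 * L1 * math.cos(d))
          ) / (L1 * den)
    a2 = (2 * math.sin(d)
          * (w1 ** 2 * L1 * (M1 + M2) + G * (M1 + M2) * math.cos(t1)
             + w2 ** 2 * L2 * M2 * math.cos(d))
          ) / (L2 * den)
    return (w1, a1, w2, a2)

def rk4_step(state, dt):
    """Advance the state by one time step using fourth-order Runge-Kutta."""
    def shifted(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = derivs(state)
    k2 = derivs(shifted(state, k1, dt / 2))
    k3 = derivs(shifted(state, k2, dt / 2))
    k4 = derivs(shifted(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + e)
                 for s, a, b, c, e in zip(state, k1, k2, k3, k4))

def separation(a, b):
    """Euclidean distance between two states in phase space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def divergence(perturbation=1e-9, steps=2000, dt=0.01):
    """Run two nearly identical pendulums; return (initial, final) separation."""
    s1 = (math.pi / 2, 0.0, math.pi / 2, 0.0)
    s2 = (math.pi / 2 + perturbation, 0.0, math.pi / 2, 0.0)
    initial = separation(s1, s2)
    for _ in range(steps):
        s1 = rk4_step(s1, dt)
        s2 = rk4_step(s2, dt)
    return initial, separation(s1, s2)

if __name__ == "__main__":
    start, end = divergence()
    print(f"separation grew from {start:.1e} to {end:.1e}")
```

Run it longer, or with a larger perturbation, and the separation saturates at the scale of the system itself – which is precisely why no amount of measurement precision rescues long-range prediction of a chaotic system.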

These, then, are the ideas to which engineering educators work. Most engineering courses include design projects. Students learn to understand a need, turn it into a specification, and deliver a reality. To do so they must think creatively, consider the needs of multiple stakeholders, and think critically, exercising judgement to determine the criteria by which choices are made. Many undergraduate engineering courses (though admittedly not all) now include ethics. In practice, the products of engineers’ work are continually tested by use. If engineers didn’t think creatively and critically about such use, they would soon be out of a job.

In summary, to characterise the engineering mind-set as one that assumes problems have single solutions, devoid of ambiguity and uncertainty, is derogatory and disparaging of our ingenious engineers. It is quite wrong to suggest that an engineering education does not teach students how to think, though of course engineering educators are constantly striving to do better. Any claim that an engineering mind-set is linked to violent terrorism needs to be examined with great care.

*Featured image credit: Engineering, by wolter_tom. Public domain via Pixabay.*

The post Is an engineering mind-set linked to violent terrorism? appeared first on OUPblog.
