The post Robot rats are the future of recycling appeared first on OUPblog.

Imagine that, twenty years from now, we have got our production of plastic rubbish mostly under control. I saw Prof Mark Miodownik of University College London give a great talk on how he and his colleagues are working to make this happen for plastics. This would involve limiting our use of everyday plastics to those that can easily be recycled, using enzymes to break them down into the hydrocarbons we need to make more stuff, and building much clearer and more efficient recycling streams. To be really effective, this would have to happen not just for plastics but for everything: paper, organic matter, electronics. Though only a tiny fraction of the waste we produce is currently dealt with constructively, change is already happening: there are industrial machines designed to separate construction waste for re-use, disassemble iPhones, and sort materials in recycling plants.

Just stopping making the situation worse isn’t enough, though. I think what haunts many of us is the idea that we have done too much damage to the planet for it to ever recover. So, after we stop (quite literally) trashing the planet, how do we deal with all those hundreds of thousands of acres of landfill across the world? Or, for that matter, the Texas-sized garbage patch floating in the ocean, made infamous by Sir David Attenborough two years ago?

To me, the obvious answer is autonomous robots. Not some magic-bullet-solves-the-whole-problem race of giant super-intelligent humanoid robots, but a swarm of mostly-small robots with dexterity and a specific expertise, but relatively limited native intelligence otherwise. I imagine hundreds of robot rats (probably looking more like cockroaches actually), scurrying up landfill with little sensors and dextrous arms, identifying particular types of material, and delivering the sorted rubbish to larger carrier robots that will take the sorted materials back to the recycling plant when they are full. Some robots will be bigger and stronger to pick up and lug old appliances back to base. Some will have specialist sensors to identify tricky materials. And all will have some ability to communicate with each other, even if only in the most rudimentary way.

This heterogeneity – the idea that there will be a diverse community of machines with different skills and of different generations – is one of several things that the makers of *WALL-E* get very right. Another is the idea that robots don’t need to squander one of our precious resources: power. Apart from the obvious idea that small machines can use solar power to keep going, there are other possibilities too. Refuelling stations (you’d want these to be able to move to where they’re most efficiently used) could also be powered by digesting some of the biological matter in the landfill. Two decades ago, researchers at the University of the West of England developed a system that could eat and get energy from slugs: surely a vegan digester must be possible by now. If that can’t work, then we could try to develop some kind of miniature version of the Klemetsrud waste-to-energy incinerator in Norway, which now includes carbon capture.

To make sure the electricity burden is minimized in the first place, we will want to use the most efficient hardware possible. For more than 30 years, we’ve been developing models of how to do vision, hearing, and other intelligent tasks using silicon circuits that work more like our brains do. This field, known as neuromorphic engineering, not only has the advantage of speed, but – when designed well – neuromorphic circuits can operate with several orders of magnitude less power than their conventional digital counterparts.

Of course, a vision like this has many challenges: not least ensuring that robot rats aren’t hurting biological wildlife (perhaps including *real* rats). In an effort like this we will undoubtedly encounter unexpected problems and unintended consequences. But scientists, engineers, and philosophers from across the world have been considering these issues for decades now and, apart from anything else, we are all now exquisitely aware that the biggest danger to human life is other humans.

Why don’t we have swarms of robots in our landfill yet, when most of the technological problems seem to be solvable? Perhaps it’s because we, as citizens, have never made it a priority. But it’s clear robotic technology can develop pretty fast when given the right resources. In this case, the research is funded by the military. Of course, they’re looking to replace pack animals, security guards, and people who have to decommission bombs, not sort garbage. The engineers work according to our priorities, so if we want to see progress, we need to put our money where rubbish is.

On the plus side, the kind of smaller, less-intelligent, less safety-critical robots that we need to sort trash should be massively cheaper than the machines that we need to accompany troops. And maybe a war on rubbish is one that we could eventually win.

Featured images from Unsplash and Pixabay.


The post Why academics announce plans for research that might never happen appeared first on OUPblog.

Additionally, in our future work, we will extend our model to incorporate more realistic physical effects . . . We will expand the detection procedure . . . We will integrate our detection procedure . . . We will validate the performance of our proposed detector with real data.

There are several possible explanations for this practice. One is that it’s a continuation of a discussion between author and colleagues—participants in a society meeting, conference, or research network. Another is that it’s a response to a reviewer’s critique of the submitted paper. To secure its publication, the author promises to remedy its defects in future work. Yet another is that it’s simply promotional—an advertisement of the scope of the research, its importance to the field.

To be clear, none of this refers to suggestions in an article that are meant to benefit research by the larger community. For example, an author might suggest new lines of investigation or potential confounds in experimental practice, here in **a study** of mental health problems: “Depression, too, can be heterogeneous . . . and future work would be wise to consider such variation.”

In contrast, an author’s announcement of his or her intentions can present readers with a dilemma. Should they restrict their own research to avoid duplication or, more drastically, turn to a different area of study? As Vernon Booth noted in his prize-winning booklet *Writing a scientific paper*, published in 1971, “An author who writes that an idea will be investigated may be warning you off [his or her] territory.”

Suppose, though, you’re committed to the territory. Should you accelerate your efforts to avoid being overtaken? Or continue at the same pace and assume that duplication is unlikely? Or should you exercise patience and wait for the follow-up publication to see what emerges? If you do wait, where should you look and for how long? Or should you simply ignore the announcement? After all, the author may never publish again. He or she may have promised too readily and not thought through what was entailed, or, having done so, lost interest, or for other reasons disengaged from the field, or from research more generally. And of course it’s possible that the announcement was meant only for effect, an empty version of the warning described by Booth. Parallels can be found in the software industry’s promotion of what became known as vapourware—advertised products that didn’t exist and were never likely to.

Curiously, the act of rounding off a paper with an announcement has a remarkably long history. Here’s **an example** from about 350 years ago: “And the next time, we hope to be more exact, especially in weighing the Emittent Animal before and after the Operation.”

And **another** from over 150 years ago: “We hope shortly to be able to prepare some pure cobalt and nickel by depositing galvanoplastically those metals in the form of foil from solutions of their pure salts.”

And **another** from about 50 years ago: “Quantitative results regarding these phenomena have been obtained and will be published in due course.”

In fact, this last example, which a colleague helpfully identified, is from one of my own publications, a note published during my thesis studies.

Nevertheless, it’s difficult to justify the inclusion of such material in papers now. In his booklet, subsequently revised and **republished**, Booth said that promises should be offered sparingly. The advice was probably too gentle. Whether they’re about future publications or research, it’s best to avoid statements of intent altogether. They can prove a challenge for the reader and, just occasionally, for the author too.

*Featured image credit: “White Printer Paper Lot” by Pixabay. Public domain via Pexels.*


The post Standing in Galileo’s shadow: Why Thomas Harriot should take his place in the scientific hall of fame appeared first on OUPblog.

It was a time when many still believed in magic, and most of the methods of science and mathematics that we take for granted today had not yet been developed. For instance, calculus had not yet been discovered, and there was no formal understanding of infinite series. But Harriot was one of the best algebraists of the early seventeenth century, and he managed to make a number of pioneering mathematical discoveries—including some that anticipated calculus and limit theory. These discoveries enabled him not only to make advances in pure mathematics but also to apply mathematics to various practical and theoretical problems.

There are many reasons that Harriot never got around to publishing his scientific work, not least the dramatic times in which he lived. He moved in the most glamorous of Elizabethan circles—the charismatic Sir Walter Ralegh was his patron; but stars often fall, and when Ralegh fell from grace, Harriot’s fortunes suffered, too.

But at the beginning of their association, Harriot and Ralegh’s lives were full of excitement and hope. After graduating from Oxford, Harriot’s first job was that of live-in navigational advisor during Ralegh’s preparations for the first English colony in America.

He began his new employment in 1583, when England was a tiny player in maritime commerce and geopolitics. If Ralegh’s planned American trading colony were to materialize, his mariners would have to sail safely across uncharted oceans with the sky as their only guide, and Harriot provided them with the best training then available. Which was just as well for Harriot, too, because in 1585 he sailed to Roanoke Island with the First Colonists.

He saw the Roanoke people as friendly and ingenious, and found their way of life appealing in many ways. Tragically, few of his fellow travellers were so open-minded. When food supplies eventually ran low, and illnesses unwittingly brought from England struck down many of the local people, tensions between the visitors and their hosts reached breaking point. The First Colonists abandoned their experiment and returned to England—having killed the chief who had earlier invited them to share his land.

The disaster at Roanoke seems to have weighed heavily on Harriot. Although he continued as navigational advisor to Ralegh’s follow-up ventures, he never sailed to America again.

Instead, he began to turn towards more detached scientific pursuits. Initially, these included original mathematical research with navigational applications, but later, he explored science and mathematics simply for their own sake. He explored almost every area of the mathematics and physics of his day, and he made a number of breakthroughs that today we link with others’ names.

He found the law of falling motion independently of his contemporary Galileo, and used a telescope to map the moon and the motion of sunspots, again independently of Galileo. He discovered the law of refraction before Snell, produced a fully symbolic algebra and fledgling analytic geometry before René Descartes, found the secret of colour and the nature of the rainbow before Isaac Newton, and the rules of binary arithmetic before Gottfried Leibniz—to name just a few of his many achievements.

If only he had published all this work! Unfortunately for early modern science, he had little interest in fame. But the vagaries of a life spent working for controversial patrons didn’t help. First Ralegh, and then Harriot’s second patron, the earl of Northumberland, ended up imprisoned in the Tower of London on false charges of treason. Harriot himself came under suspicion. For instance, his renown as an astronomer led to the false accusation that he had cast a horoscope to aid the Gunpowder Plotters.

Yet despite all the ups and downs his curiosity remained undimmed. Four centuries on, he deserves to be celebrated for the decades he devoted to scientific discovery. Although this means he should take his rightful place in the scientific hall of fame, it is refreshing, at this time when we are awash with celebrities, that Harriot did much of his work just for the love of it.

*Featured image credit: Night sky in Hatchers Pass, Alaska. Photo by McKayla Crump. CC0 via **Unsplash**.*


The post Can plants help us avoid a climate catastrophe? appeared first on OUPblog.

What’s more, the concentration of atmospheric carbon dioxide has risen so rapidly over the past few decades that Earth’s temperature has yet to fully adjust to the new warmer climate it dictates. This means that even if we could magically stop our carbon dioxide emissions from fossil fuels overnight, we have already committed Earth to a transition to a warmer climate. Global temperatures have risen by more than 1°C since the 1970s. How much more warming are we likely to experience? Another 0.5°C, 1.5°C, 2.5°C or worse? Scientists are working urgently to better constrain this number. Meanwhile, over 190 nations worldwide signed up to the 2015 Paris Agreement with the goal of limiting warming to less than 2°C and ideally less than 1.5°C. Given the current situation, even the more lenient 2°C target now looks wildly optimistic, especially as more than 34 billion tonnes of carbon dioxide are added to the atmosphere with every year that mitigation is delayed.

This is why the UK’s Royal Society, along with the United Nations and the National Academy of Sciences in the United States, acknowledges that a drastic phase-down of our carbon dioxide emissions from burning fossil fuels for energy will be insufficient to avoid catastrophic human-caused climate change. We actually have to start removing carbon dioxide from the atmosphere, safely, affordably, and within the next 20 years.

Enter, the kingdom of plants.

Hundreds of millions of years ago, during the Devonian Period (393-383 million years ago), plants bioengineered a cooler climate as the spread of forests lowered atmospheric carbon dioxide levels. As their root systems evolved to become larger and more complex, trees generated soils and accelerated the breakdown of rocks and minerals into minute grains, forming dissolved bicarbonate in the process. Eventually, this bicarbonate washed into the oceans, where the carbon it carried was stored for hundreds of thousands of years or locked up on the sea floor.

We now think it may be possible to mimic those processes to remove carbon dioxide from the atmosphere. The method would be to dress the soils of agricultural landscapes with crushed, rapidly weathering rocks, such as basalt. This biogeochemical soil improvement could also boost yields by adding plant-essential nutrients, helping reverse soil acidification, and helping restore the degraded agricultural topsoils that provide food security for billions of people. Although there are possible drawbacks and unintended consequences, the approach may be practicable. Humans have put over ten million square kilometres of land to the plough, and application of crushed rock to this farmland could be feasible by exploiting existing infrastructure.

However, at the very best, this approach might remove only about a tenth of our current emissions.

We could also undertake reforestation of forested lands once cleared for agriculture and afforestation of new areas, again mimicking the ancient spread of forests across the continents. Planting millions of trees could help by storing carbon dioxide in forest biomass and soils. Undertaken across a sufficiently large area of the globe, these actions might sequester another few billion tonnes of carbon dioxide.

Even these sorts of radical measures will not represent a sufficient climate restoration plan, however. A wider portfolio of carbon removal techniques will be required to scrub sufficient amounts of carbon from the atmosphere each year. But the technologies need multibillion dollar investment to move them from the lab to pilot schemes and then to determine which can scale massively. At the same time, we will need to fundamentally transform our global energy systems to halt carbon emissions.

As Erik Solheim, until recently the executive director of the United Nations Environment Programme, has remarked, “if we invest in the right technologies, ensuring that the private sector is involved, we can still meet the promise we made to our children to protect their future. But we have to get on the case now.”

Right now, carbon dioxide removal looks like a prohibitively expensive option for helping slow the pace of climate change. Taking action places an enormous burden on young people and future generations. But taking no action asks them to face dire consequences including intensifying droughts, heat-waves, storms, ice-sheet melt and sea-level rise flooding coastal regions. This is the intergenerational injustice of our time.

Our current crisis is urgent and unfolding at a time when global food demand will need to more than double before the end of the century. Can we sustainably feed a crowded planet, preserve the wonderful diversity of life on Earth, and stabilize the climate? These are the daunting challenges facing humanity. Faced with the collective moral failure of world leaders to act, it is hardly surprising that young people worldwide are bravely striking for action on climate change supported by thousands of scientists. At stake is nothing less than the future of humanity.

*Feature Image credit: “Green and white leaf plant” by Jackie DiLorenzo. Public Domain via Unsplash.*


The post The ethics of the climate emergency appeared first on OUPblog.

Recently, tens of thousands of school students stayed away from classrooms to demonstrate for action on climate change. They are recognising that there is a climate emergency, and that governments and corporations need to take emergency action.

Last October, an Intergovernmental Panel on Climate Change report explained why average temperature increases must be restricted to 1.5 degrees, one of the agreed goals of the **Paris agreement of 2015**.

Limiting average increases to 2 degrees, they explain, will be nowhere near enough to prevent the flooding of low-lying islands and coastal cities, and the loss of almost all coral reefs. Disappointingly, however, the national commitments made at Paris would spell a catastrophic increase of around 3 degrees. Governments need to ratchet up these commitments at the coming review conferences, as a matter of urgency.

Antonio Guterres, the UN Secretary-General, is soon to hold the first of these review conferences. The UK government, which is hoping to host this conference, needs to commit now to more drastic cuts to set an example to the rest of the world.

Ethicists debate the grounds for taking such emergencies seriously. By now it is widely agreed that the people of the present matter, however distant they are, and wherever they live. But many of them are losing their livelihoods because of climate change, and they are usually people who have contributed hardly at all to it. And that is hardly fair.

Most people also agree that coming generations matter, and should be taken into account. Some suggest that the more distant in time future people are, the less they matter. Yet suffering in fifty or a hundred years is likely to be just as bad as suffering now. Many people already recognise this. As **Hilary Graham** and her fellow-researchers have shown, if you ask about future interests in an impersonal manner, you get answers that downplay these interests, but if you ask about what we should do to make life bearable for our grandchildren, you get much more affirmative answers, expressing deep concern about their well-being.

But this too means that we need to take action in the present to prevent rising sea-levels and freak weather events of greater intensity and frequency than the world has yet known, both now and later in this century.

There is also a debate about whether other species matter. Everyone agrees that we need the ecosystems on which human beings depend to remain intact, and most hold that we need to preserve these and other ecosystems for the sake of their natural beauty. Many go on to hold that the needs of nonhuman species count ethically alongside our own, whether or not they count as much as our own.

When we get concerned about the bleaching of coral reefs and the disappearance of their polychrome communities, our concern expresses a blend of reasons of these kinds. But increasingly people (particularly young people) are worried about the wellbeing of animals and their habitats.

Just at the same time, there are alarming losses to populations of many wild species, and to biodiversity. All governments need to make special efforts to preserve wild species, and the governments of developed countries should subsidise poorer countries (which are often the homes of biodiversity hot-spots) to enable them to do this.

But far from the biodiversity emergency being in competition for our attention with the climate crisis, they should be seen as a single emergency. This is because one of the main causes of threats to wildlife is climate change itself.

So we need urgent plans and policies to replace carbon-based energy generation with renewable energy. We need to eat less meat, thus increasing our life-expectancies and reducing emissions of methane. We need to replace diesel and petrol vehicles with electric cars, lorries and (if possible) ships. And we need to cut down on our airline travel. Individuals, companies and governments all have a part to play.

While sunny days in February are welcome, an overheated, tempestuous and increasingly flooded future world is not. We need to support Antonio Guterres’ worldwide campaign to prevent it.

*Featured image credit: Climate Change by TheDigitalArtist. Pixabay License via Pixabay.*


The post 150 Years of the Periodic Table appeared first on OUPblog.

Here, we take a look at the fascinating history of the people behind the table, starting in 450 BCE and going through the present day, and at the way the table and our understanding of the elements have evolved over time. Even today, many aspects of the periodic table remain unresolved, including a consensus on just how many elements remain undiscovered, leaving much room for discovery and further development in the future.

*Featured image credit: “Beaker glass ware” by uncredited. Public Domain via Pxhere.*


The post Happy sesquicentennial to the periodic table of the elements appeared first on OUPblog.

Well, it turns out that this is not the case. In this blog I will touch on just some of the loose ends in the study of the periodic table. The first has to do with the sheer number of periodic tables that have appeared, either in print or on the Internet, in the 150 years that have elapsed since the Russian chemist Dmitri Mendeleev first published a mature version of the table in 1869. There have been over 1,000 such tables, although some of them are best referred to by the more general term periodic system, since they come in all shapes and sizes other than table forms, with some of them being 3-D representations.

Given that there are so many periodic systems on offer it is natural to ask whether there might be one ultimate periodic system that captures the relationship between the elements most accurately. This relationship that lies at the basis of the periodic system is a rather simple one. First the atoms of all the elements are arranged in a sequence according to how many protons are present in their nuclei. This gives a sequence of 118 different atoms for the 118 different elements, according to the present count. Secondly, one considers the properties of these elements as one moves through the sequence, to reveal a remarkable phenomenon known as chemical periodicity.

It is as though the properties of chemical elements recur periodically every so often, in much the same way that the notes on a keyboard recur periodically after each octave. In the case of musical notes the recurrence can easily be appreciated by most people, but it is quite difficult to explain in what way the notes represent a recurrence. In technical terms moving up an octave on a keyboard, or any other instrument for that matter, represents a doubling in the frequency of the sound.

Octaves in the case of elements, if we can call them that, are not quite like that. There is no single property which shows a doubling each time we encounter a recurrence. Nevertheless there are some intriguing patterns that emerge among elements that are chemically ‘similar’. For example, consider the number of protons in the nuclei of atoms of lithium (3), sodium (11) and potassium (19). An atom of sodium has precisely the average number of protons of the two flanking elements: (3 + 19)/2 = 11. This kind of triad relationship occurs all over the periodic table. In fact the discovery of such triads among groups of three similar elements predates the discovery of the mature periodic table by about 50 years.
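The triad arithmetic is easy to check for yourself. A few lines of Python confirm that the middle element's atomic number is the mean of its neighbours'; the Cl-Br-I and Ca-Sr-Ba triads are added here as further classic examples alongside the Li-Na-K triad from the text:

```python
# Proton-count (atomic number) triads: the middle element sits at the
# arithmetic mean of its two neighbours within a chemically similar group.
triads = {
    "Li-Na-K": (3, 11, 19),
    "Cl-Br-I": (17, 35, 53),
    "Ca-Sr-Ba": (20, 38, 56),
}

for name, (first, middle, last) in triads.items():
    mean = (first + last) / 2
    print(f"{name}: mean of outer pair = {mean}, middle element = {middle}")
```

For atomic numbers the relationship is exact in these cases; for the atomic *weights* that the original nineteenth-century triads were based on, it holds only approximately.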

Now most of the periodic tables that have been proposed display such triad relationships and so we must look elsewhere in order to find an optimal table, assuming that such an object actually exists. One possible course of action might be to consult the official international governing body of chemistry or IUPAC (International Union of Pure & Applied Chemistry) to see what their recommendation might be. The IUPAC organization has a rather odd policy when it comes to the periodic table of the elements. The official position is that they do not support any particular form of the periodic table. Nevertheless in the IUPAC literature one can find many instances of a version of the periodic table that is sometimes even labeled as “IUPAC periodic table”.

And if that’s not bad enough, the version that IUPAC frequently publishes, as shown in figure 1, is rather unsatisfactory for reasons that I will now explain.

Consider the third column from the left, or, as it is aptly known, group 3 of the periodic table. Unlike all other columns of the table, this group appears to contain just two elements, scandium and yttrium, shown by their symbols Sc and Y. If you look closely at the numbers in the two shaded spaces below these two elements you will see a range of values, such as 57-71 in the first case. This occurs because the elements numbered from 57 to 71 inclusive are assumed to fit between elements 56 and 72, naturally enough. The reason why the two sequences of shaded elements are shown below the main body of the table in a mysteriously detached manner is purely pragmatic.

It’s because doing otherwise would produce a table that is perhaps too wide, as shown in figure 2. But in a sense the far-too-wide table is more correct, since it avoids any breaks in the sequence of elements and avoids the impression that the shaded elements somehow have a different status from all others, or that they represent something of an afterthought. But switching to such a wide table would not solve the problem even if IUPAC were to endorse doing so. This is because the table in figure 2 still shows only two elements in group 3 and because it implies that there are 15 f-block elements in each of the two long rows, whereas quantum mechanics, which provides the underlying explanation for the periodic table, allows for only 14 (two electrons in each of the seven f-orbitals).

OK, you might say, we can easily fix the problem by tweaking the periodic table slightly to produce figure 3. As far as I can see, from a lifetime of studying and writing about the periodic table, figure 3 is precisely the optimal periodic table that IUPAC should be publishing and even endorsing officially. This table restores the notion of 14 f-block elements as well as removing the anomaly whereby group 3 contained only two elements, since it now contains four, including lutetium and lawrencium.

Why will IUPAC not see things quite so simply? That’s a big and complicated question which I can only touch upon here. Like many organizations with rules and regulations, when push comes to shove, decisions are made by committees. As a result, the science takes second place while the various committee members vie with each other and ultimately take votes on what periodic table they should publish. Unfortunately, science is not like elections for presidents or prime ministers, where voting is the appropriate channel for picking a winner. In science there is still something called the truth of the matter, which can be arrived at by weighing up all the evidence. The unfortunate situation is that IUPAC cannot yet be relied upon to inform us of the truth of the matter concerning the periodic table. In this respect there is indeed an analogy with the political realm and whether we can rely on what politicians tell us.

*Featured image credit: Retro style chemical science by Geoffrey Whiteway. Public Domain via Stockvault.*



The post How Trump beat Ada’s big data appeared first on OUPblog.

The coronation went off script. Barack Obama, a black man with an unhelpful name, won the Democratic nomination and, then, the presidential election against Republican John McCain because the Obama campaign had a lot more going for it than Obama’s eloquence and charisma: Big Data.

The Obama campaign put every potential voter into its database, along with hundreds of tidbits of personal information: age, gender, marital status, race, religion, address, occupation, income, car registrations, home value, donation history, magazine subscriptions, leisure activities, Facebook friends, and anything else they could find that seemed relevant.

Layered on top were weekly telephone surveys of thousands of potential voters that attempted to gauge each person’s likelihood of voting—and voting for Obama. These voter likelihoods were correlated statistically with personal characteristics and extrapolated to other potential voters so that the campaign’s computer software could predict how likely each person in its database was to vote and the probability that the vote would be for Obama.
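The two-stage scoring described here (a probability of voting multiplied by a probability of supporting the candidate) can be sketched as a toy logistic model. Every feature name and weight below is invented purely for illustration; the campaign's actual models and variables are not described in this post:

```python
import math

def sigmoid(z):
    """Squash a weighted sum into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def score(voter, weights, bias):
    # Weighted sum of whichever of the voter's recorded traits the model uses.
    z = bias + sum(w * voter.get(name, 0.0) for name, w in weights.items())
    return sigmoid(z)

# Hypothetical coefficients standing in for ones fit to the phone surveys.
turnout_weights = {"voted_last_time": 1.2, "past_donor": 0.9}
support_weights = {"union_member": 0.8, "rural": -0.5}

voter = {"voted_last_time": 1, "past_donor": 1, "union_member": 1, "rural": 0}
p_turnout = score(voter, turnout_weights, bias=-0.5)  # P(casts a ballot)
p_support = score(voter, support_weights, bias=0.0)   # P(ballot is for the candidate)
expected_vote = p_turnout * p_support                 # rank voters for outreach by this
```

Multiplying the two probabilities gives an expected-vote score per person, which is one simple way a campaign database could prioritise whom to contact.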

Bill Clinton is said to have blamed the defeat on the wonks who put all their faith in a computer program and ignored the millions of working-class voters who had either lost their jobs or feared they might lose them. In one phone call with Hillary, Bill reportedly got so angry that he threw his phone out the window of his Arkansas penthouse.

Big Data is not a panacea—particularly when Big Data is hidden inside a computer and humans who know a lot about the real world do not know what the computer is doing with all that data.

Computers can do some things really, really well. We are empowered and enriched by them every single day of our lives. However, Hillary Clinton is not the only one who has been overawed by Big Data, and she will surely not be the last.

*Featured image credit: American flag by DWilliams. CC0 via Pixabay.*

The post How Trump beat Ada’s big data appeared first on OUPblog.

During the roasting process, partially dried coffee beans turn from green to yellow to various shades of brown, depending on the length of the roast. Once the residual moisture content within the bean dries up in the yellowing phase, crucial aromas and flavours are developed. However, the associated chemical reactions that produce these desirable coffee traits are highly complex and not well understood. This is partially because the browning reactions linked to aroma and flavour development, called the *Maillard reactions*, comprise a large network of individual chemical reactions, of which only a preliminary understanding exists.

To tackle the challenges involved with creating mathematical models for the Maillard reactions, along with other chemical reaction groups in a roasting coffee bean, we use the concept of a *Distributed Activation Energy Model* (DAEM), originally developed to describe the pyrolysis of coal. Not dissimilar to the Maillard reactions, the pyrolysis of coal involves large numbers of parallel chemical reactions and, using the DAEM, can be simplified to a single *global* reaction rate that describes the overall process. Crucially, however, the DAEM relies on knowing the distribution of individual chemical reactions beforehand. While the overall distributions associated with the Maillard chemical reactions remain unknown, we can reasonably approximate the reaction kinetics of the majority of the Maillard chemical reaction group.
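To give a flavour of how a DAEM works, here is a minimal isothermal sketch in Python: many parallel first-order reactions whose activation energies follow a Gaussian distribution, averaged into a single overall conversion. All parameter values below are illustrative assumptions, not fitted coffee (or coal) data.

```python
import math

# Minimal isothermal DAEM sketch: many parallel first-order reactions whose
# activation energies E follow a Gaussian distribution f(E). The overall
# conversion is the f(E)-weighted average of the individual conversions.
# All numbers below are illustrative assumptions.

R = 8.314          # gas constant, J/(mol K)
k0 = 1.0e9         # shared pre-exponential factor, 1/s
E_mean = 1.1e5     # mean activation energy, J/mol
E_sd = 8.0e3       # spread of activation energies, J/mol

def daem_conversion(t, T, n=400):
    """Overall conversion at time t (s) and temperature T (K)."""
    total, weight = 0.0, 0.0
    for i in range(n):
        # midpoint quadrature over mean +/- 4 standard deviations
        E = E_mean - 4 * E_sd + 8 * E_sd * (i + 0.5) / n
        f = math.exp(-0.5 * ((E - E_mean) / E_sd) ** 2)  # Gaussian weight
        k = k0 * math.exp(-E / (R * T))                  # Arrhenius rate for this E
        total += f * (1.0 - math.exp(-k * t))            # first-order conversion
        weight += f
    return total / weight

# Conversion after five minutes rises sharply with roasting temperature:
print(daem_conversion(300, 400), daem_conversion(300, 500))
```

The point of the construction is that a whole family of parallel reactions collapses into one global rate, at the cost of having to know (or approximate) the distribution f(E) in advance.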

However, the DAEM approach to chemical reaction groups only works when the reactions happen in parallel with one another. Because of this, we examine a simplified pathway of reactions involving sugars (which are linked to the formation of Maillard products) and separate groups of reactions that follow a progression from reactants to products. Specifically, we examine how sucrose first hydrolyses into reducing sugars, which in turn become either Maillard products or products of caramelisation. Dividing the sugar pathway network in this way allows us not only to fit each reaction subgroup with different parameter values, but also to determine that the hydrolysis of sucrose creates a “bottleneck” in the sugar pathway and prevents Maillard products from forming too early in the roast.
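A toy version of this bottleneck effect can be sketched in a few lines of Python. The rate constants below are made up for illustration and are not the fitted parameters of the actual Sugar Pathway Model; the point is only that a slow first step starves the downstream reactions.

```python
# Sketch of the sequential sugar pathway with a slow hydrolysis step acting
# as a bottleneck. Rate constants are invented illustrative values.

k1 = 0.02   # sucrose -> reducing sugars (slow hydrolysis: the bottleneck)
k2 = 0.5    # reducing sugars -> Maillard products
k3 = 0.3    # reducing sugars -> caramelisation products

def integrate(t_end, dt=0.01):
    """Forward-Euler integration of the pathway, starting from pure sucrose."""
    sucrose, reducing, maillard, caramel = 1.0, 0.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        hydrolysis = k1 * sucrose * dt
        outflow = (k2 + k3) * reducing * dt
        sucrose -= hydrolysis
        reducing += hydrolysis - outflow
        maillard += k2 / (k2 + k3) * outflow
        caramel += k3 / (k2 + k3) * outflow
        t += dt
    return sucrose, reducing, maillard, caramel

early = integrate(10)
late = integrate(200)
# Early in the "roast" almost no Maillard products have formed, because the
# slow hydrolysis step limits the supply of reducing sugars.
print(early[2], late[2])
```

Because every unit of sucrose consumed reappears as reducing sugars or products, total mass is conserved, which is a useful sanity check on any reaction-network integration like this one.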

Even if you’re not an entrepreneur looking for the next big coffee venture, you’ll probably still care about how to make the 2.25 billion cups of coffee globally consumed every day as delicious as possible.

To model the local moisture content and temperature of the bean, two variables that crucially change which chemical reactions can occur during the roast, we use multiphase physics to describe how the solid, liquid, and gas components within the coffee bean evolve. This is a crucial difference from what has previously been done to model coffee roasting, as existing models often treat the coffee bean as a single “bulk” material. Additionally, unlike in previous multiphase models for roasting coffee beans, we allow the porosity of the bean to change according to the consumption of products in the sugar pathway chemical reaction groups. We also incorporate a *sorption isotherm*, an equilibrium vapour pressure specific to the evaporation mechanisms present in coffee bean roasting, in our model. Finally, to reduce the system variables to functions of a single spatial variable and time, we model a whole coffee bean as a spherical “shell”, while modelling a chunk of a coffee bean as a solid sphere. This is another improvement over previous multiphase models, which disagreed with recent experimental data describing the moisture content in both roasting coffee chunks and roasting whole beans.

Numerical simulations of this improved multiphase model (referred to as the *Sugar Pathway Model*) provide several key conclusions. Firstly, the use of spherical shells and solid spheres to describe whole and broken coffee beans, respectively, allows for good agreement with experimental data while simplifying the mathematical model’s structure. Secondly, due to the large number of unknowns in the model, the Sugar Pathway Model can be fit to experimental data using a variety of parameter values. While this could be viewed as a drawback to the Sugar Pathway Model, we also show that small changes in parameter values do not drastically change the model’s predictions. Hence, the Sugar Pathway Model provides a reasonable qualitative understanding of how to model key chemical reactions that occur in the coffee bean, as well as how to model coffee bean chunks differently to whole coffee beans.

While largely theoretical, the Sugar Pathway Model provides a balance between the immensely complicated underlying physical processes occurring in a real-life coffee bean roast and its dominant qualitative features predicted by multiphase mathematical models. Additionally, industrial researchers can cheaply and efficiently use these multiphase mathematical models to determine the important features at play within a coffee bean under a variety of roasting configurations. While a basic framework for the roasting of a coffee bean is presented here, understanding the qualitative features of key chemical reaction groups allows us to get one step closer to that perfect cup of coffee.

*Featured image credit: Coffee by fxxu. CC0 via Pixabay.*

The post Modelling roasting coffee beans using mathematics: now full-bodied and robust appeared first on OUPblog.

On the other hand, careful examination by historians and philosophers of science has shown that identifying progress in science is in many ways a formidable and elusive problem. At the very least, scholars such as Karl Popper, Thomas Kuhn, Larry Laudan, and Paul Thagard, while not doubting that science makes progress, have debated *how* science is progressive or *what it is about* science that makes it inherently progressive. Then there are the serious doubters and skeptics. In particular, those of a decidedly postmodernist cast of mind reject the very idea that science makes progress. They claim that science is just another ‘story’ constructed ‘socially’; by implication, one cannot speak of science making objective progress.

A major source of the problem is the question of what we mean by the very idea of progress. The history of this idea is long and complicated as historian Robert Nisbet has shown. Even narrowing our concern to the realm of science we find at least two different views. There is the view espoused by most practicing scientists mentioned earlier, and stated quite explicitly by physicist-philosopher John Ziman, that growth of knowledge is manifest evidence of progress in science. We may call this the ‘knowledge-centric’ view. Contrast this with what philosopher of science Larry Laudan suggested: progress in a science occurs if successive theories in that science demonstrate a growth in ‘problem-solving effectiveness’. We may call this the ‘problem-centric’ view.

The dilemma lies in that it is quite possible that while the knowledge-centric view may indicate progress in a given scientific field the problem-centric perspective may suggest quite the contrary. An episode from the history of computer science illustrates this dilemma.

Around 1974, computer scientist Jack Dennis proposed a new style of computing he called *data flow*. This arose in response to a desire to exploit the ‘natural’ parallelism between computational operations constrained only by the availability of the data required by each operation. The image is that of computation as a network of operations, each operation being activated as and when its required input data is available to it as output of other operations: data ‘flows’ between operations and computation proceeds in a naturally parallel fashion.

The prospect of data flow computing evoked enormous excitement in the computer science community, for it was perceived as a means of liberating computing from the shackles of sequential processing inherent in the style of computing prevalent since the mid-1940s when a group of pioneers invented the so-called ‘von Neumann’ style (named after applied mathematician John von Neumann, who had authored the first report on this style). Dennis’s idea was seen as a revolutionary means of circumventing the ‘von Neumann bottleneck’, which limited the ability of conventional (‘von Neumann’) computers to exploit parallel processing. Almost immediately it prompted much research in all aspects of computing — computer design, programming techniques and programming languages — at universities, research centers and corporations in Europe, the UK, North America and Asia. Arguably the most publicized and ambitious project inspired by data flow was the Japanese Fifth Generation Computer Project in the 1980s, involving the co-operative participation of several leading Japanese companies and universities.

There is no doubt that from a knowledge-centric perspective the history of data flow computing from mid-1970s to the late 1980s manifested progress — in the sense that both theoretical research and experimental machine building generated much new knowledge and understanding into the nature of data flow computing and, more generally, parallel computing. But from a problem-centric view it turned out to be *unprogressive*. The reasons are rather technical but in essence it rested on the failure to realize what had seemed the most subversive idea in the proposed style: the elimination of the *central memory* to store data in the von Neumann computer: the source of the ‘von Neumann bottleneck’. As research in practical data flow computing developed it eventually became apparent that the goal of computing without a central memory could not be realized. Memory was needed, after all, to hold large data objects (‘data structures’). The effectiveness of the data flow style as originally conceived was seriously undermined. Computer scientists gained knowledge about the *limits* of data flow, thus becoming wiser (if sadder) in the process. But insofar as effectively solving the problem of memory-less computing was concerned, the case for progress in this particular field in computer science was found to have no merit.

In fact, this episode reveals that the idea of the growth of knowledge as a marker of progress in science is trivially true since even failure — as in the case of the data flow movement — generates knowledge (of the path not to take). For this reason, as a *theory* of progress, knowledge-centrism can never be refuted: knowledge is always produced. In contrast, the problem-centric theory of progress — that a science makes progress if successive theories or models demonstrate greater problem-solving effectiveness — is at least falsifiable in any particular domain, as the data flow episode shows. A supporter of Karl Popper’s principle of falsifiability would no doubt espouse problem-centrism as a more promising *empirical* theory of progress than knowledge-centrism.

*Featured image credit: ‘Colossus’ from The National Archives. Public Domain via Wikimedia Commons.*

The post The dilemma of ‘progress’ in science appeared first on OUPblog.

This year’s recipients come from diverse mathematical backgrounds, spanning the fields of algebraic geometry, number theory, and optimal transport. Honourees in 2018 are:

**Caucher Birkar**
For the proof of the boundedness of Fano varieties and for contributions to the minimal model program.

**Alessio Figalli**
For contributions to the theory of optimal transport and its applications in partial differential equations, metric geometry and probability.

**Peter Scholze**
For transforming arithmetic algebraic geometry over p-adic fields through his introduction of perfectoid spaces, with application to Galois representations, and for the development of new cohomology theories.

**Akshay Venkatesh**
For his synthesis of analytic number theory, homogeneous dynamics, topology, and representation theory, which has resolved long-standing problems in areas such as the equidistribution of arithmetic objects.

To celebrate the achievements of all of the winners, we’ve put together a reading list of free materials relating to the work that contributed to this honour.

**A Quantitative Analysis of Metrics on Rn with Almost Constant Positive Scalar Curvature, with Applications to Fast Diffusion Flows**, by Giulio Ciraolo, Alessio Figalli, and Francesco Maggi, published in *International Mathematics Research Notices*

The authors prove a quantitative structure theorem for metrics on Rn that are conformal to the flat metric, have almost constant positive scalar curvature, and cannot concentrate more than one bubble.

**The Langlands–Kottwitz approach for the modular curve**, by Peter Scholze, published in *International Mathematics Research Notices*

Scholze shows how the Langlands–Kottwitz method can be used to determine the local factors of the Hasse–Weil zeta-function of the modular curve at places of bad reduction.

**The Behavior of Random Reduced Bases**, by Seungki Kim and Akshay Venkatesh, published in *International Mathematics Research Notices*

Kim and Venkatesh prove that the number of Siegel-reduced bases for a randomly chosen n-dimensional lattice becomes, for n → ∞, tightly concentrated around its mean, while also showing that most reduced bases behave as in the worst-case analysis of lattice reduction.

**A Note on Sphere Packings in High Dimension**, by Akshay Venkatesh, published in *International Mathematics Research Notices*

An improvement on the lower bounds for the optimal density of sphere packings. In all sufficiently large dimensions, the improvement is by a factor of at least 10,000.

*Featured image: Math concept. Shutterstock.*

The post Laudable mathematics – The Fields Medal appeared first on OUPblog.

A highlight at every ICM is the announcement of the recipients of the Fields Medal, an award that honours up to four mathematicians under the age of 40, and is viewed as one of the highest honours a mathematician can receive.

Here we honour past Fields Medal winners who we are proud to name as our authors. Hover over each name to learn a little more about who they are and what their contributions have been.

*Featured image: Math concept. Shutterstock.*

The post Celebrating the Fields Medal [infographic] appeared first on OUPblog.

There is, of course, a whole discipline and a profession called “history of science.” People get doctoral degrees in this discipline; they teach it as members of history faculties or, if they are fortunate, as members of history of science departments. Many of them may have begun as apprentice scientists, but early in their post-graduate careers decided to make the switch. Others may have entered this discipline from the social sciences or the humanities. But here I am speaking of *practicing* scientists, who have experienced the pleasures and the perils of actual scientific research, who have got their hands dirty in the nuts and bolts of doing science, who earn a living doing science, turning to the history of their science.

Now, I happen to be an admirer and reader of Medawar’s superb essays on science, but here I think he missed—or chose to ignore—some crucial points.

First, a scientist may be lured into the realm of science past when certain kinds of questions are presented to him or her that can only be answered, or at least explored, by donning the historian’s hat. Put simply, the scientist *qua* scientist and the scientist *qua* historian ask different kinds of questions of science itself. The latter poses and addresses problems and questions about the science rather than within it.

Second, the scientist can bring to the historical table a corpus of knowledge about his or her science and a sensibility that derives from scientific training which the non-scientist historian of science may not be in a position to summon in addressing certain questions or problems.

As an example of these factors at work, consider the Harvard physicist Gerald Holton’s book *Thematic Origins of Scientific Thought: Kepler to Einstein *(1973). Here, we find a physicist striving to find patterns of thinking in his discipline by examining the evolution of physical ideas. Toward this end, he undertook historical case studies of such physicists as Kepler, Einstein, Millikan, Michelson, and Fermi, but case studies that were deeply informed by Holton’s background and authority as a professional physicist.

As another example from the realm of engineering sciences, we may consider David Billington, professor of civil engineering at Princeton, who published a remarkable book, titled *Robert Maillart’s Bridges: The New Art of Engineering* (1979), on the work of the Swiss bridge engineer Robert Maillart. Billington studied the complete corpus of Maillart’s work on bridges and other structures—his designs, constructions and writings—to illustrate the nature of Maillart’s cognitive style in design: a style Billington summarized by the formula “force follows form”—that is, for Maillart, the form of a bridge, determined by the physical environment in which it would be situated, came first in the engineer’s thinking, and his analysis of the mechanical forces within the structure followed thereafter. This study was authored by an engineering scientist who brought his deep structural engineering knowledge and scientific sensibility to his task.

There is another compelling reason why and when a working scientist might want to delve into science past. If we take the three most fundamental questions of interest to historians: “How did it begin?”; “What happened in the past?”; and “How did we get to the present state?”, then there are scientists who feel compelled to ask and investigate these questions in regard to their respective sciences. I am talking of scientists who possess a synthesizing disposition, who wish to compose a coherent narrative in response to their desire to answer such broader questions. They would probably agree with the Danish philosopher Søren Kierkegaard’s dictum, “Life must be understood backwards. But… it must be lived forwards.” To understand a science, such scientists believe, demands understanding its origins and its evolution, and framing this understanding as a story. They want to be storytellers—as much for fellow scientists as for historians of science.

Numerous examples can be given. One is the Englishman James Riddick Partington, for decades professor of chemistry at Queen Mary College, University of London. His four-volume *History of Chemistry* (1960-1970) is extraordinary for the sheer range of its scholarship, but my particular exemplar is his *A History of Greek Fire and Gunpowder* (1960), an account, spanning some 600 years, of the evolution of incendiaries. This is an account written by a professional chemist who summoned all his chemical authority to his task. Thus, when he talks about the nature of Greek fire (the name given by the Crusaders to an incendiary first used by the Byzantines), he tells us that, of the several different explanations offered about its nature, he believed only one particular theory of its composition agreed with the descriptions of its nature and use. Here we are listening to a chemical-historical discourse that only a chemist could deliver with authority. Partington’s remarkable book, intended for the scholarly reader interested in this topic, is uncompromising in its attention to the chemistry of explosives.

There are, however, important caveats to the scientist’s successful engagement in historical studies. He or she must master the tools of historical research and the principles of historiography (that is, the writing of history). The scientist-turned-historian must learn to understand, assess, and discriminate between different kinds of archival sources and what counts as historical “data”; be aware of such pitfalls as “presentism”—the tendency to analyze and interpret past events in the light of present-day values and situations; and master the nuances of historical interpretation. In other words, in methods of inquiry, the scientist-turned-historian must be indistinguishable from the formally trained historian of science.

*Featured image credit: Galileo Donato by Henry-Julien Detouche. Public Domain via Wikimedia Commons.*

The post The scientist as historian appeared first on OUPblog.

Every time we attempt to understand some new phenomenon or idea that may be quantifiable, our first and very natural pass at comprehension is to compare its values, behavior, and limits to something we already understand or at least have control over. Time is a common concept to measure our new phenomenon against, as in how is it changing as time passes? But we can use any known quantity with which to compare our new idea. Think of drug efficacy by dosage, say, or population growth by population size. This comparison comes in the form of a relation tying together values of our new phenomenon to values of something we already know. And when this relation between our newly quantified concept and something we already have control over is functional (meaning to each value of our known quantity, there is at most only one value of the new one), we can use our known quantity to discover, play with, and/or predict values of the new variable via studying the properties of the relationship or function.

The idea of a functional relationship tying together the values of two measurable quantities, one of which we know and the other we want to know more about, is, in essence, a mathematical model.

The input and output variable values can be discrete (individual real numbers with gaps between values) or continuous (like an interval of real numbers), and the properties of the functions, as mathematical models, will reflect this. In mathematics, sets of numbers (collections of valid input and output variables, the known and newly studied phenomena, respectively) and functions between them are among the fundamental building blocks of all of our mathematical structures. We structure the vast majority of our thought processes around the functional relationships between quantifiable phenomena.

In one of these functional relationships between two quantified entities, indeed, in a mathematical model, we can vary the values of the input variable from one value to the next or to the previous one, as a means to study the function’s properties. Studying the properties of a model (a function) in this fashion is something a mathematics student begins to do at a basic level in what we call calculus, or the “calculus of functions of one independent variable,” and at a higher level in areas like analysis and topology.

We also use the properties of functions (models) often without really being aware of it. We understand somewhat intuitively that the warmest part of a day is about two thirds of the way through the daylight hours, linking temperature to time over a day. We also know that two aspirin are more effective at pain relief than one, but intuitively understand that there is probably a maximum effective dosage that is safe before deleterious effects kick in, whether or not we choose to test the theory.

But mostly, the central power of a mathematical model, as a functional relationship between two measurable quantities, one known and one studied, is in its ability to predict, uncover, or extrapolate trends in the new quantity. And here is where my field of choice in mathematics becomes relevant: functions between quantities contain *dynamical* information. If we apply a function to a set, allowing its output to be reused as an input, over and over again, we can uncover properties of the function (and sometimes also of the set) by watching where individual inputs go upon repeated application of the function. This idea, iterating a function on a set (the discrete version) or using calculus to write a model as a differential equation (the continuous version), is what we call a *dynamical system*. In such a dynamical system, we often call the numbers that represent the iterates, or the input variable in a differential equation, the time variable, due to its common interpretation as actual time in models in science, engineering, and technology. However, there is no compelling reason for this in general. But underlying both a function and its iterates (a discrete dynamical system) or a system of ordinary differential equations (the continuous one) is the idea of a function whose input and output variable values come from the same set of possibilities. So *dynamical systems* is the mathematical discipline that studies the structure of mathematical modeling. And a mathematical model is simply a function.
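As a concrete illustration of iterating a function on a set, here is the logistic map in Python, a standard textbook example of a discrete dynamical system (the map and the parameter choice are my own illustrative pick, not drawn from the discussion above):

```python
# A minimal discrete dynamical system: iterating one function on one set.
# The logistic map x -> r*x*(1-x) sends [0, 1] back into itself, so each
# output can be fed in again as the next input.

def logistic(x, r=2.5):
    return r * x * (1 - x)

def orbit(x0, steps, r=2.5):
    """The first `steps` iterates of x0 under the logistic map."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

# For r = 2.5, every starting point in (0, 1) is attracted to the fixed
# point x* = 1 - 1/r = 0.6: a property of the function, uncovered simply
# by watching where individual inputs go under repeated application.
print(orbit(0.1, 50)[-1])
```

Change the parameter r and the long-term behaviour of the orbits changes qualitatively (fixed points, cycles, chaos), which is exactly the kind of dynamical information a model carries beyond its pointwise values.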

Forming and studying functional relationships to understand new things? In mathematics, this is called modeling. And in real life?

*Featured image credit: ‘Koch curve’ by Fibonacci. CC BY-SA 3.0 via Wikimedia Commons.*

The post What is a mathematical model? appeared first on OUPblog.

The Nobel Prize in Physics 2017 was recently awarded to Rainer Weiss, Barry C. Barish, and Kip S. Thorne, “for decisive contributions to the LIGO detector and the observation of gravitational waves”. This has provoked discussion about how Nobel Prizes are awarded. Weiss himself noted that the work that led to the prize involved around a thousand scientists. Martin Rees, the Astronomer Royal, said:

“Of course, LIGO’s success was owed to literally hundreds of dedicated scientists and engineers. The fact that the Nobel committee refuses to make group awards is causing them increasingly frequent problems – and giving a misleading and unfair impression of how a lot of science is actually done.”

There are perhaps two difficult questions here. If a group of people publish a research paper, how do they divide up the credit amongst themselves? And if a breakthrough is a result of a number of papers, by a whole group of people, who gets the acclaim?

The convention in my own subject of pure mathematics is usually that on a paper with multiple authors, the names appear in alphabetical order by last name, regardless of who did the most work, who made the largest contribution, or who is most senior. That, at least, is relatively straightforward – although it does mean for example that when reviewing candidates for a job, it can be hard to identify exactly what someone contributed to their joint publications.

More recently, some mathematicians have started to work in new large-scale online collaborations, and this raises all sorts of questions about how credit is assigned. When the mathematician Tim Gowers first proposed experimenting with such a collaboration, which he called a ‘Polymath’ project, he specified in advance how any resulting research papers would be published. The last of his twelve Polymath rules says

“Suppose the experiment actually results in something publishable. Even if only a very small number of people contribute the lion’s share of the ideas, the paper will still be submitted under a collective pseudonym with a link to the entire online discussion.”

Since, to everyone’s surprise, the first Polymath project did indeed lead to a research paper, this rule was immediately implemented, and it has continued in this way for subsequent Polymath projects. The discussions that led to the paper are all still available online, on various blogs and wikis, so if someone wants to check an individual’s specific contribution, they can do exactly that – which is not the case for more traditional collaborations.

Within pure mathematics, breakthroughs have mostly been attributed to the individual or small group of people who put the final piece in the jigsaw puzzle. That is perhaps unsurprising. If a person gives a solution to a problem or proves a conjecture, then they should get the credit, shouldn’t they? But mathematical arguments don’t usually exist in isolation: most often one piece of work builds on many previous ideas. Isaac Newton famously said “If I have seen further it is by standing on the shoulders of Giants”. If other mathematicians contributed ideas that were crucial ingredients but that don’t have the glamour of a complete solution to a famous problem, how can they receive appropriate credit for their work?

The public nature of the Polymath projects makes it possible to track progress on some problems in a way that has not previously been possible. In 2010, there was a Polymath collaborative project on the ‘Erdős discrepancy problem’, but the project did not reach a solution. The problem was subsequently solved by Terry Tao in 2015. Tao had been one of the participants in the Polymath5 project on the problem, and in his paper he acknowledged the role that Polymath5 had played in his work. He also built on work by Kaisa Matomäki and Maksym Radziwiłł, and a suggestion by Uwe Stroinski that a recent paper of Matomäki, Radziwiłł, and Tao might be linked to the Erdős discrepancy problem. I am not commenting on this example because I think that anyone has behaved badly. On the contrary, Tao was scrupulous about acknowledging all of this in his paper. Rather, the public collaborative aspect of the story has made it easier than usual to trace the journey that eventually led to a solution. Without a doubt it was Tao who put it all together, added his own crucial ideas and insights, and came up with a solution, but it does seem that others’ contributions were key to the breakthrough coming at that particular moment. I suspect that history will record that “the Erdős discrepancy problem was solved by Tao”, without the nuances.

Two of the mathematicians I have mentioned, Tim Gowers and Terry Tao, are winners of the Fields Medal, one of the highest honours to be awarded in mathematics. The Fields Medal is awarded “to recognize outstanding mathematical achievement for existing work and for the promise of future achievement”. I am curious about whether in the future the Fields Medal, the Abel Prize, or any of the other accolades in mathematics might be awarded to a Polymath collaboration that has achieved something extraordinary.

*Featured image credit: Pay by geralt. Public domain via Pixabay.*

The post The final piece of the puzzle appeared first on OUPblog.

The first run of metals data through complex networks algorithms happened the night before we were due to deliver our ideas on how networks research can benefit early Balkan metallurgy research at a physics conference. Needless to say, we had (over)committed ourselves to delivering a fresh view on this topic without giving it much thought; instead, we hoped that our enthusiasm would do the job. That night, the best and the worst thing happened. The best was that our results presented the separation of modules, the most densely connected structures in our networks, as statistically, archaeologically, and spatiotemporally significant. The bad news was that we stumbled upon this in classic serendipitous manner – we did not know what it was that we were pursuing, but it looked too good to let go. It subsequently took us three years to get to the bottom of the network analysis we ran that night.

In simple terms, what we did here was present ancient societies as a network. A large number of systems can be represented as networks. For example, human society is a network where the nodes are people and the links are social or genetic ties between them. Many real-world networks exhibit nontrivial properties that we do not observe in a regular lattice or in a network whose nodes are connected at random. For example, social networks have a property called ‘six degrees of separation’, which means that the distance between any of us and anybody else on the planet is less than six steps of friendship. So any of us knows somebody, who knows somebody, and so on (six times), who knows Barack Obama or a fisherman on a small island in Indonesia. Another property that is common in complex networks is so-called modularity. This means that some parts of the network are more densely connected with each other than with the rest of the network. Successful investigations of the modularity, or community structure, of networks include detecting modules in citation networks and in pollination systems – in our case we used this property to shed light on the connections between prehistoric societies that traded copper. It turned out that they did not do so randomly, but within their own networks of dense social ties, which are remarkably consistent with the distribution of known archaeological phenomena, or cultures, of the time (c. 6200–3200 BC).

What we managed to capture were properties of highly interconnected systems based on copper supply networks that also reflected organisation of social and economic ties between c. 6200 and c. 3200 BC.

Our example is the first 3,000 years of the development of metallurgy in the Balkans. The case study includes more than 400 prehistoric copper artefacts: copper ores, beads made of green copper ores, production debris such as slags, and a variety of copper metal artefacts, from trinkets to massive hammer-axes weighing around 1kg each. Although our database was filled with detailed archaeological, spatial, and temporal information about each of the 400+ artefacts used to design and conduct the network analyses, we only employed the chemical analyses – information that was acquired independently and can be replicated. Importantly, we operated under the premise that networks of copper supply can reveal information relevant to the specific histories of the people behind these movements, and hence reflect human behaviour.

Our initial aim was to see how supply networks of copper artefacts were organised in the past; only in the last step of the analysis did we plan to use geographical location, to facilitate the visual representation of our results. Basically, if two artefacts from the same chemical cluster were found at two different sites, we placed a link between those sites. In the final step, the so-called Louvain algorithm – a good modularity optimization method – was applied in order to identify structures in our networks. Another advantage of this approach is that we can test the statistical significance of the obtained modules and put a probability figure on them.
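To make the construction concrete, here is a minimal sketch (not the project's actual pipeline; the cluster and site names are invented) of the two ingredients: linking sites that share artefacts from the same chemical cluster, and scoring a proposed grouping of sites with Newman's modularity Q:

```python
from itertools import combinations

# Invented toy data: (chemical cluster, find site) for a handful of artefacts.
artefacts = [
    ("cluster-A", "site1"), ("cluster-A", "site2"), ("cluster-A", "site3"),
    ("cluster-B", "site4"), ("cluster-B", "site5"),
    ("cluster-C", "site1"), ("cluster-C", "site2"),
]

# Link two sites whenever they share artefacts from the same chemical cluster.
by_cluster = {}
for cluster, site in artefacts:
    by_cluster.setdefault(cluster, set()).add(site)
edges = {pair for sites in by_cluster.values()
         for pair in combinations(sorted(sites), 2)}

def modularity(edges, community):
    """Newman's Q: fraction of edges inside communities minus the fraction
    expected if edges were placed at random with the same node degrees."""
    m = len(edges)
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    q = 0.0
    for i in degree:
        for j in degree:
            a_ij = 1.0 if (i, j) in edges or (j, i) in edges else 0.0
            if community[i] == community[j]:
                q += a_ij - degree[i] * degree[j] / (2 * m)
    return q / (2 * m)

# Splitting the two dense clumps scores better than lumping everything together.
split = {"site1": 0, "site2": 0, "site3": 0, "site4": 1, "site5": 1}
lumped = {s: 0 for s in split}
assert modularity(edges, split) > modularity(edges, lumped)
```

A module-detection method such as Louvain searches over partitions like `split` for one that maximizes Q; statistical significance can then be assessed by comparing the score against randomized versions of the network.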

What we managed to capture were properties of highly interconnected systems based on copper supply networks that also reflected the organisation of social and economic ties between c. 6200 and c. 3200 BC. The intensity of algorithmically calculated social interaction revealed three main groups of communities (or modules) that are archaeologically, spatiotemporally, and statistically significant across the studied period (represented in different colours in Figure 1). These communities display substantial correlation with at least three dominant archaeological cultures that formed the main economic and social cores of copper industries in the Balkans during these 3,000 years (Figure 2). Basically, such correlation shows that known archaeological phenomena can be mathematically evaluated using the modularity approach.

Although serendipity marked the beginnings of our research, our plan is to take it from here with a detailed research strategy, which now includes looking at other aspects of material culture (not only metals) and testing the model on datasets from across prehistoric Europe, or indeed from different chronological periods. We can say that the Balkan example worked out well because metal supply and circulation played a great role in the lives of societies within the observed period, but it may not apply in cases where this economy was not as developed. The most exciting part for us, though, was changing our perspective on what an archaeological culture might represent. Traditional systematics commonly treats cultures as depositions of similar associations of materials, dwelling, and subsistence forms across a distinct space-time, and debates come down to either grouping or splitting distinctive archaeological cultures based on expressions of similarity and reproduction across the defined time and space. But now we have the opportunity to change this perspective and look at the strength of links between similar material culture, rather than at its accumulation patterns. This is a game changer for us. And we hope that this research inspires colleagues to pursue this idea of measuring connectedness amongst past societies in order to shed more light on how people in the past cooperated, and why.

*Featured image credit: Mountains in Bulgaria by Alex Dimitrov. CC BY-SA 4.0 via Wikimedia Commons.*

The post On serendipity, metals, and networks appeared first on OUPblog.

Mathematics is more than the memorization and application of various rules. Although the language of mathematics can be intimidating, the concepts themselves are built into everyday life. In the following excerpt from *A Brief History of Mathematical Thought*, Luke Heaton examines the concepts behind mathematics and the language we use to describe them.

There is strong empirical evidence that before they learn to speak, and long before they learn mathematics, children start to structure their perceptual world. For example, a child might play with some eggs by putting them in a bowl, and they have some sense that this collection of eggs is in a different spatial region to the things that are outside the bowl. This kind of spatial understanding is a basic cognitive ability, and we do not need symbols to begin to appreciate the sense that we can make of moving something into or out of a container. Furthermore, we can see in an instant the difference between collections containing one, two, three or four eggs. These cognitive capacities enable us to see that when we add an egg to our bowl (moving it from outside to inside), the collection somehow changes, and likewise, taking an egg out of the bowl changes the collection. Even when we have a bowl of sugar, where we cannot see how many grains there might be, small children have some kind of understanding of the process of adding sugar to a bowl, or taking some sugar away. That is to say, we can recognize particular acts of adding sugar to a bowl as being examples of someone ‘adding something to a bowl’, so the word ‘adding’ has some grounding in physical experience.

Of course, adding sugar to my cup of tea is not an example of mathematical addition. My point is that our innate cognitive capabilities provide a foundation for our notions of containers, of collections of things, and of adding or taking away from those collections. Furthermore, when we teach the more sophisticated, abstract concepts of addition and subtraction (which are certainly not innate), we do so by referring to those more basic, physically grounded forms of understanding. When we use pen and paper to do some sums we do not literally add objects to a collection, but it is no coincidence that we use the same words for both mathematical addition and the physical case where we literally move some objects. After all, even the greatest of mathematicians first understood mathematical addition by hearing things like ‘If you have two apples in a basket and you add three more, how many do you have?’

As the cognitive scientists George Lakoff and Rafael Núñez argue in their thought-provoking and controversial book *Where Mathematics Comes From*, our understanding of mathematical symbols is rooted in our cognitive capabilities. In particular, we have some innate understanding of spatial relations, and we have the ability to construct ‘conceptual metaphors’, where we understand an idea or conceptual domain by employing the language and patterns of thought that were first developed in some other domain. The use of conceptual metaphor is something that is common to all forms of understanding, and as such it is not characteristic of mathematics in particular. That is simply to say, I take it for granted that new ideas do not descend from on high: they must relate to what we already know, as physically embodied human beings, and we explain new concepts by talking about how they are akin to some other, familiar concept.

Conceptual mappings from one thing to another are fundamental to human understanding, not least because they allow us to reason about unfamiliar or abstract things by using the inferential structure of things that are deeply familiar. For example, when we are asked to think about adding the numbers two and three, we know that this operation is like adding three apples to a basket that already contains two apples, and it is also like taking two steps followed by three steps. Of course, whether we are imagining moving apples into a basket or thinking about an abstract form of addition, we don’t actually need to move any objects. Furthermore, we understand that the touch and smell of apples are not part of the facts of addition, as the concepts involved are very general, and can be applied to all manner of situations. Nevertheless, we understand that when we are adding two numbers, the meaning of the symbols entitles us to think in terms of concrete, physical cases, though we are not obliged to do so. Indeed, it may well be true to say that our minds and brains are capable of forming abstract number concepts because we are capable of thinking about particular, concrete cases.

Mathematical reasoning involves rules and definitions, and the fact that computers can add correctly demonstrates that you don’t even need to have a brain to correctly employ a specific, notational system. In other words, in a very limited way we can ‘do mathematics’ without needing to reflect on the significance or meaning of our symbols. However, mathematics isn’t only about the proper, rule-governed use of symbols: it is about *ideas *that can be expressed by the rule-governed use of symbols, and it seems that many mathematical ideas are deeply rooted in the structure of the world that we perceive.

*Featured image credit: “mental-human-experience-mindset” by johnhain. CC0 via Pixabay.*

The post Mathematical reasoning and the human mind [excerpt] appeared first on OUPblog.

Should we be impressed? Yes – scientific breakthroughs are great things.

Does this revolutionise the future of cyber security? No – sadly, almost certainly not.

At the heart of modern cyber security is cryptography, which provides a kit of mathematically-based tools for providing core security services such as confidentiality (restricting who can access data), data integrity (making sure that any unauthorised changes to data are detected), and authentication (identifying the correct source of data). We rely on cryptography every day for securing everything we do in cyberspace, such as banking, mobile phone calls, online shopping, messaging, social media, etc. Since everything is in cyberspace these days, cryptography also underpins the security of the likes of governments, power stations, homes, and cars.

Cryptography relies on secrets, known as keys, which act in a similar role to keys in the physical world. *Encryption*, for example, is the digital equivalent of locking information inside a box. Only those who have access to the key can open the box to retrieve the contents. Anyone else can shake the box all they like – the contents remain inaccessible without access to the key.

A challenge in cryptography is *key distribution*, which means getting the right cryptographic key to those (and only those) who need it. There are many different techniques for key distribution. For many of our everyday applications key distribution is effortless, since keys come preinstalled on devices that we acquire (for example, mobile SIM cards, bank cards, and car key fobs). In other cases it is straightforward because devices that need to share keys are physically close to one another (for example, you read the key on the label of your Wi-Fi router and type it into the devices you permit to connect).

Key distribution is more challenging when the communicating parties are far from one another and do not have any business relationship during which keys could have been distributed. This is typically the case when you buy something from an online store or engage in a WhatsApp message exchange. Key distribution in these situations is tricky, but very solvable, using techniques based on a special set of cryptographic tools known as *public-key cryptography*. Your devices use such techniques every day to distribute keys, without you even being aware it is happening.
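To illustrate how such techniques let two distant parties agree a key without ever transmitting it, here is a toy Diffie–Hellman exchange, one classic public-key building block. This is a sketch only: the 32-bit prime below is far too small for real use, where 2048-bit-plus groups or elliptic curves (plus authentication) are standard.

```python
import secrets

# Toy Diffie-Hellman key agreement. Illustration only, not secure parameters.
p = 0xFFFFFFFB   # the largest 32-bit prime; vastly too small for real security
g = 5            # public base

a = secrets.randbelow(p - 2) + 1   # Alice's private value, never transmitted
b = secrets.randbelow(p - 2) + 1   # Bob's private value, never transmitted

A = pow(g, a, p)   # Alice sends this over the open channel
B = pow(g, b, p)   # Bob sends this over the open channel

# Each side combines its own secret with the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # identical key material on both sides
```

An eavesdropper sees only `p`, `g`, `A`, and `B`; recovering the shared value requires solving a discrete logarithm. That is exactly the kind of problem a practical quantum computer would make easy, which is why cryptographers are developing quantum-resistant replacements.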

There is yet another way of distributing keys, known as *quantum key distribution*. This uses a quantum channel, such as line of sight or fibre-optic cable, to exchange light particles, from which a cryptographic key can eventually be extracted. Distance limitations, poor data rates, and the reliance on specialist equipment have previously made quantum key distribution more of a scientific curiosity than a practical technology. What the Chinese scientists have done is blow the previous distance record for quantum key distribution from around 100 km out to 1,000 km, through the use of a satellite. That’s impressive.

However, the Chinese scientists have not significantly improved the case for using quantum key distribution in the first place. We can happily distribute cryptographic keys today without lasers and satellites, so why would we ever need to? Just because we can?

Well, there’s a glimmer of a case. For the likes of banking and mobile phones, it seems unlikely we will ever need quantum key distribution. However, for applications which currently rely on public-key cryptography, there is a problem brewing. If anyone gets around to building a practical quantum computer (and we’re not talking tomorrow), then current public-key cryptographic techniques will become insecure. This is because a quantum computer will efficiently solve the hard mathematical problems on which today’s public-key cryptography relies. Cryptographers today are thus developing new types of public-key cryptography that will resist quantum computers. I am confident they will succeed. When they do, we will be able to continue distributing keys in much the same way as we do today – in other words, without quantum key distribution.

Who needs quantum key distribution then? Frankly, it’s hard to make a case, but let’s try. One possible advantage of quantum key distribution is that it enables the use of a highly secure form of encryption known as the *one-time pad*. One reason almost nobody uses the one-time pad is that it’s a complete hassle to distribute its keys. Quantum key distribution would solve this. More importantly, however, nobody uses the one-time pad today because modern encryption techniques are so strong. If you don’t believe me, look how frustrated some government agencies are that we are using them. We don’t use the one-time pad because we don’t need to. The same argument applies to quantum key distribution itself.
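A minimal sketch shows both why the one-time pad is attractive and why its key distribution is such a hassle: the key must be truly random, exactly as long as the message, and never reused.

```python
import secrets

def one_time_pad(data: bytes, key: bytes) -> bytes:
    """XOR data with the key; the same operation both encrypts and decrypts."""
    if len(key) != len(data):
        raise ValueError("a one-time pad key must be exactly as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the usual place"
key = secrets.token_bytes(len(message))   # truly random; must never be reused

ciphertext = one_time_pad(message, key)
assert one_time_pad(ciphertext, key) == message   # round-trips perfectly
```

Every message needs a fresh key as long as itself, delivered securely in advance – precisely the distribution burden that quantum key distribution offers to carry.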

Finally, let’s just suppose that there is an application which somehow merits the use of the one-time pad. Do the one-time pad and quantum key distribution provide the ultimate security that physicists often claim? Here’s the really bad news. We have just been discussing all the wrong things. Cyber security rarely fails due to problems with encryption algorithms or the ways that cryptographic keys are distributed. Much more common are failures in the systems and processes surrounding cryptography. These include poor implementations and misuse. For example, one-time pads and quantum key distribution don’t protect data after it is decrypted, or if a key is accidentally used twice, or if someone forgets to turn encryption on, etc. We already have good encryption and key distribution techniques. We need to get much better at building secure systems.

So, I’m very impressed that a cryptographic key can be distributed via satellite. That’s great – but I don’t think this will revolutionise cryptography. And I certainly don’t feel any more secure as a result.

*Featured image credit: Virus by geralt. CC0 public domain via **Pixabay**.*

The post Who needs quantum key distribution? appeared first on OUPblog.

Alan Turing pioneered the field of ‘machine intelligence’, and today we celebrate his achievements and the legacy of his research. Find out more about some of the key events that shaped his investigations with this interactive timeline.

*Featured image credit: Enigma by Rama. CC BY-SA 3.0 via **Wikimedia Commons**.*

The post The life and work of Alan Turing appeared first on OUPblog.

Because it is based on a random sampling model, a ‘P value’ implies that the probability of a treatment being truly better in a large idealized study is very near to ‘1 – P’, *provided* that it is calculated using a symmetrical (e.g. Gaussian) distribution, that the study is described accurately so that someone else can repeat it in exactly the same way, that the study is performed with no hidden biases, and that there are no other study results that contradict it. It should also be borne in mind that ‘truly better’ in this context includes differences only just greater than ‘no difference’, so that ‘truly better’ may not necessarily mean a big difference. However, if the above conditions of accuracy and so on are not met, then the probability of the treatment being truly better than placebo in an idealized study will be lower (i.e. it will range from an upper limit of ‘1 – P’ [e.g. 1 – 0.025 = 0.975] down to zero). This is so because the possible outcomes of a very large number of random samples are always equally probable, this being a special property of the random sampling process. I will explain.

Figure 1 represents a large population divided into two mutually exclusive subgroups. One contains people with ‘appendicitis’, numbering 80M + 20M = 100M; the other group has ‘no appendicitis’, numbering 120M + 180M = 300M. Now, say that a single computer file contains all the records of *only one* of these groups and we have to guess which group it holds. In order to help us, we are told that 80M/(80M+20M) = 80% of those with appendicitis have RLQ (right lower quadrant) pain and that 120M/(120M+180M) = 40% of those without appendicitis have RLQ pain, as shown in figure 1. In order to find out which group’s records are in the computer file, we could perform an ‘idealised’ study. This would involve selecting an individual patient’s record at random from the unknown group and looking to see if that person had RLQ pain or not. If the person had RLQ pain, we could write ‘RLQ pain’ on a card and put it into a box. We could repeat this process an ideally large number of times, N (e.g. thousands).

If we had been selecting from the group of people with appendicitis, then we would get the result in Box A, where 80N/100N = 80% of the cards had ‘RLQ pain’ written on them. However, if we had been selecting from the people without appendicitis, we would get the result in Box B, with 120N/300N = 40% of the cards bearing ‘RLQ pain’. We would then be able to tell immediately from which group of people we had been selecting. Note that random sampling only ‘sees’ the *proportion* with RLQ pain in each group (i.e. either 80% or 40%). It is immaterial that the size of the group in figure 1 with appendicitis (100M) differs from that of the group without appendicitis (300M).

The current confusion about ‘P values’ arises because this ‘fact’ is overlooked and it is wrongly assumed that a difference in the sizes of the source populations affects the sampling process. A scientist would be interested in the possible long-term outcome of an idealised study (in this case the possible contents of the two boxes A and B), not in the various proportions in the unknown source population.

Making a large number ‘N’ of random selections would represent an idealized study. In practice we cannot do such idealized studies but have to make do with a smaller number of observations. For example, we would have to try to predict from which of the possible boxes of N cards, each representing an ideal study outcome, a smaller sample had been drawn. If we selected 24 cards at random from the box of cards drawn from the computer file containing details of the unknown population and found that 15 by chance had ‘RLQ pain’, we can work out the probability (from the binomial distribution, e.g. with n=24, r=15, and p=0.8) of getting exactly 15/24 from each possible box A and B. From Box A it would be 0.023554 and from Box B it would be 0.0141483. The proportions in boxes A and B are not affected by the numbers with and without appendicitis in the source population, and the two boxes were therefore equally probable before the random selections were made. This allows us to work out the probability that the computer file contained records of patients with appendicitis by dividing 0.023554 by (0.023554 + 0.0141483) = 0.6247. The probability of the computer file containing the ‘no appendicitis’ group would thus be 1 – 0.6247 = 0.3753.
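The arithmetic in this example is easy to reproduce. A short sketch of the same calculation, using the numbers from the text:

```python
from math import comb

def binom_pmf(n: int, r: int, p: float) -> float:
    """Probability of exactly r 'RLQ pain' cards in n random selections."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

# 15 of the 24 sampled cards say 'RLQ pain'; compare the two candidate boxes.
like_a = binom_pmf(24, 15, 0.8)   # Box A: appendicitis group, 80% RLQ pain
like_b = binom_pmf(24, 15, 0.4)   # Box B: no-appendicitis group, 40% RLQ pain

# The boxes were equally probable beforehand, so normalise the two likelihoods.
prob_a = like_a / (like_a + like_b)
print(round(like_a, 6), round(like_b, 6), round(prob_a, 4))
# → 0.023554 0.014148 0.6247
```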

It does not matter how many possible idealized study results we have to consider; they will always be equally probable. This is because each possible idealized random selection study result is not affected by differences in sizes of the source populations. So, if a ‘P value’ is 0.025 based on a symmetrical (e.g. Gaussian) distribution, the probability of a treatment being better than placebo will be 1 – P = 0.975 or less if there are inaccuracies, biases, or other very similar studies that give contrary results, etc. These factors will have to be taken into account in most cases.

*Featured image credit: Edited STATS1_P-VALUE originally by fickleandfreckled. CC BY 2.0 via Flickr.*

The post Suspected ‘fake results’ in science appeared first on OUPblog.

Despite the theory’s origins dating back to the work of von Neumann and Morgenstern, it was the economists John Nash, John Harsanyi, and Reinhard Selten who received the Nobel Prize in Economics in 1994 for further developing game theory in relation to economics. Here are some interesting facts about the field, from its key influencers and terms to how it applies in everyday life.

- Game theory can be thought of as an extension of decision theory. In standard decision theory, each agent has utilities associated with outcomes. In game theory, however, each agent also has to consider the utilities of the other agents, and how every agent’s choices affect the others’ decisions and the overall outcome.
- The term ‘Tit for Tat’ is a concept used in the mathematical side of game theory. It describes a strategy in which a player responds with the same action or move that the opponent used in the previous round.

- One of the most celebrated theorems of game theory is the minimax theorem. It states that in any finite two-person game between players with directly opposing interests (a zero-sum game), each player has a strategy – in general a randomized one – that guarantees the best outcome that player can secure, regardless of what the opponent does.
- “Common knowledge” is widely used in game theory. A piece of information is common knowledge in a game if everyone knows it, everyone knows that everyone else knows it, and so on indefinitely – a much stronger assumption than everyone merely knowing the information.
- Focal point, or Schelling point, is one of the many key terms used in game theory. It was developed by the American economist Thomas Schelling in his book *The Strategy of Conflict*, published in 1960. Thomas Schelling and Robert J. Aumann were jointly awarded the Nobel Prize in Economics in 2005 for developing game-theory analysis.
- Prisoner’s Dilemma is one of the best-known examples of the games analysed in game theory. The name comes from a situation in which two prisoners must each choose either ‘confess’ or ‘don’t confess’ without knowing what the other will choose. This game aims to illustrate how people behave in tactical situations.
- Another widely used example is known as the Battle of the Sexes game. Two partners would like to share an evening together; they have different ideas of what they would like to do, but each would still prefer being together to attending two separate events. This game is used to demonstrate the pros and cons of coordination.
- John Forbes Nash Jr was an American mathematician renowned for his contribution to game theory. The phrase Nash equilibrium used in game theory is named after him.
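The Prisoner’s Dilemma and Nash equilibrium above fit together neatly in code. The payoff numbers below are one conventional choice (higher is better; "C" is stay silent, "D" is confess), and a brute-force search confirms that mutual confession is the game’s only Nash equilibrium even though both players would prefer mutual silence:

```python
from itertools import product

# One conventional Prisoner's Dilemma payoff table (higher is better).
# "C" = stay silent (cooperate with the other prisoner), "D" = confess.
payoffs = {
    ("C", "C"): (3, 3),   # both stay silent
    ("C", "D"): (0, 5),   # lone confessor does best, silent partner worst
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # both confess
}

def is_nash(a: str, b: str) -> bool:
    """(a, b) is a Nash equilibrium if neither player gains by deviating alone."""
    ua, ub = payoffs[(a, b)]
    best_a = all(payoffs[(a2, b)][0] <= ua for a2 in "CD")
    best_b = all(payoffs[(a, b2)][1] <= ub for b2 in "CD")
    return best_a and best_b

equilibria = [(a, b) for a, b in product("CD", repeat=2) if is_nash(a, b)]
print(equilibria)   # → [('D', 'D')]: both confess, though both prefer (C, C)
```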

*Featured image credit: checkmate chess by Stevepb. Public domain via Pixabay.*

The post What is game theory? appeared first on OUPblog.

In fact the idea really goes back to Michael Faraday, who gave Christmas lectures about science for young people at The Royal Institution of Great Britain in London in 1826. Sir Christopher Zeeman, following on from Porter’s initiative, gave the first series of six one-hour lectures (Mathematics Masterclasses) to young people at The Royal Institution in 1981, on “The Nature of Mathematics and The Mathematics of Nature”.

A consequence has been initiatives, now widespread throughout the United Kingdom, of Mathematics Masterclasses, in particular for age groups from 8 to 18, run by enthusiastic local organisers. I served for several years on the Committee at The Royal Institution whose role was to encourage those Masterclasses nationally.

A reasonable definition of a Masterclass might be that it is devised for “students” (of whatever age level) who have a ready curiosity about what goes on around them, and an interest in finding an explanation for what they observe, even if that explanation is not immediately obvious but requires, perhaps, a two- or three-stage process to arrive at a solution. The “speaker” will have an intrinsic interest in drawing out an answer from such students, and also in devising problems from any circumstances that lie within the area just stated. In mathematics, the solution process will normally require the identification of appropriate “variables” to describe the problem, the formulation of suitable relations (equations) between those variables, and then the “solution” of those equations in a way which expresses an unknown quantity entirely in terms of known quantities. That is how mathematics “works”.

Every year in the 1990s in Berkshire, England, sixty 12-year-old pupils were gathered at Mathematics Masterclasses at the University of Reading. Attendees were nominated by their schools and showed an aptitude for maths. Two parallel sessions were held, each containing 30 pupils, a lecturer, and qualified helpers.

A typical Masterclass might last for up to three hours, broken up into three sessions, with refreshment breaks and tutorial sessions interspersing the lecture material. Ideally there will be several volunteer teachers circulating to give advice during the tutorial sessions. Teachers from the participating schools were readily found, and enthusiastic to volunteer for this role.

Examples of topics treated in Masterclasses have been “Weather” (the atmosphere and forces therein) by Sir Brian Hoskins, “Water Waves” (in deep and shallow water, and in groups) by Winifred Wood, and the “Dynamics of Dinosaurs” (e.g. their weight and speed) by Michael Sewell. I also gave a Masterclass about “Balloons and Bubbles”, which used mathematics allied to classroom demonstrations to illustrate an associated sequence of topics: pressure, equilibrium of a spherical bubble, tension in a soap film, tension in rubber, pressure peaks and pits, and cylindrical balloons.

The long-term benefit of a Masterclass, and one of its objectives, is to encourage a lasting enthusiasm and curiosity about how to devise a “model” of a natural phenomenon by using mathematics, and thereby to develop a capacity for original thinking about an observed situation in nature that is still within the scope of schoolchildren.

An example of an everyday problem suitable for a Masterclass is the following “Coffee Shop Problem”, actually posed to me by my wife in that situation. Given eight points equally distributed around a circle, how many differently shaped triangles can be drawn using only three of those points as vertices? Now generalise the problem by introducing more equally spaced points, and looking for different polygons (not just triangles). This teaches one how to realise that any given problem may be the start of a much larger problem, which is an important part of any mathematical investigation, and which may not be at first apparent.
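A brute-force count makes a nice check on the pencil-and-paper answer to the eight-point version. Two triangles on the circle have the same shape exactly when the multisets of arc gaps between their vertices agree (sorting the gaps normalizes both rotation and reflection), so it suffices to count distinct sorted gap triples:

```python
from itertools import combinations

def triangle_shapes(n: int) -> int:
    """Count distinct triangle shapes with vertices on n equally spaced
    points of a circle. A shape is identified by the sorted multiset of
    arc gaps between consecutive vertices."""
    shapes = set()
    for pts in combinations(range(n), 3):
        gaps = sorted((b - a) % n for a, b in zip(pts, pts[1:] + (pts[0],)))
        shapes.add(tuple(gaps))
    return len(shapes)

print(triangle_shapes(8))  # → 5, one shape per partition of 8 into 3 parts
```

Generalising to quadrilaterals and beyond needs a canonical cyclic ordering of the gaps rather than a plain sort, which is exactly the kind of extension the problem invites.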

A further example of a Masterclass problem is the following. Draw a right-angled triangle with unequal shorter sides. Draw three circles, each using one of those sides as the diameter. The two external regions between the larger circle and (in turn) the two smaller circles are called lunes (because they each have the shape of a crescent Moon). Now, prove Hippocrates Theorem (c. 410 B.C.), that the sum of the areas of external lunes is equal to the area of the right-angled triangle.
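The proof is a satisfying one-line application of Pythagoras. Writing a and b for the two shorter sides, c for the hypotenuse, T for the area of the triangle, and S_x = πx²/8 for the area of a semicircle drawn on a side of length x (radius x/2), the two circular segments between the largest circle and the triangle have total area S_c − T, so the lunes satisfy:

```latex
\begin{aligned}
L_1 + L_2 &= S_a + S_b - (S_c - T) \\
          &= \tfrac{\pi}{8}a^2 + \tfrac{\pi}{8}b^2 - \tfrac{\pi}{8}c^2 + T \\
          &= \tfrac{\pi}{8}\,(a^2 + b^2 - c^2) + T = T,
\end{aligned}
```

since a² + b² = c² for a right-angled triangle. The π terms cancel entirely, which is why the lunes have a "squarable" area even though the circle itself does not.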

*Featured image: Calculator by 422737. Public domain via Pixabay.*

The post Mathematics Masterclasses for young people appeared first on OUPblog.

The post Coincidences are underrated appeared first on OUPblog.

The unreasonable popularity of pseudosciences such as ESP or astrology often stems from personal experience. We’ve all had that “Ralph” phone call or some other happening that seems well beyond the range of normal probability, at least according to what we consider to be common sense. But how accurately does common sense forecast probabilities and how much of it is fuzzy math? As we will see, fuzzy math holds its own.

Let’s try to de-fuzz the math a bit, starting with a classic example: the birthday problem. Perhaps you’ve encountered this problem in a math class that dealt with probabilities. In a group of people, what is the probability that at least two of them have the same birthday? Certainly it must depend on the size of the group. If we start with only two people, the chance would be one out of 365 – well, OK, one in 366 for leap years. If the group included more people, common sense might suggest that the probability would just increase linearly. So, to get a 50% chance, you might think it would take 183 people in the group. Wrong. That’s where common sense goes off the rails. It turns out that, in a group of only 23 people, the probability that two people share a birthday already exceeds 50%.

Details of the logic required to arrive at this result are unnecessary here, but a clue is given by a group of three. The third person might match the birthday of either of the first two, so you might think to just double the first probability. But think about this from the inverse point of view. The probability of the second person’s birthday NOT matching the first is 364/365. But the third person could match either of the first two, so the probability of NOT matching is only 363/365. Since NOT matching is thus less probable, matching becomes more probable. Working this out involves a bit of number crunching, but math classes have calculators galore, and since many classes have 23 or more members, real data are available to support the probability calculation. As you can see, what we take to be common sense often yields inaccurate solutions.
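The number crunching mentioned above is easy to reproduce. A minimal Python version of the standard calculation (my sketch, not from the original post) just multiplies up the NOT-matching probabilities and takes the complement:

```python
def birthday_probability(n):
    """Probability that at least two of n people share a birthday (365-day year)."""
    p_no_match = 1.0
    for i in range(n):
        # The (i+1)-th person must avoid all i birthdays already taken.
        p_no_match *= (365 - i) / 365
    return 1 - p_no_match

print(round(birthday_probability(23), 3))   # 0.507 -- just past the 50% mark
print(round(birthday_probability(183), 6))  # 1.0 to six decimal places
```

The second line shows how badly the linear intuition fails: with 183 people a shared birthday is not a coin flip but a near certainty.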

Meanwhile, back at the “Ralph” problem, a math textbook might tackle this problem in terms of drawing different colored pebbles from a large urn. Let’s forego that approach, and set the “Ralph” problem in more realistic terms. Suppose you know N people. During the course of a single day, a number of those people, k, cross your mind on a purely random basis. For this illustration, let’s agree to ignore close relatives and friends that you think about almost every day. Next, a certain number of people, L, contact you in a given day by any means, including phone calls, e-mails and electronic messages, social media, snail mail, and random meetings.

Working through this problem (actually kind of fun if you like mathematical puzzles) yields an equation for the number of days that will elapse before the probability of getting a contact from someone you thought about reaches a given level. Of course, it depends on the variables N, k, and L, not the easiest quantities to obtain.

An estimate of N, the number of people that an average person knows, is available from various sources and ranges from 200 to 1500, but k, the number of people one thinks about in a day, is highly subjective, as is L, the number of contacts one receives in an average day. Yet all these numbers are necessary to estimate the time required for someone you thought about to contact you shortly afterwards. Unscientific surveys of students, neighbors, and friends produced values of k from 10 to 100 and of L from 5 to 30. Substituting these numbers into the derived equation and requiring a 90% probability yields a remarkable result: such coincidences would happen anywhere from once a week to once every other month. Most people’s fuzzy math would probably have estimated a much longer interval.
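The post does not spell out its derived equation, so the sketch below uses one plausible model, which is an assumption on my part rather than the author's exact derivation: if each of the L daily contacts is equally likely to come from any of your N acquaintances, the chance of at least one coincidence in a day is 1 − (1 − k/N)^L, and the waiting time until the target probability is reached follows from the geometric distribution.

```python
import math

def days_until_coincidence(N, k, L, target=0.9):
    """Days until a 'thought about you, then you were contacted' coincidence
    reaches the target probability, under the simple random-contact model
    described above (an assumption, not the post's own equation).
    """
    p_day = 1 - (1 - k / N) ** L          # P(at least one coincidence today)
    return math.log(1 - target) / math.log(1 - p_day)

print(round(days_until_coincidence(1500, 10, 5)))     # 69: every couple of months
print(round(days_until_coincidence(200, 30, 10), 1))  # 1.4: practically every day
```

Even this crude model reproduces the post's range, from roughly weekly to roughly every other month, depending on how social and how reflective you are.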

If you are curious about how often you might expect such coincidences to occur, e-mail your numbers for N, k, and L to me and I’ll calculate the estimate for your case and send it to you.

Next time you get that “Ralph” call, rather than attributing it to ESP, you might tell Ralph: “Hey, I was just thinking about you, so you can consider yourself my coincidence of the month.”

*Featured image: Ancient Planet by PIRO4D. Public domain via **Pixabay**.*

The post Coincidences are underrated appeared first on OUPblog.

]]>The post Prime numbers and how to find them appeared first on OUPblog.

]]>A prime number is always bigger than 1 and can only be divided by itself and 1 – no other number divides into it. So the number 2 is the first prime number, then 3, 5, 7, and so on. Whole numbers bigger than 1 that are not prime are called composite numbers (they are composed of smaller factors).

Prime numbers are so tantalizing because they seem to be in never ending supply, and are distributed somewhat randomly throughout all the other numbers. Also, no-one has (yet) found a simple and quick way to find a specific (new) prime number.

Because of this, very large prime numbers are used every day when encrypting data, making the online world a safer place to communicate, move money, and control our households. But could we ever run out of prime numbers? And how can we find new, incredibly large ones?

This got us interested in learning more about primes, so we’ve collected together some facts about these elusive numbers:

- A simple way to find prime numbers is to write out a list of all numbers and then cross off the composite numbers as you find them – this is called the *Sieve of Eratosthenes*. However, this can take a long time!
- In 2002 a quicker way to test whether a number is prime was discovered – an algorithm called the ‘AKS primality test’, published by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena.
- Even though prime numbers seem to be randomly distributed, there are fewer large primes than smaller ones. This is logical, as there are more ways for large numbers to not be prime, but mathematicians ask: how much rarer are larger primes?
- In 2001 a group of computer scientists from IBM and Stanford University showed that a quantum computer could be programmed to find the prime factors of numbers.
- The RSA enciphering process, published in 1978 by Ron Rivest, Adi Shamir, and Leonard Adleman, is used to hide plaintext messages using prime numbers. In this process every person has a private key which is made up of three numbers, two of which are very large prime numbers.
- At any moment in time, the largest known prime number is usually a Mersenne prime – a prime of the form 2^p − 1.
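The Sieve of Eratosthenes mentioned in the first item takes only a few lines of Python (my sketch, not part of the original post). It crosses off multiples of each prime, starting from the prime's square:

```python
def sieve(limit):
    """Sieve of Eratosthenes: cross off composites, return all primes up to limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False       # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Multiples below n*n were already crossed off by smaller primes.
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n in range(limit + 1) if is_prime[n]]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

This is fast for millions of numbers but hopeless for the hundreds-of-digits primes used in encryption, which is exactly why tests such as AKS matter.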

*Featured image credit: numbers by morebyless. CC-BY-2.0 via **Flickr**.*

The post Prime numbers and how to find them appeared first on OUPblog.

]]>The post Opening the door for the next Ramanujan appeared first on OUPblog.

]]>It is still possible to learn mathematics to a high standard at a British university but there is no doubt that the fun and satisfaction the subject affords to those who naturally enjoy it has taken a hit. Students are constantly reminded about the lifelong debts they are incurring and how they need to be thoroughly aware of the demands of their future employers. The fretting over this within universities is relentless. To be fair, students generally report high to very high levels of satisfaction with their courses right throughout the university system. Certainly university staff are kept on their toes by being annually called to account by the National Student Survey, which is a data set that offers no hiding place. We should bear in mind, however, that this key performance indicator does not measure the extent to which students have mastered their degree subject. What is important here is getting everyone to say they are happy, which is quite another matter.

This all contrasts with the life of the main character, Srinivasa Ramanujan, in the recent film *The Man Who Knew Infinity*. The Indian genius of the early twentieth century had a reasonable high school education, after which he was almost entirely self-taught. It seems he got hold of a handful of British mathematics books, amongst them G. S. Carr’s *A Synopsis of Elementary Results in Pure and Applied Mathematics* (1886). I understand that this was not even a very good book in the ordinary sense, for it merely listed around five thousand mathematical facts in a rather disjointed fashion with little in the way of example or proof. This compendium, however, suited the young Ramanujan perfectly, for he devoured it, filling in the gaps and making new additions of his own. Through this process of learning he emerged as a master of deep and difficult aspects of mathematics, although inevitably he remained quite ignorant of some other important fields within the subject.

It would therefore be a very good thing if everyone had unfettered online access to the contents of a British general mathematics degree. Mathematics is the subject among the sciences that most lends itself to learning through books and online sources alone. There is nothing fake or phoney when it comes to maths. The content of the subject, being completely and undeniably true, does not date. Mathematics texts and lectures from many decades ago remain as valuable as ever. Indeed, older texts are often refreshing to read because they are so free from clutter. There are new developments of course but learning from high quality older material will never lead you astray.

I had thought this had already been taken care of: for ten years or more, many universities, for example MIT in the United States, have granted open online access to all their teaching materials, completely free of charge. There is no need even to register your interest — just go to their website and help yourself. Modern day Ramanujans would seem not to have a problem coming to grips with the subject.

The reality, however, is somewhat different, and softer barriers remain. The attitude of these admirable institutions is relaxed but not necessarily that helpful to the private student, who is left very much to their own devices. There is little guidance as to what you need to know, and what is available online depends on the decisions of individual lecturers, so there is no consistency of presentation. Acquiring an overall picture of mainstream mathematics is not as straightforward as one might expect. It would be relatively easy to remedy this, and the rather rigid framework of British degrees could be useful. In Britain, a degree normally consists of 24 modules (eight per year), each demanding a minimum of 50 hours of study (coffee breaks not included). If we were to set up a suite of 24 modules for a general mathematics degree that met the so-called QAA Benchmark and place the collection online for anyone on the planet to access, it would be welcomed by poor would-be mathematicians everywhere around the globe. The simplicity and clarity of that setting would be understood and appreciated.

This modern day Ramanujan Project would require some work by the mathematical community but it would largely be a one-off task. As I have explained, the basic content of a mathematical undergraduate degree has no need to change rapidly over time for here we are talking about fundamental advanced mathematics and not cutting-edge research. Everyone, even a Ramanujan, needs to learn to walk before they can run and the helping hand we will be offering will long be remembered with gratitude and be a force for good in the world.

*Featured image credit: Black-and-white by Pexels. CC0 public domain via Pixabay.*

The post Opening the door for the next Ramanujan appeared first on OUPblog.

]]>