There are different ways to think about the importance of QFT. Firstly, we can think of it as the extension of Quantum Mechanics from a system of a few particles to a system of many particles. Quantum Mechanics can accurately explain the behaviour of one particle, and therefore it can only operate with a limited number of degrees of freedom. (A degree of freedom of a physical system is a variable that is necessary to characterise the state of the system. For example, a system that is confined to move in one direction with a fixed velocity has 2 degrees of freedom.) As such, QFT extends QM so that we are able to handle systems of many particles and infinite degrees of freedom.
Quantum Field Theory can also be thought of as the reconciliation of Quantum Mechanics and Special Relativity. The Schrödinger Equation (the fundamental law for the evolution of Quantum Mechanical states in time) cannot obey the requirements of relativistic theories. Special Relativity, a relativistic theory as the name would suggest, requires that the physical laws of nature are invariant under certain transformations (namely Lorentz transformations). For example, a law of nature in one reference frame must look exactly the same in a different reference frame, say one shifted in position or boosted by a certain velocity. However the Schrödinger Equation is not invariant under such transformations, and a quantum mechanical state will not evolve in exactly the same manner as one in a different frame. Additionally, a second clash between Quantum Mechanics and Special Relativity occurs when particles have velocities close to the speed of light, as here Quantum Mechanics breaks down. But QFT allows us to work in relativistic frames, which extends our understanding of the world of the tiny enormously, as tiny particles are often able to move at very high speeds.
This diagram illustrates the two points above, N stands for the number of degrees of freedom, SRT for Special Theory of Relativity.
Quantum Field Theory treats particles as excited states of an underlying field (see ‘What is a Field‘ for an introduction to the concept of a field). In QFT, interactions among particles are then described by interactions among the underlying quantum fields. The formulation of the theory combines classical field theory, special relativity and quantum mechanics in a new overarching manner. QFT was the pivotal rung in the ladder that elevated our understanding of the tiny into the realm of the fast moving, whilst also extending our ability to analyse systems with many particles and infinite degrees of freedom.
QFT is a wonderfully successful theory and one of modern physics’ great accomplishments. It is an effective field theory and is widely believed to be a good low-energy approximation to a more fundamental theory, one which could take physics towards the final frontier of incorporating General Relativity with the quantum world.
2017 marks the 40th anniversary of the Voyager missions. The missions were originally conceived with the intention to explore Jupiter and Saturn and then, if successful, to press on to the further reaches of our solar system. The Voyager missions have become the success story of space missions, so let’s recap their journey.
Way back in the 1960s calculations revealed that, due to the alignment of the planets, it would be possible for a spacecraft launched in the late 1970s to visit all four of the outer giants in the solar system with an orbit that manipulated the gravity of each planet to swing the spacecraft round and on to the next. The technical term for this is a gravity assist or, the cooler term, a gravitational slingshot. And so the mission begins, seizing this opportunity which only comes round roughly once every 200 years, truly a once in a lifetime chance.
The team at NASA decides to create twin spacecraft, Voyager 1 & 2, which are designed to take slightly different orbits. Voyager 2 is launched first on August 20 1977, followed 16 days later by Voyager 1, which receives the title of 1 because it will reach Jupiter and Saturn first. The Voyagers spend the next 20+ years travelling through the solar system sending back the most detailed images we have ever seen. At Jupiter, for the first time we can see active volcanoes on the moons, understand that the Red Spot is an enormous cyclone-like storm and detect the presence of lightning. At Saturn we discover three more moons, come to learn that the largest moon Titan harbours a thick Earth-like atmosphere and we propel our understanding of the composition of the rings. And these are just the highlights. The main objective of the Voyager missions is a success, but these marathon runners are only just getting started…
After Voyager 2 hurtles past Uranus and Neptune, encountering several new moons around each and sending back high resolution images of these icy worlds, Voyager 1 gives us one of the mission’s most famous pieces in 1990. At a distance of 4 billion miles from the sun, Voyager 1 takes the ‘Solar System Family Portrait’, a series of images that capture all the planets in orbit around the sun, and the ‘Pale Blue Dot’ image, which captures the Earth suspended as a tiny speck in a sunbeam. This never-before-seen perspective inspires Carl Sagan, a leader behind the mission, to write his piece on the humble nature of our planet and of all those that reside on it.
The Voyagers now continue to press outward and conduct studies of interplanetary space. In 1998 Voyager 1 became the farthest human-made object from Earth, going further than any craft had gone before. And the marathon journey just keeps on going. In 2004 Voyager 1 crossed the barrier known as the ‘Termination Shock’, where the solar wind slows down and heats up as it clashes with the interstellar wind. This new area of space is known as the Heliosheath and officially marks the outer edge of the solar system. Then recently, in 2012, Voyager 1 entered interstellar space, passing beyond the final boundary known as the Heliopause – the boundary between our solar bubble and the matter ejected by the explosions of other stars (soon to be followed by Voyager 2). The spacecraft continue to study ultraviolet sources amongst the stars in interstellar space and are still capable of sending this data back to Earth. In 2013 the first measurement of the density of the interstellar medium was made, when an ‘explosion’ from the sun caused waves to ripple outward through the plasma of interstellar space.
As well as the collection of data and measurements, the Voyagers have another important purpose, a purpose which may well never be fulfilled. They both carry a ‘Golden Record’, a 12-inch gold-plated copper disk, a beautiful artefact which plays the role of a kind of time capsule. The record would, if obtained by extraterrestrials, attempt to communicate the story of our world. It contains a variety of images, natural sounds of Earth, spoken greetings in many languages, and music specifically curated to best display the diversity of life and culture on Earth. The ultimate message in a bottle.
At this moment in time Voyager 1 is 12,999,227,000 miles away from Earth, Voyager 2 is 10,728,140,000 miles away, and they both rack up miles every second – around 330 million miles each year. It currently takes 19 hours for a light signal to travel from Voyager 1 to Earth, so it is safe to say communication isn’t easy. It is amazing to think that even with 1970s hardware, 40 years on scientists can still communicate with the craft, but how much longer this will continue is uncertain. It is predicted communications will drop off between 2022 and 2025. However, it is not the growing distance that is the main problem. The issue is the power supply: the crafts were launched with a finite amount of it, so eventually they will run out of juice and have to wander the galaxy alone, meaning we’ll no longer be able to locate them, transmit or receive. The spaceships will go completely off the radar, but the distance they’ve covered since 1977 is remarkable!
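As a sanity check, the quoted 19 hours follows directly from the distance above (a quick sketch; the mile-based speed of light is the standard textbook value, not a figure from the original post):

```python
# One-way light travel time from Voyager 1, using the distance quoted above.
distance_miles = 12_999_227_000       # Voyager 1's distance from Earth
speed_of_light_miles_s = 186_282      # speed of light in miles per second

travel_seconds = distance_miles / speed_of_light_miles_s
travel_hours = travel_seconds / 3600
print(f"One-way light travel time: {travel_hours:.1f} hours")
```

Round-trip communication with the craft therefore takes the best part of two days.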
So the Voyagers, the ultimate expectation exceeders, continue into their VIM phase (Voyager Interstellar Mission) 40 years on and still reach for the stars. Although we will soon lose communication with our old friends, they carry with them the human blueprint and the story of our pale blue dot into deep space, a story we hope may one day be found by another.
Firstly, a group, denoted G, is a collection of elements, call them g(1), g(2), g(3)…, together with a group operation, denote it *, which determines how the elements act on each other. Now the elements of the group must obey the 4 axioms of group theory. I’ll lay them out first and everything may seem rather abstract to begin with – but bear with me, all will become clear after an example.
Axioms of Group Theory
In each group there must exist an element called the identity, denoted e. When the identity element acts on any of the other group elements it essentially does nothing; the element remains the same. In group theory language this is written:
e * g = g or g * e = g
The element is unchanged when acted on by the identity.
This principle states that the product of any two group elements will produce an element that is also part of the same group.
For example if g(1) and g(2) belong to G, then g(1)*g(2) must also belong to G.
This principle states that the grouping of operations between elements can be rearranged. If g(1) acts on the product of g(2) and g(3), this is the same as the product of g(1) and g(2) acting on g(3). In group theory language this is written:
g(1) * (g(2)*g(3)) = (g(1)*g(2)) * g(3)
Finally, for each pre-existing element there must exist an element which is its inverse. The inverse is denoted with a superscript -1 after the element, so the inverse of g is written g⁻¹.
So the inverse of element g(1) is g(1)⁻¹.
When each element is acted on by its inverse it gives… the identity!
In group theory language this is written:
g(1)⁻¹ * g(1) = e, or more generally g⁻¹ * g = e
Ok, this must all seem extremely abstract without an example, so let’s introduce the square – one of the simplest examples we can work with. Group theory is all about respecting and classifying symmetries in nature, so the question we want to ask is: what transformations exist that preserve the symmetry of the square?
A square has four sides, forming four right angles. What action can we perform on the square that will preserve its shape/symmetry? If we rotate the square by 90 degrees, we will take point a to point b, point b to point c, point c to point d and point d to point a – but the square will still look exactly the same. In fact if we rotate the square by 180 degrees we’ll still get a square as well, except point a will go to c, point b will go to d, point c will go to a and point d will go to b! Ok, very nice. I think we now see that if we rotate by 270 degrees a will go to d, b will go to c, c will go to b and d will go to a. And finally if we rotate by 360 degrees everything goes back to its original place and nothing changes!
These transformations/rotations form the elements of the group G – in this case the cyclic group of a square.
Let g(1) = clockwise rotation by 90 degrees
g(2) = clockwise rotation by 180 degrees
g(3) = clockwise rotation by 270 degrees
and what’s rotation by 360 degrees? Of course! It’s e – the identity element.
Let’s now test the axioms to make sure these elements fit our definition for a group.
1. Identity – Already checked, we can see that a rotation by 360 degrees leaves all sides as they were to begin with. The e element exists.
2. Closure – Is, for example, g(2)*g(3) a member of the original group? This would be a rotation of 180 degrees followed by 270 degrees, so 450 degrees in total. Perform this on the square: it’s rotating through 360 degrees then adding an extra 90. So yes, it gives back the same result as you would get if you just performed g(1).
g(2)*g(3) = g(1) – Check
You can try this with any combination of elements and check it works!
3. Associativity – Does g(1) * (g(2)*g(3)) = (g(1)*g(2)) * g(3) ?
Let’s try it: the left hand side is 90 degrees then 450 degrees (total 540 degrees). The right hand side is 270 degrees then 270 degrees (total 540 degrees). All rotations here are taken to be clockwise, so it does not matter how you group them; they will produce the same outcome.
4. Inverse – This one is very straightforward. What is the inverse of a clockwise rotation? An anti-clockwise rotation.
So for g(1), the inverse g(1)⁻¹ is an anticlockwise rotation by 90 degrees. If we perform g(1) then g(1)⁻¹ we undo our first rotation and get back what we started with, a.k.a. e
g(1)⁻¹ * g(1) = e
g(2)⁻¹ and g(3)⁻¹ are anticlockwise rotations of 180 and 270 degrees respectively.
So there we have it: the layout of the group G – in this case the cyclic group of a square.
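The four axiom checks above can also be sketched in a few lines of code. This is an illustrative sketch (not from the original post), representing each rotation by its clockwise angle mod 360:

```python
# The cyclic rotation group of the square: each element is a clockwise
# rotation angle, and the group operation * is "do one, then the other".
from itertools import product

elements = [0, 90, 180, 270]  # e, g(1), g(2), g(3)

def compose(a, b):
    """Group operation *: total rotation, wrapped back into 0-359 degrees."""
    return (a + b) % 360

# Identity: e * g = g for every g
assert all(compose(0, g) == g for g in elements)

# Closure: the product of any two elements is again an element
assert all(compose(a, b) in elements for a, b in product(elements, repeat=2))

# Associativity: g1 * (g2 * g3) == (g1 * g2) * g3
assert all(compose(a, compose(b, c)) == compose(compose(a, b), c)
           for a, b, c in product(elements, repeat=3))

# Inverse: each clockwise rotation is undone by the complementary one
inverses = {g: (360 - g) % 360 for g in elements}
assert all(compose(g, inverses[g]) == 0 for g in elements)

print("All four group axioms hold for the rotations of the square.")
```

Note how the closure check g(2)*g(3) = g(1) falls out automatically: (180 + 270) mod 360 is 90.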
Many other, far more beautiful, groups exist in nature and this was by far the simplest explanation I could give whilst still having enough elements to explain the axioms clearly. At some point in the future I’ll write again on more complex groups and may even dare to venture into the famous Lie groups. If you’re interested in reading more now, Wikipedia does a decent job, or I would recommend ‘Physics from Symmetry’ for an extremely clear approach, which I must say I’ve found hard to come by in this subject. For now, I hope you have found some closure on the subject.
Hello to everyone still reading Rationalising The Universe! Let me apologise for my radio silence; things have been moving around quite quickly for me, so let me explain why I have been so quiet before we go on a brief foray into the Black-Scholes. So first of all, RTU went to Rome.
Following on from that, the opportunity arose to take a qualification in investment analysis, which features a reasonable amount of mathematics and statistics. Whilst this didn’t fall directly into my recent study patterns, it presented me with a very good career opportunity and has some pretty neat applications of mathematics. As such, I have taken it – which means currently I am studying financial mathematics for the year! This site would be impossible to maintain if I wrote about anything other than the work I am doing, so as a result my writings will have a slightly more financial slant to them. They will remain applications of the beautiful language that I find fascinating.
Following on from this I had intended to write on this topic, but then RTU ended up at Glastonbury.
So that brings me to today – back ready to tap out posts a little more regularly, with a marginally different focus.
The Black-Scholes equation
The Black-Scholes equation was developed by Fischer Black, Myron Scholes and Robert Merton in the early 1970s, and earned Scholes and Merton the Nobel Prize (in Economic Sciences). It is actually one of the most important pieces of financial mathematics that exists, something young graduates are supposed to understand when they are grilled over a shiny Canary Wharf desk.
There are many different applications of the Black-Scholes, because there are many different types of options – but for the purposes of this explanation that isn’t all that relevant. So what is an option? Well, really it is as the dictionary would suggest – an option to do something. In financial terms, it usually means an option to buy something such as a stock (but it could be anything: gold, coal, a bond, rice, coffee etc.). Now the important thing about the option is you have the right to buy, but you don’t have to. Secondly, there will be a maturity – after this date, you no longer have the option.
Option contracts are very useful – if you know you might have to buy something in the future and you want to start working out what it costs, it may be very inconvenient if the costs are fluctuating. An option gives you certainty about what the price could be, without tying you into a cost you may not need to incur. But what is the value of the option? Clearly if I can buy the item cheaper than the option price at the time I need it, the option has no value; but if the option price is cheaper than I can get the item elsewhere, the option has value.
The value of an option comes from two places: its intrinsic value (how far the current price of the item sits above the agreed option price) and its time value (the chance that prices move further in your favour before maturity).
This is all discursive – but how is a bank (which sells the option) going to come up with a price? Not some discussion about what it should be worth, but an actual price that customers can be charged. Coming up with an accurate way of pricing such options allows them to exist; if this could not be achieved no one would ever sell them.
From now on when we talk about options we are talking about stock options – stock being the capital (money) raised by companies by issuing shares. Stock, share and equity are all interchangeable terms. The option we are considering has the following features: the right, but not the obligation, to buy a stock at a fixed price, exercisable only on a fixed maturity date.
The maths assumes that the returns on the underlying stock have a normal distribution (see here for further reading). The actual equation that is known as the Black-Scholes is a fairly gnarly second order partial differential equation – nobody wants that. If you are familiar with partial differentials and want to know more about these terms, do write to me in the comments section and I would be happy to discuss.
However for the option we have been describing (which in financial literature would be a European stock option), we can derive (though we won’t do so step by step) a far more useful result to price option contracts.
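That result is the well-known closed-form Black-Scholes call pricing formula; the original post showed it as an image, so the standard form is reproduced here:

```latex
C = S\,N(d_1) - K e^{-rT} N(d_2),
\qquad
d_1 = \frac{\ln(S/K) + \left(r + \tfrac{\sigma^2}{2}\right)T}{\sigma\sqrt{T}},
\qquad
d_2 = d_1 - \sigma\sqrt{T}
```

where C is the call premium, S the current stock price, K the strike price, T the time to maturity, r the risk-free interest rate, σ the volatility of the stock’s returns and N the cumulative normal distribution function.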
To make sure you are familiar with the terms: the call premium is what we are looking for – what we should charge for the pleasure of this option. This depends on all the terms on the right hand side. The current stock price, the time until the option matures and the option strike price are the most obvious ones (i.e. if I want an option at today’s price for tomorrow it will probably be quite cheap; if I want it for 10 years it would be much more expensive). The risk-free interest rate is a little more complicated, but can be summarised as the theoretical interest rate an investor would expect in the absence of all risk – the return on a totally risk-free asset. N represents the normal distribution (as a function) and e takes its normal meaning.
All of this information is taken from modelling the relationships with partial differential equations and then finding the solution. The result of this is a solution where values can be plugged in, to instantly spit out prices.
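To make that concrete, here is a minimal sketch of the closed-form European call price in code. The input values are purely illustrative (they are not the inputs behind the $26.30 example):

```python
# Black-Scholes price for a European call option.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution, N(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """S: spot price, K: strike, T: years to maturity,
    r: risk-free rate, sigma: annualised volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Example: an at-the-money call, one year to maturity
premium = black_scholes_call(S=100, K=100, T=1.0, r=0.02, sigma=0.25)
print(f"Call premium: ${premium:.2f}")
```

Plug in a spot price, strike, maturity, rate and volatility and the model instantly spits out a premium, exactly as described above.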
Using this information we can calculate stock option prices – here is one I plugged into a calculator earlier, where the model has told me the option I want would carry a premium of $26.30 – this is what I would need to pay the bank in order to enter into this contract. This is the pricing model that in some form is used by all the major banks.
So why is this important? Well, in short it’s big money. When I discussed using the option to mitigate your risks, in reality the sexier applications come when we use the option to speculate. If you think the market is going to soar – say, for example, you think Apple are about to soar on the release of a new phone – you may enter into an option to buy that stock close to today’s price. You can then buy the stock when the option matures and sell it on for more on the market, or (because the contracts are standardised) you can cash in and sell your option on to a third party at a higher price – because your option is now in the money. On the flip side, there is a bank somewhere that has to fulfil this option – so if they really mess up, it costs the bank a lot of money.
So the premium represents the expected amount that needs to be charged to make the contract rational to both parties. How individuals view the value of this option, however, depends on their view of the world. To me this is quite neat – the maths is strong, and when applied it allows us to price up something very complex: a choice. This is, however, strongly based on statistical distributions – much like in quantum physics, where we can only give the likelihood that a particle will be found at a range of values, we can only give the likelihood that the stock will arrive at a range of values in the future based on current prices. What happens could make you very rich, or be a real waste of money.
Back in the day it was believed by many, including the great Newton who put together the three laws of motion, that time was absolute. This meant that wherever you went in the universe, a clock at one end of the cosmos would be ticking in synchronisation with one at the other. As though there were a universal time, running on an almighty clock in the sky, and all other clocks in the cosmos would adhere to its pace. Not so. With the discovery of Special Relativity, we learn that the ticking of one’s clock, and the time measured between events, depends on where you are in the universe. It depends on whether you’re near a gravitational field and how fast you are moving. The closer you move to the speed of light, the slower your clock will tick. The closer you are to a heavy mass, the slower your clock will tick. Let me just caveat here – this is not a special property of just clocks! This is the property of time as we know it, itself. A clock is just a mechanical object that we have devised, which has a constant periodic tick against which we can then measure other events. It is not just this tick that slows down, it is all atomic processes. The beating of your very heart would slow down in a strong gravitational potential or when travelling at a fast speed – for in essence your heart (if regular in its beating, I hope!) is nothing more than a type of clock. If you need a reminder of how time is warped by relativistic effects I refer you to these two posts: ‘Does the present really exist‘ and ‘Black Holes: #1 Falling In‘.
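The “closer to the speed of light, the slower your clock ticks” claim can be made concrete with the standard Lorentz factor γ. A quick sketch (the speeds below are illustrative choices, not anything from the posts linked above):

```python
# Time dilation factor for a clock moving at a given fraction of c.
from math import sqrt

def lorentz_gamma(v_fraction_of_c):
    """gamma = 1 / sqrt(1 - v^2/c^2): a moving clock is seen to tick
    gamma times slower by a stationary observer."""
    return 1.0 / sqrt(1.0 - v_fraction_of_c**2)

for v in (0.1, 0.5, 0.9, 0.99):
    print(f"v = {v:.2f}c  ->  gamma = {lorentz_gamma(v):.3f}")
```

At everyday speeds γ is essentially 1, which is why nobody noticed the effect before the 20th century; it only blows up as v approaches c.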
From now on I’ll take it as understood that we all accept that modern physics tells us time as we know it is not absolute (accepting this deep down in your being is another matter entirely, and I’m sure even the greatest theoretical physicists have trouble with it).
When describing an object we use 4 coordinates: three for the spatial position of the object and one for the time – x, y, z, t. Now the framework used to describe the geometry of everyday space is called, by mathematicians, Euclidean geometry. Another very useful tool employed to visualise spatial geometry is a standard two-dimensional graph. If we plot on a page two axes – x and y, one representing one direction, e.g. North, and the other a direction perpendicular to it, e.g. East – and then plot the relative positions of two objects on this graph, the line joining them represents the distance between these two objects.
Of course in real life there are three spatial dimensions, so we can also plot a three-dimensional graph. Imagine a room. The z axis protrudes out of the floor straight up to the ceiling, and the x and y axes lie in the floor at right angles. Your feet exist at one point in this graph and the door knob at another; the straight line across these three dimensions represents the distance between your feet and the door. Simple right? I’m sure you knew that already. Where am I going with this? Bear with me.
(A 2D representation of a 3D graph!)
Next step. How do we describe the geometry of four dimensions? Well, this is the geometry of space-time. The graph we use here is called a space-time graph or space-time diagram. Now unfortunately I can’t create a nice visual analogy for you this time, because if you can visualise four dimensions you’re a super-human (or an alien). We have the three-dimensional graph from before with an extra axis, protruding out in an extra fourth dimension representing time. Now again we plot two things on the graph – we call these things events instead of objects now, because they include time. For example, Event A could be Big Ben chiming at 12pm on the 18th June 2017 and Event B could be New Year’s Eve 2017 at the Eiffel Tower. Now what does the line on the space-time graph connecting these two events signify? Clearly not distance. The line connecting two events on a four-dimensional space-time graph represents a quantity theoretical physicists call proper time.
Proper time?! What on earth is that, I hear you say. Let me explain. Proper time is a construct made from the differences in the four coordinates x, y, z, t between two events (along with a factor of c – the speed of light). And whereas the time (t) between two events is not always measured to be the same because of relativistic effects (somebody running fast will measure the time between two events to be less than somebody standing still), any observer, regardless of their place in the universe or their speed, will always measure the proper time (the combination of the change in all of x, y, z and t) to be the same. This is very profound indeed – we’ve found a quantity, composed of the spatial and temporal coordinates, that is invariant. The symbol for proper time is tau, τ. The equation is given here:
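The original post showed the equation as an image; the standard expression for the proper time between two events separated by Δt, Δx, Δy, Δz is:

```latex
(\Delta\tau)^2 \;=\; (\Delta t)^2 \;-\; \frac{(\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2}{c^2}
```

Every observer agrees on Δτ, even though different observers disagree on the individual Δt and Δx, Δy, Δz.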
Why do physicists call it proper time? Well my inkling is because, as I mentioned before, we as humans are so deeply unsettled by the fact that time is not measured the same for all that we wanted to give the quantity that we did find to be fixed, a connection to our beloved time.
So just to reiterate the key points here. A space-time graph is a graph of four dimensions, which exists in four-dimensional space-time geometry (x, y, z and t). Two events are plotted on the graph and the magnitude of the line joining them represents the proper time between the two events. This quantity is measured to be the same by any observer, regardless of their motion. So as not to confuse these special 4D lines with 2D or 3D lines in normal Euclidean geometry, we call the lines in four dimensions worldlines.
(A 2D representation of a 4D space-time graph – even worse! You can ignore the world sheet and world volume for today.)
In a way this discovery does alleviate my unsettled feelings to do with the relative nature of time. With proper time there now does exist a quantity, composed of all four dimensions whose measurement remains invariant between any two events in this universe. The invariance, we initially believed held for time, exists instead on a higher dimensional level, incorporating time as a component. And thankfully it does, for a universe where everything was truly relative and nothing absolute wouldn’t sit well with our burning desire to seek out an underlying simplicity to nature.
AI alert – computer power is expected to match the power of the human brain by 2019! The year is a guesstimate but we can safely say this will be achieved before 2030, which, if you haven’t quite swallowed it, I’ll reiterate – by 2030 computers will be as smart as the human minds that created them. We’re also going to see a hell of a lot more AI roaming around, performing routine tasks that used to be human responsibilities – I’m talking rubbish collecting, supermarket packing, clerical duties. Owning an AI to help with household tasks will become commonplace by 2030, though whether the sci-fi craze of making them model the human appearance will catch on or be thought too creepy is questionable. The next question is then whether their intellect can autonomously advance so that it surpasses that of humans… I can refer you to countless sci-fi movies to explore this trail of thought.
Nanotechnology may well be our shining light of the future for breakthroughs in medicine and dentistry, bringing advancements in the diagnosis, treatment and prevention of diseases. In five years nano-biotechnology techniques will be able to examine and filter bodily fluids for tiny particles that reveal signs of diseases such as cancer before we even have any symptoms. Then, even when we do have cancer, imagine the scenario where nanoparticles are injected into the patient to directly kill cancerous cells without the effects of chemotherapy, which, along with attacking cancerous cells, attacks the patient overall. ‘The future of medicine won’t focus on treating the symptoms of a disease; it will focus on curing it at the genetic level.’ By mastering nanotechnology we will be able to pinpoint our medical efforts at the site of a disease, as opposed to clumsily blasting the patient all over or letting a medicine be ingested and distributed over the entire bloodstream. As well as medical applications, nanotechnology will have a big impact on the environment, with the clean manufacturing of materials, and on energy, where we will be creating ever smaller devices with ever larger storage and processing capabilities.
Interest in space exploration is peaking again. After the quiet years that followed the Apollo missions, when man conquered the moon, we can now finally see a drive towards taking the next step – sending humans to Mars and creating a sustainable base there. Two key players in this endeavour are NASA (of course) and SpaceX. SpaceX’s initiative focuses on the reusability of rockets – a more detailed post on their approach can be found here: ‘SpaceX – Making Humans Interplanetary‘. NASA’s mission will rely on the Orion spacecraft and an evolved version of the SLS, which will be the most powerful launch vehicle to date. NASA aims to put humans on the red planet by the mid-2030s, whereas SpaceX ambitiously quotes almost a decade ahead. For either mission, multiple unmanned cargo ships will need to start being sent across imminently, so that when the humans do arrive they have the supplies necessary to construct the basis of a sustainable colony.
As it stands, the buy-in for a space experience is far beyond the budget of the public. It currently costs a whopping $20 million to take a cruise to the ISS and a hefty $200,000 to go on a sub-orbital spaceflight with Virgin Galactic. But the market for space tourism is on the rise. Many companies are striving to get the idea into the mainstream with wacky ideas such as building low-earth-orbit satellite-hotels, so that guests can take spending a night under the stars to a whole new level. Companies pushing for such visions go by the names of Space Island, Galactic Suite and Orbital Technologies. All that’s required to get that price point down is innovation and demand; I’d certainly like this one to come about so I can add it to my bucket-list.
California legalised driverless cars way back in 2012 and the UK seems likely to follow before the end of the decade. Testing so far shows very promising safety results, and the elimination of human error – currently thought to account for 93% of crashes – should greatly reduce motor accidents. Theoretically speaking, if all cars were driverless and a communication network were put in place between the vehicles, no accident should ever occur. As well as increased safety we would likely also see increased efficiency, as in-built algorithms would allow cars to calculate optimum velocities, often acting as a network so that the whole column on a road would move forward at maximum speed. Sounds like a nicer commute.
The internet of things is the terminology used to describe the addition of wifi capabilities to a whole host of everyday household items. Refrigerators, trashcans and even toasters are becoming computerised and locking in to an online network allowing device-to-device communication. The wifi-connected appliances will allow you to remotely control the temperature of your house via a mobile device and perform even wackier tasks, like getting the toast to pop up just as you walk through the door or automatically placing food orders when supplies are running low – wonderful for all us efficiency freaks out there. The main aim of this innovation is to let home-owners monitor their consumption better, be it energy or food, and the internet of things will undoubtedly drive new trends in consumer behaviour. All of this is possible thanks to advances in 5G technology and supercapacitors that will be able to store much more energy for later release than the current generation of capacitors.
Nuclear fusion is the opposite of nuclear fission: instead of a heavy nucleus splitting into smaller nuclei, lighter nuclei fuse together to create a heavier nucleus, emitting energy at the same time. Nuclear fusion is the process that powers the sun, with four hydrogen nuclei coming together to form a helium nucleus, releasing a colossal amount of energy. Three key factors are required to achieve fusion: (1) Temperature – fusion requires an extremely high temperature to kick off the reaction, we’re talking 40,000,000 degrees! (2) Time – the components must be forcibly held together long enough to allow the reaction to begin. (3) Containment – successfully containing the whole chaotic procedure is a feat in itself. If these conditions can be met, fusion could solve the world’s energy problems for years to come whilst giving Planet Earth a welcome break from today’s energy-generating procedures, which spew out no end of pollutants.
3D printers have become fairly commonplace over the last couple of years with many companies now using them to create bespoke parts and prototypes. Digital libraries for parts to be printed are constantly growing, meaning that the owner requires very little design skill to exploit the technology. What is very intriguing is the talk that scientists will, in the future, be able to print functioning 3D organs that can be tailored to suit a patient’s needs, saving millions of lives and eliminating the dependence on donors.
Virtual reality and augmented reality technologies are getting a lot of limelight recently, with the phenomenon being brought to everyday users through the use of a headset and a smartphone. And the experience won’t stop at video games; the platform will foreseeably be used for education, communication and virtual experiences. School children can go on virtual trips to historic sites, colleagues across the world can participate in life-like meetings and a person may be able to experience the wonder of an African safari through their headset. Thinking even further ahead, VR hardware and software makers are discussing the incorporation of high-end touch sensors which will stimulate the other senses, making the technology leap from immersive to fully interactive and truly breaking down the barrier between the virtual and physical worlds.
Brave new world here we come. It will become commonplace and not at all taboo to map your foetus’s and then newborn child’s DNA to identify any presence or risk of disabilities or diseases. And this procedure won’t stop at birth: thanks to advances in single-cell analysis and the increase in handling of big data, DNA mapping will be the best way to assess and manage disease risk throughout a human life. When these types of procedures are applied to mass populations we will have a wealth of data which may help us predict and prevent epidemics.
So there we have it, a snapshot of some of the most exciting breakthroughs ahead. Which of these got your inner-geek jumping up and down? Drop us a comment below.
The world has changed rapidly; we have talked about the consequences of Brexit, the election in America and the general state of the world through a series of opinion posts with a scientific slant. Today we go on to address another issue in a brief opinion post – the ever rising phenomenon of fake news.
For anyone who has been living under a rock for the last few months, this is the phenomenon of information which is not factually correct disseminating through society and somehow influencing it. From the outset, we are going to draw a distinction between two types of fake news, before ditching the label to avoid the reader having to constantly picture Donald Trump’s orange face.
The first type is a problem, but not the subject of this post. In truth, those individuals who participate are just information criminals and where possible they should be stopped. They are the creators, the source, but they don’t spread the issue. How do we address the wildfire of incorrect information that burns through society?
We live in an era where people, particularly of my generation, read the majority of online content through information circulated on social media. This used to be a harmless endeavor, or perhaps I was just too young to realize, but it seemed that 5 or so years ago articles were just called 25 Places You Won’t Believe Are Actually Real or The 10 Sweets from Your Childhood You Forgot About. While this trivial fluff still clogs the online world, the growth of political, scientific or economic content is undeniable.
Generation social media has made it really easy for anyone to speak. If I was studying 20 years ago I simply wouldn’t have had such a large platform to discuss scientific ideas, yet now I am able to spark up a site in Starbucks one evening and, along with Mekhi, grow a healthy following. As such, I am the co-owner of a fairly smart looking site that presents scientific facts and opinions to a reasonable readership. (You might argue that there are no facts and only opinions, but that’s for another time).
On this site some of the things we discuss – particularly classical Physics – are undeniably presented as facts. That might seem obvious, but actually it’s a huge statement and it comes with a responsibility. Presenting something as a fact means you have considered all possible alternatives and you are beyond any reasonable doubt that what you are saying is the truth. This is why, aside from classical posts, our posts don’t have a heavy technical aspect; the highest qualification among the authors is a Bachelors degree, and it is important that information beyond this level is not presented as fact.
Of course we stray into areas far beyond a Bachelors degree – but the important point is the presentation of the post. Take for example my post on string theory: I am certain everything I wrote is either factually accurate or appropriately flagged as a statement of opinion, because while full comprehension of string theory is advanced, the basic understanding I have offered is not beyond my comfort zone. Indeed you may have noticed this post was flagged at the start as opinion, which I helpfully left in bold for you.
This formula has worked well for us – it’s quite simple: stay within our limits, and as our studies expand so does the content on this site. It’s quite neat, as the site slowly becomes a digital catalog of our collective scientific knowledge. The formula for creating accurate content is so simple, yet fake news is apparently a thing. Why?
Breaking the formula
The formula I described is not followed by many and it pains me – I see people genuinely trying to do a good thing and teach some science, but the content is either slightly or totally incorrect. The only way this can happen is through writing on subjects which you have not studied, which leaves a knowledge gap. Why would anyone do this? Temptation. We all like the idea of being able to write about the very frontiers of our field, to be able to eloquently express the biggest ideas out there, but the fact is there are very few who can. My goodness I want to be a Theoretical Physics genius, but I simply am not. It’s a knowledge pyramid: there are many who are able and qualified to communicate the most basic levels of knowledge, with fewer and fewer as we get closer to the top.
Let me caveat what I mean by qualified. The obvious measure would be the level of your formal education – so if you have studied to degree level there are probably a variety of degree-level topics you are comfortable with, and so forth. That rule can never be absolute – perhaps you scored 50% in a final module on a subject; this implies you may be missing about half the knowledge, which is far from ideal for a teacher. On the other end of the scale there are those who have relentlessly studied a topic they have sat no formal qualification in, who probably have a greater breadth of knowledge than someone confined to a curriculum. So while a qualification may act as a proxy, it is no licence. Myself, I always obey the rule that if I have to look up more than a few small details I don’t know what I am talking about.
The majority of incorrect information on the internet would be avoided by authors writing within their domain of knowledge.
Laziness
This idea is cynical but true – a large amount of poor quality information is spread through laziness. The top third of an article is always the most well read, because reading a long article – particularly a technical one – is hard going. Often it is the fault of the author if the reader does not stay engaged, but this is a victimless crime when the reader simply disengages and moves on. The real problem is when part of an article or text is read and then used as a source in another piece of writing without appreciating it in its entirety. When something is written on a particular topic, if you are going to quote it in any way – be that verbally or in writing – at least make an attempt to understand the full picture intended by the author.
For example, if one were to be very selective with my post on Bayes then I am sure it could be used to argue my belief in the existence of God. However if you were to read the entire article in full, you would understand it is about adopting a healthy and questioning mindset, with a whimsical worked example of belief in God and how one might arrive at the truth on such a subject – which is of course a very different picture.
Selective paraphrasing is a key factor involved in spreading false information, performed both with intent and through apathetic reading.
The speed of information sharing
The final point I have to make is the speed at which information travels, which really links to the point above. I would be willing to bet a substantial wager that the vast majority of articles people share obey that top-third rule – the top third of the article is read and it seems good, so ping, round it goes, and before you know it it’s shared with a network of people without being fully assessed. In science, medical news is the absolute worst for this. This story is made up (not that it matters anymore!) but imagine an article that presents the findings of a fungus that has been seen to exhibit some cancer-repelling properties in lab rats, with a snappy headline “The Fungus that Fights Tumors”, and just like a bush-fire in summer, the hopes of many rest on some fungus that the article concluded probably wouldn’t have any applications in humans.
The number of times we have had to cringe at the information that has been shared – from politicians tweeting their own name to presidents denying climate change. Having a platform to always express your thoughts is lethal, as thoughts need to grow and mature before they are released. My personal favorite was this graph tweeted by the Brexit department to show our “steady growth” in trade.
What this graph clearly shows is the UK reaping the benefits of European membership, with the heaviest growth being over the period of membership. Whether you agree with the EU or not is irrelevant here – the evidence provided does not support the conclusion, it contradicts it, and it is embarrassing and beneath a government department to make a mistake like this. People love to label themselves scientists; we are all scientists. We all take the evidence before us and use it to form conclusions about the way the world is – this is a long and proud tradition of being human, something which is being aggressively eroded by the world of social media. A world where style matters over substance, where sharing information that makes you look smart is more important than being smart. Never fall into this trap; intellectual prowess always has been, and always will be, more important than looking like a lifestyle blogger (no offence to my lifestyle blogger followers – you can have both).
We are past the point of thinking we can have some kind of regulation on the content that gets out there; the internet is unleashed. It has become a form of society – much like when you walk down a street, there is good, bad, criminal and trivial. The difference is when you walk down a street, in London at least, you only cross paths with people in your geographic area and you certainly would not have people’s opinions pressed upon you. The internet’s greatest asset is that we can exchange freely with people we normally wouldn’t; but increasingly this is becoming a weakness.
A huge amount of incorrect information would be avoided if people were as selective with online opinions as they are face-to-face.
Law 1: Uncertainty
In the macroscopic world, if you know the initial conditions of an object and you subject it to a force you can calculate its position at a later time. Let me give you an example: we have a plane which is currently over Canada and is flying with a fixed velocity. Knowing this, we will be able to determine its position after a certain duration of time. However the case is not so straightforward when it comes to subatomic particles. In this world knowledge about one thing means a trade-off with another. If we know the position of a particle accurately then we do not know its momentum (which, to recap, is its mass times its velocity) well at all. Basically, if we can pinpoint where an object is, we know very little about its momentum – seems counter-intuitive doesn’t it?! Welcome to the world of quantum.
The uncertainty that dominates this world is aptly named the ‘quantum uncertainty principle’ and it is not limited to position and momentum. Uncertainty acts on a multitude of other pairs of quantities, for example energy and time. The more refined your measurement of the energy of a quantum particle, the fuzzier your grip on the time duration in question. For example, in the classical world if a ball is trying to get over a wall it needs enough energy to do so – enough energy to reach the height required to pass over the wall. But in the quantum world a particle which doesn’t classically have the energy to pass a barrier can ‘tunnel’ through, appearing on the other side given enough uncertainty in the time taken to do so. It is this phenomenon that enables particle pairs to be produced out of a vacuum, as we saw in the Black Holes: Glowing and Shrinking post. So there’s law number 1: we basically can’t know much with certainty.
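For reference, the two trade-offs described above are usually written in standard notation as:

$$\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}, \qquad \Delta E \,\Delta t \;\geq\; \frac{\hbar}{2}$$

where $\hbar$ is the reduced Planck constant: the more precisely one quantity in a pair is pinned down (small $\Delta$), the larger the spread in its partner must be.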
Law 2: Quanta
In the classical world things seem continuous. Take energy for example: before quantum theory, electromagnetic energy was thought to be emitted continuously from a source. When looking at a light source it seems to be flowing out smoothly in all directions. However, through the joint efforts of Planck and Einstein it was discovered that energy is instead emitted in individual packets called ‘quanta’. (See the photoelectric effect experiment!) Quanta is the generic term for these discrete packets. In the case of electromagnetic radiation, instead of energy coming out in a continuous flow it is emitted in quanta called photons. Quantisation is a fundamental aspect of the quantum world and applies to many more things than just energy.
Quantisation is believed to extend right down to the fundamental level, and a quantum exists for every type of field. If you read my previous post ‘What is a field?’ you’ll remember that what we call space in everyday life is actually the gravitational field. Therefore, according to Quantum Theory, even space itself is quantised. In a nutshell, space is granular – if you analysed a region of space it would not be infinitely divisible; eventually one would get right down to the tiny granular points that make up the fabric of spacetime.
And it’s not just quanta of fields that come in these little bite-sized chunks but other quantities that crop up in Quantum Theory, such as spin and charge – they all come with a minimum unit size. You can’t keep reducing the value of these properties; you eventually hit the minimum boundary.
Law 3: Duality
Now, after Einstein realised that light came in discrete little chunks known as photons, our understanding was thrown under the bus. Previous experiments had shown us that light very much acts like a wave – it can interfere with itself, creating areas of constructive interference and destructive interference. (To understand this think of a water wave analogy: if the peaks of two waves coincide perfectly they create a bigger wave, but if the trough of one wave hits another’s peak they somewhat cancel out.) This phenomenon was seen with light, even though it was understood to be quantised into little chunks. Is that necessarily contradictory, you may say – can’t all the particles act together to cancel each other out in certain areas and vice versa? Well the real spooky fact is that this wave-like behaviour even happened when we fired out just one photon/quantum at a time! To explain this it would almost have to be as though the single photon was interfering with itself!
Louis de Broglie came along and proposed ‘wave-particle duality’: quite simply, quantum objects can act as both particles and waves, for they exist in a ‘wave-like superposition’ of all possible states. This leads us onto our next ‘pseudo’ law – superposition.
Law 4: Superposition
In the classical world things exist without doubt in one state – how confusing interactions with people would be if not! However, in the quantum world objects exist in a superposition of multiple states, which is represented by a ‘wavefunction’. The wavefunction encodes all the different states the object could be in along with the associated probabilities. For example a photon could be in position A or position B, and the wavefunction includes both these possibilities, along with the information on whether one is more likely than the other.
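The position A or position B example can be written out compactly; a minimal sketch in standard notation:

$$|\psi\rangle \;=\; \alpha\,|A\rangle \;+\; \beta\,|B\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$

where $|\alpha|^2$ and $|\beta|^2$ are the probabilities of finding the photon at A or B respectively – the wavefunction carries both possibilities at once, weighted by how likely each is.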
Whether particles actually exist in this plethora of states or whether this is just how we, from our macroscopic perspective, understand it is a question for philosophers of physics. However when we perform a measurement of a quantum object we can then pinpoint which of these multiple states it is in. Read on for what is probably the weirdest law yet…
Law 5: Measurement
In the classical world if somebody told us our act of observation would have an effect on the outcome of an experiment we would think them very self-righteous! If a ball is fired from point A, whether it hits point B or C does not depend on whether anyone is watching; it depends on its initial velocity alone – unless one believes oneself telekinetic. However in the quantum world the observer is a crucial element in determining an outcome.
Take the example of a photon being fired at a screen. (The full Double-Slit experiment is explained in my very first post here on RTU.) If nobody monitors the path of the photons, a wavelike pattern builds up on the screen. However if an observer actively measures the path the photon takes, a different pattern of lines builds up on the screen. Our act of observation actively causes a change in the outcome of the experiment! The general belief in Quantum Theory is that our act of measurement causes the particle to collapse from its superposition of multiple states to one definite position. The observer in quantum mechanics assumes a leading role.
Let me add a caveat here to take away some of the supernatural aura that currently seems to surround the observer. (A word of thanks to lwbut for explaining this so nicely in the comments below.) Because quantum particles are so tiny, any form of interference from us on the macroscopic scale can alter their behaviour. When we observe these particles, because we cannot do it solely with the human eye, we must use some form of equipment to capture information about the particle. Whatever method we use, be it a beam of light or anything else, will affect the particle in some way – something it would not have to endure if it were allowed to move independently of said observation.
So there we have the five principles of the quantum world: uncertainty, quantisation, duality, superposition and measurement – all counter-intuitive to our everyday experiences yet strong and sturdy laws when playing around in the realm of the tiny.
What is a thought experiment?
A thought experiment starts, as with all experiments, with the consideration of a certain hypothesis. It then involves setting up a hypothetical scenario in one’s head and using the knowledge one possesses to think through its consequences in order to arrive at the result. Thought experiments are often used to ‘perform’ experiments which may not be feasible in everyday life, for example involving black holes or life on fast-travelling spaceships. Thought experiments allow the thinker to transport themselves to strange scenarios and use their knowledge to resolve a problem. Time for an example.
Example #1: Leaning Tower of Pisa
Let’s start with a simple one that doesn’t require any knowledge of complex theories in Physics. (For a comprehensive list of thought experiments see Wikipedia.) Aristotle’s theory of gravity asserted that heavier objects fell faster when dropped. We now know that, in the absence of air resistance, all objects fall at the same rate under gravity regardless of their mass, and it is only air resistance, which acts proportionally more on lighter objects, that disguises the truth – making it seem like a feather falls slower than a bowling ball. It is Galileo we have to thank for our newfound understanding, via his ‘Leaning Tower of Pisa’ experiment. Legend has it that Galileo actually went to the top of the Leaning Tower of Pisa to perform this experiment, but scientific historians now believe this to be just for poetic effect and that the experiment was indeed a thought experiment.
Galileo imagined two stones, one much larger than the other, tied together with a piece of string and dropped from the top of a tower. If it were assumed that heavier objects fall faster than lighter ones, then the heavier stone would hurtle towards the ground faster than the lighter one and the string between the stones would become taut as the distance between them grew; the slower, lighter stone would then act as a drag, so the pair should fall slower than the heavy stone alone. However, Galileo then thought of the two stones as one combined system, whose overall mass was clearly greater than that of the larger stone alone, and which should therefore fall even faster than the larger stone alone. The contradiction here proves that the assumption made by Aristotle must be false, and it led Galileo on his way to his Law of Free Fall – that objects fall at one rate, regardless of their mass.
With this simple example alone you can see the power of the thought experiment. Of course this experiment can quite easily be performed on Earth nowadays in an artificial vacuum which eliminates air resistance. However, there are many famous thought experiments which work in domains much less accessible to humans.
Example #2: Einstein and the elevator
Probably the most famous thought experiment came from Einstein in his breakthrough to understanding the equivalence principle between acceleration and gravity. Einstein imagined he was in an elevator cabin isolated from the outside world. He dropped an object and saw it, in his mind’s eye, fall to the floor. He wondered: does this behaviour necessarily mean he is on Earth, experiencing the familiar force of gravity keeping his shoes on the floor of the elevator and pulling the object to the ground when it is dropped? No, not necessarily, he realised. The elevator cabin could be in a rocket in deep space, away from any gravitational pull of planets, and as long as the rocket was accelerating at exactly 9.81 metres per second squared (the acceleration due to gravity here on Earth) all the same effects would be observed. He would feel a force pushing the soles of his shoes down to the floor and acting on the falling object just as he did on Earth.
The observer in the cabin cannot distinguish between the two scenarios, and this made Einstein realise that accelerating frames of reference are indistinguishable from those in gravitational fields, leading to his theory of general relativity. Einstein was a master of the thought experiment, and he used the same method of pondering in his earlier theory of Special Relativity when he realised that when one sits on a train moving at a constant velocity, if the windows are blacked out and the ride is perfectly smooth, the situation is indistinguishable from the train standing still. The thought experiment is a powerful tool, which helped Einstein in his two major discoveries without his having to travel to deep space or annoy train conductors.
Example #3: Schrodinger’s Cat
Are thought experiments all sparks of genius or can things get a little befuddled? Let’s see Schrodinger’s take on things. Schrodinger, a quantum physicist, was unsettled by the ideas quantum theory was coming up with in the early 20th century. A famous interpretation of certain results in Quantum Mechanics was the Copenhagen interpretation: the idea that quantum objects exist in different states simultaneously until an observation occurs. Taking this, Schrodinger presented a scenario which extrapolates on the idea, with a cat in a box that is simultaneously dead and alive until someone looks in the box – two simultaneous different states. Schrodinger intended this thought experiment to make a mockery of the theory and show its absurdity. Instead the idea became a touchstone for the modern interpretation of quantum physics, advanced by many physicists who regard the ‘alive’ and ‘dead’ cat superposition as quite real. Clear evidence that the interpretation of the outcome of a thought experiment can be rather subjective.
Reflections
It seems thought experiments, because they cannot be performed or experimentally verified, can fall foul of the subjectivity of the mind that conjures them up or the minds that think about them. The process of analysing the outcome of a thought experiment requires deep knowledge of the field in which one is working, rigorous understanding of the laws that govern that particular science and the ability to review one’s logic without previous bias.
Physical experiment remains pivotal to testing theories and advancing a science and when one can run experimental tests to check results there is no question over the merit of doing this. However when it comes to pushing science to its limits and trying to find answers to those real big questions, thought experiments come into their own. The physical experiments humans can perform stuck on our tiny planet, working with our fixed gravitational field and objects on the same scale as our bodies, are very confined. We may not yet have the ability to physically probe the very tiny or travel close to black holes in reality but our existing knowledge coupled with the curiosity and imagination of the human mind allows us to sit in our armchair with a notebook and simply think.
I was thinking over topics I had never written on, and one of the things that immediately sprang to mind was statistics. This may strike some as odd, since it lies at the heart of modern physics; however there is a good and simple reason for this: Mekhi and I don’t enjoy it as much. Given the broad reach of modern science, I think it is permissible to say you love science whilst not enjoying an area of it, but in the interests of personal betterment we shall banish the elephant from the room.
In fairness today’s topic is quite interesting – we will talk about Bayes Theorem, the work of Rev. Thomas Bayes, who died in 1761 of one of those generic illnesses people seemed to die of in times of poorer sanitation. Bayes Theorem initially seems like a probability tool; however it is much more than that. It gives a way of thinking which I believe to be incredibly healthy – this is what gives it a broader appeal, making it worthy of a slot on the blog. Those of you who regularly watch the Big Bang Theory may have even spotted Bayes Theorem making a cameo – which hopefully gives some evidence that this is a popular area in modern science, rather than some niche bit of stats I decided to torture the readers with.
Some simple, slightly dry ideas
In order to appreciate the appeal of Bayes, you need to first understand roughly how it works. Bayes Theorem was presented in a 15,000-word essay, so there are a lot of different themes to it, but the central theme is events and evidence.
A fundamental point to consider is that a test and an event are distinct things. In the example we are going to build, the event is going to be me falling severely ill with a rare disease. Because I don’t want this to happen, we have come up with a test to check for the illness – which is to take my blood pressure and see if it is elevated above a certain threshold. The test isn’t exactly perfect – there are many things which can highly elevate my blood pressure, from a stressful day at work to several coffees, so the test could incorrectly diagnose me as severely ill (a false positive). It is also possible, although less likely, that there is another factor lowering my blood pressure on the very same day that gives me a negative reading when I am severely ill – say I have just come back from a relaxing holiday to Rome. This would be a false negative.
I’m sure by now you are screaming GIVE ME SOME NUMBERS, because truthfully you can’t explain as efficiently with words. If I am actually ill, there is a 90% chance that the test will read positive – the other 10% of the time is the false negative we spoke about, where my blood pressure has been lowered for another reason. On the other side, if I were perfectly well then the test will read negative 95% of the time; there is still a 5% chance that the test reads positive even though I am fine. What we can clearly draw from this is that the test probabilities are not the real probabilities – there are so many factors which can influence the test that we must make a distinction between the two. The real probabilities do exist, and are features of the universe; the test probabilities will be closer to the real probabilities if we can refine our blunt human instruments. In this example, we are rolling with a 1% chance that I am severely ill, and a 99% chance that I am not. All of these probabilities are pretend.
| | Joe Ill (1%) | Joe Well (99%) |
| --- | --- | --- |
| Test Positive | 90% | 5% |
| Test Negative | 10% | 95% |
From this table we see that I have a 1% chance of actually being ill, but that does not tell me if I am actually ill or not – I know the probability of having the disease but it still does not tell me if I have it… Only a test can tell me that, and tests are flawed. So the question really is – how good is this test, given the presence of these false positives and negatives? If my test is positive, how worried should I be? The test does not seem that bad, given that if I am ill it gets it right 90% of the time. Let’s do some analysis on the table – without getting too mathematical, to combine the probabilities of two independent events you multiply them. So if the chance of flipping a coin to heads is 50%, the probability of two heads in a row is 25% – which I can write as 0.5 x 0.5 = 0.25. Here I combine the probability of each test outcome with the real probabilities of me actually being ill or well.
| | Joe Ill (1%) | Joe Well (99%) |
| --- | --- | --- |
| Test Positive | The True Positive: 0.01 x 0.9 = **0.009** | The False Positive: 0.99 x 0.05 = 0.0495 |
| Test Negative | The False Negative: 0.01 x 0.1 = 0.001 | The True Negative: 0.99 x 0.95 = 0.9405 |
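The arithmetic in the table can be reproduced in a few lines; this is a minimal sketch using the made-up probabilities from the example (the variable names are mine, purely for illustration).

```python
# Made-up probabilities from the example above.
p_ill = 0.01             # real probability of being ill
p_pos_given_ill = 0.90   # chance the test reads positive if I am ill
p_pos_given_well = 0.05  # chance the test reads positive if I am well

# Joint probabilities: multiply the real probability by the test probability.
true_positive = p_ill * p_pos_given_ill                # 0.009
false_positive = (1 - p_ill) * p_pos_given_well        # 0.0495
false_negative = p_ill * (1 - p_pos_given_ill)         # 0.001
true_negative = (1 - p_ill) * (1 - p_pos_given_well)   # 0.9405

# The four outcomes cover every possibility, so they sum to 1.
total = true_positive + false_positive + false_negative + true_negative
print(round(total, 6))  # 1.0
```

Notice how the false positive cell (0.0495) is over five times larger than the true positive cell (0.009), which is the seed of the surprise to come.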
Next I am going to drop Bayes theorem, which will frighten you, then I will explain it and you will see it is quite cute.
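For the record, written out for our example (A being the event of me being ill, X a positive test result), the theorem reads:

$$\Pr(A|X) \;=\; \frac{\Pr(X|A)\,\Pr(A)}{\Pr(X|A)\,\Pr(A) \;+\; \Pr(X|\neg A)\,\Pr(\neg A)}$$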
Pr(A|X) means probability of A given X; so here we could say the probability of me being ill (A) given a positive test result, X.
Then we have Pr(X|A)Pr(A), which is quite simple – it’s the probability of X given A, multiplied by the probability of A. So this is the probability of a positive result given me being ill, multiplied by the probability of me being ill. You can see this result in the top left hand corner of the table, which I have made bold for your convenience. Then the denominator is just “everything” that could produce the positive test result – the probability of a positive result given that I am ill multiplied by the probability of being ill, plus the probability of a positive result given that I am not ill multiplied by the probability of not being ill. Those words may seem like a total mess – that’s why we use symbols in mathematics; once you know how to read them life is better.
What is the actual answer? If you crunch the numbers you get 15.4% – that is, the probability of actually being ill given a positive test result is just 15.4%. If that seems obvious to you, then you are a better statistician than me. If it seems a little at odds with what I said earlier, then such is Bayes, and this is why it is so important to run the numbers. The reason is that the event itself, me being ill, is so rare. Don't get lured in by the silly human intuitions – the real probability was 1%. So even though the test gives the correct result 90% of the time when I am ill, it is so rare that I actually am ill that false positives occur far more often. Because there is so much room for false positive results, even after a positive result from the test I am probably fine. Pretty dumb test, it seems.
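The arithmetic above fits in a few lines of Python (a minimal sketch; the function and argument names are mine, chosen for readability):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' theorem: Pr(ill | positive test), using the table's numbers."""
    true_pos = prior * sensitivity                 # 0.01 x 0.9 = 0.009
    false_pos = (1 - prior) * false_positive_rate  # 0.99 x 0.05 = 0.0495
    return true_pos / (true_pos + false_pos)

p = posterior(prior=0.01, sensitivity=0.9, false_positive_rate=0.05)
print(f"{p:.1%}")  # 15.4%
```

The numerator is the bold true-positive cell from the table, and the denominator is the whole "Test Positive" row – everything that could have produced the positive result.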
Well done, you stuck with the statistics. I didn’t like writing it and you didn’t enjoy reading it, but let us reward ourselves with something which is quite neat.
Some deeper, juicy ideas
The above is a method by which you can compute the probability of something given an event, which is useful. It teaches us interesting things about the crude nature of our tests, and highlights how false positives or negatives can render tests almost useless – in the sense that you can't even rely on the result. What is really interesting is how Bayes' theorem can be used for so many tasks – it can (and does) sort spam emails, help robots to learn and even help you to decide things in your own mind. It is believed that your brain employs many different Bayesian algorithms without you even realising (so you kind of know a lot of this already!).
The power in Bayes' theorem comes from taking the best available evidence we have and using it to update our belief system – this is highly scientific. In the above example, after a positive result we now know that I have a 15% chance of actually being ill. This is far different from the situation we started in, when we ran the test believing there was a 1% chance of me being ill. If we were to run the test again we could perform the same calculations, but starting from 15% instead of 1% – and assuming a second positive result like the first, we can now be 76% sure that I am ill. So by running the test twice, a test which is highly flawed, it becomes much more useful. We would only need one or perhaps two more positive results before we were comfortable enough to start treatment. This is a much better way of diagnosing people, although it comes with an economic cost. What we have done is powerful – we are only doing the same test over and over, but rather than looking at the results in isolation we continually update our belief system and become more confident in our assertions.
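Chaining the updates described above can be sketched as a loop, where each posterior becomes the next prior (again a minimal illustration; the numbers match the worked example):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """One Bayesian update: Pr(ill | positive test)."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

belief = 0.01  # start from the 1% base rate
for test_number in (1, 2):
    # each positive result updates our belief, which becomes the new prior
    belief = posterior(belief, sensitivity=0.9, false_positive_rate=0.05)
    print(f"after positive test {test_number}: {belief:.1%}")
# after positive test 1: 15.4%
# after positive test 2: 76.6%
```

Nothing about the test changed between the two runs – only our prior did, and that alone lifts our confidence from 15% to 76%.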
You might be forgiven for thinking by this point – hang on, this just feels like common sense. I hope you do, because Bayes' theorem is just that: a codification of common sense that we can use to find the truth. Bayes' theorem tells us that if people are willing to play the game, we can find the truth about the whole universe – it will always win out. Take the big question, God. Richard, for example, who often engages in lively discussion here at RTU, firmly believes in God, whereas I firmly do not. Neither of us actually knows the probability that God exists; we each have only the one we have decided on. These are our prior credences. If we both played the game and collected evidence, we could keep updating our belief systems in the manner of Bayes. We take our prior credence – that God does or does not exist, and how probable we think that is – then take some evidence or event (say, a scientific discovery that seems to contradict a holy scripture) and update our beliefs. We would need to understand (in our own minds) how likely it is that this indicator is actually a false positive or negative, just as we did in the example above – say, for instance, the scientific discovery were actually wrong, since it is itself just based on tests. We update our belief systems, and we each come away with a slightly different view of the world – I may be firmer in my belief, or less firm, depending on the example, and we may have moved our beliefs by different amounts; however, if we are being honest, we will both be closer in our views.
It all depends on prior credences about certain events and pieces of evidence, coupled with a truly honest refreshing of our belief system. There is a deep philosophical message here: if you are not thorough about seeking and considering all explanations for your evidence, the truth will be hidden from you and your evidence will only serve to confirm your prejudice. Search for evidence widely, consider all options thoroughly, update your prior credences with unflinching honesty, and be rewarded with the truth. This is what it's all about – this is how we decide if God is real, determine if there is a multiverse, judge whether string theory is correct, or conduct medical testing as we saw earlier! This is the definitive way to think if you want to arrive at the truth.
An additional beauty of this theorem is the change in direction from Popper's falsifiability criterion – it means we can consider big theories we cannot falsify and still make progress in determining whether they are true, without them dying some big scientific death. Non-falsifiable? No issue – we can just run the Bayes method and get closer and closer to the truth.
I must remind you that with great power comes great responsibility. If this logic is abused you will arrive at the wrong conclusions. Many humans, for example, have emotions – they want things to be true and they are unwilling to update their credences in the way that they should. Desperately wanting there to be a heaven does not lend itself well to openly considering all options, which means that Bayes' theorem will not work and we will confirm our prejudices, as discussed. People don't rigorously consider all alternatives, which means that their tests don't have a complete view of the world. Perhaps saddest of all, since it is hardest to get around, sometimes people's views are just so polarised and the evidence available is so sparse that a human lifetime is far too short to allow enough plays of the game to reveal the truth. This is why it is of vital importance that we learn all the lessons passed on from our ancestors, and contribute our own tests to the pot to update the credences of those who will call us their ancestors.
The search for the truth is bigger than me, you or any of us. So do the right thing and be a good Bayesian: update your credences and pass them on for the good of humanity. A fascinatingly simple embodiment of common sense, one that pokes around in some very deep pockets of thought.