- Supervised learning
- Unsupervised learning
- Semi-supervised learning
- Reinforcement learning
- Deep learning

All of these can be imagined to sit within the bubble of machine learning, a term you’ve probably heard before. The main differences between these methods, and why some are more desirable than others for certain tasks, will be briefly looked at now.

Supervised learning is so called because the whole learning process of the program is relatively controlled. A machine like this is fed training data as input, and in this case the program is also told the desired answer to each piece of data (the output). This way the program builds a function mapping the input to the desired output, which it can then use to predict outputs when given new input data. Programs like this are usually fed validation data after the training data, to confirm that the model works as intended. A basic example of supervised learning would be giving a program data representing the attributes of a house, such as the floor area, number of bedrooms and location, along with the output of how much each house sold for. The idea is then to give the program relevant data about new houses so it can estimate an accurate value for them.
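The house example above can be sketched in a few lines of Python. This is a minimal illustration with entirely made-up numbers, using a plain least-squares fit to stand in for whatever model a real system would use:

```python
import numpy as np

# Hypothetical training data: [floor area in m^2, bedrooms, distance to city in km]
X = np.array([
    [120.0, 3.0, 5.0],
    [80.0,  2.0, 2.0],
    [200.0, 5.0, 10.0],
    [95.0,  3.0, 4.0],
])
y = np.array([250_000.0, 180_000.0, 400_000.0, 210_000.0])  # sale prices: the labels

# Fit a linear function mapping inputs to outputs
# (a column of ones is appended so the fit can have an intercept)
A = np.hstack([X, np.ones((len(X), 1))])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Use the learnt mapping to value a new, unseen house
new_house = np.array([110.0, 3.0, 3.0, 1.0])
predicted_price = float(new_house @ coeffs)
```

For these made-up numbers the fitted function happens to value the new house at £250,000; the point is simply that the labels (sale prices) steer the learning, which is exactly what makes this "supervised".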

A somewhat more interesting technique is unsupervised learning. On the same task, unsupervised learning can spot similar patterns to the former method, but in a more abstract way. The main difference between the two approaches is that in unsupervised learning the program isn’t given any output data to match to the given input data, so it ends up finding patterns and correlations in the data without being explicitly told what to look for. Because of this, unsupervised learning tends to be used when the desired results are not obvious to the people working with the data. Since this method of learning has less strict instructions and fewer guidelines, it’s seen as closer to general intelligence than supervised learning. Unsupervised learning is commonly used in things like online shopping recommendations, where after you have bought something you are targeted with advertisements based on what other users who bought the same product also bought before or after.
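A classic unsupervised method is clustering, where the program groups similar data points without ever being told what the groups mean. Here is a toy sketch of k-means on invented two-dimensional data (a real recommendation system would be far more elaborate, but the "no labels supplied" idea is the same):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabelled data only: two loose "blobs" of points, with no answers attached
data = np.vstack([
    rng.normal([0.0, 0.0], 0.5, size=(50, 2)),
    rng.normal([5.0, 5.0], 0.5, size=(50, 2)),
])

# Minimal k-means clustering: the program discovers the groups by itself
k = 2
centres = data[:k].copy()  # crude initial guess: the first two points
for _ in range(10):
    # assign every point to its nearest centre
    dists = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # move each centre to the mean of the points assigned to it
    centres = np.array([data[labels == i].mean(axis=0) if np.any(labels == i)
                        else centres[i] for i in range(k)])
```

After a few iterations the two blobs end up with different labels, even though the program was never told there were two groups of anything.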

Semi-supervised learning is, as the name suggests, partly supervised and partly not. The data given in this approach is typically a small amount of labelled data (as in supervised learning) and a large amount of unlabelled data (as in unsupervised learning). This helps with a few things. It reduces bias and error in the data, since not all of it has been labelled by someone who could have made mistakes or skewed it in an inaccurate way; and not having to label the large majority of the data saves a great deal of time and cost too. Semi-supervised learning can be taken advantage of in situations that involve organising things into groups, at less cost than supervised learning. A program could, for instance, take unlabelled pictures of fruit like apples, bananas and oranges, and put them into groups based on things like colour and shape. It cannot, however, actually state which group is the oranges, the apples or the bananas; but with a small amount of labelled data it can then recognise which pictures correspond to which fruit, by examining similar pictures that have a name attached to them.
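The fruit example can be sketched very crudely in Python. The features, numbers and the nearest-labelled-example rule below are all invented stand-ins for a real semi-supervised method, but they show the key idea: a large pile of unlabelled data gets its names from a tiny amount of labelled data:

```python
import numpy as np

rng = np.random.default_rng(1)

# A large pile of unlabelled fruit "pictures", reduced here to two
# hypothetical measured features per picture: [hue, roundness]
apples  = rng.normal([0.10, 0.90], 0.05, size=(30, 2))
bananas = rng.normal([0.60, 0.20], 0.05, size=(30, 2))
oranges = rng.normal([0.50, 0.90], 0.05, size=(30, 2))
unlabelled = np.vstack([apples, bananas, oranges])

# A small amount of labelled data: one named example of each fruit
labelled = {"apple": [0.10, 0.90], "banana": [0.60, 0.20], "orange": [0.50, 0.90]}

# Attach a name to every unlabelled picture via its most similar labelled example
names = list(labelled)
anchors = np.array([labelled[n] for n in names])
dists = np.linalg.norm(unlabelled[:, None, :] - anchors[None, :, :], axis=2)
predictions = [names[i] for i in dists.argmin(axis=1)]
```

Ninety pictures get named from just three labelled examples, which is the cost saving the paragraph above describes.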

Reinforcement learning is quite different in how it learns compared to the previous three. It uses a reward-based system, often with punishment involved as well, to incentivise the program to perform a task well. When the program is presented with a task, it has an observable environment and the ability to perform certain actions. For some actions it is rewarded and for others it is punished, usually through a numbered score system, where a positive action might give +1 and a negative one -1, or even larger numbers depending on the severity of the consequence of the action. This approach is quite similar to training a dog to do tricks by giving it treats when it gets them right. One difference between the two, though, is the vast number of ways some problems can be approached. Where a dog rolling over is just one action for the dog to perform, an AI playing chess is faced with a lot of different actions that each lead to a new set of different actions, and so on, until the option pathways aren’t really comprehensible to a person, especially with all the different moves the opponent can make on top of that. Thankfully, these chess-playing programs can play millions of games in a couple of hours during their learning phases, so this kind of brute-force computing power means the vast number of possible options isn’t a problem. Something that could seem to be a problem, however, is a path of actions that looks bad at first, punishing the AI, but eventually leads to a better or more efficient solution overall. This can lead to some interesting situations. DeepMind’s AlphaZero, a chess program that learnt just by being told the rules and playing games against itself, managed to reach the level of play of Stockfish 8 (a chess engine consistently ranked near the top) in just 4 hours of training, and beat it in a 100-game tournament (28 wins, 72 draws) after 9 hours.
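The +1/-1 scoring loop described above can be sketched with a toy example. This is minimal tabular Q-learning on an invented five-state corridor (not AlphaZero's actual algorithm, which is vastly more sophisticated): reaching the right end earns +1, falling off the left end earns -1, and the program gradually learns which action is best in each state:

```python
import random

random.seed(0)

# A toy corridor of 5 states; reaching the right end scores +1,
# falling off the left end scores -1, every other step scores 0.
n_states = 5
actions = [-1, +1]                        # step left, step right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # learning rate, discount, exploration

for episode in range(500):
    s = 2                                 # always start in the middle
    while 0 < s < n_states - 1:
        # mostly take the best-known action, occasionally explore at random
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = s + a
        reward = 1 if s2 == n_states - 1 else (-1 if s2 == 0 else 0)
        future = 0.0 if s2 in (0, n_states - 1) else max(Q[(s2, b)] for b in actions)
        # nudge the score for (state, action) towards reward + discounted future value
        Q[(s, a)] += alpha * (reward + gamma * future - Q[(s, a)])
        s = s2
```

After training, the learnt scores for the middle state favour stepping right: the reward signal alone has taught the program the task.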
Normally these chess-playing programs would analyse games that had already been played by other people or other programs during their learning phase; AlphaZero, however, only played games against itself to learn. Throughout these games AlphaZero seemed to play differently from other AIs. Where you might expect, due to the reward system in reinforcement learning, that it would highly value taking pieces and minimising losses to eventually win, it unexpectedly made large sacrifices of valuable pieces to instead gain positional advantages that led to victory in the long term. AlphaZero also utilised neural networks to come out on top, something that is a core aspect of the next subject.

Deep learning uses neural networks, modelled loosely on the human brain, to process the data it is given. Deep learning can use supervised, semi-supervised or unsupervised learning, and it can also use reinforcement learning. In a multilayer perceptron model, the artificial neurons in a neural network are connected to each other in layers, with a minimum of three: an input layer, an output layer, and at least one hidden layer in the middle. Every neuron in the input layer is connected to every neuron in the hidden layer, which in turn connects to every neuron in the output layer. Each of these connections has a weight assigned to it that scales the information passed along it, and each neuron combines its weighted inputs to decide what to send on; these weightings are adjusted throughout the training phase to home in on the optimal final state. It is this sandwich of hidden neuron layers that holds the model which assigns outputs to inputs. There are other ways neural networks can be implemented as well, such as convolutional neural networks and recurrent neural networks, and needless to say there is a lot of interest in the field of deep learning at the moment. Some of this interest actually comes from neuroscientists, who observe how neural networks arrive at the conclusions they do as an insight into how the process might work with biological neurons. Due to the presence of hidden layers in these neural networks, though, there are times when an AI can arrive at a result without anyone understanding why it did, or what sort of path it followed to get there, which raises some ethical questions. For instance, if an AI judge sentenced someone as guilty without there being a clear reason how it reached its conclusion, it would be far from instilling trust, and could leave potential for undetectable abuse if someone were able to force it to provide a fake answer.
Another situation with perhaps more relevance right now is the decision-making process of a self-driving car during an emergency. If the car were faced with its own version of the trolley problem, or something similar involving deciding between the safety of different people, how could we conclude that the decision the car made was fair and reasonable? Along this path there have been AIs developed to try to understand the actions of other AIs, in a bid to keep the ethical problems under control, though this methodology raises obvious questions of circularity.
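Returning to the mechanics for a moment: the layered structure described above can be sketched as a single forward pass in Python. The layer sizes and weights below are arbitrary placeholders (training would adjust the weights); the point is just the input → hidden → output wiring:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # a common activation: pass positive signals through, block negative ones
    return np.maximum(0.0, x)

# A minimal three-layer perceptron: 3 inputs -> 4 hidden neurons -> 2 outputs.
# The weights here are random placeholders; training would adjust them.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input layer  -> hidden layer
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)   # hidden layer -> output layer

def forward(x):
    hidden = relu(x @ W1 + b1)  # every input neuron feeds every hidden neuron
    return hidden @ W2 + b2     # every hidden neuron feeds every output neuron

output = forward(np.array([0.5, -1.0, 2.0]))
```

It is also easy to see here why the hidden layer is "hidden": the intermediate values inside `forward` are never shown to the user, only the final output is.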

This was a very brief introduction to some of the main types of machine learning. There is a lot more to say about these methods and many others, especially on the subject of neural networks, but those will have to be explored at a later time. Artificial intelligence as a field will only continue to grow, with new and interesting developments in abundance, and with AI creeping its way into many parts of our lives it will be worthwhile to keep up with its evolution, wherever it leads.

Back in 2017, I wrote a post on gravitational waves here at RTU, describing how such waves are generated. I briefly explained that, for the waves to be currently detectable, the sources need to be extremely massive, e.g. colliding neutron stars or black holes. Just as with any other astronomical observation, to pick out a clear signal one needs to know what to look for in the data. Here’s where the theory comes in: systems such as black hole binaries (two black holes locked in orbit around each other) are complex solutions, but of course solutions nonetheless, to Einstein’s equations of motion. The field of numerical relativity uses numerical methods and algorithms to solve Einstein’s equations for such complex, dynamical systems. The solutions of the Einstein field equations that we can solve fully by hand represent only trivially simple systems in nature, and astrophysical binaries certainly don’t fall into this box. Equipped with computer clusters, we can now computationally model these systems and theoretically compute the templates of the gravitational waves they would emit.

Such computational methodology works well when the two objects in the binary system are roughly the same size – i.e. when their mass ratio is roughly 1. The two objects circle each other a handful of times before spiralling inward and amalgamating into one fat mass. This process is known as an inspiral. The key point is that the number of orbits undertaken during the inspiral in this case is relatively small, and consequently the evolution of the system can be computationally run in a reasonable amount of time. When the two entities of the binary are of roughly equal mass, it is known as a Comparable Mass Ratio Inspiral (CMRI). Our success with numerical relativity in this area has led to the LIGO gravitational wave detector spotting eleven such events since 2015! Detailed descriptions of such inspirals have been a major computational effort in gravitational research in recent decades. The ability to predict the exact pattern of gravitational waves for such systems allows for meaningful observation, and it can safely be said that gravitational waves have now firmly entered the domain of the observational.

*Artist’s impression of the gravitational waves from a CMRI system*

The challenge comes when we move from objects of comparable mass to those of disparate mass. Of particular interest is the setup where the larger object is a factor of 10,000 *or more* heavier than its partner in the system. This type of binary system is called an Extreme Mass Ratio Inspiral (EMRI), and is often embodied in nature by a supermassive black hole at the centre of a galaxy being orbited by a stellar mass black hole. Because the little black hole is so much smaller than its partner, it exhibits between 10^(5) and 10^(6) orbits before eventually plunging in. The examination of the gravitational waves from such a system would provide us with a wealth of knowledge. Due to the thousands of orbits, the gravitational wave signal encodes a highly detailed mapping of the spacetime geometry surrounding the supermassive black hole. You can think of the little black hole as tracing out the structure of spacetime with each encircling, and transmitting this information in the form of gravitational waves.

*Artist’s impression of an EMRI system’s spacetime curvature*

Results from such setups would be extremely accurate tests of the predictions of Einstein’s theory of General Relativity in the regime of strong gravity – a regime which has largely been untestable thus far. Additionally, the data from such an inspiral would give profound insight into the parameters of the components, such as mass and angular momentum. This would hugely help theoretical physicists validate their hypotheses on the *types* of black holes that exist.

Due to the colossal number of orbits in an EMRI system, modelling the gravitational waveforms with numerical relativity would be highly computationally expensive, if not impossible. However, the large mass difference in the EMRI case *can* be used to our advantage, providing us with a highly accurate approximation scheme for solving the Einstein equations. Approximation schemes are often used in theoretical physics, and centre around expanding equations about a small perturbative parameter – in the case of EMRIs we expand in one over the mass ratio of the two objects. The Einstein equations are perfectly accepting of a perturbative expansion in powers of such a parameter, and in the case of EMRI systems this parameter can be as small as 10^(-6). At first order of the expansion, the path of the lighter object is simply treated as that of a massive test particle, affected solely by the gravity created by the larger black hole. Then, order by order, we add corrections into the equations to account for the mass of the lighter object and the small effective force it imposes. This force is known as the gravitational self-force. In fact, it has been estimated that reaching the second-order expansion will be sufficient for accuracy in the gravitational waveform templates, allowing for detection of EMRI systems from data gathered by the upcoming gravitational wave detector, LISA. This analysis of EMRI systems is a key area of research of my supervisors-to-be, Professor Leor Barack and Dr. Adam Pound, and one where they have already had great success.
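Schematically, and hedging that this is only a cartoon of the real formalism rather than the precise equations used in the field, the idea is a series in the small parameter ε = (small mass)/(large mass):

```latex
g_{\mu\nu} \;=\; g^{(0)}_{\mu\nu} \;+\; \epsilon\, h^{(1)}_{\mu\nu} \;+\; \epsilon^{2}\, h^{(2)}_{\mu\nu} \;+\; \mathcal{O}\!\left(\epsilon^{3}\right)
```

Here g^(0) is the exactly known metric of the large black hole alone (the test-particle picture), h^(1) carries the first-order correction that produces the gravitational self-force, and h^(2) is the second-order correction mentioned above as the target accuracy for LISA-quality templates.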

*Artist’s impression of the LISA space mission*

LISA, a space-based observatory to detect gravitational waves, is planned to launch in the early 2030s. The sensitivity of LISA will peak in the millihertz band, the frequency range at which EMRI systems will emit gravitational waves. However, even if an EMRI system is very close, its signal will still be much weaker than the instrumental noise gathered by LISA. Such is the problem when trying to catch extraordinarily sensitive signals that are buried in detector noise. To maximise the science return from the multi-billion dollar mission, it is vital that the theoretical waveform models are derived accurately in advance. Then the data from LISA can be matched against these theoretical templates, which act as a filter against the noise and allow us to recover clear signals. Getting the EMRI waveforms right would unlock a wealth of scientific information. The encoding of the geometry of spacetime in the gravitational waves would provide profound insight into our understanding of gravity in the strong regime – we just need the wave template cipher.
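The template-as-filter idea is worth seeing in miniature. Below is a toy matched filter in Python: an invented chirp-like "template" is buried in noise a few times louder than itself, yet sliding the template along the data and correlating picks out exactly where the hidden signal sits. (Real LISA data analysis is enormously more sophisticated, but this is the core trick.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy waveform "template": a short chirp-like burst we expect to see
t = np.linspace(0.0, 1.0, 2000)
template = np.sin(2 * np.pi * (20 + 30 * t) * t) * np.exp(-((t - 0.5) / 0.1) ** 2)

# Detector "data": the same signal buried in noise a few times louder than it
data = rng.normal(0.0, 2.0, size=t.size) + template

# Matched filter: slide the template along the data and correlate;
# the output peaks where the hidden signal lines up with the template
correlation = np.correlate(data, template, mode="same")
correlation /= np.sqrt(np.sum(template ** 2))
peak_index = int(np.argmax(np.abs(correlation)))
```

The signal is invisible by eye in `data`, but `peak_index` lands right on the centre of the buried burst: the template has filtered the signal out of the noise.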

To recap: we have the comparable mass binary systems (i.e. two similar-sized black holes) whose gravitational waves have been detected by LIGO, and which numerical methods have proved fruitful in modelling. And we have the extreme mass binary systems, key LISA targets, which we model with our perturbative tricks. A third system sits between these: the logically named Intermediate Mass Ratio Inspiral (IMRI). IMRI systems are those for which the mass ratio is around 1000. They would be embodied in nature by either an intermediate mass black hole around a supermassive black hole (case 1) *or* a stellar mass black hole around an intermediate mass black hole (case 2). There is doubt around the existence of such systems, however, as intermediate mass black holes have not *yet* been proven to exist.

Being the middle sibling in this situation means neither of our above methods for theoretical waveform modelling can do the trick. The accuracy of the perturbative expansion in the mass ratio severely deteriorates as the parameter is no longer small, yet the number of orbits remains large. Such a setup thus requires a hybrid approach, and this is what my PhD will hope to investigate. But let me end by telling you why this last case is worth cracking. As well as providing the first confirmation of the existence of an intermediate mass black hole, observations of IMRI gravitational waves will allow us to probe the dynamical processes in globular clusters and galactic nuclei. Rich astrophysical insights are up for grabs, along with fundamental knowledge on black hole formation and morphology. In case 1 IMRIs, since the central object is large, gravitational waves are produced at low frequencies. Such systems would then be detectable by LISA in the future. In case 2 IMRIs, the central object is smaller, producing higher frequency gravitational waves which could actually be detectable by the currently running Advanced LIGO instrument. Tantalising prospects, whereby discoveries are theoretically possible as soon as we have the correct gravitational wave templates against which to filter the LIGO data.

Gravitational waves, black holes and all things gravity will return to being a central theme here at RTU. Posts in the near future will also include a more in-depth look at IMRI systems and the workings of the LISA instrument. A journey to catching waves starts here.


To understand fuzzballs we must first remind ourselves of the key properties of black holes. Earlier posts in the Black Holes series (#1, #2, #3, #4) at RTU cover these fantastical features in more depth, but in a nutshell they are defined by two key features. Firstly, the event horizon, the barrier from which information cannot escape. Secondly, the singularity, the point at the very centre of the black hole at which, due to an infinite matter density, space and time as we know them break down. The problem with the original black hole view is that the nature of these beasts must be described through both the quantum lens and the gravity lens. The two leading theories of the universe have been unable to reconcile their differences, and their clash, in the context of black holes, has caused arguably the most troublesome conflict in the subject: the information paradox. This, coupled with the seemingly impossible task of understanding a singularity, has caused physicists to tear their hair out over the years. Fuzzballs claim to present solutions to both these problems. To understand them, let us take each problem and its supposed resolution in turn.

**The information paradox**

*The Black Hole*

In the standard description of a black hole there exists the infamous event horizon – a boundary some distance from the centre of the black hole from which nothing can escape. A point of no return. Hawking realised that in the empty space surrounding this horizon, particle and anti-particle pairs can come into existence. If this happens, there is a chance that one of the pair will escape outwards, while the other passes through the event horizon of the black hole, never to be seen again. As a result of the outwardly escaping particle, the black hole is seen to be radiating, and with a loss of energy through radiation comes a shrinking of the black hole. For a dedicated explanation of this process see Black Holes: #2 Glowing and Shrinking. As a result of this ongoing phenomenon, a black hole will finally cease to exist altogether, having evaporated entirely. In doing so, the information of whatever fell into the black hole is destroyed, and this destruction of information is staunchly in opposition to the laws of quantum mechanics. In quantum mechanics, information is *never* lost. General Relativity also states that a black hole is characterised only by its mass, spin and charge. There is no other information that can be deduced about a black hole from examination of its event horizon. This lack of information, and its eventual disappearance altogether, is known as *the information paradox.*

*The Fuzzball*

The fuzzball view claims to resolve this paradox by doing away with the event horizon altogether. The theory instead claims that these extremely dense coagulations of matter are comprised entirely of strings and *do* have a physical surface, just like a neutron star, ordinary star or planet does. This surface, however, is fuzzy instead of entirely solid. The diagram below may help your visualisation.

By eliminating the event horizon, we eliminate the information loss of Hawking radiation: information can no longer be lost past a boundary of no return, as there *is* no boundary of no return. Radiation still gets emitted from the fuzzball if one particle of a pair falls in and the other escapes, but there is no clash with quantum mechanics, as the information about the infalling particle can be retrieved. Furthermore, because the fuzzball has a surface, there is structure here, and information about the past history of the fuzzball can be deduced from it. From analysis of this structure, all fuzzballs are seen to be unique, and are characterised by a lot more than just their mass, spin and charge. As John Wheeler famously said, to sum up the generic nature of black holes, ‘a black hole has no hair’. Fuzzballs, however, very much do have hair – and knotty hair at that.

**The singularity**

*The Black Hole*

Another troubling feature of the black hole is the point at the very centre where space and time break down due to the extreme density of matter. In the standard theory of general relativity the curvature of spacetime tends towards infinity, with the mathematics blowing up in our faces and producing seemingly unphysical results.

*The Fuzzball*

The fuzzball structure, as we have said, is made from strings – as, according to string theory, is everything in our universe, strings being the fundamental components of matter. As objects fall into the fuzzball, their strings combine with those on the fuzzball’s surface, forming larger, more complex string structures. When these strings combine there is a resultant outward pressure from the massless fields at play. At the centre of the fuzzball the density of these strings is at its highest, and the strong resultant outward pressure causes a phase transition to a new state of matter which prevents the formation of a singularity. Perhaps a little hard to swallow without examining the maths first hand, but I’m giving you the quick and dirty gist of it.

*a) the black hole view with singularity in spacetime represented by a jagged line*

*b) the fuzzball view with the centre of the fuzzball represented by a dense coagulation of strings*

**Entropy**

Another point in favour of fuzzballs is entropy. As required by the second law of thermodynamics, black holes have entropy – an inherent measure of their level of disorder or, simply put, chaos. All systems have a measure of entropy, and this entropy can be quantified by counting the number of microstates of the system. Different microstates are the different ways the components of the system can be arranged whilst preserving the overall macroscopic picture. For example, a messy room has a high entropy, as the items can be strewn around in many ways – i.e. a large number of microstates – whilst still preserving the overall look of messiness.

In 1973 Bekenstein postulated that the entropy associated with a black hole is proportional to the area of the black hole’s event horizon. Together with Hawking, the formula for a black hole’s entropy was produced, expressing it as proportional to the area of the horizon with factors of fundamental constants. It is a truly remarkable formula, as it includes the fundamental constant of gravity and a fundamental constant of the quantum world, the Planck length. Such constants rarely meet in our descriptions of nature, given the long-standing incompatibility of quantum mechanics and general relativity. In the original black hole view, the only way we can measure this entropy is from properties of the event horizon, since we cannot retrieve any further information from inside.
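For reference, the Bekenstein–Hawking formula reads

```latex
S_{\mathrm{BH}} \;=\; \frac{k_{B}\, c^{3}\, A}{4\, G\, \hbar} \;=\; \frac{k_{B}\, A}{4\, \ell_{P}^{2}},
\qquad \ell_{P}^{2} = \frac{G\hbar}{c^{3}}
```

where A is the area of the horizon. You can see the remarkable meeting described above written down explicitly: gravity’s G and the quantum world’s ħ sit side by side in one expression, with the Planck length ℓ_P setting the scale.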

The fuzzball theory, however, allows us to directly count the number of microstates of the system. Within string theory, a black hole’s structure comes in the form of strings and branes, and the ways in which these can be arranged represent the different microstates. Mathur’s calculation of the entropy from counting these microstates can be found to equal that given by the Bekenstein–Hawking formula! A very promising find.

**Fuzzy thoughts**

Fuzzballs present a way to reconcile classical and quantum descriptions of black holes; however, the jury is still out in the theoretical physics community. Fuzzballs make use of string theory, much to the delight of many who have pored over its formulation as a possible quantum gravity theory. However, string theory is by no means a complete theory, and fuzzballs rely heavily on its claims. Although the framework seemingly resolves the problems of singularities and information destruction, it raises new questions in their place, including the nature of extra dimensions, to name one (did I mention string theory is at minimum 10-dimensional?!). And, as much as the event horizon of a black hole is a wicked feature, physicists have a somewhat twisted affinity for it. Not all are keen to champion its dismissal, and some would rather find a theory which resolves the paradox whilst maintaining its inclusion.

The final fate of the fuzz is still unknown.

The Standard Model of particle physics is the basis of our current best understanding of the sub-atomic realm. The model postulates a conglomeration of different fundamental particles that together explain the behaviour of the forces and matter we observe in nature. A recent triumph of the LHC in 2012 (recent as major discoveries in particle physics go) was the discovery of the Higgs boson, the elusive last member of the Standard Model to be observed experimentally. The Higgs is believed to be the particle that explains the range of masses of the different particles, and was a crucial missing piece in fully validating the Standard Model.

However, there is still a lot about our universe that we cannot describe with the Standard Model. To be precise, the known constituents make up *only 5%* of the whole universe. The remaining 95% of the universe is made up of what physicists call dark matter and dark energy. The LHC has provided no insight into the nature of these mysterious entities, and it is believed that collisions at a much higher energy are necessary to unlock their secrets. Additionally, the current model cannot unite the forces that govern the quantum world with the force of gravity. For the best part of a century there has been an incompatibility in theoretical physics between quantum mechanics (our best theory of the very small) and general relativity (our best theory of the very large). By seeking to probe physics *beyond* the Standard Model, the FCC represents a chance for physicists to find a way to break this stalemate.

Such bold ideas come with a very hefty price tag – £9 billion for the least expensive design, rising to £20 billion for the full capabilities the CERN team are hoping for. Such a cost has sparked serious criticism at a time when issues of environmental sustainability and climate change are at the forefront of many discussions amongst scientific and political communities. A crucial problem lies within the very nature of the quest: a probing of the *unknown*. There is no guarantee that the energy at which the FCC is built to operate will be the energy at which currently hidden physics becomes visible. The entire endeavour could function at an energy way off, or *just* short of, that necessary to reveal the currently unseen particles. Some argue this is too large a gamble on resources that could deliver tangible, guaranteed benefits to humanity’s very human problems of the environment and health. Nevertheless, lead scientists at CERN, such as Director-General Professor Fabiola Gianotti and senior physicist John Womersley, are keen to emphasise the peripheral advancements in technology and benefits to society that the endeavour would bring. Being at the very forefront of science, it is argued, the FCC will unearth innovative technologies during its design, construction and operation phases – just as electronics, the internet and the superconducting magnets in MRI machines all arose from previous enterprises in fundamental physics.

The FCC has now been proposed to the European Strategy for Particle Physics. A decision is expected in 2020 and, if accepted, the initial phases of the collider would be up and running between 2040 and 2050. CERN scientists firmly believe the creation of this facility is the necessary next step towards uncovering nature’s secrets, but such a gigantic vision will no doubt require global support from both national governments and the public. Although the potential challenges of the FCC are enormous, its potential impact on humanity’s understanding of the universe is arguably much larger. To stop pushing the limits of our exploration is to stop discovering, and this is something CERN physicists are determined not to allow.

Einstein’s understanding of this behaviour of space and time is cemented in his field equations, which describe the evolution of a spacetime given an initial setup of masses or energy. However, the universe is complex and spacetime can be very messy – it can contain chaotic galaxies, multi-planetary systems, exploding stars and even colliding black holes. Einstein’s equations cannot be exactly solved for such detailed, interacting systems – so there exists but a finite collection of *exact* solutions to the field equations, representing physical setups in nature, that we can analytically work with. Black holes are one such class of solutions – and the nature of these solutions will be our focus for today.

The Schwarzschild solution to the Einstein field equations represents a static (non-rotating) black hole in the presence of nothing else. One black hole, alone in the spacetime – isolated and simple enough to give an exact solution. The Kerr solutions to the field equations are the same setup, except the black hole is spinning. A black hole can’t do much else really – in fact the only three ingredients that characterise a black hole are its mass, its rotation and whether it has any charge. This is summarised as the ‘No Hair Theorem’ – all other information about a black hole disappears, and for some reason, according to John Wheeler, this is akin to it being bald… not so sure about his reasoning there, but I digress; such lack of hair is a story for another time. Now, a question at the forefront of mathematical physics is whether these black hole solutions are *stable* – whether, if we *perturb* the black hole a little bit, it will settle back down to a stable state which can be represented by the previous solution. Proving this would be proving the ‘Black Hole Stability Conjecture’. But what is meant here by perturb? Let me explain.

Imagine a still pond, undisturbed, with flat water. Then imagine this pond is disturbed, or *perturbed*, by an external object, for example by throwing in a pebble. A perturbation is essentially an external influence on a system which causes a disturbance to its original state. Perturbations are created by the injection of energy in some form or another and, as we know from Einstein, energy is akin to mass and mass causes curvature of spacetime. So if we throw a pebble into a pond, the water is disturbed and energy is propagated outwards in the form of ripples. It’s a similar idea with a black hole: throw in some kind of object, or add some form of energy, and spacetime will ripple outward. Instead of water waves forming these ripples, in spacetime the ripples take the form of gravitational waves, and these waves warp the surrounding spacetime just as the water waves warp the surrounding water. What mathematical physicists want to prove is that regardless of the size of the perturbation, if we are dealing with a static or rotating black hole (provided it isn’t rotating *too* fast) then it will eventually settle back down into its original stable state.

Merger of two black holes – a highly perturbed system creating strong gravitational waves.
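A crude way to picture this settling-down is a damped ‘ringdown’: the perturbation oscillates while its envelope decays away exponentially. A minimal Python sketch of that toy model – the frequency and decay timescale here are made-up illustrative numbers, not real black hole values:

```python
import math

def ringdown_amplitude(t, omega=1.0, tau=5.0):
    """Toy perturbation: oscillates at frequency omega while its
    envelope decays exponentially with timescale tau."""
    return math.exp(-t / tau) * math.cos(omega * t)

# The envelope shrinks towards zero: the system settles back down.
for t in [0.0, 10.0, 20.0, 40.0]:
    print(f"t = {t:>5}: |amplitude| <= {math.exp(-t / 5.0):.6f}")
```

Proving stability amounts to showing rigorously that every such disturbance really does die away like this, for the full nonlinear Einstein equations rather than a toy oscillator.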

The key to this endeavour is to show that the gravitational waves produced in the perturbation all eventually fully decay – as they ripple outwards they become smaller and smaller until they finally have no effect on the surrounding spacetime. We need to measure the size of these waves to make sure they are getting smaller/decaying, so for this we need coordinates – as coordinates allow us to measure distances between points. These calculations can all be done but it’s not that easy… there is a catch. General relativity is *diffeomorphism invariant*. Diffeo-what, I hear you say? Horrible phrase but really not that complicated at all – basically all it means is that it doesn’t matter what coordinate system you choose to use in your work. Think back to the GCSE maths days: different coordinates exist to better describe different physical systems. Euclidean x, y, z coordinates are often used when dealing with straight lines or boxy 3D objects whilst spherical coordinates r, θ, ϕ are often used when dealing with curved lines or spherical objects. But the point is the world does not have such coordinates naturally built in – they are mathematical constructs designed by humans to help us with whatever our calculation at hand is. However, in this case a particular choice of coordinates can *obscure* whether the gravitational waves are decaying. It wouldn’t be a *wrong* choice that you made when choosing the coordinates you did, but it would not have been the best possible choice given your “are black holes stable?” aims. Therefore, the problem is making the right coordinate choice in order to examine black hole stability, and making the best choice in mathematics is not obvious…
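To make the coordinate point concrete: a physically meaningful quantity, like the distance between two points, cannot depend on which coordinate labels we used. A small Python sketch of the idea (the specific points are arbitrary):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Convert spherical coordinates (r, theta, phi) to Cartesian (x, y, z)."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

# Two points on a unit sphere, labelled in spherical coordinates...
a = spherical_to_cartesian(1.0, math.pi / 2, 0.0)          # the point (1, 0, 0)
b = spherical_to_cartesian(1.0, math.pi / 2, math.pi / 2)  # the point (0, 1, 0)

# ...are exactly as far apart as their Cartesian labels say, sqrt(2):
print(math.dist(a, b))
print(math.dist((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
```

The labels changed; the separation did not – diffeomorphism invariance in miniature. The subtlety in the black hole problem is that while no coordinate choice is *wrong*, some choices make the decay of the waves far harder to see.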

Mathematicians across the world are working on this proof in order to show black holes are stable solutions to the Einstein field equations, and there is excitement in the community as some groups seem to be getting close. Proving the stability of black holes will give us a greater understanding of their behaviour, and given that some of the universe’s most spectacular phenomena occur around these astronomical beasts, the more we know about them with certainty, the better equipped we are to tackle some of nature’s biggest outstanding questions.


One of the popular phrases used to describe the nature of space and time through the eyes of general relativity is as follows:

*“Spacetime tells matter how to move,*

*matter tells spacetime how to curve.”*

Imagine the old bowling ball analogy for general relativity and spacetime. Spacetime is depicted as a rubber sheet whilst an object such as a bowling ball, often representing a planet or star (though it can be any object with mass), deforms this rubber sheet when it is placed on top. This simple analogy is meant to represent the curvature of spacetime as a result of the mass of an object. The two key words here are mass and curvature. Mass *produces* curvature. The larger the mass of the matter in question, the greater the curvature of the surrounding spacetime. A black hole, for example, one of the heaviest objects in our universe, produces a curvature so large, a warping *so great*, of the surrounding spacetime that phenomena like nowhere else occur – see here and here. In fact, at the centre of the black hole, where the bending of the spacetime is so great due to the highly concentrated mass, the curvature is mathematically described as *infinite* – and our understanding of the nature of spacetime here breaks down completely. Such an extremity causes us a problem; we do not like it when our maths doesn’t work as a descriptive language for nature. These points of infinite curvature, called singularities in the nomenclature, create problems in the standard theory of general relativity. Take home message 1: general relativity only includes the mass of matter as a property affecting the behaviour of the surrounding spacetime, and it does so by causing it to *curve*. If there were no matter in the universe, or if all matter were massless (for example photons), the surrounding spacetime would be flat – imagine a sheet stretched out completely taut. Take home message 2: the theory of general relativity causes us problems, which come in the form of singularities at points of extreme curvature.

An alternative theory, Einstein-Cartan theory, holds a possible key to these problems. Matter in our universe has two fundamental properties: *mass* and *spin*. Spin is a funny little characteristic of matter; it is not related to the spinning of a particle in actual space, so don’t imagine a spinning top on a table – it is instead the *intrinsic angular momentum* of a particle. If that doesn’t mean much to you don’t fret, but for the purpose of this article humour me and the particle physicists of the world and accept that spin is a fundamental characteristic of matter. See *here* for more reassurance. Einstein-Cartan theory claims that both these fundamental properties of matter affect the nature of the surrounding spacetime, whereas general relativity only incorporates mass. Whilst mass creates a curvature of the spacetime, spin creates an effect called the torsion of spacetime – unfortunately an analogy like the bowling ball does not exist (that I know of) for visualising this torsion. The mathematics of Einstein-Cartan theory, with the inclusion of spin and the resulting torsion into the framework, has some very interesting implications. In standard general relativity, the black hole situation described above creates a singularity at the centre due to the infinite curvature of spacetime. *However*, when torsion is also in play at such extreme matter densities, the torsion field creates a repulsive force that pushes outward against this extreme warping. Instead of a singularity at the centre, in Einstein-Cartan theory the interaction between torsion and curvature creates a wormhole, or Einstein-Rosen bridge, at the centre of the black hole. The wormhole creates a passage to a new, growing universe on the other side of the black hole! The same can be said of the situation at the beginning of the universe.
In standard general relativity, the big bang represents a singular point of infinitely dense matter, from which the universe then somehow comes into existence. However, in the Einstein-Cartan formulation, the torsion again creates an outward repulsion at such points of extreme curvature and density, forbidding them from occurring. The big bang is then replaced by what is known as a big bounce scenario in cosmology.

The nice thing about Einstein-Cartan theory is that, because the extra features only come into play in extremely high matter density regimes, like the centre of a black hole, the tests which probe astronomical phenomena within our experimental reach still agree with the predictions of general relativity. For example, the precession of the perihelion of Mercury, a key test of general relativity, would still hold if working within the Einstein-Cartan formulation. Therefore, we can see the theory not as an opponent to general relativity but as a slight revision, extending its validity and in so doing presenting resolutions (and very exciting ones at that) to our previous trouble points. On the other hand, this is also unfortunate because, given our nearby surroundings and inability to accurately probe quantum phenomena, we are not in a position to experimentally test the predictions of Einstein-Cartan theory. The experiments that we can conduct are ones where mass is the heavily dominant player affecting the behaviour of spacetime, and spin effects remain physically hidden to us.

What we can say, without even commenting on the mathematical elegance of the theory is that Einstein-Cartan simply seems more wholesome ontologically. Why include one fundamental property of matter (mass) and not the other (spin)? The theory seeks to resolve this omission and unlock the missing results even if they cannot be physically probed. A curveball theory, where curvature is no longer the only behaviour of spacetime.

The key idea is that AdS/CFT is an example of a duality between two types of theories. On the left hand side we have theories in Anti-de Sitter space; this is a special type of geometry that physicists use to model a particular spacetime. The number of dimensions for this spacetime can be chosen freely, but the details of what exact *kind* of geometry this represents need not concern us today. When working with spacetimes it is easy to add gravity into your models, and physicists try to model the gravitational interactions here in terms of string theory (see How long is a piece of string). Incorporating string theory into AdS space attempts to create a big and bold theory of quantum gravity. The inner workings of such a theory are still very much in the dark, but here the assumed pillars of a quantum gravity set-up have been erected. Take-home message: AdS sets up an arena for physicists where the only field (see What is a Field?) is the gravitational field and the constituents are our little friends the strings. It is a proposed theory of quantum gravity, still being explored.

On the right hand side we have what is known as a Conformal Field Theory. Conformal field theories are particular types of quantum field theories (see What is Quantum Field Theory?), which in turn are models describing the interactions of elementary particles. Elementary particles interact heavily via the other three of the four fields in the universe’s playbook: the electromagnetic, strong and weak fields. On this side of the correspondence there is no gravity, and as such the geometry of the arena is a flat spacetime. The way Maldacena, the father of AdS/CFT, set up the correspondence is that if you choose the number of dimensions in your AdS theory to be D (any integer), the number of dimensions in the CFT theory must be D−1. We’ll explain the consequences of this later. Take home message: CFT sets up an arena for physicists where they can work with the interactions between the very small constituents of the universe and use all the fields *except* gravity.

Ok, so both halves have been defined – what next? Well, we choose a number for our dimensions and play around with each side of the theory. We then begin to see remarkable similarities in the behaviours of the models of quantum gravity formulated in AdS space and those of the conformal quantum field theories. Important characteristics of the models, such as emergent symmetries and levels of chaos, mirror each other on both sides of the correspondence, and the main breakthroughs of this are twofold:

Firstly, the fields at play in quantum field theories on the CFT side of things (electromagnetic, strong and weak) are subject to what is known as ‘strong coupling’. When studying theories in theoretical physics there are a bunch of mathematical procedures we very often use; however, when using maths to study theories with *strong couplings* our calculations essentially blow up. This is because the strong coupling is represented in the mathematics as a large pre-factor, call it q here for example. As we try to expand the mathematical terms in the theory we get terms with q, q^2, q^3… and because q is a large number already, the whole thing blows up in our face! The fancy terminology is that the theory is not ‘mathematically tractable’. *However*, theories on the AdS side of things only contain the gravitational field, which is subject to ‘weak coupling’. The pre-factor here is very small and less than 1, call it g. So when we expand the terms and get g, g^2, g^3 we can essentially ignore the terms which contain g raised to high powers, as they would be so much smaller than 1 (check it!). As such we can work with the first few terms alone and the maths makes us happy. Take home message: we can study examples of CFTs in difficult areas like nuclear and condensed matter physics by translating the problems into *mathematically solvable* problems of string theories on AdS spaces.
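You can see the blow-up numerically in a toy version of such an expansion, with all the expansion coefficients set to 1 purely for illustration – watch the partial sums of coupling + coupling² + coupling³ + … for a weak and a strong coupling:

```python
def partial_sums(coupling, n_terms=8):
    """Partial sums of the toy series coupling + coupling**2 + ..."""
    total, sums = 0.0, []
    for n in range(1, n_terms + 1):
        total += coupling ** n
        sums.append(total)
    return sums

weak = partial_sums(0.1)    # g = 0.1: settles almost immediately
strong = partial_sums(3.0)  # q = 3: every extra term makes it worse

print(weak)    # ~0.1, 0.11, 0.111, ... a few terms suffice
print(strong)  # 3, 12, 39, 120, ... blows up in our face
```

With g = 0.1 the sum barely moves after the first couple of terms, so truncating is harmless; with q = 3 each new term dominates everything before it, which is exactly the sense in which the strongly coupled theory is not tractable by this method.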

Secondly, the string theory on an AdS space set-up is, as we said, a proposed theory of quantum gravity. A theory of quantum gravity is the holy grail of the theoretical physics world, and this proposed theory is of course only an attempt at a formulation. However, due to the similarity in the behaviours of models on both sides of the *duality*, we can probe the proposed quantum gravity with a quantum field theory – a domain much easier for humans to manipulate. Since we know very little about the inner workings of quantum gravity, considering its analogue without gravity (the CFT) can significantly enhance our understanding. Take home message: we explore the quantum gravity AdS set-up with what we know about the corresponding CFT.

A final point worth touching on, because the idea is pretty funky and Susskind is a wonderful man, is holography. You may recall that in the set-up the AdS quantum gravity theory has D dimensions but the CFT quantum field theory has D−1 dimensions, one less. The duality between the two theories was proposed by Susskind and ’t Hooft as being the same kind of duality seen between a real-world object in three dimensions and its hologram on a two-dimensional surface… The idea is that all of the information in a theory of quantum gravity can be encoded within a theory without gravity on a lower-dimensional space. The geometric visualisation is akin to a sphere (3D) holding the full theory and the information being encoded on its surface, the boundary of the sphere (2D). Susskind’s interpretation of the AdS/CFT correspondence has conjured up an image of us living within an analogous hologram, a four-dimensional space, an embodiment of a richer five-dimensional space somewhere in the universe… a post in more detail on this to come.

One such tangent would be developing a rigorous and robust understanding of why dividing by zero does not work. I am sure many people know this – but a surprising number will not. A lot of people will tell you it does not exist – true, but why? Others will say it’s just infinity, which is fine, but can you define it using a limit and infinity? No, of course you can’t.

Division is first explained to us as sharing some quantity of objects among another quantity of objects (or people). So 15 shared among 3 gives 5. The question of how many items are received “when” no items are shared among one, two, three or four hundred people has no real meaning. When are no items shared? Always? Never? There is no real meaning in this elementary description of division – so we may conclude that it is just undefined. Loosely this is right, but it does not really contain a complete description of division.

So why can’t we just attack the problem with limits? This argument would look something like the following set-up:

lim (b → 0⁺) a/b = +∞,

where a and b are real numbers (take a > 0) and b approaches zero from the right. When we examine this limit from the left, we get the following expression:

lim (b → 0⁻) a/b = −∞.

So what happens when we combine these two, to calculate the limit as b tends to zero? The two one-sided limits disagree, so the two-sided limit does not exist.
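The disagreement is easy to see numerically. A quick Python check, taking a = 1 for illustration:

```python
# a/b as b creeps towards zero from either side (a = 1):
a = 1.0
for b in [0.1, 0.01, 0.001, 0.0001]:
    print(f"b = {b:>7}: a/b = {a / b:>8.0f},  a/(-b) = {a / -b:>8.0f}")
# From the right the quotient races off towards +infinity, from the
# left towards -infinity: the one-sided limits disagree.
```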

So it is wrong to define a/0 as infinity, as approaching infinity, etc. – the limit does not exist. You are only correct if you say that division by zero is undefined, which above we have shown using calculus. You can also show this using algebraic rings, or inverse multiplication. The most basic, and most amusing, demonstration as to why this monstrosity cannot exist is the fallacy we would create.

Doing some really basic maths:

0 x 20 = 0

and

0 x 5 = 0

In this case, the following must also be true:

0 x 20 = 0 x 5.

In a world where we accept division by zero, we can write:

0/0 x 20 = 0/0 x 5,

20 = 5.

This is no world I want to live in.

Let me know if you want to look into other ways we can demonstrate that a/0, and indeed 0/0, is undefined – there are so many of them and it is fundamental to mathematics as we know it. There are areas of mathematics (such as matrices) where such operations are defined or pseudo-defined, but these are special cases which in no way violate the discussion above.

In this modern day, what I like to call the ‘projection of the self’ is ubiquitous amongst our generation. The dissemination of selfies and social media posts is prolific, and it seems we are constantly putting *out* images of ourselves, trying to convey to others an idea of *who* we are, *what* we like and *what* we are doing. Conveying this image outward is becoming one of the most important activities amongst young people of our generation, yet stepping away from the phone, after the post has been posted, what value did this action add to reaffirming our understanding of the self? I worry that all this ‘projection of the self’ leads to far less time being spent on ‘inward reflection of the self’, and it feels as though these activities lead to a dulling of our own consciousness. By occupying all our free time with social media, even when partaking in activities that may be focused on *ourselves* and our identity, we are never truly giving ourselves time to be alone with our thoughts. Our identity becomes that which we project on our social media accounts; we constantly share what we like and what we dislike, but we never seem to sit down and reflect on what it is that we *truly* like and dislike, what makes us happy and unhappy in life.

As much as I believe this time for self-reflection should be increased, by doing this we reach another hurdle. What *is it* that inherently makes us who we are? A popular answer is that it is our choices that make us who we are. However, as discussed in a previous post on free will, from a physicist’s standpoint on life anyway, free will is a very dubious concept. But if you *do* accept that choice is real and that it is these decisions that define you, it is important to try and analyse why you make the choices you do. When I have tried to do this analysis, it has seemed to me that all my choices can be traced back to a previous thought, idea or experience that I have had. Yet in the tracing back of these choices we find no logical start point; there is always a predecessor in the chain. It seems to me that a combination of upbringing and surroundings heavily influences a person’s choices; memories and experiences are continually stored in one’s subconscious and are then drawn on when making a decision.

I have also experienced of late that your surroundings, and the people you choose to surround yourself with, can have a large impact on the choices you make and the type of self you choose to project outward in the moment. It is a worrying thought that the idea of the self is so fluid, and of course it is simple human psychology that people choose to reflect aspects of the self that are most in line with their company at the time, out of fear of being treated as a social pariah. Yet these effects compound over time and begin to shape the self we independently choose to project. Identity can then begin to become a product of your social company… However, in this new age of online communications, social interactions and the way social company is kept are also changing form. With conversations held more frequently over a WhatsApp screen than in person, and love shared in the form of a heart emoji as opposed to a real-time exchange of emotions, we also seem to be at risk of a lost sense of real-time relations. It has been shown that too much time on social media can fuel feelings of loneliness and decrease one’s self-esteem. Compounding these feelings with the idea of a lost sense of self can lead to a worrying state of mind and lack of purpose.

Perhaps now more than ever, in an age where it is all too easy to be influenced not only by those in the same room as you but by those a million miles away yet on your phone screen, it is important to spend more time alone with your own thoughts in an attempt to find out what you independently value. Mindfulness and meditation are avenues which seem useful in trying to get a better sense of this. Although free will may not be real, human intuition compels us to believe it is, in a bid to give purpose to our lives. And given that we will always experience life from the confines of our own brain, in which choice, consciousness and the idea of the self appear tangible and concrete, to spend time focusing on our own identity and values seems a worthwhile exercise indeed. To then surround yourself with people who share these values in real time may be a step in the right direction for a generation who at times seem more concerned with the ‘projection of the self’ than with what they themselves want to achieve with their finite time.

*Photo Credit: NASA*

From these numbers, our characterisation of other stars and some nifty extrapolations, assuming the universe is homogeneous (matter is evenly distributed and the same in all directions), we have calculated that in the universe there *could* be as many as 40 billion Earth-like planets! By Earth-like we usually mean that they exist in what is known as the habitable zone, a region sufficiently far away from the host star that the planet is not frazzled but close enough that there is enough heat energy to sustain life, i.e. conditions where water can exist. These numbers from exoplanetary research have opened our eyes to our insignificance; having once thought of ourselves as a special and rare ball floating at just the right distance from just the right size of star, we now realise that this combination is far from unique. Of course, we should remember that these are the conditions for life as *we* expect it; we refine our searches for life based on water. It is the only type of life we know, and again we are blinded by our sample size and lack of comparison. Perhaps life has somehow evolved from different building blocks – it all depends on your definition of life, I suppose… a tangent on which I shall not divert today.

I was sent an article by a true physics guru the other day, pulling together exoplanets and black holes, written by Sean Raymond, which sparked my interest. The premise is funky: *theoretically*, what is the astrophysical system that could host the most life? (From now on I will speak of life as we understand it: water-based life on an Earth-like planet.) When I say system, I mean something like our ‘solar system’; in our case we have our star, the Sun, at the centre with a host of planets orbiting it at increasing distances. The restriction with having a single star at the centre of the system is that planetary orbits can only come so close together before their gravitational effects start to pull on each other and dominate over the gravitational pull felt from the star. If this happens the orbits become unstable and the system falls out of its delicate balance. Key takeaway – there are only so many Earth-like planets you could fit into the habitable zone (a set distance from the star) of a Sun-like star before things get messy.

*Credit: Sean Raymond*

You need something *heavier* at the centre of the system, such that more planets can be packed into the habitable zone whilst their gravitational effects on each other are still overwhelmed by the gravitational mass at the centre, meaning they stick to their orbits. In Raymond’s imaginary system, instead of a single star at the centre there is a *supermassive black hole*. Taking a supermassive black hole 1 million times the mass of the Sun, *550* Earth-mass planets could fit in stable orbits at a set distance from the centre of the system. Of course, replacing the Sun with a black hole we lose the definition of habitability altogether; we need a Sun-like star for heat and warmth. The funky idea theorised by Raymond to create a habitable zone is to first place a ring of Sun-like stars around the black hole. In fact, 9 Sun-like stars 0.5 astronomical units from a million-solar-mass black hole would make 550 Earth-like planets habitable, with each planet circling in its own orbit. If taken a step further and orbits are shared, with the planets spread out evenly throughout each ring, Raymond calculated the supermassive black hole could hold as many as 400 rings, each holding *2,500 planets*. The beauty of this set-up would be the sky show the inhabitants of these worlds would enjoy, with other planets passing through large areas of the sky, day and night. In fact, Raymond even goes so far as to suggest that planets would be close enough that a space elevator could be constructed to connect them!
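Why does a heavier centre allow tighter packing? A rough rule of thumb is that neighbouring planets stay on stable orbits when separated by a few mutual Hill radii, and the Hill radius shrinks with the inverse cube root of the central mass. A back-of-the-envelope Python sketch – a deliberately crude stand-in for Raymond’s actual N-body reasoning, with illustrative numbers:

```python
def mutual_hill_radius(a, m_planet, m_centre):
    """Mutual Hill radius of two equal-mass planets sharing distance a
    from a central mass; stable spacings are a few multiples of this."""
    return a * (2.0 * m_planet / (3.0 * m_centre)) ** (1.0 / 3.0)

M_EARTH = 3.0e-6   # Earth's mass in solar masses
a = 1.0            # orbital distance in astronomical units

r_sun = mutual_hill_radius(a, M_EARTH, 1.0)    # Sun at the centre
r_bh = mutual_hill_radius(a, M_EARTH, 1.0e6)   # million-solar-mass black hole

# The heavier centre shrinks the required spacing ~100-fold, so roughly
# 100 times as many orbits fit into the same band of distances.
print(r_sun / r_bh)
```

The cube-root scaling is why swapping a one-solar-mass star for a million-solar-mass black hole buys a factor of about (10⁶)^(1/3) = 100 in packing density, not a factor of a million.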

*Photo Credit: NASA*

Here’s one of my favourite aspects, and of course it’s to do with time. Take a planet in one orbital ring and another in the ring one further away from the black hole. The planet closer to the black hole will feel a significantly larger gravitational pull; hence, sitting deeper in the gravitational potential well, time will pass more slowly for it than for planets in further orbits. See my previous posts on relativity and time dilation for some more insight into this disturbing phenomenon. As such, if two babies are born on two planets at what can be taken as the same moment in an external reference frame, the one on the inner ring would age slightly more slowly than the other. Taking the innermost and outermost habitable rings and two babies, the effect could be taken to bizarre extremes: the innermost baby could have reached its 2nd birthday while the outermost was seeing its grandchild reach theirs. Such discrepancy in one shared system – the effects of relativity plainly observable.
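For a feel of the numbers, the standard textbook factor for a clock hovering at radius r outside a non-rotating mass is √(1 − r_s/r), where r_s is the Schwarzschild radius. A hedged Python sketch – toy radii, not Raymond’s actual orbits, and ignoring orbital motion and any spin of the hole:

```python
import math

def tick_rate(r, r_s=1.0):
    """Proper time elapsed per unit of far-away time for a clock
    hovering at radius r (measured in units of the Schwarzschild
    radius r_s); smaller means the clock runs slower."""
    return math.sqrt(1.0 - r_s / r)

inner, outer = 1.5, 50.0   # illustrative radii, in units of r_s

print(tick_rate(inner))  # deep in the well: clock runs notably slow
print(tick_rate(outer))  # far out: clock runs at almost full rate
```

With these made-up radii the inner clock ticks at roughly 58% of the far-away rate and the outer at 99%, so while the inner planet logs two years the outer logs over three – the asymmetry behind the two-babies story, here in tame numbers.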

Of course, this whole system is all theoretical fun and games and could be criticised for its fantasy, but it brings together ideas in theoretical physics and astrophysics nicely and I’m all for that at the moment. The system could be criticised as ‘unnatural’, but if by that we mean statistically unlikely then we run into some trouble. Playing statistical games from our vantage point of the universe is a dangerous business. We only have one data point of life formation, one data point of a life-bearing planet and a very small sample of the universe which we can observe from our little corner of space. Although observations in exoplanetary science are continually stacking up, it is only from the spectra of light that we can do our decoding; we have *much* still to learn about other systems. [See ‘Strange New Worlds‘ for some detail on these observations.] If ‘unnatural’ is *not* meant from a statistically unlikely point of view alone, then what even is an unnatural system except something we as humans find inherently displeasing somehow? It is funny what the human mind can easily normalise, which when actually sat down and deeply thought about is mind-bending. I like this quote a lot to remind us of that:

*“The fact that we live at the bottom of a deep gravity well, on the surface of a gas covered planet going around a nuclear fireball 90 million miles away and think this to be normal is obviously some indication of how skewed our perspective tends to be.”*

— Douglas Adams, The Salmon of Doubt: Hitchhiking the Galaxy One Last Time
