Let me first clarify what I mean by a particular system. It is an extreme understatement to say the universe is complicated. We can preserve some mental sanity by breaking it down and grouping its components into different systems. For example, you, sitting in your living room reading this, could be treated as a system, and a mathematical model could be designed to measure the evolution of this system. The model could measure effects such as the fluctuating temperature due to the incoming and outgoing heat flows, or the change in mass as objects enter or leave the room. We can have much bigger systems, such as the solar system, with models that track the dynamics of the planets around the sun. Or we could have a system describing two black holes about to collide in a far corner of the universe. We state from the outset what is included in our system and assume no influence from outside objects within our model; mathematically this is done through the inclusion of constraints and boundary conditions.

The Einstein equation is most often used to describe the evolution of relativistic systems on the cosmological scale. Think black holes, exploding stars and the Big Bang. The deceptively simple-looking equation actually comprises ten distinct, highly non-linear partial differential equations which describe the behaviour of the system. To write each of the ten equations out in full, in their most general form without any grouping of the terms, would literally take hundreds of pages. Solving it is therefore an incredibly daunting task.

Exact solutions to this equation are extremely rare and occur only for a handful of simple systems in which a large number of these terms fall away to zero. For example, if we are looking at a system with an isolated body of mass sitting in an otherwise empty spacetime, the term **T** in the above equation (the stress-energy tensor) disappears, greatly simplifying the equations with now nothing on the right-hand side. You may wonder how this could represent any interesting physical situation in nature, but remember our thinking in systems. An isolated black hole, far away from any other matter, can be well represented by this scheme due to the negligible influence from far-away bodies and thus their exclusion from the system. Symmetry in a system also provides a big helping hand: if a system has spatial (or, far less commonly, temporal) symmetry, the equations become much more tractable due to a large cancellation of terms. Think back to the living room example if this isn’t immediately obvious: if the left half of my living room is identical in every way to the right half, my model need only do half the work to represent the system.

There also exist systems which are not exact solutions to the Einstein equation, but for which we can find approximate solutions thanks to certain simplifications. If the gravitational field is weak, or the speed of the bodies in the system is substantially slower than the speed of light, a number of the terms in the equations become very small and can essentially be ignored. Such approximations allow us, to a high degree of accuracy, to model the dynamics of planets, certain binary neutron stars and particular emissions of gravitational waves. However, the exact and approximately solvable cases represent only a fraction of the systems we’d like to model in the universe and, unfortunately as well as obviously, the most interesting and physically realistic cases are the most complex.

To examine gravitational waves from colliding black holes or supernovae, to model relativistic phenomena such as active galactic nuclei or to follow spacetime singularities, we work in the regime of strong gravity and require the ability to solve the Einstein equation in its full, almighty form. The Einstein equation must be solved everywhere in the system, tracking the matter at each point, how fast it is moving, the pressures and stresses, and the resulting warping of the surrounding spacetime. The changes at one point in the system then affect the spacetime geometry at every other point. All the terms laid out in the hundreds of pages must be inputted into the equation to ensure accurate results. Thankfully for the human brains of all relativists, we now have the technology to pass this less-than-enviable job onto computers. This is the field of Numerical Relativity: providing approximate solutions to the Einstein equation for such systems using (as the name suggests) numerical methods.

Researchers have developed methods to computationally evolve systems by inputting the data describing the system at an initial moment and then using finite-difference numerical methods to evolve the state forward, step by step, in time. However, numerical discretisation is a double-edged sword. The smaller the steps taken, the smaller the introduced error and the smaller the deviation from the true solution. Small steps, however, come at the cost of runtime. If the problem is discretised into smaller chunks, there obviously exists a larger number of them for the computer to process, taking longer to spit out a complete simulation. A main job of the numerical relativist is to ensure the inputted data describing the system at the initial time is well-posed, meaning it will not lead to numerical instabilities when the computer evolves it and will ultimately present an accurate approximation of the system’s behaviour over time.
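To get a feel for what "evolving step by step in time" means, here is a deliberately tiny sketch in Python (chosen purely for illustration; it has nothing to do with real numerical relativity codes) of finite differencing applied to the 1D heat equation, a vastly simpler cousin of the Einstein equations:

```python
# Toy illustration of finite-difference time stepping (NOT the Einstein
# equation): the 1D heat equation u_t = u_xx, evolved with explicit Euler.

def evolve_heat(u, dx, dt, n_steps):
    """Advance the temperature profile u forward n_steps in time."""
    u = list(u)
    for _ in range(n_steps):
        new = u[:]
        for i in range(1, len(u) - 1):
            # Central difference approximates the second spatial derivative.
            new[i] = u[i] + dt / dx**2 * (u[i+1] - 2*u[i] + u[i-1])
        u = new  # end values held fixed: the boundary conditions
    return u

# A hot spot in the middle of a cold rod diffuses outward over time.
u0 = [0.0] * 21
u0[10] = 1.0
u1 = evolve_heat(u0, dx=1.0, dt=0.4, n_steps=50)
```

Halving the step sizes shrinks the error of each update but multiplies the number of updates the computer must grind through, which is exactly the accuracy-versus-runtime trade-off described above.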

This problem is known as the initial value problem, and the nature of the theory of general relativity, to which Einstein’s equation belongs, provides a high entry barrier for acceptable initial data due to the theory being *gauge invariant*. What this means is that the model that describes the behaviour of your system must be independent of the coordinates you choose to do the modelling! Coordinates are, after all, just a mathematical choice to make certain aspects of your calculation easier, but more on this subtlety another time. This gauge invariance means the data must be subject to certain mathematical constraints and boundary conditions which ensure critical information about the physical situation being modelled is appropriately included. Takeaway message: although we give the brute work to computers, we must still work extremely hard to formulate what we feed in before we can kick back and let the algorithm chug on. I hope to do a more detailed post on the formulation of Numerical Relativity soon.

For a beautiful visualisation of a numerical simulation of two merging black holes, with asymmetric masses and the extra complication of orbital precession (GW190412), see the following video from the Albert Einstein Institute.

Such intricate behaviour, all ultimately encapsulated in the equation given at the top of this post.

Feature Photo Credit: NASA/VICTOR TANGERMANN

Though there has not been any concrete proof of IMBHs so far, some evidence is strongly suggestive. Gravitational waves are the medium with which we can detect black hole *mergers*, but while black holes live as solitary beasts there are other ways in which we can detect their presence. As black holes consume matter, many emit high-energy radiation in the form of X-rays. Observers have previously found strong X-ray emission from nearby galaxies, such as NGC 1313. From the analysis of the X-ray emission, its strength and periodicity, a constraint can be put on the mass of the source. In the case of NGC 1313, this came in at around 1,000 solar masses, firmly putting it in intermediate mass black hole territory. Further evidence that such strong X-ray-emitting sources are indeed black holes comes when the radiation clearly does not have a visible-light counterpart, which *would* be the case had the source instead been a star or galaxy. The radiation signals often also show periodicity, a phenomenon thought to be due to the consumption pattern of the black hole, whereby every time it rips matter from encircling nearby stars it emits a strong burst of X-rays.

As detection of such signals is sporadic and not entirely conclusive, concrete proof of IMBHs will only come when we can detect a merger. This will either take the form of two intermediate mass black holes colliding, or of intermediate mass-ratio inspirals (IMRIs), when a stellar mass black hole falls into an IMBH or when an IMBH falls into a supermassive black hole. To detect gravitational wave signals from IMRIs requires innovative methods of modelling their gravitational waveform templates, an exciting new field which my supervisors and I are currently working on. The next generation of gravitational wave detector, the space satellite LISA, will have a frequency detection range well suited to IMRI signals, so the race is on to provide accurate templates before its launch. It is also hoped that Advanced LIGO may be able to tune into the gravitational waves from IMRIs once we know what waveforms we are looking for. The detection of IMBHs would provide a vital bridge in our understanding of black holes, helping us piece together black hole formation, population and evolution over time. Stay tuned!

The dawn of the universe was flooded with radiation produced by the Big Bang. The popular cosmological picture of the beginning of the universe is that of a swirling soup of such radiation which, as the cosmos expanded, began to clump together to form matter and subsequently the first stars and planets. In this popular picture it was only millions of years later, once these stars became sufficiently dense, that the strength of their own gravity caused them to collapse and form black holes. However, new models suggest that the early universe could have been a breeding ground for other beasts: primordial black holes, or PBHs (the cosmic OAPs). It is believed that black holes could also have formed from extreme density fluctuations in the early universe, less than *one second* after the Big Bang. There are many cosmological phenomena that could have produced such density fluctuations, such as cosmic inflation, reheating or cosmological phase transitions. From these fluctuations (sharp points of contrast) in the matter densities, the spacetime would again be in a position to undergo gravitational collapse upon itself, forming a black hole. Such PBHs would then be able to devour the radiation surrounding them, growing as they ate.

Theoretically, PBHs could have initial masses ranging from 10^(-8)kg to thousands of solar masses; however, those with masses lower than 10^(11)kg would not have survived to the present day, having evaporated entirely by now (see Glowing and Shrinking) through the process of Hawking radiation. These limits still allow good theoretical scope for observation of some such relics today, which in turn would resolve many unanswered cosmic questions.
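As a back-of-envelope check of that survival threshold, one can plug masses into the standard order-of-magnitude estimate for the Hawking evaporation lifetime, t ≈ 5120πG²M³/(ħc⁴) (a textbook approximation, used here for illustration only):

```python
import math

# Rough check of the PBH survival threshold quoted above, using the standard
# Hawking-evaporation lifetime estimate t ~ 5120*pi*G^2*M^3 / (hbar*c^4).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34    # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
AGE_OF_UNIVERSE = 4.35e17  # ~13.8 billion years, in seconds

def evaporation_time(mass_kg):
    """Approximate lifetime of a black hole of the given mass, in seconds."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

# A 10^11 kg primordial black hole evaporates within the age of the universe,
# while a 10^12 kg one comfortably survives to the present day.
light_pbh = evaporation_time(1e11)
heavy_pbh = evaporation_time(1e12)
```

Note the steep M³ scaling: a factor of ten in mass buys a factor of a thousand in lifetime, which is why the survival cut-off sits so sharply around 10^(11)kg.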

*Shedding light on darkness*

Dark matter is a substance thought to account for approximately 85% of the matter in the universe and about a quarter of its total energy density. The fact that the main constituents of our universe remain a mystery is a sorry state of affairs for theoretical physicists. Popular candidates for the elusive constituents of dark matter are WIMPs (see The Dark Side) and MACHOs: massive compact halo objects. MACHOs are large objects that emit little to no radiation, and given their nature, PBHs are a possible type of such object. Taking into account their formation at the dawn of time, their supreme density and the masking properties of the horizon against direct observation, it is easily believable that PBHs could be the dominant, or even sole, component of dark matter.

A second theory is that even if PBHs are not directly the dark matter constituents, through their evaporation they could emit whatever the true dark matter particles are. The type of particles emitted during the process of Hawking radiation (during the evaporation of a black hole) crucially does not depend on what fell into reach of the black hole during its lifetime. The black hole amalgamates all it ingests, and the by-products of its evaporation can take the form of whatever particles exist in nature. Dark matter treated on an equal footing.

*Cosmic speed limit*

The rate of the expansion of the universe has not yet been pinned down, and cosmologists currently have two ways to measure the rate of acceleration. The first involves the measurement of light from supernovae, whilst the second uses the ancient radiation left over from the Big Bang (the cosmic microwave background). The trouble is, these two measurements are currently in conflict with each other. However, if we account for the radiation effects from the ancient relics of PBHs *alongside* the ancient radiation of the CMB, this would seem to reconcile the rates calculated from the two approaches.

*The Supermassive*

The mass of a black hole increases as it sucks in the matter surrounding it. However, if black holes can only be born from the fiery deaths of stars, this puts an upper bound on their mass (due to the restricted time they could have been alive and thus able to grow). This formation theory for black holes is in direct contradiction with the size of the supermassive black holes (SBHs) we now know exist at the centres of most large galaxies. The gargantuan size of these SBHs is simply impossible based on the current understanding of standard black hole formation. Either there is another process in place by which black holes can grow, perhaps the amalgamation of black holes born from stars is much more frequent than we predict, *or* perhaps PBHs are the answer. If black holes can exist from moments after the Big Bang, seeded instead from cosmic density fluctuations and *not* having to wait around for stars to form and die, it becomes plausible to reach the supermassive size necessary to agree with present-day findings.

*Are they out there?*

LIGO – the Laser Interferometer Gravitational-Wave Observatory – has now detected gravitational wave signals from over ten binary black hole mergers (for more on the spectacular details of a black hole merger and what we can learn from them see Inspirational Systems). There are mutterings amongst the LIGO community that some of these detections came from PBHs rather than the standard stellar remnant black holes.

Many of the black holes detected by LIGO did not seem to be spinning fast. If the black holes were formed from the deaths of stars in a binary system, they would tend to have a degree of spin, as the stars would have possessed angular momentum. PBHs born in isolation in the early universe, however, do not tend to have much spin.

PBHs were born from density fluctuations in the early universe, and predictions on the nature of such fluctuations can be extrapolated to tell us roughly what the masses of such PBHs would be today. The suggested average answer is about 30 solar masses. Remarkably, most of the black holes observed by LIGO fall around such a mass range. That the majority of the early LIGO measurements come in at this range is argued by some to support the PBH case.

As the next generation of gravitational wave detectors enter the game, a resolution to the debate over PBHs may come. Whatever the answer may be one thing is for certain, gravitational waves are a revolutionary new medium with which we can explore and understand the universe. For now we continue to wonder whether primordial black holes may hold the answer to such primordial questions.

- Supervised learning
- Unsupervised learning
- Semi-supervised learning
- Reinforcement learning
- Deep learning

All of these can be imagined to sit within the bubble of machine learning, a term you’ve probably heard before. The main differences between these methods, and why some can be more desirable than others for certain tasks, will be briefly looked at now.

Supervised learning is so called because the whole learning process of the program is relatively controlled. When a machine like this learns, it is fed training data as the input, and the program is also told the desired answer to each piece of data (the output). This way the program creates a function mapping the input to the desired output, which it can then use to predict future outputs when given new input data. All of these machine learning programs are usually fed validation data after the training data, to confirm the working state of the AI. A basic example of supervised learning in use would be giving a program data that represents the attributes of a house, such as the area of floor space, the number of bedrooms and the location of the house, and then also giving the program the outputs of how much each house sold for. The idea is then to give the program relevant data about new houses to find an accurate value for them.
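The house-price example can be sketched with a least-squares line fit in Python, which captures the supervised idea of learning a mapping from labelled inputs to outputs (all figures below are invented purely for illustration):

```python
# A minimal sketch of supervised learning: fit price = a*area + b by
# least squares on labelled training data, then predict for an unseen house.

def fit_line(xs, ys):
    """Closed-form least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Training data: floor area in m^2 (input) and sale price in £1000s (output).
areas  = [50, 70, 90, 110, 130]
prices = [150, 190, 230, 270, 310]  # here exactly price = 2*area + 50

a, b = fit_line(areas, prices)
predicted = a * 100 + b  # predict the price of an unseen 100 m^2 house -> 250.0
```

Real house-pricing models use many features and far richer algorithms, but the training/prediction split is the same: labelled examples in, a learnt mapping out.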

A somewhat more interesting technique is unsupervised learning. Unsupervised learning can spot the same patterns the former method would when used for the same task, but in a more abstract way. The main difference between the two approaches is that in unsupervised learning the program isn’t given any output data to match to the given input data, ultimately leading to the program finding patterns and correlations in the data without being explicitly told what to look for. Because of this, unsupervised learning is more often used when the desired results are not so obvious to the people working with the data. Since this method of learning has less strict instructions and fewer guidelines, it’s seen as closer to general intelligence than supervised learning. Unsupervised learning is commonly used in things like online shopping recommendations, where after you have bought something you are targeted with advertisements based on what other users who bought the same product also bought afterwards.
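The pattern-finding idea can be sketched with k-means clustering, a classic unsupervised algorithm; the points and starting centres below are invented for illustration:

```python
# A minimal sketch of unsupervised learning: k-means clustering finds groups
# in unlabelled 2D points without ever being told what the groups mean.

def kmeans(points, centres, n_iters=10):
    """Lloyd's algorithm with fixed initial centres (kept deterministic)."""
    for _ in range(n_iters):
        # Assignment step: attach each point to its nearest centre.
        clusters = [[] for _ in centres]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centres]
            clusters[d.index(min(d))].append(p)
        # Update step: move each centre to the mean of its cluster.
        centres = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else c
            for cl, c in zip(clusters, centres)
        ]
    return centres

# Two obvious blobs of points; the algorithm discovers them on its own.
points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centres = kmeans(points, centres=[(0, 0), (10, 10)])
```

Nobody labelled the points "blob one" or "blob two"; the structure emerges purely from the data, which is the essence of the unsupervised approach.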

Semi-supervised learning is, as the name suggests, partly supervised and partly not. The data given in this approach is typically a small amount of labelled data (as in supervised learning) and a large amount of unlabelled data (as in unsupervised learning). This helps with a few things, such as reducing bias and error in the data, as not all the data is labelled by someone who could have made mistakes or influenced the data in an inaccurate way. Also, not having to label the large majority of the data helps greatly with time and cost too. Semi-supervised learning can be taken advantage of in certain situations to do with organising things into groups, at less cost than with supervised learning. A program could, for instance, take unlabelled pictures of fruit like apples, bananas and oranges, and put them into groups based on things like colour and shape. It cannot, however, actually state which one is an orange, an apple or a banana; but with a small amount of labelled data it would then be able to recognise, out of all of the pictures it has, which ones correspond to which fruit, after examining similar pictures that have a name attached to them.
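That fruit example can be sketched in a few lines of Python (weights, centres and labels all invented for illustration): clusters are formed without labels, then a handful of labelled examples names every cluster:

```python
# A minimal sketch of the semi-supervised idea: group unlabelled 1D fruit
# "weights" into clusters, then name each cluster using a few labelled examples.

def nearest(value, centres):
    """Index of the cluster centre closest to the given value."""
    return min(range(len(centres)), key=lambda i: abs(value - centres[i]))

# Unlabelled data: fruit weights in grams (values invented for illustration).
weights = [118, 122, 130, 149, 155, 160, 198, 205, 210]
centres = [125, 150, 200]  # cluster centres, found e.g. by k-means

# A handful of labelled examples is enough to name every cluster...
labelled = [(120, "apple"), (152, "banana"), (202, "orange")]
names = {}
for w, name in labelled:
    names[nearest(w, centres)] = name

# ...after which all the unlabelled fruit inherit a label.
predictions = [names[nearest(w, centres)] for w in weights]
```

Three labelled fruit were enough to label all nine, which is the economic appeal of the semi-supervised approach: cheap unlabelled data does most of the work.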

Reinforcement learning is quite different in how it learns compared to the previous three. It uses a reward-based system, often with punishment involved as well, to incentivise the program to perform a task well. When the program is presented with a task, it has an observable environment and the ability to perform certain actions. For certain actions it is rewarded and for others it is punished, usually through a numbered score system, where a positive action would give +1 and a negative one -1, or even larger numbers depending on the severity of the consequence of the action. This approach is pretty similar to training your dog to do tricks by giving it treats when it does them right. But a difference between the two is the vast number of ways some problems can be approached and proceeded through. Where a dog rolling over is just one action for the dog to perform, an AI playing chess, for instance, is faced with a lot of different actions that each lead to a new set of different actions, and so on until the option pathways aren’t really comprehensible for a person, due to all the different moves the opponent can make on top of that. Thankfully though, these types of chess-playing programs can play millions of games in a couple of hours during their learning phases, so this kind of brute-force computing power means they do not encounter problems when faced with the vast number of possible options. Something that could seem to be a problem, however, is if a path of actions that seems bad at first, giving punishment to the AI, eventually turns out to lead to a better or more efficient solution overall. This can lead to some interesting situations. DeepMind’s Alpha Zero, a chess program that learnt by just being told the rules and playing games against itself, managed to reach the level of play of Stockfish 8 (a chess-playing program that is consistently ranked near the top) in just 4 hours of training, and beat it in a 100-game tournament (28 wins, 72 draws) in 9 hours.
Normally these chess-playing programs would analyse games that had already been played by other people or other programs during their learning phase; Alpha Zero, however, only played games against itself to learn. Throughout these games Alpha Zero seemed to play differently from other AIs; where you might expect, due to the reward system in reinforcement learning, that it would highly value taking pieces and minimising losses to eventually win, it unexpectedly made large sacrifices of valuable pieces to instead gain positional advantages that led to victory in the long term. Alpha Zero also utilised neural networks to come out on top, something that is a core aspect of the next subject.
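Before moving on, the reward-and-punishment loop described above can be sketched with tabular Q-learning, one of the simplest reinforcement learning algorithms (the tiny corridor environment and all its numbers are invented for illustration):

```python
import random

# A minimal sketch of reinforcement learning: tabular Q-learning on a tiny
# corridor of 5 states. Reaching the right end earns +1; every step costs a
# small penalty, nudging the agent toward the shortest route.
N_STATES, ACTIONS = 5, (-1, +1)  # actions: move left or right

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Explore occasionally, otherwise exploit the best known action.
            a = (random.choice(ACTIONS) if random.random() < epsilon
                 else max(ACTIONS, key=lambda act: q[(s, act)]))
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else -0.01  # reward / punishment
            # Standard Q-learning update toward reward plus discounted future value.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q

q = train()
# The learnt policy: the highest-valued action in each non-terminal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
```

After training, the agent has learnt to head right in every state, purely from the numerical rewards and punishments it collected along the way.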

Deep learning uses neural networks in a way that models the human brain to process the data it is given. Deep learning can use supervised, semi-supervised or unsupervised learning, and it can also use reinforcement learning. In a multilayer perceptron model, the artificial neurons in a neural network are connected to each other in layers, with a minimum of three. At minimum, an input and an output layer are needed, as well as a hidden layer in the middle; every neuron in the input layer is connected to all of the neurons in the middle layer, which then connect to every neuron in the output layer. Each of these connections has a weight assigned to it that scales the information being passed along, and each neuron decides from its weighted inputs what, if anything, to send on; these weightings are adjusted throughout the training phase to hone in on the optimal final state. It is this sandwich of hidden neuron layers that holds the model which assigns outputs to inputs. There are other ways neural networks can be implemented as well, such as convolutional neural networks and recursive neural networks, and needless to say there is a lot of interest in the field of deep learning at the moment. Some of this interest is actually from neuroscientists, who are observing how neural networks arrive at the conclusions they do as an insight into how the process might work with biological neurons. Due to the presence of hidden layers in these neural networks, though, there are times when an AI can arrive at a result without anyone understanding why it did, or what sort of path it followed to get there, which raises some ethical questions. For instance, if you had an AI judge that sentenced someone as guilty without there being a clear reason how it reached its conclusion, it would be far from instilling trust and could leave potential for undetectable abuse if someone were able to force it to provide a fake answer.
Another situation which may have more relevance at this time is the decision-making process in a self-driving car during an emergency. If the car were faced with its own version of the trolley problem or something similar, involving having to decide between the safety of different people, how could we come to the conclusion that the decision the car had made was fair and reasonable? Along this path, there have been AIs developed to try to understand the actions of other AIs, in a bid to keep the ethical problems under control, though this methodology raises obvious questions of circularity.
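Returning to the multilayer perceptron itself, the layered forward pass described earlier can be sketched in a few lines of Python. The weights here are set by hand purely for illustration (real networks learn them during training):

```python
import math

# A minimal sketch of a 2-2-1 multilayer perceptron forward pass: two inputs,
# one hidden layer of two neurons, one output neuron.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_w, output_w):
    """Propagate inputs through one hidden layer to a single output neuron."""
    # Each hidden neuron takes a weighted sum of every input, plus a bias.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for *ws, b in hidden_w]
    # The output neuron does the same with the hidden activations.
    *ws, b = output_w
    return sigmoid(sum(w * h for w, h in zip(ws, hidden)) + b)

# Hand-picked weights that make the network compute (approximately) XOR.
hidden_w = [(20, 20, -10),   # fires if either input is on (OR)
            (-20, -20, 30)]  # fires unless both inputs are on (NAND)
output_w = (20, 20, -30)     # fires if both hidden neurons fire (AND)

xor = [round(forward([a, b], hidden_w, output_w))
       for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]  # -> [0, 1, 1, 0]
```

XOR is the classic function a single neuron cannot compute but one hidden layer can, which is precisely why the hidden "sandwich" matters.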

This was a very brief introduction to some of the main types of machine learning. There is a lot more to say about these methods and many others, especially on the subject of neural networks, but these will have to be explored further at a later time. Artificial intelligence as a field will only continue to grow, with new and interesting developments abundant, and with AI creeping its way into many parts of our lives it will be worthwhile to keep up with its evolution, wherever it leads.

Back in 2017, I wrote a post on gravitational waves here at RTU, describing how such waves are generated. I briefly explained that, in order for the waves to be currently detectable, the sources need to be extremely massive, i.e. colliding neutron stars or black holes. Just as with any other astronomical observation, to pick out a clear signal one needs to know what one is looking for in the data. Here’s where the theory comes in: systems such as black hole binaries (two black holes locked in orbit around each other) are complex solutions, but of course solutions nonetheless, to Einstein’s equations of motion. The field of numerical relativity uses numerical methods and algorithms to solve Einstein’s equations for such complex, dynamical systems. Solutions of the Einstein field equations that we can solve fully by hand represent only trivially simple systems in nature, and astrophysical binaries certainly don’t fall into this box. Equipped with computer clusters, we can now computationally model these systems and theoretically compute the templates of the gravitational waves they would emit.

Such computational methodology works well when the two objects in the binary system are roughly the same size – i.e. when their mass ratio is roughly 1. The two objects circle each other a handful of times before spiralling inward and amalgamating into one fat mass. This process is known as an inspiral. The key point is that the number of orbits undertaken during the inspiral in this case is relatively small and, consequently, the evolution of the system can be computationally run in a reasonable amount of time. When the two entities of the binary are of roughly equal mass, the system is known as a Comparable Mass Ratio Inspiral (CMRI). Our success with numerical relativity in this area has led to the LIGO gravitational wave detector spotting eleven such events since 2015! Detailed descriptions of such inspirals have been a major computational effort in gravitational research in recent decades. The ability to predict the exact pattern of gravitational waves for such systems allows for meaningful observation, and it can safely be said that gravitational waves have now firmly entered the domain of the observational.

*Artist’s impression of the gravitational waves from a CMRI system*

The challenge comes when we move from objects of comparable mass to those of disparate mass. Of particular interest is the set-up where the larger object is a factor of 10,000 *or more* heavier than its partner in the system. This type of binary system is called an Extreme Mass Ratio Inspiral (EMRI) and is often embodied in nature by a supermassive black hole at the centre of a galaxy being orbited by a stellar mass black hole. Because the little black hole is so much smaller than its partner, it exhibits between 10^(5) and 10^(6) orbits before eventually plunging in. The examination of the gravitational waves from such a system would provide us with a wealth of knowledge. Due to the thousands of orbits, the gravitational wave signal encodes a highly detailed mapping of the spacetime geometry surrounding the supermassive black hole. You can think of the little black hole as tracing out the structure of spacetime with each encircling and transmitting this information in the form of gravitational waves.

*Artist’s impression of an EMRI system’s spacetime curvature*

Results from such set-ups would be extremely accurate tests of the predictions of Einstein’s theory of General Relativity in the regime of strong gravity – a regime which has largely been untestable thus far. Additionally, the data from such an inspiral would give profound insight into the parameters of the components, such as mass and angular momentum. This would hugely help theoretical physicists validate their hypotheses on the *types* of black holes that exist.

Due to the colossal number of orbits in an EMRI system, modelling the gravitational waveforms with numerical relativity would be highly computationally expensive, if not impossible. However, the large mass difference in the EMRI case *can* be used to our advantage, providing us with a highly accurate approximation scheme for solving the Einstein equations. Approximation schemes are often used in theoretical physics and centre around expanding equations about a small perturbative parameter – in the case of EMRIs we expand in one over the mass ratio of the two objects. The Einstein equations are perfectly accepting of a perturbative expansion in powers of such a parameter, and in the case of EMRI systems the mass ratio can be as small as 10^(-6). At first order of the expansion, the path of the lighter object is simply treated as that of a massive test particle, affected solely by the gravity created by the larger black hole. Then, order by order, we add corrections into the equations to account for the mass of the lighter object and the small effective force it imposes. This force is known as the gravitational self-force. In fact, it has been estimated that reaching the second-order expansion will be sufficient for accuracy in the gravitational waveform templates, allowing for detection of EMRI systems from data gathered by the upcoming gravitational wave detector, LISA. This analysis of EMRI systems is a key area of research of my supervisors-to-be, Professor Leor Barack and Dr Adam Pound, and one where they have already had great success.
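Schematically (in a notation chosen here purely for illustration, not the precise formalism of the field), the expansion looks like

```latex
g_{\mu\nu} \;=\; \bar{g}_{\mu\nu} \;+\; \epsilon\, h^{(1)}_{\mu\nu} \;+\; \epsilon^{2}\, h^{(2)}_{\mu\nu} \;+\; \mathcal{O}(\epsilon^{3}),
\qquad \epsilon = \frac{m}{M} \sim 10^{-6},
```

where \(\bar{g}_{\mu\nu}\) is the spacetime metric of the large black hole alone, \(h^{(1)}_{\mu\nu}\) carries the first-order correction responsible for the gravitational self-force, and higher orders refine the waveform further – with second order believed sufficient for LISA-quality templates.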

*Artist’s impression of the LISA space mission*

LISA, a space-based observatory to detect gravitational waves, is planned to launch in the early 2030s. The sensitivity of LISA will peak in the millihertz band, the frequency range at which EMRI systems will emit gravitational waves. However, even if an EMRI system is very close, its signal will still be much weaker than the instrumental noise gathered by LISA. Such is the problem when trying to catch extraordinarily faint signals buried in detector noise. To maximise the science return from the multi-billion dollar mission, it is vital that the theoretical waveform models are derived accurately, in advance. Then the data from LISA can be matched against these theoretical templates, which act as a filter against the noise, allowing us to recover clear signals. Getting the EMRI waveforms right would unlock a wealth of scientific information. The encoding of the geometry of spacetime in the gravitational waves would provide profound insight into our understanding of gravity in the strong regime – we just need the wave template cipher.
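The template-matching idea can be sketched with a toy matched filter in Python (white noise, a hand-picked injected waveform and invented numbers throughout; real LISA data analysis is vastly more sophisticated):

```python
import math
import random

# A toy matched filter: slide a known template along noisy data and pick the
# offset where the correlation peaks. All numbers are invented for illustration.
random.seed(1)

template = [math.sin(2 * math.pi * t / 8.0) for t in range(32)]  # known waveform

# Noisy "detector" data with the waveform injected at a known position.
data = [random.gauss(0.0, 0.5) for _ in range(256)]
INJECTED_AT = 100
for i, s in enumerate(template):
    data[INJECTED_AT + i] += s

def matched_filter(data, template):
    """Return the offset at which the template correlates best with the data."""
    scores = [sum(d * t for d, t in zip(data[i:], template))
              for i in range(len(data) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

best = matched_filter(data, template)  # recovers (approximately) INJECTED_AT
```

Without the template the signal is invisible to the eye in the noise; knowing the waveform in advance is what lifts it out, which is exactly why accurate theoretical templates must be ready before LISA flies.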

To recap: we have the comparable mass binary systems (i.e. two similar-sized black holes), whose gravitational waves have been detected by LIGO and which numerical methods have proved fruitful to model; and the extreme mass binary systems, key LISA targets, which we model using our perturbative tricks. A third system sits between these: the logically named Intermediate Mass Ratio Inspiral (IMRI). IMRI systems are those for which the mass ratio is around 1,000. They would be embodied in nature by either an intermediate mass black hole around a supermassive black hole (case 1) *or* a stellar mass black hole around an intermediate mass black hole (case 2). There is doubt around the existence of such systems, however, as intermediate mass black holes have not *yet* been proven to exist.

Being the middle sibling in this situation means neither of our above methods for theoretical waveform modelling can do the trick. The accuracy of the perturbative expansion in the mass ratio severely deteriorates as the parameter is no longer small, yet the number of orbits remains too large for numerical relativity. Such a setup thus requires a hybrid approach, and this is what my PhD hopes to investigate. But let me end by telling you why this last case is worth cracking. As well as providing the first confirmation of the existence of an intermediate mass black hole, observations of IMRI gravitational waves will allow us to probe the dynamical processes in globular clusters and galactic nuclei. Rich astrophysical insights are up for grabs, along with fundamental knowledge on black hole formation and morphology. In case 1 IMRIs, since the central object is large, gravitational waves are produced at low frequencies. Such systems would then be detectable by LISA in the future. In case 2 IMRIs, the central object is smaller, producing higher frequency gravitational waves which could actually be detectable by the currently running Advanced LIGO instrument. Tantalising prospects, whereby discoveries are theoretically possible as soon as we have the correct gravitational wave templates against which to filter the LIGO data.

Gravitational waves, black holes and all things gravity will return to being a central theme here at RTU. Posts in the near future will also include a more in-depth look at IMRI systems and the workings of the LISA instrument. Lots to discuss in this exciting and relatively new field of theoretical physics.

]]>

To understand fuzzballs we must first remind ourselves of the key properties of black holes. Earlier posts in the Black Holes series (#1, #2, #3, #4) at RTU cover these fantastical features in more depth, but in a nutshell they are defined by two key features. Firstly, the event horizon, the barrier from within which information cannot escape. Secondly, the singularity, the point at the very centre of the black hole at which, due to an infinite matter density, space and time as we know them break down. The problem with the original black hole view is that the nature of these beasts can be described through both the quantum lens and the gravity lens. The two leading theories of the universe have been unable to reconcile their differences, and their clash in the context of black holes has caused arguably the most troublesome conflict in the subject, the information paradox. This, coupled with the seemingly impossible task of understanding a singularity, has caused physicists to tear their hair out over the years. Fuzzballs claim to present solutions to both these problems. To understand them, let us take each problem and its supposed resolution in turn.

**The information paradox**

*The Black Hole*

In the standard description of a black hole there exists the infamous event horizon – a boundary a distance from the centre of the black hole from which nothing can escape. A point of no return. Hawking realised that in the empty space surrounding this horizon, particle and anti-particle pairs can pop into existence. If this happens, there is a chance that one of the pair will escape outwards, while the other passes through the event horizon of the black hole, never to be seen again. As a result of the outwardly escaping particle, the black hole is seen to be radiating, and with a loss of energy through radiation comes a shrinking of the black hole. For a dedicated explanation of this process see Black Holes: #2 Glowing and Shrinking. As a result of this ongoing phenomenon, a black hole will finally cease to exist altogether, having evaporated entirely. In doing so, information about whatever fell into the black hole is destroyed, and this destruction of information stands staunchly in opposition to the laws of quantum mechanics. In quantum mechanics, information is *never* lost. General relativity also states that a black hole is characterised only by its mass, spin and charge. No other information can be deduced about a black hole from examination of its event horizon. This lack of information, and its eventual disappearance altogether, is known as *the information paradox.*

*The Fuzzball*

The fuzzball view claims to resolve this paradox by doing away with the event horizon altogether. The theory instead claims that these extremely dense coagulations of matter are comprised entirely of strings and *do* have a physical surface, just like a neutron star, ordinary star, or planet does. This surface, however, is fuzzy instead of entirely solid. The diagram below may help your visualisation.

By eliminating the event horizon, we eliminate the information loss associated with Hawking radiation: information can no longer be lost past a boundary of no return as there *is* no boundary of no return. Radiation still gets emitted from the fuzzball if one particle falls in and the other escapes, but there is no clash with quantum mechanics. The information about the infalling particle can be retrieved. Furthermore, because the fuzzball has a surface, there is structure here, and information about the past history of the fuzzball can be deduced from it. From analysis of this structure all fuzzballs are seen to be unique, characterised by a lot more than just their mass, spin and charge. As John Wheeler famously said, to sum up the generic nature of black holes, ‘a black hole has no hair’. Fuzzballs however very much do have hair, and knotty hair at that.

**The singularity**

*The Black Hole*

Another troubling feature of the black hole is the point at the very centre where space and time break down due to the extreme density of matter. In the standard theory of general relativity the curvature of spacetime tends towards infinity, the mathematics blowing up in our faces and producing seemingly unphysical results.

*The Fuzzball*

The fuzzball structure, as we have said, is made from strings – as, according to string theory, is everything in our universe, since strings are the fundamental components of matter. As objects fall into the fuzzball, their strings combine with those on the fuzzball’s surface, forming larger, more complex string structures. When these strings combine there is a resultant outward pressure from the massless fields at play. At the centre of the fuzzball the density of these strings is at its highest, and the strong resultant outward pressure causes a phase transition to a new state of matter which prevents the formation of a singularity. Perhaps a little hard to swallow without examining the maths first hand, but I’m giving you the quick and dirty gist of it.

*a) the black hole view with singularity in spacetime represented by a jagged line*

*b) the fuzzball view with the centre of the fuzzball represented by a dense coagulation of strings*

**Entropy**

Another advocate for fuzzballs is entropy. As required by the second law of thermodynamics, black holes have entropy – an inherent measure of their level of disorder or, simply put, chaos. All systems have a measure of entropy, and this entropy can be quantified by counting the number of microstates of the system. Different microstates are the different ways the components of the system can be arranged whilst preserving the overall macroscopic picture. For example, a messy room has a high entropy as the items can be strewn around in many ways, i.e. a large number of microstates, whilst still preserving the overall look of messiness.
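The messy-room picture can be made quantitative: entropy is (up to Boltzmann’s constant) the logarithm of the number of microstates, S = k ln Ω. A tiny sketch, with invented numbers for the room:

```python
import math

# Entropy as the log of the number of microstates: S = k * ln(Omega).
# (Working in units where Boltzmann's constant k = 1; the room sizes
# below are made up for illustration.)

# Macrostate "tidy": every item has exactly one allowed place.
tidy_omega = 1

# Macrostate "messy": 5 items strewn across any of 100 spots in the room.
messy_omega = math.comb(100, 5)   # number of distinct arrangements

print(math.log(tidy_omega))    # 0.0 – a single microstate means zero entropy
print(math.log(messy_omega))   # ~18.1 – many microstates, high entropy
```

The messy macrostate looks the same whichever of its ~75 million arrangements it is in, and that multiplicity is precisely its entropy.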

In 1973 Bekenstein postulated that the entropy associated to a black hole is proportional to the area of the black hole’s event horizon. Together with Hawking, the formula for a black hole’s entropy was produced, expressing it as proportional to the area of the horizon with factors of fundamental constants. It is a truly remarkable formula as it includes the fundamental constant of gravity and a fundamental constant of the quantum world, Planck’s constant. Such constants rarely meet in our descriptions of nature, given the long-standing incompatibility of quantum mechanics and general relativity. In the original black hole view, the only way we can measure this entropy is from properties of the event horizon, since we cannot retrieve any further information from inside.
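Written out (a standard result, reproduced here for the curious), the Bekenstein-Hawking entropy of a horizon of area A is

```latex
S_{\mathrm{BH}} \;=\; \frac{k_{B}\,c^{3}\,A}{4\,G\,\hbar}
\;=\; \frac{k_{B}\,A}{4\,\ell_{P}^{2}},
\qquad
\ell_{P} \;=\; \sqrt{\frac{G\hbar}{c^{3}}}
```

where k_B is Boltzmann’s constant and ℓ_P is the Planck length – so the entropy effectively counts the horizon area in units of the (unimaginably tiny) Planck area.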

The fuzzball theory, however, allows us to directly count the number of microstates of the system. Within string theory, a black hole’s structure comes in the form of strings and branes, and the ways in which these can be arranged represent the different microstates. Mathur’s calculation of the entropy from counting these microstates is found to equal that given by the Bekenstein-Hawking formula! A very promising find.

**Fuzzy thoughts**

Fuzzballs present a way to reconcile classical and quantum descriptions of black holes, but the jury is still out in the theoretical physics community. Fuzzballs make use of string theory, much to the delight of many who have pored over its formulation as a possible quantum gravity theory. However, string theory is by no means a complete theory and fuzzballs rely heavily on its claims. Although the framework seemingly resolves the problems of singularities and information destruction, it raises new questions in their place, including the nature of extra dimensions, to name one (did I mention string theory is at minimum 10-dimensional?!). And… as much as the event horizon of a black hole is a wicked feature, physicists have a somewhat twisted affinity for it. Not all are keen to champion its dismissal; some would rather find a theory which resolves the paradox whilst maintaining its inclusion.

The final fate of the fuzz is still unknown.

]]>The Standard Model of particle physics is the basis of our current best understanding of the sub-atomic realm. The model postulates a conglomeration of different fundamental particles that together explain the behaviour of the forces and matter we observe in nature. A recent triumph of the LHC, in 2012 (recent as major discoveries in particle physics go), was the discovery of the Higgs boson, the elusive last member of the Standard Model to be observed experimentally. The Higgs is believed to be the particle that explains the range of masses of the different particles and was a crucial missing piece in fully validating the Standard Model.

However, there is still a lot about our universe that we cannot describe with the Standard Model. To be precise, the known constituents make up *only 5%* of the whole universe. The remaining 95% of the universe is made up of what physicists call dark matter and dark energy. The LHC has provided no insight into the nature of these mysterious entities and it is believed that collisions at a much higher energy are necessary to unlock their secrets. Additionally, the current model cannot unite the forces that govern the quantum world with the force of gravity. Since the early twentieth century there has been an incompatibility in theoretical physics between quantum mechanics (our best theory of the very small) and general relativity (our best theory of the very large). By seeking to probe physics *beyond* the Standard Model, the FCC represents a chance for physicists to find a way to break this stalemate.

Such bold ideas come with a very hefty price tag – £9 billion for the least expensive design, rising to £20 billion for the full capabilities that the CERN team are hoping for. Such a cost has sparked serious criticism at a time when issues of environmental sustainability and climate change are at the forefront of discussions amongst scientific and political communities. A crucial problem lies within the very nature of the quest: a probing of the *unknown*. There is no guarantee that the energy at which the FCC is built to operate will be the energy at which currently hidden physics becomes visible. The entire endeavour could function at an energy way off, or *just* short of, that necessary to reveal the currently unseen particles. Some argue this is too large a gamble on resources that could deliver tangible, guaranteed benefits to humanity’s very human problems of the environment and health. Nevertheless, lead scientists at CERN, such as Director-General Professor Fabiola Gianotti and senior physicist John Womersley, are keen to emphasise the peripheral advancements to technology and benefits to society that the endeavour would bring. Being at the very forefront of science, it is argued, the FCC will unearth innovative technologies during its design, construction and operation phases; just as electronics, the internet and superconducting magnets in MRI machines all arose from previous enterprises in fundamental physics.

The FCC has now been proposed to the European Strategy for Particle Physics. A decision is expected in 2020 and, if accepted, the initial phases of the collider would be up and running between 2040 and 2050. CERN scientists firmly believe the creation of this facility is the necessary next step towards uncovering nature’s secrets, but such a gigantic vision will no doubt require global support from both national governments and the public. Although the potential challenges of the FCC are enormous, its potential impact on humanity’s understanding of the universe is arguably much larger. To stop pushing the limits of our exploration is to stop discovering, and this is something CERN physicists are determined not to allow.

]]>Einstein’s understanding of this behaviour of space and time is cemented in his field equations, which describe the evolution of a spacetime given an initial setup of masses and energy. However, the universe is complex and spacetime can be very messy – it can contain chaotic galaxies, multi-planetary systems, exploding stars and even colliding black holes. Einstein’s equations cannot be exactly solved for such detailed, interacting systems – so there exists but a finite collection of *exact* solutions to the field equations, representing physical setups in nature, that we can analytically work with. Black holes form one such class of solutions – and the nature of these solutions will be our focus for today.

The Schwarzschild solution to the Einstein field equations represents a static (non-rotating) black hole in the presence of nothing else. One black hole, alone in the spacetime – isolated and simple enough to give an exact solution. The Kerr solution to the field equations is the same setup except the black hole is spinning. A black hole can’t do much else really – in fact the only three ingredients that characterise a black hole are its mass, its rotation and whether it has any charge. This is summarised as the ‘No Hair Theorem‘ – all other information about a black hole disappears and, for some reason, according to John Wheeler this is akin to it being bald… not so sure about his reasoning there but I digress, such lack of hair is a story for another time. Now a question at the forefront of mathematical physics is whether these black hole solutions are *stable* – whether, if we *perturb* the black hole a little bit, it will settle back down to a stable system which can be represented by the previous solution. Proving this would be proving the ‘Black Hole Stability Conjecture’. But what is meant here by perturb? Let me explain.

Imagine a still pond, undisturbed with flat water. Then imagine this pond is disturbed, or *perturbed*, by an external object, for example by throwing in a pebble. A perturbation is essentially an external influence on a system which causes a disturbance to its original state. Perturbations are created by the injection of energy in some form or another and, as we know from Einstein, energy is equivalent to mass and mass causes curvature of spacetime. So if we throw a pebble into a pond, the water is disturbed and energy propagates outwards in the form of ripples. It is a similar idea with a black hole: throw in some kind of object or add some form of energy and spacetime will ripple outward. Instead of water waves forming these ripples, in spacetime the ripples take the form of gravitational waves, and these waves warp the surrounding spacetime just as the water waves warp the surrounding water. What mathematical physicists want to prove is that, regardless of the size of the perturbation, if we are dealing with a static or rotating black hole (provided it isn’t rotating *too* fast) then it will eventually settle back down into its original stable state.

Merger of two black holes – a highly perturbed system creating strong gravitational waves.

The key to this endeavour is to show that the gravitational waves produced by the perturbation all eventually fully decay – as they ripple outwards they become smaller and smaller until they finally have no effect on the surrounding spacetime. We need to measure the size of these waves to make sure they are getting smaller/decaying, so for this we need coordinates – as coordinates allow us to measure distances between points. These calculations can all be done but it’s not that easy… there is a catch. General relativity is *diffeomorphism invariant*. Diffeo-what, I hear you say? Horrible phrase but really not that complicated at all – basically all it means is that it doesn’t matter what coordinate system you choose to use in your work. Think back to the GCSE maths days: different coordinates exist to better describe different physical systems. Euclidean x, y, z coordinates are often used when dealing with straight lines or boxy objects, whilst spherical coordinates r, θ, ϕ are often used when dealing with curves or spheres. But the point is the world does not have such coordinates naturally built in – they are mathematical constructs designed by humans to help us with whatever our calculation at hand is. However, in this case a particular choice of coordinates can *obscure* whether the gravitational waves are decaying. It wouldn’t be a *wrong* choice that you made when choosing the coordinates you did, but it would not have been the best possible choice given your “are black holes stable?” aims. Therefore, the problem is making the right coordinate choice in order to examine black hole stability, and making the best choice in mathematics is not obvious…
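To make the coordinate-choice point concrete, here is a small sketch (plain Python, nothing relativistic about it) of the same point described in Cartesian and in spherical coordinates. Neither description is wrong; they carry identical information, but one may suit a given calculation far better than the other.

```python
import math

# The same physical point, described in two coordinate systems.

def to_spherical(x, y, z):
    """Cartesian (x, y, z) -> spherical (r, theta, phi)."""
    r = math.sqrt(x**2 + y**2 + z**2)
    theta = math.acos(z / r)          # polar angle
    phi = math.atan2(y, x)            # azimuthal angle
    return r, theta, phi

def to_cartesian(r, theta, phi):
    """Spherical (r, theta, phi) -> Cartesian (x, y, z)."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

point = (1.0, 2.0, 2.0)               # a point, Cartesian description
r, theta, phi = to_spherical(*point)  # the same point, spherical description

print(r)                              # 3.0 – distance from the origin
print(to_cartesian(r, theta, phi))    # recovers (1.0, 2.0, 2.0) up to rounding
```

For a round object like a black hole, the spherical description makes the symmetry obvious; the lesson of the paragraph above is that in general relativity an unlucky choice can just as easily hide the very behaviour you are trying to measure.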

Mathematicians across the world are working on this proof in order to show black holes are stable solutions to the Einstein field equations, and there is excitement in the community as some groups seem to be getting close. Proving the stability of black holes will give us a greater understanding of their behaviour and, given some of the universe’s most spectacular phenomena occur around these astronomical beasts, the more we know about them with certainty, the better equipped we are to tackle some of nature’s biggest outstanding questions.

]]>

One of the popular phrases used to describe the nature of space and time through the eyes of general relativity is as follows:

*“Spacetime tells matter how to move,*

*matter tells spacetime how to curve.”*

Imagine the old bowling ball analogy for general relativity and spacetime. Spacetime is depicted as a rubber sheet, whilst an object such as a bowling ball, often representing a planet or star (though it can be any object with mass), deforms this rubber sheet when placed on top. This simple analogy is meant to represent the curvature of spacetime as a result of the mass of an object. The two key words here being: mass and curvature. Mass *produces* curvature. The larger the mass of the matter in question, the greater the curvature of the surrounding spacetime. A black hole for example, one of the heaviest objects in our universe, produces a curvature so large, a warping *so great*, of the surrounding spacetime that phenomena like nowhere else occur – see here and here. In fact, at the centre of the black hole, where the bending of spacetime is so great due to the highly concentrated mass, the curvature is mathematically described as *infinite* – and our understanding of the nature of spacetime here breaks down completely. Such an extremity causes us a problem; we do not like it when our maths stops working as a descriptive language for nature. These points of infinite curvature, called singularities in the nomenclature, create problems in the standard theory of general relativity. Take home message 1: general relativity only includes the mass of matter as a property affecting the behaviour of the surrounding spacetime, and it does so by causing it to *curve*. If there were no matter in the universe, or if all matter were massless (for example photons), the surrounding spacetime would be flat – imagine a sheet stretched out completely taut. Take home message 2: the theory of general relativity causes us problems, which come in the form of singularities at points of extreme curvature.

An alternative theory, Einstein-Cartan theory, holds a possible key to these problems. Matter in our universe has two fundamental properties: *mass* and *spin*. Spin is a funny little characteristic of matter; it is not related to the spinning of a particle in actual space, so don’t imagine a spinning top on a table – instead it is the *intrinsic angular momentum* of a particle. If that doesn’t mean much to you don’t fret, but for the purpose of this article humour me and the particle physicists of the world and accept that spin is a fundamental characteristic of matter. See *here* for more reassurance. Einstein-Cartan theory claims that both these fundamental properties of matter affect the nature of the surrounding spacetime, whereas general relativity only incorporates mass. Whilst mass creates a curvature of the spacetime, spin creates an effect called the torsion of spacetime – unfortunately an analogy similar to the bowling ball does not exist (that I know of) for visualising this torsion. The mathematics of Einstein-Cartan theory, with the inclusion of spin and the resulting torsion in the framework, has some very interesting implications. In standard general relativity, the black hole situation described above creates a singularity at the centre due to the infinite curvature of spacetime. *However*, when torsion is also in play at such extreme matter densities, the torsion field creates a repulsive force that pushes outward against this extreme warping. Instead of a singularity at the centre, in Einstein-Cartan theory the interaction between torsion and curvature creates a wormhole, or Einstein-Rosen bridge, at the centre of the black hole. The wormhole creates a passage to a new, growing universe on the other side of the black hole! The same can be said of the situation at the beginning of the universe.
In standard general relativity, the big bang represents a singular point of infinitely dense matter, from which the universe then somehow comes into existence. However, in the Einstein-Cartan formulation, the torsion again creates an outward repulsion at such points of extreme curvature and density, forbidding them to occur. The big bang is then replaced by what is known in cosmology as a big bounce scenario.

The nice thing about Einstein-Cartan theory is that, because the extra features only come into play at extremely high matter densities, like the centre of a black hole, the tests which probe astronomical phenomena within our experimental reach still agree with the predictions of general relativity. For example, the precession of the perihelion of Mercury, a key test of general relativity, would still hold when working within the Einstein-Cartan formulation. Therefore, we can see the theory not as an opponent to general relativity but as a slight revision, extending its validity and in so doing presenting resolutions (and very exciting ones at that) to our previous trouble points. On the other hand, this is also unfortunate because, given our nearby surroundings and inability to accurately probe quantum phenomena, we are not in a position to experimentally test the predictions of Einstein-Cartan theory. The experiments that we can conduct are ones where mass is the heavily dominant player affecting the behaviour of spacetime, and spin effects remain physically hidden to us.

What we can say, without even commenting on the mathematical elegance of the theory is that Einstein-Cartan simply seems more wholesome ontologically. Why include one fundamental property of matter (mass) and not the other (spin)? The theory seeks to resolve this omission and unlock the missing results even if they cannot be physically probed. A curveball theory, where curvature is no longer the only behaviour of spacetime.

]]>The key idea is that AdS/CFT is an example of a duality between two types of theories. On the left hand side we have theories in Anti-de-Sitter space; this is a special type of geometry that physicists use to model a particular spacetime. The number of dimensions of this spacetime can be chosen freely, but the details of what exact *kind* of geometry this represents need not concern us today. When working with spacetimes it is easy to add gravity into your models, and physicists try to model the gravitational interactions here in terms of string theory (see How long is a piece of string). Incorporating string theory into AdS space attempts to create a big and bold theory of quantum gravity. The inner workings of such a theory are still very much in the dark, but here the assumed pillars of a quantum gravity set-up have been erected. Take-home message: AdS sets up an arena for physicists where the only field (see What is a Field?) is the gravitational field and the constituents are our little friends the strings. It is a proposed theory of quantum gravity, still being explored.

On the right hand side we have what is known as a Conformal Field Theory. Conformal field theories are particular types of quantum field theories (see What is Quantum Field Theory?), which in turn are models describing the interactions of elementary particles. Elementary particles interact heavily with the other three of the universe’s four fields: the electromagnetic, strong and weak fields. On this side of the correspondence there is no gravity, and as such the geometry of the arena is a flat spacetime. The way Maldacena, the father of AdS/CFT, set up the correspondence is that if you choose the number of dimensions in your AdS theory to be D (any integer), the number of dimensions in the CFT must be D-1. We’ll explain the consequences of this later. Take home message: CFT sets up an arena for physicists where they can work with the interactions between the very small constituents of the universe and use all the fields *except* gravity.

Ok, so both halves have been defined – what next? Well, we choose a number for our dimensions and play around with each side of the theory. We then begin to see remarkable similarities in the behaviours of the models of quantum gravity formulated in AdS space and those of the conformal quantum field theories. Important characteristics of the models, such as emergent symmetries and levels of chaos, mirror each other on both sides of the correspondence, and the main breakthroughs of this are twofold:

Firstly, the fields at play in the quantum field theories on the CFT side of things (electromagnetic, strong and weak) are subject to what is known as ‘strong coupling’. When studying theories in theoretical physics there is a bunch of mathematical procedures we very often use; however, when using maths to study theories with *strong couplings* our calculations essentially blow up. This is because the strong coupling is represented in the mathematics by a large pre-factor, call it q here for example. As we try to expand the mathematical terms in the theory we get terms with q, q^2, q^3… and because q is a large number already, the whole thing blows up in our face! The fancy terminology is that the theory is not ‘mathematically tractable’. *However*, theories on the AdS side of things only contain the gravitational field, which is subject to ‘weak coupling’. The pre-factor here is very small and less than 1, call it g. So when we expand the terms and get g, g^2, g^3 we can essentially ignore the terms which contain g raised to high powers, as they would be so much smaller than 1 (check it!). As such we can work with the first few terms alone and the maths makes us happy. Take home message: we can study examples of CFTs in difficult areas like nuclear and condensed matter physics by translating the problems into *mathematically solvable* problems of string theories on AdS spaces.
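A quick numerical way to see the difference (the couplings q = 3 and g = 0.1 below are invented for illustration): truncate the series 1 + x + x^2 + … after a few terms and ask how much adding one more order changes the answer.

```python
# Toy illustration of why weak coupling is tractable and strong coupling
# is not: truncate the series 1 + x + x^2 + ... and compare successive
# partial sums. The coupling values are made up for illustration.

def partial_sum(x, n_terms):
    """Sum the first n_terms of 1 + x + x^2 + ..."""
    return sum(x**n for n in range(n_terms))

g = 0.1   # weak coupling: each extra order changes almost nothing
q = 3.0   # strong coupling: each extra order changes everything

print(partial_sum(g, 4) - partial_sum(g, 3))   # ~0.001 – safely negligible
print(partial_sum(q, 4) - partial_sum(q, 3))   # 27.0 – the series blows up
```

With g, dropping everything beyond the first few terms costs you a tenth of a percent; with q, the term you dropped is bigger than everything you kept.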

Secondly, the string theory on AdS space set-up is, as we said, a proposed theory of quantum gravity. A theory of quantum gravity is the holy grail of the theoretical physics world, and this proposed theory is of course only an attempt at a formulation. However, due to the similarity in the behaviours of models on both sides of the *duality*, we can probe the proposed quantum gravity with a quantum field theory – a domain much easier for humans to manipulate. Since we know very little about the inner workings of quantum gravity, considering its quantum analogue without gravity (the CFT) can significantly enhance our understanding. Take home message: we explore the quantum gravity AdS set-up with what we know about the corresponding CFT.

A final point worth touching on, because the idea is pretty funky and Susskind is a wonderful man, is holography. You may recall that in the set-up the AdS quantum gravity theory has D dimensions but the CFT quantum field theory has D-1 dimensions, one fewer. The duality between the two theories was interpreted by Susskind and ’t Hooft as being the same kind of duality seen between a real world object in three dimensions and its hologram on a two-dimensional surface… The idea is that all of the information in a theory of quantum gravity can be encoded within a theory without gravity on a lower-dimensional space. The geometric visualisation is akin to a sphere (3D) holding the full theory and the information being encoded on its surface, the boundary of the sphere (2D). Susskind’s interpretation of the AdS/CFT correspondence has conjured up an image of us living within an analogous hologram: a four-dimensional space, an embodiment of a richer five-dimensional space somewhere in the universe… a post in more detail on this to come.

]]>