I write a little about the history of women in Physics; there are a bunch of other contributions from female scientists in South Africa. I appear to be the only one who got lost in the historical rabbit trail, but oh well . . .

In the details of the interaction we looked at, the electron smashing through the nucleus produces a quark and an antiquark. This pair is the only way the electron can exchange colour with the nucleus, so if we track what the quark and antiquark are doing, we’ll know whether or not there’s been a colour exchange with the nucleus: in other words, whether a gluon has been transmitted between them.

The easiest way to track that turns out to be by checking whether or not the quark and the antiquark are still correlated. When they’re created, they have exactly opposite properties, but if one of them interacts with the nucleus, only that one will change. Our job, then, is to track how closely the quark and antiquark are correlated, depending on the energy involved and the difference between them. The equation that does this is called the Balitsky-Kovchegov equation^{*} (after Ian Balitsky of Old Dominion University and JLab and Yuri Kovchegov of Ohio State University). The BK equation describes the quark-antiquark correlator, written ⟨S_{xy}⟩, and it looks like this:

It’s fairly complicated. Analytically, it can’t be solved — that is to say, we can’t simplify the symbols any further without putting numbers in. Therefore, we put numbers in. But solving differential equations numerically is an art in itself: the derivative is full of statements about “tending to zero” which can’t be applied to actual numbers the way they can to symbols. However, it’s certainly possible, if time-consuming. Sufficiently time-consuming that even when we get a computer to do all the number crunching, it takes days or weeks to get an answer with reasonable accuracy. To do better than that, we have to call in more advanced computing techniques. In our case we made use of a GPU — a processor designed to run high-intensity graphics on the computer — to do our number crunching.
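The idea of replacing “tending to zero” with a small but finite step can be sketched in a few lines. This is a toy Euler-method solver, not the actual BK code (which needs far more sophistication), applied to a simple equation whose exact answer we already know:

```python
import math

def euler_solve(f, y0, t0, t1, steps):
    """Approximate y(t1) for dy/dt = f(t, y) by replacing the derivative's
    'tending to zero' with a small but finite step h."""
    h = (t1 - t0) / steps   # small, but never actually zero
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)    # change in y over one step: dy is roughly f(t, y) * h
        t += h
    return y

# Try dy/dt = y with y(0) = 1, whose exact solution is y(1) = e.
approx = euler_solve(lambda t, y: y, 1.0, 0.0, 1.0, 10_000)
print(approx, math.e)   # more steps means a closer (but slower) answer
```

Halving the step size roughly halves the error here, which is exactly the accuracy-versus-time trade-off that makes more serious solvers (and GPUs) worth the trouble.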

It turns out that we need a substantial detour through numerical methods and computer science to calculate the correlation function for the quark-antiquark pair. In fact the only other *physics* we’ll talk about in this series is when we finally get results from the BK equation. Before that, we’ll think about thinking like a computer.

_{*Technically, the BK equation is a truncation of the full result, which can be achieved either through the JIMWLK equation or the Balitsky hierarchy of equations. The fine print of the mathematical details of the truncation doesn’t affect the broad sweep of the results, although it has some surprisingly visible consequences.}

Changing velocity doesn’t change the laws of physics: this is why tray tables in aeroplanes make sense. Despite the plane’s tremendous velocity, a cup placed on the table stays on the table, just like it would on the ground. Of course, aeroplane tray tables don’t work during takeoff and landing, but the issue there is the acceleration — the change in the velocity — not the velocity itself. It’s also why many physics experiments can ignore the fact that the Earth is a big rock flying through space at a terrifying speed. (Technically the Earth’s velocity is not constant, particularly because it keeps turning, and this gives rise to effects like the Coriolis force.) This idea — that you can pick a constant velocity and physics will work the same way as at any other constant velocity — is one that every theory of relativity keeps front and centre. In fact, velocity-independence (or “frame-independence”) has become a requirement for anything proposed as a law of physics.

So what *does* change if you change velocities? Most obviously, the relative velocity of everything else. I may think I’m walking at half a metre per second down the plane aisle, but from the ground you’d say I’m rushing overhead at several hundred kilometres per hour, with a slight modification based on whether I’m walking towards the front or the back of the plane. For centuries, that modification was assumed to be simple: my velocity relative to you would just be the sum of my velocity relative to the plane and the plane’s velocity relative to you. However, when measurements become accurate enough — historically this came from various attempts to measure and describe light, notably by Maxwell and by Michelson and Morley — just adding velocities doesn’t work out. In order to get the best theories of light to square up with the best measurements, velocities had to combine in some way other than simple addition. (One of the results of the new method of combination is that a light wave travels at the same speed whether you run towards it or away from it — it has exactly the same speed relative to you. This is very weird, but we just said we were going to redefine addition, so we should expect things to be weird.) Simple addition is a very good approximation to the new rule for slow-moving objects, but it’s not so good for very, very fast ones. This is why ordinary addition of velocities seems to make perfect sense in everyday life. It’s only when we start measuring fairly esoteric things (like the speed of light) that we come across the new velocity combination rule.
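The new combination rule fits in one line. Here is a minimal sketch (the formula is the standard special-relativity velocity combination; the numbers are purely illustrative):

```python
C = 299_792_458.0  # speed of light in m/s

def combine(u, v):
    """Relativistic velocity combination: replaces simple addition u + v."""
    return (u + v) / (1 + u * v / C**2)

# Walking at 0.5 m/s down the aisle of a plane doing 250 m/s:
print(combine(0.5, 250.0))   # essentially 250.5 m/s: simple addition is a fine approximation

# Combining light's speed c with any boost at all:
print(combine(C, 0.99 * C))  # still c: light's speed relative to you is unchanged
```

Notice the denominator: for everyday speeds, u·v/c² is so tiny that the rule collapses back to ordinary addition, which is why nobody noticed for centuries.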

In all of this messing around with velocity, we’re actually messing around with distance and time too. That shouldn’t be too surprising, since velocity is just distance divided by time. In general, if an object is moving very fast compared to you, then when you measure its length, you’ll get an answer slightly *smaller* than you would if it wasn’t moving. This is called “length contraction.” It’s very weird, but all the evidence points to it being true. There’s a similar but opposite effect for time. If two events happen on an object moving very fast compared to you, you’ll measure the time between the events as *longer* than you would if the object wasn’t moving (compared to you). This is called “time dilation.” Again, it’s very strange, but it’s the least strange thing we can do to make sense of the measurements.
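Both effects are controlled by a single number, the Lorentz factor, which depends only on the relative speed. A quick sketch with illustrative numbers:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gamma(v):
    """Lorentz factor: measured lengths divide by this, time intervals multiply by it."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

g = gamma(0.99 * C)     # an object at 99% of light speed
print(g)                # roughly 7.09
print(1.0 / g)          # length contraction: a 1 m rod measures about 0.14 m
print(10e-6 * g)        # time dilation: a 10 microsecond interval measures about 71 microseconds
```

At walking pace the factor is so close to 1 that no ruler or stopwatch could tell the difference, which is why neither effect shows up in everyday life.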

So how does all of this relate back to particle physics? In the simplest sense^{1}, it’s because the particles in a collider like the LHC are hurtling towards each other at very nearly the speed of light. To describe a particle about to enter a collision, we will need to take into account the length contraction and time dilation that it will experience. This is where the colour glass condensate comes in.

A particle travelling very fast will be contracted in the direction it travels. So while we might have said that a round ball is a reasonable approximation of a proton or an atomic nucleus, that round ball now becomes more like a pancake. If you like (and we do) you could even say it’s like a sheet of glass. Now our particle pancake is not just length contracted, it’s time dilated. That means that the time between events within the particle is larger than usual — in other words, it changes very, very slowly. A similar very slow change is a property that is sometimes attributed to glass, which encourages us to name this slowly-changing particle pancake after a sheet of glass.

The glass is colour glass because, as we said last week, there are going to be lots and lots of gluons around — all of which have colour charge. “Condensate” refers to the huge number of gluons and to the fact that if there are enough gluons (and in the CGC there are), the system becomes saturated, so that adding more gluons doesn’t really change anything.

Now, after a brief detour via special relativity, we have our setup. A target particle (a proton or atomic nucleus) is moving so incredibly fast that it becomes a colour glass condensate particle-pancake. A probe particle travels towards the target and interacts with it — probably punching right through it and probably shattering it. (The “inelastic” in “deeply inelastic scattering” means that the target is probably shattered. And the “deeply” means that it’s really, really probably shattered.) Boom! Now all we need are some equations to calculate the probability of this collision actually happening, and perhaps to tell us what we might get out the other side. And maybe some tools that will allow us to calculate with those equations . . .

^{1 In a more complicated sense, the new velocity combination rules put time and distance on the same footing, which can’t be done in ordinary quantum mechanics and requires the development of ideas like quantum field theory.}

I snuck one in at the end of last week’s post, where the electron lines in the QED vertex were replaced with quarks, but QCD proper concerns itself with the vertices between quarks and gluons. While QED has just one pretty vertex, QCD has three. (This is one of the reasons that problems in QCD are generally harder to solve than their counterparts in QED.) The first vertex looks somewhat familiar. (Placing a bar over a label indicates an antiparticle.)

The other two vertices come about because the gluon carries colour charge (in contrast, the photon is electrically neutral). This means that gluons can interact amongst themselves:

One consequence of this is that it’s very easy to produce gluons if energy is available to do so (the way the maths works out, the three-gluon vertex is particularly important for this). In general, the energy required to produce a particle is enough energy to give the particle its mass (using Einstein’s famous E=mc^{2} equation) plus a little extra to provide the new particle’s energy of motion. But the mass of the gluon happens to be zero, so all that’s needed is that little extra. At sufficiently high energies, this means that one should expect gluons everywhere. This gluons-everywhere situation can be described by a model called the **c**olour **g**lass **c**ondensate (CGC). This is what I used in my MSc work, and I’ll discuss it in more detail next week. Before that, let’s talk a little more about Feynman diagrams in QCD.
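Before moving on, the energy bookkeeping in that paragraph is just E = mc² plus a bit of motion energy, which we can sketch directly (the masses and energies below are illustrative, not tied to any particular experiment):

```python
C = 299_792_458.0          # speed of light in m/s
M_ELECTRON = 9.109e-31     # electron mass in kg; the gluon's mass is zero

def creation_energy(mass_kg, motion_energy_joules):
    """Minimum energy to produce a particle: its rest-mass energy E = m*c^2,
    plus the 'little extra' that becomes its energy of motion."""
    return mass_kg * C**2 + motion_energy_joules

print(creation_energy(M_ELECTRON, 0.0))  # ~8.2e-14 J just to make an electron at rest
print(creation_energy(0.0, 1e-15))       # a massless gluon costs only the little extra
```

With the mass term at zero, a gluon can be produced for any amount of spare energy at all, which is why high-energy collisions fill up with them.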

Some features of QCD don’t show up in the pictures until we start doing calculations. For instance, last week we saw that by adding extra vertices (and virtual particles) we can get from A to B in more ways than one. How important is each of these diagrams?

versus

It turns out that the number of vertices in a diagram has a lot to say about that diagram’s importance. Broadly speaking, for every vertex in a diagram, its importance is multiplied by a quantity called the vertex factor. In QED, the vertex factor is very small. Very complicated diagrams, with many vertices, therefore have a very small importance. Of course, other considerations also affect the calculations made for each diagram, but in general we can safely ignore very complicated diagrams — just using the simple ones gives us a decent idea of what’s going on. Unfortunately, things don’t look so pretty in QCD.

In QCD, under ordinary conditions, the vertex factor is not small. This means that more complicated diagrams are more important. In theory, an infinitely complicated diagram would be infinitely important (instead of infinitely *un*important, as in QED). This is a problem. To date, the problem has not been solved. Some physicists think this means we need an entirely new theory, not based on Feynman diagrams (and the associated perturbation theory) to describe what goes on inside the atomic nucleus. In this work, I simply avoided the problem.
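The “importance multiplied per vertex” idea can be caricatured in a couple of lines. The numbers below are made up purely for illustration; real amplitudes involve much more than a single factor per vertex:

```python
def diagram_weight(vertex_factor, n_vertices):
    """Toy importance of a diagram: one factor of the vertex factor per vertex."""
    return vertex_factor ** n_vertices

# QED-like: a small vertex factor, so complicated diagrams fade away quickly.
print([diagram_weight(0.1, n) for n in (2, 4, 6)])  # shrinking rapidly

# QCD-like under ordinary conditions: a factor near 1, so they never fade.
print([diagram_weight(1.0, n) for n in (2, 4, 6)])  # all equally 'important'
```

A factor even slightly above 1 makes matters worse still: each extra pair of vertices then *grows* the weight, which is the “infinitely complicated, infinitely important” problem in miniature.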

The QCD vertex factor depends on a value called the QCD coupling constant which (roughly speaking) describes the strength of interactions between QCD particles. This turns out to be closely related to the energy involved:

The parameter α_{s} is the coupling constant — and it sets the vertex factor. We see here (by taking lots of measurements and producing the graph) that α_{s} decreases as the energy goes up. That means, if the energy is high enough, the vertex factor will be small after all. If we’re willing to work in the very high energy region — and with modern particle accelerators, that isn’t unreasonable — we can still get some use out of perturbative QCD. (The term “perturbative” essentially means that we’re assuming more complicated diagrams are only small changes or *perturbations* to their simpler counterparts.) This is why the virtual photon in the DIS diagram always has to have very high energy.
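For the curious, the standard one-loop formula behind that falling curve can be coded up in a few lines. The scale Λ and the number of quark flavours here are rough illustrative choices, not fitted values from any particular measurement:

```python
import math

def alpha_s(Q_GeV, n_flavours=5, lambda_GeV=0.2):
    """One-loop running QCD coupling:
    alpha_s(Q^2) = 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2))."""
    return 12 * math.pi / ((33 - 2 * n_flavours) * math.log(Q_GeV**2 / lambda_GeV**2))

for Q in (1.0, 10.0, 100.0, 1000.0):
    print(Q, alpha_s(Q))   # the coupling shrinks as the energy grows
```

Notice the logarithm in the denominator: the coupling falls off slowly, which is why “high enough energy” really does mean modern-accelerator energies rather than a modest improvement.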

Of course, now that we’ve restricted ourselves to working at very high energies, we can expect the case of gluons everywhere to become rather relevant. Next week, I’ll talk about the gluon-saturated state called the colour glass condensate.

Time flows from right to left. The axes are often drawn with time flowing left-to-right, which matches the direction we read, but it’s easier to match right-to-left diagrams to mathematical notation. (If I have a variable *x* to which I apply a function *f* and then I apply another function *g* to the overall result, I write that as *g(f(x))* — the rightmost action happens first.) The axes are intentionally vague: they don’t have units, since we’re more interested in describing the general kind of interaction that might happen than in exact numbers, at this point. If we start doing calculations, we’ll label each particle line with important properties, like its momentum.

So much for reading Feynman diagrams. Let’s talk about how to construct them. A good starting point is the Feynman rules for photons and electrons. The model of photons and electrons in quantum field theory (the most accurate model we have to date) is called **q**uantum **e**lectro**d**ynamics, or QED for short. In QED, there’s only one way of connecting particle lines. The connection between lines is called a vertex and in QED it always looks like this:

One consequence of having no other vertices is that electrons can never interact directly: they have to go through a photon, as in the diagram above. In general, however, having only one vertex is not as restricting as you might first think. We can rotate the vertex however we like and introduce as many vertices as we want into a single diagram. We need both those principles to build up the diagram at the top of the post. However, there’s also another diagram to create by rotating the vertex: this one, which describes pair production.

Last week, I briefly mentioned that fermion lines could point “backwards” with respect to time. The lower electron line in this diagram does just that. Our interpretation of the backward arrow is that instead of dealing with an electron, we’re dealing with its partner the anti-electron, also known as the positron. The positron has the same mass as the electron, but is otherwise its opposite. The electron has negative electric charge, for instance; the positron has the same amount of *positive* electric charge (hence the name). Every particle type has a corresponding antiparticle type, with exactly opposite charges. Given the tendency of positrons to turn into photons — pure light — when they meet electrons, they don’t have much effect on ordinary life. They do tend to crop up in high energy experiments, though. For instance, we said that we represent a photon like this:

However, if all we know is that a photon went in and a photon came out, what might have happened is this:

We might not even detect the intermediary electron and positron with our measuring instruments, if they exist for a short enough time, but the rules of QED tell us that it could happen. In fact, particles that must be part of an interaction, but don’t exist to be measured at the beginning *or* the end of the process, turn out to be very useful for hiding some of the uglier parts of the mathematics. (Others may disagree about the ugliness of the mathematics or whether it’s fair to describe virtual particles as hiding these aspects of the maths, but the broad strokes of the picture are at least agreed upon.) The maths involved stems from the uncertainty principle, which says that we can’t assign an exact momentum and an exact position to a particle at the same time — but we got around that by giving particles cloud-like (or wave-like) properties.

Einstein’s theory of relativity tells us that when we talk about position, to be complete we also need to include a “position in time” (which we’d normally just call a time) and when we talk about momentum, we should also include energy. Knowing that, it’s not too surprising that we can’t assign an exact energy to a particle at an exact time. Imagining particles as clouds in space is bad enough — I’m not sure how to begin visualising them as fuzzy in time. Fortunately, virtual particles mean we don’t have to. The way the maths works out, we can use this one weird trick instead: virtual particles don’t conserve energy.

Yup, I just said we were going to violate one of the most fundamental laws of physics: the law of conservation of energy. Remember that I started out by explaining why it’s just a trick, though. We can very carefully consider particles as being fuzzy in time as well as in space and then we keep conservation of energy. It makes the maths a lot harder, though. On the other hand, if we bend the rules when nobody’s looking, we can get to the answers a lot faster. That’s the key, of course: virtual particles are the particles we can never measure. We can treat them as breaking the law of energy conservation instead of as having weird fuzzy times and energies exactly because we’re never going to check what the energies actually are. We just need the maths to work out.

Last week I showed you this diagram, which includes a virtual photon:

In fact, this diagram assumes what’s called a “highly” virtual photon. It violates conservation of energy very badly, so that it gains an enormous momentum out of nowhere. (Or we can say that it’s an extreme case in the time-energy fuzziness, but it gets much harder to describe — people who try to do so can spend years figuring out how to start.) The photon needs to have pretty high energy for the rules of quarks and gluons (**q**uantum **c**hromo**d**ynamics or QCD) to work out, but there’s still a possible range of energies. If we choose a relatively low energy, by using the proton energy to define a fairly complicated standard^{1}, the most likely interaction between the photon and the proton is quite different. This is the case I studied in my MSc project. The diagram looks like this (*A* represents one or more protons):

You’ll notice that to draw this diagram, I’ve introduced a new vertex, where the photon becomes a quark and an antiquark. Next week, we’ll talk about this vertex and other properties of QCD, like the requirement that the photon be highly virtual and why Feynman diagrams don’t work as well as we might hope.

_{1 Such that the square of the photon four-momentum is much smaller than the Minkowski product of the photon four-momentum with the proton four-momentum, meaning that the Bjorken-x variable is small, if you want to get technical.}
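For readers who want that fine print made concrete, the standard definitions in the footnote can be sketched directly. This is a generic illustration (the four-vectors below are made-up numbers, not data from the thesis):

```python
def minkowski_dot(p, q):
    """Minkowski product with signature (+, -, -, -); four-vectors are (E, px, py, pz)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def bjorken_x(photon, proton):
    """Bjorken x = Q^2 / (2 P.q), where Q^2 = -q.q for the spacelike virtual photon."""
    Q2 = -minkowski_dot(photon, photon)
    return Q2 / (2 * minkowski_dot(proton, photon))

q = (2.0, 0.0, 0.0, -3.0)      # a virtual photon: q.q = 4 - 9 = -5, so Q^2 = 5
P = (100.0, 0.0, 0.0, 99.9)    # a fast proton heading the other way
print(bjorken_x(q, P))         # small x: Q^2 is much smaller than 2 P.q
```

Units here would be GeV throughout; the point is only that “small Bjorken-x” is a ratio of the two Minkowski products named in the footnote.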

This week, I’ll talk about describing collisions and interactions between these particles. This is almost universally done by using a system of diagrams invented by and named after Richard Feynman. These Feynman diagrams absorb all the calculus and group theory and complicated mathematical notation into pictures, making their interpretation quite accessible (although calculating the rules for Feynman diagrams is another story). For example, this is a photon, labelled as γ:

Once we introduce axes and directions to the diagrams, we can talk about what the photon is doing, but for now, that’s a photon. This is a gluon:

And this is a Higgs boson:

Fermions are all represented by straight, solid lines, so the labels become especially important. These particles also come with arrows — if the arrow goes backwards, we’re dealing with antimatter, not ordinary matter! (We’ll go into that a little more next week.) For the moment we haven’t specified forward and backward, but I’ll make the arrows consistent with the notation we’ll adopt by the end of the article. Here are a quark and an electron:

We could go on and draw all the particles of the standard model, but for this project we’ll stop here, since we only need four: the quark, the electron, the photon and the gluon. There’s one more thing we can do with them before we start thinking about axes and directions. Two weeks ago we said protons and neutrons must be made of quarks because of all the threes. So we can put three quarks together to represent a proton:

Of course, three quarks could also be a neutron, so labels matter. (Unless things are crystal clear from context, in which case the labels are sometimes left out.)

To go much further, we need to introduce the axes of Feynman diagrams. Bearing in mind that we’re generally trying to describe collisions between particles, we use one axis to describe the separation between the relevant particles. In fact this is the only information about position that the diagrams explicitly include. So far, two colliding particles would look like this:

Clearly the ability to represent the separation at different times is crucial. We use the diagram’s other axis to represent passing time. Then we can represent two electrons interacting by exchanging a photon like this:

Sometimes it’s easier to represent the maths behind the Feynman diagrams by starting with early times on the right and flowing towards later times on the left:

Although the left-to-right notation is more common, for all the reasons you’d expect, I’m going to need specialised Feynman diagrams that assume the right-to-left convention later, so I’ll draw time progressing from right to left from the start. Here’s a photon undergoing a process called pair production in which it turns into an electron and an anti-electron (called a positron):

And here’s a real live working diagram of a collision between an electron and a proton, mediated by a photon:

This one’s pulled directly from my thesis, so it may need a little more explanation. Since I’ve told you that the quarks make up a proton, you should be able to figure out that they’re quarks, not antiquarks and put the arrows on for yourself. The photon is labelled as γ^{*} because it’s a “virtual particle” — we can’t measure it, since it disappears before the collision is over. This allows it to have some unusual properties that wouldn’t make sense otherwise. The blob between the photon and the quark represents the fact that while they could interact directly, we also want to consider more complicated processes. For instance, the photon could produce an electron and a positron (like in the earlier diagram) and *that* electron could produce another photon, which interacts with the quark. Since we haven’t specified exactly what the interaction is, we don’t know exactly what comes out the other end, so there are just a bunch of lines collectively labelled “X”. This process is called Deep Inelastic Scattering (DIS) and it’s the context for most of the work done in my MSc project (although my work is based on a slightly different diagram that also falls under the DIS heading).

That’s Feynman diagrams! Next week we’ll talk about the Feynman rules, which tell us which diagrams make sense and which don’t. Along the way we’ll also talk more about matter, antimatter and virtual particles.

The common sense definition of a particle goes something like this: there’s empty space for a while, then there’s a thing and immediately afterwards there’s empty space again. The part where there’s a thing is the particle. Since we’re doing *Science!*, let’s draw a graph of that:

(In the full mathematical description, the spike becomes a Dirac delta function, named after Paul Dirac who did a lot of work in this area of physics.) However, last week we said that quantum physics means particles are more like fuzzy clouds than solid objects. We can update the graph to account for that:

That gives us a better picture of what we’re talking about when we use the word ‘particle’: it’s not necessarily what we’d normally call a particle, but it’s a necessary consequence of quantum physics.

Once we start to think of particles as clouds, we have to allow them to do other strange things. (Doing the maths and the experiments confirms that talking about particles this way does help us to predict what will happen in a given situation.) For example, a single particle may have a cloud that is split into two parts:

Frequently the cloud will be much more fragmented than this. Perhaps even worse, multiple particles might have overlapping clouds that can’t be distinguished. (At this point somebody might bring up Wolfgang Pauli and his exclusion principle: the rule that two particles^{†} can’t be in exactly the same state. That almost helps, but while it prevents particles from being exactly the same, it doesn’t mean they can’t have some properties in common.) Our original definition of a particle doesn’t seem much good any more.

We had said that

There’s empty space for a while, then there’s a thing and immediately afterwards there’s empty space again. The part where there’s a thing is the particle.

Now we need something more along the lines of

There’s a thing somewhere — or maybe everywhere — that you’re most likely to bump into in certain places.

Admittedly this doesn’t sound very much like a particle, which is where ideas like wave-particle duality come in. (You can think of how ripples in a pond might fill the whole pond, but all have properties determined by a single disturbance.) The important thing for our purposes is that while a particle may have some very definite properties, such as a particular energy or charge, it’s not at all constrained to a particular position or required to act like a hard little ball. This makes our idea of a particle more difficult to work with, but perhaps we can make up for it a little by getting more use out of the idea. Let’s talk about forces.

Our new definition of a particle says that

There’s a thing somewhere — or maybe everywhere — that you’re most likely to bump into in certain places.

There’s nothing to stop the *thing* in question from being a force. If we treat forces this way, we’ll have to treat them as coming in chunks of some sort — one chunk per particle — but quantum physics requires us to do that anyway (‘quantum’ is just a fancy word for ‘chunk’, after all). In this new way of talking about forces, we can’t say that one particle exerts a force on another particle. Instead, the first particle produces a force particle, which interacts with the second particle. It’s helpful to remember that when we say ‘particle’, we’re *not* talking about a hard little ball. We simply mean a chunk of something that may not have a very specific position: in this case, a force.

Treating force as a particle requires some tweaking of the mathematics involved. These new particles with slightly tweaked maths are all called bosons, after Satyendra Nath Bose; the particles from the last post are called fermions, after Enrico Fermi. One of the biggest differences between bosons and fermions is that while two fermions cannot have exactly the same properties (according to Wolfgang Pauli’s exclusion principle), there is no such restriction for bosons. This makes possible things like Bose-Einstein condensation, which are fascinating, but tangential to our purposes. For today we’ll stick to cataloguing how the fermions we identified last time interact.

Electrically charged particles interact — push and pull — by exchanging force-carrying particles called photons (that’s ‘light things’). Photons are also responsible for interacting with particles in our eyes, allowing us to see things and particles in our radio devices, allowing us to send messages. The theory of electromagnetic waves is essentially an approximation to a full theory of photons — a useful and often very accurate approximation.

Particles that have colour charge interact by exchanging gluons (yup, we went there). The fermions that have colour charge are just the quarks (the particles that make up protons and neutrons). However quarks are not the only particles with colour charge: gluons themselves have colour. This makes the theory of colour interactions (quantum chromodynamics, to give it its technical name) relatively complicated — as do features like the quarks’ insistence on appearing in threes. These complications mean that quantum chromodynamics is still very much an area of active research (including my own MSc work).

All fermions also interact via an additional force called the weak force. (The charge involved here is called flavour.) This is mostly used to describe how the nucleus of an atom breaks up, as in a nuclear fission reaction. The particles that mediate weak force interactions are named W-bosons and Z-bosons. There are two bosons for the weak force: the W and the Z do basically the same job, but they have different masses and electric charges, so they must be different particles.

That’s almost all, but there are two necessary corrections. One is the (in)famous Higgs boson. The Higgs boson doesn’t really describe a particular force. Rather, Peter Higgs, together with a number of other physicists, found a way of writing the equations of particle physics that made the particle masses make a lot more sense. Doing so required introducing new elements to the equations: elements which would exactly correspond to a new kind of boson. Subsequent experiments at CERN’s LHC have detected a particle which seems to have just the properties of the one invented to fix the mass problem — they ‘found’ the Higgs boson.

The other problem is that I’ve omitted gravity from the list of forces. It’s easy enough to make up a name for the particle that mediates gravitational interactions — it’s usually called the graviton — but writing down a consistent mathematical description is another story. Gravity has very little effect at the scale of particle physics experiments and thus far, the most effective tactic has been to ignore it. It’s not very satisfying, but it’s all we have — so far, at least.

And that really is all. Here’s the fundamental particle summary diagram, as seen in particle physics talks everywhere (click through for source):

Next week we’ll start drawing these particles and their interactions using Feynman diagrams.

† Technically Pauli’s exclusion principle also only applies to particles in the fermion (‘thing named after Enrico Fermi’) category (which includes all the particles we’ve discussed up to this footnote).

As a result of these regrettable complications, it seems like there could be value in a step-by-step, fifteen-minutes-a-week, minimally-technical walkthrough of such a dissertation. It occurred to me (no doubt purely by happenstance) that I am particularly well situated to produce such a walkthrough, having just submitted a dissertation that contains such phrases as

The simplest, although not necessarily the most elegant, solution is to simply factorise the average (the so-called large-N_{c} approximation) so that the dependence is instead over a set of 2-point functions.

The attempts at introducing humour or otherwise amusing elements into the narrative will no doubt render the entire project repugnant to beings of higher taste than the author, but I hold out hope that a few people may nonetheless find it instructive, or at least entertaining. Thusly, I present step the first: A standard model of what stuff is made from (AKA “The Standard Model of Particle Physics”).

A “model” is simply a “tool for describing something,” so our task is to elucidate this standard tool for describing what stuff is made of. Let’s use the example of table salt, since it’s a relatively simple material (if you’re reading this, you’re definitely a lot more complicated than table salt), but it shares the features we want with all those more complicated systems.

We can zoom in to look at the underlying structure of the salt crystals. Since we’ll rapidly get to the stage where photographing the things I’m talking about is downright impossible, I’ll make use of a cartoon microscope right from the start. We can immediately see that the salt crystal is made of two kinds of stuff: in the picture, the small purple balls and the big green balls. The big green balls are chlorine (like the stuff that goes into swimming pools) and the purple balls are sodium (which is why salty food is sometimes described as having “high sodium content”). It turns out that you can’t get chlorine or sodium in smaller chunks than these balls, so the balls are called atoms. ‘Atom’ is really just a name for the smallest chunk you can have of any particular chemical. For a long time this was considered a good model for what stuff is made of: stuff is made of atoms.

Because scientists can never leave well alone (that’s why they’re scientists, after all), people began trying to shoot things through atoms to see what happened. In general atoms seemed to be pretty fuzzy — it wasn’t too difficult to get things through them. However, sometimes this didn’t happen at all and the things that were shot at the atom bounced right back. I like the way Ernest Rutherford described this:

It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you.

One might begin to suspect that the tissue paper was not entirely tissue paper after all. A more detailed model of what things are made of was in order. It was pretty well established that there was some part of the atom that makes electricity work. This electricity part of the atom is made up of electrons (as far as I can tell, ‘electron’ just means ‘thing that makes electricity work’). The electrons turn out to be (more or less) the fuzzy part of the atom. The less fuzzy part of the atom was called the nucleus (which means ‘the thing in the middle’). The rule of thumb is that each electron contributes a particular fuzzy area. Where exactly is the electron itself? That’s rather contentious, actually — it turns out that the fuzzy cloud is all we can really see of the electron anyway. (We’re seeing it through a cartoon microscope, so perhaps I mean all we can know about it — although recent experiments have captured more ‘real’ images of these clouds.) The various electron clouds combined produce the green and purple balls that we saw earlier. Now that we’ve zoomed in, we see that they aren’t perfectly round, but it wasn’t a bad approximation.

So much for the fuzzy electrons. What about the thing at the centre, that seemed to be responsible for bouncing things back at the bemused Ernest Rutherford? The nucleus turns out to be pretty fuzzy too, although it’s in some sense ‘more concentrated’ than the electron cloud. (One of the fundamental insights of quantum physics is that everything is a bit fuzzy if you look closely enough — this comes from Werner Heisenberg’s famous uncertainty principle.) One of the first things we can discover about the nucleus is that it has clumps. Some clumps change the way the electron cloud behaves (and determine the number of electrons in the typical atom) — these are called the protons (which means something like ‘first thing’, although they aren’t). Other clumps seem to ignore the electron cloud: the neutrons (‘neutral things’, since they don’t interact with the electrons). This is a good start to an explanation, but it doesn’t answer all the questions one might ask. For one thing, if the neutrons are so neutral, why do they insist on hanging around with the protons? This leads to the idea that being electrically neutral (and ignoring electrons) doesn’t necessarily mean being completely neutral — there must be more than electricity involved. Another question that physicists doing experiments began to ask is, “Why are there so many threes?” This turns out to be an important question.

Whenever a proton or a neutron — one of the chunks inside the nucleus (middle bit) of the atom — can do something, it seems to be able to do it three times. The more one does different experiments, the more threes one finds. Eventually Murray Gell-Mann produced an explanation based on the idea that protons and neutrons were made up of three smaller things. He thought it would be amusing to call the smaller things ‘quarks’ and so he did. (You can make different people think you’re wrong depending on whether you make ‘quark’ rhyme with ‘squawk’ or with ‘mark’, but that’s a story for another day.)

This explanation of having three quarks inside each proton and neutron explains the threes so perfectly that it’s generally accepted to be correct. However, it’s impossible to break a proton or a neutron up into quarks, which makes them very strange objects indeed. In fact, putting enough energy into a proton (or neutron) to separate out a quark tends to produce antimatter instead. However, since antimatter isn’t a defining feature of most everyday stuff (like our original example of table salt), I’ll gloss over that for now.

To describe the way that quarks are always found in groups of three, they were named after the three primary colours: red, green and blue. A package of quarks always contains one of each ‘colour’ (we can make this slightly more complicated by including antimatter). This deals with most of the threes in the experiments, but not quite all of them. To deal with the remaining threes, the quarks are also named after various flavours. The flavour pairs (yes, flavour now comes in pairs) are up and down, strange and charm, and top and bottom. Very strange flavours indeed, but then quarks are very strange objects. The quarks in the picture above are up and down quarks: two ups and a down form a proton. Two downs and an up would form a neutron. The other flavours form more exotic things, which I’ll skip over on account of their not being required for table salt.
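For the curious, the ‘two ups and a down’ recipe can actually be checked with a bit of arithmetic. One detail I haven’t mentioned above (a standard fact, but take the numbers on trust here): the up quark carries an electric charge of +2/3 and the down quark −1/3, in units of the proton’s charge. A minimal sketch:

```python
from fractions import Fraction

# Electric charges of the two lightest quark flavours, in units of
# the proton's charge. (Standard values, not derived in the text.)
CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def total_charge(quarks):
    """Add up the electric charges of a bundle of quark flavours."""
    return sum(CHARGE[q] for q in quarks)

proton = ["up", "up", "down"]     # two ups and a down
neutron = ["up", "down", "down"]  # two downs and an up

print(total_charge(proton))   # 1 -- the proton's charge
print(total_charge(neutron))  # 0 -- the neutron really is neutral
```

Reassuringly, the recipes give a proton with exactly one unit of charge and a perfectly neutral neutron, which is just what the electron-cloud story above requires.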

In fact, at this point, we can begin to think of making table salt from scratch. Salt is made of chlorine and sodium atoms. Those atoms are made of fuzzy electron clouds and a thing at the centre. The thing at the centre has neutron chunks and proton chunks. Those chunks (like bureaucrats, perhaps) do everything in triplicate, which leads us to believe that there are even smaller things, called quarks, inside. Since quantum physics tells us everything is fuzzy, the quarks must be fuzzy too, but we expect them to be fuzzy in groups of three. And that’s it. Stick it all together and you have table salt.
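If you fancy doing the bureaucratic bookkeeping yourself, the recipe above can be totted up in a few lines. I’m assuming the most common isotopes here (sodium-23 and chlorine-35 — standard nucleon counts that I haven’t quoted in the text), so treat this as a sketch rather than gospel:

```python
# Nucleon counts for the most common isotopes (standard values,
# assumed here rather than given in the text above).
ATOMS = {
    "sodium":   {"protons": 11, "neutrons": 12},  # sodium-23
    "chlorine": {"protons": 17, "neutrons": 18},  # chlorine-35
}

QUARKS_PER_NUCLEON = 3  # every proton or neutron does things in triplicate

def quark_count(atom):
    """Total quarks hiding inside one atom's nucleus."""
    nucleons = ATOMS[atom]["protons"] + ATOMS[atom]["neutrons"]
    return nucleons * QUARKS_PER_NUCLEON

# One 'unit' of table salt: one sodium atom plus one chlorine atom.
salt_quarks = quark_count("sodium") + quark_count("chlorine")
print(salt_quarks)  # 174 quarks per sodium-chloride pair
```

So a single sodium–chlorine pair hides 174 quarks (plus its 28 fuzzy electrons), which is a lot of structure for something you sprinkle on chips.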

How do we stick it together? Well, that’s a good question. As we noticed earlier, there seem to be several ways for things to interact (or not interact). Come back next week for photons (‘light things’), gluons (‘glue things’), W’s and Z’s (‘we ran out of names things’). If you have questions, feel free to ask in the comments and I’ll do my best to answer!

This got me thinking about my general approach to introducing people to a new topic: I like to jump right in. It’s not that I don’t see value in giving context and explaining a system from the outside. If I’m honest, part of why I prefer the other approach is that I’m just not very good at that sort of thing. (Or more positively, I *am* good at explaining as I go along; take your pick.) Jumping right in comes with its own advantages too: for one, it gives a feel for what the subject’s really about much sooner, as long as it isn’t too confusing. I think it also has the potential to be a bigger motivator to actually learn the nitty gritty of what makes something tick.

In my Pathfinder games, I tend to refer people to the Core Rulebook or the online reference document to figure out that nitty gritty. In *Mechatropolis* the links to Wikipedia are intended to do a similar job. I think the access to a more detailed explanation elsewhere is what makes a ‘jump in and see how it goes’ approach viable. The thought of combining both elements — jumping in and examining the detail — in a single resource has a definite appeal, but it seems like it could be an enormous project. It might not even be possible to pull off both together; I struggle to think of examples that do it. Nonetheless, it’s an interesting idea to kick around and maybe one day I’ll figure out how to make it work.