On moduli stabilisation in heterotic orbifolds

Let me start with a brief introduction and summary of what we have done and what we have found in our adventures with *heterotic orbifolds*.

For a newcomer to the heterotic orbifold world (like Susha and I were at the time we started working on this) after GKP and KKLT, an obvious question to ask was: how is the question of moduli stabilisation answered in these models? Perhaps not surprisingly, you could get two answers, depending on whom you asked:

1) It is perfectly fine, non-perturbative effects can do the job (and don’t be too curious to ask any details!).

2) It is not possible, because there are not enough fluxes.

Yet, someone gave me a third answer and, thanks largely to that, I am now writing about this new adventure in the heterotic world (and no, I am not going to reveal any names!). The answer was something like: well, all the ingredients are there; nobody has done it explicitly, but it should be possible.

So, why were we interested in the question of moduli stabilisation in heterotic orbifolds? There are several good reasons. Orbifolds are among the simplest constructions in string theory, where one can perform explicit, controlled calculations. Moreover, it is possible to reproduce (semi-)realistic MSSM-like particle physics spectra and several interesting phenomenological features (see for example this work by Lebedev et al.). However, in these models the question of how to stabilise the geometric moduli – e.g. the size of the compactification – was left unanswered. Nevertheless, in order to understand questions such as supersymmetry breaking, soft masses, dark energy and in particular cosmology (which was one of our main motivations), it is necessary to incorporate the dynamics of the moduli fields. The advantage of the simplicity of orbifolds is that they offer a perfect arena to embed models of particle physics in a globally consistent string theory and thus study such questions.

The scenario from the nineties that we found when we started this work, consisted of several studies of toy models in different combinations, where some of the most common assumptions were:

a) A single universal Kaehler modulus $T$.

b) No complex structure moduli (except for this work by Bailin et al., where however the dynamics of the dilaton was ignored).

c) Multiple gaugino condensation with specially chosen hidden gauge groups (with the most complete analysis carried out in the work by de Carlos, Casas and Munoz).

Given the successful recent orbifold constructions, we decided to go beyond the usual claims and tackle the problem of moduli stabilisation in explicit MSSM candidates arising from heterotic orbifolds.

We studied a subset of (semi-)realistic orbifold models of the minilandscape discussed in Lebedev et al. In these orbifolds there are three Kaehler moduli $T_1, T_2, T_3$, one complex structure modulus $U$, plus the dilaton $S$, giving a total of 10 real degrees of freedom. In order to study the bulk moduli dynamics, we computed the 4D low energy effective action of the orbifold models, which receives contributions from various computable perturbative and non-perturbative effects. These effects can then potentially lift all flat directions associated to the 10 moduli above, as follows:

Since the dilaton $S$ describes the tree-level gauge couplings, gaugino condensation in a hidden non-Abelian sector leads to a non-trivial potential for that field (see for example the work by Dine, Rohm, Seiberg and Witten). A racetrack potential may then be sufficient to stabilise it. The gauge couplings also receive threshold corrections from massive string states that depend, via certain modular forms (e.g. the Dedekind eta function), on several of the Kaehler moduli and on all the complex structure moduli present. Thus gaugino condensation may also stabilise those fields, or even force compactification. Yukawa couplings between twisted fields arise thanks to worldsheet instantons, and turn out to be suppressed with the area, or the Kaehler moduli, that describe their separation. This then lifts all the remaining Kaehler moduli directions.
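To get a feel for the racetrack mechanism mentioned above, here is a minimal numerical sketch. All coefficients are illustrative, not taken from any concrete model: two condensates with slightly different exponents produce a critical point for the (real part of the) dilaton at a calculable value.

```python
import math

# Toy racetrack superpotential W(s) = A e^{-a s} - B e^{-b s} for the real
# part s of the dilaton; A, a, B, b are illustrative numbers, not model input.
A, a, B, b = 1.0, 0.7, 0.5, 0.6

def dW(s):
    """Derivative W'(s); a racetrack extremum sits where it vanishes."""
    return -a * A * math.exp(-a * s) + b * B * math.exp(-b * s)

# Analytic extremum: W'(s) = 0  =>  s* = ln(a A / (b B)) / (a - b)
s_star = math.log(a * A / (b * B)) / (a - b)

# Numerical cross-check by bisection on [1, 20], where dW changes sign
lo_s, hi_s = 1.0, 20.0
for _ in range(100):
    mid_s = 0.5 * (lo_s + hi_s)
    if dW(lo_s) * dW(mid_s) <= 0.0:
        hi_s = mid_s
    else:
        lo_s = mid_s
```

The point of the racetrack is visible in the closed-form answer: the two exponentials must nearly cancel, which happens at moderately large $s$ only when the exponents $a$ and $b$ are close.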

We found that an explicit analysis reveals several differences between concrete models and the inspiring toy models of the past. Some of these are:

*(i)* The terms in the action are almost completely determined, with very few free parameters (which come mainly from the little-studied physics that decouples exotics and hidden matter).

*(ii)* It is typically difficult to find more than one condensing gauge group, especially without decoupling hidden matter (since the latter tends to destroy asymptotic freedom); consequently, it is hard (but not impossible) to find dilatonic racetrack models.

*(iii)* The moduli-dependent threshold corrections to the gauge couplings often do not appear with the required sign to force compactification, as was assumed in toy models.

*(iv)* The target space modular symmetry, which has served as a powerful tool in computing the effective 4D action, is generically broken to some congruence subgroup due to the presence of discrete Wilson lines. Such Wilson lines are necessary to obtain realistic gauge groups and spectra. An implication of this is that it is not justified to take a universal Kaehler modulus; moreover, the dynamics of all the bulk moduli are highly coupled together.

All these features made the search for metastable (de Sitter) vacua more challenging than previously thought and our results underlined this.

Let me now describe more explicitly one of the models we considered, highlighting the relevant properties and then the results, which hold also for other models we considered. Of course, you are kindly invited to check arXiv:1009.3931 for more information!

The set-up was heterotic string theory on the $T^6/\mathbb{Z}_{6-II}$ orbifold with factorisable lattice $G_2 \times SU(3) \times SO(4)$. The orbifold twist acts as

$z_i \;\longrightarrow\; e^{2\pi i v_i}\, z_i\,, \qquad i = 1, 2, 3\,,$

where the twist vector is

$v = \tfrac{1}{6}\,(1,\, 2,\, -3)\,.$

Thus one can check that in the first twisted sector (the one above), all three sub-tori are rotated. Consider now the second twisted sector $\theta^2$: it leaves the third sub-torus unrotated, while the third twisted sector $\theta^3$ leaves the second sub-torus unrotated. All other twisted sectors can be seen to be equivalent to one of these three. Thus the first sub-torus is always rotated. This will be important later on.
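The bookkeeping of which sub-tori are rotated in each twisted sector can be checked in a few lines, using the standard $Z_{6-II}$ twist vector $v = (1, 2, -3)/6$: a sub-torus $i$ is left unrotated by the sector $\theta^k$ exactly when $k\,v_i$ is an integer.

```python
from fractions import Fraction

# Standard Z6-II twist vector v = (1, 2, -3)/6 acting on the three sub-tori
v = (Fraction(1, 6), Fraction(2, 6), Fraction(-3, 6))

def unrotated_planes(k):
    """Sub-tori left unrotated by the sector theta^k: plane i is unrotated
    precisely when k * v_i is an integer."""
    return [i + 1 for i, vi in enumerate(v) if (k * vi).denominator == 1]

sectors = {k: unrotated_planes(k) for k in range(1, 6)}
# sectors == {1: [], 2: [3], 3: [2], 4: [3], 5: []}: theta^2 leaves the third
# sub-torus fixed, theta^3 the second, and the first is always rotated.
```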

Now we need to choose a phenomenologically attractive gauge embedding. The “standard embedding” is the simplest choice, but it gives unrealistic gauge groups. Therefore, we chose a non-standard embedding allowing also for the presence of discrete Wilson lines. These choices are not arbitrary and need to be compatible with the orbifold action.

The model I want to describe below has two Wilson lines along the last two sub-tori. Taking this together with the shift vector which specifies the gauge embedding, the final gauge group of the model is

.

So it has two hidden gauge groups and thus we have a racetrack type superpotential for the dilaton.

With this information we compute the spectrum of the model. It consists of untwisted and twisted sectors. The untwisted sector includes the three Kaehler moduli $T_1, T_2, T_3$, which describe the sizes of the three sub-tori, one complex structure modulus $U$, which describes the shape of the third torus (the other two being fixed by the orbifold projection), and the dilaton $S$, for a total of 10 real fields.

Besides this, the spectrum contains the MSSM plus standard model singlets, which include matter charged under the hidden group and some exotics. We assume that the latter, as well as the charged hidden matter, can be decoupled at a universal scale without breaking supersymmetry, but producing the spontaneous breakdown of the additional gauge symmetries. In this process we are left with a pure Yang-Mills hidden sector at lower energies, which then leads to double gaugino condensation.

To study the dynamics of the moduli fields, we now construct the low energy effective field theory, which describes the physics at energies well below the compactification scale. This is given by a 4D N=1 supergravity theory, and thus we must identify the corresponding Kaehler potential, $K$, gauge kinetic functions, $f_a$, and superpotential, $W$. Moreover, the low energy effective theory enjoys a further constraint from the discrete target-space modular invariance, which is inherited from the underlying conformal field theory. This provided a very powerful tool that allowed us to check all our computations.

In the simplest orbifold compactifications, without Wilson lines, the corresponding target space duality group is

$SL(2,\mathbb{Z})_{T_1} \times SL(2,\mathbb{Z})_{T_2} \times SL(2,\mathbb{Z})_{T_3} \times SL(2,\mathbb{Z})_{U}\,.$

Nonetheless, the non-trivial discrete Wilson lines break this group down to a congruence subgroup. In our model, the final group turns out to be a product of congruence subgroups, transforming the three Kaehler moduli and the one complex structure modulus respectively.

Under the modular symmetry, the fields and 4D functions transform according to

$T \;\to\; \frac{a\,T - i b}{i c\,T + d}\,, \qquad U \;\to\; \frac{a\,U - i b}{i c\,U + d}\,,$

where $a, b, c, d \in \mathbb{Z}$, $a d - b c = 1$,

$K \;\to\; K + \sum \log|i c\,T + d|^2\,, \qquad W \;\to\; \frac{W}{\prod\,(i c\,T + d)}\,,$

and the gauge kinetic function is invariant. Here the sums and products run over all $T_i$ and $U$. For the subgroups the values of the integers are constrained (see our paper).

The Kaehler potential, including 1-loop effects, is given by

$K = -\log Y - \sum_A \log\left(A + \bar A\right)\,,$

where $A$ runs over $T_1, T_2, T_3$ and $U$, and

$Y = S + \bar S + \frac{1}{8\pi^2}\sum_A \delta^{\rm GS}_A \log\left(A + \bar A\right)\,.$

Here the $\delta^{\rm GS}$ are the Green-Schwarz coefficients needed to cancel part of the anomalies associated to the sigma-model and target space modular symmetries, arising at 1-loop (the other part is cancelled via threshold corrections to the gauge kinetic functions due to massive string states, as we will see below).

The contributions to the superpotential come from two non-perturbative sources: trilinear twisted matter couplings and gaugino condensation in the hidden gauge sector. Higher order couplings help to decouple exotic and hidden matter via the universal mass scale. Let us look at these two contributions in some more detail.

As we mentioned before, Yukawa couplings among twisted fields are generated by non-perturbative worldsheet instantons. They turn out to depend on the Kaehler moduli associated to rotated planes. In our $Z_{6-II}$ orbifold, these can depend on $T_1$ and $T_2$, according to the coupling under consideration. However, $T_3$ never appears, because the trilinear couplings allowed by the orbifold always involve a twist which leaves the third plane unrotated. Thus, the superpotential for Yukawa couplings among some of the singlets can lift the flat directions associated to $T_1$ and $T_2$.

In the gauge sector, below some scale $\Lambda$, gauginos condense, leading to another non-perturbative contribution to the superpotential. This depends on the gauge kinetic functions $f_a$, which are given at tree level by the dilaton $S$. However, they receive threshold corrections, which depend on those Kaehler moduli associated to unrotated planes (or supersymmetric subsectors), complementing the dependences from the Yukawa couplings, and on all complex structure moduli present.

Therefore gaugino condensation provides a superpotential for the dilaton $S$, as well as for $T_2$, $T_3$ and $U$.

The explicit form of the superpotential from the contributions described above turns out to be

Here only the leading terms in the Yukawa couplings are included. The full trilinear couplings are given in terms of generalised theta functions.

It is important to note that there are very few free parameters (the Yukawa coefficient comes from an explicit calculation of string worldsheet instantons), contrary to the several free parameters assumed in toy models. These are the decoupling scale, the constants arising from integrating out the condensates, and the *vev*s of the singlets. The latter are further constrained by consistency, as the Kaehler potential has been calculated in this approximation. The Dedekind eta function is defined as

$\eta(T) = e^{-\pi T/12} \prod_{n=1}^{\infty}\left(1 - e^{-2\pi n T}\right)\,,$

and the particular dependence on the moduli fields arises due to the modular symmetries along those directions (e.g. the eta function of a given modulus transforms covariantly only under the corresponding congruence subgroup).
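As a sanity check on the eta-function behaviour invoked throughout, one can evaluate the product representation numerically and verify the modular identity $\eta(1/T) = \sqrt{T}\,\eta(T)$, written here for a real modulus. This is a generic illustration, not a computation from our models.

```python
import math

def eta(t, n_max=300):
    """Dedekind eta at purely imaginary argument, as a function of a real
    modulus t > 0:  eta(t) = e^{-pi t/12} * prod_{n>=1} (1 - e^{-2 pi n t})."""
    prod = 1.0
    for n in range(1, n_max + 1):
        prod *= 1.0 - math.exp(-2.0 * math.pi * n * t)
    return math.exp(-math.pi * t / 12.0) * prod

# Modular identity eta(1/t) = sqrt(t) * eta(t): both sides agree numerically
lhs = eta(0.5)
rhs = math.sqrt(2.0) * eta(2.0)

# Large-t asymptotics: eta(t) -> e^{-pi t/12}, so positive powers of eta in W
# send the potential to zero at large modulus instead of forcing a minimum.
large_t_ratio = eta(10.0) / math.exp(-math.pi * 10.0 / 12.0)
```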

We can now combine the superpotential with the Kaehler potential above to study the F-term scalar potential, which takes the form

$V_F = e^{K}\left(K^{I \bar J}\, D_I W\, \overline{D_J W} - 3\,|W|^2\right)\,,$

where $D_I W = \partial_I W + (\partial_I K)\, W$.
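Here is a minimal sketch of this standard N=1 supergravity formula, restricted to a single real modulus with tree-level Kaehler potential $K = -\log(S + \bar S)$; the function names and the one-field truncation are mine, purely for illustration.

```python
import math

def V_F(s, W, dW):
    """F-term potential V = e^K (K^{S Sbar} |D_S W|^2 - 3 |W|^2) for a single
    modulus with tree-level K = -log(2 s), evaluated on the real slice S = s.
    W and dW are the superpotential and its derivative at that point."""
    K = -math.log(2.0 * s)
    K_S = -1.0 / (2.0 * s)            # first derivative of K
    K_inv = (2.0 * s) ** 2            # inverse Kaehler metric K^{S Sbar}
    D_W = dW + K_S * W                # Kaehler-covariant derivative D_S W
    return math.exp(K) * (K_inv * abs(D_W) ** 2 - 3.0 * abs(W) ** 2)

# Cross-check: for a constant superpotential W0 the formula collapses to
# V = -W0^2 / s, a moduli-dependent AdS energy.
v_const = V_F(2.0, 1.0, 0.0)          # should equal -1/2
```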

A comment about the D-term potential is needed here. The way in which the decoupling of exotic matter occurs in our set-up is via induced non-trivial *vevs* for some non-Abelian singlets, thanks to a Fayet-Iliopoulos term. This term moreover cancels the anomaly from the would-be anomalous $U(1)$ present in the theory via the Green-Schwarz mechanism. These singlet *vevs* subsequently give masses to the exotics and charged hidden matter, through their higher order couplings. At the same time, the hidden gauge symmetries will typically all be broken at that scale. Therefore, the relevant contributions to the scalar potential can only come from F-terms.

As already pointed out above, the superpotential has sufficient structure to lift all the flat directions we are considering. In particular, various types of stabilisation mechanisms can be seen at work here:

*(i)* For the dilaton $S$, the superpotential has a racetrack form, albeit with coefficients that depend on the other moduli. The dependence of the potential on $T_1$ occurs only via the non-perturbative Yukawa couplings. The form of the superpotential for this field is then reminiscent of a gaugino condensation mechanism plus a constant (w.r.t. $T_1$), except that now the coefficients depend on the other moduli. Similarly, since $U$ appears in both condensates, it also has a racetrack behaviour, with one condensate competing against the leading dependence in the other.

*(ii)* Let us now look at the three moduli $T_2$, $T_3$, $U$, which appear in the superpotential due to stringy threshold corrections, via the Dedekind eta functions. It was suggested some time ago that such dependence may be enough to force compactification. In particular, if the eta functions appear in the superpotential with negative powers, then the scalar potential diverges as the relevant moduli go to zero and infinity (this can be seen by looking at the asymptotic behaviour of the eta function). A minimum in these directions is then guaranteed via a “modular form” stabilisation mechanism, as we call it. Moreover, with the simplest factorisable superpotentials, such as those corresponding to a single gaugino condensate, the fixed points under the modular transformations are necessarily extrema of the potential.

However, in the explicit model I have described, the Dedekind eta functions appear in the superpotential with positive powers, so that the scalar potential goes to zero as the relevant moduli go to zero and infinity. In this case, we cannot ensure the existence of minima. Furthermore, as we have seen, Wilson lines break the modular symmetries in each plane in different ways. One effect of this is that the subgroups to which the symmetry is broken do not always have fixed points.

In spite of the several differences with respect to the toy models of the past, we have seen that the superpotential, and therefore the scalar potential, has a very rich structure that can lift all flat directions. It was then surprising for us to find that all the (de Sitter) vacua we found turned out to have at least one unstable direction, dominated by a particular direction in moduli space.

In spite of this, these vacua present some very interesting properties, which we expect to be preserved when a stable vacuum is found. These are as follows:

a) All vacua we found are de Sitter.

b) Our setting seems to favor anisotropic compactifications, which have been used recently to achieve precision gauge coupling unification, for example.

c) The overall volume turns out to be large enough to be consistent with the supergravity approximation used.

d) Although difficult, it is possible to reach large enough values of $S$, consistent with the string loop expansion, and to accomplish gauge coupling unification at the GUT scale.

e) There are always some almost flat directions, which are linear combinations of the axions of $T_2$, $T_3$ and $U$. The reason for these is as follows. Notice that the contribution to $W$ from one gaugino condensate is typically highly suppressed with respect to the other, due to its stronger exponential suppression in $S$ and the stronger suppression in the threshold factors. Thus, effectively, we have only a single condensate in both models. Moreover, the higher order contributions to the eta functions are strongly suppressed at the values the moduli take in our solutions. Therefore, to a good approximation, the three axions appear in the superpotential, and thus in the scalar potential, only via a single linear combination. As a consequence, to a good approximation, only this linear combination of the three fields is lifted by the non-perturbative dynamics, leaving two linearly independent combinations as flat directions, with two corresponding shift symmetries. Of course, the higher order corrections do lift the latter directions, rendering them only almost flat. We expect this feature to also be present in any metastable minima that may exist, being quite a generic consequence of moduli stabilisation via gaugino condensation with threshold corrections. Such almost flat directions could potentially be interesting, since extremely light scalar fields are often called upon in cosmological models, for inflation, quintessence and the like.

Our study has been very valuable for understanding what the real problems are in moduli stabilisation in heterotic orbifolds. We are now in a position to look for possible ways to extend and improve our analysis. Hopefully, we will be able to find other orbifold set-ups where stable minima exist. Let me also stress that we have been focusing on heterotic orbifolds; however, recent studies seem to indicate that moduli stabilisation of the heterotic string on smooth Calabi-Yau manifolds is also possible.

So it seems that the understanding of moduli stabilisation in heterotic theory is advancing. We hope to report soon on progress about this and also some other issues we have uncovered via our orbifold investigations!

Post from: NEQNET: The world of theoretical physics


A holographic model of the quantum Hall effect

One of the hot topics in string theory in the past couple of years has been the application of AdS/CFT holography to condensed matter systems. This is in many ways a natural extension of the earlier and continuing holographic study of nuclear physics and non-perturbative QCD, and has attracted quite a bit of attention from the condensed matter community. A range of interesting systems have been modeled holographically, including the quantum Hall effect, high-$T_c$ superconductors, graphene, cold atoms, and topological insulators, with some tentative success. For string theorists, it’s certainly very exciting to actually be making connections to experimentally-driven subjects for a change.

Roughly speaking, there are two basic approaches one can take towards constructing the holographic dual of a condensed matter system. One can build a phenomenological theory using what might be called effective string-inspired QFT; these are referred to as bottom-up models. The procedure is to engineer a bulk gravity plus matter action (or even just a matter action on a given gravitational background) whose physical properties most closely resemble the desired system.

The top-down approach, which we will adopt, begins instead with a true string background, or at least the low-energy approximation of one. Starting with a tractable string theory solution yields a holographic dual of a known field theory whose properties can be studied and which can hopefully then be matched to interesting physical systems. Because string theory is highly constrained, the range of possible low-energy field theories is likewise reduced, but it is much more natural to include stringy objects such as D-branes.

Our goal is to investigate a class of top-down holographic models of the quantum Hall effect (QHE). The QHE describes the generic behavior of electrons in a two-dimensional conductor in a transverse magnetic field, and was introduced at length previously in this blog. When the filling fraction, defined as the charge density to magnetic field ratio and denoted $\nu$, takes integer and certain rational values, the electron fluid assumes a quantum Hall state with many striking properties. Most characteristic of these, of course, is the response to an applied electric field: the current in the direction of the field vanishes, while the conductivity in the orthogonal direction, or Hall conductivity, is exactly $\nu e^2/h$. The states with integer $\nu$ can be well described in terms of Landau levels (the integer QHE). However, the quantum Hall states at non-integer filling fraction (the fractional QHE) are strongly coupled, and the physics is much harder to explain.
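In concrete numbers, the filling fraction and the quantized Hall conductivity are simple to evaluate; this is a generic textbook illustration, not part of the holographic model itself.

```python
# Physical constants (exact SI values)
e = 1.602176634e-19       # electron charge [C]
h = 6.62607015e-34        # Planck constant [J s]

def filling_fraction(n, B):
    """nu = n h / (e B): sheet density n [1/m^2] per magnetic field B [T]."""
    return n * h / (e * B)

def hall_conductivity(nu):
    """In a quantum Hall state, sigma_xy = nu e^2 / h exactly."""
    return nu * e ** 2 / h

B = 5.0                   # Tesla
n = 2.0 * e * B / h       # density tuned so that nu = 2
nu = filling_fraction(n, B)      # equals 2 up to float rounding
sigma_xy = hall_conductivity(nu)
```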

The important features of the QHE are very universal, in that they don’t depend significantly on the details of the experiment, the crystal structure of the conductor, etc. This kind of generic phenomenon makes a promising candidate for holographic modeling.

The model we studied is based on a probe D8-brane in the background sourced by a large stack of D2-branes. This construction is similar in many ways to brane models of holographic QCD, such as the D3-D7 model and the Sakai-Sugimoto model, but in two spatial dimensions rather than three. The dual field theory consists, at low energy, of fermions coupled to a gauge field. Supersymmetry is broken, as evidenced by the lack of scalar superpartners of the fermions. The dynamics of these strongly-interacting fermions is encoded holographically in the physics of the probe D8-brane. For example, both the magnetic field and the charged currents are described on the gravity side by various components of the D8-brane world volume gauge field.

The D2-brane geometry has a black hole horizon, and for generic values of the magnetic field and charge density, the D8-brane necessarily crosses the horizon. In this case the system resembles a metal, with massless charged excitations. However, when $\nu = 1$, the D8-brane smoothly ends outside the horizon, a mass gap opens up, and the fermions enter a QH state. This is the only known holographic example of a dynamically generated mass gap in a system with nonzero charge density.

Of course, the model isn’t perfect. Rather than a tower of QH states corresponding to successive Landau levels, we only have one state, at $\nu = 1$. Naively, we can find solutions for the D8-brane at other filling fractions, but they are unstable.

Some discrepancies with real QH systems are due to unphysical simplifications of the model. For example, in experimental measurements, as the magnetic field is varied at fixed charge density, the Hall conductivity is constant over finite-sized plateaux around the special values of $\nu$. This is due to impurities, and since we haven’t included impurities in our model, there is no plateau around $\nu = 1$, only a point.

Other aspects of QH physics are reproduced only qualitatively. If we heat up a real QH system to a temperature of the order of the mass gap, there is a crossover to metallic behavior. In our holographic model, this transition occurs at the correct temperature but is a first-order phase transition instead.

In sum, we’ve taken some initial steps toward a holographic description of the QHE. This top-down approach is complementary to the several recent bottom-up models engineered using dyonic black holes, which capture other features of QH systems, such as more physical sets of filling fractions. On the other hand, they lack, for example, a dynamically-induced mass gap. In the future, these two approaches will, hopefully, converge toward increasingly realistic holographic models capable of testable predictions.


Transdimensional Multiverse Measures

We are going to discuss here a paper called “Measures for a Transdimensional Multiverse”, by D. S-P and Alexander Vilenkin [arXiv:1004.4567]. This work is an extension of a body of knowledge that has been built by many people, and the paper contains an extensive list of references. In this blog post I will not cite the literature, but invite you to check out the references cited in the paper. After an introduction to the multiverse and the so-called “measure problem”, we will outline the scale factor measure for an eternally inflating multiverse (all of this predates our paper). It should be emphasized at the outset that this measure was developed for a multiverse where all vacua are (3+1)-dimensional. All tunnelings amongst these vacua are “equidimensional” transitions: parent and daughter vacua have the same number of dimensions. We will then go on to define two related measures for a “transdimensional” multiverse, which includes transitions between universes with different effective dimensionality.

**So what is a multiverse?**

FIG. 1. Recycling multiverse

Consider a simple scalar field potential that has two metastable de Sitter minima separated by a barrier (see Fig. 1). If the field starts in the false vacuum state, it can tunnel through the barrier, and land up in the true vacuum state. This process represents a bubble of true vacuum nucleating within the false vacuum background. The bubble expands rapidly, but it never catches up with the inflating false vacuum, thus there is always room for more bubbles to form. If the true vacuum has a positive cosmological constant (we will call such bubbles recyclable), then bubbles of false vacuum can in turn nucleate within the true vacuum. This recycling process will generate an infinite number of bubble universes. In this simple example there are only two *types* of vacua.

FIG. 2. Each of the valleys in this multidimensional potential energy density surface corresponds to a metastable vacuum state. We live in one of them. Transitions between the different vacua in the multidimensional landscape can take place via bubble nucleation.

String theory also suggests that there may be a multitude of possible vacua characterized by different values of the low-energy constants of Nature. String theory vacua may have extra dimensions, and also involve additional objects, such as fluxes and branes. Thanks to the power of combinatorics, there are googols of ways to combine these ingredients to produce different vacua, and we land up with a “string landscape” of possible vacuum solutions. Via bubble nucleations, the universe can dynamically be populated with each and every possibility allowed within the landscape. This is the multiverse.

FIG. 3. What is the relative probability for a randomly picked observer to find themselves in vacuum i?

**What is the measure problem?**

Or, how do we make predictions in an eternally inflating multiverse? Given a fundamental theory (string theory) which allows a vast number of possible universes, and a dynamical mechanism (such as eternal inflation) with which to populate the universe, we can no longer predict with certainty what the value of a constant that varies in the landscape is (for example, the cosmological constant). But we can, *in principle*, calculate how the constants of nature are distributed amongst all the different bubble types. Figuring out how to do so, *in practice*, is called the measure problem. But why is it a “problem”?

Let’s begin with a volume of inflating space. As time goes on, more and more bubbles will nucleate, one within the other. Now we may want to know how likely we are to live in a particular vacuum. We may ask: *What is the relative probability for a randomly picked observer to find themselves in vacuum i?* We could try to count the number of observations of type $i$, $N_i$, and define the relative probability as the ratio $N_i/N_j$.

But it turns out that in an eternally inflating universe the number of bubbles grows without bound, and thus the number of any type of observation is infinite! Thus the fraction of observations (and vacua) with any given property is infinity divided by infinity. The cause of the problem is the infinite spacetime volume of an eternally inflating multiverse.

FIG. 4. Global time cutoffs. Choose an initial spacelike hypersurface. Then consider a congruence of future-directed timelike geodesics. These geodesics sweep out a spacetime volume between the initial hypersurface and some final hypersurface. Count the number of different types of events within the finite spacetime region between these two hypersurfaces, and then take the limit of infinite $\eta$ (a time coordinate) to get the relative probability of any two events.

*So how do we begin to tame this infinite spacetime volume? * Probably the most fruitful approach has been to impose global time cutoffs. The basic recipe to define a global time cutoff measure is outlined in Fig. 4. There are two ways in which global time cutoffs regulate infinite volumes in the multiverse.

- Recyclable bubbles are best described by open FRW coordinates which have constant time hypersurfaces of infinite spatial extent. So even if we were only interested in counting events within a single inflating bubble, we would still need a cutoff.
- In an eternally inflating multiverse, an unbounded number of bubble universes will be created. By imposing a global cutoff we put a lid on the production and are able to sample the types of vacua created from a huge subset of the otherwise infinite set.

*Thus the multiverse is an infinite stage for all events to play out both because individual bubbles can have infinite spatial hypersurfaces, and because there are an infinite number of bubbles.*

Many people have proposed global time cutoff measures for an eternally inflating multiverse. These measures include the proper time measure, the scale factor cutoff measure, the causal patch prescription, and so on. In the next section I will describe the “equidimensional” scale factor measure, because our new results for transdimensional multiverses are generalizations and extensions of this measure.

**Scale factor cutoff measure**

The scale factor $a$ is defined as the cube root of the volume expansion factor,

$a = V^{1/3}\,,$

along the geodesics swept between an initial and final hypersurface. The scale factor time is defined in terms of $a$ as

$\eta = \log a\,, \qquad d\eta = H\, dt\,,$

where $H$ is the local expansion rate of the geodesic congruence (which can be defined via the divergence of the four-velocity field of the congruence).
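As a toy check of the definition, one can integrate $d\eta = H\,dt$ numerically along a comoving geodesic in a matter-dominated FRW universe, where $a(t) \propto t^{2/3}$, and compare with $\log a$. The function names here are mine, purely for illustration.

```python
import math

def scale_factor_time(H, t0, t1, steps=200000):
    """eta = int_{t0}^{t1} H(t) dt along a comoving geodesic (trapezoidal rule)."""
    dt = (t1 - t0) / steps
    eta = 0.0
    for i in range(steps):
        ta = t0 + i * dt
        eta += 0.5 * (H(ta) + H(ta + dt)) * dt
    return eta

# Matter-dominated FRW toy: a(t) = t^(2/3), so H(t) = 2/(3t), and the
# scale-factor time elapsed should be log a(t1) - log a(t0) = (2/3) log(t1/t0)
eta_num = scale_factor_time(lambda t: 2.0 / (3.0 * t), 1.0, 10.0)
eta_exact = (2.0 / 3.0) * math.log(10.0)
```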

To gain an intuitive understanding of what it means to impose a scale factor cutoff, imagine a sprinkling of test dust particles on an initial hypersurface. Now imagine following the trajectories of these particles as the universe expands. A constant scale-factor time hypersurface is by definition the surface on which the density of the dust has been diluted everywhere by the same factor,

$\rho \propto a^{-3}\,.$

The scale factor cutoff is imposed when the density drops below a critical value. (For later on, we can also introduce the volume expansion factor $V$ and the corresponding volume time. Note that in an equidimensional multiverse the scale factor time and the volume time agree up to a constant factor.)

FIG. 5. Nucleated bubble showing bubble wall, light cone of center of nucleation (dashed line), thermalization hypersurface (red/fuzzy line), “observation” hypersurface (galaxies), and cutoff surface.

So, once we define our cut-off surface, what next? First let's think about one bubble universe (see Fig. 5) of type $ij$ (the double index notation indicates that we are talking about a vacuum of type $i$ that was reached from a parent of type $j$ – it turns out that it’s not just which vacuum you land in that counts, it also matters how you get there). One can make the simplifying assumption that all observers form at a fixed FRW time with some fixed density – this defines the “observation hypersurface” inside a given bubble. The infinite three-volume of the observation hypersurface is regularized by the scale factor cutoff. Denoting the regulated volume of the observation hypersurface accordingly, the regulated number of observers in this single bubble is

.

FIG. 6. As time goes on more and more bubbles are nucleated close to the cutoff surface.

With the single bubble scenario under our belts, are we ready to count the number of observers in all vacua of type $i$? Almost – we will also need to know the number of type $i$ bubbles that are produced below the cutoff. The state of the art in bubble counting rests on solving the rate equations of eternal inflation. It is important to point out that these equations and their solutions depend on your chosen time coordinate. For the scale factor measure, the time coordinate used is the scale factor time $\eta$.
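To illustrate what solving such rate equations involves, here is a toy two-vacuum system with small decay rates into a terminal vacuum; the dominant eigenvalue of the transition matrix is $-q$, with $q$ exponentially small when the leak to terminal vacua is slow. All rates are invented purely for illustration.

```python
import math

# Toy rate equations for comoving volume fractions f = (f1, f2) of two
# recyclable de Sitter vacua, with small decays to a terminal vacuum:
#   df/deta = M f,  M = [[-(k21 + kT1), k12], [k21, -(k12 + kT2)]]
# All rates (per unit scale-factor time) are invented for illustration.
k12, k21 = 1e-3, 2e-3     # nucleation rates 2 -> 1 and 1 -> 2
kT1, kT2 = 1e-6, 3e-6     # decay rates into the terminal vacuum

m11, m12 = -(k21 + kT1), k12
m21, m22 = k21, -(k12 + kT2)

# Dominant eigenvalue of the 2x2 matrix M; by definition q = -lambda_max
tr = m11 + m22
det = m11 * m22 - m12 * m21
lam_max = 0.5 * (tr + math.sqrt(tr * tr - 4.0 * det))
q = -lam_max

# Bubble production then grows like e^{(3 - q) eta}: with q ~ 1e-6, almost
# the full e^{3 eta} growth of the regulated volume.
growth_exponent = 3.0 - q
```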

Without reproducing the mathematics of the scale factor rate equations here, we will state the key result: the number of nucleated bubbles of any type grows as \(e^{(3-q)\eta}\) for a \((3+1)\)-dimensional multiverse, where \(q\) is an exponentially small number (see Fig. 6). This means that most vacua are created close to the cutoff surface. *What effect does this have?*
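To make the structure of these rate equations concrete, here is a toy numerical sketch (the three-vacuum landscape, the transition rates `kappa` and the terminal decay rates `term` are all invented for illustration; they are not from the paper). In scale factor time the comoving volume fractions obey a linear system, and the small decay exponent \(q\) is set by the dominant eigenvalue of the transition matrix:

```python
import numpy as np

# Toy rate equations of eternal inflation in scale-factor time eta:
#   df_i/deta = sum_j kappa[i, j] * f_j - (sum_j kappa[j, i] + term[i]) * f_i
# f_i: fraction of comoving volume in (non-terminal) vacuum i,
# kappa[i, j]: transition rate j -> i, term[i]: decay rate to terminal vacua.
# All numbers below are illustrative stand-ins for exponentially small rates.
kappa = np.array([[0.0, 1e-4, 0.0],
                  [1e-5, 0.0, 2e-4],
                  [0.0, 3e-5, 0.0]])
term = np.array([1e-6, 5e-6, 2e-6])

M = kappa - np.diag(kappa.sum(axis=0) + term)

# Late-time attractor: f(eta) ~ e^{-q eta}, with -q the largest eigenvalue.
q = -max(np.linalg.eigvals(M).real)
print(f"q = {q:.3e}, bubble growth exponent = {3 - q:.6f}")
```

The physical bubble count then grows as \(e^{3\eta}\) times this slowly decaying comoving factor, i.e. as \(e^{(3-q)\eta}\), which is why most bubbles nucleate close to the cutoff.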

If we have a landscape of vacua that are similar in all ways and have almost identical evolution except some bubbles undergo more inflation than others (see Fig. 7), then the vacua which undergo large amounts of inflation (before being able to thermalize and produce galaxies) must nucleate long before the cutoff, compared to similar vacua which undergo less inflation before thermalizing. So which effect wins? Are most observers to be found in the many vacua which nucleate shortly before the cutoff? Or are most observers to be found on the larger observation hypersurfaces of the sparse older vacua which underwent more inflation? In the eternally inflating multiverse, these two effects balance one another.

FIG. 7. Vacua with a different number of e-folds of inflation but the same post inflationary evolution. Vacua which undergo less e-folds of inflation can nucleate closer to the cutoff surface, and thus there are more of them.

**Transdimensional measures**

FIG. 8. Lower dimensional depictions of decompactification and compactification.

Up to now we have discussed the scale factor measure that was developed for an "equidimensional" multiverse. But we expect transitions in the landscape between vacua with different effective dimensionality. For example, consider a 6d Einstein–Maxwell landscape with a positive cosmological constant and a magnetically charged two-form field.

The landscape of this model includes a \(dS_6\) vacuum and many metastable \(dS_4\times S^2\), \(M_4\times S^2\) and \(AdS_4\times S^2\) vacua. The model also includes perturbatively unstable vacua. Decompactification, flux tunneling and compactification transitions take place between these vacua.

*How do we define a measure on a landscape that includes transdimensional tunneling?* In our paper we came up with two new measures: the transdimensional scale factor measure, and the transdimensional volume factor measure. Let’s first consider the transdimensional scale factor measure.

We basically go through the same steps as for the regular scale factor measure to define our cutoff surface, but the eternal inflation rate equations need to be modified. This time we find that the number of bubbles of all types grows as \(e^{(D-q)\eta}\), where \(D\) is the maximum number of inflating (spatial) dimensions in the landscape (for example, in a 6d Einstein–Maxwell landscape, \(D=5\)).

For the transdimensional volume factor cutoff, after defining our cutoff surface as a surface of constant volume expansion, we again need input from the rate equations, which once again need to be modified. We find that the number of all new bubbles grows as the exponential of \((3-q)\,\eta_V\), where \(\eta_V\) is the volume factor time (recall that in the equidimensional case \(\eta_V = \eta\)).

So what do these modifications to the rate equations and their solutions mean? *What are the properties of the transdimensional scale and volume factor measures?*

Suppose we have a landscape with bubbles which have comparable nucleation rates and post-inflationary evolution, BUT different inflaton potentials (see Fig. 7) and thus different numbers of e-folds of inflation \(N_e\). We can ask:

*How does the number of observers regulated with the transdimensional scale and volume factor cutoffs depend on \(N_e\)?*

For the scale factor measure, if we are comparing observers in vacua with \(d = D\) (bubbles of the maximal inflating dimensionality), there is no selection effect favoring small or large values of \(N_e\). What has happened here is an almost precise cancellation of two effects: the volume growth inside bubbles, and the decrease in the number of nucleated bubbles as we go back in time.

However, for vacua with \(d < D\) (such as three-dimensional bubbles in a higher-dimensional landscape), the transdimensional scale factor measure is inconsistent with observations.

*What about the transdimensional volume factor measure?* There is no selection effect favoring small or large values of \(N_e\), because (as with the equidimensional scale factor measure) the volume growth inside bubbles is balanced by the decrease in the number of bubbles as we go back in time.

**Youngness bias**

Another issue that a candidate measure needs to avoid is the "youngness paradox". Consider two kinds of observers who live in bubbles of the same type, but take different times to evolve after the end of inflation (as shown in Fig. 9). Observers who take less time to evolve can live in bubbles that nucleate closer to the cutoff, and thus have more volume available to them, because there are more of these young bubbles (remember the rate equations). With proper-time slicing this growth of volume is so fast that observers who evolve even a tiny bit faster are rewarded by a huge volume factor, resulting in some bizarre predictions. On the other hand, the scale factor cutoff in \(3+1\) dimensions gives only a mild youngness bias – observers evolving faster by a scale factor time $\delta\eta$ are rewarded by the bias factor \(e^{3\delta\eta}\).

The generalization of this result to a transdimensional landscape with a scale factor cutoff is \(e^{D\delta\eta}\). In the string theory landscape we could have \(D\) as large as 9, so the youngness bias for the transdimensional scale factor measure is stronger than in the equidimensional case, but it is still mild enough to avoid any paradoxes.

Similarly, for the transdimensional volume factor cutoff, we have a mild youngness bias.

FIG. 9. Youngness bias.

**Conclusions**

In the paper “Measures for a Transdimensional Multiverse”, we investigated possible generalizations of the scale factor measure to a transdimensional multiverse. We considered a straightforward extension of the scale factor cutoff, and what we called the volume factor cutoff.

On the positive side, we found that the transdimensional scale factor measure is not subject to the youngness paradox. On the negative side, however, this measure exponentially disfavors large amounts of slow-roll inflation inside the bubbles. This results in a preference for low values of the density parameter \(\Omega\), while observations indicate that \(\Omega\) is actually very close to 1.

The severity of the problem depends on the highest dimension of an inflating vacuum in the landscape. In the paper we show that if \(D\) is significantly larger than 3, as can be expected in the string theory landscape, then the scale factor measure is observationally ruled out at a high confidence level.

The properties of the transdimensional volume factor measure are essentially the same as those of the equidimensional scale factor measure in \(3+1\) dimensions. It is free from the youngness paradox and does not exponentially favor small or large amounts of slow-roll inflation.

Another potential problem for any measure is an excessive production of freak observers that spontaneously nucleate as de Sitter vacuum fluctuations, called Boltzmann brains (BB’s). Measures predicting that BB’s greatly outnumber ordinary observers should be ruled out. In our paper we discussed how our measures stand up to the BB test. In summary, the transdimensional scale factor measure does not suffer from a BB paradox, and the transdimensional volume factor measure may or may not have a BB problem, depending on the properties of the landscape (the conditions for BB avoidance for this measure are the same as those previously found for the scale factor case). Given our present state of knowledge, the volume factor measure is certainly not ruled out by the BB problem.

Our goal in this paper was to analyze and compare the properties of the scale factor and volume factor measures in a transdimensional multiverse. Overall, our results suggest that the volume factor cutoff is a more promising measure candidate than the scale factor measure, which predicts low values of the density parameter \(\Omega\); that problem becomes acceptably mild only if the highest dimension of the inflating vacua in the landscape is close to four.

Post from: NEQNET: The world of theoretical physics

Transdimensional Multiverse Measures

Large non-Gaussianity from axion inflation

**1. Probing Inflaton Interactions with the CMB**

What can be gleaned about microscopic particle physics by studying the large scale structure of our universe? At first glance, the question might seem surprising. However, in recent years the interface between cosmology and high energy particle physics has been a vibrant and active field of research. In part, this interaction has been driven by the idea that the observed large-scale cosmological perturbations in our universe originate from quantum mechanical processes. According to the dominant paradigm, the very early universe underwent a prolonged phase of rapid expansion called inflation during which the quantum fluctuations of a spin-0 particle (the inflaton) were stretched to cosmological size and imprinted on the large-scale geometry of space-time. These fluctuations provide the seeds for structure formation and may be probed in the contemporary universe by Cosmic Microwave Background (CMB) and Large Scale Structure (LSS) observations. Although we’re still eagerly waiting for the “smoking gun” confirmation of this scenario (most likely in the form of a detection of primordial gravitational waves), the data nevertheless seem consistent with the idea that inflation occurred.

For me, a big part of the excitement surrounding this idea is that the dynamics of the very early universe seem to have involved particle physics in very extreme conditions. The characteristic energy scale associated with inflation is many orders of magnitude higher than what is accessible in particle accelerators. Moreover, the transition from inflation to the conventional hot big bang may have been characterized by violent instabilities and turbulent, nonlinear dynamics of quantum fields far from equilibrium. The microphysics underlying these extreme moments in the early history of our universe may have left observable imprints in the CMB, LSS and also relic gravitational waves. In this way, there is a possibility that cosmological observations might teach us something about fundamental particle physics in a regime that will probably never be accessible in the laboratory.

A key question is how the inflaton "fits in" with the rest of particle physics. What are the properties of this new particle? How does it interact with other particles in nature? I will argue that valuable clues may be encoded in the statistical distribution of CMB fluctuations, specifically departures from gaussianity. Cosmological nongaussianity has attracted a lot of interest over the last few years. This excitement, I think, is well justified. Nongaussian statistics encode a wealth of detailed information about the physics of the early universe. Moreover, this excitement is timely: nongaussianity will be probed to unprecedented accuracy with the recently-launched Planck satellite.

What level of nongaussianity should be expected for reasonable microscopic models of inflation? Most cosmologists would say "not very much"; it is usually claimed that the simplest models of inflation predict a level of primordial nongaussianity that is too small to be resolved with Planck. Indeed, nongaussianity is sometimes touted as a "smoking gun" signal for non-standard inflationary dynamics. Very roughly, the argument goes like this: nongaussianity is a measure of the strength of interactions; however, the kinds of theories that lead to inflation are almost always very weakly interacting. (Getting a bit more technical: slow roll inflation requires extremely flat scalar potentials, which, in turn, imply that interactions are weak and nongaussianities are small.) In order to circumvent this "no go" argument, previous studies have invoked a variety of novel effects including strong dissipation, higher derivatives and small sound speed. These are all very interesting ideas and I've worked on some of them myself. However, I think it's fair to say that many of the models on the market which predict large nongaussianity are somewhat unorthodox.

My recent paper, written in collaboration with Marco Peloso, challenges the conventional lore. We have found that **the simplest and most natural particle physics models of inflation can easily lead to an observable nongaussian signal** and are, in fact, already constrained by observation! In this recent work we rely on a very simple loophole to the “no go” argument presented above: that chain of logic only applies to self-interactions of the inflaton particle. On the other hand, one generically expects that the inflaton should couple not just to itself but also to other particles in nature. (Indeed, this is probably necessary in order to successfully recover the hot big bang after inflation.) Such interactions are almost always ignored, however, it turns out that their consistent inclusion can have a radical impact on the observational predictions of a model. The reason these interactions can be so important has to do with a novel effect — nonperturbative particle production — that I’ll discuss in more detail shortly.

The idea that particle production during inflation might lead to interesting phenomenology was also the subject of my last posting on NEQNET. In my recent paper, we consider a qualitatively similar effect which can arise in a very natural way. We work in the context of a popular and theoretically appealing class of inflation models and show how consistent inclusion of the interactions between the inflaton and spin-1 particles very naturally leads to large nongaussianity. Part of what I find exciting about this model is its simplicity: we don’t require any “extra” structure or unnecessary complication in model-building in order to get an interesting phenomenology. In terms of the bigger picture, I think the message to take away from these kinds of studies is the following: **large nongaussianity is probably much more generic than has previously been thought**. Personally, I find this very encouraging. If we are lucky enough to detect this kind of effect, it may provide us with valuable clues about how the inflaton “fits in” with the rest of particle physics.

**2. Axions, Inflation and Particle Production**

As I mentioned above, slow roll inflation requires extremely flat scalar field potentials. Flat potentials are notoriously sensitive to UV physics and hence we are faced with a serious technical fine tuning problem. There are a number of possible solutions; however, one of the most cogent ideas is to suppose that the inflaton is an axion. Axions are characterized by a softly broken shift symmetry which protects the potential from large corrections that might otherwise spoil inflation. This simple idea provides a very natural way to implement inflation in a controllable effective field theory. Moreover, axions are ubiquitous in particle physics: they can arise whenever an approximate global symmetry is spontaneously broken and are plentiful in string theory models.

The first axion inflation model was proposed by Freese, Frieman and Olinto in 1990 and dubbed "Natural Inflation". The original model involved a super-Planckian decay constant and might be impossible to embed in a sensible UV theory. Happily, it turns out that there are plenty of ways to make axion inflation work with a sub-Planckian decay constant! (Some popular examples include N-flation and axion monodromy.) Our results apply to a huge variety of particle physics models of axion inflation.

For our mechanism, the important thing is how axions couple to spin-1 gauge fields. In any axion inflation model, there must generically be present a pseudo-scalar coupling of the form

\[ \mathcal{L}_{\rm int} = -\frac{\alpha}{4f}\,\varphi\,F_{\mu\nu}\tilde{F}^{\mu\nu} , \]

where \(\varphi\) is the inflaton, \(f\) is the axion decay constant and \(\alpha\) is a dimensionless coupling.

In the perturbative regime, this interaction can be represented by a diagram like the one below, that couples the inflaton field to two gauge bosons. (I should stress that the gauge boson in question here might be the usual standard model photon, or it might be some hidden sector field. Our analysis works in either case.)

This kind of interaction vertex is a very generic feature of what are perhaps the most natural particle physics models of inflation. Therefore, it makes sense to ask what the implications of this interaction are for the cosmological predictions of the model. We weren't the first group to think about this: in a very interesting work, Anber and Sorbo noticed that, at very strong coupling, this interaction could help to slow the motion of the inflaton via dissipation. The approach that Marco and I took is somewhat more conservative. We noticed that this interaction can dramatically modify the phenomenology of the model *even in the conventional slow roll regime*.

The underlying physics is as follows. During inflation the axion forms a homogeneous, classical condensate. Gauge bosons propagating in this background condensate experience a modified dispersion relation: instead of the usual E=p that one expects for a massless particle, there are new contributions which depend on the dynamics of the inflaton. The effect of this modified dispersion relation is to induce exponential growth for one of the circular polarization modes of the gauge field. This can be interpreted as gauge particle production and is very similar to tachyonic preheating (however, I must stress that we're talking about particle production *during* inflation).
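For concreteness, here is the modified dispersion relation in the notation standard in this literature (the symbols below are my assumptions, not reproduced from the post): writing the two helicities of the gauge field as \(A_\pm\) and working in conformal time \(\tau < 0\),

```latex
% Mode equation for the two circular polarizations of the gauge field,
% with the dimensionless parameter xi built from the inflaton velocity:
\frac{d^{2}A_{\pm}}{d\tau^{2}}
  + \left(k^{2} \pm \frac{2k\xi}{\tau}\right) A_{\pm} = 0 ,
\qquad
\xi \equiv \frac{\alpha\,\dot{\varphi}}{2 f H} .
% For xi > 0 the A_+ mode becomes tachyonic once |k\tau| < 2\xi, and is
% amplified by a factor of order e^{\pi\xi} around horizon crossing.
```

The exponential sensitivity to \(\xi\) is what makes this particle production so efficient even for modest couplings.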

So far, we have established that the dynamics of the homogeneous inflaton lead to copious production of gauge quanta, via the usual pseudo-scalar interaction that is unavoidable in any axion inflation model. What is the fate of these produced gauge quanta? The interaction vertex above can describe inverse decay, a process in which two gauge quanta "merge" to produce an inflaton particle. (You get this from the diagram above with time running from right to left.) Inverse decay processes provide a new source of cosmological perturbations, complementary to the usual quantum vacuum fluctuations of the inflaton. Remarkably, this new source of perturbations can actually *dominate* in the regime where the decay constant is sub-Planckian (which is where the effective field theory is under control). It is interesting that, although the general idea of axion inflation is more than 20 years old, this important source of cosmological perturbations has only recently been accounted for.

The main result of my recent paper is to note that **the phenomenology of axion inflation is radically modified when particle production and inverse decay are taken into account**. Firstly, the fluctuations generated by inverse decay are highly nongaussian. We find that a large (nearly) equilateral shape nongaussianity is easily generated. In fact, the WMAP7 limit on nongaussianity already implies a nontrivial bound on the axion decay constant. The second interesting prediction is a violation of the usual consistency relation for the tensor-to-scalar ratio. This arises because the produced gauge quanta contribute to the anisotropic stress tensor and thus source gravitational waves. In the most interesting regime of parameter space, it turns out that this latter effect is small. Hence, the generic expectation for axion inflation is large equilateral nongaussianity along with the same value of the spectral index and tensor-to-scalar ratio as in chaotic inflation. If Planck detects this type of nongaussianity, then we will have an indirect measurement of the inflaton coupling to gauge fields. If Planck sees nothing, then we will have a surprisingly stringent bound on this same coupling.

To sum up: in a given microscopic realization of inflation, there will typically be some couplings between the inflaton and other particles in nature. The specific form of these interactions depends on how the inflaton "fits in" with the rest of particle physics. What we have found is that very simple kinds of interactions in well-motivated models of inflation can lead to surprisingly rich particle production dynamics. These, in turn, give rise to novel observational signatures, most notably large nongaussianity. Moreover, this subject is only just beginning to be explored; I expect that a rich variety of qualitatively similar effects may be possible in all sorts of simple models. **It may be time to re-think the lore that observable nongaussianity cannot be obtained in simple, natural models of slow roll, single field inflation**.


On strong disorder renormalization

It is well established that if we consider a tight-binding model (see my previous blog if you are not familiar with tight-binding Hamiltonians) with on-site energies which are random, independent variables, the wavefunctions can either decay exponentially (localized) or be extended over space (delocalized), depending on the system’s dimensionality and the relative strength of the disorder (the magnitude of the fluctuating on-site energy) and the tunneling matrix element.

It is also well known that if we construct the equations of motion for a system of masses and springs, we can find *N* independent eigenmodes (where *N* is the number of degrees of freedom) that oscillate with a given eigenfrequency. To find the eigenmodes and eigenfrequencies, we have to diagonalize a matrix whose off-diagonals are related to the spring constants between the different masses, and with the diagonal such that the sum of every row vanishes. If the springs are random, we have a problem similar to the previous one (the tight-binding Hamiltonian): we have to diagonalize a matrix with random entries, albeit with a different type of disorder (both "diagonal disorder" and "off-diagonal disorder" exist in the phonon problem) and a "strange" connection between the fluctuations of the diagonal and off-diagonal matrix elements, arising from the vanishing-row rule (which comes about, essentially, from momentum conservation). Can the vibrations of this disordered system be localized in space?

In the paper, we define an ensemble of random matrices which is related to the above phonon problem but which, as emphasized before, once formulated mathematically is also adequate to describe different physical problems. To define the model, let me present a prescription for generating a random matrix from the ensemble considered:

1. Choose *N* points randomly in a *d*-dimensional box, with a uniform distribution.

2. The off-diagonal matrix elements decay exponentially with the Euclidean distance between the sites, \(M_{ij} = e^{-r_{ij}/\xi}\).

3. The diagonal of the matrix is chosen as minus the sum of the rest of the row (or column, since the matrix is symmetric).
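The prescription above can be sketched in a few lines (the parameter values `N`, `d` and `xi` are my own illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, xi, L = 200, 2, 0.01, 1.0  # low density: xi << nearest-neighbor distance

points = rng.uniform(0.0, L, size=(N, d))             # step 1: random points
dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
M = np.exp(-dist / xi)                                # step 2: exponential decay
np.fill_diagonal(M, 0.0)
M -= np.diag(M.sum(axis=1))                           # step 3: rows sum to zero

eigvals = np.linalg.eigvalsh(M)   # M is symmetric, so eigvalsh applies
```

Because M is (minus) a graph Laplacian with positive weights, all its eigenvalues are non-positive, with a single zero mode given by the all-ones vector – the uniform translation of all the masses.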

Another simplification we make is that the system has low density, namely that \(\xi\), the length characterizing the exponential decay of the matrix elements, is much smaller than the average nearest-neighbor distance \(r_{\rm nn}\). We thus have a small parameter \(\xi/r_{\rm nn} \ll 1\).

To connect this with the masses and springs model mentioned previously, one should think of the points as masses, and of the off-diagonal matrix elements as the springs. Then, this matrix is the one discussed before corresponding to the normal modes (“phonons”) of the mechanical system.

The question we ask in this paper regards these random matrices. What do the eigenmodes look like? (For the masses and springs model, these are the "phonons".) What is the density-of-states of the eigenvalues? In fact, our motivation was not the phonon problem, but arose from our study of relaxations in electron glasses, which turns out to map mathematically to the same form of matrices (I will not explain what electron glasses are or describe the relation, but you can see Phys. Rev. B 77, 165207 (2008) and Phys. Rev. Lett. 103, 126403 (2009) for more details).

These analogies between different physical models which share the same underlying mathematical structure are very useful, and before going into the results let me mention another problem which is described by the same matrix, and from which one can draw further intuition: diffusion of a particle in a random environment. Here, we know that it is useful to consider the matrix describing the Markov process, whose off-diagonals are the transition rates between sites, and whose diagonal is such that the sum of every column vanishes (this property comes from conservation of the particle number this time).

In the following, I will describe the results for the density-of-states (DOS), as well as the localization properties of the model, which we obtain by using a strong disorder renormalization group approach. I should stress that the field of diffusion in a random environment is vast, as is the field of vibrations in disordered materials, and the results presented here are for a definite model, with its particular form of disorder. In the following, to have a concrete physical system in mind, I will focus on the masses and springs interpretation of the matrices.

Let me first discuss the DOS. It is possible to prove that all eigenvalues are negative, except one which is exactly zero. This must in fact be the case in the phonon problem, since a uniform translation of all the masses costs no energy and gives an exact zero mode. This is in contrast to the "usual" Anderson localization problem. We find that the DOS is different from the spectrum of phonons in an ordered lattice: in the disordered case, there is a broad range over which the spectrum approximately follows \(1/|\lambda|\). We find a formula that also accounts for the deviations from this form, plotted in Figure 3. The results for the DOS have important implications for the glass relaxation problem I mentioned above, and lead to slow, logarithmic relaxations, measurable over the course of hours and days; see Phys. Rev. Lett. 103, 126403 (2009) for more details.

To find the DOS, we calculated its moments, which can be related to the average of certain diagrams, and then reconstructed the distribution from its moments. As far as I know, this method was first used by Wigner to derive the beautiful semicircle law for the Gaussian random matrix ensembles (there, the moments turn out to be the Catalan numbers).
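As a quick check of the moment method in its classic setting (this is the textbook Wigner case, not the model of the paper): the 2k-th moment of the semicircle law equals the k-th Catalan number, which direct diagonalization of a large GOE matrix reproduces:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)   # GOE normalized so the spectrum fills [-2, 2]
lam = np.linalg.eigvalsh(H)

catalan = [1, 1, 2, 5, 14]       # C_0 .. C_4
moments = [np.mean(lam ** (2 * k)) for k in range(5)]
for k in range(5):
    print(k, round(moments[k], 3), catalan[k])
```

For the matrices discussed in this post the moments are of course different, but the reconstruction logic is the same.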

Let me proceed to the second main result of the work, which addresses the eigenmodes (*i.e.*, the localization properties). Here, we employed a strong disorder renormalization group (RG), which is similar in spirit to that used by Dasgupta and Ma, Bhatt and Lee, and Daniel Fisher to study certain spin systems, and recently used by Lee *et al.* to study a synchronization problem in 1D (see references in our paper). Here, the mechanical picture of masses and springs will be particularly useful to comprehend the RG process.

At each step of the RG process, we choose the two masses connected by the largest spring, and claim that they form an approximate eigenmode (to justify this step, we take advantage of the small parameter \(\xi/r_{\rm nn}\), corresponding to the disorder in the spring constants being large). We record the oscillation frequency, and, since the two masses are connected by a strong spring, we combine them into a single, larger mass. In this RG step, we have eliminated one degree of freedom of the system and found one eigenmode.

Repeating the process over and over again yields growing clusters; see Figure 1.

So we conclude that the eigenmodes (phonons) are localized clusters. The high-frequency ones correspond to pairs of points, while as we go to lower and lower frequencies their size increases. This is demonstrated numerically in Fig. 2, showing 4 eigenmodes of a particular 5000 × 5000 random matrix, found by numerical diagonalization (not using the RG). The most negative eigenvalues are close to −2, and correspond to two masses moving in anti-phase (top left eigenmode in the figure). As the eigenvalues approach zero, the eigenmodes contain a larger number of points, corresponding to larger clusters, which are localized in space (in the figure, the x axis is an arbitrary index, so it does not reflect this fact).

By combining the results of the DOS discussed earlier, with this RG procedure, we found an analytical form for the size of the cluster as a function of eigenvalue, showing that it *diverges* as one goes to zero frequency. This makes sense since we have an extended zero frequency mode, corresponding to the center-of-mass motion.
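The RG step described above can be sketched in a few lines (a toy version: an all-to-all, log-uniform spring distribution stands in for the spatial exponential one, and adding springs in parallel on merging is a simplification of the full scheme in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50
masses = np.ones(N)
# Broad (log-uniform) distribution of spring constants -> strong disorder
springs = np.exp(rng.uniform(-20.0, 0.0, size=(N, N)))
springs = np.triu(springs, 1)
springs = springs + springs.T            # symmetric, zero diagonal

frequencies = []
active = list(range(N))
while len(active) > 1:
    # 1. find the strongest remaining spring ...
    sub = springs[np.ix_(active, active)]
    a, b = np.unravel_index(np.argmax(sub), sub.shape)
    i, j = active[a], active[b]
    k, m1, m2 = springs[i, j], masses[i], masses[j]
    # 2. ... record it as an approximate eigenmode (relative oscillation) ...
    frequencies.append(np.sqrt(k * (1.0 / m1 + 1.0 / m2)))
    # 3. ... and merge the two masses into one cluster.
    masses[i] = m1 + m2
    springs[i, :] += springs[j, :]
    springs[:, i] += springs[:, j]
    springs[i, i] = 0.0
    active.remove(j)
```

Each pass eliminates one degree of freedom and records one mode, so after N − 1 steps a single cluster (the zero mode of the full system) remains.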

In a recent interesting work, http://arxiv.org/abs/1010.1627, Monthus and Garel study the same model, and correctly point out that, in general, one should choose not the two masses connected by the largest spring, but the two masses which have the largest *frequency*. However, in the case above, in which initially all masses are equal, this is not important, since, as I shall shortly explain, the mass distribution stays narrow along the RG process. The following graph compares the DOS obtained by exact diagonalization with that obtained by the RG, as well as with the analytic formula mentioned above.

Let me conclude by explaining why the distribution of masses stays narrow in the process. We assume that the clustering process is completely random: *i.e.*, at every step two random clusters are chosen and glued together. So we have a simple statistical problem: initially, we have a bag full of unit masses, and at every step two of them are chosen and united into one mass. What is the resulting distribution of masses? One can show that in the continuous limit, the flow of the mass distribution \(f(m,T)\) is described by the following integro-differential equation:

\[ \frac{\partial f(m,T)}{\partial T} = \frac{1}{2}\int_0^m f(m',T)\, f(m-m',T)\, dm' \;-\; f(m,T)\int_0^\infty f(m',T)\, dm' , \]

where the rescaled time \(T\) is proportional to \(k\), the number of steps.

This equation was written down almost a hundred years ago by Smoluchowski in the context of molecules sticking to each other and clustering, and is known as the Smoluchowski coagulation equation. An exponential distribution, \(f(m) \propto e^{-m/\bar{m}}\), is a solution, and it can be shown that starting from any initial condition one converges to this solution (see M. Aizenman and T. Bak, Commun. Math. Phys. 65, 203 (1979)).

This shows that the mass distribution will be exponential, which is narrow in comparison to the broad distribution of spring constants.
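The bag-of-masses argument is easy to simulate directly (the system size and the stopping point are arbitrary illustrative choices):

```python
import random

random.seed(0)
N = 20000
bag = [1.0] * N                  # start from N unit masses
for _ in range(N // 2):          # merge random pairs until N/2 clusters remain
    i, j = random.sample(range(len(bag)), 2)
    bag[i] += bag[j]
    bag.pop(j)

mean = sum(bag) / len(bag)
var = sum((m - mean) ** 2 for m in bag) / len(bag)
print(mean, var ** 0.5)          # std comparable to mean: a narrow distribution
```

The mean is exactly 2 here (total mass N shared among N/2 clusters), and the standard deviation stays of the same order as the mean, as expected for the exponential fixed point – narrow compared with the many-decades-broad distribution of spring constants.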

To summarize, I have discussed here the localization properties of a class of random matrices which correspond to a variety of physical models, and shown that they are characterized by a particular distribution of eigenvalues, and have localized eigenmodes whose spatial size diverges as the eigenvalue approaches zero.

There are many other important open problems related to the model I discussed here. What happens at higher densities? (a lot of work was done on this aspect of the problem by Parisi and coworkers). Is the RG process exact at low frequencies? (see Monthus and Garel’s paper). What are the elastic properties of the disordered phonon system, and its thermal conductance? (we are working on these now, in collaboration with Prof. Vincenzo Vitelli at Leiden University).

**Literature**

- M. Mehta. Random matrices
- K. Efetov. Supersymmetry in disorder and chaos
- G. Anderson et al. An introduction to random matrices
- S. Alexander, J. Bernasconi, W. R. Schneider and R. Orbach, Excitation dynamics in random one-dimensional systems, Rev. Mod. Phys. 53, 175–198 (1981)


Relaunching NEQNET

After 1.5 years I finally consider myself settled down and ready to return to more or less active blogging. NEQNET will be relaunched within the next 30 days with a new design and, hopefully, lots of new content. Also, it seems there will be new team members working on NEQNET together with me, in a position somewhat more extended than that of a guest blogger. I really hope I have not lost my readers completely, so if you still care about NEQNET, please leave a comment on this post.

If you were ever thinking about writing a guest post for NEQNET about your recent work, now is your chance to do so. Contact me at *dmitry AT nonequilibrium.net* and I'll create an account for you at NEQNET.

If you would be interested in being a co-editor at NEQNET for one (or several) of the following topics: HEP theory and string theory, HEP phenomenology, condensed matter theory, non-linear dynamics and hydrodynamics, please contact me (do the same if you feel that NEQNET should discuss some other topics, too).

Cheers,

and very respectfully yours.


Saturday’s photoguess: what does this monkey symbolize?

There are two questions for this week’s photoguess:

1) What exactly does the poor monkey symbolize?

2) Where is this strange artefact located? (Answering this question will also help you find the answer to question 1.)

**Hint**: the topic of this photoguess is directly related to the subject of one of this week’s posts on NEQNET.


Dynamics of space storm

Typically, we know that a storm has hit Earth when we see auroras, and it is easier to catch them in the north, closer to the magnetic poles of the Earth, where the characteristic width of the Earth’s magnetic field “layer” is much smaller than near the equator. Although I spent 3 years in Helsinki, I was never lucky enough to catch an aurora.


One step for a Man

Nature News has decided to start running a Twitter microblog devoted to the history of the Apollo 11 mission, the first manned landing on the Moon. They will basically tweet all the steps of the mission, to the Moon and back, day after day, event after event, as if it were happening today, in 2009. They also promise that the feed will include various contextual information: politics, related events, etc. Following this Twitter feed will open you up to a quite unique experience: learning exactly what your father felt back in 1969. *Thanks so much* for this precious gift, Nature. Feeds such as yours make Twitter a really great service, worth having a Twitter account for.

If I may suggest one thing… Many of your readers are young, restless and eager to learn. Although it is priceless to remember the history, to know what our fathers and mothers were able to accomplish, your younger readers look into the Future. Start another feed, about *the future mission*, the next “step for a Man”. I believe such a feed will find its readers.

*Thank you, Nature*.

P.S. Let me also list some other news on the subject (space research) I found interesting…

a) due to a lack of funding, ESA is going to somewhat cut back the forthcoming ExoMars mission (scheduled for 2016 at this moment). In particular, they decided to drop “Humboldt” from the project; “Humboldt” is a static science payload for studying the Martian climate. There are also difficulties with ESA’s BepiColombo mission to Mercury. It is not as if the EU was more seriously affected by the crisis than the USA, and ESA’s funding is not comparable to NASA’s anyway. I really hope that this is not a reflection of the seemingly stable recent trend of funding cuts for space research.

b) China Daily and a couple of other portals, including *the authorized government portal site to China*, www.china.org.cn, have recently published a rather interesting note. In it, a senior strategist presents the opinion that “hi-tech military corps, including space forces, need to be considered in the future development plan of the Chinese Army”. I guess the very fact that China develops military satellites (as well as a military space station) is an open secret. What is interesting is that the opinion was published on the State Council Information Office portal. From my point of view, it means that the development of the program is really close to completion. Whether you want it or not, the Middle Kingdom is going to be a Player in space.


LISA technology and instrumentation

Oliver Jennrich (European Space Agency) has prepared a large review on technical aspects of the LISA (space laser interferometer) mission. The project is extremely complicated to realize: many technologies are not even fully developed yet, and various prototypes will have to be launched. Yet the possible payoff is huge, including the possible detection of gravitational waves from the very early Universe, from the inflationary and reheating stages (just think about it: detecting EM radiation has so far not really allowed us to go beyond a certain redshift, not counting the CMB, of course). Whoever is a sucker for space research like I am, please check out the paper. It does not discuss the science related to the mission, but it contains tons of technical information about LISA you won’t find anywhere else.


Test beam for LHCb


A bit about climate change

In their recent paper “Testing the link between terrestrial climate change and Galactic spiral structure”, Adrian Melott and a collaborator debunk this hypothesis using new data on the structure of the Milky Way (Englmaier et al., 2008; see the figure below from their paper).

The current location of the Solar system is shown by a black dot; the units on the axes are kpc.

The result of the study is that any such correlation, as well as any periodic trend related to the rotation of the Galaxy, is absent.

Via sergepolar.


Susskind’s lectures on cosmology

P.S. If you were unable to see embedded video, here is the link to the playlist I’ve created for you.


Other interesting things in ArXiv (12 Jun 2009)

M. Hermanns. Condensing non-Abelian quasiparticles. Maria Hermanns discusses the physics of the fractional quantum Hall effect at a filling factor where the elementary excitations (anyons) seem to possess non-Abelian statistics. The corresponding wavefunctions are given by certain CFTs, and she is interested in understanding the physics of the daughter states in these CFTs.

H. Perets et al. A new type of stellar explosion. As we know, all supernova explosions are divided into two classes: Type I (a/b/c) and Type II (let me remind you that Type Ia supernovae are standard candles and are used to determine the expansion rate of the Universe at large redshifts). This team claims to have discovered a supernova of a new type which does not fit this classification. Maybe I’ll write about this result in more detail later.

R. Williams et al. Skyalert: Real-time Astronomy for You and Your Robots. Skyalert.org is a web application which will certainly be useful for astronomers, both professional and amateur. It collects data about time-critical astronomical transients and then pushes them to subscribers who, by setting their own trigger rules, can filter events according to location, magnitude, etc.


How to use technology to teach undergraduate biology
