Friday, December 02, 2016

Can dark energy and dark matter emerge together with gravity?

A macaroni pie? Elephants blowing balloons?
No, it’s Verlinde’s entangled universe.
In a recent paper, the Dutch physicist Erik Verlinde explains how dark energy and dark matter arise in emergent gravity as deviations from general relativity.

It’s taken me a while to get through the paper. Vaguely titled “Emergent Gravity and the Dark Universe,” it’s a 51-page catalog of ideas patched together from general relativity, quantum information, quantum gravity, condensed matter physics, and astrophysics. It is clearly still research in progress and not anywhere close to completion.

The new paper substantially expands on Verlinde’s earlier idea that the gravitational force is some type of entropic force. If that were so, it would mean gravity is not due to the curvature of space-time – as Einstein taught us – but instead caused by the interaction of the fundamental elements which make up space-time. Gravity, hence, would be emergent.

I find it an appealing idea because it allows one to derive consequences without having to specify exactly what the fundamental constituents of space-time are. Just as you can work out the behavior of gases under pressure without having a model for atoms, you can work out the emergence of gravity without having a model for whatever builds up space-time. The details would become relevant only at very high energies.

As I noted in a comment on the first paper, Verlinde’s original idea was merely a reinterpretation of gravity in thermodynamic quantities. What one really wants from emergent gravity, however, is not merely to get back general relativity. One wants to know which deviations from general relativity come with it, deviations that are specific predictions of the model and which can be tested.

Importantly, in emergent gravity such deviations from general relativity could make themselves noticeable at long distances. The reason is that the criterion for what it means for two points to be close by each other emerges with space-time itself. Hence, in emergent gravity there isn’t a priori any reason why new physics must be at very short distances.

In the new paper, Verlinde argues that his variant of emergent gravity gives rise to deviations from general relativity at long distances, and that these deviations correspond to dark energy and dark matter. He doesn’t explain dark energy itself. Instead, he starts with a universe that by assumption contains dark energy like we observe, ie one that has a positive cosmological constant. Such a universe is described approximately by what theoretical physicists call a de Sitter space.

Verlinde then argues that when one interprets this cosmological constant as the effect of long-distance entanglement between the conjectured fundamental elements, then one gets a modification of the gravitational law which mimics dark matter.

The reason it works is that to get normal gravity one assigns an entropy to a volume of space which scales with the area of the surface that encloses the volume. This is known as the “holographic scaling” of entropy, and is at the core of Verlinde’s first paper (and earlier work by Jacobson and Padmanabhan and others). To get deviations from normal gravity, one has to do something else. For this, Verlinde argues that de Sitter space is permeated by long-distance entanglement which gives rise to an entropy that scales, not with the surface area of a volume, but with the volume itself. It consequently leads to a different force-law. And this force-law, so he argues, has an effect very similar to dark matter.
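To put rough equations behind the words – my schematic notation, not Verlinde’s, with all factors of order one dropped – the usual holographic entropy of a region of volume V enclosed by an area A is

$$ S_{\rm area} \sim \frac{A}{\ell_p^2}, \qquad \ell_p^2 = \frac{G\hbar}{c^3}, $$

while the entanglement entropy that Verlinde attributes to de Sitter space scales with the volume,

$$ S_{\rm vol} \sim \frac{V}{\ell_p^2\, L}, \qquad L \sim \sqrt{3/\Lambda}, $$

where L is the de Sitter radius set by the cosmological constant. Since an entropic force goes with the gradient of the entropy, the two scalings give rise to different force-laws.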

Not only does this modified force-law from the volume-scaling of the entropy mimic dark matter, it more specifically reproduces some of the achievements of modified gravity.

In his paper, Verlinde derives the observed relation between the luminosity of spiral galaxies and the rotational velocity of their outermost stars, known as the Tully-Fisher relation. The Tully-Fisher relation can also be found in certain modifications of gravity, such as Moffat Gravity (MOG), but more generally in every modification that approximates Milgrom’s Modified Newtonian Dynamics (MOND). Verlinde, however, does more than that. He also derives the parameter which quantifies the acceleration at which the modification of general relativity becomes important, and gets a value that fits well with observations.

It was known before that this parameter is related to the cosmological constant. There have been various attempts to exploit this relation, most recently by Lee Smolin. In Verlinde’s approach the relation between the acceleration scale and the cosmological constant comes out naturally, because dark matter has the same origin as dark energy. Verlinde further offers expressions for the apparent density of dark matter in galaxies and clusters, something that, with some more work, can probably be checked observationally.
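To give you a sense of the numbers: the acceleration scale at which the modification kicks in is numerically close to the speed of light times the Hubble rate. Here’s a quick back-of-the-envelope check – my own, not taken from the paper:

    import math

    # Back-of-the-envelope check: Milgrom's acceleration scale is
    # numerically close to c*H0/(2*pi), and the Hubble rate H0 is tied
    # to the cosmological constant by Lambda ~ 3*(H0/c)^2.
    c = 2.998e8          # speed of light in m/s
    H0 = 2.20e-18        # Hubble rate, ~68 km/s/Mpc, converted to 1/s
    a0_mond = 1.2e-10    # Milgrom's empirical acceleration scale in m/s^2

    a0_cosmo = c * H0 / (2 * math.pi)
    print(f"c*H0/(2*pi) = {a0_cosmo:.2e} m/s^2")               # ~1.0e-10
    print(f"ratio to Milgrom's a0: {a0_cosmo / a0_mond:.2f}")  # ~0.87

The factor 2π is cosmetic; the point is that the two scales agree to within a factor of order one.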

I find this an intriguing link which suggests that Verlinde is onto something. However, I also find the model sketchy and unsatisfactory in many regards. General Relativity is a rigorously tested theory with many achievements. To do any better than general relativity is hard, and thus for any new theory of gravity the most important thing is to have a controlled limit in which General Relativity is reproduced to good precision. How this might work in Verlinde’s approach isn’t clear to me because he doesn’t even attempt to deal with the general case. He starts right away with cosmology.

Now in cosmology we have a preferred frame which is given by the distribution of matter (or by the rest frame of the CMB if you wish). In general relativity this preferred frame does not originate in the structure of space-time itself but is generated by the stuff in it. In emergent gravity models, in contrast, the fundamental structure of space-time tends to have an imprint of the preferred frame. This fundamental frame can lead to violations of the symmetries of general relativity, and the effects aren’t necessarily small. Indeed, there are many experiments that have looked for such effects and haven’t found anything. It is hence a challenge for any emergent gravity approach to demonstrate just how it avoids such symmetry violations.

Another potential problem with the idea is the long-distance entanglement which is sprinkled over the universe. The physics which we know so far works “locally,” meaning stuff can’t interact over long distances without a messenger that travels through space and time from one point to the other. It’s the reason my brain can’t make spontaneous visits to the Andromeda nebula, and most days I think that benefits both of us. But like it or not, the laws of nature we presently have are local, and any theory of emergent gravity has to reproduce that.

I have worked for some years on non-local space-time defects, and based on what I learned from that I don’t think the non-locality of Verlinde’s model is going to be a problem. My non-local defects aren’t the same as Verlinde’s entanglement, but guessing that the observational consequences scale similarly, the amount of entanglement you need to get something like a cosmological constant is too small to leave any other noticeable effects on particle physics. I am more worried about the recovery of local Lorentz-invariance. I went to great pains in my models to make sure I wouldn’t get violations of it, and I can’t see how Verlinde addresses the issue.

The more general problem I have with Verlinde’s paper is the same I had with his 2010 paper, which is that it’s fuzzy. It remains unclear to me exactly what the necessary assumptions are. I hence don’t know whether it’s really necessary to have this interpretation with the entanglement and the volume-scaling of the entropy and with assigning elasticity to the dark energy component that pushes in on galaxies. Maybe it would already be sufficient to add a non-local modification to the sources of general relativity. Having toyed with that idea for a while, I doubt it. But I think Verlinde’s approach would benefit from a more axiomatic treatment.

In summary, Verlinde’s recent paper offers the most convincing argument I have seen so far that dark matter and dark energy are related. However, it is presently unclear whether this approach brings unwanted side-effects that are already in conflict with observation.

Wednesday, November 30, 2016

Dear Dr. B: What is emergent gravity?

    “Hello Sabine, I've seen a couple of articles lately on emergent gravity. I'm not a scientist so I would love to read one of your easy-to-understand blog entries on the subject.

    Regards,

    Michael Tucker
    Wichita, KS”

Dear Michael,

Emergent gravity has been in the news lately because of a new paper by Erik Verlinde. I’ll tell you some more about that paper in an upcoming post, but answering your question makes for a good preparation.

The “gravity” in emergent gravity refers to the theory of general relativity in the regimes where we have tested it. That means Einstein’s field equations and curved space-time and all that.

The “emergent” means that gravity isn’t fundamental, but instead can be derived from some underlying structure. That’s what we mean by “emergent” in theoretical physics: If theory B can be derived from theory A but not the other way round, then B emerges from A.

You might be more familiar with seeing the word “emergent” applied to objects or properties of objects, which is another way physicists use the expression. Sound waves in the theory of gases, for example, emerge from molecular interactions. Van der Waals forces emerge from quantum electrodynamics. Protons emerge from quantum chromodynamics. And so on.

Everything that isn’t in the standard model or general relativity is known to be emergent already. And since I know that it annoys so many of you, let me point out again that, yes, to our current best knowledge this includes cells and brains and free will. Fundamentally, you’re all just a lot of interacting particles. Get over it.

General relativity and the standard model are currently the most fundamental descriptions of nature which we have. For the theoretical physicist, the interesting question is then whether these two theories are also emergent from something else. Most physicists in the field think the answer is yes. And any theory in which general relativity – in the tested regimes – is derived from a more fundamental theory, is a case of “emergent gravity.”

That might not sound like such a new idea and indeed it isn’t. In string theory, for example, gravity – like everything else – “emerges” from, well, strings. There are a lot of other attempts to explain gravitons – the quanta of the gravitational interaction – as not-fundamental “quasi-particles” which emerge, much like sound-waves, because space-time is made of something else. An example of this is the model pursued by Xiao-Gang Wen and collaborators in which space-time, and matter, and really everything is made of qubits. Including cells and brains and so on.

Xiao-Gang’s model stands out because it can also include the gauge-groups of the standard model, though last time I looked chirality was an issue. But there are many other models of emergent gravity which focus on just getting general relativity. Lorenzo Sindoni has written a very useful, though quite technical, review of such models.

Almost all such attempts to have gravity emerge from some underlying “stuff” run into trouble because the “stuff” defines a preferred frame which shouldn’t exist in general relativity. They violate Lorentz-invariance, which we know observationally is fulfilled to very high precision.

An exception to this is entropic gravity, an idea pioneered by Ted Jacobson 20 years ago. Jacobson pointed out that there are very close relations between gravity and thermodynamics, and this research direction has since gained a lot of momentum.

The relation between general relativity and thermodynamics in itself doesn’t make gravity emergent, it’s merely a reformulation of gravity. But thermodynamics itself is an emergent theory – it describes the behavior of very large numbers of some kind of small things. Hence, that gravity looks a lot like thermodynamics makes one think that maybe it’s emergent from the interaction of a lot of small things.
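To give you an idea of how this relation looks: Jacobson showed, schematically, that if one demands the Clausius relation of thermodynamics to hold for all local (Rindler) horizons, with the temperature being the Unruh temperature and the entropy proportional to the horizon area,

$$ \delta Q = T\,{\rm d}S, \qquad T = \frac{\hbar\, a}{2\pi k_B c}, \qquad S = \frac{k_B c^3 A}{4 G\hbar}, $$

then Einstein’s field equations follow as an equation of state (up to a cosmological constant term).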

What are the small things? Well, the currently best guess is that they’re strings. That’s because string theory is (at least to my knowledge) the only way to avoid the problems with Lorentz-invariance violation in emergent gravity scenarios. (Gravity is not emergent in Loop Quantum Gravity – its quantized version is directly encoded in the variables.)

But as long as you’re not looking at very short distances, it might not matter much exactly what gravity emerges from. Just as thermodynamics was developed before it could be derived from statistical mechanics, we might be able to develop emergent gravity before we know what to derive it from.

This is only interesting, however, if the gravity that “emerges” is only approximately identical to general relativity, and differs from it in specific ways. For example, if gravity is emergent, then the cosmological constant and/or dark matter might emerge with it, whereas in our current formulation, these have to be added as sources for general relativity.

So, in summary, “emergent gravity” is a rather vague umbrella term that encompasses a large number of models in which gravity isn’t a fundamental interaction. The specific theory of emergent gravity which has recently made headlines is better known as “entropic gravity” and is, I would say, currently the most promising candidate for emergent gravity. It’s believed to be related to, or maybe even part of, string theory, but if there are such links they aren’t presently well understood.

Thanks for an interesting question!

Aside: Sorry about the issue with the comments. I turned on G+ comments, thinking they'd be displayed in addition, but that instead removed all the other comments. So I've reset this to the previous version, though I find it very cumbersome to have to follow four different comment threads for the same post.

Monday, November 28, 2016

This isn’t quantum physics. Wait. Actually it is.

Rocket science isn’t what it used to be. Now that you can shoot someone to Mars if you can spare a few million, the colloquialism for “It’s not that complicated” has become “This isn’t quantum physics.” And there are many things which aren’t quantum physics. For example, making a milkshake:
“Guys, this isn’t quantum physics. Put the stuff in the blender.”
Or losing weight:
“if you burn more calories than you take in, you will lose weight. This isn't quantum physics.”
Or economics:
“We’re not talking about quantum physics here, are we? We’re talking ‘this rose costs 40p, so 10 roses costs £4’.”
You should also know that Big Data isn’t Quantum Physics and Basketball isn’t Quantum Physics and not driving drunk isn’t quantum physics. Neither is understanding that “[Shoplifting isn’t] a way to accomplish anything of meaning,” or grasping that no doesn’t mean yes.

But my favorite use of the expression comes from Noam Chomsky, who explains how the world works (thus the modest title of his book):
“Everybody knows from their own experience just about everything that’s understood about human beings – how they act and why – if they stop to think about it. It’s not quantum physics.”
From my own experience, stopping to think and believing one understands other people effortlessly is the root of much unnecessary suffering. Leaving aside that it’s quite remarkable some people believe they can explain the world, and even more remarkable others buy their books, all of this is, as a matter of fact, quantum physics. Sorry, Noam.

Yes, that’s right. Basketballs, milkshakes, weight loss – it’s all quantum physics. Because it’s all happening by the interactions of tiny particles which obey the rules of quantum mechanics. If it wasn’t for quantum physics, there wouldn’t be atoms to begin with. There’d be no Sun, there’d be no drunk driving, and there’d be no rocket science.

Quantum mechanics is often portrayed as the theory of the very small, but this isn’t so. Quantum effects can stretch over large distances and have been measured over distances up to several hundred kilometers. It’s just that we don’t normally observe them in daily life.

The typical quantum effects that you have heard of – things whose position and momentum can’t both be measured precisely, that are both dead and alive, that have a spooky action at a distance, and so on – don’t usually manifest themselves for large objects. But that doesn’t mean that the laws of quantum physics suddenly stop applying at a hair’s width. It’s just that the effects are feeble and human experience is limited. There is some quantum physics, however, which we observe wherever we look: If it wasn’t for Pauli’s exclusion principle, you’d fall right through the ground.

Indeed, a much more interesting question is “What is not quantum physics?” For all we presently know, the only thing not quantum is space-time and its curvature, manifested by gravity. Most physicists believe, however, that gravity too is a quantum theory, we just haven’t been able to figure out how this works.

“This isn’t quantum physics,” is the most unfortunate colloquialism ever because really everything is quantum physics. Including Noam Chomsky.

Wednesday, November 23, 2016

I wrote you a song.

I know you’ve all missed my awesome chord progressions and off-tune singing, so I’ve made yet another one of my music videos!


In the attempt to protect you from my own appearance, I recently invested some money into animation software by the name of Anime Studio. It has a 350-page tutorial. Me being myself, I didn’t read it. But I spent the last weekend clicking on any menu item that couldn’t vanish quickly enough, and I’ve integrated the outcome into the above video. I think I’ve now kind of figured out how the basics work. I might do some more of this. It was actually fun to turn a visual idea into a movie, something I’ve never done before. Though it might help if I could draw, so excuse the sickly looking tree.

Having said this, I also need to get myself new video editing software. I’m presently using Corel VideoStudio Pro which, after the Win10 upgrade, works even worse than it did before. I could not for the life of me export the clip with both good video and audio quality. In the end I sacrificed on the video quality, so sorry about the glitches. They’re probably simply computation errors or, I don’t know, the ghost of Windows 7 still haunting my hard disk.

The song, I hope, explains itself. One could say it’s the aggregated present mood of my facebook and twitter feeds. You can download the mp3 here.

I wish you all a Happy Thanksgiving, and I want to thank you for giving me some of your attention, every now and then. I especially thank those of you who have paid attention to the donate-button in the top right corner. It’s not much that comes in through this channel, but for me it makes all the difference -- it demonstrates that you value my writing and that keeps me motivated.

I’m somewhat behind with a few papers that I wanted to tell you about, so I’ll be back next week with more words and fewer chords. Meanwhile, enjoy my weltschmerz song ;)

Wednesday, November 16, 2016

A new theory SMASHes problems

Most of my school nightmares are history exams. But I also have physics nightmares, mostly about not being able to recall Newton’s laws. Really, I didn’t like physics in school. The way we were taught the subject, it was mostly dead people’s ideas. On the rare occasion our teacher spoke about contemporary research, I took a mental note every time I heard “nobody knows.” Unsolved problems were what fascinated me, not laws I knew had long been replaced by better ones.

Today, mental noting is no longer necessary – Wikipedia helpfully lists the unsolved problems in physics. And indeed, in my field pretty much every paper starts with a motivation that names at least one of these problems, preferably several.

A recent paper which excels on this count is that of Guillermo Ballesteros and collaborators, who propose a new phenomenological model named SM*A*S*H.
    Unifying inflation with the axion, dark matter, baryogenesis and the seesaw mechanism
    Guillermo Ballesteros, Javier Redondo, Andreas Ringwald, Carlos Tamarit
    arXiv:1608.05414 [hep-ph]

A phenomenological model in high energy particle physics is an extension of the Standard Model by additional particles (or fields, respectively) for which observable, and potentially testable, consequences can be derived. There are infinitely many such models, so to grab the reader’s attention, you need a good motivation for why your model in particular is worth studying. Ballesteros et al do this by tackling not one but five different problems! The name SM*A*S*H stands for Standard Model*Axion*Seesaw*Higgs portal inflation.

First, there are neutrino oscillations. Neutrinos can oscillate into each other if at least two of them have small but nonzero masses. But neutrinos are fermions, and fermions usually acquire masses by a coupling between the left-handed and right-handed versions of the particle. Trouble is, nobody has ever seen a right-handed neutrino. We have measured only left-handed neutrinos (or right-handed anti-neutrinos).

So to explain neutrino oscillations, either there must be right-handed neutrinos so heavy that we haven’t yet seen them. Or the neutrinos differ from the other fermions – they could be so-called Majorana neutrinos, which can couple to themselves and that way create masses. Nobody knows which is the right explanation.

Ballesteros et al in their paper assume heavy right-handed neutrinos. These create small masses for the left-handed neutrinos by a process called see-saw. This is an old idea, but the authors then try to use these heavy neutrinos also for other purposes.
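The see-saw can be summarized in one line. Sketched for a single generation – the model of course has three – if the left- and right-handed neutrinos share a Dirac mass m_D of roughly electroweak size, and the right-handed ones in addition have a large Majorana mass M_R, then diagonalizing the mass matrix leaves a light eigenvalue

$$ m_\nu \approx \frac{m_D^2}{M_R}, \qquad {\rm e.g.}\quad m_D \sim 100~{\rm GeV},\;\; M_R \sim 10^{14}~{\rm GeV} \;\;\Rightarrow\;\; m_\nu \sim 0.1~{\rm eV}. $$

The heavier the right-handed neutrinos, the lighter the observed ones – hence the name.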

The second problem they take on is the baryon asymmetry, or the question why matter was left over from the Big Bang but no anti-matter. If matter and anti-matter had existed in equal amounts – as the symmetry between them would suggest – then they would have annihilated to radiation. Or, if some of the stuff failed to annihilate, the leftovers should be equal amounts of both matter and anti-matter. We have not, however, seen any large amounts of anti-matter in the universe. These would be surrounded by tell-tale signs of matter-antimatter annihilation, and none have been observed. So, presently, nobody knows what tilted the balance in the early universe.

In the SM*A*S*H model, the right-handed neutrinos give rise to the baryon asymmetry by a process called thermal leptogenesis. This works basically because the most general way to add right-handed neutrinos to the standard model already offers an option to violate this symmetry. One just has to get the parameters right. That too isn’t a new idea. What’s interesting is that Ballesteros et al point out it’s possible to choose the parameters so that the neutrinos also solve a third problem.
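For the record, the relevant quantity in thermal leptogenesis is the CP asymmetry in the decays of the heavy neutrinos N into leptons ℓ and Higgs bosons H,

$$ \epsilon = \frac{\Gamma(N \to \ell H) - \Gamma(N \to \bar\ell \bar H)}{\Gamma(N \to \ell H) + \Gamma(N \to \bar\ell \bar H)}, $$

which is nonzero if the couplings of the neutrinos have complex phases. The lepton excess created in these decays is later converted into a baryon excess by standard-model sphaleron processes.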

The third problem is dark matter. The universe seems to contain more matter than we can see at any wavelength we have looked at. The known particles of the standard model do not fit the data – they either interact too strongly or don’t form structures efficiently enough. Nobody knows what dark matter is made of. (If it is made of something. Alternatively, it could be a modification of gravity. Regardless of what xkcd says.)

In the model proposed by Ballesteros et al, the right-handed neutrinos could make up the dark matter. That too is an old idea, but it isn’t working well here: The more massive of the right-handed neutrinos can decay into lighter ones by emitting a photon, and such decays haven’t been seen. The problem is getting the mass range of the neutrinos to work for both dark matter and the baryon asymmetry. Ballesteros et al solve this problem by making up dark matter mostly from something else, a particle called the axion. This particle has the benefit of also solving a fourth problem.

Fourth, the strong CP problem. The standard model is lacking a possible interaction term which would cause the strong nuclear force to violate CP symmetry. We know this term is either absent or very tiny because otherwise the neutron would have an electric dipole moment, which hasn’t been observed.

This problem can be fixed by promoting the constant in front of this term (the theta parameter) to a field. The field will then move towards the minimum of its potential, explaining the smallness of the parameter. The field, however, is accompanied by a particle (dubbed the “axion” by Frank Wilczek) which hasn’t been observed. Nobody knows whether the axion exists.
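The term in question is, in standard notation,

$$ {\cal L}_\theta = \theta\, \frac{g_s^2}{32\pi^2}\, G^a_{\mu\nu} \tilde G^{a\,\mu\nu}, $$

and the non-observation of a neutron electric dipole moment requires θ ≲ 10⁻¹⁰. The axion solution promotes θ to a dynamical field, θ → a(x)/f_a, whose potential drives it towards zero.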

In the SMASH model, the axion gives rise to dark matter by leaving behind a condensate and particles that are created in the early universe from the decay of topological defects (strings and domain walls). The axion gets its mass from an additional quark-like field (denoted with Q in the paper), and also solves the strong CP problem.

Fifth, inflation, the phase of rapid expansion in the early universe. Inflation was invented to explain several observational puzzles, notably why the temperature of the cosmic microwave background seems to be almost the same in every direction we look (up to small fluctuations). That’s surprising because in a universe without inflation the different parts of the hot plasma in the early universe which created this radiation had never been in contact before. They thus had no chance to exchange energy and come to a common temperature. Inflation solves this problem by blowing up an initially small patch to gigantic size. Nobody knows, however, what causes inflation. It’s normally assumed to be some scalar field. But where that field came from or what happened to it is unclear.

Ballesteros and his collaborators assume that the scalar field which gives rise to inflation is the Higgs – the only fundamental scalar which we have so far observed. This too is an old idea, and one that works badly. To make Higgs inflation work, one needs to introduce an unconventional coupling of the Higgs field to gravity, and this leads to a breakdown of the theory (loss of unitarity) in ranges where one needs it to work (ie the breakdown can’t be blamed on quantum gravity).
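For the record, the unconventional coupling is a non-minimal term between the Higgs doublet H and the curvature scalar R. Sketched in my notation, the action picks up a contribution

$$ S \supset \int {\rm d}^4x \sqrt{-g}\;\; \xi\, H^\dagger H\, R, \qquad \xi \sim 10^4, $$

where the large value of ξ is needed to match the observed amplitude of the fluctuations in the cosmic microwave background. Unitarity is then lost around the scale M_Pl/ξ, well below the Planck mass.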

The SM*A*S*H model contains an additional scalar field which gives rise to a more complicated coupling, and the authors claim that in this case the breakdown doesn’t happen until the Planck scale (where it can be blamed on quantum gravity).

So, in summary, we have three right-handed neutrinos with their masses and mixing matrix, a new quark-like field and its mass, the axion field, a scalar field, the coupling between the scalar and the Higgs, the self-coupling of the scalar, the coupling of the quark to the scalar, the axion decay constant, the coupling of the Higgs to gravity, and the coupling of the new scalar to gravity. Though I might have missed something.

In case you just scrolled down to see whether I think this model might be correct: The answer is almost certainly no. It’s a great model according to the current quality standard in the field. But when you combine several speculative ideas without observational evidence, you don’t get a model that is less speculative and has more evidence speaking for it.

Wednesday, November 09, 2016

Away Note

I’ll be in London for a few days, attending a RAS workshop on “Fine-Tuning on the Cosmological and the Quantum Scales.” First time I’m speaking about the topic, so a little nervous about that.

It just so happens that tomorrow evening there’s a public lecture in London by Roger Penrose which I might or might not attend, depending on whether my flight arrives as planned. Feeling somewhat bad because I haven’t read his recent book. Just judging by the title, I’m afraid it might have some overlap with mine.

This public lecture is arranged by the Ideas Roadshow, which I mentioned before. It’s run by Howard Burton, former director of PI. They have some teaser videos which you might enjoy:



Speaking of former directors, I believe Neil Turok’s term at PI is about to run out and I want to complain that I haven’t yet heard rumors who’s in the pipe^^.

As for this blog, please expect that comments might get stuck in the queue longer than usual.

Monday, November 07, 2016

Steven Weinberg doesn’t like Quantum Mechanics. So what?

A few days ago, Nobel laureate Steven Weinberg gave a one-hour lecture titled “What’s the matter with quantum mechanics?” at a workshop for science writers organized by the Council for the Advancement of Science Writing (CASW).

In his lecture, Weinberg expressed a newfound sympathy for the critics of quantum mechanics.
“I’m not as happy about quantum mechanics as I used to be, and not as dismissive of the critics. And it’s a bad sign in particular that those physicists who are happy about quantum mechanics, who see nothing wrong with it, don’t agree with each other about what it means.”
You can watch the full lecture here. (The above quote is at 17:40.)


It’s become a cliché that physicists in their late years develop an obsession with quantum mechanics. On this account, you can file Weinberg together with Mermin and Penrose and Smolin. I’m not sure why that is. Maybe it’s something which has bothered them all along, they just never saw it as important enough. Maybe it’s because they start paying more attention to their intuition, and quantum mechanics – widely regarded as non-intuitive – begins itching. Or maybe it’s because they conclude it’s the likely reason we haven’t seen any progress in the foundations of physics for 30 years.

Whatever Weinberg’s motivation, he likes neither Copenhagen, nor Many Worlds, nor decoherent or consistent histories, and he seems to be allergic to pilot waves (1:02:15). As for QBism, which Mermin finds so convincing, that doesn’t even seem noteworthy to Weinberg.

I learned quantum mechanics in the mid-1990s from Walter Greiner, the one with the textbook series. (He passed away a few weeks ago at age 80.) Walter taught the Copenhagen Interpretation. The attitude he conveyed in his lectures was what Mermin dubbed “shut up and calculate.”

Of course I, like most other students, spent some time looking into the different interpretations of quantum mechanics – nothing’s more interesting than the topics your prof refuses to talk about. But I’m an instrumentalist at heart, and I also quite like the mathematics of quantum mechanics, so I never had a problem with the Copenhagen Interpretation. I’m also, however, a phenomenologist. And so I’ve always thought of quantum mechanics as an incomplete, not fundamental, theory which needs to be superseded by a better, underlying explanation.

My misgivings about quantum mechanics are pretty much identical to the ones which Weinberg expresses in his lecture. The axioms of quantum mechanics, whatever interpretation you choose, are unsatisfactory for a reductionist. They should not mention the process of measurement, because the fundamental theory should tell you what a measurement is.

If you believe the wave-function is a real thing (psi-ontic), decoherence doesn’t solve the issue because you’re left with a probabilistic state that needs to be suddenly updated. If you believe the wave-function only encodes information (psi-epistemic) and the update merely means we’ve learned something new, then you have to explain who learns and how they learn. None of the currently existing interpretations address these issues satisfactorily.

It isn’t so surprising I’m with Weinberg on this because despite attending Greiner’s lectures, I never liked Greiner’s textbooks. That we students were more or less forced to buy them didn’t make them any more likable. So I scraped together my Deutsche Marks and bought Weinberg’s textbooks, which I loved for the concise mathematical approach.

I learned both general relativity and quantum field theory from Weinberg’s textbooks. I also later bought Weinberg’s lectures on Quantum Mechanics which appeared in 2013, but haven’t actually read them, except for section 3.7, where he concludes that:
“[T]oday there is no interpretation of quantum mechanics that does not have serious flaws, and [we] ought to take seriously the possibility of finding some more satisfactory other theory, to which quantum mechanics is merely a good approximation.”
It’s not much of a secret that I’m a fan of non-local hidden variables (aka superdeterminism), which I believe to be experimentally testable. To my huge frustration, however, I haven’t been able to find an experimental group willing and able to do that. I am therefore happy that Weinberg emphasizes the need to find a better theory, and to also look for experimental evidence. I don’t know what he thinks of superdeterminism. But superdeterminism or something else, I think probing quantum mechanics in new regimes is the best shot we presently have at making progress on the foundations of physics.

I therefore don’t understand the ridicule aimed at those who think that quantum mechanics needs an overhaul. Being unintuitive and feeling weird doesn’t make a theory wrong – we can all agree on this. We don’t even have to agree it’s unintuitive – I actually don’t think so. Intuition comes with use. Even if you can’t stomach the math, you can build your quantum intuition for example by playing “Quantum Moves,” a video game that crowd-sources players’ solutions for quantum mechanical optimization problems. Interestingly, humans do better than algorithms (at least for now).

[Weinberg (left), getting some
kind of prize or title. Don't
know for what. Image: CASW]
So, yeah, maybe quantum physics isn’t weird. And even if it is, being weird doesn’t make it wrong – so maybe you don’t think it’s a promising research avenue to pursue. Fine, then don’t. But before you make jokes about physicists who rely on their intuition, let us be clear that being ugly doesn’t make a theory wrong either. And yet it’s presently entirely acceptable to develop new theories with the only aim of prettifying the existing ones.

I don’t think for example that numerological coincidences are problems worth thinking about – they’re questions of aesthetic appeal. The mass of the Higgs is much smaller than the Planck mass. So what? The spatial curvature of the universe is almost zero, the cosmological constant tiny, and the electric dipole moment of the neutron is for all we know absent. Why should that bother me? If you think that’s a mathematical inconsistency, think again – it’s not. There’s no logical reason for why that shouldn’t be so. It’s just that to our human sense it doesn’t quite feel right.
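To put numbers on the first example:

$$ \frac{m_H}{M_{\rm Pl}} \approx \frac{125~{\rm GeV}}{1.2\times 10^{19}~{\rm GeV}} \approx 10^{-17}. $$

A very small dimensionless number, yes. But nothing in the math forbids it.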

A huge amount of work has gone into curing these “problems” because finetuned constants aren’t thought of as beautiful. But in my eyes the cures are all worse than the disease: Solutions usually require the introduction of additional fields and potentials for these fields and personally I think it’s much preferable to just have a constant – is there any axiom simpler than that?

The difference between the two research areas is that there are tens of thousands of theorists trying to make the fundamental laws of nature less ugly, but only a few hundred working on making them less weird. That in and of itself is reason to shift focus to quantum foundations, just because it’s the path less trodden and with more left to explore.

But maybe I’m just old beyond my years. So I’ll shut up now and go back to my calculations.

Monday, October 31, 2016

Modified Gravity vs Particle Dark Matter. The Plot Thickens.

They sit in caves, deep underground. Surrounded by lead, protected from noise, shielded from the warmth of the Sun, they wait. They wait for weakly interacting massive particles – WIMPs for short – the elusive stuff that many physicists believe makes up 80% of the matter in the universe. They have been waiting for 30 years, but the detectors haven’t caught a single WIMP.

Even though the sensitivity of dark matter detectors has improved by more than five orders of magnitude since the early 1980s, all results so far are compatible with zero events. The searches for axions, another popular dark matter candidate, haven’t fared any better. Coming generations of dark matter experiments will cross into the regime where the neutrino background becomes comparable to the expected signal. But, as a colleague recently pointed out to me, this merely means that the experimentalists have to understand the background better.

Maybe in 100 years they’ll still sit in caves, deep underground. And wait.

Meanwhile others are running out of patience. Particle dark matter is a great explanation for all the cosmological observations that general relativity sourced by normal matter cannot explain. But maybe it isn’t right after all. The alternative to keeping general relativity and adding particles is to modify general relativity so that space-time curves differently in response to the matter we already know.

Already in the early 1980s, Mordehai Milgrom showed that modifying gravity has the potential to explain observations commonly attributed to particle dark matter. He proposed Modified Newtonian Dynamics – MOND for short – to explain the galactic rotation curves instead of adding particle dark matter. Intriguingly, MOND, despite having only one free parameter, fits a large number of galaxies. It doesn’t work well for galaxy clusters, but its success with galaxies clearly shows that many galaxies are similar in very distinct ways, ways that the concordance model (also known as LambdaCDM) hasn’t been able to account for.
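The one free parameter is an acceleration scale, a₀ ≈ 1.2×10⁻¹⁰ m/s². Roughly speaking, MOND interpolates between the Newtonian acceleration g_N at high accelerations and a modified regime at low ones,

$$ g \approx g_N \quad (g_N \gg a_0), \qquad g \approx \sqrt{g_N\, a_0} \quad (g_N \ll a_0). $$

For a star on a circular orbit far outside the visible mass M of a galaxy this gives

$$ \frac{v^2}{r} = \frac{\sqrt{G M a_0}}{r} \quad\Rightarrow\quad v^4 = G M a_0, $$

independent of the radius r: the rotation curve flattens, and the fourth power of the velocity tracks the baryonic mass – the Tully-Fisher relation.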

In its simplest form the concordance model has sources which are collectively described as homogeneous throughout the universe – an approximation known as the cosmological principle. In this form, the concordance model doesn’t predict how galaxies rotate – it merely describes the dynamics on supergalactic scales.

To get galaxies right, physicists have to also take into account astrophysical processes within the galaxies: how stars form, which stars form, where they form, how they interact with the gas, how long they live, when and how they go supernova, what magnetic fields permeate the galaxies, how the fields affect the intergalactic medium, and so on. It’s a mess, and it requires intricate numerical simulations to figure out just how galaxies come to look the way they do.

And so, physicists today are divided in two camps. In the larger camp are those who think that the observed galactic regularities will eventually be accounted for by the concordance model. It’s just that it’s a complicated question that needs to be answered with numerical simulations, and the current simulations aren’t good enough. In the smaller camp are those who think there’s no way these regularities will be accounted for by the concordance model, and modified gravity is the way to go.

In a recent paper, McGaugh et al reported a correlation among the rotation curves of 153 observed galaxies. They plotted the gravitational pull from the visible matter in the galaxies (gbar) against the gravitational pull inferred from the observations (gobs), and found that the two are closely related.
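The correlation is captured by a fitting function with a single parameter which – if I read the paper correctly – takes the form

$$ g_{\rm obs} = \frac{g_{\rm bar}}{1 - e^{-\sqrt{g_{\rm bar}/g_\dagger}}}, \qquad g_\dagger \approx 1.2\times 10^{-10}~{\rm m/s}^2, $$

reducing to g_obs ≈ g_bar at high accelerations and to g_obs ≈ √(g_bar g†) at low ones.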

Figure from arXiv:1609.05917 [astro-ph.GA] 

This correlation – the mass-discrepancy-acceleration relation (MDAR) – is, so they emphasize, not itself new; it’s just a new way to present previously known correlations. As they write in the paper:
“[This Figure] combines and generalizes four well-established properties of rotating galaxies: flat rotation curves in the outer parts of spiral galaxies; the “conspiracy” that spiral rotation curves show no indication of the transition from the baryon-dominated inner regions to the outer parts that are dark matter-dominated in the standard model; the Tully-Fisher relation between the outer velocity and the inner stellar mass, later generalized to the stellar plus atomic hydrogen mass; and the relation between the central surface brightness of galaxies and their inner rotation curve gradient.”
But this was only act 1.

In act 2, another group of researchers responds to the McGaugh et al paper. They present results of a numerical simulation for galaxy formation and claim that particle dark matter can account for the MDAR. The end of MOND, so they think, is near.

Figure from arXiv:1610.06183 [astro-ph.GA]

McGaugh, hero of act 1, points out that the sample size for this simulation is tiny and also pre-selected to reproduce galaxies like those we observe. Hence, he thinks the results are inconclusive.

In act 3, Mordehai Milgrom – the original inventor of MOND – posts a comment on the arXiv. He also complains about the sample size of the numerical simulation and further explains that there is much more to MOND than the MDAR correlation. Numerical simulations with particle dark matter have been developed to fit observations, he writes, so it’s not surprising they now fit observations.

“The simulation in question attempt to treat very complicated, haphazard, and unknowable events and processes taking place during the formation and evolution histories of these galaxies. The crucial baryonic processes, in particular, are impossible to tackle by actual, true-to-nature, simulation. So they are represented in the simulations by various effective prescriptions, which have many controls and parameters, and which leave much freedom to adjust the outcome of these simulations [...]

The exact strategies involved are practically impossible to pinpoint by an outsider, and they probably differ among simulations. But, one will not be amiss to suppose that over the years, the many available handles have been turned so as to get galaxies as close as possible to observed ones.”
In act 4, another paper with results of a numerical simulation for galaxy structures with particle dark matter appears.

This one uses a code with the acronym EAGLE, for Evolution and Assembly of GaLaxies and their Environments. This code has “quite a few” parameters, as Aaron Ludlow, the paper’s first author, told me, and these parameters have been optimized to reproduce realistic galaxies. In this simulation, however, the authors didn’t use the optimized parameter configuration but let several parameters (3-4) vary to produce a larger set of galaxies. These galaxies in general do not look like those we observe. Nevertheless, the researchers find that all of them display the MDAR correlation.

This would indicate that particle dark matter is enough to describe the observations.


Figure from arXiv:1610.07663 [astro-ph.GA] 


However, even when varying some parameters, the EAGLE code still contains parameters that have been fixed previously to reproduce observations. Ludlow calls them “subgrid parameters,” meaning they quantify physics on scales smaller than what the simulation can presently resolve. One sees for example in Figure 1 of their paper (shown below) that all those galaxies already have a pronounced correlation between the velocities of the outer stars (Vmax) and the stellar mass (M*).
Figure from arXiv:1610.07663 [astro-ph.GA]
Note that the plotted quantities are correlated in all data sets,
though the off-sets differ somewhat.

One shouldn’t hold this against the model. Such numerical simulations are done for the purpose of generating and understanding realistic galaxies. Runs are time-consuming and costly. From the point of view of an astrophysicist, the question of just how unrealistic galaxies can get in these simulations is entirely nonsensical. And yet that’s exactly what the modified-gravity/dark-matter showdown now asks for.

In act 5, John Moffat shows that his modified gravity (MOG) – a general relativistic theory that reproduces MOND-like behavior – also reproduces the MDAR correlation, but predicts a distinct deviation for the outermost stars of galaxies.

Figure from arXiv:1610.06909 [astro-ph.GA] 
The green curve is the prediction from modified gravity.


The crucial question here is, I think, which correlations are independent of each other. I don’t know. But I’m sure there will be further acts in this drama.

Sunday, October 23, 2016

The concordance model strikes back

Two weeks ago, I summarized a recent paper by McGaugh et al who reported a correlation in galactic structures. The researchers studied a data-set with the rotation curves of 153 galaxies and showed that the gravitational acceleration inferred from the rotational velocity (including dark matter), gobs, is strongly correlated to the gravitational acceleration from the normal matter (stars and gas), gbar.

Figure from arXiv:1609.05917 [astro-ph.GA] 

This isn’t actually new data or a new correlation, but a new way to look at correlations in previously available data.

The authors of the paper were very careful not to jump to conclusions from their results, but merely stated that this correlation requires some explanation. That galactic rotation curves have surprising regularities, however, has been evidence in favor of modified gravity for two decades, so the implication was clear: Here is something that the concordance model might have trouble explaining.

As I remarked in my previous blogpost, while the correlation does seem to be strong, it would be good to see the results of a simulation with the concordance model that describes dark matter, as usual, as a pressureless, cold fluid. In this case too one would expect there to be some relation. Normal matter forms galaxies in the gravitational potentials previously created by dark matter, so the two components should have some correlation with each other. The question is how much.

Just the other day, a new paper appeared on the arXiv that looked at exactly this. The authors of the new paper analyzed the results of a specific numerical simulation within the concordance model. And they find that the correlation in this simulated sample is actually stronger than the observed one!

Figure from arXiv:1610.06183 [astro-ph.GA]


Moreover, they demonstrate that in the concordance model the slope of the best-fit curve should depend on the galaxies’ redshift (z), ie the age of the galaxy. This would be a way to test which explanation is correct.

Figure from arXiv:1610.06183 [astro-ph.GA]

I am not familiar with the specific numerical code that the authors use and hence I am not sure what to make of this. It’s been known for a long time that the concordance model has difficulties getting structures of galactic size right, especially galactic cores, and so it isn’t clear to me just how many parameters this model needs to work right. If the parameters were previously chosen so as to match observations already, then this result is hardly surprising.

McGaugh, one of the authors of the first paper, has already offered some comments (ht Yves). He notes that the sample size of the galaxies in the simulation is small, which might at least partly account for the small scatter. He is also skeptical of the results: “It is true that a single model does something like this as a result of dissipative collapse. It is not true that an ensemble of such models are guaranteed to fall on the same relation.”

I am somewhat puzzled by this result because, as I mentioned above, the correlation in the McGaugh paper is based on previously known correlations, such as the brightness-velocity relation which, to my knowledge, hadn’t been explained by the concordance model. So I would find it surprising should the results of the new paper hold up. I’m sure we’ll hear more about this in the near future.

Wednesday, October 19, 2016

Dear Dr B: Where does dark energy come from and what’s it made of?

“As the universe expands and dark energy remains constant (negative pressure) then where does the ever increasing amount of dark energy come from? Is this genuinely creating something from nothing (bit of lay man’s hype here), do conservation laws not apply? Puzzled over this for ages now.”
-- pete best
“When speaking of the Einstein equation, is it the case that the contribution of dark matter is always included in the stress energy tensor (source term) and that dark energy is included in the cosmological constant term? If so, is this the main reason to distinguish between these two forms of ‘darkness’? I ask because I don’t normally read about dark energy being ‘composed of particles’ in the way dark matter is discussed phenomenologically.”
-- CGT

Dear Pete, CGT:

Dark energy is often portrayed as very mysterious. But when you look at the math, it’s really the simplest aspect of general relativity.

Before I start, allow me to clarify that your questions refer to “dark energy” but are specifically about the cosmological constant, which is a certain type of dark energy. For all we know, the cosmological constant fits all existing observations. Dark energy could be more complicated than that, but let’s start with the cosmological constant.

Einstein’s field equations can be derived from very few assumptions. First, there’s the equivalence principle, which can be formulated mathematically as the requirement that the equations be tensor-equations. Second, the equations should describe the curvature of space-time. Third, the source of gravity is the stress-energy tensor and it’s locally conserved.

If you write down the simplest equations which fulfill these criteria you get Einstein’s field equations with two free constants. One constant can be fixed by deriving the Newtonian limit and it turns out to be Newton’s constant, G. The other constant is the cosmological constant, usually denoted Λ. You can make the equations more complicated by adding higher order terms, but at low energies these two constants are the only relevant ones.
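Written out, with both constants in place, the field equations read

$$ R_{\mu\nu} - \frac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}. $$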
If the cosmological constant is not zero, then flat space-time is no longer a solution of the equations. If, in particular, the constant is positive, space will undergo accelerated expansion provided there are no other matter sources, or these are negligible in comparison to Λ. Our universe presently seems to be in a phase that is dominated by a positive cosmological constant – that’s the easiest way to explain the observations which were awarded the 2011 Nobel Prize in physics.

Things get difficult if one tries to find an interpretation of the rather unambiguous mathematics. You can for example take the term with the cosmological constant and not think of it as geometrical, but instead move it to the other side of the equation and think of it as some stuff that causes curvature. If you do that, you might be tempted to read the entries of the cosmological constant term as if it was a kind of fluid. It would then correspond to a fluid with constant density and with constant, negative pressure. That’s something one can write down. But does this interpretation make any sense? I don’t know. There isn’t any known fluid with such behavior.
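Concretely – and this is standard textbook material, not my invention – moving the term to the right-hand side and comparing with the stress-energy tensor of a perfect fluid gives (in units with c=1)

$$ T^{(\Lambda)}_{\mu\nu} = -\frac{\Lambda}{8\pi G}\, g_{\mu\nu} \quad\Rightarrow\quad \rho_\Lambda = \frac{\Lambda}{8\pi G}, \qquad p_\Lambda = -\rho_\Lambda. $$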

Since the cosmological constant is also present if matter sources are absent, it can be interpreted as the energy-density and pressure of the vacuum. Indeed, one can calculate such a term in quantum field theory, just that the result is infamously 120 orders of magnitude too large. But that’s a different story and shall be told another time. The cosmological constant term is therefore often referred to as the “vacuum energy,” but that’s sloppy. It’s an energy-density, not an energy, and that’s an important difference.

How can it possibly be that an energy density remains constant as the universe expands, you ask. Doesn’t this mean you need to create more energy from somewhere? No, you don’t need to create anything. This is a confusion which comes about because you interpret the density which has been assigned to the cosmological constant like a density of matter, but that’s not what it is. If it was some kind of stuff we know, then, yes, you would expect the density to dilute as space expands. But the cosmological constant is a property of space-time itself. As space expands, there’s more space, and that space still has the same vacuum energy density – it’s constant!

The cosmological constant term is indeed conserved in general relativity, and it’s conserved separately from that of the other energy and matter sources. It’s just that conservation of stress-energy in general relativity works differently than you might be used to from flat space.

According to Noether’s theorem there’s a conserved quantity for every (continuous) symmetry. A flat space-time is the same at every place and at every moment of time. We say it has a translational invariance in space and time. These are symmetries, and they come with conserved quantities: Translational invariance of space conserves momentum, translational invariance in time conserves energy.

In a curved space-time generically neither symmetry is fulfilled, hence neither energy nor momentum are conserved. So, if you take the vacuum energy density and you integrate it over some volume to get an energy, then the total energy grows with the volume indeed. It’s just not conserved. How strange! But that makes perfect sense: It’s not conserved because space expands and hence we have no invariance in time. Consequently, there’s no conserved quantity for invariance in time.

But General Relativity has a more complicated type of symmetry to which Noether’s theorem can be applied. This gives rise to a local conservation law for the stress-energy when coupled to gravity (the stress-energy tensor is covariantly conserved).

The conservation law for the density of a pressureless fluid, for example, works as you expect it to work: As space expands, the density goes down with the volume. For radiation – which has pressure – the energy density falls faster than that of matter because wavelengths also redshift. And if you put the cosmological constant term with its negative pressure into the conservation law, both the energy density and the pressure remain the same. It’s all consistent: They are conserved if they are constant.
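All three cases follow from one and the same equation. For a homogeneous universe with scale factor a(t) and Hubble rate H = ȧ/a, the covariant conservation law reduces to

$$ \dot\rho + 3 H \left( \rho + p \right) = 0, $$

so pressureless matter (p = 0) dilutes as ρ ∝ a⁻³, radiation (p = ρ/3) as ρ ∝ a⁻⁴, and for the cosmological constant (p = −ρ) the bracket vanishes and ρ stays constant.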

Dark energy now is a generalization of the cosmological constant, in which one invents some fields which give rise to a similar term. There are various fields that theoretical physicists have played with: chameleon fields and phantom fields and quintessence and such. The difference to the cosmological constant is that these fields’ densities do change with time, albeit slowly. There is however presently no evidence that this is the case.

As to the question of which dark stuff to include in which term: Dark matter is usually assumed to be pressureless, which means that as far as its gravitational pull is concerned it behaves just like normal matter. Dark energy, in contrast, has negative pressure and does odd things. That’s why they are usually collected in different terms.

Why don’t you normally read about dark energy being made of particles? Because you need some really strange stuff to get something that behaves like dark energy. You can’t make it out of any kind of particle that we know – this would either give you a matter term or a radiation term, neither of which does what dark energy needs to do.

If dark energy was some kind of field, or some kind of condensate, then it would be made of something else. In that case its density might indeed also vary from one place to the next and we might be able to detect the presence of that field in some way. Again though, there isn’t presently any evidence for that.

Thanks for your interesting questions!

Wednesday, October 12, 2016

What if dark matter is not a particle? The second wind of modified gravity.

Another year has passed and Vera Rubin was not awarded the Nobel Prize. She’s 88 and the prize can’t be awarded posthumously, so I can’t shake the impression the Royal Academy is waiting for her to die while they work off a backlog of condensed-matter breakthroughs.

Sure, nobody knows whether galaxies actually contain the weakly interacting and non-luminous particles we have come to call dark matter. And Fritz Zwicky was the first to notice that galaxies in a cluster moved faster than the visible mass alone could account for – and the one to coin the term dark matter. But it was Rubin who pinned down the evidence that galaxies systematically misbehave, by showing that the rotational velocities of spiral galaxies don’t fall off with distance from the galactic center as expected – as if there was unseen extra mass in the galaxies. And Zwicky is dead anyway, so the Nobel committee doesn’t have to worry about him.

After Rubin’s discovery, many other observations confirmed that we were missing matter, and not only a little bit, but 80% of all matter in the universe. It’s there, but it’s not some stuff that we know. The fluctuations in the cosmic microwave background, gravitational lensing, the formation of large-scale structures in the universe – none of these would fit with the predictions of general relativity if there wasn’t additional matter to curve space-time. And if you go through all the particles in the standard model, none of them fits the bill. They’re either too light or too heavy or too strongly interacting or too unstable.

But once physicists had the standard model, every problem began to look like a particle, and so, beginning in the mid-1980s, dozens of experiments started to search for dark matter particles. So far, they haven’t found anything. No WIMPs, no axions, no wimpzillas, no neutralinos, no sterile neutrinos, nor any of the other things that would be good candidates for the missing matter.

This might not mean much. It might mean merely that the dark matter particles are even more weakly interacting than expected. It might mean that the particle types we’ve dealt with so far were too simple. Or maybe it means dark matter isn’t made of particles.

It’s an old idea, though one that never rose to popularity, that rather than adding new sources for gravity we could instead keep the known sources but modify the way they gravitate. And the more time passes without a dark matter particle caught in a detector, the more appealing this alternative starts to become. Maybe gravity doesn’t work the way Einstein taught us.

Modified gravity had an unfortunate start because its best known variant – Modified Newtonian Dynamics or MOND – is extremely unappealing from a theoretical point of view. It’s in contradiction with general relativity and that makes it a non-starter for most theorists. Meanwhile, however, there are variants of modified gravity which are compatible with general relativity.

The benefit of modifying gravity is that it offers an explanation for observations that particle dark matter has nothing to say about: Many galaxies show regularities in the way their stars’ motion is affected by dark matter. Clouds of dark particles collecting in halos around galaxies can be flexibly adapted to match the observations of any individual galaxy. But the particles are so flexible that it’s difficult to see why they should reproduce the same regularities in galaxy after galaxy.

The best known of these regularities is the Tully-Fisher relation, a correlation between the luminosity of a galaxy and the velocity of its outermost stars. Nobody has succeeded in explaining this with particle dark matter, but modified gravity can explain it.
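
To sketch why modified gravity gets this right, take MOND as the simplest example: below the acceleration scale a₀, the effective gravitational acceleration is the geometric mean of a₀ and the Newtonian value g_N = GM/r², rather than g_N itself. Setting this equal to the centripetal acceleration of a star on a circular orbit gives

    \frac{v^2}{r} = \sqrt{\frac{GMa_0}{r^2}} \quad\Rightarrow\quad v^4 = GMa_0

The radius drops out: the rotation curve is flat, and v⁴ is proportional to the (baryonic) mass – which, with luminosity tracing mass, is the Tully-Fisher relation. Here a₀ ≈ 1.2 × 10⁻¹⁰ m/s² is the one free parameter.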

In a recent paper, a group of researchers from the United States offers a neat new way to quantify these regularities. They compare the gravitational acceleration that, according to observation, must be acting on stars in galaxies (g_obs) with the gravitational acceleration due to the observed stars and gas, ie baryonic matter (g_bar). As expected, the observed gravitational acceleration is much larger than what the visible mass would lead one to expect. The two accelerations are, however, also strongly correlated with each other (see figure below). It’s difficult to see how particle dark matter could cause this. (Though I would like to see how this plot looks for a ΛCDM simulation. I would still expect some correlation and would prefer not to judge its strength by gut feeling.)
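
If I recall the paper correctly, they summarize the correlation with a fitting function containing a single parameter, an acceleration scale g† ≈ 1.2 × 10⁻¹⁰ m/s²:

    g_{\rm obs} = \frac{g_{\rm bar}}{1 - e^{-\sqrt{g_{\rm bar}/g_\dagger}}}

For g_bar ≫ g† this reduces to g_obs ≈ g_bar (plain Newtonian gravity), while for g_bar ≪ g† one gets g_obs ≈ √(g_bar g†) – the same limiting behavior as in the MOND estimate above.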

Figure from arXiv:1609.05917 [astro-ph.GA] 


This isn’t so much new evidence as an improved way to quantify existing evidence for regularities in spiral galaxies. Lee Smolin, always quick on his feet, thinks he can explain this correlation with quantum gravity. I don’t quite share his optimism, but it’s arguably intriguing.

Modified gravity, however, has its shortcomings. While it seems to work reasonably well on the level of galaxies, it’s hard to make it work for galaxy clusters too. Observations of the Bullet cluster (image below), for example, seem to show that the visible mass can be at a different place than the gravitating mass. That’s straightforward to explain with particle dark matter but difficult to make sense of with modified gravity.

The bullet cluster.
In red: estimated distribution of baryonic mass.
In blue: estimated distribution of gravitating mass, extracted from gravitational lensing.
Source: APOD.

The explanation I presently find most appealing is that dark matter is a type of particle whose dynamical equations sometimes mimic those of modified gravity. This option, pursued, among others, by Stefano Liberati and Justin Khoury, combines the benefits of both approaches without the disadvantages of either. There is, however, a lot of data in cosmology, and it will take a long time to find out whether this idea can fit the observations as well as – or better than – particle dark matter.

But regardless of what dark matter turns out to be, Rubin’s observations have given rise to one of the most active research areas in physics today. I hope that the Royal Academy eventually wakes up and honors her achievement.

Wednesday, October 05, 2016

Demystifying Spin 1/2

Theoretical physics is the most math-heavy of disciplines. We don’t use all that math because we like to be intimidating, but because it’s the most useful and accurate description of nature we know.

I am often asked to please explain this or that mathematical description in layman terms – and I try to do my best. But truth is, it’s not possible. The mathematical description is the explanation. The best I can do is to summarize the conclusions we have drawn from all that math. And this is pretty much how popular science accounts of theoretical physics work: By summarizing the consequences of lots of math.

This, however, makes science communication in theoretical physics a victim of its own success. If readers come away thinking they can follow a verbal argument, they’re left to wonder why physicists use all that math to begin with. Sometimes I therefore wish articles reporting on recent progress in theoretical physics would on occasion carry an asterisk that notes “It takes several years of lectures to understand how B follows from A.”

One of the best examples of the power of math in theoretical physics – if not the best example – are spin 1/2 particles. They are usually introduced as particles that have to be rotated twice to return to their initial state. I don’t know if anybody who didn’t already know the math has ever been able to make sense of this explanation – certainly not me when I was a teenager.

But this isn’t the only thing you’ll stumble across if you don’t know the math. Your first question may be: Why is there spin 1/2 to begin with?

Well, one answer to this is that we need spin 1/2 particles to describe observations. Such particles are fermionic and therefore won’t occupy the same quantum state. (It takes several years of lectures to understand how B follows from A.) This is why for example electrons – which have spin 1/2 – sit in shells around the atomic nucleus rather than clumping together.

But a better answer is “Why not?” (Why not?, it turns out, is also a good answer to most why-questions that Kindergartners come up with.)

Mathematics allows you to classify everything a quantum state can do under rotations. If you do that, you not only find particles that return to their initial state after 1, 1/2, 1/3, and so on of a rotation – corresponding to spin 1, 2, 3, etc – you also find particles that return to their initial state only after 2, 2/3, 2/5, and so on of a rotation – corresponding to spin 1/2, 3/2, 5/2, etc. The spin, generally, is the inverse of the fraction of a rotation necessary to return the particle to itself. The one exception is spin 0, which doesn’t change at all.
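
In formulas, the classification reads: under a rotation by angle θ, the components of a spin-s state pick up phases

    e^{im\theta}, \qquad m \in \{-s, -s+1, \dots, s\}

For integer s, all phases return to 1 at θ = 2π. For half-integer s, the phases at θ = 2π are e^{iπ·(odd number)} = −1, and you need θ = 4π – two full rotations – to get back to the start.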

So the math tells you that spin 1/2 is a thing, and it’s there in our theories already. It would be stranger if nature didn’t make use of it.

But how come the math gives rise to such strange and non-intuitive particle behavior? It comes from the way that rotations (or symmetry transformations more generally) act on quantum states, which is different from how they act on non-quantum states. A symmetry transformation acting on a quantum state must be described by a unitary transformation – a transformation which, most importantly, ensures that probabilities always add up to one. And the full set of all symmetry transformations must be described by a “unitary representation” of the group.
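
Concretely, unitary means U†U = 1, which is exactly the requirement that the total probability is preserved:

    \langle U\psi \,|\, U\psi \rangle = \langle \psi \,|\, U^\dagger U \,|\, \psi \rangle = \langle \psi \,|\, \psi \rangle = 1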

Symmetry groups, however, can be difficult to handle, and so physicists prefer to instead work with the algebra associated with the group. The algebra can be used to build up the group, much like you can build up a grid from right-left steps and forwards-backwards steps, repeated sufficiently often. But here’s where things get interesting: If you use the algebra of the rotation group to describe how particles transform, you don’t get back merely the rotation group. Instead you get what’s called a “double cover” of the rotation group. It means – guess! – you have to turn the state around twice to get back to the initial state.
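
If you want to see the double cover at work, here is a minimal numerical sketch (plain Python with numpy; the choice of the z-axis is arbitrary). The same rotation angle is applied once to an ordinary vector and once to a spin-1/2 state:

    import numpy as np

    def vector_rotation(theta):
        # Rotation about the z-axis acting on an ordinary 3-vector (SO(3))
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def spinor_rotation(theta):
        # The same rotation in the spin-1/2 representation (SU(2)):
        # exp(-i theta sigma_z / 2) = diag(exp(-i theta/2), exp(+i theta/2))
        return np.diag([np.exp(-0.5j * theta), np.exp(0.5j * theta)])

    vector = np.array([1.0, 0.0, 0.0])
    spinor = np.array([1.0, 0.0])  # a "spin up" state

    # One full turn: the vector is back, the spinor has picked up a minus sign
    print(vector_rotation(2 * np.pi) @ vector)   # -> [1, 0, 0] (up to rounding)
    print(spinor_rotation(2 * np.pi) @ spinor)   # -> [-1, 0]

    # Two full turns: now the spinor is back too
    print(spinor_rotation(4 * np.pi) @ spinor)   # -> [1, 0]

The overall minus sign isn’t observable for a single particle, but it becomes measurable in interference experiments – which have indeed confirmed it for neutrons.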

I’ve been racking my brain trying to find a good metaphor for “double-cover” to use in the-damned-book I’m writing. Last year, I came across the perfect illustration in real life when we took the kids to a Christmas market. Here it is:



I made a sketch of this for my book:



The little trolley has to make two full rotations to get back to the starting point. And that’s pretty much how the double-cover of the rotation group gives rise to particles with spin 1/2. Though you might have to wrap your head around it twice to understand how it works.

I later decided not to use this illustration in favor of one easier to generalize to higher spin. But you’ll have to buy the-damned-book to see how this works :p

Tuesday, September 27, 2016

Dear Dr B: What do physicists mean by “quantum gravity”?

[Image Source: giphy.com]
“please could you give me a simple definition of "quantum gravity"?

J.”

Dear J,

By “quantum gravity,” physicists refer not so much to a specific theory as to the sought-after solution to various problems in the established theories. The most pressing problem is that the standard model combined with general relativity is internally inconsistent. If we just use both as they are, we arrive at conclusions which do not agree with each other. So just throwing them together doesn’t work. Something else is needed, and that something else is what we call quantum gravity.

Unfortunately, the effects of quantum gravity are very small, so presently we have no observations to guide theory development. In all experiments made so far, it’s sufficient to use unquantized gravity.

Nobody knows how to combine a quantum theory – like the standard model – with a non-quantum theory – like general relativity – without running into difficulties (except for me, but nobody listens). Therefore the main strategy has become to find a way to give quantum properties to gravity. Or, since Einstein taught us gravity is nothing but the curvature of space-time, to give quantum properties to space and time.

Just combining quantum field theory with general relativity doesn’t work because, as confirmed by countless experiments, all the particles we know have quantum properties. This means (among many other things) they are subject to Heisenberg’s uncertainty principle and can be in quantum superpositions. But they also carry energy and hence should create a gravitational field. In general relativity, however, the gravitational field can’t be in a quantum superposition, so it can’t be directly attached to the particles, as it should be.

One can try to find a solution to this conundrum, for example by not directly coupling the energy (and related quantities like mass, pressure, momentum flux and so on) to gravity, but instead only coupling the average value, which behaves more like a classical field. This solves one problem, but creates a new one. The average value of a quantum state must be updated upon measurement. This measurement postulate is a non-local prescription and general relativity can’t deal with it – after all Einstein invented general relativity to get rid of the non-locality of Newtonian gravity. (Neither decoherence nor many worlds remove the problem, you still have to update the probabilities, somehow, somewhere.)
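
Written out, the equation in question – often called the semiclassical Einstein equation – couples the classical curvature not to the operator but to its expectation value (a sketch, in units with c=1):

    G_{\mu\nu} = 8\pi G \, \langle \hat{T}_{\mu\nu} \rangle

The left side is a definite classical field; the right side jumps whenever a measurement updates the quantum state – and that’s where the trouble starts.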

The quantum field theories of the standard model and general relativity clash in other ways too. If we try to understand the evaporation of black holes, for example, we run into another inconsistency: Black holes emit Hawking radiation due to quantum effects of the matter fields. This radiation doesn’t carry information about what formed the black hole. And so, if the black hole entirely evaporates, this results in an irreversible process, because from the end-state one can’t infer the initial state. This evaporation, however, can’t be accommodated in a quantum theory, where all processes can be time-reversed – it’s another contradiction that we hope quantum gravity will resolve.
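
To see how little the radiation remembers: the Hawking temperature of a black hole of mass M is

    T_{\rm H} = \frac{\hbar c^3}{8\pi G M k_B}

It depends only on the mass (plus angular momentum and charge for more general black holes). A thermal spectrum characterized by a handful of numbers simply has no room for the details of whatever collapsed.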

Then there is the problem with the singularities in general relativity. Singularities, where the space-time curvature becomes infinitely large, are not mathematical inconsistencies. But they are believed to be physical nonsense. Using dimensional analysis, one can estimate that the effects of quantum gravity should become large close to the singularities. And so we think that quantum gravity should replace the singularities with a better-behaved quantum space-time.
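
The dimensional analysis is quickly done: the only length you can construct from ħ, G, and c is the Planck length,

    \ell_{\rm Pl} = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\,\mathrm{m}

and quantum gravitational effects should become strong once the curvature radius of space-time approaches ℓ_Pl – which it does near a singularity.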

The sought-after theory of quantum gravity is expected to solve these three problems: tell us how to couple quantum matter to gravity, explain what happens to information that falls into a black hole, and avoid singularities in general relativity. Any theory which achieves this we’d call quantum gravity, whether or not you actually get it by quantizing gravity.

Physicists are presently pursuing various approaches to a theory of quantum gravity, notably string theory, loop quantum gravity, asymptotically safe gravity, and causal dynamical triangulation, just to name the most popular ones. But none of these approaches has experimental evidence speaking for it. Indeed, so far none of them has made a testable prediction.

This is why, in the area of quantum gravity phenomenology, we’re bridging the gap between theory and experiment with simplified models, some of which are motivated by specific approaches (hence: string phenomenology, loop quantum cosmology, and so on). These phenomenological models don’t aim to directly solve the above mentioned problems. They merely provide a mathematical framework – consistent in its range of applicability – to quantify and hence test the presence of effects that could be signals of quantum gravity, for example space-time fluctuations, violations of the equivalence principle, deviations from general relativity, and so on.

Thanks for an interesting question!

Wednesday, September 21, 2016

We understand gravity just fine, thank you.

Yesterday I came across a Q&A on the website of Discover magazine, titled “The Root of Gravity - Does recent research bring us any closer to understanding it?” Jeff Lepler from Michigan has the following question:
“Q: Are we any closer to understanding the root cause of gravity between objects with mass? Can we use our newly discovered knowledge of the Higgs boson or gravitational waves to perhaps negate mass or create/negate gravity?”
A person by the name of Bill Andrews (unknown to me) gives the following answer:
“A: Sorry, Jeff, but scientists still don’t really know why gravity works. In a way, they’ve just barely figured out how it works.”
The answer continues, but let’s stop right there, where the nonsense begins. What does it even mean that scientists don’t know “why” gravity works? And did the Bill person really think he could get away with swapping “why” for a “how” and that nobody would notice?

The purpose of science is to explain observations. We have a theory by the name of General Relativity that explains literally all data on gravitational effects. Indeed, that General Relativity is so dramatically successful is a great frustration for all those people who would like to revolutionize science a la Einstein. So in which sense, please, do scientists barely know how it works?

For all we can presently tell, gravity is a fundamental force, which means we have no evidence for an underlying theory from which gravity could be derived. Sure, theoretical physicists are investigating whether there is such an underlying theory that would give rise to gravity as well as the other interactions, a “theory of everything.” (Please submit nomenclature complaints to your local language police, not to me.) Would such a theory of everything explain “why” gravity works? No, because that’s not a meaningful scientific question. A theory of everything could potentially explain how gravity arises from more fundamental principles, similar to how, say, the ideal gas law arises from the statistical properties of many atoms in motion. But that still wouldn’t explain why there should be something like gravity, or anything, in the first place.

Either way, even if gravity arises within a larger framework like, say, string theory, the effects of what we call gravity today would still come about because energy densities (and related quantities like pressure and momentum flux and so on) curve space-time, and fields move in that space-time. Just that these quantities might no longer be fundamental. We’ve known how this works for 101 years.

After a few words on Newtonian gravity, the answer continues:
“Because the other forces use “force carrier particles” to impart the force onto other particles, for gravity to fit the model, all matter must emit gravitons, which physically embody gravity. Note, however, that gravitons are still theoretical. Trying to reconcile these different interpretations of gravity, and understand its true nature, are among the biggest unsolved problems of physics.”
Reconciling which different interpretations of gravity? These are all the same “interpretation.” It is correct that we don’t know how to quantize gravity so that the resulting theory remains viable also when gravity becomes strong. It’s also correct that the force-carrying particle associated with the quantization – the graviton – hasn’t been detected. But the question was about gravity, not quantum gravity. Reconciling the graviton with unquantized gravity is straightforward – it’s called perturbative quantum gravity – and exactly the reason most theoretical physicists are convinced the graviton exists. It’s just that this reconciliation breaks down when gravity becomes strong, which means it’s only an approximation.
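
In case you want to see it: perturbative quantum gravity expands the metric around flat space and quantizes the fluctuation, which is the graviton (in one common convention, units with c=1; the prefactor depends on conventions):

    g_{\mu\nu} = \eta_{\mu\nu} + \sqrt{32\pi G}\, h_{\mu\nu}

The expansion is only useful as long as the fluctuation h is small – ie as long as gravity is weak – which is the formal way of saying that the reconciliation breaks down when gravity becomes strong.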
“But, alas, what we do know does suggest antigravity is impossible.”
That’s correct on a superficial level, but it depends on what you mean by antigravity. If you mean that you could make any of the matter which surrounds us “fall up,” then yes, that’s impossible. But there are modifications of general relativity that have effects one can plausibly call anti-gravitational. That’s a longer story though, and shall be told another time.

A sensible answer to this question would have been:
“Dear Jeff,

The recent detection of gravitational waves has been another confirmation of Einstein’s theory of General Relativity, which still explains all the gravitational effects that physicists know of. According to General Relativity the root cause of gravity is that all types of energy curve space-time and all matter moves in this curved space-time. Near planets, such as our own, this can be approximated to good accuracy by Newtonian gravity.

There isn’t presently any observation which suggests that gravity itself emerges from another theory, though it is certainly a speculation that many theoretical physicists have pursued. There thus isn’t any deeper root for gravity, because it’s presently part of the foundations of physics. The foundations are the roots of everything else.

The discovery of the Higgs boson doesn’t tell us anything about the gravitational interaction. The Higgs boson is merely there to make sure particles have mass in addition to energy, but gravity works the same either way. The detection of gravitational waves is exciting because it allows us to learn a lot about the astrophysical sources of these waves. But the waves themselves have proved to be as expected from General Relativity, so from the perspective of fundamental physics they didn’t bring news.

Within the incredibly well confirmed framework of General Relativity, you cannot negate mass or its gravitational pull. 
You might also enjoy hearing what Richard Feynman had to say when he was asked a similar question about the origin of the magnetic force:


This answer really annoyed me because it’s a lost opportunity to explain how well physicists understand the fundamental laws of nature.