Tuesday, March 20, 2018

Hawking’s “Final Theory” is not groundbreaking

Yesterday, the media buzzed with the revelation that Stephen Hawking had completed a paper two weeks before his death. This paper supposedly contains some breathtaking insight.

The headlines refer to a paper titled “A Smooth Exit from Eternal Inflation,” written in collaboration with Thomas Hertog. The paper was originally uploaded to the arXiv in July last year, but it was updated two weeks ago. It is supposedly under review with “a leading journal,” which I suspect, but do not know, to be Physical Review D. Thomas Hertog gave a talk about this at the conference which I attended last summer. You can watch the video of Hertog’s talk here.

According to The Independent the paper contains “a theory explaining how we might detect parallel universes and a prediction for the end of the world.” Furthermore, we learn, “Hawking also theorised in his final work that scientists could find alternate universes using probes on space ships, allowing humans to form an even better understanding of our own universe, what else is out there and our place in the cosmos.”

In the Sunday Times you can read that the paper “shows how we might find other universes,” and in The Telegraph you find a quote by Carlos Frenk, professor of cosmology at Durham University, who said: “The intriguing idea in Hawking’s paper is that [the multiverse] left its imprint on the background radiation permeating our universe and we could measure it with a detector on a spaceship.”

Since the paper doesn’t say anything about detecting parallel universes, I was originally confused whether the headlines were referring to another paper. But no, Thomas Hertog confirmed to me that the paper in question is indeed the paper that is on the arXiv. There is no other paper.

So what does the paper say?

The paper is based on an old idea by Stephen Hawking and Jim Hartle called the “no-boundary” proposal. In the paper, the authors employ a new method to do calculations that were not previously possible. Specifically, they calculate which types of universes a multiverse would contain if this theory were correct. The main conclusion seems to be that our universe is compatible with the idea, and also that this particular multiverse they deal with is not as large as the usual multiverse one gets from eternal inflation.

It’s not entirely uninteresting if you are into multiverse ideas, because then you need this information to calculate the probability of our universe. But it is also a very theoretical paper that does not say anything about observational consequences.

The only thing that the paper does say is that inflation took place. And inflation predicts that gravitational waves produced in the early universe should leave an imprint in the cosmic microwave background (CMB). This is the CMB polarization signal that BICEP was looking for but didn’t find. There are, however, some satellite missions in the planning that will look for it with better precision.
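
For orientation (this number is my addition; it does not appear in the paper): the strength of such a primordial gravitational-wave signal is conventionally quantified by the tensor-to-scalar ratio, the ratio of the tensor to the scalar power spectrum amplitudes,

r \equiv \frac{A_t}{A_s},

and current observations constrain it to roughly r \lesssim 0.1. Different inflation models predict wildly different values of r, which is part of why a detection by itself would not single out any one of them.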

So how do we detect parallel universes? By detecting the CMB polarization. I kid you not.

Here’s what Hertog said about this:
“This model predicts that our universe came into existence with a burst of rapid expansion called cosmic inflation. A big bang of this kind amplifies gravitational waves which in turn show up in satellite images of [the pattern of temperature fluctuations in] the cosmic microwave background. Future satellite missions should see this, if the theory is correct.

Observational evidence for the no-boundary model [in the form of gravitational waves from the big bang] would yield strong evidence for a multiverse. This paper provides a step towards a mathematically sound and testable model of the multiverse. That constitutes a significant extension of our notion of physical reality.

Some cosmologists have argued against the multiverse on the basis it can’t be tested. However our model shows that observations in our own universe can provide strong evidence for the existence of other universes. ”
Allow me to put this into perspective.

Theoretical physicists have proposed some thousand ideas for what might have happened in the early universe. There are big bangs and big bounces and brane collisions and string cosmologies and loop cosmologies and all kinds of weird fields that might or might not have done this or that. All of this is pure speculation; none of it is supported by evidence. The Hartle-Hawking proposal is one of these speculations.

The vast majority of these ideas contain a phase of inflation and they all predict CMB polarization. In some scenarios the signal is larger than in others. But there isn’t even a specific prediction for the amount of CMB polarization in the Hawking paper. In fact, the paper doesn’t even contain the words “polarization” or “tensor modes.”

The claim that the detection of CMB polarization would mean the multiverse exists makes as much sense as claiming that if I find a coin on the street then Bill Gates must have walked by. And a swarm of invisible angels floated past him playing harp and singing “Ode To Joy.”

In case that was too metaphorical, let me say it once again but plainly. Hawking has not found a new way to measure the existence of other universes.

Stephen Hawking was beloved by everyone I know, both inside and outside the scientific community. He was a great man without doubt, but this paper is utterly unremarkable.

Wednesday, March 14, 2018

Stephen Hawking dies at 76. What was he famous for?

I woke up this morning to the sad news that Stephen Hawking has died. His 1988 book “A Brief History of Time” got me originally interested in physics, and I ended up writing both my diploma thesis and my PhD thesis about black holes. It is fair to say that without Hawking my life would have been an entirely different one.

While Hawking became “officially famous” with “A Brief History of Time,” among physicists he was more renowned for the singularity theorems. In his 1960s work together with Roger Penrose, Hawking proved that singularities form under quite general conditions in General Relativity, and they developed a mathematical framework to determine when these conditions are met.

Before Hawking and Penrose’s work, physicists had hoped that the singularities which appeared in certain solutions to General Relativity were mathematical curiosities of little relevance for physical reality. But the two showed that this was not so, that, to the very contrary, it’s hard to avoid singularities in General Relativity.

Thanks to this seminal work, physicists understood that the singularities in General Relativity signal the theory's breakdown in regions of high energy-densities. In 1973, together with George Ellis, Hawking published the book “The Large Scale Structure of Space-Time” in which they laid out the mathematical treatment in detail. Still today it’s one of the most relevant references in the field.

A somewhat lesser known step in Hawking's career is that already in 1971 he wrote one of the first papers on the analysis of gravitational wave signals. In this paper, written together with Gary Gibbons, the authors proposed a simple yet pioneering way to extract signals from the background noise.

Also Hawking’s – now famous – area theorem for black holes stemmed from this interest in gravitational waves, which is why the paper is titled “Gravitational Radiation from Colliding Black Holes.” This theorem shows that when two black hole horizons merge their total surface area can only increase. In that, the area of black hole horizons resembles the entropy of physical systems.
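
To make the statement concrete (a standard illustration of mine, not notation from the 1971 paper): the horizon area of a non-rotating black hole of mass M is

A = \frac{16\pi G^2 M^2}{c^4},

so the area theorem, A_{\rm final} \geq A_1 + A_2, implies M_{\rm final}^2 \geq M_1^2 + M_2^2 for a merger, which limits how much energy can be radiated away in gravitational waves.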

Only a few years later, in 1974, Hawking published a seminal paper in which he demonstrated that black holes give off thermal radiation, now referred to as “Hawking radiation.” This evaporation of black holes results in the black hole information loss paradox, which is still unsolved today. Hawking’s work demonstrated clearly that the combination of General Relativity with the quantum field theories of the standard model spells trouble. Like the singularity theorems, it’s a result that doesn’t merely indicate, but proves, that we need a theory of quantum gravity in order to consistently describe nature.
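
For reference, the temperature of this radiation is

T_H = \frac{\hbar c^3}{8\pi G k_B M} \approx 6\times 10^{-8}\,\mathrm{K}\;\frac{M_\odot}{M},

which for astrophysical black holes is far below the temperature of the cosmic microwave background, one reason no one expects to observe the evaporation directly.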

While the 1974 paper was predated by Bekenstein’s finding that black holes resemble thermodynamical systems, Hawking’s derivation was the starting point for countless later revelations. Thanks to it, physicists understand today that black holes are a melting pot for many different fields of physics – besides general relativity and quantum field theory, there is thermodynamics and statistical mechanics, and quantum information and quantum gravity. Let’s not forget astrophysics, and also mix in a good dose of philosophy. In 2017, “black hole physics” could be a subdiscipline in its own right – and maybe it should be. We owe much of this to Stephen Hawking.

In the 1980s, Hawking worked with Jim Hartle on the no-boundary proposal according to which our universe started in a time-less state. It’s an appealing idea whose time hasn’t yet come, but I believe this might change within the next decade or so.

After this, Hawking tried several times to solve the riddle of black hole information loss that he himself had posed, alas, unsuccessfully. It seems that the paradox he helped create finally outlived him.

Besides his scientific work, Hawking was a master of science communication. In 1988, “A Brief History of Time” was a daring book about abstract ideas in a fringe area of theoretical physics. Hawking, to everybody’s surprise, proved that the public has an interest in esoteric problems like what happens if you fall into a black hole, what happened at the Big Bang, or whether god had any choice when he created the laws of nature.

Since 1988, the popular science landscape has changed dramatically. There are more books about theoretical physics than ever before and they are more widely read than ever before. I believe that Stephen Hawking played a big role in encouraging other scientists to write about their own research for the public. It certainly was an inspiration for me.

Good bye, Stephen, and thank you.

Tuesday, March 13, 2018

The Multiworse Is Coming

You haven’t seen headlines recently about the Large Hadron Collider, have you? That’s because even the most skilled science writers can’t find much to write about.

There are loads of data for sure, and nuclear physicists are giddy with joy because the LHC has delivered a wealth of new information about the structure of protons and heavy ions. But the good old proton has never been the media’s darling. And the fancy new things that many particle physicists expected – the supersymmetric particles, dark matter, extra dimensions, black holes, and so on – have shunned CERN.

It’s a PR disaster that particle physics won’t be able to shake off easily. Before the LHC’s launch in 2008, many theorists expressed confidence that the collider would produce new particles besides the Higgs boson. That hasn’t happened. And the public isn’t remotely as dumb as many academics wish. They’ll remember next time we come asking for money.

The big proclamations came almost exclusively from theoretical physicists; CERN didn’t promise anything they didn’t deliver. That is an important distinction, but I am afraid in the public perception the subtler differences won’t matter. It’s “physicists said.” And what physicists said was wrong. Like hair, trust is hard to split. And like hair, trust is easier to lose than to grow.

What the particle physicists got wrong was an argument based on a mathematical criterion called “naturalness”. If the laws of nature were “natural” according to this definition, then the LHC should have seen something besides the Higgs. The data analysis isn’t yet completed, but at this point it seems unlikely something more than statistical anomalies will show up.

I must have sat through hundreds of seminars in which naturalness arguments were repeated. Let me just flash you a representative slide from a 2007 talk by Michelangelo L. Mangano (full pdf here), so you get the idea. The punchline is at the very top: “new particles must appear” in an energy range of about a TeV (i.e., accessible at the LHC) “to avoid finetuning.”
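
In case you want the estimate behind such statements (the numbers below are the standard textbook version, not taken from this particular slide): the largest quantum correction to the Higgs mass comes from the top quark and, with a cutoff \Lambda on the loop momenta, is roughly

\delta m_H^2 \approx -\frac{3 y_t^2}{8\pi^2}\,\Lambda^2, \qquad y_t \approx 1.

Demanding that this correction not exceed the observed m_H^2 \approx (125\ \mathrm{GeV})^2 by orders of magnitude puts \Lambda, and with it the hoped-for new particles, at around a TeV. That is the whole argument.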

I don’t mean to pick on Mangano in particular; his slides are just the first example that Google brought up. This was the argument why the LHC should see something new: To avoid finetuning and to preserve naturalness.

I explained many times previously why the conclusions based on naturalness were not predictions, but merely pleas for the laws of nature to be pretty. Luckily I no longer have to repeat these warnings, because the data agree that naturalness isn’t a good argument.

The LHC hasn’t seen anything new besides the Higgs. This means the laws of nature aren’t “natural” in the way that particle physicists would have wanted them to be. The consequence is not only that there are no new particles at the LHC. The consequence is also that we have no reason to think there will be new particles at the next higher energies – not until you go up a full 15 orders of magnitude, far beyond what even futuristic technologies may reach.

So what now? What if there are no more new particles? What if we’ve caught them all and that’s it, game over? What will happen to particle physics or, more to the point, to particle physicists?

In an essay some months ago, Adam Falkowski expressed it this way:
“[P]article physics is currently experiencing the most serious crisis in its storied history. The feeling in the field is at best one of confusion and at worst depression”
At present, the best reason to build another particle collider, one with energies above the LHC’s, is to measure the properties of the Higgs boson, specifically its self-interaction. But it’s difficult to spin a sexy story around such a technical detail. My guess is that particle physicists will try to make it sound important by arguing the measurement would probe whether our vacuum is stable. Because, depending on the exact value of a constant, the vacuum may or may not eventually decay in a catastrophic event that rips apart everything in the universe.*

Such a vacuum decay, however, wouldn’t take place until long after all stars have burned out and the universe has become inhospitable to life anyway. And seeing that most people don’t care what might happen to our planet in a hundred years, they probably won’t care much what might happen to our universe in 10^100 billion years.

Personally I don’t think we need a specific reason to build a larger particle collider. A particle collider is essentially a large microscope. It doesn’t use light, it uses fast particles, and it doesn’t probe a target plate, it probes other particles, but the idea is the same: It lets us look at matter very closely. A larger collider would let us look closer than we have so far, and that’s the most obvious way to learn more about the structure of matter.

Compared to astrophysical processes which might reach similar energies, particle colliders have the advantage that they operate in a reasonably clean and well-controlled environment. Not to mention nearby, as opposed to some billion light-years away.

That we have no particular reason to expect the next larger collider will produce so-far unknown particles is in my opinion entirely tangential. If we stop here, the history of particle physics will be that of a protagonist who left town and, after the last street sign, sat down and died, the end. Some protagonist.

But I have been told by several people who speak to politicians more frequently than I that the “just do it” argument doesn’t fly. To justify substantial investments, I am told, an experiment needs a clear goal and at least a promise of breakthrough discoveries.

Knowing this, it’s not hard to extrapolate what particle physicists will do next. We merely have to look at what they’ve done in the past.

The first step is to backpedal from their earlier claims. This has already happened. Originally we were told that if supersymmetric particles are there, we would see them right away.
“Discovering gluinos and squarks in the expected mass range […] seems straightforward, since the rates are large and the signals are easy to separate from Standard Model backgrounds.” Frank Paige (1998).

“The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely.” Michael Dine (2007)
Now they claim no one ever said it would be easy. By 2012, it was “Natural SUSY is difficult to see at LHC” and “‘Natural supersymmetry’ may be hard to find.”

Step two is arguing that the presently largest collider will just barely fail to see the new particles but that the next larger collider will be up to the task.

One of the presently most popular proposals for the next collider is the International Linear Collider (ILC), which would be a lepton collider. Lepton colliders have the benefit of doing away with structure functions and fragmentation functions that you need when you collide composite particles like the proton.

In a 2016 essay for Scientific American Howard Baer, Vernon D. Barger, and Jenny List kicked off the lobbying campaign:
“Recent theoretical research suggests that Higgsinos might actually be showing up at the LHC—scientists just cannot find them in the mess of particles generated by the LHC's proton-antiproton collisions […] Theory predicts that the ILC should create abundant Higgsinos, sleptons (partners of leptons) and other superpartners. If it does, the ILC would confirm supersymmetry.”
The “recent theoretical research” they are referring to happens to be that of the authors themselves, vividly demonstrating that the quality standard of this field is currently so miserable that particle physicists can come up with predictions for anything they want. The phrase “theory predicts” has become entirely meaningless.

The website of the ILC itself is also charming. There we can read:
“A linear collider would be best suited for producing the lighter superpartners… Designed with great accuracy and precision, the ILC becomes the perfect machine to conduct the search for dark matter particles with unprecedented precision; we have good reasons to anticipate other exciting discoveries along the way.”
They don’t tell you what those “good reasons” are because there are none. At least not so far. This brings us to step three.

Step three is the fabrication of reasons why the next larger collider should see something. The leading proposal is presently that of Michael Douglas, who is advocating a different version of naturalness, that is, naturalness in theory space. And the theory space he is referring to is, drums please, the string theory landscape.

Naturalness, of course, has always been a criterion in theory space, which is exactly why I keep saying it’s nonsense: You need a probability distribution to define it, and since we only ever observe one point in this theory space, we have no way to ever get empirical evidence about this distribution. So far, however, the theory space was that of quantum field theory.
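
The most commonly used quantification, going back to Barbieri and Giudice, is, schematically, the sensitivity of an observable such as the Z-boson mass to the theory’s input parameters p,

\Delta(p) = \left|\frac{\partial \ln m_Z^2}{\partial \ln p}\right|,

and a theory is called finetuned if \Delta exceeds some threshold, say 100. But calling a large \Delta “improbable” implicitly assumes the parameters are drawn from something like a log-uniform distribution, and that assumed distribution is exactly what we can never measure.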

When it comes to the landscape at least the problem of finding a probability distribution is known (called “the measure problem”), but it’s still unsolvable because we never observe laws of nature other than our own. “Solving” the problem comes down to guessing a probability distribution and then drowning your guess in lots of math. Let us see what predictions Douglas arrives at:

Slide from Michael Douglas. PDF here. Emphasis mine.

Supersymmetry might be just barely out of reach of the LHC, but a somewhat larger collider would find it. Who’d have thought.

You see what is happening here. Conjecturing a multiverse of any type (string landscape or eternal inflation or what have you) is useless. It doesn’t explain anything and you can’t calculate anything with it. But once you add a probability distribution on that multiverse, you can make calculations. Those calculations are math you can publish. And those publications you can later refer to in proposals read by people who can’t decipher the math. Mission accomplished.

The reason this cycle of empty predictions continues is that everyone involved only stands to benefit. From the particle physicists who write the papers to those who review the papers to those who cite the papers, everyone wants more funding for particle physics, so everyone plays along.

I too would like to see a next larger particle collider, but not if it takes lies to trick taxpayers into giving us money. More is at stake here than the employment of some thousand particle physicists. If we tolerate fabricated arguments in the scientific literature just because the conclusions suit us, we demonstrate how easy it is for scientists to cheat.

Fact is, we presently have no evidence, neither experimental nor theoretical, that a next larger collider would find new particles. The absolutely last thing particle physicists need right now is to weaken their standards even more and appeal to multiversal math magic that can explain everything and anything. But that seems to be exactly where we are headed.

* I know that’s not correct. I merely said that’s likely how the story will be spun.

Like what you read? My upcoming book “Lost in Math” is now available for preorder. Follow me on twitter for updates.

Saturday, March 10, 2018

Book Update: German Cover Image

My US publisher has transferred the final manuscript to my German publisher and the translation is in the making. The Germans settled on the title “Das Hässliche Universum” (The Ugly Universe). They have come up with a cover image that leaves me uncertain whether it’s ugly or not, which I think is brilliant.

New Scientist, not entirely coincidentally, had a feature last week titled “Welcome To The Uglyverse.” The article comes with an illustration showing the Grand Canyon clogged by an irregular polyhedron in deepest ultramarine. It looks like a glitch in the matrix, a mathematical tumor on nature’s cheek. Or maybe a resurrected povray dump file. Either way, it captures amazingly well how artificial the theoretical ideals of beauty are. It is also interesting that both the designer of the German cover and the designer of the New Scientist illustration chose lack of symmetry to represent ugliness.

The New Scientist feature was written by Daniel Cossins, who did an awesome job explaining what the absence of supersymmetric particles has to do with the mass of the Higgs and why that’s such a big deal now. It’s one of the topics that I explore in depth in my book. If you’re still trying to decide whether the book is for you, check out the New Scientist piece for context.

Speaking of images, the photographer came and photographed, so here is me gazing authorly into the distance. He asked me whether the universe is random. I said I don’t know.

Tuesday, March 06, 2018

Book Review: “Richie Doodles,” a picture book about Richard Feynman by M. J. Mouton and J. S. Cuevas

Richie Doodles: The Brilliance of a Young Richard Feynman
Rare Bird Books (February 20, 2018)

I’m weak. I have a hard time saying “no” when offered a free book. And as the pile grows, so does my guilt for not reading them. So when I was offered a free copy of a picture book about Richard Feynman, of course I said “yes.” I’d write some nice words, work off some guilt, and everyone would be happy. How hard could it be?

So the book arrived and I handed it to the twins, that being my great plan for reviewing a children’s book. I don’t think the kids understood why someone would give them a book for free just to hear whether they liked it, but then I’m not entirely sure I understand the review business myself.

In any case, my zero-effort review failed at the first hurdle, that being that the book is in English but the twins just barely read German. So “mommy, read!” it was. Except that of course reading wouldn’t have done because, a thousand hours of Peppa Pig notwithstanding, they don’t understand much English either.

I am telling you this so you can properly judge the circumstances under which this, cough, review was conducted. It was me translating English verse on the fly. Oh, yes, the book is in verse. Which you might find silly, but I can attest that seven-year-olds think it’s the best.

The translation problem was fairly easy to solve – I even managed a rhyme here and there – but the next problem wasn’t. Turns out that the book doesn’t have a plot. It is a series of pictures loosely connected to the text, but it has no storyline. At least I couldn’t find one. There’s a dog named “Hitch” which appears throughout the “Tiny Thinkers” series (so far three books), but the dog is not present on most pages. And even if it’s on the page, it’s not clear why or what it’s doing.

That absence of story was something of a disappointment. Not like first-graders are demanding when it comes to storytelling. “The dog stole the doodle and the cat found it” would have done. But no plot.

Ok, well, so I made up a plot. Something along the lines that everyone thought Richie was just crazy doing all the doodles, but it turned out he was a genius. No, I don’t plan on making a career of this.

The next problem I encountered is that the illustrations are as awesome as the text isn’t. They are professionally done cartoon-style drawings (four-fingered hands and all) with a lovely attention to detail. The particular headache they gave me is that in several images a girl appears and, naturally, my daughters were much more interested in who the girl is and what she is doing than in what the boy’s squiggly lines have to do with tau neutrinos. Maybe Richie’s sister? The book leaves one guessing.

The final problem appeared on the concluding page, where we see an angry looking (female) math teacher reprimanding a very smug looking boy (we aren’t told who that is) for drawing doodles instead of paying attention to the teacher. The text says “If your teacher sees you doodling in class, and says those silly drawings won’t help you pass… You can explain that your doodle isn’t silly at all. It’s called a Feynman Diagram explaining things that are small.”

I wasn’t amused. Please understand. I have a degree. I can’t possibly tell my kids it’s ok to ignore their math teacher because maybe their drawings will one day revolutionize the world.

So I turned this into an explanation about how math isn’t merely about numbers and calculus, but more generally about relations that can, among others, be represented by drawings. I ended up giving a two-hour lecture on braid groups and set theory.

The book finishes with some biographical notes about Feynman.

On Amazon, the book is marked for Kindergartners, age 4-6. But to even make sense of the images, the children need to know what an atom is, what mathematics is, and what a microscope is. The text is even more demanding: It contains phrases like “quark and antiquark pair” and speaks of particles that repel or attract, and so on.

Because of this I’d have guessed the book is aimed at children age 7 to 10. Or maybe more specifically at children of physicists. Of course I don’t expect a picture book to actually explain how Feynman diagrams work, but the text in the book is so confused I can’t see how a child can make sense of it without an adult who actually knows that stuff.

At some point, for example, the text raises the impression that all particles pass through matter without interaction. “Things so small they pass right through walls!” You have to look at the illustration on the opposite page to figure out this refers only to neutrinos (which are not named in the text). If you don’t already know what neutrinos are, you’ll end up very confused about which collisions the later pages refer to.

Another peculiar thing about the book is that besides the “doodles” it says pretty much nothing about Richard Feynman. Bongo drums appear here and there but are not mentioned in the text. A doodle-painted van can be spotted, but is only referred to in the biography. There is also what seems to be an illustration of Schrödinger’s cat experiment and later a “wanted” poster looking for the cat “dead or alive.” Cute, yes, but that too is disconnected from the text.

I got the impression the book is really aimed at children of physicists – or maybe just physicists themselves? – who can fill in the details. And no word of lock-picking!

As you can tell, I wasn’t excited. But then the book wasn’t for me. When I asked the girls for their impression, they said they liked the book because it’s “funny.” Further inquiry revealed that what’s funny about the book are the illustrations. There’s a dog walking through a bucket of paint, leaving behind footprints. That scored highly, let me tell you. There’s a car accident (a scattering event), an apple with a worm inside, and a family of mice living in a hole in the wall. There are also flying noodles, and even I haven’t figured out what those are, which made them the funniest thing in the world ever, at least as far as my children are concerned.

The book has a foreword by Lawrence Krauss, but since Krauss recently moved to the sinner’s corner, that might turn out to be more of a benefit for Krauss than for the book.

In summary, the illustrations are awesome, the explanations aren’t.

I feel like I should be grateful someone produces children’s books about physics at all. Then again I’m not grateful enough to settle for mediocrity.

Really, why anyone asks me to review books is beyond me.

Thursday, March 01, 2018

Who is crazy now? (In which I am stunned to encounter people who agree with me that naturalness is nonsense.)

I have new front teeth. Or rather, I have a new dentist who looked at the fixes and patches his colleagues left and said they’ve got to go. Time for crowns. Welcome to middle age.

After several hours of unpleasant short-range interactions with various drills, he puts on the crowns and hands me a mirror. “Have a look,” he says. “They’re tilted,” I say. He frowns, then asks me to turn my head this way and that way. “Open your mouth,” he says, “Close. Open.” He grabs my temples with a pair of tongs and holds a ruler to my nose. Then he calls the guy from the lab.

The lab guy shakes my hand. “What’s up?” he asks. “The crowns are tilted,” I say. He stares into my mouth. “They’re not,” he declares and explains he made them personally from several impressions and angle measurements and photos. He uses complicated words that I can’t parse. He calls for his lab mate, who confirms that the crowns are perfectly straight. It’s not the crowns, they say, it’s my face. My nose, I am told, isn’t in the middle between my pupils. I look into the mirror again, thinking “what-the-fuck,” saying “they’re tilted.”

Now three guys are staring at my teeth. “They’re not tilted,” one of them repeats. “Well,” I try a different take, “they don’t have the same angle as they used to.” “Then they were tilted before,” one of them concludes. I contemplate the possibility that my teeth were misaligned all my life but no one ever told me. It seems very possible. Then again, if no one told me so far, chances are no one ever will. “They’re tilted,” I insist.

The dentist still frowns. He calls for a colleague who appears promptly but clearly dismayed that her work routine was interrupted. I imagine a patient left behind, tubes and instruments hanging out of the mouth. “Smile,” she orders. I do. “Yes, tilted,” she speaks, turns around and leaves.

For a moment there, I felt like one of the participants in Asch’s famous 1951 experiment. Asch assigned volunteers to join a group of seven. The group was tasked with evaluating simple images which they were told were vision tests. The volunteers did not know that the other members of the group had been instructed to, every once in a while, unanimously judge the longer of two lines to be the shorter one, even though the answer was clearly wrong. 75 percent of the trial participants went along with the wrong majority opinion at least once.

I’d like to think if you’d put me among people who insisted the shorter line is the longer one, I’d agree with them too. I also wouldn’t drink the water, keep my back to the wall, and leave the room slowly while mumbling “Yes, you are right, yes, I see it clearly now.”

In reality, I’d probably conclude I’m crazy and then go write a book about it. Because that’s pretty much what happened.

For more than a decade I’ve tried to find out why so many high energy physicists believe that “natural” theories are more likely to be correct. “Naturalness,” here, is a mathematical property of theories which physicists use to predict new particles or other observable consequences. Particle physicists’ widespread conviction that natural theories were preferable was the reason so many of them thought the Large Hadron Collider would see something new besides the Higgs boson: Supersymmetry, dark matter, extra dimensions, black holes, gravitons, or other exotic things.

Whenever I confessed to one of my colleagues I am skeptical that naturalness is a reliable guide, I was met with a combination of amusement and consternation. Most were nice. They explained things to me that I already knew. They didn’t answer my questions but insisted they did. Some gave up and walked away. Others got annoyed. Every once in a while someone told me I’m just stupid. All of them ignored me.

After each conversation I went and looked again at the papers and lecture notes and textbooks, but each time I arrived at the same conclusion, that naturalness is an argument from beauty, based on experience but with scant empirical evidence. For all I could tell, that a theory be natural was a wish not a prediction. I failed to see a reason for the LHC to honor this wish.

And it didn’t. The predictions for the LHC that were based on naturalness arguments did not come true. At least not so far, and we are nearing the end of new data. Gian-Francesco Giudice, head of the CERN theory division, recently rang in the post-naturalness era. Confusion reigns among particle physicists.

A few months have passed since Giudice’s paper. I am sitting at a conference in Aachen on naturalness and finetuning where I am scheduled to give my speech about how naturalness is a criterion of beauty, as prone to error as criteria of beauty have always been in the history of science. It’s a talk usually met with  befuddlement. Questions I get are mostly alterations of “Did you really just say what I thought you said?”

But this time it’s different. One day into the conference I notice that all I was about to say has already been said. The meeting, it seems, collected the world’s naturalness skeptics, a group of like-minded people I didn’t know existed. And they are getting more numerous by the day.

Most here agree that naturalness is not a reliable guide but a treacherous one, one that looks like it works until suddenly it doesn’t. And though we don’t agree on the reason why this guide failed just now and what to do about it, I’m not the crazy one any more. Several say they are looking forward to reading my book.

The crowns went back to the lab. Attempts at fixing them failed, and the lab remade them entirely. They’re straight now, and I am no longer afraid that smiling will reveal the holes between my teeth.

Thursday, February 22, 2018

Shut up and simulate. (In which I try to understand how dark matter forms galaxies, and end up very confused.)

Galactic structures. Illustris Simulation. [Image Source]
Most of the mass in the universe isn’t a type of matter we are familiar with. Instead, it’s a mysterious kind of “dark matter” that doesn’t emit or absorb light. It also interacts rarely both with itself and with normal matter, too rarely to have left any trace in our detectors.

We know dark matter is out there because we see its gravitational pull. Without dark matter, Einstein’s theory of general relativity does not predict a universe that looks like what we see; neither galaxies, nor galaxy clusters, nor galactic filaments come out right. At least that’s what I used to think.

But the large-scale structures we observe in the universe don’t come out right with dark matter either.

These are not calculations anyone can do with a pen on paper, so almost all of it is computer simulations. It’s tera-flopping, super-clustering, parallel computing that takes months even on the world’s best hardware. The outcome is achingly beautiful videos that show how initially homogeneous matter clumps under its own gravitational pull, slowly creating the sponge-like structures we see today.
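
If you have never seen such a simulation from the inside, here is a deliberately minimal toy version (a few hundred softened point masses and a leapfrog integrator, nothing remotely like the actual Illustris or Magneticum codes, and every number in it is merely illustrative) that shows the basic mechanism of gravitational clumping from near-homogeneous initial conditions:

import numpy as np

# Toy 2D N-body sketch: softened Newtonian gravity, kick-drift-kick leapfrog.
# Purely illustrative; real structure-formation codes add cosmic expansion,
# tree/mesh force solvers, hydrodynamics, and feedback prescriptions.
rng = np.random.default_rng(0)
n, G, soft, dt, steps = 300, 1.0, 0.05, 0.01, 500

pos = rng.uniform(0.0, 1.0, size=(n, 2))   # near-homogeneous start
vel = np.zeros((n, 2))
mass = np.full(n, 1.0 / n)

def accel(p):
    # pairwise softened gravity, O(n^2) force sum -- fine for a toy
    d = p[None, :, :] - p[:, None, :]              # d[i, j] = p[j] - p[i]
    r2 = (d ** 2).sum(-1) + soft ** 2
    np.fill_diagonal(r2, np.inf)                    # no self-force
    return G * (mass[None, :, None] * d / r2[..., None] ** 1.5).sum(axis=1)

a = accel(pos)
for _ in range(steps):
    vel += 0.5 * dt * a       # kick
    pos += dt * vel           # drift
    a = accel(pos)
    vel += 0.5 * dt * a       # kick
# 'pos' now shows overdensities growing out of the initial Poisson noise.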

Dark matter begins to clump first, then the normal matter follows the dark matter’s gravitational pull, forming dense clouds of gas, stars, and solar systems: The cradles of life.

Structure formation, Magneticum simulation. Credit: Dolag et al. 2015
It is impressive work that simply wouldn’t have been possible two decades ago.

But the results of the computer simulations are problem-ridden, and have been since the very first ones. The clumping matter, it turns out, creates too many small “dwarf” galaxies. Also, the distribution of dark matter inside the galaxies is too peaked towards the middle, a trouble known as the “cusp problem.”

The simulations also leave some observations unexplained, such as an empirically well-established relation between the brightness of a galaxy and the velocity of its outermost stars, known as the Tully-Fisher relation. And these are just the problems that I understand well enough to even mention.

It’s not something I used to worry about. Frankly I’ve been rather uninterested in the whole thing because for all I know dark matter is just another particle and really I don’t care much what it’s called.

Whenever I spoke to an astrophysicist about the shortcomings of the computer simulations they told me that mismatches with data are to be expected. That’s because the simulations don’t yet properly take into account the – often complicated – physics of normal matter, such as the pressure generated when stars go supernovae, the dynamics of interstellar gas, or the accretion and ejection of matter by supermassive black holes which are at the center of most galaxies.

Fair enough, I thought. Something with supernovae and so on that creates pressure and prevents the density peaks in the center of galaxies. Sounds plausible. These “feedback” processes, as they are called, must be highly efficient to fit the data, and make use of almost 100% of supernovae energy. This doesn’t seem realistic. But then again, astrophysicists aren’t known for high precision data. When the universe is your lab, error margins tend to be large. So maybe “almost 100%” in the end turns out to be more like 30%. I could live with that.

Eta Carinae. An almost supernova. Image Source: NASA

Then I learned about the curious case of low surface brightness galaxies. I learned that from Stacy McGaugh who blogs next door. How I learned about that is a story by itself.

The first time someone sent me a link to Stacy’s blog, I read one sentence and closed the window right away. Some modified gravity guy, I thought. And modified gravity, you must know, is the crazy alternative to dark matter. The idea is that rather than adding dark matter to the universe, you fiddle with Einstein’s theory of gravity. And who in their right mind messes with Einstein.

The second time someone sent me a link to Stacy’s blog it came with the remark I might have something in common with the modified gravity dude. I wasn’t flattered. Also I didn’t bother clicking on the link.

The third time I heard of Stacy it was because I had a conversation with my husband about low surface brightness galaxies. Yes, I know, not the most romantic topic of a dinner conversation, but things happen when you marry a physicist. Turned out my dear husband clearly knew more about the subject than I. And when prompted for the source of his wisdom he referred me to none other than Stacy-the-modified-gravity-dude.

So I had another look at that guy’s blog.

Upon closer inspection it became apparent Stacy isn’t a modified gravity dude. He isn’t even a theorist. He’s an observational astrophysicist somewhere in the US North-East who has become, rather unwillingly, a lone fighter for modified gravity. Not because he advocates a particular theory, but because he has his thumb on the pulse of incoming data.

I am not much of an astrophysicist and understand like 5% of what Stacy writes on his blog. There are so many words I can’t parse. Is it low-surface brightness galaxy or low surface-brightness galaxy? And what’s the surface of a galaxy anyway? If there are finite size galaxies, does that mean there are also infinite size galaxies? What the heck is a UFD? What do NFW, ISM, RAR, and EFE mean?* And why do astrophysicists use so many acronyms that you can’t tell a galaxy from an experiment? Questions over questions.

Though I barely understood what the man was saying, it was also clear why other people thought I may have something in common with him. Even if you don’t have a clue what he’s on about, frustration pours out of his writing. That’s a guy shouting at a scientific community to stop deluding themselves. A guy whose criticism is totally and utterly ignored while everybody goes on doing what they’ve been doing for decades, never mind that it doesn’t work. Oh yes, I know that feeling.

Still, I had no particular reason to look at the galactic literature and reassess which party is the crazier one, modified gravity or particle dark matter. I merely piped Stacy’s blog into my feed just for the occasional amusement. It took yet another guy to finally make me look at this.

I get a lot of requests from students. Not because I am such a famous physicist, I am afraid, but just because I am easy to find. So far I have deterred these students by pointing out that I have no money to pay them and that my own contract will likely run out before they have even graduated. But last year I was confronted with a student who was entirely unperturbed by my bleak future vision. He simply moved to Frankfurt and one day showed up in my office to announce he was here to work with me. On modified gravity, of all things.

So now that I carry responsibility for somebody else’s career, I thought, I should at least get an opinion on the matter of dark matter.

That’s why I finally looked at a bunch of papers from different simulations for galaxy formation. I had the rather modest goal of trying to find out how many parameters they use, which of the simulations fare best in terms of explaining the most with the least input, and how those simulations compare to what you can do with modified gravity. I still don’t know. I don’t think anyone knows.

But after looking at a dozen or so papers the problem Stacy is going on about became apparent. These papers typically start with a brief survey of other, previous, simulations, none of which got the structures right, all of which have been adapted over and over and over again to produce results that fit better to observations. It screams “epicycles” directly into your face.

Now, there isn’t anything scientifically wrong with this procedure. It’s all well and fine to adapt a model so that it describes what you observe. But this way you’ll not get a model that has much predictive power. Instead, you will just extract fitting parameters from data. It is highly implausible that you can spend twenty or so years fiddling with the details of computer simulations to then find what’s supposedly a universal relation. It doesn’t add up. It doesn’t make sense. I get this cognitive dissonance.

And then there are the low surface-brightness galaxies. These are interesting because 30 years ago they were thought not to exist. They do exist though, they are just difficult to see. And they spelled trouble for dark matter, just that no one wants to admit it.

Low surface brightness galaxies are basically dilute types of galaxies, so that there is less brightness per surface area, hence the name. If you believe that dark matter is a type of particle, then you’d naively expect these galaxies to not obey the Tully-Fisher relation. That’s because if you stretch out the matter in a galaxy, then the orbital velocity of the outermost stars should decrease while the total luminosity doesn’t, hence the relation between them should change.
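
Spelled out, the naive expectation is this: in Newtonian gravity the orbital velocity at radius R is set by the enclosed mass,

v^2 \approx \frac{G\,M(<R)}{R},

so spreading the same luminous mass over a larger radius should lower the outer velocities, while the (baryonic version of the) Tully-Fisher relation ties the baryonic mass to roughly the fourth power of the rotation velocity, M_b \propto v^4. I am paraphrasing here; the exact slope depends on how the relation is measured.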

But the data don’t comply. The low surface brightness things, they obey the very same Tully-Fisher relation as all the other galaxies. This came as a surprise to the dark matter community. It did not come as a surprise to Mordehai Milgrom, the inventor of modified Newtonian dynamics, who had predicted this in 1983, long before there was any data.

You’d think this would have counted as strong evidence for modified gravity. But it barely made a difference. What happened instead is that the dark matter models were adapted.

You can explain the observations of low surface brightness galaxies with dark matter, but it comes at a cost. To make it work, you have to readjust the amount of dark matter relative to normal matter. The lower the surface-brightness, the higher the fraction of dark matter in a galaxy.

And you must be good at this adjustment to match just the right ratio. Because that is fixed by the Tully-Fisher relation. And then you have to come up with a dynamical process for ridding your galaxies of normal matter to get the right ratio. And you have to get the same ratio pretty much regardless of how the galaxies formed, whether they formed directly, or whether they formed through mergers of smaller galaxies.

The stellar feedback is supposed to do it. Apparently it works. As someone who has nothing to do with the computer simulations for galaxy structures, the codes are black boxes to me. I have little doubt that it works. But how much fiddling and tuning is necessary to make it work, I cannot tell.

My attempts to find out just how many parameters the computer simulations use were not very successful. It is not information that you readily find in the papers, which is odd enough. Isn’t this the major, most relevant information you’d want to have about the simulations? One person I contacted referred me to someone else who referred me to a paper which didn’t contain the list I was looking for. When I asked again, I got no response. On another attempt my question of how many parameters there are in a simulation was answered with “in general, quite a few.”

But I did eventually get a straight reply from Volker Springel. In the Illustris Simulation, he told me, there are 10 physically relevant parameters, in addition to the 8 cosmological parameters. (That’s not counting the parameters necessary to initialize the simulation, like the resolution and so on.) I assume the other simulations have comparable numbers. That’s not so many. Indeed, that’s not bad at all, given how many different galaxy types there are!

Still, you have to compare this to Milgrom’s prediction from modified gravity. He needs one parameter. One. And out falls a relation that computer simulations haven’t been able to explain for twenty years.
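
For those who want to see the arithmetic: Milgrom’s modification kicks in at accelerations below a_0 \approx 1.2\times10^{-10}\ \mathrm{m/s^2}. In that regime the acceleration is a = \sqrt{a_N\,a_0} with a_N = GM/r^2 the usual Newtonian value, and setting a = v^2/r for a circular orbit gives

v^4 = G\,M\,a_0,

independent of the radius: flat rotation curves and a Tully-Fisher-like relation in one line, with a_0 as the only new parameter.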

And even if the simulations got the right result, would that count as an explanation?

From the outside, it looks much like dark magic.

* ultra faint dwarfs, Navarro-Frenk-White, interstellar medium, radial acceleration relation, external field effect

Thursday, February 15, 2018

What does it mean for string theory that the LHC has not seen supersymmetric particles?

The LHC data so far have not revealed any evidence for supersymmetric particles, or any other new particles. For all we know at present, the standard model of particle physics suffices to explain observations.

There is some chance that better statistics which come with more data will reveal some less obvious signal, so the game isn’t yet over. But it’s not looking good for susy and her friends.
Simulated signal of black hole production and decay at the LHC. [Credits: CERN/ATLAS]

What are the consequences? The consequences for supersymmetry itself are few. The reason is that supersymmetry by itself is not a very predictive theory.

To begin with, there are various versions of supersymmetry. But more importantly, the theory doesn’t tell us what the masses of the supersymmetric particles are. We know they must be heavier than something we would have observed already, but that’s it. There is nothing in supersymmetric extensions of the standard model which prevents theorists from raising the masses of the supersymmetric partners until they are out of the reach of the LHC.

This is also the reason why the no-show of supersymmetry has no consequences for string theory. String theory requires supersymmetry, but it makes no requirements about the masses of supersymmetric particles either.

Yes, I know the headlines said the LHC would probe string theory, and the LHC would probe supersymmetry. The headlines were wrong. I am sorry they lied to you.

But the LHC, despite not finding supersymmetry or extra dimensions or black holes or unparticles or what have you, has taught us an important lesson. That’s because it is clear now that the Higgs mass is not “natural”, in contrast to all the other particle masses in the standard model. That the mass be natural means, roughly speaking, that getting masses from a calculation should not require the input of finely tuned numbers.
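
To put a number to “finely tuned”: the observed Higgs mass compared with the Planck mass gives

\frac{m_H^2}{M_{\rm Pl}^2} \approx \left(\frac{125\ \mathrm{GeV}}{1.2\times10^{19}\ \mathrm{GeV}}\right)^2 \approx 10^{-34},

so if the quantum corrections to m_H^2 are generically as large as the highest scale in the theory, the input parameters have to cancel to some thirty-odd decimal places to leave the observed value. Whether one considers such a cancellation a problem is, of course, exactly what is under debate.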

The idea that the Higgs mass should be natural is why many particle physicists were confident the LHC would see something beyond the Higgs. This didn’t happen, so the present state of affairs forces them to rethink their methods. There are those who cling to naturalness, hoping it might still be correct, just in a more difficult form. Some are willing to throw it out and replace it instead with appealing to random chance in a multiverse. But most just don’t know what to do.

Personally I hope they’ll finally come around and see that they have tried for several decades to solve a problem that doesn’t exist. There is nothing wrong with the mass of the Higgs. What’s wrong with the standard model is the missing connection to gravity and a Landau pole.

Be that as it may, the community of theoretical particle physicists is currently in a phase of rethinking. There are of course those who already argue a next larger collider is needed because supersymmetry is just around the corner. But the main impression that I get when looking at recent publications is a state of confusion.

Fresh ideas are needed. The next years, I am sure, will be interesting.

I explain all about supersymmetry, string theory, the problem with the Higgs mass, naturalness, the multiverse, and what they have to do with each other in my upcoming book “Lost in Math.”

Monday, February 12, 2018

Book Update: First Review!

The final proofs are done and review copies were sent out. One of the happy recipients, Emmanuel Rayner, read the book within two days, and so we have a first review on Goodreads now. That’s not counting the two-star review by someone who, I am very sure, hasn’t read the book, because he “reviewed” it before there were review copies. Tells you all you need to know about online ratings.

The German publisher, Fischer, is still waiting for the final manuscript which has not yet left the US publisher’s rear end. Fischer wants to get started on the translation so that the German edition appears in early fall, only a few months later than the US edition.

Since I get this question a lot: no, I will not translate the book myself. To begin with, it seemed like a rather stupid thing to do, agreeing to translate an 80k-word manuscript if someone else can do it instead. Maybe more importantly, my German writing is miserable, owing to a grammar reform which struck the country the year after I had moved overseas, and which therefore entirely passed me by. Add to this that the German spell-check on my laptop isn’t working (it’s complicated), that I have an English keyboard, hence no umlauts, and also, did I mention I didn’t have to do it in the first place?

Problems start with the title. “Lost in Math” doesn’t translate well to German, so the Fischer people search for a new title. Have been searching for two months, for all I can tell. I imagine them randomly opening pages of a dictionary, looking for inspiration.

Meanwhile, they have recruited and scheduled an appointment for me with a photographer to take headshots. Because in Germany you leave nothing to chance. So next week I’ll be photographed.

In other news, end of February I will give a talk at a workshop on “Naturalness, Hierarchy, and Fine Tuning” in Aachen, and I agreed to give a seminar in Heidelberg end of April, both of which will be more or less about the topic of the book. So stop by if you are interested and in the area.

And do not forget to preorder a copy if you haven’t yet done so!

Wednesday, February 07, 2018

Which problems make good research problems?

mini-problem [answer here]
Scientists solve problems; that’s their job. But which problems are promising topics of research? This is the question I set out to answer in “Lost in Math,” at least concerning the foundations of physics.

A first, rough, classification of research problems can be made using Thomas Kuhn’s cycle of scientific theories. Kuhn’s cycle consists of a phase of “normal science” followed by “crisis” leading to a paradigm change, after which a new phase of “normal science” begins. This grossly oversimplifies reality, but it will be good enough for what follows.

Normal Problems

During the phase of normal science, research questions usually can be phrased as “How do we measure this?” (for the experimentalists) or “How do we calculate this?” (for the theorists).

The Kuhn Cycle. [Img Src: thwink.org]
In the foundations of physics, we have a lot of these “normal problems.” For the experimentalists it’s because the low-hanging fruits have been picked and measuring anything new becomes increasingly challenging. For the theorists it’s because in physics predictions don’t just fall out of hypotheses. We often need many steps of argumentation and lengthy calculations to derive quantitative consequences from a theory’s premises.

A good example for a normal problem in the foundations of physics is cold dark matter. The hypothesis is easy enough: There’s some cold, dark, stuff in the cosmos that behaves like a fluid and interacts weakly both with itself and other matter. But that by itself isn’t a useful prediction. A concrete research problem would instead be: “What is the effect of cold dark matter on the temperature fluctuations of the cosmic microwave background?” And then the experimental question “How can we measure this?”

Other problems of this type in the foundations of physics are “What is the gravitational contribution to the magnetic moment of the muon?,” or “What is the photon background for proton scattering at the Large Hadron Collider?”

Answering such normal problems expands our understanding of existing theories. These are calculations that can be done within the frameworks we have, but the calculations can be challenging.

The examples in the previous paragraphs are solved problems, or at least problems that we know how to solve, though you can always ask for higher precision. But we also have unsolved problems in this category.

The quantum theory of the strong nuclear force, for example, should largely predict the masses of particles that are composed of several quarks, like neutrons, protons, and other similar (but unstable) composites. Such calculations, however, are hideously difficult. They are today made by use of sophisticated computer code – “lattice calculations” – and even so the predictions aren’t all that great. A related question is how nuclear matter behaves in the cores of neutron stars.
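
Schematically, and glossing over everything that makes it hard, the lattice approach computes the Euclidean two-point function of an operator O with the quantum numbers of the hadron in question and reads the mass off its exponential decay at large times,

C(t) = \langle O(t)\,O^\dagger(0)\rangle \;\longrightarrow\; A\,e^{-m t},

the difficulty being the statistical noise and the extrapolations to physical quark masses, vanishing lattice spacing, and infinite volume.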

These are but some randomly picked examples for the many open questions in physics that are “normal problems,” believed to be answerable with the theories we know already, but I think they serve to illustrate the case.

Looking beyond the foundations, we have normal problems like predicting the solar cycle and solar weather – difficult because the system is highly nonlinear and partly turbulent, but nothing that we expect to be in conflict with existing theories. Then there is high-temperature superconductivity, a well-studied but theoretically not well-understood phenomenon, due to the lack of quasi-particles in such materials. And so on.

So these are the problems we study when business goes as normal. But then there are problems that can potentially change paradigms, problems that signal a “crisis” in the Kuhnian terminology.

Crisis Problems

The obvious crisis problems are observations that cannot be explained with the known theories.

I do not count most of the observations attributed to dark matter and dark energy as crisis problems. That’s because most of this data can be explained well enough by just adding two new contributions to the universe’s energy budget. You will undoubtedly complain that this does not give us a microscopic description, but there’s no data for the microscopic structure either, so no problem to pinpoint.

But some dark matter observations really are “crisis problems.” These are unexplained correlations, regularities in galaxies that are hard to accommodate with cold dark matter, such as the Tully-Fisher relation or the strange ability of dark matter to seemingly track the distribution of visible matter. There is as yet no satisfactory explanation for these observations using the known theories. Modified gravity successfully explains some of it, but that brings other problems. So here is a crisis! And it’s a good crisis, I dare say, because we have data, and that data is getting better by the day.

This isn’t the only good observational crisis problem we presently have in the foundations of physics. One of the oldest, but still alive and kicking, is the magnetic moment of the muon. Here we have a long-standing mismatch between theoretical prediction and measurement that has still not been resolved. Many theorists take this as an indication that the discrepancy cannot be explained with the standard model and that a new, better theory is needed.

A couple more such problems exist, or maybe I should say persist. The DAMA measurements for example. DAMA is an experiment that searches for dark matter. They have been getting a signal of unknown origin with an annual modulation, and have kept track of it for more than a decade. The signal is clearly there, but if it was dark matter that would conflict with other experimental results. So DAMA sees something, but no one knows what it is.

There is also the still-perplexing LSND data on neutrino oscillation that doesn’t want to agree with any other global parameter fit. Then there is the strange discrepancy in the measurement results for the proton radius using two different methods, and a similar story for the lifetime of the neutron. And there are the recent tensions in the measurement of the Hubble rate using different methods, which may or may not be something to worry about.

Of course each of these data anomalies might have a “normal” explanation in the end. It could be a systematic measurement error or a mistake in a calculation or an overlooked additional contribution. But maybe, just maybe, there’s more to it.

So that’s one type of “crisis problem” – a conflict between theory and observations. But besides these there is an utterly different type of crisis problem, which is entirely on the side of theory-development. These are problems of internal consistency.

A problem of internal consistency occurs if you have a theory that predicts conflicting, ambiguous, or just nonsensical observations. A typical example of this would be probabilities that become larger than one, which is inconsistent with a probabilistic interpretation. Indeed, this problem was the reason physicists were very certain the LHC would see some new physics. They couldn’t know it would be the Higgs, and it could have been something else – like an unexpected change to the weak nuclear force – but the Higgs it was. It was restoring internal consistency that led to this successful prediction.
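
For completeness, here is the textbook version of that consistency argument in one schematic line (standard material, not tied to any particular paper): without a Higgs or something else new, the tree-level amplitude for scattering longitudinally polarized W bosons grows with the collision energy E,

    \mathcal{M}(W_L W_L \to W_L W_L) \;\sim\; G_F\, E^2 ,

so the probabilities computed from it would exceed one at energies of roughly a TeV. Something had to cut off this growth within the LHC’s reach, and it turned out to be the Higgs.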

Historically, studying problems of consistency has led to many stunning breakthroughs.

The “UV catastrophe,” in which a thermal source emits an infinite amount of light at small wavelengths, is such a problem. Clearly that’s not consistent with a meaningful physical theory in which observable quantities should be finite. (Note, though, that this is a conflict with an assumption. Mathematically there is nothing wrong with infinity.) Planck solved this problem, and the solution eventually led to the development of quantum mechanics.
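
In formulas, and only to spell out what the inconsistency looked like (these are standard textbook expressions): the classical Rayleigh-Jeans spectral energy density diverges when integrated over all frequencies, while Planck’s law does not,

    u_{\rm RJ}(\nu,T) = \frac{8\pi\nu^2}{c^3}\,k_B T
    \quad\Rightarrow\quad \int_0^\infty u_{\rm RJ}\,\mathrm{d}\nu = \infty ,
    \qquad
    u_{\rm Planck}(\nu,T) = \frac{8\pi h\nu^3}{c^3}\,\frac{1}{e^{h\nu/k_B T}-1}
    \quad\Rightarrow\quad \text{finite energy density} .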

Another famous problem of consistency is that Newtonian mechanics was not compatible with the space-time symmetries of electrodynamics. Einstein resolved this disagreement, and got special relativity. Dirac later resolved the contradiction between quantum mechanics and special relativity which, eventually, gave rise to quantum field theory. Einstein further removed contradictions between special relativity and Newtonian gravity, getting general relativity.

All these have been well-defined, concrete, problems.

But most theoretical problems in the foundations of physics today are not of this sort. Yes, it would be nice if the three forces of the standard model could be unified to one. It would be nice, but it’s not necessary for consistency. Yes, it would be nice if the universe was supersymmetric. But it’s not necessary for consistency. Yes, it would be nice if we could explain why the Higgs mass is not technically natural. But it’s not inconsistent if the Higgs mass is just what it is.

It is well documented that Einstein and even more so Dirac were guided by the beauty of their theories. Dirac in particular was fond of praising the use of mathematical elegance in theory-development. Their personal motivation, however, is only of secondary interest. In hindsight, the reason they succeeded was that they were working on good problems to begin with.

There are only a few real theory-problems in the foundations of physics today, but they do exist. One is the missing quantization of gravity. Just lumping the standard model together with general relativity doesn’t work mathematically, and we don’t know how to do it properly.

Another serious problem with the standard model alone is the Landau pole in one of the coupling constants. That means that the strength of one of the forces becomes infinitely large. This is non-physical for the same reason the UV catastrophe was, so something must happen there. This problem has received little attention because most theorists presently believe that the standard model becomes unified long before the Landau pole is reached, making the extrapolation redundant.
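
To spell out where the infinity sits, here is the schematic one-loop running of such a coupling, quoted only for illustration (the coefficient b depends on the particle content):

    \alpha(\mu) = \frac{\alpha(\mu_0)}{1 - \dfrac{b\,\alpha(\mu_0)}{2\pi}\ln\dfrac{\mu}{\mu_0}} , \qquad b > 0 ,

which blows up at the finite energy \mu_{\rm L} = \mu_0 \exp\left(2\pi/(b\,\alpha(\mu_0))\right), the Landau pole.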

And then there are some cases in which it’s not clear what type of problem we’re dealing with. The non-convergence of the perturbative expansion is one of these. Maybe it’s just a question of developing better math, or maybe there’s something we get really wrong about quantum field theory. The case is similar for Haag’s theorem. Also the measurement problem in quantum mechanics I find hard to classify. Appealing to a macroscopic process in the theory’s axioms isn’t compatible with the reductionist ideal, but then again that is not a fundamental problem, but a conceptual worry. So I’m torn about this one.

But as far as crisis problems in theory development are concerned, the lesson from the history of physics is clear: Problems are promising research topics if they really are problems, which means you must be able to formulate a mathematical disagreement. If, in contrast, the supposed problem is that you simply do not like a particular aspect of a theory, chances are you will just waste your time.

Homework assignment: Convince yourself that the mini-problem shown in the top image is mathematically ill-posed unless you appeal to Occam’s razor.

Wednesday, January 31, 2018

Physics Facts and Figures

Physics is old. Together with astronomy, it’s the oldest scientific discipline. And the age shows. Compared to other scientific areas, physics is a slowly growing field. I learned this from a 2010 paper by Larsen and von Ins. The authors counted the number of publications per scientific area. In physics, the number of publications grows at an annual rate of 3.8%, which means it currently takes about 18 years for the body of physics literature to double. For comparison, publications in electrical engineering grow at 9% per year (doubling time 8 years) and those in technology at 7.5% (doubling time 9.6 years).
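
In case you want to check such numbers yourself: the doubling time follows from the annual growth rate by a one-line calculation. Here is a minimal sketch in Python (the function name is mine; the rates are the ones quoted above):

    import math

    def doubling_time(annual_growth_rate):
        """Years until output doubles at a constant annual growth rate."""
        return math.log(2) / math.log(1 + annual_growth_rate)

    for field, rate in [("physics", 0.038), ("electrical engineering", 0.09), ("technology", 0.075)]:
        print(f"{field}: {doubling_time(rate):.1f} years")
    # physics: 18.6 years, electrical engineering: 8.0 years, technology: 9.6 years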

The total number of scientific papers closely tracks the total number of authors, irrespective of discipline. The relation between the two can be approximately fit by a power law, so that the number of papers is equal to the number of authors to the power of β. But this number, β, turns out to be field-specific, which I learned from a more recent paper: “Allometric Scaling in Scientific Fields” by Dong et al.

In mathematics the exponent β is close to one, which means that the number of papers increases linearly with the number of authors. In physics, the exponent is smaller than one, approximately 0.877. And not only that, it has been decreasing over the last ten years or so. This means we are seeing diminishing returns: more physicists result in a less-than-proportional growth of output.

Figure 2 from Dong et al, Scientometrics 112, 1 (2017) 583. β is the exponent by which the number of papers scales with the number of authors.
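
To make concrete what such a scaling exponent means, here is a minimal sketch of how β can be extracted; the author and paper counts below are invented for illustration and are not Dong et al’s actual data:

    import numpy as np

    # Invented (authors, papers) counts, for illustration only.
    authors = np.array([10_000, 20_000, 40_000, 80_000, 160_000])
    papers = np.array([8_000, 14_500, 26_500, 48_500, 89_000])

    # Fit papers = C * authors^beta by a linear fit in log-log space.
    beta, logC = np.polyfit(np.log(authors), np.log(papers), 1)
    print(f"beta = {beta:.2f}")  # an exponent below 1 means sub-linear growth
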
The paper also found some fun facts. For example, a few sub-fields of physics are statistical outliers in that their researchers produce more than the average number of papers. Dong et al quantified this by a statistical measure that unfortunately doesn’t have an easy interpretation. Either way, they offer a ranking of the most productive sub-fields in physics, which is (in order):

(1) Physics of black holes, (2) Cosmology, (3) Classical general relativity, (4) Quantum information, (5) Matter waves, (6) Quantum mechanics, (7) Quantum field theory in curved space-time, (8) General theory and models of magnetic ordering, (9) Theories and models of many-electron systems, (10) Quantum gravity.

Isn’t it interesting that this closely matches the fields that tend to attract media attention?

Another interesting piece of information that I found in the Dong et al paper is that in all sub-fields the exponent relating the number of citations to the number of authors is larger than one, approximately 1.1. This means that, on average, the more people work in a sub-field, the more citations each of them receives. I think this is relevant information for anyone who wants to make sense of citation indices.

A third paper that I found very insightful for understanding the research dynamics in physics is “A Century of Physics” by Sinatra et al. Among other things, they analyzed the frequency with which sub-fields of physics cite their own or other sub-fields. The most self-referential sub-fields, they conclude, are nuclear physics and the physics of elementary particles and fields.

Papers from these two sub-fields also have by far the lowest expected “ultimate impact,” which the authors define as the typical number of citations a paper attracts over its lifetime, where the lifetime is the typical number of years during which the paper attracts citations (see figure below). In nuclear physics (labeled NP in the figure) and particle physics (EPF), interest in papers is short-lived and the overall impact remains low. By this measure, the category with the highest impact is electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics (labeled EOAHCF).

Figure 3 e from Sinatra et al, Nature Physics 11, 791–796 (2015).

A final graph from the Sinatra et al paper which I want to draw your attention to shows the productivity of physicists. As we saw earlier, the exponent that relates the total number of papers to the total number of authors is somewhat below 1 and has been falling in the past decade. However, if you look at the number of papers per author, you find that it has been rising sharply since the early 1990s, ie, basically ever since there was email.

Figure 1 e from Sinatra et al, Nature Physics 11, 791–796 (2015)

This means that the reason physicists seem so much more productive today than they were a few decades ago is that they collaborate more. And maybe that’s not so surprising, because there is a strong incentive for it: If you and I each write a paper, we each have one paper. But if we agree to co-author each other’s paper, we’ll both have two. I don’t mean to accuse scientists of deliberate gaming, but it’s obvious that counting papers by sheer number puts single authors at a disadvantage.

So this is what physics is, in 2018. An ageing field that doesn’t want to accept its dwindling relevance.

Thursday, January 25, 2018

More Multiverse Madness

The “multiverse” – the idea that our universe is only one of infinitely many – enjoys some credibility, at least in the weirder corners of theoretical physics. But there are good reasons to be skeptical, and I’m here to tell you all of them.

Before we get started, let us be clear what we are talking about because there isn’t only one but multiple multiverses. The most commonly discussed ones are: (a) The many worlds interpretation of quantum mechanics, (b) eternal inflation, and (c) the string theory landscape.

The many worlds interpretation is, guess what, an interpretation. At least to date, it makes no predictions that differ from those of other interpretations of quantum mechanics. So it’s up to you whether you believe it. And that’s all I have to say about this.

Eternal inflation is an extrapolation of inflation, which is an extrapolation of the concordance model, which is an extrapolation of the present-day universe back in time. Eternal inflation, like inflation, works by inventing a new field (the “inflaton”) that no one has ever seen because we are told it vanished long ago. Eternal inflation is a story about the quantum fluctuations of the now-vanished field and what these fluctuations did to gravity, which no one really knows, but that’s the game.

There is little evidence for inflation, and zero evidence for eternal inflation. But there is a huge number of models for both, because the available data don’t constrain the models much. Consequently, theorists theorize the hell out of it. And the more papers they write about it, the more credible the whole thing looks.

And then there’s the string theory landscape, the graveyard of disappointed hopes. It’s what you get if you refuse to accept that string theory does not predict which particles we observe.

String theorists originally hoped that their theory would explain everything. When it became clear that didn’t work, some string theorists declared that if they can’t do it, then it’s not possible, hence everything that string theory allows must exist – and there’s your multiverse. But you could do the same with any other theory if you don’t draw on sufficient observational input to define a concrete model. The landscape, therefore, isn’t so much a prediction of string theory as a consequence of string theorists’ insistence that theirs is a theory of everything.

Why, then, does anyone take the multiverse seriously? Multiverse proponents usually offer the following four arguments in favor of the idea:

1. It’s falsifiable!

[Image: Our Bubble Universe]
There are certain cases in which some version of the multiverse leads to observable predictions. The most commonly named example is that our universe could have collided with another one in the past, which could have left an imprint in the cosmic microwave background. There is no evidence for this, but of course this doesn’t rule out the multiverse. It just means we are unlikely to live in this particular version of the multiverse.

But (as I explained here) just because a theory makes falsifiable predictions doesn’t mean it’s scientific. A scientific theory should at least have a plausible chance of being correct. If there are infinitely many ways to fudge a theory so that the alleged prediction goes away, that’s not scientific. This malleability is a problem already with inflation, and extrapolating it to eternal inflation only makes things worse. Lumping the string landscape and/or many worlds on top of it doesn’t help parsimony either.

So don’t get fooled by this argument, it’s just wrong.

2. Ok, so it’s not falsifiable, but it’s sound logic!

Step two is the claim that the multiverse is a logical consequence of well-established theories. But science isn’t math. And even if you trust the math, no deduction is better than the assumptions you started from, and neither string theory nor inflation is well-established. (If you think they are, you’ve been reading the wrong blogs.)

I would agree that inflation is a good effective model, but so is approximating the human body as a bag of water, and see how far that gets you making sense of the evening news.

But the problem with the claim that logic suffices to deduce what’s real runs deeper than personal attachment to pretty ideas. The much bigger problem which looms here is that scientists mistake the purpose of science. This can nicely be demonstrated by a phrase in Sean Carroll’s recent paper. In defense of the multiverse he writes “Science is about what is true.” But, no, it’s not. Science is about describing what we observe. Science is about what is useful. Mathematics is about what is true.

Fact is, the multiverse extrapolates known physics by at least 13 orders of magnitude (in energy) beyond what we have tested and then adds unproved assumptions, like strings and inflatons. That’s not science, that’s math fiction.

So don’t buy it. Just because they can calculate something doesn’t mean they describe nature.

3. Ok, then. So it’s neither falsifiable nor sound logic, but it’s still business as usual.

The gist of this argument, also represented in Sean Carroll’s recent paper, is that we can assess the multiverse hypothesis just like any other hypothesis, by using Bayesian inference.

Bayesian inference is a way of probability assessment in which you update your information to arrive at the most likely hypothesis. Eg, suppose you want to know how many people on this planet have curly hair. For starters you would estimate it’s probably fewer than the total world population. Next, you might assign equal probability to all possible percentages to quantify your lack of knowledge. This is called a “prior.”

You would then probably think of people you know and give a lower probability for very large or very small percentages. After that, you could go and look at photos of people from different countries and count the curly-haired fraction, scale this up by population, and update your estimate. In the end you would get reasonably accurate numbers.

If you replace words with equations, that’s how Bayesian inference works.
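
If you prefer code over words, here is a minimal sketch of that kind of update for the curly-hair example; the sample counts are invented for illustration:

    import numpy as np

    # Candidate values for the curly-haired fraction of the world population.
    fractions = np.linspace(0, 1, 101)

    # Flat prior: every fraction is equally likely (quantifies initial ignorance).
    prior = np.ones_like(fractions)
    prior /= prior.sum()

    # Invented data: out of 100 people in photos, 23 have curly hair.
    n_total, n_curly = 100, 23

    # Binomial likelihood of that data for each candidate fraction.
    likelihood = fractions**n_curly * (1 - fractions)**(n_total - n_curly)

    # Bayes' theorem: the posterior is proportional to prior times likelihood.
    posterior = prior * likelihood
    posterior /= posterior.sum()

    print("most probable fraction:", fractions[np.argmax(posterior)])  # 0.23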

You can do pretty much the same for the cosmological constant. Make some guess for the prior, take into account observational constraints, and you will get some estimate for a likely value. Indeed, that’s what Steven Weinberg famously did, and he ended up with a result that wasn’t too badly wrong. Awesome.

But just because you can do Bayesian inference doesn’t mean there must be a planet Earth for each fraction of curly-haired people. You don’t need all these different Earths because in a Bayesian assessment the probability represents your state of knowledge, not the distribution of an actual ensemble. Likewise, you don’t need a multiverse to update the likelihood of parameters when taking into account observations.

So to the extent that it’s science as usual you don’t need the multiverse.

4. So what? We’ll do it anyway.

The fourth, and usually final, line of defense is that if we just assume the multiverse exists, we might learn something, and that could lead to new insights. It’s the good, old Gospel of Serendipity.

In practice this means that multiverse proponents insist on interpreting probabilities for parameters as those of an actual ensemble of universes, ie the multiverse. Then they have the problem of where to get the probability distribution from, a thorny issue since the ensemble is infinitely large. This is known as the “measure problem” of the multiverse.

To solve the problem, they have to construct a probability distribution, which means they must invent a meta-theory for the landscape. Of course that’s just another turtle in the tower and will not help find a theory of everything. And worse, since there are infinitely many such distributions, you had better hope they find one that doesn’t need more assumptions than the standard model already has, because if it did, the multiverse would be shaved off by Occam’s razor.

But let us assume the best possible outcome, that they find a measure for the multiverse according to which the parameters of the standard model are likely, and this measure indeed needs fewer assumptions than just postulating the standard model parameters. That would be pretty cool and I would be duly impressed. But even in this case we don’t need the multiverse! All we need is the equation to calculate what’s presumably a maximum of a probability distribution. Thus, again, Occam’s razor should remove the multiverse.

You could then of course insist that the multiverse is a possible interpretation, so you are allowed to believe in it. And that’s all fine by me. Believe whatever you want, but don’t confuse it with science.

The multiverse and other wild things that physicists believe in are the subject of my upcoming book “Lost in Math,” which is now available for preorder.