Sunday Discussion, Part 1: Epistemology Camp

BSI Epistemology Camp: 2024

Sunday at BSI’s 2024 Epistemology Camp was a loosely structured open discussion.
Greg Glassman started the conversation with mention of the 1927 Solvay Conference, where revolutionary ideas in physics and quantum theory were debated. Most attendees took an ontological view of probability in quantum theory, while a few disagreed and preferred an epistemological view.

The conversation called for a shift in scientific practice: models should be used to make predictions rather than merely to fit data. Speakers stressed the value of the predictive, experimental approach found in physics and engineering, and criticized the ad hoc modeling practices common in other scientific disciplines.

Anton Garrett delves into the challenges of unifying quantum mechanics with general relativity and the potential for a breakdown of the current standard model of cosmology due to unexplained phenomena such as dark energy and dark matter.

Peter Coles then touches upon speculative theories like the Multiverse and the many-worlds interpretation of quantum mechanics, and discusses work on resolving uncertainties in the parameters of the standard model of cosmology.

Transcript

[Greg Glassman]
Just to get us started this morning, I thought I’d bring up the Solvay Conference, and this is a picture of it. The date is late October 1927. And I think 19 of the people there had or eventually received Nobel Prizes in physics or chemistry. Chemistry in the case of Dr. Marie Curie, who received Nobel Prizes in both physics and chemistry. And, guide me here, what they did was make official the probability inherent in quantum theory. They decided that this probability was an ontological reality, that this is indeed how the universe behaves. And there were some in attendance, Einstein in particular, who took grave exception to this. Another person who did was E.T. Jaynes. And I’m going to read to you from our Jaynes book here, “Probability Theory: The Logic of Science.” This is Jaynes on that.

“In current Quantum Theory, probabilities express our own ignorance due to our failure to search for the real causes of physical phenomena, and worse, our failure even to think seriously about the problem. This ignorance may be unavoidable in practice, but in our present state of knowledge, we do not know whether it is unavoidable in principle. The central dogma simply asserts this and draws the conclusion that belief in causes and searching for them is philosophically naive. If everybody accepted this and abided by it, no further advances in understanding of physical laws would ever be made. Indeed, no such advance has been made since the 1927 Solvay Congress in which this mentality became solidified into physics. But, it seems to us that this attitude places a premium on stupidity. To lack the ingenuity to think of a causal, rational physical explanation is to support the supernatural view.”

That’s a powerful sentiment. And I’ll share with you that I think there’s more than a few of us here who agree with that. And it is a minority view. It is a minority view. And there are consequences of believing that probability is ontological, that the world is physically that way, rather than probability being a rational estimation of your knowledge.

I know, what was I going to say? Where was I going to… Lack of progress. Oh, let me go to Sabine Hossenfelder, who takes 50-60 pages to say that modern physics is not empirical. Then she travels the world, sitting in front of Nobel laureates, and over tea and cookies in long, drawn-out interviews gets them to admit as much. Yeah, it’s not empirical, not really. And she wonders why nothing major has happened in physics since 1945. I’ve already asked Dr. Briggs here about this. I think she understands the problem but not its origins. Fair enough.

Oddly, this modern-day probability theory as logic is the stuff that has been so effective in resolving so many science issues. Maximum entropy has baked into it the notion that probability doesn’t inhere in objects but lives in our heads. It is epistemological, and that’s a fascinating thing. I think it leads to other things that are somewhat counterintuitive, and that’s what drew in Aubrey Clayton. He said he got so interested in probability theory as extended logic because it was the most counterintuitive thing he’d ever studied. And I get it, that’s an amazing thing. What do you want to add to that?

[William Briggs]
Yes, well, on the subject: there are a lot of people still who believe probability is ontological. It is the fundamental belief of the standard model in physics. I think most people do believe it, and when you try to talk them out of it, a lot of them become somewhat angry about the idea. When you try to tell them that this isn’t real, that probability isn’t real, that probability is just a measure of our belief in things, they’ll say something like, and I don’t know how much quantum mechanics we all know here, but you can represent some of these particles with wave functions, they’ll say, well, the wave function is real. The wave function is ontologically real. Well, the wave function may or may not be real, but probability is a function of the wave function. It’s not the wave function itself, so they fool themselves with the mathematics a little bit. The mathematics becomes reified, and the theory is believed above reality. And there’s no real demonstration that the experimental results are giving you ontological probability, except that that’s the interpretation given to it a priori. The experiments come out, and Anton can go into this in much more detail, the experiments come out different every time, even though you think you have your apparatus set exactly the same. But that can’t be true, because when you’re talking about tiny, tiny things that can’t be seen, and your instrument is itself composed, in whole or in part, of the same sorts of tiny things, you cannot guarantee that you’ve set your equipment exactly as it was every single time. And there are all kinds of theories about this. People look at it in a Bayesian way, they try to use Bayesian statistics rather than frequentist statistics, but they don’t solve the problem. They just say the real probability is still ontological but should be read in a Bayesian way. So they’ve talked themselves into a corner, more or less. But there are people who are trying to remove themselves from this and bring some order to it.
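For readers who want the distinction Briggs is drawing made concrete, the standard Born rule shows that the probability is computed from the wave function rather than being the wave function itself; this is textbook material added here for reference, not part of the discussion:

```latex
% Born rule: the probability density is a function of the wave function \psi,
% not the wave function itself.
p(x) \;=\; \lvert \psi(x) \rvert^{2} \;=\; \psi^{*}(x)\,\psi(x),
\qquad
\int \lvert \psi(x) \rvert^{2}\, dx \;=\; 1 .
```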

The whole thing still revolves around why they do this. Well, they have a metaphysical view of the way the universe is, and they call it realism. Unfortunately, they got the name first, and they say the objects named in physical theories exist. That’s the realist position in physics. That’s contrasted with an empirical sort of view, which says it doesn’t really matter whether or not they exist: we can use these equations in an instrumentalist kind of way. In other words, we can just use them to make predictions. They’re good theories; it doesn’t necessarily mean the objects behind them exist. There could be all kinds of refinements and so forth. And then there’s a middle ground between these two, called structural realism, and I think that’s the position I hold with regard to all this. Some of the objects are real, and some are not. But the idea that probability is ontological is bad news in physics because it’s holding them back. We could talk about obscure new theories that are kind of fun to think about, how quantum mechanics can be interpreted differently, but I’m not going to bore you with that right now.

But it’s not just in physics. It’s everywhere that probability models are used. Especially, like Gerd said, there’s a ritualistic approach to creating statistical models. They create a model, more or less ad hoc in most situations, with a technique they call regression. I’m not going to show the math of regression, but a lot of people just say, “We use regression,” and all of a sudden, in their minds, the regression becomes real. The parameters, the statistical parameters, the probability parameters inside this model become real to them, and they say, “We now need to estimate the true values of these parameters.” The true values of these parameters, as if they exist out there like something you can measure, like height and weight and dimension, as if these things are real and out there and take values that we can estimate.

But the problem is the model started as an ad hoc thing, and we talked about this a bit yesterday in conversation. I mentioned it in the speech we did at Hillsdale. There was a famous experiment in which they gave, I can’t remember the exact number, something like a hundred or more research teams the exact same set of data. The exact same set of data, and they asked them to answer the exact same question: does X cause Y, for a particular X and Y? Half of them said no, it doesn’t, and half said yes, it does. Of the half that said no, about half of those results were statistically significant, meaning they had p-values to support them; and the half that said yes also reported statistical significance and p-values. Now, this can’t all be true, obviously. It’s false. And the reason it’s false is that they all took the view that the model is real, the parameters are real, and they’re searching for the real values of these parameters. But it was all ad hoc. All the models were different; they were allowed to differ in different ways. That’s why they came to so many different answers.

And that experiment has been repeated, as a matter of fact, in true replications: it was done in public policy and sociology, it’s been done again in economics, and I think other people are doing it now. Doing the exact same thing, handing people the same sets of data and so forth, shows that these parameters cannot be out there and real. They’re just ad hoc models. In fact, if you instead look at these things as a matter of our knowledge, if I assume that this model is a good representation of reality, even if I can’t see all of reality, I can then make predictions. But they don’t make predictions; they stop and examine the interior bits of the model, and then they report on the interior bits of the model, the parameters, using confidence intervals and p-values and all that kind of thing. And that’s it. They stop.

It’s just fitting the model to the data. And if the fit is good, they say, “We have found…” Well, they don’t actually always say they’ve found causation, but they heavily imply it. They use words like “linked to” and “associated with” and things like this, but they all want to say their correlation is causation, even though they can’t actually make that claim, and some say it outright. That’s wrong. It’s just wrong. This is one of the main reasons we’re doing what we’re doing here, I think: we have to change this practice. At least physics does it right. They have to make predictions; they build experiments and test them, and that’s good. But that does not happen in most other areas of science. It happens a bit in medicine and so forth, but wherever you see statistics used, most of the time it does not. They just report on the model. They report on how well the model fits, and you never see predictions made using the models they provide, which they could.

They could easily take any of the models they’ve used, put it into a predictive form, and publish the results: here’s what I predict if X is like this, here’s the probability that Y will do that. They could all do that, and then anybody could take any scientific paper and say, “All right, all I have to do is put X in this position, and here’s what the model said Y would be,” and check, simple as that. But this would be bad news for the publishing industry, because it would show a lot of models are of absolutely no use; they’re absolutely worthless. And I could go into more and more detail about why, and the small details about p-values and how all that fits in, but I think that’s the whole point of it. Probability is not real. As de Finetti shouted in bold capital letters in a famous book: “Probability does not exist. Probability does not exist.” It doesn’t inhere in things, and that is still difficult for many people to accept, and that’s where we are too.
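As an illustration of what “putting a model into predictive form” can look like in practice, here is a minimal sketch in Python using ordinary least squares from statsmodels. The data and variable names are invented for the example, and this is one simple way of stating a prediction with uncertainty, not the specific method Briggs uses:

```python
# A minimal sketch of reporting a regression in predictive form rather than
# only reporting parameter estimates and p-values.  Data and names are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)            # hypothetical explanatory variable X
y = 2.0 + 0.5 * x + rng.normal(0, 1, 200)   # hypothetical outcome Y

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

# Instead of stopping at fit.summary() (parameters, p-values), state what the
# model predicts for new values of X, with uncertainty, so anyone can check it.
x_new = np.array([[1.0, 3.0], [1.0, 7.0]])  # constant term plus new X values
pred = fit.get_prediction(x_new).summary_frame(alpha=0.10)
print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]])  # 90% predictive intervals for Y
```

Anyone holding the published model and a new X could then compare the stated interval against the Y actually observed.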

[Greg Glassman]
And where I got lost before: the practical consequence of this is this absurd theory that there are two universes, parallel universes, many more than two, many universes, the Multiverse. And when I was going through Jaynes, he sent me back to Max Tegmark, and in his book, “Our Mathematical Universe,” you can actually see him run down the wrong road and come to that conclusion. You get to witness the mental lapse in real time. It’s a pretty crazy thing to see. You authored a paper with Nguyen and Trafimow on an alternative to p-values, and that was the predictive strength of a model, just like you described here. And it’s the simplest, easiest thing, and I think it’s the way science has always been done. It’s certainly the way engineering is done.

[William Briggs]
Engineering is the key, engineering when it’s done right, of course.

I mean, there’s theory in engineering as well, but in the end it comes down to this: does the model build something that works? If it does, well, then the model is useful. Otherwise, no. And it’s a simple matter, like I say, to turn any of these statistical models, probability models that people use, into predictive form. The math is there. The math is actually quite difficult in a lot of cases, but the software is already written for it; you just have to do it.

[Anton Garrett]
I’d like to make one or two comments on that. Can I commend to the house the rhetorical trick I mentioned yesterday: do not get into arguments about what probability is. Just say that in every problem where there is uncertainty, you need a measure of how strongly the assumed truth of one statement implies the truth of another. And we’ve known since Cox in 1946 that this quantity obeys what everybody for 300 years has called the laws of probability. Now, on those grounds, I’m happy to call strength of implication probability. But if anybody objects, don’t go head-on, go around them: just calculate strength of implication and you will solve the problem while they are still arguing. That, I think, is a way forward.
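For reference, the result Garrett alludes to can be stated briefly: Cox showed that any consistent measure of how strongly one proposition implies another must satisfy, up to a monotonic rescaling, the familiar sum and product rules, that is, the laws of probability. This is a standard statement added for orientation, not a quotation from the discussion:

```latex
% Writing P(A \mid B) for the strength with which B implies A (after Cox, 1946):
P(A \mid B) + P(\bar{A} \mid B) = 1
\qquad\text{(sum rule)},
\qquad
P(AB \mid C) = P(A \mid BC)\, P(B \mid C)
\qquad\text{(product rule)}.
```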

Let me add to that a little of the history of physics and the attempts to unify the fundamental interactions, because that’s relevant to the fact that there has still been progress postwar, although in one area of physics we are still up the dead end that Greg accurately described. Hundreds of years ago, we discovered electricity and we discovered magnetism. It was not until the 19th century that they were unified by Maxwell into a single theory, electromagnetism. And there is gravity as well, which was first described by Newton in the 17th century in the famous inverse square law of attraction between point particles in terms of the distance between them, much the same as between electric charges. But with electric charges you can have positive ones, which repel each other, and positive and negative, which attract each other, whereas gravity is always attractive: mass is always positive.
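The parallel Garrett mentions between Newtonian gravity and electrostatics is the shared inverse-square form; these are the standard textbook expressions, given here only to make the comparison explicit:

```latex
% Newton's law of gravitation and Coulomb's law share the same 1/r^2 dependence.
F_{\text{grav}} = G\,\frac{m_1 m_2}{r^2},
\qquad
F_{\text{elec}} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2}.
```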

In the atom, you have electrons, which are negatively charged, whizzing around a nucleus, which is positively charged, and you have as many positively charged particles in that nucleus as you have negatively charged electrons, so the whole thing is overall neutral. If you’re smart, you’ll ask yourself, “Hold on, didn’t he just say that like charges repel, and there’s a load of positive charges together in the nucleus? Why then doesn’t the nucleus just fly apart?” And the answer is that there are two more forces. There’s another force which is very short-range, which only manifests inside the nucleus. That’s called the strong force, and it’s strong and attractive, and it holds the positive charges in the nucleus together. In that case, if you’re smart, you might say, “How come some nuclei spontaneously fly apart in nuclear fission, which powers nuclear reactors today?” There’s a fourth force called the weak force, and that also acts inside nuclei. It’s weaker than the electromagnetic force and much weaker than the strong force.

In an absolute tour de force in the 1960s, the sadly late Steven Weinberg managed to come up with another unification, just as electricity and magnetism had been unified in the 19th century. Weinberg and some others unified the weak force inside the nucleus with electromagnetism, and the resulting theory allows you to predict a certain property of the electron, called the gyromagnetic ratio, to well better than one part in a billion. You start to need the weak corrections at around the one-part-in-a-billion level, and that theory has been tested to the limits of technological accuracy; no error in it is known.

I think to predict something to one part in a billion is a tremendous achievement of postwar physics, but we are still stuck in two areas. We’ve got a theory of the strong force as well, quantum chromodynamics, though it’s very hard to make predictions with it. It’s not unified with the electroweak theory; it sits next to it and is consistent with it. The one we have the problem with is gravity and unifying that with the others. There we are still absolutely stuck. If you try it, there are infinities all over the place in your theory, and essentially it doesn’t work. And I do wonder about the dead end that Greg has mentioned, which shows up in very, very simple experiments, far simpler than anything you need to start testing the tentative theories of quantum gravity: if I can’t explain why the next electron goes that way, and the one after that goes this way, in what are called two consecutive Stern-Gerlach apparatuses, then something is very wrong.

I’d like to take a minute to expand on something I mentioned briefly yesterday, which is called Bell’s theorem. That looks at two particles which once were together and fly apart and are therefore correlated, meaning that if I measure one then, without touching the other, I know something about the other. And it turns out that if you do certain measurements on a large number of these particle pairs, the results can’t be reconciled with the measurement on this one depending only on variables concerned with the measuring apparatus and variables concerned with the particle. Somehow those particles are signaling to each other, and nobody knows how. That’s one of the reasons I want what I called yesterday a hidden-variables theory. It is alarming that signaling appears to go on in a manner that is not causal; it’s acausal. You can see acausality in one or two places at the boundaries of quantum mechanics.
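The quantitative core of the point is usually stated as the CHSH form of Bell’s inequality: for any local hidden-variables account in which outcomes depend only on the apparatus settings and the particle’s own variables, the correlation combination S is bounded by 2, whereas quantum mechanics predicts, and experiments show, violations up to 2√2. This standard statement is added here only as background:

```latex
% CHSH combination of correlations E(a,b) for detector settings a, a', b, b':
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'),
\qquad
\lvert S \rvert \le 2 \ \text{(local hidden variables)},
\qquad
\lvert S \rvert \le 2\sqrt{2} \ \text{(quantum mechanics)}.
```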

Then there is John Wheeler, the great postwar physicist, often said to be the best physicist never to win the Nobel Prize. There is one experiment Wheeler invented, called the delayed-choice experiment, where there is acausality, and it might just be that in order to predict what goes on in these Bell-type experiments, you need information from the future as well as the past. In that case, although you could have a theory which is completely deterministic, you might not be able to get the information you need in order to make predictions.

Now, I would contend that being in that situation is still an advance. You never know what you’re going to find until you actually look. So, despite Bell’s theorem and acausality, I continue to commend looking for a hidden-variables theory. But I do think we are slightly stuck in that area. There have been great advances in cosmology postwar, because that is increasingly a data-rich area. Cosmologists always say “hang on, hang on” when I say that physics involves designed, interventionist experiment. They say they can’t go up there and intervene with the universe; it’s too big and it’s too far away. But increasingly, as technology improves our detectors, we can get more and more information, and that does allow improvements in theories of cosmology.

If Peter wants a microphone, he could explain more about that, that’s up to him. Some of the wackier theories that have come out involve the Multiverse. There are different flavors of Multiverse, some of those theories are completely crazy, including the many-worlds theory in quantum mechanics as a subset of that. Some of them do have something to commend them, so it’s worth looking more closely into the literature at that. Some areas of physics are very healthy, some are very unhealthy, and let’s try and answer questions in all those areas.

[Greg Glassman]
Thank you. You want to give the mic to Peter?


[Peter Coles]
I guess I have to just say something now that I’ve got the microphone. Should I stand up or can I stay? I’ll stand up.

So thank you, Anton, for that. The invitation was maybe to say something about cosmology and what it has to say about the themes we’ve been discussing. So let me just start. As you probably know, cosmology is the study of the universe as a whole, as a single system, rather than the study of the objects that make it up. We’re trying to work out how the universe as a whole works. And one of the first things you come to when you look at cosmology from a theoretical point of view is that it’s a very backward subject. What I mean by that is that it’s unlike experimental physics, where you go to a laboratory, set up the initial conditions for your experiment, and let it run, so you have control over the initial conditions and you can tweak them to see what effect those initial conditions have on the endpoint.

In cosmology, there are two things that we have. One universe—I won’t get into the Multiverse question, I think enough has been said about that—we have one universe but we’re at the endpoint of the experiment. We don’t know the initial conditions. So what we have to do is infer what happened early on based on what we can observe around us now. Now, as Anton said, our detectors and telescopes are getting more and more powerful and one of the things that gives us is the fact that we can look back in time.

Light travels with a finite speed, so the further we look out into the universe, the further we’re looking back in time. You’re seeing objects as they were in the past. So we can do the history of the universe; it’s not just the final state that we can measure. We can go back a long way. We can actually observe galaxies 10 billion light-years away, so their light took about 10 billion years to reach us from where they were. They’re not the same as the galaxies around us now; they were babies.

So there’s that issue, but we can’t see right back to the very beginning of the universe. The best model we have is that the universe began with a big bang, and for the reason Anton talked about, we cannot describe the very instant of the Big Bang, because we know that’s the point where we really have to unify the theory of gravity, which is a theory of space and time, with the theories of matter, the interactions Anton described. We just don’t know how to do that. So we’re backwards in the sense that we have to look at where we are now, and at a partial view of the history of the universe, to try to reconstruct what the very beginning may or may not have looked like. That’s a very ambitious challenge, and it’s quite remarkable, I think, that we’ve got as far as we have. The theory that we have is very incomplete, and I’ll say a little more about that in a moment if I can.

But that’s what I mean by a backward subject, and one of the things that immediately strikes you if you go into cosmology is that it has more in common with something like forensic science or archaeology than it does with experimental physics in some ways, because we’re actually reconstructing; it’s an inverse problem, and I think that’s one of the reasons I got drawn to the Bayesian inference approach, because it’s implicitly inverse reasoning. So we have two things. We have a world of theory, and that theory is incomplete for the reasons I’ve talked about, partly because we don’t know the physical laws at the very beginning and consequently we don’t know the initial conditions. But we also have a world of observations, and at the moment the world of observations is getting bigger and bigger.

You’ve all seen the James Webb Space Telescope. I’m involved with a satellite mission called Euclid, which is mapping billions of galaxies over the sky. So we learn a lot about what the universe actually looks like. We don’t have a complete model for how those galaxies formed and evolved, because we don’t know the initial conditions, so what we do is build a model based on those theories. It’s a standard model, and to represent the fact that we don’t know the initial conditions, we have some free parameters. Those free parameters in the model are things like how much matter there is in the universe, what kind of energy there is, how quickly it’s expanding (that’s called the Hubble constant; you may have heard of that if you’re interested in cosmology), half a dozen parameters in all.

So how do we proceed? Well, if we were frequentists, we would choose the combination of parameters which makes our universe the most probable. That’s not the way we do it, because we know there’s a lot of flexibility in the choice of these parameters. So we have the world of theory, which I’ll put on my right hand.

Okay, the world of theory has parameters in it. The world of observations is measurements of galaxies and Cosmic microwave background, all kinds of different things all put together. We have to go and calculate for different possible values of these parameters what the universe would look like when you observe it. That’s a forward problem. Take the model, calculate what the universe would look like. And because we don’t know the parameters in detail, we have to do that for lots of different choices of parameters, okay?

Now, that gets you somewhere. Some of those will look like the observed universe; some of them won’t. But what we have to do is form a loop. Once we’ve done a forward calculation, you look at the universe as it actually is, and how it compares with that model. We can then infer tighter values for the parameters, so that that particular model will fit, but only for a certain choice of parameters.

So you narrow down the parameters, you come back to Theory space. You say, actually, we now know these parameters a bit better than we did before because we’ve used some observational information to tighten them. And then we do it again; we go forward and we come back again. And it’s a loop that’s going forwards and backwards all the time. It’s an inductive loop, as it were. So this actually is an ongoing process where we don’t know the answers. We don’t know exactly what the best theory is at the moment, but the more and more data we have, the tighter the constraints on these free parameters in the model.

And eventually, it may well come to the state where there’s no wiggle room left in the parameters and the observations can’t be fitted by that model with any choice of parameters anymore. Then we’ve done something good: we’ve ruled out that particular model. But it’s a long process going backwards and forwards, because you have this freedom in the parameters. It’s not just a case of going forward into the observational measurement space, if you will, and drawing a conclusion from that.
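As a toy sketch of the forward-and-back loop Coles describes, assume for illustration a single free parameter (a stand-in for something like the Hubble constant), a trivial forward model, and Gaussian measurement errors. None of the numbers or function names here come from a real cosmological pipeline; this is only meant to show the shape of one pass through the loop:

```python
# Toy version of the loop: forward-model candidate parameter values, compare
# with observations, keep the values that remain plausible, repeat as data grow.
import numpy as np

def forward_model(h, x):
    """Hypothetical forward calculation: predicted observable for parameter h."""
    return h * x  # stand-in for a real (and far more expensive) calculation

rng = np.random.default_rng(1)
x_obs = np.linspace(1.0, 10.0, 25)
true_h, sigma = 0.7, 0.5
y_obs = forward_model(true_h, x_obs) + rng.normal(0.0, sigma, x_obs.size)

h_grid = np.linspace(0.2, 1.2, 201)             # candidate parameter values
log_like = np.array([
    -0.5 * np.sum((y_obs - forward_model(h, x_obs)) ** 2) / sigma**2
    for h in h_grid
])
post = np.exp(log_like - log_like.max())         # flat prior, unnormalised posterior
post /= np.trapz(post, h_grid)

kept = h_grid[post > 0.01 * post.max()]          # parameter range still allowed by the data
print(f"allowed range for h: {kept.min():.2f} to {kept.max():.2f}")
# With more or better data the allowed range tightens; if no value of h fits
# at all, that particular model is ruled out.
```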

So I think this illustrates something that came up in the discussions yesterday about how statistical methods are used. Now, moving away from cosmology, you might say, “What’s this got to do with psychology or medical statistics or whatever?” But it’s actually the same theme. Although cosmology is kind of a strange subject because of all these peculiarities I’ve talked about, many of the problems that beset cosmology are actually universal in science.

One of them is that it’s no good at all just doing the forward problem, calculating the data based on one choice of model, and stopping there. You have to tighten up the free parameters. One of the problems talked about yesterday was that what tends to happen is people say, “I’ll do some calculation. I have a model, take that as a null hypothesis or whatever, calculate what the data would look like from that, and I’ve got a significance level of 5% or whatever; that’s the end, I publish a paper, I’ve made a discovery.”

Yeah, that’s not the way science works. It’s not just a one-way system like that. You have to go around; there are two steps, two directions the logic must take you. And I think one of the pathologies is people publishing these little bite-sized chunks of papers, with results that are claimed to offer closure to an argument, to a piece of science, and that distorts how science actually works.

In cosmology, we publish papers, but they’re mostly of the kind “we’ve reduced the uncertainty in some parameters by this amount, and we need more data to do more,” and we know that we don’t have a full test of the theory. It’s an ongoing process. So I think science is a process. It’s not a collection of final results, which is the way the literature often presents it, and that’s as true in psychology as it is in cosmology. I think one of the things that’s different about cosmology, and physics generally, compared to subjects like psychology and medical statistics and medical science, for example, is that our theories (I think the theory was on my right hand; I keep forgetting; I could swap them around, it doesn’t really matter) are inherently mathematical, and therefore in principle we can calculate quantitative consequences for the observational world from those models.

It’s often extremely hard to do that but in principle, we can do it. In cosmology, for example, we have to calculate detailed statistical properties of the distribution of galaxies, and that needs massive computer calculations to do it. So it’s not a back of the envelope calculation. But the elements of the model are mathematical, they’re mathematical functions and parameters. In a subject like psychology, there isn’t that kind of model for how brains work or how consciousness works or whatever. So it’s very much harder to take a calculation from a theory into the data space. It’s very hard to make that a quantitative argument, and that’s why generally, people don’t talk about their actual hypothesis because they can’t frame it in a way that can be easily compared with observations.

So there’s a difference there, a really big difference between the fields, but I think it does illustrate that despite the differences there’s commonality in the sort of process involved. You have free parameters, you have an unknown aspect of your theory, and before you can make statements about that theory, you have to reduce the uncertainty in the free components of the theory. Otherwise, it’s like trying to nail jelly to a wall: there’s wiggle room, and you can avoid a real test. You don’t test the theory; you test one particular manifestation of it, and there are many possible manifestations.

Should I go back to the Multiverse? I’ve spoken for a while, but I’ll just say a thing about the Multiverse. First, one thing about the Solvay conference, if I can, just in passing, which is that I don’t think we should be too hard on the people in the photograph back in 1927. In 1927, quantum mechanics was very new. It was very shocking to physicists because it had implications that were totally different from classical physics as far as anyone could understand. So the fact that they came up with a weird interpretation back then, we can forgive them for that. It was shockingly new, and they were a bit freaked out by quantum mechanics, as we remain today.

I think the thing we should really be ashamed about is that nearly 100 years later people are still talking about the same interpretation. We should really have gotten over that by now. It’s not their fault, really; it’s that we’ve had plenty of time to think about it and we haven’t come up with a better way of interpreting quantum mechanics. The other thing I’ll say about that is that if you talk to most physicists, I don’t think they have really worked out what they think about the probabilistic aspect of quantum mechanics. The classic quote is “shut up and calculate.” So there’s no question that quantum mechanics, however you think about it, ontological or epistemological, is a very effective way of calculating the outcomes of experiments, and that’s what most physicists spend their day job doing. And they don’t really think that much about the philosophical implications. That’s my view. I’m not saying that they shouldn’t; I think they should. But you can be a perfectly good physicist by calculating and so on without thinking too much about these deeper questions.

What else was I going to say? Multiverse? Oh, the Multiverse, yeah. Well, what should I say about the Multiverse? If you’ve read Max Tegmark’s papers, you realize that he wandered off into the Multiverse a long time ago, and it’s very unlikely that he’ll ever come back, I think. There are different kinds of Multiverse, and I think the name can be quite misleading. We don’t know how big the universe is. My definition of the universe is that the universe is everything that exists. I think that’s a reasonable definition, and by definition there can only be one of those, because everything that exists must be in the universe. So Multiverse: there can’t be more than everything that exists, but you can have a very big universe, a very big universe in which different parts have quite different properties from the part we happen to live in. Some people call that a Multiverse. I would just say, no, it’s a big universe, and I have no problem with that.

We have to accept, though, because light travels with a finite speed, that we can’t directly observe parts of the universe that are outside our causal range. So it’s not something that I worry about, because you’ll never be able to measure it. It may be, of course, that some theory of fundamental physics (there isn’t one at the moment) turns out to predict that the universe is much bigger than the patch we live in, and makes predictions about what properties the laws of physics might have in the regions we can’t see. That would be interesting, but it wouldn’t be amenable to direct test.

The other version of the Multiverse is the many-worlds interpretation of quantum physics, whereby the universe splits whenever a measurement is made: if the spin of an electron can be up or down, there’s a universe in which it’s up and another universe in which it’s down. I think that’s just absurd, and I’ve heard very distinguished physicists talk about this very seriously, and it just doesn’t make any sense to me. It seems much clearer that the possibilities of up and down are in your head or on your piece of paper when you do the calculation; it doesn’t mean that a whole universe is created just so that you can physically realize those two possibilities. You will only ever measure one of the two in the universe in which you live.

So that duplication seems totally unnecessary to me; multiplication rather than duplication, I guess you’d call it. Metaverse? Yeah, well, that’s Facebook. It’s interesting that these more speculative ideas like the Multiverse are, for those members of the general public who are interested in watching YouTube videos about theoretical physics and things like that, often presented as what theoretical physicists think now, as if everybody believes in the Multiverse. I think the Multiverse is actually a rather minority view amongst theoretical physicists, largely because, as I said before, most of them don’t think about it, they just calculate, and that’s better, I think, than conjuring up an infinite number of imaginary universes. One final comment, though, going back to the initial-condition problem I talked about before and the problem that we can’t unify gravity with the rest: Einstein’s general theory of relativity is the theory of gravity we work with at the moment. It may turn out to be wrong, falsified in some way, but it’s consistent with all the experimental data we have now. Einstein’s theory of gravity is actually a theory of space and time; gravity manifests itself as distortions in space-time. It’s rather different from the theories of particle physics we heard about from Anton, and it’s that crucial difference that makes it very difficult to put those theories together in a unified theory.

One of the contenders is string theory. String theory also bears some responsibility for the ideas of the Multiverse, because of the string landscape, which is an idea related to the Multiverse. We heard yesterday that string theory has the problem that it’s not predictive. Basically, it’s a theory of very high-energy particle interactions, a theory perhaps of the very high-energy state our universe began in, but in order to be successful, it has to predict the low-energy physics that we know around us today: electricity, magnetism, the strong and weak nuclear forces, and gravity. That has to come from the theory, and it turns out that string theory can more or less predict any low-energy physics you can imagine. Some people criticize it because it’s not predictive. I would say that criticism is a bit off the mark: you could argue that string theory is the most predictive theory there has ever been, because it can predict absolutely anything you want, and therefore it can’t be falsified. If all possible outcomes of the theory are possible, how on Earth do you falsify it? Any measurement will confirm it, any measurement in any low-energy state. So I don’t think string theory is going to provide an immediate answer to the problems we have in cosmology or in fundamental physics, because it just doesn’t have that predictive power.

Sorry for rambling on; I was thrown into that at short notice, but I hope I’ve put a few ideas there which are interesting.


[Greg Glassman]
And I have a quick question. So, in your really big universe model, there could be potentially a big bang elsewhere. So my question is, what would that look like?

[Peter Coles]
Well, you wouldn’t be able to see it. There are interesting ideas here. If you go back more than half a century in cosmology, before the 1960s and the discovery of the microwave background, there were two viable theories. We knew the universe was expanding, and the two viable theories at that time were the Big Bang and the steady-state theory. The steady-state theory said that the universe was expanding, but its density didn’t go down. Most things, when you expand them, get less dense. The steady-state theory involved the continuous creation of matter to fill in the gaps caused by the expansion of the universe, so the density stayed constant. This was Fred Hoyle and so on. Now, in order to make this work, they introduced a concept called the C-field; in physics parlance, it’s a scalar field. Now, the discovery of the microwave background basically showed that the universe is not in a steady state: in the past, it was very different from what it is now. So that theory eventually was rejected, but more recently it has come back in a different guise, which is that there’s a thing called cosmic inflation that is supposed to have happened in the early universe, and one of the possible ways of realizing it is to have a scalar field with the same mathematical properties as Hoyle’s C-field, but instead of creating, say, atoms or protons to keep the density constant, you actually create whole universes. So you have creation events, or Big Bangs, on a very large scale. The universe is steady-state on that scale; there are all these Big Bangs going off all over the place, but we’re in one little bubble created by a particular big bang, and we can’t see out of that bubble. So you can’t observe the other Big Bangs. They may exist, or they may not. I think the only hope of making a fully scientific theory out of that kind of notion is if the scalar field that gives rise to the prediction of other universes admitted some other test you could do, one which necessarily implies the existence of this field and therefore the other universes; but you still wouldn’t see…

[Greg Glassman]
In this ever expanding universe, there’s another one also ever expanding, there isn’t a moment of intersection?

[Peter Coles]
No, because the space in between the Big Bang events is de Sitter space; it’s expanding exponentially in time, so these things are accelerating away from each other unless they hit each other when they’re just born, like two bubbles in a glass of champagne. But if they don’t hit each other straight away, as soon as they nucleate, they’ll be carried away from each other with increasing speed, so you can’t catch them. People have actually claimed to have seen indications of bubble collisions in the early universe in the microwave background, but the statistical evidence for that is disputed; those claims have not met with general acceptance, let’s put it that way.

You look puzzled, Anton.

We got another mic?

[Anton Garrett]
I’m confused on a far higher plane now, Peter, thank you. It occurred to me in real time while you were talking that if the interactions between particles in Bell experiments are deterministically superluminal and/or acausal, you might just be able, in the deeper hidden-variables theory, to access those other universes in the Multiverse theory. That’s an exciting possibility, but really, I think a lot of this sort of speculation would be laid to rest if only we could get a quantum theory of gravity and unify it with the other forces. I’ve got something else I’d like to say, but not now: a two-minute knockdown of the many-worlds interpretation of quantum mechanics. Let’s stay on cosmology for now.

[Peter Coles]
Can I just come back on that? One thing I forgot to say: you mentioned the acausal nature of hidden variables. If there are hidden variables, and as far as I know that’s a viable interpretation, there has to be acausal or, if you like, superluminal communication. Now, we don’t have a unification of quantum theory with general relativity, and as I said before, general relativity is at its core a theory of space and time, and the causal structure of space and time is built into general relativity. It’s perfectly possible that if there were a unified theory of quantum mechanics, quantum field theory, and general relativity, there would be a departure from that causal structure at a microscopic level, because general relativity is a classical theory; it doesn’t have that. So it’s not obvious to me that if you go back to the very origin of the universe, the very big bang, the causal structure was the same then as it is now. It’s set by what’s called the signature of the metric; basically, that could change.

[Greg Glassman]
I have one more question: what would you hope to see, what would be the most amazing thing to see looking through a telescope, the kind of thing that makes a career?

[Peter Coles]
Being well-known can be for many reasons, right, and not all of them are good, I think. So, I have a suspicion. Our standard model of cosmology is based on general relativity and the standard description of matter, which Anton described, in terms of the four forces. But we know that we can’t do that consistently; we can’t put general relativity and quantum physics together in a mathematically satisfying way. Our standard model of cosmology is a sort of shotgun marriage between these two things, and it’s very uncomfortable, because there are aspects of our standard model of cosmology which we don’t understand at all. There’s dark energy; we don’t know what that is. It fits the observations, but we have no idea what it is.

I think the most exciting thing that could happen in cosmology is that someone shoots down the standard picture, or replaces some of the vague elements of this theory with things that are well-defined and derived from a deeper physical principle, which may come from a unified theory of quantum gravity or from some unexpected direction. So I would say that the most exciting thing we could see through a telescope would be evidence of the breakdown of our standard model. And actually, particle physicists, who have a standard model of their own, have suffered for many years from the fact that the measurements fit the standard model and they can’t do anything to break it. Everything seems to converge to the standard model. That’s a lot of what Sabine talks about in her book: nobody really knows that these hypothesized particles exist, or that there’s physics beyond the standard model; we just can’t measure those things.

So, in cosmology, we could, with the Euclid satellite, for example, measure departures from Einstein’s general theory of relativity. You might say that’s a negative thing, showing that the theory is wrong, but it would be a huge step forward in our understanding of the universe if a theory which we’ve embedded into all of our calculations turned out to be wrong. And that might dispense with the need for these dark energy and dark matter things. So the breakthrough would be evidence for some really fundamental change in physics. One thing I didn’t say in words earlier, when I was talking about going from theory to observations, is that everybody who works in cosmology knows that the standard model we have now, the Big Bang theory, is a working hypothesis. It’s got these elements in it which we don’t know particularly accurately, and we don’t even know whether they correspond to real things in the universe.

There is a parameter that controls dark energy, but what is dark energy? Can you measure it directly? The answer is that we can’t. It’s very unsatisfactory. These parameters are kind of placeholders for things that we don’t understand, and I don’t think anyone in cosmology would mind too much if that model fell apart; the person who made it fall apart would have their career made for sure. But it’s kind of hard to kill a theory that’s actually quite good at accounting for the data. Let me put this in terms of the free parameters. The model that we have, the standard model of cosmology, has six free parameters in it, six numbers. Those numbers specify all the things I talked about before, and they fit into a framework provided by Einstein’s general theory of relativity and the standard description of matter. Six free parameters. Well, I could give you the symbols: Omega matter, Omega Lambda, H, and so on.

So, I’ll tell you what they do. There’s one that tells you what the vacuum energy in the universe is; that’s called Omega Lambda. There’s one that tells you how much matter there is in the universe; that’s called Omega matter. There’s one that tells you how much baryonic material there is in the universe; that’s the material you and I are made of, protons and neutrons and the rest. There’s one that basically measures the curvature of space-time, whether the universe is Euclidean space or distorted as Einstein’s gravity would predict. There’s one that tells you what the initial irregularities were in the Big Bang; the Big Bang would not be a bang unless there were sound waves propagating through the universe, so we need to specify the form of those, and there’s one parameter needed to specify that. And then the final one is effectively the rate at which the universe is expanding now, the Hubble constant, which we can try to measure with nearby experiments. It’s actually quite hard to measure that constant.
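For orientation only, here is one way the six quantities Coles lists might be written down with round-number values in the vicinity of recently published fits; the exact base parameterization and best-fit numbers differ between analyses, so treat these as illustrative rather than authoritative:

```python
# Illustrative, approximate values for the quantities described above; real
# analyses use a slightly different base parameterization and quote uncertainties.
cosmology_parameters = {
    "Omega_Lambda": 0.69,    # vacuum (dark) energy density fraction
    "Omega_matter": 0.31,    # total matter density fraction
    "Omega_baryon": 0.05,    # ordinary (baryonic) matter density fraction
    "Omega_curvature": 0.0,  # spatial curvature (consistent with a flat universe)
    "A_s": 2.1e-9,           # amplitude of the initial irregularities
    "H0_km_s_Mpc": 67.0,     # present expansion rate, the Hubble constant
}
```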

So, that’s just six numbers. Martin Rees wrote a book called “Just Six Numbers”; that’s all, one chapter about each of these parameters. But to put it in perspective: the data sets that I work with, like the Planck map of the cosmic microwave background, have billions of measurements of temperature in different directions in the sky. It’s a map of the whole sky, the temperature of the Big Bang. We’re measuring positions and velocities of billions of galaxies out to high redshifts and enormous distances. All those data sets are put together, billions and billions of measurements at different frequencies and different positions scattered throughout space, and all of those measurements are consistent with the model specified by just six free parameters. That’s a success, right? Six parameters is quite a lot, but the amount of data it has to account for is enormously large. Everyone knows you can fit any kind of curve to data if you include enough parameters. But to reduce that enormous observational space to just six numbers, to compress it that far, is a great success. Whether we’ll be able to compress it even further, I don’t know.

There’s still uncertainty in the parameters, as I mentioned before, but that’s a measure of how successful the theory is. I mean, I think you can say a theory is successful if it accomplishes that kind of level of data compression: if you can write down a theory in just a few equations and it accounts for billions of measurements, that’s a pretty good theory, I think. A pretty good working hypothesis.

Oh, he bought it. I think it was published about 15 years ago or something. Yeah.

Epistemology Camp Series