Sunday at BSI’s 2024 Epistemology Camp was a loosely structured open discussion.

This second part starts with Gerd Gigerenzer discussing the evolution and interpretations of probability, emphasizing its connection to human beliefs and societal contexts. He explains that the classical theory of probability, emerging in the mid-17th century with Pascal and Fermat, was not merely abstract but intertwined with human rationality. He also touches on the ontic interpretation of probability, which considers probability inherent in nature and which began in the 19th century with thinkers like Charles Sanders Peirce and Gustav Theodor Fechner.

The discussion moves to a critique of the current state of scientific theories, focusing in particular on quantum mechanics and the many-worlds interpretation. Anton Garrett dismisses the many-worlds interpretation, arguing that it generates a contradiction: the same measurement interaction both does and does not split the universe.

The conversation then shifts to the application of probability in legal contexts, with Dale Saran and Dan MacDougal discussing the challenges of proving guilt beyond a reasonable doubt. They note the issues with expert witnesses and the misuse of statistical evidence in court. Gigerenzer concludes by pointing out the lack of statistical training among lawyers and judges, which contributes to miscarriages of justice, and suggests that better risk communication could mitigate these problems.

Transcript

[Gerd Gigerenzer]
I would like to make a few comments on probability, going back to the issue before, and pointing out one thing: that probability and its meaning are not just abstract but very much tied to people’s beliefs about society and the mind. The classical theory of probability emerged in the mid-17th century, usually dated to 1654, with the exchange between Pascal and Fermat. The classical theory was a theory about the human mind and human rationality, meaning probability was nothing abstract. It was called mixed mathematics; its subject matter was human rationality. In Laplace’s famous words, probability theory is nothing but common sense put into a calculus. He didn’t mean the common sense of everyone, but of educated people.

So, that’s one important observation. When people witnessed what happened in the French Revolution—the crazy, irrational things—that was the breakdown of the classical theory of probability. You could no longer argue that it captured the rationality of people if people were so irrational. That was the moment, around 1830 (it took a while), when the frequency interpretation came out. It was about frequencies, no longer about the mind. Before, in the classical theory, people like Hume and David Hartley thought that the human mind monitors exactly the frequency of things that happen—how long everyone is talking here, how often someone nods to someone else—and that made the unity between frequencies outside and degrees of belief inside. That unity broke apart in the 19th century, and three different interpretations were separated: degrees of belief or logical probability, frequencies, and propensities. Propensity is the design—a slot machine has a certain probability.

So, that’s the first remark I would like to make. The second one is that the early probabilists were all determinists, and that shaped the belief that probability is epistemic, because it reflects the lack of our knowledge. You mentioned Einstein and also Max Planck; this was all in this spirit. You may remember or have heard that at certain times in the 16th, 17th, 18th, and into the 19th century, some countries forbade gambling. It was not just because people would lose their money; in a determinist interpretation, if you throw dice, you ask God to decide what number comes up, because it’s determined. The argument was that it’s frivolous to occupy God all the time with your unimportant business.

The change to this ontic interpretation came, to the best of my knowledge, in the 19th century. Two people had the idea that probability might be outside, in the world: Charles Sanders Peirce and Gustav Theodor Fechner. You may have heard of Peirce but perhaps not of Fechner; he was the father of psychophysics. For Fechner’s interpretation, what he believed was very important. He believed in monism, the idea that the mind and the world are just two sides of the same coin. Since he believed people have free will, he inferred that nature must have free will too. That was the beginning of the interpretation that probability is actually in nature. This may sound funny, but the point is that these interpretations often come out of deep convictions that people hold. Later, this opened up the possibility of the Copenhagen interpretation of quantum theory. Max Planck, who is often said to have invented quantum theory, was totally unhappy about his discovery. He was a man who was looking for order, for certainty. He was a very reliable man, and his teacher told him that if you study physics, everything is already known, nothing is left to be discovered. And Planck seemed to like that. Then the thing happened that he didn’t want: he discovered something, and the thing he discovered didn’t look like determinism.

So, a final remark: statistical mechanics was also inspired by human behavior, by theories of human behavior. The short story is that a Belgian astronomer, Adolphe Quetelet, used normal distributions as a model of human observational error around the true position of a star. Quetelet was interested in human behavior such as suicide, murder, and other typical human activities. He could not predict who would commit, say, a murder or some other crime, but he observed that the proportion of crimes committed is different in London than in Paris, and that these differences are stable. So he concluded that maybe it’s like the observational error in astronomy, and that we need to build a sociology (he was one of the founders of sociology) that does not look at the individual but at the collective. The collective follows the same normal distributions. That’s the first part of the story. Now, Maxwell and Boltzmann were pondering the behavior of molecules, and they also found them highly unpredictable. Both of them had read Quetelet and, to make it short, thought maybe molecules are like humans: individually unpredictable, but predictable as a collective. Thus came statistical mechanics, and the revolution in physics from a deterministic science to a probabilistic one.

These are just some ideas to give you a perspective that much of what we’re talking about—is probability ontic or epistemic—comes from human beliefs and conceptions. When we listen here to the intensity of the arguments, we are still in this debate, and I must say I enjoy this meeting very much.

Last remark, on medicine: if you lived in the 19th century, there were three kinds of philosophies about what a doctor is. One is an artist, a person who knows better than you in everything. There are still some artists around. The two major competitors were the determinists, going back to Laplace, who were looking for causes, and the statisticians, who were looking for chances. There was fierce competition between the two. Claude Bernard, a famous 19th-century physiologist and the person who invented blinding in experiments, accused his statistician colleagues of not understanding, or not even wanting to understand, the causes, because they were looking not for causes but only for chances. It was actually Sir Ronald Fisher who merged these fighting camps into one, by making statistics and experimentation the same thing, dominated by statistics. That’s a short history to throw in, to show how much of what we are debating in physics or philosophy depends on human nature.

[William Briggs]
Some people were asking, how do we tie all this discussion of physics and everything back to broken science? The fact that there’s speculation and various theories being tried out and all that kind of thing—that’s in no way broken science. That’s all to the good, absolutely to the good. But the desire to have one’s theory be true can sometimes be overwhelming for some scientists. So, some scientists want to change the criterion of what makes a good model.

We say that what makes a good model, a good theory, is at the least its predictive usefulness. The best theory would be one where every one of its premises is true, and what it says about the thing being predicted is said with certainty and is always accurate. That’s the absolute best theory you could possibly have, and it is also a good predictive theory. But with theories like the many-worlds interpretation of quantum mechanics in particular, there are some people who want to say, "Well, you know, I love this theory so much, I want to change the criterion of a good theory to being that which makes a good explanation. My theory explains the data, or the thing I want explained, very well; therefore it’s a good theory." The problem is, of course, that for any given observation or any given idea, you can come up with innumerable explanations, and each one would be just as valid if it explained what you wanted to explain equally well. So that’s not a very good criterion to use. But they’re trying to sneak it in, in order to avoid the idea that, for a theory to be valid, you should be able to make good predictions with it.

You can’t make a prediction with the many-worlds theory. With some ideas of the multiverse, perhaps you can, but with the many-worlds one, you can’t make a prediction that can ever be tested. So they want to eliminate the idea of testing and making predictions as part of science and just turn it into explanation. If you allow that to happen, you open the door for everything. What we’ve seen—and I don’t want to make this at all political—in various other aspects of scientific life is that the theory, in and of itself, becomes more important, and therefore we don’t need to check our predictions. We see this in climate change and so forth. The seriousness of the charges becomes the justification for the theory more than anything else.

That brings us right back to philosophy, and to ideas like the precautionary principle and whether it has any validity. I don’t think it does, but we can discuss all those kinds of things. It’s not that all this stuff about what’s going on in physics is in any way wrong or broken; that’s not the case. But when they try to change the criterion of what we mean by a good theory, or what a theory should be and what the consequences of theories are, that’s when it becomes broken. That’s what we have to keep an eye out for.

[Anton Garrett]

Cue the many-worlds interpretation of quantum mechanics. There are several things which make it terminally ill, but I’ll give you the one which absolutely does it in. When you make a measurement of a physical system in quantum mechanics, all quantum mechanics can tell you is the probabilities of the differing possible outcomes. The many-worlds interpretation says that when you make the measurement, each one of those outcomes is realized in a different universe; the universe actually splits. The resulting universes are always inaccessible to each other, which makes this slightly different from the cosmological multiverse that Peter was talking about.

The knockdown of the many-worlds interpretation is this: you can, if you like, treat the measuring apparatus and the thing you’re measuring as a single quantum system, and study the interactions within it (which you might say comprise the measurement) from the quantum mechanical point of view. It’s the same interaction, and if you do that, you do not get any splitting. You cannot have the universe splitting and not splitting at the same time. So that is the contradiction. That’s the deepest reason why the many-worlds interpretation is nonsense.

Now, about that Solvay Conference: essentially, what came out of it, and what Peter summarized in the words of David Mermin, is "shut up and calculate," which became known as the Copenhagen interpretation. I fully agree with Peter that in the 1920s it was the only viable way to proceed, because people really didn’t understand what was going on at a deeper level. But I would say that responses to it, such as the many-worlds interpretation, are not just useless; they are actually worse than useless, far worse, because they provide a rationalization for why you might never be able to go further. I think you should do your utmost to go further. Who wants the mic?

[Greg Glassman]
Yeah. Questions?

[Dale Saran]
Not a question, but a response to what you said, Matt. I’m a lawyer, not a physicist. I’ve dabbled in this, and to the extent that I’ve been around Greg, it’s gotten to me through osmosis. But I think there’s an important thing—I’ll relate it to what I do, and it goes specifically to what Matt’s talking about. When you’re trying to prove something in court—and I was a defense attorney as well as a prosecutor—it reminds me of this notion: there’s an infinite number of ways to be wrong. That’s the problem, it seems to me, with the frequentist approach.

In court, a series of events has happened already. There’s an event; there’s a dead body, and the most likely interpretation is frequently "my client did it." So the government proceeds, and they’ve got to establish by a certain quantum of evidence that, in fact, your guy is the guilty party. What defense attorneys learn to do—at least the good ones and the ones that I learned from—is take that set of data and really construct a frequentist approach, which is: let’s suppose for a moment—horror!—my guy actually did it. Let’s suppose he is the guilty bastard, and I know it. My job is to take the data that I have (the data I know the government’s got, that they’ve turned over in discovery; I know all their witnesses) and construct an alternative hypothesis that is consistent with the evidence and will yield this result: my guy walks out of court at the end of the day.

Having gone through that process over and over again with different sets of facts, you come to realize, it seems to me, that the frequentists are patting themselves on the back when all they’re doing is constructing one of an infinite number of possible stories about why this isn’t what you think it is. I think that’s really what’s going on with the frequentist approach. You’ve produced one of innumerable cases of "it’s correlated with, it’s consistent with," to use what Matt talked about, consistent with the data. But in fact, you could be wildly wrong, and you could be confirming an obviously incorrect theory. It goes on all over the place; it’s just a way of thinking about the world. So, for whatever that’s worth.

[Anton Garrett]
Do you have anything that you might say about how to combine quantitative DNA evidence and likelihoods with qualitative prior information about the case?

[Dale Saran]
Yes, but not something I’m prepared to do right now. Yes.

[Dan MacDougal]
If the glove doesn’t fit…

[Dale Saran]
Right. If the glove doesn’t fit, you must acquit.

[Peter Coles]
So, can I just make a couple of comments about this? It’s something I’ve often thought about: probability in the courtroom and in forensic science. And of course, Laplace’s great essay on probability went into the role of probability in jurisprudence.

One thing that’s different in science (I was talking about cosmology because that’s my field) is that it’s a process. As I said, when you publish a paper, it sort of implies a closure to that process which is not real; it’s artificial. We just carry on, hopefully getting closer and closer to complete knowledge, though maybe never reaching it, and we’re all quite happy that our hypothesis is a working hypothesis. It might be a good one or a bad one, but that’s the way we do it. We don’t have the constraint you have in a court of law, where you have to end at some point and the defendant is either guilty or not guilty; it’s forced into that binary choice.

One of the issues I’ve always had as a Bayesian (I was converted to Bayesianism quite a long time ago, and it’s worth mentioning that almost everybody who works in my field now uses Bayesian techniques, for the reasons I tried to elucidate before; that’s not true throughout physics, but it is true in cosmology) is this: in a Bayesian approach you have model comparison techniques. Your data may favor one model over another, and as you make measurements or reveal more data, one model may become more probable than the other. But it’s very difficult to get absolute certainty at the end of that process, because it’s a never-ending thing. You might reach a very high probability for your model and a very low probability for the alternative, and then you ask yourself: what does reasonable doubt actually mean? Is it ever unreasonable to doubt? I have a big problem with that. If I were ever selected for a jury and instructed by the judge that the weight of evidence has to be beyond all reasonable doubt, I would ask: what do you actually mean by that? I don’t know, because I have doubts about lots of things, and those doubts are reasonable. Surely it’s reasonable to live with uncertainty rather than forcing yourself to believe or not believe. But you’ll never close a court case if you have that attitude, so I’d probably be excluded from a jury.
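A minimal sketch, in Python, of the Bayesian model comparison Coles describes. The prior odds and likelihood ratios below are invented purely for illustration; the point is that each observation multiplies the odds by a likelihood ratio, so a model’s probability can climb very high without ever reaching certainty.

```python
# Bayesian model comparison: posterior odds = prior odds * Bayes factor.
# All numbers below are invented, purely for illustration.

prior_odds = 1.0  # start indifferent between models A and B

# Each observation contributes a likelihood ratio P(data | A) / P(data | B);
# values above 1 mimic data that favors model A.
likelihood_ratios = [2.0, 1.5, 3.0, 2.5]

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr  # update the odds with each new observation

p_model_a = posterior_odds / (1 + posterior_odds)
print(f"P(model A | data) = {p_model_a:.3f}")  # ~0.957: high, but not 1
```

However much data comes in, the posterior probability approaches 1 without reaching it, which is exactly the tension Coles sees with a court’s demand for a final, binary verdict.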

[Dale Saran]
Yeah, there’s a great definition of what constitutes a reasonable doubt: the standard bench-book jury instruction. A reasonable doubt is neither fanciful nor speculative; it is suggested by the evidence, or by the lack of evidence; and it is one that leaves you thinking, "I’m almost sure, but I’m not willing to throw a guy in jail, because I have a doubt that’s neither fanciful nor speculative." It’s a great definition. I’ll send you a copy; you’ll love it.

[Dan MacDougal]
I want to speak to this, because I’ve tried about 150 criminal cases as a prosecutor and about 100 as a defense attorney. When I was a prosecutor, I felt like the biggest obstacle to winning the case was the burden of proof, and so I developed several lines of argument to make that mountain smaller and smaller and smaller. Then, when I switched sides to the defense, my working hypothesis was that there’s only one defense, and that’s reasonable doubt, and there’s a multitude of ways to create it: alibi, justification, some other dude did it, what else. So I developed a whole line of arguments based on reasonable doubt as a mountain higher than Mount Everest. Sometimes I think it comes down to this little joke: a jury, at least in America, is twelve people selected, chosen, and sworn to decide who has the best lawyer.

[Anton Garrett]
I think there are different standards of proof required in civil and criminal cases, and the one thing that bench book for lawyers that you cited cannot do is give numbers. It cannot say 1% doubt in criminal cases but 10% in civil cases; it can’t go numerical. You can see people in the history of probability, and in the history of jurisprudence, wrestling with an underlying quantitative notion without ever mentioning numbers.

[Dale Saran]
Well, it’s funny you bring that up, because, to return to p-values and bring it back to what we’re talking about: how bad, how awful, how much injustice would there be if we said, well, reasonable doubt really means p equals 0.5? How atrocious the outcomes would be if we used that kind of mathematical pseudo-certainty.

[Greg Glassman]
And your jury pool is people that have nothing to do.

[Dale Saran]
Right. They’re getting paid 10 bucks a day to be there. You know, the other joke is that a jury is made up of the twelve people who weren’t smart enough to get out of jury duty. So…

[Peter Coles]
I mean, the other thing is that expert witnesses often present evidence in the form of p-values, and they’re often very misleading. And we know of many miscarriages of justice which were caused, essentially, by expert witnesses who talked a load of statistical nonsense.

[Dale Saran]
If you want to give yourself a horrible thought, look at all the pieces of scientific evidence that have turned out to be complete nonsense. I can think of several of them, but one was bite marks: the idea that bite-mark analysis was a science, and that some guy could come in and say, "Oh yeah, the bite mark on that body matches that guy’s jaw or teeth." There’s a lot of that even in what we accept in popular conceptions of these things.

DNA evidence: I was dating a gal here in the valley who was an associate professor at the ASU law school (one of the top 25 law schools in the country), and she was also the general counsel of a genomics company that did genetic testing. This was years ago, 15 years ago, and she was saying that the prevailing theory about genetic evidence at the time was that there’s this whole bunch of stuff in your DNA that we don’t know the meaning of; she used to say they call it "artifact," so that it doesn’t matter. And she used to laugh and say that ten years from now, this will be looked at as the dark ages of genetic technology. And yet we were using it to convict people, to send people to jail forever, on a sort of chained probabilistic reasoning: well, why is your DNA in that location? So a lot of science can be used to produce massive overcertainty.

[William Briggs]
K just showed this to me: “The Colorado Bureau of Investigation finds DNA scientists manipulated data in hundreds of cases over decades.”

[Dale Saran]
Yeah, the one in the military was the mandatory drug testing. Mandatory drug testing probably convicted more people, because here’s your client, who says, "I’ve never done a drug in my life." In the military, you’re subject to random urinalysis, which is a fairly well-understood science: we’ve got this down, we understand how it works. We pulled the urine, we’ve got the metabolite for a drug, and it shows up in your urine. And then there’s this sort of logical leap (the law struggled with this), which is: okay, it’s in my urine, but now you have to prove that the actual crime is willful use of that substance; that’s what makes it wrongful. So you get the innocent-ingestion defense. But the other problem is, what happens when the guy says, "I’ve never taken that in my life; I’ve never stuck that in my body"? What about all the alternative ways in which things can be metabolized? People were convicted; there’s a whole litany of cases over this. And then it later comes out: oh my God, what do you know, at the Navy drug screening lab the chemists were using the drugs themselves. So there are all these problems, and yet you would have innocent guys sitting in court. You’re sitting next to him, and he says, "I swear to you, I never did this; I did not smoke pot," but you’re sitting in front of the jury and they’re thinking, "Oh right, how did that get there?" And then of course there’s the lost second sample. It’s rife with the possibility of innocent people going to prison, and it certainly has happened, for sure.

[Gerd Gigerenzer]
And there is a reason for all of this mess: the law schools don’t teach statistical thinking. I once taught a course at the University of Virginia law school (I think it was the first one, and maybe the last one), and the studies show that most lawyers and most judges, including the federal judges I’ve trained, have no understanding of probability. As a consequence, you can mislead them, on top of what you said.

For instance, when I trained federal judges in the US, I gave them very simple tasks. Now, DNA evidence: the problem is not just that DNA evidence is difficult, but that people don’t learn to understand it. So let me do a test with you. You are accused of having committed a murder. There is no evidence against you except one thing: your DNA matches the traces on the victim. So you stand trial, and the judge asks an expert witness. The expert witness, who has never been trained in risk communication, may say, "Your honor, the probability that this match occurred by chance is one in 100,000." Okay, that doesn’t look good for you. So what do we do? We train the judges not to allow this statement, because it is a single-event statement and people are confused by it. Rather, the expert witness should say, "Your honor, out of every 100,000 people, one will show a match." And then you think: in this city there are 3 million people, so it’s one in 30. Suddenly it doesn’t look so certain. These are very simple tools of risk communication. We brought this first into the OJ Simpson trial, where the judge asked the expert witness to testify in frequencies, not single-event probabilities. The expert witness, a statistician, said "it’s the same anyhow" and didn’t understand that the risk communication matters. This could easily be trained, but it isn’t.

It’s the same problem in medicine: most doctors are innumerate. Ask your own doctor what specificity or sensitivity means; good luck. You may have one of the 10% or so who see through it, but most doctors cannot read an article in their own field, because it’s all randomized trials and statistics, and they don’t know what exactly an odds ratio is, or cannot evaluate the thing. This is the state of health care we are in. So doctors go by rumor, by what they have heard or where they earn money, and patients are often in the same state; they don’t take the effort to educate themselves. And there are in this country now an estimated 250,000 people who die every year in hospitals from preventable causes.
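A minimal sketch, in Python, of the natural-frequency reframing Gigerenzer describes, using the figures from his example (a one-in-100,000 random-match probability and a city of 3 million people). The function name is illustrative, not from any source.

```python
# Reframing a single-event probability ("the probability this match
# occurred by chance is 1 in 100,000") as a natural frequency.

def expected_chance_matches(match_prob: float, population: int) -> float:
    """Expected number of people whose DNA matches by chance alone."""
    return match_prob * population

matches = expected_chance_matches(match_prob=1 / 100_000,
                                  population=3_000_000)

print(f"Expected chance matches in the city: {matches:.0f}")  # 30
# Absent other evidence, the defendant is just one of about 30 people
# expected to match, so the match alone implicates him with odds of
# roughly 1 in 30, not 1 in 100,000.
```

The arithmetic is trivial, which is Gigerenzer’s point: the frequency format makes the correct inference almost automatic, while the single-event format invites exactly the confusion he describes.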

[Emily Kaplan]
Patrick did you want to say something?

[Patrick Whalen]
I had a question about that. It seems like where the rubber meets the road for the average person in all this is medicine and technology. And I guess it’s a pedagogical question. We’re pointing out all the enormous damage and risk that comes from the practice of broken science in society, and yet I’ve heard a few handy framing techniques that really would help and that are easy to understand (I don’t remember the Australian term, but, you know, easy to understand for guys like me). In the readings beforehand, I think you coined the term "fast and frugal heuristic." In the military, you’re always making decisions that matter, that might affect your health in a major way, with radically limited information, but you have to be able to make the decision quickly. And I think we all find ourselves in that situation, especially with respect to healthcare in contemporary society. So I’m wondering what the suggestion is. As Greg said, you’re not going to save the world, you’re not going to come riding in on a white horse, there’s no Messiah complex in all this. But is there a way to train a fast and frugal heuristic so that normal folks are less vulnerable to the predations of broken science?

[Gerd Gigerenzer]
Yes, yes. So what is a fast and frugal heuristic in the first place? A heuristic is useful in situations where probability theory doesn’t give you the right answer, that is, in situations of uncertainty. For instance, whom to marry: if you sit down and calculate, good luck. Or where to invest. So part of my research is that we design heuristic strategies for decision making.

I’ll give you one example. We worked with the German military when they were in Afghanistan. The problem is that there are checkpoints, cars come in, and too many people in the cars are being killed because the soldiers think they are enemies. NATO has no instruction for this except: first shoot in the air, then at the tires, and then at a person. A fast and frugal heuristic is a device that helps the soldiers make a decision fast in this situation, and it asks only three questions. First: is there one person in the car, or more? If there are more, fine. If there is one, that could be a terrorist, but you are not sure, so the next question is: does the person slow down when signaled? If not, that raises the alarm. Otherwise, a last question: is there a notice circulating about a particular car that may be problematic, like a red Honda? This is a fast and frugal heuristic. It can be executed very fast, it can be systematically trained, and it reduced the number of people (the false positives) being shot by more than half.

That’s one example. The same thing can be trained in healthcare. We have worked with US hospitals: a man comes into the hospital with severe chest pain, and the question is whether to send him to a coronary care unit or to a regular bed with telemetry. That needs to be decided soon, otherwise it could be too late, and there are similar kinds of decision trees to help doctors make these decisions quickly; that reduces the usual chaos you often find in these units.

So heuristics can be very useful in situations where you have no time and no possibility of calculating the right answer, because there is too much uncertainty. And the systematic study of heuristics is still not taught; that’s another problem. There are entire parts of science, in particular the currently dominant part of behavioral economics, that think of heuristics as something generally bad and believe that logic always gives you the right answer; there have been two Nobel prizes for that. We try to bring uncertainty back in. In human life, at least, there is uncertainty: you never know what the other person is going to do; you have to trust. And imitation and trust are heuristics that we all use to get along with one another.
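A minimal sketch, in Python, of the three-question checkpoint heuristic as retold above. The function and its action labels are illustrative assumptions, not the actual military protocol; what matters is the structure, in which each question either decides immediately or falls through to the next, the defining property of a fast-and-frugal tree.

```python
# Fast-and-frugal tree: ordered questions with early exits.
# Questions follow the checkpoint example as retold above;
# the action labels are illustrative, not an operational rule set.

def checkpoint_decision(occupants: int,
                        slows_when_signaled: bool,
                        matches_warning_notice: bool) -> str:
    # Q1: more than one person in the car? Treat as non-hostile.
    if occupants > 1:
        return "let pass"
    # Q2: a lone driver who does not slow down when signaled
    # is the alarming case.
    if not slows_when_signaled:
        return "escalate per protocol"
    # Q3: is there a standing notice about such a car (a red Honda)?
    if matches_warning_notice:
        return "stop and search"
    return "let pass"

print(checkpoint_decision(1, True, False))  # lone, compliant, no notice
```

Because most cars exit at the first or second question, the tree is fast to execute and easy to train, which is what drove the reported drop in false positives.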

[Greg Glassman]
Malcolm, will your thoughts survive lunch?

[Malcolm Kendrick]
Well, I’m just going to be very quick; it’s just a bit of information, really, because we were talking about cases where they got the statistics wrong. There was a professor, Roy Meadow, in a case where two children died of cot death. The odds of a cot death are one in 18,000 or whatever, and because two died, Professor Meadow said it’s 18,000 times 18,000, which gives some figure in the hundreds of millions, and on the basis of that very large figure the mother was sent to jail. So that was just to provide that little bit of detail about a case where this went horribly wrong.
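The statistical error in the case Kendrick describes can be made explicit with the round number he quotes. A minimal sketch: squaring the single-death probability assumes the two deaths are independent, which they are not if siblings share genetic or environmental risk factors (the 100x factor below is purely hypothetical, for illustration).

```python
# Meadow's calculation squared the single-event probability, which is
# only valid if the two deaths are independent events.
p_single = 1 / 18_000                 # quoted odds of one cot death

p_independent = p_single ** 2         # the kind of figure used in court
print(f"Assuming independence: 1 in {1 / p_independent:,.0f}")

# Correctly, P(two deaths) = P(first) * P(second | first). If a
# sibling's risk were, say, 100x higher (a hypothetical number),
# the joint probability is far less extreme:
p_second_given_first = 100 * p_single
p_dependent = p_single * p_second_given_first
print(f"With dependence: 1 in {1 / p_dependent:,.0f}")
```

And even a correct joint probability is not the probability of the mother’s innocence; treating it as one is a second error (the prosecutor’s fallacy) stacked on top of the independence assumption.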


