By William Briggs

Stick with me on this not-so-easy subject, because I’m going to reveal a trick used to make you “Follow the Science!”

Belief is an act. Uncertainty is a state. Decision is a choice. Probability is a calculation. There is no difference between belief and decision in the sense that they are both acts, just as uncertainty and probability are both understandings.

What should be obvious, but strangely is not in some quarters, is that belief is not the same as uncertainty, and that therefore decision is not the same as probability. Conflating them leads to The Science.

Take any proposition you like. Say, Y = “This die when thrown will land on 3”. And take any evidence you like; call it E. Then we can form the objective probability Pr(Y|E). This is the uncertainty in Y given the evidence—premises, assumptions, data, models, whatever you have. It is you who pick this. The E is subjective, the Pr(Y|E) is not.
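Here is a minimal sketch of that dependence in Python, using a few toy choices of E that are mine purely for illustration: the same Y paired with different E gives different values of Pr(Y|E).

```python
from fractions import Fraction

# Y = "This die when thrown will land on 3"

# E1: the die has six sides, one of which shows 3, and it is thrown fairly.
# By the statistical syllogism, Pr(Y | E1) = 1/6.
pr_given_E1 = Fraction(1, 6)

# E2: the die has twenty sides, one of which shows 3, thrown fairly.
# Same Y, different E, different probability.
pr_given_E2 = Fraction(1, 20)

# E3: the die has already been thrown and a 3 was observed.
# Now the evidence entails Y: Pr(Y | E3) = 1.
pr_given_E3 = Fraction(1, 1)

for label, p in [("E1", pr_given_E1), ("E2", pr_given_E2), ("E3", pr_given_E3)]:
    print(f"Pr(Y | {label}) = {p}")
```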

The probability doesn’t have to be a number, but depending on E it can be. When it is, it can be any value from 0 to 1, inclusive. False to True. Conditionally false or true, given E, or anywhere in between. Y can be necessarily false or true, too, but only if E itself is (with respect to “deeper” evidence).

There isn’t much dispute about this; what little there is (in varying interpretations) doesn’t interest us today.

Now if your E makes Pr(Y|E) = 1 (or 0), then you move to act, which is to say, you move to believe or to decide with (conditional) certainty. But what if Pr(Y|E) = p, with p somewhere in the interval (0,1), and possibly not even a number but a vague notion?


There are two things people can do next, at least in thought, for we are talking exclusively about thinking today. The first is to bet, and the second is to believe. Both are acts.

To bet is to take some risk. You act as if Y is true, or false, while acknowledging that it might turn out to be the opposite of the side you are betting on. The extent to which you act as if Y is true, or false, is just as subjective as the choice of E.

Even if two people agree to the letter on all aspects of Y and E, and therefore they must agree on Pr(Y|E), they can still disagree in infinite ways about how to bet about Y, about how to act as if it is true or false. In a trivial example, the behavior (act) of a billionaire betting on the price of a stock at some future date will not be the same as the behavior of an obscure internet hundredaire. Everybody knows this.
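Here is a minimal sketch of that contrast, assuming an invented even-chance wager and log-utility of wealth (both choices are mine, purely for illustration): the probability handed to each bettor is identical, but the sensible act is not.

```python
import math

def expected_log_utility(wealth, stake, payout, p_win):
    """Expected log-utility of wealth after taking the bet."""
    return p_win * math.log(wealth + payout) + (1 - p_win) * math.log(wealth - stake)

def accepts(wealth, stake=100.0, payout=120.0, p_win=0.5):
    """Take the bet only if it raises expected log-utility over standing pat."""
    return expected_log_utility(wealth, stake, payout, p_win) > math.log(wealth)

# Identical Pr(Y|E) = 0.5 for both bettors; only their circumstances differ.
print("Billionaire accepts:", accepts(wealth=1_000_000_000.0))  # True
print("Hundredaire accepts:", accepts(wealth=150.0))            # False
```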

Without getting too deeply into it, we can also add to E to boost Y all the way to conditionally true or false, so that we believe. We add something like “Close enough.” But this maneuver varies widely and is highly conditional on the state of mind of the person moving to belief. I think the idea is (more or less) clear to most.

Recapitulation:

  1. For subjective reasons, we pick a Y whose uncertainty we would like to know.
  2. We next subjectively pick a suite of evidence E with which to judge Y.
  3. We form Pr(Y|E), which if E is rich enough may be a number in [0,1], or else it is just a notion.
  4. We bet or believe; we act, where acting may be a purely mental effort.
  5. Acting and believing are sensitive to, and dependent on, individual conditions, and so what is “best” varies almost without end.
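Here is a toy run-through of those five steps in Python, with every number invented for illustration; the calculation stops at step 3, and the split verdicts in steps 4 and 5 come from each person's own costs, not from Pr(Y|E).

```python
from fractions import Fraction

# Step 1: pick Y.  Y = "this die when thrown will land on 3".
# Step 2: pick E.  E = "six-sided die, one face shows 3, thrown fairly".
# Step 3: form Pr(Y|E).  This E is rich enough to give a number.
p = Fraction(1, 6)

# Steps 4 and 5: each person acts against their own costs, not anyone else's.
def act_as_if_true(p, cost_of_acting, loss_if_Y_and_no_action):
    """Act iff the expected loss of doing nothing exceeds the cost of acting."""
    return p * loss_if_Y_and_no_action > cost_of_acting

# Two people, identical Pr(Y|E), different stakes, different (correct) acts.
print(act_as_if_true(p, cost_of_acting=1, loss_if_Y_and_no_action=3))   # False
print(act_as_if_true(p, cost_of_acting=1, loss_if_Y_and_no_action=60))  # True
```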

The exception to this, or rather its natural completion, is where E itself is necessarily true, conditional on deeper evidence. In this case when Pr(Y|E) = 1 (or 0) one must believe, and must act as if Y is true (or false), though just how to act is again dependent and conditional. (Of course, irrational behavior can blow this up.)


That, my friends, is, or should be, all of science.

Problems come from every direction. Picking the Y is not easy and is subject to various uncertainties. Much error comes in picking E, where theories and models and data reside. Scientists always want you to believe that the E they subjectively picked should be treated as if it were necessarily true and not perhaps some (wild) guess, highly conditional. Yet change any part of E, even by subtracting or adding a single data point, and you change Pr(Y|E)!
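One hedged illustration of that last sentence, using a simple setup of my own choosing (a uniform prior over a success chance, so the probability of the next success follows Laplace's rule of succession): add or drop a single observation and Pr(Y|E) moves.

```python
from fractions import Fraction

def pr_next_success(successes, trials):
    """Pr(next trial is a success | uniform prior + the observed data).
    This is Laplace's rule of succession: (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# E   = "uniform prior; 7 successes observed in 10 trials"
print(pr_next_success(7, 10))   # 2/3
# E'  = E with one extra failure appended
print(pr_next_success(7, 11))   # 8/13
# E'' = E with one success removed
print(pr_next_success(6, 9))    # 7/11
```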

We go over examples like that all the time, and won’t belabor the point more today.

The big mistake, the one that caused science to lapse into scientism, or rather the point at which science becomes scientism, comes in forcing all to act, bet, or believe in the same way as scientists. Again, even if we agree with the scientist on Y and E, and agree exactly, it does not follow, at all, that we should act or bet in the same way.

Maybe it won’t surprise you that in statistics, p-values, Bayes factors, and other similar measures (there is a new one called e-values) make exactly this big mistake. They all move from probability, which is all scientists should announce, to an act, which they insist you follow. P-values and their ilk are all decisions, which we agreed are never one-size-fits-all, but which, in science, are treated as if they were. Follow the Science! follows directly from this error.
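Here is a small sketch of that complaint, with every number made up: the threshold rule hands the same act to everyone, while the same announced probability, run through two different loss functions, rightly produces two different acts.

```python
def p_value_decision(p_value, alpha=0.05):
    """The one-size-fits-all rule: the same act handed to everybody."""
    return "reject" if p_value < alpha else "fail to reject"

def my_decision(pr_Y_given_E, loss_if_I_act_and_Y_false, loss_if_I_wait_and_Y_true):
    """Act as if Y is true only when *my* expected loss says so."""
    expected_loss_act = (1 - pr_Y_given_E) * loss_if_I_act_and_Y_false
    expected_loss_wait = pr_Y_given_E * loss_if_I_wait_and_Y_true
    return "act" if expected_loss_act < expected_loss_wait else "wait"

# Same study, same announced probability, three different people.
print(p_value_decision(0.03))  # "reject", for everyone
print(my_decision(0.8, loss_if_I_act_and_Y_false=1, loss_if_I_wait_and_Y_true=10))   # act
print(my_decision(0.8, loss_if_I_act_and_Y_false=50, loss_if_I_wait_and_Y_true=1))   # wait
```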

Scientists should announce their Y and E, state Pr(Y|E), and then stop. Since the E picked by scientists (with some exceptions for mathematicians and the like) won’t be necessarily true, but only contingent, all need to consider what different evidence would do to Pr(Y|E).

In any case, it is always scientism to say “Pr(Y|E) means we should all do the following”, and this is so even if there are no problems whatsoever with Pr(Y|E). From this it follows that nearly all research that involves statistics is deeply saturated in scientism.

Which is why science turns out to be so broken. There is no such thing, therefore, as ‘Following The Science.’

I am a wholly independent writer, statistician, scientist and consultant. Previously a Professor at the Cornell Medical School, a Statistician at DoubleClick in its infancy, a Meteorologist with the National Weather Service, and an Electronic Cryptologist with the US Air Force (the only title I ever cared for was Staff Sergeant Briggs).

My PhD is in Mathematical Statistics: I am now an Uncertainty Philosopher, Epistemologist, Probability Puzzler, and Unmasker of Over-Certainty. My MS is in Atmospheric Physics, and my Bachelor’s is in Meteorology & Math.

Author of Uncertainty: The Soul of Modeling, Probability & Statistics, a book which calls for a complete and fundamental change in the philosophy and practice of probability & statistics; author of two other books and dozens of works in fields of statistics, medicine, philosophy, meteorology and climatology, solar physics, and energy use appearing in both professional and popular outlets. Full CV (pdf updated rarely).
