Greg Glassman – Ranch Jun 2023

Greg Glassman introduces the Broken Science Initiative, a new endeavor dedicated to exploring and disseminating knowledge about broken science. He discusses how a curriculum is being developed specifically for middle school students, with the goal of making intricate concepts more relatable. He also discusses how the initiative will delve into the replication crisis and its far-reaching implications, emphasizing the significance of scientific studies that fail to replicate. Finally, Greg ends with suggestions for further reading for those interested in learning more.

Transcript

Greg Glassman: Good morning.

We don’t get to do too many of these morning things, right? The
seminars.

Any questions? So, the Broken Science Initiative, what is that? Well, it's an effort to collect information about broken science and share it, explore it. Got some other ambitions and aims. Emily and I are working towards developing a curriculum for middle school kids, and you're part of that experiment.

My thinking is that if something can be explained to middle school kids in a semester, we should be able to hint at some of its importance and its scope to an audience of reasonably bright CrossFitters, which is an above-average kind of mindset, and have something resonate. This isn't the first time that someone's developed a curriculum, a book, a course of study through experimenting with audiences, or by taking a lecture series and condensing it.

Very often a lecture series will start with that aim, and we're about that task here. I'm gonna rip through some stuff. So I want to kind of set the stage and answer some questions; it helps me in what I'm gonna do, and in what you're gonna do in listening to this nonsense.

First of all, I didn't make up any of it. None of it's my work; none of it's the fruit of my labor, really. I've coalesced material from individuals that were well aware of each other's work. There's been a lot of collaboration. There's a big historical reference to this that'll all come, but I wanna make it clear that I'm not here saying that science is broken.

What I'm here to talk about is broken science, and there's a lot of it, and I can speak very specifically to it, cuz we're gonna go into a lot of detail: where it broke, who broke it, when it broke, the nature of the break, the consequences, who benefits, who loses. All of that's there to be looked at.

But I owe it to you to talk about what I mean by broken. I'm talking about science that won't replicate, and the replication crisis is too big an issue to go into here; I'll just hint at it being something that we could talk about at another time and place. And we can talk about where you find this broken science, science that won't replicate, and even the names for it, the terminology.

I'm giving a little warmup here, acutely aware of the time. I've already spent five minutes, and I know I've only got about an hour to do this, but I think we can pull this off. I'm giving you some background here, but I'm gonna refer to the science that works, the science that will replicate,

as generally as just science, sorry, or modern science. Okay. I don't mean science as it is practiced today; by modern science I mean science that comes out of that Aristotle, Bacon, Newton tradition that we all associate with technology and advancement and the wonderful achievements of science. And I need a name for the other science.

It appears on one of the slides here as consensus science. And it is that indeed; it's also found almost exclusively in academia. This science doesn't work in industry, largely because of its epistemic nature and the demands of deliverables. And so it's interesting, even in areas related to the replication crisis: who was it that discovered that?

It was Begley, of Begley and Ellis. There's a guy from Amgen that was highly suspicious of the preclinical science that was making it nearly impossible for his team to come up with successful randomized clinical trials. They were failing, failing, failing. And he became suspicious of the underpinnings, from industry.

Fields affected? Psychology, hit hard. But some of the best material you can find on broken science, even the term broken science, is not my own. I picked it because when you put broken science into Google, what you found was others that had written intelligently, thoughtfully, with concern, about science that wouldn't replicate.

So we just picked that terminology for the science that won't replicate. I said consensus science; we'll also refer to it as postmodern science. Again, a dig for sure, but not one of my own creation. I got this from Windschuttle and the people that edited David Stove's book; they've used it repeatedly, and I think it's reasonably accurate.

I think there's a schism that occurs in science that creates two sciences, and that's exactly what has happened. I think that postmodernism is a more than appropriate label. It's hard to suggest it without laughing, Gary, but it is indeed certainly of that anything-goes mindset that swept the universities.

So I've known for a while, and you know, a lot of this is inspired by what we did at CrossFit and what was going on at the university; that's really stuff for another discussion. But I'd seen academic science at work. And my upbringing, again, a subject for another discussion entirely, had me sensitive to what science is and isn't.

And as we go through this, I'm gonna, again, cheat the whole thing here, cuz now I have 52 minutes. What I'm gonna tell you is that I want you to pay close attention to anything relating to prediction, predictive capacity, predictive strength, okay? Prediction, prediction, prediction. Because it's on the subject of prediction, in the area of prediction, that modern science, the stuff that replicates, finds validation; that it becomes good, acceptable, useful, well done, you get a round of applause. Its validation comes from prediction. And the fracture that created the science that won't replicate is also a break in the area of prediction.

And the only rational trust you have for science, and I will make the argument, for anything, comes from prediction. So I'm gonna make a point of saying it: when I say prediction, listen carefully. And if that's all you get out of this, I think we get to the end zone, even in only 50 minutes. Ready? So here we go. Dale, move us on; you have these notes that are here.

Each of these slides is a poster. And this is so much better than dimming the light and signaling someone to put up another slide. I know that modern science right here makes claim to being able to answer what comes next, because the basics of a scientific model are predictions. We're gonna cover that in better detail, but that's not the only thing that claims prediction.

And these are strategies of prediction, and I even had to put consensus science on here. Had to do it. But it's interesting to look. Forget about, you know, some of these, like the spirit board; you can use it to contact long-lost relatives, right? Or find out if Janie's gonna go with you to the prom, I understand. And science doesn't address things on that level, but I just pulled out all the stops: if it lays claim to being able to predict something, let's throw it up there and look at it. And then I went on a little bit of a shopping spree. Might have been somewhat of a mistake. And you're welcome to help yourself to any of this stuff.

Resources. I got the Magic 8 Ball, that's unused. Do you remember that? And that would be under crystallomancy; this is the only crystal ball I remember. Does anyone remember this thing? You're old if you put your hand up. Yeah, I got a Ouija board, and that's up there, a spirit board. And again, all of this is free for the taking.

The books I got, and what I did to myself on Amazon with this stuff, was really probably a mistake, cuz I mean, I've been hit up, like, look. Tasseography: coffee-ground and tea-leaf fortune telling. Ceromancy and metallomancy: fire divination, wax- and lead-casting rituals, with actual examples here, which is super useful. Ornithomancy: this chick watches birds take off and can tell you what comes next for you. What's up? It's pretty cool, right? And most of these people offer workshops too. Pleiadian rune casting: it's throwing of stones or rocks. The BaZi, the destiny code. Man, if I had to be locked up with one, I'd take this one. It was really colorful, and I don't know what it's about, but it's gnarly. They had a guy, and they said that whatever happened to him happened under the influence of the rat and the pig, and I was like, I get it, that can't be good. Here, the future told by reading livers. Yeah. They take your liver out and go, you're fucked, right? I don't know. Palmistry, we got palm reading. Greek and Roman necromancy. Well, if there's ever a book that needed pictures, right, it'd be this one, but it's not; it's all words, Gary. The whole thing, the one book.

And that's all I brought with me, and I've contaminated my existence on the net by buying this shit from Amazon. I mean, really, it's crazy, this stuff. That, plus having it sent to Santa Cruz; like, I'm figured out. But let's make some important points here while I'm having fun at everyone's expense.

Modern science will distinguish itself, will hold as the demarcation between itself and all others, the predictive strength of its models. And that says something. I expect someone here to stand up and say, I've been doing extispicy for 14 years, and it works. I'm sure there's a lot of believers and a lot of belief here, but the beauty of modern science is that the demarcation between science and non-science is its predictive strength. Now we're already into some wonderful turf here, and we're just three minutes in. If you look up the demarcation problem, or the problem of demarcation, the demarcation issue, you look it up as a philosophical concept, what we see is that Popper, Sir Karl Popper, says falsification is the demarcation between science and non-science. And the Wikipedia article on the demarcation problem identifies it as several millennia old. So we've got a problem that's thousands of years old, and I'm gonna tell you right now, we can solve it right here, today, at this very instant.

And it's the predictive strength of its models. We're gonna talk about what a model is and how that comes to be. But that's one thing, very, very important: we have a major course correction on a philosophical error at once. And looking at this, the other piece here is that it's the demarcation, it's the basis for our rational trust, and the objectivity of science comes from the predictive strength of its models.

Okay. In fact, I would make the point that predictability is the cornerstone. And, boy, look this up. Look what you get when you put in predictability, cornerstone, trust, and stay away from all the legal trust stuff. It's pretty easy to see that there is no trust without predictability. Okay? It is its cornerstone.

Let's jump into it, cuz what we're gonna do now is take a look at modern science, how it is that this thing works. And you know, we're flying at a high altitude here, and I'm covering something that bright eighth or ninth graders would spend a semester on. And we're gonna do it in 47 minutes now.

But I want to assure you that the distillation is correct, even if it leaves out some detail that might be of use to you. And I first wanna talk about the methods of modern science. So much is said of the scientific method, and one of the more prominent of the more recent academic philosophers of science, Paul Feyerabend, denied a scientific method, and boy, there's a sense in which you can understand what may have been his intent in saying that, and I'll make that clear in short order here. First we start with an observation. It's a registration of the real world on our senses or sensing equipment, right?

Geiger counter, thermometer, wind vane, we saw it, right? It becomes a measurement when this observation is tied to a standard scale with a well-characterized error. We can also call this a scientific fact. Now, we can map a fact to a future, unrealized fact. That's a forecast of a measurement, and it constitutes a model, a scientific model.

It kind of sounds like anyone could do science, doesn't it? That is indeed the case. Validation is derived from the predictive strength of a model, graded and ranked as conjecture, hypothesis, theory, and law. And we're gonna talk about those rankings, but let's look again: we've got an observation that, tied to a standard scale with a well-characterized error, becomes a measurement.

We can also refer to that as a fact. We map that fact to a future fact, and that constitutes a model. These models are graded by their predictive strength, right, by their batting average, by their success, as conjecture, hypothesis, theory, and law. Now, this is sterile, it's high altitude, and it leaves out some really important pieces, and I'll tell you what they are.

One is the experiment. Designing the experiment that does the validation is hard as hell, and it's an act of sublime creativity.

And it's genius; we know it when we see it. It is required, but it doesn't eliminate the need for validation coming entirely from the predictive strength of the work. That part never goes away. And one of the things we hope you take away from here is that there is no model that is scientific that has not made a prediction, or done some retrodiction, that proved to be important. Absent predictive success, it's still not science, and I don't care who made it. I don't care how dire the warning. I don't care how many other scientists say, yes, this is what's coming. Until it's found predictive success, and only to the extent that it has, it's owed nothing.

It's owed nothing. So let's look at some criteria that are fundamentally ineluctable implications of that method. Modern science is the source and repository of objective knowledge. Where does that objectivity come from? It comes from the predictive strength. You might be guessing this is inductive; boy, it is.

This knowledge silos in models graded and ranked by predictive strength. We're hearing that again, right? Graded and ranked by predictive strength. A model is a forecast. A forecast of a measurement tells you: under these circumstances, this is what you'll see and measure. This is what the observation will be.
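
To make that concrete, here is a minimal sketch in Python; the measurements, the model, and the tolerance are all invented for illustration, not taken from the talk. A model is just a function that forecasts a measurement, and its grade is its batting average.

```python
# Minimal sketch: a model is a forecast of a measurement, and it is graded
# purely by its predictive success rate, its "batting average".
# All numbers here are invented for illustration.

observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 7.0), (4.0, 7.8)]  # (condition, measured value)

def model(x):
    """The model: under condition x, forecast the measurement 2x."""
    return 2.0 * x

def batting_average(forecast, data, tolerance=0.5):
    """Predictive strength: fraction of forecasts landing within the error."""
    hits = sum(1 for x, measured in data if abs(forecast(x) - measured) <= tolerance)
    return hits / len(data)

print(f"predictive strength: {batting_average(model, observations):.2f}")  # 0.75
```

Notice that nothing in the grading cares where the model came from, only whether its forecasts land; that is the point about method made just below.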

Predictive strength is the sole determinant of validation. God, that's important. It's the sole determinant, and this is why Feyerabend can be against method: because it really doesn't matter how you came up with E = mc², or anything else that has been found to be scientifically important. My father's line was: inspiration or perspiration, doesn't matter.

Does not affect the validity, does not enter into the validity,
is not part of how we assess its legitimacy.

Leading us to what we're saying: that validation and method are entirely independent. Now, they may not be in terms of batting averages or what you do. But it's interesting: even when Laplace tried to tell us exactly how he did what he did, the reaction was a combination of, on the one hand, I think we should listen to this guy, he's made more discoveries in a broader area than anyone since Newton, he took what Newton handed us and ran with it, maybe we should listen to him; and, on the other, these methods are impossible, they would never work. I think Laplace was correct. Let's talk about this model ranking and grading and be done with it.

You have this, you can pull this up online, it sits in front of you. A conjecture is an incomplete model, or an analogy to another domain. Okay? This is the instance when Claude Shannon said that information is like entropy, it's like energy, it should be measurable. And really the heart of this is the hypothesis, because you turn your conjecture into a hypothesis, it has some requirements, and as soon as it delivers on one instance, delivers the goods, it can make an argument to being a theory. A hypothesis is a model based on all its data, all data in that specified domain with no counterexamples (we can't know there's a problem with it going in) while incorporating a novel prediction yet to be validated by facts.

Remember our forecast of a measurement, right? A theory is a hypothesis with at least one non-trivial datum: you've done an experiment, done a test, and, hell, found the prediction validated. Does it mean you're done? Probably not, right? You still need p-values, right? No; we'll come to that. A law is a theory that has received validation in all possible ramifications to known levels of accuracy.

You probably don't need to worry too much about when you get to a law. It'd be nice to just come up with a theory that makes a non-trivial prediction of a measurement and turns out to work. Now, there's a lot here. But I'm gonna show you how simple the debasement is that turns this modern science into postmodern science.

And as I suggested earlier, it relates to prediction. Okay? So let's make the flip here, and here it is, debased. Number one: modern science is inductive, where conclusions follow premises with probability and not certainty. God, I love that. I got that from Martin Gardner in Scientific American as a kid, and you see differentiations, delineations of induction and deduction regularly.

And beyond that, from the seen to the unseen, from the specific to the general: it really works powerfully to lock onto induction being where the conclusions follow from the premises with probability, and deduction being where they come with certainty. And so, in our nomenclature here, the probability of the hypothesis given the data is between zero and one.
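
A minimal sketch of that in Python (the coin, the priors, and the data are invented for the example): Bayes' theorem turns the data into a probability of each hypothesis, a number between zero and one rather than a true/false verdict.

```python
from math import comb

# Two rival hypotheses about a coin, with equal prior probability.
# Invented setup: is the coin fair (p=0.5) or biased toward heads (p=0.8)?
hypotheses = {"fair, p=0.5": 0.5, "biased, p=0.8": 0.8}
prior = {h: 0.5 for h in hypotheses}

heads, flips = 8, 10  # the data: 8 heads in 10 flips

# Likelihood of the data under each hypothesis: P(data | H)
likelihood = {h: comb(flips, heads) * p**heads * (1 - p)**(flips - heads)
              for h, p in hypotheses.items()}

# Bayes: P(H | data) = P(data | H) * P(H) / P(data)
evidence = sum(likelihood[h] * prior[h] for h in hypotheses)
for h in hypotheses:
    print(f"P({h} | data) = {likelihood[h] * prior[h] / evidence:.3f}")
# fair, p=0.5   -> 0.127
# biased, p=0.8 -> 0.873  (probable, not "proved"; more data moves the number)
```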

That's the modern science view: that there is a probability of a hypothesis, and that it comes out between zero and one, versus being true or false. And then there's the postmodern science that we found has impacted the universities. And where this ends up, where we find this, and I failed to mention it, but the medical infection of postmodern science is really the reason for the total interest here.

It's in health that we find the greatest examples of the infection, and for the very reasons that we can give here. The demarcation of modern science from non-science is the predictive power of the models of science. Popper chose falsification as the demarcation between science and nonsense, with his terminology.

And I can hand him this: falsification can be seen as a necessary requirement for a meaningful assertion. But a scientific assertion has a requirement beyond being meaningful, okay? It has to make a prediction. The historical progress of modern science comes from the improved predictive strength of its models.

The irrationalists. Here we are introducing Popper, Kuhn, Lakatos, and Feyerabend. They are the philosophers of science that are the overwhelming, dominant force in academic philosophy of science, and it is, to this day, largely deductivist, and supported by an inferential statistics that doesn't use the predictive strength of the model as the sole rational trust, but has left in its place null hypothesis significance testing and peer review.

And so we see studies published that pass with flying colors the null hypothesis significance testing. They have great p-values, but they're not making novel predictions. What they are doing is trying to show an association, a link, between one thing and another, and then using, I think, the p-values to imply that there is a relationship of a causal nature when there's not.

So this is the debasement. And I wanna just take some time to look at null hypothesis significance testing, and in fact the p-value, for a moment. I know this isn't the most exciting topic in the world, but the p-value is the probability that... well, you have an alternative, and I'm like, Matt, lean forward.

You have some hypothesis that you're trying to show, something that has an effect, presumed positive or not. You wanna show that this has an effect. And so you form a null hypothesis that there is no effect. And then, in a wonderful bit of ad hocery, you get to choose a test statistic, and we make the assumption that the null is true, and then we get a probability for that data.
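
Here is what that procedure looks like as a minimal sketch in Python, with invented coin-flip data and the null simulated by brute force rather than looked up in a table:

```python
import random

# NHST sketch with invented data:
#   claim: the coin has an effect (it is biased toward heads)
#   null:  no effect (the coin is fair)
#   test statistic (our ad hoc choice): number of heads
heads, flips = 60, 100  # observed data

# Assume the null is true and ask: how often is data this extreme?
random.seed(0)
trials = 100_000
as_extreme = sum(
    sum(random.random() < 0.5 for _ in range(flips)) >= heads
    for _ in range(trials)
)
print(f"p ~ {as_extreme / trials:.3f}")  # ~0.028: P(data | null), NOT P(null | data)
```

Note what the number in the final line is: a probability of the data given the null, which is exactly the inversion being complained about next.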

And the problem with it is that it can logically shed nothing of importance on the subject of an observable. And you're dealing in a realm of inferential statistics where the probability of a hypothesis doesn't even have meaning; we can only talk about the probability of data. And what that means is that we now have a philosophy of science in practice at the university where we have flensed validation: the very thing that Elon needs to see to make SpaceX do what it needs to do, make those rockets do what they need to do, has been removed. And if we look at what it was replaced with... and by the way, this isn't my gripe; the griping about p-values and their misleading impact and the bad behavior they inspire and the overcertainty they cause is coming up on a hundred-year history, near a hundred years. Here's a fella, his name's Lambdin; somehow we picked up another I in there between the B and the D, but he's writing from Intel. Imagine that.

Imagine someone from Intel weighing in on p-values. I would think it would be important. Where we do see p-values in use is around quality control. It makes a lot of sense in that space, and I would imagine that Intel knows as much about quality control as any entity on Earth. But you go through here and you look at the things.

I'm not gonna make you do it all, but I'm going to make some comments about 'em. The p-value is the probability that the results will replicate if the study is conducted again? That's false. It is not, but it is widely believed to be. That we should have more confidence in p-values obtained with larger n's than smaller n's?

No. In fact, we can show that expanding the n will drop the p-value to whatever you want it to be. You can expand your sample size to make a p-value of your choosing; we'll show you how that's done. A p-value is a measure of the degree of confidence in the obtained result? Not a rational confidence, absolutely not. You're standing on that probability-of-the-data side of things, and you can make no important assertions about an observable on the other side of that. That's just how that's gonna be. But what's so fascinating here is that, and we can go through these, they all hurt. Look: the p-value is the probability that the results obtained occurred due to chance.

Very popular, but nevertheless false. Part of the way you can know it's false is that nothing's due to chance. Chance can't cause anything. Chance is your ignorance, that's all. It's when you just don't know anything. No knowledge: that's the state of chance. So let's look. This is the guy from Intel. Dale, flip me here, and let's go to our friend Gerd Gigerenzer, someone we've brought around the community, talked to. This is eminently readable material. There's no one here that can't go onto the internet and pull down Gerd Gigerenzer's piece on mindless statistics. And by the way, Dale, let's throw that other one back up here.

There's some rather unfair and nasty language here, and it comes right from the literature, the technical literature, on p-values. Ritual, sorcery, surrogate science, delusions, fad, idol, and dirty little secret: those have all been used over the space of about 70 years to refer to the mischief around p-values.

See, there are people that called the replication crisis before it happened. There are people that said, hey, this doesn't end well. Look what you're doing. Look what you're doing here. Okay, the ritual one: it was Gerd Gigerenzer that, in Mindless Statistics, used the word ritual. And someone had objected to that, saying that most people don't even know what the p-value is.

They can't even tell you what it's supposed to be. And Gerd responds: that's exactly right, that's an essential requirement of a ritual. If you knew, you'd stop doing it. And boy, that's a biting one here. But this is a list; these are very much like the other ones that Lambdin gave us.

It's a shorter list, though, and some things kind of emerge that you can see. You have absolutely disproved the null hypothesis? Look, we don't disprove or prove anything in science; it becomes probable or less probable. But the null hypothesis? No. I don't even need to know the structure of the thing you're claiming.

It's showing something about a hypothesis? It is not. It cannot. Next one: you found the probability of the null hypothesis? Again, uh-uh, p-values can't do that. They can't tell you anything about your theory or your null, only about the data that you obtained from it. Well, it's an important distinction. It's so crazy important.

And the blurring over it, the missing of it: they're like, yeah, whatever. And by the way, this list, this time, these are things that were polled of academics; they were asked which of these (they may all be false) are true. And I think it was close to 92% that picked at least one of them, and they're all patently, deadly, absolutely false. You've absolutely proved your experimental hypothesis, that there is a difference between the population means? No. You can deduce the probability of the experimental hypothesis being true? You cannot. You know, if you decide to reject the null hypothesis, the probability that you're making the wrong decision?

No, you have none of that information. It is not there. And these things are believed. And then the next step... okay, so people are sitting in their stats classes and they're not listening carefully enough. They're students kind of like I was, watching someone or something else and not paying attention, and they're coming out with these common misunderstandings, right?

Now, it was taught, and is taught, is being taught. And in fact, I have here in this book, An Introduction to Statistics for Psychology and Education by Nunnally, just in three pages, all of these false assertions about what a p-value indicates. We have a generation of scientists that were raised this way, that have been taught this.

What Jeff would do, Gary, is look at the paper we'd handed him, find the statistician, and see whether they were a frequentist or not, and with what kind of fire their frequentism burned, and then tell you whether he had time to read the thing or not. So what we have is, and look at this, each of these things lends itself to what?

To the imprimatur of, what, validation. Look at 'em: the probability that you'd get the same results if you did it again (why do it again?); this wasn't caused by chance; we found a very real effect. Look at them all. Each and every one of them gives the promise, the suggestion, the stamp of validation, and each is false.

Is that a coincidence? You know, it's funny, cuz when you corner guys on what's wrong with p-values: nothing, if it's interpreted right. If you say it the way I said it, like I do with my math teacher looking at me.

Almost nobody knows what it is. And what they have settled upon, and these are scientists, these are guys published in the medical literature, is a personal understanding that gives heavy implication of validation, of predictive strength, where none exists. Let's go to the next one.

This is a defense of p-values, and you know, if you remove all of those improper implications, right, all those misunderstandings, there's nothing wrong with it. You're left with something fairly innocuous, which basically tells you nothing worth knowing about observables.

It's a good test of your data against a test statistic. Fair? Potentially? No, not even that. I put in this paradox of p-values, finding it insufferably upsetting that all you had to do is increase your sample size and you could get the p-value of your wanting. And then I found it right here.

The paradox of the p-value: the associations between the p-value, the sample size, and the significance level. The figure shows that the p-value goes down as the sample size increases in the binomial experiment given in the text. The paradox lies in the fact that, given a particular significance level, say 0.05, one can increase the size of the sample to obtain a p-value that is significant. That seems pretty easy to jimmy, you know, but people are, what, clapping for your larger n. Right, man. Let's go to the next one, Dale.
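
Before moving on, a quick simulation makes the paradox visible; the 52% coin and the normal approximation below are choices made for illustration, not taken from the slide:

```python
from math import erf, sqrt

# Hold a tiny, practically meaningless effect fixed (52% heads vs. a null of
# 50% heads) and watch the p-value fall through any significance level as n grows.

def p_value(heads, flips, null=0.5):
    """Two-sided p-value via the normal approximation to the binomial."""
    z = abs(heads - flips * null) / sqrt(flips * null * (1 - null))
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # 2 * P(Z >= |z|)

for flips in (100, 1_000, 10_000, 100_000):
    heads = round(0.52 * flips)  # the same 52% proportion every time
    print(f"n = {flips:>6}: p = {p_value(heads, flips):.5f}")
# n =    100: p ~ 0.68916
# n =   1000: p ~ 0.20590
# n =  10000: p ~ 0.00006
# n = 100000: p ~ 0.00000   "significant" at any level you like
```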

I put this out here because it's gonna come up again and again and again, and I plan on doing other lectures like this on related materials. But I wanna point out some things that are manmade tools of science that do not exist in nature. And to think they do, to confuse the manmade abstraction with something real, is to commit a reification.

And we're already on the subject of that with uncertainty and probability right here; without even sharing that, it would be something that we could talk to students about. But nature doesn't have coordinate systems or numbers or ratios or rates or parameters, values, standards, definitions, scales, none of that.

Units, equations, math, graphs, dimensions, infinity, logic, uncertainty, calendars. This one, uncertainty, is where I'd say the difference is as simple as: does the uncertainty inhere in the dice, or is it in my head? Where does it sit? There's two answers to that, and it's debated and discussed, and you can imagine either being correct when you think about it hard enough. But one of the answers will lead you to science that won't replicate.

And that is if you think that it inheres in the object and not in your head. Which is a crazy thing to think: that the one-in-six, or the 50-50 of the penny, is a quality that inheres in the dice or the penny per se, and not a measure of what's in your head. So I put this up cuz it'll come up again; we can just talk about the uncertainty now.
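
A small invented simulation makes the point: the dice never change, only the knowledge does, and the probability tracks the knowledge.

```python
import random

# The same physical die; the probability we assign to "six" tracks what we
# know, not anything that changed in the die. (Invented illustration.)
random.seed(1)
rolls = [random.randint(1, 6) for _ in range(100_000)]

# Knowing nothing about a roll: six is one of six possibilities.
p_six = sum(r == 6 for r in rolls) / len(rolls)

# Told only that a roll came up even: six is one of three possibilities.
evens = [r for r in rolls if r % 2 == 0]
p_six_given_even = sum(r == 6 for r in evens) / len(evens)

print(f"P(six)        ~ {p_six:.3f}")             # ~ 1/6
print(f"P(six | even) ~ {p_six_given_even:.3f}")  # ~ 1/3: same dice, new knowledge
```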

And you know, if you wanted to look at this further, this is exactly what students would be faced with in an exposure. Again, the idea here, I think, is to teach kids what science is and, maybe most importantly in that regard, what science isn't, and to do that ahead of trying to learn about the solar system and periodic tables. And I think the periodic chart just got removed from some schools' curriculum. Is that right? Did anyone else see that? Yeah, I can't imagine what the offense was.

Books for further reading. David Stove's Popper and After: Four Modern Irrationalists, 1982; Anything Goes: Origins of the Cult of Scientific Irrationalism, '98; and Scientific Irrationalism: Origins of a Postmodern Cult, 2000.

I think those are three of the best books ever written on the
philosophy of science. And lucky for you, they’re the same book. And so if you
get any one of ’em, you can say you’ve read the other two and you won’t be
lying. What a thing that it took 18 years of publishing with different titles
to try and get someone to look at ’em and still no one would.

And so Emily and I, in our Broken Science initiative, purchased the rights to the book. And the problem is this: we're turning people onto it, but the book goes from $4 to $45. So what we're gonna do is clean it up, put it back on the net as a PDF for free, and then offer something bound and nicer and a little more permanent for people with a real interest in books you hold. Probability and Hume's Inductive Scepticism; The Rationality of Induction. My personal journey was, I struggled through David Stove's work and tried to entice all of my friends to look at it, and when I was done, I sat on the material in my head for about six months, and I said: if David Stove is correct, and I believe he is, that science grounds in induction, which grounds in probability theory, then there should be someone in the world of probability theory, on the other side of the world, looking at this problem, that has seen what David Stove has seen from the math, hard-science side of things.

And as soon as I had that thought, I went onto Amazon and found E. T. Jaynes's Probability Theory: The Logic of Science. And it is that work that brought me next to Matt Briggs. It was a crazy thing, but I could put a seeming failure of modus tollens into... I'll let someone else tell the story in detail, but it was the oddest search for a human being ever. I put in, "If the baby cries, we beat him; we don't beat the baby; therefore he doesn't cry." I put that into Google and I found Matt Briggs, and I knew that whoever had found that odd bit of thinking... it is bizarre, right? It's a seeming logical problem with modus tollens, and it was mentioned by David Stove in one of his books. And I was looking at the material in Stove's book and wondering, I can't be the only person that's read these books; there's gotta be someone else. So there is; there's like four or five of us, and I found the other guy.
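
For the record, the deductive form itself is valid, which is part of what makes the example only a seeming failure. A brute-force truth-table check, offered as a sketch rather than as Stove's or Briggs's own presentation:

```python
from itertools import product

# Modus tollens: from (P -> Q) and (not Q), infer (not P).
# P = "the baby cries", Q = "we beat him".
def implies(p, q):
    return (not p) or q

# A counterexample would need true premises and a false conclusion (P true).
counterexamples = [(p, q)
                   for p, q in product([True, False], repeat=2)
                   if implies(p, q) and not q and p]
print(counterexamples)  # [] -- the deductive form has no counterexample
```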

My father's Evolution in Science. You know, I've lived the book, I can't really speak to it. But there are other people in here that have read it, and much of the material that we've presented here came from that.

And it was material that he had developed as chief of development, head of development, at Hughes Aircraft Company, internal research and development, in their glory years. Dr. Briggs's Uncertainty is a brilliant work. I think he's an amazing character, and he's in my pantheon of brilliant philosophers of science, and he's still living.

So that's such an upside in terms of talking to him. Unless you got some of this stuff... if you wanna borrow my Ouija board, yeah, you can hit the others up.

Crowd: *Unintelligible*

Greg Glassman: Gerd Gigerenzer: mindless statistics, surrogate science, you know. Here he is 11 years later, still on the same damn problem.

And he's explaining that this thing's all gonna come down like a house of cards. And in fact, it did and has. Eminently readable. David Bakan was very, very prescient in 1966 about the whole problem; it seemed to be largely unnoticed. Trafimow and Michael Marks, their Journal of Basic and Applied Social Psychology, banned p-values, and it's a delightful editorial, fun to read. Wasserstein and Lazar, this is the American Statistical Association's statement on p-values, you know, sounding the alarm. I bring this up because I don't think this was coming without what Trafimow and Marks had done. And then I've got Briggs again, with Trafimow, on the replacement for hypothesis testing, which is a suggestion. Rather than testing hypotheses:

Why don't we test for what we're really after, which is the probability of a hypothesis delivering the measurement of an observable, right? And so, this got weird; it was science wars and meta-science. Take just the material that I've given you here today, where I've made such a stink of prediction: that it is the sole basis for any rational trust in science, that it is the reason for science being the objective branch of knowledge.

If you then look through these things, just throw them into Google: demarcation problem, foundations of statistics, philosophy of science. It's amazing how much you've picked up. It's amazing what a sharp tool that is. It's like being a kid and being handed a Leatherman tool or something.

It really works wonders on a lot of these conundrums. For the academic pursuit here, the idea is to deemphasize the deductive roots of Hume, Pearson, Fisher, Neyman, Popper, Lakatos, Kuhn, and Feyerabend, and replace them with Bayes, Laplace, Polya, Jeffreys, Cox, Shannon, Jaynes, and Stove. What's interesting, I think telling, is that our improved pantheon has one philosopher. The one that created the problem has one, two, three, four, five philosophers in it.

And our heroes here, Shannon, Cox, Jaynes, Polya, Jeffreys, Laplace, and Bayes, each made monumental contributions to hard science independent of their work in this field, with the exception of Bayes, whose singular work was in this space.

And, some food for thought. I think these are kind of cool; they sum up a lot of what we're doing here. My launch of this effort began with reaching out to people that knew more about this kind of thing than I did, and I hit 'em with: "When predictive value is replaced with consensus as the determinant of a model's validity, science becomes nonsense." That's just what we're talking about here. And what does that consensus look like?

It looks like your good p-value, right? You get the sign-off from the statistician and get published in the esteemed journal, the favored magazine. Popper: "I do not believe in definitions, and I do not believe definitions lead to exactness. Science, I do not try to define it. Definitions are either unnecessary or unconscious conventional dogmas." To which my father responded: "Popper eschews definitions, and then defines things. You can find all sorts of contradictions in Popper."

I thought that was cute. It's fun. It's also annoyingly accurate. This one, because you're here, Matt: this is after four chapters of Uncertainty, which is as tough a thing as you'll ever read, but nothing more important, I don't think. It seems to sum up so much of what we read in this single sentence: "Chance is unpredictability, which is a synonym of ignorance, which is what random means."

And I just think that's so good, and it's such a fun one to grow into. And this by Briggs: "The real problem is all parameter-based inference. Parameters tell you nothing worth knowing about observables; prediction does."

That speaks directly to p-values. That's parameter-based inference; it tells you nothing worth knowing.

It's funny: in the literature it's often referred to as "the thing we really want to know." It's always that dirty little secret, the thing we really want to know, and we're trying to infer the thing we really want to know from the data, and it's incomplete.
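
A sketch of what Briggs is pointing at, with invented data: given enough of it, a practically worthless parameter becomes as "significant" as you like, while the model still says almost nothing about the observable.

```python
import numpy as np

# Invented data: a real but tiny effect of x on y.
rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 0.05 * x + rng.normal(size=n)

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# Parameter-based inference: the slope is wildly "significant".
se = residuals.std(ddof=2) / (x.std() * np.sqrt(n))
print(f"slope = {slope:.4f}, t = {slope / se:.1f}")  # t ~ 16: p effectively 0

# Prediction: forecasts of the observable barely beat guessing the mean.
r_squared = 1 - residuals.var() / y.var()
print(f"R^2 = {r_squared:.4f}")                      # ~ 0.003
```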

“The more absurd, meaningless, or nonsensical, your
thesis, the odds of successfully fooling academics with your work increase in
direct proportion to the number of words and pages spewing.”

Okay? And it should have been "increases." But someone find that for me; if you do, there's a reward for you. I'm looking for the author of that, or a first instance. Okay, a little more depth here. Michael Crichton: The Andromeda Strain, Jurassic Park. A Harvard physician that never practiced a day of medicine.

He says, "I wanna pause here and talk about this notion of consensus and the rise of what has been called consensus science. I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels. It is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you're being had."

Right, let's talk about things that kind of get better with time. Right? And here's Richard Smith, 25 years CEO and editor-in-chief of the British Medical Journal, okay? CEO and editor-in-chief: "Stephen Lock, my predecessor as editor of the BMJ, became worried about research fraud in the 1980s, but people thought his concerns eccentric. Research authorities insisted that fraud was rare, didn't matter because science was self-correcting, and that no patients had suffered because of scientific fraud. All those reasons for not taking research fraud seriously have proved to be false, and, 40 years on from Lock's concerns, we are realizing that the problem is huge, the system encourages fraud, and we have no adequate way to respond. It may be time to move from assuming that research has been honestly conducted and reported to assuming it to be untrustworthy until there's some evidence to the contrary."

That's very recent, and it's damning; go through the list. He means these quite deliberately: no patients had suffered because of fraud, that it was rare, that it was self-correcting. And from Richard Horton, "the mistake, of course, is the thought that peer review..." and he's editor-in-chief of The Lancet.

And then we have the other big one, the New England Journal of Medicine. I've got quotes from Marcia Angell there, on the FDA's regulatory capture being a hundred percent, and having been for 30 years.

But “the mistake, of course, is the thought that peer
review was any more than a crude means of discovering the acceptability, not
the validity.”

And we learned here today that in modern science, validation comes from the, what, predictive strength of your model?

"Of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process-" ritual, even, right?

“-That helps make science our most objective truth teller.
But we know that the system of peer review is biased, unjust, unaccountable,
incomplete, easily fixed, often insulting, usually ignorant, occasionally
foolish, and frequently wrong.”

And then I leave you with another one from my dad, “The
worst taco I ever had was pretty damn good,” and we’ve got tacos and some
drink here.

Thank you. Any questions?

Crowd: Coach? I have
one question.

Greg Glassman: Yes.

Crowd: The purpose
behind your own initiative?

Greg Glassman: Yeah.

Crowd: What is the end goal? Is it just pointing out the fault? Is the end goal [inaudible]

Greg Glassman: A little history, too. You know, I'm not an endpoint guy, I'm a process guy. And I know that if anything's gonna improve... like, what was the CrossFit Health mantra?

Let's start with the fucking truth, right? And as long as we're lying to ourselves or in denial, nothing's gonna happen. But specifically, I believe a societal or institutional fix to be absolutely impossible. I have no dreams of that, just in the sense that I don't think there's any cop that thinks he's gonna be such a good cop there won't be any more bad guys, right? It's not that kind of thing. But I think what we can do is... and in leading CrossFit, I kind of got forced into this everyone-anyone distinction. I'd had a journalist that was teeing off on me, about to hit me with this: you gotta admit it's not for everyone.

I've heard it so many fucking times, and I never had answers for it. And this time it just came out of my mouth; I said, yes, but it is for anyone. And the difference between everyone and anyone: we all know what everyone is; anyone is he or she that rogers up and goes, I'm your person. And it got me to looking at the literary notion of Everyman; it gets a capital E.

It's somebody like St. Nick, Everyman, right? And I think not enough has been made of Anyman, and that's a different kind of soul. And so I don't have anything for every man, but I have something for any man. And if you wanna really think about these things, then I think we can make it a matter of school curriculum.

I'm certainly gonna arm my children with this, but I think what we can do is free people from the tyranny of shitty science and its purveyors. And boy, is this a good time for that. I don't know if this resonates with anyone here or not, but I think there's a profound need to start telling people what science is and what it is not.

My dad was of the view that you couldn't adequately understand what it was if you didn't know what it wasn't. And so we had that list of all the things that lay claim to predicting stuff, you know? Science isn't gonna tell you if Janie loves you or not, or whether Bob's gonna ask you to the prom, right?

It doesn't work in every province; it never claimed to.