Speaking Event: The Crisis Of Evidence, Or, Why Probability & Statistics Cannot Discover Cause

By William Briggs


Summary

The speaker, statistician William Briggs, argues that probability cannot show causation. Briggs says that even if it were possible to show causation, our current methods are all wrong. Instead, we should place far more emphasis on replication. Any prediction made by a model must be tested over and over if it is to claim any predictive value. Briggs stresses the need for careful interpretation of the results of risk analyses, given the various uncertainties that may attach to them. He uses the example of a study of exposure to PM 2.5 (fine particulate matter) and cancer risk. Simple observation shows a difference between the high- and low-exposure groups. Nevertheless, a standard procedure (called a "ritual" by Gigerenzer) is always followed: build a parameterized model; form a null hypothesis (even though we know it is false); calculate a statistic, which is a function of the data; and calculate a p-value. The p-value, which Briggs wryly calls "the magic number," gives the probability of the calculated statistic on the assumption that the null hypothesis is true and that the experiment could be repeated an infinite number of times. A small p-value means success, which leads to publication and further funding. But it says nothing about causation. If the data are changed only slightly, the p-value becomes larger and is no longer significant. The standard interpretation is then that the difference between the groups was caused by "chance." Briggs points out that there is no causal force called "chance": chance or randomness cannot cause anything. Ronald Fisher, the father of the frequentist school of statistics, was strongly influenced by Karl Popper's concept of "falsification." Popper's proposal was that only ideas that can be shown to be false by observation are scientific. Accordingly, statistical procedures should not prove a hypothesis, but show a hypothesis to be false. Fisher invented the p-value and the null-hypothesis ritual, which lets a scientist reject the null hypothesis rather than prove an alternative hypothesis. Briggs reminds us that this ritual proves nothing true or false, despite its modern use as a tool of verification. If the p-value is above a certain threshold (usually 0.05), the experiment is usually considered a failure. The scientist is supposed to say they "failed to reject" the null hypothesis. This, too, is meant to satisfy the Popperian idea that we can never prove a hypothesis (including the null hypothesis), only falsify one. In the Q&A session, Briggs was asked whether he rejects Popper's concept of falsification and emphatically confirmed that he does. As an example, no mathematical proposition can be falsified by observation. He added that in cases where the cause is known, probability and statistical models are unnecessary. For example, to know that a dropped pencil will fall, we need to understand essence, not probability. Scientists often use statistics and probability as a shortcut to discovering causation. In some cases, which he calls the epidemiologist fallacy, it is claimed that "x causes y" even though x was never measured; instead, a proxy for x is measured and y is estimated by a statistical model. Briggs says that every paper he has read on the airborne particulate linked to respiratory problems and cancer, known as PM 2.5, falls victim to this fallacy. Briggs also argues that scientists mislead when they present their results as risk ratios. If the probability of contracting a given disease as a result of PM 2.5 exposure is 2 in a million, it is divided by the 1-in-a-million probability of contracting the disease without exposure. That works out to a risk ratio of 2. It means the risk has doubled, which sounds frightening unless you know that the absolute risk is very small. Scientists often report risk ratios without the absolute risk. Briggs discusses one paper that estimates the cancer risk of Los Angeles residents from PM 2.5 exposure. It estimated that 400 people in the exposed group and 380 in the unexposed group would develop cancer. This difference of 20 people out of Los Angeles's 4 million residents is supposed to justify the EPA's new rules on PM 2.5. Yet the result does not establish that the difference in PM 2.5 exposure is what made the difference.
In fact, no individual's PM 2.5 exposure was measured; it was estimated from approximate PM 2.5 levels near their home addresses. Briggs's advice to scientists doing this kind of research is to use genuinely predictive methods rather than hypothetical populations and estimated parameters. The scientist should then wait for the data to come in to confirm the predictions. The model must be tested against reality, as physicists and engineers do. What you cannot do is declare that the model works before it has been tested.


The speaker, Mr. William Briggs, talks about using numbers and mathematics to try to show whether one thing causes another. He says that even if we could use mathematics to show causation, the methods we use now are all wrong. We need to test predictions over and over to see whether they are useful. Any prediction made by science must be checked again and again. Mr. Briggs says we need to be very careful when we look at numbers about risk. He uses the example of a study of tiny pollution particles in the air called PM 2.5. The study shows that people who breathe more high PM 2.5 get cancer more often than people who breathe low PM 2.5. But that does not tell us whether PM 2.5 actually causes the cancer. Whenever scientists want to test one of their ideas, they create a model to simulate it. Then they run mathematical calculations on the data to see whether the results are connected to their idea, or hypothesis, or not. The final outcome of the calculation is something called the p-value. The p-value is used as a measure of the difference between the new idea and the old idea, but it cannot say anything about what caused the new result. Scientists mistakenly believe that a small p-value means their experiment succeeded, or that it proves something about cause and effect, so they can publish the result in a journal and get more money for further research. But the p-value says nothing about causation. Many scientists are confused about this fact, and it has led to big problems in science. Part of the problem is that all it takes to make a p-value show an effect is a change in the data, which is very easy for scientists to do. Mr. Briggs calls the way scientists use p-values a "ritual," because it is performed routinely, and scientists have come to believe it has great power when in fact it does not. The p-value was invented by a scientist named Ronald Fisher. He wanted to use mathematics to show whether an idea was wrong, but he never claimed the p-value could prove whether an idea was right. If the p-value he designed was small enough, he thought we could reject the possibility that the scientist's idea had no effect. But that does not prove their idea is right. Many scientists are confused by this. Mr. Briggs says the p-value ritual cannot prove anything true or false. He even rejects the whole idea that we need to falsify ideas. For example, mathematical ideas cannot be falsified by observations. Also, for ideas based on observation, like the fact that when I drop my pencil it falls to the floor, we do not need mathematics to prove they are true. Scientists use statistics as a shortcut for finding causes. But instead of measuring the actual cause being investigated, they measure something else related to it. Scientists also confuse people by using something called risk ratios. If two people in a million get cancer from a particle in the air called PM 2.5 and one person in a million gets cancer without PM 2.5, the risk ratio is 2. That means the risk of getting cancer is doubled by PM 2.5. That sounds scary, even though the actual risk is still very small. One study tried to predict cancer in LA from PM 2.5. It said PM 2.5 caused 20 extra cancer cases out of 4 million people. But it never measured anyone's exposure to PM 2.5. Mr. Briggs says studies like this cannot prove causation. Mr. Briggs says scientists need to make predictions and then wait to see whether the real data resemble the prediction. The model must be tested in the real world, the way engineers test things. You cannot just claim that the model works before testing it.



The speaker, Mr. William Briggs, talks about using statistics to try to prove that one thing causes another. He argues that the way scientists currently use statistics does not actually show causation. He believes models must be tested many times to confirm their predictions. Briggs says we must interpret risk-analysis studies carefully. He gives the example of a study of a type of pollution made of small particles called PM 2.5 and whether it causes cancer. The study shows more cancer in the high-exposure group than in the low-exposure group. But that does not mean PM 2.5 causes cancer. The people in each group could differ in other ways besides their PM 2.5 exposure. When scientists want to test a hypothesis, they first build a model to simulate the situation. This includes a null hypothesis, which predicts that the scientists' hypothesis has no effect on the results. They then calculate statistics, and the final result is the p-value. The p-value shows how likely the statistical result would be if the null hypothesis were true. A small p-value is taken to mean the null hypothesis is false. That counts as success, so the study is more likely to be published and to receive more funding. But the p-value by itself does not prove causation. These statistical procedures were invented by Ronald Fisher. Fisher wanted to use statistics to prove hypotheses wrong, not right. A small p-value means the null hypothesis is rejected. But that does not prove the scientists' alternative hypothesis is true. Mr. Briggs rejects the whole idea that hypotheses must be falsified. For example, mathematical truths cannot be falsified by observations. Also, for things whose causes we know, such as a pencil falling to the floor because of gravity, we do not need statistics and probability. Scientists use statistics as a shortcut to finding real causes. Often the cause under study is not measured directly; instead, something related to it is measured. Mr. Briggs says every study he has read on PM 2.5 is wrong because it does this. Scientists often present their results using a confusing statistic called the risk ratio. If your chance of getting cancer from PM 2.5 exposure is 2 in a million versus 1 in a million without exposure, the risk ratio is 2. That means the risk has doubled, which sounds frightening, but the actual risk is still tiny. Mr. Briggs discusses a study that predicts cancer in Los Angeles from PM 2.5. It claimed PM 2.5 caused 20 extra cases in a population of 4 million people. But it never measured anyone's actual exposure. Exposure was estimated from PM 2.5 levels measured near their home addresses. Briggs's advice to scientists is to make genuine predictions and then wait for data to arrive to confirm the predictions. The model must be tested in the real world, as physicists and engineers do. You cannot say a model works before it has been tested.


Transcript

  • Transcript – Briggs, 2015 – The Crisis Of Evidence, Or, Why Probability & Statistics Cannot Discover Cause – YouTube

    <aside> 💡 So basically what I want to tell you is that probability and statistics cannot do what they promise to do, in its classical sense, and that’s to show causation.

    </aside>

    <aside> 💡 And that’s the philosophical topic. And I want to explain that first, and then I’m going to show you that even if we assume that probability and statistics can show causation, even if we do understand causation, the procedures that we use are wrong and they should be adjusted and done in a completely different way.

    And that way is essentially just what Ed was telling us. We replicate, we replicate. We have a model, we make predictions. We see if those predictions are upheld, and we have to do that repeatedly.

    The problem with probability and statistics is they seem to show us, give a shortcut. They seem to promise that we could know things with very little effort. And I’m going to prove that to you.

    </aside>

    <aside> 💡 So what are traditionally probability and statistics used for? What do you think they’re used for?

    Explain or quantify uncertainty in that which we do not know. And nothing else.

    </aside>

    <aside> 💡 Strangely, however, classical procedure, in both its frequentist and Bayesian forms, says the opposite.

    </aside>

    So let me give you a little example.

    In a low-or-no-PM2.5 group of 1,000, 5 people got cancer of the albondigas, and in the "some" or high PM2.5 group of 1,000, 15 did.

    <aside> 💡 What is the probability that more people in the High Group had cancer? One. That’s it. So I’ve proved to you that we do not need probability and statistical models to tell us what we already know. We do not need any other kind of model.

    </aside>

    <aside> 💡 We can say that there’s three times as many people got sick in the High Group, or only five people got sick in the Low Group. We know these things by observation. We do not need probability and statistics to tell us what we’ve already seen.

    </aside>

    But what are the real questions of interest here?

    <aside> 💡 Why do you do a statistical study like this? What caused the difference? That’s the first.

    </aside>

    <aside> 💡 Speaker1: [00:04:32]

    But no matter what, something caused each of those cancers. We want to know: can probability and statistics answer that question? And the answer to that is no. Although everybody assumes it does.

    </aside>

    <aside> 💡 The second question that probability and statistics can answer is: Given that I assume I do know the cause, which I cannot learn from probability and statistics – some other way I learn it. But assuming I do know the cause, what can I say about future groups of people who are exposed or not? What can I say about the uncertainty in their cancer rates? That’s where probability and statistics can be useful.

    </aside>

    So what is probability and statistics' answer to causality?

    How do we typically do a statistical procedure in this type of a case?

    Some sort of hypothesis test, correct?

    <aside> 💡 Step one is always the same, and that is always to form some sort of usually parameterized probability model for the observed data.

    </aside>

    <aside> 💡 Step two is to form what we call the null hypothesis.

    </aside>

    <aside> 💡 Now, we already know that that’s false.

    Did we not say there’s 100% probability the groups are different? Yes. So why do we want now?

    Why do a null hypothesis test? We’ve already ascertained the groups are in fact different. 100% certain.

    </aside>

    <aside> 💡 Number three is to calculate a statistic.

    A statistic is just a function of the data. Many statistics are available, hence the field statistics.

    </aside>

    <aside> 💡 Step four is to calculate this creature (p-value):

    Given the data we’ve assumed, given the model we’ve assumed, given the data we’ve observed, and assuming the null hypothesis is true, we calculate the probability of seeing a test statistic larger than the one we actually got, in absolute value, assuming we could repeat the experiment an infinite number of times. This is called the p value.

    </aside>
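    In symbols (my notation, not the speaker's), that definition of the p-value is roughly:

    ```latex
    p = \Pr\left(\, |T| \ge |t_{\text{obs}}| \;\middle|\; \text{model},\ H_0 \text{ true} \right)
    ```

    where T is the test statistic under hypothetical infinite replications of the experiment and t_obs is the statistic actually observed.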

    The p value for this particular data, for a test of so-called differences in proportions, happens to be .04. So what do we say?

    .04 is less than the magic number. The number is magic. Gigerenzer, another critic of the field of statistics, calls this procedure a ritual. Zizek [sp?] calls it something I can't repeat. I call it magic. It is the magic number.
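    As a concrete illustration of the ritual (my sketch, not the speaker's code), here is the 5-in-1,000 versus 15-in-1,000 table run through a standard test of differences in proportions. The test choice is an assumption; scipy's default Yates-corrected chi-square happens to land near the .04 quoted, though the exact value depends on which test you pick.

    ```python
    # A minimal sketch of the "ritual" on the albondigas-cancer example (assumed test choice).
    from scipy.stats import chi2_contingency

    low  = [5, 995]    # cancers, non-cancers in the low/no-PM2.5 group of 1,000
    high = [15, 985]   # cancers, non-cancers in the high-PM2.5 group of 1,000

    _, p, _, _ = chi2_contingency([low, high])   # Yates-corrected chi-square for a 2x2 table
    print(f"p-value with 15 high-group cases: {p:.3f}")   # ~0.04: "significant"

    # Change one data point and the magic evaporates, as discussed later in the talk.
    _, p, _, _ = chi2_contingency([low, [14, 986]])
    print(f"p-value with 14 high-group cases: {p:.3f}")   # ~0.06: "not significant"
    ```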

    <aside> 💡 If the P value is less than the magic number. You have success. You have statistical significance. You can write grants, you can write your papers. It will be accepted and all this kind of glorious things.

    </aside>

    <aside> 💡 What does p-value mean?

    Given the model we’ve assumed, given the data we’ve seen, accepting the null hypothesis is true. It’s calculating the probability of a test statistic larger than the one we actually got in absolute value if we were to repeat the experiment an infinite number of times. And that’s all it means.

    It certainly does not say anything about cause.

    In this case the p value is less than the magic number. But that does not then prove that PM2.5 is a cause of the cancer of those people in the High Group.

    </aside>

    <aside> 💡 If we assume PM 2.5 is a cause. If we don't know it's a cause, then we don't know what caused the cancer of the people in the Low Group. And we also don't know what caused the cancer in the people in the High Group.

    </aside>

    But if it is a cause, it is always a cause, unless it is blocked. And the other option is: it is not a cause. As simple as that.

    BK: “blocked” can mean confounding (but it can work both ways – it not only blocks the cause but it may yield a false positive cause)

    <aside> 💡 So it’s either always a cause, it is a cause or it isn’t. A cause is a tautology.

    That statement is true. I can say that chewing on pencils is or isn’t a cause of cancer of the abundance. That is a true statement.

    I can say that wearing hats is or isn’t a cause of cancer of the albondigas because that is a true statement. It is a true statement because it is a tautology.

    Tautology. These are always true. Therefore, it adds nothing to the logic of the situation.

    Merely proposing a cause does not prove in any way that it is a cause or give any extra probability to the idea that it’s a cause.

    That is a very subtle but difficult point.

    If you can understand that, you can understand the real deep hole that probability and statistics have dug themselves. Because they do say that you can ascertain the probability that it’s a cause. But it’s not true.

    </aside>

    Here’s the second point. Now, on any given person or group of people, we can measure innumerable things, not infinite, but large.

    Now it’s almost certain to be true that in these groups, these two groups, this low in this high group, there will be other differences that only apply to the high people and the low people.

    <aside> 💡 Speaker1: [00:13:05] Now I said that the two groups were high and low PM 2.5 and I did this statistical test and I got statistical significance. And that led me to say that PM 2.5 is associated with or linked to or causes the cancer. But then it’s an arbitrary label.

    I could have just as easily put low and high bananas.

    This is also true of these people that I’ve measured.

    Everybody in the low group had one fewer banana. Therefore, I also have to say that the p value that I got also proves that bananas are a cause of the cancer. And that’s true of every other thing that’s different between these two groups. And that’s absurd.

    </aside>

    **As the number of UFO reports increased, so did the temperature anomaly. Now we all laugh at that. It had statistics, p values, all this kind of thing, the usual hypothesis test, and we laugh at that. But why? Why are we laughing at that? Why is it absurd?

    That’s because we understand the essence of the situation.

    It’s absurd. It’s nuts.**

    We understand what’s going on with the temperature and we know it cannot have any causative play with these fictional UFO observations. We understand yet every statistical test it passed in glory. Yet we’re willing to say that PM 2.5 might be a cause of cancer and not the bananas, because we’re trying to get at something else that we cannot get from these statistical procedures. And this is the idea of essence.

    And the philosophical system that ruled most people for the greater part of the 20th century was something called empiricism.

    <aside> 💡 But let me lead you to a little bit of the background on this strange idea that probability and statistics can show cause.

    </aside>

    So imagine instead of 15 people in the High Group having cancer, it was only 14. The p value is now 0.06. What do we say? It's not significant.

    What else can we say? Chance.

    Yes, we’re saying that chance caused the results. There is no such thing.

    There is no such thing as chance. Chance is an epistemological state. It’s a state of our knowledge.

    There is no such thing as a material chance.

    There is no energy called chance.

    There is no force called chance nor randomness.

    Chance or randomness cannot be a cause.

    It’s impossible that it could be a cause, something physical or biological, I should say, or some combination, caused each of these cancers. It cannot have been chance.

    Chance and randomness are a product of our epistemology, of our state of knowledge. They basically mean, I don’t know what caused these things.

    So that’s fair enough.

    I could say I don’t know what caused these Things But because if I just add one more person with cancer, I all of a sudden say the cause is definitely PM 2.5. That’s a fallacy.

    Figure. Let’s play: Who Said It!

    Screen Shot on 2022-05-03 at 10-36-11.png

    1. David Hume
    2. Karl Popper
    3. Karl Popper
    4. Karl Popper

    It was Hume’s idea that we can never really observe or understand cause. We can only look at events. This event followed by that event, everything is entirely loose and separate.

    But the next man, who is responsible for the next three quotes, was Karl Popper. He was a logical positivist, not quite part of the Vienna Circle, but one of their associate members.

    Popper believed in this idea called falsifiability.

    Perhaps you’ve heard of this science.

    **A scientific theory is not a scientific theory unless it is falsifiable.

    He said that unless you can prove something false, it's not scientific. Empirically prove, meaning with observation.**

    Now logical positivism said you can never believe any theory except based on empirical evidence. Should we believe logical positivism, then? That claim is not itself empirically provable. And so basically logical positivism died out in the 20th century.

    David Stove, a philosopher from Australia, basically called it an episode in black comedy.

    But this idea of Popper's became exceedingly popular among scientists. We hear this all the time: that's not falsifiable, not falsifiable, so it isn't scientific. Well, it was very influential with RA Fisher. He's sort of the father of modern day frequentist statistics.

    It was Fisher that developed the p value. He loved this idea of Popper's that you could never believe anything, but you could disbelieve. And he wanted to build falsifiability into the practice of probability and statistics. So he developed this p value.

    He said once the p value is less than the magic number. He didn’t use the word magic number. But **once the p value is smaller than something, you are then allowed to say that the null hypothesis is false.

    You are allowed to act as if it is false.

    You are allowed to believe it is false.

    Now, this is a pure act of will.

    You’ve not proven anything false. You haven’t come up with a probability that anything is true or false. It is a pure act of will.**

    And that was Neyman and Pearson's criticism of the p value; there were other statisticians early in the 20th century who said don't use the p value, because it will lead you to make these kinds of mistakes that everybody is now making. Now the fields are inundated with this kind of thinking, especially fields like sociology, psychology, and even medicine.

    These PM 2.5 studies, in medicine and so forth, use p values and statistical significance as the proof that "I have discovered cause," and we've already seen that can't be true.

    Now, if the p value is greater than the magic number we say we fail to reject. Right. We don’t say we actually accept the null. And that’s because of Karl Popper. We failed to reject because we’re only always after rejecting, because when we reject something, we falsified.

    But it’s just as nonsensical because if we take a proposition and say we could never believe any proposition. We can only believe that we have proved it false. That means we’re believing a proposition just like this last thing here. It’s self refuting.

    p values are self refuting. In fact, you can show in an argument which I won’t do here, they add nothing. They add no information to the problem at all.

    P values were always an act of will.

    This is why we need to do something else. We need to look at essence.

    Ed showed us this morning when he was first doing his experiment, his advisor asked him to do and redo and redo and redo.

    All work like this, all real scientific labor, as we all know when we’re involved in studies, is extremely laborious, difficult, hard work.

    But statistics promises that we could do it in an instant. All we have to do is submit this data to some test.

    Q: Are you dismissing Karl Popper?

    Yes, I am.

    Math, for instance, no mathematical proposition can be falsified. Any theorem that we have proved true cannot be falsified. No empirical evidence can ever show a mathematical theorem to be false.

    Probability is not falsifiable, for the most part.

    Many people use normal distributions and regression and so forth. What’s the extent of a normal distribution? What does it give probability to? If I can ask the experts in the audience: anywhere from negative infinity to infinity, so that any observation we make will never falsify a probability statement. That’s why you have to say “practically falsified.”

    But that has the same epistemic status as “practically a virgin.” It’s an act of will. It doesn’t have anything to do with anything else.

    Now, if I had a pencil here, I could do this. So let me do this. Let me talk about the essence thing. So what I'm going to do is I'm going to let go of this. What's going to happen?

    It’s going to fall.

    Why do we need a statistical test? No, we understand gravity.

    Various levels of understanding of gravity exist. At some high level, we understand it's the nature of gravity, of the mass of this thing and the mass of the earth, to bend space.

    But we also understand it’s the power of gravity to cause things to fall. It’s not because some equation exists out there, some instrumentalist equation.

    Quantification is very nice when we can quantify things, but not everything can be quantified.

    Even if we do understand cause, which sometimes we do, even then we don't need to use probability and statistical models to tell us what the cause might be.

    Think about this. You’re at the casino. You’ve been playing roulette in the last ten times. It’s come up red. Is black due. Why not? It’s called the Gambler’s Fallacy. We all know that. Yes, but why is not black due? If we were to use the statistics, we’d have to say the p value is going to be one. It’s going to be P, it’s going to be two to the minus ten. It’s a very low number. There’s no probability. There’s no probability, as will something’s causing that ball to rattle around.

    That’s the key, the cause. We all understand there’s nothing in the nature of the physics that have changed. It stayed the same. The wheel might have worn infinitesimally from one run to the next. That’s true, but we could move to other examples where there is no wearing of parts.

    So what we need to do is understand cause. We need to understand essence.

    But still, just by saying something is possibly a cause is no evidence of cause. It doesn’t give you anything. Anything could possibly be a cause. But we were trying to get at this nature of it. In order to find the real nature, we have to understand the etiology of the disease.

    But in order to answer the real question, we have to not do the shortcut that probability and statistics seemingly provides us.

    And I’m giving you the example in PM 2.5, but this must be done with absolutely every statistical analysis.

    **See, what I call the epidemiologist fallacy is when an epidemiologist or statistician or doctor, when somebody, says X causes Y, but where he never measures X, never measures X, and where he ascertains through statistical models that X is a cause.

    Now the epidemiologist fallacy is a compound, then, of the ecological fallacy, which is when you don't measure what you want to measure and instead measure a proxy and say the proxy is the same as the thing you want to measure, and the ascertaining of cause through probability models. I call it the epidemiologist fallacy.

    Without the epidemiologist fallacy, epidemiologists would be out of work. They have these data sets, they go in and they just start playing around. They start looking for wee p values and they start publishing, and this is nonsense.

    So we need to understand first what we're dealing with. All of the papers that I have discovered that have claimed a causative agency for PM 2.5 use the epidemiologist fallacy.**

    You already know the facts, but he looked at the epidemiologist fallacy angle, which is to say, all of these studies measure some kind of ambient PM 2.5, some average level. For instance, Los Angeles: they'll measure the average level of PM 2.5 in Los Angeles and then ascribe that to every single person in their study, which is nonsense.

    There are several studies that don’t show any statistical significance, and these are the ones, just as Ed says, the regulators leave out.

    There’s a Seventh Day Adventist study. You’ve heard of the six cities. Willie showed it in the American Cancer Society study.

    I’m about to show you these predictive methods. And I applied. Jim, help me out on this to get the data from the ACS. They claim that they’ll let researchers have it if they can show a good reason for it. I applied in the normal process and showed them my bona fides, all this kind of thing, and I was rejected. So they actually, like everybody else, don’t want to know how bad things are. But I did them on my own anyway.

    **But the risk ratio for going just 20 micrograms above baseline PM 2.5, anywhere from 1 to 10, was about 1.17 to 1.7. So if you work that out, if you treat the PM 2.5 as the equivalent of cigarette smoke, that means that PM 2.5 must be about 150 to 300 times more toxic than smoking two packs of cigarettes a day for many years.

    Speaker1: [00:37:25] So just going out and breathing the air outside here is more toxic for you than being a chain smoker. That’s what the results prove. So that’s why we need these predictive methods.**

    Figure. Risk ratio

    Screen Shot on 2022-05-03 at 11-05-45.png

    So risk ratio is a very common way to present results. Everybody thinks it’s just kosher as anything. It is extremely misleading. It’s a terrible way to show results, and I’m going to prove that to you.

    It’s the probability of having the disease, the malady, whatever, given that you were exposed divided by the same probability, given you are not exposed.

    **Well, risk ratio only applies to single people. It only applies to one individual at a time. And if you’re just one individual, you don’t care about the risk ratio. You care about these individual probabilities.

    If I’m not exposed, the probability is one in 10 million. If I am exposed, it’s two in 10 million.**

    Speaker1: [00:40:15] So this is the way: if you work out these numbers, the probability of at least one person getting it in the exposed group and at least one person getting it in the non-exposed group works out to be this. **The risk ratio has suddenly dropped.

    If you do it for New York City: LA has about 4 million people, New York has eight. The risk ratio drops again, to about 1.7. If you do it for the entire United States, the risk ratio drops to something just over one.**
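    A sketch of that population-level calculation (my own, using the 2-in-10-million and 1-in-10-million per-person risks from the slide and round population figures; the exact ratios depend on the numbers assumed):

    ```python
    # How the "risk ratio" shrinks when the question is asked about a whole population
    # ("at least one case") rather than about a single person.
    def p_at_least_one(per_person_risk: float, population: int) -> float:
        return 1 - (1 - per_person_risk) ** population

    p_exposed, p_not_exposed = 2e-7, 1e-7          # 2 vs 1 in 10 million, per person
    print(p_exposed / p_not_exposed)               # per-person risk ratio: 2.0

    for city, n in [("Los Angeles", 4_000_000), ("New York", 8_000_000), ("USA", 330_000_000)]:
        rr = p_at_least_one(p_exposed, n) / p_at_least_one(p_not_exposed, n)
        print(f"{city}: {rr:.2f}")                 # the ratio keeps dropping toward 1
    ```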

    Figure. Jerrett et al.

    Screen Shot on 2022-05-03 at 11-10-03.png

    So here is the interesting one. Now, this is the paper that I looked at in depth, and I wrote an official comment on it, and it was submitted to the California Air Resources Board at one of their meetings. Jim actually has the audiotape, or the MP3 file or something, of this, and you can listen to their comments about my criticisms.

    And basically, I think I told this group before that they considered what I had to say and they basically said, well, you know, Dr. Briggs is right, but everybody else makes these same mistakes. Therefore, We don’t want to be different. I’m not kidding.

    So two in 10,000 is more than regulation worthy, according to the EPA. I got this directive that’s well within their bounds for considering a government action.

    You know, EPA agents now are armed, right? They carry weapons. Some of these guys, they go out in the field when they’re testing things. So they’re dead serious about these things.

    Now, it turns out that with Jerrett's risk ratio of 1.06, if we use this two times ten to the minus four, the two-in-10,000 watermark, that works out to be 1.89 times ten to the minus four. That's the probability of having the disease or morbidity for the not-exposed group.

    Now, let’s apply this to Los Angeles.

    We need to apply this. It just doesn’t make any difference. Yes, this 1.06 had a small p value. Yes, he did in fact use the epidemiologist fallacy. He used this land use regression model to guess exposure. He never measured exposure on anybody, ever. Nobody did. It’s just Wild. It’s nonsense is what it is.

    Figure. Probabilities of developing cancer

    Screen Shot on 2022-05-03 at 11-13-40.png

    Now, these are not normal distributions. They kind of look it, but they're binomial, because remember, we're assuming these numbers are true. We're assuming that this is the best case in their world. What I have here is this dashed group, the middle one, this dashed one right here. These are the 2 million people. I'm basically assuming here, because I don't have anything else I could do, that half the people in LA did not get exposed and half the people did.

    To this high level of PM 2.5. Basically, there's a 99.99% chance that 330 to about 450 people will get cancer in the low group, the people who weren't exposed, with about 380 being the most likely number, and about 400 in the people who were exposed. That's a difference of about 20 people.

    So, all of LA: the difference is about 20 people. That's what I can expect, even assuming these things. So how much money would you pay to eliminate PM 2.5? That's too much. Because why?

    Because we don't know PM 2.5 is a cause. This is assuming it's a cause. We still don't know it's a cause.

    Figure. Probabilities of developing cancer, half exposed

    Screen Shot on 2022-05-03 at 11-19-22.png

    This is the real curve to look at. The dashed line is now assuming all 4 million LA residents are not exposed. All 4 million residents are not exposed. It's anywhere from like 680 people to about 800 or 850. There's a 99.9% chance that's how many cancer cases we'll see. That's a predictive model. This is no longer confidence intervals or p values or any of this kind of stuff.

    I’m saying given the information that I have, what’s the probability of stuff I don’t yet know? Which is the future. Now, given that half are exposed and half. Are not exposed This is the number of people we expect to see. This is without any regulation of PM 2.5 assuming PM 2.5 is a cause. And the difference is still just about 20 people and the maximum you could get if you take this point here and this point here, meaning the most worst situation you can imagine, a 99.9% chance of having 867 people or so having cancer if I didn’t do anything. And the worst case scenario here, meaning the best case scenario, meaning only like 690 people got cancer, is a savings of 200 lives. But the probability of that happening is only like 0.05.

    The overwhelming probability is this: if you were to eliminate PM 2.5 completely, and this is using Jerrett's data as best I can tell, eliminated completely, the best savings that you could expect is about 20 lives. The best.

    But notice I just took his 1.06 from his paper and said, That’s it. Is that true, though? When we do a statistical model, are we certain of these estimates? Now, there’s some uncertainty in these estimates. There’s some plus or minus, and there’s various mathematical ways to deal with these uncertainties.

    **The way I used, and the way that I advocate everybody use (I don't have time to explain it here), is the

    Bayesian posterior predictive distribution.

    In other words, I want to say what the future will be like, integrating out all the uncertainty I have in these mathematical parameters.**

    So I’m going to basically take that 1.06 and the plus or minus whatever Jared had published. And I’m going to put that plus or minus in here and add the uncertainty. Okay. I’m going to add that on. I have to. I can’t just use the 1.06. That’s not fair.

    That’s not even taking their published results seriously. I need to take that plus or minus into account.

    Figure. Probabilities of developing cancer taking uncertainty into account

    Screen Shot on 2022-05-03 at 11-23-09.png

    This is the result. This dashed line right here assumes that if I removed PM 2.5, all 4 million residents of LA would not be exposed. This is the number of, I guess, cardiac cases; I don't know exactly what it is. Anywhere from 500 to 1,000 people. There's a 99%, 99.99% chance of that. However, if half were exposed to PM 2.5 and half not, the savings in lives now, from the peak of this to the peak of this, is about two. Now, that should be stunning to you, but you're not used to seeing statistics put in this way.

    **But the difference, the real difference, the real expected difference that we could see, is trivial. And that's assuming PM 2.5 is a cause. We don't know PM 2.5 is the cause, so the real savings are much less. And if we factor in the epidemiologist fallacy, any purported savings disappear entirely.

    And this is the way that all of these studies should be done. You cannot use hypothetical populations. You have to use real predictive methods that say nothing about all these parameters like P values and confidence intervals and all this nonsense. And you have to do it in this predictive way. You have to make real predictions.

    Now, notice I’ve made a prediction or Jerrett has. What do I do next? I see if it’s true. I wait for data. This is what everybody else has to do. Why is it that when the model itself tells us the theory is desirable, that I’m removed from the responsibility of checking?

    That happens in global warming. It happens for anything the government wants to regulate.**

    So let me just recap here.

    We did a lot of what I wanted to show you. And I called this talk the crisis of evidence, because that's exactly what it is, at least in medicine and these types of fields. They're making a stab at understanding what the real causes are for these things.

    **Everybody just keeps assuming the stuff and nobody checks.

    I mean, it used to be in physics when you made things and if you’re an engineer, you have to check your stuff against reality.

    So the way to do statistics is to first try to understand, like everybody does, the hard way. And if you're not going to do that, at least do this in a predictive way, then wait and see if your model has any damn value before proclaiming that it does.**

    So it’s it’s not a panacea. There’s not saying that statistics is going to certainly provide us good models. And just like the chess example causes undetermined, we can’t tell from the data.

    So we have to do a lot of hard work. If I've just given you a little bit of disquiet, that's all I think I could do at this point. I have a paper on this at arXiv.

    Just look up my name on arXiv and you can read a fuller version of this. And that's the rest of my contact information.

    The Crisis Of Evidence: Why Probability And Statistics Cannot Discover Cause

    Q: Question is, you said either it causes cancer or it doesn't cause cancer. But my understanding was, I mean, sometimes we think **maybe it takes three things together to cause the cancer. So given that, isn't it useful to sort out confounding variables?

    Speaker1:** [00:52:44] Yes, that's what I say. So it's either always a cause or it is sometimes blocked. It can be blocked by missing a catalyst, for instance. Yes, exactly right. But that was just a subtle philosophical point, to show you that it's like multiplying an algebraic equation by one, a tautology in logic. It doesn't provide additional information. But finding all these things, like Ed showed us: he had this, the cause was there, and they went and they did a counterfactual. They blocked some pathway. And there was the block, and then the cause disappeared. So it's just those kinds of things.

    BK: think about the oncogenic paradox and how there are many carcinogens…that damage mitochondrial respiration…which can lead to compensatory fermentation…and cancer

    The mathematics works out beautifully, but deciding what to do based on a p value is an act of will. It's completely arbitrary. What you do with it is arbitrary in the extreme.

    Mathematicians are weird, you know. I'm one of them, and they don't always come down to reality. They don't understand how this stuff is going to apply in real life. And just because it's an equation, and you can match the terms in it to real-life things, doesn't mean it works for those real-life things. And that's the problem.

    Popper was wrong, but he had a very good motive. It's the same thing. He basically was looking at UFOs and these kinds of things, and homeopathy and the like, where any observation confirms the theory, these people said. And that's exactly what we have with global warming, too. For politicians especially, any observation confirms; there's nothing that will disconfirm.

    **Q: Just the same question about Popper. I don't get it. I mean, if falsifiability is a useful way of evaluating scientific propositions, and that's what Popper explained…

    Although it's false. Because if I tell you the probability that the temperature will be some number, the probability using a normal model, for instance, will always be greater than zero. Therefore there's no way you could falsify that thing with any observation.

    Speaker2:** [00:58:06] It doesn’t apply to every situation.

    Speaker1: [00:58:09] It doesn’t apply. That’s exactly right. It doesn’t apply whenever you use probability.

    But unfortunately, we have to use probability to quantify our uncertainty. So it doesn't work for those kinds of cases. It does work sometimes, don't get me wrong, but it doesn't work for most of science. Falsifiability has very little role to play.