Principles and Phenomena


Principles

Don’t trust, but verify
When faced with a new idea or hypothesis, it’s OK to be skeptical, but don’t dismiss it out of hand as nonsense. See if you can verify that it’s nonsense. You need to be both skeptical and open-minded.
One of the unwritten rules of experimental research that I learned in reporting my first two books was: “Trust the scientists to be reporting truthfully what they did, be skeptical of the evidence and the interpretation.” GT
John Snow was dismissed, Semmelweis was dismissed, Wegener was dismissed, Galileo was dismissed, Peter Mitchell was dismissed, Lynn Margulis was dismissed.
  
The ignorance principle
Something seems complicated or multifactorial when you don’t understand it. When you do understand something, it’s simple.
    
Constraints/Weakest link
    
via negativa
A way of describing something by saying what it is not.
   
Quantification
    
Hanlon’s razor
Hanlon’s razor is an aphorism expressed in various ways, including: “Never attribute to malice that which can be adequately explained by stupidity.”
Inspired by Occam’s razor, the aphorism became known in this form and under this name through the Jargon File, a glossary of computer programmer slang.[3][1] The Jargon File editors later noted a lack of knowledge about the term’s derivation and the existence of a similar epigram by William James.[4] In 1996, the Jargon File entry on Hanlon’s Razor noted the existence of a similar quotation in Robert A. Heinlein’s novella Logic of Empire (1941), with speculation that Hanlon’s Razor might be a corruption of “Heinlein’s Razor”.[5] (The character “Doc” in Heinlein’s story described the “devil theory” fallacy, explaining, “You have attributed conditions to villainy that simply result from stupidity.”)
Many complaints about evil corporations come from outliers: the 1% that corporations strategically decide to ignore. It’s not that the concerns of the outliers are not legitimate; it’s that they are not profitable to satisfy. When some people say that a corporation is evil, what they often mean is that they fall outside the company’s market.
 
Occam’s razor
Occam’s razor (also Ockham’s razor or Ocham’s razor: Latin: novacula Occami; or law of parsimony: Latin: lex parsimoniae) is the problem-solving principle that states “Entities should not be multiplied without necessity.”
Occam’s razor is used as an abductive heuristic in the development of theoretical models rather than as a rigorous arbiter between candidate models.[4][5] In the scientific method, Occam’s razor is not considered an irrefutable principle of logic or a scientific result; the preference for simplicity in the scientific method is based on the falsifiability criterion. For each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives. Since one can always burden failing explanations with ad hoc hypotheses to prevent them from being falsified, simpler theories are preferable to more complex ones because they are more testable.
In science, Occam’s razor is used as a heuristic to guide scientists in developing theoretical models rather than as an arbiter between published models.[4][5] In physics, parsimony was an important heuristic in Albert Einstein’s formulation of special relativity,[43][44] in the development and application of the principle of least action by Pierre Louis Maupertuis and Leonhard Euler,[45] and in the development of quantum mechanics by Max Planck, Werner Heisenberg and Louis de Broglie.
 
Changing your mind when you get new facts
    
Big picture
If you want to determine whether something is as close to the ‘truth’ as possible, or to look at its overall impact on health, you have to look beyond a small, well-contained, isolated ‘fact’ or truth, and place it within the bigger picture. At which point one fact can be overturned by another, far more important, and completely contradictory fact. Unfortunately, looking at the bigger picture means a hell of a lot of work.
   
Unintended consequences
Try to think about the unintended consequences of any action (or inaction) BEFORE you take (or don’t take) it.
    
Look at the tails
Look at the tails. If some cancer drug improves overall survival by three months, how consistent is that across individuals? Are those three months spent in agony or bliss?
    
Know what you’re talking about
To “know what you’re talking about” literally requires that you know the history of what you believe, the history of the competing belief systems, and thus the evidence on which they’re based.
   
Context comparison
Look at the actual numbers when a study reports, for example, 65% more heart attacks over 5 years. Always try to compare the 5-year rate to what is known both in the general population and in the population that best matches the characteristics of the study participants. Another example is a longevity study in mice: if an intervention increases lifespan compared to controls, what does that look like in the context of other mice, in the wild and in general? Think about it.
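A minimal sketch of this kind of sanity check: converting a headline relative increase into absolute terms. The baseline rate below is an invented, illustrative number, not data from any real study.

```python
# Hypothetical numbers for illustration only: putting "65% more heart
# attacks over 5 years" into absolute context against a baseline rate.

baseline_5yr_rate = 0.02    # assumed: 2% of comparable people have a heart attack in 5 years
relative_increase = 0.65    # the headline figure: "65% more"

exposed_rate = baseline_5yr_rate * (1 + relative_increase)
absolute_increase = exposed_rate - baseline_5yr_rate

print(f"Baseline 5-year risk: {baseline_5yr_rate:.1%}")   # 2.0%
print(f"Exposed 5-year risk:  {exposed_rate:.1%}")        # 3.3%
print(f"Absolute increase:    {absolute_increase:.1%}")   # 1.3 percentage points
```

The same headline number reads very differently once it is anchored to a plausible baseline.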
    
Prevent often means delay
To prevent something means to stop it happening at all – ever. When it comes to death, you cannot prevent this from happening, no matter what you do, or how clever you are. The very best you can achieve is to delay the inevitable.
   
Public health vs individual health
  
At the time I was researching GCBC, I spoke to maybe a dozen experts on risk analysis (from folks in the business world to folks at the Jet Propulsion Lab in Pasadena) trying to understand how else to understand this kind of risk assessment. I remember Scott Grundy of all people talking about the possibility that this benefit is hoarded by one out of the n whose cholesterol is lowered, and so he lives 30 years longer than he otherwise would have, but everyone else’s life has no change (leaving out the possibility that some people die prematurely and it’s just the average who live longer). Grundy called this the “I have to go on a lifelong diet so my neighbor doesn’t die of a heart attack” scenario. [Taubes]
  
Everyone is biased
It’s important to recognise that researchers often unconsciously bias their thinking; this is what we all need to try to guard against.
    
Time context
Always look at NNTs AND time; this is similar to looking at relative risk without knowing absolute risk. An NNT of 57, for example, means something very different if you need to treat 57 people for 10 months or for 10 years to prevent one event.
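A minimal sketch of why the time horizon matters, assuming (simplistically) that the risk reduction accrues evenly over the treatment period; the NNT of 57 is just the example from the note above.

```python
# Same NNT, very different time horizons. Illustrative numbers only.

nnt = 57                       # number needed to treat to prevent one event

for label, years in (("10 months", 10 / 12), ("10 years", 10.0)):
    arr = 1 / nnt              # absolute risk reduction over the whole period
    arr_per_year = arr / years # naive annualization, assuming a constant rate
    print(f"NNT {nnt} over {label:>9}: "
          f"{arr_per_year:.2%} absolute risk reduction per year")
```

Over 10 months the same NNT implies roughly twelve times the per-year benefit it implies over 10 years.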
    
Does this make me live longer (or shorter)?
You must always look for overall mortality (all-cause mortality, ACM), whenever and wherever possible.
    
The dog that did not bark
When you are looking at the results of a clinical study, or reading a newspaper article, you must strain your ears to listen for the dog that did not bark. Look for what is not said.
   
Statistically significant can be clinically insignificant
Is the study statistically significant but clinically meaningless? Is there a 12% relative risk reduction, and a 0.0001% absolute risk reduction?
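A minimal sketch of how this happens: with a large enough trial, even a trivial absolute difference becomes “statistically significant”. All rates and sample sizes below are invented for illustration, and the z-test is computed by hand to keep the example dependency-free.

```python
from math import sqrt, erf

n = 2_000_000              # participants per arm (deliberately huge)
rate_placebo = 0.00200     # 0.200% event rate (invented)
rate_treated = 0.00176     # 0.176% event rate (invented)

arr = rate_placebo - rate_treated    # absolute risk reduction
rrr = arr / rate_placebo             # relative risk reduction

# Two-proportion z-test using the pooled rate.
pooled = (rate_placebo + rate_treated) / 2
se = sqrt(2 * pooled * (1 - pooled) / n)
z = arr / se
p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))   # two-sided p-value

print(f"Relative risk reduction: {rrr:.0%}")   # the headline: 12%
print(f"Absolute risk reduction: {arr:.3%}")   # the fine print: 0.024%
print(f"p-value: {p:.1e}")                     # highly 'significant'
```

Statistical significance here is a statement about sample size as much as about the treatment.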
    
The power principle
When someone states that their trial was ‘not adequately powered’ to demonstrate a reduction in mortality, you have to ask yourself: just how small was the improvement they were expecting to find?
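A minimal sketch of the arithmetic behind that question, using the standard two-proportion sample-size approximation; the baseline event rate and the sample sizes are assumptions for illustration, not taken from any particular trial.

```python
from math import sqrt

Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.84    # 80% power

def detectable_arr(n_per_arm, baseline_rate):
    """Smallest absolute risk reduction detectable with ~80% power."""
    spread = sqrt(2 * baseline_rate * (1 - baseline_rate))
    return (Z_ALPHA + Z_BETA) * spread / sqrt(n_per_arm)

# The smaller the trial, the bigger the effect it was implicitly betting on.
for n in (500, 5_000, 50_000):
    arr = detectable_arr(n, baseline_rate=0.05)
    print(f"n = {n:>6} per arm: detectable ARR ≈ {arr:.2%}")
```

Run the numbers and “not adequately powered” often translates to “the effect, if any, was smaller than the one we gambled on”.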
   
Don’t fight dogshit with dogshit
You can’t fight pseudoscience with pseudoscience. If eggs are good one day and bad the next, what gives? Eggs didn’t become good or bad overnight; it’s just that epidemiologists look for associations between two variables (one of which is typically a food or food group that they can’t measure with any meaningful accuracy), while an almost infinite number of other variables can’t be controlled for.
   
Do you need a magnifying glass to see a difference?
Keep an eye out for thin lines and magnifying glasses when survival curves are presented. The differences between groups are often very small but blown up to show a seemingly big difference. Vinay Prasad has a line about how you usually can’t fit a pen or laser pointer between the two curves CKTK. Journals often blow up or magnify the figure to exaggerate the difference in absolute risk between treatment and placebo.
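A minimal sketch of the effect, using matplotlib and two synthetic survival curves (the curves and rates are invented): the same data looks unremarkable on a full 0–100% axis and dramatic on a truncated one.

```python
# Synthetic survival curves, for illustration only. Requires matplotlib.

import matplotlib.pyplot as plt
import numpy as np

months = np.linspace(0, 60, 200)
treated = 100 * np.exp(-0.0010 * months)   # synthetic survival, %
placebo = 100 * np.exp(-0.0012 * months)

fig, (full, zoomed) = plt.subplots(1, 2, figsize=(9, 4))
for ax, title in ((full, "Full axis"), (zoomed, "Truncated axis")):
    ax.plot(months, treated, label="treatment")
    ax.plot(months, placebo, label="placebo")
    ax.set_xlabel("Months")
    ax.set_ylabel("Survival (%)")
    ax.set_title(title)
    ax.legend()

full.set_ylim(0, 100)     # honest scale: the curves nearly overlap
zoomed.set_ylim(92, 100)  # journal-figure scale: the gap looks dramatic
plt.tight_layout()
plt.show()
```

Whenever the y-axis doesn’t start at zero, mentally redraw the figure before believing the gap.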
    
Do you need statistics to show you something is there?
If it takes a computer and statistical models to demonstrate an association or an effect, you should highly doubt its relevance to real life.
    
New ideas are like poison in (bad) science
“It would not perhaps be too fanciful to say that a new idea is the most quickly acting antigen known to science.”
   
Old > new
If you read medical papers from 50 or 60 years ago they are crystal clear, and the findings are presented in such a way that you can actually understand what the authors are trying to say. Incomprehensible statistical tests were kept to an absolute minimum. It is also difficult to spot any underlying agenda – other than an attempt to establish the truth.
   
Science is the belief in the ignorance of experts
    
Murphy’s law and known unknowns
Anything that can go wrong will go wrong. You need to really think through an issue, and all of the things that can go wrong in science.
    
We all have (varying degrees of) cognitive dissonance
Most scientists (if not all) do not set out to deliberately manipulate or distort the truth. But when the pressure is on, it is remarkably easy to dismiss a finding here, explain away a fact there, seek out studies that confirm your views, and find flaws in research that appears contradictory. Before you know it, facts can twist through one hundred and eighty degrees. Which is why two groups of researchers can look at exactly the same data and come to completely opposed views as to their meaning. (See climate change.)

If it is true that most research findings are just confirmation of popular bias/dogma, and personally I have no doubt that it is, how can you sift out the information that is true/accurate, and potentially vitally important, from that which is true but completely irrelevant? Or, perhaps more critically, how can you recognise the health stories which are so biased as to be – effectively – untrue? How can you find out the consequences of ‘doctoring the data’?
   
An experiment (or interventional trial) is not comparable to an observational study
A well-designed observational study cannot be considered an alternative to an interventional study. That disingenuous phrase, ‘a well-designed observational study’, implies that the two things are in some way comparable, when they are not, even remotely.
   
Wanting to do good and doing good are two different things
I suppose we all want to convince ourselves that what we are doing is fantastically beneficial. We don’t want naysayers, we don’t want criticism. This is just basic human nature. BK: this is the opposite of scientific thinking! We should want naysayers, criticism, and skepticism; the scientific way of thinking goes against human nature.
   
Science advances one scientist at a time and one model at a time
AGW fails the test because it is proclaimed by a consensus. Science places no value on such a vote. A unanimous opinion, much less a consensus, is insufficient. Science advances one scientist at a time, and we honor their names. It advances one model at a time. When the article gets around to saying “most scientists believe…,” it’s time to go back to the comics section. Science relies instead on models that make factual predictions that are or might be validated.

Phenomena

The Streisand effect
The Streisand effect is a phenomenon that occurs when an attempt to hide, remove, or censor information has the unintended consequence of increasing awareness of that information, often via the Internet.
It is named after American singer Barbra Streisand, whose attempt to suppress the California Coastal Records Project’s photograph of her residence in Malibu, California, taken to document California coastal erosion, inadvertently drew greater attention to the photograph in 2003.[1]
  
Hype and spin
“Scientists are usually too careful and clever to risk telling outright lies, but instead they push the envelope of exaggeration, selectivity and distortion as far as possible. And tolerance for this kind of untruthfulness has greatly increased over recent years. So it is now routine for scientists deliberately to ‘hype’ the significance of their status and performance and ‘spin’ the importance of their research.” (Bruce Charlton, Professor of Theoretical Medicine)
   
Zombie science
“Zombie science is a science that is dead, but is artificially kept moving by a continual infusion of funding. From a distance Zombie science looks like the real thing, the surface features of a science are in place – white coats, laboratories, computer programming, PhDs, papers, conferences, prizes, etc. But the Zombie is not interested in the pursuit of truth – its citations are externally-controlled and directed at non-scientific goals, and inside the Zombie everything is rotten.”
   
Torturing data until it confesses
Investigators can torture data until it confesses. Be mindful of how much adjustment goes into an analysis. See p-hacking, etc.
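A minimal sketch of one form of torture, multiple comparisons: run enough subgroup analyses on pure noise and some will “confess” at p < 0.05. Everything below is simulated, and the normal-approximation test is hand-rolled to avoid dependencies.

```python
import random
import statistics
from math import sqrt, erf

random.seed(1)

def p_value(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    se = sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

hits = 0
n_subgroups = 40
for _ in range(n_subgroups):            # 40 post-hoc subgroup analyses
    treated = [random.gauss(0, 1) for _ in range(50)]
    control = [random.gauss(0, 1) for _ in range(50)]   # no real effect
    if p_value(treated, control) < 0.05:
        hits += 1

print(f"{hits} of {n_subgroups} null comparisons were 'significant'")
```

At a 5% threshold you expect roughly two false confessions per forty looks, which is why the number of analyses matters as much as any single p-value.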
    
The cyanide effect
You can only die once, of one thing. Rates of death from heart disease can go down because rates of death from some other cause go up. If you want to prevent a group of people from ever getting cancer, a very effective pill for that is cyanide.