The dead can’t tell their stories. Even when their accounts are documented, they are often given less credence than the accounts of survivors who still exist to share theirs. Simply put, survivorship bias is a logical fallacy in which we concentrate on the accounts of those who have made it past some criteria and overlook the accounts of those who did not. The error is compounded as conclusions are drawn that suggest that success is due to some characteristic shared by the successful instead of extraneous variables or random chance.
I most often see this bias play out in three ways:
- “The secret to a long life” stories
- “I did XYZ as a child, and I turned out fine” stories
- “Rags to riches” stories
Even though some of these stories may be shared with a humorous intent, there are plenty of examples where people believe that they are drawing accurate conclusions about success based on a selective review of attempts in pursuit of that success.
No matter the topic, there will always be a glut of information at our fingertips. Right now, the topic is the novel coronavirus, its spread, its mortality rate, and the measures put in place to limit transmission. As usual, there is good information and plenty of bad information, and people share it without even a basic examination of its merit. I have noticed a few categories that bad information typically falls into.
- Bad information that looks like good information
- Good information shared in ways that limit its credibility
- Old information that may not be the best in an evolving situation
- Garbage information that has no merit at any time
When I use the term “bad information,” I am referring to two broad subcategories. The first is the use of clearly marked opinion or commentary pieces as support for an argument. That other people hold the same opinion as you is not proof that your opinion is correct or valid. Opinions do not equal facts. Facts can support opinions, but never the reverse. The second is information that is formatted to appear reputable but should not be used as substantiation in a debate. This includes content from media outlets known for sensationalism (New York Post, Fox News, anything relying on clickbait) or outlets with strong biases that drift away from fact reporting and into the realm of commentary. The Media Bias Chart does a great job of showing a continuum of sources and where they fall in terms of bias and amount of fact reporting.
Woodturning is the craft of spinning wood on a lathe and using tools and abrasives to shape it into a finished product. Penturning is a subset of this focused specifically on creating handmade pens with a lathe. Before I purchased my first material, I did a lot of research and price comparison online to make sure I got all of the requisite components at the best price. In the process of watching others work (primarily on YouTube), I also picked up some best practices and identified some ancillary pieces of equipment that would make life easier and result in less waste. Because of my remote location, acquiring the equipment and materials was harder and costlier for me; most others will likely find it much easier and cheaper to pursue this hobby.
Two of my favorite linguistic devices are the garden path sentence and the paraprosdokian. You’ve likely encountered these many times in the past, even if you didn’t know how they were categorized. Both are sentences that cause the reader to reconsider information as the sentence is being read. In a garden path sentence, this is the result of the reader parsing the sentence incorrectly (“The old man the boat” — “man” turns out to be the verb); in a paraprosdokian, the sentence is deliberately constructed to set the reader up for an unexpected conclusion (“I’ve had a perfectly wonderful evening, but this wasn’t it”).
You’ve written a thoughtful post or comment speaking against a particular policy or practice. You’ve done your best to contribute meaningfully to the discussion by avoiding logical fallacies and substantiating claims with evidence. Then someone responds…
I didn’t see you crowing on about this issue/policy when [other politician] did it!
If you frequently wade through the rhetorical sewer that is a thread of internet comments, you know this kind of response is all too common. Instead of engaging with the issue constructively or providing a reasoned rebuttal, the commenter has chosen to insinuate that you are a hypocrite. This lazy response is called whataboutism, and it originated as a state propaganda technique employed by the Soviet Union.
Merriam-Webster defines bias as “a personal and sometimes unreasoned judgment.” We should look for media outlets that don’t do this then, right? I would argue that the presence of bias should not immediately render a media outlet unacceptable. We, as readers, should be able to recognize bias when we see it and analyze how it may be affecting our interpretation of the facts of a story. We must train our eyes to see the hallmarks of questionable journalism and avoid sources that lean too heavily on bad practices.
Enter Ad Fontes Media, publisher of the Media Bias Chart, which has released version 5.0 of its interactive chart as of this writing. Their research seeks to place media outlets on a coordinate plane based on two factors: political bias and overall source reliability. The political bias measure follows a traditional left/right continuum, while source reliability seeks to quantify the difference between straight fact reporting and the various levels of analysis and commentary that can stem from it.
YOU WON’T BELIEVE WHICH SUPERFOOD WILL [[insert claim here]]!
When it comes to medical science and health, be extra careful about the sources you share from, who is attaching their name to the articles, and what research they are linking to. Media outlets of all sizes have a dangerous habit of taking the results of a very preliminary study and making wildly exaggerated claims about its possible benefits.
10 things to watch out for when interpreting research
Last Week Tonight with John Oliver did a great breakdown back in 2016 of how wild headlines can get when cherry-picking from research. That segment focuses mostly on major mainstream media outlets, to say nothing of the barrage of small, pseudonymous “health” websites sharing dubious information with little substantiation.
It’s not enough to have sources to support your claims; they must be good sources. Good science is that which is overwhelmingly supported by scientists throughout the field of study. Good science is not a single study, conclusion, or data point that happens to support a preconceived idea.
The greater challenge is to be truly open-minded about changing or modifying our practices or beliefs when presented with new, substantiated information. Appeals to antiquity or tradition are NOT good reasons for continuing to practice or believe something, particularly in the sciences, where long-held practices were often formed in the absence of newer, more relevant information.
Is your source CRAAP? The CRAAP test was designed by librarians at CSU Chico to examine a source of information. It looks at the areas of Currency, Relevance, Authority, Accuracy, and Purpose as a means of separating the good from the bad.
Sharing a picture with words on it on Facebook simply because you agree with it fails this test, most often on accuracy and purpose. Sure, perhaps you’re very upset that Pepsi didn’t include the words “under God” in the Pledge on its packaging, but that doesn’t change the reality that it never happened. Who created that deliberately misleading image with incorrect information? Why?
Just as it is important that credible news have an author’s name attached to show that a reputation is linked to the accuracy of the information within, we all also have reputations being formed by the types of information we attach our name to when we share it. Make sure you attach your name to information that won’t damage your credibility.