Fallacies of Reasoning

Deductive reasoning uses specific logical structures to ensure that a claim follows from its premises. A fallacy of reasoning occurs when the arguer tries to make a deductive-style claim, but the structure isn’t logically valid.

Fallacies of reasoning often seem valid at face value. Some have entire research papers showing why they're not.

There are lots of ways people can get logic wrong, but here are some common ones.

Part and whole

This is the assumption that something that is true of part of a thing must also be true of the whole, or of its other parts. For example: 

  • Since the leaves of a tomato plant are poisonous, the berries must also be poisonous.

It also goes the other way—something that is true of the whole of something may not be true of an individual component:

  • Since the leaves of a tomato plant are poisonous, and tomato plant leaves contain water, water must be poisonous.

Correlation and causation

This is the assumption that because two things are related, or co-occur, one must cause the other.

Do the names of NFL quarterbacks affect the Miami Dolphins' success?

In 2021-2022, the Dolphins won every game in which the opposing team's quarterback had a last name containing the letter "o", and lost every other game (source: Reddit). This is an example of coincidence—these things are completely unrelated.

Does eating ice cream cause drowning?

A famous study on how to use and interpret correlations in academic research used the strong correlation between ice cream sales and the number of drownings to demonstrate how correlations can hide other relationships (in this case, weather: hotter days cause both more ice cream consumption and more swimming).

We could use this relationship to make rough predictions ("If the local ice cream shop has sold this much ice cream by 10am, put more lifeguards out on the beach to watch for people in trouble!"), but we couldn't prevent someone from drowning by telling them not to buy ice cream.
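We can see this pattern in a toy simulation. All the numbers below are invented for illustration: temperature drives both ice cream sales and drownings, and neither one affects the other, yet the two still end up strongly correlated.

```python
import random

random.seed(0)

# Invented model: hot weather drives BOTH ice cream sales and
# drownings; neither causes the other.
days = 365
temps = [random.gauss(20, 8) for _ in range(days)]
ice_cream_sales = [t * 10 + random.gauss(0, 20) for t in temps]
drownings = [t * 0.1 + random.gauss(0, 0.5) for t in temps]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Strong positive correlation, even though we built the model so that
# only temperature affects either variable.
print(correlation(ice_cream_sales, drownings))
```

The correlation comes out strongly positive even though, by construction, banning ice cream would do nothing to drownings.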

Getting from correlation to causation

Statisticians and scientists have developed sophisticated ways to test correlations further, so that we can pick apart what's pure chance (like the NFL games), what's a non-causal relationship (like ice cream sales and drownings), and what's actually causal. Briefly:

  • Replication. Scientists will copy each others' studies to make sure the original findings aren't just a fluke. The more studies find the same relationship, the more confident we can be that there's something real.
  • Sampling and probability calculations. Scientists will try to do experiments with lots of participants, and then use special statistical calculations to figure out how likely their results would be if everything were just due to chance. The more unlikely the results, the more confident they can be that they found something real.
  • Controlling variables. Scientists will try to limit the amount of noise and uncertainty in their experiments by testing as few variables as possible at a time. They'll try to have groups of participants that are as similar to each other as possible, then change only one thing and see what effect that has. If there's an effect, that indicates a possible causal relationship! (But only if other scientists can replicate it and we're confident it's not just random chance.)

Circular argument

In a circular argument, the main claim (or conclusion) of an argument is contained inside the reasoning or evidence. Basically, “this is true because it is true”.

Of course, circular arguments in the wild won’t look that trivial.

There might be multiple steps:

  • Achilles is a hero because he does heroic deeds.
  • What makes Achilles’ deeds heroic?
  • They are done by a hero.

Or, the reason might be restated in a different way, but ultimately boils down to the same thing as the main claim:

  • Achilles is a hero because he shows up in Greek epics as the main human protagonist.

Slippery slope

Slippery slope arguments use hypothetical cause-and-effect chains to make predictions, usually to argue against an initial course of action.

Causal chains are great for looking back and working out how something happened, but as it turns out they’re not very good at predictions, because we’re often working with probability, and probability can really muck things around.

You can check the math details yourself if you’re keen, but here’s a little demonstration (based on the rat neurotransmitter example from the paper):

  1. Say we know for a fact that people who eat carrots will tend to grow taller than people who don’t. We could represent this knowledge like this:
    There are three carrot eaters and three non-carrot eaters. Two of the carrot eaters become tall people, compared to one of the non-carrot eaters.
    We can see here that probabilistically, carrot eaters are becoming tall people at a higher rate than non-carrot eaters.
  2. Now, say that we also know that tall people are more likely to become pro-basketball players than short people. You can probably imagine a similar looking diagram.
  3. Does this mean that carrot eaters are more likely to become pro-basketball players than non-carrot eaters?

    A causes B
    B causes C
    Therefore A causes C?


    Not necessarily! Check this out:
    Of the three tall people, one carrot eater and one non-carrot eater become pro basketball players. One short non-carrot eater also becomes a pro basketball player. This means two of the three pro basketball players in this sample don't eat carrots!
    Our diagram shows that tall people are becoming basketball pros at a higher rate than short people. BUT, our original carrot eaters are not going pro at a higher rate.
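You can tally the counts from this worked example directly. The sketch below just encodes the six people described above as (eats carrots, becomes tall, becomes pro) and checks each rate:

```python
# Each person from the example: (eats_carrots, becomes_tall, becomes_pro).
people = [
    (True,  True,  True),   # carrot eater, tall, becomes a pro
    (True,  True,  False),  # carrot eater, tall, doesn't go pro
    (True,  False, False),  # carrot eater, short
    (False, True,  True),   # non-carrot eater, tall, becomes a pro
    (False, False, True),   # non-carrot eater, short, becomes a pro
    (False, False, False),  # non-carrot eater, short
]

def rate(group, index):
    """Fraction of the group for which the given attribute is True."""
    return sum(1 for p in group if p[index]) / len(group)

carrot = [p for p in people if p[0]]
no_carrot = [p for p in people if not p[0]]
tall = [p for p in people if p[1]]
short = [p for p in people if not p[1]]

# A -> B: carrot eaters become tall at a higher rate (2/3 vs 1/3).
print(rate(carrot, 1), rate(no_carrot, 1))
# B -> C: tall people go pro at a higher rate (2/3 vs 1/3).
print(rate(tall, 2), rate(short, 2))
# But not A -> C: carrot eaters go pro LESS often (1/3 vs 2/3).
print(rate(carrot, 2), rate(no_carrot, 2))
```

Both individual links hold, yet the chained prediction fails: in this sample the carrot eaters go pro at a lower rate than the non-carrot eaters.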

If you find the math intimidating, that’s okay—the takeaway here is that using causal chains to make predictions isn’t as straightforward as most people think.

Even a very basic two-step chain like the example is fallible. If someone's trying to make a claim based on a three-, four-, or twenty-step chain, they're basically just making things up!

Frankenstories game prompt: Write like they are deciding who to donate money to. The image is of a deep sea diver wrestling an octopus.

R1: Introduce issue and list 3 possible organisations.
R2: Make a judgement of the first organisation based purely on one component of it.
R3: Make a causal claim about the second organisation, based purely on correlation or co-occurrence.
R4: Make a claim about a quality of the third option, then back up the claim by restating that quality in a different way.
R5: Make a decision. What happens next?

Example game: Cheque Your Reasoning