
Matthew Knachel

Informal Logical Fallacies

I. Logical Fallacies: Formal and Informal

Generally and crudely speaking, a logical fallacy is just a bad argument. Bad, that is, in the logical sense of being incorrect—not bad in the sense of being ineffective or unpersuasive. Alas, many fallacies are quite effective in persuading people; that is why they're so common. Often, they're not used mistakenly, but intentionally—to fool people, to get them to believe things that maybe they shouldn't. The goal of this chapter is to develop the ability to recognize these bad arguments for what they are so as not to be persuaded by them.

There are many, many fallacies out there. This chapter will examine the most common ones. By the end of this chapter, you should feel familiar with these villains. Throughout the chapter, ask yourself, "Can the same line of reasoning be applied to different plausible premises but lead to a conclusion that is obviously false?" If so, there's likely a fallacy sabotaging the works. This appeal to logical analogy is a sword that can help us cut through confounding trickery and expose poor reasoning.

There are formal and informal logical fallacies. The formal fallacies are simple: they’re just invalid deductive arguments. There is a clear line here. Either the premises necessitate the conclusion, or they do not. Consider the following:

If the Orcs take the village of Maedhros then the buildings will fall down.

But the Orcs won’t take Maedhros.

/∴ The buildings won’t fall down.

This argument is invalid. It's got an invalid form: If A then B; not A; therefore, not B. Any argument of this form is fallacious, an instance of "Denying the Antecedent."[1] Just because A doesn't happen doesn't mean B can't occur for some other reason. We can leave it as an exercise for the reader to fill in propositions for A and B to get true premises and a false conclusion. Intuitively, it's possible for that to happen: for example, maybe the buildings in Maedhros will fall down for some other reason besides orc attack. "Denying the Antecedent" is a fallacy that occurs due to a faulty form of argument. It is, therefore, known as a "formal fallacy." We will return to this class of error later.
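
We can check the invalidity with a quick truth table (a sketch; the arrow and negation signs are standard logical shorthand for "if…then" and "not," and T/F mark true and false):

\[
\begin{array}{cc|cc|c}
A & B & A \rightarrow B & \neg A & \neg B \\
\hline
T & T & T & F & F \\
T & F & F & F & T \\
F & T & T & T & F \\
F & F & T & T & T
\end{array}
\]

The third row is the countermodel: both premises (A → B and ¬A) come out true while the conclusion (¬B) comes out false. That row is exactly the Maedhros scenario: the orcs don't take the village, yet the buildings fall down anyway.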

Our concern in this chapter is not with formal fallacies—arguments that are bad because they have a bad form—but with informal fallacies. These arguments are bad, roughly, because of their content. More than that: their content, context, and/or mode of delivery.

Consider Hitler. Here's a guy who convinced a lot of people to believe things they had no business believing (because they were false). How did he do it? With lots of fallacious arguments. But it wasn't just the contents of the arguments (appeals to fear and patriotism, personal attacks on opponents, etc.) that made them fallacious; it was also the context in which he made them, and the (extremely effective) way he delivered them. Leni Riefenstahl's famous 1935 documentary/propaganda film Triumph of the Will, which follows Hitler during a Nazi party rally in Nuremberg, illustrates this. It has lots of footage of Hitler giving speeches. We hear the jingoistic slogans and vitriolic attacks—but we also see important elements of his persuasive technique. First, the setting. We see Hitler marching through row upon row of neatly formed and impeccably outfitted German troops—thousands of them—approaching a massive raised dais, behind which are stories-high banners with the swastika on a red field. The setting, the context for Hitler's speeches, was literally awesome—designed to inspire awe. It made his audience all the more receptive to his message, all the more persuadable. Moreover, Hitler's speechifying technique was masterful. He is said to have practiced assiduously in front of a mirror, and it shows. His hand gestures, facial contortions, and vocal modulations were all expertly designed to have maximum impact on the audience.

This consideration of Hitler highlights a couple of important things about the informal fallacies. First, they’re more than just bad arguments—they’re rhetorical tricks, extra-logical techniques used intentionally to try to convince people of things they maybe ought not to believe. Second, they work! Hitler convinced an entire nation to believe all sorts of crazy things. And advertisers and politicians continue to use these same techniques all the time. It’s incumbent upon a responsible citizen and consumer to be aware of this, and to do everything possible to avoid being bamboozled. That means learning about the fallacies. Hence, this chapter.

There are lots of different informal logical fallacies, lots of different ways of defining and characterizing them, lots of different ways of organizing them into groups. Since Aristotle first did it in his Sophistical Refutations, authors of logic books have been defining and classifying the informal fallacies in various ways. These remarks are offered as a kind of disclaimer: the reader is warned that the particular presentation of the fallacies in this chapter will be unique and will disagree in various ways with other presentations, reflecting as it must the author’s own idiosyncratic interests, understanding, and estimation of what is important. This is as it should be and always is. The interested reader is encouraged to consult alternative sources for further edification.

We will discuss 20 different informal fallacies, and we will group them into four families: (1) Fallacies of Distraction, (2) Fallacies of Weak Induction, (3) Fallacies of Illicit Presumption, and (4) Fallacies of Linguistic Emphasis. We take these up in turn.

II. Fallacies of Distraction

We will discuss five informal fallacies under this heading. What they all have in common is that they involve arguing in such a way that the issue that's supposed to be under discussion is somehow sidestepped, avoided, or ignored. These fallacies are often called "Fallacies of Relevance" because they involve arguments that are bad insofar as the reasons given are irrelevant to the issue at hand. People who use these techniques with malicious intent are attempting to distract their audience from the central questions they're supposed to be addressing, allowing them to appear to win an argument that they haven't really engaged in.

Appeal to the People (Argumentum ad Populum)

The Latin name of this fallacy literally means “argument to the people,” where ‘the people’ is used in the pejorative sense of “the unwashed masses,” or “the fickle mob”—the hoi polloi.[2] It’s notoriously effective to play on people’s emotions to get them to go along with you, and that’s the technique identified here. But, the thought is, we shouldn’t decide whether or not to believe things based on an emotional response; emotions are a distraction, blocking hard-headed, rational analysis.

Go back to Hitler for a minute. He was an expert at the appeal to emotion. He played on Germans' fears and prejudices, their economic anxieties, their sense of patriotism and nationalistic pride. He stoked these emotions with explicit denunciations of Jews and non-Germans and promises of the return of glory for the Fatherland—but also by using the sorts of techniques we canvassed above: awesome settings and hyper-sensational speechifying.

There are as many different versions of the appeal to emotion as there are human emotions. Fear is perhaps the emotion most commonly exploited by politicians. Political ads inevitably suggest to voters that a candidate's opponent will take away their medical care, or leave us vulnerable to terrorists, or bring about some other scary outcome—usually without a whole lot in the way of substantive proof that these fears are at all reasonable. This is a fallacious appeal to emotion.

Advertisers do it, too. Think of all the ads with sexy models shilling for cars or beers or whatever. What does sexiness have to do with how good a beer tastes? Nothing. The ads are trying to engage your emotions to get you thinking positively about their product. To see just how irrelevant sexy models are to beer quality, imagine how funny it would be to see an ad firm try the same trick to sell toilet paper!

An extremely common technique, especially for advertisers, is to appeal to people’s underlying desire to fit in, to be hip to what everybody else is doing, not to miss out. This is the bandwagon appeal. The advertisement assures us that a certain television show is #1 in the ratings—with the tacit conclusion being that we should be watching, too. But this is a fallacy. We’ve all known it’s a fallacy since we were little kids, the first time we did something wrong because all of our friends were doing it, too, and our moms asked us, “If all of your friends jumped off a bridge, would you do that too?”

One more example: suppose you’re one of those sleazy personal injury lawyers—an “ambulance chaser”. You’ve got a client who was grocery shopping at Wal-Mart, and in the produce aisle she slipped on a grape that had fallen on the floor and injured herself. Your eyes turn into dollar signs and a cha-ching noise goes off in your brain: Wal-Mart has deep pockets. So on the day of the trial, what do you do? How do you coach your client? Tell her to wear her nicest outfit, to look her best? Of course not! You wheel her into the courtroom in a wheelchair (whether she needs it or not); you put one of those foam neck braces on her, maybe give her an eye patch for good measure. You tell her to periodically emit moans of pain. When you’re summing up your case before the jury, you spend most of your time talking about the horrible suffering your client has undergone since the incident in the produce aisle: the hospital stays, the grueling physical therapy, the addiction to pain medications, etc., etc.

All of this is a classic fallacious appeal to emotion—specifically, in this case, pity. The people you're trying to convince are the jurors. The conclusion you have to convince them of, presumably, is that Wal-Mart was negligent and hence legally liable in the matter of the grape on the floor. The details don't matter, but there are specific conditions that have to be established in order for the jury to find Wal-Mart liable. But you're not addressing those (probably because you can't). Instead, you're trying to distract the jury from the real issue by playing to their emotions. You're trying to get them feeling sorry for your client, in the hopes that those emotions will cause them to bring in the verdict you want. That's why the appeal to emotion is a Fallacy of Distraction: the goal is to divert your attention from the dispassionate evaluation of premises and the degree to which they support their conclusion, to get you thinking with your heart instead of your brain.

Appeal to Force (Argumentum ad Baculum)

In Latin, ‘baculus’ refers to a stick or a club, which you could clobber someone with, presumably. Perhaps the least subtle of the fallacies is the appeal to force, in which you attempt to convince your interlocutor to believe something by threatening him. Threats pretty clearly distract one from the business of dispassionately appraising premises’ support for conclusions, so it’s natural to classify this technique as a Fallacy of Distraction.

There are many examples of this technique throughout history. In totalitarian regimes, there are often severe consequences for those who don’t toe the party line (see George Orwell’s 1984 for a vivid, though fictional, depiction of the phenomenon). The Catholic Church used this technique during the infamous Spanish Inquisition: the goal was to get non-believers to accept Christianity; the method was to torture them until they did.

The appeal to force can also be subtle. There is a very common, very effective debating technique that belongs under this heading, one that is a bit less overt than explicitly threatening someone who fails to share your opinions. It involves the sub-conscious, rather than conscious, perception of a threat. Here's what you do: during the course of a debate, make yourself physically imposing; sit up in your chair, move closer to your opponent, use hand gestures, like pointing right in their face; cut them off in the middle of a sentence, shout them down, be angry and combative. If you do these things, you're likely to make your opponent very uncomfortable—physically and emotionally. They might start sweating a bit; their heart may beat a little faster. They'll get flustered and maybe trip over their words. They may lose their train of thought; winning points they may have made in the debate will come out wrong or not at all. You'll look like the more effective debater, and the audience's perception will be that you made the better argument.

But you didn’t. You came off better because your opponent was uncomfortable. The discomfort was not caused by an actual threat of violence; on a conscious level, they never believed you were going to attack them physically. But you behaved in a way that triggered, at the sub-conscious level, the types of physical/emotional reactions that occur in the presence of an actual physical threat. This is the more subtle version of the appeal to force. It’s very effective and quite common, especially on some cable news channels.

Straw Man

This fallacy involves the misrepresentation of an opponent’s viewpoint—an exaggeration or distortion of it that renders it indefensible, something nobody in their right mind would agree with. You make your opponent out to be a complete wacko (even though he isn’t), then declare that you don’t agree with his (made-up) position. Thus, you merely appear to defeat your opponent: your real opponent doesn’t hold the crazy view you imputed to him; instead, you’ve defeated a distorted version of him, one of your own making, one that is easily dispatched. Instead of taking on the real man, you construct one out of straw, thrash it, and pretend to have achieved victory. It works if your audience doesn’t realize what you’ve done, if they believe that your opponent really holds the crazy view.

Politicians are the most frequent victims (and practitioners) of this tactic. After President George W. Bush's 2005 State of the Union Address, for example, his proposals were met with straw-man abuse such as this:

George W. Bush’s State of the Union Address, masked in talk of “freedom” and “democracy,” was an outline of a brutal agenda of endless war, global empire, and the destruction of what remains of basic social services.[3]

Well, who's not against "endless war" and the "destruction of basic social services"? That Bush guy must be a complete nut! But of course this characterization is a gross exaggeration of what was actually said in the speech, in which Bush declared that we must "confront regimes that continue to harbor terrorists and pursue weapons of mass murder" and rolled out his proposal for privatization of Social Security accounts. Whatever you think of those actual policies, you need to do more to undermine them than mischaracterize them as "endless war" and "destruction of social services." That's distracting your audience from the real substance of the issues.

In 2009, during the (interminable) debate over President Obama’s healthcare reform bill—the Patient Protection and Affordable Care Act—former vice presidential candidate Sarah Palin took to Facebook to denounce the bill thus:

The America I know and love is not one in which my parents or my baby with Down Syndrome will have to stand in front of Obama’s “death panel” so his bureaucrats can decide, based on a subjective judgment of their “level of productivity in society,” whether they are worthy of health care. Such a system is downright evil.

Yikes! That sounds like the evilest bill in the history of evil! Bureaucrats euthanizing Down Syndrome babies and their grandparents? Holy Cow. ‘Death panel’ and ‘level of productivity in society’ are even in quotes. Did she pull those phrases from the text of the bill?

Of course she didn't. This is a completely insane distortion of what's actually in the bill (the kernel of truth behind the "death panels" thing seems to be a provision in the Act calling for Medicare to fund doctor-patient conversations about end-of-life care); the non-partisan fact-checking outfit PolitiFact named it their "Lie of the Year" in 2009. Palin is not taking on the bill or the president themselves; she's confronting a made-up version, defeating it (which is easy, because the made-up bill is evil as heck; I can't get the disturbing idea of a Kafkaesque Death Panel out of my head), and pretending to have won the debate. But this distraction only works if her audience believes her straw man is the real thing. Alas, many did. But of course this is why these techniques are used so frequently: they work.

Red Herring

This fallacy gets its name from the actual fish. When herring are smoked, they turn red and are quite pungent. Stinky things can be used to distract hunting dogs, who of course follow the trail of their quarry by scent; if you pass over that trail with a stinky fish and run off in a different direction, the hound may be distracted and follow the wrong trail. Whether or not this practice was ever used to train hunting dogs, as some suppose, the connection to logic and argumentation is clear. One commits the red herring fallacy when one attempts to distract one’s audience from the main thread of an argument, taking things off in a different direction. The diversion is often subtle, with the detour starting on a topic closely related to the original—but gradually wandering off into unrelated territory. The tactic is often (but not always) intentional: one commits the red herring fallacy because one is not comfortable arguing about a particular topic on the merits, often because one’s case is weak; so instead, the arguer changes the subject to an issue about which he feels more confident, makes strong points on the new topic, and pretends to have won the original argument.[4]

A fictional example can illustrate the technique. Consider Frank, who, after a hard day at work, heads to the tavern to unwind. He has far too much to drink, and, unwisely, decides to drive home. Well, he’s swerving all over the road, and he gets pulled over by the police. Let’s suppose that Frank has been pulled over in a posh suburb where there’s not a lot of crime. When the police officer tells him he’s going to be arrested for drunk driving, Frank becomes belligerent:

“Where do you get off? You’re barely even real cops out here in the ’burbs. All you do is sit around all day and pull people over for speeding and stuff. Why don’t you go investigate some real crimes? There’s probably some unsolved murders in the inner city they could use some help with. Why do you have to bother a hard-working citizen like me who just wants to go home and go to bed?”

Frank is committing the red herring fallacy (and not very subtly). The issue at hand is whether or not he deserves to be arrested for driving drunk. He clearly does. Frank is not comfortable arguing against that position on the merits. So he changes the subject—to one about which he feels like he can score some debating points. He talks about the police out here in the suburbs, who, not having much serious crime to deal with, spend most of their time issuing traffic violations. Yes, maybe that’s not as taxing a job as policing in the city. Sure, there are lots of serious crimes in other jurisdictions that go unsolved. But that’s beside the point! It’s a distraction from the real issue of whether Frank should get a DUI.

Politicians use the red herring fallacy all the time. Consider a debate about Social Security—a retirement stipend paid to all workers by the federal government. Suppose a politician makes the following argument:

We need to cut Social Security benefits, raise the retirement age, or both. As the baby boom generation reaches retirement age, the amount of money set aside for their benefits will not be enough to cover them while ensuring the same standard of living for future generations when they retire. The status quo will put enormous strains on the federal budget going forward, and we are already dealing with large, economically dangerous budget deficits now. We must reform Social Security.

Now imagine an opponent of the proposed reforms offering the following reply:

Social Security is a sacred trust, instituted during the Great Depression by FDR to ensure that no hard-working American would have to spend their retirement years in poverty. I stand by that principle. Every citizen deserves a dignified retirement. Social Security is a more important part of that than ever these days, since the downturn in the stock market has left many retirees with very little investment income to supplement government support.

The second speaker makes some good points, but notice that they do not speak to the assertion made by the first: Social Security is economically unsustainable in its current form. It’s possible to address that point head on, either by making the case that in fact the economic problems are exaggerated or non-existent, or by making the case that a tax increase could fix the problems. The respondent does neither of those things, though; he changes the subject, and talks about the importance of dignity in retirement. I’m sure he’s more comfortable talking about that subject than the economic questions raised by the first speaker, but it’s a distraction from that issue—a red herring.

Perhaps the most blatant kind of red herring is evasive: used especially by politicians, this is the refusal to answer a direct question by changing the subject. Examples are almost too numerous to cite; to some degree, no politician ever answers difficult questions straightforwardly (there’s an old axiom in politics, put nicely by Robert McNamara: “Never answer the question that is asked of you. Answer the question that you wish had been asked of you.”).

A particularly egregious example of this occurred in 2009 on CNN’s Larry King Live. Michele Bachmann, Republican Congresswoman from Minnesota, was the guest. The topic was “birtherism,” the (false) belief among some that Barack Obama was not in fact born in America and was therefore not constitutionally eligible for the presidency. After playing a clip of Senator Lindsey Graham (R, South Carolina) denouncing the myth and those who spread it, King asked Bachmann whether she agreed with Senator Graham. She responded thus:

“You know, it’s so interesting, this whole birther issue hasn’t even been one that’s ever been brought up to me by my constituents. They continually ask me, where’s the jobs? That’s what they want to know, where are the jobs?”

Bachmann doesn’t want to respond directly to the question. If she outright declares that the “birthers” are right, she looks crazy for endorsing a clearly false belief. But if she denounces them, she alienates a lot of her potential voters who believe the falsehood. Tough bind. So she blatantly, and rather desperately, tries to change the subject. Jobs! Let’s talk about those instead. Please?

Argumentum ad Hominem

Everybody always uses the Latin for this one—usually shortened to just 'ad hominem', which means 'to the person'. You commit this fallacy when, instead of attacking your opponent's views, you attack your opponent himself.

This fallacy comes in a lot of different forms; there are a lot of different ways to attack a person while ignoring (or downplaying) their actual arguments. To organize things a bit, we’ll divide the various ad hominem attacks into two groups: Abusive and Circumstantial.

Abusive ad hominem is the more straightforward of the two. The simplest version is just calling your opponent names instead of debating him. If you pepper your descriptions of your opponent with tendentious, unflattering, politically charged language, you can gain a rhetorical leg up and undermine his credibility.

Another abusive ad hominem attack is guilt by association. Here, you tarnish your opponent by associating him or his views with someone or something that your audience despises. Again, cable news is full of tenuous comparisons of contemporary leaders and ideas to unpopular foils such as the Nazis and the Bolsheviks.[5] Not every comparison like this is fallacious, of course. But in many cases, where the connection is particularly flimsy, someone is clearly pulling a fast one.

The circumstantial ad hominem fallacy is not as blunt an instrument as its abusive counterpart. It also involves attacking one’s opponent, focusing on some aspect of his person—his circumstances—as the core of the criticism. This version of the fallacy comes in many different forms, and some of the circumstantial criticisms involved raise legitimate concerns about the relationship between the arguer and his argument. They only rise (sink?) to the level of fallacy when these criticisms are taken to be definitive refutations, which, on their own, they cannot be.

To see what we're talking about, take the circumstantial ad hominem attack that points out one's opponent's self-interest in making the argument he does. Consider:

A recent study from scientists at the University of Minnesota claims to show that glyphosate—the main active ingredient in the widely used herbicide Roundup—is safe for humans to use. But guess whose business school just got a huge donation from Monsanto, the company that produces Roundup? That’s right, the University of Minnesota. Ever hear of conflict of interest? This study is junk, just like the product it’s defending.

This is a fallacy. It doesn’t follow from the fact that the University received a grant from Monsanto that scientists working at that school faked the results of a study. But the fact of the grant does raise a red flag. There may be some conflict of interest at play. Such things have happened in the past (e.g., studies funded by Big Tobacco showing that smoking is harmless). But raising the possibility of a conflict is not enough, on its own, to show that the study in question can be dismissed out of hand. It may be appropriate to subject it to heightened scrutiny, but we cannot shirk our duty to assess its arguments on their merits.

A similar thing happens when we point to the hypocrisy of someone making a certain argument—when their actions are inconsistent with the conclusion they're trying to convince us of. Consider the following:

The head of the local branch of the American Federation of Teachers union wrote an op-ed yesterday in which she defended public school teachers from criticism and made the case that public schools' quality has never been higher. But guess what? She sends her own kids to private schools out in the suburbs! What a hypocrite. The public school system is a wreck and we need more accountability for teachers.

This passage makes a strong point, but then commits a fallacy. It would appear that, indeed, the AFT leader is hypocritical; her choice to send her kids to private schools suggests (but doesn’t necessarily prove) that she doesn’t believe her own assertions about the quality of public schools. Again, this raises a red flag about her arguments; it’s a reason to subject them to heightened scrutiny. But it is not a sufficient reason to reject them out of hand, and to accept the opposite of her conclusions. That’s committing a fallacy. She may have perfectly good reasons, having nothing to do with the allegedly low quality of public schools, for sending her kids to the private school in the suburbs. Or she may not. She may secretly think, deep down, that her kids would be better off not going to public schools. But none of this means her arguments in the op-ed should be dismissed; it’s beside the point. Do her premises back up her conclusion? Are her premises true? That’s how we evaluate an argument; hypocrisy on the part of the arguer doesn’t relieve us of the responsibility to conduct thorough, dispassionate logical analysis.

A very specific version of the circumstantial ad hominem, one that involves pointing out one’s opponent’s hypocrisy, is worth highlighting, since it happens so frequently. It has its own Latin name: tu quoque, which translates roughly as “you, too.” This is the “I know you are but what am I?” fallacy; the “pot calling the kettle black”; “look who’s talking”. It’s a technique used in very specific circumstances: your opponent accuses you of doing or advocating something that’s wrong, and, instead of making an argument to defend the rightness of your actions, you simply throw the accusation back in your opponent’s face—they did it too. But that doesn’t make it right!

An example. In February 2016, Supreme Court Justice Antonin Scalia died unexpectedly. President Obama, as is his constitutional duty, nominated a successor. The Senate is supposed to 'advise and consent' (or not consent) to such nominations, but instead of holding hearings on the nominee (Merrick Garland), the Republican leaders of the Senate declared that they wouldn't even consider the nomination. Since the presidential primary season had already begun, they reasoned, they should wait until the voters had spoken and allow the new president to make a nomination. Democrats objected strenuously, arguing that the Republicans were shirking their constitutional duty. The response was classic tu quoque. A conservative writer asked, "Does any sentient human being believe that if the Democrats had the Senate majority in the final year of a conservative president's second term—and Justice [Ruth Bader] Ginsburg's seat came open—they would approve any nominee from that president?"[6] Senate Majority Leader Mitch McConnell said that he was merely following the "Biden Rule," a principle advocated by Vice President Joe Biden when he was a Senator, back in the election year of 1992, that then-President Bush should wait until after the election season was over before appointing a new Justice (the rule was hypothetical; there was no Supreme Court vacancy at the time).

This is a fallacious argument. Whether or not Democrats would do the same thing if the circumstances were reversed is irrelevant to determining whether that’s the right, constitutional thing to do.

The final variant of the circumstantial ad hominem fallacy is perhaps the most egregious. It's certainly the most ambitious: it's a preemptive attack on one's opponent to the effect that, because of the type of person he is, nothing he says on a particular topic can be taken seriously; he is excluded entirely from debate. It's called poisoning the well. This phrase was coined by the famous 19th century Catholic intellectual John Henry Cardinal Newman, who was a victim of the tactic. In the course of a dispute Newman was having with the Protestant intellectual Charles Kingsley, Kingsley is said to have remarked that anything Newman said was suspect, since, as a Catholic priest, his first allegiance was not to the truth (but rather to the Pope). As Newman rightly pointed out, this remark, if taken seriously, has the effect of rendering it impossible for him or any other Catholic to participate in any debate whatsoever. He accused Kingsley of "poisoning the wells."

We poison the well when we exclude someone from a debate because of who they are. Imagine an Englishman saying something like, "It seems to me that you Americans should reform your healthcare system. Costs over here are much higher than they are in England. And you have millions of people who don't even have access to healthcare. In the UK, we have the NHS (National Health Service); medical care is a basic right of every citizen." Suppose an American responded by saying, "What do you know about it, Limey? Go back to England." That would be poisoning the well (with a little name-calling thrown in). The Englishman is excluded from debating American healthcare just because of who he is—an Englishman, not an American.

III. Fallacies of Weak Induction

As their name suggests, what these fallacies have in common is that they are bad—that is, weak—inductive arguments. Recall, inductive arguments attempt to provide premises that make their conclusions more probable. We evaluate them according to how probable their conclusions are in light of their premises: the more probable the conclusion (given the premises), the stronger the argument; the less probable, the weaker. The fallacies of weak induction are arguments whose premises do not make their conclusions very probable—but that are nevertheless often successful in convincing people of their conclusions. We will discuss five informal fallacies that fall under this heading.

Argument from Ignorance (Argumentum ad Ignorantiam)

This is a particularly egregious and perverse fallacy. In essence, it’s an inference from premises to the effect that there’s a lack of knowledge about some topic to a definite conclusion about that topic. We don’t know; therefore, we know!

Of course, put that baldly, it’s plainly absurd; actual instances are more subtle. The fallacy comes in a variety of closely related forms. It will be helpful to state them in bald/absurd schematic fashion first, then elucidate with more subtle real-life examples.

The first form can be put like this:

Nobody knows how to explain phenomenon X.

/∴ My crazy theory about X is true.

That sounds silly, but consider an example: those “documentary” programs on cable TV about aliens. You know, the ones where they suggest that extraterrestrials built the pyramids or something (there are books and websites, too). How do they get you to believe that crazy theory? By creating mystery! By pointing to facts that nobody can explain. The Great Pyramid at Giza is aligned (almost) exactly with the magnetic north pole! On the day of the summer solstice, the sun sets exactly between two of the pyramids! The height of the Great Pyramid is (almost) exactly one one-millionth the distance from the Earth to the Sun! How could the ancient Egyptians have such sophisticated astronomical and geometrical knowledge? Why did the Egyptians, careful recordkeepers in (most) other respects, (apparently) not keep detailed records of the construction of the pyramids? Nobody knows. Conclusion: aliens built the pyramids.

In other words, there are all sorts of (sort of) surprising facts about the pyramids, and nobody knows how to explain them. From these premises, which establish only our ignorance, we're encouraged to conclude that we know something: aliens built the pyramids. That's quite a leap—too much of a leap. The burden of proof falls upon those making the claims to justify them. It is not okay to present claims and insist that they be taken as true until proven false. Moreover, those who do try to shift the burden of proof often suddenly become very rigorous and demanding of others' attempts to refute their claims.

Another form this fallacy takes can be put crudely thus:

Nobody can PROVE that I'm wrong.

/∴ I’m right.

The word ‘prove’ is in all-caps because stressing it is the key to this fallacious argument: the standard of proof is set impossibly high, so that almost no amount of evidence would constitute a refutation of the conclusion.

An example will help. There are lots of people who claim that evolutionary biology is a lie: there’s no such thing as evolution by natural selection, and it’s especially false to claim that humans evolved from earlier species, that we share a common ancestor with apes. Rather, the story goes, the Bible is literally true: the Earth is only about 6,000 years old, and humans were created as-is by God just as the Book of Genesis describes. The Argument from Ignorance is one of the favored techniques of proponents of this view. They are especially fond of pointing to “gaps” in the fossil record—the so-called “missing link” between humans and a pre-human, ape-like species—and claim that the incompleteness of the fossil record vindicates their position.

But this argument is an instance of the fallacy. The standard of proof—a complete fossil record without any gaps—is impossibly high. Evolution has been going on for a LONG time (the Earth is actually about 4.5 billion years old, and living things have been around for at least 3.5 billion years). So many species have appeared and disappeared over time that it's absurd to think that we could even come close to collecting fossilized remains of anything but the tiniest fraction of them. It's hard to become a fossil, after all: a creature has to die under special circumstances to even have a chance for its remains to do anything other than turn into compost. And we haven't been searching for fossils in a systematic way for very long (only since the mid-1800s or so). It's no surprise that there are gaps in the fossil record, then. What's surprising, in fact, is that we have as rich a fossil record as we do. Many, many transitional species have been discovered, both between humans and their ape-like ancestors, and between other modern species and their distant forebears (whales used to be land-based creatures, for example; we know this (in part) from the fossils of proto-whale species whose rear hip- and leg-bones grow smaller and smaller over time).

We will never have a fossil record complete enough to satisfy skeptics of evolution. But their standard is unreasonably high, so their argument is fallacious. Sometimes they put it even more simply: nobody was around to witness evolution in action; therefore, it didn’t happen. This is patently absurd, but it follows the same pattern: an unreasonable standard of proof (witnesses to evolution in action; impossible, since it takes place over such a long period of time), followed by the leap to the unwarranted conclusion.

Yet another version of the Argument from Ignorance goes like this:

I can’t imagine/understand how X could be true.

/∴ X is false.

Of course lack of imagination on the part of an individual isn't evidence for or against a proposition, but people often argue this way. A (hilarious) example comes from the rap duo Insane Clown Posse in their 2009 song, "Miracles". Here are the lines:

Water, fire, air and dirt

F**king magnets, how do they work?

And I don’t wanna talk to a scientist

Y’all mother**kers lying, and getting me pissed.

Violent J and Shaggy 2 Dope can’t understand how there could be a scientific, non-miraculous explanation for the workings of magnets. They conclude, therefore, that magnets are miraculous.

A final form of the Argument from Ignorance can be put crudely thus:

No evidence has been found that X is true.

/∴ X is false.

You may have heard the slogan, "Absence of evidence is not evidence of absence." This is an attempt to sum up this version of the fallacy. But it's not quite right. What it should say is that absence of evidence is not always definitive evidence of absence. An example will help illustrate the idea. During the 2016 presidential campaign, a reporter (David Fahrenthold) took to Twitter to announce that despite having "spent weeks looking for proof that [Donald Trump] really does give millions of his own [money] to charity…" he could only find one donation, to the NYC Police Athletic League. Trump has claimed to have given millions of dollars to charities over the years. Does this reporter's failure to find evidence of such giving prove that Trump's claims about his charitable donations are false? No. To rely only on this reporter's testimony to draw such a conclusion would be to commit the fallacy.

However, the failure to uncover evidence of charitable giving does provide some reason to suspect Trump’s claims may be false. How much of a reason depends on the reporter’s methods and credibility, among other things. But sometimes a lack of evidence can provide strong support for a negative conclusion. This is an inductive argument; it can be weak or strong. For example, despite multiple claims over many years (centuries, if some sources can be believed), no evidence has been found that there’s a sea monster living in Loch Ness in Scotland. Given the size of the body of water, and the extensiveness of the searches, this is pretty good evidence that there’s no such creature—a strong inductive argument to that conclusion. To claim otherwise—that there is such a monster, despite the lack of evidence—would be to commit the version of the fallacy whereby one argues “You can’t PROVE I’m wrong; therefore, I’m right,” where the standard of proof is unreasonably high.
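
For the mathematically inclined, a bit of elementary probability makes "how much of a reason" precise (this is a sketch using Bayes' theorem, which goes beyond the informal treatment here; the labels H and E are introduced just for illustration). Let H be the hypothesis that there's a monster in the loch, and let E be the evidence a search should turn up if H were true. Then failing to find E must lower the probability of H:

\[
P(H \mid \neg E) \;=\; \frac{P(\neg E \mid H)\,P(H)}{P(\neg E)} \;<\; P(H)
\quad \text{whenever} \quad P(E \mid H) > P(E \mid \neg H).
\]

The more thorough the search, the higher P(E | H) (the likelier it is that a real monster would have left detectable traces), and so the more the absence of evidence drags down our confidence in H. A casual weekend of monster-hunting barely budges it; decades of systematic sonar sweeps count heavily against it. That is the difference between a weak and a strong inductive argument from absence.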

One final note on this fallacy: it’s common for people to mislabel certain bad arguments as arguments from ignorance; namely, arguments made by people who obviously don’t know what the heck they’re talking about. People who are confused or ignorant about the subject on which they’re offering an opinion are liable to make bad arguments, but the fact of their ignorance is not enough to label those arguments as instances of the fallacy. We reserve that designation for arguments that take the forms canvassed above: those that rely on ignorance—and not just that of the arguer, but of the audience as well—as a premise to support the conclusion.

Appeal to Inappropriate Authority

One way of making an inductive argument—of lending more credence to your conclusion—is to point to the fact that some relevant authority figure agrees with you. In law, for example, this kind of argument is indispensable: appeal to precedent (Supreme Court rulings, etc.) is the attorney’s bread and butter. And in other contexts, this kind of move can make for a strong inductive argument. If I’m trying to convince you that fluoridated drinking water is safe and beneficial, I can point to the Centers for Disease Control, where a wealth of information supporting that claim can be found.[7] Those people are scientists and doctors who study this stuff for a living; they know what they’re talking about.

One commits the fallacy when one points to the testimony of someone who’s not an authority on the issue at hand. This is a favorite technique of advertisers. We’ve all seen celebrity endorsements of various products. Sometimes the celebrities are appropriate authorities: there was a Buick commercial from 2012 featuring Shaquille O’Neal, the Hall of Fame basketball player, testifying to the roominess of the car’s interior (despite its compact size). Shaq, a very, very large man, is an appropriate authority on the roominess of cars! But when Tiger Woods was shilling for Buicks a few years earlier, it wasn’t at all clear that he had any expertise to offer about their merits relative to other cars. Woods was an inappropriate authority; those ads committed the fallacy.

Usually, the inappropriateness of the authority being appealed to is obvious. But sometimes it isn't. A particularly subtle example is AstraZeneca's hiring of Dr. Phil McGraw in 2016 as a spokesperson for their diabetes outreach campaign. AstraZeneca is a drug manufacturing company. They make a diabetes drug called Bydureon. The aim of the outreach campaign, ostensibly, is to increase awareness among the public about diabetes; but of course the real aim is to sell more Bydureon. A celebrity like Dr. Phil can help. Is he an appropriate authority? That's a hard question to answer. It's true that Dr. Phil has suffered from diabetes himself for 25 years, and that he personally takes the medication. So that's a mark in his favor, authority-wise. But is that enough? We'll talk about how feeble Phil's sort of anecdotal evidence is in supporting general claims (in this case, about a drug's effectiveness) when we discuss the hasty generalization fallacy; suffice it to say, one person's positive experience doesn't prove that the drug is effective. But Dr. Phil isn't just a person who suffers from diabetes; he's a doctor! It's right there in his name (everybody always simply refers to him as 'Dr. Phil'). Surely that makes him an appropriate authority on the question of drug effectiveness. Or maybe not. Phil McGraw is not a medical doctor; he has a PhD in Psychology. He's not a licensed psychologist; he cannot legally prescribe medication. He has no relevant professional expertise about drugs and their effectiveness. He is not an appropriate authority in this case. He looks like one, though, which makes this a very sneaky, but effective, advertising campaign.

Post hoc ergo propter hoc

Here’s another fallacy for which people always use the Latin, usually shortening it to ‘post hoc’. The whole phrase translates to ‘After this, therefore because of this’, which is a pretty good summation of the pattern of reasoning involved. Crudely and schematically, it looks like this:

X occurred before Y.

/∴ X caused Y.

This is not a good inductive argument. That one event occurred before another gives you some reason to believe it might be the cause—after all, X can’t cause Y if it happened after Y did—but not nearly enough to conclude that it is the cause. A silly example: I, your humble author, was born on June 19th, 1974; this was just shortly before a momentous historical event, Richard Nixon’s resignation of the Presidency on August 9th later that summer. My birth occurred before Nixon’s resignation; but this is (obviously!) not a reason to think that it caused his resignation.

Though this kind of reasoning is obviously shoddy—a mere temporal relationship clearly does not imply a causal relationship—it is used surprisingly often. In 2012, New York Yankees shortstop Derek Jeter broke his ankle. It just so happened that this event occurred immediately after another event, as Donald Trump pointed out on Twitter: “Derek Jeter broke ankle one day after he sold his apartment in Trump World Tower.” Trump followed up: “Derek Jeter had a great career until 3 days ago when he sold his apartment at Trump World Tower- I told him not to sell- karma?” No, Donald, not karma; just bad luck.

Nowhere is this fallacy more in evidence than in our evaluation of the performance of presidents of the United States. Everything that happens during or immediately after their administrations tends to be pinned on them. But presidents aren't all-powerful; they don't cause everything that happens during their presidencies. On July 9th, 2016, a short piece appeared in the Washington Post with the headline "Police are safer under Obama than they have been in decades". What does a president have to do with the safety of cops? Very little, especially compared to other factors like poverty, crime rates, policing practices, rates of gun ownership, etc., etc., etc. To be fair, the article was aiming to counter the equally fallacious claims that increased violence against police was somehow caused by Obama. Another example: in October 2015, US News & World Report published an article asking (and purporting to answer) the question, "Which Presidents Have Been Best for the Economy?" It had charts listing GDP growth during each administration since Eisenhower. But while presidents and their policies might have some effect on economic growth, their influence is certainly swamped by other factors. Similar claims on behalf of state governors are even more absurd. At the 2016 Republican National Convention, Governors Scott Walker and Mike Pence—of Wisconsin and Indiana, respectively—both pointed to record-high employment in their states as vindication of their conservative, Republican policies. But some other states were also experiencing record-high employment at the time: California, Minnesota, New Hampshire, New York, Washington. Yes, they were all controlled by Democrats. Maybe there's a separate cause for those strong jobs numbers in differently governed states? Possibly it has something to do with the improving economy and overall health of the job market in the whole country? After all, holding rituals every day before dawn doesn't make the sun rise.

Slippery Slope

Like the post hoc fallacy, the slippery slope fallacy is a weak inductive argument to a conclusion about causation. This fallacy involves making an insufficiently supported claim that a certain action or event will set off an unstoppable causal chain-reaction—putting us on a slippery slope—leading to some disastrous effect.

This style of argument was a favorite tactic of religious conservatives who opposed gay marriage. They claimed that legalizing same-sex marriage would put the nation on a slippery slope to disaster. Famous Christian leader Pat Robertson, on his television program The 700 Club, put the case nicely. When asked about gay marriage, he responded with this:

We haven't taken this to its ultimate conclusion. You've got polygamy out there. How can we rule that polygamy is illegal when you say that homosexual marriage is legal? What is it about polygamy that's different? Well, polygamy was outlawed because it was considered immoral according to Biblical standards. But if we take Biblical standards away in homosexuality, well what about the other? And what about bestiality? And ultimately what about child molestation and pedophilia? How can we criminalize these things, at the same time have Constitutional amendments allowing same-sex marriage among homosexuals? You mark my words, this is just the beginning of a long downward slide in relation to all the things that we consider to be abhorrent.

This is a classic slippery slope fallacy; he even uses the phrase 'long downward slide'! The claim is that allowing gay marriage will force us to decriminalize polygamy, bestiality, child molestation, pedophilia—and ultimately, "all the things that we consider to be abhorrent." Yikes! That's a lot of things. Apparently, gay marriage will lead to utter anarchy.

There are genuine slippery slopes out there—unstoppable causal chain-reactions. But this isn’t one of them. The mark of the slippery slope fallacy is the assertion that the chain can’t be stopped, with reasons that are insufficient to back up that assertion. In this case, Pat Robertson has given us the abandonment of “Biblical standards” as the lubrication for the slippery slope. But this is obviously insufficient. Biblical standards are expressly forbidden, by the “establishment clause” of the First Amendment to the U.S. Constitution, from forming the basis of the legal code. The slope is not slippery. As recent history has shown, the legalization of same sex marriage does not lead to the acceptance of bestiality and pedophilia; the argument is fallacious.

Fallacious slippery slope arguments have long been deployed to resist social change. Those opposed to the abolition of slavery warned of economic collapse and social chaos. Those who opposed women’s suffrage asserted that it would lead to the dissolution of the family, rampant sexual promiscuity, and social anarchy. Of course none of these dire predictions came true; the slopes simply weren’t slippery.

Hasty Generalization

Many inductive arguments involve an inference from particular premises to a general conclusion; this is generalization. For example, if you make a bunch of observations every morning that the sun rises in the east, and conclude on that basis that, in general, the sun always rises in the east, this is a generalization. And it’s a good one! With all those particular sunrise observations as premises, your conclusion that the sun always rises in the east has a lot of support; that’s a strong inductive argument.

One commits the hasty generalization fallacy when one makes this kind of inference based on an insufficient number of particular premises, when one is too quick—hasty—in inferring the general conclusion.

People who deny that global warming is a genuine phenomenon often commit this fallacy. In February of 2015, the weather was unusually cold in Washington, DC. Senator James Inhofe of Oklahoma famously took to the Senate floor wielding a snowball. “In case we have forgotten, because we keep hearing that 2014 has been the warmest year on record, I ask the chair, ‘You know what this is?’ It’s a snowball, from outside here. So it’s very, very cold out. Very unseasonable.” He then tossed the snowball at his colleague, Senator Bill Cassidy of Louisiana, who was presiding over the debate, saying, “Catch this.”

Senator Inhofe commits the hasty generalization fallacy. He’s trying to establish a general conclusion—that 2014 wasn’t the warmest year on record, or that global warming isn’t really happening (he’s on the record that he considers it a “hoax”). But the evidence he presents is insufficient to support such a claim. His evidence is an unseasonable coldness in a single place on the planet, on a single day. We can’t derive from that any conclusions about what’s happening, temperature-wise, on the entire planet, over a long period of time. That the earth is warming is not a claim that everywhere, at every time, it will always be warmer than it was; the claim is that, on average, across the globe, temperatures are rising. This is compatible with a couple of cold snaps in the nation’s capital.

Many people are susceptible to hasty generalizations in their everyday lives. When we rely on anecdotal evidence to make decisions, we commit the fallacy. Suppose you’re thinking of buying a new car, and you’re considering a Subaru. Your neighbor has a Subaru. So what do you do? You ask your neighbor how he likes his Subaru. He tells you it runs great, hasn’t given him any trouble. You then, fallaciously, conclude that Subarus must be terrific cars. But one person’s testimony isn’t enough to justify that conclusion; you’d need to look at many, many more drivers’ experiences to reach such a conclusion (this is why the magazine Consumer Reports is so useful).

A particularly pernicious instantiation of the Hasty Generalization fallacy is the development of negative stereotypes. People often make general claims about religious or racial groups, ethnicities and nationalities, based on very little experience with them. If you once got mugged by a Puerto Rican, that’s not a good reason to think that, in general, Puerto Ricans are crooks. If a waiter at a restaurant in Paris was snooty, that’s no reason to think that French people are stuck up. And yet we see this sort of faulty reasoning all the time.

IV. Fallacies of Illicit Presumption

This is a family of fallacies whose common characteristic is that they (often tacitly, implicitly) presume the truth of some claim that they're not entitled to. They are arguments with a premise (again, often hidden) that is assumed to be true, but is actually a controversial claim, one that at best requires support that's not provided, and at worst is simply false. We will look at six fallacies under this heading.

Accident

This fallacy is the reverse of the hasty generalization. That was a fallacious inference from insufficient particular premises to a general conclusion; accident is a fallacious inference from a general premise to a particular conclusion. What makes it fallacious is an illicit presumption: the general rule in the premise is assumed, incorrectly, not to have any exceptions; the particular conclusion fallaciously inferred is one of the exceptional cases.

Here’s a simple example to help make that clear:

Cutting people with knives is illegal.

Surgeons cut people with knives.

/∴ Surgeons should be arrested.

One of the premises is the general claim that cutting people with knives is illegal. While this is true in almost all cases, there are exceptions—surgery among them. We pay surgeons lots of money to cut people with knives! It is therefore fallacious to conclude that surgeons should be arrested, since they are an exception to the general rule. The inference only goes through if we presume, incorrectly, that the rule is exceptionless.

Another example. Suppose I volunteer at my first grade daughter’s school; I go in to her class one day to read a book aloud to the children. As I’m sitting down on the floor with the kiddies, crisscross applesauce, as they say, I realize that I can’t comfortably sit that way because of the .44 Magnum revolver that I have tucked into my waistband. So I remove the piece from my pants and set it down on the floor in front of me, among the circled-up children. The teacher screams and calls the office, the police are summoned, and I’m arrested. As they’re hauling me out of the room, I protest: “The Second Amendment to the Constitution guarantees my right to keep and bear arms! This state has a ‘concealed carry’ law, and I have a license to carry that gun! Let me go!”

I’m committing the fallacy of Accident in this story. True, the Second Amendment guarantees the right to keep and bear arms; but that rule is not without exceptions. Similarly, concealed carry laws also have exceptions—among them being a prohibition on carrying weapons into elementary schools. My insistence on being released only makes sense if we presume, incorrectly, that the legal rules I’m citing are without exception.

One more example from real life. After the financial crisis in 2008, the Federal Reserve—the central bank in the United States, whose task it is to create conditions leading to full employment and moderate inflation—found itself in a bind. The economy was in a free-fall, and unemployment rates were skyrocketing, but the usual tool it used to mitigate such problems—cutting the short-term federal funds rate (an interest rate banks charge each other for overnight loans)—was unavailable, because they had already cut the rate to zero (the lowest it could go). So they had to resort to unconventional monetary policies, among them something called “quantitative easing”. This involved the purchase, by the Federal Reserve, of financial assets like mortgage-backed securities and longer-term government debt (Treasury notes).

Now, the nice thing about being the Federal Reserve is that when you want to buy something—in this case a bunch of financial assets—it's really easy to pay for it: you have the power to create new money out of thin air! That's what the Federal Reserve does; it controls the amount of money that exists. So if the Fed wants to buy, say, $10 million worth of securities from Bank of America, they just press a button and presto—$10 million that didn't exist a second ago comes into being as an asset of Bank of America.

This quantitative easing policy was controversial. Many people worried that it would lead to runaway inflation. Generally speaking, the more money there is, the less each bit of it is worth. So creating more money makes things cost more—inflation. The Fed was creating money on a very large scale—on the order of a trillion dollars. Shouldn’t that lead to a huge amount of inflation?

Economist Art Laffer thought so. In June of 2009, he wrote an op-ed in the Wall Street Journal warning that “[t]he unprecedented expansion of the money supply could make the ’70s look benign.”[8] (There was a lot of inflation in the ’70s.)

Another famous economist, Paul Krugman, accused Laffer of committing the fallacy of accident. While it's generally true that an increase in the supply of money leads to inflation, that rule is not without exceptions. Krugman had described just such exceptional circumstances in 1998,[9] and he pointed out that the economy of 2009 was in that condition (which economists call a "liquidity trap"): "Let me add, for the 1.6 trillionth time, we are in a liquidity trap. And in such circumstances a rise in the monetary base does not lead to inflation."
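For readers who want the mechanics behind this dispute, here is a rough gloss using the standard quantity equation of money (our illustration; neither Laffer nor Krugman frames it quite this way):

$$MV = PY$$

where M is the money supply, V the velocity of money (how often a typical dollar changes hands), P the price level, and Y real output. The general rule that more money means inflation tacitly assumes that V is roughly stable, so that growth in M shows up as growth in P. In a liquidity trap, that assumption fails: newly created money sits idle as bank reserves, V collapses, and P barely moves.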

It turns out Krugman was correct. The expansion of the monetary supply did not lead to runaway inflation; as a matter of fact, inflation remained below the level that the Federal Reserve wanted, barely moving at all. Laffer had indeed committed the fallacy of accident.

Begging the Question (Petitio Principii)

First things first: ‘begging the question’ is not synonymous with ‘raising the question’; this is an extremely common usage, but it is wrong. You might hear a newscaster say, “Today Donald Trump’s private jet was spotted at the Indianapolis airport, which begs the question: ‘Will he choose Indiana Governor Mike Pence as running mate?’” This is a mistaken usage of ‘begs the question’; the newscaster should have said ‘raises the question’ instead.

‘Begging the question’ is a translation of the Latin ‘petitio principii’, which refers to the practice of asking (begging, petitioning) your audience to grant you the truth of a claim (principle) as a premise in an argument—but it turns out that the claim you’re asking for is either identical to, or presupposes the truth of, the very conclusion of the argument you’re trying to make.

In other words, when you beg the question, you’re arguing in a circle: one of the reasons for believing the conclusion is the conclusion itself! It’s a Fallacy of Illicit Presumption where the proposition being presumed is the very proposition you’re trying to demonstrate; that’s clearly an illicit presumption.

Here’s a stark example. If I’m trying to convince you that Goldfinger is a dangerous idiot (the conclusion of my argument is ‘Goldfinger is a dangerous idiot’), then I can’t ask you to grant me the claim ‘Goldfinger is a dangerous idiot’. The premise can’t be the same as the conclusion. Imagine a conversation:

Me: “Goldfinger is a dangerous idiot.”

You: “Really? Why do you say that?”

Me: “Because Goldfinger is a dangerous idiot.”

You: “So you said. But why should I agree with you? Give me some reasons.”

Me: “Here’s a reason: Goldfinger is a dangerous idiot.”

And round and round we go. Circular reasoning; begging the question.

It’s not always so blatant. Sometimes the premise is not identical to the conclusion, but merely presupposes its truth. Why should we believe that the Bible is true? Because it says so right there in the Bible that it’s the infallible Word of God. This premise is not the same as the conclusion, but it can only support the conclusion if we take the Bible’s word for its own truthfulness, i.e., if we assume that the Bible is true. But that was the very claim we were trying to prove!

Sometimes the premise is just a re-wording of the conclusion. Consider this argument: “To allow every man unbounded freedom of speech must always be, on the whole, advantageous to the state; for it is highly conducive to the interests of the community that each individual should enjoy a liberty, perfectly unlimited, of expressing his sentiments.”[11] Replacing synonyms with synonyms, this comes down to “Free speech is good for society because free speech is good for society.” Not a good argument.

Loaded Questions

Loaded questions are questions the very asking of which presumes the truth of some claim. Asking these can be an effective debating technique, a way of sneaking a controversial claim into the discussion without having outright asserted it.

The classic example of a loaded question is, “Have you stopped beating your wife?” Notice that this is a yes-or-no question, and no matter which answer one gives, one admits to beating his wife: if the answer is ‘no’, then the person continues to beat his wife; if the answer is ‘yes’, then he admits to beating his wife in the past. Either way, he’s a wife-beater. The question itself presumes the truth of this claim; that’s what makes it “loaded”.

Strategic deployment of loaded yes-or-no questions can be an extremely effective debating technique. If you catch your opponent off-guard, they will struggle to respond to your question, since a simple ‘yes’ or ‘no’ commits them to the truth of the illicit presumption, which they want to deny. This makes them look evasive, shifty. And as they struggle to come up with a response, you can pounce on them: “It’s a simple question. Yes or no? Why won’t you answer the question?” It’s a great way to appear to be winning a debate, even if you don’t have a good argument. Imagine the following dialogue:

Liberal TV Host: “Are you or are you not in favor of the president’s plan to force wealthy business owners to pay their fair share in taxes to protect the vulnerable and aid this nation’s underprivileged?”

Conservative Guest: “Well, I don’t agree with the way you’ve laid out the question. As a matter of fact…”

Host: “It’s a simple question. Should business owners pay their fair share; yes or no?”

Guest: “You’re implying that the president’s plan would correct some injustice. But corporate taxes are already very…”

Host: “Stop avoiding the question! It’s a simple yes or no!”

Combine this with the sort of subconscious appeal to force discussed above—yelling, finger-pointing, etc.—and the host might come off looking like the winner of the debate, with his opponent appearing evasive, uncooperative, and inarticulate.

Another use for loaded questions is the particularly sneaky political practice of “push polling”. In a normal opinion poll, you call people up to try to discover what their views are about the issues. In a push poll, you call people up pretending to be conducting a normal opinion poll, pretending only to be interested in discovering their views, but with a different intention entirely: you don’t want to know what their views are; you want to shape their views, to convince them of something. And you use loaded questions to do it.

A famous example of this occurred during the Republican presidential primary in 2000. George W. Bush was the front-runner, but was facing a surprisingly strong challenge from the upstart John McCain. After McCain won the New Hampshire primary, he had a lot of momentum. The next state to vote was South Carolina; it was very important for the Bush campaign to defeat McCain there and reclaim the momentum. So they conducted a push poll designed to spread negative feelings about McCain—by implanting false beliefs among the voting public. "Pollsters" called voters and asked, "Would you be more or less likely to vote for John McCain for president if you knew he had fathered an illegitimate black child?" The aim, of course, was to get voters to believe that McCain had fathered an illegitimate black child. But he had done no such thing. He and his wife had adopted a daughter, Bridget, from Bangladesh.

A final note on loaded questions: there’s a minimal sense in which every question is loaded. The social practice of asking questions is governed by implicit norms. One of these is that it’s only appropriate to ask a question when there’s some doubt about the answer. So every question carries with it the presumption that this norm is being adhered to, that it’s a reasonable question to ask, that the answer is not certain. One can exploit this fact, again to plant beliefs in listeners’ minds that they otherwise wouldn’t hold. In a particularly shameful bit of alarmist journalism, the cover of the July 1, 2016 issue of Newsweek asks the question, “Can ISIS Take Down Washington?” The cover is an alarming, eye-catching shade of yellow, and shows four missiles converging on the Capitol dome. The simple answer to the question, though, is ‘no, of course not’. There is no evidence that ISIS has the capacity to destroy the nation’s capital. But the very asking of the question presumes that it’s a reasonable thing to wonder about, that there might be a reason to think that the answer is ‘yes’. The goal is to scare readers (and sell magazines) by getting them to believe there might be such a threat.

False Choice

This fallacy occurs when someone tries to convince you of something by presenting it as one of a limited number of options and the best choice among those options. The illicit presumption is that the options are limited in the way presented; in fact, there are additional options that are not offered. The choice you're asked to make is a false choice, since not all the possibilities have been presented. The old slogan "America: love it or leave it" is a classic instance; it ignores a third option, namely staying and working to change what you don't love. Other times, the offered choice is between two options which are presented as being mutually exclusive when they are not.

Composition

The fallacy of Composition rests on an illicit presumption about the relationship between a whole thing and the parts that make it up. The distinction between whole and parts is intuitive: a person, for example, can be considered as a whole individual thing, made up of lots of parts—hands, feet, brain, lungs, etc., etc. We commit the fallacy of Composition when we mistakenly assume that any property that all of the parts share is also a property of the whole. Schematically, it looks like this:

All of the parts of X have property P.

Any property shared by all of the parts of a thing is also a property of the whole.

/∴ X has the property P.

The second premise is the illicit presumption that makes this argument go through. It is illicit because it is simply false: sometimes all the parts of something have a property in common, but the whole does not have that property.

Consider the 1980 U.S. Men’s Hockey Team. They won the gold medal at the Olympics that year, beating the unstoppable-seeming Russian team in the semifinals. (That game is often referred to as “The Miracle on Ice” after announcer Al Michaels’ memorable call as the seconds ticked off at the end: “Do you believe in miracles? Yes!”) Famously, the U.S. team that year was a rag-tag collection of no-name college guys; the average age on the team was 21, making them the youngest team ever to compete for the U.S. in the Olympics. The Russian team, on the other hand, was packed with seasoned hockey veterans with world-class talent.

In this example, the team is the whole, and the individual players on the team are the parts. It's safe to say that one of the properties that all of the parts shared was mediocrity—at least, by the standards of international competition at the time. They were all good hockey players, of course—Division I college athletes—but compared to the Hall of Famers the Russians had, they were mediocre at best. So, all of the parts have the property of being mediocre. But it would be a mistake to conclude that the whole made up of those parts—the 1980 U.S. Men's Hockey Team—also had that property. The team was not mediocre; they defeated the Russians and won the gold medal! They were a classic example of the whole being greater than the sum of its parts.

Division

The fallacy of Division is the exact reverse of the fallacy of Composition. It’s an inference from the fact that a whole has some property to a conclusion that a part of that whole has the same property, based on the illicit presumption that wholes and parts must have the same properties. Schematically:

X has the property P.

Any property of a whole thing is shared by all of its parts.

/∴ x, which is a part of X, has property P.

The second premise is the illicit presumption. It is false, because sometimes parts of things don’t have the same properties as the whole. George Clooney is handsome; does it follow that his large intestine is also handsome? Of course not. Toy Story 3 is a funny movie. Remember when Mr. Potato Head had to use a tortilla for his body? Or when Buzz gets flipped into Spanish mode and does the flamenco dance with Jessie? Hilarious. But not all of the parts of the movie are funny. When it looks like all the toys are about to be incinerated at the dump? When Andy finally drives off to college? Not funny at all!

V. Fallacies of Linguistic Emphasis

Natural languages like English are unruly things. They’re full of ambiguity, shades of meaning, vague expressions; they grow and develop and change over time, often in unpredictable ways, at the capricious collective whim of the people using them. Languages are messy, complicated. This state of affairs can be taken advantage of by the clever debater, exploiting the vagaries of language to make convincing arguments that are nevertheless fallacious. This exploitation involves the manipulation of linguistic forms to emphasize facts, claims, emotions, etc. that favor one’s position, and to de-emphasize those that do not. We will survey four techniques that fall under this heading.

Accent

This is one of the original 13 fallacies that Aristotle recognized in his Sophistical Refutations. Our usage, however, will depart from Aristotle’s. He identifies a potential for ambiguity and misunderstanding that is peculiar to his language—ancient Greek. That language—in written form—used diacritical marks along with the alphabet, and transposition of these could lead to changes in meaning. English is not like this, but we can identify a fallacy that is roughly in line with the spirit of Aristotle’s accent: it is possible, in both written and spoken English (along with every other language), to convey different meanings by stressing individual words and phrases. The devious use of stress to emphasize contents that are helpful to one’s rhetorical goals, and to suppress or obscure those that are not—that is the fallacy of accent.

There are a number of techniques one can use with the written word that fall in the category of accent. Perhaps the simplest way to emphasize favorable contents, and de-emphasize unfavorable ones, is to vary the size of one's text. We see this in advertising all the time. You drive past a store that's having a sale, which they advertise with a sign in the window. In the largest, most eye-catching font, you read, "70% OFF!" "Wow," you might think, "that's a really steep discount. I should go in to the store and get a great deal." At least, that's what the store wants you to think. They're emphasizing the fact of (at least one) steep discount. If you look more closely at the sign, however, you'll see the things that they're legally required to say, but that they'd like to de-emphasize. There's a tiny 'Up to' in front of the gigantic '70% OFF!'. For all you know, there's one crappy item that nobody wants, tucked in the back of the store, that's discounted at 70%; everything else has much smaller discounts, or none at all. Also, if you squint really hard, you'll see an asterisk after the '70% OFF!', which leads to some text at the bottom of the poster, in the tiniest font possible, that reads, "While supplies last. See store for details. Not available in all locations. Offer not valid weekends or holidays. All sales are final." This is the proverbial "fine print". It makes the sale look a lot less exciting. So they hide it.

Footnotes are generally a good place to hide unfavorable content. We all know that CEOs of big companies—especially banks—get paid ridiculous sums of money. Some of it is just their salary and stock options; those amounts are huge enough to turn most people off. But there are other perks that are so over-the-top, companies and executives feel like it’s best to hide them from the public (and their shareholders) in the footnotes of CEO contracts and SEC reports. Michelle Leder runs a website called footnoted.com, which is dedicated to combing through these documents and exposing outrageous compensation packages. She’s uncovered executives spending over $700,000 to renovate their offices, demanding helicopters in addition to their corporate jets, receiving millions of dollars’ worth of private security services, etc., etc. These additional, extravagant forms of compensation seem excessive to most people, so companies do all they can to hide them from the public.

Another abuse of footnotes can occur in academic or legal writing. Legal briefs and opinions and academic papers seek to persuade. If you’re writing such a document, and you relegate a strong objection to your conclusion to a brief mention in the footnotes, you’re de-emphasizing that point of view and making it less likely that the reader will reject your arguments. That’s a fallacious suppression of opposing content, a sneaky trick to try to convince people you’re right without giving them a forthright presentation of the merits (and demerits) of your position.

The fallacy of accent can occur in speech as well as writing. The audible correlate of “fine print” is that guy talking really fast at the end of the commercial, rattling off all the unpleasant side effects and legal disclaimers that, if given a full, deliberate presentation might make you less likely to buy the product they’re selling. The reason, by the way, that we know about such horrors as the possibility of driving while not awake (a side-effect of some sleep aids) and a four-hour erection (side-effect of erectile-dysfunction drugs), is that drug companies are required, by federal law, not to commit the fallacy of accent if they want to market drugs directly to consumers. They have to read what’s called a “major statement” that lists all of these side-effects explicitly, and no fair cramming them in at the end and talking over them really fast.

When we speak, how we stress individual words and phrases can alter the meaning that we convey with our utterances. Consider the sentence ‘These pretzels are making me thirsty.’ Now consider various utterances of that sentence, each stressing a different word; different meanings will be conveyed:

THESE pretzels are making me thirsty. [Not those over there, these right here.]

These PRETZELS are making me thirsty. [It's not the chips, it's the pretzels.]

These pretzels ARE making me thirsty. [Don't try to tell me they're not; they are.]

And so on. We can capture the various stresses typographically by using italics (or boldface or all-caps), but if we leave that out, we lose some of the meaning conveyed by the actual, stressed utterance. One can commit the fallacy of accent by transcribing someone's speech in a way that omits stress-indicators, and thereby obscures or alters the meaning that the person actually conveyed. Suppose a candidate for president says, "I HOPE this country never has to wage war with Iran." The stress on 'hope' clearly conveys that the speaker doubts that his hopes will be realized; the candidate has expressed a suspicion that there may be war with Iran. This speech might set off a scandal: saying such a thing during an election could negatively affect the campaign, with the candidate being perceived as a war-monger; it could upset international relations. The campaign might try to limit the damage by writing an op-ed in a major newspaper, and transcribing the candidate's utterance without any indication of stress: "The Senator said, 'I hope this country never has to wage war with Iran.' This is a sentiment shared by most voters, and even our opponent." This transcription, of course, obscures the meaning of the original utterance. Without the stress, there is no additional implication that the candidate suspects that there will in fact be a war.

Quoting out of Context

Another way to obscure or alter the meaning of what someone actually said is to quote them selectively. Remarks taken out of their proper context might convey a different meaning than they did within that context.

Consider a simple example: movie ads. These often feature quotes from film critics, which are intended to convey the impression that the movie was well-liked by them. “Critics call the film ‘unrelenting’, ‘amazing’, and ‘a one-of-a-kind movie experience’”, the ad might say. That sounds like pretty high praise. I think I’d like to see that movie. That is, until I read the actual review from which those quotes were pulled:

I thought I’d seen it all at the movies, but even this jaded reviewer has to admit that this film is something new, a one-of-a-kind movie experience: two straight hours of unrelenting, snooze-inducing mediocrity. I find it amazing that not one single aspect of this movie achieves even the level of “eh, I guess that was OK.”

The words ‘unrelenting’ and ‘amazing’—and the phrase ‘a one-of-a-kind movie experience’—do in fact appear in that review. But situated in their original context, they’re doing something completely different than the movie ad would like us to believe.

Politicians often quote each other out of context to make their opponents look bad. In the 2012 presidential campaign, both sides did it rather memorably. The Romney campaign was trying to paint President Obama as anti-business. In a campaign speech, Obama once said the following:

If you’ve been successful, you didn’t get there on your own. You didn’t get there on your own. I’m always struck by people who think, well, it must be because I was just so smart. There are a lot of smart people out there. It must be because I worked harder than everybody else. Let me tell you something: there are a whole bunch of hardworking people out there. If you’ve got a business, you didn’t build that. Somebody else made that happen.

Yikes! What an insult to all the hard-working small-business owners out there. They didn’t build their own businesses? The Romney campaign made some effective ads, with these remarks playing in the background, and small-business people describing how they struggled to get their firms going. The problem is, that quote above leaves some bits out—specifically, a few sentences before the last two. Here’s the full transcript:

If you’ve been successful, you didn’t get there on your own. You didn’t get there on your own. I’m always struck by people who think, well, it must be because I was just so smart. There are a lot of smart people out there. It must be because I worked harder than everybody else. Let me tell you something: there are a whole bunch of hardworking people out there.

If you were successful, somebody along the line gave you some help. There was a great teacher somewhere in your life. Somebody helped to create this unbelievable American system that we have that allowed you to thrive. Somebody invested in roads and bridges. If you’ve got a business, you didn’t build that. Somebody else made that happen.

Oh. He’s not telling business owners that they didn’t build their own businesses. The word ‘that’ in “you didn’t build that” doesn’t refer to the businesses; it refers to the roads and bridges—the “unbelievable American system” that makes it possible for businesses to thrive. He’s making a case for infrastructure and education investment; he’s not demonizing small-business owners.

The Obama campaign pulled a similar trick on Romney. They were trying to portray Romney as an out-of-touch billionaire, someone who doesn’t know what it’s like to struggle, and someone who made his fortune by buying up companies and firing their employees. During one speech, Romney said: “I like being able to fire people who provide services to me.” Yikes! What a creep. This guy gets off on firing people? What, he just finds joy in making people suffer? Sounds like a moral monster. Until you see the whole speech:

I want individuals to have their own insurance. That means the insurance company will have an incentive to keep you healthy. It also means if you don’t like what they do, you can fire them. I like being able to fire people who provide services to me. You know, if someone doesn’t give me the good service that I need, I want to say I’m going to go get someone else to provide that service to me.

He’s making a case for a particular health insurance policy: self-ownership rather than employer-provided health insurance. The idea seems to be that under such a system, service will improve since people will be empowered to switch companies when they’re dissatisfied—kind of like with cell phones, for example. When he says he likes being able to fire people, he’s talking about being a savvy consumer. I guess he’s not a moral monster after all.

Equivocation

Typical of natural languages is the phenomenon of homonymy: when words have the same spelling and pronunciation, but different meanings—like 'bat' (referring to the nocturnal flying mammal) and 'bat' (referring to the thing you hit a baseball with). This kind of natural-language messiness allows for potential fallacious exploitation: a sneaky debater can manipulate the subtleties of meaning to convince people of things that aren't true—or at least not justified based on what they say. We call this kind of maneuver the fallacy of equivocation.

Here’s an example. Consider a banker; let’s call him Fred. Fred is the president of a bank, a real big-shot. He’s married, but he’s not faithful: he’s carrying on an affair with one of the tellers at his bank, Linda. Fred and Linda have a favorite activity: they take long lunches away from their workplace, having romantic picnics at a beautiful spot they found a short walk away. They lay out their blanket underneath an old, magnificent oak tree, which is situated right next to a river, and enjoy champagne and strawberries while canoodling and watching the boats float by.

One day—let’s say it’s the anniversary of when they started their affair—Fred and Linda decide to celebrate by skipping out of work entirely, spending the whole day at their favorite picnic spot. (Remember, Fred’s the boss, so he can get away with this.) When Fred arrives home that night, his wife is waiting for him. She suspects that something is up: “What are you hiding, Fred? Are you having an affair? I called your office twice, and your secretary said you were ‘unavailable’ both times. Tell me this: Did you even go to work today?” Fred replies, “Scout’s honor, dear. I swear I spent all day at the bank today.”

See what he did there? ‘Bank’ can refer either to a financial institution or the side of a river—a river bank. Fred and Linda’s favorite picnic spot is on a river bank, and Fred did indeed spend the whole day at that bank. He’s trying to convince his wife he hasn’t been cheating on her, and he exploits this little quirk of language to do so. That’s equivocation.

A similar linguistic phenomenon can also be exploited to equivocate: polysemy. This is distinct from, but similar to, homonymy. The meanings of homonyms are typically unrelated. In polysemy, the same word or phrase has multiple, related meanings—different senses. Consider the word ‘law’. The meaning that comes immediately to mind is the statutory one: “A rule of conduct imposed by authority.”[12] The state law prohibiting murder is an instance of a law in this sense. There is another sense of ‘law’, however; this is the sense operative when we speak of scientific laws. These are regularities in nature—Newton’s law of universal gravitation, for example. These meanings are similar, but distinct: statutes, human laws, are prescriptive; scientific laws are descriptive. Human laws tell us how we ought to behave; scientific laws describe how things actually do, and must, behave. Human laws can be violated: I could murder someone. Scientific laws cannot be violated: if two bodies have mass, they will be attracted to one another by a force directly proportional to the product of their masses and inversely proportional to the square of the distance between them; there’s no getting around it.
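For reference, the regularity just described can be stated symbolically; this is the standard formulation of Newton's law of universal gravitation, supplied here for illustration:

$$F = G\,\frac{m_1 m_2}{r^2}$$

where F is the attractive force between the two bodies, m1 and m2 are their masses, r is the distance between them, and G is the gravitational constant. Notice that the equation tells no one how they ought to behave; it simply describes how massive bodies do behave. That is precisely the prescriptive/descriptive contrast between the two senses of 'law'.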

A common argument for the existence of God relies on equivocation between these two senses of ‘law’:

There are laws of nature.

By definition, laws are rules imposed by an Authority.

So the laws of nature were imposed by an Authority.

The only Authority who could impose such laws is an all-powerful Creator—God.

/∴ God exists.

This argument relies on fallaciously equivocating between the two senses of ‘law’—human and natural. It’s true that human laws are by definition imposed by an authority; but that is not true of natural laws. Additional argument is needed to establish that those must be so imposed.

A famous instance of equivocation of this sort occurred in 1998, when President Bill Clinton denied having an affair with White House intern Monica Lewinsky by declaring forcefully in a press conference: "I did not have sexual relations with that woman—Ms. Lewinsky." The president wanted to convince his audience that nothing sexually inappropriate had happened, even though, as was revealed later, lots of icky sex stuff had been going on. He does this by taking advantage of the polysemy of the phrase 'sexual relations'. In the broadest sense, the phrase connotes sexual activity of any kind—including oral sex (which Bill and Monica engaged in). This is the sense the president wants his audience to have in mind, so that they're convinced by his denial that nothing untoward happened. But a more restrictive sense of 'sexual relations'—a somewhat more old-fashioned, Southern usage—refers specifically to intercourse (which Bill and Monica did not engage in). It's this sense that the president can fall back on if anyone accuses him of having lied; he can claim that, strictly speaking, he was telling the truth: he and Monica didn't have 'relations' in the intercourse sense. Clinton later admitted to "misleading" the American people—but, importantly, not to lying.

The distinction between lying and misleading is a hard one to draw precisely, but roughly speaking it’s the difference between trying to get someone to believe something false by saying something false (lying) and trying to get them to believe something false by saying something true but deceptive (misleading). Besides homonymy and polysemy, yet another common linguistic phenomenon can be exploited to this end. This phenomenon is implicature, identified and named by the philosopher Paul Grice in the 1960s.[13] Implicatures are contents that we communicate over and above the literal meaning of what we say—aspects of what we mean by our utterances that aren’t stated explicitly. People listening to us infer these additional meanings based on the assumption that the speaker is being cooperative, observing some unwritten rules of conversational practice. To use one of Grice’s examples, suppose your car has run out of gas on the side of the road, and you stop me as I walk by, explaining your plight, and I say, “There’s a gas station right around the corner.” Part of what I communicate by my utterance is that the station is open and selling gas right now—that you can go there and solve your problem. You can infer this content based on the assumption that I’m being a cooperative conversational partner; if the station is closed or out of gas—and I knew it—then I would be acting unhelpfully, uncooperatively. Notice, though, that this content is not part of what I literally said: all I told you is that there is a gas station around the corner, which would still be true even if it were closed and/or out of gas.

Implicatures are yet another subtle aspect of meaning in natural language that can be exploited. So a final technique that we might classify under the fallacy of equivocation is false implication—saying things that are strictly speaking true, but which communicate false implicatures. Grocery stores do this all the time. You know those signs posted under, say, cans of soup that say "10 for $10"? That's the store's way of telling us that soup's on sale for a buck a can; that's right, you don't need to buy 10 cans to get the deal; if you buy one can, it's $1; 2 cans are $2, and so on. So why not post a sign saying "$1 per can"? Because the 10-for-$10 sign conveys the false implicature that you need to buy 10 cans in order to get the sale price. The store's trying to drive up sales.

A striking example of false implicature is featured in one of the most prominent U.S. Supreme Court rulings on perjury law. In the original criminal case, a defendant by the name of Bronston had the following exchange with the prosecuting attorney: “Q. Do you have any bank accounts in Swiss Banks, Mr. Bronston? A. No, sir. Q. Have you ever? A. The company had an account there for about six months, in Zurich.”[14] As it turns out, Bronston did not have any Swiss bank accounts at the time of the questioning, so his first answer was strictly true. But he did have Swiss bank accounts in the past. However, his second answer does not deny this. All he says is that his company had Swiss bank accounts—an answer that implicates that he himself did not. Based on this exchange, Bronston was convicted of perjury, but the Supreme Court overturned that conviction, pointing out that Bronston had not made any false statements (a requirement of the perjury statute); the falsehood he conveyed was an implicature.

Manipulative Framing

Words are powerful. They can trigger emotional responses and activate associations with related ideas, altering the way we perceive the world and conceptualize issues. The language we use to describe a particular policy, for example, can affect how favorably our listeners are likely to view that proposal. How we frame issues with language can profoundly influence how persuasive our arguments about those issues will be. The technique of deliberately choosing words that frame issues so as to manipulate your audience is what we will call the fallacy of manipulative framing.

The importance of framing in politics has long been recognized, but only in recent decades has it been raised to an art form. One prominent practitioner of the art is Republican consultant Frank Luntz. In a 200-plus page memo he sent to Congressional Republicans in 1997, and later in a book, Luntz stressed the importance of choosing persuasive language to frame issues so that voters would be more likely to support Republican positions on issues.[15] One of his recommendations illustrates manipulative framing nicely. In the United States, if you leave a fortune to your heirs after you die, then the government taxes it (provided it’s greater than about $5.5 million, or $11 million for a couple, as of 2016). The usual name for this tax is the ‘estate tax’. Luntz encouraged Republicans—who are generally opposed to this tax—to start referring to it instead as the “death tax”. This framing is likelier to cause voters to oppose the tax as well: taxing people for dying? Talk about kicking a man when he’s down! (Polling bears this out: people oppose the tax in higher numbers when it’s called the ‘death tax’ than when it’s called the ‘estate tax’.)

The linguist George Lakoff has written extensively on the subject of framing.[16] His remarks on the subject of “tax relief” nicely illustrate how framing works:

On the day that George W. Bush took office, the words tax relief started appearing in White House communiqués to the press and in official speeches and reports by conservatives. Let us look in detail at the framing evoked by this term.

The word relief evokes a frame in which there is a blameless Afflicted Person who we identify with and who has some Affliction, some pain or harm that is imposed by some external Cause-of-pain. Relief is the taking away of the pain or harm, and it is brought about by some Reliever-of-pain.

The Relief frame is an instance of a more general Rescue scenario, in which there is a Hero (the Reliever-of-pain), a Victim (the Afflicted), a Crime (the Affliction), a Villain (the Cause-of-affliction), and a Rescue (the Pain Relief). The Hero is inherently good, the Villain is evil, and the Victim after the Rescue owes gratitude to the Hero.

The term tax relief evokes all of this and more. Taxes, in this phrase, are the Affliction (the Crime), proponents of taxes are the Causes-of Affliction (the Villains), the taxpayer is the Afflicted Victim, and the proponents of “tax relief” are the Heroes who deserve the taxpayers’ gratitude.

Every time the phrase tax relief is used and heard or read by millions of people, this view of taxation as an affliction and conservatives as heroes gets reinforced.[17]

Carefully chosen words can trigger all sorts of mental associations, mostly at the subconscious level, that affect how people perceive the issues and have the power to change opinions. That's why manipulative framing is ubiquitous in public discourse.

Consider debates about illegal immigration. Those who are generally opposed to policies that favor such people will often refer to them as "illegal immigrants". This framing emphasizes the fact that they are in this country illegally, making it likelier that the listener will also oppose policies that favor them. A further modification increases this likelihood even more: "illegal aliens." The word 'alien' has a subtle dehumanizing effect; if we don't think of them as individual people with hopes and dreams, we're not likely to care much about them. Even more dehumanizing is a framing one often sees these days: referring to illegal immigrants simply as "illegals". They are the living embodiment of illegality! Those who advocate on behalf of such people, of course, use different terminology to refer to them: "undocumented workers", for example. This framing de-emphasizes the fact that they're here illegally; they're merely "undocumented". They lack certain pieces of paper; what's the big deal? It also emphasizes the fact that they are working, which is likely to cause listeners to think of them more favorably.

The use of manipulative framing in the political sphere extends to the very names that politicians give the laws they pass. Consider the healthcare reform act passed in 2010. Its official name is The Patient Protection and Affordable Care Act. Protection of patients, affordability, care—these all trigger positive associations. The idea is that every time someone talks about the law prior to and after its passage, they will use the name with this positive framing and people will be more likely to support it. As you may know, this law is commonly referred to with a different moniker: ‘Obamacare’. This is the framing of choice for the law’s opponents: any negative associations people have with President Obama are attached to the law; and any negative feelings they have about healthcare reform get attached to Obama. Late night talk show host Jimmy Kimmel demonstrated the effectiveness of framing on his show one night in 2013. He sent a crew outside his studio to interview people on the street and ask them which approach to health reform they preferred, the Affordable Care Act or Obamacare. Overwhelmingly, people expressed a preference for the Affordable Care Act over Obamacare, even though those are just two different ways of referring to the same piece of legislation. Framing is especially important when the public is ignorant of the actual content of policy proposals, which is all too often the case.

EXERCISES

Identify the fallacy most clearly exhibited in the following passages.

1. Responding to a critical comment from one Mike Helgeson, the anonymous proprietor of the “Governor Scott Walker Sucks” Facebook page wrote this:

“Mike Helgeson is a typical right wing idiot who assumes anyone who doesn’t like Walker doesn’t work and is living off the government. I work 60-70 hours a week during the summer so get a clue and quit whining like a child.”

2. Randy: “I think abortion should be illegal. Unborn children have a right not to be killed.”

Sally: “What do you know about it? You’re a man.”

3. We need a balanced budget amendment, forcing the U.S. government to balance its budget every year. All of the states have to balance their budgets; so should the country.

4. Privacy is important to the development of full individuals because there has to be an interior zone within each person that other people don’t see.[18]

5. Of course, the real gripe the left has in Wisconsin is that the current legislative districts were drawn by Republicans, who were granted that right due to their large victories in 2010. Since the new maps were unveiled in 2011, Democrats have issued several legal challenges trying to argue the maps are “unfair” and that Republicans overstepped their bounds.

Did Republicans draw the maps to their advantage? Of course they did — just as Democrats would have done had they held control of state government in 2010.[19]

6. President Obama has been terrible for healthcare costs in this country. When we had our first child, before he was president, we only paid a couple of hundred dollars out of pocket; insurance covered the rest. The new baby we just had? The hospital bills cost us over $5,000!

7. Let’s call our public schools what they really are—‘government’ schools.[20]

8. You shouldn’t hire that guy. The last company he worked for went bankrupt. He’s probably a failure, too.

9. Fred: “I read about a new study that shows diet soda is good for weight loss—better than water, even.”

Fiona: “Yeah, but look at who sponsored it: the International Life Sciences Institute, which is a non-profit, but whose board of directors is stacked with people from Coca-Cola and PepsiCo.”

10. Buy the Amazing RonCo Super Bass-o-Matic ’76, the easiest way to prepare delicious bass: only 3 installments of $19.99.*

*Shipping and handling fees apply. Price is before state, local, and federal taxes. Safety goggles sold separately. The rush from using Super Bass-o-Matic '76 has been shown to be addictive in laboratory mice. RonCo not legally responsible for injury or choking death due to ingestion of insufficiently pureed bass. The following aquatic species cannot be safely prepared using the Super Bass-o-Matic: shark, cod, squid, octopus, catfish, dogfish, crab (snow, blue, and king), salmon, tuna, lobster, crayfish, crawfish, crawdaddy, whale (sperm, killer, and humpback). Super Bass-o-Matic is illegal to operate in the following jurisdictions: California, Massachusetts, Canada, the European Union, Haiti.

11. Former pro golfer Brandel Chamblee, expressing concern about the workout habits of current pro Rory McIlroy:

“I think of what happened to Tiger Woods. And I think more than anything of what Tiger Woods did early in his career with his game was just an example of how good a human being can be, what he did towards the middle and end of his career is an example to be wary of. That’s just my opinion. And it does give me a little concern when I see the extensive weightlifting that Rory is doing in the gym.”

12. Former pro golfer Gary Player, famous for his rigorous workouts and long career, responding on McIlroy’s behalf via Twitter:

“Haha, too funny. Don’t worry about the naysayers mate. They all said I would be done at 30 too.”

13. Responding to North Korean rhetoric about pre-emptive nuclear strikes if S. Korea and the U.S. engage in war games, Russia issued this statement:

“We consider it to be absolutely impermissible to make public statements containing threats to deliver some ‘preventive nuclear strikes’ against opponents,” said the statement, as translated by the Russian TASS news agency. “Pyongyang should be aware of the fact that in this way the DPRK [North Korea] will become fully opposed to the international community and will create international legal grounds for using military force against itself in accordance with the right of a state to self-defense enshrined in the United Nations Charter.”[21]

14. Responding to criticism that the state university system was declining in quality under his watch due to a lack of funding, the Governor said, “Look, we can either have huge tuition increases, which no one wants, or university administrators and professors can learn to do more with less.”

15. Man, I told you flossing was useless. Look at this newspaper article, “Medical benefits of flossing unproven”:

“The two leading professional groups — the American Dental Association and the American Academy of Periodontology, for specialists in gum disease and implants — cited other studies as proof of their claims that flossing prevents buildup of gunk known as plaque, early gum inflammation called gingivitis, and tooth decay. However, most of these studies used outdated methods or tested few people. Some lasted only two weeks, far too brief for a cavity or dental disease to develop. One tested 25 people after only a single use of floss. Such research, like the reviewed studies, focused on warning signs like bleeding and inflammation, barely dealing with gum disease or cavities.”[22]

17. Did you hear about Jason Pierre-Paul, the defensive end for the New York Giants? He blew off half his hand lighting off fireworks on the Fourth of July. Man, jocks are such idiots.

18. Mother of recent law school grad, on the phone with her son: “Did you pass the bar?”

Son: “Yes, mom.”

[He failed the bar exam. But he did walk past a tavern on his way home from work.]

19. Alfred: “I’m telling you, Obama is a socialist. He said, and I quote, ‘I actually believe in redistribution.’”

Betty: “C’mon. Read the whole interview, he said: ‘I think the trick is figuring out how do we structure government systems that pool resources and hence facilitate some redistribution because I actually believe in redistribution, at least at a certain level to make sure that everybody’s got a shot. How do we pool resources at the same time as we decentralize delivery systems in ways that both foster competition, can work in the marketplace, and can foster innovation at the local level and can be tailored to particular communities.’ Socialists don’t talk about ‘decentralization,’ ‘competition,’ and ‘the marketplace.’ That’s straight-up capitalism.”

20. In 2016, Supreme Court Justice Ruth Bader Ginsburg gave an interview in which she criticized Republican presidential candidate Donald Trump, calling him a “faker” and saying she couldn’t imagine him as president. She was criticized for these remarks: as a judge, she’s supposed to be politically impartial, the argument went; her remarks amounted to a violation of judicial ethics. Defenders of Ginsburg were quick to point out that her late colleague, Justice Antonin Scalia, was a very outspoken conservative on a variety of political issues, and even went hunting with Vice President Dick Cheney one time before he was set to hear a case in which Cheney was involved. Isn’t that a violation of judicial ethics, too?

21. Horace: “Man, these long lines at the airport are ridiculous. No liquids on the plane, taking off my shoes, full-body scans. Is all this really necessary?”

Pete: “Of course it is. TSA precautions prevent terrorism. There hasn’t been a successful terrorist attack in America involving planes since these extra security measures went into place, has there?”

22. A married couple goes out to dinner, and they have a bit too much wine to drink. After some discussion, they decide nevertheless to drive home. Since the wife is the more intoxicated of the two, the husband takes the wheel. On the way home, he’s pulled over by the police. When asked whether he’s had anything to drink that night, he replies, with a nod toward his wife, “She did. That’s why I’m driving.”


  1. If/then propositions like the first premise are called "conditional" propositions. The A part is the so-called "antecedent" of the conditional. The second premise denies it, and an invalid conclusion is drawn from this denial. We will present more analysis of arguments with this sort of technical vocabulary in chapters 4 and 5.
  2. Many of the fallacies have Latin names, because, as we noted, identifying the fallacies has been an occupation of logicians since ancient times, and because ancient and medieval work comes down to us in Latin, which was the language of scholarship in the West for centuries.
  3. International Action Center, Feb. 4 2005, http://iacenter.org/folder06/stateoftheunion.htm
  4. People often offer red herring arguments unintentionally, without the subtle deceptive motivation to change the subject—usually because they’re just parroting a red herring argument they heard from someone else. Sometimes a person’s response will be off-topic, apparently because they weren’t listening to their interlocutor or they’re confused for some reason. I prefer to label such responses as instances of Missing the Point (Ignoratio Elenchi), a fallacy that some books discuss at length, but which I’ve just relegated to a footnote.
  5. Comparing your opponent to Hitler—or the Nazis—is quite common. Some clever folks came up with a fake-Latin term for the tactic: Argumentum ad Nazium (cf. the real Latin phrase, ad nauseam—to the point of nausea). Such comparisons are so common that author Mike Godwin formulated "Godwin's Law of Nazi Analogies: As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one." ("Meme, Counter-meme", Wired, 10/1/94)
  6. David French, National Review, 2/14/16
  7. Check it out: https://www.cdc.gov/fluoridation/
  8. Art Laffer, “Get Ready for Inflation and Higher Interest Rates,” June 11, 2009, Wall Street Journal
  9. "But if current prices are not downwardly flexible, and the public expects price stability in the long run, the economy cannot get the expected inflation it needs; and in that situation the economy finds itself in a slump against which short-run monetary expansion, no matter how large, is ineffective." From Paul Krugman, "It's Baaack: Japan's Slump and the Return of the Liquidity Trap," 1998, Brookings Papers on Economic Activity, 2
  10. Paul Krugman, June 13, 2009, The New York Times
  11. This is a classic example, from Richard Whately’s 1826 Elements of Logic.
  12. From the Oxford English Dictionary.
  13. See his Studies in the Way of Words, 1989, Cambridge: Harvard University Press.
  14. Bronston v. United States, 409 U.S. 352 (1973)
  15. Frank Luntz, 2007, Words That Work: It’s Not What You Say, It’s What People Hear. New York: Hyperion.
  16. See, e.g., Lakoff, G., 2004, Don't Think of an Elephant!, Chelsea Green Publishing.
  17. George Lakoff, 2/14/2006, "Simple Framing," Rockridge Institute.
  18. David Brooks, 4/14/15, New York Times
  19. Christian Schneider, 7/14/16, Milwaukee Journal-Sentinel
  20. John Stossel, 10/2/13, foxnews.com
  21. 3/18/16, The Daily Caller
  22. Jeff Donn, 8/2/16, “Medical benefits of dental floss unproven,” Associated Press

License


Revised Fundamental Methods of Logic Copyright © 2022 by Matthew Knachel and Sean Gould is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
