5

Deductive Logic III: Natural Proofs in Propositional Logic[1]

Sean Gould


I. Replacement and Rules of Inference

In Chapter 4, we saw that we could use the Truth Table Test to check an argument’s validity. If there are no possible scenarios where all-true premises lead to a false conclusion, then the argument is valid. This also means that a valid argument preserves the potential truth of its premises all the way through to its conclusion. Therefore, it is also possible to ensure an argument’s validity by establishing it through a chain of reasoning in which every link is itself valid. A major goal of this chapter is learning how to write out premises, conclusions, and the in-between steps in a very neat and tidy manner so that we can check each step and thus prove validity.

This idea of preserving truth introduces another way of establishing the validity of arguments, natural deduction. This is also called using “formal proofs of validity.” When we use natural deduction, we move through an argument one premise at a time, show what these premises legitimately entail, and eventually arrive at a statement of our conclusion. We reach our conclusion by taking certain baby steps that are guaranteed to preserve any potential truth of what came before them.

The two types of tools we use for basic formal proofs are the Rules of Replacement and the Rules of Inference. We use these tools as guarantors of individual steps within an overall valid chain of reasoning. And this is related to something many of us have done outside formal logic. When we argue and try to walk someone through our reasoning, we often try to explicitly show how the truth of one idea, or the combination of two ideas, entails the truth of a further idea. In doing so, we aim to express how some complicated, two-part statements (i.e., those involving multiple simple propositions, such as “A • H” or “A ∨ H”) can further entail other claims when more information is given. We should be able to see that when given “A ⊃ B” and “A” we can conclude “B”. A concrete example might be “If it’s raining then the ground will be wet. It’s raining. Thus, the ground is wet.” Or, if told, “the patient has a broken arm and a heart condition,” we can legitimately conclude the simpler claim, “the patient has a heart condition,” from the earlier compound statement. Formally, this would be like concluding “H” from “A • H.” And it might be that this simpler intermediary premise is just what we’d need to use in a proof of some other conclusion, such as that the patient either has a heart condition or Lyme disease, “H ∨ L.”

The above reasoning informally utilized some rules of inference. The first section of this chapter will cover nine such rules of inference and give them labels for future use as tools in constructing formal proofs. The second section covers the rules of replacement and shows us how some statements in SL are equivalent to each other. For example, we’ve already seen that “~~p” is equivalent to “p.” We also saw that “~(p • q)” is true if and only if either p or q is not true, i.e., “~p ∨ ~q.” We will see that when we talk about replacement rules, we are just looking at statements that say the same thing in different ways. This means that the one is true whenever the other is, and vice versa; logically speaking, the two are functionally equivalent. However, restating an expression, such as replacing “~~p” with “p,” can often help us move closer to expressing a particular conclusion. Therefore, the second part of this chapter will present 10 such replacement rules and give them labels for use as tools in constructing formal proofs.

These inference and replacement rules are not exhaustive of all valid argument steps. In fact, we will introduce a few more later in the book. However, these 19 rules of inference and replacement are enough to help justify almost any basic deductive argument. These “rules” make up a set of reasoning steps we can rely upon to ensure or prove valid reasoning.

After we’ve got a labeled set of 19 inference and replacement rules, the third section will explain how we write out a formal proof using these new tools. As a quick preview of where we are going, our aim will be to construct little roadmaps where the premises and their validly entailed intermediate premises are written in a “statement column” on the left, and the justification for adding them (one of the 19 rules of inference or replacement) is provided in a “justification ledger” on the right. After the third section of this chapter, our aim is to be able to understand the following notation; a proof of D from A • B and (C ∨ A) ⊃ D looks like this:

Formal Proof Example 1

  1. A • B
  2. (C ∨ A) ⊃ D      /∴ D
  3. A                      1, Simp.
  4. A ∨ C               3, Add.
  5. C ∨ A               4, Com.
  6. D                     2, 5, M.P.

It’s okay if this looks cryptic now. The above method of writing out proofs will all get explained in the following sections. Our learning goal is to work towards understanding this and being able to write things out in this manner. By the end of this chapter, you will be able to see how the first two lines are our given premises. You will be able to recognize that the right column states our end conclusion as /∴ D, and that line 3 uses the rule “simplification” to get A from the premise A • B that was stated in line 1, as is noted in the right-hand column. You will also be able to understand how line 4 uses a rule called “addition” to move from line 3 to line 4, how line 5 uses a rule called “commutation” to reorder line 4, and how line 6 uses a rule called modus ponens to get D from lines 2 and 5.

Writing out deductive arguments in this systematic way allows us to focus our attention on the form of the argument alone. We can avoid getting distracted by details, idiosyncratic ways of writing terms, or other tangential issues. By training ourselves to utilize a consistent presentation of arguments, we can make analyzing and talking about arguments a little easier. Learning this skill may seem hard at first, but it will save effort once it is mastered – sort of like how learning to ride a bike is, at first, more difficult than walking. Once we’ve got things figured out, the bike becomes a very useful tool.

After we introduce the basic idea of proofs and how to read and write them, the chapter will finish by adding two more argumentative steps. The first is using hypothetical scenarios alongside given premises to establish certain conditional connections which we can further use in our arguments – so-called “conditional proofs.” The other is how to formally show potential contradictions between propositions to establish statements of negation, so-called “indirect proofs.”

A person doesn’t need to memorize all 19 of these tools and the two tricks to do natural deduction and write out formal proofs. Most of us create little proofs all the time anyhow. We’re just learning how to articulate them so that the connections are made more salient, transparent, and explicit. And so, there’s no reason a cheat sheet of the 19 rules wouldn’t do just as well as memorizing them. However, there is a bit of creativity involved in finding an elegant way to validly move from premises to a conclusion while showing every step. Having the 19 tools we are about to encounter at one’s fingertips helps us know which ones to use and when.

II. Rules of Inference

Modus Ponens (M.P.)

Modus ponens (M.P. for short) is the first “rule of inference” we will cover. The reasoning itself is very, very basic and might be familiar to you. What we are doing, overall, though, is aiming to recognize some very basic patterns across multiple examples and talk about them in a way that will serve other purposes, such as expressing formal proofs later down the road.

And so, our first “rule of inference” utilizes a familiar argument form. Suppose we claim “p” and “if p, then q.” Well, then, we’ve set ourselves up for asserting “q.” This simple inference is called “modus ponens.”

Notice how the “example” we just used was very schematic. That’s because it was only the general form. An argument form shows the relationships between potential propositions without relying upon any specific content. It’s even more abstract than using capital letters to stand in for specific propositions.

Let’s step back to review the notion of argument form (sometimes called “logical form” or “general form”). In previous chapters, we talked about how we can take full sentences from English, Spanish, or any language and boil them down to their basic propositions. We also saw that it is quite convenient to take particular propositions and label them with a capital letter. “Snow is white” becomes “W” – if only so that we needn’t write the whole thing down.

Now, suppose we say, “if the snow is white, then the snow is clean,” and we want to use W for “the snow is white,” and C for “the snow is clean,” and “⊃” for the “if . . . then . . .” relationship. Let’s further say we’ve got an argument with the premise “if the snow is white, then the snow is clean” written out as “W ⊃ C,” and we also add the premise “the snow is white” written as just plain “W.” And suppose from these two we conclude “the snow is clean.” We will add a new symbol, “∴” to stand for “in conclusion.”

And so, we’ve got the argument:

W ⊃ C

W

∴ C

Okay. Now, suppose the next day we give a different argument: “if the dog is howling, then the cat will be annoyed. And the dog is howling. Therefore, the cat is annoyed.” Say we want to use H for “the dog is howling,” and A for “the cat will be annoyed” and all the other same logical operator symbols as before.

And so, we’ve got the argument:

H ⊃ A

H

∴ A

If we put the two arguments side by side, you should be able to see that even though the propositions differ (one’s about snow, the other pets), the argument form is exactly the same.

M.P. Example 1          M.P. Example 2

  H ⊃ A                   W ⊃ C
  H                       W
  ∴ A                     ∴ C

Now, in Chapter 3, III, “Semantics of SL” we encountered “argument forms” as a way to say things about our logical arguments. Here, to say something about the structure or “form” of any given argument, we’ll use lower case symbols such as p, q, r, s, etc. as variables to stand in as functional placeholders for propositions (or their capital letter abbreviations).

M.P. Example 1          M.P. Example 2          M.P. Argument Form

  H ⊃ A                   W ⊃ C                   p ⊃ q
  H                       W                       p
  ∴ A                     ∴ C                     ∴ q

Both examples use the same argument form. Any set of specific arguments that have the same general form is said to be “logically analogous.” In this case, the general form is called “modus ponens.” Here it is presented below, with its name, an abbreviation (useful for later formal-proof-making), and a reminder that it is always valid:

Modus Ponens (M.P.)

p ⊃ q

p

∴ q

Always Valid!

The general form of modus ponens is always valid. If a specific argument has this form, as in examples 1 and 2, then it is valid. Arguments with this form might not always be sound, but it is impossible for the two premises to be true and the conclusion false. For proof, you can refer to truth table exercise # 5 at the end of the last chapter; an accurate truth table will show that no row makes both premises true and the conclusion false. The form says that if you have a conditional and affirm the antecedent, then the consequent follows as true, too.
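
If you ever want to double-check a form mechanically, the truth-table test is easy to automate. Here is a minimal sketch in Python (our own illustration, not part of the formal system of SL; the names “implies” and “counterexamples” are simply labels we chose) that brute-forces all four truth-value assignments for p and q and looks for a row where the premises of modus ponens are true and the conclusion is false:

  from itertools import product

  def implies(p, q):
      # The horseshoe conditional: p ⊃ q is false only when p is true and q is false.
      return (not p) or q

  # Collect every assignment where both premises hold but the conclusion fails.
  counterexamples = [
      (p, q)
      for p, q in product([True, False], repeat=2)
      if implies(p, q) and p and not q  # premises p ⊃ q and p true, conclusion q false
  ]

  print(counterexamples)  # prints [] - no counterexample row exists

Swapping in the premises and conclusion of any other form in this chapter gives the same kind of mechanical check.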

Once we accept a general form as valid, the inference rule allows us to certify any specific argument that instantiates it. Thus, we can say something like

A ⊃ B

A

∴ B

is justified by modus ponens.

The above argument instantiates a version of modus ponens and is therefore valid. Using rules of inference and replacement rules to justify an argument’s validity is a critical skill in general logic and the construction of formal proofs.

Do not confuse modus ponens with a similar, pernicious, and fallacious line of reasoning called “affirming the consequent.” Affirming the consequent is like trying to do modus ponens backward! Here is this mistake’s general form:

Affirming the Consequent

p ⊃ q

q

∴ p

Never Valid!

Notice how we switched the “q” and “p” in the bottom two lines of the form. This argument form does not preserve truth. It is possible for the premises – whatever gets plugged in for “p ⊃ q” and “q” – to be true while the antecedent fails to hold. Here’s an example to help illustrate such a scenario. Suppose we say, “if aliens built the pyramids, then the pyramids would have unsolved mysteries about them (that’s our content for “p ⊃ q”). The pyramids do have unsolved mysteries (here’s the “q” part). Therefore, aliens built them (p).” Nope. There are many ways for mysteries to occur beyond extraterrestrial construction. Remember from Chapter 3 that a conditional can be true even when the antecedent does not obtain.

Any argument that partakes in this invalid form is logically analogous to any other argument that shares this same nasty form. Sometimes though, it can be hard to see that there is a problem.

“Affirming the consequent” is often a seductive fallacy; it is common in a lot of pseudosciences, wishful thinking, superstition, conspiracy theory, and other practices that utilize confirmation bias and the cherry-picking of evidence. However, as the above example shows, it is not a valid form of reasoning.

Sometimes it’s easier to show the invalidity of an argument by providing a logical analogy that is clearly off the mark. When we can see an example where the premise can be true while the conclusion of the argument can be false, we have reason to reject any argument with the same form. The fact that “if a person is eaten by a shark then they will be dead” is not enough to let us conclude that all dead people died from shark attack! On the other hand, valid argument forms, like modus ponens, force a true conclusion from true premises. No counterexamples can be drawn that match the premises but not the conclusion. This is what the truth-table process establishes.

There are many different potential general forms for arguments besides modus ponens. Being familiar with some of the most basic valid forms allows us to use them as rules of inference when we move on to further developing proofs. And so, let’s look at some more.

Modus Tollens (M.T.)

Our second rule of inference is called modus tollens (abbreviated as M.T.). Modus tollens also uses a conditional, but it differs from modus ponens. Instead of establishing the consequent by asserting the antecedent, modus tollens works in the opposite direction. Here, we assert the conditional, deny the consequent, and thereby commit ourselves to accepting that the antecedent doesn’t hold either. For example, “If it’s nighttime in Boise, then it’s nighttime in Nampa. But it’s not nighttime in Nampa. Therefore, it’s not nighttime in Boise, either.” Here’s another example, “If Fred is guilty, then George must be guilty. But George is not guilty. And so, Fred must not be guilty after all.” Both tiny arguments utilize the form of modus tollens.

Let’s express the argument form occurring here. Label the propositions in the Boise/Nampa example as B = it’s nighttime in Boise and N = it’s nighttime in Nampa, so that ~N = it’s not nighttime in Nampa and ~B = it’s not nighttime in Boise. And for the Fred/George example, let’s use F for Fred is guilty, ~F for Fred’s not guilty, and G and ~G for the same things for George. This gives us the following table with the two example arguments and their general form, all displayed side by side:

Example 1               Example 2               Argument Form

  B ⊃ N                   F ⊃ G                   p ⊃ q
  ~N                      ~G                      ~q
  ∴ ~B                    ∴ ~F                    ∴ ~p

The general form here is called modus tollens (M.T.). Here it is by itself:

Modus Tollens (M.T.)

p ⊃ q

~q

∴~p

Always Valid!

Modus tollens is always valid. It is impossible for the premises to be true and the conclusion false. This, too, is displayed by the way the truth table (exercise # 2) maps out the possibilities. In turn, we can justify the reasoning of any argument that instantiates this general form by simply noting that it is, in fact, a version of modus tollens.

Do not confuse modus tollens with a similar, fallacious line of reasoning called “denying the antecedent.” Here is this mistake’s general form:

Denying the Antecedent

p ⊃ q

~p

∴ ~q

Never Valid!

This argument form does not preserve truth. It is possible for the premises that fill in the “p ⊃ q” and “~p” parts to be true while the conclusion “~q” is false. Here’s an example to help illustrate such a scenario. Suppose we say, “if it were to rain, then the grass would get wet (that’s our “p ⊃ q” part). But it did not rain (here’s the ~p part). Therefore, the grass will not be wet (~q).” Lack of rain doesn’t guarantee dry lawns. Maybe sprinklers came on overnight, for example. The conditional in the first line, “if it were to rain, then the grass would get wet,” is true even if it doesn’t end up raining. Remember from Chapter 3 that a conditional can be true even when the antecedent does not in fact obtain.

Hypothetical Syllogism (H.S.)

Hypothetical Syllogism refers to an argument form that links two conditionals into a longer chain. For example, “if I knock over the first domino, then the second will fall (premise 1). And, if the second one falls, the third will fall (premise 2). Therefore, if I knock over the first domino, the third will fall (conclusion).” Our speaker here seems pretty sure of her domino stacking, but if we take her word about the certainty of the premises (i.e., assume them as true) then we’d have to admit that the conclusion must hold. Thus, hypothetical syllogism represents our third valid argument form. You can see that all true premises yield a true conclusion if you refer to the correct answer for exercise # 6 from the end of the last chapter.
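
To see the form at work, we can symbolize the domino example. Letting F = “the first domino falls,” S = “the second domino falls,” and T = “the third domino falls” (letters chosen here purely for illustration), the argument reads:

F ⊃ S

S ⊃ T

∴ F ⊃ T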

Here is hypothetical syllogism laid out in its argument form:

Hypothetical Syllogism (H.S.)

p ⊃ q

q ⊃ r

∴ p ⊃ r

Always Valid!

Just like dominos, right? Notice here that the conclusion is only asserting that the conditional chain can be linked. It does not assert that the final consequent obtains. For that, we’d need an added premise of one of the antecedents occurring. Returning to our domino example, we can deduce that knocking the first domino would cause the third one to fall without actually knocking any. If we take the first two conditionals as certain (we really trust the domino-set up), then we’re committed to the conclusion. Denying a premise, i.e., saying a premise is false, and thereby questioning the soundness of the argument is not the same as rejecting the validity of the argument form itself. With hypothetical syllogism, a true conclusion always follows from true premises.

Disjunctive Syllogism (D.S.)

Our fourth valid form is called “disjunctive syllogism.” You might already be familiar with it under another name, “the process of elimination,” where we are presented with two potential options, reject one, and therefore find ourselves left with the last standing alternative. For a micro-example, consider a coin toss: “The coin landed heads or tails. It didn’t land heads. Therefore, it landed tails.” Here’s the argument form:

Disjunctive Syllogism (D.S.)

p ∨ q

~p

∴ q

Always Valid!
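
Symbolizing the coin-toss micro-example makes the fit with the form plain. With H = “the coin landed heads” and T = “the coin landed tails” (our own illustrative labels), we have:

H ∨ T

~H

∴ T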

If you did exercise # 3 from the last chapter correctly, you’ve already proved the form’s validity.

A thing to remember here is that the argument form takes the initial disjunction “p ∨ q” as true. Again, contesting a premise isn’t the same as challenging the logical form. Keep soundness and validity as separate concepts. It is true that we can often get tricked by a “false dichotomy,” where we accidentally take a disjunction as true when it isn’t. But that doesn’t mean there aren’t many legitimate disjunctions out there. Either way, and once again, whenever the premises are in fact true, with this valid argument form, the conclusion will hold.

Constructive Dilemma (C.D.)

Suppose you are standing in front of two doors, and you get to open one, the other, or both. Door #1 leads to a lion. Door #2 leads to a lamb. Well, it seems safe to phrase your options as being the lion or the lamb (or both).

The form of reasoning we just used is called “constructive dilemma.” We took the dilemma of the two doors and built it out a little further – constructed it into a different dilemma, lion v. lamb.

Here’s the form:

Constructive Dilemma (C.D.)

(p ⊃ q) • (r ⊃ s)

p ∨ r

∴ q ∨ s

Always Valid!

Keeping our example in mind, you can see how in premise #2, “p ∨ r” could represent the two doors. Premise #1, (p ⊃ q) • (r ⊃ s), gives a diagram of where the doors lead. And the conclusion “q ∨ s” just states where we could end up.
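
To make the fit with the form explicit, we might symbolize the two-doors example with A = “you open door #1,” L = “you meet the lion,” B = “you open door #2,” and M = “you meet the lamb” (labels of our own choosing):

(A ⊃ L) • (B ⊃ M)

A ∨ B

∴ L ∨ M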

Simplification (Simp.)

Let’s introduce our next form, “simplification,” with an example: “I went to the post office and the bank. So, yes, I went to the post office.” That’s it. Simplification just lets us recognize that when we assert a pair of claims in a conjunction, we are entitled to assert a simplified part of that pair. Here’s the form:

Simplification (Simp.)

p • q

∴ p

Always Valid!
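
The post office example instantiates this form. With P = “I went to the post office” and B = “I went to the bank” (our labels), it reads:

P • B

∴ P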

Sometimes formal logic can seem tricky simply because it’s hard to remember how fine-grained and simple the baby steps we are taking really are. But these small, subtle steps are useful. Sometimes isolating an important claim from a long string of statements really helps our reasoning move along.

Conjunction (Conj.)

If we state two things separately, there’s no harm in stating them in one breath. Consider, “I have apples. I have oranges. Therefore, I have apples and oranges.” The argument form of “conjunction” might strike you as being so rudimentary that it’d be easy to overlook. Nevertheless, it represents a way to move from two premises to a conclusion that always necessarily follows. Plus, “conjoining” premises is a step that will help us think clearly about more complex arguments down the road. Here’s conjunction’s argument form:

Conjunction (Conj.)

p

q

∴ p • q

Always Valid!
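
Symbolized, with A = “I have apples” and O = “I have oranges” (illustrative labels), the fruit example becomes:

A

O

∴ A • O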

Absorption (Abs.)

When we assert a basic conditional (p ⊃ q), we are saying that the consequent follows from the antecedent. This next argument form shows us that when we assert a conditional, we also get to assert that both the antecedent and the consequent follow from the antecedent! Here’s an example: “if the toast is burnt, then it will be black. Therefore, if the toast is burnt, it will be burnt and black.” It may be funny to say that if the antecedent is true, then in addition to the consequent, the antecedent is true too, but it sure is valid. We are allowed to “absorb” the antecedent back into the consequent of its own conditional.

Absorption (Abs.)

p ⊃ q

∴ p ⊃ (p • q)

Always Valid!
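
Symbolizing the toast example with T = “the toast is burnt” and B = “the toast is black” (labels chosen just for this illustration), we get:

T ⊃ B

∴ T ⊃ (T • B)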

Addition (Add.)

Remember that we call things with the form (p ∨ q) a disjunction. And that the minimum truth requirement for a disjunction to be true is that at least one of its disjuncts holds. This means that when we assert a simple proposition, “p,” then we could add on an “or q” for free!

Here’s an example. “This is the last rule of inference presented in this section. Therefore, either this is the last rule of inference presented in this section, or the moon is made of cheese.” This tiny argument, like all others of the same form, can be mapped out as such:

Addition (Add.)

p

∴ p ∨ q

Always Valid!
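
Symbolized, with L = “this is the last rule of inference presented in this section” and M = “the moon is made of cheese” (our labels), the example reads:

L

∴ L ∨ M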

When the premise, in this case merely “p,” holds, a disjunctive conclusion that includes the known part will be true, too. An accurate truth table for exercise # 1 from the end of the last chapter confirms the validity of this simple argument form.

Conclusion

The argument forms introduced in this section represent valid logical connections between premises and conclusions at a very fundamental level. In real life, arguments are often much more complicated. However, as we move on to constructing formal proofs of deductive arguments, it is important that we learn to recognize these forms so that we can appeal to them during each step of more complex chains of reasoning. This lets us assess arguments by breaking them into smaller pieces.

We’ve identified argument forms where, given a true premise or two, the conclusion must follow. In other words, we might say for our valid argument forms, “if the premises were true, then the conclusion would be true.”

Valid rules of inference DO NOT give us premises and conclusions that are interchangeable. It would be an instance of “affirming the consequent” to move from the conclusion of a valid argument form to an affirmation of its premises.

Simplification          Reverse?

  p • q                   p
  ∴ p                     ∴ p • q

  Valid Form              Reverse Doesn’t Work

While it is valid to say, “A • B, therefore A,” it is not valid to say, “A just means ‘A • B’.” To make it more concrete, I can’t say, “I have apples. Therefore, and from that fact alone, I have oranges, too.” This would be nonsense.

The next section on Replacement Rules will be different. In that section, we introduce propositional statements that are interchangeable because they are logically equivalent. “I don’t not have apples” really does mean the same as “I have apples.” The section on Replacement Rules will introduce accurate ways of re-stating claims – a skill that, in combination with our use of inference rules, will help us develop proofs of more complicated arguments.

III. Replacement Rules

Before moving on to our replacement rules, it is important that we are comfortable with just what our logical operators ~, •, ∨, ⊃, and ≡ mean – i.e., what they are saying, what their truth conditions are. Please refer to Chapter 3, Section III if you need to refresh your grasp. Understanding replacement rules simply builds from grasping the meaning of the logical operators so that truth is maintained across different expressions of complex propositions. Especially refer back to Chapter 3, III, “Biconditional (Triple-Bar),” where “≡” is introduced as logic’s “equals-sign.” When we have “a statement form all of whose substitution instances must be true,” the equivalence is known as a “tautology” (Copi et al., 2020). Tautologies are written with “≡t” to show that the biconditional is of this special variety.

At the end of this section, you should be able to look at the general form of any given replacement rule and see how either way of stating a claim is equivalent to its sister claim. The ability to recognize this similarity can then help us take any given claim and translate it into its sister expression when we move on to applying these replacement rules in the next section on formal proofs.

Double Negation (D.N.)

You are probably familiar with the concept of a “double negative.” For any given proposition, negating the negation is the same as stating the proposition. For example, take “it’s not, not snowing.” We’ll use two negations, one for each “not,” and let the proposition that it is snowing = S so that our example can be written in SL as, “~~S.” Well, I hope you can see that ~~S is true if and only if S is true.

In other terms, “~~S ≡t S.”

Of course, this feature of double negation holds for any proposition, not just S. So, we’ll go back to using the lower case variable “p” to express the form of the idea:

Double Negation (D.N.)

~~p ≡t p

Because each side of the “≡t” means the exact same thing, the two can be interchanged or used to replace one another. Hence, double negation is our first replacement rule.

Commutation (Com.)

Both the “∨” disjunction and the “•” conjunction signs present pairs of simple propositions. The truth of the complex disjunction (p ∨ q) or complex conjunction (p • q) does not depend on the order in which the components “p” and “q” are written. Therefore, we can use the “commutation” replacement rule to switch the order of the parts. Here is the form for commutation:

Commutation (Com.)

(p ∨ q) ≡t (q ∨ p)

(p • q) ≡t (q • p)

This form states that we can swap the order of propositions that are within a disjunction or conjunction. Because we want to show how every single change is justified by a valid step, even something as small as altering the order of the components of a disjunction or a conjunction requires explicit justification. It is important to note, though, that just like in Ghostbusters, you cannot cross the streams. Each line only works with itself, and you cannot recombine the biconditionals. “p ∨ q” is not equivalent to “q • p.”

Association (Assoc.)

The replacement rule of association is like commutation. Association refers to the fact that when we have a set of three disjuncts, or a set of three conjuncts, it is immaterial where we put the dividing parentheses.

Association (Assoc.)

[p ∨ (q ∨ r)] ≡t [(p ∨ q) ∨ r]

[p • (q • r)] ≡t [(p • q) • r]

Once again, the lines cannot cross.

Transposition (Trans.)

Recall our friend modus tollens. That argument form said that if we have a conditional (p ⊃ q) and reject the consequent “~q,” it is valid to reject the antecedent and get “~p.” To rephrase this, if we know (p ⊃ q) we know (~q ⊃ ~p). And these two always go together. We only have one if and only if we have the other. They are logically equivalent. Hence, we have another replacement rule, that of transposition:

Transposition (Trans.)

(p ⊃ q) ≡t (~q ⊃ ~p)
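
For a concrete instance (using R = “it rains” and W = “the grass gets wet,” letters of our own choosing), transposition licenses moving back and forth between “if it rains, the grass gets wet” and “if the grass isn’t wet, then it didn’t rain”:

(R ⊃ W) ≡t (~W ⊃ ~R)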

Material Implication (Impl.)

The replacement rule “material implication” follows from the definition of the horseshoe conditional sign. Recall that the only way a conditional statement p ⊃ q is false is if “p” is true but “q” is false. This means that if either p is false or q is true, i.e., (~p ∨ q), then the conditional p ⊃ q is true. And the conditional p ⊃ q is true only if (~p ∨ q) is true. Therefore:

Material Implication (Impl.)

(p ⊃ q) ≡t (~p ∨ q)
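
Using the same illustrative letters, “if it rains, the grass gets wet” says no more and no less than “either it doesn’t rain, or the grass gets wet”:

(R ⊃ W) ≡t (~R ∨ W)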

De Morgan’s Theorems (De M.)

We met De Morgan’s Theorems in Chapter 3. De Morgan’s Theorems state what needs to be the case to negate a conjunction or a disjunction. A conjunction, p • q, can be negated if and only if at least one of its conjuncts is negated. Hence, ~(p • q) ≡t (~p ∨ ~ q).

On the other hand, since a disjunct only needs one of its parts to hold for the entire disjunction to hold, a disjunction, p ∨ q, can be negated if and only if both disjuncts are negated. And so, ~(p ∨ q) ≡t (~p • ~q).

Therefore, we have the pair:

De Morgan’s Theorems (De M.)

~(p • q) ≡t (~p ∨ ~q)

~(p ∨ q) ≡t (~p • ~q)

Again, you cannot mix and match these two different forms of De Morgan’s.
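
For a quick concrete check (with D = “the battery is dead” and E = “the tank is empty,” illustrative labels), denying “the battery is dead and the tank is empty” commits us only to “either the battery isn’t dead or the tank isn’t empty,” while denying “the battery is dead or the tank is empty” commits us to both “the battery isn’t dead” and “the tank isn’t empty”:

~(D • E) ≡t (~D ∨ ~E)

~(D ∨ E) ≡t (~D • ~E)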

Distribution (Dist.)

Suppose we know “roses are red” (R) and either “violets are blue (B) or violets are orange (O).” We could write this out in SL as R • (B ∨ O). But this means, in other terms, that we know either “roses are red and violets are blue” or “roses are red and violets are orange.” Which we could write as (R • B) ∨ (R • O).

Hence, we see [R • (B ∨ O)] ≡t [(R • B) ∨ (R • O)]. Forgetting flowers, we can articulate this logical equivalence schematically as:

[p • (q ∨ r)] ≡t [(p • q) ∨ (p • r)]

The above gives us the first half of the distribution replacement rule. Here, the single conjunct is “distributed” to the parts of the disjunction to which it is conjoined.

There is a related variation involving an initial disjunction. Suppose we know “Al Capone robbed the bank (A), or both Selma robbed the bank (S) and Louise robbed the bank (L).” We can write this out as A ∨ (S • L). Again, this same complex statement is logically equivalent to affirming both that it was Al Capone or Selma, and that it was Al Capone or Louise: (A ∨ S) • (A ∨ L). And so, we have A ∨ (S • L) ≡t (A ∨ S) • (A ∨ L). Since this is true of any set of propositions, we’ll just write it schematically as:

[p ∨ (q • r)] ≡t [(p ∨ q) • (p ∨ r)]

Thus, we have the complete statement of the distribution replacement rule:

Distribution (Dist.)

[p • (q ∨ r)] ≡t [(p • q) ∨ (p • r)]

[p ∨ (q • r)] ≡t [(p ∨ q) • (p ∨ r)]

Again, each line only works with itself. You can’t mix and match the pairs of the different biconditionals.

Material Equivalence (Equiv.)

The material equivalence (Equiv.) replacement rule works by the nature of the biconditional itself. It also comes in two forms. First, to say p ≡ q is the same as saying if p then q and if q then p. The second form states that if we say p ≡ q, then we are saying either you have both p and q, or you have neither. Schematically, we get:

Material Equivalence (Equiv.)

(p ≡ q) ≡t [(p ⊃ q) • (q ⊃ p)]

(p ≡ q) ≡t [(p • q) ∨ (~p • ~q)]
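
As a concrete illustration (with B = “the battery is charged” and R = “the ready light is on,” labels of our own invention), asserting B ≡ R commits us both to the pair of conditionals and to the both-or-neither reading:

(B ≡ R) ≡t [(B ⊃ R) • (R ⊃ B)]

(B ≡ R) ≡t [(B • R) ∨ (~B • ~R)]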

Exportation (Exp.)

Exportation describes the fact that stating that a pair of conjuncts entails a third term is logically equivalent to stating that if the first conjunct holds, then the second conjunct entails the third term.

Here is an example. The first statement, “if the bucket is full (B) and it tips over (T), then the floor will get wet (W),” is logically equivalent to “if the bucket is full, then if it tips over the floor will get wet.” In our example, we have: [(B • T) ⊃ W] ≡t [B ⊃ (T ⊃ W)]. Once again, replacing our proposition symbols with general placeholders, we can express the form of this logical equivalence as such:

Exportation (Exp.)

[(p • q) ⊃ r] ≡t [p ⊃ (q ⊃ r)]

Tautology (Taut.)

The final replacement rule simply states that any proposition is logically equivalent to a conjunction of itself with itself or a disjunction of itself with itself. It is a tautological equivalence that is, itself, called “tautology.” In other words, p ≡t p • p as well as p ≡t p ∨ p. Stating this fact as a replacement rule gives a “formal stamp of approval” for replacing somewhat redundant statements with a simpler version. Sometimes during formal proofs, the outcome of a deduction, such as a constructive dilemma, might yield something like p ∨ p from which we’d like to just conclude p. With this last rule, we can make all our steps clear and transparent.

Here is the schema for this final replacement rule:

Tautology (Taut.)

p ≡t (p ∨ p)

p ≡t (p • p)

We have now been introduced to 9 rules of inference and 10 replacement rules. We will use these rules for the construction of formal proofs. While memorizing these rules and their names will make proofs much easier, the most important thing right now is to be able to “see” how each rule makes sense.

Here is a comprehensive list of the entire set of tools, for your convenience:

Rules of Inference

Modus Ponens (M.P.)

p ⊃ q

p

∴ q

Modus Tollens (M.T.)

p ⊃ q

~q

∴ ~p

Hypothetical Syllogism (H.S.)

p ⊃ q

q ⊃ r

∴ p ⊃ r

Disjunctive Syllogism (D.S.)

p ∨ q

~p

∴ q

Constructive Dilemma (C.D.)

(p ⊃ q) • (r ⊃ s)

p ∨ r

∴ q ∨ s

Simplification (Simp.)

p • q

∴ p

Conjunction (Conj.)

p

q

∴ p • q

Absorption (Abs.)

p ⊃ q

∴ p ⊃ (p • q)

Addition (Add.)

p

∴ p ∨ q

Replacement Rules

Double Negation (D.N.)

~~p ≡t p

Commutation (Com.)

(p ∨ q) ≡t (q ∨ p)

(p • q) ≡t (q • p)

Association (Assoc.)

[p ∨ (q ∨ r)] ≡t [(p ∨ q) ∨ r]

[p • (q • r)] ≡t [(p • q) • r]

Transposition (Trans.)

(p ⊃ q) ≡t (~q ⊃ ~p)

Material Implication (Impl.)

(p ⊃ q) ≡t (~p ∨ q)

De Morgan’s Theorems (De M.)

~(p • q) ≡t (~p ∨ ~q)

~(p ∨ q) ≡t (~p • ~q)

Distribution (Dist.)

[p • (q ∨ r)] ≡t [(p • q) ∨ (p • r)]

[p ∨ (q • r)] ≡t [(p ∨ q) • (p ∨ r)]

Material Equivalence (Equiv.)

(p ≡ q) ≡t [(p ⊃ q) • (q ⊃ p)]

(p ≡ q) ≡t [(p • q) ∨ (~p • ~q)]

Exportation (Exp.)

[(p • q) ⊃ r] ≡t [p ⊃ (q ⊃ r)]

Tautology (Taut.)

p ≡t (p ∨ p)

p ≡t (p • p)

IV. Constructing Formal Proofs Using Rules of Inference and Replacement

A formal proof is a way to show that a conclusion validly follows from a given set of premises. Formal proofs use a step-by-step process to a) derive intermediate premises from the given ones, moving toward the conclusion, and b) show the justification for each step. The 9 rules of inference and the 10 replacement rules are used to justify the step-by-step moves and allow us to construct formal deductive proofs.

Let’s return to the example from the start of the chapter. Below is an example of a finished formal proof. The following paragraphs will explain how to interpret the way formal proofs are written by using this proof as an example.

Formal Proof Example 1

  1. A • B
  2. (C ∨ A) ⊃ D       /∴ D
  3. A                       1, Simp.
  4. A ∨ C                3, Add.
  5. C ∨ A                4, Com.
  6. D                      2, 5, M.P.

The above example begins with an argument (shown on lines 1 and 2), then adds lines (3-6) to justify the argument and prove its validity. It does so by showing how each new line (3-6) proceeds, one step at a time, by solely utilizing the valid inference and replacement rules we studied in the previous section. Each new line only uses one application of one rule at a time. But the end result is a chain of reasoned steps that justifies the argument and its conclusion. Now, let’s break down how this all works in more detail.

Every proof is a proof of an argument. So, we need to back up and look at what the argument is. In this case, the original argument was:

P1. A • B

P2. (C ∨ A) ⊃ D

∴ D

“P1” and “P2” indicate the first and second premises. The “∴” indicates the conclusion. From these we can begin writing out our proof as:

  1. A • B
  2. (C ∨ A) ⊃ D     /∴ D

On the far left of the proof, you will see that each line is numbered. Just to the right of the numbers, the next column presents the two premises. It will later include the intermediate premises the proof-maker decides to add, and the conclusion, restated.

The far-right column is very, very important. It is the “ledger column.” Right now, we’ve just moved the conclusion to the top of that column. Later that far-right column will contain the information that explains the steps of the argument. Most of this chapter will be dedicated to explaining this ledger column. Here, you’ll notice that the column doesn’t begin until the last row of the premises. In this case, row 2.

As the proof is created, we will add new lines of intermediate premises in the left column and show how such lines are justified by citing the relevant rules in the ledger column. Creating formal proofs is an art. The goal is to proceed toward the conclusion in a way that is elegant and does not clutter things up with extra steps. But there are critical rules.

The first rule is that any step we take, i.e. any new line we add, must be justified by one of the inference or replacement rules we’ve already introduced, or some of the other more complex rules we’ll introduce later.

The second rule is that each step and new line can make only one change at a time and only use one rule at a time.

The third rule is that each step and its justification must match the exact form of the inference or replacement rule being cited. For instance, one cannot use the rule “addition” plus the statement “A” to yield “A ⊃ B”. That’s just not how that rule works. Addition, to continue, only allows one to start with a line like “A” and get something like “A ∨ __” where the blank is filled in with another capital letter. This strict adherence to the form of each rule and its application is precisely why formal proofs are, when done correctly, exactly what they claim to be: formal proofs of an argument’s validity.

Looking at the start of our proof above, we can see that our added steps need to ultimately arrive at a “D” to prove our conclusion. In line 2, we see that D follows from a disjunction of C and A (see the “C ∨ A”). However, the only time we see A in the proof so far is when it is in a conjunction with B (see the “A • B”). And so, the clever proof-maker might begin by separating A from B. Luckily, the inference rule simplification lets us do that from line 1. Recall that “simplification” says that for any (p • q) pair, we are allowed to simplify it to the first conjunct. Deriving A from (A • B) is an instantiation of “simplification” and is therefore valid. And so, the next step in our proof construction is to create a new line (#3) and write a new intermediate premise “A” there. Then, we use the ledger column to note that the new information in line #3 comes from line #1 and an application of the simplification rule, which is abbreviated “Simp.”

  1. A • B
  2. (C ∨ A) ⊃ D      /∴ D
  3. A                      1, Simp.

We have created our first new line. Line 3 shows our new bit of information, “A,” on the left. The ledger column shows which rule we used to justify our new information with the abbreviation “Simp.” The ledger column also shows which line the simplification rule was applied to in order to yield the new output “A”. Here, that was line 1.

Each new line must cite the rule that justifies its new statement. Look at the table of inference and replacement rules. Each one is a little argument. Some, like modus ponens, rely upon two premises to get their conclusion. Others, like simplification, use only one premise to deduce their conclusion. When we use inference or replacement rules in a proof, we cite exactly as many lines as are needed for the deduction to work. Sometimes, as with simplification, we need only cite one line, as we did in our example proof. At other times, we will need to cite two lines, if the inference step requires it.

One may only cite lines above the step one is working on. This is because valid steps can only proceed from information that is already established. Were we to mistakenly cite the same line we are on, or a line later down in the proof, we’d literally be committing the fallacy of circular reasoning!

Let’s return to our example. So far, so good. We are getting close. We can see that our conclusion, D, is the consequent that follows from (C ∨ A). And so, it would do well to add a C to our mix. Moreover, we can see that the C appears in line 2 as part of a disjunction. Well, when we covered the rule of addition earlier in the chapter, we learned that for any variable, p, it is completely valid to add a second variable as a disjunct. And so, we’ll repeat our process and add line 4 to show that we’ve gotten “A ∨ C” from line 3 and the “addition” rule:

  1. A • B
  2. (C ∨ A) ⊃ D         /∴ D
  3. A                         1, Simp.
  4. A ∨ C                  3, Add.

Again, notice how a new line was added, “A ∨ C” and how it was shown as legitimate by the ledger’s notation that “A ∨ C” follows from application of the addition rule, “Add.” to line 3.

Our proof is close to complete. One might now think that combining line 4 with line 2 would give us D via modus ponens. The problem, however, is that the propositions labeled “A” and “C” appear in different orders in lines 2 and 4. We cannot take any shortcuts constructing formal proofs. And so, before we do modus ponens, we need to restate the info in line 4 by creating a new line and applying the replacement rule of “commutation,” which justifies switching the order of disjuncts, yielding:

  1. A • B
  2. (C ∨ A) ⊃ D         /∴ D
  3. A                         1, Simp.
  4. A ∨ C                  3, Add.
  5. C ∨ A                  4, Com.

Here, we have added another line “C ∨ A” and shown how it is legitimately derived from an application of the commutation, “Com.” rule to line 4.

Now, we can apply the rule modus ponens to lines 2 and 5 to get “D”! Modus ponens is the inference rule that uses two premises, “(p ⊃ q)” and “p,” to derive q. And so, when we apply a specific use of the form of modus ponens to get our D, in our ledger column we’ll have to refer to both lines involved in that step. Modus ponens is one of the inferences that requires utilizing two separate premises. Thus, our completed proof will look like:

  1. A • B
  2. (C ∨ A) ⊃ D         /∴ D
  3. A                         1, Simp.
  4. A ∨ C                  3, Add.
  5. C ∨ A                  4, Com.
  6. D                        2, 5, M.P.

Here, we see our new line 6 that has a “D.” The ledger column explains how this “D” was legitimately derived from the application of “modus ponens” from both lines 2 and 5.

Boom. Done. Line 6 just reads “D,” and that is exactly the conclusion we set out to prove. Therefore, we are done. Because we have shown how the conclusion, “D,” followed from lines 1 and 2 via the application of legitimate rules of inference and replacement, we have shown that “D” is a valid conclusion of our initial argument. In other words, we constructed a formal proof.

We’ve explained what the numbered lines mean, how we can add intermediate premises to help show the steps needed to reach our conclusion, and how we use the ledger column to show the rule that guarantees the validity of each step. Because each step is valid, we can rest assured that the argument:

P1. A • B

P2. (C ∨ A) ⊃ D

∴ D

is valid.

Equally important for now is that our example-proof construction showed us that we can only take one step at a time and that we need to justify each and every step.

When we go to add a line of an intermediate premise to our proof, there is no rule that says we must follow a certain order regarding which prior premises or lines we refer to – so long as they occur above the line we are currently adding. Complicated proofs might require some jumping around among the lines above our current location in the proof. In later sections we will learn about two more skills, “conditional proofs” and “indirect proofs,” which involve fencing off sub-sets of premises, but we’ll cross that bridge when we get there in sections V and VI.

What our example demonstration did not show was exactly how we knew which move to make and when. This is because there is no straightforward and mechanical system for knowing which lines to add and what strategy to pursue in constructing a formal proof. It’s a bit of an art. While truth tables are entirely mechanical, formal proofs require some ingenuity. There is some room for variation. Consider the following example of two valid variations of a proof for the same argument:

Argument

  1. M ⊃ ~C
  2. E • M      /∴ E • ~C

To start with, a clever logician might look at the conclusion and identify what components need to be worked with. Here, we see an E and a ~C. Scanning the premises, we see that we’d have the E if we got it from line 2 somehow. And we’d have our ~C if we somehow got it from line 1. But this means we’d need an M. Line 2 looks like a good potential place to find that, too. And so, suppose our proof proceeds along the lines of the following variation:

Validity Proof Variation 1

  1. M ⊃ ~C
  2. E • M      /∴ E • ~C
  3. M • E      2, Com.
  4. M            3, Simp.
  5. ~C           1, 4, M.P.
  6. E             2, Simp.
  7. E • ~C     6, 5, Conj.

Now, notice how in variation 1, M was isolated from “E • M” using Com. and Simp. in lines 3 and 4, while E wasn’t isolated until line 6 by using Simp. on line 2.

On the other hand, here’s an equally valid variation:

Validity Proof Variation 2

  1. M ⊃ ~C
  2. E • M            /∴ E • ~C
  3. E                  2, Simp.
  4. M • E           2, Com.
  5. M                 4, Simp.
  6. ~C                1, 5, M.P.
  7. E • ~C          3, 6, Conj.

Variation 2 decided to isolate the E and M in a slightly different order than as occurred in variation 1. This is just fine. Sometimes the order doesn’t matter for certain steps. Other times, the order does matter, though. For example, note how both arguments used commutation to get the conjunction E • M switched to M • E before extracting M with simplification. That’s because to keep things consistent, simplification only works with the first part. Therefore, the order of those steps mattered. Likewise, of course, in both variations M had to be isolated before it could be used in the proof as a partner with M ⊃ ~C in the application of modus ponens. Again, this order mattered. Whether a formal proof follows the rules and properly shows validity is a closed question. How to get there is not.

EXERCISES

Please fill in the ledger column for the following completed proofs:

1.

1. F ⊃ G

2. ~G             /∴ ~F

3. ~F

2.

1. ~C ∨ ~B

2. B ⊃ C            /∴ ~B ∨ C

3. ~ (C • B)

4. B ⊃ (C • B)

5. ~B

6. ~B ∨ C

3.

1. (F ⊃ D) • (H ⊃ J)

2. F ∨ H

3. K                         /∴ [(K • D) ∨ (K • J)]

4. D ∨ J

5. K • (D ∨ J)

6. [(K • D) ∨ (K • J)]

4.

1. ~A ∨ B

2. F ∨ G

3. ~G ∨ ~B

4. ~F             /∴ ~A

5. G

6. ~~G

7. ~B

8. ~A

5.

1. [(F • G) ⊃ C]

2. G ⊃ F

3. ~(~G ∨ K)            /∴ C

4. ~~G • ~K

5. ~~G

6. G

7. F

8. F • G

9. C

Construct formal proofs for the provided arguments.

1.

1. A ⊃ B

2. ~(B • A)      / ∴ ~A

2.

1. F

2. (F ⊃ G) • (H ∨ J)      /∴ (G ∨ J)

3.

1. A ⊃ D

2. D ⊃ F

3. ~F ∨ ~G

4. G           /∴ ~A

4.

1. ~A ⊃ (F ∨ G)

2. F ⊃ D

3. G ⊃ H

4. ~(D ∨ H)       /∴ A

5.

1. [A ⊃ (B ⊃ C)]

2. ~(C ∨ D)                /∴ ~(A • B)

V. Conditional Proofs

This section will introduce a technique for deducing conditional statements from existing premises. While our replacement and inference rules enable us to do this already, our new “conditional proof” technique and its notation will make this process much shorter, simpler, and more manageable.

Conditional “if . . . then . . .” statements are very powerful within our deductive system. Nine out of the 19 rules involve conditionals. In everyday life, it is often critical to establish conditional connections between claims, events, or other statements. Our ability to know things like B would be true if A were true can be of crucial importance in our daily reasoning.

Recall that the truth of a conditional does not solely depend on the truth of the antecedent. We can assert p ⊃ q without necessarily asserting p. Often in an argument, it is useful to use existing premises to help us arrive at the claim “if p were true, then q would be true” without needing to assert p. And this conditional, p ⊃ q, might then even have a further role in the argument. The method of “conditional proofs” helps us establish such conditionals within our deductive proofs.

Learning how to use conditional proofs only requires adding a few new bits of presentation and abiding by a few new rules.

Conditional proofs are sub-proofs. They work by allowing us to introduce a hypothetical assumption into an argument, derive an assumption-specific-conclusion, and then “officially” export the fact that the assumption yields such a conclusion back out one level as an intermediate premise in the bigger argument. The new intermediate premise shows up as a conditional with the assumption as the antecedent and the conclusion as the consequent.

Below is an example of what a finished argument that uses a conditional proof can look like. As with the previous section, the remainder of this section will break down and clarify this example as a way to present the general idea.

Conditional Proof Example 1[2]

  1. (A ∨ B) ⊃ (C • D)
  2. (D ∨ E) ⊃ F              /∴ A ⊃ (F • A)
  3. | A                          /∴ F (A.C.P.)
  4. | (A ∨ B)                  3, Add.
  5. | C • D                     1, 4, M.P.
  6. | D • C                     5, Com.
  7. | D                          6, Simp.
  8. | D ∨ E                    7, Add.
  9. | F                           2, 8, M.P.
  10. A ⊃ F                     3-9, C.P.
  11. A ⊃ (F • A)             10, Abs.

The Anatomy of a Conditional Proof

The above example introduces a conditional proof at line 3 that ends at line 9 and is therefore used to introduce a new intermediate premise at line 10. Looking at the above example, you will see a vertical line that begins at line 3 and runs down all the way to line 9. This line is the scope line. The scope line marks out all the steps that are inside the sub-proof.

Moving to the right, after the scope line on line 3, we have “A.” Here, A is the assumption that we’ve strategically introduced for the broader purpose of ultimately getting our conclusion, A ⊃ F, from our original given premises 1 and 2. You can introduce any properly written propositional statement as an assumption for a conditional proof – it all depends on what you are trying to prove. The important thing is that the introduction is marked as the “assumption of a conditional proof.” In the ledger column, we have “/∴ F (A.C.P.)”. After we introduce A, we use this notation to justify its introduction by noting that it is an Assumption for a Conditional Proof (A.C.P.) being introduced specifically to help us get to our target sub-conclusion, in this case F.

Once our conditional assumption is introduced, the remaining lines behind the scope line act as a mini sub-proof that gets to use all the original premises plus assumption A as sub-premises for its purposes. Thus, you can see lines 4-9 using the replacement and inference rules we learned about earlier to ultimately derive F from lines 1, 2, and 3. It is important to also notice how line 9 uses line 2 from the original layer of the proof and line 8 from our sub-proof in combination. This is legitimate; the steps within a sub-proof can pull from the main argument and from its own steps.

Having achieved F at line 9, line 10 then “exports” the conditional “A ⊃ F” back into the main argument. You will see there is no scope line on line 10. This is because A ⊃ F has been proven as a valid contribution to our intermediate premises. If we assume A, then (given our other premises from lines 1 and 2) we have F, and the ledger note, “3-9, C.P.” documents that this contribution was achieved through the Conditional Proof (C.P.) of lines 3-9.

Notice how line 11 follows the same format as normal proofs from our earlier examples, except that it is using line 10 as its base material for the application of the “absorption” inference rule. This shows how the conclusion from a conditional proof, once exported, becomes a part of the larger argument. And, in our argument, line 11 states our desired conclusion, thus completing the proof as a whole.

Our example walked us through the use of a conditional proof to add an intermediate premise to our argument. The conclusion of a conditional proof, i.e., what we get to export back out, will always be in the form of a conditional with the initial assumption as the antecedent and the desired sub-conclusion as the consequent.

The conditional (assumption ⊃ sub-conclusion) is the only thing that can be exported from the sub-proof. This is a crucial rule. Any other statement that occurs within the confines of the scope line must stay where it is. The scope line is a one-way barrier. Premises can come in from outside, but nothing else can go out.

Using Multiple Conditional Proofs

Arguments can have multiple conditional proofs. They can occur in serial, as in the following example:

Serial Conditional Proof Example

                  1. A ∨ B
                  2. ~A ∨ F
                  3. ~B ∨ G                  /∴ F ∨ G
                  4. | A                         /∴ F (A.C.P.)
                  5. | ~~A                     4, D.N.
                  6. | F                         2, 5, D.S.
                  7. A ⊃ F                    4-6, C.P.
                  8. | B                         /∴ G (A.C.P.)
                  9. | ~~B                     8, D.N.
                  10. | G                        3, 9, D.S.
                  11. B ⊃ G                    8-10, C.P.
                  12. (A ⊃ F) • (B ⊃ G)   7, 11, Conj.
                  13. F ∨ G                     1, 12, C.D.

In this example, we have two different conditional proofs, one at lines 4-6 and the other at lines 8-10. Each one, “sub-proof 1” and “sub-proof 2,” can draw inferences from lines in the main proof that have been established above it. However, “sub-proof 2” cannot use any statements from within the scope line of sub-proof 1.

Conditional proofs can also be nested, as shown in example 3, below:

Nested Conditional Proof Example

                    1. A ⊃ ~(~D • C)
                    2. C                            /∴ ~A ∨ D
                    3. | A                          /∴ D (A.C.P.)
                    4. || ~D                       /∴ (~D • C) (A.C.P.)
                    5. || ~D • C                 2, 4, Conj.
                    6. | ~D ⊃ (~D • C)       4-5, C.P.
                    7. | ~(~D • C)              1, 3, M.P.
                    8. | ~~D                      6, 7, M.T.
                    9. | D                          8, D.N.
                    10. A ⊃ D                     3-9, C.P.
                    11. ~A ∨ D                   10, Impl.

Here, we have a sub-proof (sub-proof 1) at lines 3-9. Within this, we have a “sub-sub” at 4-5. Again, each scope line is a one-way barrier. And so, “sub-sub” can draw from the main proof or the sub-proof 1 lines, but sub-proof 1 is only allowed to take in the official export from the sub-sub. Moreover, sub-sub can only export its conditional one layer back out – in this case, into sub-proof 1. Only the official conditional export of sub-proof 1 gets to appear as an intermediate premise in the main proof, here at line 10.

Conditional Proofs for Tautologies

In addition to supplementing arguments, conditional proofs can be used to prove claims that need no premises at all. Logical tautologies (not to be confused with the replacement rule of the same name) are statements that are always true, no matter what. Thus, they need no premises.

Here is an example of a conditional proof of a tautological statement:

Conditional Proof for Tautology Example

  1. | A                          /∴ (A ∨ ~A) (A.C.P.)
  2. | A ∨ ~A                  1, Add.
  3. A ⊃ (A ∨ ~A)           1-2, C.P.

Notice here how the ultimate conclusion, A ⊃ (A ∨ ~A), does not appear behind the first “/∴” sign as it does with most proofs. Instead, the proof uses that space for stating the conditional proof’s conclusion. That’s just how it is. Not all tautological statements are as recognizably valid as is A ⊃ (A ∨ ~A). However, being able to prove their truth, lacking any premise whatsoever, can be a useful trick for a logician to be familiar with.

EXERCISES

Provide Conditional Proofs for the following arguments:

1.

1. A

2. B ∨ C      /∴ ~C ⊃ (B • A)

2.

1. F ∨ G

2. F ⊃ H

3. ~K ∨ ~H

4. G ⊃ K          /∴  K ≡ G

VI. Indirect Proofs

Our second new skill is the use of “indirect proofs.” As with conditional proofs, indirect proofs involve introducing an assumption, scope lines, and sub-arguments to further our main argument. However, in this case, what we will be doing is inserting a possible assumption, sub-proving that the assumption leads to a contradiction, and thereby adding the assumption’s negation to the main list of intermediate premises.

Here is an example of a proof that uses an indirect proof. This section will dissect the example to introduce the concept:

Indirect Proof Example 1[3]

  1. (H ⊃ I) • (J ⊃ K)
  2. (I ∨ K) ⊃ L
  3. ~L                         /∴ ~(H ∨ J)
  4. | H ∨ J                   /∴ (A.I.P.)
  5. | I ∨ K                    1, 4, C.D.
  6. | L                          2, 5, M.P.
  7. | L • ~L                   6, 3, Conj.
  8. ~(H ∨ J)                  4-7, I.P.

The above use of an indirect proof has quite a few commonalities with a conditional proof, so let’s flag the differences. The indirect proof is introduced in line 4. Notice the assumption “H ∨ J” behind the scope line. Notice, too, the ledger entry. Here, we just have “/∴ (A.I.P.)” to note that line 4 was introduced as an Assumption for an Indirect Proof. Lines 5 and 6 are derivations from lines 1 and 4, then 2 and 5, as is noted in the ledger column.

The final line of the indirect proof’s sub-proof is important. Here, we’ve validly derived “L • ~L” from lines 6 and 3. Well, L • ~L is a contradiction and therefore cannot be true. Because introducing the assumption H ∨ J is what allowed us to validly derive a contradiction, we are justified in rejecting H ∨ J. Thus, our indirect proof can export ~(H ∨ J) as its upshot into line 8. Notice both that the scope line has ended, thereby allowing ~(H ∨ J) to be used in the main argument, and that the ledger column notes the lines of the indirect proof by listing the first through last lines and an “I.P.” abbreviation for indirect proof.

All the same principles regarding scope, nesting, and exportation that held for conditional proofs also apply to indirect proofs. Again, the scope line is a one-way barrier, and only one line can be exported out; with indirect proofs, the only legitimate exportation is the negation of the introduced assumption.

Proving Tautologies with Indirect Proofs

Just as conditional proofs can be used without premises to prove tautologies, we can also use indirect proofs to provide formal demonstrations of tautological statements. In this case, we introduce the rejection of the tautology as a starting assumption and show that such a rejection leads to contradiction, thus proving the tautology itself.

Below is an example of an indirect proof being used to prove a tautology, in this case G ∨ ~G:

Indirect Proof Example 2[4]

  1. | ~(G ∨ ~G)             /∴ (A.I.P.)
  2. | ~G • ~~G              1, De M.
  3. | ~G • G                  2, D.N.
  4. | G • ~G                  3, Com.
  5. ~~(G ∨ ~G)             1-4, I.P.
  6. G ∨ ~G                    5, D.N.

In the above example, the negation of our desired conclusion is immediately introduced to begin an indirect proof leading to its contradiction.

Indirect proofs are a popular way to make a case and/or eliminate untenable propositions. The overall argumentative strategy of reductio ad absurdum, which aims to advance a claim by showing that the claim’s negation leads to a contradiction, is analogous to the use of indirect proof.

A famous example from the history of philosophy is Gottfried Leibniz’s (1646-1716) argument for the claim that despite all appearances of evil, we live in the “best of all possible worlds.” Leibniz takes as premises the ideas that the world was created by God, and that this God is all-powerful, all-knowing, and all-benevolent. Leibniz rejected the view that our world was somehow flawed or less than the best possible world because he thought such a position ran into a contradiction when met with his premises about God’s perfection. After all, any factor that would force or inspire God to create a lesser world would contradict Leibniz’s ideas that God is all-powerful, all-knowing, and all-benevolent – since presumably nothing could force God’s hand or diminish God’s kindness. Therefore, using a version of indirect proof, Leibniz takes as established the claim that we live in the best of all possible worlds.

EXERCISES

Complete indirect proofs for the following arguments.

1.

1. S ⊃ G

2. S        /∴ G

2.

1. (A ∨ B)

2. (B ⊃ D)    /∴ ~A ⊃ D

3.

1. (F • G) • B

2. ~H ⊃ [~(F ∨ G) ∨ ~B]   /∴ H


  1. This chapter follows Copi, I. M., Cohen, C., & Rodych, V. (2020). Introduction to Logic. 15th ed., Special Indian Edition. Routledge. Chapter 9, pp. 332-450, for its organization and general content presentation. Where direct excerpts are utilized, specific citation is given through further footnotes.
  2. Modified from Copi et al., 2020, p. 421.
  3. Taken from Copi et al., 2020, p. 439.
  4. From Copi et al., 2020, p. 441.

License


Chapter 5 Copyright © 2022 by Sean Gould is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
