Propositional Calculus: Syntax, Semantics and Rules of Inference

__1. Introduction__

The propositional calculus (PC) is a formal language that adequately represents the set of valid (truth-preserving) inferences which depend on coordinate expressions such as *and*, *or*, *not*, *if…then…*, *if and only if*. From the perspective of PC, we are only interested in those inferences whose validity depends on the role of these expressions. Hence, we abstract away from what particular sentences mean. Here is a simple example:

(1)a. Man U have won the European Cup and

b. Therefore, Man U have won the European Cup.

(1) is clearly valid - it is not possible for b. to be false and for a. to be true; alternatively, if a. is true, then b. must be true. But the validity of (1) doesn’t depend on the fact that we are talking about Man U; we may therefore replace the particular sentences with *variables* which range over all sentences. So revised, (1) is depicted as

(2)
__P & Q__

P

Q

PC is able to represent only those inferences such as (2) - those whose validity just depends on the logical constants.

*Logical constants*
= expressions whose meaning is constant, i.e., not variable.

*Truth functions*
= the constants express functions from truth values to truth values.

The
language of PC thus includes variables which range over sentences (P, Q, R, S,…) and logical constants.

*The standard logical constants of PC* =

¬ = not

& = and

v = or

→ = if…, then…

↔ = if and only if (iff)

(i) The identifications here will later be questioned, but, *pro tem*, think of the constants as
expressing the corresponding English terms.

(ii)
We shall see later that the five constants mark a redundancy. It is easy to
show that we can express the remaining three constants by using either (i) ‘¬’ and ‘&’ or (ii) ‘¬’ and ‘v’ or (iii) ‘¬’ and ‘→’.
In fact, we only need one of two logical constants not listed here: ‘↓’ (dagger) and ‘|’ (stroke). Such joys are to come.

Sentences
of PC include all and only those symbol strings which are lawful combinations
of constants and variables. For example:

(3)a. ¬P

(‘Bill isn’t tall’)

b. P → (Q & ¬R)

(‘If Harry is short, then Bill is tall and
Mary isn’t blonde’)

c. (P ↔ Q) → ((P → Q)
& (Q → P))

(‘If Harry is tall iff
Sarah is blonde, then if Harry is tall, then Sarah is blonde, and if

Sarah is blonde, then
Harry is tall’)

Etc.

*An excursus on scope*

The
use of brackets in PC is to avoid scope ambiguities. English, as we saw, is
rife with ambiguity, but in PC, we rigorously avoid it - brackets serve this
end, as they could do in a regimented version of English.

(4)a. Mad dogs and Englishmen go out in the mid-day sun.

b. [Mad [dogs and Englishmen]] go out in the mid-day sun.

c. [[Mad dogs] and Englishmen] go out in the mid-day sun.

Here is an example from PC:

(5)a. P → P & Q

b. (P → P) & Q

c. P → (P & Q)

The
latter two formulae are quite distinct: b. is a conjunction which ‘says’ two
things are the case (if P, then P *and*
Q); c. is a conditional which ‘says’ if something (P) is the case, then
something else (P & Q) is the case. More formally, b. is *stronger*, c. is *weaker*.

*Strong* = takes more for it to be true, i.e., less for it to be false.

*Weak* = takes less for it to be true, i.e., more for it to be false.

Think
of it this way: a formula which ‘says’ more is more likely to be false; it
makes a greater demand on the world, as it were - saying more equals strength.
A formula which says little, places a weak demand on the world - saying little
equals weakness. In general, *a
conditional is always weaker than a conjunction*.

If
we were just faced with a., the convention is to read it as if it were c., i.e.,
we take the weaker constant to be the main constant - the constant which
defines the kind of formula we have.

*Main constant*
= constant with widest possible scope.

*Scope*
= the scope of
an expression is the smallest formula of which it is a part.

It
follows that the constant with widest possible scope has the whole containing
formula in its scope. For example, in b., ‘→’ scopes
just over ‘P → P’, for that is the smallest formula it is a part of.
On the other hand, the smallest formula which contains
‘&’ is the whole of b. The case is reversed for c. Explicit marking of scope via brackets removes ambiguity.

Our
definition of scope presupposes that we can be precise about what is and is not
a formula of PC.

__2.
Syntax of PC__

The
syntax of a natural language such as English must be discovered. When we are
dealing with formal languages, we can stipulate the syntax, for the languages
are our own creations. Of course, this doesn’t mean that we can stipulate
anything we please, for we want the language to do a
‘job’ and an acceptable formula should reflect that objective. In the present
case, we want PC to codify valid inferences which turn on the constants.

*Syntax*
(for formal languages) = a definition of ‘formula of the language’

Here
is an adequate definition for ‘formula of PC’.

(Df-PC)

(i) P, Q, R,… are formulae of PC.

(ii) If X and Y are formulae of PC, then ¬X, (X & Y), (X v Y), (X → Y), and (X ↔ Y) are formulae of PC.

(iii) These are all the formulae of PC.

(‘X’ and ‘Y’ are meta-variables which range over formulae of PC - they are not part of PC itself.)

(Df-PC) is recursive, which means that we can cycle through
(ii) to generate all the formulae of PC. In other words, the definition
provides us with a finite *implicit*
definition of the infinite set of formulae of PC.
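The recursive definition can be mimicked directly in code. Below is a minimal Python sketch (the function name `build` and the ASCII renderings `->`, `<->` are my own, not part of PC): each call applies clause (ii) once to a stock of formulae, so repeated calls cycle through the definition just as described.

```python
from itertools import product

# A sketch of (Df-PC). Clause (i) supplies the atoms; `build` (my name)
# applies clause (ii) once, adding ¬X and the four binary combinations
# (ASCII stand-ins: '->' for the arrow, '<->' for the double arrow).

def build(formulae):
    new = set(formulae)
    for x in formulae:
        new.add(f"¬{x}")
    for x, y in product(formulae, repeat=2):
        for c in ("&", "v", "->", "<->"):
            new.add(f"({x} {c} {y})")
    return new

atoms = {"P", "Q"}       # clause (i), restricted to two variables
depth1 = build(atoms)    # 20 formulae, e.g. '¬P', '(P & Q)', '(Q -> P)'
depth2 = build(depth1)   # e.g. '¬¬P', '(¬P & Q)', '((P v Q) <-> ¬P)'
print(len(depth1))       # 20
```

Iterating `build` for ever generates, in the limit, the infinite set of formulae over the chosen atoms - a finite recipe for an infinite set, as the text says.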

**Q1**: In (i), we appear to commit ourselves to an indefinite number of variables, and so, if we tried to write out the definition in full, it would no longer be finite, i.e., we couldn’t write it out. Can you see a way of complicating the definition to avoid this problem? (Hint: we just need to discriminate between an infinity of variables; we don’t actually need to list an infinity of variables.)

**Q2**:
Is (iii) required? What job is it doing?

__3.
Semantics of PC (Truth Tables)__

The semantics for a formal language tells us what the formulae mean, i.e., their truth conditions. Since we have abstracted away from the meaning of particular sentences, we simply assume that any given variable can take the value True (T) or False (F). We work out the truth conditions of all further formulae, following the syntax, as a function of the meaning of the logical constants, which compositionally form complex formulae from, ultimately, Ps and Qs. We *define* the meaning of each constant via a basic truth table, with a truth table for each further formula being provided as a function of the basic truth tables. We say that *a constant expresses a truth function*: it takes truth values as arguments and produces truth values as values.

Let ‘A’ and ‘B’ be meta-variables which range
over PC formulae of arbitrary complexity.
For example, the truth table for ‘A & B’ applies to any formulae whose main
constant is ‘&’.

__The basic truth tables__

A B   ¬A   A & B   A v B   A → B   A ↔ B
T T    F     T       T       T       T
T F    F     F       T       F       F
F T    T     F       T       T       F
F F    T     F       F       T       T

(i) ‘¬A’ is true iff ‘A’ is false.

(ii) ‘A & B’ is true iff ‘A’ is
true and ‘B’ is true.

(iii)
‘A v B’ is true iff either ‘A’ is true or ‘B’ is
true.

(iv)
‘A → B’ is true iff it’s not the case that ‘A’
is true and ‘B’ is false.

(v)
‘A ↔ B’ is true iff either ‘A’ is true and ‘B’
is true, or ‘A’ is false and ‘B’ is false.

Each truth table details all possible ways in which the given formula could be assigned the value T or F. That is, a truth table exhausts the values a truth function can have, given all possible arguments. In general, for a formula of n variables, its table will contain 2^n rows, each specifying a way the ‘world’ could be such that the formula receives a value. So, with ‘¬P’ we have 2^1 = 2 rows; with two-variable formulae, we have 2^2 = 4. For the formula ‘P → (Q → R)’, we have 2^3 = 8. And so on.

The
above tables provide you with all the information you require to specify a
truth table for any formula of PC.
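Since each constant expresses a truth function, the whole semantics can be sketched in a few lines of code. The following Python fragment is illustrative only (the names `NOT`, `AND`, `IMP`, etc. are mine; True/False stand for T/F); it generates the 2^n rows for a formula given as a function of its variables.

```python
from itertools import product

# A sketch of the semantics. Each constant is a truth function (names
# NOT/AND/OR/IMP/IFF are mine; True/False stand for T/F).
NOT = lambda a: not a
AND = lambda a, b: a and b
OR  = lambda a, b: a or b
IMP = lambda a, b: (not a) or b   # false only when a is T and b is F
IFF = lambda a, b: a == b

def truth_table(n_vars, formula):
    """All 2^n rows: (assignment, value of the formula on it)."""
    return [(vs, formula(*vs))
            for vs in product([True, False], repeat=n_vars)]

# 'P → (Q → R)' has 2^3 = 8 rows, as in the text; it is false on one row only.
table = truth_table(3, lambda p, q, r: IMP(p, IMP(q, r)))
print(len(table))                        # 8
print([vs for vs, v in table if not v])  # [(True, True, False)]
```

The single falsifying row, P = T, Q = T, R = F, is exactly what the basic table for ‘→’ predicts.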

*An Excursus on the
Material Conditional*

The
conditional can cause problems. First we need some definitions:

*Sufficient condition* = A is a sufficient condition for B iff A alone, regardless of anything else, is enough - suffices - for B.

*Necessary condition* = A is a necessary condition for B iff without A, there is no B.

For
example:

(i) A’s being a sister is a *sufficient condition* for A’s being a sibling; it is *not* a necessary condition, for if A were
a brother, A would still be a sibling.

(ii)
A’s being female is a *necessary condition*
for A’s being a sister; it is *not* a
sufficient condition, for A need not have any siblings at all.

(iii)
A’s being a female sibling is a *necessary
and sufficient condition* for A’s being a sister.

A conditional ‘A → B’ expresses that A is a sufficient condition for B, and that B is a necessary condition for A. In English, there are different formulations of these relations:

(i) If A, then B (A sufficient for B)

(ii) A, if B (B sufficient for A)

(iii) Only if A, B (B sufficient for A)

(iv) A, only if B (A sufficient for B)

*Definitions*

*Tautology* (logical truth) = a formula whose main column is all Ts, i.e., a function whose only value is T.

E.g.,

__A v ¬A__

T T F T

F T T F

*Inconsistency* (contradiction, logical falsehood) = a formula whose main column is all Fs, i.e., a function whose only value is F.

E.g.,

__A & ¬A__

T F F T

F F T F

*Contingency* = a formula whose main column features both Ts and Fs, i.e., a function whose value can be both T and F.

Examples
are provided by our basic truth tables.

(i) The negation of a tautology is an inconsistency.

(ii)
The negation of an inconsistency is a tautology.

(iii)
The negation of a contingency is a contingency.
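The three classifications, and the three facts about negation, are easy to check mechanically. A Python sketch (the name `classify` is mine; True/False stand for T/F):

```python
from itertools import product

# A sketch (name `classify` mine): read off the main column of a formula,
# given as a Python truth function, and classify it.
def classify(n_vars, formula):
    column = [formula(*vs) for vs in product([True, False], repeat=n_vars)]
    if all(column):
        return "tautology"
    if not any(column):
        return "inconsistency"
    return "contingency"

print(classify(1, lambda a: a or not a))         # A v ¬A: tautology
print(classify(1, lambda a: a and not a))        # A & ¬A: inconsistency
print(classify(2, lambda a, b: a and b))         # A & B: contingency
# Fact (i): the negation of a tautology is an inconsistency.
print(classify(1, lambda a: not (a or not a)))   # inconsistency
```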

__3.1.
Definitions - Reducing the Constants__

As
mentioned above, our five constants signal a redundancy. We can easily show
that we only need two constants: either (i) ‘¬’ and
‘&’ or (ii) ‘¬’ and ‘v’ or (iii) ‘¬’ and ‘→’. That is to say, we can *contextually define* the remaining three
truth functions in terms of a given two truth functions. Remember, a truth
table defines a constant, so if we can express the truth table for constant ‘$’
without using ‘$’, then we have adequately defined the function expressed by
‘$’. Such definitions allow us to eliminate redundant constants; equally, they
allow us to introduce constants.

Here are the definitions if we take ‘¬’ and ‘&’ as given, where ‘↔df’ means ‘logically equivalent by definition’:

(i) A v B ↔df ¬(¬A & ¬B)

(ii) A → B ↔df ¬(A & ¬B)

(iii) A ↔ B ↔df ¬(A & ¬B) & ¬(B & ¬A)

Here are the definitions if we take ‘¬’ and ‘v’ as given:

(iv) A & B ↔df ¬(¬A v ¬B)

(v) A → B ↔df ¬A v B

(vi) A ↔ B ↔df ¬(¬(¬A v B) v ¬(¬B v A))

Here are the definitions if we take ‘¬’ and ‘→’ as given:

(vii) A & B ↔df ¬(B → ¬A)

(viii) A v B ↔df ¬B → A

(ix) A ↔ B ↔df ¬((A → B) → ¬(B → A))

**Q**: To check the correctness of these definitions, provide truth tables for (i)-(ix) using ‘↔df’ as the main constant. (The subscript is merely notational, and may be happily deleted.)

__3.2.
Stroke and Dagger__

It
was mentioned above that we can express every truth function via either of two
constants: ‘|’
(stroke) and ‘↓’(dagger).

*Stroke*
(‘Sheffer’s Stroke’ or NAND)

Stroke
is a function of two arguments. Its truth table is:

__A | B__

T F T

T T F

F T T

F T F

which is equivalent to the table for ‘¬(A & B)’.

‘A | B’ is true iff it’s not the case that both ‘A’ is true and ‘B’ is true.

The
stroke definitions of the five main logical constants are as follows:

(i) ¬A ↔df A | A

(ii) A & B ↔df (A | B) | (A | B)

(iii) A v B ↔df (A | A) | (B | B)

(iv) A → B ↔df ((A | A) | (B | B)) | (B | B)

(v) A ↔ B ↔df [(((A | A) | (B | B)) | (B | B)) | (((B | B) | (A | A)) | (A | A))] | [(((A | A) | (B | B)) | (B | B)) | (((B | B) | (A | A)) | (A | A))]

**Q**:
To check the correctness of the definitions, provide truth tables for the
stroke formulae.

*Dagger* (‘Peirce’s Dagger’ or NOR)

Dagger
is a function of two arguments. Its truth table is:

__A ↓ B__

T F T

T F F

F F T

F T F

‘A ↓ B’ is true iff neither ‘A’ is true nor ‘B’ is true, i.e., iff ‘A’ is false and ‘B’ is false.

So, the truth table above is the same as that for ‘¬(A v B)’ and for ‘¬A & ¬B’.

The dagger definitions of the five main constants are as follows:

(i) ¬A ↔df A ↓ A

(ii) A & B ↔df (A ↓ A) ↓ (B ↓ B)

(iii) A v B ↔df (A ↓ B) ↓ (A ↓ B)

(iv) A → B ↔df (((A ↓ A) ↓ (B ↓ B)) ↓ (A ↓ A)) ↓ (((A ↓ A) ↓ (B ↓ B)) ↓ (A ↓ A))

(v) A ↔ B ↔df ((A ↓ A) ↓ B) ↓ (A ↓ (B ↓ B))

**Q**: To check the definitions, provide truth
tables for the dagger formulae.

*Duality*

(i) A | B ↔df [(A ↓ A) ↓ (B ↓ B)] ↓ [(A ↓ A) ↓ (B ↓ B)] (i.e., the negation of the dagger expression of ‘&’)

(ii) A ↓ B ↔df [(A | A) | (B | B)] | [(A | A) | (B | B)] (i.e., the negation of the stroke expression of ‘v’)

**Q**: To check the correctness of the definitions, provide truth tables for the formulae treating ‘↔df’ as the main constant. (Again, delete the notational subscript.)
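All of the stroke and dagger claims in this section can likewise be verified by exhausting the four assignments to A and B. A Python sketch (the names `stroke` and `dagger` are mine; the biconditional line checks one NOR rendering of ‘↔’):

```python
from itertools import product

# A brute-force sketch checking stroke (NAND) and dagger (NOR) claims.
stroke = lambda a, b: not (a and b)   # '|'
dagger = lambda a, b: not (a or b)    # '↓'

for a, b in product([True, False], repeat=2):
    # Negation, conjunction, disjunction via stroke
    assert stroke(a, a) == (not a)
    assert stroke(stroke(a, b), stroke(a, b)) == (a and b)
    assert stroke(stroke(a, a), stroke(b, b)) == (a or b)
    # The conditional via stroke, as in (iv)
    avb = stroke(stroke(a, a), stroke(b, b))
    assert stroke(avb, stroke(b, b)) == ((not a) or b)
    # One NOR rendering of the biconditional
    assert dagger(dagger(dagger(a, a), b), dagger(a, dagger(b, b))) == (a == b)
    # Duality: negating the dagger '&' gives stroke; negating the stroke 'v' gives dagger
    w = dagger(dagger(a, a), dagger(b, b))        # A & B in daggers
    assert dagger(w, w) == stroke(a, b)
    v_ = stroke(stroke(a, a), stroke(b, b))       # A v B in strokes
    assert stroke(v_, v_) == dagger(a, b)

print("all stroke/dagger checks pass")
```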

__4.
Checking for Validity via Truth Tables__

We take an argument to be any collection of assumptions from which a conclusion is drawn, where it is understood that if the assumptions are true, then the conclusion must also be true - equivalently, it is not possible for the conclusion to be false and the premises all to be true. From the definition of ‘→’, we see that any argument can be expressed by a conditional, where the assumptions are the antecedent (if there is more than one assumption, then the antecedent will be the conjunction of them) and the conclusion will be the consequent.

Can
we check the truth table for a conditional to see whether the corresponding
argument is valid?

Yes.
If the conditionalisation of an argument is a
tautology, then the corresponding argument is valid.

Why? Because a conditional is false exactly when its antecedent is true and its consequent is false. These are precisely the conditions under which the corresponding argument would be invalid. Therefore, if this case doesn’t arise, then the argument isn’t invalid, i.e., it’s valid; or, equivalently, the main column of the truth table for the conditional will not feature an F, i.e., the conditional will be a tautology. Thus, a necessary and sufficient condition for an argument to be valid is that its conditionalisation is a tautology.

*An Example*

(i) Argument

P

__P
→ Q__

Q

(ii)
Conditionalisation and Truth Table

(P & (P → Q)) → Q

T T T T T T T

T F T F F T F

F F F T T T T

F F F T F T F

Note:
A conditionalisation whose truth table marks it as
contingent is not a ‘sometimes’ valid argument - there is no such notion; e.g.,
‘¬P & (P → Q) → Q’.
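The test just described can be sketched in code: conjoin the premises, form the conditional, and check every row. Python, with names (`is_valid`, `IMP`) of my own choosing:

```python
from itertools import product

# A sketch of the test just described: an argument is valid iff its
# conditionalisation is a tautology, i.e., no row makes all the premises
# true and the conclusion false.
IMP = lambda a, b: (not a) or b

def is_valid(n_vars, premises, conclusion):
    for vs in product([True, False], repeat=n_vars):
        antecedent = all(p(*vs) for p in premises)
        if not IMP(antecedent, conclusion(*vs)):
            return False   # true premises, false conclusion: invalid
    return True

# Modus ponens — P, P → Q ├ Q — is valid:
print(is_valid(2, [lambda p, q: p, lambda p, q: IMP(p, q)],
               lambda p, q: q))          # True
# The contingent case from the note — ¬P, P → Q ├ Q — is not:
print(is_valid(2, [lambda p, q: not p, lambda p, q: IMP(p, q)],
               lambda p, q: q))          # False
```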

*A Non-Truth Table Method to Check for Validity* (an algorithm)

(i) Conditionalise the argument.

(ii) Assume the conditional is false, i.e., put an F under the main constant.

(iii) On the assumption of (ii), the antecedent of the conditional is true and the consequent false. Therefore, put a T under the main constant of the antecedent and an F under the main constant of the consequent.

(iv) Proceed, guided by the basic truth tables, to put a T or F under each variable and constant of the conditional.

(v) If each variable and constant of the conditional can be assigned a T or F, then the assignment constitutes a row of the complete truth table under which the conditional is false, i.e., the conditional is not a tautology, i.e., the corresponding argument is not valid.

(vi) If a complete assignment can’t be made (i.e., if a basic truth table is contradicted), then the assumption of (ii) is false, i.e., the complete truth table for the conditional doesn’t contain an F in its main column, i.e., the conditional is a tautology, i.e., the corresponding argument is valid.

*Two examples*

(i) (P & (P → Q)) → Q

T T T T F F F

4 2 6 5 7 1 3

X (a basic table is contradicted: at step 5 ‘P → Q’ is assigned T, yet steps 6 and 7 force its antecedent T and its consequent F)

(ii) ¬P & (P → Q) → Q

T F T F T F F F

4 5 2 7 6 8 1 3

No contradiction, so the conditional is not a tautology and the corresponding argument is invalid.
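The shortcut method amounts to asking whether any assignment falsifies the conditional. A brute-force Python sketch of that question (the name `falsifying_row` is mine) reproduces the two results above:

```python
from itertools import product

# A brute-force sketch of the shortcut question: search for an assignment
# that makes the conditional false.
IMP = lambda a, b: (not a) or b

def falsifying_row(n_vars, conditional):
    for vs in product([True, False], repeat=n_vars):
        if not conditional(*vs):
            return vs      # a falsifying row: not a tautology
    return None            # none found: tautology, argument valid

# (i)  (P & (P → Q)) → Q — no falsifying row (the X case):
print(falsifying_row(2, lambda p, q: IMP(p and IMP(p, q), q)))        # None
# (ii) ¬P & (P → Q) → Q — falsified where P and Q are both false:
print(falsifying_row(2, lambda p, q: IMP((not p) and IMP(p, q), q)))  # (False, False)
```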

**Q**:
The method tells us whether a conditional formula is a tautology. Does it tell
us whether a formula is (i) an inconsistency or (ii)
a contingency? Give reasons.

__5.
Rules of Inference and Proof Theory for PC__

*Rule of inference* = a rule which allows us to transform or combine formulae under the preservation of validity (truth), i.e., if the input to the rule is true, then its output is also true.

*A formal proof* = a finite sequence of PC formulae such that each member is either an *assumption* or a transformation or combination of preceding formulae according to the rules of inference. The last member of the sequence is the *conclusion* and is such that the conjunction of its negation and the assumptions upon which it rests is a contradiction.

*Rules of inference*

(i) Rule of Assumption (A)

Insert
any formula at any stage into a proof. The assumed formula rests upon the
assumption of itself.

(ii)
Double Negation (DN)

a. __¬¬A__      b. __A__

      A             ¬¬A

(‘Two
negations cancel each other out’.)

The
derived formula rests upon the assumptions upon which the transformed formula
rests.

(iii) &-Introduction (&I)

__A
B__

A & B

(‘One
can derive the conjunction of any two preceding formulae’)

The
derived formula rests upon the assumptions upon which the individual conjoined
formulae rest.

(iv) &-Elimination (&E)

__A & B__

A

B

(‘One can derive either conjunct from a preceding conjunction’)

Either
of the derived formulae rest upon the assumptions upon which the conjunction
rests.

(v) v-Introduction (vI)

__A__

A v B

(‘One can derive a disjunction which has any preceding formula as a disjunct’)

The
derived formula rests upon the assumptions upon which the given disjunct rests.

(vi) v-Elimination (vE)

__A v B__

__A__ __B__

__C
C__

C

(‘One
can derive a formula from a disjunction so long as one can derive the formula
from both disjuncts of the initial disjunction’)

The derived formula rests upon the assumptions upon which (i) the initial disjunction rests and (ii) the derivations of C from each disjunct assumption rest, *minus* the disjunct assumptions themselves.

(vii)
Modus Ponendo Ponens (MPP)

A

__A → B__

B

(‘One
can derive the consequent of a conditional if the antecedent of the conditional
is also given’)

The
derived formula rests upon the assumptions upon which the conditional and its
antecedent rest.

(viii)
Conditional Proof (CP)

__A__

__B__

A → B

(‘One
can derive a conditional from the assumption of its antecedent if one can
derive the conditional’s consequent from the assumption’)

The derived conditional rests upon the assumptions upon which the consequent rests, *minus* the assumption of the antecedent.

(ix)
Reductio ad Absurdum (RAA)

__A__

__¬A & A__

¬A

(‘One
can derive the negation of an assumption if, from that assumption, one can
derive a contradiction’)

The derived formula rests upon the assumptions upon which the contradiction rests, *minus* the initial assumption itself.

(x) Equivalence Definition (↔df)

a. __A ↔ B__      b. __(A → B) & (B → A)__

   (A → B) & (B → A)      A ↔ B

(‘One can rewrite a bi-conditional as the conjunction of its two conditionals, and *vice versa*.’)

The
derived formula rests upon the assumptions upon which the given formula rests.

*A simple sample
derivation*

P
& Q ├ P & (P v Q) (‘A ├ B’ means ‘B is derivable from A via
rules of inference’.)

1(1)
P & Q A

1(2) Q 1&E

1(3)
P v Q 2vI

1(4)
P 1&E

1(5)
P & (P v Q) 3,4&I

*Explanation, from left
to right*

(i) The first numbers indicate assumptions upon which the
corresponding formula rests.

In this example, all formulae rest just on
the first and only assumption.

(ii)
The bracketed number is the number of the line.

(iii)
In the middle, we have the derivation itself.

(iv) On the right, we have the rules and the line numbers to which they apply. So, look at line (5). This tells us that the formula is the result of the rule &-Introduction applied to lines (3) and (4).

We can apply our algorithm to check that the derivation is valid.

P & Q → P & (P v Q)

T T T F T F T T T

4 2 5 1 6 3

X (with P and Q both assigned T, the consequent ‘P & (P v Q)’ cannot receive the F assigned at step 3)
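The same verdict can be reached semantically: on every assignment on which the premise ‘P & Q’ is true, the conclusion ‘P & (P v Q)’ is true as well. A one-check Python sketch:

```python
from itertools import product

# A semantic spot-check (a sketch) of the sequent just derived,
# P & Q ├ P & (P v Q): on every assignment making the premise true,
# the conclusion is true as well.
ok = all((not (p and q)) or (p and (p or q))
         for p, q in product([True, False], repeat=2))
print(ok)  # True
```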

*A more complex example*

We know that A v B ↔df ¬(¬A & ¬B). So, if our rules are adequate, we should be able to derive ‘P v Q’ from ‘¬(¬P & ¬Q)’, and *vice versa*.

P v Q ├ ¬(¬P & ¬Q)

(a)

1 (1) P v Q A (The initial assumption)

2 (2) ¬P & ¬Q A (Assumption for RAA)

3 (3) P A (The first assumption for vE)

2 (4) ¬P 2&E (Derivation for the purposes of forming a contradiction)

2,3 (5) P & ¬P 3,4&I (A contradiction)

3 (6) ¬(¬P & ¬Q) 2,5RAA (The application of RAA)

7 (7) Q A (The second assumption for vE)

2 (8) ¬Q 2&E (Derivation for the purposes of forming a contradiction)

2,7 (9) Q & ¬Q 7,8&I (A contradiction)

7 (10) ¬(¬P & ¬Q) 2,9RAA (The application of RAA)

1 (11) ¬(¬P & ¬Q) 1,3,6,7,10vE (The application of vE - end of proof)

**Q1**: Prove ¬(¬P & ¬Q) ├ P v Q. (Hint: begin by assuming ‘¬(P v Q)’ for the purposes of RAA. For those who are stuck, a proof is given in Lemmon, p.38.)

**Q2**: Prove that the remainder of our definitions hold (Lemmon doesn’t contain the proofs):

(i) P → Q ├ ¬(P & ¬Q)

(ii) P ↔ Q ├ ¬(P & ¬Q) & ¬(Q & ¬P)

(iii) P & Q ├ ¬(¬P v ¬Q)

(iv) P → Q ├ ¬P v Q

(v) P ↔ Q ├ ¬(¬(¬P v Q) v ¬(¬Q v P))

(vi) P & Q ├ ¬(Q → ¬P)

(vii) P v Q ├ ¬Q → P

(viii) P ↔ Q ├ ¬((P → Q) → ¬(Q → P))

*Hints for proving*

(i) If the formula to be proved is a conditional, then assume its antecedent, and from that prove its consequent. Then by CP, one will have proved the formula.

(ii) If the formula to be proved is a negation (i.e., its main constant is ‘¬’), then assume the formula negated. Generate a contradiction from your assumption; then by RAA, negate the assumption to leave you with the formula that was to be proved.

(iii) If you start with an equivalence, then immediately break it up via ‘↔df’. The proofs here tend to be longer, but no more complex.

(iv) The more interesting proofs require you to make unobvious assumptions. Always think about what is to be proved. If, say, its main constant is a negation, then you will probably require RAA. If you can’t see how to get a contradiction, then work backwards.

(v) Again, the more interesting proofs will involve embedded arguments - RAAs within RAAs within vEs. With practice, you will enjoy the challenge.

*Elegance*

Given that a conditional is true if its consequent is true, we should be able to prove

Q ├ P → Q

1 (1) Q A

2 (2) P A

1,2 (3) Q & P 1,2&I

1,2 (4) Q 3&E

1 (5) P → Q 2,4CP

Similarly, if the antecedent of a conditional is false, then the conditional is true; hence, we should be able to prove

¬P ├ P → Q

1 (1) ¬P A

2 (2) P A

3 (3) ¬Q A

1,3 (4) ¬P & ¬Q 1,3&I

1,3 (5) ¬P 4&E

1,2,3 (6) P & ¬P 2,5&I

1,2 (7) ¬¬Q 3,6RAA

1,2 (8) Q 7DN

1 (9) P → Q 2,8CP

(See
Lemmon, pp.58-60 for a discussion of these cases. His method of proof is much
less elegant than that provided.)

*Theorems*

A
theorem is a formula that can be proved resting upon no assumptions. If our system is adequate, all and only tautologies should be
theorems.

From
any given proof, we can prove a theorem by CP.

Taking
the example from just above,

├ Q → (P → Q)

1 (1) Q A

2 (2)
P A

1,2(3) Q & P
1,2&I

1,2(4) Q
3&E

1 (5) P → Q 2,4CP

(6) Q → (P → Q) 1,5CP

Not
all theorems have a conditional form; thus, we can’t employ CP to prove all
theorems:

├
¬(P & ¬ P)

1(1)
P & ¬P A

(2) ¬(P & ¬
P) 1,1RAA

**Q**:
Prove ‘P v ¬P’ as a theorem. (Hint: Assume ‘¬(P v ¬P)’
for the purposes of RAA. If you are stuck, Lemmon, pp.52-3, provides a proof
and discussion.)
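If the system is adequate, the theorems met in this subsection should all be tautologies, and this is easily confirmed by truth tables - or, equivalently, by a few lines of Python (a sketch; the name `tautology` is mine):

```python
from itertools import product

# A sketch: the theorems above should all be tautologies.
IMP = lambda a, b: (not a) or b

def tautology(n_vars, formula):
    return all(formula(*vs) for vs in product([True, False], repeat=n_vars))

print(tautology(2, lambda p, q: IMP(q, IMP(p, q))))   # Q → (P → Q): True
print(tautology(1, lambda p: not (p and not p)))      # ¬(P & ¬P): True
print(tautology(1, lambda p: p or not p))             # P v ¬P: True
```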

__5.1.
Truth Tables and Completeness of PC__

PC
is complete (and consistent) iff all and only
tautologies are provable as theorems.

Lemmon
(chp.5) provides a meta-theoretic proof. For our purposes, a simpler way of showing
that PC is complete is to derive the properties of the basic truth tables. The
truth tables provide an adequate test for a formula being a tautology. Thus, if
our proof theory (rules of inference) can derive all of the basic truth tables,
then we would have shown that we can derive *every*
tautology. Can we show that we can *only*
derive tautologies as theorems? Yes, on the assumption that our proof theory is
consistent.

*Explanatory excursus*

We
derive a conditional for each row of each basic truth table. The antecedent of
each conditional is a conjunction: if ‘P/Q’ is true in a row, we have ‘P/Q’ as
a conjunct; if ‘P/Q’ is false in a row, we have ‘¬P/¬Q’ as a conjunct. The
consequent of each conditional is a formula of the type whose truth table we are
deriving. If the row renders the formula true, our consequent is the formula; if
the row renders the formula false, our consequent is the negation of the
formula.

(Lemmon
doesn’t contain the proofs to follow.)

*Derivation of the Truth Function of Negation*

(i) ├ ¬¬P → P

1 (1) ¬¬P A

1 (2) P 1DN

(3) ¬¬P → P 1,2CP

(ii) ├ P → ¬¬P

1 (1) P A

1 (2) ¬¬P 1DN

(3) P → ¬¬P 1,2CP

*Derivation of the
Truth Function of Conjunction*

(i) ├ P & Q → P & Q (First row: T T
T)

1(1)
P & Q A

(2) P & Q → P & Q 1,1CP

(ii) ├ P & ¬Q → ¬(P & Q) (Second row: T F F)

1 (1) P & ¬Q A

2 (2) P & Q A

2 (3) Q 2&E

1 (4) ¬Q 1&E

1,2 (5) Q & ¬Q 3,4&I

1 (6) ¬(P & Q) 2,5RAA

(7) P & ¬Q → ¬(P & Q) 1,6CP

The
derivations for the remaining two rows follow the same pattern as (ii)

(iii)
├ ¬P & Q → ¬(P & Q) (Third row: F F T)

(iv)
├ ¬P & ¬Q → ¬(P & Q) (Fourth row:
F F F)

**Q**:
Prove (iii) and (iv)

*Derivation of the
Truth Function of Disjunction*

(i) ├ P & Q → P v Q (First row: T T T)

1(1) P & Q A

1(2) P 1&E

1(3) P v Q 2vI

(4) P & Q → P v Q 1,3CP

The
derivations for the second and third rows follow the same pattern; the fourth
row is more tricky.

(ii) ├ P & ¬Q → P v Q (Second row: T T
F)

(iii)
├ ¬P & Q → P v Q
(Third row: F T T)

(iv)
├ ¬P & ¬Q → ¬(P v Q) (Fourth row: F F F)

**Q**:
Prove (ii)-(iv)

*Derivation of the
Truth Function of Material Implication*

(i) ├ (P & Q) → (P → Q) (First row: T T T)

1 (1) P & Q A

2 (2) P A

1 (3) Q 1&E

1,2 (4) P & Q 2,3&I

1,2 (5) Q 4&E

1 (6) P → Q 2,5CP

(7) (P & Q) → (P → Q) 1,6CP

(ii) ├ (P & ¬Q) → ¬(P → Q) (Second row: T F F)

1 (1) P & ¬Q A

2 (2) P → Q A

1 (3) P 1&E

1,2 (4) Q 2,3MPP

1 (5) ¬Q 1&E

1,2 (6) Q & ¬Q 4,5&I

1 (7) ¬(P → Q) 2,6RAA

(8) (P & ¬Q) → ¬(P → Q) 1,7CP

The remaining two derivations are

(iii) ├ (¬P & Q) → (P → Q) (Third row: F T T)

(iv) ├ (¬P & ¬Q) → (P → Q) (Fourth row: F T F)

**Q**:
Prove (iii)-(iv).

*Derivation of the
Truth Function of Material Equivalence*

(i) ├ (P & Q) → (P ↔ Q) (First row: T T T)

1 (1) P & Q A

2 (2) P A

1 (3) Q 1&E

1,2 (4) P & Q 2,3&I

1,2 (5) Q 4&E

1 (6) P → Q 2,5CP

7 (7) Q A

1 (8) P 1&E

1,7 (9) P & Q 7,8&I

1,7 (10) P 9&E

1 (11) Q → P 7,10CP

1 (12) (P → Q) & (Q → P) 6,11&I

1 (13) P ↔ Q 12↔df

(14) (P & Q) → (P ↔ Q) 1,13CP

(ii) ├ (P & ¬Q) → ¬(P ↔ Q) (Second row: T F F)

1 (1) P & ¬Q A

2 (2) P ↔ Q A

2 (3) (P → Q) & (Q → P) 2↔df

2 (4) P → Q 3&E

1 (5) P 1&E

1,2 (6) Q 4,5MPP

1 (7) ¬Q 1&E

1,2 (8) Q & ¬Q 6,7&I

1 (9) ¬(P ↔ Q) 2,8RAA

(10) (P & ¬Q) → ¬(P ↔ Q) 1,9CP

The remaining two derivations, following the same pattern, are

(iii) ├ (¬P & Q) → ¬(P ↔ Q) (Third row: F F T)

(iv) ├ (¬P & ¬Q) → (P ↔ Q) (Fourth row: F T F)

**Q**:
Prove (iii)-(iv).