First-order Predicate Logic (FOPL, or first-order predicate calculus, or just first-order logic) is an extension of propositional logic that allows quantification over variables. Whereas in propositional logic you can only talk about specifics (e.g. "Socrates is a man"), in predicate logic you can also talk more generally (e.g. "all men are mortal").
Propositional Logic
Propositional logic consists of a set of atomic propositional symbols (e.g. Socrates, Father, etc.), which are often referred to by letters p, q, r, etc. (Note that these letters aren't variables as such, as propositional logic has no means of binding variables.) These symbols are joined together by logical operators (or connectives) to form sentences. The basic logical operators are:
- Negation: ¬p ("it is not the case that p");
- Conjunction: p ∧ q ("p and q");
- Disjunction: p ∨ q ("p or q");
- Implication: p ⇒ q ("p implies q", or "q if p");
- Equivalence: p ⇔ q ("p if and only if q").
(Note that the symbols I've used above are not the only possible choices; e.g., some people write ~p instead of ¬p, p & q instead of p ∧ q, etc.)
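If it helps to see the connectives as truth functions, here is a minimal sketch in Python (the helper names implies and iff are my own invention, not standard notation) that prints the full truth table for the binary operators:

  def implies(p, q): return (not p) or q   # p => q is "not p, or q"
  def iff(p, q):     return p == q         # p <=> q: same truth value

  for p in (True, False):
      for q in (True, False):
          print(p, q, p and q, p or q, implies(p, q), iff(p, q))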
Implication can also be written with the arrow pointing the other way (in which case the implication is reversed). In addition to these basic elements, there are also the constant Boolean values, True and False. Logics are frequently extended with other features such as identity (=) and arithmetic. The language of propositional logic can then be defined by induction:
- The atomic symbols are members of the language;
- If p is a member of the language, then ¬p is a member of the language;
- If p and q are members, then p ∧ q, p ∨ q, p ⇒ q, and p ⇔ q are all members of the language;
- Nothing else is a member of the language.
This defines the set of well-formed formulas (wffs) of propositional logic. By repeated application of these rules we can build up complex formulas such as ¬(p ∧ q) ⇔ (¬p ∨ ¬q). (Note that we also allow parentheses for grouping.) This can be read as "the negation of (p and q) is equivalent to (the negation of p) or (the negation of q)". Note that the sentences we form are not necessarily true: we have only defined a syntax, and not a semantics. Some examples of sentences in propositional logic would be:
Man(Socrates). -- Socrates is a man
Weather(Sunny) ⇒ GoTo(Park). -- If it is sunny, then we go to the park
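To make the inductive definition concrete, here is a small sketch in Python (the nested-tuple representation and the name holds are invented for this example) that mirrors the grammar above and checks the De Morgan formula from earlier against every truth assignment:

  def holds(wff, env):
      # env assigns True/False to atomic symbols; the rest is recursion
      # over the structure given by the inductive definition.
      op = wff[0]
      if op == 'atom':    return env[wff[1]]
      if op == 'not':     return not holds(wff[1], env)
      if op == 'and':     return holds(wff[1], env) and holds(wff[2], env)
      if op == 'or':      return holds(wff[1], env) or holds(wff[2], env)
      if op == 'implies': return (not holds(wff[1], env)) or holds(wff[2], env)
      if op == 'iff':     return holds(wff[1], env) == holds(wff[2], env)
      raise ValueError(op)

  # ¬(p ∧ q) ⇔ (¬p ∨ ¬q) holds under every assignment to p and q.
  p, q = ('atom', 'p'), ('atom', 'q')
  demorgan = ('iff', ('not', ('and', p, q)),
                     ('or', ('not', p), ('not', q)))
  assert all(holds(demorgan, {'p': a, 'q': b})
             for a in (True, False) for b in (True, False))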
Predicate Logic
Propositional logic is useful, but it has one big drawback: we cannot talk generally, but only about specific examples (propositions). Predicate logic overcomes this by allowing us to quantify over variables. To do this, predicate logic introduces two new quantification symbols:
- Universal quantification: ∀x. ("for all x, it is the case that ...");
- Existential quantification: ∃x. ("there exists an x, such that ...").
This allows us to make sentences that are more general than those possible in propositional logic:
∀x.Man(x) ⇒ Mortal(x). -- All men are mortal.
∀x.∃y.Father(x,y). -- Every man has a father.
∀x.Man(x) ⇒ Blue(x). -- All men are blue.
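For intuition only: over a finite domain, ∀ corresponds to Python's all() and ∃ to any(). The tiny domain and predicate sets below are invented for illustration (a real domain of discourse need not be finite):

  domain = ['socrates', 'plato', 'zeus']
  man    = {'socrates', 'plato'}    # Man(x) holds for exactly these
  mortal = {'socrates', 'plato'}    # Mortal(x) likewise
  blue   = set()                    # Blue(x) holds for nothing

  # ∀x. Man(x) ⇒ Mortal(x) -- all men are mortal
  print(all((x not in man) or (x in mortal) for x in domain))   # True
  # ∀x. Man(x) ⇒ Blue(x) -- all men are blue
  print(all((x not in man) or (x in blue) for x in domain))     # False
  # ∃x. Man(x) -- there exists a man
  print(any(x in man for x in domain))                          # True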
Rather like parameters to a procedure, these quantifiers bind their variables and introduce a scope in which that variable is bound. However, they automatically range over suitable values rather than needing to be called with specific arguments. Any variables that appear in a formula without being bound by some quantifier are known as free variables.
Deduction and Semantics
So far we have only defined the syntax of predicate logic, and said nothing about how we use it and how we determine which sentences are true. There are two main methods of achieving this. Firstly, we can define a set of deduction rules that allow us to make inferences (i.e. derive new formulas from old ones) from our knowledge base (a collection of well-formed formulas). Secondly, we can define a semantics which describes the meaning or possible truth values of some or all of the formulas in our language.
Deduction rules are defined outside of the language itself, and show how to derive new well-formed formulas from previous ones. A new form of notation is introduced, but note that this is not part of the syntax of first-order logic:
- Proof: p ⊢ q ("p proves q").
which says that if we know p then we can deduce q, i.e., we can add q to our knowledge base.
Note that there is a similarity between implication and proof, and it can be tempting to read p ⇒ q and p ⊢ q as meaning the same thing. This is not correct, however. The first forms a sentence in predicate logic, which may or may not be true under some interpretation. The latter asserts the truth of q based on the evidence of p. Thus, while it makes sense to say ∀x.Man(x) ⇒ Blue(x) -- which is a false sentence, at least in our world -- it does not make sense to assert that Man(x) ⊢ Blue(x) -- that being a man is proof of being blue.
There are a number of common deduction rules used in predicate logic. Perhaps the most famous is modus ponens:
p, p ⇒ q ⊢ q
which says: if we know p and we know that p implies q, then we can deduce q, where know means roughly "have assumed or have previously proved".
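As a rough sketch of how modus ponens can be mechanised, here is a toy forward chainer in Python (the string encoding of formulas and the (p, q) rule representation are invented for this example; real theorem provers are far more sophisticated):

  facts = {'Man(Socrates)'}
  rules = [('Man(Socrates)', 'Mortal(Socrates)')]   # p ⇒ q stored as (p, q)

  # Repeatedly apply modus ponens until no new facts appear.
  changed = True
  while changed:
      changed = False
      for p, q in rules:
          if p in facts and q not in facts:
              facts.add(q)          # p, p ⇒ q ⊢ q
              changed = True

  print(facts)   # {'Man(Socrates)', 'Mortal(Socrates)'}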
The semantics of our language is given by two things: a domain of discourse, and an interpretation function. The domain of discourse (or simply domain) describes what our sentences are about: what values variables can range over. The interpretation function maps symbols in our language onto the domain. Constants (e.g. Socrates) are mapped onto objects in the domain (in this case, the person called Socrates). 0-ary predicates (e.g. Sunny) are mapped onto {True, False} (i.e., whether they are true or false in this interpretation). N-ary predicates are mapped onto sets of n-ary ordered tuples of elements of the domain (i.e., those tuples of members for which the predicate holds).
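One way to picture this is as a data structure. The sketch below (Python; all names invented for illustration) stores the domain as a set of objects and the interpretation function I as a mapping from symbols to objects, truth values, or sets of tuples, exactly as described above:

  domain = {'socrates_the_person'}

  I = {
      'Socrates': 'socrates_the_person',       # constant -> object in domain
      'Sunny':    False,                       # 0-ary predicate -> truth value
      'Man':      {('socrates_the_person',)},  # 1-ary predicate -> set of 1-tuples
      'Mortal':   {('socrates_the_person',)},
  }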
The interpretation of a formula, p, in our language is then given by this interpretation together with an assignment of values to any free variables in p. If M is an interpretation and s is a variable assignment on M, then we can write M,s ⊨ p to mean that M satisfies p under the assignment s [1]. In other words, p is true under interpretation M and assignment s. Our interpretation function (I) assigns denotations to constants in the language, while s assigns denotations to free variables. We can call the combination of them D. Thus, D(c) is I(c) for constants, and D(v) is s(v) for free variables.
Once we have the denotations of elements of the language, we can work out which formulas are satisfied (i.e., are true) in our system, using some simple rules:
- M,s ⊨ p = q if and only if D(p) is the same as D(q);
- M,s ⊨ P (i.e., a 0-ary predicate) if and only if I(P) is True;
- M,s ⊨ P(p,q,...) (i.e., an n-ary predicate) if and only if the tuple <D(p),D(q),...> is in I(P).
In informal terms, these rules mean:
- Two terms are identical if and only if their denotations (i.e. what they stand for) are the same;
- A predicate with no arguments is satisfied if it is true in the interpretation;
- A predicate with arguments is satisfied if the tuple of its arguments' denotations is in the interpretation of the predicate (roughly).
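These three rules translate directly into code. Here is a sketch in Python (the function names D, sat_eq, and sat_pred, and the sample interpretation, are my own; the convention that a term is looked up in I first and in s otherwise is an assumption of this sketch):

  def D(term, I, s):
      # Denotation: constants via I, free variables via s.
      return I[term] if term in I else s[term]

  def sat_eq(t1, t2, I, s):            # M,s ⊨ t1 = t2
      return D(t1, I, s) == D(t2, I, s)

  def sat_pred(P, args, I, s):         # M,s ⊨ P or P(t1,...,tn)
      if not args:
          return I[P] is True          # 0-ary predicate
      return tuple(D(a, I, s) for a in args) in I[P]

  I = {'Socrates': 'socrates', 'Sunny': True, 'Man': {('socrates',)}}
  s = {'x': 'socrates'}
  print(sat_pred('Man', ['x'], I, s))   # True: <D(x)> is in I(Man)
  print(sat_pred('Sunny', [], I, s))    # True: I(Sunny) is True
  print(sat_eq('Socrates', 'x', I, s))  # True: same denotation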
Once we have interpreted what the atomic formulas of our language mean, we can move on to the compound formulas, using the following rules:
- M,s ⊨ ¬p if and only if it is not the case that M,s ⊨ p;
- M,s ⊨ p ∧ q if and only if both M,s ⊨ p and M,s ⊨ q;
- M,s ⊨ p ∨ q if and only if either M,s ⊨ p or M,s ⊨ q;
- M,s ⊨ p ⇒ q if and only if either it is not the case that M,s ⊨ p or it is the case that M,s ⊨ q;
- M,s ⊨ ∀v.p if and only if p is satisfied no matter what the assignment to v;
- M,s ⊨ ∃v.p if and only if p is satisfied for some assignment to v.
We leave out the rule for equivalence (⇔) by noting that p ⇔ q is defined as (p ⇒ q) ∧ (q ⇒ p), and so can be derived from the other rules.
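Over a finite domain, these rules translate almost line for line into a recursive evaluator. The sketch below (Python; the nested-tuple encoding and the name sat are my own, and it only works for finite domains) implements them; equivalence is omitted, as in the text, since it expands into the other connectives:

  def sat(M, s, f):
      # M = (domain, I); s is the variable assignment; f a formula tuple.
      dom, I = M
      op = f[0]
      if op == 'pred':                       # P or P(t1,...,tn)
          name, args = f[1], f[2]
          if not args:
              return I[name] is True
          vals = tuple(s[a] if a in s else I[a] for a in args)
          return vals in I[name]
      if op == 'not':     return not sat(M, s, f[1])
      if op == 'and':     return sat(M, s, f[1]) and sat(M, s, f[2])
      if op == 'or':      return sat(M, s, f[1]) or sat(M, s, f[2])
      if op == 'implies': return (not sat(M, s, f[1])) or sat(M, s, f[2])
      if op == 'forall':  # satisfied no matter what the assignment to v
          return all(sat(M, {**s, f[1]: d}, f[2]) for d in dom)
      if op == 'exists':  # satisfied for some assignment to v
          return any(sat(M, {**s, f[1]: d}, f[2]) for d in dom)
      raise ValueError(op)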
As an example, if we consider our domain of discourse to be men, and in particular the man called Socrates, then we can define our interpretation function as follows:
I(Socrates) = <the man called Socrates>.
I(Man) = { <the man called Socrates> }.
I(Mortal) = { <the man called Socrates> }.
Thus, our domain consists of a single man, Socrates, who is mortal. We can now interpret sentences such as:
∀x.Man(x) ⇒ Mortal(x).
by applying the rules of interpretation. In order for the universal quantification to hold, the formula Man(x) ⇒ Mortal(x) must hold for every possible instantiation of x. Luckily, we only have one possible instantiation, Socrates, so all we need to prove is that Man(Socrates) ⇒ Mortal(Socrates) is satisfied. In order to prove this we need to prove either that Man(Socrates) is not satisfied, or that Mortal(Socrates) is. (The interpretation of implication is that if the antecedent holds then the consequent must also hold. However, if the antecedent doesn't hold then we don't make any claims about the consequent: it can be true or false.) We know from our interpretation function that Man(Socrates) is satisfied, and also that Mortal(Socrates) is satisfied. Thus our formula is also satisfied.
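Checked mechanically (a self-contained check here, though the same model could be fed to the sat evaluator sketched above; the lowercase object name is my own), the argument looks like this:

  domain = {'socrates'}
  I = {'Man':    {('socrates',)},
       'Mortal': {('socrates',)}}

  # ∀x. Man(x) ⇒ Mortal(x): check every instantiation of x (here just one).
  print(all(((d,) not in I['Man']) or ((d,) in I['Mortal'])
            for d in domain))   # True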
That's just a very brief overview of the basics of first-order logic. I (NEM) recommend the Stanford Encyclopedia of Philosophy for a more detailed discussion [2]. (In fact, for most discussions of logic it tends to be much more detailed than Wikipedia.)
Second- and Higher-Order Logics
First-order predicate logic allows variables to range over atomic symbols in the domain. It doesn't allow variables to be bound to predicate symbols, however. A second-order logic (such as second-order predicate logic) does allow this, and you can write sentences such as:
∀p.p(Socrates).
which states that all predicates apply to Socrates. A higher-order logic allows predicates to accept arguments which are themselves predicates (see also Functional Programming, which encourages the use of higher-order functions). Second-order logic cannot be reduced to first-order logic. However, this extra power is not often needed, and first-order logic is much more common as it is easier to reason about.
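The Functional Programming connection can be made concrete: in a language with first-class functions, quantifying over predicates looks like quantifying over functions. The Python sketch below is only an analogy with an invented, finite collection of predicates; genuine second-order quantification ranges over all predicates on the domain, which no program can enumerate:

  # ∀p.p(Socrates), approximated over a hand-picked list of predicates.
  predicates = [lambda x: x == 'socrates',        # "is Socrates"
                lambda x: isinstance(x, str)]     # "is a symbol"
  print(all(p('socrates') for p in predicates))   # True for this collection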
See Also
- Playing predicate logic