Propositional formula
In propositional logic, a propositional formula is a type of syntactic formula which is well formed and has a truth value. If the values of all variables in a propositional formula are given, it determines a unique truth value. A propositional formula may also be called a propositional expression, a sentence, or a sentential formula.
A propositional formula is constructed from simple propositions, such as "x is greater than three", or propositional variables such as P and Q, using connectives such as NOT, AND, OR, and IMPLIES; for example:
(x = 2 AND y = 4) IMPLIES x + y = 6.
In mathematics, a propositional formula is often more briefly referred to as a "proposition", but, more precisely, a propositional formula is not a proposition but a formal expression that denotes a proposition, a formal object under discussion, just like an expression such as "x + y" is not a value, but denotes a value. In some contexts, maintaining the distinction may be of importance.
Propositions
For the purposes of the propositional calculus, propositions (utterances, sentences, assertions) are considered to be either simple or compound. Compound propositions are considered to be linked by sentential connectives, some of the most common of which are AND, OR, "IF ... THEN ...", "NEITHER ... NOR ...", "... IS EQUIVALENT TO ..." . The linking semicolon " ; " and the connective BUT are considered to be expressions of AND. A sequence of discrete sentences is considered to be linked by ANDs, and formal analysis applies a recursive "parenthesis rule" with respect to sequences of simple propositions (see more below about well-formed formulas).
- For example: The assertion: "This cow is blue. That horse is orange but this horse here is purple." is actually a compound proposition linked by ANDs: " ( ("This cow is blue" AND "that horse is orange") AND "this horse here is purple" ) ".
Simple propositions are declarative in nature, that is, they make assertions about the condition or nature of a particular object of sensation, e.g. "This cow is blue", "There's a coyote!" ("That coyote IS there, behind the rocks."). Thus the simple "primitive" assertions must be about specific objects or specific states of mind. Each must have at least a subject (an immediate object of thought or observation), a verb (in the active voice and present tense preferred), and perhaps an adjective or adverb. "Dog!" probably implies "I see a dog" but should be rejected as too ambiguous.
- Example: "That purple dog is running", "This cow is blue", "Switch M31 is closed", "This cap is off", "Tomorrow is Friday".
For the purposes of the propositional calculus a compound proposition can usually be reworded into a series of simple sentences, although the result will probably sound stilted.
Relationship between propositional and predicate formulas
The predicate calculus goes a step further than the propositional calculus to an "analysis of the inner structure of propositions". It breaks a simple sentence down into two parts: (i) its subject (the object (singular or plural) of discourse) and (ii) a predicate (a verb or possibly verb-clause that asserts a quality or attribute of the object(s)). The predicate calculus then generalizes the "subject|predicate" form (where | symbolizes concatenation (stringing together) of symbols) into a form with the following blank-subject structure " ___|predicate", and the predicate in turn generalized to all things with that property.
- Example: "This blue pig has wings" becomes two sentences in the propositional calculus: "This pig has wings" AND "This pig is blue." The first sentence breaks into "pig" as subject, and "has wings" as the predicate. Thus it asserts that object "pig" is a member of the class (set, collection) of "winged things". The second sentence asserts that object "pig" has an attribute "blue" and thus is a member of the class of "blue things". One might choose to write the two sentences connected with AND as:
- p|W AND p|B
The generalization of "pig" to a (potential) member of two classes "winged things" and "blue things" means that it has a truth-relationship with both of these classes. In other words, given a domain of discourse "winged things", either we find p to be a member of this domain or not. Thus we have a relationship W (wingedness) between p (pig) and { T, F }, W(p) evaluates to { T, F }. Likewise for B (blueness) and p (pig) and { T, F }: B(p) evaluates to { T, F }. So we now can analyze the connected assertions "B(p) AND W(p)" for its overall truth-value, i.e.:
- ( B(p) AND W(p) ) evaluates to { T, F }
In particular, simple sentences that employ notions of "all", "some", "a few", "one of", etc. are treated by the predicate calculus. Along with the new function symbolism "F(x)" two new symbols are introduced: ∀ (For all), and ∃ (There exists ..., At least one of ... exists, etc.). The predicate calculus can handle the following statement:
- "All blue pigs have wings but some winged pigs are not blue".
Identity
Tarski asserts that the notion of IDENTITY (as distinguished from LOGICAL EQUIVALENCE) lies outside the propositional calculus; however, he notes that if a logic is to be of use for mathematics and the sciences it must contain a "theory" of IDENTITY. Some authors refer to "predicate logic with identity" to emphasize this extension. See more about this below.
An algebra of propositions, the propositional calculus
An algebra (and there are many different ones), loosely defined, is a method by which a collection of symbols called variables together with some other symbols such as parentheses and some sub-set of symbols such as *, +, ~, &, V, =, ≡, ⋀, ¬ are manipulated within a system of rules. These symbols, and well-formed strings of them, are said to represent objects, but in a specific algebraic system these objects do not have meanings. Thus work inside the algebra becomes an exercise in obeying certain laws (rules) of the algebra's syntax (symbol-formation) rather than in semantics (meaning) of the symbols. The meanings are to be found outside the algebra.
For a well-formed sequence of symbols in the algebra (a formula) to have some usefulness outside the algebra, the symbols are assigned meanings and eventually the variables are assigned values; then by a series of rules the formula is evaluated.
When the values are restricted to just two and applied to the notion of simple sentences (e.g. spoken utterances or written assertions) linked by propositional connectives, this whole algebraic system of symbols, rules, and evaluation-methods is usually called the propositional calculus or the sentential calculus.
While some of the familiar rules of arithmetic algebra continue to hold in the algebra of propositions (e.g. the commutative and associative laws for AND and OR), some do not (e.g. the distributive laws for AND, OR and NOT).
Usefulness of propositional formulas
Analysis: In deductive reasoning, philosophers, rhetoricians and mathematicians reduce arguments to formulas and then study them (usually with truth tables) for correctness (soundness). For example: Is the following argument sound?
- "Given that consciousness is sufficient for an artificial intelligenceArtificial intelligenceArtificial intelligence is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents" where an intelligent agent is a system that perceives its environment and takes actions that maximize its...
and only conscious entities can pass the Turing testTuring testThe Turing test is a test of a machine's ability to exhibit intelligent behaviour. In Turing's original illustrative example, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All...
, before we can conclude that a robot is an artificial intelligence the robot must pass the Turing test."
Engineers analyze the logic circuits they have designed using synthesis techniques and then apply various reduction and minimization techniques to simplify their designs.
Synthesis: Engineers in particular synthesize propositional formulas (that eventually end up as circuits of symbols) from truth tables. For example, one might write down a truth table for how binary addition should behave given the addition of variables "b" and "a" and "carry_in" "ci", and the results "carry_out" "co" and "sum" Σ:
- Example: in row 5, ( (b+a) + ci ) = ( (1+0) + 1 ) = the number "2"; written as a binary number this is 10 (base 2), where "co"=1 and Σ=0 as shown in the right-most columns.
| row | b | a | ci | (b+a)+ci | co | Σ |
|-----|---|---|----|----------|----|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 1 | 1 | 0 | 1 |
| 2 | 0 | 1 | 0 | 1 | 0 | 1 |
| 3 | 0 | 1 | 1 | 2 | 1 | 0 |
| 4 | 1 | 0 | 0 | 1 | 0 | 1 |
| 5 | 1 | 0 | 1 | 2 | 1 | 0 |
| 6 | 1 | 1 | 0 | 2 | 1 | 0 |
| 7 | 1 | 1 | 1 | 3 | 1 | 1 |
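This synthesis step can be sketched in a few lines of Python (an illustrative sketch, not part of the original presentation; the function names are arbitrary, and the formulas used are the usual carry and sum expressions):

```python
from itertools import product

# Propositional formulas synthesized from the truth table above:
#   co = (b AND a) OR (ci AND (b XOR a))   -- carry out
#   Σ  = b XOR a XOR ci                    -- sum bit
def carry_out(b, a, ci):
    return (b and a) or (ci and (b != a))

def sum_bit(b, a, ci):
    return (b != a) != ci   # chained XOR

# Check the formulas against ordinary binary addition for all 8 rows.
for b, a, ci in product([False, True], repeat=3):
    total = int(b) + int(a) + int(ci)          # (b + a) + ci as a number
    assert int(carry_out(b, a, ci)) == total // 2
    assert int(sum_bit(b, a, ci)) == total % 2
print("co and Σ formulas agree with the truth table")
```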
Propositional variables
The simplest type of propositional formula is a propositional variable. Propositions that are simple (atomic), symbolic expressions are often denoted by variables named a, b, or A, B, etc. A propositional variable is intended to represent an atomic proposition (assertion), such as "It is Saturday" = "a" (here the symbol = means " ... is assigned the variable named ...") or "I only go to the movies on Monday" = "b".
Truth-value assignments, formula evaluations
Evaluation of a propositional formula begins with assignment of a truth value to each variable. Because each variable represents a simple sentence, the truth values are being applied to the "truth" or "falsity" of these simple sentences.
Truth values in rhetoric, philosophy and mathematics: The truth values are only two: { TRUTH "T", FALSITY "F" }. An empiricist puts all propositions into two broad classes: analytic -- true no matter what (e.g. tautology), and synthetic -- derived from experience and thereby susceptible to confirmation by third parties (the verification theory of meaning). Empiricists hold that, in general, to arrive at the truth-value of a synthetic proposition, meanings (pattern-matching templates) must first be applied to the words, and then these meaning-templates must be matched against whatever it is that is being asserted. For example, consider my utterance "That cow is blue!" Is this statement a TRUTH? Truly I said it. And maybe I am seeing a blue cow—unless I am lying, my statement is a TRUTH relative to the object of my (perhaps flawed) perception. But is the blue cow "really there"? What do you see when you look out the same window? In order to proceed with a verification, you will need a prior notion (a template) of both "cow" and "blue", and an ability to match the templates against the object of sensation (if indeed there is one).
Truth values in engineering: Engineers try to avoid notions of truth and falsity that bedevil philosophers, but in the final analysis engineers must trust their measuring instruments. In their quest for robustness, engineers prefer to pull known objects from a small library—objects that have well-defined, predictable behaviors even in large combinations (hence their name for the propositional calculus: "combinatorial logic"). The fewest behaviors of a single object are two (e.g. { OFF, ON }, { open, shut }, { UP, DOWN }, etc.), and these are put in correspondence with { 0, 1 }. Such elements are called digital; those with a continuous range of behaviors are called analog. Whenever decisions must be made in an analog system, quite often an engineer will convert an analog behavior (the door is 45.32146% UP) to digital (e.g. DOWN=0) by use of a comparator.
Thus an assignment of meaning of the variables and the two value-symbols { 0, 1 } comes from "outside" the formula that represents the behavior of the (usually) compound object. An example is a garage door with two "limit switches", one for UP labelled SW_U and one for DOWN labelled SW_D, and whatever else is in the door's circuitry. Inspection of the circuit (either the diagram or the actual objects themselves—door, switches, wires, circuit board, etc.) might reveal that, on the circuit board "node 22" goes to +0 volts when the contacts of switch "SW_D" are mechanically in contact ("closed") and the door is in the "down" position (95% down), and "node 29" goes to +0 volts when the door is 95% UP and the contacts of switch SW_U are in mechanical contact ("closed"). The engineer must define the meanings of these voltages and all possible combinations (all 4 of them), including the "bad" ones (e.g. both nodes 22 and 29 at 0 volts, meaning that the door is open and closed at the same time). The circuit mindlessly responds to whatever voltages it experiences without any awareness of TRUTH or FALSEHOOD, RIGHT or WRONG, SAFE or DANGEROUS.
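To make the "assignment of meaning" concrete, the following sketch (Python, purely illustrative; only the node numbers and the 95%-travel readings come from the example above, everything else is assumed) enumerates the four combinations of the two limit-switch nodes and attaches an engineering meaning to each, including the "bad" one:

```python
from itertools import product

# Node states: True  = node pulled to 0 V (switch contacts closed)
#              False = node not at 0 V   (switch contacts open)
def door_state(node22_down, node29_up):
    """Assign a meaning to each combination of the two limit-switch nodes."""
    if node22_down and node29_up:
        return "FAULT: door reported both DOWN and UP (bad combination)"
    if node22_down:
        return "door is DOWN (at least 95% down)"
    if node29_up:
        return "door is UP (at least 95% up)"
    return "door is somewhere in between (travelling)"

# Enumerate all 4 combinations, exactly as the engineer must.
for n22, n29 in product([False, True], repeat=2):
    print(f"node22={int(n22)} node29={int(n29)} -> {door_state(n22, n29)}")
```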
Propositional connectives
Arbitrary propositional formulas are built from propositional variables and other propositional formulas using propositional connectives. Examples of connectives include:
- The unary negation connective ~ (NOT). If a is a formula, then ~a is a formula.
- The classical binary connectives & (AND), V (OR), → (IMPLIES), and ↔ (IFF). Thus, for example, if a and b are formulas, so is (a → b).
- Other binary connectives, such as NAND, NOR, and XOR
- The ternary connective IF ... THEN ... ELSE ...
- Constant 0-ary connectives ⊤ and ⊥ (alternately, constants { T, F }, { 1, 0 } etc. )
- The "theory-extension" connective EQUALS (alternately, IDENTITY, or the sign " = " as distinguished from the "logical connective" )
Connectives of rhetoric, philosophy and mathematics
The following are the connectives common to rhetoric, philosophy and mathematics, together with their truth tables. The symbols used will vary from author to author and between fields of endeavor. In general the abbreviations "T" and "F" stand for the evaluations TRUTH and FALSITY applied to the variables in the propositional formula (e.g. the assertion "That cow is blue" will have the truth-value "T" for Truth or "F" for Falsity, as the case may be).
The connectives go by a number of different word-usages, e.g. "a IMPLIES b" is also said "IF a THEN b". Some of these are shown in the table.
Alternate readings collected in the table: (b → a) is read "b IMPLIES a", "IF b THEN a", "b only if a", "b IS SUFFICIENT FOR a", "a IS NECESSARY FOR b"; (b ↔ a) is read "b IF AND ONLY IF a", "b IFF a", "b IS NECESSARY AND SUFFICIENT FOR a"; (b V a) is the inclusive OR.

| b | a | negation NOT b ~(b) | negation NOT a ~(a) | conjunction b AND a (b & a) | disjunction b OR a (b V a) | implication b IMPLIES a (b → a) | biconditional b IFF a (b ↔ a) *** | tautology (f = formula) | NEITHER b NOR a (b NOR a) | b stroke a (NAND) | exclusive OR |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F | F | T | T | F | F | T | T | T | T | T | F |
| F | T | T | F | F | T | T | F | T | F | T | T |
| T | F | F | T | F | T | F | F | T | F | T | T |
| T | T | F | F | T | T | T | T | T | F | F | F |
Engineering connectives
In general, the engineering connectives are just the same as the mathematics connectives except that they tend to evaluate with "1" = "T" and "0" = "F". This is done for the purposes of analysis/minimization and synthesis of formulas by use of the notion of minterms and Karnaugh maps (see below). Engineers also use the words logical product from Boole's notion (a*a = a) and logical sum from Jevons' notion (a+a = a).
| row (b*2^1 + a*2^0) | b | a | NOT ~(b) | NOT ~(a) | AND, logical product (b & a) | OR, logical sum (b V a) | NAND ~(b & a) | NOR ~(b V a) | XOR, half-adder (no carry) (b ⊕ a) |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 |
| 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 |
| 2 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 |
| 3 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 |
CASE connective: IF ... THEN ... ELSE ...
The IF ... THEN ... ELSE ... connective appears as the simplest form of CASE operator of recursion theory and computation theory, and it is the connective responsible for conditional GOTOs (jumps, branches). From this one connective all other connectives can be constructed (see more below). Although "IF c THEN b ELSE a" sounds like an implication, it is, in its most reduced form, a switch that makes a decision and offers as outcome only one of two alternatives, "a" or "b" (hence the name switch statement in the C programming language).
The following three propositions are equivalent (as indicated by the logical equivalence sign ≡ ):
- (1) ( IF 'counter is zero' THEN 'go to instruction b' ELSE 'go to instruction a' ) ≡
- (2) ( (c → b) & (~c → a) ) ≡ " ( IF 'counter is zero' THEN 'go to instruction b' ) AND ( IF 'it is NOT the case that counter is zero' THEN 'go to instruction a' ) " ≡
- (3) ( (c & b) V (~c & a) ) = " ( 'counter is zero' AND 'go to instruction b' ) OR ( 'it is NOT the case that counter is zero' AND 'go to instruction a' ) "
Thus IF ... THEN ... ELSE—unlike implication—does not evaluate to an ambiguous "TRUTH" when the first proposition is false, i.e. c = F in (c → b). For example, most people would reject the following compound proposition as a nonsensical non sequitur because the second sentence is not connected in meaning to the first.
- Example: The proposition " IF 'Winston Churchill was Chinese' THEN 'The sun rises in the east' " evaluates as a TRUTH given that 'Winston Churchill was Chinese' is a FALSEHOOD and 'The sun rises in the east' evaluates as a TRUTH.
In recognition of this problem, the sign → of formal implication in the propositional calculus is called material implication to distinguish it from the everyday, intuitive implication.
The use of the IF ... THEN ... ELSE construction avoids controversy because it offers a completely deterministic choice between two stated alternatives; it offers two "objects" (the two alternatives b and a), and it selects between them exhaustively and unambiguously. In the truth table below, d1 is the formula ( (IF c THEN b) AND (IF NOT-c THEN a) ). Its fully reduced form d2 is the formula ( (c AND b) OR (NOT-c AND a) ). The two formulas are equivalent, as shown by the columns "=d1" and "=d2". Electrical engineers call the fully reduced formula the AND-OR-SELECT operator. The CASE (or SWITCH) operator is an extension of the same idea to n possible, but mutually exclusive, outcomes. Electrical engineers call the CASE operator a multiplexer.
| row | c | b | a | (c → b) | ~(c) | (~(c) → a) | =d1 | (c & b) | (~(c) & a) | =d2 |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 |
| 2 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
| 3 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 |
| 4 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| 5 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| 6 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 |
| 7 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 |
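The equivalence shown in the table can also be checked exhaustively; a brief sketch (Python, illustrative only) follows:

```python
from itertools import product

def d1(c, b, a):
    # ( (IF c THEN b) AND (IF NOT-c THEN a) ), with IF x THEN y read as (x -> y)
    return ((not c) or b) and (c or a)

def d2(c, b, a):
    # The fully reduced AND-OR-SELECT form: ( (c AND b) OR (NOT-c AND a) )
    return (c and b) or ((not c) and a)

def if_then_else(c, b, a):
    # The CASE/multiplexer reading: select b when c is true, else a
    return b if c else a

# All three agree on every one of the 8 rows of the truth table.
for c, b, a in product([False, True], repeat=3):
    assert d1(c, b, a) == d2(c, b, a) == if_then_else(c, b, a)
print("IF c THEN b ELSE a, d1 and d2 agree for all assignments")
```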
IDENTITY and evaluation
The first table of this section stars *** the entry logical equivalence to note the fact that "logical equivalence" is not the same thing as "identity". For example, most would agree that the assertion "That cow is blue" is identical to the assertion "That cow is blue". On the other hand, logical equivalence sometimes appears in speech, as in this example: " 'The sun is shining' means 'I'm biking' ". Translated into a propositional formula the words become: "IF 'the sun is shining' THEN 'I'm biking', AND IF 'I'm biking' THEN 'the sun is shining'":
- "IF 's' THEN 'b' AND IF 'b' THEN 's' " is written as ((s → b) & (b → s)) or in an abbreviated form as (s ↔ b). As the rightmost symbol string is a definition for a new symbol in terms of the symbols on the left, the use of the IDENTITY sign = is appropriate:
- ((s → b) & (b → s)) = (s ↔ b)
Different authors use different signs for logical equivalence: ↔ (e.g. Suppes, Goodstein, Hamilton), ≡ (e.g. Robbin), ⇔ (e.g. Bender and Williamson). Typically identity is written as the equals sign =. One exception to this rule is found in Principia Mathematica. For more about the philosophy of the notion of IDENTITY see Leibniz's law.
As noted above, Tarski considers IDENTITY to lie outside the propositional calculus, but he asserts that without the notion, "logic" is insufficient for mathematics and the deductive sciences. In fact the sign " = " comes into the propositional calculus when a formula is to be evaluated.
In some systems there are no truth tables, but rather just formal axioms (e.g. strings of symbols from a set { ~, →, (, ), variables p1, p2, p3, ... }) and formula-formation rules (rules about how to make more symbol strings from previous strings by use of e.g. substitution and modus ponens). The result of such a calculus will be another formula (i.e. a well-formed symbol string). Eventually, however, if one wants to use the calculus to study notions of validity and truth, one must add axioms that define the behavior of the symbols called "the truth values" {T, F} (or {1, 0}, etc.) relative to the other symbols.
For example, Hamilton uses two symbols = and ≠ when he defines the notion of a valuation v of any wffs A and B in his "formal statement calculus" L. A valuation v is a function from the wffs of his system L to the range (output) { T, F }, given that each variable p1, p2, p3 in a wff is assigned an arbitrary truth value { T, F }.
- (i) v(A) ≠ v(~A)
- (ii) v(A → B) = F if and only if v(A) = T and v(B) = F
The two definitions (i) and (ii) define the equivalent of the truth tables for the ~ (NOT) and → (IMPLICATION) connectives of his system. The first one derives F ≠ T and T ≠ F, in other words "v(A) does not have the same value as v(~A)". Definition (ii) specifies the third row in the truth table, and the other three rows then come from an application of definition (i). In particular, (ii) assigns the value F (or a meaning of "F") to the entire expression. The definitions also serve as formation rules that allow substitution of a value previously derived into a formula:
| v(A) | v(B) | v(A → B) |
|------|------|----------|
| F | F | T |
| F | T | T |
| T | F | F |
| T | T | T |
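As a sketch of how such a valuation can be computed in practice (Python; the representation of wffs as nested tuples is an assumption of the sketch, not Hamilton's notation):

```python
# A wff is a variable name ("p1"), a pair ("~", A), or a triple ("->", A, B).
def v(wff, assignment):
    """Valuation of a wff, given truth values for its variables."""
    if isinstance(wff, str):                 # a propositional variable
        return assignment[wff]
    if wff[0] == "~":                        # definition (i): v(~A) differs from v(A)
        return not v(wff[1], assignment)
    if wff[0] == "->":                       # definition (ii): F exactly when T -> F
        A, B = wff[1], wff[2]
        return not (v(A, assignment) and not v(B, assignment))
    raise ValueError("not a wff")

# Example: v(p1 -> ~p2) under p1 = T, p2 = T evaluates to F.
print(v(("->", "p1", ("~", "p2")), {"p1": True, "p2": True}))   # False
```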
Some formal systems specify these valuation axioms at the outset in the form of certain formulas such as the law of contradiction or the laws of identity and nullity. The choice of which ones to use, together with laws such as commutation and distribution, is up to the system's designer as long as the set of axioms is complete (i.e. sufficient to form and to evaluate any well-formed formula created in the system).
More complex formulas
As shown above, the CASE (IF c THEN b ELSE a) connective is constructed either from the 2-argument connectives IF ... THEN ... and AND, or from OR and AND and the 1-argument NOT. Connectives such as the n-argument AND (a & b & c & ... & n) and OR (a V b V c V ... V n) are constructed from strings of two-argument AND and OR and written in abbreviated form without the parentheses. These, and other connectives as well, can then be used as building blocks for yet further connectives. Rhetoricians, philosophers, and mathematicians use truth tables and the various theorems to analyze and simplify their formulas.
Electrical engineers use drawn symbols and connect them with lines that stand for the mathematical acts of substitution and replacement. They then verify their drawings with truth tables and simplify the expressions as shown below by use of Karnaugh maps or the theorems. In this way engineers have created a host of "combinatorial logic" (i.e. connectives without feedback) such as "decoders", "encoders", "multifunction gates", "majority logic", "binary adders", "arithmetic logic units", etc.
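One synthesis route alluded to above is the sum-of-minterms expansion: for every row of a truth table whose output is 1, AND together the corresponding literals, then OR the results. A hedged sketch (Python; the helper name is arbitrary):

```python
from itertools import product

def minterm_formula(variables, truth_fn):
    """Return a sum-of-minterms string for an arbitrary truth function."""
    minterms = []
    for values in product([0, 1], repeat=len(variables)):
        if truth_fn(*values):                          # keep rows that output 1
            literals = [v if bit else f"~{v}" for v, bit in zip(variables, values)]
            minterms.append("(" + " & ".join(literals) + ")")
    return " V ".join(minterms) if minterms else "0"

# Example: the XOR (half-adder sum) of two variables b and a.
print(minterm_formula(["b", "a"], lambda b, a: b != a))
# -> (~b & a) V (b & ~a)
```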
Definitions
A definition creates a new symbol and its behavior, often for the purposes of abbreviation. Once the definition is presented, either form of the equivalent symbol or formula can be used. The following symbolism =Df follows the convention of Reichenbach. Some examples of convenient definitions drawn from the symbol set { ~, &, (, ) } and variables follow. Each definition produces a logically equivalent formula that can be used for substitution or replacement.
- Definition of a new variable: (c & d) =Df s
- OR: ~(~a & ~b) =Df (a V b)
- IMPLICATION: (~a V b) =Df (a → b)
- XOR: (~a & b) V (a & ~b) =Df (a ⊕ b)
- LOGICAL EQUIVALENCE: ( (a → b) & (b → a) ) =Df ( a ≡ b )
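Each abbreviation can be checked to be truth-functionally equivalent to the formula it abbreviates; a small illustrative sketch (Python) follows:

```python
from itertools import product

# Left-hand defining formulas built from ~, & and V; right-hand intended meanings.
checks = {
    "OR":          (lambda a, b: not ((not a) and (not b)),          lambda a, b: a or b),
    "IMPLICATION": (lambda a, b: (not a) or b,                        lambda a, b: not (a and (not b))),
    "XOR":         (lambda a, b: ((not a) and b) or (a and (not b)),  lambda a, b: a != b),
    "EQUIVALENCE": (lambda a, b: ((not a) or b) and ((not b) or a),   lambda a, b: a == b),
}

for name, (lhs, rhs) in checks.items():
    assert all(lhs(a, b) == rhs(a, b) for a, b in product([False, True], repeat=2))
    print(f"{name}: the defining formula matches its intended truth function")
```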
Axiom and definition schemas
The definitions above for OR, IMPLICATION, XOR, and logical equivalence are actually schemas (or "schemata"), that is, they are models (demonstrations, examples) for a general formula format but shown (for illustrative purposes) with specific letters a, b, c for the variables, whereas any variable letters can go in their places as long as the letter substitutions follow the rule of substitution below.
- Example: In the definition (~a V b) =Df (a → b), other variable-symbols such as "SW2" and "CON1" might be used, i.e. formally:
- a =Df SW2, b =Df CON1, so we would have as an instance of the definition schema (~SW2 V CON1) =Df (SW2 → CON1)
Substitution versus replacement
Substitution: The variable or sub-formula to be substituted with another variable, constant, or sub-formula must be replaced in all instances throughout the overall formula.
- Example: (c & d) V (p & ~(c & ~d)), but (q1 & ~q2) ≡ d. Now wherever variable "d" occurs, substitute (q1 & ~q2):
- (c & (q1 & ~q2)) V (p & ~(c & ~(q1 & ~q2)))
Replacement: (i) the formula to be replaced must be within a tautology, i.e. logically equivalent (connected by ≡ or ↔) to the formula that replaces it, and (ii) unlike substitution it is permissible for the replacement to occur only in one place (i.e. for one formula).
- Example: Use this set of formula schemas/equivalences: 1: ( (a V 0) ≡ a ). 2: ( (a & ~a) ≡ 0 ). 3: ( (~a V b) =Df (a → b) ). 6. ( ~(~a) ≡ a )
- start with "a": a
- Use 1 to replace "a" with (a V 0): (a V 0)
- Use the notion of "schema" to substitute b for a in 2: ( (a & ~a) ≡ 0 )
- Use 2 to replace 0 with (b & ~b): ( a V (b & ~b) )
- (see below for how to distribute "a V" over (b & ~b), etc.)
Inductive definition
The classical presentation of propositional logic (see Enderton 2002) uses the connectives ~, &, V, →, and ↔. The set of formulas over a given set of propositional variables is inductively defined to be the smallest set of expressions such that:
- Each propositional variable in the set is a formula,
- (~A) is a formula whenever A is, and
- (A & B), (A V B), (A → B), and (A ↔ B) are formulas whenever A and B are formulas.
This inductive definition can be easily extended to cover additional connectives.
The inductive definition can also be rephrased in terms of a closure operation (Enderton 2002). Let V denote a set of propositional variables and let XV denote the set of all strings from an alphabet including symbols in V, left and right parentheses, and all the logical connectives under consideration. Each logical connective corresponds to a formula-building operation, a function taking strings in XV to strings in XV:
- Given a string z, the negation operation returns (~z).
- Given strings y and z, the conjunction operation returns (y & z). There are similar operations corresponding to the other binary connectives.
The set of formulas over V is defined to be the smallest subset of XV containing V and closed under all the formula-building operations.
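The closure idea can be sketched directly: start from the variables and repeatedly apply the formula-building operations, so that every formula appears after finitely many rounds. The following is an illustrative sketch (Python; the operation and function names are arbitrary):

```python
# Formula-building operations over strings (the closure view of the definition).
def E_not(z):        return f"(~{z})"
def E_and(y, z):     return f"({y} & {z})"
def E_or(y, z):      return f"({y} V {z})"
def E_implies(y, z): return f"({y} -> {z})"

def formulas_up_to(variables, depth):
    """All formulas obtainable from the variables in at most `depth` building steps."""
    formulas = set(variables)
    for _ in range(depth):
        new = set()
        for z in formulas:
            new.add(E_not(z))
        for y in formulas:
            for z in formulas:
                new.update({E_and(y, z), E_or(y, z), E_implies(y, z)})
        formulas |= new
    return formulas

print(sorted(formulas_up_to({"p1", "p2"}, 1)))   # e.g. '(p1 & p2)', '(~p1)', ...
```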
Parsing formulas
The following "laws" of the propositional calculus are used to "reduce" complex formulas. The "laws" can be easily verified with truth tables. For each law, the principal (outermost) connective is associated with logical equivalence ≡ or identity =. A complete analysis of all 2n combinations of truth-values for its n distinct variables will result in a column of 1's (T's) underneath this connective. This finding makes each law, by definition, a tautology. And, for a given law, because its formula on the left and right are equivalent (or identical) they can be substituted for one another.- Example: The following truth table is De Morgan's law for the behavior of NOT over OR: ~(a V b) ≡ (~a & ~b). To the left of the principal connective ≡ (yellow column labelled "taut") the formula ~(b V a) evaluates to (1, 0, 0, 0) under the label "P". On the right of "taut" the formula (~(b) V ~(a)) also evaluates to (1, 0, 0, 0) under the label "Q". As the two columns have equivalent evaluations, the logical equivalence ≡ under "taut" evaluates to (1, 1, 1, 1), i.e. P ≡ Q. Thus either formula can be substituted for the other if it appears in an larger formula.
| b | a | (b V a) | P: ~(b V a) | taut: P ≡ Q | ~(b) | ~(a) | Q: (~(b) & ~(a)) |
|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
| 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 |
| 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
| 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 |
Enterprising readers might challenge themselves to invent an "axiomatic system" that uses the symbols { V, &, ~, (, ), variables a, b, c }, the formation rules specified above, and as few as possible of the laws listed below, and then derive as theorems the others as well as the truth-table valuations for V, &, and ~. One set attributed to Huntington (1904) (Suppes:204) uses 8 of the laws defined below.
Note that if used in an axiomatic system, the symbols 1 and 0 (or T and F) are considered to be wffs and thus obey all the same rules as the variables. Thus the laws listed below are actually axiom schemas, that is, they stand in place of an infinite number of instances. Thus ( x V y ) ≡ ( y V x ) might be used in one instance as ( p V 0 ) ≡ ( 0 V p ) and in another instance as ( 1 V q ) ≡ ( q V 1 ), etc.
Connective seniority (symbol rank)
In general, to avoid confusion during analysis and evaluation of propositional formulas, make liberal use of parentheses. However, quite often authors leave them out. To parse a complicated formula one first needs to know the seniority, or rank, that each of the connectives (excepting *) has over the other connectives. To "well-form" a formula, start with the connective with the highest rank and add parentheses around its components, then move down in rank (paying close attention to the connective's scope, the part of the formula over which it is working). From most- to least-senior, with the predicate signs ∀x and ∃x, the IDENTITY = and the arithmetic signs added for completeness:
- ≡ (LOGICAL EQUIVALENCE), → (IMPLICATION), & (AND), V (OR), ~ (NOT), ∀x (FOR ALL x), ∃x (THERE EXISTS AN x), = (IDENTITY), + (arithmetic sum), *(arithmetic multiply), ' (s, arithmetic successor).
Thus formulas can be parsed—but note that, because NOT does not obey the distributive law, the parentheses around an inner formula such as (~c & ~d) are mandatory:
- Example: " d & c V w " rewritten is ( (d & c) V w )
- Example: " a & a → b ≡ a & ~a V b " rewritten (rigorously) is
-
- ≡ has seniority: ( ( a & a → b ) ≡ ( a & ~a V b ) )
- → has seniority: ( ( a & (a → b) ) ≡ ( a & ~a V b ) )
- & has seniority both sides: ( ( (a) & (a → b) ) ≡ ( (a) & (~a V b) ) )
- ~ has seniority: ( ( (a) & (a → b) ) ≡ ( (a) & (~(a) V b) ) )
- check 8 ( -parentheses and 8 ) -parentheses: ( ( (a) & (a → b) ) ≡ ( (a) & (~(a) V b) ) )
-
- Example:
- d & c V p & ~(c & ~d) ≡ c & d V p & c V p & ~d rewritten is ( ( (d & c) V ( p & ~(c & ~(d)) ) ) ≡ ( (c & d) V (p & c) V (p & ~(d)) ) )
Commutative and associative laws
Both AND and OR obey the commutative law and associative law:
- Commutative law for OR: ( a V b ) ≡ ( b V a )
- Commutative law for AND: ( a & b ) ≡ ( b & a )
- Associative law for OR: (( a V b ) V c ) ≡ ( a V (b V c) )
- Associative law for AND: (( a & b ) & c ) ≡ ( a & (b & c) )
Omitting parentheses in strings of AND and OR: The connectives are considered to be unary (one-variable, e.g. NOT) and binary (i.e. two-variable AND, OR, IMPLIES). For example, ( (c & d) V (p & c) V (p & ~(d)) ) above should be written ( ((c & d) V (p & c)) V (p & ~(d)) ) or possibly ( (c & d) V ( (p & c) V (p & ~(d)) ) ).
However, a truth-table demonstration shows that the form without the extra parentheses is perfectly adequate.
Omitting parentheses with regards to a single-variable NOT: While ~(a), where a is a single variable, is perfectly clear, ~a is adequate and is the usual way this literal would appear. When the NOT is over a formula with more than one symbol, then the parentheses are mandatory, e.g. ~(a V b).
Distributive laws
OR distributes over AND, and AND distributes over OR. NOT distributes over neither AND nor OR; see De Morgan's laws below:
- Distributive law for OR: ( c V ( a & b) ) ≡ ( (c V a) & (c V b) )
- Distributive law for AND: ( c & ( a V b) ) ≡ ( (c & a) V (c & b) )
De Morgan's laws
NOT, when distributed over OR or AND, does something peculiar (again, these can be verified with a truth-table):- De Morgan's law for OR: ~(a V b) ≡ (~a & ~b)
- De Morgan's law for AND: ~(a & b) ≡ (~a V ~b)
Laws of absorption
Absorption, in particular the first law below, causes the "laws" of logic to differ from the "laws" of arithmetic:
- Absorption (idempotency) for OR: (a V a) ≡ a
- Absorption (idempotency) for AND: (a & a) ≡ a
Laws of evaluation: Identity, nullity, and complement
The sign " = " (as distinguished from logical equivalence ≡, alternately ↔ or ⇔) symbolizes the assignment of value or meaning. Thus the string (a & ~(a)) symbolizes "1", i.e. it means the same thing as symbol "1" ". In some "systems" this will be an axiom (definition) perhaps shown as ( (a & ~(a)) =Df 1 ) ; in other systems, it may be derived in the truth table below:c taut c a ( ( a & ~ ( a ) ) ≡ 0 ) 0
|
|
|
| 00 1
|
| 0
|
|1
| 0
|1
|
|
|
| 10 0
|
| 1
|
|1
| 0
|
- Commutation of equality: (a = b) ≡ (b = a)
- Identity for OR: (a V 0) = a or (a V F) = a
- Identity for AND: (a & 1) = a or (a & T) = a
- Nullity for OR: (a V 1) = 1 or (a V T) = T
- Nullity for AND: (a & 0) = 0 or (a & F) = F
- Complement for OR: (a V ~a) = 1 or (a V ~a) = T (law of excluded middle)
- Complement for AND: (a & ~a) = 0 or (a & ~a) = F (law of contradiction)
Well-formed formulas (wffs)
A key property of formulas is that they can be uniquely parsed to determine the structure of the formula in terms of its propositional variables and logical connectives. When formulas are written in infix notation, as above, unique readability is ensured through an appropriate use of parentheses in the definition of formulas. Alternatively, formulas can be written in Polish notation or reverse Polish notation, eliminating the need for parentheses altogether.
The inductive definition of infix formulas in the previous section can be converted to a formal grammar in Backus-Naur form:

<formula> ::= <propositional variable>
| ( ~ <formula> )
| ( <formula> & <formula> )
| ( <formula> V <formula> )
| ( <formula> → <formula> )
| ( <formula> ≡ <formula> )
It can be shown that any expression matched by the grammar has a balanced number of left and right parentheses, and any nonempty initial segment of a formula has more left than right parentheses (Enderton 2002). This fact can be used to give an algorithm for parsing formulas. For example, suppose that an expression x begins with ( ~. Starting after the second symbol, match the shortest subexpression y of x that has balanced parentheses. If x is a formula, there is exactly one symbol left after this expression, this symbol is a closing parenthesis, and y itself is a formula. This idea can be used to generate a recursive descent parser for formulas.
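The parsing idea just described can be sketched in code. The following Python fragment is an illustration added here, not part of the original treatment; it assumes fully parenthesized formulas over single-letter variables and the connectives ~, &, V.

# Minimal recursive descent parser for fully parenthesized formulas, e.g. "( ( a & b ) V ( ~ c ) )".
def parse(tokens, pos=0):
    tok = tokens[pos]
    if tok != '(':                         # a propositional variable stands alone
        return tok, pos + 1
    if tokens[pos + 1] == '~':             # ( ~ <formula> )
        sub, nxt = parse(tokens, pos + 2)
        assert tokens[nxt] == ')'
        return ('~', sub), nxt + 1
    left, nxt = parse(tokens, pos + 1)     # ( <formula> <connective> <formula> )
    op = tokens[nxt]
    right, nxt = parse(tokens, nxt + 1)
    assert tokens[nxt] == ')'
    return (op, left, right), nxt + 1

def tokenize(s):
    for sym in '()~&V':
        s = s.replace(sym, ' ' + sym + ' ')
    return s.split()

tree, _ = parse(tokenize("( ( a & b ) V ( ~ c ) )"))
print(tree)    # ('V', ('&', 'a', 'b'), ('~', 'c'))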
Example of parenthesis counting:
This method locates as "1" the principal connective -- the connective under which the overall evaluation of the formula occurs for the outer-most parentheses (which are often omitted). It also locates the inner-most connective where one would begin evaluation of the formula without the use of a truth table, e.g. at "level 6".

start: ( ( ( c & d ) V ( p & ~ ( ( c & ~ ( d ) ) ) ) ) = ( ( ( c & d ) V ( p & d ) ) V ( p & ~ ( c ) ) ) )
count: 0 1 2 3 3 3 3 2 2 3 3 3 3 4 5 5 5 5 6 6 5 4 3 3 1 1 2 3 4 4 4 4 3 3 4 4 4 4 3 2 2 3 3 3 3 3 3 3 2 1 0
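The parenthesis count can also be produced mechanically. The short Python sketch below is an illustration added here (the function name is an assumption); it prints the running count under each symbol, and the connective that appears at count 1 is the principal connective.

# Print the parenthesis count under each token of a fully parenthesized formula.
def depth_profile(tokens):
    level, out = 0, []
    for tok in tokens:
        out.append(level)          # count shown before an opening parenthesis takes effect
        if tok == '(':
            level += 1
        elif tok == ')':
            level -= 1
    return out

tokens = "( ( ( c & d ) V ( p & ~ ( ( c & ~ ( d ) ) ) ) ) = ( ( ( c & d ) V ( p & d ) ) V ( p & ~ ( c ) ) ) )".split()
counts = depth_profile(tokens)
print(' '.join(tokens))
print(' '.join(str(c) for c in counts))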
Wffs versus valid formulas in inferences
The notion of valid argument is usually applied to inferences in arguments, but arguments reduce to propositional formulas and can be evaluated the same as any other propositional formula. Here a valid inference means: "The formula that represents the inference evaluates to "truth" beneath its principal connective, no matter what truth-values are assigned to its variables", i.e. the formula is a tautology.
Quite possibly a formula will be well-formed but not valid. Another way of saying this is: "Being well-formed is necessary for a formula to be valid but it is not sufficient." The only way to find out if it is both well-formed and valid is to submit it to verification with a truth table or by use of the "laws":
- Example 1: What does one make of the following difficult-to-follow assertion? Is it valid? "If it's sunny, but if the frog is croaking then it's not sunny, then it's the same as saying that the frog isn't croaking." Convert this to a propositional formula as follows:
- " IF (a AND (IF b THEN NOT-a) THEN NOT-a" where " a " represents "its sunny" and " b " represents "the frog is croaking":
- ( ( (a) & ( (b) → ~(a) ) ≡ ~(b) )
- This is well-formed, but is it valid? In other words, when evaluated will this yield a tautology (all T) beneath the logical-equivalence symbol ≡ ? The answer is NO, it is not valid. However, if reconstructed as an implication then the argument is valid.
- "Saying it's sunny, but if the frog is croaking then it's not sunny, implies that the frog isn't croaking."
- Other circumstances may be preventing the frog from croaking: perhaps a crane ate it.
- Example 2 (from Reichenbach via Bertrand Russell):
- "If pigs have wings, some winged animals are good to eat. Some winged animals are good to eat, so pigs have wings."
- ( ( ((a) → (b)) & (b) ) → (a) ) is well formed, but an invalid argument, as shown by the evaluation beneath the principal implication in the truth table below (the 0 in the second row):
a b | ( a → b ) | ( ( a → b ) & b ) | ( ( ( a → b ) & b ) → a )
0 0 |     1     |         0         |             1
0 1 |     1     |         1         |             0   <- not a tautology
1 0 |     0     |         0         |             1
1 1 |     1     |         1         |             1
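A truth-table check of this kind is mechanical and easy to automate. The following Python sketch is an illustration added here (the helper names are assumptions); it evaluates Example 2's formula over all assignments and reports whether it is a tautology.

from itertools import product

# Brute-force validity test: evaluate the formula under every assignment of its variables.
def implies(p, q):
    return (not p) or q

def example2(a, b):
    # ( ( (a → b) & b ) → a )
    return implies(implies(a, b) and b, a)

assignments = list(product([False, True], repeat=2))
print(all(example2(a, b) for a, b in assignments))               # False: not a tautology
print([(a, b) for a, b in assignments if not example2(a, b)])    # [(False, True)], the failing row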
Reduced sets of connectives
A set of logical connectives is called complete if every propositional formula is tautologically equivalent to a formula with just the connectives in that set. There are many complete sets of connectives, including { &, ~ }, { V, ~ }, and { →, ~ }. There are two binary connectives that are complete on their own, corresponding to NAND and NOR, respectively. Some pairs are not complete, for example { &, V }.
The stroke (NAND)
The binary connective corresponding to NAND is called the Sheffer stroke, and is written with a vertical bar | or vertical arrow ↑. The completeness of this connective was noted in Principia Mathematica (1927:xvii). Since it is complete on its own, all other connectives can be expressed using only the stroke. For example, where the symbol " ≡ " represents logical equivalence:
- ~p ≡ p|p
- p → q ≡ p|~q
- p V q ≡ ~p|~q
- p & q ≡ ~(p|q)
In particular, the zero-ary connectives ⊤ (representing truth) and ⊥ (representing falsity) can be expressed using the stroke, for example ⊤ ≡ ( p | (p|p) ) and ⊥ ≡ ( ⊤ | ⊤ ).
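These stroke identities can be verified mechanically. The Python sketch below is an illustration added here (the function names are assumptions); it defines NAND and confirms each equivalence by exhaustive evaluation.

from itertools import product

# Verify that NOT, IMPLIES, OR and AND are expressible with the Sheffer stroke (NAND) alone.
def nand(p, q):
    return not (p and q)

checks = {
    "~p ≡ p|p":       lambda p, q: (not p) == nand(p, p),
    "p → q ≡ p|~q":   lambda p, q: ((not p) or q) == nand(p, not q),
    "p V q ≡ ~p|~q":  lambda p, q: (p or q) == nand(not p, not q),
    "p & q ≡ ~(p|q)": lambda p, q: (p and q) == (not nand(p, q)),
}

for name, check in checks.items():
    print(name, all(check(p, q) for p, q in product([False, True], repeat=2)))   # all True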
IF ... THEN ... ELSE
This connective together with { 0, 1 } (or { F, T }) forms a complete set. In the following, the IF...THEN...ELSE relation (c, b, a) = d represents ( (c → b) & (~c → a) ) ≡ ( (c & b) V (~c & a) ) = d. Some instances of (c, b, a):
- (c, 0, 1) ≡ ~c
- (c, b, 1) ≡ (c → b)
- (c, c, a) ≡ (c V a)
- (c, b, c) ≡ (c & b)
Example: The following shows how a theorem-based proof of "(c, b, 1) ≡ (c → b)" would proceed; below the proof is its truth-table verification. ( Note: (c → b) is defined to be (~c V b) ):
- Begin with the reduced form: ( (c & b) V (~c & a) )
- Substitute "1" for a: ( (c & b) V (~c & 1) )
- Identity (~c & 1) = ~c: ( (c & b) V (~c) )
- Law of commutation for V: ( (~c) V (c & b) )
- Distribute "~c V" over (c & b): ( ((~c) V c ) & ((~c) V b )
- Law of excluded middle (((~c) V c ) = 1 ): ( (1) & ((~c) V b ) )
- Distribute "(1) &" over ((~c) V b): ( ((1) & (~c)) V ((1) & b )) )
- Commutivity and Identity (( 1 & ~c) = (~c & 1) = ~c, and (( 1 & b) ≡ (b & 1) ≡ b: ( ~c V b )
- ( ~c V b ) is defined as c → b Q. E. D.
In the following truth table the column labelled "taut" evaluates the logical equivalence (symbolized here by ≡) between the two sides. Because all four rows under "taut" are 1's, the equivalence indeed represents a tautology.

rows | c b a | ( (c & b) V (~(c) & a) ) | ( ~(c) V b ) | taut
0,1  | 0 0 1 |            1             |      1       |  1
2,3  | 0 1 1 |            1             |      1       |  1
4,5  | 1 0 1 |            0             |      0       |  1
6,7  | 1 1 1 |            1             |      1       |  1
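The same kind of exhaustive check confirms all four IF...THEN...ELSE identities listed above. A small Python sketch, added here for illustration (the name ite is an assumption):

from itertools import product

# IF c THEN b ELSE a, written (c, b, a) above, as the formula (c & b) V (~c & a).
def ite(c, b, a):
    return (c and b) or ((not c) and a)

bools = [False, True]
print(all(ite(c, False, True) == (not c)        for c in bools))                        # (c, 0, 1) ≡ ~c
print(all(ite(c, b, True)     == ((not c) or b) for c, b in product(bools, repeat=2)))  # (c, b, 1) ≡ (c → b)
print(all(ite(c, c, a)        == (c or a)       for c, a in product(bools, repeat=2)))  # (c, c, a) ≡ (c V a)
print(all(ite(c, b, c)        == (c and b)      for c, b in product(bools, repeat=2)))  # (c, b, c) ≡ (c & b)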
Normal forms
An arbitrary propositional formula may have a very complicated structure. It is often convenient to work with formulas that have simpler forms, known as normal forms. Some common normal forms include conjunctive normal form and disjunctive normal form. Any propositional formula can be reduced to its conjunctive or disjunctive normal form.
Reduction to normal form
Reduction to normal form is relatively simple once a truth table for the formula is prepared. But further attempts to minimize the number of literals (see below) require some tools: reduction by De Morgan's laws and truth tables can be unwieldy, but Karnaugh maps are very suitable for a small number of variables (5 or fewer). Some sophisticated tabular methods exist for more complex circuits with multiple outputs, but these are beyond the scope of this article; for more see Quine–McCluskey algorithm.
Literal, term and alterm
In electrical engineering a variable x or its negation ~(x) is lumped together into a single notion called a literal. A string of literals connected by ANDs is called a term. A string of literals connected by ORs is called an alterm. Typically the literal ~(x) is abbreviated ~x. Sometimes the &-symbol is omitted altogether in the manner of algebraic multiplication.
- Example: a, b, c, d are variables. ((( a & ~(b) ) & ~(c)) & d) is a term. This can be abbreviated as (a & ~b & ~c & d), or a~b~cd.
- Example: p, q, r, s are variables. ( ( ( p V ~(q) ) V r ) V ~(s) ) is an alterm. This can be abbreviated as (p V ~q V r V ~s).
Minterms
In the same way that a 2^n-row truth table displays the evaluation of a propositional formula for all 2^n possible values of its n variables, those same n variables produce a 2^n-square Karnaugh map (even though we cannot draw it in its full-dimensional realization). For example, 3 variables produce 2^3 = 8 rows and 8 Karnaugh squares; 4 variables produce 16 truth-table rows and 16 squares and therefore 16 minterms. Each Karnaugh-map square and its corresponding truth-table evaluation represents one minterm.
Any propositional formula can be reduced to the "logical sum" (OR) of the active (i.e. "1"- or "T"-valued) minterms. When in this form the formula is said to be in disjunctive normal form. But even though it is in this form, it is not necessarily minimized with respect to either the number of terms or the number of literals.
In the following table, observe the peculiar numbering of the rows: (0, 1, 3, 2, 6, 7, 5, 4, 0). The first column is the decimal equivalent of the binary equivalent of the digits "cba", in other words:
- Example: cba = c*2^2 + b*2^1 + a*2^0:
- cba = (c=1, b=0, a=1) = 101 in binary = 1*2^2 + 0*2^1 + 1*2^0 = 5 in decimal
This numbering comes about because as one moves down the table from row to row only one variable at a time changes its value. Gray code is derived from this notion. The notion can be extended to three- and four-dimensional hypercubes called Hasse diagrams, where each corner's variables change only one at a time as one moves around the edges of the cube. Hasse diagrams (hypercubes) flattened into two dimensions are either Veitch diagrams or Karnaugh maps (these are virtually the same thing).
When working with Karnaugh maps one must always keep in mind that the top edge "wraps around" to the bottom edge, and the left edge wraps around to the right edge -- the Karnaugh diagram is really a three- or four- or n-dimensional flattened object.
decimal equivalent of (c, b, a) | c b a | minterm
0 | 0 0 0 | (~c & ~b & ~a)
1 | 0 0 1 | (~c & ~b & a)
3 | 0 1 1 | (~c & b & a)
2 | 0 1 0 | (~c & b & ~a)
6 | 1 1 0 | (c & b & ~a)
7 | 1 1 1 | (c & b & a)
5 | 1 0 1 | (c & ~b & a)
4 | 1 0 0 | (c & ~b & ~a)
0 | 0 0 0 | (~c & ~b & ~a)    (the ordering wraps around to the starting row)
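The Gray-code row order and the minterm attached to each row can also be generated programmatically. The following Python sketch is an illustration added here (the function name is an assumption); it reproduces the ordering above for three variables.

# Enumerate the rows of a 3-variable Karnaugh map in Gray-code order and print each row's minterm.
def gray(n):
    return n ^ (n >> 1)            # standard binary-reflected Gray code

names = ('c', 'b', 'a')
for i in range(8):
    g = gray(i)                    # visits 0, 1, 3, 2, 6, 7, 5, 4
    bits = [(g >> k) & 1 for k in (2, 1, 0)]
    literals = [v if bit else '~' + v for v, bit in zip(names, bits)]
    print(g, '( ' + ' & '.join(literals) + ' )')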
Reduction by use of the map method (Veitch, Karnaugh)
Veitch improved the notion of Venn diagrams by converting the circles to abutting squares, and Karnaugh simplified the Veitch diagram by converting the minterms, written in their literal form (e.g. ~abc~d), into numbers. The method proceeds as follows:
(1) Produce the formula's truth table
Produce the formula's truth table. Number its rows using the binary equivalents of the variables (usually just sequentially 0 through 2^n - 1) for n variables.
- Technically, the propositional function has been reduced to its (unminimized) disjunctive normal form: each row has its minterm expression and these can be OR'd to produce the formula in its (unminimized) disjunctive normal form.
Example: ((c & d) V (p & ~(c & (~d)))) = q in disjunctive normal form is:
- ( (~p & d & c ) V (p & d & c) V (p & d & ~c) V (p & ~d & ~c) ) = q
However, this formula can be reduced both in the number of terms (from 4 to 3) and in the total count of its literals (12 to 6).
row | Minterms | p d c | ( (c & d) V ( p & ~(c & ~(d)) ) ) | Active minterms
0 | ( ~p & ~d & ~c ) | 0 0 0 | 0 |
1 | ( ~p & ~d & c )  | 0 0 1 | 0 |
2 | ( ~p & d & ~c )  | 0 1 0 | 0 |
3 | ( ~p & d & c )   | 0 1 1 | 1 | (~p & d & c)
4 | ( p & ~d & ~c )  | 1 0 0 | 1 | (p & ~d & ~c)
5 | ( p & ~d & c )   | 1 0 1 | 0 |
6 | ( p & d & ~c )   | 1 1 0 | 1 | (p & d & ~c)
7 | ( p & d & c )    | 1 1 1 | 1 | (p & d & c)

Formula in disjunctive normal form: q = (~p & d & c) V (p & ~d & ~c) V (p & d & ~c) V (p & d & c)
(2) Create the formula's Karnaugh map
Use the values of the formula (e.g. "p") found by the truth-table method and place them into their respective (associated) Karnaugh squares (these are numbered per the Gray code convention). If values of "d" for "don't care" appear in the table, this adds flexibility during the reduction phase.
(3) Reduce minterms
Minterms of adjacent (abutting) 1-squares (T-squares) can be reduced with respect to the number of their literals, and the number of terms also will be reduced in the process. Two abutting squares (2 x 1 horizontal or 1 x 2 vertical; even the edges represent abutting squares) lose one literal; four squares in a 4 x 1 rectangle (horizontal or vertical) or a 2 x 2 square (even the four corners represent abutting squares) lose two literals; eight squares in a rectangle lose 3 literals, etc. (One seeks out the largest squares or rectangles and ignores the smaller squares or rectangles contained totally within them.) This process continues until all abutting squares are accounted for, at which point the propositional formula is said to be minimized.
For example, squares #3 and #7 abut, so these two squares can lose one literal (the "p" on which they differ).
Example: The map method usually is done by inspection. The following example expands the algebraic method to show the "trick" behind the combining of terms on a Karnaugh map:- Minterms #3 and #7 abut, #7 and #6 abut, and #4 and #6 abut (because the table's edges wrap around). So each of these pairs can be reduced.
Observe that by the Idempotency law (A V A) = A, we can create more terms. Then by the associative and distributive laws the variables that are to disappear can be paired off, and then "disappeared" with the Law of excluded middle (x V ~x) = 1 and the Law of identity (x & 1) = x. The following uses brackets [ and ] only to keep track of the terms; they have no special significance:
- Put the formula in disjunctive normal form with the formula to be reduced:
- q = ( (~p & d & c ) V (p & d & c) V (p & d & ~c) V (p & ~d & ~c) ) = ( #3 V #7 V #6 V #4 )
- Idempotency (absorption) (A V A) = A:
- ( #3 V [ #7 V #7 ] V [ #6 V #6 ] V #4 )
- Associative law (x V (y V z)) = ( (x V y) V z ):
- ( [ #3 V #7 ] V [ #7 V #6 ] V [ #6 V #4 ] )
- [ (~p & d & c ) V (p & d & c) ] V [ (p & d & c) V (p & d & ~c) ] V [ (p & d & ~c) V (p & ~d & ~c) ]
- Distributive law ( x & (y V z) ) = ( (x & y) V (x & z) ), applied in reverse to factor each pair:
- ( [ (d & c) & (~p V p) ] V [ (p & d) & (c V ~c) ] V [ (p & ~c) & (d V ~d) ] )
- Law of excluded middle ( x V ~x ) = 1:
- ( [ (d & c) & 1 ] V [ (p & d) & 1 ] V [ (p & ~c) & 1 ] )
- Law of identity ( x & 1 ) = x, leading to the reduced form of the formula:
- q = ( (d & c) V (p & d) V (p & ~c) )
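The "trick" of combining two minterms that differ in exactly one variable can be sketched in code. The Python fragment below is an illustration added here (the representation and names are assumptions): each minterm is a tuple of 0/1 values for (p, d, c), and abutting pairs are merged by dropping the variable on which they differ.

from itertools import combinations

# Minterms of q as (p, d, c) bit-tuples: rows #3, #7, #6, #4 of the table above.
minterms = {(0, 1, 1), (1, 1, 1), (1, 1, 0), (1, 0, 0)}
names = ('p', 'd', 'c')

def merge(m1, m2):
    """If two minterms differ in exactly one position, return the merged term, '-' marking the dropped variable."""
    diff = [i for i in range(len(m1)) if m1[i] != m2[i]]
    if len(diff) != 1:
        return None
    merged = list(m1)
    merged[diff[0]] = '-'
    return tuple(merged)

for m1, m2 in combinations(sorted(minterms), 2):
    merged = merge(m1, m2)
    if merged:
        term = ' & '.join(n if b == 1 else '~' + n
                          for n, b in zip(names, merged) if b != '-')
        print(m1, '+', m2, '->', term)     # yields (d & c), (p & ~c), (p & d)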
(4) Verify reduction with a truth table
row | Minterms | p d c | (d & c) | (p & d) | (p & ~(c)) | q = ( (d & c) V (p & d) V (p & ~(c)) )
0 | ( ~p & ~d & ~c ) | 0 0 0 | 0 | 0 | 0 | 0
1 | ( ~p & ~d & c )  | 0 0 1 | 0 | 0 | 0 | 0
2 | ( ~p & d & ~c )  | 0 1 0 | 0 | 0 | 0 | 0
3 | ( ~p & d & c )   | 0 1 1 | 1 | 0 | 0 | 1
4 | ( p & ~d & ~c )  | 1 0 0 | 0 | 0 | 1 | 1
5 | ( p & ~d & c )   | 1 0 1 | 0 | 0 | 0 | 0
6 | ( p & d & ~c )   | 1 1 0 | 0 | 1 | 1 | 1
7 | ( p & d & c )    | 1 1 1 | 1 | 1 | 0 | 1

The reduced formula is 1 on exactly the same rows (3, 4, 6, 7) as the original, confirming the reduction.
Impredicative propositions
Given the following examples-as-definitions, what does one make of the subsequent reasoning:
- (1) "This sentence is simple." (2) "This sentence is complex, and it is conjoined by AND."
Then assign the variable "s" to the left-most sentence "This sentence is simple". Define "compound" c = "not simple" ~s, and assign c = ~s to "This sentence is compound"; assign "j" to "It [this sentence] is conjoined by AND". The second sentence can be expressed as:
- ( NOT(s) AND j )
If truth values are to be placed on the sentences c = ~s and j, then all are clearly FALSEHOODS: e.g. "This sentence is complex" is a FALSEHOOD (it is simple, by definition). So their conjunction (AND) is a falsehood. But when taken in its assembled form, the sentence is a TRUTH.
This is an example of the paradoxes that result from an impredicative definition -- that is, when an object m has a property P, but the object m is defined in terms of property P. The best advice for a rhetorician or one involved in deductive analysis is to avoid impredicative definitions, but at the same time to be on the lookout for them, because they can indeed create paradoxes. Engineers, on the other hand, put them to work in the form of propositional formulas with feedback.
Propositional formula with "feedback"
The notion of a propositional formula appearing as one of its own variables requires a formation rule that allows the assignment of the formula to a variable. In general there is no stipulation (in either axiomatic or truth-table systems of objects and relations) that forbids this from happening.
The simplest case occurs when an OR formula becomes one of its own inputs, e.g. p = q. Begin with (p V s) = q, then let p = q. Observe that q's "definition" depends on itself "q" as well as on "s" and the OR connective; this definition of q is thus impredicative.
Either of two conditions can result: oscillation or memory.
Without knowledge of what is going on "inside" the formula, it would appear (from the "outside") as if the output is no longer a function of the inputs alone. That is, sometimes when one looks at q one sees 0 and at other times 1. To avoid this problem one has to know the state (condition) of the "hidden" variable p (i.e. the value of q fed back and assigned to p). When this is known, the apparent inconsistency goes away.
To understand [predict] the behavior of formulas with feedback requires the more sophisticated analysis of sequential circuits. Propositional formulas with feedback lead, in their simplest form, to state machines; they also lead to memories in the form of Turing tapes and counter-machine counters. From combinations of these elements one can build any sort of bounded computational model (e.g. Turing machines, counter machines, register machines, Macintosh computers, etc.).
Oscillation
In the abstract (ideal) case the simplest oscillating formula is a NOT fed back to itself: with the output q fed back as the input p (i.e. p = q), the formula is ~(p) = q. Analysis of this abstract (ideal) propositional formula in a truth table reveals an inconsistency for both the p=1 and the p=0 cases: when p=1 then q=0, which cannot be because p=q; ditto for when p=0 and q=1.
p | ~(p) = q | note
0 |    1     | q & p inconsistent
1 |    0     | q & p inconsistent
Oscillation with delay: If a delay (ideal or non-ideal) is inserted in the abstract formula between p and q, then p will oscillate between 1 and 0: 101010...101... ad infinitum. If either the delay or the NOT is not abstract (i.e. not ideal), the type of analysis to be used will depend upon the exact nature of the objects that make up the oscillator; such things fall outside mathematics and into engineering.
Analysis requires a delay to be inserted and then the loop cut between the delay and the input "p". The delay must be viewed as a kind of proposition that has "qd" (q-delayed) as output for "q" as input. This new proposition adds another column to the truth table. The inconsistency is now between "qd" and "p", and two stable states result:
qd | p | ~(p) = q | note
0  | 0 |    1     | state 1
0  | 1 |    0     | qd & p inconsistent
1  | 0 |    1     | qd & p inconsistent
1  | 1 |    0     | state 0
Memory
Without delay, inconsistencies must be eliminated from a truth-table analysis. With the notion of "delay", this condition presents itself as a momentary inconsistency between the fed-back output variable q and p = q-delayed.
A truth table reveals the rows where inconsistencies occur between p = q-delayed at the input and q at the output. After "breaking" the feed-back, the truth table construction proceeds in the conventional manner. But afterwards, in every row the output q is compared to the now-independent input p and any inconsistencies between p and q are noted (i.e. p=0 together with q=1, or p=1 together with q=0); when the "line" is "remade" both are rendered impossible by the Law of contradiction ~(p & ~p). Rows revealing inconsistencies are either considered transient states or just eliminated as inconsistent and hence "impossible".
Once-flip memory
About the simplest memory results when the output of an OR feeds back to one of its inputs, in this case output "q" feeding back into "p". Given that the formula is first evaluated (initialized) with p=0 & q=0, it will "flip" once when "set" by s=1. Thereafter, output "q" will sustain "q" in the "flipped" condition (state q=1). This behavior, now time-dependent, can be summarized by a state diagram for the once-flip.
q | p | s | (s V p) = q | note
0 | 0 | 0 |      0      | state 0, s=0
1 | 0 | 1 |      1      | q & p inconsistent (the transient "flip")
1 | 1 | 0 |      1      | state 1 with s=0
1 | 1 | 1 |      1      | state 1 with s=1
Flip-flop memory
The next simplest case is the "set-reset" flip-flop. Given that r=0 & s=0 and q=0 at the outset, it is "set" (s=1) in a manner similar to the once-flip. It however has a provision to "reset" q=0 when "r"=1. An additional complication occurs if both set=1 and reset=1. In this formula, set=1 forces the output q=1, so when and if (s=0 & r=1) the flip-flop will be reset. Or, if (s=1 & r=0) the flip-flop will be set. In the abstract (ideal) instance in which s=1 => s=0 & r=1 => r=0 simultaneously, the formula q will be indeterminate (undecidable). Due to delays in "real" OR, AND and NOT the result will be unknown at the outset but thereafter predictable.
q | p | s | r | ( s V (p & ~(r)) ) = q | note
0 | 0 | 0 | 0 |           0            | state 0 with (s=0 & r=0)
0 | 0 | 0 | 1 |           0            | state 0 with (s=0 & r=1)
1 | 0 | 1 | 0 |           1            | q & p inconsistent (transient)
1 | 0 | 1 | 1 |           1            | q & p inconsistent (transient)
1 | 1 | 0 | 0 |           1            | state 1 with (s=0 & r=0)
0 | 1 | 0 | 1 |           0            | q & p inconsistent (transient)
1 | 1 | 1 | 0 |           1            | state 1 with (s=1 & r=0)
1 | 1 | 1 | 1 |           1            | state 1 with s & r simultaneously
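A feedback formula of this kind can also be simulated by evaluating it repeatedly and feeding the output back as p on each step. The following Python sketch is an illustration added here (the signal names follow the table above); it steps the set-reset formula through a set, hold, reset, hold sequence.

# Simulate ( s V (p & ~r) ) = q with the output q fed back as p on the next step.
def step(p, s, r):
    return s or (p and not r)

q = False                                       # initial state: p = q = 0
for s, r, label in [(True, False, "set"), (False, False, "hold"),
                    (False, True, "reset"), (False, False, "hold")]:
    q = step(q, s, r)
    print(label, "-> q =", int(q))              # set -> 1, hold -> 1, reset -> 0, hold -> 0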
Clocked flip-flop memory
The formula known as "clocked flip-flop" memory ("c" is the "clock" and "d" is the "data") is given below. It works as follows: When c = 0 the data d (either 0 or 1) cannot "get through" to affect output q. When c = 1 the data d "gets through" and output q "follows" d's value. When c goes from 1 to 0 the last value of the data remains "trapped" at output "q". As long as c=0, d can change value without causing q to change.
Example: ( (c & d) V ( p & ~( c & ~(d) ) ) ) = q, but now let p = q:
- Example: ( (c & d) V ( q & ~( c & ~(d) ) ) ) = q
The state diagram is similar in shape to the flip-flop's state diagram, but with different labelling on the transitions.
row | q d c | ( (c & d) V ( q & ~( c & ~(d) ) ) ) = q | Description
0 | 0 0 0 | 0 | state 0 with (d=0 & c=0), 0 is trapped
1 | 0 0 1 | 0 | state 0 with (d=0 & c=1): q=0 is following d=0
2 | 0 1 0 | 0 | state 0 with (d=1 & c=0), 0 is trapped
3 | 0 1 1 | 1 | q & p inconsistent (transient)
4 | 1 0 0 | 1 | state 1 with (d=0 & c=0), 1 is trapped
5 | 1 0 1 | 0 | q & p inconsistent (transient)
6 | 1 1 0 | 1 | state 1 with (d=1 & c=0), 1 is trapped
7 | 1 1 1 | 1 | state 1 with (d=1 & c=1): q=1 is following d=1
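The clocked formula can be simulated in the same way. The Python sketch below is an illustration added here (names are assumptions); it shows the data value being "trapped" at the output when the clock c returns to 0, and "following" d while c = 1.

# Simulate ( (c & d) V (q & ~(c & ~d)) ) = q, feeding the output q back on each step.
def step(q, d, c):
    return (c and d) or (q and not (c and not d))

q = False
for d, c in [(1, 0), (1, 1), (0, 0), (0, 0), (0, 1), (1, 0)]:
    q = step(q, bool(d), bool(c))
    print(f"d={d} c={c} -> q={int(q)}")
# Output: 0, 1, 1, 1, 0, 0 -- while c=0 the output holds its last value; while c=1 it follows d.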
Historical development
Bertrand Russell (1912:74) lists three laws of thought that derive from Aristotle: (1) The law of identity: "Whatever is, is.", (2) The law of contradiction: "Nothing can both be and not be", and (3) The law of excluded middle: "Everything must either be or not be."
- Example: Here O is an expression about an object's BEING or QUALITY:
- (1) Law of Identity: O = O
- (2) Law of contradiction: ~(O & ~(O))
- (3) Law of excluded middle: (O V ~(O))
The use of the word "everything" in the law of excluded middle renders Russell's expression of this law open to debate. If restricted to an expression about BEING or QUALITY with reference to a finite collection of objects (a finite "universe of discourse") -- the members of which can be investigated one after another for the presence or absence of the assertion -- then the law is considered intuitionistically appropriate. Thus an assertion such as "This object must either BE or NOT BE (in the collection)", or "This object must either have this QUALITY or NOT have this QUALITY (relative to the objects in the collection)", is acceptable. See more at Venn diagram.
Although a propositional calculus originated with Aristotle, the notion of an algebra applied to propositions had to wait until the early 19th century. In an (adverse) reaction to the 2000-year tradition of Aristotle's syllogisms, John Locke's Essay concerning human understanding (1690) used the word semiotics (theory of the use of symbols). By 1826 Richard Whately had critically analyzed the syllogistic logic with a sympathy toward Locke's semiotics. George Bentham's work (1827) resulted in the notion of "quantification of the predicate" (nowadays symbolized as ∀ ≡ "for all"). A "row" instigated by William Hamilton over a priority dispute with Augustus De Morgan "inspired George Boole to write up his ideas on logic, and to publish them as MAL [Mathematical Analysis of Logic] in 1847" (Grattan-Guinness and Bornet 1997:xxviii).
About his contribution Grattan-Guinness and Bornet comment:
- "Boole's principal single innovation was [the] law [ x^n = x ] for logic: it stated that the mental acts of choosing the property x and choosing x again and again is the same as choosing x once... As a consequence of it he formed the equations x•(1-x)=0 and x+(1-x)=1 which for him expressed respectively the law of contradiction and the law of excluded middle" (p. xxviiff). For Boole "1" was the universe of discourse and "0" was nothing.
Gottlob Frege's massive undertaking (1879) resulted in a formal calculus of propositions, but his symbolism is so daunting that it had little influence except on one person: Bertrand Russell. First as the student of Alfred North Whitehead he studied Frege's work and suggested a (famous and notorious) emendation with respect to it (1904) around the problem of an antinomy that he discovered in Frege's treatment (cf Russell's paradox). Russell's work led to a collaboration with Whitehead that, in the year 1912, produced the first volume of Principia Mathematica (PM). It is here that what we consider "modern" propositional logic first appeared. In particular, PM introduces NOT and OR and the assertion symbol ⊦ as primitives. In terms of these notions they define IMPLICATION → (def. *1.01: ~p V q), then AND (def. *3.01: ~(~p V ~q) ), then EQUIVALENCE p ←→ q (*4.01: (p → q) & ( q → p ) ).
- Henry M. Sheffer (1921) and Jean Nicod demonstrate that only one connective, the "stroke" |, is sufficient to express all propositional formulas.
- Emil Post (1921) develops the truth-table method of analysis in his "Introduction to a general theory of elementary propositions". He notes Nicod's stroke | .
- In their 1927 re-publication of PM, Whitehead and Russell add an introduction that includes, in part, a favorable treatment of the "stroke".
Computation and switching logic:
- William Eccles and F. W. Jordan (1919) describe a "trigger relay" made from a vacuum tube.
- George Stibitz (1937) invents the binary adder using mechanical relays. He builds this on his kitchen table.
- Example: Given binary bits a_i and b_i and carry-in c_in_i, their summation Σ_i and carry-out c_out_i are:
- ( ( a_i XOR b_i ) XOR c_in_i ) = Σ_i
- ( ( a_i & b_i ) V ( ( a_i XOR b_i ) & c_in_i ) ) = c_out_i (a code sketch of these formulas appears after this list)
- Alan Turing builds a multiplier using relays (1937–1938). He has to hand-wind his own relay coils to do this.
- Textbooks about "switching circuits" appear in the early 1950s.
- Willard Quine (1952 and 1955), E. W. Veitch (1952), and M. Karnaugh (1953) develop map-methods for simplifying propositional functions.
- George H. Mealy (1955) and Edward F. Moore (1956) address the theory of sequential (i.e. switching-circuit) "machines".
- E. J. McCluskey and H. Shorr develop a method for simplifying propositional (switching) circuits (1962).
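As referenced in the adder example above, the one-bit formulas can be exercised directly. The Python sketch below is an illustration added here (the function names are assumptions); it builds a small ripple-carry adder from the propositional full-adder formulas.

# One-bit full adder from the formulas above, rippled across a 4-bit word.
def full_adder(a, b, c_in):
    s = (a ^ b) ^ c_in                           # ( (a XOR b) XOR c_in ) = sum bit
    c_out = (a and b) or ((a ^ b) and c_in)      # ( (a & b) V ((a XOR b) & c_in) ) = carry-out
    return s, c_out

def add(x, y, width=4):
    carry, bits = False, []
    for i in range(width):                        # least significant bit first
        a, b = bool((x >> i) & 1), bool((y >> i) & 1)
        s, carry = full_adder(a, b, carry)
        bits.append(int(s))
    return sum(bit << i for i, bit in enumerate(bits)), int(carry)

print(add(5, 6))    # (11, 0)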