Sentient software is the hot topic as of late. Speculative news about Artificial Intelligence (AI) systems such as Watson, Alexa, and even autonomous vehicles dominates social media. It’s feasible that this impression is nothing more than the Baader-Meinhof phenomenon (AKA the frequency illusion). However, it seems that the populace has a genuine interest in AI. Questions abound. Are there limits? Is it possible to create an artificial soul? Gödel’s incompleteness theorem is at the core of these questions; however, its conclusions are cryptic and often misunderstood.
Gödel’s incompleteness theorem is frequently adduced as proof of antithetical concepts. For instance, Roger Penrose’s book Shadows of the Mind claims that the theorem disproves the possibility of sentient machines (Penrose, 1994, p. 65). Douglas Hofstadter asserts the opposite in his book, I Am a Strange Loop (Hofstadter, 2007). This article aims to provide a cursory view of the theorem in layman’s terms and elucidate its practical implications for AI.
Context
Gödel’s Incompleteness Theorem is best understood within its historical context. This section covers requisite concepts and notable events to provide the reader with adequate background knowledge. This is not meant to be comprehensive coverage of the material; rather, it is stripped down to essentials.
The Challenge
The mathematics community was never filled with more hope than at the turn of the twentieth century. On August 8th, 1900, David Hilbert gave his seminal address at the Second International Congress of Mathematicians in which he declared, “in mathematics there is no ignorabimus” (Petzold, 2008, p. 40). Ignorabimus is a Latin word meaning “we shall not know”. Hilbert believed that, unlike in some other branches of science, all things mathematical were knowable. Furthermore, he framed a plan to actualize a mathematical panacea.
In this address, Hilbert outlined ten open problems and challenged the mathematics community to solve them (this was a subset of twenty-three problems published by Hilbert). The problem of relevance for this article is the second, entitled The Compatibility of the Arithmetical Axioms. Hilbert’s second problem called for the axiomatization of real numbers, “to prove that they are not contradictory, that is, that a finite number of logical steps based upon them can never lead to contradictory results” (Petzold, 2008, p. 41). More concisely, Hilbert wished to axiomatize number theory.
The following sections delve into axiomatization. However, a pertinent idea here is the phrase “finite number of logical steps”. In modern nomenclature, this is known as an algorithm. Hilbert, along with his contemporaries, believed that every mathematical problem was solvable via an algorithmic process (Petzold, 2008). This is a key concept that will be revisited after exploring axiomatization.
Axiomatization
Stated concisely, axiomatization is a means of deriving a system’s theorems by logical inferences based on a set of axioms. Axioms are unprovable rules that are self-evidently true. The most well-known axiomatized system is Euclidean geometry; therefore, it serves as an archetype for understanding axiomatic systems. The whole of Euclidean geometry is based on five axioms.
- A straight-line segment can be drawn joining any two points.
- Any straight-line segment can be extended indefinitely in a straight line.
- Given any straight-line segment, a circle can be drawn having the segment as radius and one endpoint as center.
- All right angles are congruent.
- If two lines are drawn which intersect a third in such a way that the sum of the inner angles on one side is less than two right angles, then the two lines inevitably must intersect each other on that side if extended far enough.
(Wolfram Research, Inc., 2017)
As a small aside, the fifth axiom is also known as the parallel postulate. This has been the subject of mathematical quandary for centuries. It is highly recommended that the enthusiastic reader perform additional research on the subject.
These five axioms form the foundation of geometry. The Pythagorean theorem, the pons asinorum, the congruence of triangles, Thales’ theorem, and countless others are derived via logical inferences based on the assumption that these self-evident axioms are true. Axioms provide a solid foundation for a system, much like the cornerstone of a building.
Another key concept introduced in the previous paragraph is logical inference. It’s not enough to have a firm foundation of axioms. Theorems derived from the axioms must be likewise sound, and logical inference offers a guarantee of said soundness.
Logical Inference
The process of connecting axioms to theorems cannot rely on intuition in any way. That is to say, there are definitive rules and constructs by which logical inference can be validated. This is important because the legitimacy of axioms is irrelevant if conclusions drawn from them are not completely consistent. A strong, stable, and trusted system must be composed of theorems that use valid logical inferences stemming from axioms.
It is beyond the scope of this blog post to give even a cursory explanation of logical systems of inference. However, it’s important for the reader to understand that formal logic has stringent rules and notations much like any mathematical system. Logic statements are written and manipulated like any other mathematical formulas. This allows for the creation of proofs that cement the validity from the bottom up.
Each theorem is analogous to a brick in a house. Because a theorem sits firmly on either an axiom or another theorem planted on an axiom, its validity is confirmed; the chain of justification terminates at the axioms instead of continuing forever, a problem commonly known as infinite regress. All the theorems taken together form a strong and stable system capable of being trusted. Formalism expands on the concept.
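To make the idea of mechanical inference concrete, the following sketch applies a single inference rule, modus ponens, to formulas represented as plain strings. The formula names and rules here are purely illustrative, not drawn from any particular formal system.

```python
# A minimal sketch of mechanical inference: modus ponens applied to
# formulas represented as plain strings. All names are illustrative.

def modus_ponens(known, implications):
    """Repeatedly apply 'from P and P -> Q, conclude Q' until nothing new."""
    derived = set(known)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)  # Q follows from P and P -> Q
                changed = True
    return derived

# Axioms (taken as given) and implications (licenses to infer).
axioms = {"A"}
rules = [("A", "B"), ("B", "C"), ("D", "E")]  # D is never established

theorems = modus_ponens(axioms, rules)
print(sorted(theorems))  # A, B, and C are derivable; D and E are not
```

Note that E is never derived: its premise D rests on nothing, which is exactly the “brick without a foundation” the paragraph above warns about.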
Formalism
Recall the Compatibility of the Arithmetical Axioms problem outlined in The Challenge section. Hilbert envisioned Formalism as the solution to this problem. Formalism, as conceived by Hilbert, is a “system comprised of definitions, axioms, and rules for constructing theorems from the axioms” (Petzold, 2008, p. 45). It is often described as a sort of metamathematics. Hilbert envisioned a formal logic language where axioms are represented as strings and theorems are derived by an algorithmic process. These concepts were introduced in the previous two sections. New to this section are the qualities that such a system must possess.
For a system, such as formalism, to truly axiomatize the whole of arithmetic, it must have four qualities which are outlined below.
- Independence – There are no superfluous axioms.
- Decidability – An algorithmic process for determining the validity of formulas.
- Consistency – It is NOT possible to derive two theorems that contradict one another.
- Completeness – Ability to derive ALL true formulas from the axioms.
(Petzold, 2008, p. 46)
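As a toy illustration of deriving theorems from string axioms by an algorithmic process, the sketch below mechanically enumerates the “theorems” of a tiny string-rewriting system. The system is hypothetical (loosely inspired by the MIU puzzle popularized by Hofstadter), not Hilbert’s actual formalism.

```python
from collections import deque

# A toy formal system: one axiom ("MI") and two string-rewriting rules.
# This is a hypothetical illustration of algorithmic theorem derivation,
# not a faithful rendering of any real formal system.

def theorems(axiom="MI", max_len=6, limit=20):
    """Mechanically enumerate theorems by applying the rewriting rules."""
    seen, queue, out = {axiom}, deque([axiom]), [axiom]
    while queue and len(out) < limit:
        s = queue.popleft()
        candidates = []
        if s.endswith("I"):
            candidates.append(s + "U")  # Rule 1: xI may become xIU
        candidates.append(s + s[1:])    # Rule 2: Mx may become Mxx
        for c in candidates:
            if len(c) <= max_len and c not in seen:
                seen.add(c)
                queue.append(c)
                out.append(c)
    return out

print(theorems())  # starts with the axiom "MI", then derived strings
```

Every string produced is a “theorem” purely because a finite number of rule applications connects it back to the axiom, which is the algorithmic derivation Hilbert had in mind.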
As a small aside, there is a fair bit of legerdemain happening here. The concepts of truth, formulas, theorems, and proof are purposely glossed over to avoid minutiae. Curious readers are encouraged to investigate further.
The two qualities that are particularly germane to Gödel’s incompleteness theorem are consistency and completeness. Luckily, they are both fairly self-explanatory. A system that is both complete and consistent will yield all possible true formulas, none of which are contradictory.
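For readers comfortable with logical notation, the two qualities can be sketched as follows, where F stands for a formal system and the turnstile means “is derivable in F” (a standard but simplified rendering, glossing over the same details noted in the aside above):

```latex
% Consistency: no formula is derivable together with its negation.
\neg \exists \varphi \, \big( F \vdash \varphi \;\wedge\; F \vdash \neg\varphi \big)

% Completeness: every formula is settled one way or the other.
\forall \varphi \, \big( F \vdash \varphi \;\vee\; F \vdash \neg\varphi \big)
```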
Why?
The truth is that axiomatization is a fastidious process that can seem maddeningly pedantic. One may be forced to question the very premise that it is a good thing. One can further postulate that simple human intuition is sufficient. However, recall the concept of infinite regress called out in the last paragraph of the Logical Inference section. New theorems are built upon existing theorems. Without stringent formal logic rules, systems become a “house of cards”. Mistakes found in foundational theorems can bring the entire system crashing down.
An archetypal example is Cantor’s set theory. The details of the theory are largely irrelevant to this line of inquiry, but the curious reader should refer to this set of blog posts for more information. In short, set theory took the mathematical world by storm. Countless mathematicians augmented it by building new abstractions on top of it. Bertrand Russell discovered a fatal flaw, known as Russell’s Paradox, which brought the system down like a proverbial “house of cards”. Formalism is meant to avoid similar debacles.
Principia Mathematica
The Principia Mathematica is a famous three-volume treatise by Alfred North Whitehead and Bertrand Russell published in 1910, 1912, and 1913. It is a truly herculean attempt to formalize the whole of arithmetic. The work is dense and inaccessible to even most mathematicians (Nagel & Newman, 2001). The system it sets forth set the stage for Gödel’s incompleteness theorem.
Incompleteness Theorem
In 1931, Kurt Gödel published a seminal, albeit recondite, paper entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems. The paper dismayed the whole of the mathematical community despite its esoteric content. It not only trampled the validity of Principia Mathematica, it proved that such a system isn’t achievable by any means. The implication is that Hilbert’s second problem, The Compatibility of the Arithmetical Axioms, will never have a satisfactory solution.
In short, Gödel proved that any system complex enough to encompass simple arithmetic cannot be both complete and consistent as defined in the Formalism section. Through a clever method of converting logical expressions to numbers, the proof showed that any such system enables the creation of a self-referential statement of the form “this statement is not provable”. If the system is consistent, that statement is true, yet it cannot be proven within the system.
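The number-coding trick, commonly called Gödel numbering, can be sketched in a few lines. The symbol table below is hypothetical, and Gödel’s actual 1931 coding differs in detail, but the principle of packing a sequence of symbol codes into prime exponents is the same.

```python
# A minimal sketch of Gödel numbering: a sequence of symbol codes becomes
# one integer via prime exponents, and can be recovered by factoring.
# The symbol table is hypothetical; Gödel's actual coding differs in detail.

def primes(n):
    """First n primes by trial division (fine for short formulas)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(codes):
    """Encode [c1, c2, c3, ...] as 2**c1 * 3**c2 * 5**c3 * ..."""
    result = 1
    for p, c in zip(primes(len(codes)), codes):
        result *= p ** c
    return result

def decode(n, length):
    """Recover the symbol codes by factoring out each prime in turn."""
    codes = []
    for p in primes(length):
        c = 0
        while n % p == 0:
            n //= p
            c += 1
        codes.append(c)
    return codes

# Example: suppose the symbol '0' is coded 1 and '=' is coded 2, so the
# formula "0 = 0" becomes the sequence [1, 2, 1].
n = godel_number([1, 2, 1])
print(n)             # 2**1 * 3**2 * 5**1 = 90
print(decode(n, 3))  # [1, 2, 1]
```

Because every formula is now just a number, statements about formulas (such as “formula X is provable”) become statements about numbers, which is what lets arithmetic talk about itself.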
The previous paragraph is a blatant over-simplification of Gödel’s incompleteness theorem. The intimate details of the proof are well beyond the scope of this humble article. As mentioned so many times throughout this work, the reader is encouraged to continue research independently. On a positive note, the arcane details are not requisite for comprehension of the implications.
Implications
In short, the implication of Gödel’s Incompleteness Theorem is nothing more than that an axiomatic system of logic expressive enough for arithmetic cannot be both complete and consistent. Expanding on that, it is not possible to derive an algorithm that will generate proofs of all true formulas of such a system; some true formulas have no proof at all. One can then infer that it is not possible to write a computer program that proves every arithmetical truth.
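One way to see the limitation in programming terms: a program can search through mechanically derived theorems one at a time and confirm a statement once a proof is found, but it can never conclude “unprovable”; for a true-but-unprovable statement the search would simply run forever. The sketch below is entirely hypothetical; the enumerator stands in for any mechanical proof generator.

```python
# A hypothetical sketch of why proof search only half-answers the question.
# 'enumerate_theorems' stands in for any mechanical proof enumerator; all
# names here are illustrative, not a real theorem prover.

def enumerate_theorems():
    """Toy enumerator: yields 'theorems' of an imaginary system forever."""
    n = 0
    while True:
        yield f"T{n}"  # stand-in for the n-th mechanically derived theorem
        n += 1

def is_provable(statement, max_steps):
    """Search the enumeration; halts with True only if a proof turns up.

    Without max_steps this could run forever; for a true but unprovable
    statement, no bound on the search would ever suffice.
    """
    for i, theorem in enumerate(enumerate_theorems()):
        if theorem == statement:
            return True
        if i >= max_steps:
            return None  # gave up: cannot conclude 'unprovable'
    return None

print(is_provable("T5", max_steps=100))  # True: found in the enumeration
print(is_provable("G", max_steps=100))   # None: search exhausted, undecided
```

The asymmetry is the point: “provable” is confirmable by brute search, but “true yet unprovable” can never be detected this way.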
There have been countless extrapolations based on the implications stated above. For instance, a commonly adduced argument is that there are more truths in the universe than there are proofs. Likewise, there are some things that are obviously true that cannot be formally proven. While these are both true, be careful not to fall into the enticing trap of applying the rule to anything outside of axiomatic systems of logic.
Why the Confusion?
Although it’s a rather unsatisfying observation, the reality is that Gödel’s proofs are impenetrable to all but accomplished logicians. Despite this, the implications are far reaching. This situation creates a particularly fertile breeding ground for misconceptions. Many venerated experts within other disciplines attempt to apply the theorem by fallacious means.
A cursory Google search for “Gödel’s incompleteness theorem and God” will yield seemingly boundless results with varied interpretations. The fact of the matter is, the theorem strictly applies to formal axiomatic systems of logic. It does not apply to religious texts. Likewise, it has no implications for the validity of the afterlife or mystical intuition (Tieszen, 2017, Kindle loc. 1173).
As an example, Gödel’s ontological argument is often cited by theists because, within its chosen axioms, it formally proves the existence of God. Given that description, it is easy to see how someone ignorant of formal logical proofs could draw fallacious conclusions. As stated previously, Gödel’s proofs apply exclusively to formal axiomatic systems of logic. The concept of God is far from this. Gödel himself said that “it was undertaken as a purely logical investigation, to demonstrate that such a proof could be carried out on the basis of accepted principles of formal logic” (Tieszen, 2017, Kindle loc. 2158). He also hesitated to publish “for fear that a belief in God might be ascribed to him” (Tieszen, 2017, Kindle loc. 2158).
The salient point is that it is easy to misinterpret the significance of Gödel’s work. It is difficult for anyone lacking a strong background in mathematical logic to draw valid conclusions based on the incompleteness theorem. Gödel’s work is best confined to scientific contexts.
Implications for Artificial Intelligence
The thesis of this work is to define the implications of Gödel’s incompleteness theorem for AI. Unfortunately, a surfeit of background concepts is requisite for comprehension, and the author humbly apologizes for the necessary discomfort. Possibly more disappointing is that the verdict is not as definitive as one may suppose, as this section explains.
One thing is definite: it is not possible to use a computer to automatically derive all true statements of an axiomatic system. Hilbert’s dream of complete automated formalization is dead. On the bright side, if it were possible, many mathematicians would be out of work. Some claim, as does Roger Penrose, that this necessarily precludes any possibility of AI within the current computational model. Consider this: on Penrose’s account, a human can recognize as true some statements that a machine cannot prove. The insinuation is that humans are endowed with creativity that is not obtainable by a machine. Mr. Penrose postulates that this is a quantum effect that is beyond our current understanding (Penrose, 1994).
Douglas Hofstadter passionately disputes Roger Penrose’s claims. He believes that said limits stem from a fundamental misunderstanding of how the brain works, and he presents a compelling model of consciousness in his book, I Am a Strange Loop (Hofstadter, 2007). Theorem proving is by no means the only way to make a machine “think”. “The human mind is fundamentally not a logic engine but an analogy engine, a learning engine, a guessing engine, an esthetics-driven engine, a self-correcting engine” (Nagel & Newman, 2001, Kindle loc. 146). From this frame of reference, Gödel’s incompleteness theorem doesn’t apply to AI.
Penrose and Hofstadter are but two among many experts with conflicting opinions. With the considerable amount of resources funneled into AI projects, the final verdict will be decided in due course of time. Not that this should sway the reader in any way, but the author tends to side with Mr. Hofstadter. The reader is encouraged to do their own research and form their own opinions.
Conclusion
Gödel’s incompleteness theorem is inextricably associated with philosophy, religion, and the viability of Artificial Intelligence (AI). However, Gödel’s work is in a recondite field, and its applicability beyond axiomatic systems of logic is limited and often misconstrued. In the final analysis, the theorem’s only definitive assertion is that it is not possible for an axiomatic system of logic expressive enough for arithmetic to be both consistent and complete. Many experts make conflicting ancillary claims, and it’s difficult to draw any absolute conclusions.
This article presents a simplistic high-level view of Gödel’s incompleteness theorem aimed at the novice with limited exposure. It is highly recommended that readers use this as a starting point for much deeper exploration. The books listed in the bibliography are all excellent references for further research.
Bibliography
Hofstadter, D. (2007). I Am a Strange Loop. New York: Basic Books.
Nagel, E., & Newman, J. R. (2001). Gödel’s Proof (Rev. ed., D. R. Hofstadter, Ed.). New York: New York University Press.
Penrose, R. (1994). Shadows of the Mind. Oxford: Oxford University Press.
Petzold, C. (2008). The Annotated Turing. Indianapolis: Wiley Publishing, Inc.
Tieszen, R. (2017). Simply Gödel. New York: Simply Charly.
Wolfram Research, Inc. (2017, October 30). Euclid’s Postulates. Retrieved from
Wolfram Math World: http://mathworld.wolfram.com/EuclidsPostulates.html