INITIALIZATION
Knowledgebase: ki-dev-large
Base Query: Please summarize the whole context. It is important that you include a summary for each file. All files should be included, so please make sure to go through the entire context
Model: gemini-1.5-pro-preview-0409
Use Curl?:
==================================================
**Elapsed Time: 0.00 seconds**
==================================================
ROUTING
Query type: summary
==================================================
**Elapsed Time: 2.16 seconds**
==================================================
PRIMER
Primer: You are Simon, a highly intelligent personal assistant in a system called KIOS. You are a chatbot that can read knowledgebases through the "CONTEXT" that is included in the user's chat message. Your role is to act as an expert at summarization and analysis. In your responses to enterprise users, prioritize clarity, trustworthiness, and appropriate formality. Be honest by admitting when a topic falls outside your scope of knowledge, and suggest alternative avenues for obtaining information when necessary. Make effective use of chat history to avoid redundancy and enhance response relevance, continuously adapting to integrate all necessary details in your interactions. Use as many tokens as possible to provide a detailed response.
==================================================
**Elapsed Time: 0.48 seconds**
==================================================
FINAL QUERY
Final Query: CONTEXT:
##########
File: 2D%20Game%20Development%20From%20Zero%20To%20Hero%20-%20Daniele%20Penazzo%20HTML%2C%20PDF%2C%20EBPUB%2C.pdf
Page: 602
Context:
```markdown
To make the game a bit more interesting, it could be an idea to implement different block types:

- **Explosive Blocks:** When destroyed, these blocks explode, destroying the surrounding blocks.
- **Multiple-hit Blocks:** They simply require more than one hit to destroy.
- **Score Blocks:** When destroyed, these blocks drop score items that slowly descend toward the bottom of the screen. If you catch these items with the paddle, you get extra points.

Furthermore, the ball should progressively get faster with gameplay; you can either do it every few bounces or just every bounce with the paddle. The choice is yours.

### Further Skills Required:

- Managing objects' states;
- Subclassing;
- Managing Game State.

### 28.3 Master Level

With the "master level," we are going to complete this game by adding powerups; they work in a similar way to the advanced level's score blocks, with a descending item that grants a certain status to either the ball or the paddle. You can make it a random chance for each block destruction or make a dedicated block. Here are some suggestions for powerups:

- Larger/Smaller paddle;
- Paddle that shoots to destroy blocks;
- Faster/Slower ball;
- Ball that goes through blocks, destroying them;
- "Sticky Paddle" that allows the player to stop and then release the ball;
- Multiball.

Furthermore, you can implement a sort of "biased bouncing" for the paddle: the further left or right on the paddle the ball touches, the more it will take a horizontal bias toward that direction. This way the center of the paddle is "neutral," keeping the normal bouncing mechanics, while the leftmost and rightmost sides allow the player to direct the ball the way they want.
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 1
Context:
```
# A Computational Logic

## Introduction

This document introduces the fundamental concepts of computational logic.

## Key Concepts

1. **Propositional Logic** - Definition - Examples
2.
**Predicate Logic** - Definition - Examples

## Applications

- Artificial Intelligence
- Computer Science
- Linguistics

## Conclusion

Understanding computational logic is essential for various fields such as computer science and artificial intelligence.
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 3
Context:
```markdown
# A Computational Logic

**Robert S. Boyer and J Strother Moore**

SRI International
Menlo Park, California

---

**ACADEMIC PRESS** A subsidiary of Harcourt Brace Jovanovich, Publishers

New York | London | Toronto | Sydney | San Francisco
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 4
Context:
```markdown
# Copyright (C) 1979 by Academic Press

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher.

---

## ACADEMIC PRESS, INC.
111 Fifth Avenue, New York, New York 10003

**United Kingdom Edition published by**
ACADEMIC PRESS, INC. (LONDON) LTD.
24/28 Oval Road, London NW1 7DX

---

### Library of Congress Cataloging in Publication Data

**Boyer, Robert S.** *A Computational Logic* - (ACM monographs series) Includes bibliographic references and index. 1. Automatic theorem proving. I. Moore, J Strother, Date, joint author. II. Title. III. Series: Association for Computing Machinery. ACM monograph series.
QA76.9.A96B68 519.4'9 79-51693
ISBN 0-12-122950-5

---

PRINTED IN THE UNITED STATES OF AMERICA

79 80 81 82 9 8 7 6 5 4 3 2 1
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 5
Context:
```markdown
To our wives, Anne and Liz
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 6
Context: I'm unable to assist with that.
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 7
Context:
```markdown
# Contents

## Preface vii

## 1 Introduction 1
### 1.1 Motivation 2
### 1.2 Our Formal Theory 3
### 1.3 Proof Techniques 3
### 1.4 Examples 3
### 1.5 Our Mechanical Theorem Prover 5
### 1.6 Artificial Intelligence or Logic? 6
### 1.7 Organization 7

## 2 A Sketch of the Theory and Two Simple Examples 9
### 2.1 An Informal Sketch of the Theory 9
### 2.2 A Simple Inductive Proof 18
### 2.3 A More Difficult Problem 20
### 2.4 A More Difficult Proof 23
### 2.5 Summary 26
### 2.6 Notes 27

## 3 A Precise Definition of the Theory 29
### 3.1 Syntax 29
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 8
Context:
```markdown
### 3.2 The Theory of If and Equal ... 31
### 3.3 Well-founded Relations ... 32
### 3.4 Induction ... 34
### 3.5 Shells ... 37
### 3.6 Natural Numbers ... 41
### 3.7 Literal Atoms ... 42
### 3.8 Ordered Pairs ... 44
### 3.9 Definitions ... 45
### 3.10 Lexicographic Relations ... 53
### 3.11 Lessp and Count ... 54
### 3.12 Conclusion ... 57

## 4 The Correctness of a Tautology Checker ... 59
### 4.1 Informal Development ... 60
### 4.2 Formal Specification of the Problem ... 63
### 4.3 The Formal Definition of Tautology Checker ... 66
### 4.4 The Mechanical Proofs ... 71
### 4.5 Summary ... 88
### 4.6 Notes ... 90

## 5 An Overview of How We Prove Theorems ... 91
### 5.1 The Role of the User ... 91
### 5.2 Clausal Representation of Conjectures ... 92
### 5.3 The Organization of Our Heuristics ... 93
### 5.4 The Organization of Our Presentation ... 95

## 6 Using Type Information to Simplify Formulas ... 97
### 6.1 Type Sets ... 97
### 6.2 Assuming Expressions True or False ... 100
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 9
Context:
```markdown
### 6.3 Computing Type Sets ... 101
### 6.4 Type Prescriptions ... 103
### 6.5 Summary ... 107
### 6.6 Notes ... 107

## 7 Using Axioms and Lemmas as Rewrite Rules
### 7.1 Directed Equalities ... 109
### 7.2 Infinite Looping ... 110
### 7.3 More General Rewrite Rules ... 111
### 7.4 An Example of Using Rewrite Rules ... 113
### 7.5 Infinite Backwards Chaining ... 115
### 7.6 Free Variables In Hypotheses ... 117

## 8 Using Definitions
### 8.1 Nonrecursive Functions ... 120
### 8.2 Computing Values ... 120
### 8.3 Diving In to See ... 122

## 9 Rewriting Terms and Simplifying Clauses
### 9.1 Rewriting Terms ... 127
### 9.2 Simplifying Clauses ... 131
### 9.3 The Reverse Example ... 134
### 9.4 Simplification In the Reverse Example ... 134

## 10 Eliminating Destructors
### 10.1 Trading Bad Terms for Good Terms ... 139
### 10.2 The Form of Elimination Lemmas ... 142
### 10.3 The Precise Use of Elimination Lemmas ... 143
### 10.4 A Nontrivial Example ... 144
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 10
Context:
```markdown
### 10.5 Multiple Destructors and Infinite Looping ... 148
### 10.6 When Elimination Is Risky ... 149
### 10.7 Destructor Elimination In the Reverse Example ... 151

## 11 Using Equalities
### 11.1 Using and Throwing Away Equalities ... 155
### 11.2 Cross-fertilization ... 156
### 11.3 A Simple Example of Cross-fertilization ... 157
### 11.4 The Precise Use of Equalities ... 159
### 11.5 Cross-fertilization In the Reverse Example ... 160

## 12 Generalization
### 12.1 A Simple Generalization Heuristic ... 163
### 12.2 Restricting Generalizations ... 165
### 12.3 Examples of Generalizations ... 167
### 12.4 The Precise Statement of the Generalization Heuristic ... 168
### 12.5 Generalization In the Reverse Example ... 170

## 13 Eliminating Irrelevance
### 13.1 Two Simple Checks for Irrelevance ... 173
### 13.2 The Reason for Eliminating Isolated Hypotheses ... 174
### 13.3 Elimination of Irrelevance In the Reverse Example ... 176

## 14 Induction and the Analysis of Recursive Definitions ... 177
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 11
Context:
```markdown
### 14.1 Satisfying the Principle of Definition ... 179
### 14.2 Induction Schemes Suggested By Recursive Functions ... 186
### 14.3 The Details of the Definition-time Analysis ... 196
### 14.4 Recursion In the Reverse Example ... 200

## 15 Formulating an Induction Scheme for a Conjecture
### 15.1 Collecting the Induction Candidates ... 201
### 15.2 The Heuristic Manipulation of Induction Schemes ... 206
### 15.3 Examples of Induction ... 214
### 15.4 The Entire Reverse Example ... 219

## 16 Illustrations of our Techniques Via Elementary Number Theory
### 16.1 Plus.right.id ... 226
### 16.2 Commutativity2.of.plus ... 227
### 16.3 Commutativity.of.plus ... 231
### 16.4 Associativity.of.plus ... 235
### 16.5 Times ... 235
### 16.6 Times.zero ... 235
### 16.7 Times.add1 ... 236
### 16.8 Associativity.of.times ... 239
### 16.9 Difference ... 243
### 16.10 Recursion.by.difference ... 244
### 16.11 Remainder ... 251
### 16.12 Quotient ... 251
### 16.13 Remainder.quotient.elim ... 252
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 12
Context:
```markdown
## 17 The Correctness of a Simple Optimizing Expression Compiler ... 259
### 17.1 Informal Development ... 261
### 17.2 Formal Specification of the Problem ... 265
### 17.3 Formal Definition of the Compiler ... 271
### 17.4 The Mechanical Proof of Correctness ... 274
### 17.5 Notes ... 287

## 18 The Correctness of a Fast String Searching Algorithm ... 291
### 18.1 Informal Development ... 292
### 18.2 Formal Specification of the Problem ... 301
### 18.3 Developing the Verification Conditions for the Algorithm ... 302
### 18.4 The Mechanical Proofs of the Verification Conditions ... 312
### 18.5 Notes ... 317

## 19 The Unique Prime Factorization Theorem ... 321
### 19.1 The Context ... 321
### 19.2 Formal Development of the Unique Prime Factorization Theorem ... 323
### 19.3 The Mechanical Proofs ... 327

## A Definitions Accepted and Theorems Proved By our System ... 341
## B The Implementation of the Shell Principle ... 391
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 13
Context:
```markdown
## C Clauses for our Theory
### C.1 Logical Definitions ... 395
### C.2 Axioms for Natural Numbers ... 396
### C.3 Axioms for Literal Atoms ... 396
### C.4 Axioms for Ordered Pairs ... 397
### C.5 A Sample Theorem In Clausal Form ... 397

## Index ... 399
## Bibliography ... 415
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 14
Context: I'm unable to view or process images. Please provide the text in question, and I'll help with the Markdown formatting.
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 15
Context:
```markdown
# Preface

Mechanical theorem-proving is crucial to the automation of reasoning about computer programs. Today, few computer programs can be mechanically certified to be free of “bugs.” The principal reason is the lack of mechanical theorem-proving power.

In current research on automating program analysis, a common approach to overcoming the lack of mechanical theorem-proving power has been to require that the user direct a proof-checking program. That is, the user is required to construct a formal proof employing only the simplest rules of inference, such as modus ponens, instantiation of variables, or substitution of equals for equals. The proof-checking program guarantees the correctness of the formal proof. We have found proof-checking programs too frustrating to use because they require too much direction.

Another approach to overcoming the lack of mechanical theorem-proving power is to use a weak theorem-proving program and to introduce axioms freely. Often these axioms are called “lemmas,” but they are usually not proved. While using a proof checker is only frustrating, introducing axioms freely is deplorable. This approach has been abused so far as to be ludicrous: we have seen researchers “verify” a program by first obtaining formulas that imply the program's correctness, then running the formulas through a simplifier, and finally assuming the resulting slightly simplified formulas as axioms.
Some researchers admit that these “lemmas” ought to be proved, but never get around to proving them because they lack the mechanical theorem-proving power. Others, however, believe that it is reasonable to assume lots of “lemmas” and never try to prove them. We are strongly opposed to this latter attitude because it so completely under-
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 16
Context:
```markdown
mines the spirit of proof, and we therefore reply to the arguments we have heard in its defense.

1. It is argued that the axioms assumed are obvious facts about the concepts involved. We say that a great number of mistakes in computer programs arise from false “obvious” observations, and we have already seen researchers present proofs based on false lemmas. Furthermore, the concepts involved in the complicated computer systems one hopes eventually to certify are so insufficiently canonized that one man’s “obvious” is another man’s “difficult” and a third man’s “false.”

2. It is argued that one must assume some axioms. We agree, but observe that mathematicians do not contrive their axioms to solve the problem at hand. Yet often the “lemmas” assumed in program verification are remarkably close to the main idea or trick in the program being checked.

3. It is argued that mathematicians use lemmas. We agree. In fact, our theorem-proving system relies heavily on lemmas. But no proof is complete until the lemmas have been proved, too. The assumption of lemmas in program proving often amounts to sweeping under the rug the hard and interesting inferences.

4. It is argued that the definition of concepts necessarily involves the addition of axioms. But the axioms that arise from proper definitions, unlike most “lemmas,” have a very special form that guarantees two important properties. First, adding a definition never renders one’s theory inconsistent.
Second, the definition of a concept involved in the proof of a subsidiary result (but not in the statement of one’s main conjecture) can be safely forgotten. It does not matter if the definition was of the “wrong” concept. But an ordinary axiom (or “lemma”), once used, always remains a hypothesis of any later inference. If the axiom is “wrong,” the whole proof may be worthless and the validity of the main conjecture is in doubt. ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 17 Context: ```markdown # The Use of Induction in Proving Theorems One reason that researchers have had to assume “lemmas” so freely is that they have not implemented the principle of mathematical induction in their theorem-proving systems. Since mathematical induction is a fundamental rule of inference for the objects about which computer programmers think (e.g., integers, sequences, trees), is it surprising that anyone would implement a theorem prover for program verification that could not make inductive arguments? Why has the mechanization of mathematical induction received scant attention? Perhaps it has been neglected because the main research on mechanical theorem-proving, the resolution theorem-proving tradition (see Chang and Lee [15] and Loveland [29]), does not handle axiom schemes, such as mathematical induction. We suspect, however, that the mechanization of mathematical induction has been neglected because many researchers believe that the only need for induction is in program semantics. Program semantics enables one to obtain from a given program and specification some conjectures (“verification conditions”) that imply the program is correct. The study of program semantics has produced a plethora of ways to use induction. Because some programs do not terminate, the role of induction in program semantics is fascinating and subtle. 
Great effort has been invested in mechanizing induction in program semantics. For example, the many “verification condition generation” programs implicitly rely upon induction to provide the semantics of iteration. But program semantics is not the only place induction is necessary. The conjectures that verification condition generators produce often require inductive proofs because they concern inductively defined concepts such as the integers, sequences, trees, grammars, formulas, stacks, queues, and lists. If you cannot make an inductive argument about an inductively defined concept, then you are doomed to assume what you want to prove. This book addresses the use of induction in proving theorems rather than the use of induction in program semantics. We will present a formal theory providing for inductively constructed objects, recursive definitions, and inductive proofs. Readers familiar with programming languages will see a strong connection. ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 18 Context: ```markdown # xvi Preface There is a stylistic resemblance between the language of our theory and that fragment of the programming language LISP known as “pure LISP” (see McCarthy et al. [35]). We chose pure LISP as a model for our language because pure LISP was designed as a mathematical language whose formulas could be easily represented within computers. Because of its mathematical nature (e.g., one cannot “destructively transform” the ordered pair (7,3) into (8,3)), pure LISP is considered a “toy” programming language. It is an easy jump to the non sequitur: “The language and theory presented in this book are irrelevant to real program analysis problems because they deal with a toy programming language.” But that statement misses the point. It is indeed true that our theory may be viewed as a programming language. 
In fact, many programs are naturally written as functions in our theory. But our theory is a mathematical tool for making precise assertions about the properties of discrete objects. As such, it can be used in conjunction with any of the usual program specification methods to state and prove properties of programs written in any programming language whatsoever. When we began our research into proving theorems about recursive functions [7], [38], we thought of ourselves as proving theorems only about pure LISP and viewed our work as an implementation of McCarthy’s [34] functional style of program analysis. However, we now regard recursion as a natural alternative to quantification when making assertions about programs. Using recursive functions to make assertions about computer programs no more limits the programming language to one that implements recursion than using the ordinary quantifiers limits the programming language to one that implements quantification! In this book we use both the functional style and Floyd’s inductive assertion style [18] of program specification in examples. (For the benefit of readers not familiar with the program verification literature, we briefly explain both ideas when they are first used.) We have relegated the foregoing remarks to the preface because we are not in general interested in program semantics in this book. We are interested in how one proves theorems about inductively constructed objects. ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 19 Context: ```markdown # Acknowledgments Our work on induction and theorem-proving in general has been deeply influenced by that of Bledsoe [3], [4]. Some early versions of our work have been previously reported in [7], [38], [39], [40], and [8]. Work closely related to our work on induction has been done by Brotz [11], Aubin [2], and Cartwright [14]. 
We thank Anne Boyer, Jacqueline Handley, Paul Gloess, John Laski, Greg Nelson, Richard Pattis, and Jay Spitzner for their careful criticisms of this book. We also thank Jay Spitzner for showing us how to prove the prime factorization theorem. Finally, we thank our wives and children for their usually cheerful long-suffering through the years of late hours behind this book. Our work has been supported by the National Science Foundation under Grant MCS-7681425 and by the Office of Naval Research under Contract N00014-75-C-0816. We are very grateful to these agencies for their support. --- Computer Science Laboratory SRI International Menlo Park, California December, 1978 ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 20 Context: I'm unable to assist with that. #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 21 Context: ```markdown # Chapter 1 ## Introduction Unlike most texts on logic and mathematics, this book is about how to prove theorems rather than the proofs of specific results. We give our answers to such questions as: - When should induction be used? - How does one invent an appropriate induction argument? - When should a definition be expanded? We assume the reader is familiar with the mathematical notion of equality and with the logical connectives “and,” “or,” “not,” and “implies” of propositional calculus. We present a logical theory in which one can introduce inductively constructed objects (such as the natural numbers and finite sequences) and prove theorems about them. Then we explain how we prove theorems in our theory. We illustrate our proof techniques by using them to discover proofs of many theorems. 
For example, we formalize a version of the propositional calculus in our theory, and, using our techniques, we formally prove the correctness of a decision procedure for that version of propositional calculus. In another example, we develop elementary number theory from axioms introducing the natural numbers and finite sequences through the prime factorization theorem. Since our theory is undecidable, our proof techniques are not perfect. But we know that they are unambiguous, well integrated, and successful on a large number of theorems because ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 22 Context: ```markdown # CHAPTER 1. INTRODUCTION We have programmed a computer to follow our rules and have observed the program prove many interesting theorems. In fact, the proofs we describe are actually those discovered by our program. ## 1.1 Motivation Suppose it were practical to reason, mechanically and with mathematical certainty, about computer programs. For example, suppose it were practical to prove mechanically that a given program satisfied some specification, or exhibited the same output behavior as another program, or executed in certain time or space bounds. [^1] Then there would follow a tremendous improvement in the reliability of computer programs and a subsequent reduction of the overall cost of producing and maintaining programs. To reason mechanically about programs, one must have a formal program semantics, a formal logical theory, and a mechanical theorem prover for that theory. The study of formal program semantics has provided a variety of alternative methods for specifying and modeling programs. 
But all the methods have one thing in common: they reduce the question “Does this program have the desired property?” to the question “Are these formulas theorems?” Because of the nature of computers, the formulas in question almost exclusively involve inductively constructed mathematical objects: the integers, finite sequences, n-tuples, trees, grammars, expressions, stacks, queues, buffers, etc. Thus, regardless of which program semantics we use to obtain the formulas to be proved, our formal theory and mechanical theorem prover must permit definition and proof by induction. This book is about such a theory and a mechanical theorem prover for it. [^1]: See Manna and Waldinger [31] for a description of the many other ways that formal reasoning can be usefully applied in computer programming. ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 23 Context: ```markdown # 1.2 Our Formal Theory We will present a logical theory that we have tailored to the needs of thinking about computer programs. It provides for the introduction of new “types” of objects, a general principle of induction on well-founded relations (Noetherian Induction [6]), and a principle permitting the definition of recursive functions. Recursive functions offer such a powerful form of expression when dealing with discrete mathematics (such as underlies computer programs) that we do not use any additional form of quantification.² ## 1.3 Proof Techniques After defining our formal theory, we describe many techniques we have developed for proving theorems in it. We devote eleven chapters to the description of these techniques and how, when, and where they should be applied to prove theorems. The most important of these techniques is the use of induction. The formulation of an induction argument for a conjecture is based on an analysis of the recursive definitions of the concepts involved in the conjecture. 
Thus the use of recursively defined functions facilitates proving theorems about inductively defined objects. Many of the other proof techniques are designed to support our induction heuristics. ## 1.4 Examples All the techniques are illustrated with examples. Most of our techniques are first illustrated with simple theorems about functions on lists and trees. These elementary functions are simple to define and are worth knowing if one is interested in mechanical theorem-proving (as we assume many readers will be). In ²The program of using recursive functions and induction to understand computer programs, and the use of computers to aid the generation of proofs, were begun by McCarthy [33],[34]. See also Burstall [12]. The idea of using recursive functions and induction but no other form of quantification in the foundations of mathematics (or at least of arithmetic) was first presented by Skolem in 1919 [52]. See also Goodstein [22]. ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 24 Context: ```markdown # CHAPTER 1. INTRODUCTION In addition, it is more fun to work through the proofs of novel theorems than through the proofs of, say, the familiar theorems of elementary number theory. We have also included four complicated examples, chosen from several different subject domains, to illustrate the general applicability of the theory and our proof methods. 1. **The Tautology Checker** In the first such example, we write a tautology checker as a recursive function on trees representing formulas in propositional calculus. We exercise the theory and proof techniques in an interesting way by stating and proving that the tautology checker always returns an answer, recognizes only tautologies, and recognizes all tautologies. 
This example serves two important purposes: it illustrates the theory and proof techniques in use, and it gives the reader a precise definition of a simple mechanical theorem prover (i.e., the tautology checker) without requiring a digression into programming languages or computational issues.

2. **Algorithm for Arithmetic Expressions** In the second major example, we prove the correctness of a simple algorithm that "optimizes" and "compiles" arithmetic expressions into sequences of instructions for a hand-held calculator. In order to specify the algorithm and the calculator, we use (and briefly explain) McCarthy's "functional semantics" [34] for programs. This example is the first part of the book that deals explicitly with computational (rather than mathematical) ideas. Because the example is simple (compared to real compilers and real hardware) and because it is ideally suited to our mathematical language, the reader unfamiliar with computing should be able to read this chapter comfortably (and even learn the basic ideas behind compiling expressions and one style of program specification).

3. **Fast String Searching Algorithm** In the third major example, we prove the correctness of a fast string searching algorithm. The algorithm finds the first occurrence of one character sequence in another (if such an occurrence exists), and on the average, is the fastest such algorithm currently known. In this example we explain and use a second program specification style, called the "inductive assertion" method of Floyd [18].

4. **Unique Prime Factorization Theorem** In the fourth major example, we prove the unique prime factorization theorem of elementary number theory (the proof is presented in Chapter 19).
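The three properties claimed for the tautology checker (it always returns an answer, recognizes only tautologies, and recognizes all tautologies) can be illustrated with a toy checker. The sketch below is *not* the book's IF-expression-based definition; the tuple encoding of formulas and the function names are assumptions made purely for illustration, and it decides tautologyhood by brute-force truth-table enumeration:

```python
from itertools import product

# Formulas as nested tuples: ("var", name), ("not", f), ("and", f, g),
# ("or", f, g), ("implies", f, g). This encoding is an illustrative
# assumption, not the book's IF-expression representation.

def variables(f):
    """Collect the variable names occurring in a formula."""
    if f[0] == "var":
        return {f[1]}
    return set().union(*(variables(sub) for sub in f[1:]))

def evaluate(f, env):
    """Evaluate a formula under an assignment of booleans to variables."""
    tag = f[0]
    if tag == "var":
        return env[f[1]]
    if tag == "not":
        return not evaluate(f[1], env)
    if tag == "and":
        return evaluate(f[1], env) and evaluate(f[2], env)
    if tag == "or":
        return evaluate(f[1], env) or evaluate(f[2], env)
    if tag == "implies":
        return (not evaluate(f[1], env)) or evaluate(f[2], env)
    raise ValueError(tag)

def tautology(f):
    """True iff f evaluates to True under every assignment.

    Enumerating the finitely many assignments always terminates,
    accepts only tautologies, and accepts all of them -- the three
    properties the book proves of its checker."""
    vs = sorted(variables(f))
    return all(evaluate(f, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

# (P and Q) implies (P or Q) is a tautology; P implies Q is not.
p, q = ("var", "P"), ("var", "Q")
print(tautology(("implies", ("and", p, q), ("or", p, q))))  # True
print(tautology(("implies", p, q)))                         # False
```

Because the enumeration is finite, termination is immediate; the other two properties correspond to the soundness and completeness of the truth-table method.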
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 25 Context: ```markdown
# 1.5 Our Mechanical Theorem Prover

It is one thing to describe a loosely connected set of heuristics that a human might use to discover proofs and quite a different thing to formulate them so that a machine can use them to discover proofs. [^3] All of the heuristics described have been implemented and together comprise our automatic theorem-proving program. Our description of the heuristics makes little or no reference to the fact that they can be mechanized. However, we want competent readers to be able to reproduce and build upon our results. Thus, we are more precise than we would be had we desired only to teach a student how we prove theorems.

All of the example proofs discussed in this book are actually produced by our program. We present the theorem prover's proofs in an informal style. While describing proof techniques, we present small inference steps, but as we move on to the interesting examples we ascend to the level upon which humans usually deal with proofs. By the time we reach the prime factorization theorem, the proofs we describe are very much like those in number theory textbooks: we make large inference leaps, use lemmas without describing their proofs, and dismiss whole theorems with phrases like "the proof is by induction on X." However, our high-level descriptions of the machine's proofs should not be confused with what the machine does: before pronouncing a conjecture proved, the machine discovers a complete sequence of applications of our proof techniques establishing the conjecture from axioms and previously proved lemmas. Furthermore, given the detailed presentation of our proof techniques

[^3]: For a survey of non-resolution theorem-proving, see [5].
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 26 Context: ```markdown
# CHAPTER 1. INTRODUCTION

and their orchestration, the reader should also be able to discover the proofs mechanically.

It is perhaps easiest to think of our program much as one would think of a reasonably good mathematics student: given the axioms of Peano, he could hardly be expected to prove (much less discover) the prime factorization theorem. However, he could cope quite well if given the axioms of Peano and a list of theorems to prove (e.g., "prove that addition is commutative," ... "prove that multiplication distributes over addition," ... "prove that the result returned by the GCD function divides both of its arguments," ... "prove that if the products over two sequences of primes are identical, then the two sequences are permutations of one another").

The examples discussed are just part of a standard sequence of approximately 400 definitions and theorems the program is expected to reestablish whenever we incorporate a new technique. The sequence contains theorems some readers will have trouble proving before they read the heuristics. In Appendix A we list the entire sequence as evidence that the heuristics are generally useful and well integrated. It should be noted that when the latter theorems of the sequence are proved the theorem prover is aware of the earlier theorems. The fact that previously proved results are remembered permits their use as lemmas in later proofs. The theorem prover would fail to prove many of its most interesting theorems in the absence of such lemmas. However, the more a theorem-proving program knows, the more difficult it becomes for it to prove theorems because the program is often tempted to consider using theorems that have no relevance to the task at hand.
That our theorem-proving program does prove the entire list of theorems sequentially is a measure of its capacity to avoid being confused by what it knows.

## 1.6 Artificial Intelligence or Logic?

While drawing heavily upon important facts of mathematical logic, our research is really more artificial intelligence than logic. The principal question we ask (and sometimes answer) is "how do we discover proofs?" ```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 27 Context: ```markdown
Theorem-proving is an impossible task because certain theories are known to be undecidable or super-super-exponential in complexity. Such metamathematical results are, of course, no more of an impediment to mechanical theorem-proving than to human theorem-proving.¹ They only make the task more interesting.

# 1.7 Organization

This book is structured as follows. We begin informally in Chapter 2 by sketching our logical theory, formulating a simple conjecture, and proving that conjecture by using many of the techniques we will discuss. We present the theory formally in Chapter 3. In Chapter 4 we state and prove in our theory the correctness of a function for recognizing tautologies. In Chapters 5 through 15 we develop and explain our heuristics for proving theorems. These heuristics are illustrated with simple proof steps taken from many theorems. In Chapter 16, our theorem prover develops an interesting initial segment of elementary number theory. Finally, in Chapters 17 through 19 we discuss three complex examples: the proof of correctness of a simple optimizing compiler for arithmetic expressions, the correctness of a fast string searching algorithm, and the proof of the unique prime factorization theorem.
Readers interested in ascertaining the power of our automatic theorem-proving program and in seeing how recursive functions can be used to formalize a variety of problems should first read the informal overview (Chapter 2) and then look at the chapters on the tautology checker (Chapter 4), compiler (Chapter 17), string searching algorithm (Chapter 18), and prime factorization theorem (Chapter 19). The book has three appendices. The first lists all the definitions and theorems the theorem prover routinely proves. The second appendix presents implementation details concerning the introduction of new types. The third appendix exhibits, in clausal form, the axioms of our theory. ¹ Nils Nilsson, private communication. ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 28 Context: ```markdown # Chapter 1: Introduction Content of the introduction goes here... ## Key Concepts 1. **Concept One** - Description of concept one. 2. **Concept Two** - Description of concept two. ## Important Points - **Point A**: Explanation of point A. - **Point B**: Explanation of point B. ## Table of Information | Header 1 | Header 2 | Header 3 | |----------|----------|----------| | Item 1 | Item 2 | Item 3 | | Item 4 | Item 5 | Item 6 | ## References - [Link to resource](http://example.com) - [Another link](http://example.org) ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 29 Context: ```markdown # Chapter 2 ## A Sketch of the Theory and Two Simple Examples To prove theorems formally one must have in mind a formal theory in which the proofs are to be constructed. We will present our formal theory in Chapter 3. Following the precise presentation of the theory, we describe, in great detail, how we discover proofs. 
However, before descending into detail, we here sketch the theory informally, exhibit and explain several simple recursive function definitions, and (without regard for mechanization) work through several simple inductive proofs.

### 2.1 An Informal Sketch of the Theory

#### 2.1.1 If and Equal

We employ the prefix notation of Church's lambda calculus [16] and McCarthy's LISP [35] when writing down terms. Thus, we write

```
(plus (h x) (b))
```

when others write

```
plus(h(x), b())
```

or even

```
h(x) + b()
```
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 30 Context: ```markdown
# CHAPTER 2. A SKETCH OF THE THEORY AND TWO SIMPLE EXAMPLES

to denote the application of the two-argument function PLUS to (1) the application of H to X and (2) the constant (i.e., function of no arguments) B.

We wish to make it easy to define new functions and predicates on inductively constructed objects. For example, given axioms for the natural numbers, we would like to define functions such as addition and multiplication, and predicates such as whether a given number is prime; given the axioms for sequences, we would like to define operations such as sequence concatenation and predicates such as whether one sequence is a permutation of another. We find it most convenient to define new functions with equality axioms of the form:

```
(f z₁ ... zₙ) = body
```

where certain constraints are placed on f, the zᵢ, and the term body. It is often necessary to make conditional definitions. For example, (PLUS X Y) is defined to be one thing if X=0, and another thing if X≠0. In order for definitions to be equality axioms, the right-hand side of the definition, body, must be a term. But in the usual treatment of logic, it is not permitted to embed a proposition (such as X=0) in a term. Thus, we find it necessary to reproduce the logic of truth functions and equality at the term level.
We add to the usual propositional calculus with variables, function symbols, and equality an axiom supposing the existence of two distinct constants, (TRUE) and (FALSE) (henceforth written T and F), and four axioms defining the new function symbols EQUAL and IF. The axioms are:

- **Axiom** T ≠ F
- **Axiom** X = Y → (EQUAL X Y) = T
- **Axiom** X ≠ Y → (EQUAL X Y) = F
- **Axiom** X = F → (IF X Y Z) = Z
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 31 Context: ```markdown
# 2.1. AN INFORMAL SKETCH OF THE THEORY

- **Axiom** \( X \neq F \rightarrow (IF \, X \, Y \, Z) = Y \)

We can paraphrase the above axioms as follows: T is not F. For all \( X \) and \( Y \), \( (EQUAL \, X \, Y) \) is T if \( X \) is \( Y \), and is F if \( X \) is not \( Y \). \( (IF \, X \, Y \, Z) \) is Y if \( X \) is non-F and is Z otherwise. Thus \( (IF \, (EQUAL \, X \, 0) \, Y \, Z) \) is equal to Y if \( X \) is 0 and is equal to Z otherwise.

Strictly speaking, we never define "predicates," for they can only be used in the construction of formulas and thus cannot be used in terms (such as function bodies). Without loss of generality, we restrict our attention to functions. For example, we will later define the function PRIME, so that \( (PRIME \, X) \) is T if \( X \) is prime and F otherwise. To permit terms or functions to test logical combinations of expressions, it is convenient to define the functional versions of "not," "and," and "implies." For example, we want \( (NOT \, P) \) to be T if \( P \) is F and to be F if \( P \) is not F. Similarly, we want \( (AND \, P \, Q) \) to be T if both \( P \) and \( Q \) are non-F, and F otherwise.
Thus, we define the functions NOT, AND, OR, and IMPLIES as follows:

## Definition
**(NOT \, P)**
\( (IF \, P \, F \, T) \)

## Definition
**(AND \, P \, Q)**
\( (IF \, P \, (IF \, Q \, T \, F) \, F) \)

## Definition
**(OR \, P \, Q)**
\( (IF \, P \, T \, (IF \, Q \, T \, F)) \)

## Definition
**(IMPLIES \, P \, Q)**
\( (IF \, P \, (IF \, Q \, T \, F) \, T) \)

(We adopt the notational convention of treating AND and OR as though they took an arbitrary number of arguments.) ```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 32 Context: ```markdown
# CHAPTER 2. A SKETCH OF THE THEORY AND TWO SIMPLE EXAMPLES

For example, \( ( \text{AND} \, p \, q \, r ) \) is an abbreviation for \( ( \text{AND} \, p \, ( \text{AND} \, q \, r )) \). It is easy to show that these definitions capture the semantics of the ordinary logical connectives. For example, it is a theorem that:

\[ (p \neq F \rightarrow q \neq F) \leftrightarrow (\text{IMPLIES} \, p \, q) \neq F. \]

Thus, it is also easy to prove that:

\[ (\text{IMPLIES} \, (\text{AND} \, p \, q) \, (\text{OR} \, p \, q)) \text{ is not equal to } F. \]

Because of our emphasis on terms rather than formulas, we find it convenient to call a term, \( p \), a "theorem" if it can be proved that \( p \neq F \). Of course, calling a term a "theorem" is an abuse of terminology, since theorems are in fact understood to be formulas. However, whenever we use a term, \( p \), as though it were a formula, it is always acceptable to read the formula \( p \neq F \) in its place.

## 2.1.2 Inductively Constructed Objects

Any theory concerned with the mathematics behind computing must provide inductively constructed objects. For example, it is clear that we must be able to talk formally about the natural numbers, and so we will add to our theory axioms for the natural numbers.
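The axioms for IF and EQUAL and the connectives derived from them can be transcribed directly into executable form as a sanity check. The following Python sketch is an illustrative model only (representing T and F as distinct strings is an assumption), not part of the formal theory:

```python
# Model T and F as distinct constants; any non-F value counts as "true",
# exactly as in the axioms for IF.
T, F = "T", "F"

def IF(x, y, z):
    """(IF X Y Z) is Z when X is F, and Y otherwise."""
    return z if x == F else y

def EQUAL(x, y):
    """(EQUAL X Y) is T when X is Y, and F otherwise."""
    return T if x == y else F

# The connectives, transcribed from their IF definitions.
def NOT(p):        return IF(p, F, T)
def AND(p, q):     return IF(p, IF(q, T, F), F)
def OR(p, q):      return IF(p, T, IF(q, T, F))
def IMPLIES(p, q): return IF(p, IF(q, T, F), T)

# Spot-check the truth tables.
assert NOT(F) == T and NOT(T) == F
assert AND(T, T) == T and AND(T, F) == F
assert OR(F, F) == F and OR(F, T) == T
assert IMPLIES(T, F) == F and IMPLIES(F, F) == T
# (IMPLIES (AND P Q) (OR P Q)) is never F:
assert all(IMPLIES(AND(p, q), OR(p, q)) == T for p in (T, F) for q in (T, F))
```

The last assertion replays, by exhaustive evaluation, the example theorem that (IMPLIES (AND p q) (OR p q)) is not equal to F.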
In formalizing properties of programs, we have found that it is convenient to allow the introduction of “new” types of inductively constructed objects, of which the integers are just a single example. To eliminate the possibility that the axioms for a new type will render the theory inconsistent (or not fully specify the properties of the type) we have included in our theory a general principle under which one can introduce new types. We call the principle the “shell” principle. The name “shell” derives from imagining the new objects to be colored structures encapsulating a fixed number of components (possibly of certain colors). It is actually with the shell principle that we add the axioms defining the nonnegative integers. We also use the principle to add the set of literal atoms (i.e., atomic symbols such as “NIL” and “X”), and the set of ordered pairs. For example, to axiomatize the set of ordered pairs we incant: ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 33 Context: ```markdown ## 2.1 AN INFORMAL SKETCH OF THE THEORY Add the shell `CONS`, with recognizer `LISTP`, and accessors `CAR` and `CDR` that return `NIL` on non-`LISTP` objects, which is a shorthand for adding a set of axioms that specifies `(CONS X Y)` to be the red, say, ordered pair containing `X` and `Y`, `LISTP` to be the function that returns `T` if its argument is a red pair and `F` otherwise, and `CAR` and `CDR` to be the functions that return the first and second components of red pairs (or `NIL` if given an object other than a red pair). 
For example, here are some of the axioms added by the above incantation:

### Axioms

- **Axiom LISTP.CONS:** `(LISTP (CONS X1 X2))`
- **Axiom CAR.CONS:** `(EQUAL (CAR (CONS X1 X2)) X1)`
- **Axiom CDR.CONS:** `(EQUAL (CDR (CONS X1 X2)) X2)`
- **Axiom CAR/CDR.ELIM:** `(IMPLIES (LISTP X) (EQUAL (CONS (CAR X) (CDR X)) X))`
- **Axiom CONS.EQUAL:** `(EQUAL (EQUAL (CONS X1 X2) (CONS Y1 Y2)) (AND (EQUAL X1 Y1) (EQUAL X2 Y2)))`
- **Axiom CAR.LESSP:** `(IMPLIES (LISTP X) (LESSP (COUNT (CAR X)) (COUNT X)))`
- **Axiom CDR.LESSP:** `(IMPLIES (LISTP X) (LESSP (COUNT (CDR X)) (COUNT X)))`

`LESSP` is the "less-than" function on the nonnegative integers. The complete set of axioms for the `CONS` shell is given schematically in Chapter 3. Among the axioms not shown above are, for example, axioms specifying that `(CAR X)` is `NIL` if `X` is not a ```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 34 Context: ```markdown
# CHAPTER 2. A SKETCH OF THE THEORY AND TWO SIMPLE EXAMPLES

LISTP, and that the set of LISTP objects does not overlap the set of numbers, literal atoms, or other types.

We use ordered pairs in a variety of ways. For example, to talk about finite sequences (sometimes called "lists") we think of "NIL" as the empty sequence, and we think of `(CONS X Y)` as the sequence whose first element is `X` and whose remaining elements are those of `Y`. Thus, we think of

```
(CONS 1 (CONS 2 (CONS 3 "NIL")))
```

as the sequence containing 1, 2, and 3.

## 2.1.3 Recursively Defined Functions

Our theory includes a principle of definition allowing the introduction of recursive functions. The principle is based on the notion of well-founded relations. In particular, `(f x1 ... xn)` may be defined to be some term, body, involving "recursive calls" of the form `(f y1 ... yn)`, provided there is a measure and well-founded relation such that in every recursive call the measure of the `yi` is smaller, according to the well-founded relation, than the measure of the `xi`. Since "well-founded" means that there is no infinite sequence of objects, each of which is smaller than its predecessor in the sequence, the above restriction, together with a few simple syntactic requirements, ensures that there exists one and only one function satisfying the definition. The existence of a function satisfying the definition implies that adding the definition as an axiom does not render the theory inconsistent.

We explicitly assume that `LESSP` is well-founded. Furthermore, when the `CONS` shell is added, the axioms `CAR.LESSP` and `CDR.LESSP`, above, inform us that the `CAR` and `CDR` of a pair both have smaller size (as measured by the function `COUNT`) than the pair itself. Thus, if in every recursive call of a function some particular argument is replaced by its own `CAR` or `CDR`, and we can establish that in each such case that argument is a pair, then the principle of definition would admit the function.

To illustrate a definition by recursion, suppose we wished to concatenate two finite sequences `X` and `Y`. If `X` is empty, then the result of concatenating `X` and `Y` is just `Y`. If `X` is nonempty, then ```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 35 Context: ```markdown
# 2.1. AN INFORMAL SKETCH OF THE THEORY

X has a first element, `(CAR X)`, and some remaining elements, `(CDR X)`. The concatenation of X and Y in this case is the sequence whose first element is `(CAR X)` and whose remaining elements are those in the concatenation of `(CDR X)` and Y.
Formally, we define the function `APPEND` so that `(APPEND X Y)` is the concatenation of X and Y:

## Definition

```lisp
(APPEND X Y)
(IF (LISTP X)
    (CONS (CAR X) (APPEND (CDR X) Y))
    Y)
```

`APPEND` is a particularly simple recursive function. It is easy to see why it is accepted under our principle of definition: the axiom `CDR.LESSP`, above, establishes that `(COUNT X)` gets `LESSP`-smaller in the recursive call. Later in the book we will introduce more interesting recursive functions: functions for which a measure as obvious as the size of one argument will not suffice to justify their definition.

By the axioms of equality, we can replace any instance of `(APPEND X Y)` with the corresponding instance of the right-hand side of the definition above. For example, we can show that `(APPEND (CONS A D) Y)` is equal to `(CONS A (APPEND D Y))` as follows. By definition, `(APPEND (CONS A D) Y)` is equal to the instantiated body:

```lisp
(IF (LISTP (CONS A D))
    (CONS (CAR (CONS A D)) (APPEND (CDR (CONS A D)) Y))
    Y)
```

By the axiom `LISTP.CONS`, above, `(LISTP (CONS A D))` is non-`F`. But, by the axioms defining `IF`, we know that when the first argument in an `IF`-expression is non-`F`, the `IF`-expression is equal to its second argument. Thus, the above `IF`-expression can be replaced by its second argument:

```lisp
(CONS (CAR (CONS A D)) (APPEND (CDR (CONS A D)) Y))
```

Finally, applying the axioms `CAR.CONS` and `CDR.CONS`, above, we rewrite `(CAR (CONS A D))` to A, and `(CDR (CONS A D))` to D, obtaining:

```lisp
(CONS A (APPEND D Y))
```
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 36 Context: ```markdown
# CHAPTER 2. A SKETCH OF THE THEORY AND TWO SIMPLE EXAMPLES

as desired. Similarly, we can prove that `(APPEND "NIL" Y)` is `Y`.
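Under the obvious representation of CONS pairs as two-element tuples and the literal atom "NIL" as a marker value (both representation choices are assumptions made for illustration, not part of the theory), APPEND and the simplification steps above can be replayed in Python:

```python
NIL = "NIL"  # stands in for the literal atom "NIL"

def CONS(x, y): return (x, y)
def LISTP(x):   return isinstance(x, tuple)
def CAR(x):     return x[0] if LISTP(x) else NIL
def CDR(x):     return x[1] if LISTP(x) else NIL

def APPEND(x, y):
    """(APPEND X Y): recursion on the CAR/CDR structure of X.

    Each recursive call is on (CDR X), which has smaller COUNT,
    mirroring the measure that admits APPEND under the principle
    of definition."""
    if LISTP(x):
        return CONS(CAR(x), APPEND(CDR(x), y))
    return y

# Appending the sequence 1, 2, 3 to the sequence 4, 5.
one_two_three = CONS(1, CONS(2, CONS(3, NIL)))
four_five = CONS(4, CONS(5, NIL))
expected = CONS(1, CONS(2, CONS(3, CONS(4, CONS(5, NIL)))))
assert APPEND(one_two_three, four_five) == expected
assert APPEND(NIL, four_five) == four_five  # (APPEND "NIL" Y) is Y
```

The two assertions correspond to the simplification result computed step by step in the text and to the fact that (APPEND "NIL" Y) is Y.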
We can use a series of such simplifications to "compute" the result of concatenating the sequence containing 1, 2, and 3, to the sequence containing 4 and 5:

```lisp
(APPEND (CONS 1 (CONS 2 (CONS 3 "NIL")))
        (CONS 4 (CONS 5 "NIL")))
= (CONS 1 (APPEND (CONS 2 (CONS 3 "NIL")) (CONS 4 (CONS 5 "NIL"))))
= (CONS 1 (CONS 2 (APPEND (CONS 3 "NIL") (CONS 4 (CONS 5 "NIL")))))
= (CONS 1 (CONS 2 (CONS 3 (APPEND "NIL" (CONS 4 (CONS 5 "NIL"))))))
= (CONS 1 (CONS 2 (CONS 3 (CONS 4 (CONS 5 "NIL")))))
```

Using recursion, we can introduce, under our principle of definition, almost all the concepts in which we are interested. Indeed, recursion is a very important tool when dealing with inductively constructed objects such as the integers or sequences. For instance, recursion can be regarded as a form of quantification: we can use recursion to check that all or some elements of a sequence have some property. We do not use any other form of quantification in our formal theory.

## 2.1.4 Induction

Because we are interested in proving theorems about inductively constructed objects such as the natural numbers, sequences, pairs, etc., we need a rule of inference that embodies the construction process itself. For example, we know that if \(X\) is a pair, then it can be constructed by applying `CONS` to two "previously" constructed objects, namely `(CAR X)` and `(CDR X)`. Thus, we can prove that some property holds for all \(X\) by considering two cases.
The first case, called the "base case," is to ```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 37 Context: ```markdown
## 2.1 An Informal Sketch of the Theory

prove that all nonpairs have the property in question. The second case, called the "induction step," is to assume that \( X \) is a pair and that \( (CAR \, X) \) and \( (CDR \, X) \) have the desired property, and to prove that \( X \) has the property. Such a proof is called a proof by "induction."

The magic idea behind induction, the idea that made it appear unsound to both authors when they first encountered the idea in high school, is that one gets to assume instances of the conjecture being proved during its own proof. Why is induction sound? For example, why can we conclude, after proving the two cases above, that any \( X \) must have the property? Suppose some object does not have the property. Then let \( X \) be a minimal object not having the property, where we compare two such objects by comparing their COUNTs using the LESSP function. There is a minimal object not having the property in question since, by the well-foundedness of LESSP, there is no infinite sequence of objects, each of which has smaller COUNT than the previous one. \( X \) must be a pair because otherwise the base case establishes that \( X \) has the property. But if \( X \) is a pair, \( (CAR \, X) \) and \( (CDR \, X) \) are both smaller than \( X \) (as measured by COUNT and LESSP). Therefore, since everything smaller than \( X \) has the property in question, \( (CAR \, X) \) and \( (CDR \, X) \) have the property. But in that case, the induction step establishes that \( X \) must have it also. Thus, the assumption that some object does not have the property has led to a contradiction.
In general, the induction principle in our theory permits one to assume arbitrary instances of the conjecture being proved, provided those instantiations decrease some measure in some well-founded ordering. Because our induction principle allows the use of arbitrary measures and well-founded relations, we can make inductive arguments that are much more subtle than the "structural induction" illustrated below.1 We will illustrate more subtle inductions later in the book. Throughout this chapter we confine ourselves to structural induction. 1. The use of structural induction to prove programs correct is beautifully described by Burstall in [12]. ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 38 Context: ```markdown # Chapter 2: A Sketch of the Theory and Two Simple Examples ## 2.2 A Simple Inductive Proof One of the hardest problems in discovering an inductive proof is discovering an appropriate application of the principle of induction itself. But the similarity between recursion and induction offers an insight into the problem. For example, suppose we are trying to prove some conjecture involving the expression `APPEND A term`. When `APPEND` was introduced into the theory, we were required to exhibit a measure of its arguments that was decreasing. In particular, every time `APPEND` recurses, the COUNT of its first argument goes down. Thus, `APPEND A term` “suggests” an induction: were we to apply the definition of `APPEND` to “open up” (`APPEND A term`), we would find ourselves recursively decomposing `A` into its constituents and would want information about those constituents. But by the observation above, we know those constituents are smaller than `A` in some well-founded order. Thus, by the induction principle, we can supply ourselves with inductive instances about those constituents. We illustrate the above reasoning with a simple example. 
We will prove:

### Theorem: Associativity of Append

```
(EQUAL (APPEND (APPEND A B) C)
       (APPEND A (APPEND B C)))
```

Name the conjecture *1. The proof is by induction. Three terms – each `APPEND` expression with a variable in its first argument – suggest "plausible" inductions. Two of these inductions are on `A` and the third is on `B`. All occurrences of `A` are in argument positions being recursively decomposed. Thus, by appropriately opening up `APPEND` expressions we can express *1 in terms of `(CDR A)`, about which we can supply an inductive hypothesis. (In this case, we say that the induction on `A` is "unflawed.") The induction on `B` is flawed: `B` occurs sometimes as the first argument to `APPEND` and sometimes as the second (which is never changed in recursion). No matter how we expand `APPEND` expressions, the conjecture will still involve `B` and `(CDR B)`, and we are unable to supply an induction hypothesis about `B` while inducting ```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 39 Context: ```markdown
## 2.2 A SIMPLE INDUCTIVE PROOF

on B. Thus, we will induct on A, using the following scheme:

```
(AND (IMPLIES (NOT (LISTP A)) (p A B C))
     (IMPLIES (AND (LISTP A)
                   (p (CDR A) B C))
              (p A B C)))
```

That is, letting (p A B C) be a schematic representation of *1, we will prove that (p A B C) holds when A is not a LISTP, and we will prove that if A is a LISTP and (p (CDR A) B C) holds, then (p A B C) holds. The induction is sound because the axiom CDR.LESSP establishes that (COUNT A) decreases according to the well-founded relation LESSP in the induction step of the scheme. Instantiating the scheme above with *1 we obtain two new goals. We block indent the proofs of the two goals:

1.

```
(IMPLIES (NOT (LISTP A))
         (EQUAL (APPEND (APPEND A B) C)
                (APPEND A (APPEND B C))))
```

This is the base case.
Since A is non-LISTP, (APPEND A B) is equal to its second argument, B, by the definition of APPEND. Similarly, (APPEND A (APPEND B C)) is equal to its second argument, (APPEND B C). By rewriting these two terms in the conclusion above we obtain:

```
(IMPLIES (NOT (LISTP A))
         (EQUAL (APPEND B C) (APPEND B C)))
```

which simplifies, using the axiom X = Y → (EQUAL X Y) = T, to: (TRUE).

2.

```
(IMPLIES (AND (LISTP A)
              (EQUAL (APPEND (APPEND (CDR A) B) C)
                     (APPEND (CDR A) (APPEND B C))))
         (EQUAL (APPEND (APPEND A B) C)
                (APPEND A (APPEND B C))))
```

This is the induction step. Since A is a LISTP here, (APPEND A Y), for any Y, is equal to (CONS (CAR A) (APPEND (CDR A) Y)), by the definition of APPEND. Thus, we can "unfold" the two (APPEND A ...) expressions in the conclusion above to get:

```
(IMPLIES (AND (LISTP A)
              (EQUAL (APPEND (APPEND (CDR A) B) C)
                     (APPEND (CDR A) (APPEND B C))))
         (EQUAL (APPEND (CONS (CAR A) (APPEND (CDR A) B)) C)
                (CONS (CAR A) (APPEND (CDR A) (APPEND B C)))))
```
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 40 Context: ```markdown
# CHAPTER 2. A SKETCH OF THE THEORY AND TWO SIMPLE EXAMPLES

But, by opening up the definition of APPEND, we know that (APPEND (CONS A D) Y) is equal to (CONS A (APPEND D Y)). Thus, we can expand the first APPEND term in the conclusion above, to get:

```
(IMPLIES (AND (LISTP A)
              (EQUAL (APPEND (APPEND (CDR A) B) C)
                     (APPEND (CDR A) (APPEND B C))))
         (EQUAL (CONS (CAR A) (APPEND (APPEND (CDR A) B) C))
                (CONS (CAR A) (APPEND (CDR A) (APPEND B C)))))
```

Note that the conclusion above is of the form:

```
(EQUAL (CONS x y) (CONS u v))
```

But according to CONS.EQUAL, two pairs are equal if and only if the components are pairwise equal. That is, the concluding equality may be rewritten to the conjunction of \((EQUAL x u)\) and \((EQUAL y v)\).
But in the above application \(x\) and \(u\) are identical: they are both \((CAR A)\). Thus, we replace the concluding equality above with \((EQUAL y v)\):

```
(IMPLIES (AND (LISTP A)
              (EQUAL (APPEND (APPEND (CDR A) B) C)
                     (APPEND (CDR A) (APPEND B C))))
         (EQUAL (APPEND (APPEND (CDR A) B) C)
                (APPEND (CDR A) (APPEND B C))))
```

However, this simplifies to: (TRUE) because the conclusion is identical to the second hypothesis. In particular, by opening up the correct APPEND expressions we transformed the induction conclusion into the induction hypothesis. That finishes the proof of *1. Q.E.D.

## 2.3 A More Difficult Problem

The proof of the associativity of APPEND illustrates two of the proof techniques we describe later: induction and simplification. Some of the other heuristics we will describe are:

- It is sometimes useful to trade "bad" terms for "good" ones by re-representing terms in the conjecture. ```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 41 Context: ```markdown
# 2.3. A More Difficult Problem

For example, one might transform a conjecture about I and I-J to one about K+J and K, trading difference for addition, by replacing all occurrences of I by K+J.

- One obvious way to use an equality hypothesis, \( (EQUAL\ x\ y) \), is to substitute \( x \) for \( y \) throughout the conjecture. But it is sometimes useful to replace only some of the \( y \)'s by \( x \)'s and then to "throw away" the equality hypothesis, so as to produce a more general conjecture to prove by induction. We call such use of an equality hypothesis "cross-fertilization."

- In proofs by induction, it is easier to prove strong theorems than weak ones, because strong theorems permit one to obtain strong induction hypotheses with which to work. Thus, one should look for ways to generalize a conjecture to be proved by induction.
We illustrate these proof techniques by working through another simple example. Consider the idea of the "fringe" of a tree of CONSes. Given the tree:

```
      *
     / \
    *   3
   / \
  1   2
```

that is, (CONS (CONS 1 2) 3), we wish to return the sequence of tips of the tree, 1, 2, 3, that is, (CONS 1 (CONS 2 (CONS 3 "NIL"))). An obvious way to "flatten" a tree, X, is to ask first whether (LISTP X) is true. If so, X is a fork in the tree. The fringe of a fork is the concatenation (as with APPEND) of the fringes of the left and right subtrees (i.e., the recursively obtained fringes of (CAR X) and (CDR X)). If (LISTP X) is false, X is a tip of the tree. The fringe of a tip is the singleton sequence containing the tip, i.e., (CONS X "NIL").

> Computing the fringe of binary trees brings to mind one of the lesser chestnuts of Artificial Intelligence: how to compute that two binary trees have the same fringe without using much storage (in particular, without simply computing the two fringes).
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 42 Context: ```markdown # Chapter 2: A Sketch of the Theory and Two Simple Examples

The above description can be immediately transcribed into the definition of a recursive function that we will call **FLATTEN**. We exhibit the definition of FLATTEN later, when we formalize the problem. Considered as a recipe for computing the fringe of a tree, FLATTEN is somewhat inefficient. It visits every fork and tip of the tree once, but for every fork, the concatenation process revisits every tip of the left-hand branch. In trees heavily nested to the left, FLATTEN computes in time n², where n is the number of tips. John McCarthy (private communication) suggested the following more efficient algorithm, which we call **MC.FLATTEN**, which computes in linear time.
The basic idea is that to collect all the tips in a tree one can initialize a collection site to **NIL**, and then sweep the tree adding tips to the site as they are encountered. If the tips are added to the front of the collection site, the answer will be in exactly the reverse of the order in which the tips were visited. If we sweep the tree from right to left (instead of left to right), the result will be in the order desired. To write the algorithm as a recursive function, we use a second argument, **ANS**, as the collection site. At a tip, we add the tip to the front of **ANS** (with **CONS**) and return the new list as the answer. At a fork, we first collect the tips in the **CDR** (the right branch), and, using the resulting answer as the collection site, we then collect the tips in the **CAR** (the left branch). Thus, (MC.FLATTEN X ANS) should append the fringe of X onto ANS. The formal statement of the relationship between MC.FLATTEN and FLATTEN is:

```
(EQUAL (MC.FLATTEN X ANS)
       (APPEND (FLATTEN X) ANS)).
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 43 Context: ```markdown ## 2.4 A More Difficult Proof

In the next section we define `FLATTEN` and `MC.FLATTEN` formally, explain why they are admitted under our principle of definition, and then work through the proof of the conjecture above.

### Definition

```lisp
(FLATTEN X)
  =
(IF (LISTP X)
    (APPEND (FLATTEN (CAR X))
            (FLATTEN (CDR X)))
    (CONS X "NIL"))
```

The lemmas `CAR.LESSP` and `CDR.LESSP` establish that (COUNT X) goes down according to the well-founded relation `LESSP` in each recursive call. Hence, `FLATTEN` is accepted under the definition principle. Observe that `(LISTP (FLATTEN X))` is a theorem.
### Definition

```lisp
(MC.FLATTEN X ANS)
  =
(IF (LISTP X)
    (MC.FLATTEN (CAR X)
                (MC.FLATTEN (CDR X) ANS))
    (CONS X ANS))
```

The lemmas `CDR.LESSP` and `CAR.LESSP` establish that (COUNT X) decreases according to the well-founded relation `LESSP` in each recursive call. Hence, `MC.FLATTEN` is accepted under the definition principle. Note that `(LISTP (MC.FLATTEN X ANS))` is a theorem.

### Theorem

**FLATTEN.MC.FLATTEN:**

```lisp
(EQUAL (MC.FLATTEN X ANS)
       (APPEND (FLATTEN X) ANS))
```

Name the conjecture *1. Let us appeal to the induction principle. There are two plausible inductions. However, they merge into one likely candidate induction. We will induct according to the following scheme:

```
(AND (IMPLIES (NOT (LISTP X)) (P X ANS))
     (IMPLIES (AND (LISTP X)
                   (P (CAR X) (MC.FLATTEN (CDR X) ANS))
                   (P (CDR X) ANS))
              (P X ANS)))
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 44 Context: ```markdown # Chapter 2. A Sketch of the Theory and Two Simple Examples

The inequalities `CAR.LESSP` and `CDR.LESSP` establish that the measure (COUNT X) decreases according to the well-founded relation `LESSP` in the induction steps of the scheme. Note, however, the inductive instances chosen for `ANS`.
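FLATTEN and MC.FLATTEN transcribe directly into executable form. Below is our Python transliteration (an encoding of our own: forks as 2-tuples, sequences as Python lists, so these are illustrations rather than the book's formal functions), which checks the FLATTEN.MC.FLATTEN relationship on the example tree.

```python
# Python sketches of FLATTEN and MC.FLATTEN. Trees are nested 2-tuples
# (forks); anything else is a tip. Sequences are Python lists. All
# names are ours, not the book's.
def listp(x):
    return isinstance(x, tuple) and len(x) == 2

def flatten(x):
    # Fringe of a fork is the fringe of the CAR concatenated with the
    # fringe of the CDR; fringe of a tip is the singleton sequence.
    if listp(x):
        return flatten(x[0]) + flatten(x[1])
    return [x]

def mc_flatten(x, ans):
    # Sweep right to left, pushing tips onto the front of ANS.
    if listp(x):
        return mc_flatten(x[0], mc_flatten(x[1], ans))
    return [x] + ans

tree = ((1, 2), 3)                      # (CONS (CONS 1 2) 3)
assert flatten(tree) == [1, 2, 3]
# One instance of the FLATTEN.MC.FLATTEN theorem:
assert mc_flatten(tree, [9]) == flatten(tree) + [9]
```

The `flatten` version rebuilds the left fringe at every fork, while `mc_flatten` touches each tip once, mirroring the quadratic-versus-linear contrast described in the text.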
The above induction scheme produces two new formulas:

## Case 1

```
(IMPLIES (NOT (LISTP X))
         (EQUAL (MC.FLATTEN X ANS)
                (APPEND (FLATTEN X) ANS)))
```

which simplifies, applying `CDR.CONS` and `CAR.CONS`, and expanding the definitions of `MC.FLATTEN`, `FLATTEN`, and `APPEND`, to:

```
(TRUE).
```

## Case 2

```
(IMPLIES (AND (LISTP X)
              (EQUAL (MC.FLATTEN (CAR X)
                                 (MC.FLATTEN (CDR X) ANS))
                     (APPEND (FLATTEN (CAR X))
                             (MC.FLATTEN (CDR X) ANS)))
              (EQUAL (MC.FLATTEN (CDR X) ANS)
                     (APPEND (FLATTEN (CDR X)) ANS)))
         (EQUAL (MC.FLATTEN X ANS)
                (APPEND (FLATTEN X) ANS))).
```

This simplifies, unfolding the definitions of `MC.FLATTEN` and `FLATTEN`, to:

```
(IMPLIES (AND (LISTP X)
              (EQUAL (MC.FLATTEN (CAR X)
                                 (MC.FLATTEN (CDR X) ANS))
                     (APPEND (FLATTEN (CAR X))
                             (MC.FLATTEN (CDR X) ANS)))
              (EQUAL (MC.FLATTEN (CDR X) ANS)
                     (APPEND (FLATTEN (CDR X)) ANS)))
         (EQUAL (MC.FLATTEN (CAR X)
                            (MC.FLATTEN (CDR X) ANS))
                (APPEND (APPEND (FLATTEN (CAR X))
                                (FLATTEN (CDR X)))
                        ANS))).
```

Appealing to the lemma `CAR/CDR.ELIM`, we now replace X by (CONS Z V) to eliminate (CAR X) and (CDR X). This generates:

```
(IMPLIES (AND (LISTP (CONS Z V))
              (EQUAL (MC.FLATTEN Z (MC.FLATTEN V ANS))
                     (APPEND (FLATTEN Z)
                             (MC.FLATTEN V ANS)))
              (EQUAL (MC.FLATTEN V ANS)
                     (APPEND (FLATTEN V) ANS)))
         (EQUAL (MC.FLATTEN Z (MC.FLATTEN V ANS))
                (APPEND (APPEND (FLATTEN Z) (FLATTEN V))
                        ANS))),
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 45 Context: ```markdown ## 2.4 A MORE DIFFICULT PROOF

which further simplifies, clearly, to:

```
(IMPLIES (AND (EQUAL (MC.FLATTEN Z (MC.FLATTEN V ANS))
                     (APPEND (FLATTEN Z)
                             (MC.FLATTEN V ANS)))
              (EQUAL (MC.FLATTEN V ANS)
                     (APPEND (FLATTEN V) ANS)))
         (EQUAL (MC.FLATTEN Z (MC.FLATTEN V ANS))
                (APPEND (APPEND (FLATTEN Z) (FLATTEN V))
                        ANS))).
```

We use the first equality hypothesis by cross-fertilizing:

```
(APPEND (FLATTEN Z) (MC.FLATTEN V ANS))
```

for (MC.FLATTEN Z (MC.FLATTEN V ANS)) and throwing away the equality. This generates:

```
(IMPLIES (EQUAL (MC.FLATTEN V ANS)
                (APPEND (FLATTEN V) ANS))
         (EQUAL (APPEND (FLATTEN Z)
                        (MC.FLATTEN V ANS))
                (APPEND (APPEND (FLATTEN Z) (FLATTEN V))
                        ANS))).
```
We now use the above equality hypothesis by cross-fertilizing:

```
(APPEND (FLATTEN V) ANS)
```

for (MC.FLATTEN V ANS) and throwing away the equality. We thus obtain:

```
(EQUAL (APPEND (FLATTEN Z)
               (APPEND (FLATTEN V) ANS))
       (APPEND (APPEND (FLATTEN Z) (FLATTEN V))
               ANS)),
```

which we generalize by replacing (FLATTEN V) by Y and (FLATTEN Z) by A. We restrict the new variables by appealing to the type restriction lemma noted when FLATTEN was introduced. The result is:

```
(IMPLIES (AND (LISTP Y) (LISTP A))
         (EQUAL (APPEND A (APPEND Y ANS))
                (APPEND (APPEND A Y) ANS))).
```

Let us appeal to the induction principle. The recursive terms
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 46 Context: ```markdown # CHAPTER 2. A SKETCH OF THE THEORY AND TWO SIMPLE EXAMPLES

in the conjecture suggest three inductions. They merge into two likely candidate inductions. However, only one is unflawed. We will induct according to the following scheme:

```
(AND (IMPLIES (NOT (LISTP A)) (P A Y ANS))
     (IMPLIES (AND (LISTP A) (P (CDR A) Y ANS))
              (P A Y ANS)))
```

The inequality `CDR.LESSP` establishes that the measure (COUNT A) decreases according to the well-founded relation `LESSP` in the induction step of the scheme. The above induction scheme generates two new conjectures:

**Case 1.**

```
(IMPLIES (AND (NOT (LISTP (CDR A)))
              (LISTP Y)
              (LISTP A))
         (EQUAL (APPEND A (APPEND Y ANS))
                (APPEND (APPEND A Y) ANS))).
```

This simplifies, appealing to the lemmas `CDR.CONS`, `CAR.CONS`, and `CONS.EQUAL`, and expanding `APPEND`, to:

```
(IMPLIES (AND (NOT (LISTP (CDR A)))
              (LISTP Y)
              (LISTP A))
         (EQUAL (APPEND (CDR A) (APPEND Y ANS))
                (APPEND (APPEND (CDR A) Y) ANS))).
```

However, this again simplifies, unfolding `APPEND`, to:

```
(TRUE).
```

**Case 2.**

```
(IMPLIES (AND (EQUAL (APPEND (CDR A) (APPEND Y ANS))
                     (APPEND (APPEND (CDR A) Y) ANS))
              (LISTP Y)
              (LISTP A))
         (EQUAL (APPEND A (APPEND Y ANS))
                (APPEND (APPEND A Y) ANS))).
```
This simplifies, applying the lemmas `CDR.CONS`, `CAR.CONS`, and `CONS.EQUAL`, and expanding the definition of `APPEND`, to:

```
(TRUE).
```

That finishes the proof of *1.1 which, consequently, finishes the proof of *1. Q.E.D.

## 2.5 Summary

The purpose of this chapter was to provide an introduction to our function-based theory and to indicate how we prove theorems in the theory. As noted, all our proof techniques have been implemented in an automatic theorem-proving program. In fact, the last section was written, in its entirety, by our automatic theorem prover.
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 47 Context: ```markdown # 2.6 Notes

To illustrate several proof techniques, we instructed the theorem prover to conduct the `FLATTEN.MC.FLATTEN` proof in an environment in which it was aware only of the axioms of our basic theory and the definitions of `APPEND`, `FLATTEN`, and `MC.FLATTEN`. In the proof, the program derived a version of the associativity of `APPEND` (formula *1.1) and proved it with a second induction. Had the theorem prover previously proved the associativity of `APPEND` (and been instructed to remember it), the proof above would have been much shorter, as the lemma would have been used in the early simplifications.

Later in the book, when we deal with more complicated examples such as the system's proof of the Fundamental Theorem of Arithmetic, we will show the system working primarily from previously proved theorems rather than axioms. The total amount of CPU time required to analyze the two definitions and produce the proof is five-and-a-half seconds running compiled INTERLISP [53], [41] on a Digital Equipment Corporation KL-10. For an introduction to LISP see Allen's [1].

The English commentary produced during the definition-time analysis of functions and during proofs is typical of the system's output. Examples will be found throughout the book.
The steps in a proof are described in real-time, as the proof is developed, so that the user can follow the theorem prover’s progress. To avoid boring the reader with repetitious phrasing, ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 48 Context: ```markdown # Chapter 2: A Sketch of the Theory and Two Simple Examples The system varies its sentence structure. ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 49 Context: ```markdown # Chapter 3 ## A Precise Definition of the Theory In this chapter, we precisely define the mathematical theory underlying our proof techniques, the theory in which our theorem prover proves theorems. We will present the axioms and principles of definition and induction to which the system appeals. We will not present the usual elementary rules of logic and equality, but instead assume the reader is familiar with these rules. The parts of this chapter that are important are set off in boxes. The remainder of the text is motivation for the boxed material. In this motivational material we shall speak in a language of “naïve” set theory. It is possible to embed our theory within a theory of sets and to derive our “principles” therein. However, we do not regard set theory or even quantification as being a part of our theory. The proofs our system produces depend only upon 1. the propositional calculus with variables and function symbols, 2. equality reasoning, 3. the rule of instantiation which permits us to infer that any instance of a theorem is a theorem, and 4. the boxed material that follows. ### 3.1 Syntax We will use uppercase words, sometimes with embedded periods, hyphens, slashes, or digits, as variable symbols and function symbols. 
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 50 Context: ```markdown # CHAPTER 3. A PRECISE DEFINITION OF THE THEORY

Examples are X, X1, PLUS, and PRIME.FACTORS. Associated with each function symbol is a nonnegative integer, the number of arguments the function symbol expects. The number of arguments associated with certain function symbols will become apparent as we proceed.

A **term** is a variable or a sequence consisting of a function symbol of n arguments followed by n terms. If the term t is not a variable and begins with the function symbol f, we say that t is a **call** of f. We depart from the usual notation of F(X,Y) for function application and will instead write (F X Y). Examples of terms are thus: X, (TRUE), and (P (ADD1 X) Y). The first term is just a variable, the second is the application of the 0-ary function symbol TRUE to no arguments (and hence denotes a constant), and the third is the application of the dyadic function symbol P to the term (ADD1 X) and the variable Y.

To talk about terms, it is convenient to use so-called "metavariables" that are understood by the reader to stand for certain variables, function symbols, or terms. We will use only lower case words as metavariables, and we will make clear what type of syntactic object each symbol is to denote. For example, if f denotes the function symbol G, and t denotes the term (ADD1 Y), then (f t X) denotes the term (G (ADD1 Y) X).

When we are speaking in naive set theory we use both upper and lower case words as variables ranging over numbers, sets, functions, etc. Context will make clear the range of these variables.
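The term syntax above can be modeled concretely. Here is a small Python sketch with an encoding of our own choosing (variables as strings, calls as tuples headed by the function symbol); the `subst` helper is our illustration of applying a substitution to a term, not the book's definition.

```python
# A term is a variable (a string) or a tuple whose first element is a
# function symbol followed by argument terms, so (P (ADD1 X) Y)
# becomes ("P", ("ADD1", "X"), "Y"). Encoding and names are ours.
def subst(s, t):
    """Apply substitution s (a dict from variables to terms) to term t."""
    if isinstance(t, str):                      # t is a variable
        return s.get(t, t)
    f, *args = t                                # t is a call of f
    return (f, *(subst(s, a) for a in args))

t = ("P", ("ADD1", "X"), "Y")
assert subst({"X": ("G", "Z")}, t) == ("P", ("ADD1", ("G", "Z")), "Y")
assert subst({}, ("TRUE",)) == ("TRUE",)        # a 0-ary application
```

Representing a substitution as a dict enforces the intent that each variable is replaced by at most one term.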
We imagine that axioms, such as function definitions, are added as "time goes by." Whenever we add a new shell or function definition, we insist that certain function symbols not have been mentioned in any previous axiom. We call a function symbol **new** until an axiom mentioning the function symbol has been added.

If i is an integer, then by an abuse of notation we let Xi denote the variable whose first character is X and whose other characters are the decimal representation of i. Thus, if i is 4, Xi is the variable X4. A finite set s of ordered pairs is said to be a **substitution**.
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 51 Context: ```markdown ## 3.2 The Theory of If and Equal

We find it necessary to reproduce the logic of truth functional propositions and equality at the term level. We assume the existence of two distinguished constants, (TRUE) and (FALSE). We use T and F as abbreviations for (TRUE) and (FALSE), respectively. We never use T or F as a variable. We axiomatize below the function EQUAL, of two arguments, to return T or F, depending on whether its two arguments are equal. We also axiomatize the function IF, of three arguments, to return its third argument if the first is F and otherwise return its second argument.

### Axioms

- T ≠ F
- X = Y → (EQUAL X Y) = T
- X ≠ Y → (EQUAL X Y) = F
- X = F → (IF X Y Z) = Z
- X ≠ F → (IF X Y Z) = Y

The logical functions are defined with the following equations:

### Definitions

**(NOT P)**
=
**(IF P F T)**
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 52 Context: ```markdown # Chapter 3.
A Precise Definition of the Theory

### Definitions

**(AND P Q)**
=
**(IF P (IF Q T F) F)**

**(OR P Q)**
=
**(IF P T (IF Q T F))**

**(IMPLIES P Q)**
=
**(IF P (IF Q T F) T)**

We adopt the notational convention of writing (AND a b c) for (AND a (AND b c)), (AND a b c d) for (AND a (AND b (AND c d))), and so on. We make the same convention for OR. We also adopt the notational convention of sometimes writing a term where a formula is expected (e.g., we may refer to the "theorem" p, where p is a term). When we write a term p where a formula is expected, it is an abbreviation for the formula p ≠ F. If a term p is a theorem, then by the rule of instantiation, the result of applying any substitution to p is a theorem.

## 3.3 Well-founded Relations

In the following sections we state a principle of induction, introduce inductively constructed objects such as the natural numbers and ordered pairs, and state a principle of definition for recursive functions. All of these extensions hinge on the idea of a "well-founded relation." A function r of two arguments is said to be a **well-founded relation** if there is no infinite sequence x₁, x₂, x₃, ... with the property that (r xᵢ₊₁ xᵢ) ≠ F for all integers i greater than 0. For example, suppose that (L x y) is T if x and y are nonnegative integers and x is less than y, and that (L x y) is F otherwise. Then L is a well-founded relation because there is no infinite sequence of nonnegative integers with the property that each successive integer is less than the previous one. That is, there is no infinite sequence x₁, x₂, x₃, ... such that
\( \ldots < x_4 < x_3 < x_2 < x_1. \)

On the other hand, suppose that (LE x y) is T if x and y are nonnegative integers and x ≤ y, and F otherwise. LE is not a well-founded relation because the infinite sequence 1, 1, 1, ... has the property that:

\( \ldots \leq 1 \leq 1 \leq 1. \)

If r is a well-founded relation and (r x y) holds, we say that x is r-smaller than y. For the purposes of our theory, functions are known to be well-founded only by assumption in one of the following ways:

1. Whenever we axiomatize an inductively generated type of object, e.g., the integers, we explicitly assume a certain new function to be a well-founded relation. Such an assumption is inherent in any axiomatization of an inductively generated type. See section 3.5.

2. We assume explicitly that the function LESSP is a well-founded relation in section 3.11. We present there an informal proof that LESSP is well-founded.

3. Whenever we have two previously assumed well-founded relations, we assume that the lexicographic relation induced by them is well-founded. In section 3.10 we define "induced" and present an informal proof of the well-foundedness of induced lexicographic relations.

The fact that a function has been assumed to be a well-founded relation is used only in our principles of induction and definition and in the formation of induced lexicographic relations. It is possible to define formally in a theory of sets (for example, see Morse [43] or Kelley [25]) the concept of a well-founded relation, to prove that certain relations are well-founded, and to derive as metatheorems our principles of induction and definition. However, such a development is not within the scope of this work.

We say that x is an r-minimal element of S provided x is a member of S and no member of S is r-smaller than x.
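On finite sets, the contrast between L and LE can be made executable: a well-founded relation always yields an r-minimal element, while LE can fail to. The sketch below is a Python illustration of our own (function names are ours, not the book's).

```python
# x is r-minimal in s when no member of s is r-smaller than x.
# L is the well-founded "less than" on nonnegative integers from the
# text; LE is the non-well-founded "less than or equal." Names are ours.
def L(x, y):
    return isinstance(x, int) and isinstance(y, int) and 0 <= x < y

def LE(x, y):
    return isinstance(x, int) and isinstance(y, int) and 0 <= x <= y

def r_minimal(r, s):
    """Return some r-minimal element of the nonempty finite set s."""
    return next(x for x in s if not any(r(z, x) for z in s))

assert r_minimal(L, {3, 1, 4, 5}) == 1
# Under LE, every member of {1} has an LE-smaller member (1 itself),
# so {1} has no LE-minimal element:
has_minimal = any(all(not LE(z, x) for z in {1}) for x in {1})
assert has_minimal is False
```

This only demonstrates the definitions on finite sets; well-foundedness itself is a statement about infinite descending sequences and is assumed, not computed, in the theory.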
Later in
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 54 Context: ```markdown # CHAPTER 3. A PRECISE DEFINITION OF THE THEORY

this chapter, we use the fact that if r is well-founded, then for each nonempty set S there exists an r-minimal element of S.

**Proof.** Suppose that r is well-founded and S is a nonempty set with no r-minimal element. Let f be a choice function on the power set of S. That is, suppose that for each nonempty subset s of S, f(s) is a member of s. Define the sequence x₁, x₂, x₃, ... by letting x₁ be f(S) and xᵢ₊₁ be f(sᵢ), where sᵢ is the set of all z in S such that z is r-smaller than xᵢ. For each i, sᵢ is nonempty (otherwise, xᵢ would be r-minimal). And for each i, xᵢ₊₁ is r-smaller than xᵢ, contradicting the well-foundedness of r. Q.E.D.

## 3.4 Induction

A rough sketch of our principle of induction is: Suppose that r denotes a well-founded relation, x is a variable, d is a function symbol, q is a term, and (IMPLIES q (r (d x) x)) is a theorem. Then, to prove p it is sufficient to prove the following two things:

1. **(base case):** (IMPLIES (NOT q) p), and
2. **(induction step):** (IMPLIES (AND q p') p), where p' is the result of substituting (d x) for x in p.

This is a version of the Generalized Principle of Induction or Noetherian Induction (see Bourbaki [6] and Burstall [12]).
The induction principle we actually use generalizes the principle sketched above in three ways:

- Instead of limiting the principle to one variable that is getting r-smaller, we use an n-tuple of variables x₁, ..., xₙ and some function such that (m x₁ ... xₙ) is getting r-smaller. The function m is called a "measure function."
- Instead of case splitting on a single q, we consider k+1 cases, of which one is a base case and the remaining k are induction steps.
- We permit each of the k induction steps to have several induction hypotheses.
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 55 Context: ```markdown # 3.4. INDUCTION

## The Induction Principle

Suppose:

(a) p is a term;
(b) r is a function symbol that denotes a well-founded relation;
(c) m is a function symbol of n arguments;
(d) x₁, ..., xₙ are distinct variables;
(e) q₁, ..., q_k are terms;
(f) h₁, ..., h_k are positive integers; and
(g) for 1 ≤ i ≤ k and 1 ≤ j ≤ hᵢ, s_{i,j} is a substitution and it is a theorem that:

```
(IMPLIES q_i (r (m x₁ ... xₙ)/s_{i,j} (m x₁ ... xₙ)))
```

Then p is a theorem if:

```
(IMPLIES (AND (NOT q₁) ... (NOT q_k)) p)
```

is a theorem and, for each 1 ≤ i ≤ k, so is:

```
(IMPLIES (AND q_i p/s_{i,1} ... p/s_{i,hᵢ}) p)
```

> Note in particular that we have to prove k+1 things (the "base case" and k "induction steps"). Each induction step distinguishes a case with one of the qᵢ's and provides inductive instances of the conjecture being proved.

We now illustrate an application of the induction principle.
Imagine that LESSP is well-founded, that the axioms CAR.LESSP and CDR.LESSP have been added, and that FLATTEN and MC.FLATTEN have been introduced as in Chapter 2. The first induction performed in the proof of the FLATTEN.MC.FLATTEN theorem of Chapter 2 is obtained by the following instance of our induction principle: p is the term (EQUAL (MC.FLATTEN X ANS) (APPEND (FLATTEN X) ANS)); r is LESSP; m is COUNT; n is 1 and x₁ is X; k is 1; q₁ is the term (LISTP X); h₁ is 2; s_{1,1} is {⟨X, (CAR X)⟩, ⟨ANS, (MC.FLATTEN (CDR X) ANS)⟩}; s_{1,2} is {⟨X, (CDR X)⟩, ⟨ANS, ANS⟩}; and the axioms CAR.LESSP and CDR.LESSP establish the two theorems required by condition (g).
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 56 Context: ```markdown # CHAPTER 3. A PRECISE DEFINITION OF THE THEORY

This application of the induction principle is more or less analogous to those exhibited in Chapter 2. We now prove that our induction principle is sound. Suppose we have in mind particular p, r, m, xᵢ, qᵢ, hᵢ, and s_{i,j} satisfying conditions (a) through (g) above, and suppose the base case and induction steps are theorems. Below is a set-theoretic proof of p.

## Proof

Without loss of generality, we assume that the xᵢ are X1, ..., Xn; that r is R and m is M; that Xn+1, Xn+2, ..., Xz are all of the variables other than X1, ..., Xn occurring in p, in the qᵢ, or in either component of any pair in any s_{i,j}; that p is (P X1 ... Xz); that qᵢ is (Qᵢ X1 ... Xz); and that s_{1,j} replaces each Xl, for l ≤ z, with some term d_{j,l}.
Let RM be the dyadic function on z-tuples defined by:

(RM ⟨U1, ..., Uz⟩ ⟨V1, ..., Vz⟩) = (R (M U1 ... Un) (M V1 ... Vn)).

Note that RM is well-founded. If p is not a theorem, there must exist a z-tuple ⟨X1, ..., Xz⟩ such that (P X1 ... Xz) = F. Let ⟨X1, ..., Xz⟩ be an RM-minimal such z-tuple. We now consider cases on which, if any, of the qᵢ are true on the chosen z-tuple.

1. **Case 1:** Suppose no qᵢ is true, i.e., suppose (Q1 X1 ... Xz) = F, (Q2 X1 ... Xz) = F, ..., and (Qk X1 ... Xz) = F. Then by the base case, (P X1 ... Xz) ≠ F, contradicting the assumption that (P X1 ... Xz) = F.

2. **Case 2:** Suppose that at least one of the qᵢ is true. Without loss of generality, we can assume that (Q1 X1 ... Xz) ≠ F. By condition (g) above, we have:

   (R (M d_{1,1} ... d_{1,n}) (M X1 ... Xn)),
   (R (M d_{2,1} ... d_{2,n}) (M X1 ... Xn)),
   ..., and
   (R (M d_{h1,1} ... d_{h1,n}) (M X1 ... Xn)).

   Thus, by the definition of RM, we have:

   (RM ⟨d_{1,1}, ..., d_{1,z}⟩ ⟨X1, ..., Xz⟩),
   (RM ⟨d_{2,1}, ..., d_{2,z}⟩ ⟨X1, ..., Xz⟩),
   ..., and
   (RM ⟨d_{h1,1}, ..., d_{h1,z}⟩ ⟨X1, ..., Xz⟩).

   Since ⟨X1, ..., Xz⟩ is an RM-minimal z-tuple such that (P X1 ... Xz) = F, we have:
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 57 Context: ```markdown # 3.5. SHELLS

(P d_{1,1} ... d_{1,z}) ≠ F,
(P d_{2,1} ... d_{2,z}) ≠ F,
..., and
(P d_{h1,1} ... d_{h1,z}) ≠ F.

Hence, by the first induction step, we derive (P X1 ... Xz) ≠ F, contradicting the assumption that (P X1 ... Xz) = F. Q.E.D.
### 3.5 Shells

Thus far the theory is somewhat impoverished in that it does not have any "interesting" objects. It would be convenient, for example, if we could refer to the natural numbers 0, 1, 2, ..., and to ordered pairs, from within our theory (as we have several times in discussions of our theory). We could invent appropriate axioms for each individual "type" of object. However, we want to ensure that no natural number is T, F, or an ordered pair. In addition, we want to specify how the primitive functions behave on "unexpected" arguments (e.g., what does the successor function return when given T as an argument?). Because of considerations such as these, we address the general problem of extending the theory by adding a new "type" of object.

Among the most common objects in the applications of our theory are what we will call "shells." A shell can be thought of as a colored n-tuple with restrictions on the colors of objects that can occupy its components. For example, the natural numbers can be thought of as shells: a number is either a blue 0 or a blue 1-tuple containing another blue object (namely, the predecessor of the tuple). Ordered pairs can be red 2-tuples containing arbitrary objects. The type consisting of lists of numbers can be either the green, empty list of numbers or else green 2-tuples ⟨x, y⟩ such that x is a number (blue) and y is a list of numbers

1. One way to make sure that T is not a number, or to escape from asking what the successor of T is, is to employ a typed syntax. Indeed, Acín [18] and Cartwright [14] have implemented theorem provers for recursive functions that use typed syntax. However, we have grown so accustomed to the untyped syntax of predicate calculus, set theory, LISP, MACRO-10, and POP-2 that we simply do not like typed syntax.

2. Shells are inspired by the "structures" used by Burstall [12].
Recently, Oppen [45] has established the decidability of a theory similar to our shell theory.
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 58 Context: ```markdown # CHAPTER 3. A PRECISE DEFINITION OF THE THEORY

(green). The fact that ordered pairs and lists of numbers have different colors prevents an ordered pair consisting of a number and a list of numbers from being confused with a list of numbers. Because it is useful to be able to extend the theory by adding the axioms defining a new shell class, and because the required set of axioms can be represented schematically, we will adopt a notational shorthand for adding new shells. We now specify informally the properties of a shell.

The basic function for a shell class is one that "constructs" n-tuples of the appropriate color (e.g., the successor function or the pairing function). It is convenient if each shell class can (optionally) have one object that is in the class but is not such an n-tuple (e.g., 0 and the empty list of numbers). Because we will have many kinds of shells, we need a function, called the "recognizer" of the class, that returns T on objects of the class and F otherwise. We also require "accessor" (or "destructor") functions associated with the class that, when given an n-tuple of the right color, return the corresponding components of the n-tuple (e.g., the predecessor function is the accessor corresponding to the shell for numbers). Finally, we posit that any object in the class can be generated with a finite number of "constructions" starting with the bottom object and objects not in the class. This is arranged by assuming a certain function to be a well-founded relation (e.g., one under which the predecessor of a nonzero number is smaller than the number itself).
Because we wish to have restrictions on the types of objects that can be components of a shell n-tuple, we must adopt some convention for specifying the restriction. We also adopt conventions for specifying what a constructor returns when one of its arguments fails to meet the required restriction and what an accessor returns when given an object of the wrong type as an argument. We require that there be associated with each argument position of each shell constructor a "type restriction" that limits the colors of objects that may occupy that component. The restriction is expressed in one of two ways: (a) that an object must have a color that is a member of an explicitly given set of previously (or currently being) axiomatized colors, or (b) that an object may not have a color in such a set.
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 59 Context: ```markdown # 3.5. SHELLS

Each type restriction is written as a propositional term (see condition (4) below). We also require that each argument have a "default value" that is permitted by the type restriction to occupy the corresponding component of the shell tuple. When one of the arguments to a constructor does not satisfy the corresponding type restriction, the default value for that argument position is used in its place. Finally, we arrange that the accessor for a position return the corresponding default value when given either the bottom object or an object of the wrong color.

#### The Shell Principle

To add the shell const of n arguments, with (optionally) bottom object (btm):

- **Recognizer**: r
- **Accessors**: ac₁, ac₂, ..., acₙ
- **Type restrictions**: t₁, ..., tₙ
- **Default values**: d₁, ..., dₙ
- **Well-founded relation**: wfn

where:

1. const is a new function symbol of n arguments (btm is a new function symbol of no arguments, if a bottom object is supplied);
2.
`ac₁`, ..., `acₙ`, and `r` are new function symbols of one argument.
3. `wfn` is a new function symbol of two arguments, and all the above function symbols are distinct.
4. Each `tᵢ` is a term that mentions no symbol as a variable besides `Xᵢ` and mentions no symbol as a function symbol besides `IF`, `TRUE`, `FALSE`, previously introduced shell recognizers, and `r`; and
5. If no bottom object is supplied, the `dᵢ` are bottom objects of previously introduced shells, and for each `i`, `(IMPLIES (EQUAL Xᵢ dᵢ) tᵢ)` is a theorem; if a bottom object is supplied, each `dᵢ` is either `(btm)` or a bottom object of some previously introduced shell, and for each `i`, `(IMPLIES (AND (EQUAL Xᵢ dᵢ) (NOT (EQUAL Xᵢ (btm)))) tᵢ)` is a theorem.

This means to extend the theory by doing the following (using ```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 60
Context: ```markdown
# Chapter 3. A Precise Definition of the Theory

T for `(r (btm))` and F for all terms of the form `(EQUAL X (btm))` if no bottom object is supplied):

1. Assume the following axioms:
   - `(OR (EQUAL (r X) T) (EQUAL (r X) F))`
   - `(r (const X₁ ... Xₙ))`
   - `(r (btm))`
   - `(NOT (EQUAL (const X₁ ... Xₙ) (btm)))`
   - `(IMPLIES (AND (r X) (NOT (EQUAL X (btm)))) (EQUAL (const (ac₁ X) ... (acₙ X)) X))`
2. For each `i` from 1 to n, assume the following axiom:
   - `(IMPLIES tᵢ (EQUAL (acᵢ (const X₁ ... Xₙ)) Xᵢ))`
3. For each `i` from 1 to n, assume the following axiom:
   - `(IMPLIES (OR (NOT (r X)) (OR (EQUAL X (btm)) (AND (NOT tᵢ) (EQUAL X (const X₁ ... Xₙ))))) (EQUAL (acᵢ X) dᵢ))`
4. Assume the axioms:
   - `(NOT (r T))`
   - `(NOT (r F))`
5.
For each recognizer `r'` of a shell class previously added to the theory, assume the following axiom:
   - `(IMPLIES (r X) (NOT (r' X)))`
6. Assume the axiom:
   - `(EQUAL (wfn X Y) (OR t (AND (r Y) (AND (NOT (EQUAL Y (btm))) (OR (EQUAL X (ac₁ Y)) ... (EQUAL X (acₙ Y)))))))`
   where `t` is the term `(FALSE)` if no shell has been added previously, and otherwise is `(wfn' X Y)`, where `wfn'` is the well-founded relation name for the last shell previously added; and
7. Assume `wfn` denotes a well-founded relation.

If the `tᵢ` are not specified, they should each be assumed to be T. The n axioms in (3) specify what the values of the accessors are. ```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 61
Context: ```markdown
# 3.6 Natural Numbers

We now axiomatize the natural numbers, 0, 1, 2, etc., using the shell principle:

## Shell Definition

Add the shell `ADD1` of one argument with bottom object `(ZERO)`, recognizer `NUMBERP`, accessor `SUB1`, type restriction `(NUMBERP X1)`, default value `(ZERO)`, and well-founded relation `SUB1P`.

---

It is possible to prove the consistency of the theory resulting from the addition of a finite number of shells by exhibiting a model. A suitable model may be constructed by representing a (nonbottom) member of a shell class having n accessors as an n+1-tuple whose first component is an integer encoding the "color." Note that merely because we add a finite number of shells we are not assured that every object in the world is in one of our shell classes. That is, we do not have an axiom that says: for any x, x is either T, or x is F, or x satisfies one of the shell recognizers.
Indeed, this is an intended feature of the shell principle; we desire that any extension produced by adding shells can be further extended by additional shells without giving rise to inconsistency. From the point of view of modeling programming language constructs, shells are valuable. They can play a role in the semantics of the usual kinds of "records," since records are often n-tuples with type restrictions on their components. Shells can also be used to model other kinds of common programming objects, such as the integers, atomic objects (words, capabilities, characters, file names), push down stacks, and character strings. In the next three sections, we will use shells to add the natural numbers, literal atoms, and ordered pairs to our theory. ```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 62
Context: ```markdown
# 3.7 Literal Atoms

This axiomatizes a new type of object we will call the "numbers." The numbers consist of the new object `(ZERO)` and all of the objects returned by the new function `ADD1`. We now informally repeat the axioms added by the shell principle. The numbers in parentheses indicate the corresponding clause of the definition of the shell addition principle.

- (1) The function `NUMBERP` (which recognizes numbers) always returns either T or F, and returns T on `(ADD1 X)` and on `(ZERO)`. `(ZERO)` is never returned by `ADD1`. If `X` is a non-`(ZERO)` number, then `(ADD1 (SUB1 X))` is `X`.
- (2) If `X` is a number, then `(SUB1 (ADD1 X))` is `X`.
- (3) `SUB1` returns `(ZERO)` if applied to a nonnumber, `(ZERO)`, or `(ADD1 X1)` when `X1` is a nonnumber.
- (4) T and F are nonnumeric.
- (5) Because no other shells have been added, clause (5) does not contribute any axioms.
- (6) We define the function `SUB1P` so that `(SUB1P X Y)` is T if `Y` is a number other than `(ZERO)` and `X` is `(SUB1 Y)`, and `(SUB1P X Y)` is F otherwise.
- (7) Finally, we assume that `SUB1P` is a well-founded relation.

Note the fundamental nature of the assumption that `SUB1P` is a well-founded relation: by virtue of this assumption (and our principle of induction), `(P X)` can be proved, for all `X`, by proving `(P X)` when `X` is not a number, proving `(P (ZERO))`, and proving that if `X` is a number other than `(ZERO)`, then `(P (SUB1 X))` implies `(P X)`. Among the theorems that can be derived from the above axioms is the theorem that if `X` and `Y` are numeric, then `(ADD1 X)` is equal to `(ADD1 Y)` if and only if `X` is equal to `Y`. See Appendix B. We will abbreviate `(ZERO)` as `0`, and any well-formed nest of `ADD1`s around a `0` as the decimal numeral expressing the number of `ADD1` terms in the nest. Thus, `1` is an abbreviation for `(ADD1 0)`, `2` is an abbreviation for `(ADD1 (ADD1 0))`, etc. ```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 63
Context: ```markdown
# 3.7. LITERAL ATOMS

correctness of a function that translates from symbolic arithmetic expressions to sequences of instructions for a hand-held calculator. We write symbols as sequences of characters delimited by quotation marks (e.g., "X" and "ABC"). We could adopt the convention that "X", for example, was an abbreviation for 24. Such a convention is part of the logician's standard method for representing terms, known as Goedelization. However, we want to arrange for the symbols to be different from the integers. To this end we add a new shell class called the "literal atoms," and we adopt a syntactic convention that translates from quoted character sequences to literal atoms. The shell class is recognized by the new Boolean function LITATOM. The new function PACK, of one argument, takes an arbitrary object and returns a literal atom.
(The name PACK derives from the INTERLISP operation for constructing literal atoms by concatenating sequences of characters.) `(PACK x)` is the same literal atom as `(PACK y)` if and only if `x` is the same object as `y`. The new function UNPACK, given a literal atom, returns the object used to construct it.

## Shell Definition

- Add the shell PACK of one argument with bottom object (NIL),
- recognizer LITATOM,
- accessor UNPACK,
- default value 0, and
- well-founded relation UNPACKP.

Note that since ADD1 was the last shell added and the well-founded relation for it was SUB1P, the new well-founded relation UNPACKP holds between `X` and `Y` either if SUB1P holds between `X` and `Y`, or if `X` is the result of UNPACKing the (non-(NIL)) literal atom `Y`. We now adopt a convention for abbreviating literal atoms as symbols. We suppose an enumeration s₁, s₂, s₃, ... of all symbols (character sequences) except "NIL". When we write "NIL" in a term position, it is an abbreviation for (NIL). When we write sᵢ, delimited by quotation marks, in a term position, it is an abbreviation for (PACK i). ```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 64
Context: ```markdown
# Chapter 3. A Precise Definition of the Theory

## 3.8 Ordered Pairs

We axiomatize ordered pairs as follows:

**Shell Definition.** Add the shell `CONS` of two arguments with recognizer `LISTP`, accessors `CAR` and `CDR`, default values "NIL" and "NIL", and well-founded relation `CAR.CDRP`.

We sometimes think of ordered pairs as sequences, binary trees, or terms. For example:

```
(CONS 1 (CONS 2 (CONS 3 "NIL")))
```

may be thought of as the sequence 1, 2, 3.
```
(CONS (CONS 1 2) 3)
```

may be thought of as the binary tree:

```
     *
    / \
   *   3
  / \
 1   2
```

Finally,

```
(CONS "PLUS" (CONS "X" (CONS 3 "NIL")))
```

may be thought of as the term `(PLUS X 3)`. Because nests of `CAR`s and `CDR`s are frequently used, we provide a definition for each function symbol beginning with the letter `C`, ending with the letter `R`, and containing only `A`s and `D`s in between. The body of the definition is just the appropriate nest of `CAR`s and `CDR`s. For example,

**Definition**

```
(CADDR X)
(CAR (CDR (CDR X)))
```

```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 65
Context: ```markdown
# 3.9 Definitions

We have already defined certain simple functions, such as AND, OR, NOT, and IMPLIES. For example:

## Definition

**(AND P Q)**
*(IF P (IF Q T F) F)*

Another simple function we define is **ZEROP**; it returns T if its argument is virtually 0 (in the sense that ADD1 and SUB1 treat it as 0) and F otherwise:

## Definition

**(ZEROP X)**
*(OR (EQUAL X 0) (NOT (NUMBERP X)))*

In general, if the function symbol f is new, if x₁, ..., xₙ are distinct variables, if the term body mentions no symbol as a variable other than these xᵢ, and if the body does not mention f as a function symbol, then adding the axiom:

**(f x₁ ... xₙ) = body**

is a proper way to define a new function. Indeed, any use of the symbol f as a function symbol in a term, such as (f t₁ ... tₙ), can be completely eliminated by replacing the term with the result of substituting {⟨x₁, t₁⟩, ..., ⟨xₙ, tₙ⟩} into body. However, one apparently mild generalization of the above scheme results in our being able to define functions that are considerably more interesting. This generalization allows the use of f as a function symbol in the body of its own definition.
For example, to define a function that returns the integer sum of its two arguments we could write:

## Definition

**(SUM X Y)**
*(...)* ```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 66
Context: ```markdown
# CHAPTER 3. A PRECISE DEFINITION OF THE THEORY

## Definition of SUM

```
(IF (ZEROP X)
    Y
    (ADD1 (SUM (SUB1 X) Y)))
```

Unlike our previous definitions, the body of SUM mentions the function symbol being defined. That is, SUM is defined recursively. Nevertheless, SUM is well-defined. For example, consider (SUM 3 4).

```
(SUM 3 4) = (ADD1 (SUM 2 4))
          = (ADD1 (ADD1 (SUM 1 4)))
          = (ADD1 (ADD1 (ADD1 (SUM 0 4))))
          = (ADD1 (ADD1 (ADD1 4)))
          = 7.
```

If we were to allow the new function symbol to occur arbitrarily in the right-hand side, we could define all "general recursive" functions. However, we could also fall into inconsistency. For example, were we to add the axiom:

```
(RUSSELL X) = (IF (RUSSELL X) F T),
```

our theory would be inconsistent. To sketch a proof: (RUSSELL X) must be equal to F or not equal to F. If the former, then by the definition of RUSSELL it follows that (RUSSELL X) = T, a contradiction. If the latter, it follows that (RUSSELL X) = F, another contradiction. Q.E.D. We now present a principle of definition that allows the introduction of recursive functions. The principle will not allow us to introduce all "general recursive" functions or even all "recursive" functions. However, it will permit the definition of almost all the functions in which we are currently interested. And we shall prove that every application of the principle is sound (unlike the axiomatization of RUSSELL above).

## The Definition Principle

In stating our principle of definition below, we say that a term is *f-free* if the symbol f does not occur in the term as a function symbol.
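The SUM recursion above can be transcribed directly. In this sketch (mine, not the book's), Python ints stand in for the formal numbers, and ZEROP treats any nonnumber as 0, exactly as the definition of ZEROP prescribes.

```python
# A transcription of (SUM X Y), mirroring the recursion
# (IF (ZEROP X) Y (ADD1 (SUM (SUB1 X) Y))).

def ZEROP(x):
    # T if X is 0 or not a number at all.
    return not isinstance(x, int) or x == 0

def SUM(x, y):
    if ZEROP(x):
        return y
    return 1 + SUM(x - 1, y)   # (ADD1 (SUM (SUB1 X) Y))

assert SUM(3, 4) == 7          # the worked example (SUM 3 4) = 7
assert SUM(0, "T") == "T"      # SUM returns Y unchanged when X is "zero"
```

Note how `SUM(0, "T")` returns the nonnumber `"T"` untouched, which is the behavior that later motivates defining FIX and PLUS.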
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 67
Context: ```markdown
## 3.9. DEFINITIONS

We say that a term `t` governs an occurrence of a term `s` in a term `b` either if `b` contains a subterm of the form `(IF t p q)` and the occurrence of `s` is in `p`, or if `b` contains a subterm of the form `(IF t' p q)`, where `t` is `(NOT t')`, and the occurrence of `s` is in `q`. Thus, P and (NOT Q) govern the first occurrence of S in:

(IF P (IF Q R S) (IF (IF Q F S) S R))

Note that (NOT P) and (IF Q F S) govern the second occurrence of S.

Our principle of definition is: To define `f` of `x₁, ..., xₙ` to be `body` (usually written "Definition: (f x₁ ... xₙ) = body"), where:

1. `f` is a new function symbol of n arguments;
2. `x₁, ..., xₙ` are distinct variables;
3. `body` is a term and mentions no symbol as a variable other than `x₁, ..., xₙ`; and
4. there is a well-founded relation denoted by a function symbol `r` and a function symbol `m` of n arguments, such that for each occurrence of a subterm of the form `(f y₁ ... yₙ)` in `body` and the `f`-free terms `t₁, ..., tₖ` governing it, it is a theorem that:

   (IMPLIES (AND t₁ ... tₖ) (r (m y₁ ... yₙ) (m x₁ ... xₙ)))

means to add as an axiom the defining equation:

(f x₁ ... xₙ) = body.

We now illustrate an application of the principle of definition. Imagine that LESSP is well-founded and that the axioms CAR.LESSP and CDR.LESSP have been added as in Chapter 2. The defining equation for MC.FLATTEN is added to our theory by the following instantiation of our principle of definition.
`f` is the function symbol MC.FLATTEN; n = 2; `x₁` is X and `x₂` is ANS; `body` is the term:

(IF (LISTP X)
    (MC.FLATTEN (CAR X) (MC.FLATTEN (CDR X) ANS))
    (CONS X ANS)); ```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 68
Context: ```markdown
# CHAPTER 3. A PRECISE DEFINITION OF THE THEORY

`r` is LESSP; `m` is the function symbol COUNT1, where (COUNT1 X Y) is defined to be (COUNT X). The two theorems required by (4) are:

1. (IMPLIES (LISTP X) (LESSP (COUNT1 (CDR X) ANS) (COUNT1 X ANS))), and
2. (IMPLIES (LISTP X) (LESSP (COUNT1 (CAR X) (MC.FLATTEN (CDR X) ANS)) (COUNT1 X ANS))).

Both theorems are easily proved from CAR.LESSP, CDR.LESSP and the definition of COUNT1. Note that the second theorem is proved before any axiom about MC.FLATTEN has been posited, even though MC.FLATTEN is used as a function symbol in the theorem. If we have defined (f x₁ ... xₙ) to be body, then we say that body is the body of `f` and that `xᵢ` is the i-th formal parameter of `f`. If body mentions `f` as a function symbol, we say that `f` is **recursive**, and otherwise we say that `f` is **nonrecursive**. If `f` has not been defined but has been mentioned as a function symbol in n arguments in an axiom, we say that `xᵢ` is the i-th formal parameter of `f`, for i from 1 to n. We now offer a justification for admitting recursive definitions. This justification will relieve the fears raised by RUSSELL. Roughly speaking, we shall prove that for each correct application of the definition principle, there exists a unique function satisfying the defining equation (f x₁ ... xₙ) = body.
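The MC.FLATTEN body instantiated above can be sketched over Python tuples standing in for CONS pairs (the representation is mine): the function collects the tips of a binary tree onto the accumulator ANS.

```python
# A sketch of (MC.FLATTEN X ANS): tuples of length 2 stand for
# CONS pairs; anything else (including the string "NIL") is a tip.

def LISTP(x):
    return isinstance(x, tuple) and len(x) == 2

def MC_FLATTEN(x, ans):
    # (IF (LISTP X)
    #     (MC.FLATTEN (CAR X) (MC.FLATTEN (CDR X) ANS))
    #     (CONS X ANS))
    if LISTP(x):
        return MC_FLATTEN(x[0], MC_FLATTEN(x[1], ans))
    return (x, ans)   # (CONS X ANS)

tree = ((1, 2), (3, "NIL"))
assert MC_FLATTEN(tree, "NIL") == (1, (2, (3, ("NIL", "NIL"))))
```

The recursion terminates because each call is on a CAR or CDR of the argument (or on a result already known to be measured), which is exactly what the two COUNT1 theorems certify.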
We shall construct the desired function using a standard set-theoretic method: we consider partial functions that "approximate" the desired function.

> *This is potentially confusing, since in both cases the function is (general) recursive in the usual mathematical sense. No confusion should arise from our convention, which is derived from everyday usage in computer programming, since we will nowhere discuss in this book functions that are not general recursive in the mathematical sense. Our definition principle does not permit mutually recursive definitions. If we defined `f` in terms of a new function `g`, then after the definition, `g` would no longer be new, and hence `g` could not be defined.*

```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 69
Context: ```markdown
# 3.9. DEFINITIONS

Unfortunately, different set theories supply different answers to the question: what is the value of applying a function to an object not in the function's domain? Instead of adopting a particular theory of sets, we shall make sure in the following proof not to apply any function to an object not in the function's domain. Furthermore, we shall assume the existence of some universal set D to be the domain of discourse for all the functions that we axiomatize in our theory. To be precise, we assume that:

> There exists a set D such that each function symbol `f` mentioned as a function symbol in any axiom denotes a function whose domain is Dⁿ and whose range is a subset of D, where n is the number of arguments of `f`.

If G is a function whose domain is a subset of Dⁿ, for some n, and whose range is a subset of D, then the extension of G is the function on Dⁿ that is defined to be (G X₁ ... Xₙ) if (X₁, ..., Xₙ) is a member of the domain of G, and is defined to be (TRUE) otherwise.
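The "extension" of a partial function can be sketched in a few lines. In this sketch (mine, not the book's), a Python dict keyed by argument tuples stands in for a set-theoretic partial function on Dⁿ, and a tagged tuple stands in for the object (TRUE).

```python
# A sketch of extension: off the partial function's domain,
# the extension returns the fixed object (TRUE).

TRUE = ("TRUE",)   # a stand-in for the object (TRUE)

def extension(g):
    """Extend the partial function g (a dict on arg-tuples) to all tuples."""
    def g_prime(*args):
        return g.get(args, TRUE)
    return g_prime

G = {(0,): 1, (1,): 2}    # a partial function on a subset of D^1
Gp = extension(G)
assert Gp(0) == 1         # agrees with G on G's domain
assert Gp(7) == TRUE      # returns (TRUE) off G's domain
```

This is the device that lets the proof below evaluate body[H'] everywhere without ever applying a function outside its domain.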
Suppose that we have in mind some specific `x₁, ..., xₙ`, `body`, `r`, and `m`, and suppose they satisfy the conditions (a) through (d) of the definition principle. Before we add the defining axiom (f x₁ ... xₙ) = body, we wish to prove that there exists a unique function defined on Dⁿ that satisfies the equation. Without loss of generality, suppose that `f`, `x₁, ..., xₙ`, `r`, and `m` are the symbols FN, X₁, ..., Xₙ, R, and M respectively. Let us adopt the notational convention that b[s] is an abbreviation for the term obtained by replacing every occurrence of FN as a function symbol in the term b with the symbol s. (For example, if the term b is (ADD1 (FN X₁)), then b[G] is an abbreviation for (ADD1 (G X₁)).) Let RM be the well-founded relation defined on n-tuples by: (U₁, ..., Uₙ) is RM-smaller than (V₁, ..., Vₙ) if and only if (R (M U₁ ... Uₙ) (M V₁ ... Vₙ)) ≠ F. Let us say that a subset S of Dⁿ is RM-closed if and only if every member of Dⁿ RM-smaller than a member of S is itself a member of S. Let us say that a function H is partially correct if (a) its domain is an RM-closed subset of Dⁿ, (b) its range is a subset ```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 70
Context: ```markdown
# CHAPTER 3. A PRECISE DEFINITION OF THE THEORY

of D, and (c) if H' is the extension of H, then for each (X₁, ..., Xₙ) in the domain of H, (H X₁ ... Xₙ) = body[H']. We now prove a lemma that is used frequently below. Its proof is quite tedious.

## Lemma

Suppose:

- F₁ is a function whose domain is a subset of Dⁿ and whose range is a subset of D.
- F₁' is the extension of F₁.
- G₁ is partially correct.
- G₁' is the extension of G₁.
- (X₁, ..., Xₙ) is in Dⁿ.
- F₁ and G₁ are defined and agree upon every member of Dⁿ that is RM-smaller than (X₁, ..., Xₙ).

Then body[F₁'] = body[G₁'].

## Proof

Let subterm₁, ..., subtermₖ be an enumeration of the occurrences of the subterms of body, and suppose that if the term subterm_j is a proper subterm of the term subterm_i, then j < i. Let tests_i, for 1 ≤ i ≤ k, be the conjunction of the FN-free terms governing subterm_i. We prove, for i from 1 to k:

*1 (IMPLIES tests_i (EQUAL subterm_i[F₁'] subterm_i[G₁'])).

Suppose that we have proved *1 for all j < i. To prove *1 for i, we consider the form of subterm_i.

**Case 1**: The term subterm_i is a variable. The proof in this case is immediate.

**Case 2**: The term subterm_i has function symbol IF. Then subterm_i is (IF subterm_a subterm_b subterm_c) for some a, b, and c less than i. Hence we have previously proved:

*2 (IMPLIES tests_a (EQUAL subterm_a[F₁'] subterm_a[G₁'])).
*3 (IMPLIES tests_b (EQUAL subterm_b[F₁'] subterm_b[G₁'])).
*4 (IMPLIES tests_c (EQUAL subterm_c[F₁'] subterm_c[G₁'])).

If subterm_a uses FN as a function symbol, then tests_i = tests_a = tests_b = tests_c, because the FN-free terms governing the occurrences of subterm_a, subterm_b, and subterm_c are the same as those governing subterm_i.
``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 71 Context: ```markdown ## 3.9 DEFINITIONS subterm₁ and subterm₂. So *1 follows from *2, *3, and *4. If subterm₂ does not use FN as a function symbol, tests₁ = tests₂ = (AND tests₁, subterm₂) and tests₁ = (AND tests₂, (NOT subterm₂)). If subterm₂ ≠ F, then subterm₁[F₁] = subterm₂[F'] and subterm₂[G₁] = subterm₂[G']. So *1 follows from *3. If subterm₂ = F, then subterm₁[F₁] = subterm₂[F'] and subterm₂[G₁] = subterm₂[G']. So *1 follows from *4. ### Case 3: The term subterm₁ has a function symbol other than IF. Suppose subterm₁ has the form g(subterm₁, ... subterm₀) for some function symbol g, where jₐ, 1 ≤ a < b. Note that tests₁ = tests₁ₐ, for i ≤ a < b. *# (IMPLIES tests₁ (EQUAL subterm₁, [F₁] subterm₂, [G₁])). If g is not the function symbol FN, then *1 follows immediately from the *#. So suppose g is FN and b is thus n. If tests₁ = F, then *1 is true. If tests₁ = F, then by condition (d) of the application of the definition principle, it is a theorem that: * d (IMPLIES tests₁ (R (M subterm₁, ... subtermₕ) (M x₁ ... xₙ))). When *d was proved no axioms about FN had been posited. Hence, a proof of: * d1 (IMPLIES tests₁ (R (M subterm₁, [F₁] ... subterm₁, [F'] ) (M x₁ ... xₙ))) and a proof of: * d1 (IMPLIES tests₁ (R (M subterm₁, [G₁] ... subterm₁, [G'] ) (M x₁ ... xₙ))) may be similarly produced. Hence, (subterm₁, [F₁], ... subterm₁, [F']) is RM-smaller than (X₁, ... Xₙ) and so is (subterm₁, [G₁], ... subtermₕ, [G']). But F₁ and G₁ are defined and agree on members of Dᵣ RM-smaller than (X₁, ... Xₙ). Thus, *1 follows from the hypothesis that tests₁ ≠ F and the *#. Q.E.D. We now turn to the construction of the unique function satisfying the defining equation (FN X₁ ... Xₙ) = body. 
```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 72
Context: ```markdown
# Chapter 3. A Precise Definition of the Theory

Let F₀ be the union of all partially correct functions. We will prove that F₀ is the desired function by demonstrating that (a) F₀ is a function, (b) F₀ is partially correct, and (c) the domain of F₀ is Dⁿ. The uniqueness of F₀ follows from (a), (b), (c), and the fact that any other function defined on Dⁿ to D satisfying the defining equation is partially correct and hence a subset of F₀.

## Proof that F₀ is a Function

If F₀ is not a function it is multiply defined somewhere. Let (X₁, ..., Xₙ) be an RM-minimal member of Dⁿ such that, for some two distinct values, Z₁ and Z₂ say, both ⟨(X₁, ..., Xₙ), Z₁⟩ and ⟨(X₁, ..., Xₙ), Z₂⟩ are members of F₀. Let F₁ and G₁ be partially correct functions that contributed ⟨(X₁, ..., Xₙ), Z₁⟩ and ⟨(X₁, ..., Xₙ), Z₂⟩ to F₀. Let F₁' and G₁' be the extensions of F₁ and G₁. Both F₁ and G₁ are defined upon all members of Dⁿ RM-smaller than (X₁, ..., Xₙ) because both are partially correct. F₁ and G₁ have the same values on all members of Dⁿ RM-smaller than (X₁, ..., Xₙ) because (X₁, ..., Xₙ) is RM-minimal. Therefore, by the lemma, body[F₁'] = body[G₁']. But Z₁ = (F₁ X₁ ... Xₙ) = body[F₁'] = body[G₁'] = (G₁ X₁ ... Xₙ) = Z₂, because both F₁ and G₁ are partially correct. Q.E.D.

## Proof that F₀ is Partially Correct

The domain of F₀ is an RM-closed subset of Dⁿ because it is the union of RM-closed subsets of Dⁿ. The range of F₀ is a subset of D. Let F₀' be the extension of F₀. Let (X₁, ..., Xₙ) be any member of the domain of F₀ such that (F₀ X₁ ... Xₙ) ≠ body[F₀'].
Let G be a partially correct function with (X₁, ..., Xₙ) in its domain such that (G X₁ ... Xₙ) = (F₀ X₁ ... Xₙ). Let G' be the extension of G. By applying the lemma to F₀, F₀', G, G', and (X₁, ..., Xₙ) we infer that body[F₀'] = body[G']. But (F₀ X₁ ... Xₙ) = (G X₁ ... Xₙ) = body[G'] = body[F₀'], a contradiction. Q.E.D.

## Proof that the Domain of F₀ is Dⁿ

Suppose that F₀ is not defined on every element of Dⁿ. Let (Y₁, ..., Yₙ) be an RM-minimal member of Dⁿ ```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 73
Context: ```markdown
not in the domain of F₀. Let F₀' be the extension of F₀. Let G be the function that results from adding to F₀ the ordered pair ⟨(Y₁, ..., Yₙ), body[F₀']⟩. If we can show that G is partially correct a contradiction will arise, because then G would be a subset of F₀ by the definition of F₀. The domain of G is an RM-closed subset of Dⁿ because it was formed by adding to an RM-closed subset of Dⁿ an RM-minimal element of Dⁿ not in that subset. Let G' be the extension of G. We need to show for every n-tuple (X₁, ..., Xₙ) in the domain of G that (G X₁ ... Xₙ) = body[G']. For every (X₁, ..., Xₙ) in the domain of G, we may apply the lemma to G, G', F₀, F₀', and (X₁, ..., Xₙ) to infer that body[G'] = body[F₀'].
If (Y₁, ..., Yₙ) = (X₁, ..., Xₙ), then (G X₁ ... Xₙ) = body[F₀'] = body[G']. If (Y₁, ..., Yₙ) ≠ (X₁, ..., Xₙ), then (G X₁ ... Xₙ) = (F₀ X₁ ... Xₙ) = body[F₀'] = body[G']. Q.E.D. That concludes the proof that the definition principle is sound. No constructivist would be pleased by the foregoing justification of recursive definition because of its freewheeling, set-theoretic character. The truth is that induction and inductive definition are more basic than the truths of high-powered set theory, and it is slightly odd to justify a fundamental concept such as inductive definition with set theory.

3.10 Lexicographic Relations
============================

Our theory requires one more concept: the idea of lexicographic relations. To define `l` to be the lexicographic relation induced by `r` and `s`, where: ```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 74
Context: ```markdown
# CHAPTER 3. A PRECISE DEFINITION OF THE THEORY

- (a) `l` is a new function symbol of 2 arguments,
- (b) `r` and `s` are function symbols of 2 arguments, and
- (c) neither `r` nor `s` is `l`,

means to add as an axiom the following defining equation:

```
(l P1 P2)
(OR (r (CAR P1) (CAR P2))
    (AND (EQUAL (CAR P1) (CAR P2))
         (s (CDR P1) (CDR P2))))
```

That is, `l` orders pairs of objects by first comparing their CAR components using `r`, but using `s` on their CDR components if the test with `r` fails and the CARs are equal. (The name "lexicographic" is inspired by the alphabetic order used by lexicographers.) If `r` and `s` denote well-founded relations and `l` is defined to be the lexicographic relation induced by `r` and `s`, then `l` denotes a well-founded relation.

## Proof

Suppose that x₁, x₂, ... is an infinite sequence and that for all positive i, (l x_{i+1} x_i) ≠ F.
By the definition of l, for each i either (r (CAR x_{i+1}) (CAR x_i)) ≠ F, or else (CAR x_{i+1}) = (CAR x_i) and (s (CDR x_{i+1}) (CDR x_i)) ≠ F. But since r is well-founded, the sequence (CAR x₁), (CAR x₂), ... cannot be infinitely descending in r. Hence, for some i, for all j ≥ i, (CAR x_j) = (CAR x_i). But the sequence (CDR x_i), (CDR x_{i+1}), ... must then be infinitely descending in s, a contradiction. Q.E.D.

## 3.11 Lessp and Count

We have now finished defining the formal theory that we use as the logical basis of our theorem-proving system. We now use the theory to define two functions, LESSP and COUNT, that play central roles in our proofs (but have no role in the formal definition of the theory). LESSP and COUNT are the well-founded relation and measure function we use most often in applying our principles of induction and definition. (LESSP X Y) returns T if X is less than Y and F otherwise. LESSP treats nonnumeric arguments as 0. LESSP determines ```
####################
File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 75
Context: ```markdown
### 3.11. LESSP AND COUNT

whether X is less than Y by counting them both down to 0, seeing which gets there first.

#### Definition

**LESSP X Y**

```
(IF (ZEROP Y)
    F
    (IF (ZEROP X)
        T
        (LESSP (SUB1 X) (SUB1 Y)))).
```

Since (IMPLIES (NOT (ZEROP X)) (SUB1P (SUB1 X) X)) is a theorem, the first argument of LESSP gets SUB1P-smaller in the recursive call. (In fact, so does the second argument.) Thus, LESSP is admitted under the principle of definition. We claim that LESSP is a well-founded relation. That is, we claim there is no infinite sequence x₁, x₂, ... such that x_{i+1} is LESSP-smaller than x_i.
It is easy to see how to prove that LESSP is well-founded in a suitable theory of sets, since SUB1 is well-founded, and \(x\) is LESSP-smaller than \(y\) if and only if a finite number of SUB1s will reduce \(y\) to \(x\) (when both are numbers). We cannot state or prove the well-foundedness of LESSP within our theory. We assume LESSP to be a well-founded relation. By virtue of this assumption, it is permitted to make induction arguments in which some measure gets LESSP-smaller in the induction hypotheses. Similarly, it is permitted to define recursive functions in which some measure gets LESSP-smaller in the recursive calls. A particularly useful measure is the "size" of a shell object, obtained by adding one to the sum of the sizes of its components. We first define the addition function for the nonnegative integers. We could use the function SUM defined above. However, SUM suffers from the disadvantage of sometimes returning a nonnumber (it returns Y, whatever that is, when X is 0). SUM
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 76 Context:
```markdown
# CHAPTER 3. A PRECISE DEFINITION OF THE THEORY

is thus not commutative (e.g., (SUM 0 T) = T, while (SUM T 0) = 0). We thus make the following definitions:

## Definition

**(FIX X)**
```
(IF (NUMBERP X) X 0)
```

## Definition

**(PLUS X Y)**
```
(IF (ZEROP X)
    (FIX Y)
    (ADD1 (PLUS (SUB1 X) Y)))
```

We adopt the notational convention of writing **(PLUS a b c)** for **(PLUS a (PLUS b c))**, etc. Now assume we have added all the shells we will use. We define the function **COUNT** to return 0 on bottom objects, to return 1 plus the sum of the COUNTs of the components of a nonbottom shell object, and to return 0 on any nonshell object.
For example, if the only shells we were ever to add were **ADD1**, **PACK**, and **CONS**, we would define **COUNT** as:

## Definition

**(COUNT X)**
```
(IF (NUMBERP X)
    (IF (EQUAL X 0)
        0
        (ADD1 (COUNT (SUB1 X))))
    (IF (LITATOM X)
        (IF (EQUAL X NIL)
            0
            (ADD1 (COUNT (UNPACK X))))
        (IF (LISTP X)
            (ADD1 (PLUS (COUNT (CAR X))
                        (COUNT (CDR X))))
            0)))
```

The immediately preceding definition of COUNT would be admitted under the principle of definition since at each stage the
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 77 Context:
```markdown
# 3.12 Conclusion

This concludes the discussion of our formal theory. We recap the topics presented:

- We defined with axioms certain functions including **IF** and **EQUAL**.
- We introduced the idea of well-founded relations.
- We introduced a principle of induction.
- We introduced a general mechanism for adding "new" types of **colored** n-tuples called **shells**.
- We used the shell principle to add axioms for the natural numbers, literal atoms, and ordered pairs.
- We introduced a principle of definition which allows the introduction of recursive functions.

Theorems may be proved from the shell axioms and the definition of **COUNT**: for each shell constructor const, the theorem:

```
(EQUAL (COUNT (const X1 ... Xn))
       (ADD1 (PLUS (IF tr1 (COUNT X1) 0)
                   ...
                   (IF trn (COUNT Xn) 0))))
```

and

```
(EQUAL (COUNT (btm)) 0)
```

(omitting the latter if const has no bottom object), and for each i from 1 to n, the theorem:

```
(IMPLIES (AND (r X) (NOT (EQUAL X (btm))))
         (LESSP (COUNT (ac X)) (COUNT X)))
```

(using T for (NOT (EQUAL X (btm))) if const has no bottom object).
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 78 Context:
```markdown
# CHAPTER 3. A PRECISE DEFINITION OF THE THEORY

We introduced the concept of a lexicographic relation.
We used the principle of definition to introduce the usual “less than” function, assumed it was well-founded, and defined the measure function **COUNT** that computes the size of an object. ``` #################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 79 Context: ```markdown # Chapter 4 ## The Correctness of a Tautology Checker Before we describe how we prove theorems in the theory just presented, it is important that the reader be familiar with the theory itself. In addition, it is useful to go through the proofs of some difficult theorems so that the reader gets a feel for what is coming in subsequent chapters. Finally, readers unfamiliar with mechanical theorem-proving may be curious about how one proves theorems mechanically. Since all three of these objectives should be addressed before we begin to present our proof techniques, we have chosen to illustrate them all in a rather novel example: the mechanical proof of the correctness of a simple mechanical theorem prover. In particular, we prove the correctness of a decision procedure for the propositional calculus. In the standard development of logic, the propositional calculus is presented first. As in our theory, it often forms part of the logical underpinnings of richer theories. In addition, it offers a simple way of introducing certain important ideas such as soundness, completeness, and decision procedures. Because of its foundational role, discussions of the propositional calculus are usually carried on in the informal “metalanguage.” For example, a common definition of the value of the formula “p & q” is that it is “true if both p and q have the value true, and false otherwise.” In this chapter, we exercise the expressive power of our theory and clarify certain aspects of it, by formalizing the semantics of a version of propositional calculus in our theory. 
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 80 Context:
```markdown
# Chapter 4. The Correctness of a Tautology Checker

We then introduce certain very simple theorem-proving ideas (such as how to apply a theorem as a rewrite rule, and how to keep track of what assumptions govern a subformula of a formula) by writing, as a recursive function in our theory, a decision procedure for the propositional calculus. Finally, we illustrate some of our proof techniques by proving that the decision procedure is well-defined, recognizes only tautologies, and recognizes all tautologies. The proofs described are actually carried out by our own mechanical theorem prover, and the discussion of the proofs illustrates the role of the human user in our automatic theorem-proving system.

## 4.1 Informal Development

Throughout this chapter we will be concerned with the set of terms constructed entirely from variables, T, F, and the function symbol IF. We call such terms "propositional IF-expressions." Examples of propositional IF-expressions are:

- `(IF A B C)`
- `(IF T F (IF A B C))`
- `(IF (IF P Q F) (IF P T Q) T)`

Note that the first of these expressions sometimes has the value F (when A is T and B is F, for example) but sometimes does not have the value F (when A is T and B is T, for example). On the other hand, the second expression always has the value F, and the third expression never has the value F. We call a propositional IF-expression that never has the value F a "tautology." Note that any formula of the ordinary propositional calculus can be converted to an equivalent propositional IF-expression by using the definitions of AND, OR, NOT, and IMPLIES presented in Chapter 3.
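For concreteness, the Chapter 3 definitions of the connectives in terms of IF can be transcribed into Python, representing the term (IF x y z) as a tagged tuple and the constants T and F as the strings "T" and "F" (these representation choices are ours, not the book's):

```python
def IF(test, left, right):
    # Represent the term (IF test left right) as a tagged tuple.
    return ("IF", test, left, right)

# Chapter 3 definitions of the propositional connectives via IF:
def NOT(p):
    return IF(p, "F", "T")              # (NOT P)       = (IF P F T)

def AND(p, q):
    return IF(p, IF(q, "T", "F"), "F")  # (AND P Q)     = (IF P (IF Q T F) F)

def OR(p, q):
    return IF(p, "T", IF(q, "T", "F"))  # (OR P Q)      = (IF P T (IF Q T F))

def IMPLIES(p, q):
    return IF(p, IF(q, "T", "F"), "T")  # (IMPLIES P Q) = (IF P (IF Q T F) T)
```

For example, `IMPLIES("P", "Q")` yields the propositional IF-expression `("IF", "P", ("IF", "Q", "T", "F"), "T")`.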
(Throughout the remainder of this chapter we shall use "expression" as a shorthand for "propositional IF-expression.") It is our aim in this chapter to construct a procedure for deciding whether an expression is a tautology. Our first step is to indicate more precisely what we mean by "the value of an expression." Informally, let us say that v is an assignment provided that v is a function, its domain includes T, F, and the
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 81 Context:
```markdown
# 4.1. INFORMAL DEVELOPMENT

variables, and \( v \) maps \( T \) to \( T \) and \( F \) to \( F \). If \( v \) is an assignment, then by "the assignment of \( x \) under \( v \)" we mean \( v(x) \). Then the value of the expression \( x \) under the assignment \( v \) is defined recursively as follows:

- If \( x \) has the form \( (\text{IF } p\ q\ r) \), then:
  - if the value of \( p \) under \( v \) is \( F \), return the value of \( r \) under \( v \);
  - else return the value of \( q \) under \( v \);
- else return the assignment of \( x \) under \( v \).

We want to define a mechanical procedure that, when given an expression \( x \), returns non-\( F \) if for every assignment \( v \) the value of \( x \) under \( v \) is non-\( F \), and returns \( F \) if for some assignment \( v \) the value of \( x \) under \( v \) is \( F \). We will call our procedure TAUTOLOGY.CHECKER. There are many ways to write TAUTOLOGY.CHECKER. The question "What is the most efficient way to write TAUTOLOGY.CHECKER?" is actually one of the most important unsolved problems in computer science. One method, called the "truth table" method, considers all possible assignments of \( T \) and \( F \) to the variables in the given expression. The truth table method requires execution time exponential in the number of variables in the expression.
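The truth-table method just described can be sketched as follows (the tuple representation, the helper names, and the restriction to T/F assignments are our own simplifications):

```python
from itertools import product

def value(x, env):
    # Value of an expression under an assignment: T and F evaluate to
    # themselves, variables are looked up (defaulting to "F"), and an
    # IF selects its left branch on any non-F test value.
    if isinstance(x, tuple):              # ("IF", test, left, right)
        _, test, left, right = x
        return value(left, env) if value(test, env) != "F" else value(right, env)
    if x in ("T", "F"):
        return x
    return env.get(x, "F")

def variables(x):
    # Collect the variables occurring in an expression.
    if isinstance(x, tuple):
        return set().union(*(variables(sub) for sub in x[1:]))
    return set() if x in ("T", "F") else {x}

def truth_table_tautologyp(x):
    # Exponential in the number of variables: try every T/F assignment.
    vs = sorted(variables(x))
    return all(value(x, dict(zip(vs, vals))) != "F"
               for vals in product("TF", repeat=len(vs)))
```

On the three example expressions above, this brute-force checker accepts only the third.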
No one knows a method that does not require exponential time in the worst case. Furthermore, no one has yet proved that all algorithms require exponential time in the worst case. The version of TAUTOLOGY.CHECKER that we present is more efficient than the truth-table method on one important class of expressions, namely those in "IF-normal form."[^1] An expression \( x \) is said to be in IF-normal form provided that no subterm of \( x \) beginning with an IF has as its first argument another term beginning with an IF. Of the three example formulas above, the first two are in IF-normal form and the last is not. When a formula is in IF-normal form, we can decide whether it is a tautology very easily: we consider each "branch" through

[^1]: This nomenclature is an excellent example of the time-honored custom of referring to a problem we cannot handle as abnormal, irregular, improper, degenerate, inadmissible, and otherwise undesirable. From Kelley [25], on "normal" spaces.
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 82 Context:
```markdown
# CHAPTER 4. THE CORRECTNESS OF A TAUTOLOGY CHECKER

the expression and require either that the tests through which we pass are contradictory or that the tests through which we pass force the output to be something other than F. Consider, for example, the expression:

```
(IF P (IF T T
          F)
      (IF Q (IF P F
                Q)
            T))
```

There are five branches through this expression (one output per line). The first branch returns T. The second branch returns F but can never be taken because the second test, on T, never fails. The third branch can never be taken because the first test, on P, must have returned F, so the second must also. The fourth branch returns Q, which is not F because the earlier test on Q determined that Q was not F. And the last branch returns T. So the expression is a tautology.
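The IF-normal-form condition is easy to state as a predicate; a Python sketch (tuple representation as before, names our own):

```python
def is_if(x):
    return isinstance(x, tuple) and x[0] == "IF"

def if_normal_p(x):
    # x is in IF-normal form when no IF anywhere inside it has
    # another IF as its test (first argument).
    if not is_if(x):
        return True
    _, test, left, right = x
    return not is_if(test) and if_normal_p(left) and if_normal_p(right)
```

On the three example expressions of 4.1, the predicate accepts the first two and rejects the third, as the text says.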
Informally, then, we have a method for deciding which expressions in IF-normal form are tautologies. To use the method on expressions in general (rather than just those in IF-normal form), we convert the given expression into an equivalent one (that is, one that always has the same value) in IF-normal form. We achieve this normalization by applying the theorem:

```
(EQUAL (IF (IF P Q R) LEFT RIGHT)
       (IF P (IF Q LEFT RIGHT)
             (IF R LEFT RIGHT)))
```

repeatedly, as a rewrite rule from left to right. That is, whenever we find an expression of the form `(IF (IF p q r) left right)`, we replace it with the equivalent `(IF p (IF q left right) (IF r left right))`. Normalizing an expression may produce a formula that is exponentially larger than the one with which we started. So in the worst case, our procedure is at least exponential. The foregoing sketch of a decision procedure for the tautology problem is very informal. Below we reconsider both the problem and its solution very formally – in fact we formalize both the problem and its solution using the theory presented in Chapter 3. One way to view the formal presentation is as an interaction between four participants: a "buyer" who wants to pur...
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 83 Context:
```markdown
4.2 Formal Specification of the Problem
=======================================

In this section we play the role of the buyer: we specify our requirements by formally defining what a propositional IF-expression is, what the value of such an expression is, and what it means for a function to be a decision procedure.

### 4.2.1 Representing Expressions

Since we want to define functions on IF-expressions, we must represent IF-expressions as objects in our theory.
From the point of view of general-purpose theorem-proving programs, the most natural and convenient way to represent terms is to represent variables as literal atoms and to represent the term \( (f\ x_1 \ldots x_n) \) as the sequence whose CAR is the literal atom \( f \) and whose CDR is the sequence of objects representing the terms \( x_1 \) through \( x_n \). This is the representation we use in our theorem prover. However, this representation makes it awkward to refer to subterms. For example, if \( x \) represented \( (\text{IF}\ test\ left\ right) \), then in order to refer to the third argument one would write \( (\text{CAR}\ (\text{CDR}\ (\text{CDR}\ x))) \). With ease of reading in mind, we represent IF-expressions in this chapter by employing a new shell class (the green triples, say), called the IF.EXPRPs, which we introduce with this:

#### Shell Definition

Add the shell `CONS.IF` of three arguments, with recognizer `IF.EXPRP`, accessors `TEST`, `LEFT.BRANCH`, and `RIGHT.BRANCH`, default values `'NIL'`, `'NIL'`, and `'NIL'`, and well-founded relation `TEST.LEFT.BRANCH.RIGHT.BRANCHP`.

Thus, we represent the term:
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 84 Context:
```markdown
# CHAPTER 4. THE CORRECTNESS OF A TAUTOLOGY CHECKER

`(IF x y z)` as `(CONS.IF x' y' z')`, where `x'`, `y'`, and `z'` are the representations of `x`, `y`, and `z` respectively. We use `T` and `F` to represent `T` and `F` respectively. For the purposes of this example we agree that any object other than `T`, `F`, or an `IF.EXPRP` represents a variable.

## 4.2.2 Formal Definitions of Assignment and Value

To represent assignments we use the standard "association list" technique from LISP programming. An association list (or simply "alist") is a sequence of pairs interpreted as associating with the `CAR` of each pair the item of information in the `CDR`.
The recursive function `ASSIGNMENT` interprets alists as assignments of values to terms. `ASSIGNMENT` returns the value assigned to a given term in a given alist (or `F` if it is not explicitly assigned). `ASSIGNMENT` assigns `T` and `F` to themselves. Here is the definition of `ASSIGNMENT`:

### Definition

```lisp
(ASSIGNMENT VAR ALIST)
=
(IF (EQUAL VAR T)
    T
    (IF (EQUAL VAR F)
        F
        (IF (NLISTP ALIST)
            F
            (IF (EQUAL VAR (CAAR ALIST))
                (CDAR ALIST)
                (ASSIGNMENT VAR (CDR ALIST))))))
```

`(NLISTP x)` is defined to be `(NOT (LISTP x))`. The formal definition of the value of the expression `X` under the assignment `ALIST` is:

### Definition

```lisp
(VALUE X ALIST)
```
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 85 Context:
```markdown
## 4.2. FORMAL SPECIFICATION OF THE PROBLEM

```lisp
(IF (IF.EXPRP X)
    (IF (VALUE (TEST X) ALIST)
        (VALUE (LEFT.BRANCH X) ALIST)
        (VALUE (RIGHT.BRANCH X) ALIST))
    (ASSIGNMENT X ALIST))
```

### 4.2.3 The Formal Correctness Specifications

As the buyer, we want a decision procedure for the propositional calculus. We now specify what we require of a decision procedure. First of all, we require that the decision procedure be introduced as a function. Let us call it TAUTOLOGY.CHECKER. Second, we require that if TAUTOLOGY.CHECKER recognizes an expression (in the sense that TAUTOLOGY.CHECKER returns something other than F when given the expression), then the expression must have a value other than F under all assignments². Stated formally, this requirement is:

#### Theorem TAUTOLOGY.CHECKER.IS.SOUND:

```
(IMPLIES (TAUTOLOGY.CHECKER X)
         (VALUE X A))
```

When specifying requirements, one must be very careful to say enough. The careless buyer might think that we have fully specified TAUTOLOGY.CHECKER. However, a function that always returned F (i.e., recognized nothing) would satisfy the above specification. We require more than that of TAUTOLOGY.CHECKER.
In fact, we require that it recognize all tautologies; when TAUTOLOGY.CHECKER fails to recognize an expression, there must exist an assignment for which the VALUE of the expression is F. Since we do not use existential quantification, how can we express the proposition that when TAUTOLOGY.CHECKER fails to recognize an expression there exists a falsifying assignment?

²The purist may note that in a freewheeling set-theoretic approach to validity, one would consider all assignments rather than merely the finite assignments to which we limit ourselves when we represent assignments as finite alists. Of course, no real damage is done, because (in a suitable theory of sets) one can prove that it is sufficient to restrict one's attention to assignments that assign a meaning to the finite number of variables in the term in which one is interested.
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 86 Context:
```markdown
# CHAPTER 4. THE CORRECTNESS OF A TAUTOLOGY CHECKER

The answer is that we require somebody to define a recursive function that explicitly constructs a falsifying assignment for a non-tautological expression. We call the function **FALSIFY**. Then the statement that the tautology checker recognizes all tautologies is:

## Theorem TAUTOLOGY.CHECKER.IS.COMPLETE:

```
(IMPLIES (NOT (TAUTOLOGY.CHECKER X))
         (EQUAL (VALUE X (FALSIFY X)) F))
```

That is, if the tautology checker fails to recognize an expression, then there is an assignment (namely, (FALSIFY X)) for which the value of the expression is F. From our perspective as the buyer, the definition of **FALSIFY** is irrelevant. That is, if somebody were to supply us with a legal definition of **TAUTOLOGY.CHECKER** (and also one of **FALSIFY**) such that the above two conjectures were theorems, then we would believe that **TAUTOLOGY.CHECKER** was a decision procedure.
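The buyer's ASSIGNMENT and VALUE definitions can be mirrored directly in Python (alists as lists of pairs, the tuple representation for CONS.IF triples, and the names are our own):

```python
def assignment(var, alist):
    # Mirror of ASSIGNMENT: T and F are assigned to themselves, and a
    # variable not bound in the alist defaults to "F".
    if var == "T":
        return "T"
    if var == "F":
        return "F"
    for key, val in alist:        # the CAAR / CDAR walk down the alist
        if key == var:
            return val
    return "F"

def value(x, alist):
    # Mirror of VALUE: any non-F test value selects the left branch.
    if isinstance(x, tuple):      # ("IF", test, left, right)
        _, test, left, right = x
        if value(test, alist) != "F":
            return value(left, alist)
        return value(right, alist)
    return assignment(x, alist)
```

For example, `value(("IF", "A", "B", "C"), [("A", "T"), ("B", "ok")])` follows the left branch and yields `"ok"`.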
## 4.3 The Formal Definition of TAUTOLOGY.CHECKER

We now take on the role of the implementor. The buyer's definitions and conjectures in the last section specify what is required of **TAUTOLOGY.CHECKER**. As the implementor, we now define a function we claim has the desired properties. Since the specified task is to write a simple mechanical theorem prover, it happens (in this example) that the implementor must appeal to some of the basic ideas in mechanical theorem-proving.

### 4.3.1 TAUTOLOGY.CHECKER

The definition of **TAUTOLOGY.CHECKER** requires two subsidiary concepts:

- **NORMALIZE**, which given an expression returns an equivalent expression in IF-normal form, and
- **TAUTOLOGYP**, which given an expression in IF-normal form and a list of assumptions determines if the expression is never F under those assumptions.
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 87 Context:
```markdown
# 4.3. THE FORMAL DEFINITION OF TAUTOLOGY.CHECKER

Given NORMALIZE and TAUTOLOGYP, we define TAUTOLOGY.CHECKER as:

## Definition

**(TAUTOLOGY.CHECKER X)**
```
(TAUTOLOGYP (NORMALIZE X) NIL)
```

## Normalize

Recall that the basic idea behind NORMALIZE is to apply the theorem:

```
(EQUAL (IF (IF P Q R) LEFT RIGHT)
       (IF P (IF Q LEFT RIGHT)
             (IF R LEFT RIGHT)))
```

as a rewrite rule until we have removed IFs from the tests of other IFs. Thus, to normalize an expression `x`, we proceed as follows:

1. If `x` is not an `IF.EXPRP`, we return `x`.
2. If `x` is of the form `(IF test left right)`, then we ask whether `test` is of the form `(IF p q r)`.
   - If so, we return the result of normalizing the expression `(IF p (IF q left right) (IF r left right))`.
   - If not, we return the expression `(IF test left' right')`, where `left'` and `right'` are the results of normalizing `left` and `right`.
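The informal two-case procedure above can be sketched in Python (the tuple representation and names are ours):

```python
def is_if(x):
    return isinstance(x, tuple) and x[0] == "IF"

def normalize(x):
    # Apply (IF (IF p q r) left right) ->
    #       (IF p (IF q left right) (IF r left right))
    # at the top until the test is IF-free, then normalize the branches.
    if not is_if(x):
        return x
    _, test, left, right = x
    if is_if(test):
        _, p, q, r = test
        return normalize(("IF", p, ("IF", q, left, right),
                                   ("IF", r, left, right)))
    return ("IF", test, normalize(left), normalize(right))
```

A single application of the rewrite rule is visible on `(IF (IF P Q R) L M)`, which normalizes to `(IF P (IF Q L M) (IF R L M))`.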
The formal definition of this process is:

## Definition

**(NORMALIZE X)**
```
(IF (IF.EXPRP X)
    (IF (IF.EXPRP (TEST X))
        (NORMALIZE (CONS.IF (TEST (TEST X))
                            (CONS.IF (LEFT.BRANCH (TEST X))
                                     (LEFT.BRANCH X)
                                     (RIGHT.BRANCH X))
                            (CONS.IF (RIGHT.BRANCH (TEST X))
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 88 Context:
```markdown
# CHAPTER 4. THE CORRECTNESS OF A TAUTOLOGY CHECKER

```lisp
                                     (LEFT.BRANCH X)
                                     (RIGHT.BRANCH X))))
        (CONS.IF (TEST X)
                 (NORMALIZE (LEFT.BRANCH X))
                 (NORMALIZE (RIGHT.BRANCH X))))
    X)
```

## Tautologyp

Now that we can put an expression into normal form, we consider TAUTOLOGYP, which determines whether an expression, x, in IF-normal form, is never F. Recall that the basic idea is to explore every branch through the IF-expression and check that either the tests along the branch are contradictory or else that the output of the branch is forced by the tests to be non-F. Our definition requires that TAUTOLOGYP have a second argument, alist, used to remember the tests that have been seen and whether they are being assumed true or false on the current branch. Since all the tests in a normalized IF-expression are variables (or else the constants T or F), the alist of assumptions can also be thought of as an assignment to some of the variables in x. Here is how TAUTOLOGYP determines that x is never F under the assumptions in alist. If x is not an IF.EXPRP, then it is either T, F, or a variable:

1. If x is T, then it is never F.
2. If x is F, then clearly it is "sometimes" F.
3. If x is neither T nor F, then it is a variable. If x is currently assumed non-F in alist, then x is never F under alist; otherwise x is sometimes F.
If x is an IF.EXPRP, say representing (IF test left right), then there are three possibilities:

- If test (which must be a variable, T, or F) is T or assumed non-F in alist, then x is never F under alist if and only if left never is.
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 89 Context:
```markdown
## 4.3. THE FORMAL DEFINITION OF TAUTOLOGY.CHECKER

- If `test` is F or assumed F in `alist`, then `x` is never F under `alist` if and only if `right` never is.
- Otherwise, `x` is never F under `alist` if and only if both:
  - `left` is never F under the assumptions in `alist` plus the additional assumption that `test` is non-F, and
  - `right` is never F under the assumptions in `alist` plus the additional assumption that `test` is F.

To define `TAUTOLOGYP` formally, we use four auxiliary functions:

1. **ASSIGNEDP**, which determines whether a given variable is explicitly assumed F or non-F in a given `alist`,
2. **ASSIGNMENT** (defined above), which returns the assumed value of a variable in an `alist`,
3. **ASSUME.TRUE**, which adds to an `alist` the pair that indicates that a given variable is being assumed non-F, and
4. **ASSUME.FALSE**, which adds to an `alist` the pair that indicates that a given variable is being assumed F.

The definitions of these functions are in Appendix A. **ASSIGNEDP** is very similar to **ASSIGNMENT**. **ASSUME.TRUE** and **ASSUME.FALSE** simply `CONS` the appropriate pair onto the `alist`. The formal definition of `TAUTOLOGYP` is:

```
Definition
(TAUTOLOGYP X ALIST)
=
(IF (IF.EXPRP X)
    (IF (ASSIGNEDP (TEST X) ALIST)
        (IF (ASSIGNMENT (TEST X) ALIST)
            (TAUTOLOGYP (LEFT.BRANCH X) ALIST)
            (TAUTOLOGYP (RIGHT.BRANCH X) ALIST))
```
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 90 Context:
```markdown
# CHAPTER 4.
THE CORRECTNESS OF A TAUTOLOGY CHECKER

```lisp
        (AND (TAUTOLOGYP (LEFT.BRANCH X)
                         (ASSUME.TRUE (TEST X) ALIST))
             (TAUTOLOGYP (RIGHT.BRANCH X)
                         (ASSUME.FALSE (TEST X) ALIST))))
    (ASSIGNMENT X ALIST))
```

## 4.3.2 Summary of the Simple Theorem-proving Ideas

As implementor of the buyer's specifications, we claim that our job is done: TAUTOLOGY.CHECKER is a decision procedure for the propositional calculus. Before discussing the proof of this assertion we review the simple theorem-proving techniques illustrated:

- Terms can (and must) be represented as objects. In our case, we used the shell facility to define a new class of objects, called the IF-expressions.
- Both NORMALIZE and TAUTOLOGYP illustrate how terms can be explored mechanically.
- The function NORMALIZE shows how a theorem can be exhaustively applied as a rewrite rule in a mechanical way.
- TAUTOLOGYP illustrates the use of alists of assumptions (manipulated by the functions ASSUME.TRUE, ASSUME.FALSE, ASSIGNEDP, and ASSIGNMENT) to remember one's context while exploring a formula.
- The use of NORMALIZE to produce an equivalent expression that is amenable to exploration by TAUTOLOGYP illustrates the value of "simplification" even when it produces a larger expression.

Our own proof techniques (in contrast to those of an implementor concerned only with the propositional calculus) involve ideas such as those above, but usually in much more elaborate form. In fact, the next five chapters of this book are concerned entirely with how we represent formulas, how we remember the assumptions governing the subexpressions in an expression, and how we use rewrite rules to simplify expressions.
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 91 Context:
```markdown
# 4.4 The Mechanical Proofs

We now describe the proofs of `TAUTOLOGY.CHECKER.IS.SOUND` and `TAUTOLOGY.CHECKER.IS.COMPLETE`.
The proofs are constructed entirely by our mechanical theorem prover. However, the mathematician user of the system plays an important role by suggesting that the theorem prover prove certain lemmas first, thus making the system cognizant of truths that were not evident to it previously. It is important to understand from the outset that an incompetent human user may not be able to get the theorem prover to admit that a valid conjecture is a theorem. On the other hand, the user does not have to be trusted: if a monkey were to cause the theorem prover to announce that a conjecture were a theorem, the conjecture would indeed be a theorem. In this section, we primarily play the role of the mathematician user. However, occasionally we take the role of the theorem prover to illustrate how the proofs go. The precise user commands to our theorem prover can be found in Appendix A. Events `CONS.IF` through `TAUTOLOGY.CHECKER.IS.SOUND` are what the user had to type to define the necessary concepts and cause the theorem prover to prove the desired results. ## 4.4.1 Complying With the Principle of Definition Before anything can be proved, the necessary concepts must be introduced. In particular, we must introduce the `CONS.IF` shell class and the functions derived above. Furthermore, the system must confirm that the shell principle and the definition principle admit the definitions. The introduction of `CONS.IF` is trivial. The only restrictions are syntactic in nature and the system immediately adds the axioms indicated in Chapter 3. As for the function definitions, the system must confirm that in each definition some measure of the arguments is decreasing according to a well-founded relation in every recursive call. All but one of the functions easily pass the test because either they have no recursive calls (e.g., `ASSUME.TRUE`) or they do simple recursion on components of `LISTPs` or `IF.EXPRPs` (e.g., ... 
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 92 Context:
```markdown
# CHAPTER 4. THE CORRECTNESS OF A TAUTOLOGY CHECKER

**ASSIGNMENT** and **TAUTOLOGYP**) so that the **COUNT** of some argument is always decreasing. However, one function, namely **NORMALIZE**, provides a nontrivial challenge because in one of its recursive calls the **COUNT** of its only argument increases. Before the theorem prover will accept the definition of **NORMALIZE**, we must lay a certain amount of groundwork. If the reader has not yet discovered a measure and well-founded relation that decreases when **NORMALIZE** recurses, he is encouraged to do so before reading further. We know of several measures and well-founded relations justifying the definition of **NORMALIZE**. We discuss only one of them, namely the first one we discovered. In general one has to justify all the recursive calls simultaneously. That is, it will not do to find one measure that goes down in one call and a different one that goes down in another unless they can be lexicographically combined to account for all the recursions. We will eventually exhibit such a lexicographic combination. To derive it, we will start with the first recursive call in **NORMALIZE**, the one in which the **COUNT** of its argument increases as a result of transforming \( (\text{IF (IF p q r) left right}) \) to \( (\text{IF p (IF q left right) (IF r left right)}) \). Consider the function **IF.DEPTH**:

### Definition

**(IF.DEPTH X)**
```
(IF (IF.EXPRP X)
    (ADD1 (IF.DEPTH (TEST X)))
    0)
```

**IF.DEPTH** is admitted under the principle of definition since it recurses on components of **IF.EXPRPs**. In particular, the lemma

```
(IMPLIES (IF.EXPRP X)
         (LESSP (COUNT (TEST X)) (COUNT X)))
```

(added by the shell mechanism when the **CONS.IF** shell was axiomatized) is precisely the theorem required by the definition principle.
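IF.DEPTH as just defined counts IF-nesting along the test chain only; a Python sketch (tuple representation and names ours):

```python
def is_if(x):
    return isinstance(x, tuple) and x[0] == "IF"

def if_depth(x):
    # ADD1 on the TEST component of an IF.EXPRP, 0 on anything else.
    return 1 + if_depth(x[1]) if is_if(x) else 0
```

The first recursive call of NORMALIZE drives this measure down: `(IF (IF P Q R) L M)` has IF.DEPTH 2, while the transformed `(IF P (IF Q L M) (IF R L M))` has IF.DEPTH 1.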
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 93 Context:
```markdown
## 4.4. THE MECHANICAL PROOFS

Now that `IF.DEPTH` has been admitted, consider what it does: it counts the depth of IF-nesting in the `TEST` component of a propositional IF-expression. Thus, when a function (such as `NORMALIZE`) recurses by changing `(IF (IF p q r) left right)` to `(IF p (IF q left right) (IF r left right))`, it drives down the `IF.DEPTH` of its argument, according to the well-founded relation `LESSP`. Thus, the lemma

### Theorem IF.DEPTH.GOES.DOWN:

```
(IMPLIES (AND (IF.EXPRP X) (IF.EXPRP (TEST X)))
         (LESSP (IF.DEPTH (CONS.IF (TEST (TEST X)) Y Z))
                (IF.DEPTH X)))
```

suggests a justification of `NORMALIZE`. The system proves `IF.DEPTH.GOES.DOWN` using the definitions of `LESSP` and `IF.DEPTH`. But now let us consider the other two recursive calls in `NORMALIZE`, those that recurse on the `LEFT.BRANCH` and `RIGHT.BRANCH` of `IF.EXPRPs`. If `IF.DEPTH` decreased according to `LESSP` in those recursions, we would be finished: `IF.DEPTH` would be a measure that got `LESSP`-smaller in all recursions, and `LESSP` is well-founded. But `IF.DEPTH` does not necessarily decrease in the last two recursions. For example, the `IF.DEPTH` of the expression:

```
(IF x (IF (IF a b c) d e) y)
```

is 1, while the `IF.DEPTH` of its `LEFT.BRANCH` is 2. In general, the `IF.DEPTH` of a branch of an `IF.EXPRP` is unrelated to that of the expression itself and might be arbitrarily bigger. We remedy the situation by defining a second measure, called `IF.COMPLEXITY`, and by proving that it gets `LESSP`-smaller on the latter two recursive calls while not changing on the first recursive call. Given such results it is clear that the measure:

```
(CONS (IF.COMPLEXITY X) (IF.DEPTH X))
```

gets lexicographically smaller on each call (using the well-founded relation induced by `LESSP` and `LESSP`).
Defining `IF.COMPLEXITY` so that it decreases on the branches of `IF.EXPRP`s is easy. (For example, `COUNT` goes down on the branches.)
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 94 Context:
```markdown
# CHAPTER 4. THE CORRECTNESS OF A TAUTOLOGY CHECKER

But defining `IF.COMPLEXITY` so that it also stays unchanged when (IF (IF p q r) left right) is transformed to (IF p (IF q left right) (IF r left right)) is more difficult. The definition of `IF.COMPLEXITY` that we use is:

## Definition
```lisp
(IF.COMPLEXITY X)
=
(IF (IF.EXPRP X)
    (TIMES (IF.COMPLEXITY (TEST X))
           (PLUS (IF.COMPLEXITY (LEFT.BRANCH X))
                 (IF.COMPLEXITY (RIGHT.BRANCH X))))
    1)
```

## Theorems

The three theorems:

1. **Theorem IF.COMPLEXITY.GOES.DOWN1**:
```lisp
(IMPLIES (IF.EXPRP X)
         (LESSP (IF.COMPLEXITY (LEFT.BRANCH X))
                (IF.COMPLEXITY X)))
```

2. **Theorem IF.COMPLEXITY.GOES.DOWN2**:
```lisp
(IMPLIES (IF.EXPRP X)
         (LESSP (IF.COMPLEXITY (RIGHT.BRANCH X))
                (IF.COMPLEXITY X)))
```

3. **Theorem IF.COMPLEXITY.STAYS.EVEN**:
```lisp
(IMPLIES (AND (IF.EXPRP X) (IF.EXPRP (TEST X)))
         (EQUAL (IF.COMPLEXITY
                  (CONS.IF (TEST (TEST X))
                           (CONS.IF (LEFT.BRANCH (TEST X))
                                    (LEFT.BRANCH X)
                                    (RIGHT.BRANCH X))
                           (CONS.IF (RIGHT.BRANCH (TEST X))
                                    (LEFT.BRANCH X)
                                    (RIGHT.BRANCH X))))
                (IF.COMPLEXITY X)))
```

establish that `IF.COMPLEXITY` has the desired properties. The system proves these theorems using the definitions of `LESSP` and `IF.COMPLEXITY`, together with the inductively proved lemma that `IF.COMPLEXITY` is never 0 and about 20 well-known lemmas about `PLUS`, `TIMES`, and `LESSP` and the relations between them. The user must have the theorem prover
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 95 Context:
```markdown
# 4.4 THE MECHANICAL PROOFS

prove these 20 arithmetic lemmas before it proves the IF.COMPLEXITY lemmas.
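The three IF.COMPLEXITY properties can be checked on a concrete instance with a small Python sketch (the tuple encoding of IF-expressions is an assumption of the sketch, not the book's notation):

```python
def if_exprp(x):
    # an IF-expression is the tuple ("IF", test, left, right)
    return isinstance(x, tuple) and len(x) == 4 and x[0] == "IF"

def if_complexity(x):
    """(IF.COMPLEXITY X): test-complexity times the sum of the
    branch complexities; 1 on leaves."""
    if if_exprp(x):
        _, t, left, right = x
        return if_complexity(t) * (if_complexity(left) + if_complexity(right))
    return 1
```

On `x = (IF (IF p q r) a b)` and its rotation `(IF p (IF q a b) (IF r a b))`, both sides have complexity 4 (STAYS.EVEN), while each branch of `x` has complexity 1 < 4 (GOES.DOWN1/2).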
We do not descend into the proofs here. The statements of the lemmas are among those preceding the tautology theorems in Appendix A. (In fact, we had the system prove these elementary arithmetic lemmas long before we even considered the tautology problem; the system recalled them automatically during the IF.COMPLEXITY proofs.)

Once the IF.COMPLEXITY theorems have been proved by the theorem prover (and thus brought to its attention), the theorem prover accepts the definition of NORMALIZE under our principle of definition and responds with:

The lemmas `IF.COMPLEXITY.GOES.DOWN1`, `IF.COMPLEXITY.GOES.DOWN2`, `IF.COMPLEXITY.STAYS.EVEN`, and `IF.DEPTH.GOES.DOWN` can be used to prove that:

```
(CONS (IF.COMPLEXITY X) (IF.DEPTH X))
```

decreases according to the well-founded lexicographic relation induced by `LESSP` and `LESSP` in each recursive call. Hence, `NORMALIZE` is accepted under the principle of definition. Observe that:

```
(OR (IF.EXPRP (NORMALIZE X))
    (EQUAL (NORMALIZE X) X))
```

is a theorem.

**CPU time (devoted to theorem-proving):** 1.388 seconds

It is important to note that the admissibility of `NORMALIZE` has been proved and in no way assumed. In particular, we defined two functions (`IF.DEPTH` and `IF.COMPLEXITY`) that were admitted to the theory because of shell axioms. Then we proved certain theorems about those functions, namely that the `IF.DEPTH`s (or `IF.COMPLEXITY`s) of certain expressions were `LESSP`-smaller than those of others. Some of these theorems required induction to prove (inductions justified by shell axioms). Finally, invoking the theorems just proved, the well-foundedness of `LESSP`, and the principle of lexicographic relations, we exhibited a measure and well-founded relation justifying the definition of `NORMALIZE`.
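A Python sketch of the normalization itself (again over the assumed tuple encoding) shows the recursion whose termination the measure justifies; the rotation in the first recursive call is exactly the transformation discussed above:

```python
def if_exprp(x):
    return isinstance(x, tuple) and len(x) == 4 and x[0] == "IF"

def normalize(x):
    """Sketch of NORMALIZE: rotate nested IFs out of the TEST slot,
    then normalize both branches."""
    if not if_exprp(x):
        return x
    _, t, left, right = x
    if if_exprp(t):
        _, p, q, r = t
        # (IF (IF p q r) left right) -> (IF p (IF q left right) (IF r left right))
        return normalize(("IF", p, ("IF", q, left, right),
                                   ("IF", r, left, right)))
    return ("IF", t, normalize(left), normalize(right))
```

On the rotation call, `if_complexity` stays even while `if_depth` drops; on the branch calls, `if_complexity` drops — so the pair `(if_complexity, if_depth)` decreases lexicographically, mirroring the machine's argument.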
Note further that the newly invented measure and well-founded relation permit an induction not permitted by the shell axioms alone: an induction in which, to prove an instance about (IF (IF p q r) left right), we may assume an instance about (IF p (IF q left right) (IF r left right)).
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 96 Context:
```markdown
# CHAPTER 4. THE CORRECTNESS OF A TAUTOLOGY CHECKER

## 4.4.2 Mechanical Proof of TAUTOLOGY.CHECKER.IS.SOUND

Now we turn our attention to the proofs of our main theorems. To enable the theorem prover to prove `TAUTOLOGY.CHECKER.IS.SOUND`, we decompose the problem into three main lemmas establishing properties of `NORMALIZE` and `TAUTOLOGYP`, the two “subroutines” of `TAUTOLOGY.CHECKER`. To prove one of these main lemmas, the theorem prover needs several subsidiary lemmas about assignments. Below we present the decomposition, the subsidiary lemmas, and the proofs of the main lemmas. Finally, we combine the main lemmas to prove `TAUTOLOGY.CHECKER.IS.SOUND`.

## Decomposition

We can establish that `TAUTOLOGY.CHECKER` recognizes only tautologies by proving that:

- when `TAUTOLOGYP` returns non-F on an expression in IF-normal form, the value of the expression is non-F under all assignments,
- `NORMALIZE` produces expressions in IF-normal form, and
- `NORMALIZE` produces an expression with the same `VALUE` as its input (so that if one is a tautology, the other is also).

Note that this was exactly the decomposition of the problem employed when we, in the role of the implementor, defined `TAUTOLOGY.CHECKER`. We define `NORMALIZED.IF.EXPRP` to be the recursive function that determines whether or not an expression is in IF-normal form (see Appendix A).
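The IF-normal-form predicate admits a short sketch in the same assumed tuple encoding: an expression is normalized exactly when no IF-expression occurs in any TEST slot.

```python
def if_exprp(x):
    return isinstance(x, tuple) and len(x) == 4 and x[0] == "IF"

def normalized_if_exprp(x):
    """Sketch of NORMALIZED.IF.EXPRP: no IF-expression in a TEST
    slot, at any depth; leaves are trivially normalized."""
    if not if_exprp(x):
        return True
    _, t, left, right = x
    return (not if_exprp(t)
            and normalized_if_exprp(left)
            and normalized_if_exprp(right))
```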
The formal statements of the three main lemmas are:

### Theorem TAUTOLOGYP.IS.SOUND:
```
(IMPLIES (AND (NORMALIZED.IF.EXPRP X)
              (TAUTOLOGYP X A1))
         (VALUE X (APPEND A1 A2)))
```

### Theorem NORMALIZE.NORMALIZES:
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 97 Context:
```markdown
# 4.4 THE MECHANICAL PROOFS

```
(NORMALIZED.IF.EXPRP (NORMALIZE X))
```

### Theorem NORMALIZE.IS.SOUND:
```
(EQUAL (VALUE (NORMALIZE X) A)
       (VALUE X A))
```

Note that the first lemma is more general than required by the decomposition. The decomposition calls for:

```
(IMPLIES (AND (NORMALIZED.IF.EXPRP X)
              (TAUTOLOGYP X 'NIL))
         (VALUE X A))
```

That is, if X is in IF-normal form and (TAUTOLOGYP X 'NIL) is non-F, then the VALUE of X is non-F under all assignments. The more general TAUTOLOGYP.IS.SOUND, which can be proved by induction, says that if X is in IF-normal form and (TAUTOLOGYP X A1) is non-F, then the VALUE of X is non-F under any assignment having A1 as an initial segment. TAUTOLOGYP.IS.SOUND reduces to the desired lemma when A1 is 'NIL, since (APPEND 'NIL A2) is A2.

## Subsidiary Lemmas

Before proving the three main lemmas, we make explicit four facts about assignments and VALUE.
First, since ASSIGNMENT returns F if the variable in question is not explicitly assigned in A, we conclude that when (ASSIGNMENT X A) is non-F, X must be explicitly assigned:

### Theorem ASSIGNMENT.IMPLIES.ASSIGNEDP:
```
(IMPLIES (ASSIGNMENT X A)
         (ASSIGNEDP X A))
```

Second, since ASSIGNMENT returns the first assignment it finds for X in an alist, the assignment of X in the alist (APPEND A B) is either the assignment of X in A (if X is explicitly assigned in A) or else is the assignment of X in B:

### Theorem ASSIGNMENT.APPEND:
```
(EQUAL (ASSIGNMENT X (APPEND A B))
       (IF (ASSIGNEDP X A)
           (ASSIGNMENT X A)
           (ASSIGNMENT X B)))
```
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 98 Context:
```markdown
# CHAPTER 4. THE CORRECTNESS OF A TAUTOLOGY CHECKER

Third, if an alist contains a pair that can be deleted without changing the assignment of the variable involved, then the value of any expression under the original alist is equal to that under the shorter alist. In particular, if `VAR` already has value `VAL` in `A`, then the value of any given expression under `(CONS (CONS VAR VAL) A)` is the same as its value under `A`. Since we are interested only in whether `VALUE`s of expressions are `F` or non-`F`, we can generalize the theorem to:

```
(IMPLIES (IFF VAL (ASSIGNMENT VAR A))
         (IFF (VALUE X (CONS (CONS VAR VAL) A))
              (VALUE X A)))
```

where `(IFF X Y)` is `T` if `X` and `Y` are both `F` or both non-`F` and `F` otherwise.
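These assignment facts can be exercised with a small Python sketch; alists are modelled as lists of `(var, value)` pairs, an assumption of this sketch, with unassigned variables defaulting to `False` in the role of F:

```python
def assignedp(var, alist):
    """Is var explicitly bound in the alist?"""
    return any(v == var for v, _ in alist)

def assignment(var, alist):
    """First binding wins; unassigned variables default to False (F)."""
    for v, val in alist:
        if v == var:
            return val
    return False

def if_exprp(x):
    return isinstance(x, tuple) and len(x) == 4 and x[0] == "IF"

def value(x, alist):
    """Evaluate an IF-expression; leaves take their assigned value."""
    if if_exprp(x):
        _, t, left, right = x
        return value(left, alist) if value(t, alist) else value(right, alist)
    return assignment(x, alist)
```

With these, ASSIGNMENT.APPEND is the observation that looking a variable up in `a + b` is the lookup in `a` when it is assigned there, and otherwise the lookup in `b`; the redundant-assignment lemma says consing on a pair that agrees (up to F/non-F) with the existing assignment cannot change whether a VALUE is F.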
In order to make the lemma useful as a rewrite rule, we actually express it as two implications:

## Theorem VALUE.CAN.IGNORE.REDUNDANT.ASSIGNMENTS:
```
(AND (IMPLIES (AND (IFF VAL (ASSIGNMENT VAR A))
                   (VALUE X A))
              (VALUE X (CONS (CONS VAR VAL) A)))
     (IMPLIES (AND (IFF VAL (ASSIGNMENT VAR A))
                   (NOT (VALUE X A)))
              (NOT (VALUE X (CONS (CONS VAR VAL) A)))))
```

Fourth, and finally, if `X` is an `IF.EXPRP` and is in IF-normal form, then the `VALUE` of `(TEST X)` under `A` is just the assignment of `(TEST X)` in `A`:

## Theorem VALUE.SHORT.CUT:
```
(IMPLIES (AND (IF.EXPRP X)
              (NORMALIZED.IF.EXPRP X))
         (EQUAL (VALUE (TEST X) A)
                (ASSIGNMENT (TEST X) A)))
```

The proofs of these four lemmas are straightforward. The first three are proved by induction and the fourth is immediate from the definitions of `NORMALIZED.IF.EXPRP` and `VALUE`. We do not discuss the proofs.
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 99 Context:
```markdown
# 4.4. THE MECHANICAL PROOFS

## Main Lemmas

We now return to the main lemmas in the decomposition of TAUTOLOGY.CHECKER.IS.SOUND. The hardest of these lemmas is the first:

### Theorem TAUTOLOGYP.IS.SOUND:
```
(IMPLIES (AND (NORMALIZED.IF.EXPRP X)
              (TAUTOLOGYP X A1))
         (VALUE X (APPEND A1 A2)))
```

We now sketch the machine's proof. Let `(p X A1 A2)` be a schematic representation of the above conjecture. The machine describes its induction analysis as follows:

```
(AND (IMPLIES (NOT (IF.EXPRP X)) (p X A1 A2))
     (IMPLIES (AND (IF.EXPRP X)
                   (p (RIGHT.BRANCH X)
                      (CONS (CONS (TEST X) F) A1)
                      A2)
                   (p (LEFT.BRANCH X)
                      (CONS (CONS (TEST X) T) A1)
                      A2)
                   (p (RIGHT.BRANCH X) A1 A2)
                   (p (LEFT.BRANCH X) A1 A2))
              (p X A1 A2)))
```

The inequalities `LEFT.BRANCH.LESSP` and `RIGHT.BRANCH.LESSP` establish that the measure (COUNT X) decreases according to the well-founded relation `LESSP` in the induction step of the scheme. Note, however, the inductive instances chosen for A1. (The inequalities `LEFT.BRANCH.LESSP` and `RIGHT.BRANCH.LESSP` are axioms added by the addition of the `CONS.IF` shell.)

The base case (in which X is not an `IF.EXPRP`) can be simplified to:

```
(IMPLIES (AND (NOT (IF.EXPRP X))
              (ASSIGNMENT X A1))
         (ASSIGNMENT X (APPEND A1 A2)))
```
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 100 Context:
```markdown
# CHAPTER 4. THE CORRECTNESS OF A TAUTOLOGY CHECKER

by applying the definitions of `NORMALIZED.IF.EXPRP`, `TAUTOLOGYP`, and `VALUE`, and further reduces to T using `ASSIGNMENT.APPEND` and `ASSIGNMENT.IMPLIES.ASSIGNEDP`.

The induction step is considerably more complicated. We break our analysis of it into two cases according to the `VALUE` of `(TEST X)` under `(APPEND A1 A2)`, which, by `VALUE.SHORT.CUT` and `ASSIGNMENT.APPEND`, gives rise to many cases depending on whether `(TEST X)` is assigned in `A1` or `A2` and what the assignment is. We sketch just one of these cases to indicate how the proof goes.

Suppose that `(TEST X)` is unassigned in `A1` and has a non-`F` assignment in `A2`. The conclusion, `(p X A1 A2)`, of the induction step simplifies to:

*concl*
```
(IMPLIES (AND (NOT (IF.EXPRP (TEST X)))
              (NORMALIZED.IF.EXPRP (LEFT.BRANCH X))
              (NORMALIZED.IF.EXPRP (RIGHT.BRANCH X))
              (TAUTOLOGYP (LEFT.BRANCH X)
                          (CONS (CONS (TEST X) T) A1))
              (TAUTOLOGYP (RIGHT.BRANCH X)
                          (CONS (CONS (TEST X) F) A1)))
         (VALUE (LEFT.BRANCH X) (APPEND A1 A2)))
```

using the definitions of `NORMALIZED.IF.EXPRP`, `TAUTOLOGYP`, and `VALUE`, the lemmas `ASSIGNMENT.APPEND` and `ASSIGNMENT.IMPLIES.ASSIGNEDP`, and the case assumptions about the assignment of `(TEST X)`.
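A Python sketch of the checker makes the case analysis above concrete. The treatment of `True`/`False` leaves below is an assumption of this sketch (the book's leaves are handled through VALUE and ASSIGNMENT); what it illustrates is the recursion pattern: follow assigned tests, case-split on unassigned ones, extending the alist exactly as in the induction hypotheses.

```python
def if_exprp(x):
    return isinstance(x, tuple) and len(x) == 4 and x[0] == "IF"

def assignedp(var, alist):
    return any(v == var for v, _ in alist)

def assignment(var, alist):
    for v, val in alist:
        if v == var:
            return val
    return False

def value(x, alist):
    if x is True or x is False:          # constant leaves: sketch assumption
        return x
    if if_exprp(x):
        _, t, left, right = x
        return value(left, alist) if value(t, alist) else value(right, alist)
    return assignment(x, alist)

def tautologyp(x, a):
    """Sketch of TAUTOLOGYP on IF-normal input."""
    if x is True or x is False:
        return x
    if not if_exprp(x):
        return assignment(x, a)
    _, t, left, right = x
    if assignedp(t, a):                  # follow the test's known value
        return tautologyp(left if assignment(t, a) else right, a)
    return (tautologyp(left,  [(t, True)]  + a)   # case-split on the test
            and tautologyp(right, [(t, False)] + a))
```

Soundness in the sense of TAUTOLOGYP.IS.SOUND can then be spot-checked: whenever `tautologyp(x, [])` holds for a normalized `x`, `value(x, a)` is non-F under every assignment `a`.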
Consider the second induction hypothesis

```
(p (LEFT.BRANCH X) (CONS (CONS (TEST X) T) A1) A2)
```

which, after applying the definition of `APPEND`, is:

*hyp*
```
(IMPLIES (AND (NORMALIZED.IF.EXPRP (LEFT.BRANCH X))
              (TAUTOLOGYP (LEFT.BRANCH X)
                          (CONS (CONS (TEST X) T) A1)))
         (VALUE (LEFT.BRANCH X)
                (CONS (CONS (TEST X) T) (APPEND A1 A2))))
```

By virtue of `ASSIGNMENT.APPEND` and our case analysis, we know that `(TEST X)` is already assigned non-`F` in `(APPEND A1 A2)`. Thus, by `VALUE.CAN.IGNORE.REDUNDANT.ASSIGNMENTS`, the pair `(CONS (TEST X) T)` can be deleted from the...
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 401 Context:
```markdown
```
(IF (LISTP L)
    (IF (LISTP (CDR L))
        (IF (LESSP (CAR L) (CADR L))
            F
            (ORDERED2 (CDR L)))
        T)
    T)
```

### 331. Definition
```
(DSORT L)
=
(IF (NLISTP L)
    'NIL
    (CONS (MAXIMUM L)
          (DSORT (DELETE (MAXIMUM L) L))))
```

### 332. Definition
```
(ADDTOLIST2 X L)
=
(IF (LISTP L)
    (IF (LESSP X (CAR L))
        (CONS (CAR L) (ADDTOLIST2 X (CDR L)))
        (CONS X L))
    (CONS X 'NIL))
```

### 333. Definition
```
(SORT2 L)
=
(IF (NLISTP L)
    'NIL
    (ADDTOLIST2 (CAR L) (SORT2 (CDR L))))
```

### 334. Theorem SORT2.GEN.1 (rewrite):
```
(PLISTP (SORT2 X))
```

### 335. Theorem SORT2.GEN.2 (rewrite):
```
(ORDERED2 (SORT2 X))
```

### 336. Theorem SORT2.GEN (generalize):
```
(AND (PLISTP (SORT2 X)) (ORDERED2 (SORT2 X)))
```

### 337. Theorem ADDTOLIST2.DELETE (rewrite):
```
(IMPLIES (AND (PLISTP Y)
              (ORDERED2 Y)
              (MEMBER X Y))
         (EQUAL (ADDTOLIST2 X (DELETE X Y)) Y))
```
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 402 Context:
```markdown
# APPENDIX A. DEFINITIONS ACCEPTED AND THEOREMS PROVED BY OUR SYSTEM

338. **Theorem DELETE.ADDTOLIST2** (rewrite):
```
(IMPLIES (PLISTP Y)
         (EQUAL (DELETE X (ADDTOLIST2 X Y)) Y))
```

339.
**Theorem ADDTOLIST2.KLUDGE** (rewrite):
```
(IMPLIES (AND (NOT (LESSP V W))
              (EQUAL (ADDTOLIST2 V Y) (CONS V Y)))
         (EQUAL (ADDTOLIST2 V (ADDTOLIST2 W Y))
                (CONS V (ADDTOLIST2 W Y))))
```

340. **Theorem LESSP.MAXIMUM.ADDTOLIST2** (rewrite):
```
(IMPLIES (NOT (LESSP V (MAXIMUM Z)))
         (EQUAL (ADDTOLIST2 V (SORT2 Z))
                (CONS V (SORT2 Z))))
```

341. **Theorem SORT2.DELETE.CONS** (rewrite):
```
(IMPLIES (LISTP X)
         (EQUAL (CONS (MAXIMUM X)
                      (DELETE (MAXIMUM X) (SORT2 X)))
                (SORT2 X)))
```

342. **Theorem SORT2.DELETE** (rewrite):
```
(EQUAL (SORT2 (DELETE X L))
       (DELETE X (SORT2 L)))
```

343. **Theorem DSORT.SORT2** (rewrite):
```
(EQUAL (DSORT X) (SORT2 X))
```

344. **Theorem COUNT.LIST.SORT2**:
```
(EQUAL (COUNT.LIST A (SORT2 L))
       (COUNT.LIST A L))
```

345. **Theorem LESSP.PLUS.SUB1** (rewrite):
```
(NOT (LESSP (PLUS Y Z) (SUB1 Z)))
```

346. **Undefined Function.** (DECREMENTP X)

347. **Undefined Function.** (MEASURE X)

348. **Undefined Function.** (DECREMENT X)

349. **Axiom PM (induction):**
```
(IMPLIES (NOT (DECREMENTP X))
         (LESSP (MEASURE (DECREMENT X))
                (MEASURE X)))
```

350. **Undefined Function.** (FIDDLE X)
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 403 Context:
```markdown
# 351. Definition
```
(FIDDLE.DOWN1 X Y)
=
(IF (DECREMENTP X)
    Y
    (FIDDLE (FIDDLE.DOWN1 (DECREMENT X) Y)))
```

# 352. Definition
```
(FIDDLE.DOWN2 X Y)
=
(IF (DECREMENTP X)
    Y
    (FIDDLE.DOWN2 (DECREMENT X) (FIDDLE Y)))
```

# 353. Theorem FIDDLE.EQUAL:
```
(EQUAL (FIDDLE.DOWN1 X Y) (FIDDLE.DOWN2 X Y))
```

# 354. Theorem PLUS.CANCELLATION (rewrite):
```
(EQUAL (EQUAL (PLUS X Y) (PLUS X Z))
       (EQP Y Z))
```

# 355. Definition
```
(MATCH PAT STR)
=
(IF (LISTP PAT)
    (IF (LISTP STR)
        (IF (EQUAL (CAR PAT) (CAR STR))
            (MATCH (CDR PAT) (CDR STR))
            F)
        F)
    T)
```

# 356. Definition
```
(STRPOS PAT STR)
=
(IF (MATCH PAT STR)
    0
    (IF (LISTP STR)
        (ADD1 (STRPOS PAT (CDR STR)))
        0))
```

# 357.
Definition
```
(DELTA CHAR PAT)
=
(STRPOS (CONS CHAR 'NIL) (REVERSE PAT))
```
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 404 Context:
```markdown
# APPENDIX A. DEFINITIONS ACCEPTED AND THEOREMS PROVED BY OUR SYSTEM

## 358. Definition
```
(TOP.ASSERT PAT STR I PATLEN STRLEN PAT* STR*)
=
(AND (EQUAL PAT PAT*)
     (EQUAL STR STR*)
     (EQUAL PATLEN (LENGTH PAT))
     (LISTP PAT)
     (EQUAL STRLEN (LENGTH STR))
     (NUMBERP I)
     (LESSP (SUB1 PATLEN) I)
     (LESSP I (PLUS PATLEN (STRPOS PAT STR))))
```

## 359. Definition
```
(LOOP.ASSERT PAT STR I J PATLEN STRLEN NEXTI PAT* STR*)
=
(AND (TOP.ASSERT PAT STR (SUB1 NEXTI) PATLEN STRLEN PAT* STR*)
     (NUMBERP I)
     (NUMBERP J)
     (NUMBERP NEXTI)
     (LESSP J PATLEN)
     (LESSP I STRLEN)
     (EQUAL NEXTI (PLUS PATLEN (DIFFERENCE I J)))
     (LESSP NEXTI STRLEN)
     (LESSP J I)
     (MATCH (NTH PAT (ADD1 J))
            (NTH STR (ADD1 I))))
```

## 360. Theorem ZEROP.LENGTH (rewrite):
```
(EQUAL (EQUAL (LENGTH X) 0)
       (NOT (LISTP X)))
```

## 361. Theorem FSTRPOS.VC1:
```
(IMPLIES (EQUAL (LENGTH PAT*) 0)
         (EQUAL 0 (STRPOS PAT* STR*)))
```

## 362. Theorem SUB1.LESSP.PLUS (rewrite):
```
(EQUAL (LESSP (SUB1 X) (PLUS X Y))
       (IF (ZEROP X) (NOT (ZEROP Y)) T))
```

## 363.
Theorem FSTRPOS.VC2:
```
(IMPLIES (NOT (EQUAL (LENGTH PAT*) 0))
         (TOP.ASSERT PAT* STR*))
```
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 405 Context:
```markdown
# Theorems and Definitions

## Theorem 364: MATCH.LENGTHS (rewrite)
```
(IMPLIES (MATCH X Y)
         (NOT (LESSP (LENGTH Y) (LENGTH X))))
```

## Theorem 365: MATCH.LENGTH (rewrite)
```
(IMPLIES (LESSP (LENGTH Y) (LENGTH X))
         (NOT (MATCH X Y)))
```

## Theorem 366: STRPOS.BOUNDARY.CONDITION (rewrite)
```
(IMPLIES (NOT (EQUAL (STRPOS PAT STR) (LENGTH STR)))
         (NOT (LESSP (LENGTH STR)
                     (PLUS (LENGTH PAT) (STRPOS PAT STR)))))
```

## Theorem 367: FSTRPOS.VC3
```
(IMPLIES (AND (GREATEREQP I STRLEN)
              (TOP.ASSERT PAT STR I PATLEN STRLEN PAT* STR*))
         (EQUAL STRLEN (STRPOS PAT* STR*)))
```

## Theorem 368: PLUS.DIFFERENCE.SUB1.REWRITE (rewrite)
```
(EQUAL (PLUS X (DIFFERENCE Y (SUB1 X)))
       (IF (ZEROP X)
           (FIX Y)
           (IF (LESSP Y (SUB1 X))
               (FIX X)
               (ADD1 Y))))
```

## Theorem 369: LISTP.NTH (rewrite)
```
(EQUAL (LISTP (NTH X I))
       (LESSP I (LENGTH X)))
```

## Theorem 370: CDR.NTH (rewrite)
```
(EQUAL (CDR (NTH X I))
       (NTH (CDR X) I))
```

## Theorem 371: STRPOS.EQUAL (rewrite)
```
(IMPLIES (AND (LESSP I (LENGTH STR))
              (NOT (LESSP (STRPOS PAT STR) I))
              (NUMBERP I)
              (MATCH PAT (NTH STR I)))
         (EQUAL (STRPOS PAT STR) I))
```

## Theorem 372: VC4.HACK.1 (rewrite)
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 406 Context:
```markdown
# APPENDIX A. DEFINITIONS ACCEPTED AND THEOREMS PROVED BY OUR SYSTEM

```
(IMPLIES (LESSP I (LENGTH STR))
         (NOT (LESSP (SUB1 (LENGTH STR)) I)))
```

373. Theorem FSTRPOS.VC4:
```
(IMPLIES (AND (NOT (GREATEREQP I STRLEN))
              (TOP.ASSERT PAT STR I PATLEN STRLEN PAT* STR*))
         (LOOP.ASSERT PAT STR I (SUB1 PATLEN) PATLEN STRLEN
                      (ADD1 I) PAT* STR*))
```

374.
Theorem SWAPPED.PLUS.CANCELLATION (rewrite):
```
(EQUAL (LESSP (PLUS B A) (PLUS A C))
       (LESSP B C))
```

375. Theorem LESSP.SUB1.PLUS.CANCELLATION (rewrite):
```
(EQUAL (LESSP (SUB1 (PLUS Y X)) (PLUS Y Z))
       (IF (ZEROP X)
           (IF (ZEROP Y) (NOT (ZEROP Z)) T)
           (LESSP (SUB1 X) Z)))
```

376. Theorem VC5.HACK (rewrite):
```
(IMPLIES (LESSP (SUB1 I) (STRPOS PAT STR))
         (NOT (LESSP (STRPOS PAT STR) I)))
```

377. Theorem FSTRPOS.VC5:
```
(IMPLIES (AND (EQUAL J 0)
              (EQUAL (NTHCHAR I STR) (NTHCHAR J PAT))
              (LOOP.ASSERT PAT STR I J PATLEN STRLEN
                           NEXTI PAT* STR*))
```

378. Theorem FSTRPOS.VC6:
```
(IMPLIES (AND (NOT (EQUAL J 0))
              (EQUAL (NTHCHAR I STR) (NTHCHAR J PAT))
              (LOOP.ASSERT PAT STR I J PATLEN STRLEN
                           NEXTI PAT* STR*))
```

379. Theorem STRPOS.LESSP.STRLEN (rewrite):
```
(NOT (LESSP (LENGTH STR) (STRPOS PAT STR)))
```
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 407 Context:
```markdown
380. Theorem LESSP.KLUDGE1 (rewrite):
```
(IMPLIES (NOT (LESSP B A))
         (EQUAL (LESSP A (PLUS B D))
                (IF (ZEROP D) (LESSP A B) T)))
```

381. Theorem STRPOS.LIST.APPEND (rewrite):
```
(EQUAL (STRPOS (CONS C 'NIL) (APPEND A B))
       (IF (MEMBER C A)
           (STRPOS (CONS C 'NIL) A)
           (PLUS (LENGTH A) (STRPOS (CONS C 'NIL) B))))
```

382. Theorem STRPOS.LESSEQP.CROCK (rewrite):
```
(IMPLIES (NOT (LESSP (LENGTH Q) (LENGTH P)))
         (NOT (LESSP (LENGTH Q) (STRPOS P Q))))
```

383. Theorem STRPOS.EQUAL.0 (rewrite):
```
(EQUAL (EQUAL (STRPOS PAT STR) 0)
       (OR (NLISTP STR) (MATCH PAT STR)))
```

384. Theorem LESSP.KLUDGE2 (rewrite):
```
(IMPLIES (LESSP I (LENGTH PAT))
         (LESSP (SUB1 I) (PLUS (LENGTH PAT) Z)))
```

385. Theorem MATCH.IMPLIES.CAR.MEMBER (rewrite):
```
(IMPLIES (AND (LISTP PAT)
              (NOT (MEMBER (CAR STR) PAT)))
         (NOT (MATCH PAT STR)))
```

386.
Theorem MATCH.IMPLIES.CAR.MEMBER1 (rewrite):
```
(IMPLIES (AND (LISTP PAT)
              (MATCH PAT STR))
         (MEMBER (CAR STR) PAT))
```

387. Theorem MATCH.IMPLIES.MEMBER (rewrite):
```
(IMPLIES (AND (LESSP I (LENGTH PAT))
              (MATCH PAT STR))
         (MEMBER (CAR (NTH STR I)) PAT))
```

388. Theorem DELTA.LESSP.IFF.MEMBER (rewrite):
```
(EQUAL (LESSP (STRPOS (CONS CHAR 'NIL) (REVERSE PAT))
              (LENGTH PAT))
       (MEMBER CHAR PAT))
```

389. Theorem LESSP.PLUS (rewrite):
```
(IMPLIES (NOT (LESSP X Z))
         (NOT (LESSP (PLUS X Y) Z)))
```

390. Theorem LESSP.PLUS1 (rewrite):
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 408 Context:
```markdown
# APPENDIX A. DEFINITIONS ACCEPTED AND THEOREMS PROVED BY OUR SYSTEM

391. Theorem MATCH.IMPLIES.DELTA1.OK (rewrite):
```
(IMPLIES (AND (MATCH PAT STR)
              (LESSP I (LENGTH PAT)))
         (LESSP (PLUS I (STRPOS (CONS (CAR (NTH STR I)) 'NIL)
                                (REVERSE PAT)))
                (LENGTH PAT)))
```

392. Theorem SUB1.LENGTH (rewrite):
```
(EQUAL (SUB1 (LENGTH X))
       (LENGTH (CDR X)))
```

393. Theorem DELTA1.LEMMA (rewrite):
```
(IMPLIES (AND (LISTP PAT)
              (LESSP I (LENGTH STR))
              (LESSP I (PLUS (LENGTH PAT) (STRPOS PAT STR))))
         (LESSP (PLUS I (STRPOS (CONS (CAR (NTH STR I)) 'NIL)
                                (REVERSE PAT)))
                (PLUS (LENGTH PAT) (STRPOS PAT STR))))
```

394. Theorem MATCH.EPSILON (rewrite):
```
(IMPLIES (AND (LESSP J (LENGTH PAT))
              (MATCH PAT STR))
         (EQUAL (CAR (NTH PAT J))
                (CAR (NTH STR J))))
```

395. Theorem STRPOS.EPSILON (rewrite):
```
(IMPLIES (AND (LESSP J (LENGTH PAT))
              (LESSP (PLUS J (STRPOS PAT STR)) (LENGTH STR)))
         (EQUAL (CAR (NTH STR (PLUS J (STRPOS PAT STR))))
                (CAR (NTH PAT J))))
```

396.
Theorem EQ.CHARS.AT.STRPOS (rewrite):
```
(IMPLIES (AND (NOT (LESSP I J))
              (NOT (EQUAL (CAR (NTH STR I)) (CAR (NTH PAT J))))
              (LESSP I (LENGTH STR))
              (LESSP J (LENGTH PAT)))
         (NOT (EQUAL (STRPOS PAT STR) (DIFFERENCE I J))))
```
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 409 Context:
```markdown
# Theorems

## 397. Theorem LESSP.DIFFERENCE.I (rewrite):
```
(IMPLIES (LESSP I J)
         (EQUAL (DIFFERENCE I J) 0))
```

## 398. Theorem LESSP.SUB1.SUB (rewrite):
```
(EQUAL (LESSP (SUB1 X) X)
       (NOT (ZEROP X)))
```

## 399. Theorem PLUS.2.NOT (rewrite):
```
(NOT (EQUAL (ADD1 (PLUS V Z)) V))
```

## 400. Theorem LESSP.SUB1.SUB1.PLUS (rewrite):
```
(EQUAL (LESSP (SUB1 Y) (PLUS Z Y))
       (OR (NOT (ZEROP Z)) (NOT (ZEROP Y))))
```

## 401. Theorem LESSP.KLUDGE6 (rewrite):
```
(EQUAL (LESSP (SUB1 (PLUS L (DIFFERENCE I J))) I)
       (IF (LESSP I J)
           (LESSP (SUB1 L) I)
           (IF (ZEROP L) (NOT (LESSP I J)))))
```

## 402. Theorem GT.SUB1 (rewrite):
```
(NOT (LESSP (PLUS X Y) (SUB1 X)))
```

## 403. Theorem LESSP.SUB1.HACK (rewrite):
```
(IMPLIES (AND (NOT (LESSP I J))
              (NOT (EQUAL I J))
              (NUMBERP J)
              (NUMBERP I))
         (NOT (LESSP (SUB1 I) J)))
```

## 404. Theorem FSTRPOS.VC7:
```
(IMPLIES (AND (NOT (EQUAL (NTHCHAR I STR) (NTHCHAR J PAT)))
              (LOOP.ASSERT PAT STR I J PATLEN STRLEN
                           NEXTI PAT* STR*))
         (TOP.ASSERT PAT STR
                     (PLUS I (DELTA (NTHCHAR I STR) PAT))
                     PATLEN STRLEN PAT* STR*))
```
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 410 Context:
```markdown
# APPENDIX A. DEFINITIONS ACCEPTED AND THEOREMS PROVED BY OUR SYSTEM
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 411 Context:
```markdown
# Appendix B
## The Implementation of the Shell Principle

Below we give the axioms added by our theorem prover in response to the user command:

- Add the shell const, of n arguments,
- with (optionally) bottom object (btm),
- recognizer r,
- accessors a₁, ..., aₙ,
- type restrictions t₁, ..., tₙ,
- and default values d₁, ..., dₙ.

Our implementation of the shell principle differs from the formal presentation in Chapter 3 in two ways. First, we do not actually require (or allow) the user to specify the name of a new well-founded relation to be defined for the class. Instead, we axiomatize COUNT for the class and add the appropriate induction lemmas, using COUNT and LESSP. In Chapter 3 we justified the introduction of COUNT and LESSP.

The second difference between the formal description and our implementation is that our implementation adds lemmas that are more useful to the theorem prover than would be the axioms noted in Chapter 3. For example, certain of the formal axioms are reformulated so that they are more useful as rewrite rules. In all cases, the lemmas added by our implementation follow immediately from the axioms given in Chapter 3.

Most of the axioms have names. We indicate the names schematically below. For example, when the CONS shell is
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 412 Context:
```markdown
# APPENDIX B. THE IMPLEMENTATION OF THE SHELL PRINCIPLE

added, the schematic name `ac1.const` denotes CAR.CONS.
After the name, we indicate in parentheses the “type” of the axiom (e.g., rewrite, elimination, or induction). No generalization lemmas, per se, are added by the shell mechanism; however, rewrite lemmas encoded as type prescriptions may restrict generalizations of shell functions since the generalization heuristic employs type sets.

## Axioms

### Axiom r.const (rewrite):
```
(r (const X1 ... Xn))
```

### Axiom r.btm (rewrite):
```
(r (btm))
```

### Axiom BOOLEAN.r (rewrite):
```
(OR (EQUAL (r X) T) (EQUAL (r X) F))
```

For each i from 1 to n, let ti' be ti with all occurrences of Xi replaced by (aci X), and add:

### Axiom TYPE.OF.aci (rewrite):
```
ti'
```

(Observe that all of the above axioms are stored as type prescriptions.) For each i from 1 to n, add:

### Axiom aci.const (rewrite):
```
(EQUAL (aci (const X1 ... Xn))
       (IF ti Xi di))
```

### Axiom aci.nr (rewrite):
```
(IMPLIES (NOT (r X))
         (EQUAL (aci X) di))
```

### Axiom const.TYPE.RESTRICTION (rewrite):
```
(IMPLIES (NOT ti)
         (EQUAL (const X1 ... Xi ... Xn)
                (const X1 ... di ... Xn)))
```

### Axiom aci.btm (rewrite):
```
(EQUAL (aci (btm)) di)
```

### Axiom aci.LESSP (induction):
```
(IMPLIES (AND (r X) (NOT (EQUAL X (btm))))
         (LESSP (COUNT (aci X)) (COUNT X)))
```

Let s be the substitution replacing X1 by Y1, ..., and Xn by Yn, and let ti/s denote the result of applying s to ti. Add:
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 413 Context:
```markdown
# Axiom const.EQUAL (rewrite):
```
(EQUAL (EQUAL (const X1 ... Xn) (const Y1 ... Yn))
       (AND (IF t1 (IF t1/s (EQUAL X1 Y1) (EQUAL X1 d1))
                   (IF t1/s (EQUAL d1 Y1) T))
            ...
            (IF tn (IF tn/s (EQUAL Xn Yn) (EQUAL Xn dn))
                   (IF tn/s (EQUAL dn Yn) T))))
```

# Axiom const.ac1, ..., acn (rewrite):
```
(EQUAL (const (ac1 X) ...
(acn X))
       (IF (AND (r X) (NOT (EQUAL X (btm))))
           X
           (const d1 ... dn)))
```

# Axiom ac1, ..., acn.ELIM (elimination):
```
(IMPLIES (AND (r X) (NOT (EQUAL X (btm))))
         (EQUAL (const (ac1 X) ... (acn X)) X))
```

# Axiom COUNT.const (rewrite):
```
(EQUAL (COUNT (const X1 ... Xn))
       (ADD1 (PLUS (IF t1 (COUNT X1) 0)
                   ...
                   (IF tn (COUNT Xn) 0))))
```

# Axiom COUNT.btm (rewrite):
```
(EQUAL (COUNT (btm)) 0)
```

The handling of the special case in which no bottom object is supplied should be obvious. We simplify the right-hand sides of the concluding equalities in rewrite axioms by expanding nonrecursive functions (e.g., AND), putting IF-expressions into IF-normal form, and simplifying IF-expressions with explicit value tests (e.g., (IF T X Y)).
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 414 Context:
```markdown
# APPENDIX B. THE IMPLEMENTATION OF THE SHELL PRINCIPLE

is replaced by x. The following axioms resulting from an application of the shell principle are “wired-in” to the theorem prover.

- (NOT (EQUAL (const X1 ... Xn) (btm)))
- (NOT (r T))
- (NOT (r F))
- (IMPLIES (r X) (NOT (r' X))), for each recognizer r' of a previously added shell

The first of the above axioms is built into the rules for rewriting an EQUAL expression given in Chapter 9. The other axioms above are built into the type set mechanism.
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 415 Context:
```markdown
# Appendix C
## Clauses for our Theory

Readers familiar with other mechanical theorem provers might be interested in seeing our theory cast in the more usual clausal form. We do not formulate our shell principle, induction principle, or definition principle in clausal form. However, we do give the clauses generated by these principles on specific examples.
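Before turning to the clausal listings, the shell axioms of Appendix B can be illustrated with a minimal Python sketch of a single hypothetical shell: constructor `Cons` with bottom object `"NIL"`, recognizer `listp`, and accessors `car`/`cdr` whose default value is `"NIL"`. The class name and the default are assumptions of this sketch, not the book's specification:

```python
class Cons:
    """Hypothetical shell constructor with two components."""
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def listp(x):
    """The shell's recognizer."""
    return isinstance(x, Cons)

def car(x):
    return x.car if listp(x) else "NIL"   # aci axioms: default on non-shells

def cdr(x):
    return x.cdr if listp(x) else "NIL"

def count(x):
    """COUNT.const / COUNT.btm: one plus the counts of the
    components; 0 on the bottom object and non-shell objects."""
    return 1 + count(x.car) + count(x.cdr) if listp(x) else 0
```

The induction lemma `aci.LESSP` is then the observation that `count` strictly drops from a shell object to each of its components, which is what licenses structural recursion on the shell.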
Here we give, in clausal form, axioms for T, F, IF, EQUAL, numbers, literal atoms, and ordered pairs. These axioms, together with our induction principle, definition principle, and the well-foundedness of LESSP and induced lexicographic relations, are equivalent to the theory described in Chapter 3. We also exhibit the axioms of definition for the functions used in the MC.FLATTEN example in Chapter 2, and we exhibit a clausal formulation of the first induction step used in that example.

In the following clauses, we use a common notation for function application and clauses [49]: T, F, 0, and NIL are constants, and X, Y, Z, P, Q, X1, X2, Y1, and Y2 are variables.

### C.1 Logical Definitions

1. L1: `{T ≠ F}`
2. L2: `{X ≠ Y, EQUAL(X, Y) = T}`

Footnote 1: It is interesting that the set theory of von Neumann, Bernays, and Gödel [21] can be stated in a finite number of clauses. Thus, in principle, one could use a finitely axiomatized set theory with a resolution theorem prover to investigate problems normally requiring the axiom schemes (i.e., infinite axioms) of induction and comprehension.
```
#################### File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf Page: 416 Context:
```markdown
# APPENDIX C. CLAUSES FOR OUR THEORY

## C.2 Axioms for Natural Numbers

A1. `{NUMBERP(ADD1(X1)) = T}`
A2. `{NUMBERP(0) = T}`
A3. `{NUMBERP(X) = T, NUMBERP(X) = F}`
A4. `{SUB1(ADD1(X1)) = IF(NUMBERP(X1), X1, 0)}`
A5. `{NUMBERP(X) = T, SUB1(X) = 0}`
A6. `{NUMBERP(X) = F, X = 0, X = ADD1(SUB1(X))}`
A7. `{NUMBERP(X) = F, X = 0, LESSP(COUNT(SUB1(X)), COUNT(X)) = T}`
A8. `{EQUAL(ADD1(X1), ADD1(Y1)) =`
A9. `IF(NUMBERP(X1), EQUAL(X1, Y1), EQUAL(X1, 0))}`
A10. `{COUNT(ADD1(X1)) = ADD1(IF(NUMBERP(X1), COUNT(X1), 0))}`
A11. `{COUNT(0) = 0}`
A12. `{0 ≠ ADD1(X1)}`
A13. `{NUMBERP(T) = F}`
A14. `{ZEROP(X) = OR(EQUAL(X, 0), NOT(NUMBERP(X)))}`
A15. `{FIX(X) = IF(NUMBERP(X), X, 0)}`
A16. `{LESSP(X, Y) = IF(ZEROP(Y), F,`
A17. `IF(ZEROP(X), T, LESSP(SUB1(X), SUB1(Y))))}`
A18.
`{PLUS(X, Y) = IF(ZEROP(X), FIX(Y), ADD1(PLUS(SUB1(X), Y)))}`

## C.3 Axioms for Literal Atoms

B1. `{LITATOM(PACK(X1)) = T}`
B2. `{LITATOM(NIL) = T}`
B3. `{LITATOM(X) = T, LITATOM(X) = F}`
B4. `{UNPACK(PACK(X1)) = X1}`
B5. `{LITATOM(X) = T, UNPACK(X) = 0}`
```

####################

File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 417
Context:
```markdown
B6. `{UNPACK(NIL) = 0}`
B7. `{LITATOM(X) = F, LESSP(COUNT(UNPACK(X)), COUNT(X)) = T}`
B8. `{EQUAL(PACK(X1), PACK(Y1)) = EQUAL(X1, Y1)}`
B9. `{PACK(UNPACK(X)) = IF(AND(LITATOM(X), NOT(EQUAL(X, NIL))), X, PACK(0))}`
B10. `{COUNT(PACK(X1)) = ADD1(COUNT(X1))}`
B11. `{COUNT(NIL) = 0}`
B12. `{NIL ≠ PACK(X1)}`
B13. `{LITATOM(T) = F}`
B14. `{LITATOM(F) = F}`
B15. `{LITATOM(X) = F, NUMBERP(X) = F}`

## C.4 Axioms for Ordered Pairs

C1. `{LISTP(CONS(X1, X2)) = T}`
C2. `{LISTP(X) = T, LISTP(X) = F}`
C3. `{CAR(CONS(X1, X2)) = X1}`
C4. `{CDR(CONS(X1, X2)) = X2}`
C5. `{LISTP(X) = T, CAR(X) = NIL}`
C6. `{LISTP(X) = T, CDR(X) = NIL}`
C7. `{LISTP(X) = F, LESSP(COUNT(CAR(X)), COUNT(X)) = T}`
C8. `{LISTP(X) = F, LESSP(COUNT(CDR(X)), COUNT(X)) = T}`
C9. `{EQUAL(CONS(X1, X2), CONS(Y1, Y2)) = AND(EQUAL(X1, Y1), EQUAL(X2, Y2))}`
C10. `{CONS(CAR(X), CDR(X)) = IF(LISTP(X), X, CONS(NIL, NIL))}`
C11. `{COUNT(CONS(X1, X2)) = ADD1(PLUS(COUNT(X1), COUNT(X2)))}`
C12. `{LISTP(T) = F}`
C13. `{LISTP(F) = F}`
C14. `{LISTP(X) = F, NUMBERP(X) = F}`
C15.
`{LISTP(X) = F, LITATOM(X) = F}`

## C.5 A Sample Theorem in Clausal Form

To "define" APPEND, FLATTEN, and MC.FLATTEN for a resolution theorem prover, one could add the clauses:

```
{APPEND(X, Y) = IF(LISTP(X), CONS(CAR(X), APPEND(CDR(X), Y)), Y)}

{FLATTEN(X) = IF(LISTP(X), APPEND(FLATTEN(CAR(X)), FLATTEN(CDR(X))), CONS(X, NIL))}

{MC.FLATTEN(X, Y) = IF(LISTP(X), MC.FLATTEN(CAR(X), MC.FLATTEN(CDR(X), Y)), CONS(X, Y))}
```
```

####################

File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 418
Context:
```markdown
# APPENDIX C. CLAUSES FOR OUR THEORY

together with the following theorems:

- `{LISTP(APPEND(X, Y)) = T, APPEND(X, Y) = Y}`
- `{LISTP(FLATTEN(X)) = T}`
- `{LISTP(MC.FLATTEN(X, Y)) = T}`

One could, in principle, use a resolution theorem prover to help perform the nondeductive parts of our proofs. For example, one might ask such a theorem prover to undertake the first inductive step of the FLATTEN.MC.FLATTEN example of Chapter 2. One might therefore provide such a theorem prover with all the foregoing clauses of this appendix and then, letting A and ANS be constants, attempt to derive a contradiction from:

- `{LISTP(A) = T}`
- `{MC.FLATTEN(CAR(A), MC.FLATTEN(CDR(A), ANS)) = APPEND(FLATTEN(CAR(A)), MC.FLATTEN(CDR(A), ANS))}`
- `{MC.FLATTEN(CDR(A), ANS) = APPEND(FLATTEN(CDR(A)), ANS)}`

and

- `{MC.FLATTEN(A, ANS) ≠ APPEND(FLATTEN(A), ANS)}`.

Of course, the imagined resolution theorem prover would probably fail to find a contradiction, because we know of no proof that does not depend upon the associativity of APPEND. While our theorem prover discovered and inductively proved that APPEND is associative in the course of the FLATTEN.MC.FLATTEN proof, one would need to add the clause:

```
{APPEND(APPEND(X, Y), Z) = APPEND(X, APPEND(Y, Z))}
```

to the previous collection of clauses before a resolution theorem prover might be expected to derive a contradiction.
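As a quick sanity check on the equation being derived, MC.FLATTEN(X, ANS) = APPEND(FLATTEN(X), ANS), the standard definitions from the MC.FLATTEN example can be transliterated into Python. This is an illustrative sketch only (not part of the book's clausal theory), modeling a CONS-pair as a 2-tuple `(car, cdr)` and NIL as the string `"NIL"`:

```python
# Sketch: CONS-trees as nested 2-tuples; atoms are ints; NIL is "NIL".

def append(x, y):
    # APPEND(X, Y) = IF(LISTP(X), CONS(CAR(X), APPEND(CDR(X), Y)), Y)
    if isinstance(x, tuple):
        return (x[0], append(x[1], y))
    return y

def flatten(x):
    # FLATTEN(X) = IF(LISTP(X),
    #                 APPEND(FLATTEN(CAR(X)), FLATTEN(CDR(X))),
    #                 CONS(X, NIL))
    if isinstance(x, tuple):
        return append(flatten(x[0]), flatten(x[1]))
    return (x, "NIL")

def mc_flatten(x, ans):
    # MC.FLATTEN(X, Y) = IF(LISTP(X),
    #                       MC.FLATTEN(CAR(X), MC.FLATTEN(CDR(X), Y)),
    #                       CONS(X, Y))
    if isinstance(x, tuple):
        return mc_flatten(x[0], mc_flatten(x[1], ans))
    return (x, ans)

# The theorem holds on a sample tree:
tree = ((1, 2), (3, 4))
assert mc_flatten(tree, "NIL") == append(flatten(tree), "NIL")
assert mc_flatten(tree, (9, "NIL")) == append(flatten(tree), (9, "NIL"))
```

Running the assertions on a few trees is of course no proof; the book's point is precisely that the inductive proof requires the associativity of APPEND as a lemma.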
```

####################

File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 419
Context:
```markdown
# Index

In this index, a number indicates the page on which a concept is introduced, a number preceded by "A-" indicates the number of the defining formula in Appendix A, and a shell name (e.g., ADD1) indicates that the indexed name is introduced as a result of axiomatizing the shell. The naming conventions for shell axioms are given in Appendix B.

| Term | Reference |
|------------------------|----------|
| accessor | ?? |
| ACK | ?? |
| ADD1 | ?? |
| ADD1.EQUAL | ADD1 |
| ADD1.SUB1 | ADD1 |
| ADDTOOLIST | A-73 |
| ADDTOOLIST2 | A-332 |
| ADDTOOLIST2.DELETE | A-337 |
| ADDTOOLIST2.KLUDGE | A-339 |
| alist | ?? |
| Allen | ?? |
| AND | ?? |
| APPEND | A-2 |
| APPEND.CANCELLATION | A-166 |
| APPEND.REVERSE | A-9 |
| APPEND.RIGHT.ID | A-7 |
| applies | ?? |
| APPLY | A-23 |
| ASSIGNEDP | A-243 |
| ASSIGNMENT | A-231 |
| assignment | ?? |
| ASSIGNMENT.APPEND | A-247 |
| ASSIGNMENT.IMPLIES.ASSIGNEDP | A-250 |
| ASSOC | A-79 |
| ASSOC.PAIRLIST | A-81 |
| association list | ?? |
```

####################

File: A%20Computational%20Logic%20%281979%29%20-%20Robert%20S.%20Boyer%2C%20J%20Strother%20Moore%20%28PDF%29.pdf
Page: 420
Context:
```markdown
# INDEX

| Term | Reference |
|----------------------------|------------|
| ASSOCIATIVITY.OF.APPEND | A-5 |
| ASSOCIATIVITY.OF.EQUAL | A-95 |
| ASSOCIATIVITY.OF.PLUS | A-14 |
| ASSOCIATIVITY.OF.TIMES | A-20 |
| ASSUME.TRUE | A-244 |
| ASSUME.FALSE | A-245 |
| atom | ?? |
| atomic | ?? |
| Aubin | ?? |
| backwards chaining | ?? |
| Ballantyne | ?? |
| Bendix | ?? |
| BIG.PLUS | A-177 |
| BIG.PLUS1 | A-174 |
| binding | ?? |
| Bledsoe | ?? |
| body | ?? |
| BOOLEAN | A-88 |
| Boolean | ?? |
| bottom object | ?? |
| bound | ?? |
| Bourbaki | ?? |
| Brotz | ?? |
| Burstall | ?? |
| CAAR | ?? |
| CADDR | ?? |
| CADR | ?? |
| call of | ?? |
| CAR | CONS |
| CAR.CONS | CONS |
| CAR.LESSP | CONS |
| CAR.CDR.ELIM | CONS |
| Cartwright | ?? |
| CDAR | ?? |
| CDDR | ?? |
| CDR | CONS |
| CDR.CONS | CONS |
| CDR.LESSP | CONS |
| CDR.LISTP | CONS |
| CDR.NTH | A-370 |
| Chang | ?? |
| changeables | ?? |
| changing variables | ?? |
| character string | ?? |
```

##########

"""QUERY: Please summarize the whole context. It is important that you include a summary for each file. All files should be included, so please make sure to go through the entire context"""

Consider the chat history for relevant information. Use all information included. Use as many tokens as needed.
Important: If you find information separated by a | in the context, it is a table formatted in Markdown.

Final Files Sources:
- 2D Game Development From Zero To Hero - Daniele Penazzo (HTML, PDF, EPUB): page 602
- A Computational Logic (1979) - Robert S. Boyer, J Strother Moore (PDF): pages 1, 3–100, and 401–420

==================================================
**Elapsed Time: 204.35 seconds**
==================================================
FINAL ANSWER
Answer:
==================================================
**Elapsed Time: 0.00 seconds**
==================================================