Belief Revision: A Critique Essay

In the last three or four decades, dynamic turns [1] have attracted scholars from various areas to questions about epistemic change. Perhaps dynamic epistemic logic [2] (DEL) in a broad sense and belief revision theory in the AGM tradition [3, 4] can be considered the two main normative approaches to belief change, although the latter might be presentable in certain DEL languages [5]. According to the classical literature on belief revision surveyed in Hansson [6], it is reasonable to think (and probably well accepted) that there are three different types of belief change about static worlds for rational agents in the AGM tradition: expansion, contraction and revision. Tennant's book is mainly in the tradition of belief revision theory, although he puts forward a completely different starting point and a new method.

The main contents of this book are divided into three related parts: "Computational Considerations" (seven chapters, making up the main body of the book), "Logical and Philosophical Considerations" (three chapters), and "Comparisons" (two chapters). Overall, the book makes several major contributions to the literature. First, Tennant makes an interesting distinction between logical saints and paragons, and he takes the latter as the rational agents in the study of belief change. Next, he provides an intuitive, effective graphic tool, finite dependency networks (FDNs), to demonstrate the principles of rational belief change processes. He also constructs the corresponding formalization and a core logic. Since in FDNs all premise sets are finite, the computational results of Tennant's work are much better than those based on the logical-saint tradition. They can be implemented in computer programs to better simulate actual belief change processes for real rational agents. Tennant provides such a Prolog implementation, which supports many practical operations and the testing of examples. FDNs can overcome many defects in the AGM tradition, such as the problems of recovery, minimal mutilation, success, and so on. Tennant also shows that the AGM revision operator has several serious -- even fatal -- problems. One of them is that there can be many other revision operators which satisfy all the AGM basic and supplementary postulates. Although their representation theorems are similar to those of AGM, these operators can be, intuitively, absolutely absurd.

Chapter 1 is an informative and pertinent introduction, describing basic concepts and features of belief revision. Tennant first presents his distinction between the notions of logical saint and logical paragon, and uses them to represent two different kinds of agents. The saints are logically omniscient, as in the tradition of epistemic logic [7]. Their belief sets are treated as infinite logically closed theories. The paragons can complete only a finite number of deductive steps in a limited time. According to Tennant, a paragon is always right about the logical proofs she constructs; but she is not presumed to be aware of every logical entailment, except for those that hold in effectively decidable systems of logic. He presents the paragon as more appropriate for the study of actual belief revision with respect to practical rational agents. Chapter 1 also describes some basic notions of belief-revision theory, including what constitutes a belief set; different modes of belief change, such as contraction and expansion due to different changes in the doxastic status of a proposition; and rational belief revision requirements such as minimal mutilation and minimal bloating. Finally, Tennant states his main goal of giving a philosophically reliable, mathematically rigorous, and computationally implementable belief revision theory, and introduces the topics and ideas that he will cover.

The next four chapters are the core of the book and provide detailed descriptions and demonstrations of the theory of FDNs. With its graphic representation approach, chapter 2 provides an informal description of the theory that is clear and easy to understand. In the model of networks, nodes and strokes are the only elements in the domain, while arrows are just relations between nodes and strokes. Generally speaking, nodes, strokes and arrows form a graph with which we can express a rational agent's belief system (belief scheme), where nodes represent beliefs and steps express justificatory relationships between beliefs. For an agent, believing a proposition or not can be expressed by the coloration of nodes. Whether or not a justificatory relation holds between a premise set and a conclusion can be expressed by coloring the respective inference stroke. A whitened stroke says that the respective justificatory relation is not established. But according to Tennant, this does not mean that the deductive relation in the justificatory step has been abandoned: it remains in the agent's inference base. When an agent acquires a justificatory relation, she knows or is aware of it; clearly, there are many steps that are not known by a particular agent. In addition, Tennant's account covers only single-conclusion justificatory relations, in which each stroke has only one conclusion node.

Tennant then provides eight axioms for the arrow relations between nodes and strokes. These determine whether a given structure is a qualified model for a given agent's belief system. He also puts forward four coloration axioms for nodes and strokes, which specify the coloring relation on adjacent nodes and strokes. According to the epistemic interpretation given by Tennant, the first eight axioms characterize the overall structure of justificatory relations known by the given agent. The four coloration axioms characterize constraints on obtaining or giving up a belief, which may affect the holding or giving up of other beliefs by that agent. Roughly speaking, based on these axioms, a belief contraction can be presented as whitening the corresponding nodes and strokes; this is called spreading white. Similarly, expanding a belief set is called spreading black. Using this framework, Tennant offers six persuasive examples of the belief revision problem from different perspectives, illustrating his graphical theory's superior expressive power and its convenience when applied to real agents.
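Tennant's own implementation is in Prolog; purely as an illustration of the coloration idea (our own sketch, not his actual axioms or code), spreading white can be rendered in a few lines of Python, with steps as premise-set/conclusion pairs and the agent's current beliefs as a set of 'black' nodes:

```python
def supported(steps, black):
    """Nodes still derivable inside `black`, starting from premise-free
    ("initial") steps and closing under the known steps."""
    just = set()
    changed = True
    while changed:
        changed = False
        for premises, concl in steps:
            if concl in black and concl not in just and all(p in just for p in premises):
                just.add(concl)
                changed = True
    return just

def contract(steps, black, target):
    """Give up `target`: whiten it, break every step that would
    re-justify it by whitening one premise (an arbitrary greedy pick,
    and a real choice point in any contraction algorithm), then spread
    white downward to every node left without an all-black support.
    Whitened nodes stay white in this sketch."""
    black = set(black)
    queue = [target]
    while queue:
        node = queue.pop()
        if node not in black:
            continue
        black.discard(node)
        for premises, concl in steps:
            if concl == node and premises and all(p in black for p in premises):
                queue.append(sorted(premises)[0])  # choice point
    return supported(steps, black)
```

On the network with steps ∅⇒a, ∅⇒b, {a,b}⇒c and {c}⇒d, contracting c whitens a (upward, via the greedy choice), c itself, and d (downward), leaving only b black; a belief with a second, surviving support stays black.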

Chapter 3 explains the basic features of belief contraction. This is preparatory work for the formalization in the next chapter. Tennant claims that a real agent's belief system should not be regarded as a logically closed theory, but rather as a finite belief set that has justificatory relational structures. He then distinguishes three different kinds of belief sets: theories, bases and developments. Under this distinction, a finite dependency network can be regarded as a systematic development. Finally, these intuitions can be precisely encoded into two properties: success and minimal mutilation. Tennant maintains that the general principle of recovery in AGM theory does not correctly express the property of minimal mutilation.

The main work of Chapter 4 is elaborating a formal theory of FDNs. According to Tennant, this theory can provide an abstract and universal framework for discussing a belief system and its contraction. We are able to introduce a strict definition of minimal mutilation that accurately describes belief contraction and thereby understand the complexity problems of contraction processes. Tennant provides a brief introduction to computational theory, assuming background knowledge of P, NP, and NP-completeness, and cites several results from complexity theory for decision problems in classical logic. He then presents four intuitive ideas for belief revision in the above framework. Tennant emphasizes that the structure of justificatory relations in a belief system matters more to a belief revision process than the internal logical structures or contents of the beliefs themselves. After these preliminaries, Tennant provides definitions for steps, dependency networks, kernels of a subnetwork, contraction networks, minimal mutilation, and so on. Detailed explanations of these definitions reflect the nature of rational belief dynamics. Given these definitions, he develops four theorems regarding the complexity of the decision problem for contraction.

Chapter 5 provides four contraction algorithms based on the formalization provided in chapter 4. In giving up certain beliefs, one often has a variety of options about what further beliefs should be abandoned during belief contraction. Contraction algorithms therefore constrain the particular steps of a contraction and make choices at every step of the computation. Minimal mutilation is the ideal requirement of belief contraction, but it may not be realistic given computational considerations. From this perspective, Tennant introduces a greedy algorithm which, at every step of the computation, yields a locally optimal solution. The next two chapters make the algorithm implementable in computer programs for belief contraction. Chapter 6 presents the details of a Prolog program for the simplest version of the algorithm. Chapter 7 then presents the results of running the programs on various kinds of contractions. This completes the explanation of the method of FDNs, from basic theory to computer implementation.
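The flavor of a greedy local choice can be conveyed with a toy selection rule (our own illustration, not one of Tennant's four algorithms): when some premise of a step must be whitened, pick the one whose loss threatens the fewest other steps.

```python
def fanout(steps, node):
    """How many known steps use `node` as a premise -- a cheap local
    proxy for how much further whitening its loss might trigger."""
    return sum(1 for premises, _ in steps if node in premises)

def greedy_victim(steps, premises):
    """Greedy choice: whiten the premise with the smallest fan-out,
    breaking ties alphabetically."""
    return min(premises, key=lambda p: (fanout(steps, p), p))
```

This is only locally optimal: a premise with small fan-out can still sit below a large subnetwork, which is exactly why a greedy contraction may fail to be minimally mutilating.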

Part II of the book consists of three chapters. Chapter 8 gives a system of inferential rules for core logic, and claims that core logic is probably the most appropriate logic for the natural sciences and intuitionistic mathematics -- and even for the theory of meaning. Then, by proving that every violation of the four coloration axioms for FDNs leads to a contradiction in core logic, Tennant argues that all the inferential rules of core logic are necessary for belief revision. In Chapter 9, invoking Harvey Friedman's result that an arbitrary theorem of an axiomatizable theory has only a finite number of respective proofs, Tennant concludes that restricting belief revision to finite systems does not lead to a loss of generality: finite sets of beliefs are sufficient for further expansion. Chapter 10 then provides mathematical justifications for this idea. All three chapters of the second part support the method of FDNs from the perspective of logic and meta-theory.

In part III, Tennant compares his work with other relevant belief revision theories. Chapter 11 examines three formal theories of belief revision: JTMS and Bayesian networks from artificial intelligence, and AGM theory from mathematical logic. Chapter 12 discusses several epistemological accounts of belief revision that may have some connection with his work. Tennant divides them into three categories: addressing the belief-revision problem by formal modeling, addressing belief revision without offering any formal modeling, and treating belief revision indirectly through the discussion of other topics. Tennant claims that his account of belief revision is neutral with respect to traditional skepticism, foundationalism, coherentism, and basic foundherentism, and that it can be used to analyze and clarify those views.

Given the above summary, we would like to make several comments on some particular issues. First of all, since the notion of justificatory steps is a core idea and major contribution of this book, it seems reasonable and interesting to make the concept as clear as possible. Chapter 2 gives the informal definition of a 'step': a step from the premise set {b1, . . . , bn} to the distinct conclusion a. This is called a transitional step. The transitional step carries the logical interpretation (for an agent who adopts the step) of 'if one is justified in believing each of b1, . . . , bn, then one is justified in believing a'. Tennant renders this interpretation in the following natural-deduction inference mode:

Jb1 . . . Jbn
-------------
Ja

where Jφ means 'one is justified in believing φ'. The crucial point here is that what contributes to the justification for believing a is not only b1, . . . , bn, but, more importantly, a certain kind of logical relation between the premises {b1, . . . , bn} and the conclusion a. What then are those logical relations? Are they valid logical rules of basic propositional deduction, so that {b1, . . . , bn}⊢a? If so, according to the description of logical paragons, the agent must be aware of this, since the premise set is finite and basic propositional logic is decidable. The agent can carry out a validity check for the proposition b1 ∧ . . . ∧ bn → a. This means that as long as {b1, . . . , bn}⊢a holds in propositional logic objectively, the paragon must know the respective justificatory step from {b1, . . . , bn} to a. However, Tennant also claims that it is possible for a rational paragon not to know by her own lights a justificatory step, such as {b2, . . . , bn}⊢a, that is actually logically valid, though she may know the justificatory step given the larger premise set {b1, b2, . . . , bn} and conclusion a. In this sense, we can conclude that the logical relations in justificatory steps cannot simply be the valid rules of basic propositional deduction.
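Such a validity check is indeed a finite computation. A brute-force truth-table sketch (illustrative only; encoding formulas as Python predicates over a valuation is our own convention):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Decide {premises} |- conclusion by brute force: the entailment is
    valid iff no valuation makes every premise true and the conclusion
    false. Formulas are predicates over a valuation dict."""
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True
```

With n atoms the table has 2^n rows, so decidability here comes at an exponential cost -- feasible for a paragon on small, finite premise sets.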

We may then consider justificatory steps from the perspective of propositional epistemic (doxastic) logic with finite premise sets. The easiest way of interpreting a transitional step can now be expressed as the following mode:

Bb1 . . . Bbn
-------------
Ba

where Bφ means 'the agent believes that φ'. The doxastic logic here is just classical modal logic KD45 [8]. Then the inference mode is equivalent to

B(b1 ∧ . . . ∧ bn)
------------------
Ba

since belief is closed under finite conjunctions.

Suppose that, for an agent, Ba can be deduced from the premise B(b1 ∧ . . . ∧ bn); intuitively, the agent concludes that she believes a from believing b1 ∧ . . . ∧ bn. In this setting, if contracting belief a means that Ba becomes ¬Ba, then by contraposition in propositional logic we should have ¬B(b1 ∧ . . . ∧ bn) for the agent. It can be concluded that there must be at least one Bbi (1≤i≤n) that will become ¬Bbi, since {b1, . . . , bn} is consistent and every bi is contingent (as Tennant stipulates). With regard to the agent, this says that she should give up at least one element of the set {b1, . . . , bn}. Following Tennant's use of coloration, we may use a black node bi to denote Bbi, and the respective white node to denote ¬Bbi. The above contraction process is then quite like spreading white upward in Tennant's FDNs. Similarly, if we understand a transitional step as 'for a certain agent, only if she believes all the elements of {b1, . . . , bn} does she believe a', then if one of her beliefs bi needs to be discarded (that is, Bbi becomes ¬Bbi), Ba becomes ¬Ba. That is also like Tennant's spreading white downward.

In spite of these similarities, Tennant's notion of 'stroke' (with black and white coloring) seems more expressive than the logical deduction notions of ⊢ and ⊬. We would like to see whether there is a doxastic logic that characterizes contraction and revision in the way Tennant's FDNs do. In any case, we know that {b1, . . . , bn} may not stand in any objective deductive relation to a. Therefore, it is also possible for the agent to give up transitional steps (like inferring Ba from {Bb1, . . . , Bbn}) in contraction processes, under the above belief-operator reading. But this is not an issue that Tennant deals with here. So the logical relation of justificatory steps, interpreted in the above way, cannot be the one that Tennant wants.

Now let's suppose the logical relations embedded in justificatory steps are valid first-order rules of deduction with finite premise sets. In this setting, we can readily understand situations where the agent does not know a logically valid inferential rule, since first-order logic is undecidable in general. But, as we know, a rational agent is a logical paragon who never makes mistakes in logic: she will never take an invalid justificatory step to be valid, since by her own lights no proof of it can be found. Conversely, she should treat as invalid even actually valid inferential rules whose validity she has not yet settled herself. It seems, then, that valid first-order deductive rules with finite premise sets have the logical characteristics of justificatory steps that Tennant desires.

Furthermore, we think that there are some further interesting problems suggested by Tennant's work. First, it is possible to identify several important implicit concepts which cannot be expressed directly in FDNs, such as justifying, knowing, or being aware of a step. Those notions are the basis for fully understanding belief-change problems in the framework of FDNs. In order to characterize those concepts more precisely and clearly, we may hope to find epistemic logics to integrate them and make them explicit. Second, the presupposition of the logical paragon for practical rational agents is still too idealized. In reality, people are often likely to make logical errors in practical reasoning activities. It is quite possible for them to treat incorrect proofs as valid arguments. Therefore, for the real belief revision of practical rational agents, the justificatory steps themselves should be subject to change as well.

Last but not least, in Tennant's comparison of his approach with others, he does not mention any work from the DEL tradition. But such work is now under active development and intersects a broad family of disciplines, such as logic, artificial intelligence, and game theory. In [9], van Benthem expresses belief-change actions in the framework of dynamic doxastic logic. For instance, [+A]Bφ and [*A]Bφ may intuitively represent 'after every successful expansion (or revision) with A, the agent believes φ'. Van Benthem even finds valid reduction axioms for the laws of belief-change processes. From the standpoint of the DEL tradition, it is quite natural to think of belief revision in a multi-agent setting. It could be interesting to integrate Tennant's framework of belief-change processes with DEL, and to describe such behaviors as decision making, communication, and even interactive games from the perspective of actions in general [10].

REFERENCES

[1] J. van Benthem. Exploring Logical Dynamics, CSLI Publications 1996, Stanford.

[2] H. van Ditmarsch, W. van der Hoek, B. Kooi. Dynamic Epistemic Logic, Springer 2007.

[3] C. E. Alchourrón, P. Gärdenfors and D. Makinson, On the Logic of Theory Change: Partial Meet Contraction and Revision Functions, in Journal of Symbolic Logic 50 (1985), 510-530.

[4] P. Gärdenfors, Knowledge in Flux: Modeling the Dynamics of Epistemic States, Bradford Books, The MIT Press, Cambridge, MA, 1988.

[5] J. van Benthem. Dynamic logic for belief revision. Journal of Applied Non-Classical Logics, 14(2): 129-155, 2004.

[6] S. O. Hansson, A Textbook of Belief Dynamics: Theory Change and Database Updating, Kluwer Academic Publishers, 1999.

[7] R. Fagin, J. Y. Halpern, Y. Moses, M. Y. Vardi, Reasoning about Knowledge, The MIT Press, Cambridge, MA, 1995.

[8] J. Hintikka, Knowledge and Belief, Cornell University Press, 1962.

[9] J. van Benthem. Open Problems in Logical Dynamics, in: D. Gabbay, S. Goncharov and M. Zakharyashev, editors, Mathematical Problems from Applied Logic I, Springer, 2006, 137-192.

[10] J. van Benthem, H. van Ditmarsch, J. van Eijck and J. Jaspars, Logic in Action, Open Course Project, Institute for Logic, Language and Computation, University of Amsterdam, 2012.

Belief revision is the process of changing beliefs to take into account a new piece of information. The logical formalization of belief revision is researched in philosophy, in databases, and in artificial intelligence for the design of rational agents.

What makes belief revision non-trivial is that several different ways of performing this operation may be possible. For example, if the current knowledge includes the three facts "a is true", "b is true" and "if a and b are true then c is true", the introduction of the new information "c is false" can be done preserving consistency only by removing at least one of the three facts. In this case, there are at least three different ways of performing revision. In general, there may be several different ways of changing knowledge.

Revision and update

Two kinds of changes are usually distinguished:

update 
the new information is about the situation at present, while the old beliefs refer to the past; update is the operation of changing the old beliefs to take into account the change;
revision 
both the old beliefs and the new information refer to the same situation; an inconsistency between the new and old information is explained by the possibility of old information being less reliable than the new one; revision is the process of inserting the new information into the set of old beliefs without generating an inconsistency.

The main assumption of belief revision is that of minimal change: the knowledge before and after the change should be as similar as possible. In the case of update, this principle formalizes the assumption of inertia. In the case of revision, this principle enforces as much information as possible to be preserved by the change.

Example

The following classical example shows that the operations to perform in the two settings of update and revision are not the same. The example is based on two different interpretations of the set of beliefs a∨b and the new piece of information ¬a:

update 
in this scenario, two satellites, Unit A and Unit B, orbit around Mars; the satellites are programmed to land while transmitting their status to Earth; Earth has received a transmission from one of the satellites, communicating that it is still in orbit; however, due to interference, it is not known which satellite sent the signal; subsequently, Earth receives the communication that Unit A has landed; this scenario can be modeled in the following way; two propositional variables a and b indicate that Unit A and Unit B, respectively, are still in orbit; the initial set of beliefs is a∨b (either one of the two satellites is still in orbit) and the new piece of information is ¬a (Unit A has landed, and is therefore not in orbit); the only rational result of the update is ¬a; since the initial information that one of the two satellites had not landed yet was possibly coming from Unit A, the position of Unit B is not known;
revision 
the play "Six Characters in Search of an Author" will be performed in one of the two local theatres; this information can be denoted by a∨b, where a and b indicate that the play will be performed at the first or at the second theatre, respectively; a further piece of information, that "Jesus Christ Superstar" will be performed at the first theatre, indicates that ¬a holds; in this case, the obvious conclusion is that "Six Characters in Search of an Author" will be performed at the second but not the first theatre, which is represented in logic by ¬a∧b.

This example shows that revising the belief a∨b with the new information ¬a produces two different results, ¬a and ¬a∧b, depending on whether the setting is that of update or revision.
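The two outcomes can be reproduced with a small model-based sketch. Concrete operators must be chosen, so we assume Dalal-style revision and Winslett-style (PMA) update, two standard distance-based operators; models are given as sets of true atoms:

```python
def dist(m, k):
    """Hamming distance between two models, given as sets of true atoms."""
    return len(m ^ k)

def revise(K, P):
    """Dalal-style revision: keep the models of the new information P
    that are globally closest to the old belief set K."""
    score = {m: min(dist(m, k) for k in K) for m in P}
    best = min(score.values())
    return {m for m in P if score[m] == best}

def update(K, P):
    """Winslett-style (PMA) update: adapt each model of K separately to
    the closest models of P, then take the union."""
    result = set()
    for k in K:
        best = min(dist(m, k) for m in P)
        result |= {m for m in P if dist(m, k) == best}
    return result
```

With K the models of a∨b and P the models of ¬a, revise(K, P) returns only the model {b} (that is, ¬a∧b), while update(K, P) returns both models of ¬a -- exactly the two results in the example.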

Contraction, expansion, revision, consolidation, and merging

In the setting in which all beliefs refer to the same situation, a distinction between various operations that can be performed is made:

contraction 
removal of a belief;
expansion 
addition of a belief without checking consistency;
revision 
addition of a belief while maintaining consistency;
extraction 
extracting a consistent set of beliefs and/or epistemic entrenchment ordering;
consolidation 
restoring consistency of a set of beliefs;
merging 
fusion of two or more sets of beliefs while maintaining consistency.

Revision and merging differ in that the first operation is done when the new belief to incorporate is considered more reliable than the old ones; therefore, consistency is maintained by removing some of the old beliefs. Merging is a more general operation, in that the priority among the belief sets may or may not be the same.

Revision can be performed by first incorporating the new fact and then restoring consistency via consolidation. This is actually a form of merging rather than revision, as the new information is not always treated as more reliable than the old knowledge.

The AGM postulates

The AGM postulates (named after their proponents, Alchourrón, Gärdenfors, and Makinson) are properties that an operator performing revision should satisfy in order to be considered rational. The considered setting is that of revision: different pieces of information referring to the same situation. Three operations are considered: expansion (addition of a belief without a consistency check), revision (addition of a belief while maintaining consistency), and contraction (removal of a belief).

The first six postulates are called "the basic AGM postulates". In the settings considered by Alchourrón, Gärdenfors, and Makinson, the current set of beliefs is represented by a deductively closed set of logical formulae K, called the belief base; the new piece of information is a logical formula P; and revision is performed by a binary operator ∗ that takes as its operands the current beliefs and the new information and produces as a result a belief base representing the result of the revision. The operator + denotes expansion: K+P is the deductive closure of K∪{P}. The AGM postulates for revision are:

  1. Closure: K∗P is a belief base (i.e., a deductively closed set of formulae);
  2. Success: P ∈ K∗P;
  3. Inclusion: K∗P ⊆ K+P;
  4. Vacuity: if ¬P ∉ K, then K∗P = K+P;
  5. Consistency: K∗P is inconsistent only if P is inconsistent;
  6. Extensionality: if P ↔ Q is a tautology, then K∗P = K∗Q (see logical equivalence);
  7. Superexpansion: K∗(P∧Q) ⊆ (K∗P)+Q;
  8. Subexpansion: if ¬Q ∉ K∗P, then (K∗P)+Q ⊆ K∗(P∧Q).

A revision operator that satisfies all eight postulates is full meet revision, in which K∗P is equal to K+P if P is consistent with K, and to the deductive closure of {P} otherwise. While satisfying all AGM postulates, this revision operator has been considered too conservative, in that no information from the old knowledge base is maintained if the revising formula is inconsistent with it.
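On the semantic side, identifying a belief base with its set of models makes full meet revision a one-liner; the sketch below (our own illustration) shows both its postulate-friendliness and its conservatism:

```python
def full_meet_revise(K, P):
    """Full meet revision read model-theoretically: the models of K*P
    are the shared models of K and P when there are any, and otherwise
    just the models of P (the old base is thrown away wholesale)."""
    return (K & P) or P

def expand(K, P):
    """Expansion K+P: intersect the model sets (possibly empty)."""
    return K & P
```

When K and P are jointly consistent, the result coincides with expansion (vacuity); when they are not, nothing of K survives, which is the "too conservative" behavior noted above.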

Conditions equivalent to the AGM postulates

The AGM postulates are equivalent to several different conditions on the revision operator; in particular, they are equivalent to the revision operator being definable in terms of structures known as selection functions, epistemic entrenchments, systems of spheres, and preference relations. The latter are reflexive, transitive, and total relations over the set of models.

Each revision operator ∗ satisfying the AGM postulates is associated with a set of preference relations ≤K, one for each possible belief base K, such that the models of K are exactly the minimal of all models according to ≤K. The revision operator and its associated family of orderings are related by the fact that K∗P is the set of formulae whose set of models contains all the minimal models of P according to ≤K. This condition is equivalent to the set of models of K∗P being exactly the set of minimal models of P according to the ordering ≤K.

A preference ordering ≤K represents an order of implausibility among all situations, including those that are conceivable but currently considered false. The minimal models according to such an ordering are exactly the models of the knowledge base, which are the models currently considered most likely. All other models are greater than these and are considered less plausible. In general, I < J indicates that the situation represented by the model I is believed to be more plausible than the situation represented by J. As a result, revising by a formula having I and J as models should select only I to be a model of the revised knowledge base, as this model represents the most likely scenario among those supported by the formula.

Contraction

Contraction is the operation of removing a belief P from a knowledge base K; the result of this operation is denoted by K−P. The operators of revision and contraction are related by the Levi and Harper identities:

K∗P = (K−¬P)+P (Levi identity)

K−P = K ∩ (K∗¬P) (Harper identity)

Eight postulates have been defined for contraction. Whenever a revision operator satisfies the eight postulates for revision, its corresponding contraction operator satisfies the eight postulates for contraction, and vice versa. If a contraction operator satisfies at least the first six postulates for contraction, translating it into a revision operator and then back into a contraction operator using the two identities above leads to the original contraction operator. The same holds starting from a revision operator.

One of the postulates for contraction has long been discussed: the recovery postulate

K ⊆ (K−P)+P

According to this postulate, the removal of a belief followed by the reintroduction of the same belief should lead to the original belief base. There are examples showing that such behavior is not always reasonable: in particular, contraction by a general condition such as a∨b leads to the removal of more specific conditions such as a from the belief base; it is then unclear why the reintroduction of a∨b should also lead to the reintroduction of the more specific condition a. For example, if George was previously believed to have German citizenship, he was also believed to be European. Contracting the latter belief amounts to ceasing to believe that George is European; therefore, the belief that George has German citizenship is also retracted from the belief base. If George is later discovered to have Austrian citizenship, then the belief that he is European is reintroduced. According to the recovery postulate, however, the belief that he has German citizenship should also be reintroduced.

The correspondence between revision and contraction induced by the Levi and Harper identities is such that a contraction not satisfying the recovery postulate is translated into a revision satisfying all eight postulates, and that a revision satisfying all eight postulates is translated into a contraction satisfying all eight postulates, including recovery. As a result, if recovery is excluded from consideration, a number of contraction operators are translated into a single revision operator, which can be then translated back into exactly one contraction operator. This operator is the only one of the initial group of contraction operators that satisfies recovery; among this group, it is the operator that preserves as much information as possible.

The Ramsey test

The evaluation of a counterfactual conditional P > Q can be done, according to the Ramsey test (named for Frank P. Ramsey), by the hypothetical addition of P to the set of current beliefs followed by a check for the truth of Q. If K is the set of beliefs currently held, the Ramsey test is formalized by the following correspondence:

P > Q ∈ K if and only if Q ∈ K∗P

If the considered language of the formulae representing beliefs is propositional, the Ramsey test gives a consistent definition for counterfactual conditionals in terms of a belief revision operator. However, if the language of formulae representing beliefs itself includes the counterfactual conditional connective >, the Ramsey test leads to the Gärdenfors triviality result: there is no non-trivial revision operator that satisfies both the AGM postulates for revision and the condition of the Ramsey test. This result holds under the assumption that counterfactual formulae like P > Q can be present in belief bases and revising formulae. Several solutions to this problem have been proposed.
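In the propositional case the Ramsey test can be written directly on top of any revision operator; below we assume full meet revision on model sets, purely for illustration:

```python
def revise(K, A):
    """One concrete choice of revision operator: full meet, on model sets."""
    return (K & A) or A

def accepts(K, A, B):
    """Ramsey test: the conditional A > B is accepted in state K iff B
    holds after (hypothetically) revising K by A."""
    return revise(K, A) <= B
```

An agent believing a∧b accepts "if a then b"; one believing a∧¬b does not, since revising by a leaves her beliefs unchanged and b still fails.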

Non-monotonic inference relation

Given a fixed knowledge base K and a revision operator ∗, one can define a non-monotonic inference relation using the following definition: P ⊢ Q if and only if Q ∈ K∗P. In other words, a formula P entails another formula Q if the addition of P to the current knowledge base leads to the derivation of Q. This inference relation is non-monotonic.
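The induced relation and its non-monotonicity can be checked concretely; again we assume full meet revision on model sets (our choice, for illustration):

```python
def revise(K, A):
    """Full meet revision on model sets (one simple concrete operator)."""
    return (K & A) or A

def infers(K, A, B):
    """The induced inference relation: A |~ B iff B holds in K revised by A."""
    return revise(K, A) <= B
```

With K believing p, q and r, learning p preserves q, but the stronger premise p∧¬r defeats the inference to q, because revising by it discards the old base entirely: the relation is not monotonic in its antecedent.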

The AGM postulates can be translated into a set of postulates for this inference relation. Each of these postulates is entailed by some previously considered set of postulates for non-monotonic inference relations. Vice versa, conditions that have been considered for non-monotonic inference relations can be translated into postulates for a revision operator. All these postulates are entailed by the AGM postulates.

Foundational revision

In the AGM framework, a belief set is represented by a deductively closed set of propositional formulae. While such sets are infinite, they can always be finitely represented. However, working with deductively closed sets of formulae leads to the implicit assumption that equivalent belief bases should be considered equal when revising. This is called the principle of irrelevance of syntax.

This principle has been and is currently debated: while {a, b} and {a∧b} are two equivalent sets, revising by ¬a should arguably produce different results. In the first case, a and b are two separate beliefs; therefore, revising by ¬a should not produce any effect on b, and the result of revision is {¬a, b}. In the second case, a∧b is taken as a single belief. The fact that a is false contradicts this belief, which should therefore be removed from the belief base. The result of revision is therefore {¬a} in this case.

The problem of using deductively closed knowledge bases is that no distinction is made between pieces of knowledge that are known by themselves and pieces of knowledge that are merely consequences of them. This distinction is instead made by the foundational approach to belief revision, which is related to foundationalism in philosophy. According to this approach, retracting a non-derived piece of knowledge should lead to retracting all its consequences that are not otherwise supported (by other non-derived pieces of knowledge). This approach can be realized by using knowledge bases that are not deductively closed, assuming that all formulae in the knowledge base represent self-standing, non-derived beliefs. In order to distinguish the foundational approach to belief revision from that based on deductively closed knowledge bases, the latter is called the coherentist approach. This name has been chosen because the coherentist approach aims at restoring the coherence (consistency) among all beliefs, both self-standing and derived. It is related to coherentism in philosophy.

Foundationalist revision operators working on non-deductively closed belief bases typically select some subsets of K that are consistent with P, combine them in some way, and then conjoin the result with P. One example of such a base revision operator is the following.

WIDTIO 
(When in Doubt, Throw it Out) the maximal subsets of K that are consistent with P are intersected, and the result of revision is this intersection together with P.
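The WIDTIO operator can be sketched finitely by representing each base formula by its set of models over a fixed universe (our own encoding):

```python
from itertools import combinations

def consistent(formulas, universe):
    """Formulas are given by their model sets over `universe`;
    they are jointly consistent iff some model satisfies them all."""
    models = set(universe)
    for f in formulas:
        models &= f
    return bool(models)

def widtio(K, P, universe):
    """WIDTIO: intersect the maximal P-consistent subsets of the base K
    (tracked by index, largest first), then add P itself."""
    n = len(K)
    maximal = []
    for size in range(n, -1, -1):
        for idx in combinations(range(n), size):
            if any(set(idx) <= m for m in maximal):
                continue  # already inside a maximal set
            if consistent([K[i] for i in idx] + [P], universe):
                maximal.append(set(idx))
    kept = set.intersection(*maximal) if maximal else set()
    return [K[i] for i in sorted(kept)] + [P]
```

Revising the base {a, b} by ¬a keeps b, since {b} is the single maximal subset consistent with ¬a; when several maximal subsets disagree, their intersection drops every contested formula -- the "throw it out" in the name.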
