Imprecise Probabilities with a Generalized Interval Form


Imprecise Probabilities with a Generalized Interval Form. Yan Wang, University of Central Florida. REC 2008.

Outline
1. Introduction: Imprecise Probabilities & Applications; Generalized Intervals
2. Imprecise Probability Based on Generalized Intervals: Definitions; Logic Coherence Constraint; Focal and Non-Focal Events
3. Conditioning and Updating: Conditional Interval Probabilities; Updating

Interval-based Representations of Imprecise Probabilities
- Dempster-Shafer evidence theory (Dempster, 1967; Shafer, 1976)
- Behavioral imprecise probability theory (Walley, 1991)
- Possibility theory (Zadeh, 1978; Dubois and Prade, 1988)
- Random sets (Molchanov, 2005)
- Probability bound analysis (Ferson et al., 2002)
- F-probability (Weichselberger, 2000)
- Fuzzy probability (Möller and Beer, 2004)
- Clouds (Neumaier, 2004)

Engineering Applications of Imprecise Probabilities
- Sensor data fusion (Guede and Girardi, 1997; Elouedi et al., 2004)
- Reliability assessment (Kozine and Filimonov, 2000; Berleant and Zhang, 2004; Coolen, 2004)
- Reliability-based design optimization (Mourelatos and Zhou, 2006; Du et al., 2006)
- Design decision making under uncertainty (Nikolaidis et al., 2004; Aughenbaugh and Paredis, 2006)

Generalized Intervals
Modal interval analysis (MIA) (Gardenes et al., 2001; Markov, 2001; Shary, 2002; Popova, 2001; Armengol et al., 2001) is an algebraic and semantic extension of interval analysis (IA) (Moore, 1966). A modal or generalized interval x := [x̲, x̄] ∈ KR is called
- proper when x̲ ≤ x̄. The set of proper intervals is IR = {[x̲, x̄] : x̲ ≤ x̄};
- improper when x̲ ≥ x̄. The set of improper intervals is ĪR = {[x̲, x̄] : x̲ ≥ x̄}.
Operations are defined in Kaucher arithmetic (Kaucher, 1980). Generalized intervals are convenient not only for outer range estimations but also for inner range estimations (Kupriyanova, 1995; Kreinovich et al., 1996; Goldsztejn, 2005).

Generalized/Modal Interval Analysis
Two operators pro and imp return proper and improper values respectively, defined as
pro x := [min(x̲, x̄), max(x̲, x̄)]  (1)
imp x := [max(x̲, x̄), min(x̲, x̄)]  (2)
The relationship between proper and improper intervals is established with the operator dual:
dual x := [x̄, x̲]  (3)
The inclusion relations between generalized intervals x = [x̲, x̄] and y = [y̲, ȳ] are defined as
[x̲, x̄] ⊆ [y̲, ȳ] ⇔ y̲ ≤ x̲ and x̄ ≤ ȳ
[x̲, x̄] ⊇ [y̲, ȳ] ⇔ x̲ ≤ y̲ and ȳ ≤ x̄  (4)
The less-than-or-equal-to and greater-than-or-equal-to relations are defined as
[x̲, x̄] ≤ [y̲, ȳ] ⇔ x̲ ≤ y̲ and x̄ ≤ ȳ
[x̲, x̄] ≥ [y̲, ȳ] ⇔ x̲ ≥ y̲ and x̄ ≥ ȳ  (5)
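The operators of Eqs. (1)-(5) can be sketched in a few lines of Python. This is a minimal illustration, not from the slides: a generalized interval is represented as a plain (lo, hi) tuple, and an improper interval simply has lo > hi.

```python
# Basic modal-interval operators and order relations, Eqs. (1)-(5).
# A generalized interval is a (lo, hi) tuple; improper means lo > hi.

def pro(x):
    """Proper projection, Eq. (1): [min of endpoints, max of endpoints]."""
    return (min(x), max(x))

def imp(x):
    """Improper projection, Eq. (2): [max of endpoints, min of endpoints]."""
    return (max(x), min(x))

def dual(x):
    """Eq. (3): swap the endpoints."""
    return (x[1], x[0])

def included(x, y):
    """x ⊆ y per Eq. (4): y_lo <= x_lo and x_hi <= y_hi."""
    return y[0] <= x[0] and x[1] <= y[1]

def leq(x, y):
    """x <= y per Eq. (5): endpoint-wise comparison."""
    return x[0] <= y[0] and x[1] <= y[1]

print(pro((3, 2)))                # (2, 3)
print(dual((2, 3)))               # (3, 2)
print(included((3, 2), (2, 3)))   # True: an improper interval lies inside its pro
```

Note that Eq. (4) is the Kaucher inclusion, so an improper interval can be included in a proper one, as the last line shows.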

Inf-Sup Diagram

Differences between MIA and Traditional IA

Validity:
- IA: [3, 2] is invalid.
- MIA: both [2, 3] and [3, 2] are valid intervals.

Semantic richness:
- IA: [2, 3] + [2, 4] = [4, 7] is the only valid relation for +, and it only means stack-up and worst case; −, ×, ÷ are similar.
- MIA: [2, 3] + [2, 4] = [4, 7], [2, 3] + [4, 2] = [6, 5], [3, 2] + [2, 4] = [5, 6], and [3, 2] + [4, 2] = [7, 4] are all valid. The respective meanings are
  (∀a ∈ [2, 3])(∀b ∈ [2, 4])(∃c ∈ [4, 7])(a + b = c)
  (∀a ∈ [2, 3])(∀c ∈ [5, 6])(∃b ∈ [2, 4])(a + b = c)
  (∀b ∈ [2, 4])(∃a ∈ [2, 3])(∃c ∈ [5, 6])(a + b = c)
  (∀c ∈ [4, 7])(∃a ∈ [2, 3])(∃b ∈ [2, 4])(a + b = c)
  −, ×, ÷ are similar.

Algebraic closure of arithmetic:
- IA: a + x = b, but x ≠ b − a: [2, 3] + [2, 4] = [4, 7], but [2, 4] ≠ [4, 7] − [2, 3]. Likewise a × x = b, but x ≠ b ÷ a: [2, 3] × [3, 4] = [6, 12], but [3, 4] ≠ [6, 12] ÷ [2, 3]. In general x − x ≠ 0.
- MIA: a + x = b, and x = b − dual a: [2, 3] + [2, 4] = [4, 7], and [2, 4] = [4, 7] − [3, 2]. Likewise a × x = b, and x = b ÷ dual a: [2, 3] × [3, 4] = [6, 12], and [3, 4] = [6, 12] ÷ [3, 2]. In general x − dual x = 0: [2, 3] − [3, 2] = 0.
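The closure identities in the table can be checked numerically. The sketch below implements the Kaucher rules for + and −, and restricts × and ÷ to intervals with positive endpoints (which covers every example in the table); the tuple representation is an illustration, not the slides' notation.

```python
# Kaucher arithmetic on generalized intervals as (lo, hi) tuples.
# + and - are the general rules; * and / assume positive endpoints.

def dual(x):
    return (x[1], x[0])

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def sub(x, y):                       # x - y = x + (-y), with -(c, d) = (-d, -c)
    return (x[0] - y[1], x[1] - y[0])

def mul(x, y):                       # positive-endpoint case only
    return (x[0] * y[0], x[1] * y[1])

def div(x, y):                       # positive-endpoint case only
    return (x[0] / y[1], x[1] / y[0])

# All four modal additions from the table are valid:
print(add((2, 3), (2, 4)))           # (4, 7)
print(add((2, 3), (4, 2)))           # (6, 5)
print(add((3, 2), (2, 4)))           # (5, 6)
print(add((3, 2), (4, 2)))           # (7, 4)

# Algebraic closure: a + x = b gives x = b - dual a, and x - dual x = 0.
assert sub((4, 7), dual((2, 3))) == (2, 4)
assert sub((2, 3), dual((2, 3))) == (0, 0)
assert div(mul((2, 3), (3, 4)), dual((2, 3))) == (3, 4)
```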

Interval Probability
Definition. Given a sample space Ω and a σ-algebra A of random events over Ω, the generalized interval probability p : A → [0, 1] × [0, 1] obeys the axioms of Kolmogorov:
- p(Ω) = [1, 1] = 1;
- 0 ≤ p(E) ≤ 1 (∀E ∈ A);
- for countable mutually disjoint events E_i ∩ E_j = ∅ (i ≠ j), p(∪_{i=1}^n E_i) = Σ_{i=1}^n p(E_i).
The lower and upper probabilities here do not have the traditional meanings of lower and upper envelopes
P̲(E) = inf_{P ∈ P} P(E),  P̄(E) = sup_{P ∈ P} P(E).
Rather, they provide algebraic closure and a logical interpretation.

Operations
Definition.
p(E1 ∪ E2) := p(E1) + p(E2) − dual p(E1 ∩ E2)  (6)
From Eq. (6), we have
p(E1 ∪ E2) + p(E1 ∩ E2) = p(E1) + p(E2)  (7)
Eq. (7) indicates that generalized interval probabilities are 2-monotone (and 2-alternating) in the sense of Choquet capacities; in fact, the property is stronger than 2-monotonicity. Since p(E1 ∩ E2) ≥ 0,
p(E1 ∪ E2) ≤ p(E1) + p(E2)  (8)
with equality in Eq. (8) when p(E1 ∩ E2) = 0. For three events,
p(E1 ∪ E2 ∪ E3) = p(E1) + p(E2) + p(E3) − dual p(E1 ∩ E2) − dual p(E2 ∩ E3) − dual p(E1 ∩ E3) + p(E1 ∩ E2 ∩ E3)
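Eq. (6) and the additivity identity Eq. (7) can be exercised directly with Kaucher addition and subtraction. The numeric values below are illustrative, not taken from the slides.

```python
# Union rule of Eq. (6): p(E1 ∪ E2) = p(E1) + p(E2) - dual p(E1 ∩ E2),
# with intervals as (lo, hi) tuples and Kaucher +/-.

def dual(x):
    return (x[1], x[0])

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def sub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def p_union(p1, p2, p12):
    """Eq. (6), where p12 is the interval probability of E1 ∩ E2."""
    return sub(add(p1, p2), dual(p12))

p1, p2, p12 = (0.2, 0.3), (0.3, 0.5), (0.1, 0.2)   # illustrative values
pu = p_union(p1, p2, p12)
print(tuple(round(v, 10) for v in pu))              # (0.4, 0.6)

# Eq. (7): the dual makes the identity exact, not merely an enclosure.
lhs, rhs = add(pu, p12), add(p1, p2)
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

The design point here is the dual in Eq. (6): with ordinary interval subtraction the widths of p(E1 ∩ E2) and p(E1 ∪ E2) would both inflate, and Eq. (7) would hold only as an inclusion.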

Complement
Definition.
p(E^c) := 1 − dual p(E)  (9)
which is equivalent to
p(E) + p(E^c) = 1  (10)
p̲(E^c) = 1 − p̲(E)  (11)
p̄(E^c) = 1 − p̄(E)  (12)

Logic Coherence Constraint
Logic Coherence Constraint. For a mutually disjoint event partition ∪_{i=1}^n E_i = Ω, we have
Σ_{i=1}^n p(E_i) = 1  (13)
Suppose p(E_i) ∈ IR (for i = 1, ..., k) and p(E_i) ∈ ĪR (for i = k+1, ..., n). Based on the interpretability principles of MIA (Gardenes et al., 2001), Eq. (13) can be interpreted as
(∀p_1 ∈ pro p(E_1)) ... (∀p_k ∈ pro p(E_k)) (∃p_{k+1} ∈ pro p(E_{k+1})) ... (∃p_n ∈ pro p(E_n)) (Σ_{i=1}^n p_i = 1)
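For n = 2 the logic coherence constraint is just the complement rule of Eq. (9): if p(E) is proper, p(E^c) comes out improper and the Kaucher sum is exactly [1, 1]. A minimal sketch with an illustrative value:

```python
# Complement (Eq. 9) and the n = 2 logic coherence constraint (Eq. 13).

def dual(x):
    return (x[1], x[0])

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def sub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def complement(p):
    """Eq. (9): p(E^c) = 1 - dual p(E)."""
    return sub((1.0, 1.0), dual(p))

pE = (0.25, 0.5)            # proper, illustrative
pEc = complement(pE)
print(pEc)                  # (0.75, 0.5) -- improper: lower bound exceeds upper
print(add(pE, pEc))         # (1.0, 1.0) -- Eq. (13) holds exactly
```

This is the point of the constraint: in a coherent assignment the modalities must mix, with the focal (proper) intervals balanced by improper ones.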

Focal and Non-Focal Events
Definition. An event E is a focal event if its associated semantics is universal (Q_{p(E)} = ∀). Otherwise it is a non-focal event, whose semantics is existential (Q_{p(E)} = ∃).
A focal event is an event of interest. The uncertainties associated with focal events are critical. In contrast, the uncertainties associated with non-focal events are complementary and balancing: they are derived from those of the corresponding focal events.

Relationships and Interpretations
Definition. Event E1 is said to be less likely to occur than event E2, denoted E1 ≤ E2, defined as
E1 ≤ E2 ⇔ p(E1) ≤ p(E2)  (14)
Event E1 is said to be less focused than event E2, denoted E1 ⊑ E2, defined as
E1 ⊑ E2 ⇔ p(E1) ⊆ p(E2)  (15)
Properties:
- E1 ⊆ E2 ⇒ E1 ≤ E2.
- If E1 ∩ E3 = ∅ and E2 ∩ E3 = ∅, then E1 ≤ E2 ⇔ E1 ∪ E3 ≤ E2 ∪ E3, and E1 ⊑ E2 ⇔ E1 ∪ E3 ⊑ E2 ∪ E3.

Relationships between a Focal Event and Its Complement
A focal event E is less likely to occur than its complement if p(E) ≤ 0.5; E is more likely to occur than its complement if p(E) ≥ 0.5; otherwise, E is more focused than its complement.
Figure: inf-sup diagrams for the different relationships between p(E) and p(E^c) when p(E) ∈ IR.

Conditional Interval Probabilities
Definition. The conditional interval probability p(E|C) for E, C ∈ A is defined as
p(E|C) := p(E ∩ C) / dual p(C) = [ p̲(E ∩ C)/p̲(C), p̄(E ∩ C)/p̄(C) ]  (16)
when p(C) > 0.
- The definition is based on marginal probabilities.
- It ensures the algebraic closure of the interval probability calculus.
- It is a generalization of the canonical conditional probability in F-probability.

Conditioning Example
Example. Let p(E1) = [0.10, 0.25], p(E2) = [0.20, 0.40], p(E3) = [0.40, 0.60], p(E2 ∪ E3) = [0.75, 0.90], p(E1 ∪ E3) = [0.60, 0.80], p(E1 ∪ E2) = [0.40, 0.60]. A partition of Ω = E1 ∪ E2 ∪ E3 is C = {C1, C2}, where C1 = E1 ∪ E2 and C2 = E3, with p(C1) = [0.40, 0.60] and p(C2) = [0.60, 0.40]. Since E1 ⊆ C1, p(E1 ∩ C1) = p(E1).
Suppose p(E1) = [0.10, 0.25] and p(C1) = [0.60, 0.40]. We have a complete estimation
p(E1|C1) = [0.10, 0.25] / dual [0.60, 0.40] = [0.10, 0.25] / [0.40, 0.60] = [0.1666, 0.6250]
with the interpretation
(∀p_{E1} ∈ [0.10, 0.25]) (∃p_{C1} ∈ [0.40, 0.60]) (∃p_{E1|C1} ∈ [0.1666, 0.6250]) (p_{E1|C1} = p_{E1}/p_{C1})
Suppose instead p(E1) = [0.25, 0.10] and p(C1) = [0.40, 0.60]. We have a sound estimation
p(E1|C1) = [0.25, 0.10] / dual [0.40, 0.60] = [0.25, 0.10] / [0.60, 0.40] = [0.6250, 0.1666]
with the interpretation
(∀p_{E1|C1} ∈ [0.1666, 0.6250]) (∃p_{E1} ∈ [0.10, 0.25]) (∃p_{C1} ∈ [0.40, 0.60]) (p_{E1|C1} = p_{E1}/p_{C1})
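The two estimations above can be reproduced with the Kaucher quotient of Eq. (16). The (lo, hi) tuple representation and the restriction to positive endpoints are illustration choices, not the slides' notation.

```python
# Conditioning per Eq. (16): p(E|C) = p(E ∩ C) / dual p(C).
# For positive endpoints, the Kaucher quotient x / y is (x_lo/y_hi, x_hi/y_lo),
# so dividing by the dual pairs lower with lower and upper with upper.

def dual(x):
    return (x[1], x[0])

def div(x, y):                    # Kaucher division, positive endpoints only
    return (x[0] / y[1], x[1] / y[0])

def cond(p_joint, p_cond):
    """Eq. (16): p(E|C) = p(E ∩ C) / dual p(C)."""
    return div(p_joint, dual(p_cond))

# Complete estimation: proper p(E1), improper p(C1)
print(cond((0.10, 0.25), (0.60, 0.40)))   # ≈ (0.1667, 0.6250), proper

# Sound estimation: improper p(E1), proper p(C1)
print(cond((0.25, 0.10), (0.40, 0.60)))   # ≈ (0.6250, 0.1667), improper
```

Note how swapping the modalities of the inputs flips the modality of the result, which is exactly what switches the interpretation between completeness and soundness.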

Conditioning Example - cont'd
Example. Suppose p(E1) = [0.25, 0.10], p(E2) = [0.20, 0.40], and p(C1) = [0.60, 0.40]. We have
p(E1|C1) = [0.25, 0.10] / dual [0.60, 0.40] = [0.25, 0.10] / [0.40, 0.60] = [0.4166, 0.25]
p(E2|C1) = [0.20, 0.40] / dual [0.60, 0.40] = [0.20, 0.40] / [0.40, 0.60] = [0.3333, 1.0]
The interpretations are
(∀p_{E1|C1} ∈ [0.25, 0.4166]) (∃p_{C1} ∈ [0.40, 0.60]) (∃p_{E1} ∈ [0.10, 0.25]) (p_{E1|C1} = p_{E1}/p_{C1})
(∀p_{E2} ∈ [0.20, 0.40]) (∃p_{C1} ∈ [0.40, 0.60]) (∃p_{E2|C1} ∈ [0.3333, 1.0]) (p_{E2|C1} = p_{E2}/p_{C1})
respectively. Combining the two, we have the interpretation
(∀p_{E2} ∈ [0.20, 0.40]) (∀p_{E1|C1} ∈ [0.25, 0.4166]) (∃p_{C1} ∈ [0.40, 0.60]) (∃p_{E1} ∈ [0.10, 0.25]) (∃p_{E2|C1} ∈ [0.3333, 1.0]) (p_{E1|C1} = p_{E1}/p_{C1} and p_{E2|C1} = p_{E2}/p_{C1})

Properties of Conditioning
Independence. If events A and B are independent, then
p(A|B) = p(A)p(B) / dual p(B) = p(A)  (17)
Mutual Exclusion. For a mutually disjoint event partition ∪_{i=1}^n E_i = Ω, we have
p(A) = Σ_{i=1}^n p(A|E_i) p(E_i)  (18)
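The dual in the denominator is what makes Eq. (17) exact: the quotient collapses back to p(A) endpoint by endpoint, with no interval inflation. A small illustrative check (values assumed, positive endpoints only):

```python
# Independence identity, Eq. (17): p(A)p(B) / dual p(B) = p(A) exactly.

def dual(x):
    return (x[1], x[0])

def mul(x, y):          # Kaucher product, positive endpoints only
    return (x[0] * y[0], x[1] * y[1])

def div(x, y):          # Kaucher quotient, positive endpoints only
    return (x[0] / y[1], x[1] / y[0])

pA, pB = (0.25, 0.5), (0.5, 0.75)          # illustrative values
lhs = div(mul(pA, pB), dual(pB))
print(lhs)                                  # (0.25, 0.5) == p(A)
```

With ordinary interval division (dividing by p(B) itself) the result would strictly contain p(A); the dual cancels the dependency.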

Properties of Conditioning
Value of Contradictory Information. If B ∩ C = ∅,
p(A|C) ⊆ p(A|B) ⇒ p(A|B ∪ C) ⊆ p(A|B).
- If there are two pieces of evidence, B and C, and C provides a more precise estimation about a focal event A than B does, then the estimation of the probability of A based on the disjunctively combined evidence B ∪ C can be more precise than the one based on B alone, even though the two pieces of evidence contradict each other.
- Conversely, if the precision of the focal event estimation is improved when the new evidence C is introduced, then C must be more informative than the old evidence B, although the two are contradictory.

Properties of Conditioning
Value of Accumulative Information. If B ∩ C = ∅,
p(A|B ∪ C) ⊇ p(A|B) ⇒ p(A|C) ⊇ p(A|B).
- If the estimation about a focal event A becomes more precise when new evidence B excludes some possibilities C from the original evidence B ∪ C, then the estimation of the probability of A based on the new evidence B must be more precise than the one based on the excluded part C alone.
- If the precision of the focal event estimation with a contradictory piece of evidence C is not improved compared to the one with another piece of evidence B, then the combined evidence B ∪ C does not improve the estimation of the focal event A.

Bayes' Rule with Generalized Intervals
Definition. The Bayes' rule with generalized intervals (GIBR) is defined as
p(E_i|A) = p(A|E_i) p(E_i) / Σ_{j=1}^n dual p(A|E_j) dual p(E_j)  (19)
where E_i (i = 1, ..., n) are mutually disjoint event partitions of Ω and Σ_{j=1}^n p(E_j) = 1. Bound by bound,
[ p̲(E_i|A), p̄(E_i|A) ] = [ p̲(A|E_i) p̲(E_i) / Σ_{j=1}^n p̲(A|E_j) p̲(E_j), p̄(A|E_i) p̄(E_i) / Σ_{j=1}^n p̄(A|E_j) p̄(E_j) ]  (20)
The rule is algebraically consistent with the conditional definition in Eq. (16), since
Σ_{j=1}^n dual p(A|E_j) dual p(E_j) = Σ_{j=1}^n dual [ p(A|E_j) p(E_j) ] = dual Σ_{j=1}^n p(A|E_j) p(E_j) = dual p(A)
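Because every term in the denominator of Eq. (19) is dualized, the lower posterior in Eq. (20) uses only lower bounds and the upper posterior only upper bounds. A minimal sketch, with illustrative priors chosen to satisfy the logic coherence constraint (p(E2) = 1 − dual p(E1)):

```python
# Generalized interval Bayes' rule, Eq. (20), for (lo, hi) pairs.

def posterior(likelihoods, priors, i):
    """p(E_i|A): lower bound from lower bounds, upper from upper bounds."""
    lo_den = sum(l[0] * p[0] for l, p in zip(likelihoods, priors))
    hi_den = sum(l[1] * p[1] for l, p in zip(likelihoods, priors))
    l, p = likelihoods[i], priors[i]
    return (l[0] * p[0] / lo_den, l[1] * p[1] / hi_den)

priors = [(0.4, 0.6), (0.6, 0.4)]          # p(E2) = 1 - dual p(E1): LCC holds
likelihoods = [(0.7, 0.8), (0.2, 0.3)]     # assumed p(A|E1), p(A|E2)
print(posterior(likelihoods, priors, 0))   # ≈ (0.7, 0.8)
```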

2-Monotone Tight Envelope Equivalency
When n = 2, p(E) + p(E^c) = 1. Let p(E^c) ∈ ĪR. Eq. (19) becomes
p̲(E|A) = p̲(A|E) p̲(E) / ( p̲(A|E) p̲(E) + p̲(A|E^c) p̲(E^c) ) = p̲(A ∩ E) / ( p̲(A ∩ E) + p̲(A ∩ E^c) )  (21)
p̄(E|A) = p̄(A|E) p̄(E) / ( p̄(A|E) p̄(E) + p̄(A|E^c) p̄(E^c) ) = p̄(A ∩ E) / ( p̄(A ∩ E) + p̄(A ∩ E^c) )  (22)
When p(A|E) ∈ IR and p(A|E^c) ∈ ĪR, the relation is equivalent to the well-known 2-monotone tight envelope (Fagin and Halpern, 1991; de Campos et al., 1990; Wasserman and Kadane, 1990; Jaffray, 1992; Chrisman, 1995), given as
P̲(E|A) = P̲(A ∩ E) / ( P̲(A ∩ E) + P̄(A ∩ E^c) )  (23)
P̄(E|A) = P̄(A ∩ E) / ( P̄(A ∩ E) + P̲(A ∩ E^c) )  (24)
where P̲ and P̄ are the lower and upper probability bounds defined in traditional interval probabilities.

Properties of Updating
p(A|E) ⊆ p(A|E^c) ⇒ p(E|A) ⊆ p(E).
p(A|E) ⊇ p(A|E^c) ⇒ p(E|A) ⊇ p(E).
Suppose the likelihood functions p(A|E) and p(A|E^c), as well as the prior and posterior probabilities, are proper intervals. If the likelihood estimation of event A given that E occurs is more accurate than that of A given that E does not occur, then the extra information A can reduce the ambiguity of the prior estimation.

Properties of Updating
p(A|E) ≥ p(A|E^c) ⇒ p(E|A) ≥ p(E).
p(A|E) ≤ p(A|E^c) ⇒ p(E|A) ≤ p(E).
If the occurrence of event E increases the likelihood estimation of event A compared to the one without the occurrence of E, then the extra information A will increase the probability of knowing that event E occurs.

Properties of Updating
p(A|E) = p(A|E^c) ⇒ p(E|A) = p(E).
The extra information A does not add much value to the assessment of event E if the likelihoods p(A|E) and p(A|E^c) are very similar.

Properties of Updating
Sequence-Independence.
p(E|A ∩ B) = p(E ∩ B|A) / dual p(B|A)
p(A ∩ B) = p(B|A) p(A)
The posterior lower and upper bounds obtained by applying a series of pieces of evidence sequentially agree with the bounds obtained by conditioning the prior on all of the evidence in a single step.

Soundness of Posterior Probability Estimation
p(E_i|A) = p(A|E_i) p(E_i) / Σ_{j=1}^n dual p(A|E_j) dual p(E_j)
Soundness can be verified when p(A|E_i) ∈ ĪR, p(E_i) ∈ ĪR, p(A|E_j) ∈ IR (j = 1, ..., n, j ≠ i), p(E_j) ∈ IR (j = 1, ..., n, j ≠ i), and p(E_i|A) ∈ ĪR:
(∀j ≠ i) (∀p_{A|E_j} ∈ p(A|E_j)) (∀p_{E_j} ∈ p(E_j)) (∀p_{E_i|A} ∈ pro p(E_i|A)) (∃p_{A|E_i} ∈ pro p(A|E_i)) (∃p_{E_i} ∈ pro p(E_i)) (p_{E_i|A} = p_{A|E_i} p_{E_i} / Σ_{j=1}^n p_{A|E_j} p_{E_j})  (25)

Summary
- We differentiate focal events from non-focal events by the modalities and semantics of interval probabilities: an event is focal when the semantics associated with its interval probability is universal, and non-focal when the semantics is existential.
- This differentiation allows a simple and unified representation based on a logic coherence constraint, which is a stronger restriction than regular 2-monotonicity.
- Algebraic closure of the new interval form simplifies the calculus.
- The new Bayes' updating rule is shown to be a generalization of the 2-monotone tight envelope updating rule under the new representation.
- The logic interpretation helps to verify completeness and soundness of range estimations.