A gentle introduction to imprecise probability models


A gentle introduction to imprecise probability models and their behavioural interpretation
Gert de Cooman (gert.decooman@ugent.be)
SYSTeMS research group, Ghent University

Overview
- General considerations about probability
- Epistemic probability
- Decision making
- Conditioning

Part I: General considerations about probability

Two kinds of probabilities
Aleatory probabilities:
- physical property, disposition
- related to frequentist models
- other names: objective, statistical or physical probability, chance
Epistemic probabilities:
- model knowledge, information
- represent strength of beliefs
- other names: personal or subjective probability

Part II: Epistemic probability

First observation
For many applications, we need theories to represent and reason with certain and uncertain knowledge:
- certain: logic
- uncertain: probability theory
One candidate: the Bayesian theory of probability. I shall:
- argue that it is not general enough
- present the basic ideas behind a more general theory: imprecise probability theory (IP)

A theory of epistemic probability
Three pillars:
1. how to measure epistemic probability?
2. by what rules does epistemic probability abide?
3. how can we use epistemic probability in reasoning, decision making, statistics, ...?
Notice that: 1 and 2 = knowledge representation; 3 = reasoning, inference.

How to measure personal probability?
Introspection:
- difficulty: how to convey and compare strengths of beliefs?
- lack of a common standard
Belief = inclination to act:
- beliefs lead to behaviour, which can be used to measure their strength
- special type of behaviour: accepting gambles
A gamble is a transaction/action/decision that yields different outcomes (utilities) in different states of the world.

Gambles
$\Omega$ is the set of possible outcomes $\omega$. A gamble $X$ is a bounded real-valued function on $\Omega$:
$X\colon \Omega \to \mathbb{R}\colon \omega \mapsto X(\omega)$
Example: How did I come to Lugano? By plane (p), by car (c) or by train (t)?
$\Omega = \{p, c, t\}$, with $X(p) = 3$, $X(c) = 2$, $X(t) = 5$.
Whether you accept this gamble or not will depend on your knowledge about how I came to Lugano. Denote your set of desirable gambles by $D \subseteq L(\Omega)$.
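
To fix the representation used in the rest of this transcription, here is a minimal Python sketch (not part of the original slides; all names are illustrative) of gambles on this three-element possibility space:

```python
# A gamble on a finite possibility space is just a bounded real-valued map,
# so a plain dictionary will do. Omega = {p, c, t} as on the slide.
Omega = ("p", "c", "t")

# The example gamble: X(p) = 3, X(c) = 2, X(t) = 5
X = {"p": 3.0, "c": 2.0, "t": 5.0}

# Boundedness means sup and inf always exist:
sup_X = max(X[w] for w in Omega)   # 5.0
inf_X = min(X[w] for w in Omega)   # 2.0

# The pointwise operations used in the axioms below
# (sums and non-negative multiples of gambles):
def add(X, Y):
    return {w: X[w] + Y[w] for w in X}

def scale(lam, X):
    return {w: lam * X[w] for w in X}
```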

Modelling your uncertainty
Accepting a gamble = taking a decision/action in the face of uncertainty. Your set of desirable gambles contains the gambles that you accept. It is a model for your uncertainty about which value $\omega$ of $\Omega$ actually obtains (or will obtain).
More common models:
- (lower and upper) previsions
- (lower and upper) probabilities
- preference orderings
- probability orderings
- sets of probabilities

Desirability and rationality criteria
Rewards are expressed in units of a linear utility scale. Axioms: a set of desirable gambles $D$ is coherent iff
D1. $0 \notin D$
D2. if $X > 0$ then $X \in D$
D3. if $X, Y \in D$ then $X + Y \in D$
D4. if $X \in D$ and $\lambda > 0$ then $\lambda X \in D$
Consequence: if $X \in D$ and $Y \geq X$ then $Y \in D$.
Consequence: if $X_1, \dots, X_n \in D$ and $\lambda_1, \dots, \lambda_n > 0$ then $\sum_{k=1}^{n} \lambda_k X_k \in D$.
A coherent set of desirable gambles is a convex cone of gambles that contains all positive gambles but not the zero gamble.

Definition of lower/upper prevision
Consider a gamble $X$.
Buying $X$ for a price $\mu$ yields a new gamble $X - \mu$. The lower prevision $\underline{P}(X)$ of $X$
= supremum acceptable price for buying $X$
= supremum $p$ such that $X - \mu$ is desirable for all $\mu < p$
= $\sup\{\mu\colon X - \mu \in D\}$
Selling $X$ for a price $\mu$ yields a new gamble $\mu - X$. The upper prevision $\overline{P}(X)$ of $X$
= infimum acceptable price for selling $X$
= infimum $p$ such that $\mu - X$ is desirable for all $\mu > p$
= $\inf\{\mu\colon \mu - X \in D\}$

Lower and upper prevision 1
Selling a gamble $X$ for price $\mu$ = buying $-X$ for price $-\mu$:
$\mu - X = (-X) - (-\mu)$
Consequently:
$\overline{P}(X) = \inf\{\mu\colon \mu - X \in D\} = \inf\{-\lambda\colon (-X) - \lambda \in D\} = -\sup\{\lambda\colon (-X) - \lambda \in D\} = -\underline{P}(-X)$

Lower and upper prevision 2
$\underline{P}(X) = \sup\{\mu\colon X - \mu \in D\}$: if you specify a lower prevision $\underline{P}(X)$, you are committed to accepting $X - \underline{P}(X) + \epsilon = X - [\underline{P}(X) - \epsilon]$ for all $\epsilon > 0$ (but not necessarily for $\epsilon = 0$).
$\overline{P}(X) = \inf\{\mu\colon \mu - X \in D\}$: if you specify an upper prevision $\overline{P}(X)$, you are committed to accepting $\overline{P}(X) - X + \epsilon = [\overline{P}(X) + \epsilon] - X$ for all $\epsilon > 0$ (but not necessarily for $\epsilon = 0$).

Precise previsions
When lower and upper prevision coincide, $P(X) = \underline{P}(X) = \overline{P}(X)$ is called the (precise) prevision of $X$. $P(X)$ is a prevision, or fair price, in de Finetti's sense. Previsions are the precise, or Bayesian, probability models. If you specify a prevision $P(X)$, you are committed to accepting $[P(X) + \epsilon] - X$ and $X - [P(X) - \epsilon]$ for all $\epsilon > 0$ (but not necessarily for $\epsilon = 0$).

Allowing for indecision
a) precise model: for any price $p < P(X)$ you buy $X$, and for any price $q > P(X)$ you sell $X$.
b) imprecise model: for $p < \underline{P}(X)$ you buy $X$; for $q > \overline{P}(X)$ you sell $X$; for prices in between you remain undecided.
Specifying a precise prevision $P(X)$ means that you choose, for essentially any real price $p$, between buying $X$ for price $p$ or selling $X$ for that price. Imprecise models allow for indecision!

Events and lower probabilities
An event $A$ is a subset of $\Omega$. Example: the event $\{c, t\}$ that I did not come by plane to Lugano. It can be identified with a special gamble $I_A$ on $\Omega$:
$I_A(\omega) = 1$ if $\omega \in A$, i.e., $A$ occurs; $I_A(\omega) = 0$ if $\omega \notin A$, i.e., $A$ doesn't occur.
The lower probability $\underline{P}(A)$ of $A$
= lower prevision $\underline{P}(I_A)$ of the indicator $I_A$
= supremum rate for betting on $A$
= measure of evidence in favour of $A$
= measure of (strength of) belief in $A$

Upper probabilities
The upper probability $\overline{P}(A)$ of $A$
= the upper prevision $\overline{P}(I_A)$ of $I_A$
= $\overline{P}(1 - I_{coA}) = 1 - \underline{P}(I_{coA}) = 1 - \underline{P}(coA)$
= measures lack of evidence against $A$
= measures the plausibility of $A$
This gives a behavioural interpretation to lower and upper probability: evidence for $A$ corresponds to $\underline{P}(A)$; evidence against $A$ corresponds to $\overline{P}(A)$.

Rules of epistemic probability
Lower and upper previsions represent commitments to act/behave in certain ways, and the rules that govern them reflect the rationality of behaviour. Your behaviour is considered to be irrational when:
- it is harmful to yourself: specifying betting rates such that you lose utility, whatever the outcome; ruling this out = avoiding sure loss (cf. logical consistency)
- it is inconsistent: you are not fully aware of the implications of your betting rates; ruling this out = coherence (cf. logical closure)

Avoiding sure loss
Example: two bets,
on $A$: $I_A - \underline{P}(A)$
on $coA$: $I_{coA} - \underline{P}(coA)$
together: $1 - [\underline{P}(A) + \underline{P}(coA)]$,
a constant, which is negative whatever the outcome as soon as $\underline{P}(A) + \underline{P}(coA) > 1$. Avoiding sure loss therefore implies $\underline{P}(A) + \underline{P}(coA) \leq 1$, or equivalently $\underline{P}(A) \leq \overline{P}(A)$.
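
A quick numerical illustration of this sure loss (a hypothetical sketch; the betting rates 0.6 and 0.5 are invented so as to violate the condition):

```python
# Betting rates with P_(A) + P_(coA) = 1.1 > 1: a sure loss.
P_A, P_coA = 0.6, 0.5

for I_A in (1, 0):                        # A occurs / A does not occur
    I_coA = 1 - I_A
    total = (I_A - P_A) + (I_coA - P_coA)  # the two accepted bets combined
    print(I_A, total)                     # prints -0.1 in both cases:
                                          # you lose 0.1 whatever the outcome
```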

Avoiding sure loss: general condition
Consider a set of gambles $K$ and a lower prevision $\underline{P}$ defined for each gamble in $K$.
Definition 1. $\underline{P}$ avoids sure loss if for all $n \geq 0$, $X_1, \dots, X_n$ in $K$ and for all non-negative $\lambda_1, \dots, \lambda_n$:
$\sup_{\omega \in \Omega} \sum_{k=1}^{n} \lambda_k [X_k(\omega) - \underline{P}(X_k)] \geq 0$
If this doesn't hold, there are $\epsilon > 0$, $n \geq 0$, $X_1, \dots, X_n$ and positive $\lambda_1, \dots, \lambda_n$ such that for all $\omega$:
$\sum_{k=1}^{n} \lambda_k [X_k(\omega) - \underline{P}(X_k) + \epsilon] \leq -\epsilon$ — a sure loss!

Coherence
Example: two bets involving $A$ and $B$ with $A \cap B = \emptyset$,
on $A$: $I_A - \underline{P}(A)$
on $B$: $I_B - \underline{P}(B)$
together: $I_{A \cup B} - [\underline{P}(A) + \underline{P}(B)]$
Coherence implies that $\underline{P}(A) + \underline{P}(B) \leq \underline{P}(A \cup B)$.

Coherence: general condition
Consider a set of gambles $K$ and a lower prevision $\underline{P}$ defined for each gamble in $K$.
Definition 2. $\underline{P}$ is coherent if for all $n \geq 0$, $X_0, X_1, \dots, X_n$ in $K$ and for all non-negative $\lambda_0, \lambda_1, \dots, \lambda_n$:
$\sup_{\omega \in \Omega} \left[ \sum_{k=1}^{n} \lambda_k [X_k(\omega) - \underline{P}(X_k)] - \lambda_0 [X_0(\omega) - \underline{P}(X_0)] \right] \geq 0$
If this doesn't hold, there are $\epsilon > 0$, $n \geq 0$, $X_0, X_1, \dots, X_n$ and positive $\lambda_1, \dots, \lambda_n$ such that for all $\omega$:
$X_0(\omega) - (\underline{P}(X_0) + \epsilon) \geq \sum_{k=1}^{n} \lambda_k [X_k(\omega) - \underline{P}(X_k) + \epsilon]$
so your other assessments commit you to buying $X_0$ at the strictly higher price $\underline{P}(X_0) + \epsilon$!

Coherence of precise previsions
A (precise) prevision is coherent when it is coherent both as a lower and as an upper prevision. A precise prevision $P$ on $L(\Omega)$ is coherent iff
- $P(\lambda X + \mu Y) = \lambda P(X) + \mu P(Y)$
- if $X \geq 0$ then $P(X) \geq 0$
- $P(\Omega) = 1$
This coincides with de Finetti's notion of a coherent prevision, and its restriction to events is a (finitely additive) probability measure. Let $\mathbb{P}$ denote the set of all linear previsions on $L(\Omega)$.

Sets of previsions
Consider a lower prevision $\underline{P}$ on a set of gambles $K$, and let $M(\underline{P})$ be the set of precise previsions that dominate $\underline{P}$ on its domain $K$:
$M(\underline{P}) = \{P \in \mathbb{P}\colon (\forall X \in K)(P(X) \geq \underline{P}(X))\}$
Then avoiding sure loss is equivalent to
$M(\underline{P}) \neq \emptyset$
and coherence is equivalent to
$\underline{P}(X) = \min\{P(X)\colon P \in M(\underline{P})\}, \quad X \in K$
A lower envelope of a set of precise previsions is always coherent.
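
For a finite possibility space both statements can be checked by linear programming: $M(\underline{P})$ is a polytope of probability mass functions, avoiding sure loss is a feasibility problem, and the lower envelope is a linear minimisation over that polytope. A sketch using scipy (not code from the talk; the assessments at the bottom are invented):

```python
import numpy as np
from scipy.optimize import linprog

def lower_envelope(gambles, lower_prev, X):
    """min { m.X : m >= 0, sum(m) = 1, m.X_k >= P_(X_k) for all k }.
    gambles: array of shape (n_assessments, n_outcomes) holding the X_k;
    lower_prev: the assessed P_(X_k); X: the gamble to evaluate.
    Returns None when M(P_) is empty, i.e. when P_ incurs sure loss."""
    K = np.atleast_2d(np.asarray(gambles, float))
    n = K.shape[1]
    res = linprog(np.asarray(X, float),
                  A_ub=-K, b_ub=-np.asarray(lower_prev, float),  # m.X_k >= P_
                  A_eq=np.ones((1, n)), b_eq=[1.0],              # sum(m) = 1
                  bounds=[(0, None)] * n)                        # m >= 0
    return res.fun if res.success else None

# Two invented assessments on Omega = {w1, w2, w3}:
gambles = [[1.0, 0.0, 0.0],   # X_1 = I_{w1},       P_(X_1) = 0.2
           [1.0, 1.0, 0.0]]   # X_2 = I_{{w1,w2}},  P_(X_2) = 0.6
p = [0.2, 0.6]

print(lower_envelope(gambles, p, [0.0, 1.0, 0.0]))    # E_(I_{w2}) = 0.0
print(-lower_envelope(gambles, p, [0.0, -1.0, 0.0]))  # conjugate upper: 0.8
```

The same function implements the natural extension discussed on the next slides: evaluating it at a gamble $X$ outside $K$ returns the lower envelope $\min\{P(X)\colon P \in M(\underline{P})\}$.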

Coherent lower/upper previsions 1
- probability measures, previsions à la de Finetti
- 2-monotone capacities, Choquet capacities
- contamination models
- possibility and necessity measures
- belief and plausibility functions
- random set models

Coherent lower/upper previsions 2
- reachable probability intervals
- lower and upper mass/density functions
- lower and upper cumulative distributions (p-boxes)
- (lower and upper envelopes of) credal sets
- distributions (Gaussian, Poisson, Dirichlet, multinomial, ...) with interval-valued parameters
- robust Bayesian models
- ...

Natural extension
Third step toward a scientific theory = how to make the theory useful = use the assessments to draw conclusions about other things [(conditional) events, gambles, ...].
Problem: extend a coherent lower prevision defined on a collection of gambles to a lower prevision on all gambles.
Requirements:
- coherence
- as low as possible (conservative, least-committal)
= NATURAL EXTENSION

Natural extension: an example 1
Lower probabilities $\underline{P}(A)$ and $\underline{P}(B)$ for two events $A$ and $B$ that are logically independent: all four intersections $A \cap B$, $A \cap coB$, $coA \cap B$ and $coA \cap coB$ are non-empty.
For all $\lambda \geq 0$ and $\mu \geq 0$, you accept to buy any gamble $X$ for price $\alpha$ if for all $\omega$:
$X(\omega) - \alpha \geq \lambda[I_A(\omega) - \underline{P}(A)] + \mu[I_B(\omega) - \underline{P}(B)]$
The natural extension $\underline{E}(X)$ of the assessments $\underline{P}(A)$ and $\underline{P}(B)$ to any gamble $X$ is the highest $\alpha$ such that this inequality holds, over all possible choices of $\lambda$ and $\mu$.

Natural extension: an example 2
Calculate $\underline{E}(A \cup B)$: maximise $\alpha$ subject to the constraints $\lambda \geq 0$, $\mu \geq 0$, and for all $\omega$:
$I_{A \cup B}(\omega) - \alpha \geq \lambda[I_A(\omega) - \underline{P}(A)] + \mu[I_B(\omega) - \underline{P}(B)]$
or equivalently:
$I_{A \cup B}(\omega) \geq \lambda I_A(\omega) + \mu I_B(\omega) + [\alpha - \lambda \underline{P}(A) - \mu \underline{P}(B)]$
If we put $\gamma = \alpha - \lambda \underline{P}(A) - \mu \underline{P}(B)$, this is equivalent to maximising $\gamma + \lambda \underline{P}(A) + \mu \underline{P}(B)$ subject to the inequalities (one for each of the four atoms):
$1 \geq \lambda + \mu + \gamma$, $1 \geq \lambda + \gamma$, $1 \geq \mu + \gamma$, $0 \geq \gamma$, $\lambda \geq 0$, $\mu \geq 0$

Natural extension: an example 3
This is a linear programming problem, and its solution is easily seen to be
$\underline{E}(A \cup B) = \max\{\underline{P}(A), \underline{P}(B)\}$
Similarly, for $X = I_{A \cap B}$ we get another linear programming problem, which yields
$\underline{E}(A \cap B) = \max\{0, \underline{P}(A) + \underline{P}(B) - 1\}$
These are the Fréchet bounds! Natural extension always gives the most conservative values that are still compatible with coherence and whatever additional assumptions have been made.
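
This LP (equivalently, the lower-envelope LP over the four atoms) can be solved numerically to cross-check the Fréchet bounds. A sketch; the assessed values 0.4 and 0.7 are invented for the illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Atom order: (A&B, A&coB, coA&B, coA&coB)
I_A = np.array([1.0, 1.0, 0.0, 0.0])
I_B = np.array([1.0, 0.0, 1.0, 0.0])
pA, pB = 0.4, 0.7                      # hypothetical P_(A), P_(B)

def nat_ext(X):
    # E_(X) = min m.X  subject to  m.I_A >= pA, m.I_B >= pB, m in the simplex
    res = linprog(X, A_ub=-np.vstack([I_A, I_B]), b_ub=[-pA, -pB],
                  A_eq=np.ones((1, 4)), b_eq=[1.0], bounds=[(0, None)] * 4)
    return res.fun

print(nat_ext(np.maximum(I_A, I_B)), max(pA, pB))    # union: 0.7 vs 0.7
print(nat_ext(I_A * I_B), max(0.0, pA + pB - 1.0))   # intersection: 0.1 vs 0.1
```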

Another example: set information
Information: $\omega$ assumes a value in a subset $A$ of $\Omega$. This information is represented by the vacuous lower prevision relative to $A$:
$\underline{P}_A(X) = \inf_{\omega \in A} X(\omega), \quad X \in L(\Omega)$
$P \in M(\underline{P}_A)$ iff $P(A) = 1$. $\underline{P}_A$ is the natural extension of the precise probability assessment $P(A) = 1$, and also of the belief function with probability mass one on $A$. Take any $P$ such that $P(A) = 1$; then $P(X)$ is only determined up to an interval $[\underline{P}_A(X), \overline{P}_A(X)]$, according to de Finetti's fundamental theorem of probability.

Natural extension: sets of previsions
Consider a lower prevision $\underline{P}$ on a set of gambles $K$. If it avoids sure loss then $M(\underline{P}) \neq \emptyset$, and its natural extension is given by the lower envelope of $M(\underline{P})$:
$\underline{E}(X) = \min\{P(X)\colon P \in M(\underline{P})\}, \quad X \in L(\Omega)$
$\underline{P}$ is coherent iff it coincides on its domain $K$ with its natural extension.

Natural extension: desirable gambles
Consider a set $D$ of gambles you have judged desirable. What are the implications of these assessments for the desirability of other gambles? The natural extension $E$ of $D$ is the smallest coherent set of desirable gambles that includes $D$: the smallest extension of $D$ to a convex cone of gambles that contains all positive gambles but not the zero gamble.

Natural extension: special cases
Natural extension is a very powerful reasoning method. In special cases it reduces to:
- logical deduction
- belief functions via random sets
- the fundamental theorem of probability/prevision
- Lebesgue integration of a probability measure
- Choquet integration of 2-monotone lower probabilities
- Bayes' rule for probability measures
- Bayesian updating of lower/upper probabilities
- robust Bayesian analysis
- deriving a first-order model from a higher-order model

Three pillars
1. a behavioural definition of lower/upper previsions that can be made operational
2. rationality criteria: avoiding sure loss and coherence
3. natural extension, to make the theory useful

Gambles and events 1
How to represent: event $A$ is at least $n$ times as probable as event $B$?
- Sets of precise previsions $M$: $P \in M \Rightarrow P(A) \geq nP(B) \Leftrightarrow P(I_A - nI_B) \geq 0$
- Lower previsions: $\underline{P}(I_A - nI_B) \geq 0$
- Sets of desirable gambles: $I_A - nI_B + \epsilon \in D$ for all $\epsilon > 0$
$I_A - nI_B$ is a gamble, but generally not an indicator! The judgement cannot be expressed by lower and upper probabilities alone:
- $\underline{P}(A) \geq n\underline{P}(B)$ and $\overline{P}(A) \geq n\overline{P}(B)$: too weak
- $\underline{P}(A) \geq n\overline{P}(B)$: too strong

Gambles and events 2
Did I come to Lugano by plane, by car or by train? Assessments:
- 'not by plane' is at least as probable as 'by plane'
- 'by plane' is at least as probable as 'by train'
- 'by train' is at least as probable as 'by car'
This yields the convex set $M$ of probability mass functions $m$ on $\{p, t, c\}$ such that
$m(p) \leq \tfrac{1}{2}, \quad m(p) \geq m(t), \quad m(t) \geq m(c)$
$M$ is a convex set with extreme points
$(\tfrac{1}{2}, \tfrac{1}{2}, 0)$, $(\tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{4})$ and $(\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3})$

Gambles and events 3
The natural extension $\underline{E}$ is the lower envelope of this set:
$\underline{E}(X) = \min_{m \in M} [m(p)X(p) + m(t)X(t) + m(c)X(c)]$
The lower and upper probabilities it induces are completely specified by
$\underline{E}(\{p\}) = \tfrac{1}{3}$, $\overline{E}(\{p\}) = \tfrac{1}{2}$; $\underline{E}(\{t\}) = \tfrac{1}{4}$, $\overline{E}(\{t\}) = \tfrac{1}{2}$; $\underline{E}(\{c\}) = 0$, $\overline{E}(\{c\}) = \tfrac{1}{3}$

Gambles and events 4
The corresponding set $M'$ of mass functions compatible with these lower and upper probabilities alone is a convex set with extreme points
$(\tfrac{1}{2}, \tfrac{1}{2}, 0)$, $(\tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{4})$, $(\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3})$, $(\tfrac{5}{12}, \tfrac{1}{4}, \tfrac{1}{3})$ and $(\tfrac{1}{3}, \tfrac{1}{2}, \tfrac{1}{6})$
$M$ is more informative than $M'$: $M \subset M'$. With $M$ we can infer that $\underline{E}(I_{\{p\}} - I_{\{t\}}) = 0$: 'by plane' is at least as probable as 'by train'. With $M'$ this inference cannot be made: we lose information by restricting ourselves to lower and upper probabilities.
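
Because the lower envelope of a credal set with finitely many extreme points is attained at one of those points, all the numbers on the last three slides can be checked in a few lines. A sketch (mass functions ordered as $(m(p), m(t), m(c))$; not code from the talk):

```python
import numpy as np

M  = np.array([[1/2, 1/2, 0], [1/2, 1/4, 1/4], [1/3, 1/3, 1/3]])
Mp = np.array([[1/2, 1/2, 0], [1/2, 1/4, 1/4], [1/3, 1/3, 1/3],
               [5/12, 1/4, 1/3], [1/3, 1/2, 1/6]])        # M'

def lower(ext_pts, X):  # E_(X): minimum expectation over the extreme points
    return (ext_pts @ X).min()

def upper(ext_pts, X):
    return (ext_pts @ X).max()

for i, name in enumerate("ptc"):
    e = np.eye(3)[i]    # indicator gamble of the singleton
    print(name, lower(M, e), upper(M, e))
# p: 1/3, 1/2   t: 1/4, 1/2   c: 0, 1/3  -- the table on the previous slide

X = np.array([1.0, -1.0, 0.0])  # the gamble I_{p} - I_{t}
print(lower(M, X))   #  0.0   'by plane' at least as probable as 'by train'
print(lower(Mp, X))  # -1/6   the inference is indeed lost with M'
```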

Gambles and events 5
event $A$ — gamble $I_A$; lower probability $\underline{P}(A)$ — lower prevision $\underline{P}(I_A)$.
In precise probability theory, events are as expressive as gambles. In imprecise probability theory, events are less expressive than gambles.

And by the way...
There is a natural embedding of classical propositional logic into imprecise probability theory:
set of propositions — lower probability
logically consistent — avoids sure loss
deductively closed — coherent
deductive closure — natural extension
maximal deductively closed — precise probability
No such embedding exists into precise probability theory.

Part III: Decision making

Decision making 1
Consider an action $a$ whose outcome (reward) depends on the actual value of $\omega$ (the state of the world). With such an action we can associate a reward function
$X_a\colon \Omega \to \mathbb{R}\colon \omega \mapsto X_a(\omega)$
and we identify an action $a$ with its reward function $X_a$.
You strictly prefer $a$ over $b$: $a > b \Leftrightarrow \underline{P}(X_a - X_b) > 0$
You almost-prefer $a$ over $b$: $a \geq b \Leftrightarrow \underline{P}(X_a - X_b) \geq 0$

Decision making 2
You are indifferent between $a$ and $b$:
$a \approx b \Leftrightarrow a \geq b$ and $b \geq a$
Actions $a$ and $b$ are incomparable when neither $a \geq b$ nor $b \geq a$, that is, when $\underline{P}(X_a - X_b) < 0$ and $\underline{P}(X_b - X_a) < 0$. In that case there is not enough information in the model to choose between $a$ and $b$: you are undecided! Imprecise probability models allow for indecision. In fact, modelling and allowing for indecision is one of the motivations for introducing imprecise probabilities.

Decision making: maximal actions
Consider a set of actions $A$ and the reward functions $K = \{X_a\colon a \in A\}$. Because certain actions may be incomparable, the actions cannot be linearly ordered, only partially!

Ordering of actions
[Diagram: a partial ordering of the actions, omitted in this transcription.]

Decision making: maximal actions
The maximal actions $a$ are the undominated ones:
$(\forall b \in A)(\text{not } b > a)$, or equivalently, $(\forall b \in A)(\overline{P}(X_a - X_b) \geq 0)$
Two maximal actions are either indifferent or incomparable!

Decision making: the precise case
$a > b \Leftrightarrow P(X_a - X_b) > 0 \Leftrightarrow P(X_a) > P(X_b)$
$a \geq b \Leftrightarrow P(X_a - X_b) \geq 0 \Leftrightarrow P(X_a) \geq P(X_b)$
$a \approx b \Leftrightarrow P(X_a) = P(X_b)$
and never $a \parallel b$! There is no indecision in precise probability models: whatever the available information, they always allow you a best choice between two available actions. Actions can always be ordered linearly, and maximal actions are unique (up to indifference): they have the highest expected utility.

Decision making: sets of previsions
$a > b \Leftrightarrow (\forall P \in M(\underline{P}))(P(X_a) > P(X_b))$
$a \geq b \Leftrightarrow (\forall P \in M(\underline{P}))(P(X_a) \geq P(X_b))$
$a \approx b \Leftrightarrow (\forall P \in M(\underline{P}))(P(X_a) = P(X_b))$
$a \parallel b \Leftrightarrow (\exists P \in M(\underline{P}))(P(X_a) < P(X_b))$ and $(\exists Q \in M(\underline{P}))(Q(X_a) > Q(X_b))$
If $K$ is convex, then $a$ is maximal if and only if there is some $P \in M(\underline{P})$ such that $(\forall b \in A)(P(X_a) \geq P(X_b))$.
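
A sketch of the maximality computation for finitely many actions over a finitely generated credal set, reusing the Lugano credal set; the reward functions are invented for the illustration. An action $a$ is maximal iff $\underline{P}(X_b - X_a) \leq 0$ for every $b$, and each lower prevision is a minimum over the extreme points:

```python
import numpy as np

# Extreme points of M(P_) from the Lugano example, order (p, t, c)
M = np.array([[1/2, 1/2, 0], [1/2, 1/4, 1/4], [1/3, 1/3, 1/3]])

actions = {"a": np.array([1.0, 0.0, 0.0]),       # pays 1 iff plane
           "b": np.array([0.0, 1.0, 0.0]),       # pays 1 iff train
           "c": np.array([0.4, 0.4, 0.4]),       # constant reward 0.4
           "d": np.array([0.95, -0.05, -0.05])}  # = X_a - 0.05

def lp(X):                                       # P_(X) over the credal set
    return (M @ X).min()

maximal = [a for a, Xa in actions.items()
           if all(lp(Xb - Xa) <= 0 for Xb in actions.values())]
print(maximal)   # ['a', 'b', 'c']: three maximal actions, partially ordered;
                 # d is strictly dominated by a and drops out
```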

Part IV: Conditioning

Generalised Bayes Rule
Let $\underline{P}$ be defined on a large enough domain, and let $B \subseteq \Omega$. If $\underline{P}(B) > 0$, then coherence implies that $\underline{P}(X \mid B)$ is the unique solution of the following equation in $\mu$:
$\underline{P}(I_B[X - \mu]) = 0$ (Generalised Bayes Rule)
If $\underline{P} = P$ is precise, this reduces to
$P(X \mid B) = \mu = \dfrac{P(X I_B)}{P(B)}$ (Bayes Rule)
Observe that also
$\underline{P}(X \mid B) = \inf\{P(X \mid B)\colon P \in M(\underline{P})\}$
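
A sketch cross-checking the two characterisations on the Lugano credal set, with $B = \{p, t\}$ ('not by car', so $\underline{P}(B) = 2/3 > 0$) and an invented gamble $X$; the envelope of the conditional previsions and the root of the GBR equation should, and do, coincide:

```python
import numpy as np
from scipy.optimize import brentq

M   = np.array([[1/2, 1/2, 0], [1/2, 1/4, 1/4], [1/3, 1/3, 1/3]])  # (p, t, c)
I_B = np.array([1.0, 1.0, 0.0])        # B = {p, t}
X   = np.array([3.0, 5.0, -2.0])       # hypothetical gamble

# Envelope form: P_(X|B) = inf { P(X I_B) / P(B) : P in M(P_) }
cond = ((M @ (X * I_B)) / (M @ I_B)).min()

# GBR form: the unique root in mu of P_(I_B [X - mu]) = 0; the left-hand
# side is decreasing in mu, so a bracketing root-finder applies.
gbr = lambda mu: (M @ (I_B * (X - mu))).min()
mu = brentq(gbr, X.min(), X.max())

print(cond, mu)   # both 3.666...: the two characterisations agree
```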

Regular extension
If $\underline{P}(B) = 0$ but $\overline{P}(B) > 0$, then one often considers the so-called regular extension $\underline{R}(X \mid B)$: the greatest $\mu$ such that $\underline{P}(I_B[X - \mu]) \geq 0$. Observe that also
$\underline{R}(X \mid B) = \inf\{P(X \mid B)\colon P \in M(\underline{P}) \text{ and } P(B) > 0\}$
Regular extension is the most conservative coherent extension that satisfies an additional regularity condition.
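
A small invented contrast using the vacuous model on a three-element space, where $\underline{P}(B) = 0 < \overline{P}(B)$, so the GBR no longer determines a unique value while regular extension does (a sketch, not from the talk):

```python
import numpy as np

# Vacuous model: the extreme points are the degenerate mass functions,
# so P_(B) = 0 and P^(B) = 1 for the event B = {w1, w2}.
M   = np.eye(3)
I_B = np.array([1.0, 1.0, 0.0])
X   = np.array([2.0, 7.0, -4.0])

# Regular extension = envelope over the extreme points giving B positive
# probability (here this drops only the mass function concentrated on w3):
with_B = M[M @ I_B > 0]
R = ((with_B @ (X * I_B)) / (with_B @ I_B)).min()
print(R)   # 2.0 = min(X(w1), X(w2)): vacuous relative to B, as expected
```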

Questions?