Imprecise Probabilities in Stochastic Processes and Graphical Models: New Developments


1 Imprecise Probabilities in Stochastic Processes and Graphical Models: New Developments. Gert de Cooman, Ghent University, SYSTeMS. NLMUA2011, 7th September 2011

2 IMPRECISE PROBABILITY MODELS

3 Imprecise probability models Sets of probability models An imprecise probability model is a set M of precise probability models P. More precisely: M is a closed convex set of probabilities.

4 Imprecise probability models Sets of probability models An imprecise probability model is a set M of precise probability models P. More precisely: M is a closed convex set of probabilities. Conditioning To condition M on an event B, we condition each precise model in M: M|B := {P(·|B): P ∈ M}

5 Imprecise probability models Lower and upper previsions Consider, for a function f: P(f) := min{P(f): P ∈ M} and P̄(f) := max{P(f): P ∈ M}. The nonlinear functionals P (lower prevision) and P̄ (upper prevision) determine M completely.

6 Imprecise probability models Lower and upper previsions Consider, for a function f: P(f) := min{P(f): P ∈ M} and P̄(f) := max{P(f): P ∈ M}. The nonlinear functionals P and P̄ determine M completely. Conditioning and the Generalised Bayes Rule When P(B) > 0, P(f|B) is the unique solution of the equation in x: P(I_B[f − x]) = 0.
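As a small illustration (not part of the original slides), the Generalised Bayes Rule can be solved numerically when M is described by the finitely many vertices of its credal set; all names and numbers below are hypothetical.

```python
# A minimal sketch, assuming M is the convex hull of a finite list of
# probability mass functions (its vertices) on a finite space Omega.
def lower_prevision(f, vertices):
    """P(f) = min_{P in M} P(f); the minimum over a convex set is attained at a vertex."""
    return min(sum(p[w] * f[w] for w in f) for p in vertices)

def conditional_lower_prevision(f, B, vertices):
    """Generalised Bayes Rule: P(f|B) is the unique x with P(I_B [f - x]) = 0,
    provided the lower probability P(B) > 0. Solve by bisection, since
    x -> P(I_B [f - x]) is non-increasing."""
    indicator = {w: 1.0 if w in B else 0.0 for w in f}
    assert lower_prevision(indicator, vertices) > 0, "GBR needs P(B) > 0"
    lo, hi = min(f.values()), max(f.values())
    for _ in range(100):
        mid = (lo + hi) / 2
        g = lower_prevision({w: indicator[w] * (f[w] - mid) for w in f}, vertices)
        lo, hi = (mid, hi) if g > 0 else (lo, mid)
    return (lo + hi) / 2

# Example: Omega = {a, b, c}; M = convex hull of two mass functions
verts = [{'a': 0.5, 'b': 0.3, 'c': 0.2}, {'a': 0.2, 'b': 0.4, 'c': 0.4}]
f = {'a': 1.0, 'b': 3.0, 'c': 0.0}
x = conditional_lower_prevision(f, {'a', 'b'}, verts)
# GBR agrees with conditioning each vertex and taking the lower envelope:
direct = min((p['a'] * f['a'] + p['b'] * f['b']) / (p['a'] + p['b']) for p in verts)
```

The agreement of `x` and `direct` reflects the element-wise conditioning of M described on the earlier slide.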

7 @ARTICLE{walley2000, author = {Walley, Peter}, title = {Towards a unified theory of imprecise probability}, journal = {International Journal of Approximate Reasoning}, year = 2000, volume = 24, pages = { } }

8 Imprecise probability models Accepting gambles Consider an exhaustive set Ω of mutually exclusive alternatives ω, exactly one of which obtains. Subject is uncertain about which alternative obtains. A gamble f: Ω → R is interpreted as an uncertain reward: if the alternative that obtains is ω, then the reward for Subject is f(ω). Let G(Ω) be the set of all gambles on Ω.

9 Imprecise probability models Accepting gambles Subject accepts a gamble f if he accepts to engage in the following transaction, where 1. we determine which alternative ω obtains; 2. Subject receives f(ω). We try to model Subject's uncertainty by looking at which gambles in G(Ω) he accepts.

10 Imprecise probability models Coherent sets of desirable gambles Subject specifies a set D of gambles he accepts, his set of desirable gambles. D is called coherent if it satisfies the following rationality requirements: D1. if f ≤ 0 then f ∉ D [avoiding partial loss]; D2. if f > 0 then f ∈ D [accepting partial gain]; D3. if f1 ∈ D and f2 ∈ D then f1 + f2 ∈ D [combination]; D4. if f ∈ D then λf ∈ D for all positive real numbers λ [scaling]. Here f > 0 means f ≥ 0 and not f = 0. Walley has also argued that sets of desirable gambles should satisfy an additional axiom: D5. D is B-conglomerable for any partition B of Ω: if I_B f ∈ D for all B ∈ B, then also f ∈ D [full conglomerability].

11 Imprecise probability models Natural extension as a form of inference Since D1–D4 are preserved under arbitrary non-empty intersections: Theorem Let A be any set of gambles. Then there is a coherent set of desirable gambles that includes A if and only if f ≰ 0 for all f ∈ posi(A). In that case, the natural extension E(A) of A is the smallest such coherent set, and is given by: E(A) := posi(A ∪ G⁺(Ω)).

12 Imprecise probability models Lower and upper previsions Given Subject's coherent set D, we can define his upper and lower previsions: P̄(f) := inf{α: α − f ∈ D} and P(f) := sup{α: f − α ∈ D}, so P̄(f) = −P(−f). P(f) is the supremum price α for which Subject will buy the gamble f, i.e., accept the gamble f − α. The lower probability P(A) := P(I_A) is Subject's supremum rate for betting on the event A.

13 Imprecise probability models Conditional lower and upper previsions We can also define Subject's conditional lower and upper previsions: for any gamble f and any non-empty subset B of Ω, with indicator I_B: P̄(f|B) := inf{α: I_B(α − f) ∈ D} and P(f|B) := sup{α: I_B(f − α) ∈ D}, so P̄(f|B) = −P(−f|B) and P(f) = P(f|Ω). P(f|B) is the supremum price α for which Subject will buy the gamble f, i.e., accept the gamble f − α, contingent on the occurrence of B. For any partition B of Ω, define the gamble P(f|B) by P(f|B)(ω) := P(f|B) for the unique B ∈ B with ω ∈ B.

14 Imprecise probability models Coherence of conditional lower and upper previsions Suppose you have a number of functionals P(·|B_1), ..., P(·|B_n), where the B_k are partitions of Ω. These are called coherent if there is some coherent set of desirable gambles D that is B_k-conglomerable for all k, such that P(f|B_k) = sup{α ∈ R: I_{B_k}(f − α) ∈ D} for all B_k ∈ B_k, k = 1, ..., n.

15 Imprecise probability models Properties of conditional lower and upper previsions Theorem ([10]) Consider a coherent set of desirable gambles D, let B be any non-empty subset of Ω, and let f, f1 and f2 be gambles on Ω. Then: 1. inf_{ω∈B} f(ω) ≤ P(f|B) ≤ P̄(f|B) ≤ sup_{ω∈B} f(ω) [positivity]; 2. P(f1 + f2|B) ≥ P(f1|B) + P(f2|B) [super-additivity]; 3. P(λf|B) = λP(f|B) for all real λ ≥ 0 [non-negative homogeneity]; 4. if B is a partition of Ω that refines the partition {B, B^c} and D is B-conglomerable, then P(f|B) ≥ P(P(f|B)|B) [conglomerative property].
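These coherence properties are easy to verify numerically for a lower envelope over a finite set of mass functions; the following sketch (with hypothetical numbers and B = Ω) is an illustration, not part of the talk.

```python
# Lower envelope P(f) = min_p E_p(f) over the vertices of a credal set.
def lower(f, vertices):
    return min(sum(p[w] * f[w] for w in f) for p in vertices)

verts = [{'a': 0.5, 'b': 0.3, 'c': 0.2}, {'a': 0.2, 'b': 0.4, 'c': 0.4}]
f1 = {'a': 1.0, 'b': 0.0, 'c': 2.0}
f2 = {'a': 2.0, 'b': 0.0, 'c': 0.0}
fsum = {w: f1[w] + f2[w] for w in f1}

# Property 1 (positivity/bounds): inf f <= P(f) <= sup f
bounds_ok = min(fsum.values()) <= lower(fsum, verts) <= max(fsum.values())
# Property 2 (super-additivity): P(f1 + f2) >= P(f1) + P(f2)
superadd_ok = lower(fsum, verts) >= lower(f1, verts) + lower(f2, verts)
# Property 3 (non-negative homogeneity): P(2 f1) = 2 P(f1)
homog_ok = abs(lower({w: 2 * f1[w] for w in f1}, verts) - 2 * lower(f1, verts)) < 1e-9
```

Super-additivity is strict here because the two minimizing vertices differ, which is exactly how imprecision makes the lower prevision nonlinear.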

16 Imprecise probability models Conditional previsions If P(f|B) = P̄(f|B) =: P(f|B), then P(f|B) is Subject's fair price or prevision for f, conditional on B. It is the fixed amount of utility that I am willing to exchange the uncertain reward f for, conditional on the occurrence of B. Related to de Finetti's fair prices [6], and to Huygens's [7] "dit is my so veel weerdt als" ("this is worth as much to me as"). Corollary 1. inf_{ω∈B} f(ω) ≤ P(f|B) ≤ sup_{ω∈B} f(ω); 2. P(λf + µg|B) = λP(f|B) + µP(g|B); 3. if B is a partition of Ω that refines the partition {B, B^c} and if there is B-conglomerability, then P(f|B) = P(P(f|B)|B).

17 IMPRECISE STOCHASTIC PROCESSES

18 @ARTICLE{cooman2008, author = {{d}e Cooman, Gert and Hermans, Filip}, title = {Imprecise probability trees: Bridging two theories of imprecise probability}, journal = {Artificial Intelligence}, year = 2008, volume = 172, pages = { }, doi = { /j.artint } }

19 Reality's event tree and move spaces Reality can make a number of moves, where the possible next moves may depend only on the previous moves he has made. We can represent Reality's moves by an event tree. In each non-terminal situation t, Reality has a set of possible next moves W_t := {w: tw ∈ Ω♦}, called Reality's move space in situation t, where Ω♦ is the set of all situations. W_t may be infinite, but has at least two elements. We assume the event tree has a finite horizon.

20 An event tree and its situations Situations are nodes in the event tree. [figure: an event tree with the initial situation, a non-terminal situation t, and a terminal situation ω]

23 An event tree and its situations The sample space Ω is the set of all terminal situations

24 An event tree and its situations The partial order ⊑ on the set Ω♦ of all situations: s ⊑ t means that s precedes t. [figure]

25 An event tree and its situations The partial order on the set Ω♦ of all situations: s ⊏ t means that s strictly precedes t. [figure]

26 An event tree and its situations An event A is a subset of the sample space Ω. For a situation s, E(s) = {ω ∈ Ω: s ⊑ ω} is the event that Reality passes through s. [figure]

27 An event tree and its situations Cuts of the initial situation. [figure: a cut U = {u1, u2, u3} of the initial situation]

28 Reality's event tree and move spaces Reality can make a number of moves, where the possible next moves may depend only on the previous moves he has made. We can represent Reality's moves by an event tree. In each non-terminal situation t, Reality has a set of possible next moves W_t := {w: tw ∈ Ω♦}, called Reality's move space in situation t, where Ω♦ is the set of all situations. W_t may be infinite, but has at least two elements. We assume the event tree has a finite horizon.

29 Reality's move space W_t in a non-terminal situation t [figure: W_s = {w1, w2} and W_t = {w3, w4, w5}; from s, move w1 leads to sw1 = t and move w2 to sw2; from t, moves w3, w4, w5 lead to tw3, tw4, tw5]

30 Coherent immediate prediction Immediate prediction models In each non-terminal situation t, Forecaster has beliefs about which move w_t ∈ W_t Reality will make immediately afterwards. Forecaster specifies those local predictive beliefs in the form of a coherent set of desirable gambles D_t ⊆ G(W_t). This leads to an immediate prediction model D_t, t ∈ Ω♦ \ Ω. Coherence here means D1–D4.

31 Coherent immediate prediction Immediate prediction models [figure: local models attached to the situations of the event tree, e.g. R_s ⊆ G({w1, w2}) in situation s and R_t ⊆ G({w3, w4, w5}) in situation t]

32 Coherent immediate prediction From a local to a global model How to combine the local pieces of information into a global model, i.e., which gambles f on the entire sample space Ω does Forecaster accept? For each non-terminal situation t and each h_t ∈ D_t, Forecaster accepts the gamble I_E(t) h_t on Ω, where I_E(t) h_t(ω) := 0 if t does not precede ω, and I_E(t) h_t(ω) := h_t(w) if tw ⊑ ω with w ∈ W_t. I_E(t) h_t represents the gamble on Ω that is called off unless Reality ends up in situation t, and then depends only on Reality's move immediately after t, and gives the same value h_t(w) to all paths ω that go through tw.

33 Coherent immediate prediction From a local to a global model So Forecaster accepts all gambles in the set D := {I_E(t) h_t: h_t ∈ D_t, t ∈ Ω♦ \ Ω}. Find the natural extension E(D) of D: the smallest subset of G(Ω) that includes D and is coherent, i.e., satisfies D1–D4 and cut conglomerability.

34 Coherent immediate prediction Cut conglomerability We want predictive models, so we will condition on the E(t), i.e., on the event that we get to situation t. The E(t) are the only events that we can legitimately condition on. The events E(t) form a partition B_U of the sample space Ω iff the situations t belong to a cut U. Definition A set of desirable gambles D on Ω is cut-conglomerable (D5′) if it is B_U-conglomerable for all cuts U: (∀u ∈ U)(I_E(u) f ∈ D) ⇒ f ∈ D.

35 Coherent immediate prediction Selections and gamble processes A t-selection S is a process, defined on all non-terminal situations s that follow t, and such that S(s) ∈ D_s. It selects, in advance, a desirable gamble S(s) from the available desirable gambles in each non-terminal s ⊒ t. With a t-selection S, we can associate a real-valued t-gamble process I^S, which is a t-process such that I^S(t) = 0 and, for all s ⊒ t and w ∈ W_s, I^S(sw) = I^S(s) + S(s)(w).

36 Coherent immediate prediction Selections and gamble processes [figure: W_t = {w1, w2} with selected gamble S(t)(w1) = +3 and S(t)(w2) = −4; starting from I^S(t) = 2, the process gives I^S(tw1) = 2 + 3 = 5 and I^S(tw2) = 2 − 4 = −2]
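The little computation on this slide can be mirrored in code; this is a sketch under the assumption that the tree and the selection are given as plain dictionaries (all names hypothetical).

```python
# The gamble process I^S accumulates the selected desirable gambles along
# each path: I^S(sw) = I^S(s) + S(s)(w), starting from I^S(t).
def gamble_process(tree, selection, node, value):
    """Return {situation: I^S(situation)} for all situations from `node` down.
    `tree` maps a situation to its children via moves; `selection` maps a
    non-terminal situation to the chosen gamble on its move space."""
    out = {node: value}
    for move, child in tree.get(node, {}).items():
        out.update(gamble_process(tree, selection, child,
                                  value + selection[node][move]))
    return out

# The slide's example: W_t = {w1, w2}, S(t)(w1) = +3, S(t)(w2) = -4, I^S(t) = 2
tree = {'t': {'w1': 'tw1', 'w2': 'tw2'}}
sel = {'t': {'w1': 3.0, 'w2': -4.0}}
proc = gamble_process(tree, sel, 't', 2.0)
# proc == {'t': 2.0, 'tw1': 5.0, 'tw2': -2.0}
```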

37 Coherent immediate prediction Marginal Extension Theorem Theorem (Marginal Extension Theorem) There is a smallest set of gambles that satisfies D1–D4 and D5′ and includes D. This natural extension of D is given by E(D) := {g ∈ G(Ω): g ≥ I^S_Ω for some □-selection S}, where □ is the initial situation and I^S_Ω is the restriction of I^S to Ω. Moreover, for any non-terminal situation t and any t-gamble g, it holds that I_E(t) g ∈ E(D) if and only if there is some t-selection S_t such that g ≥ I^{S_t}_Ω.

38 Coherent immediate prediction Predictive lower and upper previsions Use the coherent set of desirable gambles E(D) to define special lower (and upper) previsions P(·|t) := P(·|E(t)) conditional on an event E(t). For any gamble f on Ω and for any non-terminal situation t, P(f|t) := sup{α: I_E(t)(f − α) ∈ E(D)} = sup{α: f − α ≥ I^S_Ω for some t-selection S}. We call such conditional lower previsions predictive lower previsions for Forecaster. For a cut U of t, define the U-measurable t-gamble P(f|U) by P(f|U)(ω) := P(f|u) for u ∈ U with u ⊑ ω.

39 Properties of predictive previsions Concatenation Theorem Theorem (Concatenation Theorem) Consider any two cuts U and V of a situation t such that U precedes V. Then for all t-gambles f on Ω: 1. P(f|t) = P(P(f|U)|t); 2. P(f|U) = P(P(f|V)|U). We can calculate P(f|t) by backwards recursion, starting with P(f|ω) = f(ω), and using only the local models P_s(g) = sup{α: g − α ∈ D_s}, where s ⊒ t is non-terminal and g is a gamble on W_s.

40 Properties of predictive previsions Concatenation Theorem [figure: a tree with cut U = {u1, u2} of t; P(f|t) = P(P(f|U)|t) is computed from P(f|u1) and P(f|u2), which are in turn computed from the terminal values P(f|ω_i) = f(ω_i), i = 1, ..., 5]
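The backwards recursion behind the Concatenation Theorem is easy to sketch for a finite tree whose local credal sets are given by their vertices; this is a hypothetical toy model, not code from the talk.

```python
# Local lower prevision P_s(g) as the lower envelope over the vertices of
# the local credal set in situation s.
def local_lower(g, vertices):
    return min(sum(p[w] * g[w] for w in g) for p in vertices)

def predictive_lower(f, tree, local_verts, node):
    """Backwards recursion: P(f|omega) = f(omega) in terminal situations;
    elsewhere P(f|s) = P_s applied to the gamble w -> P(f|sw) on W_s."""
    if node not in tree:  # terminal situation
        return f[node]
    g = {w: predictive_lower(f, tree, local_verts, child)
         for w, child in tree[node].items()}
    return local_lower(g, local_verts[node])

# Two coin flips with the same credal set in every non-terminal situation
verts = [{'a': 0.6, 'b': 0.4}, {'a': 0.4, 'b': 0.6}]
tree = {'': {'a': 'a', 'b': 'b'},
        'a': {'a': 'aa', 'b': 'ab'}, 'b': {'a': 'ba', 'b': 'bb'}}
local_verts = {s: verts for s in tree}
f = {'aa': 1.0, 'ab': 0.0, 'ba': 0.0, 'bb': 1.0}  # indicator: both flips equal
result = predictive_lower(f, tree, local_verts, '')
```

Each situation is visited once, so the cost is linear in the size of the tree, in line with the slide's remark that only the local models are needed.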

41 Properties of predictive previsions Envelope theorems Consider in each non-terminal situation s a compatible precise model P_s on G(W_s): P_s ∈ M_s, i.e., P_s pointwise dominates the local lower prevision on G(W_s). This leads to a collection of compatible probability trees in the sense of Huygens (and Shafer). Use the Concatenation Theorem to find the corresponding precise predictive previsions P(f|t) for each compatible probability tree. Theorem (Lower Envelope Theorem) For all situations t and t-gambles f, the predictive lower prevision P(f|t) is the infimum (minimum) of the P(f|t) over all compatible probability trees.

42 Considering unbounded time What if Reality's event tree no longer has a finite time horizon: how to calculate the lower prices/previsions P(f|t)? The Shafer–Vovk–Ville approach: sup{α: f − α ≥ limsup I^S for some t-selection S}. Open question(s): What does natural extension yield in this case, must coherence be strengthened to yield the Shafer–Vovk–Ville approach, and if so, how?

43 @ARTICLE{cooman2009, author = {{d}e Cooman, Gert and Hermans, Filip and Quaeghebeur, Erik}, title = {Imprecise {M}arkov chains and their limit behaviour}, journal = {Probability in the Engineering and Informational Sciences}, year = 2009, volume = 23, pages = { }, doi = { /S } }

44 Imprecise Markov chains Discrete-time and finite state uncertain process Consider an uncertain process with variables X_1, X_2, ..., X_n, ... Each X_k assumes values in a finite set of states X. This leads to a standard event tree with situations s = (x_1, x_2, ..., x_n), x_k ∈ X, n ≥ 0. In each situation s there is a local imprecise belief model M_s: a closed convex set of probability mass functions p on X. Associated local lower prevision P_s: P_s(f) := min{E_p(f): p ∈ M_s}, where E_p(f) := Σ_{x∈X} f(x)p(x).

45 Imprecise Markov chains Example of a standard event tree [figure: event tree for X_1, X_2, X_3 with states X = {a, b}: situations (a), (b), then (a,a), (a,b), (b,a), (b,b), then the eight situations (x_1, x_2, x_3)]

46 Imprecise Markov chains Precise Markov chain The uncertain process is a (stationary) precise Markov chain when all M_s are singletons (precise): M_□ = {m_1}, and the Markov Condition holds: M_(x_1,...,x_n) = {q(·|x_n)}.

47 Imprecise Markov chains Probability tree for a precise Markov chain [figure: the event tree of the previous slide with m_1 attached to the initial situation and the transition mass functions q(·|a), q(·|b) attached to the other situations]

48 Imprecise Markov chains Definition of an imprecise Markov chain The uncertain process is a (stationary) imprecise Markov chain when the Markov Condition is satisfied: M_(x_1,...,x_n) = Q(·|x_n). An imprecise Markov chain can be seen as an infinity of probability trees.

49 Imprecise Markov chains Probability tree for an imprecise Markov chain [figure: the same event tree with the local credal sets M_1, Q(·|a) and Q(·|b) attached to the situations]

50 Imprecise Markov chains Lower and upper transition operators T: G(X) → G(X) and T̄: G(X) → G(X), where for any gamble f on X: Tf(x) := min{E_p(f): p ∈ Q(·|x)} and T̄f(x) := max{E_p(f): p ∈ Q(·|x)}. Then the Concatenation Formula yields: P_n(f) = P_1(T^(n−1) f) and P̄_n(f) = P̄_1(T̄^(n−1) f). Complexity is linear in the number of time steps!
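A sketch with hypothetical numbers: represent each Q(·|x) by the vertices of its credal set and apply T pointwise; iterating T n−1 times is exactly what makes the computation linear in the number of time steps.

```python
# Lower transition operator: (Tf)(x) = min{E_p(f): p in Q(.|x)}, with each
# Q(.|x) given by a finite list of vertex mass functions.
def T_lower(f, Q):
    return {x: min(sum(p[y] * f[y] for y in f) for p in verts)
            for x, verts in Q.items()}

# Hypothetical two-state imprecise Markov chain
Q = {'a': [{'a': 0.8, 'b': 0.2}, {'a': 0.6, 'b': 0.4}],
     'b': [{'a': 0.3, 'b': 0.7}, {'a': 0.5, 'b': 0.5}]}

# P_n(f) = P_1(T^(n-1) f): three applications of T give T^3 f, ready to be
# fed to any initial lower prevision P_1.
f = {'a': 1.0, 'b': 0.0}  # indicator of state a
for _ in range(3):
    f = T_lower(f, Q)
```

Each application of T costs one minimization per state, so n time steps cost O(n) applications rather than an enumeration of the infinitely many compatible probability trees.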

51 Imprecise Markov chains An example with lower and upper mass functions For a chain with states {a, b, c}: [T I_{a} T I_{b} T I_{c}] is the matrix with rows q(·|a), q(·|b), q(·|c), whose entries q(y|x) are the lower transition mass functions, and [T̄ I_{a} T̄ I_{b} T̄ I_{c}] is the matrix with rows q̄(·|a), q̄(·|b), q̄(·|c) of upper transition mass functions. [figure: the numerical example matrices]

52 Imprecise Markov chains An example with lower and upper mass functions [figure: lower and upper mass functions of the state at times n = 1, 2, ..., 10, n = 22 and n = 1000, showing convergence to a limit]

53 Imprecise Markov chains A Perron–Frobenius Theorem Theorem ([4]) Consider a stationary imprecise Markov chain with finite state set X and an upper transition operator T̄. Suppose that T̄ is regular, meaning that there is some n > 0 such that min T̄^n I_{x} > 0 for all x ∈ X. Then for every initial upper prevision P̄_1, the upper prevision P̄_n = P̄_1 ∘ T̄^(n−1) for the state at time n converges point-wise to the same upper prevision P̄_∞: lim P̄_n(h) = lim P̄_1(T̄^(n−1) h) =: P̄_∞(h) for all h in G(X). Moreover, the corresponding limit upper prevision P̄_∞ is the only T̄-invariant upper prevision on G(X), meaning that P̄_∞ = P̄_∞ ∘ T̄.
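The convergence can be observed numerically: since P̄_n(h) = P̄_1(T̄^(n−1)h) tends to the same value for every P̄_1, the iterates T̄^n h themselves tend to a constant gamble. This toy two-state chain (hypothetical numbers, not from the talk) has strictly positive upper transition probabilities, so T̄ is regular with n = 1.

```python
# Upper transition operator: (Tf)(x) = max{E_p(f): p in Q(.|x)}.
def T_upper(f, Q):
    return {x: max(sum(p[y] * f[y] for y in f) for p in verts)
            for x, verts in Q.items()}

Q = {'a': [{'a': 0.8, 'b': 0.2}, {'a': 0.6, 'b': 0.4}],
     'b': [{'a': 0.3, 'b': 0.7}, {'a': 0.5, 'b': 0.5}]}

# Iterate until T^n h is (numerically) constant; its constant value is the
# limit upper prevision of h, independent of the initial model.
h = {'a': 1.0, 'b': 0.0}  # indicator of state a
while max(h.values()) - min(h.values()) > 1e-12:
    h = T_upper(h, Q)
limit = max(h.values())
```

Here the spread of T̄ⁿh shrinks geometrically, which is the mechanism behind the Perron–Frobenius-type statement on this slide.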

54 CREDAL NETWORKS

55 @ARTICLE{cooman2010, author = {{d}e Cooman, Gert and Hermans, Filip and Antonucci, Alessandro and Zaffalon, Marco}, title = {Epistemic irrelevance in credal nets: the case of imprecise {M}arkov trees}, journal = {International Journal of Approximate Reasoning}, year = 2010, volume = 51, pages = { }, doi = { /j.ijar } }

56 Credal trees [figure: a credal tree on the variables X1, ..., X11]

57 Credal trees Local uncertainty models [figure: a node X_i with mother X_m(i) and children X_j, ..., X_k] The variable X_i may assume a value in the finite set X_i; for each possible value x_m(i) ∈ X_m(i) of the mother variable X_m(i), we have a conditional lower prevision P_i(·|x_m(i)): G(X_i) → R, where P_i(f|x_m(i)) is the lower prevision of f(X_i), given that X_m(i) = x_m(i). The local model P_i(·|X_m(i)) is a conditional lower prevision operator.

58 Credal trees under epistemic irrelevance Definition Interpretation of the graphical structure: conditional on the mother variable, the non-parent non-descendants of each node variable are epistemically irrelevant to it and its descendants. [figure: the credal tree on X1, ..., X11]

59 Credal networks under epistemic irrelevance As an expert system When the credal network is a (Markov) tree, we can find the joint model from the local models recursively, from leaves to root. Exact message passing algorithm: the credal tree is treated as an expert system, with linear complexity in the number of nodes. Python code written by Filip Hermans; testing in cooperation with Marco Zaffalon and Alessandro Antonucci. Current (toy) applications in HMMs: character recognition [3] and air traffic trajectory tracking and identification [1].

60 Example of application HMMs: character recognition for Dante's Divina Commedia [figure: original text ... V I T A ... with hidden states S1, S2, S3, and the OCR outputs O1, O2, O3 for the corresponding characters]

61 Example of application HMMs: character recognition for Dante's Divina Commedia
Accuracy: 93.96% (7275/7743)
Accuracy (if imprecise indeterminate): 64.97% (243/374)
Determinacy: 95.17% (7369/7743)
Set-accuracy: 93.58% (350/374)
Single accuracy: 95.43% (7032/7369)
Indeterminate output size: 2.97 over 21
Table: Precise vs. imprecise HMMs. Test results obtained by twofold cross-validation on the first two chants of Dante's Divina Commedia, with n = 2. Quantification is achieved by an IDM with s = 2 and a modified Perks prior. The single-character output by the precise model is then guaranteed to be included in the set of characters the imprecise HMM identifies.

62 @INPROCEEDINGS{cooman2011, author = {De Bock, Jasper and {d}e Cooman, Gert}, title = {State sequence prediction in imprecise hidden {M}arkov models}, booktitle = {ISIPTA '11: Proceedings of the Seventh International Symposium on Imprecise Probability: Theories and Applications}, year = 2011, editor = {Coolen, Frank P. A. and {d}e Cooman, Gert and Fetz, Thomas and Oberguggenberger, Michael}, address = {Innsbruck, Austria}, publisher = {SIPTA} }

63 Imprecise Hidden Markov Model (iHMM) [figure: marginal model Q_1; transition models Q_k(·|X_{k−1}); states X_1 → X_2 → ... → X_k → ... → X_n; outputs O_1, O_2, ..., O_k, ..., O_n; emission models S_k(·|X_k)]

64 Imprecise Hidden Markov Model (iHMM) Interpretation of the graphical structure: traditional credal networks [figure: the iHMM graph] The non-parent non-descendants of a node are strongly independent of the node and its descendants.

65 Imprecise Hidden Markov Model (iHMM) Interpretation of the graphical structure: recent developments [figure: the iHMM graph] The non-parent non-descendants of a node are epistemically irrelevant to the node and its descendants.

66 The notion of optimality we use: maximality We define a partial order on state sequences: x̂_{1:n} ≻ x_{1:n} ⇔ P_1(I_{x̂_{1:n}} − I_{x_{1:n}} | o_{1:n}) > 0

67 The notion of optimality we use: maximality We define a partial order on state sequences: x̂_{1:n} ≻ x_{1:n} ⇔ P_1(I_{x̂_{1:n}} − I_{x_{1:n}} | o_{1:n}) > 0. A state sequence is optimal if it is undominated, or maximal: x̂_{1:n} ∈ opt(X_{1:n}|o_{1:n}) ⇔ (∀x_{1:n} ∈ X_{1:n}) x_{1:n} ⊁ x̂_{1:n} ⇔ (∀x_{1:n} ∈ X_{1:n}) P_1(I_{x_{1:n}} − I_{x̂_{1:n}} | o_{1:n}) ≤ 0 ⇔ (∀x_{1:n} ∈ X_{1:n}) P_1(I_{o_{1:n}}[I_{x_{1:n}} − I_{x̂_{1:n}}]) ≤ 0.

68 The notion of optimality we use: maximality We define a partial order on state sequences: x̂_{1:n} ≻ x_{1:n} ⇔ P_1(I_{x̂_{1:n}} − I_{x_{1:n}} | o_{1:n}) > 0. A state sequence is optimal if it is undominated, or maximal: x̂_{1:n} ∈ opt(X_{1:n}|o_{1:n}) ⇔ (∀x_{1:n} ∈ X_{1:n}) x_{1:n} ⊁ x̂_{1:n} ⇔ (∀x_{1:n} ∈ X_{1:n}) P_1(I_{x_{1:n}} − I_{x̂_{1:n}} | o_{1:n}) ≤ 0 ⇔ (∀x_{1:n} ∈ X_{1:n}) P_1(I_{o_{1:n}}[I_{x_{1:n}} − I_{x̂_{1:n}}]) ≤ 0. P_1(I_{o_{1:n}}[I_{x_{1:n}} − I_{x̂_{1:n}}]) can be easily determined recursively!

69 The notion of optimality we use: maximality We define a partial order on state sequences: x̂_{1:n} ≻ x_{1:n} ⇔ P_1(I_{x̂_{1:n}} − I_{x_{1:n}} | o_{1:n}) > 0. A state sequence is optimal if it is undominated, or maximal: x̂_{1:n} ∈ opt(X_{1:n}|o_{1:n}) ⇔ (∀x_{1:n} ∈ X_{1:n}) x_{1:n} ⊁ x̂_{1:n} ⇔ (∀x_{1:n} ∈ X_{1:n}) P_1(I_{x_{1:n}} − I_{x̂_{1:n}} | o_{1:n}) ≤ 0 ⇔ (∀x_{1:n} ∈ X_{1:n}) P_1(I_{o_{1:n}}[I_{x_{1:n}} − I_{x̂_{1:n}}]) ≤ 0. P_1(I_{o_{1:n}}[I_{x_{1:n}} − I_{x̂_{1:n}}]) can be easily determined recursively! Aim of our EstiHMM algorithm: determine opt(X_{1:n}|o_{1:n}) exactly and efficiently.
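A sketch of the maximality criterion itself (not of the EstiHMM algorithm): when the relevant joint lower prevision is represented by finitely many vertex mass functions, the undominated sequences can be found by pairwise comparison. All names and numbers below are hypothetical.

```python
# x_hat dominates x iff P(I_{x_hat} - I_x) > 0, i.e. iff the difference of
# probabilities is positive for every vertex of the credal set.
def dominates(x_hat, x, vertices):
    return min(p[x_hat] - p[x] for p in vertices) > 0

def maximal(candidates, vertices):
    """Keep the candidates that no other candidate dominates."""
    return [x for x in candidates
            if not any(dominates(y, x, vertices) for y in candidates if y != x)]

# Three candidate state sequences, credal set with two vertices
verts = [{'s1': 0.5, 's2': 0.3, 's3': 0.2},
         {'s1': 0.2, 's2': 0.5, 's3': 0.3}]
opt = maximal(['s1', 's2', 's3'], verts)
# s3 is dominated by s2 on both vertices; s1 and s2 are incomparable,
# so both are maximal
```

Note how imprecision typically leaves several incomparable maximal sequences rather than a single Viterbi-style winner, which is why the number of maximal elements matters for the complexity results on the next slides.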

70 Complexity of EstiHMM Linear in the length of the chain n, cubic in the number of states s, and linear in the number of maximal elements. Comparable to the Viterbi counterpart for precise HMMs. Important conclusion: the complexity depends crucially on how the number of maximal elements increases with the length of the chain n.

71 Number of maximal elements Output sequence (0,1,1,1,0,0) and imprecision 1%

72 Number of maximal elements Output sequence (0,1,1,1,0,0) and imprecision 5%

73 Number of maximal elements Output sequence (0,1,1,1,0,0) and imprecision 10%

74 Number of maximal elements Output sequence (0,1,1,1,0,0) and imprecision 25%

75 Number of maximal elements Output sequence (0,1,1,1,0,0) and imprecision 40%

76 Literature I
Enrique Miranda. A survey of the theory of coherent lower previsions. International Journal of Approximate Reasoning, 48: ,
Matthias Troffaes. Decision making under uncertainty using imprecise probabilities. International Journal of Approximate Reasoning, 45:17–29, 2007.
Alessandro Antonucci, Alessio Benavoli, Marco Zaffalon, Gert de Cooman, and Filip Hermans. Multiple model tracking by imprecise Markov trees. In Proceedings of the 12th International Conference on Information Fusion (Seattle, WA, USA, July 6–9, 2009), pages ,
Gert de Cooman and Filip Hermans. Imprecise probability trees: Bridging two theories of imprecise probability. Artificial Intelligence, 172(11): , 2008.

77 Literature II
Gert de Cooman, Filip Hermans, Alessandro Antonucci, and Marco Zaffalon. Epistemic irrelevance in credal networks: the case of imprecise Markov trees. International Journal of Approximate Reasoning, 51: ,
Gert de Cooman, Filip Hermans, and Erik Quaeghebeur. Imprecise Markov chains and their limit behaviour. Probability in the Engineering and Informational Sciences, 23(4): , arXiv:
B. de Finetti. Teoria delle Probabilità. Einaudi, Turin, 1970.

78 Literature III
B. de Finetti. Theory of Probability: A Critical Introductory Treatment. John Wiley & Sons, Chichester. English translation of [5], two volumes.
Christiaan Huygens. Van Rekeningh in Spelen van Geluck. Reprinted in Volume XIV of [8].
Christiaan Huygens. Œuvres complètes de Christiaan Huygens. Martinus Nijhoff, Den Haag. Twenty-two volumes, available in digitised form from the Bibliothèque nationale de France.
G. Shafer and V. Vovk. Probability and Finance: It's Only a Game! Wiley, New York, 2001.

79 Literature IV
Peter Walley. Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London,
Thomas Augustin, Frank Coolen, Gert de Cooman and Matthias Troffaes (eds.). Introduction to Imprecise Probabilities. Wiley, Chichester, mid
Also look at


More information

Introduction to Imprecise Probability and Imprecise Statistical Methods

Introduction to Imprecise Probability and Imprecise Statistical Methods Introduction to Imprecise Probability and Imprecise Statistical Methods Frank Coolen UTOPIAE Training School, Strathclyde University 22 November 2017 (UTOPIAE) Intro to IP 1 / 20 Probability Classical,

More information

arxiv: v3 [math.pr] 4 May 2017

arxiv: v3 [math.pr] 4 May 2017 Computable Randomness is Inherently Imprecise arxiv:703.0093v3 [math.pr] 4 May 207 Gert de Cooman Jasper De Bock Ghent University, IDLab Technologiepark Zwijnaarde 94, 9052 Zwijnaarde, Belgium Abstract

More information

Decision-Theoretic Specification of Credal Networks: A Unified Language for Uncertain Modeling with Sets of Bayesian Networks

Decision-Theoretic Specification of Credal Networks: A Unified Language for Uncertain Modeling with Sets of Bayesian Networks Decision-Theoretic Specification of Credal Networks: A Unified Language for Uncertain Modeling with Sets of Bayesian Networks Alessandro Antonucci Marco Zaffalon IDSIA, Istituto Dalle Molle di Studi sull

More information

Independence in Generalized Interval Probability. Yan Wang

Independence in Generalized Interval Probability. Yan Wang Independence in Generalized Interval Probability Yan Wang Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0405; PH (404)894-4714; FAX (404)894-9342; email:

More information

Irrelevant and Independent Natural Extension for Sets of Desirable Gambles

Irrelevant and Independent Natural Extension for Sets of Desirable Gambles Journal of Artificial Intelligence Research 45 (2012) 601-640 Submitted 8/12; published 12/12 Irrelevant and Independent Natural Extension for Sets of Desirable Gambles Gert de Cooman Ghent University,

More information

arxiv: v6 [math.pr] 23 Jan 2015

arxiv: v6 [math.pr] 23 Jan 2015 Accept & Reject Statement-Based Uncertainty Models Erik Quaeghebeur a,b,1,, Gert de Cooman a, Filip Hermans a a SYSTeMS Research Group, Ghent University, Technologiepark 914, 9052 Zwijnaarde, Belgium b

More information

GERT DE COOMAN AND DIRK AEYELS

GERT DE COOMAN AND DIRK AEYELS POSSIBILITY MEASURES, RANDOM SETS AND NATURAL EXTENSION GERT DE COOMAN AND DIRK AEYELS Abstract. We study the relationship between possibility and necessity measures defined on arbitrary spaces, the theory

More information

Computing Minimax Decisions with Incomplete Observations

Computing Minimax Decisions with Incomplete Observations PMLR: Proceedings of Machine Learning Research, vol. 62, 358-369, 207 ISIPTA 7 Computing Minimax Decisions with Incomplete Observations Thijs van Ommen Universiteit van Amsterdam Amsterdam (The Netherlands)

More information

Exploiting Bayesian Network Sensitivity Functions for Inference in Credal Networks

Exploiting Bayesian Network Sensitivity Functions for Inference in Credal Networks 646 ECAI 2016 G.A. Kaminka et al. (Eds.) 2016 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution

More information

Regular finite Markov chains with interval probabilities

Regular finite Markov chains with interval probabilities 5th International Symposium on Imprecise Probability: Theories and Applications, Prague, Czech Republic, 2007 Regular finite Markov chains with interval probabilities Damjan Škulj Faculty of Social Sciences

More information

A BEHAVIOURAL MODEL FOR VAGUE PROBABILITY ASSESSMENTS

A BEHAVIOURAL MODEL FOR VAGUE PROBABILITY ASSESSMENTS A BEHAVIOURAL MODEL FOR VAGUE PROBABILITY ASSESSMENTS GERT DE COOMAN ABSTRACT. I present an hierarchical uncertainty model that is able to represent vague probability assessments, and to make inferences

More information

On Conditional Independence in Evidence Theory

On Conditional Independence in Evidence Theory 6th International Symposium on Imprecise Probability: Theories and Applications, Durham, United Kingdom, 2009 On Conditional Independence in Evidence Theory Jiřina Vejnarová Institute of Information Theory

More information

CS 188: Artificial Intelligence Fall Recap: Inference Example

CS 188: Artificial Intelligence Fall Recap: Inference Example CS 188: Artificial Intelligence Fall 2007 Lecture 19: Decision Diagrams 11/01/2007 Dan Klein UC Berkeley Recap: Inference Example Find P( F=bad) Restrict all factors P() P(F=bad ) P() 0.7 0.3 eather 0.7

More information

Game-Theoretic Probability: Theory and Applications Glenn Shafer

Game-Theoretic Probability: Theory and Applications Glenn Shafer ISIPTA 07 July 17, 2007, Prague Game-Theoretic Probability: Theory and Applications Glenn Shafer Theory To prove a probabilistic prediction, construct a betting strategy that makes you rich if the prediction

More information

Handling Uncertainty in Complex Systems

Handling Uncertainty in Complex Systems University Gabriele d Annunzio Chieti-Pescara - Italy Handling Uncertainty in Complex Systems Dr. Serena Doria Department of Engineering and Geology, University G. d Annunzio The aim of the course is to

More information

A Study of the Pari-Mutuel Model from the Point of View of Imprecise Probabilities

A Study of the Pari-Mutuel Model from the Point of View of Imprecise Probabilities PMLR: Proceedings of Machine Learning Research, vol. 62, 229-240, 2017 ISIPTA 17 A Study of the Pari-Mutuel Model from the Point of View of Imprecise Probabilities Ignacio Montes Enrique Miranda Dep. of

More information

Dominating countably many forecasts.* T. Seidenfeld, M.J.Schervish, and J.B.Kadane. Carnegie Mellon University. May 5, 2011

Dominating countably many forecasts.* T. Seidenfeld, M.J.Schervish, and J.B.Kadane. Carnegie Mellon University. May 5, 2011 Dominating countably many forecasts.* T. Seidenfeld, M.J.Schervish, and J.B.Kadane. Carnegie Mellon University May 5, 2011 Abstract We contrast de Finetti s two criteria for coherence in settings where

More information

BIVARIATE P-BOXES AND MAXITIVE FUNCTIONS. Keywords: Uni- and bivariate p-boxes, maxitive functions, focal sets, comonotonicity,

BIVARIATE P-BOXES AND MAXITIVE FUNCTIONS. Keywords: Uni- and bivariate p-boxes, maxitive functions, focal sets, comonotonicity, BIVARIATE P-BOXES AND MAXITIVE FUNCTIONS IGNACIO MONTES AND ENRIQUE MIRANDA Abstract. We give necessary and sufficient conditions for a maxitive function to be the upper probability of a bivariate p-box,

More information

The Limit Behaviour of Imprecise Continuous-Time Markov Chains

The Limit Behaviour of Imprecise Continuous-Time Markov Chains DOI 10.1007/s00332-016-9328-3 The Limit Behaviour of Imprecise Continuous-Time Markov Chains Jasper De Bock 1 Received: 1 April 2016 / Accepted: 6 August 2016 Springer Science+Business Media New York 2016

More information

Some exercises on coherent lower previsions

Some exercises on coherent lower previsions Some exercises on coherent lower previsions SIPTA school, September 2010 1. Consider the lower prevision given by: f(1) f(2) f(3) P (f) f 1 2 1 0 0.5 f 2 0 1 2 1 f 3 0 1 0 1 (a) Does it avoid sure loss?

More information

Active Elicitation of Imprecise Probability Models

Active Elicitation of Imprecise Probability Models Active Elicitation of Imprecise Probability Models Natan T'Joens Supervisors: Prof. dr. ir. Gert De Cooman, Dr. ir. Jasper De Bock Counsellor: Ir. Arthur Van Camp Master's dissertation submitted in order

More information

Regression with Imprecise Data: A Robust Approach

Regression with Imprecise Data: A Robust Approach 7th International Symposium on Imprecise Probability: Theories and Applications, Innsbruck, Austria, 2011 Regression with Imprecise Data: A Robust Approach Marco E. G. V. Cattaneo Department of Statistics,

More information

Theory of Computation

Theory of Computation Theory of Computation Lecture #2 Sarmad Abbasi Virtual University Sarmad Abbasi (Virtual University) Theory of Computation 1 / 1 Lecture 2: Overview Recall some basic definitions from Automata Theory.

More information

On Markov Properties in Evidence Theory

On Markov Properties in Evidence Theory On Markov Properties in Evidence Theory 131 On Markov Properties in Evidence Theory Jiřina Vejnarová Institute of Information Theory and Automation of the ASCR & University of Economics, Prague vejnar@utia.cas.cz

More information

Stochastic dominance with imprecise information

Stochastic dominance with imprecise information Stochastic dominance with imprecise information Ignacio Montes, Enrique Miranda, Susana Montes University of Oviedo, Dep. of Statistics and Operations Research. Abstract Stochastic dominance, which is

More information

Coherence with Proper Scoring Rules

Coherence with Proper Scoring Rules Coherence with Proper Scoring Rules Mark Schervish, Teddy Seidenfeld, and Joseph (Jay) Kadane Mark Schervish Joseph ( Jay ) Kadane Coherence with Proper Scoring Rules ILC, Sun Yat-Sen University June 2010

More information

Three contrasts between two senses of coherence

Three contrasts between two senses of coherence Three contrasts between two senses of coherence Teddy Seidenfeld Joint work with M.J.Schervish and J.B.Kadane Statistics, CMU Call an agent s choices coherent when they respect simple dominance relative

More information

Three-group ROC predictive analysis for ordinal outcomes

Three-group ROC predictive analysis for ordinal outcomes Three-group ROC predictive analysis for ordinal outcomes Tahani Coolen-Maturi Durham University Business School Durham University, UK tahani.maturi@durham.ac.uk June 26, 2016 Abstract Measuring the accuracy

More information

Harvard CS121 and CSCI E-121 Lecture 2: Mathematical Preliminaries

Harvard CS121 and CSCI E-121 Lecture 2: Mathematical Preliminaries Harvard CS121 and CSCI E-121 Lecture 2: Mathematical Preliminaries Harry Lewis September 5, 2013 Reading: Sipser, Chapter 0 Sets Sets are defined by their members A = B means that for every x, x A iff

More information

arxiv: v1 [math.pr] 26 Mar 2008

arxiv: v1 [math.pr] 26 Mar 2008 arxiv:0803.3679v1 [math.pr] 26 Mar 2008 The game-theoretic martingales behind the zero-one laws Akimichi Takemura 1 takemura@stat.t.u-tokyo.ac.jp, http://www.e.u-tokyo.ac.jp/ takemura Vladimir Vovk 2 vovk@cs.rhul.ac.uk,

More information

{a, b, c} {a, b} {a, c} {b, c} {a}

{a, b, c} {a, b} {a, c} {b, c} {a} Section 4.3 Order Relations A binary relation is an partial order if it transitive and antisymmetric. If R is a partial order over the set S, we also say, S is a partially ordered set or S is a poset.

More information

Generalized Loopy 2U: A New Algorithm for Approximate Inference in Credal Networks

Generalized Loopy 2U: A New Algorithm for Approximate Inference in Credal Networks Generalized Loopy 2U: A New Algorithm for Approximate Inference in Credal Networks Alessandro Antonucci, Marco Zaffalon, Yi Sun, Cassio P. de Campos Istituto Dalle Molle di Studi sull Intelligenza Artificiale

More information

p L yi z n m x N n xi

p L yi z n m x N n xi y i z n x n N x i Overview Directed and undirected graphs Conditional independence Exact inference Latent variables and EM Variational inference Books statistical perspective Graphical Models, S. Lauritzen

More information

arxiv: v3 [math.pr] 9 Dec 2010

arxiv: v3 [math.pr] 9 Dec 2010 EXCHANGEABILITY AND SETS OF DESIRABLE GAMBLES GERT DE COOMAN AND ERIK QUAEGHEBEUR arxiv:0911.4727v3 [math.pr] 9 Dec 2010 ABSTRACT. Sets of desirable gambles constitute a quite general type of uncertainty

More information

Hidden Markov models 1

Hidden Markov models 1 Hidden Markov models 1 Outline Time and uncertainty Markov process Hidden Markov models Inference: filtering, prediction, smoothing Most likely explanation: Viterbi 2 Time and uncertainty The world changes;

More information

Continuous updating rules for imprecise probabilities

Continuous updating rules for imprecise probabilities Continuous updating rules for imprecise probabilities Marco Cattaneo Department of Statistics, LMU Munich WPMSIIP 2013, Lugano, Switzerland 6 September 2013 example X {1, 2, 3} Marco Cattaneo @ LMU Munich

More information

CS 133 : Automata Theory and Computability

CS 133 : Automata Theory and Computability CS 133 : Automata Theory and Computability Lecture Slides 1 Regular Languages and Finite Automata Nestine Hope S. Hernandez Algorithms and Complexity Laboratory Department of Computer Science University

More information

Introduction to belief functions

Introduction to belief functions Introduction to belief functions Thierry Denœux 1 1 Université de Technologie de Compiègne HEUDIASYC (UMR CNRS 6599) http://www.hds.utc.fr/ tdenoeux Spring School BFTA 2011 Autrans, April 4-8, 2011 Thierry

More information

Probabilistic Logics and Probabilistic Networks

Probabilistic Logics and Probabilistic Networks Probabilistic Logics and Probabilistic Networks Jan-Willem Romeijn, Philosophy, Groningen Jon Williamson, Philosophy, Kent ESSLLI 2008 Course Page: http://www.kent.ac.uk/secl/philosophy/jw/2006/progicnet/esslli.htm

More information

Independence for Full Conditional Measures, Graphoids and Bayesian Networks

Independence for Full Conditional Measures, Graphoids and Bayesian Networks Independence for Full Conditional Measures, Graphoids and Bayesian Networks Fabio G. Cozman Universidade de Sao Paulo Teddy Seidenfeld Carnegie Mellon University February 28, 2007 Abstract This paper examines

More information

Hidden Markov Models. AIMA Chapter 15, Sections 1 5. AIMA Chapter 15, Sections 1 5 1

Hidden Markov Models. AIMA Chapter 15, Sections 1 5. AIMA Chapter 15, Sections 1 5 1 Hidden Markov Models AIMA Chapter 15, Sections 1 5 AIMA Chapter 15, Sections 1 5 1 Consider a target tracking problem Time and uncertainty X t = set of unobservable state variables at time t e.g., Position

More information

Birth-death chain. X n. X k:n k,n 2 N k apple n. X k: L 2 N. x 1:n := x 1,...,x n. E n+1 ( x 1:n )=E n+1 ( x n ), 8x 1:n 2 X n.

Birth-death chain. X n. X k:n k,n 2 N k apple n. X k: L 2 N. x 1:n := x 1,...,x n. E n+1 ( x 1:n )=E n+1 ( x n ), 8x 1:n 2 X n. Birth-death chains Birth-death chain special type of Markov chain Finite state space X := {0,...,L}, with L 2 N X n X k:n k,n 2 N k apple n Random variable and a sequence of variables, with and A sequence

More information

An extension of chaotic probability models to real-valued variables

An extension of chaotic probability models to real-valued variables An extension of chaotic probability models to real-valued variables P.I.Fierens Department of Physics and Mathematics, Instituto Tecnológico de Buenos Aires (ITBA), Argentina pfierens@itba.edu.ar Abstract

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 11 Project

More information

Course Introduction. Probabilistic Modelling and Reasoning. Relationships between courses. Dealing with Uncertainty. Chris Williams.

Course Introduction. Probabilistic Modelling and Reasoning. Relationships between courses. Dealing with Uncertainty. Chris Williams. Course Introduction Probabilistic Modelling and Reasoning Chris Williams School of Informatics, University of Edinburgh September 2008 Welcome Administration Handout Books Assignments Tutorials Course

More information

Module 1. Probability

Module 1. Probability Module 1 Probability 1. Introduction In our daily life we come across many processes whose nature cannot be predicted in advance. Such processes are referred to as random processes. The only way to derive

More information

Journal of Mathematical Analysis and Applications

Journal of Mathematical Analysis and Applications J. Math. Anal. Appl. 421 (2015) 1042 1080 Contents lists available at ScienceDirect Journal of Mathematical Analysis and Applications www.elsevier.com/locate/jmaa Extreme lower previsions Jasper De Bock,

More information

Three grades of coherence for non-archimedean preferences

Three grades of coherence for non-archimedean preferences Teddy Seidenfeld (CMU) in collaboration with Mark Schervish (CMU) Jay Kadane (CMU) Rafael Stern (UFdSC) Robin Gong (Rutgers) 1 OUTLINE 1. We review de Finetti s theory of coherence for a (weakly-ordered)

More information

Artificial Intelligence Markov Chains

Artificial Intelligence Markov Chains Artificial Intelligence Markov Chains Stephan Dreiseitl FH Hagenberg Software Engineering & Interactive Media Stephan Dreiseitl (Hagenberg/SE/IM) Lecture 12: Markov Chains Artificial Intelligence SS2010

More information

Discrete time Markov chains. Discrete Time Markov Chains, Definition and classification. Probability axioms and first results

Discrete time Markov chains. Discrete Time Markov Chains, Definition and classification. Probability axioms and first results Discrete time Markov chains Discrete Time Markov Chains, Definition and classification 1 1 Applied Mathematics and Computer Science 02407 Stochastic Processes 1, September 5 2017 Today: Short recap of

More information

ON THE EQUIVALENCE OF CONGLOMERABILITY AND DISINTEGRABILITY FOR UNBOUNDED RANDOM VARIABLES

ON THE EQUIVALENCE OF CONGLOMERABILITY AND DISINTEGRABILITY FOR UNBOUNDED RANDOM VARIABLES Submitted to the Annals of Probability ON THE EQUIVALENCE OF CONGLOMERABILITY AND DISINTEGRABILITY FOR UNBOUNDED RANDOM VARIABLES By Mark J. Schervish, Teddy Seidenfeld, and Joseph B. Kadane, Carnegie

More information

Approximate Credal Network Updating by Linear Programming with Applications to Decision Making

Approximate Credal Network Updating by Linear Programming with Applications to Decision Making Approximate Credal Network Updating by Linear Programming with Applications to Decision Making Alessandro Antonucci a,, Cassio P. de Campos b, David Huber a, Marco Zaffalon a a Istituto Dalle Molle di

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

2.2 Some Consequences of the Completeness Axiom

2.2 Some Consequences of the Completeness Axiom 60 CHAPTER 2. IMPORTANT PROPERTIES OF R 2.2 Some Consequences of the Completeness Axiom In this section, we use the fact that R is complete to establish some important results. First, we will prove that

More information

Imprecise Continuous-Time Markov Chains: Efficient Computational Methods with Guaranteed Error Bounds

Imprecise Continuous-Time Markov Chains: Efficient Computational Methods with Guaranteed Error Bounds ICTMCS: EFFICIENT COMPUTATIONAL METHODS WITH GUARANTEED ERROR BOUNDS Imprecise Continuous-Time Markov Chains: Efficient Computational Methods with Guaranteed Error Bounds Alexander Erreygers Ghent University,

More information

Optimisation under uncertainty applied to a bridge collision problem

Optimisation under uncertainty applied to a bridge collision problem Optimisation under uncertainty applied to a bridge collision problem K. Shariatmadar 1, R. Andrei 2, G. de Cooman 1, P. Baekeland 3, E. Quaeghebeur 1,4, E. Kerre 2 1 Ghent University, SYSTeMS Research

More information

Outline. Spring It Introduction Representation. Markov Random Field. Conclusion. Conditional Independence Inference: Variable elimination

Outline. Spring It Introduction Representation. Markov Random Field. Conclusion. Conditional Independence Inference: Variable elimination Probabilistic Graphical Models COMP 790-90 Seminar Spring 2011 The UNIVERSITY of NORTH CAROLINA at CHAPEL HILL Outline It Introduction ti Representation Bayesian network Conditional Independence Inference:

More information

Semigroup presentations via boundaries in Cayley graphs 1

Semigroup presentations via boundaries in Cayley graphs 1 Semigroup presentations via boundaries in Cayley graphs 1 Robert Gray University of Leeds BMC, Newcastle 2006 1 (Research conducted while I was a research student at the University of St Andrews, under

More information

Probability. CS 3793/5233 Artificial Intelligence Probability 1

Probability. CS 3793/5233 Artificial Intelligence Probability 1 CS 3793/5233 Artificial Intelligence 1 Motivation Motivation Random Variables Semantics Dice Example Joint Dist. Ex. Axioms Agents don t have complete knowledge about the world. Agents need to make decisions

More information

Nonparametric predictive inference for ordinal data

Nonparametric predictive inference for ordinal data Nonparametric predictive inference for ordinal data F.P.A. Coolen a,, P. Coolen-Schrijner, T. Coolen-Maturi b,, F.F. Ali a, a Dept of Mathematical Sciences, Durham University, Durham DH1 3LE, UK b Kent

More information

Part I. C. M. Bishop PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 8: GRAPHICAL MODELS

Part I. C. M. Bishop PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 8: GRAPHICAL MODELS Part I C. M. Bishop PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 8: GRAPHICAL MODELS Probabilistic Graphical Models Graphical representation of a probabilistic model Each variable corresponds to a

More information

Control Theory : Course Summary

Control Theory : Course Summary Control Theory : Course Summary Author: Joshua Volkmann Abstract There are a wide range of problems which involve making decisions over time in the face of uncertainty. Control theory draws from the fields

More information

Bayesian Networks and Decision Graphs

Bayesian Networks and Decision Graphs Bayesian Networks and Decision Graphs Chapter 3 Chapter 3 p. 1/47 Building models Milk from a cow may be infected. To detect whether or not the milk is infected, you can apply a test which may either give

More information

Basic Probabilistic Reasoning SEG

Basic Probabilistic Reasoning SEG Basic Probabilistic Reasoning SEG 7450 1 Introduction Reasoning under uncertainty using probability theory Dealing with uncertainty is one of the main advantages of an expert system over a simple decision

More information

Hidden Markov Models. By Parisa Abedi. Slides courtesy: Eric Xing

Hidden Markov Models. By Parisa Abedi. Slides courtesy: Eric Xing Hidden Markov Models By Parisa Abedi Slides courtesy: Eric Xing i.i.d to sequential data So far we assumed independent, identically distributed data Sequential (non i.i.d.) data Time-series data E.g. Speech

More information

Likelihood-Based Naive Credal Classifier

Likelihood-Based Naive Credal Classifier 7th International Symposium on Imprecise Probability: Theories and Applications, Innsbruck, Austria, 011 Likelihood-Based Naive Credal Classifier Alessandro Antonucci IDSIA, Lugano alessandro@idsia.ch

More information

Grundlagen der Künstlichen Intelligenz

Grundlagen der Künstlichen Intelligenz Grundlagen der Künstlichen Intelligenz Uncertainty & Probabilities & Bandits Daniel Hennes 16.11.2017 (WS 2017/18) University Stuttgart - IPVS - Machine Learning & Robotics 1 Today Uncertainty Probability

More information

Imprecise Filtering for Spacecraft Navigation

Imprecise Filtering for Spacecraft Navigation Imprecise Filtering for Spacecraft Navigation Tathagata Basu Cristian Greco Thomas Krak Durham University Strathclyde University Ghent University Filtering for Spacecraft Navigation The General Problem

More information

Hoeffding s inequality in game-theoretic probability

Hoeffding s inequality in game-theoretic probability Hoeffding s inequality in game-theoretic probability Vladimir Vovk $5 Peter Paul $0 $50 Peter $0 Paul $100 The Game-Theoretic Probability and Finance Project Answer to Frequently Asked Question #3 August

More information

Theory of Computation

Theory of Computation Fall 2002 (YEN) Theory of Computation Midterm Exam. Name:... I.D.#:... 1. (30 pts) True or false (mark O for true ; X for false ). (Score=Max{0, Right- 1 2 Wrong}.) (1) X... If L 1 is regular and L 2 L

More information

STA 414/2104: Machine Learning

STA 414/2104: Machine Learning STA 414/2104: Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistics! rsalakhu@cs.toronto.edu! http://www.cs.toronto.edu/~rsalakhu/ Lecture 9 Sequential Data So far

More information