Graphical Models: Another Approach to Generalize the Viterbi Algorithm
1 Exact Marginalization: Another Approach to Generalize the Viterbi Algorithm

Oberseminar Bioinformatik (bioinformatics research seminar), 20 May 2010
Institut für Mikrobiologie und Genetik, Universität Göttingen
2 Undirected Graphical Model

An undirected graphical model (Markov random field) describes the joint probability distribution of a set of random variables $x = (x_v)_{v \in V}$. The conditional independence properties of the random variables are given by an undirected graph $G = (V, E)$:
$$p(x_v \mid x_{V \setminus \{v\}}) = p(x_v \mid x_{N(v)}) \quad \text{for all } v \in V.$$
Here $N(v) = \{w \in V : \{v, w\} \in E\}$ is the set of neighbours of $v$ in the graph.

Example (HMM / linear chain CRF): for the chain $x_1 - \cdots - x_{i-1} - x_i - x_{i+1} - \cdots - x_n$,
$$P(x_i \mid x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n) = P(x_i \mid x_{i-1}, x_{i+1}).$$
3 Undirected Graphical Model: Factorization (Hammersley and Clifford, 1971)

All undirected graphical models with $P(x) > 0$ are of this form:
$$P(x) = \frac{1}{Z} \prod_{C \in \mathcal{C}} \psi_C(x_C)$$
- $\psi_C > 0$: potential functions
- $\mathcal{C}$: subsets of $V$ (maximal cliques or subsets thereof)
- $x_C = (x_v)_{v \in C}$: the variables associated with the subset $C$
- $Z$: implicitly defined normalization constant

Example (HMM): conditional on an observation sequence $y$,
$$P(x \mid y) = \frac{1}{P(y)} \prod_{i=1}^{n} P(y_i \mid x_i)\, P(x_i \mid x_{i-1})$$
with $Z = P(y)$, $\mathcal{C} = V \cup E$, $\psi_{\{i\}} = P(y_i \mid x_i)$ (emission probabilities), and $\psi_{\{i-1,i\}} = P(x_i \mid x_{i-1})$ (transition probabilities).
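To make the factorization concrete, here is a minimal brute-force sketch (not from the talk; the chain, the potential values, and all names are invented for illustration) that evaluates $Z$ and $P(x)$ for a tiny binary MRF by enumerating all configurations:

```python
import itertools

# Toy binary MRF on the chain 0 - 1 - 2.
# psi maps each clique (a tuple of vertex ids) to its potential function.
psi = {
    (0,):   lambda a: 1.5 if a == 1 else 1.0,     # unary potential on vertex 0
    (0, 1): lambda a, b: 2.0 if a == b else 1.0,  # reward agreeing neighbours
    (1, 2): lambda a, b: 2.0 if a == b else 1.0,
}

def unnormalized(x):
    """Product of all clique potentials evaluated at configuration x."""
    p = 1.0
    for clique, f in psi.items():
        p *= f(*(x[v] for v in clique))
    return p

# Z is defined implicitly: the sum of unnormalized scores over all configurations.
Z = sum(unnormalized(x) for x in itertools.product((0, 1), repeat=3))
P = lambda x: unnormalized(x) / Z
print(Z, P((1, 1, 1)))
```

Enumeration is exponential in $|V|$; the point of the rest of the talk is to obtain the same quantities from local computations on a tree decomposition.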
5 CRFs in Particular

Conditional random field: the distribution conditional on an observation $y$ is an undirected graphical model,
$$P(x \mid y) = \frac{1}{Z(y)} \prod_{C \in \mathcal{C}} \psi_C(x_C, y).$$

Exponential form: in practice, the potentials are of this exponential form,
$$\psi_C(x_C, y) = \exp\left( \sum_{i=1}^{k} w_i f_i(x_C, y, C) \right),$$
where the feature functions $f_i$ are often binary features. Remark: HMMs are still of this form.
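As an illustration of the exponential form (the feature functions and weights below are invented for the sketch, not taken from the talk):

```python
import math

# Hypothetical binary feature functions f_i(x_C, y, C) and weights w_i.
features = [
    lambda x_C, y, C: 1.0 if len(set(x_C)) == 1 else 0.0,      # labels in clique agree
    lambda x_C, y, C: 1.0 if y[C[0]] == "conserved" else 0.0,  # observation-dependent
]
weights = [1.3, 0.7]

def psi(x_C, y, C):
    """Exponential-form clique potential: exp(sum_i w_i * f_i(x_C, y, C))."""
    return math.exp(sum(w * f(x_C, y, C) for w, f in zip(weights, features)))

print(psi(("E0", "E0"), {0: "conserved"}, (0, 1)))
```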
7 (Conditional) Random Fields: Quantities of Interest

1. The most likely labeling $\arg\max_x P(x \mid y)$ (or an approximation of it); corresponds to the most likely gene structure, the most likely alignment, a denoised image, or the best party plan.
2. Posterior probabilities / marginals: $P(x_W \mid y)$ / $P(x_W)$ for $W$ a vertex or a clique.
3. The normalization constant $Z$, e.g. to actually compute $P(x)$ for a given $x$ (classification).
10 Reminder: Had Developed a Generalized Viterbi Algorithm

Own algorithm: finds the most likely element (labeling) $\hat{x} \in \arg\max_x P(x)$ when $\mathcal{C}$ = vertices and edges. [Figure: example graph on vertices H1, B1, B2, H2, H3, B3]

Implementation: Java program GeneralViterbi, written by Moritz Maneke in his Bachelor thesis.
11 Application: Comparative Gene Finding

Danesh Morady Garavand (lab rotation) used GeneralViterbi as a new approach to comparative gene finding: find genes simultaneously in two or more orthologous DNA sequences.

Example (two aligned genomes):
- horizontal edges: neighboring bases in the same species
- vertical edges: aligned bases in different species
12 Application: Comparative Gene Finding: Comparative Model

- Used a simple label space with 7 labels {N, E0, E1, E2, I0, I1, I2}.
- Reused the parameters of AUGUSTUS (a linear chain CRF) for all node features and horizontal edge features (emission probabilities, transition probabilities); without vertical edges this yields two independent ab initio predictions.
- Used a very simple vertical edge feature (see the sketch after this list):
$$f(x_{\{u,v\}}, y, \{u,v\}) = \begin{cases} 1 & \text{if } x_u = x_v, \\ 0 & \text{otherwise,} \end{cases}$$
so that identical labels on aligned bases are rewarded.
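A sketch of that vertical edge feature as code (the weight value is an assumption; the talk does not give one):

```python
import math

def vertical_edge_feature(x_u, x_v):
    """1 if the aligned bases in the two species carry the same label."""
    return 1.0 if x_u == x_v else 0.0

W_VERTICAL = 1.0  # hypothetical weight; positive values reward agreement

def vertical_edge_potential(x_u, x_v):
    """Exponential-form potential on a vertical (alignment) edge."""
    return math.exp(W_VERTICAL * vertical_edge_feature(x_u, x_v))
```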
15 Application: Comparative Gene Finding: Results/Conclusions

- Tested on sequence pairs of length 5000.
- Accuracy improvement of "with vertical edges" over "without vertical edges": 8 percentage points on the exon level.
- Extension to a production setting is conceptually easy, e.g. 12 Drosophila species, evidence integration.
- But: efficiency drops exponentially with the width of the graph.
17 Application: Protein-Protein Interaction

[Figure: residue graph; legend: interface residue, backbone, close in space]

Find a labeling of the vertices in {interface, not interface} based on vertex and edge features.
18 Limits of GeneralViterbi

Problems:
- GeneralViterbi is too inefficient on very complex graphs (a problem-inherent limitation: the task is NP-hard).
- Marginals such as $P(x_v = \ell \mid y)$ would also be useful, e.g. for training CRFs. But my approach is not directly transferable to marginalization: a forward algorithm works, but it is not clear what the corresponding backward variables would be.
19 Eliminate the Limits: Catch Two Birds with One Stone

1. Can do maximization and marginalization with very similar algorithms.
2. Can tackle complex graphs by running an approximation variant of the same algorithm.

Maybe a third bird: can use higher-order clique features in PPI, e.g. patch features: "most interfaces consist of [...] buried cores surrounded by partially accessible rims; the amino acid composition of the core differs considerably from the rim" (Shoemaker and Panchenko, 2007).
21 Reminder: Tree Decomposition

Definition (tree decomposition): A tree decomposition of an undirected graph $G = (V, E)$ is a tree $(T, M)$ whose vertices $A \in T$ (bags) are subsets of the vertices of $G$, such that
- $\bigcup_{A \in T} A = V$;
- for every $\{u, v\} \in E$ there is an $A \in T$ with $u, v \in A$;
- for all $A, B \in T$ and every $C$ on the unique path from $A$ to $B$: $A \cap B \subseteq C$ (running intersection property).

[Example figure, copyright by Hein Röhrig]
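The three conditions translate directly into a checker. The following is a minimal stdlib sketch (all names are illustrative); running intersection is verified via the equivalent condition that, for each graph vertex $v$, the bags containing $v$ induce a connected subtree:

```python
def is_tree_decomposition(V, E, bags, tree_edges):
    """Check the tree-decomposition conditions.

    V: iterable of graph vertices; E: list of (u, v) graph edges;
    bags: dict bag_id -> set of graph vertices;
    tree_edges: list of (bag_id, bag_id) pairs, assumed to form a tree.
    """
    # 1. Every vertex of G appears in some bag.
    if set().union(*bags.values()) != set(V):
        return False
    # 2. Every edge of G is covered by some bag.
    if not all(any({u, v} <= b for b in bags.values()) for u, v in E):
        return False
    # 3. Running intersection: the bags containing v form a connected subtree.
    for v in V:
        containing = {i for i, b in bags.items() if v in b}
        start = next(iter(containing))
        seen, stack = {start}, [start]
        while stack:                      # traversal restricted to `containing`
            i = stack.pop()
            for a, c in tree_edges:
                for nb in ((c,) if a == i else (a,) if c == i else ()):
                    if nb in containing and nb not in seen:
                        seen.add(nb)
                        stack.append(nb)
        if seen != containing:
            return False
    return True

# Path graph a-b-c with the decomposition {a,b} - {b,c}:
print(is_tree_decomposition("abc", [("a", "b"), ("b", "c")],
                            {0: {"a", "b"}, 1: {"b", "c"}}, [(0, 1)]))  # True
```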
22 Tree Width of a Graph

Definition (tree width):
- The width of a tree decomposition is (size of the largest bag) - 1.
- The tree width of a graph is the minimal width over all of its tree decompositions.
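In code the width is a one-liner; as a sanity check (example mine, not from the talk), the path-graph decomposition above has width 1:

```python
def decomposition_width(bags):
    """Width of a tree decomposition: size of the largest bag minus one."""
    return max(len(b) for b in bags) - 1

print(decomposition_width([{"a", "b"}, {"b", "c"}]))  # 1, so tree width <= 1
```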
23 Computing Narrow Tree Decompositions

Theorem (Hans Bodlaender, 1992): For each $k$ there is a linear-time algorithm that tests whether a graph $G = (V, E)$ has tree width at most $k$ and, if so, computes a tree decomposition $(T, M)$ of width at most $k$.

Practical solutions:
- The exact-$k$ algorithm is complicated, but a linear-time implementation is available: Diplomarbeit of Hein Röhrig, 1998, GNU licence.
- Junction tree: one can also add edges to $E$ (triangulation) so that in the resulting graph $G'$ a tree of cliques exists that is a tree decomposition of $G$. One may also choose to remove edges.
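A common way to obtain a (not necessarily optimal) junction tree in practice is triangulation by vertex elimination. The following is a sketch of that idea (my own code, not the Röhrig implementation); the bags it returns, one per eliminated vertex, form a tree decomposition of the triangulated graph, and the resulting width depends on the elimination order:

```python
def eliminate(vertices, edges, order):
    """Triangulate by eliminating vertices in `order`; return the bags."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    bags = []
    for v in order:
        nbrs = adj.pop(v)                   # neighbours at elimination time
        bags.append(frozenset(nbrs | {v}))  # bag = vertex plus its neighbours
        for a in nbrs:                      # fill-in: connect neighbours pairwise
            adj[a] |= nbrs - {a}
            adj[a].discard(v)
    return bags

# 4-cycle a-b-c-d-a: eliminating in order a, b, c, d adds the chord b-d,
# giving bags {a,b,d}, {b,c,d}, {c,d}, {d} of width 2.
print(eliminate("abcd", [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")], "abcd"))
```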
27 Overview: Setting

Have: an undirected graphical model with potentials $\psi_C(x_C)$, and a tree decomposition of its graph.

Want: the marginal distributions $P(x_W)$ for the bags $W$ of the tree decomposition.

Remark: this gives us the posterior probabilities of vertices and edges needed for CRF training with IIS: $P(x_v \mid y)$, $P(x_{\{v,w\}} \mid y)$.

Idea: the algorithm maintains a distribution $\phi_W$ for each bag (a "potential") and iteratively and locally updates the potentials by passing messages between neighboring bags; after a fixed number of steps the potentials are the true marginals.
32 Separators

For each edge $\{X, Y\}$ in the tree decomposition we add a separator node $S$ between $X$ and $Y$ containing the vertices in the intersection: $S = X \cap Y$.

Example: the bag chain abcd - cdef - dgh gets the separators cd (between abcd and cdef) and d (between cdef and dgh).

Why? Local consistency: when the potentials are true marginals of $P(x)$, they agree on their common vertices. In particular, $\phi_X$ and $\phi_Y$ must eventually agree on $S$.
34 Local Consistency

Consistency on a separator: we need
$$\sum_{X \setminus S} \phi_X = \phi_S = \sum_{Y \setminus S} \phi_Y.$$

Example ($X = abcd$, $S = cd$, $Y = cdef$):
$$\sum_{a,b} \phi_X(a, b, c, d) = \phi_S(c, d) = \sum_{e,f} \phi_Y(c, d, e, f).$$
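A small sketch of this check (my own representation: potentials are dicts mapping label tuples to values, and the separator variables are assumed to appear in the same relative order in both bag scopes):

```python
def separator_marginal(phi, scope, sep_vars):
    """Sum a bag potential down to the separator variables."""
    out = {}
    for assignment, val in phi.items():
        key = tuple(a for v, a in zip(scope, assignment) if v in sep_vars)
        out[key] = out.get(key, 0.0) + val
    return out

def locally_consistent(phi_X, scope_X, phi_Y, scope_Y, sep_vars, tol=1e-9):
    """Do phi_X and phi_Y induce the same marginal on their separator?"""
    mX = separator_marginal(phi_X, scope_X, sep_vars)
    mY = separator_marginal(phi_Y, scope_Y, sep_vars)
    return all(abs(mX.get(k, 0.0) - mY.get(k, 0.0)) < tol
               for k in set(mX) | set(mY))
```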
35 Global Consistency

We must have, for any bags $X, Y$ with intersection $I$, that
$$\sum_{X \setminus I} \phi_X = \sum_{Y \setminus I} \phi_Y.$$
Running intersection property & local consistency $\Rightarrow$ global consistency.
37 Messages

Example: neighboring bags $X$ and $Y$ with separator $S$ carry the potentials $\phi_X$, $\phi_S$, $\phi_Y$.

Message from bag $X$ to bag $Y$: update $\phi_S$ and $\phi_Y$ according to
$$\phi_S^{*} = \sum_{X \setminus S} \phi_X \quad (1)$$
$$\phi_Y \leftarrow \phi_Y \cdot \frac{\phi_S^{*}}{\phi_S} \quad (2)$$
$$\phi_S \leftarrow \phi_S^{*} \quad (3)$$
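A minimal sketch of this update in code (my own representation: potentials are full dicts from label tuples to values, variables are binary, and the separator variables are assumed to appear in the same relative order in both bag scopes):

```python
from itertools import product

LABELS = (0, 1)

def message(phi_X, scope_X, phi_S, sep_vars, phi_Y, scope_Y):
    """Pass a message from bag X over separator S to bag Y:
    (1) phi_S_new = sum of phi_X over X \\ S
    (2) phi_Y    *= phi_S_new / phi_S
    (3) phi_S     = phi_S_new
    Returns the updated (phi_S, phi_Y)."""
    phi_S_new = {}
    for assignment, val in phi_X.items():                    # step (1)
        key = tuple(a for v, a in zip(scope_X, assignment) if v in sep_vars)
        phi_S_new[key] = phi_S_new.get(key, 0.0) + val
    for assignment in product(LABELS, repeat=len(scope_Y)):  # step (2)
        key = tuple(a for v, a in zip(scope_Y, assignment) if v in sep_vars)
        phi_Y[assignment] = phi_Y.get(assignment, 0.0) * phi_S_new[key] / phi_S[key]
    return phi_S_new, phi_Y                                  # step (3)
```

Separator potentials start at unity and the model potentials are positive, so the division in step (2) is safe.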
39 Two Passes: Leaves to Root to Leaves

1. Initialize the potentials: $\phi_A = \psi_A$ on bags, $\phi \equiv 1$ on separators.
2. Choose a root bag arbitrarily.
3. Pass messages from the leaves to the root (depth-first search order).
4. Pass messages from the root to the leaves (reverse order).
5. The potentials are now exact marginals; $Z$ can be obtained by summing any potential.

Example: forward pass, leaves to root; then backward pass, root to leaves. Have local consistency between $X$ and $Y$ after the backward pass!
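The message schedule of the two passes can be generated with a small depth-first traversal. A sketch (the tree is that of the party-planning example below, rooted at DH as on the slides; attaching MO to OD via the separator O is my assumption):

```python
def two_pass_schedule(tree, root):
    """Collect (leaves -> root) in depth-first order, then distribute
    (root -> leaves) in the reverse order. `tree` maps each bag to its
    neighboring bags; returns the full list of (source, target) messages."""
    collect = []
    def dfs(bag, parent):
        for nb in tree[bag]:
            if nb != parent:
                dfs(nb, bag)
                collect.append((nb, bag))  # message child -> parent
    dfs(root, None)
    distribute = [(b, a) for (a, b) in reversed(collect)]
    return collect + distribute

tree = {"SO": ["OD"], "MO": ["OD"], "DH": ["OD"], "OD": ["SO", "MO", "DH"]}
print(two_pass_schedule(tree, "DH"))
# [('SO','OD'), ('MO','OD'), ('OD','DH'), ('DH','OD'), ('OD','MO'), ('OD','SO')]
```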
61 Party Planning Example

Binary classification of the variables S, O, D, M, H (Stephan, Oliver, Dana, Marc, Hamed): 1 = invited, 0 = not invited.
$$P = \frac{1}{Z}\, \psi_S \psi_O \psi_D \psi_M \psi_H\, \psi_{SO} \psi_{OD} \psi_{MO} \psi_{DH}$$
For all $v$: $\psi_v(0) = 1$, $\psi_v(1) = 2$.

[Figure: friendship graph with edges S-O, O-D, M-O, D-H, and the pairwise potential tables]
62 Party Planning Example: Brute Force

[Table: full joint distribution over O, D, M, S, H; the unnormalized scores sum to 608]

Marginals: $P(S=1) = 418/608$, $P(O=1) = 380/608$, $P(D=1) = 320/608$, $P(M=1) = 152/608$, $P(H=1) = 320/608$.
63 Party Planning Example: Junction Tree

Tree decomposition: bags SO, OD, MO, DH, connected through the separators O, D, O.
$$P = \frac{1}{Z}\, \psi_{SO} \psi_{OD} \psi_{MO} \psi_{DH}$$
Define the bag potentials $\psi$ to achieve equivalence: (arbitrarily) assign the original potentials $\psi$ to bags.
65 Party Planning Example: Message Passing

- Initialize the potentials: bags with the original potentials $\psi_C$, separators with unity.
- Choose a root (here: DH).
- Pass messages from the leaves to the root; after the forward pass the root potential $\phi_{DH}$ sums to $Z = 608$.
- Pass messages from the root back to the leaves; afterwards every bag and separator potential sums to $Z = 608$.
75 Results

We get $Z = 608$ and $P(S=1) = 418/608$, $P(O=1) = 380/608$, $P(D=1) = 320/608$, $P(M=1) = 152/608$, $P(H=1) = 320/608$, just as in the brute force results.
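The whole example fits in a short script. The sketch below rebuilds the party chain and runs the two passes with the HUGIN-style update from above. The pairwise potential tables from the slides are not recoverable from this transcription, so the pairwise values below are invented; $Z$ therefore differs from 608, but the junction tree result provably matches the brute-force result, which is the point being demonstrated:

```python
from itertools import product

LABELS = (0, 1)
unary = lambda a: (1.0, 2.0)[a]           # psi_v(0) = 1, psi_v(1) = 2 (from the slides)
pw = lambda a, b: 3.0 if a == b else 1.0  # invented pairwise potential

def table(f):
    return {(a, b): f(a, b) for a in LABELS for b in LABELS}

# Bags of the decomposition; original potentials assigned arbitrarily to bags:
# psi_S, psi_O -> SO; psi_D -> OD; psi_M -> MO; psi_H -> DH.
phi = {
    "SO": table(lambda s, o: unary(s) * unary(o) * pw(s, o)),
    "OD": table(lambda o, d: unary(d) * pw(o, d)),
    "MO": table(lambda m, o: unary(m) * pw(m, o)),
    "DH": table(lambda d, h: unary(h) * pw(d, h)),
}
sep = {e: {a: 1.0 for a in LABELS} for e in (("SO", "OD"), ("MO", "OD"), ("OD", "DH"))}

def message(src, dst, edge, ax_src, ax_dst):
    """HUGIN update over a pair of 2-variable bags; ax_* is the position
    (0 or 1) of the shared separator variable in each bag's table."""
    new = {a: 0.0 for a in LABELS}
    for k, v in phi[src].items():
        new[k[ax_src]] += v                                      # (1) marginalize onto S
    for k in phi[dst]:
        phi[dst][k] *= new[k[ax_dst]] / sep[edge][k[ax_dst]]     # (2) rescale dst
    sep[edge] = new                                              # (3) store new separator

# Collect toward the root DH, then distribute back.
message("SO", "OD", ("SO", "OD"), 1, 0)
message("MO", "OD", ("MO", "OD"), 1, 0)
message("OD", "DH", ("OD", "DH"), 1, 0)
message("DH", "OD", ("OD", "DH"), 0, 1)
message("OD", "SO", ("SO", "OD"), 0, 1)
message("OD", "MO", ("MO", "OD"), 0, 1)

Z = sum(phi["DH"].values())  # summing any calibrated potential gives Z

# Brute-force cross-check over all 2^5 configurations.
def unnorm(s, o, d, m, h):
    return (unary(s) * unary(o) * unary(d) * unary(m) * unary(h)
            * pw(s, o) * pw(o, d) * pw(m, o) * pw(d, h))
Z_bf = sum(unnorm(*x) for x in product(LABELS, repeat=5))
assert abs(Z - Z_bf) < 1e-9

P_S1 = sum(phi["SO"][(1, o)] for o in LABELS) / Z  # marginal P(S=1) from bag SO
print(Z, P_S1)
```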
76 Maximization

Viterbi: $\arg\max_x P(x)$ can be computed with a conceptually very similar algorithm in the same running time.
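In the sketch above this amounts to a one-line change, assuming the phi/sep/LABELS tables of the previous sketch with freshly initialized potentials: the separator update takes a maximum instead of a sum (max-product instead of sum-product):

```python
def max_message(src, dst, edge, ax_src, ax_dst):
    """Max-product variant of message() above: the separator stores
    the max (not the sum) of phi[src] over the eliminated variable."""
    new = {a: 0.0 for a in LABELS}
    for k, v in phi[src].items():
        new[k[ax_src]] = max(new[k[ax_src]], v)  # max instead of +=
    for k in phi[dst]:
        phi[dst][k] *= new[k[ax_dst]] / sep[edge][k[ax_dst]]
    sep[edge] = new
```

After the two passes with max_message, each bag potential holds max-marginals: its largest entry is the maximal unnormalized score, and (when the maximizer is unique) the argmax labeling can be read off bag by bag.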
77 Approximation

Rough procedure (loopy belief propagation):
1. Use a graph of bags (that need not be a tree decomposition).
2. Pass messages according to some schedule.
3. Stop passing messages at will.
78 Literature

- Christopher M. Bishop: Pattern Recognition and Machine Learning, 2006 (book; chapter available online at cmbishop/prml).
- Chris Williams: The Junction Tree Algorithm, lecture notes, School of Informatics, University of Edinburgh, 2009.
- Martin Wainwright: Graphical models, message-passing algorithms, and variational methods: Part I.