Why BP Works STAT 232B


Free Energies: Helmholtz & Gibbs Free Energies (1)

Distance between probabilistic models - the K-L divergence:

  KL(b \| p) = \sum_{\{x\}} b(\{x\}) \ln \frac{b(\{x\})}{p(\{x\})}

Here p({x}) is the exact joint probability and b({x}) is the approximation, called the belief. Boltzmann's law for computing the joint probability:

  p(\{x\}) = \frac{1}{Z} \exp\!\big(-E(\{x\})/T\big)
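A quick numeric illustration (my addition, not part of the slides; the energies and the belief below are made up): p is the Boltzmann distribution of a 4-state system and KL(b || p) is evaluated directly.

```python
import numpy as np

T = 1.0
E = np.array([0.0, 1.0, 2.0, 3.0])    # energies of four states (made up)
p = np.exp(-E / T)
p /= p.sum()                          # Boltzmann's law: p = exp(-E/T) / Z

b = np.array([0.4, 0.3, 0.2, 0.1])    # an arbitrary belief approximating p

kl = np.sum(b * np.log(b / p))        # KL(b || p) = sum_x b ln(b / p)
print(f"KL(b||p) = {kl:.4f}")         # nonnegative, zero only if b == p
```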

Free Energies: Helmholtz & Gibbs Free Energies (2)

  KL(b \| p) = \sum_{\{x\}} b(\{x\}) \ln \frac{b(\{x\})}{p(\{x\})}
             = \sum_{\{x\}} b(\{x\}) \ln b(\{x\}) + \sum_{\{x\}} b(\{x\}) E(\{x\})/T + \ln Z

Since KL(. \| .) >= 0,

  \sum_{\{x\}} b(\{x\}) \ln b(\{x\}) + \sum_{\{x\}} b(\{x\}) E(\{x\})/T + \ln Z >= 0.

Define F = -T \ln Z, so that

  F <= T \sum_{\{x\}} b(\{x\}) \ln b(\{x\}) + \sum_{\{x\}} b(\{x\}) E(\{x\}).

F is called the Helmholtz free energy; it is the lower bound in the above inequality.

Free Energies: Helmholtz & Gibbs Free Energies (3)

Let us define

  G(b(\{x\})) = \sum_{\{x\}} b(\{x\}) E(\{x\}) + T \sum_{\{x\}} b(\{x\}) \ln b(\{x\}) = U(b) - T S(b)     (1)

where G(b) is called the approximate Gibbs free energy, U is the average energy, and S is the entropy. The exact Gibbs free energy is defined as

  G_exact(p(\{x\})) = \sum_{\{x\}} p(\{x\}) E(\{x\}) + T \sum_{\{x\}} p(\{x\}) \ln p(\{x\}) = U(p) - T S(p) = F.

The exact Gibbs free energy is equal to the Helmholtz free energy at equilibrium.
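A one-line rearrangement (my addition, not on the slide) makes the link between the two quantities explicit. Multiplying the KL expansion above by T and grouping terms,

  T \, KL(b \| p) = \big(U(b) - T S(b)\big) - (-T \ln Z) = G(b) - F \ge 0,

so G(b) >= F for every belief, with equality exactly when b = p. Minimizing the approximate Gibbs free energy over beliefs is therefore the same as minimizing the KL divergence to the true distribution.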

Free Energies: Mean-field free energy, a variational approach (1)

Let us introduce an arbitrary trial energy function E^0({x}). A trial probability is constructed as

  p^0(\{x\}) = \frac{\exp(-E^0(\{x\})/T)}{Z^0}, \qquad Z^0 = \sum_{\{x\}} \exp(-E^0(\{x\})/T).

Since

  Z = \sum_{\{x\}} \exp(-E(\{x\})/T)
    = \sum_{\{x\}} \exp(-E^0(\{x\})/T) \exp\!\big(-(E(\{x\}) - E^0(\{x\}))/T\big),

we have

  Z = Z^0 \big\langle \exp\!\big(-(E - E^0)/T\big) \big\rangle_0,

where <.>_0 denotes the expectation under the trial probability p^0.

Free Energies: Mean-field free energy, a variational approach (2)

By the convexity of the exponential function, \langle \exp(x) \rangle \ge \exp(\langle x \rangle). We have

  Z \ge Z^0 \exp\!\big(-\langle E - E^0 \rangle_0 / T\big).

Then

  F \le F_{var} = -T \ln Z^0 + \langle E - E^0 \rangle_0.

After a few more steps of manipulation, we have

  F_{var} = \langle E \rangle_0 - T S^0, \qquad S^0 = -\sum_{\{x\}} p^0(\{x\}) \ln p^0(\{x\}).

This suggests a useful variational argument: look for the trial probability function which gives the lowest variational free energy. The closer the trial probability is to the exact joint probability, the better the variational approximation.
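A small numeric check (my addition; the exact and trial energies below are made up) of the bound F <= F_var on a 3-state system, using the expressions above:

```python
import numpy as np

T = 1.0
E  = np.array([0.0, 0.5, 2.0])      # exact energies (made up)
E0 = np.array([0.0, 1.0, 1.0])      # an arbitrary trial energy function

Z  = np.exp(-E  / T).sum()
Z0 = np.exp(-E0 / T).sum()
p0 = np.exp(-E0 / T) / Z0           # trial probability built from E^0

F     = -T * np.log(Z)                            # exact Helmholtz free energy
F_var = -T * np.log(Z0) + np.sum(p0 * (E - E0))   # variational free energy

print(f"F = {F:.4f} <= F_var = {F_var:.4f}")      # Jensen: <exp(x)> >= exp(<x>)
```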

Free Energies: Mean-field free energy (3)

Mean-field theory assumes a trial probability function of the factorized form

  p^0(\{x\}) = \prod_i b_i(x_i), \qquad \sum_{x_i} b_i(x_i) = 1.

The energy of a configuration of a pairwise MRF is

  E(\{x\}) = -\sum_{(ij)} \ln \psi_{ij}(x_i, x_j) - \sum_i \ln \phi_i(x_i).

Plugging this energy into (1), we obtain the mean-field Gibbs free energy

  G_{MF} = U_{MF} - T S_{MF},

Free Energies: Mean-field free energy (4)

where

  U_{MF} = \sum_{\{x\}} b(\{x\}) E(\{x\})
         = -\sum_{(ij)} \sum_{x_i, x_j} b_i(x_i) b_j(x_j) \ln \psi_{ij}(x_i, x_j) - \sum_i \sum_{x_i} b_i(x_i) \ln \phi_i(x_i)

and

  S_{MF} = -\sum_{\{x\}} b(\{x\}) \ln b(\{x\}) = -\sum_i \sum_{x_i} b_i(x_i) \ln b_i(x_i).

Note that the exact Gibbs free energy is a function of the full joint probability (the Helmholtz free energy is the lowest bound of the KL divergence), but the mean-field Gibbs free energy is only a function of the one-node beliefs. To obtain the best approximation of p({x}), we need to search for the b_i that minimize G_MF.
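A minimal sketch (mine, with T = 1 and made-up potentials on a small loopy MRF) of the fixed-point updates that minimize G_MF: each one-node belief is repeatedly reset to b_i(x_i) proportional to phi_i(x_i) exp( sum_{j in N(i)} sum_{x_j} b_j(x_j) ln psi_ij(x_i, x_j) ).

```python
import numpy as np

K = 2                                        # binary variables
edges = [(0, 1), (1, 2), (0, 2)]             # a small loopy pairwise MRF
psi = {e: np.array([[2.0, 1.0], [1.0, 2.0]]) for e in edges}   # made-up pair potentials
phi = [np.array([1.0, 0.5]), np.array([0.5, 1.0]), np.array([1.0, 1.0])]

b = [np.full(K, 1.0 / K) for _ in range(3)]  # initialize one-node beliefs uniformly

for _ in range(50):                          # sequential fixed-point sweeps
    for i in range(3):
        log_b = np.log(phi[i]).copy()
        for (u, v) in edges:
            if u == i:
                log_b += b[v] @ np.log(psi[(u, v)]).T   # expected log psi over b_v
            elif v == i:
                log_b += b[u] @ np.log(psi[(u, v)])     # expected log psi over b_u
        b_new = np.exp(log_b - log_b.max())
        b[i] = b_new / b_new.sum()

print("mean-field beliefs:", [np.round(bi, 3) for bi in b])
```

Each sequential update exactly minimizes G_MF with respect to one belief while holding the others fixed, so the mean-field free energy never increases during the sweeps.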

Free Energies: The Bethe free energy (1)

For an MRF with tree-like topology, the exact joint probability can be factorized into a form that depends only on the one-node and two-node marginal probabilities:

  b(\{x\}) = \prod_{(ij)} b_{ij}(x_i, x_j) \prod_i [b_i(x_i)]^{1 - q_i},

where q_i is the number of nodes connected to node i. We define

  E_{ij}(x_i, x_j) = -\ln \psi_{ij}(x_i, x_j) - \ln \phi_i(x_i) - \ln \phi_j(x_j),
  E_i(x_i) = -\ln \phi_i(x_i),

  S_B = -\sum_{(ij)} \sum_{x_i, x_j} b_{ij}(x_i, x_j) \ln b_{ij}(x_i, x_j) + \sum_i (q_i - 1) \sum_{x_i} b_i(x_i) \ln b_i(x_i),

  U_B = \sum_{(ij)} \sum_{x_i, x_j} b_{ij}(x_i, x_j) E_{ij}(x_i, x_j) - \sum_i (q_i - 1) \sum_{x_i} b_i(x_i) E_i(x_i).

Free Energies: The Bethe free energy (2)

Then the Bethe free energy is

  G_B = U_B - T S_B
      = \sum_{(ij)} \sum_{x_i, x_j} b_{ij}(x_i, x_j) \big[E_{ij}(x_i, x_j) + T \ln b_{ij}(x_i, x_j)\big]
        - \sum_i (q_i - 1) \sum_{x_i} b_i(x_i) \big[E_i(x_i) + T \ln b_i(x_i)\big].

Together with a few normalization and marginalization constraints, the Lagrangian L is as follows:

  L = G_B + \sum_{(ij)} \sum_{x_j} \lambda_{ij}(x_j) \Big[ b_j(x_j) - \sum_{x_i} b_{ij}(x_i, x_j) \Big]
          + \sum_{(ij)} \sum_{x_i} \lambda_{ji}(x_i) \Big[ b_i(x_i) - \sum_{x_j} b_{ij}(x_i, x_j) \Big]
          + \sum_i \beta_i \Big[ 1 - \sum_{x_i} b_i(x_i) \Big]
          + \sum_{(ij)} \beta_{ij} \Big[ 1 - \sum_{x_i, x_j} b_{ij}(x_i, x_j) \Big].
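For concreteness, a sketch (my own, with T = 1) of evaluating G_B from given one- and two-node beliefs, using E_ij and E_i as defined above; the example potentials and beliefs are made up.

```python
import numpy as np

def bethe_free_energy(edges, psi, phi, b1, b2):
    """G_B = sum_ij sum b_ij (E_ij + ln b_ij) - sum_i (q_i - 1) sum b_i (E_i + ln b_i), T = 1."""
    q = {i: 0 for i in range(len(phi))}          # q_i = number of neighbors of node i
    for (i, j) in edges:
        q[i] += 1
        q[j] += 1
    G = 0.0
    for (i, j) in edges:
        E_ij = -np.log(psi[(i, j)]) - np.log(phi[i])[:, None] - np.log(phi[j])[None, :]
        G += np.sum(b2[(i, j)] * (E_ij + np.log(b2[(i, j)])))
    for i, phi_i in enumerate(phi):
        E_i = -np.log(phi_i)
        G -= (q[i] - 1) * np.sum(b1[i] * (E_i + np.log(b1[i])))
    return G

# Example on a single edge (made-up beliefs consistent with marginalization):
b2 = {(0, 1): np.array([[0.4, 0.1], [0.1, 0.4]])}
b1 = [b2[(0, 1)].sum(1), b2[(0, 1)].sum(0)]
phi = [np.array([0.6, 0.4]), np.array([0.5, 0.5])]
psi = {(0, 1): np.array([[2.0, 0.5], [0.5, 2.0]])}
print("G_B =", bethe_free_energy([(0, 1)], psi, phi, b1, b2))
```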

Free Energies: The Bethe free energy (3)

Taking derivatives of L with respect to the beliefs and the Lagrange multipliers, we obtain the marginal probability approximations:

  b_i(x_i) = \frac{1}{Z_i} \exp\Big[ -\frac{E_i(x_i)}{T} + \frac{1}{q_i - 1} \sum_{j \in N(i)} \frac{\lambda_{ji}(x_i)}{T} \Big]     (3)

  b_{ij}(x_i, x_j) = \frac{1}{Z_{ij}} \exp\Big[ -\frac{E_{ij}(x_i, x_j)}{T} + \frac{\lambda_{ji}(x_i)}{T} + \frac{\lambda_{ij}(x_j)}{T} \Big]     (4)

The Bethe approximation is a much better approximation to the exact Gibbs free energy than the mean-field approximation. The difficulty lies in the computation. The Belief Propagation algorithm provides a good solution.

A Brief Review of Belief Propagation (BP) (1)

For pairwise MRFs, the joint probability distribution over {x} can be factorized as

  p(\{x\}) = \frac{1}{Z} \prod_{(ij)} \psi_{ij}(x_i, x_j) \prod_i \phi_i(x_i),

where \psi_{ij} captures the internal bond between nodes i and j, and \phi_i indicates the external evidence at node i. [Figure: a three-node chain with evidence terms \phi_1, \phi_2, \phi_3 and pairwise potentials \psi_{12}, \psi_{23}.]
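Returning to equation (3): the differentiation step is not written out on the slide, so here is the missing line (using G_B = U_B - T S_B and the Lagrangian above):

  \frac{\partial L}{\partial b_i(x_i)}
    = -(q_i - 1)\big[E_i(x_i) + T \ln b_i(x_i) + T\big] + \sum_{j \in N(i)} \lambda_{ji}(x_i) - \beta_i = 0.

Solving for b_i(x_i) gives the exponential form (3); the analogous derivative with respect to b_{ij}(x_i, x_j) gives (4).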

A Brief Review of Belief Propagation (BP) (2)

Messages m_{ij} are introduced to pass information between nodes in the BP network. The belief (marginal posterior) at a node is computed as

  b_i(x_i) = k \, \phi_i(x_i) \prod_{j \in N(i)} m_{ji}(x_i)     (5)

and the joint belief (joint marginal posterior) of a pair of neighboring nodes i and j is

  b_{ij}(x_i, x_j) = k \, \psi_{ij}(x_i, x_j) \, \phi_i(x_i) \, \phi_j(x_j) \prod_{k \in N(i) \setminus j} m_{ki}(x_i) \prod_{l \in N(j) \setminus i} m_{lj}(x_j).     (6)

The message from node i to node j is

  m_{ij}(x_j) = \sum_{x_i} \psi_{ij}(x_i, x_j) \, \phi_i(x_i) \prod_{k \in N(i) \setminus j} m_{ki}(x_i).

Equivalence of BP to the Bethe Approximation

By defining

  \lambda_{ji}(x_i) = T \ln \prod_{k \in N(i) \setminus j} m_{ki}(x_i),

it is easy to show that equations (3) and (4), derived for the Bethe approximation, are equivalent to the BP equations (5) and (6).
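A minimal sum-product sketch (mine, not the slides' code) on a 3-node chain, iterating the message update above; the evidence and pair potentials are made up.

```python
import numpy as np

K = 2
phi = [np.array([1.0, 0.2]), np.array([0.5, 0.5]), np.array([0.3, 1.0])]   # made-up evidence
psi = {(0, 1): np.array([[2.0, 1.0], [1.0, 2.0]]),
       (1, 2): np.array([[2.0, 1.0], [1.0, 2.0]])}
nbrs = {0: [1], 1: [0, 2], 2: [1]}

def pairwise(i, j, xi, xj):
    return psi[(i, j)][xi, xj] if (i, j) in psi else psi[(j, i)][xj, xi]

# initialize all directed messages to uniform
m = {(i, j): np.ones(K) for i in nbrs for j in nbrs[i]}

for _ in range(10):                       # on a chain this converges quickly
    for (i, j) in list(m):
        prod = phi[i].copy()
        for k in nbrs[i]:
            if k != j:
                prod *= m[(k, i)]          # messages into i from all nodes except j
        new = np.array([sum(pairwise(i, j, xi, xj) * prod[xi] for xi in range(K))
                        for xj in range(K)])
        m[(i, j)] = new / new.sum()

# beliefs: b_i(x_i) proportional to phi_i(x_i) * prod_{j in N(i)} m_ji(x_i)
for i in nbrs:
    b = phi[i] * np.prod([m[(j, i)] for j in nbrs[i]], axis=0)
    print(f"b_{i} =", np.round(b / b.sum(), 3))
```

On a singly connected graph such as this chain the fixed-point beliefs are the exact marginals; running the same loop on a loopy graph gives loopy BP, whose beliefs are only the Bethe approximation.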

Equivalence of BP to Dynamic Programming

To obtain the MAP solution (max-product) from a belief network, e.g. a 3-node chain, the BP algorithm is equivalent to dynamic programming:

  \hat{x}_1 = \arg\max_{x_1} \phi_1(x_1) \max_{x_2} \phi_2(x_2) \psi_{12}(x_1, x_2) \max_{x_3} \phi_3(x_3) \psi_{23}(x_2, x_3)
  \hat{x}_2 = \arg\max_{x_2} \phi_2(x_2) \max_{x_1} \phi_1(x_1) \psi_{12}(x_1, x_2) \max_{x_3} \phi_3(x_3) \psi_{23}(x_2, x_3)
  \hat{x}_3 = \arg\max_{x_3} \phi_3(x_3) \max_{x_2} \phi_2(x_2) \psi_{23}(x_2, x_3) \max_{x_1} \phi_1(x_1) \psi_{12}(x_1, x_2)

Loopy & Non-loopy Graphs

BP works for singly connected networks: it is guaranteed to converge to the correct answers. BP does not always work for loopy networks, because the same evidence is passed around the network multiple times and mistaken for new evidence.
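A brute-force check (my addition) of the 3-node chain elimination above: the max-product recursion and exhaustive search return the same MAP value of x_1; all potentials are random placeholders.

```python
import numpy as np
from itertools import product

K = 3
rng = np.random.default_rng(0)
phi = [rng.random(K) for _ in range(3)]           # made-up evidence
psi12 = rng.random((K, K))
psi23 = rng.random((K, K))

def joint(x1, x2, x3):
    return phi[0][x1] * phi[1][x2] * phi[2][x3] * psi12[x1, x2] * psi23[x2, x3]

# dynamic programming / max-product: eliminate x3, then x2
mu3 = np.array([max(phi[2][x3] * psi23[x2, x3] for x3 in range(K)) for x2 in range(K)])
mu2 = np.array([max(phi[1][x2] * psi12[x1, x2] * mu3[x2] for x2 in range(K)) for x1 in range(K)])
x1_hat = int(np.argmax(phi[0] * mu2))

x_brute = max(product(range(K), repeat=3), key=lambda x: joint(*x))
print("max-product x1 =", x1_hat, " brute-force MAP =", x_brute)
```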

Loopy Graphs Work Sometimes

Although evidence is double counted, all evidence may be double counted equally, and BP can be proved correct in this situation. Single loop: BP is guaranteed to generate the most likely state sequence. Multiple loops: a balanced network will work.

BP Visiting Order Reschedule

In the traditional BP algorithm, messages are passed and updated between nodes without any priority. This is not efficient, because nodes with weak evidence provide less useful information to their neighbors; messages from these nodes should be passed at a later stage than messages from nodes with strong evidence. We design a new node visiting order to pass messages effectively between graph nodes (a rough sketch of the schedule follows below):

1. Rank the nodes according to the belief of their local evidence (breadth-first search). The most informative node passes its message first.
2. Reverse the order in step 1 and pass messages back.
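One way to realize this schedule (my interpretation; the ranking criterion, here the entropy of the normalized local evidence, and the omission of the breadth-first traversal detail are assumptions):

```python
import numpy as np

def entropy(p):
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def visiting_order(phi):
    # most informative nodes (lowest-entropy, i.e. most peaked, evidence) go first
    forward = sorted(range(len(phi)), key=lambda i: entropy(phi[i]))
    return forward + forward[::-1]        # forward sweep, then bounce back in reverse

phi = [np.array([0.9, 0.1]), np.array([0.5, 0.5]), np.array([0.6, 0.4])]
print(visiting_order(phi))                # e.g. [0, 2, 1, 1, 2, 0]
```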

Toy Problem 1 - Ising Model (1)

Definition:

  E(\{x\}) = -\sum_{(ij)} J_{ij} x_i x_j - \sum_i h_i x_i, \qquad p(\{x\}) = \frac{1}{Z} e^{-E(\{x\})/T}.

Specifications: x_i = \pm 1, J_{ij} = 0.1, h_i = 1.

Toy Problem 1 - Ising Model (2)

[Figure: bounce-back visiting order; plot of the Bethe potential versus BP iterations.]
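A small sketch (mine) of the toy Ising energy on a grid with the stated coupling J_ij = 0.1 and spins in {+1, -1}; the grid size and the zero default field below are placeholders, since the slide's full specification did not survive extraction.

```python
import numpy as np

def ising_energy(x, J=0.1, h=None):
    """E(x) = -sum_<ij> J x_i x_j - sum_i h_i x_i on a 2-D grid of +/-1 spins."""
    if h is None:
        h = np.zeros_like(x, dtype=float)     # placeholder field
    E = -np.sum(h * x)
    E -= J * np.sum(x[:, :-1] * x[:, 1:])     # horizontal neighbor pairs
    E -= J * np.sum(x[:-1, :] * x[1:, :])     # vertical neighbor pairs
    return E

x = np.random.default_rng(0).choice([-1, 1], size=(8, 8))
print("E(x) =", ising_energy(x), " and p(x) is proportional to exp(-E(x)/T)")
```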

Toy Problem 2 - Rectangle Matching (1)

Matching a black rectangle between two images. Build the BP graph as a 7 x 7 lattice. [Figure: lattice fragment with evidence terms \phi_1, ..., \phi_4 and pairwise potentials \psi_{12}, \psi_{23}, \psi_{34}, \psi_{41}.]

Toy Problem 2 - Rectangle Matching (2)

Ordinary visiting order. [Figure: plot of the Bethe potential versus BP iterations.]

Toy Problem 2 - Rectangle Matching (3)

Flush visiting order, starting from the 4 corners. [Figure: plot of the Bethe potential versus BP iterations.]

Real Data: Pepsi Can

[The remaining slides show BP matching results on real image pairs.]

Real Data: Car

Real Data: Flower Garden

Real Data: Beverly

Real Data: Printing Room

Real Data: Niagara Falls

References

J. S. Yedidia, W. T. Freeman and Y. Weiss, "Understanding Belief Propagation and its Generalizations", International Joint Conference on Artificial Intelligence (IJCAI), 2001.

J. S. Yedidia, "An Idiosyncratic Journey Beyond Mean Field Theory", in Advanced Mean Field Methods: Theory and Practice, pp. 21-36, February 2001.
