Spezielle Themen der Künstlichen Intelligenz
- Homer Jacobs
- 5 years ago
Bayesian networks
Spezielle Themen der Künstlichen Intelligenz, 9. Termin: Bayesian networks & decision making

- A graphical notation for conditional independence assertions, and a compact specification of the full joint distribution
- Encodes the assumption that each node is conditionally independent of its non-descendants given its parents
- Then, each joint distribution can be factored as follows:
  P(X1,...,Xn) = Π_{i=1..n} P(Xi | X1,...,Xi-1) = Π_{i=1..n} P(Xi | Parents(Xi))

Probabilistic inference in Bayesian networks

Query types (given evidence E):
- Conditional probability query: what is the posterior probability distribution over values of a subset Y?
- Most probable explanation query: what is the most likely assignment of values to the remaining (hidden) variables X − E?
- Maximum a posteriori query: what is the most likely assignment of values to a subset Y?

In the worst case, exact inference is NP-hard. In practice it is much easier with variable elimination or approximative methods.

Exact:
- Enumeration: P(X | E=e) = α P(X, E=e) = α Σ_y P(X, E=e, Y=y)
- Variable elimination: avoid repeated computations

Approximative:
- Sampling from the empty network (no evidence): generate events conditioned on the CPT entries, draw N samples, and compute an approximate posterior
- Likelihood weighting: generate only samples that agree with the evidence
- Markov chain simulation: a stochastic process whose stationary distribution over states is the true posterior
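The enumeration query P(X | E=e) = α Σ_y P(X, E=e, Y=y) can be sketched on a minimal two-node network. The network (Rain → WetGrass) and all probabilities below are made up for illustration; they are not from the slides.

```python
# Inference by enumeration on a hypothetical two-node network Rain -> WetGrass.
# All CPT numbers here are invented for illustration.
p_rain = {True: 0.3, False: 0.7}
p_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(rain, wet):
    """Chain rule: P(Rain, WetGrass) = P(Rain) * P(WetGrass | Rain)."""
    return p_rain[rain] * p_wet_given_rain[rain][wet]

def posterior_rain(wet_evidence):
    """P(Rain | WetGrass = e): enumerate the joint, then normalize (the alpha)."""
    unnorm = {r: joint(r, wet_evidence) for r in (True, False)}
    alpha = 1.0 / sum(unnorm.values())
    return {r: alpha * p for r, p in unnorm.items()}

post = posterior_rain(True)   # P(Rain | WetGrass = true)
```

The normalization constant α is simply one over the sum of the unnormalized terms, exactly as in the formula above.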
Example: p(smart) = .8, p(study) = .6, p(fair) = .9; the network has smart and study as parents of prepared, and prepared, smart, and fair as parents of pass.

p(prepared | smart, study):

              smart   ¬smart
  study        .9      .7
  ¬study       .5      .1

Query: What is the probability that a student studied, given that they pass the exam?

P(st | pa=true) = α P(st, pa)
               = α Σ_pr Σ_sm Σ_f P(st, pa, pr, sm, f)
               = α Σ_pr Σ_sm Σ_f P(st) P(pa | pr, sm, f) p(pr | sm, st) p(sm) p(f)

Expanding the sums over prepared, smart, and fair, and normalizing by P(pa), this evaluates to approximately (1/P(pa)) · 0.6 · (...) = 0.40.

Answer: The probability that a student studied, given that she passed the exam, is 40%.

Using Bayes nets to model change

The world changes; we need to track and predict it (diabetes management vs. vehicle diagnosis).
Basic idea: copy the state and evidence variables for each time step.
- Xt = set of unobservable state variables at time t, e.g., BloodSugar, StomachContents, etc.
- Et = set of observable evidence variables at time t, e.g., MeasuredBloodSugar, PulseRate, FoodEaten
This assumes discrete time; the step size depends on the problem.
Notation: X_a:b = X_a, X_{a+1}, ..., X_{b-1}, X_b

Markov chains / Markov processes

Construct a Bayes net from these variables: parents?
Markov assumption: Xt depends on a bounded subset of X_0:t-1
- First-order Markov process: P(Xt | X_0:t-1) = P(Xt | X_{t-1})
- Second-order Markov process: P(Xt | X_0:t-1) = P(Xt | X_{t-2}, X_{t-1})
Sensor Markov assumption: P(Et | X_0:t, E_0:t-1) = P(Et | Xt)
Stationary process: the transition model P(Xt | X_{t-1}) and the sensor model P(Et | Xt) are fixed for all t (change follows fixed laws).

We need a prior probability P(X0) over states at time 0. Then we have:
P(X0, X1, ..., Xt, E1, ..., Et) = P(X0) Π_{i=1..t} P(Xi | X_{i-1}) P(Ei | Xi)

Example: Rain_{t-1} → Rain_t with transition model P(Rain_t | Rain_{t-1}), and Rain_t → Umbrella_t with sensor model P(Umbrella_t | Rain_t).

The first-order Markov assumption is not exactly true in the real world! Possible fixes:
1. Increase the order of the Markov process
2. Augment the state, e.g., add Temp, Pressure
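The factorization P(X0, ..., Xt, E1, ..., Et) = P(X0) Π_i P(Xi | X_{i-1}) P(Ei | Xi) can be evaluated directly for a concrete trajectory. A minimal sketch for the rain/umbrella model, assuming the common textbook parameters (P(rain | rain) = 0.7, P(umbrella | rain) = 0.9), which are not spelled out in this transcript:

```python
# Joint probability of one concrete trajectory under the Markov factorization.
# Transition/sensor numbers are assumed (standard textbook rain/umbrella values).
P_X0 = {True: 0.5, False: 0.5}
P_trans = {True: {True: 0.7, False: 0.3}, False: {True: 0.3, False: 0.7}}
P_sens = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}  # P(U | R)

def joint(states, observations):
    """states = (x0, x1, ..., xt); observations = (e1, ..., et).
    Multiplies P(x0) by P(xi | x_{i-1}) * P(ei | xi) for each step."""
    p = P_X0[states[0]]
    for prev, cur, obs in zip(states, states[1:], observations):
        p *= P_trans[prev][cur] * P_sens[cur][obs]
    return p

# Rain on days 0-2, umbrella observed on days 1 and 2:
p = joint((True, True, True), (True, True))   # 0.5 * (0.7*0.9) * (0.7*0.9)
```

Summing this function over all state/observation combinations would recover 1, which is a useful sanity check on the factorization.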
Inference tasks

- Filtering: P(Xt | e_1:t) — the belief state, input to the decision process of a rational agent
- Prediction: P(X_{t+k} | e_1:t) for k > 0 — evaluation of possible action sequences; like filtering without the evidence
- Smoothing: P(Xk | e_1:t) for 0 ≤ k < t — better estimates of past states, essential for learning
- Most likely explanation: arg max_{x_1:t} P(x_1:t | e_1:t) — given the observations, find the sequence of states most likely to have generated them (e.g. the Viterbi algorithm); speech recognition, decoding with a noisy channel

Dynamic Bayesian networks

- Xt, Et contain arbitrarily many variables in a replicated Bayes net (example slice: sensor variables Zt, BMetert; state variables Xt, Ẋt, Batteryt)
- The transition and sensor model (stationary) need to be given only for the first slice (see Sect. 15 for algorithms)
- Hidden Markov Model = DBN with a single discrete state variable

Exact inference in DBNs

DBNs are Bayesian networks, i.e., we can use our known algorithms.
Naive method: unroll the network and run any exact algorithm (e.g. variable elimination).

Approximative inference in DBNs: particle filtering

Basic idea: ensure that the population of samples ("particles") tracks the high-likelihood regions of the state space.
Algorithm: create N samples from the prior distribution P(X0), then cycle:
1. Propagate each sample by sampling the next state x_{t+1}, given x_t, using P(X_{t+1} | x_t)
2. Weight the samples by the likelihood they assign to the new evidence, P(e_{t+1} | x_{t+1})
3. Resample N new samples from the current population, with the probability that a sample is replicated proportional to its weight
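The filtering task above has a simple recursive solution: predict with the transition model, then weight by the sensor model and normalize. A sketch on the umbrella model, again assuming the standard textbook parameters (0.7 transition, 0.9/0.2 sensor), which this transcript's example appears to use:

```python
# Recursive filtering P(X_t | e_1:t) for the umbrella HMM.
# Model numbers are assumed (standard textbook values).
T = {True: {True: 0.7, False: 0.3},   # P(Rain_t | Rain_{t-1})
     False: {True: 0.3, False: 0.7}}
O = {True: 0.9, False: 0.2}           # P(Umbrella = true | Rain_t)

def filter_step(belief, umbrella_seen):
    """One update: predict with the transition model, then weight by the
    sensor likelihood and normalize."""
    predicted = {r: sum(T[prev][r] * belief[prev] for prev in belief)
                 for r in (True, False)}
    likelihood = O if umbrella_seen else {r: 1 - O[r] for r in O}
    unnorm = {r: likelihood[r] * predicted[r] for r in predicted}
    z = sum(unnorm.values())
    return {r: p / z for r, p in unnorm.items()}

belief = {True: 0.5, False: 0.5}      # prior P(Rain_0)
belief = filter_step(belief, True)    # day 1: umbrella observed
day1 = belief[True]
belief = filter_step(belief, True)    # day 2: umbrella observed again
day2 = belief[True]
```

Each step costs the same regardless of t, which is exactly the "constant cost per step" property claimed for these recursive algorithms in the summary.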
Exact inference in DBNs, cont.

Unroll the network to accommodate all observations (Rain_0 ... Rain_7, Umbrella_1 ... Umbrella_7) and run exact inference.
Problem: the inference cost of each update grows with t.
Rollup filtering: add slice t+1, then sum out slice t using variable elimination.
With n state variables of domain size d, the largest factor is O(d^{n+1}) and the update cost per step is O(d^{n+2}) — constant in t, but still exponential in the number of state variables (cf. HMM update cost O(d^{2n})).

Particle filtering cycle (figure): (a) propagate, (b) weight, (c) resample — illustrated on the rain/umbrella model with the umbrella observed.

Widely used for tracking nonlinear systems, esp. in vision; also used for simultaneous localization and mapping in mobile robots (10^5-dimensional state spaces).
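The propagate / weight / resample cycle above can be sketched in a few lines. The umbrella-model numbers are assumed (standard textbook values), and the particle count is arbitrary:

```python
# Minimal particle filter for the umbrella model, following the
# propagate / weight / resample cycle (model numbers assumed).
import random

random.seed(0)  # fixed seed so the run is reproducible

def particle_filter_step(particles, umbrella_seen):
    # 1. Propagate: sample x_{t+1} from P(X_{t+1} | x_t) for each particle.
    moved = [random.random() < (0.7 if rain else 0.3) for rain in particles]
    # 2. Weight each particle by the evidence likelihood P(e_{t+1} | x_{t+1}).
    def lik(rain):
        return (0.9 if rain else 0.2) if umbrella_seen else (0.1 if rain else 0.8)
    weights = [lik(r) for r in moved]
    # 3. Resample N particles with probability proportional to weight.
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.random() < 0.5 for _ in range(5000)]  # samples from P(X_0)
for _ in range(2):
    particles = particle_filter_step(particles, umbrella_seen=True)
estimate = sum(particles) / len(particles)   # should be near the exact 0.883
```

The fraction of "rain" particles approximates the filtered probability; with more particles the estimate concentrates around the exact filtering answer.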
Particle filtering, cont.

Widely used for tracking nonlinear systems, especially in vision, and for self-localization and mapping in mobile robots.
- Consistent (proof see Sect. 15.5)
- The approximation error remains bounded over time, at least empirically
- Efficient in practice, yet no theoretical guarantees (so far)

(Figure: average absolute error per time step for likelihood weighting with 25/100/1000/10000 samples vs. particle filtering with 25 samples.)

Summary

- Exact inference is intractable in large networks: enumeration, variable elimination
- Approximative inference: likelihood weighting, Markov chain Monte Carlo
- Temporal models (DBN, HMM): state and sensor variables, replicated over time
- Assumptions: Markov assumption (transition model P(Xt | X_{t-1})), sensor Markov assumption (sensor model P(Et | Xt)), stationary assumption (models fixed)
- Tasks: filtering, prediction, smoothing, most likely explanation; approximate recursive algorithms exist (constant cost per step)

Making decisions under uncertainty

Suppose I believe the following about getting to the airport on time:
- P(A25 gets me there on time | ...) = 0.04
- P(A90 gets me there on time | ...) = 0.70
- P(A120 gets me there on time | ...) = 0.95
- P(A1440 gets me there on time | ...) = 0.9999

Which action to choose depends on how likely each action is to succeed and on my preferences over its possible consequences.
Utility theory is used to represent and infer preferences.
Decision theory = probability theory + utility theory
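The airport trade-off becomes concrete once utilities are attached to the outcomes. The success probabilities below follow the standard textbook version of this example; the utilities for arriving on time vs. missing the flight (and for hours wasted at the airport) are purely hypothetical:

```python
# Expected-utility comparison of the airport departure actions.
# P(on time) values are from the example; the utilities are invented.
actions = {  # action: (P(on time), utility if on time, utility if late)
    "A25":   (0.04,   100, -1000),   # leave 25 min before: relaxed morning
    "A90":   (0.70,    95, -1000),
    "A120":  (0.95,    90, -1000),
    "A1440": (0.9999,  20, -1000),   # leave a day early: huge time cost
}

def expected_utility(p_on_time, u_on_time, u_late):
    """EU = P(success) * U(success) + P(failure) * U(failure)."""
    return p_on_time * u_on_time + (1 - p_on_time) * u_late

best = max(actions, key=lambda a: expected_utility(*actions[a]))
```

With these (assumed) utilities the 120-minute margin wins: A25 is too risky, while A1440 wastes too much time even though it almost certainly succeeds.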
Maximum expected utility

A nondeterministic action A has possible outcome states Result_i(A). Prior to executing A, the agent determines the probabilities P(Result_i(A) | Do(A), E).
The expected utility of A, given evidence E, is defined as
EU(A | E) = Σ_i P(Result_i(A) | Do(A), E) U(Result_i(A))

Principle of maximum expected utility (MEU): An agent is rational iff it chooses the action that yields the highest expected utility, averaged over all possible outcomes of the action.

Let's focus on one-shot decisions over single actions for now (sequences of actions later).

Basis of utility theory: preferences

An agent chooses among prizes (A, B, etc.) and lotteries, i.e., situations with uncertain prizes.
Lottery L = [p, A; (1−p), B]; more generally L = [p_i, C_i], where outcome C_i occurs with probability p_i.
Notation:
- A ≻ B: A preferred to B
- A ∼ B: indifference between A and B
- A ⪰ B: B not preferred to A
Key question: how must preferences be related when making decisions?

Rational preferences

Idea: the preferences of a rational agent must obey constraints.
Rational preferences ⇒ behavior describable as maximization of expected utility.

Constraints:
- Orderability: (A ≻ B) ∨ (B ≻ A) ∨ (A ∼ B) — the agent cannot avoid deciding
- Transitivity: (A ≻ B) ∧ (B ≻ C) ⇒ (A ≻ C)
- Continuity: A ≻ B ≻ C ⇒ ∃p [p, A; 1−p, C] ∼ B — indifferent between the lottery over A vs. C and getting B for sure
- Substitutability: A ∼ B ⇒ [p, A; 1−p, C] ∼ [p, B; 1−p, C]
- Monotonicity: A ≻ B ⇒ (p ≥ q ⇔ [p, A; 1−p, B] ⪰ [q, A; 1−q, B])

Violating the constraints leads to self-evident irrationality. For example, an agent with intransitive preferences can be induced to give away all its money:
- If B ≻ C, then an agent who has C would pay (say) 1 cent to get B
- If A ≻ B, then an agent who has B would pay (say) 1 cent to get A
- If C ≻ A, then an agent who has A would pay (say) 1 cent to get C
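The money-pump argument against intransitive preferences can be simulated directly. The preference cycle and the 1-cent trading fee are as in the example; the trading protocol is an illustrative sketch:

```python
# Money-pump sketch: an agent with the intransitive cycle B > C, A > B, C > A
# pays 1 cent per "upgrade" and ends up where it started, poorer.
prefers = {("B", "C"), ("A", "B"), ("C", "A")}   # (x, y) means x preferred to y

def trade(holding, offer):
    """The agent pays 1 cent whenever it strictly prefers the offered prize."""
    return (offer, 1) if (offer, holding) in prefers else (holding, 0)

holding, paid = "C", 0
for offer in ["B", "A", "C", "B", "A", "C"]:     # two full trips around the cycle
    holding, cost = trade(holding, offer)
    paid += cost

# The agent again holds C but has paid 6 cents -- and the cycle can repeat forever.
```

A transitive agent would refuse at least one trade per cycle, so the pump only works against preferences that violate the transitivity constraint.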
Utilities and preferences

Preferences are a basic property of rational agents. The existence of a utility function then follows from two principles:

Theorem (Ramsey, 1931; von Neumann and Morgenstern, 1944): Given preferences satisfying the constraints, there exists a real-valued function U such that
- Utility principle: U(A) ≥ U(B) ⇔ A ⪰ B
- Expected utility of a lottery: U([p_1, S_1; ...; p_n, S_n]) = Σ_i p_i U(S_i)

MEU principle: choose the action that maximizes expected utility. That is, a utility function can be formulated in accord with the agent's preferences.
Note: an agent can act rationally without explicitly trying to maximize expected utility.

Utility functions

A utility function maps from states to real numbers. But which numbers? The preferences of real agents are usually systematic, and there are systematic ways of designing utility functions.
Monotonic preferences: the agent prefers more money to less, all other things being equal. Does that say anything about lotteries involving money?
Get $1,000,000 for sure, or flip a coin for a 50% chance of getting $2,500,000?
- Expected monetary value (EMV) of the gamble = 0.5 · $0 + 0.5 · $2,500,000 = $1,250,000
- EU(Accept) = 0.5 U(S_k) + 0.5 U(S_{k+2,500,000}) (S_k = state of possessing $k)
- EU(Decline) = U(S_{k+1,000,000})
The rational decision depends on the utilities assigned to the outcome states!

The utility of money

Money does not behave as a utility function. Given a lottery L with expected monetary value EMV(L), usually U(L) < U(S_EMV(L)), i.e., people are risk-averse.
Utility curve: for what probability p am I indifferent between a prize x and a lottery [p, $M; (1−p), $0] for large M?
Typical empirical data (extrapolated with risk-prone behavior for large debts): utility increases less when a lot is already possessed (less risk-averse); utility decreases less when debts are already large (more risk-seeking).

Utility scales

- Normalized utilities: u_best = 1.0 (best possible prize), u_worst = 0.0 (worst possible catastrophe)
- Micromorts: a one-millionth chance of death; studies (from 1980) put the value at roughly $20; useful for Russian roulette, paying to reduce product risks, etc.
- QALYs: quality-adjusted life years; useful for medical decisions involving substantial risk
- Note: behavior is invariant w.r.t. positive linear transformations U'(x) = k_1 U(x) + k_2 where k_1 > 0
- With deterministic prizes only (no lottery choices), only ordinal utility can be determined, i.e., a total order on prizes; any monotonic transformation leaves behavior unchanged
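The invariance of behavior under positive linear transformations can be checked in a short sketch. The actions, probabilities, and utilities below are invented for illustration:

```python
# The MEU choice is invariant under U'(x) = k1*U(x) + k2 with k1 > 0.
# Outcome probabilities and utilities are hypothetical.
outcomes = {"A": [(0.8, 10.0), (0.2, 0.0)],   # action -> [(prob, utility), ...]
            "B": [(0.5, 12.0), (0.5, 3.0)]}

def best_action(transform=lambda u: u):
    """Pick the action maximizing expected (possibly transformed) utility."""
    def eu(action):
        return sum(p * transform(u) for p, u in outcomes[action])
    return max(outcomes, key=eu)

choice = best_action()                               # EU(A)=8.0 beats EU(B)=7.5
same_choice = best_action(lambda u: 4.0 * u + 7.0)   # k1=4, k2=7: same winner
```

A nonlinear but monotonic transform (say, squaring the utilities) could change the choice once lotteries are involved, which is exactly why only positive linear transformations preserve behavior in the lottery setting.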
Multi-attribute utility

Often outcomes are characterized by two or more attributes. How can we handle utility functions of many variables X_1 ... X_n? E.g., what is U(Deaths, Noise, Cost)? And how can complex utility functions be assessed from preference behaviour?

- Idea 1: identify conditions under which decisions can be made without complete identification of U(x_1, ..., x_n) (exploiting dominance between outcomes)
- Idea 2: identify various types of independence in preferences and derive consequent canonical forms for U(x_1, ..., x_n)

Strict dominance (no uncertainty)

Typically we define attributes such that U is monotonic in each.
Strict dominance: choice B strictly dominates choice A iff ∀i X_i(B) ≥ X_i(A) (and hence U(B) ≥ U(A)).
(Figure: with deterministic attributes, the region above and to the right of A dominates A; with uncertain attributes, strict dominance seldom holds in practice.)

Stochastic dominance (uncertainty)

Distribution p_1 stochastically dominates distribution p_2 iff
∀t: ∫_{−∞}^{t} p_1(x) dx ≤ ∫_{−∞}^{t} p_2(x) dx
(greater probability of higher values).
If U is monotonic in x, then A_1 with outcome distribution p_1 stochastically dominates A_2 with outcome distribution p_2:
∫ p_1(x) U(x) dx ≥ ∫ p_2(x) U(x) dx, i.e., EU(A_1) ≥ EU(A_2)
Multi-attribute case: stochastic dominance on all attributes ⇒ optimal.

Stochastic dominance, cont.

Stochastic dominance can often be determined without exact distributions, using qualitative reasoning:
- E.g., construction cost increases with distance from the city; S_1 is closer to the city than S_2 ⇒ S_1 stochastically dominates S_2 on (negative) cost
- E.g., injury increases with collision speed
We can annotate belief networks with stochastic dominance information: X →+ Y (X positively influences Y) means that for every value z of Y's other parents Z and all x_1 ≥ x_2, P(Y | x_1, z) stochastically dominates P(Y | x_2, z).
Qualitative probabilistic networks (QPNs): instead of giving conditional probabilities, just state qualitative constraints, i.e., that new evidence on X increases (+) or decreases (−) the belief in Y.
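For discrete distributions the dominance condition above reduces to a pointwise comparison of cumulative distribution functions. A sketch with two hypothetical negative-cost distributions (all numbers invented):

```python
# First-order stochastic dominance check for discrete distributions.
# The two "site cost" distributions are hypothetical.
def cdf(dist, xs):
    """Cumulative probabilities of dist evaluated at the sorted support xs."""
    acc, out = 0.0, []
    for x in xs:
        acc += dist.get(x, 0.0)
        out.append(acc)
    return out

def stochastically_dominates(p1, p2):
    """p1 dominates p2 iff P1(X <= t) <= P2(X <= t) for every t,
    i.e. p1 puts more mass on higher values."""
    xs = sorted(set(p1) | set(p2))
    return all(a <= b + 1e-12 for a, b in zip(cdf(p1, xs), cdf(p2, xs)))

# Negative cost, so higher is better: site S1 is cheaper than S2.
s1 = {-3: 0.2, -2: 0.5, -1: 0.3}
s2 = {-4: 0.4, -3: 0.4, -2: 0.2}
dom = stochastically_dominates(s1, s2)   # S1 dominates S2 on negative cost
```

Because the check only uses the ordering of values, it works exactly when the qualitative reasoning in the slides applies: no utility numbers are needed, only monotonicity.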
(Figures: the car insurance network used to illustrate qualitative influences, with nodes Age, GoodStudent, SocioEcon, ExtraCar, DrivingHist, DrivQuality, Antilock, Airbag, CarValue, HomeBase, AntiTheft, Ruggedness, Accident, Theft, OtherCost, OwnCost, MedicalCost, LiabilityCost, PropertyCost; successive slides annotate edges with qualitative influence signs.)
(Two further slides show the same car insurance network with additional qualitative influence annotations.)

Preferences without uncertainty

X_1 and X_2 are preferentially independent of X_3 iff the preference between (x_1, x_2, x_3) and (x_1', x_2', x_3) does not depend on x_3.
E.g., for (Noise, Cost, Safety): (20,000 suffer, $4.6 billion, 0.06 deaths/mpm) vs. (70,000 suffer, $4.2 billion, 0.06 deaths/mpm).
Theorem (Leontief, 1947): if every pair of attributes is P.I. of its complement, then every subset of attributes is P.I. of its complement: mutual P.I.
Theorem (Debreu, 1960): mutual P.I. ⇒ additive value function V(S) = Σ_i V_i(X_i(S)).
Hence we can assess n single-attribute functions; this is often a good approximation.

Preferences with uncertainty

Here we need to consider preferences over lotteries (X, Y being sets of attributes):
X is utility-independent of Y iff preferences over lotteries in X do not depend on y (the values in Y).
Mutual U.I.: each subset is U.I. of its complement ⇒ multiplicative utility function (here for three attributes, with U_i = U_i(x_i)):
U = k_1 U_1 + k_2 U_2 + k_3 U_3 + k_1 k_2 U_1 U_2 + k_2 k_3 U_2 U_3 + k_3 k_1 U_3 U_1 + k_1 k_2 k_3 U_1 U_2 U_3
Routine procedures and software packages exist for generating preference tests to identify the various canonical families of utility functions.
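Under mutual preferential independence, Debreu's theorem lets us score outcomes by summing single-attribute value functions. A minimal sketch with two invented attribute tables:

```python
# Additive value function V(S) = sum_i V_i(X_i(S)) under mutual P.I.
# The single-attribute value tables are hypothetical.
v_noise = {"low": 10, "high": 0}
v_cost = {"cheap": 8, "expensive": 2}

def value(noise, cost):
    """Additive value: each attribute contributes independently."""
    return v_noise[noise] + v_cost[cost]

# With deterministic outcomes, the best choice maximizes the additive value.
best = max(((n, c) for n in v_noise for c in v_cost),
           key=lambda s: value(*s))
```

Only n small tables had to be assessed instead of one table over all attribute combinations, which is the practical payoff of the theorem.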
Decision networks

Add action nodes and utility nodes to belief networks to enable rational decision making.
- Chance nodes: state variables with a conditional distribution over their parents (chance nodes and decision nodes)
- Decision/action nodes: one value for each choice of action
- Utility node: defines the utility function; its parents are all variables that affect the utility
(Example: airport siting, with chance nodes AirTraffic, Litigation, ConstructionCost, Deaths, Noise, Cost; decision node AirportSite; utility node U.)

Algorithm:
1. Set the evidence variables for the current state
2. For each possible value of the decision node: (a) set the decision node, (b) calculate the posterior for the parents of the utility node, (c) calculate the resulting utility for the action
3. Return the action with the highest expected utility (MEU)

Value of information

Idea: compute the value of acquiring each possible piece of evidence. This can be done directly from the decision network.
Example: buying oil drilling rights.
- Two blocks A and B, exactly one has oil, worth k; mutually exclusive, prior probability 0.5 each
- The current price of each block is k/2
- A consultant offers an accurate survey of A. What is a fair price?
Solution: compute the expected value of information = the expected value of the best action given the information minus the expected value of the best action without the information.
The survey may say "oil in A" or "no oil in A", with probability 0.5 each (given!):
= [0.5 × value of "buy A" given "oil in A" + 0.5 × value of "buy B" given "no oil in A"] − 0
= (0.5 × k/2) + (0.5 × k/2) − 0 = k/2

General formula

Current evidence E, current best action α; possible action outcomes S_i, potential new evidence E_j:
EU(α | E) = max_a Σ_i U(S_i) P(S_i | E, a)
Suppose we knew E_j = e_jk; then we would choose α_ejk such that
EU(α_ejk | E, E_j = e_jk) = max_a Σ_i U(S_i) P(S_i | E, a, E_j = e_jk)
E_j is a random variable whose value is currently unknown, so we must compute the expected gain over all its possible values:
VPI_E(E_j) = ( Σ_k P(E_j = e_jk | E) EU(α_ejk | E, E_j = e_jk) ) − EU(α | E)
(VPI = value of perfect information)

Properties of VPI

- Nonnegative in expectation, not post hoc: ∀j, E: VPI_E(E_j) ≥ 0
- Nonadditive (consider, e.g., obtaining E_j twice): VPI_E(E_j, E_k) ≠ VPI_E(E_j) + VPI_E(E_k)
- Order-independent: VPI_E(E_j, E_k) = VPI_E(E_j) + VPI_{E,E_j}(E_k) = VPI_E(E_k) + VPI_{E,E_k}(E_j)
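The oil-drilling example instantiates the VPI formula directly: the survey outcomes play the role of E_j = e_jk, each with probability 0.5. A sketch with an arbitrary concrete value for k:

```python
# Value of perfect information for the oil-drilling example:
# two blocks, exactly one contains oil worth k; each block costs k/2.
k = 12.0  # arbitrary oil value, chosen only for illustration

# Without the survey: buying either block has expected profit
# 0.5*k - k/2 = 0, so the best action is worth 0.
eu_without = 0.5 * k - k / 2

# The survey reports "oil in A" or "no oil in A", each with probability 0.5.
# Either way the agent then buys the right block: profit k - k/2 = k/2.
eu_with = 0.5 * (k - k / 2) + 0.5 * (k - k / 2)

vpi = eu_with - eu_without   # = k/2, the fair price of the survey
```

Note that the survey is valuable even though each individual report ("oil" or "no oil") is equally likely: VPI averages over the outcomes of E_j before subtracting the uninformed baseline.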
Qualitative behaviors

(Figure: three shapes of the posterior utility distributions P(U | E_j) for two actions with utilities U_1, U_2.)
a) The choice is obvious — the information is worth little
b) The choice is nonobvious — the information is worth a lot (broad utility distributions, the difference can be high)
c) The choice is nonobvious — the information is worth little (narrow utility distributions, the difference will be small)

Information-gathering agent

The agent chooses between a sensing action (Request, which will yield evidence in the next percept) and a real action.
Extension: consider all possible sequences of sensing actions and all possible outcomes of those requests. Because the values of requests depend on previous requests, we need to build conditional plans.

Summary

- Probability theory: what the agent should believe; utility theory: what the agent wants; decision theory: what the agent should do
- An agent with preferences between lotteries that are consistent with simple axioms can be described by a utility function; it selects actions as if maximizing its expected utility
- Multiattribute utilities and stochastic dominance as techniques for making decisions without precise utility values
- Decision networks express and solve decision problems; this often involves gathering further evidence based on the value of information
Subway saions energy and air qualiy managemen wih sochasic opimizaion Trisan Rigau 1,2,4, Advisors: P. Carpenier 3, J.-Ph. Chancelier 2, M. De Lara 2 EFFICACITY 1 CERMICS, ENPC 2 UMA, ENSTA 3 LISIS, IFSTTAR
More informationNotes for Lecture 17-18
U.C. Berkeley CS278: Compuaional Complexiy Handou N7-8 Professor Luca Trevisan April 3-8, 2008 Noes for Lecure 7-8 In hese wo lecures we prove he firs half of he PCP Theorem, he Amplificaion Lemma, up
More informationTopic Astable Circuits. Recall that an astable circuit has two unstable states;
Topic 2.2. Asable Circuis. Learning Objecives: A he end o his opic you will be able o; Recall ha an asable circui has wo unsable saes; Explain he operaion o a circui based on a Schmi inverer, and esimae
More informationProbabilistic Robotics
Probabilisic Roboics Bayes Filer Implemenaions Gaussian filers Bayes Filer Reminder Predicion bel p u bel d Correcion bel η p z bel Gaussians : ~ π e p N p - Univariae / / : ~ μ μ μ e p Ν p d π Mulivariae
More informationCS Homework Week 2 ( 2.25, 3.22, 4.9)
CS3150 - Homework Week 2 ( 2.25, 3.22, 4.9) Dan Li, Xiaohui Kong, Hammad Ibqal and Ihsan A. Qazi Deparmen of Compuer Science, Universiy of Pisburgh, Pisburgh, PA 15260 Inelligen Sysems Program, Universiy
More informationRandom Walk with Anti-Correlated Steps
Random Walk wih Ani-Correlaed Seps John Noga Dirk Wagner 2 Absrac We conjecure he expeced value of random walks wih ani-correlaed seps o be exacly. We suppor his conjecure wih 2 plausibiliy argumens and
More informationt is a basis for the solution space to this system, then the matrix having these solutions as columns, t x 1 t, x 2 t,... x n t x 2 t...
Mah 228- Fri Mar 24 5.6 Marix exponenials and linear sysems: The analogy beween firs order sysems of linear differenial equaions (Chaper 5) and scalar linear differenial equaions (Chaper ) is much sronger
More informationMachine Learning 4771
ony Jebara, Columbia Universiy achine Learning 4771 Insrucor: ony Jebara ony Jebara, Columbia Universiy opic 20 Hs wih Evidence H Collec H Evaluae H Disribue H Decode H Parameer Learning via JA & E ony
More informationState-Space Models. Initialization, Estimation and Smoothing of the Kalman Filter
Sae-Space Models Iniializaion, Esimaion and Smoohing of he Kalman Filer Iniializaion of he Kalman Filer The Kalman filer shows how o updae pas predicors and he corresponding predicion error variances when
More informationCSE/NB 528 Lecture 14: From Supervised to Reinforcement Learning (Chapter 9) R. Rao, 528: Lecture 14
CSE/NB 58 Lecure 14: From Supervised o Reinforcemen Learning Chaper 9 1 Recall from las ime: Sigmoid Neworks Oupu v T g w u g wiui w Inpu nodes u = u 1 u u 3 T i Sigmoid oupu funcion: 1 g a 1 a e 1 ga
More informationProblem Set 5. Graduate Macro II, Spring 2017 The University of Notre Dame Professor Sims
Problem Se 5 Graduae Macro II, Spring 2017 The Universiy of Nore Dame Professor Sims Insrucions: You may consul wih oher members of he class, bu please make sure o urn in your own work. Where applicable,
More informationPlanning in POMDPs. Dominik Schoenberger Abstract
Planning in POMDPs Dominik Schoenberger d.schoenberger@sud.u-darmsad.de Absrac This documen briefly explains wha a Parially Observable Markov Decision Process is. Furhermore i inroduces he differen approaches
More informationNotes on Kalman Filtering
Noes on Kalman Filering Brian Borchers and Rick Aser November 7, Inroducion Daa Assimilaion is he problem of merging model predicions wih acual measuremens of a sysem o produce an opimal esimae of he curren
More informationDesigning Information Devices and Systems I Spring 2019 Lecture Notes Note 17
EES 16A Designing Informaion Devices and Sysems I Spring 019 Lecure Noes Noe 17 17.1 apaciive ouchscreen In he las noe, we saw ha a capacior consiss of wo pieces on conducive maerial separaed by a nonconducive
More information5.1 - Logarithms and Their Properties
Chaper 5 Logarihmic Funcions 5.1 - Logarihms and Their Properies Suppose ha a populaion grows according o he formula P 10, where P is he colony size a ime, in hours. When will he populaion be 2500? We
More informationOrdinary dierential equations
Chaper 5 Ordinary dierenial equaions Conens 5.1 Iniial value problem........................... 31 5. Forward Euler's mehod......................... 3 5.3 Runge-Kua mehods.......................... 36
More information0.1 MAXIMUM LIKELIHOOD ESTIMATION EXPLAINED
0.1 MAXIMUM LIKELIHOOD ESTIMATIO EXPLAIED Maximum likelihood esimaion is a bes-fi saisical mehod for he esimaion of he values of he parameers of a sysem, based on a se of observaions of a random variable
More informationMathcad Lecture #8 In-class Worksheet Curve Fitting and Interpolation
Mahcad Lecure #8 In-class Workshee Curve Fiing and Inerpolaion A he end of his lecure, you will be able o: explain he difference beween curve fiing and inerpolaion decide wheher curve fiing or inerpolaion
More informationTracking. Announcements
Tracking Tuesday, Nov 24 Krisen Grauman UT Ausin Announcemens Pse 5 ou onigh, due 12/4 Shorer assignmen Auo exension il 12/8 I will no hold office hours omorrow 5 6 pm due o Thanksgiving 1 Las ime: Moion
More informationRandom Processes 1/24
Random Processes 1/24 Random Process Oher Names : Random Signal Sochasic Process A Random Process is an exension of he concep of a Random variable (RV) Simples View : A Random Process is a RV ha is a Funcion
More informationUnit Root Time Series. Univariate random walk
Uni Roo ime Series Univariae random walk Consider he regression y y where ~ iid N 0, he leas squares esimae of is: ˆ yy y y yy Now wha if = If y y hen le y 0 =0 so ha y j j If ~ iid N 0, hen y ~ N 0, he
More informationOn the Submodularity of Influence in Social Networks
On he ubmodulariy o Inluence in ocial Neworks Elchanan Mossel & ebasien Roch OC07 peaker: Xinran He Xinranhe1990@gmail.com ocial Nework ocial nework as a graph Nodes represen indiiduals. Edges are social
More informationA Bayesian Approach to Spectral Analysis
Chirped Signals A Bayesian Approach o Specral Analysis Chirped signals are oscillaing signals wih ime variable frequencies, usually wih a linear variaion of frequency wih ime. E.g. f() = A cos(ω + α 2
More informationChapter 5. Localization. 5.1 Localization of categories
Chaper 5 Localizaion Consider a caegory C and a amily o morphisms in C. The aim o localizaion is o ind a new caegory C and a uncor Q : C C which sends he morphisms belonging o o isomorphisms in C, (Q,
More informationLecture #6: Continuous-Time Signals
EEL5: Discree-Time Signals and Sysems Lecure #6: Coninuous-Time Signals Lecure #6: Coninuous-Time Signals. Inroducion In his lecure, we discussed he ollowing opics:. Mahemaical represenaion and ransormaions
More information7630 Autonomous Robotics Probabilistic Localisation
7630 Auonomous Roboics Probabilisic Localisaion Principles of Probabilisic Localisaion Paricle Filers for Localisaion Kalman Filer for Localisaion Based on maerial from R. Triebel, R. Käsner, R. Siegwar,
More informationSEIF, EnKF, EKF SLAM. Pieter Abbeel UC Berkeley EECS
SEIF, EnKF, EKF SLAM Pieer Abbeel UC Berkeley EECS Informaion Filer From an analyical poin of view == Kalman filer Difference: keep rack of he inverse covariance raher han he covariance marix [maer of
More informationLecture 20: Riccati Equations and Least Squares Feedback Control
34-5 LINEAR SYSTEMS Lecure : Riccai Equaions and Leas Squares Feedback Conrol 5.6.4 Sae Feedback via Riccai Equaions A recursive approach in generaing he marix-valued funcion W ( ) equaion for i for he
More informationEXERCISES FOR SECTION 1.5
1.5 Exisence and Uniqueness of Soluions 43 20. 1 v c 21. 1 v c 1 2 4 6 8 10 1 2 2 4 6 8 10 Graph of approximae soluion obained using Euler s mehod wih = 0.1. Graph of approximae soluion obained using Euler
More informationEKF SLAM vs. FastSLAM A Comparison
vs. A Comparison Michael Calonder, Compuer Vision Lab Swiss Federal Insiue of Technology, Lausanne EPFL) michael.calonder@epfl.ch The wo algorihms are described wih a planar robo applicaion in mind. Generalizaion
More informationBias in Conditional and Unconditional Fixed Effects Logit Estimation: a Correction * Tom Coupé
Bias in Condiional and Uncondiional Fixed Effecs Logi Esimaion: a Correcion * Tom Coupé Economics Educaion and Research Consorium, Naional Universiy of Kyiv Mohyla Academy Address: Vul Voloska 10, 04070
More informationLecture 16 (Momentum and Impulse, Collisions and Conservation of Momentum) Physics Spring 2017 Douglas Fields
Lecure 16 (Momenum and Impulse, Collisions and Conservaion o Momenum) Physics 160-02 Spring 2017 Douglas Fields Newon s Laws & Energy The work-energy heorem is relaed o Newon s 2 nd Law W KE 1 2 1 2 F
More informationSpeaker Adaptation Techniques For Continuous Speech Using Medium and Small Adaptation Data Sets. Constantinos Boulis
Speaker Adapaion Techniques For Coninuous Speech Using Medium and Small Adapaion Daa Ses Consaninos Boulis Ouline of he Presenaion Inroducion o he speaker adapaion problem Maximum Likelihood Sochasic Transformaions
More informationBiol. 356 Lab 8. Mortality, Recruitment, and Migration Rates
Biol. 356 Lab 8. Moraliy, Recruimen, and Migraion Raes (modified from Cox, 00, General Ecology Lab Manual, McGraw Hill) Las week we esimaed populaion size hrough several mehods. One assumpion of all hese
More informationSolutions Problem Set 3 Macro II (14.452)
Soluions Problem Se 3 Macro II (14.452) Francisco A. Gallego 04/27/2005 1 Q heory of invesmen in coninuous ime and no uncerainy Consider he in nie horizon model of a rm facing adjusmen coss o invesmen.
More informationFinal Spring 2007
.615 Final Spring 7 Overview The purpose of he final exam is o calculae he MHD β limi in a high-bea oroidal okamak agains he dangerous n = 1 exernal ballooning-kink mode. Effecively, his corresponds o
More informationGraphical Event Models and Causal Event Models. Chris Meek Microsoft Research
Graphical Even Models and Causal Even Models Chris Meek Microsof Research Graphical Models Defines a join disribuion P X over a se of variables X = X 1,, X n A graphical model M =< G, Θ > G =< X, E > is
More informationLinear Response Theory: The connection between QFT and experiments
Phys540.nb 39 3 Linear Response Theory: The connecion beween QFT and experimens 3.1. Basic conceps and ideas Q: How do we measure he conduciviy of a meal? A: we firs inroduce a weak elecric field E, and
More informationCourse Notes for EE227C (Spring 2018): Convex Optimization and Approximation
Course Noes for EE7C Spring 018: Convex Opimizaion and Approximaion Insrucor: Moriz Hard Email: hard+ee7c@berkeley.edu Graduae Insrucor: Max Simchowiz Email: msimchow+ee7c@berkeley.edu Ocober 15, 018 3
More informationOnline Appendix to Solution Methods for Models with Rare Disasters
Online Appendix o Soluion Mehods for Models wih Rare Disasers Jesús Fernández-Villaverde and Oren Levinal In his Online Appendix, we presen he Euler condiions of he model, we develop he pricing Calvo block,
More information2016 Possible Examination Questions. Robotics CSCE 574
206 Possible Examinaion Quesions Roboics CSCE 574 ) Wha are he differences beween Hydraulic drive and Shape Memory Alloy drive? Name one applicaion in which each one of hem is appropriae. 2) Wha are he
More informationMath 10B: Mock Mid II. April 13, 2016
Name: Soluions Mah 10B: Mock Mid II April 13, 016 1. ( poins) Sae, wih jusificaion, wheher he following saemens are rue or false. (a) If a 3 3 marix A saisfies A 3 A = 0, hen i canno be inverible. True.
More informationSTA 114: Statistics. Notes 2. Statistical Models and the Likelihood Function
STA 114: Saisics Noes 2. Saisical Models and he Likelihood Funcion Describing Daa & Saisical Models A physicis has a heory ha makes a precise predicion of wha s o be observed in daa. If he daa doesn mach
More informationGeorey E. Hinton. University oftoronto. Technical Report CRG-TR February 22, Abstract
Parameer Esimaion for Linear Dynamical Sysems Zoubin Ghahramani Georey E. Hinon Deparmen of Compuer Science Universiy oftorono 6 King's College Road Torono, Canada M5S A4 Email: zoubin@cs.orono.edu Technical
More informationApplication of a Stochastic-Fuzzy Approach to Modeling Optimal Discrete Time Dynamical Systems by Using Large Scale Data Processing
Applicaion of a Sochasic-Fuzzy Approach o Modeling Opimal Discree Time Dynamical Sysems by Using Large Scale Daa Processing AA WALASZE-BABISZEWSA Deparmen of Compuer Engineering Opole Universiy of Technology
More informationAsymptotic Equipartition Property - Seminar 3, part 1
Asympoic Equipariion Propery - Seminar 3, par 1 Ocober 22, 2013 Problem 1 (Calculaion of ypical se) To clarify he noion of a ypical se A (n) ε and he smalles se of high probabiliy B (n), we will calculae
More informationHidden Markov Models. Adapted from. Dr Catherine Sweeney-Reed s slides
Hidden Markov Models Adaped from Dr Caherine Sweeney-Reed s slides Summary Inroducion Descripion Cenral in HMM modelling Exensions Demonsraion Specificaion of an HMM Descripion N - number of saes Q = {q
More information