Neural-Network Quantum States
A Lecture for the Machine Learning and Many-Body Physics workshop

Giuseppe Carleo
June, Beijing

Institute for Theoretical Physics, ETH Zurich, Wolfgang-Pauli-Str. 27, 8093 Zurich, Switzerland
Chapter 1
Quantum Mechanics as a Machine Learning Problem

Every machine learning approach has two fundamental ingredients.

1. The machine: typically an artificial neural network, it is a high-dimensional (non-linear) function F(x; p_1 ... p_{N_p}) of the parameters p_1 ... p_{N_p}.

2. The learning: the parameters p are learned on the basis of a stochastic optimization that minimizes some average loss function L(p) over a dataset x_1, x_2 ... x_{N_s}. For example, L(x_i) = |F(x_i; p) − y_i| in the supervised learning setting with expected labels y_i.

On the other hand, the central goal in quantum mechanics is to find a solution to the Schroedinger equation

    H |Ψ_i⟩ = E_i |Ψ_i⟩,    (1.1)

for i = 0, 1, ... and E_0 < E_1 < .... How can we then reduce quantum mechanics to a machine learning problem? First of all, I will address requirement 2, which was done by pioneers in computational quantum physics like Bill McMillan in the 60s. Then, I will address requirement 1, which has instead been done only very recently, thus completing the connection between machine learning and quantum mechanics.

1.1 Variational Monte Carlo

To satisfy requirement 2, we need to transform the eigenvalue problem (1.1) into a stochastic optimization problem. To achieve this, we start from an alternative formulation of Schroedinger's equation, based on the variational principle. In particular, consider the energy functional

    E[Ψ] = ⟨Ψ|H|Ψ⟩ / ⟨Ψ|Ψ⟩ ≥ E_0,    (1.2)

where Ψ is some arbitrary physical state, and E_0 is the exact ground-state energy of the Hamiltonian H. From the variational theorem it is then clear that one can find the
exact ground-state wave-function as the solution of the optimization problem:

    Ψ_0 = argmin_Ψ E[Ψ].    (1.3)

For an arbitrary state Ψ, however, it is seldom possible to compute the energy functional analytically, since it involves integrals over a high-dimensional space. To solve this problem, in the 60s McMillan realized that the energy functional can be computed stochastically [McMillan1965]. In particular, the Variational Monte Carlo method is rooted in the observation that expectation values like (1.2) can be written as statistical averages over a suitable probability distribution.

Let us assume that our Hilbert space is spanned by the many-body kets |x⟩. In practice these depend on the system under examination. For example, in the case of spins 1/2 we would typically have |x⟩ = |σ^z_1, σ^z_2, ... σ^z_N⟩, for second-quantized fermions |x⟩ = |n_1, n_2, ... n_N⟩, and for particles in continuous space |x⟩ = |r_1, r_2, ... r_N⟩. The only difference is of course that in the first two cases one has a discrete set of quantum numbers, whereas in the latter case the degrees of freedom are continuous. In both cases we will denote sums over the Hilbert space with discrete sums, although one should always bear in mind that in the case of continuous variables these sums must be interpreted as integrals. In particular, we will use the closure relation Σ_x |x⟩⟨x| = 1.

1.1.1 Stochastic Estimates of Properties

Using the closure relation, we can rewrite a generic quantum expectation value of some operator O as

    ⟨Ψ|O|Ψ⟩ / ⟨Ψ|Ψ⟩ = Σ_{x,x'} ⟨Ψ|x⟩⟨x|O|x'⟩⟨x'|Ψ⟩ / Σ_x ⟨Ψ|x⟩⟨x|Ψ⟩    (1.4)
                    = Σ_{x,x'} Ψ*(x) O_{xx'} Ψ(x') / Σ_x |Ψ(x)|².    (1.5)

There can be, in general, two cases:

1. The operator O is diagonal in the computational basis, i.e. O_{xx'} = δ_{xx'} O(x). Then

    ⟨Ψ|O|Ψ⟩ / ⟨Ψ|Ψ⟩ = Σ_x |Ψ(x)|² O(x) / Σ_x |Ψ(x)|²    (1.6)
                     = ⟨O⟩,    (1.7)

where ⟨...⟩ denotes statistical expectation values over the probability distribution Π(x) ∝ |Ψ(x)|². In other words, in this case quantum expectation values are completely equivalent to averaging over Hilbert-space states sampled according to the square modulus of the wave-function.

2. The operator O is off-diagonal in the computational basis.
Then, we can define an auxiliary diagonal operator (often called, in a somewhat misleading fashion, local operator or estimator)

    O_loc(x) = Σ_{x'} O_{xx'} Ψ(x') / Ψ(x),    (1.8)
such that it is easily proven that

    ⟨Ψ|O|Ψ⟩ / ⟨Ψ|Ψ⟩ = Σ_x |Ψ(x)|² O_loc(x) / Σ_x |Ψ(x)|²    (1.9)
                     = ⟨O_loc⟩.    (1.10)

For any observable, then, we can always compute expectation values over arbitrary wave-functions as statistical averages. In the case of off-diagonal operators, it should be noticed that the sum Σ_{x'} O_{xx'} Ψ(x')/Ψ(x) extends only over the tiny portion of the Hilbert space for which x' is such that O_{xx'} ≠ 0. For the great majority of physical observables, and for a given x, the number of elements x' connected by those matrix elements is polynomial in the system size; thus the summation can be carried out systematically. This is to be contrasted with the summations Σ_x |Ψ(x)|², where one typically has an exponentially large number of possible values of x over which to sum, so the summation cannot be done by brute force. The powerful idea of Variational Monte Carlo is therefore to replace these sums over exponentially many states with a statistical average over a large but finite set of states sampled according to the probability distribution Π(x).

We therefore have a way to compute, stochastically, the expectation values of all the properties of interest. For example, we might want to compute the expectation value of σ^x_i for a spin system, the expectation value of c†_i c_j for fermions, or even the expectation value of the interaction energy W_ee(r_1 ... r_N) for electronic structure problems.

1.1.2 Energy

An immediate corollary of the previously presented scheme is that the expectation value of the Hamiltonian H (which is itself a generic off-diagonal operator) can also be computed using the estimator (1.10). Historically, the local estimator associated with the Hamiltonian is called the local energy:

    E_loc(x) = Σ_{x'} H_{xx'} Ψ(x') / Ψ(x).    (1.11)

1.2 Stochastic Variational Optimization

The final goal we want to achieve here is to optimize the variational energy. In practice, we assume that the wave-function depends on some (possibly millions of) parameters p = p_1, ... p_M.
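To make the local-estimator construction of Eqs. (1.8)-(1.10) concrete, here is a minimal Python sketch. The two-spin toy system, the choice of operator (σ^x on the first spin), and all variable names are illustrative assumptions; the Hilbert space is small enough that we can enumerate it exactly instead of sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy system: N = 2 spins, Hilbert-space dimension 4.
# Basis states x are indexed 0..3; psi is an arbitrary real wave-function.
psi = rng.normal(size=4)

# Off-diagonal operator: sigma^x on the first spin (flips the first bit).
sx1 = np.zeros((4, 4))
for x in range(4):
    sx1[x ^ 1, x] = 1.0   # matrix element O_{x', x} with x' = x, first spin flipped

# Exact quantum expectation value <Psi|O|Psi> / <Psi|Psi>
exact = psi @ sx1 @ psi / (psi @ psi)

# Local-estimator route: O_loc(x) = sum_x' O_{xx'} psi(x') / psi(x),
# averaged over Pi(x) proportional to psi(x)^2
pi = psi**2 / np.sum(psi**2)
o_loc = np.array([sx1[x] @ psi / psi[x] for x in range(4)])
stochastic = np.sum(pi * o_loc)

print(exact, stochastic)  # the two routes agree
```

In a real calculation the average over Π(x) would of course be replaced by a Monte Carlo sample average, but the identity being checked is the same.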
We have seen that the expectation value of the energy can be written as a statistical average of the form

    ⟨H⟩ = ⟨E_loc⟩.    (1.12)

It is easy to show that the gradient of the energy can also be written in the form of the expectation value of a stochastic variable. In particular, define

    D_k(x) = ∂_{p_k} Ψ(x) / Ψ(x);    (1.13)
then

    ∂_{p_k}⟨H⟩ = ∂_{p_k} [ Σ_{x,x'} Ψ*(x) H_{xx'} Ψ(x') / Σ_x |Ψ(x)|² ]

               = [ Σ_{x,x'} Ψ*(x) H_{xx'} D_k(x') Ψ(x') + Σ_{x,x'} Ψ*(x) D*_k(x) H_{xx'} Ψ(x') ] / Σ_x |Ψ(x)|²
                 − [ Σ_{x,x'} Ψ*(x) H_{xx'} Ψ(x') / Σ_x |Ψ(x)|² ] · [ Σ_x |Ψ(x)|² (D_k(x) + D*_k(x)) / Σ_x |Ψ(x)|² ]

               = ⟨E_loc D*_k⟩ − ⟨E_loc⟩⟨D*_k⟩ + c.c.    (1.14)

We can therefore compactly write ∂_{p_k}⟨H⟩ = ⟨G_k⟩, with the gradient estimator being

    G_k(x) = 2 Re[ (E_loc(x) − ⟨E_loc⟩) D*_k(x) ].    (1.15)

1.2.1 Zero-Variance Property

One of the most interesting features of the energy and energy-gradient estimators presented so far is that they have the so-called zero-variance property: their statistical fluctuations are exactly zero when sampling from the exact ground-state wave-function. Let us consider for example

    var(E_loc) = ⟨E²_loc⟩ − ⟨E_loc⟩²
               = Σ_x |Ψ(x)|² E_loc(x)² / Σ_x |Ψ(x)|² − ⟨H⟩²
               = Σ_x [ Σ_{x_1} H_{x,x_1} Ψ(x_1) ][ Σ_{x_2} H_{x,x_2} Ψ(x_2) ] / Σ_x Ψ(x)² − ⟨H⟩²
               = Σ_{x_1,x_2} Ψ(x_1) [ Σ_x H_{x_1,x} H_{x,x_2} ] Ψ(x_2) / Σ_x Ψ(x)² − ⟨H⟩²
               = ⟨H²⟩ − ⟨H⟩²,    (1.16)

where we have assumed for simplicity that the wave-function is real. The variance of the local energy is therefore an important physical quantity: the energy variance. It is easy to see that if Ψ is an eigenstate of H then ⟨H²⟩ = ⟨H⟩² = E²_0, and var(E_loc) = 0, i.e. the statistical fluctuations completely vanish. This property is very important since it also implies that, in a sense to be specified below, the closer we get to the ground state, the smaller the fluctuations on the quantity we want to minimize, the energy.
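The zero-variance property is easy to verify numerically. The sketch below (a hypothetical random symmetric Hamiltonian of dimension 6; all names are illustrative) computes var(E_loc) by exact enumeration and shows that it vanishes for the exact ground state while staying finite for a generic state:

```python
import numpy as np

# Toy check of the zero-variance property on a random real symmetric
# "Hamiltonian" of dimension 6 (purely illustrative).
rng = np.random.default_rng(1)
H = rng.normal(size=(6, 6))
H = (H + H.T) / 2

def local_energy_variance(psi, H):
    """var(E_loc) under Pi(x) ~ psi(x)^2, for a real wave-function psi."""
    pi = psi**2 / np.sum(psi**2)
    e_loc = (H @ psi) / psi          # E_loc(x) = sum_x' H_{xx'} psi(x') / psi(x)
    mean = np.sum(pi * e_loc)
    return np.sum(pi * e_loc**2) - mean**2

# Exact ground state: fluctuations vanish (up to round-off).
evals, evecs = np.linalg.eigh(H)     # eigh returns ascending eigenvalues
psi0 = evecs[:, 0]
v0 = local_energy_variance(psi0, H)
print(v0)   # ~0

# A generic state has a finite energy variance <H^2> - <H>^2.
psi = rng.normal(size=6)
vr = local_energy_variance(psi, H)
print(vr)   # finite, > 0
```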
1.2.2 Stochastic Gradient Descent

The gradient descent method is the simplest optimization scheme, where at each iteration the variational parameters are modified according to

    p_k^(i+1) = p_k^(i) − η ∂_{p_k}⟨H⟩,    (1.17)

where η is a (small) parameter called the learning rate in the machine learning community. An important difference with respect to the non-stochastic (deterministic) gradient descent approach is that we now only have stochastic averages of the gradient, which is therefore subject to noise. Let us assume for simplicity that all the components of the gradient are subject to the same amount of Gaussian noise with variance σ², i.e.

    ∂_{p_k}⟨H⟩ = Normal(⟨G_k⟩, σ).    (1.18)

We can then compare Eq. (1.17) to the discretized Langevin equation

    p_k^(i+1) = p_k^(i) − δ_t ⟨G_k⟩ + Normal(0, √(2 δ_t T)),    (1.19)

where δ_t is a small time step. This equation samples the Boltzmann distribution

    Π_B(p_1 ... p_M) = e^(−⟨H⟩/T),    (1.20)

which in the limit T → 0 converges to the variational ground state, i.e. to min_p ⟨H⟩(p). We therefore see that the variance of the gradient corresponds to an effective temperature through

    σ² = var(Ḡ_k) = 2T/δ_t,    (1.21)
    η = δ_t.    (1.22)

Since we want to find the variational ground state, we should have a scheme in which the temperature is gradually decreased at each optimization step, i.e. T_1 > T_2 > T_3 ..., as in the simulated annealing optimization protocol. The first thing we notice is that σ² decreases like 1/N_s, the inverse of the number of samples in the Markov chain; therefore

    T = η var(Ḡ_k) / 2    (1.23)
      ≈ η var(G_k) / (2 N_s),    (1.24)

and convenient ways to reduce the temperature are either to reduce the learning rate, η(i) = η_0/(i + 1), or to increase the number of samples with the iteration count. During the optimization, however, it often happens that if we are close enough to the ground-state solution then var(G_k) → 0. Indeed, it is easy to show that for an exact eigenstate the statistical fluctuations of the gradient vanish exactly, i.e. var(G_k) = 0.
In practice, then, even a constant number of samples and a fixed (small) η are sufficient to converge to the ground state, provided that one checks during the optimization that the value of the effective temperature (1.23) actually goes to zero as expected.
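A minimal sketch of gradient descent with noisy gradients, in the spirit of Eqs. (1.17)-(1.18). The toy "variational energy" E(p) = (p − 2)², the noise level, and all numerical values are illustrative assumptions, not the lecture's actual optimization problem:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy gradient of the toy energy E(p) = (p - 2)^2: exact gradient
# 2 (p - 2), plus Gaussian noise mimicking a finite-sample estimate.
def noisy_grad(p, sigma):
    return 2.0 * (p - 2.0) + sigma * rng.normal()

p, eta, sigma = 10.0, 0.05, 0.5
for i in range(2000):
    p -= eta * noisy_grad(p, sigma)   # Eq. (1.17) with a stochastic gradient

# The parameter fluctuates around the minimum p = 2 with a spread set by
# the effective temperature T = eta * sigma^2 / 2; annealing (decreasing
# eta, or increasing the sample size) would shrink these fluctuations.
print(p)
```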
1.3 Example: Jastrow Factors

Let us give a specific example of variational states. We now consider a system of interacting particles in continuous space, for which the most general Hamiltonian is

    H = −(ℏ²/2m) Σ_i ∇²_{r_i} + Σ_i V_1(r_i) + Σ_{i<j} V_2(r_i, r_j),    (1.25)

where V_1 and V_2 are generic one- and two-body interaction potentials. We now define the exact Jastrow-Feenberg expansion for the many-body state:

    Ψ_p(r_1, ... r_N) = Ψ_0(r_1, ... r_N) exp[ Σ_i J_1(r_i) + (1/2!) Σ_{ij} J_2(r_i, r_j) + ... + (1/p!) Σ_{i_1...i_p} J_p(r_{i_1}, ... r_{i_p}) ],    (1.26)

where Ψ_0(r_1, ... r_N) is some parameter-independent wave-function, and the variational parameters are the functions J_1(r), J_2(r, r'), ... J_p(r_1, r_2 ... r_p). This expansion is clearly exact when p = N; in practice, however, one observes convergence to the exact ground state much sooner, and typically p = 2, 3 is enough to obtain very accurate results.

1.3.1 One-dimensional trapped particles

As a simple exercise one can consider single-particle, one-dimensional Hamiltonians of trapped particles, for which V_1(x) is an even function of x and V_2 = 0 (non-interacting particles). In this case (for symmetry reasons) one can write the function expansion J_1(x) = p_1 x² + p_2 x⁴ + ..., where p_1, p_2, etc. are the parameters to be determined variationally. In this case it is easy to show that

    (1/Ψ(x)) ∂²_x Ψ(x) = J''_1(x) + J'_1(x)²,    (1.27)

    E_loc(x) = −(ℏ²/2m) (J''_1(x) + J'_1(x)²) + V_1(x),    (1.28)

    D_k(x) = x^(2k).    (1.29)
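For a concrete check of Eqs. (1.27)-(1.28), consider a harmonic trap V_1(x) = x²/2 in units ℏ = m = 1 (an assumed choice for illustration), truncating the expansion at J_1(x) = p_1 x². For p_1 = −1/2, exp(J_1) is the exact Gaussian ground state, so E_loc is constant and equal to the ground-state energy 1/2 (the zero-variance property again):

```python
import numpy as np

# Local energy, Eq. (1.28), for J1(x) = p1 * x^2 in a harmonic trap
# V1(x) = x^2 / 2, with hbar = m = 1 (illustrative units).
def e_loc(x, p1):
    j1pp = 2.0 * p1                # J1''(x)
    j1p_sq = (2.0 * p1 * x)**2     # J1'(x)^2
    return -0.5 * (j1pp + j1p_sq) + 0.5 * x**2

x = np.linspace(-3, 3, 7)
# Optimal parameter: E_loc(x) = 1/2 for every x.
print(e_loc(x, -0.5))       # [0.5 0.5 ... 0.5]
# Non-optimal parameter: E_loc fluctuates with x.
print(np.std(e_loc(x, -0.3)))
```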
Chapter 2
Neural-Network Quantum States

In the first part of this lecture we have rephrased the problem of finding a ground state in terms of a stochastic optimization problem. To really take advantage of the potential of machine learning, however, it is still necessary to accomplish task 1 in our previous list: we need to define a suitable machine to solve our learning problem. This is what we have done in our recent work [CarleoTroyer2017].

2.1 Wave-Function as a Neural Network

The fundamental problem with the stochastic optimization scheme described before is that, in principle, to achieve the exact ground-state energy one needs to consider exponentially many parameters. To see this point, consider the case of N spin-1/2 particles; then the exact ground-state wave-function is fully specified by the 2^N amplitudes

    ⟨x|Ψ⟩ = Ψ(x),    (2.1)

for all the possible values of x = σ^z_1 σ^z_2 ... σ^z_N. This task, however, is clearly unfeasible when the number of particles N is too large. For example, one can do a back-of-the-envelope calculation to show that merely storing the wave-function for more than 100 spins would require a number of atoms larger than what can be found on our planet!

However, this exponential complexity is not necessarily a limiting factor. We can indeed think of using the ability of artificial neural networks to compress high-dimensional data into a low-dimensional representation. The starting point is to ask a suitable neural network to compute the wave-function amplitudes. Formally, we then set

    Ψ(x) = F(x; p_1, p_2 ... p_{N_p}),    (2.2)

where F is the output of a suitably chosen artificial neural network, depending on a set of parameters p.
2.2 Restricted Boltzmann Machines

The choice of the specific neural network used to represent the wave-function is arbitrary, provided that it is reasonably expressive (i.e. that in the limit of large N_p we can always recover the exact wave-function). A convenient choice is the so-called Restricted Boltzmann Machine (RBM), which is defined as

    F_rbm(σ^z_1, σ^z_2, ... σ^z_N) = Σ_{h} exp[ Σ_{ij} W_{ij} σ^z_i h_j + Σ_j h_j b_j + Σ_i σ^z_i a_i ],    (2.3)

where the network parameters are W, a, and b. This architecture corresponds to the partition function of a gas of M hidden units (h_j) connected to the physical spins (σ^z_i). Since connections are allowed only between hidden and visible units, but not between hidden units, nor between visible units, this architecture is called restricted. Because of this restriction, however, it is easy to compute F explicitly. Indeed

    Σ_{h} exp[ Σ_{ij} W_{ij} σ^z_i h_j + Σ_j h_j b_j + Σ_i σ^z_i a_i ]    (2.4)
    = e^(Σ_i σ^z_i a_i) Σ_{h} Π_j exp[ Σ_i W_{ij} σ^z_i h_j + h_j b_j ]    (2.5)
    = e^(Σ_i σ^z_i a_i) Π_j ( exp[ Σ_i W_{ij} σ^z_i + b_j ] + exp[ −Σ_i W_{ij} σ^z_i − b_j ] )    (2.6)
    = e^(Σ_i σ^z_i a_i) Π_j 2 cosh[ Σ_i W_{ij} σ^z_i + b_j ].    (2.7)

Because the wave-function can in general be complex valued, the weights in this expression should also be taken complex. It is easy to convince oneself that if this is the case then the wave-function takes arbitrary complex values.

2.3 An example implementation

During the lecture I will show an example implementation of the stochastic optimization algorithm for neural-network RBM states. In particular, I will consider the transverse-field Ising Hamiltonian in 1D:

    H = −h Σ_i σ^x_i − J Σ_i σ^z_i σ^z_(i+1),    (2.8)

with periodic boundary conditions over a ring of L sites. To simplify things, and knowing that the ground-state wave-function in this case is positive definite, I will consider the following quantum state [TGC2017]:

    Ψ(σ^z_1, σ^z_2, ... σ^z_N) = √( F_rbm(σ^z_1, σ^z_2, ... σ^z_N) ),    (2.9)
where the specific RBM taken here contains only real-valued parameters. An advantage of this formulation is that sampling from |Ψ(σ^z_1, σ^z_2, ... σ^z_N)|² = F_rbm(σ^z_1, σ^z_2, ... σ^z_N) is particularly easy, since it can be done using alternate Gibbs sampling.

2.3.1 Gibbs Sampling

Gibbs sampling is a special case of the Metropolis-Hastings algorithm (see Appendix A). When the RBM has only real-valued parameters, one can interpret the quantity

    P(σ, h) = exp[ Σ_{ij} W_{ij} σ^z_i h_j + Σ_j h_j b_j + Σ_i σ^z_i a_i ]    (2.10)

as a joint probability density (apart from a global normalization) of the physical and hidden units [FischerIgel2014]. The idea of alternate Gibbs sampling is then to devise a two-step Markov-chain sampling with transition probabilities

    T_σ((σ, h) → (σ', h)) = P(σ', h) / Σ_{σ'} P(σ', h) = P(σ'|h),    (2.11)
    T_h((σ, h) → (σ, h')) = P(σ, h') / Σ_{h'} P(σ, h') = P(h'|σ).    (2.12)

The acceptance probability for these two types of moves can be readily computed using the Metropolis-Hastings acceptance rule

    A(x → x') = min( 1, [Π(x')/Π(x)] [T(x' → x)/T(x → x')] ),    (2.13)

where in one case spin configurations are changed, x' = (σ', h), and in the other case hidden-variable configurations are changed, x' = (σ, h'). For example, in the first case the acceptance probability reads

    A((σ, h) → (σ', h)) = min{ 1, [P(σ', h)/P(σ, h)] [T_σ((σ', h) → (σ, h)) / T_σ((σ, h) → (σ', h))] }
                        = min{ 1, [P(σ', h)/P(σ, h)] [P(σ|h)/P(σ'|h)] }
                        = 1,    (2.14)

where in the last line we have used the fact that

    P(σ'|h) / P(σ|h) = [ P(σ', h) / Σ_σ P(σ, h) ] / [ P(σ, h) / Σ_σ P(σ, h) ] = P(σ', h) / P(σ, h).    (2.15)

The same reasoning applies to moves that change the hidden units only, and one obtains an acceptance of 1 as well. The important point is that P(σ|h) and P(h|σ) can be computed exactly for an RBM.
For example, we have that

    P(h|σ) = P(σ, h) / Σ_{h'} P(σ, h') = Π_j exp[ (Σ_i W_{ij} σ^z_i + b_j) h_j ] / Π_j 2 cosh[ Σ_i W_{ij} σ^z_i + b_j ],    (2.16)

and each hidden variable has a probability which is independent of the values of the other hidden variables. We have

    P(h_j = 1 | σ) = Logistic(2θ_j),    (2.17)
    P(h_j = −1 | σ) = Logistic(−2θ_j),    (2.18)

where Logistic(x) = 1/(1 + exp(−x)) and θ_j = Σ_i W_{ij} σ^z_i + b_j. A similar expression can be derived for the other conditional probability, which reads

    P(σ|h) = P(σ, h) / P(h) = Π_i exp[ (Σ_j W_{ij} h_j + a_i) σ^z_i ] / Π_i 2 cosh[ Σ_j W_{ij} h_j + a_i ],    (2.19)

and each spin variable has a probability which is independent of the values of the other spin variables. We then have

    P(σ_i = 1 | h) = Logistic(2γ_i),    (2.20)
    P(σ_i = −1 | h) = Logistic(−2γ_i),    (2.21)

where γ_i = Σ_j W_{ij} h_j + a_i. Proposing spin and hidden-variable configurations according to the Gibbs transition probability is therefore very easy and consists in the following:

1. Generate N random numbers r_i ∈ [0, 1).
2. Set the i-th spin with probability P(σ_i = 1|h) = Logistic(2γ_i), i.e. if P(σ_i = 1|h) > r_i then set σ_i = 1, otherwise σ_i = −1.
3. Generate M random numbers l_j ∈ [0, 1).
4. Set the j-th hidden unit with probability P(h_j = 1|σ) = Logistic(2θ_j), i.e. if P(h_j = 1|σ) > l_j then set h_j = 1, otherwise h_j = −1.

Repeating these steps N_s times, we generate spin configurations sampled from |Ψ(σ^z_1, σ^z_2, ... σ^z_N)|² = F_rbm(σ^z_1, σ^z_2, ... σ^z_N) of Eq. (2.3). Notice that this scheme is rather easy to implement, since we do not need to perform a Metropolis-Hastings test at each step of the Markov chain, given that all moves are accepted (see Appendix A for details).
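The four steps above can be sketched as follows, for ±1-valued spins and hidden units as in the text. The system sizes and the small random couplings are hypothetical, chosen only to exercise the sampler:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(sigma, W, a, b, rng):
    """One sweep of alternate Gibbs sampling for a real-parameter RBM
    with +/-1 units, using the conditionals of Eqs. (2.17)-(2.21).
    All moves are accepted, so no Metropolis test is needed."""
    theta = sigma @ W + b                     # theta_j = sum_i W_ij sigma_i + b_j
    h = np.where(rng.random(b.size) < logistic(2.0 * theta), 1.0, -1.0)
    gamma = W @ h + a                         # gamma_i = sum_j W_ij h_j + a_i
    sigma = np.where(rng.random(a.size) < logistic(2.0 * gamma), 1.0, -1.0)
    return sigma, h

# Illustrative usage with small random couplings (hypothetical values).
rng = np.random.default_rng(3)
N, M = 6, 4
W = 0.1 * rng.normal(size=(N, M))
a, b = np.zeros(N), np.zeros(M)
sigma = np.where(rng.random(N) < 0.5, 1.0, -1.0)
for _ in range(100):
    sigma, h = gibbs_step(sigma, W, a, b, rng)
print(sigma)   # a configuration approximately distributed as F_rbm(sigma)
```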
2.3.2 Computing the local energy

For a given spin configuration, we also need to compute the local energy

    E_loc(σ) = Σ_{σ'} H_{σ,σ'} Ψ(σ') / Ψ(σ).    (2.22)

For the transverse-field Ising model, the sum runs over the N + 1 configurations σ'(0) = σ and σ'(k) = σ^z_1, ... −σ^z_k, ... σ^z_N, with H_{σ,σ'(0)} = −J Σ_i σ^z_i σ^z_(i+1) and H_{σ,σ'(k>0)} = −h. This sum can then be computed in polynomial time, and it is efficiently done by pre-computing the values of the angles θ_j. In particular,

    Ψ(σ'(k)) / Ψ(σ) = √( e^(−2 a_k σ_k) Π_j cosh(θ_j − 2 σ_k W_{kj}) / cosh(θ_j) ).    (2.23)

2.3.3 Computing the variational derivatives

The variational derivatives can also be computed efficiently and read

    D_k(σ) = ∂_{p_k} Ψ(σ) / Ψ(σ),    (2.24)
    D_{a_i}(σ) = (1/2) σ^z_i,    (2.25)
    D_{b_j}(σ) = (1/2) tanh(θ_j),    (2.26)
    D_{W_{ij}}(σ) = (1/2) tanh(θ_j) σ^z_i.    (2.27)
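Putting the pieces of this section together, here is a Python sketch of the local energy and variational derivatives for the 1D transverse-field Ising model. The couplings and configuration are hypothetical placeholders; the wave-function is taken as log Ψ = ½ log F_rbm, consistent with |Ψ|² = F_rbm for the real-parameter RBM used above:

```python
import numpy as np

def log_psi(sigma, W, a, b):
    """log Psi(sigma) = (1/2) log F_rbm(sigma) for a real-parameter RBM."""
    theta = sigma @ W + b
    return 0.5 * (a @ sigma + np.sum(np.log(2.0 * np.cosh(theta))))

def local_energy(sigma, W, a, b, J=1.0, h=1.0):
    """Local energy of the 1D transverse-field Ising model (periodic ring)."""
    # Diagonal part: -J sum_i sigma_i sigma_{i+1}.
    e = -J * np.sum(sigma * np.roll(sigma, -1))
    # Off-diagonal part: -h sum_k Psi(sigma with spin k flipped) / Psi(sigma),
    # evaluated from the pre-computed angles theta_j.
    theta = sigma @ W + b
    for k in range(sigma.size):
        log_ratio = 0.5 * (-2.0 * a[k] * sigma[k]
                           + np.sum(np.log(np.cosh(theta - 2.0 * sigma[k] * W[k])
                                           / np.cosh(theta))))
        e += -h * np.exp(log_ratio)
    return e

def derivatives(sigma, W, a, b):
    """Variational derivatives D_a, D_b, D_W of log Psi."""
    theta = sigma @ W + b
    return 0.5 * sigma, 0.5 * np.tanh(theta), 0.5 * np.outer(sigma, np.tanh(theta))

# Hypothetical small instance, just to exercise the functions.
rng = np.random.default_rng(4)
N, M = 4, 2
W = 0.1 * rng.normal(size=(N, M))
a, b = np.zeros(N), np.zeros(M)
sigma = np.array([1.0, -1.0, 1.0, 1.0])
print(local_energy(sigma, W, a, b))
```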
Bibliography

[McMillan1965] William L. McMillan, Phys. Rev. 138, A442 (1965).

[FischerIgel2014] Asja Fischer and Christian Igel. Training Restricted Boltzmann Machines: An Introduction. Pattern Recognition 47, 25-39.

[CarleoTroyer2017] Giuseppe Carleo and Matthias Troyer. Solving the quantum many-body problem with artificial neural networks. Science 355.

[TGC2017] Giacomo Torlai, Guglielmo Mazzola, Juan Carrasquilla, Matthias Troyer, Roger Melko, and Giuseppe Carleo. arXiv preprint.
Appendix A
Sampling Methods

During the lecture we have established a fundamental connection between quantum mechanics and statistical sampling. For this mapping to be efficient, we need an efficient way of sampling from the probability distribution Π(x) = |Ψ(x)|². In particular, the goal is to generate N_s samples x^(1), x^(2), ... x^(N_s) such that we can estimate expectation values as averages over those samples:

    ⟨O_loc⟩ ≈ (1/N_s) Σ_i O_loc(x^(i)).    (A.1)

A.0.1 Markov Chains and Detailed Balance

A Markov chain is completely specified by the transition probability T(x^(i) → x^(i+1)), i.e. given a sample x^(i), we transition to the next element of the chain with probability T. The transition probability (like all well-defined probabilities) must always be normalized: Σ_{x'} T(x → x') = 1. We would like to devise a Markov-chain process such that Π_mc(x) = Π(x), i.e. such that the probability with which a given state x appears in the chain is equal to the desired probability we want to sample from. An important condition for this to happen is that the probability distribution Π_mc(x) is stationary, i.e. all states along the chain should be distributed according to the same probability, and this should not change along the chain. A sufficient condition for this is

    Π(x) T(x → x') = Π(x') T(x' → x),    (A.2)

which is called the detailed balance equation. This condition basically enforces stationarity (also called micro-reversibility) in the chain: the probability of being in a given state x and making a transition to another state x' must be equal to that of the reverse process, starting from x' and transitioning to x.

A.0.2 The Metropolis-Hastings Algorithm

There exist many possible transition probabilities that satisfy the detailed balance condition (A.2); however, the most famous choice is certainly the Metropolis-Hastings prescription. In this case, we separate the transition process into two steps:

    T(x → x') = T̃(x → x') A(x → x'),    (A.3)

i.e. we first propose a state with some (simple) probability distribution T̃(x → x') we can easily sample from, and then accept or reject the new state x' as the next element of the chain with probability A(x → x'). Using the detailed balance condition, we see that the acceptance probability must satisfy

    A(x → x') / A(x' → x) = [Π(x')/Π(x)] [T̃(x' → x)/T̃(x → x')].    (A.4)

A possible acceptance that satisfies this condition is

    A(x → x') = min( 1, [Π(x')/Π(x)] [T̃(x' → x)/T̃(x → x')] ).    (A.5)

Notice that this acceptance probability satisfies (A.4): if [Π(x')/Π(x)] [T̃(x' → x)/T̃(x → x')] < 1, then [Π(x)/Π(x')] [T̃(x → x')/T̃(x' → x)] > 1, so A(x' → x) = 1 and (A.4) is trivially verified. The same reasoning applies to the case [Π(x')/Π(x)] [T̃(x' → x)/T̃(x → x')] > 1.

The Metropolis-Hastings algorithm can then be summarized in the following steps:

1. Generate a random state x', drawing from the (simple) transition probability T̃(x^(i) → x').
2. Compute the quantity

    R = [Π(x') / Π(x^(i))] [T̃(x' → x^(i)) / T̃(x^(i) → x')].    (A.6)

3. Draw a uniformly distributed random number η ∈ [0, 1).
4. If R > η, accept the new state, i.e. x^(i+1) = x'. Otherwise, the next state in the chain remains the current one: x^(i+1) = x^(i).

Notice that steps 2-4 implement the decision of whether to accept or reject the proposed state according to the Metropolis probability (A.5).
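The four steps above can be sketched for a generic discrete target distribution. In this illustrative example (all names and the target Π are hypothetical) the proposal is uniform and therefore symmetric, so the T̃-ratio in R drops out:

```python
import numpy as np

def metropolis_sample(pi, n_samples, rng):
    """Metropolis-Hastings chain for a discrete target pi over states
    0..K-1, with a symmetric (uniform) proposal distribution."""
    K = pi.size
    x = 0
    chain = np.empty(n_samples, dtype=int)
    for i in range(n_samples):
        x_new = rng.integers(K)       # step 1: propose a state
        R = pi[x_new] / pi[x]         # step 2: ratio (A.6), proposal cancels
        if R > rng.random():          # steps 3-4: accept/reject
            x = x_new
        chain[i] = x
    return chain

rng = np.random.default_rng(5)
pi = np.array([0.1, 0.2, 0.3, 0.4])
chain = metropolis_sample(pi, 200_000, rng)
freq = np.bincount(chain, minlength=pi.size) / chain.size
print(freq)   # empirical frequencies close to pi
```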
Appendix B
Estimating Errors and Auto-Correlation Times

Since Markov chains are generated by transitioning from one state to the next, it is natural to expect that adjacent points in the chain will be statistically correlated. To quantify this notion of correlation more precisely, let us first consider the Markov-chain estimate of the expectation value of a given function:

    ḡ_{n_s} = (1/n_s) Σ_i g_i,    (B.1)

where we have used the short-hand g_i ≡ g(x^(i)). The law of large numbers states that

    ḡ_{n_s} → Σ_x Π(x) g(x) ≡ ⟨g⟩ for n_s → ∞,    (B.2)

and the central limit theorem says that ḡ_{n_s} is a normally distributed random variable,

    Prob(ḡ_{n_s}) = Normal(⟨g⟩, σ²),    (B.3)

with expected value ⟨g⟩ and variance σ² = var(ḡ_{n_s}), where the variance is computed over different realizations of the Markov chain. It explicitly reads

    var(ḡ_{n_s}) = var( (1/n_s) Σ_i g_i )
                 = (1/n_s²) Σ_{ij} ⟨g_i g_j⟩ − (1/n_s²) Σ_{ij} ⟨g_i⟩⟨g_j⟩
                 = (1/n_s) [ var(g_0) + 2 Σ_{j=1}^{n_s} ( ⟨g_0 g_j⟩ − ⟨g_0⟩⟨g_j⟩ ) (1 − j/n_s) ],    (B.4)

where we have assumed that the Markov chain is stationary, i.e. var(g_i) does not depend on the index i, and the same for the covariance. Therefore

    var(ḡ_{n_s}) = (1/n_s) var(g_0) 2τ_int,    (B.5)
having defined the integrated auto-correlation time as

    τ_int = 1/2 + Σ_{j=1}^{n_s} [ ( ⟨g_0 g_j⟩ − ⟨g_0⟩⟨g_j⟩ ) / var(g_0) ] (1 − j/n_s).    (B.6)

We therefore see that unless the Markov-chain samples are completely uncorrelated (i.e. ⟨g_i g_j⟩ − ⟨g_i⟩⟨g_j⟩ = 0) the statistical error on the estimator ḡ_{n_s} is increased by the positive factor 2τ_int. A way to correctly estimate the integrated autocorrelation time is through the correlation function

    ρ(j) = ( ⟨g_0 g_j⟩ − ⟨g⟩² ) / ( ⟨g²⟩ − ⟨g⟩² ),    (B.7)

and a numerically stable estimate of the correlation time is given by

    τ_int ≈ 1/2 + Σ_{j=1}^{j_cut} ρ(j),    (B.8)

where j_cut is chosen for numerical stability as the first j such that ρ(j) < 0. In practice, given a sequence of estimates g_1, ... g_{n_s} = g, the correlation function can be efficiently estimated with a sequence of Fast Fourier Transforms and their inverses:

    A = FFT(g − ḡ),    (B.9)
    B = A A*,    (B.10)
    ρ = FFT⁻¹(B) / ( ⟨g²⟩ − ⟨g⟩² ).    (B.11)
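The FFT-based estimator of Eqs. (B.7)-(B.11) can be sketched as below. The check uses an AR(1) chain g_i = φ g_{i−1} + ξ_i (an illustrative assumption, not from the lecture), for which ρ(j) = φ^j and hence τ_int = 1/2 + φ/(1 − φ):

```python
import numpy as np

def integrated_autocorrelation_time(g):
    """Estimate tau_int via the correlation function rho(j), computed
    with an FFT (zero-padded to avoid circular wrap-around)."""
    n = g.size
    g0 = g - g.mean()
    f = np.fft.fft(g0, 2 * n)
    acf = np.fft.ifft(f * np.conj(f)).real[:n] / n   # <g_0 g_j> - <g>^2
    rho = acf / acf[0]                               # rho(0) = 1
    tau = 0.5
    for j in range(1, n):
        if rho[j] < 0:                               # j_cut: first negative rho
            break
        tau += rho[j]
    return tau

# Illustrative check on an AR(1) chain with known autocorrelation.
rng = np.random.default_rng(6)
phi, n = 0.9, 200_000
g = np.empty(n)
g[0] = 0.0
for i in range(1, n):
    g[i] = phi * g[i - 1] + rng.normal()

tau = integrated_autocorrelation_time(g)
print(tau)   # roughly 1/2 + phi/(1 - phi) = 9.5
```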
More information763622S ADVANCED QUANTUM MECHANICS Solution Set 1 Spring c n a n. c n 2 = 1.
7636S ADVANCED QUANTUM MECHANICS Soluton Set 1 Sprng 013 1 Warm-up Show that the egenvalues of a Hermtan operator  are real and that the egenkets correspondng to dfferent egenvalues are orthogonal (b)
More informationSome Comments on Accelerating Convergence of Iterative Sequences Using Direct Inversion of the Iterative Subspace (DIIS)
Some Comments on Acceleratng Convergence of Iteratve Sequences Usng Drect Inverson of the Iteratve Subspace (DIIS) C. Davd Sherrll School of Chemstry and Bochemstry Georga Insttute of Technology May 1998
More informationProf. Dr. I. Nasser Phys 630, T Aug-15 One_dimensional_Ising_Model
EXACT OE-DIMESIOAL ISIG MODEL The one-dmensonal Isng model conssts of a chan of spns, each spn nteractng only wth ts two nearest neghbors. The smple Isng problem n one dmenson can be solved drectly n several
More informationNumerical Heat and Mass Transfer
Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and
More informationSupporting Information
Supportng Informaton The neural network f n Eq. 1 s gven by: f x l = ReLU W atom x l + b atom, 2 where ReLU s the element-wse rectfed lnear unt, 21.e., ReLUx = max0, x, W atom R d d s the weght matrx to
More informationLossy Compression. Compromise accuracy of reconstruction for increased compression.
Lossy Compresson Compromse accuracy of reconstructon for ncreased compresson. The reconstructon s usually vsbly ndstngushable from the orgnal mage. Typcally, one can get up to 0:1 compresson wth almost
More informationLECTURE 9 CANONICAL CORRELATION ANALYSIS
LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of
More informationEcon Statistical Properties of the OLS estimator. Sanjaya DeSilva
Econ 39 - Statstcal Propertes of the OLS estmator Sanjaya DeSlva September, 008 1 Overvew Recall that the true regresson model s Y = β 0 + β 1 X + u (1) Applyng the OLS method to a sample of data, we estmate
More informationA how to guide to second quantization method.
Phys. 67 (Graduate Quantum Mechancs Sprng 2009 Prof. Pu K. Lam. Verson 3 (4/3/2009 A how to gude to second quantzaton method. -> Second quantzaton s a mathematcal notaton desgned to handle dentcal partcle
More informationKernel Methods and SVMs Extension
Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general
More informationProbability Theory (revisited)
Probablty Theory (revsted) Summary Probablty v.s. plausblty Random varables Smulaton of Random Experments Challenge The alarm of a shop rang. Soon afterwards, a man was seen runnng n the street, persecuted
More informationFeature Selection: Part 1
CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?
More informationModule 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur
Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:
More information1 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations
Physcs 171/271 -Davd Klenfeld - Fall 2005 (revsed Wnter 2011) 1 Dervaton of Rate Equatons from Sngle-Cell Conductance (Hodgkn-Huxley-lke) Equatons We consder a network of many neurons, each of whch obeys
More informationRepresentation theory and quantum mechanics tutorial Representation theory and quantum conservation laws
Representaton theory and quantum mechancs tutoral Representaton theory and quantum conservaton laws Justn Campbell August 1, 2017 1 Generaltes on representaton theory 1.1 Let G GL m (R) be a real algebrac
More informationGeorgia Tech PHYS 6124 Mathematical Methods of Physics I
Georga Tech PHYS 624 Mathematcal Methods of Physcs I Instructor: Predrag Cvtanovć Fall semester 202 Homework Set #7 due October 30 202 == show all your work for maxmum credt == put labels ttle legends
More informationEffects of Ignoring Correlations When Computing Sample Chi-Square. John W. Fowler February 26, 2012
Effects of Ignorng Correlatons When Computng Sample Ch-Square John W. Fowler February 6, 0 It can happen that ch-square must be computed for a sample whose elements are correlated to an unknown extent.
More informationDifference Equations
Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1
More informationAdvanced Quantum Mechanics
Advanced Quantum Mechancs Rajdeep Sensarma! sensarma@theory.tfr.res.n ecture #9 QM of Relatvstc Partcles Recap of ast Class Scalar Felds and orentz nvarant actons Complex Scalar Feld and Charge conjugaton
More informationLecture 12: Discrete Laplacian
Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly
More informationPhysics 5153 Classical Mechanics. D Alembert s Principle and The Lagrangian-1
P. Guterrez Physcs 5153 Classcal Mechancs D Alembert s Prncple and The Lagrangan 1 Introducton The prncple of vrtual work provdes a method of solvng problems of statc equlbrum wthout havng to consder the
More informationBOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS
BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all
More informationLecture 21: Numerical methods for pricing American type derivatives
Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)
More informationWeek 5: Neural Networks
Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple
More informatione i is a random error
Chapter - The Smple Lnear Regresson Model The lnear regresson equaton s: where + β + β e for,..., and are observable varables e s a random error How can an estmaton rule be constructed for the unknown
More informationANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)
Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of
More informationProbabilistic Graphical Models
School of Computer Scence robablstc Graphcal Models Appromate Inference: Markov Chan Monte Carlo 05 07 Erc Xng Lecture 7 March 9 04 X X 075 05 05 03 X 3 Erc Xng @ CMU 005-04 Recap of Monte Carlo Monte
More informationNUMERICAL DIFFERENTIATION
NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the
More information2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification
E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton
More informationCIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M
CIS56: achne Learnng Lecture 3 (Sept 6, 003) Preparaton help: Xaoyng Huang Lnear Regresson Lnear regresson can be represented by a functonal form: f(; θ) = θ 0 0 +θ + + θ = θ = 0 ote: 0 s a dummy attrbute
More informationIntroduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:
CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and
More informationChapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems
Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons
More informationRate of Absorption and Stimulated Emission
MIT Department of Chemstry 5.74, Sprng 005: Introductory Quantum Mechancs II Instructor: Professor Andre Tokmakoff p. 81 Rate of Absorpton and Stmulated Emsson The rate of absorpton nduced by the feld
More informationEPR Paradox and the Physical Meaning of an Experiment in Quantum Mechanics. Vesselin C. Noninski
EPR Paradox and the Physcal Meanng of an Experment n Quantum Mechancs Vesseln C Nonnsk vesselnnonnsk@verzonnet Abstract It s shown that there s one purely determnstc outcome when measurement s made on
More informationChat eld, C. and A.J.Collins, Introduction to multivariate analysis. Chapman & Hall, 1980
MT07: Multvarate Statstcal Methods Mke Tso: emal mke.tso@manchester.ac.uk Webpage for notes: http://www.maths.manchester.ac.uk/~mkt/new_teachng.htm. Introducton to multvarate data. Books Chat eld, C. and
More informationClassification as a Regression Problem
Target varable y C C, C,, ; Classfcaton as a Regresson Problem { }, 3 L C K To treat classfcaton as a regresson problem we should transform the target y nto numercal values; The choce of numercal class
More informationReport on Image warping
Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.
More informationPh 219a/CS 219a. Exercises Due: Wednesday 23 October 2013
1 Ph 219a/CS 219a Exercses Due: Wednesday 23 October 2013 1.1 How far apart are two quantum states? Consder two quantum states descrbed by densty operators ρ and ρ n an N-dmensonal Hlbert space, and consder
More informationMMA and GCMMA two methods for nonlinear optimization
MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons
More informationInductance Calculation for Conductors of Arbitrary Shape
CRYO/02/028 Aprl 5, 2002 Inductance Calculaton for Conductors of Arbtrary Shape L. Bottura Dstrbuton: Internal Summary In ths note we descrbe a method for the numercal calculaton of nductances among conductors
More information12. The Hamilton-Jacobi Equation Michael Fowler
1. The Hamlton-Jacob Equaton Mchael Fowler Back to Confguraton Space We ve establshed that the acton, regarded as a functon of ts coordnate endponts and tme, satsfes ( ) ( ) S q, t / t+ H qpt,, = 0, and
More informationHidden Markov Models
Hdden Markov Models Namrata Vaswan, Iowa State Unversty Aprl 24, 204 Hdden Markov Model Defntons and Examples Defntons:. A hdden Markov model (HMM) refers to a set of hdden states X 0, X,..., X t,...,
More informationGrover s Algorithm + Quantum Zeno Effect + Vaidman
Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the
More informationLecture 3: Probability Distributions
Lecture 3: Probablty Dstrbutons Random Varables Let us begn by defnng a sample space as a set of outcomes from an experment. We denote ths by S. A random varable s a functon whch maps outcomes nto the
More informationU.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016
U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and
More informationMin Cut, Fast Cut, Polynomial Identities
Randomzed Algorthms, Summer 016 Mn Cut, Fast Cut, Polynomal Identtes Instructor: Thomas Kesselhem and Kurt Mehlhorn 1 Mn Cuts n Graphs Lecture (5 pages) Throughout ths secton, G = (V, E) s a mult-graph.
More informationSection 8.3 Polar Form of Complex Numbers
80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the
More informationStrong Markov property: Same assertion holds for stopping times τ.
Brownan moton Let X ={X t : t R + } be a real-valued stochastc process: a famlty of real random varables all defned on the same probablty space. Defne F t = nformaton avalable by observng the process up
More information1 (1 + ( )) = 1 8 ( ) = (c) Carrying out the Taylor expansion, in this case, the series truncates at second order:
68A Solutons to Exercses March 05 (a) Usng a Taylor expanson, and notng that n 0 for all n >, ( + ) ( + ( ) + ) We can t nvert / because there s no Taylor expanson around 0 Lets try to calculate the nverse
More informationChapter 9: Statistical Inference and the Relationship between Two Variables
Chapter 9: Statstcal Inference and the Relatonshp between Two Varables Key Words The Regresson Model The Sample Regresson Equaton The Pearson Correlaton Coeffcent Learnng Outcomes After studyng ths chapter,
More informationLagrangian Field Theory
Lagrangan Feld Theory Adam Lott PHY 391 Aprl 6, 017 1 Introducton Ths paper s a summary of Chapter of Mandl and Shaw s Quantum Feld Theory [1]. The frst thng to do s to fx the notaton. For the most part,
More informationRandom Walks on Digraphs
Random Walks on Dgraphs J. J. P. Veerman October 23, 27 Introducton Let V = {, n} be a vertex set and S a non-negatve row-stochastc matrx (.e. rows sum to ). V and S defne a dgraph G = G(V, S) and a drected
More informationPHYS 215C: Quantum Mechanics (Spring 2017) Problem Set 3 Solutions
PHYS 5C: Quantum Mechancs Sprng 07 Problem Set 3 Solutons Prof. Matthew Fsher Solutons prepared by: Chatanya Murthy and James Sully June 4, 07 Please let me know f you encounter any typos n the solutons.
More informationBezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0
Bezer curves Mchael S. Floater August 25, 211 These notes provde an ntroducton to Bezer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of the
More informationCSC 411 / CSC D11 / CSC C11
18 Boostng s a general strategy for learnng classfers by combnng smpler ones. The dea of boostng s to take a weak classfer that s, any classfer that wll do at least slghtly better than chance and use t
More informationThis chapter illustrates the idea that all properties of the homogeneous electron gas (HEG) can be calculated from electron density.
1 Unform Electron Gas Ths chapter llustrates the dea that all propertes of the homogeneous electron gas (HEG) can be calculated from electron densty. Intutve Representaton of Densty Electron densty n s
More informationCHAPTER 14 GENERAL PERTURBATION THEORY
CHAPTER 4 GENERAL PERTURBATION THEORY 4 Introducton A partcle n orbt around a pont mass or a sphercally symmetrc mass dstrbuton s movng n a gravtatonal potental of the form GM / r In ths potental t moves
More informationFeb 14: Spatial analysis of data fields
Feb 4: Spatal analyss of data felds Mappng rregularly sampled data onto a regular grd Many analyss technques for geophyscal data requre the data be located at regular ntervals n space and/or tme. hs s
More informationNon-interacting Spin-1/2 Particles in Non-commuting External Magnetic Fields
EJTP 6, No. 0 009) 43 56 Electronc Journal of Theoretcal Physcs Non-nteractng Spn-1/ Partcles n Non-commutng External Magnetc Felds Kunle Adegoke Physcs Department, Obafem Awolowo Unversty, Ile-Ife, Ngera
More informationAPPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14
APPROXIMAE PRICES OF BASKE AND ASIAN OPIONS DUPON OLIVIER Prema 14 Contents Introducton 1 1. Framewor 1 1.1. Baset optons 1.. Asan optons. Computng the prce 3. Lower bound 3.1. Closed formula for the prce
More informationU.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017
U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that
More informationLecture 14: Forces and Stresses
The Nuts and Bolts of Frst-Prncples Smulaton Lecture 14: Forces and Stresses Durham, 6th-13th December 2001 CASTEP Developers Group wth support from the ESF ψ k Network Overvew of Lecture Why bother? Theoretcal
More informationUsing T.O.M to Estimate Parameter of distributions that have not Single Exponential Family
IOSR Journal of Mathematcs IOSR-JM) ISSN: 2278-5728. Volume 3, Issue 3 Sep-Oct. 202), PP 44-48 www.osrjournals.org Usng T.O.M to Estmate Parameter of dstrbutons that have not Sngle Exponental Famly Jubran
More informationThe Order Relation and Trace Inequalities for. Hermitian Operators
Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence
More information