Radar Trackers. Study Guide. All chapters, problems, examples and page numbers refer to Applied Optimal Estimation, A. Gelb, Ed.


Chapter 1, Example 1.0-1

Problem statement:
- Two sensors; each has a single noisy measurement zᵢ = x + vᵢ.
- The unknown x is a constant.
- The measurement noises vᵢ are uncorrelated (generalized in Problem 1-1; pages 7 and 8).
- The estimate of x is a linear combination of the measurements that is not a function of the unknown to be estimated, x.

We define a measurement vector y,

  y = [ z₁ ]  =  [ x + v₁ ]  =  1 x + v
      [ z₂ ]     [ x + v₂ ]

where 1 is a vector with all ones, and a linear estimator gain k,

  k = [ k₁ ]
      [ k₂ ]

and an estimate that is a linear combination of the measurements: x̂ = kᵀ y. We define an error (note the difference in notation from the example, for consistency with later usage of similar notation)

  e = x̂ - x.

We require that k be independent of x and that the mean of the estimate be equal to x:

  E(x̂) = x

where E(·) is the expectation, or ensemble mean, operator.

Solution: We substitute the equations for the estimate and measurement to find a constraint on the linear estimator gain k as follows:

  E(x̂) = E(kᵀ y) = E(kᵀ (1 x + v)) = x kᵀ 1.

The hypothesis of zero-mean measurement noise is, in equation form, E(v) = 0, and we have kᵀ 1 = 1 as the condition on the linear estimator gain. Note that another solution is x = 0, which we discard since x is an unknown and is not to be constrained, since we have required that k not depend on x.

We minimize the mean square error, and denote what we are minimizing as J, which we will call a cost function. This is a term for an optimization criterion. This is

  J = E(e²) = E((x̂ - x)²).

Again substituting from the above, we have

  J = E( (kᵀ(1 x + v) - x)² )
    = E( x² (kᵀ1 - 1)² + 2 x (kᵀ1 - 1) kᵀ v + kᵀ v vᵀ k )
    = x² (kᵀ1 - 1)² + kᵀ R_v k

where the cross term vanishes because E(v) = 0. One term in this derivation contains a very special matrix, the covariance matrix of the measurement noise vector v:

  R_v = E(v vᵀ) = [ σ₁²       ρ σ₁ σ₂ ]
                  [ ρ σ₁ σ₂   σ₂²     ]

We will leave in the correlation coefficient ρ and not take it as zero as in the example because, as we will quickly see, there is no immediate advantage in simplicity in dropping it out. We will make it zero when appropriate later.

We need to minimize J with respect to the linear estimator gain k, subject to the condition for an unbiased estimate. We could do this directly, at the expense of the simple equation for the optimization criterion J that we have, and in the process make our solution specific to the problem statement and give up its generality for other problems. A way to keep the problem linear and in the vector domain is to add a new equation with an additional unknown and link it to the original equation. This is called the method of Lagrangian multipliers. For our problem, noting that the unbiased constraint kᵀ1 = 1 makes the x² term vanish, the optimization criterion J becomes

  J = kᵀ R_v k - λ (kᵀ 1 - 1).

The new variable λ is the Lagrangian multiplier, and the optimization criterion is linear in λ and quadratic in k. What we will do is relax the constraint on k and apply the unbiased constraint to the solution to find a value for λ to complete our solution. We proceed by taking the gradient of the optimization criterion with respect to k, setting it equal to zero, and finding a solution:

  ∂J/∂k = 2 R_v k - λ 1 = 0
  k = (λ/2) R_v⁻¹ 1.

We use the unbiased condition kᵀ 1 = 1 to find the value of the new variable λ:

  1 = (λ/2) 1ᵀ R_v⁻¹ 1
  λ/2 = 1 / (1ᵀ R_v⁻¹ 1)

Note that 1ᵀ R_v⁻¹ 1 is the sum of all the terms of R_v⁻¹. This leaves us with an equation for the linear estimator gain k:

  k = R_v⁻¹ 1 / (1ᵀ R_v⁻¹ 1).

This gives us a minimum value of the optimization criterion of

  J_MIN = 1 / (1ᵀ R_v⁻¹ 1).

We will close with an explicit expression for the inverse of the covariance matrix of v:

  R_v⁻¹ = ( 1 / (σ₁² σ₂² (1 - ρ²)) ) [ σ₂²        -ρ σ₁ σ₂ ]
                                     [ -ρ σ₁ σ₂    σ₁²     ]

The minimum value of the optimization criterion is

  J_MIN = σ₁² σ₂² (1 - ρ²) / (σ₁² + σ₂² - 2 ρ σ₁ σ₂)

and the linear estimator gain k is

  k = R_v⁻¹ 1 / (1ᵀ R_v⁻¹ 1) = ( 1 / (σ₁² + σ₂² - 2 ρ σ₁ σ₂) ) [ σ₂² - ρ σ₁ σ₂ ]
                                                               [ σ₁² - ρ σ₁ σ₂ ]

This basic technique and result can be applied to higher-order problems. In particular, see that Problem 1-3 becomes simply a matter of writing the result, followed by a simple algebraic step. In fact, for any number M of uncorrelated measurements, the general result is obvious from

  R_v = diag(σ₁², σ₂², ..., σ_M²)   and   R_v⁻¹ = diag(1/σ₁², 1/σ₂², ..., 1/σ_M²).

Gelb Problem 1-1

See the example for the estimation weights and minimum mean square error. In this problem we see from the expression for the minimum mean square error

  J_MIN = σ₁² σ₂² (1 - ρ²) / (σ₁² + σ₂² - 2 ρ σ₁ σ₂)

that when ρ is large in magnitude and the error variances are not the same, the minimum mean square error is smaller. This means that we can use the information that the errors are correlated to help estimate the unknown x. We can investigate what happens near ρ = ±1 by making the substitution

  ρ = ±(1 - δ),   δ = 1 ∓ ρ

and looking at the minimum mean square error for small δ:

  J_MIN ≈ 2 δ σ₁² σ₂² / ( (σ₁ ∓ σ₂)² ± 2 δ σ₁ σ₂ )

with the upper signs for ρ near +1 and the lower signs for ρ near -1. When ρ → +1 this means that the error in one of the measurements is proportional to the error in the other measurement. If the RMS measurement errors are different, we can use that knowledge to find the unknown x exactly. If they are equal, we have a singularity; we simply take the estimate as either of the measurements, because they are equal. When ρ → -1 we can always use the knowledge that the errors are proportional to each other to find the unknown exactly.

Gelb Problem 1-3

This problem was worked as part of Example 1.0-1 as presented above. A numerical check of the estimation weights and minimum mean square error is sketched below.
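The closed-form weights and minimum mean square error above are easy to check numerically. The following is a minimal Python sketch (not from Gelb; it uses NumPy, and the σ and ρ values are illustrative choices) that builds R_v for two correlated sensors, computes k = R_v⁻¹1 / (1ᵀR_v⁻¹1) and J_MIN = 1 / (1ᵀR_v⁻¹1), and compares them with the explicit two-sensor formulas derived above.

import numpy as np

# Illustrative values (not from Gelb): two sensor noise sigmas and a correlation.
s1, s2, rho = 2.0, 3.0, 0.5

# Measurement-noise covariance R_v and the all-ones vector.
Rv = np.array([[s1**2,         rho * s1 * s2],
               [rho * s1 * s2, s2**2        ]])
ones = np.ones(2)

# Optimal unbiased linear-estimator gain and minimum mean square error.
Rv_inv = np.linalg.inv(Rv)
denom = ones @ Rv_inv @ ones          # 1^T Rv^-1 1 (sum of all terms of Rv^-1)
k = Rv_inv @ ones / denom
J_min = 1.0 / denom

# Closed-form two-sensor expressions from the text.
d = s1**2 + s2**2 - 2 * rho * s1 * s2
k_closed = np.array([s2**2 - rho * s1 * s2, s1**2 - rho * s1 * s2]) / d
J_closed = s1**2 * s2**2 * (1 - rho**2) / d

print(k, k_closed)        # the two gain vectors agree
print(J_min, J_closed)    # the two mean-square-error values agree
assert np.allclose(k, k_closed) and np.isclose(J_min, J_closed)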

Chapter 2

Gelb Problem 2-1

Show that

  d(P⁻¹)/dt = -P⁻¹ (dP/dt) P⁻¹

Solution: We begin with P⁻¹ P = I and take the derivative of both sides of this equation with respect to time. The result is, using the chain rule,

  (d(P⁻¹)/dt) P + P⁻¹ (dP/dt) = 0

and the result follows from right-multiplying by P⁻¹ and moving the second term to the right-hand side.

Gelb Problem 2-2

For the matrix given in the problem statement, show that the eigenvalues are 2, -4, and 5.

Solution: The characteristic equation is det(A - λI) = 0. Expanding the determinant by minors about the left column gives the cubic

  λ³ - 3 λ² - 18 λ + 40 = 0

which factors as

  (λ - 2)(λ + 4)(λ - 5) = 0

so the eigenvalues are 2, -4, and 5.

Gelb Problem 2-3

Show that A is positive definite if, and only if, all of the eigenvalues of A are positive.

Solution: The definition of the property positive definite (Gelb, Chapter 2) is xᵀ A x > 0 for all real x ≠ 0. Note that the use of a quadratic form means that any real matrix can be replaced by the real symmetric matrix A_SYM = (A + Aᵀ)/2, so that a matrix in a quadratic form is implicitly symmetric. From the eigenvalue definition in Gelb, Chapter 2, and in Linear Algebra, Crash Course, an eigenvalue is a number λ such that, for some vector x,

  A x = λ x

The vector x is the eigenvector corresponding to λ. From Theorem 7.5 of Linear Algebra, Crash Course, the eigenvectors of a real, symmetric matrix are real and orthogonal. Thus the set of eigenvectors may be considered as axes of a rotated coordinate system, and any vector is made up of a linear combination of eigenvectors. Thus, any quadratic form satisfies the inequality

  xᵀ A x ≥ λ_MIN xᵀ x

Thus, if λ_MIN > 0, the matrix is positive definite.

Gelb Problem 2-4

If R(t) is a time-varying orthogonal matrix, and

  (dR(t)/dt) Rᵀ(t) = S(t),

show that S(t) must be skew-symmetric.

Solution: Construct the product

  R(t) Rᵀ(t) = I

where the product is the identity matrix because R(t) is orthogonal, that is, its transpose is its inverse (see the definition of an orthogonal matrix in Gelb, Chapter 2). Taking the time derivative of this equation and using the chain rule,

  (dR(t)/dt) Rᵀ(t) + R(t) (dRᵀ(t)/dt) = 0

or,

  (dR(t)/dt) Rᵀ(t) = -R(t) (dRᵀ(t)/dt) = -[ (dR(t)/dt) Rᵀ(t) ]ᵀ

which shows that the left-hand side of the equation in the problem statement is skew-symmetric. Thus, the right-hand side, S(t), must be skew-symmetric.

Gelb Problem 2-5

Show that the matrix

  A = [ 1  2 ]
      [ 3  4 ]

satisfies the polynomial equation

  A² - 5 A - 2 I = 0

and find the eigenvalues. Use the results to show the form of the solution for exp(At) in terms of exponential functions.

Solution: The characteristic equation is found from

  det(A - λ I) = λ² - 5 λ - 2 = 0

The Cayley-Hamilton theorem (Gelb, Chapter 2) states that a matrix satisfies its own characteristic equation, so we have proven the polynomial matrix equation. From the polynomial equation, we can use A² = 5 A + 2 I to reduce any polynomial or series in A to a linear combination of A and I. The eigenvalues are

  λ = (5 ± √33) / 2

The remaining task, to find the closed forms for a₁(t) and a₂(t), where

  exp(At) = a₁(t) I + a₂(t) A,

is done as follows. First, for a two-by-two matrix, the Cayley-Hamilton theorem gives us

  A² - (λ₁ + λ₂) A + λ₁ λ₂ I = 0

or,

  A² = (λ₁ + λ₂) A - λ₁ λ₂ I

This equation means that the matrix exponential series given in Gelb, Chapter 2, can be collapsed to a linear combination of I and A. Since we are looking at exp(At), we use

  (At)² = (λ₁ + λ₂) t (At) - λ₁ λ₂ t² I

Note that the eigenvalues of (At) are the eigenvalues of A multiplied by t.

Scalar General Solution for Order Two

The Cayley-Hamilton theorem tells us that the matrix itself satisfies its characteristic equation, but we know also that both eigenvalues satisfy the characteristic equation. Since the characteristic equation is used to collapse the series for the exponential into a polynomial of order N - 1 (a first-order polynomial for two-by-two matrices), we also have

  exp(λᵢ t) = a₁(t) + a₂(t) λᵢ,   i = 1, 2

We can write this equation down for both eigenvalues and solve for a₁(t) and a₂(t):

  exp(λ₁ t) = a₁(t) + a₂(t) λ₁
  exp(λ₂ t) = a₁(t) + a₂(t) λ₂

This is a set of two linear equations in a₁(t) and a₂(t), and can be solved for them. The solution is

  a₁(t) = ( λ₂ exp(λ₁ t) - λ₁ exp(λ₂ t) ) / (λ₂ - λ₁)
  a₂(t) = ( exp(λ₂ t) - exp(λ₁ t) ) / (λ₂ - λ₁)

These can be substituted into the two linear equations in a₁(t) and a₂(t) to verify that they are correct. We can also differentiate the series for exp(At) term by term to show that

  d/dt exp(At) = A exp(At)

and use this to verify the solution we have for a₁(t) and a₂(t). Note that a₁(t) and a₂(t) as we have found them agree with the forms given in the problem statement up to a possible sign difference or proportionality factor, but the sign of the form for a₂(t) given in the problem statement is incorrect.
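As a quick numerical check of the order-two result, the following is a minimal Python sketch (NumPy and SciPy; the matrix A = [[1, 2], [3, 4]] is the one from Problem 2-5, the time value is an illustrative choice) that evaluates a₁(t) and a₂(t) from the eigenvalues and compares a₁(t) I + a₂(t) A with a direct computation of the matrix exponential.

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])        # matrix from Problem 2-5
t = 0.7                           # illustrative time value

# Eigenvalues of A (real and distinct here: (5 +/- sqrt(33))/2).
lam1, lam2 = np.linalg.eigvals(A)

# Closed-form coefficients from the two linear equations above.
a1 = (lam2 * np.exp(lam1 * t) - lam1 * np.exp(lam2 * t)) / (lam2 - lam1)
a2 = (np.exp(lam2 * t) - np.exp(lam1 * t)) / (lam2 - lam1)

# Cayley-Hamilton form of the matrix exponential versus a direct evaluation.
expAt_ch = a1 * np.eye(2) + a2 * A
expAt_direct = expm(A * t)

print(np.max(np.abs(expAt_ch - expAt_direct)))   # tiny: the two forms agree
assert np.allclose(expAt_ch, expAt_direct)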

General Matrix Solution

We can extend the result to higher-order matrices by simply writing the set of two linear equations in a₁(t) and a₂(t) as a set of N linear equations in a set of N time functions:

  exp(λᵢ t) = Σⱼ aⱼ(t) λᵢ^(j-1),   i = 1, ..., N

This set of equations can be written as a vector equal to a matrix times a vector:

  [ 1  λ₁   λ₁²   ...  λ₁^(N-1)  ] [ a₁(t) ]   [ exp(λ₁ t) ]
  [ 1  λ₂   λ₂²   ...  λ₂^(N-1)  ] [ a₂(t) ] = [ exp(λ₂ t) ]
  [ 1  λ₃   λ₃²   ...  λ₃^(N-1)  ] [ a₃(t) ]   [ exp(λ₃ t) ]
  [ ...                          ] [  ...  ]   [    ...    ]
  [ 1  λ_N  λ_N²  ...  λ_N^(N-1) ] [ a_N(t)]   [ exp(λ_N t)]

The set of aᵢ(t) is found by left-multiplying this equation by the inverse of the matrix. The solution to part (b) of the problem can easily be seen to be given by this equation for N = 2. The matrix is of a special form, its rows being given by successive powers of different constants. The determinant of such a matrix is called a Vandermonde determinant and is very famous; a lot of papers have been written about it. In particular, a closed form for this determinant is (see Introduction to Matrix Analysis, Second Edition, by Richard Bellman)

  det = product over all i < j of (λⱼ - λᵢ)

Note that the matrix is singular if any two eigenvalues are equal.

Chapter 3

Example, Page 56: The Schuler Loop

The Schuler loop is the differential equation for an inertial navigation system (INS). A simple form is to look at one axis. An INS usually needs a north-south and an east-west loop, and these loops are coupled, but we can understand some basic principles by examining this simplification. We define the quantities of interest in terms of differential equations. The quantities are

  φ    platform tilt angle in radians
  δv   system velocity error
  δp   system position error

We define the errors as

  ε_g  gyro random drift rate
  ε_a  accelerometer random error

From first principles, we have the differential equations

  d(δp)/dt = δv
  d(δv)/dt = ε_a - g sin(φ) ≈ ε_a - g φ
  dφ/dt    = δv / R_e + ε_g

The block diagram shown in Gelb on page 56 follows from these equations. Given these differential equations, we can collect them and express them in vector-matrix form as

  dx/dt = A x + G v

where

  x = [ δp ]      A = [ 0    1      0 ]      G = [ 0  0 ]
      [ δv ]          [ 0    0     -g ]          [ 1  0 ]
      [ φ  ]          [ 0   1/R_e   0 ]          [ 0  1 ]

and v = [ε_a; ε_g]. We can use Laplace transforms to analyze this equation to understand the form of the solutions. The Laplace transform of the differential equation is

  s X(s) - x(0) = A X(s) + G V(s)

The solution in the s domain is

  X(s) = (s I - A)⁻¹ ( x(0) + G V(s) )

Thus we can understand the nature of the solution by examining the eigenvalues of the matrix A. The characteristic equation of A is

  s ( s² + g / R_e ) = 0

This shows that the INS steady state is an undamped oscillation. You can easily show that this period is about 84 minutes, which is why an INS is sometimes called an 84-minute pendulum. Use the elementary equation for the period of a pendulum as a function of its length to find the length of a real 84-minute pendulum.
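The 84-minute figure is easy to confirm. Below is a minimal Python sketch (NumPy; the gravity and Earth-radius constants are the usual rough values, used here only for illustration) that forms the error-dynamics matrix A, checks that its nonzero eigenvalues are ±j√(g/R_e), and converts that frequency to the Schuler period. The last line answers the pendulum question: a simple pendulum with period 2π√(L/g) equal to the Schuler period must have length L = R_e.

import numpy as np

g = 9.81        # m/s^2, rough value
Re = 6.371e6    # m, mean Earth radius, rough value

# Single-axis INS error dynamics with state x = [dp, dv, phi].
A = np.array([[0.0, 1.0,      0.0],
              [0.0, 0.0,      -g ],
              [0.0, 1.0 / Re, 0.0]])

eigs = np.linalg.eigvals(A)
omega = np.sqrt(g / Re)                    # Schuler frequency, rad/s
period_min = 2 * np.pi / omega / 60.0

print(np.sort_complex(eigs))               # 0 and +/- j*omega
print(period_min)                          # about 84.4 minutes
print(g * (period_min * 60 / (2 * np.pi))**2)   # pendulum length: equals Re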

Gelb Problem 3-1

The linear variance equation is the differential equation for propagation of errors of a noise-driven system. This is developed in Section 3.7, Propagation of Errors. The linear variance equation is given as Equation (3.7-7) at the bottom of page 77:

  dP(t)/dt = F(t) P(t) + P(t) Fᵀ(t) + G(t) Q(t) Gᵀ(t)

The problem statement is to verify a solution as given:

  P(t) = Φ(t, t₀) P(t₀) Φᵀ(t, t₀) + ∫ from t₀ to t of Φ(t, τ) G(τ) Q(τ) Gᵀ(τ) Φᵀ(t, τ) dτ

The simplest way to do this is to take the derivative of the candidate expression for P(t) and show that the right-hand side of the differential equation is obtained. In doing this, we need the definition from equations (3.7-3) and (3.7-4) on page 77,

  dΦ(t, τ)/dt = F(t) Φ(t, τ)

and Leibniz's rule for differentiation of an integral,

  d/dt ∫ from f₁(t) to f₂(t) of g(t, x) dx
    = g(t, f₂(t)) df₂(t)/dt - g(t, f₁(t)) df₁(t)/dt + ∫ from f₁(t) to f₂(t) of ∂g(t, x)/∂t dx

Because of the size of the equations, we will take one term at a time. We begin with

  d/dt [ Φ(t, t₀) P(t₀) Φᵀ(t, t₀) ] = F(t) Φ(t, t₀) P(t₀) Φᵀ(t, t₀) + Φ(t, t₀) P(t₀) Φᵀ(t, t₀) Fᵀ(t)

Repeating this operation on the integrand for P(t) produces the remainder of the terms in F(t) P(t) + P(t) Fᵀ(t). Leibniz's rule, as applied to the upper limit of the integral, produces the term

  Φ(t, t) G(t) Q(t) Gᵀ(t) Φᵀ(t, t) = G(t) Q(t) Gᵀ(t)

and we are done.

Gelb Problem 3-2

When F and Q are constant matrices, we have

  Φ(t, t₀) = exp( F (t - t₀) )

The given equation is equal to the linear variance equation for constant F and Q and for the noise mapping matrix G equal to the identity matrix I, but with the time derivative of the covariance equal to zero. By looking at the case dP/dt = 0, we are asking for the steady-state form of the solution, or what happens as t → ∞. We use the assumption of a stable system,

  lim as t → ∞ of exp(F t) = 0,

and the solution to the linear variance equation given in the first problem, to show that the steady-state covariance for t₀ = 0, constant F, and G equal to the identity matrix I is

  P(∞) = ∫ from 0 to ∞ of exp(F τ) Q exp(Fᵀ τ) dτ

which is the form desired. A numerical check of this steady-state form is sketched below.
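The steady-state covariance above is the solution of the continuous Lyapunov equation F P + P Fᵀ + Q = 0. The following is a minimal Python sketch (NumPy/SciPy; the stable F and the Q used here are illustrative values, not from Gelb) that approximates the integral by quadrature and checks that the result drives the right-hand side of the linear variance equation to zero.

import numpy as np
from scipy.linalg import expm

# Illustrative stable system with G = I (values not from Gelb).
F = np.array([[0.0,  1.0],
              [-2.0, -3.0]])      # eigenvalues -1 and -2, so exp(F t) -> 0
Q = np.array([[0.0, 0.0],
              [0.0, 1.0]])

# Approximate P(inf) = integral from 0 to inf of exp(F tau) Q exp(F^T tau) d tau
# with a composite trapezoid rule; the integrand has decayed to ~0 before tau = 30.
taus = np.linspace(0.0, 30.0, 3001)
vals = np.array([expm(F * tau) @ Q @ expm(F.T * tau) for tau in taus])
dtau = taus[1] - taus[0]
P_inf = dtau * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))

# At steady state the linear variance equation gives F P + P F^T + Q = 0.
residual = F @ P_inf + P_inf @ F.T + Q
print(P_inf)                      # approximately [[1/12, 0], [0, 1/6]] for this F, Q
print(np.max(np.abs(residual)))   # near zero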

Chapter 4

See the separate section The Multivariate Gaussian Distribution below.

Summary of Equations of the Kalman Filter

State vector x, measurement vector y, measurement model:

  y = h(x) + v,   h(x) ≈ H x,   Cov(v) = R = E(v vᵀ)

State vector extrapolation model:

  dx/dt = f(x) + w,   f(x) ≈ F x,   Cov(w) = Q = E(w wᵀ)

Covariance extrapolation approximation:

  P⁻ = Φ P⁺ Φᵀ + Q,   dΦ/dt = F Φ,   Φ(t₀) = I,   Φ ≈ I + F Δt

State vector extrapolation approximation:

  x⁻ = Φ x̂⁺

Kalman gain:

  K = P⁻ Hᵀ ( H P⁻ Hᵀ + R )⁻¹

or, if we have the updated covariance P⁺ available before K is computed,

  K = P⁺ Hᵀ R⁻¹

State vector update:

  x̂⁺ = x⁻ + K ( y - h(x⁻) )

Covariance update, short form (NUMERICALLY UNSTABLE):

  P⁺ = ( I - K H ) P⁻

Joseph stabilized form for the covariance update (good for most applications):

  P⁺ = ( I - K H ) P⁻ ( I - K H )ᵀ + K R Kᵀ

Normal form, or information matrix format, for the covariance update (use to check the accuracy of the Joseph stabilized form):

  (P⁺)⁻¹ = (P⁻)⁻¹ + Hᵀ R⁻¹ H

Other: square root filters (not treated in Gelb; overview in the next course). A minimal numerical sketch of one predict/update cycle follows.
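To make the summary concrete, here is a minimal Python sketch of a single linear predict/update cycle (NumPy; the dimensions, matrices, and values are illustrative only, not from Gelb). It uses the gain and Joseph-form update from the summary and checks the result against the short form and the information form.

import numpy as np

def kalman_cycle(x_hat, P, Phi, Q, H, R, y):
    """One Kalman predict/update cycle for a linear measurement y = H x + v."""
    # Extrapolation (prediction).
    x_minus = Phi @ x_hat
    P_minus = Phi @ P @ Phi.T + Q
    # Kalman gain.
    S = H @ P_minus @ H.T + R
    K = P_minus @ H.T @ np.linalg.inv(S)
    # State update and Joseph-stabilized covariance update.
    x_plus = x_minus + K @ (y - H @ x_minus)
    I_KH = np.eye(len(x_hat)) - K @ H
    P_plus = I_KH @ P_minus @ I_KH.T + K @ R @ K.T
    return x_plus, P_plus

# Illustrative constant-velocity model with a position-only measurement.
T = 1.0
Phi = np.array([[1.0, T], [0.0, 1.0]])
Q = np.diag([0.0, 0.01])
H = np.array([[1.0, 0.0]])
R = np.array([[4.0]])
x0, P0 = np.array([0.0, 1.0]), np.diag([10.0, 10.0])

x1, P1 = kalman_cycle(x0, P0, Phi, Q, H, R, np.array([1.3]))

# Cross-checks: the short form and the information form give the same covariance here.
P_minus = Phi @ P0 @ Phi.T + Q
K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)
P_short = (np.eye(2) - K @ H) @ P_minus
P_info = np.linalg.inv(np.linalg.inv(P_minus) + H.T @ np.linalg.inv(R) @ H)
assert np.allclose(P1, P_short) and np.allclose(P1, P_info)
print(x1, P1)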

Gelb Problem 4-1

Repeat Example 1.0-1 with a Kalman filter, treating the measurements as sequential and as simultaneous.

Sequential measurements: We are estimating a constant, so that F = 0 (Φ = I) and Q = 0. We have one element in the state vector, so it is just a scalar. The measurement model is conventional. For the first measurement, we begin with the assumption that we have no prior information, so K₁ = 1 and P₁ = σ₁². Then, for the second measurement, we have

  x⁻ = y₁,   P⁻ = σ₁²,   R = σ₂²

  K = P⁻ / (P⁻ + R) = σ₁² / (σ₁² + σ₂²)

The estimate is

  x̂ = x⁻ + K (y₂ - x⁻) = y₁ + ( σ₁² / (σ₁² + σ₂²) ) (y₂ - y₁) = ( σ₂² y₁ + σ₁² y₂ ) / (σ₁² + σ₂²)

and the covariance is

  P⁺ = (1 - K) P⁻ = ( 1 - σ₁² / (σ₁² + σ₂²) ) σ₁² = σ₁² σ₂² / (σ₁² + σ₂²)

Simultaneous measurements: Here we have a scalar state vector, two elements in the measurement vector, and we work with unknown prior data by taking P⁻ as very large. We use the information form for the covariance matrix,

  (P⁺)⁻¹ = (P⁻)⁻¹ + Hᵀ R_v⁻¹ H ≈ Hᵀ R_v⁻¹ H

where we have dropped the (P⁻)⁻¹ term because we assume no prior information. We have the Kalman gain from P⁺ as

  K = P⁺ Hᵀ R_v⁻¹

This form allows us to keep the correlations, as we did in the vector approach to Example 1.0-1, and thus to have a ready solution to Problem 1-1.
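The simultaneous-measurement route can be checked numerically against the Chapter 1 result. Below is a minimal Python sketch (NumPy; the σ and ρ values are illustrative) of the information-form computation for a scalar state observed by two correlated measurements: it shows that P⁺ equals J_MIN and that the gain K reproduces the optimal weights of Example 1.0-1.

import numpy as np

s1, s2, rho = 2.0, 3.0, 0.5        # illustrative measurement noise parameters

# Scalar state, two simultaneous measurements y = H x + v with correlated noise.
H = np.array([[1.0], [1.0]])
Rv = np.array([[s1**2,         rho * s1 * s2],
               [rho * s1 * s2, s2**2        ]])

# Information form with no prior information: (P_plus)^-1 = H^T Rv^-1 H.
Rv_inv = np.linalg.inv(Rv)
P_plus = 1.0 / (H.T @ Rv_inv @ H)  # 1x1 posterior covariance
K = P_plus @ H.T @ Rv_inv          # 1x2 row of measurement weights

# Chapter 1 closed forms for comparison.
d = s1**2 + s2**2 - 2 * rho * s1 * s2
k_opt = np.array([s2**2 - rho * s1 * s2, s1**2 - rho * s1 * s2]) / d
J_min = s1**2 * s2**2 * (1 - rho**2) / d

assert np.allclose(K.ravel(), k_opt) and np.isclose(P_plus.item(), J_min)
print(K, P_plus)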

The Alpha-Beta Tracker

The measurements are a sequence of data points yᵢ and the states are position and velocity, arranged in a state vector x:

  x = [ position ]
      [ velocity ]

At the first measurement, before another is available, the measurement itself is used:

  x̂₁ = [ y₁ ]
        [ 0  ]

The position and velocity estimates are defined at the second measurement:

  x̂₂ = [ y₂                  ]
        [ (y₂ - y₁)/(t₂ - t₁) ]

Extrapolation from the last update (needed to prepare values for the next update), with T = tᵢ - tᵢ₋₁:

  Φ = [ 1  T ]        x⁻ = Φ x̂ᵢ₋₁
      [ 0  1 ]

Update using new data yᵢ:

  K = [ α   ]        h(x) = H x,   H = [ 1  0 ]
      [ β/T ]

  x̂ᵢ = x⁻ + K ( yᵢ - H x⁻ )

Relationship between α and β

The most commonly used value of β is

  β = α² / (2 - α)

The choice of α is determined by signal-to-noise ratio, time between updates, target behavior, etc. A plot of β as a function of α follows.

  (Plot: β as a function of α.)

The alpha-beta tracker as a vector-matrix recursive filter:

  x = [ x ]        x⁻ = Φ x̂ᵢ₋₁,   Φ = [ 1  T ]
      [ v ]                            [ 0  1 ]

  x̂ᵢ = x⁻ + K ( yᵢ - H x⁻ ),   K = [ α   ],   H = [ 1  0 ]
                                    [ β/T ]

The alpha-beta tracker as a recursive filter is

  x̂ᵢ = ( I - K H ) Φ x̂ᵢ₋₁ + K yᵢ

The Z Transform of the Alpha-Beta Tracker

The recursive digital filter equation is of the general form

  sᵢ = A sᵢ₋₁ + B yᵢ

This is a simple one-pole filter, posed in vector-matrix notation. For the alpha-beta tracker, when the time between updates is constant, the A matrix is

  A = ( I - K H ) Φ = [ 1 - α    (1 - α) T ]
                      [ -β/T     1 - β     ]

The characteristic values of the matrix A are

  λ_A = 1 - (α + β)/2 ± j sqrt( β - ((α + β)/2)² )

Note that the characteristic values are not a function of the update time T. For the value of β in terms of α given above, the characteristic values as a function of α are

  λ_A = ( 2 (1 - α) ± j α sqrt(1 - α) ) / (2 - α),   with magnitude |λ_A| = sqrt(1 - α).

  (Table: characteristic values versus α.)

A root locus of the poles of the transfer function of the filter in the z plane:

  (Figure: root locus in the z plane; the poles start at z = 1 for α = 0 and move toward the origin as α increases.)
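These closed forms are easy to verify numerically. The following is a minimal Python sketch (NumPy; α and T are illustrative values) that forms A = (I - K H) Φ for the alpha-beta tracker, computes its eigenvalues, and confirms that they match the expressions above and do not depend on the update time T.

import numpy as np

def ab_poles(alpha, T):
    """Eigenvalues of the alpha-beta tracker recursion matrix A = (I - K H) Phi."""
    beta = alpha**2 / (2.0 - alpha)            # beta in terms of alpha, as above
    Phi = np.array([[1.0, T], [0.0, 1.0]])
    K = np.array([[alpha], [beta / T]])
    H = np.array([[1.0, 0.0]])
    A = (np.eye(2) - K @ H) @ Phi
    return np.linalg.eigvals(A), beta

alpha, T = 0.5, 2.0                            # illustrative values
lam, beta = ab_poles(alpha, T)

# Closed-form poles: 1 - (alpha + beta)/2 +/- j sqrt(beta - ((alpha + beta)/2)^2).
s = (alpha + beta) / 2.0
lam_closed = np.array([1 - s + 1j * np.sqrt(beta - s**2),
                       1 - s - 1j * np.sqrt(beta - s**2)])

print(np.sort_complex(lam), np.sort_complex(lam_closed))
print(np.abs(lam), np.sqrt(1 - alpha))         # pole magnitudes equal sqrt(1 - alpha)
assert np.allclose(np.sort_complex(lam), np.sort_complex(lam_closed))
assert np.allclose(np.sort_complex(ab_poles(alpha, 0.1)[0]),
                   np.sort_complex(lam))       # poles do not depend on T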

The Multivariate Gaussian Distribution

Gaussian Random Variables

A continuous random variable is called Gaussian with mean m and variance σ² if its probability density function is

  p(x) = ( 1 / (sqrt(2π) σ) ) exp( -(x - m)² / (2 σ²) )

Multiple Correlated Gaussian Random Variables

The joint probability density function of N independent Gaussian random variables with mean zero and variance one is

  p(y) = p(y₁) p(y₂) ... p(y_N) = (2π)^(-N/2) exp( -(1/2) Σᵢ yᵢ² )

Note that we can write the sum in the exponential as a quadratic form,

  Σᵢ yᵢ² = yᵀ y

We define another vector of random variables x as a set of linear combinations of the random variables y,

  x = R y

where R is any real matrix. For simplicity here we will also assume that R is square and nonsingular. We find the probability density function of x by equating the differentials

  p(x) dx = p(y) dy

and we use the Jacobian

  dx = |det R| dy

We also note that the covariance of x, P_x, is given by

  P_x = R Rᵀ

We leave it as an exercise in algebra to show that

  p(x) = (2π)^(-N/2) |det P_x|^(-1/2) exp( -(1/2) xᵀ P_x⁻¹ x )
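As a numerical illustration of the construction x = R y, the following is a minimal Python sketch (NumPy and SciPy; the matrix R and the sample size are arbitrary illustrative choices). It checks that the sample covariance of x approaches P_x = R Rᵀ and that the density formula above matches a library evaluation of the multivariate normal density.

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Arbitrary nonsingular real R; P_x = R R^T is the covariance of x = R y.
R = np.array([[2.0, 0.0],
              [1.0, 1.5]])
P_x = R @ R.T
N = P_x.shape[0]

# Draw y ~ N(0, I), map through R, and compare the sample covariance with R R^T.
y = rng.standard_normal((200000, N))
x = y @ R.T
print(np.cov(x, rowvar=False))    # close to P_x for a large sample
print(P_x)

# Evaluate the closed-form density at a test point and compare with SciPy.
pt = np.array([0.5, -1.0])
p_formula = ((2 * np.pi) ** (-N / 2) / np.sqrt(np.linalg.det(P_x))
             * np.exp(-0.5 * pt @ np.linalg.inv(P_x) @ pt))
p_scipy = multivariate_normal(mean=np.zeros(N), cov=P_x).pdf(pt)
assert np.isclose(p_formula, p_scipy)
print(p_formula, p_scipy)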
