Differentiating Gaussian Processes


Andrew McHutchon, April 17

1 First Order Derivative of the Posterior Mean

The posterior mean of a GP is given by

\[ \bar{f} = k(x_*, X)\, K(X, X)^{-1}\, y = k(x_*, X)\, \alpha \tag{1} \]

Only the $k(x_*, X)$ term depends on the test point $x_*$, so to calculate the slope of the posterior mean we just need to differentiate the kernel. For the squared exponential covariance function, the derivative of the kernel between $x_*$ and a training point $x_i$ is

\[ \frac{\partial k(x_*, x_i)}{\partial x_*} = \frac{\partial}{\partial x_*} \left\{ \sigma_f^2 \exp\left( -\tfrac{1}{2}(x_* - x_i)^T \Lambda^{-1}(x_* - x_i) \right) \right\} = \frac{\partial}{\partial x_*} \left\{ -\tfrac{1}{2}(x_* - x_i)^T \Lambda^{-1}(x_* - x_i) \right\} k(x_*, x_i) = -\Lambda^{-1}(x_* - x_i)\, k(x_*, x_i) \tag{2} \]

which is a $D \times 1$ vector. To compute the derivative of the posterior mean we need to concatenate this derivative for each of the training points. It is helpful to define

\[ \tilde{X} = \left[ x_* - x_1,\ \ldots,\ x_* - x_N \right]^T, \]

which is an $N \times D$ matrix. Then

\[ \frac{\partial \bar{f}}{\partial x_*} = \frac{\partial k(x_*, X)}{\partial x_*}\, \alpha = -\Lambda^{-1} \tilde{X}^T \left( k(x_*, X)^T \odot \alpha \right) \tag{3} \]

which is a $D \times 1$ vector; $\odot$ represents an element-wise product.

2 Distribution over First Order Derivatives of Posterior Functions

In the previous section we found the derivative of the posterior mean of a GP. However, it is also possible to find the distribution over derivatives of functions drawn from the GP posterior. Consider the random GP function values at two test point locations,

\[ f(x_*) = \bar{f}(x_*) + z_*, \qquad f(x_* + \delta) = \bar{f}(x_* + \delta) + z_\delta \tag{4} \]

where, writing $k_* = k(X, x_*)$, $k_\delta = k(X, x_* + \delta)$, $k_{**} = k(x_*, x_*)$ and so on,

\[ P(z_*, z_\delta) = \mathcal{N}\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} k_{**} - k_*^T K^{-1} k_* & k_{*\delta} - k_*^T K^{-1} k_\delta \\ k_{\delta *} - k_\delta^T K^{-1} k_* & k_{\delta\delta} - k_\delta^T K^{-1} k_\delta \end{bmatrix} \right) \tag{5} \]
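As a concrete companion to equations (1) to (3), the following minimal NumPy sketch computes the posterior mean derivative. The helper se_kernel, the toy data, and all variable names are assumptions made for illustration, not part of the original note; the later sketches in this document reuse these definitions.

    import numpy as np

    def se_kernel(A, B, lam, sf2):
        # Squared exponential kernel between row sets A (n,D) and B (m,D);
        # lam holds the squared lengthscales, i.e. the diagonal of Lambda.
        diff = A[:, None, :] - B[None, :, :]
        return sf2 * np.exp(-0.5 * np.sum(diff ** 2 / lam, axis=-1))

    def dmean_dx(x_star, X, alpha, lam, sf2):
        # Equation (3): -Lambda^{-1} Xtilde^T (k(x*,X)^T . alpha), a (D,) vector.
        k = se_kernel(x_star[None, :], X, lam, sf2).ravel()
        Xt = x_star[None, :] - X                          # Xtilde, (N, D)
        return -(Xt * (k * alpha)[:, None]).sum(axis=0) / lam

    # toy data, values assumed purely for illustration
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 2))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)
    lam, sf2, sn2 = np.array([1.0, 0.5]), 1.0, 0.01       # lengthscales^2, signal var, noise var
    K = se_kernel(X, X, lam, sf2) + sn2 * np.eye(len(X))
    alpha = np.linalg.solve(K, y)                         # alpha = K^{-1} y, as in equation (1)
    print(dmean_dx(np.zeros(2), X, alpha, lam, sf2))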

The derivative is

\[ \frac{\partial f}{\partial x_*} = \lim_{\delta \to 0} \frac{f(x_* + \delta) - f(x_*)}{(x_* + \delta) - x_*} = \lim_{\delta \to 0} \frac{\bar{f}(x_* + \delta) + z_\delta - \bar{f}(x_*) - z_*}{\delta} = \lim_{\delta \to 0} \frac{\bar{f}(x_* + \delta) - \bar{f}(x_*)}{\delta} + \lim_{\delta \to 0} \frac{z_\delta - z_*}{\delta} = \frac{\partial \bar{f}}{\partial x_*} + \lim_{\delta \to 0} \frac{z_\delta - z_*}{\delta} \tag{6} \]

This is a random variable, the mean of which is given by the first term; the variance comes from the second. The variance of the second term is found as follows,

\[ V\left[ \lim_{\delta \to 0} \frac{z_\delta - z_*}{\delta} \right] = \lim_{\delta \to 0} \frac{1}{\delta^2} \left( V[z_\delta] + V[z_*] - C[z_\delta, z_*] - C[z_*, z_\delta] \right) = \lim_{\delta \to 0} \frac{1}{\delta^2} \left( k_{\delta\delta} - 2 k_{*\delta} + k_{**} - (k_\delta - k_*)^T K^{-1} (k_\delta - k_*) \right) = \frac{\partial^2 k(x_1, x_2)}{\partial x_1 \partial x_2}\bigg|_{x_*} - \frac{\partial k(x_*, X)}{\partial x_*}\, K^{-1}\, \frac{\partial k(X, x_*)}{\partial x_*} \tag{7} \]

which is a $D \times D$ matrix: the variances and covariances of the derivatives w.r.t. each dimension in $x_*$. Thus,

\[ P\left( \frac{\partial f}{\partial x_*} \right) = \mathcal{N}\left( \frac{\partial \bar{f}}{\partial x_*},\ \frac{\partial^2 k(x_1, x_2)}{\partial x_1 \partial x_2}\bigg|_{x_*} - \frac{\partial k(x_*, X)}{\partial x_*}\, K^{-1}\, \frac{\partial k(X, x_*)}{\partial x_*} \right) \tag{8} \]

We see that the mean of the distribution of derivatives is the derivative of the posterior mean. This is to be expected, as both differentiation and expectation are linear operations and so commute.

3 Expected Squared Derivative

When propagating variances through first order Taylor series models, one uses the square of the first order derivative. We could also take the square inside the expectation, which might lead to a better model. The square of the expected derivative is given by

\[ E\left[ \frac{\partial f}{\partial x_*} \right]^2 = \frac{\partial \bar{f}}{\partial x_*} \frac{\partial \bar{f}}{\partial x_*}^T \tag{9} \]

We can find the expected squared derivative as follows,

\[ E\left[ \left( \frac{\partial f}{\partial x_*} \right)^2 \right] = V\left[ \frac{\partial f}{\partial x_*} \right] + E\left[ \frac{\partial f}{\partial x_*} \right]^2 = \frac{\partial^2 k(x_1, x_2)}{\partial x_1 \partial x_2}\bigg|_{x_*} - \frac{\partial k(x_*, X)}{\partial x_*}\, K^{-1}\, \frac{\partial k(X, x_*)}{\partial x_*} + \frac{\partial \bar{f}}{\partial x_*} \frac{\partial \bar{f}}{\partial x_*}^T \tag{10} \]

Compared to equation (9), the expected squared derivative is inflated by the variance of the derivative.

4 Derivatives with Uncertain Inputs

We can also ask what the distribution over the derivatives is when the input location is Gaussian distributed, i.e.

\[ x_* \sim \mathcal{N}(\mu, \Sigma) \tag{11} \]
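Continuing the sketch above, equation (8) can be evaluated directly for the squared exponential kernel, using the result derived in section 5 that the cross second derivative at $x_1 = x_2$ is $\Lambda^{-1}\sigma_f^2$. The function name is again an assumption.

    def dposterior_dist(x_star, X, K, alpha, lam, sf2):
        # Equation (8): mean and (D,D) covariance of df/dx* at x_star.
        k = se_kernel(x_star[None, :], X, lam, sf2).ravel()
        J = ((X - x_star[None, :]) / lam * k[:, None]).T  # dk(x*,X)/dx*, (D, N)
        mean = J @ alpha                                  # same value as dmean_dx above
        cov = np.diag(sf2 / lam) - J @ np.linalg.solve(K, J.T)
        return mean, cov

    m, S = dposterior_dist(np.zeros(2), X, K, alpha, lam, sf2)
    print(m, S)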

4.1 The mean

We can use the rule of iterated expectations to find the mean derivative,

\[ E\left[ \frac{\partial f}{\partial x_*} \right] = E_{x_*}\left[ E_f\left[ \frac{\partial f}{\partial x_*} \right] \right] = E_{x_*}\left[ \frac{\partial \bar{f}}{\partial x_*} \right] = E_{x_*}\left[ -\Lambda^{-1} \tilde{X}^T \left( k(x_*, X)^T \odot \alpha \right) \right] = -\Lambda^{-1} \sum_{i=1}^N \alpha_i\, E_{x_*}\left[ (x_* - x_i)\, k(x_*, x_i) \right] = -\Lambda^{-1} \sum_{i=1}^N \alpha_i \left( E_{x_*}\left[ x_*\, k(x_*, x_i) \right] - x_i\, E_{x_*}\left[ k(x_*, x_i) \right] \right) \tag{12} \]

To find the two expectations it is useful to note that the squared exponential kernel is closely related to the Gaussian p.d.f.,

\[ k(x_*, x_i) = \sigma_f^2 \exp\left( -\tfrac{1}{2}(x_* - x_i)^T \Lambda^{-1}(x_* - x_i) \right) \tag{13} \]
\[ = \sigma_f^2\, (2\pi)^{D/2}\, |\Lambda|^{1/2}\, \mathcal{N}(x_*;\ x_i,\ \Lambda) \tag{14} \]

and also to quote the area under a product of two Gaussians,

\[ \int \mathcal{N}(x; \mu_1, \Sigma_1)\, \mathcal{N}(x; \mu_2, \Sigma_2)\, dx = (2\pi)^{-D/2}\, |\Sigma_1 + \Sigma_2|^{-1/2} \exp\left( -\tfrac{1}{2} (\mu_1 - \mu_2)^T (\Sigma_1 + \Sigma_2)^{-1} (\mu_1 - \mu_2) \right) \tag{15} \]
\[ \triangleq \frac{1}{Z} \tag{16} \]

We start with the simpler of the two expectations,

\[ E_{x_*}\left[ k(x_*, x_i) \right] = \int k(x_*, x_i)\, p(x_*)\, dx_* = \sigma_f^2 (2\pi)^{D/2} |\Lambda|^{1/2} \int \mathcal{N}(x_*; x_i, \Lambda)\, \mathcal{N}(x_*; \mu, \Sigma)\, dx_* = \sigma_f^2\, |\Lambda|^{1/2}\, |\Lambda + \Sigma|^{-1/2} \exp\left( -\tfrac{1}{2} (x_i - \mu)^T (\Lambda + \Sigma)^{-1} (x_i - \mu) \right) = \left| \Sigma \Lambda^{-1} + I \right|^{-1/2} k(x_i, \mu, \Lambda + \Sigma) \tag{17} \]

where the third argument to the covariance function specifies the lengthscales. The second expectation,

\[ E_{x_*}\left[ x_*\, k(x_*, x_i) \right] = \int x_*\, k(x_*, x_i)\, p(x_*)\, dx_* = \sigma_f^2 (2\pi)^{D/2} |\Lambda|^{1/2} \int x_*\, \mathcal{N}(x_*; x_i, \Lambda)\, \mathcal{N}(x_*; \mu, \Sigma)\, dx_* \tag{18} \]
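Equation (17) is easy to check numerically. The sketch below reuses the earlier toy setup, with assumed values for $\mu$ and $\Sigma$, and compares the closed form against a Monte Carlo average.

    def expected_k(Xi, mu, Sigma, lam, sf2):
        # Equation (17): E_x[k(x, x_i)] for x ~ N(mu, Sigma), one value per row of Xi.
        scale = np.linalg.det(Sigma / lam + np.eye(len(mu))) ** -0.5  # |Sigma Lam^-1 + I|^{-1/2}
        Cinv = np.linalg.inv(np.diag(lam) + Sigma)                    # (Lambda + Sigma)^{-1}
        diff = Xi - mu
        return scale * sf2 * np.exp(-0.5 * np.einsum('nd,de,ne->n', diff, Cinv, diff))

    # Monte Carlo comparison, with assumed toy values for mu and Sigma
    mu = np.array([0.3, -0.2])
    Sigma = np.array([[0.20, 0.05],
                      [0.05, 0.10]])
    xs = rng.multivariate_normal(mu, Sigma, size=50_000)
    print(expected_k(X, mu, Sigma, lam, sf2)[:3])
    print(se_kernel(xs, X, lam, sf2).mean(axis=0)[:3])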

which is the mean of the product of two Gaussian distributions, times a constant. Therefore,

\[ E_{x_*}\left[ x_*\, k(x_*, x_i) \right] = \sigma_f^2 (2\pi)^{D/2} |\Lambda|^{1/2}\, \frac{1}{Z}\, \left( \Lambda^{-1} + \Sigma^{-1} \right)^{-1} \left( \Lambda^{-1} x_i + \Sigma^{-1} \mu \right) = \sigma_f^2\, |\Lambda|^{1/2}\, |\Sigma + \Lambda|^{-1/2} \exp\left( -\tfrac{1}{2} (x_i - \mu)^T (\Sigma + \Lambda)^{-1} (x_i - \mu) \right) \Lambda (\Sigma + \Lambda)^{-1} \left( \Sigma \Lambda^{-1} x_i + \mu \right) \]
\[ = \left| \Sigma \Lambda^{-1} + I \right|^{-1/2} k(x_i, \mu, \Lambda + \Sigma)\, \Lambda (\Sigma + \Lambda)^{-1} \left( \Sigma \Lambda^{-1} x_i + \mu \right) = E_{x_*}\left[ k(x_*, x_i) \right] \Lambda (\Sigma + \Lambda)^{-1} \left( \Sigma \Lambda^{-1} x_i + \mu \right) \tag{19} \]

Putting equations (17) and (19) into equation (12) gives

\[ E\left[ \frac{\partial f}{\partial x_*} \right] = -\Lambda^{-1} \sum_{i=1}^N \alpha_i\, E_{x_*}\left[ k(x_*, x_i) \right] \left( \Lambda (\Sigma + \Lambda)^{-1} (\Sigma \Lambda^{-1} x_i + \mu) - x_i \right) = -\sum_{i=1}^N \alpha_i\, E_{x_*}\left[ k(x_*, x_i) \right] \left( (\Sigma + \Lambda)^{-1} \Sigma \Lambda^{-1} x_i + (\Sigma + \Lambda)^{-1} \mu - \Lambda^{-1} x_i \right) = (\Sigma + \Lambda)^{-1} \sum_{i=1}^N (x_i - \mu)\, \alpha_i\, E_{x_*}\left[ k(x_*, x_i) \right] \]
\[ = \left| \Sigma \Lambda^{-1} + I \right|^{-1/2} (\Sigma + \Lambda)^{-1} \sum_{i=1}^N (x_i - \mu)\, \alpha_i\, k(x_i, \mu, \Lambda + \Sigma) \tag{20} \]
\[ = -\left| \Sigma \Lambda^{-1} + I \right|^{-1/2} (\Sigma + \Lambda)^{-1}\, \tilde{X}^T \left( \alpha \odot k(X, \mu, \Lambda + \Sigma) \right) \tag{21} \]

where $\tilde{X}$ is evaluated at $x_* = \mu$, i.e. its rows are $\mu - x_i$.

4.2 The variance

We can use the rule of total variance to find the variance of the derivative,

\[ V\left[ \frac{\partial f}{\partial x_*} \right] = E_{x_*}\left[ V_f\left[ \frac{\partial f}{\partial x_*} \right] \right] + V_{x_*}\left[ E_f\left[ \frac{\partial f}{\partial x_*} \right] \right] = E_{x_*}\left[ \frac{\partial^2 k(x_1, x_2)}{\partial x_1 \partial x_2}\bigg|_{x_*} - \frac{\partial k(x_*, X)}{\partial x_*}\, K^{-1}\, \frac{\partial k(X, x_*)}{\partial x_*} \right] + V_{x_*}\left[ -\Lambda^{-1} \tilde{X}^T \left( k(x_*, X)^T \odot \alpha \right) \right] \tag{22} \]

We will calculate these expectations separately. Firstly, the expectation of the second derivative of the kernel (see section 5 for the derivation of the second derivative): for the squared exponential kernel the cross second derivative at $x_1 = x_2 = x_*$ is constant, so

\[ E_{x_*}\left[ \frac{\partial^2 k(x_1, x_2)}{\partial x_1 \partial x_2}\bigg|_{x_*} \right] = \Lambda^{-1} \sigma_f^2 \tag{23} \]

For the quadratic term we use the identity $E[a^T B\, b] = E[a]^T B\, E[b] + \mathrm{tr}\left( B\, C[b, a] \right)$, applied element-wise,

\[ E_{x_*}\left[ \frac{\partial k(x_*, X)}{\partial x_{*i}}\, K^{-1}\, \frac{\partial k(X, x_*)}{\partial x_{*j}} \right] = E_{x_*}\left[ \frac{\partial k(x_*, X)}{\partial x_{*i}} \right] K^{-1}\, E_{x_*}\left[ \frac{\partial k(X, x_*)}{\partial x_{*j}} \right] + \mathrm{tr}\left( K^{-1}\, C\left[ \frac{\partial k(X, x_*)}{\partial x_{*j}},\ \frac{\partial k(x_*, X)}{\partial x_{*i}} \right] \right) \tag{24} \]
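Equations (20) and (21) then take only a few lines of code. The Monte Carlo comparison below, an assumed sanity check rather than part of the original note, averages the deterministic derivative of equation (3) over samples of $x_*$.

    def expected_dmean(X, alpha, mu, Sigma, lam, sf2):
        # Equation (20): (Sigma + Lambda)^{-1} sum_i (x_i - mu) alpha_i E_x[k(x, x_i)].
        Ek = expected_k(X, mu, Sigma, lam, sf2)
        s = ((X - mu) * (alpha * Ek)[:, None]).sum(axis=0)
        return np.linalg.solve(np.diag(lam) + Sigma, s)

    print(expected_dmean(X, alpha, mu, Sigma, lam, sf2))
    print(np.mean([dmean_dx(x, X, alpha, lam, sf2) for x in xs[:20_000]], axis=0))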

where, element-wise (using the form of equation (41), $\partial k(x_*, x_n)/\partial x_{*i} = -(x_{*i} - x_{ni})\, k(x_*, x_n)/l_i^2$),

\[ E_{x_*}\left[ \frac{\partial k(x_*, x_n)}{\partial x_{*i}} \right] = -\frac{1}{l_i^2} \left( E_{x_*}\left[ x_{*i}\, k(x_*, x_n) \right] - x_{ni}\, E_{x_*}\left[ k(x_*, x_n) \right] \right) \tag{25} \]

with both expectations on the right given by equations (17) and (19). Writing, as in equation (19),

\[ \kappa(\mu, \Sigma, x_n, \Lambda) \triangleq \Lambda (\Sigma + \Lambda)^{-1} \left( \Sigma \Lambda^{-1} x_n + \mu \right) \tag{26} \]

for the mean of the Gaussian product, we can define $U^{(i)}$ to be the $N \times 1$ vector that stacks equation (25) over the training points,

\[ U^{(i)}_n = E_{x_*}\left[ \frac{\partial k(x_*, x_n)}{\partial x_{*i}} \right] = -\frac{1}{l_i^2} \left( \kappa_i(\mu, \Sigma, x_n, \Lambda) - x_{ni} \right) E_{x_*}\left[ k(x_*, x_n) \right] \tag{27} \]

For the covariance term we expand the products. Element-wise,

\[ C\left[ \frac{\partial k(x_*, x_1)}{\partial x_{*i}},\ \frac{\partial k(x_*, x_2)}{\partial x_{*j}} \right] = \frac{1}{l_i^2\, l_j^2} \Big( x_{1i}\, x_{2j}\, C\big[ k(x_*, x_1),\, k(x_*, x_2) \big] - x_{1i}\, C\big[ k(x_*, x_1),\, x_{*j}\, k(x_*, x_2) \big] - x_{2j}\, C\big[ x_{*i}\, k(x_*, x_1),\, k(x_*, x_2) \big] + C\big[ x_{*i}\, k(x_*, x_1),\, x_{*j}\, k(x_*, x_2) \big] \Big) \]
\[ = \frac{1}{l_i^2\, l_j^2} \left( x_{1i}\, x_{2j}\, C(\mu, x_1, x_2, \Sigma) - x_{1i}\, C^j_x(\mu, x_2, x_1, \Sigma) - x_{2j}\, C^i_x(\mu, x_1, x_2, \Sigma) + C^{ij}_{xx}(\mu, x_1, x_2, \Sigma) \right) \tag{28} \]

The $C$ terms are derived and defined in section 6. Therefore,

\[ C\left[ \frac{\partial k(X, x_*)}{\partial x_{*i}},\ \frac{\partial k(X, x_*)}{\partial x_{*j}}^T \right] = \frac{1}{l_i^2\, l_j^2} \left( X_i X_j^T \odot C(\mu, X, X, \Sigma) - X_i \odot C^j_x(\mu, X, X, \Sigma) - X_j^T \odot C^i_x(\mu, X, X, \Sigma) + C^{ij}_{xx}(\mu, X, X, \Sigma) \right) \tag{29} \]

which is an $N \times N$ matrix; $X_i$ denotes the $i$-th column of $X$, and the element-wise products broadcast the column and row vectors across the matrix. Finally we need $V_{x_*}\big[ -\Lambda^{-1} \tilde{X}^T ( k(x_*, X)^T \odot \alpha ) \big]$, which we can break up and compute element by element,

\[ V_{ij} = \frac{1}{l_i^2\, l_j^2}\, \alpha^T \left( X_i X_j^T \odot C - X_i \odot C^j_x - X_j^T \odot C^i_x + C^{ij}_{xx} \right) \alpha \tag{30} \]

where the arguments $(\mu, X, X, \Sigma)$ have been suppressed.
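The closed-form assembly of equation (22) is intricate, so a Monte Carlo cross-check is useful. The sketch below continues the toy setup from the earlier blocks; it estimates $E_{x_*}[V_f[\cdot]] + V_{x_*}[E_f[\cdot]]$ directly by sampling.

    # Monte Carlo estimate of the two terms of equation (22): E_x[V_f] + V_x[E_f]
    mc = [dposterior_dist(x, X, K, alpha, lam, sf2) for x in xs[:3_000]]
    means = np.array([m for m, _ in mc])
    covs = np.array([c for _, c in mc])
    print(covs.mean(axis=0) + np.cov(means.T))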

5 Differentiating the Squared Exponential Kernel

The squared exponential kernel is given by

\[ k(x_1, x_2) = \sigma_f^2 \exp\left( -\tfrac{1}{2} (x_1 - x_2)^T \Lambda^{-1} (x_1 - x_2) \right) \tag{31} \]

where the hyperparameters are the signal variance $\sigma_f^2$ and a characteristic length-scale for each dimension, $\{ l_i \}_{i=1}^D$. The squared length-scales are collected into a $D \times D$ diagonal matrix $\Lambda$,

\[ \Lambda = \begin{bmatrix} l_1^2 & 0 & \cdots & 0 \\ 0 & l_2^2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & l_D^2 \end{bmatrix} \tag{32} \]

The derivative of this kernel with respect to the first argument is

\[ \frac{\partial k(x_1, x_2)}{\partial x_1} = \frac{\partial}{\partial x_1} \left\{ -\tfrac{1}{2}(x_1 - x_2)^T \Lambda^{-1}(x_1 - x_2) \right\} k(x_1, x_2) = -\tfrac{1}{2} \frac{\partial}{\partial x_1} \left\{ x_1^T \Lambda^{-1} x_1 - x_1^T \Lambda^{-1} x_2 - x_2^T \Lambda^{-1} x_1 + x_2^T \Lambda^{-1} x_2 \right\} k(x_1, x_2) = -\tfrac{1}{2} \left( 2\Lambda^{-1} x_1 - 2\Lambda^{-1} x_2 \right) k(x_1, x_2) = -\Lambda^{-1} (x_1 - x_2)\, k(x_1, x_2) \tag{33} \]

which is a $D \times 1$ vector. We can also take the derivative w.r.t. the second argument,

\[ \frac{\partial k(x_1, x_2)}{\partial x_2} = \frac{\partial}{\partial x_2} \left\{ -\tfrac{1}{2}(x_1 - x_2)^T \Lambda^{-1}(x_1 - x_2) \right\} k(x_1, x_2) = -\tfrac{1}{2} \left( -2\Lambda^{-1} x_1 + 2\Lambda^{-1} x_2 \right) k(x_1, x_2) = \Lambda^{-1} (x_1 - x_2)\, k(x_1, x_2) \tag{34} \]

Note that the only difference between these two derivatives, equations (33) and (34), is the minus sign in equation (33). This comes about because the distance between $x_1$ and $x_2$ is calculated as $x_1 - x_2$, and hence increasing $x_1$ increases the separation and so decreases the covariance; the opposite is true for $x_2$.

It is trivial to extend these results to the case when one of the inputs is a collection of points, such as an $N \times D$ training matrix $X$,

\[ X = [x_1, \ldots, x_N]^T \tag{35} \]
\[ \frac{\partial k(X, x_2)}{\partial x_2} = k(X, x_2) \odot \tilde{X} \Lambda^{-1} \tag{36} \]

where we define $\tilde{X}$ to be the $N \times D$ matrix $[x_1 - x_2, \ldots, x_N - x_2]^T$; note that $k(X, x_2)$ is an $N \times 1$ column vector, and the element-wise product multiplies each row of $\tilde{X} \Lambda^{-1}$ by the corresponding element of $k(X, x_2)$.

Building on equations (33) and (34) we can now find the second derivative cross term,

\[ \frac{\partial^2 k(x_1, x_2)}{\partial x_1 \partial x_2} = \frac{\partial}{\partial x_2} \left\{ -\Lambda^{-1} (x_1 - x_2)\, k(x_1, x_2) \right\} = \Lambda^{-1} \left( I - (x_1 - x_2)(x_1 - x_2)^T \Lambda^{-1} \right) k(x_1, x_2) \tag{37} \]

We can see the following relationship,

\[ \frac{\partial k(x_1, x_2)}{\partial x_1} = -\frac{\partial k(x_1, x_2)}{\partial x_2}, \qquad \frac{\partial^2 k(x_1, x_2)}{\partial x_1 \partial x_2} = \frac{\partial^2 k(x_1, x_2)}{\partial x_2 \partial x_1} = -\frac{\partial^2 k(x_1, x_2)}{\partial x_1 \partial x_1} \tag{38} \]

We can summarise the derivatives as follows,

\[ \frac{\partial k(x_1, x_2)}{\partial x_1} = -\Lambda^{-1} (x_1 - x_2)\, k(x_1, x_2) \tag{39} \]
\[ \frac{\partial^2 k(x_1, x_2)}{\partial x_1 \partial x_2} = \Lambda^{-1} k(x_1, x_2) + \frac{\partial k(x_1, x_2)}{\partial x_1} (x_1 - x_2)^T \Lambda^{-1} \tag{40} \]
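Equations (33) and (37) translate directly into code. The finite-difference comparison below is an assumed sanity check, not part of the original note, and reuses the se_kernel helper defined earlier.

    def dk_dx1(x1, x2, lam, sf2):
        # Equation (33): -Lambda^{-1} (x1 - x2) k(x1, x2), a (D,) vector.
        return -(x1 - x2) / lam * se_kernel(x1[None], x2[None], lam, sf2).item()

    def d2k_dx1dx2(x1, x2, lam, sf2):
        # Equation (37): Lambda^{-1} (I - (x1-x2)(x1-x2)^T Lambda^{-1}) k(x1, x2), (D, D).
        d = x1 - x2
        k = se_kernel(x1[None], x2[None], lam, sf2).item()
        return (np.eye(len(x1)) - np.outer(d, d) / lam) / lam[:, None] * k

    # finite-difference check of equation (33)
    x1, x2, eps = np.array([0.1, 0.7]), np.array([-0.4, 0.2]), 1e-6
    fd = [(se_kernel((x1 + eps * e)[None], x2[None], lam, sf2).item()
           - se_kernel(x1[None], x2[None], lam, sf2).item()) / eps for e in np.eye(2)]
    print(np.array(fd), dk_dx1(x1, x2, lam, sf2))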

To find higher derivatives we need to switch to writing down elements of the derivative. First rephrase the first two derivatives,

\[ \frac{\partial k(x_1, x_2)}{\partial x_{1i}} = -\frac{(x_{1i} - x_{2i})}{l_i^2}\, k(x_1, x_2) \tag{41} \]
\[ \frac{\partial^2 k(x_1, x_2)}{\partial x_{1i}\, \partial x_{2j}} = \left( \frac{\delta_{ij}}{l_i^2} - \frac{(x_{1i} - x_{2i})(x_{1j} - x_{2j})}{l_i^2\, l_j^2} \right) k(x_1, x_2) \tag{42} \]
\[ \frac{\partial^3 k(x_1, x_2)}{\partial x_{1i}\, \partial x_{2j}\, \partial x_{1k}} = -\left( \frac{\delta_{ik}(x_{1j} - x_{2j}) + \delta_{jk}(x_{1i} - x_{2i})}{l_i^2\, l_j^2} + \left( \frac{\delta_{ij}}{l_i^2} - \frac{(x_{1i} - x_{2i})(x_{1j} - x_{2j})}{l_i^2\, l_j^2} \right) \frac{(x_{1k} - x_{2k})}{l_k^2} \right) k(x_1, x_2) \tag{43} \]

and, writing $d = x_1 - x_2$ for brevity,

\[ \frac{\partial^4 k(x_1, x_2)}{\partial x_{1i}\, \partial x_{2j}\, \partial x_{1k}\, \partial x_{2l}} = \left( \frac{\delta_{ik}\delta_{jl} + \delta_{jk}\delta_{il}}{l_i^2 l_j^2} + \frac{\delta_{ij}\delta_{kl}}{l_i^2 l_k^2} - \frac{\delta_{il} d_j d_k + \delta_{jl} d_i d_k + \delta_{kl} d_i d_j}{l_i^2 l_j^2 l_k^2} - \frac{\delta_{ik} d_j d_l + \delta_{jk} d_i d_l}{l_i^2 l_j^2 l_l^2} - \frac{\delta_{ij} d_k d_l}{l_i^2 l_k^2 l_l^2} + \frac{d_i d_j d_k d_l}{l_i^2 l_j^2 l_k^2 l_l^2} \right) k(x_1, x_2) \tag{44} \]

We sometimes need to evaluate these derivatives for $x_1 = x_2$,

\[ k(x_1, x_2)\big|_{x_1 = x_2} = \sigma_f^2 \tag{45} \]
\[ \frac{\partial k(x_1, x_2)}{\partial x_{1i}}\bigg|_{x_1 = x_2} = 0 \tag{46} \]
\[ \frac{\partial^2 k(x_1, x_2)}{\partial x_{1i}\, \partial x_{2j}}\bigg|_{x_1 = x_2} = \frac{\delta_{ij}}{l_i^2}\, \sigma_f^2 \tag{47} \]
\[ \frac{\partial^3 k(x_1, x_2)}{\partial x_{1i}\, \partial x_{2j}\, \partial x_{1k}}\bigg|_{x_1 = x_2} = 0 \tag{48} \]
\[ \frac{\partial^4 k(x_1, x_2)}{\partial x_{1i}\, \partial x_{2j}\, \partial x_{1k}\, \partial x_{2l}}\bigg|_{x_1 = x_2} = \left( \frac{\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}}{l_i^2\, l_j^2} + \frac{\delta_{ij}\delta_{kl}}{l_i^2\, l_k^2} \right) \sigma_f^2 \tag{49} \]
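The limiting values in equations (45) to (47) can be confirmed with the helpers defined above; this check is an assumption added for illustration.

    # equations (45)-(48) at x1 = x2: the kernel equals sf2, odd derivatives vanish,
    # and the cross second derivative is diag(sf2 / l_i^2)
    x = np.array([0.5, -1.0])
    print(se_kernel(x[None], x[None], lam, sf2).item())   # sf2
    print(dk_dx1(x, x, lam, sf2))                         # zeros
    print(d2k_dx1dx2(x, x, lam, sf2))                     # diag(sf2 / lam)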

6 Squared Exponential Kernel Moments

The squared exponential kernel,

\[ k(x_1, x_2) = \sigma_f^2 \exp\left( -\tfrac{1}{2}(x_1 - x_2)^T \Lambda^{-1} (x_1 - x_2) \right) \tag{50} \]
\[ = \sigma_f^2\, (2\pi)^{D/2}\, |\Lambda|^{1/2}\, \mathcal{N}(x_1;\ x_2,\ \Lambda) \tag{51} \]

Its mean under $x \sim \mathcal{N}(\mu, \Sigma)$,

\[ E_x\left[ k(x, x_1) \right] = \sigma_f^2\, |\Lambda|^{1/2}\, |\Lambda + \Sigma|^{-1/2} \exp\left( -\tfrac{1}{2} (\mu - x_1)^T (\Lambda + \Sigma)^{-1} (\mu - x_1) \right) \tag{52} \]
\[ = \left| \Sigma \Lambda^{-1} + I \right|^{-1/2} k(\mu, x_1, \Lambda + \Sigma) \triangleq q(\mu, x_1, \Sigma, \Lambda) \tag{53} \]

The product of two squared exponential kernels,

\[ k(x, x_1, \Lambda)\, k(x, x_2, \Lambda) = k\!\left( \tfrac{x_1}{2}, \tfrac{x_2}{2}, \tfrac{\Lambda}{2} \right) k\!\left( x, \tfrac{x_1 + x_2}{2}, \tfrac{\Lambda}{2} \right) \tag{54} \]
\[ = k\!\left( \tfrac{x_1}{2}, \tfrac{x_2}{2}, \tfrac{\Lambda}{2} \right) \sigma_f^2\, (2\pi)^{D/2} \left| \tfrac{\Lambda}{2} \right|^{1/2} \mathcal{N}\!\left( x;\ \tfrac{x_1 + x_2}{2},\ \tfrac{\Lambda}{2} \right) \tag{55} \]

The mean of a product,

\[ E_x\left[ k(x, x_1)\, k(x, x_2) \right] = k\!\left( \tfrac{x_1}{2}, \tfrac{x_2}{2}, \tfrac{\Lambda}{2} \right) q\!\left( \mu, \tfrac{x_1 + x_2}{2}, \Sigma, \tfrac{\Lambda}{2} \right) \tag{56} \]
\[ = \left| 2\Sigma \Lambda^{-1} + I \right|^{-1/2} k\!\left( \tfrac{x_1}{2}, \tfrac{x_2}{2}, \tfrac{\Lambda}{2} \right) k\!\left( \tfrac{x_1 + x_2}{2},\ \mu,\ \tfrac{\Lambda}{2} + \Sigma \right) \tag{57} \]

Therefore, the covariance of two kernels,

\[ C_x\left[ k(x, x_1),\ k(x, x_2) \right] = E_x\left[ k(x, x_1)\, k(x, x_2) \right] - E_x\left[ k(x, x_1) \right] E_x\left[ k(x, x_2) \right] \tag{58} \]
\[ = \left| 2\Sigma \Lambda^{-1} + I \right|^{-1/2} k\!\left( \tfrac{x_1}{2}, \tfrac{x_2}{2}, \tfrac{\Lambda}{2} \right) k\!\left( \tfrac{x_1 + x_2}{2}, \mu, \tfrac{\Lambda}{2} + \Sigma \right) - \left| \Sigma \Lambda^{-1} + I \right|^{-1} k(\mu, x_1, \Lambda + \Sigma)\, k(\mu, x_2, \Lambda + \Sigma) \triangleq C(\mu, x_1, x_2, \Sigma) \tag{59} \]

The covariance between $x$ times a covariance function and another covariance function: writing $\kappa(\mu, \Sigma, x_1, \Lambda) = \Lambda (\Sigma + \Lambda)^{-1} (\Sigma \Lambda^{-1} x_1 + \mu)$ for the mean of the Gaussian product, as in equation (19),

\[ C_x\left[ x\, k(x, x_1),\ k(x, x_2) \right] = k\!\left( \tfrac{x_1}{2}, \tfrac{x_2}{2}, \tfrac{\Lambda}{2} \right) q\!\left( \mu, \tfrac{x_1 + x_2}{2}, \Sigma, \tfrac{\Lambda}{2} \right) \kappa\!\left( \mu, \Sigma, \tfrac{x_1 + x_2}{2}, \tfrac{\Lambda}{2} \right) - q(\mu, x_1, \Sigma, \Lambda)\, q(\mu, x_2, \Sigma, \Lambda)\, \kappa(\mu, \Sigma, x_1, \Lambda) \triangleq C_x(\mu, x_1, x_2, \Sigma) \tag{60, 61} \]

which is $D \times 1$. The covariance of $x$ times a covariance function with $x$ times another covariance function,

\[ C_{xx}\left[ x\, k(x, x_1),\ x\, k(x, x_2) \right] = k\!\left( \tfrac{x_1}{2}, \tfrac{x_2}{2}, \tfrac{\Lambda}{2} \right) q\!\left( \mu, \tfrac{x_1 + x_2}{2}, \Sigma, \tfrac{\Lambda}{2} \right) \left[ \kappa\!\left( \mu, \Sigma, \tfrac{x_1 + x_2}{2}, \tfrac{\Lambda}{2} \right) \kappa\!\left( \mu, \Sigma, \tfrac{x_1 + x_2}{2}, \tfrac{\Lambda}{2} \right)^T + \Sigma \left( \Sigma + \tfrac{\Lambda}{2} \right)^{-1} \tfrac{\Lambda}{2} \right] - q(\mu, x_1, \Sigma, \Lambda)\, q(\mu, x_2, \Sigma, \Lambda)\, \kappa(\mu, \Sigma, x_1, \Lambda)\, \kappa(\mu, \Sigma, x_2, \Lambda)^T \triangleq C_{xx}(\mu, x_1, x_2, \Sigma) \tag{62} \]

which is $D \times D$.
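The product identity (54) and the mean of a product (57) can both be verified numerically. The sketch below reuses the helpers and toy values from the earlier blocks; the test points and sample sizes are assumptions.

    # product identity, equation (54)
    x = rng.normal(size=2)
    lhs = (se_kernel(x[None], x1[None], lam, sf2).item()
           * se_kernel(x[None], x2[None], lam, sf2).item())
    rhs = (se_kernel((x1 / 2)[None], (x2 / 2)[None], lam / 2, sf2).item()
           * se_kernel(x[None], ((x1 + x2) / 2)[None], lam / 2, sf2).item())
    print(lhs, rhs)

    # Monte Carlo check of the mean of a product, equation (57)
    prod = (se_kernel(xs, x1[None], lam, sf2) * se_kernel(xs, x2[None], lam, sf2)).ravel()
    closed = (se_kernel((x1 / 2)[None], (x2 / 2)[None], lam / 2, sf2).item()
              * expected_k(((x1 + x2) / 2)[None], mu, Sigma, lam / 2, sf2).item())
    print(prod.mean(), closed)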
