Machine Learning for Data Science (CS4786) Lecture 4


1 Machine Learning for Data Science (CS4786) Lecture 4: Canonical Correlation Analysis (CCA). Course Webpage :

2 Announcements: We are grading HW0 and you will be added to CMS by Monday. HW1 will be posted tonight on the webpage (homework tab). HW1 is on CCA and PCA (due in a week).

3 QUIZ: Let $X$ be the matrix whose rows are the points $x_1^\top, \dots, x_n^\top$, and assume the points are centered. Which of the following are equal to the covariance matrix?
A. $\Sigma = \frac{1}{n}\sum_{t=1}^{n} x_t^\top x_t$
B. $\Sigma = \frac{1}{n}\sum_{t=1}^{n} x_t x_t^\top$
C. $\Sigma = \frac{1}{n} X X^\top$
D. $\Sigma = \frac{1}{n} X^\top X$
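
A quick numerical check of the quiz options (a sketch added here, not from the slides), assuming the convention that X stacks the centered points as rows:

```python
# Sketch (not from the slide): checking the candidate expressions numerically,
# assuming the rows of X are the centered points x_t.
import numpy as np

rng = np.random.default_rng(4)
n, d = 500, 3
X = rng.normal(size=(n, d))
X = X - X.mean(axis=0)                          # center the points

S_outer = sum(np.outer(x, x) for x in X) / n    # (1/n) sum_t x_t x_t^T
S_matrix = X.T @ X / n                          # (1/n) X^T X

print(np.allclose(S_outer, S_matrix))           # True: both give the covariance
print(np.allclose(S_matrix, np.cov(X, rowvar=False, bias=True)))  # True
```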

4 Example: Students in a classroom. [Figure: 3-d scatter of students; axes x, y, z]

5 Maximize Spread. Minimize Reconstruction Error.

6 PRINCIPAL COMPONENT ANALYSIS: Eigenvectors of the covariance matrix are the principal components; the top K principal components are the eigenvectors with the K largest eigenvalues.
1. $\Sigma = \mathrm{cov}(X)$
2. $W = \mathrm{eigs}(\Sigma, K)$ (top K eigenvectors)
3. $Y = (X - \mu) W$ (Projection = Data × top K eigenvectors)
Independently discovered by Pearson in 1901 and Hotelling in 1933.

7 RECONSTRUCTION: 4. $\hat{X} = Y W^\top + \mu$ (Reconstruction = Projection × transpose of top K eigenvectors)
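
Steps 1-4 can be collected into a short NumPy sketch (an illustration under the slides' conventions; `pca` is a hypothetical helper name, and X holds one d-dimensional point per row):

```python
# A minimal PCA sketch following steps 1-4 above.
import numpy as np

def pca(X, K):
    mu = X.mean(axis=0)                          # center the data
    Sigma = (X - mu).T @ (X - mu) / len(X)       # 1. covariance matrix
    evals, evecs = np.linalg.eigh(Sigma)         # 2. eigendecomposition (ascending order)
    W = evecs[:, ::-1][:, :K]                    #    keep the top K eigenvectors
    Y = (X - mu) @ W                             # 3. projection Y = (X - mu) W
    return Y, W, mu

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y, W, mu = pca(X, K=2)
X_hat = Y @ W.T + mu                             # 4. reconstruction
```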

8 WHEN d >> n: If $d \gg n$ then $\Sigma$ (a $d \times d$ matrix) is large, but we only need the top K eigenvectors. Idea: use the SVD. Write $X - \mu = U D V^\top$ with $V^\top V = I$ and $U^\top U = I$. Then note that $\Sigma = \frac{1}{n}(X - \mu)^\top (X - \mu) = \frac{1}{n} V D^2 V^\top$. Hence the matrix $V$ is the same as the matrix $W$ obtained from the eigendecomposition of $\Sigma$, and the eigenvalues are the diagonal elements of $D^2/n$. Alternative algorithm: $[U, D, V] = \mathrm{SVD}(X - \mu, K)$, $W = V$.
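
A sketch of this SVD route (using NumPy's thin SVD; the eigenvalue scaling $D^2/n$ matches the $1/n$ covariance convention above):

```python
# PCA via SVD for d >> n: never forms the d x d covariance matrix.
import numpy as np

def pca_via_svd(X, K):
    mu = X.mean(axis=0)
    U, D, Vt = np.linalg.svd(X - mu, full_matrices=False)  # X - mu = U D V^T
    W = Vt[:K].T                     # columns of V = eigenvectors of the covariance
    eigvals = D[:K] ** 2 / len(X)    # eigenvalues of the covariance matrix
    return (X - mu) @ W, W, eigvals

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 1000))      # n = 50 points in d = 1000 dimensions
Y, W, eigvals = pca_via_svd(X, K=3)
```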

9 WHEN TO USE PCA? When data naturally lies in a low-dimensional linear subspace; to minimize reconstruction error; to find directions where data is maximally spread.

10 Canonical Correlation Analysis. [Figure: the classroom data with directions labeled Age + Gender and Angle; axes x, y, z]


12 TWO VIEW DIMENSIONALITY REDUCTION: Data comes in pairs $(x_1, x'_1), \dots, (x_n, x'_n)$ where the $x_t$'s are d-dimensional and the $x'_t$'s are d'-dimensional. Goal: compress, say, view one into $y_1, \dots, y_n$, which are K-dimensional vectors. Retain information redundant between the two views; eliminate noise specific to only one of the views.

13 EXAMPLE I: SPEECH RECOGNITION: Audio might have background sounds uncorrelated with video; video might have lighting changes uncorrelated with audio. Redundant information between the two views: the speech.

14 EXAMPLE II: COMBINING FEATURE EXTRACTIONS: Method A and Method B are both equally good feature extraction techniques. Concatenating the two features blindly yields a large-dimensional feature vector with redundancy. Applying techniques like CCA extracts the key information shared between the two methods and removes extra unwanted information.

15 How do we get the right direction? (say K = 1) [Figure: candidate directions labeled Age + Gender and Angle]

16 WHICH DIRECTION TO PICK? [Figure: scatter plots of View I and View II]

17 WHICH DIRECTION TO PICK? [Figure: the PCA direction in each view]

18 WHICH DIRECTION TO PICK? Direction has large covariance.

19 How do we pick the right direction to project to?

20 MAXIMIZING CORRELATION COEFFICIENT: Say $w_1$ and $v_1$ are the directions we choose to project in views 1 and 2 respectively. We want these directions to maximize
$$\frac{1}{n}\sum_{t=1}^{n}\big(y_t[1] - \bar{y}[1]\big)\big(y'_t[1] - \bar{y}'[1]\big)$$
subject to
$$\frac{1}{n}\sum_{t=1}^{n}\big(y_t[1] - \bar{y}[1]\big)^2 = \frac{1}{n}\sum_{t=1}^{n}\big(y'_t[1] - \bar{y}'[1]\big)^2 = 1$$
where $y_t[1] = w_1^\top x_t$, $y'_t[1] = v_1^\top x'_t$, $\bar{y}[1] = \frac{1}{n}\sum_{t=1}^{n} y_t[1]$ and $\bar{y}'[1] = \frac{1}{n}\sum_{t=1}^{n} y'_t[1]$.


22 What is the problem with the above?

23 WHY NOT MAXIMIZE COVARIANCE: Say $\frac{1}{n}\sum_{t=1}^{n} x_t[2]\, x'_t[2] > 0$. By scaling up this coordinate we can blow up the covariance, while the relevant information is unchanged.

24 MAXIMIZING CORRELATION COEFFICIENT: Say $w_1$ and $v_1$ are the directions we choose to project in views 1 and 2 respectively. We want these directions to maximize
$$\frac{\frac{1}{n}\sum_{t=1}^{n}\big(y_t[1] - \bar{y}[1]\big)\big(y'_t[1] - \bar{y}'[1]\big)}{\sqrt{\frac{1}{n}\sum_{t=1}^{n}\big(y_t[1] - \bar{y}[1]\big)^2}\;\sqrt{\frac{1}{n}\sum_{t=1}^{n}\big(y'_t[1] - \bar{y}'[1]\big)^2}}$$
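
As a concrete check, the ratio above is just the empirical correlation of the two projection sequences; a small sketch (`correlation_of_projections` is a hypothetical helper, not from the slides):

```python
# The correlation coefficient being maximized, for candidate directions
# w1, v1 on paired views X (n x d) and Xp (n x d').
import numpy as np

def correlation_of_projections(X, Xp, w1, v1):
    y = X @ w1
    yp = Xp @ v1
    y = y - y.mean()                 # centered projections y_t[1]
    yp = yp - yp.mean()              # centered projections y'_t[1]
    n = len(y)
    return (y @ yp / n) / np.sqrt((y @ y / n) * (yp @ yp / n))

rng = np.random.default_rng(2)
X, Xp = rng.normal(size=(100, 3)), rng.normal(size=(100, 4))
print(correlation_of_projections(X, Xp, np.ones(3), np.ones(4)))
```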

25 BASIC IDEA OF CCA: Normalize the variance in the chosen direction to be constant (say 1), then maximize covariance. This is the same as maximizing the correlation coefficient (recall from last class).

26 COVARIANCE VS CORRELATION:
$$\mathrm{Covariance}(A, B) = E[(A - E[A])(B - E[B])]$$
Depends on the scale of A and B: if B is rescaled, the covariance shifts.
$$\mathrm{Correlation}(A, B) = \frac{E[(A - E[A])(B - E[B])]}{\sqrt{\mathrm{Var}(A)}\sqrt{\mathrm{Var}(B)}}$$
Scale free.
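
A small numerical illustration of this point (and of the scaling issue from slide 23), with assumed toy data:

```python
# Rescaling B changes the covariance but leaves the correlation untouched --
# which is why CCA normalizes the variance before maximizing.
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(size=1000)
b = a + rng.normal(size=1000)

def cov(u, v):
    return np.mean((u - u.mean()) * (v - v.mean()))

def corr(u, v):
    return cov(u, v) / np.sqrt(cov(u, u) * cov(v, v))

print(cov(a, b), cov(a, 100 * b))    # covariance blows up by a factor of 100
print(corr(a, b), corr(a, 100 * b))  # correlation is scale free
```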

27 MAXIMIZING CORRELATION COEFFICIENT: Recall (slide 20) that we want $w_1$ and $v_1$ to maximize
$$\frac{1}{n}\sum_{t=1}^{n}\big(y_t[1] - \bar{y}[1]\big)\big(y'_t[1] - \bar{y}'[1]\big)$$
subject to
$$\frac{1}{n}\sum_{t=1}^{n}\big(y_t[1] - \bar{y}[1]\big)^2 = \frac{1}{n}\sum_{t=1}^{n}\big(y'_t[1] - \bar{y}'[1]\big)^2 = 1$$
where $y_t[1] = w_1^\top x_t$ and $y'_t[1] = v_1^\top x'_t$.

28 CANONICAL CORRELATION ANALYSIS: Hence we want to solve for projection vectors $w_1$ and $v_1$ that maximize
$$\frac{1}{n}\sum_{t=1}^{n} w_1^\top (x_t - \mu)\; v_1^\top (x'_t - \mu')$$
subject to
$$\frac{1}{n}\sum_{t=1}^{n}\big(w_1^\top (x_t - \mu)\big)^2 = \frac{1}{n}\sum_{t=1}^{n}\big(v_1^\top (x'_t - \mu')\big)^2 = 1$$
where $\mu = \frac{1}{n}\sum_{t=1}^{n} x_t$ and $\mu' = \frac{1}{n}\sum_{t=1}^{n} x'_t$.


30 CANONICAL CORRELATION ANALYSIS: Hence we want to solve for projection vectors $w_1$ and $v_1$ that maximize $w_1^\top \Sigma_{1,2} v_1$ subject to $w_1^\top \Sigma_{1,1} w_1 = v_1^\top \Sigma_{2,2} v_1 = 1$. Writing the Lagrangian, taking the derivative and equating it to 0, we get
$$\Sigma_{1,2} v_1 = \lambda\, \Sigma_{1,1} w_1 \quad\text{and}\quad \Sigma_{2,1} w_1 = \lambda\, \Sigma_{2,2} v_1$$
and hence
$$\Sigma_{1,2}\Sigma_{2,2}^{-1}\Sigma_{2,1} w_1 = \lambda^2 \Sigma_{1,1} w_1 \quad\text{and}\quad \Sigma_{2,1}\Sigma_{1,1}^{-1}\Sigma_{1,2} v_1 = \lambda^2 \Sigma_{2,2} v_1,$$
or equivalently
$$\Sigma_{1,1}^{-1}\Sigma_{1,2}\Sigma_{2,2}^{-1}\Sigma_{2,1} w_1 = \lambda^2 w_1 \quad\text{and}\quad \Sigma_{2,2}^{-1}\Sigma_{2,1}\Sigma_{1,1}^{-1}\Sigma_{1,2} v_1 = \lambda^2 v_1.$$

31 SOLUTION:
$$W_1 = \mathrm{eigs}\big(\Sigma_{1,1}^{-1}\Sigma_{1,2}\Sigma_{2,2}^{-1}\Sigma_{2,1},\, K\big)$$
$$W_2 = \mathrm{eigs}\big(\Sigma_{2,2}^{-1}\Sigma_{2,1}\Sigma_{1,1}^{-1}\Sigma_{1,2},\, K\big)$$

32 CCA ALGORITHM:
1. Write $\tilde{x}_t = \begin{pmatrix} x_t \\ x'_t \end{pmatrix}$, the $(d + d')$-dimensional concatenated vectors, and calculate the covariance matrix of the joint data points:
$$\Sigma = \mathrm{cov}(\tilde{X}) = \begin{pmatrix} \Sigma_{1,1} & \Sigma_{1,2} \\ \Sigma_{2,1} & \Sigma_{2,2} \end{pmatrix}$$
2. Calculate $\Sigma_{1,1}^{-1}\Sigma_{1,2}\Sigma_{2,2}^{-1}\Sigma_{2,1}$; the top K eigenvectors of this matrix give the projection matrix for view I: $W_1 = \mathrm{eigs}(\Sigma_{1,1}^{-1}\Sigma_{1,2}\Sigma_{2,2}^{-1}\Sigma_{2,1}, K)$.
3. Calculate $\Sigma_{2,2}^{-1}\Sigma_{2,1}\Sigma_{1,1}^{-1}\Sigma_{1,2}$; the top K eigenvectors of this matrix give the projection matrix for view II: $W_2 = \mathrm{eigs}(\Sigma_{2,2}^{-1}\Sigma_{2,1}\Sigma_{1,1}^{-1}\Sigma_{1,2}, K)$.
4. Project: $Y = (X - \mu) W_1$.
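
The four steps translate into a short NumPy sketch; the small ridge `eps` and the toy two-view data are assumptions added here (not from the slide) to keep the matrix inverses stable:

```python
# Sketch of the CCA algorithm above for paired views X (n x d) and Xp (n x d').
import numpy as np

def cca(X, Xp, K, eps=1e-8):
    n, d = X.shape
    Xc, Xpc = X - X.mean(axis=0), Xp - Xp.mean(axis=0)
    # 1. covariance of the concatenated (d + d')-dimensional points
    Z = np.hstack([Xc, Xpc])
    S = Z.T @ Z / n
    S11, S12 = S[:d, :d] + eps * np.eye(d), S[:d, d:]
    S21, S22 = S[d:, :d], S[d:, d:] + eps * np.eye(Xp.shape[1])
    # 2. top K eigenvectors of S11^{-1} S12 S22^{-1} S21 -> view I projections
    M1 = np.linalg.solve(S11, S12) @ np.linalg.solve(S22, S21)
    # 3. top K eigenvectors of S22^{-1} S21 S11^{-1} S12 -> view II projections
    M2 = np.linalg.solve(S22, S21) @ np.linalg.solve(S11, S12)
    def top_k_eigvecs(M):
        evals, evecs = np.linalg.eig(M)          # M is not symmetric in general
        order = np.argsort(-evals.real)[:K]
        return evecs[:, order].real
    W1, W2 = top_k_eigvecs(M1), top_k_eigvecs(M2)
    # 4. project each view
    return Xc @ W1, Xpc @ W2

rng = np.random.default_rng(5)
shared = rng.normal(size=(200, 2))               # information shared by both views
X = np.hstack([shared, rng.normal(size=(200, 3))])
Xp = np.hstack([shared @ rng.normal(size=(2, 2)), rng.normal(size=(200, 4))])
Y1, Y2 = cca(X, Xp, K=2)
```

Solving the two eigenproblems separately mirrors slide 31; in practice one might instead solve a single generalized eigenproblem or use SVD on whitened cross-covariances for better conditioning.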
