Generalized Principal Component Analysis (GPCA)


Generalized Principal Component Analysis (GPCA)

René Vidal    Yi Ma    Shankar Sastry
Department of EECS, University of California, Berkeley, CA
ECE Department, University of Illinois, Urbana, IL

Abstract

We propose an algebraic geometric approach to the problem of estimating a mixture of linear subspaces from sample data points, the so-called Generalized Principal Component Analysis (GPCA) problem. In the absence of noise, we show that GPCA is equivalent to factoring a homogeneous polynomial whose degree is the number of subspaces and whose factors (roots) represent normal vectors to each subspace. We derive a formula for the number of subspaces n and provide an analytic solution to the factorization problem using linear algebraic techniques. The solution is closed form if and only if n ≤ 4. In the presence of noise, we cast GPCA as a constrained nonlinear least squares problem and derive an optimal function from which the subspaces can be directly recovered using standard nonlinear optimization techniques. We apply GPCA to the motion segmentation problem in computer vision, i.e. the problem of estimating a mixture of motion models from 2-D imagery.

1. Introduction

Principal Component Analysis (PCA) [4] refers to the problem of estimating a linear subspace S ⊂ R^K of unknown dimension k < K from N sample points {x_j ∈ S}, j = 1, ..., N. This problem shows up in a variety of applications in many fields, e.g., pattern recognition, data compression, image analysis, and regression, and can be solved in a remarkably simple way from the singular value decomposition (SVD) of the data matrix [x_1, x_2, ..., x_N] ∈ R^{K×N}. Extensions of PCA include probabilistic PCA [9, 2], where the subspace is estimated in a Maximum Likelihood sense using a probabilistic generative model, and nonlinear PCA (NLPCA) [6], where the subspace is estimated after applying a nonlinear embedding to the data.
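For reference, the single-subspace case that GPCA generalizes can be sketched in a few lines of numpy (the variable names and data are ours, not the paper's): the subspace and its dimension are read off the SVD of the data matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# N = 200 samples from a single k = 2 dimensional subspace S of R^3.
basis = rng.standard_normal((3, 2))          # columns span S
X = basis @ rng.standard_normal((2, 200))    # data matrix [x_1 ... x_N]

# PCA via the SVD of the data matrix: the subspace is spanned by the left
# singular vectors with (numerically) nonzero singular values.
U, s, _ = np.linalg.svd(X)
k = int(np.sum(s > 1e-10 * s[0]))            # estimated dimension
S_hat = U[:, :k]                             # orthonormal basis estimate

# In the noise-free case every sample has zero residual against S_hat.
residual = X - S_hat @ (S_hat.T @ X)
```
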
In this paper, we consider an alternative generalization called Generalized Principal Component Analysis (GPCA), in which the sample points {x_j ∈ R^K}, j = 1, ..., N, are drawn from n k-dimensional linear subspaces {S_i} of R^K, as illustrated in Figure 1 for n = 3, k = 2, and K = 3. In this case, the problem becomes one of identifying each subspace without knowing which sample points belong to which subspace.¹

Research supported by grants from ONR and DARPA, and by UIUC ECE Department startup funds.

Figure 1: GPCA for n = 3, k = 2 and K = 3: identifying three 2-dimensional subspaces S_1, S_2, S_3 in R^3 from sample points {x} drawn from these subspaces.

Geometric approaches to mixtures of principal components have been proposed in the computer vision community in the context of 3-D motion segmentation. The main idea is to first segment the data associated with each subspace, and then apply standard PCA to each group. In [5] (see also [1, 3]) it was shown that if the pairwise intersection of the subspaces is trivial, which implies that K ≥ 2k, one can use the SVD of all the data to build a similarity matrix from which the segmentation can be easily extracted. When the subspaces do intersect, the segmentation of the data is usually obtained in an ad-hoc fashion using various clustering algorithms, e.g., K-means. An alternative algebraic solution for the case of two planes in R^3 was proposed in [7] in the context of 2-D segmentation of transparent motions.² See also [14] for 3-D segmentation of two rigid motions.

Probabilistic approaches to mixtures of principal components [8] assume that sample points within each subspace are drawn from an unknown probability distribution. The membership of the data points to each one of the subspaces is modeled with a multinomial distribution whose parameters are referred to as the mixing proportions.
The parameters of this mixture model are estimated in a Maximum Likelihood or Maximum a Posteriori framework as follows: one first estimates the mixing proportions given a current estimate of the subspaces, and then estimates the subspaces given a current estimate of the mixing proportions. This is usually done in an iterative manner using the Expectation Maximization (EM) algorithm.

¹ If the association between sample points and subspaces were known, then the problem would reduce to standard PCA applied to each subspace.
² The authors thank Dr. David Fleet for pointing out this reference.

Unfortunately, EM is

in general sensitive to initialization and may not converge to the global optimum. Another disadvantage is that it is hard to analyze some theoretical questions, such as the existence and uniqueness of a solution to the problem. Also, there are many cases in which it is hard to solve the grouping problem correctly, yet it is possible to obtain a precise estimate of the subspaces. In such cases a direct estimation of the subspaces (without grouping) seems more appropriate than an estimation based on incorrectly segmented data.

1.1. Contributions of this paper

We propose a novel algebraic geometric approach to mixtures of principal components, the so-called Generalized Principal Component Analysis (GPCA) problem. In the absence of noise, we cast GPCA in an algebraic geometric framework in which the number of subspaces n becomes the degree of a certain polynomial, and the normals to each subspace become the factors (roots) of such a polynomial. We show that the number of subspaces n can be obtained from the rank of a certain matrix that depends on the data. Given n, the estimation of the subspaces S_i ⊂ R^K is essentially equivalent to a factorization problem in the space of homogeneous polynomials of degree n in K variables. We prove that the factorization problem has a unique solution, which can be obtained from the roots of a polynomial of degree n in one variable and from the solution of K − 2 linear systems in n variables. Hence, the solution is closed form when n ≤ 4. Unlike previous work, GPCA allows for arbitrary intersections among an arbitrary number of different subspaces and does not require previous knowledge of the segmentation of the data or the number of subspaces. In fact, the subspaces are estimated directly using segmentation-independent constraints that are satisfied by all the points, regardless of the subspace to which they belong. In the presence of noise, we cast GPCA as a constrained nonlinear least squares problem that minimizes the error between the noisy points and their projections onto the subspaces.
By converting this constrained problem into an unconstrained one, we obtain an optimal function from which the subspaces can be directly recovered using standard nonlinear optimization techniques. We show that the optimal objective function is just a normalized version of the algebraic error minimized by our analytic solution to GPCA. Although this means that the analytic solution to GPCA may be sub-optimal in the presence of noise, we can still use it as a global initializer for our nonlinear algorithm or any other iterative algorithm, such as K-means or EM. Our solution to GPCA can be applied to various estimation problems in which the data comes simultaneously from multiple (approximately) linear models. In this paper, we apply GPCA to the motion segmentation problem in computer vision, i.e. the problem of estimating a mixture of motion models from 2-D imagery. Applications to segmentation of static and dynamic textures are forthcoming [10].

2. Problem formulation and analysis

In this paper, we consider the following generalization of principal component analysis (PCA).

Problem 1 (Generalized Principal Component Analysis) Given a set of sample points X = {x_j ∈ R^K}, j = 1, ..., N, drawn from n > 1 distinct linear subspaces {S_i ⊂ R^K} of dimension k, 0 < k < K, identify each subspace S_i without knowing which sample points belong to which subspace. By identifying the subspaces we mean the following:

1. Identify the number of subspaces n and their dimension k;
2. Identify a basis for each subspace S_i (or for its orthogonal complement S_i^⊥);
3. Group or segment the given N data points into the subspace(s) they belong to.

In our analysis of the GPCA problem, we will distinguish between the following two cases: the general case of subspaces of unknown dimension k, where 0 < k < K − 1, and the special case of hyperplanes of known dimension k = K − 1. It turns out that the general case can always be reduced to the special case, as long as all the subspaces {S_i} have the same dimension k (see [10] for the proof).
This is because, from a geometric point of view, the segmentation of a sample set X drawn from k-dimensional subspaces of a space of dimension K > k is preserved after projecting the sample set X onto a generic³ subspace P of dimension k + 1 (≤ K). An example is shown in Figure 2, where two lines L_1 and L_2 in R^3 are projected onto a plane P not orthogonal to the plane containing the lines. In general, there are various technical details involved in reducing the general case 0 < k < K to the special case k = K − 1, e.g., how to determine the dimension of the subspaces k and how to choose the (k + 1)-dimensional subspace P. We refer the reader to [10] for all those details, and concentrate on the special but important case k = K − 1 from now on.

Figure 2: Samples on two 1-dimensional subspaces L_1, L_2 in R^3 projected onto a 2-dimensional plane P. The membership of each sample is preserved through the projection.

³ By generic we mean except for a zero-measure set {P} of subspaces.
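The projection step described above is easy to illustrate numerically. The following sketch (our own construction, not the paper's code) samples two lines in R^3 as in Figure 2, projects onto a randomly chosen, hence generic, plane, and checks that each projected sample still lies on the image of its own line.

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples on two 1-dimensional subspaces (lines) L1, L2 of R^3.
d1, d2 = rng.standard_normal(3), rng.standard_normal(3)
pts = [t * d1 for t in range(1, 6)] + [t * d2 for t in range(1, 6)]
labels = [0] * 5 + [1] * 5

# A randomly chosen (k+1) = 2 dimensional subspace P of R^3 is generic
# with probability one; project the samples orthogonally onto it.
B = np.linalg.qr(rng.standard_normal((3, 2)))[0]   # orthonormal basis of P
proj = B @ B.T

# Each projected sample is parallel to the projection of its own line's
# direction, so the segmentation is preserved.
p1, p2 = proj @ d1, proj @ d2
preserved = all(
    np.linalg.norm(np.cross(proj @ x, p1 if l == 0 else p2)) < 1e-8
    for x, l in zip(pts, labels)
)
```
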

3. A solution to GPCA with k = K − 1

In this section, we give the following constructive solution to the GPCA problem in the case k = K − 1.

Theorem 1 (GPCA with k = K − 1) The GPCA problem with k = K − 1 is algebraically equivalent to factoring a homogeneous polynomial of degree n in K variables into a product of n polynomials of degree 1. The factorization problem can be solved from the roots of a polynomial of degree n in one variable plus K − 2 linear systems in n variables. Thus GPCA with k = K − 1 has a unique solution, which can be obtained in closed form if and only if n ≤ 4.

We establish the equivalence between GPCA and polynomial factorization in Section 3.1. We show how to estimate the number of subspaces n in Section 3.2 and give an analytic solution to the factorization problem in Section 3.3. We summarize the overall algorithm in Section 3.4 and present an optimal algorithm in the presence of noise in Section 3.5.

3.1. GPCA and polynomial factorization

We notice that every (K − 1)-dimensional subspace S_i ⊂ R^K can be represented by a nonzero normal vector b_i ∈ R^K as

S_i = {x ∈ R^K : b_i^T x = b_i1 x_1 + b_i2 x_2 + ... + b_iK x_K = 0}.

Since the subspaces S_i are all distinct from each other, the normal vectors {b_i} are pairwise linearly independent. Imagine that we are given a point x ∈ R^K lying on one of the subspaces S_i. Such a point must satisfy the formula

(b_1^T x = 0) ∨ (b_2^T x = 0) ∨ ... ∨ (b_n^T x = 0),   (1)

which is equivalent to the following homogeneous polynomial of degree n in x with real coefficients:

p_n(x) = ∏_{i=1}^n (b_i^T x) = 0.   (2)

The problem of identifying each subspace S_i is then equivalent to that of solving for the vectors {b_i}, i = 1, ..., n, from the nonlinear equation (2). A standard technique used in algebra to render a nonlinear problem into a linear one is to find an embedding that lifts the problem into a higher-dimensional space. Let R_n(K) = R_n[x_1, ..., x_K] be the set of all homogeneous polynomials of degree n in K variables. We notice that R_n(K) can be made into a vector space under the usual addition and scalar multiplication.
Furthermore, R_n(K) is generated by the set of monomials x^n = x_1^{n_1} x_2^{n_2} ··· x_K^{n_K}, with 0 ≤ n_j ≤ n, j = 1, ..., K, and n_1 + n_2 + ... + n_K = n. Since there are a total of

M_n = (n + K − 1 choose K − 1) = (n + K − 1 choose n)   (3)

different monomials, the dimension of R_n(K) as a vector space is M_n. Therefore, we can define the following embedding (or lifting) from R^K into R^{M_n}.

Definition 1 (Veronese map) Given n and K, the Veronese map of degree n, ν_n : R^K → R^{M_n}, is defined as:

ν_n : [x_1, ..., x_K]^T ↦ [..., x^n, ...]^T,   (4)

where x^n is a monomial of the form x_1^{n_1} x_2^{n_2} ··· x_K^{n_K}, with the exponents (n_1, ..., n_K) chosen in the degree-lexicographic order.

With the so-defined Veronese map (also known as the polynomial embedding), equation (2) becomes the following linear expression in the vector of coefficients c_n ∈ R^{M_n}:

p_n(x) = ν_n(x)^T c_n = Σ c_{n_1,...,n_K} x_1^{n_1} ··· x_K^{n_K} = 0,   (5)

where c_{n_1,...,n_K} ∈ R represents the coefficient of the monomial x^n.

Example 1 The case n = 2 and K = 2 corresponds to segmenting two lines in R^2. These two lines are represented by the polynomial p_2(x) = (b_11 x_1 + b_12 x_2)(b_21 x_1 + b_22 x_2). In this case the Veronese map is ν_2(x) = [x_1^2, x_1 x_2, x_2^2]^T and the coefficients are

c_2 = [b_11 b_21, b_11 b_22 + b_12 b_21, b_12 b_22]^T = [c_{2,0}, c_{1,1}, c_{0,2}]^T.

Given c_2, the slope of each line can be immediately computed from the roots of the polynomial c_{2,0} w^2 + c_{1,1} w + c_{0,2} = 0.

3.2. Estimation of the number of subspaces n

Applying equation (5) to a given collection of N ≥ M_n − 1 sample points {x_j}, j = 1, ..., N, gives the following system of linear equations on the vector of coefficients c_n:

L_n c_n = [ν_n(x_1), ν_n(x_2), ..., ν_n(x_N)]^T c_n = 0 ∈ R^N.   (6)

Since the above linear system (6) depends explicitly on the number of subspaces n, we cannot estimate c_n directly without knowing n in advance. It turns out that the estimation of the number of subspaces n is very much related to the conditions under which the solution for c_n is unique (up to a scale factor), as stated by the following proposition.

Proposition 1 (Number of subspaces) Assume that a collection of N ≥ M_n − 1 sample points {x_j}, j = 1, ..., N, on n different (K − 1)-dimensional subspaces of R^K is given.
Let L_i ∈ R^{N×M_i} be the matrix defined in (6), but computed with the Veronese map ν_i(x) of degree i. If the sample points are in general position and at least K − 1 points correspond to each subspace, then:

rank(L_i)  > M_i − 1,  if i < n;
rank(L_i) = M_i − 1,  if i = n;   (7)
rank(L_i) < M_i − 1,  if i > n.

Therefore, the number of subspaces is given by:

n = min{ i : rank(L_i) = M_i − 1 }.   (8)
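Proposition 1 suggests a direct numerical test. The sketch below is our own (the `veronese` and `sample_on_plane` helpers are hypothetical names): it builds L_i for increasing degree i from noise-free points on n = 2 planes in R^3 and returns the first i with rank(L_i) = M_i − 1, as in (8).

```python
import numpy as np
from itertools import combinations_with_replacement

K = 3
rng = np.random.default_rng(2)

def veronese(x, i):
    # Degree-i Veronese map of x in R^K: all monomials of degree i,
    # enumerated in degree-lexicographic order (eq. (4)).
    combos = combinations_with_replacement(range(len(x)), i)
    return np.array([np.prod([x[j] for j in t]) for t in combos])

def sample_on_plane(b, m):
    # m random points on the plane {x : b^T x = 0}.
    P = np.eye(K) - np.outer(b, b) / (b @ b)   # projector onto the plane
    U, s, _ = np.linalg.svd(P)                 # s = (1, 1, 0)
    return (U[:, :K - 1] @ rng.standard_normal((K - 1, m))).T

normals = [rng.standard_normal(K) for _ in range(2)]       # n = 2 planes
X = np.vstack([sample_on_plane(b, 40) for b in normals])   # N x K data

def estimate_n(X, max_n=4, tol=1e-8):
    # n = min { i : rank(L_i) = M_i - 1 }   (Proposition 1, eq. (8))
    for i in range(1, max_n + 1):
        L = np.array([veronese(x, i) for x in X])   # L_i is N x M_i
        if np.linalg.matrix_rank(L, tol) == L.shape[1] - 1:
            return i
    return None

n_hat = estimate_n(X)
```

For i = 1 the 80×3 matrix L_1 has full rank 3; for i = 2 the one-dimensional null space of L_2 is spanned by the coefficients of p_2(x), so the loop stops at i = 2.
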

The intuition behind Proposition 1 (see [10] for the proof) is that there is no polynomial of degree i < n that is satisfied by all the data, hence rank(L_i) = M_i for i < n. Conversely, there are multiple polynomials of degree i > n, namely any multiple of p_n(x), which are satisfied by all the data, hence rank(L_i) < M_i − 1 for i > n. Thus the case i = n is the only one in which system (6) has a unique solution, namely the coefficients c_n of the polynomial p_n(x).

Remark 1 In the presence of noise, one cannot directly estimate n from (8), because the matrix L_i is always full rank. In practice we declare the rank of L_i to be r if σ_{r+1}/(σ_1 + ... + σ_r) < ε, where σ_k is the k-th singular value of L_i and ε > 0 is a pre-specified threshold. We have found this simple criterion to work well in our experiments.

Remark 2 In the case of subspaces of arbitrary dimension k, where 0 < k < K, one can derive a rank condition similar to (8) from which one can jointly estimate n and k. Such a rank condition in GPCA is a natural generalization of the rank condition k = rank(L_1) in standard PCA [10].

3.3. Estimation of the subspaces {S_i}

Proposition 1 and the linear system in equation (6) allow us to determine the number of subspaces n and the vector of coefficients c_n, respectively, from the sample points {x_j}. The rest of the problem becomes now how to recover the normal vectors {b_i} from c_n. From (2) and (5) we have

p_n(x) = Σ c_{n_1,...,n_K} x^n = ∏_{i=1}^n ( Σ_{j=1}^K b_ij x_j ).

Therefore, recovering {b_i} from c_n is equivalent to factoring a given homogeneous polynomial p_n(x) ∈ R_n(K) into n distinct polynomials in R_1(K).⁴ We now present a polynomial factorization algorithm that recovers the b_i's from c_n. For simplicity, we will first present an example with the case of two planes in R^3, i.e. n = 2 and K = 3, because it gives most of the intuition about our general algorithm for arbitrary n and K.

Example 2 Consider the case n = 2 and K = 3.
Then

p_2(x) = (b_11 x_1 + b_12 x_2 + b_13 x_3)(b_21 x_1 + b_22 x_2 + b_23 x_3)
       = (b_1^T x)(b_2^T x) = [x_1^2, x_1 x_2, x_1 x_3, x_2^2, x_2 x_3, x_3^2] c_2
       = (b_11 b_21) x_1^2 + (b_11 b_22 + b_12 b_21) x_1 x_2 + (b_11 b_23 + b_13 b_21) x_1 x_3
         + (b_12 b_22) x_2^2 + (b_12 b_23 + b_13 b_22) x_2 x_3 + (b_13 b_23) x_3^2,

with the six coefficients labeled c_{2,0,0}, c_{1,1,0}, c_{1,0,1}, c_{0,2,0}, c_{0,1,1} and c_{0,0,2}, respectively.

⁴ One can interpret c_n as a vector representation of the symmetric part of the tensor b_1 ⊗ ··· ⊗ b_n. In this case estimating {b_i} from c_n is equivalent to factoring such a symmetric tensor. The factorization algorithm we are about to present can be thought of as an SVD for symmetric tensors.

We notice that the last three terms c_{0,2,0} x_2^2 + c_{0,1,1} x_2 x_3 + c_{0,0,2} x_3^2 correspond to a polynomial in x_2 and x_3 only, which is equal to the product of the last two terms of the original factors, i.e. (b_12 x_2 + b_13 x_3)(b_22 x_2 + b_23 x_3). After dividing by x_3^2 and letting w = x_2/x_3 we obtain

q_2(w) ≐ c_{0,2,0} w^2 + c_{0,1,1} w + c_{0,0,2} = (b_12 w + b_13)(b_22 w + b_23).

Since c_2 ∈ R^6 is known, so is the second order polynomial q_2(w). Thus we can obtain b_13/b_12 and b_23/b_22 from the roots w_1 and w_2 of q_2(w). Since b_1 and b_2 are only computable up to a scale factor, we can actually divide c_2 by c_{0,2,0} (if nonzero) and set the last two entries of b_1 and b_2 to be b_12 = 1, b_13 = −w_1, b_22 = 1, and b_23 = −w_2. We are left with the computation of the first entry of b_1 and b_2. We notice that the coefficients c_{1,1,0} and c_{1,0,1} are linear functions of the unknowns b_11 and b_21. Therefore, if b_22 b_13 − b_23 b_12 ≠ 0, i.e. if w_1 ≠ w_2, then we can obtain b_11 and b_21 from the linear system

[ b_22  b_12 ] [ b_11 ]   [ c_{1,1,0} ]
[ b_23  b_13 ] [ b_21 ] = [ c_{1,0,1} ].   (9)

We conclude from Example 2 that, except for the degenerate cases c_{0,2,0} = b_12 b_22 = 0 or b_22 b_13 − b_23 b_12 = 0, the factorization of a homogeneous polynomial of degree n = 2 in K = 3 variables can be done as follows:

1. Solve for the last two entries of {b_i} from the roots of a polynomial q_n(w) associated with the last n + 1 coefficients of p_n(x).

2.
Solve for the first K − 2 entries of {b_i} by solving K − 2 linear systems in n variables.

We now show in Sections 3.3.1 and 3.3.2 how the above example can be generalized to arbitrary n and K, except for some degenerate cases. We analyze such degenerate cases in Section 3.3.3 and briefly outline how to handle them. We summarize the overall algorithm in Section 3.4.

3.3.1. Solving for the last two entries of each b_i

Consider the last n + 1 coefficients of p_n(x):

[c_{0,...,0,n,0}, c_{0,...,0,n−1,1}, ..., c_{0,...,0,0,n}]^T ∈ R^{n+1},

which define the following homogeneous polynomial of degree n in the two variables x_{K−1} and x_K:

Σ c_{0,...,0,n_{K−1},n_K} x_{K−1}^{n_{K−1}} x_K^{n_K} = ∏_{i=1}^n (b_{iK−1} x_{K−1} + b_{iK} x_K).

Letting w = x_{K−1}/x_K, we obtain

∏_{i=1}^n (b_{iK−1} x_{K−1} + b_{iK} x_K) = 0  ⟺  ∏_{i=1}^n (b_{iK−1} w + b_{iK}) = 0.

Hence the roots of the polynomial

q_n(w) = c_{0,...,0,n,0} w^n + c_{0,...,0,n−1,1} w^{n−1} + ... + c_{0,...,0,0,n}

are exactly w_i = −b_{iK}/b_{iK−1}, for all i = 1, ..., n. Therefore, after dividing c_n by c_{0,...,0,n,0}, we obtain the last two entries of each b_i as:

(b_{iK−1}, b_{iK}) = (1, −w_i).   (10)

If b_{iK−1} = 0 for some i, then some of the leading coefficients of q_n(w) are zero and we cannot proceed as before, because q_n(w) has less than n roots. More specifically, assume that the first l coefficients of q_n(w) are zero and divide c_n by the (l+1)-st coefficient. In this case, we can choose (b_{iK−1}, b_{iK}) = (0, 1) for i = 1, ..., l, and obtain {(b_{iK−1}, b_{iK})}, i = l+1, ..., n, from the n − l roots of q_n(w) using equation (10). Finally, if all the coefficients of q_n(w) are zero, we set (b_{iK−1}, b_{iK}) = (0, 0) for all i = 1, ..., n.

3.3.2. Solving for the first K − 2 entries of each b_i

We have demonstrated how to obtain the last two entries of each b_i from the roots of a polynomial of degree n in one variable. We are now left with the computation of the first K − 2 entries of each b_i. We assume that we have computed {b_ij}, j = J + 1, ..., K, for some J, starting with J = K − 2, and show how to linearly solve for {b_iJ}, i = 1, ..., n. As in Example 2, the key is to consider the coefficients of p_n(x) which are linear in x_J. These coefficients are of the form c_{0,...,0,1,n_{J+1},...,n_K} and are linear in b_iJ. To see this, notice that the polynomial

Σ c_{0,...,0,1,n_{J+1},...,n_K} x_{J+1}^{n_{J+1}} ··· x_K^{n_K}

is equal to the partial derivative of p_n(x) with respect to x_J evaluated at x_1 = x_2 = ... = x_J = 0. Since

∂/∂x_J ∏_{i=1}^n (b_i^T x) = Σ_{i=1}^n b_iJ ( ∏_{l=1}^{i−1} (b_l^T x) ) ( ∏_{l=i+1}^n (b_l^T x) ),

after evaluating at x_1 = x_2 = ... = x_J = 0 we obtain

Σ c_{0,...,0,1,n_{J+1},...,n_K} x_{J+1}^{n_{J+1}} ··· x_K^{n_K} = Σ_{i=1}^n b_iJ g_i^J(x),   (11)

where

g_i^J(x) = ( ∏_{l=1}^{i−1} Σ_{j=J+1}^K b_lj x_j ) ( ∏_{l=i+1}^n Σ_{j=J+1}^K b_lj x_j )   (12)

is a homogeneous polynomial of degree n − 1 in the last K − J variables of x. Let V_i^J be the vector of coefficients of the polynomial g_i^J(x). Notice that the vectors {V_i^J} are known, because they are functions of the known b_ij's, for j ≥ J + 1.
Therefore we can use equation (11) to solve for the unknowns {b_iJ} from the linear system

[ V_1^J  V_2^J  ···  V_n^J ] [ b_1J ; b_2J ; ... ; b_nJ ] = [ c_{0,...,0,1,n−1,0,...,0} ; c_{0,...,0,1,n−2,1,...,0} ; ... ; c_{0,...,0,1,0,0,...,n−1} ].   (13)

3.3.3. Degenerate cases

In order for the linear system in (13) to have a unique solution, the column vectors {V_i^J} (in the matrix on the left hand side) must be linearly independent. We showed in [10] that this is indeed the case if and only if for all r ≠ s, 1 ≤ r, s ≤ n, the vectors (b_{rJ+1}, b_{rJ+2}, ..., b_{rK}) and (b_{sJ+1}, b_{sJ+2}, ..., b_{sK}) are pairwise linearly independent. This latter condition is always satisfied, except for some degenerate cases described in Remark 3 below. In those degenerate cases, as long as the original polynomial p_n(x) has n distinct factors, one can always perform an invertible linear transformation on the data points

x ↦ x′ = T x,  T ∈ R^{K×K},   (14)

that induces a linear transformation on the vector of coefficients c_n ↦ c_n′ = T_n c_n, T_n ∈ R^{M_n×M_n}, such that the new vectors (b′_{rJ+1}, b′_{rJ+2}, ..., b′_{rK}) are pairwise linearly independent. We refer the reader to [10] for further details on the solution of these degenerate cases.

Remark 3 (Degenerate cases) There are essentially three cases in which the vectors (b_{rJ+1}, b_{rJ+2}, ..., b_{rK}) are not pairwise linearly independent:

1. The original polynomial p_n(x) is such that the polynomial q_n(w) has repeated roots, e.g., p_3(x) = (x_1 + x_2 + x_3)(x_1 + 2x_2 + 2x_3)(x_1 + 2x_2 + x_3).

2. The polynomial q_n(w) associated with some factorable p_n(x), e.g., p_n(x) = (x_1 + x_3) x_3, has more than one zero leading coefficient. In this case we have (b_{i2}, b_{i3}) = (0, 1) for more than one i.

3. The original polynomial p_n(x) is not factorable. This happens, for example, when the vector of coefficients c_n is corrupted with noise. In this case the polynomial q_n(w) may have complex roots, e.g., p(x) = x_1^2 + x_2^2 + x_2 x_3 + x_3^2, and one could project these complex roots onto their real parts.
This typically introduces repeated real roots in the resulting polynomial; e.g., after the projection the above polynomial p(x) becomes x_1^2 + x_2^2 + x_2 x_3 + (1/4) x_3^2, whose q_2(w) = w^2 + w + 1/4 has the repeated real root −1/2.

3.4. GPCA algorithm for k = K − 1

Algorithm 1 (GPCA algorithm for the case k = K − 1) Given sample points {x_j}, j = 1, ..., N, find the number of subspaces n and their normals {b_i ∈ R^K} as follows:

1. Apply the Veronese map of degree i, for i = 1, 2, ..., to the vectors {x_j}, j = 1, ..., N, and form the matrix L_i ∈ R^{N×M_i} as in (6). Stop when rank(L_i) = M_i − 1 and set the number of subspaces n to be the current i. Then solve for the vector of coefficients c_n ∈ R^{M_n} from the linear system L_n c_n = 0 and normalize so that ‖c_n‖ = 1.

2. (a) Divide c_n by the first nonzero coefficient of q_n(w).
   (b) If the first l, 0 ≤ l ≤ n, coefficients of q_n(w) are equal to zero, set (b_{iK−1}, b_{iK}) = (0, 1) for i = 1, ..., l. Compute {(b_{iK−1}, b_{iK})}, i = l+1, ..., n, from the n − l roots of q_n(w) using (10).
   (c) If all the coefficients of q_n(w) are zero, set (b_{iK−1}, b_{iK}) = (0, 0) for i = 1, ..., n.
   (d) If (b_{rK−1}, b_{rK}) is parallel to (b_{sK−1}, b_{sK}) for some r ≠ s, apply the transformation x ↦ x′ in (14) and repeat 2(a), 2(b) and 2(c) for the new polynomial p_n(x′) to obtain {(b′_{iK−1}, b′_{iK})}.

3. Given (b_{iK−1}, b_{iK}), i = 1, ..., n, solve for {b_iJ} from (13) for J = K − 2, ..., 1. If a transformation T ∈ R^{K×K} was used in 2(d), then set b_i = T^T b′_i.

3.5. Optimal GPCA in the presence of noise

In the previous section, we proposed a linear algorithm for estimating a collection of n subspaces from sample data points {x_j}, j = 1, ..., N, lying on those subspaces. In essence, Algorithm 1 solves for the normal vectors {b_i} from the set of nonlinear equations ∏_{i=1}^n (b_i^T x_j) = 0, j = 1, ..., N. From an optimization point of view, Algorithm 1 gives a linear solution to the nonlinear least squares problem

min_{b_1,...,b_n ∈ S^{K−1}}  Σ_{j=1}^N ( ∏_{i=1}^n (b_i^T x_j) )^2,   (15)

where S^{K−1} is the unit sphere in R^K. In this section, we derive an optimal algorithm for reconstructing the subspaces when the sample data points are corrupted with i.i.d. zero-mean Gaussian noise. We show that the optimal solution can be obtained by minimizing a function similar to the algebraic error in (15), but properly normalized. Since our derivation is based on segmentation-independent constraints, we do not need to model the membership of each data point with a probability distribution. Therefore, we do not need to iterate between model estimation and data segmentation, as most iterative techniques do, e.g., K-means and EM. Instead, our approach eliminates the data segmentation step algebraically and solves the GPCA problem by directly optimizing over the normals to each subspace. Let {x_j}, j = 1, ..., N, be the given collection of noisy data points.
We would like to find a collection of subspaces {S_i} such that the corresponding noise free data points {x̃_j}, j = 1, ..., N, lie on those subspaces. That is, we would like to solve the constrained nonlinear least squares optimization problem

min  Σ_{j=1}^N ‖x_j − x̃_j‖^2   (16)
subject to  ∏_{i=1}^n (b_i^T x̃_j) = 0,  j = 1, ..., N.

By using Lagrange multipliers λ_j for each constraint, the above optimization problem is equivalent to minimizing the Lagrangian function

Σ_{j=1}^N ( ‖x_j − x̃_j‖^2 + λ_j ∏_{i=1}^n (b_i^T x̃_j) ).   (17)

After taking partial derivatives w.r.t. x̃_j we obtain

2(x̃_j − x_j) + λ_j Σ_{i=1}^n b_i ∏_{l≠i} (b_l^T x̃_j) = 0,   (18)

from which we can solve for λ_j/2 as

λ_j/2 = Σ_i (b_i^T x_j) ∏_{l≠i} (b_l^T x̃_j) / ‖ Σ_i b_i ∏_{l≠i} (b_l^T x̃_j) ‖^2.   (19)

Similarly, after premultiplying (18) by (x̃_j − x_j)^T, we get

‖x_j − x̃_j‖^2 = (λ_j/2) Σ_i (b_i^T x_j) ∏_{l≠i} (b_l^T x̃_j).   (20)

Replacing (19) and (20) in the objective function (16) gives

Ẽ({x̃_j}, {b_i}) = Σ_{j=1}^N ( Σ_i (b_i^T x_j) ∏_{l≠i} (b_l^T x̃_j) )^2 / ‖ Σ_i b_i ∏_{l≠i} (b_l^T x̃_j) ‖^2.   (21)

We can obtain an objective function on the normal vectors only by considering first order statistics of c_n^T ν_n(x_j). Since this is equivalent to setting x̃_j = x_j in (21), we obtain the simplified objective function

E_n(b_1, ..., b_n) = Σ_{j=1}^N ( ∏_{i=1}^n (b_i^T x_j) )^2 / ‖ Σ_i b_i ∏_{l≠i} (b_l^T x_j) ‖^2,   (22)

which is essentially the same as the algebraic error (15), but properly normalized according to the chosen noise model. By construction, the error function in (22) does not depend on the segmentation of the data, hence it can be used to directly recover the subspace normals {b_i} from a set of N ≥ n(K − 1) data points {x_j}. One can use Algorithm 1 to obtain an initial estimate for n and {b_i} and then use standard nonlinear optimization techniques to minimize (22). However, Algorithm 1 requires a much larger number of points N ≥ M_n − 1, because it uses an overparameterized representation c_n ∈ R^{M_n} of the normal vectors.

Remark 4 The optimal error in (21) has a very intuitive interpretation. If point j belongs to group i, then b_i^T x̃_j = 0.
Thus the contribution of point j to Ẽ reduces to

( (b_i^T x_j) ∏_{l≠i} (b_l^T x̃_j) )^2 / ‖ b_i ∏_{l≠i} (b_l^T x̃_j) ‖^2 = (b_i^T x_j)^2,   (23)

which is the optimal function to minimize for subspace i. Therefore, the optimal error Ẽ is just a clever algebraic way of writing a mixture of optimal functions for each subspace into a single objective function for all the subspaces.
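For the noise-free case n = 2, K = 3, steps 2 and 3 of the factorization (Example 2 and equations (9)-(10)) can be sketched as follows. The numeric normals are hypothetical, chosen so that none of the degenerate cases of Remark 3 occurs (b_12, b_22 nonzero, distinct roots).

```python
import numpy as np

# Hypothetical normals of two planes in R^3 (non-degenerate configuration).
b1 = np.array([2.0, 1.0, 3.0])
b2 = np.array([-1.0, 1.0, 2.0])

# Coefficients of p_2(x) = (b1^T x)(b2^T x) in the monomial order
# [x1^2, x1x2, x1x3, x2^2, x2x3, x3^2] of Example 2.
c = np.array([b1[0]*b2[0],
              b1[0]*b2[1] + b1[1]*b2[0],
              b1[0]*b2[2] + b1[2]*b2[0],
              b1[1]*b2[1],
              b1[1]*b2[2] + b1[2]*b2[1],
              b1[2]*b2[2]])

# Step 1: roots of q_2(w) = c_{0,2,0} w^2 + c_{0,1,1} w + c_{0,0,2} give
# w_i = -b_{i3}/b_{i2}; normalize so that (b_{i2}, b_{i3}) = (1, -w_i).
c = c / c[3]                                 # divide by c_{0,2,0}
w = np.roots([c[3], c[4], c[5]])
B = np.array([[1.0, -w[0].real],
              [1.0, -w[1].real]])            # rows hold (b_i2, b_i3)

# Step 2: c_{1,1,0} and c_{1,0,1} are linear in (b11, b21), eq. (9).
A = np.array([[B[1, 0], B[0, 0]],
              [B[1, 1], B[0, 1]]])
b11, b21 = np.linalg.solve(A, np.array([c[1], c[2]]))
est1 = np.array([b11, B[0, 0], B[0, 1]])
est2 = np.array([b21, B[1, 0], B[1, 1]])
```

Since b_12 = b_22 = 1 for the chosen normals, the two factors are recovered exactly (in some order); in general each normal is recovered only up to scale.
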

4. Applications of GPCA

In this section, we test GPCA on synthetic data and present various applications to 2-D and 3-D motion segmentation from 2-D imagery.

4.1. Experiments on synthetic data

We first test GPCA (Algorithm 1) and optimal GPCA (see Section 3.5) on synthetically generated data. We randomly pick n = 2, 3, 4 collections of N = 600 points on k = 2 dimensional subspaces of R^3. Zero-mean Gaussian noise with standard deviation from 0% to 5% is added to the sample points. We run 1000 trials for each noise level. For each trial the error between the true (unit) normals {b_i} and their estimates {b̂_i} is computed as

error = (1/n) Σ_{i=1}^n acos(b_i^T b̂_i)  (degrees).   (24)

Figure 3 plots the mean error as a function of the noise level. In all the trials, the number of subspaces was correctly estimated from equation (8) as n = 2, 3, 4.⁵ Notice that the estimates of the algebraic algorithm (left) are within 3.8, 8.5 and 13.3 degrees of the ground truth for n = 2, n = 3 and n = 4, respectively, while the estimates of the optimal algorithm (right) are within 3.1, 6.4 and 9.7 degrees of the ground truth for n = 2, 3 and 4, respectively. This is expected, because the algebraic algorithm uses an overparameterized representation c_n ∈ R^{M_n} of the normal vectors [b_1, ..., b_n] ∈ R^{K×n}. Notice also that, as expected, the performance of both algorithms deteriorates as n increases.

4.2. Segmentation of 2-D translational motions

Consider an image sequence whose 2-D motion field can be modeled as a mixture of purely translational motion models. That is, we assume that the optical flow u = [u, v, 1]^T ∈ P^2 in a window around every pixel can take one out of n possible values {u_i}, where the number of models n is unknown. Under the Lambertian model, the optical flow u at pixel x = [x_1, x_2, 1]^T ∈ P^2 is related to the partials of the image intensity y = [I_{x_1}, I_{x_2}, I_t]^T ∈ R^3 at x by the well-known brightness constancy constraint (BCC)

y^T u = I_{x_1} u + I_{x_2} v + I_t = 0.

Thus the estimation of multiple translational motion models can be cast as a GPCA problem with k = 2 and K = 3, i.e. the segmentation of planes in R^3.
The optical flows {u_i} correspond to the normals to the planes, and the image partial derivatives {y_j}, j = 1, ..., N, are the data points. Furthermore, we interpret the polynomial p_n(y) = ∏_{i=1}^n (u_i^T y) = ũ^T ν_n(y) = 0 as the multibody brightness constancy constraint (MBCC) and the vector of coefficients ũ ∈ R^{M_n}, where M_n = (n+1)(n+2)/2, as the multibody optical flow.

⁵ We used a pre-specified threshold ε to compute the rank of L_n.

Figure 3: Error in the estimation of the subspaces as a function of noise for GPCA (left) and optimal GPCA (right), for n = 2, 3, 4.

Figure 4: Frames from the flower garden sequence (left) and the image partials projected onto the I_{x_1}-I_t plane (right).

Figure 5: Segmentation of two frames from the flower garden sequence using a mixture of three translational models: (a) tree, (b) houses, (c) grass.

Since the MBCC incorporates multiple motion models, one can use a larger window in the computation of optical flow without having the problem of integrating image data across motion boundaries. Figures 4 and 5 show the extreme situation in which the whole image is used to estimate three translational models for the flower garden sequence. Figure 4 shows two frames of the sequence and the image partials for one frame projected onto the I_{x_1}-I_t plane to facilitate visualization. We observe that the image partials lie approximately on three planes through the origin, although the data is noisy and contains many outliers. We estimated three motion models by applying Algorithm 1 to the image data, followed by the nonlinear algorithm described in Section 3.5. Figure 5 shows the segmentation of the image pixels for two frames of the flower garden sequence according to the estimated motion models. Although we used a simple mixture of three translational motions to model the 2-D motion field of the sequence, a good segmentation of the tree, the houses and the grass is obtained. We did not cluster pixels with low texture (y ≈ 0), e.g., pixels in the sky, since they can be assigned to either of the three models.
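Once the flow models {u_i} have been estimated, the per-pixel segmentation used above amounts to assigning each gradient vector y to the model that best satisfies the BCC. A minimal sketch with synthetic, noise-free gradients and two assumed (hypothetical) flow models:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two assumed translational flow models u_i = [u, v, 1]^T (hypothetical).
models = [np.array([1.0, 0.0, 1.0]), np.array([0.0, -2.0, 1.0])]

def sample_gradients(u, m):
    # Synthetic gradients y = [I_x1, I_x2, I_t]^T satisfying y^T u = 0,
    # i.e. noise-free data on the plane with normal u.
    P = np.eye(3) - np.outer(u, u) / (u @ u)
    return (P @ rng.standard_normal((3, m))).T

Y = np.vstack([sample_gradients(u, 50) for u in models])
true_labels = np.array([0] * 50 + [1] * 50)

# Segmentation step: assign each pixel's y to the model minimizing |y^T u_i|.
residuals = np.abs(Y @ np.stack(models, axis=1))   # N x n matrix of |y^T u_i|
labels = residuals.argmin(axis=1)
```
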

Remark 5 (Affine motion segmentation) GPCA can also be applied to the estimation of a mixture of affine motion models {A_i ∈ R^{3×3}} from image data {(x_j, y_j)}, j = 1, ..., N. In this case the optical flow is modeled with the affine model u = A_i x, thus the BCC becomes y^T A_i x = 0. Affine motion segmentation is then equivalent to estimating {A_i} from the multibody affine constraint ∏_{i=1}^n (y^T A_i x) = 0. This can be done by factoring this product of bilinear forms. This problem can be reduced to a collection of GPCA problems with k = K − 1 = 2, as demonstrated in [11].

4.3. Segmentation of linearly moving objects

Consider the problem of segmenting the 3-D motion of multiple objects undergoing a linear motion. That is, we assume that the scene can be modeled as a mixture of purely translational motion models {e_i ∈ R^3}, where e_i represents the epipole (translation) of object i relative to the camera between two consecutive frames. Therefore, given the images x_1 ∈ P^2 and x_2 ∈ P^2 of a point on object i in the first and second frame, respectively, the rays x_1, x_2 and e_i must satisfy the well-known epipolar constraint for linear motions x_2^T (e_i × x_1) = 0. Since the epipolar constraint can be conveniently rewritten as e_i^T (x_2 × x_1) = 0, the segmentation of linear motions is a GPCA problem with K = 3 and k = 2, where the data points are the epipolar lines l = x_2 × x_1 ∈ R^3 and the normal vectors are the epipoles {e_i}. Furthermore, we interpret the polynomial p_n(l) = ∏_{i=1}^n (e_i^T l) = ẽ^T ν_n(l) = 0 as the multibody epipolar constraint and the vector of coefficients ẽ ∈ R^{M_n} as the multibody epipole.

We tested GPCA on a sequence with n = 2 linearly moving objects. Figure 6(a) shows the first frame with N = 92 tracked features: 44 for the truck and 48 for the car. Figure 6(b) plots the segmentation of the features. There are no mismatches. The estimation error for the epipoles was 3.3 degrees for the truck and 1.2 degrees for the car.

Figure 6: Segmentation of n = 2 linearly moving objects: (a) first frame, (b) feature segmentation into truck and car.
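For a single linear motion the construction above is easy to verify numerically: all epipolar lines l = x_2 × x_1 are orthogonal to the epipole, so e is the null vector of the stacked lines. A sketch with a hypothetical epipole and synthetic, noise-free correspondences:

```python
import numpy as np

rng = np.random.default_rng(4)
e = np.array([0.2, -0.5, 1.0])      # hypothetical epipole (one linear motion)

# Rays in two frames for points translating along e: x2 = x1 + depth * e,
# so x1, x2 and e are coplanar and e^T (x2 x x1) = 0.
X1 = rng.standard_normal((20, 3))
X2 = X1 + rng.uniform(0.5, 2.0, (20, 1)) * e

# Epipolar lines l_j = x2 x x1; the epipole spans the null space of their stack.
L = np.cross(X2, X1)
_, _, Vt = np.linalg.svd(L)
e_hat = Vt[-1] / Vt[-1][2]          # normalize the third entry to 1
```

With n > 1 motions the same data would instead feed the multibody epipolar constraint, with the epipoles recovered by the factorization of Section 3.3.
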
Remark 6 (Multibody structure from motion (MSFM)) GPCA can also be applied to the problem of estimating a mixture of fundamental matrices {F_i ∈ R^{3×3}} from image pairs {(x_1^j, x_2^j)}, j = 1, ..., N. In this case the epipolar constraint reads x_2^T F_i x_1 = 0 and the multibody epipolar constraint reads ∏_{i=1}^n (x_2^T F_i x_1) = 0. The MSFM problem is then equivalent to factoring this product of bilinear forms. Such a problem can be reduced to a collection of GPCA problems with k = K − 1 = 2, as shown in [13, 12].

5. Discussion and open issues

We have proposed a novel algebraic geometric approach to the identification of mixtures of subspaces (GPCA). We derived a formula for estimating the number of subspaces and showed that GPCA is equivalent to estimating and factoring homogeneous polynomials. In the absence of noise, we presented an analytic solution to the factorization problem based on linear algebraic techniques. In the presence of noise, we presented a nonlinear algorithm that minimizes the optimal error. We tested GPCA on synthetic data and presented various applications to 2-D and 3-D motion segmentation.

Open issues include an analysis of the robustness of the polynomial factorization algorithm in the presence of noise. At present the algorithm works well when the number and dimension of the subspaces is small, but the performance deteriorates as the number of subspaces increases. This is because the algorithm uses an overparameterized representation of the normal vectors that needs at least N ≥ M_n − 1 points, as opposed to the n(K − 1) points needed by the nonlinear algorithm. Whether it is possible to avoid working in a space of dimension M_n by using something similar to the kernel trick in NLPCA [6] remains an open question.

References

[1] T. Boult and L. Brown. Factorization-based segmentation of motions. In IEEE Workshop on Motion Understanding.
[2] M. Collins, S. Dasgupta, and R. Schapire. A generalization of principal component analysis to the exponential family. In Advances in Neural Information Processing Systems (NIPS), volume 14.
[3] J. Costeira and T. Kanade.
A multibody factorization method for independently moving objects. IJCV, 29(3).
[4] I. Jolliffe. Principal Component Analysis. Springer-Verlag, New York.
[5] K. Kanatani. Motion segmentation by subspace separation and model selection. In ICCV, volume 2.
[6] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5).
[7] M. Shizawa and K. Mase. A unified computational theory for motion transparency and motion boundaries based on eigenenergy analysis. In CVPR.
[8] M. Tipping and C. Bishop. Mixtures of probabilistic principal component analyzers. Neural Computation, 11(2).
[9] M. Tipping and C. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, 61(3).
[10] R. Vidal. Generalized Principal Component Analysis (GPCA). PhD thesis, University of California at Berkeley.
[11] R. Vidal and S. Sastry. Segmentation of dynamic scenes from image intensities. In IEEE Workshop on Motion and Video Computing, pages 44–49.
[12] R. Vidal and S. Sastry. Optimal segmentation of dynamic scenes from the multibody epipolar constraint. In CVPR.
[13] R. Vidal, S. Soatto, Y. Ma, and S. Sastry. Segmentation of dynamic scenes from the multibody fundamental matrix. In ECCV Workshop on Visual Modeling of Dynamic Scenes.
[14] L. Wolf and A. Shashua. Two-body segmentation from two perspective views. In CVPR, 2001.
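The point counts in the discussion above can be made concrete: the coefficient vector of a homogeneous polynomial of degree n in K variables lives in a space of dimension M_n = C(n + K − 1, n), so the linear factorization algorithm needs at least M_n − 1 points while the nonlinear algorithm needs only n(K − 1). A minimal sketch (the helper name `M` is mine):

```python
from math import comb

def M(n, K):
    """Number of monomials of degree n in K variables,
    i.e. the dimension of the degree-n Veronese embedding of R^K."""
    return comb(n + K - 1, n)

# For K = 3 (the motion segmentation problems in the paper), compare the
# N >= M_n - 1 points needed by the linear algorithm with the n(K - 1)
# needed by the nonlinear one; the gap widens as n grows.
for n in range(1, 7):
    print(n, M(n, 3) - 1, n * (3 - 1))
```

For n = 2, K = 3 this gives M_2 = 6, matching the six coefficients of the multibody epipole ẽ in the two-object experiment.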


More information

Kinetics of Complex Reactions

Kinetics of Complex Reactions Kietics of Complex Reactios by Flick Colema Departmet of Chemistry Wellesley College Wellesley MA 28 wcolema@wellesley.edu Copyright Flick Colema 996. All rights reserved. You are welcome to use this documet

More information

Resampling Methods. X (1/2), i.e., Pr (X i m) = 1/2. We order the data: X (1) X (2) X (n). Define the sample median: ( n.

Resampling Methods. X (1/2), i.e., Pr (X i m) = 1/2. We order the data: X (1) X (2) X (n). Define the sample median: ( n. Jauary 1, 2019 Resamplig Methods Motivatio We have so may estimators with the property θ θ d N 0, σ 2 We ca also write θ a N θ, σ 2 /, where a meas approximately distributed as Oce we have a cosistet estimator

More information

MATH10212 Linear Algebra B Proof Problems

MATH10212 Linear Algebra B Proof Problems MATH22 Liear Algebra Proof Problems 5 Jue 26 Each problem requests a proof of a simple statemet Problems placed lower i the list may use the results of previous oes Matrices ermiats If a b R the matrix

More information

6 Integers Modulo n. integer k can be written as k = qn + r, with q,r, 0 r b. So any integer.

6 Integers Modulo n. integer k can be written as k = qn + r, with q,r, 0 r b. So any integer. 6 Itegers Modulo I Example 2.3(e), we have defied the cogruece of two itegers a,b with respect to a modulus. Let us recall that a b (mod ) meas a b. We have proved that cogruece is a equivalece relatio

More information

Polynomials with Rational Roots that Differ by a Non-zero Constant. Generalities

Polynomials with Rational Roots that Differ by a Non-zero Constant. Generalities Polyomials with Ratioal Roots that Differ by a No-zero Costat Philip Gibbs The problem of fidig two polyomials P(x) ad Q(x) of a give degree i a sigle variable x that have all ratioal roots ad differ by

More information

1 Last time: similar and diagonalizable matrices

1 Last time: similar and diagonalizable matrices Last time: similar ad diagoalizable matrices Let be a positive iteger Suppose A is a matrix, v R, ad λ R Recall that v a eigevector for A with eigevalue λ if v ad Av λv, or equivaletly if v is a ozero

More information

3. Z Transform. Recall that the Fourier transform (FT) of a DT signal xn [ ] is ( ) [ ] = In order for the FT to exist in the finite magnitude sense,

3. Z Transform. Recall that the Fourier transform (FT) of a DT signal xn [ ] is ( ) [ ] = In order for the FT to exist in the finite magnitude sense, 3. Z Trasform Referece: Etire Chapter 3 of text. Recall that the Fourier trasform (FT) of a DT sigal x [ ] is ω ( ) [ ] X e = j jω k = xe I order for the FT to exist i the fiite magitude sese, S = x [

More information