A Novel Biometric Feature Extraction Algorithm using Two Dimensional Fisherface in 2DPCA subspace for Face Recognition
R. M. Mutelo, W. L. Woo, and S. S. Dlay
School of Electrical, Electronic and Computer Engineering
University of Newcastle
Newcastle upon Tyne, NE1 7RU
UNITED KINGDOM

Abstract: - This paper describes a novel algorithm, 2D-FPCA, for face feature extraction and representation. The new algorithm fuses the two dimensional Fisherface method with two dimensional principal component analysis (2DPCA). The algorithm operates on two dimensional image matrices, so a total image covariance matrix can be constructed directly from the original image matrices and its eigenvectors derived for feature extraction. Similarly, the between-class and within-class image covariance matrices are constructed and transformed to the 2DPCA subspace. The result is that 2D-FPCA is faster and yields greater recognition accuracy. The ORL database is used as a benchmark: the new algorithm achieves a recognition rate of 95.50%, compared to 90.00% for the Fisherface method.

Key-Words: - Biometrics, Recognition, Feature Extraction, Image Representation, two dimensional Fisherface, two dimensional principal component analysis.

1 Introduction
The early strategies for face identification were based on geometrical features such as nose width and length, mouth position and chin shape [1]. Almost all of the approaches developed in recent years are instead based on holistic representations known as templates. In [2] a comparison of geometrical feature based matching with template (statistical feature) matching is presented, and the result favours the statistical feature based approach. The most commonly used statistical representation for face recognition is Eigenfaces [3], which uses principal component analysis (PCA) for dimensionality reduction. The resulting templates, however, are not always good for discrimination among classes. Belhumeur et al. [4] proposed the Fisherface method, which is based on Fisher's Linear Discriminant and produces well separated classes in a low-dimensional subspace.
Nonlinear PCA-related methods such as independent component analysis (ICA) and kernel principal component analysis (Kernel PCA) have also been proposed for feature extraction [5-6]. These methods are superior to representations based on plain PCA, but Kernel PCA and ICA are both computationally more expensive. In all of the PCA-based techniques above, the 2D face image matrices must first be transformed into 1D image vectors. The resulting image vectors of faces usually lead to a high dimensional image vector space, where it is difficult to evaluate the covariance matrix accurately due to its large size and the relatively small number of training samples. Yang et al. [7] proposed a straightforward image projection technique, called two dimensional principal component analysis (2DPCA), which is based directly on analysis of the original image matrices. Building on the image-matrix idea, Li et al. [8] proposed two dimensional linear discriminant analysis (2D-LDA), in which the Fisher linear projection criterion is used to find a good projection for feature extraction. In this paper, a straightforward image projection technique called two dimensional Fisherface Principal Component Analysis, 2D-FPCA, for feature extraction is presented. Our method uses the Fisher linear projection criterion to find a good projection in the 2DPCA subspace. Performing 2DFLD in this new subspace has three important advantages over Fisherface. Firstly, it is easier to evaluate the covariance matrices accurately. Secondly, less time is required to determine the corresponding eigenvectors. Finally, the face recognition accuracy increases significantly.

2 2D-FPCA
2.1 Idea and Algorithm
Suppose there are C pattern classes in the training set {A_i}, i = 1, ..., N, where N denotes the size of the training set. The i-th class contains n_i samples, so that N = \sum_i n_i. The mean image of the whole training set is denoted \bar{A},
and \bar{A}_i, i = 1, ..., C, denotes the mean image of class i. The j-th training image of the i-th class is denoted A_{ij}, an m x n image matrix, and X denotes an n-dimensional unitary column vector. Our novel idea is to project an image A onto the 2DPCA subspace by the following linear transformation [7], [9]:

Y = AX    (1)

We obtain an m-dimensional projected vector Y, the feature vector of the image A. We need to determine a good projection vector X. By definition, PCA finds a projection direction X, onto which all samples are projected, so that the total scatter of the resulting projected samples is maximized. The total covariance (scatter) of the projected samples is characterized by the trace (tr) of the covariance matrix of the projected feature vectors:

J(X) = tr(S_t)    (2)

where

S_t = E[(Y - EY)(Y - EY)^T] = E{[(A - EA)X][(A - EA)X]^T}    (3)

Hence J(X) = tr(S_t) = X^T E[(A - EA)^T (A - EA)] X. We define the total image covariance matrix as

G_t = E[(A - EA)^T (A - EA)]    (4)

Therefore, (2) can be written as

J(X) = X^T G_t X    (5)

The unitary projection vector X that maximizes (5) is called the optimal projection axis X_opt, which is the eigenvector of G_t corresponding to the largest eigenvalue. It is not enough to have only one optimal projection axis: we need to select a set of projection axes X_1, ..., X_d in such a way that not much discriminant information is lost while noise is reduced. The selected eigenvectors X_1, ..., X_d contain principal components (PCs) which retain both the within-class variations (unwanted information) and the between-class variations (important discriminant information). Once the 2DPCA subspace has been created, we use Fisher's linear projection criterion in this new space to find a good projection that maximizes the ratio of the scatter among the face classes to the scatter within each face class.
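As an illustration of this 2DPCA stage, the following sketch (in Python with NumPy, which the paper itself does not use; the function names are ours, not the authors') builds the total image covariance matrix G_t from a stack of image matrices and keeps its d leading eigenvectors as projection axes:

```python
import numpy as np

def twodpca_axes(images, d):
    """Top-d eigenvectors of the total image covariance matrix G_t.

    images: array of shape (N, m, n). Returns U of shape (n, d),
    whose columns are the projection axes X_1, ..., X_d.
    """
    mean = images.mean(axis=0)                 # EA, the m x n mean image
    centered = images - mean                   # A_i - EA for every sample
    # G_t = E[(A - EA)^T (A - EA)], an n x n matrix
    Gt = np.einsum('kmi,kmj->ij', centered, centered) / len(images)
    vals, vecs = np.linalg.eigh(Gt)            # eigh: ascending eigenvalues
    return vecs[:, np.argsort(vals)[::-1][:d]] # keep the d leading axes

def project(images, U):
    # Y_i = A_i U: each m x n image becomes an m x d feature matrix
    return images @ U
```

Because G_t is only n x n (rather than mn x mn as in vectorized PCA), its eigendecomposition stays cheap even for modest image sizes.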
The between-class image covariance (scatter) matrix S_B and the within-class image covariance (scatter) matrix S_W of the input space can be transformed to the 2DPCA subspace as S_BB and S_WW by

S_BB = X^T S_B X    (6)
S_WW = X^T S_W X    (7)

We derive S_BB as follows:

S_BB = \sum_{i=1}^{C} n_i (\bar{Y}_i - \bar{Y})^T (\bar{Y}_i - \bar{Y})
     = \sum_{i=1}^{C} n_i [(\bar{A}_i - \bar{A})X]^T [(\bar{A}_i - \bar{A})X]
     = X^T [ \sum_{i=1}^{C} n_i (\bar{A}_i - \bar{A})^T (\bar{A}_i - \bar{A}) ] X    (8)

where

S_B = \sum_{i=1}^{C} n_i (\bar{A}_i - \bar{A})^T (\bar{A}_i - \bar{A})    (9)

Therefore

S_BB = X^T S_B X    (10)

Similarly,

S_WW = \sum_{i=1}^{C} \sum_{j=1}^{n_i} (Y_{ij} - \bar{Y}_i)^T (Y_{ij} - \bar{Y}_i)
     = X^T [ \sum_{i=1}^{C} \sum_{j=1}^{n_i} (A_{ij} - \bar{A}_i)^T (A_{ij} - \bar{A}_i) ] X    (11)

where

S_W = \sum_{i=1}^{C} \sum_{j=1}^{n_i} (A_{ij} - \bar{A}_i)^T (A_{ij} - \bar{A}_i)    (12)

Therefore

S_WW = X^T S_W X    (13)

We refer to S_BB and S_WW as the between-class and within-class feature covariance (scatter) matrices, with dimensions d x d, where d is the number of eigenvectors of G_t that we choose. The main idea at this stage is to project each feature matrix Y onto a unitary column vector Z:

R = YZ    (14)

To measure the class separability of the projected samples R_i, i = 1, 2, ..., N, we determine the ratio of the traces of the between-class scatter \tilde{S}_B and within-class scatter \tilde{S}_W of the projected samples:

J(Z) = tr(\tilde{S}_B) / tr(\tilde{S}_W)    (15)

Using a similar idea as in (2), this reduces to

J(Z) = (Z^T S_BB Z) / (Z^T S_WW Z)

This is the Fisher linear projection criterion. The unitary vector Z that maximizes (15) is called the Fisher optimal projection axis Z_opt. It means that the projected image samples along the direction Z have the minimal within-class image scatter and the
maximal between-class image scatter in the subspace spanned by Z. If S_WW is nonsingular, the solution to the above optimization problem is to solve the generalized eigenvalue problem [3-4]:

S_BB Z_opt = \lambda S_WW Z_opt,   i.e.   S_WW^{-1} S_BB Z_opt = \lambda Z_opt    (16)

In (16), \lambda is an eigenvalue of S_WW^{-1} S_BB. In traditional LDA we are faced with the singularity problem; 2D-FPCA overcomes this problem successfully, as Li et al. [8] showed that the within-class image covariance (scatter) matrix is non-singular in real situations. In general, it is not enough to have only one Fisher optimal projection axis. We usually need to select a set of projection axes Z_1, ..., Z_g, subject to orthonormality constraints.

2.2 Feature Extraction
The optimal projection vectors of 2DPCA, X_1, ..., X_d, are used for the first feature extraction step. For a given image sample A:

Y_k = A X_k,   k = 1, ..., d.    (17)

The principal component vectors obtained are used to form an m x d matrix Y = [Y_1, ..., Y_d], which is called the feature matrix or feature image of the image sample A. We refer to the Y_k as the principal component (vectors) of the sample A. Here we can think of Y as a smaller image matrix, with less noise, derived from the sample image A. The Fisher feature vectors R_p then form an m x g (g <= d) Fisher feature matrix R = [R_1, ..., R_g] for the image sample A as follows:

R_p = Y Z_p,   p = 1, ..., g.    (18)

It should be noted that each Fisher feature of Fisherface is a scalar, whereas each Fisher feature of 2D-FPCA is a vector.

2.3 Classification Method
After the transformation by Z, a Fisher feature matrix is obtained for each image. Then, a nearest neighbour classifier is used for classification. Here, the distance between two feature matrices R^(1) = [R_1^(1), ..., R_g^(1)] and R^(2) = [R_1^(2), ..., R_g^(2)] is defined as

D(R^(1), R^(2)) = \sum_{p=1}^{g} || R_p^(1) - R_p^(2) ||_2    (19)

where || . ||_2 denotes the Euclidean distance between the two principal component vectors R_p^(1) and R_p^(2). Given a test sample R, if D(R, R_l) = min_k D(R, R_k) and R_l belongs to class w_i, then we decide that R belongs to class w_i.

Fig. 1 Five sample images of one subject in the ORL database

3 Image Reconstruction
Image reconstruction for 2D-FPCA can be performed in the following way. The original image can be reconstructed using (17) and (18).
From (17), Y_k = A X_k, k = 1, ..., d. Let U = [X_1, ..., X_d]; then (17) can be written as

Y = AU    (20)

Since X_1, ..., X_d are orthonormal, it is easy to obtain the reconstructed image of a sample A:

\tilde{A} = Y U^T = \sum_{k=1}^{d} Y_k X_k^T    (21)

But Y needs to be obtained from (18). Let V = [Z_1, ..., Z_g]; we can rewrite (18) as

Y \approx R V^T = \sum_{p=1}^{g} R_p Z_p^T    (22)

Hence \tilde{A} = R V^T U^T, which is the same size as the image sample A and represents the reconstructed image. That is, the image A can be approximately reconstructed by adding up the first g and first d sub-images. In particular, when the selected numbers of principal component vectors satisfy g = d and d = n, we have \tilde{A} = A, i.e. the image is completely reconstructed by its principal component vectors without any loss of information. Otherwise, if either g < d or d < n, the reconstructed image \tilde{A} is an approximation of A.

4 Analysis
The new 2D-FPCA method was used for face recognition. The performance of our proposed algorithm was tested on a well-known face image database from the Olivetti Research Laboratory (ORL database). The ORL database [10] was used to evaluate the performance of 2D-FPCA under conditions where the pose and the number of 2DPCA eigenvectors varied.
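The reconstruction of Section 3 can be sketched in a few lines (a minimal NumPy sketch; the function name and the convention that U = [X_1, ..., X_d] and V = [Z_1, ..., Z_g] hold the axes as columns are our assumptions):

```python
import numpy as np

def reconstruct(R, U, V):
    """Approximate an image from its Fisher feature matrix.

    R: m x g Fisher feature matrix of the image,
    U: n x d matrix of 2DPCA axes [X_1, ..., X_d],
    V: d x g matrix of Fisher axes [Z_1, ..., Z_g].
    Returns the m x n reconstructed image A_tilde = R V^T U^T.
    """
    Y = R @ V.T    # recover the 2DPCA feature matrix, Y ~= R V^T
    return Y @ U.T # back to image space, A_tilde ~= Y U^T
```

When g = d and d = n and both axis sets are orthonormal, the reconstruction is exact; otherwise it is a lossy approximation, as the text notes.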
4.1 Analysis using the ORL benchmark Database
This publicly available database comprises 40 individuals, each providing 10 different images. For some subjects, the images were taken at different times. There are also within-class variations in facial expression (e.g. eyes open or closed, smiling or not smiling), and some subjects are occluded by accessories such as glasses. All the images were taken against a dark background with the subjects in an upright, frontal position, with tolerance for some side movement and for tilting and rotation of the face of up to 20 degrees. The size of each image is 112 x 92 pixels, with 256 grey levels per pixel. Five samples from the ORL database are shown in Fig. 1.

First, a face recognition experiment was performed using five image samples per class for training, and the remaining images for testing. The images were normalized to 56 x 46 pixels. The total numbers of training samples and testing samples were both 200. The 2D-FPCA algorithm was used for feature extraction. Here, the total image covariance G_t, the within-class image covariance S_W, and the between-class image covariance S_B are all n x n matrices, so it was relatively easy to evaluate the eigenvectors of G_t. We chose the eigenvectors X_1, ..., X_d corresponding to the nine (d = 9) largest eigenvalues as projection axes. After the projection of each image sample onto these axes using (17), we obtain nine principal component vectors. We then transformed S_W and S_B with the same eigenvectors using (6) and (7) to obtain S_WW and S_BB in the 2DPCA subspace. These new d x d covariance matrices are called the within-class and between-class feature covariance matrices, S_WW and S_BB. It is therefore relatively easy to calculate the eigenvectors of S_WW^{-1} S_BB compared to those of S_W and S_B. This results in faster and more accurate evaluation of S_BB and S_WW. The evaluation of G_t does not have to be as accurate as that of S_BB and S_WW; rather, it is important to keep a set of eigenvectors X_1, ..., X_d that will optimize the evaluation of (16) without much loss of information.
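The subspace Fisher stage used in this experiment, i.e. building S_BB and S_WW from the projected feature matrices and solving the generalized eigenvalue problem of (16), could look like the following (a NumPy sketch under our own naming and data-layout assumptions, with each sample already projected to an m x d feature matrix):

```python
import numpy as np

def fisher_axes(Ys, labels, g):
    """Fisher projection axes Z_1, ..., Z_g in the 2DPCA subspace.

    Ys: (N, m, d) stack of projected feature matrices Y_i = A_i U.
    labels: length-N array of class labels.
    Returns a d x g matrix whose columns are the leading Fisher axes.
    """
    Ys, labels = np.asarray(Ys), np.asarray(labels)
    d = Ys.shape[2]
    Y_mean = Ys.mean(axis=0)          # global mean feature matrix
    Sbb = np.zeros((d, d))            # between-class feature scatter
    Sww = np.zeros((d, d))            # within-class feature scatter
    for c in np.unique(labels):
        Yc = Ys[labels == c]
        diff = Yc.mean(axis=0) - Y_mean
        Sbb += len(Yc) * diff.T @ diff            # n_c (Ybar_c - Ybar)^T (Ybar_c - Ybar)
        centered = Yc - Yc.mean(axis=0)
        Sww += np.einsum('kmi,kmj->ij', centered, centered)
    # Generalized eigenproblem S_BB Z = lambda S_WW Z, via S_WW^{-1} S_BB
    vals, vecs = np.linalg.eig(np.linalg.solve(Sww, Sbb))
    order = np.argsort(vals.real)[::-1][:g]
    return vecs[:, order].real
```

Since both scatter matrices are only d x d (9 x 9 in this experiment), the eigendecomposition is trivial compared with the Fisherface case.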
The eigenvectors of (16) contain discriminant information which is essential in face classification. In this analysis we kept only the five eigenvectors Z_1, ..., Z_g from (16) corresponding to the five (g = 5) largest eigenvalues. Finally, the Fisher feature vectors are extracted using (18) and are used to form an m x g matrix R. The magnitudes of the eigenvalues obtained for 2DPCA and for 2DFLD in the subspace are plotted in decreasing order in Fig. 2 and Fig. 3.

Fig. 2 The magnitude of the eigenvalues of 2DPCA in descending order

Fig. 2 shows that the magnitude of the eigenvalues quickly converges to zero, which is consistent with the results in [3, 4, 7]. Fig. 3 shows similar characteristics, although the eigenvalues do not converge to zero in this case. Therefore, we can conclude that the energy of an image is concentrated in its first small number of component vectors, and it is reasonable to use these component vectors to represent the image for identity recognition purposes.

Fig. 3 The magnitude of the eigenvalues for 2DFLD in descending order

We evaluated the performance of 2D-FPCA under conditions where the number of 2DPCA eigenvectors d was varied (5, 7, 9, 11, 13). The recognition results are shown in Fig. 4. The graphs indicate that the performance of 2D-FPCA is optimized when d = 9; the accuracy decreases for both d > 9 and d < 9. 2D-FPCA achieves a top recognition accuracy of 95.50% at its optimum value (d = 9) using five Fisher feature vectors. The lowest top recognition accuracy was obtained when d = 5, again using five Fisher feature vectors.
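The nearest neighbour rule of Section 2.3 that produced these recognition figures, with the distance of (19) summing Euclidean distances between corresponding Fisher feature vectors, might be sketched as follows (NumPy; the function names are illustrative, not the authors'):

```python
import numpy as np

def feature_distance(R1, R2):
    # Distance of Eq. (19): sum over the g columns of the Euclidean
    # norm between corresponding Fisher feature (column) vectors
    # of two m x g feature matrices.
    return np.linalg.norm(R1 - R2, axis=0).sum()

def nearest_neighbour(R_test, gallery, labels):
    # Assign the class of the closest training feature matrix.
    dists = [feature_distance(R_test, R) for R in gallery]
    return labels[int(np.argmin(dists))]
```

Unlike Fisherface, which compares scalar feature vectors with a plain Euclidean distance, this metric compares whole feature matrices column by column.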
Fig. 4 The plot of recognition accuracy (%) versus the dimension of the 2DFLD feature vector used in classification, for varying d.

We further compared the performance of Fisherface and 2D-FPCA for d = 9. 2D-FPCA and the Fisherface method were used for feature extraction, and a nearest neighbour classifier was then employed for classification. For 2D-FPCA, (19) was used to calculate the distance between two feature matrices; in Fisherface, the common Euclidean distance measure is adopted. Fig. 5 presents the recognition accuracy plotted against the dimension of the feature matrix used in classification for 2D-FPCA and for the Fisherface approach. 2D-FPCA performed better than the Fisherface method: a top recognition accuracy of 95.50% was achieved for 2D-FPCA against 90.00% for Fisherface. The 2D-FPCA method is also superior to Fisherface in terms of computational efficiency for feature extraction, since in the Fisherface approach G_t is an mn x mn matrix, compared to an n x n matrix for 2D-FPCA. It is hard work to calculate the eigenvectors of an mn x mn matrix, where mn = 2576 and n = 46 in this case.

Fig. 5 Performance of 2D-FPCA and Fisherface.

Furthermore, the Fisherface approach selects d = N - C eigenvectors (N is the number of training images and C is the number of classes) in order to make S_W non-singular, whereas in 2D-FPCA we seek a value of d that optimizes the recognition accuracy while maintaining a faster speed. In the Fisherface approach, the scatter matrices are of order d = N - C, which can still be a large number depending on the size of the training set, compared with the optimal value of d for 2D-FPCA. In this analysis the former was a 160 x 160 matrix, compared with a 9 x 9 matrix for 2D-FPCA. When compared to recently published results [7-13], 2D-FPCA shows superior performance.

5 Conclusion
A novel technique for image feature extraction and representation was developed using the two dimensional Fisher's Linear Discriminant (2DFLD) combined with the 2DPCA subspace. 2D-FPCA has many advantages over Fisherface.
Firstly, since 2D-FPCA is based on the image matrix, it is simpler and more straightforward to use for image feature extraction. Secondly, 2D-FPCA outperforms Fisherface in terms of recognition accuracy, achieving 95.50% compared with 90.00% for Fisherface. Thirdly, 2D-FPCA selects a smaller set of 2DPCA eigenvectors, which optimizes both accuracy and speed. Finally, 2D-FPCA is computationally more efficient than Fisherface and improves the speed of image feature generation significantly.

References:
[1] W. Zhao, R. Chellappa, A. Rosenfeld, and P. J. Phillips, "Face recognition: A literature survey," CVL Technical Report, University of Maryland, Oct. 2000.
[2] R. Brunelli and T. Poggio, "Face Recognition through Geometrical Features," Proc. European Conference on Computer Vision, 1992.
[3] M. A. Turk and A. P. Pentland, "Eigenfaces for recognition," J. Cognitive Neuroscience, vol. 3, pp. 71-86, 1991.
[4] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces versus Fisherfaces: Recognition using class specific linear projection," IEEE Trans. Pattern Anal. Machine Intell., vol. 19, pp. 711-720, 1997.
[5] M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, "Face Recognition by Independent Component Analysis," IEEE Trans. on Neural Networks, vol. 13, no. 6, Nov. 2002.
[6] Ming-Hsuan Yang, "Kernel Eigenfaces vs. Kernel Fisherfaces: Face recognition using kernel methods," Proc. IEEE International Conference on Automatic Face and Gesture Recognition, Washington D.C., May 2002.
[7] J. Yang, D. Zhang, A. F. Frangi, and J. Y. Yang, "Two dimensional PCA: A new approach to appearance-based face representation and recognition," IEEE Trans. Pattern Anal. Machine Intell., vol. 26, no. 1, pp. 131-137, 2004.
[8] Ming Li and Baozong Yuan, "2D-LDA: A statistical linear discriminant analysis for image matrix," Pattern Recognition Letters, vol. 26, no. 5, pp. 527-532, 2005.
[9] J. Yang and J. Y. Yang, "From Image Vector to Matrix: A Straightforward Image Projection Technique, IMPCA vs. PCA," Pattern Recognition, vol. 35, no. 9, pp. 1997-1999, 2002.
[10] The Olivetti Research Laboratory (ORL), Olivetti face database, 1994.
[11] Q. Liu, H. Lu, and S. Ma, "Improving Kernel Fisher Discriminant Analysis for Face Recognition," IEEE Trans. Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 42-49, 2004.
[12] J. Yang, A. F. Frangi, J. Y. Yang, D. Zhang, and Z. Jin, "KPCA Plus LDA: A Complete Kernel Fisher Discriminant Framework for Feature Extraction and Recognition," IEEE Trans. Pattern Anal. Machine Intell., vol. 27, no. 2, pp. 230-244, 2005.
[13] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face Recognition Using Kernel Direct Discriminant Analysis Algorithms," IEEE Trans. Neural Networks, vol. 14, no. 1, pp. 117-126, 2003.
Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:
More information2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification
E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton
More informationLectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix
Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could
More informationInexact Newton Methods for Inverse Eigenvalue Problems
Inexact Newton Methods for Inverse Egenvalue Problems Zheng-jan Ba Abstract In ths paper, we survey some of the latest development n usng nexact Newton-lke methods for solvng nverse egenvalue problems.
More information10-701/ Machine Learning, Fall 2005 Homework 3
10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40
More informationWhy Bayesian? 3. Bayes and Normal Models. State of nature: class. Decision rule. Rev. Thomas Bayes ( ) Bayes Theorem (yes, the famous one)
Why Bayesan? 3. Bayes and Normal Models Alex M. Martnez alex@ece.osu.edu Handouts Handoutsfor forece ECE874 874Sp Sp007 If all our research (n PR was to dsappear and you could only save one theory, whch
More informationThe Study of Teaching-learning-based Optimization Algorithm
Advanced Scence and Technology Letters Vol. (AST 06), pp.05- http://dx.do.org/0.57/astl.06. The Study of Teachng-learnng-based Optmzaton Algorthm u Sun, Yan fu, Lele Kong, Haolang Q,, Helongang Insttute
More informationDiscriminative Dictionary Learning with Low-Rank Regularization for Face Recognition
Dscrmnatve Dctonary Learnng wth Low-Rank Regularzaton for Face Recognton Langyue L, Sheng L, and Yun Fu Department of Electrcal and Computer Engneerng Northeastern Unversty Boston, MA 02115, USA {l.langy,
More informationTime-Varying Systems and Computations Lecture 6
Tme-Varyng Systems and Computatons Lecture 6 Klaus Depold 14. Januar 2014 The Kalman Flter The Kalman estmaton flter attempts to estmate the actual state of an unknown dscrete dynamcal system, gven nosy
More informationCalculation of time complexity (3%)
Problem 1. (30%) Calculaton of tme complexty (3%) Gven n ctes, usng exhaust search to see every result takes O(n!). Calculaton of tme needed to solve the problem (2%) 40 ctes:40! dfferent tours 40 add
More informationAssortment Optimization under MNL
Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.
More informationModule 9. Lecture 6. Duality in Assignment Problems
Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept
More informationLecture 10: Dimensionality reduction
Lecture : Dmensonalt reducton g The curse of dmensonalt g Feature etracton s. feature selecton g Prncpal Components Analss g Lnear Dscrmnant Analss Intellgent Sensor Sstems Rcardo Guterrez-Osuna Wrght
More informationA New Refinement of Jacobi Method for Solution of Linear System Equations AX=b
Int J Contemp Math Scences, Vol 3, 28, no 17, 819-827 A New Refnement of Jacob Method for Soluton of Lnear System Equatons AX=b F Naem Dafchah Department of Mathematcs, Faculty of Scences Unversty of Gulan,
More informationNatural Language Processing and Information Retrieval
Natural Language Processng and Informaton Retreval Support Vector Machnes Alessandro Moschtt Department of nformaton and communcaton technology Unversty of Trento Emal: moschtt@ds.untn.t Summary Support
More informationQuantum Mechanics I - Session 4
Quantum Mechancs I - Sesson 4 Aprl 3, 05 Contents Operators Change of Bass 4 3 Egenvectors and Egenvalues 5 3. Denton....................................... 5 3. Rotaton n D....................................
More informationCSE 252C: Computer Vision III
CSE 252C: Computer Vson III Lecturer: Serge Belonge Scrbe: Catherne Wah LECTURE 15 Kernel Machnes 15.1. Kernels We wll study two methods based on a specal knd of functon k(x, y) called a kernel: Kernel
More informationρ some λ THE INVERSE POWER METHOD (or INVERSE ITERATION) , for , or (more usually) to
THE INVERSE POWER METHOD (or INVERSE ITERATION) -- applcaton of the Power method to A some fxed constant ρ (whch s called a shft), x λ ρ If the egenpars of A are { ( λ, x ) } ( ), or (more usually) to,
More informationStructure and Drive Paul A. Jensen Copyright July 20, 2003
Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.
More informationPattern Recognition 42 (2009) Contents lists available at ScienceDirect. Pattern Recognition. journal homepage:
Pattern Recognton 4 (9) 764 -- 779 Contents lsts avalable at ScenceDrect Pattern Recognton ournal homepage: www.elsever.com/locate/pr Perturbaton LDA: Learnng the dfference between the class emprcal mean
More informationPop-Click Noise Detection Using Inter-Frame Correlation for Improved Portable Auditory Sensing
Advanced Scence and Technology Letters, pp.164-168 http://dx.do.org/10.14257/astl.2013 Pop-Clc Nose Detecton Usng Inter-Frame Correlaton for Improved Portable Audtory Sensng Dong Yun Lee, Kwang Myung Jeon,
More informationNatural Images, Gaussian Mixtures and Dead Leaves Supplementary Material
Natural Images, Gaussan Mxtures and Dead Leaves Supplementary Materal Danel Zoran Interdscplnary Center for Neural Computaton Hebrew Unversty of Jerusalem Israel http://www.cs.huj.ac.l/ danez Yar Wess
More informationDISCRIMINANTS AND RAMIFIED PRIMES. 1. Introduction A prime number p is said to be ramified in a number field K if the prime ideal factorization
DISCRIMINANTS AND RAMIFIED PRIMES KEITH CONRAD 1. Introducton A prme number p s sad to be ramfed n a number feld K f the prme deal factorzaton (1.1) (p) = po K = p e 1 1 peg g has some e greater than 1.
More informationCS 468 Lecture 16: Isometry Invariance and Spectral Techniques
CS 468 Lecture 16: Isometry Invarance and Spectral Technques Justn Solomon Scrbe: Evan Gawlk Introducton. In geometry processng, t s often desrable to characterze the shape of an object n a manner that
More informationarxiv:cs.cv/ Jun 2000
Correlaton over Decomposed Sgnals: A Non-Lnear Approach to Fast and Effectve Sequences Comparson Lucano da Fontoura Costa arxv:cs.cv/0006040 28 Jun 2000 Cybernetc Vson Research Group IFSC Unversty of São
More informationLECTURE 9 CANONICAL CORRELATION ANALYSIS
LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of
More informationQuantum Mechanics for Scientists and Engineers. David Miller
Quantum Mechancs for Scentsts and Engneers Davd Mller Types of lnear operators Types of lnear operators Blnear expanson of operators Blnear expanson of lnear operators We know that we can expand functons
More informationTracking with Kalman Filter
Trackng wth Kalman Flter Scott T. Acton Vrgna Image and Vdeo Analyss (VIVA), Charles L. Brown Department of Electrcal and Computer Engneerng Department of Bomedcal Engneerng Unversty of Vrgna, Charlottesvlle,
More informationNumber of cases Number of factors Number of covariates Number of levels of factor i. Value of the dependent variable for case k
ANOVA Model and Matrx Computatons Notaton The followng notaton s used throughout ths chapter unless otherwse stated: N F CN Y Z j w W Number of cases Number of factors Number of covarates Number of levels
More informationNon-Negative Graph Embedding
Non-Negatve Graph Embeddng Janchao Yang, Shucheng Yang 2, Yun Fu, Xuelong L 3, Thomas Huang ECE Department, Unversty of Illnos at Urbana-Champagn, USA 2 ECE Department, Natonal Unversty of Sngapore, Sngapore
More informationCS 3710: Visual Recognition Classification and Detection. Adriana Kovashka Department of Computer Science January 13, 2015
CS 3710: Vsual Recognton Classfcaton and Detecton Adrana Kovashka Department of Computer Scence January 13, 2015 Plan for Today Vsual recognton bascs part 2: Classfcaton and detecton Adrana s research
More informationMACHINE APPLIED MACHINE LEARNING LEARNING. Gaussian Mixture Regression
11 MACHINE APPLIED MACHINE LEARNING LEARNING MACHINE LEARNING Gaussan Mture Regresson 22 MACHINE APPLIED MACHINE LEARNING LEARNING Bref summary of last week s lecture 33 MACHINE APPLIED MACHINE LEARNING
More informationC4B Machine Learning Answers II. = σ(z) (1 σ(z)) 1 1 e z. e z = σ(1 σ) (1 + e z )
C4B Machne Learnng Answers II.(a) Show that for the logstc sgmod functon dσ(z) dz = σ(z) ( σ(z)) A. Zsserman, Hlary Term 20 Start from the defnton of σ(z) Note that Then σ(z) = σ = dσ(z) dz = + e z e z
More informationLogo Matching Technique Based on Principle Component Analysis
Internatonal Journal of Vdeo& Image Processng etwork Securty IJVIPS-IJES Vol:10 o:03 0 Logo Matchng echnque Based on Prncple Component Analyss Sam M. Halawan 1 Ibrahm A. Albdew Faculty of Computng Informaton
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16
STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus
More informationThe Second Anti-Mathima on Game Theory
The Second Ant-Mathma on Game Theory Ath. Kehagas December 1 2006 1 Introducton In ths note we wll examne the noton of game equlbrum for three types of games 1. 2-player 2-acton zero-sum games 2. 2-player
More informationCommon loop optimizations. Example to improve locality. Why Dependence Analysis. Data Dependence in Loops. Goal is to find best schedule:
15-745 Lecture 6 Data Dependence n Loops Copyrght Seth Goldsten, 2008 Based on sldes from Allen&Kennedy Lecture 6 15-745 2005-8 1 Common loop optmzatons Hostng of loop-nvarant computatons pre-compute before
More informationHomework Notes Week 7
Homework Notes Week 7 Math 4 Sprng 4 #4 (a Complete the proof n example 5 that s an nner product (the Frobenus nner product on M n n (F In the example propertes (a and (d have already been verfed so we
More informationOn the symmetric character of the thermal conductivity tensor
On the symmetrc character of the thermal conductvty tensor Al R. Hadjesfandar Department of Mechancal and Aerospace Engneerng Unversty at Buffalo, State Unversty of New York Buffalo, NY 146 USA ah@buffalo.edu
More information1 GSW Iterative Techniques for y = Ax
1 for y = A I m gong to cheat here. here are a lot of teratve technques that can be used to solve the general case of a set of smultaneous equatons (wrtten n the matr form as y = A), but ths chapter sn
More information1 Matrix representations of canonical matrices
1 Matrx representatons of canoncal matrces 2-d rotaton around the orgn: ( ) cos θ sn θ R 0 = sn θ cos θ 3-d rotaton around the x-axs: R x = 1 0 0 0 cos θ sn θ 0 sn θ cos θ 3-d rotaton around the y-axs:
More informationA linear imaging system with white additive Gaussian noise on the observed data is modeled as follows:
Supplementary Note Mathematcal bacground A lnear magng system wth whte addtve Gaussan nose on the observed data s modeled as follows: X = R ϕ V + G, () where X R are the expermental, two-dmensonal proecton
More informationLearning with Tensor Representation
Report No. UIUCDCS-R-2006-276 UILU-ENG-2006-748 Learnng wth Tensor Representaton by Deng Ca, Xaofe He, and Jawe Han Aprl 2006 Learnng wth Tensor Representaton Deng Ca Xaofe He Jawe Han Department of Computer
More informationVector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence.
Vector Norms Chapter 7 Iteratve Technques n Matrx Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematcs Unversty of Calforna, Berkeley Math 128B Numercal Analyss Defnton A vector norm
More information