1 The Robust Low Rank Matrix Factorization in the Presence of Missing Data and Outliers. Zhang Qiang

2 Contents: The definition of matrix factorization and its applications; The methods for matrix factorization in the presence of missing data; The L1 Wiberg algorithm for matrix factorization in the presence of missing data and outliers; The Huber Wiberg algorithm for matrix factorization.

3 The definition of matrix factorization and its application. In linear algebra, matrix factorization, also called matrix decomposition or matrix approximation, is a factorization of a matrix into some canonical form. There are many different matrix decompositions, such as LU, Cholesky, QR, and SVD, each of which is used for a particular class of problems. In many vision problems, such as Structure from Motion (SFM), motion estimation, object recognition, and object tracking, matrix factorization is the decomposition of the measurement or observation data into the product of two low-rank matrices, one of which spans a lower-dimensional subspace of the original high-dimensional space.

4 The definition of matrix factorization and its application. Matrix factorization of M: $M_{d\times n} = U_{d\times k} V_{n\times k}^T$, where d is the dimension of the input data space, n is the number of input data items, and k is the dimension of the linear subspace. The k columns of the matrix U form the basis of the linear subspace. In SFM, M contains the observed points in different images, U is the stack of all camera matrices, and V is the matrix of space points.

5 The definition of matrix factorization and its application. SVD for matrix factorization. When the observation matrix has no missing elements, the common method is SVD decomposition. The procedure is as follows:
1) $M_{d\times n} \xrightarrow{\ SVD\ } U_{d\times d}\, D_{d\times n}\, V_{n\times n}^T$;
2) $\hat M_{d\times n} = U_{d\times k}\, D_{k\times k}\, V_{n\times k}^T$, where $U_{d\times k}$ contains the first k columns of $U_{d\times d}$, $D_{k\times k}$ is the diagonal matrix of the first k diagonal elements of $D_{d\times n}$, and $V_{n\times k}$ contains the first k columns of $V_{n\times n}$;
3) $\hat M_{d\times n} = U_{d\times k}\, \hat V_{n\times k}^T$, where $\hat V_{n\times k} = V_{n\times k}\, D_{k\times k}$.
SVD decomposition is equivalent to least squares minimization, i.e. $\min \|M - UV^T\|_F$. In computer vision, we directly use SVD decomposition to compute affine reconstruction.
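A minimal sketch of the rank-k SVD factorization described above, assuming a fully observed M; the function name and the example sizes are illustrative, not from the slides.

```python
# Rank-k factorization by truncated SVD, assuming no missing entries in M.
import numpy as np

def svd_factorize(M, k):
    """Return U (d x k) and V (n x k) such that M ~= U @ V.T."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)   # M = U diag(s) Vt
    Uk = U[:, :k]                                      # first k left singular vectors
    Vk = (np.diag(s[:k]) @ Vt[:k, :]).T                # absorb singular values into V
    return Uk, Vk

# Usage: a noiseless rank-4 matrix is recovered up to floating point error.
d, n, k = 7, 12, 4
M = np.random.randn(d, k) @ np.random.randn(k, n)
Uk, Vk = svd_factorize(M, k)
print(np.linalg.norm(M - Uk @ Vk.T))   # essentially zero (up to floating point error)
```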

6 The methods for matrix factorization in the presence of missing data. When the observation matrix has missing elements, SVD decomposition does not work, so we need to change the form of the original problem. The following equations are equivalent:
$M_{d\times n} = U_{d\times k} V_{n\times k}^T$, i.e. $m_{ij} = \sum_l U_{il} V_{jl}$,
$m = G(U)\,v = F(V)\,u$,
where $m$ is the vectorization of M, $u = [u_1^T, \dots, u_d^T]^T$ stacks the rows $u_i^T$ of U, $v = [v_1^T, \dots, v_n^T]^T$ stacks the rows $v_j^T$ of V, and $G(U)$ and $F(V)$ are the block matrices built from U and V so that both products reproduce m.
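A minimal sketch of one consistent construction of F(V) and G(U); it assumes column-wise vectorization and the convention of this section (M = U V^T with U of size d x k and V of size n x k), so the exact stacking may differ from the slides.

```python
# Build F(V) and G(U) via Kronecker products so that vec(M) = F(V) vec(U) = G(U) vec(V^T).
import numpy as np

d, n, k = 4, 5, 2
U = np.random.randn(d, k)
V = np.random.randn(n, k)
M = U @ V.T

m = M.flatten(order='F')            # column-wise vectorization of M
FV = np.kron(V, np.eye(d))          # nd x dk, acts on vec(U)
GU = np.kron(np.eye(n), U)          # nd x nk, acts on vec(V^T)

print(np.allclose(m, FV @ U.flatten(order='F')))    # True
print(np.allclose(m, GU @ V.T.flatten(order='F')))  # True
```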

7 The methods for matrix factorization in the presence of missing data. Each element of M equals the inner product of a row of U and a row of V, which gives one linear equation per entry. For a 3x3 example of rank 2, $m_{ij} = u_{i1} v_{j1} + u_{i2} v_{j2}$ for all i, j; stacking the nine equations gives the linear system $m = F(V)\,u$ (equivalently $m = G(U)\,v$).

8 The methods for matrix factorization in the presence of missing data. If some entries of M are missing (marked "?" in the example, e.g. $m_{11}$ and $m_{32}$), we simply ignore the corresponding equations: the rows of $F(V)$ and of $m$ associated with the missing entries are deleted.

9 The methods for matrix factorization in the presence of missing data. The same row deletion applies to the system written in terms of v, $m = G(U)\,v$. Throughout the remainder of this presentation, we keep the original notation for these modified (row-deleted) vectors and matrices.

10 The methods for matrix factorization in the presence of missing data. Alternated Least Squares (ALS) algorithm. The optimization problem is $\min_{U,V} \Phi(U,V)$, where $\Phi(U,V) = \tfrac12 \|M - UV^T\|^2$. To find the minimum of $\Phi(U,V)$, we search for a solution to the equations $\partial\Phi/\partial u = \partial\Phi/\partial v = 0$. Using the notation above, these are
$\partial\Phi/\partial u = F(V)^T (F(V)\,u - m) = 0$,
$\partial\Phi/\partial v = G(U)^T (G(U)\,v - m) = 0$.
Takayuki Okatani and Koichiro Deguchi. On the Wiberg Algorithm for Matrix Factorization in the Presence of Missing Components. IJCV, 2007.

11 The methods for matrix factorization in the presence of missing data. Alternated Least Squares algorithm. Considering the two equations independently, the solutions are
$\hat u = (F(V)^T F(V))^{-1} F(V)^T m$,
$\hat v = (G(U)^T G(U))^{-1} G(U)^T m$.
The ALS algorithm updates u from v and v from u in an alternating manner, starting from some initial values $u^{(0)}, v^{(0)}$.
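A minimal sketch of ALS for the missing-data case, built on the Kronecker construction in the earlier sketch; W is a 0/1 mask of observed entries, and the dense least-squares solves stand in for the normal equations on the slide. Names are illustrative.

```python
import numpy as np

def als(M, W, k, n_iters=100, seed=0):
    """Alternated least squares for M ~= U V^T restricted to observed entries (W == 1)."""
    d, n = M.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((d, k))
    V = rng.standard_normal((n, k))
    m = M.flatten(order='F')
    w = W.flatten(order='F').astype(bool)        # keep only observed rows of the system
    for _ in range(n_iters):
        # v-step: least-squares solve of m ~= G(U) vec(V^T) on observed rows
        G = np.kron(np.eye(n), U)[w]
        vtvec = np.linalg.lstsq(G, m[w], rcond=None)[0]
        V = vtvec.reshape(k, n, order='F').T
        # u-step: least-squares solve of m ~= F(V) vec(U) on observed rows
        F = np.kron(V, np.eye(d))[w]
        uvec = np.linalg.lstsq(F, m[w], rcond=None)[0]
        U = uvec.reshape(d, k, order='F')
    return U, V
```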

12 The methods for matrix factorization in the presence of missing data. Gauss-Newton algorithm. Defining $x = [u^T, v^T]^T$, we rewrite $\Phi(U,V)$ as $\Phi(x) = \tfrac12 f^T f$, where $f = F(V)\,u - m = G(U)\,v - m$. We look for a solution x to $\partial\Phi(x)/\partial x = 0$. Newton's algorithm seeks a solution by iteratively updating $x \leftarrow x + \Delta x$, where $\Delta x$ is computed as a solution to $\partial\Phi(x)/\partial x + (\partial^2\Phi(x)/\partial x^2)\,\Delta x = 0$. Substituting f, this equation becomes
$(\partial f/\partial x)^T f + \big((\partial f/\partial x)^T(\partial f/\partial x) + (\partial^2 f/\partial x^2)^T f\big)\,\Delta x = 0$;
the Gauss-Newton method drops the second-derivative term.

13 The methods for matrix factorization in the presence of missing data. The Wiberg algorithm. Basic idea: deriving a Newton-based algorithm for a rewritten problem yields a better algorithm, in terms of computational complexity etc., than deriving one for the original problem. So we rewrite the function $\Phi(U,V)$ into a function $\Psi(v)$ of v only, as follows:
$\Psi(v) = \Phi(\hat u(v), v)$, with $\hat u(v) = (F^T F)^{-1} F^T m$.
Wiberg, T. Computation of principal components when data are missing. In Proceedings of the Second Symposium of Computational Statistics, Berlin, 1976.

14 The methods for matrix factorization in the presence of missing data. The Wiberg algorithm. The new minimization problem yields the same solution as the original problem, since the v minimizing $\Psi(v)$, together with $u = \hat u(v)$, minimizes $\Phi(u,v)$. Minimizing $\Psi$ with the Gauss-Newton algorithm yields the Wiberg algorithm. The function $\Psi(v)$ can be written as $\Psi(v) = \tfrac12 g^T g$, where $g = F\,\hat u(v) - m$. By the Gauss-Newton algorithm, $\Delta v$ is obtained by solving
$(\partial g/\partial v)^T g + \big((\partial g/\partial v)^T(\partial g/\partial v) + (\partial^2 g/\partial v^2)^T g\big)\,\Delta v = 0$.

15 The methods for matrix factorization in the presence of missing data. The Wiberg algorithm. Substituting F and G, the equation above becomes
$G^T Q_F Q_F G\,\Delta v - G^T Q_F Q_F\, y = 0$, where $Q_F = I - F (F^T F)^{-1} F^T$.
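A minimal sketch of one Wiberg (Gauss-Newton in v) step for the L2 problem with missing data, assuming the F/G construction used in the earlier sketches and the projector $Q_F$ from this slide. It is a dense, illustrative implementation, not the paper's.

```python
import numpy as np

def wiberg_step(M, W, U, V):
    """One Wiberg step: eliminate u in closed form, then a Gauss-Newton step in v."""
    d, n = M.shape
    k = U.shape[1]
    m = M.flatten(order='F')
    w = W.flatten(order='F').astype(bool)
    F = np.kron(V, np.eye(d))[w]                      # observed rows only
    G = np.kron(np.eye(n), U)[w]
    mw = m[w]
    u_hat = np.linalg.lstsq(F, mw, rcond=None)[0]     # u_hat(v) = (F^T F)^{-1} F^T m
    QF = np.eye(F.shape[0]) - F @ np.linalg.pinv(F)   # projector onto the complement of range(F)
    # Gauss-Newton step in v: solve (Q_F G) dv ~= Q_F m in the least-squares sense
    dv = np.linalg.lstsq(QF @ G, QF @ mw, rcond=None)[0]
    V_new = V + dv.reshape(k, n, order='F').T
    U_new = u_hat.reshape(d, k, order='F')
    return U_new, V_new
```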

16 The methods for matrix factorization in the presence of missing data. Experiment. As test data, matrices with rank r = 4 are used. Each component is generated according to $y_{ij} = u_i^T v_j + \varepsilon_{ij}$, where the entries $u_{ij}, v_{ij}$ are random variables drawn from a normal density N(0, 1) and the noise $\varepsilon_{ij}$ from N(0, 0.05). The missing components are also chosen at random in the matrix. In this experiment, we test three algorithms: Wiberg, LM, and ALS. Each simulation uses 10 different matrices, with 30 trials per matrix.
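A minimal sketch of the synthetic-data setup described on this slide; the matrix size is not specified on the slide, so d and n are placeholders, and the second parameter of N(0, 0.05) is interpreted here as the noise standard deviation.

```python
import numpy as np

def make_synthetic(d, n, r=4, noise=0.05, missing_ratio=0.3, seed=0):
    """Rank-r data with Gaussian noise and a random observation mask W (1 = observed)."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((d, r))                    # entries ~ N(0, 1)
    V = rng.standard_normal((n, r))
    M = U @ V.T + noise * rng.standard_normal((d, n))  # noise ~ N(0, 0.05), taken as std dev
    W = (rng.random((d, n)) > missing_ratio).astype(float)
    return M, W
```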

17 The methods for matrix factorization in the presence of missing data. [Figure] Results for synthetic data with 30% missing components: iteration count (top) and residue (bottom) for Wiberg, LM, and ALS.

18 The methods for matrix factorization in the presence of missing data. [Figure] Results for synthetic data with 50% missing components: iteration count (top) and residue (bottom) for Wiberg, LM, and ALS.

19 The L1 Wiberg algorithm for matrix factorization in the presence of missing data and outliers. Which is the better estimator of the scale parameter of a normal distribution, the L1 norm or the L2 norm?
L1 norm: $d_n = \frac{1}{n}\sum_i |x_i - \bar x|$; L2 norm: $s_n = \big(\frac{1}{n}\sum_i (x_i - \bar x)^2\big)^{1/2}$.
Eddington advocated the use of the former: "This is contrary to the advice of most textbooks; but it can be shown to be true." Fisher pointed out that for normal observations $s_n$ is about 12% more efficient than $d_n$.
Huber, P.J. Robust Statistics. John Wiley & Sons, 1981.
Eddington, A.S. Stellar Movements and the Structure of the Universe. Macmillan, London, 1914.
Fisher, R.A. A mathematical examination of the methods of determining the accuracy of an observation by the mean error and the mean square error. Monthly Not. Roy. Astron. Soc., 80, 1920.
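A minimal sketch comparing the two scale estimators from the slide on a clean normal sample; to estimate sigma, $d_n$ is rescaled by $\sqrt{\pi/2}$ since $E|X - \mu| = \sigma\sqrt{2/\pi}$ for a normal distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)

d_n = np.mean(np.abs(x - x.mean())) * np.sqrt(np.pi / 2)   # L1-based scale estimate
s_n = np.sqrt(np.mean((x - x.mean()) ** 2))                # L2-based scale estimate
print(d_n, s_n)   # both close to 1 for clean normal data
```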

20 The L1 Wiberg algorithm for matrix factorization. Huber (whose paper Robust Estimation of a Location Parameter (1964) formed the first basis for a theory of robust estimation): just 2 bad observations in 1000 suffice to offset the 12% advantage of the mean square error.
[Figure: ARE plotted as a function of the contamination fraction $\varepsilon$] ARE($\varepsilon$) = Asymptotic Efficiency($d_n$) / Asymptotic Efficiency($s_n$).
Huber, P.J. Robust statistical procedures. Regional Conference Series in Applied Mathematics No. 27, SIAM, Philadelphia, Penn, 1979.

21 The L1 Wiberg algorithm for matrix factorization. [Figure] Fit a line to 10 given data points; the two data points on the upper right are outliers.

22 The L1 Wiberg algorithm for matrix factorization. Efficient Computation of Robust Low-Rank Matrix Approximations in the Presence of Missing Data using the L1 Norm. CVPR 2010, Best Paper. Anders Eriksson (Research Staff) and Anton van den Hengel (Professor), School of Computer Science, University of Adelaide, Australia.

23 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm. The original problem is $M_{d\times n} = U_{d\times k} V_{k\times n}$, where M is known and U, V are unknown. The optimization problem with the L1 norm is
$\min_{U,V} \| W \odot (M - UV) \|_1$,
where W is an indicator matrix such that $w_{ij}$ is 1 if $m_{ij}$ is known and 0 otherwise, and $\odot$ is the Hadamard product.
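A minimal sketch of the masked L1 objective from this slide (names are illustrative; note that in this section V is k x n, so the product is U @ V).

```python
import numpy as np

def l1_masked_residual(M, W, U, V):
    """|| W * (M - U V) ||_1 with W a 0/1 indicator of known entries."""
    return np.abs(W * (M - U @ V)).sum()
```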

24 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm. Computing $W \odot (M - UV)$ is equivalent to deleting the missing data: in the 3x3 example, the entries of $M - UV$ at the unknown positions (marked "?") are zeroed out by the mask W, so they contribute nothing to the objective.

25 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm. Fixing U or V, it is possible to rewrite the optimization problem as
$\hat v(U) = \arg\min_v \| w \odot m - w \odot G(U)\,v \|_1$   (1)
$\hat u(V) = \arg\min_u \| w \odot m - w \odot F(V)\,u \|_1$   (2)
where v and u are the vectors of V and U with column expansion, $G(U) = I \otimes U \in \mathbb{R}^{nd \times nk}$, and $F(V) = V^T \otimes I \in \mathbb{R}^{nd \times kd}$ (for V of size k x n).

26 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm. As in the L2 Wiberg algorithm, we first obtain the optimal solution for v under the hypothesis that u is fixed. Then, substituting $\tilde v(U)$ into the outer problem, we obtain the minimization problem
$\min_U \| W \odot M - W \odot U\tilde V(U) \|_1$, i.e. $\min_u \| w \odot m - w \odot G(U)\,\tilde v(U) \|_1$.

27 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm. By Taylor expansion,
$w \odot G(U^{m+1})\,\tilde v(U^{m+1}) \approx w \odot G(U^m)\,\tilde v(U^m) + \big(\partial\big(w \odot G(U)\,\tilde v(U)\big)/\partial u\big)\big|_{U^m}\,(u^{m+1} - u^m)$.
The minimization problem above is rewritten as
$\min_\delta \big\| w \odot m - \big( w \odot G(U)\,\tilde v(U) + \big(\partial (w \odot G(U)\,\tilde v(U))/\partial u\big)\,\delta \big) \big\|_1$.

28 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm.
$\partial\big(w \odot G(U)\,\tilde v(U)\big)/\partial u = w \odot \big( G(U)\,(\partial \tilde v(U)/\partial u) + F(\tilde V(U)) \big)$.
Unfortunately, $\tilde v(U)$ does not have an easily differentiable, closed-form solution.

29 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm. So it is important to show how the optimal solution $\tilde v$ is obtained. An equivalent formulation of (1), writing $v = v^+ - v^-$ and introducing slack variables t and s, is the linear program
$\min_{v^+, v^-, t, s} \; [\,0_{2kn}^T,\; \mathbf{1}_{dn}^T,\; 0_{2dn}^T\,]\,[v^+; v^-; t; s]$
s.t. $\begin{bmatrix} w \odot G(U) & -w \odot G(U) & -I & I & 0 \\ -w \odot G(U) & w \odot G(U) & -I & 0 & I \end{bmatrix} \begin{bmatrix} v^+ \\ v^- \\ t \\ s \end{bmatrix} = \begin{bmatrix} w \odot m \\ -w \odot m \end{bmatrix}$  (denoted $A(U)\,x = b$),
$v^+, v^-, t, s \ge 0$, with $v^+, v^- \in \mathbb{R}^{kn}$, $t \in \mathbb{R}^{dn}$, $s \in \mathbb{R}^{2dn}$.
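A minimal sketch of solving the inner L1 subproblem $\min_v \|b - A v\|_1$ as a linear program with scipy; it uses a simpler inequality form than the slide's standard equality form, and the function name is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def l1_solve(A, b):
    """Solve min_v ||A v - b||_1 via the LP: min 1^T t  s.t.  -t <= A v - b <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])      # variables x = [v; t]
    A_ub = np.block([[ A, -np.eye(m)],                  #  A v - t <=  b
                     [-A, -np.eye(m)]])                 # -A v - t <= -b
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m       # v free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]
```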

30 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm. Simplex method. A standard-form linear program is
$\min_{x \in \mathbb{R}^n} c^T x$ s.t. $Ax = b$, $x \ge 0$.
The solution produced by the simplex method has the form
$x = \begin{bmatrix} x_B \\ x_N \end{bmatrix} = \begin{bmatrix} B^{-1} b \\ 0 \end{bmatrix}$,
where B is the basis matrix. From A to B there is a (column-selection) transformation matrix $Q_1$ such that $B = A Q_1$.

31 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm. Suppose the result of the minimization of (1) is
$x = \begin{bmatrix} x_B \\ x_N \end{bmatrix} = \begin{bmatrix} B^{-1} b \\ 0 \end{bmatrix}$.
Then there is a matrix $Q_2$ such that
$\begin{bmatrix} v^+ \\ v^- \\ t \\ s \end{bmatrix} = Q_2 \begin{bmatrix} x_B \\ x_N \end{bmatrix}$,
and a matrix $T = (\,I_{nk} \;\; -I_{nk} \;\; 0\,)$ such that $\tilde v = T\,[v^+; v^-; t; s]$.

32 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm. So the optimal solution of (1) is
$\tilde v(U) = T\,Q_2 \begin{bmatrix} (A(U)\,Q_1)^{-1} b \\ 0 \end{bmatrix}$,
and therefore
$\partial \tilde v(U)/\partial u = T\,Q_2 \begin{bmatrix} \partial\big((A(U)\,Q_1)^{-1} b\big)/\partial u \\ 0 \end{bmatrix}$,
with, by the chain rule,
$\dfrac{\partial (A(U)\,Q_1)^{-1} b}{\partial u} = \dfrac{\partial (A(U)\,Q_1)^{-1} b}{\partial\,\mathrm{vec}(A(U))}\;\dfrac{\partial\,\mathrm{vec}(A(U))}{\partial u}$,
where vec(A) stands for the vector obtained from A by column expansion.

33 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm.
$\dfrac{\partial (A(U)\,Q_1)^{-1} b}{\partial\,\mathrm{vec}(A(U))} = (b^T \otimes I)\,\dfrac{\partial\,\mathrm{vec}\big((A(U)\,Q_1)^{-1}\big)}{\partial\,\mathrm{vec}(A(U))} = -\big(Q_1 (A(U)\,Q_1)^{-1} b\big)^T \otimes (A(U)\,Q_1)^{-1}$,
using $d(X^{-1}) = -X^{-1}(dX)X^{-1}$ with $X = A(U)\,Q_1$.

34 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm. The remaining ingredient is $\partial\,\mathrm{vec}(A(U))/\partial u$. The only blocks of A(U) that depend on U are $\pm\, w \odot G(U)$ (the identity blocks are constant), so this derivative reduces to $\partial\,\mathrm{vec}(w \odot G(U))/\partial u$, which is assembled from Kronecker products of the form $II_i \otimes I_d$, $i = 1, \dots, nk$, where $II_i \in \mathbb{R}^{k \times n}$ stands for the matrix whose vectorization is $e_i$.

35 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm. After computing $\partial \tilde v(U)/\partial u$, we substitute it into the minimization problem
$\min_\delta \big\| w \odot m - \big( w \odot G(U)\,\tilde v(U) + \big(\partial(w \odot G(U)\,\tilde v(U))/\partial u\big)\,\delta \big) \big\|_1$.
To obtain the optimal step for u, we solve the following linear program.

36 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm.
$\min_{\delta^+,\delta^-,t,s}\; [\,0_{2dk}^T,\; \mathbf{1}_{dn}^T,\; 0_{2dn}^T\,]\,[\delta^+; \delta^-; t; s]$
s.t. $\begin{bmatrix} J & -J & -I & I & 0 \\ -J & J & -I & 0 & I \end{bmatrix}\begin{bmatrix}\delta^+\\ \delta^-\\ t\\ s\end{bmatrix} = \begin{bmatrix} w\odot m - w\odot G(U)\,\tilde v(U) \\ -\big(w\odot m - w\odot G(U)\,\tilde v(U)\big)\end{bmatrix}$, where $J = \partial\big(w\odot G(U)\,\tilde v(U)\big)/\partial u$,
$\|\delta\| \le \mu$, $\delta^+,\delta^-,t,s \ge 0$, $\delta^+,\delta^- \in \mathbb{R}^{dk}$, $t \in \mathbb{R}^{dn}$, $s \in \mathbb{R}^{2dn}$, and $\tilde u = u + \delta^*$.

37 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm.
Input: $U_0$, $1 > \eta_2 > \eta_1 > 0$ and $c > 0$.
k = 0;
repeat
  Compute the Jacobian of $w \odot G(U)\,\tilde v(U)$ using the formulas on slides 32-34;
  Solve the subproblem on slide 36 to obtain $\delta_k^*$;
  Let gain $= \dfrac{f(u_k) - f(u_k + \delta_k^*)}{\tilde f(u_k) - \tilde f(u_k + \delta_k^*)}$;

38 The L1 Wiberg algorithm for matrix factorization. The L1 Wiberg algorithm.
  if gain $> \eta_1$ then $U_{k+1} = U_k + \delta_k^*$; end
  if gain $\le \eta_1$ then $\mu = \mu / 2$; end
  if gain $\ge \eta_2$ then $\mu = 2\mu$; end
  k = k + 1;
until convergence.
($\eta_1 = 1/4$, $\eta_2 = 3/4$.)

39 The Huber Wiberg algorithm for matrix factorization. Huber distribution. [Figure: Huber density compared with the normal density]
$f(x/\sigma) = \begin{cases} \dfrac{1-\varepsilon}{\sqrt{2\pi}\,\sigma}\,\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right), & |x|/\sigma \le c \\[2mm] \dfrac{1-\varepsilon}{\sqrt{2\pi}\,\sigma}\,\exp\!\left(\dfrac{c^2}{2} - \dfrac{c\,|x|}{\sigma}\right), & |x|/\sigma > c \end{cases}$

40 The Huber Wiberg algorithm for matrix factorization. Huber M-estimator. [Figure: Huber loss compared with the L2 and L1 losses]
$\min_x \sum_i \rho\big( (Ax - b)_i \big)$, where $\rho(t) = \begin{cases} \tfrac12 t^2, & |t| \le \gamma \\ \gamma |t| - \tfrac12 \gamma^2, & |t| > \gamma \end{cases}$
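A minimal sketch of the Huber M-estimator for a linear model; it uses a general-purpose smooth optimizer from scipy rather than the paper's method, and the names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def huber_rho(t, gamma):
    """Huber loss: quadratic for |t| <= gamma, linear beyond."""
    a = np.abs(t)
    return np.where(a <= gamma, 0.5 * t**2, gamma * a - 0.5 * gamma**2)

def huber_fit(A, b, gamma=1.0):
    """Solve min_x sum_i rho((A x - b)_i); the Huber loss is once differentiable, so BFGS works."""
    obj = lambda x: huber_rho(A @ x - b, gamma).sum()
    x0 = np.linalg.lstsq(A, b, rcond=None)[0]          # least-squares starting point
    return minimize(obj, x0, method="BFGS").x
```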

41 The Huber Wiberg algorithm for matrix factorization. [Figure] Fit a line to 10 given data points; the two data points on the upper right are outliers.

42 The Huber Wiberg algorithm for matrix factorization. Huber Wiberg algorithm. At first, we assume that the $\gamma$ in the Huber M-estimator is known. Using an alternate form, the Huber M-estimator can be written as
$\min_x \sum_i \rho\big((Ax - b)_i\big) \;=\; \min_{x,z}\; \tfrac12\|z\|_2^2 + \gamma\,\|Ax - b - z\|_1$.
Qifa Ke and Takeo Kanade. Robust L1 Norm Factorization in the Presence of Outliers and Missing Data by Alternative Convex Programming. In Conference on Computer Vision and Pattern Recognition, Washington, USA.
O. L. Mangasarian and D. R. Musicant. Robust linear and support vector regression. IEEE Trans. on PAMI, 22(9), 2000.
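A minimal numerical check (not from the slides) of the alternate form above: for a fixed residual r, the inner minimum over z of $\tfrac12 z^2 + \gamma|r - z|$ is attained at $z = \mathrm{clip}(r, -\gamma, \gamma)$ and equals the Huber loss $\rho_\gamma(r)$.

```python
import numpy as np

gamma = 1.0
r = np.linspace(-3, 3, 13)
z_star = np.clip(r, -gamma, gamma)
alt = 0.5 * z_star**2 + gamma * np.abs(r - z_star)
rho = np.where(np.abs(r) <= gamma, 0.5 * r**2, gamma * np.abs(r) - 0.5 * gamma**2)
print(np.allclose(alt, rho))   # True
```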

43 The Huber Wiberg algorithm for matrix factorization. Huber Wiberg algorithm. The optimization problem is
$\min_{U,V} \sum_{i,j} \rho\big( w_{ij} (m_{ij} - u_i^T v_j) \big)$
$\;=\; \min_{U,V,Z}\; \tfrac12\|Z\|_2^2 + \gamma\,\| W \odot (M - UV) - Z \|_1$
$\;=\; \min_{u,v,z}\; \tfrac12\|z\|_2^2 + \gamma\,\| w \odot (m - G(U)\,v) - z \|_1$,
where w is the mask that deletes the unknown elements. For fixed U (i.e., in v and z), this is a convex optimization problem.

44 The Huber Wiberg algorithm for matrix factorization. Huber Wiberg algorithm. By Taylor expansion,
$w \odot G(U^{m+1})\,\tilde v(U^{m+1}) \approx w \odot G(U^m)\,\tilde v(U^m) + \big(\partial(w \odot G(U)\,\tilde v(U))/\partial u\big)\big|_{U^m}\,(u^{m+1} - u^m)$.
The minimization problem above is rewritten as
$\min_{\delta, z}\; \tfrac12\|z\|_2^2 + \gamma\,\big\| w \odot m - \big( w \odot G(U)\,\tilde v(U) + \big(\partial(w \odot G(U)\,\tilde v(U))/\partial u\big)\,\delta \big) - z \big\|_1$,
where $\partial(w \odot G(U)\,\tilde v(U))/\partial u = w \odot \big(G(U)\,(\partial\tilde v(U)/\partial u) + F(\tilde V(U))\big)$.

45 The Huber Wiberg algorithm for matrix factorization. Huber Wiberg algorithm. Assuming that U is fixed (known), the inner problem becomes
$\min_{v,t,z}\; \tfrac12\, [v^T\; t^T\; z^T]\,\underbrace{\begin{bmatrix} 0 & & \\ & 0 & \\ & & I \end{bmatrix}}_{H}\begin{bmatrix} v \\ t \\ z \end{bmatrix} + \underbrace{[\,0,\; \gamma\mathbf{1}^T,\; 0\,]}_{c^T}\begin{bmatrix} v \\ t \\ z \end{bmatrix}$
s.t. $\underbrace{\begin{bmatrix} w\odot G(U) & -I & I \\ -w\odot G(U) & -I & -I \\ 0 & -I & 0 \end{bmatrix}}_{A}\begin{bmatrix} v \\ t \\ z \end{bmatrix} \le \underbrace{\begin{bmatrix} w\odot m \\ -w\odot m \\ 0 \end{bmatrix}}_{b}$,
which is a quadratic programming problem.

46 The Huber Wiberg algorithm for matrix factorization. Huber Wiberg algorithm. Quadratic programming. A quadratic program with constraints is
$\min_x\; \tfrac12 x^T H x + c^T x$ s.t. $Ax \le b$.
The solution must satisfy the KKT conditions, so it has the form
$\begin{bmatrix} H & A_\Omega^T \\ A_\Omega & 0 \end{bmatrix}\begin{bmatrix} \tilde x \\ \lambda \end{bmatrix} = \begin{bmatrix} -c \\ b_\Omega \end{bmatrix}$,
where $A_\Omega x = b_\Omega$ collects the active constraints. There is a transformation (row-selection) matrix $Q_\Omega$ such that $A_\Omega = Q_\Omega A$ and $b_\Omega = Q_\Omega b$.
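A minimal sketch (not the paper's implementation) of solving the active-set KKT system on this slide, i.e. $\min \tfrac12 x^T H x + c^T x$ subject to the active constraints $A_\Omega x = b_\Omega$; names are illustrative.

```python
import numpy as np

def solve_kkt(H, c, A_act, b_act):
    """Solve the equality-constrained QP via its KKT linear system."""
    n, m = H.shape[0], A_act.shape[0]
    KKT = np.block([[H, A_act.T],
                    [A_act, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b_act])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n], sol[n:]          # primal x and multipliers lambda
```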

47 The Huber Wiberg algorithm for matrix factorization. Huber Wiberg algorithm. Writing the KKT system as
$\underbrace{\begin{bmatrix} H & (Q_\Omega A)^T \\ Q_\Omega A & 0 \end{bmatrix}}_{K}\begin{bmatrix} \tilde x \\ \lambda \end{bmatrix} = \underbrace{\begin{bmatrix} -c \\ Q_\Omega b \end{bmatrix}}_{d}$, with $\tilde v = Q_K\,\tilde x$ and $Q_K = (\,I_{kn}\;\; 0\,)$,
the derivative of $\tilde v$ with respect to u follows by the chain rule:
$\dfrac{\partial \tilde v}{\partial u} = Q_K\,\dfrac{\partial K^{-1} d}{\partial\,\mathrm{vec}(K)}\,\dfrac{\partial\,\mathrm{vec}(K)}{\partial\,\mathrm{vec}(A)}\,\dfrac{\partial\,\mathrm{vec}(A)}{\partial u}$, with $\dfrac{\partial K^{-1} d}{\partial\,\mathrm{vec}(K)} = -\big((K^{-1}d)^T \otimes K^{-1}\big)$,
and $\partial\,\mathrm{vec}(A)/\partial u$ is assembled, as on slide 34, from $\partial\,\mathrm{vec}(w \odot G(U))/\partial u$ using the matrices $II_i \in \mathbb{R}^{k\times n}$ whose vectorization is $e_i$, together with the transposition matrix $\Pi$ for which $\mathrm{vec}(A^T) = \Pi\,\mathrm{vec}(A)$.

48 The Huber Wiberg algorithm for matrix factorization. Huber Wiberg algorithm. After computing $\partial \tilde v(U)/\partial u$, we have the new minimization problem
$\min_{\delta, t, z}\; \tfrac12\|z\|_2^2 + \gamma\,\mathbf{1}^T t$
s.t. $\begin{bmatrix} J & -I & I \\ -J & -I & -I \end{bmatrix}\begin{bmatrix} \delta \\ t \\ z \end{bmatrix} \le \begin{bmatrix} w\odot m - w\odot G(U)\,\tilde v(U) \\ -\big(w\odot m - w\odot G(U)\,\tilde v(U)\big)\end{bmatrix}$, where $J = \partial\big(w \odot G(U)\,\tilde v(U)\big)/\partial u$,
$t \ge 0$, $\|\delta\| \le \mu$, and $\tilde u = u + \delta^*$.

49 Thanks! Any questions?
