Singular Value Decomposition and Digital Image Compression


Chris Bingham

December 1, 2016

Abstract

The purpose of this document is to be a very basic introduction to the singular value decomposition (SVD). There will be an explanation of how to compute the SVD of an m × n matrix A, followed by an example problem. SVD has many applications, but this document will focus on its application to digital image compression, with example images.

As a digital photographer I am always dealing with storage of photos on hard drives and being concerned about the space required. The photographs that come straight out of the camera are massive files that aren't easily transferred to clients. When I saw a project pertaining to digital image compression by way of singular value decomposition (SVD), it immediately stood out as something I would like to learn. Through this document I will be exploring the mathematical processes taken to compress a digital image using linear algebra, and I will be using one of my own photographs to give a visual demonstration of how SVD affects a photograph.

SVD is a method that generalizes diagonalization to any m × n matrix A, even one that is not square or symmetric. Through this decomposition, matrix A is factored into three matrices. One of those matrices has a diagonal that contains the singular values of A, which will be very important in the application of image compression. The other two matrices are made up of the orthonormal vectors that correspond to those singular values. When multiplied, these three matrices equal matrix A. This is useful in image compression because you can choose how many of the singular values and their corresponding singular vectors to keep, obtaining a compressed version of matrix A. Before jumping into image compression, let's go over how to calculate the SVD of any m × n matrix.

Singular Value Decomposition Theorem

Let A be an m × n matrix with rank r. Then there exists an m × n matrix Σ = [ D 0 ; 0 0 ], where the diagonal entries of the block D are the first r singular values of A, σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0, and there exist an m × m orthogonal matrix U and an n × n orthogonal matrix V such that

    A = U Σ V^T
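As an aside beyond the original paper, modern numerical libraries compute this factorization in a single call. Below is a minimal Python/NumPy sketch (my own illustration, assuming NumPy is installed) that factors the 3 × 2 matrix used in the worked example later in this document and confirms the theorem's equation:

    import numpy as np

    # The 3 x 2 matrix from the worked example later in this document.
    A = np.array([[1.0, 1.0],
                  [0.0, 1.0],
                  [1.0, 0.0]])

    # full_matrices=True gives U as m x m and V^T as n x n, matching the theorem.
    U, s, Vt = np.linalg.svd(A, full_matrices=True)

    # np.linalg.svd returns the singular values as a vector; rebuild the
    # m x n matrix Sigma with them on its diagonal.
    Sigma = np.zeros(A.shape)
    np.fill_diagonal(Sigma, s)

    print(np.allclose(A, U @ Sigma @ Vt))   # True: A = U Sigma V^T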

Process of SVD

SVD can be achieved by hand calculations if you're attempting to decompose a small matrix, such as a 2 × 2 matrix. The calculations become much more time consuming as the matrices get larger. Computer programs such as Mathematica or Matlab can be used to calculate the SVD of a matrix A quickly. In the following section I will outline and explain each step taken to determine the SVD of a matrix A, and I will follow that up with an example matrix.

Step One: Calculate A^T A and find its eigenvalues

The first step is to multiply A on the left by its transpose, A^T. Once you have the new matrix A^T A, you will need to find its eigenvalues and their corresponding eigenvectors. Forming A^T A is necessary because it produces a symmetric matrix, which is guaranteed to have real, nonnegative eigenvalues and orthogonal eigenvectors; those eigenvalues are the squares of the singular values of A. Assuming A to be an m × n matrix, the dimensions work out as

    A^T A : (n × m)(m × n) = n × n

Step Two: Create matrices V and Σ

To create matrix V, take the eigenvalues obtained from A^T A and arrange them from highest to lowest. The eigenvectors corresponding to those eigenvalues, in that order, make up the columns of V. The last step for matrix V is to scale the columns into unit vectors. The columns of matrix V are known as the right singular vectors of A:

    V = [ v_1  v_2  ...  v_n ]

To create Σ, start by temporarily creating a matrix D whose diagonal holds the first r singular values of A. The singular values of A are obtained by taking the square roots of the eigenvalues found from A^T A, again arranged from highest to lowest:

    D = diag(σ_1, σ_2, ..., σ_r)

Now that you have a diagonal matrix D, you can create Σ. Sigma will have the same dimensions as your original matrix A. It is a partitioned matrix with the matrix D in the top left corner and the rest filled in with zeroes. Here is what Σ should look like:

    Σ (m × n) = [ D  0 ]   with r and m − r rows, and r and n − r columns
                [ 0  0 ]
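To make Steps One and Two concrete, here is a short sketch of the same recipe in Python/NumPy. This is my own illustration rather than the author's Mathematica code; the matrix A is the example worked by hand below.

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [0.0, 1.0],
                  [1.0, 0.0]])
    m, n = A.shape

    # Step One: A^T A is symmetric, so eigh returns real eigenvalues
    # and orthonormal eigenvectors.
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)

    # Step Two: sort the eigenvalues from highest to lowest; the matching
    # eigenvectors become the columns of V (eigh already returns unit vectors).
    order = np.argsort(eigvals)[::-1]
    eigvals = eigvals[order]
    V = eigvecs[:, order]

    # The singular values are the square roots of the eigenvalues of A^T A
    # (clipped at zero in case round-off produces a tiny negative value).
    singular_values = np.sqrt(np.clip(eigvals, 0.0, None))

    # Sigma is the same shape as A, with D = diag(sigma_1, ..., sigma_r)
    # in its top left corner.
    Sigma = np.zeros((m, n))
    np.fill_diagonal(Sigma, singular_values)

    print(singular_values)   # approximately [1.732, 1.0], i.e. sqrt(3) and 1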

Step Three: Create Matrix U

The last matrix to construct is matrix U. To construct the columns of U, multiply the original matrix A by the columns of V and scale by 1/σ_k. Remember that matrix U will be an m × m matrix. The columns of U are known as the left singular vectors of A. Below is the formula used to find the columns of U,

    u_k = (1/σ_k) (A v_k)    (1)

letting k range from 1 to the rank of A. You should end up with something like this:

    U = [ u_1  u_2  ...  u_m ]

You may run into a problem here where you have used all of the non-zero singular values to find columns of U but still do not have a square orthogonal matrix. In this case, you will need the Gram-Schmidt process to extend the existing columns of U to an orthonormal basis of R^m. To do so, start by finding a vector y that is not in the subspace spanned by the existing column vectors of U. Then plug it into the formula below to find a remaining column vector of U, repeating until U is the square orthogonal matrix needed for the SVD to be complete:

    u = y − [(y · u_1)/(u_1 · u_1)] u_1 − ... − [(y · u_a)/(u_a · u_a)] u_a

letting a be the number of column vectors of U you already have. The last thing to remember is to convert any column vectors found using the Gram-Schmidt process into unit vectors.
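Continuing the sketch above (again my own illustration, not code from the paper), Step Three applies formula (1) and then fills out any missing columns of U by Gram-Schmidt, sweeping over the standard basis vectors as candidates for y:

    # Step Three: u_k = (1/sigma_k) A v_k for each nonzero singular value.
    r = int(np.sum(singular_values > 1e-12))          # rank of A
    U_cols = [(A @ V[:, k]) / singular_values[k] for k in range(r)]

    # If r < m, extend to an orthonormal basis of R^m: subtract from each
    # candidate y its projections onto the columns found so far, and keep
    # whatever nonzero remainder survives, scaled to a unit vector.
    for y in np.eye(m):
        if len(U_cols) == m:
            break
        w = y - sum((y @ u) * u for u in U_cols)
        if np.linalg.norm(w) > 1e-12:
            U_cols.append(w / np.linalg.norm(w))

    U = np.column_stack(U_cols)
    print(np.allclose(A, U @ Sigma @ V.T))            # True

For larger matrices a QR factorization is the numerically safer way to complete the basis, but the loop above mirrors the hand calculation in this paper.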

Final Step

We have now created all three matrices required for the SVD of A. They should have the form

       A     =     U    ·    Σ    ·   V^T
    (m × n)     (m × m)   (m × n)   (n × n)

Method to Check SVD Calculation

The method to check whether you performed SVD correctly on a matrix requires a few steps involving partitioned matrices and column-row multiplication. I will outline how to go about doing this below. Let r be the rank of the original matrix A, and write out the factorization with U in columns and V^T in rows:

    A = [ u_1  u_2  ...  u_r  ...  u_m ] [ D  0 ] [ v_1^T     ]
                                         [ 0  0 ] [   ...     ]
                                                  [ v_r^T     ]
                                                  [ v_{r+1}^T ]
                                                  [   ...     ]
                                                  [ v_n^T     ]

where D = diag(σ_1, ..., σ_r). Start by multiplying the matrix U by the columns of Σ.

    U Σ = [ σ_1 u_1   σ_2 u_2   ...   σ_r u_r   0   ...   0 ]

For the next step, partition this matrix and the remaining factor V^T conformably. Performing column-row multiplication, we get

    A = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ... + σ_r u_r v_r^T + 0 v_{r+1}^T + ... + 0 v_n^T

The entries of Σ and the rows of V^T after σ_r and v_r^T are all zeros, resulting in

    A = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ... + σ_r u_r v_r^T    (2)

which can be written as

    A_k = Σ_{i=1}^{k} σ_i u_i v_i^T    (3)

with k being the number of singular values you want to use. To check whether the original matrix is recovered, use all r of the singular values. However, this is also the method through which you find an approximation matrix A_k of the original matrix A.
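Formula (3) translates directly into code. The following sketch (an illustration of the formula, not taken from the paper) builds A_k as a running sum of rank one outer products, reusing A, U, singular_values, V and r from the sketches above; with k = r it reproduces A exactly, which is the check described in this section:

    def rank_k_approximation(U, singular_values, V, k):
        # A_k = sigma_1 u_1 v_1^T + ... + sigma_k u_k v_k^T, formula (3).
        A_k = np.zeros((U.shape[0], V.shape[0]))
        for i in range(k):
            A_k += singular_values[i] * np.outer(U[:, i], V[:, i])
        return A_k

    # Using all r nonzero singular values recovers the original matrix.
    print(np.allclose(A, rank_k_approximation(U, singular_values, V, r)))   # True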

Example of SVD Using a 3 × 2 Matrix

Given the matrix A below, compute A = U Σ V^T.

    A = [ 1  1 ]
        [ 0  1 ]
        [ 1  0 ]

Step One

    A^T A = [ 1  0  1 ] [ 1  1 ]   =   [ 2  1 ]
            [ 1  1  0 ] [ 0  1 ]       [ 1  2 ]
                        [ 1  0 ]

Now that we have a square symmetric matrix, we can easily find the eigenvalues and corresponding eigenvectors. Since this is a 2 × 2 matrix, we can use the shortcut formula to find the characteristic polynomial: find the trace T and determinant D of A^T A and substitute into the following formula:

    p(λ) = λ^2 − Tλ + D
    p(λ) = λ^2 − 4λ + 3
    p(λ) = (λ − 3)(λ − 1)

The eigenvalues, arranged from highest to lowest, are 3 and 1. To find the corresponding eigenvectors, calculate A^T A − 3I and A^T A − I. Through elementary row operations and determining a basis for each eigenspace, we find the eigenvectors:

    λ_1 = 3:  v_1 = [ 1 ]        λ_2 = 1:  v_2 = [ −1 ]
                    [ 1 ]                        [  1 ]

Step Two

Since we have the eigenvectors of A^T A, we can use them as the columns of matrix V:

    V = [ 1  −1 ]
        [ 1   1 ]

We want matrix V to be an orthogonal matrix, so divide each column by its length to make the columns unit vectors:

    V = (1/√2) [ 1  −1 ]
               [ 1   1 ]

We can also construct Σ at this point because we know the eigenvalues. Take the square root of each eigenvalue and place the results on the diagonal, arranged from highest to lowest. At this point we need to make sure that the dimensions of Σ are the same as the dimensions of the original matrix A, so a row of zeros is appended:

    Σ = [ √3  0 ]
        [  0  1 ]
        [  0  0 ]

Step Three

The last matrix to construct is matrix U. To do so, use formula (1):

    u_1 = (1/√3) A v_1 = (1/√6) [ 2 ]        u_2 = (1/1) A v_2 = (1/√2) [  0 ]
                                [ 1 ]                                   [  1 ]
                                [ 1 ]                                   [ −1 ]

Therefore:

    U = [ 2/√6     0   ]
        [ 1/√6   1/√2  ]
        [ 1/√6  −1/√2  ]

Here we run into a problem. Following the theorem for SVD, this matrix needs to be a square orthogonal matrix. We only had two singular values to plug into the formula, though, resulting in a 3 × 2 matrix. To fix this we use the Gram-Schmidt process to find the third vector. To do this, we need to choose a vector y that is not in the plane spanned by u_1 and u_2. I chose the vector

    y = [ 1 ]
        [ 0 ]
        [ 0 ]

Plugging what we have for u_1, u_2 and y into the formula

    u_3 = y − [(y · u_1)/(u_1 · u_1)] u_1 − [(y · u_2)/(u_2 · u_2)] u_2

we get our third vector. The last step is to convert it to a unit vector, and then we have completed matrix U:

    u_3 = (1/3) [  1 ]        û_3 = (1/√3) [  1 ]
                [ −1 ]                     [ −1 ]
                [ −1 ]                     [ −1 ]

We now have the correct matrix U and we can complete the SVD of matrix A:

    U = [ 2/√6     0     1/√3 ]
        [ 1/√6   1/√2   −1/√3 ]
        [ 1/√6  −1/√2   −1/√3 ]

Final Step

We now have all three matrices needed for the SVD of matrix A:

    A = U Σ V^T = [ 2/√6     0     1/√3 ] [ √3  0 ]  ( (1/√2) [ 1  −1 ] )^T
                  [ 1/√6   1/√2   −1/√3 ] [  0  1 ]           [ 1   1 ]
                  [ 1/√6  −1/√2   −1/√3 ] [  0  0 ]

Check Answer

Using the method described, we can check our SVD calculation. Our calculated SVD has the form

    A = [ u_1  u_2  u_3 ] [ σ_1   0  ] [ v_1^T ]
                          [  0   σ_2 ] [ v_2^T ]
                          [  0    0  ]

Plugging the columns, singular values and rows we have into formula (2), we get

    A = [  1     1  ]   [   0     0  ]
        [ 1/2   1/2 ] + [ −1/2   1/2 ]    (4)
        [ 1/2   1/2 ]   [  1/2  −1/2 ]

The two matrices above are both rank one matrices. The first of them is actually the rank one approximation, A_1, of the original matrix A. When you add the two rank one matrices together, you will see that they equal the original matrix A.
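As a numerical cross-check of the hand calculation (my own addition, not part of the paper), the exact matrices above can be typed in and multiplied back together:

    import numpy as np

    s2, s3, s6 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0)

    U = np.array([[2/s6,   0.0,  1/s3],
                  [1/s6,  1/s2, -1/s3],
                  [1/s6, -1/s2, -1/s3]])
    Sigma = np.array([[s3, 0.0],
                      [0.0, 1.0],
                      [0.0, 0.0]])
    V = np.array([[1/s2, -1/s2],
                  [1/s2,  1/s2]])

    print(np.allclose(U.T @ U, np.eye(3)))   # True: U is orthogonal
    print(np.allclose(V.T @ V, np.eye(2)))   # True: V is orthogonal
    print(U @ Sigma @ V.T)                   # recovers [[1, 1], [0, 1], [1, 0]]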

Approximation of the Original Matrix

In the method used to check that our calculations equaled the original matrix A, we discovered a formula, (3), for calculating various approximations of A. Using that formula, you can choose how many singular values to use in order to find an approximation of A. The number of singular values directly determines the rank of the approximated matrix: the summation adds k rank one matrices together, so the final result is a matrix of rank k. If we once again look at our example matrix A, we can see in (4) that the rank one approximation of A is

    A_1 = σ_1 u_1 v_1^T = [  1     1  ]
                          [ 1/2   1/2 ]
                          [ 1/2   1/2 ]

When forming a rank one approximation we omit all of the other rank one terms of the original matrix A. For this specific example we started with a rank two matrix, so if we add back the one remaining term, we end up with our original matrix A. For matrices of much larger rank, there are many more options for approximated matrices. The example image that will be shown below begins as a rank 480 matrix, and we will see that most of the pertinent information in the original corresponds to the largest singular values. Approximated matrices are a great way to remove some of the redundant information contained in a matrix while still retaining very useful information. This will be seen in the application to digital images.

Application of SVD to Image Compression

We now know how SVD can be used to find an approximation of an m × n matrix. When applied to digital images, this can be used to compress an image to achieve a smaller file size for easier storage or transfer. It is also a unique opportunity to see a visual representation of how SVD affects a matrix. In order to perform this, I used Mathematica, a powerful program that

not only enabled me to upload and extract data from the image, but also to compute its SVD and approximated data matrices.

Digital images are stored as data in the form of pixels. Each pixel is represented by a cell in a matrix, and the value of that cell is the intensity of the pixel. A truly grayscale image consists of one matrix, while color images are made up of three matrices storing the intensities for red, blue and green. For my demonstration of the effect that SVD can have on an image, I will be showing a grayscale image of the New York skyline that I captured. The image size is 480 pixels by 640 pixels. Data is extracted from the image to create a matrix with 480 rows and 640 columns, containing 307,200 entries. The rank of the image matrix is 480. This gives us plenty of possible approximations to try, and through them we will see a gradual change in the image quality as we increase the rank of the approximation.

Visual Representation of SVD

You will see in the images below that the number of singular values used has a big impact on the image compression. Using only one singular value, in Figure 2, the image is unrecognizable, so it is not a very useful approximation. When 80 singular values are used, in Figure 7, there is still evidence of some lost data in the approximation, but the image is very much recognizable. When using 160 singular values, in Figure 8, we get an image that is closer to the original quality. Depending on its application, this image could be used in place of the original, at a compressed file size. By the time we have used 320 singular values, in Figure 9, there is almost no visual change from the original image, and we can assume that from that point forward any additional rank one terms would change the image very little, if at all. This is a clear example that most of the pertinent information in the original image is stored with the largest singular values. For this image, then, somewhere around a rank 160 approximation could be a desirable result; it depends, of course, on just how clear you need the image to be and what it will be used for.
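The paper's experiments were run in Mathematica. For readers who want to reproduce them, here is an equivalent sketch in Python (my own, assuming Pillow and NumPy are installed; "skyline.png" is a placeholder filename, not the author's file). It also counts what a rank k approximation actually stores: k(m + n + 1) numbers instead of m·n, so the rank 160 version of a 480 × 640 image keeps 160 × (480 + 640 + 1) = 179,360 values rather than 307,200.

    import numpy as np
    from PIL import Image

    # Load the photograph and convert it to a grayscale intensity matrix.
    img = np.asarray(Image.open("skyline.png").convert("L"), dtype=float)
    m, n = img.shape                      # 480 x 640 for the image in this paper

    U, s, Vt = np.linalg.svd(img, full_matrices=False)

    for k in (1, 5, 10, 20, 40, 80, 160, 320):
        # Rank-k approximation: keep only the k largest singular values.
        approx = (U[:, :k] * s[:k]) @ Vt[:k, :]
        Image.fromarray(np.clip(approx, 0, 255).astype(np.uint8)).save(
            f"skyline_rank_{k}.png")
        print(f"rank {k}: stores {k * (m + n + 1)} numbers instead of {m * n}")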

Figure 1: The original image from which data was extracted to create matrix A.

Figure 2: Approximation using 1 singular value. The image is unrecognizable; this is the highest compression of the image, but it is not useful at all.

Figure 3: With a rank 5 approximation, we begin to see more information being added to the image.

Figure 4: Rank 10 approximation.

Figure 5: Rank 20 approximation; the image continues to get sharper. This is still too compressed, and an image of this quality is not desirable.

Figure 6: Rank 40 approximation.

Figure 7: Rank 80 approximation.

Figure 8: Rank 160 approximation.

Figure 9: Rank 320 approximation. You can see at this point that the image is almost identical to the original.

Figure 10: The approximated matrix with the same rank as the original. It looks identical to the original image.

Through this document, I broke down the process to calculate the SVD of an m × n matrix A. My hope is that this will be a useful tool for students to understand the process and to be able to replicate the calculations with ease. SVD is an extremely powerful process that can be used to manipulate matrices, and I have barely scratched the surface of its applications. As a visual learner, image compression was the most intriguing application for me, as it gave me an opportunity to see a visual representation of an approximated matrix.
