MAT 343 Laboratory 6 The SVD decomposition and Image Compression

In this laboratory session we will learn how to:

1. Find the SVD decomposition of a matrix using MATLAB.
2. Use the SVD to perform image compression.

Introduction

Let A be an m × n matrix (m ≥ n). Then A can be factored as

    A = U S V^T = [ u_1  u_2  ...  u_m ] S [ v_1  v_2  ...  v_n ]^T        (L6.1)

where S is the m × n matrix

    [ σ_1   0   ...   0  ]
    [  0   σ_2  ...   0  ]
    [  :    :          :  ]
    [  0    0   ...  σ_n ]
    [  0    0   ...   0  ]
    [  :    :          :  ]
    [  0    0   ...   0  ]

By convention, σ_1 ≥ σ_2 ≥ ... ≥ σ_n ≥ 0. U and V are orthogonal matrices (square matrices with orthonormal column vectors). Note that the matrix S is usually denoted by Σ; however, we will use S to be consistent with MATLAB notation.

One useful application of the SVD is the approximation of a large matrix by another matrix of lower rank. The lower-rank matrix contains less data and so can be represented more compactly than the original one. This is the idea behind one form of signal compression.

The Frobenius Norm and Lower-Rank Approximation

The Frobenius norm is one way to measure the size of a matrix:

    || [ a  b ; c  d ] ||_F = sqrt(a^2 + b^2 + c^2 + d^2)

and in general ||A||_F = (Σ_{i,j} a_ij^2)^(1/2), that is, the square root of the sum of the squares of the entries of A. If Â is an approximation to A, then we can quantify the goodness of fit by ||A − Â||_F. This is just a least-squares measure.

The SVD has the property that, if you want to approximate a matrix A by a matrix Â of lower rank, then the matrix that minimizes ||A − Â||_F among all rank-1 matrices is

    Â = u_1 [ σ_1 ] v_1^T = σ_1 u_1 v_1^T.

In general, the best rank-r approximation to A is given by

    Â = [ u_1 ... u_r ] diag(σ_1, ..., σ_r) [ v_1 ... v_r ]^T
      = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ... + σ_r u_r v_r^T

© 2016 Stefania Tracogna, SoMSS, ASU

and it can be shown that

    || A − Â ||_F = sqrt( σ_(r+1)^2 + σ_(r+2)^2 + ... + σ_n^2 )        (L6.2)

The MATLAB command [U, S, V] = svd(A) returns the SVD decomposition of the matrix A; that is, it returns matrices U, S and V such that A = U S V^T.

Example. The SVD of the following matrix A is:

    [ -2   8  20 ]   [ 3/5  -4/5   0 ] [ 30   0   0 ] [ 1/3   2/3   2/3 ]^T
    [ 14  19  10 ] = [ 4/5   3/5   0 ] [  0  15   0 ] [ 2/3   1/3  -2/3 ]
    [  2  -2   1 ]   [  0     0    1 ] [  0   0   3 ] [ 2/3  -2/3   1/3 ]

(a) Enter the matrix A and compute U, S and V using the svd command. Verify that A = U S V^T.

>> A = [-2, 8, 20; 14, 19, 10; 2, -2, 1];
>> [U, S, V] = svd(A)

U =
    0.6000   -0.8000    0.0000
    0.8000    0.6000    0.0000
    0.0000   -0.0000    1.0000

S =
   30.0000         0         0
         0   15.0000         0
         0         0    3.0000

V =
    0.3333    0.6667    0.6667
    0.6667    0.3333   -0.6667
    0.6667   -0.6667    0.3333

We can easily verify that U*S*V' returns the matrix A.

(b) Use MATLAB to find the best rank-1 approximation to A (with respect to the Frobenius norm), that is, A1 = σ_1 u_1 v_1^T. Use the command rank to verify that the matrix A1 has indeed rank one. Evaluate ||A − A1||_F by typing norm(A - A1, 'fro') and verify Eq. (L6.2).

>> A1 = S(1,1)*U(:,1)*V(:,1)'

A1 =
    6.0000   12.0000   12.0000
    8.0000   16.0000   16.0000
    0.0000    0.0000    0.0000

>> rank(A1)

ans =
     1

>> norm(A - A1, 'fro')

597 >> sqrt(s(,)^+s(,)^) 597 Note that, although A is and therefore it contains nine entries, we only need seven values to generate it: (one singular value)+ (one column of U) +( one column of V ) = + + = 7 his is less than the number of entries in the matrix A (c) Find the best rank- approximation to A Check the rank and the Frobenius norm of A A using MALAB and verify (L6) >> A = A+S(,)*U(:,)*V(:,) A = -0000 80000 00000 40000 90000 00000-00000 00000 00000 >> rank(a) >> norm(a-a, fro ) 0000 >> S(,) Note that we need 4 entries to generate A: ( singular values) + ( columns of U) + ( columns of V ) = + + = 4 his is more than the number of entries in the original matrix A Image Compression Exercises Instructions: he following problems can be done interactively or by writing the commands in an M- file (or by a combination of the two) In either case, record all MALAB input commands and output in a text document and edit it according to the instructions of LAB and LAB For problem, include a picture of the rank- approximation For problem, include a picture of the rank-0 approximation and for problem 4, include a picture of the lower rank approximation that, in your opinion, gives an acceptable approximation to the original picture Crop and resize the pictures so that they do not take up too much space and paste them into your lab report in the appropriate order Step Download the file cauchybwjpg and save it to your working MALAB directory hen load it into MALAB with the command A = imread( cauchybwjpg ); note the semicolon he semicolon is necessary so that MALAB does not print out many screenfuls of data he result is a matrix of grayscale values corresponding to a black and white picture of a dog (he matrix has 04, 780 entries) Now, A is actually 0 8 o create an image from the matrix A, MALAB uses the three values A(i, j, : ) as the RGB values to use to color the ijth pixel We have a black and white picture so A(:,:,) = A(:,:,) = A(:,:,) and we only need to work with 
A(:,:,) c 06 Stefania racogna, SoMSS, ASU

Step 2. We need to do some initial processing. Type

    B = double(A(:,:,1)) + 1;    % don't forget the semicolon

which converts A into the double-precision format that is needed for the singular value decomposition. Now type

    B = B/256;                   % semicolon!
    [U, S, V] = svd(B);          % semicolon!

This decomposition is just Eq. (L6.1). The gray scale goes from 0 to 256 in a black-and-white JPEG image. We divide B by 256 to obtain values between 0 and 1, which is required for MATLAB's image routines, which we will use later.

PROBLEM 1. What are the dimensions of U, S, and V? (Find out by typing size(U) - without the semicolon - and likewise for the others.) Here S has more columns than rows; in fact, columns 311 to 338 are all zeros. (When A has more columns than rows, we pad S on the right with zero columns to turn S into an m × n matrix.) Otherwise, with this modification, the SVD is just like Eq. (L6.1).

PROBLEM 2. Compute the best rank-1 approximation to B and store it in the matrix rank1. (Use the commands given in the Example, parts (a) and (b) on page 2, but applied to the matrix B rather than A. Make sure you suppress the output.)

Step 3. Let's visualize rank1. To do that, first create

    C = zeros(size(A));          % semicolon!

This creates an array of zeros, C, of the same dimension as the original image matrix A.

Step 4. Copy the rank-1 image into C as follows:

    C(:,:,1) = rank1;
    C(:,:,2) = rank1;
    C(:,:,3) = rank1;

Include the code and the figure in your report.

Step 5. We are almost done, except for one hitch. MATLAB does all its scaling using values from 0 to 1 (and maps them to values between 0 and 256 for the graphics hardware). Lower-rank approximations to the actual image can have values that are slightly less than 0 or greater than 1, so we will truncate them to fit, as follows:

    C = max(0, min(1, C));

Step 6. View the resulting image:

    image(C)                     % no semicolon

PROBLEM 3. Create and view a rank-20 approximation to the original picture. (Use Steps 4-6, but with rank20 instead of rank1. If you mess up - for example, you get an all-black picture - then start over from Step 3.) It is convenient to create an M-file with a for loop to evaluate σ_1 u_1 v_1^T + ... + σ_20 u_20 v_20^T. Include the code and the figure.

PROBLEM 4. Repeat with rank-10, 30 and 40 approximations (and any other ranks that you'd like to experiment with). What is the smallest rank that, in your opinion, gives an acceptable approximation to the original picture? In your lab write-up, include only the code that gives an acceptable approximation and the corresponding picture.

PROBLEM 5. What rank-r approximation exactly reproduces the original picture? You only need to answer the question; do not include the picture. Hint: Think about the original picture. Do not just answer this question by looking at the figures and guessing.

PROBLEM 6.
(i) Suppose that a rank-k approximation, for some k, gives an acceptable approximation. How much data is needed to represent the rank-k approximation? Your answer should be an expression in terms of k, m and n. Hint: you need k columns of U, k columns of V, and k singular values of S.
(ii) The ratio of the amount of data used for the approximation (which you found in part (i)) to the amount of data of the (original format of the) picture is the compression rate. Find the compression rate for the value of the rank you determined in Problem 4. What does the compression rate represent? Hint: After finding the compression rate for the value of the rank you determined in Problem 4, you may want to present this number as a percentage. Think about how this percentage relates to the amount of data of the original picture.

PROBLEM 7. If we use a high-rank approximation, then the amount of data needed for the approximation may exceed the amount of data used in the original representation of the picture. Find the smallest value of k such that the rank-k approximation of the matrix uses the same amount of data as the original picture, or more. Approximations with ranks higher than this k cannot be used for image compression, since they defeat the purpose of using less data than the original representation. Hint: Use the general compression rate formula you found in Problem 6, part (ii). When you substitute the numbers for m and n, your k value will be a decimal number. Since k represents a rank, it must be an integer. Think about whether you should round down or up.
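As an aside for further experimentation (not required by the lab): the partial sums used above can also be formed in one line with submatrix products instead of a for loop, and the resulting Frobenius-norm error can be checked against the error formula from the Introduction. A sketch, assuming B, U, S and V from Step 2 are in the workspace:

```matlab
% Sketch: best rank-k approximation of B as a single product of
% submatrices, equivalent to the for-loop partial sum.
k = 20;
Bk = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';

% The Frobenius error should equal the square root of the sum of the
% squares of the discarded singular values.
s = diag(S);
err      = norm(B - Bk, 'fro');
expected = sqrt(sum(s(k+1:end).^2));
% err and expected agree up to rounding error
```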