The Haar Wavelet Transform: Compression and Reconstruction


December 13, 2006

Have you ever looked at an image on your computer? Of course you have. Images today aren't just stored on rolls of film; most are stored and compressed using linear algebra. What does linear algebra have to do with images? Images are made up of individual pixels, which are small squares of uniform color. In a grayscale image, each pixel is represented by a single number: lower numbers are darker, and zero is completely black.

Tying into linear algebra The numbers are organized in a matrix. A typical image can have a lot of pixels: 256x256, 640x480, 1024x768, etc. We need a way to store images without storing all of that data. The answer is compression. Images are compressed and then retrieved (reconstructed) using averaging and differencing. Conceptually, averaging replaces each pair of neighboring values with their average, so two numbers become one. Differencing keeps track of the difference between the average and the original values, so no information is lost.
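A minimal sketch of this idea on a single pair (not from the slides; the variable names here are ours). The average together with the stored difference recovers both original values exactly, which is why nothing is lost:

```python
# One neighboring pair (a, b) is replaced by its average;
# the detail coefficient records how far a sits from that average.
a, b = 45.0, 11.0
average = (a + b) / 2          # 28.0
detail = a - average           # 17.0, equivalently (a - b) / 2

# The pair can be recovered exactly from (average, detail):
assert (average + detail, average - detail) == (a, b)
```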

What it looks like Here is an 8x8 matrix.

A =
  210 215 204 225  73 111 201 106
  169 145 245 189 120  58 174  78
   87  95 134  35  16 149 118 224
   74 180 226   3 254 195 145   3
   87 140  44 229 149 136 204 197
  137 114 251  51 108 164  15 249
  186 178  69  76 132  53 154 254
   79 159  64 169  85  97  12 202

Averaging Let's grab an arbitrary row as an example: [ 45 11 30 24 45 38 0 23 ]. Now we average each pair of neighbors: (45, 11) -> 28, (30, 24) -> 27, (45, 38) -> 41.5, (0, 23) -> 11.5. These averages are placed into the left half of the row: [ 28 27 41.5 11.5 x x x x ]

Differencing Differencing takes the difference between the value on the left side of each pair and the average of that pair.

  Average   First value - Average   Detail
  28        45 - 28                 17
  27        30 - 27                 3
  41.5      45 - 41.5               3.5
  11.5      0 - 11.5                -11.5

These details fill the right half, and the result is our averaged and differenced row: [ 28 27 41.5 11.5 17 3 3.5 -11.5 ]
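Here is a small NumPy sketch of one full averaging-and-differencing pass on that row; the function name average_difference_pass is ours, not from the slides:

```python
import numpy as np

def average_difference_pass(row):
    """Replace each neighboring pair (a, b) by its average (a + b) / 2,
    then append the detail coefficients (a - b) / 2."""
    pairs = np.asarray(row, dtype=float).reshape(-1, 2)
    averages = pairs.mean(axis=1)
    details = (pairs[:, 0] - pairs[:, 1]) / 2
    return np.concatenate([averages, details])

print(average_difference_pass([45, 11, 30, 24, 45, 38, 0, 23]))
# [ 28.   27.   41.5  11.5  17.    3.    3.5 -11.5]
```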

More Averaging and Differencing The averaging and differencing can be continued until there is just one averaged value (on the far left) and the rest are difference values. The difference values are called detail coefficients. But isn't there a way to average and difference an entire matrix at once? There is: this process is called transforming a matrix. Naturally, transforming is done with linear algebra, namely matrix multiplication, with block multiplication used to build the transforming matrices.
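Continuing the sketch above, repeating the pass on the leading averages until only one overall average remains could look like this (haar_row_transform is just our name for it, and the row length is assumed to be a power of two):

```python
import numpy as np

def haar_row_transform(row):
    """Average and difference the leading block repeatedly until only one
    overall average is left; everything after it is a detail coefficient."""
    row = np.asarray(row, dtype=float).copy()
    n = len(row)                       # assumes len(row) is a power of two
    while n > 1:
        pairs = row[:n].reshape(-1, 2)
        row[:n] = np.concatenate([pairs.mean(axis=1),
                                  (pairs[:, 0] - pairs[:, 1]) / 2])
        n //= 2
    return row

print(haar_row_transform([45, 11, 30, 24, 45, 38, 0, 23]))
# [ 27.   0.5  0.5 15.  17.   3.   3.5 -11.5]
```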

Transforming Let's look at which matrices are used for transforming. We will use the following for our 8x8 matrix (the last four columns carry the minus signs that produce the detail coefficients):

W1 =
  1/2   0    0    0   1/2    0    0    0
  1/2   0    0    0  -1/2    0    0    0
   0   1/2   0    0    0   1/2    0    0
   0   1/2   0    0    0  -1/2    0    0
   0    0   1/2   0    0    0   1/2    0
   0    0   1/2   0    0    0  -1/2    0
   0    0    0   1/2   0    0    0   1/2
   0    0    0   1/2   0    0    0  -1/2

These transforming matrices average and difference a piece of the matrix at a time.
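As a sketch of how such a matrix could be built programmatically (the helper name w_step is ours; the minus signs in the differencing columns reflect the fact that the detail coefficient is (a - b)/2):

```python
import numpy as np

def w_step(n, k):
    """n x n matrix whose first k columns average neighboring pairs of the
    first 2k entries, whose next k columns take half their differences,
    and whose remaining columns are the identity."""
    W = np.eye(n)
    W[:2 * k, :2 * k] = 0.0
    for j in range(k):
        W[2 * j, j] = W[2 * j + 1, j] = 0.5        # averaging column
        W[2 * j, k + j] = 0.5                      # differencing column
        W[2 * j + 1, k + j] = -0.5
    return W

W1 = w_step(8, 4)
row = np.array([45, 11, 30, 24, 45, 38, 0, 23], dtype=float)
print(row @ W1)   # [ 28.  27.  41.5  11.5  17.   3.   3.5 -11.5]
```

The printed row matches the averaged and differenced row computed by hand earlier.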

Transforming By computing AW1, each row of A is left with four averages and four detail coefficients. So, we need to transform more. Can you guess how? Block multiplication is used to get our W2: it has the 4x4 averaging/differencing matrix as its upper-left block, the 4x4 identity I as its lower-right block, and zeros elsewhere.

Transforming We now have W2:

W2 =
  1/2   0   1/2    0   0 0 0 0
  1/2   0  -1/2    0   0 0 0 0
   0   1/2   0    1/2  0 0 0 0
   0   1/2   0   -1/2  0 0 0 0
   0    0    0     0   1 0 0 0
   0    0    0     0   0 1 0 0
   0    0    0     0   0 0 1 0
   0    0    0     0   0 0 0 1

Now, AW1W2 gives us a matrix where each row has two averages and six detail coefficients.
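A sketch of W2 assembled from its blocks with NumPy's np.block (the array names are ours; the minus signs are again the differencing entries):

```python
import numpy as np

# The upper-left block averages and differences the four leading entries;
# the identity block passes the existing detail coefficients through untouched.
A2 = np.array([[0.5, 0.0,  0.5,  0.0],
               [0.5, 0.0, -0.5,  0.0],
               [0.0, 0.5,  0.0,  0.5],
               [0.0, 0.5,  0.0, -0.5]])
W2 = np.block([[A2,               np.zeros((4, 4))],
               [np.zeros((4, 4)), np.eye(4)       ]])

r1 = np.array([28, 27, 41.5, 11.5, 17, 3, 3.5, -11.5])
print(r1 @ W2)  # [27.5 26.5  0.5 15.  17.   3.   3.5 -11.5]
```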

Transforming Similarly, we are able to get W3:

W3 =
  1/2  1/2  0 0 0 0 0 0
  1/2 -1/2  0 0 0 0 0 0
   0    0   1 0 0 0 0 0
   0    0   0 1 0 0 0 0
   0    0   0 0 1 0 0 0
   0    0   0 0 0 1 0 0
   0    0   0 0 0 0 1 0
   0    0   0 0 0 0 0 1

We have that W = W1W2W3 in this case, or more generally W = W1W2...Wn. Finally, we can get our wavelet transformed matrix T = AW.
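Putting the pieces together, a self-contained sketch (reusing our assumed w_step helper from the earlier snippet) builds W = W1 W2 W3 and applies it to the example row:

```python
import numpy as np

def w_step(n, k):
    """Averaging/differencing on the first 2k entries; identity elsewhere."""
    W = np.eye(n)
    W[:2 * k, :2 * k] = 0.0
    for j in range(k):
        W[2 * j, j] = W[2 * j + 1, j] = 0.5
        W[2 * j, k + j], W[2 * j + 1, k + j] = 0.5, -0.5
    return W

W1, W2, W3 = w_step(8, 4), w_step(8, 2), w_step(8, 1)
W = W1 @ W2 @ W3                 # one matrix for the whole row transform

row = np.array([45, 11, 30, 24, 45, 38, 0, 23], dtype=float)
print(row @ W)   # [ 27.   0.5  0.5 15.  17.   3.   3.5 -11.5]
# For an 8 x 8 image A, T = A @ W transforms every row at once.
```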

What now? Okay, but where does the compression come in? We just have a matrix with a few average values and a bunch of detail coefficients! Detail coefficients tell us the difference in darkness between neighboring pixels. Small detail coefficients mean a small difference in the shade of neighboring pixels, and ignoring small differences may not change the big picture. How small a difference can we ignore? We set a threshold, ε: any detail coefficient whose magnitude is smaller than ε is set to zero. What effect will this have? The new matrix, now full of zeros, is called a sparse matrix, and sparse matrices can be stored much more compactly.
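A sketch of the thresholding step; the function name and the choice to protect the overall averages in the first column are ours:

```python
import numpy as np

def threshold(T, eps):
    """Set every detail coefficient with magnitude below eps to zero.
    The first column of T holds the overall row averages, so leave it alone."""
    S = T.copy()
    small = np.abs(S) < eps
    small[:, 0] = False       # never zero out the overall averages
    S[small] = 0.0
    return S
```

A larger ε produces more zeros and better compression, at the cost of throwing away more detail.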

Compressed! Zeroing out some values has resulted in a loss of data. The result has pros and cons (better compression versus lower fidelity), so try out a few values of ε before sticking with one. For our image of Lena, let's try ε = 1. We'll see her image later.

Reconstruction Remember how images on web pages used to appear gradually, sharpening as they loaded? That is image reconstruction going on. Since T = AW, multiplying our transformed matrix T on the right by W^-1 gives our reconstructed matrix R. Also, W^-1 is much easier to calculate if W is orthogonal (then the inverse is simply the transpose). Once we've got W^-1, we can see how to get R: T = AW1W2W3...Wn = AW and R = TW^-1 = AWW^-1. If detail coefficients have been zeroed out, R won't be quite as good as the original.
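A sketch of reconstruction under the row-only transform used here (T = AW, so we multiply on the right by the inverse of W; w_step is the same assumed helper as before):

```python
import numpy as np

def w_step(n, k):
    W = np.eye(n)
    W[:2 * k, :2 * k] = 0.0
    for j in range(k):
        W[2 * j, j] = W[2 * j + 1, j] = 0.5
        W[2 * j, k + j], W[2 * j + 1, k + j] = 0.5, -0.5
    return W

W = w_step(8, 4) @ w_step(8, 2) @ w_step(8, 1)
A = np.random.randint(0, 256, size=(8, 8)).astype(float)

T = A @ W                        # transform the rows
R = T @ np.linalg.inv(W)         # reconstruct (exact, since nothing was zeroed)
assert np.allclose(R, A)
```

If the columns of each Wi were rescaled to unit length, W would be orthogonal and its inverse would simply be its transpose, which is why orthogonality makes W^-1 easy to compute.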

Success Let's see how our reconstructed matrix compares to the original. Figure: The woman's image and her reconstruction, using ε = 1. The fact that the image was essentially preserved means that our choice of ε was a success.

Figure: The previous matrix A, a comparison with ε = 25.

Conclusion In summary, you begin with a matrix A that represents an image. Then average and difference to get your transformed matrix T. Choose an ε value as a threshold, and set the small detail coefficients to zero to get a lot of zeros in your matrix. Next, with your thresholded matrix, you reconstruct by multiplying it on the right by W^-1. This process is the Haar wavelet transformation.
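Finally, a compact end-to-end sketch under the same assumptions as the earlier snippets (row-only transform, square image whose side is a power of two; all names are ours):

```python
import numpy as np

def w_step(n, k):
    W = np.eye(n)
    W[:2 * k, :2 * k] = 0.0
    for j in range(k):
        W[2 * j, j] = W[2 * j + 1, j] = 0.5
        W[2 * j, k + j], W[2 * j + 1, k + j] = 0.5, -0.5
    return W

def haar_compress_reconstruct(A, eps):
    """Transform the rows of A, zero small detail coefficients, reconstruct."""
    n = A.shape[0]
    W = np.eye(n)
    k = n // 2
    while k >= 1:
        W = W @ w_step(n, k)               # W = W1 W2 ... Wn
        k //= 2
    T = A @ W                              # transformed matrix
    S = T.copy()
    small = np.abs(S) < eps
    small[:, 0] = False                    # keep the overall averages
    S[small] = 0.0                         # sparse, compressed matrix
    return S @ np.linalg.inv(W)            # reconstructed (approximate) image

A = np.random.randint(0, 256, size=(8, 8)).astype(float)
R = haar_compress_reconstruct(A, eps=1.0)
```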

The End Special thanks to Dave Arnold for a lot of help, to Colm Mulcahy for the great Haar transform paper and the MATLAB matrices used to wavelet compress these images, and to Gilbert Strang for providing an excellent textbook and MATLAB manual! Fin!