deconvolution Documentation
Release

Frederic Grabowski, Paweł Czyż

Aug 07, 2018
Contents

1 Readme
    1.1 deconvolution
2 Indices and tables
Bibliography
CHAPTER 1

Readme

1.1 deconvolution

A Python module providing a Deconvolution class that implements and generalises the Ruifrok-Johnston color deconvolution algorithm [RJ], [IJ]. It allows one to split an image into distinct color layers in just a few lines of code:

    from deconvolution import Deconvolution
    from PIL import Image

    img = Image.open("image.jpg")

    # Declare an instance of Deconvolution, with the image loaded and a color
    # basis defining which layers are of interest
    decimg = Deconvolution(image=img, basis=[[1, 0.1, 0.2], [0, 0.1, 0.8]])

    # Construct new PIL Images with the different color layers
    layer1, layer2 = decimg.out_images(mode=[1, 2])

1.1.1 Installation

You can install the package using pip:

    pip install deconvolution

Alternatively, you can clone the repository and run:

    make install

From then on you can import and use the module in your scripts:

    from deconvolution import Deconvolution
    d = Deconvolution()
1.1.2 Testing

    # For Python 3 users
    make test

    # For Python 2 users
    make comp

    # Check the code coverage
    make coverage

    # Check the coverage interactively, using a web browser
    make html

Deconvolve

For a better user experience we created a script that allows one to deconvolve images from the shell. Copy the deconvolve.py file into /usr/local/bin or, if you want to use it locally:

    mkdir ~/bin
    cp deconvolve.py ~/bin
    export PATH=~/bin:$PATH

From then on you can deconvolve images using:

    deconvolve.py image1.png image2.png ...

    # For help
    deconvolve.py -h

Documentation

Check out our documentation at Read The Docs.

Contributors

The method was developed by Frederic Grabowski, generalising the Ruifrok-Johnston algorithm [RJ], and implemented by Frederic Grabowski [FG] and Paweł Czyż [PC]. We are very grateful to prof. Daniel Wójcik and dr Piotr Majka [N1], [N2], who supervised the project. We would also like to thank prof. Gabriel Landini [GL], who implemented the colour deconvolution in ImageJ [IJ] and allowed us to test the algorithm on his data.

References

Examples

Ruifrok-Johnston deconvolution

Assume that we have an image and we know three stains. We can deconvolve the image and get the density layers:

    """
    In this example, we have an image and a known basis with three vectors.
    We want to get the density layers.
    In this example we use a cropped image [1], released under a CC licence.
    [1] File:Chronic_lymphocytic_leukemia_-_high_mag.jpg
    """
    from deconvolution import Deconvolution
    from PIL import Image


    def join_horizontally(*args):
        """Joins many PIL images of the same dimensions horizontally"""
        w, h = args[0].size
        n = len(args)
        joined = Image.new("RGB", (n*w, h))
        for x_off, img in zip(range(0, n*w, w), args):
            joined.paste(img, (x_off, 0))
        return joined


    if __name__ == "__main__":
        # Load an image
        original = Image.open("cropped.jpg")
        # Create a deconvolution object with the image
        dec = Deconvolution(image=original,
                            basis=[[0.91, 0.38, 0.71], [0.39, 0.47, 0.85], [1, 0, 0]])
        # Produce density matrices for the three colors. Be aware that Beer's
        # law does not always hold.
        first_density, second_density, third_density = dec.out_scalars()
        print(first_density.shape, second_density.shape, third_density.shape)
        # Produce the reconstructed image, first, second and third layers, and the rest
        out_images = dec.out_images(mode=[0, 1, 2, 3, -1])
        # Original image, reconstruction, layers and rest
        join_horizontally(original, *out_images).show()

Two stain deconvolution

Alternatively, we can use two stains. If we know both of them, we can simply mimic the procedure above:

    """
    In this example, we have an image and a known basis with two vectors.
    We want to get the density layers.

    In this example we use a cropped image [1], released under a CC licence.
    [1] File:Chronic_lymphocytic_leukemia_-_high_mag.jpg
    """
    from deconvolution import Deconvolution
    from PIL import Image
    def join_horizontally(*args):
        """Joins many PIL images of the same dimensions horizontally"""
        w, h = args[0].size
        n = len(args)
        joined = Image.new("RGB", (n*w, h))
        for x_off, img in zip(range(0, n*w, w), args):
            joined.paste(img, (x_off, 0))
        return joined


    if __name__ == "__main__":
        # Load an image
        original = Image.open("cropped.jpg")
        # Create a deconvolution object with the image
        dec = Deconvolution(image=original,
                            basis=[[0.91, 0.38, 0.71], [0.39, 0.47, 0.85]])
        # Produce density matrices for both colors. Be aware that Beer's law
        # does not always hold.
        first_density, second_density = dec.out_scalars()
        print(first_density.shape, second_density.shape)
        # Produce the reconstructed image, first and second layers, and the rest
        out_images = dec.out_images(mode=[0, 1, 2, -1])
        # Original image, reconstruction, layers and rest
        join_horizontally(original, *out_images).show()

But if we do not know at least one of the stains, we should first find appropriate vectors:

    """
    In this example, we have an image and an insufficient number of color
    vectors - we will use Deconvolution's complete_basis method to find the
    other vectors. Then we try to make them more independent by applying
    resolve_dependencies with different belligerency parameters. We show all
    the bases found.

    In this example we use a cropped image [1], released under a CC licence.
    [1] File:Chronic_lymphocytic_leukemia_-_high_mag.jpg
    """
    from deconvolution import Deconvolution
    from PIL import Image


    def join_horizontally(*args):
        """Joins many PIL images of the same dimensions horizontally"""
        w, h = args[0].size
        n = len(args)
        joined = Image.new("RGB", (n*w, h))
        for x_off, img in zip(range(0, n*w, w), args):
            joined.paste(img, (x_off, 0))
        return joined


    def join_vertically(*args):
        """Joins many PIL images of the same dimensions vertically"""
        w, h = args[0].size
        n = len(args)
        joined = Image.new("RGB", (w, n*h))
        for y_off, img in zip(range(0, n*h, h), args):
            joined.paste(img, (0, y_off))
        return joined


    if __name__ == "__main__":
        # Load an image
        original = Image.open("cropped.jpg")
        # Create a deconvolution object with the image
        dec = Deconvolution(image=original, sample_density=6)

        # Complete the basis - as we did not provide 2 or 3 vectors, it needs to be found
        dec.complete_basis()
        # We can inspect the basis found
        print("basis before resolve:\n{}\n".format(dec.pixel_operations.get_basis()))
        # Produce the reconstructed image, first layer, second layer and rest
        out_images1 = dec.out_images(mode=[0, 1, 2, -1])
        # Original image, reconstruction, layers and rest
        before_resolve = join_horizontally(original, *out_images1)

        # Resolve dependencies - make the vectors more independent
        dec.resolve_dependencies(belligerency=0.1)
        print("basis after resolve:\n{}\n".format(dec.pixel_operations.get_basis()))
        out_images2 = dec.out_images(mode=[0, 1, 2, -1])
        after_resolve = join_horizontally(original, *out_images2)

        # Resolve dependencies with a huge belligerency - make the vectors very independent
        dec.resolve_dependencies(belligerency=1)
        print("basis after aggressive resolve:\n{}\n".format(dec.pixel_operations.get_basis()))
        out_images3 = dec.out_images(mode=[0, 1, 2, -1])
        after_huge_resolve = join_horizontally(original, *out_images3)

        # Show all images for visual comparison
        join_vertically(before_resolve, after_resolve, after_huge_resolve).show()
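Since resolve_dependencies pushes the completed basis vectors apart, one quick way to see how independent two basis vectors are is the angle between them. A minimal NumPy sketch (basis_angle_deg is an illustrative helper, not part of the package API):

```python
import numpy as np

def basis_angle_deg(v, u):
    """Angle in degrees between two RGB basis vectors."""
    v = np.asarray(v, dtype=float)
    u = np.asarray(u, dtype=float)
    cos = v @ u / (np.linalg.norm(v) * np.linalg.norm(u))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# The two stain vectors used in the examples above are far from parallel:
print(basis_angle_deg([0.91, 0.38, 0.71], [0.39, 0.47, 0.85]))
```

A larger angle means the stains are easier to separate; a zero angle means the basis is degenerate.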
Mathematical description of color deconvolution

Introduction

Let us imagine a light source that sends light through layers of different substances, each of them absorbing some of the light passing through. Given a digital recording of the light that passed through this setup (and information about the input light), we would like to reconstruct the layout of the substances.

Each pixel is represented by a three-component vector (its RGB components). I will make frequent use of the Hadamard (or entry-wise) product:

    x \circ y = (x_1 y_1, x_2 y_2, x_3 y_3).

The Hadamard product is similar to the standard multiplication of real numbers, and I will use the same notation for inverses and powers.

Consider light represented by an RGB vector i passing through a unit-wide layer of some substance. I will assume that Beer's law [LB] holds, that is, the RGB vector of the outgoing light will be:

    i \circ s = (i_1 s_1, i_2 s_2, i_3 s_3),

where s is specific to this substance (the choice of unit width does change this value; this issue will be addressed later on). If, for example, s = (1, 1, 1), the layer does not absorb any light; for s = (0, 1, 1) the red channel is completely absorbed, but the remaining channels are untouched. Should the layer be a times wider, the RGB components of the outgoing light would be:

    i \circ s^a = (i_1 s_1^a, i_2 s_2^a, i_3 s_3^a),

and for multiple substances:

    i \circ p^a \circ q^b \cdots = (i_1 p_1^a q_1^b \cdots, i_2 p_2^a q_2^b \cdots, i_3 p_3^a q_3^b \cdots).

This equation implies that changing the order of layers, or splitting some of them, does not change the outgoing light. I will work under the assumption that this also holds for mixed substances.

Ruifrok-Johnston deconvolution

In the case studied by Ruifrok and Johnston [RJ], the light i passes through three substances. The vectors v, u, w describing the absorption rates of the substances (that is, the absorption coefficients of unit-wide substance layers) are assumed to be known.
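The multiplicative model above is easy to check numerically. A small NumPy sketch (the absorbance vectors here are made-up illustrations, not measured stains):

```python
import numpy as np

i = np.array([1.0, 1.0, 1.0])        # incoming white light (RGB)
s = np.array([0.0, 1.0, 1.0])        # unit-width layer absorbing all red

out_unit = i * s                     # Hadamard product: one unit-width layer
out_double = i * s**2.0              # a layer twice as wide: s raised to a = 2

# Two substances p, q of widths a, b compose multiplicatively,
# so the order of the layers does not matter:
p = np.array([0.9, 0.5, 0.8])
q = np.array([0.7, 0.95, 0.6])
a, b = 2.0, 1.5
assert np.allclose(i * p**a * q**b, i * q**b * p**a)
```

The commutativity assertion is exactly the order-independence claimed in the text.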
The width of each substance layer (which may change from point to point) has to be calculated from the output light. Suppose the camera registers a single vector r at some pixel. We wish to express this vector according to the equation:

    i \circ v^a \circ u^b \circ w^c = r.

Solving this equation for a, b and c gives the layer widths in unit lengths. We may then compute how much light would have passed through each layer separately: the first deconvolution is just i \circ v^a, the second i \circ u^b and the third i \circ w^c. It is possible that no real non-negative a, b, c solve this equation (due to data noise, imperfect digitization,
traces of other substances, etc.). In that case, the reconstructed image will differ from the original: this difference can be visualized by considering the "rest" picture:

    r - i \circ v^a \circ u^b \circ w^c.

Let us briefly return to the issue of picking a unit width. Notice that changing the unit width of a substance by a factor of \lambda changes the constant a at each pixel to a/\lambda, and thus the density distribution (a times the unit width) does not change. Similarly, the deconvolved images do not change:

    i \circ v^a = i \circ (v^\lambda)^{a/\lambda}.

Equation (\ref{eq:decon3}) is in fact a set of three equations, one for each component. Let us rewrite it for the k-th component and transform it equivalently:

    v_k^a u_k^b w_k^c = r_k / i_k,
    a \log v_k + b \log u_k + c \log w_k = \log(r_k / i_k),

and, going back to the vector representation:

    \begin{pmatrix} \log v_1 & \log u_1 & \log w_1 \\ \log v_2 & \log u_2 & \log w_2 \\ \log v_3 & \log u_3 & \log w_3 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} - \begin{pmatrix} \log(r_1/i_1) \\ \log(r_2/i_2) \\ \log(r_3/i_3) \end{pmatrix} = 0.

This equation is solvable if and only if the matrix on the left is invertible and i_k \neq 0. In physical terms, the first assumption states that no substance can be faked by mixing the remaining two, and the second that the input light is nonzero in all channels.

Given an input image, at each pixel r we may solve for a, b, c. Should we get any negative results, it means that this particular pixel's color cannot be obtained by mixing the given substances. In this case the assumption that the image was obtained by mixing the given substances is violated - hence it is reasonable to treat such results as data noise and drop all negative parts. Having done that, we are able to construct five pixels:

- Reconstructed pixel: i \circ v^a \circ u^b \circ w^c
- Difference from the original image, due to the negative cut-off: r - i \circ v^a \circ u^b \circ w^c
- Three single-substance pixels: i \circ v^a, i \circ u^b and i \circ w^c

After processing every pixel in this manner, the reconstructed image, three single-substance images, and one remainder image (showing the error) are obtained. An example is shown in Figure \ref{fig:ruifrok}.
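The per-pixel linear system for (a, b, c) can be solved directly with NumPy. A minimal sketch (densities is an illustrative helper, not the package API; the stain vectors are taken from the examples above plus one made-up third vector):

```python
import numpy as np

def densities(r, i, v, u, w):
    """Solve a*log v_k + b*log u_k + c*log w_k = log(r_k / i_k) for (a, b, c),
    clipping negative widths to zero as the text suggests."""
    M = np.log(np.column_stack([v, u, w]))   # 3x3 matrix of log-absorbances
    rhs = np.log(np.asarray(r, dtype=float) / np.asarray(i, dtype=float))
    abc = np.linalg.solve(M, rhs)
    return np.clip(abc, 0.0, None)

# Round-trip check: build r from known widths, then recover them.
i = np.array([1.0, 1.0, 1.0])
v = np.array([0.91, 0.38, 0.71])
u = np.array([0.39, 0.47, 0.85])
w = np.array([0.50, 0.90, 0.20])
r = i * v**0.6 * u**0.3 * w**0.1
print(densities(r, i, v, u, w))   # ≈ [0.6, 0.3, 0.1]
```

With real data, noise makes the recovered widths only approximate, and negative components are clipped exactly as described above.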
Until now the approach was based on Ruifrok and Johnston; however, the choice of formalism makes it easier to look for further developments. Firstly, handling any number of channels is straightforward: in fact, in the general case the notation does not change. Secondly, if only two substances are of interest, Ruifrok and Johnston suggest measuring the absorbances of those two substances and then choosing the third so that it minimizes the negative cut-off. The third single-substance image is then used as a measure of error. This arbitrariness seems a bit artificial - I now introduce a method developed by Frederic Grabowski.

Two-substance deconvolution

Choosing the third substance by hand so that it minimizes data loss due to negative cut-offs introduces ambiguity into the measurement, and seems artificial. In order to fix this problem, we drop the third substance entirely and look for a, b that minimize the squared error of the approximation:

    \left\| \begin{pmatrix} \log v_1 & \log u_1 \\ \log v_2 & \log u_2 \\ \log v_3 & \log u_3 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} - \begin{pmatrix} \log(r_1/i_1) \\ \log(r_2/i_2) \\ \log(r_3/i_3) \end{pmatrix} \right\|^2,
where both the matrix and the 3-vector are given. Clean up the notation:

    \inf_{x \in \mathbb{R}^2} f(x) = \inf_{x \in \mathbb{R}^2} \| A x - y \|^2.

We want to know the x for which this infimum is attained. Such an x always exists, because A x is then the orthogonal projection of y onto A(\mathbb{R}^2). At each minimum \nabla f = 0; that is, for the first component:

    0 = \frac{\partial f}{\partial x_1} = 2 \sum_{k=1}^{3} A_{k1} (A_{k1} x_1 + A_{k2} x_2 - y_k),

    x_1 \sum_{k=1}^{3} A_{k1}^2 + x_2 \sum_{k=1}^{3} A_{k1} A_{k2} = \sum_{k=1}^{3} A_{k1} y_k.   (1.1)

Combining (1.1) with the analogous equation for the second component, we get a set of two linear equations:

    \begin{pmatrix} \sum_k A_{k1}^2 & \sum_k A_{k1} A_{k2} \\ \sum_k A_{k1} A_{k2} & \sum_k A_{k2}^2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} \sum_k A_{k1} y_k \\ \sum_k A_{k2} y_k \end{pmatrix}.

This system can be solved easily if and only if its determinant is nonzero:

    \det \begin{pmatrix} \sum_k A_{k1}^2 & \sum_k A_{k1} A_{k2} \\ \sum_k A_{k1} A_{k2} & \sum_k A_{k2}^2 \end{pmatrix} = \sum_k A_{k1}^2 \sum_k A_{k2}^2 - \Big( \sum_k A_{k1} A_{k2} \Big)^2 \neq 0.

The Cauchy-Schwarz inequality states that this determinant is 0 if and only if there is a number t for which A_{k1} = t A_{k2}. This is again the mixing independence of the basis. If \dim A(\mathbb{R}^2) < 2, then there does not exist a unique x for which the projection is attained.

We now have a method for finding the best a, b solving equation (\ref{eq:2vec}). This means that for each pixel and a basis of two given substances, we are able to calculate four pixels:

- Best reconstructed pixel: i \circ v^a \circ u^b
- Difference from the original image: r - i \circ v^a \circ u^b
- Two single-substance pixels: i \circ v^a and i \circ u^b

After processing every pixel in this manner, the reconstructed image, two single-substance images, and one remainder image (showing the error) are obtained. An example is shown in Figure \ref{fig:auto}.

Formulation of the optimization problem

Considering deconvolutions with two substances has another advantage: it gives a criterion for comparing bases. Taking

    \sum_{p \in \text{pixels}} \inf_{a,b,c \in \mathbb{R}} \| \text{LHS of eq. (1.1.10)} \|^2

does not work, because the equation is always soluble and the expression is identically zero (at least for all independent bases).
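The 2x2 normal equations derived above can be solved explicitly and checked against a library least-squares solver. A minimal NumPy sketch (two_stain_fit is an illustrative helper, not the package API; the stain matrix is built from the two example vectors used earlier):

```python
import numpy as np

def two_stain_fit(A, y):
    """Solve (A^T A) x = A^T y via the explicit 2x2 normal equations,
    where A = [log v, log u] is 3x2 and y = log(r / i)."""
    G = A.T @ A                              # Gram matrix: entries sum_k A_ki A_kj
    det = G[0, 0] * G[1, 1] - G[0, 1] ** 2
    if np.isclose(det, 0.0):
        # Cauchy-Schwarz: det == 0 iff the columns of A are proportional
        raise ValueError("basis vectors are not mixing-independent")
    b = A.T @ y
    return np.array([(G[1, 1] * b[0] - G[0, 1] * b[1]) / det,
                     (G[0, 0] * b[1] - G[0, 1] * b[0]) / det])

A = np.log(np.array([[0.91, 0.39],
                     [0.38, 0.47],
                     [0.71, 0.85]]))
y = A @ np.array([0.4, 0.7]) + np.array([0.01, -0.02, 0.015])  # noisy pixel

x = two_stain_fit(A, y)
x_ref, *_ = np.linalg.lstsq(A, y, rcond=None)  # library least squares
assert np.allclose(x, x_ref)
```

The assertion confirms that the hand-derived normal equations and np.linalg.lstsq pick the same orthogonal projection.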
Decreasing the number of degrees of freedom (that is, the number of substances to match) resolves this difficulty:

    \begin{pmatrix} \log v_1 & \log u_1 \\ \log v_2 & \log u_2 \\ \log v_3 & \log u_3 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} - \begin{pmatrix} \log(r_1/i_1) \\ \log(r_2/i_2) \\ \log(r_3/i_3) \end{pmatrix} \neq 0,

so we instead minimize

    \sum_{p \in \text{pixels}} \inf_{a,b \in \mathbb{R}} \left\| \begin{pmatrix} \log v_1 & \log u_1 \\ \log v_2 & \log u_2 \\ \log v_3 & \log u_3 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} - \begin{pmatrix} \log(r_1/i_1) \\ \log(r_2/i_2) \\ \log(r_3/i_3) \end{pmatrix} \right\|^2.   (1.2)
To solve this optimization problem, first clean up the notation. Let:

    A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}, \qquad y(p) = \begin{pmatrix} y_1(p) \\ y_2(p) \\ y_3(p) \end{pmatrix} = \begin{pmatrix} \log(r_1/i_1) \\ \log(r_2/i_2) \\ \log(r_3/i_3) \end{pmatrix}.

Given the y(p), the problem is to find a 3 \times 2 matrix A that minimizes the expression:

    f(A) = \sum_{p \in \text{pixels}} \inf_{x \in \mathbb{R}^2} \| A x - y(p) \|^2.

Solving the optimization problem

For any A in the equation above:

    \inf_{x \in \mathbb{R}^2} \| A x - y(p) \|^2 = d(y(p), A(\mathbb{R}^2))^2,

hence we want to minimize the mean squared distance of the points y(p) from the image space of A. There are two cases: either \dim A(\mathbb{R}^2) = 2, or it is strictly less than 2. In the second case we can always choose a matrix A such that the previous image is a subspace of the new image, and then the distances can only get smaller. It thus suffices to find the best two-dimensional subspace. Every such subspace has a unit normal vector, which we choose so that its third component is non-negative (this convention is arbitrary and does not matter). We can now rewrite f as:

    f(A) = g(n) = \sum_{p \in \text{pixels}} (n \cdot y(p))^2.

Any n minimizing g determines a class of matrices minimizing f: precisely those whose image is perpendicular to n. We only need to consider n with \|n\| = 1. Because the set of such n is compact in \mathbb{R}^3, the minimum exists and we can apply the method of Lagrange multipliers:

    \nabla \big( g(n) - \lambda \|n\|^2 \big) = 0,

which, after expanding and rearranging, becomes

    \sum_{p \in \text{pixels}} (y(p) \cdot n) \, y(p) = \lambda n.

The left-hand side is a linear operator from \mathbb{R}^3 to \mathbb{R}^3 applied to n, so this is just the eigenvalue equation for that operator. Moreover, multiplying both sides by n, we see that \lambda = g(n). The eigenvector with the smallest eigenvalue is therefore the n minimizing g(n), and the corresponding eigenvalue is the value g(n).

Computing the deconvolution

Rewrite the left-hand side of the eigenvalue equation using index notation:

    \sum_{p \in \text{pixels}} \sum_{i=1}^{3} y_i(p) y_j(p) n_i
is the j-th component of the resulting vector. Hence the matrix of the linear transformation is:

    (Y)_{ij} = \sum_{p \in \text{pixels}} y_i(p) y_j(p).

Given an input image, first calculate Y and find its eigenvalue decomposition. Pick the eigenvector n with the smallest eigenvalue. Choosing an A such that n is perpendicular to A's image is equivalent to choosing a basis with both elements perpendicular to n.

To have any preference among such bases, let us return to the physical interpretation. Naively, all these bases allow us to mix the same set of colors, but not for all bases will this mixing be physically meaningful. Consider the following example: a basis consisting of two substances, one absorbing only red and the other only blue, spans the same space as a basis of one substance absorbing both red and blue and another absorbing only blue. However, only in the first basis is it possible to construct a color with only the red channel absorbed. This happens because we cannot physically have negative widths of substances.

It seems advantageous to choose the basis that allows us to mix the widest range of colors physically. It turns out that this choice is not always optimal; for now, we stick with the maximal physical color range. The basis of our choosing is the one for which both vectors are non-negative (so that the resulting substances absorb light, and do not amplify it) and have the biggest angle between them. This determines them uniquely, up to a rearrangement.
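The eigenvector computation described above can be sketched with NumPy (best_plane_normal is an illustrative helper, not the package API):

```python
import numpy as np

def best_plane_normal(ys):
    """ys: (N, 3) array of optical-density vectors y(p).
    Returns the unit normal of the best-fit plane through the origin:
    the eigenvector of Y = sum_p y(p) y(p)^T with the smallest eigenvalue."""
    Y = ys.T @ ys                         # 3x3 symmetric matrix (Y)_ij
    eigvals, eigvecs = np.linalg.eigh(Y)  # eigh: real, eigenvalues ascending
    n = eigvecs[:, 0]                     # eigenvector of smallest eigenvalue
    return n if n[2] >= 0 else -n         # sign convention: third component >= 0

# Points lying exactly in the z = 0 plane have normal (0, 0, 1):
ys = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [1.0, 1.0, 0.0],
               [2.0, -1.0, 0.0]])
print(best_plane_normal(ys))
```

Any 3x2 matrix A whose two columns are perpendicular to this normal (and, per the preference above, non-negative with maximal mutual angle) then serves as the completed basis.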
CHAPTER 2

Indices and tables

- genindex
- modindex
- search
Bibliography

[RJ] Research paper by Ruifrok and Johnston
[IJ] ImageJ webpage
[N1] Laboratory of Neuroinformatics webpage
[GL] Prof. Gabriel Landini's webpage
[N2]
[FG]
[PC]
[LB] A modern analysis of this law can be found in "Employing Theories Far beyond Their Limits - The Case of the (Boguer-) Beer-Lambert Law" by Mayerhoefer et al.
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More information2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian
FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian
More informationMATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL
MATH 3 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MAIN TOPICS FOR THE FINAL EXAM:. Vectors. Dot product. Cross product. Geometric applications. 2. Row reduction. Null space, column space, row space, left
More informationCS281 Section 4: Factor Analysis and PCA
CS81 Section 4: Factor Analysis and PCA Scott Linderman At this point we have seen a variety of machine learning models, with a particular emphasis on models for supervised learning. In particular, we
More informationENGINEERING MATH 1 Fall 2009 VECTOR SPACES
ENGINEERING MATH 1 Fall 2009 VECTOR SPACES A vector space, more specifically, a real vector space (as opposed to a complex one or some even stranger ones) is any set that is closed under an operation of
More informationCSE 554 Lecture 7: Alignment
CSE 554 Lecture 7: Alignment Fall 2012 CSE554 Alignment Slide 1 Review Fairing (smoothing) Relocating vertices to achieve a smoother appearance Method: centroid averaging Simplification Reducing vertex
More informationChapter 2: Vector Geometry
Chapter 2: Vector Geometry Daniel Chan UNSW Semester 1 2018 Daniel Chan (UNSW) Chapter 2: Vector Geometry Semester 1 2018 1 / 32 Goals of this chapter In this chapter, we will answer the following geometric
More informationMTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education
MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life
More informationLinear Algebra Review. Fei-Fei Li
Linear Algebra Review Fei-Fei Li 1 / 51 Vectors Vectors and matrices are just collections of ordered numbers that represent something: movements in space, scaling factors, pixel brightnesses, etc. A vector
More informationCHAPTER 6. Representations of compact groups
CHAPTER 6 Representations of compact groups Throughout this chapter, denotes a compact group. 6.1. Examples of compact groups A standard theorem in elementary analysis says that a subset of C m (m a positive
More informationLinear algebra I Homework #1 due Thursday, Oct Show that the diagonals of a square are orthogonal to one another.
Homework # due Thursday, Oct. 0. Show that the diagonals of a square are orthogonal to one another. Hint: Place the vertices of the square along the axes and then introduce coordinates. 2. Find the equation
More informationLecture 12 : Graph Laplacians and Cheeger s Inequality
CPS290: Algorithmic Foundations of Data Science March 7, 2017 Lecture 12 : Graph Laplacians and Cheeger s Inequality Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Graph Laplacian Maybe the most beautiful
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationExtreme Values and Positive/ Negative Definite Matrix Conditions
Extreme Values and Positive/ Negative Definite Matrix Conditions James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 8, 016 Outline 1
More informationNetwork Optimization: Notes and Exercises
SPRING 2016 1 Network Optimization: Notes and Exercises Michael J. Neely University of Southern California http://www-bcf.usc.edu/ mjneely Abstract These notes provide a tutorial treatment of topics of
More informationMath 118, Fall 2014 Final Exam
Math 8, Fall 4 Final Exam True or false Please circle your choice; no explanation is necessary True There is a linear transformation T such that T e ) = e and T e ) = e Solution Since T is linear, if T
More informationWhat is Image Deblurring?
What is Image Deblurring? When we use a camera, we want the recorded image to be a faithful representation of the scene that we see but every image is more or less blurry, depending on the circumstances.
More informationPCA and LDA. Man-Wai MAK
PCA and LDA Man-Wai MAK Dept. of Electronic and Information Engineering, The Hong Kong Polytechnic University enmwmak@polyu.edu.hk http://www.eie.polyu.edu.hk/ mwmak References: S.J.D. Prince,Computer
More informationMatrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =
30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can
More informationFactor Analysis (FA) Non-negative Matrix Factorization (NMF) CSE Artificial Intelligence Grad Project Dr. Debasis Mitra
Factor Analysis (FA) Non-negative Matrix Factorization (NMF) CSE 5290 - Artificial Intelligence Grad Project Dr. Debasis Mitra Group 6 Taher Patanwala Zubin Kadva Factor Analysis (FA) 1. Introduction Factor
More informationNONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction
NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques
More information6 The Fourier transform
6 The Fourier transform In this presentation we assume that the reader is already familiar with the Fourier transform. This means that we will not make a complete overview of its properties and applications.
More informationCLASS NOTES Computational Methods for Engineering Applications I Spring 2015
CLASS NOTES Computational Methods for Engineering Applications I Spring 2015 Petros Koumoutsakos Gerardo Tauriello (Last update: July 27, 2015) IMPORTANT DISCLAIMERS 1. REFERENCES: Much of the material
More informationOctober 25, 2013 INNER PRODUCT SPACES
October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal
More informationChapter 7. Extremal Problems. 7.1 Extrema and Local Extrema
Chapter 7 Extremal Problems No matter in theoretical context or in applications many problems can be formulated as problems of finding the maximum or minimum of a function. Whenever this is the case, advanced
More informationDesigning Information Devices and Systems I Fall 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way
EECS 16A Designing Information Devices and Systems I Fall 018 Lecture Notes Note 1 1.1 Introduction to Linear Algebra the EECS Way In this note, we will teach the basics of linear algebra and relate it
More information7 Principal Component Analysis
7 Principal Component Analysis This topic will build a series of techniques to deal with high-dimensional data. Unlike regression problems, our goal is not to predict a value (the y-coordinate), it is
More informationMath 396. An application of Gram-Schmidt to prove connectedness
Math 396. An application of Gram-Schmidt to prove connectedness 1. Motivation and background Let V be an n-dimensional vector space over R, and define GL(V ) to be the set of invertible linear maps V V
More informationSingular Value Decomposition
Chapter 6 Singular Value Decomposition In Chapter 5, we derived a number of algorithms for computing the eigenvalues and eigenvectors of matrices A R n n. Having developed this machinery, we complete our
More informationCreative Data Mining
Creative Data Mining Using ML algorithms in python Artem Chirkin Dr. Daniel Zünd Danielle Griego Lecture 7 0.04.207 /7 What we will cover today Outline Getting started Explore dataset content Inspect visually
More informationTHE SINGULAR VALUE DECOMPOSITION AND LOW RANK APPROXIMATION
THE SINGULAR VALUE DECOMPOSITION AND LOW RANK APPROXIMATION MANTAS MAŽEIKA Abstract. The purpose of this paper is to present a largely self-contained proof of the singular value decomposition (SVD), and
More informationLAGRANGE MULTIPLIERS
LAGRANGE MULTIPLIERS MATH 195, SECTION 59 (VIPUL NAIK) Corresponding material in the book: Section 14.8 What students should definitely get: The Lagrange multiplier condition (one constraint, two constraints
More informationPhysics ; CS 4812 Problem Set 4
Physics 4481-7681; CS 4812 Problem Set 4 Six problems (six pages), all short, covers lectures 11 15, due in class 25 Oct 2018 Problem 1: 1-qubit state tomography Consider a 1-qubit state ψ cos θ 2 0 +
More information235 Final exam review questions
5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an
More informationExercise Sheet 1.
Exercise Sheet 1 You can download my lecture and exercise sheets at the address http://sami.hust.edu.vn/giang-vien/?name=huynt 1) Let A, B be sets. What does the statement "A is not a subset of B " mean?
More informationIr O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )
Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O
More informationIntroduction to SVD and Applications
Introduction to SVD and Applications Eric Kostelich and Dave Kuhl MSRI Climate Change Summer School July 18, 2008 Introduction The goal of this exercise is to familiarize you with the basics of the singular
More informationMachine Learning 2nd Edition
INTRODUCTION TO Lecture Slides for Machine Learning 2nd Edition ETHEM ALPAYDIN, modified by Leonardo Bobadilla and some parts from http://www.cs.tau.ac.il/~apartzin/machinelearning/ The MIT Press, 2010
More information2. Review of Linear Algebra
2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear
More informationOptimization Theory. A Concise Introduction. Jiongmin Yong
October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization
More informationConceptual Questions for Review
Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.
More information. = V c = V [x]v (5.1) c 1. c k
Chapter 5 Linear Algebra It can be argued that all of linear algebra can be understood using the four fundamental subspaces associated with a matrix Because they form the foundation on which we later work,
More informationPart 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL)
Part 3: Trust-region methods for unconstrained optimization Nick Gould (RAL) minimize x IR n f(x) MSc course on nonlinear optimization UNCONSTRAINED MINIMIZATION minimize x IR n f(x) where the objective
More informationImage Compression Using Singular Value Decomposition
Image Compression Using Singular Value Decomposition Ian Cooper and Craig Lorenc December 15, 2006 Abstract Singular value decomposition (SVD) is an effective tool for minimizing data storage and data
More informationCS168: The Modern Algorithmic Toolbox Lectures #11 and #12: Spectral Graph Theory
CS168: The Modern Algorithmic Toolbox Lectures #11 and #12: Spectral Graph Theory Tim Roughgarden & Gregory Valiant May 2, 2016 Spectral graph theory is the powerful and beautiful theory that arises from
More informationIntroduction to Algorithms
Lecture 1 Introduction to Algorithms 1.1 Overview The purpose of this lecture is to give a brief overview of the topic of Algorithms and the kind of thinking it involves: why we focus on the subjects that
More informationTHE EIGENVALUE PROBLEM
THE EIGENVALUE PROBLEM Let A be an n n square matrix. If there is a number λ and a column vector v 0 for which Av = λv then we say λ is an eigenvalue of A and v is an associated eigenvector. Note that
More information