Application of Numerical Algebraic Geometry to Geometric Data Analysis
1 Application of Numerical Algebraic Geometry to Geometric Data Analysis

Daniel Bates¹, Brent Davis¹, Chris Peterson¹, Michael Kirby¹, Justin Marks²

¹ Colorado State University, Fort Collins, CO
² Air Force Institute of Technology, Wright-Patterson Air Force Base, OH

August 2, 2013
2 Definitions

Definition (Grassmann manifold): Let the Grassmann manifold Gr(n, p) denote the set of all p-dimensional subspaces of R^n.

Definition (Elements of Gr(n, p)): A point [M] ∈ Gr(n, p) is an equivalence class of full-rank n × p orthonormal matrices that have the same column space as M.

Definition (Principal angles): Two subspaces [X] and [Y] of R^n have p principal angles 0 ≤ θ_1 ≤ θ_2 ≤ ⋯ ≤ θ_p ≤ π/2, where p = min{dim [X], dim [Y]}. The principal angles between [X] and [Y] are the inverse cosines of the singular values of the matrix X^T Y.
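The last definition translates directly into a few lines of NumPy; here is a minimal sketch (the function name principal_angles is ours), assuming X and Y already have orthonormal columns:

```python
import numpy as np

def principal_angles(X, Y):
    """Principal angles between the column spaces [X] and [Y] in R^n.

    X, Y are assumed to have orthonormal columns; the cosines of the
    principal angles are the singular values of X^T Y (clipped to [0, 1]
    to guard against round-off before taking arccos).
    """
    s = np.linalg.svd(X.T @ Y, compute_uv=False)
    return np.arccos(np.clip(s, 0.0, 1.0))

# The line example from the next slide: x-axis vs. y-axis in Gr(2, 1).
e1 = np.array([[1.0], [0.0]])
e2 = np.array([[0.0], [1.0]])
print(principal_angles(e1, e2))   # [1.5707963...] = pi/2
```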
3 Examples

Example (Angle between lines): Consider the x-axis and y-axis in Gr(2, 1), represented by the unit vectors e_1 = (1, 0)^T and e_2 = (0, 1)^T respectively. Then e_1^T e_2 = 0, the only singular value of this 1 × 1 matrix is 0, and θ_1 = cos⁻¹(0) = π/2.
4 Examples

Example (Angle between hyperplanes in R^3): Consider two orthonormal matrices X, Y spanning planes in R^3 (matrix entries omitted in this transcription); their principal angles are θ_1 = 0 and θ_2 ≠ 0. In fact, for any two distinct [X], [Y] ∈ Gr(3, 2), θ_1 = 0, since the two subspaces intersect in a line.
5 Big Picture and Problems

Fundamental Problem in Geometric Data Analysis: Given a cluster of points {[Y_1], …, [Y_k]} in some Grassmann manifold(s), how do we assign a mean representative to the cluster using principal angles?
6 Big Picture and Problems

Problems:
1. The cluster of points might lie in Gr(n, 1) ⊔ Gr(n, 2) ⊔ ⋯ ⊔ Gr(n, n−1), a disjoint union of Grassmann manifolds of various dimensions.
2. Geometric assumptions must be made for local gradient-based methods to work properly.
3. There could be more than one mean representative, possibly even infinitely many.
7 Section 2 Problem Statement
8 Problem Statement

We address these issues using a mean representative based on the cosines of the principal angles.

Problem Statement:
- {V_1, …, V_k}: subspaces of R^n.
- Y_i: fixed n × d_i orthonormal matrices such that V_i = [Y_i].
- L: a one-dimensional subspace of R^n.
- θ(L, V_i): the principal angle between L and V_i.

Find the L that maximizes the function

F(L) = Σ_{i=1}^k cos θ(L, V_i).
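For a candidate unit vector l spanning L, this objective is cheap to evaluate: the 1 × d_i matrix l^T Y_i has a single singular value, namely its Euclidean norm, so cos θ(l, V_i) = ‖Y_i^T l‖. A minimal sketch (the helper name F is ours):

```python
import numpy as np

def F(l, Ys):
    """F(L) = sum_i cos(theta(L, V_i)) for the line L spanned by l.

    Ys is a list of n x d_i matrices with orthonormal columns; the single
    singular value of the 1 x d_i matrix l^T Y_i is its Euclidean norm.
    """
    l = l / np.linalg.norm(l)
    return sum(np.linalg.norm(Y.T @ l) for Y in Ys)
```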
9 Geometric Examples

Example 1: Consider the three standard coordinate planes in R^3. The special structure of the coordinate axes produces four distinct lines L_1, …, L_4 that all attain the same optimal value.
10 Geometric Examples

Example 2: Consider three randomly chosen planes in R^3. The generic behavior of three hyperplanes produces a unique line L attaining the optimal value.
11 Geometric Examples

Example 3: Consider the xy-plane and the z-axis in R^3. The optimal line L is non-unique: there is an entire cone of optimal lines L.
12 Section 3 Reformulating the Problem
13 Step 1

Main Idea: The problem can be reformulated as a completely algebraic optimization problem. We break the reformulation into two main parts.

Step 1:
- L is the span of some unit-length vector l.
- cos θ(L, V_i) is the singular value of the 1 × d_i matrix l^T Y_i.
- ‖Y_i^T l‖ is the length of the projection of l onto V_i.
- proj_{V_i} l is the vector in V_i that makes the smallest angle with l.
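These identities are easy to sanity-check numerically; a small sketch on random data (all names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
Y, _ = np.linalg.qr(rng.standard_normal((5, 2)))   # orthonormal basis of a plane V in R^5
l = rng.standard_normal(5)
l /= np.linalg.norm(l)                             # unit vector spanning L

proj = Y @ (Y.T @ l)                               # projection of l onto V
sigma = np.linalg.svd(l[None, :] @ Y, compute_uv=False)[0]

# cos(theta(l, V)) = singular value of l^T Y = length of proj_V(l)
assert np.isclose(sigma, np.linalg.norm(proj))
```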
14 Step 1

Step 1 (continued): Therefore,

max_L Σ_{i=1}^k cos θ(L, V_i)
subject to L ⊆ R^n a one-dimensional vector space

is equivalent to

max_l Σ_{i=1}^k ‖proj_{V_i} l‖
subject to l^T l = 1.
15 Step 1

Step 1 (continued): Since ‖proj_{V_i} l‖ = cos θ(l, V_i) = l^T v_i for some unit-length vector v_i ∈ V_i, the problem

max_l Σ_{i=1}^k ‖proj_{V_i} l‖
subject to l^T l = 1

is equivalent to finding an l solving the optimization problem

max_{l, v_i} Σ_{i=1}^k l^T v_i
subject to l^T l = 1, v_i^T v_i = 1, and v_i ∈ V_i.
16 Step 2

We need to introduce numerical data for the V_i, given in the form of orthonormal matrices Y_i such that V_i = [Y_i].

Step 2 (introduce the data Y_i):
- v_i ∈ [Y_i] implies v_i = Y_i α_i for some coefficient vector α_i.
- v_i^T v_i = α_i^T Y_i^T Y_i α_i = α_i^T α_i = 1, since Y_i is orthonormal.

After factoring out l^T, we now have

max_{l, α_i} l^T Σ_{i=1}^k Y_i α_i
subject to l^T l = 1, α_i^T α_i = 1 for 1 ≤ i ≤ k.
17 Step 2

Key Fact: The unit-length vector l can be chosen independently of the choice of the α_i. To maximize, l should point in the direction of Σ_{i=1}^k Y_i α_i.

Reformulation 2: Find the α_i that solve

max_{α_i} ‖Σ_{i=1}^k Y_i α_i‖²
subject to α_i^T α_i = 1,

then set v = Σ_{i=1}^k Y_i α_i and recover l = v/‖v‖, which produces L.

We call L the max-length-vector-line of best fit to the collection of subspaces {V_1, …, V_k}.
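Given a candidate (α_1, …, α_k), the reformulated objective and the recovery of l are each one line; a sketch (function names ours):

```python
import numpy as np

def objective(alphas, Ys):
    """The reformulated objective ||sum_i Y_i alpha_i||^2."""
    v = sum(Y @ a for Y, a in zip(Ys, alphas))
    return float(v @ v)

def recover_line(alphas, Ys):
    """Recover the unit vector l spanning the max-length-vector-line L."""
    v = sum(Y @ a for Y, a in zip(Ys, alphas))
    return v / np.linalg.norm(v)
```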
18 Geometry of the max-length-vector-line of best fit

Geometry behind the max-length-vector-line of best fit:
- The red vectors rotate within their subspaces.
- The black vector represents the max-length-vector-line of best fit.
- Maximize the length of the black vector(s).
19 Section 4 Karush-Kuhn-Tucker
20 KKT conditions

- The constraints α_i^T α_i = 1 define a compact set.
- ‖Σ_{i=1}^k Y_i α_i‖² is continuous, so a maximum is attained.
- Local solutions are found at Karush-Kuhn-Tucker (KKT) points.

Set α^T = (α_1^T, …, α_k^T). The KKT conditions are

∇_α ‖Σ_{i=1}^k Y_i α_i‖² + Σ_{i=1}^k λ_i ∇_α (α_i^T α_i − 1) = 0
α_i^T α_i = 1.
21 KKT equations

The KKT polynomial equations can be written compactly as

([Y_1 ⋯ Y_k]^T [Y_1 ⋯ Y_k] + diag(λ_1^{d_1}, …, λ_k^{d_k})) α = 0
α_i^T α_i = 1

where
- [Y_1 ⋯ Y_k]^T [Y_1 ⋯ Y_k] denotes the block outer product of the matrices Y_i;
- diag(λ_1^{d_1}, …, λ_k^{d_k}) is a diagonal matrix with diagonal elements λ_1, …, λ_k, each λ_i repeated d_i times.

Geometrically, we are almost solving a symmetric eigenvalue problem on S^{d_1−1} × ⋯ × S^{d_k−1}.
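A candidate solution can be checked against this system directly; here is a sketch of the residual computation (names ours), with the block matrix built by stacking the Y_i:

```python
import numpy as np

def kkt_residual(alphas, lambdas, Ys):
    """Residuals of the KKT system for a candidate (alpha, lambda).

    Stationarity: ([Y_1 ... Y_k]^T [Y_1 ... Y_k] + diag(lambdas)) alpha = 0,
    where lambda_i is repeated d_i times on the diagonal.
    Feasibility:  alpha_i^T alpha_i = 1 for each i.
    Small residuals indicate a KKT point.
    """
    Y = np.hstack(Ys)                                 # n x (d_1 + ... + d_k)
    alpha = np.concatenate(alphas)
    lam = np.concatenate([np.full(Yi.shape[1], li) for Yi, li in zip(Ys, lambdas)])
    stationarity = (Y.T @ Y + np.diag(lam)) @ alpha   # should be ~0
    feasibility = np.array([a @ a - 1.0 for a in alphas])  # should be ~0
    return stationarity, feasibility
```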
22 Section 5 Implementation
23 Main Algorithm

To solve max_L Σ_{i=1}^k cos θ(L, V_i):
1. Find all solutions to
   ([Y_1 ⋯ Y_k]^T [Y_1 ⋯ Y_k] + diag(λ_1^{d_1}, …, λ_k^{d_k})) α = 0, α_i^T α_i = 1.
2. Using the α_i, compute v = Σ_{i=1}^k Y_i α_i.
3. Find the largest v (in norm) and set l = v/‖v‖ to recover L.
24 Main Algorithm

Implementing the Main Algorithm:
1. Use Bertini as a black box to approximate all solutions and extract the real ones.
2. Standard MATLAB routines (see the sketch below).
3. Standard MATLAB routines.
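The slides use Bertini plus MATLAB; steps 2 and 3 are simple post-processing, sketched here in NumPy instead (assuming the real solutions extracted from Bertini's output are available as lists of α_i blocks; the function name best_line is ours):

```python
import numpy as np

def best_line(real_solutions, Ys):
    """Steps 2-3 of the main algorithm.

    real_solutions: list of candidates, each a list [alpha_1, ..., alpha_k].
    Returns the unit vector l spanning L and the length of the best v.
    """
    best_v, best_len = None, -np.inf
    for alphas in real_solutions:
        v = sum(Y @ a for Y, a in zip(Ys, alphas))
        if np.linalg.norm(v) > best_len:
            best_v, best_len = v, np.linalg.norm(v)
    return best_v / best_len, best_len
```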
26 Example

Consider five subspaces {[Y_1], …, [Y_5]} of R^10, of dimensions 4, 3, 3, 2, 2, respectively, given by orthonormal matrices Y_1, …, Y_5 (matrix entries omitted in this transcription).
27 Example

Using the formulation:

([Y_1 Y_2 Y_3 Y_4 Y_5]^T [Y_1 Y_2 Y_3 Y_4 Y_5] + diag(λ_1^4, λ_2^3, λ_3^3, λ_4^2, λ_5^2)) α = 0
α_11² + α_12² + α_13² + α_14² = 1
α_21² + α_22² + α_23² = 1
α_31² + α_32² + α_33² = 1
α_41² + α_42² = 1
α_51² + α_52² = 1
28 Example

Results: Bertini found 172 real solutions in terms of α^T = (α_1^T, α_2^T, …, α_5^T). For these particular matrices, l = ± (vector entries omitted), produced by a vector v (length omitted).
29 Example

A parallel implementation of Bertini was run on Xeon 5650 compute nodes under the CentOS 6.4 operating system. Using the regeneration routine, Bertini tracked 14,866 paths in approximately 52 seconds. Of the 14,866 paths, 3552 were successful, and of those, 172 produced real solutions. Post-processing of the data to compute v and l was done in serial in negligible time.
30 Future Work

A few notes and next steps:
- The ambient dimension n can be made arbitrarily large.
- Find applications outside of mathematics, such as image classification problems.
- Relax to a symmetric eigenvalue problem and use parameter homotopies for faster solving.
- Find an entire flag of max-length-vector-lines of best fit.
- Compute new mean representatives similar to this method.

Thank you for listening!
More information