CSUEB Linear Algebra, Massoud Malek


Inner Product and Normed Space

In all that follows, the n×n identity matrix is denoted by I_n, the n×n zero matrix by Z_n, and the zero vector by θ_n.

An inner product is a generalization of the dot product. In a vector space, it is a way to multiply vectors together, with the result of this multiplication being a scalar. More precisely, for a real vector space, an inner product ⟨·,·⟩ satisfies the following properties. Let u, v, and w be vectors and α be a scalar; then:

Linearity: ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ and ⟨αv, w⟩ = α⟨v, w⟩.
Symmetry: ⟨v, w⟩ = ⟨w, v⟩.
Positive-definiteness: ⟨v, v⟩ ≥ 0, with equality if and only if v = θ.

A vector space together with an inner product on it is called an inner product space. When given a complex vector space, the symmetry property above is usually replaced by

Conjugate symmetry: ⟨v, w⟩ = conj(⟨w, v⟩), where conj denotes complex conjugation.

With this property, the inner product is called a Hermitian inner product, and a complex vector space with a Hermitian inner product is called a Hermitian inner product space. An inner product space is a vector space with the additional inner product structure.

An inner product naturally induces an associated norm, ‖u‖ = √⟨u, u⟩. Thus an inner product space is also a normed vector space.

Examples of Inner Product Spaces

(a) The Euclidean inner product on 𝔽^n, where 𝔽 is the field of real or complex numbers, is defined by
⟨(u_1, u_2, ..., u_n), (v_1, v_2, ..., v_n)⟩ = u_1 v_1 + u_2 v_2 + ... + u_n v_n
(with each v_i replaced by its complex conjugate in the complex case).

(b) An inner product can be defined on the vector space of smooth real-valued functions on an interval [a, b] by
⟨f(x), g(x)⟩ = ∫ f(x) g(x) dx.
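As an illustration of the Euclidean inner product and its induced norm, here is a minimal pure-Python sketch (the function names are our own, not from the notes):

```python
import math

def inner(u, v):
    """Euclidean inner product <u, v> = u_1*v_1 + ... + u_n*v_n (real case)."""
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    """Norm induced by the inner product: ||u|| = sqrt(<u, u>)."""
    return math.sqrt(inner(u, u))

u, v = [1.0, 2.0, 2.0], [3.0, 0.0, 4.0]
print(inner(u, v))   # 1*3 + 2*0 + 2*4 = 11.0
print(norm(u))       # sqrt(1 + 4 + 4) = 3.0
```

Every inner product gives a norm this way, which is why the notes treat an inner product space as automatically being a normed space.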
(c) An inner product can be defined on the set of real polynomials by
⟨p(x), q(x)⟩ = ∫ p(x) q(x) e^x dx.

Examples of Vector Norms

Here are the three most commonly used vector norms:

(a) 1-norm: ‖v‖₁ = Σ_i |v_i|
(b) 2-norm or Euclidean norm: ‖v‖₂ = √(Σ_i |v_i|²)
(c) ∞-norm: ‖v‖∞ = max_i |v_i|

Also note that if ‖·‖ is a norm and M is any nonsingular square matrix, then v ↦ ‖Mv‖ is also a norm. The case where M is diagonal is particularly common in practice.

Note. Any Hermitian positive definite matrix H induces an inner product ⟨u, v⟩_H = u*Hv, and thus a norm ‖u‖_H = √⟨u, u⟩_H.

In MATLAB, the 1-norm, 2-norm, and ∞-norm are invoked by the statements norm(u, 1), norm(u, 2), and norm(u, inf), respectively. The 2-norm is the default in MATLAB: the statement norm(u) is interpreted as the 2-norm of u.

Theorem. If ‖·‖ is a norm induced by an inner product, then
1. ‖u‖ = 0 if and only if u = θ; otherwise ‖u‖ > 0;
2. ‖λu‖ = |λ| ‖u‖;
3. (Cauchy–Schwarz inequality) |⟨u, v⟩| ≤ ‖u‖ ‖v‖;
4. (Triangle inequality) ‖u + v‖ ≤ ‖u‖ + ‖v‖.

Orthogonality

In an inner product space V, vectors u and v are orthogonal if ⟨u, v⟩ = 0. A subset S of V is orthogonal if any two distinct vectors of S are orthogonal. A vector u is a unit vector if ‖u‖ = 1. Finally, a subset S of V is orthonormal if S is orthogonal and consists entirely of unit vectors. If v ≠ θ, then the vector u = v/‖v‖ is a unit vector; we say that v was normalized.

Note. The zero vector θ is orthogonal to every vector in V, and θ is the only vector in V that is orthogonal to itself.

Pythagorean Theorem. Suppose u and v are orthogonal vectors in V. Then
‖u + v‖² = ‖u‖² + ‖v‖².

Proof. We have
‖u + v‖² = ⟨u + v, u + v⟩ = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩ = ⟨u, u⟩ + ⟨v, v⟩ = ‖u‖² + ‖v‖²,
since ⟨u, v⟩ = ⟨v, u⟩ = 0 by orthogonality, as desired.
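The three vector norms above can be computed directly from their definitions. A pure-Python sketch (MATLAB users would call norm(u, 1), norm(u, 2), and norm(u, inf), as the notes describe):

```python
def norm1(v):
    """1-norm: sum of the absolute values of the entries."""
    return sum(abs(x) for x in v)

def norm2(v):
    """2-norm (Euclidean norm): square root of the sum of squares."""
    return sum(abs(x) ** 2 for x in v) ** 0.5

def norm_inf(v):
    """infinity-norm: largest absolute entry."""
    return max(abs(x) for x in v)

v = [3.0, -4.0, 0.0]
print(norm1(v))      # |3| + |-4| + |0| = 7.0
print(norm2(v))      # sqrt(9 + 16 + 0) = 5.0
print(norm_inf(v))   # max(3, 4, 0) = 4.0
```

Note that ‖v‖∞ ≤ ‖v‖₂ ≤ ‖v‖₁ for this (and every) vector, an instance of the norm-equivalence inequalities discussed later in the notes.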
The Cauchy–Schwarz inequality states that for all vectors u and v of an inner product space,
|⟨u, v⟩|² ≤ ⟨u, u⟩ ⟨v, v⟩,
where ⟨·,·⟩ is the inner product. Equivalently, by taking the square root of both sides and referring to the norms of the vectors, the inequality is written as
|⟨u, v⟩| ≤ ‖u‖ ‖v‖.
Moreover, the two sides are equal if and only if u and v are linearly dependent.

Proof. Let u, v be arbitrary vectors in a vector space V over 𝔽 with an inner product, where 𝔽 is the field of real or complex numbers. If ⟨u, v⟩ = 0, the theorem holds trivially. If not, then u ≠ θ and v ≠ θ. For any λ ∈ 𝔽, we have
0 ≤ ‖u − λv‖² = ⟨u − λv, u − λv⟩ = ⟨u, u⟩ − λ*⟨u, v⟩ − λ⟨v, u⟩ + λλ*⟨v, v⟩,
where λ* denotes the complex conjugate of λ. Choose λ = ⟨u, v⟩/⟨v, v⟩; then λ*⟨u, v⟩ = λ⟨v, u⟩ = λλ*⟨v, v⟩ = |⟨u, v⟩|²/⟨v, v⟩, so
0 ≤ ⟨u, u⟩ − |⟨u, v⟩|²/⟨v, v⟩ = ‖u‖² − |⟨u, v⟩|²/‖v‖².
It follows that |⟨u, v⟩|² ≤ ⟨u, u⟩ ⟨v, v⟩, or |⟨u, v⟩| ≤ ‖u‖ ‖v‖.

Examples of the Cauchy–Schwarz Inequality

(a) If u_1, u_2, ..., u_n and v_1, v_2, ..., v_n are real numbers, then
(u_1 v_1 + u_2 v_2 + ... + u_n v_n)² ≤ (u_1² + u_2² + ... + u_n²)(v_1² + v_2² + ... + v_n²).

(b) If f(x) and g(x) are smooth real-valued functions, then
(∫ f(x) g(x) dx)² ≤ (∫ f(x)² dx)(∫ g(x)² dx).

Triangle Inequality. For any vectors u and v,
‖u + v‖ ≤ ‖u‖ + ‖v‖.

Proof. We have
‖u + v‖² = ⟨u + v, u + v⟩ = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
         = ‖u‖² + 2 Re⟨u, v⟩ + ‖v‖²
         ≤ ‖u‖² + 2 |⟨u, v⟩| + ‖v‖²
         ≤ ‖u‖² + 2 ‖u‖ ‖v‖ + ‖v‖²
         = (‖u‖ + ‖v‖)².
Taking square roots gives the result.
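A quick numerical sanity check of the Cauchy–Schwarz and triangle inequalities on random real vectors (an illustrative pure-Python sketch, not part of the original notes):

```python
import math
import random

def inner(u, v):
    """Euclidean inner product on real vectors."""
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    """Norm induced by the inner product."""
    return math.sqrt(inner(u, u))

random.seed(0)
for _ in range(1000):
    u = [random.uniform(-1, 1) for _ in range(5)]
    v = [random.uniform(-1, 1) for _ in range(5)]
    # Cauchy-Schwarz: |<u, v>| <= ||u|| ||v||
    assert abs(inner(u, v)) <= norm(u) * norm(v) + 1e-12
    # Triangle inequality: ||u + v|| <= ||u|| + ||v||
    w = [a + b for a, b in zip(u, v)]
    assert norm(w) <= norm(u) + norm(v) + 1e-12
print("both inequalities hold on 1000 random pairs")
```

The small tolerance 1e-12 only guards against floating-point rounding; the inequalities themselves are exact.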
Gram–Schmidt Process

It is often simpler to work in an orthogonal basis. The Gram–Schmidt process is a method for obtaining orthonormal vectors from a linearly independent set of vectors in an inner product space, most commonly the Euclidean space ℝⁿ. It takes a linearly independent set S = {v_1, v_2, v_3, ..., v_k} and generates an orthogonal set S_G = {u_1, u_2, u_3, ..., u_k} and an orthonormal set S_N = {e_1, e_2, e_3, ..., e_k} that spans the same subspace of ℝⁿ as the set S.

We define the projection operator by
proj_u(v) = (⟨v, u⟩/⟨u, u⟩) u,
where ⟨v, u⟩ denotes the inner product of the vectors v and u. This operator projects the vector v orthogonally onto the line spanned by the vector u. The Gram–Schmidt process then works as follows:

u_1 = v_1,                                                         e_1 = u_1/‖u_1‖
u_2 = v_2 − proj_{u_1}(v_2),                                       e_2 = u_2/‖u_2‖
u_3 = v_3 − proj_{u_1}(v_3) − proj_{u_2}(v_3),                     e_3 = u_3/‖u_3‖
u_4 = v_4 − proj_{u_1}(v_4) − proj_{u_2}(v_4) − proj_{u_3}(v_4),   e_4 = u_4/‖u_4‖
...
u_k = v_k − Σ_{j=1}^{k−1} proj_{u_j}(v_k),                         e_k = u_k/‖u_k‖

The sequence {u_1, u_2, ..., u_k} is the required system of orthogonal vectors, and the normalized vectors {e_1, e_2, ..., e_k} form an orthonormal set. The calculation of the sequence u_1, u_2, ..., u_k is known as Gram–Schmidt orthogonalization, while the calculation of e_1, e_2, ..., e_k is known as Gram–Schmidt orthonormalization, as the vectors are normalized.

Example. Consider a basis B = {v_1, v_2, v_3} of ℝ³. Use Gram–Schmidt to find the orthogonal basis B_G = {u_1, u_2, u_3} and the orthonormal basis B_N = {e_1, e_2, e_3} associated with B:
u_1 = v_1, e_1 = u_1/‖u_1‖;
u_2 = v_2 − (⟨v_2, u_1⟩/⟨u_1, u_1⟩) u_1, e_2 = u_2/‖u_2‖;
u_3 = v_3 − (⟨v_3, u_1⟩/⟨u_1, u_1⟩) u_1 − (⟨v_3, u_2⟩/⟨u_2, u_2⟩) u_2, e_3 = u_3/‖u_3‖.
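The process translates almost line for line into code. A minimal pure-Python sketch (the basis below is illustrative, not the numerical example from the notes):

```python
import math

def inner(u, v):
    """Euclidean inner product on real vectors."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Return (orthogonal, orthonormal) bases from a linearly independent set."""
    ortho, ortho_normal = [], []
    for v in vectors:
        u = list(v)
        for w in ortho:
            c = inner(v, w) / inner(w, w)          # coefficient of proj_w(v)
            u = [ui - c * wi for ui, wi in zip(u, w)]
        ortho.append(u)
        n = math.sqrt(inner(u, u))                 # ||u||, nonzero by independence
        ortho_normal.append([ui / n for ui in u])
    return ortho, ortho_normal

B = [[1.0, 1.0, 1.0], [1.0, 1.0, 0.0], [1.0, 0.0, 0.0]]
U, E = gram_schmidt(B)
# every pair of distinct vectors in U is orthogonal, and each e in E is a unit vector
```

In exact arithmetic the u_j are pairwise orthogonal; in floating point the inner products of distinct u_j come out on the order of machine epsilon rather than exactly zero.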
Matrix Norms

A matrix norm is an extension of the notion of a vector norm to matrices: it is a vector norm on 𝔽^{m×n}. That is, if ‖A‖ denotes the norm of the matrix A, then
1. ‖A‖ ≥ 0;
2. ‖A‖ = 0 if and only if A is the zero matrix;
3. ‖αA‖ = |α| ‖A‖ for all α ∈ 𝔽 and A ∈ 𝔽^{m×n};
4. ‖A + B‖ ≤ ‖A‖ + ‖B‖ for all A, B ∈ 𝔽^{m×n}.

Additionally, in the case of square matrices (thus, m = n), some (but not all) matrix norms satisfy the following condition, which is related to the fact that matrices are more than just vectors:
5. ‖AB‖ ≤ ‖A‖ ‖B‖ for all matrices A and B in 𝔽^{n×n}.
A matrix norm that satisfies this additional property is called a submultiplicative norm.

Operator Norm

If vector norms on 𝔽^m and 𝔽^n are given (𝔽 is the field of real or complex numbers), then one defines the corresponding operator norm, or induced norm, on the space of m-by-n matrices as the following supremum:
‖A‖ = sup{‖Ax‖ : x ∈ 𝔽^n, ‖x‖ = 1} = sup{‖Ax‖/‖x‖ : x ∈ 𝔽^n, x ≠ θ}.

The operator norm corresponding to the p-norm for vectors is
‖A‖_p = sup_{x ≠ 0} ‖Ax‖_p/‖x‖_p.

Note. All operator norms are submultiplicative.

Here are some example matrix norms:

Column norm or 1-norm: ‖A‖₁ = max_{1≤j≤n} Σ_{i=1}^{m} |a_ij|, which is simply the maximum absolute column sum of the matrix.

Row norm or infinity-norm: ‖A‖∞ = max_{1≤i≤m} Σ_{j=1}^{n} |a_ij|, which is simply the maximum absolute row sum of the matrix.

The 2-norm induced by the Euclidean vector norm is ‖A‖₂ = √λ_max, where λ_max is the largest eigenvalue of the matrix A*A.

Although the Frobenius norm, defined as
‖A‖_F = √(Σ_{i=1}^{m} Σ_{j=1}^{n} |a_ij|²) = √(trace(A*A)),
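The column norm, row norm, and Frobenius norm can be computed straight from their formulas. An illustrative pure-Python sketch (matrices as lists of rows; the names are our own):

```python
def col_norm(A):
    """1-norm of a matrix: maximum absolute column sum."""
    m, n = len(A), len(A[0])
    return max(sum(abs(A[i][j]) for i in range(m)) for j in range(n))

def row_norm(A):
    """infinity-norm of a matrix: maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in A)

def frobenius(A):
    """Frobenius norm: square root of the sum of squared entries."""
    return sum(x * x for row in A for x in row) ** 0.5

A = [[1.0, -2.0], [3.0, 4.0]]
print(col_norm(A))    # max(|1|+|3|, |-2|+|4|) = 6.0
print(row_norm(A))    # max(|1|+|-2|, |3|+|4|) = 7.0
print(frobenius(A))   # sqrt(1 + 4 + 9 + 16) = sqrt(30)
```

The 2-norm is deliberately left out here: unlike the others it requires an eigenvalue (or singular value) computation rather than a direct entrywise formula.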
is not an induced norm, it is a submultiplicative norm. In MATLAB, the Frobenius norm is invoked by the statement norm(A, 'fro').

The max norm ‖A‖_max = max |a_ij| is neither an induced norm nor a submultiplicative norm.

The largest modulus of an eigenvalue of a square matrix A, denoted by ρ(A) and called the spectral radius of A, never exceeds its operator norm:
ρ(A) ≤ ‖A‖_p.

Given two normed vector spaces V and W (over the same base field, either the real numbers or the complex numbers), a linear map A : V → W is continuous if and only if there exists a real number λ such that
‖Av‖ ≤ λ ‖v‖ for all v ∈ V.

Condition Number

The condition number associated with the linear equation Ax = b gives a bound on how inaccurate the solution x will be after approximation. Note that this is before the effects of round-off error are taken into account; conditioning is a property of the matrix, not of the algorithm or the floating-point accuracy of the computer used to solve the corresponding system.

Let x_0 be the exact solution of the linear equation Ax = b and let x be the calculated solution, so the error in the solution is e = x − x_0, and the relative error in the solution is
‖e‖/‖x_0‖ = ‖x − x_0‖/‖x_0‖.

Generally, we cannot evaluate the error or the relative error (we do not know x_0), so a possible way to check the accuracy is to test the calculated solution back in the system Ax = b: the residual is
r = A x_0 − A x = b − A x,
and the relative residual is
‖A x_0 − A x‖/‖A x_0‖ = ‖b − A x‖/‖b‖ = ‖r‖/‖b‖.

A matrix A is said to be ill-conditioned if relatively small changes in the input (in the matrix A) can cause large changes in the output (the solution of Ax = b); i.e., the solution is not very accurate if the input is rounded. Otherwise it is well-conditioned.

The condition number of an invertible matrix A is defined to be
κ(A) = ‖A‖ ‖A⁻¹‖.
This quantity is always greater than or equal to 1. The condition number is defined more precisely to be the maximum ratio of the relative error in x divided by the relative error in b.

If A is nonsingular, then by knowing the eigenvalues of A*A (λ_1 ≥ λ_2 ≥ ... ≥ λ_n > 0), we can compute the condition number using the 2-norm:
κ₂(A) = ‖A‖₂ ‖A⁻¹‖₂ = √(λ_1/λ_n).
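As a small illustration of conditioning (pure Python, with a hypothetical nearly singular 2×2 matrix), the condition number in the 1-norm, κ₁(A) = ‖A‖₁ ‖A⁻¹‖₁, can be computed via an explicit 2×2 inverse:

```python
def norm1(A):
    """Induced 1-norm of a 2x2 matrix: maximum absolute column sum."""
    return max(sum(abs(A[i][j]) for i in range(2)) for j in range(2))

def inv2(A):
    """Inverse of a nonsingular 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def cond1(A):
    """Condition number kappa_1(A) = ||A||_1 * ||A^-1||_1."""
    return norm1(A) * norm1(inv2(A))

I = [[1.0, 0.0], [0.0, 1.0]]
print(cond1(I))                    # 1.0: perfectly conditioned

A = [[1.0, 1.0], [1.0, 1.0001]]    # nearly singular, hence ill-conditioned
print(cond1(A))                    # large, on the order of 4e4
```

A condition number near 1 means the relative error in x is comparable to the relative error in b; a condition number of 4×10⁴ means roughly four decimal digits of accuracy can be lost.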
The smallest eigenvalue λ_n of A*A is a measure of how close the matrix is to being singular: if λ_n is small, then κ₂(A) is large. Thus, the closer the matrix is to being singular, the more ill-conditioned it is.

Consistent Norms

A matrix norm ‖·‖_ab on 𝔽^{m×n} is called consistent with a vector norm ‖·‖_a on 𝔽^n and a vector norm ‖·‖_b on 𝔽^m if
‖Ax‖_b ≤ ‖A‖_ab ‖x‖_a for all A ∈ 𝔽^{m×n}, x ∈ 𝔽^n.
All induced norms are consistent by definition.

Compatible Norms

A matrix norm ‖·‖_b on 𝔽^{n×n} is called compatible with a vector norm ‖·‖_a on 𝔽^n if
‖Ax‖_a ≤ ‖A‖_b ‖x‖_a for all A ∈ 𝔽^{n×n}, x ∈ 𝔽^n.
All induced norms are compatible by definition.

Equivalence of Norms

Two matrix norms ‖·‖_α and ‖·‖_β are said to be equivalent if for all matrices A ∈ 𝔽^{m×n} there exist positive numbers r and s such that
r ‖A‖_α ≤ ‖A‖_β ≤ s ‖A‖_α.

Examples of Norm Equivalence

For a matrix A ∈ ℝ^{m×n} of rank r, the following inequalities hold:
‖A‖₂ ≤ ‖A‖_F ≤ √r ‖A‖₂
‖A‖_F ≤ ‖A‖_* ≤ √r ‖A‖_F   (‖A‖_* is the nuclear norm, the sum of the singular values)
‖A‖_max ≤ ‖A‖₂ ≤ √(mn) ‖A‖_max
(1/√n) ‖A‖∞ ≤ ‖A‖₂ ≤ √m ‖A‖∞
(1/√m) ‖A‖₁ ≤ ‖A‖₂ ≤ √n ‖A‖₁

Here, ‖A‖_p refers to the matrix norm induced by the vector p-norm. Another useful inequality between matrix norms is
‖A‖₂ ≤ √(‖A‖₁ ‖A‖∞).
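To check the inequality ‖A‖₂ ≤ √(‖A‖₁ ‖A‖∞) numerically without an eigenvalue library, ‖A‖₂ can be estimated by power iteration on AᵀA (an illustrative pure-Python sketch, not part of the original notes):

```python
import math

def matvec(A, x):
    """Matrix-vector product for a matrix stored as a list of rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def norm2_est(A, iters=200):
    """Estimate ||A||_2 = sqrt(lambda_max(A^T A)) by power iteration."""
    At = transpose(A)
    x = [1.0] * len(A[0])
    for _ in range(iters):
        y = matvec(At, matvec(A, x))            # y = A^T A x
        s = math.sqrt(sum(v * v for v in y))    # normalize to avoid overflow
        x = [v / s for v in y]
    return math.sqrt(sum(v * v for v in matvec(A, x)))

A = [[1.0, 2.0], [3.0, 4.0]]
n1 = max(sum(abs(A[i][j]) for i in range(2)) for j in range(2))   # ||A||_1 = 6
ninf = max(sum(abs(v) for v in row) for row in A)                 # ||A||_inf = 7
assert norm2_est(A) <= math.sqrt(n1 * ninf) + 1e-9                # 2-norm <= sqrt(42)
```

Power iteration converges to the dominant eigenvalue of AᵀA whenever the starting vector has a component along the dominant eigenvector, which a vector of all ones almost always does.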
More information7.1 Definitions and Generator Polynomials
Chapter 7 Cyclic Codes Lecture 21, March 29, 2011 7.1 Definitions and Generator Polynomials Cyclic codes are an important class of linear codes for which the encoding and decoding can be efficiently implemented
More informationPolynomial and Rational Functions. Chapter 3
Polynomial and Rational Functions Chapter 3 Quadratic Functions and Models Section 3.1 Quadratic Functions Quadratic function: Function of the form f(x) = ax 2 + bx + c (a, b and c real numbers, a 0) 30
More informationLinear Algebra in A Nutshell
Linear Algebra in A Nutshell Gilbert Strang Computational Science and Engineering WellesleyCambridge Press. 2007. Outline 1 Matrix Singularity 2 Matrix Multiplication by Columns or Rows Rank and nullspace
More informationUniversity of Missouri. In Partial Fulllment LINDSEY M. WOODLAND MAY 2015
Frames and applications: Distribution of frame coecients, integer frames and phase retrieval A Dissertation presented to the Faculty of the Graduate School University of Missouri In Partial Fulllment of
More informationLinear Algebra: Graduate Level Problems and Solutions. Igor Yanovsky
Linear Algebra: Graduate Level Problems and Solutions Igor Yanovsky Linear Algebra Igor Yanovsky, 5 Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation.
More information2 so Q[ 2] is closed under both additive and multiplicative inverses. a 2 2b 2 + b
. FINITEDIMENSIONAL VECTOR SPACES.. Fields By now you ll have acquired a fair knowledge of matrices. These are a concrete embodiment of something rather more abstract. Sometimes it is easier to use matrices,
More informationMath 5520 Homework 2 Solutions
Math 552 Homework 2 Solutions March, 26. Consider the function fx) = 2x ) 3 if x, 3x ) 2 if < x 2. Determine for which k there holds f H k, 2). Find D α f for α k. Solution. We show that k = 2. The formulas
More informationMASTER OF SCIENCE IN ANALYTICS 2014 EMPLOYMENT REPORT
MASTER OF SCIENCE IN ANALYTICS 204 EMPLOYMENT REPORT! Results at graduation, May 204 Number of graduates: 79 MSA 205 Number of graduates seeking new employment: 75 Percent with one or more offers of employment
More information2: LINEAR TRANSFORMATIONS AND MATRICES
2: LINEAR TRANSFORMATIONS AND MATRICES STEVEN HEILMAN Contents 1. Review 1 2. Linear Transformations 1 3. Null spaces, range, coordinate bases 2 4. Linear Transformations and Bases 4 5. Matrix Representation,
More informationConjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)
Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps
More informationAlgebra I Fall 2007
MIT OpenCourseWare http://ocw.mit.edu 18.701 Algebra I Fall 007 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 18.701 007 Geometry of the Special Unitary
More informationLecture 7 Spectral methods
CSE 291: Unsupervised learning Spring 2008 Lecture 7 Spectral methods 7.1 Linear algebra review 7.1.1 Eigenvalues and eigenvectors Definition 1. A d d matrix M has eigenvalue λ if there is a ddimensional
More informationEigenvalues and Eigenvectors
Eigenvalues and Eigenvectors Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) Eigenvalues and Eigenvectors Fall 2015 1 / 14 Introduction We define eigenvalues and eigenvectors. We discuss how to
More informationOrdinary Differential Equations II
Ordinary Differential Equations II February 9 217 Linearization of an autonomous system We consider the system (1) x = f(x) near a fixed point x. As usual f C 1. Without loss of generality we assume x
More informationLinear programming: theory, algorithms and applications
Linear programming: theory, algorithms and applications illes@math.bme.hu Department of Differential Equations Budapest 2014/2015 Fall Vector spaces A nonempty set L equipped with addition and multiplication
More information3 QR factorization revisited
LINEAR ALGEBRA: NUMERICAL METHODS. Version: August 2, 2000 30 3 QR factorization revisited Now we can explain why A = QR factorization is much better when using it to solve Ax = b than the A = LU factorization
More informationMath 113 Homework 5. Bowei Liu, Chao Li. Fall 2013
Math 113 Homework 5 Bowei Liu, Chao Li Fall 2013 This homework is due Thursday November 7th at the start of class. Remember to write clearly, and justify your solutions. Please make sure to put your name
More informationc c c c c c c c c c a 3x3 matrix C= has a determinant determined by
Linear Algebra Determinants and Eigenvalues Introduction: Many important geometric and algebraic properties of square matrices are associated with a single real number revealed by what s known as the determinant.
More information