Dr. K. Bellová, Mathematics 2 (10-PHY-BIPMA2)

SYLLABUS

1 Linear maps and matrices

Operations with linear maps. Prop 1.1.1: (1) the sum, scalar multiple, and composition of linear maps are again linear maps; (2) L(U, V) is a vector space. Matrix of a linear map. The set of matrices M_{m,n}(F). Thm 1.1.2: Given bases of an n-dimensional U and an m-dimensional V, the map T ↦ A_T is a bijection between the sets L(U, V) and M_{m,n}(F). Matrix of a sum of linear maps and the sum of matrices. Matrix of a map multiplied by a scalar and multiplication of matrices by scalars. Prop 1.1.3: (1) M_{m,n}(F) is a vector space; (2) given bases of an n-dimensional U and an m-dimensional V, the map T ↦ A_T is an isomorphism between L(U, V) and M_{m,n}(F). Matrix of a composition of linear maps and the product of matrices. Prop 1.1.4: (1) A_{S∘T} = A_S A_T; (2+3) the product of matrices is an associative operation, but in general non-commutative. Row and column vectors. Identity matrix, inverse matrix, invertible matrix. Inverse of a map. Review: a map f : X → Y is invertible iff f is a bijection; (f^{-1})^{-1} = f. Prop 1.1.5: If T : U → V is a linear bijection, then T^{-1} : V → U is linear. Thm 1.1.6: A map is invertible if and only if its matrix (in some bases) is invertible; moreover, (A_T)^{-1} = A_{T^{-1}}.

Systems of linear equations: consistent and inconsistent systems. Equivalent systems. Elementary operations on systems. Thm 1.1.7: Elementary operations keep systems equivalent. Matrix and augmented matrix of a system. Elementary row operations on matrices. Row echelon form and reduced row echelon form. Thm 1.1.8 [Gaussian elimination]: Every matrix can be transformed to row echelon form, and hence to reduced row echelon form, by elementary operations. Algorithm for solving a system of linear equations. Homogeneous systems. The solution set of a homogeneous system as a vector space. Example: the equation of a plane in space and the corresponding homogeneous equation. Rank of a linear map.
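The elimination procedure of Thm 1.1.8 can be sketched in a few lines of Python. This is an illustrative sketch, not part of the lecture notes; the function name `rref` and the use of exact `Fraction` arithmetic are my choices:

```python
from fractions import Fraction

def rref(rows):
    """Reduce a matrix (given as a list of row lists) to reduced row
    echelon form using the three elementary row operations of Thm 1.1.8."""
    A = [[Fraction(x) for x in row] for row in rows]
    m, n = len(A), len(A[0])
    pivot_row = 0
    for col in range(n):
        # find a row with a nonzero entry in this column (row swap)
        r = next((i for i in range(pivot_row, m) if A[i][col] != 0), None)
        if r is None:
            continue
        A[pivot_row], A[r] = A[r], A[pivot_row]
        # scale the pivot row so the pivot becomes 1
        p = A[pivot_row][col]
        A[pivot_row] = [x / p for x in A[pivot_row]]
        # eliminate the entries of this column in all other rows
        for i in range(m):
            if i != pivot_row and A[i][col] != 0:
                f = A[i][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[pivot_row])]
        pivot_row += 1
    return A

# solving x + 2y = 5, 3x + 4y = 6 via the augmented matrix: x = -4, y = 9/2
print(rref([[1, 2, 5], [3, 4, 6]]))
```

Reading the solution off the reduced form is exactly the solution algorithm mentioned above.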
Prop 1.1.9: For a linear map T : U → V, the inverse T^{-1} exists if and only if dim U = dim V = rank T. Rank of a matrix as the maximal number of linearly independent columns. Thm 1.1.10: The rank of a linear map equals the rank of its matrix (in any bases), i.e., rank T = rank A_T. Corollary 1.1.11: A matrix A ∈ M_n(F) is invertible if and only if its rank is n. Thm 1.1.12: The rank of a matrix equals the maximal number of linearly independent rows. Rank is preserved under elementary row and column transformations. Computation of the rank of a matrix using Gaussian elimination. Finding a maximal subset of linearly independent vectors among a given finite set of vectors. Computation of the inverse matrix using Gaussian elimination. Theorem 1.1.13: For all A, B ∈ M_n(F), rank(AB) ≤ min(rank A, rank B). Corollary 1.1.14: A ∈ M_n(F) is invertible if and only if there exists B with AB = I_n (and then B = A^{-1}). Two matrices A, B ∈ M_n(F) are both invertible if and only if their product AB is invertible. Example with rank(AB) ≠ rank(BA) (HW). Prop 1.1.15: The dimension of the solution set of a homogeneous system of equations is the number of unknowns minus the rank of the system matrix. Fundamental system of solutions.

Permutations. Row notation. Cycle decomposition. Composition of permutations. Transpositions. Thm 1.1.16: (1) Each permutation can be written as a composition of transpositions, (i_1 ... i_k) = (i_1 i_k)(i_1 i_{k-1}) ... (i_1 i_3)(i_1 i_2); (2) for a given permutation, the number of transpositions in any such decomposition is always even or always odd. Signature (or sign) of a permutation. Natural properties of a volume function. Multilinear functions. Alternating functions. Proposition 1.1.17: If f : V^n → F is multilinear and alternating, then for any σ ∈ S_n, f(v_{σ(1)}, ..., v_{σ(n)}) = sgn(σ) f(v_1, ..., v_n). Theorem 1.1.18: Let v_1, ..., v_n be a basis of V. There exists a unique multilinear alternating function f : V^n → F with f(v_1, ..., v_n) = 1. Formula for f(u_1, ..., u_n) in terms of the coordinates of the u_i's in the basis (v_1, ..., v_n). Theorem 1.1.19: Let f : V^n → F be multilinear and alternating, v_1, ..., v_n a basis of V, and f(v_1, ..., v_n) ≠ 0. Then u_1, ..., u_n ∈ V are linearly independent if and only if f(u_1, ..., u_n) ≠ 0. Determinant of a matrix as the unique multilinear alternating function of the rows, det : (F^n)^n → F with det(e_1, ..., e_n) = 1. Formulas for determinants for n = 2, 3. Cor 1.1.20: A is invertible iff det(A) ≠ 0. Thm 1.1.21: det(AB) = det(A) det(B).
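As an illustration (not part of the lecture notes), the signature from Thm 1.1.16 and the coordinate formula from Thm 1.1.18 can be coded directly; the names `sgn` and `det` are mine, and the brute-force sum over all n! permutations is practical only for small n:

```python
from itertools import permutations

def sgn(perm):
    """Signature of a permutation given as a tuple of 0-based images:
    a k-cycle decomposes into k-1 transpositions (Thm 1.1.16)."""
    seen, transpositions = set(), 0
    for start in range(len(perm)):
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        transpositions += max(length - 1, 0)
    return 1 if transpositions % 2 == 0 else -1

def det(A):
    """det A = sum over permutations s of sgn(s) * prod_i A[i][s[i]],
    the coordinate formula of Thm 1.1.18 applied to the rows of A."""
    n = len(A)
    total = 0
    for s in permutations(range(n)):
        term = sgn(s)
        for i in range(n):
            term *= A[i][s[i]]
        total += term
    return total

A, B = [[1, 2], [3, 4]], [[0, 1], [1, 1]]
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(det(A), det(B), det(AB))   # -2 -1 2, consistent with Thm 1.1.21
```

The last line checks det(AB) = det(A) det(B) on one example.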
Prop 1.1.22 (Further properties of determinants): (a) det(A) is the unique alternating multilinear function of the columns of A such that det(I_n) = 1; (b) the determinant does not change if a row (column) multiplied by a scalar is added to another row (column); (c) the determinant after multiplying a row (column) by a scalar; det(αA) = α^n det(A); (d) the determinant after swapping two rows or two columns; (e) the determinant of a block matrix. Theorem 1.1.23 (Laplace expansion): For any i, j ∈ {1, ..., n}, (a) det(A) = Σ_{k=1}^n (-1)^{k+j} α_{kj} det A(k|j) (expansion along the j-th column), (b) det(A) = Σ_{k=1}^n (-1)^{i+k} α_{ik} det A(i|k) (expansion along the i-th row), where A(k|j) is obtained from A by deleting the k-th row and the j-th column. The inverse matrix expressed by the determinant and the determinants of the minors. Cramer's rule for solving systems of linear equations.

2 Change of basis, eigenvalues and eigenvectors

Change of basis. Transition matrix Q from (e_1, ..., e_n) to (e'_1, ..., e'_n): the i-th column of Q consists of the coordinates of e'_i in the basis (e_1, ..., e_n), i.e., (e'_1, ..., e'_n) = (e_1, ..., e_n) Q. The transition matrix Q as the matrix of the identity map from F^n with basis (e'_1, ..., e'_n) to F^n with basis (e_1, ..., e_n).
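A minimal numeric sketch of the transition-matrix mechanics (illustrative only; the basis e'_1 = (1, 1), e'_2 = (1, -1) is my example, not one from the lecture):

```python
# Transition matrix Q from the standard basis e of R^2 to the new basis
# e1' = (1, 1), e2' = (1, -1): the columns of Q are the new basis vectors
# written in the old basis.
Q = [[1, 1],
     [1, -1]]

def inv_2x2(A):
    """Inverse of a 2x2 matrix via the determinant (cf. Cor 1.1.20)."""
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d, A[0][0] / d]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

v = [3, 1]                  # coordinates of a vector in the standard basis
c = matvec(inv_2x2(Q), v)   # its coordinates in the basis e'
print(c)                    # [2.0, 1.0]: indeed 2*(1, 1) + 1*(1, -1) = (3, 1)
print(matvec(Q, c))         # [3.0, 1.0]: Q converts e'-coordinates back
```

Multiplying by Q^{-1} converts old coordinates to new ones, which is the content of the proposition that follows.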

Prop 1.2.1: If Q is the transition matrix from (e_1, ..., e_n) to (e'_1, ..., e'_n), then Q^{-1} is the transition matrix from (e'_1, ..., e'_n) to (e_1, ..., e_n). Thm 1.2.2 (change of basis): Let T : U → V be a linear map. If (e'_1, ..., e'_n), (e_1, ..., e_n) are two bases of U with transition matrix Q from e to e', and (f'_1, ..., f'_m), (f_1, ..., f_m) are two bases of V with transition matrix P from f to f', then A_T^{(e',f')} = P^{-1} A_T^{(e,f)} Q. Cor 1.2.3: The determinant det T = det A_T does not depend on the choice of basis. Similar matrices. Eigenvectors and eigenvalues of linear operators. Spectrum of a linear operator. Characteristic polynomial. Prop 1.2.4: The characteristic polynomial is independent of the basis. Theorem 1.2.5: λ_0 ∈ Spec T iff P_T(λ_0) = 0, i.e., the eigenvalues are the roots of the characteristic polynomial. Theorem 1.2.6: Eigenvectors corresponding to different eigenvalues are linearly independent. Simple spectrum. Prop 1.2.7: If T : U → U has a simple spectrum, then there exists a basis e_1, ..., e_n of U (formed by eigenvectors) such that A_T = diag(λ_1, ..., λ_n); one says that T is diagonalizable. Diagonalizable matrices. Examples of non-diagonalizable operators: (a) the characteristic polynomial need not have all its roots in every field; algebraically closed fields; (b) A_T = (2 1; 0 2). Algebraic and geometric multiplicities of eigenvalues. Eigenspace. Prop 1.2.8: (a) The sum of the algebraic multiplicities is n; (b) the geometric multiplicity is at most the algebraic multiplicity. Theorem 1.2.9: An operator (resp. its matrix) is diagonalizable if and only if for every eigenvalue of T, its geometric and algebraic multiplicities coincide. Remark: Jordan form of a matrix.

3 Inner product spaces

Scalar product in R^2 / R^3. Prop 1.3.1: (a) (u, v) = (v, u); (b) (u, u) = ‖u‖^2 > 0 if u ≠ 0; (c) (u, v) = 0 iff u ⊥ v or one of them is 0; (d) linearity. Orthonormal basis. Thm 1.3.2: Expressing the scalar product, lengths of vectors, and angles between vectors in terms of the coordinates of the vectors.
Orientation of a basis. Vector product in R^3. Prop 1.3.3: (a) u × v = -(v × u); (b) u × v = 0 iff u and v are collinear; (c) linearity. Thm 1.3.4: Expressing the vector product in terms of the coordinates of the vectors in a right-handed orthonormal basis. Definition of an inner product space. Real and unitary spaces. Prop 1.3.5: (a) ⟨αu, βv⟩ = α β̄ ⟨u, v⟩; (b) ⟨u, v_1 + v_2⟩ = ⟨u, v_1⟩ + ⟨u, v_2⟩; (c) ⟨Σ_{i=1}^n α_i u_i, Σ_{j=1}^n β_j v_j⟩ = Σ_{i,j=1}^n α_i β̄_j ⟨u_i, v_j⟩; (d) ⟨0, v⟩ = ⟨u, 0⟩ = 0. Examples: R^n, C^n, C([0, 1], C). In a finite-dimensional vector space, one can always define an inner product.
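The coordinate formula of Thm 1.3.4 can be sketched directly (illustrative; `cross` and `dot` are my names):

```python
def cross(u, v):
    """Vector product in R^3, in coordinates with respect to a
    right-handed orthonormal basis (cf. Thm 1.3.4)."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

e1, e2 = (1, 0, 0), (0, 1, 0)
print(cross(e1, e2))   # (0, 0, 1), i.e. e3

u, v = (1, 2, 3), (4, 5, 6)
w = cross(u, v)
print(dot(w, u), dot(w, v))   # 0 0: u x v is orthogonal to both factors
```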

Norm of a vector. Prop 1.3.6: (a) ‖u‖ ≥ 0, with ‖u‖ = 0 iff u = 0; (b) ‖αu‖ = |α| ‖u‖; (c) the Cauchy-Schwarz inequality. The Cauchy-Schwarz inequality in R^n, C^n and C([0, 1], C). Angle between two vectors in a real inner product space. Prop 1.3.7 (triangle inequality for norms): ‖u + v‖ ≤ ‖u‖ + ‖v‖. Normed vector space. Distance between two vectors. Prop 1.3.8: Properties of the distance. Metric space. Orthogonality of two vectors, u ⊥ v. Prop 1.3.9: Properties of ⊥. Orthogonal systems of vectors. Orthonormal systems of vectors. Theorem 1.3.10: Every orthogonal system is linearly independent. Corollary 1.3.11: If dim(V) = n, then every orthogonal system contains at most n vectors. Thm 1.3.12 (Gram-Schmidt orthogonalization): If u_1, ..., u_n ∈ V are linearly independent, then there exist orthonormal v_1, ..., v_n ∈ V such that for all k ≤ n, Span{v_1, ..., v_k} = Span{u_1, ..., u_k}. Corollary 1.3.13: Every finite-dimensional inner product space has an orthonormal basis. Prop 1.3.14 (properties of ONBs): If e_1, ..., e_n is an ONB of V, then (a) for every u ∈ V there exist unique u_1, ..., u_n ∈ F such that u = u_1 e_1 + ... + u_n e_n, namely u_i = ⟨u, e_i⟩; (b) for all u = u_1 e_1 + ... + u_n e_n and v = v_1 e_1 + ... + v_n e_n in V, ⟨u, v⟩ = u_1 v̄_1 + ... + u_n v̄_n; (c) ‖u‖^2 = |u_1|^2 + ... + |u_n|^2 (Parseval's identity). Sum of vector subspaces and direct sum of vector subspaces. Prop 1.3.15: (a) U_1 + U_2 is a vector subspace; (b) if V = U_1 ⊕ U_2, then every v ∈ V has a unique decomposition v = v_1 + v_2 with v_1 ∈ U_1 and v_2 ∈ U_2. Orthogonal complement S^⊥ of a set S ⊆ V. Prop 1.3.16 (properties of ^⊥): (a) {0}^⊥ = V, V^⊥ = {0}; (b) S^⊥ is a vector subspace of V (even if S is not); (c) S^⊥ = (Span S)^⊥; (d) (S^⊥)^⊥ = Span S; in particular, if S is a vector subspace, then (S^⊥)^⊥ = S. Theorem 1.3.17: If U is a vector subspace of a finite-dimensional space V, then V = U ⊕ U^⊥. Corollary 1.3.18: Any vector v ∈ V can be decomposed uniquely as v = v_U + v_{U^⊥}, where v_U ∈ U and v_{U^⊥} ∈ U^⊥. Orthogonal projection P_U.
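The Gram-Schmidt procedure of Thm 1.3.12 can be sketched as follows (an illustrative version for R^n with floating-point arithmetic; the function name is mine):

```python
from math import sqrt

def gram_schmidt(vectors):
    """Gram-Schmidt orthogonalization (cf. Thm 1.3.12): from linearly
    independent u_1, ..., u_n in R^n, build an orthonormal system
    v_1, ..., v_n with Span{v_1..v_k} = Span{u_1..u_k} for every k."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    ortho = []
    for u in vectors:
        w = list(u)
        # subtract the components along the already-constructed v's
        for v in ortho:
            c = dot(w, v)
            w = [wi - c * vi for wi, vi in zip(w, v)]
        norm = sqrt(dot(w, w))   # nonzero by linear independence
        ortho.append([wi / norm for wi in w])
    return ortho

v1, v2 = gram_schmidt([[1, 1, 0], [1, 0, 1]])
# v1, v2 are orthonormal and span the same plane as the two inputs
```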
Prop 1.3.19 (properties of P_U): (a) P_U is linear; (b) P_U restricted to U is the identity; (c) P_U^2 = P_U; (d) for all v_1, v_2 ∈ V, ⟨P_U(v_1), v_2⟩ = ⟨v_1, P_U(v_2)⟩; (e) Ker P_U = U^⊥, Im P_U = U. Example (projection onto the line spanned by a vector u): If U = Span{u}, then P_U(v) = (⟨v, u⟩ / ‖u‖^2) u. Distance between sets S_1, S_2 ⊆ V, ρ(S_1, S_2). Proposition 1.3.20: If U is a vector subspace of V and v ∈ V, then ρ(v, U) = ‖v_{U^⊥}‖ = ‖v - P_U(v)‖. Dual space, or the space of linear functionals on V, V* = L(V, F). Example: for any u ∈ V, the map f_u : V → F defined by f_u(v) = ⟨v, u⟩ is in V*. Theorem 1.3.21 (Riesz representation theorem): If V is a finite-dimensional inner product space, then for any f ∈ V* there exists a unique u ∈ V such that f = f_u, i.e., f(v) = ⟨v, u⟩ for all v ∈ V.
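The projection-onto-a-line example above can be computed exactly (a sketch; the function name is mine, and `Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction

def project_onto_line(v, u):
    """Orthogonal projection onto U = Span{u} in R^n:
    P_U(v) = (<v, u> / ||u||^2) u  (the example after Prop 1.3.19)."""
    def dot(a, b):
        return sum(Fraction(x) * Fraction(y) for x, y in zip(a, b))
    c = dot(v, u) / dot(u, u)
    return [c * Fraction(x) for x in u]

v, u = (3, 4), (1, 2)
p = project_onto_line(v, u)               # (11/5, 22/5)
r = [a - b for a, b in zip(v, p)]         # v - P_U(v), the component in U^perp
print(sum(a * b for a, b in zip(r, u)))   # 0: the residual is orthogonal to u
```

By Prop 1.3.20, the norm of the residual r is the distance ρ(v, U).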

Theorem 1.3.22: For any linear map T : V → V there exists a unique linear map T* : V → V such that ⟨T(u), v⟩ = ⟨u, T*(v)⟩ for all u, v ∈ V; T* is the adjoint operator of T. Properties 1.3.23: (a) (T*)* = T; (b) (αT)* = ᾱ T*; (c) (T_1 + T_2)* = T_1* + T_2*; (d) (T_1 T_2)* = T_2* T_1*; (e) if e_1, ..., e_n is an ONB of V, then A_{T*} = (Ā_T)^t. Self-adjoint operators, T* = T. Example: P_U. Hermitian matrices, Ā^t = A; symmetric matrices, A^t = A. Proposition 1.3.24: T is self-adjoint iff its matrix in any ONB is Hermitian. Properties: If T_1, T_2 are self-adjoint, then T_1^2, T_1 + T_2, αT_1 (for real α), and T_1 T_2 + T_2 T_1 are also self-adjoint, but T_1 T_2 is not necessarily self-adjoint. Theorem 1.3.25: (a) All eigenvalues of a self-adjoint operator are real; in particular, its characteristic polynomial is real. (b) Eigenvectors corresponding to different eigenvalues are orthogonal. (c) T is self-adjoint iff there exists an ONB in which A_T is diagonal and real. In particular, symmetric real matrices are diagonalizable. Remark: Normal operators. Unitary operators, T* = T^{-1}. Unitary operators on real spaces are called orthogonal. Unitary operators are normal. Proposition 1.3.26: The following conditions are equivalent: (a) T is unitary; (b) ⟨T(u), T(v)⟩ = ⟨u, v⟩ for all u, v ∈ V; (c) T maps some (equivalently, every) ONB to an ONB. Unitary matrices, Ā^t = A^{-1}; orthogonal matrices, A^t = A^{-1}. Proposition 1.3.27: T is unitary iff its matrix in some ONB is unitary. Proposition 1.3.28: If A is unitary, then its rows form an ONB of the space of rows C^n, and its columns form an ONB of C^n. Proposition 1.3.29: (a) If T is normal and λ is an eigenvalue of T with eigenvector u, then λ̄ is an eigenvalue of T* with the same eigenvector u. (b) If T is unitary, then all its eigenvalues have modulus 1, and |det A_T| = 1. Remark: For any operator T on a vector space over C, det A_T equals the product of all eigenvalues of T (counted with algebraic multiplicity).
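A small numeric check (illustrative; the particular matrix is my example) that a unitary matrix U satisfies Ū^t U = I, which is the matrix form of Prop 1.3.26/1.3.28:

```python
def conj_transpose(A):
    """Matrix of the adjoint operator in an ONB: the (i, j) entry is the
    complex conjugate of A[j][i]  (Property 1.3.23(e))."""
    n = len(A)
    return [[complex(A[j][i]).conjugate() for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# a unitary 2x2 matrix: its rows (and columns) form an ONB of C^2
s = 2 ** -0.5
U = [[s, s], [1j * s, -1j * s]]
print(matmul(conj_transpose(U), U))   # the 2x2 identity, up to rounding
```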
Theorem 1.3.30: (a) If T is unitary, then there exists an ONB such that A_T is diagonal with all diagonal entries of modulus 1. (b) If T is orthogonal, then there exists an ONB such that A_T is block diagonal, with blocks of size 1 equal to 1 or -1 and blocks of size 2 of the form (cos ϕ_i  -sin ϕ_i; sin ϕ_i  cos ϕ_i). (Canonical form of a unitary/orthogonal operator.) Corollary 1.3.31: Every unitary (resp. orthogonal) matrix is unitarily (resp. orthogonally) equivalent to a unitary (resp. orthogonal) matrix in canonical form. Example: List of all 6 canonical forms of orthogonal operators in R^3. (These are the only transformations of R^3 which preserve lengths and angles between vectors.)

4 Bilinear forms

Definition of a bilinear form, B(·, ·). Examples: (a) B(u, v) = Σ_{i,j=1}^n a_{ij} u_i v_j for u, v ∈ F^n; (b) B(f, g) = ∫_0^1 K(t) f(t) g(t) dt for K, f, g ∈ C[0, 1]; (c) B(u, v) = f(u) g(v) for u, v ∈ V and f, g ∈ V*; (d) the inner product in a Euclidean space. Coordinate representation: If e_1, ..., e_n is a basis of V, then B(u, v) = Σ_{i,j=1}^n u_i v_j B(e_i, e_j). Gram matrix of B: A_B = (B(e_i, e_j))_{i,j=1}^n. Proposition 1.4.1 (change of basis): If (e'_1 ... e'_n) = (e_1 ... e_n) C, then A'_B = C^t A_B C. Congruent matrices. Rank of a bilinear form; non-degenerate bilinear forms. Symmetric bilinear forms, B(u, v) = B(v, u). Example. Proposition 1.4.2: If B is symmetric, then its matrix in any basis is symmetric, A_B = A_B^t. Theorem 1.4.3 (normal form of a symmetric bilinear form): If B is a symmetric bilinear form on a real vector space V, then there exists a basis of V in which the matrix of B is diagonal with only 1's, -1's, and 0's on the diagonal; namely, there exist a basis e_1, ..., e_n of V and s ≤ r ≤ n such that B(e_i, e_j) = 1 if i = j ≤ s; -1 if s < i = j ≤ r; and 0 otherwise (i.e., if i = j > r or i ≠ j). In this basis B(u, v) = u_1 v_1 + ... + u_s v_s - u_{s+1} v_{s+1} - ... - u_r v_r. Proposition 1.4.4 (Sylvester's law of inertia): The numbers of 1's, -1's, and 0's in Theorem 1.4.3 are independent of the basis in which the matrix has the above form. The numbers of 1's, -1's and 0's form the signature of B. Quadratic form associated to a symmetric bilinear form B: B(v, v). Theorem 1.4.5 (polarization of a quadratic form): Any quadratic form on a real vector space is associated to a unique symmetric bilinear form, B(u, v) = (1/2)(B(u + v, u + v) - B(u, u) - B(v, v)). Example. Theorem 1.4.6: For every quadratic form B(v, v) on R^n, there exists a basis in which B(v, v) = v_1^2 + ... + v_s^2 - v_{s+1}^2 - ... - v_r^2; the numbers s and r are determined uniquely by the quadratic form. Lagrange's algorithm for finding this form (completing the square). Positive definite quadratic forms, B(v, v) > 0 for v ≠ 0. B(v, v) is a positive definite quadratic form on V iff its polarization B defines an inner product on V. Example: Minkowski space, B(v, v) = v_1^2 + v_2^2 + v_3^2 - v_4^2.

5 Functions of several variables

Definition of y = f(x_1, ..., x_n).
Other notation: z = f(x, y), w = f(x, y, z). ‖x‖ = (x_1^2 + ... + x_n^2)^{1/2}. Open ball B_r(x). Limit of f at a ∈ R^n, lim_{x→a} f(x), for f : D → R and D ⊆ R^n such that B_r(a) ∩ D contains points other than a for every r > 0. (Mind: f(a) may be undefined.) Properties of the limit. Examples: lim_{(x,y)→(0,0)} x^2 y / (x^2 + y^2) = 0 and lim_{(x,y)→(0,0)} x^{3/2} y / (x^2 + y^2) = 0 (proofs from the definition / by estimates). Proposition 2.1.1: Let α_1 = α_1(t), ..., α_n = α_n(t) be real functions defined on an interval containing t_0 ∈ R, and let α(t) = (α_1(t), ..., α_n(t)). If lim_{t→t_0} α_i(t) = a_i for all i ∈ {1, ..., n}, α(t) ≠ a for some r > 0 and all t ∈ (t_0 - r, t_0 + r) \ {t_0}, and lim_{x→a} f(x) = L, then lim_{t→t_0} f(α_1(t), ..., α_n(t)) = L. (This is useful for proving non-existence of a limit: if for two different choices of the α_i's the corresponding limits lim_{t→t_0} f(α_1(t), ..., α_n(t)) are different, then the limit of f at a does not exist.) Examples: (a) f(x, y) = xy / (x^2 + y^2) for x^2 + y^2 > 0 (the limit at 0 does not exist); (b) f(x, y) = x^2 y / (x^4 + y^2) for x^2 + y^2 > 0 (the limit at 0 along every line is 0, but the limit at 0 does not exist). Proposition 2.1.2: If f(x_1, ..., x_n) = g(‖x‖) for some g : [0, ∞) → R, then lim_{x→0} f(x) = lim_{r→0+} g(r). Such f's are called isotropic. Example: lim_{(x,y)→(0,0)} (x^2 + y^2) ln(x^2 + y^2) = lim_{r→0+} r^2 ln(r^2) = 0.

Continuity of a function at a ∈ R^n. Continuous functions on D. Examples: (a) polynomials; (b) rational functions (on their domains). Open sets, limit points, and closed sets. Examples: the open ball B_r(x) is open but not closed; the closed ball B̄_r(x) and the sphere S_r(x) are closed but not open; ∅ and R^n are both open and closed. Complement of a set. Proposition 2.1.3: D ⊆ R^n is closed if and only if D^c is open. Closure D̄ of a set. Remark: D̄ is closed; it is the smallest closed set containing D; and the closure of D̄ is D̄ itself. Bounded sets. Theorem 2.1.4: If a function is continuous on a bounded and closed set D ⊆ R^n, then it is bounded on D and attains its maximal and minimal values at some points of D. Remark: In metric spaces this holds for D compact; all compact sets are bounded and closed, but not all bounded and closed sets need be compact.

6 Literature

1. I. Lankham, B. Nachtergaele, A. Schilling. Linear algebra (as an introduction to abstract mathematics).
2. A. Schüler. Calculus. Lecture notes, Leipzig University 2006.
3. K. Hoffman and R. Kunze. Linear algebra.
4. G. Fichtenholz. Differential- und Integralrechnung.
5. S. Mac Lane and G. Birkhoff. Algebra.
6. T.L. Chow. Mathematical methods for physicists: A concise introduction.
7. G. Knieper. Mathematik für Physiker.