
dr bob's elementary differential geometry
a slightly different approach based on elementary undergraduate linear algebra, multivariable calculus and differential equations

by bob jantzen (Robert T. Jantzen)
Department of Mathematical Sciences
Villanova University

Copyright 2007, 2008 with Hans Kuo, Taiwan
in progress: version March 4, 2010

Abstract

There are lots of books on differential geometry, including at the introductory level. Why yet another one by an author who doesn't seem to take himself that seriously and occasionally refers to himself in the third person? This one is a bit different from all the rest. dr bob loves this stuff, but how to teach it to students at his own (not elite) university in order to have a little more fun at work than usual? This unique approach may not work for everyone, but it attempts to explain the nuts and bolts of how a few basically simple ideas, taken seriously, underlie the whole mess of formulas and concepts, without worrying about technicalities which only serve to put off students at the first pass through this scenery. It is also presented with an eye towards being able to understand the key concepts needed for the mathematical side of modern physical theories, while still providing the tools that underlie the classical theory of surfaces in space.

Contents

Preface

I ALGEBRA

0 Introduction: motivating index algebra

1 Foundations of tensor algebra
   1.1 Index conventions
   1.2 A vector space V
   1.3 The dual space V*
   Linear transformations of a vector space into itself (and tensors)
   Linear transformations of V into itself and a change of basis
   Linear transformations between V and V*

2 Symmetry properties of tensors
   Measure motivation and determinants
   Tensor symmetry properties
   Epsilons and deltas
   Antisymmetric tensors
   Symmetric tensors and multivariable Taylor series

3 Time out
   Whoa! Review of what we've done so far

4 Antisymmetric tensors, subspaces and measure
   Determinants gone wild
   The wedge product
   Subspace orientation and duality
   Wedge and duality on R^n in practice

II CALCULUS

5 From multivariable calculus to the foundation of differential geometry
   5.1 The tangent space in multivariable calculus
   More motivation for the re-interpretation of the tangent space
   Flow lines of vector fields
   Frames and dual frames and Lie brackets

6 Non-Cartesian coordinates on R^n (polar coordinates in R^2)
   Cylindrical and spherical coordinates on R^3
   Cylindrical coordinate frames
   Spherical coordinate frames
   Lie brackets and noncoordinate frames

7 Covariant derivatives
   Covariant derivatives on R^n with Euclidean metric
   Notation for covariant derivatives
   Covariant differentiation and the general linear group
   Covariant constant tensor fields
   The clever way of evaluating the components of the covariant derivative
   Noncoordinate frames
   Geometric interpretation of the Lie bracket
   More on covariant derivatives
   Gradient and divergence
   Second covariant derivatives and the Laplacian
   Spherical coordinate orthonormal frame
   Rotations and derivatives

8 Parallel transport
   Covariant differentiation along a curve and parallel transport
   Geodesics
   Parametrized curves as motion of point particles
   The Euclidean plane and the Kepler problem
   The 2-sphere of radius r
   The torus
   Geodesics as extremal curves: a peek at the calculus of variations

9 Intrinsic curvature
   Calculating the curvature tensor
   Interpretation of curvature
   The limiting loop parallel transport coordinate calculation
   The limiting loop parallel transport frame curvature calculation
   The symmetry of the covariant derivative

10 Extrinsic curvature
   The extrinsic curvature tensor
   Spheres and cylinders: a pair of useful concrete examples
   Cones: a useful cautionary example
   Total curvature: intrinsic plus extrinsic curvature

11 Integration of differential forms
   Changing the variable in a single variable integral
   Changing variables in multivariable integrals
   Parametrized p-surfaces and pushing forward the coordinate grid and tangent vectors
   Pulling back functions, covariant tensors and differential forms
   Changing the parametrization
   The exterior derivative d
   The exterior derivative and a metric
   Induced orientation
   Stokes theorem
   Worked examples of Stokes Theorem for R^3
   Spherical coordinates on R^4 and 3-spheres: a useful example with n > 3

12 Wrapping things up
   Final remarks
   MATH 5600 Spring 1991 Differential Geometry: Take Home Final

A Miscellaneous background
   A.1 From trigonometry to hyperbolic functions and hyperbolic geometry

B Maple worksheets

C Solutions
   C.1-C.11 Solutions by chapter
   C.12 Chapter 12: final exam worked
   Final exam

List of Figures

Preface

This book began as a set of handwritten notes from a course given at Villanova University in the spring semester of 1991. They were scanned and posted on the web in 2006, and were converted to a LaTeX compuscript and completely revised in 2007-2008 with the help of Hans Kuo of Taiwan through a serendipitous internet collaboration and a chance second offering of the course to actual students in the spring semester of 2008, which provided the opportunity for serious revision with feedback. Life then intervened and the necessary cleanup operations to put this into a finished form were delayed indefinitely.

Most undergraduate courses on differential geometry are leftovers from the early part of the last century, focusing on curves and surfaces in space, which is not very useful for the most important application of the twentieth century: general relativity and field theory in theoretical physics. Most mathematicians who teach such courses are not well versed in physics, so perhaps this is a natural consequence of the distancing of mathematics from physics, two fields which developed together in creating these ideas from Newton to Einstein and beyond.

The idea of these notes is to develop the essential tools of modern differential geometry while bypassing more abstract notions like manifolds, which, although important for global questions, are not essential for local differential geometry and therefore need not steal precious time from a first course aimed at undergraduates. Part 1 (Algebra) develops the vector space structure of R^n and its dual space of real-valued linear functions, and builds the tools of tensor algebra on that structure, getting the index manipulation part of tensor analysis out of the way first. Part 2 (Calculus) then develops R^n as a manifold first analyzed in Cartesian coordinates, beginning by redefining the tangent space of multivariable calculus to be the space of directional derivatives at a point, so that all the tools of Part 1 can then be applied pointwise to the tangent space. Non-Cartesian coordinates and the Euclidean metric are then used as a shortcut to what would be the consideration of more general manifolds with Riemannian metrics in a more ambitious course, followed by the covariant derivative and parallel transport, leading naturally into curvature. The exterior derivative and integration of differential forms is the final topic, showing how conventional vector analysis fits into a more elegant unified framework.

The theme of Part 1 is that one needs to distinguish the linearity properties from the inner product ("metric") properties of elementary linear algebra. The inner product geometry governs lengths and angles, and the determinant then enables one to extend the linear measure of length to area and volume in the plane or 3-dimensional space, and to p-dimensional objects in R^n. The determinant also tests linear independence of a set of vectors and hence is key to characterizing subspaces independent of the particular set of vectors we use to describe them, while assigning an actual measure to the p-parallelepipeds formed by a particular set, once an inner product sets the length scale for orthogonal directions. By appreciating the details of these basic notions in the setting of R^n, one is ready for the tools needed point by point in the tangent spaces to R^n, once one understands the relationship between each tangent space and the simpler enveloping space.

Part I ALGEBRA

Chapter 0 Introduction: motivating index algebra

Elementary linear algebra is the mathematics of linearity, whose basic objects are 1- and 2-dimensional arrays of numbers, which can be visualized as at most 2-dimensional rectangular arrangements of those numbers on sheets of paper or computer screens. Arrays of numbers of dimension d can be described as sets that can be put into a 1-1 correspondence with regular rectangular grids of points in R^d whose coordinates are integers, used as index labels:

$$\{a_i \mid i = 1,\ldots,n\} \qquad \text{1-d array: } n \text{ entries}$$
$$\{a_{ij} \mid i = 1,\ldots,n_1,\ j = 1,\ldots,n_2\} \qquad \text{2-d array: } n_1 n_2 \text{ entries}$$
$$\{a_{ijk} \mid i = 1,\ldots,n_1,\ j = 1,\ldots,n_2,\ k = 1,\ldots,n_3\} \qquad \text{3-d array: } n_1 n_2 n_3 \text{ entries}$$

1-dimensional arrays (vectors) and 2-dimensional arrays (matrices), coupled with the basic operation of matrix multiplication, itself an organized way of performing dot products of two sets of vectors, combine into a powerful machine for linear computation. When working with arrays of specific dimensions (3-component vectors, 2 x 3 matrices, etc.), one can avoid index notation and the sigma summation symbol $\sum_{i=1}^n$ after using it perhaps to define the basic operation of dot products for vectors of arbitrary dimension, but to discuss theory for indeterminate dimensions (n-component vectors, m x n matrices), index notation is necessary. However, index positioning (distinguishing subscript and superscript indices) is not essential and rarely used, especially by mathematicians. Going beyond 2-dimensional arrays to d-dimensional arrays for d > 2, the arena of tensors, index notation and index positioning are instead both essential to an efficient computational language.

Suppose we start with 3-vectors to illustrate the basic idea. The dot product between two vectors is symmetric in the two factors

$$\vec a = \langle a_1, a_2, a_3\rangle,\quad \vec b = \langle b_1, b_2, b_3\rangle, \qquad \vec a \cdot \vec b = a_1 b_1 + a_2 b_2 + a_3 b_3 = \sum_{i=1}^{3} a_i b_i = \vec b \cdot \vec a,$$

but using it to describe a linear function in R^3, a basic asymmetry is introduced

$$f_{\vec a}(\vec x) = \vec a \cdot \vec x = a_1 x_1 + a_2 x_2 + a_3 x_3 = \sum_{i=1}^{3} a_i x_i.$$
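The book's own computations are done in Maple (see Appendix B); purely as an illustration, here is a minimal NumPy sketch of this dot product and the linear function it defines, with arbitrary sample numbers:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # coefficient vector
x = np.array([4.0, 5.0, 6.0])   # variable vector

# explicit sum over the index i, mirroring sum_{i=1}^3 a_i x_i
explicit = sum(a[i] * x[i] for i in range(3))

# the same linear function f_a(x) = a . x via the built-in dot product
f_a = np.dot(a, x)

print(explicit, f_a)  # both print 32.0
```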

The left factor is a constant vector of coefficients, while the right factor is the vector of variables; this choice of left and right is arbitrary but convenient, although some mathematicians like to reverse it for some reason. To reflect this distinction, we introduce superscripts (up position) to denote the variable indices and subscripts (down position) to denote the coefficient indices, and then agree to sum over the understood 3 values of the index range for any repeated such pair of indices (one up, one down)

$$f_{\vec a}(\vec x) = a_1 x^1 + a_2 x^2 + a_3 x^3 = \sum_{i=1}^{3} a_i x^i = a_i x^i.$$

The last convention, called the Einstein summation convention, turns out to be an extremely convenient and powerful shorthand, which in this example streamlines the notation for taking a linear combination of variables. This index positioning notation encodes the distinction between rows and columns in matrix notation. Now we will represent a matrix $(a_{ij})$ representing a linear transformation as $(a^i{}_j)$, with row indices (left) associated with superscripts and column indices (right) with subscripts. A single row matrix or column matrix is used to denote respectively a coefficient vector and a variable vector

$$(a_1\ a_2\ a_3), \qquad \begin{pmatrix} x^1\\ x^2\\ x^3 \end{pmatrix},$$

where the entries of a single row matrix are labeled by the column index (down), and the entries of a single column matrix are labeled by the row index (up). The matrix product of a row matrix on the left by a column matrix on the right re-interprets the dot product between two vectors as the way to combine a row vector (left factor) of coefficients with a column vector (right factor) of variables to produce a single number, the value of a linear function of the variables

$$(a_1\ a_2\ a_3)\begin{pmatrix} x^1\\ x^2\\ x^3 \end{pmatrix} = a_1 x^1 + a_2 x^2 + a_3 x^3 = \vec a \cdot \vec x.$$

If we agree to use an underlined kernel symbol $\underline{x}$ for a column vector, and the transpose $\underline{a}^T$ for a row vector, where the transpose simply interchanges rows and columns of a matrix, this can be represented as $\underline{a}^T \underline{x} = \vec a \cdot \vec x$.

Extending the matrix product to more than one row in the left factor is the second step in defining a general matrix product, leading to a column vector result

$$\begin{pmatrix} a^1{}_1 & a^1{}_2 & a^1{}_3\\ a^2{}_1 & a^2{}_2 & a^2{}_3 \end{pmatrix}\begin{pmatrix} x^1\\ x^2\\ x^3 \end{pmatrix} = \begin{pmatrix} \underline{a}^{1T}\underline{x}\\ \underline{a}^{2T}\underline{x} \end{pmatrix} = \begin{pmatrix} \vec a^1 \cdot \vec x\\ \vec a^2 \cdot \vec x \end{pmatrix} = \begin{pmatrix} a^1{}_i x^i\\ a^2{}_i x^i \end{pmatrix}.$$

Thinking of the coefficient matrix as a 1-dimensional vertical array of row vectors (the first right hand side of this sequence of equations), one gets a corresponding array of numbers (a column) as the result, consisting of the corresponding dot products of the rows with the single column.

Denoting the left matrix factor by A, the product column matrix has entries

$$[A\underline{x}]^i = \sum_{k=1}^{3} a^i{}_k x^k = a^i{}_k x^k, \qquad 1 \le i \le 2.$$

Finally, adding more columns to the right factor in the matrix product, we generate corresponding columns in the matrix product, with the resulting array of numbers representing all possible dot products between the row vectors on the left and the column vectors on the right, labeled by the same row and column indices as the factor vectors from which they come

$$\begin{pmatrix} a^1{}_1 & a^1{}_2 & a^1{}_3\\ a^2{}_1 & a^2{}_2 & a^2{}_3 \end{pmatrix}\begin{pmatrix} x^1{}_1 & x^1{}_2\\ x^2{}_1 & x^2{}_2\\ x^3{}_1 & x^3{}_2 \end{pmatrix} = \begin{pmatrix} \underline{a}^{1T}\\ \underline{a}^{2T} \end{pmatrix}\langle \underline{x}_1 \,|\, \underline{x}_2\rangle = \begin{pmatrix} \vec a^1 \cdot \vec x_1 & \vec a^1 \cdot \vec x_2\\ \vec a^2 \cdot \vec x_1 & \vec a^2 \cdot \vec x_2 \end{pmatrix}.$$

Denoting the new left matrix factor again by A and the right matrix factor by X, the product matrix has entries (row index left up, column index right down)

$$[AX]^i{}_j = \sum_{k=1}^{3} a^i{}_k x^k{}_j = a^i{}_k x^k{}_j, \qquad 1 \le i \le 2,\ 1 \le j \le 2,$$

where the sum over three entries (representing the dot product) is implied by our summation convention in the second equality, and the row and column indices here go from 1 to 2 to label the entries of the 2 rows and 2 columns of the product matrix. Thus matrix multiplication in this example is just an organized way of displaying all such dot products of two ordered sets of vectors in an array, where the rows of the left factor in the matrix product correspond to the coefficient vectors in the left set and the columns of the right factor correspond to the variable vectors in the right set. The dot product itself in this context of matrix multiplication is representing the natural evaluation of linear functions (left row) on vectors (right column). No geometry (lengths and angles in Euclidean geometry) is implied in this context, only linearity and the process of linear combination.

The matrix product of a matrix with a single column vector can be reinterpreted in terms of the more general concept of a vector-valued linear function of vectors, namely a linear combination of vectors, in which case the right factor column vector entries play the role of coefficients. In this case the left factor matrix must be thought of as a horizontal array of column vectors

$$\langle \vec v_1 \,|\, \vec v_2 \,|\, \vec v_3 \rangle \begin{pmatrix} x^1\\ x^2\\ x^3 \end{pmatrix} = \begin{pmatrix} v^1{}_1 & v^1{}_2 & v^1{}_3\\ v^2{}_1 & v^2{}_2 & v^2{}_3 \end{pmatrix}\begin{pmatrix} x^1\\ x^2\\ x^3 \end{pmatrix} = \begin{pmatrix} v^1{}_1 x^1 + v^1{}_2 x^2 + v^1{}_3 x^3\\ v^2{}_1 x^1 + v^2{}_2 x^2 + v^2{}_3 x^3 \end{pmatrix} = x^1 \begin{pmatrix} v^1{}_1\\ v^2{}_1 \end{pmatrix} + x^2 \begin{pmatrix} v^1{}_2\\ v^2{}_2 \end{pmatrix} + x^3 \begin{pmatrix} v^1{}_3\\ v^2{}_3 \end{pmatrix} = x^1 \vec v_1 + x^2 \vec v_2 + x^3 \vec v_3 = x^i \vec v_i.$$

Thus in this case the summed-over index pair performs a linear combination of the columns of the left factor of the matrix product, whose coefficients are the entries of the right column matrix factor.
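As a quick numerical illustration (a NumPy sketch, not from the book), both readings of the matrix product can be checked side by side: entries as dot products of rows with columns, and a matrix acting on a single column as a linear combination of its columns. The matrices here are arbitrary choices:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # 2x3: two row (coefficient) vectors
X = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])        # 3x2: two column (variable) vectors

# entry [i, j] of A @ X is the dot product of row i of A with column j of X
P = np.array([[A[i, :] @ X[:, j] for j in range(2)] for i in range(2)])
assert np.allclose(P, A @ X)

# a matrix times a single column is a linear combination of its columns
x = np.array([1.0, 2.0, 0.0])
combo = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
assert np.allclose(combo, A @ x)
```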

This interpretation extends to more columns in the right matrix factor, leading to a matrix product consisting of the same number of columns, each of which represents a linear combination of the column vectors of the left factor matrix. In this case the coefficient indices are superscripts since the labels of the vectors being combined linearly are subscripts, but the one up, one down repeated index summation is still consistent. Note that when the left factor matrix is not square (in this example, a 2 x 3 matrix multiplied by a 3 x 1 matrix), one is dealing with coefficient vectors $\vec v_i$ and vectors $\vec x$ of different dimensions; in this example we are combining three 2-component vectors by linear combination.

If we call our basic column vectors just vectors (contravariant vectors, indices up) and call row vectors covectors (covariant vectors, indices down), then combining them with the matrix product represents the evaluation operation for linear functions, and implies no geometry in the sense of lengths and angles usually associated with the dot product, although one can easily carry over this interpretation. In this example R^3 is our basic vector space consisting of all possible ordered triplets of real numbers, and the space of all linear functions on it is equivalent to another copy of R^3, the space of all coefficient vectors. The space of linear functions on a vector space is called the dual space, and given a basis of the original vector space, expressing linear functions with respect to this basis leads to a component representation in terms of their matrix of coefficients as above. It is this basic foundation of a vector space and its dual, together with the natural evaluation represented by matrix multiplication in component language, reflected in superscript and subscript index positioning respectively associated with column vectors and row vectors, that is used to go beyond elementary linear algebra to the algebra of tensors, or d-dimensional arrays for any positive integer d. Index positioning together with the Einstein summation convention is essential in letting the notation itself directly carry the information about its role in this scheme of linear mathematics extended beyond the elementary level.

Combining this linear algebra structure with multivariable calculus leads to differential geometry. Consider R^3 with the usual Cartesian coordinates x^1, x^2, x^3 thought of as functions on this space. The differential of any function on this space can be expressed in terms of partial derivatives by the formula

$$df = \frac{\partial f}{\partial x^1} dx^1 + \frac{\partial f}{\partial x^2} dx^2 + \frac{\partial f}{\partial x^3} dx^3 = \partial_i f\, dx^i = f_{,i}\, dx^i$$

using first the abbreviation $\partial_i = \partial/\partial x^i$ for the partial derivative operator and then the abbreviation $f_{,i}$ for the corresponding partial derivatives of the function f. At each point of R^3, the differentials df and dx^i play the role of linear functions on the tangent space. The differential of f acts on a tangent vector $\vec v$ at a given point by evaluation to form the directional derivative along the vector

$$D_{\vec v} f = \frac{\partial f}{\partial x^1} v^1 + \frac{\partial f}{\partial x^2} v^2 + \frac{\partial f}{\partial x^3} v^3 = \frac{\partial f}{\partial x^i} v^i,$$

so that the coefficients of this linear function of a tangent vector $\vec v$ at a given point are the values of the partial derivative functions there, and hence have indices down compared to the up indices of the tangent vector itself, which belongs to the tangent space, the fundamental vector space describing the differential geometry near each point of the whole space.
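A quick numerical sanity check (again NumPy, not the book's Maple) that the covector of partial derivatives evaluated on a tangent vector reproduces the directional derivative; the function f, point p and vector v are arbitrary choices:

```python
import numpy as np

def f(x):  # a sample scalar function on R^3
    return x[0]**2 * x[1] + np.sin(x[2])

p = np.array([1.0, 2.0, 0.5])   # base point
v = np.array([0.3, -1.0, 2.0])  # tangent vector at p

# components f_{,i} of df at p by central differences
h = 1e-6
df = np.array([(f(p + h*e) - f(p - h*e)) / (2*h) for e in np.eye(3)])

# D_v f = f_{,i} v^i  versus a direct difference quotient along v
print(df @ v)
print((f(p + h*v) - f(p - h*v)) / (2*h))  # agrees to high accuracy
```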

In the linear function notation, the application of the linear function df to the vector $\vec v$ gives the same result

$$df(\vec v) = \frac{\partial f}{\partial x^i} v^i.$$

If $\partial f/\partial x^i$ are therefore the components of a covector, and $v^i$ the components of a vector in the tangent space, what is the basis of the tangent space, analogous to the natural (ordered) basis $\{e_1, e_2, e_3\} = \{\langle 1,0,0\rangle, \langle 0,1,0\rangle, \langle 0,0,1\rangle\}$ of R^3 thought of as a vector space in our previous discussion? In other words, how do we express a tangent vector in the abstract form as in the naive R^3 discussion, where $\vec x = \langle x^1, x^2, x^3\rangle = x^i e_i$ is expressed as a linear combination of the standard basis vectors $\{e_i\}$, usually denoted by $\vec\imath, \vec\jmath, \vec k$? This question will be answered in the following notes, making the link between old fashioned tensor analysis and modern differential geometry.

One last remark about matrix notation is needed. We adopt here the notational conventions of the computer algebra system Maple for matrices and vectors. A vector $\langle u^1, u^2\rangle$ will be interpreted as a column matrix in matrix expressions

$$u = \langle u^1, u^2\rangle = \begin{pmatrix} u^1\\ u^2 \end{pmatrix},$$

while its transpose will be denoted by

$$u^T = \langle u^1 \,|\, u^2\rangle = (u^1\ u^2).$$

In other words, within triangle bracket delimiters a comma will represent a vertical separator in a list, while a vertical line will represent a horizontal separator in a list. A matrix can then be represented as a vertical list of rows or as a horizontal list of columns, as in

$$\begin{pmatrix} a & b\\ c & d \end{pmatrix} = \langle\langle a \,|\, b\rangle, \langle c \,|\, d\rangle\rangle = \langle\langle a, c\rangle \,|\, \langle b, d\rangle\rangle.$$

Finally, if A is a matrix, we will not use a lowercase letter $a^i{}_j$ for its entries but retain the same symbol: $A = (A^i{}_j)$. Since matrix notation and matrix multiplication, which suppress all indices and the summation, are so efficient, it is important to be able to translate between the summed indexed notation and the corresponding index-free matrix symbols. In the usual language of matrix multiplication, the i-th row, j-th column entry of the product matrix is

$$[AB]_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}.$$

In our application of this to matrices with indices in various up/down positions, the left index will always be the row index and the right index the column index, and to translate from indexed notation to symbolic matrices we always have to use the above correspondence independent of the index up or down position: only left-right position counts. Thus to translate an expression like $M_{ij} B^i{}_m B^j{}_n$ we need to first rearrange the factors to $B^i{}_m M_{ij} B^j{}_n$ and then recognize that the second summed index j is in the right adjacent pair of positions for interpretation as matrix multiplication, but the first summed index i is in the row instead of column position, so the transpose is required to place it adjacent to the middle matrix factor:

$$(B^i{}_m M_{ij} B^j{}_n) = ([B^T M B]_{mn}) = B^T M B.$$
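This dictionary is mechanical enough that numpy.einsum, which implements exactly this summation convention (position in the index string, not up/down, is what counts), can check the translation; a sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))   # M_{ij}
B = rng.standard_normal((3, 3))   # B^i_m : left index = row, right = column

# M_{ij} B^i_m B^j_n, summed over i and j, leaves free indices m, n
indexed = np.einsum('ij,im,jn->mn', M, B, B)

# the matrix translation: a transpose is needed on the first B factor
assert np.allclose(indexed, B.T @ M @ B)
```

The einsum index string plays the role of the index notation itself; the matrix form is just the special case where the summed indices happen to sit in adjacent row/column positions.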

Chapter 1 Foundations of tensor algebra

1.1 Index conventions

We need an efficient abbreviated notation to handle the complexity of mathematical structure before us. We will use indices of a given type to denote all possible values of given index ranges. By index type we mean a collection of similar letter types, like those from the beginning or middle of the Latin alphabet, or Greek letters

$$a, b, c, \ldots \qquad i, j, k, \ldots \qquad \alpha, \beta, \gamma, \ldots$$

each index of which is understood to have a given common range of successive integer values. Variations of these might be barred or primed letters or capital letters. For example, suppose we are looking at linear transformations between R^n and R^m where m and n differ. We would need two different index ranges to denote vector components in the two vector spaces of different dimensions, say i, j, k, ... = 1, 2, ..., n and alpha, beta, gamma, ... = 1, 2, ..., m.

In order to introduce the so called Einstein summation convention, we agree to the following limitations on how indices may appear in formulas. A given index letter may occur only once in a given term in an expression (call this a "free index"), in which case the expression is understood to stand for the set of all such expressions for which the index assumes its allowed values, or it may occur twice but only as a superscript-subscript pair (one up, one down), which will stand for the sum over all allowed values (call this a "repeated index"). Here are some examples. If i, j = 1, ..., n, then:

- $A^i$: n expressions $A^1, A^2, \ldots, A^n$
- $A^i{}_i$: $\sum_{i=1}^{n} A^i{}_i$, a single expression with n terms
- $A^{ji}{}_i$: $\sum_{i=1}^{n} A^{1i}{}_i, \ldots, \sum_{i=1}^{n} A^{ni}{}_i$, n expressions each of which has n terms in the sum
- $A^{ii}$: no sum, just an expression for each i, if we want to refer to a specific diagonal component (entry) of a matrix, for example
- $A_i(v^i + w^i) = A_i v^i + A_i w^i$: 2 sums of n terms each, or one combined sum
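Readers who like to experiment can mirror each case with numpy.einsum, which implements the summation convention directly (an illustrative sketch, not part of the original notes):

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)      # a 2-index array standing in for A^i_j
T = np.arange(27.0).reshape(3, 3, 3)  # a 3-index array standing in for A^{ji}_k

trace = np.einsum('ii->', A)      # A^i_i : a single number, n terms
diag = np.einsum('ii->i', A)      # A^{ii}: no sum, one entry for each i
partial = np.einsum('jii->j', T)  # A^{ji}_i : n expressions of n terms each

a, v, w = np.array([1.0, 2.0, 3.0]), np.ones(3), np.arange(3.0)
lhs = np.einsum('i,i->', a, v + w)                       # A_i (v^i + w^i)
rhs = np.einsum('i,i->', a, v) + np.einsum('i,i->', a, w)
assert np.isclose(lhs, rhs)       # the two sums combine into one
```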

A repeated index is a dummy index, like the dummy variable in a definite integral $\int_a^b f(x)\,dx = \int_a^b f(u)\,du$. We can change them at will: $A^i{}_i = A^j{}_j$.

1.2 A vector space V

Let V be an n-dimensional real vector space. Elements of this space are called vectors. Ordinary real numbers (let R denote the set of real numbers) will be called scalars and denoted by a, b, c, ..., while vectors will be denoted by various symbols depending on the context: u, v, w or $u_{(1)}, u_{(2)}, \ldots$, where here the parentheses indicate that the subscripts are only labeling the vectors in an ordered set of vectors, to distinguish them from component indices. Sometimes X, Y, Z, W are convenient vector symbols. The basic structure of a real vector space is that it has two operations defined, vector addition and scalar multiplication, which can then be combined together to perform linear combinations of vectors:

- vector addition: the sum of two vectors u + v is again a vector in the space,
- scalar multiplication: the product cu of a scalar c and a vector u is again a vector in the space, called a scalar multiple of the vector,

so that linear combinations of two or more vectors with scalar coefficients au + bv are defined. These operations satisfy a list of properties that we take for granted when working with sums and products of real numbers alone, i.e., the set of real numbers R thought of as a 1-dimensional vector space.

A basis of V, denoted by $\{e_i\}$, i = 1, 2, ..., n, or just $\{e_i\}$, where it is understood that a free index (meaning not repeated and therefore not summed over) like the i in this expression will assume all of its possible values, is a linearly independent spanning set for V:

1. spanning condition: Any vector $v \in V$ can be represented as a linear combination of the basis vectors $v = \sum_{i=1}^{n} v^i e_i = v^i e_i$, whose coefficients $v^i$ are called the components of v with respect to $\{e_i\}$. The index i on $v^i$ labels the components (coefficients), while the index i on $e_i$ labels the basis vectors.

2. linear independence: If $v^i e_i = 0$, then $v^i = 0$ (i.e., more explicitly, if $v = \sum_{i=1}^{n} v^i e_i = 0$, then $v^i = 0$ for all i = 1, 2, ..., n).
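To make the two conditions concrete, here is a small NumPy sketch (not from the book) computing the components of a vector with respect to a non-standard basis of R^3, chosen arbitrarily, by solving the spanning condition:

```python
import numpy as np

# a basis of R^3 packed as the columns of E (linearly independent: det != 0)
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([2.0, 3.0, 5.0])

comps = np.linalg.solve(E, v)      # components v^i with respect to {e_i}
assert np.allclose(E @ comps, v)   # spanning: v = v^i e_i
assert abs(np.linalg.det(E)) > 0   # independence: only v^i = 0 gives 0
print(comps)                       # [ 4. -2.  5.]
```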

Example. $V = \mathbb{R}^n = \{u = (u^1, \ldots, u^n) = (u^i) \mid u^i \in \mathbb{R}\}$, the space of n-tuples of real numbers with the natural basis $e_1 = (1, 0, \ldots, 0)$, $e_2 = (0, 1, \ldots, 0)$, ..., $e_n = (0, 0, \ldots, 1)$, which we will refer to as the standard basis or natural basis. In R^3, these basis vectors are customarily denoted by $\vec\imath, \vec\jmath, \vec k$. When we want to distinguish the vector properties of R^n from its point properties, we will emphasize the difference by using angle brackets instead of parentheses: $\langle u^1, u^2, u^3\rangle$. In the context of matrix calculations, this representation of a vector will be understood to be a column matrix. As a set of points, R^n has a natural set of Cartesian coordinate functions $x^i$ which pick out the i-th entry in an n-tuple, for example on R^3: $x^1((a^1, a^2, a^3)) = a^1$, etc. These are linear functions on the space. Interpreting the points as vectors, these coordinate functions pick out the individual components of the vectors with respect to the standard basis.

Any two n-dimensional vector spaces are isomorphic. This just means there is some invertible map from one to the other, say $\Phi : V \to W$, and it does not matter whether the vector operations (vector sum and scalar multiplication, i.e., linear combination, which encompasses them both) are done before or after using the map: $\Phi(au + bv) = a\Phi(u) + b\Phi(v)$. The practical implication of this rather abstract statement is that once you establish a basis in any n-dimensional vector space V, the n-tuples of components of vectors with respect to this basis undergo the usual vector operations in R^n when the vectors they represent undergo the vector operations in V. For example, the set of at most quadratic polynomial functions in a single variable $ax^2 + bx + c = a(x^2) + b(x) + c(1)$ has the natural basis $\{1, x, x^2\}$, and under linear combination of these functions the triplet of coordinates (c, b, a) (coefficients ordered by increasing powers) undergoes the corresponding linear combination as vectors in R^3. We might as well just work in R^3 to visualize relationships between vectors in the original abstract space.

Exercise. By expanding at most quadratic polynomial functions in a Taylor series about x = 1, one expresses these functions in the new basis $\{(x-1)^p\}$, p = 0, 1, 2, say as $A(x-1)^2 + B(x-1) + C(1)$. Express (c, b, a) as linear functions of (C, B, A) by expanding out this latter expression. Then solve these relations for the inverse expressions, giving (C, B, A) as functions of (c, b, a), and express both relationships in matrix form, showing explicitly the coefficient matrices. Alternatively, actually evaluate (C, B, A) in terms of (c, b, a) using the Taylor series expansion technique. Make a crude drawing of the three new basis vectors in R^3 which correspond to the new basis functions, or use technology to draw them.

Exercise. An antisymmetric matrix is a square matrix which reverses sign under the transpose operation: $A^T = -A$.

Any 3 x 3 antisymmetric matrix has the form

$$A = \begin{pmatrix} 0 & -a_3 & a_2\\ a_3 & 0 & -a_1\\ -a_2 & a_1 & 0 \end{pmatrix} = a_1 E_1 + a_2 E_2 + a_3 E_3,$$

where the $E_i$ are the three constant antisymmetric matrices multiplying the coefficients $a_i$ in this expansion. The space of all such matrices is a 3-dimensional vector space with basis $\{E_i\}$, since it is defined as the span of this set of vectors (hence a subspace of the vector space of 3 x 3 matrices), and setting the linear combination equal to the zero matrix forces all the coefficients to be zero, proving the linear independence of this set of vectors (which is therefore a linearly independent spanning set).

a) Show that matrix multiplication of a vector in R^3 by such a matrix A is equivalent to taking the cross product with the corresponding vector $\vec a = \langle a_1, a_2, a_3\rangle$: $A\vec b = \vec a \times \vec b$.

b) Although the result of two successive cross products $\vec a \times (\vec b \times \vec u)$ is not equivalent to a single cross product $\vec c \times \vec u$, the difference of two such successive cross products is. Confirm the matrix product identity $AB - BA = (\vec a \times \vec b)^i E_i$, where $B = b^i E_i$ is the antisymmetric matrix corresponding to $\vec b$. Then by part a) it follows that $(AB - BA)\vec u = (\vec a \times \vec b) \times \vec u$.

c) Use the matrix distributive law to fill in the one further step which then proves the vector identity $\vec a \times (\vec b \times \vec u) - \vec b \times (\vec a \times \vec u) = (\vec a \times \vec b) \times \vec u$.

Example. The field C of complex numbers is a 2-dimensional real vector space isomorphic to R^2 through the isomorphism $z = x + iy \mapsto (x, y)$, which associates the basis $\{1, i\}$ with the standard basis $\{e_1 = (1, 0), e_2 = (0, 1)\}$.

A p-dimensional linear subspace of a vector space V can be represented as the set of all possible linear combinations of a set of p linearly independent vectors, and such a subspace results from the solution of a set of linear homogeneous conditions on the variable components of a vector variable expressed in some basis. Thus if $\underline{x} = \langle x^1, \ldots, x^n\rangle$ is the column matrix of components of an unknown vector in V with respect to a basis $\{e_i\}$, and A is an m x n matrix of rank m (i.e., the rows are linearly independent), the solution space of $A\underline{x} = 0$ will be a (p = n - m)-dimensional subspace, since m < n independent conditions on n variables leave n - m variables freely specifiable. In R^3, these are the lines (p = 1) and planes (p = 2) through the origin. In higher dimensional R^n spaces, the (n - 1)-dimensional subspaces are called hyperplanes in analogy with the ordinary planes in the case n = 3, and we can refer to p-planes through the origin for the values of p between 2 and n - 1.
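A numerical spot-check of the antisymmetric matrix exercise is easy to set up; this sketch (not from the book, and no substitute for the hand calculation) assumes the sign conventions of the matrix displayed above:

```python
import numpy as np

def hat(a):
    """The antisymmetric matrix A with A @ b = a x b (cross product)."""
    return np.array([[0.0,  -a[2],  a[1]],
                     [a[2],  0.0,  -a[0]],
                     [-a[1], a[0],  0.0]])

a, b, u = np.random.default_rng(1).standard_normal((3, 3))
A, B = hat(a), hat(b)

assert np.allclose(A @ b, np.cross(a, b))               # part a)
assert np.allclose(A @ B - B @ A, hat(np.cross(a, b)))  # part b)
assert np.allclose(np.cross(a, np.cross(b, u))
                   - np.cross(b, np.cross(a, u)),
                   np.cross(np.cross(a, b), u))         # part c)
```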

Elementary linear algebra: solving systems of linear equations

It is worth remembering the basic problem of elementary linear algebra: solving m linear equations in n unknowns or variables $x^i$, which is most efficiently handled with matrix notation

$$A^1{}_1 x^1 + \cdots + A^1{}_n x^n = b^1, \quad \ldots, \quad A^m{}_1 x^1 + \cdots + A^m{}_n x^n = b^m, \qquad\text{i.e.,}\qquad A\underline{x} = \underline{b}.$$

The interpretation of the problem requires a slight shift in emphasis to the n columns $u_{(i)} \in \mathbb{R}^m$ of the coefficient matrix by defining $u^i{}_{(j)} = A^i{}_j$, or $A = \langle u_{(1)} \,|\, \cdots \,|\, u_{(n)}\rangle$. Then this is equivalent to setting a linear combination of these columns equal to the right hand side vector $\underline{b} = \langle b^1, \ldots, b^m\rangle \in \mathbb{R}^m$:

$$A\underline{x} = x^1 u_{(1)} + \cdots + x^n u_{(n)} = \underline{b}.$$

If b = 0, the homogeneous case, this is equivalent to trying to find a linear relationship among the n column vectors, namely a linear combination of them equal to the zero vector whose coefficients are not all zero; then for each nonzero coefficient, one can solve for the vector it multiplies and express it as a linear combination of the remaining vectors in the set. When no such relationship exists among the vectors, they are called linearly independent; otherwise they are called linearly dependent. The span (set of all possible linear combinations) of the set of these column vectors is called the column space Col(A) of the coefficient matrix A. If b is nonzero, then the system admits a solution only if b belongs to the column space, and is inconsistent if not. If b is nonzero and the vectors are linearly independent, then if the system admits a solution, it is unique. If they are not linearly independent, then the solution is not unique but involves a number of free parameters.

The solution technique is row reduction, involving a sequence of elementary row operations of three types: adding a multiple of one row to another row, multiplying a row by a nonzero number, and interchanging two rows. These row operations correspond to taking new independent combinations of the equations in the system, or scaling a particular equation, or changing their order, none of which changes the solution of the system. The row reduced echelon form $\langle A_R \,|\, b_R\rangle$ of the augmented matrix $\langle A \,|\, b\rangle$ leads to an equivalent ("reduced") system of equations $A_R \underline{x} = \underline{b}_R$ which is easily solved. The row reduced echelon form has all the zero rows (if any) at the bottom of the matrix, the leading (first from left to right) entry of each nonzero row is 1, the columns containing those leading 1 entries (the leading columns) have zero entries above and below those leading 1 entries, and finally the pattern of leading 1 entries moves down and to the right, i.e., the leading entry of the next nonzero row is to the right of a preceding leading entry. The leading 1 entries of the matrix are also called the pivot entries, and the corresponding columns, the pivot columns. A pivot consists of the set of add row operations which makes the remaining entries of a pivot column zero. The number of nonzero rows of the reduced augmented matrix is called the rank of the augmented matrix and represents the number of independent equations in the original set. The number of nonzero rows of the reduced coefficient matrix alone is called its rank: $r = \mathrm{rank}(A) \le m$, and equals the number of leading 1 entries in $A_R$, in turn the number of leading 1 columns of $A_R$. The remaining $n - r \ge n - m$ columns are called free columns. This classification of the columns of the reduced coefficient matrix into leading and free columns is extended to the original coefficient matrix.

The associated variables of the system of linear equations then fall into two groups, the leading variables (r in number) and the free variables (n - r in number), since each variable corresponds to one of the columns of the coefficient matrix. Each leading variable can immediately be solved for in its corresponding reduced system equation and expressed in terms of the free variables, whose values are then not constrained and may take any real values. Setting the n - r free variables equal to arbitrary parameters $t^B$, B = 1, ..., n - r, leads to a solution in the form

$$x^i = x^i_{\mathrm{(particular)}} + t^B v^i_{(B)}.$$

The particular solution satisfies $A \underline{x}_{\mathrm{(particular)}} = \underline{b}$, while the remaining part is the general solution of the related homogeneous linear system for which b = 0, an (n - r)-dimensional subspace Null(A) of R^n called the null space of the matrix A, since it consists of those vectors which are taken to zero under multiplication by that matrix:

$$A(t^B v_{(B)}) = t^B (A v_{(B)}) = 0.$$

This form of the solution defines a basis $\{v_{(B)}\}$ of the null space, since by definition any solution of the homogeneous equations can be expressed as a linear combination of them, and if such a linear combination is zero, every parameter $t^B$ is forced to be zero, so they are linearly independent. This basis of coefficient vectors $\{v_{(B)}\} \subset \mathbb{R}^n$ is really a basis of the space of linear relationships among the original n vectors $\{u_{(1)}, \ldots, u_{(n)}\}$, each one representing the coefficients of an independent linear relationship among those vectors:

$$0 = A^j{}_i v^i{}_{(B)} = v^i{}_{(B)} u^j{}_{(i)}.$$

In fact these relationships correspond to the fact that each free column of the reduced matrix can be expressed as a linear combination of the leading columns which precede it going from left to right in the matrix, and the same linear relationships apply to the original set of vectors (since the coefficients $x^i$ of the solution space are the same!). Thus one can remove the free columns from the original set of vectors to get a basis of the column space of the matrix consisting of its r leading columns, so the dimension of the column space is the rank r of the matrix.

By introducing the row space of the coefficient matrix Row(A), a subspace of R^n consisting of all possible linear combinations of the rows of the matrix, the row reduction process can be interpreted as finding a basis of this subspace that has a certain characteristic form: the r nonzero rows of the reduced matrix. The dimension of the row space is thus equal to the rank r of the matrix. Each equation of the original system corresponding to each (nonzero) row of the coefficient matrix separately has a solution space which represents a hyperplane in R^n, namely an (n - 1)-dimensional subspace. Re-interpreting the linear combination of the variables as a dot product with the row vector, in the homogeneous case these hyperplanes consist of all vectors orthogonal to the original row vector, and the joint solution of all the equations of the system is the subspace which is orthogonal to the entire row space, namely the orthogonal complement of the row space within R^n. Thus Null(A) and Row(A) decompose the total space R^n into an orthogonal decomposition with respect to the dot product, and the solution algorithm for the homogeneous linear system provides a basis of each such subspace.
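All of these subspaces can be read off by machine; the book's worksheets use Maple, but the corresponding computation in SymPy, on an arbitrarily chosen small matrix, looks like this:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0, 3],
               [2, 4, 1, 8],
               [0, 0, 1, 2]])

A_R, pivots = A.rref()       # row reduced echelon form and pivot columns
rank = len(pivots)           # r = dim Col(A) = dim Row(A)
null_basis = A.nullspace()   # basis {v_(B)} of Null(A), dim = n - r

print(A_R, pivots)           # pivots -> the leading columns, (0, 2) here
print(rank, len(null_basis)) # rank + nullity = number of columns (2 + 2 = 4)
```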

Left multiplication of A by a row matrix of variables $y^T = \langle y_1 \,|\, \cdots \,|\, y_m\rangle$ yields a row matrix, so one can consider the transposed linear system in which that product is set equal to a constant row vector $c^T = \langle c_1 \,|\, \cdots \,|\, c_m\rangle$:

$$y^T A = c^T, \qquad\text{or}\qquad A^T y = c.$$

This is the linear system of equations associated with the transpose of the matrix, which interchanges rows and columns and hence the row space and column space

$$\mathrm{Row}(A^T) = \mathrm{Col}(A), \qquad \mathrm{Col}(A^T) = \mathrm{Row}(A),$$

but adds one more space $\mathrm{Null}(A^T)$, which can be interpreted as the subspace orthogonal to $\mathrm{Row}(A^T) = \mathrm{Col}(A)$, hence determining an orthogonal decomposition of R^m as well.

Example. Consider the augmented matrix $\langle A \,|\, b\rangle$ and its row reduced echelon form $\langle A_R \,|\, b_R\rangle$ for 5 equations in 7 unknowns (the explicit numerical matrices and solution columns are omitted here), with solution of the form

$$\underline{x} = \underline{x}_{\mathrm{(particular)}} + t^B \underline{v}_{(B)}.$$

The rank of the 5 x 7 coefficient matrix (and of the 5 x 8 augmented matrix) is r = 4, with 4 leading variables $\{x^1, x^3, x^6, x^7\}$ and 3 free variables $\{x^2, x^4, x^5\}$. By inspection one sees that the 2nd, 4th, and 5th columns are linear combinations of the preceding leading columns with coefficients which are exactly the entries of those columns. The same linear relationships apply to the original matrix, so columns 1, 3, 6, 7 of the coefficient matrix $A = \langle u_{(1)} \,|\, \cdots \,|\, u_{(7)}\rangle$, namely $\{u_{(1)}, u_{(3)}, u_{(6)}, u_{(7)}\}$, are a basis of the column space Col(A), a subspace of R^5. The 4 nonzero rows of the reduced coefficient matrix $A_R$ are a basis of the row space Row(A), a subspace of R^7. The three columns $\{v_{(1)}, v_{(2)}, v_{(3)}\}$ appearing in the solution vector $\underline{x}$ multiplied by the arbitrary parameters $\{t^1, t^2, t^3\}$ are a basis of the homogeneous solution space Null(A), also a subspace of R^7. Together these 7 vectors form a basis of R^7. One concludes that the right hand side vector $b \in \mathbb{R}^5$ can be expressed in the form

$$b = x^i u_{(i)} = x^i_{\mathrm{(particular)}} u_{(i)} + t^B v^i{}_{(B)} u_{(i)} = x^i_{\mathrm{(particular)}} u_{(i)} = 2 u_{(1)} + 3 u_{(3)} + 6 u_{(7)}$$

since the homogeneous part of the solution forms the zero vector from its linear combination of the original columns. Notice that the fifth column $u_{(5)} = 0$; the zero vector makes any set of vectors trivially linearly dependent, so $t^3$ is a trivial parameter and $v_{(3)}$ represents that trivial linear relationship. Thus there are only two independent relationships among the 6 nonzero columns of A. The row space of the transpose, $\mathrm{Row}(A^T) = \mathrm{Col}(A)$, is a 4-dimensional subspace of R^5. If one row reduces the 7 x 5 transpose matrix $A^T$, the 4 nonzero rows of the reduced matrix are a basis of this space, and one finds one free variable and a single basis vector $\langle 258, 166, 165, 96, 178\rangle / 178$ for the 1-dimensional subspace $\mathrm{Null}(A^T)$, which is the orthogonal subspace to that 4-dimensional subspace of R^5.

Don't worry. We will not need the details of row and column spaces in what follows, so if your first introduction to linear algebra stopped short of this topic, don't despair.

Example. We can also consider multiple linear systems with the same coefficient matrix. For example, consider the two linearly independent vectors $X_{(1)} = \langle 1, 3, 2\rangle$, $X_{(2)} = \langle 2, 3, 1\rangle$, which span a plane through the origin in R^3, and let

$$X = \langle X_{(1)} \,|\, X_{(2)}\rangle = \begin{pmatrix} 1 & 2\\ 3 & 3\\ 2 & 1 \end{pmatrix}.$$

Clearly the sum $X_{(1)} + X_{(2)} = \langle 3, 6, 3\rangle$ and difference $X_{(2)} - X_{(1)} = \langle 1, 0, -1\rangle$ vectors are a new basis of the same subspace (since they are not proportional), so if we try to express each of them in turn as linear combinations of the original basis vectors, we know already the unique solutions for each:

$$\begin{pmatrix} 1 & 2\\ 3 & 3\\ 2 & 1 \end{pmatrix}\begin{pmatrix} u^1\\ u^2 \end{pmatrix} = \begin{pmatrix} 3\\ 6\\ 3 \end{pmatrix}, \quad \begin{pmatrix} 1 & 2\\ 3 & 3\\ 2 & 1 \end{pmatrix}\begin{pmatrix} v^1\\ v^2 \end{pmatrix} = \begin{pmatrix} 1\\ 0\\ -1 \end{pmatrix}, \qquad \begin{pmatrix} u^1\\ u^2 \end{pmatrix} = \begin{pmatrix} 1\\ 1 \end{pmatrix}, \quad \begin{pmatrix} v^1\\ v^2 \end{pmatrix} = \begin{pmatrix} -1\\ 1 \end{pmatrix}.$$

Clearly from the definition of matrix multiplication, we can put these two linear systems together as

$$\begin{pmatrix} 1 & 2\\ 3 & 3\\ 2 & 1 \end{pmatrix}\begin{pmatrix} u^1 & v^1\\ u^2 & v^2 \end{pmatrix} = \begin{pmatrix} 3 & 1\\ 6 & 0\\ 3 & -1 \end{pmatrix},$$

which has the form X Z = Y, where X is the 3 x 2 coefficient matrix, Y is the 3 x 2 right hand side matrix, and Z is the unknown 2 x 2 matrix whose columns tell us how to express the vectors $Y_{(1)}, Y_{(2)}$ as linear combinations of the vectors $X_{(1)}, X_{(2)}$. Of course here we know the unique solution is

$$Z = \begin{pmatrix} 1 & -1\\ 1 & 1 \end{pmatrix},$$

a matrix which together with its inverse can be used to transform the components of vectors from one basis to the other.
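Multiple right hand side systems like X Z = Y can be solved in one call; a sketch (NumPy rather than the book's Maple) using the numbers of this example, with lstsq handling the non-square coefficient matrix:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 3.0],
              [2.0, 1.0]])                 # columns X_(1), X_(2)
Y = np.column_stack([X[:, 0] + X[:, 1],    # sum vector <3, 6, 3>
                     X[:, 1] - X[:, 0]])   # difference vector <1, 0, -1>

Z, *_ = np.linalg.lstsq(X, Y, rcond=None)  # solves X Z = Y (overdetermined)
print(np.round(Z))                         # [[ 1. -1.] [ 1.  1.]]
```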

In other words, it is sometimes useful to generalize the simple linear system $A\underline{x} = \underline{b}$ to an unknown matrix X of more than one column, $AX = B$, when the right hand side matrix is more than one column:

$$\underbrace{A}_{m \times n}\ \underbrace{X}_{n \times p} = \underbrace{B}_{m \times p}.$$

Elementary linear algebra: the eigenvalue problem and linear transformations

The next step in elementary linear algebra is to understand how a square n x n matrix acts on R^n by matrix multiplication as a linear transformation of the space into itself

$$\underline{x} \mapsto A\underline{x}, \qquad x^i \mapsto A^i{}_j x^j,$$

which maps each vector $\underline{x}$ to the new location $A\underline{x}$. Under this mapping the standard basis vectors $e_i$ are mapped to the new vectors $A e_i$, each of which can be expressed as a unique linear combination of the basis vectors with coefficients $A^j{}_i$, hence the index notation

$$e_i \mapsto A e_i = e_j A^j{}_i,$$

which makes those coefficients for each value of i into the columns of the matrix A. To understand how this matrix multiplication moves around the vectors in the space, one looks for special directions ("eigendirections") along which matrix multiplication reduces to scalar multiplication, i.e., subspaces along which the direction of the new vectors remains parallel to their original directions (although they might reverse direction)

$$A\underline{x} = \lambda \underline{x}, \qquad \underline{x} \neq 0,$$

which defines a proportionality factor lambda called the eigenvalue associated with the eigenvector $\underline{x}$, which must be nonzero to have a direction to speak about. This eigenvector condition is equivalent to

$$(A - \lambda I)\underline{x} = A\underline{x} - \lambda \underline{x} = 0.$$

In order for the square matrix $A - \lambda I$ to admit nonzero solutions it must row reduce to a matrix which has at least one free variable and hence at least one zero row, and hence zero determinant, so a necessary condition for finding an eigenvector is that the characteristic equation

$$\det(A - \lambda I) = 0$$

is satisfied by the eigenvalue. The roots of this n-th degree polynomial are the eigenvalues of the matrix, and once found can be separately backsubstituted into the linear system to find the solution space which defines the corresponding eigenspace. The row reduction procedure provides a default basis of this eigenspace, i.e., a set of linearly independent eigenvectors for each eigenvalue. It is easily shown that eigenvectors corresponding to distinct eigenvalues are linearly independent, so this process leads to a basis of the subspace of R^n spanned by all these eigenspace bases. If they are n in number, this is a basis of the whole space and the matrix can be diagonalized. Let $B = \langle b_1 \,|\, \cdots \,|\, b_n\rangle$ be the matrix whose columns are such an eigenbasis of R^n, with $A b_i = \lambda_i b_i$ (no sum). In other words, define $B^j{}_i = b^j{}_i$ as the j-th component of the i-th eigenvector. Then

$$AB = \langle A b_1 \,|\, \cdots \,|\, A b_n\rangle = \langle \lambda_1 b_1 \,|\, \cdots \,|\, \lambda_n b_n\rangle = \langle b_1 \,|\, \cdots \,|\, b_n\rangle \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix},$$

where the latter diagonal matrix multiplies each column by its corresponding eigenvalue, so that

$$B^{-1} A B = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix} \equiv A_B$$

is a diagonal matrix whose diagonal elements are the eigenvalues, listed in the same order as the corresponding eigenvectors. Thus (multiplying this equation on the left by B and on the right by $B^{-1}$) the matrix A can be represented in the form $A = B A_B B^{-1}$.

This matrix transformation has a simple interpretation in terms of a linear transformation of the Cartesian coordinates of the space, expressing the old coordinates $x^i$ (with respect to the standard basis) as linear combinations of the new basis vectors $b_j$ whose coefficients are the new coordinates, $x^i = y^j b^i{}_j = B^i{}_j y^j$, which takes the matrix form

$$\underline{x} = B \underline{y}, \quad x^i = B^i{}_j y^j, \qquad\qquad \underline{y} = B^{-1} \underline{x}, \quad y^i = B^{-1\,i}{}_j x^j.$$

The first pair expresses the old coordinates as linear functions of the new Cartesian coordinates $y^i$. Inverting this relationship by multiplying both sides of the first matrix equation by $B^{-1}$, one arrives at the second pair, which instead expresses the new coordinates as linear functions of the old coordinates. Then under matrix multiplication of the old coordinates by A, namely $\underline{x} \mapsto A\underline{x}$, the new coordinates are mapped to

$$y^i = B^{-1\,i}{}_j x^j \mapsto B^{-1\,i}{}_j (A^j{}_k x^k) = B^{-1\,i}{}_j A^j{}_k B^k{}_m y^m = [A_B]^i{}_m y^m,$$

so $A_B$ is just the new matrix of the linear transformation with respect to the new basis of eigenvectors. In the eigenbasis, matrix multiplication is reduced to distinct scalar multiplications along each eigenvector, which may be interpreted as a contraction $0 \le \lambda_i < 1$ or a stretch $1 < \lambda_i$ (but no change if $\lambda_i = 1$), combined with a change in direction (reflection) if the eigenvalue is negative, $\lambda_i < 0$. Not all square matrices can be diagonalized in this way.

For example, rotations occur in the interesting case in which one cannot find enough independent (real) eigenvectors to form a complete basis, but correspond instead to complex conjugate pairs of eigenvectors. Don't worry. We will not need to deal with the eigenvector problem in most of what follows, except in passing for symmetric matrices $A = A^T$, which can always be diagonalized by an orthogonal matrix B. However, the change of basis example is fundamental to everything we will do.

Example. Consider the matrix

$$A = \begin{pmatrix} 1 & 4\\ 2 & 3 \end{pmatrix} = B A_B B^{-1}, \qquad A_B = \begin{pmatrix} 5 & 0\\ 0 & -1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 2\\ 1 & -1 \end{pmatrix}, \quad B^{-1} = \frac{1}{3}\begin{pmatrix} 1 & 2\\ 1 & -1 \end{pmatrix}.$$

Under matrix multiplication by A, the first eigenvector $b_1 = \langle 1, 1\rangle$ is stretched by a factor of 5, while the second one $b_2 = \langle 2, -1\rangle$ is reversed in direction. As shown in Figure 1.1, this reflects the letter F across the $y^1$ axis and then stretches it in the $y^1$ direction by a factor of 5.

Figure 1.1: The action of a linear transformation on a figure shown with a grid adapted to the new basis of eigenvectors. Vectors are stretched by a factor 5 along the $y^1$ direction and reflected across that direction along the $y^2$ direction.
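A numerical replay of this example; note that the matrix entries above and below are reconstructed from the stated eigenvectors and eigenvalues (the printed display was garbled in this transcription), so treat them as illustrative:

```python
import numpy as np

B = np.array([[1.0, 2.0],
              [1.0, -1.0]])   # columns: eigenvectors b_1, b_2
A_B = np.diag([5.0, -1.0])    # eigenvalues in matching order

A = B @ A_B @ np.linalg.inv(B)                   # A = B A_B B^{-1}
assert np.allclose(A @ B[:, 0], 5.0 * B[:, 0])   # b_1 stretched by 5
assert np.allclose(A @ B[:, 1], -1.0 * B[:, 1])  # b_2 reversed

evals, evecs = np.linalg.eig(A)   # recovers 5 and -1 (up to ordering)
print(A, evals)
```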

1.3 The dual space V*

Let V* be the dual space of V, just a fancy name for the space of real-valued linear functions on V; elements of V* are called covectors. (Sometimes I will slip and call them 1-forms in the same sense that one sometimes speaks of a linear form or a quadratic form on a vector space.) The condition of linearity for $f \in V^*$ is

linearity condition: $f(au + bv) = a f(u) + b f(v)$,

or in words: the value on a linear combination = the linear combination of the values. This easily extends to linear combinations with any number of terms; for example

$$f(v) = f\left(\sum_{i=1}^{N} v^i e_i\right) = \sum_{i=1}^{N} v^i f(e_i),$$

where the coefficients $f_i \equiv f(e_i)$ are the components of a covector with respect to the basis $\{e_i\}$, or in our shorthand notation

$$f(v) = f(v^i e_i) \qquad \text{(express in terms of basis)}$$
$$= v^i f(e_i) \qquad \text{(linearity)}$$
$$= v^i f_i. \qquad \text{(definition of components)}$$

A covector f is entirely determined by its values $f_i$ on the basis vectors, namely its components with respect to that basis. Our linearity condition is usually presented separately as a pair of separate conditions on the two operations which define a vector space:

- sum rule: the value of the function on a sum of vectors is the sum of the values, f(u + v) = f(u) + f(v),
- scalar multiple rule: the value of the function on a scalar multiple of a vector is the scalar times the value on the vector, f(cu) = c f(u).

Example. In the usual calculus notation on R^3, with Cartesian coordinates $(x^1, x^2, x^3) = (x, y, z)$, linear functions are of the form f(x, y, z) = ax + by + cz, but a function with an extra additive term g(x, y, z) = ax + by + cz + d is called linear as well. Only linear homogeneous functions (no additive term) satisfy the basic linearity property f(au + bv) = a f(u) + b f(v). Unless otherwise indicated, the term linear here will always be intended in its narrow meaning of linear homogeneous.

Warning: In this example, the variables (x, y, z) in the defining statement f(x, y, z) = ax + by + cz are simply placeholders for any three real numbers in the equation, while the Cartesian coordinate functions denoted by the same symbols are instead the names of three independent (linear) functions on the vector space whose values on any triplet of numbers are just the corresponding number from the triplet: y(1, 2, 3) = 2, for example.

To emphasize that it is indeed a function of the vector u = (1, 2, 3), we might also write this as y(u) = y((1, 2, 3)) = 2, or even $y(\langle 1, 2, 3\rangle)$ if we adopt the vector delimiters $\langle\ ,\ \rangle$ instead of the point delimiters ( , ). Notation is extremely important in conveying mathematical meaning, but we only have so many symbols to go around, so flexibility in interpretation is also required.

The dual space V* is itself an n-dimensional vector space, with linear combinations of covectors defined in the usual way that one can take linear combinations of any functions, i.e., in terms of values:

covector addition: $(af + bg)(v) \equiv a f(v) + b g(v)$, f, g covectors, v a vector.

Exercise. Show that this defines a linear function af + bg, so that the space is closed under this linear combination operation. [All the other vector space properties of V* are inherited from the linear structure of V.] In other words, show that if f, g are linear functions, satisfying our linearity condition, then $c_1 f + c_2 g$ also satisfies the linearity condition for linear functions.

Let us produce a basis for V*, called the dual basis $\{\omega^i\}$, or the basis dual to $\{e_i\}$, by defining n covectors which satisfy the following duality relations

$$\omega^i(e_j) = \delta^i{}_j \equiv \begin{cases} 1 & \text{if } i = j,\\ 0 & \text{if } i \neq j, \end{cases}$$

where the symbol $\delta^i{}_j$ is called the Kronecker delta, nothing more than a symbol for the components of the n x n identity matrix $I = (\delta^i{}_j)$. We then extend them to any other vector by linearity. Then by linearity

$$\omega^i(v) = \omega^i(v^j e_j) \qquad \text{(expand in basis)}$$
$$= v^j \omega^i(e_j) \qquad \text{(linearity)}$$
$$= v^j \delta^i{}_j \qquad \text{(duality)}$$
$$= v^i, \qquad \text{(Kronecker delta definition)}$$

where the last equality follows since for each i, only the term with j = i in the sum over j contributes to the sum. Alternatively, matrix multiplication of a vector on the left by the identity matrix, $\delta^i{}_j v^j = v^i$, does not change the vector. Thus the calculation shows that the i-th dual basis covector $\omega^i$ picks out the i-th component $v^i$ of a vector v.

Notice that a Greek letter has been introduced for the covectors $\omega^i$, partially following a convention that distinguishes vectors and covectors using Latin and Greek letters, but this convention is obviously incompatible with our more familiar calculus notation in which f denotes a function, so we limit it to our conventional symbol for the dual basis associated with a starting basis $\{e_i\}$.

Why do the n covectors $\{\omega^i\}$ form a basis of V*? We can easily show that the two conditions for a basis are satisfied.

1. spanning condition: Using linearity and the definition $f_i = f(e_i)$, this calculation shows that every linear function f can be written as a linear combination of these covectors:

$$f(v) = f(v^i e_i) \qquad \text{(expand in basis)}$$
$$= v^i f(e_i) \qquad \text{(linearity)}$$
$$= v^i f_i \qquad \text{(definition of components)}$$
$$= v^i \delta^j{}_i f_j \qquad \text{(Kronecker delta definition)}$$
$$= v^i \omega^j(e_i) f_j \qquad \text{(dual basis definition)}$$
$$= (f_j \omega^j)(v^i e_i) \qquad \text{(linearity)}$$
$$= (f_j \omega^j)(v). \qquad \text{(expansion in basis, in reverse)}$$

Thus f and $f_i \omega^i$ have the same value on every $v \in V$, so they are the same function: $f = f_i \omega^i$, where $f_i = f(e_i)$ are the components of f with respect to the basis $\{\omega^i\}$ of V*, also said to be the components of f with respect to the basis $\{e_i\}$ of V already introduced above. The index i on $f_i$ labels the components of f, while the index i on $\omega^i$ labels the dual basis covectors.

2. linear independence: Suppose $f_i \omega^i = 0$ is the zero covector. Then evaluating each side of this equation on $e_j$ and using linearity,

$$0 = 0(e_j) \qquad \text{(zero scalar = value of zero linear function)}$$
$$= (f_i \omega^i)(e_j) \qquad \text{(expand zero covector in basis)}$$
$$= f_i\, \omega^i(e_j) \qquad \text{(definition of linear combination function value)}$$
$$= f_i \delta^i{}_j \qquad \text{(duality)}$$
$$= f_j \qquad \text{(Kronecker delta definition)}$$

forces all the coefficients of $\omega^i$ to vanish, i.e., no nontrivial linear combination of these covectors exists which equals the zero covector (the existence of which would be a linear relationship among them), so these covectors are linearly independent.

Thus V* is also an n-dimensional vector space.
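In components this is a one-line computation: if the basis vectors $e_j$ are packed as the columns of a matrix E, the component rows of the dual basis covectors $\omega^i$ are the rows of $E^{-1}$, since the duality relations say exactly that their matrix product is the identity. A NumPy sketch (not from the book):

```python
import numpy as np

E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])   # columns = a basis {e_j} of R^3
Omega = np.linalg.inv(E)          # rows = dual basis covectors {w^i}

# duality relations w^i(e_j) = delta^i_j
assert np.allclose(Omega @ E, np.eye(3))

v = np.array([2.0, 3.0, 5.0])
print(Omega @ v)   # the components v^i of v with respect to {e_j}
```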

Example. The familiar Cartesian coordinates on R^n are defined by $x^i((u^1, \ldots, u^n)) = u^i$ (the value of the i-th number in the n-tuple). But this is exactly what the basis $\{\omega^i\}$ dual to the natural basis $\{e_i\}$ does, i.e., the set of Cartesian coordinates $\{x^i\}$, interpreted as linear functions on the vector space R^n (why are they linear?), is the dual basis: $\omega^i = x^i$. A general linear function on R^n has the familiar form $f = f_i \omega^i = f_i x^i$. If we return to R^3 and calculus notation, where a general linear function has the form f = ax + by + cz, then all we are doing is abstracting the familiar relations

$$\begin{pmatrix} x(1,0,0) & x(0,1,0) & x(0,0,1)\\ y(1,0,0) & y(0,1,0) & y(0,0,1)\\ z(1,0,0) & z(0,1,0) & z(0,0,1) \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}$$

for the values of the Cartesian coordinates on the standard basis unit vectors along the coordinate axes, making the three simple linear functions {x, y, z} a dual basis to the standard basis, usually designated by the unit vectors $\{\hat\imath, \hat\jmath, \hat k\}$ with or without hats (the physics notation to indicate unit vectors).

Note that linearity of a function can be interpreted in terms of linear interpolation of intermediate values of the function. Given any two points u, v in R^n, the set of points tu + (1 - t)v for t = 0 to t = 1 is the directed line segment between the two points. Then the linearity condition f(tu + (1 - t)v) = t f(u) + (1 - t) f(v) says that the value of the function at a certain fraction of the way from u to v is exactly that fraction of the way between the values of the function at those two points.

Figure 1.2: Vector addition: main diagonal of the parallelogram formed by u and v, with O the origin or zero vector.

Vectors and vector addition are best visualized by interpreting points in R^n as directed line segments from the origin ("arrows"). Functions can instead be visualized in terms of their level surfaces $f(x) = f_i x^i = t$ (t a parameter), which are a family of parallel hyperplanes, best represented by selecting an equally spaced set of such hyperplanes, say by choosing integer values of the parameter t. However, it is enough to graph two such level surfaces f(x) = 0 and f(x) = 1 to have a mental picture of the entire family, since they completely determine the orientation and separation of all other members of this family.

Figure 1.3: Geometric representation of a covector: the representative hyperplanes of values 0 and 1 are enough to capture its orientation and magnitude.

This pair of planes also enables one to have a geometric interpretation of covector addition on the vector space itself, like the parallelogram law for vectors. However, instead of the directed main diagonal line segment, one has the cross diagonal hyperplane for the result.

Figure 1.3: Geometric representation of a covector: the representative hyperplanes of values 0 and 1 are enough to capture its orientation and magnitude.

This $(n-2)$-plane intersection is easier to see if we are more concrete. Figures 1.4 and 1.5 illustrate it in three dimensions. The first figure, looking at the intersecting planes edge on down their lines of intersection, is really just the two-dimensional example, where it is clear that the cross-diagonal intersection points of the two pairs of lines must both belong to the line $(f+g)(x) = 1$ on which the sum covector has the value $1 = 1 + 0 = 0 + 1$. The second line of the pair, $(f+g)(x) = 0$, needed to represent the sum covector is the parallel line passing through the origin. If we now rotate our point of view away from the edge-on orientation, we get the picture depicted in Fig. 1.5, which looks like a honeycomb of intersecting planes, with the cross-diagonal plane of intersection representing the sum covector.

Figure 1.4: Covector addition seen edge-on in $R^3$. The plane $(f+g)(x) = 1$ representing the sum of two covectors is the plane through the cross-diagonal lines of intersection of the parallelogram formed by the two pairs of planes seen edge-on down their lines of intersection. Moving that plane parallel to itself until it passes through the origin gives the second plane of the pair representing the sum covector.

Of course the dual space $(R^n)^*$ is isomorphic to $R^n$ itself:
$$f = f_i\,\omega^i = f_i\,x^i \in (R^n)^* \;\leftrightarrow\; f^\flat = (f_i) = (f_1,\dots,f_n) \in R^n,$$
where the flat symbol notation reminds us that a correspondence has been established between two different kinds of objects (effectively lowering the component index), and since $(R^n)^*$ is a vector space itself, covector addition there is just the usual parallelogram vector addition. However, the hyperplane interpretation above of dual space covector addition takes place on the original vector space! These same pictures apply to any finite-dimensional vector space. The difference in geometrical interpretation between directed line segments and directed hyperplane pairs is one reason for carefully distinguishing $V$ from $V^*$ by switching the index positioning.
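The cross-diagonal rule for covector addition illustrated in Figs. 1.4 and 1.5 can also be verified numerically. The following sketch (my own, with made-up covectors on $R^2$, the edge-on picture) checks that both cross-diagonal corners of the parallelogram of lines lie on the line where the sum covector has the value 1:

```python
import numpy as np

# Two covectors on R^2, written as component arrays (made up for illustration).
f = np.array([1.0, 2.0])
g = np.array([3.0, -1.0])

def corner(a, a_val, b, b_val):
    """Solve a(x) = a_val, b(x) = b_val for the intersection point x in R^2."""
    return np.linalg.solve(np.vstack([a, b]), np.array([a_val, b_val]))

# The two cross-diagonal corners of the parallelogram formed by the lines
# f(x) = 0, 1 and g(x) = 0, 1:
p = corner(f, 1.0, g, 0.0)   # where f = 1 meets g = 0
q = corner(f, 0.0, g, 1.0)   # where f = 0 meets g = 1

# Both lie on the line (f + g)(x) = 1 representing the sum covector.
assert np.isclose((f + g) @ p, 1.0)
assert np.isclose((f + g) @ q, 1.0)
```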

For $R^n$ the distinction between $n$-tuples of numbers which are vectors and (the component $n$-tuples of) covectors is still made using matrix notation. Vectors in $R^n$ are identified with column matrices and covectors in the dual space with row matrices:
$$u = (u^1,\dots,u^n) \leftrightarrow \begin{pmatrix} u^1\\ \vdots\\ u^n \end{pmatrix}, \qquad f = f_i\,\omega^i \leftrightarrow (f_1,\dots,f_n) \leftrightarrow (f_1\ \cdots\ f_n)\ \text{[no commas here]},$$
which we will sometimes designate respectively by $(u^1,\dots,u^n)$ and $(f_1\ \cdots\ f_n)$ to emphasize the vector/covector, column/row matrix dual interpretations of an $n$-tuple of numbers. The natural evaluation of a covector on a vector then corresponds to matrix multiplication:
$$f(u) = f_i\,u^i = (f_1\ \cdots\ f_n) \begin{pmatrix} u^1\\ \vdots\\ u^n \end{pmatrix}.$$
This evaluation of a covector (represented by a row matrix on the left) on a vector (represented by a column matrix on the right), which is just the value of the linear function $f = f_i\,x^i$ at the point with position vector $u$, is a matrix product of two different objects, although it can be
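Here is a minimal numerical sketch (mine; the numbers are made up) of this row-times-column evaluation:

```python
import numpy as np

# Row-times-column evaluation of a covector on a vector.
f_row = np.array([[2.0, -1.0, 4.0]])      # covector as a 1 x n row matrix
u_col = np.array([[1.0], [2.0], [3.0]])   # vector as an n x 1 column matrix

value = f_row @ u_col                     # 1 x 1 matrix holding f(u) = f_i u^i
print(value)                              # [[12.]] since 2*1 + (-1)*2 + 4*3 = 12
```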

Figure 1.5: Covector addition in $R^3$, no longer seen edge-on. One has a honeycomb of intersecting planes, with the sum covector represented by the cross-diagonal plane of intersection.
