NOTES ON LINEAR ALGEBRA CLASS HANDOUT


ANTHONY S. MAIDA

CONTENTS

1. Introduction
2. Basis Vectors
3. Linear Transformations
   3.1. Example: Rotation Transformation
4. Matrix Multiplication and Function Composition
   4.1. Example: Rotation Transformation Revisited
5. Identity Matrices, Inverses, and Determinants
   5.1. Example: Inverse of Rotation Transformation
6. Eigenvectors and Eigenvalues of a Matrix
   6.1. Example: Finding Eigenvalues and Eigenvectors
   6.2. Example: Eigenvalues of Rotation Matrix
7. Significance of Eigenvectors and Eigenvalues
   7.1. Example: Raising M to a Power
   7.2. Example: Stability of Discrete System of Linear Equations
   7.3. Example: Dominant Mode of a Discrete Time System of Linear Equations
8. Vector Spaces
   8.1. Distance metrics
   8.2. Inner Product or Dot Product
   8.3. Properties of the Inner Product
9. Vector Geometry
   9.1. Perpendicular Vectors
   9.2. Cosine of the Angle Between Vectors
   9.2.1. Method 1
   9.2.2. Method 2
10. Matlab

Date: Version of February 13. Copyright © Anthony S. Maida.

1. INTRODUCTION

This write-up explains some concepts in linear algebra using the intuitive case of 2 × 2 matrices. Once the reader envisions the concepts for these simple matrices, it is hoped that his or her intuition will extend to the more general case of n × n matrices, and make more advanced treatments of the topic accessible. In the following, we assume that a matrix M is a 2 × 2 matrix with elements as shown below.

    M = [ a  b
          c  d ]                                                   (1)

A key idea will be that a matrix represents a linear transformation. We will also need to represent two-component vectors such as x = (x1, x2). Since we are working in a linear algebra context, we will represent these as 2 × 1 matrices. These are also known as column vectors, denoted

    x = [ x1
          x2 ].                                                    (2)

If we write [x1 x2]^T, this also denotes a column vector and is an alternative to expression (2). This convention saves vertical space in written documents.

2. BASIS VECTORS

We will denote a continuous, two-dimensional plane of points by R^2, which is shorthand for R × R. Points in the plane are denoted by vectors. When we talk about R^2 combined with the rules of vector algebra, we are using a vector space. Vectors can be decomposed into a canonical representation, which is just a (linear) combination of so-called basis vectors. Any pair of vectors can serve as a basis for R^2 as long as they are nonzero and not colinear. When discussing the vector space R^2, we normally use the standard basis, which consists of the vectors [1, 0]^T and [0, 1]^T. An arbitrary point [x1 x2]^T in two-dimensional space can be decomposed into a linear combination of these basis vectors. For instance, the point [2, 3]^T can be represented as

    [ 2     =  2 [ 1     +  3 [ 0
      3 ]          0 ]           1 ].                              (3)

The right-hand side of the above formula is an example of a linear combination. As noted above, any pair of vectors that are linearly independent could be used as a basis.
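To make the linear combination in Equation 3 concrete, here is a short Python sketch (added for illustration; the handout itself only uses MATLAB, in Section 10). Vectors are plain Python lists.

```python
def scale(a, v):
    """Multiply each component of vector v by the scalar a."""
    return [a * vi for vi in v]

def add(v, w):
    """Add two vectors component-wise."""
    return [vi + wi for vi, wi in zip(v, w)]

# Standard basis for R^2.
e1 = [1, 0]
e2 = [0, 1]

# The point [2, 3]^T as a linear combination of the standard basis vectors.
p = add(scale(2, e1), scale(3, e2))
print(p)  # [2, 3]
```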
A pair of vectors is linearly independent if they are both nonzero and their directions are not aligned. Given a linear transformation, we may choose a basis set of vectors that is convenient for understanding the structure of the transformation. We now define a linear transformation.

3. LINEAR TRANSFORMATIONS

The first thing to learn is that a 2 × 2 matrix of real numbers is not just a table of numbers. It represents a linear transformation from R^2 to R^2. That is, it represents a mapping from points in the plane to other points in the plane. Algebraically, a linear transformation is a function, f(·), which has the following property.

    f(a v + b w) = a f(v) + b f(w)                                 (4)

In the above, a and b are scalars, and v and w are two-dimensional vectors. If a mapping has the above property, then it is a linear transformation. A linear transformation shall map the basis vectors into some other (possibly the same) points. Let us call these points [a, c]^T and [b, d]^T. Specifically, we have

    f([1, 0]^T) = [a, c]^T                                         (5)
    f([0, 1]^T) = [b, d]^T.                                        (6)

Keeping this in mind, let us see what a linear transformation, f, does to an arbitrary value [x1 x2]^T.

    f([x1, x2]^T) = f(x1 [1, 0]^T + x2 [0, 1]^T)
                  = x1 f([1, 0]^T) + x2 f([0, 1]^T)
                  = x1 [a, c]^T + x2 [b, d]^T
                  = [a x1 + b x2, c x1 + d x2]^T
                  = [ a  b   [ x1     =  M x                       (7)
                      c  d ]   x2 ]

Because f([x1 x2]^T) is shown to equal M x, multiplying a matrix with a vector is the same as applying a linear transformation to the vector. The matrix M represents a linear transformation and is defined by what the linear transformation does to the basis vectors.

3.1. Example: Rotation Transformation. A counterclockwise rotation of a point in the plane about the origin is an example of a linear transformation and can be represented by a 2 × 2 matrix. The a and c values of the matrix are determined by specifying how the basis vector [1, 0]^T should be transformed. Similarly, the b and d values of the matrix are determined by specifying how [0, 1]^T should be transformed (see Figure 1). The resulting matrix is given below.

    M = [ cos θ   −sin θ
          sin θ    cos θ ]                                         (8)

4. MATRIX MULTIPLICATION AND FUNCTION COMPOSITION

Let f(·), g(·), and h(·) be arbitrary functions that map from values in R^2 to values in R^2. Let x denote a vector in R^2. Let (g ∘ f)(·) denote the function that results when applying g(·) to the output of f(·). In other words, (g ∘ f)(x) means the same thing as g(f(x)), which is depicted in Figure 2. The former notation is the mathematician's way of creating a name for a large procedure,

(g ∘ f)(·), that is built of two subprocedures f(·) and g(·) that are executed in sequence. The operation of assembling functions in this fashion is called function composition, and the operator ∘ is the function composition operator.

FIGURE 1. Illustration of the trigonometry for rotating the basis vectors [1, 0]^T and [0, 1]^T.

Function composition is associative. Specifically,

    (h ∘ (g ∘ f))(·) = ((h ∘ g) ∘ f)(·).                           (9)

Although function composition is associative, it is not commutative. Commuting would correspond to swapping the order of subprocedures within a procedure. Now let us suppose that the above-mentioned functions f(·), g(·), and h(·) are linear transformations. Then they can be represented by the 2 × 2 matrices F, G, and H. When we write

    H(G(F x))                                                      (10)

it means to first multiply F with x. This yields a point in R^2 that is represented as a column vector. Multiplying G with this result yields another column vector, and H can be multiplied with that result. Thus, matrix multiplication corresponds to function composition. Since function composition is associative, it does not matter how we parenthesize the matrices, as long as we do not change the order of the matrices. In fact, we can leave the parentheses out completely, as shown below.

    H G F x                                                        (11)

In this vein, it is worth noting that matrix multiplication, like function composition, is associative but not commutative. If this were not true, matrix multiplication would be unable to represent function composition.

4.1. Example: Rotation Transformation Revisited. Suppose one wants to apply a rotation transformation by an amount θ1 and then after that apply another rotation transformation by an amount θ2. This is shown below.

    [ cos θ2  −sin θ2   [ cos θ1  −sin θ1   [ x1
      sin θ2   cos θ2 ]   sin θ1   cos θ1 ]   x2 ]                 (12)
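The composition of two rotations can be checked numerically. The following Python sketch (added for illustration, not part of the original handout) verifies that rotating by θ1 and then by θ2 is the same transformation as a single rotation by θ1 + θ2.

```python
import math

def rot(theta):
    """2x2 counterclockwise rotation matrix (Equation 8), as nested lists."""
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t1, t2 = 0.3, 0.5
composed = matmul(rot(t2), rot(t1))   # rotate by t1, then by t2
direct = rot(t1 + t2)                 # single rotation by t1 + t2

# The two matrices agree element-wise (up to floating-point error).
ok = all(abs(composed[i][j] - direct[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # True
```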

FIGURE 2. Procedural representation of the effect of composing functions g(·) and f(·) to obtain z = g(f(x)).

Of course, we can compose the matrix transformations by multiplying the matrices together. This gives the matrix

    [ (cos θ1 cos θ2 − sin θ1 sin θ2)   −(sin θ1 cos θ2 + cos θ1 sin θ2)
      (sin θ1 cos θ2 + cos θ1 sin θ2)    (cos θ1 cos θ2 − sin θ1 sin θ2) ].    (13)

If θ = θ1 + θ2, then this matrix is equivalent to that in Equation 8. Since the corresponding matrix elements are equal, we have proved the two trigonometric identities below.

    sin(θ1 + θ2) = sin θ1 cos θ2 + cos θ1 sin θ2                   (14)
    cos(θ1 + θ2) = cos θ1 cos θ2 − sin θ1 sin θ2                   (15)

Later, we will use the latter identity to obtain a formula for the cosine of the angle between two vectors.

5. IDENTITY MATRICES, INVERSES, AND DETERMINANTS

If f(·) is the identity function, then f(x) = x for all x. This function is a linear transformation and can be represented by the identity matrix, shown below.

    I = [ 1  0
          0  1 ]                                                   (16)

If a function f(·) has an inverse, denoted f⁻¹(·), then f ∘ f⁻¹(·) = f⁻¹ ∘ f(·), which equals the identity function. If M is a matrix that has an inverse M⁻¹, then

    M M⁻¹ = M⁻¹ M = I.                                             (17)

For a 2 × 2 matrix M, define the determinant to be the quantity ad − bc. Specifically, det(M) = ad − bc. Note that sometimes the notation |M| is used to denote the determinant of matrix M. The formula for the inverse of a matrix is given below.

    M⁻¹ = 1/det(M) [  d  −b
                     −c   a ]                                      (18)

Note that this formula is defined only if det(M) ≠ 0. In particular, a matrix has an inverse if and only if its determinant is not equal to 0. This is convenient because it allows one to easily determine whether a linear transformation has an inverse.
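The 2 × 2 inverse formula of Equation 18 is easy to implement directly. The Python sketch below (illustrative, not part of the handout) inverts a hand-picked matrix and confirms that the product with the original is the identity.

```python
def det(M):
    """Determinant ad - bc of a 2x2 matrix given as [[a, b], [c, d]]."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inverse(M):
    """Inverse of a 2x2 matrix via Equation 18; requires det(M) != 0."""
    d = det(M)
    if d == 0:
        raise ValueError("matrix is singular: det(M) == 0")
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[2.0, 1.0], [5.0, 3.0]]    # det = 2*3 - 1*5 = 1
print(matmul(M, inverse(M)))    # [[1.0, 0.0], [0.0, 1.0]]
```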

FIGURE 3. Image of the unit square generated by a matrix transformation. The area of the parallelogram is given by ad − bc. The area is nonzero unless the parallelogram degenerates to a line or a point.

A linear transformation maps the points falling within the unit square into a parallelogram. The unit square consists of the points in the region of R^2 where 0 ≤ x1 ≤ 1 and 0 ≤ x2 ≤ 1. The corners of the unit square map to the corners of the parallelogram (see Figure 3). The determinant of a matrix gives the area of the transformation's image when applied to the unit square; that is, the area of this parallelogram gives the value of the determinant. If the transformation maps the square into a line or a point (both of which are degenerate parallelograms), then the value of the determinant is zero. Otherwise, it is nonzero. If the parallelogram is not degenerate, then the mapping that is specified by the matrix is nonsingular. Specifically, unique points on the unit square map to unique points on the parallelogram, and the reverse is also true. If the image is a line or a point, then many points on the square map to single points on the degenerate parallelogram. In this case, the function does not (for obvious reasons) have an inverse.

5.1. Example: Inverse of Rotation Transformation. The matrix in Formula 8 gives a counterclockwise rotation by an angle θ. The inverse transformation would instead give a clockwise rotation. Using the formula for the inverse, we can obtain the clockwise rotation matrix shown below.

    M⁻¹ = [  cos θ   sin θ
            −sin θ   cos θ ]                                       (19)

When deriving this, remember that sin²θ + cos²θ = 1. Also note that, for the case of rotation, the inverse of a rotation matrix is its transpose. That is, M⁻¹ = M^T. When this happens, we have an orthonormal transformation. This corresponds to a (possibly flipped) rigid rotation. By rigid rotation, we mean that vector lengths and angles between vectors do not change when the transformation is applied.
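The claim that a rotation matrix's inverse equals its transpose can be spot-checked numerically. A Python sketch (added for illustration):

```python
import math

theta = 0.7
M = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

# Inverse via Equation 18; det(M) = cos^2 + sin^2 = 1.
d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[ M[1][1] / d, -M[0][1] / d],
        [-M[1][0] / d,  M[0][0] / d]]

# Transpose of M.
Mt = [[M[0][0], M[1][0]],
      [M[0][1], M[1][1]]]

# M^{-1} and M^T agree element-wise (up to floating-point error).
same = all(abs(Minv[i][j] - Mt[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print(same)  # True
```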

6. EIGENVECTORS AND EIGENVALUES OF A MATRIX

There is an effective way to perform a structural analysis of a linear transformation. This involves the use of eigenvectors and eigenvalues of the matrix representing the transformation. Consider the situation of multiplying a matrix with a nonzero vector, as in M x. If the vector x is chosen correctly, this operation has the effect of shrinking or stretching the vector, but it does not change the direction of the vector other than perhaps reversing it. This can be written as

    M x = λ x.                                                     (20)

In the above, λ is a scalar. Given a matrix M, if a vector x has this property, then x is said to be an eigenvector of M, and λ is its associated eigenvalue. We shall now solve for the eigenvectors and eigenvalues of the 2 × 2 matrix M. Consider the following steps.

    M x = λI x                                                     (21)
    M x − λI x = 0                                                 (22)
    (M − λI) x = 0                                                 (23)

In the above, note that (M − λI) denotes a matrix, and Eq. 23 is called the characteristic equation for matrix M. Since we have assumed that x is nonzero, the only way that this matrix can map x into zero is if it is mapping the unit square into a degenerate parallelogram. Thus the determinant of this matrix is zero. This gives us some leverage to find the value of λ. The matrix (M − λI) expands as shown below.

    M − λI = [ a  b    −  [ λ  0                                   (24)
               c  d ]       0  λ ]
            = [ a−λ   b
                c     d−λ ]                                        (25)

However, we are actually interested in the determinant of this matrix, rather than the matrix itself. The determinant expands to

    |M − λI| = (a − λ)(d − λ) − bc                                 (26)
             = λ² − (a + d)λ + ad − bc = 0.                        (27)

The above is a quadratic equation where λ is the unknown, and it can be solved using the quadratic formula. It is called the characteristic polynomial for matrix M. When this is solved, we have obtained up to two eigenvalues for the matrix M. Once we know the eigenvalues, we can use Formula 20 to solve for the eigenvectors that go with the eigenvalues. For a 2 × 2 matrix, we will have two eigenvalues and one eigenvector to go with each eigenvalue.
Notice that, in Equation 27, the quantity a + d is the sum of the diagonal elements of matrix M. This is called the trace of M. Also note that ad − bc is the determinant of M.
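The quadratic in Equation 27 can be solved directly from the trace and determinant. Here is a Python sketch (added for illustration; it assumes real eigenvalues, i.e. a nonnegative discriminant), checked against the worked example of Section 6.1.

```python
import math

def eigenvalues_2x2(M):
    """Solve lambda^2 - tr(M)*lambda + det(M) = 0 (Equation 27).

    Assumes the eigenvalues are real (nonnegative discriminant).
    """
    a, b = M[0]
    c, d = M[1]
    tr = a + d             # trace: sum of the diagonal elements
    det = a * d - b * c    # determinant
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# The matrix from the worked example in Section 6.1.
print(eigenvalues_2x2([[4, 0], [0, 1]]))  # (4.0, 1.0)
```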

Let us denote the trace of M by τ and the determinant of M by Δ. Then, using the quadratic formula, we can write concise formulas for the eigenvalues.

    λ1 = (τ + √(τ² − 4Δ)) / 2                                      (28)
    λ2 = (τ − √(τ² − 4Δ)) / 2                                      (29)

6.1. Example: Finding Eigenvalues and Eigenvectors. Compute the eigenvalues and eigenvectors of the matrix

    M = [ 4  0
          0  1 ].                                                  (30)

Solution.

    | 4−λ    0
      0     1−λ | = (4 − λ)(1 − λ) = λ² − 5λ + 4 = 0               (31)

The quadratic equation on the right has two solutions: λ1 = 4 and λ2 = 1. These are the two eigenvalues, listed in order of numerical magnitude. The corresponding eigenvectors can be obtained by substituting the value of λ back into Equation 23. Specifically, using λ1 gives the first eigenvector.

    (M − λ1 I) x = [ 4−λ1    0      [ x1                           (32)
                     0      1−λ1 ]    x2 ]
                 = [ 0   0    [ x1                                 (33)
                     0  −3 ]    x2 ]
                 = [  0         =  [ 0                             (34)
                     −3 x2 ]         0 ]

The above equations imply that x2 = 0 but place no constraint on the value of x1. For convenience, we set x1 = 1 so that the length of the vector is one. We shall denote the eigenvector that corresponds to λ1 by ξ1. Thus, ξ1 = [1, 0]^T. Similar analysis shows that ξ2 = [0, 1]^T.

6.2. Example: Eigenvalues of Rotation Matrix. If one computes the eigenvalues of the rotation matrix, one finds that they are complex numbers (for θ that is not a multiple of π). This makes sense because the transformation rotates all vectors in R^2, so a transformed vector never points along its original direction.

7. SIGNIFICANCE OF EIGENVECTORS AND EIGENVALUES

For this section, we will assume that the eigenvalues of the matrix under discussion are distinct and that we are working with an n × n matrix. Given a matrix M, there is an alternative way to represent it using its eigenvectors and eigenvalues. This yields a canonical representation that makes the structure of the underlying linear transformation explicit. Define the matrix Λ as shown below.

    Λ = [ λ1            0
              λ2
                  ⋱
          0           λn ]                                         (35)

This is a diagonal matrix of eigenvalues of M, where the λi are the eigenvalues, and they are ordered according to their magnitude. We will see that this matrix represents the same linear transformation as M, but using a more convenient basis set. To go further, we have to change the representation of the vectors that we have been using so that they use the basis set assumed by the matrix Λ. Define the matrix V as shown below.

    V = [ ξ1  ξ2  ...  ξn ]                                        (36)

V is an n × n matrix where the first column is the first eigenvector of M, and so forth. The eigenvectors are ordered according to the corresponding eigenvalues of Λ (which are in turn ordered according to their magnitudes). The original matrix M can be factored into the product of V, Λ, and V⁻¹, as shown below. (Why?)

    M = V Λ V⁻¹                                                    (37)

In other words, the factorized transformation can be applied to x, as shown below.

    M x = V Λ V⁻¹ x                                                (38)

How do we interpret this? First notice that V and V⁻¹ are inverses. V⁻¹ represents a transformation that converts x to a representation that Λ understands. Λ applies the transformation of interest. Finally, V converts the result of the transformation back to the representation that we started with.

Why is the diagonal matrix representation Λ desirable? Let us look in more detail at the eigenvector matrix V. The columns of this matrix (the eigenvectors) serve as an alternate basis set for points in the underlying space. Specifically, an arbitrary point x in the space can be represented as a linear combination of the eigenvectors, as shown below.

    x = α1 e1 + α2 e2 + ... + αn en                                (39)

Put another way, the vector x, which is represented using the standard basis, is represented as α when represented using the alternate basis which consists of the eigenvectors of M. This allows us to create a very simple representation of the transformation M using this new basis. The derivation below shows this.

    M x = V Λ V⁻¹ (α1 e1 + α2 e2 + ... + αn en)                    (40)
        = V Λ V⁻¹ V [α1 ... αn]^T                                  (41)
        = V Λ [α1 ... αn]^T                                        (42)
        = λ1 α1 e1 + λ2 α2 e2 + ... + λn αn en                     (43)

7.1. Example: Raising M to a Power. In the next section, we will need to consider the quantity M^k, where k is a positive integer. It is very useful to represent this in terms of the matrix factorization. Specifically,

    M^k = (V Λ V⁻¹)^k = V Λ^k V⁻¹.                                 (44)
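Equation 44 can be made concrete with a hand-picked diagonalizable matrix. The following Python sketch (added for illustration) uses M = [[2, 1], [1, 2]], whose eigenvalues are 3 and 1 with eigenvectors [1, 1]^T and [1, −1]^T, and compares V Λ^k V⁻¹ against repeated multiplication.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# M = [[2, 1], [1, 2]] has eigenvalues 3 and 1,
# with eigenvectors [1, 1]^T and [1, -1]^T.
M = [[2, 1], [1, 2]]
V = [[1, 1], [1, -1]]             # columns are the eigenvectors
Vinv = [[0.5, 0.5], [0.5, -0.5]]  # inverse of V
k = 5
Lk = [[3 ** k, 0], [0, 1 ** k]]   # Lambda^k: diagonal entries raised to k

# M^k = V Lambda^k V^{-1}  (Equation 44)
Mk = matmul(matmul(V, Lk), Vinv)

# Compare against k repeated multiplications by M.
P = [[1, 0], [0, 1]]
for _ in range(k):
    P = matmul(P, M)
# Mk and P agree entrywise: [[122, 121], [121, 122]]
print(Mk, P)
```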

Further, note that Λ is a diagonal matrix. Raising a diagonal matrix to a power involves raising the elements on the diagonal to the power. Thus,

    Λ^k = [ λ1^k             0
                 λ2^k
                       ⋱
            0            λn^k ].                                   (45)

This representation of M^k will be used in the next example.

7.2. Example: Stability of Discrete System of Linear Equations. This example shows how to use eigenvalues to study the stability of a discrete system of linear equations. The discrete system may be a set of coupled equations. The same system can be alternately represented as a set of uncoupled equations. This makes the system much easier to analyze. Let us represent a discrete system of linear equations with constant coefficients as shown below.

    x(k + 1) = M x(k) + b                                          (46)

The vector x holds the values of state variables, which take on values at discrete time steps k = 0, 1, 2, ... Matrix M is a matrix of constant coefficients. Finally, b is a vector of inputs that are constant over the life of the system. Let us pose the question: Is this system stable? If the system is stable, there is a vector x* such that when the state vector x(k) is sufficiently close to x*, then x(k′) tends to evolve toward x* for all k′ ≥ k. That is, the difference between the current state and the stable state, represented as x(k′) − x*, will approach zero as k′ approaches infinity. Furthermore, the relation below also holds.

    x* = M x* + b                                                  (47)

The above equation holds because the system is stable at the point x*. If the system state ever reaches x*, it stays at x* for all subsequent k. With this in mind, let us expand the expression representing the difference between the current state and the stable state, as shown below.

    x(k + 1) − x* = M x(k) + b − M x* − b                          (48)
                  = M (x(k) − x*)                                  (49)

We can perform a change of variable to simplify the above expression. We shall let z(k) = x(k) − x*. This allows us to rewrite the above equation as

    z(k + 1) = M z(k).                                             (50)

Since z(k) now represents the difference between the current state vector and the stable state, z(k) approaches zero as k approaches infinity for a system with stable state x*. Note also that the initial value z(0) need not be zero. These two facts imply that the matrix M^k approaches the zero matrix as k approaches infinity. This is because

    z(k) = M^k z(0).                                               (51)
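The convergence argument above can be simulated directly. This Python sketch (added for illustration; the matrix and input vector are hand-picked assumptions) iterates x(k+1) = M x(k) + b for a matrix whose eigenvalues all have magnitude less than 1, and watches the state approach the fixed point x*.

```python
# Iterate x(k+1) = M x(k) + b for a contracting M (eigenvalues 0.5 and 0.25)
# and watch x(k) approach the fixed point x* = M x* + b.
def step(M, x, b):
    return [M[0][0] * x[0] + M[0][1] * x[1] + b[0],
            M[1][0] * x[0] + M[1][1] * x[1] + b[1]]

M = [[0.5, 0.0], [0.0, 0.25]]   # |lambda_i| < 1, so the system is stable
b = [1.0, 3.0]
# The fixed point solves x* = M x* + b; here x* = [2, 4].
x_star = [2.0, 4.0]

x = [10.0, -7.0]                # arbitrary initial state
for _ in range(60):
    x = step(M, x, b)

err = max(abs(x[0] - x_star[0]), abs(x[1] - x_star[1]))
print(err < 1e-12)  # True
```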

From the previous example, we know that M^k can be factored as

    M^k = V [ λ1^k             0
                   λ2^k
                         ⋱
              0            λn^k ] V⁻¹.                             (52)

Thus M^k approaches zero as k approaches infinity if and only if Λ^k approaches zero as k approaches infinity. Λ^k approaches zero if and only if |λi| < 1 for all i ∈ {1, ..., n}.

7.3. Example: Dominant Mode of a Discrete Time System of Linear Equations. Consider the discrete time system

    x(k + 1) = A x(k).                                             (53)

Suppose that matrix A has n distinct eigenvalues. The eigenvalue λi with the largest magnitude |λi| is the dominant eigenvalue. As k approaches infinity, the state vector x(k + 1) evolves to align with the eigenvector corresponding to the dominant eigenvalue. The initial state vector can be represented as

    x(0) = α1 e1 + ... + αn en.                                    (54)

Thus, the solution to the system for any time step k ≥ 1 is

    x(k) = α1 λ1^k e1 + ... + αn λn^k en.                          (55)

Without loss of generality, assume that λ1 is the dominant eigenvalue. |λ1^k| grows faster than |λi^k| for any other eigenvalue. Therefore, the following holds

    |α1 λ1^k| >> |αi λi^k|                                         (56)

as long as α1 ≠ 0. Therefore, for sufficiently large k, the state vector x(k) is essentially aligned with e1.

8. VECTOR SPACES

In Section 2, we referred to a vector space but did not define it. We need to delve into this so that we can say more about vector geometry. A vector is a quantity consisting of a direction and a magnitude, often drawn as an arrow with a head and a tail. If we assume that vectors always have their tails at the origin in Euclidean space, then we can specify a vector by listing the coordinates of its head. In three-dimensional space, the vector x can be specified using the coordinates (x1, x2, x3). Such an expression is called a tuple, and x1, x2, and x3 are called its components. For our purposes, we shall assume that the components are always real numbers and that the vector space is defined over the real numbers. That is, whenever a scalar is encountered, it is a real number.
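The dominant-mode behavior described in Section 7.3 is the basis of power iteration. The Python sketch below (added for illustration, with a hand-picked matrix) repeatedly applies A and renormalizes; the state aligns with the dominant eigenvector [1, 1]^T of A = [[2, 1], [1, 2]].

```python
import math

# Iterate x(k+1) = A x(k); the state should align with the dominant eigenvector.
# A = [[2, 1], [1, 2]] has dominant eigenvalue 3 with eigenvector [1, 1]^T.
A = [[2.0, 1.0], [1.0, 2.0]]
x = [1.0, 0.0]   # initial state with alpha_1 != 0

for _ in range(40):
    x = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]
    n = math.hypot(x[0], x[1])
    x = [x[0] / n, x[1] / n]   # renormalize so the iterate doesn't overflow

# The unit state vector approaches [1/sqrt(2), 1/sqrt(2)]:
# its two components become equal.
print(abs(x[0] - x[1]) < 1e-10)  # True
```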
Intuitively, by vector space, we mean a set of vectors which is closed under a set of operations (addition and multiplication by a scalar). For instance, if you add any two vectors, the result is another vector. Vectors in Euclidean space over the field of real numbers have the following properties.

1. The sum of two vectors is a vector (closure). You add two vectors by adding their corresponding components. Both vectors must have the same number of components. Vector addition is commutative and associative. The following example shows how to add two vectors and also shows that vector addition is commutative.

    x + y = (x1, x2, ..., xn) + (y1, y2, ..., yn)
          = (x1 + y1, x2 + y2, ..., xn + yn)
          = (y1 + x1, y2 + x2, ..., yn + xn)
          = y + x

2. The zero vector has zero for each of its components. Adding the zero vector to a vector doesn't change the vector. The zero vector is the only vector that has this property.

3. To multiply a vector by a scalar, multiply each component of the vector by the scalar. This gives you another vector. If you multiply by the scalar +1, you don't change the vector. The scalar +1 is the only scalar that has this property. If you multiply each component of a vector x by the scalar −1, you get −x. The sum of x and −x equals the zero vector. For a given vector x, −x is the only vector which has this property.

4. Vectors have the following algebraic properties. Let a and b be scalars.
   4.1. a(b x) = (ab) x
   4.2. (a + b) x = a x + b x
   4.3. a(x + y) = a x + a y

We shall prove property 4.1 because it gives us an opportunity to provide an example of scalar multiplication.

    a(b x) = a(b x1, b x2, ..., b xn) = (ab x1, ab x2, ..., ab xn) = (ab) x       (57)

8.1. Distance metrics. A distance metric is a convention for defining the distance between two points in an n-dimensional space. A point in n-dimensional space is specified as a vector with n components. A legal distance metric must satisfy the following properties for any points a, b, and c.

1. distance(a, a) = 0.
2. distance(a, b) > 0 if a ≠ b.
3. distance(a, b) = distance(b, a).
4. distance(a, c) ≤ distance(a, b) + distance(b, c).

Both the Euclidean distance metric, based on the Pythagorean theorem, and the city-block distance metric satisfy these properties.
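The two metrics just mentioned can be sketched in a few lines of Python (added for illustration), along with a spot check of the triangle inequality.

```python
# Euclidean and city-block distances between points given as tuples,
# plus a spot check of the triangle inequality for both metrics.
def euclidean(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def city_block(a, b):
    return sum(abs(ai - bi) for ai, bi in zip(a, b))

a, b, c = (0.0, 0.0), (3.0, 4.0), (6.0, 0.0)
print(euclidean(a, b))   # 5.0
print(city_block(a, b))  # 7.0
for dist in (euclidean, city_block):
    assert dist(a, c) <= dist(a, b) + dist(b, c)   # triangle inequality
```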
The third property is known as symmetry, and the fourth property is known as the triangle inequality. In Euclidean space, the triangle inequality is a corollary of the fact that the shortest distance between two points is a straight line.

8.2. Inner Product or Dot Product. The inner product or dot product of two n-dimensional vectors is computed by multiplying the corresponding components together and then summing the results. The inner product of vectors x and y, written x · y, is defined below in expression (58).

FIGURE 4. If vectors x and y are perpendicular, then ||y − x|| = ||y + x||.

Expression (58) also shows that the inner product is commutative.

    x · y = x1 y1 + x2 y2 + ... + xn yn = y1 x1 + y2 x2 + ... + yn xn = y · x     (58)

Using matrix notation, if vectors are written as column vectors, then the inner product between two vectors x and y is written x^T y. The inner product is a measure of the degree of overlap between the vectors. If the vectors both point in the same direction, the inner product is positive and maximal. If they are pointing in opposite directions, the inner product is negative and minimal. If the inner product is 0, the vectors are said to be orthogonal (perpendicular). It is useful to look at the inner product of a vector with itself, as shown below.

    x · x = x1² + x2² + ... + xn²                                  (59)

Since a vector points in the same direction as itself, this quantity will be positive (unless the vector is the zero vector). The square root of this quantity is known as the Euclidean norm of the vector x, written ||x||. This is the length of the vector as determined by the Pythagorean theorem (Euclidean distance metric) generalized to n dimensions. If ||x|| = 1, then we say x is a unit vector.

8.3. Properties of the Inner Product. Some basic properties of the inner product are the following.

    commutativity: If x and y are vectors, then x · y = y · x.
    distributivity: If x, y, and z are vectors, then x · (y + z) = x · y + x · z.
    multiplication by a scalar: If a is a scalar and x and y are vectors, then (a x) · y = a (x · y).
    magnitude: x · x = 0 if and only if x is the zero vector. Otherwise, x · x > 0.

9. VECTOR GEOMETRY

This section develops intuitions about geometric interpretations of linear algebra in two dimensions.
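The inner product, norm, and orthogonality definitions above translate directly into code. A Python sketch (added for illustration):

```python
def dot(x, y):
    """Inner product: multiply corresponding components and sum (Expression 58)."""
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(x):
    """Euclidean norm: square root of the inner product of x with itself."""
    return dot(x, x) ** 0.5

x = [3.0, 4.0]
y = [-4.0, 3.0]
print(dot(x, y))               # 0.0 -> the vectors are orthogonal
print(norm(x))                 # 5.0
print(dot(x, y) == dot(y, x))  # True (commutativity)
```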

FIGURE 5. Angle θ = β − α is the angle between vectors x and y.

9.1. Perpendicular Vectors. If two vectors are perpendicular, then their dot product is zero. To see this, take a look at Figure 4. If vectors x and y are perpendicular, then ||y − x|| = ||y + x||. From this we obtain

    (x + y) · (x + y) = (x − y) · (x − y)                          (60)
    x · x + 2 x · y + y · y = x · x − 2 x · y + y · y              (61)
    x · y = −(x · y).                                              (62)

The last line above can only be true if x · y = 0.

9.2. Cosine of the Angle Between Vectors. The cosine of the angle between two vectors, x and y, is defined below.

    cos θ = (x · y) / (||x|| ||y||)                                (63)

This definition holds for n dimensions. To strengthen our intuitions, let us see why this definition corresponds to the cosine of an angle when the number of dimensions is two.

9.2.1. Method 1. Consider vectors x and y on the plane shown in Figure 5. They are separated by the angle θ = β − α. From Equation 15, we obtain Formula 65.

    cos θ = cos(β − α)                                             (64)
          = cos β cos α + sin β sin α                              (65)
          = (x1/||x||)(y1/||y||) + (x2/||x||)(y2/||y||)            (66)
          = x^T y / (||x|| ||y||)                                  (67)

From applying trigonometry to Figure 5, we obtain Formula 66. We obtain Formula 67 by simplifying.

9.2.2. Method 2. Consider the angle θ between vectors x and y in Figure 6. Consider the projection of x onto y at c y, so that the angle there is perpendicular. From the figure, it follows that

    cos θ = ||c y|| / ||x||.                                       (68)

FIGURE 6. The dot product between x − c y and c y is zero.

It remains to obtain the value for c. Since the vectors x − c y and c y are perpendicular, it follows that (x − c y) · c y = 0. Solving for c, we obtain

    c = (x · y) / (y · y).                                         (69)

If we plug the value of c back into Equation 68, we obtain

    cos θ = ||c y|| / ||x||                                        (70)
          = x^T y / (||x|| ||y||).                                 (71)

10. MATLAB

When manipulating matrices whose dimensions are larger than 2 × 2, use MATLAB. Here are some commands. Given a square matrix M, the expression inv(M) computes its inverse, the expression det(M) computes its determinant, the expression trace(M) computes the trace, and diag(M) extracts the diagonal and represents it as a column vector. The expression below obtains the eigenvectors and eigenvalues.

    [V, D] = eig(M)

The above is a component-wise assignment statement, as indicated by the brackets on the left-hand side. Both the variables V and D are assigned new values because the function eig() returns two values. The variable V holds the eigenvectors of M. Each column of the matrix V stores an eigenvector, with its length normalized to 1. The variable D is a diagonal matrix holding the corresponding eigenvalues, which fall on the diagonal of the matrix. The first eigenvalue corresponds to the first eigenvector, and so forth for the second, third, etcetera. Note that eig() does not, in general, return the eigenvalues sorted by magnitude; if a sorted order is needed, sort the diagonal of D and permute the columns of V to match.
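As a final numerical check of Equation 63 (given here as a Python sketch rather than MATLAB, added for illustration), the cosine formula agrees with a known 45-degree angle.

```python
import math

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(x):
    return dot(x, x) ** 0.5

def cos_angle(x, y):
    """Cosine of the angle between x and y (Equation 63)."""
    return dot(x, y) / (norm(x) * norm(y))

# [1, 0] and [1, 1] are separated by 45 degrees, so the cosine is 1/sqrt(2).
c = cos_angle([1.0, 0.0], [1.0, 1.0])
print(abs(c - math.cos(math.pi / 4)) < 1e-12)  # True
```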


More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

CS 143 Linear Algebra Review

CS 143 Linear Algebra Review CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see

More information

We use the overhead arrow to denote a column vector, i.e., a number with a direction. For example, in three-space, we write

We use the overhead arrow to denote a column vector, i.e., a number with a direction. For example, in three-space, we write 1 MATH FACTS 11 Vectors 111 Definition We use the overhead arrow to denote a column vector, ie, a number with a direction For example, in three-space, we write The elements of a vector have a graphical

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

v = v 1 2 +v 2 2. Two successive applications of this idea give the length of the vector v R 3 :

v = v 1 2 +v 2 2. Two successive applications of this idea give the length of the vector v R 3 : Length, Angle and the Inner Product The length (or norm) of a vector v R 2 (viewed as connecting the origin to a point (v 1,v 2 )) is easily determined by the Pythagorean Theorem and is denoted v : v =

More information

Linear Algebra V = T = ( 4 3 ).

Linear Algebra V = T = ( 4 3 ). Linear Algebra Vectors A column vector is a list of numbers stored vertically The dimension of a column vector is the number of values in the vector W is a -dimensional column vector and V is a 5-dimensional

More information

Matrices and Deformation

Matrices and Deformation ES 111 Mathematical Methods in the Earth Sciences Matrices and Deformation Lecture Outline 13 - Thurs 9th Nov 2017 Strain Ellipse and Eigenvectors One way of thinking about a matrix is that it operates

More information

Week Quadratic forms. Principal axes theorem. Text reference: this material corresponds to parts of sections 5.5, 8.2,

Week Quadratic forms. Principal axes theorem. Text reference: this material corresponds to parts of sections 5.5, 8.2, Math 051 W008 Margo Kondratieva Week 10-11 Quadratic forms Principal axes theorem Text reference: this material corresponds to parts of sections 55, 8, 83 89 Section 41 Motivation and introduction Consider

More information

This appendix provides a very basic introduction to linear algebra concepts.

This appendix provides a very basic introduction to linear algebra concepts. APPENDIX Basic Linear Algebra Concepts This appendix provides a very basic introduction to linear algebra concepts. Some of these concepts are intentionally presented here in a somewhat simplified (not

More information

A = 3 B = A 1 1 matrix is the same as a number or scalar, 3 = [3].

A = 3 B = A 1 1 matrix is the same as a number or scalar, 3 = [3]. Appendix : A Very Brief Linear ALgebra Review Introduction Linear Algebra, also known as matrix theory, is an important element of all branches of mathematics Very often in this course we study the shapes

More information

A VERY BRIEF LINEAR ALGEBRA REVIEW for MAP 5485 Introduction to Mathematical Biophysics Fall 2010

A VERY BRIEF LINEAR ALGEBRA REVIEW for MAP 5485 Introduction to Mathematical Biophysics Fall 2010 A VERY BRIEF LINEAR ALGEBRA REVIEW for MAP 5485 Introduction to Mathematical Biophysics Fall 00 Introduction Linear Algebra, also known as matrix theory, is an important element of all branches of mathematics

More information

Partial Fractions. June 27, In this section, we will learn to integrate another class of functions: the rational functions.

Partial Fractions. June 27, In this section, we will learn to integrate another class of functions: the rational functions. Partial Fractions June 7, 04 In this section, we will learn to integrate another class of functions: the rational functions. Definition. A rational function is a fraction of two polynomials. For example,

More information

Notes: Vectors and Scalars

Notes: Vectors and Scalars A particle moving along a straight line can move in only two directions and we can specify which directions with a plus or negative sign. For a particle moving in three dimensions; however, a plus sign

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2 Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch

More information

2. Review of Linear Algebra

2. Review of Linear Algebra 2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

More information

Review of Linear Algebra

Review of Linear Algebra Review of Linear Algebra Definitions An m n (read "m by n") matrix, is a rectangular array of entries, where m is the number of rows and n the number of columns. 2 Definitions (Con t) A is square if m=

More information

For a semi-circle with radius r, its circumfrence is πr, so the radian measure of a semi-circle (a straight line) is

For a semi-circle with radius r, its circumfrence is πr, so the radian measure of a semi-circle (a straight line) is Radian Measure Given any circle with radius r, if θ is a central angle of the circle and s is the length of the arc sustained by θ, we define the radian measure of θ by: θ = s r For a semi-circle with

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

4.1 Distance and Length

4.1 Distance and Length Chapter Vector Geometry In this chapter we will look more closely at certain geometric aspects of vectors in R n. We will first develop an intuitive understanding of some basic concepts by looking at vectors

More information

Notes on multivariable calculus

Notes on multivariable calculus Notes on multivariable calculus Jonathan Wise February 2, 2010 1 Review of trigonometry Trigonometry is essentially the study of the relationship between polar coordinates and Cartesian coordinates in

More information

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x =

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x = Linear Algebra Review Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1 x x = 2. x n Vectors of up to three dimensions are easy to diagram.

More information

(arrows denote positive direction)

(arrows denote positive direction) 12 Chapter 12 12.1 3-dimensional Coordinate System The 3-dimensional coordinate system we use are coordinates on R 3. The coordinate is presented as a triple of numbers: (a,b,c). In the Cartesian coordinate

More information

x 1 x 2. x 1, x 2,..., x n R. x n

x 1 x 2. x 1, x 2,..., x n R. x n WEEK In general terms, our aim in this first part of the course is to use vector space theory to study the geometry of Euclidean space A good knowledge of the subject matter of the Matrix Applications

More information

235 Final exam review questions

235 Final exam review questions 5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

Linear Algebra - Part II

Linear Algebra - Part II Linear Algebra - Part II Projection, Eigendecomposition, SVD (Adapted from Sargur Srihari s slides) Brief Review from Part 1 Symmetric Matrix: A = A T Orthogonal Matrix: A T A = AA T = I and A 1 = A T

More information

Vectors. A vector is usually denoted in bold, like vector a, or sometimes it is denoted a, or many other deviations exist in various text books.

Vectors. A vector is usually denoted in bold, like vector a, or sometimes it is denoted a, or many other deviations exist in various text books. Vectors A Vector has Two properties Magnitude and Direction. That s a weirder concept than you think. A Vector does not necessarily start at a given point, but can float about, but still be the SAME vector.

More information

Lecture 2: Vector-Vector Operations

Lecture 2: Vector-Vector Operations Lecture 2: Vector-Vector Operations Vector-Vector Operations Addition of two vectors Geometric representation of addition and subtraction of vectors Vectors and points Dot product of two vectors Geometric

More information

Study guide for Exam 1. by William H. Meeks III October 26, 2012

Study guide for Exam 1. by William H. Meeks III October 26, 2012 Study guide for Exam 1. by William H. Meeks III October 2, 2012 1 Basics. First we cover the basic definitions and then we go over related problems. Note that the material for the actual midterm may include

More information

Section 13.4 The Cross Product

Section 13.4 The Cross Product Section 13.4 The Cross Product Multiplying Vectors 2 In this section we consider the more technical multiplication which can be defined on vectors in 3-space (but not vectors in 2-space). 1. Basic Definitions

More information

Vector Geometry. Chapter 5

Vector Geometry. Chapter 5 Chapter 5 Vector Geometry In this chapter we will look more closely at certain geometric aspects of vectors in R n. We will first develop an intuitive understanding of some basic concepts by looking at

More information

Course Notes Math 275 Boise State University. Shari Ultman

Course Notes Math 275 Boise State University. Shari Ultman Course Notes Math 275 Boise State University Shari Ultman Fall 2017 Contents 1 Vectors 1 1.1 Introduction to 3-Space & Vectors.............. 3 1.2 Working With Vectors.................... 7 1.3 Introduction

More information

Chapter 3 Vectors. 3.1 Vector Analysis

Chapter 3 Vectors. 3.1 Vector Analysis Chapter 3 Vectors 3.1 Vector nalysis... 1 3.1.1 Introduction to Vectors... 1 3.1.2 Properties of Vectors... 1 3.2 Coordinate Systems... 6 3.2.1 Cartesian Coordinate System... 6 3.2.2 Cylindrical Coordinate

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Exercises * on Linear Algebra

Exercises * on Linear Algebra Exercises * on Linear Algebra Laurenz Wiskott Institut für Neuroinformatik Ruhr-Universität Bochum, Germany, EU 4 February 7 Contents Vector spaces 4. Definition...............................................

More information

11.1 Vectors in the plane

11.1 Vectors in the plane 11.1 Vectors in the plane What is a vector? It is an object having direction and length. Geometric way to represent vectors It is represented by an arrow. The direction of the arrow is the direction of

More information

Eigenvectors and Hermitian Operators

Eigenvectors and Hermitian Operators 7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding

More information

The Cross Product. In this section, we will learn about: Cross products of vectors and their applications.

The Cross Product. In this section, we will learn about: Cross products of vectors and their applications. The Cross Product In this section, we will learn about: Cross products of vectors and their applications. THE CROSS PRODUCT The cross product a x b of two vectors a and b, unlike the dot product, is a

More information

14 Singular Value Decomposition

14 Singular Value Decomposition 14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

Rigid Geometric Transformations

Rigid Geometric Transformations Rigid Geometric Transformations Carlo Tomasi This note is a quick refresher of the geometry of rigid transformations in three-dimensional space, expressed in Cartesian coordinates. 1 Cartesian Coordinates

More information

Chapter 2 - Vector Algebra

Chapter 2 - Vector Algebra A spatial vector, or simply vector, is a concept characterized by a magnitude and a direction, and which sums with other vectors according to the Parallelogram Law. A vector can be thought of as an arrow

More information

Inner Product Spaces 6.1 Length and Dot Product in R n

Inner Product Spaces 6.1 Length and Dot Product in R n Inner Product Spaces 6.1 Length and Dot Product in R n Summer 2017 Goals We imitate the concept of length and angle between two vectors in R 2, R 3 to define the same in the n space R n. Main topics are:

More information

Vectors and Matrices Statistics with Vectors and Matrices

Vectors and Matrices Statistics with Vectors and Matrices Vectors and Matrices Statistics with Vectors and Matrices Lecture 3 September 7, 005 Analysis Lecture #3-9/7/005 Slide 1 of 55 Today s Lecture Vectors and Matrices (Supplement A - augmented with SAS proc

More information

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03 Page 5 Lecture : Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 008/10/0 Date Given: 008/10/0 Inner Product Spaces: Definitions Section. Mathematical Preliminaries: Inner

More information

Algebra II. Paulius Drungilas and Jonas Jankauskas

Algebra II. Paulius Drungilas and Jonas Jankauskas Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive

More information

Functional Analysis Review

Functional Analysis Review Outline 9.520: Statistical Learning Theory and Applications February 8, 2010 Outline 1 2 3 4 Vector Space Outline A vector space is a set V with binary operations +: V V V and : R V V such that for all

More information

GEOMETRY AND VECTORS

GEOMETRY AND VECTORS GEOMETRY AND VECTORS Distinguishing Between Points in Space One Approach Names: ( Fred, Steve, Alice...) Problem: distance & direction must be defined point-by-point More elegant take advantage of geometry

More information

Linear Algebra Review. Fei-Fei Li

Linear Algebra Review. Fei-Fei Li Linear Algebra Review Fei-Fei Li 1 / 51 Vectors Vectors and matrices are just collections of ordered numbers that represent something: movements in space, scaling factors, pixel brightnesses, etc. A vector

More information

HOW TO THINK ABOUT POINTS AND VECTORS WITHOUT COORDINATES. Math 225

HOW TO THINK ABOUT POINTS AND VECTORS WITHOUT COORDINATES. Math 225 HOW TO THINK ABOUT POINTS AND VECTORS WITHOUT COORDINATES Math 225 Points Look around. What do you see? You see objects: a chair here, a table there, a book on the table. These objects occupy locations,

More information

Vectors. September 2, 2015

Vectors. September 2, 2015 Vectors September 2, 2015 Our basic notion of a vector is as a displacement, directed from one point of Euclidean space to another, and therefore having direction and magnitude. We will write vectors in

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

Chapter 8 Vectors and Scalars

Chapter 8 Vectors and Scalars Chapter 8 193 Vectors and Scalars Chapter 8 Vectors and Scalars 8.1 Introduction: In this chapter we shall use the ideas of the plane to develop a new mathematical concept, vector. If you have studied

More information

Lecture 4: Affine Transformations. for Satan himself is transformed into an angel of light. 2 Corinthians 11:14

Lecture 4: Affine Transformations. for Satan himself is transformed into an angel of light. 2 Corinthians 11:14 Lecture 4: Affine Transformations for Satan himself is transformed into an angel of light. 2 Corinthians 11:14 1. Transformations Transformations are the lifeblood of geometry. Euclidean geometry is based

More information

Math 52: Course Summary

Math 52: Course Summary Math 52: Course Summary Rich Schwartz September 2, 2009 General Information: Math 52 is a first course in linear algebra. It is a transition between the lower level calculus courses and the upper level

More information

Principal Component Analysis

Principal Component Analysis Principal Component Analysis Laurenz Wiskott Institute for Theoretical Biology Humboldt-University Berlin Invalidenstraße 43 D-10115 Berlin, Germany 11 March 2004 1 Intuition Problem Statement Experimental

More information

Lecture 7. Econ August 18

Lecture 7. Econ August 18 Lecture 7 Econ 2001 2015 August 18 Lecture 7 Outline First, the theorem of the maximum, an amazing result about continuity in optimization problems. Then, we start linear algebra, mostly looking at familiar

More information

What you will learn today

What you will learn today What you will learn today The Dot Product Equations of Vectors and the Geometry of Space 1/29 Direction angles and Direction cosines Projections Definitions: 1. a : a 1, a 2, a 3, b : b 1, b 2, b 3, a

More information

12x + 18y = 30? ax + by = m

12x + 18y = 30? ax + by = m Math 2201, Further Linear Algebra: a practical summary. February, 2009 There are just a few themes that were covered in the course. I. Algebra of integers and polynomials. II. Structure theory of one endomorphism.

More information

On-Line Geometric Modeling Notes VECTOR SPACES

On-Line Geometric Modeling Notes VECTOR SPACES On-Line Geometric Modeling Notes VECTOR SPACES Kenneth I. Joy Visualization and Graphics Research Group Department of Computer Science University of California, Davis These notes give the definition of

More information

y 2 . = x 1y 1 + x 2 y x + + x n y n 2 7 = 1(2) + 3(7) 5(4) = 3. x x = x x x2 n.

y 2 . = x 1y 1 + x 2 y x + + x n y n 2 7 = 1(2) + 3(7) 5(4) = 3. x x = x x x2 n. 6.. Length, Angle, and Orthogonality In this section, we discuss the defintion of length and angle for vectors and define what it means for two vectors to be orthogonal. Then, we see that linear systems

More information

4.2. ORTHOGONALITY 161

4.2. ORTHOGONALITY 161 4.2. ORTHOGONALITY 161 Definition 4.2.9 An affine space (E, E ) is a Euclidean affine space iff its underlying vector space E is a Euclidean vector space. Given any two points a, b E, we define the distance

More information

Vectors a vector is a quantity that has both a magnitude (size) and a direction

Vectors a vector is a quantity that has both a magnitude (size) and a direction Vectors In physics, a vector is a quantity that has both a magnitude (size) and a direction. Familiar examples of vectors include velocity, force, and electric field. For any applications beyond one dimension,

More information

Abstract & Applied Linear Algebra (Chapters 1-2) James A. Bernhard University of Puget Sound

Abstract & Applied Linear Algebra (Chapters 1-2) James A. Bernhard University of Puget Sound Abstract & Applied Linear Algebra (Chapters 1-2) James A. Bernhard University of Puget Sound Copyright 2018 by James A. Bernhard Contents 1 Vector spaces 3 1.1 Definitions and basic properties.................

More information

CALC 3 CONCEPT PACKET Complete

CALC 3 CONCEPT PACKET Complete CALC 3 CONCEPT PACKET Complete Written by Jeremy Robinson, Head Instructor Find Out More +Private Instruction +Review Sessions WWW.GRADEPEAK.COM Need Help? Online Private Instruction Anytime, Anywhere

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

REVIEW - Vectors. Vectors. Vector Algebra. Multiplication by a scalar

REVIEW - Vectors. Vectors. Vector Algebra. Multiplication by a scalar J. Peraire Dynamics 16.07 Fall 2004 Version 1.1 REVIEW - Vectors By using vectors and defining appropriate operations between them, physical laws can often be written in a simple form. Since we will making

More information

The Cross Product of Two Vectors

The Cross Product of Two Vectors The Cross roduct of Two Vectors In proving some statements involving surface integrals, there will be a need to approximate areas of segments of the surface by areas of parallelograms. Therefore it is

More information

Coach Stones Expanded Standard Pre-Calculus Algorithm Packet Page 1 Section: P.1 Algebraic Expressions, Mathematical Models and Real Numbers

Coach Stones Expanded Standard Pre-Calculus Algorithm Packet Page 1 Section: P.1 Algebraic Expressions, Mathematical Models and Real Numbers Coach Stones Expanded Standard Pre-Calculus Algorithm Packet Page 1 Section: P.1 Algebraic Expressions, Mathematical Models and Real Numbers CLASSIFICATIONS OF NUMBERS NATURAL NUMBERS = N = {1,2,3,4,...}

More information

Rotational motion of a rigid body spinning around a rotational axis ˆn;

Rotational motion of a rigid body spinning around a rotational axis ˆn; Physics 106a, Caltech 15 November, 2018 Lecture 14: Rotations The motion of solid bodies So far, we have been studying the motion of point particles, which are essentially just translational. Bodies with

More information

Fact: Every matrix transformation is a linear transformation, and vice versa.

Fact: Every matrix transformation is a linear transformation, and vice versa. Linear Transformations Definition: A transformation (or mapping) T is linear if: (i) T (u + v) = T (u) + T (v) for all u, v in the domain of T ; (ii) T (cu) = ct (u) for all scalars c and all u in the

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

MATH 431: FIRST MIDTERM. Thursday, October 3, 2013.

MATH 431: FIRST MIDTERM. Thursday, October 3, 2013. MATH 431: FIRST MIDTERM Thursday, October 3, 213. (1) An inner product on the space of matrices. Let V be the vector space of 2 2 real matrices (that is, the algebra Mat 2 (R), but without the mulitiplicative

More information

a b 0 a cos u, 0 u 180 :

a b 0 a  cos u, 0 u 180 : Section 7.3 The Dot Product of Two Geometric Vectors In hapter 6, the concept of multiplying a vector by a scalar was discussed. In this section, we introduce the dot product of two vectors and deal specifically

More information

A PRIMER ON SESQUILINEAR FORMS

A PRIMER ON SESQUILINEAR FORMS A PRIMER ON SESQUILINEAR FORMS BRIAN OSSERMAN This is an alternative presentation of most of the material from 8., 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin s book. Any terminology (such as sesquilinear form

More information

Algebraic. techniques1

Algebraic. techniques1 techniques Algebraic An electrician, a bank worker, a plumber and so on all have tools of their trade. Without these tools, and a good working knowledge of how to use them, it would be impossible for them

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

Linear Algebra. The Manga Guide. Supplemental Appendixes. Shin Takahashi, Iroha Inoue, and Trend-Pro Co., Ltd.

Linear Algebra. The Manga Guide. Supplemental Appendixes. Shin Takahashi, Iroha Inoue, and Trend-Pro Co., Ltd. The Manga Guide to Linear Algebra Supplemental Appendixes Shin Takahashi, Iroha Inoue, and Trend-Pro Co., Ltd. Copyright by Shin Takahashi and TREND-PRO Co., Ltd. ISBN-: 978--97--9 Contents A Workbook...

More information

LINEAR ALGEBRA - CHAPTER 1: VECTORS

LINEAR ALGEBRA - CHAPTER 1: VECTORS LINEAR ALGEBRA - CHAPTER 1: VECTORS A game to introduce Linear Algebra In measurement, there are many quantities whose description entirely rely on magnitude, i.e., length, area, volume, mass and temperature.

More information

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124 Matrices Math 220 Copyright 2016 Pinaki Das This document is freely redistributable under the terms of the GNU Free Documentation License For more information, visit http://wwwgnuorg/copyleft/fdlhtml Contents

More information

Math Assignment 3 - Linear Algebra

Math Assignment 3 - Linear Algebra Math 216 - Assignment 3 - Linear Algebra Due: Tuesday, March 27. Nothing accepted after Thursday, March 29. This is worth 15 points. 10% points off for being late. You may work by yourself or in pairs.

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information