Linear Algebra for Theoretical Neuroscience (Part 1) Ken Miller


Linear Algebra for Theoretical Neuroscience, Part 1. Ken Miller. (c) 2001, 2008 by Kenneth Miller. This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/us/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA. Current versions of all parts of this work can be found at ken/math-notes. Please feel free to link to this site.

I would appreciate any and all feedback that would help improve these notes as a teaching tool: what was particularly helpful, where you got stuck, and what might have helped get you unstuck. I already know that more figures, problems, and neurobiological examples are needed in a future incarnation (for the most part I didn't have time to make figures), but that shouldn't discourage contributions of, or suggestions as to, useful figures, problems, and examples. There are also many missing mathematical pieces I would like to fill in, as described on the home page for these notes. If anyone wants to turn this into a collaboration and help, I'd be open to discussing that too. Feedback can be sent to me by email, ken@neurotheory.columbia.edu.

Reading These Notes (instructions as written for classes I've taught that use these notes)

I have tried to begin at the beginning and make things clear enough that everyone can follow, assuming basic college math as background. Some of it will be trivial for you; I hope none of it will be over your head, but some might. My suggested rules for reading this are:

Read and work through everything. Read with pen and paper beside you. Never let yourself read through anything you don't completely understand; work through it until it is crystal clear to you. Go at your own pace; breeze through whatever is trivial for you.

Do all of the problems. Talk among yourselves as much as desired in coming to an understanding of them, but then actually write up the answers by yourself. Most or all of the problems are very simple; many only require one line as an answer. If you find a problem to be so obvious for you that it is a waste of your time or annoying to write it down, go ahead and skip it. But do be conservative in your judgements: it can be surprising how much you can learn by working out in detail what you think you understand in a general way. You can't understand the material without doing. In most cases, I have led you step by step through what is required. The purpose of the problems is not to test your math ability, but simply to make sure you do enough to achieve understanding.

The exercises do not require a written answer. But except where one is prefaced by something like "for those interested", you should read them, make sure you understand them, and if possible solve them in your head or on paper.

As you read these notes, mark them with feedback: things you don't understand, things you get confused by, things that seem trivial or unnecessary, suggestions, whatever. Then turn in to me a copy of your annotated notes.

References

If you want to consult other references on this material: an excellent text, although fairly mathematical, is Differential Equations, Dynamical Systems and Linear Algebra, by Morris W. Hirsch and Stephen Smale (Academic Press, NY, 1974). Gilbert Strang has written several very nice texts that are strong on intuition, including a couple of different linear algebra texts (I'm not sure of their relative strengths and weaknesses) and an Introduction to Applied Mathematics. A good practical reference (sort of a cheat sheet of basic results, plus computer algorithms and practical advice on doing computations) is Numerical Recipes in C, 2nd Edition, by W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery (Cambridge University Press, 1992).

Part 3 of these notes, which deals with non-normal matrices (matrices that do not have a complete orthonormal basis of eigenvectors), needs to be completely rewritten: since it was written, I've learned that non-normal matrices have many features not predicted by the eigenvalues that are of great relevance in neurobiology, and in biology more generally, and the notes don't deal with this. In the meantime, for mathematical aspects of non-normal matrix behavior, see the book by L.N. Trefethen and M. Embree, Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators (Princeton University Press, 2005).

1 Introduction to Vectors and Matrices

We will start out by reviewing basic notation describing, and basic operations of, vectors and matrices. Why do we care about such things? In neurobiological modeling we are often dealing with arrays of variables: the activities of all of the neurons in a network at a given time; the firing rate of a neuron in each of many small epochs of time; the weights of all of the synapses impinging on a postsynaptic cell. The natural language for thinking about and analyzing the behavior of such arrays of variables is the language of vectors and matrices.

1.1 Notation

A scalar is simply a number (we use the term scalar to distinguish numbers from vectors, which are arrays of numbers). Scalars will be written without boldface: x, y, etc. We will write a vector as a bold-face small letter, e.g. v; this denotes a column vector. Its elements v_i are written without bold-face:

$$\mathbf{v} = \begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_{N-1} \end{pmatrix} \tag{1}$$

Here N, the number of elements, is the dimension of v. The transpose of v, written v^T, is a row vector:

$$\mathbf{v}^T = (v_0, v_1, \ldots, v_{N-1}) \tag{2}$$

The transpose of a row vector, in turn, is a column vector; in particular, (v^T)^T = v. Thus, to keep things easier to write, we can also write v as

$$\mathbf{v} = (v_0, v_1, \ldots, v_{N-1})^T \tag{3}$$

We will write a matrix as a bold-face capital letter, e.g. M; its elements M_ij, where i indicates the row and j indicates the column, are written without boldface:

$$\mathbf{M} = \begin{pmatrix} M_{00} & M_{01} & \cdots & M_{0,N-1} \\ M_{10} & M_{11} & \cdots & M_{1,N-1} \\ \vdots & \vdots & \ddots & \vdots \\ M_{N-1,0} & M_{N-1,1} & \cdots & M_{N-1,N-1} \end{pmatrix} \tag{4}$$

This is a square, N × N matrix. A matrix can also be rectangular; e.g., a P × N matrix would have P rows and N columns. In particular, an N-dimensional vector can be regarded as an N × 1 matrix, while its transpose can be regarded as a 1 × N matrix. For the most part, we will only be concerned with square matrices and with vectors, although we will eventually return to non-square matrices. The transpose of M, written M^T, is the matrix with elements (M^T)_ij = M_ji:

$$\mathbf{M}^T = \begin{pmatrix} M_{00} & M_{10} & \cdots & M_{N-1,0} \\ M_{01} & M_{11} & \cdots & M_{N-1,1} \\ \vdots & \vdots & \ddots & \vdots \\ M_{0,N-1} & M_{1,N-1} & \cdots & M_{N-1,N-1} \end{pmatrix} \tag{5}$$

Note that, under this definition, the transpose of a P × N matrix is an N × P matrix.

Footnote 1: Those of you who have taken upper-level physics courses may have seen the bra and ket notation, |v⟩ (ket) and ⟨v| (bra). For vectors, these are just another notation for a vector and its transpose: |v⟩ = v, ⟨v| = v^T. The bra and ket notation is useful because one can effortlessly move between vectors and functions using the same notation, making transparent the fact (which we will eventually discuss in these notes) that vector spaces and function spaces can all be dealt with using the same formalism of linear algebra. But we will be focusing on vectors and will stick to the simple notation v and v^T.
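The notation above can be made concrete with a few lines of NumPy. This is a sketch of ours, not part of the original notes; it assumes NumPy is available, and the variable names are our own.

    import numpy as np

    v = np.array([1.0, 2.0, 3.0])     # an N-dimensional vector (here N = 3)
    N = v.shape[0]                    # its dimension

    M = np.arange(9.0).reshape(3, 3)  # a square N x N matrix
    print(M[0, 2])                    # element M_02: row 0, column 2
    print(M.T[2, 0])                  # the transpose swaps indices: (M^T)_20 = M_02
    print(np.allclose(M.T.T, M))      # (M^T)^T = M -> True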

Definition 1: A square matrix M is called symmetric if M = M^T; that is, if M_ij = M_ji for all i and j.

Example: The matrix $\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ is not symmetric. Its transpose is $\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}^T = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}$. The matrix $\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$ is symmetric; it is equal to its own transpose.

A final point about notation: we will generally use 0 to mean any object all of whose entries are 0. It should be clear from context whether the thing that is set equal to zero is just a number, or a vector all of whose elements are 0, or a matrix all of whose elements are 0. So we abuse notation by using the same symbol 0 for all of these cases.

1.2 Matrix and vector addition

The definitions of matrix and vector addition are simple: you can only add objects of the same type and size, and things add element-wise:

Addition of two vectors: v + x is the vector with elements (v + x)_i = v_i + x_i.

Addition of two matrices: M + P is the matrix with elements (M + P)_ij = M_ij + P_ij.

Subtraction works the same way: (v − x)_i = v_i − x_i, (M − P)_ij = M_ij − P_ij. Addition or subtraction of two vectors has a simple geometrical interpretation: placing x tail-to-head after v, the sum v + x is the vector from the tail of v to the head of x.

1.3 Multiplication by a scalar

Vectors or matrices can be multiplied by a scalar, which is just defined to mean multiplying every element by the scalar:

Multiplication of a vector or matrix by a scalar: Let k be a scalar (an ordinary number). The vector kv = vk = (k v_0, k v_1, ..., k v_{N−1})^T. The matrix kM = Mk is the matrix with entries (kM)_ij = k M_ij.

1.4 Linear Mappings of Vectors

Consider a function M(v) that maps an N-dimensional vector v to a P-dimensional vector M(v) = (M_0(v), M_1(v), ..., M_{P−1}(v))^T. We say that this mapping is linear if (1) for all scalars a, M(av) = aM(v), and (2) for all pairs of N-dimensional vectors v and w, M(v + w) = M(v) + M(w). It turns out that the most general linear mapping can be written in the following form: each element of M(v) is determined by a linear combination of the elements of v, so that for each i, M_i(v) = M_{i0} v_0 + M_{i1} v_1 + ... + M_{i,N−1} v_{N−1} = Σ_j M_ij v_j for some constants M_ij. This motivates the definition of matrices and matrix multiplication. We define the P × N matrix M to have the elements M_ij, and the product of M with v, Mv, is defined by (Mv)_i = Σ_j M_ij v_j. Thus, the set of all possible linear functions corresponds precisely to the set of all possible matrices, and matrix multiplication of a vector corresponds to a linear transformation of the vector. This motivates the definition of matrix multiplication, to which we now turn.
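The two defining properties of linearity can be spot-checked numerically. This is a sketch under our own choice of random M, v, and w; nothing here is specific to the notes.

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 3))   # a P x N matrix: maps 3-dim vectors to 4-dim
    v, w = rng.standard_normal(3), rng.standard_normal(3)
    a = 2.5

    # Property 1: M(av) = a M(v); Property 2: M(v + w) = M(v) + M(w)
    print(np.allclose(M @ (a * v), a * (M @ v)))    # True
    print(np.allclose(M @ (v + w), M @ v + M @ w))  # True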

1.5 Matrix and vector multiplication

The definitions of matrix and vector multiplication sound complicated, but it gets easy when you actually do it (see the examples below, and Problem 1). The basic idea is this:

The multiplication of two objects A and B to form AB is only defined if the number of columns of A (the object on the left) equals the number of rows of B (the object on the right). Note that this means that order matters! In general, even if both AB and BA are defined, they need not be the same thing: AB ≠ BA.

To form AB, take row i of A; rotate it clockwise to form a column, and multiply each element with the corresponding element of column j of B. Sum the results of these multiplications, and that gives a single number, entry ij of the resulting output structure AB.

Let's see what this means by defining the various possible allowed cases (if this is confusing, just keep plowing on through; working through Problem 1 should clear things up):

Multiplication of two matrices: MP is the matrix with elements (MP)_ik = Σ_j M_ij P_jk. Example:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{pmatrix}$$

Multiplication of a column vector by a matrix: Mv = ((Mv)_0, (Mv)_1, ..., (Mv)_{N−1})^T, where (Mv)_i = Σ_j M_ij v_j. Mv is a column vector. Example:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax+by \\ cx+dy \end{pmatrix}$$

Multiplication of a matrix by a row vector: v^T M = ((v^T M)_0, (v^T M)_1, ..., (v^T M)_{N−1}), where (v^T M)_j = Σ_i v_i M_ij. v^T M is a row vector. Example:

$$\begin{pmatrix} x & y \end{pmatrix}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} xa+yc & xb+yd \end{pmatrix}$$

Dot or inner product of two vectors: multiplication by a row vector on the left of a column vector on the right. v·x is a notation for the dot product, which is defined by v·x = v^T x = Σ_i v_i x_i. v^T x is a scalar, that is, a single number. Note from this definition that v^T x = x^T v. Example:

$$\begin{pmatrix} x \\ y \end{pmatrix}\cdot\begin{pmatrix} z \\ w \end{pmatrix} = \begin{pmatrix} x & y \end{pmatrix}\begin{pmatrix} z \\ w \end{pmatrix} = xz + yw$$

Outer product of two vectors: multiplication by a column vector on the left of a row vector on the right. v x^T is a matrix, with elements (v x^T)_ij = v_i x_j. Example:

$$\begin{pmatrix} x \\ y \end{pmatrix}\begin{pmatrix} z & w \end{pmatrix} = \begin{pmatrix} xz & xw \\ yz & yw \end{pmatrix}$$
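Each of these product types has a direct NumPy counterpart. A minimal sketch, with matrices and vectors of our own choosing:

    import numpy as np

    v = np.array([1.0, 2.0])
    x = np.array([3.0, 4.0])
    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[0.0, 1.0], [1.0, 0.0]])

    print(np.dot(v, x))               # inner product v . x = v^T x, a scalar (11.0)
    print(np.outer(v, x))             # outer product v x^T, a 2 x 2 matrix
    print(A @ B)                      # matrix product: (AB)_ik = sum_j A_ij B_jk
    print(np.allclose(A @ B, B @ A))  # False: order matters, AB != BA in general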

These rules will all become obvious with a tiny bit of practice, as follows:

Problem 1: Let v = (1, 2, 3)^T, x = (4, 5, 6)^T. Compute the inner product v^T x and the outer products v x^T and x v^T. To compute v^T x, begin by writing the row vector v^T to the left of the column vector x, so you can see the multiplication that the inner product consists of, and why it results in a single number, a scalar. Similarly, to compute the outer products, say v x^T, begin by writing the column vector v to the left of the row vector x^T, so you can see the multiplication, and why it results in a matrix of numbers. Finally, let A = v x^T, and note that A^T = x v^T; that is, (v x^T)^T = x v^T. Compute the matrix A A^T = (v x^T)(x v^T) in two ways: as a product of two matrices, (v x^T)(x v^T), and as a scalar times the outer product of two vectors: (v x^T)(x v^T) = (x^T x) v v^T (note, in the last step we have made use of the fact that a scalar, x^T x, commutes with anything and so can be pulled out front). Show that the outcomes are identical. Show that A A^T ≠ A^T A; that is, matrix multiplication need not commute. Note that A^T A can also be written (x v^T)(v x^T) = (v^T v) x x^T. Compute the row vector x^T (v x^T) in two ways: as a row vector times a matrix, x^T (v x^T), and as a scalar times a row vector, (x^T v) x^T. Show that the outcomes are identical, and proportional to the vector x^T. Compute the column vector (v x^T) v in two ways: as a matrix times a column vector, (v x^T) v, and as a column vector times a scalar, v (x^T v). Show that the outcomes are identical, and proportional to v.

Exercise 1: Make up more examples as needed to make sure the definitions above of matrix and vector multiplication are intuitively clear to you.

Problem 2:

1. Prove that for any vectors v and x and matrices M and P: (v x^T)^T = x v^T, (Mv)^T = v^T M^T, and (MP)^T = P^T M^T. Hint: in general, the way to get started in a proof is to write down precisely what you need to prove. In this case, it helps to write this down in terms of indices. For example, here's how to solve the first one: we need to show that ((v x^T)^T)_ij = (x v^T)_ij for any i and j. So write down what each side means: ((v x^T)^T)_ij = (v x^T)_ji = v_j x_i, while (x v^T)_ij = x_i v_j. We're done! v_j x_i = x_i v_j, so just writing down what the proof requires, in terms of indices, is enough to solve the problem.

2. Show that (MPQ)^T = Q^T P^T M^T for any matrices M, P and Q. Hint: apply the two-matrix result first to the product of the two matrices M and (PQ); then apply it again to the product of the two matrices P and Q. As you might guess, or easily prove, this result extends to a product of any number of matrices: you form the transpose of the product by reversing their order and taking the transpose of each factor.

As the above problems and exercises suggest, matrix and vector multiplication are associative: ABC = (AB)C = A(BC), etc.; but they are not in general commutative: AB ≠ BA. However, a scalar (a number) always commutes with anything.
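The identities in Problems 1 and 2 can be verified numerically. A hedged sketch of ours, using the same v and x as Problem 1:

    import numpy as np

    v = np.array([1.0, 2.0, 3.0])
    x = np.array([4.0, 5.0, 6.0])
    A = np.outer(v, x)                       # A = v x^T, as in Problem 1

    print(np.dot(v, x))                      # v^T x = 32
    print(np.allclose(A.T, np.outer(x, v)))  # (v x^T)^T = x v^T
    print(np.allclose(A @ A.T,
                      np.dot(x, x) * np.outer(v, v)))  # (vx^T)(xv^T) = (x^T x) v v^T
    print(np.allclose(A @ A.T, A.T @ A))     # False: A A^T != A^T A
    M, P = A, A.T
    print(np.allclose((M @ P).T, P.T @ M.T)) # (MP)^T = P^T M^T (Problem 2)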

From the dot product, we can also define two other important concepts:

Definition 2: The length or absolute value |v| of a vector v is given by $|\mathbf{v}| = \sqrt{\mathbf{v}\cdot\mathbf{v}} = \sqrt{\sum_i v_i^2}$.

This is just the standard Euclidean length of the vector: the distance from the origin (the vector 0) to the end of the vector. This might also be a good place to remind you of your high school geometry: the dot product of any two vectors v and w can be expressed v·w = |v||w| cos θ, where θ is the angle between the two vectors.

Definition 3: Two vectors v and w are said to be orthogonal if v·w = 0. Geometrically, two vectors are orthogonal when the angle between them is 90°, so that the cosine of the angle between them is 0.

Problem 3 (better understanding matrix multiplication): Let the N × N matrix M have columns c_i: M = (c_0 c_1 ... c_{N−1}), where each c_i is an N-dimensional column vector. Let it have rows r_i^T: M = (r_0 r_1 ... r_{N−1})^T.

1. Show that for any vector v, Mv = (r_0·v, r_1·v, ..., r_{N−1}·v)^T. Hint: note that M_ij = (r_i)_j, and show that (Mv)_k = r_k·v; that is, (Mv)_k = Σ_i M_ki v_i, while r_k·v = Σ_i (r_k)_i v_i, so show that these are equal. Thus, any vector v that is orthogonal to all the rows of M, that is, for which r_i·v = 0 for all i, is mapped to the zero vector.

2. Show that for any vector v, Mv = Σ_i v_i c_i. Hint: note that M_ij = (c_j)_i, where (c_j)_i is the i-th component of c_j, and show that (Mv)_k = (Σ_i v_i c_i)_k = Σ_i v_i (c_i)_k. Thus, the range of M (the set of vectors {w : w = Mv for some vector v}) is composed of all linear combinations of the columns of M (a linear combination of the c_i is a combination Σ_i a_i c_i for some constants a_i). You can gain some intuition for this result by noting that, in the matrix multiplication Mv, v_0 only multiplies elements of c_0, v_1 only multiplies elements of c_1, etc.

3. Let's make this concrete: pick a small concrete matrix M and vector v. Compute Mv the ordinary way, which corresponds to the format of item 1 above. Now instead write Σ_i v_i c_i, where the c_i are the columns of M, and show that this gives the same answer.

4. Consider another N × N matrix P, with columns d_i and rows s_i^T. Show that (MP)_ij = r_i·d_j. Hint: (MP)_ij = Σ_k M_ik P_kj, while r_i·d_j = Σ_k (r_i)_k (d_j)_k; show that these are equal. Show that MP = Σ_i c_i s_i^T, by showing that (MP)_kj = (Σ_i c_i s_i^T)_kj = Σ_i (c_i)_k (s_i)_j. Note that each term c_i s_i^T is a matrix. Again, you can gain some intuition for this result by noticing that elements of s_i only multiply elements of c_i in the matrix multiplication.

5. Let's make this concrete: pick small concrete matrices M and P. Compute MP the ordinary way, which amounts to (MP)_ij = r_i·d_j. Now instead write it as MP = Σ_i c_i s_i^T, and show that this sums to the same thing.
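The two readings of matrix multiplication in Problem 3 (rows of M dotted with v; columns of M weighted by the v_i; MP as a sum of outer products c_i s_i^T) can be checked numerically. The concrete numbers below are our own choice for items 3 and 5:

    import numpy as np

    M = np.array([[1.0, 2.0], [3.0, 4.0]])  # our own example entries
    v = np.array([5.0, 6.0])

    rows_view = np.array([np.dot(M[i, :], v) for i in range(2)])  # (Mv)_i = r_i . v
    cols_view = sum(v[i] * M[:, i] for i in range(2))             # Mv = sum_i v_i c_i
    print(np.allclose(M @ v, rows_view), np.allclose(M @ v, cols_view))  # True True

    P = np.array([[0.0, 1.0], [2.0, 3.0]])
    outer_sum = sum(np.outer(M[:, i], P[i, :]) for i in range(2)) # MP = sum_i c_i s_i^T
    print(np.allclose(M @ P, outer_sum))                          # True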

1.6 The Identity Matrix

The identity matrix will be written as 1. This is the matrix that is 1 on the diagonal and zero otherwise:

$$\mathbf{1} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$

Note that 1v = v and v^T 1 = v^T for any vector v, and 1M = M1 = M for any matrix M. The dimension of the matrix 1 is generally to be inferred from context; at any point, we are referring to the identity matrix with the same dimension as the other vectors and matrices being considered.

Exercise 2: Verify that 1v = v and v^T 1 = v^T for any vector v, and 1M = M1 = M for any matrix M.

1.7 The Inverse of a Matrix

Definition 4: The inverse of a square matrix M is a matrix M^{-1} satisfying M^{-1}M = MM^{-1} = 1.

Fact 1: For square matrices A and B, if AB = 1, then BA = 1; so knowing either AB = 1 or BA = 1 is enough to establish that A = B^{-1} and B = A^{-1}.

Not all matrices M have an inverse; but if a matrix has an inverse, that inverse is unique: there is at most one matrix that is the inverse of M. (Proof for square matrices: suppose C and B are both inverses of A. Then CAB = C(AB) = C1 = C; but also CAB = (CA)B = 1B = B; hence C = B.) Intuitively, the inverse of M undoes whatever M does: if you apply M to a vector or matrix, and then apply M^{-1} to the result, you end up having applied the identity matrix, that is, not having changed anything. If a matrix has an inverse, we say that it is invertible.

A matrix fails to have an inverse when it maps some nonzero vectors to the zero vector, 0. Suppose Mv = 0 for v ≠ 0. Then, since matrix multiplication is a linear operation, for any other vector w, M(av + w) = aMv + Mw = Mw, so all input vectors of the form av + w are mapped to the same output vector Mw. Hence in this case the action of M cannot be undone: given the output vector Mw, we cannot say which input vector produced it.

You may notice that above, we defined addition, subtraction, and multiplication for matrices, but not division. Ordinary division is really multiplying by the inverse of a number: x/y = y^{-1} x, where y^{-1} = 1/y. As you might imagine, the generalization for matrices would be multiplying by the inverse of a matrix. Since not all matrices have inverses, it turns out to be more sensible to leave it at that, and not define division as a separate operation for matrices.

Exercise 3: Suppose A and B are both invertible N × N matrices. Show that (AB)^{-1} = B^{-1} A^{-1}. Hint: just multiply AB times B^{-1} A^{-1} and see what you get. Similarly, if C is another invertible N × N matrix, (ABC)^{-1} = C^{-1} B^{-1} A^{-1}; etc. This should remind you of the result of Problem 2 for transposes.

Exercise 4: Show that (A^T)^{-1} = (A^{-1})^T. Hint: take the equation A^{-1} A = 1, and take the transpose.
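A quick numerical illustration of Definition 4 and Exercises 3 and 4 (the random matrices are our own choice; a generic random matrix is invertible with probability 1):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))

    Ainv = np.linalg.inv(A)
    print(np.allclose(A @ Ainv, np.eye(3)))                  # A A^{-1} = 1
    print(np.allclose(np.linalg.inv(A @ B),
                      np.linalg.inv(B) @ np.linalg.inv(A)))  # (AB)^{-1} = B^{-1} A^{-1}
    print(np.allclose(np.linalg.inv(A.T), Ainv.T))           # (A^T)^{-1} = (A^{-1})^T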

1.8 Why Vectors and Matrices? Two Toy Problems

As mentioned at the outset, in problems of theoretical neuroscience we are often dealing with large sets of variables: the activities of a large set of neurons in a network; the development of a large set of synaptic strengths impinging on a neuron. The equations to describe models of these systems are usually best expressed and analyzed in terms of vectors and matrices. Here are two simple examples of the formulation of problems in these terms; as we go along we will develop the tools to analyze them.

Development in a set of synapses. Consider a set of N presynaptic neurons with activities a_i making synapses w_i onto a single postsynaptic cell. Take the activity of the postsynaptic cell to be b = Σ_j w_j a_j. Suppose there is a simple linear Hebb-like plasticity rule of the form τ dw_i/dt = b a_i, for some time constant τ that determines how quickly weights change. Substituting in the expression for b, this becomes

$$\tau \frac{dw_i}{dt} = \sum_j a_i a_j w_j \tag{6}$$

or

$$\tau \frac{d\mathbf{w}}{dt} = \mathbf{a}\mathbf{a}^T \mathbf{w}. \tag{7}$$

Now, suppose that input activity patterns occur with some overall statistical structure, e.g. some overall patterns as to which neurons tend to be coactive or not with one another. For example, suppose the input neurons represent the lateral geniculate nucleus (LGN), which receives visual input from the eyes and projects to primary visual cortex. We may consider spontaneous activity in the LGN before vision; or we might consider visually-induced LGN activity patterns as an animal explores its natural environment. In either case, averaged over some short time (perhaps ranging from a few minutes to a few hours), the tendency of different neurons to be coactive or not may be quite reproducible. If τ is much larger than this time, so that weights change little over this time, then we can average Eq. 7 and replace a a^T by ⟨a a^T⟩, where ⟨x⟩ represents the average over input activity patterns of x. Defining C = ⟨a a^T⟩ to be the matrix of correlations between activities of the different inputs, we arrive at the equation (see Footnote 2)

$$\tau \frac{d\mathbf{w}}{dt} = \mathbf{C}\mathbf{w}. \tag{8}$$

Of course, this is only a toy model: weights are unbounded and can change their signs, and more generally we don't expect postsynaptic activity or plasticity to be determined by such simple linear equations. But it's useful to play with toy cars before driving real ones; as with cars, we'll find out that they do have something in common with the real thing. We will return to this model as we develop the tools to understand its behavior.

Footnote 2: Equation 8 can also be derived starting from slightly more complicated models. For example, we might assume that the learning depends on the covariance rather than product of the postsynaptic and presynaptic activities: τ dw_i/dt = (b − ⟨b⟩)(a_i − ⟨a_i⟩). This means that, if the post- and pre-synaptic activities fluctuate up from their mean activities at the same time, the weight gets stronger (this also happens if the activities fluctuate down together, which is certainly not realistic); while if one activity goes up from its mean while the other goes down, the weight gets weaker. After averaging, this gives Eq. 8, but with C now defined by C = ⟨a a^T⟩ − ⟨a⟩⟨a⟩^T (check that this is so). More generally, any rules in which the postsynaptic activity depends linearly on presynaptic activity, and the weight change depends linearly on postsynaptic activity (though perhaps nonlinearly on presynaptic activity), will yield an equation of the form τ dw/dt = Cw + h for some matrix C defined by the input activities and some constant vector h. Equations of this form can also sometimes be derived to describe aspects of development starting from more nonlinear rules.
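The averaging step that takes Eq. 7 to Eq. 8 can be illustrated by simulation. The sketch below is ours, not the notes': it assumes Gaussian input patterns, simple Euler integration, and arbitrary parameter values, and compares weights driven by the instantaneous rule (Eq. 7) with weights driven by the averaged rule (Eq. 8).

    import numpy as np

    rng = np.random.default_rng(2)
    N, n_patterns, tau, dt = 5, 5000, 500.0, 0.1

    patterns = rng.standard_normal((n_patterns, N))  # ensemble of input patterns a
    C = (patterns.T @ patterns) / n_patterns         # C = <a a^T>, input correlations

    w_inst = np.ones(N)  # driven by the instantaneous rule: tau dw/dt = (a a^T) w
    w_avg = np.ones(N)   # driven by the averaged rule:      tau dw/dt = C w
    for a in patterns:
        w_inst += (dt / tau) * np.outer(a, a) @ w_inst
        w_avg += (dt / tau) * C @ w_avg

    # When tau is large, so weights change little per pattern, the two agree:
    print(np.linalg.norm(w_inst - w_avg) / np.linalg.norm(w_avg))  # small relative error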

Activity in a network of neurons. Consider two layers of N neurons each, an input layer and an output layer. Label the activities of the input layer neurons by a_i, i = 0, ..., N−1, and similarly label the activities of the output layer neurons by b_i. Let W_ij be the strength of the synaptic connection from input neuron j to output neuron i. Also let there be synaptic connections between the output neurons: let B_ij be the strength of the connection from output neuron j to output neuron i (we can define B_ii = 0 for all i, if we want to exclude self-synapses). Let τ be a time constant of integration in the postsynaptic neuron. Then a very simple, linear model of activity in the output layer, given the activity in the input layer, would be:

$$\tau \frac{db_i}{dt} = -b_i + \sum_j W_{ij} a_j + \sum_j B_{ij} b_j. \tag{9}$$

The −b_i term on the right just says that, in the absence of input from other cells, the neuron's activity b_i decays to zero with time constant τ. Again, this is only a toy model; e.g., rates can go positive or negative and are unbounded in magnitude. Eq. 9 can be written as a vector equation:

$$\tau \frac{d\mathbf{b}}{dt} = -\mathbf{b} + \mathbf{W}\mathbf{a} + \mathbf{B}\mathbf{b} = -(\mathbf{1} - \mathbf{B})\mathbf{b} + \mathbf{W}\mathbf{a} \tag{10}$$

Wa is a vector that is independent of b: (Wa)_i = Σ_j W_ij a_j is the external input to output neuron i. So, let's give it a name: we'll call the vector of external inputs h = Wa. Thus, our equation finally is

$$\tau \frac{d\mathbf{b}}{dt} = -(\mathbf{1} - \mathbf{B})\mathbf{b} + \mathbf{h} \tag{11}$$

This is very similar in form to Eq. 8 for the previous model: the right side has a term in which the variable whose time derivative we are studying (b or w) is multiplied by a matrix (here, −(1 − B); previously, C). In addition, this equation now has a term h independent of that variable. In general, an equation of the form (d/dt)x = Cx is called homogeneous, while one with an added constant term, (d/dt)x = Cx + h, is called inhomogeneous.

We can also write down an equation for the steady-state or fixed-point output activity pattern b_FP for a given input activity pattern h: by definition, a steady state or fixed point is a point where db/dt = 0. Thus, the fixed point is determined by

$$(\mathbf{1} - \mathbf{B})\,\mathbf{b}_{FP} = \mathbf{h} \tag{12}$$

If the matrix (1 − B) has an inverse, (1 − B)^{-1}, then we can multiply both sides of Eq. 12 by this inverse to obtain

$$\mathbf{b}_{FP} = (\mathbf{1} - \mathbf{B})^{-1}\mathbf{h} \tag{13}$$

We'll return to this later to better understand what this equation means.
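Eqs. 12 and 13 translate directly into code. A minimal sketch with a made-up recurrent weight matrix B and input h of our own choosing; note that in practice one solves the linear system rather than forming the inverse explicitly, which is cheaper and numerically more stable:

    import numpy as np

    rng = np.random.default_rng(3)
    N = 4
    B = 0.1 * rng.standard_normal((N, N))  # weak recurrence, so (1 - B) is invertible
    np.fill_diagonal(B, 0.0)               # B_ii = 0: no self-synapses
    h = rng.standard_normal(N)             # external input h = Wa

    b_fp = np.linalg.solve(np.eye(N) - B, h)       # solve (1 - B) b_FP = h  (Eq. 12)
    print(np.allclose((np.eye(N) - B) @ b_fp, h))  # True: db/dt = 0 at the fixed point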

2 Coordinate Systems, Orthogonal Basis Vectors, and Orthogonal Change of Basis

To solve the equations that arise in the toy models just introduced, and in many other models, it will be critical to be able to view the problem in alternative coordinate systems. Choice of the right coordinate system will greatly simplify the equations and allow us to solve them. So, in this section we address the topic of coordinate systems: what they are, what it means to change coordinates, and how we change them. We begin by addressing the problem in two dimensions, where one can draw pictures and things are more intuitively clear. We'll then generalize our results to higher dimensions, as needed to address problems involving many variables, such as our toy models. For now we are only going to consider coordinate systems in which each coordinate axis is orthogonal to all the other coordinate axes; much later we will consider more general coordinate systems.

2.1 Coordinate Systems and Orthogonal Basis Vectors in Two Dimensions

When we write v = (v_x, v_y)^T, we are working in some coordinate system. For example, in Fig. 1, v_x and v_y are the coordinates of v along the x and y axes, respectively, so these are the coordinates of v in the (x, y) coordinate system. What do these coordinates mean? v_x is the extent of v in the x direction, while v_y is its extent in the y direction. How do we compute v_x and v_y? If φ is the angle between the x axis and v, then from trigonometry, v_x = |v| cos φ, while v_y = |v| sin φ.

We can express this in more general form by defining basis vectors: vectors of unit length along each of our orthogonal coordinate axes. The basis vectors along the x and y directions, when expressed in the (x, y) coordinate system, are e_x = (1, 0)^T and e_y = (0, 1)^T, respectively; that is, e_x is the vector with extent 1 in the x direction and 0 in the y direction, and similarly for e_y. Note that these basis vectors are orthogonal: e_x·e_y = 0. Then the same geometry gives e_x·v = |e_x||v| cos φ = |v| cos φ. That is, e_x·v gives the component of v along the x axis, v_x. We can also see this directly from the definition of the dot product: e_x^T v = (1, 0)(v_x, v_y)^T = v_x. Similarly, e_y·v = |v| sin φ = (0, 1)(v_x, v_y)^T = v_y.

So, we can understand the statement that v = (v_x, v_y)^T in the (x, y) coordinate system to mean that v has v_x units of the e_x basis vector, and v_y units of the e_y basis vector, where v_x = e_x^T v and v_y = e_y^T v:

$$\mathbf{v} = \begin{pmatrix} v_x \\ v_y \end{pmatrix} = v_x \begin{pmatrix} 1 \\ 0 \end{pmatrix} + v_y \begin{pmatrix} 0 \\ 1 \end{pmatrix} = v_x \mathbf{e}_x + v_y \mathbf{e}_y = (\mathbf{e}_x^T\mathbf{v})\,\mathbf{e}_x + (\mathbf{e}_y^T\mathbf{v})\,\mathbf{e}_y \tag{14}$$

We call e_x and e_y basis vectors, because together they form a basis for our space: any vector in our two-dimensional space can be expressed as a linear combination of e_x and e_y (a weighted sum of these basis vectors). For orthogonal basis vectors, the weighting of each basis vector in the sum is just that basis vector's dot product with the vector being expressed (note that v was an arbitrary vector, so Eq. 14 is true for any arbitrary vector in our space). Note that we can use the orthogonality of the basis vectors to show that this is the correct weighting: e_x·v = v_x e_x·e_x + v_y e_x·e_y = v_x, and similarly e_y·v = v_y.

Notice that the statement v = v_x e_x + v_y e_y is a geometric statement about the relationship between vectors: between the vector v, and the vectors e_x and e_y. It states that you can build v by multiplying e_x by v_x, multiplying e_y by v_y, and adding the two resulting vectors (make sure this is clear to you both geometrically, looking at Fig. 1, and algebraically, Eq. 14). This statement about vectors will be true no matter what coordinate system we express these vectors in. When we express this as v = (e_x^T v)e_x + (e_y^T v)e_y, there are no numbers in the equation; this is an equation entirely about the relationship between vectors. Again, this statement will be true in any particular coordinate system in which we choose to express these vectors. But since the dot product e_x^T v is a scalar (its value is independent of the coordinates in which we express the vectors), in any coordinate system the equation v = (e_x^T v)e_x + (e_y^T v)e_y will yield the equation v = v_x e_x + v_y e_y.
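A two-line numerical version of Eq. 14, with an angle and length of our own choosing:

    import numpy as np

    phi, length = 0.7, 2.0                        # our own example values
    v = length * np.array([np.cos(phi), np.sin(phi)])
    e_x, e_y = np.array([1.0, 0.0]), np.array([0.0, 1.0])

    v_x, v_y = np.dot(e_x, v), np.dot(e_y, v)     # components as dot products
    print(np.allclose(v, v_x * e_x + v_y * e_y))  # True: Eq. 14 reconstitutes v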

[Figure 1 here: the (x, y) axes, the rotated (x′, y′) axes, the vector v, and its perpendicular projections onto both sets of axes.]

Figure 1: Representation of a vector in two coordinate systems. The vector v is shown represented in two coordinate systems. The (x′, y′) coordinate system is rotated by an angle θ from the (x, y) coordinate system. The coordinates of v in a given coordinate system are given by the perpendicular projections of v onto the coordinate axes, as illustrated by the dashed lines. Thus, in the (x, y) basis, v has coordinates (v_x, v_y), while in the (x′, y′) basis, it has coordinates (v_x′, v_y′).

2.2 Rigid Change of Basis in Two Dimensions

Equations are generally written in some coordinate system, for example the (x, y) coordinate system in Fig. 1. But we could certainly describe the same biology equally well in other coordinate systems. Suppose we want to describe things in the new coordinate axes (x′, y′), determined by a rigid rotation by an angle θ from the (x, y) coordinate axes (Fig. 1). How do we define coordinates in this new coordinate system? Let's first define basis vectors e_x′, e_y′ to be the vectors of unit length along the x′ and y′ axes, respectively. Like any other vectors, we can write these vectors as linear combinations of e_x and e_y:

$$\mathbf{e}_{x'} = (\mathbf{e}_x^T\mathbf{e}_{x'})\,\mathbf{e}_x + (\mathbf{e}_y^T\mathbf{e}_{x'})\,\mathbf{e}_y \tag{15}$$
$$\mathbf{e}_{y'} = (\mathbf{e}_x^T\mathbf{e}_{y'})\,\mathbf{e}_x + (\mathbf{e}_y^T\mathbf{e}_{y'})\,\mathbf{e}_y \tag{16}$$

From the geometry, and the fact that the basis vectors have unit length, we find the following dot products:

$$\mathbf{e}_x^T\mathbf{e}_{x'} = \cos\theta \tag{17}$$
$$\mathbf{e}_y^T\mathbf{e}_{x'} = \sin\theta \tag{18}$$
$$\mathbf{e}_x^T\mathbf{e}_{y'} = -\sin\theta \tag{19}$$
$$\mathbf{e}_y^T\mathbf{e}_{y'} = \cos\theta \tag{20}$$

Thus, we can write our new basis vectors as

$$\mathbf{e}_{x'} = \cos\theta\,\mathbf{e}_x + \sin\theta\,\mathbf{e}_y \tag{21}$$
$$\mathbf{e}_{y'} = -\sin\theta\,\mathbf{e}_x + \cos\theta\,\mathbf{e}_y \tag{22}$$

Check, from the geometry of Fig. 1, that this makes sense.

Exercise 5: Using the expressions for e_x′ and e_y′ in Eqs. 21–22, check that e_x′ and e_y′ are orthogonal to one another (that is, that e_x′^T e_y′ = 0) and that they each have unit length (that is, that e_x′^T e_x′ = e_y′^T e_y′ = 1).

Problem 4: We've seen that, in a given coordinate system with basis vectors e_0, e_1, any vector v has the representation v = (e_0^T v, e_1^T v)^T, which is just shorthand for v = (e_0^T v)e_0 + (e_1^T v)e_1. Based on this and Eqs. 21–22, we know that, in the (x, y) coordinate system,

e_x = (1, 0)^T,  e_y = (0, 1)^T,  e_x′ = (cos θ, sin θ)^T,  e_y′ = (−sin θ, cos θ)^T.

Now, show that, in the (x′, y′) coordinate system,

e_x′ = (1, 0)^T,  e_y′ = (0, 1)^T,  e_x = (cos θ, −sin θ)^T,  e_y = (sin θ, cos θ)^T.

Note, for each of these four vectors v, you just have to form (e_x′^T v, e_y′^T v)^T. You can compute the necessary dot products using the representations in the (x, y) coordinate system, since dot products are coordinate-independent (although you can also just look them up from Eqs. 17–20). Note also that these equations should make intuitive sense: the (x, y) coordinate system is rotated by −θ from the (x′, y′) system, so expressing e_x, e_y in terms of e_x′, e_y′ should look exactly like expressing e_x′, e_y′ in terms of e_x, e_y, except that we must substitute −θ for θ; and note that cos θ = cos(−θ), −sin θ = sin(−θ).

We can reexpress the above equations for each set of basis vectors in the other's coordinate system in the coordinate-independent form:

$$\mathbf{e}_{x'} = \cos\theta\,\mathbf{e}_x + \sin\theta\,\mathbf{e}_y \tag{23}$$
$$\mathbf{e}_{y'} = -\sin\theta\,\mathbf{e}_x + \cos\theta\,\mathbf{e}_y \tag{24}$$
$$\mathbf{e}_x = \cos\theta\,\mathbf{e}_{x'} - \sin\theta\,\mathbf{e}_{y'} \tag{25}$$
$$\mathbf{e}_y = \sin\theta\,\mathbf{e}_{x'} + \cos\theta\,\mathbf{e}_{y'} \tag{26}$$

Now, verify these equations in each coordinate system. That is, first, using the (x, y) representation, substitute the coordinates of each vector and show that each equation is true. Then do the same thing again using the (x′, y′) representation. The numbers change, but the equations, which are statements about geometry that are true in any coordinate system, remain true.

OK, back to our original problem: we want to find the representation (v_x′, v_y′)^T of v in the new coordinate system. As we've seen, this is really just a short way of saying that v = v_x′ e_x′ + v_y′ e_y′, where v_x′ = e_x′^T v and v_y′ = e_y′^T v. But we also know that v = v_x e_x + v_y e_y. So, using Eqs. 21–22, we're ready to compute:

$$v_{x'} = \mathbf{e}_{x'}^T\mathbf{v} = \mathbf{e}_{x'}^T(v_x\mathbf{e}_x + v_y\mathbf{e}_y) = v_x\,\mathbf{e}_{x'}^T\mathbf{e}_x + v_y\,\mathbf{e}_{x'}^T\mathbf{e}_y = v_x\cos\theta + v_y\sin\theta \tag{27}$$
$$v_{y'} = \mathbf{e}_{y'}^T\mathbf{v} = \mathbf{e}_{y'}^T(v_x\mathbf{e}_x + v_y\mathbf{e}_y) = v_x\,\mathbf{e}_{y'}^T\mathbf{e}_x + v_y\,\mathbf{e}_{y'}^T\mathbf{e}_y = -v_x\sin\theta + v_y\cos\theta \tag{28}$$

or in matrix form

$$\begin{pmatrix} v_{x'} \\ v_{y'} \end{pmatrix} = \begin{pmatrix} \mathbf{e}_{x'}^T\mathbf{e}_x & \mathbf{e}_{x'}^T\mathbf{e}_y \\ \mathbf{e}_{y'}^T\mathbf{e}_x & \mathbf{e}_{y'}^T\mathbf{e}_y \end{pmatrix}\begin{pmatrix} v_x \\ v_y \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} v_x \\ v_y \end{pmatrix} \tag{29}$$

Note that the first row of the matrix is just e_x′^T as expressed in the (e_x, e_y) coordinate system, and similarly the second row is just e_y′^T as expressed in the (e_x, e_y) coordinate system. This should make intuitive sense: to find v_x′, we want to find e_x′^T v, which is obtained by applying the first row of the matrix to v as written in the (e_x, e_y) coordinate system; and similarly v_y′ is found as e_y′^T v, which is just the second row of the matrix applied to v, all carried out in the (e_x, e_y) coordinate system.

We can give a name to the above matrix:

$$\mathbf{R}_\theta = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$$

This is a commonly encountered matrix known as a rotation matrix. R_θ represents rotation of coordinates by an angle θ: it is the matrix that transforms coordinates to a new set of coordinate axes rotated by θ from the previous coordinate axes.

Problem 5: Verify the equation v = v_x′ e_x′ + v_y′ e_y′ in the (x, y) coordinate system. That is, substitute the (x, y) coordinate representations of v_x′ and v_y′ (from Eq. 29) and of e_x′ and e_y′, and verify that this equation is true. It's not quite as obvious as it was when the corresponding equation was expressed in the (x, y) coordinate system (Eq. 14), but it's still just as true.

Problem 6: Show that R_θ^T R_θ = R_θ R_θ^T = 1, that is, that R_θ^T = R_θ^{-1}. Note that this makes intuitive sense, because R_θ^T = R_{−θ}; this follows from cos θ = cos(−θ), −sin θ = sin(−θ).

To summarize, we've learned how a vector v transforms under a rigid change of basis, in which our coordinate axes are rotated counterclockwise by an angle θ. If v′ is the representation of v in the new coordinate system, then v′ = R_θ v. Furthermore, using the fact that R_θ^T R_θ = 1, we can also find the inverse transform: R_θ^T v′ = R_θ^T R_θ v = v, i.e. v = R_θ^T v′.
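The rotation matrix and its properties (Eq. 29, Problems 5 and 6) in NumPy; a sketch, with θ and v chosen arbitrarily:

    import numpy as np

    def R(theta):
        # Rotation-of-coordinates matrix of Eq. 29.
        return np.array([[ np.cos(theta), np.sin(theta)],
                         [-np.sin(theta), np.cos(theta)]])

    theta = 0.3
    v = np.array([1.0, 2.0])
    v_new = R(theta) @ v                                  # coordinates in rotated axes

    print(np.allclose(R(theta).T @ R(theta), np.eye(2)))  # R^T R = 1 (Problem 6)
    print(np.allclose(R(theta).T, R(-theta)))             # R^T = R_{-theta}
    print(np.allclose(R(theta).T @ v_new, v))             # inverse transform recovers v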

Now, we face a final question: how should matrices be transformed under this change of basis? For any matrix M, let M′ be its representation in the rotated coordinate system. To see how this should be transformed, note that Mv is a vector for any vector v; so we know that Mv transforms to R_θ(Mv). But the transformation of the vector Mv should be the same as the vector we get from operating on the transformed vector v′ with the transformed matrix M′; that is, (Mv)′ = M′v′. And we know v′ = R_θ v. So, we find that M′R_θ v = R_θ M v for every vector v. But this can only be true if M′R_θ and R_θ M are the same matrix (see Footnote 3): M′R_θ = R_θ M. Finally, multiplying on the right by R_θ^T, and using R_θ R_θ^T = 1, we find

$$\mathbf{M}' = \mathbf{R}_\theta \mathbf{M} \mathbf{R}_\theta^T \tag{30}$$

Intuitively, you can think of this as follows: to compute M′v′, which is just Mv in the new coordinate system, you first multiply v′ by R_θ^T, the inverse of R_θ. This takes v′ back to v, i.e. moves us back from the new coordinate system to the old coordinate system. You then apply M to v in the old coordinate system. Finally, you apply R_θ to the result, to transform the result back into the new coordinate system.

2.3 Rigid Change of Basis in Arbitrary Dimensions

As our toy models should make clear, in neural modeling we are generally dealing with vectors of large dimensions. The above results in two dimensions generalize nicely to N dimensions. Suppose we want to consider only changes of basis consisting of rigid rotations. How shall we define these? We define these as the class of transformations O that preserve all inner products: that is, the transformations O such that, for any vectors v and x, v·x = (Ov)·(Ox). Transformations satisfying this are called orthogonal transformations. Why are these rigid? The dot product of two vectors of unit length gives the cosine of the angle between them, in any dimensions; and the dot product of a vector with itself tells you its length squared. So, a dot-product-preserving transformation preserves the angles between all pairs of vectors and the lengths of all vectors. This coincides with what we mean by a rigid rotation: no stretching, no shrinking, no distortions.

We can rewrite the dot product: (Ov)·(Ox) = (Ov)^T(Ox) = v^T O^T O x. The requirement that this be equal to v^T x for any vectors v and x can only be satisfied if O^T O = 1. Thus, we define:

Definition 5: An orthogonal matrix is a matrix O satisfying O^T O = O O^T = 1.

Note that the rotation matrix R_θ in two dimensions is an example of an orthogonal matrix. Under an orthogonal transformation O, a column vector is transformed v → Ov; a row vector is transformed v^T → v^T O^T (as can be seen by considering (v^T)′ = (Ov)^T = v^T O^T); and a matrix is transformed M → OMO^T. The argument as to why M is mapped to OMO^T is just as we worked out for two dimensions; the argument goes through unchanged for arbitrary dimensions. Here are two other ways to see it:

The outer product v x^T is a matrix. Under an orthogonal change of basis, v → Ov, x → Ox, so the outer product is mapped v x^T → (Ov)(Ox)^T = O v x^T O^T = O(v x^T)O^T. Thus, the matrix v x^T transforms as indicated.

Footnote 3: Given that Av = Bv for all vectors v, suppose the i-th column of A is not identical to the i-th column of B. Then choose v to be the vector that is all 0's except a 1 in the i-th position. Then Av is just the i-th column of A, and similarly for Bv, so Av ≠ Bv for this vector. Contradiction. Therefore every column of A and B must be identical, i.e. A and B must be identical.

An expression of the form v^T M x is a scalar, so it is unchanged by a coordinate transformation. In the new coordinates, this is (Ov)^T M′(Ox), where M′ is the representation of M in the new coordinate system. Thus, (Ov)^T M′(Ox) = v^T M x, for any v, x, and M, and any orthogonal transform O. We can rewrite v^T M x by inserting the identity, 1 = O^T O, as follows: v^T M x = v^T 1 M 1 x = v^T O^T O M O^T O x = (Ov)^T (O M O^T)(Ox). The only way this can be equal to (Ov)^T M′(Ox) for any v and x is if M′ = O M O^T.

Exercise 6: Show that the property "M is the identity matrix" is basis-independent; that is, O 1 O^T = 1. Thus, the identity matrix looks the same in any basis.

Exercise 7: Note that the property "x is the zero vector" (x = 0, where x is the vector all of whose elements are zero) is basis-independent; that is, if x = 0, then Ox = 0 for any O. Similarly, "M is the zero matrix" (M = 0, where M is the matrix all of whose elements are zero) is basis-independent: if M = 0, then OMO^T = 0 for any O.

Problem 7:

1. Show that the property "P is the inverse of M" is basis-independent. That is, if P = M^{-1}, then OPO^T = (OMO^T)^{-1}, where O is orthogonal. Hint: to show that A = B^{-1}, just show that AB = 1.

2. Note, from Problem 2, that (OMO^T)^T = OM^T O^T. Use this result to prove two immediate corollaries: the property "P is the transpose of M" is invariant under orthogonal changes of basis, that is, OPO^T = (OMO^T)^T for P = M^T; and the property "M is symmetric" is invariant under orthogonal changes of basis, that is, if M = M^T, then (OMO^T)^T = OMO^T.

Problem 8: Write down arguments to show that (1) a dot-product-preserving transformation is one for which O^T O = 1, and (2) under this transformation, M → OMO^T, without looking at these notes. You can look at these notes as much as you want in preliminary tries, but on the last try you have to go from beginning to end without looking at the notes.
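Orthogonal transformations, Eq. 30's N-dimensional analog, and the basis-independence results of Problem 7 can all be spot-checked numerically. One standard way to generate a random orthogonal matrix (our choice, not the notes') is the QR decomposition of a random matrix:

    import numpy as np

    rng = np.random.default_rng(4)
    O, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # a random orthogonal matrix
    print(np.allclose(O.T @ O, np.eye(5)))            # O^T O = 1

    v, x = rng.standard_normal(5), rng.standard_normal(5)
    print(np.allclose(np.dot(O @ v, O @ x), np.dot(v, x)))  # dot products preserved

    M = rng.standard_normal((5, 5))
    M_new = O @ M @ O.T                               # M' = O M O^T
    print(np.allclose(M_new @ (O @ v), O @ (M @ v)))  # M'v' is Mv in the new basis

    # Problem 7: inversion and symmetry are basis-independent.
    print(np.allclose(O @ np.linalg.inv(M) @ O.T,
                      np.linalg.inv(O @ M @ O.T)))    # True
    S = M + M.T                                       # a symmetric matrix
    S_new = O @ S @ O.T
    print(np.allclose(S_new, S_new.T))                # True: still symmetric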

2.4 Complete Orthonormal Bases

Consider the standard basis vectors in N dimensions: e_0 = (1, 0, ..., 0)^T, e_1 = (0, 1, ..., 0)^T, ..., e_{N−1} = (0, 0, ..., 1)^T. These form an orthonormal basis. This means: (1) the e_i are mutually orthogonal: e_i^T e_j = 0 for i ≠ j; and (2) the e_i are each normalized to length 1: e_i^T e_i = 1 for i = 0, ..., N−1. We can summarize and generalize this by use of the Kronecker delta:

Definition 6: The Kronecker delta δ_ij is defined by δ_ij = 1 for i = j; δ_ij = 0 for i ≠ j.

Note that δ_ij describes the elements of the identity matrix: (1)_ij = δ_ij.

Problem 9: Show that, for any vector x, Σ_j δ_ij x_j = x_i. This ability of the Kronecker delta to collapse a sum to a single term is something that will be used over and over again. Note that this equation is just the equation 1x = x, in component form.

Definition 7: A set of N vectors e_i, i = 0, ..., N−1, forms an orthonormal basis for an N-dimensional vector space if e_i^T e_j = δ_ij.

Exercise 8: Show that in two dimensions, the vectors e_0′ = R_θ(1, 0)^T = (cos θ, −sin θ)^T and e_1′ = R_θ(0, 1)^T = (sin θ, cos θ)^T form an orthonormal basis, for any angle θ.

Exercise 9: Prove that an orthonormal basis remains an orthonormal basis after transformation by an orthogonal matrix. Your proof is likely to consist of writing down one sentence about what orthogonal transforms preserve.

Let's restate more generally what we learned in two dimensions: when we state that v = (v_0, v_1, ..., v_{N−1})^T in some orthonormal basis e_i, we mean that v has extent v_0 in the e_0 direction, etc. We can state this more formally by writing

$$\mathbf{v} = v_0\mathbf{e}_0 + \cdots + v_{N-1}\mathbf{e}_{N-1} = \sum_i v_i \mathbf{e}_i \tag{31}$$

This is an expansion of the vector v in the e_i basis: an expression of v as a weighted sum of the e_i. This is, in essence, what it means for the e_i to be a basis: any vector v can be written as a weighted sum of the e_i. The coefficients of the expansion, v_i, are the components of v in the basis of the e_i; we summarize all of this when we state that v = (v_0, v_1, ..., v_{N−1})^T in the e_i basis. The coefficients v_i are given by the dot product of v and e_i: v_i = e_i^T v.

Problem 10: Show that v_j = e_j^T v. Hint: multiply Eq. 31 from the left by e_j^T, and use the result of Problem 9.

In particular, we can expand the basis vectors in themselves:

$$\mathbf{e}_i = (\mathbf{e}_0^T\mathbf{e}_i)\,\mathbf{e}_0 + \cdots + (\mathbf{e}_{N-1}^T\mathbf{e}_i)\,\mathbf{e}_{N-1} = \sum_j (\mathbf{e}_j^T\mathbf{e}_i)\,\mathbf{e}_j = \sum_j \delta_{ij}\,\mathbf{e}_j = \mathbf{e}_i. \tag{32}$$

That is, the basis vectors, when expressed in their own basis, are always just written e_0 = (1, 0, ..., 0)^T, e_1 = (0, 1, ..., 0)^T, ..., e_{N−1} = (0, 0, ..., 1)^T. Thus, the equation v = Σ_i v_i e_i (Eq. 31), when written in the e_i basis, just represents the intuitive statement

$$\mathbf{v} = \begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_{N-1} \end{pmatrix} = v_0\begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} + v_1\begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix} + \cdots + v_{N-1}\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix} = \sum_i v_i\,\mathbf{e}_i \tag{33}$$

In summary, for any vector v and orthonormal basis e_i, we can write

$$\mathbf{v} = \sum_i \mathbf{e}_i(\mathbf{e}_i^T\mathbf{v}) = \sum_i v_i\,\mathbf{e}_i \tag{34}$$

In particular, any orthonormal basis vectors e_i, when expressed in their own basis, have the simple representation e_0 = (1, 0, ..., 0)^T, e_1 = (0, 1, ..., 0)^T, ..., e_{N−1} = (0, 0, ..., 1)^T.

We can rewrite v = Σ_i e_i e_i^T v as v = Σ_i (e_i e_i^T) v = (Σ_i e_i e_i^T) v. Since this is true for any vector v, this means that Σ_i e_i e_i^T = 1, the identity matrix. This is true for any orthonormal basis.

Problem 11: For any orthonormal basis e_i, i = 0, ..., N−1: show that Σ_i e_i e_i^T = 1, by working in the e_i basis, as follows. In that basis, show that e_i e_i^T is the matrix composed of all 0's, except for a 1 on the diagonal in the i-th row/column. Do the summation to show that Σ_i e_i e_i^T = 1.
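The completeness relation Σ_i e_i e_i^T = 1 (Problem 11) and the expansion of Eq. 34 can be checked for an arbitrary orthonormal basis; the sketch below (ours, with an arbitrary basis) builds one from the columns of a random orthogonal matrix:

    import numpy as np

    rng = np.random.default_rng(6)
    O, _ = np.linalg.qr(rng.standard_normal((4, 4)))
    basis = [O[:, i] for i in range(4)]  # columns of an orthogonal matrix form
                                         # an orthonormal basis

    completeness = sum(np.outer(e, e) for e in basis)  # sum_i e_i e_i^T
    print(np.allclose(completeness, np.eye(4)))        # True: the basis is complete

    v = rng.standard_normal(4)
    expansion = sum(np.dot(e, v) * e for e in basis)   # v = sum_i (e_i^T v) e_i (Eq. 34)
    print(np.allclose(expansion, v))                   # True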

Exercise 10: Make sure you understand the following. Although you have derived Σ_i e_i e_i^T = 1 in Problem 11 by working in a particular basis, the result is general: it is true no matter in which orthonormal basis you express the e_i. This follows immediately from Exercise 6. Or, you can see this explicitly, for example, by transforming the equation to another orthonormal basis by applying an orthogonal matrix O on the left and O^T on the right. This gives Σ_i O e_i e_i^T O^T = O 1 O^T, which becomes Σ_i (O e_i)(O e_i)^T = 1. Thus, the equation holds for the e_i as expressed in the new coordinate system.

We can restate the fact that Σ_i e_i e_i^T = 1 in words to, hopefully, make things more intuitive, as follows. The matrix e_i e_i^T, when applied to the vector v, finds the component of v along the e_i direction, and multiplies this by the vector e_i: (e_i e_i^T) v = e_i (e_i^T v) = v_i e_i. That is, e_i e_i^T finds the projection of v along the e_i axis. When the e_i form an orthonormal basis, these separate projections are independent: any v is just the sum of its projections onto each of the e_i: v = Σ_i e_i e_i^T v. Taking the projections of v onto each axis of a complete orthonormal basis, and adding up the results, just reconstitutes the vector v. For example, Fig. 1 illustrates that in two dimensions, adding the vectors v_x e_x and v_y e_y, the projections of v on the x and y axes, reconstitutes v. That is, the operation of taking the projections of v on each axis, and then summing the projections, is just the identity operation; so Σ_i e_i e_i^T = 1.

The property Σ_i e_i e_i^T = 1 represents a pithy summation of the fact that an orthonormal basis is complete:

Definition 8: A complete basis for a vector space is a set of vectors e_i such that any vector v can be uniquely expanded as a weighted sum of the e_i: v = Σ_i v_i e_i, where there is only one set of v_i for a given v that will satisfy this equation.

Fact 2: An orthonormal set of vectors e_i forms a complete basis if and only if Σ_i e_i e_i^T = 1.

Intuitively: if we have an incomplete basis (we are missing some directions), then Σ_i e_i e_i^T will give 0 when applied to vectors representing the missing directions, so it can't be the identity; saying Σ_i e_i e_i^T = 1 means that it reconstitutes any vector, so there are no missing directions. More formally, we can prove this as follows: if Σ_i e_i e_i^T = 1, then for any vector v, v = 1v = Σ_i e_i e_i^T v = Σ_i v_i e_i, where v_i = e_i^T v. So any vector v can be represented as a linear combination of the e_i, so they form a complete basis. Conversely, if the e_i form a complete basis, then for any vector v, v = Σ_i v_i e_i for some v_i. By the orthonormality of the e_i, taking the dot product with e_j gives e_j·v = Σ_i v_i e_j·e_i = Σ_i v_i δ_ji = v_j. So for any v, v = Σ_i e_i v_i = Σ_i e_i (e_i^T v) = (Σ_i e_i e_i^T) v. This can only be true for every vector v if Σ_i e_i e_i^T = 1.

Fact 3: In an N-dimensional vector space, a set of orthonormal vectors forms a complete basis if and only if the set contains N vectors. That is, any set of N orthonormal vectors constitutes a complete basis; you can't have more than N mutually orthonormal vectors in an N-dimensional space; and if you only have N−1 or fewer orthonormal vectors, you're missing a direction and so can't represent vectors pointing in that direction (or that have a component in that direction).

Finally, we've interpreted the components of a vector, v = (v_0, v_1, ..., v_{N−1})^T, as describing v only in some particular basis; the more general statements, given some underlying basis vectors e_i, are v = Σ_i v_i e_i, where v_i = e_i^T v. We now do the same for a matrix.

We write M = 1M1 = (Σ_i e_i e_i^T) M (Σ_j e_j e_j^T) = Σ_{i,j} e_i e_i^T M e_j e_j^T. But e_i^T M e_j is a scalar; call it M_ij. Since a scalar commutes with anything, we can pull this out front; thus, we have obtained

$$\mathbf{M} = \sum_{ij} M_{ij}\,\mathbf{e}_i\mathbf{e}_j^T, \quad \text{where } M_{ij} = \mathbf{e}_i^T\mathbf{M}\mathbf{e}_j \tag{35}$$

When working in the basis of the e_i vectors, e_i e_j^T is the matrix that is all 0's except for a 1 in the i-th row, j-th column (verify this!). Thus, in the basis of the e_i vectors,

$$\mathbf{M} = \begin{pmatrix} M_{00} & M_{01} & \cdots & M_{0,N-1} \\ M_{10} & M_{11} & \cdots & M_{1,N-1} \\ \vdots & \vdots & \ddots & \vdots \\ M_{N-1,0} & M_{N-1,1} & \cdots & M_{N-1,N-1} \end{pmatrix}$$

Thus, the M_ij = e_i^T M e_j are the elements of M in the e_i basis, just as the v_i = e_i^T v are the elements of v in the e_i basis. The more general description of M is given by Eq. 35.

2.5 Which Basis Does an Orthogonal Matrix Map To?

Suppose we change basis by some orthogonal matrix O: v → Ov, M → OMO^T. What basis are we mapping to? The answer is: in our current basis, O is the matrix each of whose rows is one of the new basis vectors, as expressed in our current basis. This should be intuitive: applying the first row of O to a vector v, we should get the coordinate of v along the first new basis vector e_0′; but this coordinate is e_0′^T v, hence the first row should be e_0′^T. We can write this as O = (e_0′ e_1′ ... e_{N−1}′)^T, where e_i′ means a column of our matrix corresponding to the new basis vector e_i′ as expressed in our current basis. To be precise, we mean the following: letting O_ij be the ij-th component of the matrix O, and letting (e_i′)_j be the j-th component of new basis vector e_i′ (all of these components expressed in our current basis), then O_ij = (e_i′)_j. It of course follows that each column of O^T is one of the new basis vectors, that is, O^T = (e_0′ e_1′ ... e_{N−1}′).

Problem 12: Use the results of Problem 3, or rederive from scratch, to show the following:

1. Show that the statement O O^T = 1 simply states the orthonormality of the new basis vectors: e_i′^T e_j′ = δ_ij.

2. Similarly, show that the statement O^T O = 1 simply expresses the completeness of the new basis vectors: Σ_i e_i′ e_i′^T = 1.

2.6 Recapitulation: The Transformation From One Orthogonal Basis To Another

We have seen that, for any orthonormal basis {e_i}, any vector v can be expressed v = Σ_i v_i e_i, where v_i = e_i^T v, and any matrix M can be expressed M = Σ_{ij} M_ij e_i e_j^T, where M_ij = e_i^T M e_j. Consider another orthonormal basis {f_i}. Using 1 = Σ_k f_k f_k^T, we can derive the rules for transforming coordinates from the {e_i} basis to the {f_i} basis, and in so doing recapitulate the results of this chapter, as follows:

Transformation of a vector: write v = Σ_i v_i e_i = Σ_i v_i 1 e_i = Σ_{ik} v_i f_k (f_k^T e_i) = Σ_k v_k′ f_k, where v_k′ = Σ_i (f_k^T e_i) v_i = Σ_i O_ki v_i, and the matrix O is defined by O_ki = f_k^T e_i. That is, the coordinates v_k′ of v in the {f_k} coordinate system are given, in terms of the coordinates v_i in the {e_i} coordinate system, by v′ = Ov.
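This transformation rule can be verified numerically: build two orthonormal bases, form the matrix with elements O_ki = f_k^T e_i, and check that it maps {e_i}-coordinates to {f_k}-coordinates. A sketch with random bases of our own construction:

    import numpy as np

    rng = np.random.default_rng(7)
    N = 4
    E, _ = np.linalg.qr(rng.standard_normal((N, N)))  # columns e_i: one orthonormal basis
    F, _ = np.linalg.qr(rng.standard_normal((N, N)))  # columns f_k: another

    O = F.T @ E                                       # O_ki = f_k^T e_i
    print(np.allclose(O.T @ O, np.eye(N)))            # O is itself orthogonal

    v = rng.standard_normal(N)
    v_e = E.T @ v  # coordinates of v in the {e_i} basis: (v_e)_i = e_i^T v
    v_f = F.T @ v  # coordinates of v in the {f_k} basis
    print(np.allclose(v_f, O @ v_e))                  # v' = O v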


More information

ensembles When working with density operators, we can use this connection to define a generalized Bloch vector: v x Tr x, v y Tr y

ensembles When working with density operators, we can use this connection to define a generalized Bloch vector: v x Tr x, v y Tr y Ph195a lecture notes, 1/3/01 Density operators for spin- 1 ensembles So far in our iscussion of spin- 1 systems, we have restricte our attention to the case of pure states an Hamiltonian evolution. Toay

More information

G j dq i + G j. q i. = a jt. and

G j dq i + G j. q i. = a jt. and Lagrange Multipliers Wenesay, 8 September 011 Sometimes it is convenient to use reunant coorinates, an to effect the variation of the action consistent with the constraints via the metho of Lagrange unetermine

More information

Lecture 10 Notes, Electromagnetic Theory II Dr. Christopher S. Baird, faculty.uml.edu/cbaird University of Massachusetts Lowell

Lecture 10 Notes, Electromagnetic Theory II Dr. Christopher S. Baird, faculty.uml.edu/cbaird University of Massachusetts Lowell Lecture 10 Notes, Electromagnetic Theory II Dr. Christopher S. Bair, faculty.uml.eu/cbair University of Massachusetts Lowell 1. Pre-Einstein Relativity - Einstein i not invent the concept of relativity,

More information

UNDERSTANDING INTEGRATION

UNDERSTANDING INTEGRATION UNDERSTANDING INTEGRATION Dear Reaer The concept of Integration, mathematically speaking, is the "Inverse" of the concept of result, the integration of, woul give us back the function f(). This, in a way,

More information

Differentiation ( , 9.5)

Differentiation ( , 9.5) Chapter 2 Differentiation (8.1 8.3, 9.5) 2.1 Rate of Change (8.2.1 5) Recall that the equation of a straight line can be written as y = mx + c, where m is the slope or graient of the line, an c is the

More information

Assignment 1. g i (x 1,..., x n ) dx i = 0. i=1

Assignment 1. g i (x 1,..., x n ) dx i = 0. i=1 Assignment 1 Golstein 1.4 The equations of motion for the rolling isk are special cases of general linear ifferential equations of constraint of the form g i (x 1,..., x n x i = 0. i=1 A constraint conition

More information

DIFFERENTIAL GEOMETRY, LECTURE 15, JULY 10

DIFFERENTIAL GEOMETRY, LECTURE 15, JULY 10 DIFFERENTIAL GEOMETRY, LECTURE 15, JULY 10 5. Levi-Civita connection From now on we are intereste in connections on the tangent bunle T X of a Riemanninam manifol (X, g). Out main result will be a construction

More information

Notes on Lie Groups, Lie algebras, and the Exponentiation Map Mitchell Faulk

Notes on Lie Groups, Lie algebras, and the Exponentiation Map Mitchell Faulk Notes on Lie Groups, Lie algebras, an the Exponentiation Map Mitchell Faulk 1. Preliminaries. In these notes, we concern ourselves with special objects calle matrix Lie groups an their corresponing Lie

More information

Lecture Introduction. 2 Examples of Measure Concentration. 3 The Johnson-Lindenstrauss Lemma. CS-621 Theory Gems November 28, 2012

Lecture Introduction. 2 Examples of Measure Concentration. 3 The Johnson-Lindenstrauss Lemma. CS-621 Theory Gems November 28, 2012 CS-6 Theory Gems November 8, 0 Lecture Lecturer: Alesaner Mąry Scribes: Alhussein Fawzi, Dorina Thanou Introuction Toay, we will briefly iscuss an important technique in probability theory measure concentration

More information

d dx But have you ever seen a derivation of these results? We ll prove the first result below. cos h 1

d dx But have you ever seen a derivation of these results? We ll prove the first result below. cos h 1 Lecture 5 Some ifferentiation rules Trigonometric functions (Relevant section from Stewart, Seventh Eition: Section 3.3) You all know that sin = cos cos = sin. () But have you ever seen a erivation of

More information

Quantum Mechanics in Three Dimensions

Quantum Mechanics in Three Dimensions Physics 342 Lecture 20 Quantum Mechanics in Three Dimensions Lecture 20 Physics 342 Quantum Mechanics I Monay, March 24th, 2008 We begin our spherical solutions with the simplest possible case zero potential.

More information

4. Important theorems in quantum mechanics

4. Important theorems in quantum mechanics TFY4215 Kjemisk fysikk og kvantemekanikk - Tillegg 4 1 TILLEGG 4 4. Important theorems in quantum mechanics Before attacking three-imensional potentials in the next chapter, we shall in chapter 4 of this

More information

A Sketch of Menshikov s Theorem

A Sketch of Menshikov s Theorem A Sketch of Menshikov s Theorem Thomas Bao March 14, 2010 Abstract Let Λ be an infinite, locally finite oriente multi-graph with C Λ finite an strongly connecte, an let p

More information

A Second Time Dimension, Hidden in Plain Sight

A Second Time Dimension, Hidden in Plain Sight A Secon Time Dimension, Hien in Plain Sight Brett A Collins. In this paper I postulate the existence of a secon time imension, making five imensions, three space imensions an two time imensions. I will

More information

Calculus and optimization

Calculus and optimization Calculus an optimization These notes essentially correspon to mathematical appenix 2 in the text. 1 Functions of a single variable Now that we have e ne functions we turn our attention to calculus. A function

More information

Chapter 2 Lagrangian Modeling

Chapter 2 Lagrangian Modeling Chapter 2 Lagrangian Moeling The basic laws of physics are use to moel every system whether it is electrical, mechanical, hyraulic, or any other energy omain. In mechanics, Newton s laws of motion provie

More information

Math 210 Midterm #1 Review

Math 210 Midterm #1 Review Math 20 Miterm # Review This ocument is intene to be a rough outline of what you are expecte to have learne an retaine from this course to be prepare for the first miterm. : Functions Definition: A function

More information

Lecture 6: Calculus. In Song Kim. September 7, 2011

Lecture 6: Calculus. In Song Kim. September 7, 2011 Lecture 6: Calculus In Song Kim September 7, 20 Introuction to Differential Calculus In our previous lecture we came up with several ways to analyze functions. We saw previously that the slope of a linear

More information

2.1 Derivatives and Rates of Change

2.1 Derivatives and Rates of Change 1a 1b 2.1 Derivatives an Rates of Change Tangent Lines Example. Consier y f x x 2 0 2 x-, 0 4 y-, f(x) axes, curve C Consier a smooth curve C. A line tangent to C at a point P both intersects C at P an

More information

Dot Products, Transposes, and Orthogonal Projections

Dot Products, Transposes, and Orthogonal Projections Dot Products, Transposes, and Orthogonal Projections David Jekel November 13, 2015 Properties of Dot Products Recall that the dot product or standard inner product on R n is given by x y = x 1 y 1 + +

More information

Mathematical Review Problems

Mathematical Review Problems Fall 6 Louis Scuiero Mathematical Review Problems I. Polynomial Equations an Graphs (Barrante--Chap. ). First egree equation an graph y f() x mx b where m is the slope of the line an b is the line's intercept

More information

Calculus in the AP Physics C Course The Derivative

Calculus in the AP Physics C Course The Derivative Limits an Derivatives Calculus in the AP Physics C Course The Derivative In physics, the ieas of the rate change of a quantity (along with the slope of a tangent line) an the area uner a curve are essential.

More information

a) Identify the kinematical constraint relating motions Y and X. The cable does NOT slip on the pulley. For items (c) & (e-f-g) use

a) Identify the kinematical constraint relating motions Y and X. The cable does NOT slip on the pulley. For items (c) & (e-f-g) use EAMPLE PROBLEM for MEEN 363 SPRING 6 Objectives: a) To erive EOMS of a DOF system b) To unerstan concept of static equilibrium c) To learn the correct usage of physical units (US system) ) To calculate

More information

Quantum Algorithms: Problem Set 1

Quantum Algorithms: Problem Set 1 Quantum Algorithms: Problem Set 1 1. The Bell basis is + = 1 p ( 00i + 11i) = 1 p ( 00i 11i) + = 1 p ( 01i + 10i) = 1 p ( 01i 10i). This is an orthonormal basis for the state space of two qubits. It is

More information

Review of Differentiation and Integration for Ordinary Differential Equations

Review of Differentiation and Integration for Ordinary Differential Equations Schreyer Fall 208 Review of Differentiation an Integration for Orinary Differential Equations In this course you will be expecte to be able to ifferentiate an integrate quickly an accurately. Many stuents

More information

SYSTEMS OF DIFFERENTIAL EQUATIONS, EULER S FORMULA. where L is some constant, usually called the Lipschitz constant. An example is

SYSTEMS OF DIFFERENTIAL EQUATIONS, EULER S FORMULA. where L is some constant, usually called the Lipschitz constant. An example is SYSTEMS OF DIFFERENTIAL EQUATIONS, EULER S FORMULA. Uniqueness for solutions of ifferential equations. We consier the system of ifferential equations given by x = v( x), () t with a given initial conition

More information

and from it produce the action integral whose variation we set to zero:

and from it produce the action integral whose variation we set to zero: Lagrange Multipliers Monay, 6 September 01 Sometimes it is convenient to use reunant coorinates, an to effect the variation of the action consistent with the constraints via the metho of Lagrange unetermine

More information

TMA 4195 Matematisk modellering Exam Tuesday December 16, :00 13:00 Problems and solution with additional comments

TMA 4195 Matematisk modellering Exam Tuesday December 16, :00 13:00 Problems and solution with additional comments Problem F U L W D g m 3 2 s 2 0 0 0 0 2 kg 0 0 0 0 0 0 Table : Dimension matrix TMA 495 Matematisk moellering Exam Tuesay December 6, 2008 09:00 3:00 Problems an solution with aitional comments The necessary

More information

PHYS 414 Problem Set 2: Turtles all the way down

PHYS 414 Problem Set 2: Turtles all the way down PHYS 414 Problem Set 2: Turtles all the way own This problem set explores the common structure of ynamical theories in statistical physics as you pass from one length an time scale to another. Brownian

More information

Many problems in physics, engineering, and chemistry fall in a general class of equations of the form. d dx. d dx

Many problems in physics, engineering, and chemistry fall in a general class of equations of the form. d dx. d dx Math 53 Notes on turm-liouville equations Many problems in physics, engineering, an chemistry fall in a general class of equations of the form w(x)p(x) u ] + (q(x) λ) u = w(x) on an interval a, b], plus

More information

Separation of Variables

Separation of Variables Physics 342 Lecture 1 Separation of Variables Lecture 1 Physics 342 Quantum Mechanics I Monay, January 25th, 2010 There are three basic mathematical tools we nee, an then we can begin working on the physical

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

1 dx. where is a large constant, i.e., 1, (7.6) and Px is of the order of unity. Indeed, if px is given by (7.5), the inequality (7.

1 dx. where is a large constant, i.e., 1, (7.6) and Px is of the order of unity. Indeed, if px is given by (7.5), the inequality (7. Lectures Nine an Ten The WKB Approximation The WKB metho is a powerful tool to obtain solutions for many physical problems It is generally applicable to problems of wave propagation in which the frequency

More information

Proof by Mathematical Induction.

Proof by Mathematical Induction. Proof by Mathematical Inuction. Mathematicians have very peculiar characteristics. They like proving things or mathematical statements. Two of the most important techniques of mathematical proof are proof

More information

Short Intro to Coordinate Transformation

Short Intro to Coordinate Transformation Short Intro to Coorinate Transformation 1 A Vector A vector can basically be seen as an arrow in space pointing in a specific irection with a specific length. The following problem arises: How o we represent

More information

Tutorial Test 5 2D welding robot

Tutorial Test 5 2D welding robot Tutorial Test 5 D weling robot Phys 70: Planar rigi boy ynamics The problem statement is appene at the en of the reference solution. June 19, 015 Begin: 10:00 am En: 11:30 am Duration: 90 min Solution.

More information

Entanglement is not very useful for estimating multiple phases

Entanglement is not very useful for estimating multiple phases PHYSICAL REVIEW A 70, 032310 (2004) Entanglement is not very useful for estimating multiple phases Manuel A. Ballester* Department of Mathematics, University of Utrecht, Box 80010, 3508 TA Utrecht, The

More information

Integration Review. May 11, 2013

Integration Review. May 11, 2013 Integration Review May 11, 2013 Goals: Review the funamental theorem of calculus. Review u-substitution. Review integration by parts. Do lots of integration eamples. 1 Funamental Theorem of Calculus In

More information

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2 Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch

More information

Calculus of Variations

Calculus of Variations Calculus of Variations Lagrangian formalism is the main tool of theoretical classical mechanics. Calculus of Variations is a part of Mathematics which Lagrangian formalism is base on. In this section,

More information

Survey Sampling. 1 Design-based Inference. Kosuke Imai Department of Politics, Princeton University. February 19, 2013

Survey Sampling. 1 Design-based Inference. Kosuke Imai Department of Politics, Princeton University. February 19, 2013 Survey Sampling Kosuke Imai Department of Politics, Princeton University February 19, 2013 Survey sampling is one of the most commonly use ata collection methos for social scientists. We begin by escribing

More information

Math 115 Section 018 Course Note

Math 115 Section 018 Course Note Course Note 1 General Functions Definition 1.1. A function is a rule that takes certain numbers as inputs an assigns to each a efinite output number. The set of all input numbers is calle the omain of

More information

Introduction to Markov Processes

Introduction to Markov Processes Introuction to Markov Processes Connexions moule m44014 Zzis law Gustav) Meglicki, Jr Office of the VP for Information Technology Iniana University RCS: Section-2.tex,v 1.24 2012/12/21 18:03:08 gustav

More information

. Using a multinomial model gives us the following equation for P d. , with respect to same length term sequences.

. Using a multinomial model gives us the following equation for P d. , with respect to same length term sequences. S 63 Lecture 8 2/2/26 Lecturer Lillian Lee Scribes Peter Babinski, Davi Lin Basic Language Moeling Approach I. Special ase of LM-base Approach a. Recap of Formulas an Terms b. Fixing θ? c. About that Multinomial

More information

θ x = f ( x,t) could be written as

θ x = f ( x,t) could be written as 9. Higher orer PDEs as systems of first-orer PDEs. Hyperbolic systems. For PDEs, as for ODEs, we may reuce the orer by efining new epenent variables. For example, in the case of the wave equation, (1)

More information

Math 1271 Solutions for Fall 2005 Final Exam

Math 1271 Solutions for Fall 2005 Final Exam Math 7 Solutions for Fall 5 Final Eam ) Since the equation + y = e y cannot be rearrange algebraically in orer to write y as an eplicit function of, we must instea ifferentiate this relation implicitly

More information

Make graph of g by adding c to the y-values. on the graph of f by c. multiplying the y-values. even-degree polynomial. graph goes up on both sides

Make graph of g by adding c to the y-values. on the graph of f by c. multiplying the y-values. even-degree polynomial. graph goes up on both sides Reference 1: Transformations of Graphs an En Behavior of Polynomial Graphs Transformations of graphs aitive constant constant on the outsie g(x) = + c Make graph of g by aing c to the y-values on the graph

More information

Physics 5153 Classical Mechanics. The Virial Theorem and The Poisson Bracket-1

Physics 5153 Classical Mechanics. The Virial Theorem and The Poisson Bracket-1 Physics 5153 Classical Mechanics The Virial Theorem an The Poisson Bracket 1 Introuction In this lecture we will consier two applications of the Hamiltonian. The first, the Virial Theorem, applies to systems

More information

Math 342 Partial Differential Equations «Viktor Grigoryan

Math 342 Partial Differential Equations «Viktor Grigoryan Math 342 Partial Differential Equations «Viktor Grigoryan 6 Wave equation: solution In this lecture we will solve the wave equation on the entire real line x R. This correspons to a string of infinite

More information

Cable holds system BUT at t=0 it breaks!! θ=20. Copyright Luis San Andrés (2010) 1

Cable holds system BUT at t=0 it breaks!! θ=20. Copyright Luis San Andrés (2010) 1 EAMPLE # for MEEN 363 SPRING 6 Objectives: a) To erive EOMS of a DOF system b) To unerstan concept of static equilibrium c) To learn the correct usage of physical units (US system) ) To calculate natural

More information

II. First variation of functionals

II. First variation of functionals II. First variation of functionals The erivative of a function being zero is a necessary conition for the etremum of that function in orinary calculus. Let us now tackle the question of the equivalent

More information

Chapter 2. Exponential and Log functions. Contents

Chapter 2. Exponential and Log functions. Contents Chapter. Exponential an Log functions This material is in Chapter 6 of Anton Calculus. The basic iea here is mainly to a to the list of functions we know about (for calculus) an the ones we will stu all

More information

Chapter 2. Linear Algebra. rather simple and learning them will eventually allow us to explain the strange results of

Chapter 2. Linear Algebra. rather simple and learning them will eventually allow us to explain the strange results of Chapter 2 Linear Algebra In this chapter, we study the formal structure that provides the background for quantum mechanics. The basic ideas of the mathematical machinery, linear algebra, are rather simple

More information

Lecture XII. where Φ is called the potential function. Let us introduce spherical coordinates defined through the relations

Lecture XII. where Φ is called the potential function. Let us introduce spherical coordinates defined through the relations Lecture XII Abstract We introuce the Laplace equation in spherical coorinates an apply the metho of separation of variables to solve it. This will generate three linear orinary secon orer ifferential equations:

More information

Diophantine Approximations: Examining the Farey Process and its Method on Producing Best Approximations

Diophantine Approximations: Examining the Farey Process and its Method on Producing Best Approximations Diophantine Approximations: Examining the Farey Process an its Metho on Proucing Best Approximations Kelly Bowen Introuction When a person hears the phrase irrational number, one oes not think of anything

More information

Lecture 2 Lagrangian formulation of classical mechanics Mechanics

Lecture 2 Lagrangian formulation of classical mechanics Mechanics Lecture Lagrangian formulation of classical mechanics 70.00 Mechanics Principle of stationary action MATH-GA To specify a motion uniquely in classical mechanics, it suffices to give, at some time t 0,

More information

Notation, Matrices, and Matrix Mathematics

Notation, Matrices, and Matrix Mathematics Geographic Information Analysis, Second Edition. David O Sullivan and David J. Unwin. 010 John Wiley & Sons, Inc. Published 010 by John Wiley & Sons, Inc. Appendix A Notation, Matrices, and Matrix Mathematics

More information

Experiment I Electric Force

Experiment I Electric Force Experiment I Electric Force Twenty-five hunre years ago, the Greek philosopher Thales foun that amber, the harene sap from a tree, attracte light objects when rubbe. Only twenty-four hunre years later,

More information

MATH 13200/58: Trigonometry

MATH 13200/58: Trigonometry MATH 00/58: Trigonometry Minh-Tam Trinh For the trigonometry unit, we will cover the equivalent of 0.7,.4,.4 in Purcell Rigon Varberg.. Right Triangles Trigonometry is the stuy of triangles in the plane

More information

MITOCW ocw-18_02-f07-lec02_220k

MITOCW ocw-18_02-f07-lec02_220k MITOCW ocw-18_02-f07-lec02_220k The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free.

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

Section 2.7 Derivatives of powers of functions

Section 2.7 Derivatives of powers of functions Section 2.7 Derivatives of powers of functions (3/19/08) Overview: In this section we iscuss the Chain Rule formula for the erivatives of composite functions that are forme by taking powers of other functions.

More information

Implicit Differentiation

Implicit Differentiation Implicit Differentiation Implicit Differentiation Using the Chain Rule In the previous section we focuse on the erivatives of composites an saw that THEOREM 20 (Chain Rule) Suppose that u = g(x) is ifferentiable

More information

0.1 Differentiation Rules

0.1 Differentiation Rules 0.1 Differentiation Rules From our previous work we ve seen tat it can be quite a task to calculate te erivative of an arbitrary function. Just working wit a secon-orer polynomial tings get pretty complicate

More information

Least-Squares Regression on Sparse Spaces

Least-Squares Regression on Sparse Spaces Least-Squares Regression on Sparse Spaces Yuri Grinberg, Mahi Milani Far, Joelle Pineau School of Computer Science McGill University Montreal, Canaa {ygrinb,mmilan1,jpineau}@cs.mcgill.ca 1 Introuction

More information

Designing Information Devices and Systems I Spring 2018 Lecture Notes Note 16

Designing Information Devices and Systems I Spring 2018 Lecture Notes Note 16 EECS 16A Designing Information Devices an Systems I Spring 218 Lecture Notes Note 16 16.1 Touchscreen Revisite We ve seen how a resistive touchscreen works by using the concept of voltage iviers. Essentially,

More information

4.2 First Differentiation Rules; Leibniz Notation

4.2 First Differentiation Rules; Leibniz Notation .. FIRST DIFFERENTIATION RULES; LEIBNIZ NOTATION 307. First Differentiation Rules; Leibniz Notation In this section we erive rules which let us quickly compute the erivative function f (x) for any polynomial

More information

CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA

CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA Andrew ID: ljelenak August 25, 2018 This assignment reviews basic mathematical tools you will use throughout

More information

DIFFERENTIAL GEOMETRY OF CURVES AND SURFACES 7. Geodesics and the Theorem of Gauss-Bonnet

DIFFERENTIAL GEOMETRY OF CURVES AND SURFACES 7. Geodesics and the Theorem of Gauss-Bonnet A P Q O B DIFFERENTIAL GEOMETRY OF CURVES AND SURFACES 7. Geoesics an the Theorem of Gauss-Bonnet 7.. Geoesics on a Surface. The goal of this section is to give an answer to the following question. Question.

More information