ECE557 Systems Control


ECE557 Systems Control

Bruce Francis

Course notes, Version 20, September 2008


Preface

This is the second Engineering Science course on control. It assumes ECE356 as a prerequisite. If you didn't take ECE356, you must go through Chapters 2 and 3 of the ECE356 course notes.

This course is on the state-space approach to control system analysis and design. By contrast, ECE356 treated frequency domain methods. Generally speaking, the state-space methods scale better to higher order, multi-input/output systems. The frequency domain methods use complex function theory; the state-space approach uses linear algebra: eigenvalues, subspaces, and all that. The emphasis in the lectures will be on concepts, examples, and use of the theory.

There are several computer applications for solving numerical problems in this course. The most widely used is MATLAB, but it's expensive. I like Scilab, which is free. Others are Mathematica (expensive) and Octave (free).


Contents

1 Introduction
  1.1 State Models
  1.2 Examples
    1.2.1 Magnetic levitation
    1.2.2 Vehicles
  1.3 Problems

2 The Equation ẋ = Ax
  2.1 Brief Review of Some Linear Algebra
  2.2 Eigenvalues and Eigenvectors
  2.3 The Jordan Form
  2.4 The Transition Matrix
  2.5 Stability
  2.6 Problems

3 More Linear Algebra
  3.1 Subspaces
  3.2 Linear Transformations
  3.3 Matrix Equations
  3.4 Invariant Subspaces
  3.5 Problems

4 Controllability
  4.1 Reachable States
  4.2 Properties of Controllability
  4.3 The PBH (Popov-Belevitch-Hautus) Test
  4.4 Controllability from a Single Input
  4.5 Pole Assignment
  4.6 Stabilizability
  4.7 Problems

5 Observability
  5.1 State Reconstruction
  5.2 The Kalman Decomposition
  5.3 Detectability
  5.4 Observers
  5.5 Problems

6 Feedback Loops
  6.1 BIBO Stability
  6.2 Feedback Stability
  6.3 Observer-Based Controllers
  6.4 Problems

7 Tracking and Regulation
  7.1 Review of Tracking Steps
  7.2 Distillation Columns
  7.3 Problem Setup
  7.4 Tools for the Solution
  7.5 Regulator Problem Solution
  7.6 Unobservability
  7.7 More Examples
  7.8 Problems

8 Optimal Control
  8.1 Minimizing Quadratic Functions with Equality Constraints
  8.2 The LQR Problem and Solution
  8.3 Hand Waving
  8.4 Sketch of Proof that F is Optimal
  8.5 Problems

Chapter 1

Introduction

Control is that beautiful part of system science/engineering where we get to design part of the system, the controller, so that the system performs as intended. Control is a very rich subject, ranging from pure theory (Can a robot with just vision sensors be programmed to ride a unicycle?) down to the writing of real-time code. This course is mathematical, but that doesn't imply it is only theoretical and isn't applicable to real problems. You are assumed to know Chapters 2 and 3 of the ECE356 course notes. This chapter gives a brief review of only part of that material and is not sufficient by itself.

First, some notation. Usually, a vector is written as a column vector, but sometimes, to save space, it is written as an n-tuple:

  x = [x₁; ... ; xₙ]  or  x = (x₁, ..., xₙ).

1.1 State Models

Systems that are linear, time-invariant, causal, finite-dimensional, and having proper transfer functions have state models,

  ẋ = Ax + Bu,  y = Cx + Du.

Here u, x, y are vector-valued functions of t, and A, B, C, D are real constant matrices.

Deriving State Models

How to get a state model depends on what we have to start with.

Example: nth order ODE. Suppose we have the system

  2ÿ − ẏ + 3y = u.

The natural state vector is

  x = [x₁; x₂] := [y; ẏ].

Then

  ẋ₁ = x₂
  ẋ₂ = −(3/2)x₁ + (1/2)x₂ + (1/2)u,

so

  A = [0 1; −3/2 1/2],  B = [0; 1/2],  C = [1 0],  D = 0.

This technique extends to

  aₙy⁽ⁿ⁾ + ··· + a₁ẏ + a₀y = u.

What about derivatives on the right-hand side:

  2ÿ − ẏ + 3y = u̇ − 2u?

The transfer function is

  Y(s) = (s − 2)/(2s² − s + 3) U(s).

Introduce an intermediate signal v:

  Y(s) = (s − 2) · [1/(2s² − s + 3)] U(s) =: (s − 2) V(s).

Then

  2v̈ − v̇ + 3v = u
  y = v̇ − 2v.

Taking x = (v, v̇) we get

  A = [0 1; −3/2 1/2],  B = [0; 1/2],  C = [−2 1],  D = 0.

This technique extends to

  y⁽ⁿ⁾ + aₙ₋₁y⁽ⁿ⁻¹⁾ + ··· + a₁ẏ + a₀y = bₙ₋₁u⁽ⁿ⁻¹⁾ + ··· + b₀u.

The transfer function is

  G(s) = (bₙ₋₁sⁿ⁻¹ + bₙ₋₂sⁿ⁻² + ··· + b₀)/(sⁿ + aₙ₋₁sⁿ⁻¹ + ··· + a₀).

Then

  G(s) = C(sI − A)⁻¹B,
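The realization just derived is easy to sanity-check numerically. The notes use Scilab/MATLAB for such computations; here is a minimal sketch in Python/NumPy instead, comparing C(sI − A)⁻¹B against the transfer function (s − 2)/(2s² − s + 3) at an arbitrary test point:

```python
import numpy as np

# State model for 2*y'' - y' + 3*y = u' - 2*u derived above
A = np.array([[0.0, 1.0], [-1.5, 0.5]])
B = np.array([[0.0], [0.5]])
C = np.array([[-2.0, 1.0]])

def G_state(s):
    # C (sI - A)^{-1} B
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0]

def G_poly(s):
    return (s - 2) / (2 * s**2 - s + 3)

# The two expressions agree at any test frequency
s0 = 1.0 + 2.0j
print(abs(G_state(s0) - G_poly(s0)) < 1e-12)
```

Any point s that is not an eigenvalue of A works as a test point.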

where

  A = [0 1 0 ··· 0;
       0 0 1 ··· 0;
       ⋮
       0 0 0 ··· 1;
       −a₀ −a₁ ··· −aₙ₋₂ −aₙ₋₁],
  B = [0; ⋮; 0; 1],  C = [b₀ b₁ ··· bₙ₋₁].

This state model is called the controllable (canonical) realization of G(s).

For the case n = m (numerator and denominator of equal degree), you divide denominator into numerator and thereby factor G(s) into the sum of a constant and a strictly proper transfer function. This gives D ≠ 0, namely, the constant. If m > n, there is no state model.

What if we have two inputs u₁, u₂, two outputs y₁, y₂, and coupled equations such as

  ÿ₁ − ẏ₁ + ẏ₂ + 3y₁ = u₁ + u₂
  2 d³y₂/dt³ − ẏ₁ + ẏ₂ + 4y₁ = u₂?

The natural state is x = (y₁, ẏ₁, y₂, ẏ₂, ÿ₂). Please complete this example.

Let's study the transfer matrix for the state model ẋ = Ax + Bu, y = Cx + Du. Take Laplace transforms with zero initial conditions:

  sX(s) = AX(s) + BU(s),  Y(s) = CX(s) + DU(s).

Eliminate X(s):

  (sI − A)X(s) = BU(s)
  X(s) = (sI − A)⁻¹BU(s)
  Y(s) = [C(sI − A)⁻¹B + D] U(s),

where C(sI − A)⁻¹B + D is the transfer matrix. This leads to the realization problem: Given G(s), find A, B, C, D such that G(s) = C(sI − A)⁻¹B + D. A solution exists iff G(s) is rational and proper (every element of G(s) has deg denom ≥ deg num). The solution is never unique. There are general procedures for getting a state model, but we choose not to cover this topic in the interest of moving on to other topics.
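The controllable canonical realization is mechanical enough to code directly. Here is a NumPy sketch (the helper name controllable_realization is ours, chosen for illustration) that builds (A, B, C) from the coefficients of a strictly proper G(s) and verifies the result at a test frequency:

```python
import numpy as np

def controllable_realization(num, den):
    """Controllable canonical realization of a strictly proper
    G(s) = num(s)/den(s): den monic of degree n, deg num < n.
    Coefficient lists run from highest to lowest power."""
    n = len(den) - 1
    a = np.array(den[1:][::-1], dtype=float)        # a_0 ... a_{n-1}
    b = np.zeros(n)
    b[:len(num)] = np.array(num[::-1], dtype=float)  # b_0 ... b_{n-1}
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)    # ones on the superdiagonal
    A[-1, :] = -a                 # last row: -a_0 ... -a_{n-1}
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = b.reshape(1, n)
    return A, B, C

# Example (made up for illustration): G(s) = (s - 2)/(s^3 + 2s^2 + 3s + 4)
A, B, C = controllable_realization([1, -2], [1, 2, 3, 4])
s = 0.7 + 1.1j
G1 = (C @ np.linalg.inv(s * np.eye(3) - A) @ B)[0, 0]
G2 = (s - 2) / (s**3 + 2*s**2 + 3*s + 4)
print(abs(G1 - G2) < 1e-9)
```

Scilab's tf2ss or MATLAB's equivalent performs the same construction (possibly in a different but similar state coordinate convention).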

1.2 Examples

Here we look at two examples that we'll use repeatedly for illustration.

1.2.1 Magnetic levitation

[Figure: an electromagnet circuit with applied voltage u, resistance R, inductance L, and current i, suspending an iron ball at distance y below the magnet.]

This example was used frequently in ECE356. Imagine an electromagnet suspending an iron ball. Let the input be the voltage u and the output the position y of the ball below the magnet; let i denote the current in the circuit. Then

  L di/dt + Ri = u.

Also, it can be derived that the magnetic force on the ball has the form Ki²/y², K a constant. Thus

  Mÿ = Mg − Ki²/y².

Realistic numerical values are M = 0.1 kg, R = 15 ohms, L = 0.5 H, K = 0.0001 Nm²/A², g = 9.8 m/s². Substituting in these numbers gives the equations

  0.5 di/dt + 15i = u
  0.1 d²y/dt² = 0.98 − 0.0001 i²/y².

Define state variables x = (x₁, x₂, x₃) = (i, y, ẏ). Then the nonlinear state model is ẋ = f(x, u), where

  f(x, u) = (−30x₁ + 2u, x₃, 9.8 − 0.001 x₁²/x₂²).

Suppose we want to stabilize the ball at y = 1 cm, or 0.01 m. We need a linear model valid in the neighbourhood of that value. Solve for the equilibrium point (x̄, ū) where x̄₂ = 0.01:

  −30x̄₁ + 2ū = 0,  x̄₃ = 0,  9.8 − 0.001 x̄₁²/0.01² = 0.

Thus

  x̄ = (0.99, 0.01, 0),  ū = 14.85.
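This equilibrium, and the linearization about it that comes next, can be reproduced numerically. A NumPy sketch, assuming the parameter values above (which give the −30x₁ + 2u circuit dynamics) and a central-difference Jacobian:

```python
import numpy as np

# Nonlinear maglev model f(x, u) from above, x = (i, y, ydot)
def f(x, u):
    i, y, ydot = x
    return np.array([-30*i + 2*u, ydot, 9.8 - 0.001 * i**2 / y**2])

# Equilibrium at y = 0.01 m: 9.8 = 10*i^2, u = 15*i
xbar = np.array([np.sqrt(0.98), 0.01, 0.0])   # i ~ 0.99 A
ubar = 15 * xbar[0]                            # ~ 14.85 V

# Numerical Jacobian of f with respect to x at the equilibrium
eps = 1e-7
cols = []
for k in range(3):
    dx = np.zeros(3); dx[k] = eps
    cols.append((f(xbar + dx, ubar) - f(xbar - dx, ubar)) / (2 * eps))
A = np.column_stack(cols)

# Eigenvalues: -30 (circuit) and roughly +/- sqrt(1960) ~ +/- 44.3 (magnetics)
print(np.sort(np.linalg.eigvals(A).real))
```

The unstable eigenvalue near +44 is what the feedback controller must deal with.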

The linearized model is

  δẋ = A δx + B δu,  δy = C δx,

where A equals the Jacobian of f with respect to x, evaluated at (x̄, ū), and B equals the same except with respect to u:

  A = [−30 0 0; 0 0 1; −2Kx̄₁/(Mx̄₂²) 2Kx̄₁²/(Mx̄₂³) 0] = [−30 0 0; 0 0 1; −19.8 1960 0],

  B = [2; 0; 0],  C = [0 1 0].

The eigenvalues of A are −30 and ±44.3, the units being s⁻¹. The corresponding time constants are 1/30 = 0.033 s and 1/44.3 = 0.023 s. The first is the time constant of the electric circuit; the second, the time constant of the magnetics.

1.2.2 Vehicles

The second example is a vehicle control problem motivated by research on intelligent highway systems. We begin with the simplest vehicle, a cart with a motor driving one wheel:

[Figure: a motorized cart with input voltage u and cart position y.]

The input is the voltage u to the motor, the output the cart position y. We want the model from u to y. Free body diagrams:

[Figure: free body diagrams of the motor (shaft angle θ, torque τ), the wheel (torque τ, radius r, forces f), and the cart (force f, position y).]

The cart. A force f acts via the wheel through the axle:

  Mÿ = f.  (1.1)

The wheel. An equal and opposite force f at the axle; a horizontal force where the wheel contacts the floor. If the inertia of the wheel is negligible, the two horizontal forces are equal. Finally, a torque τ from the motor. Equating moments about the axle gives τ = fr, where r is the radius of the wheel. Thus

  f = τ/r.  (1.2)

The motor. The electric circuit equation is

  L di/dt + Ri = u − v_b,  (1.3)

where v_b is the back emf. The torque produced by the motor:

  τ_m = Ki.  (1.4)

Newton's second law for the motor shaft:

  Jθ̈ = τ_m − τ.  (1.5)

The back emf is

  v_b = K_b θ̇.  (1.6)

Finally, the relationship between shaft angle and cart position:

  y = rθ.  (1.7)

Combining. The block diagram is then

[Block diagram: u, minus the back-emf feedback K_b ẏ/r, drives 1/(Ls + R) to give i; the gain K gives τ_m; the shaft dynamics 1/(Js) and cart dynamics 1/(Ms), coupled through r and f, produce ẏ; an integrator 1/s gives y.]

The inner loop can be reduced, giving

[Block diagram: u, minus K_b ẏ/r, drives K/(Ls + R) to give i, then τ_m, then the gain α and an integrator 1/s to give ẏ, and a final integrator 1/s to give y.]

  α = r/(r²M + J).

Finally, we have the third-order system

[Block diagram: u drives β/(s² + (R/L)s + γ) to give ẏ, then 1/s to give y.]

  β = αK/L,  γ = αKK_b/(rL).

Although this vehicle is very easy to control, for more complex vehicles (Jeeps on open terrain) it's customary to design a loop to cancel the dynamics, leaving a simpler kinematic vehicle, like this:

[Block diagram: ẏ_ref enters a feedback loop around β/(s² + (R/L)s + γ), whose output ẏ is integrated to give y.]

If the loop is well designed, that is, ẏ_ref ≈ ẏ, we can regard the system as merely a kinematic point, with input, velocity, say v, and output, position, y.

Platoons

Now suppose there are several of these motorized carts. We want them to move in a straight line like this: A designated leader should go at constant speed under cruise control; the second should follow at a fixed distance d; the third should follow the second at the distance d; and so on. We'll return to this problem later.

1.3 Problems

The first few problems study the concept of linearity of a system. Recall that a system F with input u and output y is linear if it satisfies two conditions: superposition, i.e., F(u₁ + u₂) = F(u₁) + F(u₂), and homogeneity, F(cu) = cF(u), c a real constant. To prove it's not linear, you have to give a counterexample for one of these two conditions.

1. Consider a quantizer Q with input u(t), which can take on a continuum of values, and output y(t), which can take on only countably many values, say, {b_k}, k ∈ Z. More specifically, suppose R is partitioned into intervals I_k, k ∈ Z, and if u(t) ∈ I_k, then y(t) = b_k. Prove that Q is not linear.

2. Let S denote the ideal sampler of sampling period T; it maps a continuous-time signal u(t) into the discrete-time signal u[k] = u(kT). Let H denote the synchronized zero-order hold; it maps a discrete-time signal y[k] into y(t), where y(t) = y[k], kT ≤ t < (k + 1)T. Then HS maps u(t) to y(t) where y(t) = u(kT), kT ≤ t < (k + 1)T. Is HS linear? If so, prove it; if not, give a counterexample.

3. Consider the amplitude modulation system with input u(t) and output y(t) = u(t) cos(t). Is it linear?

4. At time t = 0 a force v(t) is applied to a mass M whose position is y(t); the mass is initially at rest. Thus Mÿ = v, where y(0) = ẏ(0) = 0. The force is the output of a saturating actuator with input u(t) in this way: v = u if |u| ≤ 1, v = 1 if u > 1, v = −1 if u < −1. Is the system from u to y linear?

5. Give an example of a system that is linear, infinite-dimensional, causal, and time-varying.

6. Express the superposition property of a system F in terms of a block diagram. Express the homogeneity property in like manner.

7. Both by hand and by Scilab/MATLAB find a state model for the system with transfer function

  G(s) = (s − 3)/(2s³ + s² − 2s).

8. Consider the system model ẋ = Ax + Bu, y = Cx with

  A = … ,  B = … ,  C = … .

Both by hand and by Scilab/MATLAB find the transfer function from u to y.

9. Kirchhoff's laws for a circuit lead to algebraic constraints (e.g., currents into a node sum to zero). Consider a system with inputs u₁, u₂ and outputs y₁, y₂ governed by the equations

  ÿ₁ + 2ẏ₁ + y₂ = u₁
  y₁ + y₂ = u₂.

Find the transfer matrix from u = (u₁, u₂) to y = (y₁, y₂). Does this system have a state model? If so, find one.

10. Consider the system with input u(t) and output y(t) where

  4ÿ + ẏ² − y = (3t² + 8)u.

The nominal input and output are u₀(t) = 1, y₀(t) = t² (you can check that they satisfy the differential equation). Derive a nonlinear state model of the form ẋ = f(x, u, t). Linearize this about the nominal state and input, ending up with a linear state equation.

11. An unforced pendulum is modeled by the equation

  Lθ̈ + g sin θ = 0,

where L = length, g = gravity constant, θ = angle of pendulum.

(a) Put this model in the form of a state equation.
(b) Find all equilibrium points.
(c) Find the linearized model for each equilibrium point.

12. A system has three inputs u₁, u₂, and u₃ and three outputs y₁, y₂, and y₃. The equations are

  y⃛₁ + a₁ÿ₁ + a₂(ẏ₁ + ẏ₂) + a₃(y₁ − y₃) = u₁
  ÿ₂ + a₄(ẏ₂ − ẏ₁ + 2ẏ₃) + a₅(y₂ − y₁) = u₂
  ẏ₃ + a₆(y₃ − y₁) = u₃.

Find a state-space model for this system.

13. Find two different state models for the system

  ÿ + aẏ + by = u̇ + cu.


Chapter 2

The Equation ẋ = Ax

The object of study in this chapter is the unforced state equation ẋ = Ax. Here A is an n × n real matrix and x(t) an n-dimensional vector-valued function of time.

2.1 Brief Review of Some Linear Algebra

In this brief section we review these concepts/results: Rⁿ, linear independence of a set of vectors, span of a set of vectors, subspace, basis for a subspace, rank of a matrix, existence and uniqueness of a solution to Ax = b where A is not necessarily square, inverse of a matrix, invertibility. If you remember them (and I hope you do), skip to the next section.

The symbol Rⁿ stands for the vector space of n-tuples, i.e., ordered lists of n real numbers. A set of vectors {v₁, ..., v_k} in Rⁿ is linearly independent if none is a linear combination of the others. One way to check this is to write the equation

  c₁v₁ + ··· + c_k v_k = 0

and then try to solve for the cᵢ's. The set is linearly independent iff the only solution is cᵢ = 0 for every i.

The span of {v₁, ..., v_k}, denoted Span{v₁, ..., v_k}, is the set of all linear combinations of these vectors. A subspace V of Rⁿ is a subset of Rⁿ that is also a vector space in its own right. This is true iff these two conditions hold: If x, y are in V, then so is x + y; if x is in V and c is a scalar, then cx is in V. Thus V is closed under the operations of addition and scalar multiplication. In R³ the subspaces are the lines through the origin, the planes through the origin, the whole of R³, and the set consisting of only the zero vector.

A basis for a subspace is a set of linearly independent vectors whose span equals the subspace. The number of elements in a basis is the dimension of the subspace. The rank of a matrix is the dimension of the span of its columns. This can be proved to equal the dimension of the span of its rows. The equation Ax = b has a solution iff b belongs to the span of the columns of A, equivalently rank A = rank [A b].
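The rank test for solvability of Ax = b is easy to try out numerically. A NumPy sketch with matrices made up for illustration:

```python
import numpy as np

# Solvability of A x = b: a solution exists iff rank A = rank [A b],
# i.e., b lies in the span of the columns of A.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])          # rank 2; columns span a plane in R^3

b_in  = A @ np.array([3.0, -1.0])   # in the column span by construction
b_out = np.array([1.0, 0.0, 0.0])   # not in the span

def solvable(A, b):
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ab)

print(solvable(A, b_in), solvable(A, b_out))
```

Uniqueness (the next paragraph) corresponds to the columns of A being independent, i.e., rank A equal to the number of columns.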

When a solution exists, it is unique iff the columns of A are linearly independent, that is, the rank of A equals its number of columns.

The inverse of a square matrix A is a matrix B such that BA = I. If this is true, then AB = I. The inverse is unique and we write A⁻¹. A square matrix A is invertible iff its rank equals its dimension (we say A has "full rank"); equivalently, its determinant is nonzero. The inverse equals the adjoint divided by the determinant.

2.2 Eigenvalues and Eigenvectors

Now we turn to ẋ = Ax. The time evolution of x(t) can be understood from the eigenvalues and eigenvectors of A, a beautiful connection between dynamics and algebra. Recall that the eigenvalue equation is Av = λv. Here λ is a real or complex number and v is a nonzero real or complex vector; λ is an eigenvalue and v a corresponding eigenvector. The eigenvalues of A are unique but the eigenvectors are not: If v is an eigenvector, so is cv for any real number c ≠ 0. The spectrum of A, denoted σ(A), is its set of eigenvalues. The spectrum consists of n numbers, in general complex, and they are equal to the zeros of the characteristic polynomial det(sI − A).

Example. Consider two carts and a dashpot like this:

[Figure: two carts of masses M₁ and M₂, positions x₁ and x₂, connected by a dashpot with constant D.]

Take D = 1, M₁ = 1, M₂ = 1/2, x₃ = ẋ₁, x₄ = ẋ₂. You can derive that the model is ẋ = Ax, where

  A = [0 0 1 0; 0 0 0 1; 0 0 −1 1; 0 0 2 −2].

The characteristic polynomial of A is s³(s + 3), and therefore σ(A) = {0, 0, 0, −3}.

The equation Av = λv says that the action of A on an eigenvector is very simple: just multiplication by the eigenvalue. Likewise, the motion of x(t) starting at an eigenvector is very simple.

Lemma 2.2.1 If x(0) is an eigenvector v of A and λ the corresponding eigenvalue, then x(t) = e^{λt}v. Thus x(t) is an eigenvector too for every t.

Proof. The initial-value problem ẋ = Ax, x(0) = v has a unique solution; this is from differential equation theory. So all we have to do is show that e^{λt}v satisfies both the initial condition and the differential equation, for then e^{λt}v must be the solution x(t). The initial condition is easy:

  e^{λt}v |_{t=0} = v.

And for the differential equation,

  d/dt (e^{λt}v) = e^{λt}λv = e^{λt}Av = A(e^{λt}v).  □

The result of the lemma extends to more than one eigenvalue. Let λ₁, ..., λₙ be the eigenvalues of A and let v₁, ..., vₙ be corresponding eigenvectors. Suppose the initial state x(0) can be written as a linear combination of the eigenvectors:

  x(0) = c₁v₁ + ··· + cₙvₙ.

This is certainly possible for every x(0) if the eigenvectors are linearly independent. Then the solution satisfies

  x(t) = c₁e^{λ₁t}v₁ + ··· + cₙe^{λₙt}vₙ.

This is called a modal expansion of x(t).

Example

  A = [−1 1; 2 −2],  λ₁ = 0, λ₂ = −3,  v₁ = [1; 1], v₂ = [1; −2].

Let's say x(0) = (0, 1). The equation

  x(0) = c₁v₁ + c₂v₂

is equivalent to x(0) = Vc, where V is the 2 × 2 matrix with columns v₁, v₂ and c is the vector (c₁, c₂). Solving gives c₁ = −c₂ = 1/3. So

  x(t) = (1/3)v₁ − (1/3)e^{−3t}v₂.
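The modal expansion is also a practical recipe: solve Vc = x(0), then propagate each mode separately. A NumPy sketch for the example with A = [−1 1; 2 −2], eigenvalues 0 and −3, and x(0) = (0, 1):

```python
import numpy as np

A = np.array([[-1.0, 1.0], [2.0, -2.0]])
lam, V = np.linalg.eig(A)      # columns of V are eigenvectors
x0 = np.array([0.0, 1.0])
c = np.linalg.solve(V, x0)     # coefficients in x(0) = V c

def x_modal(t):
    # sum_i c_i e^{lam_i t} v_i  (columns of V scaled mode by mode)
    return (V * np.exp(lam * t)) @ c

# Closed form worked out above: x(t) = (1/3) v1 - (1/3) e^{-3t} v2
v1, v2 = np.array([1.0, 1.0]), np.array([1.0, -2.0])
def x_closed(t):
    return v1 / 3 - np.exp(-3 * t) * v2 / 3

print(np.allclose(x_modal(0.8), x_closed(0.8)))
```

Note that eig normalizes eigenvectors differently from the hand calculation; the coefficients c absorb the difference.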

The case of complex eigenvalues is only a little complicated. If λ₁ is a complex eigenvalue, some other, say λ₂, is its complex conjugate: λ₂ = λ̄₁. The two eigenvectors, v₁ and v₂, can be taken to be complex conjugates too (easy proof). Then if x(0) is real and we solve

  x(0) = c₁v₁ + c₂v₂,

we'll find that c₁, c₂ are complex conjugates as well. Thus the equation will look like

  x(0) = c₁v₁ + c̄₁v̄₁ = 2 Re(c₁v₁),

where Re denotes real part.

Example

  A = [0 1; −1 0],  λ₁ = j, λ₂ = −j,  v₁ = [1; j], v₂ = [1; −j].

Suppose x(0) = (0, 1). Then c₁ = −j/2, c₂ = j/2 and

  x(t) = 2 Re(c₁e^{λ₁t}v₁) = Re(−je^{jt}[1; j]) = [sin t; cos t].

2.3 The Jordan Form

Now we turn to the structure theory of a matrix related to its eigenvalues. It's convenient to introduce a term, the kernel of a matrix A. Kernel is another name for nullspace. Thus Ker A is the set of all vectors x such that Ax = 0; that is, Ker A is the solution space of the homogeneous equation Ax = 0. Notice that the zero vector is always in the kernel. If A is square, then Ker A is the zero subspace, and we write Ker A = 0, iff 0 is not an eigenvalue of A. If 0 is an eigenvalue, then Ker A equals the span of all the eigenvectors corresponding to this eigenvalue; we say Ker A is the eigenspace corresponding to the eigenvalue 0. More generally, if λ is an eigenvalue of A the corresponding eigenspace is the solution space of Av = λv, that is, of (A − λI)v = 0, that is, Ker(A − λI).

Let's begin with the simplest case, where A is 2 × 2 and has 2 distinct eigenvalues, λ₁, λ₂. You can show (this is a good exercise) that there are then 2 linearly independent eigenvectors, say v₁, v₂ (maybe complex vectors). The equations

  Av₁ = λ₁v₁,  Av₂ = λ₂v₂

are equivalent to the matrix equation

  A [v₁ v₂] = [v₁ v₂] [λ₁ 0; 0 λ₂],

that is, AV = VA_JF, where

  V = [v₁ v₂],  A_JF = diag(λ₁, λ₂).

The latter matrix is the Jordan form of A. It is unique up to reordering of the eigenvalues. The mapping A ↦ A_JF = V⁻¹AV is called a similarity transformation.

Example:

  A = [−1 1; 2 −2],  V = [1 1; 1 −2],  A_JF = [0 0; 0 −3].

Corresponding to the eigenvalue λ₁ = 0 is the eigenvector v₁ = (1, 1), the first column of V. All other eigenvectors corresponding to λ₁ have the form cv₁, c ≠ 0. We call the subspace spanned by v₁ the eigenspace corresponding to λ₁. Likewise, λ₂ = −3 has a one-dimensional eigenspace.

These results extend from n = 2 to general n. Note that in the preceding result we didn't actually need distinctness of the eigenvalues, only linear independence of the eigenvectors.

Theorem 2.3.1 The Jordan form of A is diagonal, i.e., A is diagonalizable by similarity transformation, iff A has n linearly independent eigenvectors. A sufficient condition is n distinct eigenvalues.

The great thing about diagonalization is that the equation ẋ = Ax can be transformed via w = V⁻¹x into ẇ = A_JF w, that is, n decoupled equations:

  ẇᵢ = λᵢwᵢ,  i = 1, ..., n.

The latter equations are trivial to solve:

  wᵢ(t) = e^{λᵢt}wᵢ(0),  i = 1, ..., n.

Now we look at how to construct the Jordan form when there are not n linearly independent eigenvectors. We start where A has only 0 as an eigenvalue.

Nilpotent matrices

Consider

  [0 1 0; 0 0 0; 0 0 0],  [0 1 0; 0 0 1; 0 0 0].  (2.1)

For both of these matrices, σ(A) = {0, 0, 0}. For the first matrix, the eigenspace Ker A is two-dimensional and for the second matrix, one-dimensional. These are examples of nilpotent matrices: A is nilpotent if A^k = 0 for some k. The following statements are equivalent:

1. A is nilpotent.
2. All its eigenvalues are 0.
3. Its characteristic polynomial is sⁿ.
4. It is similar to a matrix of the form (2.1), where all elements are 0's, except 0's or 1's on the first diagonal above the main one. This is called the Jordan form of the nilpotent matrix.

Example. Suppose A is 3 × 3 and A = 0. Then of course it's already in Jordan form.

Example. Here we do an example of transforming a nilpotent matrix to Jordan form. Take A to be a 5 × 5 nilpotent matrix. The rank of A is 3 and hence the kernel has dimension 2. We can compute A², A³, and A⁴, and find that A³ ≠ 0 while A⁴ = 0. Take any vector v₅ in Ker A⁴ = R⁵ that is not in Ker A³, for example,

  v₅ = (0, 0, 0, 0, 1).

Then take

  v₄ = Av₅,  v₃ = Av₄,  v₂ = Av₃.

We get

  v₄ = (0, 0, 0, 1, 1) ∈ Ker A³, ∉ Ker A²
  v₃ = (0, 1, 0, 0, 0) ∈ Ker A², ∉ Ker A
  v₂ = (1, 1, 0, 0, 0) ∈ Ker A.

Finally, take v₁ ∈ Ker A, linearly independent of v₂, for example, v₁ = (0, 0, 1, 0, 0). Assemble v₁, ..., v₅ into the columns of V. Then

  V⁻¹AV = A_JF = [0 0 0 0 0; 0 0 1 0 0; 0 0 0 1 0; 0 0 0 0 1; 0 0 0 0 0].
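The same chain construction can be sketched in NumPy on a small hypothetical 3 × 3 nilpotent matrix (V0 below is an arbitrary invertible matrix used only to hide the Jordan structure; the names are ours):

```python
import numpy as np

# A hypothetical nilpotent matrix, built as V0 N V0^{-1} so that the
# chain construction has something nontrivial to recover.
N = np.array([[0.0, 1, 0], [0, 0, 1], [0, 0, 0]])   # Jordan block, N^3 = 0
V0 = np.array([[1.0, 2, 0], [0, 1, 1], [1, 0, 1]])  # any invertible matrix
A = V0 @ N @ np.linalg.inv(V0)                      # nilpotent, not in Jordan form

# Pick v3 in Ker A^3 = R^3 but not in Ker A^2, then chain down
v3 = np.array([0.0, 0, 1])
assert np.linalg.norm(A @ A @ v3) > 1e-8   # v3 is not in Ker A^2
v2 = A @ v3
v1 = A @ v2                                # v1 lies in Ker A
V = np.column_stack([v1, v2, v3])

print(np.allclose(np.linalg.inv(V) @ A @ V, N))
```

With a single chain of full length there is no leftover eigenvector to append, unlike the 5 × 5 example above.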

This is block diagonal, with a 1 × 1 block and a 4 × 4 block. In general, the Jordan form of a nilpotent matrix has 0 in each entry except possibly in the first diagonal above the main diagonal, which may have some 1's.

A nilpotent matrix has only the eigenvalue 0. Now consider a matrix A that has only one eigenvalue, λ, i.e., det(sI − A) = (s − λ)ⁿ. To simplify notation, suppose n = 3. Letting r = s − λ, we have

  det[rI − (A − λI)] = r³,

i.e., A − λI has only the zero eigenvalue, and hence A − λI =: N, a nilpotent matrix. So the Jordan form of N must look like

  [0 * 0; 0 0 *; 0 0 0],

where each star can be 0 or 1, and hence the Jordan form of A is

  [λ * 0; 0 λ *; 0 0 λ].  (2.2)

To recap, if A has just one eigenvalue, λ, then its Jordan form is λI + N, where N is a nilpotent matrix in Jordan form.

An extension of this analysis results in the Jordan form in general. Suppose A is n × n, λ₁, ..., λ_p are the distinct eigenvalues of A, and m₁, ..., m_p are their multiplicities; that is, the characteristic polynomial is

  det(sI − A) = (s − λ₁)^{m₁} ··· (s − λ_p)^{m_p}.

Then A is similar to

  A_JF = diag(A₁, ..., A_p),

where Aᵢ is mᵢ × mᵢ and it has only the eigenvalue λᵢ. Thus Aᵢ has the form λᵢI + Nᵢ, where Nᵢ is a nilpotent matrix in Jordan form.

Example:

  A = [0 0 1 0; 0 0 0 1; 0 0 −1 1; 0 0 2 −2].

As we saw, the spectrum is σ(A) = {0, 0, 0, −3}. Thus the Jordan form must be of the form

  A_JF = [0 * 0 0; 0 0 * 0; 0 0 0 0; 0 0 0 −3].

Since A has rank 2, so does A_JF. Thus only one of the stars is 1. Either is possible; for example,

  A_JF = [0 1 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 −3].

This is in the form A_JF = diag(A₁, A₂), with

  A₁ = [0 1 0; 0 0 0; 0 0 0],  A₂ = [−3].

2.4 The Transition Matrix

Let us review from the ECE356 course notes. For a square matrix M, the exponential e^M is defined as

  e^M := I + M + (1/2!)M² + (1/3!)M³ + ···

The matrix e^M is not the same as the component-wise exponential of M. Facts:

1. e^M is invertible for every M, and (e^M)⁻¹ = e^{−M}.
2. e^{M+N} = e^M e^N iff M and N commute, i.e., MN = NM.

The matrix function t ↦ e^{tA} : R → R^{n×n} is then defined and is called the transition matrix associated with A. It has the properties

1. e^{tA}|_{t=0} = I
2. e^{tA} and A commute
3. (d/dt)e^{tA} = Ae^{tA} = e^{tA}A.

Moreover, the solution of ẋ = Ax, x(0) = x₀ is x(t) = e^{tA}x₀. So e^{tA} maps the state at time 0 to the state at time t. In fact, it maps the state at any time t₀ to the state at time t₀ + t.

On computing the transition matrix via the Jordan form. If one can compute the Jordan form of A, then e^{tA} can be written in closed form, as follows. The equation

  AV = VA_JF

implies

  A²V = AVA_JF = VA²_JF.

Continuing in this way gives

  A^k V = VA^k_JF,

and then

  e^{At}V = Ve^{A_JF t},

so finally

  e^{At} = Ve^{A_JF t}V⁻¹.

The matrix exponential e^{A_JF t} is easy to write down. For example, suppose there's just one eigenvalue, so A_JF = λI + N, N nilpotent, n × n. Then

  e^{A_JF t} = e^{λt}e^{Nt} = e^{λt}(I + Nt + N²t²/2! + ··· + N^{n−1}t^{n−1}/(n−1)!).

On computing the transition matrix via Laplace transforms. Taking Laplace transforms of ẋ = Ax, x(0) = x₀ gives

  sX(s) − x₀ = AX(s).

This yields

  X(s) = (sI − A)⁻¹x₀.

Comparing

  x(t) = e^{tA}x₀,  X(s) = (sI − A)⁻¹x₀

shows that e^{tA} and (sI − A)⁻¹ are Laplace transform pairs. So one can get e^{tA} by finding the matrix (sI − A)⁻¹ and then taking the inverse Laplace transform of each element.
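The closed-form expression e^{A_JF t} = e^{λt}(I + Nt + N²t²/2!) can be checked against the defining power series. A NumPy sketch for a single 3 × 3 Jordan block with λ = −2 (values chosen arbitrarily):

```python
import numpy as np

# One Jordan block: M = lambda*I + N with N nilpotent, n = 3
lam = -2.0
N = np.array([[0.0, 1, 0], [0, 0, 1], [0, 0, 0]])
M = lam * np.eye(3) + N
t = 0.5

# Closed form: e^{lam t} (I + N t + N^2 t^2 / 2!)   (series stops, N^3 = 0)
E_closed = np.exp(lam * t) * (np.eye(3) + N * t + N @ N * t**2 / 2)

# Truncated power series of e^{Mt} for comparison
E_series = np.zeros((3, 3))
term = np.eye(3)
for k in range(1, 30):
    E_series += term
    term = term @ (M * t) / k

print(np.allclose(E_closed, E_series))
```

Thirty terms are far more than needed here; the point is only that the finite closed form and the infinite series agree.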

2.5 Stability

The concept of stability is fundamental in control engineering. Here we look at the scenario where the system has no input, but its state has been perturbed and we want to know if the system will recover. This was introduced in the ECE356 course notes. Here we go a little farther now that we're armed with the Jordan form.

The maglev example is a good one to illustrate this point. Suppose a feedback controller has been designed to balance the ball's position at 1 cm below the magnet. Suppose if the ball is placed at precisely 1 cm it will stay there; that is, the 1 cm location is a closed-loop equilibrium point. Finally, suppose there is a temporary wind gust that moves the ball away from the 1 cm position. The stability questions are: will the ball move back to the 1 cm location; if not, will it at least stay near that location?

So consider ẋ = Ax. Obviously if x(0) = 0, then x(t) = 0 for all t. We say the origin is an equilibrium point: if you start there, you stay there. Equilibrium points can be stable or not. While there are more elaborate and formal definitions of stability for the above homogeneous system, we choose the following two: The origin is asymptotically stable if x(t) → 0 as t → ∞ for all x(0). The origin is stable if x(t) remains bounded as t → ∞ for all x(0).

Since x(t) = e^{At}x(0), the origin is asymptotically stable iff every element of the matrix e^{At} converges to zero, and is stable iff every element of the matrix e^{At} remains bounded as t → ∞. Of course, asymptotic stability implies stability. Asymptotic stability is relatively easy to characterize. Using the Jordan form, one can prove this very important result, where Re denotes real part:

Theorem 2.5.1 The origin is asymptotically stable iff the eigenvalues of A all satisfy Re λ < 0.

Let's say the matrix A is stable if its eigenvalues satisfy Re λ < 0. Then the origin is asymptotically stable iff A is stable.

Now we turn to the more subtle property of stability. We'll do some examples, and we may as well have A in Jordan form. Consider the nilpotent matrix

  A = N = [0 0; 0 0].

Obviously, x(t) = x(0) for all t and so the origin is stable. By contrast, consider

  A = N = [0 1; 0 0].

Then

  e^{Nt} = I + tN,

which is unbounded and so the origin is not stable. This example extends to the n × n case: If A is nilpotent, the origin is stable iff A = 0.
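Theorem 2.5.1 translates directly into a numerical test. A NumPy sketch (the function name is ours); note that it decides only asymptotic stability, since the marginal case also needs the Jordan structure, per Theorem 2.5.2 below:

```python
import numpy as np

# Eigenvalue test for asymptotic stability: all Re(lambda) < 0
def asymptotically_stable(A):
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])  # eigenvalues -1, -2
A_marginal = np.array([[0.0, 1.0], [0.0, 0.0]])  # nilpotent: e^{At} = I + tA, unbounded

print(asymptotically_stable(A_stable), asymptotically_stable(A_marginal))
```

A_marginal fails the asymptotic test, and since its eigenvalue 0 sits in a nondiagonal Jordan block, it is not even stable.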

Here's the test for stability in general in terms of the Jordan form of A:

  A_JF = diag(A₁, ..., A_p).

Recall that each Aᵢ has just one eigenvalue, λᵢ, and that Aᵢ = λᵢI + Nᵢ, where Nᵢ is a nilpotent matrix in Jordan form.

Theorem 2.5.2 The origin is stable iff the eigenvalues of A all satisfy Re λ ≤ 0 and, for any eigenvalue with Re λᵢ = 0, the nilpotent matrix Nᵢ is zero, i.e., Aᵢ is diagonal.

Here's an example with complex eigenvalues:

  A = [0 1; −1 0],  A_JF = [j 0; 0 −j].

The origin is stable since there are two 1 × 1 Jordan blocks. Now consider a 4 × 4 matrix A whose eigenvalues are j, j, −j, −j, so the Jordan form must look like

  A_JF = [j * 0 0; 0 j 0 0; 0 0 −j *; 0 0 0 −j].

Since the rank of A − jI equals 3, the upper star is 1; since the rank of A + jI equals 3, the lower star is 1. Thus

  A_JF = [j 1 0 0; 0 j 0 0; 0 0 −j 1; 0 0 0 −j].

Since the Jordan blocks are not diagonal, the origin is not stable.

Example. Consider the cart-spring-damper system:

[Figure: a cart of mass M with position y, attached to a wall by a spring with constant K and a damper with constant D.]

The equation is

  Mÿ + Dẏ + Ky = 0.

Defining x = (y, ẏ), we have ẋ = Ax with

  A = [0 1; −K/M −D/M].

Assume M > 0 and K, D ≥ 0. If D = K = 0, the eigenvalues are {0, 0} and A is a nilpotent matrix in Jordan form. The origin is an unstable equilibrium. If only D = 0 or K = 0 but not both, the origin is stable but not asymptotically stable. And if both D, K are nonzero, the origin is asymptotically stable.

Example. Two points move on the line R. The positions of the points are x₁, x₂. They move toward each other according to the control laws

  ẋ₁ = x₂ − x₁,  ẋ₂ = x₁ − x₂.

Thus the state is x = (x₁, x₂) and the state equation is ẋ = Ax,

  A = [−1 1; 1 −1].

The eigenvalues are λ₁ = 0, λ₂ = −2, so the origin is stable but not asymptotically stable. Obviously, the two points tend toward each other; that is, the state x(t) tends toward the subspace

  V = {x : x₁ = x₂}.

This is the eigenspace for the zero eigenvalue. To see this convergence, write the initial condition as a linear combination of eigenvectors:

  x(0) = c₁v₁ + c₂v₂,  v₁ = [1; 1],  v₂ = [1; −1].

Then

  x(t) = c₁e^{λ₁t}v₁ + c₂e^{λ₂t}v₂ = c₁v₁ + c₂e^{−2t}v₂ → c₁v₁.

So x₁(t) and x₂(t) both converge to c₁, the same point.

Phase portraits help us visualize state evolution and stability, but they're applicable only for the n = 2 case. Below is shown a plot in R² of the vector field for a 2 × 2 matrix A; that is, at a grid of points, the directions of the velocity vectors Ax are shown translated to the point x. By following the arrows, we get a trajectory; one is shown. The plot was done using www.math.psu.edu/melvin/phase/newphase.html.

[Figure: phase portrait showing the vector field on a grid and one trajectory.]
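The two-point example can be simulated to watch the convergence to the eigenspace x₁ = x₂. A NumPy sketch using simple forward-Euler integration (a small fixed step chosen for illustration, not a recommended integrator):

```python
import numpy as np

# Two points moving toward each other: A = [-1 1; 1 -1],
# eigenvalues 0 and -2; the state converges to the line x1 = x2.
A = np.array([[-1.0, 1.0], [1.0, -1.0]])
x = np.array([5.0, 1.0])      # arbitrary initial positions

dt, T = 0.001, 10.0
for _ in range(int(T / dt)):
    x = x + dt * (A @ x)

# Both positions approach c1 = the average of the initial positions (here 3)
print(np.round(x, 3))
```

Note that ẋ₁ + ẋ₂ = 0, so the sum x₁ + x₂ is conserved (exactly, even under Euler), which is why the limit is the average.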

You can also use MATLAB, Scilab (free), Mathematica, or Octave (free).

2.6 Problems

1. Are the following vectors linearly independent?

  v₁ = (1, 1, 2, 0),  v₂ = (1, 0, 2, 2),  v₃ = (1, 2, 2, 6)

2. Continuing with the same vectors, find a basis for Span{v₁, v₂, v₃}.

3. What kind of geometric object is {x : Ax = b} when A ∈ R^{m×n}? That is, is it a sphere, a point, what?

4. (a) Let A be an 8 × 8 real matrix with eigenvalues

  2, 2, −3, −3, −3, 8, 4, 4.

Assume

  rank(A − 2I) = 7,  rank(A + 3I) = 6,  rank(A − 4I) = 6.

Write down the Jordan form of A.

(b) The matrix A = … is nilpotent. Write down its Jordan form.

5. Take A = … . Show that the matrix V constructed as follows satisfies V⁻¹AV = A_JF:

  Select v₃ in Ker A² but not in Ker A.
  Set v₂ = Av₃.
  Select v₁ in Ker A such that {v₁, v₂} is linearly independent.
  Select an eigenvector v₄ corresponding to the eigenvalue 3.
  Set V = [v₁ v₂ v₃ v₄].

(The general construction of the basis for the Jordan form is along these lines.)

6. Let A = … . Write down the Jordan form of A.

7. Consider

  A = [σ ω; −ω σ],

where σ and ω ≠ 0 are real. Find the Jordan form and the transition matrix.

8. In the previous problem, we saw that when

  A = [σ ω; −ω σ]

its transition matrix is easy to write down. This problem demonstrates that a matrix with distinct complex eigenvalues can be transformed into the above form using a nonsingular transformation. Let A = … . Determine the eigenvalues and eigenvectors of A, noting that they form complex conjugate pairs. Let the first eigenvalue be written as a + jb with the corresponding eigenvector v₁ + jv₂. Take v₁ and v₂ as the columns of a matrix V. Find V⁻¹AV.

9. Consider the homogeneous state equation ẋ = Ax with A = … and x₀ = (3, 2). Find a modal expansion of x(t).

10. Show that the origin is asymptotically stable for ẋ = Ax iff all poles of every element of (sI − A)⁻¹ are in the open left half-plane. Show that the origin is stable iff all poles of every element of (sI − A)⁻¹ are in the closed left half-plane and those on the imaginary axis have multiplicity 1.

11. Consider the linear system

  ẋ = … x + … u
  y = … x

(a) If u(t) is the unit step and x(0) = 0, is y(t) bounded?

(b) If u(t) = 0 and x(0) is arbitrary, is y(t) bounded?

12. (a) Suppose that σ(A) = {−1, 3, 3, −1 + j2, −1 − j2} and the rank of (A − λI) at λ = 3 is 4. Determine A_JF.

(b) Suppose that σ(A) = {1, 2, 2, 2} and the rank of (A − λI) at λ = 2 is 3. Determine A_JF.

(c) Suppose that σ(A) = {1, 2, 2, 2, 3} and the rank of (A − λI) at λ = 2 is 3. Determine A_JF.

13. Find A_JF for A = … .

14. Summarize all the ways to find exp(At). Then find exp(At) for A = … .

15. Consider the set {cv : c ≥ 0}, where v ≠ 0 is a given vector in R². This set is called a ray from the origin in the direction of v. More generally, {x₀ + cv : c ≥ 0} is a ray from x₀ in the direction of v. Find a 2 × 2 matrix A and a vector x₀ such that the solution x(t) of ẋ = Ax, x(0) = x₀ is a ray.

16. Consider the following system:

  ẋ₁ = x₂
  ẋ₂ = −x₁ − 3x₂

Do a phase portrait using Scilab or MATLAB. Interpret the phase portrait in terms of the modal decomposition of the system. Do lots more examples of this type.

Chapter 3

More Linear Algebra

This chapter extends our knowledge of linear algebra: subspaces, matrix representations, linear matrix equations, and invariant subspaces.

3.1 Subspaces

Let X = Rⁿ and let V, W be subspaces of X. Then V + W denotes the set

  {v + w : v ∈ V, w ∈ W},

and it is a subspace of X. The set union V ∪ W is not a subspace in general, unless one is contained in the other. The intersection V ∩ W is, however, a subspace. As an example: X = R³, V a line, W a plane.¹ Then V + W = R³ if V does not lie in W. If V ⊆ W, then of course V + W = W. It is a fact that

  dim(V + W) = dim(V) + dim(W) − dim(V ∩ W).

For example, think of V, W as two planes in R³ that intersect in a line. Then the dimension equation evaluates to 3 = 2 + 2 − 1.

Two subspaces V, W are independent if V ∩ W = 0. This is not the same as being orthogonal. For example, two lines in R² are independent iff they are not colinear (i.e., the angle between them is not 0), while they are orthogonal iff the angle is 90°. Every vector x in V + W can be written as

  x = v + w,  v ∈ V, w ∈ W.

If V, W are independent, then v, w are unique. Think of v as the component of x in V and w as its component in W. Let's prove uniqueness. Suppose

  x = v + w = v′ + w′.

¹In this chapter when we speak of lines we mean lines through 0. Similarly for planes.
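The dimension identity dim(V + W) = dim V + dim W − dim(V ∩ W) can be checked numerically by representing subspaces by basis matrices. A NumPy sketch for the two-planes-in-R³ example (the particular planes are chosen for illustration):

```python
import numpy as np

# Columns of BV, BW are bases for the subspaces V and W of R^3
BV = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # V = xy-plane
BW = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # W = yz-plane

dimV = np.linalg.matrix_rank(BV)
dimW = np.linalg.matrix_rank(BW)
dim_sum = np.linalg.matrix_rank(np.hstack([BV, BW]))  # dim(V + W)

# dim(V ∩ W): solutions of BV a = BW b form the nullspace of [BV -BW]
# (each basis has independent columns, so nullspace dimension = dim(V ∩ W))
dim_cap = 4 - np.linalg.matrix_rank(np.hstack([BV, -BW]))

print(dim_sum == dimV + dimW - dim_cap)   # 3 = 2 + 2 - 1
```

The same rank computations also give the independence test: V and W are independent exactly when dim_cap is 0.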

Then v1 − v2 = w2 − w1. The left-hand side is in V and the right-hand side is in W. Since the intersection of these two subspaces is zero, both sides equal 0.

Clearly, V, W are independent iff dim(V + W) = dim(V) + dim(W).

Three subspaces U, V, W are independent if U, V + W are independent, V, U + W are independent, and W, U + V are independent. This is not the same as being pairwise independent. As an example, let U, V, W be 1-dimensional subspaces of R³, i.e., three lines. When are they independent? Pairwise independent? Every vector x in U + V + W can be written as x = u + v + w, u ∈ U, v ∈ V, w ∈ W. If U, V, W are independent, then u, v, w are unique. Also, U, V, W are independent iff dim(U + V + W) = dim(U) + dim(V) + dim(W).

If V, W are independent subspaces, we write their sum as V ⊕ W. This is called a direct sum. Likewise for more than two. Let's finish this section with a handy fact: every subspace has an independent complement, i.e.,

    V ⊆ X  ⇒  (∃ W ⊆ X)  X = V ⊕ W.

Think of X as R³ and V as a plane. Then W can be any line not in the plane.

3.2 Linear Transformations

We now introduce linear transformations. The important point is that a linear transformation is not the same as a matrix, but every linear transformation has a matrix representation once you choose a basis. Let X = Rⁿ and Y = Rᵖ. A linear function A : X → Y defines a linear transformation (LT); X is called its domain and Y its co-domain. Thus

    A(x1 + x2) = Ax1 + Ax2,   x1, x2 ∈ X
    A(ax) = aAx,   a ∈ R, x ∈ X.

It is an important fact that an LT is uniquely determined by its action on a basis. That is, if A : X → Y is an LT and if {e1, ..., en} is a basis for X, then if we know the vectors Aei, we can compute Ax for every x ∈ X, by linearity.

Example For us, the most important example is an LT generated by a matrix. Let A ∈ R^{m×n}. For each vector x in Rⁿ, Ax is a vector in Rᵐ. The mapping x ↦ Ax is an LT A : Rⁿ → Rᵐ. Linearity is easy to check.
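The dimension identity is easy to check numerically: dim(V + W) is the rank of any matrix whose columns span V and W together, and dim(V ∩ W) then falls out of the identity. A small NumPy sketch, with an illustrative choice of spanning vectors:

```python
import numpy as np

# Two planes in R^3 that intersect in a line (illustrative choice):
# V = span{(1,0,0), (0,1,0)},  W = span{(1,0,0), (0,0,1)}.
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
W = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])

dim_V = np.linalg.matrix_rank(V)
dim_W = np.linalg.matrix_rank(W)

# dim(V + W) = rank of the columns of V and W placed side by side.
dim_sum = np.linalg.matrix_rank(np.hstack([V, W]))

# Rearranging dim(V+W) = dim V + dim W - dim(V ∩ W):
dim_int = dim_V + dim_W - dim_sum
print(dim_sum, dim_int)  # 3 1: the planes span R^3 and meet in a line
```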

Example Take a vector in the plane and rotate it counterclockwise by 90°. This defines an LT A : R² → R². Note that A is not given as a matrix; it's given by its domain, its co-domain, and its action on vectors. If we take a vector to be represented by its Cartesian coordinates, x = (x1, x2), then we've chosen a basis for R². In that case A maps x = (x1, x2) to Ax = (−x2, x1), and so there's an associated rotation matrix

    [ 0 −1 ]
    [ 1  0 ].

We'll return to matrix representation later.

Example Let X = Rⁿ and let {e1, ..., en} be a basis. Every vector x in X has a unique expansion

    x = a1 e1 + ⋯ + an en,   ai ∈ R.

Let a denote the vector (a1, ..., an), the n-tuple of coordinates of x with respect to the basis. The function x ↦ a defines an LT Q : X → Rⁿ. The equation x = a1 e1 + ⋯ + an en can be written compactly as x = Ea, where E is the matrix with columns e1, ..., en and a is the vector with components a1, ..., an. Therefore a = E⁻¹x and so Qx = E⁻¹x; that is, the action of Q is to multiply by the matrix E⁻¹. For example, let X = R². Take the natural basis

    e1 = (1, 0),   e2 = (0, 1).

In this case E = I and Qx = x. If instead we take some other basis e1, e2, then E = [e1 e2] and Qx = E⁻¹x.

Every LT on finite-dimensional vector spaces has a matrix representation. Let's do this very important construction carefully. Let A be an LT X → Y, with X = Rⁿ, basis {e1, ..., en}, and Y = Rᵖ, basis {f1, ..., fp}. Bring in the coordinate LTs

    Q : X → Rⁿ,   R : Y → Rᵖ.

So now we have the setup

         A
     X -----> Y
     |        |
   Q |        | R
     v        v
     Rⁿ       Rᵖ

The left downward arrow gives us the n-tuple, say a, that represents a vector x in the basis {e1, ..., en}. The right downward arrow gives us the p-tuple, say b, that represents a vector y in the basis {f1, ..., fp}. It's possible to add a fourth LT to complete the square:

         A
     X -----> Y
     |        |
   Q |        | R
     v        v
     Rⁿ ----> Rᵖ
         M

This is called a commutative diagram. The object M in the diagram is the matrix representation of A with respect to these two bases. Notice that the bottom arrow represents the LT generated by the matrix M; we write M in the diagram for simplicity, but you should understand that really the object is an LT. The matrix M is the p × n matrix that makes the diagram commute, that is, for every x ∈ X

    Ma = b,  where a = Qx, b = RAx.

In particular, take x = ei, the iᵗʰ basis vector in X. Then a is the n-vector with 1 in the iᵗʰ entry and 0 otherwise. So Ma equals the iᵗʰ column of the matrix M. Thus, we have the following recipe for constructing the matrix M:

1. Take the 1st basis vector e1 of X.
2. Apply the LT A to get Ae1.
3. Find b, the coordinate vector of Ae1 in the basis for Y.
4. Enter this b as column 1 of M.
5. Repeat for the other columns.

Recall that Q is the LT generated by E⁻¹, where the columns of E are the basis in the domain of A. Likewise, R is the LT generated by F⁻¹, where the columns of F are the basis in the co-domain of A. Thus the equation Ma = b reads

    M E⁻¹ x = F⁻¹ A x.                              (3.1)

Example Let A : R² → R² be the LT that rotates a vector counterclockwise by 90°. Let's first take the standard bases: e1 = (1, 0), e2 = (0, 1) for the domain and f1 = (1, 0), f2 = (0, 1) for the co-domain. Following the steps, we first apply A to e1, that is, we rotate e1 counterclockwise by 90°; the result is Ae1 = (0, 1). Then we express this vector in the basis {f1, f2}:

    Ae1 = 0·f1 + 1·f2.

Thus the first column of M is (0, 1), the vector of coefficients. Now for the second column: rotate e2 to get (−1, 0) and represent this in the basis {f1, f2}:

    Ae2 = −1·f1 + 0·f2.

So the second column of M is (−1, 0). Thus

    M = [ 0 −1 ]
        [ 1  0 ].

Suppose we had different bases:

    e1 = (1, 1), e2 = (−1, 2),   f1 = (1, 2), f2 = (1, 0).

Apply the recipe again. Get Ae1 = (−1, 1). Expand it in the basis {f1, f2}:

    (−1, 1) = (1/2) f1 − (3/2) f2.

Get Ae2 = (−2, −1). Expand it in the basis {f1, f2}:

    (−2, −1) = −(1/2) f1 − (3/2) f2.

Thus

    M = [ 1/2  −1/2 ]
        [ −3/2 −3/2 ].

Example Let A ∈ R^{m×n} and let A : Rⁿ → Rᵐ be the generated LT. It is easy to check that A itself is then the matrix representation of A with respect to the standard bases. Let's do it. Let {e1, ..., en} be the standard basis on Rⁿ and {f1, ..., fm} the standard basis on Rᵐ. Then Ae1 equals the first column, (a11, a21, ..., am1), of A. This column can be written as a11 f1 + ⋯ + am1 fm, and hence (a11, a21, ..., am1) is the first column of the matrix representation of A.

Suppose instead that we have general bases, {e1, ..., en} on Rⁿ and {f1, ..., fm} on Rᵐ. Form the matrices E and F from these basis vectors. From (3.1) we get that the matrix representation M with respect to these bases satisfies

    M E⁻¹ = F⁻¹ A,

or equivalently

    AE = FM.

A very interesting special case of this is where A is square and the same basis {e1, ..., en} is taken for both the domain and co-domain. Then

    AE = EM,
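The recipe can be automated: with E and F the basis matrices, the relation above gives M = F⁻¹AE. A NumPy check of the rotation example with the second pair of bases (the basis vectors are as reconstructed in these notes):

```python
import numpy as np

# Rotation by 90 degrees counterclockwise, in the standard basis.
A = np.array([[0.0, -1.0],
              [1.0, 0.0]])

# Basis matrices: columns of E are the domain basis e1, e2;
# columns of F are the co-domain basis f1, f2.
E = np.array([[1.0, -1.0],
              [1.0, 2.0]])
F = np.array([[1.0, 1.0],
              [2.0, 0.0]])

# M E^{-1} x = F^{-1} A x for all x, hence M = F^{-1} A E.
# Solving F M = A E avoids forming the inverse explicitly.
M = np.linalg.solve(F, A @ E)
print(M)  # [[ 0.5 -0.5] [-1.5 -1.5]], matching the hand computation
```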

or

    M = E⁻¹ A E;

the matrix M is a similarity transformation of the given matrix A. Finally, suppose we start with a square A and take the basis {v1, ..., vn} of generalized eigenvectors. The new matrix representation is our familiar Jordan form, A_JF = V⁻¹AV. Thus the two matrices A and A_JF represent the same LT: A in the given standard basis and A_JF in the basis of generalized eigenvectors.

An LT has two important associated subspaces. Let A : X → Y be an LT. The kernel (or nullspace) of A is the subspace of X on which A is zero:

    Ker A := {x : Ax = 0}.

The LT A is said to be one-to-one if Ker A = 0; equivalently, the homogeneous equation Ax = 0 has only the trivial solution x = 0. The image (or range space) of A is the subspace of Y that A can reach:

    Im A := {y : (∃ x ∈ X) y = Ax}.

We say A is onto if Im A = Y; equivalently, the equation Ax = y has a solution x for every y. Whether A is one-to-one or onto (or both) can be easily checked by examining any matrix representation A:

    A is one-to-one ⇔ A has full column rank;
    A is onto ⇔ A has full row rank.

If A is a matrix, we will write Im A for the image of the generated LT (it's the column span of the matrix), and we'll write Ker A for the kernel of the LT.

Example Let A : R³ → R³ map a vector to its projection on the horizontal plane. Then the kernel equals the vertical axis, the image equals the horizontal plane, A is neither onto nor one-to-one, and its matrix with respect to the standard basis is

    [ 1 0 0 ]
    [ 0 1 0 ]
    [ 0 0 0 ].

We could modify the co-domain to have A : R³ → R², again mapping a vector to its projection on the horizontal plane. Then the kernel equals the vertical axis, the image equals all of R², A is onto but not one-to-one, and its matrix with respect to the standard bases is

    [ 1 0 0 ]
    [ 0 1 0 ].

Example Let V ⊆ X (think of V as a plane in 3-dimensional space X). Define the function V : V → X, Vx = x. This is an LT called the insertion LT. Clearly V is one-to-one and Im V = V. Suppose we have a basis for V, {e1, ..., ek},

and we extend it to get a basis for X, {e1, ..., ek, ..., en}. Then the matrix representation of V is

    V = [ I_k ]
        [ 0   ].

Clearly, rank V = k.

Example Let X be 3-dimensional space, V a plane (2-dimensional subspace), and W a line not in V. Then V, W are independent subspaces and X = V ⊕ W. Every x in X can be written x = v + w for unique v in V and w in W. Define the function P : X → V mapping x to v. This is an LT called the natural projection onto V. Check that Im P = V and Ker P = W. Suppose {e1, e2} is a basis for V and {e3} a basis for W. The induced matrix representation is

    P = [ 1 0 0 ]
        [ 0 1 0 ].

Example Let A : X → Y be an LT. Its kernel, Ker A, is a subspace of X; let {e_{k+1}, ..., e_n} be a basis for Ker A and extend it to get a basis for X:

    {e1, ..., ek, ..., en}.

Then {Ae1, ..., Aek} is a basis for Im A. Extend it to get a basis for Y:

    {Ae1, ..., Aek, f_{k+1}, ..., f_p}.

Then the matrix representation of A is

    A = [ I_k  0 ]
        [ 0    0 ].
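The rank tests for one-to-one and onto are one-liners numerically. A sketch using the two projection matrices from the example above (NumPy's matrix_rank computes the rank):

```python
import numpy as np

# Projection of R^3 onto the horizontal plane, co-domain R^3.
P3 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 0.0]])

# Same action, but with co-domain R^2.
P2 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])

def one_to_one(A):
    # full column rank  <=>  Ker A = 0
    return np.linalg.matrix_rank(A) == A.shape[1]

def onto(A):
    # full row rank  <=>  Im A = co-domain
    return np.linalg.matrix_rank(A) == A.shape[0]

print(one_to_one(P3), onto(P3))  # False False
print(one_to_one(P2), onto(P2))  # False True
```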

3.3 Matrix Equations

We already reviewed the linear equation

    Ax = b,   A ∈ R^{n×m}, x ∈ Rᵐ, b ∈ Rⁿ.

The equation is another way of saying b is a linear combination of the columns of A. Thus the equation has a solution iff b ∈ column span of A, i.e., b ∈ Im A. Then the solution is unique iff rank A = m, i.e., Ker A = 0. These results extend to the matrix equation

    AX = B,   A ∈ R^{n×m}, X ∈ R^{m×p}, B ∈ R^{n×p}.

In this section we study this and similar equations. We could work with LTs, but we'll use matrices instead.

The first equation is AX = I. Such an X is called a right-inverse of A.

Lemma 3.3.1 A ∈ R^{n×m} has a right-inverse iff it's onto, i.e., the rank of A equals n.

Proof (⇒) If AX = I, then, for every y ∈ Rⁿ, AXy = y. Thus for every y ∈ Rⁿ, there exists x ∈ Rᵐ such that Ax = y. Thus A is onto.
(⇐) Let {f1, ..., fn} be the standard basis for Rⁿ. Since A is onto,

    (∀ i)(∃ xi ∈ Rᵐ)  fi = Axi.

Now define X to be the matrix whose iᵗʰ column is xi, i.e., via Xfi = xi. Then AXfi = fi. This implies AX = I. □

The second equation is the dual situation, XA = I. Obviously, such an X is a left-inverse.

Lemma 3.3.2 A ∈ R^{n×m} has a left-inverse iff it's one-to-one, i.e., A has rank m.

Lemma 3.3.3
1. There exists X such that AX = B iff Im B ⊆ Im A, that is,

       rank A = rank [ A  B ].

2. There exists X such that XA = B iff Ker A ⊆ Ker B, that is,

       rank A = rank [ A ]
                     [ B ].
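Lemma 3.3.3 gives a computable solvability test: AX = B is solvable iff rank A = rank [A B], and least squares then produces an actual solution. A NumPy sketch with an illustrative A and B of my own choosing:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 0.0],
              [0.0, 1.0]])        # rank 2; Im A = span of its columns
B = A @ np.array([[3.0], [4.0]])  # constructed to lie in Im A

# Solvability test from Lemma 3.3.3: rank A == rank [A B].
solvable = (np.linalg.matrix_rank(A) ==
            np.linalg.matrix_rank(np.hstack([A, B])))

# When the test passes, least squares returns an exact solution;
# here A has full column rank, so the solution is unique.
X, *_ = np.linalg.lstsq(A, B, rcond=None)
print(solvable, np.allclose(A @ X, B))  # True True
```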

3.4 Invariant Subspaces

Example Let

    A = [ 1 −1 ]
        [ 2 −2 ]

and let A : R² → R² be the generated LT. Clearly, Ker A is the 1-dimensional subspace spanned by (1, 1). Also,

    x ∈ Ker A  ⇒  Ax = 0 ∈ Ker A,

or equivalently, A Ker A ⊆ Ker A.

In general, if A : X → X is an LT, a subspace V ⊆ X is A-invariant if AV ⊆ V. The zero subspace, X itself, Ker A, and Im A are all A-invariant. Now Ker A is the eigenspace for the zero eigenvalue, assuming λ = 0 is an eigenvalue (as in the example above). More generally, suppose λ is an eigenvalue of A, and assume λ ∈ R. Then Ax = λx for some x ≠ 0, and V = Span{x} is A-invariant. So is the eigenspace

    {x : Ax = λx} = {x : (A − λI)x = 0} = Ker(A − λI).

Let V be an A-invariant subspace. Take a basis for V, {e1, ..., ek}, and extend it to a basis for X: {e1, ..., ek, ..., en}. Then the matrix representation of A has the form

    A = [ A11  A12 ]
        [ 0    A22 ].

Notice that the lower-left block of A equals zero; this is because V is A-invariant.

Example Let X = R³, let V be the (x1, x2)-plane, and let A : X → X be the LT that rotates a vector 90° about the x3-axis using the right-hand rule. Thus V is A-invariant. Let us take the bases

    e1 = (1, 0, 0), e2 = (0, 1, 0)  for V,
    {e1, e2, e3},  e3 = (0, 0, 1)  for X.

The matrix representation of A with respect to the latter basis is

    A = [ 0 −1 0 ]
        [ 1  0 0 ]
        [ 0  0 1 ].

So, in particular, the restriction of A to V is represented by the rotation matrix

    A11 = [ 0 −1 ]
          [ 1  0 ].

Finally, let A be an n × n matrix and suppose V is an n × k matrix. Then Im V is a subspace of Rⁿ. How can we know if this subspace is invariant under A, or more precisely, under the LT generated by A? The answer is this:

Lemma 3.4.1 The subspace Im V is A-invariant iff the linear equation AV = VA1 has a solution A1.

Proof If AV = VA1, then Im AV ⊆ Im V, that is, A Im V ⊆ Im V, which says Im V is A-invariant. Conversely, if Im AV ⊆ Im V, then the equation AV = VA1 is solvable, by Lemma 3.3.3. □

3.5 Problems

1. Prove the following facts about subspaces:
   (a) V + V = V. Hint: You have to show V + V ⊆ V and V ⊆ V + V. Similarly for other subspace equalities.
   (b) If V ⊆ W, then V + W = W.
   (c) If V ⊆ W, then W ∩ (V + T) = V + W ∩ T.

2. Show that W ∩ (V + T) = W ∩ V + W ∩ T is false in general by giving an explicit counterexample.

3. Let A be the identity LT on R². Take {(1, 1), (1, −1)} as basis for the domain and {(2, 0), (3, 1)} as basis for the co-domain. Find the matrix A.

4. Let A denote the LT R⁴ → R⁵ with the action

    x ↦ ( x1 − x4, x2 − x3, 2x4, x1 − x2 + x3 + 2x4, x2 + x3 ).

   Find bases for R⁴ and R⁵ so that the matrix representation is

    A = [ I  0 ]
        [ 0  0 ].

5. Let A be an LT. Show that if {Ae1, ..., Aen} is linearly independent, so is {e1, ..., en}. Give an example where the converse is false.

6. Find all right-inverses of the matrix A = [ 1  2 ].

7. Let X denote the 4-dimensional vector space with basis {sin t, cos t, sin 2t, cos 2t}. Thus vectors in X are time-domain signals of frequency 1 rad/s, 2 rad/s, or a combination of both. Suppose an input x(t) from X is applied to a lowpass RC-filter, producing the output y(t). The equation for the circuit is

    RC ẏ(t) + y(t) = x(t).

   For simplicity, take RC = 1. From circuit theory, we know that y(t) belongs to X too. (This is steady-state analysis; the transient response is neglected.) So the mapping from x(t) to y(t) defines a linear transformation A : X → X. Find the matrix representation of A with respect to the given basis.

8. Consider the vector space R³. Let x1, x2, and x3 denote the components of a vector x in R³. Now let V denote the subspace of R³ of all vectors x where

    x1 + x2 − x3 = 0,

   and let W denote the subspace of R³ of all vectors x where

    2x1 − 3x3 = 0.

   Find a basis for the intersection V ∩ W.

9. Let A : R³ → R³ be the LT defined by

    A : (x1, x2, x3) ↦ (8x1 − 2x3, x1 + 7x2 − 2x3, 4x1 − x3).

   Find bases for Ker A and Im A.

10. Find all solutions of the matrix equation XA = I, where

    A = [ 2 ]
        [ 0 ]
        [ 2 ].

11. For a square matrix X, let diag X denote the vector formed from the elements on the diagonal of X. Let A : R^{n×n} → Rⁿ be the LT defined by A : X ↦ diag X. Does A have a left inverse? A right inverse?

12. For each of the two given matrices, find its rank, a basis for its image, and a basis for its kernel.

13. Let A, U ∈ R^{n×n} with U nonsingular. True or false:
    (a) Ker(A) = Ker(UA)
    (b) Ker(A) = Ker(AU)
    (c) Ker(A²) ⊇ Ker(A)

14. Is {(x1, x2, x3) : 2x1 + 3x2 + 6x3 − 5 = 0} a subspace of R³?

15. You are given the n eigenvalues of a matrix in R^{n×n}. Can you determine the rank of the matrix? If no, can you give bounds on the rank?

16. Suppose that A ∈ R^{m×n} and B ∈ R^{n×m} with m ≤ n and rank A = rank B = m. Find a necessary and sufficient condition that AB be invertible.

17. Let A be an LT from X to X, a finite-dimensional vector space. Fix a basis for X and let A denote the matrix representation of A with respect to this basis. Show that A² is the matrix representation of A².

18. Consider the following result:

    Lemma If A is a matrix with full column rank, then the equation Ax = y is solvable for every vector y.

    Proof Let y be arbitrary. Multiply the equation Ax = y by the transpose of A: AᵀAx = Aᵀy. Since A has full column rank, AᵀA is invertible. Thus x = (AᵀA)⁻¹Aᵀy. □

    (a) Give a counterexample to the lemma.
    (b) What is the mistake in logic in the proof?

19. Let L denote the line in the plane that passes through the origin and makes an angle of +π/6 radians with the positive x-axis. Let A : R² → R² be the LT that maps a vector to its reflection about L.
    (a) Find the matrix representation of A with respect to the basis

        e1 = (1, 1),   e2 = (1, −1).

    (b) Show that A is invertible and find its inverse.

20. Fix a vector v ≠ 0 in R³ and consider the LT A : R³ → R³ that maps x to the cross product v × x.
    (a) Find Ker(A) and Im(A).
    (b) Is A invertible?


More information

4. Linear transformations as a vector space 17

4. Linear transformations as a vector space 17 4 Linear transformations as a vector space 17 d) 1 2 0 0 1 2 0 0 1 0 0 0 1 2 3 4 32 Let a linear transformation in R 2 be the reflection in the line = x 2 Find its matrix 33 For each linear transformation

More information

Robot Control Basics CS 685

Robot Control Basics CS 685 Robot Control Basics CS 685 Control basics Use some concepts from control theory to understand and learn how to control robots Control Theory general field studies control and understanding of behavior

More information

1. Find the solution of the following uncontrolled linear system. 2 α 1 1

1. Find the solution of the following uncontrolled linear system. 2 α 1 1 Appendix B Revision Problems 1. Find the solution of the following uncontrolled linear system 0 1 1 ẋ = x, x(0) =. 2 3 1 Class test, August 1998 2. Given the linear system described by 2 α 1 1 ẋ = x +

More information

MAT 2037 LINEAR ALGEBRA I web:

MAT 2037 LINEAR ALGEBRA I web: MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Chapter 4 & 5: Vector Spaces & Linear Transformations

Chapter 4 & 5: Vector Spaces & Linear Transformations Chapter 4 & 5: Vector Spaces & Linear Transformations Philip Gressman University of Pennsylvania Philip Gressman Math 240 002 2014C: Chapters 4 & 5 1 / 40 Objective The purpose of Chapter 4 is to think

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

Control Systems. Internal Stability - LTI systems. L. Lanari

Control Systems. Internal Stability - LTI systems. L. Lanari Control Systems Internal Stability - LTI systems L. Lanari outline LTI systems: definitions conditions South stability criterion equilibrium points Nonlinear systems: equilibrium points examples stable

More information

Lecture 6. Eigen-analysis

Lecture 6. Eigen-analysis Lecture 6 Eigen-analysis University of British Columbia, Vancouver Yue-Xian Li March 7 6 Definition of eigenvectors and eigenvalues Def: Any n n matrix A defines a LT, A : R n R n A vector v and a scalar

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Definition (T -invariant subspace) Example. Example

Definition (T -invariant subspace) Example. Example Eigenvalues, Eigenvectors, Similarity, and Diagonalization We now turn our attention to linear transformations of the form T : V V. To better understand the effect of T on the vector space V, we begin

More information

Some solutions of the written exam of January 27th, 2014

Some solutions of the written exam of January 27th, 2014 TEORIA DEI SISTEMI Systems Theory) Prof. C. Manes, Prof. A. Germani Some solutions of the written exam of January 7th, 0 Problem. Consider a feedback control system with unit feedback gain, with the following

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018 Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry

More information

Control Systems Design, SC4026. SC4026 Fall 2010, dr. A. Abate, DCSC, TU Delft

Control Systems Design, SC4026. SC4026 Fall 2010, dr. A. Abate, DCSC, TU Delft Control Systems Design, SC4026 SC4026 Fall 2010, dr. A. Abate, DCSC, TU Delft Lecture 4 Controllability (a.k.a. Reachability) and Observability Algebraic Tests (Kalman rank condition & Hautus test) A few

More information

Contents. 1 State-Space Linear Systems 5. 2 Linearization Causality, Time Invariance, and Linearity 31

Contents. 1 State-Space Linear Systems 5. 2 Linearization Causality, Time Invariance, and Linearity 31 Contents Preamble xiii Linear Systems I Basic Concepts 1 I System Representation 3 1 State-Space Linear Systems 5 1.1 State-Space Linear Systems 5 1.2 Block Diagrams 7 1.3 Exercises 11 2 Linearization

More information

Chap. 3. Controlled Systems, Controllability

Chap. 3. Controlled Systems, Controllability Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :

More information

Jim Lambers MAT 610 Summer Session Lecture 1 Notes

Jim Lambers MAT 610 Summer Session Lecture 1 Notes Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra

More information

EE363 homework 7 solutions

EE363 homework 7 solutions EE363 Prof. S. Boyd EE363 homework 7 solutions 1. Gain margin for a linear quadratic regulator. Let K be the optimal state feedback gain for the LQR problem with system ẋ = Ax + Bu, state cost matrix Q,

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

5 More on Linear Algebra

5 More on Linear Algebra 14.102, Math for Economists Fall 2004 Lecture Notes, 9/23/2004 These notes are primarily based on those written by George Marios Angeletos for the Harvard Math Camp in 1999 and 2000, and updated by Stavros

More information

Math 321: Linear Algebra

Math 321: Linear Algebra Math 32: Linear Algebra T. Kapitula Department of Mathematics and Statistics University of New Mexico September 8, 24 Textbook: Linear Algebra,by J. Hefferon E-mail: kapitula@math.unm.edu Prof. Kapitula,

More information

Identification Methods for Structural Systems

Identification Methods for Structural Systems Prof. Dr. Eleni Chatzi System Stability Fundamentals Overview System Stability Assume given a dynamic system with input u(t) and output x(t). The stability property of a dynamic system can be defined from

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Generalized Eigenvectors and Jordan Form

Generalized Eigenvectors and Jordan Form Generalized Eigenvectors and Jordan Form We have seen that an n n matrix A is diagonalizable precisely when the dimensions of its eigenspaces sum to n. So if A is not diagonalizable, there is at least

More information

Perspective. ECE 3640 Lecture 11 State-Space Analysis. To learn about state-space analysis for continuous and discrete-time. Objective: systems

Perspective. ECE 3640 Lecture 11 State-Space Analysis. To learn about state-space analysis for continuous and discrete-time. Objective: systems ECE 3640 Lecture State-Space Analysis Objective: systems To learn about state-space analysis for continuous and discrete-time Perspective Transfer functions provide only an input/output perspective of

More information

Topic 2 Quiz 2. choice C implies B and B implies C. correct-choice C implies B, but B does not imply C

Topic 2 Quiz 2. choice C implies B and B implies C. correct-choice C implies B, but B does not imply C Topic 1 Quiz 1 text A reduced row-echelon form of a 3 by 4 matrix can have how many leading one s? choice must have 3 choice may have 1, 2, or 3 correct-choice may have 0, 1, 2, or 3 choice may have 0,

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

BASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C =

BASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C = CHAPTER I BASIC NOTIONS (a) 8666 and 8833 (b) a =6,a =4 will work in the first case, but there are no possible such weightings to produce the second case, since Student and Student 3 have to end up with

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

REVIEW FOR EXAM II. The exam covers sections , the part of 3.7 on Markov chains, and

REVIEW FOR EXAM II. The exam covers sections , the part of 3.7 on Markov chains, and REVIEW FOR EXAM II The exam covers sections 3.4 3.6, the part of 3.7 on Markov chains, and 4.1 4.3. 1. The LU factorization: An n n matrix A has an LU factorization if A = LU, where L is lower triangular

More information

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT Math Camp II Basic Linear Algebra Yiqing Xu MIT Aug 26, 2014 1 Solving Systems of Linear Equations 2 Vectors and Vector Spaces 3 Matrices 4 Least Squares Systems of Linear Equations Definition A linear

More information

2.3. VECTOR SPACES 25

2.3. VECTOR SPACES 25 2.3. VECTOR SPACES 25 2.3 Vector Spaces MATH 294 FALL 982 PRELIM # 3a 2.3. Let C[, ] denote the space of continuous functions defined on the interval [,] (i.e. f(x) is a member of C[, ] if f(x) is continuous

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

MIT Final Exam Solutions, Spring 2017

MIT Final Exam Solutions, Spring 2017 MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of

More information

THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS

THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS KEITH CONRAD. Introduction The easiest matrices to compute with are the diagonal ones. The sum and product of diagonal matrices can be computed componentwise

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

Control Systems. System response. L. Lanari

Control Systems. System response. L. Lanari Control Systems m i l e r p r a in r e v y n is o System response L. Lanari Outline What we are going to see: how to compute in the s-domain the forced response (zero-state response) using the transfer

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

Topic # /31 Feedback Control Systems

Topic # /31 Feedback Control Systems Topic #7 16.30/31 Feedback Control Systems State-Space Systems What are the basic properties of a state-space model, and how do we analyze these? Time Domain Interpretations System Modes Fall 2010 16.30/31

More information

W2 ) = dim(w 1 )+ dim(w 2 ) for any two finite dimensional subspaces W 1, W 2 of V.

W2 ) = dim(w 1 )+ dim(w 2 ) for any two finite dimensional subspaces W 1, W 2 of V. MA322 Sathaye Final Preparations Spring 2017 The final MA 322 exams will be given as described in the course web site (following the Registrar s listing. You should check and verify that you do not have

More information

Analog Signals and Systems and their properties

Analog Signals and Systems and their properties Analog Signals and Systems and their properties Main Course Objective: Recall course objectives Understand the fundamentals of systems/signals interaction (know how systems can transform or filter signals)

More information