ECE557 Systems Control

Bruce Francis

Course notes, Version 2.0, September 2008

Preface

This is the second Engineering Science course on control. It assumes ECE356 as a prerequisite. If you didn't take ECE356, you must go through Chapters 2 and 3 of the ECE356 course notes. This course is on the state-space approach to control system analysis and design. By contrast, ECE356 treated frequency domain methods. Generally speaking, the state-space methods scale better to higher-order, multi-input/output systems. The frequency domain methods use complex function theory; the state-space approach uses linear algebra: eigenvalues, subspaces, and all that. The emphasis in the lectures will be on concepts, examples, and use of the theory.

There are several computer applications for solving numerical problems in this course. The most widely used is MATLAB, but it's expensive. I like Scilab, which is free. Others are Mathematica (expensive) and Octave (free).


Contents

1 Introduction
  1.1 State Models
  1.2 Examples
    1.2.1 Magnetic levitation
    1.2.2 Vehicles
  1.3 Problems

2 The Equation ẋ = Ax
  2.1 Brief Review of Some Linear Algebra
  2.2 Eigenvalues and Eigenvectors
  2.3 The Jordan Form
  2.4 The Transition Matrix
  2.5 Stability
  2.6 Problems

3 More Linear Algebra
  3.1 Subspaces
  3.2 Linear Transformations
  3.3 Matrix Equations
  3.4 Invariant Subspaces
  3.5 Problems

4 Controllability
  4.1 Reachable States
  4.2 Properties of Controllability
  4.3 The PBH (Popov-Belevitch-Hautus) Test
  4.4 Controllability from a Single Input
  4.5 Pole Assignment
  4.6 Stabilizability
  4.7 Problems

5 Observability
  5.1 State Reconstruction
  5.2 The Kalman Decomposition
  5.3 Detectability
  5.4 Observers
  5.5 Problems

6 Feedback Loops
  6.1 BIBO Stability
  6.2 Feedback Stability
  6.3 Observer-Based Controllers
  6.4 Problems

7 Tracking and Regulation
  7.1 Review of Tracking Steps
  7.2 Distillation Columns
  7.3 Problem Setup
  7.4 Tools for the Solution
  7.5 Regulator Problem Solution
  7.6 Unobservability
  7.7 More Examples
  7.8 Problems

8 Optimal Control
  8.1 Minimizing Quadratic Functions with Equality Constraints
  8.2 The LQR Problem and Solution
  8.3 Hand Waving
  8.4 Sketch of Proof that F is Optimal
  8.5 Problems

Chapter 1

Introduction

Control is that beautiful part of system science/engineering where we get to design part of the system, the controller, so that the system performs as intended. Control is a very rich subject, ranging from pure theory (Can a robot with just vision sensors be programmed to ride a unicycle?) down to the writing of real-time code. This course is mathematical, but that doesn't imply it is only theoretical and isn't applicable to real problems. You are assumed to know Chapters 2 and 3 of the ECE356 course notes. This chapter gives a brief review of only part of that material and is not sufficient on its own.

First, some notation. Usually, a vector is written as a column vector, but sometimes, to save space, it is written as an n-tuple:

    x = [ x1 ]
        [ ⋮  ]    or    x = (x1, ..., xn).
        [ xn ]

1.1 State Models

Systems that are linear, time-invariant, causal, finite-dimensional, and having proper transfer functions have state models,

    ẋ = Ax + Bu,  y = Cx + Du.

Here u, x, y are vector-valued functions of t and A, B, C, D are real constant matrices.

Deriving State Models

How to get a state model depends on what we have to start with.

Example: n-th order ODE. Suppose we have the system

    2ÿ - ẏ + 3y = u.

The natural state vector is

    x = [ x1 ] := [ y ].
        [ x2 ]    [ ẏ ]

Then

    ẋ1 = x2
    ẋ2 = (1/2)x2 - (3/2)x1 + (1/2)u,

so

    A = [  0    1  ],   B = [  0  ],   C = [ 1  0 ],   D = 0.
        [ -3/2 1/2 ]        [ 1/2 ]

This technique extends to

    an y^(n) + ··· + a1 ẏ + a0 y = u.

What about derivatives on the right-hand side:

    2ÿ - ẏ + 3y = u̇ - 2u?

The transfer function is

    Y(s) = (s - 2)/(2s² - s + 3) · U(s).

Introduce an intermediate signal v:

    Y(s) = (s - 2) · 1/(2s² - s + 3) · U(s),   V(s) := 1/(2s² - s + 3) · U(s).

Then

    2v̈ - v̇ + 3v = u
    y = v̇ - 2v.

Taking x = (v, v̇) we get

    A = [  0    1  ],   B = [  0  ],   C = [ -2  1 ],   D = 0.
        [ -3/2 1/2 ]        [ 1/2 ]

This technique extends to

    y^(n) + a_{n-1} y^(n-1) + ··· + a1 ẏ + a0 y = b_{n-1} u^(n-1) + ··· + b0 u.

The transfer function is

    G(s) = (b_{n-1}s^{n-1} + b_{n-2}s^{n-2} + ··· + b0)/(s^n + a_{n-1}s^{n-1} + ··· + a0).

Then G(s) = C(sI - A)⁻¹B,

where

    A = [  0    1    0   ···   0
           0    0    1   ···   0
           ⋮                   ⋮
           0    0    0   ···   1
          -a0  -a1  -a2  ···  -a_{n-1} ],   B = [ 0 ],   C = [ b0  b1  ···  b_{n-1} ].
                                                [ ⋮ ]
                                                [ 0 ]
                                                [ 1 ]

This state model is called the controllable (canonical) realization of G(s). For the case n = m, you divide the denominator into the numerator and thereby factor G(s) into the sum of a constant and a strictly proper transfer function. This gives D ≠ 0, namely, the constant. If m > n, there is no state model.

What if we have two inputs u1, u2, two outputs y1, y2, and coupled equations such as

    ÿ1 - ẏ1 + ẏ2 + 3y1 = u1 + u2
    2 d³y2/dt³ - ẏ1 + ẏ2 + 4y1 = u2?

The natural state is x = (y1, ẏ1, y2, ẏ2, ÿ2). Please complete this example.

Let's study the transfer matrix for the state model ẋ = Ax + Bu, y = Cx + Du. Take Laplace transforms with zero initial conditions:

    sX(s) = AX(s) + BU(s),   Y(s) = CX(s) + DU(s).

Eliminate X(s):

    (sI - A)X(s) = BU(s)
    X(s) = (sI - A)⁻¹BU(s)
    Y(s) = [C(sI - A)⁻¹B + D]U(s).

The matrix C(sI - A)⁻¹B + D is the transfer matrix. This leads to the realization problem: given G(s), find A, B, C, D such that G(s) = C(sI - A)⁻¹B + D. A solution exists iff G(s) is rational and proper (every element of G(s) has deg denom ≥ deg num). The solution is never unique. There are general procedures for getting a state model, but we choose not to cover this topic in the interest of moving on to other things.
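As an aside, here is how the realization problem looks computationally. This is a minimal sketch of my own (it assumes Python with numpy and scipy; the Scilab/MATLAB tools mentioned in the preface have analogous functions) for the example G(s) = (s - 2)/(2s² - s + 3) above; it also illustrates that the realization is not unique.

    import numpy as np
    from scipy import signal

    num = [1, -2]        # numerator s - 2
    den = [2, -1, 3]     # denominator 2s^2 - s + 3

    # scipy returns a controller-canonical-style realization (A, B, C, D).
    A, B, C, D = signal.tf2ss(num, den)
    print(A, B, C, D, sep="\n")

    # Sanity check: converting back recovers the transfer function
    # (up to a common scaling of numerator and denominator).
    print(signal.ss2tf(A, B, C, D))

    # Non-uniqueness: any similarity transformation gives another
    # realization with the same transfer function.
    T = np.array([[1.0, 1.0], [0.0, 2.0]])
    Ti = np.linalg.inv(T)
    print(signal.ss2tf(Ti @ A @ T, Ti @ B, C @ T, D))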

1.2 Examples

Here we look at two examples that we'll use repeatedly for illustration.

1.2.1 Magnetic levitation

[Figure: electromagnet circuit with applied voltage u, resistance R, inductance L, and current i; the iron ball hangs a distance y below the magnet.]

This example was used frequently in ECE356. Imagine an electromagnet suspending an iron ball. Let the input be the voltage u and the output the position y of the ball below the magnet; let i denote the current in the circuit. Then

    L di/dt + Ri = u.

Also, it can be derived that the magnetic force on the ball has the form Ki²/y², K a constant. Thus

    Mÿ = Mg - Ki²/y².

Realistic numerical values are M = 0.1 kg, R = 15 ohms, L = 0.5 H, K = 0.0001 N·m²/A², g = 9.8 m/s². Substituting in these numbers gives the equations

    0.5 di/dt + 15i = u
    0.1 d²y/dt² = 0.98 - 0.0001 i²/y².

Define state variables x = (x1, x2, x3) = (i, y, ẏ). Then the nonlinear state model is ẋ = f(x, u), where

    f(x, u) = (-30x1 + 2u, x3, 9.8 - 0.001 x1²/x2²).

Suppose we want to stabilize the ball at y = 1 cm, or 0.01 m. We need a linear model valid in the neighbourhood of that value. Solve for the equilibrium point (x̄, ū) where x̄2 = 0.01:

    -30x̄1 + 2ū = 0,   x̄3 = 0,   9.8 - 0.001 x̄1²/0.01² = 0.

Thus

    x̄ = (0.99, 0.01, 0),   ū = 14.85.
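As a quick numerical check of this equilibrium calculation, here is a short sketch of my own (assuming numpy and scipy; the same can be done in Scilab or MATLAB): solve f(x, u) = 0 directly with x2 pinned at 0.01 m.

    import numpy as np
    from scipy.optimize import fsolve

    def f(x, u):
        x1, x2, x3 = x
        return np.array([-30*x1 + 2*u, x3, 9.8 - 0.001*x1**2/x2**2])

    # Unknowns are x1, x3, u; x2 is fixed at 0.01.
    def eqns(z):
        x1, x3, u = z
        return f([x1, 0.01, x3], u)

    x1, x3, u = fsolve(eqns, [1.0, 0.0, 15.0])
    print(x1, x3, u)   # approximately 0.99, 0, 14.85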

The linearized model is δẋ = A δx + B δu, δy = C δx, where A equals the Jacobian of f with respect to x, evaluated at (x̄, ū), and B equals the same except with respect to u:

    A = [      -30              0            0
                0               0            1
         -0.002x̄1/x̄2²    0.002x̄1²/x̄2³      0 ] evaluated at (x̄, ū)

      = [  -30     0    0
            0      0    1
          -19.8  1940   0 ],

    B = [ 2 ],   C = [ 0  1  0 ].
        [ 0 ]
        [ 0 ]

The eigenvalues of A are -30 and ±44.05, the units being s⁻¹. The corresponding time constants are 1/30 = 0.033 s and 1/44.05 = 0.023 s. The first is the time constant of the electric circuit; the second, the time constant of the magnetics.

1.2.2 Vehicles

The second example is a vehicle control problem motivated by research on intelligent highway systems. We begin with the simplest vehicle, a cart with a motor driving one wheel:

[Figure: motorized cart with input voltage u and cart position y.]

The input is the voltage u to the motor, the output the cart position y. We want the model from u to y. Free body diagrams:

[Figure: free body diagrams of the motor (voltage u, shaft angle θ, torque τ), the wheel (torque τ, contact force f), and the cart (force f, position y).]

The cart. A force f is applied via the wheel through the axle:

    Mÿ = f.   (1.1)

The wheel. An equal and opposite force f at the axle; a horizontal force where the wheel contacts the floor. If the inertia of the wheel is negligible, the two horizontal forces are equal. Finally, a torque τ from the motor. Equating moments about the axle gives τ = fr, where r is the radius of the wheel. Thus

    f = τ/r.   (1.2)

The motor. The electric circuit equation is

    L di/dt + Ri = u - vb,   (1.3)

where vb is the back emf. The torque produced by the motor:

    τm = Ki.   (1.4)

Newton's second law for the motor shaft:

    Jθ̈ = τm - τ.   (1.5)

The back emf is

    vb = Kb θ̇.   (1.6)

Finally, the relationship between shaft angle and cart position:

    y = rθ.   (1.7)

Combining. The block diagram is then

[Block diagram of (1.1)-(1.7): u, minus the back-emf feedback Kb θ̇, drives 1/(Ls + R) to produce i; the gain K produces τm; the shaft inertia 1/(Js) and the wheel-radius gains r and 1/r couple to the cart dynamics 1/(Ms) to produce ẏ; an integrator 1/s produces y.]

The inner loop can be reduced, giving

[Block diagram: u, minus (Kb/r)ẏ, drives K/(Ls + R), then α/s to produce ẏ, then 1/s to produce y]

where

    α = r/(r²M + J).

Finally, we have the third-order system

[Block diagram: u drives β/(s² + (R/L)s + γ) to produce ẏ, then 1/s to produce y]

where

    β = αK/L,   γ = αKKb/(rL).

Although this vehicle is very easy to control, for more complex vehicles (Jeeps on open terrain) it's customary to design a loop to cancel the dynamics, leaving a simpler kinematic vehicle, like this:

[Block diagram: ẏref drives a feedback loop closed around β/(s² + (R/L)s + γ), producing ẏ, then 1/s produces y]

If the loop is well designed, that is, ẏ ≈ ẏref, we can regard the system as merely a kinematic point, with input a velocity, say v, and output a position, y.

Platoons. Now suppose there are several of these motorized carts. We want them to move in a straight line like this:

[Figure: a platoon of carts; the leader is under cruise control and each follower stays a distance d behind the vehicle ahead.]

A designated leader should go at constant speed; the second should follow at a fixed distance d; the third should follow the second at the distance d; and so on. We'll return to this problem later.

1.3 Problems

The first few problems study the concept of linearity of a system. Recall that a system F with input u and output y is linear if it satisfies two conditions: superposition, i.e., F(u1 + u2) = F(u1) + F(u2), and homogeneity, i.e., F(cu) = cF(u), c a real constant. To prove a system is not linear, you have to give a counterexample to one of these two conditions.

1. Consider a quantizer Q with input u(t), which can take on a continuum of values, and output y(t), which can take on only countably many values, say {bk}, k ∈ Z. More specifically, suppose R is partitioned into intervals Ik, k ∈ Z, and if u(t) ∈ Ik, then y(t) = bk. Prove that Q is not linear.

2. Let S denote the ideal sampler of sampling period T; it maps a continuous-time signal u(t) into the discrete-time signal u[k] = u(kT). Let H denote the synchronized zero-order hold; it maps a discrete-time signal y[k] into y(t), where y(t) = y[k], kT ≤ t < (k+1)T. Then HS maps u(t) to y(t), where y(t) = u(kT), kT ≤ t < (k+1)T. Is HS linear? If so, prove it; if not, give a counterexample.

3. Consider the amplitude modulation system with input u(t) and output y(t) = u(t)cos(t). Is it linear?

4. At time t = 0 a force v(t) is applied to a mass M whose position is y(t); the mass is initially at rest. Thus Mÿ = v, where y(0) = ẏ(0) = 0. The force is the output of a saturating actuator with input u(t) in this way:

    v = {  u,   |u| ≤ 1
        {  1,   u > 1
        { -1,   u < -1.

Is the system from u to y linear?

5. Give an example of a system that is linear, infinite-dimensional, causal, and time-varying.

6. Express the superposition property of a system F in terms of a block diagram. Express the homogeneity property in like manner.

7. Both by hand and by Scilab/MATLAB, find a state model for the system with transfer function

    G(s) = (s - 3)/(2s³ + s² - 2s).

8. Consider the system model ẋ = Ax + Bu, y = Cx with

    A = [ 2.5  0  0 ],   B = [ 0   ],   C = [ 2  0  0 ].
        [ 0    0  0 ]        [ 4   ]
        [ 2    3  0 ]        [ 0.5 ]

Both by hand and by Scilab/MATLAB, find the transfer function from u to y.

9. Kirchhoff's laws for a circuit lead to algebraic constraints (e.g., currents into a node sum to zero). Consider a system with inputs u1, u2 and outputs y1, y2 governed by the equations

    ÿ1 + 2ẏ1 + y2 = u1
    y1 + y2 = u2.

Find the transfer matrix from u = (u1, u2) to y = (y1, y2). Does this system have a state model? If so, find one.

10. Consider the system with input u(t) and output y(t) where

    4ÿ + ẏ² - y = (3t² + 8)u.

The nominal input and output are u0(t) = 1, y0(t) = t² (you can check that they satisfy the differential equation). Derive a nonlinear state model of the form ẋ = f(x, u, t). Linearize this about the nominal state and input, ending up with a linear state equation.

11. An unforced pendulum is modeled by the equation

    Lθ̈ + g sin θ = 0,

where L = length, g = gravity constant, θ = angle of the pendulum. (a) Put this model in the form of a state equation. (b) Find all equilibrium points. (c) Find the linearized model for each equilibrium point.

12. A system has three inputs u1, u2, u3 and three outputs y1, y2, y3. The equations are

    d³y1/dt³ + a1ÿ1 + a2(ẏ1 + ẏ2) + a3(y1 - y3) = u1
    ÿ2 + a4(ẏ2 - ẏ1 + 2ẏ3) + a5(y2 - y1) = u2
    ẏ3 + a6(y3 - y1) = u3.

Find a state-space model for this system.

13. Find two different state models for the system ÿ + aẏ + by = u̇ + cu.


Chapter 2

The Equation ẋ = Ax

The object of study in this chapter is the unforced state equation ẋ = Ax. Here A is an n × n real matrix and x(t) is an n-dimensional vector-valued function of time.

2.1 Brief Review of Some Linear Algebra

In this brief section we review these concepts/results: Rⁿ, linear independence of a set of vectors, span of a set of vectors, subspace, basis for a subspace, rank of a matrix, existence and uniqueness of a solution to Ax = b where A is not necessarily square, inverse of a matrix, invertibility. If you remember them (and I hope you do), skip to the next section.

The symbol Rⁿ stands for the vector space of n-tuples, i.e., ordered lists of n real numbers.

A set of vectors {v1, ..., vk} in Rⁿ is linearly independent if none is a linear combination of the others. One way to check this is to write the equation

    c1v1 + ··· + ckvk = 0

and then try to solve for the ci's. The set is linearly independent iff the only solution is ci = 0 for every i.

The span of {v1, ..., vk}, denoted Span{v1, ..., vk}, is the set of all linear combinations of these vectors.

A subspace V of Rⁿ is a subset of Rⁿ that is also a vector space in its own right. This is true iff these two conditions hold: if x, y are in V, then so is x + y; if x is in V and c is a scalar, then cx is in V. Thus V is closed under the operations of addition and scalar multiplication. In R³ the subspaces are the lines through the origin, the planes through the origin, the whole of R³, and the set consisting of only the zero vector.

A basis for a subspace is a set of linearly independent vectors whose span equals the subspace. The number of elements in a basis is the dimension of the subspace.

The rank of a matrix is the dimension of the span of its columns. This can be proved to equal the dimension of the span of its rows.

The equation Ax = b has a solution iff b belongs to the span of the columns of A, equivalently

    rank A = rank [A  b].

When a solution exists, it is unique iff the columns of A are linearly independent, that is, the rank of A equals its number of columns.
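As a computational aside (a sketch of my own, assuming numpy; the example matrix here is illustrative, not from the notes), both tests are one-liners with matrix_rank:

    import numpy as np

    A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])
    b = np.array([1.0, 2.0, 3.0])

    rA  = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))

    solvable = (rA == rAb)                     # is b in the column span of A?
    unique = solvable and rA == A.shape[1]     # columns linearly independent?
    print(solvable, unique)                    # True True for this example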

The inverse of a square matrix A is a matrix B such that BA = I. If this is true, then AB = I. The inverse is unique and we write A⁻¹. A square matrix A is invertible iff its rank equals its dimension (we say A has "full rank"); equivalently, its determinant is nonzero. The inverse equals the adjoint divided by the determinant.

2.2 Eigenvalues and Eigenvectors

Now we turn to ẋ = Ax. The time evolution of x(t) can be understood from the eigenvalues and eigenvectors of A, a beautiful connection between dynamics and algebra. Recall that the eigenvalue equation is

    Av = λv.

Here λ is a real or complex number and v is a nonzero real or complex vector; λ is an eigenvalue and v a corresponding eigenvector. The eigenvalues of A are unique but the eigenvectors are not: if v is an eigenvector, so is cv for any real number c ≠ 0. The spectrum of A, denoted σ(A), is its set of eigenvalues. The spectrum consists of n numbers, in general complex, and they are equal to the zeros of the characteristic polynomial det(sI - A).

Example. Consider two carts and a dashpot like this:

[Figure: two carts of masses M1 and M2 with positions x1 and x2, coupled by a dashpot with coefficient D.]

Take D = 1, M1 = 1, M2 = 1/2, x3 = ẋ1, x4 = ẋ2. You can derive that the model is ẋ = Ax, where

    A = [ 0  0  1  0
          0  0  0  1
          0  0 -1  1
          0  0  2 -2 ].

The characteristic polynomial of A is s³(s + 3), and therefore σ(A) = {0, 0, 0, -3}.

The equation Av = λv says that the action of A on an eigenvector is very simple: just multiplication by the eigenvalue. Likewise, the motion of x(t) starting at an eigenvector is very simple.

Lemma 2.2.1. If x(0) is an eigenvector v of A and λ the corresponding eigenvalue, then x(t) = e^{λt}v. Thus x(t) is an eigenvector too for every t.

Proof. The initial-value problem ẋ = Ax, x(0) = v has a unique solution; this is from differential equation theory. So all we have to do is show that e^{λt}v satisfies both the initial condition and the differential equation, for then e^{λt}v must be the solution x(t). The initial condition is easy: e^{λt}v evaluated at t = 0 equals v. And for the differential equation,

    d/dt (e^{λt}v) = e^{λt}λv = e^{λt}Av = A(e^{λt}v).  □

The result of the lemma extends to more than one eigenvalue. Let λ1, ..., λn be the eigenvalues of A and let v1, ..., vn be corresponding eigenvectors. Suppose the initial state x(0) can be written as a linear combination of the eigenvectors:

    x(0) = c1v1 + ··· + cnvn.

This is certainly possible for every x(0) if the eigenvectors are linearly independent. Then the solution satisfies

    x(t) = c1e^{λ1t}v1 + ··· + cne^{λnt}vn.

This is called a modal expansion of x(t).

Example.

    A = [ -1  1 ],   λ1 = 0,  λ2 = -3,   v1 = [ 1 ],   v2 = [  1 ].
        [  2 -2 ]                             [ 1 ]         [ -2 ]

Let's say x(0) = (0, 1). The equation

    x(0) = c1v1 + c2v2

is equivalent to x(0) = Vc, where V is the 2 × 2 matrix with columns v1, v2 and c is the vector (c1, c2). Solving gives c1 = 1/3, c2 = -1/3. So

    x(t) = (1/3)v1 - (1/3)e^{-3t}v2.
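The modal expansion is easy to compute numerically. Here is a numpy sketch of my own for this example. Note that numpy scales eigenvectors to unit length, so the coefficients c differ from (1/3, -1/3) by that scaling, while x(t) itself is unchanged.

    import numpy as np

    A  = np.array([[-1.0, 1.0], [2.0, -2.0]])
    x0 = np.array([0.0, 1.0])

    lam, V = np.linalg.eig(A)      # columns of V are eigenvectors
    c = np.linalg.solve(V, x0)     # modal coefficients: V c = x(0)

    def x(t):
        return V @ (c * np.exp(lam * t))

    print(lam)       # 0 and -3 (possibly in the other order)
    print(x(0.0))    # recovers x(0)
    print(x(5.0))    # near the steady value (1/3)(1, 1)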

The case of complex eigenvalues is only a little complicated. If λ1 is a complex eigenvalue, some other eigenvalue, say λ2, is its complex conjugate: λ2 = λ̄1. The two eigenvectors, v1 and v2, can be taken to be complex conjugates too (easy proof). Then if x(0) is real and we solve x(0) = c1v1 + c2v2, we'll find that c1, c2 are complex conjugates as well. Thus the equation will look like

    x(0) = c1v1 + c̄1v̄1 = 2 Re(c1v1),

where Re denotes real part.

Example.

    A = [  0  1 ],   λ1 = j,  λ2 = -j,   v1 = [ 1 ],   v2 = [  1 ].
        [ -1  0 ]                             [ j ]         [ -j ]

Suppose x(0) = (0, 1). Then c1 = -j/2, c2 = j/2, and

    x(t) = 2 Re(c1e^{λ1t}v1) = Re( -je^{jt} [ 1 ] ) = [ sin t ].
                                            [ j ]     [ cos t ]

2.3 The Jordan Form

Now we turn to the structure theory of a matrix related to its eigenvalues. It's convenient to introduce a term, the kernel of a matrix A. Kernel is another name for nullspace. Thus Ker A is the set of all vectors x such that Ax = 0; that is, Ker A is the solution space of the homogeneous equation Ax = 0. Notice that the zero vector is always in the kernel. If A is square, then Ker A is the zero subspace, and we write Ker A = 0, iff 0 is not an eigenvalue of A. If 0 is an eigenvalue, then Ker A equals the span of all the eigenvectors corresponding to this eigenvalue; we say Ker A is the eigenspace corresponding to the eigenvalue 0. More generally, if λ is an eigenvalue of A, the corresponding eigenspace is the solution space of Av = λv, that is, of (A - λI)v = 0, that is, Ker(A - λI).

Let's begin with the simplest case, where A is 2 × 2 and has 2 distinct eigenvalues, λ1, λ2. You can show (this is a good exercise) that there are then 2 linearly independent eigenvectors, say v1, v2 (maybe complex vectors). The equations

    Av1 = λ1v1,   Av2 = λ2v2

are equivalent to the matrix equation

    A [v1 v2] = [v1 v2] [ λ1  0 ],
                        [ 0  λ2 ]

that is, AV = VA_JF, where

    V = [v1 v2],   A_JF = diag(λ1, λ2).

The latter matrix is the Jordan form of A. It is unique up to reordering of the eigenvalues. The mapping A ↦ A_JF = V⁻¹AV is called a similarity transformation.

Example:

    A = [ -1  1 ],   V = [ 1  1 ],   A_JF = [ 0  0 ].
        [  2 -2 ]        [ 1 -2 ]           [ 0 -3 ]

Corresponding to the eigenvalue λ1 = 0 is the eigenvector v1 = (1, 1), the first column of V. All other eigenvectors corresponding to λ1 have the form cv1, c ≠ 0. We call the subspace spanned by v1 the eigenspace corresponding to λ1. Likewise, λ2 = -3 has a one-dimensional eigenspace.

These results extend from n = 2 to general n. Note that in the preceding result we didn't actually need distinctness of the eigenvalues, only linear independence of the eigenvectors.

Theorem 2.3.1. The Jordan form of A is diagonal, i.e., A is diagonalizable by similarity transformation, iff A has n linearly independent eigenvectors. A sufficient condition is n distinct eigenvalues.

The great thing about diagonalization is that the equation ẋ = Ax can be transformed via w = V⁻¹x into ẇ = A_JF w, that is, n decoupled equations:

    ẇi = λiwi,   i = 1, ..., n.

The latter equations are trivial to solve: wi(t) = e^{λit}wi(0), i = 1, ..., n.

Now we look at how to construct the Jordan form when there are not n linearly independent eigenvectors. We start where A has only 0 as an eigenvalue.

Nilpotent matrices. Consider

    [ 0 1 0 ]       [ 0 1 0 ]
    [ 0 0 0 ],      [ 0 0 1 ].   (2.1)
    [ 0 0 0 ]       [ 0 0 0 ]

For both of these matrices, σ(A) = {0, 0, 0}. For the first matrix, the eigenspace Ker A is two-dimensional, and for the second matrix, one-dimensional. These are examples of nilpotent matrices: A is nilpotent if A^k = 0 for some k. The following statements are equivalent (a computational check follows the list):

1. A is nilpotent.
2. All its eigenvalues are 0.
3. Its characteristic polynomial is sⁿ.
4. It is similar to a matrix of the form (2.1), where all elements are 0's, except 0's or 1's on the first diagonal above the main one. This is called the Jordan form of the nilpotent matrix.
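Here is the computational check promised above, a small sketch of my own in sympy (my choice here: exact arithmetic sidesteps the numerical fragility of Jordan forms) verifying statements 1 and 4 for the first matrix in (2.1):

    from sympy import Matrix

    N = Matrix([[0, 1, 0],
                [0, 0, 0],
                [0, 0, 0]])    # the first matrix in (2.1)

    P, J = N.jordan_form()     # N = P J P^(-1)
    print(J)                   # a 2x2 block and a 1x1 block, eigenvalue 0
    print(N**2 == Matrix.zeros(3, 3))   # True: N is nilpotent, N^2 = 0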

Example. Suppose A is 3 × 3 and A = 0. Then of course it's already in Jordan form,

    [ 0 0 0 ]
    [ 0 0 0 ].
    [ 0 0 0 ]

Example. Here we do an example of transforming a nilpotent matrix to Jordan form. Take

    A = [ -1  1  0  0  0
          -1  1  0  1  0
           0  0  0  0  0
           0  0  0 -1  1
           0  0  0 -1  1 ].

The rank of A is 3 and hence the kernel has dimension 2. We can compute that

    A² = [ 0 0 0 1 0        A³ = [ 0 0 0 -1 1
           0 0 0 0 1              0 0 0 -1 1
           0 0 0 0 0              0 0 0  0 0
           0 0 0 0 0              0 0 0  0 0
           0 0 0 0 0 ],           0 0 0  0 0 ],      A⁴ = 0.

Take any vector v5 in Ker A⁴ = R⁵ that is not in Ker A³, for example,

    v5 = (0, 0, 0, 0, 1).

Then take

    v4 = Av5,   v3 = Av4,   v2 = Av3.

We get

    v4 = (0, 0, 0, 1, 1) ∈ Ker A³, ∉ Ker A²
    v3 = (0, 1, 0, 0, 0) ∈ Ker A², ∉ Ker A
    v2 = (1, 1, 0, 0, 0) ∈ Ker A.

Finally, take v1 in Ker A, linearly independent of v2, for example,

    v1 = (0, 0, 1, 0, 0).

Assemble v1, ..., v5 into the columns of V. Then

    V⁻¹AV = A_JF = [ 0 0 0 0 0
                     0 0 1 0 0
                     0 0 0 1 0
                     0 0 0 0 1
                     0 0 0 0 0 ].

This is block diagonal, like this:

    [ 0 | 0 0 0 0
     ---+---------
      0 | 0 1 0 0
      0 | 0 0 1 0
      0 | 0 0 0 1
      0 | 0 0 0 0 ].

In general, the Jordan form of a nilpotent matrix has 0 in each entry except possibly on the first diagonal above the main diagonal, which may have some 1's. A nilpotent matrix has only the eigenvalue 0. Now consider a matrix A that has only one eigenvalue, λ, i.e., det(sI - A) = (s - λ)ⁿ. To simplify notation, suppose n = 3. Letting r = s - λ, we have

    det[rI - (A - λI)] = r³,

i.e., A - λI has only the zero eigenvalue, and hence A - λI =: N, a nilpotent matrix. So the Jordan form of N must look like

    [ 0 ★ 0 ]
    [ 0 0 ★ ],
    [ 0 0 0 ]

where each star can be 0 or 1, and hence the Jordan form of A is

    [ λ ★ 0 ]
    [ 0 λ ★ ].   (2.2)
    [ 0 0 λ ]

To recap, if A has just one eigenvalue, λ, then its Jordan form is λI + N, where N is a nilpotent matrix in Jordan form. An extension of this analysis results in the Jordan form in general. Suppose A is n × n, λ1, ..., λp are the distinct eigenvalues of A, and m1, ..., mp are their multiplicities; that is, the characteristic polynomial is

    det(sI - A) = (s - λ1)^{m1} ··· (s - λp)^{mp}.

Then A is similar to

    A_JF = [ A1          ]
           [     ⋱       ]
           [          Ap ],

where Ai is mi × mi and has only the eigenvalue λi. Thus Ai has the form λiI + Ni, where Ni is a nilpotent matrix in Jordan form.

Example:

    A = [ 0  0  1  0
          0  0  0  1
          0  0 -1  1
          0  0  2 -2 ].

As we saw, the spectrum is σ(A) = {0, 0, 0, -3}. Thus the Jordan form must be of the form

    A_JF = [ 0 ★ 0  0
             0 0 ★  0
             0 0 0  0
             0 0 0 -3 ].

Since A has rank 2, so does A_JF. Thus only one of the stars is 1. Either is possible; for example,

    A_JF = [ 0 1 0  0
             0 0 0  0
             0 0 0  0
             0 0 0 -3 ].

This is block diagonal, with one block per eigenvalue:

    A_JF = [ A1  0  ],   A1 = [ 0 1 0 ],   A2 = [ -3 ].
           [ 0   A2 ]         [ 0 0 0 ]
                              [ 0 0 0 ]

2.4 The Transition Matrix

Let us review from the ECE356 course notes. For a square matrix M, the exponential e^M is defined as

    e^M := I + M + (1/2!)M² + (1/3!)M³ + ···

The matrix e^M is not the same as the component-wise exponential of M. Facts:

1. e^M is invertible for every M, and (e^M)⁻¹ = e^{-M}.
2. e^{M+N} = e^M e^N iff M and N commute, i.e., MN = NM.

The matrix function t ↦ e^{tA} : R → R^{n×n} is then defined and is called the transition matrix associated with A. It has the properties:

1. e^{tA} at t = 0 equals I.
2. e^{tA} and A commute.
3. (d/dt)e^{tA} = Ae^{tA} = e^{tA}A.

Moreover, the solution of ẋ = Ax, x(0) = x0 is x(t) = e^{tA}x0. So e^{tA} maps the state at time 0 to the state at time t. In fact, it maps the state at any time t0 to the state at time t0 + t.

On computing the transition matrix via the Jordan form. If one can compute the Jordan form of A, then e^{tA} can be written in closed form, as follows. The equation

    AV = VA_JF

implies

    A²V = AVA_JF = VA_JF².

Continuing in this way gives

    A^kV = VA_JF^k,

and then

    e^{At}V = Ve^{A_JF t},

so finally

    e^{At} = Ve^{A_JF t}V⁻¹.

The matrix exponential e^{A_JF t} is easy to write down. For example, suppose there's just one eigenvalue, so A_JF = λI + N, N nilpotent, n × n. Then

    e^{A_JF t} = e^{λt}e^{Nt} = e^{λt}[ I + Nt + N²t²/2! + ··· + N^{n-1}t^{n-1}/(n-1)! ].

Via Laplace transforms. Taking Laplace transforms of ẋ = Ax, x(0) = x0 gives sX(s) - x0 = AX(s). This yields X(s) = (sI - A)⁻¹x0. Comparing

    x(t) = e^{tA}x0,   X(s) = (sI - A)⁻¹x0

shows that e^{tA} and (sI - A)⁻¹ are Laplace transform pairs. So one can get e^{tA} by finding the matrix (sI - A)⁻¹ and then taking the inverse Laplace transform of each element.
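To tie the two methods together numerically, here is a sketch of my own (assuming numpy/scipy): compute e^{At} for the modal-expansion example of Section 2.2 both by scipy's expm and by the Jordan-form formula, which for that diagonalizable A reduces to diagonalization.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-1.0, 1.0], [2.0, -2.0]])
    t = 0.7

    lam, V = np.linalg.eig(A)      # A_JF is diagonal for this A
    Phi_jordan = V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)
    Phi_scipy = expm(A * t)

    print(np.allclose(Phi_jordan, Phi_scipy))   # True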

2.5 Stability

The concept of stability is fundamental in control engineering. Here we look at the scenario where the system has no input, but its state has been perturbed and we want to know if the system will recover. This was introduced in the ECE356 course notes. Here we go a little farther now that we're armed with the Jordan form.

The maglev example is a good one to illustrate this point. Suppose a feedback controller has been designed to balance the ball's position at 1 cm below the magnet. Suppose that if the ball is placed at precisely 1 cm it will stay there; that is, the 1 cm location is a closed-loop equilibrium point. Finally, suppose there is a temporary wind gust that moves the ball away from the 1 cm position. The stability questions are: will the ball move back to the 1 cm location; if not, will it at least stay near that location?

So consider ẋ = Ax. Obviously if x(0) = 0, then x(t) = 0 for all t. We say the origin is an equilibrium point: if you start there, you stay there. Equilibrium points can be stable or not. While there are more elaborate and formal definitions of stability for the above homogeneous system, we choose the following two: The origin is asymptotically stable if x(t) → 0 as t → ∞ for all x(0). The origin is stable if x(t) remains bounded as t → ∞ for all x(0). Since x(t) = e^{At}x(0), the origin is asymptotically stable iff every element of the matrix e^{At} converges to zero, and is stable iff every element of the matrix e^{At} remains bounded as t → ∞. Of course, asymptotic stability implies stability.

Asymptotic stability is relatively easy to characterize. Using the Jordan form, one can prove this very important result, where Re denotes real part:

Theorem 2.5.1. The origin is asymptotically stable iff the eigenvalues of A all satisfy Re λ < 0.

Let's say the matrix A is stable if its eigenvalues satisfy Re λ < 0. Then the origin is asymptotically stable iff A is stable.

Now we turn to the more subtle property of stability. We'll do some examples, and we may as well have A in Jordan form. Consider the nilpotent matrix

    A = N = [ 0 0 ].
            [ 0 0 ]

Obviously, x(t) = x(0) for all t, and so the origin is stable. By contrast, consider

    A = N = [ 0 1 ].
            [ 0 0 ]

Then e^{Nt} = I + tN, which is unbounded, and so the origin is not stable. This example extends to the n × n case: if A is nilpotent, the origin is stable iff A = 0.

Here's the test for stability in general in terms of the Jordan form of A:

    A_JF = [ A1          ]
           [     ⋱       ]
           [          Ap ].

Recall that each Ai has just one eigenvalue, λi, and that Ai = λiI + Ni, where Ni is a nilpotent matrix in Jordan form.

Theorem 2.5.2. The origin is stable iff the eigenvalues of A all satisfy Re λ ≤ 0 and, for every eigenvalue with Re λi = 0, the nilpotent matrix Ni is zero, i.e., Ai is diagonal.

Here's an example with complex eigenvalues:

    A = [  0  1 ],   A_JF = [ j  0 ].
        [ -1  0 ]           [ 0 -j ]

The origin is stable since there are two 1 × 1 Jordan blocks. Now consider

    A = [  0  1  1  0
          -1  0  0  1
           0  0  0  1
           0  0 -1  0 ].

The eigenvalues are j, -j, j, -j, and so the Jordan form must look like

    A_JF = [ j ★  0  0
             0 j  0  0
             0 0 -j  ★
             0 0  0 -j ].

Since the rank of A - jI equals 3, the upper star is 1; since the rank of A + jI equals 3, the lower star is 1. Thus

    A_JF = [ j 1  0  0
             0 j  0  0
             0 0 -j  1
             0 0  0 -j ].

Since the Jordan blocks are not diagonal, the origin is not stable.

Example. Consider the cart-spring-damper system:

[Figure: a cart of mass M with position y, attached to a wall by a spring with constant K and a damper with constant D.]

The equation is

    Mÿ + Dẏ + Ky = 0.

Defining x = (y, ẏ), we have ẋ = Ax with

    A = [   0     1   ].
        [ -K/M  -D/M ]

Assume M > 0 and K, D ≥ 0. If D = K = 0, the eigenvalues are {0, 0} and A is a nilpotent matrix in Jordan form; the origin is an unstable equilibrium. If only D = 0 or K = 0 but not both, the origin is stable but not asymptotically stable. And if both D, K are nonzero, the origin is asymptotically stable.

Example. Two points move on the line R. The positions of the points are x1, x2. They move toward each other according to the control laws

    ẋ1 = x2 - x1,   ẋ2 = x1 - x2.

Thus the state is x = (x1, x2) and the state equation is ẋ = Ax,

    A = [ -1  1 ].
        [  1 -1 ]

The eigenvalues are λ1 = 0, λ2 = -2, so the origin is stable but not asymptotically stable. Obviously, the two points tend toward each other; that is, the state x(t) tends toward the subspace V = {x : x1 = x2}. This is the eigenspace for the zero eigenvalue. To see this convergence, write the initial condition as a linear combination of eigenvectors:

    x(0) = c1v1 + c2v2,   v1 = [ 1 ],   v2 = [  1 ].
                               [ 1 ]         [ -1 ]

Then

    x(t) = c1e^{λ1t}v1 + c2e^{λ2t}v2 = c1v1 + c2e^{-2t}v2 → c1v1.

So x1(t) and x2(t) both converge to c1, the same point.

Phase portraits help us visualize state evolution and stability, but they're applicable only in the n = 2 case. Below is a plot in R² of the vector field of one such system; that is, at a grid of points, the directions of the velocity vectors Ax are shown translated to the point x. By following the arrows, we get a trajectory; one is shown.

[Figure: vector field plot in R² with one trajectory traced through it.]

The plot was done using www.math.psu.edu/melvin/phase/newphase.html.
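Theorems 2.5.1 and 2.5.2 also suggest a quick numerical screen: compute the eigenvalues and look at their real parts. Here is a numpy sketch of my own for the two-points example; the tolerance is needed because computed eigenvalues that should be exactly zero come out as roundoff-sized numbers.

    import numpy as np

    A = np.array([[-1.0, 1.0], [1.0, -1.0]])   # the two-points example
    lam = np.linalg.eigvals(A)
    print(lam)                       # 0 and -2

    tol = 1e-9                       # numerical tolerance for "zero"
    print(np.all(lam.real < -tol))   # asymptotically stable? False
    print(np.all(lam.real <= tol))   # necessary for stability: True
    # Eigenvalues on the imaginary axis need the further Jordan-block
    # check of Theorem 2.5.2, which eigenvalues alone cannot reveal.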

You can also use MATLAB, Scilab (free), Mathematica, or Octave (free).

2.6 Problems

1. Are the following vectors linearly independent?

    v1 = (1, -1, 2, 0),   v2 = (1, 0, 2, 2),   v3 = (1, 2, 2, 6)

2. Continuing with the same vectors, find a basis for Span{v1, v2, v3}.

3. What kind of geometric object is {x : Ax = b} when A ∈ R^{m×n}? That is, is it a sphere, a point, what?

4. (a) Let A be an 8 × 8 real matrix with eigenvalues

    2, 2, -3, -3, -3, 8, 4, 4.

Assume

    rank(A - 2I) = 7,   rank(A + 3I) = 6,   rank(A - 4I) = 6.

Write down the Jordan form of A.

(b) The matrix

    A = [ 0 0 0 ]
        [ 1 0 0 ]
        [ 0 0 0 ]

is nilpotent. Write down its Jordan form.

5. Take

    A = [ 0  0  1  0
          0  0  0  1
          0  0 -1  1
          0  0  2 -2 ].

Show that the matrix V constructed as follows satisfies V⁻¹AV = A_JF: Select v3 in Ker A² but not in Ker A. Set v2 = Av3. Select v1 in Ker A such that {v1, v2} is linearly independent. Select an eigenvector v4 corresponding to the eigenvalue -3. Set V = [v1 v2 v3 v4]. (The general construction of the basis for the Jordan form is along these lines.)

6. Let

    A = [ 0  0  1  0
          0  0  0  1
          0  0 -1  1
          0  2  0 -2 ].

Write down the Jordan form of A.

7. Consider

    A = [  σ  ω ],
        [ -ω  σ ]

where σ and ω ≠ 0 are real. Find the Jordan form and the transition matrix.

8. In the previous problem, we saw that when

    A = [  σ  ω ]
        [ -ω  σ ]

its transition matrix is easy to write down. This problem demonstrates that a matrix with distinct complex eigenvalues can be transformed into the above form using a nonsingular transformation. Let

    A = [  1  4 ].
        [ -1  1 ]

Determine the eigenvalues and eigenvectors of A, noting that they form complex conjugate pairs. Let the first eigenvalue be written as a + jb with the corresponding eigenvector v1 + jv2. Take v1 and v2 as the columns of a matrix V. Find V⁻¹AV.

9. Consider the homogeneous state equation ẋ = Ax with

    A = [ 1 3 ]
        [ 2 2 ]

and x0 = (3, 2). Find a modal expansion of x(t).

10. Show that the origin is asymptotically stable for ẋ = Ax iff all poles of every element of (sI - A)⁻¹ are in the open left half-plane. Show that the origin is stable iff all poles of every element of (sI - A)⁻¹ are in the closed left half-plane and those on the imaginary axis have multiplicity 1.

11. Consider the linear system

    ẋ = [  0  1 ] x + [ 0 ] u
        [ -1  0 ]     [ 1 ]

    y = [ 1  0 ] x.

(a) If u(t) is the unit step and x(0) = 0, is y(t) bounded? (b) If u(t) = 0 and x(0) is arbitrary, is y(t) bounded?

12. (a) Suppose that σ(A) = {-1, -3, -3, -1 + j2, -1 - j2} and the rank of (A - λI) at λ = -3 is 4. Determine A_JF.
(b) Suppose that σ(A) = {-1, -2, -2, -2} and the rank of (A - λI) at λ = -2 is 3. Determine A_JF.
(c) Suppose that σ(A) = {-1, -2, -2, -2, -3} and the rank of (A - λI) at λ = -2 is 3. Determine A_JF.

13. Find A_JF for

    A = [  0  1  0
           0  0  1
          -2 -4 -3 ].

14. Summarize all the ways to find exp(At). Then find exp(At) for

    A = [ 0  1 ].
        [ 0 -2 ]

15. Consider the set {cv : c ≥ 0}, where v ≠ 0 is a given vector in R². This set is called a ray from the origin in the direction of v. More generally, {x0 + cv : c ≥ 0} is a ray from x0 in the direction of v. Find a 2 × 2 matrix A and a vector x0 such that the solution x(t) of ẋ = Ax, x(0) = x0, is a ray.

16. Consider the following system:

    ẋ1 = x2
    ẋ2 = -x1 - 3x2.

Do a phase portrait using Scilab or MATLAB. Interpret the phase portrait in terms of the modal decomposition of the system. Do lots more examples of this type.

Chapter 3

More Linear Algebra

This chapter extends our knowledge of linear algebra: subspaces, matrix representations, linear matrix equations, and invariant subspaces.

3.1 Subspaces

Let X = Rⁿ and let V, W be subspaces of X. Then V + W denotes the set {v + w : v ∈ V, w ∈ W}, and it is a subspace of X. The set union V ∪ W is not a subspace in general, unless one is contained in the other. The intersection V ∩ W is, however, a subspace. As an example: X = R³, V a line, W a plane. (In this chapter when we speak of lines we mean lines through 0. Similarly for planes.) Then V + W = R³ if V does not lie in W. If V ⊆ W, then of course V + W = W. It is a fact that

    dim(V + W) = dim(V) + dim(W) - dim(V ∩ W).

For example, think of V, W as two planes in R³ that intersect in a line. Then the dimension equation evaluates to 3 = 2 + 2 - 1.

Two subspaces V, W are independent if V ∩ W = 0. This is not the same as being orthogonal. For example, two lines in R² are independent iff they are not colinear (i.e., the angle between them is not 0), while they are orthogonal iff the angle is 90°. Every vector x in V + W can be written as x = v + w, v ∈ V, w ∈ W. If V, W are independent, then v, w are unique. Think of v as the component of x in V and w as its component in W. Let's prove uniqueness. Suppose

    x = v1 + w1 = v2 + w2.

Then v1 - v2 = w2 - w1. The left-hand side is in V and the right-hand side is in W. Since the intersection of these two subspaces is zero, both sides equal 0.

Clearly, V, W are independent iff dim(V + W) = dim(V) + dim(W).

Three subspaces U, V, W are independent if U and V + W are independent, V and U + W are independent, and W and U + V are independent. This is not the same as being pairwise independent. As an example, let U, V, W be 1-dimensional subspaces of R³, i.e., three lines. When are they independent? Pairwise independent? Every vector x in U + V + W can be written as x = u + v + w, u ∈ U, v ∈ V, w ∈ W. If U, V, W are independent, then u, v, w are unique. Also, U, V, W are independent iff

    dim(U + V + W) = dim(U) + dim(V) + dim(W).

If V, W are independent subspaces, we write their sum as V ⊕ W. This is called a direct sum. Likewise for more than two subspaces. Let's finish this section with a handy fact: every subspace has an independent complement, i.e.,

    V ⊆ X  ⟹  (∃ W ⊆ X)  X = V ⊕ W.

Think of X as R³ and V as a plane. Then W can be any line not in the plane.

3.2 Linear Transformations

We now introduce linear transformations. The important point is that a linear transformation is not the same as a matrix, but every linear transformation has a matrix representation once you choose a basis.

Let X = Rⁿ and Y = R^p. A linear function A : X → Y defines a linear transformation (LT); X is called its domain and Y its co-domain. Thus

    A(x1 + x2) = Ax1 + Ax2,   x1, x2 ∈ X
    A(ax) = aAx,   a ∈ R, x ∈ X.

It is an important fact that an LT is uniquely determined by its action on a basis. That is, if A : X → Y is an LT and if {e1, ..., en} is a basis for X, then if we know the vectors Aei, we can compute Ax for every x ∈ X, by linearity.

Example. For us, the most important example is an LT generated by a matrix. Let A ∈ R^{m×n}. For each vector x in Rⁿ, Ax is a vector in R^m. The mapping x ↦ Ax is an LT A : Rⁿ → R^m. Linearity is easy to check.

Example. Take a vector in the plane and rotate it counterclockwise by 90°. This defines an LT A : R² → R². Note that A is not given as a matrix; it's given by its domain, its co-domain, and its action on vectors. If we take a vector to be represented by its Cartesian coordinates, x = (x1, x2), then we've chosen a basis for R². In that case A maps x = (x1, x2) to Ax = (-x2, x1), and so there's an associated rotation matrix

    [ 0 -1 ].
    [ 1  0 ]

We'll return to matrix representation later.

Example. Let X = Rⁿ and let {e1, ..., en} be a basis. Every vector x in X has a unique expansion

    x = a1e1 + ··· + anen,   ai ∈ R.

Let a denote the vector (a1, ..., an), the n-tuple of coordinates of x with respect to the basis. The function x ↦ a defines an LT Q : X → Rⁿ. The equation x = a1e1 + ··· + anen can be written compactly as x = Ea, where E is the matrix with columns e1, ..., en and a is the vector with components a1, ..., an. Therefore a = E⁻¹x and so Qx = E⁻¹x; that is, the action of Q is to multiply by the matrix E⁻¹. For example, let X = R². Take the natural basis

    e1 = [ 1 ],   e2 = [ 0 ].
         [ 0 ]         [ 1 ]

In this case E = I and Qx = x. If the basis instead is

    e1 = [ 1 ],   e2 = [  1 ],
         [ 1 ]         [ -1 ]

then

    E = [ 1  1 ]
        [ 1 -1 ]

and Qx = E⁻¹x.

Every LT on finite-dimensional vector spaces has a matrix representation. Let's do this very important construction carefully. Let A be an LT X → Y, where X = Rⁿ with basis {e1, ..., en} and Y = R^p with basis {f1, ..., fp}. Bring in the coordinate LTs: Q : X → Rⁿ, R : Y → R^p. So now we have the setup

[Diagram: A maps X to Y across the top; Q maps X down to Rⁿ on the left; R maps Y down to R^p on the right.]

The left downward arrow gives us the n-tuple, say a, that represents a vector x in the basis {e1, ..., en}. The right downward arrow gives us the p-tuple, say b, that represents a vector y in the basis {f1, ..., fp}. It's possible to add a fourth LT to complete the square:

[Diagram: the same square with M mapping Rⁿ to R^p along the bottom.]

This is called a commutative diagram. The object M in the diagram is the matrix representation of A with respect to these two bases. Notice that the bottom arrow represents the LT generated by the matrix M; we write M in the diagram for simplicity, but you should understand that really the object is an LT. The matrix M is the p × n matrix that makes the diagram commute, that is,

    Ma = b,   where a = Qx, b = RAx,

for every x ∈ X. In particular, take x = ei, the i-th basis vector in X. Then a is the n-vector with 1 in the i-th entry and 0 otherwise, so Ma equals the i-th column of the matrix M. Thus, we have the following recipe for constructing the matrix M:

1. Take the 1st basis vector e1 of X.
2. Apply the LT A to get Ae1.
3. Find b, the coordinate vector of Ae1 in the basis for Y.
4. Enter this b as column 1 of M.
5. Repeat for the other columns.

Recall that Q is the LT generated by E⁻¹, where the columns of E are the basis in the domain of A. Likewise, R is the LT generated by F⁻¹, where the columns of F are the basis in the co-domain of A. Thus the equation Ma = b reads

    ME⁻¹x = F⁻¹Ax.   (3.1)

Example. Let A : R² → R² be the LT that rotates a vector counterclockwise by 90°. Let's first take the standard bases: e1 = (1, 0), e2 = (0, 1) for the domain and f1 = (1, 0), f2 = (0, 1) for the co-domain. Following the steps, we first apply A to e1, that is, we rotate e1 counterclockwise by 90°; the result is Ae1 = (0, 1). Then we express this vector in the basis {f1, f2}:

    Ae1 = 0·f1 + 1·f2.

Thus the first column of M is (0, 1), the vector of coefficients. Now for the second column, rotate e2 to get (-1, 0) and represent this in the basis {f1, f2}:

    Ae2 = -1·f1 + 0·f2.

So the second column of M is (-1, 0). Thus

    M = [ 0 -1 ].
        [ 1  0 ]

Suppose we had different bases:

    e1 = (1, 1),   e2 = (-1, 2),   f1 = (1, 2),   f2 = (1, 0).

Apply the recipe again. Get Ae1 = (-1, 1). Expand it in the basis {f1, f2}:

    (-1, 1) = (1/2)f1 - (3/2)f2.

Get Ae2 = (-2, -1). Expand it in the basis {f1, f2}:

    (-2, -1) = -(1/2)f1 - (3/2)f2.

Thus

    M = [  1/2  -1/2 ].
        [ -3/2  -3/2 ]

Example. Let A ∈ R^{m×n} and let A : Rⁿ → R^m be the generated LT. It is easy to check that A itself is then the matrix representation of A with respect to the standard bases. Let's do it. Let {e1, ..., en} be the standard basis on Rⁿ and {f1, ..., fm} the standard basis on R^m. Then Ae1 (the LT applied to e1) equals the matrix product Ae1, which is the first column, (a11, a21, ..., am1), of A. This column can be written as a11f1 + ··· + am1fm, and hence (a11, a21, ..., am1) is the first column of the matrix representation of A.

Suppose instead that we have general bases, {e1, ..., en} on Rⁿ and {f1, ..., fm} on R^m. Form the matrices E and F from these basis vectors. From (3.1) we get that the matrix representation M with respect to these bases satisfies

    ME⁻¹ = F⁻¹A,

or equivalently,

    AE = FM.

A very interesting special case of this is where A is square and the same basis {e1, ..., en} is taken for both the domain and co-domain. Then

    AE = EM,

or M = E⁻¹AE; the matrix M is a similarity transformation of the given matrix A. Finally, suppose we start with a square A and take the basis {v1, ..., vn} of generalized eigenvectors. The new matrix representation is our familiar Jordan form, A_JF = V⁻¹AV. Thus the two matrices A and A_JF represent the same LT: A in the given standard basis and A_JF in the basis of generalized eigenvectors.

An LT has two important associated subspaces. Let A : X → Y be an LT. The kernel (or nullspace) of A is the subspace of X on which A is zero:

    Ker A := {x : Ax = 0}.

The LT A is said to be one-to-one if Ker A = 0; equivalently, the homogeneous equation Ax = 0 has only the trivial solution x = 0. The image (or range space) of A is the subspace of Y that A can reach:

    Im A := {y : (∃ x ∈ X) y = Ax}.

We say A is onto if Im A = Y; equivalently, the equation Ax = y has a solution x for every y. Whether A is one-to-one or onto (or both) can be easily checked by examining any matrix representation A: A is one-to-one iff A has full column rank; A is onto iff A has full row rank. If A is a matrix, we will write Im A for the image of the generated LT (it's the column span of the matrix), and we'll write Ker A for the kernel of the LT.

Example. Let A : R³ → R³ map a vector to its projection on the horizontal plane. Then the kernel equals the vertical axis, the image equals the horizontal plane, A is neither onto nor one-to-one, and its matrix with respect to the standard basis is

    [ 1 0 0 ]
    [ 0 1 0 ].
    [ 0 0 0 ]

We could modify the co-domain to have A : R³ → R², again mapping a vector to its projection on the horizontal plane. Then the kernel equals the vertical axis, the image equals the horizontal plane, A is onto but not one-to-one, and its matrix with respect to the standard basis is

    [ 1 0 0 ].
    [ 0 1 0 ]

Example. Let V ⊆ X (think of V as a plane in 3-dimensional space X). Define the function V : V → X, Vx = x. This is an LT called the insertion LT. Clearly V is one-to-one and Im V = V. Suppose we have a basis for V,

    {e1, ..., ek},

and we extend it to get a basis for X,

    {e1, ..., ek, ..., en}.

Then the matrix representation of V is

    V = [ Ik ].
        [ 0  ]

Clearly, rank V = k.

Example. Let X be 3-dimensional space, V a plane (2-dimensional subspace), and W a line not in V. Then V, W are independent subspaces and X = V ⊕ W. Every x in X can be written x = v + w for unique v in V and w in W. Define the function P : X → V mapping x to v. This is an LT called the natural projection onto V. Check that Im P = V, Ker P = W. Suppose {e1, e2} is a basis for V and {e3} a basis for W. The induced matrix representation is

    P = [ 1 0 0 ].
        [ 0 1 0 ]

Example. Let A : X → Y be an LT. Its kernel, Ker A, is a subspace of X; let {e_{k+1}, ..., e_n} be a basis for Ker A and extend it to get a basis for X:

    {e1, ..., ek, ..., en}.

Then {Ae1, ..., Aek} is a basis for Im A. Extend it to get a basis for Y:

    {Ae1, ..., Aek, f_{k+1}, ..., f_p}.

Then the matrix representation of A is

    A = [ Ik  0 ].
        [ 0   0 ]
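Equation (3.1) gives a one-line way to compute matrix representations numerically: M solves FM = AE. The following numpy sketch (my addition, not from the notes) redoes the rotation example with the nonstandard bases:

    import numpy as np

    A = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation in the standard basis

    E = np.array([[1.0, -1.0], [1.0, 2.0]])   # columns e1 = (1,1), e2 = (-1,2)
    F = np.array([[1.0, 1.0], [2.0, 0.0]])    # columns f1 = (1,2), f2 = (1,0)

    M = np.linalg.solve(F, A @ E)             # solves F M = A E
    print(M)     # [[ 0.5 -0.5], [-1.5 -1.5]], matching the recipe above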

3.3 Matrix Equations

We already reviewed the linear equation

    Ax = b,   A ∈ R^{n×m}, x ∈ R^m, b ∈ Rⁿ.

The equation is another way of saying b is a linear combination of the columns of A. Thus the equation has a solution iff b belongs to the column span of A, i.e., b ∈ Im A. Then the solution is unique iff rank A = m, i.e., Ker A = 0. These results extend to the matrix equation

    AX = B,   A ∈ R^{n×m}, X ∈ R^{m×p}, B ∈ R^{n×p}.

In this section we study this and similar equations. We could work with LTs, but we'll use matrices instead.

The first equation is AX = I. Such an X is called a right-inverse of A.

Lemma 3.3.1. A ∈ R^{n×m} has a right-inverse iff it's onto, i.e., the rank of A equals n.

Proof. (⟹) If AX = I, then, for every y ∈ Rⁿ, AXy = y. Thus for every y ∈ Rⁿ there exists x ∈ R^m such that Ax = y. Thus A is onto.

(⟸) Let {f1, ..., fn} be the standard basis for Rⁿ. Since A is onto,

    (∀i)(∃xi ∈ R^m)  fi = Axi.

Now define X to be the matrix whose i-th column is xi, i.e., via Xfi = xi. Then AXfi = fi. This implies AX = I.  □

The second equation is the dual situation, XA = I. Obviously, such an X is a left-inverse.

Lemma 3.3.2. A ∈ R^{n×m} has a left-inverse iff it's one-to-one, i.e., A has rank m.

Lemma 3.3.3.
1. There exists X such that AX = B iff Im B ⊆ Im A, that is,

    rank A = rank [A  B].

2. There exists X such that XA = B iff Ker A ⊆ Ker B, that is,

    rank A = rank [ A ].
                  [ B ]
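Computationally, when the rank condition of Lemma 3.3.1 or 3.3.2 holds, numpy's pseudoinverse produces one particular right- or left-inverse; finding all of them is Problem 6 in Section 3.5. A sketch of my own (the example matrix is illustrative):

    import numpy as np

    A = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]])   # 2x3, rank 2: onto

    X = np.linalg.pinv(A)                    # one right-inverse
    print(np.allclose(A @ X, np.eye(2)))     # True: AX = I

    # A left-inverse of the one-to-one matrix A^T:
    Y = np.linalg.pinv(A.T)
    print(np.allclose(Y @ A.T, np.eye(2)))   # True: Y A^T = I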

34 INVARIANT SUBSPACES 4 34 Invariant Subspaces Example Let A = 2 2 and let A : R 2 R 2 be the generated LT Clearly, Ker A is the -dimensional subspace spanned by Also, x Ker A Ax =0 Ker A, or equivalently, AKer A Ker A In general, if A : X X is an LT, a subspace V Xis A-invariant if AV V The zero subspace, X itself, Ker A, and Im A are all A-invariant Now Ker A is the eigenspace for the zero eigenvalue, assuming λ = 0 is an eigenvalue (as in the example above) More generally, suppose λ is an eigenvalue of A Assume λ R ThenAx = λx for some x = 0 Then V = Span {x} is A-invariant So is the eigenspace {x : Ax = λx} = {x :(A λi)x =0} =Ker(A λi) Let V be an A-invariant subspace Take a basis for V, {e,,e k }, and extend it to a basis for X : {e,,e k,,e n } Then the matrix representation of A has the form A A A = 2 0 A 22 Notice that the lower-left block of A equals zero; this is because V is A-invariant Example Let X = R 3,letV be the (x,x 2 )-plane, and let A : X X be the LT that rotates a vector 90 about the x 3 -axis using the right-hand rule Thus V is A-invariant Let us take the bases 0 e = 0,e 2 = for V 0 0 e,e 2,e 3 = for X

The matrix representation of A with respect to the latter basis is

    A = [ 0 -1 0 ]
        [ 1  0 0 ].
        [ 0  0 1 ]

So, in particular, the restriction of A to V is represented by the rotation matrix

    A11 = [ 0 -1 ].
          [ 1  0 ]

Finally, let A be an n × n matrix. Suppose V is an n × k matrix. Then Im V is a subspace of Rⁿ. How can we know if this subspace is invariant under A, or more precisely, under the LT generated by A? The answer is this:

Lemma 3.4.1. The subspace Im V is A-invariant iff the linear equation AV = VA1 has a solution A1.

Proof. If AV = VA1, then Im AV ⊆ Im V, that is, A Im V ⊆ Im V, which says Im V is A-invariant. Conversely, if Im AV ⊆ Im V, then the equation AV = VA1 is solvable, by Lemma 3.3.3.  □
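Lemma 3.4.1 doubles as a numerical test: by Lemma 3.3.3, Im V is A-invariant iff rank V = rank [V  AV], and the restriction A1 can then be found by least squares. A numpy sketch of my own, using the rotation example above:

    import numpy as np

    A = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])    # 90-degree rotation about x3
    V = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]])          # Im V = the (x1, x2)-plane

    AV = A @ V
    invariant = (np.linalg.matrix_rank(V) ==
                 np.linalg.matrix_rank(np.hstack([V, AV])))
    print(invariant)                    # True

    if invariant:
        A1, *_ = np.linalg.lstsq(V, AV, rcond=None)
        print(A1)                       # the restriction: the 2x2 rotation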

3.5 Problems

1. Prove the following facts about subspaces:
(a) V + V = V. Hint: You have to show V + V ⊆ V and V ⊆ V + V. Similarly for other subspace equalities.
(b) If V ⊆ W, then V + W = W.
(c) If V ⊆ W, then W ∩ (V + T) = V + W ∩ T.

2. Show that W ∩ (V + T) = W ∩ V + W ∩ T is false in general by giving an explicit counterexample.

3. Let A be the identity LT on R². Take

    [ 1 ], [  1 ]  = basis for domain,     [ 2 ], [ 1 ]  = basis for co-domain.
    [ 1 ]  [ -1 ]                          [ 0 ]  [ 3 ]

Find the matrix A.

4. Let A denote the LT R⁴ → R⁵ with the action

    x ↦ [ x1 - x4
          x2 - x3
          2x4 - x1
          x2 + x3 + 2x4
          4x1 - x2 + x3 ].

Find bases for R⁴ and R⁵ so that the matrix representation is

    A = [ I  0 ].
        [ 0  0 ]

5. Let A be an LT. Show that if {Ae1, ..., Aen} is linearly independent, so is {e1, ..., en}. Give an example where the converse is false.

6. Find all right-inverses of the matrix

    A = [ 1  2 ].

7. Let X denote the 4-dimensional vector space with basis

    {sin t, cos t, sin 2t, cos 2t}.

Thus vectors in X are time-domain signals of frequency 1 rad/s, 2 rad/s, or a combination of both. Suppose an input x(t) from X is applied to a lowpass RC-filter, producing the output y(t). The equation for the circuit is

    RCẏ(t) + y(t) = x(t).

For simplicity, take RC = 1. From circuit theory, we know that y(t) belongs to X too. (This is steady-state analysis; the transient response is neglected.) So the mapping from x(t) to y(t) defines a linear transformation A : X → X. Find the matrix representation of A with respect to the given basis.

8. Consider the vector space R³. Let x1, x2, and x3 denote the components of a vector x in R³. Now let V denote the subspace of R³ of all vectors x where x1 + x2 - x3 = 0, and let W denote the subspace of R³ of all vectors x where 2x1 - 3x3 = 0. Find a basis for the intersection V ∩ W.

9. Let A : R³ → R³ be the LT defined by

    A : [ x1 ]    [ 8x1 - 2x3      ]
        [ x2 ] ↦  [ x1 + 7x2 - 2x3 ].
        [ x3 ]    [ 4x1 - x3       ]

Find bases for Ker A and Im A.

10. Find all solutions of the matrix equation XA = I, where

    A = [ 1  2 ]
        [ 0  1 ].
        [ 2  1 ]

11. For a square matrix X, let diag X denote the vector formed from the elements on the diagonal of X. Let A : R^{n×n} → Rⁿ be the LT defined by A : X ↦ diag X. Does A have a left inverse? A right inverse?

12. Consider the two matrices:

    [ 4  3 ],     [ 3  0  2  3 ]
    [ 2  3 ]      [ 4  5  2  3 ].
                  [ 4  2  3  4 ]
                  [ 5  0  0  1 ]

For each matrix, find its rank, a basis for its image, and a basis for its kernel.

13. Let A, U ∈ R^{n×n} with U nonsingular. True or false:
(a) Ker(A) = Ker(UA)
(b) Ker(A) = Ker(AU)
(c) Ker(A²) ⊇ Ker(A)

14. Is {(x1, x2, x3) : 2x1 + 3x2 + 6x3 - 5 = 0} a subspace of R³?

15. You are given the n eigenvalues of a matrix in R^{n×n}. Can you determine the rank of the matrix? If not, can you give bounds on the rank?

16. Suppose that A ∈ R^{m×n} and B ∈ R^{n×m} with m ≤ n and rank A = rank B = m. Find a necessary and sufficient condition that AB be invertible.

17. Let A be an LT from X to X, a finite-dimensional vector space. Fix a basis for X and let A denote the matrix representation of A with respect to this basis. Show that A² is the matrix representation of A².

18. Consider the following result:

Lemma. If A is a matrix with full column rank, then the equation Ax = y is solvable for every vector y.

Proof. Let y be arbitrary. Multiply the equation Ax = y by the transpose of A: AᵀAx = Aᵀy. Since A has full column rank, AᵀA is invertible. Thus x = (AᵀA)⁻¹Aᵀy.  □

(a) Give a counterexample to the lemma.
(b) What is the mistake in logic in the proof?

19. Let L denote the line in the plane that passes through the origin and makes an angle of +π/6 radians with the positive x-axis. Let A : R² → R² be the LT that maps a vector to its reflection about L.

(a) Find the matrix representation of A with respect to the basis

    e1 = [ 1 ],   e2 = [  1 ].
         [ 1 ]         [ -1 ]

(b) Show that A is invertible and find its inverse.

20. Fix a vector v ≠ 0 in R³ and consider the LT A : R³ → R³ that maps x to the cross product v × x.

(a) Find Ker(A) and Im(A).
(b) Is A invertible?