IDENTIFICATION AND ANALYSIS OF TIME-VARYING MODAL PARAMETERS


IDENTIFICATION AND ANALYSIS OF TIME-VARYING MODAL PARAMETERS

By

STEPHEN L. SORLEY

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

© Stephen L. Sorley

I dedicate this thesis to my savior Jesus Christ, without whose help I wouldn't be here.

ACKNOWLEDGMENTS

I would like to first thank my parents, Curtis and Cynthia Sorley, for their constant encouragement in this endeavor. I would also like to thank my advisor Rick Lind and coworkers Dan Grant and Animesh Chakravarthy for the many contributions they have made to this work. Finally, I would like to thank everyone in the lab for providing such an enjoyable working environment, both through their camaraderie and their ability to laugh.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
    1.1 Motivation
    1.2 Contributions

2 BACKGROUND
    2.1 LTV System Definition
    2.2 Time-varying System Identification
    2.3 Survey of Available LTV Eigenpair Definitions

3 LTV ANALYSIS METHODS
    3.1 Inadequacy of Frozen-Time Eigenpair Analysis
    3.2 Definition of LTV Poles and Eigenvectors
        3.2.1 LTV Response Equation
        3.2.2 LTV Eigenrelation
        3.2.3 Relationship between LTV Eigenpairs and Stability
        3.2.4 Relationship between LTV Eigenpairs and Oscillation
    3.3 Mode Vector Analysis
        3.3.1 Eigenpairs and Linear Independence
        3.3.2 Transformations between Equivalent Eigenpairs
        3.3.3 Definition of Mode Vectors
    3.4 Kamen Analysis
        3.4.1 A Different Perspective on Kamen's Method
        3.4.2 Sufficiency of Kamen's Poles to Determine Stability
        3.4.3 Kamen Analysis via Equivalent Eigenpair Transformations
        3.4.4 Problems with Kamen Analysis
    3.5 Summary

4 DIRECT IDENTIFICATION ALGORITHMS
    4.1 Development of Identification Algorithms
        4.1.1 Complete vs. Incremental Optimization
        4.1.2 Unnormalized Eigenvector Method
        4.1.3 Canonical Normalized Eigenvector Method
    4.2 Demonstration of Practical Usage Issues
        4.2.1 Effect of Number of Datasets
        4.2.2 Effect of Linear Independence of Initial States
        4.2.3 Effect of State Measurement Noise
    4.3 Example: Aircraft Model with Variable Wing Sweep
        4.3.1 Lateral Dynamics
        4.3.2 Longitudinal Dynamics

5 CONCLUSIONS

REFERENCES

BIOGRAPHICAL SKETCH

LIST OF FIGURES

3-1  Eigenvectors (left) and poles (right) from FTE analysis of Equation 3-3
3-2  State response of Equation 3-3 for x(0) = [ ] (left) and x(0) = [ ] (right)
3-3  Eigenvectors (left) and poles (right) from an LTV solution of Equation 3-9
3-4  Validity checks for eigenpair 1 (left) and eigenpair 2 (right) from an LTV solution of Equation 3-9
3-5  State trajectories from Equation 2-1 versus the modal response of an LTV solution of Equation 3-9, for x(0) = [ ]
3-6  Eigenvectors (left) and poles (right) from an LTV solution for Equation 3-10 with oscillation represented by complex poles
3-7  Eigenvectors (left) and poles (right) from an LTV solution for Equation 3-10 with oscillation represented by real poles
3-8  Eigenvectors (left) and poles (right) from an LTV solution for Equation 3-10 with oscillation represented by eigenvectors
3-9  Eigenvectors 1 (left) and 2 (right) from the original and transformed solutions of Equation 3-9
3-10 Poles from the original and transformed solutions of Equation 3-9
3-11 Validity checks for eigenpair 1 (left) and eigenpair 2 (right) from the transformed solution of Equation 3-9
3-12 State trajectories from Equation 2-1 versus the modal response of the transformed solution of Equation 3-9, for x(0) = [ ] (left) and x(0) = [ ] (right)
3-13 Mode vectors 1 (left) and 2 (right) from the original and transformed solutions of Equation 3-9
3-14 Eigenvectors (left) and poles (right) from the Kamen solution to Equation
3-15 Validity check of eigenpairs 1 and 2 (left) and state trajectories versus modal response (right) for the Kamen solution to Equation
3-16 Eigenvectors (left) and poles (right) from the original LTV solution of Equation
3-17 Eigenvectors (left) and poles (right) from both the Kamen solution and the transformed non-Kamen solution to Equation
3-18 Poles from Kamen (left) and transformed (right) solutions to Equation
3-19 First component of both eigenvectors for solution #1 (left) and solution #2 (right) of Equation
3-20 State trajectories obtained from Equation 3-53 with x(0) = [ ] (left) and x(0) = [ ] (right)
3-21 Poles from Kamen (left) and transformed (right) solutions to Equation
3-22 State response to x(0) = [ ] for Equation and Equation
3-23 Norm of mode vector from solutions to Equation and Equation
3-24 State response to x(0) = [ ] for Equation 3-57 and Equation
3-25 Non-canonical oscillator, original vs. canonical matrix: mode vector norms
3-26 Poles from Kamen solution to Equation
3-27 Mode vector (left) and norm of mode vector (right) from solution to Equation
4-1  Poles from the Kamen and CNE solutions of Run #1 (left), and validity check of eigenpair from the CNE solution of Run #1 (right)
4-2  State trajectories of Equation 4-9 versus the modal response of the CNE solution of Run #1, for x(0) = [5 7] (left) and x(0) = [7 5] (right)
4-3  Poles from the Kamen and CNE solutions of Run #2 (left), and validity check of eigenpair from the CNE solution of Run #2 (right)
4-4  State trajectories of Equation 4-9 versus the modal response of the CNE solution of Run #2, for x(0) = [5 7] (left) and x(0) = [7 5] (right)
4-5  Poles from the Kamen and CNE solutions of Run #3 (left), and validity check of eigenpair from the CNE solution of Run #3 (right)
4-6  State trajectories of Equation 4-9 versus the modal response of the CNE solution of Run #3, for x(0) = [5 7] (left), x(0) = [7 5] (center), and x(0) = [3 ] (right)
4-7  Norm of mode vector from the Kamen and UE solutions of Run #1 (left), and validity check of eigenpair from the UE solution of Run #1 (right)
4-8  State trajectories of Equation 4-9 versus the modal response of the UE solution of Run #1, for x(0) = [5 7] (left) and x(0) = [7 5] (right)
4-9  Norm of mode vector from the Kamen and UE solutions of Run #2 (left), and validity check of eigenpair from the UE solution of Run #2 (right)
4-10 State trajectories of Equation 4-9 versus the modal response of the UE solution of Run #2, for x(0) = [5 7] (left) and x(0) = [7 5] (right)
4-11 Norm of mode vector from the Kamen and UE solutions of Run #3 (left), and validity check of eigenpair from the UE solution of Run #3 (right)
4-12 State trajectories of Equation 4-9 versus the modal response of the UE solution of Run #3, for x(0) = [5 7] (left), x(0) = [7 5] (center), and x(0) = [3 ] (right)
4-13 Poles from Kamen and CNE solutions of Run #4 (left), Run #5 (center), and Run #6 (right)
4-14 State trajectory of Equation 4-9 versus the modal response of the CNE solution of Run #4 (left), Run #5 (center), and Run #6 (right), for x(0) = [5 7]
4-15 Norm of mode vector from Kamen and UE solutions of Run #4 (left), Run #5 (center), and Run #6 (right)
4-16 State trajectory of Equation 4-9 versus the modal response of the UE solution of Run #4 (left), Run #5 (center), and Run #6 (right), for x(0) = [5 7]
4-17 State trajectories of Equation 4-9 (5% uniform error added), for x(0) = [ ] (left) and x(0) = [ ] (right)
4-18 Pole 1 (left) and pole 2 (right) from the Kamen solution and the noisy CNE solution of Equation 4-9
4-19 Modal response of the noisy CNE solution to Equation 4-9, for x(0) = [ ] (left) and x(0) = [ ] (right)
4-20 Norm of mode vector from the Kamen solution and the noisy UE solution of Equation 4-9
4-21 Modal response of the noisy UE solution to Equation 4-9, for x(0) = [ ] (left) and x(0) = [ ] (right)
4-22 State trajectories for lateral (left) and longitudinal (right) dynamics of aircraft model, for x(0) = [ ]
4-23 Eigenvectors 1 (left) and 2 (right) from UE solution of lateral dynamics
4-24 Eigenvectors 3 (left) and 4 (right) from UE solution of lateral dynamics
4-25 Poles from UE solution of lateral dynamics
4-26 Mode vectors 1 (left) and 2 (right) from UE solution of lateral dynamics
4-27 Mode vectors 3 (left) and 4 (right) from UE solution of lateral dynamics
4-28 Mode vector norms 1 (left) and 2 (right) from UE solution of lateral dynamics
4-29 Mode vector norms 3 (left) and 4 (right) from UE solution of lateral dynamics
4-30 Eigenvectors 1 (left) and 2 (right) from UE solution of longitudinal dynamics
4-31 Eigenvectors 3 (left) and 4 (right) from UE solution of longitudinal dynamics
4-32 Poles from UE solution of longitudinal dynamics
4-33 Mode vectors 1 (left) and 2 (right) from UE solution of longitudinal dynamics
4-34 Mode vectors 3 (left) and 4 (right) from UE solution of longitudinal dynamics
4-35 Mode vector norms 1 (left) and 2 (right) from UE solution of longitudinal dynamics
4-36 Mode vector norms 3 (left) and 4 (right) from UE solution of longitudinal dynamics

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

IDENTIFICATION AND ANALYSIS OF TIME-VARYING MODAL PARAMETERS

Chair: Rick Lind
Major: Mechanical Engineering

By Stephen L. Sorley

December

In this thesis, methods for identifying and analyzing the modal parameters of a linear, time-varying system with no control input are proposed. These methods are derived from two related definitions for time-varying poles and eigenvectors found in the literature. The limitations of the analysis methods used for each time-varying eigenpair definition are explored. Practical requirements regarding the quantity and quality of the experimental data used in each identification method are also addressed. Finally, a morphing-wing aircraft model is analyzed using the proposed techniques, and the results are compared to traditional frozen-time analysis for a particular morphing trajectory.

CHAPTER 1
INTRODUCTION

1.1 Motivation

The modal parameters of a system are important evaluators of stability and performance. For linear systems with constant coefficients (linear time-invariant systems), methods for finding useful modal parameters are well-established and mature. The same cannot be said for linear systems whose coefficients are functions of time (linear time-varying systems). Both the identification of linear time-varying models [] and the definition of useful modal parameters for such systems [] are open research questions.

Linear time-varying (LTV) systems have many practical applications. An aircraft can leverage variable geometry to actively enhance performance during a maneuver, or adapt to conflicting mission requirements [3]. In speech analysis, some elements of the signal vary too rapidly for time-invariant methods to be effective [4]. Active automobile suspensions have been modeled as LTV systems [5]. The mechanics of biological systems also provide many applications, such as the problem of dynamic ankle joint stiffness [6]. The existence of so many useful applications motivates finding modal parameters which characterize the behavior of these systems.

Unfortunately, the system dynamics will not be known explicitly in many real-world situations. One approach to deal with such systems is to use identification methods to approximate the state equations, before performing some sort of modal analysis on the identified dynamics (such as the work done by Liu []). An alternative approach is to identify the modal parameters directly.

1.2 Contributions

The LTV poles and eigenvectors proposed by Wu [7] [8] and Kamen [9] are evaluated. Wu's pole and eigenvector definition is shown to be non-unique, and a transformation between linearly-dependent eigenpairs under Wu's definition is derived.

Kamen's definition is shown to be a special case of Wu's definition, reachable from any solution to Wu's definition via the derived transformation. A mathematical quantity called the mode vector, which remains invariant across these transformations, is then defined. An analysis procedure based on this quantity (mode vector analysis) is described and compared to the analysis method outlined by Kamen. This analysis procedure is shown to operate correctly in situations where Kamen analysis fails to provide a good result.

Two algorithms are proposed to identify LTV poles and eigenvectors directly from state measurements, without explicit knowledge of the time-varying dynamics of the system. These algorithms are shown to produce valid eigenpairs for a variety of systems. Finally, a morphing-wing aircraft model is examined using mode vector analysis, and the results are evaluated in the context of traditional linear time-invariant (LTI) aircraft modes.

CHAPTER 2
BACKGROUND

2.1 LTV System Definition

This work deals primarily with systems defined by coupled linear homogeneous differential equations, as shown in Equation 2-1. In this equation A : R+ → R^(n×n) is the coefficient matrix and x : R+ → R^n is the state vector of the system, where n is the number of individual states. \dot{x}(t) denotes the first derivative of the state vector x(t) with respect to time.

    \dot{x}(t) = A(t)\, x(t)    (2-1)

A system is said to be in canonical form if the coefficient matrix A(t) has the special structure shown in Equation 2-2, where a_1 : R+ → R through a_n : R+ → R are the only time-varying components of the coefficient matrix.

    A_{canon}(t) = \begin{bmatrix} 0 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 \\ -a_1(t) & -a_2(t) & \cdots & -a_n(t) \end{bmatrix}    (2-2)

As a consequence of this special structure, the components of the state vector all become derivatives of the same scalar quantity. This is shown in Equation 2-3, where x_s : R+ → R refers to the single state from which the state vector is derived and x_s^{(n)} refers to the n-th derivative of x_s(t) with respect to time.

    \begin{bmatrix} \dot{x}_s(t) \\ \ddot{x}_s(t) \\ \vdots \\ x_s^{(n)}(t) \end{bmatrix} = A_{canon}(t) \begin{bmatrix} x_s(t) \\ \dot{x}_s(t) \\ \vdots \\ x_s^{(n-1)}(t) \end{bmatrix}    (2-3)
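The companion-form structure of Equations 2-2 and 2-3 is mechanical enough to sketch in code. The following is an illustrative sketch (the function name and the example coefficients are ours, not the thesis's), building A_canon(t) from the scalar coefficients a_1(t), ..., a_n(t):

```python
import numpy as np

def canonical_matrix(coeffs, t):
    """Build A_canon(t) (Equation 2-2) from the scalar coefficients
    a_1(t), ..., a_n(t), each supplied as a callable of time.
    Function name and example values are illustrative, not from the thesis."""
    n = len(coeffs)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)            # ones on the super-diagonal
    A[-1, :] = [-a(t) for a in coeffs]    # last row: -a_1(t), ..., -a_n(t)
    return A

# Example: the scalar equation x_s'' + 2 x_s' + (1 + 0.1 t) x_s = 0
# has a_1(t) = 1 + 0.1 t and a_2(t) = 2 (ordering per Equation 2-4).
A = canonical_matrix([lambda t: 1.0 + 0.1 * t, lambda t: 2.0], t=0.0)
```

The sign convention in the last row follows Equation 2-4, where the coefficients a_i(t) sit on the same side of the equation as x_s^{(n)}.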

Note that a system of coupled differential equations that is in canonical form can alternatively be expressed as a scalar n-th order homogeneous differential equation, as shown in Equation 2-4.

    x_s^{(n)}(t) + a_1(t)\, x_s(t) + a_2(t)\, \dot{x}_s(t) + \cdots + a_n(t)\, x_s^{(n-1)}(t) = 0    (2-4)

2.2 Time-varying System Identification

Identification of the coefficient matrix of an LTV system is more complicated than in the LTI case. In LTI systems, state values collected at any time can be used to find the coefficient matrix, since the matrix is constant. As a result, the coefficient matrix of an LTI system can be found from a single state trajectory of sufficient length. In LTV systems, since the coefficient matrix varies with time, the state values collected at a given time can only be used to identify the coefficient matrix at that specific time (unless some knowledge of how the coefficient matrix develops with time is already available). Thus, multiple state trajectories, obtained by performing the same time-variation multiple times from different initial conditions, are required to identify the coefficient matrix accurately [6].

Strategies for identifying the coefficient matrix A(t) of a linear time-varying system can be divided into three primary categories [6] []. First, quasi-time-invariant methods allow for identification from only one state trajectory when the coefficient matrix changes slowly with respect to time (i.e., dA(t)/dt ≈ 0). In these methods, the coefficient matrix is assumed to be constant over short intervals of a single state trajectory. Over each interval, standard LTI system identification techniques are applied. An example of this method was explored by Kamen [].

Temporal expansion methods can be used for systems whose coefficient matrices are known to vary periodically. Since the values of the coefficient matrix repeat after every period, a single state trajectory of sufficient length can be used to identify the system. While this typically requires a much longer state trajectory

than is necessary for LTI analysis, an accurate result can be achieved. An example of this method can be found in work by Verhaegen [].

Finally, ensemble methods are used for non-periodic systems whose coefficient matrices vary too quickly for quasi-time-invariant methods to be accurate. In an ensemble method, state measurements are collected from multiple experimental runs using different initial conditions. In each experimental run, the coefficient matrix must perform the same time-variation at the same time points. In practice, this is difficult to achieve exactly; however, it is the only approach that is guaranteed to produce accurate results for a system about which nothing is known beforehand. Examples of ensemble methods can be found in MacNeil [6] and Liu [].

2.3 Survey of Available LTV Eigenpair Definitions

A number of different definitions for LTV poles and eigenvectors exist. Zhu [] [3] defined LTV poles and eigenvectors using vector polynomial differential operators. O'Brien and Iglesias [4] [] created a more general definition by performing a QR decomposition of the system's transition matrix. In both of these definitions, the concepts of poles and eigenvectors are generalized to include cross-terms in the response equation. In these generalizations, each pole may be associated with several different eigenvectors in the response equation. In other words, the poles are found by transforming the state equation into an upper-triangular matrix instead of a diagonal matrix.

Wu [7] [8] gave the definition for LTV poles and eigenvectors shown in Equation 2-5, where ε_i : R+ → C^n are the LTV eigenvectors and p_i : R+ → C are the LTV poles.

    A(t)\, \varepsilon_i(t) = p_i(t)\, \varepsilon_i(t) + \dot{\varepsilon}_i(t)    (2-5)

The LTV poles of this definition were then shown to remain invariant under an algebraic transformation. This led to the definition for similar LTV matrices shown in Equation 2-6, where T : R+ → R^(n×n) is the algebraic transformation applied to the system and

\bar{A} : R+ → R^(n×n) is the resulting similar matrix.

    \bar{A}(t) = T^{-1}(t)\, A(t)\, T(t) - T^{-1}(t)\, \dot{T}(t)    (2-6)

If the algebraic transformation T(t) is defined such that each column of T(t) is a linearly-independent eigenvector under Equation 2-5, the similar matrix resulting from Equation 2-6 was shown to equal a diagonal matrix composed of the poles. Wu then showed that these poles are arbitrary, in that a transformation matrix can be found which changes A(t) into any desired diagonal matrix [7]. Since these poles are arbitrary and the eigenvectors are time-varying, both the poles and the eigenvectors are required to characterize the stability of the system. Stability was then defined using the response modes of the system, as shown in Equation 2-7. (Wu referred to these quantities as mode vectors, but they have been renamed in this work to avoid confusion with a different quantity defined later that has the same name.)

    \phi_i(t) = \exp\left[\int_{t_0}^{t} p_i(\tau)\, d\tau\right] \varepsilon_i(t)    (2-7)

Wu used response modes to define stability in two ways. The linear time-varying system given in Equation 2-1 is stable if and only if the norm of every response mode ϕ_i(t) of A(t) is bounded. This is expressed in Equation 2-8, where C ∈ R is some arbitrary finite constant.

    \|\phi_i(t)\| < C \quad \forall\, t > t_0, \quad i = 1, \dots, n    (2-8)

The system is asymptotically stable if, in addition to being stable, the norm of every response mode approaches 0 as time goes to infinity.

    \|\phi_i(t)\| \to 0 \ \text{as}\ t \to \infty, \quad i = 1, \dots, n    (2-9)

Kamen [9] defined LTV poles and eigenvectors for scalar, n-th order differential equations (as shown in Equation 2-4) by factoring scalar polynomial differential operators (SPDOs). Using these operators, Equation 2-4 was rewritten as shown

in Equation 2-10. In this equation, D is the operator that denotes scalar differentiation with respect to time, and all other quantities are defined as for Equation 2-4. Note that an n-th order differential equation results in a polynomial of order n with respect to the exponent of the differential operator.

    a(D, t)\, x_s(t) = 0, \qquad a(D, t) = D^n + \sum_{i=1}^{n} a_i(t)\, D^{\,i-1}    (2-10)

Kamen's poles were found by factoring the SPDO a(D, t). In other words, p_i : R+ → C is a pole of the system if an SPDO e(D, t) can be found for which Equation 2-11 is true. Since the SPDO is an n-th order polynomial, there are at most n possible factorizations. Thus, Kamen's poles are unique [9] (unlike Wu's poles).

    a(D, t) = e(D, t)\, [D - p_i(t)], \quad i = 1, \dots, n    (2-11)

The operator S_{p_i} was defined as in Equation 2-12, for some differentiable function f : R+ → C.

    S_{p_i}(f(t)) = p_i(t)\, f(t) + \dot{f}(t)    (2-12)

Using this operator, Kamen defined the eigenvectors as shown in Equation 2-13, where each eigenvector ε_i : R+ → C^n forms the i-th column of the generalized Vandermonde matrix [9].

    \varepsilon_i(t) = \begin{bmatrix} 1 \\ p_i(t) \\ S_{p_i}(p_i(t)) \\ \vdots \\ S_{p_i}^{\,n-2}(p_i(t)) \end{bmatrix}    (2-13)

The modes ϕ_i : R+ → C of the system were then defined by Kamen as in Equation 2-14. Note that these modes are scalar quantities which depend only on the poles,

unlike the response modes defined by Wu.

    \phi_i(t) = \exp\left[\int_{t_0}^{t} p_i(\tau)\, d\tau\right]    (2-14)

It was then shown that all solutions of the differential equation given in Equation 2-4 and Equation 2-10 are composed of a linear combination of these modes. This result is shown in Equation 2-15, where the C_i ∈ C are arbitrary constants.

    x_s(t) = \sum_{i=1}^{n} C_i\, \phi_i(t)    (2-15)

From this equation, Kamen stated that the system is asymptotically stable if and only if the absolute value of each mode converges to zero. In contrast to Wu's results, under Kamen's definition the stability of the system is not affected by the eigenvectors.

    |\phi_i(t)| \to 0 \ \text{as}\ t \to \infty, \quad i = 1, \dots, n    (2-16)
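Kamen's modes can be sanity-checked numerically in the time-invariant special case, where the SPDO factorization reduces to factoring an ordinary characteristic polynomial. The sketch below is our own illustrative example, not from the thesis: for x_s'' + 3 x_s' + 2 x_s = 0 the factorization D^2 + 3D + 2 = (D + 1)(D + 2) gives constant poles p = -1 and p = -2, and by Equation 2-15 any linear combination of the modes ϕ_i(t) = exp(∫ p_i dτ) should satisfy the differential equation.

```python
import numpy as np

# Time-invariant sanity check of Kamen's modes (illustrative example):
# x_s'' + 3 x_s' + 2 x_s = 0 has constant poles p = -1 and p = -2.
ts = np.linspace(0.0, 5.0, 501)
phi1 = np.exp(-1.0 * ts)             # mode for p = -1 (Equation 2-14)
phi2 = np.exp(-2.0 * ts)             # mode for p = -2

# Any linear combination of modes should solve the ODE (Equation 2-15).
x = 2.0 * phi1 - 1.0 * phi2
dt = ts[1] - ts[0]
xd = np.gradient(x, dt)              # finite-difference derivatives
xdd = np.gradient(xd, dt)
residual = xdd + 3.0 * xd + 2.0 * x  # ~0 away from the interval endpoints
```

The residual is only approximately zero because the derivatives are taken by finite differences; away from the endpoints it shrinks with the step size.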

CHAPTER 3
LTV ANALYSIS METHODS

3.1 Inadequacy of Frozen-Time Eigenpair Analysis

In linear time-invariant (LTI) systems, the poles and eigenvectors of an n-th order system with constant coefficients are defined by Equation 3-1. In this equation A ∈ R^(n×n) is the coefficient matrix, p ∈ C is a pole, and ε ∈ C^n is an eigenvector [5].

    A\varepsilon - p\varepsilon = 0    (3-1)

As a first step towards examining linear, time-varying systems (systems with a coefficient matrix that varies with time), it seems logical to extend this LTI eigenrelation directly to the time domain by simply making the poles and eigenvectors in Equation 3-1 functions of time.

    A(t)\, \varepsilon(t) - p(t)\, \varepsilon(t) = 0    (3-2)

Practically, this method finds poles and eigenvectors by solving the LTI eigenrelation independently at each discrete time step. The poles can then be examined to determine stability or other performance metrics. This procedure is known as Frozen-Time Eigenpair (FTE) analysis [].

While FTE analysis provides an intuitive extension of LTI analysis and has produced useful results in some cases, it cannot correctly characterize the stability of LTV systems in general. To demonstrate the insufficiency of FTE analysis, an example given by Khalil [6] is introduced. Khalil's example concerns a linear time-varying system as defined in Equation 2-1, with the coefficient matrix specified in Equation 3-3.

    A(t) = \begin{bmatrix} -1 + 1.5\cos^2(t) & 1 - 1.5\sin(t)\cos(t) \\ -1 - 1.5\sin(t)\cos(t) & -1 + 1.5\sin^2(t) \end{bmatrix}    (3-3)

First, the FTE poles and eigenvectors are found by solving Equation 3-2 at each time step. These poles and eigenvectors are shown in Figure 3-1. The eigenvectors are complex-conjugate, periodic, and bounded for the entire time range. Note that, in the

legend, e_ij denotes the j-th component of the i-th eigenvector. The poles are a constant complex-conjugate pair with a negative real part. Since the eigenvectors are bounded and the poles have negative real parts, FTE analysis would conclude that the system is stable.

Figure 3-1. Eigenvectors (left) and poles (right) from FTE analysis of Equation 3-3

However, if the system \dot{x}(t) = A(t) x(t) is solved numerically to obtain state trajectories, it becomes obvious that the system is unstable for certain initial conditions. In Figure 3-2, the state trajectories resulting from two different sets of initial conditions ([ ] and [ ]) are depicted. While the second initial condition vector results in a stable trajectory, the first diverges.

Figure 3-2. State response of Equation 3-3 for x(0) = [ ] (left) and x(0) = [ ] (right)

A thorough treatment of this example has been detailed by

Markus and Yamabe [7]. An example of the insufficiency of frozen-time methods for more general system identification problems is described by MacNeil [6].

3.2 Definition of LTV Poles and Eigenvectors

FTE analysis provides some useful results on occasion, but it is not applicable in general. This problem motivates a different definition for poles and eigenvectors that correctly characterizes the system. One such definition is examined in this section.

3.2.1 LTV Response Equation

A response equation that relates the LTV poles and eigenvectors to the states x(t) must first be defined. To keep the differences between the proposed LTV eigenpair definition and traditional LTI analysis to a minimum, the response equation is defined using the mode definitions provided by Wu [8] and Kamen [9]. This response equation is given in Equation 3-4, where, for i = 1, …, n, C_i ∈ C are constants, ϕ_i : R+ → C^n are linearly independent solutions to Equation 2-1, ε_i : R+ → C^n are the LTV eigenvectors, and p_i : R+ → C are the LTV poles. The eigenvectors ε_i(t) are assumed to be differentiable.

    x(t) = \sum_{i=1}^{n} C_i\, \phi_i(t) = \sum_{i=1}^{n} C_i\, \varepsilon_i(t) \exp\left[\int_{t_0}^{t} p_i(\tau)\, d\tau\right]    (3-4)

Note that each linearly-independent solution ϕ_i(t) can also be thought of as a mode of the system [8]. When ϕ_i(t) is used in this context, it will be referred to as the i-th response mode of the system.

3.2.2 LTV Eigenrelation

The relationship between the eigenpairs and the states given in Equation 3-4 is used to define a relationship between the eigenpairs and the coefficient matrix. First, an expression for the i-th response mode (one particular solution to Equation 2-1) is

extracted from Equation 3-4.

    \phi_i(t) = \varepsilon_i(t) \exp\left[\int_{t_0}^{t} p_i(\tau)\, d\tau\right]    (3-5)

The derivative with respect to time of Equation 3-5 is then obtained in Equation 3-6.

    \dot{\phi}_i(t) = \dot{\varepsilon}_i(t) \exp\left[\int_{t_0}^{t} p_i(\tau)\, d\tau\right] + \varepsilon_i(t)\, p_i(t) \exp\left[\int_{t_0}^{t} p_i(\tau)\, d\tau\right] = \left[\dot{\varepsilon}_i(t) + \varepsilon_i(t)\, p_i(t)\right] \exp\left[\int_{t_0}^{t} p_i(\tau)\, d\tau\right]    (3-6)

Since the response modes are solutions to the system, Equation 3-5 and Equation 3-6 can be inserted into Equation 2-1.

    \dot{\phi}_i(t) = A(t)\, \phi_i(t) \quad \Longrightarrow \quad \left[\dot{\varepsilon}_i(t) + \varepsilon_i(t)\, p_i(t)\right] \exp\left[\int_{t_0}^{t} p_i(\tau)\, d\tau\right] = A(t)\, \varepsilon_i(t) \exp\left[\int_{t_0}^{t} p_i(\tau)\, d\tau\right]    (3-7)

The exponential terms are guaranteed to be non-zero if the pole contains no singular points over the time interval in question, which allows the exponentials to be canceled from both sides of Equation 3-7. This cancellation yields Equation 3-8, which relates the LTV eigenpairs to the coefficient matrix.

    \dot{\varepsilon}_i(t) + \varepsilon_i(t)\, p_i(t) = A(t)\, \varepsilon_i(t) \quad \Longleftrightarrow \quad \dot{\varepsilon}_i(t) = A(t)\, \varepsilon_i(t) - \varepsilon_i(t)\, p_i(t)    (3-8)

Equation 3-8 will hereafter be referred to as the LTV eigenrelation. An LTV pole and eigenvector that satisfy this equation will be referred to as an LTV eigenpair. This definition is identical to the one specified by Wu [7].

Note that Equation 3-8 is under-defined. For a system with n states, this equation has n + 1 unknowns (n eigenvector components and 1 scalar pole), but only n scalar equations. This means that there are an infinite number of LTV eigenpairs with a given initial value that solve Equation 3-8. This is also true in LTI systems, where solutions

are only unique up to a scalar constant multiplied onto the eigenvector. By multiplying different constants onto the eigenvector, an infinite number of LTI eigenpairs that satisfy Equation 3-1 can be obtained. In LTV systems, eigenpair solutions are unique up to a scalar, differentiable function of time. These functions are multiplied onto the eigenvector, and must modify the pole in a particular way to result in another LTV eigenpair that satisfies Equation 3-8. This idea is explored more thoroughly in Section 3.3.2.

3.2.3 Relationship between LTV Eigenpairs and Stability

The transition from LTI to LTV analysis introduces several additional problems. Since the eigenvectors and poles both vary with time, the stability of the system is determined by both quantities instead of the poles alone [8]. Even if the real parts of the poles are negative and bounded, the system can still be made unstable by rapidly diverging eigenvectors. To illustrate this point, an unstable system with the coefficient matrix specified in Equation 3-9 is introduced.

    A(t) = . t .5    (3-9)

An LTV eigenpair solution is computed for this system by choosing a pair of complex-conjugate pole values with negative real parts, then solving the LTV eigenrelation (given in Equation 3-8) once for each pole value to obtain the appropriate eigenvector components. Since this system is unstable, choosing the pole values to have negative real parts forces the instability in the system to be reflected in the eigenvectors. The two eigenpairs found by this process are shown in Figure 3-3. Note that this solution is not unique, since a different choice of poles would have produced a different set of eigenpairs.

To verify that the two eigenpairs shown in Figure 3-3 satisfy the LTV eigenrelation, the numerical derivative of the eigenvector components can be compared to the

Figure 3-3. Eigenvectors (left) and poles (right) from an LTV solution of Equation 3-9

right-hand side of Equation 3-8. If the two values are roughly equal (the numerical derivative introduces some error), this indicates that the eigenpair satisfies the equation. This check is performed on both eigenpairs, and the results are shown in Figure 3-4. Note that in the legend of Figure 3-4, de_ij refers to the numerical derivative of the j-th component of the i-th eigenvector, and mde_ij refers to the value of the same component predicted by the right-hand side of the LTV eigenrelation. The figure indicates that the two eigenpairs satisfy the LTV eigenrelation.

Figure 3-4. Validity checks for eigenpair 1 (left) and eigenpair 2 (right) from an LTV solution of Equation 3-9

As a final check, the two eigenpairs are inserted into Equation 3-4 (the LTV response equation). The constants in this equation are computed for a particular initial state

vector x(0) by inserting the initial values of the eigenpairs into the response equation, then solving for the constants. The states produced as a result of solving the response equation are then compared to the actual state trajectories of the system obtained by solving Equation 2-1 with a numerical ODE solver. Figure 3-5 shows both the state trajectories and the modal response (state estimates computed from the response equation) for x_1(t) and x_2(t). Note that the state estimates computed from the modal response are denoted in the legend as xhat_1 and xhat_2. This figure indicates that the two eigenpairs satisfy Equation 3-4.

Figure 3-5. State trajectories from Equation 2-1 versus the modal response of an LTV solution of Equation 3-9, for x(0) = [ ]

These figures show that the poles alone are not enough to determine the stability of an LTV system. It is possible to obtain valid LTV eigenpairs that divide the stability information between the poles and the eigenvectors such that the poles have negative real parts, even though the system is unstable. The choice of poles simply forced the eigenvectors to diverge faster than the exponents of the poles were decaying. Since these sets of eigenpairs aren't unique, it would be possible to generate equally valid eigenpairs for which the real parts of the poles were positive. This new set of eigenpairs would simply partition the stability information in a different manner.
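The construction described above — pick a pole trajectory, then solve the eigenrelation for the eigenvector — can be sketched numerically. With p(t) chosen, Equation 3-8 is simply a linear ODE for ε(t), and the resulting response mode ϕ(t) = exp(∫ p dτ) ε(t) should reproduce a true solution of ẋ = A(t)x. The A(t), the pole choice, and the step size below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Illustrative LTV system and a chosen (non-unique) constant pole.
A = lambda t: np.array([[0.0, 1.0], [-2.0 - 0.1 * t, -0.5]])
p = -0.25

# Solve the eigenrelation (Equation 3-8) for the eigenvector by forward
# Euler: eps' = A(t) eps - p eps.
dt, N = 1e-3, 2001
ts = dt * np.arange(N)
eps = np.empty((N, 2))
eps[0] = [1.0, 0.0]
for k in range(N - 1):
    eps[k + 1] = eps[k] + dt * (A(ts[k]) @ eps[k] - p * eps[k])

# Validity check in the spirit of Figure 3-4: numerical derivative of eps
# (de) versus the right-hand side of the eigenrelation (mde).
de = np.gradient(eps, dt, axis=0)
mde = np.array([A(t) @ e - p * e for t, e in zip(ts, eps)])

# The response mode phi(t) = exp(p t) eps(t) should solve xdot = A(t) x,
# mirroring the modal-response check of Figure 3-5.
phi = np.exp(p * ts)[:, None] * eps
res = np.gradient(phi, dt, axis=0) - np.array([A(t) @ f for t, f in zip(ts, phi)])
```

A different choice of p(t) (including a complex one) would yield a different but equally valid eigenvector, which is exactly the non-uniqueness discussed above.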

3.2.4 Relationship between LTV Eigenpairs and Oscillation

Another issue introduced by the move from LTI to LTV analysis involves the sources of oscillation in the response. In LTI analysis, the frequency of any oscillatory behavior is completely characterized by the imaginary part of the pole. In LTV analysis the oscillatory behavior may arise from three sources: complex poles, real poles, and eigenvectors. To illustrate each of these sources, a system with the coefficient matrix given in Equation 3-10 is introduced:

    A(t) = . sin(8t)    (3-10)

Three different eigenpair solutions are computed for this system, each of which represents the oscillation in a different part of the eigenpairs. Each set of eigenpairs satisfies Equation 3-8 (the LTV eigenrelation) and Equation 3-4 (the LTV response equation).

In Figure 3-6, the first set of eigenpairs demonstrates how the oscillation in this system can be represented by the complex values of the poles. This division of information reflects the standard behavior of LTI poles and eigenvectors.

Figure 3-6. Eigenvectors (left) and poles (right) from an LTV solution for Equation 3-10 with oscillation represented by complex poles

In Figure 3-7, a second set of eigenpairs demonstrates how the same system can have its oscillation represented by the real parts of the poles. A sine wave (sin(8t))

is chosen for the real values of the poles, with no imaginary component. By choosing the correct frequency it is possible to very nearly eliminate the oscillation from the eigenvectors, leaving almost all of the system's oscillatory behavior in the real parts of the poles.

Figure 3-7. Eigenvectors (left) and poles (right) from an LTV solution for Equation 3-10 with oscillation represented by real poles

Finally, Figure 3-8 shows a third set of eigenpairs that demonstrates that this same system can have its oscillation represented by the values of the eigenvectors. In this set, the poles are chosen to be real and constant so that they cannot contribute anything to the oscillatory behavior of the system. Instead, the oscillation is forced to show up in the eigenvectors.

While these three eigenpair solutions are chosen such that the oscillatory behavior is artificially limited to one part of the eigenpair, eigenpairs in general will represent oscillatory information as some combination of the real and imaginary parts of both the poles and the eigenvectors. The entire eigenpair must be analyzed to determine the oscillatory characteristics of the response.

3.3 Mode Vector Analysis

Given the definition for LTV eigenpairs described in Section 3.2, neither the stability nor the oscillatory characteristics of an LTV system can be determined in general from either the LTV poles or eigenvectors alone. Instead, each eigenpair must be analyzed

Figure 3-8. Eigenvectors (left) and poles (right) from an LTV solution for Equation 3-10 with oscillation represented by eigenvectors

as a whole. Since LTI analysis techniques are based on the assumption that the stability and oscillatory characteristics are isolated in the poles, new analysis techniques are clearly necessary. This section proposes a suitable method: mode vector analysis.

3.3.1 Eigenpairs and Linear Independence

Recall that the state trajectories of an LTV system with n states can be represented by n eigenpairs. Each eigenpair forms a response mode of the system according to Equation 3-5, and a linear combination of linearly-independent response modes forms a particular state trajectory according to Equation 3-4. The concept of linear independence (and dependence) can be extended to time-varying vectors by simply applying the time-invariant definition at each individual time step [5]. Thus, two time-varying vectors are linearly dependent if one vector is equal to the other vector multiplied by a scalar function of time. Linear independence is defined as the failure to meet this condition. Under this definition, linear independence is thought of as acting between only two vectors at a time. The more general definition, that a vector is linearly independent from a group of other vectors if it cannot be expressed as a linear combination of those other vectors, is omitted from this discussion for the sake of simplicity.
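The pointwise definition above is straightforward to test numerically. The sketch below (Python with NumPy; the example vectors are hypothetical, not taken from the thesis) checks time-varying linear dependence of two 2-vectors by requiring the determinant of the stacked vectors to vanish at every time step:

```python
import numpy as np

# Pointwise linear-dependence test: two time-varying 2-vectors are
# linearly dependent if b(t) = g(t) a(t) for some scalar function g(t),
# i.e. the 2x2 matrix [a(t) b(t)] is rank-deficient at every time step.

t = np.linspace(0.0, 10.0, 201)
v1 = np.stack([np.cos(t), np.sin(t)])       # shape (2, len(t))
v2 = (2.0 + t) * v1                         # v2 = g(t) v1 with g(t) = 2 + t
v3 = np.stack([np.cos(t), -np.sin(t)])      # not a scalar multiple of v1

def dependent(a, b, tol=1e-9):
    """True if b(t) = g(t) a(t) at every time step (2-state case)."""
    dets = a[0] * b[1] - a[1] * b[0]        # pointwise 2x2 determinant
    return bool(np.all(np.abs(dets) < tol))

print(dependent(v1, v2), dependent(v1, v3))  # -> True False
```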

Since the exponential of the integral of a pole is a scalar value, the response mode ϕ_i(t) is linearly independent from another response mode ϕ_j(t) under this definition if and only if the eigenvector ε_i(t) is linearly independent from ε_j(t). This result means that, for a system with n states, n eigenpairs with linearly-independent eigenvectors are enough to completely characterize all solutions of the system. Any of these eigenpairs can be replaced by another eigenpair whose eigenvector is linearly dependent with the original eigenpair's eigenvector, and the set of eigenpairs will still completely characterize the system. For this reason, the term equivalent eigenpairs will be used to describe two eigenpairs whose eigenvectors are linearly dependent.

3.3.2 Transformations between Equivalent Eigenpairs

The eigenvector portion of an equivalent eigenpair can be found from an existing eigenpair by multiplying the eigenvector by some scalar function of time. However, the pole value may also need to change in order for the new eigenpair to be a valid solution of Equation 3-8 (the LTV eigenrelation). A transformation equation must therefore be derived that describes how the pole needs to change for a given eigenvector transformation.

Assume that there exists some known eigenpair ε(t), p(t) that satisfies Equation 3-8 for the coefficient matrix A(t). Let T : R+ → C^{n×n} be an invertible, differentiable transformation applied to the eigenvector, as shown in Equation 3-11. In this equation, ε̃ : R+ → C^n denotes the eigenvector that results after the transformation is applied.

    \varepsilon(t) = T(t)\,\tilde{\varepsilon}(t), \qquad \tilde{\varepsilon}(t) = T^{-1}(t)\,\varepsilon(t)    (3-11)

Substituting Equation 3-11 into Equation 3-8 yields Equation 3-12.

    A(t)\,T(t)\,\tilde{\varepsilon}(t) - T(t)\,p(t)\,\tilde{\varepsilon}(t) = \dot{T}(t)\,\tilde{\varepsilon}(t) + T(t)\,\dot{\tilde{\varepsilon}}(t)    (3-12)

Multiplying both sides of the equation by T⁻¹(t) produces Equation 3-13.

    T^{-1}(t)\,A(t)\,T(t)\,\tilde{\varepsilon}(t) - p(t)\,\tilde{\varepsilon}(t) = T^{-1}(t)\,\dot{T}(t)\,\tilde{\varepsilon}(t) + \dot{\tilde{\varepsilon}}(t)
    T^{-1}(t)\,A(t)\,T(t)\,\tilde{\varepsilon}(t) - \left[ p(t) + T^{-1}(t)\,\dot{T}(t) \right] \tilde{\varepsilon}(t) = \dot{\tilde{\varepsilon}}(t)    (3-13)

The goal is to find a new, transformed eigenpair that is valid for the original A(t) matrix. To accomplish this objective, it is necessary to restrict T(t) such that the multiplication A(t)T(t) is commutative. This restriction is shown in Equation 3-14.

    T^{-1}(t)\,A(t)\,T(t) = A(t)    (3-14)

If T(t) is chosen to satisfy Equation 3-14, Equation 3-13 reduces to Equation 3-15.

    A(t)\,\tilde{\varepsilon}(t) - \left[ p(t) + T^{-1}(t)\,\dot{T}(t) \right] \tilde{\varepsilon}(t) = \dot{\tilde{\varepsilon}}(t)    (3-15)

Let p̃ : R+ → C be the transformed pole, as defined in Equation 3-16.

    \tilde{p}(t) = p(t) + T^{-1}(t)\,\dot{T}(t)    (3-16)

This transformed pole is then substituted into Equation 3-15.

    A(t)\,\tilde{\varepsilon}(t) - \tilde{p}(t)\,\tilde{\varepsilon}(t) = \dot{\tilde{\varepsilon}}(t)    (3-17)

Equation 3-17 is the LTV eigenrelation given in Equation 3-8, so the pair ε̃(t), p̃(t) forms a valid eigenpair of the matrix A(t). For practical purposes p̃(t) should be restricted to a scalar value, since an eigenpair involving a matrix-valued pole would be difficult to interpret in analysis. A second restriction on T(t) is therefore necessary. This restriction is given in Equation 3-18, where g : R+ → C is some scalar function of time.

    T^{-1}(t)\,\dot{T}(t) = g(t)\,I_{n \times n}    (3-18)
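The two restrictions can be spot-checked numerically. The sketch below (Python with NumPy; the matrix A, the instant t0, and the choice f(t) = 2 + sin(t) are arbitrary stand-ins) verifies that a scalar-matrix transformation T(t) = f(t)I commutes with A and yields a scalar-matrix T⁻¹Ṫ:

```python
import numpy as np

# Numeric spot-check at one time instant of the two restrictions:
#   (1) T^{-1}(t) A(t) T(t) = A(t)              (commutativity)
#   (2) T^{-1}(t) Tdot(t) is a scalar matrix    (scalar pole shift)
# using T(t) = f(t) I with f(t) = 2 + sin(t), so fdot(t) = cos(t).

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))   # stand-in for A(t) at the instant t0
t0 = 1.3
f, fdot = 2.0 + np.sin(t0), np.cos(t0)

T = f * np.eye(n)
Tdot = fdot * np.eye(n)
Tinv = np.linalg.inv(T)

check1 = np.allclose(Tinv @ A @ T, A)                        # restriction (1)
check2 = np.allclose(Tinv @ Tdot, (fdot / f) * np.eye(n))    # restriction (2)
print(check1, check2)                                        # -> True True
```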

In other words, T(t) should be chosen such that T⁻¹(t)Ṫ(t) evaluates to a scalar time-varying matrix. To meet the two restrictions given in Equation 3-14 and Equation 3-18, T(t) will be chosen from the class of scalar matrices given in Equation 3-19. This choice is sufficient to meet the two restrictions, but not necessary. Note that the function f(t) is required to be differentiable and invertible.

    \mathcal{T} = \left\{ T(t) : T(t) = f(t)\,I_{n \times n},\; f : \mathbb{R}^+ \to \mathbb{C} \right\}    (3-19)

Transformations chosen from this class meet the first restriction given in Equation 3-14:

    T^{-1}(t)\,A(t)\,T(t) = f^{-1}(t)\,I_{n \times n}\,A(t)\,f(t)\,I_{n \times n} = A(t)    (3-20)

Transformations of this form also meet the second restriction given in Equation 3-18:

    T^{-1}(t)\,\dot{T}(t) = f^{-1}(t)\,I_{n \times n}\,\dot{f}(t)\,I_{n \times n} = \frac{\dot{f}(t)}{f(t)}\,I_{n \times n} \quad \text{(scalar matrix)}    (3-21)

In summary, transformations of the form T(t) = f(t)I_{n×n} applied to an LTV eigenpair ε(t), p(t) of A(t) result in new equivalent eigenpairs ε̃(t), p̃(t) of A(t) with the following form:

    \tilde{\varepsilon}(t) = \left[ f(t)\,I_{n \times n} \right]^{-1} \varepsilon(t), \qquad \tilde{p}(t) = p(t) + \frac{\dot{f}(t)}{f(t)}    (3-22)

To demonstrate that these transformation equations work, the unstable system given in Equation 3-9 is again examined. A complete eigenpair solution for this system is obtained by choosing values for each of the two poles, then solving Equation 3-8 with an ODE solver to obtain the eigenvector components. The two eigenpairs that form the

solution, ε₁(t), p₁(t) and ε₂(t), p₂(t), are then transformed according to Equation 3-22. The function f(t) = 2 + sin(t) is used to transform the first eigenpair, as shown in Equation 3-23.

    \tilde{\varepsilon}_1(t) = \left[ f(t)\,I_{n \times n} \right]^{-1} \varepsilon_1(t) = \left[ (2 + \sin(t))\,I_{n \times n} \right]^{-1} \varepsilon_1(t)
    \tilde{p}_1(t) = p_1(t) + \frac{\dot{f}(t)}{f(t)} = p_1(t) + \frac{\cos(t)}{2 + \sin(t)}    (3-23)

The function f(t) = exp(t) is used to transform the second eigenpair, as shown in Equation 3-24.

    \tilde{\varepsilon}_2(t) = \left[ f(t)\,I_{n \times n} \right]^{-1} \varepsilon_2(t) = \left[ \exp(-t)\,I_{n \times n} \right] \varepsilon_2(t)
    \tilde{p}_2(t) = p_2(t) + \frac{\dot{f}(t)}{f(t)} = p_2(t) + 1    (3-24)

Figure 3-9 shows the eigenvectors of the original solution (UE in the legend), plotted against the transformed eigenvectors (TR in the legend). Figure 3-10 shows the poles of the original solution, plotted against those from the transformed eigenpairs. These figures demonstrate that the transformed eigenpairs are meaningfully different from the original eigenpairs.

Figure 3-9. Eigenvectors from the original and transformed solutions of Equation 3-9

Figure 3-10. Poles from the original and transformed solutions of Equation 3-9

Though the transformed eigenpairs are significantly different from the original ones, they still satisfy Equation 3-8 (the LTV eigenrelation). This is demonstrated by two eigenpair validity checks in Figure 3-11. Note that in the legend of Figure 3-11, de_ij refers to the numerical derivative of the jth component of the ith eigenvector, and mde_ij refers to the value of the same component predicted by the right-hand side of Equation 3-8 (the LTV eigenrelation).

Figure 3-11. Validity checks for the first eigenpair (left) and the second eigenpair (right) from the transformed solution of Equation 3-9

Finally, Figure 3-12 shows that the transformed eigenpairs satisfy Equation 3-4 (the LTV response equation). For both state trajectories (x(0) = [ ] on the left, x(0) = [ ] on the right), the modal response computed using the two transformed eigenpairs

matches the state trajectories computed by solving the state equation directly. These figures verify that equivalent eigenpairs are valid solutions to the LTV eigenrelation and the LTV response equation.

Figure 3-12. State trajectories versus the modal response of the transformed solution of Equation 3-9, for x(0) = [ ] (left) and x(0) = [ ] (right)

3.3.3 Definition of Mode Vectors

In order to perform analysis using the eigenpairs of an LTV system, it is desirable to obtain some quantity that encapsulates all the information in the eigenpair and remains invariant to equivalent eigenpair transformations. Basing the analysis of a system on such a quantity would prevent problems caused by the partitioning of time-varying information between the pole and the eigenvector, as described in Sections 3.2.3 and 3.2.4. A mathematical entity that meets these goals is defined by adapting the definition of the response modes of the system.

The stability of an LTV system has been successfully characterized through the use of the system's response modes, defined in Equation 2-7 (see Section 2.3). While the response modes combine the parts of each eigenpair in a way that preserves the stability information, they do not remain invariant to equivalent eigenpair transformations. A modification of the response mode is therefore necessary. This new quantity will be called a mode vector of the system. First, the components of each eigenvector are

specified using the notation given in Equation 3-25, where ε_i : R+ → C^n is the LTV eigenvector of the ith eigenpair, e_ij : R+ → C is the jth component of that eigenvector, and n is the number of states in the system.

    \varepsilon_i(t) = \begin{bmatrix} e_{i1}(t) \\ \vdots \\ e_{in}(t) \end{bmatrix}    (3-25)

The mode vectors are then defined in Equation 3-26, where μ_i : R+ → C^n is the mode vector of the ith eigenpair, μ_ij : R+ → C is the jth component of that mode vector, and p_i : R+ → C is the LTV pole of the ith eigenpair.

    \mu_i(t) = \begin{bmatrix} \mu_{i1}(t) \\ \vdots \\ \mu_{in}(t) \end{bmatrix}, \qquad \mu_{ij}(t) = \exp \int_0^t \left[ p_i(\tau) + \frac{\dot{e}_{ij}(\tau)}{e_{ij}(\tau)} \right] d\tau    (3-26)

Moving the eigenvector components outside of the integral yields a second, equivalent definition, given in Equation 3-27.

    \mu_{ij}(t) = \frac{e_{ij}(t)}{e_{ij}(0)} \exp \left[ \int_0^t p_i(\tau)\, d\tau \right]    (3-27)

Note that Equation 3-27 differs from the definition of the response modes in Equation 3-5 by only the initial value of each eigenvector component, which is a constant factor. As long as every component of the eigenvectors at time t = 0 is non-zero, the two stability results derived by Wu [8] for the response modes should work equally well for the mode vectors. The linear time-varying system is stable if and only if the norm of every mode vector μ_i(t) of A(t) is bounded. This is expressed in Equation 3-28, where C ∈ R is some arbitrary finite constant.

    \left\| \mu_i(t) \right\| < C \quad \forall\, t > 0, \quad i = 1, \ldots, n    (3-28)
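A minimal sketch of Equation 3-27 and the boundedness test, assuming NumPy; the scalar system and eigenpair are illustrative stand-ins, not taken from the thesis. The pole alone suggests decay, but the mode vector is unbounded and correctly flags the instability:

```python
import numpy as np

# Illustrative scalar system xdot = x (unstable) with the valid eigenpair
#   eps(t) = 3 exp(2t),  p(t) = -1
# (check: eps_dot = 2*eps = a*eps - p*eps with a = 1, p = -1).

t = np.linspace(0.0, 5.0, 501)
dt = t[1] - t[0]
eps = 3.0 * np.exp(2.0 * t)     # eigenvector component e(t), e(0) = 3
p = -1.0 * np.ones_like(t)      # pole with negative real part

# mu(t) = [e(t)/e(0)] * exp(int_0^t p dtau)   (second definition, Eq. 3-27)
int_p = np.concatenate(([0.0], np.cumsum((p[1:] + p[:-1]) / 2.0) * dt))
mu = (eps / eps[0]) * np.exp(int_p)

# The mode vector grows like exp(t): unbounded, hence the system is unstable
# even though the pole is negative everywhere.
print(mu[-1])
```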

This system is asymptotically stable if, in addition to being stable, the norm of every mode vector approaches zero as time goes to infinity. This is expressed in Equation 3-29.

    \left\| \mu_i(t) \right\| \to 0 \; \text{as} \; t \to \infty, \quad i = 1, \ldots, n    (3-29)

Like the response modes, mode vectors encapsulate the entire eigenpair in a way that preserves the stability information. What remains to be seen is whether the mode vectors remain invariant to equivalent eigenpair transformations. To prove that they do remain invariant, the mode vector of an eigenpair after an equivalent eigenpair transformation is compared to the mode vector of the eigenpair before the transformation.

First, assume that an eigenpair ε_i(t), p_i(t) of A(t) is given. Given some scalar differentiable function f : R+ → C, an equivalent eigenpair ε̃_i(t), p̃_i(t) is obtained via the transformations in Equation 3-22, as shown in Equation 3-30.

    \tilde{p}_i(t) = p_i(t) + \frac{\dot{f}(t)}{f(t)}, \qquad \tilde{\varepsilon}_i(t) = \begin{bmatrix} \tilde{e}_{i1}(t) \\ \vdots \\ \tilde{e}_{in}(t) \end{bmatrix} = \begin{bmatrix} f^{-1}(t)\, e_{i1}(t) \\ \vdots \\ f^{-1}(t)\, e_{in}(t) \end{bmatrix}    (3-30)

The objective is to show that the mode vector of the transformed eigenpair is equal to the mode vector of the original eigenpair. This is depicted in Equation 3-31, where μ̃_i : R+ → C^n is the mode vector of the transformed eigenpair ε̃_i(t), p̃_i(t) and μ_i : R+ → C^n is the mode vector of the original eigenpair ε_i(t), p_i(t).

    \tilde{\mu}_i(t) = \mu_i(t)    (3-31)

The mode vectors on either side of Equation 3-31 are equal if the corresponding components of each mode vector are equal. Using the first definition given in Equation

3-26, the two mode vectors are compared component-to-component in Equation 3-32.

    \tilde{\mu}_{ij}(t) = \mu_{ij}(t) \;\;\Longleftrightarrow\;\; \exp \int_0^t \left[ \tilde{p}_i(\tau) + \frac{\dot{\tilde{e}}_{ij}(\tau)}{\tilde{e}_{ij}(\tau)} \right] d\tau = \exp \int_0^t \left[ p_i(\tau) + \frac{\dot{e}_{ij}(\tau)}{e_{ij}(\tau)} \right] d\tau    (3-32)

If the integrands in Equation 3-32 are equal for all time, the expressions on both sides of the equation must be equal. This simplifies Equation 3-32 to Equation 3-33.

    \tilde{p}_i(t) + \frac{\dot{\tilde{e}}_{ij}(t)}{\tilde{e}_{ij}(t)} = p_i(t) + \frac{\dot{e}_{ij}(t)}{e_{ij}(t)}    (3-33)

The left-hand side of Equation 3-33 is then expanded by inserting the definitions for p̃_i(t) and ε̃_i(t) given in Equation 3-30.

    \tilde{p}_i(t) + \frac{\dot{\tilde{e}}_{ij}(t)}{\tilde{e}_{ij}(t)}
      = p_i(t) + \frac{\dot{f}(t)}{f(t)} + \frac{\frac{d}{dt}\left[ f^{-1}(t)\, e_{ij}(t) \right]}{f^{-1}(t)\, e_{ij}(t)}
      = p_i(t) + \frac{\dot{f}(t)}{f(t)} + \frac{f^{-1}(t)\, \dot{e}_{ij}(t) - f^{-2}(t)\, \dot{f}(t)\, e_{ij}(t)}{f^{-1}(t)\, e_{ij}(t)}
      = p_i(t) + \frac{\dot{f}(t)}{f(t)} + \frac{\dot{e}_{ij}(t)}{e_{ij}(t)} - \frac{\dot{f}(t)}{f(t)}
      = p_i(t) + \frac{\dot{e}_{ij}(t)}{e_{ij}(t)}    (3-34)

This completes the proof.

To demonstrate that the mode vectors from two equivalent eigenpairs are the same, the mode vectors of the two LTV eigenpair solutions of Equation 3-9 covered in the example in Section 3.3.2 are calculated. The mode vectors from the original solution are graphed against those from the transformed solution in Figure 3-13. The figure shows that the mode vectors from both solutions are nearly identical.

It is trivial to show that as the rate at which the eigenvectors change with time drops to zero, the values of each component of the mode vector converge to the traditional definition of the modes used in LTI and FTE analysis, exp[∫₀ᵗ p_i(τ) dτ]. This result can be seen by making the substitution ė_ij(τ) = 0 in Equation 3-26.
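The invariance proof can be mirrored numerically. The sketch below (Python with NumPy) reuses the illustrative scalar eigenpair from the earlier sketches, rather than the thesis's Equation 3-9, and applies the transformation ε̃ = ε/f, p̃ = p + ḟ/f with f(t) = 2 + sin(t); the two mode vectors agree to integration accuracy:

```python
import numpy as np

# Scalar stand-in eigenpair for xdot = x:  eps(t) = exp(2t), p(t) = -1.
# Equivalent-eigenpair transformation with f(t) = 2 + sin(t):
#   eps~ = eps / f,   p~ = p + fdot / f.

t = np.linspace(0.0, 5.0, 2001)
dt = t[1] - t[0]

f = 2.0 + np.sin(t)
fdot = np.cos(t)

eps = np.exp(2.0 * t)
p = -1.0 * np.ones_like(t)

eps_t = eps / f                  # transformed eigenvector
p_t = p + fdot / f               # transformed pole

def mode_vector(e, pole):
    """mu(t) = [e(t)/e(0)] exp(int_0^t pole dtau), cumulative trapezoid rule."""
    ip = np.concatenate(([0.0], np.cumsum((pole[1:] + pole[:-1]) / 2.0) * dt))
    return (e / e[0]) * np.exp(ip)

mu = mode_vector(eps, p)
mu_t = mode_vector(eps_t, p_t)
print(np.max(np.abs(mu - mu_t)))  # small: integration error only
```

Both mode vectors equal exp(t) analytically; the stability information the transformation shuffled between pole and eigenvector is recombined identically.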


Represent this system in terms of a block diagram consisting only of. g From Newton s law: 2 : θ sin θ 9 θ ` T Exercise (Block diagram decomposition). Consider a system P that maps each input to the solutions of 9 4 ` 3 9 Represent this system in terms of a block diagram consisting only of integrator systems, represented

More information

Understand the existence and uniqueness theorems and what they tell you about solutions to initial value problems.

Understand the existence and uniqueness theorems and what they tell you about solutions to initial value problems. Review Outline To review for the final, look over the following outline and look at problems from the book and on the old exam s and exam reviews to find problems about each of the following topics.. Basics

More information

8.1 Bifurcations of Equilibria

8.1 Bifurcations of Equilibria 1 81 Bifurcations of Equilibria Bifurcation theory studies qualitative changes in solutions as a parameter varies In general one could study the bifurcation theory of ODEs PDEs integro-differential equations

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

LS.1 Review of Linear Algebra

LS.1 Review of Linear Algebra LS. LINEAR SYSTEMS LS.1 Review of Linear Algebra In these notes, we will investigate a way of handling a linear system of ODE s directly, instead of using elimination to reduce it to a single higher-order

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Contents Eigenvalues and Eigenvectors. Basic Concepts. Applications of Eigenvalues and Eigenvectors 8.3 Repeated Eigenvalues and Symmetric Matrices 3.4 Numerical Determination of Eigenvalues and Eigenvectors

More information

EE Control Systems LECTURE 9

EE Control Systems LECTURE 9 Updated: Sunday, February, 999 EE - Control Systems LECTURE 9 Copyright FL Lewis 998 All rights reserved STABILITY OF LINEAR SYSTEMS We discuss the stability of input/output systems and of state-space

More information

PROBLEMS In each of Problems 1 through 12:

PROBLEMS In each of Problems 1 through 12: 6.5 Impulse Functions 33 which is the formal solution of the given problem. It is also possible to write y in the form 0, t < 5, y = 5 e (t 5/ sin 5 (t 5, t 5. ( The graph of Eq. ( is shown in Figure 6.5.3.

More information

Notes on the Matrix-Tree theorem and Cayley s tree enumerator

Notes on the Matrix-Tree theorem and Cayley s tree enumerator Notes on the Matrix-Tree theorem and Cayley s tree enumerator 1 Cayley s tree enumerator Recall that the degree of a vertex in a tree (or in any graph) is the number of edges emanating from it We will

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal

1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal . Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal 3 9 matrix D such that A = P DP, for A =. 3 4 3 (a) P = 4, D =. 3 (b) P = 4, D =. (c) P = 4 8 4, D =. 3 (d) P

More information

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0.

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0. Matrices Operations Linear Algebra Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0 The rectangular array 1 2 1 4 3 4 2 6 1 3 2 1 in which the

More information

Repeated Eigenvalues and Symmetric Matrices

Repeated Eigenvalues and Symmetric Matrices Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

Remark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.

Remark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero. Sec 5 Eigenvectors and Eigenvalues In this chapter, vector means column vector Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Matrices and Matrix Algebra.

Matrices and Matrix Algebra. Matrices and Matrix Algebra 3.1. Operations on Matrices Matrix Notation and Terminology Matrix: a rectangular array of numbers, called entries. A matrix with m rows and n columns m n A n n matrix : a square

More information

A Search for the Simplest Chaotic Partial Differential Equation

A Search for the Simplest Chaotic Partial Differential Equation A Search for the Simplest Chaotic Partial Differential Equation C. Brummitt University of Wisconsin-Madison, Department of Physics cbrummitt@wisc.edu J. C. Sprott University of Wisconsin-Madison, Department

More information

ODEs Cathal Ormond 1

ODEs Cathal Ormond 1 ODEs Cathal Ormond 2 1. Separable ODEs Contents 2. First Order ODEs 3. Linear ODEs 4. 5. 6. Chapter 1 Separable ODEs 1.1 Definition: An ODE An Ordinary Differential Equation (an ODE) is an equation whose

More information

which arises when we compute the orthogonal projection of a vector y in a subspace with an orthogonal basis. Hence assume that P y = A ij = x j, x i

which arises when we compute the orthogonal projection of a vector y in a subspace with an orthogonal basis. Hence assume that P y = A ij = x j, x i MODULE 6 Topics: Gram-Schmidt orthogonalization process We begin by observing that if the vectors {x j } N are mutually orthogonal in an inner product space V then they are necessarily linearly independent.

More information

On Exponential Decay and the Riemann Hypothesis

On Exponential Decay and the Riemann Hypothesis On Exponential Decay and the Riemann Hypothesis JEFFREY N. COOK ABSTRACT. A Riemann operator is constructed in which sequential elements are removed from a decaying set by means of prime factorization,

More information

1. The Transition Matrix (Hint: Recall that the solution to the linear equation ẋ = Ax + Bu is

1. The Transition Matrix (Hint: Recall that the solution to the linear equation ẋ = Ax + Bu is ECE 55, Fall 2007 Problem Set #4 Solution The Transition Matrix (Hint: Recall that the solution to the linear equation ẋ Ax + Bu is x(t) e A(t ) x( ) + e A(t τ) Bu(τ)dτ () This formula is extremely important

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

Table of contents. d 2 y dx 2, As the equation is linear, these quantities can only be involved in the following manner:

Table of contents. d 2 y dx 2, As the equation is linear, these quantities can only be involved in the following manner: M ath 0 1 E S 1 W inter 0 1 0 Last Updated: January, 01 0 Solving Second Order Linear ODEs Disclaimer: This lecture note tries to provide an alternative approach to the material in Sections 4. 4. 7 and

More information

Dr. Ian R. Manchester

Dr. Ian R. Manchester Dr Ian R. Manchester Week Content Notes 1 Introduction 2 Frequency Domain Modelling 3 Transient Performance and the s-plane 4 Block Diagrams 5 Feedback System Characteristics Assign 1 Due 6 Root Locus

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a beginning course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc I will review some of these terms here,

More information

Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals. Gary D. Simpson. rev 00 Dec 27, 2014.

Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals. Gary D. Simpson. rev 00 Dec 27, 2014. Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals Gary D. Simpson gsim100887@aol.com rev 00 Dec 27, 2014 Summary Definitions are presented for "quaternion functions" of a quaternion. Polynomial

More information

Introduction to Group Theory

Introduction to Group Theory Chapter 10 Introduction to Group Theory Since symmetries described by groups play such an important role in modern physics, we will take a little time to introduce the basic structure (as seen by a physicist)

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

arxiv: v1 [cs.sc] 17 Apr 2013

arxiv: v1 [cs.sc] 17 Apr 2013 EFFICIENT CALCULATION OF DETERMINANTS OF SYMBOLIC MATRICES WITH MANY VARIABLES TANYA KHOVANOVA 1 AND ZIV SCULLY 2 arxiv:1304.4691v1 [cs.sc] 17 Apr 2013 Abstract. Efficient matrix determinant calculations

More information

Control Systems I. Lecture 6: Poles and Zeros. Readings: Emilio Frazzoli. Institute for Dynamic Systems and Control D-MAVT ETH Zürich

Control Systems I. Lecture 6: Poles and Zeros. Readings: Emilio Frazzoli. Institute for Dynamic Systems and Control D-MAVT ETH Zürich Control Systems I Lecture 6: Poles and Zeros Readings: Emilio Frazzoli Institute for Dynamic Systems and Control D-MAVT ETH Zürich October 27, 2017 E. Frazzoli (ETH) Lecture 6: Control Systems I 27/10/2017

More information

We use the overhead arrow to denote a column vector, i.e., a number with a direction. For example, in three-space, we write

We use the overhead arrow to denote a column vector, i.e., a number with a direction. For example, in three-space, we write 1 MATH FACTS 11 Vectors 111 Definition We use the overhead arrow to denote a column vector, ie, a number with a direction For example, in three-space, we write The elements of a vector have a graphical

More information

20. The pole diagram and the Laplace transform

20. The pole diagram and the Laplace transform 95 0. The pole diagram and the Laplace transform When working with the Laplace transform, it is best to think of the variable s in F (s) as ranging over the complex numbers. In the first section below

More information

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Vectors Arrays of numbers Vectors represent a point in a n dimensional space

More information

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 1. True or False (28 points, 2 each) T or F If V is a vector space

More information

Solution via Laplace transform and matrix exponential

Solution via Laplace transform and matrix exponential EE263 Autumn 2015 S. Boyd and S. Lall Solution via Laplace transform and matrix exponential Laplace transform solving ẋ = Ax via Laplace transform state transition matrix matrix exponential qualitative

More information

1 Matrices and vector spaces

1 Matrices and vector spaces Matrices and vector spaces. Which of the following statements about linear vector spaces are true? Where a statement is false, give a counter-example to demonstrate this. (a) Non-singular N N matrices

More information

w T 1 w T 2. w T n 0 if i j 1 if i = j

w T 1 w T 2. w T n 0 if i j 1 if i = j Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -

More information

Cayley-Hamilton Theorem

Cayley-Hamilton Theorem Cayley-Hamilton Theorem Massoud Malek In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n Let A be an n n matrix Although det (λ I n A

More information

Remark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.

Remark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero. Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial

More information

A = 3 1. We conclude that the algebraic multiplicity of the eigenvalues are both one, that is,

A = 3 1. We conclude that the algebraic multiplicity of the eigenvalues are both one, that is, 65 Diagonalizable Matrices It is useful to introduce few more concepts, that are common in the literature Definition 65 The characteristic polynomial of an n n matrix A is the function p(λ) det(a λi) Example

More information

MIT (Spring 2014)

MIT (Spring 2014) 18.311 MIT (Spring 014) Rodolfo R. Rosales May 6, 014. Problem Set # 08. Due: Last day of lectures. IMPORTANT: Turn in the regular and the special problems stapled in two SEPARATE packages. Print your

More information

1 Continuous-time Systems

1 Continuous-time Systems Observability Completely controllable systems can be restructured by means of state feedback to have many desirable properties. But what if the state is not available for feedback? What if only the output

More information

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2 Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch

More information