Analysis of Nonlinear Dynamical Systems Using Markovian Representations

Christopher L. Bertagne, Raktim Bhattacharya
Department of Aerospace Engineering, Texas A&M University, College Station, TX 77840

This paper presents a framework for understanding the behavior of nonlinear dynamical systems in N-dimensional state space. The basis of this approach is the generation of a stochastic matrix using techniques from the field of statistical physics. Though it encodes the nonlinear nature of the system, the matrix itself is a linear operator, allowing straightforward computation of the long-term behavior of the system via its stationary probability vector. Such a technique can be applied to situations where knowledge of the stability of highly nonlinear dynamical systems is of critical importance, such as in the design of flight control laws.

I. Background

The field of Dynamics and Control is concerned with the study of dynamical systems. A dynamical system is a means of describing how one state develops into another state over the course of time.[1] These systems are usually represented by differential equations in continuous time and by difference equations in discrete time. An example dynamical system is the predator-prey model usually studied in an introductory differential equations course. In this system, the dependent variables represent the populations of different species.

There are two broad categories of dynamical systems: linear systems and nonlinear systems. Linear systems are relatively easy to solve, and there are many tools and techniques available to analyze them. For example, the predator-prey model has analytical solutions: functions that give the population of each species with respect to time. Nonlinear systems, on the other hand, are frequently impossible to solve analytically, and there are far fewer ways to analyze them. This poses a problem because nonlinear dynamical systems occur far more frequently when dealing with physical phenomena.

Nonlinear dynamical systems are primarily studied by understanding their stability. Stability refers to the robustness of a given outcome to small changes in initial conditions or small random fluctuations.[2] Understanding the stability of a dynamical system is of utmost importance in situations where lives and property depend on the system's behavior. Indeed, Chen states that the "[s]tability of a dynamical system, with or without control and disturbance inputs, is a fundamental requirement for its practical value, particularly in most real-world applications."[3]

The first four models of the Boeing F/A-18 contained a mode in their flight control laws that demonstrates the implications of incomplete knowledge of nonlinear dynamical systems. This mode is highly unstable and results in violent oscillations about the roll and yaw axes as the aircraft quickly loses altitude. The mode has been called the falling-leaf mode due to the similarity between the aircraft's behavior and the motion of a leaf fluttering to the ground.[4] Though no lives have been claimed by this unstable phenomenon, several aircraft were lost when their pilots ejected after triggering the mode. Due to the nature of nonlinear dynamical systems, the designers of the flight control law simply had no knowledge that the falling-leaf mode existed.
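As a concrete illustration, the predator-prey model mentioned above can be written as a pair of coupled first-order differential equations and integrated numerically. The sketch below is illustrative only and is not drawn from the paper; the coefficient values and initial populations are arbitrary assumptions.

```python
# Illustrative sketch (not from the paper): a Lotka-Volterra predator-prey
# model treated as a continuous-time dynamical system. The coefficients and
# initial populations are arbitrary assumptions.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.1, 1.5, 0.075   # assumed growth and interaction rates

def predator_prey(t, state):
    x, y = state                    # x: prey population, y: predator population
    return [a * x - b * x * y,      # prey grow, but are eaten
            -c * y + d * x * y]     # predators die off unless they find prey

sol = solve_ivp(predator_prey, (0.0, 30.0), [10.0, 5.0],
                t_eval=np.linspace(0.0, 30.0, 600))
print(sol.y[:, -1])                 # populations at the final time
```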
II. Introduction

The previous example demonstrates the importance of understanding the stable and unstable regions of a nonlinear dynamical system. One way to achieve this is by linearizing the nonlinear system about an equilibrium point. However, this method is only valid for regions close to the equilibrium point, and there is an additional loss of accuracy due to the linearization process. Therefore, linear analysis is entirely inadequate for understanding highly nonlinear phenomena such as the falling-leaf mode.
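To see why linearization is only locally informative, consider linearizing the Van der Pol oscillator (used later in this paper as a test case) about its equilibrium at the origin. The sketch below is not from the paper; it merely evaluates the Jacobian at the origin, whose eigenvalues describe the behavior in a small neighborhood of that point and reveal nothing about the limit cycle farther away.

```python
# Illustrative sketch (not from the paper): Jacobian linearization of the
# Van der Pol oscillator x1' = x2, x2' = eps*(1 - x1^2)*x2 - x1 about the
# equilibrium at the origin. The eigenvalues characterize only local behavior.
import numpy as np

eps = 1.0                                     # assumed parameter value

def jacobian(x1, x2):
    # Partial derivatives of the right-hand side with respect to (x1, x2).
    return np.array([[0.0, 1.0],
                     [-1.0 - 2.0 * eps * x1 * x2, eps * (1.0 - x1**2)]])

A = jacobian(0.0, 0.0)                        # linearization at the origin
print(np.linalg.eigvals(A))                   # positive real parts: the origin is
                                              # unstable, but this says nothing
                                              # about the limit cycle far from it
```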

A framework has been developed to determine the long-term behavior of a nonlinear dynamical system without large approximations or restrictions on the domain of application. This framework uses concepts from the field of statistical physics. Specifically, the dynamical system is represented as a matrix called a stochastic, or Markov, matrix. Once this matrix has been created, future states of the system can be predicted through multiplication by the stochastic matrix, a linear operation.

The primary advantage of this probabilistic framework should now be evident. Even though the stochastic matrix contains a form of the nonlinear dynamics used to create it, the matrix itself is a linear operator. Thus, eigenvalue and eigenvector analysis can be applied to the stochastic matrix. In essence, this framework creates a linear representation of the nonlinear system without any loss of fidelity.

III. Theory: Markov Chains

A brief background on the theory behind the probabilistic framework will now be presented. The basis of the approach is a Markov chain.[5] A Markov chain is a probabilistic way to predict future events that depend only on the current state and not on the sequence of states that preceded it. This is done through a stochastic matrix $M$, also called a Markov matrix, populated according to the following rule: the entry of $M$ with indices i and j is the probability that the system is in state j and reaches state i in the next time step,

$M_{ij} = P(j \to i)$    (1)

The states i and j can represent different things depending on the problem. For instance, they might represent different cities, and $P(j \to i)$ would be the probability that an individual moves from city j to city i. It can be seen that $M$ is a square matrix with dimension equal to the number of independent states. Additionally, the entries of each column of $M$ sum to 1.

Once the stochastic matrix has been constructed, it contains a probabilistic representation of the system used to generate it. Future behavior can be determined simply by multiplication by the stochastic matrix, which is a linear operation:

$p^{+} = M p$    (2)

In this formula, $p$ is a column vector whose entries represent the probability of being in each state; the entries of $p$ must sum to 1. Multiplying this vector by the stochastic matrix yields a new probability vector, $p^{+}$, representing the probability distribution after one time step. The stochastic matrix can be applied successively to continue stepping through time. If enough iterations are applied, the system eventually approaches a stable distribution, in which the probability vector no longer changes under application of the stochastic matrix:

$p = M p$    (3)

Thus, this $p$ is an eigenvector of $M$ corresponding to an eigenvalue of 1, sometimes called the stationary probability vector. To find the long-term behavior of a Markov chain, it is sufficient to find this eigenvector. Note that the stationary probability vector depends only on the stochastic matrix; when the chain admits a unique stationary distribution, the long-term probability of being in a given state is therefore independent of the initial state.
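Equations (2) and (3) translate directly into a few lines of numerical linear algebra. The following is a minimal sketch, not code from the paper, that recovers the stationary probability vector of a column-stochastic matrix as the eigenvector associated with the eigenvalue 1; the example matrix is an arbitrary assumption.

```python
# Minimal sketch (not from the paper): stationary probability vector of a
# column-stochastic matrix M, i.e. the vector satisfying p = M p.
import numpy as np

# Arbitrary example of a column-stochastic matrix (each column sums to 1).
M = np.array([[0.9, 0.2, 0.1],
              [0.1, 0.7, 0.3],
              [0.0, 0.1, 0.6]])

eigvals, eigvecs = np.linalg.eig(M)
k = np.argmin(np.abs(eigvals - 1.0))      # eigenvalue closest to 1
p_stat = np.real(eigvecs[:, k])
p_stat = p_stat / p_stat.sum()            # normalize so the entries sum to 1

print(p_stat)                             # long-term probability of each state
print(np.allclose(M @ p_stat, p_stat))    # fixed-point check: p = M p
```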
IV. Generating the Stochastic Matrix

With this probabilistic framework, the problem of determining the stable region of a dynamical system has been reduced to an eigenvector problem on the stochastic matrix, $M$. Therefore, a means of computing this matrix must be derived. Since a Markov chain requires a finite number of possible states, the domain of the problem must be discretized. This is accomplished using a rectangular grid. This is by no means the only way to discretize the domain; it was selected for its ease of implementation.
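A uniform rectangular grid over a bounded region of state space is fully described by its bounds and the number of cells per axis. The sketch below is illustrative only and is not from the paper; the domain bounds and resolution are assumptions, and the helper simply maps a continuous state to the flat index of the grid cell containing it.

```python
# Illustrative sketch (not from the paper): a uniform rectangular grid over a
# bounded 2D region of state space. Bounds and resolution are assumptions.
import numpy as np

xmin, xmax, ymin, ymax = -3.0, 3.0, -3.0, 3.0   # assumed domain of interest
nx, ny = 32, 32                                 # assumed cells per axis

dx = (xmax - xmin) / nx
dy = (ymax - ymin) / ny

def cell_index(x, y):
    """Flat index of the grid cell containing (x, y), or None if outside."""
    i = int((x - xmin) // dx)
    j = int((y - ymin) // dy)
    if 0 <= i < nx and 0 <= j < ny:
        return j * nx + i
    return None

print(cell_index(0.0, 0.0))   # index of the cell containing the origin
```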

Now, an interpretation of the probability in Equation (1) can be made. States i and j are taken to represent individual grid cells. Therefore, $P(j \to i)$ represents the probability that the system's state starts in cell j and ends up in cell i after some time step.

The Markovian probability can be computed in the following way; the concepts will be introduced in two dimensions and then generalized to higher dimensions.(a) First, each grid cell is represented as a polygon. The nonlinear dynamics are applied to each of the four vertices of cell j, creating a new, transformed polygon, j^t. The area of the intersection of polygon j^t with polygon i, divided by the area of j, is the Markovian probability:

$M_{ij} = P(j \to i) = \dfrac{\mathrm{Area}(j^{t} \cap i)}{\mathrm{Area}(j)}$    (4)

Figure 1 gives a graphical interpretation of the two-dimensional case: cells i and j are labeled, the transformed vertices belonging to j^t can be seen, and the shaded region represents the intersection between polygons j^t and i.

Figure 1: Definition of the Markovian probabilities in two dimensions.

Though the concept was developed in two dimensions, the process is valid in N dimensions with the following corrections: instead of polygons, the N-dimensional objects are called polytopes;[6] the number of vertices of a hyperrectangular polytope is 2^N; and area is replaced by the N-dimensional concept of volume.

(a) The italicized terms pertain solely to two dimensions.
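Equation (4) can be evaluated with any polygon-clipping routine. The sketch below shows one possible implementation, not the authors' code: it assumes the Shapely library for the polygon intersection and uses a single fourth-order Runge-Kutta step of the Van der Pol dynamics as the flow map, filling the column of $M$ associated with a single source cell j. Mass that leaves the grid is simply lost, so columns for cells near the boundary may sum to less than 1.

```python
# Sketch of Equation (4) (one possible implementation, not the authors' code):
# P(j -> i) is the area of the transformed source cell that lands in target
# cell i, divided by the area of the source cell j. Shapely is assumed for the
# polygon intersection; the flow map is one RK4 step of the Van der Pol system.
import numpy as np
from shapely.geometry import Polygon, box

eps, dt = 1.0, 0.1                              # assumed parameter and time step

def f(state):
    x1, x2 = state
    return np.array([x2, eps * (1.0 - x1**2) * x2 - x1])

def flow(state):
    # One fourth-order Runge-Kutta step of the Van der Pol dynamics.
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def column_of_M(cell_j, grid_cells):
    """Markovian probabilities P(j -> i) for every target cell i."""
    corners = [np.array(c) for c in cell_j.exterior.coords[:-1]]        # 4 vertices
    transformed = Polygon([tuple(flow(c)) for c in corners]).buffer(0)  # cell j^t
    return np.array([transformed.intersection(cell_i).area / cell_j.area
                     for cell_i in grid_cells])

# Tiny example: a 2x2 grid on [-1, 1]^2, with j taken as the lower-left cell.
cells = [box(x, y, x + 1.0, y + 1.0) for y in (-1.0, 0.0) for x in (-1.0, 0.0)]
print(column_of_M(cells[0], cells))
```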

V. Results

The Van der Pol oscillator was used as a nonlinear test case for the probabilistic framework.(b) This system contains a stable region called a limit cycle; in the long term, all trajectories converge upon this region. The goal was to use the framework to determine the limit cycle. This required the creation of the stochastic matrix using the process described above. Next, standard eigenvector analysis was conducted on the stochastic matrix to produce the stationary probability vector. Grid cells with a high stationary probability define the limit cycle; when these cells are plotted, the result is a graphical representation of the limit cycle.

Figure 2 demonstrates the computed limit cycles. The solid white line defines the true limit cycle, and the grey region gives the limit cycle as determined by the probabilistic algorithm. Starting with Figure 2(a), the limit cycle is computed quickly at a low resolution. Each successive iteration requires more execution time but doubles the resolution. After six iterations, the computed limit cycle is quite well defined. At each level of resolution, the true limit cycle is contained within the probabilistic representation. This is important because even at the lowest resolution the framework produces the correct result and yields valuable information. As the resolution increases, the set is further refined until it represents the true limit cycle very closely.

(b) See the Appendix for more information on the Van der Pol oscillator.

VI. Discussion

The goal of this research was to develop a new framework for analyzing nonlinear dynamical systems. To that end, a technique from the field of statistical physics was adapted. Specifically, a stochastic matrix is generated for the system. Though the matrix contains a representation of the nonlinear nature of the system, it is itself a linear operator, allowing for eigenvector and eigenvalue analysis. It is subsequently straightforward to find the stable and unstable regions. The framework was applied to the Van der Pol oscillator, demonstrating that it generates the correct limit cycle at different resolutions.

This technique could have important applications wherever the behavior of nonlinear dynamical systems is of critical importance. For instance, designers of flight control laws could run simulations to detect unstable regions like those present in the F/A-18. This would allow appropriate corrections to be applied before the aircraft reached production, resulting in a sharp increase in the safety and robustness of the flight controllers.
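For completeness, the "plot the high-probability cells" step described in Section V might look like the following sketch, which is not from the paper; the grid layout matches the earlier sketches and the probability threshold is an arbitrary assumption. A plot of this kind is what Figure 2 shows at increasing resolutions.

```python
# Illustrative sketch (not from the paper): shade grid cells whose stationary
# probability exceeds a threshold, producing a picture of the limit cycle in
# the spirit of Figure 2. Grid layout and threshold are assumptions.
import numpy as np
import matplotlib.pyplot as plt

def plot_high_probability_cells(p_stat, nx, ny, xmin, xmax, ymin, ymax,
                                threshold=1e-4):
    dx, dy = (xmax - xmin) / nx, (ymax - ymin) / ny
    fig, ax = plt.subplots()
    for idx in np.nonzero(p_stat > threshold)[0]:
        i, j = idx % nx, idx // nx            # unflatten the cell index
        ax.add_patch(plt.Rectangle((xmin + i * dx, ymin + j * dy),
                                   dx, dy, color="grey"))
    ax.set_xlim(xmin, xmax)
    ax.set_ylim(ymin, ymax)
    plt.show()

# Usage (assuming p_stat from the earlier eigenvector sketch and a 32x32 grid):
# plot_high_probability_cells(p_stat, 32, 32, -3.0, 3.0, -3.0, 3.0)
```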

Figure 2: Computation of the Van der Pol oscillator limit cycle at increasing resolutions, panels (a)-(f). The solid white line indicates the true limit cycle.

Appendix
Van der Pol Oscillator

The Van der Pol oscillator was used as an example nonlinear system when describing the implementation of the probabilistic framework for understanding nonlinear dynamical systems. Equation (5) gives the second-order differential form of the system. This system models stable oscillations in electrical circuits with vacuum tubes. It has also been used to model neurological functions[7] and geologic faults.[8]

$\dfrac{d^{2}x}{dt^{2}} - \epsilon(1 - x^{2})\dfrac{dx}{dt} + x = 0$    (5)

As with any second-order differential equation, the Van der Pol equation can be transformed into a system of two first-order equations. In this way, the system can be either two-dimensional or three-dimensional: the parameter ε can be fixed, in which case the system is two-dimensional, or it can be left as an additional independent variable, causing the system to take on a third dimension.

The Van der Pol oscillator contains a stable region known as a limit cycle. A limit cycle is a region of attraction toward which trajectories converge; once in the limit cycle, the trajectories are periodic. In the case of the Van der Pol oscillator, all trajectories converge to the limit cycle regardless of initial condition, which makes the system ideal to use as a test case. Figure 3 shows the limit cycle of the system when ε = 1.

Figure 3: Limit cycle of the Van der Pol oscillator when ε = 1.

References
[1] Weisstein, E. W., "Dynamical System," Wolfram MathWorld.
[2] Weisstein, E. W., "Stability," Wolfram MathWorld.
[3] Chen, G., "Stability of Nonlinear Systems," Encyclopedia of RF and Microwave Engineering, Dec. 2004, pp. 4881-4896.
[4] Chakraborty, A., Seiler, P., and Balas, G. J., "Susceptibility of F/A-18 Flight Controllers to the Falling-Leaf Mode: Linear Analysis," Journal of Guidance, Control, and Dynamics, Vol. 34, No. 1, 2011.
[5] Grinstead, C. M. and Snell, J. L., Introduction to Probability, American Mathematical Society, 1997.
[6] Grünbaum, B., Convex Polytopes, John Wiley & Sons, 1967.
[7] FitzHugh, R., "Impulses and physiological states in theoretical models of nerve membranes," Biophysical Journal, 1961, pp. 445-466.
[8] Cartwright, J., Eguiluz, V., Hernandez-Garcia, E., and Piro, O., "Dynamics of elastic excitable media," International Journal of Bifurcation and Chaos, No. 9, 1999, pp. 2197-2202.