
Dynamical Systems and Mathematical Models

A system is any collection of objects and their interconnections that we happen to be interested in, be it physical, engineering, economic, financial, demographic, social or whatever. A dynamical system is one which evolves with time. A mathematical model of such a system is a collection of quantities (variables and parameters) and their interrelationships (usually equations) which describe how the system behaves. More than that, a useful mathematical model is one which not only explains the system's behaviour to date but can also be used to predict its future behaviour; the ability to predict a system's behaviour is the first step in controlling it. We'll tacitly assume that, having identified those aspects of a system that we wish to model, it is possible to build a mathematical model, and that, having done so, we'll use the terms system and mathematical model loosely and interchangeably.

What's the difference between a variable and a parameter? The answer is essentially one of semantics, but how these words are used throughout this module is best illustrated by an example. Consider modelling the passage of a small ball falling through the air towards the ground under the influence of gravity and wind fluctuations. Time is obviously an important variable, as are the position of the ball as a function of time and the strength and direction of the wind as functions of time. The mass of the ball, the gravitational constant and any frictional constants are usually considered parameters of this system. Thus variables are quantities which vary significantly during the time period over which the system is observed, while parameters are quantities which are constant, or whose variation is small and irrelevant in comparison with that of the variables (which makes it attractive to model this type of parameter as a constant), over the same time period.
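The distinction can be made concrete in code. The sketch below is a hypothetical illustration (all numerical values are made up, not taken from the notes): it integrates Newton's 2nd law for the falling ball with a simple linear drag term, ignoring wind. The mass m, gravitational constant g and drag coefficient c enter as fixed parameters, while time, height and velocity are the variables that evolve during the observation period.

```python
# Hypothetical sketch of the falling-ball model: parameters (m, g, c) are
# held constant, while the variables (t, h, v) evolve with time.
# All numerical values are illustrative assumptions.

def simulate_fall(h0, v0, m=0.1, g=9.81, c=0.05, dt=0.01):
    """Euler-integrate Newton's 2nd law, m*dv/dt = -m*g - c*v,
    from initial height h0 and velocity v0 until the ball
    reaches the ground (height 0)."""
    t, h, v = 0.0, h0, v0              # variables: change during the run
    while h > 0.0:
        a = -g - (c / m) * v           # acceleration from the force balance
        h += v * dt                    # update position...
        v += a * dt                    # ...then velocity (explicit Euler)
        t += dt
    return t, v

t_ground, v_ground = simulate_fall(h0=10.0, v0=0.0)
```

Because the parameters are fixed and there is no wind, running the simulation twice from the same initial height and velocity gives the same impact time and impact velocity, which anticipates the notion of a deterministic system introduced below.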
Depending on how frequently the system is observed, the associated mathematical model may be continuous-time (all variables are continuously monitored, so the lifetime for which the model is valid is an interval) or discrete-time (all variables are monitored at discrete points in time, so the time period of interest is a countable set). We shall use t_0 and t_f as the initial and final times of the relevant time period. For continuous-time models, we shall use t to represent an arbitrary time instant in [t_0, t_f]. For discrete-time models, t_k, k = 0, 1, 2, ..., will represent the k-th observation time. In addition, we'll usually restrict ourselves to equally spaced observations, in which case, if M + 1 observation times are available to us between t_0 and t_f, then with Δ = (t_f - t_0)/M we have t_k = t_0 + kΔ. For a given system and time period, t_0 and Δ are fixed, so t_k and k are equivalent, and for convenience we'll use k as the time variable.

Consider the falling ball example again. In a situation where the wind fluctuations are negligible, it is in principle possible to compute exactly the position of the ball as a function of time, given that we know the parameter values and the initial position and velocity of the ball, by integrating the differential equation given by Newton's 2nd law. Indeed, even in a windy situation, if we knew the wind fluctuations (i.e. had a function for them), we could solve the differential equation to get the future position of the ball exactly. In practice, however, we don't know in advance how the wind will fluctuate, and thus it is impossible to compute exactly the future position of the ball (we might tackle this situation using probabilistic or statistical methods). Such examples lead us to the following classification. A system is called deterministic if knowledge of its initial variable values, its parameter values and all input functions over time is sufficient to compute exactly its future behaviour (computing future behaviour means computing future values of all relevant variables). Systems for which we can only estimate or predict future behaviour in some statistical sense are called stochastic. It is common practice to approximate a stochastic system by a simplified deterministic model, usually because it is argued that such a model captures the essential features of the system. All the models dealt with in this module are deterministic.

1 State Models

Phrases such as "state of the country", "state of the economy" and "state of the weather" conjure up somewhat vague images of how the complex system under consideration is currently behaving.
The mathematical model known as the state model attempts to reflect this notion, yet being mathematical it is precisely defined and thus perhaps more limited. It is closely related to, and developed out of, the notion of a deterministic system, though it is easily adapted to describe stochastic systems. If we place ourselves in the position of an external observer, then once we identify the system of interest, we can distinguish between it and its environment. (The border between the two can in reality be fuzzy, with the result that the border between the mathematical model and its environment can appear arbitrary.) We place the variables associated with the mathematical model in three categories:

Input variables or inputs - those that describe how the environment influences the system, or those that the observer can choose to affect the behaviour of the system. We'll use u_1(t), u_2(t), ..., u_r(t) to denote the values of the inputs to a continuous-time model with r inputs. As a shorthand, we'll use vector notation as follows:

u(t) = (u_1(t), u_2(t), ..., u_r(t))^T

For a discrete-time model with equally spaced observations, we'll similarly use u(k). It is not necessary for a dynamic system to have inputs; such systems are called unforced or autonomous.

Output variables or outputs - those that describe how the system influences the environment, or the variables that we observe. We'll denote the outputs of an m-output system by y_1(t), y_2(t), ..., y_m(t). Again we'll use

y(t) = (y_1(t), y_2(t), ..., y_m(t))^T

and similarly y(k). In many situations, the outputs are a subset of the state variables.

State variables - the current state of the system is described by the values of a set of variables (the state variables or internal variables), which contain sufficient information to compute exactly the future states of the system (if the future inputs are known). The number of state variables is called the order of the system. It is an important measure of

system complexity. We'll denote it by n. Then we'll denote the state variables by x_1(t), x_2(t), ..., x_n(t) and the state vector by

x(t) = (x_1(t), x_2(t), ..., x_n(t))^T

or, for discrete-time models, by x(k). We sometimes refer to the state vector as the state, and to a particular value of the state vector as a state. For many high-order systems, it is usual to have n ≫ r and n ≫ m.

We next look at the relationships between variables. Implicit in the definition of the state is that future states can be computed from the current state and knowledge of any inputs now and to come. This leads us to a functional relationship called the state equation. The state equation is a dynamic equation in that its solution tells us how the state evolves with time, but the exact nature of this equation depends on whether we are dealing with a discrete-time or continuous-time model.

1.1 State Equation for discrete-time Models

The state equation details the relationship between the next state and the current state, current input if any, and current time if appropriate. Thus for systems with equally spaced observations, the most general form is

x(k + 1) = f(x(k), u(k), k)

where f is an appropriate function. Such a model is said to be time-varying. If there is no explicit dependence on time, then the model is said to be time-invariant. So, for instance, a time-invariant system with no input has a state equation of the form

x(k + 1) = f(x(k))

Many of the examples we will meet are of the form

f(x(k), u(k)) = F(x(k)) + g(x(k))u(k)

This is referred to as an affine control system. A particularly common type of affine system is the linear system

f(x(k), u(k)) = Ax(k) + Bu(k)

1.2 State Equation for continuous-time Models

There is no such thing as the next time and hence no next state. However, if the state is differentiable, then the rate of change of the state with respect to time will depend only on the current state, current input if any, and current time if appropriate, and so the state may be represented as the solution of the differential equation

ẋ(t) = f(x(t), u(t), t)

where ẋ = dx/dt and the function f is such that this equation has a unique solution in (t_0, t_f) for each initial state x(t_0) and all forcing functions u(t). Again, this formulation of the model is said to be time-varying. We can also have time-invariant models, and so, for example, a time-invariant system with no input has a state equation of the form

ẋ(t) = f(x(t))

Again, many of the examples will be affine or linear.

1.3 Output Equation

In addition to the state equation, the state model requires an output equation which tells us how the current outputs depend on the current state, (occasionally) the current inputs and the current time. For both discrete-time and continuous-time systems, the form of this static equation is therefore

y(t) = h(x(t), u(t), t)

where h is an appropriate function. For many low-order systems, it is usual to have y(t) = x(t).
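To tie the state equation and output equation together, here is a minimal sketch of a time-invariant discrete-time linear state model, x(k+1) = Ax(k) + Bu(k) with output y(k) = Cx(k), simulated by iterating the state equation from an initial state. The matrices and initial state are made-up example values (a second-order system, n = 2, with r = 1 input and m = 1 output), not taken from the module.

```python
# Hypothetical illustration: a time-invariant discrete-time linear state
# model x(k+1) = A x(k) + B u(k) with output equation y(k) = C x(k).
# The matrices below are assumed example values.

def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector (list)."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vec_add(a, b):
    return [x + y for x, y in zip(a, b)]

def simulate(A, B, C, x0, u_seq):
    """Iterate the state equation from x(0) = x0 under the input
    sequence u_seq, returning the outputs y(0), y(1), ..."""
    x, ys = x0, []
    for u in u_seq:
        ys.append(mat_vec(C, x))                    # output: y(k) = C x(k)
        x = vec_add(mat_vec(A, x), mat_vec(B, u))   # state:  x(k+1) = A x(k) + B u(k)
    return ys

# Order n = 2, one input (r = 1), one output (m = 1).
A = [[0.9, 0.1],
     [0.0, 0.8]]
B = [[0.0],
     [1.0]]
C = [[1.0, 0.0]]

ys = simulate(A, B, C, x0=[1.0, 0.0], u_seq=[[0.0]] * 5)
```

With zero input the system is unforced, and since this A happens to have its eigenvalues (0.9 and 0.8) inside the unit circle, the output decays geometrically towards zero; replacing u_seq with a nonzero sequence shows how the input drives the state through B.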