STABILITY ANALYSIS AND CONTROL OF STOCHASTIC DYNAMIC SYSTEMS USING POLYNOMIAL CHAOS. A Dissertation JAMES ROBERT FISHER


STABILITY ANALYSIS AND CONTROL OF STOCHASTIC DYNAMIC SYSTEMS USING POLYNOMIAL CHAOS

A Dissertation by JAMES ROBERT FISHER

Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

August 2008

Major Subject: Aerospace Engineering

STABILITY ANALYSIS AND CONTROL OF STOCHASTIC DYNAMIC SYSTEMS USING POLYNOMIAL CHAOS

A Dissertation by JAMES ROBERT FISHER

Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Approved by:
Chair of Committee: Raktim Bhattacharya
Committee Members: Srinivas Rao Vadali, John L. Junkins, John Hurtado, Suman Chakravorty, D.V.A.H.G. Swaroop
Head of Department: Helen Reed

August 2008

Major Subject: Aerospace Engineering

ABSTRACT

Stability Analysis and Control of Stochastic Dynamic Systems Using Polynomial Chaos. (August 2008)
James Robert Fisher, B.S., Texas A&M University; M.S., Texas A&M University
Chair of Advisory Committee: Dr. Raktim Bhattacharya

Recently, there has been a growing interest in analyzing stability and developing controls for stochastic dynamic systems. This interest arises out of a need to develop robust control strategies for systems with uncertain dynamics. While traditional robust control techniques ensure robustness, these techniques can be conservative as they do not utilize the risk associated with the uncertainty variation. To improve controller performance, it is possible to include the probability of each parameter value in the control design. In this manner, risk can be taken for parameter values with low probability and performance can be improved for those of higher probability. To accomplish this, one must solve the resulting stability and control problems for the associated stochastic system. In general, this is accomplished using sampling based methods by creating a grid of parameter values and solving the problem for each associated parameter. This can lead to problems that are difficult to solve and may possess no analytical solution. The novelty of this dissertation is the utilization of non-sampling based methods to solve stochastic stability and optimal control problems. The polynomial chaos expansion is able to approximate the evolution of the uncertainty in state trajectories induced by stochastic system uncertainty with arbitrary accuracy. This approximation is used to transform the stochastic dynamic system into a deterministic system that can be analyzed in an analytical framework.

In this dissertation, we describe the generalized polynomial chaos expansion and present a framework for transforming stochastic systems into deterministic systems. We present conditions for analyzing the stability of the resulting systems. In addition, a framework for solving $L_2$ optimal control problems is presented. For linear systems, feedback laws for the infinite-horizon $L_2$ optimal control problem are presented. A framework for solving finite-horizon optimal control problems with time-correlated stochastic forcing is also presented. The stochastic receding horizon control problem is also solved using the new deterministic framework. Results are presented that demonstrate the links between stability of the original stochastic system and the approximate system determined from the polynomial chaos approximation. The solutions of these stochastic stability and control problems are illustrated throughout with examples.

To Marin, who believed in me even when I did not.

ACKNOWLEDGMENTS

While this dissertation bears my name, it would never have reached completion without the aid and support of many people. First, I would like to thank Dr. Raktim Bhattacharya who has been a friend and mentor. His creativity and drive have been essential to the completion of this work. Secondly, I would like to thank Dr. Vadali for his guidance throughout my time as a graduate student. My appreciation also goes to Dr. Suman Chakravorty for letting me bother him many times to discuss the theoretical implications of the work. I would also like to thank my friends who have been very supportive and have helped me to maintain my sanity while completing this work. I would especially like to thank Jeff Wischkaemper for always being there to share a laugh or a hardship. As always I must thank my wife Marin for all the support and love she has shown me during this time. I would never have completed this without her. Most importantly, I must thank my God, who leads me beside still waters. His patience when I fall short and His comfort when I feel lost have never failed me.

TABLE OF CONTENTS

CHAPTER

I   INTRODUCTION
    A. Background
    B. Content of This Dissertation

II  POLYNOMIAL CHAOS
    A. Preliminary Concepts from Probability Theory
    B. The Wiener-Askey Polynomial Chaos
       1. Askey Hypergeometric Orthogonal Polynomials
          a. Generalized Hypergeometric Series
          b. Properties of Orthogonal Polynomials
          c. Askey-Scheme
       2. Homogeneous Chaos
       3. Generalized Polynomial Chaos
    C. Building Sets of Orthogonal Polynomials
    D. Karhunen-Loève Expansion
    E. Summary

III MODELLING AND STABILITY ANALYSIS OF STOCHASTIC DYNAMIC SYSTEMS
    A. Introduction
    B. Wiener-Askey Polynomial Chaos
    C. Stochastic Linear Dynamics and Polynomial Chaos
    D. Stochastic Nonlinear Dynamics and Polynomial Chaos
       1. Preliminaries
       2. Stochastic Polynomial Systems
       3. Nonlinear Systems
       4. Example
    E. Stochastic Stability Analysis of Linear Systems
    F. Linear Stability Example
    G. Stability of Nonlinear Systems
       1. Methodology
       2. Example
    H. Summary

IV  OPTIMAL CONTROL OF STOCHASTIC SYSTEMS

    A. Introduction
    B. Stochastic LQR Design
       1. Minimum Expectation Control
       2. Feedback Solutions
          a. Augmented Deterministic State Feedback with Constant Deterministic Gain
          b. Stochastic State Feedback with Constant Deterministic Gain
          c. Stochastic State Feedback with Stochastic Gain
          d. Deterministic Control with Augmented State Feedback
       3. Examples
          a. Deterministic State Feedback Example
          b. Stochastic State Feedback with Constant Gain Example
    C. Optimal Control with Probabilistic System Parameters
       1. Minimum Expectation Trajectories
       2. Minimum Covariance Trajectories
       3. Example - Van der Pol Oscillator
    D. Optimal Control with Stochastic Forcing
    E. Summary

V   RECEDING HORIZON CONTROL
    A. Introduction
    B. Stochastic Receding Horizon Control
       1. Receding Horizon Policy
       2. State and Control Constraints
          a. Expectation Constraints
          b. Covariance Constraints
          c. Probability One Constraints
       3. Stability of the RHC Policy
       4. RHC Policy Description
          a. Policy A
          b. Policy B
          c. Policy C
    C. Examples
       1. Example 1
       2. Example 2

    D. Summary

VI  STABILITY OF STOCHASTIC SYSTEMS
    A. Introduction
    B. Convergence Results
    C. Stability Results
    D. Summary

VII CONCLUSIONS
    A. Dissertation Summary
    B. Future Work
       1. Controllability and Observability
       2. Computational Efficiency
       3. RHC Formulation
       4. Robust Control

REFERENCES
APPENDIX A
APPENDIX B
VITA

LIST OF TABLES

Table 1: Examples of hypergeometric series
Table 2: Correspondence between choice of polynomials and given distribution of Δ(ω)
Table 3: Correspondence between choice of polynomials and given distribution of Δ(ω)
Table 4: One-dimensional Hermite polynomials up to 8th order
Table 5: Two-dimensional Hermite polynomials up to 4th order
Table 6: One-dimensional Legendre polynomials up to 8th order
Table 7: Two-dimensional Legendre polynomials up to 3rd order
Table 8: One-dimensional Jacobi polynomials up to 4th order
Table 9: Polynomials up to 4th order for Gaussian distribution with domain ±2σ
Table 10: Two-dimensional polynomials up to 3rd order where Δ1 is governed by a uniform distribution and Δ2 is governed by a beta distribution

LIST OF FIGURES

Figure 1: Askey-scheme of hypergeometric orthogonal polynomials
Figure 2: Altitude and velocity response for longitudinal Vinh's equations with uncertainty in h
Figure 3: γ and horizontal position response for longitudinal Vinh's equations with uncertainty in h
Figure 4: Variance of altitude response for Monte-Carlo and gpc predicted trajectories
Figure 5: Variance of velocity response for Monte-Carlo and gpc predicted trajectories
Figure 6: Altitude mean and variance errors between gpc and Monte-Carlo for various polynomial orders
Figure 7: Closed loop eigenvalue distributions of short-period mode for ±20% parameter uncertainty
Figure 8: Predicted and Monte-Carlo system response to ±10% parameter uncertainty
Figure 9: Normalized mean and variance errors between gpc prediction and Monte-Carlo observation
Figure 10: Family of α trajectories from Monte-Carlo and predicted mean and uncertainty from PC
Figure 11: Frobenius norm of $P_{mc} - P$ normalized by $\|P\|_F$
Figure 12: Closed loop eigenvalues of minimum expectation control and nominal optimal control
Figure 13: Comparison of cost of nominal and minimum expectation control as a function of Δ

Figure 14: Closed loop eigenvalues of minimum expectation control for Beta distribution (top); cost functions associated with various values of α as a function of Δ (bottom)
Figure 15: Beta distribution for various values of α = β; the distribution approaches a δ function as α grows
Figure 16: Comparison of trajectories obtained from gpc expansions and Monte-Carlo simulations
Figure 17: Evolution of the probability density function of the state trajectories due to µ(Δ). The solid (red) line denotes the expected trajectory of (x1, x2). The circles (blue) denote time instances, on the mean trajectory, for which the snapshots of the pdf are shown
Figure 18: PDF at final time, due to the terminal constraint based on covariance
Figure 19: Comparison of trajectories obtained from gpc expansions and Monte-Carlo simulations
Figure 20: Eigenvalues (λ_n) of the KL expansion for various window lengths (1/a)
Figure 21: Comparison of trajectories obtained from gpc expansion and Monte-Carlo simulations for 2nd order gpc expansion
Figure 22: Comparison of trajectories obtained from gpc expansion and Monte-Carlo simulations for 8th order polynomials
Figure 23: Legendre polynomials as a function of Δ on the interval [−1, 1]
Figure 24: Trajectories generated for various values of Δ and their bounds approximated at extreme values of Δ
Figure 25: Sample trajectory paths to demonstrate possible constraint violations
Figure 26: Receding horizon strategy with open loop statistics prediction
Figure 27: First two steps of receding horizon feedback policy
Figure 28: Block diagram of closed loop receding horizon policy

Figure 29: Trajectories for open loop RHC policy for various values of Δ ∈ [−1, 1]
Figure 30: Expected trajectories for open loop RHC policy obtained by Monte-Carlo simulation and predicted by gpc
Figure 31: Trajectories for closed loop RHC policy for various values of Δ ∈ [−1, 1]
Figure 32: Response of the linearized F-16 model under RHC control for twenty values of Δ ∈ [−1, 1] with control constraint
Figure 33: Unconstrained response of the linearized F-16 model under RHC control for twenty values of Δ ∈ [−1, 1] with horizon length of 1 second
Figure 34: Unconstrained response of the linearized F-16 model under RHC control for twenty values of Δ ∈ [−1, 1] with horizon length of 0.6 seconds
Figure 35: Eigenvalues of system over 2-σ variation in Δ for gpc approximation and Monte-Carlo prediction
Figure 36: Expected value of both states of the stochastic system from gpc prediction and Monte-Carlo observation
Figure 37: Variance response of each state of the stochastic system from gpc prediction and Monte-Carlo observation

CHAPTER I

INTRODUCTION

A. Background

Recently there has been a growing interest in combining robust control approaches with stochastic control methods to develop the so-called field of probabilistic robust control. This field arises from the need to a) understand how stochastic system uncertainties affect state trajectories and, b) exploit it to design less conservative robust control algorithms. Traditional robust control design techniques assume uniformly distributed uncertainty over the parameter space, and the controller is designed to be robust for the worst case scenario. In the framework of probabilistic robust control, the binary notion of robustness is discarded and the notion of a risk-adjusted robustness margin tradeoff is adopted. This framework is more practical as in many cases a control system designer may have knowledge of the distribution of the uncertainty in the system parameters, and may be willing to accept a small, well defined level of risk in order to obtain larger robustness margins. It is also possible to stabilize the system for all possible parameter values with non-zero probability of occurrence and optimize performance with respect to the probability distribution of the parameters. Both these design philosophies result in less conservative controllers in the probabilistic sense. Therefore, it is important to consider not only the range of parameter values but also the probability density function of the parameters in the controller design. Many approaches exist for stability analysis and controller design for stochastic systems. In general, these approaches tend to fall into one of two categories.

The journal model is IEEE Transactions on Automatic Control.

One approach deals with solution of equations of motion that result from Itô's formula [1]. In general, expressions of this type can be difficult to solve and solutions only exist for certain special cases. Furthermore, these types of problems generally deal only with systems with stochastic forcing and not with systems that contain stochastic uncertainty in the parameters directly. In general, these problems tend to involve white noise processes and do not address correlated inputs. The other approach to solving stochastic stability and control problems involves sampling based methods. These methods attempt to approximate a probability distribution by utilizing a large number of points. This is generally accomplished by creating a mesh of values over the support of the distribution and performing simulation and analysis for each of the points in the mesh. In stability analysis, for example, conditions for stability might be tested for each of the points to determine if the system is stable for the entire distribution. While this is certainly effective, it requires a large number of computations and therefore can require a large amount of computational time. Furthermore, if such methods are used to determine system trajectories, any changes in initial conditions will void all calculations and require recomputation for each point in the distribution because the trajectories are uniquely determined by their initial conditions. Much of the previous work has dealt with solving stochastic stability and control problems with either stochastic forcing or probabilistic uncertainty in system parameters. Specifically, problems dealing with stochastic forcing assume that the unknown forcing terms are Gaussian. When this is not the case, it is common to approximate the distribution using a Gaussian closure. The problem of covariance control with Gaussian excitation has also been investigated, by Skelton et al. [2]. The problem of parametric uncertainty is treated separately from that of stochastic excitation. Much of the work here deals with the utilization of sampling based methods. Stengel [3]

introduces the idea of probability of stability and uses sampling based methods to ensure robustness and stability in a probabilistic sense [4]. The methodology is applied to nonlinear systems, but requires a genetic algorithm to solve the resulting control problem [5, 6]. In a similar approach, Barmish et al. use Monte-Carlo based methods to analyze stability and control problems in an LMI framework [7, 8]; however, the results are limited to a multidimensional uniform distribution with respect to the uncertain parameters. An additional approach by Polyak et al. [9] develops an algorithm to determine a control with guaranteed worst-case cost. This approach also requires the uncertain system parameters to be linear functions of the random variable. Additionally, a probabilistic design is applied to LPV control in [10]. Here, an algorithm is developed that uses sequential random generation of uncertain parameters using the associated probability distribution function to converge to an LPV solution that is satisfied for the entire range of uncertainty. A sampling based technique is also applied to the $H_\infty$ problem in [11]. In this case a similar sequential solution technique is employed to solve the $H_\infty$ problem. The novelty of the framework presented in this work is that non-sampling based methods are used to approximate, with arbitrary accuracy, the evolution of uncertainty in state trajectories induced by uncertain system parameters. The framework is built on the polynomial chaos theory which transforms stochastic dynamics into deterministic dynamics in a higher dimensional state space. However, this increase in dimensionality is often significantly lower than that of sampling based methods for a comparably accurate representation of uncertainty. The benefit of this approach is that stochastic linear and nonlinear systems are transformed into deterministic systems and existing stability analysis and control theory can be used for design and analysis. Polynomial chaos was first introduced in 1938 by Wiener [12], where Hermite polynomials were used to model stochastic processes with Gaussian random variables. As

the concept of chaos would not be developed until decades later, the concept of using these polynomials to model stochastic behavior was termed Homogeneous Chaos. According to Cameron and Martin [13], such an expansion converges in the $L_2$ sense for any arbitrary $L_2$ functional. In terms of stochastic processes, this implies that the expansion is able to converge to any arbitrary stochastic process with a finite second moment. This applies to most physical systems. The original Homogeneous Chaos expansion utilizes Hermite polynomials to model stochastic processes. Though these polynomials can be used to model any stochastic process with arbitrary accuracy, the convergence rate is exponential only when they are used to model Gaussian processes [14]. This is because the Hermite polynomials associated with the Homogeneous Chaos expansion are orthogonal with respect to the probability density function (pdf) of a Gaussian distribution. Using this idea and applying it to other types of polynomials, Xiu et al. [15] generalized the result of Cameron-Martin to various continuous and discrete distributions using orthogonal polynomials from the so-called Askey-scheme [16] and demonstrated $L_2$ convergence in the corresponding Hilbert functional space. This is popularly known as the generalized polynomial chaos (gpc) framework. In this work, it was demonstrated that when polynomials orthogonal with respect to a given probability distribution are used, the convergence rate for the resulting approximation is exponential even when the process is the solution of a differential equation. The gpc framework has been applied to applications including stochastic fluid dynamics [17, 18, 19], stochastic finite elements [20], and solid mechanics [21, 22]. In general these works are applied to static problems, though in [20] application to a dynamic system is presented briefly. Application of gpc to control related problems has been surprisingly limited. Furthermore, there have been few if any attempts at developing a generalized framework to understand the dynamics of the deterministic

system obtained from utilizing the expansion. The work of Hover et al. [23] addresses stability and control of a bilinear dynamical system with probabilistic uncertainty on the system parameters. The controller design problem considered involved determining a family of proportional gains to minimize a finite time integral cost functional. Outside of the work of Hover, there has been additional work that has dealt with applying polynomial chaos to estimation and control problems in power electronics [24, 25]. Unfortunately, several of these papers seem to assume that the uncertainty is time varying and governed by white processes, and thus apply the expansion incorrectly, as an infinite number of random variables would be required to model such processes.

B. Content of This Dissertation

In this dissertation we focus on the application of the polynomial chaos expansion to stability analysis and control of stochastic linear and nonlinear systems. In particular, we assume that the uncertainty in the system dynamics is dependent on a random variable governed by a known probability distribution. The uncertainty may enter the system as linear or nonlinear functions of the random variable, which may be governed by any continuous stationary probability distribution. Although it is certainly possible to extend these results to discrete distributions, these have not been treated. The benefit of the gpc approach is that it admits analytical solutions to stochastic stability and control problems. In fact, we will show that known methods for stability and control analysis can be readily applied to stochastic systems in this framework. The main contribution of this dissertation lies in the application of the gpc expansion to linear and nonlinear stability analysis and control design problems. Conditions for the solution of these problems are written in the gpc framework and

demonstrated with several examples. The work is divided into seven chapters. The second chapter deals with background on probability theory and outlines generalized Polynomial Chaos theory. Many fundamental concepts such as that of a random variable and a σ-algebra will be presented. We will also outline the Homogeneous Chaos framework and show the extension of this concept to other polynomial sets. Finally, a methodology for incorporating correlated noise into the analysis will be presented by means of the Karhunen-Loève (KL) expansion. This will allow us to analyze systems with correlated process noise uncertainty. Chapter III deals with modelling and stability analysis of stochastic systems using the generalized Polynomial Chaos framework. The chapter will deal with results for both linear and nonlinear systems. A generalized framework for modelling stochastic linear systems with the gpc expansion is presented. Next, a discussion on the application of the gpc expansion for different types of nonlinearities is presented, and a system form is presented for polynomial systems. The ability of the expansion to accurately predict the statistics of these systems is demonstrated by application to the non-dimensional longitudinal Vinh's equations with uncertainty in initial conditions. Stability results for linear and nonlinear systems are presented in terms of the gpc framework. For linear systems, results are presented for both continuous and discrete time systems. These results are verified with examples. In Chapter IV, we focus on optimal control of stochastic linear and nonlinear systems. For linear systems, we focus on optimal control in the $L_2$ sense for systems with probabilistic uncertainty in the system parameters. A framework for minimum expectation control is presented and the problem is solved for different feedback structures. Conditions for feasibility and optimality are determined for each of these structures. In addition, the minimum expectation control framework is applied to nonlinear systems and the gpc expansion is used to generate optimal trajectories for

nonlinear stochastic problems. Finally, the KL expansion is used to generate optimal trajectories for systems with colored stochastic forcing. In Chapter V the gpc methodology is utilized to solve the stochastic receding horizon problem for linear systems. In particular, a proof is presented for stability in terms of the gpc coefficients, and several methods of performing receding horizon control for stochastic systems are presented. Examples are presented that highlight the differences in the application of these policies. Chapter VI presents several results that help theoretically justify the usage of the gpc expansion for processes that can be determined by solution of stochastic differential equations. Finally, in the last chapter the main contributions of the work are highlighted and areas of future research are presented. Various probability distributions and their corresponding gpc expansions are presented in the appendix, along with a methodology for performing analysis with respect to various confidence intervals. The gpc methodology is used to demonstrate a method for solving problems that require properties to be satisfied with a specific probability. Additionally, the gpc expansion is used to solve stochastic problems involving independent random variables governed by different probability distributions.

CHAPTER II

POLYNOMIAL CHAOS

The methodology described in this dissertation utilizes orthogonal functionals to represent 2nd order random processes that are solutions of stochastic dynamic systems. This approach is spectral with respect to the stochastic elements in the dynamic system. It utilizes families of orthogonal polynomials, which we will refer to as Polynomial Chaoses, to approximate both the functions of random variables which appear in the equations of motion for a dynamic system as well as the actual solution. In this chapter, we define the structure of these orthogonal polynomials and present some of their properties, which will be applied to analyze stochastic stability and control problems.

A. Preliminary Concepts from Probability Theory

Before presenting a formal definition of Polynomial Chaos, we introduce a few important concepts from probability theory. Let Ω be a set of events. This is the set of all possible outcomes, and to these outcomes we will assign probabilities. As we know intuitively, there are relationships between the probability that an event occurs and the probability that it does not. To account for the relationships between events we will need to introduce another set called a σ-algebra [1, 26].

Definition II.1. A σ-algebra $\mathcal{B}$ is a non-empty class of subsets of Ω such that
1. $\Omega \in \mathcal{B}$
2. $B \in \mathcal{B}$ implies $B^c \in \mathcal{B}$
3. $B_i \in \mathcal{B}$, $i \geq 1$ implies $\bigcup_{i=1}^{\infty} B_i \in \mathcal{B}$

One example of a σ-algebra is the set made of all possible combinations of elements in Ω. We are now ready to define a probability space [1, 26].

Definition II.2. A probability space is a triple $(\Omega, \mathcal{B}, P)$ where
- Ω is the sample space
- $\mathcal{B}$ is a σ-algebra of subsets of Ω
- $P$ is a probability measure, a function $P : \mathcal{B} \to [0, 1]$ such that
  1. $P(A) \geq 0$ for all $A \in \mathcal{B}$
  2. If $\{A_n, n \geq 1\}$ are events in $\mathcal{B}$ that are disjoint, then $P\left(\bigcup_{n=1}^{\infty} A_n\right) = \sum_{n=1}^{\infty} P(A_n)$
  3. $P(\Omega) = 1$

In this work we will deal with polynomials which are functions of random variables. These random variables are functions that map between a probability space and the probability space $(\mathbb{R}^k, \mathcal{B}(\mathbb{R}^k), P)$. We will next define the concept of a random variable. To accomplish this, we first define an inverse image.

Definition II.3. Suppose $\Omega_1$ and $\Omega_2$ are two sets (often $\Omega_2 = \mathbb{R}$) and suppose that $X : \Omega_1 \to \Omega_2$. The inverse image of $X$ is defined by
$$ X^{-1}(A_2) = \{\omega \in \Omega_1 : X(\omega) \in A_2\} $$

This is similar to a function inverse, but more general as it is defined on sets. The inverse image is defined to be all of the elements of $\Omega_1$ which correspond to elements in $\Omega_2$ when mapped through $X$. We now define a random variable [1, 26]. A random variable is defined between two measurable spaces (a measurable space is the pair $(\Omega, \mathcal{B})$).

Definition II.4. Suppose $(\Omega_1, \mathcal{B}_1)$ and $(\Omega_2, \mathcal{B}_2)$ are two measurable spaces and consider the function $X : \Omega_1 \to \Omega_2$. The function $X$ is said to be a measurable function if for every set $B_2 \in \mathcal{B}_2$, $X^{-1}(B_2) \in \mathcal{B}_1$.

Definition II.5. If $(\Omega_2, \mathcal{B}_2) = (\mathbb{R}^k, \mathcal{B}(\mathbb{R}^k))$ in Definition II.4, we call the function $X$ a random variable.

In the above definition, $\mathcal{B}(\mathbb{R}^k)$ is the σ-algebra generated by all the open subsets of $\mathbb{R}^k$. A random variable is therefore a function that associates a real number (or vector) with outcomes of an experiment or events. Next, we will define the expectation of a random variable $X$ [1, 26].

Definition II.6. Suppose $(\Omega, \mathcal{B}, P)$ is a probability space and $X : (\Omega, \mathcal{B}) \to (\bar{\mathbb{R}}, \mathcal{B}(\bar{\mathbb{R}}))$, where $\bar{\mathbb{R}} = [-\infty, \infty]$ ($X$ can have $\pm\infty$ in its range). Define the expectation of $X$, written $E[X]$, as the Lebesgue-Stieltjes integral of $X$ with respect to $P$, or
$$ E[X] = \int_{\Omega} X \, dP = \int_{\Omega} X(\omega) \, P(d\omega) $$

The expectation of a random variable is its average value with respect to the probability space from which it maps. In general, we will be dealing with variables that have an associated probability distribution (either continuous or discrete). When this is the case and $P(d\omega) = f(\omega)\, d\omega$, we can write the expectation operator as
$$ E[X(\omega)] = \int_{\Omega} X(\omega) f(\omega)\, d\omega \qquad \text{(II.1)} $$
when $f(\omega)$ is piecewise continuous, and
$$ E[X(\omega)] = \sum_{\{\omega \in \Omega \,:\, f(\omega) \neq 0\}} X(\omega) f(\omega) \qquad \text{(II.2)} $$

when there are discrete events that occur with finite probability. In the discrete case, the summation is over the values of $\omega \in \Omega$ that occur with non-zero probability, and $f(\omega)$ is the probability associated with the discrete point $\omega$. This discrete formulation will not be utilized in this work as we will mainly be dealing with continuous functions.

B. The Wiener-Askey Polynomial Chaos

One of the major difficulties in incorporating stochastic processes into analysis and control of dynamic systems is the necessity of dealing with abstract measure spaces which are usually infinite dimensional and are difficult to understand physically. In particular, it is difficult to understand the behavior of functions defined on this abstract measure space, more specifically the random variables defined on the σ-algebra of random events. In many applications a Monte-Carlo approach is used and the events in the σ-algebra are sampled. This requires a large number of sample points to achieve a good approximation. An alternative methodology is to approximate the function with a Fourier-like series.

1. Askey Hypergeometric Orthogonal Polynomials

To approximate a stochastic process, a set of orthogonal polynomials will be employed. In this section, we will present an overview of the theory of these polynomials as well as provide details on the Askey scheme. There is a wealth of literature on orthogonal polynomials, and the interested reader is referred to [27, 28, 29].

a. Generalized Hypergeometric Series

To begin, the generalized hypergeometric series as presented in [30] is introduced. The generalized hypergeometric series, ${}_rF_s$, is defined by
$$ {}_rF_s(a_1, \ldots, a_r; b_1, \ldots, b_s; z) = \sum_{k=0}^{\infty} \frac{(a_1)_k \cdots (a_r)_k}{(b_1)_k \cdots (b_s)_k} \frac{z^k}{k!}, \qquad \text{(II.3)} $$
where the term $(a)_n$ is the Pochhammer symbol defined by
$$ (a)_n = \begin{cases} 1, & n = 0, \\ a(a+1)\cdots(a+n-1), & n = 1, 2, \ldots \end{cases} \qquad \text{(II.4)} $$
The denominator terms $b_i \in \mathbb{Z}^+$ are positive to ensure that the denominator factors for the series are nonzero. The radius of convergence of the series depends on the relative values of $r$ and $s$ and is given by
$$ \rho = \begin{cases} 0, & r > s + 1 \\ 1, & r = s + 1 \\ \infty, & r < s + 1 \end{cases} \qquad \text{(II.5)} $$
While the values in the denominator ($b_i$) must be greater than zero, the terms in the numerator, $a_i$, may be negative. When one of the terms is negative, the series terminates at the value of that term. For example, if $a_1 = -m$,
$$ {}_rF_s = \sum_{k=0}^{m} \frac{(-m)_k \cdots (a_r)_k}{(b_1)_k \cdots (b_s)_k} \frac{z^k}{k!} \qquad \text{(II.6)} $$
When this occurs, the order of $z$ becomes finite, resulting in a polynomial that is $m$th order with respect to $z$.
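The short Python sketch below is not part of the original text (the function names are ours); it evaluates the Pochhammer symbol (II.4) and a truncation of the series (II.3), and checks the terminating case (II.6) against SciPy's Gauss hypergeometric function ${}_2F_1$.

```python
import math
import numpy as np
from scipy.special import hyp2f1  # SciPy's 2F1, used only as a cross-check

def pochhammer(a, n):
    """Rising factorial (a)_n = a(a+1)...(a+n-1), with (a)_0 = 1, cf. (II.4)."""
    out = 1.0
    for k in range(n):
        out *= a + k
    return out

def hyper_rs(a_list, b_list, z, kmax=60):
    """Truncation of the generalized hypergeometric series rFs in (II.3)."""
    total = 0.0
    for k in range(kmax + 1):
        num = np.prod([pochhammer(a, k) for a in a_list])
        den = np.prod([pochhammer(b, k) for b in b_list])
        total += (num / den) * z**k / math.factorial(k)
    return total

# Terminating case (II.6): a1 = -m makes the series a degree-m polynomial in z.
m, b, c, z = 3, 1.5, 2.5, 0.3
print(hyper_rs([-m, b], [c], z))
print(hyp2f1(-m, b, c, z))   # the two values agree
```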

Table 1 provides a list of some polynomials and their corresponding $r$ and $s$ values.

Table 1. Examples of hypergeometric series.
  r  s  Hypergeometric Series
  0  0  Exponential Series
  1  0  Binomial Series
  2  1  Gauss Series

b. Properties of Orthogonal Polynomials

In this section, some of the properties of series of orthogonal polynomials will be discussed. Consider a set of polynomials $\{Q_n(x), n \in \mathcal{N}\}$, where the polynomial $Q_n(x)$ is of degree $n$ and the set $\mathcal{N}$ can be either $\mathcal{N} = \{0, 1, 2, \ldots\}$ if the series is infinite or $\mathcal{N} = \{0, 1, 2, \ldots, N\}$ for a finite series with $N$ being a finite non-negative integer. The system of polynomials is orthogonal with respect to a real positive measure $\gamma(x)$ if
$$ \int_D Q_n(x) Q_m(x) \, d\gamma(x) = h_n^2 \delta_{nm} \qquad \text{(II.7)} $$
for $n, m \in \mathcal{N}$, where $D$ is the support of the measure $\gamma(x)$ and the values $h_n$ are positive constants. If $h_n = 1$, we say that the series of polynomials is orthonormal. In general, the measure may have a continuous weighting function $w(x)$ associated with it or may have discrete weight values $w(x_i)$ at the points $x_i$. As a result, (II.7) becomes
$$ \int_D Q_n(x) Q_m(x) w(x) \, dx = h_n^2 \delta_{nm} \qquad \text{(II.8)} $$
for the case when the weighting function is continuous, and
$$ \sum_{i=0}^{N} Q_n(x_i) Q_m(x_i) w(x_i) = h_n^2 \delta_{nm} \qquad \text{(II.9)} $$
when the support is discrete. For the discrete case it is possible that $N$ is finite (positive) or $N = \infty$. In these expressions, $n, m \in \mathcal{N}$. This weighting function will

become important as we continue our development because for certain polynomials the weights are identical to a probability distribution. This will be the foundation of the ideas behind Polynomial Chaos. An important characteristic of orthogonal polynomials is the fact that any three consecutive polynomials are connected by a recurrence relation involving three terms. There are several different ways to express this relationship. We will write the relation as
$$ -x Q_n(x) = A_n Q_{n+1}(x) - (A_n + C_n) Q_n(x) + C_n Q_{n-1}(x), \qquad n \geq 1 \qquad \text{(II.10)} $$
where the terms $A_n, C_n \neq 0$ and $C_n / A_{n-1} > 0$. To initialize the series, $Q_{-1}(x)$ and $Q_0(x)$ are required. These are initialized as $Q_{-1}(x) = 0$ and $Q_0(x) = 1$. With these initial polynomials, the rest of the terms can then be computed. Continuous orthogonal polynomials also satisfy the second order differential equation
$$ \alpha(x) f'' + \beta(x) f' + \lambda f = 0 \qquad \text{(II.11)} $$
where $\alpha(x)$ is a polynomial of at most second degree and $\beta(x)$ is a polynomial of first degree. The equation is a Sturm-Liouville type of equation, meaning that $\lambda = \lambda_n$ is the eigenvalue of the solution and the corresponding eigenfunctions are the polynomials $f(x) = f_n(x)$. The eigenvalues $\lambda$ are given by
$$ \lambda = \lambda_n = -n \left( \beta' + \frac{n-1}{2} \alpha'' \right) \qquad \text{(II.12)} $$
All orthogonal polynomials can be obtained by continuously applying a differential operator known as the generalized Rodriguez formula. For continuous orthogonal polynomials, the operator takes the form
$$ Q_n(x) = \frac{1}{w(x)} \frac{d^n}{dx^n} \left( w(x) \alpha^n(x) \right) \qquad \text{(II.13)} $$
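As a numerical illustration (ours, not the author's), the sketch below generates the probabilists' Hermite polynomials from their classical three-term recurrence $He_{n+1}(x) = x\,He_n(x) - n\,He_{n-1}(x)$ — a specific, unnormalized instance of the general three-term structure rather than the form in (II.10) — and verifies the orthogonality relation (II.8) for the Gaussian weight by Gauss-Hermite quadrature.

```python
import numpy as np

def hermite_He(n, x):
    """Probabilists' Hermite polynomial He_n(x) built from the three-term
    recurrence He_{n+1} = x He_n - n He_{n-1}, with He_{-1} = 0, He_0 = 1."""
    Hm1, H = np.zeros_like(x), np.ones_like(x)
    for k in range(n):
        Hm1, H = H, x * H - k * Hm1
    return H

# Check (II.8) for the Gaussian weight w(x) = exp(-x^2/2)/sqrt(2*pi)
# using Gauss-Hermite quadrature nodes/weights for exp(-x^2/2).
nodes, weights = np.polynomial.hermite_e.hermegauss(40)
for n in range(4):
    for m in range(4):
        inner = np.sum(weights * hermite_He(n, nodes) * hermite_He(m, nodes))
        inner /= np.sqrt(2.0 * np.pi)   # normalize the weight to a pdf
        print(n, m, round(inner, 6))    # ~ n! when n == m, ~ 0 otherwise
```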

For the discrete case, the differential relationship in equation (II.11) becomes a difference relationship. To introduce the relationship, we first introduce the forward and backward difference operators
$$ \Delta y(x) = y(x+1) - y(x), \qquad \nabla y(x) = y(x) - y(x-1) \qquad \text{(II.14)} $$
The discrete version of equation (II.11) is given by
$$ \alpha(x)\, \Delta \nabla f(x) + \beta(x)\, \Delta f(x) + \lambda f(x) = 0 \qquad \text{(II.15)} $$
Furthermore, for discrete orthogonal polynomials, the Rodriguez formula is found by replacing the derivative operator $\left(\tfrac{d}{dx}\right)$ with the backward difference operator $\nabla$.

c. Askey-Scheme

The Askey-scheme provides a classification for each of the hypergeometric polynomials and also indicates the limit relations between them. The scheme can be represented by the tree-like structure found in figure 1 [30]. The scheme demonstrates the limit relationships between each element in the tree structure. The tree starts with the polynomials of class ${}_4F_3$. The Wilson polynomials are continuous polynomials and the Racah polynomials are discrete. The lines represent the polynomials that can be linked to others via limit relationships. The polynomials at the top can be used to obtain the linked polynomials below them by use of a limit. For example, it is possible to obtain Laguerre polynomials from Jacobi polynomials by using
$$ \lim_{\beta \to \infty} P_n^{(\alpha,\beta)}\!\left(1 - \frac{2x}{\beta}\right) = L_n^{(\alpha)}(x) \qquad \text{(II.16)} $$
and it is possible to obtain Hermite polynomials from Laguerre polynomials,
$$ \lim_{\alpha \to \infty} \left(\frac{2}{\alpha}\right)^{n/2} L_n^{(\alpha)}\!\left((2\alpha)^{1/2} x + \alpha\right) = \frac{(-1)^n}{n!} H_n(x) \qquad \text{(II.17)} $$

Fig. 1. Askey-scheme of hypergeometric orthogonal polynomials.

2. Homogeneous Chaos

The concept of Homogeneous Chaos was first introduced by Wiener [12] and is an extension of Volterra's work on the generalization of Taylor series to functionals [31, 32, 20]. The Homogeneous Chaos utilizes Hermite polynomials to approximate Gaussian random variables. Based on Wiener's ideas, Cameron and Martin used Hermite functionals to create an orthogonal basis for nonlinear functionals and showed that these functionals can approximate any functional with finite second moment in $L_2$, and that these functionals in fact converge in the $L_2$ sense [13]. Therefore,

it is possible to use Hermite-Chaos to expand any second order process, that is, a process having finite second moment, in terms of orthogonal polynomials. While the types of processes are limited by the finite second moment requirement, most physical processes do in fact meet this requirement, meaning such a requirement is reasonable. We now introduce the Homogeneous Chaos. Define the set of all square integrable random variables to be $\Theta$. Let $\{\xi_i(\theta)\}_{i=1}^{\infty}$ be a set of orthonormal Gaussian random variables and let $\hat{\Gamma}_p$ be the space of all polynomials in $\{\xi_i(\theta)\}_{i=1}^{\infty}$ of degree less than or equal to $p$. In addition, $\Gamma_p$ will represent the set of all polynomials in $\hat{\Gamma}_p$ that are orthogonal to the set $\hat{\Gamma}_{p-1}$. The space spanned by $\Gamma_p$ is denoted $\bar{\Gamma}_p$. This space is a subspace of $\Theta$ ($\bar{\Gamma}_p \subset \Theta$) and is called the $p$th Homogeneous Chaos. We call $\Gamma_p$ the Polynomial Chaos (PC) of order $p$. The Polynomial Chaoses are therefore polyvariate orthogonal polynomials of order $p$ of any combination of the random variables $\{\xi_i(\theta)\}_{i=1}^{\infty}$. Because the Polynomial Chaoses must account for all combinations of the random variables, it is clear that the number of chaoses of order $p$ which involve specific random variables in the set increases as $p$ increases. Furthermore, the Polynomial Chaoses are in fact functionals since they are functions of random variables, which in turn are functions mapping from the event space to some real number. The set of Polynomial Chaoses is a linear subspace of the square integrable random variables ($\Theta$). This set, $\Gamma_p$, is also a ring with respect to the functional multiplication $\Gamma_p \Gamma_q(x) = \Gamma_p(x)\Gamma_q(x)$. Denote the Hilbert space spanned by the set of random variables $\{\xi_i(\theta)\}_{i=1}^{\infty}$ by $\Theta(\xi)$ and denote the resulting ring $\Phi_{\Theta(\xi)}$. This is the ring of functions generated by $\Theta(\xi)$. It has been shown that under general conditions this ring is dense in $\Theta$ [33]. As a result, any square integrable random variable mapping from $\Omega$ to $\mathbb{R}$ can be approximated as closely as desired. We can therefore write any

general second-order random process as
$$ X(\theta) = \sum_{p=0}^{\infty} \; \sum_{n_1 + \cdots + n_r = p} \; \sum_{\rho_1, \ldots, \rho_r} a_{\rho_1 \cdots \rho_r} \, \Gamma_p\!\left(\xi_{\rho_1}(\theta), \ldots, \xi_{\rho_r}(\theta)\right) \qquad \text{(II.18)} $$
or as a linear combination of all Polynomial Chaoses ($\Gamma_p$) of order $p \geq 0$. The polynomials in equation (II.18) involve $r$ distinct random variables out of $\{\xi_i(\theta)\}_{i=1}^{\infty}$, with the $k$th random variable having multiplicity $n_k$, and the total number of random variables involved is equal to the order $p$ of the Polynomial Chaos. If we assume the Polynomial Chaoses to be symmetrical with respect to their arguments (symmetrization is always possible [20]), equation (II.18) can be simplified to give the following expression for a random process:
$$ X(\theta) = a_0 \Gamma_0 + \sum_{i_1=1}^{\infty} a_{i_1} \Gamma_1(\xi_{i_1}(\theta)) + \sum_{i_1=1}^{\infty} \sum_{i_2=1}^{i_1} a_{i_1 i_2} \Gamma_2(\xi_{i_1}(\theta), \xi_{i_2}(\theta)) + \sum_{i_1=1}^{\infty} \sum_{i_2=1}^{i_1} \sum_{i_3=1}^{i_2} a_{i_1 i_2 i_3} \Gamma_3(\xi_{i_1}(\theta), \xi_{i_2}(\theta), \xi_{i_3}(\theta)) + \ldots \qquad \text{(II.19)} $$
where $\Gamma_p(\cdot)$ is the Polynomial Chaos of order $p$. For the case of Homogeneous Chaos with the Gaussian variables $\xi$ having zero mean and unit variance, these polynomials are Hermite polynomials, and we will henceforth express them as $\Gamma_p = H_p$ (the term $\xi = (\xi_{i_1}, \xi_{i_2}, \ldots, \xi_{i_n})$). These polynomials have the form
$$ H_n(\xi_{i_1}, \ldots, \xi_{i_n}) = (-1)^n e^{\frac{1}{2}\xi^T\xi} \frac{\partial^n}{\partial \xi_{i_1} \cdots \partial \xi_{i_n}} e^{-\frac{1}{2}\xi^T\xi} \qquad \text{(II.20)} $$
The values of the upper limits are a reflection of the symmetry of each polynomial with respect to its arguments. The polynomials of different orders are orthogonal, as are the polynomials of the same order but with different arguments. Equation (II.19) is a discrete version of the original Wiener polynomial chaos expression where

the continuous integrals are replaced by summations. For notational convenience, we can write equation (II.19) as
$$ X(\theta) = \sum_{i=0}^{\infty} \hat{a}_i \Psi_i(\xi) \qquad \text{(II.21)} $$
where there is a one-to-one correspondence between $\Psi_i(\xi)$ and $H_n(\xi_{i_1}, \ldots, \xi_{i_n})$, and also between the coefficients $\hat{a}_i$ and $a_{i_1 \cdots i_r}$. To help attain some insight into the form of the summation in equation (II.19), as well as into how the $\Psi_i$'s in equation (II.21) relate to the $H_n$'s, consider the following expansion for two random variables:
$$ X(\theta) = a_0 H_0 + a_1 H_1(\xi_1) + a_2 H_1(\xi_2) + a_{11} H_2(\xi_1, \xi_1) + a_{12} H_2(\xi_2, \xi_1) + a_{22} H_2(\xi_2, \xi_2) + a_{111} H_3(\xi_1, \xi_1, \xi_1) + a_{211} H_3(\xi_2, \xi_1, \xi_1) + a_{221} H_3(\xi_2, \xi_2, \xi_1) + a_{222} H_3(\xi_2, \xi_2, \xi_2) + \ldots \qquad \text{(II.22)} $$
The terms in this expansion correspond with the terms in equation (II.21) such that $\hat{a}_0 \Psi_0 = a_0 H_0$, $\hat{a}_1 \Psi_1 = a_1 H_1(\xi_1)$, $\hat{a}_2 \Psi_2 = a_2 H_1(\xi_2)$, and so forth. The polynomials for the Homogeneous (Hermite) Chaos form an orthogonal basis, which means
$$ \langle \Psi_i \Psi_j \rangle = \langle \Psi_i^2 \rangle \delta_{ij} \qquad \text{(II.23)} $$
where $\delta_{ij}$ is the Kronecker delta ($\delta_{ij} = 0$ if $i \neq j$ and $\delta_{ij} = 1$ when $i = j$) and $\langle \cdot, \cdot \rangle$ denotes a weighted inner product. For Hermite Chaos, this is the inner product on the Hilbert space determined by the support of the Gaussian variables,
$$ \langle f(\xi) g(\xi) \rangle = \int f(\xi) g(\xi) w(\xi) \, d\xi \qquad \text{(II.24)} $$
where the weighting function is given by
$$ w(\xi) = \frac{1}{\sqrt{(2\pi)^n}} e^{-\frac{1}{2}\xi^T\xi} \qquad \text{(II.25)} $$
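A minimal numerical check of (II.22)-(II.25), not taken from the dissertation: the two-variable basis $\Psi_0, \ldots, \Psi_9$ is assembled from products of univariate (probabilists') Hermite polynomials, in the order the terms appear in (II.22), and the Gram matrix $\langle \Psi_i \Psi_j \rangle$ is computed with tensorized Gauss-Hermite quadrature; it comes out diagonal, as (II.23) requires.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def He(n, x):
    """Probabilists' Hermite polynomial He_n evaluated at x."""
    vals = [np.ones_like(x), x]
    for k in range(1, n):
        vals.append(x * vals[-1] - k * vals[-2])
    return vals[n]

# Psi_0..Psi_9 correspond to H_0, H_1(x1), H_1(x2), H_2(x1,x1), H_2(x2,x1), ...
basis = [
    lambda x1, x2: He(0, x1),
    lambda x1, x2: He(1, x1),
    lambda x1, x2: He(1, x2),
    lambda x1, x2: He(2, x1),
    lambda x1, x2: He(1, x1) * He(1, x2),
    lambda x1, x2: He(2, x2),
    lambda x1, x2: He(3, x1),
    lambda x1, x2: He(2, x1) * He(1, x2),
    lambda x1, x2: He(1, x1) * He(2, x2),
    lambda x1, x2: He(3, x2),
]

# Tensorized Gauss-Hermite quadrature for the Gaussian weight in (II.25), n = 2.
x, w = hermegauss(20)
X1, X2 = np.meshgrid(x, x)
W = np.outer(w, w) / (2.0 * np.pi)   # normalize to the 2-D standard normal pdf

G = np.array([[np.sum(W * bi(X1, X2) * bj(X1, X2)) for bj in basis] for bi in basis])
print(np.round(G, 6))  # diagonal: <Psi_i^2>; off-diagonal entries vanish, cf. (II.23)
```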

In (II.25), the variable $n$ is the size of the random variable vector $\xi$. This weighting function is equivalent to the probability density function for an $n$-dimensional independent Gaussian distribution. Therefore, the basis polynomials of the Hermite Chaos are orthogonal with respect to a Gaussian distribution, and the variables in the expansion are Gaussian random variables. Therefore, Homogeneous Chaos (Hermite-Chaos) is used in situations where the stochastic uncertainty in the system is known to be Gaussian.

3. Generalized Polynomial Chaos

The Hermite-Chaos discussed in the previous section is very useful for solving stochastic differential equations for systems with Gaussian inputs as well as systems with certain non-Gaussian inputs [20, 18, 21]. While the Cameron-Martin theorem guarantees that this type of polynomial converges to a function with finite second moment in the $L_2$ sense [13], it does not guarantee the rate of convergence. The Homogeneous Chaos for systems with Gaussian inputs has an exponential rate of convergence. This is because the polynomials are orthogonal with respect to the probability distribution of the inputs. For systems without Gaussian inputs, the rate of convergence can be drastically deteriorated [15]. As a result, the Wiener-Askey polynomial chaos expansion will be presented. This is a generalization of the original Wiener-Chaos expansion, but it uses the complete polynomial basis from the Askey-scheme presented earlier. Let $\{\Delta_i(\theta)\}_{i=1}^{\infty}$ be a set of orthonormal random variables of any continuous distribution. As with Homogeneous

Chaos, we model a second order process in the following manner:
$$ X(\theta) = a_0 I_0 + \sum_{i_1=1}^{\infty} c_{i_1} I_1(\Delta_{i_1}(\theta)) + \sum_{i_1=1}^{\infty} \sum_{i_2=1}^{i_1} c_{i_1 i_2} I_2(\Delta_{i_1}(\theta), \Delta_{i_2}(\theta)) + \sum_{i_1=1}^{\infty} \sum_{i_2=1}^{i_1} \sum_{i_3=1}^{i_2} c_{i_1 i_2 i_3} I_3(\Delta_{i_1}(\theta), \Delta_{i_2}(\theta), \Delta_{i_3}(\theta)) + \ldots \qquad \text{(II.26)} $$
where $I_n(\Delta_{i_1}, \ldots, \Delta_{i_n})$ denotes the Wiener-Askey polynomial chaos of order $n$ in terms of the random variables $\Delta = (\Delta_{i_1}, \ldots, \Delta_{i_n})$. Unlike the Homogeneous Chaos expansion, where the polynomials were Hermite polynomials, $I_n$ can be any polynomial in the Askey-scheme shown in figure 1. As in the previous section, equation (II.26) can be put into the more notationally convenient form
$$ X(\theta) = \sum_{i=0}^{\infty} \hat{c}_i \Phi_i(\Delta) \qquad \text{(II.27)} $$
Again, the basis functions in equation (II.27) have a one-to-one relationship with the polynomials in equation (II.26), as do the coefficients. Using this formulation and the fact that each basis function $\Phi_i$ is orthogonal, the inner product of any two polynomials becomes
$$ \langle \Phi_i \Phi_j \rangle = \langle \Phi_i^2 \rangle \delta_{ij} \qquad \text{(II.28)} $$
where, as before, $\delta_{ij}$ is the Kronecker delta and $\langle \cdot, \cdot \rangle$ denotes the weighted inner product on the Hilbert space of the variables $\Delta$. Similarly to the previous section, this inner product is defined by
$$ \langle f(\Delta) g(\Delta) \rangle = \sum f(\Delta) g(\Delta) w(\Delta) \qquad \text{(II.29)} $$

for polynomials with discrete support, or
$$ \langle f(\Delta) g(\Delta) \rangle = \int f(\Delta) g(\Delta) w(\Delta) \, d\Delta \qquad \text{(II.30)} $$
when the weight function has continuous support. Previously, the weighting function $w(\Delta)$ corresponded to the weight for the Hermite basis, but in this more general setting $w(\Delta)$ is the weighting function associated with the particular choice of basis from the Wiener-Askey polynomial chaos. The choice of basis function for the Wiener-Askey chaos scheme is dependent on the probability distribution that is to be modeled. Several of the polynomials in the Askey-scheme are orthogonal with respect to well-known probability distributions. When modeling a system with uncertainty governed by a specific type of distribution, we can choose the corresponding polynomial in the scheme to model it. Because each type of polynomial from the Askey-scheme forms a complete basis in the Hilbert space determined from its support, each type of polynomial in the Wiener-Askey expansion will converge to an $L_2$ function in the $L_2$ sense. This result can be obtained as a general result of the Cameron-Martin Theorem [13, 34]. Table 2 shows some common probability distributions and their corresponding polynomial basis in the Askey-scheme.

Table 2. Correspondence between choice of polynomials and given distribution of Δ(ω).
  Random Variable Δ    φ_i(Δ) of the Wiener-Askey Scheme
  Gaussian             Hermite
  Uniform              Legendre
  Gamma                Laguerre
  Beta                 Jacobi

The basis functions in table 2 are all orthogonal with respect to their associated distribution. As a result, this gives physical insight into the inner products in equations (II.28)-(II.30). Therefore, $w(\Delta(\omega))\, d\Delta = dP(\Delta(\omega))$, where $P$ is the corresponding probability measure. It becomes clear that the inner product is an expectation operator, or
$$ \langle f(\Delta) g(\Delta) \rangle = \int f(\Delta) g(\Delta) w(\Delta) \, d\Delta = E[f(\Delta) g(\Delta)] \qquad \text{(II.31)} $$
Therefore, the inner product of the orthogonal polynomial basis functions becomes
$$ E[\Phi_i \Phi_j] = E[\Phi_i^2] \delta_{ij} \qquad \text{(II.32)} $$
where the expectation operator is taken with respect to the probability density function associated with the polynomial basis.

C. Building Sets of Orthogonal Polynomials

While in theory it is easy to assume that all distributions fall into one of the types listed in table 2, in practice this may not be the case. Furthermore, it may not be desirable to use Hermite polynomials in practice as their support is infinite. As a result, it is often necessary to generate a set of orthogonal polynomials with a desired support. This can be accomplished in several ways. One method is the Gram-Schmidt process, which we will briefly describe here. This process involves determining a set of orthogonal (not necessarily orthonormal) functions, $\{\Phi_i(x)\}_{i=0}^{\infty}$, from a set of linearly independent functions, $\{u_i(x)\}_{i=0}^{\infty}$, with a given weighting function, $w(x)$ [35]. To begin, let
$$ \Phi_0(x) = u_0(x) \qquad \text{(II.33)} $$
We will build on this function to create orthogonal functions from the linearly independent ones. The order in which we perform the operation with respect to the $u_i$'s

is unimportant since each $u_i$ is linearly independent. The next function, $\Phi_1(x)$, can be determined from $\Phi_0(x)$ by subtracting off the projection of $u_1(x)$ onto $\Phi_0(x)$:
$$ \Phi_1(x) = u_1(x) - \frac{\langle u_1(x), \Phi_0(x) \rangle}{\langle \Phi_0(x), \Phi_0(x) \rangle} \Phi_0(x) \qquad \text{(II.34)} $$
where $\langle f(x), g(x) \rangle = \int f(x) g(x) w(x) \, dx$. To verify that $\Phi_1(x)$ is in fact orthogonal to $\Phi_0(x)$, consider the inner product
$$ \langle \Phi_0(x), \Phi_1(x) \rangle = \langle \Phi_0(x), u_1(x) \rangle - \frac{\langle u_1(x), \Phi_0(x) \rangle}{\langle \Phi_0(x), \Phi_0(x) \rangle} \langle \Phi_0(x), \Phi_0(x) \rangle = \langle \Phi_0(x), u_1(x) \rangle - \langle \Phi_0(x), u_1(x) \rangle = 0 $$
In general, we have
$$ \Phi_i(x) = u_i(x) - \sum_{k=0}^{i-1} \frac{\langle u_i(x), \Phi_k(x) \rangle}{\langle \Phi_k(x), \Phi_k(x) \rangle} \Phi_k(x) \qquad \text{(II.35)} $$
This gives us a way to systematically compute an orthogonal basis with respect to some weighting function. The sets of polynomials from the Askey-scheme can also be generated in this manner. For example, let us consider generation of a set of polynomials orthogonal with respect to $w(x) = e^{-x^2/2}$ over the domain $x \in (-\infty, \infty)$. We will take $u_i(x) = x^i$ for $i = 0, 1, \ldots$. The first polynomial is $\Phi_0(x) = u_0(x) = 1$. To find $\Phi_1(x)$,
$$ \Phi_1(x) = x - \frac{\langle x, 1 \rangle}{\langle 1, 1 \rangle} \qquad \text{(II.36)} $$
Now $\langle 1, 1 \rangle = \int e^{-x^2/2}\, dx = \sqrt{2\pi}$ and $\langle x, 1 \rangle = \int x e^{-x^2/2}\, dx = 0$, so $\Phi_1 = x$. To find $\Phi_2(x)$,
$$ \Phi_2(x) = x^2 - \frac{\langle x^2, 1 \rangle}{\langle 1, 1 \rangle} - \frac{\langle x^2, x \rangle}{\langle x, x \rangle} x \qquad \text{(II.37)} $$
where $\langle x^2, 1 \rangle = \sqrt{2\pi}$ and $\langle x^2, x \rangle = \langle x^3, 1 \rangle = 0$. This gives
$$ \Phi_2(x) = x^2 - 1 \qquad \text{(II.38)} $$
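The same computation can be automated. The sympy sketch below is ours (the function names are illustrative, not from the dissertation); it implements the recursion (II.35) for the weight $w(x) = e^{-x^2/2}$ and carries the process a few terms further than the hand calculation.

```python
import sympy as sp

x = sp.symbols('x', real=True)
w = sp.exp(-x**2 / 2)                      # weighting function from the example

def inner(f, g):
    """Weighted inner product <f, g> = integral of f*g*w over the real line."""
    return sp.integrate(f * g * w, (x, -sp.oo, sp.oo))

def gram_schmidt(n_terms):
    """Orthogonal polynomials via (II.35) from the monomials u_i(x) = x**i."""
    Phi = []
    for i in range(n_terms):
        u = x**i
        proj = sum(inner(u, p) / inner(p, p) * p for p in Phi)
        Phi.append(sp.expand(u - proj))
    return Phi

print(gram_schmidt(5))
# [1, x, x**2 - 1, x**3 - 3*x, x**4 - 6*x**2 + 3]
```

The printed list reproduces the polynomials obtained by hand above and anticipates the observation made next in the text.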

This process could be continued for as many terms as desired. Upon examination of the first three terms, it becomes clear that the orthogonalization is recovering the Hermite polynomials, which is what should be expected for the given weight function.

D. Karhunen-Loève Expansion

The polynomial chaos approach is useful when the statistics of the solution are unknown and for stationary processes. When dealing with time-varying (or spatially varying) processes with known covariance, the Karhunen-Loève expansion becomes the expansion of choice [36, 1, 20]. In a manner similar to equation (II.21), define an expansion of the form
$$ X(t, \omega) = \sum_{n=0}^{\infty} \sqrt{\lambda_n}\, f_n(t) Z_n(\omega) \qquad \text{(II.39)} $$
where $\{Z_n(\omega)\}$ is a set of random variables that will be determined, $\lambda_n$ is a constant, and $\{f_n(t)\}$ is an orthonormal set of deterministic functions. The functions $Z_n(\omega) : \Omega \to \mathbb{R}$ are random variables and can be written as functions of $\Delta(\omega)$. The functions $f_n(t)$ can be spatial, temporal, or both, depending on the elements in the vector $t$, and have support on $D$. For the present work these will usually be functions of time only. For the present discussion, denote $\bar{X}(t)$ as the expectation of $X(t, \omega)$ over all realizations of the process and let $R(t_1, t_2)$ be the covariance function. The covariance function is symmetric and positive definite. It can be written as
$$ R(t_1, t_2) = \sum_{n=0}^{\infty} \lambda_n f_n(t_1) f_n(t_2) \qquad \text{(II.40)} $$
where $\lambda_n$ is an eigenvalue of the covariance and $f_n(t)$ is the associated eigenvector or eigenfunction. These quantities are solutions of
$$ \int_D R(t_1, t_2) f_n(t_1) \, dt_1 = \lambda_n f_n(t_2) \qquad \text{(II.41)} $$

The eigenfunctions are orthogonal and can be chosen to satisfy the relationship
$$ \int_D f_n(t) f_m(t) \, dt = \delta_{nm} \qquad \text{(II.42)} $$
The random process $X(t, \omega)$ can be rewritten as
$$ X(t, \omega) = \bar{X}(t) + \sum_{n=0}^{\infty} \sqrt{\lambda_n}\, f_n(t) Z_n(\omega) \qquad \text{(II.43)} $$
We wish to determine properties of the functions $Z_n(\omega)$. Define $\tilde{X}(t, \omega) = \sum_{n=0}^{\infty} \sqrt{\lambda_n}\, f_n(t) Z_n(\omega)$, where $\tilde{X}$ has zero mean, and let its covariance function be written as
$$ R(t_1, t_2) = \langle \tilde{X}(t_1, \omega) \tilde{X}(t_2, \omega) \rangle = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \sqrt{\lambda_n \lambda_m}\, f_n(t_1) f_m(t_2) \langle Z_n(\omega) Z_m(\omega) \rangle \qquad \text{(II.44)} $$
Multiplying both sides of the above expression by $f_k(t_2)$ and integrating over the domain $D$ gives (recall equation (II.41))
$$ \int_D R(t_1, t_2) f_k(t_2) \, dt_2 = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \sqrt{\lambda_n \lambda_m} \langle Z_n(\omega) Z_m(\omega) \rangle f_n(t_1) \int_D f_m(t_2) f_k(t_2)\, dt_2 = \sum_{n=0}^{\infty} \sqrt{\lambda_n \lambda_k} \langle Z_n(\omega) Z_k(\omega) \rangle f_n(t_1) = \lambda_k f_k(t_1) \qquad \text{(II.45)} $$
Multiplying this expression by an additional basis function $f_j(t_1)$ and once more integrating over the domain $D$ gives
$$ \sum_{n=0}^{\infty} \langle Z_n(\omega) Z_k(\omega) \rangle \sqrt{\lambda_n \lambda_k}\, \delta_{nj} = \lambda_k \int_D f_k(t_1) f_j(t_1) \, dt_1 \qquad \text{(II.46)} $$
therefore
$$ \sqrt{\lambda_j \lambda_k} \langle Z_j(\omega) Z_k(\omega) \rangle = \lambda_k \delta_{kj} \qquad \text{(II.47)} $$

From the previous equation, it becomes clear that
$$ \langle Z_j(\omega) Z_k(\omega) \rangle = \delta_{jk} \qquad \text{(II.48)} $$
Therefore the KL expansion of the function $X$ can be written as
$$ X(t, \omega) = \bar{X}(t) + \sum_{n=0}^{\infty} \sqrt{\lambda_n}\, f_n(t) Z_n(\omega) \qquad \text{(II.49)} $$
where $\lambda_n$ and $f_n(t)$ are the eigenvalues and eigenfunctions of the covariance and
$$ \langle Z_n(\omega) \rangle = 0, \qquad \langle Z_n(\omega) Z_m(\omega) \rangle = \delta_{mn} \qquad \text{(II.50)} $$
The expressions for the random variables $Z_n(\omega)$ can be obtained from
$$ Z_n(\omega) = \frac{1}{\sqrt{\lambda_n}} \int_D f_n(t) \tilde{X}(t, \omega) \, dt $$
The KL expansion is able to model a random process with covariance $R(t_1, t_2)$ over finite time with arbitrary accuracy. Convergence of the expansion in the $L_2$ sense is guaranteed by Mercer's theorem [1]. This expansion is particularly useful because it allows us to represent colored processes in terms of random variables $Z_n(\omega)$. These random variables can be used in conjunction with the gpc expansion described in the previous section to include time-varying processes in our analysis. This will allow us to examine problems such as that of stochastic forcing in the gpc framework. The key is that each term in the KL expansion defines a new random variable that must be included in the PC expansion. This will be discussed in more detail in Chapter IV.
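The sketch below (ours, not taken from the dissertation) shows how these formulas are used in practice for an assumed, purely illustrative exponential covariance $R(t_1, t_2) = e^{-a|t_1 - t_2|}$: it discretizes the eigenvalue problem (II.41), truncates the sum (II.40), and draws a sample path via (II.49) under the further assumption of a Gaussian process, so that the $Z_n$ are independent standard normals.

```python
import numpy as np

# Discretized Karhunen-Loeve expansion for R(t1, t2) = exp(-a*|t1 - t2|).
a, T, N = 1.0, 5.0, 400
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]
R = np.exp(-a * np.abs(t[:, None] - t[None, :]))

# Eigenpairs of the covariance operator, eq. (II.41), via the discretization
# R @ f * dt = lambda * f.
lam, F = np.linalg.eigh(R * dt)
idx = np.argsort(lam)[::-1]
lam, F = lam[idx], F[:, idx] / np.sqrt(dt)   # eigenfunctions orthonormal in L2[0, T]

# Truncated reconstruction of the covariance, eq. (II.40), with n_kl terms.
n_kl = 10
R_approx = (F[:, :n_kl] * lam[:n_kl]) @ F[:, :n_kl].T
print("relative covariance error:", np.linalg.norm(R - R_approx) / np.linalg.norm(R))

# One sample path: X(t) = sum_n sqrt(lam_n) f_n(t) Z_n, eq. (II.49), zero mean.
Z = np.random.standard_normal(n_kl)
x_path = F[:, :n_kl] @ (np.sqrt(lam[:n_kl]) * Z)
```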

E. Summary

In this chapter, we have covered many of the preliminary concepts that will be utilized throughout the rest of the dissertation. We have presented a general overview of the Polynomial Chaos expansion that will be used to transform stochastic stability and control problems into deterministic problems in a higher dimensional space.

CHAPTER III

MODELLING AND STABILITY ANALYSIS OF STOCHASTIC DYNAMIC SYSTEMS

A. Introduction

In this chapter, we apply some of the concepts introduced in the previous chapter to modelling and stability analysis of stochastic dynamic systems. This chapter will deal with analysis of both linear and nonlinear stochastic systems. The first part of the chapter will deal with creating a general framework for modelling of stochastic systems in the generalized Polynomial Chaos framework. A generalized framework will be presented for linear systems as well as nonlinear polynomial systems. The latter portion of the chapter deals with stability analysis of linear and nonlinear stochastic systems. For linear systems, if the parameters are bounded linear functions of the random variable that governs the uncertainty, then it is only necessary to test the extreme values of the parameters [9]. However, if the uncertainty does not appear linearly, this method is no longer valid. Stability analysis for nonlinear systems is more difficult than for linear systems. In general, for linear systems, if the system is stable in the worst case, then the system will be stable for the entire range of possible parameter variations. For nonlinear systems, this may not be true. As a result, we must ensure that the system is stable not only for the worst-case uncertainty, but for the entire parameter distribution. For nonlinear systems in general, stability has been addressed for deterministic systems with stochastic forcing [37, 38]. In our stability discussion, we will restrict our attention to systems with stochastic parameters, i.e. systems with probabilistic uncertainty in system parameters. For such a class of systems, sampling based approaches are often used to solve

the stochastic problem in a deterministic setting. The drawback of this approach is that it can result in the solution of very large problems for accurate characterization of uncertainty. As a result, we apply non-sampling based methods to approximate, with arbitrary accuracy, the evolution of uncertainty in state trajectories induced by uncertain system parameters. As mentioned previously, the framework is built on the polynomial chaos theory, which transforms stochastic dynamics into deterministic dynamics in a higher dimensional state space. We assume that the system uncertainty is a function of random variables governed by known stationary distributions. The benefit of this approach is that stochastic linear and nonlinear systems are transformed into deterministic systems and existing system theory can be used for stability analysis.

B. Wiener-Askey Polynomial Chaos

Let $(\Omega, \mathcal{F}, P)$ be a probability space, where $\Omega$ is the sample space, $\mathcal{F}$ is the σ-algebra of the subsets of $\Omega$, and $P$ is the probability measure. Let $\Delta(\omega) = (\Delta_1(\omega), \ldots, \Delta_d(\omega)) : (\Omega, \mathcal{F}) \to (\mathbb{R}^d, \mathcal{B}^d)$ be an $\mathbb{R}^d$-valued continuous random variable, where $d \in \mathbb{N}$ and $\mathcal{B}^d$ is the σ-algebra of Borel subsets of $\mathbb{R}^d$. A general second order process $X(\omega) \in L_2(\Omega, \mathcal{F}, P)$ can be expressed by polynomial chaos as
$$ X(\omega) = \sum_{i=0}^{\infty} x_i \phi_i(\Delta(\omega)) \qquad \text{(III.1)} $$
where $\omega$ is the random event and $\phi_i(\Delta(\omega))$ denotes the gpc basis of degree $p$ in terms of the random variables $\Delta(\omega)$. The functions $\{\phi_i\}$ are a family of orthogonal basis functions in $L_2(\Omega, \mathcal{F}, P)$ satisfying the relation
$$ \int_{D_{\Delta(\omega)}} \phi_i \phi_j \, w(\Delta(\omega)) \, d\Delta(\omega) = h_i^2 \delta_{ij} \qquad \text{(III.2)} $$

where $\delta_{ij}$ is the Kronecker delta, $h_i$ is a constant term corresponding to $\int_{D_\Delta} \phi_i^2\, w(\Delta)\, d\Delta$, $D_\Delta$ is the domain of the random variable $\Delta(\omega)$, and $w(\Delta)$ is a weighting function. Henceforth, we will use $\Delta$ to represent $\Delta(\omega)$. For random variables with certain distributions, the family of orthogonal basis functions $\{\phi_i\}$ can be chosen in such a way that its weight function has the same form as the probability density function $f(\Delta)$. When these types of polynomials are chosen, we have $f(\Delta) = w(\Delta)$ and
$$ \int_{D_\Delta} \phi_i \phi_j f(\Delta) \, d\Delta = E[\phi_i \phi_j] = E[\phi_i^2] \delta_{ij} \qquad \text{(III.3)} $$
where $E[\cdot]$ denotes the expectation with respect to the probability measure $dP(\Delta(\omega)) = f(\Delta(\omega))\, d\Delta(\omega)$ and probability density function $f(\Delta(\omega))$. The orthogonal polynomials that are chosen are the members of the Askey-scheme of polynomials [16], which forms a complete basis in the Hilbert space determined by their corresponding support. Table 3 summarizes the correspondence between the choice of polynomials and a given distribution of $\Delta$ [15].

Table 3. Correspondence between choice of polynomials and given distribution of Δ(ω).
  Random Variable Δ    φ_i(Δ) of the Wiener-Askey Scheme
  Gaussian             Hermite
  Uniform              Legendre
  Gamma                Laguerre
  Beta                 Jacobi
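As a small, self-contained illustration of (III.1)-(III.3) (ours, not the dissertation's; the function $g$ below is an arbitrary choice), the Legendre coefficients of a function of a uniform random variable are computed by Galerkin projection, and the resulting mean and variance are compared against Monte-Carlo sampling.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# Project g(Delta), Delta ~ U[-1, 1], onto the Legendre basis (Table 3):
# x_i = E[g(Delta) phi_i(Delta)] / E[phi_i(Delta)^2].
g = lambda d: np.exp(0.5 * d)          # illustrative nonlinearity, our choice

p = 5                                   # gpc order
nodes, weights = leggauss(20)           # Gauss-Legendre quadrature on [-1, 1]
weights = weights / 2.0                 # uniform pdf f(Delta) = 1/2

phi = [Legendre.basis(i) for i in range(p + 1)]
coeff = [np.sum(weights * g(nodes) * phi[i](nodes)) /
         np.sum(weights * phi[i](nodes) ** 2) for i in range(p + 1)]

# First two moments from the gpc coefficients versus Monte-Carlo sampling.
mean_gpc = coeff[0]
var_gpc = sum(coeff[i] ** 2 * np.sum(weights * phi[i](nodes) ** 2)
              for i in range(1, p + 1))
samples = g(np.random.uniform(-1.0, 1.0, 200_000))
print(mean_gpc, samples.mean())
print(var_gpc, samples.var())
```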

C. Stochastic Linear Dynamics and Polynomial Chaos

Define a linear stochastic system in the following manner:
$$ \dot{x}(t, \Delta) = A(\Delta) x(t, \Delta) + B(\Delta) u(t, \Delta) \qquad \text{(III.4)} $$
where $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$. For the case of a discrete time system, the system is defined as
$$ x(k+1, \Delta) = A(\Delta) x(k, \Delta) + B(\Delta) u(k, \Delta) \qquad \text{(III.5)} $$
The system has probabilistic uncertainty in the system parameters, characterized by $A(\Delta)$ and $B(\Delta)$, which are matrix functions of the random variable $\Delta(\omega) \in \mathbb{R}^d$ with certain stationary distributions. Due to the stochastic nature of $(A, B)$, the system trajectory will also be stochastic. The control $u(t)$ may be deterministic or stochastic, depending on the implementation. Let us represent the components of $x(t, \Delta)$, $A(\Delta)$ and $B(\Delta)$ as
$$ x(t, \Delta) = [x_1(t, \Delta) \;\; \cdots \;\; x_n(t, \Delta)]^T \qquad \text{(III.6)} $$
$$ A(\Delta) = \begin{bmatrix} A_{11}(\Delta) & \cdots & A_{1n}(\Delta) \\ \vdots & \ddots & \vdots \\ A_{n1}(\Delta) & \cdots & A_{nn}(\Delta) \end{bmatrix} \qquad \text{(III.7)} $$
$$ B(\Delta) = \begin{bmatrix} B_{11}(\Delta) & \cdots & B_{1m}(\Delta) \\ \vdots & \ddots & \vdots \\ B_{n1}(\Delta) & \cdots & B_{nm}(\Delta) \end{bmatrix} \qquad \text{(III.8)} $$
By applying the Wiener-Askey gpc expansion to $x_i(t, \Delta)$, $A_{ij}(\Delta)$ and $B_{ij}(\Delta)$, we get
$$ \hat{x}_i(t, \Delta) = \sum_{k=0}^{p} x_{i,k}(t) \phi_k(\Delta) = \mathbf{x}_i(t)^T \Phi(\Delta) \qquad \text{(III.9)} $$
$$ \hat{u}_i(t, \Delta) = \sum_{k=0}^{p} u_{i,k}(t) \phi_k(\Delta) = \mathbf{u}_i(t)^T \Phi(\Delta) \qquad \text{(III.10)} $$

$$ \hat{A}_{ij}(\Delta) = \sum_{k=0}^{p} a_{ij,k} \phi_k(\Delta) = \mathbf{a}_{ij}^T \Phi(\Delta) \qquad \text{(III.11)} $$
$$ \hat{B}_{ij}(\Delta) = \sum_{k=0}^{p} b_{ij,k} \phi_k(\Delta) = \mathbf{b}_{ij}^T \Phi(\Delta) \qquad \text{(III.12)} $$
where $\mathbf{x}_i(t)$, $\mathbf{a}_{ij}$, $\mathbf{b}_{ij}$, $\Phi(\Delta) \in \mathbb{R}^{p+1}$ are defined by
$$ \mathbf{x}_i(t) = [x_{i,0}(t) \;\; \cdots \;\; x_{i,p}(t)]^T \qquad \text{(III.13)} $$
$$ \mathbf{u}_i(t) = [u_{i,0}(t) \;\; \cdots \;\; u_{i,p}(t)]^T \qquad \text{(III.14)} $$
$$ \mathbf{a}_{ij} = [a_{ij,0} \;\; \cdots \;\; a_{ij,p}]^T \qquad \text{(III.15)} $$
$$ \mathbf{b}_{ij} = [b_{ij,0} \;\; \cdots \;\; b_{ij,p}]^T \qquad \text{(III.16)} $$
$$ \Phi(\Delta) = [\phi_0(\Delta) \;\; \cdots \;\; \phi_p(\Delta)]^T \qquad \text{(III.17)} $$
When $u_i(t, \Delta)$ is a feedback control, it follows that it must also be probabilistic (depending on the implementation); if the control is not probabilistic, this implies $\mathbf{u}_i(t) = u_{i,0}(t)$ with all other coefficients zero. The number of terms $p$ is determined by the dimension $d$ of $\Delta$ and the order $r$ of the orthogonal polynomials $\{\phi_k\}$, satisfying $p + 1 = \frac{(d+r)!}{d!\,r!}$. The coefficients $a_{ij,k}$ and $b_{ij,k}$ are obtained via Galerkin projection onto $\{\phi_k\}_{k=0}^{p}$, given by
$$ a_{ij,k} = \frac{\langle A_{ij}(\Delta), \phi_k(\Delta) \rangle}{\langle \phi_k(\Delta)^2 \rangle} \qquad \text{(III.18)} $$
$$ b_{ij,k} = \frac{\langle B_{ij}(\Delta), \phi_k(\Delta) \rangle}{\langle \phi_k(\Delta)^2 \rangle} \qquad \text{(III.19)} $$
The inner product, or ensemble average, $\langle \cdot, \cdot \rangle$ used throughout this work utilizes the weighting function associated with the assumed probability distribution, as listed in table 3. The $n(p+1)$ time varying coefficients $\{x_{i,k}(t)\}$, $i = 1, \ldots, n$; $k = 0, \ldots, p$, are obtained by substituting the approximated solution into the governing equation (eqn. (III.4)) and conducting Galerkin projection onto the basis functions $\{\phi_k\}_{k=0}^{p}$, to

yield n(p+1) deterministic linear differential equations, given by

\dot{X} = A X + B U    (III.20)

for the continuous time case. For the discrete time case the projection yields the linear difference equations

X(k+1) = A X(k) + B U    (III.21)

In both expressions, X \in \mathbb{R}^{n(p+1)}, A \in \mathbb{R}^{n(p+1) \times n(p+1)}, B \in \mathbb{R}^{n(p+1) \times m(p+1)}, and

X = [\mathbf{x}_1^T \; \mathbf{x}_2^T \; \cdots \; \mathbf{x}_n^T]^T    (III.22)
U = [\mathbf{u}_1^T \; \mathbf{u}_2^T \; \cdots \; \mathbf{u}_m^T]^T    (III.23)

While it is possible to derive many forms for the A and B matrices, a convenient form can be obtained in the following manner. Define \hat{e}_{ijk} = \frac{\langle \phi_i, \phi_j \phi_k \rangle}{\langle \phi_i^2 \rangle}. The linear equations of motion can be expressed as

\dot{x}_{i,l} = \sum_{j=1}^{n} \sum_{k=0}^{p} \sum_{q=0}^{p} a_{ij,k}\, x_{j,q}\, \hat{e}_{lkq} + \sum_{j=1}^{m} \sum_{k=0}^{p} \sum_{q=0}^{p} b_{ij,k}\, u_{j,q}\, \hat{e}_{lkq}

Here we will only deal with continuous time systems, as the development in discrete time is identical. Define the matrix \Psi_k as

\Psi_k = \begin{bmatrix} \hat{e}_{0k0} & \hat{e}_{0k1} & \cdots & \hat{e}_{0kp} \\ \hat{e}_{1k0} & \hat{e}_{1k1} & \cdots & \hat{e}_{1kp} \\ \vdots & & & \vdots \\ \hat{e}_{pk0} & \hat{e}_{pk1} & \cdots & \hat{e}_{pkp} \end{bmatrix}    (III.24)
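The entries \hat{e}_{lkq} and the matrices \Psi_k are straightforward to tabulate numerically. The minimal sketch below does so for the Legendre/uniform case with p = 3 (an assumption of the sketch) and also checks that \Psi_0 reduces to the identity, since \phi_0 = 1.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

p = 3
phi = [Legendre.basis(k) for k in range(p + 1)]

def inner(*polys):
    # Ensemble average <.> with the uniform density 1/2 on [-1, 1]
    return quad(lambda d: 0.5 * np.prod([q(d) for q in polys]), -1.0, 1.0)[0]

def Psi(k):
    # Psi_k[l, q] = e_hat_{lkq} = <phi_l, phi_k phi_q> / <phi_l^2>
    return np.array([[inner(phi[l], phi[k], phi[q]) / inner(phi[l], phi[l])
                      for q in range(p + 1)] for l in range(p + 1)])

print(np.round(Psi(0), 8))   # identity, since phi_0 = 1
print(np.round(Psi(2), 8))   # coupling introduced by the quadratic basis term
```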

The matrices A and B can be written as

A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & & & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{bmatrix}    (III.25)

A_{ij} = \sum_{k=0}^{p} a_{ij,k}\, \Psi_k    (III.26)

B = \begin{bmatrix} B_{11} & B_{12} & \cdots & B_{1m} \\ B_{21} & B_{22} & \cdots & B_{2m} \\ \vdots & & & \vdots \\ B_{n1} & B_{n2} & \cdots & B_{nm} \end{bmatrix}    (III.27)

B_{ij} = \sum_{k=0}^{p} b_{ij,k}\, \Psi_k    (III.28)

More convenient expressions for A and B are given by

A = \sum_{k=0}^{p} A_k \otimes \Psi_k    (III.29)

B = \sum_{k=0}^{p} B_k \otimes \Psi_k    (III.30)

where \otimes is the Kronecker product and the matrices A_k, B_k are the projections of A(\Delta), B(\Delta) on the polynomial chaos basis functions. Therefore, transformation of a stochastic linear system with x \in \mathbb{R}^n and u \in \mathbb{R}^m, using a pth order gPC expansion, results in a deterministic linear system with increased dimensionality equal to n(p+1).

Example: Consider the system

\dot{x}(t,\Delta) = a(\Delta)\, x(t,\Delta)

where a(\Delta) = \bar{a}_0 + \bar{a}_2 \Delta^2, with \Delta \in [-1, 1] governed by a uniform distribution and \bar{a}_i,

i = 0, 2, known. Because \Delta is governed by a uniform distribution, we use Legendre polynomials to model each of the processes. For this example, we will use polynomials up to order 3. The expansion of x is therefore x(t,\Delta) = \sum_{i=0}^{3} x_i(t)\phi_i(\Delta) and the expansion of a(\Delta) is a(\Delta) = \sum_{i=0}^{3} a_i \phi_i(\Delta). The first four Legendre polynomials (unnormalized) are given by

\phi_0 = 1, \quad \phi_1 = \Delta, \quad \phi_2 = \tfrac{1}{2}(3\Delta^2 - 1), \quad \phi_3 = \tfrac{1}{2}(5\Delta^3 - 3\Delta)

It is clear from the form of a(\Delta) that it can be expressed as

a(\Delta) = \left(\bar{a}_0 + \tfrac{1}{3}\bar{a}_2\right)\phi_0 + \tfrac{2}{3}\bar{a}_2\,\phi_2

The equation of motion then becomes

\sum_{j=0}^{3} \dot{x}_j \phi_j = \Big(\sum_{k=0}^{3} a_k \phi_k\Big)\Big(\sum_{i=0}^{3} x_i \phi_i\Big) = \sum_{i=0}^{3}\sum_{k=0}^{3} a_k x_i\, \phi_k \phi_i

If we take the projection of both sides onto \phi_j and divide by \langle \phi_j^2 \rangle, we obtain

\dot{x}_j = \frac{1}{\langle \phi_j^2 \rangle} \sum_{i=0}^{3}\sum_{k=0}^{3} a_k x_i \langle \phi_k \phi_i \phi_j \rangle = \frac{1}{\langle \phi_j^2 \rangle} \sum_{k=0}^{3} a_k \big[\,\langle \phi_k \phi_j \phi_0 \rangle \;\; \langle \phi_k \phi_j \phi_1 \rangle \;\; \langle \phi_k \phi_j \phi_2 \rangle \;\; \langle \phi_k \phi_j \phi_3 \rangle\,\big]\, X

where X = [x_0 \; x_1 \; x_2 \; x_3]^T. The structure above is easily identified as the jth row of the \Psi_k matrix described in the previous section in (III.24).

Because there are only two non-zero coefficients in the expansion of a(\Delta), the equations of motion can now be written as

\dot{X} = (a_0 \Psi_0 + a_2 \Psi_2)\, X = \left(\left(\bar{a}_0 + \tfrac{1}{3}\bar{a}_2\right)\Psi_0 + \tfrac{2}{3}\bar{a}_2\,\Psi_2\right) X

In this manner we are able to describe the dynamics of the stochastic linear system. The procedure can be repeated for any order of polynomial to obtain better approximations.
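As a check on this example, the sketch below builds A = a_0\Psi_0 + a_2\Psi_2 for hypothetical values of \bar{a}_0 and \bar{a}_2 (placeholders, not taken from the text), propagates the gPC coefficients, and compares the predicted mean with a direct Monte Carlo evaluation of the exact solution x(t;\Delta) = e^{a(\Delta)t} x(0).

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad, solve_ivp

p = 3
phi = [Legendre.basis(k) for k in range(p + 1)]

def inner(*polys):
    # <.> with the uniform density 1/2 on [-1, 1]
    return quad(lambda d: 0.5 * np.prod([q(d) for q in polys]), -1.0, 1.0)[0]

def Psi(k):
    return np.array([[inner(phi[l], phi[k], phi[q]) / inner(phi[l], phi[l])
                      for q in range(p + 1)] for l in range(p + 1)])

a0_bar, a2_bar = -1.0, 0.5                     # hypothetical values of a(D) = a0 + a2*D^2
A = (a0_bar + a2_bar / 3.0) * Psi(0) + (2.0 * a2_bar / 3.0) * Psi(2)

# Propagate the gPC coefficients from a deterministic initial condition x(0) = 1
X0 = np.zeros(p + 1); X0[0] = 1.0
sol = solve_ivp(lambda t, X: A @ X, (0.0, 2.0), X0, t_eval=[2.0])
mean_gpc = sol.y[0, -1]                        # E[x(2)] = x_0(2), since phi_0 = 1

# Monte Carlo reference: x(2; D) = exp(a(D) * 2) for sampled D ~ U[-1, 1]
D = np.random.default_rng(0).uniform(-1, 1, 20000)
mean_mc = np.mean(np.exp((a0_bar + a2_bar * D**2) * 2.0))
print(mean_gpc, mean_mc)                       # the two means should be in close agreement
```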

D. Stochastic Nonlinear Dynamics and Polynomial Chaos

As was done in the previous section, let (\Omega, \mathcal{F}, P) be a probability space, where \Omega is the sample space, \mathcal{F} is the \sigma-algebra of the subsets of \Omega, and P is the probability measure. We again let \Delta(\omega) = (\Delta_1(\omega), \ldots, \Delta_d(\omega)) : (\Omega, \mathcal{F}) \to (\mathbb{R}^d, \mathcal{B}^d) be an \mathbb{R}^d-valued continuous random variable, where d \in \mathbb{N} and \mathcal{B}^d is the \sigma-algebra of Borel subsets of \mathbb{R}^d.

1. Preliminaries

In the previous section, the gPC expansion was used to transform linear stochastic differential equations into higher dimensional linear ordinary differential equations. In this section we explore a similar transformation of the nonlinear problem. Here we consider certain types of nonlinearities that may be present in the system model: rational polynomials, transcendental functions and exponentials. We outline the process for representing these nonlinearities in terms of polynomial chaos expansions.

If x, y are random variables with gPC expansions similar to eqn. (III.9), then the gPC expansion of the expression xy can be written as

xy = \sum_{i=0}^{p} \sum_{j=0}^{p} x_i y_j\, \phi_i \phi_j

The gPC expansion of x^2 can be derived by setting y = x in the above expansion to obtain

x^2 = \sum_{i=0}^{p} \sum_{j=0}^{p} x_i x_j\, \phi_i \phi_j

Similarly, x^3 can be expanded as

x^3 = \sum_{i=0}^{p} \sum_{j=0}^{p} \sum_{k=0}^{p} x_i x_j x_k\, \phi_i \phi_j \phi_k

This approach can be used to derive the gPC expansion of any multi-variate whole rational monomial in general. The gPC expansion of fractional rational monomials of random variables is illustrated using the expression z = x/y. If x, y are random variables, then z is also a random variable with a gPC expansion similar to eqn. (III.9). The expansions of x and y are known. The gPC expansion of z can be determined using the following steps. Rewrite z = x/y as

yz = x

Expanding yz and x in terms of their gPC expansions gives

\sum_{i=0}^{p} \sum_{j=0}^{p} z_i y_j\, \phi_i \phi_j = \sum_{k=0}^{p} x_k \phi_k

To determine the unknown z_i, we project both sides of the equation onto the subspace basis to obtain a system of p+1 linear equations,

\frac{1}{\langle \phi_k, \phi_k \rangle} \sum_{i=0}^{p} \sum_{j=0}^{p} z_i y_j \langle \phi_i \phi_j \phi_k \rangle = x_k, \qquad k = 0, \ldots, p

to solve for the unknowns z_i. This can be generalized to obtain the gPC expansion of any fractional rational monomial.
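The projection step for z = x/y amounts to a small linear solve. The sketch below illustrates it for hypothetical Legendre-chaos coefficients of x and y (illustrative values only; y must be bounded away from zero on the support for the quotient to be well defined).

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

p = 3
phi = [Legendre.basis(k) for k in range(p + 1)]

def inner(*polys):
    return quad(lambda d: 0.5 * np.prod([q(d) for q in polys]), -1.0, 1.0)[0]

# Triple-product tensor e[i, j, k] = <phi_i phi_j phi_k> and norms <phi_k^2>
e = np.array([[[inner(phi[i], phi[j], phi[k]) for k in range(p + 1)]
               for j in range(p + 1)] for i in range(p + 1)])
nrm = np.array([inner(phi[k], phi[k]) for k in range(p + 1)])

# Hypothetical gPC coefficients of x and y
x = np.array([1.0, 0.2, 0.0, 0.0])
y = np.array([2.0, 0.1, 0.0, 0.0])

# Galerkin projection of y*z = x gives a (p+1) x (p+1) linear system for z
M = np.einsum('j,ijk->ki', y, e) / nrm[:, None]
z = np.linalg.solve(M, x)
print(z)
```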

For dynamic systems that can be expressed as polynomial systems with stochastic coefficients, we can develop a framework for obtaining the gPC expansion. The gPC methodology is useful because it preserves the order of polynomial systems. In other words, a qth-order polynomial remains a qth-order polynomial after the substitution. While the order of the dynamics is preserved, as with linear systems the number of states is increased.

2. Stochastic Polynomial Systems

Consider a system of the form

\dot{x}_i(t,\Delta) = \sum_{j=1}^{m} a_{ij}(\Delta)\, x^{\alpha_j}(t,\Delta)    (III.31)

where m represents the number of terms in the expression, i = 1, \ldots, n indexes the states, a_{ij} are the coefficients, x = [x_1 \cdots x_n]^T, and \alpha_j = [\alpha_{j1} \cdots \alpha_{jn}]^T with \alpha_{jk} \in \mathbb{N}_+ is a vector containing the order of each state in the monomial. For example, the term x_1^2 x_2^3 x_3 = x^{\alpha} with \alpha = [2\ 3\ 1]^T. Note that, without loss of generality, this vector does not need to depend upon i, because we can add zeros to a_{ij} for any terms that do not appear in the equations of some state x_i. To apply the gPC expansion to this equation of motion, we write

x_i(t,\Delta) = \sum_{k=0}^{p} x_{i,k}(t)\,\phi_k(\Delta)    (III.32)

a_{ij}(\Delta) = \sum_{k=0}^{p} a_{ij,k}\,\phi_k(\Delta)    (III.33)

where these forms are familiar, as they are identical to those of the linear system. These expressions can be utilized to derive the equations of motion in a fashion

similar to that utilized in the previous section. Consider the term

a_{ij}(\Delta)\, x^{\alpha_j}(t,\Delta) = \sum_{k=0}^{p} \sum_{k_{11}=0}^{p} \cdots \sum_{k_{1\alpha_{j1}}=0}^{p} \sum_{k_{21}=0}^{p} \cdots \sum_{k_{n\alpha_{jn}}=0}^{p} \left[ a_{ij,k}\, x_{1,k_{11}} \cdots x_{1,k_{1\alpha_{j1}}}\, x_{2,k_{21}} \cdots x_{n,k_{n\alpha_{jn}}}\, \phi_k \phi_{k_{11}} \cdots \phi_{k_{n\alpha_{jn}}} \right]

While this expression involves a large number of summations (the number depends upon the order of each polynomial term), clearly the order of the polynomial in terms of the vector x is preserved; however, the number of terms in the polynomial has increased dramatically. As was done in the linear case, the equations of motion can be projected onto each polynomial subspace to obtain a system of ordinary differential equations in terms of the coefficients. Each equation of motion is then given by

\dot{x}_{i,q} = \sum_{j=1}^{m} \sum_{k} \left[ \hat{e}_{q,k,k_{11},\ldots,k_{n\alpha_{jn}}}\, a_{ij,k} \prod_{r=1}^{n} \prod_{s=1}^{\alpha_{jr}} x_{r,k_{rs}} \right]    (III.34)

where \hat{e}_{q,k,k_{11},\ldots,k_{n\alpha_{jn}}} = \frac{1}{\langle \phi_q^2 \rangle} \langle \phi_q \phi_k \phi_{k_{11}} \cdots \phi_{k_{n\alpha_{jn}}} \rangle and \sum_{k}[\,\cdot\,] = \sum_{k=0}^{p} \sum_{k_{11}=0}^{p} \cdots \sum_{k_{n\alpha_{jn}}=0}^{p} [\,\cdot\,]. As an example, consider a polynomial of the form a x_1^2 x_2 with x_1, x_2, and a as random variables. For this term, \alpha = [2\ 1]^T. The gPC expansion of this term is written as

a x_1^2 x_2 = \sum_{k=0}^{p} \sum_{k_{11}=0}^{p} \sum_{k_{12}=0}^{p} \sum_{k_{21}=0}^{p} a_k\, x_{1,k_{11}} x_{1,k_{12}} x_{2,k_{21}}\, \phi_k \phi_{k_{11}} \phi_{k_{12}} \phi_{k_{21}}

In general, we can write the expanded system in the following form:

\dot{X} = \sum_{j=1}^{\hat{m}} \hat{a}_{ij}\, X^{\hat{\alpha}_j}    (III.35)

where X has been previously defined in (III.22). The term \hat{m} represents the new number of terms based on the addition of more variables, \hat{a}_{ij} is the coefficient of each new term, and \hat{\alpha}_j contains the orders of each of the monomials. The stochastic

polynomial dynamics are thus written as deterministic polynomial dynamics of state dimension n(p+1). When nonlinearities involve non-polynomial functions, such as transcendental functions and exponentials, difficulties occur during computation of the projection onto the gPC subspace; the corresponding integrals may not have closed form solutions. In such cases, the integrals either have to be numerically evaluated, or these nonlinearities are first approximated as polynomials using Taylor series expansions and then the projections are computed using the methods described above. While Taylor series approximation is straightforward and generally computationally cost effective, it can become severely inaccurate when higher order gPC expansions are required to represent the physical variability. A more robust algorithm is presented by Debusschere et al. [39] for any non-polynomial function u(x) for which du/dx can be expressed as a rational function of x and u(x).

3. Nonlinear Systems Example

In this section we outline a brief example that demonstrates the ability of the gPC expansion to accurately capture the statistics of even nonlinear systems. We consider uncertainty in initial conditions, as parametric uncertainty will be considered as part of another example in a later section. Consider the following set of longitudinal non-dimensionalized Vinh's equations for a vehicle passing through the atmosphere:

\dot{h} = V \sin\gamma

\dot{V} = -\frac{\rho V^2 R_0}{2 B_c} - \frac{g R_0}{V_c^2}\sin\gamma

\dot{\gamma} = \frac{g R_0}{V_c^2}\left(\frac{V^2 - 1}{V}\right)\cos\gamma + \frac{\rho R_0}{2 B_c}\, V \frac{L}{D}

\dot{x} = V \cos\gamma

In these equations, V represents the non-dimensionalized velocity, h the non-dimensionalized altitude, \gamma the flight path angle, and x the non-dimensionalized lateral distance. The term R_0 represents the radius of the body (in this case Mars), B_c is the ballistic coefficient, g is the acceleration due to gravity at the surface of the body, and V_c is the circular orbit velocity of the body, which can be approximated as \sqrt{g R_0}. Finally, L/D is the lift to drag ratio, which for the purposes of this example is assumed constant. The density \rho is a function of the altitude, given by

\rho = \rho_0 \exp\left(\frac{h_2 - h R_0}{h_1}\right)

For the purposes of this example we assume that the vehicle is entering the atmosphere of Mars, meaning R_0 = 3397 km, h_1 = 9.8 km, and h_2 = 20 km, with g and \rho_0 taking their Mars surface values. The vehicle is assumed to have a ballistic coefficient of 72.8 kg/m^2 and a lift to drag ratio of 0.3. For this example, we consider initial condition uncertainty in the altitude. Assume that the initial condition uncertainty appears as a linear perturbation to the nominal initial condition and that the value of the perturbation is governed by a beta distribution with 20% uncertainty. In other words, at the start of the simulation the height of the vehicle is unknown, and the starting altitude may be anywhere within 20% of the mean value. The \alpha and \beta parameters of the beta distribution are both chosen to be 2; this produces a Gaussian-like curve with the highest probability associated with the mean. The total range of uncertainty is governed by a single random variable. We utilize the gPC expansion for each

of the states to obtain the equations

\sum_{i=0}^{p} \dot{h}_i \phi_i = \Big(\sum_{i=0}^{p} V_i \phi_i\Big) \sin\Big(\sum_{j=0}^{p} \gamma_j \phi_j\Big)

\sum_{i=0}^{p} \dot{V}_i \phi_i = -\frac{\rho R_0}{2 B_c} \sum_{i=0}^{p}\sum_{j=0}^{p} V_i V_j \phi_i \phi_j - \frac{g R_0}{V_c^2} \sin\Big(\sum_{i=0}^{p} \gamma_i \phi_i\Big)

\sum_{i=0}^{p} \dot{\gamma}_i \phi_i = \frac{g R_0}{V_c^2} \cos\Big(\sum_{i=0}^{p} \gamma_i \phi_i\Big)\Big(\sum_{j=0}^{p} V_j \phi_j - \frac{1}{\sum_{k=0}^{p} V_k \phi_k}\Big) + \frac{\rho R_0}{2 B_c} \frac{L}{D} \sum_{i=0}^{p} V_i \phi_i

\sum_{i=0}^{p} \dot{x}_i \phi_i = \Big(\sum_{i=0}^{p} V_i \phi_i\Big) \cos\Big(\sum_{j=0}^{p} \gamma_j \phi_j\Big)

Additionally, since \rho is a function of the altitude, it must also be written as a gPC variable,

\rho = \rho_0 \exp\left(\frac{h_2 - \big(\sum_{i=0}^{p} h_i \phi_i\big) R_0}{h_1}\right)

The gPC variables in this equation are all written with respect to the random variable that governs the initial condition uncertainty. These equations are nonlinear and non-polynomial, which means that we have no means of determining the gPC projections for the system directly. As a result, at each time step numerical integration is used to perform the Galerkin projection and determine the equations of motion. The initial condition can be written as

h(0) = h_0 + \Delta

The other initial conditions are assumed to be known perfectly for this example, so we write V(0) = V_0, \gamma(0) = \gamma_0, and x(0) = x_0. It is easy to write the other initial conditions as functions of the same or additional random variables; adding additional random variables adds more dimensionality to the problem. To test the accuracy of the gPC expansion, we examine the time response of each set of states for a polynomial order of p = 7.

Fig. 2. Altitude and velocity response for longitudinal Vinh's equations with uncertainty in h_0: (a) time response of altitude with 20% uncertainty in h_0; (b) time response of velocity with 20% uncertainty in h_0.

Fig. 3. \gamma and horizontal position response for longitudinal Vinh's equations with uncertainty in h_0: (a) time response of \gamma with 20% uncertainty in h_0; (b) time response of dr with 20% uncertainty in h_0.

Figures 2(a) and 2(b) demonstrate the response of the body's altitude and velocity over time due to the uncertainty in the initial value of the altitude. Figures 3(a) and 3(b) show the response of \gamma and the horizontal position, respectively. In these figures the gray area represents the trajectories generated through Monte Carlo. The Monte Carlo trajectories are generated by successive solution of the equations of motion with different initial conditions whose values are governed by a Beta distribution.

Fig. 4. Variance of altitude response for Monte Carlo and gPC predicted trajectories.

The solid black line in the figures represents the predicted mean value from gPC, and the dashed lines at the edges represent the maximum and minimum trajectory values predicted by gPC. The figures demonstrate that the predictions of the mean as well as the uncertainty bounds are very accurate. As the evolution of the equations demonstrates, the initial condition uncertainty in h results in uncertainty in all of the other states as time progresses. The uncertainty in \gamma is very large (57 degrees) by the time the body is nearing the surface (h = 0). Despite the large uncertainty, the gPC approximation is able to capture the bounds very accurately. To further demonstrate the accuracy of the expansion, consider figures 4 and 5. The top portions of these figures show the variance of the altitude and velocity, respectively. In each figure it is clear that the variance obtained from Monte Carlo simulation and that obtained from the gPC approximation are very close.

Fig. 5. Variance of velocity response for Monte Carlo and gPC predicted trajectories.

As an additional comparison, the bottom portions of figures 4 and 5 show the normalized error between the variances predicted by Monte Carlo simulation and those predicted by the gPC approximation. In each case the errors in predicted variance are at least four orders of magnitude smaller than the actual variance. The largest errors are observed when the variance is near zero; this is because the actual variance error is beginning to approach the tolerances of the integration scheme. Next, we quantify the relationship between the observed error and the number of terms used in the polynomial chaos expansion. In particular, it has been observed that the expansion converges exponentially [15], and we wish to verify this. Figure 6 shows the errors between the predicted altitude mean and variance values for gPC versus the Monte Carlo results. The figure shows the trends in these errors as the polynomial order is increased.

Fig. 6. Altitude mean and variance errors between gPC and Monte Carlo for various polynomial orders.

It is important to note that the error in the variance prediction is higher than that of the predicted mean for the same polynomial order. This is to be expected, as the variance is second order while the mean is first order. A more accurate prediction of the variance will therefore require higher order polynomials. As an example of this, consider a trajectory that can be completely modeled with three polynomials. We can write this trajectory as x_t(\Delta) = x_0\phi_0 + x_1\phi_1 + x_2\phi_2. The mean of the trajectory is simply x_0. Therefore, only a single gPC term is needed to capture the mean, though with differential equations higher-order terms can influence the evolution of x_0, meaning these will be required to accurately predict the mean. To compute the variance, we need to compute

E[(x - E[x])^2] = E[(x_1\phi_1 + x_2\phi_2)^2] = E[x_1^2\phi_1^2 + 2 x_1 x_2 \phi_1\phi_2 + x_2^2\phi_2^2] = x_1^2\langle\phi_1^2\rangle + x_2^2\langle\phi_2^2\rangle

The first term, x_0, cancels from the expression because it is the mean. So we can see that even while the mean can be captured accurately with just one term, to capture the variance we indeed need all three terms. Therefore, to predict the variance with high accuracy, we will require polynomials of higher order than would be needed to produce equivalent accuracy in the mean.
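Stated directly in terms of the coefficients: for Legendre chaos under the uniform density, the mean is the zeroth coefficient and the variance is a weighted sum of squares of the remaining coefficients. The sketch below uses made-up coefficient values purely for illustration.

```python
import numpy as np

# Hypothetical Legendre-chaos coefficients of a scalar state at one time instant
x = np.array([1.30, -0.42, 0.05, -0.01])                        # [x_0, ..., x_p]
norms = np.array([1.0 / (2 * k + 1) for k in range(len(x))])    # <phi_k^2> under the uniform density

mean = x[0]                             # E[x] = x_0, since phi_0 = 1
var = np.sum(x[1:]**2 * norms[1:])      # E[(x - E[x])^2] = sum_{k>=1} x_k^2 <phi_k^2>
print(mean, var)
```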

Fortunately, figure 6 shows that the errors associated with the gPC approximation go to zero exponentially as the order of the polynomials is increased. This means that we will require only a few polynomials to obtain high accuracy. In fact, accuracy to 3 significant figures is nearly obtained for third order polynomials (corresponding to p + 1 = 4). While the errors should continue to decrease indefinitely as the polynomial order is increased, this might not always be observed in practice. If higher order terms were added to figure 6, the errors would remain constant and no improvement would be observed past p + 1 = 8. This is because the errors in the variance are approaching the integration tolerances of the solver. If higher accuracy is required, the solver accuracy should be increased.

E. Stochastic Stability Analysis of Linear Systems

By representing the stochastic system in a deterministic framework, we are able to analyze stability properties of the stochastic system using tools developed for deterministic systems. This enables the definition of stability conditions in terms of the augmented state vector, which results in a single larger linear matrix inequality (LMI), as opposed to many smaller LMIs in the case of sampling based approaches.

Proposition III.1 The system in (III.20) with u = 0 is stable if and only if there exists a P = P^T > 0 such that

A^T P + P A < 0

Proof Choose V = X^T P X and utilize the standard Lyapunov argument.

For the discrete time case, we have a similar result.

Proposition III.2 The system in (III.21) with u = 0 is stable if and only if there exists a P = P^T > 0 such that

A^T P A - P < 0

Proof Choose V = X^T P X and again utilize the standard Lyapunov argument.

This result is presented to demonstrate the power of the approach to enable the study of system stability in terms of well known methodologies.

Remark III.3 The number of polynomials should be chosen to accurately represent the stochastic process as a finite dimensional process, as the validity of the stability arguments only relates to the approximated random process.

Remark III.4 The stability conditions in the previous propositions only guarantee stability of the Galerkin projection of the linear system. This is an approximation and as such does not guarantee that the original stochastic system is indeed stable for all \Delta. For more discussion see Chapter VI.

The closed-loop stability of a system can be analyzed by utilizing similar arguments.

Proposition III.5 Given a feedback control u(t,\Delta) = K x(t,\Delta), the feedback gain K asymptotically stabilizes the projection of the family of systems, parameterized by \Delta, if the condition

A^T P + P A + (K^T \otimes I_{p+1}) B^T P + P B (K \otimes I_{p+1}) < 0

is satisfied for some P = P^T > 0.

Proof First, let us look at u(t,\Delta) = K x(t,\Delta). When x(t,\Delta) is approximated by a gPC expansion, u(t,\Delta) = K \hat{x}(t,\Delta), and

u_i(t,\Delta) = \sum_{l=0}^{p} u_{i,l}\,\phi_l = \sum_{j=1}^{n} \sum_{k=0}^{p} k_{ij}\, x_{j,k}\,\phi_k

By projecting, we find that

U = (K \otimes I_{p+1}) X    (III.36)

Therefore, the closed loop system is given by

\dot{X} = A X + B (K \otimes I_{p+1}) X

Using the Lyapunov function V = X^T P X, where P = P^T > 0, and taking its derivative implies that the system is asymptotically stable (exponentially stable, since this is a linear system) if

X^T \left( A^T P + P A + (K^T \otimes I_{p+1}) B^T P + P B (K \otimes I_{p+1}) \right) X < 0.

This completes the proof.

For discrete time, we can formulate a similar proposition.

Proposition III.6 Given a feedback control u(k,\Delta) = K x(k,\Delta), the feedback gain K asymptotically stabilizes the projection of the family of systems, parameterized by \Delta, if the condition

\left( A^T + (K^T \otimes I_{p+1}) B^T \right) P \left( A + B (K \otimes I_{p+1}) \right) - P < 0

is satisfied for some P = P^T > 0. Determination of stability can be posed as the LMI feasibility problem

\begin{bmatrix} P & P\left(A + B(K \otimes I_{p+1})\right) \\ \left(A^T + (K^T \otimes I_{p+1}) B^T\right) P & P \end{bmatrix} > 0

Proof As was determined in the previous proposition, U = (K \otimes I_{p+1}) X. To analyze the stability we use the Lyapunov function V(k) = X(k)^T P X(k). For stability of linear systems with a Lyapunov function of this form, we require V(k+1) - V(k) < 0.

It is simple to see that substitution of X(k+1) in terms of X(k) yields

X^T(k) \left( \left(A^T + (K^T \otimes I_{p+1}) B^T\right) P \left(A + B(K \otimes I_{p+1})\right) - P \right) X(k) < 0

which implies that if a P = P^T > 0 is found, the system is stable. Using Schur complements, it is easy to write this condition as an LMI [40]. This completes the proof.

This result allows us to test the stability of a control law for a family of systems by the analysis of a single deterministic system. It does not make sense to examine marginal stability (Re \lambda(A) = 0 for continuous and |\lambda_i(A)| = 1 for discrete systems) for these systems, because any inaccuracy in the approximation of the system could lead to instability. Therefore, the amount of uncertainty in the approximation should be considered when analyzing stability margins. It is also worth noting that for stability analysis, the particular probability density function being considered is not important. For the system to be stable with probability one, it must be stable for any parameter values that occur with non-zero probability. Thus, for a continuous pdf the system must be stable over the entire support with non-zero measure. For certain types of distributions this can be impractical. For example, when considering a Gaussian distribution the support is (-\infty, \infty), meaning that it could be practically impossible to ensure stability with probability one. In such cases it is more practical to consider stability metrics such as 6\sigma.
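Conditions like those in Propositions III.1 and III.5 are semidefinite feasibility problems and can be checked with off-the-shelf SDP solvers. The sketch below does so with cvxpy for a small placeholder matrix standing in for the projected A; the matrix values and the tolerance eps are assumptions of the sketch.

```python
import numpy as np
import cvxpy as cp

# Placeholder for the projected gPC system matrix (dimension n(p+1) in general)
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)      # 'optimal' means a Lyapunov certificate P exists
print(P.value)
```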

65 52 16 model. For simlicity, we assume that the variation in the system arameters are deendent on a single random variable,, i.e. the variation in these arameters is not indeendent. In general, these arameters could be indeendent random rocesses. In this examle, we consider the short-eriod aroximation of an F-16. The model is given by ẋ = Ax + Bu y = Cx where the state vector x = [α q x e ] T ; α is the angle of attack, q is the itch rate, and x e is an elevator state which catures actuator dynamics. The control, u = δ ec, is the elevator command in degrees. The matrix arameters are A = ( ) ( ) ( ) B = [ ] T C = [ π 0 ] The values in arenthesis are assumed to be uniformly distributed with 10% deviation about their nominal values. A frequency-domain control has been designed based on feedback of q for the nominal system. The control is of the form u = s s s q which is designed to be a itch-rate tracking controller. This is converted to statesace form (A c, B c, C c ) and augmented to the system to arrive at the closed loo

system

\dot{x}_a = A_{cl}\, x_a + B_{cl}\, u

where

A_{cl} = \begin{bmatrix} A & B C_c \\ B_c C & A_c \end{bmatrix}, \qquad B_{cl} = \begin{bmatrix} 0 \\ B_c \end{bmatrix}

The accuracy of the gPC based approach, for finite dimensional approximation of linear stochastic dynamics, can be inferred from figure 7. The circles (black) represent the eigenvalues of the gPC system with ten terms. The solid (red) dots represent the eigenvalues of the system obtained by sampling the stochastic system over \Delta. It is interesting to note that the range of eigenvalues of the stochastic system is accurately captured by the eigenvalues of the gPC system. It should be noted that \lambda(A) does not give the distribution of eigenvalues of the actual system; that would require the solution of a different problem. Regardless, this gives us confidence in the use of polynomial chaos for stability analysis and control of stochastic dynamical systems. Furthermore, we are able to understand how the uncertainty in system trajectories evolves over time. Figure 8 shows the pitch rate response of the system in the presence of ±10% system uncertainty in the aforementioned parameters. The predicted mean and trajectory bounds from gPC are represented by the dark solid and dashed lines, respectively. The Monte Carlo responses of each system are depicted in gray. We observe that the bounds predicted by the gPC system are in excellent agreement with the responses of the Monte Carlo simulations. As an additional point of comparison, figure 9 shows the normalized errors in the mean and variance for the pitch rate response of the aircraft. This figure is generated for p = 3.

Fig. 7. Closed loop eigenvalue distributions of short-period mode for ±20% parameter uncertainty.

Fig. 8. Predicted and Monte Carlo system response to ±10% parameter uncertainty.

Fig. 9. Normalized mean and variance errors between gPC prediction and Monte Carlo observation.

This demonstrates that for linear systems, high accuracy can be obtained with a relatively small number of polynomials. In this manner, we are able to predict the statistical behavior of the system through simulation of the gPC system, which is computationally far superior to Monte Carlo methods.

G. Stability of Nonlinear Systems

1. Methodology

In the section on nonlinear modelling, we showed that stochastic polynomial systems of n variables can be modelled as deterministic polynomial systems of the same order with n(p+1) variables. Transforming stochastic dynamic systems into deterministic systems allows us to utilize previously existing techniques to test the stability of such a system. Because the resulting system is deterministic, any stability technique for nonlinear systems can be utilized to analyze the stability of the projected gPC system.

In general, for nonlinear systems the number of polynomials required to approximate the system behavior (especially if extreme accuracy in the variance is required) can be large. For this reason, it can become difficult to analyze the stability of the system using traditional analytical techniques. The larger set of system dynamics makes numerical or automated stability analysis tools much more attractive. One such technique is Sum-of-Squares (SOS) programming. We can utilize the SOS framework to discuss the stability of these new polynomial systems. For details on this approach, see [41, 42, 43].

Let \mathcal{X}(X) be a vector of monomials with the property that \mathcal{X} = 0 if and only if X = 0. Define a function

V = \mathcal{X}^T P \mathcal{X}    (III.37)

Furthermore, define a function W(X) that is positive definite in X and is a sum-of-squares polynomial in terms of monomials of X.

Proposition III.7 The approximation of the family of polynomial systems is stable when a function V can be found such that

V(X) - W(X) is SOS    (III.38)

-\dot{V}(X) is SOS    (III.39)

Proof For proof see [41].

Remark III.8 This result is straightforward but powerful. It enables the analysis of uncertainty in nonlinear systems in an algorithmic manner that does not require case-by-case analysis of the various changes in the terms. The drawback is that stability is only proven for the approximation governed by the projected system and not for the actual stochastic system.

2. Example

For linear systems with parametric uncertainty appearing linearly in the parameters (as in the previous example), it is possible to determine system stability by examining the stability of the vertex set [9]. While for many nonlinear systems this may be the case, one cannot in general assume that stability of the vertex set implies stability of the nonlinear system over the entire range. As a result, it becomes even more important to ensure that stability is guaranteed for the entire distribution of parameters. The gPC methodology, in this context, is very useful in the analysis of stability for uncertain nonlinear systems. Proof of stability for the gPC system ensures that the stochastic nonlinear system is stable for the entire distribution of parameter uncertainty with non-zero measure. This is exemplified by the following analysis. Consider the system

\dot{x}_1 = x_2, \qquad \dot{x}_2 = -x_1 + a(\Delta)\, x_2^3

We want to understand the stability of this system when a is uncertain and its value is governed by a uniform distribution around a mean value of -0.5. For this case, we consider a distribution that varies by ±0.4, i.e., a(\Delta) \in [-0.9, -0.1]. The nominal system is stable, and by utilizing SOSTOOLS (see [44, 45]) we are able to show stability and obtain a Lyapunov function of the form

V = 0.79602\, x_1^2 + c\, x_2^2

To verify the stability of the system, we introduce the gPC expansion and determine the stability of the deterministic system. The deterministic system is another polynomial system of the same order, but with increased dimensionality.
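Before running the gPC/SOS machinery, the claim is easy to spot-check numerically. Assuming the reconstructed signs above (\dot{x}_1 = x_2, \dot{x}_2 = -x_1 + a x_2^3 with a \in [-0.9, -0.1]), the simple quadratic V = x_1^2 + x_2^2 stands in for the SOSTOOLS certificate and gives \dot{V} = 2 a x_2^4 \le 0 over the whole support; the sketch below samples states and parameters to confirm this. A sampling check is, of course, not a certificate, which is precisely the point of the SOS approach.

```python
import numpy as np

rng = np.random.default_rng(1)
worst = -np.inf
for _ in range(20000):
    x1, x2 = rng.uniform(-2.0, 2.0, size=2)
    a = rng.uniform(-0.9, -0.1)                # uncertain parameter over its support
    f1, f2 = x2, -x1 + a * x2**3               # reconstructed dynamics (an assumption)
    Vdot = 2.0 * x1 * f1 + 2.0 * x2 * f2       # = 2*a*x2**4 for V = x1^2 + x2^2
    worst = max(worst, Vdot)
print(worst)                                   # should not exceed zero, since a < 0
```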

To demonstrate the methodology, stability certificates were generated for various numbers of gPC terms. For a specific case of p = 3 (four terms per state), the Lyapunov function is given by

V = Z^T Q Z

where Z = [x_{2,3}\ x_{2,2}\ x_{2,1}\ x_{2,0}\ x_{1,3}\ x_{1,2}\ x_{1,1}\ x_{1,0}]^T and

Q = \begin{bmatrix} Q_{11} & 0 \\ 0 & Q_{22} \end{bmatrix}

with 4 \times 4 sub-matrices Q_{11} and Q_{22}. It is interesting to note that for this system, the Q matrix takes a block diagonal form. The Lyapunov function for the gPC system retains the original structure, i.e., it is also block diagonal. This suggests ways of examining stability and generating certificates for gPC systems. It is important to note that the number of terms in the certificate increases significantly as more coefficients are added. If the structure of the Lyapunov function is unknown, then guessing all possibilities of monomials can lead to problem formulations with large numbers of variables, which are extremely computationally intensive in the SOSTOOLS framework.

H. Summary

In this chapter we have presented a generalized framework for the analysis of stochastic linear and nonlinear systems within the gPC framework. A general formulation for writing a stochastic system in terms of a higher dimensional deterministic system was presented for linear systems as well as nonlinear polynomial systems. Stability problems for linear systems with stochastic parameter uncertainty have been reduced to the solution of an LMI feasibility problem. For nonlinear polynomial systems, stability problems are solved using a sum-of-squares programming approach. The chapter also presents several examples that highlight the application of the new deterministic stability conditions to the analysis of linear and nonlinear stability problems.

CHAPTER IV

OPTIMAL CONTROL OF STOCHASTIC SYSTEMS

A. Introduction

In the previous chapter, the discussion was centered around modelling and stability analysis of linear and nonlinear stochastic systems. We now take some of the same concepts and apply these ideas to the study of control design for stochastic systems. In this chapter we will use the modelling results from the previous chapter to solve linear and nonlinear stochastic optimal control problems.

As mentioned previously, control of stochastic systems has been receiving heightened attention of late. The more prevalent approaches in the literature tend to utilize sampling based methods. For linear systems, the work of Barmish et al. utilizes Monte Carlo based methods to analyze stability and control problems in an LMI framework [7, 8]. The results are somewhat limited, however, as they can only be applied to systems where the uncertain parameters are governed by a uniform distribution. Polyak et al. [9] develop an algorithm to determine a control with guaranteed worst-case cost. Unfortunately, this approach is also limited to systems with uncertainty governed by a uniform distribution and appearing linearly in the parameters. A sampling based technique is also applied to the H-infinity problem in [11]. These sampling based results are extended to linear parameter varying (LPV) control problems in [10]. For linear problems we are interested in obtaining optimal feedback laws for unconstrained infinite horizon optimal control problems. We will also examine finite-horizon constrained problems for both linear and nonlinear systems.

The main focus of this chapter is on optimal control in the L_2 sense for linear and

nonlinear systems with probabilistic uncertainty in system parameters. It is assumed that the probability density functions of these parameters are known. These parameters may enter the system dynamics in any manner. In the beginning of the chapter, we address minimum expectation feedback control for linear systems. We discuss the formulation of the stochastic L_2 optimal problem of the Bolza type in terms of the polynomial chaos expansion. To solve this problem we consider several different feedback structures for continuous and discrete time systems. Several examples are presented to highlight the various feedback laws. The latter part of the chapter deals with the transformation of finite-time constrained open-loop stochastic optimal control problems to equivalent deterministic optimal control problems in higher dimensional state space. These problems are solved using standard numerical methods available for deterministic optimal control problems. For these problems, uncertainty is assumed to be in the system parameters as well as in stochastic forcing terms. We assume that for each type of uncertainty, the probability distribution function is known. In particular, nonlinear dynamical systems are considered, though the methodology could easily be applied to linear systems.

B. Stochastic LQR Design

In this section we address feedback control of linear stochastic dynamical systems with probabilistic system parameters, in the gPC framework. Here we consider optimal control with respect to the expected value of a quadratic cost function that depends on the state and control vectors. We examine the solution of this problem for both deterministic and stochastic feedback control laws, and highlight salient features of each.

1. Minimum Expectation Control

Minimum expectation optimal trajectories are obtained by minimizing the following cost function, which is analogous to the Bolza form,

\min_u E\left[ \int_0^{\infty} (x^T Q x + u^T R u)\, dt \right]    (IV.1)

where x \equiv x(t) \in \mathbb{R}^n, u \equiv u(t) \in \mathbb{R}^m, Q = Q^T > 0, and R = R^T > 0. For discrete time systems, the following cost function is used:

\min_u E\left[ \sum_{k=0}^{\infty} x^T(k) Q x(k) + u^T(k) R u(k) \right]    (IV.2)

For scalar x, the quantity E[x^2] in terms of its gPC expansion is given by

E[x^2] = \int_D \sum_{i=0}^{p} \sum_{j=0}^{p} x_i x_j\, \phi_i \phi_j\, f\, d\Delta = \mathbf{x}^T W \mathbf{x}    (IV.3)

where D is the domain of \Delta, x_i are the gPC expansions of x, f \equiv f(\Delta) is the probability distribution of \Delta, W \in \mathbb{R}^{(p+1)\times(p+1)} = \{w_{ij}\} with w_{ij} = \int_D \phi_i \phi_j f\, d\Delta = E[\phi_i^2]\delta_{ij} (note: W is a diagonal matrix), and \mathbf{x} = (x_0 \cdots x_p)^T. Because the polynomials of the PC expansion are orthogonal, we can write

W = \mathrm{diag}\left( \langle\phi_0^2\rangle, \langle\phi_1^2\rangle, \ldots, \langle\phi_p^2\rangle \right)    (IV.4)

where diag(·) is a diagonal matrix with its diagonal terms given by the values in parentheses. The expression E[x^2] can be generalized for x \in \mathbb{R}^n, where E[x^T x] is given by

E[x^T x] = X^T (I_n \otimes W) X    (IV.5)

Here I_n \in \mathbb{R}^{n \times n} is the identity matrix, \otimes is the Kronecker product, and X is given by eqn. (III.22). The cost function in eqn. (IV.1) can now be written in terms of the gPC

expansions as

\min_u J = \min_u \int_0^{\infty} \left( X^T Q_x X + E[u^T R u] \right) dt    (IV.6)

where Q_x = Q \otimes W. The discrete time cost becomes

\min_u J_d = \min_u \sum_{k=0}^{\infty} \left( X^T(k) Q_x X(k) + E[u^T(k) R u(k)] \right)    (IV.7)

The expected value of u^T R u will depend upon the control implementation discussed in subsection 2. It is also possible to write variance control and moment control problems in a similar fashion; this is discussed in more detail in section C.

2. Feedback Solutions

In this section, we discuss conditions for optimality for various feedback structures as they apply to a quadratic cost of the form developed in the previous section.

a. Augmented Deterministic State Feedback with Constant Deterministic Gain

The first implementation we discuss involves the assumption that the control is probabilistic and the augmented state vector X is used for feedback. If we assume u_i = \sum_{k=0}^{p} u_{i,k}(t)\,\phi_k(\Delta), then

E[u^T R u] = U^T R_{\bar{u}} U    (IV.8)

where R_{\bar{u}} = R \otimes W.

Proposition IV.1 The cost function in eqn. (IV.6) is minimized with a control of the form

U = -R_{\bar{u}}^{-1} B^T P X    (IV.9)

where P \in \mathbb{R}^{n(p+1) \times n(p+1)} is the solution to the Riccati equation

A^T P + P A - P B R_{\bar{u}}^{-1} B^T P + Q_x = 0    (IV.10)

Proof As per the usual solution to the LQR problem, we can write the cost function as J = X^T P X. From Euler-Lagrange (by substituting for the Lagrange multiplier), we obtain 0 = R_{\bar{u}} U + B^T P X, giving U = -R_{\bar{u}}^{-1} B^T P X, where U is defined by eqn. (III.23). Substituting this into the cost function and taking the derivative of both sides gives

\dot{P} + A^T P - P B R_{\bar{u}}^{-1} B^T P + P A - P B R_{\bar{u}}^{-1} B^T P = -Q_x - P B R_{\bar{u}}^{-1} B^T P

For the infinite horizon problem \dot{P} = 0, completing the proof.

Remark IV.2 The solution to this expression yields a constant gain matrix, but implementation requires knowledge of the gPC expansions of the states. The control vector U = -R_{\bar{u}}^{-1} B^T P X with

U^T = [\,\mathbf{u}_1(t)^T\ \mathbf{u}_2(t)^T\ \cdots\ \mathbf{u}_m(t)^T\,]    (IV.11)

defines u(t,\Delta) = \{\mathbf{u}_i(t)^T \Phi(\Delta)\}_{i=1}^{m}, a family of control laws parameterized by \Delta. The appropriate u(t) can be determined based on knowledge of \Delta; as a result, \Delta must also be known during implementation.

Remark IV.3 This control scheme can also be used to simultaneously design optimal controls for all \Delta. This is equivalent to solving the LQR problem for each value of \Delta. This will be demonstrated numerically in an example in the next section.
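Numerically, Proposition IV.1 is a standard algebraic Riccati equation in the projected coordinates, so an off-the-shelf ARE solver applies once A, B, Q_x = Q \otimes W and R_{\bar{u}} = R \otimes W have been formed. The sketch below uses small placeholder matrices in place of the projected system; they are assumptions of the sketch, not values from the text.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder projected system of dimension n(p+1)
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Qx = np.eye(2)          # stands in for Q kron W
Ru = np.eye(1)          # stands in for R kron W

P = solve_continuous_are(A, B, Qx, Ru)
K = np.linalg.solve(Ru, B.T @ P)     # U = -Ru^{-1} B^T P X = -K X, eqn. (IV.9)
print(K)
print(np.linalg.eigvals(A - B @ K))  # closed-loop eigenvalues of the projected system
```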

For discrete time systems, the equivalent control strategy is outlined in the following proposition.

Proposition IV.4 The cost function in eqn. (IV.7) is minimized with a control of the form

U(k) = -(R_{\bar{u}} + B^T P B)^{-1} B^T P A X(k)    (IV.12)

where P \in \mathbb{R}^{n(p+1) \times n(p+1)} is the solution to the Riccati equation

P = Q_x + A^T \left( P - P B (R_{\bar{u}} + B^T P B)^{-1} B^T P \right) A    (IV.13)

Proof Again, following the solution to the LQR problem, we can write the cost function as J_k = X^T(k) P(k) X(k). Examining the cost at time k in terms of the cost at k+1 gives

X^T(k) P(k) X(k) = X^T(k) Q_x X(k) + U^T(k) R_{\bar{u}} U(k) + X^T(k+1) P(k+1) X(k+1)

But the cost at time k+1 can be written as a function of the state and control at time k as

X^T(k+1) P(k+1) X(k+1) = \left( U^T(k) B^T + X^T(k) A^T \right) P(k+1) \left( A X(k) + B U(k) \right)

From this expression, we can see that

U(k) = -(R_{\bar{u}} + B^T P(k+1) B)^{-1} B^T P(k+1) A X(k)

where P(k+1) is the solution of

P(k) = Q_x + A^T \left( P(k+1) - P(k+1) B (R_{\bar{u}} + B^T P(k+1) B)^{-1} B^T P(k+1) \right) A

For the infinite horizon problem we have that P(k+1) = P(k) = P, and this completes the proof.
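The discrete-time counterpart maps to a discrete ARE solver in the same way; again, the matrices below are placeholders for the projected system, not values from the text.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 0.95]])
B = np.array([[0.0],
              [0.1]])
Qx = np.eye(2)
Ru = np.eye(1)

P = solve_discrete_are(A, B, Qx, Ru)
K = np.linalg.solve(Ru + B.T @ P @ B, B.T @ P @ A)   # U(k) = -K X(k), eqn. (IV.12)
print(K)
print(np.abs(np.linalg.eigvals(A - B @ K)))          # magnitudes inside the unit circle
```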

Remark IV.5 As in the continuous time case, implementation requires both knowledge of the gPC expansions of the states and knowledge of \Delta. Furthermore, the solution of this equation can be written in terms of an LMI using the Schur complement.

b. Stochastic State Feedback with Constant Deterministic Gain

In this formulation, the state trajectory x(t,\Delta) is used to generate a control law that is not explicitly parameterized by \Delta. This approach does not require estimation of the gPC expansions of the state and hence does not require knowledge of \Delta. We propose feedback of the form

u(t,\Delta) = K x(t,\Delta)    (IV.14)

where K is a deterministic constant gain. Once again the control is stochastic, due to the stochastic state trajectory, and enters the cost function as E[u^T R u]. The control vector in the gPC framework then becomes

U = (K \otimes I_{p+1}) X    (IV.15)

In this manner, we are selecting a feedback structure that results in a problem similar to the output feedback problem in traditional control. The modified cost function becomes

J = \int_0^{\infty} X^T \left( Q_x + (K^T \otimes I_{p+1}) R_{\bar{u}} (K \otimes I_{p+1}) \right) X\, dt    (IV.16)

for the continuous case and

J_d = \sum_{k=0}^{\infty} X^T(k) \left( Q_x + (K^T \otimes I_{p+1}) R_{\bar{u}} (K \otimes I_{p+1}) \right) X(k)    (IV.17)

for the discrete time case.

Proposition IV.6 For a feedback law of the form in eqn. (IV.14), the cost function

in eqn. (IV.16) is minimized for a matrix K \in \mathbb{R}^{m \times n} solving

A^T P + P A + P B (K \otimes I_{p+1}) + (K^T \otimes I_{p+1}) B^T P + Q_x + (K^T \otimes I_{p+1}) R_{\bar{u}} (K \otimes I_{p+1}) = 0    (IV.18)

subject to P = P^T > 0. Furthermore, a solution exists for some Q_x and R_{\bar{u}} if the feasibility condition

A^T P + P A + (K^T \otimes I_{p+1}) B^T P + P B (K \otimes I_{p+1}) < 0    (IV.19)

is satisfied.

Proof Let J = X^T P X. Taking the derivative of the cost function gives rise to the matrix equation

\dot{P} + P A + P B (K \otimes I_{p+1}) + A^T P + (K^T \otimes I_{p+1}) B^T P = -Q_x - (K^T \otimes I_{p+1}) R_{\bar{u}} (K \otimes I_{p+1})

For an infinite time interval, let \dot{P} \to 0, giving the first condition. Now, we must show the second part of the proposition. The feasibility condition implies that we can select some stabilizing gain K, select some M = M^T > 0, and find a P = P^T > 0 such that

A^T P + P A + (K^T \otimes I_{p+1}) B^T P + P B (K \otimes I_{p+1}) = -M

Select M = \hat{M} \otimes W with \hat{M} = Q + K^T R K. Because K makes the system Hurwitz, use of Lyapunov's theorem guarantees the existence of a P. This completes the proof.

Remark IV.7 The bilinear matrix inequality (BMI) in eqn. (IV.19) does not have an analytical solution and must be solved numerically to obtain K and P. The BMI can be solved using solvers such as PENBMI [46].

Proposition IV.8 For a feedback law of the form in eqn. (IV.14), the cost function in eqn. (IV.17) is minimized for a matrix K \in \mathbb{R}^{m \times n} solving

Q_x + (K^T \otimes I_{p+1}) R_{\bar{u}} (K \otimes I_{p+1}) + \left( (K^T \otimes I_{p+1}) B^T + A^T \right) P \left( A + B (K \otimes I_{p+1}) \right) - P = 0

subject to P = P^T > 0. Furthermore, a solution exists for some Q_x and R_{\bar{u}} if the feasibility condition

\left( (K^T \otimes I_{p+1}) B^T + A^T \right) P \left( A + B (K \otimes I_{p+1}) \right) - P < 0    (IV.20)

is satisfied.

Proof Let J = X^T P X. Examining the cost function at time k, as was done previously, gives rise to the matrix equation

P(k) = Q_x + (K^T \otimes I_{p+1}) R_{\bar{u}} (K \otimes I_{p+1}) + \left( (K^T \otimes I_{p+1}) B^T + A^T \right) P(k+1) \left( A + B (K \otimes I_{p+1}) \right)

For an infinite time interval, let P(k+1) = P(k) = P, giving the first condition. The feasibility condition implies that we can select some stabilizing gain K, select some M = M^T > 0, and find a P = P^T > 0 such that

\left( (K^T \otimes I_{p+1}) B^T + A^T \right) P \left( A + B (K \otimes I_{p+1}) \right) - P = -M

As in the previous proposition, select M = \hat{M} \otimes W with \hat{M} = Q + K^T R K. Because K makes the system stable, use of Lyapunov's theorem guarantees the existence of a P. This completes the proof.

Remark IV.9 The condition in Proposition IV.8 can also be written as a bilinear matrix inequality, though this is less intuitive. The BMI has the form

\begin{bmatrix} Y & (K^T \otimes I_{p+1})(R_{\bar{u}} + B^T P B) \\ (R_{\bar{u}} + B^T P B)(K \otimes I_{p+1}) & -(R_{\bar{u}} + B^T P B) \end{bmatrix} \le 0

where

Y = Q_x + A^T P A + (K^T \otimes I_{p+1}) B^T P A + A^T P B (K \otimes I_{p+1}) - P

Remark IV.10 Unlike the previous design, the variation in the state trajectories maps directly to a corresponding deterministic control and does not require explicit knowledge of \Delta. This can lead to computational benefits during implementation. This feedback structure mimics the traditional robust control approach, where a single controller guarantees robust performance for the entire range of parameter variation. The advantage here is that it admits any arbitrary distribution, whereas traditional robust control is limited to the uniform distribution only. At first glance it would seem that these bilinear equations could be reduced to linear equations through standard substitutions as in [40]. This is not the case, however, because the Kronecker product creates more equations than unknowns. Such substitutions require an inverse to solve for the gain K, but such a procedure would not preserve the Kronecker structure.

c. Stochastic State Feedback with Stochastic Gain

This section deals with the optimality of a control law that involves feedback of the form u = K(\Delta)\, x(t,\Delta), where the constant gain depends on the random variable \Delta. In terms of the gPC expansions, K(\Delta) can be written as K(\Delta) = \{k_{ij}(\Delta)\} with k_{ij}(\Delta) = \sum_{h=0}^{p} k_{ij,h}\,\phi_h(\Delta). This feedback structure is also analogous to output feedback control, but with an increased degree of freedom. Implementation of this control law requires knowledge of \Delta. To determine the values u_{i,l}, we project the control onto the polynomial subspace,

u_{i,l} = \frac{1}{\langle \phi_l^2 \rangle} \sum_{j=1}^{n} \sum_{h=0}^{p} \sum_{q=0}^{p} k_{ij,h}\, x_{j,q}\, \langle \phi_l, \phi_h \phi_q \rangle

giving

U = \Big( \sum_{h=0}^{p} K_h \otimes \Psi_h \Big) X = \mathbf{K} X    (IV.21)

When the gain is not a function of \Delta, this corresponds to k_{ij,h} = 0 for h \ge 1. The matrix \Psi_0 = I_{p+1}, so the previous case is recovered. The cost function is written in terms of this feedback strategy as

J = \int_0^{\infty} X^T \left( Q_x + \mathbf{K}^T R_{\bar{u}} \mathbf{K} \right) X\, dt    (IV.22)

Proposition IV.11 The feedback law in eqn. (IV.21) optimally drives the system to the origin with respect to the cost function in eqn. (IV.22) for K(\Delta) solving

A^T P + P A + P B \mathbf{K} + \mathbf{K}^T B^T P + Q_x + \mathbf{K}^T R_{\bar{u}} \mathbf{K} = 0    (IV.23)

subject to P = P^T > 0. Furthermore, a solution exists for some Q_x and R_{\bar{u}} if the feasibility condition

A^T P + P A + \mathbf{K}^T B^T P + P B \mathbf{K} < 0    (IV.24)

is satisfied.

Proof The proof is similar to the previous proposition and is therefore omitted.

For the discrete time case, the cost function becomes

J_d = \sum_{k=0}^{\infty} X^T(k) \left( Q_x + \mathbf{K}^T R_{\bar{u}} \mathbf{K} \right) X(k)    (IV.25)

Proposition IV.12 The feedback law in eqn. (IV.21) optimally drives the system to the origin with respect to the cost function in eqn. (IV.25) for K(\Delta) solving

Q_x + \mathbf{K}^T R_{\bar{u}} \mathbf{K} + (\mathbf{K}^T B^T + A^T) P (A + B \mathbf{K}) - P = 0

subject to P = P^T > 0. Furthermore, a solution exists for some Q_x and R_{\bar{u}} if the feasibility condition

(A^T + \mathbf{K}^T B^T) P (A + B \mathbf{K}) - P < 0    (IV.26)

is satisfied.

Proof The proof is similar to the previous proposition for a discrete time optimal control policy and is therefore omitted.

Remark IV.13 This control strategy provides more flexibility for solving the necessary condition for optimality, at the expense of more complexity in implementation, i.e., the necessity of knowledge of \Delta.

d. Deterministic Control with Augmented State Feedback

In this feedback structure, the augmented gPC states of the stochastic system are used to derive a deterministic control. This corresponds to a control with u_i(t,\Delta) = u_{i,0}. As a result, the system B matrix becomes

\hat{B} = \begin{bmatrix} b_{11,0} & b_{12,0} & \cdots & b_{1m,0} \\ b_{11,1} & b_{12,1} & \cdots & b_{1m,1} \\ \vdots & & & \vdots \\ b_{11,p} & b_{12,p} & \cdots & b_{1m,p} \\ b_{21,0} & b_{22,0} & \cdots & b_{2m,0} \\ \vdots & & & \vdots \\ b_{n1,p} & b_{n2,p} & \cdots & b_{nm,p} \end{bmatrix}

or \hat{B} can be written in the form of eqn. (III.27), where

\hat{B}_{ij} = \sum_{k=0}^{p} b_{ij,k}\, \delta_{1k}

Here \delta_{1k} is a vector of zeros with a 1 at the kth position. Since u is a deterministic control,

E[u^T R u] = u^T R u    (IV.27)

Unlike the previous cases, the dimension of \hat{B} is n(p+1) \times m instead of n(p+1) \times m(p+1). The optimal control problem for this case involves selecting a control structure of the form

u = K X    (IV.28)

where K \in \mathbb{R}^{m \times n(p+1)}.

Proposition IV.14 Assume the matrix pair (A, \hat{B}) is stabilizable. The control law in eqn. (IV.28) with a gain given by

K = -R^{-1} \hat{B}^T P    (IV.29)

where P = P^T > 0 is the solution of the algebraic Riccati equation

A^T P + P A - P \hat{B} R^{-1} \hat{B}^T P + Q_x = 0    (IV.30)

optimizes the performance index in eqn. (IV.6) for a deterministic feedback law.

Proof This is the solution to the standard LQR problem.

The above result gives a feedback law for the continuous time case. We now give a result for the discrete time case.

Proposition IV.15 Assume the matrix pair (A, \hat{B}) is stabilizable. The control law in eqn. (IV.28) with a gain given by

K = -(R + \hat{B}^T P \hat{B})^{-1} \hat{B}^T P A    (IV.31)

where P = P^T > 0 is the solution of the algebraic Riccati equation

P = Q_x + A^T \left( P - P \hat{B} (R + \hat{B}^T P \hat{B})^{-1} \hat{B}^T P \right) A    (IV.32)

optimizes the performance index in eqn. (IV.7) for a deterministic feedback law.

Proof This is the solution to the standard LQR problem.

Remark IV.16 The solution to this control problem maps X, the gPC expansions of the states, directly to the deterministic control u(t). Hence, knowledge of \Delta is necessary to compute X during implementation. As the number of PC terms is increased, the system becomes less controllable with respect to a single deterministic control vector (u_{i,0}). However, if the system is stabilizable with respect to the feedback, then the higher-order uncontrollable PC states will decay to zero as well; thus a solution to the optimization problem still exists. This type of system dynamics (with respect to the control input matrix \hat{B}) is important because the system would behave in this manner under the influence of open-loop optimal control. Therefore, if the system is stabilizable with respect to K, then an open-loop optimal solution may exist. If the pair (A, \hat{B}) is not stabilizable, then stochastic feedback may be needed and the approaches of the previous propositions will be required.

3. Examples

a. Deterministic State Feedback Example

As a simple example, consider the following model of an F-16 aircraft at high angle of attack,

\dot{x} = A x + B u

with states x = [V\ \alpha\ q\ \theta\ T]^T, where V is the velocity, \alpha the angle of attack, q the pitch rate, \theta the pitch angle, and T the thrust. The controls, u = [\delta_{th}\ \delta_e]^T, are the elevator deflection \delta_e and the throttle \delta_{th}. The A and B matrices take fixed numerical values. Similar to the analysis of Lu [11], certain terms of the A matrix (one of them with nominal value 0.9276) are assumed to be uncertain and are functions of a single random variable \Delta. The uncertainty in these terms is assumed to be distributed uniformly within ±20% about their nominal values. This uncertainty corresponds to uncertainty in the damping term C_{x_q}. The control design objective is to keep the aircraft at trim, given a perturbation in the initial condition, in the presence of such parametric uncertainty. This is accomplished with an LQR design, using the control law in eqn. (IV.9), which results in a parameterized family of optimal feedback gains. We compare the performance of the stochastic LQR design with Monte Carlo designs, where LQR designs were performed for a family of systems sampled over uniformly distributed \Delta. The cost function for the Monte Carlo designs is kept identical to that in the stochastic design, i.e., the matrices Q and R were the same for all designs.

Fig. 10. Family of \alpha trajectories from Monte Carlo and predicted mean and uncertainty from PC.

Figure 10 shows the performance of the Monte Carlo LQR designs, represented in gray, as well as the performance of the gPC based design. The variance and mean of the state trajectories, computed from the gPC expansions, are shown as dashed and solid lines, respectively. We observe that the statistics obtained from the stochastic LQR design are consistent with those obtained from the Monte Carlo simulations. The key advantage of the gPC based design framework is that the stochastic control design problem is solved deterministically and by a single design. The controller obtained is statistically similar to the family of LQR designs over the sample set of \Delta, but is synthesized in a computationally efficient and statistically consistent manner.

If the LQR problem is solved for each value of \Delta and the expected cost is computed, this gives

E[J] = E[x(\Delta)^T P(\Delta) x(\Delta)] = \int_D \sum_{i=0}^{p}\sum_{j=0}^{p}\sum_{k=0}^{p} x_i^T P_j x_k\, \phi_i \phi_j \phi_k\, f\, d\Delta = X^T \Big( \sum_{k=0}^{p} P_k \otimes W_k \Big) X = X^T P_{mc} X    (IV.33)

where we define e_{ijk} = \langle \phi_i \phi_j \phi_k \rangle and let W_k = W_k^T be defined by

W_k = \begin{bmatrix} e_{00k} & e_{01k} & \cdots & e_{0pk} \\ e_{10k} & e_{11k} & \cdots & e_{1pk} \\ \vdots & & & \vdots \\ e_{p0k} & e_{p1k} & \cdots & e_{ppk} \end{bmatrix}

The term W_0 corresponds to W in eqn. (IV.5). The question naturally arises: how does the gPC solution relate to the expected cost of solving the LQR problem for each value of \Delta? To determine this relationship we consider the two cost functions and, more specifically, their corresponding P matrices. The matrix P_{mc} has the same dimensionality as the solution of the Riccati equation in eqn. (IV.10). Comparing the cost incurred by the controller in equation (IV.9), J = X^T P X, with the expected cost of each LQR solution, X^T P_{mc} X, we expect P to tend to P_{mc} as the number of terms in the PC expansion is increased. To compare these two matrices, we solve the traditional LQR problem for a large sample of \Delta and obtain the corresponding matrix P(\Delta). This is then projected onto the polynomial chaos basis functions, giving P_k, and P_{mc} is then calculated.

Fig. 11. Frobenius norm of P_{mc} - P normalized by ||P||_F.
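The projection step just described ("P(\Delta) is projected onto the polynomial chaos basis functions, giving P_k") is the Galerkin projection (III.18) applied entrywise, which can be done by quadrature. The sketch below illustrates it for a made-up parameter-dependent matrix; the function P_of and the quadrature order are assumptions of the sketch.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Project a matrix-valued function P(delta) onto the Legendre chaos basis:
#   P_k = <P(.), phi_k> / <phi_k^2>, with the uniform density 1/2 on [-1, 1]
p = 3
phi = [Legendre.basis(k) for k in range(p + 1)]
nodes, weights = leggauss(20)                 # Gauss-Legendre quadrature on [-1, 1]

def P_of(delta):                              # hypothetical parameter-dependent matrix
    return np.array([[1.0 + 0.2 * delta, 0.1],
                     [0.1, 2.0 - 0.3 * delta]])

Pk = []
for k in range(p + 1):
    num = sum(w * P_of(d) * phi[k](d) for w, d in zip(weights, nodes)) * 0.5
    den = sum(w * phi[k](d) ** 2 for w, d in zip(weights, nodes)) * 0.5
    Pk.append(num / den)
print(Pk[0])   # mean part; Pk[1], Pk[2], ... are the higher-order projections
```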

Figure 11 shows the Frobenius norm of P_{mc} - P, normalized by ||P||_F, as a function of the number of terms included in the gPC expansion. We can see that P tends to the projection of the Monte Carlo solution exponentially as the number of terms is increased. This means that the LQR solution we obtain for the gPC system with the control in eqn. (IV.9) represents the solution of the LQR problem at each value of \Delta when the uncertainty is uniform in distribution. A formal exposition of this observation will be addressed in our future work.

b. Stochastic State Feedback with Constant Gain Example

The previous example demonstrates the ability of the gPC framework to be used to perform optimal control design over a distribution of parameters, but it requires knowledge of x_{i,j}, the gPC expansions of the state vector, as well as \Delta. In many systems it may not be feasible to measure or estimate \Delta; it may instead be preferable to utilize a constant feedback solution that is optimal with respect to the distribution of the uncertain parameters. For these cases, the control design in eqn. (IV.14) is used, where the gain is calculated as the solution of a BMI. As an example of this design approach, consider a second-order single-input problem \dot{x} = A(\Delta)x + Bu, y = Cx with C = [1\ 0], where \Delta \in [-1, 1] is a random variable entering the A matrix. We wish to design a single control that stabilizes the system and minimizes the expected cost. Solving the resulting BMI for the optimization condition yields the desired control. In addition to the minimum expectation solution proposed in this work, we also design an optimal control using standard LQR for the nominal system, i.e., \Delta = 0. Figure 12 shows the closed loop eigenvalues for both control laws. There is a much larger distribution of system

Fig. 12. Closed loop eigenvalues of minimum expectation control and nominal optimal control.

eigenvalues for the nominal control law, and the system behavior varies significantly over the range of the random variable \Delta. This example demonstrates the benefit of optimizing with respect to a distribution as opposed to a single nominal point. Figure 13 shows the cost of each control law for given values of \Delta. As an additional comparison, the cost for minimizing the worst case is shown as well.

Fig. 13. Comparison of cost of nominal and minimum expectation control as a function of \Delta.

With respect to the distribution of the parameters, the expected cost of the gPC solution is lower, though the nominal optimal solution is optimal at \Delta = 0, which is expected. Utilizing a uniform distribution for parameter uncertainty does not always result in a realistic representation. If the uncertainty in a parameter is represented by a uniform distribution, this means that knowledge of the value is limited to a range, but no value is more likely than another. In practice, parameter values exhibit more Gaussian behavior; however, the support of the Gaussian distribution is (-\infty, \infty), making it impractical for use in stability analysis. There are two approaches that can be used to alleviate this problem: the distribution can be truncated and new orthogonal polynomials generated through an algorithm such as Gram-Schmidt orthogonalization, or a similar distribution with finite support can be used.

Fig. 14. Closed loop eigenvalues of minimum expectation control for Beta distribution (top). Cost functions associated with various values of \alpha as a function of \Delta (bottom).

For this example, we chose a Beta distribution with \alpha = \beta, as this produces a Gaussian-like curve with support on [-1, 1]. The two parameters \alpha and \beta are varied along the line \alpha = \beta and optimal controllers are generated. Figure 14 shows the closed

Fig. 15. Beta distribution for various values of \alpha = \beta. The distribution approaches a \delta function as \alpha \to \infty.

loop eigenvalue locations of the system for values of \Delta between -1 and +1. The eigenvalue locations are generated for several values of \alpha. As \alpha is increased, the range of eigenvalues increases and, in the limiting case, these become the eigenvalues of the closed loop system with the nominal design. The corresponding distribution curves are shown in figure 15. As \alpha = \beta \to \infty, the distribution, weighted properly, becomes a \delta function, i.e., a single point with probability one. Finding an optimal control for the nominal case is then equivalent to finding the optimal expectation control for a system where the probability distribution is f(\Delta) = \delta(\Delta). As the value of \alpha is decreased to zero, the distribution widens and tends to a uniform distribution. Figure 14 also depicts the cost function for various values of \alpha. As the value of \alpha is increased from 0 (corresponding to a uniform distribution) toward \infty (corresponding to the nominal case), the corresponding pointwise cost evolves from that of a uniform distribution to the cost associated with the nominal design. As \alpha is increased, the extreme values of \Delta occur with lower probability, making it acceptable to take more risk for these values.

C. Optimal Control with Probabilistic System Parameters

In this section we discuss the generation of trajectories for dynamical systems in the optimal control theoretic framework. Optimality with probabilistic uncertainty in system parameters results in stochastic optimal control problems. In this section, we derive two standard cost functions that are encountered in stochastic optimal control problems in terms of the polynomial chaos expansions. Here we consider minimum expectation and minimum variance cost functions. In the following analysis, we assume x(t) is stochastic and u(t) is deterministic.

1. Minimum Expectation Trajectories

Minimum expectation optimal trajectories are obtained by minimizing the following cost function, analogous to the Bolza form,

\min_u E\left[ \int_0^{t_f} (x^T Q x + u^T R u)\, dt + x_f^T S x_f \right]    (IV.34)

where x \equiv x(t) \in \mathbb{R}^n, u \equiv u(t) \in \mathbb{R}^m, x_f = x(t_f), Q = Q^T > 0, R = R^T > 0, and S = S^T > 0. Unlike the cost function in equation (IV.1), this cost is over a finite time and also includes a terminal cost term. Following steps similar to those for the infinite horizon problem in the previous section, the cost function in eqn. (IV.34) can now be written in terms of the gPC expansions as

\min_u \int_0^{t_f} \left[ X^T Q_x X + u^T R u \right] dt + X_f^T S_x X_f    (IV.35)

where Q_x = Q \otimes W, S_x = S \otimes W, and X_f = X(t_f).

2. Minimum Covariance Trajectories

For $x \in \mathbb{R}$, the variance $\sigma^2(x)$ in terms of the gPC expansions is given by

$$ \sigma^2(x) = E\left[(x - E[x])^2\right] = E[x^2] - E^2[x] = \mathbf{x}^T W \mathbf{x} - E^2[x] $$

where

$$ E[x] = E\left[\sum_{i=0}^{p} x_i \phi_i\right] = \sum_{i=0}^{p} x_i E[\phi_i] = \sum_{i=0}^{p} x_i \int_{\mathcal{D}} \phi_i f \, d\Delta $$

or

$$ E[x] = \mathbf{x}^T F, \quad \text{where } F = \int_{\mathcal{D}} \boldsymbol{\phi} f \, d\Delta. $$

Therefore, $\sigma^2$ for scalar $x$ can be written in a compact form as

$$ \sigma^2 = \mathbf{x}^T (W - F F^T)\, \mathbf{x} \qquad (IV.36) $$

which may enter the cost function in integral form or as a final cost. The covariance of a vector process $x(t) \in \mathbb{R}^n$ is given by

$$ C_{xx}(t) = E\left[(x(t) - \bar{x}(t))(x(t) - \bar{x}(t))^T\right] = E[x(t)x(t)^T] - \bar{x}(t)\bar{x}(t)^T $$

where

$$ \bar{x}(t) = E[x(t)] = \begin{bmatrix} \mathbf{x}_1^T \\ \vdots \\ \mathbf{x}_n^T \end{bmatrix} F $$

and $\mathbf{x}_i$ is defined by eqn. (III.13). Therefore

$$ \bar{x}(t)\bar{x}(t)^T = \begin{bmatrix} \mathbf{x}_1^T F \\ \vdots \\ \mathbf{x}_n^T F \end{bmatrix} \begin{bmatrix} \mathbf{x}_1^T F & \cdots & \mathbf{x}_n^T F \end{bmatrix} = \begin{bmatrix} \mathbf{x}_1^T F F^T \mathbf{x}_1 & \cdots & \mathbf{x}_1^T F F^T \mathbf{x}_n \\ \vdots & \ddots & \vdots \\ \mathbf{x}_n^T F F^T \mathbf{x}_1 & \cdots & \mathbf{x}_n^T F F^T \mathbf{x}_n \end{bmatrix} $$

Similarly,

$$ E[x x^T] = \begin{bmatrix} \mathbf{x}_1^T W \mathbf{x}_1 & \cdots & \mathbf{x}_1^T W \mathbf{x}_n \\ \vdots & \ddots & \vdots \\ \mathbf{x}_n^T W \mathbf{x}_1 & \cdots & \mathbf{x}_n^T W \mathbf{x}_n \end{bmatrix} $$

Therefore, in terms of gPC coefficients, $C_{xx}$ can be written as

$$ C_{xx} = \begin{bmatrix} \mathbf{x}_1^T (W - FF^T) \mathbf{x}_1 & \cdots & \mathbf{x}_1^T (W - FF^T) \mathbf{x}_n \\ \vdots & \ddots & \vdots \\ \mathbf{x}_n^T (W - FF^T) \mathbf{x}_1 & \cdots & \mathbf{x}_n^T (W - FF^T) \mathbf{x}_n \end{bmatrix} \qquad (IV.37) $$

An important metric for covariance analysis is $\mathrm{Tr}[C_{xx}]$, which can be written in a compact form as

$$ \mathrm{Tr}[C_{xx}] = X^T Q_{\sigma^2} X \qquad (IV.38) $$

where $Q_{\sigma^2} = I_n \otimes (W - F F^T)$.

3. Example - Van der Pol Oscillator

In this section we apply the polynomial chaos approach to solve an example stochastic optimal control problem based on the Van der Pol oscillator, which highlights the numerical solution of stochastic optimal control problems. Consider the well known forced Van der Pol oscillator model

$$ \begin{aligned} \dot{x}_1 &= x_2 \\ \dot{x}_2 &= -x_1 + \mu(\Delta)(1 - x_1^2)\, x_2 + u \end{aligned} \qquad (IV.39) $$

where $\mu$ is a random variable with uniform distribution in the range $\mu(\Delta) \in [0, 1]$. For a uniform distribution the basis functions are Legendre polynomials and $\mathcal{D} = [-1, 1]$. Since the dimension of $\Delta$ in this case is one, $p$ is equal to the order of the Legendre polynomial. For this example we chose $p = 4$. Representing the gPC expansions of $x_1$, $x_2$, $\mu$ similar to eqn. (III.9), the dynamics of the Van der Pol oscillator in terms of the gPC expansions can be written as

$$ \begin{aligned} \dot{x}_{1,m} &= x_{2,m} \\ \dot{x}_{2,m} &= -x_{1,m} + \frac{1}{\langle \phi_m, \phi_m \rangle}\left( \sum_{i=0}^{p}\sum_{j=0}^{p} \mu_i x_{2,j}\, e_{ijm} - \sum_{i=0}^{p}\sum_{j=0}^{p}\sum_{k=0}^{p}\sum_{l=0}^{p} \mu_i x_{1,j} x_{1,k} x_{2,l}\, e_{ijklm} + \langle u, \phi_m \rangle \right) \end{aligned} $$

for $m = \{0, \ldots, p\}$, where $e_{ijm} = \langle \phi_i \phi_j \phi_m \rangle$ and $e_{ijklm} = \langle \phi_i \phi_j \phi_k \phi_l \phi_m \rangle$. Note that the projection $\langle u, \phi_m \rangle = u$ for $m = 0$, and $\langle u, \phi_m \rangle = 0$ for $m = \{1, \ldots, p\}$. Therefore the deterministic control only enters the equation for $x_{2,0}$. The stochastic optimal control problem for this example is posed as

$$ \min_{u(t)} \; E\left[\int_0^{5} \left(x_1^2 + x_2^2 + u^2\right) dt\right] \qquad (IV.40) $$

subject to the stochastic dynamics given by eqn. (IV.39) and constraints

$$ x_1(0) = 3, \qquad x_2(0) = 0, \qquad E[x_1(5)] = 0, \qquad E[x_2(5)] = 0. $$
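The only problem-specific constants in the projected dynamics above are the inner products $e_{ijm}$, $e_{ijklm}$ and the normalizations $\langle \phi_m, \phi_m \rangle$. A minimal sketch of computing them for the Legendre basis by Gauss-Legendre quadrature (expectations taken with the uniform density $f = 1/2$ on $[-1, 1]$); this is an illustrative computation, not the toolbox used in the dissertation:

```python
# Sketch: Galerkin inner products for the Legendre basis used in the projected
# Van der Pol dynamics: e_ijm = <phi_i phi_j phi_m>, e_ijklm, and <phi_m, phi_m>.
# Expectations use the uniform pdf f = 1/2 on [-1, 1].
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

p = 4
nodes, weights = leggauss(3 * p + 2)        # exact for the degree-5p quintuple products

def phi(i, x):
    """Legendre polynomial P_i evaluated at x."""
    c = np.zeros(i + 1)
    c[i] = 1.0
    return legval(x, c)

Phi = np.array([phi(i, nodes) for i in range(p + 1)])          # (p+1, n_quad)

norm = 0.5 * np.einsum('mq,mq,q->m', Phi, Phi, weights)        # <phi_m, phi_m>
e3 = 0.5 * np.einsum('iq,jq,mq,q->ijm', Phi, Phi, Phi, weights)
e5 = 0.5 * np.einsum('iq,jq,kq,lq,mq,q->ijklm', Phi, Phi, Phi, Phi, Phi, weights)

print(norm)            # should reproduce 1/(2m+1)
print(e3.shape, e5.shape)
```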

In the posed problem, we have assumed that there is no uncertainty in the initial condition of the states, and the terminal equality constraint is imposed on the expected value of the states at final time. In terms of the gPC expansions, the constraints are

$$ x_{1,0}(0) = 3, \qquad x_{1,m}(0) = 0, \quad m = \{1, \ldots, p\}, \qquad x_{2,m}(0) = 0, \quad m = \{0, \ldots, p\} \qquad (IV.41) $$
$$ E[x_1(5)] = x_{1,0}(5) = 0, \qquad E[x_2(5)] = x_{2,0}(5) = 0 $$

The optimal control problem was solved using OPTRAGEN [47], a MATLAB toolbox that transcribes optimal control problems to nonlinear programming problems using B-spline approximations. The resulting nonlinear programming problem was solved using SNOPT [48]. Figure (16(a)) shows the trajectories of the optimal solution in the subspace spanned by B-splines. The solid (red) state trajectories are the mean trajectories of $x_1(t)$, $x_2(t)$, and they satisfy the constraints defined by eqn. (IV.41). The dashed (blue) state trajectories are the remaining gPC expansions of $x_1(t)$, $x_2(t)$. The suboptimal cost for these trajectories was obtained in seconds in the MATLAB environment. The trajectories were approximated using B-splines consisting of ten 5th order polynomial pieces with 4th order smoothness at the knot points.
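The quality of a candidate control can also be assessed by brute force, which is how the gPC solution is verified below. A minimal sketch of such a Monte-Carlo check; here u_opt is a hypothetical placeholder for the control recovered from the B-spline transcription and is stubbed with zero control purely so the sketch runs:

```python
# Sketch: Monte-Carlo check of a candidate control against the gPC mean trajectory.
# 'u_opt' is a hypothetical stand-in for the optimized control u*(t); replace the
# stub with the B-spline interpolant produced by the transcription.
import numpy as np
from scipy.integrate import solve_ivp

def u_opt(t):
    return 0.0                                   # placeholder for u*(t)

def vdp(t, x, mu):
    x1, x2 = x
    return [x2, -x1 + mu * (1.0 - x1**2) * x2 + u_opt(t)]

rng = np.random.default_rng(0)
t_eval = np.linspace(0.0, 5.0, 101)
runs = []
for mu in rng.uniform(0.0, 1.0, size=500):       # mu ~ U[0, 1]
    sol = solve_ivp(vdp, (0.0, 5.0), [3.0, 0.0], args=(mu,), t_eval=t_eval)
    runs.append(sol.y)

mean_traj = np.mean(runs, axis=0)                # compare with x_{1,0}(t), x_{2,0}(t)
print(mean_traj[:, -1])                          # Monte-Carlo estimate of E[x(5)]
```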

(a) Optimal gPC trajectories for the Van der Pol oscillator. (b) Verification of stochastic optimal control law using Monte-Carlo simulations.

Fig. 16. Comparison of trajectories obtained from gPC expansions and Monte-Carlo simulations.

(a) Evolution of pdf and mean trajectory. (b) Evolution of uncertainty in state due to uncertainty in $\mu$.

Fig. 17. Evolution of the probability density function of the state trajectories due to $\mu(\Delta)$. The solid (red) line denotes the expected trajectory of $(x_1, x_2)$. The circles (blue) denote time instances, on the mean trajectory, for which the snapshots of the pdf are shown.
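The pdf snapshots in figure 17 are obtained by sampling the gPC expansion itself rather than by re-integrating the dynamics. A minimal sketch at a single snapshot time, where x1_coeff is a hypothetical array holding the gPC coefficients of $x_1$ at that time:

```python
# Sketch: pdf snapshot of x1 at one time instant by sampling the gPC expansion
# x1(t, Delta) = sum_i x1_i(t) phi_i(Delta) with Delta ~ U[-1, 1] and Legendre phi_i.
# 'x1_coeff' holds hypothetical coefficient values at the chosen snapshot time.
import numpy as np
from numpy.polynomial.legendre import legval

x1_coeff = np.array([0.5, -0.3, 0.1, 0.05, -0.02])      # placeholder, p = 4

rng = np.random.default_rng(1)
delta = rng.uniform(-1.0, 1.0, size=100_000)            # samples of the random variable
x1_samples = legval(delta, x1_coeff)                    # evaluate the expansion

hist, edges = np.histogram(x1_samples, bins=60, density=True)   # pdf estimate
print("mean:", x1_samples.mean(), "  variance:", x1_samples.var())
```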

To verify the optimal control law for the stochastic system, we applied $u^*(t)$ to the Van der Pol oscillator for various values of $\mu$, uniformly distributed over the interval $[0, 1]$, and computed the expected value of the state trajectories from the Monte-Carlo simulations. Figure (16(b)) shows the comparison of the expected trajectories $E[x_1(t)]$, $E[x_2(t)]$ obtained from gPC and Monte-Carlo methods. The trajectories are identical. Thus, the generalized polynomial chaos framework provides a powerful set of tools for solving stochastic optimal control problems.

Figure (17) shows the evolution of uncertainty in the state trajectories due to uncertainty in $\mu$. The evolution of the probability density functions (pdf), obtained via sampling, is shown in fig. (17(b)). The mean trajectory with the time varying pdf is shown in fig. (17(a)). From these figures, it can be inferred that the trajectories satisfy the terminal constraint in an average sense. However, none of the individual trajectories arrive at the origin. This is not surprising, as the constraints were only imposed on the expected value of the trajectories. For nonlinear systems the uncertainty evolves in a non-Gaussian manner. Therefore, analysis based on expectation alone can lead to erroneous interpretations, and it is important to include higher order moments in the analysis.

For this problem, localization of the state about the origin at final time can be achieved by including a terminal cost or constraint related to the covariance at final time. Figure (18) shows the probability density function at final time when a terminal constraint $\mathrm{Tr}[C_{xx}] < 0.2$ was imposed in the optimal control problem. Figure (18) also shows the probability density function, at final time, obtained without the terminal constraint. It is clear from the figure that inclusion of the terminal covariance based constraint has localized the covariance about the origin, although none of the trajectories for $\mu \in [0, 1]$, even for the constrained case, arrive at the origin. This terminal constraint, however, incurred a higher cost. The state and control trajectories are shown in fig. (19(a)). We observe that introduction of the terminal constraint $\mathrm{Tr}[C_{xx}] < 0.2$ results in higher control magnitudes. The optimal control obtained in this case also agrees with the Monte-Carlo simulations over $\mu \in [0, 1]$, and is shown in fig. (19(b)).

Fig. 18. PDF at final time, due to the terminal constraint based on covariance.

(a) Optimal gPC trajectories for the Van der Pol oscillator with terminal covariance constraint. (b) Verification of stochastic optimal control law using Monte-Carlo simulations.

Fig. 19. Comparison of trajectories obtained from gPC expansions and Monte-Carlo simulations.
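The terminal covariance metric used in this constraint is evaluated directly from the gPC coefficients through eqns. (IV.36)-(IV.38); no sampling is involved. A minimal sketch, again assuming the Legendre/uniform setting (so $W = \mathrm{diag}(1/(2i+1))$ and $F = E[\boldsymbol{\phi}] = (1, 0, \ldots, 0)^T$) and hypothetical terminal coefficient values:

```python
# Sketch: evaluate Tr[C_xx] = X^T Q_sigma2 X from gPC coefficients at final time.
# Assumption: Legendre basis with Delta ~ U[-1, 1]; X stacks the p+1 coefficients
# of x1 followed by those of x2.  The terminal coefficients below are hypothetical.
import numpy as np

p, n = 4, 2
W = np.diag([1.0 / (2 * i + 1) for i in range(p + 1)])
F = np.zeros((p + 1, 1)); F[0, 0] = 1.0                 # E[phi_0] = 1, E[phi_i] = 0

Q_sigma2 = np.kron(np.eye(n), W - F @ F.T)              # eqn. (IV.38)

X_final = np.array([0.0, 0.4, -0.1, 0.05, 0.0,          # x1 coefficients at t = 5
                    0.0, 0.3,  0.2, 0.0,  0.01])        # x2 coefficients at t = 5
trace_Cxx = X_final @ Q_sigma2 @ X_final
print("Tr[C_xx] =", trace_Cxx, "  constraint < 0.2:", trace_Cxx < 0.2)
```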

D. Optimal Control with Stochastic Forcing

In this section we consider optimal control of dynamical systems with stochastic forcing. The stochastic forcing is assumed to be a random process with known covariance. Such processes can be approximated using the Karhunen-Loéve expansion functions [20]. If the correlation window of the forcing function is zero, i.e. if it is white noise, then the number of terms of the expansion will be infinite. For this reason, we assume that the stochastic forcing function has a non-zero correlation window, making it amenable to finite dimensional approximation. This is not an unrealistic assumption, as there are many disturbance inputs in engineering systems with non-zero correlation windows. One notable example is the Dryden gust model, which uses colored (correlated) noise inputs.

The gPC expansion used in the previous sections does not utilize information related to the correlation of the noise inputs. When this information is known, the Karhunen-Loéve (KL) expansion is used to include this time based information in the dynamics. This is accomplished by adding additional random variables to the system in the following manner. Let $Z(t, \omega)$ be a zero-mean random process with known covariance function $R(t_1, t_2)$; the KL expansion is given by

$$ Z(t, \omega) = \sum_{k=1}^{\infty} \sqrt{\lambda_k}\, f_k(t)\, \Delta_k \qquad (IV.42) $$

where $\Delta_k$ is a random variable governed by the distribution that governs $Z(t, \omega)$. For example, if $Z(t, \omega)$ is a correlated Gaussian process, each random variable $\Delta_k$ is governed by a Gaussian distribution. The functions $f_k(t)$ are eigenfunctions of the covariance function, or

$$ \int_a^b R(t_1, t_2)\, f_k(t_2)\, dt_2 = \lambda_k f_k(t_1) \qquad (IV.43) $$

The scalars $\lambda_k$ are the associated eigenvalues of the orthonormal functions $f_k(t)$.
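In practice eqn. (IV.43) rarely needs a closed-form solution: discretizing the kernel on a time grid and solving the resulting symmetric eigenvalue problem is usually adequate. A minimal sketch for an exponential covariance of the kind used in the example that follows (a Nyström-type approximation); the last lines also evaluate the gPC term-count formula used later in this section:

```python
# Sketch: numerical KL eigenvalues/eigenfunctions of R(t1, t2) = exp(-a |t1 - t2|)
# on [0, 5], obtained by discretizing the integral equation (IV.43) on a uniform grid.
import math
import numpy as np

a, T, N = 1.0 / 20.0, 5.0, 400
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

R = np.exp(-a * np.abs(t[:, None] - t[None, :]))   # covariance kernel matrix
lam, vecs = np.linalg.eigh(R * dt)                 # approximates the integral operator
order = np.argsort(lam)[::-1]
lam, vecs = lam[order], vecs[:, order]
f = vecs / np.sqrt(dt)                             # orthonormal w.r.t. the time integral

print("leading KL eigenvalues:", lam[:3])          # rapid decay justifies truncation

M, r = 1, 2                                        # KL modes kept, polynomial order
print("gPC terms p + 1 =", math.comb(M + 1 + r, r))   # (M+1+r)! / ((M+1)! r!)
```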

This expansion is important because it allows us to write a correlated stochastic process as a summation of random variables with time-varying coefficients. Coupling this expansion with the projection techniques of the gPC gives a means of solving optimization problems with stochastic forcing. To return to the example of the Van der Pol oscillator, consider the oscillator forced with the stochastic forcing function $w(t, \Delta)$. The dynamics of the oscillator are then governed by

$$ \dot{x}_1 = x_2 \qquad (IV.44) $$
$$ \dot{x}_2 = -x_1 + \mu(\Delta_0)(1 - x_1^2)\, x_2 + u + w(t, \Delta) \qquad (IV.45) $$

where $\Delta_0$ corresponds to the random variable associated with uncertainty in $\mu$. The change in notation will become clear in the following text. The covariance of $w(t, \Delta)$ is given by

$$ R_w(t_1, t_2) = e^{-a |t_1 - t_2|} \qquad (IV.46) $$

This covariance can be obtained by passing white noise through a first order filter with time constant $a$. To determine the new set of approximate dynamics, we first expand $w(t, \Delta)$ with its truncated KL expansion as

$$ w(t, \Delta) = \sum_{k=1}^{M} \sqrt{\lambda_k}\, f_k(t)\, \Delta_k \qquad (IV.47) $$

The time derivative of $x_2$ then becomes

$$ \dot{x}_2 = -x_1 + \mu(\Delta_0)(1 - x_1^2)\, x_2 + u + \sum_{k=1}^{M} \sqrt{\lambda_k}\, f_k(t)\, \Delta_k \qquad (IV.48) $$

The reason for the notation $\Delta_0$ now becomes clear. Each term in the KL expansion adds a new random variable to the equations of motion, so that we now have a vector of random variables, $\Delta = [\Delta_0\ \Delta_1\ \cdots\ \Delta_M]$. Expanding our state variables in terms of KL gives

$$ x_i(t, \Delta) = x_{i,0}(t, \Delta_0) + \sum_{k=1}^{M} f_{x_i,k}(t)\, \chi_{i,k} \qquad (IV.49) $$

Because the covariance of $x$ is not known, the new random variables are written in terms of a new gPC expansion. For simplicity, the two summations in the previous equation are combined into one single summation that starts with $k = 0$. Then we write each $\chi_{i,k}$ as a gPC expansion of the random variables from the vector $\Delta$, or

$$ \chi_{i,k} = \sum_{j=0}^{p} a_{j,i,k}\, \phi_j(\Delta) \qquad (IV.50) $$

The gPC expansion has $p+1$ terms, which can be determined as $p + 1 = \frac{(M+1+r)!}{(M+1)!\, r!}$, where $r$ is the desired order of the polynomials and $M$ is the order of the KL expansion. The resulting basis, $\phi_j$, is therefore an orthogonal basis with respect to $M + 1$ independent random variables. Therefore, $x_i$ becomes

$$ x_i(t, \Delta) = \sum_{k=0}^{M} \left[ f_{x_i,k}(t) \sum_{j=0}^{p} a_{j,i,k}\, \phi_j \right] \qquad (IV.51) $$

This can be rewritten as

$$ x_i(t, \Delta) = \sum_{j=0}^{p} x_{i,j}(t)\, \phi_j \qquad (IV.52) $$

where $x_{i,j}(t) = \sum_{k=0}^{M} f_{x_i,k}(t)\, a_{j,i,k}$. Because the covariance of $x_i$ is unknown, the functions $f_{x_i,k}$ are unknown as well. Therefore, we can utilize the variables $x_{i,j}$ as the states of a differential equation and solve for these as we have done in previous sections. With the equations in this form, the Galerkin projection utilized throughout can now be used to arrive at the projected dynamics. These have the form

$$ \begin{aligned} \dot{x}_{1,m} &= x_{2,m} \\ \dot{x}_{2,m} &= -x_{1,m} + \frac{1}{\langle \phi_m, \phi_m \rangle}\left( \sum_{i=0}^{p}\sum_{j=0}^{p} \mu_i x_{2,j}\, e_{ijm} - \sum_{i=0}^{p}\sum_{j=0}^{p}\sum_{k=0}^{p}\sum_{l=0}^{p} \mu_i x_{1,j} x_{1,k} x_{2,l}\, e_{ijklm} + \langle w, \phi_m \rangle + \langle u, \phi_m \rangle \right) \end{aligned} $$

The terms in these expressions are the same as those obtained previously, with the exception of the term $\langle w, \phi_m \rangle$, which is the projection onto the forcing. Because the forcing terms are first order in $\Delta_k$, in general this term will only be non-zero for $M$ terms. For most sets of orthogonal polynomials in one variable (and in particular the traditional sets), the random variable $\Delta$ corresponds to $\phi_1$. When these sets are generalized to multiple random variables, $\Delta_j$ corresponds to $\phi_{j+1}$, where $j = 0$ corresponds to the parametric uncertainty and $j = 1, \ldots, M$ corresponds to the KL variables. Therefore the inner product can be written as

$$ \langle w, \phi_m \rangle = \sum_{j=1}^{M} \sqrt{\lambda_j}\, f_j(t)\, \langle \phi_m^2 \rangle\, \delta_{(j+1),m} $$

As an example, consider the same cost function used in the previous section in eqn. (IV.40), subject to the constraints given in eqn. (IV.41), namely an initial constraint and constraints on the final expected value. We now solve the same optimal control problem that we solved previously, except with a stochastic forcing term. For this example, we consider a uniform noise input with covariance $R(t_1, t_2) = e^{-\frac{1}{20}|t_2 - t_1|}$. The eigenfunctions of this covariance can be found analytically for a finite time interval (in our case $t \in [0, 5]$), and the first two corresponding eigenvalues are $\lambda_1 = 4.61$ and $\lambda_2 = 0.23$, with $\lambda_3$ smaller still. Because the eigenvalues decay as the number of terms is increased, only the first few terms are needed to obtain a good approximation. This is because the eigenfunctions are orthonormal and therefore have unit amplitude. For our case, the eigenfunctions are sines and cosines.

Fig. 20. Eigenvalues ($\lambda_n$) of the KL expansion for various window lengths ($1/a$).

Figure 20 shows the eigenvalues of the first 11 terms of the KL expansion. For larger correlation window sizes (smaller values of $a$), the first eigenfunction is dominant, as one would expect. For a window size of $a = 1/20$, the first term is several orders of magnitude larger than the second term, meaning for this example we can create a good approximation by only including the first term. Furthermore, to reduce the problem size and complexity we use only polynomials up to order $r = 2$, meaning that we have $p = 5$.

To test the accuracy of the solution, we examine the expected trajectory of $x_1$ and $x_2$ along with the variance of each one. These are compared to Monte-Carlo simulations for various values of $\Delta_0$ and colored noise input. Figure 21(a) shows the optimized trajectories of each state along with the control. The trajectory in red is the expected trajectory (corresponding to $x_{i,0}$) and those in blue are the remaining states corresponding to $x_{i,k}$ with $k \geq 1$. This figure also shows the open loop optimal control solution.

(a) Optimal gPC trajectories for the Van der Pol oscillator with stochastic forcing. (b) Verification of stochastic optimal control law using Monte-Carlo simulations.

Fig. 21. Comparison of trajectories obtained from gPC expansion and Monte-Carlo simulations for 2nd order gPC expansion.

To determine the accuracy of the optimal solution and its predicted expected values and variances, we compare it to a Monte-Carlo simulation with the same control input. The Monte-Carlo solution utilizes uniformly distributed values of $\mu \in [0, 1]$ as well as a uniformly distributed colored noise, $w \in [-1, 1]$, where the covariance is given by $R(t_1, t_2) = e^{-\frac{1}{20}|t_2 - t_1|}$. Figure 21(b) shows the comparison of the optimal control solutions of $E[x_1]$ and $E[x_2]$ using gPC to the Monte-Carlo solution. The expected values predicted by gPC are accurate when compared to the Monte-Carlo solution. The predicted variances of each state are also shown. These are also accurate, although there is some error, especially in $x_2$. The inaccuracy in the variance estimates is a result of the low-order PC approximation that is used. If we increase the polynomial order, the accuracy in the variance estimate can be drastically improved.

Fig. 22. Comparison of trajectories obtained from gPC expansion and Monte-Carlo simulations for 8th order polynomials.

Figure 22 shows the expected value as well as the variance estimates for the PC system when a polynomial order of 8 is used. Here the variance estimates are much closer to the Monte Carlo results. This result demonstrates that for PC systems to capture variance accurately, higher-order polynomials are usually required. While using higher order polynomials is helpful for prediction purposes, it can make the optimal control problem much more difficult to solve. Solving these more complex problems is a topic of future research. It is important to note that the optimal solution for the problem with stochastic forcing is not identical to that of the previous (parameter uncertainty only) case. This is in large part due to the nonlinearities in the problem.

E. Summary

In this chapter, we have solved stochastic optimal control problems for linear and nonlinear systems using the concept of Polynomial Chaos. The gPC expansion allows us to analyze linear and nonlinear stochastic differential equations in a deterministic framework. For infinite horizon linear optimal control problems, feedback laws are determined for several different choices of a feedback strategy. The other major result presented in this chapter involves the framing of a stochastic trajectory generation problem as a deterministic optimal control problem. This allows us to use well known techniques to solve this problem. In addition, a method of incorporating uncertainty in the form of a stochastic process was also introduced. Combining the KL expansion with the gPC approach enabled the generation of optimal trajectories for systems with parametric uncertainty as well as correlated stochastic forcing. These problems are very difficult to solve using a sampling based approach.

CHAPTER V

RECEDING HORIZON CONTROL

A. Introduction

Generating optimal state and control trajectories for constrained linear and nonlinear systems has been a topic of interest for researchers for a long time. While there have been many tools developed to generate optimal control solutions for linear and nonlinear systems, only under certain assumptions is it possible to obtain analytical feedback solutions. This problem is made more complex with the addition of constraints on the control as well as the state trajectories. Oftentimes, the only method of solving such problems is to find a numerical solution, which is generally open loop and difficult to solve for long horizon lengths. To help alleviate some of these difficulties and adapt to changes in system behavior, the idea of receding horizon control was developed.

Receding horizon control, also known as model predictive control, has been popular in the process control industry for several years [49, 50]. It is based on the simple idea of repetitive solution of an optimal control problem and updating states with the first input of the optimal command sequence. The repetitive nature of the algorithm results in a state dependent feedback control law. The attractive aspect of this method is the ability to incorporate state and control limits as hard or soft constraints in the optimization formulation. When the model is linear, the optimization problem is quadratic if the performance index is expressed via an $L_2$-norm, or linear if expressed via an $L_1/L_\infty$-norm. Issues regarding feasibility of online computation, stability and performance are largely understood for linear systems and can be found in [51, 52, 53, 54]. For nonlinear systems, stability of RHC methods is guaranteed by using an appropriate control Lyapunov function [55, 56]. For a survey of the state-of-the-art in nonlinear receding horizon control problems the reader is directed to reference [57].

More recently, the receding horizon control strategy is being extended into the realm of stochastic systems. This extension arises out of a need to create control laws that are robust to probabilistic system uncertainty. Traditional receding horizon laws perform best when modeling error is small, as in many cases certain types of system uncertainty can lead to oscillatory behavior [58]. Furthermore, it is possible that even with slight uncertainty, the control strategy may not be robust [59]. Consequently, many approaches have been taken to improve robustness of the policy in the presence of unknown disturbances [60, 61, 62, 63, 64]. Many of these approaches involve the computation of a feedback gain to ensure robustness. The difficulty with this approach is that, even for linear systems, the problem can become more difficult to solve from a computational standpoint. This is because the feedback gain transforms the quadratic programming problem into a nonlinear programming problem.

While the area of robust receding horizon control is not necessarily a new one, approaching the problem from a stochastic standpoint has only recently aroused interest. In particular, there have been many approaches that seek to determine solutions to receding horizon problems for stochastic systems [65, 66, 67, 68, 69]. These approaches deal with systems under parametric uncertainty that appears as a white noise process. There is little work on stochastic receding horizon control when uncertainty in the system parameters is governed by a random variable, as opposed to a white process. In this chapter, we present a receding horizon methodology for linear systems with uncertainty that appears as functions of a random variable in the system matrices. This approach guarantees stability of the approximated system when modeled using the polynomial chaos technique that we have described throughout the work.

We begin by presenting the receding horizon policy and discussing the implementation of different types of constraints in terms of our gPC states. The receding horizon strategy is developed for linear discrete time systems with stochastic parametric uncertainty. Next we present a proof for the policy, and finally we present several examples that highlight the implementation issues associated with the method. In particular, we present three different policies which can be used depending on the types of measurements collected from the actual system or family of systems.

B. Stochastic Receding Horizon Control

Here we present the formulation of the receding horizon problem for linear systems in terms of the states of the gPC system. In particular, we discuss various implementation issues that may arise when dealing with constraints and prove stability of the methodology.

1. Receding Horizon Policy

In this work, we are interested in determining stable solutions to control problems for discrete-time systems of the form

$$ x(k+1, \Delta) = A(\Delta)\, x(k, \Delta) + B(\Delta)\, u(k) \qquad (V.1) $$

where $x(k) \in \mathbb{R}^n$ are the states of the system at time $k$ and $u(k) \in \mathbb{R}^m$ is the control vector at time $k$. In particular, we assume that the uncertainty of the system is a random variable governed by a given probability distribution. We will assume that full state information is available for feedback. As we have done throughout the work, we can express this linear equation in terms of gPC states. The dynamics then become

$$ X(k+1) = \mathbf{A} X(k) + \mathbf{B} U(k) \qquad (V.2) $$

Consider the following receding horizon policy, computed from initial state $x$,

$$ \mathcal{P}(x): \quad V_N^* = \min\, V_N(\{x(k)\}, \{u(k)\}) \qquad (V.3) $$

subject to

$$ x(k+1, \Delta) = A(\Delta) x(k, \Delta) + B(\Delta) u(k), \quad k = 0, \ldots, N-1 \qquad (V.4) $$
$$ x(0) = x \qquad (V.5) $$
$$ u(k) \in U, \quad k = 0, \ldots, N-1 \qquad (V.6) $$
$$ x(k) \in X, \quad k = 0, \ldots, N \qquad (V.7) $$
$$ x(N) \in X_f \subseteq X \qquad (V.8) $$

The optimization is performed over the horizon length $N$. In the above expression $U$ and $X$ are feasible sets on $u(k)$ and $x(k)$ with respect to control and state constraints. The set $X_f$ is a terminal constraint set. A key assumption is that within the terminal set there exists a feasible stabilizing control strategy. This will be described in more detail later. The control $u(k)$ may be deterministic or stochastic depending on how it is determined. If state feedback is used, the control $u(k)$ will be a stochastic variable. At the beginning of each horizon, we will assume that the state is known exactly, or

$$ E[x(0)] = x(0) \qquad (V.9) $$

The cost function $V_N$ is given by

$$ V_N = \sum_{k=0}^{N-1} E\left[ x^T(k, \Delta) Q x(k, \Delta) + u^T(k) R u(k) \right] + C_f(x(N)) \qquad (V.10) $$

where $C_f(x(N))$ is a terminal cost. Unlike traditional receding horizon policies where $u(k)$ is determined directly, we will assume a specific structure for $u(k)$. The control at each time step will be determined by

$$ u(k) = \bar{u}(k) + K_k\left(x(k) - E[x(k)]\right) \qquad (V.11) $$

The main reason for this structure is that, in general, an open loop control strategy cannot be used to effectively stabilize an entire family of systems. If the system matrix were stable for all values of $\Delta$, then even with $u(k) = 0$ the family of systems would be stable, but might incur high cost. For unstable systems this is not the case, and driving an expected value to zero does not ensure stability of the entire family of systems. Therefore, in general we require a feedback gain to ensure stability for the entire family of systems. With the control law in equation (V.11), the system dynamics become

$$ x(k+1) = A(\Delta) x(k) + B(\Delta)\left( \bar{u}(k) + K_k (x(k) - E[x(k)]) \right) \qquad (V.12) $$

When written in terms of the gPC states, these equations become

$$ X(k+1) = \mathbf{A} X(k) + \mathbf{B}\left( \bar{U}(k) + (K_k \otimes I_{p+1})\left(X(k) - \bar{X}(k)\right) \right) \qquad (V.13) $$

where $\bar{U}(k) = \bar{u}(k) \otimes [1\ 0\ \cdots\ 0]^T$ and $\bar{X}(k) = E[x(k)] \otimes [1\ 0\ \cdots\ 0]^T$. For the gPC system, the expected value of $x(k)$ is given by the first term in the gPC expansion, or $x_0(k)$. Therefore, $\bar{X}(k) = x_0(k) \otimes [1\ 0\ \cdots\ 0]^T$. Because the system is written in terms of gPC states, the cost function becomes

$$ V_N = \sum_{k=0}^{N-1} \Big[ X^T(k) \bar{Q} X(k) + \big(\bar{U}(k) + (K_k \otimes I_{p+1})(X(k) - \bar{X}(k))\big)^T \bar{R}\, \big(\bar{U}(k) + (K_k \otimes I_{p+1})(X(k) - \bar{X}(k))\big) \Big] + C_f(x(N)) \qquad (V.14) $$

In the above expressions $\bar{Q} = Q \otimes W_0$ and $\bar{R} = R \otimes W_0$. In other stochastic receding horizon implementations, the terminal cost is written as an expectation of a function of the final state, $x(N)$.

The receding horizon policy that we present requires the final state to be in a given terminal region in which there exists a feasible, stabilizing control law. Unlike other implementations [70], we implement this constraint specifically in terms of the gPC states as

$$ C_f(x(N)) = X^T(N)\, P\, X(N) \qquad (V.15) $$

where $X(N)$ is the vector of terminal gPC states and $P$ is an $n(p+1) \times n(p+1)$-dimensional matrix that is determined from the terminal control law. In this manner the expected cost of the stochastic system can be expressed in terms of deterministic gPC states. This will allow us to utilize a standard methodology to prove the stability of the control strategy.

2. State and Control Constraints

In this section we present the state and control constraints for the receding horizon policy and expand on the expressions in equations (V.6) and (V.7).

a. Expectation Constraints

In general, for stochastic systems it is difficult to handle control and state constraints. Typically, constraints are enforced on the expected values of the state and control. We consider constraints of the following form

$$ E\left[ x^T(k) H_{i,x}\, x(k) + G_{i,x}\, x(k) \right] \leq \alpha_{i,x}, \quad i = 1 \ldots N_x \qquad (V.16) $$
$$ E\left[ u^T(k) H_{i,u}\, u(k) + G_{i,u}\, u(k) \right] \leq \alpha_{i,u}, \quad i = 1 \ldots N_u \qquad (V.17) $$
$$ E\left[ \begin{bmatrix} x(k) \\ u(k) \end{bmatrix}^T H_{i,xu} \begin{bmatrix} x(k) \\ u(k) \end{bmatrix} + G_{i,xu} \begin{bmatrix} x(k) \\ u(k) \end{bmatrix} \right] \leq \alpha_{i,xu}, \quad i = 1 \ldots N_{xu} \qquad (V.18) $$

for $k = 0 \ldots N$. In these expressions, $H_{i,x} \geq 0$, $H_{i,u} \geq 0$, and $H_{i,xu} \geq 0$. These constraints are on the expected value of quadratic functions. Instead of requiring that the constraint be met for all trajectories, they imply that the constraints should be satisfied on average. This means that there may be some trajectories that violate the constraints with finite probability. Following a procedure similar to the previous chapters, we express these constraints in terms of the gPC states. The constraints become

$$ X^T(k)\, \bar{H}_{i,x}\, X(k) + G_{i,x}\, x_0(k) \leq \alpha_{i,x}, \quad i = 1 \ldots N_x \qquad (V.19) $$
$$ U^T(k)\, \bar{H}_{i,u}\, U(k) + G_{i,u}\, \bar{u}(k) \leq \alpha_{i,u}, \quad i = 1 \ldots N_u \qquad (V.20) $$
$$ \begin{bmatrix} X(k) \\ U(k) \end{bmatrix}^T \bar{H}_{i,xu} \begin{bmatrix} X(k) \\ U(k) \end{bmatrix} + G_{i,xu} \begin{bmatrix} x_0(k) \\ \bar{u}(k) \end{bmatrix} \leq \alpha_{i,xu}, \quad i = 1 \ldots N_{xu} \qquad (V.21) $$

where $\bar{H}_{i,x} = H_{i,x} \otimes W_0$, $\bar{H}_{i,u} = H_{i,u} \otimes W_0$, and $\bar{H}_{i,xu} = H_{i,xu} \otimes W_0$. Furthermore, the stochastic control vector $U(k)$ is given by $U(k) = \bar{U}(k) + (K_k \otimes I_{p+1})(X(k) - \bar{X}(k))$. When $H_{i,u} = 0$, the constraint is linear and constraints on the control only act on the expected open loop control $\bar{u}(k)$.

b. Covariance Constraints

In many practical applications, it is desirable to constrain the variances of the trajectory at each time step. One means of achieving this is to use a constraint of the form

$$ \mathrm{Tr}\left( E\left[ (x(k) - E[x(k)])(x(k) - E[x(k)])^T \right] \right) \leq \alpha_{\sigma^2} \qquad (V.22) $$

In terms of the gPC states, this becomes

$$ X^T(k)\, Q_{\sigma^2}\, X(k) \leq \alpha_{\sigma^2} \qquad (V.23) $$

where $Q_{\sigma^2} = I_n \otimes (W_0 - F F^T)$. In terms of the gPC states, this condition is equivalent to constraining the trajectories to an ellipsoid about the expected value ($W_0 - F F^T$ is a singular matrix). This type of constraint highlights the power of the gPC approach in that it allows us to frame variance and covariance constraints as convex constraints on the gPC states. If it is desired to constrain only the variance of a particular state, the constraint can be written as

$$ E\left[ (x_i(k) - E[x_i(k)])^2 \right] \leq \alpha_{i,\sigma^2} \qquad (V.24) $$

where the index $i$ denotes the specific state with which the constraint is concerned. This in turn can be written in terms of the gPC states as

$$ X^T(k)\, Q_{i,\sigma^2}\, X(k) \leq \alpha_{i,\sigma^2} \qquad (V.25) $$

where $Q_{i,\sigma^2} = \mathrm{diag}(\delta_i) \otimes (W_0 - F F^T)$. The expression $\delta_i$ denotes a vector of length $n$ consisting of all zeros with a one located at the $i$th element, and $\mathrm{diag}(\cdot)$ denotes a diagonal matrix with the argument as its diagonal. The composition of the $Q_{i,\sigma^2}$ matrix is such that $X^T(k) Q_{i,\sigma^2} X(k) = X_i^T(k)(W_0 - F F^T) X_i(k)$, where $X_i(k)$ is the vector of gPC states corresponding to the $i$th state.

c. Probability One Constraints

In many applications, it might be desirable to enforce constraints almost everywhere in $\Delta$. For example, if RHC were used to determine optimal paths for a path planning problem, we would desire that the vehicle avoid obstacles with probability one. In general for stochastic systems this is not possible, because the information available is generally only the mean and variance of the states and controls. Traditionally, the only means to obtain this information is to use a Monte-Carlo technique and ensure that the trajectories of each sample satisfy the constraints. This involves generating

a series of trajectories as part of the optimization routine and ensuring that each of them satisfies the state and control constraints. For systems modeled using gPC, if the parameter variation (and hence the associated probability distribution) is bounded, then we have a means of imposing constraints with probability one. First consider constraints imposed on a single state or control variable, such as

$$ P\left( x(k, \Delta) \leq \gamma_{x,ub} \right) = 1 \qquad (V.26) $$
$$ P\left( x(k, \Delta) \geq \gamma_{x,lb} \right) = 1 \qquad (V.27) $$
$$ P\left( u(k, \Delta) \leq \gamma_{u,ub} \right) = 1 \qquad (V.28) $$
$$ P\left( u(k, \Delta) \geq \gamma_{u,lb} \right) = 1 \qquad (V.29) $$

In general, there may be a set of measure zero in $\Delta$ where these constraints are violated. However, when the states and controls are modeled as continuous polynomials in $\Delta$, the constraints must hold for all $\Delta$. This is summarized by the following proposition.

Proposition V.1 Let $x(k, \Delta)$ and $u(k, \Delta)$ be modelled as continuous polynomials in $\Delta$. Then the constraints in (V.26)-(V.29) must hold for all $\Delta$.

Proof To prove this proposition, we will show it is true first for (V.26). We know that $P(x(k, \Delta) \leq \gamma) = 1 \Rightarrow P(x(k, \Delta) > \gamma) = 0$. Suppose there exists $\Delta^*$ such that $P(\Delta = \Delta^*) = 0$ and $x(k, \Delta^*) = \sum_{i=0}^{p} x_i(k)\phi_i(\Delta^*) > \gamma$. We know $x(k, \Delta^*) > \gamma \Rightarrow x(k, \Delta^*) = \gamma + \epsilon$ where $\epsilon > 0$. By continuity of the polynomials $\phi_i$, the state $x(k, \Delta)$ is continuous with respect to $\Delta$. Therefore, there exists a $\delta$ such that $\|\Delta - \Delta^*\| < \delta \Rightarrow |x(k, \Delta) - x(k, \Delta^*)| < \epsilon$. In other words, for all $\Delta \in B(\Delta^*, \delta)$, $x(k, \Delta) > \gamma$. Here, $B(\Delta^*, \delta)$ is an open ball of radius $\delta$ centered at $\Delta^*$. Therefore,

$$ P\left( x(k, \Delta) > \gamma \right) \geq P\left( \Delta \in B(\Delta^*, \delta) \right) = \int_{B(\Delta^*, \delta)} f(\Delta)\, d\Delta > 0 $$

Because the state is a continuous polynomial, $B(\Delta^*, \delta)$ has a non-empty interior; thus the above inequality is strict. This means

$$ 0 < P\left( x(k, \Delta) > \gamma \right) = 1 - P\left( x(k, \Delta) \leq \gamma \right) \;\Rightarrow\; P\left( x(k, \Delta) \leq \gamma \right) < 1 $$

Thus, by contradiction, the proposition is proven. The other three constraints can be proven in the same manner.

The benefit of the gPC framework is that it gives us a means of writing the state and control trajectories as known functions of the random variable $\Delta$. In other words, we can write $x(k) = \sum_{i=0}^{p} x_i(k)\phi_i(\Delta)$ and $u(k) = \bar{u}(k) + K_k\left( \sum_{i=0}^{p} x_i(k)\phi_i(\Delta) - x_0(k) \right)$. As a result of the previous proposition, the constraints in equations (V.26)-(V.29) can be written as

$$ x(k, \Delta) = \sum_{i=0}^{p} x_i(k)\phi_i(\Delta) \leq \gamma_{x,ub} \qquad (V.30) $$
$$ x(k, \Delta) = \sum_{i=0}^{p} x_i(k)\phi_i(\Delta) \geq \gamma_{x,lb} \qquad (V.31) $$
$$ u(k, \Delta) = \sum_{i=0}^{p} u_i(k)\phi_i(\Delta) \leq \gamma_{u,ub} \qquad (V.32) $$
$$ u(k, \Delta) = \sum_{i=0}^{p} u_i(k)\phi_i(\Delta) \geq \gamma_{u,lb} \qquad (V.33) $$

Because these constraints must hold for all $\Delta \in \Omega$, without loss of generality we can replace them with maximum and minimum values of $x(k)$ and $u(k)$ for upper bounds and lower bounds respectively. The constraints can then be written as

$$ \max_{\Delta \in \Omega} x(k, \Delta) = \max_{\Delta \in \Omega} \sum_{i=0}^{p} x_i(k)\phi_i(\Delta) \leq \gamma_{x,ub} \qquad (V.34) $$
$$ \min_{\Delta \in \Omega} x(k, \Delta) = \min_{\Delta \in \Omega} \sum_{i=0}^{p} x_i(k)\phi_i(\Delta) \geq \gamma_{x,lb} \qquad (V.35) $$
$$ \max_{\Delta \in \Omega} u(k, \Delta) = \max_{\Delta \in \Omega} \sum_{i=0}^{p} u_i(k)\phi_i(\Delta) \leq \gamma_{u,ub} \qquad (V.36) $$
$$ \min_{\Delta \in \Omega} u(k, \Delta) = \min_{\Delta \in \Omega} \sum_{i=0}^{p} u_i(k)\phi_i(\Delta) \geq \gamma_{u,lb} \qquad (V.37) $$

The functions $\phi_i(\Delta)$ are known, bounded functions of $\Delta$ (boundedness follows from $\Delta$ being constrained to a finite interval). In general we will assume that the polynomials are normalized such that $\phi_i(\Delta) \in [-1, 1]$. This is a valid assumption, as it is always possible to normalize $\phi_i$ so that this is the case.

Fig. 23. Legendre polynomials as a function of $\Delta$ on the interval $[-1, 1]$.

As an example, figure 23 shows the first six Legendre polynomials on the interval $[-1, 1]$. It is clear that these polynomials are bounded on the interval and that they are also bounded by $-1$ from below and $+1$ from above. Therefore the problems posed in equations (V.34)-(V.37)

have solutions. When there are a small number of $\phi_i$ terms and only one random variable $\Delta$, these can be solved analytically. However, this would require finding the zeros of the derivative of the function $x(k, \Delta)$ (a polynomial in $\Delta$), which is in general a difficult problem to solve. We therefore present several alternative methods to approximate the solution to equations (V.34)-(V.37).

Method 1: One approach is to use bounds of the form

$$ X_0(k) + \sum_{i=1}^{p} |X_i(k)| \leq \gamma_{x,ub}, \qquad X_0(k) - \sum_{i=1}^{p} |X_i(k)| \geq \gamma_{x,lb} $$
$$ U_0(k) + \sum_{i=1}^{p} |U_i(k)| \leq \gamma_{u,ub}, \qquad U_0(k) - \sum_{i=1}^{p} |U_i(k)| \geq \gamma_{u,lb} $$

This bound will always be conservative, and the presence of the $|\cdot|$ operator makes it nonlinear.

Method 2: A second method of approximating the constraints is to only enforce them at specific values of $\Delta$. For linear systems, this amounts to enforcing the constraints for values of $\Delta$ that correspond to the minimum and maximum values of the parameter. For example, consider a particular system matrix $A(\Delta)$. For this system, we have plotted trajectories of the first state corresponding to various values of $\Delta$ in figure 24. The extremal uncertainty values in the $A$ matrix occur for $\Delta = -1$ and $\Delta = 1$. The trajectories corresponding to these values of $\Delta$ are displayed as thick blue lines and are labelled in the figure. We can see that the trajectories for these values of $\Delta$ make up a good approximate bound throughout the simulation. However, there are points where this approximate bound is violated. This is demonstrated by the circled region of the figure.

Fig. 24. Trajectories generated for various values of $\Delta$ and their bounds approximated at extreme values of $\Delta$.

This method is more computationally efficient than the previous method as it results in linear constraints, but this comes at the cost of making errors in the approximation. Since this approximation makes it possible for the bounds to be violated by some systems, if this method is to be used, it is necessary to be overly conservative in determining constraints.

Method 3: A final method is to enforce the constraints on a grid of $\Delta$ values. Define a grid of values

$$ D = \{ \Delta_1, \Delta_2, \ldots, \Delta_{n_d} \} \qquad (V.38) $$

For every constraint in equations (V.34)-(V.37), we set up $n_d$ new constraints, each evaluated at $\Delta_i$, for $i = 1, \ldots, n_d$. In other words, we write the constraints as

$$ x(k, \Delta_i) = \sum_{j=0}^{p} X_j(k)\phi_j(\Delta_i) \leq \gamma_{x,ub}, \qquad x(k, \Delta_i) = \sum_{j=0}^{p} X_j(k)\phi_j(\Delta_i) \geq \gamma_{x,lb} $$
$$ u(k, \Delta_i) = \sum_{j=0}^{p} U_j(k)\phi_j(\Delta_i) \leq \gamma_{u,ub}, \qquad u(k, \Delta_i) = \sum_{j=0}^{p} U_j(k)\phi_j(\Delta_i) \geq \gamma_{u,lb} $$

for $i = 1, \ldots, n_d$. This has the advantage of being less conservative than Method 1 as well as being more accurate than Method 2. The disadvantage of this method is that it adds a large number of constraints to the system. While enforcing the constraints in this manner requires a form of sampling, the usage of gPC methods still retains an advantage over Monte-Carlo methods because enforcing these constraints does not require generation of a large family of trajectories. Instead, the gPC trajectories can be used directly to generate the sampled data.

Remark V.2 Expectation and variance constraints result in first and second order constraints on the gPC states. As implemented in this section, probability constraints retain a sampling implementation that is similar to the traditional robust control approach. This is presented for completeness and to demonstrate that the gPC approach retains the ability to enforce more traditional types of constraints while having the advantage of simple enforcement of expectation and variance constraints.

3. Stability of the RHC Policy

At this point, we are ready to show stability for the receding horizon policy $\mathcal{P}(x)$ when it is applied to the gPC system. Before proceeding, a few assumptions must be

made about the cost function and system constraints.

Assumption 1 Let the terminal cost matrix satisfy $P \geq 0$ and let there exist a control $u^*(k) = K^* x(k)$ such that for all $x(k) \in \mathbb{R}^n$

$$ X^T(k) P X(k) \geq E\left[ x^T(k) Q x(k) + (u^*(k))^T R u^*(k) \right] + X^T(k+1) P X(k+1) \qquad (V.39) $$

where $x(k+1) = A(\Delta) x(k) + B(\Delta) u^*(k)$, which when written in terms of the gPC dynamics (see eqn. (V.2)) gives $X(k+1) = \mathbf{A} X(k) + \mathbf{B} (K^* \otimes I_{p+1}) X(k)$.

This assumption guarantees that the system is stabilizable with respect to a constant gain control. In addition, it guarantees that without control constraints, the system under the control policy $u^*$ is globally asymptotically stable. We next make an assumption about the terminal constraint ($x(N) \in X_f$). In particular, we need to ensure that the control strategy satisfying Assumption 1 is feasible when the terminal constraint is met.

Assumption 2 The terminal constraint is given by

$$ X^T(N) P X(N) \leq \beta \qquad (V.40) $$

and the parameter $\beta$ is such that for any family of trajectories at time $N$, $x(N, \Delta)$ (corresponding to $X(N)$) satisfying equation (V.40), the state trajectory $x(k)$ (and corresponding $X(k)$) under the terminal control $u^*(k)$ in Assumption 1 will be feasible with respect to the state and control constraints for all $k \geq N$.

The formulation of the terminal cost as $X^T(k) P X(k)$ is more general than a cost of the form $E[x^T(k) \hat{P} x(k)]$ with $\hat{P} = \hat{P}^T \geq 0$. An expected cost formulation will result in a structure imposed on $P$ of the form $P = \hat{P} \otimes W_0$. This constraint on the structure of the terminal cost matrix is restrictive, and as a result of this imposed structure it makes the determination of a terminal controller more difficult. These two assumptions guarantee that once the system trajectories enter a given ellipsoid, there will exist a feasible linear state feedback controller to bring a family of trajectories (corresponding to $X(N)$) to the origin. In essence this constraint defines a region in which all possible trajectories must be contained.

Remark V.3 If the terminal constraint is satisfied then we are guaranteed that at time $N$ the gPC states lie in an ellipsoid. This translates to constraining the final value of the systems for all $\Delta$. This is possible because the parameter uncertainty is dependent on a random variable, not on a white noise process. When a system has uncertainty that is dependent on a white noise process (particularly if the parameter variation has the ability to be unbounded), this might induce a jump outside of the constraint space. Because the uncertain parameter is governed by a random variable, it is in effect constant for each individual system, meaning that this type of behavior is not possible.

These assumptions will enable us to prove stability for a stochastic receding horizon policy in a manner similar to the traditional methodology [71]. The result is summarized in the following theorem.

Theorem V.4 Consider the receding horizon policy $\mathcal{P}(x)$ where the dynamics are governed by that of the gPC system in equation (V.2). If Assumptions 1 and 2 hold and a feasible solution exists for $x_0$, then for the stochastic gPC dynamics,

1. The feasible set is positively invariant.
2. The resulting control policy is asymptotically stable.

Proof The stability proof will be performed for the projected (gPC) system.

1. To prove the invariance of the feasible set, we define the following sequences. Let the optimal control and state sequences be defined as $U^* = \{U^*(0), U^*(1), \ldots, U^*(N-1)\}$ and $X^* = \{X^*(0), X^*(1), \ldots, X^*(N)\}$. By assumption, these sequences are feasible ($U^* \in U$ and $X^* \in X$). To show invariance of the feasible set, consider the feasibility of the system after one iteration of the map. The projected state after one iteration is $X(1) = \mathbf{A} X(0) + \mathbf{B} U(0)$. Now define the sets $\tilde{U} = \{U^*(1), \ldots, U^*(N-1), U^*(N)\}$ and $\tilde{X} = \{X^*(1), \ldots, X^*(N), X^*(N+1)\}$, where $U^*(N) = (K^* \otimes I_{p+1}) X^*(N)$ (the terminal control policy) and $X^*(N+1) = \mathbf{A} X^*(N) + \mathbf{B} U^*(N)$. Though these trajectories may not be optimal solutions of $V_N$, the iterated sets are feasible by Assumptions 1 and 2. Therefore positive invariance of the feasible set is proven.

2. Next, we prove asymptotic stability of the control policy. We will use the cost function $V_N^*(x)$ as a Lyapunov function. Consider a trajectory starting at $x$, satisfying the assumptions of the theorem. The trajectory at the next time step is given by $x^+ = A x + B u^*$, where $u^*$ is the first control input in the sequence obtained from the receding horizon policy. Define the state and control sequences

$$ \{\tilde{x}\} = \left\{ \sum_{i=0}^{p} \tilde{X}_i(1)\phi_i, \ldots, \sum_{i=0}^{p} \tilde{X}_i(N+1)\phi_i \right\}, \qquad \{\tilde{u}\} = \left\{ \sum_{i=0}^{p} \tilde{U}_i(1)\phi_i, \ldots, \sum_{i=0}^{p} \tilde{U}_i(N)\phi_i \right\} $$
$$ \{x^*\} = \left\{ \sum_{i=0}^{p} X_i^*(0)\phi_i, \ldots, \sum_{i=0}^{p} X_i^*(N)\phi_i \right\}, \qquad \{u^*\} = \left\{ \sum_{i=0}^{p} U_i^*(0)\phi_i, \ldots, \sum_{i=0}^{p} U_i^*(N-1)\phi_i \right\} $$

Examining the difference between $V_N^*(x^+)$ and $V_N^*(x)$ gives

$$ \begin{aligned} V_N^*(x^+) - V_N^*(x) &\leq V_N(\{\tilde{x}\}, \{\tilde{u}\}) - V_N(\{x^*\}, \{u^*\}) = V_N(\tilde{X}, \tilde{U}) - V_N(X^*, U^*) \\ &= -X^{*T}(0) \bar{Q} X^*(0) - U^{*T}(0) \bar{R} U^*(0) + X^{*T}(N) \bar{Q} X^*(N) \\ &\quad + U^{*T}(N) \bar{R} U^*(N) - X^{*T}(N) P X^*(N) + X^{*T}(N+1) P X^*(N+1) \end{aligned} $$

The inequality is a result of $V_N^*(x^+)$ being optimal and $V_N(\{\tilde{x}\}, \{\tilde{u}\})$ being simply a feasible solution. The definitions of $\tilde{X}$ and $\tilde{U}$ result in cancellations for all terms except those corresponding to time $k = 0$ and $k = N$. We now utilize Assumption 1 to obtain

$$ V_N^*(x^+) - V_N^*(x) \leq -X^{*T}(0) \bar{Q} X^*(0) - U^{*T}(0) \bar{R} U^*(0) \leq -X^{*T}(0) \bar{Q} X^*(0) \leq -\lambda_{\min}(\bar{Q}) \|X^*(0)\|^2 $$

The matrix $\bar{Q}$ is positive definite, thus the right hand side is strictly less than zero for all $X^*(0) \neq 0$. The functions $V_N^*(x)$ and $V_N^*(x^+)$ can be rewritten in terms of gPC states to give the resulting asymptotic stability guarantee for the coefficients. The theorem shows that the states of the gPC system go to zero asymptotically when a receding horizon policy is employed.
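In practice, Assumption 1 can be verified numerically for a candidate pair $(P, K^*)$ once the lifted matrices $\mathbf{A}$, $\mathbf{B}$ are available, since it reduces to a positive semidefiniteness test. A minimal sketch with placeholder matrices (the numerical values are illustrative, not taken from an example in this work):

```python
# Sketch: check Assumption 1 for a candidate terminal pair (P, K*).  The test is
#   P - A_cl^T P A_cl - (Q + K^T R K) ⊗ W_0  >=  0  (positive semidefinite),
# where A_cl = A_gpc + B_gpc (K ⊗ I).  All matrices below are placeholders.
import numpy as np

p, n, m = 4, 2, 1
W0 = np.diag([1.0 / (2 * i + 1) for i in range(p + 1)])      # Gram matrix (W_0 in the text),
                                                             # Legendre/uniform assumption
A_gpc = 0.5 * np.eye(n * (p + 1))                            # placeholder lifted dynamics
B_gpc = np.kron(np.array([[0.0], [1.0]]), np.eye(p + 1))     # deterministic B = [0, 1]^T
K = np.array([[-0.2, -0.2]])                                 # candidate terminal gain K*
Q, R = np.eye(n), np.eye(m)
P = 10.0 * np.eye(n * (p + 1))                               # candidate terminal cost

A_cl = A_gpc + B_gpc @ np.kron(K, np.eye(p + 1))
M_dec = P - A_cl.T @ P @ A_cl - np.kron(Q + K.T @ R @ K, W0)
print("Assumption 1 holds:", np.linalg.eigvalsh(M_dec).min() >= -1e-9)
```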

4. RHC Policy Description

In this section we will describe three receding horizon policies that can be shown to be stable via the preceding arguments. Which policy is appropriate depends on the type of constraints employed. If linear expectation constraints are employed, then we will use a policy that utilizes open loop generation of the gPC states and the expected value of the state vector. When expectation constraints are not employed, then we can implement the receding horizon policy in a more traditional fashion. This distinction is made as a result of feasibility issues with various types of constraints.

Fig. 25. Sample trajectory paths to demonstrate possible constraint violations.

Consider the sample trajectories shown in figure 25. At time 0, the initial condition is known exactly. Therefore, the initial condition is in fact the expected value of the state at time 0. As time progresses, the mean and trajectory bounds are indicated on the figure. At time k = 2, though the expectation constraint is satisfied, there are some trajectories that violate the constraint. In the figure, we use the red curve (denoted Actual Trajectory) to denote a system trajectory for a specific value of $\Delta$. Assume that the actual system follows this red curve. If at time 2 a measurement were taken and this was set to be the new expected value, then this state would be in violation of the constraint, and thus the feasibility conditions in the proof would be violated. For this reason the gPC states must be generated by open loop simulation of the gPC system to obtain accurate estimates of the expected value for the entire family of systems.

When the constraints are enforced on the actual state values or in the form of variance constraints, then it is possible to use a more traditional receding horizon policy. In this implementation, whenever the receding horizon controls are recalculated, the gPC states will be reset and the current state will be deemed the expected value at time 0. This reflects a more realistic implementation. The main issue of importance for this implementation is to ensure that when the new state is taken to be the initial variable in the RHC policy, there still exists a feasible control law. We will now briefly describe the two policies that will be employed.

a. Policy A

The first receding horizon strategy we discuss differs from traditional RHC policies in that it utilizes open loop terms as well as a closed loop term. The actual control law is given by

$$ u(k) = \bar{u}(k) + K_k\left( x(k) - E[x(k)] \right) $$

In this control law, $\bar{u}(k)$ and $E[x(k)]$ must be either measured or computed. If the control strategy is implemented simultaneously on a family of systems, then it would be possible to estimate the distribution and thus take measurements of the gPC states. If the control strategy is implemented on a single system, this is not possible and we must instead rely on open-loop estimation of the statistics to obtain the control law. This is accomplished by an open-loop simulation of the gPC states that begins at the system initial condition at time 0. At each time step, the values of $\bar{u}(k)$, $E[x(k)]$, and $K_k$ are taken from the simulation of the gPC system. The specific value of $x(k)$ is taken from a measurement of the states of the actual system. The terms coming from the RHC framework are in some sense all open loop, as they depend upon iteration of the gPC dynamics. The gPC dynamics by their formulation include all of the modeled uncertainty, so they are in this sense robust.

Fig. 26. Receding horizon strategy with open loop statistics prediction.

The diagram in figure 26 provides insight into this control strategy. The term $x(0)$ represents the initial condition when the control strategy is applied. This is used to initialize the generation of the statistics and gains for the control strategy. At each step, using the gPC data, new values of $\bar{u}(k)$ and $K_k$ are generated from the RHC policy, and these, coupled with $E[x(k)]$, are used to compute the control for the plant. The only feedback comes in the form of $x(k)$. This strategy avoids the feasibility problem that results from the application of expectation constraints, but requires many of the quantities to be generated in an open-loop manner.
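A minimal sketch of a single Policy A control update, assuming the open-loop gPC simulation has already produced arrays of feed-forward terms, gains, and expected states (the names ubar, K_gain, Ex, and the numerical values are hypothetical):

```python
# Sketch: one Policy A control update, u(k) = ubar(k) + K_k (x(k) - E[x(k)]).
# ubar, K_gain and Ex come from an open-loop simulation of the gPC system started
# at x(0); only the plant measurement x_meas is fed back.  All values are placeholders.
import numpy as np

def policy_a_step(k, x_meas, ubar, K_gain, Ex):
    return ubar[k] + K_gain[k] @ (x_meas - Ex[k])

# Hypothetical precomputed data for three steps, n = 2 states, m = 1 input.
ubar   = [np.array([0.10]), np.array([0.05]), np.array([0.00])]
K_gain = [np.array([[-0.4, -0.2]])] * 3
Ex     = [np.array([3.0, 0.0]), np.array([2.6, -0.5]), np.array([2.1, -0.8])]

x_meas = np.array([3.1, 0.1])                       # stands in for a plant measurement
print(policy_a_step(0, x_meas, ubar, K_gain, Ex))   # control applied at k = 0
```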

b. Policy B

Here we discuss a receding horizon policy that is more traditional in the sense that it does not rely on open-loop generation of statistics. In Policy A, we assumed that the measurements were available for all $\Delta$ simultaneously. This could be the result of state measurements for the entire family of systems or the result of open-loop simulation of the entire family. In Policy B we will not take measurements for a family of systems. Instead, when the receding horizon law is recomputed, we will take a measurement of a single system and compute a new control assuming all systems start from this newly measured value. The important consideration for this policy is the impact of the reset of the state values on the feasibility of the next iteration. Namely, we need to ensure that when the expected value is reset to the current value, the control strategy for the next RHC iteration is indeed feasible.

(a) System trajectories under RHC policy at time 0. (b) System trajectories under RHC policy after update at time 2.

Fig. 27. First two steps of receding horizon feedback policy.

Figures 27(a) and 27(b) demonstrate the response of the system under the receding horizon policy at two separate time instances. Figure 27(a) demonstrates a sample response at time 0. At time 2, the optimization process is restarted from the new point (as shown in figure 27(b)). When this occurs, the current state is set as the expected value, as it is a measured quantity. We therefore start the new policy with the family of systems originating at this point. For this policy to meet the conditions of the previous proof, we must show that when the reset is performed there still exists a feasible solution. Because the uncertainty in the family of systems is governed by a random variable, it is constant with respect to time. This is crucial to the feasibility of Policy B. To verify the feasibility of the policy we must keep in mind that the uncertainty in the actual system corresponds to a single value of $\Delta$ for all time. As a result, the controls determined at the start of the algorithm will produce a feasible solution throughout the horizon. Denote the gPC states at time $k$ corresponding to the policy computed at time 0 by $X^0(k)$ and the controls by $U^0(k)$. Now, these states and controls give the states and controls at time $k$ as functions of $\Delta$, or

$$ X^0(k) \rightarrow x^0(k, \Delta), \qquad U^0(k) \rightarrow u^0(k, \Delta) $$

For each value of $\Delta$, there is therefore a feasible solution for the state and control. The difficulty occurs with the recomputation of the new control law at a time $k$. Let $\hat{\Delta}$ be the value of $\Delta$ corresponding to the actual system. If the RHC policy is recomputed at time $k$, then $x^k(k) = x^0(k, \hat{\Delta})$. The new initial state $x^k(k)$ is not dependent on $\Delta$. For $\Delta = \hat{\Delta}$ (the actual system) we can ensure feasibility for this specific value of $\Delta$ by setting $u^k(k+j, \hat{\Delta}) = u^0(k+j, \hat{\Delta})$ for all $j \geq 0$ and computing appropriate $\bar{u}^k(k+j)$ and $K^k_{k+j}$ values. However, while this ensures feasibility for the actual system, it does not ensure feasibility for all values of $\Delta$ with an initial condition of $x^0(k, \hat{\Delta})$. It is therefore not possible to guarantee feasibility of the policy in general. Generally, this situation can occur when state and control bounds are very strict, but if the problem is well posed it does not occur.

Fig. 28. Block diagram of closed loop receding horizon policy.

The feedback control policy described in this section can be represented by the diagram in figure 28. As the figure demonstrates, this policy is more traditional than Policy A. The feed-forward or open-loop terms $\bar{u}(k)$ and $E[x(k)]$ as well as the gain $K_k$ are provided from the optimally predicted policy.
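A minimal sketch of the Policy B recursion described above, where solve_rhc and plant_step are hypothetical placeholders for the finite-horizon gPC optimization $\mathcal{P}(x)$ and the true plant; at each recomputation the measured state is taken as the new expected value:

```python
# Sketch: Policy B recursion.  'solve_rhc' and 'plant_step' are hypothetical stand-ins
# for the finite-horizon gPC optimization P(x) and the actual plant; the measured state
# is reset as the expected value each time the policy is recomputed.
import numpy as np

def solve_rhc(x0):
    """Placeholder: returns feed-forward, gain and E[x] sequences over the horizon."""
    N = 5
    return ([np.zeros(1)] * N,
            [np.array([[-0.4, -0.2]])] * N,
            [np.asarray(x0, dtype=float)] * N)

def plant_step(x, u):
    """Placeholder plant x(k+1) = A(delta_hat) x + B u for one fixed delta_hat."""
    A = np.array([[1.0, 0.1], [-0.1, 0.95]])
    B = np.array([[0.0], [0.1]])
    return A @ x + (B @ u).ravel()

x = np.array([3.0, 0.0])
for k in range(20):
    ubar, K_gain, Ex = solve_rhc(x)              # reset: E[x(0)] := measured state
    u = ubar[0] + K_gain[0] @ (x - Ex[0])        # apply the first input of the sequence
    x = plant_step(x, u)
print(x)
```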


More information

arxiv: v1 [physics.data-an] 26 Oct 2012

arxiv: v1 [physics.data-an] 26 Oct 2012 Constraints on Yield Parameters in Extended Maximum Likelihood Fits Till Moritz Karbach a, Maximilian Schlu b a TU Dortmund, Germany, moritz.karbach@cern.ch b TU Dortmund, Germany, maximilian.schlu@cern.ch

More information

General Linear Model Introduction, Classes of Linear models and Estimation

General Linear Model Introduction, Classes of Linear models and Estimation Stat 740 General Linear Model Introduction, Classes of Linear models and Estimation An aim of scientific enquiry: To describe or to discover relationshis among events (variables) in the controlled (laboratory)

More information

arxiv:cond-mat/ v2 25 Sep 2002

arxiv:cond-mat/ v2 25 Sep 2002 Energy fluctuations at the multicritical oint in two-dimensional sin glasses arxiv:cond-mat/0207694 v2 25 Se 2002 1. Introduction Hidetoshi Nishimori, Cyril Falvo and Yukiyasu Ozeki Deartment of Physics,

More information

On split sample and randomized confidence intervals for binomial proportions

On split sample and randomized confidence intervals for binomial proportions On slit samle and randomized confidence intervals for binomial roortions Måns Thulin Deartment of Mathematics, Usala University arxiv:1402.6536v1 [stat.me] 26 Feb 2014 Abstract Slit samle methods have

More information

Orthogonal Polynomial Ensembles

Orthogonal Polynomial Ensembles Chater 11 Orthogonal Polynomial Ensembles 11.1 Orthogonal Polynomials of Scalar rgument Let wx) be a weight function on a real interval, or the unit circle, or generally on some curve in the comlex lane.

More information

Applications to stochastic PDE

Applications to stochastic PDE 15 Alications to stochastic PE In this final lecture we resent some alications of the theory develoed in this course to stochastic artial differential equations. We concentrate on two secific examles:

More information

Brownian Motion and Random Prime Factorization

Brownian Motion and Random Prime Factorization Brownian Motion and Random Prime Factorization Kendrick Tang June 4, 202 Contents Introduction 2 2 Brownian Motion 2 2. Develoing Brownian Motion.................... 2 2.. Measure Saces and Borel Sigma-Algebras.........

More information

Best approximation by linear combinations of characteristic functions of half-spaces

Best approximation by linear combinations of characteristic functions of half-spaces Best aroximation by linear combinations of characteristic functions of half-saces Paul C. Kainen Deartment of Mathematics Georgetown University Washington, D.C. 20057-1233, USA Věra Kůrková Institute of

More information

COMMUNICATION BETWEEN SHAREHOLDERS 1

COMMUNICATION BETWEEN SHAREHOLDERS 1 COMMUNICATION BTWN SHARHOLDRS 1 A B. O A : A D Lemma B.1. U to µ Z r 2 σ2 Z + σ2 X 2r ω 2 an additive constant that does not deend on a or θ, the agents ayoffs can be written as: 2r rθa ω2 + θ µ Y rcov

More information

Statics and dynamics: some elementary concepts

Statics and dynamics: some elementary concepts 1 Statics and dynamics: some elementary concets Dynamics is the study of the movement through time of variables such as heartbeat, temerature, secies oulation, voltage, roduction, emloyment, rices and

More information

CHAPTER-II Control Charts for Fraction Nonconforming using m-of-m Runs Rules

CHAPTER-II Control Charts for Fraction Nonconforming using m-of-m Runs Rules CHAPTER-II Control Charts for Fraction Nonconforming using m-of-m Runs Rules. Introduction: The is widely used in industry to monitor the number of fraction nonconforming units. A nonconforming unit is

More information

4. Score normalization technical details We now discuss the technical details of the score normalization method.

4. Score normalization technical details We now discuss the technical details of the score normalization method. SMT SCORING SYSTEM This document describes the scoring system for the Stanford Math Tournament We begin by giving an overview of the changes to scoring and a non-technical descrition of the scoring rules

More information

LINEAR SYSTEMS WITH POLYNOMIAL UNCERTAINTY STRUCTURE: STABILITY MARGINS AND CONTROL

LINEAR SYSTEMS WITH POLYNOMIAL UNCERTAINTY STRUCTURE: STABILITY MARGINS AND CONTROL LINEAR SYSTEMS WITH POLYNOMIAL UNCERTAINTY STRUCTURE: STABILITY MARGINS AND CONTROL Mohammad Bozorg Deatment of Mechanical Engineering University of Yazd P. O. Box 89195-741 Yazd Iran Fax: +98-351-750110

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Numerous alications in statistics, articularly in the fitting of linear models. Notation and conventions: Elements of a matrix A are denoted by a ij, where i indexes the rows and

More information

Shadow Computing: An Energy-Aware Fault Tolerant Computing Model

Shadow Computing: An Energy-Aware Fault Tolerant Computing Model Shadow Comuting: An Energy-Aware Fault Tolerant Comuting Model Bryan Mills, Taieb Znati, Rami Melhem Deartment of Comuter Science University of Pittsburgh (bmills, znati, melhem)@cs.itt.edu Index Terms

More information

Calculation of MTTF values with Markov Models for Safety Instrumented Systems

Calculation of MTTF values with Markov Models for Safety Instrumented Systems 7th WEA International Conference on APPLIE COMPUTE CIENCE, Venice, Italy, November -3, 7 3 Calculation of MTTF values with Markov Models for afety Instrumented ystems BÖCÖK J., UGLJEA E., MACHMU. University

More information

Lecture 6. 2 Recurrence/transience, harmonic functions and martingales

Lecture 6. 2 Recurrence/transience, harmonic functions and martingales Lecture 6 Classification of states We have shown that all states of an irreducible countable state Markov chain must of the same tye. This gives rise to the following classification. Definition. [Classification

More information

CHAPTER 5 STATISTICAL INFERENCE. 1.0 Hypothesis Testing. 2.0 Decision Errors. 3.0 How a Hypothesis is Tested. 4.0 Test for Goodness of Fit

CHAPTER 5 STATISTICAL INFERENCE. 1.0 Hypothesis Testing. 2.0 Decision Errors. 3.0 How a Hypothesis is Tested. 4.0 Test for Goodness of Fit Chater 5 Statistical Inference 69 CHAPTER 5 STATISTICAL INFERENCE.0 Hyothesis Testing.0 Decision Errors 3.0 How a Hyothesis is Tested 4.0 Test for Goodness of Fit 5.0 Inferences about Two Means It ain't

More information

Performance Evaluation of Generalized Polynomial Chaos

Performance Evaluation of Generalized Polynomial Chaos Performance Evaluation of Generalized Polynomial Chaos Dongbin Xiu, Didier Lucor, C.-H. Su, and George Em Karniadakis 1 Division of Applied Mathematics, Brown University, Providence, RI 02912, USA, gk@dam.brown.edu

More information

Robustness of classifiers to uniform l p and Gaussian noise Supplementary material

Robustness of classifiers to uniform l p and Gaussian noise Supplementary material Robustness of classifiers to uniform l and Gaussian noise Sulementary material Jean-Yves Franceschi Ecole Normale Suérieure de Lyon LIP UMR 5668 Omar Fawzi Ecole Normale Suérieure de Lyon LIP UMR 5668

More information

Information collection on a graph

Information collection on a graph Information collection on a grah Ilya O. Ryzhov Warren Powell February 10, 2010 Abstract We derive a knowledge gradient olicy for an otimal learning roblem on a grah, in which we use sequential measurements

More information

8 STOCHASTIC PROCESSES

8 STOCHASTIC PROCESSES 8 STOCHASTIC PROCESSES The word stochastic is derived from the Greek στoχαστικoς, meaning to aim at a target. Stochastic rocesses involve state which changes in a random way. A Markov rocess is a articular

More information

Modeling and Estimation of Full-Chip Leakage Current Considering Within-Die Correlation

Modeling and Estimation of Full-Chip Leakage Current Considering Within-Die Correlation 6.3 Modeling and Estimation of Full-Chi Leaage Current Considering Within-Die Correlation Khaled R. eloue, Navid Azizi, Farid N. Najm Deartment of ECE, University of Toronto,Toronto, Ontario, Canada {haled,nazizi,najm}@eecg.utoronto.ca

More information

Stochastic integration II: the Itô integral

Stochastic integration II: the Itô integral 13 Stochastic integration II: the Itô integral We have seen in Lecture 6 how to integrate functions Φ : (, ) L (H, E) with resect to an H-cylindrical Brownian motion W H. In this lecture we address the

More information

Positive Definite Uncertain Homogeneous Matrix Polynomials: Analysis and Application

Positive Definite Uncertain Homogeneous Matrix Polynomials: Analysis and Application BULGARIA ACADEMY OF SCIECES CYBEREICS AD IFORMAIO ECHOLOGIES Volume 9 o 3 Sofia 009 Positive Definite Uncertain Homogeneous Matrix Polynomials: Analysis and Alication Svetoslav Savov Institute of Information

More information

CMSC 425: Lecture 4 Geometry and Geometric Programming

CMSC 425: Lecture 4 Geometry and Geometric Programming CMSC 425: Lecture 4 Geometry and Geometric Programming Geometry for Game Programming and Grahics: For the next few lectures, we will discuss some of the basic elements of geometry. There are many areas

More information

c Copyright by Helen J. Elwood December, 2011

c Copyright by Helen J. Elwood December, 2011 c Coyright by Helen J. Elwood December, 2011 CONSTRUCTING COMPLEX EQUIANGULAR PARSEVAL FRAMES A Dissertation Presented to the Faculty of the Deartment of Mathematics University of Houston In Partial Fulfillment

More information

An Analysis of Reliable Classifiers through ROC Isometrics

An Analysis of Reliable Classifiers through ROC Isometrics An Analysis of Reliable Classifiers through ROC Isometrics Stijn Vanderlooy s.vanderlooy@cs.unimaas.nl Ida G. Srinkhuizen-Kuyer kuyer@cs.unimaas.nl Evgueni N. Smirnov smirnov@cs.unimaas.nl MICC-IKAT, Universiteit

More information

Sharp gradient estimate and spectral rigidity for p-laplacian

Sharp gradient estimate and spectral rigidity for p-laplacian Shar gradient estimate and sectral rigidity for -Lalacian Chiung-Jue Anna Sung and Jiaing Wang To aear in ath. Research Letters. Abstract We derive a shar gradient estimate for ositive eigenfunctions of

More information

#A64 INTEGERS 18 (2018) APPLYING MODULAR ARITHMETIC TO DIOPHANTINE EQUATIONS

#A64 INTEGERS 18 (2018) APPLYING MODULAR ARITHMETIC TO DIOPHANTINE EQUATIONS #A64 INTEGERS 18 (2018) APPLYING MODULAR ARITHMETIC TO DIOPHANTINE EQUATIONS Ramy F. Taki ElDin Physics and Engineering Mathematics Deartment, Faculty of Engineering, Ain Shams University, Cairo, Egyt

More information

Understanding and Using Availability

Understanding and Using Availability Understanding and Using Availability Jorge Luis Romeu, Ph.D. ASQ CQE/CRE, & Senior Member C. Stat Fellow, Royal Statistical Society Past Director, Region II (NY & PA) Director: Juarez Lincoln Marti Int

More information

Chapter 7 Sampling and Sampling Distributions. Introduction. Selecting a Sample. Introduction. Sampling from a Finite Population

Chapter 7 Sampling and Sampling Distributions. Introduction. Selecting a Sample. Introduction. Sampling from a Finite Population Chater 7 and s Selecting a Samle Point Estimation Introduction to s of Proerties of Point Estimators Other Methods Introduction An element is the entity on which data are collected. A oulation is a collection

More information

The Hasse Minkowski Theorem Lee Dicker University of Minnesota, REU Summer 2001

The Hasse Minkowski Theorem Lee Dicker University of Minnesota, REU Summer 2001 The Hasse Minkowski Theorem Lee Dicker University of Minnesota, REU Summer 2001 The Hasse-Minkowski Theorem rovides a characterization of the rational quadratic forms. What follows is a roof of the Hasse-Minkowski

More information

EE 508 Lecture 13. Statistical Characterization of Filter Characteristics

EE 508 Lecture 13. Statistical Characterization of Filter Characteristics EE 508 Lecture 3 Statistical Characterization of Filter Characteristics Comonents used to build filters are not recisely redictable L C Temerature Variations Manufacturing Variations Aging Model variations

More information

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley Elements of Asymtotic Theory James L. Powell Deartment of Economics University of California, Berkeley Objectives of Asymtotic Theory While exact results are available for, say, the distribution of the

More information

Positive decomposition of transfer functions with multiple poles

Positive decomposition of transfer functions with multiple poles Positive decomosition of transfer functions with multile oles Béla Nagy 1, Máté Matolcsi 2, and Márta Szilvási 1 Deartment of Analysis, Technical University of Budaest (BME), H-1111, Budaest, Egry J. u.

More information

Session 5: Review of Classical Astrodynamics

Session 5: Review of Classical Astrodynamics Session 5: Review of Classical Astrodynamics In revious lectures we described in detail the rocess to find the otimal secific imulse for a articular situation. Among the mission requirements that serve

More information

Hidden Predictors: A Factor Analysis Primer

Hidden Predictors: A Factor Analysis Primer Hidden Predictors: A Factor Analysis Primer Ryan C Sanchez Western Washington University Factor Analysis is a owerful statistical method in the modern research sychologist s toolbag When used roerly, factor

More information

Chapter 1 Fundamentals

Chapter 1 Fundamentals Chater Fundamentals. Overview of Thermodynamics Industrial Revolution brought in large scale automation of many tedious tasks which were earlier being erformed through manual or animal labour. Inventors

More information

Using the Divergence Information Criterion for the Determination of the Order of an Autoregressive Process

Using the Divergence Information Criterion for the Determination of the Order of an Autoregressive Process Using the Divergence Information Criterion for the Determination of the Order of an Autoregressive Process P. Mantalos a1, K. Mattheou b, A. Karagrigoriou b a.deartment of Statistics University of Lund

More information

Asymptotically Optimal Simulation Allocation under Dependent Sampling

Asymptotically Optimal Simulation Allocation under Dependent Sampling Asymtotically Otimal Simulation Allocation under Deendent Samling Xiaoing Xiong The Robert H. Smith School of Business, University of Maryland, College Park, MD 20742-1815, USA, xiaoingx@yahoo.com Sandee

More information

Developing A Deterioration Probabilistic Model for Rail Wear

Developing A Deterioration Probabilistic Model for Rail Wear International Journal of Traffic and Transortation Engineering 2012, 1(2): 13-18 DOI: 10.5923/j.ijtte.20120102.02 Develoing A Deterioration Probabilistic Model for Rail Wear Jabbar-Ali Zakeri *, Shahrbanoo

More information

Information collection on a graph

Information collection on a graph Information collection on a grah Ilya O. Ryzhov Warren Powell October 25, 2009 Abstract We derive a knowledge gradient olicy for an otimal learning roblem on a grah, in which we use sequential measurements

More information

NONLINEAR OPTIMIZATION WITH CONVEX CONSTRAINTS. The Goldstein-Levitin-Polyak algorithm

NONLINEAR OPTIMIZATION WITH CONVEX CONSTRAINTS. The Goldstein-Levitin-Polyak algorithm - (23) NLP - NONLINEAR OPTIMIZATION WITH CONVEX CONSTRAINTS The Goldstein-Levitin-Polya algorithm We consider an algorithm for solving the otimization roblem under convex constraints. Although the convexity

More information

Uncorrelated Multilinear Principal Component Analysis for Unsupervised Multilinear Subspace Learning

Uncorrelated Multilinear Principal Component Analysis for Unsupervised Multilinear Subspace Learning TNN-2009-P-1186.R2 1 Uncorrelated Multilinear Princial Comonent Analysis for Unsuervised Multilinear Subsace Learning Haiing Lu, K. N. Plataniotis and A. N. Venetsanooulos The Edward S. Rogers Sr. Deartment

More information

An Estimate For Heilbronn s Exponential Sum

An Estimate For Heilbronn s Exponential Sum An Estimate For Heilbronn s Exonential Sum D.R. Heath-Brown Magdalen College, Oxford For Heini Halberstam, on his retirement Let be a rime, and set e(x) = ex(2πix). Heilbronn s exonential sum is defined

More information

Deriving Indicator Direct and Cross Variograms from a Normal Scores Variogram Model (bigaus-full) David F. Machuca Mory and Clayton V.

Deriving Indicator Direct and Cross Variograms from a Normal Scores Variogram Model (bigaus-full) David F. Machuca Mory and Clayton V. Deriving ndicator Direct and Cross Variograms from a Normal Scores Variogram Model (bigaus-full) David F. Machuca Mory and Clayton V. Deutsch Centre for Comutational Geostatistics Deartment of Civil &

More information

Some Results on the Generalized Gaussian Distribution

Some Results on the Generalized Gaussian Distribution Some Results on the Generalized Gaussian Distribution Alex Dytso, Ronit Bustin, H. Vincent Poor, Daniela Tuninetti 3, Natasha Devroye 3, and Shlomo Shamai (Shitz) Abstract The aer considers information-theoretic

More information

Use of Transformations and the Repeated Statement in PROC GLM in SAS Ed Stanek

Use of Transformations and the Repeated Statement in PROC GLM in SAS Ed Stanek Use of Transformations and the Reeated Statement in PROC GLM in SAS Ed Stanek Introduction We describe how the Reeated Statement in PROC GLM in SAS transforms the data to rovide tests of hyotheses of interest.

More information

ON THE LEAST SIGNIFICANT p ADIC DIGITS OF CERTAIN LUCAS NUMBERS

ON THE LEAST SIGNIFICANT p ADIC DIGITS OF CERTAIN LUCAS NUMBERS #A13 INTEGERS 14 (014) ON THE LEAST SIGNIFICANT ADIC DIGITS OF CERTAIN LUCAS NUMBERS Tamás Lengyel Deartment of Mathematics, Occidental College, Los Angeles, California lengyel@oxy.edu Received: 6/13/13,

More information

p-adic Measures and Bernoulli Numbers

p-adic Measures and Bernoulli Numbers -Adic Measures and Bernoulli Numbers Adam Bowers Introduction The constants B k in the Taylor series exansion t e t = t k B k k! k=0 are known as the Bernoulli numbers. The first few are,, 6, 0, 30, 0,

More information

#A37 INTEGERS 15 (2015) NOTE ON A RESULT OF CHUNG ON WEIL TYPE SUMS

#A37 INTEGERS 15 (2015) NOTE ON A RESULT OF CHUNG ON WEIL TYPE SUMS #A37 INTEGERS 15 (2015) NOTE ON A RESULT OF CHUNG ON WEIL TYPE SUMS Norbert Hegyvári ELTE TTK, Eötvös University, Institute of Mathematics, Budaest, Hungary hegyvari@elte.hu François Hennecart Université

More information

Plotting the Wilson distribution

Plotting the Wilson distribution , Survey of English Usage, University College London Setember 018 1 1. Introduction We have discussed the Wilson score interval at length elsewhere (Wallis 013a, b). Given an observed Binomial roortion

More information

Estimating function analysis for a class of Tweedie regression models

Estimating function analysis for a class of Tweedie regression models Title Estimating function analysis for a class of Tweedie regression models Author Wagner Hugo Bonat Deartamento de Estatística - DEST, Laboratório de Estatística e Geoinformação - LEG, Universidade Federal

More information

1. INTRODUCTION. Fn 2 = F j F j+1 (1.1)

1. INTRODUCTION. Fn 2 = F j F j+1 (1.1) CERTAIN CLASSES OF FINITE SUMS THAT INVOLVE GENERALIZED FIBONACCI AND LUCAS NUMBERS The beautiful identity R.S. Melham Deartment of Mathematical Sciences, University of Technology, Sydney PO Box 23, Broadway,

More information

A Qualitative Event-based Approach to Multiple Fault Diagnosis in Continuous Systems using Structural Model Decomposition

A Qualitative Event-based Approach to Multiple Fault Diagnosis in Continuous Systems using Structural Model Decomposition A Qualitative Event-based Aroach to Multile Fault Diagnosis in Continuous Systems using Structural Model Decomosition Matthew J. Daigle a,,, Anibal Bregon b,, Xenofon Koutsoukos c, Gautam Biswas c, Belarmino

More information

Introduction to Probability and Statistics

Introduction to Probability and Statistics Introduction to Probability and Statistics Chater 8 Ammar M. Sarhan, asarhan@mathstat.dal.ca Deartment of Mathematics and Statistics, Dalhousie University Fall Semester 28 Chater 8 Tests of Hyotheses Based

More information

Bayesian System for Differential Cryptanalysis of DES

Bayesian System for Differential Cryptanalysis of DES Available online at www.sciencedirect.com ScienceDirect IERI Procedia 7 (014 ) 15 0 013 International Conference on Alied Comuting, Comuter Science, and Comuter Engineering Bayesian System for Differential

More information

Adaptive estimation with change detection for streaming data

Adaptive estimation with change detection for streaming data Adative estimation with change detection for streaming data A thesis resented for the degree of Doctor of Philosohy of the University of London and the Diloma of Imerial College by Dean Adam Bodenham Deartment

More information

Elliptic Curves Spring 2015 Problem Set #1 Due: 02/13/2015

Elliptic Curves Spring 2015 Problem Set #1 Due: 02/13/2015 18.783 Ellitic Curves Sring 2015 Problem Set #1 Due: 02/13/2015 Descrition These roblems are related to the material covered in Lectures 1-2. Some of them require the use of Sage, and you will need to

More information

Principal Components Analysis and Unsupervised Hebbian Learning

Principal Components Analysis and Unsupervised Hebbian Learning Princial Comonents Analysis and Unsuervised Hebbian Learning Robert Jacobs Deartment of Brain & Cognitive Sciences University of Rochester Rochester, NY 1467, USA August 8, 008 Reference: Much of the material

More information

Multi-Operation Multi-Machine Scheduling

Multi-Operation Multi-Machine Scheduling Multi-Oeration Multi-Machine Scheduling Weizhen Mao he College of William and Mary, Williamsburg VA 3185, USA Abstract. In the multi-oeration scheduling that arises in industrial engineering, each job

More information

Research of power plant parameter based on the Principal Component Analysis method

Research of power plant parameter based on the Principal Component Analysis method Research of ower lant arameter based on the Princial Comonent Analysis method Yang Yang *a, Di Zhang b a b School of Engineering, Bohai University, Liaoning Jinzhou, 3; Liaoning Datang international Jinzhou

More information

VIBRATION ANALYSIS OF BEAMS WITH MULTIPLE CONSTRAINED LAYER DAMPING PATCHES

VIBRATION ANALYSIS OF BEAMS WITH MULTIPLE CONSTRAINED LAYER DAMPING PATCHES Journal of Sound and Vibration (998) 22(5), 78 85 VIBRATION ANALYSIS OF BEAMS WITH MULTIPLE CONSTRAINED LAYER DAMPING PATCHES Acoustics and Dynamics Laboratory, Deartment of Mechanical Engineering, The

More information

MATH 6210: SOLUTIONS TO PROBLEM SET #3

MATH 6210: SOLUTIONS TO PROBLEM SET #3 MATH 6210: SOLUTIONS TO PROBLEM SET #3 Rudin, Chater 4, Problem #3. The sace L (T) is searable since the trigonometric olynomials with comlex coefficients whose real and imaginary arts are rational form

More information

Radial Basis Function Networks: Algorithms

Radial Basis Function Networks: Algorithms Radial Basis Function Networks: Algorithms Introduction to Neural Networks : Lecture 13 John A. Bullinaria, 2004 1. The RBF Maing 2. The RBF Network Architecture 3. Comutational Power of RBF Networks 4.

More information

Improved Bounds on Bell Numbers and on Moments of Sums of Random Variables

Improved Bounds on Bell Numbers and on Moments of Sums of Random Variables Imroved Bounds on Bell Numbers and on Moments of Sums of Random Variables Daniel Berend Tamir Tassa Abstract We rovide bounds for moments of sums of sequences of indeendent random variables. Concentrating

More information

The Binomial Approach for Probability of Detection

The Binomial Approach for Probability of Detection Vol. No. (Mar 5) - The e-journal of Nondestructive Testing - ISSN 45-494 www.ndt.net/?id=7498 The Binomial Aroach for of Detection Carlos Correia Gruo Endalloy C.A. - Caracas - Venezuela www.endalloy.net

More information

Characterizing the Behavior of a Probabilistic CMOS Switch Through Analytical Models and Its Verification Through Simulations

Characterizing the Behavior of a Probabilistic CMOS Switch Through Analytical Models and Its Verification Through Simulations Characterizing the Behavior of a Probabilistic CMOS Switch Through Analytical Models and Its Verification Through Simulations PINAR KORKMAZ, BILGE E. S. AKGUL and KRISHNA V. PALEM Georgia Institute of

More information

Section 0.10: Complex Numbers from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative

Section 0.10: Complex Numbers from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative Section 0.0: Comlex Numbers from Precalculus Prerequisites a.k.a. Chater 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative Commons Attribution-NonCommercial-ShareAlike.0 license.

More information