Computation of Uncertainty Distributions in Complex Dynamical Systems

Thordur Runolfsson and Chenxi Lin

(This research is supported by AFOSR. The authors are with the School of Electrical and Computer Engineering, The University of Oklahoma, Norman, OK; runolfsson@ou.edu, lcx696@ou.edu.)

Abstract: The computation of the stationary distribution of an uncertain nonlinear dynamical system is an important tool in the analysis of the long term behavior of the system. One common approach is to use a Monte Carlo type method. However, that type of method requires many simulation runs to achieve reasonable accuracy and can be computationally excessive. In this paper we formulate an alternative approach, based on the theory of Random Dynamical Systems, to solve this problem. Using the properties of the invariant measure of the Perron-Frobenius operator for the dynamical system we obtain a simple characterization of the stationary distribution. The state space is discretized to obtain a finite dimensional approximation of the infinite dimensional Perron-Frobenius operator. Furthermore, an efficient subdivision algorithm for partitioning the state space is discussed. The approach is demonstrated on a catalytic reactor system.

I. INTRODUCTION

In this paper we consider an uncertain discrete time dynamical system

x_{k+1} = f(x_k, δ),  y_k = h(x_k),   (1)

where the state x_k takes values in X ⊂ R^n, the uncertainty δ is a random variable defined on some probability space and taking values in a set Δ ⊂ R^m, and y_k is a real valued scalar output. The initial state x_0 is assumed to be a random variable independent of δ. Assume that for each fixed δ and each fixed initial state x_0 the average

η(x_0, δ) = lim_{T→∞} (1/T) Σ_{k=0}^{T-1} y_k

exists. Then η(x_0, δ) is a random variable, and the basic problem we are interested in is characterizing the distribution of η(x_0, δ). We note that if δ is a fixed parameter and system (1) has the appropriate ergodic properties, then the distribution of η(x_0, δ) is characterized by the stationary or invariant distribution of (1). Similar results are true for the random parameter case as well. Therefore, the characterization of the invariant measure is an important tool in characterizing the long term behavior of the uncertain dynamical system (1). In particular, it gives valuable information about the global dynamics and qualitative behavior of the system. Obviously, the invariant measure depends on the value of δ. Our goal is to compute the invariant measure under the assumption that it exists for any δ. One common approach is a Monte Carlo type method that samples the distributions of the initial state and the random parameter and then simulates the system until it reaches stationarity (steady state). Unfortunately, in order to achieve reasonable accuracy one needs many simulation runs, and for complex dynamical systems the computational effort may be excessive. Consequently, there is a strong need for efficient alternatives. In this paper we introduce an alternative approach that is based on the fact that the invariant measure is a fixed point of the so-called Perron-Frobenius operator for the dynamical system.

II. MATHEMATICAL SETUP - RANDOM DYNAMICAL SYSTEMS

Consider again the dynamical system (1). In order to simplify the discussion we assume that X is a compact subset of R^n and Δ is a compact subset of R^m. The random parameter δ can be very general and is specified in more detail later. We assume that f(x, δ) is in X for every (x, δ) ∈ X × Δ and that the output map satisfies h : X → R. Denote f_δ(·) = f(·, δ), i.e. f_δ(x) = f(x, δ).
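To make the setup concrete, the sketch below simulates a system of the form (1) for one sampled parameter value and initial state and estimates the time average η(x_0, δ) by a finite-horizon average. The logistic-type map, the identity output map, the sampling intervals and the burn-in length are placeholders chosen only for illustration; they are not taken from the paper.

```python
import numpy as np

def time_average(f, h, x0, delta, n_steps=100_000, burn_in=1_000):
    """Estimate eta(x0, delta) = lim (1/T) sum_k h(x_k) along a single trajectory."""
    x = x0
    for _ in range(burn_in):           # discard a transient (practical convenience only)
        x = f(x, delta)
    total = 0.0
    for _ in range(n_steps):
        total += h(x)                  # accumulate the scalar output y_k = h(x_k)
        x = f(x, delta)                # advance the state x_{k+1} = f(x_k, delta)
    return total / n_steps

# Placeholder dynamics and output map, for illustration only.
f = lambda x, d: d * x * (1.0 - x)     # a logistic-type map on [0, 1]
h = lambda x: x                        # scalar output
rng = np.random.default_rng(1)
delta = rng.uniform(3.6, 4.0)          # one sample of the random parameter
x0 = rng.uniform(0.0, 1.0)             # one sample of the random initial state
print(time_average(f, h, x0, delta))
```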
Definition 1: Let M(X) be the vector space of real valued measures on X. For a fixed value of δ the Perron-Frobenius (P-F) operator P_δ : M(X) → M(X) corresponding to the dynamical system f_δ : X → X is defined by

(P_δ μ)(A) = μ(f_δ^{-1}(A)) for all sets A ∈ B(X).

We remark that the P-F operator characterizes the evolution of the distribution of the state, i.e. if the initial state x_0 has distribution μ_0 ∈ M(X), then the distribution of x_k is P_δ^k μ_0.

Definition 2: A measure μ_δ ∈ M(X) is said to be an invariant measure if

μ_δ(f_δ^{-1}(A)) = (P_δ μ_δ)(A) = μ_δ(A) for all sets A ∈ B(X).

Thus, μ_δ ∈ M(X) is invariant if and only if it is a fixed point of the Perron-Frobenius operator.
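As a numerical sanity check of Definition 1, the sketch below evaluates both sides of (P_δ μ)(A) = μ(f_δ^{-1}(A)) for a one-dimensional toy map and an absolutely continuous measure μ: the left side by quadrature of the density of μ over the preimage of A, the right side by pushing samples of μ through the map and measuring the mass that lands in A. The map, the density and the set A are arbitrary choices made for this illustration.

```python
import numpy as np

f = lambda x: 4.0 * x * (1.0 - x)            # a fixed f_delta on [0, 1] (placeholder)
density = lambda x: 2.0 * x                  # density of mu with respect to Lebesgue measure
A = (0.25, 0.75)                             # the set A = [0.25, 0.75]

# (P mu)(A) = mu(f^{-1}(A)): integrate the density over the preimage {x : f(x) in A}.
n_grid = 200_000
x = (np.arange(n_grid) + 0.5) / n_grid       # midpoints of a uniform grid on [0, 1]
in_preimage = (f(x) >= A[0]) & (f(x) <= A[1])
lhs = np.sum(density(x) * in_preimage) / n_grid

# Mass assigned to A by the distribution of x_1 = f(x_0), with x_0 sampled from mu.
rng = np.random.default_rng(0)
x0 = np.sqrt(rng.uniform(size=1_000_000))    # inverse-CDF sampling for the density 2x
x1 = f(x0)
rhs = np.mean((x1 >= A[0]) & (x1 <= A[1]))

print(lhs, rhs)                              # the two values agree up to quadrature/sampling error
```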

Let X × Δ be the state-uncertainty product space and endow it with the product σ-algebra in the usual way, i.e. if B(X) is the Borel σ-algebra on X and F_Δ is the σ-algebra on Δ, then the product σ-algebra is B(X) ⊗ F_Δ.

Definition 3: A probability measure μ on X × Δ is called an input measure.

We are interested in the question of how the uncertainty in the "output" of the process depends on the input measure. For the observable defined by h : X → R, the "initial" uncertainty is described by a probability measure ν_0 on R (endowed with the Borel σ-algebra B(R)) defined by

ν_0(B) = μ({(x, δ) : h(x) ∈ B}), B ∈ B(R).

This measure evolves in time, becoming

ν_n(B) = μ({(x, δ) : h(f_δ^n(x)) ∈ B}).

We call ν_n an output measure. It describes the uncertainty of the system output at the n-th step of the process given the input measure μ. Frequently we are mostly interested in the long term behavior of the solution of the system. In this case the uncertainty in the system output is best studied in terms of the uncertainty in the asymptotic properties of the system. In particular, define the time-average

η(x, δ) = lim_{T→∞} (1/T) Σ_{n=0}^{T-1} h(f_δ^n(x))   (2)

and the asymptotic output measure

ν(B) = μ({(x, δ) : η(x, δ) ∈ B}).   (3)

Next we discuss methods for characterizing the asymptotic output (2) in terms of the invariant measures of the random dynamical system. In particular, we are interested in situations when the system has a physical measure μ_δ in the sense that for almost all (x, δ)

η(x, δ) = ∫_X h dμ_δ.   (4)

We begin by rewriting system (1) so that it can be studied within the framework of Discrete Random Dynamical Systems (DRDS) [1]. We are particularly interested in characterizing invariant measures for (1). For the purpose of presenting the formulation in [1] in full generality we allow the uncertainty to vary as a function of time. Let Ω be the space of all sequences ω = (ω_0, ω_1, ...) taking values in Δ, and let F_Ω be the Borel σ-algebra on Ω. Let θ be the shift transformation on Ω, i.e. (θω)_k = ω_{k+1}. Define the map δ : Ω → Δ by δ(ω) = ω_0 and the coordinate process δ_k(ω) = δ(θ^k ω) = ω_k. Let P be any θ-invariant probability measure on Ω. Then {δ_k} is a stochastic process on the probability space (Ω, F_Ω, P). Now we let f̂(x, ω) = f(x, δ(ω)) and get

x_{k+1} = f(x_k, δ_k(ω)) = f(x_k, δ(θ^k ω)) = f̂(x_k, θ^k ω).   (5)

Let Φ(k, ω) : X → X be defined by

Φ(k, ω) = f̂(·, θ^{k-1}ω) ∘ ... ∘ f̂(·, ω) for k ≥ 1,  Φ(0, ω) = id,

where id is the identity operator on X; then the solution of (5) can be represented as x_k = Φ(k, ω)x_0. The mapping Θ(k) : (ω, x) ↦ (θ^k ω, Φ(k, ω)x) is called the skew product of θ and Φ and is a measurable dynamical system on (Ω × X, F_Ω ⊗ B(X)). Let π_Ω denote the projection π_Ω(ω, x) = ω.

Definition 4: A probability measure μ on (Ω × X, F_Ω ⊗ B(X)) is said to be an invariant measure of the DRDS defined by (5) if it satisfies (i) Θ(k)μ = μ for all k, and (ii) π_Ω μ = P.

Note that any measure μ that is invariant with respect to Θ has a marginal π_Ω μ that is invariant with respect to θ. Furthermore, the second condition in the above definition is imposed since the measure P is an a-priori specified invariant measure for θ. Now let P(Ω × X) be the set of all probability measures on Ω × X and define

P_P(Ω × X) = {μ ∈ P(Ω × X) : π_Ω μ = P},
I_P = {μ ∈ P_P(Ω × X) : μ is invariant for (5)}.

Assume μ ∈ P_P(Ω × X). A function μ_·(·) : Ω × B(X) → [0, 1] is called a factorization of μ with respect to P if
1) for all A ∈ B(X), ω ↦ μ_ω(A) is F_Ω-measurable,
2) μ_ω(·) is a probability measure on (X, B(X)) for P-almost all ω,
3) for all C ∈ F_Ω ⊗ B(X) we have

μ(C) = ∫_Ω ∫_X χ_C(ω, x) μ_ω(dx) P(dω),

where χ_C is the indicator function of C. Note that it follows that for g ∈ L¹(μ) we have

∫ g dμ = ∫_Ω ∫_X g(ω, x) μ_ω(dx) P(dω).

It can be shown that under the assumptions we have made the factorization exists and is P-almost surely unique.

Return now to the uncertain system (1), i.e. assume that for all k we have δ_k = δ, where δ is a random parameter with distribution P_δ on Δ. Then the invariant measure of (1) is a (random) measure μ_δ concentrated at δ and, if μ_δ is a physical measure,

η(x, δ) = ∫_X h(x) μ_δ(dx).

We note that through the factorization of the physical measure the time average η(x, δ) is an explicit function of the random parameter δ. The dependence on the (random) initial state can be characterized in terms of the ergodic properties of the system.

Definition 5: Let λ be an a-priori fixed measure on X. System (1) is said to be λ-regular if for each fixed δ there exists a finite set of ergodic measures μ_δ^1, ..., μ_δ^{l(δ)} such that for λ-almost all x there exists an i ∈ {1, ..., l(δ)} such that the time average satisfies

η(x, δ) = ∫_X h(x) μ_δ^i(dx).

Furthermore, there exists an ergodic partition, i.e. disjoint (random) sets X_δ^1, ..., X_δ^{l(δ)} such that ∪_{i=1}^{l(δ)} X_δ^i = X and

X_δ^i = { x : η(x, δ) = ∫_X h(x) μ_δ^i(dx) }.

The following proposition, which is easily proven using the methods in [2], characterizes the calculation of the asymptotic output measure in terms of the ergodic invariant measures.

Proposition 1: Assume that (1) is λ-regular and that the family of measures μ_δ^1, ..., μ_δ^{l(δ)} has the property that each μ_δ^i(A) is continuous as a function of δ for any A ∈ B(X). Assume that the initial measure μ_0 is absolutely continuous with respect to the a-priori measure λ. Then the cumulative distribution function of η is piecewise continuous with a finite number of steps. If we define

μ_δ(A) = Σ_{i=1}^{l(δ)} μ_0(X_δ^i) μ_δ^i(A),

then μ_δ is the invariant measure for f_δ and the factorization of the invariant measure for (1).

The following result, proven in [1], provides further insight into the conditions under which the invariant measure of (5) is a physical measure in the sense of (4). The DRDS is said to be continuous if for each fixed ω the mapping f̂(·, ω) : X → X is continuous. We note that by a theorem in [1], if X is a compact metric space and the DRDS is continuous, then I_P is a non-empty convex compact subset of P_P(Ω × X). For a measure μ ∈ P(Ω × X) and a bounded measurable function g define μ(g) = ∫ g dμ.

Theorem 2: Assume that the DRDS is continuous. For μ ∈ P_P(Ω × X) define

μ_n(g) = (1/n) Σ_{k=0}^{n-1} μ(g ∘ Θ(k)).

Then as n → ∞ every weak limit point of {μ_n} belongs to I_P, and every element of I_P arises in this way.

Finally we have the following general result, taken from [1], that further characterizes the factorization of the invariant measure.

Theorem 3: Let μ ∈ P_P(Ω × X). Then (i) μ ∈ I_P if and only if for all n the conditional expectation of Φ(n, ·)μ_· given θ^{-n}F_Ω equals μ_{θ^n ·} P-almost surely; (ii) if θ is measurably invertible then θ^{-n}F_Ω = F_Ω for all n, and μ ∈ I_P if and only if for all n

Φ(n, ω)μ_ω = μ_{θ^n ω} P-almost surely.

Consider now the special case of the uncertain system (1), i.e. δ_k = δ for all k, where δ has the distribution P_δ on Δ. In this case, if μ ∈ I_P then Φ(n, ω) = f_δ^n with δ = δ(ω). Thus, since the dynamics are fixed for each value of δ, the factorization μ_δ is the invariant measure of (1) on (X, B(X)). Furthermore, if the dynamical system f_δ is ergodic for each δ, then the Θ-invariant measure is called a random Dirac measure, i.e. there exists a map ξ : Ω → X with μ_ω = δ_{ξ(ω)}.

Our primary interest is in characterizing the output distribution ν, which in turn implies that we want to characterize the factorization measure μ_δ for all δ. The most direct way of characterizing μ_δ is to use Monte Carlo simulation, which we discuss next. We then present the main contribution of the paper, an operator based approach for characterizing μ_δ that is much more efficient than Monte Carlo based methods for most systems.

III. MONTE CARLO SIMULATION

The most straightforward method for obtaining the stationary distribution (invariant measure) of the uncertain dynamical system (1) is Monte Carlo simulation. We assume that the probability distributions of the uncertain parameter δ and the initial state x_0 are known and want to find the output distribution of (1). As we indicated in the previous section, this is most easily achieved through the intermediate determination of the invariant distribution μ_δ for each fixed δ. We assume the system is λ-regular and note that in this case the distribution of x_k, characterized by the iterates P_δ^k μ_0 of the Perron-Frobenius operator, converges to μ_δ as k → ∞. Thus μ_δ is the distribution of the limiting state. We propose the following nested Monte Carlo algorithm for calculating the invariant distribution.
As before, we assume that the parameter δ has distribution P_δ and the initial state x_0 has distribution μ_0. We sample N_δ points δ^1, ..., δ^{N_δ} from the distribution P_δ, and for each sample δ^j we carry out a Monte Carlo simulation to characterize the distribution μ_{δ^j}(A). In particular, we sample N points x_0^1, ..., x_0^N from the distribution μ_0, and for each of the samples we simulate the system equation until it reaches a steady state x_T^i. Let χ_A be the indicator function of the set A ∈ B(X) and define the random variables

Y_i(A) = χ_A(x_T^i).

Then compute (1/N) Σ_{i=1}^N Y_i(A) as an approximation to μ_{δ^j}(A).

The accuracy of this method is determined by the number of samples we pick from the distribution μ_0. In order to evaluate the error in the Monte Carlo approximation scheme we compare the simulation result with the true value μ_δ(A). Consider the independent samples x_T^i, i = 1, ..., N, and assume that the distribution of the x_T^i is identical to that of the steady state, i.e. each simulation has reached the steady state. Then it follows that E[Y_i(A)] = E[χ_A(x_T^i)] = μ_δ(A), and by the central limit theorem

√N [ (1/N) Σ_{i=1}^N Y_i(A) − μ_δ(A) ] → N(0, σ²(A)) in distribution,

where σ(A) is the standard deviation of the Y_i(A). We note that the variance depends on the uncertain parameter δ. The mean square error between the approximate and true values is therefore proportional to 1/N, so in order to achieve a root mean square error of ε we need of the order of 1/ε² simulations, i.e. N = σ²(A)/ε². In many real applications, particularly for complex dynamical systems, each simulation run can be computationally expensive as well as very time-consuming.
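A minimal sketch of the nested procedure above is given below. The dynamics, the sampling distributions, the number of steps treated as "steady state", and the use of a fixed histogram as the approximation of μ_δ are illustrative assumptions rather than specifications from the paper.

```python
import numpy as np

def nested_monte_carlo(f, sample_delta, sample_x0, n_delta=100, n_traj=1_000,
                       n_steps=500, bins=50, domain=(0.0, 1.0)):
    """Nested Monte Carlo estimate of the invariant distribution mu_delta.

    For each sampled parameter delta, run n_traj trajectories of x_{k+1} = f(x_k, delta)
    for n_steps steps and histogram the final states as an estimate of mu_delta.
    Returns the sampled parameters and an (n_delta, bins) array of box probabilities.
    """
    edges = np.linspace(domain[0], domain[1], bins + 1)
    deltas = np.array([sample_delta() for _ in range(n_delta)])
    hists = np.empty((n_delta, bins))
    for j, delta in enumerate(deltas):
        x = np.array([sample_x0() for _ in range(n_traj)])
        for _ in range(n_steps):               # iterate until (approximate) stationarity
            x = f(x, delta)
        counts, _ = np.histogram(x, bins=edges)
        hists[j] = counts / n_traj             # estimated mass of mu_delta in each bin
    return deltas, hists

# Toy illustration with placeholder dynamics (not the reactor model of Section V):
rng = np.random.default_rng(0)
f = lambda x, d: d * x * (1.0 - x)
deltas, hists = nested_monte_carlo(
    f,
    sample_delta=lambda: rng.uniform(2.5, 4.0),
    sample_x0=lambda: rng.uniform(0.0, 1.0))
print(hists.shape)
```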

Although there exist methods for improving the computational efficiency of Monte Carlo (such as variance reduction schemes) as well as modified Monte Carlo methods (such as quasi Monte Carlo), these methods only result in marginal improvements or are only applicable to a limited class of dynamical systems. Thus, there is a considerable need for efficient alternatives. In the following sections we introduce a new approach to this problem using some recently developed results from Random Dynamical Systems.

IV. COMPUTATION OF INVARIANT MEASURE

In this section we discuss an approach for calculating the invariant distribution of (1) using the theory of random dynamical systems from Section II. In particular, we note that under appropriate conditions the system has an ergodic invariant measure that is characterized as a fixed point of the Perron-Frobenius operator P_δ. The invariant measure may not be unique. However, it can be shown that, under mild conditions on the dynamical system, by adding small (localized) noise the resulting system possesses a unique invariant measure [3] that converges to the true ergodic measure as the noise intensity converges to zero. The computational approach relies on the discretization of the P-F operator that we discuss next.

A. Discretization

In order to obtain a finite-dimensional (discrete) approximation of the P-F operator, we consider a finite partition of the state space X denoted {B_1, ..., B_n}, where ∪_{i=1}^n B_i = X and B_i ∩ B_j = ∅ for i ≠ j. Corresponding to each partition element B_i we associate a number p_i ∈ [0, 1] with Σ_{i=1}^n p_i = 1, i.e. p = (p_1, ..., p_n) ∈ R^n is a probability vector. Define a probability measure on X as

μ_p(A) = ∫_A Σ_{i=1}^n (p_i / m(B_i)) χ_{B_i}(x) dx,   (6)

where m is the Lebesgue measure and χ_{B_i} is the indicator function of B_i. Then the action of the Perron-Frobenius operator P_δ on μ_p, evaluated on the element B_j, is

(P_δ μ_p)(B_j) = Σ_{i=1}^n P_{ij} p_i,

where the n × n matrix P with entries

P_{ij} = m(f_δ^{-1}(B_j) ∩ B_i) / m(B_i)   (7)

is a stochastic transition matrix (to simplify notation we drop the dependency on δ). We will see below that the matrix P is a "good" approximation of P_δ, and the invariant measure for P_δ can be approximated by a measure of the form (6) whose coefficients p are invariant for P, i.e. satisfy p^T P = p^T. We note that the computation of the entries of P is much more efficient than the Monte Carlo method.
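One common way to realize (7) numerically, used here only as an illustrative assumption since the paper does not spell out the quadrature, is to estimate each row of P by mapping test points drawn uniformly in the box B_i and recording the boxes their images land in; the invariant probability vector p with p^T P = p^T can then be obtained, for example, by power iteration. The one-dimensional map and equal-size partition below are placeholders.

```python
import numpy as np

def ulam_matrix(f, delta, edges, samples_per_box=1_000, rng=None):
    """Estimate the stochastic matrix (7): P[i, j] ~ m(f_delta^{-1}(B_j) ∩ B_i) / m(B_i)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(edges) - 1
    P = np.zeros((n, n))
    for i in range(n):
        x = rng.uniform(edges[i], edges[i + 1], samples_per_box)   # test points in box B_i
        j = np.clip(np.searchsorted(edges, f(x, delta)) - 1, 0, n - 1)
        np.add.at(P[i], j, 1.0)                                    # count images landing in B_j
    return P / samples_per_box                                     # rows sum to one

def stationary_vector(P, tol=1e-12, max_iter=100_000):
    """Probability vector p with p^T P = p^T, computed by power iteration."""
    p = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        q = p @ P
        if np.max(np.abs(q - p)) < tol:
            return q / q.sum()
        p = q
    return p / p.sum()

# Placeholder dynamics and partition (64 boxes of equal size on [0, 1]):
f = lambda x, d: d * x * (1.0 - x)
edges = np.linspace(0.0, 1.0, 65)
P = ulam_matrix(f, delta=3.9, edges=edges, rng=np.random.default_rng(2))
p = stationary_vector(P)          # approximate invariant mass of each box B_i
```

The same construction carries over to higher dimensions with the intervals replaced by boxes (hyperrectangles) and the test points drawn uniformly in each box.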
The basic justification for using a finite dimensional approximation in the calculation of the invariant measure lies in the theory of finite dimensional approximations of compact operators [4], [5]. For the Perron-Frobenius operator we will define an approximate compact operator P_ε : L²(X) → L²(X) and then use finite dimensional approximations of compact operators to obtain a finite dimensional approximation of P_δ. Here L²(X) denotes the space of functions that are square integrable on X. Define a kernel

q_ε(x, y) = (1 / m(εB)) χ_{εB}(y − f_δ(x)),

where B is the ball of radius one centered at zero. We note that q_ε(x, ·) is a transition density for the transition function

Q_ε(x, A) = ∫_A q_ε(x, y) dy.

It can be shown that Q_ε(x, ·) → δ_{f_δ(x)}(·) as ε → 0 in a weak sense, i.e. Q_ε is the transition function of a Markov process that is a small random perturbation of the discrete dynamical system defined by f_δ [6]. The evolution of the distribution of this Markov process is given by the operator μ ↦ ∫_X Q_ε(x, ·) μ(dx). If the initial measure has a density g with respect to m, then this evolution can be viewed as an operator P_ε mapping L²(X) → L²(X), i.e. the density evolves according to

(P_ε g)(y) = ∫_X q_ε(x, y) g(x) dx.

Next note that ∫_X ∫_X q_ε(x, y)² dx dy < ∞, i.e. the kernel is square integrable. Therefore, the transition operator P_ε is a compact operator on L²(X) [7].

Next we describe how to construct the finite dimensional approximation (7) for the compact operator P_ε that (for small ε) gives a finite dimensional approximation of P_δ as well [4]. Let V_n be an n-dimensional subspace of L²(X), e.g. V_n = span{φ_1, ..., φ_n} for some "independent" functions φ_i ∈ L²(X). Let Q_n : L²(X) → V_n be a projection such that Q_n converges pointwise to the identity in L²(X) as n → ∞. Define an approximate operator P_{ε,n} = Q_n P_ε, where P_ε is the compact operator defined previously. Then ‖P_{ε,n} − P_ε‖_2 → 0 as n → ∞. We use the finite dimensional operator P_{ε,n} as an approximation of the Perron-Frobenius operator P_δ. Let the φ_i be defined by φ_i(x) = χ_{B_i}(x), where the sets B_i form the partition of X discussed earlier. Define the Galerkin projection Q_n : L²(X) → V_n by

⟨Q_n g, φ_j⟩ = ⟨g, φ_j⟩, j = 1, ..., n,   (8)

where ⟨·,·⟩ is the inner product on L²(X). Since φ_i(x) = χ_{B_i}(x) we have ⟨φ_i, φ_j⟩ = 0 for i ≠ j and ⟨φ_i, φ_i⟩ = m(B_i).

For u ∈ V_n = span{φ_1, ..., φ_n} we write u(x) = Σ_{i=1}^n u_i φ_i(x). It is easy to see that for any such u we have Q_n u = u. Now for any g ∈ L²(X) we have P_{ε,n} g ∈ V_n and hence Q_n P_{ε,n} g = P_{ε,n} g. Therefore, we get from (8) with P_ε g replacing g

⟨P_{ε,n} g, φ_j⟩ = ⟨Q_n P_ε g, φ_j⟩ = ⟨P_ε g, φ_j⟩, j = 1, ..., n.   (9)

Since P_{ε,n} φ_i ∈ V_n there exist constants c_{ij} such that P_{ε,n} φ_i = Σ_{j=1}^n c_{ij} φ_j, and for g = φ_i we get from (9)

c_{ij} m(B_j) = ⟨P_ε φ_i, φ_j⟩ = ∫_{B_j} ∫_{B_i} q_ε(x, y) dx dy.

Thus, when restricted to the finite dimensional subspace V_n, the action of the operator P_{ε,n} is fully represented by the n × n matrix with coefficients c_{ij}. We finally note that in the limit ε → 0 we have

∫_{B_j} ∫_{B_i} q_ε(x, y) dx dy → m(f_δ^{-1}(B_j) ∩ B_i),

which after renormalization to a stochastic matrix agrees with (7). Furthermore, it follows from the results in [3], [4] that when the invariant measure of (1) is an ergodic invariant measure in the sense of Definition 5, the approximate invariant measure converges to it as the discretization is refined and ε → 0.

B. Subdivision

The principal factor affecting the computational complexity of the calculation of the approximate invariant measure is the discretization level of the state space. If the requirement for computational accuracy is not very high we can use a standard subdivision algorithm. As before, we assume that the dynamical system is defined on a compact subset X of R^n. We start by specifying one box in R^n that contains X. For a fixed integer d we iteratively define 2^d boxes B_1, ..., B_{2^d} of equal size by a bisection algorithm. The size of the approximate stochastic transition matrix is then 2^d × 2^d. We note that if the requirement for computational accuracy is stringent, then d will be large and the resulting matrix will be huge, requiring excessive storage and computational effort.

The standard subdivision algorithm described above leads to a partition with boxes of equal size, and the subdivision is done without utilizing any information about the system dynamics or the invariant measure. However, frequently there exist subsets of the state space that carry very small invariant measure. Subdividing these subsets is not necessary for the computational determination of the invariant measure and only leads to unnecessary computational effort. By incorporating information about the invariant measure it is possible to produce more efficient partitioning schemes that result in an adaptive subdivision into boxes of unequal size. Here we introduce an adaptive subdivision algorithm that is a variant of the algorithm developed in [8]. Let {θ_k} be a sequence of positive real numbers, let B_k be the finite collection of compact subsets of R^n (the partition) at step k, and let p_k be the approximate invariant measure obtained on B_k at step k. Given an initial pair (B_0, p_0), one inductively obtains (B_{k+1}, p_{k+1}) from (B_k, p_k) in two steps:

1) Selection and subdivision: Define B_k^- = {B ∈ B_k : p_k(B) < θ_k}, the boxes of small measure, and B_k^+ = B_k \ B_k^-. Construct a new collection B̂_k^+ such that

∪_{B̂ ∈ B̂_k^+} B̂ = ∪_{B ∈ B_k^+} B,  diam(B̂_k^+) ≤ θ diam(B_k^+)

for some 0 < θ < 1.
2) Calculation of the invariant measure: Set B_{k+1} = B_k^- ∪ B̂_k^+. For the collection B_{k+1} calculate the approximate invariant measure p_{k+1}.

In the realization of the algorithm we typically subdivide the boxes in the collection B_k^+ by bisection and choose the sequence {θ_k} as the average box measure,

θ_k = (1/N_k) Σ_{B ∈ B_k} p_k(B),

where N_k is the number of boxes in B_k. We illustrate the efficiency of the adaptive subdivision algorithm on a catalytic reactor system in the next section.
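Before turning to the example, here is a one-dimensional sketch of the adaptive loop above. It assumes, consistent with the motivation given earlier, that θ_k is taken to be the average box measure and that the boxes carrying at least θ_k are the ones bisected; the transition matrices are estimated with test points as in the earlier sketch, and the map is again a placeholder rather than the reactor model of Section V.

```python
import numpy as np

def transition_matrix(f, boxes, samples_per_box=500, rng=None):
    """Row-stochastic matrix over a list of 1-D boxes (a, b), in the spirit of (7)."""
    rng = np.random.default_rng() if rng is None else rng
    lo = np.array([a for a, _ in boxes])
    hi = np.array([b for _, b in boxes])
    P = np.zeros((len(boxes), len(boxes)))
    for i, (a, b) in enumerate(boxes):
        y = f(rng.uniform(a, b, samples_per_box))                 # images of test points in box i
        for j in range(len(boxes)):
            P[i, j] = np.count_nonzero((y >= lo[j]) & (y < hi[j]))
    rows = P.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0                                       # guard against empty rows
    return P / rows

def invariant_vector(P, iters=10_000, tol=1e-12):
    """Approximate left eigenvector p with p^T P = p^T by power iteration."""
    p = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        q = p @ P
        if np.max(np.abs(q - p)) < tol:
            break
        p = q
    return q / q.sum()

def adaptive_subdivision(f, box0=(0.0, 1.0), steps=8, rng=None):
    """Adaptive scheme of Section IV-B: bisect only boxes whose mass exceeds the average."""
    boxes = [box0]
    for _ in range(steps):
        p = invariant_vector(transition_matrix(f, boxes, rng=rng))
        theta = p.mean()                                          # threshold theta_k
        new_boxes = []
        for (a, b), mass in zip(boxes, p):
            if mass >= theta:                                     # subdivide "heavy" boxes
                mid = 0.5 * (a + b)
                new_boxes += [(a, mid), (mid, b)]
            else:                                                 # keep "light" boxes coarse
                new_boxes.append((a, b))
        boxes = new_boxes
    return boxes, invariant_vector(transition_matrix(f, boxes, rng=rng))

# Placeholder dynamics for illustration:
boxes, p = adaptive_subdivision(lambda x: 3.9 * x * (1.0 - x), rng=np.random.default_rng(3))
print(len(boxes), p.sum())
```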
V. EXAMPLE

In [9], [10], a set of nonlinear partial differential equations describing heat and mass transfer in a spherically shaped catalytic pellet was reduced to a dimensionless first-order ordinary differential equation, which we represent in the form

dθ/dt = f(θ, δ),

where θ is the dimensionless temperature in the reactor and time is expressed in units of the rise time of the system. The rise time is defined as the time necessary to reach a sufficiently small neighborhood of the equilibrium position. Here δ is the dimensionless external temperature, which is the random parameter in our example, and β is a parameter determined by the reacting substances and viewed as a constant. The right-hand side f(θ, δ) is expressed in terms of two functions p_1(θ) and p_2(θ), which are polynomials in θ.

In our simulation we choose β = 55. For the initial uncertainty of the system, we assume θ has a uniform initial distribution on an interval of temperatures and δ is likewise uniformly distributed on an interval of external temperatures. The ordinary differential equation is discretized in time with a fixed step.

1) Monte Carlo method: We sample points both in the state space and in the parameter space. As discussed above, the mean square error is proportional to the reciprocal of the number of samples. The results are shown in Fig. 1; the simulation takes 536 sec.

2) P-F method: The result obtained with the standard subdivision is shown in Fig. 2 and the result obtained with the adaptive subdivision in Fig. 3. The standard subdivision requires 4096 boxes to cover the whole space, while the adaptive subdivision requires only 286 boxes and a correspondingly shorter simulation time. Moreover, the adaptive subdivision method computes the invariant measure with much higher accuracy than the standard method: the average size of the boxes capturing the invariant measure is 2.5 x 10^-6 in the adaptive subdivision versus 3.9 x 10^-5 in the standard subdivision. The adaptive subdivision algorithm shows great efficiency in this example.

Fig. 1. Invariant measure by Monte Carlo.
Fig. 2. Invariant measure by standard subdivision.
Fig. 3. Invariant measure by adaptive subdivision.

VI. CONCLUSION

In this paper we discussed methods for computing the stationary distribution of dynamical systems with random uncertain parameters. Monte Carlo simulation is the traditional approach to this problem, but it needs many simulation runs to achieve reasonable error, which is frequently computationally expensive and time consuming. We introduced an alternative approach using properties of the Perron-Frobenius operator within a unified framework based on measure theoretic concepts from the theory of Random Dynamical Systems. We discretized the state space to obtain a finite-dimensional Markov transition matrix that approximates the infinite-dimensional Perron-Frobenius operator, and introduced two subdivision algorithms for partitioning the state space. Finally, we applied both a Monte Carlo method and the P-F based method to a catalytic reactor system for comparison. The simulation results verify the superiority of the P-F method when combined with an adaptive subdivision algorithm for discretization.

REFERENCES

[1] L. Arnold, Random Dynamical Systems. Berlin: Springer, 1998.
[2] I. Mezic and T. Runolfsson, "Uncertainty analysis of complex dynamical systems," in Proceedings of the 2004 American Control Conference, Boston, MA, 2004.
[3] Y. Kifer, Random Perturbations of Dynamical Systems. New York: Birkhauser, 1988.
[4] M. Dellnitz and O. Junge, "On the approximation of complicated dynamical behavior," SIAM Journal on Numerical Analysis, vol. 36, 1999.
[5] J. Osborn, "Spectral approximation for compact operators," Mathematics of Computation, vol. 29, 1975.
[6] Y. Kifer, "General random perturbations of hyperbolic and expanding transformations," Journal d'Analyse Mathematique, vol. 47, 1986.
[7] K. Yosida, Functional Analysis. Springer, 1980.
[8] M. Dellnitz and O. Junge, "An adaptive subdivision technique for the approximation of attractors and invariant measures," Computing and Visualization in Science, vol. 1, 1998.
[9] C. McGreavy and J. M. Thornton, "Stability studies of single catalyst particles," Chemical Engineering Journal, 1970.
[10] D. S. Cohen and B. J. Matkowsky, "On inhibiting runaway in catalytic reactors," SIAM Journal on Applied Mathematics, vol. 35, 1978.
