Characterizing controllability probabilities of stochastic control systems via Zubov's method
Fabio Camilli (Sez. di Matematica per l'Ingegneria, Dip. di Matematica Pura e Applicata, Università dell'Aquila, 67040 Monteluco di Roio (AQ), Italy, camilli@ing.univaq.it), Lars Grüne (Mathematisches Institut, Fakultät für Mathematik und Physik, Universität Bayreuth, 95440 Bayreuth, Germany, lars.gruene@uni-bayreuth.de), Fabian Wirth (Hamilton Institute, NUI Maynooth, Maynooth, Co. Kildare, Ireland, fabian.wirth@may.ie; supported by Science Foundation Ireland grant 00/PI.1/C067)

Keywords: Stochastic differential equations, stochastic control system, almost sure controllability, Zubov's method, computational approach.

Abstract. We consider a controlled stochastic system with an a.s. locally exponentially controllable compact set Λ. Our aim is to characterize the set of points which can be driven by a suitable control to this set with either positive probability or with probability one. This is obtained by associating to the stochastic system a suitable control problem and the corresponding Bellman equation. We then show that this approach can be used as a basis for numerical computations of these sets.

1 Introduction

Zubov's method is a general procedure which allows to characterize the domain of attraction of an asymptotically stable fixed point of a deterministic system by the solution of a suitable partial differential equation, the Zubov equation (see e.g. [13] for an account of the various developments of this method). A typical difficulty in the application of this method, namely the existence of a regular solution to the Zubov equation, was overcome in [5] by using a suitable notion of weak solution, the Crandall-Lions viscosity solution. The use of weak solutions allows the extension of this method to perturbed and controlled systems, see [9, Chapter VII] for an overview. In [6] and [3] the Zubov method was applied to Ito stochastic differential equations, obtaining in the former a characterization of the set of points which are attracted with positive probability to an almost surely exponentially stable fixed point, and in the latter a characterization of the points which are attracted with probability one (or any fixed probability) to the fixed point. It is worth noting that the Zubov method also yields a Lyapunov function for the deterministic or the stochastic system as the unique solution of the Zubov equation. This fact can be used as a basis for numerical computations of the domain of attraction (see [4] in the deterministic case and [3] in the stochastic one).

In many applications it is interesting to consider the so-called asymptotic controllability problem, i.e. the possibility of asymptotically driving a nonlinear system to a desired target by a suitable choice of the control law. Whereas in the deterministic case there is a huge literature on this problem (see e.g. [16]), in the stochastic case it seems to be less considered, also because it requires some degeneracy of the stochastic part which makes it difficult to handle with classical stochastic techniques. In [12] this problem was studied for a deterministic system by means of Zubov's method. Here we use the same approach for a stochastic differential equation. In the stochastic case the Zubov method splits into two parts: In the first step we introduce a suitable control problem, with a fixed positive discount factor δ (chosen equal to 1 for simplicity), associated with the stochastic system. We show that a suitable level set of the corresponding value function v gives the set of initial points for which there exists a control driving the stochastic system to the locally controllable set with positive probability. The value function is characterized as the unique viscosity solution of the Zubov equation, which is the Hamilton-Jacobi-Bellman equation of the control problem. In the second step we consider the discount factor δ as a parameter and we pass to the limit for δ → 0+.
The set of points controllable to Λ with probability one is then given by the subset of R^N on which the sequence v_δ converges to 0. The sequence v_δ converges to a lower semicontinuous function v which is a supersolution of a Hamilton-Jacobi-Bellman equation related to an ergodic control problem. In this respect the Zubov equation with positive discount factor can be seen as a regularization of the limit ergodic control problem which gives the appropriate characterization. This paper is organized as follows: in Section 2 we give the setup and study the domain of possible controllability, in Section 3 we analyze the domain of almost sure controllability, and finally in Section 4 we describe an example where the previous objects are calculated numerically.
2 Domain of possible null-controllability and the Zubov equation

We fix a probability space (Ω, F, F_t, P), where {F_t}_{t≥0} is a right continuous increasing filtration, and consider the controlled stochastic differential equation

  dX(t) = b(X(t), α(t)) dt + σ(X(t), α(t)) dW(t),  X(0) = x,   (1)

where α(t), the control applied to the system, is a progressively measurable process having values in a compact set A ⊂ R^M. We denote by 𝒜 the set of the admissible control laws α(t). Solutions corresponding to an initial value x and a control law α ∈ 𝒜 will be denoted by X(t, x, α) (or X(t) if there is no ambiguity). We assume that the functions b : R^N × A → R^N, σ : R^N × A → R^{N×M} are continuous and bounded on R^N × A and Lipschitz in x uniformly with respect to a ∈ A. Moreover we assume that there exists a set Λ ⊂ R^N which is locally a.s. uniformly null-controllable, i.e. there exist r, λ > 0 and a finite random variable β such that for any x ∈ B(Λ, r) = {x ∈ R^N : d(x, Λ) ≤ r} there exists α ∈ 𝒜 for which

  d(X(t, x, α), Λ) ≤ β e^{−λt}  a.s. for any t > 0.   (2)

In this section we study the domain of possible null-controllability C, i.e. the set of points x for which it is possible to design a control law α such that the corresponding trajectory X(t, x, α) is attracted with positive probability to Λ. Hence

  C = {x ∈ R^N : there exists α ∈ 𝒜 s.t. P[lim_{t→+∞} d(X(t, x, α), Λ) = 0] > 0}.

We introduce a control problem associated to the dynamics in the following way. We consider for x ∈ R^N and α ∈ 𝒜 the cost functional

  J(x, α) = E[∫_0^{+∞} g(X(t), α(t)) e^{−∫_0^t g(X(s), α(s)) ds} dt] = E[1 − e^{−∫_0^{+∞} g(X(t), α(t)) dt}],   (3)

where g : R^N × A → R is continuous and bounded on R^N × A and Lipschitz continuous in x uniformly in a ∈ A, with

  g(x, a) = 0 for any (x, a) ∈ Λ × A  and  inf_{(R^N ∖ B(Λ, r)) × A} g(x, a) ≥ g_0 > 0.

We consider the value function

  v(x) = inf_{α ∈ 𝒜} J(x, α)

and we can prove

Theorem 2.1 C = {x ∈ R^N : v(x) < 1}.

Proof: Note that by definition v ≤ 1 and v(x) > 0 for x ∉ Λ.
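The cost functional (3) lends itself to direct Monte Carlo approximation: simulate paths of (1) with the Euler-Maruyama method, accumulate the integral of g along each path, and average. The sketch below is our own toy illustration, not the authors' computation: a one-dimensional controlled geometric Brownian motion dX = αX dt + σX dW with target Λ = {0}, constant controls, g(x) = min(|x|, 1), and the infinite horizon truncated at a finite T. All names and parameter values are assumptions made for the illustration.

```python
import numpy as np

# Toy Monte Carlo estimate of the cost functional (3),
#   J(x, alpha) = E[1 - exp(-int_0^inf g(X(t), alpha(t)) dt)],
# for dX = alpha*X dt + sigma*X dW with a constant control,
# target Lambda = {0} and g(x) = min(|x|, 1) (illustrative choices).

def estimate_J(x0, alpha, sigma, T=20.0, dt=1e-2, n_paths=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0, dtype=float)
    integral = np.zeros(n_paths)              # running int_0^t g(X(s)) ds
    for _ in range(int(T / dt)):
        integral += np.minimum(np.abs(x), 1.0) * dt
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x += alpha * x * dt + sigma * x * dw  # Euler-Maruyama step
    return float(np.mean(1.0 - np.exp(-integral)))

# A stabilizing control (alpha = -1) drives X to the target and gives J < 1,
# while a destabilizing control (alpha = 0.5) pushes X away and J is near 1.
J_good = estimate_J(0.5, alpha=-1.0, sigma=0.3)
J_bad = estimate_J(0.5, alpha=0.5, sigma=0.3)
print(J_good, J_bad)
```

The gap between the two estimates is exactly the mechanism behind Theorem 2.1: the controllable initial point has value below 1 under a suitable control.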
We claim that C is the set of points x ∈ R^N for which there exists α ∈ 𝒜 such that E[exp(−t(x, α))] > 0, where

  t(x, α) = inf{t > 0 : X(t, x, α) ∈ B(Λ, r)}.   (4)

In fact, if x ∈ C, then clearly P[{t(x, α) < +∞}] > 0 for some α ∈ 𝒜 and therefore E[exp(−t(x, α))] > 0. On the other hand, if E[exp(−t(x, α))] > 0 for a control α ∈ 𝒜, then P[{t(x, α) < +∞}] > 0. By (2), concatenating α after the hitting time with a control as in (2), we have

  P[{t(x, α) < +∞} ∩ {lim_{t→+∞} d(X(t, x, α), Λ) = 0}] = P[{lim_{t→+∞} d(X(t, x, α), Λ) = 0} | {t(x, α) < +∞}] P[{t(x, α) < +∞}] = P[{t(x, α) < +∞}],

hence x ∈ C. This shows the claim. Now if x ∉ C, then for any control α we have E[e^{−t(x,α)}] = 0. Hence

  E[e^{−∫_0^{+∞} g(X(t), α(t)) dt}] ≤ E[e^{−g_0 t(x,α)}] = 0,

and therefore v(x) = 1. If x ∈ C, by the previous claim there exists α such that P[t(x, α) < +∞] > 0. Set τ = t(x, α) and take T and K sufficiently large in such a way that

  P[B] := P[{τ ≤ T} ∩ {β ≤ K}] ≥ η > 0,

where β is given as in (2). For t > T, by (2) we have

  E[d(X(t, x, α), Λ) χ_B] = E[d(X(t − τ, X(τ, x, α), α(· + τ)), Λ) χ_B] ≤ K e^{−λ(t−T)}.

Then

  v(x) ≤ 1 − E[e^{−(∫_0^T g(X(t), α(t)) dt + ∫_T^{+∞} g(X(t), α(t)) dt)} χ_B] ≤ 1 − η e^{−(M_g T + L_g K/λ)} < 1,

where M_g and L_g are respectively an upper bound and the Lipschitz constant of g.

We have obtained a link between C and v. In the next two propositions we study the properties of these objects in order to get a PDE characterization of v.

Proposition 2.2 i) B(Λ, r) is a proper subset of C. ii) C is open, connected and weakly positively forward invariant, i.e. for every x ∈ C there exists α ∈ 𝒜 such that P[X(t, x, α) ∈ C for any t > 0] = 1.
iii) sup_{α ∈ 𝒜} E[exp(−t(x, α))] → 0 if x → x_0 ∈ ∂C.

Proof: The proof of this proposition is similar to the ones of the corresponding results in [6]. Hence we only give the details of i) and we refer the interested reader to [6] for the other two statements. Take x ∈ B(Λ, r), let α be a control satisfying (2) and fix b > 0 such that P[B] := P[{β ≤ b}] ≥ 1 − ɛ > 0. From (2), there is T > 0 such that

  P[B ∩ {d(X(t, x, α), Λ) ≤ r/2 for t ≥ T}] ≥ 1 − ɛ.   (5)

Recalling that for any x, y ∈ R^N and δ > 0

  lim_{y→x} P[sup_{t ∈ [0,T]} |X(t, x, α) − X(t, y, α)| > δ] = 0,

select δ such that for any y ∈ B(x, δ), defining A = {sup_{t ∈ [0,T]} |X(t, x, α) − X(t, y, α)| ≤ r/2}, we have P[A^c] ≤ ɛ/2 (A^c denotes the complement of A in Ω). Set C_1 = A ∩ B. From (5), if y ∈ B(x, δ) we have

  P[{d(X(t, y, α), Λ) ≤ r for t ≥ T}] ≥ P[{d(X(t, x, α), Λ) ≤ r/2 for t ≥ T} ∩ C_1] ≥ P[C_1]

and therefore, from (2), for a suitable control obtained concatenating α with a control as in (2),

  P[{lim_{t→+∞} d(X(t, y, α), Λ) = 0}] ≥ P[{d(X(t, y, α), Λ) ≤ r for t ≥ T}] ≥ P[C_1].

Moreover

  P[C_1] = 1 − P[A^c ∪ B^c] ≥ 1 − (P[A^c] + P[B^c]) ≥ 1 − 2ɛ.

It follows that P[{lim_{t→+∞} d(X(t, y, α), Λ) = 0}] is positive for any y ∈ B(x, δ), and therefore B(x, δ) ⊂ C for any x ∈ B(Λ, r).

Remark 2.3 Note that if C does not coincide with the whole R^N, the weak forward invariance property requires some degeneracy of the diffusion part of the stochastic differential equation on the boundary of C, see e.g. [1]. The typical example we have in mind is a deterministic system driven by a stochastic force, i.e. a coupled system X(t) = (X_1(t), X_2(t)) ∈ R^{N_1} × R^{N_2} = R^N of the form

  dX_1(t) = b_1(X_1(t), X_2(t), α(t)) dt,
  dX_2(t) = b_2(X_2(t), α(t)) dt + σ_2(X_2(t), α(t)) dW(t),

see e.g. [7] for examples of such systems. Note that for systems of this class the diffusion for the overall process X(t) = (X_1(t), X_2(t)) is naturally degenerate.

Set Σ(x, a) = σ(x, a)σ^T(x, a) for any a ∈ A and consider the generator of the Markov process associated to the stochastic differential equation,

  L(x, a) = (1/2) Σ_{i,j=1}^N Σ_{ij}(x, a) ∂²/∂x_i∂x_j + Σ_{i=1}^N b_i(x, a) ∂/∂x_i.   (6)

Proposition 2.4 v is continuous on R^N and a viscosity solution of Zubov's equation

  sup_{a ∈ A} {−L(x, a)v(x) − (1 − v(x)) g(x, a)} = 0   (7)

for x ∈ R^N ∖ Λ.
Proof: The only point is to prove that v is continuous on R^N. Then from a standard application of the dynamic programming principle it follows immediately that v is a viscosity solution of (7) (see e.g. [17], [8]). Note that v ≡ 1 on the complement of C. From Proposition 2.2, if x_n ∈ C and x_n → x ∈ ∂C we have

  v(x_n) ≥ 1 − sup_{α ∈ 𝒜} E[e^{−g_0 t(x_n, α)}] → 1 for n → +∞,

and therefore v is continuous on the boundary of C. To prove that v is continuous on the interior of C, it is sufficient to show that v is continuous in B(Λ, r), since outside g is strictly positive and we can use the argument in [14, Part I]. Fix x, y ∈ B(Λ, r) and ɛ > 0. Let b be such that P[B^c] ≤ ɛ/8 for B := {β ≤ b}. Take T in such a way that L_g b e^{−λT}/λ ≤ ɛ/8, where λ is as in (2), let α be a control satisfying (2) and

  v(x) ≥ E[1 − e^{−∫_0^{+∞} g(X(t, x, α), α(t)) dt}] − ɛ/8,

and let δ be sufficiently small in such a way that

  E[sup_{t ≤ T} |X(t, x, α) − X(t, y, α)|] ≤ ɛ/(4 L_g T) if |x − y| ≤ δ.

Hence, by (2),

  E[∫_T^{+∞} d(X(t, y, α), Λ) dt χ_B] = E[∫_0^{+∞} d(X(t + T, y, α(· + T)), Λ) dt χ_B] ≤ b e^{−λT}/λ,

and

  v(y) − v(x) ≤ E[e^{−∫_0^{+∞} g(X(t, x, α), α(t)) dt} − e^{−∫_0^{+∞} g(X(t, y, α), α(t)) dt}] + ɛ/8
  ≤ 2 P(B^c) + E[L_g (∫_0^T |X(t, y, α) − X(t, x, α)| dt + ∫_T^{+∞} (d(X(t, x, α), Λ) + d(X(t, y, α), Λ)) dt) χ_B] + ɛ/8 ≤ ɛ.

Exchanging the roles of x and y gives the continuity of v in B(Λ, r).

The next theorem gives the characterization of C through the Zubov equation (7).

Theorem 2.5 The value function v is the unique bounded, continuous viscosity solution of (7) which is zero on Λ.
Proof: We show that if w is a continuous viscosity subsolution of (7) such that w(x) ≤ 0 for x ∈ Λ, then w ≤ v in R^N. Using a standard comparison theorem (see e.g. [8]), the only problem is the vanishing of g on Λ. Therefore we first prove that w ≤ v in B(Λ, r) using (2); we then obtain the result in all of R^N by applying the comparison result in R^N ∖ B(Λ, r). Since w is a continuous viscosity subsolution, it satisfies for any x ∈ {δ ≤ d(x, Λ) ≤ 1/δ}

  w(x) ≤ inf_{α ∈ 𝒜} E[∫_0^{T ∧ τ_δ} g(X(t), α(t)) e^{−∫_0^t g(X(s), α(s)) ds} dt + e^{−∫_0^{T ∧ τ_δ} g(X(t), α(t)) dt} w(X(T ∧ τ_δ))]

for any T > 0, where τ_δ = τ_δ(α) is the exit time of the process X(t) = X(t, x, α) from {δ ≤ d(x, Λ) ≤ 1/δ} (see [15]). Fix ɛ > 0 and let δ > 0 be such that if d(z, Λ) ≤ δ, then |w(z)|, |v(z)| ≤ ɛ. For x ∈ B(Λ, r), by the dynamic programming principle we can find α ∈ 𝒜 satisfying (2) and such that

  v(x) ≥ E[∫_0^{T ∧ τ_δ} g(X(t), α(t)) e^{−∫_0^t g(X(s), α(s)) ds} dt + e^{−∫_0^{T ∧ τ_δ} g(X(t), α(t)) dt} v(X(T ∧ τ_δ))] − ɛ.

Therefore we have

  w(x) − v(x) ≤ E[e^{−∫_0^{τ_δ} g(X(t), α(t)) dt} (w(X(τ_δ)) − v(X(τ_δ))) χ_{{τ_δ ≤ T}}] + 2 M e^{−g_δ T} + ɛ,   (8)

where g_δ = inf{g(x, a) : d(x, Λ) ≥ δ, a ∈ A} > 0 and M = max{‖w‖_∞, ‖v‖_∞}. Set B_K = {β ≤ K} and take T and K sufficiently large in such a way that 2 M e^{−g_δ T} ≤ ɛ, 2 M P[B_K^c] ≤ ɛ and, recalling (2), P[B_K ∩ {τ_δ ≤ T}] = P[B_K]. By (8) we get

  w(x) − v(x) ≤ 2ɛ P[B_K] + 2 M P[B_K^c] + 2ɛ ≤ 4ɛ,

and by the arbitrariness of ɛ we have w ≤ v in B(Λ, r). By a similar argument we can prove that if u is a continuous viscosity supersolution of (7) such that u(x) ≥ 0 for x ∈ Λ, then u ≥ v in R^N.

Remark 2.6 The function v is a stochastic control Lyapunov function for the system in the sense that

  inf_{α ∈ 𝒜} E[v(X(t, x, α))] − v(x) < 0 for any x ∈ C ∖ Λ and any t > 0.

3 Domain of almost sure controllability

In this section we are interested in a characterization of the set of points which are asymptotically controllable to the set Λ with probability arbitrarily close to one, i.e. in the set

  D = {x ∈ R^N : sup_{α ∈ 𝒜} P[lim_{t→+∞} d(X(t, x, α), Λ) = 0] = 1}.
We require a slightly stronger stability condition, namely that besides (2) it is also verified that for any x ∈ B(Λ, r) there exists a control α ∈ 𝒜 such that

  E[d(X(t, x, α), Λ)^q] ≤ M e^{−µt} for any t > 0   (9)

for some q ∈ (0, 1] and positive constants M, µ. We consider a family of value functions depending on a positive parameter δ in the discount factor,

  v_δ(x) = inf_{α ∈ 𝒜} E[∫_0^{+∞} δ g(X(t), α(t)) e^{−∫_0^t δ g(X(s), α(s)) ds} dt] = inf_{α ∈ 𝒜} E[1 − e^{−∫_0^{+∞} δ g(X(t), α(t)) dt}].

The main result of this section is

Theorem 3.1

  D = {x ∈ R^N : lim_{δ→0} v_δ(x) = 0}.   (10)

Proof: The proof of the result is split into several steps.

Claim 1: For any x ∈ B(Λ, r), v_δ(x) ≤ Cδ for some positive constant C.

Since g is Lipschitz continuous in x uniformly in a and g(x, a) = 0 for any (x, a) ∈ Λ × A, we have

  g(x, a) ≤ min{L_g d(x, Λ), M_g} ≤ C_q d(x, Λ)^q

for any q ∈ (0, 1] and a corresponding constant C_q. Let α be a control satisfying (9). Then for any δ, by the Lipschitz continuity of g, (2) and (9), we get

  v_δ(x) ≤ E[∫_0^{+∞} δ g(X(t), α(t)) e^{−∫_0^t δ g(X(s), α(s)) ds} dt] ≤ δ E[∫_0^{+∞} g(X(t), α(t)) dt] ≤ δ C_q ∫_0^{+∞} E[d(X(t, x, α), Λ)^q] dt ≤ δ C_q M ∫_0^{+∞} e^{−µt} dt,

hence the claim.

Claim 2: For any x ∈ R^N,

  lim_{δ→0} sup_{α ∈ 𝒜} E[e^{−δ t(x, α)}] = sup_{α ∈ 𝒜} P[t(x, α) < +∞],   (11)

where t(x, α) is defined as in (4). The proof of the claim is very similar to that of Lemma 3. in [3], so we just sketch it. Let α ∈ 𝒜 be such that sup_{α' ∈ 𝒜} E[e^{−δ t(x, α')}] ≤ E[e^{−δ t(x, α)}] + ɛ and T such that exp(−δT) ≤ ɛ. Hence

  E[e^{−δ t(x, α)}] ≤ E[e^{−δ t(x, α)} χ_{{t(x,α) ≤ T}}] + e^{−δT} ≤ P[t(x, α) ≤ T] + ɛ ≤ sup_{α' ∈ 𝒜} P[t(x, α') < +∞] + ɛ,

from which we get lim sup_{δ→0} sup_{α ∈ 𝒜} E[e^{−δ t(x, α)}] ≤ sup_{α ∈ 𝒜} P[t(x, α) < +∞].
To obtain the other inequality in (11), take α ∈ 𝒜, T sufficiently large and δ small such that

  sup_{α' ∈ 𝒜} P[t(x, α') < +∞] ≤ P[t(x, α) < +∞] + ɛ ≤ P[t(x, α) ≤ T] + 2ɛ  and  e^{−δt} ≥ 1 − ɛ for t ≤ T.

Hence

  E[e^{−δ t(x, α)}] ≥ E[e^{−δ t(x, α)} χ_{{t(x,α) ≤ T}}] ≥ E[(1 − ɛ) χ_{{t(x,α) ≤ T}}] = (1 − ɛ) P[t(x, α) ≤ T] ≥ (1 − ɛ)(sup_{α' ∈ 𝒜} P[t(x, α') < +∞] − 2ɛ).

Since ɛ is arbitrary, it follows that lim inf_{δ→0} sup_{α ∈ 𝒜} E[e^{−δ t(x, α)}] ≥ sup_{α ∈ 𝒜} P[t(x, α) < +∞].

Claim 3: For any x ∈ R^N,

  lim_{δ→0} v_δ(x) = 1 − sup_{α ∈ 𝒜} P[t(x, α) < +∞].

For any α ∈ 𝒜 we have E[e^{−∫_0^{+∞} δ g(X(t), α(t)) dt}] ≤ E[e^{−δ g_0 t(x, α)}], and therefore by Claim 2,

  lim inf_{δ→0} v_δ(x) ≥ lim inf_{δ→0} inf_{α ∈ 𝒜} {1 − E[e^{−δ g_0 t(x, α)}]} ≥ 1 − sup_{α ∈ 𝒜} P[t(x, α) < +∞].

Now fix ɛ > 0, δ > 0 and take T sufficiently large in such a way that exp(−δ g_0 T) ≤ ɛ. By the dynamic programming principle, for any α ∈ 𝒜 we have

  v_δ(x) ≤ E[∫_0^{T ∧ t(x,α)} δ g(X(t), α(t)) e^{−∫_0^t δ g(X(s), α(s)) ds} dt + e^{−∫_0^{T ∧ t(x,α)} δ g(X(t), α(t)) dt} v_δ(X(T ∧ t(x, α)))].   (12)

Now using Claim 1 and recalling that v_δ ≤ 1, we estimate the second term on the right hand side of (12) by

  E[e^{−∫_0^{T ∧ t(x,α)} δ g dt} v_δ(X(T ∧ t(x, α)))] ≤ E[v_δ(X(t(x, α))) χ_{{t(x,α) ≤ T}}] + E[e^{−δ g_0 T} χ_{{t(x,α) > T}}] ≤ Cδ + ɛ

and the first one by

  E[∫_0^{T ∧ t(x,α)} δ g(X(t), α(t)) e^{−∫_0^t δ g(X(s), α(s)) ds} dt] ≤ E[∫_0^{t(x,α)} δ g(X(t), α(t)) e^{−∫_0^t δ g(X(s), α(s)) ds} dt] = E[1 − e^{−∫_0^{t(x,α)} δ g(X(t), α(t)) dt}] ≤ E[1 − e^{−δ M_g t(x,α)}].

Substituting the previous inequalities in (12) we obtain

  lim sup_{δ→0} v_δ(x) ≤ lim sup_{δ→0} inf_{α ∈ 𝒜} E[1 − e^{−δ M_g t(x, α)}] + ɛ = 1 − sup_{α ∈ 𝒜} P[t(x, α) < +∞] + ɛ,

which, recalling Claim 2 and the arbitrariness of ɛ, completes the proof of Claim 3. Equality (10) follows immediately from Claim 3 observing that

  sup_{α ∈ 𝒜} P[lim_{t→+∞} d(X(t, x, α), Λ) = 0] = sup_{α ∈ 𝒜} P[t(x, α) < +∞].

Remark 3.2 Note that by the same argument of the previous theorem we can more generally prove that if

  D_p = {x ∈ R^N : sup_{α ∈ 𝒜} P[lim_{t→+∞} d(X(t, x, α), Λ) = 0] = p}

for p ∈ [0, 1], then the following characterization holds:

  D_p = {x ∈ R^N : lim_{δ→0} v_δ(x) = 1 − p}.

Remark 3.3 As in Theorem 2.5, we can prove that for any δ > 0 the value function v_δ is the unique bounded, continuous viscosity solution of the Zubov equation

  sup_{a ∈ A} {−L(x, a)v_δ(x) − δ(1 − v_δ(x)) g(x, a)} = 0 in R^N ∖ Λ

which is zero on Λ, where L(x, a) is defined as in (6).

4 A numerical example

We illustrate our results by a numerical example.
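Before turning to the model, the mechanism behind Theorem 3.1 can be checked by direct simulation: Claim 2 states that E[e^{−δ t(x,α)}] tends, as δ → 0, to the probability that the target is reached at all. The sketch below is our own toy illustration, not part of the paper's example: for a Brownian motion with constant drift µ > 0 away from the target {0}, started at x > 0, the hitting probability is known in closed form, P = e^{−2µx}, so the limit can be compared against an exact value. All parameters are assumptions.

```python
import numpy as np

# Illustration of Claim 2: lim_{delta->0} E[exp(-delta*tau)] = P[tau < inf].
# Toy dynamics (our assumption): X(t) = x0 + mu*t + W(t), target {0},
# exact hitting probability exp(-2*mu*x0) for mu > 0.

mu, x0, T, dt, n_paths = 1.0, 0.5, 50.0, 1e-2, 4000
rng = np.random.default_rng(3)

tau = np.full(n_paths, np.inf)   # hitting times of 0 (inf = never hit by T)
x = np.full(n_paths, x0)
t = 0.0
for _ in range(int(T / dt)):
    t += dt
    x += mu * dt + rng.normal(0.0, np.sqrt(dt), n_paths)
    tau[(x <= 0.0) & np.isinf(tau)] = t

# E[e^{-delta*tau}] for decreasing delta, from the same simulated paths;
# np.exp(-inf) = 0 accounts for the paths that never reach the target.
estimates = [float(np.mean(np.exp(-d * tau))) for d in (1.0, 0.1, 0.01)]
p_exact = float(np.exp(-2 * mu * x0))
print(estimates, p_exact)
```

Pathwise, e^{−δτ} increases as δ decreases, so the three estimates are monotone and approach the hitting probability from below (up to time-discretization bias in detecting the crossing).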
The example is a stochastic version of a creditworthiness model given by

  dX_1(t) = (α(t) − λ X_1(t)) dt + σ X_1(t) dW(t),
  dX_2(t) = (H(X_1(t), X_2(t)) − f(X_1(t), α(t))) dt,

with

  H(x_1, x_2) = (α_1/(α_2 + x_2/x_1))^µ θ x_2 for x_2/x_1 ≤ ᾱ,  θ ᾱ x_2 for x_2/x_1 > ᾱ,

and

  f(x_1, α) = a x_1^ν − α − α^β x_1^{−γ}.

A detailed study of the deterministic model (i.e., with σ = 0) can be found in [11]. In this model k = x_1 is the capital stock of an economic agent, B = x_2 is the debt, j = α is the rate of investment, H is the external finance premium and f is the agent's net income. The goal of the economic agent is to steer the system to the set {x_2 ≤ 0}, i.e., to reduce the debt to 0. Extending H to negative values of x_2 via H(x_1, x_2) = θ x_2 one easily sees that for the deterministic model controllability to {x_2 ≤ 0} becomes equivalent to controllability to Λ = {x_2 ≤ −1/2}; furthermore, also for the stochastic model any solution with initial value (x_1, x_2), x_2 < 0, will converge to Λ, even in finite time, hence Λ satisfies our assumptions.
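Before solving the Zubov equation, the controllability probability sup_α P[lim d(X(t), Λ) = 0] can be roughly probed by plain Monte Carlo under a fixed control. The sketch below is our simplified illustration, not the paper's computation: the premium H is replaced everywhere by its linear extension θx_2, the adjustment-cost exponent β is taken as 1, and all parameter values and the constant control are assumptions; it counts the simulated paths whose debt reaches Λ = {x_2 ≤ −1/2}.

```python
import numpy as np

# Monte Carlo probe of the controllability probability for a simplified
# version of the credit model (illustrative parameters and fixed control).
lam, sig, theta = 0.5, 0.5, 0.1
a, nu, gamma = 0.9, 0.1, 0.3

def f(x1, alpha):
    # net income: production minus investment minus adjustment cost (beta = 1)
    return a * x1**nu - alpha - alpha * x1**(-gamma)

def controllability_prob(x1_0, x2_0, alpha=0.25, T=20.0, dt=1e-2,
                         n_paths=500, seed=1):
    """Fraction of paths whose debt x2 reaches Lambda = {x2 <= -1/2}."""
    rng = np.random.default_rng(seed)
    x1 = np.full(n_paths, x1_0)
    x2 = np.full(n_paths, x2_0)
    hit = np.zeros(n_paths, dtype=bool)
    for _ in range(int(T / dt)):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        # Euler-Maruyama for the capital, clipped away from 0 for x1**(-gamma)
        x1 = np.maximum(x1 + (alpha - lam * x1) * dt + sig * x1 * dw, 1e-8)
        x2 = x2 + (theta * x2 - f(x1, alpha)) * dt
        hit |= x2 <= -0.5
    return float(hit.mean())

p = controllability_prob(0.5, 0.2)
print(p)
```

With a fixed control this gives only a lower bound for the supremum over all controls; the Zubov-equation approach described next computes the full value function instead.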
Using the parameters λ = 0.5, α_1 = 1, α_2 = α_1 + 1, µ = 1, θ = 0.1, a = 0.9, ν = 0.1, β = 1, γ = 0.3 and the cost function g(x_1, x_2) = x_2^+, we have numerically computed the solution v_δ of the corresponding Zubov equation with δ = 10^{−4}, using the scheme described in [3] extended to the controlled case (see [10] for more detailed information). For the numerical solution we used the time step h = 0.05 and an adaptive grid (see [10]) covering the domain Ω = [0, 2] × [−1/2, 3]. For the control values we used the set A = [0, 0.5]. As boundary conditions for the outflowing trajectories we used v_δ = 1 on the upper boundary and v_δ = 0 on the lower boundary; on the left boundary no trajectories can exit. On the right boundary we did not impose boundary conditions (since it does not seem reasonable to define this as either inside or outside Λ). Instead we imposed a state constraint by projecting all trajectories exiting to the right back to Ω. We should remark that both the upper and the right boundary condition affect the attraction probabilities, an effect which has to be taken into account in the interpretation of the numerical results. Figure 1 shows the numerical results for σ = 0, 0.1 and 0.5 (top to bottom). In order to improve the visibility, we have excluded the values for x_1 = 0 from the figures (observe that for x_1 = 0 and x_2 > 0 it is impossible to control the system to Λ, hence we obtain v_δ ≡ 1 in this case).

References

[1] M. Bardi and P. Goatin, Invariant sets for controlled degenerate diffusions: a viscosity solutions approach, in Stochastic Analysis, Control, Optimization and Applications, Birkhäuser, Boston, MA, 1999.

[2] F. Camilli and M. Falcone, An approximation scheme for the optimal control of diffusion processes, RAIRO Modélisation Math. Anal. Numér. 29 (1995), 97-122.

[3] F. Camilli and L. Grüne, Characterizing attraction probabilities via the stochastic Zubov equation, Discrete Contin. Dyn. Syst. Ser. B 3 (2003).

[4] F. Camilli, L. Grüne and F. Wirth, A regularization of Zubov's equation for robust domains of attraction.
In Nonlinear Control in the Year 2000, A. Isidori et al., eds., Lecture Notes in Control and Information Sciences, Vol. 258, Springer-Verlag, London, 2001.

[5] F. Camilli, L. Grüne and F. Wirth, A generalization of Zubov's equation to perturbed systems, SIAM J. Control Optim. 40 (2001), 496-515.

[6] F. Camilli and P. Loreti, A Zubov's method for stochastic differential equations, NoDEA Nonlinear Differential Equations Appl., to appear.

Figure 1: Numerically determined controllability probabilities for σ = 0, 0.1, 0.5 (top to bottom)

[7] F. Colonius, F.J. de la Rubia and W. Kliemann, Stochastic models with multistability and extinction levels, SIAM J. Appl. Math. 56 (1996).

[8] W.H. Fleming and H.M. Soner, Controlled Markov Processes and Viscosity Solutions, Springer-Verlag, New York, 1993.

[9] L. Grüne, Asymptotic Behavior of Dynamical and Control Systems under Perturbation and Discretization, Lecture Notes in Mathematics, Vol. 1783, Springer-Verlag, Berlin, 2002.

[10] L. Grüne, Error estimation and adaptive discretization for the discrete stochastic Hamilton-Jacobi-Bellman equation, Preprint, Universität Bayreuth, 2003, submitted.

[11] L. Grüne, W. Semmler and M. Sieveking, Creditworthiness and thresholds in a credit market model with multiple equilibria, Economic Theory, to appear.

[12] L. Grüne and F. Wirth, Computing control Lyapunov functions via a Zubov type algorithm, in Proceedings
of the 39th IEEE Conference on Decision and Control, Sydney, Australia, 2000.

[13] H.K. Khalil, Nonlinear Systems, Prentice-Hall, 1996.

[14] P.L. Lions, Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations, Part I and II, Comm. Partial Differential Equations 8 (1983), 1101-1174 and 1229-1276.

[15] P.L. Lions and P.E. Souganidis, Viscosity solutions of second-order equations, stochastic control and stochastic differential games, in Stochastic Differential Systems, Stochastic Control Theory and Applications (Minneapolis, Minn., 1986), Springer, New York, 1988.

[16] E.D. Sontag, Stability and stabilization: discontinuities and the effect of disturbances, in Nonlinear Analysis, Differential Equations and Control (Montreal, QC, 1998), NATO Sci. Ser. C Math. Phys. Sci. 528, Kluwer Acad. Publ., Dordrecht, 1999.

[17] J. Yong and X.Y. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations, Springer-Verlag, New York, 1999.

[18] V.I. Zubov, Methods of A.M. Lyapunov and their Application, P. Noordhoff, Groningen, 1964.
More informationVISCOSITY SOLUTIONS OF HAMILTON-JACOBI EQUATIONS, AND ASYMPTOTICS FOR HAMILTONIAN SYSTEMS
VISCOSITY SOLUTIONS OF HAMILTON-JACOBI EQUATIONS, AND ASYMPTOTICS FOR HAMILTONIAN SYSTEMS DIOGO AGUIAR GOMES U.C. Berkeley - CA, US and I.S.T. - Lisbon, Portugal email:dgomes@math.ist.utl.pt Abstract.
More informationHomogenization and error estimates of free boundary velocities in periodic media
Homogenization and error estimates of free boundary velocities in periodic media Inwon C. Kim October 7, 2011 Abstract In this note I describe a recent result ([14]-[15]) on homogenization and error estimates
More informationLMI Methods in Optimal and Robust Control
LMI Methods in Optimal and Robust Control Matthew M. Peet Arizona State University Lecture 15: Nonlinear Systems and Lyapunov Functions Overview Our next goal is to extend LMI s and optimization to nonlinear
More informationShape from Shading with discontinuous image brightness
Shape from Shading with discontinuous image brightness Fabio Camilli Dip. di Matematica Pura e Applicata, Università dell Aquila (Italy), camilli@ing.univaq.it Emmanuel Prados UCLA Vision Lab. (USA), eprados@cs.ucla.edu
More informationRandom and Deterministic perturbations of dynamical systems. Leonid Koralov
Random and Deterministic perturbations of dynamical systems Leonid Koralov - M. Freidlin, L. Koralov Metastability for Nonlinear Random Perturbations of Dynamical Systems, Stochastic Processes and Applications
More informationVariational approach to mean field games with density constraints
1 / 18 Variational approach to mean field games with density constraints Alpár Richárd Mészáros LMO, Université Paris-Sud (based on ongoing joint works with F. Santambrogio, P. Cardaliaguet and F. J. Silva)
More informationRisk-Sensitive Control with HARA Utility
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 46, NO. 4, APRIL 2001 563 Risk-Sensitive Control with HARA Utility Andrew E. B. Lim Xun Yu Zhou, Senior Member, IEEE Abstract In this paper, a control methodology
More informationBrownian Motion. 1 Definition Brownian Motion Wiener measure... 3
Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................
More information[10] F.Camilli, O.Ley e P.Loreti, Homogenization of monotone systems of Hamilton-Jacobi equations, ESAIM Control Optim. Calc. Var. 16 (2010),
References Articoli su Riviste [1] Y. Achdou, F. Camilli, A. Cutrì e N. Tchou, Hamilton-Jacobi equations on networks, in corso di stampa su NoDEA Nonlinear Differential Equations Appl. [2] F.Camilli, O.Ley,
More informationAn Overview of Risk-Sensitive Stochastic Optimal Control
An Overview of Risk-Sensitive Stochastic Optimal Control Matt James Australian National University Workshop: Stochastic control, communications and numerics perspective UniSA, Adelaide, Sept. 2004. 1 Outline
More informationBackward Stochastic Differential Equations with Infinite Time Horizon
Backward Stochastic Differential Equations with Infinite Time Horizon Holger Metzler PhD advisor: Prof. G. Tessitore Università di Milano-Bicocca Spring School Stochastic Control in Finance Roscoff, March
More informationOptimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations
Martino Bardi Italo Capuzzo-Dolcetta Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations Birkhauser Boston Basel Berlin Contents Preface Basic notations xi xv Chapter I. Outline
More information1 Lyapunov theory of stability
M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability
More informationRunge-Kutta Method for Solving Uncertain Differential Equations
Yang and Shen Journal of Uncertainty Analysis and Applications 215) 3:17 DOI 1.1186/s4467-15-38-4 RESEARCH Runge-Kutta Method for Solving Uncertain Differential Equations Xiangfeng Yang * and Yuanyuan
More informationTopic # /31 Feedback Control Systems. Analysis of Nonlinear Systems Lyapunov Stability Analysis
Topic # 16.30/31 Feedback Control Systems Analysis of Nonlinear Systems Lyapunov Stability Analysis Fall 010 16.30/31 Lyapunov Stability Analysis Very general method to prove (or disprove) stability of
More informationNonlinear L 2 -gain analysis via a cascade
9th IEEE Conference on Decision and Control December -7, Hilton Atlanta Hotel, Atlanta, GA, USA Nonlinear L -gain analysis via a cascade Peter M Dower, Huan Zhang and Christopher M Kellett Abstract A nonlinear
More informationMean-Field Games with non-convex Hamiltonian
Mean-Field Games with non-convex Hamiltonian Martino Bardi Dipartimento di Matematica "Tullio Levi-Civita" Università di Padova Workshop "Optimal Control and MFGs" Pavia, September 19 21, 2018 Martino
More informationNonlinear Systems and Control Lecture # 12 Converse Lyapunov Functions & Time Varying Systems. p. 1/1
Nonlinear Systems and Control Lecture # 12 Converse Lyapunov Functions & Time Varying Systems p. 1/1 p. 2/1 Converse Lyapunov Theorem Exponential Stability Let x = 0 be an exponentially stable equilibrium
More informationA Narrow-Stencil Finite Difference Method for Hamilton-Jacobi-Bellman Equations
A Narrow-Stencil Finite Difference Method for Hamilton-Jacobi-Bellman Equations Xiaobing Feng Department of Mathematics The University of Tennessee, Knoxville, U.S.A. Linz, November 23, 2016 Collaborators
More informationProf. Erhan Bayraktar (University of Michigan)
September 17, 2012 KAP 414 2:15 PM- 3:15 PM Prof. (University of Michigan) Abstract: We consider a zero-sum stochastic differential controller-and-stopper game in which the state process is a controlled
More informationIntroduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems
p. 1/5 Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems p. 2/5 Time-varying Systems ẋ = f(t, x) f(t, x) is piecewise continuous in t and locally Lipschitz in x for all t
More informationAsymptotic stability and transient optimality of economic MPC without terminal conditions
Asymptotic stability and transient optimality of economic MPC without terminal conditions Lars Grüne 1, Marleen Stieler 2 Mathematical Institute, University of Bayreuth, 95440 Bayreuth, Germany Abstract
More informationFurther Results on the Bellman Equation for Optimal Control Problems with Exit Times and Nonnegative Instantaneous Costs
Further Results on the Bellman Equation for Optimal Control Problems with Exit Times and Nonnegative Instantaneous Costs Michael Malisoff Systems Science and Mathematics Dept. Washington University in
More informationNonlinear Control. Nonlinear Control Lecture # 2 Stability of Equilibrium Points
Nonlinear Control Lecture # 2 Stability of Equilibrium Points Basic Concepts ẋ = f(x) f is locally Lipschitz over a domain D R n Suppose x D is an equilibrium point; that is, f( x) = 0 Characterize and
More informationA MODEL FOR THE LONG-TERM OPTIMAL CAPACITY LEVEL OF AN INVESTMENT PROJECT
A MODEL FOR HE LONG-ERM OPIMAL CAPACIY LEVEL OF AN INVESMEN PROJEC ARNE LØKKA AND MIHAIL ZERVOS Abstract. We consider an investment project that produces a single commodity. he project s operation yields
More informationStochastic optimal control with rough paths
Stochastic optimal control with rough paths Paul Gassiat TU Berlin Stochastic processes and their statistics in Finance, Okinawa, October 28, 2013 Joint work with Joscha Diehl and Peter Friz Introduction
More informationWeak solutions to Fokker-Planck equations and Mean Field Games
Weak solutions to Fokker-Planck equations and Mean Field Games Alessio Porretta Universita di Roma Tor Vergata LJLL, Paris VI March 6, 2015 Outlines of the talk Brief description of the Mean Field Games
More informationAn Uncertain Control Model with Application to. Production-Inventory System
An Uncertain Control Model with Application to Production-Inventory System Kai Yao 1, Zhongfeng Qin 2 1 Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China 2 School of Economics
More informationLinear-Quadratic Stochastic Differential Games with General Noise Processes
Linear-Quadratic Stochastic Differential Games with General Noise Processes Tyrone E. Duncan Abstract In this paper a noncooperative, two person, zero sum, stochastic differential game is formulated and
More informationProceedings of the 2014 Winter Simulation Conference A. Tolk, S. D. Diallo, I. O. Ryzhov, L. Yilmaz, S. Buckley, and J. A. Miller, eds.
Proceedings of the 014 Winter Simulation Conference A. Tolk, S. D. Diallo, I. O. Ryzhov, L. Yilmaz, S. Buckley, and J. A. Miller, eds. Rare event simulation in the neighborhood of a rest point Paul Dupuis
More informationOn differential games with long-time-average cost
On differential games with long-time-average cost Martino Bardi Dipartimento di Matematica Pura ed Applicata Università di Padova via Belzoni 7, 35131 Padova, Italy bardi@math.unipd.it Abstract The paper
More informationA new approach for investment performance measurement. 3rd WCMF, Santa Barbara November 2009
A new approach for investment performance measurement 3rd WCMF, Santa Barbara November 2009 Thaleia Zariphopoulou University of Oxford, Oxford-Man Institute and The University of Texas at Austin 1 Performance
More informationLINEAR-CONVEX CONTROL AND DUALITY
1 LINEAR-CONVEX CONTROL AND DUALITY R.T. Rockafellar Department of Mathematics, University of Washington Seattle, WA 98195-4350, USA Email: rtr@math.washington.edu R. Goebel 3518 NE 42 St., Seattle, WA
More informationOn the Multi-Dimensional Controller and Stopper Games
On the Multi-Dimensional Controller and Stopper Games Joint work with Yu-Jui Huang University of Michigan, Ann Arbor June 7, 2012 Outline Introduction 1 Introduction 2 3 4 5 Consider a zero-sum controller-and-stopper
More informationComputation of continuous and piecewise affine Lyapunov functions by numerical approximations of the Massera construction
Computation of continuous and piecewise affine Lyapunov functions by numerical approximations of the Massera construction Jóhann Björnsson 1, Peter Giesl 2, Sigurdur Hafstein 1, Christopher M. Kellett
More informationPassivity-based Stabilization of Non-Compact Sets
Passivity-based Stabilization of Non-Compact Sets Mohamed I. El-Hawwary and Manfredi Maggiore Abstract We investigate the stabilization of closed sets for passive nonlinear systems which are contained
More informationAdaptive spline interpolation for Hamilton Jacobi Bellman equations
Adaptive spline interpolation for Hamilton Jacobi Bellman equations Florian Bauer Mathematisches Institut, Universität Bayreuth, 95440 Bayreuth, Germany Lars Grüne Mathematisches Institut, Universität
More informationEmpirical Processes: General Weak Convergence Theory
Empirical Processes: General Weak Convergence Theory Moulinath Banerjee May 18, 2010 1 Extended Weak Convergence The lack of measurability of the empirical process with respect to the sigma-field generated
More informationA Nonlinear PDE in Mathematical Finance
A Nonlinear PDE in Mathematical Finance Sergio Polidoro Dipartimento di Matematica, Università di Bologna, Piazza di Porta S. Donato 5, 40127 Bologna (Italy) polidoro@dm.unibo.it Summary. We study a non
More informationRobust control and applications in economic theory
Robust control and applications in economic theory In honour of Professor Emeritus Grigoris Kalogeropoulos on the occasion of his retirement A. N. Yannacopoulos Department of Statistics AUEB 24 May 2013
More informationExample 1. Hamilton-Jacobi equation. In particular, the eikonal equation. for some n( x) > 0 in Ω. Here 1 / 2
Oct. 1 0 Viscosity S olutions In this lecture we take a glimpse of the viscosity solution theory for linear and nonlinear PDEs. From our experience we know that even for linear equations, the existence
More informationControlled Diffusions and Hamilton-Jacobi Bellman Equations
Controlled Diffusions and Hamilton-Jacobi Bellman Equations Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2014 Emo Todorov (UW) AMATH/CSE 579, Winter
More informationStationary distribution and pathwise estimation of n-species mutualism system with stochastic perturbation
Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 6), 936 93 Research Article Stationary distribution and pathwise estimation of n-species mutualism system with stochastic perturbation Weiwei
More informationProduction and Relative Consumption
Mean Field Growth Modeling with Cobb-Douglas Production and Relative Consumption School of Mathematics and Statistics Carleton University Ottawa, Canada Mean Field Games and Related Topics III Henri Poincaré
More informationKey words. Nonlinear systems, Local input-to-state stability, Local ISS Lyapunov function, Robust Lyapunov function, Linear programming
COMPUTATION OF LOCAL ISS LYAPUNOV FUNCTIONS WITH LOW GAINS VIA LINEAR PROGRAMMING H. LI, R. BAIER, L. GRÜNE, S. HAFSTEIN, AND F. WIRTH March 1, 15 Huijuan Li, Robert Baier, Lars Grüne Lehrstuhl für Angewandte
More informationThe Continuity of SDE With Respect to Initial Value in the Total Variation
Ξ44fflΞ5» ο ffi fi $ Vol.44, No.5 2015 9" ADVANCES IN MATHEMATICS(CHINA) Sep., 2015 doi: 10.11845/sxjz.2014024b The Continuity of SDE With Respect to Initial Value in the Total Variation PENG Xuhui (1.
More informationSome Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2)
Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2) Statistical analysis is based on probability theory. The fundamental object in probability theory is a probability space,
More informationLecture 5: Importance sampling and Hamilton-Jacobi equations
Lecture 5: Importance sampling and Hamilton-Jacobi equations Henrik Hult Department of Mathematics KTH Royal Institute of Technology Sweden Summer School on Monte Carlo Methods and Rare Events Brown University,
More informationExam February h
Master 2 Mathématiques et Applications PUF Ho Chi Minh Ville 2009/10 Viscosity solutions, HJ Equations and Control O.Ley (INSA de Rennes) Exam February 2010 3h Written-by-hands documents are allowed. Printed
More informationSome lecture notes for Math 6050E: PDEs, Fall 2016
Some lecture notes for Math 65E: PDEs, Fall 216 Tianling Jin December 1, 216 1 Variational methods We discuss an example of the use of variational methods in obtaining existence of solutions. Theorem 1.1.
More informationMeasure and Integration: Solutions of CW2
Measure and Integration: s of CW2 Fall 206 [G. Holzegel] December 9, 206 Problem of Sheet 5 a) Left (f n ) and (g n ) be sequences of integrable functions with f n (x) f (x) and g n (x) g (x) for almost
More informationANISOTROPIC EQUATIONS: UNIQUENESS AND EXISTENCE RESULTS
ANISOTROPIC EQUATIONS: UNIQUENESS AND EXISTENCE RESULTS STANISLAV ANTONTSEV, MICHEL CHIPOT Abstract. We study uniqueness of weak solutions for elliptic equations of the following type ) xi (a i (x, u)
More informationConverse Lyapunov-Krasovskii Theorems for Systems Described by Neutral Functional Differential Equation in Hale s Form
Converse Lyapunov-Krasovskii Theorems for Systems Described by Neutral Functional Differential Equation in Hale s Form arxiv:1206.3504v1 [math.ds] 15 Jun 2012 P. Pepe I. Karafyllis Abstract In this paper
More informationThe main motivation of this paper comes from the following, rather surprising, result of Ecker and Huisken [13]: for any initial data u 0 W 1,
Quasilinear parabolic equations, unbounded solutions and geometrical equations II. Uniqueness without growth conditions and applications to the mean curvature flow in IR 2 Guy Barles, Samuel Biton and
More informationMaximum Process Problems in Optimal Control Theory
J. Appl. Math. Stochastic Anal. Vol. 25, No., 25, (77-88) Research Report No. 423, 2, Dept. Theoret. Statist. Aarhus (2 pp) Maximum Process Problems in Optimal Control Theory GORAN PESKIR 3 Given a standard
More informationMinimum-Phase Property of Nonlinear Systems in Terms of a Dissipation Inequality
Minimum-Phase Property of Nonlinear Systems in Terms of a Dissipation Inequality Christian Ebenbauer Institute for Systems Theory in Engineering, University of Stuttgart, 70550 Stuttgart, Germany ce@ist.uni-stuttgart.de
More informationLecture 10: Singular Perturbations and Averaging 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS by A. Megretski Lecture 10: Singular Perturbations and
More informationAN OVERVIEW OF STATIC HAMILTON-JACOBI EQUATIONS. 1. Introduction
AN OVERVIEW OF STATIC HAMILTON-JACOBI EQUATIONS JAMES C HATELEY Abstract. There is a voluminous amount of literature on Hamilton-Jacobi equations. This paper reviews some of the existence and uniqueness
More information