A Theory of Markovian Time Inconsistent Stochastic Control in Discrete Time


Tomas Björk, Department of Finance, Stockholm School of Economics
Agatha Murgoci, Department of Economics, Aarhus University

Published in Finance and Stochastics, 18, No. 3 (2013).

Abstract

We develop a theory for a general class of discrete time stochastic control problems which, in various ways, are time inconsistent in the sense that they do not admit a Bellman optimality principle. We attack these problems by viewing them within a game theoretic framework, and we look for subgame perfect Nash equilibrium points. For a general controlled Markov process and a fairly general objective functional we derive an extension of the standard Bellman equation, in the form of a system of non-linear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. Most known examples of time inconsistent stochastic control problems in the literature are easily seen to be special cases of the present theory. We also prove that for every time inconsistent problem, there exists an associated time consistent problem such that the optimal control and the optimal value function for the consistent problem coincide with the equilibrium control and value function respectively for the time inconsistent problem. To exemplify the theory we study some concrete examples, such as hyperbolic discounting and mean variance control.

Keywords: Time consistency, time inconsistency, time inconsistent control, dynamic programming, stochastic control, Bellman equation, hyperbolic discounting, mean-variance
AMS code: 49L, 60J, 91A, 91G
JEL code: C61, C72, C73, D5, G11, G12

The authors are greatly indebted to the associate editor, two anonymous referees, Ivar Ekeland, Ali Lazrak, Martin Schweizer, Traian Pirvu, Suleyman Basak, Mogens Steffensen, and Eric Böse-Wolf for very helpful comments.

Contents

1 Introduction
  1.1 Dynamic programming and time consistency
  1.2 Three disturbing examples
  1.3 Approaches to handle time inconsistency
  1.4 Previous literature
  1.5 Contributions of the present paper
  1.6 Structure of the paper
2 General theory I: Setup
  2.1 Setup
  2.2 Basic problem formulation
  2.3 The game theoretic formulation
3 The extended Bellman equation
  3.1 Simplifying the problem
  3.2 The case J_n(x, u) = E_{n,x}[F(x, X_T^u)]
    3.2.1 The recursion for J_n(x, u)
    3.2.2 The recursion for V_n(x)
  3.3 The case J_n(x, u) = G(x, E_{n,x}[X_T^u])
    3.3.1 The recursion for J_n(x, u)
    3.3.2 The recursion for V_n(x)
  3.4 The case J_n(x, u) = E_{n,x}[F(x, X_T^u)] + G(x, E_{n,x}[X_T^u])
  3.5 The general case
  3.6 Control constraints
  3.7 A slight extension
  3.8 A scaling result
4 An equivalent time consistent problem
5 Infinite horizon
  5.1 Generalities
  5.2 A time invariant problem
6 Existence and uniqueness
7 General non-exponential discounting
  7.1 A general discount function
  7.2 Infinite horizon
8 Quasi-hyperbolic discounting
  8.1 The extended Bellman equation
  8.2 An example with logarithmic utility
  8.3 Two equivalent standard problems

9 Further examples
  9.1 Mean variance portfolios
  9.2 Mean variance portfolios with state dependent risk aversion
  9.3 A time inconsistent linear quadratic regulator
  9.4 Another time inconsistent linear quadratic regulator
10 Conclusion and future research

1 Introduction

In a standard discrete time stochastic optimal control problem the object is that of maximizing (or minimizing) a functional of the form

  E[ sum_{n=0}^{T-1} C(n, X_n, u_n) + F(X_T) ],

where X is some controlled Markov process, u_n is the control applied at time n, and F, C are given real valued functions. A typical example is when X is a controlled scalar stochastic difference equation of the form

  X_{n+1} = µ(X_n, u_n, Y_{n+1}),

where Y is the stochastic noise process, and we have some initial condition X_0 = x_0. Later on in the paper we will allow for more general dynamics than those of a difference equation, but in this informal section we restrict ourselves to this case, and for simplicity we assume that there are no constraints on the scalar control u_n.

The object of the present paper is to study problems which are similar to the one stated above, but where there is also an element of time inconsistency. In order to understand exactly why and how our problems are different from the standard one above, and what the term time inconsistency really means, we need to recapitulate, very briefly, the main ideas of dynamic programming.

1.1 Dynamic programming and time consistency

A standard way of attacking a problem like the one above is by using Dynamic Programming (henceforth DynP), so we now give a brief recapitulation of the main ideas. We restrict ourselves to control laws, i.e., the control at time k, given that X_k = y, is of the form u(k, y), where the control law u is a deterministic function of the variables (k, y). We then embed the problem above in a family of problems indexed by the initial point. More precisely we consider, for every (n, x), the problem P_{n,x} of maximizing the reward functional

  J_n(x, u) = E_{n,x}[ sum_{k=n}^{T-1} C(k, X_k, u_k) + F(X_T) ],

given the initial condition X_n = x. Denoting the optimal control law for P_{n,x} by û_{nx}(k, y) (where n ≤ k ≤ T − 1) and the corresponding optimal value function by V_n(x), we see that the original problem corresponds to the problem P_{0,x_0}. We note that ex ante the optimal control law û_{nx}(k, y) for the problem P_{n,x} must be indexed by the initial point (n, x) but, as is well known, problems of the kind described above turn out to be time consistent in the sense that we have the Bellman optimality principle, which roughly says that the optimal control is independent of the initial point. More precisely: if a control law is optimal on the time interval {n, ..., T}, then it is also optimal for any subinterval {m, ..., T} where n ≤ m, or more formally

  û_{nx}(k, y) = û_{mz}(k, y), for all states x, y, z and for all times n ≤ m ≤ k.

Given the Bellman principle, it is easy to derive the Bellman equation

  V_n(x) = sup_{u ∈ R} { C(n, x, u) + E_{n,x}[V_{n+1}(X_{n+1}^u)] },
  V_T(x) = F(x),

for the determination of V. We end this section by listing some important conditions concerning time consistency, and in the next section we will see some, seemingly quite natural, problems where these conditions do not hold, thus giving rise to time inconsistency.

Remark 1.1 The main reasons for the time consistency of the indexed family {P_{n,x} : x ∈ R, n = 0, 1, 2, ...} of problems above are as follows.

- The term C(k, X_k, u_k) in the problem P_{n,x} is allowed to depend on k, X_k and u_k. It is not allowed to depend on the initial point (n, x).
- The terminal evaluation term is allowed to be of the form E_{n,x}[F(X_T)], i.e. the expected value of a non-linear function of the terminal value X_T. We are not allowed to have a term of the form G(E_{n,x}[X_T]), which is a non-linear function of the expected value.
- We are not allowed to let the terminal evaluation function F depend on the initial point (n, x).

1.2 Three disturbing examples

We will now consider three seemingly simple examples from financial economics, where time consistency fails to hold.
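The Bellman equation above can be solved by straightforward backward induction. The following sketch uses a toy model of our own choosing (none of its ingredients come from the paper): a finite state grid, dynamics X_{n+1} = X_n + u_n + w clipped to the grid, running cost C(n, x, u) = −u², equiprobable noise, and terminal reward F(x) = x.

```python
import numpy as np

# Backward induction for the standard (time-consistent) Bellman equation:
#   V_n(x) = sup_u { C(n,x,u) + E_{n,x}[V_{n+1}(X_{n+1}^u)] },  V_T(x) = F(x).
# Toy assumptions: X_{n+1} = clip(X_n + u + w), C(n,x,u) = -u**2, F(x) = x.

T = 5
states = range(-10, 11)                 # finite state grid
controls = (-1, 0, 1)                   # admissible control values
noise = (-1, 0, 1)                      # equiprobable noise w

def step(x, u, w):
    return int(np.clip(x + u + w, -10, 10))

V = {T: {x: float(x) for x in states}}  # terminal condition V_T(x) = F(x)
policy = {}
for n in range(T - 1, -1, -1):
    V[n], policy[n] = {}, {}
    for x in states:
        best_val, best_u = -np.inf, None
        for u in controls:
            ev = np.mean([V[n + 1][step(x, u, w)] for w in noise])
            val = -u**2 + ev            # C(n,x,u) + E_{n,x}[V_{n+1}]
            if val > best_val:
                best_val, best_u = val, u
        V[n][x], policy[n][x] = best_val, best_u

print(V[0][0], policy[0][0])
```

For interior states the drift bought by a nonzero control exactly cancels its quadratic cost, so the computed policy at the origin is u = 0; the point of the sketch is only the recursion itself, which the time inconsistent problems below will break.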
In all these cases we consider a financial market with a risky asset as well as a risk free asset with rate of return r. We denote by X the market value of a self financing portfolio, and by c the consumption process. We now consider three indexed families of optimization problems. In all cases the (naive) objective is to maximize the objective functional J_n(x, u), where (n, x) is the initial point and u is a shorthand expression for the control strategy, consisting of consumption and portfolio weights.

1. Non-exponential discounting.

  J_n(x, u) = E_{n,x}[ sum_{k=n}^{T-1} ϕ(k − n) h(c_k) + ϕ(T − n) F(X_T) ].

In this problem h is the local utility of consumption, F is the utility of terminal wealth, and ϕ is the discounting function. This problem differs from a standard problem by the fact that the initial point in time n enters in the discounting function (see Remark 1.1). Obviously, if ϕ is exponential, so that ϕ(k − n) = δ^{k−n}, then we can factor out δ^{−n} and convert the problem into a standard problem with objective functional

  J_n(x, u) = E_{n,x}[ sum_{k=n}^{T-1} δ^k h(c_k) + δ^T F(X_T) ].

One can show, however, that every choice of the discounting function ϕ, apart from the exponential case, will lead to a time inconsistent problem. More precisely, the Bellman optimality principle will not hold.

2. Mean variance utility.

  J_n(x, u) = E_{n,x}[X_T] − (γ/2) Var_{n,x}(X_T).

This case is a dynamic version of a standard Markowitz investment problem where we want to maximize utility of final wealth. The utility of final wealth is basically linear in wealth, as given by the term E_{n,x}[X_T], but we penalize the risk by the conditional variance (γ/2) Var_{n,x}(X_T). This looks innocent enough, but we recall the elementary formula Var(X) = E[X²] − (E[X])². Now, in a standard time consistent problem we are allowed to have terms like E_{n,x}[F(X_T)] in the objective functional, i.e. we are allowed to have the expected value of a non-linear function of terminal wealth. In the present case, however, we have the term (E_{n,x}[X_T])². This is not an expected value of a non-linear function of terminal wealth, but instead a non-linear function of the expected value of terminal wealth, and we thus have a time inconsistent problem (see Remark 1.1).

3. Endogenous habit formation.

  J_n(x, u) = E_{n,x}[ln(X_T − x + β)], β > 0.

In this particular example we basically want to maximize log utility of terminal wealth. In a standard problem we would have the objective E_{n,x}[ln(X_T − d)], where d > 0 is the lowest acceptable level of terminal wealth.
In our problem, however, the lowest acceptable level of terminal wealth is given by x − β, and it thus depends on your wealth X_n = x at time n. This again leads to a time inconsistent problem. (We remark in passing that there are other examples of endogenous habit formation which are indeed time consistent.)
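The preference reversal behind Problem 1 is easy to exhibit numerically. The following toy computation (our own numbers, not an example from the paper) uses quasi-hyperbolic weights ϕ(0) = 1, ϕ(k) = βδ^k and compares two deterministic rewards: 1.0 paid at time 2 versus 1.5 paid at time 3.

```python
# A numerical illustration, with our own toy numbers, of the preference
# reversal caused by a non-exponential discounting function phi.

beta, delta = 0.5, 0.9

def phi(k):
    return 1.0 if k == 0 else beta * delta**k

def value(reward, t, n):
    # value at time n of a reward paid at absolute time t >= n
    return phi(t - n) * reward

# Viewed from n = 0, the later, larger reward B is preferred ...
v0_A, v0_B = value(1.0, 2, 0), value(1.5, 3, 0)
assert v0_B > v0_A

# ... but viewed from n = 2 the ranking reverses: reward A pays now
# (weight phi(0) = 1) while B is still one period away (weight beta*delta).
v2_A, v2_B = value(1.0, 2, 2), value(1.5, 3, 2)
assert v2_A > v2_B

print(v0_A, v0_B, v2_A, v2_B)   # 0.405 0.54675 1.0 0.675
```

With exponential weights δ^{t−n} both comparisons scale by the same factor as n changes, so no reversal can occur; it is exactly the kink at k = 0 that breaks the Bellman principle.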

1.3 Approaches to handle time inconsistency

In all the three examples of the previous subsection we are faced with a time inconsistent family of problems, in the sense that if for some fixed initial point (n, x) we determine the control law û which maximizes J_n(x, u), then at some later point (k, X_k) the control law û restricted to the interval [k, T] will no longer be optimal for the functional J_k(X_k, u). It is thus conceptually unclear what we mean by optimality, and even more unclear what we mean by an optimal control law, so our first task is to specify more precisely exactly which problem we are trying to solve. There are then at least three different ways of handling a family of time inconsistent problems, like the ones above.

- We dismiss the entire problem as being silly.
- We fix one initial point, like for example (0, x_0), and then try to find the control law û which maximizes J_0(x_0, u). We then simply disregard the fact that at a later point in time, such as (n, X_n), the control law û will not be optimal for the functional J_n(X_n, u). In the economics literature, this is known as pre-commitment.
- We take the time inconsistency seriously and formulate the problem in game theoretic terms.

All of the three strategies above may in different situations be perfectly reasonable, but in the present paper we choose the last one. The basic idea is then that when we decide on a control action at time n we should explicitly take into account that at future times we will have a different objective functional or, in more loose terms, our tastes are changing over time. We can then view the entire problem as a non-cooperative game, with one player for each time n, where player n can be viewed as the future incarnation of ourselves (or rather of our preferences) at time n. Player n chooses the control law u_n(·), so, given this point of view, it is natural to look for Nash equilibria for the game, and this is exactly our approach.
For the case of a finite time horizon, the approach works roughly as follows. (See Section 2 for precise definitions.)

1. Given that X_{T−1} = x, player T − 1 has a standard optimization problem to solve, namely that of maximizing J_{T−1}(x, u_{T−1}) over u_{T−1}. We denote the optimal u by û_{T−1}(x).

2. Given that X_{T−2} = x, and that player T − 1 is using û_{T−1}, player T − 2 now maximizes J_{T−2}(x, u_{T−2}, û_{T−1}) over u_{T−2}. We denote the optimal u by û_{T−2}(x).

3. We then proceed by induction.
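Steps 1–3 can be carried out numerically. The following sketch runs the backward induction in a toy deterministic cake-eating model of our own (it is not an example solved in the paper): wealth evolves as x_{k+1} = x_k − c_k, and player n evaluates consumption streams with quasi-hyperbolic weights, J_n(x, u) = sum_{k=n}^{T−1} ϕ(k−n) log(c_k) + ϕ(T−n) log(x_T). Because the weights ϕ(k−n) differ across players, player n cannot reuse player n+1's value function; instead the payoff of a deviation is evaluated by rolling the already-computed equilibrium laws forward.

```python
import numpy as np

# Backward induction for a subgame perfect equilibrium, following steps
# 1-3 above, in a toy cake-eating model with quasi-hyperbolic weights
# phi(0) = 1, phi(k) = beta * delta**k (our own illustrative choices).

T, beta, delta = 4, 0.6, 0.9
phi = lambda k: 1.0 if k == 0 else beta * delta**k

grid = np.linspace(0.1, 1.0, 10)       # wealth grid
fracs = np.linspace(0.05, 0.95, 19)    # candidate consumption fractions

policy = {}                            # equilibrium laws û_n (wealth -> consumption)

def cont_value(n, w):
    """Continuation payoff seen by player n from wealth w at time n+1,
    when every later player k > n follows the equilibrium law û_k."""
    total = 0.0
    for k in range(n + 1, T):
        c = policy[k](w)
        total += phi(k - n) * np.log(c)
        w -= c
    return total + phi(T - n) * np.log(w)

for n in range(T - 1, -1, -1):         # step 1, then step 2, then induction
    best = []
    for x in grid:
        vals = [np.log(f * x) + cont_value(n, x - f * x) for f in fracs]
        best.append(fracs[int(np.argmax(vals))])
    bf = np.array(best)
    policy[n] = (lambda b: lambda w: np.interp(w, grid, b) * w)(bf)

# Player T-1 solves a standard problem: maximizing log(c) + phi(1)*log(x-c)
# gives the consumption fraction 1/(1 + beta*delta) ~ 0.649.
print(policy[T - 1](1.0), policy[0](1.0))
```

With log utility the equilibrium consumption fraction is wealth-independent, and on the grid above player T−1's fraction comes out as 0.65, matching the closed-form 1/(1 + βδ) up to the grid resolution.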

1.4 Previous literature

The game theoretic approach to time inconsistency using Nash equilibrium points as above has a long history starting with [16], where a deterministic Ramsey problem with non-exponential discounting is studied. Further work along this line in continuous and discrete time is provided in [1], [9], [11], [14], [15], and [17]. Recently there has been renewed interest in these problems in continuous time. In the interesting, and mathematically very advanced, papers [7] and [8], the authors consider optimal consumption and investment under hyperbolic discounting (Problem 1 in our list above) in deterministic and stochastic models from the above game theoretic point of view. In [2] the authors undertake a deep study of the mean variance problem within a Wiener driven framework. This is basically Problem 2 in the list above, but the authors also consider the case of multiple assets, as well as the case of a hidden Markov process driving the parameters of the asset price dynamics. Their methodology is based on using an iterated variance formula. This fits the mean variance framework very well, but it is hard to see how to extend the methodology to more complicated objective functionals. In [6] the author develops a very complete and impressive theory of the mean variance problem within a non-Markovian general semimartingale framework, thus extending the results of [2] considerably. The technique in [6] is different from that in [2], but it is closely tied to the mean variance problem, and it is not clear that it can be extended to other objective functionals.

In all the cited papers above, the various authors have studied particular models and/or objective functionals, each author deriving the relevant equilibrium conditions for his or her model. What has been lacking in the literature so far is a (reasonably) general theory of time inconsistent stochastic control, and the purpose of the present paper is precisely to present such a theory.
To our knowledge, the present paper, which is the discrete time part of our working paper [3], is the first attempt to derive a general (albeit Markovian) theory of time inconsistent control. The corresponding continuous time theory (which is technically more complicated) depends heavily on the discrete time results of the present paper and can be found in the working paper [3]. It will appear separately as [4]. In the working paper [12] the authors use the theory of [3] to study several interesting new applications.

1.5 Contributions of the present paper

The object of the present paper is to undertake a rigorous study of time inconsistent control problems in a reasonably general Markovian framework, and in particular we do not want to tie ourselves down to a particular applied problem. We have therefore chosen a setup of the following form. We consider a general controlled Markov process X, living on some

suitable space (details are given below). It is important to notice that we do not make any structural assumptions whatsoever about X, and we note that the setup obviously includes the case when X is determined by a system of stochastic difference equations. We consider a general reward functional of the form

  J_n(x, u) = E_{n,x}[ sum_{k=n}^{T-1} C_{n,k}(x, X_k^u, u_k(X_k^u)) + F_n(x, X_T^u) ] + G_n(x, E_{n,x}[X_T^u]),

where we also allow the case T = ∞ (see Section 5.1). Referring to the discussion in Remark 1.1 we see that with the choice of functional above, time inconsistency will enter at several points:

- The shape of the utility functional depends explicitly on the initial position (n, x) in time-space, as can be seen in the appearance of n and x in the expression F_n(x, X_T^u), and similarly for the other terms. In other words, as the X process moves around, our utility function changes, so at time k this part of the utility function will have the form F_k(X_k, X_T).

- For a standard time consistent control problem we are allowed to have expressions like E_{n,x}[G(X_T)] in the utility function, i.e. we are allowed to have the expected value of a non-linear function G of the future process value. Time consistency is then a relatively simple consequence of the law of iterated expectations. In our problem above, however, we have an expression of the form G_n(x, E_{n,x}[X_T^u]) which, even apart from the appearance of n and x in the function G, is not the expectation of a non-linear function, but a non-linear function of the expected value. We thus do not have access to iterated expectations, so the problem becomes time inconsistent. On top of this we also have the appearance of n and x in the expression G_n(x, E_{n,x}[X_T^u]).

This setup is studied in some detail and our main results are as follows.

- We derive an extension of the standard Bellman equation to a non-standard system of equations for the determination of the equilibrium value function V and the equilibrium control û.
- We prove that to every time inconsistent problem of the form above, there exists an associated standard, time consistent, control problem with the following properties: The optimal value function for the standard problem coincides with the equilibrium value function for the time inconsistent problem. The optimal control law for the standard problem coincides with the equilibrium strategy for the time inconsistent problem.

For the case of a Ramsey problem with non-exponential discounting, a related equivalence result can be found in [1], but our result is more general and also structurally different from that of [1].

- We solve some specific test examples. In particular we study non-exponential discounting in some detail, and we also study mean variance optimal portfolios.

We thus extend the existing literature substantially by allowing for a considerably more general utility functional, and a completely general Markovian structure.

1.6 Structure of the paper

We develop the general discrete time theory in Section 2, and the main result is Theorem 3.2. In Section 4 we prove that for each time inconsistent problem there exists an equivalent standard time consistent problem, such that the optimal control for the standard problem coincides with the equilibrium control for the time inconsistent problem. We discuss existence and uniqueness questions in Section 6. In Section 7 we exemplify the general theory by studying the special case of non-exponential discounting in some detail, and in Section 8 we specialize further to quasi-hyperbolic discounting. Section 9.1 is devoted to mean variance portfolio analysis, and in Section 10 we conclude and give directions for future research.

2 General theory I: Setup

In this section we present the setup, and in the next section we derive the main theoretical results.

2.1 Setup

We consider a given controlled Markov process X, evolving on a measurable state space {X, G_X}, with controls taking values in a measurable control space {U, G_U}. The action is in discrete time, indexed by the set N of natural numbers. The intuitive idea is that if X_n = x, then we can choose a control u_n ∈ U, and this control will affect the transition probabilities from X_n to X_{n+1}. This idea is formalized by specifying a family of transition probabilities,

  {p_n^u(dz; x) : n ∈ N, x ∈ X, u ∈ U}.
For every fixed n ∈ N, x ∈ X and u ∈ U, we assume that p_n^u(·; x) is a probability measure on X, and for each A ∈ G_X, the probability p_n^u(A; x) is jointly measurable in (x, u). The interpretation of this is that p_n^u(dz; x) is the probability distribution of X_{n+1}, given that X_n = x, and that we at time n apply the control u, i.e.,

  p_n^u(dz; x) = P(X_{n+1} ∈ dz | X_n = x, u_n = u).

To obtain a Markov structure, we restrict the controls to be feedback control laws, i.e. at time n, the control u_n is allowed to depend on time n and state X_n. We can thus write u_n = u_n(X_n), where the mapping u : N × X → U is measurable. Note the boldface notation for the mapping. In order to distinguish between functions and function values, we will always denote a control law (i.e. a mapping) by using boldface, like u_n, whereas a possible value of the mapping will be denoted without boldface, like u ∈ U.

Remark 2.1 It is natural to ask whether our analysis can be extended to the class of adapted control strategies, rather than the more restricted class of feedback laws considered in the present paper. For Markovian standard (i.e. time consistent) stochastic control problems, it is well known that it is sufficient to consider the class of feedback laws, in the sense that it can be proved that the optimal adapted policy is in fact of feedback form. It would thus be natural to investigate whether we would have the corresponding result also for our class of time inconsistent problems. One would for example hope to be able to prove that all adapted equilibrium controls are in fact also feedback laws. We have tried to study also this question, but it turns out to be technically (and notationally) quite complicated, so it has to be postponed to a separate paper.

Given the family of transition probabilities we may define a corresponding family of operators, operating on function sequences.

Definition 2.1 A function sequence is a mapping f : N × X → R, where we use the notation (n, x) → f_n(x). For each u ∈ U, the operator P^u, acting on the set of integrable function sequences, is defined by

  (P^u f)_n(x) = ∫_X f_{n+1}(z) p_n^u(dz; x).

The corresponding discrete time infinitesimal operator A^u is defined by

  A^u = P^u − I,

where I is the identity operator. For each control law u, the operator P^u is defined by

  (P^u f)_n(x) = ∫_X f_{n+1}(z) p_n^{u_n(x)}(dz; x),

and A^u is defined correspondingly as

  A^u = P^u − I.

In more probabilistic terms we have the interpretation

  (P^u f)_n(x) = E[f_{n+1}(X_{n+1}) | X_n = x, u_n = u],

or, as we often will write,

  (P^u f)_n(x) = E_{n,x}[f_{n+1}(X_{n+1}^u)],

and A^u is the discrete time version of the continuous time infinitesimal operator. We immediately have the following result.

Proposition 2.1 Consider a real valued function sequence {f_n(x)} and a control law u. The process f_n(X_n) is then a martingale under the measure induced by u if and only if the following conditions are satisfied:

- The process f_n(X_n) is integrable.
- The sequence {f_n} satisfies the equation

  (A^u f)_n(x) = 0, n = 0, 1, ..., T − 1.

Proof. Obvious from the definition of A^u.

It is clear that for a fixed initial point (n, x) and a fixed control law u we may in the obvious way define a Markov process denoted by X^{n,x,u}, where for notational simplicity we often drop the upper index n, x and use the notation X^u. The corresponding expectation operator is denoted by E^u_{n,x}, and we often drop the upper index u and instead use the notation E_{n,x}. A typical example of an expectation will thus have the form E_{n,x}[F(X_k^u)] for some real valued function F and some point in time k.

2.2 Basic problem formulation

For a fixed (n, x) ∈ N × X, a fixed control law u = {u_0, u_1, ..., u_{T−1}}, and a fixed time horizon T, we consider a functional of the basic form

  J_n(x, u) = E_{n,x}[ sum_{k=n}^{T-1} C(x, X_k^u, u_k(X_k^u)) + F(x, X_T^u) ] + G(x, E_{n,x}[X_T^u]).   (1)

Later on we will in fact allow an even more general functional, but for the present purposes, the form above is general enough. Obviously, the functional J does not depend on the entire control law u = {u_0, u_1, ..., u_{T−1}} but only on the restriction of u to the interval [n, T], i.e. on {u_n, u_{n+1}, ..., u_{T−1}}. The intuitive idea is that we are standing at (n, x) and that we would like to choose a control law u which maximizes J. We can thus define an indexed family of problems {P_{n,x}} by

  P_{n,x} : max_u J_n(x, u),

where max is shorthand for the imperative "maximize!".
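On a finite state space the operators of Definition 2.1 are just matrix-vector products, which makes the formalism easy to experiment with. The three-state chain and the two transition matrices below are our own toy choices, not part of the paper.

```python
import numpy as np

# A finite-state sketch of the operators P^u and A^u of Definition 2.1.
# With a transition matrix p[u] for each control value u,
#   (P^u f)_n(x) = sum_z f_{n+1}(z) p_n^u(z; x),   A^u = P^u - I.

m = 3
p = {0: np.array([[1.0, 0.0, 0.0],
                  [0.5, 0.5, 0.0],
                  [0.0, 0.5, 0.5]]),
     1: np.array([[0.5, 0.5, 0.0],
                  [0.0, 0.5, 0.5],
                  [0.0, 0.0, 1.0]])}

def P(u, f_next):
    """(P^u f)_n as a vector over states: E[f_{n+1}(X_{n+1}) | X_n = x, u_n = u]."""
    return p[u] @ f_next

def A(u, f_now, f_next):
    """(A^u f)_n(x) = (P^u f)_n(x) - f_n(x)."""
    return P(u, f_next) - f_now

def P_law(law, f_next):
    """(P^u f)_n for a feedback law: row x uses the control value law(x)."""
    return np.array([p[law(x)][x] @ f_next for x in range(m)])

f_next = np.array([0.0, 1.0, 2.0])
print(P(1, f_next))                      # -> [0.5 1.5 2. ]
print(P_law(lambda x: x % 2, f_next))
```

In this representation the martingale condition of Proposition 2.1 reads `A(law, f[n], f[n+1]) == 0` for every n, with the row of `p` selected by the law in each state.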

Remark 2.2 For simplicity we assume that there are no constraints on the control, so at time n we are allowed to choose any u_n ∈ U. State and time dependent control constraints can, however, easily be incorporated. See Section 3.6. We can easily extend the theory to the case when the term G(x, E_{n,x}[X_T^u]) is replaced by G(x, E_{n,x}[h(X_T^u)]) for some real valued function h. See Section 3.7.

As we have seen in Remark 1.1 above, the complicating factor with our indexed family of problems is that the family {P_{n,x}} is time inconsistent in the sense that if û is optimal for P_{n,x}, then the restriction of û to the time set {k, k + 1, ..., T} (for k > n) is not necessarily optimal for the problem P_{k,X_k}. Thus, if we at some point (n, x) decide on a feedback law û which is optimal from the point of view of (n, x), then as time goes by, we will no longer consider û to be optimal. To handle this problem we will use a game theoretic approach, and we now go on to describe this in some detail.

2.3 The game theoretic formulation

The idea, which appears already in [16], is to view the setup above in game theoretic terms. More precisely we view it as a non-cooperative game where we have one player at each point n in time. We refer to this player as player number n, and the rule is that player number n can only choose the control u_n, or more precisely the control law u_n(·). One interpretation is that these players are different future incarnations of yourself (or rather incarnations of your future preferences), but conceptually it is perhaps easier to think of it as one separate player at each n. Given the data (n, x), player number n would, in principle, like to maximize J_n(x, u) over the class of feedback controls restricted to [n, T], i.e. he would like to maximize J_n(x, u) over {u_n, u_{n+1}, ..., u_{T−1}}, but since he can only choose the control u_n, this is not possible. Instead of looking for optimal feedback laws, we take the game theoretic point of view and study so called subgame perfect Nash equilibrium strategies.
The formal definition is as follows.

Definition 2.2 We consider a fixed control law û and make the following construction.

1. Fix an arbitrary point (n, x) where n < T, and choose an arbitrary control value u ∈ U.

2. Now define the control law u^{u,n} on the time set {n, n + 1, ..., T − 1} by setting, for any y ∈ X,

  u^{u,n}_k(y) = û_k(y), for k = n + 1, ..., T − 1,
  u^{u,n}_k(y) = u, for k = n.

We say that û is a subgame perfect Nash equilibrium strategy if, for every fixed (n, x), the following condition holds:

  sup_{u ∈ U} J_n(x, u^{u,n}) = J_n(x, û).

If an equilibrium control û exists, we define the equilibrium value function V by

  V_n(x) = J_n(x, û).

In more pedestrian terms this means that if player number n knows that player number k will choose the control û_k for all k > n, then it is optimal for player number n to choose û_n.

Remark 2.3 An equivalent, and perhaps more concrete, way of describing an equilibrium strategy is as follows.

- The equilibrium control û_{T−1}(x) is obtained by letting player T − 1 optimize J_{T−1}(x, u) over u_{T−1} for all x ∈ X. This is a standard optimization problem without any game theoretic components.
- The equilibrium control û_{T−2} is obtained by letting player T − 2 choose u_{T−2} to optimize J_{T−2}, given the knowledge that player number T − 1 will use û_{T−1}.
- Proceed recursively by backward induction.

We thus see that, in discrete time and for a finite horizon, the equilibrium control is determined by backward induction. Note, however, that for the discrete time infinite horizon case, as well as for the continuous time case, the situation is much more complicated. Obviously, for a standard time consistent control problem, the game theoretic aspect becomes trivial and the equilibrium control law coincides with the standard (time consistent) optimal law. The equilibrium value function V will coincide with the optimal value function and, using dynamic programming arguments, V is seen to satisfy a standard Bellman equation. The main result of the present paper is that in the time inconsistent case, the equilibrium value function V will satisfy a system of non-linear equations. This system of equations extends the standard Bellman equation, and for a time consistent problem it reduces to the Bellman equation.

3 The extended Bellman equation

In this section we assume that there exists an equilibrium control law û (which may not be unique), and we consider the corresponding equilibrium value function V defined above.
The goal of this section is to derive a system of equations, extending the standard Bellman equation, for the determination of V. This will be done in the following two steps:

- For an arbitrarily chosen control law u, we will derive a recursive equation for J_n(x, u).
- We will then fix (n, x) and consider two control laws. The first one is the equilibrium law û, and the other one is the law u where we choose u = u_n(x) arbitrarily, but follow the law û for all k with k = n + 1, ..., T − 1. The trivial observation that

  sup_{u ∈ U} J_n(x, u) = J_n(x, û) = V_n(x)

will finally give us the extension of the Bellman equation.

The reader with experience from dynamic programming (DynP) will recognize that the general program above is in fact more or less the same as for standard (time consistent) DynP. However, in the present time inconsistent setting, the derivation of the recursion in the first step is much more tricky than in the corresponding step from DynP, and it also requires some completely new constructions.

3.1 Simplifying the problem

In order to derive the recursion for J_n(x, u) we consider an arbitrary initial point (n, x), and we consider an arbitrarily chosen control law u. The value taken by u at (n, x) will play a special role in the sequel, and for ease of reading we will use the notation u_n(x) = u. We now go on to derive a recursion between J_n and J_{n+1}. This is conceptually rather delicate, and sometimes a bit messy. In order to increase readability we therefore carry out a detailed derivation only for the case when the objective functional has the simpler form

  J_n(x, u) = E_{n,x}[F(x, X_T^u)] + G(x, E_{n,x}[X_T^u]).   (2)

We then provide the result for the general case in Section 3.5. The derivation of this is completely parallel to that of the simplified case. Since also the derivation for the case (2) is rather intricate, we will in fact simplify even further. For pedagogical purposes we will thus consider two special cases, namely the case

  J_n(x, u) = E_{n,x}[F(x, X_T^u)],   (3)

and the case

  J_n(x, u) = G(x, E_{n,x}[X_T^u]).
(4)

The point is that, by considering these special cases, it is easy to see how we separately handle the two main sources of time inconsistency in our model:

- The occurrence of the present state x in the expression F(x, X_T^u), in the (otherwise standard) objective functional E_{n,x}[F(x, X_T^u)].

- The occurrence of the nonstandard term G(x, E_{n,x}[X_T^u]).

Having understood how to handle these special cases, the extension to the case (2) is very easy. The treatments of the two special cases can be read independently of each other, so the reader can (depending on interest) study both of them or any one of them. The reader who wants to be in medias res can skip the derivations and go directly to Sections 3.4 and 3.5.

Before going on to these special cases we make a remark on notation. Given an initial point (n, x), the random variable X_{n+1} will only depend on x and on the control value u_n(x) = u, motivating the notation X_{n+1}^u. The distribution of X_k for k > n + 1 will, on the other hand, depend on the entire control law u restricted to the interval [n, k − 1], so for k > n + 1 we use the notation X_k^u.

3.2 The case J_n(x, u) = E_{n,x}[F(x, X_T^u)]

From the definition of J we have

  J_{n+1}(X_{n+1}^u, u) = E_{n+1}[F(X_{n+1}^u, X_T^u)],   (5)

where for simplicity of notation we write E_{n+1} instead of E_{n+1, X_{n+1}^u}. We now make the following definition, which will play a central role in the sequel.

Definition 3.1 For any control law u, we define the function sequence {f_n^u}, where f_n^u : X × X → R, by

  f_n^u(x, y) = E_{n,x}[F(y, X_T^u)].

We also introduce the notation

  f_n^{u,y}(x) = f_n^u(x, y).

The difference between f_n^{u,y} and f_n^u is that we view f_n^u as a function of the two variables x and y, whereas f_n^{u,y} is, for a fixed y, viewed as a function of the single variable x. From the definitions above it is obvious that, for any fixed y, the process f_n^{u,y}(X_n^u) is a martingale under the measure generated by u. We thus have the following result.

Lemma 3.1 For every fixed control law u and every fixed choice of y ∈ X, the function sequence {f_n^{u,y}} satisfies the recursion

  (A^u f^{u,y})_n(x) = 0, n = 0, 1, ..., T − 1,
  f_T^{u,y}(x) = F(y, x).

We now go on to derive the recursion for J_n(x, u).

3.2.1 The recursion for J_n(x, u)

Going back to (5) we note that, from the Markovian structure and Definition 3.1, we have

  E_{n+1}[F(X_{n+1}^u, X_T^u)] = f_{n+1}^u(X_{n+1}^u, X_{n+1}^u).

We can now write (5) as

  J_{n+1}(X_{n+1}^u, u) = f_{n+1}^u(X_{n+1}^u, X_{n+1}^u).

Taking expectations gives us

  E_{n,x}[J_{n+1}(X_{n+1}^u, u)] = E_{n,x}[f_{n+1}^u(X_{n+1}^u, X_{n+1}^u)],

and, going back to the definition of J_n(x, u), we can write this as

  E_{n,x}[J_{n+1}(X_{n+1}^u, u)] = J_n(x, u) + E_{n,x}[f_{n+1}^u(X_{n+1}^u, X_{n+1}^u)] − E_{n,x}[F(x, X_T^u)].

At this point it may seem natural to use the identity E_{n,x}[F(x, X_T^u)] = f_n^u(x, x), but for various reasons this is not a good idea. (The main reason is that, in order to get a good recursion, we need to express the right hand side of the equation above as E_{n,x}-expectations of objects involving X_{n+1}^u.) Instead we note that

  E_{n,x}[F(x, X_T^u)] = E_{n,x}[E_{n+1}[F(x, X_T^u)]] = E_{n,x}[f_{n+1}^u(X_{n+1}^u, x)].

Substituting this into the recursion above, we can collect the findings so far.

Lemma 3.2 The value function J satisfies the following recursion:

  J_n(x, u) = E_{n,x}[J_{n+1}(X_{n+1}^u, u)] − { E_{n,x}[f_{n+1}^u(X_{n+1}^u, X_{n+1}^u)] − E_{n,x}[f_{n+1}^u(X_{n+1}^u, x)] }.

3.2.2 The recursion for V_n(x)

We will now derive the recursive equation for the equilibrium function V_n(x). In order to do this we assume that there exists an equilibrium control û. We then fix an arbitrarily chosen initial point (n, x) and consider two strategies (control laws).

1. The first control law is simply the equilibrium law û.

2. The second control law u is slightly more complicated. We choose an arbitrary point u ∈ U and then define the control law as follows:

  u_k(y) = u, for k = n,
  u_k(y) = û_k(y), for k = n + 1, ..., T − 1.

We now compare the objective function $J_n$ for these two control laws. Firstly, and by definition, we have
\[
J_n(x, \hat{u}) = V_n(x),
\]
where $V$ is the equilibrium value function defined earlier. Secondly, and also by definition, we have
\[
J_n(x, u) \le J_n(x, \hat{u}), \quad \text{for all choices of } u \in U.
\]
We thus have the inequality $J_n(x,u) \le V_n(x)$ for all $u \in U$, with equality if $u = \hat{u}_n(x)$. We thus have the basic relation
\[
\sup_{u \in U} J_n(x,u) = V_n(x). \tag{6}
\]
We now make a small variation of Definition 3.1.

Definition 3.2 We define the function sequence $\{f_n\}_{n=0}^{T}$, where $f_n : X \times X \to R$, by
\[
f_n(x,y) = E_{n,x}\left[F(y, X_T^{\hat{u}})\right].
\]
We also introduce the notation $f_n^y(x) = f_n(x,y)$, where we view $f_n^y$ as a function of $x$ with $y$ as a fixed parameter.

Using Lemma 3.2, the basic relation (6) now reads
\[
\sup_{u \in U} \left\{ E_{n,x}\left[J_{n+1}(X_{n+1}^u, u)\right] - V_n(x) - \left( E_{n,x}\left[f_{n+1}^u(X_{n+1}^u, X_{n+1}^u)\right] - E_{n,x}\left[f_{n+1}^u(X_{n+1}^u, x)\right] \right) \right\} = 0.
\]
We now observe that, since the control law $u$ coincides with the equilibrium law $\hat{u}$ on $[n+1, T-1]$, we have the following equalities:
\[
J_{n+1}(X_{n+1}^u, u) = V_{n+1}(X_{n+1}^u),
\]
\[
f_{n+1}^u(X_{n+1}^u, x) = f_{n+1}(X_{n+1}^u, x).
\]
We can thus write the recursion as
\[
\sup_{u \in U} \left\{ E_{n,x}\left[V_{n+1}(X_{n+1}^u)\right] - V_n(x) - \left( E_{n,x}\left[f_{n+1}(X_{n+1}^u, X_{n+1}^u)\right] - E_{n,x}\left[f_{n+1}(X_{n+1}^u, x)\right] \right) \right\} = 0.
\]
The first line in this equation can be rewritten as
\[
E_{n,x}\left[V_{n+1}(X_{n+1}^u)\right] - V_n(x) = (A^u V)_n(x).
\]

The second line can be written as
\[
E_{n,x}\left[f_{n+1}(X_{n+1}^u, X_{n+1}^u)\right] - E_{n,x}\left[f_{n+1}(X_{n+1}^u, x)\right]
\]
\[
= \left( E_{n,x}\left[f_{n+1}(X_{n+1}^u, X_{n+1}^u)\right] - f_n(x,x) \right) - \left( E_{n,x}\left[f_{n+1}(X_{n+1}^u, x)\right] - f_n(x,x) \right)
\]
\[
= (A^u f)_n(x,x) - (A^u f^x)_n(x).
\]
To avoid misunderstandings: the first term, $(A^u f)_n(x,x)$, can be viewed as the operator $A^u$ operating on the function sequence $\{h_n\}$ defined by $h_n(x) = f_n(x,x)$. In the second term, $A^u$ is operating on the function sequence $(f_n^x)$, where the upper index $x$ is viewed as a fixed parameter.

We can now state the main result for the case under study.

Proposition 3.1 Consider a functional of the form
\[
J_n(x,u) = E_{n,x}\left[F(x, X_T^u)\right],
\]
and assume that an equilibrium control law $\hat{u}$ exists. Then, with notation as above, the equilibrium value function $V$ satisfies the equation
\[
\sup_{u \in U} \left\{ (A^u V)_n(x) - (A^u f)_n(x,x) + (A^u f^x)_n(x) \right\} = 0, \tag{7}
\]
\[
V_T(x) = F(x,x), \tag{8}
\]
where the supremum above is realized by $u = \hat{u}_n(x)$. Furthermore, the following hold.

1. For every fixed $y \in X$ the function sequence $f_n^y(x)$ is determined by the recursion
\[
(A^{\hat{u}} f^y)_n(x) = 0, \quad n = 0, \ldots, T-1, \tag{9}
\]
\[
f_T^y(x) = F(y, x), \tag{10}
\]
and $f_n(x,x)$ is given by $f_n(x,x) = f_n^x(x)$.
2. The probabilistic interpretation of $f$ is, as before, given by
\[
f_n^y(x) = E_{n,x}\left[F(y, X_T^{\hat{u}})\right].
\]

3.3 The case $J_n(x,u) = G\left(x, E_{n,x}\left[X_T^u\right]\right)$

To derive a recursion for $J_n$ we start by noting that, from the definition of $J$, we have
\[
J_{n+1}(X_{n+1}^u, u) = G\left(X_{n+1}^u, E_{n+1}\left[X_T^u\right]\right), \tag{11}
\]
where for simplicity of notation we write $E_{n+1}$ instead of $E_{n+1, X_{n+1}^u}$. We now make the following definition, which will play a central role in the sequel.
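As a sanity check on how the system in Proposition 3.1 is meant to be solved, here is a minimal backward-recursion sketch for a hypothetical finite toy instance (the states, transition matrices and reward $F$ below are our own illustrative choices, not from the paper). Expanding the three $A^u$ terms in (7), the $f_n(x,x)$ contributions cancel, which gives the rearranged sup-expression used in the code.

```python
# Hypothetical two-state toy instance (our own choice, not from the paper).
T = 3
STATES, CONTROLS = [0, 1], [0, 1]
P = {0: [[0.9, 0.1], [0.2, 0.8]],   # P[u][x][xp] = transition probability under u
     1: [[0.5, 0.5], [0.6, 0.4]]}
F = lambda y, x: x - 0.5 * (x - y) ** 2   # reward as evaluated from state y

def solve_extended_bellman():
    """Backward scheme of Proposition 3.1: determine V, û and f jointly."""
    V = {x: F(x, x) for x in STATES}                       # V_T(x) = F(x,x), eq. (8)
    f = {(x, y): F(y, x) for x in STATES for y in STATES}  # f_T^y(x) = F(y,x), eq. (10)
    u_hat = {}
    for n in range(T - 1, -1, -1):
        V_new, f_new = {}, {}
        for x in STATES:
            best_u, best_val = None, -float("inf")
            for u in CONTROLS:
                Pu = P[u]
                # sup-part of (7) rearranged: the f_n(x,x) terms cancel, leaving
                # V_n(x) = max_u E[V_{n+1}(X')] - E[f_{n+1}(X',X')] + E[f_{n+1}(X',x)]
                val = sum(Pu[x][xp] * (V[xp] - f[(xp, xp)] + f[(xp, x)])
                          for xp in STATES)
                if val > best_val:
                    best_u, best_val = u, val
            u_hat[(n, x)] = best_u
            V_new[x] = best_val
        for x in STATES:                                   # recursion (9): f under û
            Pu = P[u_hat[(n, x)]]
            for y in STATES:
                f_new[(x, y)] = sum(Pu[x][xp] * f[(xp, y)] for xp in STATES)
        V, f = V_new, f_new
    return V, f, u_hat

V0, f0, u_hat = solve_extended_bellman()
# Sanity check: in the pure-F case V_n(x) = J_n(x, û) = f_n(x, x).
assert all(abs(V0[x] - f0[(x, x)]) < 1e-12 for x in STATES)
```

Note the characteristic feature of the system: the $V$-recursion at time $n$ needs $f_{n+1}$, and $f_n$ in turn needs the maximizer $\hat{u}_n$, so the three objects must be propagated backward together.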

Definition 3.3 For any control law $u$, we define the function sequence $\{g_n^u\}$, where $g_n^u : X \to R$, by
\[
g_n^u(x) = E_{n,x}\left[X_T^u\right].
\]
From the definition above it is obvious that $g_n^u(X_n^u)$ is a martingale under the measure generated by $u$. We thus have the following result.

Lemma 3.3 For every fixed control law $u$ the function sequence $\{g_n^u\}$ satisfies the recursion
\[
(A^u g^u)_n(x) = 0, \quad n = 0, 1, \ldots, T-1,
\]
\[
g_T^u(x) = x.
\]

3.3.1 The recursion for $J_n(x,u)$

Going back to (11) we note that, from the Markovian structure and the definitions above, we have
\[
E_{n+1}\left[X_T^u\right] = g_{n+1}^u(X_{n+1}^u),
\]
so we can write (11) as
\[
J_{n+1}(X_{n+1}^u, u) = G\left(X_{n+1}^u, g_{n+1}^u(X_{n+1}^u)\right).
\]
Taking expectations gives us
\[
E_{n,x}\left[J_{n+1}(X_{n+1}^u, u)\right] = E_{n,x}\left[G\left(X_{n+1}^u, g_{n+1}^u(X_{n+1}^u)\right)\right],
\]
and, going back to the definition of $J_n(x,u)$, we can write this as
\[
E_{n,x}\left[J_{n+1}(X_{n+1}^u, u)\right] = J_n(x,u) + E_{n,x}\left[G\left(X_{n+1}^u, g_{n+1}^u(X_{n+1}^u)\right)\right] - G\left(x, E_{n,x}\left[X_T^u\right]\right).
\]
We now note that
\[
E_{n,x}\left[X_T^u\right] = E_{n,x}\left[E_{n+1}\left[X_T^u\right]\right] = E_{n,x}\left[g_{n+1}^u(X_{n+1}^u)\right].
\]
Substituting this identity into the recursion above, we can now collect the findings so far.

Lemma 3.4 The value function $J$ satisfies the following recursion:
\[
J_n(x,u) = E_{n,x}\left[J_{n+1}(X_{n+1}^u, u)\right] - \left\{ E_{n,x}\left[G\left(X_{n+1}^u, g_{n+1}^u(X_{n+1}^u)\right)\right] - G\left(x, E_{n,x}\left[g_{n+1}^u(X_{n+1}^u)\right]\right) \right\}.
\]

3.3.2 The recursion for $V_n(x)$

We will now derive the fundamental equation for the determination of the equilibrium function $V_n(x)$. In order to do this we assume, as in Section 3.2.2, that there exists an equilibrium control $\hat{u}$. We then fix an arbitrarily chosen initial point $(n,x)$ and consider two strategies (control laws).

1. The first control law is simply the equilibrium law $\hat{u}$.
2. The second control law is slightly more complicated. We choose an arbitrary point $u \in U$ and then define the control law $u$ as follows:
\[
u_k(y) = \begin{cases} u, & \text{for } k = n, \\ \hat{u}_k(y), & \text{for } k = n+1, \ldots, T-1. \end{cases}
\]

We now compare the objective function $J_n$ for these two control laws. Exactly as in Section 3.2.2 we obtain the inequality $J_n(x,u) \le V_n(x)$ for all $u \in U$, with equality if $u = \hat{u}_n(x)$. We thus have the basic relation
\[
\sup_{u \in U} J_n(x,u) = V_n(x). \tag{12}
\]
We now make a small variation of Definition 3.3.

Definition 3.4 We define the function sequence $\{g_n\}_{n=0}^{T}$, where $g_n : X \to R$, by
\[
g_n(x) = E_{n,x}\left[X_T^{\hat{u}}\right].
\]

Using Lemma 3.4, the basic relation (12) now reads
\[
\sup_{u \in U} \left\{ E_{n,x}\left[J_{n+1}(X_{n+1}^u, u)\right] - V_n(x) - \left( E_{n,x}\left[G\left(X_{n+1}^u, g_{n+1}^u(X_{n+1}^u)\right)\right] - G\left(x, E_{n,x}\left[g_{n+1}^u(X_{n+1}^u)\right]\right) \right) \right\} = 0.
\]
We now observe that, since the control law $u$ coincides with the equilibrium law $\hat{u}$ on $[n+1, T-1]$, we have the following equalities:
\[
J_{n+1}(X_{n+1}^u, u) = V_{n+1}(X_{n+1}^u),
\]
\[
g_{n+1}^u(X_{n+1}^u) = g_{n+1}(X_{n+1}^u).
\]
We can thus write the recursion as
\[
\sup_{u \in U} \left\{ E_{n,x}\left[V_{n+1}(X_{n+1}^u)\right] - V_n(x) - \left( E_{n,x}\left[G\left(X_{n+1}^u, g_{n+1}(X_{n+1}^u)\right)\right] - G\left(x, E_{n,x}\left[g_{n+1}(X_{n+1}^u)\right]\right) \right) \right\} = 0.
\]

The first line in this equation can be rewritten as
\[
E_{n,x}\left[V_{n+1}(X_{n+1}^u)\right] - V_n(x) = (A^u V)_n(x).
\]
We rewrite the second line of the recursion as
\[
E_{n,x}\left[G\left(X_{n+1}^u, g_{n+1}(X_{n+1}^u)\right)\right] - G\left(x, E_{n,x}\left[g_{n+1}(X_{n+1}^u)\right]\right)
\]
\[
= E_{n,x}\left[G\left(X_{n+1}^u, g_{n+1}(X_{n+1}^u)\right)\right] - G(x, g_n(x)) - \left\{ G\left(x, E_{n,x}\left[g_{n+1}(X_{n+1}^u)\right]\right) - G(x, g_n(x)) \right\}.
\]
In order to simplify this we need to introduce some new notation.

Definition 3.5 The function sequence $\{(G \circ g)_k\}$ and, for a fixed $z \in X$, the mapping $G^z : X \to R$ are defined by
\[
(G \circ g)_k(y) = G(y, g_k(y)), \qquad G^z(y) = G(z, y).
\]
With this notation we can write
\[
E_{n,x}\left[G\left(X_{n+1}^u, g_{n+1}(X_{n+1}^u)\right)\right] - G\left(x, E_{n,x}\left[g_{n+1}(X_{n+1}^u)\right]\right) = \left(A^u (G \circ g)\right)_n(x) - \left\{ G^x\left((P^u g)_n(x)\right) - G^x(g_n(x)) \right\}.
\]
We now introduce the last piece of new notation.

Definition 3.6 With notation as above, we define the function sequence $\{(H_g^u G)_k\}$ by
\[
\left(H_g^u G\right)_n(x) = G^x\left((P^u g)_n(x)\right) - G^x(g_n(x)).
\]

Finally, we may state the main result for the present form of $J_n$.

Proposition 3.2 Consider a functional of the form
\[
J_n(x,u) = G\left(x, E_{n,x}\left[X_T^u\right]\right),
\]
and assume that an equilibrium control law $\hat{u}$ exists. Then, with notation as above, the equilibrium value function $V$ satisfies the equation
\[
\sup_{u \in U} \left\{ (A^u V)_n(x) - \left(A^u (G \circ g)\right)_n(x) + \left(H_g^u G\right)_n(x) \right\} = 0, \tag{13}
\]
\[
V_T(x) = G(x,x), \tag{14}
\]
where the supremum above is realized by $u = \hat{u}_n(x)$. Furthermore, the following hold.

1. The function sequence $g_n(x)$ is determined by the recursion
\[
(A^{\hat{u}} g)_n(x) = 0, \quad n = 0, \ldots, T-1, \tag{15}
\]
\[
g_T(x) = x. \tag{16}
\]
2. The probabilistic interpretation of $g$ is, as before, given by
\[
g_n(x) = E_{n,x}\left[X_T^{\hat{u}}\right].
\]
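The system in Proposition 3.2 can be solved by the same joint backward sweep as in the $F$-case. The sketch below uses a hypothetical two-state toy instance with a $G$ that is nonlinear in the expectation (all model data are our own illustrative choices, not from the paper); expanding the three terms in (13), the $G(x, g_n(x))$ contributions cancel, which gives the rearranged sup-expression in the code.

```python
# Hypothetical two-state toy instance (our own choice, not from the paper).
T = 3
STATES, CONTROLS = [0, 1], [0, 1]
P = {0: [[0.9, 0.1], [0.2, 0.8]],
     1: [[0.5, 0.5], [0.6, 0.4]]}
G = lambda x, y: y - 0.5 * (y - x) ** 2   # nonlinear in the expectation => time inconsistent

def solve_G_case():
    """Backward scheme of Proposition 3.2: determine V, û and g jointly."""
    V = {x: G(x, x) for x in STATES}      # V_T(x) = G(x,x), eq. (14)
    g = {x: float(x) for x in STATES}     # g_T(x) = x, eq. (16)
    u_hat = {}
    for n in range(T - 1, -1, -1):
        V_new, g_new = {}, {}
        for x in STATES:
            best_u, best_val = None, -float("inf")
            for u in CONTROLS:
                Pu = P[u]
                EV  = sum(Pu[x][xp] * V[xp] for xp in STATES)          # E[V_{n+1}]
                EGg = sum(Pu[x][xp] * G(xp, g[xp]) for xp in STATES)   # E[(G o g)_{n+1}]
                Eg  = sum(Pu[x][xp] * g[xp] for xp in STATES)          # (P^u g)_n(x)
                # sup-part of (13) rearranged: the G(x, g_n(x)) terms cancel, so
                # V_n(x) = max_u { E[V_{n+1}] - E[(G o g)_{n+1}] + G(x, E[g_{n+1}]) }
                val = EV - EGg + G(x, Eg)
                if val > best_val:
                    best_u, best_val = u, val
            u_hat[(n, x)] = best_u
            V_new[x] = best_val
            g_new[x] = sum(P[best_u][x][xp] * g[xp] for xp in STATES)  # recursion (15)
        V, g = V_new, g_new
    return V, g, u_hat

V0, g0, u_hat = solve_G_case()
# Consistency: in the pure-G case V_n(x) = J_n(x, û) = G(x, g_n(x)).
assert all(abs(V0[x] - G(x, g0[x])) < 1e-12 for x in STATES)
```

The term $G(x, Eg)$ in the code is exactly the $G^x((P^u g)_n(x))$ part of Definition 3.6: the expectation is taken first, and $G$ is applied afterwards, which is precisely what destroys the tower property and makes the problem time inconsistent.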

3.4 The case $J_n(x,u) = E_{n,x}\left[F(x, X_T^u)\right] + G\left(x, E_{n,x}\left[X_T^u\right]\right)$

For this form of the objective functional the extended Bellman equation is easily obtained from the results for the special cases discussed in Sections 3.2 and 3.3. With notation as above, we obtain the following result, which is basically a superposition of the results for the special cases.

Theorem 3.1 Consider a functional of the form (2), and assume that an equilibrium control law $\hat{u}$ exists. Then, with notation as in Section 2.1, the equilibrium value function $V$ satisfies the equation
\[
\sup_{u \in U} \left\{ (A^u V)_n(x) - (A^u f)_n(x,x) + (A^u f^x)_n(x) - \left(A^u (G \circ g)\right)_n(x) + \left(H_g^u G\right)_n(x) \right\} = 0, \tag{17}
\]
\[
V_T(x) = F(x,x) + G(x,x), \tag{18}
\]
where the supremum above is realized by $u = \hat{u}_n(x)$. Furthermore, the following hold.

1. For every fixed $y \in X$ the function sequence $f_n^y(x)$ is determined by the recursion
\[
(A^{\hat{u}} f^y)_n(x) = 0, \quad n = 0, \ldots, T-1, \tag{19}
\]
\[
f_T^y(x) = F(y,x), \tag{20}
\]
and $f_n(x,x)$ is given by $f_n(x,x) = f_n^x(x)$.
2. The function sequence $g_n(x)$ is determined by the recursion
\[
(A^{\hat{u}} g)_n(x) = 0, \quad n = 0, \ldots, T-1, \tag{21}
\]
\[
g_T(x) = x. \tag{22}
\]
3. The probabilistic interpretations of $f$ and $g$ are, as before, given by
\[
f_n^y(x) = E_{n,x}\left[F(y, X_T^{\hat{u}})\right], \qquad g_n(x) = E_{n,x}\left[X_T^{\hat{u}}\right].
\]

We now have some comments on this result.

Remark 3.1 The first point to notice is that, as opposed to a standard time consistent problem, where we would have one equation for the determination of the optimal value function $V$, we now have a system of recursion equations (17)-(22) for the simultaneous determination of $V$, $f$ and $g$.

To see the recursive structure more clearly we can rewrite the equation for $V$ in the form
\[
V_n(x) = \sup_{u \in U} \left\{ E_{n,x}\left[V_{n+1}(X_{n+1}^u)\right] - (A^u f)_n(x,x) + (A^u f^x)_n(x) - \left(A^u (G \circ g)\right)_n(x) + \left(H_g^u G\right)_n(x) \right\}.
\]
The recursions for $f$ and $g$ can similarly be written as
\[
f_n^y(x) = E_{n,x}\left[f_{n+1}^y(X_{n+1}^{\hat{u}})\right], \qquad g_n(x) = E_{n,x}\left[g_{n+1}(X_{n+1}^{\hat{u}})\right].
\]
This is thus the backward recursion scheme discussed in Remark 2.3.

In the case when $F(x,y)$ does not depend upon $x$, and there is no $G$ term, the problem trivializes to a standard time consistent problem. The terms $-(A^u f)_n(x,x) + (A^u f^x)_n(x)$ in the $V$-equation (17) cancel, and the system reduces to the standard Bellman equation
\[
V_n(x) = \sup_{u \in U} E_{n,x}\left[V_{n+1}(X_{n+1}^u)\right],
\]
\[
V_T(x) = F(x).
\]
In order to solve the $V$-equation (17) we need to know $f$ and $g$, but these are determined by the equilibrium control law $\hat{u}$, which in turn is determined by the sup-part of (17). We can view the system as a fixed point problem, where the equilibrium control law $\hat{u}$ solves an equation of the form $M(\hat{u}) = \hat{u}$. The mapping $M$ is defined by the following procedure.

1. Start with a control law $u$.
2. Generate the functions $f$ and $g$ by the recursions
\[
(A^u f^{u,y})_n(x) = 0, \qquad (A^u g^u)_n(x) = 0,
\]
and the obvious terminal conditions.
3. Now plug these choices of $f$ and $g$ into the $V$-equation and solve it for $V$.
4. The control law which realizes the sup-part in (17) is denoted by $M(u)$.

The equilibrium control law is thus determined by the fixed point problem $M(\hat{u}) = \hat{u}$. This fixed point property is rather expected, since we are looking for a Nash equilibrium point, and it is well known that such a point is typically determined as the fixed point of a mapping. We also note that we can view the system as a fixed point problem for $f$ and $g$.
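The fixed point mapping $M$ can be written out directly for the pure-$F$ special case (no $G$ term). The sketch below uses a hypothetical two-state toy instance (the model data `T`, `P`, `F` are our own illustrative choices, not from the paper), implements the four-step procedure defining $M$, and iterates it from an arbitrary starting law; on this finite-horizon toy problem the iteration settles after a few steps, and the resulting law satisfies $M(\hat{u}) = \hat{u}$.

```python
# Hypothetical toy instance for the pure-F case (our own choice, not from the paper).
T = 3
STATES, CONTROLS = [0, 1], [0, 1]
P = {0: [[0.9, 0.1], [0.2, 0.8]],
     1: [[0.5, 0.5], [0.6, 0.4]]}
F = lambda y, x: x - 0.5 * (x - y) ** 2

def make_f(law):
    """Step 2: generate f from a control law via (A^u f^{u,y})_n(x) = 0,
    with terminal condition f_T(x, y) = F(y, x). Returns f[n][(x, y)]."""
    f = [dict() for _ in range(T + 1)]
    f[T] = {(x, y): F(y, x) for x in STATES for y in STATES}
    for n in range(T - 1, -1, -1):
        for x in STATES:
            Pu = P[law[(n, x)]]
            for y in STATES:
                f[n][(x, y)] = sum(Pu[x][xp] * f[n + 1][(xp, y)] for xp in STATES)
    return f

def M(law):
    """Steps 3-4: plug the f generated by `law` into the V-equation, solve it
    backward, and return the control law realizing the sup."""
    f = make_f(law)
    V = {x: F(x, x) for x in STATES}
    new_law = {}
    for n in range(T - 1, -1, -1):
        V_new = {}
        for x in STATES:
            best_u, best_val = None, -float("inf")
            for u in CONTROLS:
                Pu = P[u]
                val = sum(Pu[x][xp] * (V[xp] - f[n + 1][(xp, xp)] + f[n + 1][(xp, x)])
                          for xp in STATES)
                if val > best_val:
                    best_u, best_val = u, val
            new_law[(n, x)] = best_u
            V_new[x] = best_val
        V = V_new
    return new_law

# Step 1: start with an arbitrary law, then iterate M until a fixed point appears.
law = {(n, x): 0 for n in range(T) for x in STATES}
for _ in range(20):
    nxt = M(law)
    if nxt == law:
        break
    law = nxt
assert M(law) == law   # the equilibrium law solves M(û) = û
```

On a finite horizon the iteration is in fact a disguised backward induction: the time-$(T-1)$ entries of $M(u)$ do not depend on $u$ at all, so one correct layer is locked in per iteration and the map reaches its fixed point after at most $T$ applications. In general (infinite horizon, continuous state) no such convergence guarantee is claimed.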

3.5 The general case

We finally consider the most general functional form, where $J$ is given by
\[
J_n(x,u) = E_{n,x}\left[\sum_{k=n}^{T-1} C_{n,k}\left(x, X_k^u, u_k(X_k^u)\right) + F_n\left(x, X_T^u\right)\right] + G_n\left(x, E_{n,x}\left[X_T^u\right]\right). \tag{23}
\]
This case differs from the case above, firstly by the introduction of the sum, and secondly by allowing $F$ and $G$ to depend on current time $n$. The arguments for the $C$ terms in the sum above are very similar to the previous arguments for the $F$ term, and the occurrence of present time $n$ can be handled very much like the occurrence of the variable $x$. It is therefore natural to introduce indexed function sequences defined by
\[
f_n^{ky}(x) = E_{n,x}\left[F_k\left(y, X_T^{\hat{u}}\right)\right], \tag{24}
\]
\[
g_n(x) = E_{n,x}\left[X_T^{\hat{u}}\right], \tag{25}
\]
\[
c_n^{k,m,y}(x) = E_{n,x}\left[C_{k,m}\left(y, X_m^{\hat{u}}, \hat{u}_m(X_m^{\hat{u}})\right)\right], \tag{26}
\]
where, as usual, $\hat{u}$ denotes the equilibrium law. Arguing very much like in the simplified cases above, we then have the following main result.

Theorem 3.2 Consider a functional of the form (23), and assume that an equilibrium control law $\hat{u}$ exists. Then, with notation as in Section 2.1, the equilibrium value function $V$ satisfies the equation
\[
\sup_{u \in U} \Big\{ (A^u V)_n(x) + C_{nn}(x,x,u) - \sum_{m=n+1}^{T-1} (A^u c^m)_{nn}(x,x) + \sum_{m=n+1}^{T-1} (A^u c^{nmx})_n(x) - (A^u f)_{nn}(x,x) + (A^u f^{nx})_n(x) - \left(A^u (G \circ g)\right)_n(x) + \left(H_g^u G\right)_n(x) \Big\} = 0, \tag{27}
\]
\[
V_T(x) = F_T(x,x) + G_T(x,x), \tag{28}
\]
where the supremum above is realized by $u = \hat{u}_n(x)$. Furthermore, the following hold.

1. For every fixed $k = 0, 1, \ldots, T$ and every $y \in X$ the function sequence $f_n^{ky}(x)$ is determined by the recursion
\[
(A^{\hat{u}} f^{ky})_n(x) = 0, \quad n = 0, \ldots, T-1, \tag{29}
\]
\[
f_T^{ky}(x) = F_k(y, x), \tag{30}
\]
and $f_{nn}(x,x)$ is defined by $f_{nn}(x,x) = f_n^{nx}(x)$.

2. The function sequence $g_n(x)$ is determined by the recursion
\[
(A^{\hat{u}} g)_n(x) = 0, \quad n = 0, \ldots, T-1, \tag{31}
\]
\[
g_T(x) = x. \tag{32}
\]
3. For every $k, m = 0, 1, \ldots, T$, with $k \le m$, and every $y \in X$ the function sequence $c_n^{k,m,y}(x)$ is determined by the recursion
\[
\left(A^{\hat{u}} c^{k,m,y}\right)_n(x) = 0, \quad 0 \le n \le m-1, \tag{33}
\]
\[
c_m^{k,m,y}(x) = C_{k,m}\left(y, x, \hat{u}_m(x)\right), \tag{34}
\]
and $c_{nn}^m(x,x)$ is defined by $c_{nn}^m(x,x) = c_n^{nmx}(x)$.
4. The probabilistic interpretations of $f$, $g$ and $c$ are given by (24)-(26).
5. In the expressions above, $\hat{u}$ always denotes the equilibrium control law.
6. Recall that the operators $A^u$ and $A^{\hat{u}}$ only operate on lower (subscript) time indices and on variables within parentheses. Upper (superscript) indices are treated as constant parameters.

The detailed discussion at the end of Section 3.4 carries over also to this case.

3.6 Control constraints

In the discussions above we have assumed that there are no constraints on the controls, so at time $n$ we are allowed to choose any $u_n \in U$. The theory above can easily be extended to the case when we have constraints of the form $u_n \in D_n(X_n)$, where, for each $n$ and $x$, $D_n(x)$ is a subset of $U$. In the extended Bellman system we only have to change the expression $\sup_{u \in U}$ to $\sup_{u \in D_n(x)}$.

3.7 A slight extension

We can easily extend the theory above to the case when the term
\[
G_n\left(x, E_{n,x}\left[X_T^u\right]\right)
\]
is replaced by
\[
G_n\left(x, E_{n,x}\left[h(X_T^u)\right]\right)
\]
for some real valued function $h$. In this case we simply define the $g$ sequence by
\[
g_n(x) = E_{n,x}\left[h(X_T^{\hat{u}})\right].
\]
Theorem 3.2 will still hold, apart from the fact that the boundary condition for $g$ will be replaced by $g_T(x) = h(x)$.

3.8 A scaling result

In this section we derive a small scaling result, which is sometimes quite useful. Consider the objective functional (23) above and denote, as usual, the equilibrium control and value function by $\hat{u}$ and $V$ respectively. Let $\varphi : X \to R_+$ be a fixed real valued function and consider a new objective functional $J^\varphi$, defined by
\[
J_n^\varphi(x,u) = \varphi(x) J_n(x,u), \quad n = 0, 1, \ldots, T,
\]
and denote the corresponding equilibrium control and value function by $\hat{u}^\varphi$ and $V^\varphi$ respectively. Since player no. $n$ is (loosely speaking) trying to maximize $J_n^\varphi(x,u)$ over $u_n$, and $\varphi(x)$ is just a scaling factor which is not affected by $u_n$, the following result is intuitively obvious.

Proposition 3.3 Assume that $\varphi(x) J_n(x,u)$ is integrable for all $(n,u)$.² With notation as above we then have
\[
V_n^\varphi(x) = \varphi(x) V_n(x), \qquad \hat{u}_n^\varphi(x) = \hat{u}_n(x).
\]
Proof. The result follows from an easy (but messy) induction argument.

4 An equivalent time consistent problem

The object of the present section is to provide a surprising equivalence result between time inconsistent and time consistent problems. To this end we go back to the general extended HJB system of equations. The first part of this reads as
\[
\sup_{u \in U} \Big\{ (A^u V)_n(x) + C_{nn}(x,x,u) - \sum_{m=n+1}^{T-1} (A^u c^m)_{nn}(x,x) + \sum_{m=n+1}^{T-1} (A^u c^{nmx})_n(x) - (A^u f)_{nn}(x,x) + (A^u f^{nx})_n(x) - \left(A^u (G \circ g)\right)_n(x) + \left(H_g^u G\right)_n(x) \Big\} = 0.
\]
Now consider the equilibrium control law $\hat{u}$. Using $\hat{u}$ we can then construct $f$, $g$, and $c$ by solving the equations (24)-(26). We now define the function $h$ by
\[
h_n(x,u) = C_{nn}(x,x,u) - \sum_{m=n+1}^{T-1} (A^u c^m)_{nn}(x,x) + \sum_{m=n+1}^{T-1} (A^u c^{nmx})_n(x) - (A^u f)_{nn}(x,x) + (A^u f^{nx})_n(x) - \left(A^u (G \circ g)\right)_n(x) + \left(H_g^u G\right)_n(x),
\]

² An easy sufficient condition is that $\varphi$ takes values in $(0,1)$.
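The paper leaves the induction to the reader; the following one-step sketch is our own reconstruction of how the argument presumably runs, so treat it as an indication rather than the paper's proof.

```latex
\begin{aligned}
&\text{By definition } J^{\varphi}_n(x,u) = \varphi(x)\,J_n(x,u) \text{ for every control law } u.\\
&\text{Assume (induction hypothesis) that } \hat{u}^{\varphi}_k = \hat{u}_k
 \text{ for } k = n+1,\dots,T-1,\\
&\text{and let } u^{(n)} \text{ denote the law that plays the value } u \text{ at time } n
 \text{ and follows } \hat{u} \text{ thereafter. Then}\\
&\qquad J^{\varphi}_n\bigl(x, u^{(n)}\bigr) = \varphi(x)\, J_n\bigl(x, u^{(n)}\bigr),\\
&\text{and since } \varphi(x) > 0 \text{ does not depend on } u,\\
&\qquad \operatorname*{arg\,max}_{u \in U} J^{\varphi}_n\bigl(x, u^{(n)}\bigr)
       = \operatorname*{arg\,max}_{u \in U} J_n\bigl(x, u^{(n)}\bigr).\\
&\text{Hence } \hat{u}^{\varphi}_n(x) = \hat{u}_n(x), \text{ and }
 V^{\varphi}_n(x) = J^{\varphi}_n(x, \hat{u}) = \varphi(x)\,V_n(x).
\end{aligned}
```

The "messiness" alluded to in the proof presumably lies in handling ties in the argmax and the integrability bookkeeping, which the sketch above glosses over.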

where it is important to notice that $h$ does not involve the equilibrium value function $V$. With this definition of $h$, the equation for $V$ above and its boundary condition become
\[
\sup_{u \in U} \left\{ (A^u V)_n(x) + h_n(x,u) \right\} = 0,
\]
\[
V_T(x) = H(x),
\]
where $H$ is defined by $H(x) = F_T(x,x) + G_T(x,x)$. We now observe, simply by inspection, that this is a standard HJB equation for the standard time consistent optimal control problem of maximizing
\[
E_{n,x}\left[\sum_{k=n}^{T-1} h_k\left(X_k, u_k(X_k)\right) + H(X_T)\right]. \tag{35}
\]
We have thus proved the following result.

Proposition 4.1 For every time inconsistent problem in the present framework there exists a standard, time consistent, optimal control problem with the following properties.

- The optimal value function for the standard problem coincides with the equilibrium value function for the time inconsistent problem.
- The optimal control for the standard problem coincides with the equilibrium control for the time inconsistent problem.
- The objective functional for the standard problem is given by (35).

We immediately remark that Proposition 4.1 above is mostly of theoretical interest, and of little practical value. The reason is of course that in order to formulate the equivalent standard problem we need to know the equilibrium control $\hat{u}$. In our opinion it is, however, quite surprising.

Related results can be found in [1], [7], [10] and [13]. In these papers it is proved that, for various models where time inconsistency stems from non-exponential discounting, there exists an equivalent standard problem (with exponential discounting). Proposition 4.1 differs from the results in the cited references in two ways. Firstly, it differs by being quite general and not confined to a particular model. Secondly, it differs from the results in the cited references by having a different structure. In other words, for the models studied in the cited papers, the equivalent problem described in Proposition 4.1 is structurally different from the equivalent problems presented in the cited references.
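Proposition 4.1 can be checked numerically in the pure-$F$ special case (no $C$ or $G$ terms, so $h_n(x,u) = -(A^u f)_n(x,x) + (A^u f^x)_n(x)$, in which the $f_n(x,x)$ terms cancel). The sketch below uses a hypothetical two-state toy instance (our own model choices, not from the paper): it solves the time inconsistent problem by the backward scheme of Proposition 3.1, builds $h$ from the resulting equilibrium $f$, runs ordinary dynamic programming on (35), and confirms that the standard problem reproduces the equilibrium control and value function.

```python
# Hypothetical toy instance for the pure-F case J_n(x,u) = E_{n,x}[F(x, X_T^u)].
T = 3
STATES, CONTROLS = [0, 1], [0, 1]
P = {0: [[0.9, 0.1], [0.2, 0.8]],
     1: [[0.5, 0.5], [0.6, 0.4]]}
F = lambda y, x: x - 0.5 * (x - y) ** 2

# Step 1: solve the time inconsistent problem (backward scheme of Prop. 3.1),
# keeping the whole sequence f[n][(x, y)] for later use.
V = {x: F(x, x) for x in STATES}
f = [dict() for _ in range(T + 1)]
f[T] = {(x, y): F(y, x) for x in STATES for y in STATES}
u_hat, V_eq = {}, [None] * (T + 1)
V_eq[T] = dict(V)
for n in range(T - 1, -1, -1):
    V_new = {}
    for x in STATES:
        best_u, best_val = None, -float("inf")
        for u in CONTROLS:
            val = sum(P[u][x][xp] * (V[xp] - f[n + 1][(xp, xp)] + f[n + 1][(xp, x)])
                      for xp in STATES)
            if val > best_val:
                best_u, best_val = u, val
        u_hat[(n, x)], V_new[x] = best_u, best_val
    for x in STATES:                              # f under û, recursion (9)
        for y in STATES:
            f[n][(x, y)] = sum(P[u_hat[(n, x)]][x][xp] * f[n + 1][(xp, y)]
                               for xp in STATES)
    V = V_new
    V_eq[n] = dict(V)

# Step 2: the equivalent standard problem of Proposition 4.1. In the pure-F case
#   h_n(x, u) = -E^u[f_{n+1}(X', X')] + E^u[f_{n+1}(X', x)],  H(x) = F(x, x).
def h(n, x, u):
    return sum(P[u][x][xp] * (f[n + 1][(xp, x)] - f[n + 1][(xp, xp)])
               for xp in STATES)

W = {x: F(x, x) for x in STATES}                  # ordinary dynamic programming on (35)
u_std = {}
for n in range(T - 1, -1, -1):
    W_new = {}
    for x in STATES:
        vals = {u: sum(P[u][x][xp] * W[xp] for xp in STATES) + h(n, x, u)
                for u in CONTROLS}
        u_std[(n, x)] = max(vals, key=vals.get)
        W_new[x] = vals[u_std[(n, x)]]
    W = W_new

# Proposition 4.1: the standard problem reproduces the equilibrium objects.
assert u_std == u_hat
assert all(abs(W[x] - V_eq[0][x]) < 1e-12 for x in STATES)
```

As the proposition warns, this construction is circular in practice: building `h` already requires the equilibrium `f`, i.e. the equilibrium law itself.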
See Section 8.3 for a more detailed discussion of issues of this kind.

Furthermore, Proposition 4.1 has modeling consequences for economics. Suppose that you want to model consumer behavior. You have done this using standard time consistent dynamic utility maximization, and now you are contemplating introducing time inconsistent preferences to obtain a richer class of


More information

Sufficient Optimality Condition for a Risk-Sensitive Control Problem for Backward Stochastic Differential Equations and an Application

Sufficient Optimality Condition for a Risk-Sensitive Control Problem for Backward Stochastic Differential Equations and an Application Jornal of Nmerical Mathematics and Stochastics, 9(1) : 48-6, 17 http://www.jnmas.org/jnmas9-4.pdf JNM@S Eclidean Press, LLC Online: ISSN 151-3 Sfficient Optimality Condition for a Risk-Sensitive Control

More information

Characterizations of probability distributions via bivariate regression of record values

Characterizations of probability distributions via bivariate regression of record values Metrika (2008) 68:51 64 DOI 10.1007/s00184-007-0142-7 Characterizations of probability distribtions via bivariate regression of record vales George P. Yanev M. Ahsanllah M. I. Beg Received: 4 October 2006

More information

Chapter 3. Preferences and Utility

Chapter 3. Preferences and Utility Chapter 3 Preferences and Utilit Microeconomics stdies how individals make choices; different individals make different choices n important factor in making choices is individal s tastes or preferences

More information

The Dual of the Maximum Likelihood Method

The Dual of the Maximum Likelihood Method Department of Agricltral and Resorce Economics University of California, Davis The Dal of the Maximm Likelihood Method by Qirino Paris Working Paper No. 12-002 2012 Copyright @ 2012 by Qirino Paris All

More information

arxiv: v1 [physics.flu-dyn] 11 Mar 2011

arxiv: v1 [physics.flu-dyn] 11 Mar 2011 arxiv:1103.45v1 [physics.fl-dyn 11 Mar 011 Interaction of a magnetic dipole with a slowly moving electrically condcting plate Evgeny V. Votyakov Comptational Science Laboratory UCY-CompSci, Department

More information

Optimal mean-variance portfolio selection

Optimal mean-variance portfolio selection Math Finan Econ 2017) 11:137 160 DOI 10.1007/s11579-016-0174-8 Optimal mean-variance portfolio selection Jesper Lnd Pedersen 1 Goran Peskir 2 Received: 26 November 2015 / Accepted: 20 May 2016 / Pblished

More information

RELIABILITY ASPECTS OF PROPORTIONAL MEAN RESIDUAL LIFE MODEL USING QUANTILE FUNC- TIONS

RELIABILITY ASPECTS OF PROPORTIONAL MEAN RESIDUAL LIFE MODEL USING QUANTILE FUNC- TIONS RELIABILITY ASPECTS OF PROPORTIONAL MEAN RESIDUAL LIFE MODEL USING QUANTILE FUNC- TIONS Athors: N.UNNIKRISHNAN NAIR Department of Statistics, Cochin University of Science Technology, Cochin, Kerala, INDIA

More information

9. Tensor product and Hom

9. Tensor product and Hom 9. Tensor prodct and Hom Starting from two R-modles we can define two other R-modles, namely M R N and Hom R (M, N), that are very mch related. The defining properties of these modles are simple, bt those

More information

EXISTENCE AND FAIRNESS OF VALUE ALLOCATION WITHOUT CONVEX PREFERENCES. Nicholas C. Yanne1is. Discussion Paper No.

EXISTENCE AND FAIRNESS OF VALUE ALLOCATION WITHOUT CONVEX PREFERENCES. Nicholas C. Yanne1is. Discussion Paper No. EXISTENCE AND FAIRNESS OF VALUE ALLOCATION WITHOUT CONVEX PREFERENCES by Nicholas C. Yanne1is Discssion Paper No. 184, Agst, 1983 Center for Economic Research Department of Economics University of Minnesota

More information

Queueing analysis of service deferrals for load management in power systems

Queueing analysis of service deferrals for load management in power systems Qeeing analysis of service deferrals for load management in power systems Andrés Ferragt and Fernando Paganini Universidad ORT Urgay Abstract With the advent of renewable sorces and Smart- Grid deployments,

More information

The Lehmer matrix and its recursive analogue

The Lehmer matrix and its recursive analogue The Lehmer matrix and its recrsive analoge Emrah Kilic, Pantelimon Stănică TOBB Economics and Technology University, Mathematics Department 0660 Sogtoz, Ankara, Trkey; ekilic@etedtr Naval Postgradate School,

More information

The Replenishment Policy for an Inventory System with a Fixed Ordering Cost and a Proportional Penalty Cost under Poisson Arrival Demands

The Replenishment Policy for an Inventory System with a Fixed Ordering Cost and a Proportional Penalty Cost under Poisson Arrival Demands Scientiae Mathematicae Japonicae Online, e-211, 161 167 161 The Replenishment Policy for an Inventory System with a Fixed Ordering Cost and a Proportional Penalty Cost nder Poisson Arrival Demands Hitoshi

More information

CRITERIA FOR TOEPLITZ OPERATORS ON THE SPHERE. Jingbo Xia

CRITERIA FOR TOEPLITZ OPERATORS ON THE SPHERE. Jingbo Xia CRITERIA FOR TOEPLITZ OPERATORS ON THE SPHERE Jingbo Xia Abstract. Let H 2 (S) be the Hardy space on the nit sphere S in C n. We show that a set of inner fnctions Λ is sfficient for the prpose of determining

More information

The Cryptanalysis of a New Public-Key Cryptosystem based on Modular Knapsacks

The Cryptanalysis of a New Public-Key Cryptosystem based on Modular Knapsacks The Cryptanalysis of a New Pblic-Key Cryptosystem based on Modlar Knapsacks Yeow Meng Chee Antoine Jox National Compter Systems DMI-GRECC Center for Information Technology 45 re d Ulm 73 Science Park Drive,

More information

Discrete Applied Mathematics. The induced path function, monotonicity and betweenness

Discrete Applied Mathematics. The induced path function, monotonicity and betweenness Discrete Applied Mathematics 158 (2010) 426 433 Contents lists available at ScienceDirect Discrete Applied Mathematics jornal homepage: www.elsevier.com/locate/dam The indced path fnction, monotonicity

More information

Worst-case analysis of the LPT algorithm for single processor scheduling with time restrictions

Worst-case analysis of the LPT algorithm for single processor scheduling with time restrictions OR Spectrm 06 38:53 540 DOI 0.007/s009-06-043-5 REGULAR ARTICLE Worst-case analysis of the LPT algorithm for single processor schedling with time restrictions Oliver ran Fan Chng Ron Graham Received: Janary

More information

An Investigation into Estimating Type B Degrees of Freedom

An Investigation into Estimating Type B Degrees of Freedom An Investigation into Estimating Type B Degrees of H. Castrp President, Integrated Sciences Grop Jne, 00 Backgrond The degrees of freedom associated with an ncertainty estimate qantifies the amont of information

More information

3.1 The Basic Two-Level Model - The Formulas

3.1 The Basic Two-Level Model - The Formulas CHAPTER 3 3 THE BASIC MULTILEVEL MODEL AND EXTENSIONS In the previos Chapter we introdced a nmber of models and we cleared ot the advantages of Mltilevel Models in the analysis of hierarchically nested

More information

The Heat Equation and the Li-Yau Harnack Inequality

The Heat Equation and the Li-Yau Harnack Inequality The Heat Eqation and the Li-Ya Harnack Ineqality Blake Hartley VIGRE Research Paper Abstract In this paper, we develop the necessary mathematics for nderstanding the Li-Ya Harnack ineqality. We begin with

More information

Elements of Coordinate System Transformations

Elements of Coordinate System Transformations B Elements of Coordinate System Transformations Coordinate system transformation is a powerfl tool for solving many geometrical and kinematic problems that pertain to the design of gear ctting tools and

More information

Approach to a Proof of the Riemann Hypothesis by the Second Mean-Value Theorem of Calculus

Approach to a Proof of the Riemann Hypothesis by the Second Mean-Value Theorem of Calculus Advances in Pre Mathematics, 6, 6, 97- http://www.scirp.org/jornal/apm ISSN Online: 6-384 ISSN Print: 6-368 Approach to a Proof of the Riemann Hypothesis by the Second Mean-Vale Theorem of Calcls Alfred

More information

Remarks on strongly convex stochastic processes

Remarks on strongly convex stochastic processes Aeqat. Math. 86 (01), 91 98 c The Athor(s) 01. This article is pblished with open access at Springerlink.com 0001-9054/1/010091-8 pblished online November 7, 01 DOI 10.1007/s00010-01-016-9 Aeqationes Mathematicae

More information

Sensitivity Analysis in Bayesian Networks: From Single to Multiple Parameters

Sensitivity Analysis in Bayesian Networks: From Single to Multiple Parameters Sensitivity Analysis in Bayesian Networks: From Single to Mltiple Parameters Hei Chan and Adnan Darwiche Compter Science Department University of California, Los Angeles Los Angeles, CA 90095 {hei,darwiche}@cs.cla.ed

More information

Formulas for stopped diffusion processes with stopping times based on drawdowns and drawups

Formulas for stopped diffusion processes with stopping times based on drawdowns and drawups Stochastic Processes and their Applications 119 (009) 563 578 www.elsevier.com/locate/spa Formlas for stopped diffsion processes with stopping times based on drawdowns and drawps Libor Pospisil, Jan Vecer,

More information

Convergence analysis of ant colony learning

Convergence analysis of ant colony learning Delft University of Technology Delft Center for Systems and Control Technical report 11-012 Convergence analysis of ant colony learning J van Ast R Babška and B De Schtter If yo want to cite this report

More information

Constructive Root Bound for k-ary Rational Input Numbers

Constructive Root Bound for k-ary Rational Input Numbers Constrctive Root Bond for k-ary Rational Inpt Nmbers Sylvain Pion, Chee Yap To cite this version: Sylvain Pion, Chee Yap. Constrctive Root Bond for k-ary Rational Inpt Nmbers. 19th Annal ACM Symposim on

More information

Conditions for Approaching the Origin without Intersecting the x-axis in the Liénard Plane

Conditions for Approaching the Origin without Intersecting the x-axis in the Liénard Plane Filomat 3:2 (27), 376 377 https://doi.org/.2298/fil7276a Pblished by Faclty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Conditions for Approaching

More information

A Characterization of the Domain of Beta-Divergence and Its Connection to Bregman Variational Model

A Characterization of the Domain of Beta-Divergence and Its Connection to Bregman Variational Model entropy Article A Characterization of the Domain of Beta-Divergence and Its Connection to Bregman Variational Model Hyenkyn Woo School of Liberal Arts, Korea University of Technology and Edcation, Cheonan

More information

arxiv: v3 [gr-qc] 29 Jun 2015

arxiv: v3 [gr-qc] 29 Jun 2015 QUANTITATIVE DECAY RATES FOR DISPERSIVE SOLUTIONS TO THE EINSTEIN-SCALAR FIELD SYSTEM IN SPHERICAL SYMMETRY JONATHAN LUK AND SUNG-JIN OH arxiv:402.2984v3 [gr-qc] 29 Jn 205 Abstract. In this paper, we stdy

More information

Setting The K Value And Polarization Mode Of The Delta Undulator

Setting The K Value And Polarization Mode Of The Delta Undulator LCLS-TN-4- Setting The Vale And Polarization Mode Of The Delta Undlator Zachary Wolf, Heinz-Dieter Nhn SLAC September 4, 04 Abstract This note provides the details for setting the longitdinal positions

More information

Prediction of Transmission Distortion for Wireless Video Communication: Analysis

Prediction of Transmission Distortion for Wireless Video Communication: Analysis Prediction of Transmission Distortion for Wireless Video Commnication: Analysis Zhifeng Chen and Dapeng W Department of Electrical and Compter Engineering, University of Florida, Gainesville, Florida 326

More information

Hedge Funds Performance Fees and Investments

Hedge Funds Performance Fees and Investments Hedge Fnds Performance Fees and Investments A Thesis Sbmitted to the Faclty of the WORCESTER POLYTECHNIC INSTITUTE In partial flfillment of the reqirements for the Degree of Master of Science in Financial

More information

Influence of the Non-Linearity of the Aerodynamic Coefficients on the Skewness of the Buffeting Drag Force. Vincent Denoël *, 1), Hervé Degée 1)

Influence of the Non-Linearity of the Aerodynamic Coefficients on the Skewness of the Buffeting Drag Force. Vincent Denoël *, 1), Hervé Degée 1) Inflence of the Non-Linearity of the Aerodynamic oefficients on the Skewness of the Bffeting rag Force Vincent enoël *, 1), Hervé egée 1) 1) epartment of Material mechanics and Strctres, University of

More information

Graphs and Their. Applications (6) K.M. Koh* F.M. Dong and E.G. Tay. 17 The Number of Spanning Trees

Graphs and Their. Applications (6) K.M. Koh* F.M. Dong and E.G. Tay. 17 The Number of Spanning Trees Graphs and Their Applications (6) by K.M. Koh* Department of Mathematics National University of Singapore, Singapore 1 ~ 7543 F.M. Dong and E.G. Tay Mathematics and Mathematics EdOOation National Institte

More information

Chapter 2 Introduction to the Stiffness (Displacement) Method. The Stiffness (Displacement) Method

Chapter 2 Introduction to the Stiffness (Displacement) Method. The Stiffness (Displacement) Method CIVL 7/87 Chater - The Stiffness Method / Chater Introdction to the Stiffness (Dislacement) Method Learning Objectives To define the stiffness matrix To derive the stiffness matrix for a sring element

More information

Power Enhancement in High Dimensional Cross-Sectional Tests

Power Enhancement in High Dimensional Cross-Sectional Tests Power Enhancement in High Dimensional Cross-Sectional ests arxiv:30.3899v2 [stat.me] 6 Ag 204 Jianqing Fan, Yan Liao and Jiawei Yao Department of Operations Research and Financial Engineering, Princeton

More information

A Regulator for Continuous Sedimentation in Ideal Clarifier-Thickener Units

A Regulator for Continuous Sedimentation in Ideal Clarifier-Thickener Units A Reglator for Continos Sedimentation in Ideal Clarifier-Thickener Units STEFAN DIEHL Centre for Mathematical Sciences, Lnd University, P.O. Box, SE- Lnd, Sweden e-mail: diehl@maths.lth.se) Abstract. The

More information

Department of Industrial Engineering Statistical Quality Control presented by Dr. Eng. Abed Schokry

Department of Industrial Engineering Statistical Quality Control presented by Dr. Eng. Abed Schokry Department of Indstrial Engineering Statistical Qality Control presented by Dr. Eng. Abed Schokry Department of Indstrial Engineering Statistical Qality Control C and U Chart presented by Dr. Eng. Abed

More information

Research Article Permanence of a Discrete Predator-Prey Systems with Beddington-DeAngelis Functional Response and Feedback Controls

Research Article Permanence of a Discrete Predator-Prey Systems with Beddington-DeAngelis Functional Response and Feedback Controls Hindawi Pblishing Corporation Discrete Dynamics in Natre and Society Volme 2008 Article ID 149267 8 pages doi:101155/2008/149267 Research Article Permanence of a Discrete Predator-Prey Systems with Beddington-DeAngelis

More information

UNCERTAINTY FOCUSED STRENGTH ANALYSIS MODEL

UNCERTAINTY FOCUSED STRENGTH ANALYSIS MODEL 8th International DAAAM Baltic Conference "INDUSTRIAL ENGINEERING - 19-1 April 01, Tallinn, Estonia UNCERTAINTY FOCUSED STRENGTH ANALYSIS MODEL Põdra, P. & Laaneots, R. Abstract: Strength analysis is a

More information

Assignment Fall 2014

Assignment Fall 2014 Assignment 5.086 Fall 04 De: Wednesday, 0 December at 5 PM. Upload yor soltion to corse website as a zip file YOURNAME_ASSIGNMENT_5 which incldes the script for each qestion as well as all Matlab fnctions

More information

Spanning Trees with Many Leaves in Graphs without Diamonds and Blossoms

Spanning Trees with Many Leaves in Graphs without Diamonds and Blossoms Spanning Trees ith Many Leaes in Graphs ithot Diamonds and Blossoms Pal Bonsma Florian Zickfeld Technische Uniersität Berlin, Fachbereich Mathematik Str. des 7. Jni 36, 0623 Berlin, Germany {bonsma,zickfeld}@math.t-berlin.de

More information

Admissibility under the LINEX loss function in non-regular case. Hidekazu Tanaka. Received November 5, 2009; revised September 2, 2010

Admissibility under the LINEX loss function in non-regular case. Hidekazu Tanaka. Received November 5, 2009; revised September 2, 2010 Scientiae Mathematicae Japonicae Online, e-2012, 427 434 427 Admissibility nder the LINEX loss fnction in non-reglar case Hidekaz Tanaka Received November 5, 2009; revised September 2, 2010 Abstract. In

More information

Centre de Referència en Economia Analítica

Centre de Referència en Economia Analítica Centre de Referència en Economia Analítica Barcelona Economics Working Paper Series Working Paper nº 325 StrategicReqirements with Indi erence: Single- Peaked verss Single-Plateaed Preferences Dolors Berga

More information

Asymptotics of dissipative nonlinear evolution equations with ellipticity: different end states

Asymptotics of dissipative nonlinear evolution equations with ellipticity: different end states J. Math. Anal. Appl. 33 5) 5 35 www.elsevier.com/locate/jmaa Asymptotics of dissipative nonlinear evoltion eqations with ellipticity: different end states enjn Dan, Changjiang Zh Laboratory of Nonlinear

More information

Chem 4501 Introduction to Thermodynamics, 3 Credits Kinetics, and Statistical Mechanics. Fall Semester Homework Problem Set Number 10 Solutions

Chem 4501 Introduction to Thermodynamics, 3 Credits Kinetics, and Statistical Mechanics. Fall Semester Homework Problem Set Number 10 Solutions Chem 4501 Introdction to Thermodynamics, 3 Credits Kinetics, and Statistical Mechanics Fall Semester 2017 Homework Problem Set Nmber 10 Soltions 1. McQarrie and Simon, 10-4. Paraphrase: Apply Eler s theorem

More information

RESGen: Renewable Energy Scenario Generation Platform

RESGen: Renewable Energy Scenario Generation Platform 1 RESGen: Renewable Energy Scenario Generation Platform Emil B. Iversen, Pierre Pinson, Senior Member, IEEE, and Igor Ardin Abstract Space-time scenarios of renewable power generation are increasingly

More information

Second-Order Wave Equation

Second-Order Wave Equation Second-Order Wave Eqation A. Salih Department of Aerospace Engineering Indian Institte of Space Science and Technology, Thirvananthapram 3 December 016 1 Introdction The classical wave eqation is a second-order

More information

When are Two Numerical Polynomials Relatively Prime?

When are Two Numerical Polynomials Relatively Prime? J Symbolic Comptation (1998) 26, 677 689 Article No sy980234 When are Two Nmerical Polynomials Relatively Prime? BERNHARD BECKERMANN AND GEORGE LABAHN Laboratoire d Analyse Nmériqe et d Optimisation, Université

More information

Linearly Solvable Markov Games

Linearly Solvable Markov Games Linearly Solvable Markov Games Krishnamrthy Dvijotham and mo Todorov Abstract Recent work has led to an interesting new theory of linearly solvable control, where the Bellman eqation characterizing the

More information

We automate the bivariate change-of-variables technique for bivariate continuous random variables with

We automate the bivariate change-of-variables technique for bivariate continuous random variables with INFORMS Jornal on Compting Vol. 4, No., Winter 0, pp. 9 ISSN 09-9856 (print) ISSN 56-558 (online) http://dx.doi.org/0.87/ijoc.046 0 INFORMS Atomating Biariate Transformations Jeff X. Yang, John H. Drew,

More information

Decoder Error Probability of MRD Codes

Decoder Error Probability of MRD Codes Decoder Error Probability of MRD Codes Maximilien Gadolea Department of Electrical and Compter Engineering Lehigh University Bethlehem, PA 18015 USA E-mail: magc@lehighed Zhiyan Yan Department of Electrical

More information