Risk-Averse Stochastic Dual Dynamic Programming


Risk-Averse Stochastic Dual Dynamic Programming

Václav Kozmík
Department of Probability and Mathematical Statistics
Charles University in Prague
Prague, Czech Republic

David P. Morton
Graduate Program in Operations Research & Industrial Engineering
The University of Texas at Austin
Austin, Texas, USA

February 27, 2013

Abstract

We formulate a risk-averse multi-stage stochastic program using conditional value at risk as the risk measure. The underlying random process is assumed to be stage-wise independent, and a stochastic dual dynamic programming (SDDP) algorithm is applied. We discuss the poor performance of the standard upper bound estimator in the risk-averse setting and propose a new approach based on importance sampling, which yields improved upper bound estimators. Modest additional computational effort is required to use our new estimators. Our procedures allow for significant improvement in terms of controlling solution quality in an SDDP algorithm in the risk-averse setting. We give computational results for multi-stage asset allocation using a log-normal distribution for the asset returns.

Keywords: Multi-stage stochastic programming, stochastic dual dynamic programming, importance sampling, risk-averse optimization

1 Introduction

We formulate and solve a multi-stage stochastic program, which uses conditional value at risk (CVaR) as the measure of risk. Our solution procedure is based on stochastic dual dynamic programming (SDDP), which has been employed successfully in a range of applications, exhibiting good computational tractability on large-scale problem instances; see, e.g., [9, 10, 11, 15, 23, 26]. There has been very limited application of SDDP to models with the type of risk measure that we use, which involves a form of CVaR that is nested to ensure a notion of time consistency. See Ruszczynski [29] and Shapiro [32] for discussions of time-consistent risk measures in multi-stage stochastic optimization. A standard multi-stage recourse formulation has an additive form of expected utility.
In this case, the usual upper bound estimator in an SDDP algorithm is computed by solving subproblems along linear sample paths through the scenario tree, and the resulting computational effort is linear in the product of the number of stages and the number of samples. As we describe below, this type of estimator is not valid for a model with a nested CVaR risk measure, and this has hampered application of SDDP to such time-consistent risk-averse formulations. Two solutions have been proposed in the literature to circumvent the difficulty we have just described. One possibility is to first solve a risk-neutral version of the problem instance under some suitable termination criterion, and then use the same number of iterations of SDDP to solve the risk-averse model under nested CVaR. Philpott and de Matos [24] report good computational experience with this approach. However, this leaves open the question of whether the same number of iterations is always appropriate for both risk-neutral and risk-averse model instances. Alternatively, we can compute an upper bound estimator via the conditional sampling method of Shapiro [33]. However, the associated computational effort grows exponentially in the number of stages, and as Shapiro [33] discusses, the bound can be loose.

The purpose of this article is to propose, analyze, and computationally demonstrate a new upper bound estimator for SDDP algorithms under a nested CVaR risk measure. The computational effort required to form our bound grows linearly in the number of time stages, and the estimation procedure fits flawlessly in the standard SDDP framework. Moreover, our bound is significantly tighter than the estimator based on conditional sampling, which further facilitates application of natural termination criteria, which are usually based on comparing the difference between the lower bound and an upper bound estimator.

SDDP originated in the work of Pereira and Pinto [22], and inspired a number of related algorithms [7, 9, 21, 25], which aim to improve its efficiency. The nested Benders decomposition algorithm [5] applied to a multi-stage stochastic program requires computational effort that grows exponentially in the number of stages. SDDP-style algorithms instead have computational effort per iteration that grows linearly in the number of stages. To achieve this, SDDP algorithms rely on the assumption of stage-wise independence. That said, SDDP algorithms can also be applied in some special cases of additive interstage dependence, such as when an autoregressive process governs the right-hand side vectors [17]. Other important algorithms designed to solve multi-stage stochastic programs include extensions of stochastic decomposition to the multi-stage case [14, 35] and progressive hedging [27].
While these algorithms have been developed in the risk-neutral setting, they extend in natural ways to handle the risk measure we consider here, with the caveat that the nested CVaR expression can be exactly computed only when the scenario tree is of modest size. When sampling is required, these algorithms may also benefit from the type of estimator we propose.

Risk-averse stochastic optimization has received significant attention in recent years because of its attractive properties for decision makers. The properties required of coherent risk measures, introduced in Artzner et al. [2], are now widely accepted for time-static risk-averse optimization. Many risk measures are known to satisfy these properties; for an overview see, for instance, Krokhmal et al. [19]. A number of proposals have been put forward to extend coherent risk measures to handle multi-stage stochastic optimization. In the multi-stage case we seek a policy, which specifies a decision rule at every stage t for any realization of the stochastic process up to time t. While there are multiple approaches, to obtain suitable optimal policies not only coherency but also time consistency should be satisfied. This latter property states that optimal decisions at time t should not depend on future states of the system, which we already know cannot be realized, conditional on the state of the system at time t. Despite the natural statement of this requirement, there are a variety of risk measures which fail to meet this condition. See Shapiro [32] and Rudloff et al. [28] for such examples, along with further discussions of why risk measures that are not time-consistent can produce unsatisfactory policies. The main approach to construct time-consistent risk measures involves so-called conditional risk measures, introduced in Ruszczynski and Shapiro [30]. The construction is based on nesting of the risk measures, conditional on the state of the system. While this ensures time consistency, it leads to the computational difficulties that we describe above. (See Philpott and de Matos [24] and Shapiro [33] for further discussion.)
With time-consistent CVaR chosen as the risk measure, we propose a new approach to upper bound estimation to overcome these computational difficulties.

We organize the remainder of this article as follows. We present our risk-averse multi-stage model in Section 2 and briefly review the SDDP algorithm in Section 3. Section 4 extends this description to the risk-averse case. We develop and analyze the proposed upper bound estimators in Section 5, and we provide computational results for two asset allocation models, both with and without transaction costs, in Section 6. We conclude and discuss ideas for future work in Section 7.

2 Multi-stage risk-averse model

We formulate a multi-stage stochastic program with a nested CVaR risk measure in the same manner as Shapiro [33], largely following his notation. Hence, we provide a brief problem statement. The model has random parameters in stages t = 2,...,T, denoted ξ_t = (c_t, A_t, B_t, b_t), which are stage-wise independent and governed by a known, or well-estimated, distribution. The parameters of the first stage, ξ_1 = (c_1, A_1, b_1), are assumed to be known when we make decision x_1, but only a probability distribution governing future realizations, ξ_2,...,ξ_T, is assumed known. The realization of ξ_2 is known when decisions x_2 must be made, and so on to stage T. We denote the data process up to time t by ξ_[t], meaning ξ_[t] = (ξ_1,...,ξ_t). Our model allows specification of a different risk aversion coefficient and confidence level, denoted λ_t, α_t ∈ [0,1], respectively, at each time stage, t = 2,...,T. In order to provide the nested formulation of the model we introduce the following operator, which forms a weighted sum of expectation and risk associated with random loss Z:

$$\rho_{t,\xi_{[t-1]}}[Z] = (1-\lambda_t)\,\mathbb{E}\big[Z \mid \xi_{[t-1]}\big] + \lambda_t\,\mathrm{CVaR}_{\alpha_t}\big[Z \mid \xi_{[t-1]}\big]. \qquad (1)$$

We can write the risk-averse multi-stage model with T stages in the following form:

$$\min_{\substack{A_1 x_1 = b_1 \\ x_1 \ge 0}} c_1^\top x_1 + \rho_{2,\xi_{[1]}}\Bigg[\min_{\substack{A_2 x_2 = b_2 - B_2 x_1 \\ x_2 \ge 0}} c_2^\top x_2 + \cdots + \rho_{T,\xi_{[T-1]}}\Big[\min_{\substack{A_T x_T = b_T - B_T x_{T-1} \\ x_T \ge 0}} c_T^\top x_T\Big]\Bigg]. \qquad (2)$$

We assume model (2) is feasible, has relatively complete recourse, and has a finite optimal value. The special case with λ_t = 0, t = 2,...,T, is risk-neutral because we then minimize expected cost. Model (2) is distinguished from other possible approaches to characterizing risk by taking the risk measure as a function of the recourse value at each stage. This ensures time consistency of the risk measure. See Rudloff et al. [28] and Ruszczynski [29] for discussions of a conditional certainty-equivalent interpretation from utility theory for such nested formulations. A solution to model (2) is a policy, and time consistency means that the resulting policy has a natural interpretation that lends itself to implementation along a sample path with realizations that unfold sequentially.

Our model, with the nested risk measure, allows a dynamic programming formulation to be developed, as is described in [24, 33]. Using as the definition of conditional value at risk

$$\mathrm{CVaR}_{\alpha}[Z] = \min_{u}\Big(u + \tfrac{1}{\alpha}\,\mathbb{E}\,[Z-u]_+\Big),$$

where $[\cdot]_+ \equiv \max\{\cdot,0\}$, we can write the first stage problem as:

$$\begin{aligned} \min_{x_1,u_1}\;& c_1^\top x_1 + \lambda_2 u_1 + \mathcal{Q}_2(x_1, u_1) \\ \text{s.t. }& A_1 x_1 = b_1 \\ & x_1 \ge 0. \end{aligned} \qquad (3)$$
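The minimization form of CVaR above can be checked numerically. The following sketch is ours, not from the paper; the distribution and sample size are arbitrary. On an empirical sample with αM integer, the minimizing u is the empirical VaR, so the minimum equals the average of the worst α-fraction of the losses.

```python
import numpy as np

# Numerical check of CVaR_a[Z] = min_u { u + E[Z - u]_+ / a } (our
# illustration): the minimum over u is attained at the empirical VaR,
# and equals the average of the largest a-fraction of the sample.

def cvar_by_minimization(z, alpha):
    """Evaluate u + mean([z - u]_+)/alpha at every sample point, take the min."""
    return min(u + np.maximum(z - u, 0.0).mean() / alpha for u in z)

def cvar_by_tail_average(z, alpha):
    """Average of the largest alpha-fraction of the sample."""
    k = int(round(alpha * len(z)))
    return np.sort(z)[-k:].mean()

rng = np.random.default_rng(0)
z = rng.lognormal(size=2000)          # heavy-tailed loss sample
a = cvar_by_minimization(z, 0.05)
b = cvar_by_tail_average(z, 0.05)
assert abs(a - b) < 1e-8
```

The two evaluations coincide because the objective is convex and piecewise linear in u, with a subgradient that vanishes at the (1−α)-quantile.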

The recourse value Q_t(x_{t-1}, ξ_t) at stage t = 2,...,T is given by:

$$\begin{aligned} Q_t(x_{t-1}, \xi_t) = \min_{x_t,u_t}\;& c_t^\top x_t + \lambda_{t+1} u_t + \mathcal{Q}_{t+1}(x_t, u_t) \\ \text{s.t. }& A_t x_t = b_t - B_t x_{t-1} \\ & x_t \ge 0, \end{aligned} \qquad (4)$$

where

$$\mathcal{Q}_{t+1}(x_t, u_t) = \mathbb{E}\Big[(1-\lambda_{t+1})\,Q_{t+1}(x_t, \xi_{t+1}) + \frac{\lambda_{t+1}}{\alpha_{t+1}}\big[Q_{t+1}(x_t, \xi_{t+1}) - u_t\big]_+\Big]. \qquad (5)$$

We take $\mathcal{Q}_{T+1}(\cdot) \equiv 0$ and $\lambda_{T+1} \equiv 0$ so that the objective function of model (4) reduces to $c_T^\top x_T$ when t = T. In contrast to a multi-stage formulation rooted in expected utility, our multi-stage model with CVaR has an additional decision variable, u_t, which estimates the value-at-risk level. The recourse value at stage t depends on ξ_t rather than ξ_[t], because we assume the process to be stage-wise independent.

After introducing the auxiliary variables, u_t, the problem seems to be converted to the simpler case, involving only expectations of an additive utility. This impression may lead to the false conclusion that a traditional SDDP-style algorithm can be applied. The nested nonlinearity arising from the positive-part function precludes this, as we illustrate in the next example.

Example 1. Suppose we incur random costs Z_2 in the second stage and Z_3 in the third stage. Then under an additive utility with contribution u_t(·) in stage t, we have:

$$\mathbb{E}\Big[u_2(Z_2) + \mathbb{E}\big[u_3(Z_3) \mid \xi_{[2]}\big]\Big] = \mathbb{E}[u_2(Z_2)] + \mathbb{E}[u_3(Z_3)].$$

However, this additive form does not hold under CVaR. While we can write the composite risk measure as:

$$\mathrm{CVaR}_{\alpha}\Big[\mathrm{CVaR}_{\alpha}\big[Z_2 + Z_3 \mid \xi_{[2]}\big]\Big] = \mathrm{CVaR}_{\alpha}\Big[Z_2 + \mathrm{CVaR}_{\alpha}\big[Z_3 \mid \xi_{[2]}\big]\Big],$$

the composite risk measure does not lend itself to further simplification. Subadditivity of CVaR yields

$$\mathrm{CVaR}_{\alpha}\Big[Z_2 + \mathrm{CVaR}_{\alpha}\big[Z_3 \mid \xi_{[2]}\big]\Big] \le \mathrm{CVaR}_{\alpha}[Z_2] + \mathrm{CVaR}_{\alpha}\Big[\mathrm{CVaR}_{\alpha}\big[Z_3 \mid \xi_{[2]}\big]\Big].$$

This right-hand side only bounds the risk measure and, even then, the composite measure still has to be evaluated.

It is for the reasons illustrated in Example 1 that Philpott and de Matos [24] and Shapiro [33] point to the lack of a good upper bound estimator for model (2) when the problem has more than a very small number of stages. The natural conditional sampling estimator, discussed in [24, 33], has computational effort that grows exponentially in the number of stages. The following example points to a second issue associated with estimating CVaR.
Example 2. Consider the following estimator of CVaR_α[Z], where Z^1, Z^2,...,Z^M are independent and identically distributed (i.i.d.) from the distribution of Z:

$$\min_{u}\; u + \frac{1}{\alpha M} \sum_{j=1}^{M} \big[Z^j - u\big]_+.$$

If α = 0.05, it is clear that only about 5% of the generated samples contribute nonzero values to this estimator of CVaR.
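A quick numerical illustration of Example 2 (our sketch; the distribution and sample size are our choices, not the paper's):

```python
import numpy as np

# At alpha = 0.05 the minimizing u is the empirical VaR, and only the
# roughly 5% of samples with Z^j > u contribute a nonzero [Z^j - u]_+
# term to the CVaR estimator.

rng = np.random.default_rng(1)
alpha, M = 0.05, 20_000
z = rng.normal(size=M)
u = np.quantile(z, 1.0 - alpha)            # minimizing u (empirical VaR)
frac = np.count_nonzero(z > u) / M         # fraction of contributing samples
assert abs(frac - alpha) < 0.01
```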

The inefficiency pointed to in Example 2 further compounds the computational challenges associated with forming a conditional sampling estimator of CVaR in the multi-stage setting. When forming an estimator of our risk measure from equation (1), this inefficiency means that, say, 95% of the samples are devoted to only estimating expected cost and the remaining 5% of the samples contribute to estimating both CVaR and expected cost. In what follows we propose an approach to upper bound estimation in the context of SDDP that rectifies this imbalance and has computational requirements that grow gracefully with the number of stages. Before turning to our estimator, we discuss SDDP and its application to our risk-averse formulation in the next two sections.

3 Stochastic dual dynamic programming

We use stochastic dual dynamic programming to solve, or rather approximately solve, model (2). SDDP does not operate directly on model (2). Instead, we first form a sample average approximation (SAA) of model (2), and SDDP approximately solves that SAA. Thus in our context SDDP forms estimators by sampling within an empirical scenario tree. Algorithm 1 describes how we form the sampling-based scenario tree for the SAA. Then in the remainder of this article we restrict attention to solving that SAA via SDDP. See Shapiro [31] for a discussion of asymptotics of SAA for multi-stage problems, Philpott and Guan [25] for convergence properties of SDDP, and Chiralaksanakul and Morton [8] for procedures to assess the quality of an SDDP-based policy. Again, we assume ξ_t, t = 2,...,T, to be stage-wise independent. We further assume that for each stage t = 2,...,T there is a known (possibly continuous) distribution P_t of ξ_t and that we have a procedure to sample i.i.d. observations from this distribution. Using this procedure we obtain empirical distributions P̂_t, t = 2,...,T. The scenarios generated by this procedure all have the same probabilities, but this is not required by the SDDP algorithm, which also applies to the case where the scenario probabilities differ.

Algorithm 1. Sampling under interstage independence:

1. Let ξ_1 denote the deterministic first stage realization.
2. Sample D_2 i.i.d. observations ξ_2^1,...,ξ_2^{D_2} from P_2. These are the descendants of the first stage scenario (node) {ξ_1}.
3. Sample D_3 i.i.d. observations ξ_3^1,...,ξ_3^{D_3} from P_3, independent of those formed in stage 2. Let these denote the same set of descendant nodes for each of the N_2 = D_2 nodes {ξ_1} × {ξ_2^1,...,ξ_2^{D_2}}.
...
t. Sample D_t i.i.d. observations ξ_t^1,...,ξ_t^{D_t} from P_t, independent of those formed in stages 2,...,t−1. Let these denote the same set of descendant nodes for each of the N_{t−1} = ∏_{i=2}^{t−1} D_i nodes {ξ_1} × {ξ_2^1,...,ξ_2^{D_2}} × ··· × {ξ_{t−1}^1,...,ξ_{t−1}^{D_{t−1}}}.
...
T. Sample D_T i.i.d. observations ξ_T^1,...,ξ_T^{D_T} from P_T, independent of those formed in stages 2,...,T−1. Let these denote the same set of descendant nodes for each of the N_{T−1} = ∏_{i=2}^{T−1} D_i nodes {ξ_1} × {ξ_2^1,...,ξ_2^{D_2}} × ··· × {ξ_{T−1}^1,...,ξ_{T−1}^{D_{T−1}}}.
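The key point of Algorithm 1, one shared descendant set per stage, can be sketched as follows (our code; the sampler and branch counts are illustrative choices, not the paper's):

```python
import numpy as np
from math import prod

# Sketch of Algorithm 1: each stage t draws one set of D_t i.i.d.
# realizations, reused as the descendant set of every stage t-1 node,
# so the empirical tree is stage-wise independent and stores only
# sum(D_t) realizations rather than prod(D_t).

def sample_tree(sampler, D, seed=0):
    """Return one list of realizations per stage t = 2,...,T."""
    rng = np.random.default_rng(seed)
    return [sampler(rng, d) for d in D]

lognormal = lambda rng, n: rng.lognormal(0.0, 0.2, size=n)
tree = sample_tree(lognormal, D=[10, 20, 30])        # a 4-stage tree
assert [len(s) for s in tree] == [10, 20, 30]
# 10*20*30 = 6000 scenarios are implied, but only 60 numbers are stored.
assert prod(len(s) for s in tree) == 6000
```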

We let Ω̂_t denote the stage t sample space, where |Ω̂_t| = N_t. We use j_t ∈ Ω̂_t to denote a stage t sample point, which we call a stage t scenario. We define the mapping a(j_t) : Ω̂_t → Ω̂_{t−1}, which specifies the unique stage t−1 ancestor of the stage t scenario j_t. Similarly, we use ∆(j_t) : Ω̂_t → 2^{Ω̂_{t+1}} to denote the set of descendant nodes of j_t, where |∆(j_t)| = D_{t+1}. The empirical scenario tree therefore has stage t realizations denoted ξ_t^{j_t}, j_t ∈ Ω̂_t. At the last stage, we have ξ_T^{j_T}, j_T ∈ Ω̂_T, and each stage T scenario corresponds to a full path of observations through each stage of the scenario tree. That is, given j_T, we recursively have j_{t−1} = a(j_t) for t = T, T−1,...,2. For this reason and for notational simplicity, when possible, we suppress the stage subscript and denote j_T ∈ Ω̂_T by j ∈ Ω̂.

As indicated in Algorithm 1, we emphasize using the same set of D_t observations at stage t to form the descendant nodes of all N_{t−1} scenarios at stage t−1. This ensures the resulting empirical scenario tree is interstage independent. The SDDP algorithm does not apply, for example, to a scenario tree in which we instead use a separate, independent set of i.i.d. observations ξ_t^1,...,ξ_t^{D_t} for each of the stage t−1 scenarios, because the resulting empirical scenario tree would not be stage-wise independent. Note that fully general forms of interstage dependency lead to inherent computational intractability, as even the memory requirements to store a general sampled scenario tree grow exponentially in the number of stages. Tractable dependency structures are typically rooted in some form of independent increments between stages; e.g., autoregressive models, moving-average models, and dynamic linear models [36].

We give a brief description of the SDDP algorithm in order to give sufficient context for presenting our results. For further related details on SDDP, see [22] and [33]. The simplest SDDP algorithm applies to the risk-neutral version of our model, which means setting λ_t = 0 for t = 1,...,T in equation (1) and model (2), or equivalently in (3)-(5). We denote the recourse value for the risk-neutral version of our model by Q_t^N(x_{t−1}, ξ_t), which for t = 2,...,T is given by:

$$\begin{aligned} Q_t^N(x_{t-1}, \xi_t) = \min_{x_t}\;& c_t^\top x_t + \mathcal{Q}_{t+1}^N(x_t) \\ \text{s.t. }& A_t x_t = b_t - B_t x_{t-1} \\ & x_t \ge 0, \end{aligned} \qquad (6)$$

where

$$\mathcal{Q}_{t+1}^N(x_t) = \mathbb{E}\big[Q_{t+1}^N(x_t, \xi_{t+1})\big], \qquad (7)$$

and where $\mathcal{Q}_{T+1}^N(\cdot) \equiv 0$. The risk-neutral formulation is completed via model (3) with λ_2 = 0 and $\mathcal{Q}_2(x_1, u_1)$ replaced by $\mathcal{Q}_2^N(x_1)$.

During a typical iteration of the SDDP algorithm, cuts have been accumulated at each stage. These represent a piecewise linear outer approximation of the expected future cost function, $\mathcal{Q}_{t+1}^N(x_t)$. On a forward pass we sample a number of linear paths through the tree. As we solve a sequence of master programs (which we specify below) along these forward paths, the cuts that have been accumulated so far are used to form decisions at each stage. Solutions found along a forward path in this way form a policy, which does not anticipate the future. In fact, the solutions can be found at a node on a sample path via the stage t master program, even before we sample the random parameters at stage t+1. The sample mean of the costs incurred along all the forward sampled paths through the tree forms an estimator of the expected cost of the current policy, which is determined by the master programs.

In the backward pass of the algorithm, we add cuts to the collection defining the current approximation of the expected future cost function at each stage. We do this by solving subproblems at the descendant nodes of each node in the linear paths from the forward pass, except in the final stage, T. The cuts collected at any node in stage t apply to all the nodes in that stage, and hence we maintain a single set of cuts for each stage. We let C_t denote the number of cuts accumulated so far in stage t. This reduction is possible because of our interstage independence assumption. Model (8) acts as a master program for its stage t+1 descendant scenarios and acts as a subproblem for its stage t−1 ancestor:

$$\begin{aligned} \hat{Q}_t = \min_{x_t,\theta_t}\;& c_t^\top x_t + \theta_t & (8a) \\ \text{s.t. }& A_t x_t = b_t - B_t x_{t-1} \qquad : \pi_t & (8b) \\ & \theta_t \ge \hat{Q}_{t+1}^j + \big(g_{t+1}^j\big)^\top \big(x_t - x_t^j\big), \quad j = 1,\ldots,C_t & (8c) \\ & x_t \ge 0. & (8d) \end{aligned}$$

Decision variable θ_t in the objective function (8a), coupled with the cut constraints in (8c), forms the outer linearization of the recourse function $\mathcal{Q}_{t+1}^N(x_t)$ from model (6) and equation (7). The structural and nonnegativity constraints in (8b) and (8d) simply repeat the same constraints from model (6). In the final stage, we omit the cut constraints and the θ_t term. While we could append an N superscript on terms like $\hat{Q}_t$, $\hat{Q}_{t+1}^j$, $g_{t+1}^j$, etc., we suppress this index for notational simplicity. As we indicate in constraint (8b), we use π_t to denote the dual vector associated with the structural constraints.

Let j_t denote a stage t scenario from a sampled forward path. With x_{t−1} = x_{t−1}^{a(j_t)} and with ξ_t = ξ_t^{j_t} in model (8), we refer to that model as sub(j_t). Given model sub(j_t) and its solution x_t, we form one new cut constraint at stage t for each backward pass of the SDDP algorithm as follows. We form and solve sub(j_{t+1}), where j_{t+1} ∈ ∆(j_t) indexes all descendant nodes of j_t. This yields optimal values $\hat{Q}_{t+1}^{j_{t+1}}(x_t)$ and dual solutions $\pi_{t+1}^{j_{t+1}}$ for j_{t+1} ∈ ∆(j_t). We then form

$$g^{j_{t+1}} = -\big(B_{t+1}^{j_{t+1}}\big)^\top \pi_{t+1}^{j_{t+1}}, \qquad (9)$$

where $g^{j_{t+1}} = g^{j_{t+1}}(x_t)$ is a subgradient of $\hat{Q}_{t+1}^{j_{t+1}} = \hat{Q}_{t+1}^{j_{t+1}}(x_t)$. The cut is then obtained by averaging over the descendants:

$$\hat{Q}_{t+1} = \frac{1}{D_{t+1}} \sum_{j_{t+1} \in \Delta(j_t)} \hat{Q}_{t+1}^{j_{t+1}} \qquad (10)$$

$$g_{t+1} = \frac{1}{D_{t+1}} \sum_{j_{t+1} \in \Delta(j_t)} g^{j_{t+1}} \qquad (11)$$

As we indicate above, $\hat{Q}_{t+1} = \hat{Q}_{t+1}(x_t)$ and $g_{t+1} = g_{t+1}(x_t)$, but we suppress this dependency for notational simplicity. We also suppress the j index on $\hat{Q}_{t+1}$ and $g_{t+1}$ because we append the new cut to the stage t collection of cuts, and do not associate it with a particular stage t subproblem.
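The averaged cut of equations (9)-(11) reduces to a few lines of code. The sketch below is ours; the descendant values, duals, and matrices are random stand-ins for actual LP output, and the subgradient sign follows from the constraint form A_t x_t = b_t − B_t x_{t−1}.

```python
import numpy as np

# Sketch of the risk-neutral cut computation in equations (9)-(11)
# (illustrative data in place of subproblem solves): average the
# descendant values and the subgradients g_j = -B_j^T pi_j.

def average_cut(Q_vals, pis, Bs, x_t):
    g_js = np.array([-B.T @ pi for B, pi in zip(Bs, pis)])   # eq. (9)
    Q_bar = np.mean(Q_vals)                                  # eq. (10)
    g_bar = g_js.mean(axis=0)                                # eq. (11)
    intercept = Q_bar - g_bar @ x_t                          # stored scalar
    return g_bar, intercept              # cut: theta >= intercept + g_bar @ x

D, m, n = 4, 2, 3                        # descendants, rows of B, decision dim
rng = np.random.default_rng(2)
g_bar, intercept = average_cut(rng.normal(size=D),
                               rng.normal(size=(D, m)),
                               rng.normal(size=(D, m, n)),
                               rng.normal(size=n))
assert g_bar.shape == (n,)
assert np.isfinite(intercept)
```

Only the scalar intercept and the gradient need to be stored per cut, a point the text returns to next.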
Note that from the manner in which we express constraint (8c), it may appear as if we must keep track of the solution, x_t^j, at which we form the cut, but this is not the case. Rather we store the term $\hat{Q}_{t+1}^j - (g_{t+1}^j)^\top x_t^j$ as a scalar intercept term for each cut. For simplicity in stating the SDDP algorithm below, we assume we have known lower bounds L_t on the recourse functions.

Algorithm 2. Stochastic dual dynamic programming algorithm

1. Let iteration k = 1 and append lower bounding cuts θ_t ≥ L_t, t = 1,...,T−1.
2. Solve the stage 1 master program (t = 1) and obtain x_1^k, θ_1^k. Let z_k = c_1^⊤ x_1^k + θ_1^k.
3. Forward pass: sample i.i.d. paths from Ω̂ and index them by S_k.
   For all j ∈ S_k {
     For t = 2,...,T {
       Form and solve sub(j_t) to obtain x_t^j;
     }
   }
   Form the upper bound estimator:
   $$\bar{z}_k = c_1^\top x_1^k + \frac{1}{|S_k|} \sum_{j \in S_k} \sum_{t=2}^{T} \big(c_t^j\big)^\top x_t^j. \qquad (12)$$
4. If a stopping criterion, given z_k and z̄_k, is satisfied then stop and output first stage solution x_1* = x_1^k and lower bound z* = z_k.
5. Backward pass:
   For t = T−1,...,1 {
     For all j_t ∈ S_k {
       For all descendant nodes j_{t+1} ∈ ∆(j_t) {
         Form and solve sub(j_{t+1}) to obtain $\hat{Q}_{t+1}^{j_{t+1}}$ and $\pi_{t+1}^{j_{t+1}}$;
         Calculate $g^{j_{t+1}}$ using formula (9);
       }
       Calculate optimal value $\hat{Q}_{t+1}$ using equation (10);
       Calculate cut gradient $g_{t+1}$ using equation (11);
       Append the resulting cut to the collection (8c) for stage t;
     }
   }
6. Let k = k + 1 and go to step 2.

See Bayraksan and Morton [4] and Homem-de-Mello et al. [15] for stopping rules that can be employed in step 4.

4 Risk-averse approach

We must modify the SDDP algorithm of Section 3 to handle the risk-averse model of Section 2. The auxiliary variables u_t now play a role both in computing the cuts and in determining the policy from the master programs. In the modified SDDP algorithm we select the VaR level, u_t, along with our stage t decisions, x_t, and then solve the subproblems at the descendant nodes. The VaR level influences the value of the recourse function estimate and therefore is included in the cuts, in the same way as any other decision variable. Extending the development from the previous section, the stage t subproblem in the risk-averse case is given by:

$$\begin{aligned} \hat{Q}_t = \min_{x_t,u_t,\theta_t}\;& c_t^\top x_t + \lambda_{t+1} u_t + \theta_t \\ \text{s.t. }& A_t x_t = b_t - B_t x_{t-1} \qquad : \pi_t \\ & \theta_t \ge \hat{Q}_{t+1}^j + \big(g_{t+1}^j\big)^\top \big[(x_t, u_t) - (x_t^j, u_t^j)\big], \quad j = 1,\ldots,C_t \\ & x_t \ge 0. \end{aligned} \qquad (13)$$

While the subgradient $g^{j_{t+1}}$ of $\hat{Q}_{t+1}^{j_{t+1}}(x_t)$ is still computed by equation (9), as we detail below we now have $\hat{Q}_{t+1} = \hat{Q}_{t+1}(x_t, u_t)$ and $g_{t+1} = g_{t+1}(x_t, u_t)$ as functions of both the stage t decision and the VaR level, and the other terms that contribute to the cuts have to be adjusted to respect the differences between the risk-neutral function $\mathcal{Q}_{t+1}^N(x_t)$ and the risk-averse function $\mathcal{Q}_{t+1}(x_t, u_t)$.

As in Section 3, we let j_t denote a stage t scenario from a sample path. We let sub(j_t) denote model (13) when we set x_{t−1} = x_{t−1}^{a(j_t)} and ξ_t = ξ_t^{j_t}. Given sub(j_t) and its solution (x_t, u_t), we form a new cut constraint at stage t as follows. We form and solve sub(j_{t+1}), where j_{t+1} ∈ ∆(j_t) indexes all descendant nodes of j_t. This yields optimal values $\hat{Q}_{t+1}^{j_{t+1}}$ and dual solutions $\pi_{t+1}^{j_{t+1}}$, along with subgradients $g^{j_{t+1}}$ via equation (9), for j_{t+1} ∈ ∆(j_t). The sample mean variant of equation (5) then yields:

$$\hat{Q}_{t+1} = \frac{1}{D_{t+1}} \sum_{j_{t+1} \in \Delta(j_t)} \Big[(1-\lambda_{t+1})\,\hat{Q}_{t+1}^{j_{t+1}} + \frac{\lambda_{t+1}}{\alpha_{t+1}}\big[\hat{Q}_{t+1}^{j_{t+1}} - u_t\big]_+\Big]. \qquad (14)$$

To compute a subgradient of $\hat{Q}_{t+1}(x_t, u_t)$ we must employ the chain rule of subdifferentials to deal with the positive-part operator. Following [33] this leads to

$$g_{t+1} = \Bigg(\frac{1}{D_{t+1}}\Big[\sum_{j_{t+1} \in \Delta(j_t)} (1-\lambda_{t+1})\,g^{j_{t+1}} + \frac{\lambda_{t+1}}{\alpha_{t+1}} \sum_{j_{t+1} \in J_{t+1}} g^{j_{t+1}}\Big],\; -\frac{\lambda_{t+1}}{\alpha_{t+1}} \cdot \frac{|J_{t+1}|}{D_{t+1}}\Bigg), \qquad (15)$$

where the index set $J_{t+1} = \big\{j_{t+1} \in \Delta(j_t) : \hat{Q}_{t+1}^{j_{t+1}} > u_t\big\}$. In modifying the SDDP algorithm for the risk-averse formulation, equations (14) and (15) replace equations (10) and (11) in the backward pass of step 5 of Algorithm 2 to provide the piecewise linear outer approximation of $\mathcal{Q}_{t+1}(x_t, u_t)$.

One issue that remains concerns evaluation of an upper bound. The upper bound estimator (12) in Algorithm 2 must be modified for the risk-averse setting. As we illustrate in Example 1, we cannot expect an analogous additive estimator to be appropriate for the risk-averse setting.
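The risk-averse cut of equations (14) and (15) can be sketched as follows. This is our reconstruction with toy inputs in place of subproblem output; in particular, the negative sign on the u_t-component reflects that raising u_t shrinks the positive parts in (14).

```python
import numpy as np

# Sketch of the risk-averse cut, equations (14)-(15) (our code;
# Q_vals and g_vals stand in for descendant subproblem output).
# Descendants in the index set J, i.e. those with Q_j > u_t, receive
# the extra lambda/alpha weight.

def risk_averse_cut(Q_vals, g_vals, u_t, lam, alpha):
    Q = np.asarray(Q_vals)
    g = np.asarray(g_vals)
    excess = np.maximum(Q - u_t, 0.0)
    Q_bar = np.mean((1 - lam) * Q + (lam / alpha) * excess)       # eq. (14)
    in_J = Q > u_t
    g_x = ((1 - lam) * g
           + (lam / alpha) * in_J[:, None] * g).mean(axis=0)      # eq. (15), x part
    g_u = -(lam / alpha) * in_J.mean()                            # eq. (15), u part
    return Q_bar, g_x, g_u

Q_bar, g_x, g_u = risk_averse_cut([1.0, 2.0, 5.0, 9.0], np.ones((4, 2)),
                                  u_t=4.0, lam=0.3, alpha=0.25)
assert np.allclose(g_x, 1.3) and abs(g_u + 0.6) < 1e-9
assert abs(Q_bar - 4.775) < 1e-9
```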
Example 1 suggests that to compute the conditional risk measure, we should start from the last stage and recurse back to the first stage to obtain an estimator of the risk measure evaluated at a policy. This differs significantly from the risk-neutral case, where the costs incurred at any stage can be estimated just by averaging costs at sampled nodes. Starting at the final stage, T, our cost under scenario j_T is $(c_T^{j_T})^\top x_T^{j_T}$. For the stage T−1 ancestor scenario j_{T−1} = a(j_T) we must calculate

$$\big(c_{T-1}^{j_{T-1}}\big)^\top x_{T-1}^{j_{T-1}} + \lambda_T u_{T-1}^{j_{T-1}} + \mathcal{Q}_T\big(x_{T-1}^{j_{T-1}}, u_{T-1}^{j_{T-1}}\big).$$

The question that remains is how to estimate $\mathcal{Q}_T(x_{T-1}^{j_{T-1}}, u_{T-1}^{j_{T-1}})$. We maintain a parallel with the estimator in the risk-neutral version of the SDDP algorithm in the sense that we estimate this term using the value of one descendant scenario along the corresponding forward path in step 3 of Algorithm 2. This means that based on equation (5) we estimate $\mathcal{Q}_T(x_{T-1}^{j_{T-1}}, u_{T-1}^{j_{T-1}})$ by

$$(1-\lambda_T)\,\big(c_T^{j_T}\big)^\top x_T^{j_T} + \frac{\lambda_T}{\alpha_T}\Big[\big(c_T^{j_T}\big)^\top x_T^{j_T} - u_{T-1}^{j_{T-1}}\Big]_+.$$

Removing the expectation operator in equation (5), the associated recursion of the objective function in model (4) and equation (5) yields the following recursive estimator for t = 2,...,T:

$$\hat{v}_t\big(\xi_{t-1}^{j_{t-1}}\big) = (1-\lambda_t)\Big(\big(c_t^{j_t}\big)^\top x_t^{j_t} + \hat{v}_{t+1}\big(\xi_t^{j_t}\big)\Big) + \lambda_t u_{t-1}^{j_{t-1}} + \frac{\lambda_t}{\alpha_t}\Big[\big(c_t^{j_t}\big)^\top x_t^{j_t} + \hat{v}_{t+1}\big(\xi_t^{j_t}\big) - u_{t-1}^{j_{t-1}}\Big]_+, \qquad (16)$$

where $\hat{v}_{T+1}(\xi_T^{j_T}) \equiv 0$. Denote the estimator for the sample path associated with scenario j by

$$\hat{v}(\xi^j) = c_1^\top x_1 + \hat{v}_2. \qquad (17)$$

Because the first stage parameters, ξ_1^{j_1}, are deterministic we can simply write $\hat{v}_2 = \hat{v}_2(\xi_1^{j_1})$, dropping its argument. Having selected scenario j and solved all nodes associated with realizations ξ_1^{j_1},...,ξ_T^{j_T} along the sample path, we form the estimator recursively as follows. We start at the stage T−1 node, compute $\hat{v}_T(\xi_{T-1}^{j_{T-1}})$, substitute it into formula (16) for t = T−1 to obtain $\hat{v}_{T-1}(\xi_{T-2}^{j_{T-2}})$, and so on until we obtain $\hat{v}_2$ and hence can compute the value of $\hat{v}(\xi^j)$ via equation (17). Then if ξ^j, j = 1,...,M, are i.i.d. sample paths, sampled from the scenario tree's empirical distribution as in step 3 of Algorithm 2, the corresponding upper bound estimator is given by:

$$U^n = \frac{1}{M} \sum_{j=1}^{M} \hat{v}(\xi^j). \qquad (18)$$

We use the n superscript to indicate that we use naive Monte Carlo sampling here, and to distinguish it from estimators we develop below. We can attempt to use estimator (18) in place of (12) to solve the risk-averse problem. Unfortunately, this estimator has large variance. The main shortcoming of this estimator lies in the imbalance in sampled scenarios we point to in Example 2, coupled with the policy now specifying an approximation of the value at risk level via u_{t−1}. If the descendant node has value less than u_{t−1} then the positive-part term in equation (16) is zero.
When the opposite occurs, the difference between the node value and u_{t−1} is multiplied by $\alpha_t^{-1}$, which can lead to large values of the estimator because a typical value of $\alpha_t^{-1}$ is 20. When $\hat{v}_t(\xi_{t-1}^{j_{t-1}})$ is large, this increases the likelihood that preceding values are also large and hence multiplied by $\alpha_{t-1}^{-1}, \alpha_{t-2}^{-1}, \ldots$ many more times in the backward recursion. This leads to a highly variable estimator which is of little practical use, particularly when T is not small.

To overcome the issues we have just discussed, Shapiro [33] describes an estimator which uses more nodes to estimate the recourse value. This estimator for a 3-stage problem is obtained by sampling, and solving subproblems associated with, i.i.d. realizations ξ_2^1,...,ξ_2^{M_2} from the second stage, and for each of these solving subproblems to estimate the future risk measure using i.i.d. realizations ξ_3^1,...,ξ_3^{M_3} from the third stage. This requires solving subproblems at a total of M_2 · M_3 nodes. More generally under this approach, given a stage T−1 scenario ξ_{T-1}^{j_{T-1}}, we estimate the recourse function value by:

$$\hat{v}_T\big(\xi_{T-1}^{j_{T-1}}\big) = \frac{1}{M_T} \sum_{j_T=1}^{M_T} \Big[(1-\lambda_T)\,\big(c_T^{j_T}\big)^\top x_T^{j_T} + \lambda_T u_{T-1}^{j_{T-1}} + \frac{\lambda_T}{\alpha_T}\Big[\big(c_T^{j_T}\big)^\top x_T^{j_T} - u_{T-1}^{j_{T-1}}\Big]_+\Big].$$

For stages t = 2,...,T−1 we have:

$$\hat{v}_t\big(\xi_{t-1}^{j_{t-1}}\big) = \frac{1}{M_t} \sum_{j_t=1}^{M_t} \Big[(1-\lambda_t)\Big(\big(c_t^{j_t}\big)^\top x_t^{j_t} + \hat{v}_{t+1}\big(\xi_t^{j_t}\big)\Big) + \lambda_t u_{t-1}^{j_{t-1}} + \frac{\lambda_t}{\alpha_t}\Big[\big(c_t^{j_t}\big)^\top x_t^{j_t} + \hat{v}_{t+1}\big(\xi_t^{j_t}\big) - u_{t-1}^{j_{t-1}}\Big]_+\Big]. \qquad (19)$$

And finally for the upper bound estimator we compute:

$$U^e = c_1^\top x_1 + \hat{v}_2. \qquad (20)$$

Shapiro [33] discusses two significant problems with the upper bound estimator (20). First, the estimator requires solving a number of subproblems, $\prod_{t=2}^{T} M_t$, that grows exponentially in the number of stages (thus the e superscript), and hence is impractical unless T is small. Second, as we examine further in Section 6, even when we can afford to compute the bound provided by (20), the bound is not very tight. For these reasons, estimator (20) is not typically used in practice.

For the reasons we discuss above, the approach to applying SDDP to risk-averse problems that Philpott and de Matos [24] and Shapiro [33] recommend does not compute an upper bound for the risk-averse model. Their recommendation is to first form and solve the risk-neutral version of the problem, in which we can compute reliable upper bound estimators and hence employ a reasonable termination criterion. When the SDDP algorithm stops we save the number of iterations needed to satisfy the termination criterion. We then form the risk-averse model and run the SDDP algorithm, without evaluating an upper bound estimator. Instead, we run SDDP on the risk-averse model for the same number of iterations required to solve the risk-neutral model. The solution and corresponding lower bound obtained after that number of iterations are considered the algorithm's output. However, this approach has some pitfalls. It is unclear that the number of iterations for the risk-averse model should be the same as in the risk-neutral case. The shape of the cost-to-go functions differs, and cuts are being computed in a higher-dimensional space because of the addition of the decision variables, u_t, used to compute the risk measure. This approach gives us no guarantees on the quality of the solution and requires that we run the SDDP algorithm twice.
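The single-path backward recursion of equations (16) and (17), which underlies the naive estimator U^n, can be sketched as follows (our code; the realized stage costs and VaR levels are toy inputs):

```python
# Sketch of the recursion in equations (16)-(17) (our code; stage costs
# c_t^T x_t and VaR levels u_{t-1} are toy inputs). The recursion runs
# backward from stage T, with v_{T+1} = 0.

def path_estimator(c1x1, stage_costs, u, lam, alpha):
    """stage_costs[i], u[i], lam[i], alpha[i] belong to stage t = i + 2;
    e.g. u[0] is u_1 and lam[0] is lambda_2."""
    v_next = 0.0
    for cost, u_prev, l, a in zip(reversed(stage_costs), reversed(u),
                                  reversed(lam), reversed(alpha)):
        z = cost + v_next                    # c_t^T x_t + v_{t+1}
        v_next = (1 - l) * z + l * u_prev + (l / a) * max(z - u_prev, 0.0)
    return c1x1 + v_next                     # eq. (17)

v = path_estimator(1.0, stage_costs=[2.0, 3.0], u=[4.0, 2.5],
                   lam=[0.3, 0.3], alpha=[0.05, 0.05])
assert abs(v - 30.795) < 1e-9
# The alpha^{-1} = 20 factor can compound across stages: hence the
# high variance of U^n discussed above.
```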
At this point we have three possible approaches to deal with upper bound estimation in the risk-averse case, based on the two upper bound estimators (18) and (20) and based on solving the risk-neutral model to determine the stopping iteration. In our view, all three of these approaches are unsatisfactory. We are either forced to use loose upper bounds that lead to very weak guarantees on solution quality and scale poorly, or we are forced to use an approach which provides no guarantees on solution quality, even if reasonable empirical performance has been reported in the literature. In the next section we propose a new upper bound estimator to overcome these difficulties. Our estimator scales better with the number of stages and can yield greater precision than previous approaches.

5 Improved upper bound estimation

We overcome the shortcomings of the upper bound estimators (18) and (20) by first focusing on the main issue causing the estimators to be poor: a relatively small fraction of the sampled scenario-tree nodes contribute to estimating CVaR, for reasons we illustrate in Example 2. To sample in a better manner we assume that for every stage, t = 2,...,T, we can cheaply evaluate a real-valued function, h_t(x_{t−1}, ξ_t), which estimates the recourse value of our decisions x_{t−1} after the random parameters ξ_t have been observed. The functions h_t play a central role in our proposal for sampling descendant nodes. Rather than solving linear programs at a large number of descendant nodes, as is done in estimator (20), we instead evaluate h_t at these nodes and then sort the nodes based on their values. This guides sampling of the nodes to estimate CVaR. Having such a function h_{t+1} indicates that once we observe the random outcome for stage t+1, we have some means of distinguishing good and bad decisions at stage t without knowledge of subsequent random events in stages t+2,...,T. Sometimes this is possible via an approximate recourse value associated with the system's state. For example, when dealing with some asset allocation models, we may use current wealth to define h_t.

Algorithm 1 forms an empirical scenario tree with equally-weighted scenarios and discrete empirical distributions P̂_t, t = 2,...,T. The probability mass function (pmf) governing the conditional probability of the descendant nodes from any stage t−1 node is given by:

$$f_t(\xi_t) = \frac{1}{D_t}\, I\big[\xi_t \in \{\xi_t^1,\ldots,\xi_t^{D_t}\}\big], \qquad (21)$$

where I[·] is the indicator function that takes value one if its argument is true and zero otherwise. We propose a sampling scheme based on importance sampling. The scheme depends on the current state of the system, giving rise to a new pmf, which we denote g_t(ξ_t | x_{t−1}). This pmf is tailored specifically for use with CVaR. Alternative pmfs would be needed to apply the proposed ideas to other risk measures. Given the current state of the system we can compute the value at risk for our approximation function, $u_t^h = \mathrm{VaR}_{\alpha_t}[h_t(x_{t-1}, \xi_t)]$, and partition the nodes corresponding to ξ_t^1,...,ξ_t^{D_t} into two groups by comparing their approximate value to $u_t^h$.
In paricular, he imporance sampling pmf is: 1 1 [ { 2 α D I ξ g (ξ x 1 ) = 1 1 [ { 2 D α D I ξ ξ 1,..., ξ D ξ 1,..., ξ D }, if h (x 1, ξ ) u h }, if h (x 1, ξ ) < u h, where he operaor rounds down o he neares ineger. he pmf g (ξ x 1 ) modifies he probabiliy masses so ha we are equally likely o draw sample observaions above and below u h = VaR α [h (x 1, ξ ). In accordance wih imporance sampling schemes, we can compue he required expecaion under our new measure via [ E f [Z = E g Z f, g for any random variable Z for which he expecaions exis. If he expecaion is aken across he disribuions for all sages we denoe he analogous operaors by E f [ and E g [. If we omi he rounding operaions in equaion (22), we have ha he likelihood raio saisfies: { f 2α, if h (x 1, ξ ) u h g 2(1 α ), if h (x 1, ξ ) < u h. We can form an esimaor similar o (18), excep ha we employ our imporance sampling disribuions, g, in place of he empirical disribuions, f, in he forward pass of SDDP when selecing he sample pahs. In paricular, given a single sample pah from sage 1 o sage, (22) 12

ξ^j, we form estimator (17), which uses recursion (16) and preserves the good scalability of the estimator with the number of stages. We carry this out for a set of samples drawn using the new measure g_t to select the sample paths. Thus we have weights for each stage t of

    w_t(ξ_t | x_{t-1}) = f_t(ξ_t) / g_t(ξ_t | x_{t-1}),

which yields weights along a sample path of

    w(ξ^j) = ∏_{t=2}^{T} w_t(ξ_t^j | x_{t-1}),

and an estimator of the form

    (1/M) Σ_{j=1}^{M} w(ξ^j) v̂(ξ^j).

This estimator is a weighted sum of the upper bounds (17) for the sampled scenarios. The weights are random variables and only sum to one in expectation. Normalizing the weights so that they sum to one reduces the variability of the estimator (see Hesterberg [13]) and yields:

    U^i = (1 / Σ_{j=1}^{M} w(ξ^j)) Σ_{j=1}^{M} w(ξ^j) v̂(ξ^j),    (23)

where i indicates that the estimator uses importance sampling. We summarize the development so far in the following proposition.

Proposition 1. Assume model (2) has relatively complete recourse and interstage independence. Let z denote the optimal value of model (2) under the empirical distribution generated by Algorithm 1. Assume that a collection of cuts using (14) and (15) populate subproblems (13) at each stage. Let ξ denote a sample path selected under the empirical distribution, and let v̂(ξ) be defined by (17) for that sample path. Then E_f[v̂(ξ)] ≥ z. Furthermore, if ξ^j, j = 1, ..., M, are i.i.d. and generated by the pmfs (22) and U^i is defined by (23), then U^i → E_f[v̂(ξ)], w.p.1, as M → ∞.

Proof. The optimal value of model (2) as reformulated in model (3) yields z. Along sample path ξ, under the assumption of relatively complete recourse, the cuts in subproblems (13) generate a feasible policy in the space of the (x_t, u_t) variables. Specifically, subproblems (13) yield a nonanticipative sequence (x_1, u_1), ..., (x_{T−1}, u_{T−1}), x_T, which is feasible to models (3) and (4) for t = 2, ..., T. Removing the expectation operator in equation (5), the associated recursion of the objective function in model (4) and equation (5) coincides with the recursion in equation (16). Taking expectations yields E_f[v̂(ξ)] ≥ z.
By the law of large numbers we have that

    lim_{M→∞} (1/M) Σ_{j=1}^{M} w(ξ^j) = 1, w.p.1.    (24)

For ξ generated by the empirical pmfs (21) and for each ξ^j generated by the pmfs (22), we have

    E_g[w(ξ^j) v̂(ξ^j)] = E_f[v̂(ξ)].

Thus by the law of large numbers we have

    lim_{M→∞} (1/M) Σ_{j=1}^{M} w(ξ^j) v̂(ξ^j) = E_f[v̂(ξ)], w.p.1.    (25)

Combining equations (24) and (25) using a converging-together result we have

    U^i = [ (1/M) Σ_{j=1}^{M} w(ξ^j) v̂(ξ^j) ] / [ (1/M) Σ_{j=1}^{M} w(ξ^j) ] → E_f[v̂(ξ)], w.p.1,

as M → ∞. ∎

In the sense made precise in Proposition 1, estimator (23) provides an asymptotic upper bound on the optimal value of model (2). The naive estimator U^n of (18) is an unbiased and consistent estimator of E_f[v̂(ξ)]. However, if the functions h_t provide a good approximation, in the sense that they order the state of the system in the same way as the recourse function, we anticipate that U^i will have smaller variance than U^n. That said, we view estimator (23) as an intermediate step to an improved estimator. Under an additional assumption, the estimator can be improved significantly. We now consider a stricter assumption, with the simplified notation Q_t = Q_t(x_{t-1}, ξ_t) and h_t = h_t(x_{t-1}, ξ_t).

Assumption 1. For every stage t = 2, ..., T and decision x_{t-1} the approximation function h_t satisfies: Q_t ≥ VaR_α[Q_t] if and only if h_t ≥ VaR_α[h_t].

Under Assumption 1, we can strengthen the estimator through a reformulation. Given a sample path ξ^j we modify the recursive estimator (16) for t = 2, ..., T as:

    v̂_t^h(ξ^j) = (1 − λ_t) [ (c_t^j)' x_t^j + v̂_{t+1}^h(ξ^j) ] + λ_t u_{t-1}^j    (26a)
                 + (λ_t / α_t) I[h_t ≥ VaR_α[h_t]] [ (c_t^j)' x_t^j + v̂_{t+1}^h(ξ^j) − u_{t-1}^j ]_+,    (26b)

where v̂_{T+1}^h(ξ^j) ≡ 0, and we let

    v̂^h(ξ) = c_1' x_1 + v̂_2^h.    (27)

With ξ^j, j = 1, ..., M, i.i.d. from the pmfs (22) we form the upper bound estimator:

    U^h = (1 / Σ_{j=1}^{M} w(ξ^j)) Σ_{j=1}^{M} w(ξ^j) v̂^h(ξ^j).    (28)

Proposition 2. Assume the hypotheses of Proposition 1, let ξ denote a sample path selected under the empirical distribution, let v̂^h(ξ) be defined by (27) for that sample path, and let Assumption 1 hold. If subproblems (13) induce the same policy for both v̂(ξ) and v̂^h(ξ) then E_f[v̂(ξ)] ≥ E_f[v̂^h(ξ)] ≥ z. Furthermore, if ξ^j, j = 1, ..., M, are i.i.d. and generated by the pmfs (22) and U^h is defined by (28), then U^h → E_f[v̂^h(ξ)], w.p.1, as M → ∞.

Proof.
Let (x_1, u_1), ..., (x_{T−1}, u_{T−1}), x_T be the feasible sequence to models (3) and (4) for t = 2, ..., T, specified by (13) along sample path ξ. The result E_f[v̂(ξ)] ≥ E_f[v̂^h(ξ)] holds because I[h_t ≥ VaR_α[h_t]] can preclude some positive terms in the recursion (26) that are included in (16).

The terms in (26b) are used to estimate CVaR. Thus to establish the rest of the proposition it suffices to show:

    VaR_α[Q_t] + (1/α_t) E[ [Q_t − VaR_α[Q_t]]_+ ] ≤ u_{t-1} + (1/α_t) E[ I[h_t ≥ VaR_α[h_t]] [Q_t − u_{t-1}]_+ ],

because the rest of the proof then follows in the same fashion as that of Proposition 1. First consider the case in which u_{t-1} ≥ VaR_α[Q_t]. We have:

    VaR_α[Q_t] + (1/α_t) E[ [Q_t − VaR_α[Q_t]]_+ ]
        ≤ u_{t-1} + (1/α_t) E[ [Q_t − u_{t-1}]_+ ]
        = u_{t-1} + (1/α_t) E[ I[Q_t ≥ VaR_α[Q_t]] [Q_t − u_{t-1}]_+ ]
        = u_{t-1} + (1/α_t) E[ I[h_t ≥ VaR_α[h_t]] [Q_t − u_{t-1}]_+ ],

where the inequality follows from CVaR's definition as the optimal value of a minimization problem, the first equality holds because the indicator has no effect when u_{t-1} ≥ VaR_α[Q_t], and the last equality follows from Assumption 1. For the case when u_{t-1} < VaR_α[Q_t] we first drop the positive part operator, because that is handled by the indicator, and write:

    VaR_α[Q_t] + (1/α_t) E[ [Q_t − VaR_α[Q_t]]_+ ]
        = VaR_α[Q_t] + (1/α_t) E[ I[Q_t ≥ VaR_α[Q_t]] (Q_t − u_{t-1} + u_{t-1} − VaR_α[Q_t]) ]
        = (1 − P[Q_t ≥ VaR_α[Q_t]]/α_t) VaR_α[Q_t] + (P[Q_t ≥ VaR_α[Q_t]]/α_t) u_{t-1}
          + (1/α_t) E[ I[Q_t ≥ VaR_α[Q_t]] (Q_t − u_{t-1}) ]
        ≤ u_{t-1} + (1/α_t) E[ I[Q_t ≥ VaR_α[Q_t]] [Q_t − u_{t-1}]_+ ]
        = u_{t-1} + (1/α_t) E[ I[h_t ≥ VaR_α[h_t]] [Q_t − u_{t-1}]_+ ],

where the inequality holds because P[Q_t ≥ VaR_α[Q_t]] ≥ α_t and u_{t-1} < VaR_α[Q_t]. (Note that we would instead have P[Q_t ≥ VaR_α[Q_t]] = α_t if we were in the continuous case.) This completes the proof as the desired result holds in both cases. ∎

As Proposition 2 indicates, U^h provides an asymptotic upper bound estimator for the optimal value of model (2). It also provides a tighter upper bound in expectation than estimators U^n and U^i. We also anticipate that estimator U^h will have smaller variance than U^i. As we discuss in Section 4, when a sample path is such that the positive-part term in (16) is positive, that term is multiplied by α_t^{-1} = 20 (say), and this increases the likelihood that as we decrement t we obtain large values that are repeatedly multiplied by α_{t-1}^{-1}, α_{t-2}^{-1}, etc. This repeated multiplication should occur for some samples, but it can also occur when it should not. The indicator function in U^h helps avoid this issue and hence tends to reduce variance.
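The indicator-modified recursion (26) and the self-normalized combination (28) can be sketched as follows (a minimal illustration with our own variable names; the stage data are hypothetical placeholders, not values from the paper, and for simplicity α_t and λ_t are held constant across stages):

```python
def vhat_h(path, alpha, lam):
    """Evaluate recursion (26) backward along one sample path.

    path[t] holds (cost, u_prev, h, var_h) for stages t = 2..T:
    cost is (c_t)'x_t, u_prev is u_{t-1}, h is h_t(x_{t-1}, xi_t),
    and var_h is VaR_alpha[h_t].  The positive-part term is kept only
    when h >= var_h, which damps spurious repeated multiplication by
    1/alpha across stages.
    """
    v = 0.0                                    # terminal value: v̂^h_{T+1} = 0
    for cost, u_prev, h, var_h in reversed(path):
        tail = max(cost + v - u_prev, 0.0) if h >= var_h else 0.0
        v = (1 - lam) * (cost + v) + lam * u_prev + (lam / alpha) * tail
    return v

def u_upper(weights, values):
    """Self-normalized combination as in (28): weights[j] is the product
    of stagewise likelihood ratios f_t/g_t along path j; normalizing by
    their realized sum, rather than by M, reduces variance."""
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

Per (27), the full path estimate adds the deterministic first-stage cost c_1'x_1 to the value returned by the recursion.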
We now weaken the condition of Assumption 1 to incorporate the notion of what we call a margin function, in order for our type of upper bound estimator to address a broader class of stochastic programs.
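The screening inequality underlying these estimators, CVaR_α[Q] ≤ u + (1/α) E[I[·][Q − u]_+] for any indicator that keeps the whole upper α-tail, can be checked numerically. The sketch below is our own discretized illustration (upper-tail CVaR convention, names ours), not code from the paper:

```python
def cvar(sample, alpha):
    """Discrete upper-tail CVaR: VaR + E[(Q - VaR)_+] / alpha."""
    n = len(sample)
    var = sorted(sample, reverse=True)[max(int(alpha * n), 1) - 1]
    return var + sum(max(q - var, 0.0) for q in sample) / (alpha * n)

def screened_bound(sample, keep, u, alpha):
    """u + E[I[keep] * (Q - u)_+] / alpha, the screened term used in the
    modified recursions; an upper bound on CVaR whenever the kept set
    covers the event {Q >= VaR_alpha[Q]}."""
    n = len(sample)
    return u + sum(max(q - u, 0.0)
                   for q, k in zip(sample, keep) if k) / (alpha * n)
```

For example, with Q uniform on {1, ..., 100} and α = 0.05 the CVaR is 98, while screening at u = 90 and keeping every outcome with Q ≥ 90 (a set containing the whole upper tail) gives 101, consistent with the bound.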

Assumption 2. For every stage t = 2, ..., T and decision x_{t-1} we have real-valued functions h_t(x_{t-1}, ξ_t) and m_t(x_{t-1}, ξ_t) which satisfy: if h_t < m_t then Q_t < VaR_α[Q_t].

Given a sample path ξ^j we modify the recursive estimators (16) and (26) for t = 2, ..., T as:

    v̂_t^m(ξ^j) = (1 − λ_t) [ (c_t^j)' x_t^j + v̂_{t+1}^m(ξ^j) ] + λ_t u_{t-1}^j
                 + (λ_t / α_t) I[h_t ≥ m_t] [ (c_t^j)' x_t^j + v̂_{t+1}^m(ξ^j) − u_{t-1}^j ]_+,    (29)

where v̂_{T+1}^m(ξ^j) ≡ 0, and we let

    v̂^m(ξ) = c_1' x_1 + v̂_2^m.    (30)

With ξ^j, j = 1, ..., M, i.i.d. and from the pmfs (22), which use functions h_t, we form the upper bound estimator:

    U^m = (1 / Σ_{j=1}^{M} w(ξ^j)) Σ_{j=1}^{M} w(ξ^j) v̂^m(ξ^j).    (31)

Again, note that we do not modify the importance sampling procedure here to use the margin value. The sampling scheme still relies on the VaR_α[h_t] level of the approximation function via the pmfs (22), but we drop Assumption 1 on h_t and instead require the weaker implication of Assumption 2.

Proposition 3. Assume the hypotheses of Proposition 1, let ξ denote a sample path selected under the empirical distribution, let v̂^m(ξ) be defined by (30) for that sample path, and let Assumption 2 hold. Then E_f[v̂^m(ξ)] ≥ z. Furthermore, if ξ^j, j = 1, ..., M, are i.i.d. and generated by the pmfs (22) and U^m is defined by (31), then U^m → E_f[v̂^m(ξ)], w.p.1, as M → ∞. Finally, if Assumption 1 also holds and subproblems (13) induce the same policy for all three estimators, then E_f[v̂(ξ)] ≥ E_f[v̂^m(ξ)] ≥ E_f[v̂^h(ξ)] ≥ z.

Proof. We have:

    VaR_α[Q_t] + (1/α_t) E[ [Q_t − VaR_α[Q_t]]_+ ]
        ≤ u_{t-1} + (1/α_t) E[ I[Q_t ≥ VaR_α[Q_t]] [Q_t − u_{t-1}]_+ ]
        ≤ u_{t-1} + (1/α_t) E[ I[h_t ≥ m_t] [Q_t − u_{t-1}]_+ ],

where the first inequality comes from following the steps of the proof of Proposition 2 (in both of the cases considered) and the second inequality follows from Assumption 2, whose contrapositive gives I[Q_t ≥ VaR_α[Q_t]] ≤ I[h_t ≥ m_t]. From this we have E_f[v̂^m(ξ)] ≥ z, and the consistency result for U^m follows in the same manner as in the proof of Proposition 1. Inequality E_f[v̂(ξ)] ≥ E_f[v̂^m(ξ)] holds because I[h_t ≥ m_t] can preclude some positive terms in the recursion (29) that are included in (16).
Finally, E_f[v̂^m(ξ)] ≥ E_f[v̂^h(ξ)] holds because under Assumptions 1 and 2 the indicator I[h_t ≥ m_t] allows inclusion of some positive terms that the indicator I[h_t ≥ VaR_α[h_t]] does not. ∎

In order to ensure that U^h is a valid upper bound estimator we require an approximation function that can fully order states of the system in the sense of Assumption 1, and this limits applicability of the estimator in some cases. Assumption 2 weakens this requirement considerably, and widens the applicability of estimator U^m. While U^m again provides an

asymptotic upper bound estimator for the optimal value of model (2), the price we pay is that it is weaker than U^h, as Proposition 3 indicates.

For the types of approximation and margin functions, h_t and m_t, that we envision, our importance-sampling estimators (U^i, U^h, and U^m) require modest additional computation relative to estimator U^n, which uses samples from the empirical pmfs (21). In particular, with D_t denoting the number of stage t descendant nodes formed in Algorithm 1, the bulk of the additional computation requires evaluating h_t and m_t at each of these D_t nodes and determining VaR_α[h_t], which can be done by sorting with effort O(D_t log D_t), or in linear time in D_t (see [6]). This effort is small compared to solving linear programs for modest values of D_t, particularly recalling that in SDDP's backward pass we must solve linear programs at all D_t nodes to compute a cut.

6 Computational results

We present computational results for applying SDDP with the upper bound estimators we describe in Sections 4 and 5 to two asset allocation models under our CVaR risk measure. We present results for our four new upper bound estimators: (i) U^n from equation (18); (ii) U^i from equation (23); (iii) U^h from equation (28); and (iv) U^m from equation (31). We compare their performance with that of the existing upper bound estimator from the literature: U^e from equation (20). The two asset allocation models we consider differ only in whether we include transaction costs or not. Without transaction costs we can use estimator U^h, but we can only use estimator U^m when we include transaction costs.

We begin with the asset allocation model without transaction costs. At stage t the decisions x_t denote the allocations (in units of a multiple of a base currency, say USD), and p_t denotes gross return per stage; i.e., the ratio of the price at stage t to that at stage t−1. These represent the only random parameters in the model. Without transaction costs, model (4) specializes to:

    Q_t(x_{t-1}, ξ_t) = min_{x_t, u_t}  −1' x_t + λ_{t+1} u_t + Q̄_{t+1}(x_t, u_t)
                        s.t.
                              1' x_t = p_t' x_{t-1}
                              x_t ≥ 0,

except that in the first stage: (i) the right-hand side is instead 1, and (ii) because −1' x_1 is then identically −1, we drop this constant from the objective function.

The assets in our allocation model consist of the stock market indices DJA, NDX, NYA, and OEX. We used monthly data for these indices from September 1985 until September 2011 to fit the multivariate log-normal distribution to the price ratios observed month-to-month. An empirical scenario tree was then constructed using Algorithm 1 by sampling from the log-normal distribution, using the polar method [18] for sampling the underlying normal distributions. The L'Ecuyer random generator [20] was used to generate the required uniform random variables. We implemented the SDDP algorithm in C++, using CPLEX [16] to solve the required linear programs and the Armadillo [1] library for matrix computations. The confidence level was set to α = 5%, and the risk coefficients λ_t, t = 2, ..., T, were set so that risk aversion increases in later stages. Table 1 shows the sizes of the empirical scenario trees formed by Algorithm 1 for our test problem instances. The approximation function, h_t(x_{t-1}, ξ_t), that we use for the importance sampling estimators U^i and U^h is simply our current wealth, which is determined by the previous stage decisions and the current price ratios:

    h_t(x_{t-1}, ξ_t) = p_t' x_{t-1}.
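The return-sampling step can be sketched in a few lines. The sketch below is a Python stand-in for the paper's C++/Armadillo implementation: it is univariate, whereas the paper fits a four-dimensional multivariate log-normal, and it uses Python's built-in generator rather than the polar method with L'Ecuyer uniforms:

```python
import math
import random
import statistics

def fit_lognormal(price_ratios):
    """Fit a log-normal to observed month-to-month gross returns by
    taking the sample mean and standard deviation of the log ratios."""
    logs = [math.log(r) for r in price_ratios]
    return statistics.fmean(logs), statistics.stdev(logs)

def sample_descendants(mu, sigma, D, seed=0):
    """Draw D equally weighted descendant gross returns, i.e. the
    support of the empirical stagewise pmf f_t of equation (21)."""
    rng = random.Random(seed)
    return [math.exp(rng.gauss(mu, sigma)) for _ in range(D)]
```

Because each descendant is an i.i.d. draw, the resulting tree satisfies the stagewise (interstage) independence that SDDP's cut sharing relies on.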

stages (T)    descendants per node (D_t)    total scenarios
2             50,000                        50,000

Table 1: Sizes of empirical scenario trees for test problem instances.

Note that this function meets the requirements of Assumption 1, because when we have no transaction costs the specific allocations in the vector x_{t-1} can be rebalanced with no loss, and hence total wealth determines the system's state.

Our primary purpose is to compare the upper bound estimators that we have developed. For this reason we ran the SDDP algorithm with each of the upper bound estimators until the algorithm reached nearly the same optimal value as estimated by the first stage master program's objective function; i.e., the lower bound z from step 2 of Algorithm 2 for the risk-averse model. Specifically, SDDP was terminated when z agreed across the four runs and did not improve by more than 10^{-6} over 10 iterations. A total of 100 iterations of SDDP sufficed to accomplish this for problem instances with T = 2, ..., 5, and a total of 200 iterations sufficed for the larger instances with T = 10 and 15.

For estimators U^n, U^i, and U^h on problem instances with T = 2, 3, 4, and 5 we used respective sample sizes of M = 1001, 501, 334, and 251. In this way, forming the estimators required solving around 1000 linear programming subproblems in each case. For T = 10 and 15 we used M = 1112 and 3572, so that forming the estimator required solving about 10,000 and 50,000 linear programs, respectively. For the estimator U^e we must specify a sample size M_t for each stage: For T = 2 we used M_2 = 1000. For T = 3 we used M_2 = M_3 = 32, because this means forming the estimator requires solving roughly 1000 linear programs, and this allows for a fair comparison with the single-path estimators U^n, U^i, and U^h. With similar reasoning, for T = 4 we used M_t = 11, and for T = 5 we used M_t = 6. And for the largest value of T for which we compute U^e, T = 10, we used M_t = 3.

T      z      U^n (s.d.)      U^i (s.d.)      U^h (s.d.)      U^e (s.d.)
2             (0.0020)        (0.0012)        (0.0011)        (0.0019)
3             (0.0145)        (0.0108)        (0.0060)        (0.0302)
4             (0.1472)        (0.1128)        (0.0126)        (0.0883)
5             (0.1031)        (0.1008)        (0.0303)        (0.5207)
10            ( )             ( )             (0.2562)        ( )
15            NA              NA              (0.6658)        NA

Table 2: Comparison of four upper bound estimators, including the point estimates and their standard deviations (s.d.) for the model with no transaction costs.

Table 2 shows results for the four estimators for the asset allocation model without transaction costs. These results were computed using the sample sizes that we indicate above, except that we formed 100 i.i.d. replicates of the estimators. For a particular problem instance, all 100 replicates used the same single run of 100 or 200 iterations of SDDP. Each cell in Table 2 reports the mean and standard deviation of the 100 replicates of the estimator. The table also shows the lower bound z for the models, obtained as we describe above. The estimators perform similarly for the two-stage

problem instance, but the advantages of the proposed estimator, U^h, are revealed as the number of stages grows. Note that the first three estimators degrade at T = 10 for reasons we discuss above involving recursive multiplication by α_t^{-1} = 20 along some sample paths. Due to this degradation we do not report results for these estimators for T = 15. We suspect it is for this same reason that the benefit of the importance sampling scheme is only fully realized when we include the indicator functions shown in equation (26); compare the performance of U^i and U^h in the table. For T = 2, ..., 5 the variance reduction of U^h relative to U^e grows from roughly 3 to 25 to 50 to 300. The smaller standard deviations of U^h could facilitate its use in a sensible stopping rule.

For our second set of problem instances the model incorporates transaction costs. This allows us to show how to implement our upper bound estimation procedure in a more complex model (as opposed to claiming the model is fully realistic for asset allocation). We consider the case in which transaction costs are proportional to the value of the assets sold or bought, and in particular that the fee is f = 0.3% of the transaction value. We must modify the rebalancing equation between stage t−1 and stage t to include the transaction costs of f 1'|x_t − x_{t-1}|, where the |·| function applies component-wise. Linearizing, we obtain the following special case of model (4):

    Q_t(x_{t-1}, ξ_t) = min_{x_t, z_t, u_t}  −1' x_t + λ_{t+1} u_t + Q̄_{t+1}(x_t, u_t)
                        s.t.  1' x_t + f 1' z_t = p_t' x_{t-1}
                              z_t ≥ x_t − x_{t-1}
                              z_t ≥ −(x_t − x_{t-1})
                              x_t ≥ 0.

We again use the approximation function h_t(x_{t-1}, ξ_t) = p_t' x_{t-1}. With nonzero transaction costs the conditions of Assumption 1 are no longer satisfied: Suppose in the third stage it is optimal to invest all money in stock A. Arriving at this point with the second stage portfolio consisting only of stock A is convenient because we need not rebalance and incur a transaction cost. A portfolio of less worth, in the sense of p_t' x_{t-1}, consisting only of stock A may be preferred to another portfolio with a larger value of p_t' x_{t-1} but consisting of other stocks. Fortunately, we can still compare some portfolios.
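The absolute-value linearization above can be sanity-checked directly: for a fixed x_t, the smallest z_t feasible for the two inequality constraints is the componentwise absolute difference, so minimization drives z_t to |x_t − x_{t-1}| and the budget row charges f 1'|x_t − x_{t-1}|. A small sketch (our own names):

```python
def rebalance_cost(x_new, x_old, fee):
    """Transaction cost fee * 1'|x_t - x_{t-1}|.  In the LP the
    auxiliary variables satisfy z >= x_new - x_old and
    z >= -(x_new - x_old); the tight choice z = |x_new - x_old| is
    feasible and is what minimization selects."""
    z = [abs(a - b) for a, b in zip(x_new, x_old)]
    # verify the tight choice satisfies both linear inequalities
    assert all(zi >= a - b and zi >= b - a
               for zi, a, b in zip(z, x_new, x_old))
    return fee * sum(z)
```

For instance, fully swapping one unit from one asset into another trades volume 2, so with f = 0.3% the fee is 0.006.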
Consider the worst case scenario in which we must rebalance the entire portfolio; i.e., sell all our assets and buy some other assets at the stage. This would reduce the total portfolio value by a factor of (1 − f)/(1 + f). However, this portfolio is still better than any portfolio whose total value is smaller by a factor of (1 − f)/(1 + f). This leads us to the margin function given by:

    m_t = [(1 − f)/(1 + f)] VaR_α[h_t].

Functions h_t and m_t satisfy Assumption 2 and we can apply the upper bound estimator U^m. This construction, of course, increases the bias of the estimator, as we indicate in Proposition 3. However, if the transaction costs are modest compared to market volatility, we may expect our estimator to provide reasonable results. Table 3 reports results in the same manner as Table 2, now comparing U^m and U^e. The value z is computed in the same way we describe above. From Table 2 we see that the point estimate U^h as a percentage of z drops from 99.8% to 98.8% to 95.6% for T = 5, 10, and 15, respectively. The analogous values for U^m from Table 3 are weaker, as expected, dropping from 99.6% to 98.7% to 90.5%. Note that these same values for U^e for T = 5 are 78.9% without transaction costs and 75.3% with transaction costs. For T = 2, 3, 4, 5 the variance reduction of U^m over U^e grows from roughly 3 to 20 to 40 to 400, again indicating that our proposed upper bound estimator is superior to the previously available estimator.
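The factor (1 − f)/(1 + f) follows from the budget constraint under a full turnover: assuming the sold holdings are valued at W, buying new assets worth 1'x_t requires 1'x_t + f(1'x_t + W) = W, so 1'x_t = W(1 − f)/(1 + f). A quick check of this arithmetic and of the resulting margin function (our own sketch):

```python
def worst_case_factor(fee):
    """Fraction of wealth remaining after the entire portfolio turns
    over: solve x + fee * (x + W) = W for x, with W normalized to 1."""
    return (1 - fee) / (1 + fee)

def margin(var_h, fee):
    """Margin function m_t = (1 - f)/(1 + f) * VaR_alpha[h_t]."""
    return worst_case_factor(fee) * var_h
```

With f = 0.3% the factor is about 0.994, i.e. a full rebalance costs roughly 0.6% of wealth, which is why the resulting margin sits only slightly below the VaR level of h_t.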


More information

Christos Papadimitriou & Luca Trevisan November 22, 2016

Christos Papadimitriou & Luca Trevisan November 22, 2016 U.C. Bereley CS170: Algorihms Handou LN-11-22 Chrisos Papadimiriou & Luca Trevisan November 22, 2016 Sreaming algorihms In his lecure and he nex one we sudy memory-efficien algorihms ha process a sream

More information

Article from. Predictive Analytics and Futurism. July 2016 Issue 13

Article from. Predictive Analytics and Futurism. July 2016 Issue 13 Aricle from Predicive Analyics and Fuurism July 6 Issue An Inroducion o Incremenal Learning By Qiang Wu and Dave Snell Machine learning provides useful ools for predicive analyics The ypical machine learning

More information

Linear Response Theory: The connection between QFT and experiments

Linear Response Theory: The connection between QFT and experiments Phys540.nb 39 3 Linear Response Theory: The connecion beween QFT and experimens 3.1. Basic conceps and ideas Q: How do we measure he conduciviy of a meal? A: we firs inroduce a weak elecric field E, and

More information

Lecture 33: November 29

Lecture 33: November 29 36-705: Inermediae Saisics Fall 2017 Lecurer: Siva Balakrishnan Lecure 33: November 29 Today we will coninue discussing he boosrap, and hen ry o undersand why i works in a simple case. In he las lecure

More information

Bias in Conditional and Unconditional Fixed Effects Logit Estimation: a Correction * Tom Coupé

Bias in Conditional and Unconditional Fixed Effects Logit Estimation: a Correction * Tom Coupé Bias in Condiional and Uncondiional Fixed Effecs Logi Esimaion: a Correcion * Tom Coupé Economics Educaion and Research Consorium, Naional Universiy of Kyiv Mohyla Academy Address: Vul Voloska 10, 04070

More information

Econ107 Applied Econometrics Topic 7: Multicollinearity (Studenmund, Chapter 8)

Econ107 Applied Econometrics Topic 7: Multicollinearity (Studenmund, Chapter 8) I. Definiions and Problems A. Perfec Mulicollineariy Econ7 Applied Economerics Topic 7: Mulicollineariy (Sudenmund, Chaper 8) Definiion: Perfec mulicollineariy exiss in a following K-variable regression

More information

STATE-SPACE MODELLING. A mass balance across the tank gives:

STATE-SPACE MODELLING. A mass balance across the tank gives: B. Lennox and N.F. Thornhill, 9, Sae Space Modelling, IChemE Process Managemen and Conrol Subjec Group Newsleer STE-SPACE MODELLING Inroducion: Over he pas decade or so here has been an ever increasing

More information

Solutions to Odd Number Exercises in Chapter 6

Solutions to Odd Number Exercises in Chapter 6 1 Soluions o Odd Number Exercises in 6.1 R y eˆ 1.7151 y 6.3 From eˆ ( T K) ˆ R 1 1 SST SST SST (1 R ) 55.36(1.7911) we have, ˆ 6.414 T K ( ) 6.5 y ye ye y e 1 1 Consider he erms e and xe b b x e y e b

More information

Exponential Weighted Moving Average (EWMA) Chart Under The Assumption of Moderateness And Its 3 Control Limits

Exponential Weighted Moving Average (EWMA) Chart Under The Assumption of Moderateness And Its 3 Control Limits DOI: 0.545/mjis.07.5009 Exponenial Weighed Moving Average (EWMA) Char Under The Assumpion of Moderaeness And Is 3 Conrol Limis KALPESH S TAILOR Assisan Professor, Deparmen of Saisics, M. K. Bhavnagar Universiy,

More information

Notes on Kalman Filtering

Notes on Kalman Filtering Noes on Kalman Filering Brian Borchers and Rick Aser November 7, Inroducion Daa Assimilaion is he problem of merging model predicions wih acual measuremens of a sysem o produce an opimal esimae of he curren

More information

Appendix to Creating Work Breaks From Available Idleness

Appendix to Creating Work Breaks From Available Idleness Appendix o Creaing Work Breaks From Available Idleness Xu Sun and Ward Whi Deparmen of Indusrial Engineering and Operaions Research, Columbia Universiy, New York, NY, 127; {xs2235,ww24}@columbia.edu Sepember

More information

3.1 More on model selection

3.1 More on model selection 3. More on Model selecion 3. Comparing models AIC, BIC, Adjused R squared. 3. Over Fiing problem. 3.3 Sample spliing. 3. More on model selecion crieria Ofen afer model fiing you are lef wih a handful of

More information

DEPARTMENT OF STATISTICS

DEPARTMENT OF STATISTICS A Tes for Mulivariae ARCH Effecs R. Sco Hacker and Abdulnasser Haemi-J 004: DEPARTMENT OF STATISTICS S-0 07 LUND SWEDEN A Tes for Mulivariae ARCH Effecs R. Sco Hacker Jönköping Inernaional Business School

More information

Time series Decomposition method

Time series Decomposition method Time series Decomposiion mehod A ime series is described using a mulifacor model such as = f (rend, cyclical, seasonal, error) = f (T, C, S, e) Long- Iner-mediaed Seasonal Irregular erm erm effec, effec,

More information

Lecture 2-1 Kinematics in One Dimension Displacement, Velocity and Acceleration Everything in the world is moving. Nothing stays still.

Lecture 2-1 Kinematics in One Dimension Displacement, Velocity and Acceleration Everything in the world is moving. Nothing stays still. Lecure - Kinemaics in One Dimension Displacemen, Velociy and Acceleraion Everyhing in he world is moving. Nohing says sill. Moion occurs a all scales of he universe, saring from he moion of elecrons in

More information

How to Deal with Structural Breaks in Practical Cointegration Analysis

How to Deal with Structural Breaks in Practical Cointegration Analysis How o Deal wih Srucural Breaks in Pracical Coinegraion Analysis Roselyne Joyeux * School of Economic and Financial Sudies Macquarie Universiy December 00 ABSTRACT In his noe we consider he reamen of srucural

More information

Lecture 2 October ε-approximation of 2-player zero-sum games

Lecture 2 October ε-approximation of 2-player zero-sum games Opimizaion II Winer 009/10 Lecurer: Khaled Elbassioni Lecure Ocober 19 1 ε-approximaion of -player zero-sum games In his lecure we give a randomized ficiious play algorihm for obaining an approximae soluion

More information

A Hop Constrained Min-Sum Arborescence with Outage Costs

A Hop Constrained Min-Sum Arborescence with Outage Costs A Hop Consrained Min-Sum Arborescence wih Ouage Coss Rakesh Kawara Minnesoa Sae Universiy, Mankao, MN 56001 Email: Kawara@mnsu.edu Absrac The hop consrained min-sum arborescence wih ouage coss problem

More information

Problem Set 5. Graduate Macro II, Spring 2017 The University of Notre Dame Professor Sims

Problem Set 5. Graduate Macro II, Spring 2017 The University of Notre Dame Professor Sims Problem Se 5 Graduae Macro II, Spring 2017 The Universiy of Nore Dame Professor Sims Insrucions: You may consul wih oher members of he class, bu please make sure o urn in your own work. Where applicable,

More information

BU Macro BU Macro Fall 2008, Lecture 4

BU Macro BU Macro Fall 2008, Lecture 4 Dynamic Programming BU Macro 2008 Lecure 4 1 Ouline 1. Cerainy opimizaion problem used o illusrae: a. Resricions on exogenous variables b. Value funcion c. Policy funcion d. The Bellman equaion and an

More information

10. State Space Methods

10. State Space Methods . Sae Space Mehods. Inroducion Sae space modelling was briefly inroduced in chaper. Here more coverage is provided of sae space mehods before some of heir uses in conrol sysem design are covered in he

More information

1. An introduction to dynamic optimization -- Optimal Control and Dynamic Programming AGEC

1. An introduction to dynamic optimization -- Optimal Control and Dynamic Programming AGEC This documen was generaed a :45 PM 8/8/04 Copyrigh 04 Richard T. Woodward. An inroducion o dynamic opimizaion -- Opimal Conrol and Dynamic Programming AGEC 637-04 I. Overview of opimizaion Opimizaion is

More information

This document was generated at 1:04 PM, 09/10/13 Copyright 2013 Richard T. Woodward. 4. End points and transversality conditions AGEC

This document was generated at 1:04 PM, 09/10/13 Copyright 2013 Richard T. Woodward. 4. End points and transversality conditions AGEC his documen was generaed a 1:4 PM, 9/1/13 Copyrigh 213 Richard. Woodward 4. End poins and ransversaliy condiions AGEC 637-213 F z d Recall from Lecure 3 ha a ypical opimal conrol problem is o maimize (,,

More information

12: AUTOREGRESSIVE AND MOVING AVERAGE PROCESSES IN DISCRETE TIME. Σ j =

12: AUTOREGRESSIVE AND MOVING AVERAGE PROCESSES IN DISCRETE TIME. Σ j = 1: AUTOREGRESSIVE AND MOVING AVERAGE PROCESSES IN DISCRETE TIME Moving Averages Recall ha a whie noise process is a series { } = having variance σ. The whie noise process has specral densiy f (λ) = of

More information

Final Spring 2007

Final Spring 2007 .615 Final Spring 7 Overview The purpose of he final exam is o calculae he MHD β limi in a high-bea oroidal okamak agains he dangerous n = 1 exernal ballooning-kink mode. Effecively, his corresponds o

More information

SZG Macro 2011 Lecture 3: Dynamic Programming. SZG macro 2011 lecture 3 1

SZG Macro 2011 Lecture 3: Dynamic Programming. SZG macro 2011 lecture 3 1 SZG Macro 2011 Lecure 3: Dynamic Programming SZG macro 2011 lecure 3 1 Background Our previous discussion of opimal consumpion over ime and of opimal capial accumulaion sugges sudying he general decision

More information

Simulation-Solving Dynamic Models ABE 5646 Week 2, Spring 2010

Simulation-Solving Dynamic Models ABE 5646 Week 2, Spring 2010 Simulaion-Solving Dynamic Models ABE 5646 Week 2, Spring 2010 Week Descripion Reading Maerial 2 Compuer Simulaion of Dynamic Models Finie Difference, coninuous saes, discree ime Simple Mehods Euler Trapezoid

More information

Maintenance Models. Prof. Robert C. Leachman IEOR 130, Methods of Manufacturing Improvement Spring, 2011

Maintenance Models. Prof. Robert C. Leachman IEOR 130, Methods of Manufacturing Improvement Spring, 2011 Mainenance Models Prof Rober C Leachman IEOR 3, Mehods of Manufacuring Improvemen Spring, Inroducion The mainenance of complex equipmen ofen accouns for a large porion of he coss associaed wih ha equipmen

More information

Two Popular Bayesian Estimators: Particle and Kalman Filters. McGill COMP 765 Sept 14 th, 2017

Two Popular Bayesian Estimators: Particle and Kalman Filters. McGill COMP 765 Sept 14 th, 2017 Two Popular Bayesian Esimaors: Paricle and Kalman Filers McGill COMP 765 Sep 14 h, 2017 1 1 1, dx x Bel x u x P x z P Recall: Bayes Filers,,,,,,, 1 1 1 1 u z u x P u z u x z P Bayes z = observaion u =

More information

Lecture Notes 5: Investment

Lecture Notes 5: Investment Lecure Noes 5: Invesmen Zhiwei Xu (xuzhiwei@sju.edu.cn) Invesmen decisions made by rms are one of he mos imporan behaviors in he economy. As he invesmen deermines how he capials accumulae along he ime,

More information

Rapid Termination Evaluation for Recursive Subdivision of Bezier Curves

Rapid Termination Evaluation for Recursive Subdivision of Bezier Curves Rapid Terminaion Evaluaion for Recursive Subdivision of Bezier Curves Thomas F. Hain School of Compuer and Informaion Sciences, Universiy of Souh Alabama, Mobile, AL, U.S.A. Absrac Bézier curve flaening

More information

Hypothesis Testing in the Classical Normal Linear Regression Model. 1. Components of Hypothesis Tests

Hypothesis Testing in the Classical Normal Linear Regression Model. 1. Components of Hypothesis Tests ECONOMICS 35* -- NOTE 8 M.G. Abbo ECON 35* -- NOTE 8 Hypohesis Tesing in he Classical Normal Linear Regression Model. Componens of Hypohesis Tess. A esable hypohesis, which consiss of wo pars: Par : a

More information

UNIVERSITY OF OSLO DEPARTMENT OF ECONOMICS

UNIVERSITY OF OSLO DEPARTMENT OF ECONOMICS UNIVERSITY OF OSLO DEPARTMENT OF ECONOMICS Exam: ECON4325 Moneary Policy Dae of exam: Tuesday, May 24, 206 Grades are given: June 4, 206 Time for exam: 2.30 p.m. 5.30 p.m. The problem se covers 5 pages

More information

5. Stochastic processes (1)

5. Stochastic processes (1) Lec05.pp S-38.45 - Inroducion o Teleraffic Theory Spring 2005 Conens Basic conceps Poisson process 2 Sochasic processes () Consider some quaniy in a eleraffic (or any) sysem I ypically evolves in ime randomly

More information

Essential Microeconomics : OPTIMAL CONTROL 1. Consider the following class of optimization problems

Essential Microeconomics : OPTIMAL CONTROL 1. Consider the following class of optimization problems Essenial Microeconomics -- 6.5: OPIMAL CONROL Consider he following class of opimizaion problems Max{ U( k, x) + U+ ( k+ ) k+ k F( k, x)}. { x, k+ } = In he language of conrol heory, he vecor k is he vecor

More information

Longest Common Prefixes

Longest Common Prefixes Longes Common Prefixes The sandard ordering for srings is he lexicographical order. I is induced by an order over he alphabe. We will use he same symbols (,

More information

Lecture 2 April 04, 2018

Lecture 2 April 04, 2018 Sas 300C: Theory of Saisics Spring 208 Lecure 2 April 04, 208 Prof. Emmanuel Candes Scribe: Paulo Orensein; edied by Sephen Baes, XY Han Ouline Agenda: Global esing. Needle in a Haysack Problem 2. Threshold

More information

Chapter 2. First Order Scalar Equations

Chapter 2. First Order Scalar Equations Chaper. Firs Order Scalar Equaions We sar our sudy of differenial equaions in he same way he pioneers in his field did. We show paricular echniques o solve paricular ypes of firs order differenial equaions.

More information

6. Stochastic calculus with jump processes

6. Stochastic calculus with jump processes A) Trading sraegies (1/3) Marke wih d asses S = (S 1,, S d ) A rading sraegy can be modelled wih a vecor φ describing he quaniies invesed in each asse a each insan : φ = (φ 1,, φ d ) The value a of a porfolio

More information

Math 10B: Mock Mid II. April 13, 2016

Math 10B: Mock Mid II. April 13, 2016 Name: Soluions Mah 10B: Mock Mid II April 13, 016 1. ( poins) Sae, wih jusificaion, wheher he following saemens are rue or false. (a) If a 3 3 marix A saisfies A 3 A = 0, hen i canno be inverible. True.

More information

13.3 Term structure models

13.3 Term structure models 13.3 Term srucure models 13.3.1 Expecaions hypohesis model - Simples "model" a) shor rae b) expecaions o ge oher prices Resul: y () = 1 h +1 δ = φ( δ)+ε +1 f () = E (y +1) (1) =δ + φ( δ) f (3) = E (y +)

More information

A Dynamic Model of Economic Fluctuations

A Dynamic Model of Economic Fluctuations CHAPTER 15 A Dynamic Model of Economic Flucuaions Modified for ECON 2204 by Bob Murphy 2016 Worh Publishers, all righs reserved IN THIS CHAPTER, OU WILL LEARN: how o incorporae dynamics ino he AD-AS model

More information

References are appeared in the last slide. Last update: (1393/08/19)

References are appeared in the last slide. Last update: (1393/08/19) SYSEM IDEIFICAIO Ali Karimpour Associae Professor Ferdowsi Universi of Mashhad References are appeared in he las slide. Las updae: 0..204 393/08/9 Lecure 5 lecure 5 Parameer Esimaion Mehods opics o be

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION SUPPLEMENTARY INFORMATION DOI: 0.038/NCLIMATE893 Temporal resoluion and DICE * Supplemenal Informaion Alex L. Maren and Sephen C. Newbold Naional Cener for Environmenal Economics, US Environmenal Proecion

More information

Chapter 3 Boundary Value Problem

Chapter 3 Boundary Value Problem Chaper 3 Boundary Value Problem A boundary value problem (BVP) is a problem, ypically an ODE or a PDE, which has values assigned on he physical boundary of he domain in which he problem is specified. Le

More information

Excel-Based Solution Method For The Optimal Policy Of The Hadley And Whittin s Exact Model With Arma Demand

Excel-Based Solution Method For The Optimal Policy Of The Hadley And Whittin s Exact Model With Arma Demand Excel-Based Soluion Mehod For The Opimal Policy Of The Hadley And Whiin s Exac Model Wih Arma Demand Kal Nami School of Business and Economics Winson Salem Sae Universiy Winson Salem, NC 27110 Phone: (336)750-2338

More information

Some Basic Information about M-S-D Systems

Some Basic Information about M-S-D Systems Some Basic Informaion abou M-S-D Sysems 1 Inroducion We wan o give some summary of he facs concerning unforced (homogeneous) and forced (non-homogeneous) models for linear oscillaors governed by second-order,

More information

11!Hí MATHEMATICS : ERDŐS AND ULAM PROC. N. A. S. of decomposiion, properly speaking) conradics he possibiliy of defining a counably addiive real-valu

11!Hí MATHEMATICS : ERDŐS AND ULAM PROC. N. A. S. of decomposiion, properly speaking) conradics he possibiliy of defining a counably addiive real-valu ON EQUATIONS WITH SETS AS UNKNOWNS BY PAUL ERDŐS AND S. ULAM DEPARTMENT OF MATHEMATICS, UNIVERSITY OF COLORADO, BOULDER Communicaed May 27, 1968 We shall presen here a number of resuls in se heory concerning

More information

IMPLICIT AND INVERSE FUNCTION THEOREMS PAUL SCHRIMPF 1 OCTOBER 25, 2013

IMPLICIT AND INVERSE FUNCTION THEOREMS PAUL SCHRIMPF 1 OCTOBER 25, 2013 IMPLICI AND INVERSE FUNCION HEOREMS PAUL SCHRIMPF 1 OCOBER 25, 213 UNIVERSIY OF BRIISH COLUMBIA ECONOMICS 526 We have exensively sudied how o solve sysems of linear equaions. We know how o check wheher

More information

CENTRALIZED VERSUS DECENTRALIZED PRODUCTION PLANNING IN SUPPLY CHAINS

CENTRALIZED VERSUS DECENTRALIZED PRODUCTION PLANNING IN SUPPLY CHAINS CENRALIZED VERSUS DECENRALIZED PRODUCION PLANNING IN SUPPLY CHAINS Georges SAHARIDIS* a, Yves DALLERY* a, Fikri KARAESMEN* b * a Ecole Cenrale Paris Deparmen of Indusial Engineering (LGI), +3343388, saharidis,dallery@lgi.ecp.fr

More information

Let us start with a two dimensional case. We consider a vector ( x,

Let us start with a two dimensional case. We consider a vector ( x, Roaion marices We consider now roaion marices in wo and hree dimensions. We sar wih wo dimensions since wo dimensions are easier han hree o undersand, and one dimension is a lile oo simple. However, our

More information

Physics 127b: Statistical Mechanics. Fokker-Planck Equation. Time Evolution

Physics 127b: Statistical Mechanics. Fokker-Planck Equation. Time Evolution Physics 7b: Saisical Mechanics Fokker-Planck Equaion The Langevin equaion approach o he evoluion of he velociy disribuion for he Brownian paricle migh leave you uncomforable. A more formal reamen of his

More information

Two Coupled Oscillators / Normal Modes

Two Coupled Oscillators / Normal Modes Lecure 3 Phys 3750 Two Coupled Oscillaors / Normal Modes Overview and Moivaion: Today we ake a small, bu significan, sep owards wave moion. We will no ye observe waves, bu his sep is imporan in is own

More information

Expert Advice for Amateurs

Expert Advice for Amateurs Exper Advice for Amaeurs Ernes K. Lai Online Appendix - Exisence of Equilibria The analysis in his secion is performed under more general payoff funcions. Wihou aking an explici form, he payoffs of he

More information

Speaker Adaptation Techniques For Continuous Speech Using Medium and Small Adaptation Data Sets. Constantinos Boulis

Speaker Adaptation Techniques For Continuous Speech Using Medium and Small Adaptation Data Sets. Constantinos Boulis Speaker Adapaion Techniques For Coninuous Speech Using Medium and Small Adapaion Daa Ses Consaninos Boulis Ouline of he Presenaion Inroducion o he speaker adapaion problem Maximum Likelihood Sochasic Transformaions

More information

2. Nonlinear Conservation Law Equations

2. Nonlinear Conservation Law Equations . Nonlinear Conservaion Law Equaions One of he clear lessons learned over recen years in sudying nonlinear parial differenial equaions is ha i is generally no wise o ry o aack a general class of nonlinear

More information

Morning Time: 1 hour 30 minutes Additional materials (enclosed):

Morning Time: 1 hour 30 minutes Additional materials (enclosed): ADVANCED GCE 78/0 MATHEMATICS (MEI) Differenial Equaions THURSDAY JANUARY 008 Morning Time: hour 30 minues Addiional maerials (enclosed): None Addiional maerials (required): Answer Bookle (8 pages) Graph

More information

Echocardiography Project and Finite Fourier Series

Echocardiography Project and Finite Fourier Series Echocardiography Projec and Finie Fourier Series 1 U M An echocardiagram is a plo of how a porion of he hear moves as he funcion of ime over he one or more hearbea cycles If he hearbea repeas iself every

More information