Robust Discrete Optimization Under Ellipsoidal Uncertainty Sets


Dimitris Bertsimas*   Melvyn Sim†

March, 2004

Abstract

We address the complexity and practically efficient methods for robust discrete optimization under ellipsoidal uncertainty sets. Specifically, we show that the robust counterpart of a discrete optimization problem with correlated objective function data is NP-hard even though the nominal problem is polynomially solvable. For uncorrelated and identically distributed data, however, we show that the robust problem retains the complexity of the nominal problem. For uncorrelated, but not identically distributed data we propose an approximation method that solves the robust problem within arbitrary accuracy. We also propose a Frank-Wolfe type algorithm for this case, which we prove converges to a locally optimal solution, and in computational experiments is remarkably effective. Finally, we propose a generalization of the robust discrete optimization framework we proposed earlier that (a) allows the key parameter that controls the tradeoff between robustness and optimality to depend on the solution and (b) results in increased flexibility and decreased conservatism, while maintaining the complexity of the nominal problem.

* Boeing Professor of Operations Research, Sloan School of Management and Operations Research Center, Massachusetts Institute of Technology, E53-363, Cambridge, MA 02139, dbertsim@mit.edu. The research of the author was partially supported by the Singapore-MIT alliance.
† NUS Business School, National University of Singapore, dscsimm@nus.edu.sg.

1 Introduction

Robust optimization as a method to address uncertainty in optimization problems has been at the center of a lot of research activity. Ben-Tal and Nemirovski [1, 2, 3] and El-Ghaoui et al. [7, 8] propose efficient algorithms to solve certain classes of convex optimization problems under data uncertainty described by ellipsoidal sets. Kouvelis and Yu [15] propose a framework for robust discrete optimization, which seeks to find a solution that minimizes the worst case performance under a set of scenarios for the data. Unfortunately, under their approach, the robust counterpart of a polynomially solvable discrete optimization problem can be NP-hard. Bertsimas and Sim [5, 6] propose an approach for solving robust discrete optimization problems that has the flexibility of adjusting the level of conservativeness of the solution while preserving the computational complexity of the nominal problem. This is attractive as it shows that adding robustness does not come at the price of a change in computational complexity. Ishii et al. [12] consider a stochastic minimum spanning tree problem with costs that are independently and normally distributed, leading to a framework similar to robust optimization with an ellipsoidal uncertainty set. However, to the best of our knowledge, there have not been any complexity results on robust discrete optimization under ellipsoidal uncertainty sets. It is thus natural to ask whether adding robustness to the cost function of a given discrete optimization problem under an ellipsoidal uncertainty set leads to a change in computational complexity, and whether we can develop practically efficient methods to solve robust discrete optimization problems under ellipsoidal uncertainty sets. Our objective in this paper is to address these questions.

Specifically, our contributions include:

(a) Under a general ellipsoidal uncertainty set that models correlated data, we show that the robust counterpart can be NP-hard even though the nominal problem is polynomially solvable, in contrast with the uncertainty sets proposed in Bertsimas and Sim [5, 6].

(b) Under an ellipsoidal uncertainty set with uncorrelated data, we show that the robust problem can be reduced to solving a collection of nominal problems with different linear objectives. If the distributions are identical, we show that we only need to solve r + 1 nominal problems, where r is the number of uncertain cost components; that is, in this case the computational complexity is preserved. Under uncorrelated data, we propose an approximation method that solves the robust problem within an additive ε. The complexity of the method is O((Ω²·n·d_max)^(1/4)·ε^(−1/2)), where d_max is the largest number in the data describing the ellipsoidal set. We also propose a Frank-Wolfe

type algorithm for this case, which we prove converges to a locally optimal solution, and in computational experiments is remarkably effective. We also link the robust problem with uncorrelated data to classical problems in parametric discrete optimization.

(c) We propose a generalization of the robust discrete optimization framework in Bertsimas and Sim [5] that allows the key parameter that controls the tradeoff between robustness and optimality to depend on the solution. This generalization results in increased flexibility and decreased conservatism, while maintaining the complexity of the nominal problem.

Structure of the paper. In Section 2, we formulate robust discrete optimization problems under ellipsoidal uncertainty sets (correlated data) and show that the problem is NP-hard even for nominal problems that are polynomially solvable. In Section 3, we present structural results and establish that the robust problem under ball uncertainty (uncorrelated and identically distributed data) has the same complexity as the nominal problem. In Sections 4 and 5, we propose approximation methods for the robust problem under ellipsoidal uncertainty sets with uncorrelated but not identically distributed data. In Section 6, we present the generalization of the robust discrete optimization framework in Bertsimas and Sim [5]. In Section 7, we present some experimental findings relating to the computation speed and the quality of robust solutions. The final section contains some concluding remarks.

2 Formulation of Robust Discrete Optimization Problems

A nominal discrete optimization problem is:

    minimize c'x
    subject to x ∈ X,                                  (1)

with X ⊆ {0,1}^n. We are interested in problems where each entry c̃_j, j ∈ N = {1,…,n}, is uncertain and described by an uncertainty set C. Under the robust optimization paradigm, we solve

    minimize max_{c̃∈C} c̃'x
    subject to x ∈ X.                                  (2)

Writing c̃ = c + s̃, where c is the nominal value and the deviation s̃ is restricted to the set D = C − c, Problem (2) becomes:

    minimize c'x + β(x)
    subject to x ∈ X,                                  (3)

where β(x) = max_{s̃∈D} s̃'x. Special cases of Formulation (3) include:

(a) D = {s̃ : s̃_j ∈ [0, d_j]}, leading to β(x) = d'x.

(b) D = {s̃ : ‖Σ^(−1/2) s̃‖ ≤ Ω}, which models the ellipsoidal uncertainty sets proposed by Ben-Tal and Nemirovski [1, 2, 3] and El-Ghaoui et al. [7, 8]. It easily follows that β(x) = Ω√(x'Σx), where Σ is the covariance matrix of the random cost coefficients. For the special case that Σ = diag(d_1,…,d_n), i.e., the random cost coefficients are uncorrelated, we obtain β(x) = Ω√(Σ_{j∈N} d_j x_j²) = Ω√(d'x).

(c) D = {s̃ : 0 ≤ s̃_j ≤ d_j ∀j ∈ J, Σ_{k∈N} s̃_k/d_k ≤ Γ}, proposed in Bertsimas and Sim [6]. It follows that in this case β(x) = max_{S: |S|=Γ, S⊆J} Σ_{j∈S} d_j x_j, where J is the set of random cost components. Bertsimas and Sim [6] show that Problem (3) reduces to solving at most |J| + 1 nominal problems for different cost vectors. In other words, the robust counterpart is polynomially solvable if the nominal problem is polynomially solvable.

Under models (a) and (c), robustness preserves the computational complexity of the nominal problem. Our objective in this paper is to investigate the price (in increased complexity) of robustness under ellipsoidal uncertainty sets (model (b)) and to propose effective algorithmic methods to tackle models (b), (c). Our first result is unfortunately negative: under ellipsoidal uncertainty sets with general covariance matrices, the price of robustness is an increase in computational complexity. The robust counterpart may become NP-hard even though the nominal problem is polynomially solvable.

Theorem 1 The robust problem (3) with β(x) = Ω√(x'Σx) (Model (b)) is NP-hard for the following classes of polynomially solvable nominal problems: shortest path, minimum cost assignment, resource scheduling, minimum spanning tree.

Proof: Kouvelis and Yu [15] prove that the problem

    minimize max{c_1'x, c_2'x}
    subject to x ∈ X                                   (4)

is NP-hard for the polynomially solvable problems mentioned in the statement of the theorem. We show a simple transformation of Problem (4) to Problem (3) with β(x) = Ω√(x'Σx) as follows:

    max{c_1'x, c_2'x} = max{ (c_1'x + c_2'x)/2 + (c_1'x − c_2'x)/2,  (c_1'x + c_2'x)/2 − (c_1'x − c_2'x)/2 }
                      = (c_1'x + c_2'x)/2 + max{ (c_1'x − c_2'x)/2,  −(c_1'x − c_2'x)/2 }
                      = (c_1'x + c_2'x)/2 + |c_1'x − c_2'x|/2
                      = (c_1'x + c_2'x)/2 + (1/2)√(x'(c_1 − c_2)(c_1 − c_2)'x).

The NP-hard Problem (4) is thus transformed to Problem (3) with β(x) = Ω√(x'Σx), c = (c_1 + c_2)/2, Ω = 1/2 and Σ = (c_1 − c_2)(c_1 − c_2)'. Thus, Problem (3) with β(x) = Ω√(x'Σx) is NP-hard.

We next would like to propose methods for model (b) with Σ = diag(d_1,…,d_n), i.e., uncorrelated random cost coefficients. We are thus naturally led to consider the problem

    G* = min_{x∈X} c'x + f(d'x)                         (5)

with f(·) a concave function. In particular, f(w) = Ω√w models ellipsoidal uncertainty sets with uncorrelated random cost coefficients (model (b)).

3 Structural Results

We first show that Problem (5) reduces to solving a number of nominal problems (1). Let W = {d'x | x ∈ {0,1}^n} and let α(w) be a subgradient of the concave function f(·) evaluated at w, that is,

    f(u) − f(w) ≤ α(w)(u − w)   ∀u ∈ ℝ.

If f(·) is differentiable and f'(0) = ∞, we choose

    α(w) = f'(w)                     if w ∈ W\{0},
    α(0) = (f(d_min) − f(0))/d_min   if w = 0,

where d_min = min_{j: d_j>0} d_j.

Theorem 2 Let

    Z(w) = min_{x∈X} (c + α(w)d)'x + f(w) − w·α(w)      (6)

and w* = argmin_{w∈W} Z(w). Then an optimal solution of Problem (6) for w = w* is an optimal solution to Problem (5), and G* = Z(w*).

Proof: We first show that G* ≥ min_{w∈W} Z(w). Let x* be an optimal solution to Problem (5) and w* = d'x* ∈ W. We have

    G* = c'x* + f(d'x*) = c'x* + f(w*) = (c + α(w*)d)'x* + f(w*) − w*·α(w*)
       ≥ min_{x∈X} (c + α(w*)d)'x + f(w*) − w*·α(w*) = Z(w*) ≥ min_{w∈W} Z(w).

Conversely, for any w ∈ W, let y_w be an optimal solution to Problem (6). We have

    Z(w) = (c + α(w)d)'y_w + f(w) − w·α(w)
         = c'y_w + f(d'y_w) + α(w)(d'y_w − w) − (f(d'y_w) − f(w))
         ≥ c'y_w + f(d'y_w)                              (7)
         ≥ min_{x∈X} c'x + f(d'x) = G*,

where inequality (7) for w ∈ W\{0} follows since α(w) is a subgradient. To see that inequality (7) also holds for w = 0, we argue as follows. Since f(·) is concave and v ≥ d_min for all v ∈ W\{0}, we have

    f(d_min) ≥ (d_min/v)·f(v) + (1 − d_min/v)·f(0)   ∀v ∈ W\{0}.

Rearranging, we have

    (f(v) − f(0))/v ≤ (f(d_min) − f(0))/d_min = α(0)   ∀v ∈ W\{0},

leading to α(0)(d'y_w − 0) − (f(d'y_w) − f(0)) ≥ 0. Therefore, G* = min_{w∈W} Z(w).

Note that when d_j = 1 for all j, then W = {0, 1,…, n}, and thus |W| = n + 1. In this case, Problem (5) reduces to solving n + 1 nominal problems (6), i.e., polynomial solvability is preserved. Specifically, for the case of an ellipsoidal uncertainty set with Σ = I, leading to β(x) = Ω√(Σ_j x_j²) = Ω√(e'x), we derive explicitly the subproblems involved.

Proposition 1 Under an ellipsoidal uncertainty set with β(x) = Ω√(e'x),

    G* = min_{w=0,1,…,n} Z(w),

where

    Z(w) = min_{x∈X} (c + (Ω/(2√w))e)'x + Ω√w/2   if w = 1,…,n,
    Z(0) = min_{x∈X} (c + Ωe)'x                   if w = 0.          (8)

Proof: With β(x) = Ω√(e'x), we have f(w) = Ω√w. Substituting α(w) = f'(w) = Ω/(2√w) for w ∈ W\{0} and α(0) = (f(d_min) − f(0))/d_min = f(1) − f(0) = Ω into Eq. (6), we obtain Eq. (8).

An immediate corollary of Theorem 2 is to consider a parametric approach as follows:

Corollary 1 An optimal solution to Problem (5) coincides with one of the optimal solutions to the parametric problem:

    minimize (c + θd)'x
    subject to x ∈ X                                    (9)

for θ ∈ [α(e'd), α(0)].
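The reduction of Theorem 2 translates directly into a procedure. The following sketch is illustrative, not from the paper: the function name `robust_min` and the explicit `feasible` list standing in for X are ours, and a real implementation would replace the inner minimization by a routine optimizing a linear function over X (the nominal problem).

```python
import math

def robust_min(c, d, Omega, feasible):
    """Sketch of Theorem 2 for f(w) = Omega*sqrt(w): G* = min_{w in W} Z(w)."""
    dot = lambda a, x: sum(ai * xi for ai, xi in zip(a, x))
    f = lambda w: Omega * math.sqrt(w)
    d_min = min(dj for dj in d if dj > 0)
    W = sorted({dot(d, x) for x in feasible})
    best = None
    for w in W:
        # subgradient alpha(w): f'(w) for w > 0, (f(d_min) - f(0))/d_min at w = 0
        alpha = f(d_min) / d_min if w == 0 else Omega / (2 * math.sqrt(w))
        lin = [cj + alpha * dj for cj, dj in zip(c, d)]
        # nominal problem (6): Z(w) = min_x (c + alpha*d)'x + f(w) - w*alpha
        y = min(feasible, key=lambda x: dot(lin, x))
        Zw = dot(lin, y) + f(w) - w * alpha
        if best is None or Zw < best[0]:
            best = (Zw, y)
    return best  # (G*, an optimal x)
```

On a toy instance this matches brute-force minimization of c'x + Ω√(d'x); by Theorem 2, the x attaining Z(w*) is itself optimal for Problem (5).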

This establishes a connection of Problem (5) with parametric discrete optimization (see Gusfield [11], Hassin and Tamir [13]). It turns out that if X is a matroid, the minimal set of optimal solutions to Problem (9) as θ varies is polynomial in size; see Eppstein [9] and Fernández-Baca et al. [10]. For optimization over a matroid, the optimal solution depends on the ordering of the cost components. Since, as θ varies, it is easy to see that there are at most C(n,2) + 1 different orderings, the corresponding robust problem is also polynomially solvable. For the case of shortest paths, Karp and Orlin [14] provide a polynomial time algorithm using the parametric approach when all d_j's are equal. In contrast, the polynomial reduction in Proposition 1 applies to all discrete optimization problems.

More generally, |W| ≤ n·d_max + 1, with d_max = max_j d_j. If d_max ≤ n^k for some fixed k, then Problem (5) reduces to solving at most n^k(n + 1) nominal problems (6). However, when d_max is exponential in n, an approach that enumerates all elements of W does not preserve polynomiality. For this reason, as well as to derive more practical algorithms even in the case that |W| is polynomial in n, we develop in the next section new algorithms.

4 Approximation via Piecewise Linear Functions

In this section, we develop a method for solving Problem (5) that is based on approximating the function f(·) with a piecewise linear concave function. We first show that if f(·) is a piecewise linear concave function with a polynomial number of segments, we can also reduce Problem (5) to solving a polynomial number of subproblems.

Proposition 2 If f(w), w ∈ [0, e'd], is a continuous piecewise linear concave function of k segments, then

    min_{x∈X} c'x + f(d'x) = min_{j=1,…,k} { min_{x∈X} (c + θ_j d)'x + b_j },   (10)

where θ_j is the gradient and b_j the intercept of the jth linear piece of the function f(·).

Proof: The proof follows directly from Theorem 2 and the observation that if f(w), w ∈ [0, e'd], is a continuous piecewise linear concave function of k linear pieces, the set of gradients of the linear pieces constitutes the minimal set of subgradients of the function f.

We next show that approximating the function f(·) with a piecewise linear concave function leads to an approximate solution to Problem (5).

Theorem 3 For W = [w, w̄] such that d'x ∈ W for all x ∈ X, let g(w), w ∈ W, be a piecewise linear concave function approximating the function f(w) such that

    −ε₁ ≤ f(w) − g(w) ≤ ε₂,   with ε₁, ε₂ ≥ 0.

Let x_H be an optimal solution of the problem:

    minimize c'x + g(d'x)
    subject to x ∈ X,                                  (11)

and let G_H = c'x_H + f(d'x_H). Then,

    G* ≤ G_H ≤ G* + ε₁ + ε₂.

Proof: We have that G* = min_{x∈X} {c'x + f(d'x)} ≤ G_H, and

    G_H = c'x_H + f(d'x_H)
        ≤ c'x_H + g(d'x_H) + ε₂                          (12)
        = min_{x∈X} {c'x + g(d'x)} + ε₂
        ≤ min_{x∈X} {c'x + f(d'x) + ε₁} + ε₂             (13)
        = G* + ε₁ + ε₂,

where inequalities (12) and (13) follow from −ε₁ ≤ f(w) − g(w) ≤ ε₂.

We next apply the approximation idea to the case of ellipsoidal uncertainty sets. Specifically, we approximate the function f(w) = Ω√w in the domain [w, w̄] with a piecewise linear concave function g(w) satisfying 0 ≤ f(w) − g(w) ≤ ε using a small number of linear pieces.

Proposition 3 For ε > 0 and w_0 given, let δ = 2ε/Ω and for i = 1,…,k let

    w_i = (√w_0 + i²δ)².                                (14)

Let g(w) be the piecewise linear concave function on the domain w ∈ [w_0, w_k] with breakpoints (w, g(w)) ∈ {(w_0, Ω√w_0),…,(w_k, Ω√w_k)}. Then, for all w ∈ [w_0, w_k],

    0 ≤ Ω√w − g(w) ≤ ε.
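The grid of Proposition 3 can be checked numerically. A small sketch (function names are ours, and the breakpoint formula is our reading of Eq. (14): w_i = (√w_0 + i²δ)² with δ = 2ε/Ω): it builds the grid and evaluates the exact per-piece maximum of Ω√w − g(w), which for a chord of Ω√w over [a, b] is Ω(√b − √a)²/(4(√a + √b)).

```python
import math

def breakpoints(w0, w_bar, eps, Omega):
    """Grid of Proposition 3: delta = 2*eps/Omega, w_i = (sqrt(w0) + i^2*delta)^2,
    with enough pieces k to cover [w0, w_bar]."""
    delta = 2.0 * eps / Omega
    k = math.ceil(math.sqrt((math.sqrt(w_bar) - math.sqrt(w0)) / delta))
    return [(math.sqrt(w0) + i * i * delta) ** 2 for i in range(k + 1)]

def max_gap(ws, Omega):
    """Exact maximum of Omega*sqrt(w) - g(w) over each piece; the maximum on
    [w_{i-1}, w_i] is attained at sqrt(w) = (sqrt(w_{i-1}) + sqrt(w_i))/2."""
    return max(Omega * (math.sqrt(b) - math.sqrt(a)) ** 2
               / (4 * (math.sqrt(a) + math.sqrt(b)))
               for a, b in zip(ws, ws[1:]))
```

For instance, covering [0, 10000] with ε = 0.5 and Ω = 3 takes only 18 pieces, and the realized gap stays below ε on every piece.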

Proof: Since at the breakpoints w_i we have g(w_i) = Ω√w_i, g(w) is a concave function with g(w) ≤ Ω√w for all w ∈ [w_0, w_k]. For w ∈ [w_{i−1}, w_i], we have

    Ω√w − g(w) = Ω√w − Ω√w_{i−1} − Ω((√w_i − √w_{i−1})/(w_i − w_{i−1}))(w − w_{i−1})
               = Ω(√w − √w_{i−1}) − Ω(w − w_{i−1})/(√w_i + √w_{i−1}).

The maximum value of Ω√w − g(w) is attained at √w = (√w_i + √w_{i−1})/2. Therefore,

    max_{w∈[w_{i−1},w_i]} (Ω√w − g(w)) = Ω(√w_i − √w_{i−1})²/(4(√w_i + √w_{i−1}))
                                       ≤ Ωδ/2 = ε,                (15)

where the inequality follows by substituting Eq. (14): √w_i − √w_{i−1} = (2i − 1)δ, √w_i + √w_{i−1} ≥ (2i² − 2i + 1)δ, and (2i − 1)² ≤ 2(2i² − 2i + 1). The proposition follows.

Propositions 2, 3 and Theorem 3 lead to Algorithm 1.

Algorithm 1 Approximation by piecewise linear concave functions.

Input: c, d, w, w̄, ε, f(w) = Ω√w, and a routine that optimizes a linear function over the set X ⊆ {0,1}^n.
Output: A solution x_H ∈ X for which G* ≤ c'x_H + f(d'x_H) ≤ G* + ε, where G* = min_{x∈X} c'x + f(d'x).
Algorithm:
1. (Initialization) Let δ = 2ε/Ω, w_0 = w, and

    k = ⌈√((√w̄ − √w)/δ)⌉ = O((Ω²·n·d_max)^(1/4)·(1/ε)^(1/2)),

   where d_max = max_j d_j, and for i = 1,…,k let w_i = (√w + i²δ)².
2. For i = 1,…,k solve the problem

    Z_i = min_{x∈X} (c + (Ω/(√w_i + √w_{i−1}))d)'x + Ω√(w_i·w_{i−1})/(√w_i + √w_{i−1}).   (16)

   Let x_i be an optimal solution to Problem (16).
3. Output G_H = Z_{i*} = min_{i=1,…,k} Z_i and x_H = x_{i*}.

Theorem 4 Algorithm 1 finds a solution x_H ∈ X for which

    G* ≤ c'x_H + f(d'x_H) ≤ G* + ε.

Proof: Using Proposition 3, we find a piecewise linear concave function g(w) that approximates the function Ω√w within a given tolerance ε > 0. From Proposition 2, and since the gradient of the ith segment of the function g(w) for w ∈ [w_{i−1}, w_i] is

    θ_i = Ω(√w_i − √w_{i−1})/(w_i − w_{i−1}) = Ω/(√w_i + √w_{i−1}),

we solve the Problems (16) for i = 1,…,k. Taking G_H = min_i Z_i and using Theorem 3, it follows that Algorithm 1 produces a solution within ε.

Although the number of subproblems solved in Algorithm 1 is not polynomial with respect to the bit size of the input data, the computation involved is reasonable from a practical point of view. For example, in Table 1 we report the number of subproblems we need to solve for Ω = 4, as a function of ε and d'e = Σ_{j=1}^n d_j.

5 A Frank-Wolfe Type Algorithm

A natural method to solve Problem (5) is to apply a Frank-Wolfe type algorithm, that is, to successively linearize the function f(·).

Algorithm 2 The Frank-Wolfe type algorithm.

Input: c, d, θ ∈ [α(d'e), α(0)], f(w), α(w), and a routine that optimizes a linear function over the set X ⊆ {0,1}^n.
Output: A locally optimal solution to Problem (5).
Algorithm:

Table 1: Number of subproblems k as a function of the desired precision ε, the size of the problem d'e, and Ω = 4.

1. (Initialization) k = 0; x_0 := argmin_{y∈X} (c + θd)'y.
2. Until d'x_{k+1} = d'x_k, x_{k+1} := argmin_{y∈X} (c + α(d'x_k)d)'y.
3. Output x_{k+1}.

We next show that Algorithm 2 converges to a locally optimal solution.

Theorem 5 Let x, y and z be optimal solutions to the following problems:

    x = argmin_{u∈X} (c + θd)'u,                        (17)
    y = argmin_{u∈X} (c + α(d'x)d)'u,                   (18)
    z = argmin_{u∈X} (c + φd)'u,                        (19)

for some φ strictly between θ and α(d'x). Then:

(a) (Improvement) c'y + f(d'y) ≤ c'x + f(d'x).

(b) (Monotonicity) If θ > α(d'x), then α(d'x) ≥ α(d'y). Likewise, if θ < α(d'x), then α(d'x) ≤ α(d'y). Hence, the sequence θ_k = α(d'x_k), in which x_k = argmin_{x∈X} (c + α(d'x_{k−1})d)'x, is either non-decreasing or non-increasing.

(c) (Local optimality) c'y + f(d'y) ≤ c'z + f(d'z)

for all φ strictly between θ and α(d'x). Moreover, if d'y = d'x, then the solution y is locally optimal, that is,

    y = argmin_{u∈X} (c + α(d'y)d)'u

and c'y + f(d'y) ≤ c'z + f(d'z) for all φ between θ and α(d'y).

Proof: (a) We have

    c'x + f(d'x) = (c + α(d'x)d)'x − α(d'x)d'x + f(d'x)
                 ≥ (c + α(d'x)d)'y − α(d'x)d'x + f(d'x)
                 = c'y + f(d'y) + α(d'x)(d'y − d'x) − (f(d'y) − f(d'x))
                 ≥ c'y + f(d'y),

since α(·) is a subgradient.

(b) From the optimality of x and y, we have

    c'y + α(d'x)d'y ≤ c'x + α(d'x)d'x,
    −(c'y + θd'y) ≤ −(c'x + θd'x).

Adding the two inequalities, we obtain

    (d'x − d'y)(α(d'x) − θ) ≥ 0.

Therefore, if α(d'x) > θ then d'y ≤ d'x and, since f(w) is concave, i.e., α(w) is non-increasing, α(d'y) ≥ α(d'x). Likewise, if α(d'x) < θ then α(d'y) ≤ α(d'x). Hence, the sequence θ_k = α(d'x_k) is monotone.

(c) We first show that d'z is in the convex hull of d'x and d'y. From the optimality of x, y and z, we obtain

    c'x + θd'x ≤ c'z + θd'z,
    c'z + φd'z ≤ c'x + φd'x,
    c'y + α(d'x)d'y ≤ c'z + α(d'x)d'z,
    c'z + φd'z ≤ c'y + φd'y.

From the first two inequalities we obtain

    (d'z − d'x)(φ − θ) ≤ 0,

and from the last two we have

    (d'z − d'y)(α(d'x) − φ) ≥ 0.

As φ is between θ and α(d'x), if θ < φ < α(d'x), we conclude that d'y ≤ d'z ≤ d'x. Likewise, if α(d'x) < φ < θ, we have d'x ≤ d'z ≤ d'y; i.e., d'z is in the convex hull of d'x and d'y. Next, we have

    c'y + f(d'y) = (c + α(d'x)d)'y − α(d'x)d'y + f(d'y)
                 ≤ (c + α(d'x)d)'z − α(d'x)d'y + f(d'y)
                 = c'z + f(d'z) + {f(d'y) − f(d'z) − α(d'x)(d'y − d'z)}
                 = c'z + f(d'z) + h(d'z)
                 ≤ c'z + f(d'z),                          (20)

where inequality (20) follows from observing that the function h(ω) = f(d'y) − f(ω) − α(d'x)(d'y − ω) is convex with h(d'y) = 0 and h(d'x) ≤ 0. Since d'z is in the convex hull of d'x and d'y, by convexity, h(d'z) ≤ λh(d'y) + (1 − λ)h(d'x) ≤ 0 for some λ ∈ [0, 1].

Given a feasible solution x, Theorem 5(a) implies that we may improve the objective by solving a sequence of problems using Algorithm 2. Note that at each iteration, we are optimizing a linear function over X. Theorem 5(b) implies that the sequence θ_k = α(d'x_k) is monotone, and since it is bounded, it converges. Since X is finite, the algorithm converges in a finite number of steps. Theorem 5(c) implies that at termination (recall that the termination condition is d'y = d'x) Algorithm 2 finds a locally optimal solution.

Suppose θ = α(e'd) and let {x_1,…,x_k} be the sequence of solutions of Algorithm 2. From Theorem 5(b), we have

    θ = α(e'd) ≤ θ_1 = α(d'x_1) ≤ … ≤ θ_k = α(d'x_k).

When Algorithm 2 terminates at the solution x_k, then from Theorem 5(c),

    c'x_k + f(d'x_k) ≤ c'z_φ + f(d'z_φ),                 (21)

where z_φ is defined in Eq. (19), for all φ ∈ [α(e'd), α(d'x_k)]. Likewise, if we apply Algorithm 2 starting at θ̄ = α(0), and let {y_1,…,y_l} be the sequence of solutions of Algorithm 2, then we have

    θ̄ = α(0) ≥ θ̄_1 = α(d'y_1) ≥ … ≥ θ̄_l = α(d'y_l),

and

    c'y_l + f(d'y_l) ≤ c'z_φ + f(d'z_φ)                  (22)

for all φ ∈ [α(d'y_l), α(0)]. If α(d'x_k) ≥ α(d'y_l), we have α(d'x_k) ∈ [α(d'y_l), α(0)] and α(d'y_l) ∈ [α(e'd), α(d'x_k)]. Hence, following from the inequalities (21) and (22), we conclude that

    c'y_l + f(d'y_l) = c'x_k + f(d'x_k) ≤ c'z_φ + f(d'z_φ)

for all φ ∈ [α(e'd), α(d'x_k)] ∪ [α(d'y_l), α(0)] = [α(e'd), α(0)]. Therefore, both y_l and x_k are globally optimal solutions. However, if α(d'y_l) > α(d'x_k), we are assured only that the global optimal solution is x_k, y_l, or in {x : x = argmin_{u∈X} (c + φd)'u, φ ∈ (α(d'x_k), α(d'y_l))}. We next determine an error bound between the optimal objective and the objective of the best local solution, which is either x_k or y_l.

Theorem 6 (a) Let W = [w, w̄], α(w) > α(w̄), X' = X ∩ {x : d'x ∈ W}, and

    x_1 = argmin_{y∈X'} (c + α(w)d)'y,                   (23)
    x_2 = argmin_{y∈X'} (c + α(w̄)d)'y.                  (24)

Then

    G' ≤ min{c'x_1 + f(d'x_1), c'x_2 + f(d'x_2)} ≤ G' + ε̄,   (25)

where

    G' = min_{y∈X'} c'y + f(d'y),
    ε̄ = α(w̄)(w* − w̄) + f(w̄) − f(w*),

and

    w* = (f(w̄) − f(w) + α(w)w − α(w̄)w̄)/(α(w) − α(w̄)).

(b) Suppose the feasible solutions x_1 and x_2 satisfy

    x_1 = argmin_{y∈X} (c + α(d'x_1)d)'y,                (26)
    x_2 = argmin_{y∈X} (c + α(d'x_2)d)'y,                (27)

such that α(w) > α(w̄), with w = d'x_1, w̄ = d'x_2, and there exists an optimal solution x* = argmin_{y∈X} (c + φd)'y for some φ ∈ (α(w̄), α(w)). Then

    G* ≤ min{c'x_1 + f(d'x_1), c'x_2 + f(d'x_2)} ≤ G* + ε̄,   (28)

where G* = c'x* + f(d'x*).
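For f(w) = Ω√w the quantities in Theorem 6(a) simplify: working through the general formulas, the two segments meet at w* = √(w·w̄), which gives ε̄ = (Ω/2)(w̄^(1/4) − w^(1/4))². A sketch (the function name is ours) computing them directly from the statement of the theorem:

```python
import math

def error_bound(w_lo, w_hi, Omega):
    """w* and eps_bar of Theorem 6(a) for f(w) = Omega*sqrt(w), 0 < w_lo < w_hi."""
    f = lambda w: Omega * math.sqrt(w)
    alpha = lambda w: Omega / (2.0 * math.sqrt(w))   # alpha(w) = f'(w), w > 0
    # intersection of the segment through (w_lo, f(w_lo)) with slope alpha(w_lo)
    # and the segment through (w_hi, f(w_hi)) with slope alpha(w_hi)
    w_star = (f(w_hi) - f(w_lo) + alpha(w_lo) * w_lo - alpha(w_hi) * w_hi) \
             / (alpha(w_lo) - alpha(w_hi))
    eps_bar = alpha(w_hi) * (w_star - w_hi) + f(w_hi) - f(w_star)
    return w_star, eps_bar
```

For example, error_bound(1.0, 16.0, 2.0) returns w* = 4 (the geometric mean of 1 and 16) and ε̄ = 1, agreeing with the closed forms above.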

Figure 1: Illustration of the maximum gap between the functions f(w) and g(w), attained at the intersection point (w*, g(w*)) of the two line segments, directly above (w*, f(w*)).

Proof: (a) Let g(w), w ∈ W, be a piecewise linear concave function comprising two line segments, through (w, f(w)) and (w̄, f(w̄)) with respective gradients α(w) and α(w̄). Clearly f(w) ≤ g(w) for w ∈ W, and hence we have −ε̄ ≤ f(w) − g(w) ≤ 0, where

    ε̄ = max_{w∈W} (g(w) − f(w)) = g(w*) − f(w*),

noting that the maximum difference occurs at the intersection of the line segments (see Figure 1). Therefore,

    g(w*) = α(w)(w* − w) + f(w) = α(w̄)(w* − w̄) + f(w̄).

Solving for w*, we have

    w* = (f(w̄) − f(w) + α(w)w − α(w̄)w̄)/(α(w) − α(w̄)).

Applying Proposition 2 with X' instead of X and k = 2, we obtain

    min_{y∈X'} c'y + g(d'y) = min{c'x_1 + g(d'x_1), c'x_2 + g(d'x_2)}.

Finally, from Theorem 3, we have

    G' ≤ min{c'x_1 + f(d'x_1), c'x_2 + f(d'x_2)} ≤ G' + ε̄.

(b) Under the stated conditions, observe that the optimal solutions of the problems (26) and (27) are respectively the same as the optimal solutions of the problems (23) and (24). Let φ ∈ (α(d'x_2), α(d'x_1))

such that x* = argmin_{y∈X} (c + φd)'y. We establish that

    c'x* + φd'x* ≤ c'x_1 + φd'x_1,
    c'x_1 + α(d'x_1)d'x_1 ≤ c'x* + α(d'x_1)d'x*,
    c'x* + φd'x* ≤ c'x_2 + φd'x_2,
    c'x_2 + α(d'x_2)d'x_2 ≤ c'x* + α(d'x_2)d'x*.

Since α(d'x_2) < φ < α(d'x_1), it follows that d'x* ∈ [d'x_1, d'x_2], and hence G* = G' and the bounds of (25) follow from part (a).

Remark: If α(d'y_l) > α(d'x_k), Theorem 6(b) provides a guarantee on the quality of the best of the two locally optimal solutions x_k and y_l relative to the global optimum. Moreover, we can improve the error bound by partitioning the interval [α(w̄), α(w)], with w = d'y_l, w̄ = d'x_k, into the two subintervals [α(w̄), (α(w̄) + α(w))/2] and [(α(w̄) + α(w))/2, α(w)], and applying Algorithm 2 in each interval. Using Theorem 6(a), we can then obtain improved bounds. Continuing this way, we can find the globally optimal solution.

6 Generalized Bertsimas and Sim Robust Formulation

Bertsimas and Sim [6] propose the following model for robust discrete optimization:

    Z* = min_{x∈X} c'x + max_{ {S∪{t} : S⊆J, |S|=⌊Γ⌋, t∈J\S} } { Σ_{j∈S} d_j x_j + (Γ − ⌊Γ⌋)d_t x_t }
       = min_{x∈X} c'x + max_{ {z : e'z ≤ Γ, 0 ≤ z ≤ e} } Σ_{j∈J} d_j x_j z_j.        (29)

They show:

Theorem 7 (Bertsimas and Sim [6]) Let x* be an optimal solution of Problem (29). If each c̃_j is a random variable, independently and symmetrically distributed in [c_j − d_j, c_j + d_j], then

    Pr( Σ_j c̃_j x*_j > Z* ) ≤ B(r, Γ) = (1/2^r) { (1 − μ) Σ_{l=⌊ν⌋}^{r} C(r,l) + μ Σ_{l=⌊ν⌋+1}^{r} C(r,l) },   (30)

where ν = (Γ + r)/2 and μ = ν − ⌊ν⌋. Moreover, for Γ = θ√r,

    lim_{r→∞} B(r, Γ) = 1 − Φ(θ),                        (31)

where Φ(·) is the cumulative distribution function of a standard normal.
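The bound (30) is inexpensive to evaluate. A sketch (helper names are ours) that also exposes the normal limit (31):

```python
import math

def B(r, Gamma):
    """Probability bound of Theorem 7, Eq. (30)."""
    nu = (Gamma + r) / 2.0
    lo = math.floor(nu)
    mu = nu - lo
    s1 = sum(math.comb(r, l) for l in range(lo, r + 1))
    s2 = sum(math.comb(r, l) for l in range(lo + 1, r + 1))
    return ((1.0 - mu) * s1 + mu * s2) / 2.0 ** r

def normal_tail(theta):
    """1 - Phi(theta): the limit in Eq. (31) for Gamma = theta*sqrt(r)."""
    return 0.5 * (1.0 - math.erf(theta / math.sqrt(2.0)))
```

With Γ = r only the all-deviations outcome violates the bound, so B(r, r) = 2^(−r); and for Γ = θ√r with moderately large r, B(r, Γ) is already close to 1 − Φ(θ).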

Intuitively, if we select Γ = θ√r, then the probability that the robust solution exceeds Z* is approximately 1 − Φ(θ). Since in this case feasible solutions are restricted to binary values, we can achieve a less conservative solution by replacing r by Σ_{j∈J} x_j = e_J'x, i.e., by letting the parameter Γ in the robust problem (29) depend on e_J'x. We write Γ = f(e_J'x), where f(·) is a concave function. Thus, we propose to solve the following problem:

    Z* = min_{x∈X} c'x + max_{ {z : e'z ≤ f(e_J'x), 0 ≤ z ≤ e} } Σ_{j∈J} d_j x_j z_j.   (32)

Without loss of generality, we assume that d_1 ≥ d_2 ≥ … ≥ d_r > 0, where r = |J|. We define d_{r+1} = 0 and let S_l = {1,…,l}. For notational convenience, we also define d_0 = 0 and S_0 = J.

Theorem 8 Let α(w) be a subgradient of the concave function f(·) evaluated at w. Problem (32) satisfies Z* = min_{(l,k): l,k∈J∪{0}} Z_{lk}, where

    Z_{lk} = min_{x∈X} c'x + Σ_{j∈S_l} (d_j − d_l)x_j + α(k)d_l·e_J'x + d_l(f(k) − k·α(k)).   (33)

Proof: By strong duality of the inner maximization with respect to z, Problem (32) is equivalent to solving the following problem:

    minimize c'x + Σ_{j∈J} p_j + θf(e_J'x)
    subject to p_j ≥ d_j x_j − θ   ∀j ∈ J,
               p_j ≥ 0            ∀j ∈ J,
               x ∈ X, θ ≥ 0.                             (34)

We eliminate the variables p_j and express Problem (34) as follows:

    minimize c'x + Σ_{j∈J} max{d_j x_j − θ, 0} + θf(e_J'x)
    subject to x ∈ X, θ ≥ 0.                             (35)

Since x ∈ {0,1}^n, we observe that

    max{d_j x_j − θ, 0} = d_j − θ   if x_j = 1 and d_j ≥ θ,
                          0        if x_j = 0 or d_j < θ.          (36)

By restricting the interval over which θ can vary, we obtain that

    Z* = min_{θ≥0} min_{l=0,…,r} Z_l(θ),

where Z_l(θ), l = 1,…,r, defined for θ ∈ [d_{l+1}, d_l], is

    Z_l(θ) = min_{x∈X} c'x + Σ_{j∈S_l} (d_j − θ)x_j + θf(e_J'x),   (37)

and, for θ ∈ [d_1, ∞),

    Z_0(θ) = min_{x∈X} c'x + θf(e_J'x).                  (38)

Since each function Z_l(θ) is concave in θ and is optimized over the interval [d_{l+1}, d_l], the optimum is realized at either d_{l+1} or d_l. Hence, we can restrict θ to the set {d_1,…,d_r, 0} and establish that

    Z* = min_{l∈J∪{0}} min_{x∈X} c'x + Σ_{j∈S_l} (d_j − d_l)x_j + f(e_J'x)d_l.   (39)

Since e_J'x ∈ {0,1,…,r}, we apply Theorem 2 to obtain the subproblem decomposition (33).

Theorem 8 suggests that the robust problem remains polynomially solvable if the nominal problem is polynomially solvable, but at the expense of higher computational complexity. We next explore faster algorithms that only guarantee local optimality. In this spirit, and analogously to Theorem 5, we provide a necessary condition for optimality, which can be exploited in a local search algorithm.

Theorem 9 An optimal solution x* to Problem (32) is also an optimal solution to the following problem:

    minimize c'y + Σ_{j∈S_{l*}} (d_j − d_{l*})y_j + α(e_J'x*)d_{l*}·e_J'y
    subject to y ∈ X,                                    (40)

where l* = argmin_{l∈J∪{0}} Σ_{j∈S_l} (d_j − d_l)x*_j + f(e_J'x*)d_l.

Proof: Suppose x* is an optimal solution to Problem (32) but not to Problem (40), and let y be an optimal solution to Problem (40). Therefore,

    c'x* + max_{ {z : e'z ≤ f(e_J'x*), 0 ≤ z ≤ e} } Σ_{j∈J} d_j x*_j z_j
      = min_{l∈J∪{0}} c'x* + Σ_{j∈S_l} (d_j − d_l)x*_j + f(e_J'x*)d_l              (41)
      = c'x* + Σ_{j∈S_{l*}} (d_j − d_{l*})x*_j + f(e_J'x*)d_{l*}
      = c'x* + Σ_{j∈S_{l*}} (d_j − d_{l*})x*_j + α(e_J'x*)d_{l*}·e_J'x* − α(e_J'x*)d_{l*}·e_J'x* + f(e_J'x*)d_{l*}

      > c'y + Σ_{j∈S_{l*}} (d_j − d_{l*})y_j + α(e_J'x*)d_{l*}·e_J'y − α(e_J'x*)d_{l*}·e_J'x* + f(e_J'x*)d_{l*}
      = c'y + Σ_{j∈S_{l*}} (d_j − d_{l*})y_j + f(e_J'y)d_{l*} + ( α(e_J'x*)(e_J'y − e_J'x*) − (f(e_J'y) − f(e_J'x*)) )·d_{l*}
      ≥ c'y + Σ_{j∈S_{l*}} (d_j − d_{l*})y_j + f(e_J'y)d_{l*}
      ≥ min_{l∈J∪{0}} c'y + Σ_{j∈S_l} (d_j − d_l)y_j + f(e_J'y)d_l
      = c'y + max_{ {z : e'z ≤ f(e_J'y), 0 ≤ z ≤ e} } Σ_{j∈J} d_j y_j z_j,          (42)

where Eqs. (41) and (42) follow from Eq. (39). This contradicts the optimality of x*.

7 Experimental Results

In this section, we provide experimental evidence on the effectiveness of Algorithm 2. We apply Algorithm 2 as follows. We start with two initial solutions x_1 and x_2. Starting with x_1 (respectively x_2), Algorithm 2 finds a locally optimal solution y_1 (respectively y_2). If y_1 = y_2, by Theorem 5 the optimal solution is found. Otherwise, we report the "optimality gap" ε̄ derived from Theorem 6. If we want to find the optimal solution, we partition into smaller search regions (see the Remark after Theorem 6) and repeatedly apply Algorithm 2 until all regions are covered. We apply the proposed approach to the binary knapsack and the uniform matroid problems.

7.1 The Robust Knapsack Problem

The binary knapsack problem is:

    maximize Σ_{i∈N} c̃_i x_i
    subject to Σ_{i∈N} w_i x_i ≤ b,
               x ∈ {0,1}^n.

We assume that the costs c̃_i are random variables that are independently distributed with mean c_i and variance d_i = σ_i². Under the ellipsoidal uncertainty set, the robust model is:

    maximize Σ_{i∈N} c_i x_i − Ω√(d'x)
    subject to Σ_{i∈N} w_i x_i ≤ b,
               x ∈ {0,1}^n.
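Algorithm 2 adapts to this maximization form by successively linearizing −Ω√(d'x) at the current point. The following is a sketch under our own assumptions: the helper names, the small dynamic-programming routine used as the nominal knapsack solver, and the `max_iter` safety guard are ours, not the paper's.

```python
import math

def knapsack_max(vals, wts, b):
    """Nominal 0/1 knapsack by dynamic programming over integer capacities."""
    n = len(vals)
    dp = [[0.0] * (b + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        v, w = vals[i - 1], wts[i - 1]
        for cap in range(b + 1):
            dp[i][cap] = dp[i - 1][cap]
            if v > 0 and w <= cap and dp[i - 1][cap - w] + v > dp[i][cap]:
                dp[i][cap] = dp[i - 1][cap - w] + v
    x, cap = [0] * n, b
    for i in range(n, 0, -1):          # recover one optimal subset
        if dp[i][cap] != dp[i - 1][cap]:
            x[i - 1] = 1
            cap -= wts[i - 1]
    return x

def robust_knapsack_fw(c, d, wts, b, Omega, max_iter=100):
    """Frank-Wolfe-type iteration for: max c'x - Omega*sqrt(d'x), w'x <= b."""
    x = knapsack_max(c, wts, b)                      # start from the nominal optimum
    for _ in range(max_iter):
        wk = sum(di * xi for di, xi in zip(d, x))
        d_min = min(di for di in d if di > 0)
        # slope of the linearization of Omega*sqrt(w) at wk (finite choice at 0)
        alpha = Omega / (2 * math.sqrt(wk)) if wk > 0 else Omega / math.sqrt(d_min)
        y = knapsack_max([ci - alpha * di for ci, di in zip(c, d)], wts, b)
        if sum(di * yi for di, yi in zip(d, y)) == wk:
            break                                    # d'x stabilized: local optimum
        x = y
    return x
```

On a tiny instance the iteration stabilizes immediately at the globally optimal robust subset.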

The instance of the robust knapsack problem is generated randomly with |N| = 200 and a fixed capacity limit b. The nominal weight w_i is randomly chosen from the set {100,…,1500}, the cost c_i is randomly chosen from the set {…}, and the standard deviation σ_j depends on c_j in that σ_j = γ_j c_j, where γ_j is uniformly distributed in [0, 1]. We vary the parameter Ω from 1 to 5 and report in Table 2 the best attainable objective Z_H, the number of instances of the nominal problem solved, as well as the optimality gap ε̄.

Table 2: Performance of Algorithm 2 on the robust knapsack problem.

It is surprising that in all of the instances, we can obtain the optimal solution of the robust problem using a small number of iterations. Even for the cases Ω = 3, 4, where Algorithm 2 terminates with more than one local minimum solution, the resulting optimality gap is very small, which is usually acceptable in practical settings.

7.2 The Robust Minimum Cost over a Uniform Matroid

We consider the problem of minimizing the total cost of selecting k items out of a set of n items, which can be expressed as the following integer programming problem:

    minimize Σ_{i∈N} c̃_i x_i
    subject to Σ_{i∈N} x_i = k,                          (43)
               x ∈ {0,1}^n.
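For this feasible set the nominal routine is just a sort, so Algorithm 2 takes a particularly simple shape when the objective is made robust with the ellipsoidal term Ω√(d'x). The sketch below is ours (single starting slope θ = α(e'd); the experiments use two starts):

```python
import math

def k_cheapest(costs, k):
    """Nominal oracle over the uniform matroid: choose the k smallest costs."""
    chosen = sorted(range(len(costs)), key=lambda i: costs[i])[:k]
    x = [0] * len(costs)
    for i in chosen:
        x[i] = 1
    return x

def fw_uniform_matroid(c, d, k, Omega):
    """Algorithm 2 for: min c'x + Omega*sqrt(d'x) s.t. e'x = k (d > 0, k >= 1)."""
    theta = Omega / (2 * math.sqrt(sum(d)))          # starting slope alpha(e'd)
    x = k_cheapest([ci + theta * di for ci, di in zip(c, d)], k)
    while True:
        w = sum(di * xi for di, xi in zip(d, x))
        alpha = Omega / (2 * math.sqrt(w))           # w > 0 since k >= 1 and d > 0
        y = k_cheapest([ci + alpha * di for ci, di in zip(c, d)], k)
        if sum(di * yi for di, yi in zip(d, y)) == w:
            return y                                 # d'x stabilized: local optimum
        x = y
```

Each iteration costs one sort, which matches the observation below that only a handful of nominal solves are typically needed.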

In this problem, the cost components are subject to uncertainty. If the model is deterministic, we can easily solve the problem in O(n log n) time by sorting the costs in ascending order and choosing the first k items. In the robust framework under the ellipsoidal uncertainty set, we solve the following problem:

    minimize c'x + Ω√(d'x)
    subject to Σ_{i∈N} x_i = k,                          (44)
               x ∈ {0,1}^n.

Since the underlying set is a matroid, it is well known that Problem (44) can be solved in strongly polynomial time using parametric programming. Instead, we apply Algorithm 2 and observe the number of iterations needed before converging to a local minimum solution. Setting k = |N|/2, with c_j and σ_j = √d_j uniformly distributed in fixed intervals, we study the convergence properties as we vary |N| from 200 to 20,000 and Ω from 1 to 3. For a given |N| and Ω, we generate c and d randomly and solve 100 instances of the problem. Aggregating the results from solving the 100 instances, we report in Table 3 the average number of iterations before finding a local solution, the maximum relative optimality gap ε̄/Z_H, and the percentage of the local minimum solutions that are global, i.e., have ε̄ = 0.

The overall performance of Algorithm 2 is surprisingly good. It also suggests scalability, as the number of iterations is only marginally affected by an increase in |N|. In fact, in most of the problems tested, we obtain the optimal solution by solving fewer than 10 iterations of the nominal problem. Even in cases when local solutions are found, the corresponding optimality gap is negligible. In summary, Algorithm 2 seems practically promising.

8 Conclusions

A message of the present paper is that the complexity of robust discrete optimization is affected by the choice of the uncertainty set.
For ellipsoidal uncertainty sets, we have shown an increase in complexity for the robust counterpart of a discrete optimization problem for general covariance matrices Σ (correlated data) and a preservation of complexity when Σ = I (uncorrelated and identically distributed data), while we have left open the complexity when the matrix Σ is diagonal (uncorrelated data). In the latter case, we proposed two algorithms that in computational experiments have excellent empirical performance.

Table 3: Performance of the algorithm on the robust minimum cost problem over a uniform matroid. (Columns: |N|, Ω, average iterations, max(ε/Z_H), and the percentage of local minima that are globally optimal.)

References

[1] Ben-Tal, A., A. Nemirovski (1998): Robust convex optimization, Math. Oper. Res. 23, 769-805.

[2] Ben-Tal, A., A. Nemirovski (1999): Robust solutions of uncertain linear programs, Oper. Res. Lett. 25, 1-13.

[3] Ben-Tal, A., A. Nemirovski (2000): Robust solutions of linear programming problems contaminated with uncertain data, Math. Prog. 88, 411-424.

[4] Ben-Tal, A., L. El-Ghaoui, A. Nemirovski (2000): Robust semidefinite programming, in Saigal, R., Vandenberghe, L., Wolkowicz, H., eds., Semidefinite programming and applications, Kluwer Academic Publishers, Norwell, MA.

[5] Bertsimas, D., M. Sim (2003): Robust discrete optimization and network flows, Math. Prog. 98, 49-71.

[6] Bertsimas, D., M. Sim (2004): The price of robustness, Oper. Res. 52(1), 35-53.

[7] El-Ghaoui, L., H. Lebret (1997): Robust solutions to least-squares problems with uncertain data matrices, SIAM J. Matrix Anal. Appl. 18, 1035-1064.

[8] El-Ghaoui, L., Oustry, F., H. Lebret (1998): Robust solutions to uncertain semidefinite programs, SIAM J. Optim. 9, 33-52.

[9] Eppstein, D. (1995): Geometric lower bounds for parametric matroid optimization, Proc. 27th ACM Symposium on the Theory of Computing.

[10] Fernández-Baca, D., G. Slutzki, D. Eppstein (1996): Using sparsification for parametric minimum spanning tree problems, Proc. 5th Scand. Workshop in Algorithm Theory, Springer-Verlag, Lecture Notes in Computer Science.

[11] Gusfield, D. (1980): Sensitivity analysis for combinatorial optimization, Technical Report UCB/ERL M80/, University of California, Berkeley.

[12] Ishii, H., S. Shiode, T. Nishida, Y. Namasuya (1981): Stochastic spanning tree problem, Discrete Appl. Math. 3.

[13] Hassin, R., A. Tamir (1989): Maximizing classes of two-parameter objectives over matroids, Math. Oper. Res. 14.

[14] Karp, R. M., J. B. Orlin (1981): Parametric shortest path algorithms with an application to cyclic staffing, Discrete Appl. Math. 3.

[15] Kouvelis, P., G. Yu (1997): Robust discrete optimization and its applications, Kluwer Academic Publishers, Norwell, MA.


More information

CSC165H, Mathematical expression and reasoning for computer science week 12

CSC165H, Mathematical expression and reasoning for computer science week 12 CSC165H, Mathematical exression and reasoning for comuter science week 1 nd December 005 Gary Baumgartner and Danny Hea hea@cs.toronto.edu SF4306A 416-978-5899 htt//www.cs.toronto.edu/~hea/165/s005/index.shtml

More information

Convexification of Generalized Network Flow Problem with Application to Power Systems

Convexification of Generalized Network Flow Problem with Application to Power Systems 1 Convexification of Generalized Network Flow Problem with Alication to Power Systems Somayeh Sojoudi and Javad Lavaei + Deartment of Comuting and Mathematical Sciences, California Institute of Technology

More information

Towards understanding the Lorenz curve using the Uniform distribution. Chris J. Stephens. Newcastle City Council, Newcastle upon Tyne, UK

Towards understanding the Lorenz curve using the Uniform distribution. Chris J. Stephens. Newcastle City Council, Newcastle upon Tyne, UK Towards understanding the Lorenz curve using the Uniform distribution Chris J. Stehens Newcastle City Council, Newcastle uon Tyne, UK (For the Gini-Lorenz Conference, University of Siena, Italy, May 2005)

More information

On Code Design for Simultaneous Energy and Information Transfer

On Code Design for Simultaneous Energy and Information Transfer On Code Design for Simultaneous Energy and Information Transfer Anshoo Tandon Electrical and Comuter Engineering National University of Singaore Email: anshoo@nus.edu.sg Mehul Motani Electrical and Comuter

More information

Bent Functions of maximal degree

Bent Functions of maximal degree IEEE TRANSACTIONS ON INFORMATION THEORY 1 Bent Functions of maximal degree Ayça Çeşmelioğlu and Wilfried Meidl Abstract In this article a technique for constructing -ary bent functions from lateaued functions

More information

Various Proofs for the Decrease Monotonicity of the Schatten s Power Norm, Various Families of R n Norms and Some Open Problems

Various Proofs for the Decrease Monotonicity of the Schatten s Power Norm, Various Families of R n Norms and Some Open Problems Int. J. Oen Problems Comt. Math., Vol. 3, No. 2, June 2010 ISSN 1998-6262; Coyright c ICSRS Publication, 2010 www.i-csrs.org Various Proofs for the Decrease Monotonicity of the Schatten s Power Norm, Various

More information

On a class of Rellich inequalities

On a class of Rellich inequalities On a class of Rellich inequalities G. Barbatis A. Tertikas Dedicated to Professor E.B. Davies on the occasion of his 60th birthday Abstract We rove Rellich and imroved Rellich inequalities that involve

More information

Information collection on a graph

Information collection on a graph Information collection on a grah Ilya O. Ryzhov Warren Powell October 25, 2009 Abstract We derive a knowledge gradient olicy for an otimal learning roblem on a grah, in which we use sequential measurements

More information

GENERALIZED NORMS INEQUALITIES FOR ABSOLUTE VALUE OPERATORS

GENERALIZED NORMS INEQUALITIES FOR ABSOLUTE VALUE OPERATORS International Journal of Analysis Alications ISSN 9-8639 Volume 5, Number (04), -9 htt://www.etamaths.com GENERALIZED NORMS INEQUALITIES FOR ABSOLUTE VALUE OPERATORS ILYAS ALI, HU YANG, ABDUL SHAKOOR Abstract.

More information

Approximation of the Euclidean Distance by Chamfer Distances

Approximation of the Euclidean Distance by Chamfer Distances Acta Cybernetica 0 (0 399 47. Aroximation of the Euclidean Distance by Chamfer Distances András Hajdu, Lajos Hajdu, and Robert Tijdeman Abstract Chamfer distances lay an imortant role in the theory of

More information

Topic: Lower Bounds on Randomized Algorithms Date: September 22, 2004 Scribe: Srinath Sridhar

Topic: Lower Bounds on Randomized Algorithms Date: September 22, 2004 Scribe: Srinath Sridhar 15-859(M): Randomized Algorithms Lecturer: Anuam Guta Toic: Lower Bounds on Randomized Algorithms Date: Setember 22, 2004 Scribe: Srinath Sridhar 4.1 Introduction In this lecture, we will first consider

More information

Research Article An iterative Algorithm for Hemicontractive Mappings in Banach Spaces

Research Article An iterative Algorithm for Hemicontractive Mappings in Banach Spaces Abstract and Alied Analysis Volume 2012, Article ID 264103, 11 ages doi:10.1155/2012/264103 Research Article An iterative Algorithm for Hemicontractive Maings in Banach Saces Youli Yu, 1 Zhitao Wu, 2 and

More information

HENSEL S LEMMA KEITH CONRAD

HENSEL S LEMMA KEITH CONRAD HENSEL S LEMMA KEITH CONRAD 1. Introduction In the -adic integers, congruences are aroximations: for a and b in Z, a b mod n is the same as a b 1/ n. Turning information modulo one ower of into similar

More information

LINEAR SYSTEMS WITH POLYNOMIAL UNCERTAINTY STRUCTURE: STABILITY MARGINS AND CONTROL

LINEAR SYSTEMS WITH POLYNOMIAL UNCERTAINTY STRUCTURE: STABILITY MARGINS AND CONTROL LINEAR SYSTEMS WITH POLYNOMIAL UNCERTAINTY STRUCTURE: STABILITY MARGINS AND CONTROL Mohammad Bozorg Deatment of Mechanical Engineering University of Yazd P. O. Box 89195-741 Yazd Iran Fax: +98-351-750110

More information

Yixi Shi. Jose Blanchet. IEOR Department Columbia University New York, NY 10027, USA. IEOR Department Columbia University New York, NY 10027, USA

Yixi Shi. Jose Blanchet. IEOR Department Columbia University New York, NY 10027, USA. IEOR Department Columbia University New York, NY 10027, USA Proceedings of the 2011 Winter Simulation Conference S. Jain, R. R. Creasey, J. Himmelsach, K. P. White, and M. Fu, eds. EFFICIENT RARE EVENT SIMULATION FOR HEAVY-TAILED SYSTEMS VIA CROSS ENTROPY Jose

More information

Damage Identification from Power Spectrum Density Transmissibility

Damage Identification from Power Spectrum Density Transmissibility 6th Euroean Worksho on Structural Health Monitoring - h.3.d.3 More info about this article: htt://www.ndt.net/?id=14083 Damage Identification from Power Sectrum Density ransmissibility Y. ZHOU, R. PERERA

More information