Stochastic Combinatorial Optimization with Risk
Evdokia Nikolova

Computer Science and Artificial Intelligence Laboratory
Technical Report MIT-CSAIL-TR
September 13, 2008

Stochastic Combinatorial Optimization with Risk
Evdokia Nikolova

Massachusetts Institute of Technology, Cambridge, MA, USA

Stochastic Combinatorial Optimization with Risk

Evdokia Nikolova
Massachusetts Institute of Technology, The Stata Center, Room 32-G596, 32 Vassar Street, Cambridge, MA 02139, USA

We consider general combinatorial optimization problems that can be formulated as minimizing the weight of a feasible solution $w^T x$ over an arbitrary feasible set. For these problems we describe a broad class of corresponding stochastic problems where the weight vector $W$ has independent random components, unknown at the time of solution. A natural and important objective which incorporates risk in this stochastic setting is to look for a feasible solution whose stochastic weight has a small tail or a small linear combination of mean and standard deviation. Our models can be equivalently reformulated as deterministic nonconvex programs for which no efficient algorithms are known. In this paper, we make progress on these hard problems.

Our results are several efficient general-purpose approximation schemes. They use as a black box (exact or approximate) the solution to the underlying deterministic combinatorial problem and thus immediately apply to arbitrary combinatorial problems. For example, from an available $\delta$-approximation algorithm for the deterministic problem, we construct a $\delta(1+\epsilon)$-approximation algorithm that invokes the deterministic algorithm only a logarithmic number of times in the input and polynomial in $\frac{1}{\epsilon}$, for any desired accuracy level $\epsilon > 0$. The algorithms are based on a geometric analysis of the curvature and approximability of the nonlinear level sets of the objective functions.

Key words: approximation algorithms, combinatorial optimization, stochastic optimization, risk, nonconvex optimization

1. Introduction

Imagine driving to the airport through uncertain traffic. While we may not know specific travel times along different roads, we may have information on their distributions (for example, their means and variances). We want to find a route that gets us to the airport on time. The route minimizing expected travel time may well cause us to be late. In contrast, arriving on time requires accounting for traffic variability and risk.

In this paper we consider general combinatorial optimization problems that can be formulated as minimizing the weight $w^T x$ of a feasible solution over a fixed feasible set. For these problems we describe a broad class of corresponding stochastic problems where the weight vector $W$ has independent random components, unknown at the time of solution. A natural and important objective which incorporates risk in this stochastic setting is to look for a feasible solution whose stochastic weight has a small tail (as in the

example above, where we seek to minimize the probability that the random route length exceeds a given threshold) or a small linear combination of mean and standard deviation. Our models can be equivalently reformulated as deterministic nonconvex programs for which no efficient algorithms are known. In this paper, we make progress on these hard problems, and in particular our main contributions are as follows.

Our Results

1. Suppose we have an exact algorithm for the underlying deterministic combinatorial problem. Then, for all stochastic variants we consider, we obtain efficient $(1+\epsilon)$-approximation schemes, which make a logarithmic number of oracle calls to the deterministic algorithm (Theorem 4, Theorem 15).

2. Suppose we have a $\delta$-approximate algorithm for the deterministic problem. Then, for the stochastic problem of minimizing the tail of the solution weight's distribution, we provide a $\sqrt{1 - \frac{\delta - (1-\epsilon^2/4)}{(2+\epsilon)\epsilon/4}}$-approximation scheme, which as above makes a logarithmic number of oracle calls to the deterministic algorithm (Theorem 10). This result assumes normally distributed weights.

3. Suppose we have a $\delta$-approximate algorithm for the deterministic problem. Then, for the stochastic (nonconvex) problem of minimizing a linear combination of mean and standard deviation of the solution weight, we give a $\delta(1+\epsilon)$-approximation scheme which makes a logarithmic number of oracle calls to the deterministic algorithm (Theorem 21). This result holds for arbitrary weight distributions, and only assumes knowledge of the mean and variance of the distributions.

To the best of our knowledge, this is the first treatment of stochastic combinatorial optimization that incorporates risk, together with providing general-purpose approximation techniques applicable to arbitrary combinatorial problems. In fact, since our algorithms are independent of the structure of the feasible set, they immediately apply to arbitrary discrete problems, and not just $\{0,1\}$-problems. Similarly, they continue to work in a continuous setting where the feasible set is compact and convex.

Our approximation schemes are based on a series of geometric lemmas analyzing the form of the objective function level sets, and on a novel construction of an approximate nonlinear separation oracle from a linear oracle (the algorithm for the deterministic problem), in which the main technical lemma is that a logarithmic number of applications of the linear oracle suffice to get an arbitrarily good approximation. Given the general-purpose nature of our algorithms and their near-optimal running time, our results constitute significant progress in both stochastic and nonconvex optimization. In particular, we believe that our approach and techniques would extend to give approximation algorithms for a wider class of nonconvex (and related stochastic) optimization problems, for which no efficient solutions are currently available.

Perhaps more importantly from a practical standpoint, as a by-product of our stochastic models we can approximate the distribution of the weight of the optimal solution: Applying our solution to the stochastic

tail distribution objective for different threshold (tail) values will yield an approximate histogram of the optimal distribution. Consequently our models and algorithms provide a powerful tool for approximating arbitrary objectives and statistics of the distribution.

We defer to the next section a discussion of related work and a contrast of our results with other potential approaches. Our approximation algorithms are presented in the following four sections. Our algorithms for the tail (threshold) objective require some additional assumptions, and because of the form of that objective the analysis is more challenging and subtle. We present these in Sections 3 and 4 for the cases when we have an exact and an approximate oracle for solving the underlying deterministic problem respectively. Our algorithms for the mean-standard deviation objective are more general and somewhat easier to analyze; they are presented in Sections 5 and 6.

2. The Stochastic Framework

In this section, we formally define the classes of stochastic problems we consider. We then discuss related work and contrast our approach with other potential solution approaches.

Consider an arbitrary deterministic combinatorial problem which minimizes the weight of a feasible solution $w^T x$ over a fixed feasible set $F$:

minimize $w^T x$ subject to $x \in F$.   (1)

Notation. We adopt the common standard of bold font for vectors and regular font for scalars, and denote the transpose of a vector, say $x$, by $x^T$. Define polytope($F$) $\subseteq R^n$ to be the convex hull of the feasible set $F$. Let $W = (W_1, \ldots, W_n)$ be a vector of independent stochastic weights, and let $\mu = (\mu_1, \ldots, \mu_n)$ and $\tau = (\tau_1, \ldots, \tau_n)$ be the vectors of their means and variances respectively.

We consider the following broad classes of stochastic problems, summarized below together with their equivalent reformulations as deterministic nonconvex problems.

Threshold model:
  Stochastic problem: maximize $\Pr(W^T x \le t)$ subject to $x \in F$.
  Nonconvex problem: maximize $\frac{t - \mu^T x}{\sqrt{\tau^T x}}$ subject to $x \in$ polytope($F$).   (2)

Value-at-risk model:
  Stochastic problem: minimize $t$ subject to $\Pr(W^T x \le t) \ge p$, $x \in F$.
  Nonconvex problem: reduces to problem (3) below.

Risk model:
  Stochastic problem: minimize $\mu^T x + \sqrt{\tau^T x}$ subject to $x \in F$.
  Nonconvex problem: minimize $\mu^T x + \sqrt{\tau^T x}$ subject to $x \in$ polytope($F$).   (3)
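To make the reformulations above concrete, here is a minimal Python sketch (the helper names are illustrative assumptions, not code from the paper) of how the two projected nonconvex objectives in (2) and (3) are evaluated for a candidate solution given as a 0/1 incidence vector:

    import math

    def threshold_objective(mu, tau, x, t):
        """Projected objective of problem (2): (t - mu^T x) / sqrt(tau^T x)."""
        m = sum(mi * xi for mi, xi in zip(mu, x))   # mean of the solution, mu^T x
        s = sum(ti * xi for ti, xi in zip(tau, x))  # variance of the solution, tau^T x
        return (t - m) / math.sqrt(s)               # assumes s > 0 (zero-variance case handled separately)

    def risk_objective(mu, tau, x):
        """Projected objective of problem (3): mu^T x + sqrt(tau^T x)."""
        m = sum(mi * xi for mi, xi in zip(mu, x))
        s = sum(ti * xi for ti, xi in zip(tau, x))
        return m + math.sqrt(s)

Under the normality assumption discussed next, maximizing the first quantity is the same as maximizing the probability of staying under the threshold $t$.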

We will give approximation algorithms for the two nonconvex problems above, using as an oracle the available solutions to the underlying problem (1). A $\delta$-approximation algorithm for a minimization problem (with $\delta \ge 1$) is one which returns a solution with value at most $\delta$ times the optimum. We use the term oracle-logarithmic time approximation scheme (abbrev. oracle-LTAS) for a $(1+\epsilon)\delta$-approximation algorithm which makes a number of oracle queries that is logarithmic in the problem size and polynomial in $\frac{1}{\epsilon}$ (where $\delta$ is the multiplicative approximation factor of the oracle).

We now briefly explain why solving the nonconvex problems will solve the corresponding stochastic problems.

Stochastic threshold objective. When the weights come from independent normal distributions, a feasible solution $x$ has a normally distributed weight $W^T x \sim N(\mu^T x, \tau^T x)$. Therefore

$\Pr[W^T x \le t] = \Pr\left[\frac{W^T x - \mu^T x}{\sqrt{\tau^T x}} \le \frac{t - \mu^T x}{\sqrt{\tau^T x}}\right] = \Phi\left(\frac{t - \mu^T x}{\sqrt{\tau^T x}}\right)$,

where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal random variable $N(0,1)$. Since $\Phi(\cdot)$ is monotone increasing, maximizing the stochastic threshold objective above is equivalent to maximizing its argument, namely it is equivalent to the nonconvex threshold problem (2). Furthermore, it can be easily shown that an approximation for the nonconvex problem (2) yields the same approximation factor for the stochastic problem.¹

¹ We expect that under reasonable conditions, e.g., if a feasible solution $x$ has sufficiently many nonzero components, arbitrary weight distributions will lead to feasible solutions having approximately normal weight by the Central Limit Theorem. Thus our algorithms are likely to provide a good approximation in that general case as well.

Stochastic risk and value-at-risk objectives. When the weights come from arbitrary independent distributions, the mean and variance of a feasible solution $x$ are equal to the sum of means $\mu^T x$ and the sum of variances $\tau^T x$ of the components of $x$, hence the equivalent concave formulation (3). The value-at-risk objective also reduces to problem (3). For arbitrary distributions this follows from Chebyshev's bound; see Section 7.1 in the Appendix for details.

Properties of the nonconvex objectives. Objectives (2) and (3) are instances of quasi-convex maximization and concave minimization respectively; consequently they attain their optima at extreme points of the feasible set (Bertsekas et al., 2003; Nikolova et al., 2006).

Related Work

The stochastic threshold objective was previously considered in the special case of shortest paths (Nikolova et al., 2006). The authors showed that this objective has the property that its optimum is an extreme point of the feasible set, and gave an exact algorithm based on enumerating extreme points. The property that the optimum is an extreme point holds here as well; however, this is where the similarity of our work to this prior work ends: for general combinatorial problems it is likely that the number of relevant extreme points is too

high (or unknown) and enumerating them will yield very inefficient algorithms. The focus and results of this paper are instead on approximation algorithms, in particular ones which are guaranteed to be efficient for arbitrary combinatorial optimization problems.

Our nonconvex objectives fall into the class of constant-rank quasi-concave minimization problems considered by Kelner and Nikolova (2007) (in our case the objectives are of rank 2), who give approximation algorithms based on smoothed analysis for some of these problems. Their approximation algorithms do not apply to our setting, since they require the objective to have a bounded gradient and a positive lower bound for the optimum (so as to turn an additive into a multiplicative approximation), which is not the case here.

Perhaps closest in spirit to our oracle-approximation methods is the work on robust optimization by Bertsimas and Sim (2003). Although very different in terms of models and algorithmic solutions, they also show how to solve robust combinatorial problems via a small number of oracle calls to the underlying deterministic problems.

A wealth of different models for stochastic combinatorial optimization have appeared in the literature, perhaps most commonly on two-stage and multi-stage stochastic optimization; see the survey by Swamy and Shmoys (2006). Almost all such work considers linear objective functions (i.e., minimizing the expected solution weight) and as such does not consider risk. Some of the models incorporate additional budget constraints (Srinivasan, 2007) or threshold (chance) constraints for specific problems such as knapsack, load balancing and others (Dean et al., 2004; Goel and Indyk, 1999; Kleinberg et al., 2000). A comprehensive survey on stochastic optimization with risk with a different focus (a different solution concept and continuous settings) is provided by Rockafellar (2007). Similarly, the work on chance constraints (e.g., Nemirovski and Shapiro (2006)) applies to linear and not discrete optimization problems. Additional related work includes research on multi-criteria optimization, e.g., (Papadimitriou and Yannakakis, 2000; Ackermann et al., 2005; Safer et al., 2004; Warburton, 1987), and combinatorial optimization with a ratio of linear objectives (Megiddo, 1979; Radzik, 1992). In one multi-criteria setting, Safer et al. (2004) consider nonlinear objectives $f(x)$. However, they assume that the statement "Is $f(x) \le M$?" can be evaluated in polynomial time (which is a key technical challenge in our paper), and their functions $f(x)$ have a much simpler separable form.

Our results vs other potential approaches

Specific combinatorial problems under our framework can be solved with alternative approaches. For example, consider the NP-hard constrained optimization problem $\{\min \mu^T x$ subject to $\tau^T x \le B$, $x \in F\}$. Suppose we can get an approximate solution $\tilde x$ to the latter, which satisfies $\mu^T \tilde x \le \mu^T x^*$ and $\tau^T \tilde x \le B(1+\epsilon)$, where $x^*$ is the optimal solution to the constrained problem with budget $B$. Then we can derive a fully

polynomial-time approximation scheme (FPTAS) for the nonconvex problems (2), (3) by considering a geometric progression of budgets $B$ in the constraint above and picking the solution with the best objective value (2) or (3). This approach can be used whenever we have the above type of FPTAS for the constrained problem, as is the case for shortest paths (Goel et al., 2001). However, since we do not have a black-box solution to the constrained problem in general, this approach does not seem to extend to arbitrary combinatorial problems.

Another approach similar to the constrained problem above would be to use the approximate Pareto boundary.² The latter consists of a polynomial-size set of feasible solutions, such that for any point $x$ on the Pareto boundary there is a point $\tilde x$ in the set that satisfies $\mu^T \tilde x \le (1+\epsilon)\mu^T x$ and $\tau^T \tilde x \le (1+\epsilon)\tau^T x$. When available (e.g., for shortest paths, etc.), such a bicriteria approximation translates into an FPTAS for the nonconvex risk objective (3). However, it will not yield an approximation algorithm for the nonconvex threshold objective (2), because a multiplicative approximation of $\mu^T x$ does not translate into a multiplicative approximation of $(t - \mu^T x)$.

Radzik gives a black-box solution for combinatorial optimization with rational objectives that are a ratio of two linear functions, by converting the rational objective into a linear constraint. A key property of the rational function that allows for an efficient algorithm is that it is monotone along the boundary of the feasible set; this is not the case for any of our objective functions and is one of the biggest challenges in working with nonconvex optimization problems: greedy, local search, interior point and other standard techniques do not work.

Our approach is conceptually very different from previous analyses of related problems. Common approximation techniques for hard instances of stochastic and multicriteria problems convert pseudopolynomial algorithms to FPTASs by scaling and rounding (Warburton, 1987; Safer et al., 2004), or they discretize the decision space and use a combination of dynamic programming and enumeration of possible feasible solutions over this cruder space (Goel and Indyk, 1999). In most cases the techniques are intimately intertwined with the structure of the underlying combinatorial problem and cannot extend to arbitrary problems. In contrast, the near-optimal efficiency of our algorithms is due to the fact that we carefully analyze the form of the objective function and use a top-down approach, where our knowledge of the objective function level sets guides us to zoom down into the necessary portion of the feasible space.

² The Pareto boundary consists of all non-dominated feasible points $x$, namely all points such that there is no other feasible point $\tilde x$ with smaller mean $\mu^T \tilde x \le \mu^T x$ and variance $\tau^T \tilde x \le \tau^T x$.

3. An oracle-LTAS for the nonconvex threshold objective with exact oracle

In this section, we give an oracle-logarithmic time approximation scheme (oracle-LTAS) for the nonconvex problem formulation (2) that uses access to an exact oracle for solving the underlying problem (1). Our algorithms assume that the maximum of the objective is non-negative, in other words that the feasible solution with smallest mean satisfies $\mu^T x \le t$. Note that it is not clear a priori that such a multiplicative approximation is even possible, since we still let the function take positive, zero or negative values on different feasible solutions. The case in which the maximum is negative is structurally very different (the objective on its negative range no longer attains its optima at extreme points) and remains open. Even with this assumption, approximating the objective function is especially challenging due to its unbounded gradient and the form of its numerator.

We first note that if the optimal solution has variance 0, we can find it exactly with a single oracle query: apply the linear oracle to the set of elements with zero variances to find the feasible solution with smallest mean. If this mean is no greater than $t$, output the solution; otherwise conclude that the optimal solution has positive variance and proceed with the approximation scheme below.

The main technical lemma that our algorithm is based on is an extension of the concept of separation and optimization: instead of deciding whether a line (hyperplane) is separating for a polytope, in the sense that the polytope lies entirely on one side of the line (hyperplane), we construct an approximate oracle which decides whether a nonlinear curve (in our case, a parabola) is separating for the polytope.

From here on we will analyze the projections of the objective function and the feasible set onto the plane span($\mu$,$\tau$), since the nonconvex problem (2) is equivalent in that space. Consider the lower level sets $L_\lambda = \{z \mid f(z) \le \lambda\}$ of the objective function $f(m,s) = \frac{t-m}{\sqrt{s}}$, where $m, s \in R$, and denote their boundaries by $\bar L_\lambda = \{z \mid f(z) = \lambda\}$. We first prove that any level set boundary can be approximated by a small number of linear segments. The main work here involves deriving a condition for a linear segment with endpoints on $\bar L_\lambda$ to have objective function values within a factor $(1-\epsilon)$ of $\lambda$.

Lemma 1. Consider the points $(m_1, s_1), (m_2, s_2) \in \bar L_\lambda$ with $s_1 > s_2 > 0$. The segment connecting these two points is contained in the level set region $L_\lambda \setminus L_{\lambda(1-\epsilon)}$ whenever $s_2 \ge (1-\epsilon)^4 s_1$, for every $\epsilon \in (0,1)$.

Proof. Any point on the segment $[(m_1, s_1), (m_2, s_2)]$ can be written as a convex combination of its endpoints, $(\alpha m_1 + (1-\alpha)m_2, \alpha s_1 + (1-\alpha)s_2)$, where $\alpha \in [0,1]$. Consider the function $h(\alpha) = f(\alpha m_1 + (1-\alpha)m_2, \alpha s_1 + (1-\alpha)s_2)$. We have

$h(\alpha) = \frac{t - \alpha m_1 - (1-\alpha)m_2}{\sqrt{\alpha s_1 + (1-\alpha)s_2}} = \frac{t - \alpha(m_1 - m_2) - m_2}{\sqrt{\alpha(s_1 - s_2) + s_2}}$.

We want to find the point on the segment with smallest objective value, so we minimize with respect to $\alpha$:

$h'(\alpha) = \frac{-(m_1 - m_2)\sqrt{\alpha(s_1-s_2)+s_2} - [t - \alpha(m_1-m_2) - m_2]\,\frac{s_1-s_2}{2\sqrt{\alpha(s_1-s_2)+s_2}}}{\alpha(s_1-s_2)+s_2} = \frac{\alpha(m_2-m_1)(s_1-s_2) + 2(m_2-m_1)s_2 - (t-m_2)(s_1-s_2)}{2[\alpha(s_1-s_2)+s_2]^{3/2}}$.

Setting the derivative to 0 is equivalent to setting the numerator above to 0, thus we get

$\alpha_{\min} = \frac{(t-m_2)(s_1-s_2) - 2(m_2-m_1)s_2}{(m_2-m_1)(s_1-s_2)} = \frac{t-m_2}{m_2-m_1} - \frac{2s_2}{s_1-s_2}$.

Note that the denominator of $h'(\alpha)$ is positive and its numerator is linear in $\alpha$ with a positive slope, therefore the derivative is negative for $\alpha < \alpha_{\min}$ and positive otherwise, so $\alpha_{\min}$ is indeed a global minimum as desired.

It remains to verify that $h(\alpha_{\min}) \ge (1-\epsilon)\lambda$. Note that $t - m_i = \lambda\sqrt{s_i}$ for $i = 1,2$ since $(m_i, s_i) \in \bar L_\lambda$, and consequently $m_2 - m_1 = \lambda(\sqrt{s_1} - \sqrt{s_2})$. Using this in the expression for $\alpha_{\min}$ gives

$\alpha_{\min}(s_1-s_2) + s_2 = \frac{(t-m_2)(s_1-s_2)}{m_2-m_1} - s_2 = \frac{\lambda\sqrt{s_2}(s_1-s_2)}{\lambda(\sqrt{s_1}-\sqrt{s_2})} - s_2 = \sqrt{s_2}(\sqrt{s_1}+\sqrt{s_2}) - s_2 = \sqrt{s_1 s_2}$

and

$t - \alpha_{\min}(m_1-m_2) - m_2 = 2(t-m_2) - \frac{2s_2(m_2-m_1)}{s_1-s_2} = 2\lambda\sqrt{s_2} - \frac{2\lambda s_2}{\sqrt{s_1}+\sqrt{s_2}} = \frac{2\lambda\sqrt{s_1 s_2}}{\sqrt{s_1}+\sqrt{s_2}}$,

so that

$h(\alpha_{\min}) = \frac{2\lambda\sqrt{s_1 s_2}}{(\sqrt{s_1}+\sqrt{s_2})(s_1 s_2)^{1/4}} = \frac{2\lambda (s_1 s_2)^{1/4}}{\sqrt{s_1}+\sqrt{s_2}}$.

We need to show that when the ratio $s_1/s_2$ is sufficiently close to 1, $h(\alpha_{\min}) \ge (1-\epsilon)\lambda$, or equivalently

$\frac{2(s_1 s_2)^{1/4}}{\sqrt{s_1}+\sqrt{s_2}} \ge 1-\epsilon \iff 2(s_1 s_2)^{1/4} \ge (1-\epsilon)(s_1^{1/2}+s_2^{1/2}) \iff (1-\epsilon)\left(\frac{s_1}{s_2}\right)^{1/2} - 2\left(\frac{s_1}{s_2}\right)^{1/4} + (1-\epsilon) \le 0.$   (4)

The minimum of the last quadratic function (in $(s_1/s_2)^{1/4}$) is attained at $\left(\frac{s_1}{s_2}\right)^{1/4} = \frac{1}{1-\epsilon}$, and we can check that at this minimum the quadratic function is indeed negative:

$(1-\epsilon)\left(\frac{1}{1-\epsilon}\right)^2 - \frac{2}{1-\epsilon} + (1-\epsilon) = (1-\epsilon) - \frac{1}{1-\epsilon} < 0$

for all $0 < \epsilon < 1$. The inequality (4) is satisfied at $s_1/s_2 = 1$, therefore it holds for all $\frac{s_1}{s_2} \in \left[1, \frac{1}{(1-\epsilon)^4}\right]$. Hence, a sufficient condition for $h(\alpha_{\min}) \ge (1-\epsilon)\lambda$ is $s_2 \ge (1-\epsilon)^4 s_1$, and we are done.

[Figure 1: (a) Level sets of the objective function and the projected polytope on the span($\mu$,$\tau$)-plane. (b) Applying the approximate linear oracle on the optimal slope gives an approximate value $b'$ of the corresponding linear objective value $b^*$. The resulting solution has nonlinear objective function value of at least $\lambda'$, which is an equally good approximation for the optimal value $\lambda^*$.]

Lemma 1 now makes it easy to show our main lemma, namely that any level set boundary $\bar L_\lambda$ can be approximated within a multiplicative factor of $(1-\epsilon)$ via a small number of segments. Let $s_{\min}$ and $s_{\max}$ be a lower and an upper bound respectively for the variance of the optimal solution. For example, take $s_{\min}$ to be the smallest positive component of the variance vector, and $s_{\max}$ the variance of the feasible solution with smallest mean.

Lemma 2. The level set $\bar L_\lambda = \{(m,s) \in R^2 \mid \frac{t-m}{\sqrt{s}} = \lambda\}$ can be approximated within a factor of $(1-\epsilon)$ by $\frac{1}{4}\log\left(\frac{s_{\max}}{s_{\min}}\right) / \log\frac{1}{1-\epsilon}$ linear segments.

Proof. By definition of $s_{\min}$ and $s_{\max}$, the variance of the optimal solution ranges from $s_{\min}$ to $s_{\max}$. By Lemma 1, the segments connecting the points on $\bar L_\lambda$ with variances $s_{\max}, s_{\max}(1-\epsilon)^4, s_{\max}(1-\epsilon)^8, \ldots, s_{\min}$ all lie in the level set region $L_\lambda \setminus L_{\lambda(1-\epsilon)}$, that is, they underestimate and approximate the level set $\bar L_\lambda$ within a factor of $(1-\epsilon)$. The number of these segments is $\frac{1}{4}\log\left(\frac{s_{\max}}{s_{\min}}\right) / \log\frac{1}{1-\epsilon}$.

The above lemma yields an approximate separation oracle for the nonlinear level set $\bar L_\lambda$ and polytope($F$). The oracle takes as input the level $\lambda$ and either returns a solution $x$ from the feasible set with objective value $f(x) \ge (1-\epsilon)\lambda$, or guarantees that $f(x) < \lambda$ for all $x \in$ polytope($F$). Therefore, an exact oracle for solving problem (1) allows us to obtain an approximate nonlinear separation oracle, by applying the former to the weight vectors $a\mu + \tau$, for all possible slopes $(-a)$ of the segments approximating the level set. We formalize this in the next theorem.
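Before stating this formally, the following sketch illustrates the construction (the linear_oracle callable, assumed to return an exact minimizer of a given linear objective over $F$, and all other names are illustrative assumptions, not code from the paper): it generates the variance breakpoints of Lemma 2, computes the slope of each approximating segment of $\bar L_\lambda$, and queries the linear oracle with the corresponding weight vector $a\mu + \tau$.

    import math

    def approx_separation_oracle(linear_oracle, mu, tau, t, lam, eps, s_min, s_max):
        """Approximate nonlinear separation for a level lam of objective (2) (sketch).

        Either returns x in F with f(x) >= (1 - eps) * lam, or returns None,
        read as "f(x) < lam for every x in polytope(F)".
        """
        def f(x):
            m = sum(mi * xi for mi, xi in zip(mu, x))
            s = sum(ti * xi for ti, xi in zip(tau, x))
            if s <= 0:
                return float("-inf")   # zero-variance solutions are handled separately
            return (t - m) / math.sqrt(s)

        # Variance breakpoints s_max, s_max*(1-eps)^4, s_max*(1-eps)^8, ..., s_min (Lemma 2).
        breakpoints = [s_max]
        while breakpoints[-1] * (1 - eps) ** 4 > s_min:
            breakpoints.append(breakpoints[-1] * (1 - eps) ** 4)
        breakpoints.append(s_min)

        for s1, s2 in zip(breakpoints, breakpoints[1:]):
            # Segment endpoints on the boundary t - m = lam * sqrt(s).
            m1, m2 = t - lam * math.sqrt(s1), t - lam * math.sqrt(s2)
            a = (s1 - s2) / (m2 - m1)                       # absolute value of the segment slope
            x = linear_oracle([a * mi + ti for mi, ti in zip(mu, tau)])
            if f(x) >= (1 - eps) * lam:
                return x
        return None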

Theorem 3 (Approximate Nonlinear Separation Oracle). Suppose we have an exact (linear) oracle for solving problem (1). Then we can construct a nonlinear oracle which solves the following approximate separation problem: given a level $\lambda$ and $\epsilon \in (0,1)$, the oracle returns

1. A solution $x \in F$ with $f(x) \ge (1-\epsilon)\lambda$, or
2. An answer that $f(x) < \lambda$ for all $x \in$ polytope($F$),

and the number of linear oracle calls it makes is $\frac{1}{4}\log\left(\frac{s_{\max}}{s_{\min}}\right) / \log\frac{1}{1-\epsilon}$, that is, $O\left(\frac{1}{\epsilon}\log\frac{s_{\max}}{s_{\min}}\right)$.

We can now give an oracle-LTAS for the nonconvex problem (2) by applying the above nonlinear oracle on a geometric progression of possible values $\lambda$ of the objective function $f$. We first need to bound the maximum value $f_{opt}$ of the objective function $f$. A lower bound $f_l$ is provided by the solution $x_{mean}$ with smallest mean or the solution $x_{var}$ with smallest positive variance, whichever has a higher objective value: $f_l = \max\{f(x_{mean}), f(x_{var})\}$, where $f(x) = \frac{t - \mu^T x}{\sqrt{\tau^T x}}$. On the other hand, $\mu^T x \ge \mu^T x_{mean}$ and $\tau^T x \ge \tau^T x_{var}$ for all $x \in$ polytope($F$), so an upper bound for the objective $f$ is given by $f_u = \frac{t - \mu^T x_{mean}}{\sqrt{\tau^T x_{var}}}$ (recall that $t - \mu^T x_{mean} > 0$ by assumption).

Theorem 4. Suppose we have an exact oracle for problem (1) and suppose the smallest-mean feasible solution satisfies $\mu^T x \le t$. Then for any $\epsilon \in (0,1)$, there is an algorithm for solving the nonconvex threshold problem (2) which returns a feasible solution $x \in F$ with value at least $(1-\epsilon)$ times the optimum, and makes $O\left(\log\left(\frac{s_{\max}}{s_{\min}}\right)\log\left(\frac{f_u}{f_l}\right)\frac{1}{\epsilon^2}\right)$ oracle calls.

Proof. Apply the approximate separation oracle from Theorem 3 with $\epsilon' = 1 - \sqrt{1-\epsilon}$ successively on the levels $f_u, (1-\epsilon')f_u, (1-\epsilon')^2 f_u, \ldots$ until we reach a level $\lambda = (1-\epsilon')^i f_u \ge f_l$ for which the oracle returns a feasible solution $x'$ with $f(x') \ge (1-\epsilon')\lambda = (\sqrt{1-\epsilon})^{i+1} f_u$. From running the oracle on the previous level $(1-\epsilon')^{i-1} f_u$, we know that $f(x) \le f(x_{opt}) < (\sqrt{1-\epsilon})^{i-1} f_u$ for all $x \in$ polytope($F$), where $x_{opt}$ denotes the optimal solution. Therefore

$(\sqrt{1-\epsilon})^{i+1} f_u \le f(x') \le f(x_{opt}) < (\sqrt{1-\epsilon})^{i-1} f_u$,

and hence $(1-\epsilon)f(x_{opt}) < f(x') \le f(x_{opt})$. So the solution $x'$ gives a $(1-\epsilon)$-approximation to the optimum $x_{opt}$.

In the process, we run the approximate nonlinear separation oracle at most $\log\left(\frac{f_u}{f_l}\right) / \log\frac{1}{1-\epsilon'}$ times, and each run makes $\frac{1}{4}\log\left(\frac{s_{\max}}{s_{\min}}\right) / \log\frac{1}{1-\epsilon'}$ queries to the linear oracle, so the algorithm makes at most $\frac{1}{4}\log\left(\frac{s_{\max}}{s_{\min}}\right)\log\left(\frac{f_u}{f_l}\right) / \left(\frac{1}{2}\log\frac{1}{1-\epsilon}\right)^2$ queries to the oracle for the linear problem (1). Finally, since $\log\frac{1}{1-\epsilon} \ge \epsilon$ for $\epsilon \in [0,1)$, we get the desired bound for the total number of queries.
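For concreteness, here is a minimal sketch of the level search used in the proof of Theorem 4, assuming a separation_oracle(lam, eps) callable built as sketched above (the names and interface are assumptions for illustration only):

    import math

    def threshold_ltas(separation_oracle, f_l, f_u, eps):
        """Geometric search over objective levels, following the proof of Theorem 4 (sketch)."""
        eps1 = 1 - math.sqrt(1 - eps)     # oracle accuracy eps' = 1 - sqrt(1 - eps)
        lam = f_u
        while lam >= f_l:
            x = separation_oracle(lam, eps1)
            if x is not None:
                return x                  # (1 - eps)-approximate by the argument above
            lam *= 1 - eps1               # move to the next lower level
        return None                       # no feasible solution with positive objective was found

The outer loop runs $O(\log(f_u/f_l)/\epsilon)$ times and each call issues $O(\log(s_{\max}/s_{\min})/\epsilon)$ linear-oracle queries, matching the bound in Theorem 4.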

4. The nonconvex threshold objective with approximate linear oracle

In this section, we show that a $\delta$-approximate oracle for problem (1) yields an efficient approximation algorithm for the nonconvex problem (2). As in Section 3, we first check whether the optimal solution has zero variance and, if not, proceed with the algorithm and analysis below. We first prove several geometric lemmas that enable us to derive the approximation guarantees later.

Lemma 5 (Geometric lemma). Consider two objective function values $\lambda > \lambda'$ and points $(m', s') \in \bar L_{\lambda'}$, $(m, s) \in \bar L_\lambda$ in the positive orthant such that the tangents to the corresponding level sets at these points are parallel. Then the y-intercepts $b$, $b'$ of the two tangent lines satisfy

$b' - b = s\left[1 - \left(\frac{\lambda'}{\lambda}\right)^2\right]$.

Proof. Suppose the slope of the tangents is $(-a)$, where $a > 0$. Then the y-intercepts of the two tangent lines satisfy $b = s + am$, $b' = s' + am'$. In addition, since the points $(m,s)$ and $(m',s')$ lie on the level sets $\bar L_\lambda$, $\bar L_{\lambda'}$, they satisfy $t - m = \lambda\sqrt{s}$ and $t - m' = \lambda'\sqrt{s'}$. Since the first line is tangent at $(m,s)$ to the parabola $y = \left(\frac{t-x}{\lambda}\right)^2$, its slope equals the first derivative at this point, $-\frac{2(t-x)}{\lambda^2}\big|_{x=m} = -\frac{2(t-m)}{\lambda^2} = -\frac{2\lambda\sqrt{s}}{\lambda^2} = -\frac{2\sqrt{s}}{\lambda}$, so the absolute value of the slope is $a = \frac{2\sqrt{s}}{\lambda}$. Similarly the absolute value of the slope also satisfies $a = \frac{2\sqrt{s'}}{\lambda'}$, therefore

$\sqrt{s'} = \frac{\lambda'}{\lambda}\sqrt{s}$.

Note that for $\lambda > \lambda'$, this means that $s > s'$. From here, we can represent the difference $m' - m$ as

$m' - m = (t - m) - (t - m') = \lambda\sqrt{s} - \lambda'\sqrt{s'} = \lambda\sqrt{s} - \frac{(\lambda')^2}{\lambda}\sqrt{s} = \left[1 - \left(\frac{\lambda'}{\lambda}\right)^2\right]\lambda\sqrt{s}$.

Substituting the slope $a = \frac{2\sqrt{s}}{\lambda}$ in the tangent line equations, we get

$b' - b = s' + \frac{2\sqrt{s}}{\lambda}m' - s - \frac{2\sqrt{s}}{\lambda}m = s' - s + \frac{2\sqrt{s}}{\lambda}(m' - m) = -s\left[1 - \left(\frac{\lambda'}{\lambda}\right)^2\right] + 2s\left[1 - \left(\frac{\lambda'}{\lambda}\right)^2\right] = s\left[1 - \left(\frac{\lambda'}{\lambda}\right)^2\right]$,

as desired.

The next lemma builds intuition as well as helps in the analysis of the algorithm. It shows that we can approximate the optimal solution well if we know the optimal weight vector to use with the available approximate oracle for problem (1).

Lemma 6. Suppose we have a $\delta$-approximate linear oracle for optimizing over polytope($F$) and suppose that the optimal solution satisfies $\mu^T x^* \le (1-\epsilon)t$. If we can guess the slope of the tangent to the corresponding level set at the optimal point $x^*$, then we can find a $\sqrt{1 - (\delta-1)\frac{2-\epsilon}{\epsilon}}$-approximate solution to the nonconvex problem (2). In particular, setting $\epsilon = \sqrt{\delta-1}$ gives a $(1-\sqrt{\delta-1})$-approximate solution.

Proof. Denote the projection of the optimal point $x^*$ onto the plane by $(m^*, s^*) = (\mu^T x^*, \tau^T x^*)$. We apply the linear oracle with respect to the slope $(-a)$ of the tangent to the level set $\bar L_{\lambda^*}$ at $(m^*, s^*)$. The value of the linear objective at the optimum is $b^* = s^* + am^*$, which is the y-intercept of the tangent line. The linear oracle returns a $\delta$-approximate solution, that is, a solution on a parallel line with y-intercept $b' \le \delta b^*$. Suppose the original (nonlinear) objective value at the returned solution is lower-bounded by $\lambda'$, that is, it lies on a line tangent to $\bar L_{\lambda'}$ (see Figure 1(b)). From Lemma 5, we have $b' - b^* = s^*\left[1 - \left(\frac{\lambda'}{\lambda^*}\right)^2\right]$, therefore

$\left(\frac{\lambda'}{\lambda^*}\right)^2 = 1 - \frac{b' - b^*}{s^*} = 1 - \left(\frac{b' - b^*}{b^*}\right)\frac{b^*}{s^*} \ge 1 - (\delta - 1)\frac{b^*}{s^*}$.   (5)

Recall that $b^* = s^* + m^*\frac{2\sqrt{s^*}}{\lambda^*}$ and $m^* \le (1-\epsilon)t$, so

$\frac{b^*}{s^*} = 1 + \frac{2m^*}{\lambda^*\sqrt{s^*}} = 1 + \frac{2m^*}{t - m^*} \le 1 + \frac{2m^*}{\frac{\epsilon}{1-\epsilon}m^*} = 1 + \frac{2(1-\epsilon)}{\epsilon} = \frac{2-\epsilon}{\epsilon}$.

Together with Eq. (5), this gives a $\sqrt{1 - (\delta-1)\frac{2-\epsilon}{\epsilon}}$-approximation factor to the optimum. On the other hand, setting $\epsilon = \sqrt{\delta-1}$ gives the approximation factor $\sqrt{1 - \sqrt{\delta-1}\,(2 - \sqrt{\delta-1})} = 1 - \sqrt{\delta-1}$.

Next, we prove a geometric lemma that will be needed to analyze the approximation factor we get when applying the linear oracle on an approximately optimal slope.

Lemma 7. Consider the level set $\bar L_\lambda$ and points $(m^*, s^*)$ and $(m, s)$ on it, at which the tangents to $\bar L_\lambda$ have slopes $-a$ and $-a(1+\xi)$ respectively. Let the y-intercepts of the tangent line at $(m, s)$ and of the line parallel to it through $(m^*, s^*)$ be $b_1$ and $b$ respectively. Then $\frac{b}{b_1} \le \frac{1}{1-\xi^2}$.

Proof. The equation of the level set $\bar L_\lambda$ is $y = \left(\frac{t-x}{\lambda}\right)^2$, so the slope at a point $(m, s) \in \bar L_\lambda$ is given by the derivative at $x = m$, that is, $-\frac{2(t-m)}{\lambda^2} = -\frac{2\sqrt{s}}{\lambda}$. So the slope of the tangent to the level set $\bar L_\lambda$ at the point $(m^*, s^*)$ is $-a = -\frac{2\sqrt{s^*}}{\lambda}$. Similarly the slope of the tangent at $(m, s)$ is $-a(1+\xi) = -\frac{2\sqrt{s}}{\lambda}$. Therefore $\sqrt{s} = (1+\xi)\sqrt{s^*}$, or equivalently $(t - m) = (1+\xi)(t - m^*)$.

Since $b$, $b_1$ are the intercepts with the y-axis of the lines with slope $-a(1+\xi) = -\frac{2\sqrt{s}}{\lambda}$ containing the points $(m^*, s^*)$, $(m, s)$ respectively, we have

$b_1 = s + \frac{2\sqrt{s}}{\lambda}m = \frac{t^2 - m^2}{\lambda^2}$,

$b = s^* + (1+\xi)\frac{2\sqrt{s^*}}{\lambda}m^* = \frac{t - m^*}{\lambda^2}(t + m^* + 2\xi m^*)$.

Therefore

$\frac{b}{b_1} = \frac{(t - m^*)(t + m^* + 2\xi m^*)}{(t - m)(t + m)} = \frac{1}{1+\xi}\cdot\frac{t + (1+2\xi)m^*}{(1-\xi)t + (1+\xi)m^*} \le \frac{1}{1+\xi}\cdot\frac{1}{1-\xi} = \frac{1}{1-\xi^2}$,

where we use $m = t - (1+\xi)(t - m^*)$ from above and the last inequality follows by Lemma 8.

Lemma 8. For any $q, r > 0$, $\frac{q + (1+2\xi)r}{(1-\xi)q + (1+\xi)r} \le \frac{1}{1-\xi}$.

Proof. This follows from the fact that $\frac{1+2\xi}{1+\xi} \le \frac{1}{1-\xi}$ for $\xi \in [0,1)$.

We now show that we get a good approximation even when we use an approximately optimal weight vector with our oracle.

Lemma 9. Suppose that we use an approximately optimal weight vector with a $\delta$-approximate linear oracle for problem (1) to solve the nonconvex threshold problem (2). In particular, suppose the weight vector (slope) that we use is within a factor $(1+\xi)$ of the slope of the tangent at the optimal solution. Then this will give a solution to the nonconvex threshold problem (2) with value at least $\sqrt{1 - \left[\frac{\delta}{1-\xi^2} - 1\right]\frac{2-\epsilon}{\epsilon}}$ times the optimal, provided the optimal solution satisfies $\mu^T x^* \le (1-\epsilon)t$.

Proof. Suppose the optimal solution is $(m^*, s^*)$ and it lies on the optimal level set $\bar L_{\lambda^*}$. Let the slope of the tangent to the level set boundary at the optimal solution be $(-a)$. We apply our $\delta$-approximation linear oracle with respect to a slope that is $(1+\xi)$ times the optimal slope $(-a)$. Suppose the resulting black-box solution lies on the line with y-intercept $b_2$, and the true optimum of this linear objective lies on the line with y-intercept $b'$. We know $b' \in [b_1, b]$, where $b_1$ and $b$ are the y-intercepts of the lines with slope $-(1+\xi)a$ that are tangent to $\bar L_{\lambda^*}$ and that pass through $(m^*, s^*)$ respectively. Then we have $\frac{b_2}{b} \le \frac{b_2}{b'} \le \delta$. Furthermore, by Lemma 7 we have $\frac{b}{b_1} \le \frac{1}{1-\xi^2}$.

On the other hand, from Lemma 5, $b_2 - b_1 = s\left[1 - \left(\frac{\lambda_2}{\lambda^*}\right)^2\right]$, where $\lambda_2$ is the smallest possible objective function value along the line with slope $-a(1+\xi)$ and y-intercept $b_2$, in other words the smallest possible objective function value that the solution returned by the approximate linear oracle may have; here $(m, s)$ is the tangent point of the line with slope $-(1+\xi)a$ tangent to $\bar L_{\lambda^*}$. Therefore, applying the above inequalities, we get

$\left(\frac{\lambda_2}{\lambda^*}\right)^2 = 1 - \frac{b_2 - b_1}{s} = 1 - \left(\frac{b_2 - b_1}{b_1}\right)\frac{b_1}{s} \ge 1 - \left(\frac{b_2}{b}\cdot\frac{b}{b_1} - 1\right)\frac{2-\epsilon}{\epsilon} \ge 1 - \left(\frac{\delta}{1-\xi^2} - 1\right)\frac{2-\epsilon}{\epsilon}$,

where $\frac{b_1}{s} \le \frac{2-\epsilon}{\epsilon}$ follows as in the proof of Lemma 6. The result follows.

Consequently, we can approximate the optimal solution by applying the approximate linear oracle on a small number of appropriately chosen linear functions and picking the best resulting solution.

Theorem 10. Suppose we have a $\delta$-approximation linear oracle for problem (1). Then the nonconvex threshold problem (2) has a $\sqrt{1 - \frac{\delta - (1-\epsilon^2/4)}{(2+\epsilon)\epsilon/4}}$-approximation algorithm that calls the linear oracle a number of times that is logarithmic in the input and polynomial in $\frac{1}{\epsilon}$, assuming the optimal solution to (2) satisfies $\mu^T x^* \le (1-\epsilon)t$.

Proof. The algorithm applies the linear approximation oracle with respect to a small number of linear functions and chooses the best resulting solution. In particular, suppose the absolute value of the optimal slope (of the tangent to the corresponding level set at the optimal solution point) lies in the interval $[L, U]$. We find approximate solutions with respect to the slopes $L, L(1+\xi), L(1+\xi)^2, \ldots, L(1+\xi)^k \ge U$, namely we apply the approximate linear oracle $\lceil \log(U/L)/\log(1+\xi) \rceil$ times. With this, we are certain that the optimal slope lies in some interval $[L(1+\xi)^i, L(1+\xi)^{i+1}]$, and by Lemma 9 the solution returned by the linear oracle with respect to the slope $L(1+\xi)^{i+1}$ gives a $\sqrt{1 - \left[\frac{\delta}{1-\xi^2} - 1\right]\frac{2-\epsilon}{\epsilon}}$-approximation to our nonlinear objective function value. Since we are free to choose $\xi$, setting $\xi = \epsilon/2$ gives the approximation factor in the theorem statement and the desired number of queries.

We conclude the proof by noting that we can take $L$ to be the slope of the tangent to the corresponding level set at $(m_L, s_L)$, where $s_L$ is the minimum positive component of the variance vector and $m_L = t(1-\epsilon)$. Similarly, let $U$ be the slope of the tangent at $(m_U, s_U)$, where $m_U = 0$ and $s_U$ is the sum of the components of the variance vector.

Note that when $\delta = 1$, namely when we can solve the underlying linear problem exactly in polynomial time, the above algorithm gives an approximation factor of $\frac{1}{\sqrt{1+\epsilon/2}}$, or equivalently $(1-\epsilon')$ where $\epsilon = 2\left[\frac{1}{(1-\epsilon')^2} - 1\right]$. While this algorithm is still an oracle-logarithmic time approximation scheme, it gives a bicriteria approximation: it requires that there be a small gap between the mean of the optimal solution and $t$, so it is

weaker than our previous algorithm, which had no such requirement. This is expected, since this algorithm is cruder, taking a single geometric progression of linear functions rather than tailoring the linear oracle applications to the objective function value that it is searching for, as the nonlinear separation oracle underlying the algorithm from Section 3 does.

5. An oracle-LTAS for the nonconvex risk objective with an exact oracle

In this section we present an oracle-logarithmic time approximation scheme for the nonconvex problem (3), using an exact oracle for solving the underlying problem (1). The projected level sets of the objective function $f(x) = \mu^T x + \sqrt{\tau^T x}$ onto the span($\mu$,$\tau$) plane are again parabolas, though differently arranged, and the analysis in the previous sections does not apply. Following the same techniques, however, we can derive similar approximation algorithms, which construct an approximate nonlinear separation oracle from the linear one and apply it appropriately a small number of times.

To do this, we first need to decide which linear segments to approximate a level set with, and how many of them are needed. In particular, we want to fit as few segments as possible with endpoints on the level set $\bar L_\lambda$, entirely contained in the nonlinear band between $\bar L_\lambda$ and $\bar L_{(1+\epsilon)\lambda}$ (over the range $m = \mu^T x \in [0, \lambda]$, $s = \tau^T x \in [0, \lambda^2]$). Geometrically, the optimal choice of segments starts from one endpoint of the level set $\bar L_\lambda$ and repeatedly draws tangents to the level set $\bar L_{(1+\epsilon)\lambda}$, as shown in Figure 2.

[Figure 2: (left) Level sets and approximate nonlinear separation oracle for the projected nonconvex (stochastic) objective $f(x) = \mu^T x + \sqrt{\tau^T x}$ on the span($\mu$,$\tau$)-plane. (right) Approximating the objective value $\lambda_1$ of the optimal solution $(m^*, s^*)$.]

We first show that the tangent segments to $\bar L_{(1+\epsilon)\lambda}$ starting at the endpoints of $\bar L_\lambda$ are sufficiently long.

Lemma 11. Consider points $(m_1, s_1)$ and $(m_2, s_2)$ on $\bar L_\lambda$ with $0 \le m_1 < m_2 \le \lambda$ such that the segment with these endpoints is tangent to $\bar L_{(1+\epsilon)\lambda}$ at the point $\alpha(m_1, s_1) + (1-\alpha)(m_2, s_2)$. Then

$\alpha = \frac{s_1 - s_2}{4(m_2 - m_1)^2} - \frac{s_2}{s_1 - s_2}$

and the objective value at the tangent point is

$\frac{s_1 - s_2}{4(m_2 - m_1)} + \frac{s_2(m_2 - m_1)}{s_1 - s_2} + m_2$.

Proof. Let $\tilde f : R^2 \to R$, $\tilde f(m, s) = m + \sqrt{s}$ be the projection of the objective $f(x) = \mu^T x + \sqrt{\tau^T x}$. The objective values along the segment with endpoints $(m_1, s_1)$, $(m_2, s_2)$ are given by

$h(\alpha) = \tilde f(\alpha m_1 + (1-\alpha)m_2, \alpha s_1 + (1-\alpha)s_2) = \alpha(m_1 - m_2) + m_2 + \sqrt{\alpha(s_1 - s_2) + s_2}$,

for $\alpha \in [0,1]$. The point along the segment with maximum objective value (that is, the tangent point to the minimum level set bounding the segment) is found by setting the derivative

$h'(\alpha) = m_1 - m_2 + \frac{s_1 - s_2}{2\sqrt{\alpha(s_1 - s_2) + s_2}}$

to zero:

$2\sqrt{\alpha(s_1 - s_2) + s_2} = \frac{s_1 - s_2}{m_2 - m_1} \iff \alpha(s_1 - s_2) + s_2 = \frac{(s_1 - s_2)^2}{4(m_2 - m_1)^2} \iff \alpha = \frac{s_1 - s_2}{4(m_2 - m_1)^2} - \frac{s_2}{s_1 - s_2}$.

This is a maximum, since the derivative $h'(\alpha)$ is decreasing in $\alpha$. The objective value at the maximum is

$h(\alpha_{\max}) = \alpha_{\max}(m_1 - m_2) + m_2 + \sqrt{\alpha_{\max}(s_1 - s_2) + s_2} = \left[\frac{s_1 - s_2}{4(m_2 - m_1)^2} - \frac{s_2}{s_1 - s_2}\right](m_1 - m_2) + m_2 + \frac{s_1 - s_2}{2(m_2 - m_1)} = \frac{s_1 - s_2}{4(m_2 - m_1)} + \frac{s_2(m_2 - m_1)}{s_1 - s_2} + m_2$.

Further, since $s_1 = (\lambda - m_1)^2$ and $s_2 = (\lambda - m_2)^2$, their difference satisfies $s_1 - s_2 = (m_2 - m_1)(2\lambda - m_1 - m_2)$, so $\frac{s_1 - s_2}{m_2 - m_1} = 2\lambda - m_1 - m_2$ and the above expression for the maximum function value on the segment

becomes

$h(\alpha_{\max}) = \frac{2\lambda - m_1 - m_2}{4} + \frac{(\lambda - m_2)^2}{2\lambda - m_1 - m_2} + m_2$.

Now we can show that the tangent segments at the ends of the level set $\bar L_\lambda$ are long.

Lemma 12. Consider the endpoint $(m_2, s_2) = (\lambda, 0)$ of $\bar L_\lambda$. Then either the single segment connecting the two endpoints of $\bar L_\lambda$ is entirely below the level set $\bar L_{(1+\epsilon)\lambda}$, or the other endpoint of the segment tangent to $\bar L_{(1+\epsilon)\lambda}$ is $(m_1, s_1) = (\lambda(1-4\epsilon), (4\epsilon\lambda)^2)$.

Proof. Since $0 \le m_1 < \lambda$, we can write $m_1 = \beta\lambda$ for some $\beta \in [0,1)$. Consequently, $s_1 = (\lambda - m_1)^2 = \lambda^2(1-\beta)^2$ and

$\frac{s_1 - s_2}{m_2 - m_1} = \frac{\lambda^2(1-\beta)^2}{\lambda(1-\beta)} = \lambda(1-\beta)$.

By Lemma 11, the objective value at the tangent point is

$\frac{\lambda(1-\beta)}{4} + \lambda = \lambda\left(\frac{1-\beta}{4} + 1\right) = (1+\epsilon)\lambda$.

The last equality follows from our assumption that the tangent point lies on the $\bar L_{(1+\epsilon)\lambda}$ level set. Hence $\beta = 1 - 4\epsilon$, so $m_1 = (1-4\epsilon)\lambda$ and $s_1 = (\lambda - m_1)^2 = (4\epsilon\lambda)^2$.

Next, we characterize the segments with endpoints on $\bar L_\lambda$ that are tangent to the level set $\bar L_{(1+\epsilon)\lambda}$.

Lemma 13. Consider two points $(m_1, s_1)$, $(m_2, s_2)$ on $\bar L_\lambda$ with $0 \le m_1 < m_2 \le \lambda$ and such that the segment connecting the two points is tangent to $\bar L_{(1+\epsilon)\lambda}$. Then the ratio $\frac{s_1}{s_2} \ge (1+2\epsilon)^2$.

Proof. Let the point $(m, s)$ on the segment with endpoints $(m_1, s_1)$, $(m_2, s_2)$ be the tangent point to the level set $\bar L_{(1+\epsilon)\lambda}$. Then the slope $\frac{s_1 - s_2}{m_1 - m_2}$ of the segment is equal to the derivative of the function $y = ((1+\epsilon)\lambda - x)^2$ at $x = m$, which is $-2((1+\epsilon)\lambda - m) = -2\sqrt{s}$. Since

$\frac{s_1 - s_2}{m_1 - m_2} = \frac{s_1 - s_2}{(\lambda - m_2) - (\lambda - m_1)} = \frac{s_1 - s_2}{\sqrt{s_2} - \sqrt{s_1}} = -(\sqrt{s_1} + \sqrt{s_2})$,

equating the two expressions for the slope we get $2\sqrt{s} = \sqrt{s_1} + \sqrt{s_2}$. On the other hand, since $(m, s) \in \bar L_{(1+\epsilon)\lambda}$, we have

$m = (1+\epsilon)\lambda - \sqrt{s} = (1+\epsilon)\lambda - \frac{\sqrt{s_1} + \sqrt{s_2}}{2} = (1+\epsilon)\lambda - \frac{(\lambda - m_1) + (\lambda - m_2)}{2} = \epsilon\lambda + \frac{m_1 + m_2}{2}$,

and $m = \alpha(m_1 - m_2) + m_2$ for some $\alpha \in (0,1)$. Therefore

$\alpha = \frac{1}{2} - \frac{\epsilon\lambda}{m_2 - m_1} = \frac{1}{2} - \frac{\epsilon\lambda}{\sqrt{s_1} - \sqrt{s_2}}$.

Next,

$s = \alpha(s_1 - s_2) + s_2 = \left[\frac{1}{2} - \frac{\epsilon\lambda}{\sqrt{s_1} - \sqrt{s_2}}\right](s_1 - s_2) + s_2 = \frac{s_1 + s_2}{2} - \epsilon\lambda(\sqrt{s_1} + \sqrt{s_2})$,

therefore, using $2\sqrt{s} = \sqrt{s_1} + \sqrt{s_2}$ from above, we get two equivalent expressions for $4s$:

$2(s_1 + s_2) - 4\epsilon\lambda(\sqrt{s_1} + \sqrt{s_2}) = s_1 + s_2 + 2\sqrt{s_1 s_2}$
$\iff s_1 + s_2 - 4\epsilon\lambda(\sqrt{s_1} + \sqrt{s_2}) - 2\sqrt{s_1 s_2} = 0$
$\iff \frac{s_1}{s_2} - \frac{4\epsilon\lambda}{\sqrt{s_2}}\left(\sqrt{\frac{s_1}{s_2}} + 1\right) - 2\sqrt{\frac{s_1}{s_2}} + 1 = 0$.

Denote for simplicity $z = \sqrt{s_1/s_2}$ and $w = \frac{2\epsilon\lambda}{\sqrt{s_2}}$; then we have to solve the following quadratic equation for $z$ in terms of $w$:

$z^2 - 2w(z+1) - 2z + 1 = 0 \iff z^2 - 2z(w+1) + 1 - 2w = 0$.

The discriminant of this quadratic expression is $D = (w+1)^2 - 1 + 2w = w^2 + 4w$ and its roots are $z_{1,2} = 1 + w \pm \sqrt{w^2 + 4w}$. Since $s_1/s_2 > 1$, we choose the bigger root $z_2 = 1 + w + \sqrt{w^2 + 4w}$. Therefore, since $w = \frac{2\epsilon\lambda}{\sqrt{s_2}} \ge 0$, we have

$\sqrt{\frac{s_1}{s_2}} = 1 + w + \sqrt{w^2 + 4w} \ge 1 + w = 1 + \frac{2\epsilon\lambda}{\sqrt{s_2}} \ge 1 + \frac{2\epsilon\lambda}{\lambda} = 1 + 2\epsilon$,

where the last inequality follows from the fact that $s_2 < s_1 \le \lambda^2$. This concludes the proof.

The last lemma shows that each segment is sufficiently long, so that overall the number of tangent segments approximating the level set $\bar L_\lambda$ is small; in particular, it is polynomial in $\frac{1}{\epsilon}$ (and does not depend on the input size of the problem!). This gives us the desired approximate nonlinear separation oracle for the level sets of the objective function.

Theorem 14. A nonlinear $(1+\epsilon)$-approximate separation oracle for any level set of the nonconvex objective $f(x)$ in problem (3) can be obtained with $1 + \frac{\log(1/16\epsilon^2)}{2\log(1+2\epsilon)}$ queries to the available linear oracle for solving problem (1). The nonlinear oracle takes as inputs $\lambda$, $\epsilon$ and returns either a feasible solution $x \in F$ with $f(x) \le (1+\epsilon)\lambda$, or an answer that $f(x) > \lambda$ for all $x$ in polytope($F$).

Proof. Apply the available linear oracle to the slopes of the segments with endpoints on the specified level set, say $\bar L_\lambda$, which are tangent to the level set $\bar L_{(1+\epsilon)\lambda}$. By Lemma 13 and Lemma 12, the s-coordinates of the endpoints of these segments are given by $s_1 = \lambda^2$, $s_2 \le \frac{s_1}{(1+2\epsilon)^2}$, $s_3 \le \frac{s_1}{(1+2\epsilon)^4}$, ..., $s_k \le \frac{s_1}{(1+2\epsilon)^{2(k-1)}}$, $s_{k+1} = 0$, where $s_k = (4\epsilon\lambda)^2$, so $k = 1 + \frac{\log(1/16\epsilon^2)}{2\log(1+2\epsilon)}$, which is precisely the number of segments we use, and the result follows.
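A minimal sketch of the oracle from Theorem 14 (again with an assumed exact linear_oracle minimizing a given linear objective over $F$; all names are illustrative only): it generates the segment endpoints $\lambda^2, \lambda^2/(1+2\epsilon)^2, \ldots, (4\epsilon\lambda)^2, 0$ and queries the linear oracle with the weight vector determined by each segment slope.

    import math

    def risk_separation_oracle(linear_oracle, mu, tau, lam, eps):
        """(1+eps)-approximate separation for a level lam of objective (3) (sketch).

        Returns x in F with f(x) <= (1 + eps) * lam, or None, read as
        "f(x) > lam for every x in polytope(F)".
        """
        def f(x):
            m = sum(mi * xi for mi, xi in zip(mu, x))
            s = sum(ti * xi for ti, xi in zip(tau, x))
            return m + math.sqrt(s)

        # s-coordinates of the segment endpoints on the boundary s = (lam - m)^2.
        s_vals = [lam ** 2]
        while s_vals[-1] > (4 * eps * lam) ** 2:
            s_vals.append(s_vals[-1] / (1 + 2 * eps) ** 2)
        s_vals.append(0.0)

        best = None
        for s1, s2 in zip(s_vals, s_vals[1:]):
            m1, m2 = lam - math.sqrt(s1), lam - math.sqrt(s2)
            a = (s1 - s2) / (m2 - m1)                       # absolute value of the segment slope
            x = linear_oracle([a * mi + ti for mi, ti in zip(mu, tau)])
            if best is None or f(x) < f(best):
                best = x
        return best if best is not None and f(best) <= (1 + eps) * lam else None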

Finally, based on the approximate nonlinear separation oracle from Theorem 14, we can now easily solve the nonconvex problem (3) by running the oracle on a geometric progression of values from the objective function range. We again need to bound the optimal value of the objective function. For a lower bound we can use $f_l = \sqrt{s_{\min}}$, where $s_{\min}$ is the smallest positive variance component, and for an upper bound take $f_u = n m_{\max} + \sqrt{n s_{\max}}$, where $m_{\max}$ and $s_{\max}$ are the largest components of the mean and variance vectors respectively.

Theorem 15. There is an oracle-logarithmic time approximation scheme for the nonconvex problem (3) which uses an exact oracle for solving the underlying problem (1). This algorithm returns a $(1+\epsilon)$-approximate solution and makes $\left(1 + \frac{2}{\epsilon}\log\frac{f_u}{f_l}\right)\left(1 + \frac{\log(1/16\epsilon^2)}{2\log(1+2\epsilon)}\right)$ oracle queries, namely logarithmic in the size of the input and polynomial in $\frac{1}{\epsilon}$.

Proof. The proof is analogous to that of Theorem 4, applying the nonlinear oracle with approximation factor $\sqrt{1+\epsilon}$ for the objective function values $f_l, f_l\sqrt{1+\epsilon}, f_l(\sqrt{1+\epsilon})^2, \ldots, f_u = f_l(\sqrt{1+\epsilon})^j$, namely applying it at most $1 + 2\log\left(\frac{f_u}{f_l}\right)/\log(1+\epsilon) \le 1 + \frac{2}{\epsilon}\log\left(\frac{f_u}{f_l}\right)$ times. In addition, we run the linear oracle once with weight vector equal to the vector of means, over the subset of components with zero variances, and return that solution if it is better than the above.

6. An oracle-LTAS for the nonconvex risk objective with approximate linear oracle

Suppose we have a $\delta$-approximate linear oracle for solving problem (1). We will provide an algorithm for the nonconvex problem (3) with approximation factor $\delta(1+\epsilon)$ which invokes the linear oracle a small number of times, logarithmic in the input size and polynomial in $\frac{1}{\epsilon}$.

We employ the same technique for designing and analyzing the algorithm as in Section 4 for the threshold objective function; however, again due to the different objective, the previous analysis does not carry through directly.

First, we show that if we can guess the optimal linear objective, given by the slope of the tangent to the corresponding level set at the optimal solution, then applying the approximate linear oracle returns an approximate solution with the same multiplicative approximation factor $\delta$. This statement reduces to the following geometric fact.

Lemma 16. Consider levels $0 \le \lambda_1 < \lambda_2$ and two parallel lines tangent to the corresponding level sets $\bar L_{\lambda_1}$ and $\bar L_{\lambda_2}$ at points $(m_1, s_1)$ and $(m_2, s_2)$ respectively. Further, suppose the corresponding y-intercepts of these lines are $b_1$ and $b_2$. Then $\frac{b_2}{b_1} = \frac{\lambda_2 + m_2}{\lambda_1 + m_1} \ge \frac{\lambda_2}{\lambda_1}$.

Proof. The function defining a level set $\bar L_\lambda$ has the form $y = (\lambda - x)^2$, and thus the slope of the tangent to the level set at a point $(m, s) \in \bar L_\lambda$ is given by the first derivative at the point, $-2(\lambda - m) = -2\sqrt{s}$. Therefore the equation of the tangent line is $y = -2\sqrt{s}\,x + b$, where

$b = s + 2\sqrt{s}\,m = \sqrt{s}(\sqrt{s} + 2m) = \sqrt{s}(\lambda - m + 2m) = \sqrt{s}(\lambda + m)$.

Since the two tangents from the lemma are parallel, their slopes are equal: $-2\sqrt{s_1} = -2\sqrt{s_2}$, therefore $s_1 = s_2$ and equivalently $(\lambda_1 - m_1) = (\lambda_2 - m_2)$. Therefore the y-intercepts of the two tangents satisfy

$\frac{b_2}{b_1} = \frac{\sqrt{s_2}(\lambda_2 + m_2)}{\sqrt{s_1}(\lambda_1 + m_1)} = \frac{\lambda_2 + m_2}{\lambda_1 + m_1} \ge \frac{\lambda_2}{\lambda_1}$.

The last inequality follows from the fact that $\lambda_2 > \lambda_1$ and $\lambda_1 - m_1 = \lambda_2 - m_2$ (and equality is achieved when $m_1 = \lambda_1$ and $m_2 = \lambda_2$).

Corollary 17. Suppose the optimal solution to the nonconvex problem (3) is $(m_1, s_1)$ with objective value $\lambda_1$. If we can guess the slope $-a$ of the tangent to the level set $\bar L_{\lambda_1}$ at the optimal solution, then applying the approximate linear oracle for solving problem (1) with respect to that slope will give a $\delta$-approximate solution to problem (3).

Proof. The approximate linear oracle will return a solution $(m', s')$ with value $b_2 = s' + am' \le \delta b_1$, where $b_1 = s_1 + am_1$. The objective function value of $(m', s')$ is at most $\lambda_2$, which is the value of the level set tangent to the line $y = -ax + b_2$. By Lemma 16, $\frac{\lambda_2}{\lambda_1} \le \frac{b_2}{b_1} \le \delta$, therefore the approximate solution has objective function value at most $\delta$ times the optimal value, QED.

If we cannot guess the slope at the optimal solution, we have to approximate it. The next lemma proves that if we apply the approximate linear oracle to a slope that is within a factor $(1 + \frac{\epsilon}{1+\epsilon})$ of the optimal slope, we still get a good approximate solution, with approximation factor $\delta(1+\epsilon)$.

Lemma 18. Consider the level set $\bar L_\lambda$ and points $(m^*, s^*)$ and $(m, s)$ on it, at which the tangents to $\bar L_\lambda$ have slopes $-a$ and $-a(1 + \frac{\epsilon}{1+\epsilon})$ respectively. Let the y-intercepts of the tangent line at $(m, s)$ and of the line parallel to it through $(m^*, s^*)$ be $b_1$ and $b$ respectively. Then $\frac{b}{b_1} \le 1 + \epsilon$.

Proof. Let $\xi = \frac{\epsilon}{1+\epsilon}$. As established in the proof of Lemma 16, the slope of the tangent to the level set $\bar L_\lambda$ at the point $(m^*, s^*)$ is $-a = -2\sqrt{s^*}$. Similarly the slope of the tangent at $(m, s)$ is $-a(1+\xi) = -2\sqrt{s}$. Therefore $\sqrt{s} = (1+\xi)\sqrt{s^*}$, or equivalently $(\lambda - m) = (1+\xi)(\lambda - m^*)$.

Since $b$, $b_1$ are the intercepts with the y-axis of the lines with slope $-a(1+\xi) = -2\sqrt{s}$ containing the points $(m^*, s^*)$, $(m, s)$ respectively, we have

$b_1 = s + 2\sqrt{s}\,m = \lambda^2 - m^2$,

$b = s^* + (1+\xi)2\sqrt{s^*}\,m^* = (\lambda - m^*)(\lambda + m^* + 2\xi m^*)$.

Therefore

$\frac{b}{b_1} = \frac{(\lambda - m^*)(\lambda + m^* + 2\xi m^*)}{(\lambda - m)(\lambda + m)} = \frac{1}{1+\xi}\cdot\frac{\lambda + m^* + 2\xi m^*}{\lambda + m} \le \frac{1}{1+\xi}\cdot\frac{1}{1-\xi} = \frac{1}{1-\xi^2} \le \frac{1}{1-\xi} = 1 + \epsilon$,

where the first inequality follows by Lemma 19.

Lemma 19. Following the notation of Lemma 18, $\frac{\lambda + m^* + 2\xi m^*}{\lambda + m} \le \frac{1}{1-\xi}$.

Proof. Recall from the proof of Lemma 18 that $(\lambda - m) = (1+\xi)(\lambda - m^*)$, therefore $m = \lambda - (1+\xi)(\lambda - m^*) = -\xi\lambda + (1+\xi)m^*$. Hence,

$\frac{\lambda + m^* + 2\xi m^*}{\lambda + m} = \frac{\lambda + (1+2\xi)m^*}{(1-\xi)\lambda + (1+\xi)m^*} \le \frac{1}{1-\xi}$,

since $\frac{1+2\xi}{1+\xi} \le \frac{1}{1-\xi}$ for $\xi \in [0,1)$.

A corollary of Lemma 16 and Lemma 18 is that applying the linear oracle with respect to a slope that is within a factor $(1 + \frac{\epsilon}{1+\epsilon})$ of the optimal slope yields an approximate solution with objective value within $(1+\epsilon)\delta$ of the optimal.

Lemma 20. Suppose the optimal solution to the nonconvex problem (3) is $(m^*, s^*)$ with objective value $\lambda^*$, and the slope of the tangent to the level set $\bar L_{\lambda^*}$ at it is $-a$. Then running the $\delta$-approximate oracle for solving problem (1) with respect to a slope whose absolute value is in $[a, a(1 + \frac{\epsilon}{1+\epsilon})]$ returns a solution to (3) with objective function value no greater than $(1+\epsilon)\delta\lambda^*$.

Proof. Suppose the optimal solution with respect to the linear objective specified by slope $-a(1 + \frac{\epsilon}{1+\epsilon})$ has value $b' \in [b_1, b]$, where $b_1$, $b$ are the y-intercepts of the lines with that slope tangent to $\bar L_{\lambda^*}$ and passing through $(m^*, s^*)$ respectively (see Figure 2, right). Then applying the $\delta$-approximate linear oracle to the same linear objective returns a solution with value $b_2 \le \delta b'$. Hence $\frac{b_2}{b} \le \frac{b_2}{b'} \le \delta$. On the other hand, the approximate solution returned by the linear oracle has value of our original objective function at most $\lambda_2$, where $\bar L_{\lambda_2}$ is the level set tangent to the line on which the approximate solution lies. By Lemma 16,

$\frac{\lambda_2}{\lambda^*} \le \frac{b_2}{b_1} = \frac{b_2}{b}\cdot\frac{b}{b_1} \le \delta(1+\epsilon)$,

where the last inequality follows by Lemma 18 and the above bound on $\frac{b_2}{b}$.
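Lemma 20 suggests the slope enumeration formalized in Theorem 21 below. A minimal sketch, with an assumed $\delta$-approximate linear_oracle and illustrative bounds on the slope range (zero-variance optima are handled separately, as described in the proof that follows):

    import math

    def risk_with_approx_oracle(linear_oracle, mu, tau, eps, s_min, m_max, s_max, n):
        """Slope enumeration for problem (3) with a delta-approximate linear oracle (sketch)."""
        def f(x):
            m = sum(mi * xi for mi, xi in zip(mu, x))
            s = sum(ti * xi for ti, xi in zip(tau, x))
            return m + math.sqrt(s)

        xi = eps / (1 + eps)
        a = 2 * math.sqrt(s_min)                          # smallest relevant |slope|
        a_max = 2 * (n * m_max + math.sqrt(n * s_max))    # largest relevant |slope|

        best = None
        while a <= a_max * (1 + xi):
            x = linear_oracle([a * mi + ti for mi, ti in zip(mu, tau)])
            if best is None or f(x) < f(best):
                best = x
            a *= 1 + xi
        return best        # delta*(1+eps)-approximate by Lemma 20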

Finally, we are ready to state our theorem for solving the nonconvex problem (3). The theorem says that there is an algorithm for this problem with essentially the same approximation factor as for the underlying problem (1), which makes only logarithmically many calls to the latter.

Theorem 21. Suppose we have a $\delta$-approximate oracle for solving problem (1). The nonconvex problem (3) can be approximated to a multiplicative factor of $\delta(1+\epsilon)$ by calling the above oracle a number of times that is logarithmic in the input size and polynomial in $\frac{1}{\epsilon}$.

Proof. We use the same type of algorithm as in Theorem 10: apply the available approximate linear oracle on a geometric progression of weight vectors (slopes) determined by the lemmas above. In particular, apply it to slopes $U, (1+\xi)U, \ldots, (1+\xi)^i U = L$, where $\xi = \frac{\epsilon}{1+\epsilon}$, $L$ is a lower bound for the optimal slope and $U$ is an upper bound for it. For each approximate feasible solution obtained, compute its objective function value and return the solution with minimum objective function value. By Lemma 20, the value of the returned solution is within $\delta(1+\epsilon)$ of the optimal.

Note that it is possible for the optimal slope to be 0: this happens when the optimal solution satisfies $m^* = \lambda^*$ and $s^* = 0$. We have to handle this case differently: run the linear oracle just over the subset of components with zero variance values, to find the approximate solution with smallest mean. Return this solution if its value is better than the best solution among the above.

It remains to bound the values $L$ and $U$. We established earlier that the optimal slope is given by $-2\sqrt{s^*}$, where $s^*$ is the variance of the optimal solution. Among the solutions with nonzero variance, the variance of a feasible solution is at least $s_{\min}$, the smallest possible nonzero variance of a single element, and at most $(\lambda_{\max})^2 \le (n m_{\max} + \sqrt{n s_{\max}})^2$, where $m_{\max}$ is the largest possible mean of a single element and $s_{\max}$ is the largest possible variance of a single element (assuming that a feasible solution uses each element of the ground set at most once). Thus, set $U = -2\sqrt{s_{\min}}$ and $L = -2(n m_{\max} + \sqrt{n s_{\max}})$.

7. Conclusion

We have presented efficient approximation schemes for a broad class of stochastic problems that incorporate risk. Our algorithms are independent of the fixed feasible set and use solutions for the underlying deterministic problems as oracles for solving the stochastic counterparts. As such, they apply to very general combinatorial and discrete, as well as continuous, settings. From a practical point of view, it is of interest to consider correlations between the components of the stochastic weight vector. We remark that in graph problems our results immediately extend to some realistic partial correlation models. We leave a study of correlations for future work.


More information

Convergence of reinforcement learning with general function approximators

Convergence of reinforcement learning with general function approximators Convergene of reinforement learning with general funtion approximators assilis A. Papavassiliou and Stuart Russell Computer Siene Division, U. of California, Berkeley, CA 94720-1776 fvassilis,russellg@s.berkeley.edu

More information

LECTURE NOTES FOR , FALL 2004

LECTURE NOTES FOR , FALL 2004 LECTURE NOTES FOR 18.155, FALL 2004 83 12. Cone support and wavefront set In disussing the singular support of a tempered distibution above, notie that singsupp(u) = only implies that u C (R n ), not as

More information

Coding for Random Projections and Approximate Near Neighbor Search

Coding for Random Projections and Approximate Near Neighbor Search Coding for Random Projetions and Approximate Near Neighbor Searh Ping Li Department of Statistis & Biostatistis Department of Computer Siene Rutgers University Pisataay, NJ 8854, USA pingli@stat.rutgers.edu

More information

Modeling of discrete/continuous optimization problems: characterization and formulation of disjunctions and their relaxations

Modeling of discrete/continuous optimization problems: characterization and formulation of disjunctions and their relaxations Computers and Chemial Engineering (00) 4/448 www.elsevier.om/loate/omphemeng Modeling of disrete/ontinuous optimization problems: haraterization and formulation of disjuntions and their relaxations Aldo

More information

Packing Plane Spanning Trees into a Point Set

Packing Plane Spanning Trees into a Point Set Paking Plane Spanning Trees into a Point Set Ahmad Biniaz Alfredo Garía Abstrat Let P be a set of n points in the plane in general position. We show that at least n/3 plane spanning trees an be paked into

More information

LOGISTIC REGRESSION IN DEPRESSION CLASSIFICATION

LOGISTIC REGRESSION IN DEPRESSION CLASSIFICATION LOGISIC REGRESSIO I DEPRESSIO CLASSIFICAIO J. Kual,. V. ran, M. Bareš KSE, FJFI, CVU v Praze PCP, CS, 3LF UK v Praze Abstrat Well nown logisti regression and the other binary response models an be used

More information

Simplification of Network Dynamics in Large Systems

Simplification of Network Dynamics in Large Systems Simplifiation of Network Dynamis in Large Systems Xiaojun Lin and Ness B. Shroff Shool of Eletrial and Computer Engineering Purdue University, West Lafayette, IN 47906, U.S.A. Email: {linx, shroff}@en.purdue.edu

More information

7 Max-Flow Problems. Business Computing and Operations Research 608

7 Max-Flow Problems. Business Computing and Operations Research 608 7 Max-Flow Problems Business Computing and Operations Researh 68 7. Max-Flow Problems In what follows, we onsider a somewhat modified problem onstellation Instead of osts of transmission, vetor now indiates

More information

Word of Mass: The Relationship between Mass Media and Word-of-Mouth

Word of Mass: The Relationship between Mass Media and Word-of-Mouth Word of Mass: The Relationship between Mass Media and Word-of-Mouth Roman Chuhay Preliminary version Marh 6, 015 Abstrat This paper studies the optimal priing and advertising strategies of a firm in the

More information

Chapter 8 Hypothesis Testing

Chapter 8 Hypothesis Testing Leture 5 for BST 63: Statistial Theory II Kui Zhang, Spring Chapter 8 Hypothesis Testing Setion 8 Introdution Definition 8 A hypothesis is a statement about a population parameter Definition 8 The two

More information

Nonreversibility of Multiple Unicast Networks

Nonreversibility of Multiple Unicast Networks Nonreversibility of Multiple Uniast Networks Randall Dougherty and Kenneth Zeger September 27, 2005 Abstrat We prove that for any finite direted ayli network, there exists a orresponding multiple uniast

More information

Viewing the Rings of a Tree: Minimum Distortion Embeddings into Trees

Viewing the Rings of a Tree: Minimum Distortion Embeddings into Trees Viewing the Rings of a Tree: Minimum Distortion Embeddings into Trees Amir Nayyeri Benjamin Raihel Abstrat We desribe a 1+ε) approximation algorithm for finding the minimum distortion embedding of an n-point

More information

Average Rate Speed Scaling

Average Rate Speed Scaling Average Rate Speed Saling Nikhil Bansal David P. Bunde Ho-Leung Chan Kirk Pruhs May 2, 2008 Abstrat Speed saling is a power management tehnique that involves dynamially hanging the speed of a proessor.

More information

arxiv:cond-mat/ v1 [cond-mat.stat-mech] 16 Aug 2004

arxiv:cond-mat/ v1 [cond-mat.stat-mech] 16 Aug 2004 Computational omplexity and fundamental limitations to fermioni quantum Monte Carlo simulations arxiv:ond-mat/0408370v1 [ond-mat.stat-meh] 16 Aug 2004 Matthias Troyer, 1 Uwe-Jens Wiese 2 1 Theoretishe

More information

Relative Maxima and Minima sections 4.3

Relative Maxima and Minima sections 4.3 Relative Maxima and Minima setions 4.3 Definition. By a ritial point of a funtion f we mean a point x 0 in the domain at whih either the derivative is zero or it does not exists. So, geometrially, one

More information

Tight bounds for selfish and greedy load balancing

Tight bounds for selfish and greedy load balancing Tight bounds for selfish and greedy load balaning Ioannis Caragiannis Mihele Flammini Christos Kaklamanis Panagiotis Kanellopoulos Lua Mosardelli Deember, 009 Abstrat We study the load balaning problem

More information

the following action R of T on T n+1 : for each θ T, R θ : T n+1 T n+1 is defined by stated, we assume that all the curves in this paper are defined

the following action R of T on T n+1 : for each θ T, R θ : T n+1 T n+1 is defined by stated, we assume that all the curves in this paper are defined How should a snake turn on ie: A ase study of the asymptoti isoholonomi problem Jianghai Hu, Slobodan N. Simić, and Shankar Sastry Department of Eletrial Engineering and Computer Sienes University of California

More information

Estimating the probability law of the codelength as a function of the approximation error in image compression

Estimating the probability law of the codelength as a function of the approximation error in image compression Estimating the probability law of the odelength as a funtion of the approximation error in image ompression François Malgouyres Marh 7, 2007 Abstrat After some reolletions on ompression of images using

More information

A Spatiotemporal Approach to Passive Sound Source Localization

A Spatiotemporal Approach to Passive Sound Source Localization A Spatiotemporal Approah Passive Sound Soure Loalization Pasi Pertilä, Mikko Parviainen, Teemu Korhonen and Ari Visa Institute of Signal Proessing Tampere University of Tehnology, P.O.Box 553, FIN-330,

More information

Robust Recovery of Signals From a Structured Union of Subspaces

Robust Recovery of Signals From a Structured Union of Subspaces Robust Reovery of Signals From a Strutured Union of Subspaes 1 Yonina C. Eldar, Senior Member, IEEE and Moshe Mishali, Student Member, IEEE arxiv:87.4581v2 [nlin.cg] 3 Mar 29 Abstrat Traditional sampling

More information

Optimization of Statistical Decisions for Age Replacement Problems via a New Pivotal Quantity Averaging Approach

Optimization of Statistical Decisions for Age Replacement Problems via a New Pivotal Quantity Averaging Approach Amerian Journal of heoretial and Applied tatistis 6; 5(-): -8 Published online January 7, 6 (http://www.sienepublishinggroup.om/j/ajtas) doi:.648/j.ajtas.s.65.4 IN: 36-8999 (Print); IN: 36-96 (Online)

More information

Where as discussed previously we interpret solutions to this partial differential equation in the weak sense: b

Where as discussed previously we interpret solutions to this partial differential equation in the weak sense: b Consider the pure initial value problem for a homogeneous system of onservation laws with no soure terms in one spae dimension: Where as disussed previously we interpret solutions to this partial differential

More information

Bilinear Formulated Multiple Kernel Learning for Multi-class Classification Problem

Bilinear Formulated Multiple Kernel Learning for Multi-class Classification Problem Bilinear Formulated Multiple Kernel Learning for Multi-lass Classifiation Problem Takumi Kobayashi and Nobuyuki Otsu National Institute of Advaned Industrial Siene and Tehnology, -- Umezono, Tsukuba, Japan

More information

THE METHOD OF SECTIONING WITH APPLICATION TO SIMULATION, by Danie 1 Brent ~~uffman'i

THE METHOD OF SECTIONING WITH APPLICATION TO SIMULATION, by Danie 1 Brent ~~uffman'i THE METHOD OF SECTIONING '\ WITH APPLICATION TO SIMULATION, I by Danie 1 Brent ~~uffman'i Thesis submitted to the Graduate Faulty of the Virginia Polytehni Institute and State University in partial fulfillment

More information

SINCE Zadeh s compositional rule of fuzzy inference

SINCE Zadeh s compositional rule of fuzzy inference IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 14, NO. 6, DECEMBER 2006 709 Error Estimation of Perturbations Under CRI Guosheng Cheng Yuxi Fu Abstrat The analysis of stability robustness of fuzzy reasoning

More information

arxiv:math/ v4 [math.ca] 29 Jul 2006

arxiv:math/ v4 [math.ca] 29 Jul 2006 arxiv:math/0109v4 [math.ca] 9 Jul 006 Contiguous relations of hypergeometri series Raimundas Vidūnas University of Amsterdam Abstrat The 15 Gauss ontiguous relations for F 1 hypergeometri series imply

More information

Evaluation of effect of blade internal modes on sensitivity of Advanced LIGO

Evaluation of effect of blade internal modes on sensitivity of Advanced LIGO Evaluation of effet of blade internal modes on sensitivity of Advaned LIGO T0074-00-R Norna A Robertson 5 th Otober 00. Introdution The urrent model used to estimate the isolation ahieved by the quadruple

More information

Control Theory association of mathematics and engineering

Control Theory association of mathematics and engineering Control Theory assoiation of mathematis and engineering Wojieh Mitkowski Krzysztof Oprzedkiewiz Department of Automatis AGH Univ. of Siene & Tehnology, Craow, Poland, Abstrat In this paper a methodology

More information

Assessing the Performance of a BCI: A Task-Oriented Approach

Assessing the Performance of a BCI: A Task-Oriented Approach Assessing the Performane of a BCI: A Task-Oriented Approah B. Dal Seno, L. Mainardi 2, M. Matteui Department of Eletronis and Information, IIT-Unit, Politenio di Milano, Italy 2 Department of Bioengineering,

More information

The population dynamics of websites

The population dynamics of websites The population dynamis of websites [Online Report] Kartik Ahuja Eletrial Engineering UCLA ahujak@ula.edu Simpson Zhang Eonomis Department UCLA simpsonzhang@ula.edu Mihaela van der Shaar Eletrial Engineering

More information

The Laws of Acceleration

The Laws of Acceleration The Laws of Aeleration The Relationships between Time, Veloity, and Rate of Aeleration Copyright 2001 Joseph A. Rybzyk Abstrat Presented is a theory in fundamental theoretial physis that establishes the

More information

Grasp Planning: How to Choose a Suitable Task Wrench Space

Grasp Planning: How to Choose a Suitable Task Wrench Space Grasp Planning: How to Choose a Suitable Task Wrenh Spae Ch. Borst, M. Fisher and G. Hirzinger German Aerospae Center - DLR Institute for Robotis and Mehatronis 8223 Wessling, Germany Email: [Christoph.Borst,

More information

SOA/CAS MAY 2003 COURSE 1 EXAM SOLUTIONS

SOA/CAS MAY 2003 COURSE 1 EXAM SOLUTIONS SOA/CAS MAY 2003 COURSE 1 EXAM SOLUTIONS Prepared by S. Broverman e-mail 2brove@rogers.om website http://members.rogers.om/2brove 1. We identify the following events:. - wathed gymnastis, ) - wathed baseball,

More information

Case I: 2 users In case of 2 users, the probability of error for user 1 was earlier derived to be 2 A1

Case I: 2 users In case of 2 users, the probability of error for user 1 was earlier derived to be 2 A1 MUTLIUSER DETECTION (Letures 9 and 0) 6:33:546 Wireless Communiations Tehnologies Instrutor: Dr. Narayan Mandayam Summary By Shweta Shrivastava (shwetash@winlab.rutgers.edu) bstrat This artile ontinues

More information

Math 225B: Differential Geometry, Homework 6

Math 225B: Differential Geometry, Homework 6 ath 225B: Differential Geometry, Homework 6 Ian Coley February 13, 214 Problem 8.7. Let ω be a 1-form on a manifol. Suppose that ω = for every lose urve in. Show that ω is exat. We laim that this onition

More information

When p = 1, the solution is indeterminate, but we get the correct answer in the limit.

When p = 1, the solution is indeterminate, but we get the correct answer in the limit. The Mathematia Journal Gambler s Ruin and First Passage Time Jan Vrbik We investigate the lassial problem of a gambler repeatedly betting $1 on the flip of a potentially biased oin until he either loses

More information

Simplified Buckling Analysis of Skeletal Structures

Simplified Buckling Analysis of Skeletal Structures Simplified Bukling Analysis of Skeletal Strutures B.A. Izzuddin 1 ABSRAC A simplified approah is proposed for bukling analysis of skeletal strutures, whih employs a rotational spring analogy for the formulation

More information

Likelihood-confidence intervals for quantiles in Extreme Value Distributions

Likelihood-confidence intervals for quantiles in Extreme Value Distributions Likelihood-onfidene intervals for quantiles in Extreme Value Distributions A. Bolívar, E. Díaz-Franés, J. Ortega, and E. Vilhis. Centro de Investigaión en Matemátias; A.P. 42, Guanajuato, Gto. 36; Méxio

More information

arxiv: v2 [math.pr] 9 Dec 2016

arxiv: v2 [math.pr] 9 Dec 2016 Omnithermal Perfet Simulation for Multi-server Queues Stephen B. Connor 3th Deember 206 arxiv:60.0602v2 [math.pr] 9 De 206 Abstrat A number of perfet simulation algorithms for multi-server First Come First

More information

A Unified View on Multi-class Support Vector Classification Supplement

A Unified View on Multi-class Support Vector Classification Supplement Journal of Mahine Learning Researh??) Submitted 7/15; Published?/?? A Unified View on Multi-lass Support Vetor Classifiation Supplement Ürün Doğan Mirosoft Researh Tobias Glasmahers Institut für Neuroinformatik

More information

On Component Order Edge Reliability and the Existence of Uniformly Most Reliable Unicycles

On Component Order Edge Reliability and the Existence of Uniformly Most Reliable Unicycles Daniel Gross, Lakshmi Iswara, L. William Kazmierzak, Kristi Luttrell, John T. Saoman, Charles Suffel On Component Order Edge Reliability and the Existene of Uniformly Most Reliable Uniyles DANIEL GROSS

More information

Singular Event Detection

Singular Event Detection Singular Event Detetion Rafael S. Garía Eletrial Engineering University of Puerto Rio at Mayagüez Rafael.Garia@ee.uprm.edu Faulty Mentor: S. Shankar Sastry Researh Supervisor: Jonathan Sprinkle Graduate

More information

Probabilistic Graphical Models

Probabilistic Graphical Models Probabilisti Graphial Models David Sontag New York University Leture 12, April 19, 2012 Aknowledgement: Partially based on slides by Eri Xing at CMU and Andrew MCallum at UMass Amherst David Sontag (NYU)

More information

Design and Development of Three Stages Mixed Sampling Plans for Variable Attribute Variable Quality Characteristics

Design and Development of Three Stages Mixed Sampling Plans for Variable Attribute Variable Quality Characteristics International Journal of Statistis and Systems ISSN 0973-2675 Volume 12, Number 4 (2017), pp. 763-772 Researh India Publiations http://www.ripubliation.om Design and Development of Three Stages Mixed Sampling

More information

Battery Sizing for Grid Connected PV Systems with Fixed Minimum Charging/Discharging Time

Battery Sizing for Grid Connected PV Systems with Fixed Minimum Charging/Discharging Time Battery Sizing for Grid Conneted PV Systems with Fixed Minimum Charging/Disharging Time Yu Ru, Jan Kleissl, and Sonia Martinez Abstrat In this paper, we study a battery sizing problem for grid-onneted

More information

Time Domain Method of Moments

Time Domain Method of Moments Time Domain Method of Moments Massahusetts Institute of Tehnology 6.635 leture notes 1 Introdution The Method of Moments (MoM) introdued in the previous leture is widely used for solving integral equations

More information

Lecture 3 - Lorentz Transformations

Lecture 3 - Lorentz Transformations Leture - Lorentz Transformations A Puzzle... Example A ruler is positioned perpendiular to a wall. A stik of length L flies by at speed v. It travels in front of the ruler, so that it obsures part of the

More information

Ordered fields and the ultrafilter theorem

Ordered fields and the ultrafilter theorem F U N D A M E N T A MATHEMATICAE 59 (999) Ordered fields and the ultrafilter theorem by R. B e r r (Dortmund), F. D e l o n (Paris) and J. S h m i d (Dortmund) Abstrat. We prove that on the basis of ZF

More information

Array Design for Superresolution Direction-Finding Algorithms

Array Design for Superresolution Direction-Finding Algorithms Array Design for Superresolution Diretion-Finding Algorithms Naushad Hussein Dowlut BEng, ACGI, AMIEE Athanassios Manikas PhD, DIC, AMIEE, MIEEE Department of Eletrial Eletroni Engineering Imperial College

More information

UPPER-TRUNCATED POWER LAW DISTRIBUTIONS

UPPER-TRUNCATED POWER LAW DISTRIBUTIONS Fratals, Vol. 9, No. (00) 09 World Sientifi Publishing Company UPPER-TRUNCATED POWER LAW DISTRIBUTIONS STEPHEN M. BURROUGHS and SARAH F. TEBBENS College of Marine Siene, University of South Florida, St.

More information

Reliability Guaranteed Energy-Aware Frame-Based Task Set Execution Strategy for Hard Real-Time Systems

Reliability Guaranteed Energy-Aware Frame-Based Task Set Execution Strategy for Hard Real-Time Systems Reliability Guaranteed Energy-Aware Frame-Based ask Set Exeution Strategy for Hard Real-ime Systems Zheng Li a, Li Wang a, Shuhui Li a, Shangping Ren a, Gang Quan b a Illinois Institute of ehnology, Chiago,

More information

Process engineers are often faced with the task of

Process engineers are often faced with the task of Fluids and Solids Handling Eliminate Iteration from Flow Problems John D. Barry Middough, In. This artile introdues a novel approah to solving flow and pipe-sizing problems based on two new dimensionless

More information

Wave Propagation through Random Media

Wave Propagation through Random Media Chapter 3. Wave Propagation through Random Media 3. Charateristis of Wave Behavior Sound propagation through random media is the entral part of this investigation. This hapter presents a frame of referene

More information

UNCERTAINTY RELATIONS AS A CONSEQUENCE OF THE LORENTZ TRANSFORMATIONS. V. N. Matveev and O. V. Matvejev

UNCERTAINTY RELATIONS AS A CONSEQUENCE OF THE LORENTZ TRANSFORMATIONS. V. N. Matveev and O. V. Matvejev UNCERTAINTY RELATIONS AS A CONSEQUENCE OF THE LORENTZ TRANSFORMATIONS V. N. Matveev and O. V. Matvejev Joint-Stok Company Sinerta Savanoriu pr., 159, Vilnius, LT-315, Lithuania E-mail: matwad@mail.ru Abstrat

More information

The Unified Geometrical Theory of Fields and Particles

The Unified Geometrical Theory of Fields and Particles Applied Mathematis, 014, 5, 347-351 Published Online February 014 (http://www.sirp.org/journal/am) http://dx.doi.org/10.436/am.014.53036 The Unified Geometrial Theory of Fields and Partiles Amagh Nduka

More information

Development of Fuzzy Extreme Value Theory. Populations

Development of Fuzzy Extreme Value Theory. Populations Applied Mathematial Sienes, Vol. 6, 0, no. 7, 58 5834 Development of Fuzzy Extreme Value Theory Control Charts Using α -uts for Sewed Populations Rungsarit Intaramo Department of Mathematis, Faulty of

More information

Capacity Pooling and Cost Sharing among Independent Firms in the Presence of Congestion

Capacity Pooling and Cost Sharing among Independent Firms in the Presence of Congestion Capaity Pooling and Cost Sharing among Independent Firms in the Presene of Congestion Yimin Yu Saif Benjaafar Graduate Program in Industrial and Systems Engineering Department of Mehanial Engineering University

More information

Calibration of Piping Assessment Models in the Netherlands

Calibration of Piping Assessment Models in the Netherlands ISGSR 2011 - Vogt, Shuppener, Straub & Bräu (eds) - 2011 Bundesanstalt für Wasserbau ISBN 978-3-939230-01-4 Calibration of Piping Assessment Models in the Netherlands J. Lopez de la Cruz & E.O.F. Calle

More information

Discrete Bessel functions and partial difference equations

Discrete Bessel functions and partial difference equations Disrete Bessel funtions and partial differene equations Antonín Slavík Charles University, Faulty of Mathematis and Physis, Sokolovská 83, 186 75 Praha 8, Czeh Republi E-mail: slavik@karlin.mff.uni.z Abstrat

More information

F = F x x + F y. y + F z

F = F x x + F y. y + F z ECTION 6: etor Calulus MATH20411 You met vetors in the first year. etor alulus is essentially alulus on vetors. We will need to differentiate vetors and perform integrals involving vetors. In partiular,

More information

Aharonov-Bohm effect. Dan Solomon.

Aharonov-Bohm effect. Dan Solomon. Aharonov-Bohm effet. Dan Solomon. In the figure the magneti field is onfined to a solenoid of radius r 0 and is direted in the z- diretion, out of the paper. The solenoid is surrounded by a barrier that

More information

Quasi-Monte Carlo Algorithms for unbounded, weighted integration problems

Quasi-Monte Carlo Algorithms for unbounded, weighted integration problems Quasi-Monte Carlo Algorithms for unbounded, weighted integration problems Jürgen Hartinger Reinhold F. Kainhofer Robert F. Tihy Department of Mathematis, Graz University of Tehnology, Steyrergasse 30,

More information

Rigorous prediction of quadratic hyperchaotic attractors of the plane

Rigorous prediction of quadratic hyperchaotic attractors of the plane Rigorous predition of quadrati hyperhaoti attrators of the plane Zeraoulia Elhadj 1, J. C. Sprott 2 1 Department of Mathematis, University of Tébéssa, 12000), Algeria. E-mail: zeraoulia@mail.univ-tebessa.dz

More information

2. The Energy Principle in Open Channel Flows

2. The Energy Principle in Open Channel Flows . The Energy Priniple in Open Channel Flows. Basi Energy Equation In the one-dimensional analysis of steady open-hannel flow, the energy equation in the form of Bernoulli equation is used. Aording to this

More information

LADAR-Based Vehicle Tracking and Trajectory Estimation for Urban Driving

LADAR-Based Vehicle Tracking and Trajectory Estimation for Urban Driving LADAR-Based Vehile Traking and Trajetory Estimation for Urban Driving Daniel Morris, Paul Haley, William Zahar, Steve MLean General Dynamis Roboti Systems, Tel: 410-876-9200, Fax: 410-876-9470 www.gdrs.om

More information

Weighted K-Nearest Neighbor Revisited

Weighted K-Nearest Neighbor Revisited Weighted -Nearest Neighbor Revisited M. Biego University of Verona Verona, Italy Email: manuele.biego@univr.it M. Loog Delft University of Tehnology Delft, The Netherlands Email: m.loog@tudelft.nl Abstrat

More information

A filled function method for constrained global optimization

A filled function method for constrained global optimization DOI 0.007/s0898-007-95- ORIGINAL PAPER A filled funtion method for onstrained global optimization Z. Y. Wu F. S. Bai H. W. J. Lee Y. J. Yang Reeived: 0 Marh 005 / Aepted: 5 February 007 Springer Siene+Business

More information

The transition between quasi-static and fully dynamic for interfaces

The transition between quasi-static and fully dynamic for interfaces Physia D 198 (24) 136 147 The transition between quasi-stati and fully dynami for interfaes G. Caginalp, H. Merdan Department of Mathematis, University of Pittsburgh, Pittsburgh, PA 1526, USA Reeived 6

More information

A new initial search direction for nonlinear conjugate gradient method

A new initial search direction for nonlinear conjugate gradient method International Journal of Mathematis Researh. ISSN 0976-5840 Volume 6, Number 2 (2014), pp. 183 190 International Researh Publiation House http://www.irphouse.om A new initial searh diretion for nonlinear

More information

QCLAS Sensor for Purity Monitoring in Medical Gas Supply Lines

QCLAS Sensor for Purity Monitoring in Medical Gas Supply Lines DOI.56/sensoren6/P3. QLAS Sensor for Purity Monitoring in Medial Gas Supply Lines Henrik Zimmermann, Mathias Wiese, Alessandro Ragnoni neoplas ontrol GmbH, Walther-Rathenau-Str. 49a, 7489 Greifswald, Germany

More information

9 Geophysics and Radio-Astronomy: VLBI VeryLongBaseInterferometry

9 Geophysics and Radio-Astronomy: VLBI VeryLongBaseInterferometry 9 Geophysis and Radio-Astronomy: VLBI VeryLongBaseInterferometry VLBI is an interferometry tehnique used in radio astronomy, in whih two or more signals, oming from the same astronomial objet, are reeived

More information

HILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES

HILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES HILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES L ERBE, A PETERSON AND S H SAKER Abstrat In this paper, we onsider the pair of seond-order dynami equations rt)x ) ) + pt)x

More information

On the Bit Error Probability of Noisy Channel Networks With Intermediate Node Encoding I. INTRODUCTION

On the Bit Error Probability of Noisy Channel Networks With Intermediate Node Encoding I. INTRODUCTION 5188 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 11, NOVEMBER 2008 [8] A. P. Dempster, N. M. Laird, and D. B. Rubin, Maximum likelihood estimation from inomplete data via the EM algorithm, J.

More information

15.12 Applications of Suffix Trees

15.12 Applications of Suffix Trees 248 Algorithms in Bioinformatis II, SoSe 07, ZBIT, D. Huson, May 14, 2007 15.12 Appliations of Suffix Trees 1. Searhing for exat patterns 2. Minimal unique substrings 3. Maximum unique mathes 4. Maximum

More information

Metric of Universe The Causes of Red Shift.

Metric of Universe The Causes of Red Shift. Metri of Universe The Causes of Red Shift. ELKIN IGOR. ielkin@yande.ru Annotation Poinare and Einstein supposed that it is pratially impossible to determine one-way speed of light, that s why speed of

More information

On the Complexity of the Weighted Fused Lasso

On the Complexity of the Weighted Fused Lasso ON THE COMPLEXITY OF THE WEIGHTED FUSED LASSO On the Compleity of the Weighted Fused Lasso José Bento jose.bento@b.edu Ralph Furmaniak rf@am.org Surjyendu Ray rays@b.edu Abstrat The solution path of the

More information