Probabilistic Optimal Estimation and Filtering
1 Probabilistic Optimal Estimation and Filtering: Least Squares and Randomized Algorithms. Fabrizio Dabbene (CNR-IEIIT, Politecnico di Torino), Mario Sznaier (Northeastern University, Boston), Roberto Tempo (CNR-IEIIT, Politecnico di Torino). Workshop on Uncertain Dynamical Systems (WUDS), Udine, Italy, August 2011.
2 Motivation: Identification for Robust Control. The classical approach to system identification is based on statistical assumptions on the measurement error, and provides estimates of a stochastic nature. Worst-case identification, on the other hand, assumes only the knowledge of deterministic error bounds, and provides guaranteed estimates, thus being in principle better suited for robust control design. However, a main limitation of such deterministic bounds is that they often turn out to be overly conservative, leading to estimates of limited use.
3 Motivation: Identification for Robust Control. A rapprochement. We propose a rapprochement of the two paradigms, stochastic and worst-case, by introducing probabilistically optimal estimates. The main idea is to "exclude" sets of measure at most ǫ (accuracy) from the set of deterministic estimates. We decrease the so-called worst-case radius of information at the expense of a probabilistic "risk". We compute a trade-off curve showing how the radius of information decreases as a function of the accuracy.
4 The IBC setting for systems ID and filtering. Estimation problem: given an unknown element x, find an estimate of the function S(x), based on a priori information K and on measurements of the function I(x) corrupted by additive noise q. Ingredients (sets): a problem element set X, with prior information K ⊆ X; a measurement space Y; a solution space Z. Ingredients (operators): an information operator I : X → Y; additive uncertainty/noise, y = Ix + q; a solution operator S : X → Z.
6 IBC Setting for System ID: Estimation algorithm. An algorithm A is a mapping (in general nonlinear) from Y into Z, i.e. A : Y → Z. An algorithm provides an approximation A(y) of Sx using the available information y ∈ Y about x ∈ K. The outcome of such an algorithm is called an estimator, and the notation ẑ = A(y) is used.
7 The IBC setting for systems ID and filtering. Illustration of the considered framework.
8 The setup of this talk. The problem element set X is R^n. The information operator I : X → Y is linear. The uncertainty q ∈ Q ⊂ R^m, where Q is a bounding set. The solution set Z is R^s and the solution operator S : X → Z is linear. Assumption (sufficient information): the information operator I is a one-to-one mapping, i.e. m ≥ n and rank I = n. For simplicity, we assume that the three sets X, Y, Z are equipped with the same ℓ_p norm.
9 Example: system parameter identification. A parameter identification problem whose objective is to identify a linear system from noisy measurements. The problem elements are the input-output pairs ξ = ξ(t, x) of a dynamic system, parametrized by an unknown parameter vector x ∈ K ⊆ X with given basis functions ϕ_i(t): ξ(t, x) = Σ_{i=1}^n x_i ϕ_i(t) = Φ^T(t) x. Suppose that m noisy measurements of ξ(t, x) are available for t_1 < t_2 < ... < t_m: y = Ix + q = [Φ(t_1) ··· Φ(t_m)]^T x + q. (1) The solution operator is the identity, Sx = x, and Z ≡ X. In this context, one usually assumes unknown-but-bounded errors |q_i| ≤ R, i = 1, ..., m, that is, Q = B_∞(R).
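As an illustrative sketch (with invented basis functions, dimensions, and parameter values; nothing here comes from the slides' data), the ingredients of this example can be assembled as follows: the information operator I stacks the rows Φ^T(t_k), and the measurements follow y = Ix + q with ‖q‖_∞ ≤ R:

```python
import numpy as np

# Illustrative setup (all values invented): n basis functions phi_i(t),
# information operator I stacking Phi^T(t_k), and y = I x + q with |q_k| <= R.
rng = np.random.default_rng(0)

n, m, R = 3, 20, 0.1                       # parameters, measurements, noise bound
t = np.linspace(0.0, 1.0, m)               # sampling instants t_1 < ... < t_m
basis = [lambda s: np.ones_like(s),        # phi_1(t) = 1
         lambda s: s,                      # phi_2(t) = t
         lambda s: s**2]                   # phi_3(t) = t^2

I = np.column_stack([phi(t) for phi in basis])   # m x n information operator
x_true = np.array([1.0, -0.5, 2.0])              # unknown parameter vector
q = rng.uniform(-R, R, size=m)                   # unknown-but-bounded noise, Q = B_inf(R)
y = I @ x_true + q                               # noisy measurements, equation (1)
```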
10 The consistency set I^{-1}(y). A key role is played by the following set, which represents the set of all problem elements x ∈ K ⊆ X compatible with (i.e. not invalidated by) the information Ix, the uncertainty q and the bounding set Q. Consistency set: for given y ∈ Y, define I^{-1}(y) ≐ {x ∈ K : there exists q ∈ Q such that y = Ix + q}. (2) Under the sufficient information assumption, the set I^{-1}(y) is bounded. For instance, in the previous example we have I^{-1}(y) = {x ∈ K : ‖y − [Φ(t_1) ··· Φ(t_m)]^T x‖_∞ ≤ R}. In system identification, I^{-1}(y) is sometimes referred to as the parameter feasible set.
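Membership in the consistency set is easy to test when Q = B_∞(R): x is not invalidated by the data exactly when every component of the residual y − Ix lies within the bound. A minimal sketch with illustrative data:

```python
import numpy as np

def in_consistency_set(x, I, y, R):
    """Check x in I^{-1}(y) for Q = B_inf(R): every residual within the bound."""
    return bool(np.max(np.abs(y - I @ x)) <= R)

# Tiny illustration with a 1-parameter, 2-measurement problem (invented data).
I = np.array([[1.0], [2.0]])
y = np.array([1.05, 1.95])
assert in_consistency_set(np.array([1.0]), I, y, R=0.1)      # consistent with the data
assert not in_consistency_set(np.array([2.0]), I, y, R=0.1)  # invalidated by the data
```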
11 The IBC setting for systems ID and filtering. Our setup for this talk: illustration of the considered framework.
12 The Worst-Case Setting
13 Worst-Case Setting: errors and optimal algorithms. Given perturbed information y ∈ Y, the worst-case error is defined as r_wc(A, y) ≐ max_{x ∈ I^{-1}(y)} ‖Sx − A(y)‖_p. This error is based on the available information y ∈ Y about x ∈ K, and it measures the approximation error between Sx and A(y). An algorithm A_o^wc is called worst-case optimal if it minimizes the error r_wc(A, y) for any y ∈ Y: r_o^wc(y) ≐ r_wc(A_o^wc, y) ≐ inf_A r_wc(A, y). A worst-case optimal estimator is given by ẑ_o^wc = A_o^wc(y). The minimal error r_o^wc(y) is called the (local) worst-case radius of information.
14 Chebyshev center and central algorithms. The Chebyshev center z_c(H) of a set H ⊆ Z and its radius r_c(H) are defined by inf_{z ∈ Z} max_{h ∈ H} ‖h − z‖ ≐ max_{h ∈ H} ‖h − z_c(H)‖ ≐ r_c(H). Optimal algorithms map the data y into the Chebyshev center of the set SI^{-1}(y), i.e. z_c(SI^{-1}(y)) = ẑ_o^wc. For this reason they are also called central algorithms. For a given set H, the ℓ_p-Chebyshev center z_c(H) and radius r_c(H) are the center and radius of the smallest ℓ_p ball enclosing H. In general z_c(H) may not be unique and does not necessarily belong to H. If H is centrally symmetric, then the origin is a Chebyshev center of H.
16 Related literature. The computation of the worst-case radius of information r_o^wc(y) and the derivation of optimal algorithms A_o^wc have been the focal point of a vast literature in the system identification setting. If the norm is ℓ_2 and K ≡ X, then the optimal linear estimator is the least squares algorithm A_ls(y) = Sx_ls, with ‖I x_ls − y‖_2 ≐ min_{x ∈ X} ‖Ix − y‖_2. In this case, A_ls(y) is the Chebyshev center of the ellipsoid I^{-1}(y). For ℓ_∞ norms, an optimal algorithm and the radius of information can be computed by solving 2n linear programs, corresponding to computing the center of the tightest hyperrectangle containing the polytope SI^{-1}(y). Spline algorithms have also been introduced, defined as A_sp(y) = S x_sp(y), where ‖I x_sp(y) − y‖ ≐ min_{x ∈ X} ‖Ix − y‖.
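For concreteness, two of the algorithms mentioned above can be sketched on synthetic data (all values invented): least squares via `lstsq`, and an ℓ_∞ fit posed as a linear program in the variables (x, t) that minimizes the peak residual t:

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic data (invented): y = I x_true + q with |q_k| <= R.
rng = np.random.default_rng(1)
n, m, R = 2, 30, 0.2
I = rng.standard_normal((m, n))
x_true = np.array([0.5, -1.0])
y = I @ x_true + rng.uniform(-R, R, m)

# Least squares estimate A_ls(y): minimizes ||I x - y||_2.
x_ls, *_ = np.linalg.lstsq(I, y, rcond=None)

# l_inf fit as an LP in (x, t): minimize t subject to |(I x - y)_j| <= t.
c = np.zeros(n + 1); c[-1] = 1.0
A_ub = np.block([[I, -np.ones((m, 1))],      #  I x - t <= y
                 [-I, -np.ones((m, 1))]])    # -I x - t <= -y
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n + [(0, None)])
x_sp, t_inf = res.x[:n], res.x[-1]           # t_inf = min_x ||I x - y||_inf <= R
```

Since x_true itself achieves an ℓ_∞ residual of at most R, the LP optimum t_inf never exceeds R.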
17 Illustrating example. Consider the problem of estimating the parameters of an MA(3) model ξ(k) = x_1 u(k) + x_2 u(k−1) + x_3 u(k−2) + q(k), k = 1, ..., 50, where u(k) is known and q is a uniformly distributed noise with |q(k)| ≤ 0.5. WC optimal (central) algorithm: blue; LS estimate: yellow; spline algorithm: red.
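A minimal simulation of this kind of experiment (with an invented input sequence and parameter values, not the slide's data), here computing only the least squares estimate:

```python
import numpy as np

# MA(3) simulation sketch (illustrative values):
# xi(k) = x1 u(k) + x2 u(k-1) + x3 u(k-2) + q(k), |q(k)| <= 0.5, k = 1..50.
rng = np.random.default_rng(2)
x_true = np.array([1.0, 0.5, -0.3])
u = rng.standard_normal(52)                         # known input, padded for the lags
I = np.column_stack([u[2:52], u[1:51], u[0:50]])    # regressor matrix, one row per k
y = I @ x_true + rng.uniform(-0.5, 0.5, 50)         # bounded uniform noise

x_ls, *_ = np.linalg.lstsq(I, y, rcond=None)        # least squares estimate
```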
18 Worst-Case Setting: pros and cons. The worst-case setting is ideal for robust control, since it provides explicit bounds on the parameter uncertainty. Unfortunately, in many problem instances these bounds may be very large. Probabilistic approach: in recent years, a parallel approach to robust control has emerged, aimed at guaranteeing robust performance for most of the parameter values. The idea at the basis of this talk is: why not apply this approach directly to the identification problem, thus providing bounds guaranteed for most of the cases?
19 A Probabilistic Setting
20 Probabilistic setting: random uncertainty. We still assume that q ∈ Q, but we also assume to have information on the distribution of q (note that this information is usually available). Objective: to derive optimal algorithms and to compute the related errors when the uncertainty q is random. In this setting, the error of an algorithm is measured in a worst-case sense, but we disregard a set of measure at most ǫ ∈ (0, 1) from the consistency set I^{-1}(y). Assumption (random measurement uncertainty): the measurement noise q is a real random vector with given probability density p_Q(q) and support set Q ⊆ R^m. Denote by P_Q(q) the probability distribution of q, and by µ_Q the probability measure generated by p_Q(q) over the set Q.
22 Induced measure over I^{-1}(y) and SI^{-1}(y). The probability measure over the set Q induces, by means of equation (1), a probability measure µ_{I^{-1}} over the set I^{-1}(y). Any measurable set B ⊆ X can be measured through the probability measure µ_Q as follows: µ_{I^{-1}}(B) = µ_Q({q ∈ Q : there exists x ∈ B ∩ I^{-1}(y) with Ix + q = y}). This conditional measure is such that points outside the consistency set I^{-1}(y) have measure zero, and µ_{I^{-1}}(I^{-1}(y)) = 1; that is, the induced measure is concentrated on I^{-1}(y). We denote by P_{I^{-1}} the induced probability distribution and by p_{I^{-1}} its density, both supported on I^{-1}(y). The measure µ_{I^{-1}} is in turn mapped by S into a measure µ_{SI^{-1}} over SI^{-1}(y), with pdf p_{SI^{-1}} and cdf P_{SI^{-1}}.
23 Probabilistic error and optimal algorithms. Given perturbed information y ∈ Y and accuracy ǫ ∈ (0, 1), we define the probabilistic error (to level ǫ) as r_pr(A, y, ǫ) ≐ inf_{X_0 : µ_{I^{-1}}(X_0) ≤ ǫ} max_{x ∈ I^{-1}(y)\X_0} ‖Sx − A(y)‖_p. (3) Clearly, r_pr(A, y, ǫ) ≤ r_wc(A, y) for any algorithm A, data y ∈ Y and ǫ ∈ (0, 1), which implies a reduction of the approximation error in the probabilistic setting. An algorithm A_o^pr is called probabilistic optimal (to level ǫ) if it minimizes the error r_pr(A, y, ǫ) for any y ∈ Y and ǫ ∈ (0, 1): r_o^pr(y, ǫ) ≐ r_pr(A_o^pr, y, ǫ) = inf_A r_pr(A, y, ǫ). The probabilistic optimal estimator is given by ẑ_o^pr(ǫ) ≐ A_o^pr(y, ǫ). The minimal error r_o^pr(y, ǫ) is called the probabilistic radius of information (to level ǫ).
25 Problem definition. Problem: computation of r_o^pr(y, ǫ) and derivation of probabilistic optimal algorithms A_o^pr for different probability distributions P_Q and support sets Q. In particular, we are interested in the cases: µ_Q is Gaussian; µ_Q is uniform on the ℓ_2 norm ball B_2(R) and p = 2; µ_Q is uniform on the ℓ_∞ norm ball B_∞(R) and p = ∞.
26 Chance constraint formulation. We introduce the violation probability for given A and radius r: v(r, A) ≐ µ_{I^{-1}}({x ∈ I^{-1}(y) : ‖Sx − A(y)‖ > r}). Equation (3) can be reformulated as a chance-constrained optimization problem r_pr(A, y, ǫ) = min{r : v(r, A) ≤ ǫ}. A probabilistic optimal algorithm can be computed as r_o^pr(y, ǫ) = min{r : inf_A v(r, A) ≤ ǫ} = min{r : v_o(r) ≤ ǫ}, where the optimal violation probability for given radius r is v_o(r) ≐ inf_A µ_{I^{-1}}({x ∈ I^{-1}(y) : ‖Sx − A(y)‖_p > r}).
27 Chance constraint formulation: related literature. Notably, the probabilistic setup and its chance-constraint formulation had already been introduced in the book of Traub. In [Traub et al:88] the connection with the average error setting, whose objective is to minimize the expected value of the estimation error E[g(‖Sx − A(y)‖)], is outlined. To see this connection, note that for any r > 0, v(r, A) = ∫_{I^{-1}(y)} I_r(‖Sx − A(y)‖) p_X(x) dx, where I_r(u) is the indicator function of the event u > r. Hence, the probabilistic estimator is equivalent to the average estimator that minimizes E[I_r(‖Sx − A(y)‖)].
28 Random Uncertainty, Normally Distributed: optimality of least squares. We consider the case when the uncertainty q is normally distributed, i.e. Q ≡ R^m and q ∼ N(q̄, W). In this case, the probabilistic optimal algorithm A_o^pr is the least squares algorithm A_ls. Theorem: let K = R^n, Q = R^m, q ∼ N(q̄, W) and W = H^T H. Then A_o^pr(y) = A_ls(y) = S (I^T H^T H I)^{-1} I^T H^T H y for any y ∈ Y and ǫ ∈ (0, 1). Moreover, the probabilistic radius r_o^pr(ǫ) ≐ r_pr(A_o^pr, y, ǫ) does not depend on y.
29 Probabilistic radius for Gaussian measures: r_o^pr(ǫ) = ??? We are not aware of explicit ways to compute the radius r_o^pr(ǫ). In Traub, the following bound on r_o^pr(ǫ) is provided: r_o^pr(ǫ) ≤ √(2 ln(5/ǫ)) r_o^avg, where r_o^avg is the optimal average radius of information, which can be computed in closed form as a function of the covariance matrix W. The probabilistic radius r_o^pr(ǫ) also appears closely related to the work of Campi and Weyer, who derive non-asymptotic (i.e. based on a finite number of measurements, as in our case) confidence ellipsoids for least-squares estimates.
30 Random Uncertainty, Uniformly Distributed: the induced measure is still uniform. We study the case when q is uniformly distributed over the set Q, i.e. q ∼ U(Q) and µ_Q ≡ µ_{U(Q)}. We assume that Q is a compact set. In particular, we are interested in the case when Q is the ℓ_p norm ball B_p(R). Question: if µ_Q is the uniform measure over Q, what is the induced measure µ_{I^{-1}} over the set I^{-1}(y)? Theorem: let Q be a compact set. If q ∼ U(Q), then for any y ∈ Y, µ_{I^{-1}} ≡ µ_{U(I^{-1}(y))}. Moreover, the measure µ_{SI^{-1}} over Z is log-concave.
32 Weighted ℓ_2 ball, uniform case: optimality of least squares. The random uncertainty q is uniformly distributed in a weighted ℓ_2 ball of radius ρ, and the set SI^{-1}(y) is hence an ellipsoid. In this case, the center of SI^{-1}(y) is a probabilistic optimal estimator, and it coincides with the least squares estimator A_ls. Theorem: let K = R^n and q ∼ U(Q), where Q = {q : q^T W q ≤ ρ²} and W = H^T H. Then A_o^pr(y) = A_ls(y) = S (I^T H^T H I)^{-1} I^T H^T H y.
33 ℓ_∞ ball, the case S = I_n. We now concentrate on the very important case when Q = B_∞(R). In this case, LS is no longer optimal, and a probabilistic optimal estimator needs to be derived. In order to simplify the next developments, we start by considering the case when S is the identity operator. Assumption (parameter estimation problems): we assume that S = I_n. This corresponds to the situation when one is interested in parameter estimation. The assumption can be relaxed to S being square, S ∈ R^{n×n}. We recall that in this case SI^{-1}(y) = I^{-1}(y) is a polytope equipped with a uniform measure.
34 Computation of the violation probability: ℓ_∞ ball, S = I_n. Theorem (computation of violation probability): let Q be a bounded convex set with p_Q = U(Q). Then, for given r > 0: (i) the violation probability v_o(r) can be computed as the solution of v_o(r) = inf_{x_c} vol[X_0(x_c, r)]/V_X, (4) where X_0(x_c, r) ≐ I^{-1}(y)\B_p(x_c, r) and V_X = vol[I^{-1}(y)]; (ii) the optimization problem (4) is quasi-convex in x_c over the convex set Ω ≐ {x_c : I^{-1}(y) ∩ B_p(x_c, r) ≠ ∅}; (iii) the function v_o(r) is a continuous non-increasing function of r.
35 Computation of the violation probability: sketch of proof. Recalling that µ_Q is the uniform measure over Q, from the definition of v_o(r) we write v_o(r) = (1/V_X) inf_A vol[{x ∈ I^{-1}(y) : ‖x − A(y)‖_p > r}] = (1/V_X) inf_{x_c} vol[{x ∈ I^{-1}(y) : ‖x − x_c‖_p > r}] = (1/V_X) inf_{x_c} vol[{x ∈ I^{-1}(y) : x ∉ B_p(x_c, r)}] = (1/V_X) inf_{x_c} vol[I^{-1}(y)\B_p(x_c, r)] = (1/V_X) inf_{x_c} vol[X_0(x_c, r)], from which point (i) follows.
36 Computation of the violation probability: sketch of proof. To prove quasi-convexity (point (ii)), note that (4) can be rewritten as v_o(r) = 1 − (1/V_X) sup_x vol[D(x, r)], (5) where D(x, r) = I^{-1}(y) ∩ B_p(x, r). This is the problem of maximizing the volume of the intersection of two convex sets, one of which can be translated by x. This problem can be shown to be quasi-concave in x over the set Ω where the intersection D(x, r) is non-empty. More specifically, in [Zalgaller:01] it is shown that the function φ(x) ≐ (vol[D(x, r)])^{1/n} is concave over Ω (a direct consequence of the Brunn-Minkowski inequality). It then immediately follows that the function φ(x)^n is quasi-concave, since φ(x) is nonnegative for x ∈ Ω and the n-th power y^n is an increasing function for y ≥ 0.
37 Computation of the probabilistic optimal estimate. Corollary: for given r > 0, a global minimizer for problem (4) can be computed as the solution of the maximization problem sup_{x ∈ Ω} φ(x), with φ(x) = (vol[D(x, r)])^{1/n}. (6) Moreover, problem (6) is concave over the convex set Ω, and thus any local maximizer is also global. For fixed r > 0, let x_o(r) be one such maximizer; then v_o(r) = 1 − vol[D(x_o(r), r)]/V_X, and the probabilistic radius of information (to level ǫ) can be found as the (unique) solution of the one-dimensional inversion problem r_o^pr(y, ǫ) = min{r : v_o(r) ≤ ǫ}.
38 Probabilistic radius and probabilistic optimal estimator. For given ǫ > 0, the inversion problem can be easily solved by bisection, thus providing a way of computing the optimal probabilistic radius of information r_o^pr(y, ǫ). The corresponding optimal estimate x̂_o^pr(y, ǫ) can be directly computed as a minimizer x_o(r), for r = r_o^pr(y, ǫ), of the following problem. Problem (maximum intersection): (P-max-int): sup_{x_c} vol[D(x_c, r)]^{1/n}. If the set I^{-1}(y) is centrally symmetric, then it is easy to see by symmetry that the intersection with a ball is maximized when the sets are concentric. Hence, the optimal estimator for the probabilistic case coincides with that of the worst-case setting.
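The one-dimensional inversion r_o^pr = min{r : v_o(r) ≤ ǫ} can be sketched as a bisection, assuming only that v_o is non-increasing. Here it is tested on a synthetic violation curve, since evaluating the true v_o requires the volume machinery above:

```python
import numpy as np

def invert_radius(v_o, eps, r_lo=0.0, r_hi=10.0, tol=1e-6):
    """Bisection for r_pr = min{ r : v_o(r) <= eps }, assuming v_o non-increasing
    and v_o(r_hi) <= eps < v_o(r_lo)."""
    while r_hi - r_lo > tol:
        r_mid = 0.5 * (r_lo + r_hi)
        if v_o(r_mid) <= eps:
            r_hi = r_mid        # feasible: shrink from above
        else:
            r_lo = r_mid        # infeasible: grow from below
    return r_hi

# Illustration with a synthetic non-increasing violation curve v_o(r) = exp(-r);
# the exact inverse at eps = 0.05 is ln(20).
r_star = invert_radius(lambda r: np.exp(-r), eps=0.05)
```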
39 Solving (P-max-int): volume oracle and oracle-polynomial-time algorithm. We first consider the case when one assumes to have a volume oracle that, given x, returns the volume of D(x, r) (and a subgradient of the function φ(x)). Problem (P-max-int) has been considered in [FukUno:07], which derives a strongly oracle-polynomial-time algorithm for polytopic sets. Indeed, the fact that the problem is NP-hard does not make the intersection maximization problem worthless to investigate, since the volume of a polytope can be computed quickly for considerably complex polytopes in modest dimensions (say up to n = 10). Hence, for small n, one can use the method proposed in [FukUno:07]. This method has also been used in our examples for comparison with the other techniques proposed further on.
40 Solving (P-max-int): volume oracle and oracle-polynomial-time algorithm. Function v_o(r) for a problem with n = 3 and m = 13, and the corresponding sequence of optimal boxes maximizing the intersection with the polytope.
41 Solving (P-max-int): example. Computation of v_o(r) for a problem with six parameters. For ǫ = 0.05, the probabilistic radius is r_o^pr(y, ǫ) = 0.068, almost half of the worst-case radius r_o^wc(y).
42 Solving (P-max-int): randomized algorithms and stochastic optimization. In general, we do not have a volume oracle! We use a randomized approach. Algorithm (randomized approximation of φ(x)): 1. Generate N points uniformly in the ball B_p(x, r). 2. Count the number N_g of points that belong to the set I^{-1}(y) (this can be done in polynomial time for polytopes or ellipsoids). 3. An approximation of the function φ(x) is immediately obtained as φ̂(x) = ((N_g/N) vol[B_p(x, r)])^{1/n}. Note that the estimate is unbiased in the sense that the expected value with respect to the samples satisfies E[φ̂(x)^n] = φ(x)^n.
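The three steps above can be sketched directly for the ℓ_∞ case, where sampling uniformly in B_∞(x, r) is trivial and membership in I^{-1}(y) reduces to a residual check (the data in the sanity check are illustrative):

```python
import numpy as np

def phi_hat(x, r, I, y, R, N=20000, rng=None):
    """Monte Carlo approximation of phi(x) = vol(I^{-1}(y) ∩ B_inf(x, r))^(1/n):
    sample uniformly in B_inf(x, r), count the fraction consistent with the data."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = I.shape[1]
    pts = x + rng.uniform(-r, r, size=(N, n))                      # step 1: N samples in the ball
    inside = np.max(np.abs(y[None, :] - pts @ I.T), axis=1) <= R   # step 2: membership check
    return (inside.mean() * (2.0 * r) ** n) ** (1.0 / n)           # step 3: vol(B_inf(x, r)) = (2r)^n

# Sanity check: with I = identity and y = 0, I^{-1}(y) = B_inf(0, 1); the ball
# B_inf(0, 0.5) lies entirely inside it, so the estimate equals ((2*0.5)^2)^(1/2) = 1.
I = np.eye(2); y = np.zeros(2); R = 1.0
val = phi_hat(np.zeros(2), r=0.5, I=I, y=y, R=R)
```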
43 Solving (P-max-int): stochastic optimization, SPSA. We consider the stochastic problem sup_x E[φ̂(x)]. For its solution, we can resort to classical stochastic optimization algorithms (see Kushner & Yin), such as FDSA, SPSA, ... Algorithm (Simultaneous Perturbation Stochastic Approximation, SPSA): consider a starting point θ_0, and run the iteration θ_{k+1} = θ_k + a_k [Δ_k^{-1}] (φ̂_+ − φ̂_−)/(2 c_k), where Δ_k ∈ {−1, +1}^n is a Bernoulli sequence, [Δ_k^{-1}] ≐ [Δ_{k,1}^{-1} ··· Δ_{k,n}^{-1}]^T, and φ̂_± ≐ φ̂(θ_k ± c_k Δ_k) + η_k^±, with η_k^± random noise.
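A minimal SPSA sketch, with assumed gain schedules a_k = a/k^0.602 and c_k = c/k^0.101 (standard choices from the SPSA literature, not stated on the slide), tested here on a smooth concave surrogate objective rather than on φ̂ itself:

```python
import numpy as np

def spsa_maximize(f, theta0, iters=2000, a=0.1, c=0.1, seed=0):
    """Minimal SPSA ascent sketch; gain schedules are assumed and problem-dependent."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        a_k = a / k ** 0.602
        c_k = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +-1 perturbation
        g_hat = (f(theta + c_k * delta) - f(theta - c_k * delta)) / (2.0 * c_k) * (1.0 / delta)
        theta = theta + a_k * g_hat                         # ascent step (maximization)
    return theta

# Smoke test on a smooth concave surrogate with maximizer at (1, -2).
theta = spsa_maximize(lambda t: -np.sum((t - np.array([1.0, -2.0])) ** 2), [0.0, 0.0])
```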
44 Solving (P-max-int): stochastic optimization, convergence of SPSA. Theorem: assume φ(x) is concave and nondifferentiable (as in our case), and let ∂φ(x) be a subgradient of φ. If a_k → 0, Σ_k a_k = ∞ and c_k → 0, Σ_k c_k = ∞, then under mild conditions θ_k converges to a point θ such that 0 ∈ ∂φ(θ). Remarks: (+) simulations show that the method works for quite large n, m. (−) At present, we have convergence only for k → ∞. This is contrary to our philosophy, which aims at results valid for finite k. Working direction: derive (probabilistic) bounds on θ_k and φ(θ_k). Is it possible to find conservative but guaranteed approaches? Idea: instead of maximizing the volume of the intersection, maximize an appropriate lower bound on this volume.
45 Solving (P-max-int): an SDP relaxation, ellipsoidal approximation. Idea: construct, for fixed r, the maximal volume ellipsoid contained in the intersection D(x_c, r), i.e. solve the problem sup_{x_c, x_E, P_E} vol[E(x_E, P_E)] subject to E(x_E, P_E) ⊆ D(x_c, r), where E(x_E, P_E) ≐ {x ∈ R^n : x = x_E + P_E z, ‖z‖_2 ≤ 1}. The above problem is still non-concave, but a concave formulation is possible.
46 Solving (P-max-int): an SDP relaxation, ellipsoidal approximation. Theorem: let r > 0 be fixed. A global optimizer for the maximal volume ellipsoid problem can be computed by solving inf_{x_c, x_E, P_E} − log det P_E subject to [ (R + y_i − I_i x_E) I_n , P_E I_i^T ; I_i P_E , (R + y_i − I_i x_E) ] ⪰ 0, i = 1, ..., m; [ (R − y_i + I_i x_E) I_n , P_E I_i^T ; I_i P_E , (R − y_i + I_i x_E) ] ⪰ 0, i = 1, ..., m; [ (r + e_i^T(x_c − x_E)) I_n , P_E e_i ; e_i^T P_E , (r + e_i^T(x_c − x_E)) ] ⪰ 0, i = 1, ..., n; [ (r − e_i^T(x_c − x_E)) I_n , P_E e_i ; e_i^T P_E , (r − e_i^T(x_c − x_E)) ] ⪰ 0, i = 1, ..., n, where I_i denotes the i-th row of I. Let x_c^sdp(r) be the x_c component of a solution of the above SDP, and define v_o^sdp(r) ≐ 1 − vol[D(x_c^sdp(r), r)]/V_X. Then v_o^sdp(r) ≥ v_o(r).
47 Solving (P-max-int): an SDP relaxation, ellipsoidal approximation. This SDP relaxation thus leads to an easily computable suboptimal violation function v_o^sdp(r). Ellipsoid-based SDP relaxation v_o^sdp(r) (red) and optimal violation function v_o(r) (blue) for an example with m = 200 and n = 4. The convex relaxation (red) is always above the optimal one, as expected.
48 Recovering tight bounds. From our developments, it follows that the probabilistic radius of information r_o^pr(y, ǫ) and the optimal estimator x̂_o^pr(y, ǫ) can be computed as the solution of a convex optimization problem. These quantities are to be interpreted as the radius and center of an ℓ_∞ ball guaranteed to contain a fraction 1 − ǫ of the total volume of I^{-1}(y). Theorem (tight bound: ǫ-enclosing orthotope): let x̂_o^pr(y, ǫ) be a probabilistic optimal estimator guaranteeing a probabilistic radius of information r_o^pr(y, ǫ). Then tight bounds on the parameters h_i^−, h_i^+, i = 1, ..., n, can be computed as the solution of the following 2n linear programs: h_i^− (h_i^+) = inf (sup) x_i, i = 1, ..., n, subject to ‖x − x̂_o^pr(y, ǫ)‖_∞ ≤ r_o^pr(y, ǫ), ‖Ix − y‖_∞ ≤ R.
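The 2n linear programs of the theorem can be sketched with a stock LP solver; the ball constraint enters as simple variable bounds and the consistency set as linear inequalities. Here x_hat and r are illustrative stand-ins for the probabilistic estimate and radius, and the data are generated so that x_hat is consistent:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative stand-ins: x_hat plays the role of the probabilistic estimate,
# r the probabilistic radius; all data are invented.
rng = np.random.default_rng(3)
n, m, R, r = 2, 40, 0.3, 0.25
I = rng.standard_normal((m, n))
x_hat = np.array([0.2, -0.4])
y = I @ x_hat + rng.uniform(-R, R, m)       # guarantees x_hat is in I^{-1}(y)

A_ub = np.vstack([I, -I])                   # encodes ||I x - y||_inf <= R
b_ub = np.concatenate([y + R, R - y])
box = [(x_hat[i] - r, x_hat[i] + r) for i in range(n)]   # ||x - x_hat||_inf <= r

h = []                                      # 2n LPs: min and max of each coordinate x_i
for i in range(n):
    c = np.zeros(n); c[i] = 1.0
    lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=box).fun
    hi = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=box).fun
    h.append((lo, hi))
```

By construction, each interval [h_i^−, h_i^+] contains x_hat_i and is no wider than 2r.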
49 The case S ≠ I_n. When S is not the identity, our approach cannot be applied directly, since the measure on SI^{-1}(y) is no longer uniform. However, this measure is log-concave, and hence a result similar to the one proved in the previous section still holds. Theorem: let µ_{SI^{-1}} be the measure induced by µ_Q over Z, which is log-concave. Then φ(z) = µ_{SI^{-1}}(D_S(z))^{1/n}, where D_S(z) = SI^{-1}(y) ∩ B_p(z, r), (7) is still a concave function. This result shows that our problem remains well posed.
50 The case S ≠ I_n. However, it is hard to compute the measure µ_{SI^{-1}}(D_S(z)), and even a direct stochastic approximation approach would require generating samples according to this measure. A solution is to go back to the space X, and maximize the volume of the intersection B_S^{-1}(z) ∩ I^{-1}(y), where B_S^{-1}(z) ≐ {x ∈ X : Sx ∈ B_p(z, r)} is the back-image of the ball B_p(z, r) through S. Simple geometric reasoning shows that B_S^{-1}(z) is a cylinder; thus the computation of D_S(z) can be performed using the same techniques discussed in the previous sections.
51 Future work and extensions. A different approach is to approximately solve the chance-constrained problem using the discarded-constraints scenario approach of Calafiore and Campi. Generate N vectors x^(i) uniformly distributed in SI^{-1}(y). Construct the randomized optimization problem r̂_o(N, L) = min_{z_c, r} r subject to ‖Sx^(i) − z_c‖ ≤ r for all i = 1, ..., N except L discarded ones. For given ǫ_1 < ǫ, one obtains Prob{r_o^pr(ǫ, y) ≤ r̂_o(N, L) ≤ r_o^pr(ǫ_1, y)} ≥ 1 − δ(N, L). Problems: (i) it is not easy to sample in I^{-1}(y); (ii) it is not easy to construct an optimal discarding procedure.
52 Future work and extensions Reduced complexity estimates In various works, Garulli et al. consider the conditional estimation problem. In this case, they seek an estimate of x using a lower-dimensional representation, described by an l-dimensional linear manifold, with l < n, i.e. x̂ ∈ M = {x ∈ X : x = x_o + Mz, z ∈ R^l}. That is, they consider the class of algorithms A_M that map Y → M ⊂ X. Then, an optimal conditional central estimate is given by the conditional Chebyshev center x_o^M = arg inf_{x_c ∈ M} sup_{x ∈ I⁻¹(y)} ‖x − x_c‖_p. This amounts to finding the point x_c lying on the manifold M such that its distance from the farthest point of I⁻¹(y) is minimized. Nonlinear systems Extensions to the case of nonlinear operators are under study.
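A toy sketch of the conditional Chebyshev center (hypothetical data throughout): I⁻¹(y) is replaced by a finite sample, the estimate is restricted to the manifold x = x_o + Mz, and a generic minimizer handles the convex but nonsmooth min-max objective:

```python
# Approximate conditional Chebyshev center: restrict the candidate
# center to the manifold x = x_o + M z, z in R^l, and minimize over z
# the distance to the farthest point of a finite sample of I^{-1}(y).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
V = rng.uniform(-1, 1, size=(100, 3))                # stand-in samples of I^{-1}(y)
x_o = np.zeros(3)
M = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # a 2-dim manifold in R^3

def worst_dist(z):
    x_c = x_o + M @ z                  # candidate center on the manifold
    return np.linalg.norm(V - x_c, axis=1).max()     # farthest-point distance

res = minimize(worst_dist, np.zeros(2), method="Nelder-Mead")
x_cond = x_o + M @ res.x               # conditional central estimate
```

Since l < n, the manifold cannot reach every point of the set, so the conditional radius res.fun is in general larger than the unconstrained Chebyshev radius.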
53 The end... for now... THANK YOU!