On construction of constrained optimum designs
Institute of Control and Computation Engineering, University of Zielona Góra, Poland

DEMA2008, Cambridge, 15 August 2008
Numerical algorithms to construct optimal designs

1. Sequential algorithms with selection of support points: the Wynn-Fedorov scheme (Atkinson, Donev and Tobias, 2007; Fedorov and Hackl, 1997; Walter and Pronzato, 1997; Pázman, 1986; Silvey, 1980).
2. Sequential numerical design algorithms with support points given a priori: the multiplicative scheme (Torsney, 1988; Silvey, Titterington and Torsney, 1978; Torsney and Mandal, 2001; 2004; Pázman, 1986) and linear matrix inequalities (Boyd and Vandenberghe, 2004).

In practice, various inequality constraints must sometimes be considered, due to cost limitations, restrictions required to achieve certain robustness properties, or restrictions on the experimental space. Although much work has been done in theory (Fedorov and Hackl, 1997; Cook and Fedorov, 1995), publications on the algorithmic aspects of constrained optimization are still scarce.
Classical framework

Multiresponse parametric model:
$$ y_{ij} = \eta(x_i, \theta) + \varepsilon_{ij}, \qquad j = 1,\dots,r_i, \quad i = 1,\dots,n $$

Notation:
- $y_{ij}$ — observations of the response variables
- $x_i$ — fixed values of the explanatory (or independent) variables (e.g., time, temperature, spatial location, drug doses, etc.)
- $r_i \ge 1$ — number of replications for setting $x_i$, with $N = \sum_{i=1}^{n} r_i$
- $\eta(\cdot,\cdot)$ — known regression function
- $\theta$ — vector of constant but unknown parameters
Classical framework

Additive random errors:
$$ \mathsf{E}(\varepsilon_{ij}) = 0, \qquad \mathsf{E}(\varepsilon_{ij}\varepsilon_{kl}^{T}) = \delta_{ik}\delta_{jl}\, V(x_i) $$

Notation:
- $V(x_i) \succ 0$ — dispersion matrices (known, possibly up to a common constant multiplier)
- $\delta_{ij}$ — the Kronecker delta
Simplification for linear models

Linear regression:
$$ \eta(x_i, \theta) = F(x_i)^{T}\theta $$
where the matrices $F(x_i)$ are known.

BLUE of $\theta$:
$$ \hat{\theta} = M^{-1}\sum_{i=1}^{n} r_i F(x_i) V(x_i)^{-1} \bar{y}_i $$

Notation:
- $\bar{y}_i = \frac{1}{r_i}\sum_{j=1}^{r_i} y_{ij}$
- $M_i = F(x_i) V(x_i)^{-1} F(x_i)^{T}$, $i = 1,\dots,n$
- $M = \sum_{i=1}^{n} r_i M_i$ — the Fisher information matrix
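To make the notation concrete, here is a minimal numerical sketch (not from the slides) of the BLUE and the Fisher information matrix defined above; the function name and data layout are assumptions of this sketch.

```python
# Hypothetical helper illustrating the BLUE and the FIM above.
# F_list[i] is F(x_i) (m x s, for m parameters and s responses),
# V_list[i] is the dispersion matrix V(x_i), ybar[i] the averaged
# response at x_i, and r[i] the number of replications r_i.
import numpy as np

def blue_and_fim(F_list, V_list, ybar, r):
    m = F_list[0].shape[0]
    M = np.zeros((m, m))                      # M = sum_i r_i M_i
    rhs = np.zeros(m)
    for F, V, yb, ri in zip(F_list, V_list, ybar, r):
        Vinv = np.linalg.inv(V)
        M += ri * (F @ Vinv @ F.T)            # r_i M_i, M_i = F V^{-1} F^T
        rhs += ri * (F @ Vinv @ yb)           # r_i F(x_i) V(x_i)^{-1} ybar_i
    theta_hat = np.linalg.solve(M, rhs)       # BLUE: theta_hat = M^{-1} rhs
    return theta_hat, M
```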
Linear models (ctd.)

Covariance matrix of $\hat{\theta}$:
$$ \operatorname{cov}(\hat{\theta}) = M^{-1} $$

We assume that the values $x_i$, $i = 1,\dots,n$, are fixed and may not be altered, but we have full control over the corresponding numbers of replications $r_i$, $i = 1,\dots,n$. We wish to choose them in an optimal way so as to enhance the process of estimating $\theta$.
Convenient formulation

Discrete design:
$$ \xi = \begin{Bmatrix} x_1, & \dots, & x_n \\ p_1, & \dots, & p_n \end{Bmatrix} $$

Notation:
- $x_i$ — support points
- $p_i = r_i/N$ — weights

P.m.f. property of the weights:
$$ \mathbf{1}^{T} p = 1, \qquad p \ge 0 $$
where $\mathbf{1} = (1, 1, \dots, 1)$.
Optimality criterion

Normalized FIM:
$$ \widetilde{M}(p) = \frac{1}{N}\, M = \sum_{i=1}^{n} p_i M_i $$

D-optimality criterion:
$$ \Phi[\widetilde{M}(p)] = \log\det(\widetilde{M}(p)) \longrightarrow \max $$

In what follows, for simplicity of notation, the tilde over $\widetilde{M}(\cdot)$ will be dropped.
Problems involved

Problem 1. The resulting optimization problem constitutes a classical discrete resource allocation problem. Its combinatorial nature excludes calculus techniques and implies prohibitive computational complexity.

Way round: relaxation. The feasible weights $p_i$ are allowed to be any real numbers in the interval $[0, 1]$ that sum to unity, not necessarily integer multiples of $1/N$.

Advantage: a simple and efficient multiplicative algorithm can be exploited (cf. the previous talk by Ben Torsney).
Problems involved

Problem 2. The resulting designs concentrate on a relatively small number of support points (close to the number of estimated parameters), rather than spreading the measurement effort around appropriately, as many practising statisticians tend to do.

Solution: prevent spending the overall experimental effort at a few points by directly bounding the frequencies of observations from above:
$$ p \le b $$
where $b \le \mathbf{1}$ is fixed.
Problem statement once again

Ultimate formulation: given a vector $b \ge 0$ satisfying $\mathbf{1}^{T} b \ge 1$, find a vector of weights $p = (p_1, \dots, p_n)$ to maximize
$$ \Phi[M(p)] = \log\det(M(p)) $$
subject to
$$ 0 \le p \le b, \qquad \mathbf{1}^{T} p = 1. $$
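The conclusions later compare the proposed algorithm with general-purpose nonlinear programming solvers. For reference, here is a hedged sketch of such a generic baseline, assuming scipy is available; the function name and the (n, m, m) layout of the elementary information matrices are this sketch's choices, not the talk's.

```python
# Generic-solver baseline (an assumption of this sketch, not the
# talk's method): maximize log det M(p) over 0 <= p <= b, 1^T p = 1.
import numpy as np
from scipy.optimize import minimize

def d_optimal_generic(M_list, b):
    Ms = np.stack(M_list)                      # (n, m, m) array of the M_i
    n = Ms.shape[0]

    def neg_phi(p):
        M = np.tensordot(p, Ms, axes=1)        # M(p) = sum_i p_i M_i
        sign, logdet = np.linalg.slogdet(M)
        return np.inf if sign <= 0 else -logdet

    # Start from the uniform design; SLSQP enforces bounds and the
    # simplex constraint during the iterations.
    p0 = np.full(n, 1.0 / n)
    res = minimize(neg_phi, p0, method="SLSQP",
                   bounds=[(0.0, bi) for bi in b],
                   constraints=[{"type": "eq",
                                 "fun": lambda p: p.sum() - 1.0}])
    return res.x
```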
Properties

1. The performance index $\Phi$ is concave over the canonical simplex
$$ S_n = \{\, p \ge 0 : \mathbf{1}^{T} p = 1 \,\}. $$
2. It is differentiable at points yielding nonsingular FIMs, with
$$ \phi(p) := \nabla\Phi(p) = \big[\operatorname{tr}\{M(p)^{-1} M_1\}, \dots, \operatorname{tr}\{M(p)^{-1} M_n\}\big]^{T}. $$
3. The constraint set $P$ is a rather nice convex set (e.g., fast algorithms for orthogonal projection onto $P$ exist).

Numerous computational methods can potentially be employed, e.g., the conditional gradient method or a gradient projection method. However, if the number of support points is large, they may lead to unsatisfactorily long computational times.
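A short sketch of item 2, evaluating $\Phi$ and its gradient; the (n, m, m) array layout is an assumption carried over from the baseline sketch above.

```python
# Evaluate Phi(p) = log det M(p) and phi(p), whose i-th component is
# tr{ M(p)^{-1} M_i }.  Ms is an (n, m, m) array of the M_i.
import numpy as np

def phi_and_grad(p, Ms):
    M = np.tensordot(p, Ms, axes=1)            # M(p) = sum_i p_i M_i
    sign, logdet = np.linalg.slogdet(M)
    if sign <= 0:
        raise ValueError("M(p) is not positive definite")
    Minv = np.linalg.inv(M)
    grad = np.einsum("ab,iba->i", Minv, Ms)    # tr(M^{-1} M_i), i = 1..n
    return logdet, grad
```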
Characterization of the optimal design

Proposition 1. Suppose that the matrix $M(p^{*})$ is nonsingular for some $p^{*} \in P$. The vector $p^{*}$ constitutes a global maximum of $\Phi$ over $P$ if, and only if, there exists a number $\lambda$ such that
$$ \phi_i(p^{*}) \begin{cases} \ge \lambda & \text{if } p_i^{*} = b_i, \\ = \lambda & \text{if } 0 < p_i^{*} < b_i, \\ \le \lambda & \text{if } p_i^{*} = 0, \end{cases} $$
for $i = 1,\dots,n$.
Simplicial decomposition

Simplicial decomposition (SD) stands for a class of methods for solving large-scale continuous problems in mathematical programming with convex feasible sets (von Hohenbalken, 1977). It iterates by alternately solving

1. a linear programming subproblem (the so-called column generation problem), which generates an extreme point of the polyhedron, and
2. a nonlinear restricted master problem (RMP), which finds the maximum of the objective function over the convex hull (a simplex) of previously generated extreme points.

Its principal characteristic is that the sequence of successive solutions to the master problem tends to a solution of the original problem in such a way that the objective function strictly monotonically approaches its optimal value.
[Figure: successive iterations of simplicial decomposition illustrated in the $(p_1, p_2)$ plane.]
Algorithm SD

Step 0: (Initialization) Guess an initial solution $p^{(0)} \in P$ such that $M(p^{(0)})$ is nonsingular. Set $I = \{1,\dots,n\}$, $Q^{(0)} = \{p^{(0)}\}$ and $k = 0$.

Step 1: (Termination check) Set
$$ I_{\mathrm{ub}}^{(k)} = \{\, i \in I : p_i^{(k)} = b_i \,\}, \quad I_{\mathrm{im}}^{(k)} = \{\, i \in I : 0 < p_i^{(k)} < b_i \,\}, \quad I_{\mathrm{lb}}^{(k)} = \{\, i \in I : p_i^{(k)} = 0 \,\}. $$
If
$$ \phi_i(p^{(k)}) \begin{cases} \ge \lambda & \text{if } i \in I_{\mathrm{ub}}^{(k)}, \\ = \lambda & \text{if } i \in I_{\mathrm{im}}^{(k)}, \\ \le \lambda & \text{if } i \in I_{\mathrm{lb}}^{(k)}, \end{cases} $$
for some $\lambda \in \mathbb{R}_{+}$, then STOP: $p^{(k)}$ is optimal.
Step 2: (Solution of the column generation subproblem) Compute
$$ q^{(k+1)} = \arg\max_{p \in P} \phi(p^{(k)})^{T} p $$
and set $Q^{(k+1)} = Q^{(k)} \cup \{q^{(k+1)}\}$.

Step 3: (Solution of the restricted master subproblem) Find
$$ p^{(k+1)} = \arg\max_{p \in \operatorname{co}(Q^{(k+1)})} \Phi[M(p)] $$
and purge $Q^{(k+1)}$ of all extreme points with zero weights in the resulting expression of $p^{(k+1)}$ as a convex combination of elements of $Q^{(k+1)}$. Increment $k$ by one and go back to Step 1.
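Putting the two steps together, here is a compact sketch of the SD loop under the same array conventions. It calls solve_column_generation and solve_rmp, which are sketched after the corresponding slides below, and for brevity it replaces the $\lambda$-based test of Step 1 with a standard Frank-Wolfe-type gap test; tolerances are arbitrary choices of this sketch.

```python
# Sketch of Algorithm SD.  Ms: (n, m, m) array of the M_i; b: upper
# bounds with sum(b) >= 1; p0: feasible start with nonsingular M(p0).
import numpy as np

def simplicial_decomposition(Ms, b, p0, tol=1e-6, max_iter=100):
    Q = [np.asarray(p0, dtype=float)]          # extreme-point set Q^(k)
    p = Q[0]
    for _ in range(max_iter):
        M = np.tensordot(p, Ms, axes=1)        # M(p^(k))
        grad = np.einsum("ab,iba->i", np.linalg.inv(M), Ms)
        q = solve_column_generation(grad, b)   # Step 2 (sketched below)
        if grad @ (q - p) <= tol:              # Frank-Wolfe-type gap test
            break                              # p^(k) is (near-)optimal
        Q.append(q)
        p, w = solve_rmp(np.array(Q), Ms)      # Step 3 (sketched below)
        Q = [qj for qj, wj in zip(Q, w) if wj > 1e-12]   # purge points
    return p
```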
Column generation problem

Basically, this is a linear programming problem: maximize $c^{T} p$ subject to $p \in P$, where $c = \phi(p^{(k)})$.

A vector $q \in P$ constitutes its global solution if, and only if, there exists a scalar $\rho$ such that
$$ c_i \begin{cases} \ge \rho & \text{if } q_i = b_i, \\ = \rho & \text{if } 0 < q_i < b_i, \\ \le \rho & \text{if } q_i = 0, \end{cases} $$
for $i = 1,\dots,n$.
Solution of the column generation problem

Step 0: (Initialization) Set $j = 0$ and $v^{(0)} = 0$.

Step 1: (Sorting) Sort the elements of $c$ in nonincreasing order, i.e., find a permutation $\pi$ of the index set $I = \{1,\dots,n\}$ such that
$$ c_{\pi(i)} \ge c_{\pi(i+1)}, \qquad i = 1,\dots,n-1. $$

Step 2: (Identification of nonzero weights)
Step 2.1: If $v^{(j)} + b_{\pi(j+1)} < 1$, then set $v^{(j+1)} = v^{(j)} + b_{\pi(j+1)}$. Otherwise, go to Step 3.
Step 2.2: Increment $j$ by one and go to Step 2.1.
Solution of the column generation problem

Step 3: (Form the ultimate solution) Set
$$ q_{\pi(i)} = \begin{cases} b_{\pi(i)} & \text{for } i = 1,\dots,j, \\ 1 - v^{(j)} & \text{for } i = j+1, \\ 0 & \text{for } i = j+2,\dots,n. \end{cases} $$

The algorithm starts by picking the consecutive largest components $c_i$ of $c$ and setting the corresponding weights $q_i$ at their maximal allowable values $b_i$. This is repeated until the sum of the assigned weights would exceed one. The last weight set in this manner is then corrected so that the weights sum to one, and the remaining (i.e., unassigned) weights are set to zero.
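A direct transcription of this greedy procedure into code; only the function name is this sketch's invention.

```python
# Closed-form solution of the column generation LP: fill the weights
# corresponding to the largest components of c up to their bounds,
# correct the last one so the weights sum to one, zero the rest.
import numpy as np

def solve_column_generation(c, b):
    q = np.zeros(len(c))
    total = 0.0                                # running sum v^(j)
    for idx in np.argsort(-np.asarray(c)):     # indices of c, largest first
        if total + b[idx] < 1.0:
            q[idx] = b[idx]                    # take the full bound b_i
            total += b[idx]
        else:
            q[idx] = 1.0 - total               # corrected last weight
            break                              # remaining weights stay zero
    return q
```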
Solution of the restricted master problem

Suppose that at the $(k+1)$-th iteration of SD we have $Q^{(k+1)} = \{q_1, \dots, q_r\}$, possibly with $r < k+1$ (owing to the deletion mechanism for uninformative points). Step 3 of Algorithm SD involves maximization of $\Phi[M(p)] = \log\det(M(p))$ over
$$ \operatorname{co}(Q^{(k+1)}) = \Big\{\, p = \sum_{j=1}^{r} w_j q_j : w \ge 0,\ \mathbf{1}^{T} w = 1 \,\Big\}. $$
From the representation of any $p \in \operatorname{co}(Q^{(k+1)})$ as
$$ p = \sum_{j=1}^{r} w_j q_j, $$
or, componentwise, $p_i = \sum_{j=1}^{r} w_j q_{j,i}$, $i = 1,\dots,n$, with $q_{j,i}$ being the $i$-th component of $q_j$, it follows that
$$ M(p) = \sum_{i=1}^{n} p_i M_i = \sum_{j=1}^{r} w_j \Big( \sum_{i=1}^{n} q_{j,i} M_i \Big) = \sum_{j=1}^{r} w_j M(q_j). $$
Equivalent formulation of the RMP: find a vector of weights $w \in \mathbb{R}^{r}$ to maximize
$$ \Psi(w) = \log\det(H(w)) $$
subject to the constraints
$$ \mathbf{1}^{T} w = 1, \qquad w \ge 0, $$
where $H(w) = \sum_{j=1}^{r} w_j H_j$ and $H_j = M(q_j)$.
Proposition 2. Suppose that the matrix $H(w^{*})$ is nonsingular for some $w^{*} \in S_r$. The vector $w^{*}$ constitutes a global solution to the RMP if, and only if,
$$ \psi_j(w^{*}) \begin{cases} = m & \text{if } w_j^{*} > 0, \\ \le m & \text{if } w_j^{*} = 0, \end{cases} $$
for each $j = 1,\dots,r$, where
$$ \psi_j(w) = \operatorname{tr}\big[H(w)^{-1} H_j\big], \qquad j = 1,\dots,r. $$
Multiplicative algorithm for the RMP

Step 0: (Initialization) Select a weight vector $w^{(0)} \in S_r \cap \mathbb{R}^{r}_{++}$, e.g., set $w^{(0)} = (1/r)\mathbf{1}$. Set $l = 0$.

Step 1: (Termination check) If
$$ \frac{1}{m}\,\psi(w^{(l)}) \le \mathbf{1} $$
(to within a prescribed tolerance), then STOP.

Step 2: (Multiplicative update) Evaluate
$$ w^{(l+1)} = \frac{1}{m}\,\psi(w^{(l)}) \circ w^{(l)}, $$
the product being taken componentwise. Increment $l$ by one and go to Step 1.
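A sketch of this multiplicative scheme, returning both the design weights and the convex weights needed for the purging step of Algorithm SD; the stopping test implements Step 1 up to an assumed tolerance.

```python
# Multiplicative algorithm for the RMP.  Qmat: (r, n) array whose rows
# are the retained extreme points q_j; Ms: (n, m, m) array of the M_i.
import numpy as np

def solve_rmp(Qmat, Ms, tol=1e-7, max_iter=1000):
    r, m = Qmat.shape[0], Ms.shape[1]
    Hs = np.tensordot(Qmat, Ms, axes=1)        # H_j = M(q_j), shape (r, m, m)
    w = np.full(r, 1.0 / r)                    # w^(0) = (1/r) 1
    for _ in range(max_iter):
        H = np.tensordot(w, Hs, axes=1)        # H(w) = sum_j w_j H_j
        psi = np.einsum("ab,jba->j", np.linalg.inv(H), Hs)  # tr(H^{-1} H_j)
        if psi.max() <= m * (1.0 + tol):       # (1/m) psi(w) <= 1, approx.
            break
        w *= psi / m                           # multiplicative update
    return Qmat.T @ w, w                       # design weights p, and w
```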
Numerical example

Consider a batch reactor initially loaded with an aqueous solution of component A. In the presence of a solid catalyst, this reacts to form components B and C according to the consecutive reaction scheme $A \to B \to C$. The time changes in the concentrations $[A]$, $[B]$ and $[C]$ are governed by
$$ \frac{d[A]}{dt} = -k_1 [A]^{\gamma_1}, \qquad [A]\big|_{t=0} = 1, $$
$$ \frac{d[B]}{dt} = k_1 [A]^{\gamma_1} - k_2 [B]^{\gamma_2}, \qquad [B]\big|_{t=0} = 0, $$
$$ \frac{d[C]}{dt} = k_2 [B]^{\gamma_2}, \qquad [C]\big|_{t=0} = 0, $$
where $k_1$ and $k_2$ are the rates and $\gamma_1$ and $\gamma_2$ are the orders of the reactions. Usually, the coefficients $k_1$, $k_2$, $\gamma_1$ and $\gamma_2$ are not known in advance.
Numerical example

We set $x_i = t_i$, $i = 1,\dots,n$, $\theta = (k_1, k_2, \gamma_1, \gamma_2)$ and $\eta(t, \theta) = ([A](t;\theta), [B](t;\theta), [C](t;\theta))$. Moreover,
$$ \theta^{0} = (0.7,\ 0.2,\ 1.1,\ 1.5), \qquad V(t_i) = I_3, \qquad F(t_i)^{T} = \frac{\partial \eta}{\partial \theta}(t_i, \theta^{0}), \quad i = 1,\dots,n. $$
Consider $n = 100$ potential support points evenly distributed over the time interval $[0, 20]$.
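A hedged reconstruction of this setup: the talk does not say how the sensitivities were obtained, so this sketch integrates the kinetics with scipy and approximates $\partial\eta/\partial\theta$ by central finite differences; the tolerances and the difference step are arbitrary choices.

```python
# Build the elementary information matrices M_i for the reactor example.
import numpy as np
from scipy.integrate import solve_ivp

def responses(theta, t_grid):
    k1, k2, g1, g2 = theta
    def rhs(t, y):
        A, B, _ = np.clip(y, 0.0, None)        # guard fractional powers
        rA, rB = k1 * A**g1, k2 * B**g2
        return [-rA, rA - rB, rB]
    sol = solve_ivp(rhs, (0.0, t_grid[-1]), [1.0, 0.0, 0.0],
                    t_eval=t_grid, rtol=1e-8, atol=1e-10)
    return sol.y.T                             # (n, 3): [A], [B], [C]

theta0 = np.array([0.7, 0.2, 1.1, 1.5])
t_grid = np.linspace(0.0, 20.0, 100)           # n = 100 candidate times

# Central-difference sensitivities: F(t_i)^T = d eta / d theta, (3 x 4).
h = 1e-5
sens = np.zeros((len(t_grid), 3, 4))
for k in range(4):
    d = np.zeros(4); d[k] = h
    sens[:, :, k] = (responses(theta0 + d, t_grid)
                     - responses(theta0 - d, t_grid)) / (2.0 * h)

# With V(t_i) = I_3, M_i = F(t_i) F(t_i)^T: the sum of outer products
# of the sensitivity rows over the three responses.
Ms = np.einsum("isa,isb->iab", sens, sens)     # (n, 4, 4)
```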
[Figure: responses $[A](t)$, $[B](t)$ and $[C](t)$ versus time, together with the resulting design weights, for the computed designs.]
[Figure: variance function versus time, with the corresponding design weights, for the computed designs.]
[Figure: convergence of $\det(\widetilde{M}(p^{(k)}))$ versus the iteration number $k$.]
Conclusions

A simple algorithm was developed for constructing constrained D-optimum designs on finite design spaces. Extensive numerical experiments demonstrate that it can outperform approaches based on sophisticated general-purpose nonlinear programming solvers. Its unquestionable advantage is simplicity of implementation, which requires neither additional numerical routines nor painstaking programming effort.

A refinement: restricted simplicial decomposition, based on the observation that a particular feasible solution, such as the optimal one, can often be represented as a convex combination of a much smaller number of extreme points than that implied by Carathéodory's Theorem (Hearn et al., 1985; 1997; Ventura and Hearn, 1993).
Conclusions

Apart from that, some improvements aimed at removing nonoptimal support points, proposed by Luc Pronzato, can be incorporated into the restricted master problem to speed up its solution.

The method can also be used to find upper bounds on the maximum value of the objective function in the design of a monitoring network for parameter estimation of systems described by partial differential equations. Using this technique in conjunction with the branch-and-bound method, it was possible to select hundreds of gaged sites from among thousands of admissible sites within no more than five minutes on a low-cost PC (Uciński and Patan, 2007).
Conclusions

Although the interest here was in constructing D-optimum designs under bound constraints, the same simplicial decomposition technique can be applied to other smooth optimality criteria, e.g., the A-optimality criterion, and other linear constraints on the design weights can easily be included.

Efficient parallelization is possible via Parallel Variable Distribution (Ferris and Mangasarian, 1994; Solodov, 1998).

Extension to continuous designs (Ermoliev et al., 1985; Higgins and Polak, 1990; Cook and Fedorov, 1995; Shapiro and Ahmed, 2004).