Are You a Good Beam or a Bad Beam
Allen Holder, Trinity University Mathematics, and University of Texas Health Science Center at San Antonio
1 Are You a Good Beam or a Bad Beam University of Texas Health Science Center at San Antonio Joint Work with Matthias Ehrgott and Josh Reese, with help from Bill Salter, Ryan Acosta, and Daniel Nevin
2 Intensity Modulated Radiotherapy (IMRT) The design process selects couch and gantry positions so that the target is treated but surrounding tissues are spared.
3 Model Parameters Optimization models depend on: where dose is calculated, where the isocenter is placed, what couch angle(s) are allowed, and what gantry angle(s) are used. Bold decisions are mandatory; the others can be addressed within the optimization model. The examples and images are from the academic treatment system RAD, lagrange.math.trinity.edu/rad.
4 Calculating Dose A naive grid distribution: dose is calculated on a uniform grid (1 cm spacing). A judicious initial placement: we define a swath in which to place dose points and then add points only in normal tissue to avoid hot spots.
5 Model Parameters Dose Placement: This is the most crucial decision required to build a model; treatment and technique comparisons are directly related to this decision. Isocenter Placement: There is typically a single isocenter placed near the center of mass of the target, but there is no reason to believe that this placement is optimal. Couch Angle(s): Couch angles should be decided by the optimization model, but this is currently not done; we expect the user to define couch angles. Gantry Angles: Optimal selection of gantry angles is the single biggest problem in the field; selection is currently required by commercial systems.
6 Notation The following vectors are the minimum needed to form a prescription: TUB, the vector of upper bounds on the tumor (target volume); TLB, the vector of lower bounds on the tumor; CUB, the vector of upper bounds for the critical structures; and GUB, the vector of upper bounds for the good tissue. The dose to tissue is linear in exposure time, and we let x ↦ Ax be the transformation that maps exposure times (x) into deposited dose (Ax). The rows of A are separated by tissue type, A = [A_T; A_C; A_G], where the row blocks correspond to tumor pixels, critical structures, and good tissue.
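The linear dose map above can be sketched in a few lines of NumPy. Everything numeric here is hypothetical, made up purely to illustrate the row partition of A:

```python
import numpy as np

# Hypothetical 6-row dose matrix for two beams: rows 0-2 are tumor
# pixels (A_T), rows 3-4 a critical structure (A_C), row 5 good
# tissue (A_G).  Entries are invented for illustration.
A = np.array([
    [1.0, 0.2],
    [0.9, 0.3],
    [0.8, 0.4],
    [0.1, 0.6],
    [0.2, 0.5],
    [0.3, 0.3],
])
A_T, A_C, A_G = A[:3], A[3:5], A[5:]

x = np.array([60.0, 20.0])   # exposure times for the two beams
dose = A @ x                 # deposited dose is linear in x

# Dose to each tissue class comes from the corresponding row block.
tumor_dose    = A_T @ x
critical_dose = A_C @ x
good_dose     = A_G @ x
```

Stacking the three blocks recovers the full dose vector, which is exactly the partition A = [A_T; A_C; A_G] used throughout the models below.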
7 Linear Models The simplest linear models are feasibility models. These models attempt to satisfy T_LB ≤ A_T x ≤ T_UB, A_C x ≤ CUB, A_N x ≤ NUB, and x ≥ 0. Consistency is not guaranteed because physicians are often overly demanding, and many authors have complained that infeasibility is a shortcoming. In fact, the argument that feasibility alone correctly addresses treatment design is that the region defined by these constraints is relatively small, and hence optimizing over this region does not provide a significant benefit. Computational Solution of the Inverse Problem in Radiation-Therapy Treatment Planning, Y. Censor, M. Altschuler and W. Powlis, Applied Mathematics and Computation, 1988 (25). Semi-Automatic Radiotherapy Treatment Planning with a Mathematical Model to Satisfy Treatment Goals, W. Powlis, M. Altschuler, Y. Censor and E. Buhle, International Journal of Radiation Oncology, Biology, Physics, 1989 (16).
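A minimal sketch of the feasibility system, assuming toy one-row blocks and invented bounds (this only checks a candidate x against the constraints; it is not a feasibility solver):

```python
import numpy as np

def prescription_feasible(x, A_T, A_C, A_N, T_LB, T_UB, CUB, NUB):
    """Check a candidate exposure-time vector x against the system
    T_LB <= A_T x <= T_UB, A_C x <= CUB, A_N x <= NUB, x >= 0."""
    return bool(np.all(x >= 0)
                and np.all(A_T @ x >= T_LB) and np.all(A_T @ x <= T_UB)
                and np.all(A_C @ x <= CUB)
                and np.all(A_N @ x <= NUB))

# Hypothetical two-beam example with one dose point per tissue class.
A_T = np.array([[1.0, 0.9]])
A_C = np.array([[0.3, 0.1]])
A_N = np.array([[0.2, 0.2]])
ok = prescription_feasible(np.array([40.0, 30.0]), A_T, A_C, A_N,
                           T_LB=np.array([60.0]), T_UB=np.array([70.0]),
                           CUB=np.array([20.0]), NUB=np.array([15.0]))
```

An overly demanding prescription (e.g., a much tighter CUB) makes the same check fail, which is exactly the infeasibility complaint raised against these models.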
8 Linear Models An elastic model that removes the threat of infeasibility:

min{ ω l^T α + u_C^T β + u_N^T γ : T_LB − Lα ≤ A_T x ≤ T_UB, A_C x ≤ CUB + U_C β, A_N x ≤ NUB + U_N γ, U_C β ≤ CUB, 0 ≤ U_N γ, 0 ≤ x } (1)

This model has nice theoretical results. Theorem: The linear model and its dual are strictly feasible, meaning that each of the constraints can simultaneously hold without equality. Theorem: Letting (x*(ω), α*(ω), β*(ω), γ*(ω)) be an optimal solution for a particular ω, we have that l^T α*(ω) = O(1/ω). Designing Radiotherapy Plans with Elastic Constraints and Interior Point Methods, A. Holder, Health Care Management Science, 2003 (6), 5-16.
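To make the elastic objective concrete, the sketch below evaluates it for a fixed x in the special case where L, U_C, and U_N are identity matrices, so the minimal elastic variables are just the constraint violations. All numbers are invented; this is an illustration of the objective in (1), not the LP itself:

```python
import numpy as np

def elastic_objective(x, A_T, A_C, A_N, T_LB, CUB, NUB, l, u_C, u_N, omega):
    """Evaluate omega*l'alpha + u_C'beta + u_N'gamma for a fixed x,
    taking L = U_C = U_N = I so each elastic variable equals the
    amount by which its constraint is violated."""
    alpha = np.maximum(0.0, T_LB - A_T @ x)   # tumor underdose slack
    beta  = np.maximum(0.0, A_C @ x - CUB)    # critical-structure overdose
    gamma = np.maximum(0.0, A_N @ x - NUB)    # normal-tissue overdose
    return omega * (l @ alpha) + u_C @ beta + u_N @ gamma

# One dose point per tissue class; the critical structure is overdosed by 5.
val = elastic_objective(np.array([70.0]),
                        np.array([[1.0]]), np.array([[0.5]]), np.array([[0.2]]),
                        T_LB=np.array([60.0]), CUB=np.array([30.0]),
                        NUB=np.array([20.0]),
                        l=np.array([1.0]), u_C=np.array([1.0]),
                        u_N=np.array([1.0]), omega=10.0)
```

A feasible x gives objective value 0, and the weight ω lets tumor underdose dominate the penalty, which is what drives the l^T α*(ω) = O(1/ω) result.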
9 Candidate Angles The candidate collection of angles is denoted by A, and P(A) is the power set of A. Judgment Function: A judgment function is a function f : P(A) → IR_+ with the property that A′ ⊆ A″ implies f(A″) ≤ f(A′). Judgment Evaluation: For any A′ ∈ P(A), f(A′) = min{ z(x) : x ∈ X(A′) }, where z maps a fluency pattern x ∈ X(A′) into IR_+. Theorem: For all 0 ≤ N < |A| we have min{ f(A′) : A′ ∈ P(A), |A′| = N + 1 } ≤ min{ f(A′) : A′ ∈ P(A), |A′| = N }. Beam Selection Problem: For a fixed judgment function, the N-beam selection problem is min{ f(A′) − f(A) : A′ ∈ P(A), |A′| = N } = min{ f(A′) : A′ ∈ P(A), |A′| = N } − f(A).
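The monotonicity theorem can be checked by brute force on a toy judgment function. The closed form below is made up; its only relevant property is that adding angles never increases the value, mimicking f(A′) = min{z(x) : x ∈ X(A′)}:

```python
import itertools

# Toy candidate angles and invented per-angle "usefulness" weights.
ANGLES = list(range(8))
WEIGHT = {a: 1.0 + 0.3 * a for a in ANGLES}

def f(subset):
    # Monotone nonincreasing under set inclusion: more angles -> smaller f.
    return 1.0 / (1.0 + sum(WEIGHT[a] for a in subset))

def best_of_size(n):
    # min{ f(A') : |A'| = n }, by enumeration (fine for 8 angles).
    return min(f(frozenset(s)) for s in itertools.combinations(ANGLES, n))

# The theorem's chain: min over (N+1)-subsets <= min over N-subsets.
chain = [best_of_size(n) for n in range(1, len(ANGLES) + 1)]
```

The enumeration also shows why the N-beam selection problem is hard in practice: the search space grows as the binomial coefficient of |A| choose N.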
10 Beam Selectors Beam Selector: The function g : W → V is a beam selector if (1) W and V are subsets of P(A) and (2) g(W′) ⊆ W′ for all W′ ∈ W. Informed: A beam selector is informed if the map requires the evaluation of the judgment function. Weakly Informed: A beam selector is weakly informed if it uses the data that describes the judgment function but falls short of evaluating it. Stochastic Selectors: A beam selector is stochastic if it depends on a random process.
11 Limited Fluency Beam Selectors Notation: We let M be the vector with components M_a and define A_M := { A′ : A′ ∈ P(A), X(A′) ∩ { x : Σ_i x_(a,i) ≤ M_a, a ∈ A′ } ≠ ∅ }. Limited Fluency Beam Selectors: An LF-N beam selector is an N-beam selector g_lf : {A_M} → P(A_M). Limited Fluency Surrogate: Instead of solving the full beam selection problem, we solve min{ f(g_lf(A_M)) : g_lf ∈ F_lf^M(N) } − f(A), where F_lf^M(N) is the collection of LF-N beam selectors.
12 Limited Fluency Beam Selectors Theorem: There exists M̂ ≥ 0 such that the LF-N beam selection problem and the beam selection problem have the same optimal value. Also, if M^j is a nondecreasing sequence in IR^|A| such that lim_j M^j > M̂, then f(ĝ_lf(A_(M^j))) → f(A*), where A* is an optimal solution to the beam selection problem and ĝ_lf is any LF-N beam selector that satisfies ĝ_lf(A_M̂) ∈ argmin{ f(A′) : A′ ∈ P(A_M̂), |A′| = N }. Theorem: For any M, the search space of the limited fluency problem is no greater than the search space of the beam selection problem. The two problems are equivalent for sufficiently large M if and only if for every a ∈ A there is an A′ ∈ P(A) such that a ∈ A′, |A′| = N, and X(A′) ≠ ∅.
13 Set Cover Beam Selectors Notation: An angle a covers the dose-point k if Σ_i d_(k,a,i) ≥ ε, and for each k ∈ T we let A_k^ε = { a ∈ A : a covers dose-point k }. It is important to notice that A_k^ε = A for all k ∈ T if and only if 0 ≤ ε ≤ ε̄ := min{ Σ_i d_(k,a,i) : k ∈ T, a ∈ A }. We define W_sc^ε = { A_k^ε : k ∈ T }. Set Cover Beam Selector: An SC-N-beam selector is an N-beam selector having the form g_sc : W_sc^ε → P(A_k^ε) \ {∅}. Set Cover Surrogate: Allowing F_sc^ε(N) to be the collection of SC-N-beam selectors, the set covering approach is min{ f( ⋃_{k∈T} g_sc(A_k^ε) ) : g_sc ∈ F_sc^ε(N) } − f(A).
14 Set Cover Beam Selectors Theorem: If 0 ≤ ε ≤ ε̄, the set covering problem is equivalent to the beam selection problem. Implementation: Define angle scores by c_a = Σ_{k∈C} Σ_i (u_C)_k q_(k,a,i) / CUB_k if C ≠ ∅, and c_a = 0 if C = ∅. Then solve min{ Σ_a c_a y_a : Σ_a q_(k,a) y_a ≥ 1 for k ∈ T, Σ_a y_a = N, y_a ∈ {0, 1} }.
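The set-cover ILP is small enough on toy data to solve by enumeration. The dose values and scores below are invented; the point is the constraint structure, choose N angles of minimum total score so that every target point is covered:

```python
import itertools

# Toy instance: d[k][a] = dose to target point k from unit fluence
# at angle a; scores c_a are hypothetical.
d = [
    [2.0, 0.5, 1.0, 0.1],
    [1.5, 1.0, 0.2, 0.8],
]
scores = [0.40, 0.10, 0.25, 0.30]
eps, N = 0.1, 2

covers = [[d[k][a] >= eps for a in range(4)] for k in range(2)]

def solve_set_cover(scores, covers, N):
    """Brute-force the set-cover ILP: minimize total score over
    N-angle subsets that cover every target point."""
    best, best_set = float("inf"), None
    for combo in itertools.combinations(range(len(scores)), N):
        if all(any(covers[k][a] for a in combo) for k in range(len(covers))):
            cost = sum(scores[a] for a in combo)
            if cost < best:
                best, best_set = cost, set(combo)
    return best_set, best

chosen, cost = solve_set_cover(scores, covers, N)
```

Because ε here is below the minimum dose, every angle covers every point, so the optimum is simply the N smallest-scored angles, which is exactly the closed-form characterization on the next slide.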
15 Set Cover Beam Selectors Theorem: Assume that 0 ≤ ε ≤ ε̄, and order the angles so that c_(a_1) ≤ c_(a_2) ≤ ... ≤ c_(a_|A|). Let j′ and j″ be the smallest and largest indices, respectively, such that c_(a_j′) = ... = c_(a_N) = ... = c_(a_j″). Then y* is optimal in the set cover problem if and only if y*_(a_j) = 1 for 1 ≤ j < j′, Σ_{j′ ≤ j ≤ j″} y*_(a_j) = N − j′ + 1, and y*_(a_j) = 0 for j > j″. Corollary: If 0 ≤ ε ≤ ε̄, the set cover technique has (j″ − j′ + 1 choose N − j′ + 1) optimal solutions.
16 Scoring Beam Selectors Definition: An S-N-beam selector is an N-beam selector such that g_s : {A} → P(A). Scoring Surrogate: Allowing F_s(N) to be the collection of S-N-beam selectors, we immediately have by definition that min{ f(A′) : A′ ⊆ A, |A′| = N } − f(A) = min{ f(g_s(A)) : g_s ∈ F_s(N) } − f(A). So the scoring method is the same as the beam selection problem. Theorem: If 0 ≤ ε ≤ ε̄, then F_sc^ε(N) = F_s(N).
17 Scoring Beam Selectors Example: Score each angle by c_a = (1/|T|) Σ_{k∈T} ( Σ_i d_(k,a,i) x̂_(a,i) − TG )², where x̂_(a,i) = min{ min{ CUB_k / d_(k,a,i) : k ∈ C }, min{ NUB_k / d_(k,a,i) : k ∈ N } }. Example: Define the entropy of an angle by e_a := −Σ_i x_(a,i) ln x_(a,i), and the score of a is c_a = 1 − (e_a − min{ e_a′ : a′ ∈ A }) / max{ e_a′ : a′ ∈ A }.
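The entropy score can be sketched directly. The fluence profiles are invented, and the normalization shown is one plausible reading of the slide's formula (shifted by the minimum entropy, scaled by the maximum), so that concentrated fluence, low entropy, scores high:

```python
import math

def entropy_scores(fluence_by_angle):
    """e_a = -sum_i x_(a,i) ln x_(a,i), then
    c_a = 1 - (e_a - min e) / max e (assumed normalization)."""
    e = [-sum(x * math.log(x) for x in xs if x > 0)
         for xs in fluence_by_angle]
    lo, hi = min(e), max(e)
    return [1.0 - (ea - lo) / hi for ea in e]

# Hypothetical fluence profiles for three angles.
scores = entropy_scores([
    [0.9, 0.05, 0.05],   # concentrated -> low entropy -> high score
    [1/3, 1/3, 1/3],     # uniform -> maximal entropy -> low score
    [0.6, 0.3, 0.1],
])
```

The angle with the minimum entropy always receives score 1 under this normalization, so the scores rank angles rather than measure treatment quality directly.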
18 Vector Quantization Selectors Contiguous Partition: We say that A′ is a contiguous subset of A if A′ is an ordered subset of the form {a_j, a_(j+1), ..., a_(j+r)}. A contiguous partition of A is a collection of contiguous subsets of A that partition A, and we let W_vq(N) be the collection of N-element contiguous partitions of A. Vector Quantization Beam Selector: A VQ-N-beam selector is a function of the form g_vq : {W_j : j = 1, 2, ..., N} → {{a_j} : a_j ∈ W_j}, where {W_j : j = 1, 2, ..., N} ∈ W_vq(N). Vector Quantization Surrogate: The vector quantization approach to the beam selection problem is min{ f( ⋃_{j=1}^N g_vq(W_j) ) : g_vq ∈ F_vq(N) } − f(A), where F_vq(N) is the collection of VQ-N-beam selectors.
19 Vector Quantization Selectors Fact: The search space for the vector quantization technique is larger than the search space for the beam selection problem. Distortion: The distortion of a quantizer is Σ_{j=1}^N Σ_{a∈W_j} α(a) m(a, g_vq(W_j)). Theorem: The VQ-N-beam selector g_vq minimizes distortion for the contiguous partition {W_j : j = 1, 2, ..., N} if and only if g_vq satisfies Σ_{j=1}^N Σ_{a∈W_j} α(a) m(a, g_vq(W_j)) = Σ_{j=1}^N min{ Σ_{a∈W_j} α(a) m(a, a′) : a′ ∈ W_j }.
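The optimality condition says each cell's representative must minimize that cell's weighted distortion, which is a one-line search per cell. The weights α and metric m below are toy stand-ins (uniform weights, squared index distance):

```python
def pick_representatives(partition, alpha, m):
    """For each contiguous cell W_j, choose the a' in W_j minimizing
    sum_{a in W_j} alpha(a) * m(a, a'), per the distortion theorem."""
    return [min(cell,
                key=lambda ap: sum(alpha[a] * m(a, ap) for a in cell))
            for cell in partition]

angles = [0, 1, 2, 3, 4, 5]          # indices into the candidate angles
alpha = {a: 1.0 for a in angles}     # toy uniform importance weights
m = lambda a, b: (a - b) ** 2        # toy squared angular distance

reps = pick_representatives([[0, 1, 2], [3, 4, 5]], alpha, m)
```

With uniform weights and squared distance, each representative is the cell's middle angle, the discrete "center of mass" interpretation given on the next slide.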
20 Vector Quantization Selectors Notation: The elements of F_vq(N) that satisfy the condition are denoted g_vq^opt, and we let F_vq^opt(N) be the collection of these functions. Discrete Interpretation: In the discrete setting, the interpretation of the result is that the image set consists of the centers of mass. Theorem: If 0 ≤ ε ≤ ε̄, then F_vq^opt(N) ⊆ F_sc^ε(N) = F_s(N) ⊆ F_vq(N), where W_vq^opt is the domain of g_vq^opt ∈ F_vq^opt(N).
21 Vector Quantization Selectors Implementation: To remove the dependency on a particular solver, we consider generating a balanced probability distribution via lexmin( z(x), sort( Σ_i x_(a,i) : a ∈ A ) ), where sort is a function mapping IR^|A| into IR^|A| that reorders the components of the vector ( Σ_i x_(a,i) )_{a∈A} in nonincreasing order.
22 Vector Quantization Selectors Algorithm: Let z_0 = f(A). Initialize x̂ to be the zero vector of length |A|, t = 1, B_t = ∅, and U_t = {1, 2, ..., |A|}. Do until B_t = {1, 2, ..., |A|}: Let x_(U_t) solve z_t = min{ ‖x_(U_t)‖ : f(A) = z_0, x_(B_t) = x̂_(B_t), x_(U_t) ≥ 0 }. Let B̄_t = { j : (x_(U_t))_j must be z_t }, B_(t+1) = B_t ∪ B̄_t, U_(t+1) = U_t \ B̄_t, x̂_(B̄_t) = z_t, and t = t + 1. Finally, set α = (1/‖x̂‖) x̂.
23 Vector Quantization Selectors Theorem: With the notation of the algorithm, assume that f is a judgment function with the property that x_(U_t) solves z_t = min{ ‖x_(U_t)‖ : f(A) = z_0, x_(B_t) = x̂_(B_t), x_(U_t) ≥ 0 } = min{ ‖x_(U_t)‖ : x ∈ X, x_(B_t) = x̂_(B_t), x_(U_t) ≥ 0 }. Then the algorithm terminates with an α in the (t+1)st iteration having the property that α(a) = 0 for some a ∈ A if and only if either D_(T,B_t) x̂_(B_t) ≥ l or [ D_(T,B_t) ; D_(C∪N,B_t) ] x̂_(B_t) ≥ ( u_1 ; u_2 ). Moreover, α(a) = 0 implies a ∈ B_(t+1).
24 Deterministic Iterative Beam Selectors Basic Idea: Define a neighborhood and select a best angle over this neighborhood. Notice this can be interpreted as a scoring technique or as a vector quantizer. Seeding: Select angles via one of the earlier selectors and then adjust angles locally, i.e., try to locally improve. Alternatively, use equally spaced angles. Novelty: One of the suggested techniques uses a deleted neighborhood in an attempt to spread energy.
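The seed-and-improve idea can be sketched as a simple local search. The judgment function here is a made-up surrogate (distance to three invented "good" directions), standing in for the LP-based f used in the talk:

```python
def local_search(f, start, candidates, neighborhood=2, max_iters=50):
    """Seed with an angle set, then repeatedly move one angle to a
    nearby candidate whenever that strictly improves the judgment
    value; stop when no neighbor improves."""
    current = list(start)
    for _ in range(max_iters):
        improved = False
        for i in range(len(current)):
            for b in candidates:
                a = current[i]
                if b not in current and abs(b - a) <= neighborhood:
                    trial = current[:i] + [b] + current[i + 1:]
                    if f(trial) < f(current):
                        current, improved = trial, True
        if not improved:
            break
    return current

# Toy judgment: prefer angles near the invented directions 10, 30, 50.
def f(angles):
    return sum(min(abs(a - g) for g in (10, 30, 50)) for a in angles)

selected = local_search(f, start=[8, 28, 48], candidates=range(72))
```

Like all local searches, this only guarantees a local optimum; the deleted-neighborhood variant mentioned above changes which moves are allowed, not the overall loop.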
25 Stochastic Iterative Beam Selectors Stochastic-N-Beam Selector: A stochastic-N-beam selector is an N-beam selector such that g_st : {A} → P(A), where g_st(A) is a random set according to some distribution. Implementations: Stochastic beam selectors are implemented as simulated annealing or genetic algorithms. These differ from the deterministic methods in that they operate over the entire search space. Note on SA: Simulated annealing is typically described with a cooling schedule but implemented with a temperature of 0, i.e., pure random search is used.
26 Stochastic Iterative Beam Selectors Example GA operators: (A^1, A^2) → { a_i^1 + γ(a_i^2 − a_i^1) : i = 1, ..., N }, where γ ∈ [−0.25, 1.25] is a random number; A^1 → (A^1 \ {a_i}) ∪ {a_i + δ}, where i ∈ {1, ..., N} and δ ∈ [−π/12, π/12] are random; A^1 → (A^1 \ {a_i}) ∪ {a}, where i ∈ {1, ..., N} and a ∈ A are randomly chosen; A^1 → (A^1 \ {a_i}) ∪ {a_i + π}, where i ∈ {1, ..., N} is randomly chosen; A^1 → { a_i + (2π/N)k : k = 0, ..., N − 1 }, where i ∈ {1, ..., N} is randomly chosen.
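Three of the operators above can be sketched directly; angle sets are stored as lists of radians, and all parent values are invented for illustration:

```python
import math
import random

def crossover(A1, A2, rng):
    """Blend parents componentwise: a_i^1 + gamma*(a_i^2 - a_i^1),
    gamma drawn once from [-0.25, 1.25]."""
    gamma = rng.uniform(-0.25, 1.25)
    return [a1 + gamma * (a2 - a1) for a1, a2 in zip(A1, A2)]

def perturb(A1, rng):
    """Nudge one randomly chosen angle by delta in [-pi/12, pi/12]."""
    i = rng.randrange(len(A1))
    delta = rng.uniform(-math.pi / 12, math.pi / 12)
    return A1[:i] + [A1[i] + delta] + A1[i + 1:]

def opposite(A1, rng):
    """Replace one randomly chosen angle with its opposing angle a + pi,
    wrapped back into [0, 2*pi)."""
    i = rng.randrange(len(A1))
    return A1[:i] + [(A1[i] + math.pi) % (2 * math.pi)] + A1[i + 1:]

rng = random.Random(0)
parents = ([0.1, 1.0, 2.0], [0.3, 1.2, 2.5])
child = crossover(*parents, rng)
```

Each operator returns a set of the same cardinality N, so the selector always proposes a valid N-beam candidate; a temperature-0 SA loop would simply accept a proposal only when the judgment value improves.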
27 Experimental Design We used five 2D examples. Candidate Angles: The candidate set A is { 2πi/72 : i = 0, 1, ..., 71 }.
28 Experimental Design A taxonomy of the different selectors (SC, VQ, Scoring, GA, SA, and LS) generated 5-, 9-, and 14-beam treatments; the accompanying table of angle sets, examples, selector types, angle assignments, and totals is not recoverable from this transcription.
29 Angle Assignments The angle values on the fictitious example: BalancedAvg, BalancedMax, DualAvg, DualMax, EntropyAvg, InteriorAvg, InteriorMax, PrimalAvg, PrimalMax, Scoring
30 Numerical Results A Fictitious but Important Example Rounded variances for the deterministic selectors on the fictitious example (ideal value is 0), tabulated for N = 5, 9, 14 under the BA, BM, PA, PM, DA, DM, IA, IM, and S assignments with Scoring, VQ, and SC averages; the numeric entries are not recoverable from this transcription.
31 Stochastic Iterative Beam Selectors A Fictitious but Important Example Rounded variances for the iterative techniques on the fictitious example (again, ideal value is 0), tabulated under LS1, LS2, SA1, SA2, SA3, and the GA variants with averages; the numeric entries are not recoverable from this transcription.
32 Example Treatments Below are treatments from a VQ-14 beam selector with the BalancedAvg angle values and from an SC-5 beam selector (with a cover tolerance of 0.5 max dose) with the same angle values.
33 Average Run Times by selector and angle assignment (the LS, SA, and GA columns are not recoverable from this transcription):

Assignment   S    VQ   SC
BA          .43  .74  .55
BM          .43  .74  .55
PA          .46  .74  .58
PM          .46  .74  .58
DA          .46  .73  .58
DM          .47  .74  .58
IA          .47  .75  .58
IM          .47  .75  .61
SC1         .36  .63  .49
SC2         .37  .63  .49
S           .47  .76  .58
ENT         .36  .58  .48
34 Representative Examples
35 General Trends The vector quantizer with the BalancedAvg angle assignment consistently produced quality treatments. The balanced and interior assignments (both average and maximum) outperformed the primal and dual angle assignments, with the dual method often leading to poor-quality angle collections. Scoring methods were erratic. The local search algorithms worked well in general but were not as fast as vector quantization. The second and third simulated annealing selectors consistently produced quality treatments but were the most computationally heavy, often requiring hours to select beams. The genetic algorithm approaches did as well as simulated annealing in about half the time.
36 Thank you for your time. Please ask questions.
Parallel PIPS-SBB Multi-level parallelism for 2-stage SMIPS Lluís-Miquel Munguia, Geoffrey M. Oxberry, Deepak Rajan, Yuji Shinano ... Our contribution PIPS-PSBB*: Multi-level parallelism for Stochastic
More informationApplications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang
Introduction to Large-Scale Linear Programming and Applications Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Daniel J. Epstein Department of Industrial and Systems Engineering, University of
More informationMultiprocessor Scheduling I: Partitioned Scheduling. LS 12, TU Dortmund
Multiprocessor Scheduling I: Partitioned Scheduling Prof. Dr. Jian-Jia Chen LS 12, TU Dortmund 22/23, June, 2015 Prof. Dr. Jian-Jia Chen (LS 12, TU Dortmund) 1 / 47 Outline Introduction to Multiprocessor
More informationIntroduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim
Introduction - Motivation Many phenomena (physical, chemical, biological, etc.) are model by differential equations. Recall the definition of the derivative of f(x) f f(x + h) f(x) (x) = lim. h 0 h Its
More informationAn Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints
An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints Klaus Schittkowski Department of Computer Science, University of Bayreuth 95440 Bayreuth, Germany e-mail:
More informationA Parametric Simplex Algorithm for Linear Vector Optimization Problems
A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear
More informationAn Optimization-Based Heuristic for the Split Delivery Vehicle Routing Problem
An Optimization-Based Heuristic for the Split Delivery Vehicle Routing Problem Claudia Archetti (1) Martin W.P. Savelsbergh (2) M. Grazia Speranza (1) (1) University of Brescia, Department of Quantitative
More informationAn Inexact Newton Method for Nonlinear Constrained Optimization
An Inexact Newton Method for Nonlinear Constrained Optimization Frank E. Curtis Numerical Analysis Seminar, January 23, 2009 Outline Motivation and background Algorithm development and theoretical results
More information4.5 Simplex method. min z = c T x s.v. Ax = b. LP in standard form
4.5 Simplex method min z = c T x s.v. Ax = b x 0 LP in standard form Examine a sequence of basic feasible solutions with non increasing objective function value until an optimal solution is reached or
More informationarxiv: v1 [math.fa] 14 Jul 2018
Construction of Regular Non-Atomic arxiv:180705437v1 [mathfa] 14 Jul 2018 Strictly-Positive Measures in Second-Countable Locally Compact Non-Atomic Hausdorff Spaces Abstract Jason Bentley Department of
More informationP,NP, NP-Hard and NP-Complete
P,NP, NP-Hard and NP-Complete We can categorize the problem space into two parts Solvable Problems Unsolvable problems 7/11/2011 1 Halting Problem Given a description of a program and a finite input, decide
More informationPattern Recognition Approaches to Solving Combinatorial Problems in Free Groups
Contemporary Mathematics Pattern Recognition Approaches to Solving Combinatorial Problems in Free Groups Robert M. Haralick, Alex D. Miasnikov, and Alexei G. Myasnikov Abstract. We review some basic methodologies
More informationTYPICAL POINTS FOR ONE-PARAMETER FAMILIES OF PIECEWISE EXPANDING MAPS OF THE INTERVAL
TYPICAL POINTS FOR ONE-PARAMETER FAMILIES OF PIECEWISE EXPANDING MAPS OF THE INTERVAL DANIEL SCHNELLMANN Abstract. Let I R be an interval and T a : [0, ] [0, ], a I, a oneparameter family of piecewise
More informationGeneralized Hypothesis Testing and Maximizing the Success Probability in Financial Markets
Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Tim Leung 1, Qingshuo Song 2, and Jie Yang 3 1 Columbia University, New York, USA; leung@ieor.columbia.edu 2 City
More informationL2: Review of probability and statistics
Probability L2: Review of probability and statistics Definition of probability Axioms and properties Conditional probability Bayes theorem Random variables Definition of a random variable Cumulative distribution
More informationMachine Learning 2010
Machine Learning 2010 Concept Learning: The Logical Approach Michael M Richter Email: mrichter@ucalgary.ca 1 - Part 1 Basic Concepts and Representation Languages 2 - Why Concept Learning? Concepts describe
More informationMATHEMATICAL OPTIMIZATION FOR THE INVERSE PROBLEM OF INTENSITY MODULATED RADIATION THERAPY
MATHEMATICAL OPTIMIZATION FOR THE INVERSE PROBLEM OF INTENSITY MODULATED RADIATION THERAPY Yair Censor, DSc Department of Mathematics, University of Haifa, Haifa, Israel The 2003 AAPM Summer School on
More informationThe Kernel Trick, Gram Matrices, and Feature Extraction. CS6787 Lecture 4 Fall 2017
The Kernel Trick, Gram Matrices, and Feature Extraction CS6787 Lecture 4 Fall 2017 Momentum for Principle Component Analysis CS6787 Lecture 3.1 Fall 2017 Principle Component Analysis Setting: find the
More informationLecture 5: Efficient PAC Learning. 1 Consistent Learning: a Bound on Sample Complexity
Universität zu Lübeck Institut für Theoretische Informatik Lecture notes on Knowledge-Based and Learning Systems by Maciej Liśkiewicz Lecture 5: Efficient PAC Learning 1 Consistent Learning: a Bound on
More informationBranch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems
Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems Yongjia Song James R. Luedtke August 9, 2012 Abstract We study solution approaches for the design of reliably
More informationBasic Probabilistic Reasoning SEG
Basic Probabilistic Reasoning SEG 7450 1 Introduction Reasoning under uncertainty using probability theory Dealing with uncertainty is one of the main advantages of an expert system over a simple decision
More informationCh. 10 Vector Quantization. Advantages & Design
Ch. 10 Vector Quantization Advantages & Design 1 Advantages of VQ There are (at least) 3 main characteristics of VQ that help it outperform SQ: 1. Exploit Correlation within vectors 2. Exploit Shape Flexibility
More informationLinear Programming Duality
Summer 2011 Optimization I Lecture 8 1 Duality recap Linear Programming Duality We motivated the dual of a linear program by thinking about the best possible lower bound on the optimal value we can achieve
More informationII. Analysis of Linear Programming Solutions
Optimization Methods Draft of August 26, 2005 II. Analysis of Linear Programming Solutions Robert Fourer Department of Industrial Engineering and Management Sciences Northwestern University Evanston, Illinois
More informationProduction Capacity Modeling of Alternative, Nonidentical, Flexible Machines
The International Journal of Flexible Manufacturing Systems, 14, 345 359, 2002 c 2002 Kluwer Academic Publishers Manufactured in The Netherlands Production Capacity Modeling of Alternative, Nonidentical,
More informationLearning with Decision-Dependent Uncertainty
Learning with Decision-Dependent Uncertainty Tito Homem-de-Mello Department of Mechanical and Industrial Engineering University of Illinois at Chicago NSF Workshop, University of Maryland, May 2010 Tito
More informationLebesgue Measure. Dung Le 1
Lebesgue Measure Dung Le 1 1 Introduction How do we measure the size of a set in IR? Let s start with the simplest ones: intervals. Obviously, the natural candidate for a measure of an interval is its
More informationAppendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS
Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution
More informationInteger Programming Chapter 15
Integer Programming Chapter 15 University of Chicago Booth School of Business Kipp Martin November 9, 2016 1 / 101 Outline Key Concepts Problem Formulation Quality Solver Options Epsilon Optimality Preprocessing
More informationA SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION
A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION Anders FORSGREN Technical Report TRITA-MAT-2009-OS7 Department of Mathematics Royal Institute of Technology November 2009 Abstract
More informationC.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University
Quantization C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University http://www.csie.nctu.edu.tw/~cmliu/courses/compression/ Office: EC538 (03)5731877 cmliu@cs.nctu.edu.tw
More information