LMI Methods in Optimal and Robust Control
1 LMI Methods in Optimal and Robust Control Matthew M. Peet Arizona State University Lecture 02: Optimization (Convex and Otherwise)
2 What is Optimization? An optimization problem has 3 parts:

min_{x ∈ F} f(x)  subject to  g_i(x) ≤ 0, i = 1, ..., K_1;  h_i(x) = 0, i = 1, ..., K_2

Variables: x ∈ F. The things you must choose. F represents the set of possible choices for the variables. These can be vectors, matrices, functions, systems, locations, colors... However, computers prefer vectors or matrices.

Objective: f(x). A function which assigns a scalar value to any choice of variables. e.g. [x_1, x_2] ↦ x_1 − x_2; red ↦ 4; etc.

Constraints: g(x) ≤ 0; h(x) = 0. These define what is a minimally acceptable choice of variables. Equality and inequality constraints are common. x is OK if g(x) ≤ 0 and h(x) = 0. Constraints mean the variables are not independent. (Constrained optimization is much harder.)

M. Peet Lecture 02: Optimization 2 / 31
3 Formulating Optimization Problems: What do we need to know?

Topics to Cover:
Formulating Constraints: tricks of the trade for expressing constraints; converting everything to equality and inequality constraints.
Equivalence: how to recognize if two optimization problems are equivalent. This may be true despite different variables, constraints and objectives.
Knowing which Problems are Solvable: the convex ones. Some others, if the problem is relatively small.
4 Least Squares: Unconstrained Optimization

Problem: Given a bunch of data in the form
Inputs: a_i
Outputs: b_i
find the function f(a) = b which best fits the data.

For least squares, assume f(a) = z^T a + z_0, where z ∈ R^n and z_0 ∈ R are the variables, with objective

h(z) := Σ_{i=1}^K |f(a_i) − b_i|^2 = Σ_{i=1}^K |z^T a_i + z_0 − b_i|^2.

The optimization problem is

min_{z ∈ R^n} ||Az − b||^2,  where  A := [a_1^T 1; ... ; a_K^T 1],  b := [b_1; ... ; b_K].
5 Least Squares: Unconstrained Optimization

Least squares problems are easy-ish to solve:

z = (A^T A)^{−1} A^T b.

Note that A is assumed to be skinny: more rows than columns, i.e. more data points than inputs (the dimension of a_i is small).
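The closed-form solution can be sketched numerically. The data below are made up for illustration; numpy's least-squares solver is used rather than forming (A^T A)^{−1} explicitly, which is numerically preferable:

```python
import numpy as np

# Illustrative data (assumed): K = 6 noisy samples of b = 2*a1 - a2 + 0.5
rng = np.random.default_rng(0)
a = rng.normal(size=(6, 2))
b = a @ np.array([2.0, -1.0]) + 0.5 + 0.01 * rng.normal(size=6)

# Build A = [a_i^T 1] so the affine term z0 is the last variable
A = np.hstack([a, np.ones((6, 1))])

# Solve min ||Az - b||^2
z, *_ = np.linalg.lstsq(A, b, rcond=None)
print(z)  # approximately [2, -1, 0.5]
```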
6 Integer Programming Example: MAX-CUT

Optimization on a graph. Graphs have nodes and edges.

Figure: Division of a set of nodes to maximize the weighted cost of separation

Goal: Assign each node i an index x_i = +1 or x_i = −1 to maximize the overall cost.
The cost if x_i and x_j do not share the same index is w_ij.
The cost if they share an index is 0.
The weights w_ij are given.
7 Integer Programming Example: MAX-CUT

Goal: Assign each node i an index x_i = +1 or x_i = −1 to maximize the overall cost.

Variables: x ∈ {−1, 1}^n. Referred to as integer variables or binary variables.
Binary constraints can be incorporated explicitly: x_i^2 = 1.
Integer/binary variables may be declared directly in YALMIP:
> x = intvar(n);
> y = binvar(n);
8 Integer Programming Example: MAX-CUT

Objective: We use the trick:
(1 − x_i x_j) = 0 if x_i and x_j have the same sign (together).
(1 − x_i x_j) = 2 if x_i and x_j have the opposite sign (apart).

Then the objective function is (1/2) Σ_{i,j} w_ij (1 − x_i x_j).

The optimization problem is the integer program:

max_{x_i^2 = 1} (1/2) Σ_{i,j} w_ij (1 − x_i x_j)
9 MAX-CUT

The optimization problem is the integer program:

max_{x_i^2 = 1} (1/2) Σ_{i,j} w_ij (1 − x_i x_j)

Consider the MAX-CUT problem with 5 nodes and weights

w_12 = w_23 = w_45 = w_15 = w_34 = .5 and w_14 = w_24 = w_25 = 0, where w_ij = w_ji.

An optimal cut is x_1 = x_3 = x_4 = 1, x_2 = x_5 = −1. This cut has objective value

f(x) = 2.5 − .5x_1x_2 − .5x_2x_3 − .5x_3x_4 − .5x_4x_5 − .5x_1x_5 = 4.

Figure: An Optimal Cut
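An example this small can be checked by brute force; a sketch enumerating all 2^5 sign assignments (nodes renumbered 0-based):

```python
import itertools
import numpy as np

# Weighted adjacency matrix for the 5-node example (w_ij = w_ji)
n = 5
W = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (3, 4), (0, 4), (2, 3)]:
    W[i, j] = W[j, i] = 0.5

def cut_value(x):
    # f(x) = (1/2) * sum over ordered pairs (i,j) of w_ij * (1 - x_i x_j)
    x = np.array(x)
    return 0.5 * np.sum(W * (1 - np.outer(x, x)))

# Brute force over all 2^5 assignments
best = max(itertools.product([-1, 1], repeat=n), key=cut_value)
print(cut_value(best))               # 4.0
print(cut_value([1, -1, 1, 1, -1]))  # the cut from the slide also attains 4.0
```

Since the nonzero weights form an odd cycle, at most 4 of the 5 edges can be cut, which is exactly what the enumeration finds.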
10 Dynamic Programming and LQR: Open-Loop Case

Objective Function: Let's minimize a quadratic cost

h(x, u) := x(N)^T S x(N) + Σ_{k=1}^{N−1} ( x(k)^T Q x(k) + u(k)^T R u(k) )

Variables: The states x(k) and inputs u(k).
Constraint: The dynamics define how u → x:

x(k + 1) = A x(k) + B u(k),  x(0) = 1,  k = 0, ..., N

Note: The choice of x(0) = 1 does not affect the solution.

Combined:

min_{x,u} x(N)^T S x(N) + Σ_{k=1}^{N−1} ( x(k)^T Q x(k) + u(k)^T R u(k) )
subject to x(k + 1) = A x(k) + B u(k),  x(0) = 1,  k = 0, ..., N
11 Dynamic Programming and LQR: Closed-Loop Case

Objective Function: Let's minimize a quadratic cost

x(N)^T S x(N) + Σ_{k=1}^{N−1} ( x(k)^T Q x(k) + u(k)^T R u(k) )

Variables: We want a fixed policy (gain matrix, K) which determines u(k) based on x(k) as u(k) = K x(k).
Constraint: The dynamics define how u → x.

Combined:

min_{x,u} x(N)^T S x(N) + Σ_{k=1}^{N−1} ( x(k)^T Q x(k) + u(k)^T R u(k) )
subject to x(k + 1) = A x(k) + B u(k),  u(k) = K x(k),  x(0) = 1,  k = 0, ..., N

Question: Are the closed-loop and open-loop problems equivalent?
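For reference, the finite-horizon problem can be solved by dynamic programming via the backward Riccati recursion, which produces time-varying gains K_k with u(k) = −K_k x(k). A minimal scalar sketch; the values of A, B, Q, R, S and the horizon N are illustrative assumptions, and the stage cost here is summed from k = 0:

```python
# Illustrative scalar system and weights (assumptions, not from the slides)
A, B, Q, R, S, N = 0.9, 1.0, 1.0, 1.0, 1.0, 20

# Backward Riccati recursion: P_N = S, then for k = N-1, ..., 0
#   K_k = (R + B P_{k+1} B)^{-1} B P_{k+1} A
#   P_k = Q + A P_{k+1} A - A P_{k+1} B K_k
P = S
gains = []
for k in range(N - 1, -1, -1):
    K = B * P * A / (R + B * P * B)
    P = Q + A * P * A - A * P * B * K
    gains.append(K)
gains.reverse()  # gains[k] is K_k, with u(k) = -K_k x(k)

def cost(u_of):
    # Simulate from x(0) = 1 and accumulate the quadratic cost
    x, J = 1.0, 0.0
    for k in range(N):
        u = u_of(k, x)
        J += Q * x * x + R * u * u
        x = A * x + B * u
    return J + S * x * x

# The DP feedback can do no worse than, e.g., applying no input at all
print(cost(lambda k, x: -gains[k] * x) <= cost(lambda k, x: 0.0))  # True
```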
12 Formulating Optimization Problems: Equivalence

Definition 1. Two optimization problems are Equivalent if a solution to one can be used to construct a solution to the other.

Example 1: Equivalent Objective Functions
Problem 1: min_x f(x) subject to A^T x ≥ b
Problem 2: min_x 10 f(x) − 12 subject to A^T x ≥ b
Problem 3: max_x 1/f(x) subject to A^T x ≥ b

In this case x_1* = x_2* = x_3*.

Proof: For any feasible x ≠ x_1*, we have f(x) > f(x_1*). Thus 10 f(x) − 12 > 10 f(x_1*) − 12 and 1/f(x) < 1/f(x_1*); i.e., x is suboptimal for all three problems.
13 Formulating Optimization Problems: Equivalence in Variables

Example 2: Equivalent Variables
Problem 1: min_x f(x) subject to A^T x ≥ b
Problem 2: min_x f(Tx + c) subject to (T^T A)^T x ≥ b − A^T c

Here x_1* = T x_2* + c and x_2* = T^{−1}(x_1* − c). The change of variables is invertible. (Given x ≠ x_2*, you can show it is suboptimal.)

Example 3: Variable Separability
Problem 1: min_{x,y} f(x) + g(y) subject to A_1^T x ≥ b_1, A_2^T y ≥ b_2
Problem 2: min_x f(x) subject to A_1^T x ≥ b_1
Problem 3: min_y g(y) subject to A_2^T y ≥ b_2

Here x_1* = x_2* and y_1* = y_3*. Neither feasibility nor minimality is coupled (the objective function is separable).
14 Formulating Optimization Problems: Constraint Equivalence

Example 4: Constraint/Objective Equivalence
Problem 1: min_x f(x) subject to g(x) ≤ 0
Problem 2: min_{x,t} t subject to g(x) ≤ 0, t ≥ f(x)

Here x_1* = x_2* and t_2* = f(x_1*).

Some other equivalences:
Redundant constraints: {x ∈ R : x > 1} vs. {x ∈ R : x > 1, x > 0}
Polytopes (vertices vs. hyperplanes): {x ∈ R^n : x = Σ_i A_i α_i, Σ_i α_i = 1, α_i ≥ 0} vs. {x ∈ R^n : Cx > b}
15 Machine Learning: Classification and Support-Vector Machines

In classification we have inputs (data) x_i, each of which has a binary label y_i ∈ {−1, +1}:
y_i = +1 means the output of x_i belongs to group 1.
y_i = −1 means the output of x_i belongs to group 2.

We want to find a rule (a classifier) which takes the data x and predicts which group it is in. Our rule has the form of a function f(x) = w^T x − b. Then
x is in group 1 if f(x) = w^T x − b > 0.
x is in group 2 if f(x) = w^T x − b < 0.

Question: How do we find the best w and b?

Figure: We want to find a rule which separates two sets of data.
16 Machine Learning: Classification and Support-Vector Machines

Definition 2. A Hyperplane is the generalization of the concept of a line/plane to multiple dimensions:

{x ∈ R^n : w^T x − b = 0}

Half-Spaces are the parts above and below a hyperplane:

{x ∈ R^n : w^T x − b ≥ 0} OR {x ∈ R^n : w^T x − b ≤ 0}
17 Machine Learning: Classification and Support-Vector Machines

We want to separate the data into disjoint half-spaces and maximize the distance between these half-spaces.

Variables: w ∈ R^n and b define the hyperplane.
Constraint: Each existing data point should be correctly labelled (strict separation):
w^T x_i − b > 1 when y_i = +1 and w^T x_i − b < −1 when y_i = −1.
Alternatively: y_i(w^T x_i − b) ≥ 1. These two constraints are equivalent.

Objective: The distance between the hyperplanes {x : w^T x − b = 1} and {x : w^T x − b = −1} is 2/||w||, so we minimize

f(w, b) = (1/2) w^T w

Figure: Maximizing the distance between two sets of data.
18 Machine Learning: Unconstrained Form (Soft-Margin SVM)

Machine learning algorithms solve

min_{w ∈ R^p, b ∈ R} (1/2) w^T w, subject to y_i(w^T x_i − b) ≥ 1, i = 1, ..., K.

Soft-Margin Problems: The hard-margin problem can be relaxed to maximize the distance between the hyperplanes PLUS the magnitude of the classification errors.

Soft-Margin SVM Problem:

min_{w ∈ R^p, b ∈ R} (1/2) ||w||^2 + c Σ_{i=1}^n max(0, 1 − (w^T x_i − b) y_i).

Link: Repository of Interesting Machine Learning Data Sets
Figure: Data separation using soft-margin metric and distances to associated hyperplanes
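The soft-margin objective is unconstrained, so it can be attacked directly with subgradient descent. A minimal numpy sketch on made-up two-cluster data; the data, step sizes and iteration count are illustrative assumptions:

```python
import numpy as np

# Toy separable data (assumed for illustration): two clusters in R^2
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2.0, 0.4, size=(20, 2)),
               rng.normal(-2.0, 0.4, size=(20, 2))])
y = np.hstack([np.ones(20), -np.ones(20)])

# Objective: (1/2)||w||^2 + c * sum_i max(0, 1 - y_i (w^T x_i - b))
c, w, b = 1.0, np.zeros(2), 0.0
for t in range(1, 2001):
    lr = 1.0 / t
    margins = y * (X @ w - b)
    active = margins < 1                     # points with nonzero hinge loss
    grad_w = w - c * (y[active, None] * X[active]).sum(axis=0)
    grad_b = c * y[active].sum()
    w -= lr * grad_w
    b -= lr * grad_b

# All training points should now be classified correctly
print(np.all(np.sign(X @ w - b) == y))  # True
```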
19 Geometric vs. Functional Constraints

These problems are all equivalent:

The Classical Representation:
min_{x ∈ R^n} f(x) subject to g_i(x) ≤ 0, i = 1, ..., k

The Geometric Representation:
min_{x ∈ R^n} f(x) subject to x ∈ S
where S := {x ∈ R^n : g_i(x) ≤ 0, i = 1, ..., k}.

The Pure Geometric Representation (x is eliminated!):
min_γ γ subject to S_γ ≠ ∅ (S_γ has at least one element)
where S_γ := {x ∈ R^n : γ − f(x) ≥ 0, g_i(x) ≤ 0, i = 1, ..., k}.

Proposition: Optimization is only as hard as determining feasibility!
20 Solving by Bisection (Do you have an Oracle?)

Optimization Problem: γ* = max γ subject to S_γ ≠ ∅

Bisection Algorithm (Convexity???):
1. Initialize infeasible γ_u = b.
2. Initialize feasible γ_l = a.
3. Set γ = (γ_u + γ_l)/2.
4. If S_γ is infeasible, set γ_u = γ.
5. If S_γ is feasible, set γ_l = γ.
6. k = k + 1.
7. Go to 3.

Then γ* ∈ [γ_l, γ_u] and γ_u − γ_l ≤ (b − a)/2^k.

Bisection with an oracle also solves the primary problem.
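The algorithm can be sketched with a generic feasibility oracle; the particular oracle below (feasibility threshold √2) is an illustrative assumption:

```python
import math

def bisect_max(feasible, lo, hi, iters=50):
    """Maximize gamma given an oracle: feasible(lo) is True, feasible(hi) is False."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid   # mid is achievable: raise the lower bound
        else:
            hi = mid   # mid is not achievable: lower the upper bound
    return lo, hi

# Illustrative oracle: S_gamma = {x : x^2 <= 2, x >= gamma} is nonempty iff gamma <= sqrt(2)
oracle = lambda g: g <= math.sqrt(2.0)
lo, hi = bisect_max(oracle, 0.0, 2.0)
print(lo, hi)  # both approximately 1.41421356...
```

After k iterations the bracket has width (b − a)/2^k, exactly as on the slide.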
21 Computational Complexity

In computer science, we focus on the complexity of the PROBLEM, NOT the complexity of the algorithm. On a Turing machine, the number of steps is a function of problem size (number of variables):

NL: a logarithmic number (SORT)
P: a polynomial number (LP)
NP: a polynomial number for verification (TSP)
NP-HARD: at least as hard as NP (TSP)
NP-COMPLETE: a set of equivalent* NP problems (MAX-CUT, TSP)
EXPTIME: solvable in 2^{p(n)} steps, p polynomial (Chess)
EXPSPACE: solvable with 2^{p(n)} memory

*Equivalent means there is a polynomial-time reduction from one to the other.
22 How Hard is Optimization?

The Classical Representation:
min_{x ∈ R^n} f(x) subject to g_i(x) ≤ 0, i = 1, ..., k; h_i(x) = 0, i = 1, ..., k
Answer: Easy (P) if f and all the g_i are convex and the h_i are affine.

The Geometric Representation:
min_{x ∈ R^n} f(x) subject to x ∈ S
Answer: Easy (P) if f is convex and S is a convex set.

The Pure Geometric Representation:
max_{γ, x ∈ R^n} γ subject to (γ, x) ∈ S
Answer: Easy (P) if S is a convex set.
23 Convex Functions

Definition 3. An OBJECTIVE FUNCTION or CONSTRAINT function is convex if

f(λx_1 + (1 − λ)x_2) ≤ λ f(x_1) + (1 − λ) f(x_2) for all λ ∈ [0, 1].

Useful Facts:
e^{ax} and |x| are convex. x^n (n ≥ 1 or n ≤ 0) and −log x are convex on x > 0.
If f_1 and f_2 are convex, then f_3(x) := f_1(x) + f_2(x) is convex.
A function is convex if ∇²f(x) is positive semidefinite for all x.
If f_1 and f_2 are convex, then f_3(x) := max(f_1(x), f_2(x)) is convex.
If f_1 and f_2 are convex and f_1 is increasing, then f_3(x) := f_1(f_2(x)) is convex.
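The Hessian test can be illustrated numerically; a sketch (the matrix A is randomly generated for illustration) checking that a quadratic built from A^T A has a positive semidefinite Hessian:

```python
import numpy as np

# For f(x) = x^T (A^T A) x + c^T x the Hessian is 2 A^T A at every x,
# so a single eigenvalue check certifies convexity.
rng = np.random.default_rng(2)
A = rng.normal(size=(5, 3))
H = 2 * A.T @ A                  # Hessian of x^T (A^T A) x
eigs = np.linalg.eigvalsh(H)
print(np.all(eigs >= -1e-12))    # True: A^T A is always PSD, so f is convex
```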
24 Convex Sets

Definition 4. A FEASIBLE SET Q is convex if for any x, y ∈ Q,

{μx + (1 − μ)y : μ ∈ [0, 1]} ⊂ Q.

The line connecting any two points lies in the set.

Facts:
If f is convex, then {x : f(x) ≤ 0} is convex.
The intersection of convex sets is convex: if S_1 and S_2 are convex, then S_3 := {x : x ∈ S_1, x ∈ S_2} is convex.
25 Descent Algorithms (Why Convex Optimization is Easy): Unconstrained Optimization

All descent algorithms are iterative, with a search direction (Δx ∈ R^n) and step size (t ≥ 0):

x_{k+1} = x_k + t Δx

Gradient Descent: Δx = −∇f(x)
Newton's Algorithm: Δx = −(∇²f(x))^{−1} ∇f(x). Tries to solve the equation ∇f(x) = 0.

Both converge for sufficiently small step size.
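Both updates can be sketched on an illustrative convex function f(x) = e^x + x^2 (this example function is an assumption, not from the slides):

```python
import math

# f(x) = e^x + x^2 is convex; its minimizer solves f'(x) = e^x + 2x = 0
f_prime = lambda x: math.exp(x) + 2 * x
f_second = lambda x: math.exp(x) + 2

# Gradient descent: x <- x - t * f'(x) with a small fixed step
x = 0.0
for _ in range(200):
    x -= 0.1 * f_prime(x)
gd = x

# Newton's method: x <- x - f'(x) / f''(x), i.e. root-finding on f'(x) = 0
x = 0.0
for _ in range(20):
    x -= f_prime(x) / f_second(x)
print(gd, x)  # both near the minimizer x* ≈ -0.3517
```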
26 Descent Algorithms: Dealing with Constraints

Method 1: Gradient Projection
Figure: Must project the step (t Δx) onto the feasible set.

Method 2: Barrier Functions

min_x f(x) − log(−g(x))

Converts a constrained problem to an unconstrained problem. (Interior-Point Methods)
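A minimal barrier-function sketch, assuming the toy problem of minimizing x subject to x ≥ 1 (i.e. g(x) = 1 − x ≤ 0): the barrier minimizers trace the central path x(μ) = 1 + μ, which approaches the constrained optimum x* = 1 as μ → 0.

```python
# Barrier problem: minimize phi(x) = x - mu * log(x - 1). Since phi is
# convex with phi'(x) = 1 - mu / (x - 1) increasing, we can locate the
# root of phi' by bisection inside the feasible interior.
def barrier_min(mu, lo=1.0 + 1e-12, hi=1e6, iters=200):
    dphi = lambda x: 1 - mu / (x - 1)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The central path x(mu) = 1 + mu approaches the constrained optimum x* = 1
for mu in [1.0, 0.1, 0.01]:
    print(mu, barrier_min(mu))  # approximately 1 + mu
```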
27 Non-Convexity and Local Optima 1. For convex optimization problems, Descent Methods Always find the global optimal point. 2. For non-convex optimization, Descent Algorithms may get stuck at local optima.
28 Important Classes of Optimization Problems: Linear Programming

Linear Programming (LP):

min_{x ∈ R^n} c^T x subject to Ax ≤ b, Âx = b̂

or equivalently min_{x ∈ R^n} c^T x subject to Ax = b, Cx ≤ d.

EASY (P): Simplex/Ellipsoid Algorithm. Can solve for >10,000 variables.

Example:
minimize x_1 + x_2
subject to 3x_1 + x_2 ≥ 3, x_2 ≥ 1, x_1 ≤ 4, x_1 + 5x_2 ≤ 20, x_1 + 4x_2 ≤ 20

Link: A List of Solvers, Performance and Benchmark Problems
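Small LPs in the inequality form min c^T x s.t. A x ≤ b can be handed to scipy's linprog; a sketch with illustrative (assumed) constraint data:

```python
from scipy.optimize import linprog

# Illustrative LP (data assumed, chosen so the feasible set is nonempty):
#   minimize   x1 + x2
#   subject to 3x1 + x2 >= 3,  x2 >= 1,  x1 <= 4,  x1 + 5x2 <= 20
# linprog expects A_ub @ x <= b_ub, so the >= rows are negated.
c = [1.0, 1.0]
A_ub = [[-3.0, -1.0],
        [0.0, -1.0],
        [1.0, 0.0],
        [1.0, 5.0]]
b_ub = [-3.0, -1.0, 4.0, 20.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, res.fun)  # optimum at (2/3, 1) with value 5/3
```

Note linprog's default variable bounds are x ≥ 0, which is harmless here since the optimum has both components positive.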
29 Important Classes of Optimization Problems: Quadratic Programming

Quadratic Programming (QP):

min_{x ∈ R^n} x^T Q x + c^T x subject to Ax ≤ b

EASY (P): if Q ⪰ 0.
HARD (NP-Hard): otherwise.
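For the unconstrained convex case the minimizer is available in closed form; a sketch with an illustrative positive definite Q:

```python
import numpy as np

# Unconstrained convex QP: minimize x^T Q x + c^T x with Q positive definite.
# Setting the gradient 2 Q x + c to zero gives x* = -0.5 * Q^{-1} c.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # illustrative positive definite Q
c = np.array([1.0, -1.0])

x_star = -0.5 * np.linalg.solve(Q, c)

# Verify optimality: the gradient vanishes at x*
print(2 * Q @ x_star + c)  # approximately [0, 0]
```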
30 Important Classes of Optimization Problems: Mixed-Integer Linear Programming

Mixed-Integer Linear Programming (MILP):

min_{x ∈ R^n} c^T x subject to Ax ≤ b, x_i ∈ Z, i = 1, ..., K

HARD (NP-Hard)

Mixed-Integer NonLinear Programming (MINLP):

min_{x ∈ R^n} f(x) subject to g_i(x) ≤ 0, x_i ∈ Z, i = 1, ..., K

Very Hard
31 Next Time: Positive Matrices, SDP and LMIs Also a bit on Duality, Relaxations.
More informationConvex Optimization and Modeling
Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual
More informationOptimization. Yuh-Jye Lee. March 21, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 29
Optimization Yuh-Jye Lee Data Science and Machine Intelligence Lab National Chiao Tung University March 21, 2017 1 / 29 You Have Learned (Unconstrained) Optimization in Your High School Let f (x) = ax
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More informationLECTURE 7 Support vector machines
LECTURE 7 Support vector machines SVMs have been used in a multitude of applications and are one of the most popular machine learning algorithms. We will derive the SVM algorithm from two perspectives:
More informationLinear & nonlinear classifiers
Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1394 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1394 1 / 34 Table
More informationMachine Learning. Support Vector Machines. Fabio Vandin November 20, 2017
Machine Learning Support Vector Machines Fabio Vandin November 20, 2017 1 Classification and Margin Consider a classification problem with two classes: instance set X = R d label set Y = { 1, 1}. Training
More informationSupport Vector Machine (SVM) & Kernel CE-717: Machine Learning Sharif University of Technology. M. Soleymani Fall 2012
Support Vector Machine (SVM) & Kernel CE-717: Machine Learning Sharif University of Technology M. Soleymani Fall 2012 Linear classifier Which classifier? x 2 x 1 2 Linear classifier Margin concept x 2
More informationLearning the Kernel Matrix with Semi-Definite Programming
Learning the Kernel Matrix with Semi-Definite Programg Gert R.G. Lanckriet gert@cs.berkeley.edu Department of Electrical Engineering and Computer Science University of California, Berkeley, CA 94720, USA
More informationLecture: Examples of LP, SOCP and SDP
1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More informationConditional Gradient (Frank-Wolfe) Method
Conditional Gradient (Frank-Wolfe) Method Lecturer: Aarti Singh Co-instructor: Pradeep Ravikumar Convex Optimization 10-725/36-725 1 Outline Today: Conditional gradient method Convergence analysis Properties
More informationMachine Learning A Geometric Approach
Machine Learning A Geometric Approach CIML book Chap 7.7 Linear Classification: Support Vector Machines (SVM) Professor Liang Huang some slides from Alex Smola (CMU) Linear Separator Ham Spam From Perceptron
More informationOptimization of Polynomials
Optimization of Polynomials Matthew M. Peet Arizona State University Thanks to S. Lall and P. Parrilo for guidance and supporting material Lecture 03: Optimization of Polynomials Overview In this lecture,
More informationAgenda. 1 Cone programming. 2 Convex cones. 3 Generalized inequalities. 4 Linear programming (LP) 5 Second-order cone programming (SOCP)
Agenda 1 Cone programming 2 Convex cones 3 Generalized inequalities 4 Linear programming (LP) 5 Second-order cone programming (SOCP) 6 Semidefinite programming (SDP) 7 Examples Optimization problem in
More informationMATH 4211/6211 Optimization Basics of Optimization Problems
MATH 4211/6211 Optimization Basics of Optimization Problems Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 A standard minimization
More informationA Brief Review on Convex Optimization
A Brief Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one convex, two nonconvex sets): A Brief Review
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationMathematical Optimization Models and Applications
Mathematical Optimization Models and Applications Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 1, 2.1-2,
More informationLecture 9: Large Margin Classifiers. Linear Support Vector Machines
Lecture 9: Large Margin Classifiers. Linear Support Vector Machines Perceptrons Definition Perceptron learning rule Convergence Margin & max margin classifiers (Linear) support vector machines Formulation
More informationCSCI 1951-G Optimization Methods in Finance Part 09: Interior Point Methods
CSCI 1951-G Optimization Methods in Finance Part 09: Interior Point Methods March 23, 2018 1 / 35 This material is covered in S. Boyd, L. Vandenberge s book Convex Optimization https://web.stanford.edu/~boyd/cvxbook/.
More informationMLCC 2017 Regularization Networks I: Linear Models
MLCC 2017 Regularization Networks I: Linear Models Lorenzo Rosasco UNIGE-MIT-IIT June 27, 2017 About this class We introduce a class of learning algorithms based on Tikhonov regularization We study computational
More informationSolution Methods. Richard Lusby. Department of Management Engineering Technical University of Denmark
Solution Methods Richard Lusby Department of Management Engineering Technical University of Denmark Lecture Overview (jg Unconstrained Several Variables Quadratic Programming Separable Programming SUMT
More information15. Conic optimization
L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone
More informationConvex Optimization in Classification Problems
New Trends in Optimization and Computational Algorithms December 9 13, 2001 Convex Optimization in Classification Problems Laurent El Ghaoui Department of EECS, UC Berkeley elghaoui@eecs.berkeley.edu 1
More informationSupport Vector Machines. CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington
Support Vector Machines CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington 1 A Linearly Separable Problem Consider the binary classification
More informationLecture 8. Strong Duality Results. September 22, 2008
Strong Duality Results September 22, 2008 Outline Lecture 8 Slater Condition and its Variations Convex Objective with Linear Inequality Constraints Quadratic Objective over Quadratic Constraints Representation
More informationMachine Learning And Applications: Supervised Learning-SVM
Machine Learning And Applications: Supervised Learning-SVM Raphaël Bournhonesque École Normale Supérieure de Lyon, Lyon, France raphael.bournhonesque@ens-lyon.fr 1 Supervised vs unsupervised learning Machine
More information