
The problem under consideration is the convex mixed-integer nonlinear program (MINLP)

\[
\begin{aligned}
\min_{x,y}\quad & f(x,y) \\
\text{s.t.}\quad & g_j(x,y) \le 0, \quad j = 1,\dots,l, \\
& Ax + By \le b, \\
& x \in \mathbb{R}^n,\ y \in \mathbb{Z}^m,
\end{aligned}
\]

where $f, g_1,\dots,g_l : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ are convex and continuously differentiable, $y$ collects the integer variables and $x$ the continuous variables.

Given the points $\{(x^i, y^i)\}_{i=0}^{k}$ visited so far, the next integer assignment $y^{k+1}$ is obtained from the MILP master problem

\[
\begin{aligned}
\min_{x,y,\mu}\quad & \mu \\
\text{s.t.}\quad & f(x^i,y^i) + \nabla f(x^i,y^i)^T \begin{bmatrix} x - x^i \\ y - y^i \end{bmatrix} \le \mu, \quad i = 1,\dots,k, \\
& g_j(x^i,y^i) + \nabla g_j(x^i,y^i)^T \begin{bmatrix} x - x^i \\ y - y^i \end{bmatrix} \le 0, \quad i = 1,\dots,k,\ j \in A^i, \\
& Ax + By \le b, \\
& x \in \mathbb{R}^n,\ y \in \mathbb{Z}^m,\ \mu \in \mathbb{R},
\end{aligned}
\]

where $A^i$ denotes the set of nonlinear constraints active at $(x^i, y^i)$. Its optimal value provides the lower bound $LB^{k+1}$. With $y^{k+1}$ fixed, the continuous NLP subproblem

\[
\begin{aligned}
\min_{x}\quad & f(x, y^{k+1}) \\
\text{s.t.}\quad & g_j(x, y^{k+1}) \le 0, \quad j = 1,\dots,l, \\
& Ax + By^{k+1} \le b, \qquad x \in \mathbb{R}^n,
\end{aligned}
\]

is solved; its minimizer $x^{k+1}$ yields the upper bound $UB^{k+1}$. If the subproblem is infeasible for $y^{k+1}$, $x^{k+1}$ is instead taken from the feasibility problem

\[
\begin{aligned}
\min_{x,r}\quad & r \\
\text{s.t.}\quad & g_j(x, y^{k+1}) \le r, \quad j = 1,\dots,l, \\
& Ax + By^{k+1} \le b, \qquad x \in \mathbb{R}^n,\ r \in \mathbb{R}_+,
\end{aligned}
\]

which minimizes the largest violation of the $l$ nonlinear constraints.
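As an illustration of how these linearizations can be generated, the sketch below (an assumption, not part of the original text; func and grad are user-supplied callables for one of $f, g_1, \dots, g_l$ and its gradient) collects the data of a single outer-approximation cut:

import numpy as np

def oa_cut(func, grad, x_i, y_i):
    # Data of the linearization of func at (x_i, y_i).  The cut reads
    #   c0 + g^T ([x; y] - z_i) <= mu   for the objective, or
    #   c0 + g^T ([x; y] - z_i) <= 0    for a constraint g_j.
    z_i = np.concatenate([x_i, y_i])   # linearization point stacked as one vector
    c0 = func(x_i, y_i)                # function value at the point
    g = grad(x_i, y_i)                 # gradient with respect to (x, y), length n + m
    return c0, g, z_i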

After the subproblem has been solved, linearizations at the new point $(x^{k+1}, y^{k+1})$ are added to the master problem:

\[
\begin{aligned}
& f(x^{k+1},y^{k+1}) + \nabla f(x^{k+1},y^{k+1})^T \begin{bmatrix} x - x^{k+1} \\ y - y^{k+1} \end{bmatrix} \le \mu, \\
& g_j(x^{k+1},y^{k+1}) + \nabla g_j(x^{k+1},y^{k+1})^T \begin{bmatrix} x - x^{k+1} \\ y - y^{k+1} \end{bmatrix} \le 0, \quad j \in A^{k+1}.
\end{aligned}
\]

These cuts prevent the integer assignment $y^{k+1}$ from being selected again in later master problems. The methods are illustrated on a small example with one continuous and one integer variable, whose objective contains the terms $6x$, $y$, $0.3(x-8)^2$, $(y-6)^2$, an exponential term and the reciprocal terms $1/x + 1/y$, subject to two linear constraints and the bounds $1 \le x \le 20$, $1 \le y \le 20$, $x \in \mathbb{R}$, $y \in \mathbb{Z}$, starting from $(x^0, y^0) = (5.29, 3)$.

The basic outer-approximation algorithm can then be stated as follows. Given a tolerance $\epsilon \ge 0$ and a starting point $(\tilde{x}, \tilde{y})$, add the linearizations at $(\tilde{x}, \tilde{y})$, set $k = 1$, $UB^0 = \infty$ and $LB^0 = -\infty$. While $UB^{k-1} - LB^{k-1} > \epsilon$: solve the master problem to obtain $y^k$ and the lower bound $LB^k$; solve the NLP subproblem with $y^k$ fixed to obtain $x^k$ (if it is infeasible, obtain $x^k$ from the feasibility problem and set $UB^k = UB^{k-1}$); add the linearizations at $(x^k, y^k)$; if the subproblem was feasible, set $UB^k = \min\{f(x^k, y^k), UB^{k-1}\}$; set $k = k+1$. On termination the incumbent $(\bar{x}, \bar{y})$ with objective value $f(\bar{x}, \bar{y})$ is returned.
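The loop can be summarized in the following sketch (a minimal illustration under stated assumptions, not the original implementation; solve_master and solve_nlp are hypothetical interfaces to an MILP solver over the accumulated cuts and to the NLP subproblem):

import math

def outer_approximation(solve_master, solve_nlp, f, x0, y0, eps=1e-5, max_iter=100):
    # solve_master(cuts) -> (y_k, LB): master problem over the linearizations in cuts
    # solve_nlp(y_k)     -> (x_k, feasible): subproblem with y_k fixed, falling back
    #                       to the feasibility problem when the subproblem is infeasible
    cuts = [(x0, y0)]                  # points at which linearizations have been added
    UB, LB = math.inf, -math.inf
    x_best, y_best = x0, y0
    k = 1
    while UB - LB > eps and k <= max_iter:
        y_k, LB = solve_master(cuts)   # new integer assignment and lower bound LB^k
        x_k, feasible = solve_nlp(y_k) # continuous solution for fixed y_k
        cuts.append((x_k, y_k))        # add linearizations at the new point
        if feasible and f(x_k, y_k) < UB:
            UB = f(x_k, y_k)           # update the incumbent and the upper bound
            x_best, y_best = x_k, y_k
        k += 1
    return x_best, y_best, LB, UB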

The lower bound $LB^1$ from the first master problem can lie far below $f(\bar{x}, \bar{y})$, and the minimizer of the outer approximation may be far from the incumbent. A regularized variant therefore introduces, at iteration $k$, the level target

\[
\hat{f}^k = (1-\alpha)\, f(\bar{x}, \bar{y}) + \alpha\, LB^k, \qquad \alpha \in (0, 1],
\]

where $(\bar{x}, \bar{y})$ is the best point found so far and $LB^k$ the current lower bound; the parameter $\alpha$ controls how far towards the lower bound the target is placed. Instead of minimizing $\mu$, the next trial point is chosen as the point of the outer approximation closest to $(\bar{x}, \bar{y})$ that reaches the level $\hat{f}^k$:

\[
\begin{aligned}
\min_{x,y,\mu}\quad & \left\| \begin{bmatrix} x - \bar{x} \\ y - \bar{y} \end{bmatrix} \right\|^2 \\
\text{s.t.}\quad & \mu \le \hat{f}^k, \\
& f(x^i,y^i) + \nabla f(x^i,y^i)^T \begin{bmatrix} x - x^i \\ y - y^i \end{bmatrix} \le \mu, \quad i = 1,\dots,k, \\
& g_j(x^i,y^i) + \nabla g_j(x^i,y^i)^T \begin{bmatrix} x - x^i \\ y - y^i \end{bmatrix} \le 0, \quad i = 1,\dots,k,\ j \in A^i, \\
& Ax + By \le b, \qquad x \in \mathbb{R}^n,\ y \in \mathbb{Z}^m,\ \mu \in \mathbb{R}.
\end{aligned}
\]

[Figure: the trial point $y^{k+1}$ selected for different values of the level $\hat{f}^k$, starting from $(x^0, y^0)$.]

The regularized algorithm proceeds as follows. Given $\epsilon \ge 0$, $\alpha \in (0,1]$ and a feasible starting point $(\bar{x}, \bar{y})$, add the linearizations at $(\bar{x}, \bar{y})$, set $k = 1$ and $LB^0 = -\infty$. While $f(\bar{x}, \bar{y}) - LB^{k-1} > \epsilon$: solve the master problem to obtain $LB^k$ and compute the level $\hat{f}^k$; solve the projection problem to obtain $y^k$; solve the NLP subproblem with $y^k$ fixed to obtain $x^k$ (or the feasibility problem if it is infeasible); add the linearizations at $(x^k, y^k)$; if $f(x^k, y^k) \le f(\bar{x}, \bar{y})$, set $(\bar{x}, \bar{y}) = (x^k, y^k)$; set $k = k+1$. On termination $(\bar{x}, \bar{y})$ is returned. On the illustrative example with $\alpha = 0.4$ the method stops once $LB^4 = f(\bar{x}, \bar{y})$, i.e. after four iterations. The squared distance $\|x - \bar{x}\|^2 + \|y - \bar{y}\|^2$ attained by the projection problem in iteration $k$ is denoted $r^k$.
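A corresponding sketch of the regularized loop (again an illustration under assumptions; solve_projection stands for the MIQP projection problem above):

import math

def regularized_oa(solve_master, solve_projection, solve_nlp, f,
                   x_bar, y_bar, alpha=0.5, eps=1e-5, max_iter=100):
    # solve_master(cuts)                          -> LB   (lower bound LB^k)
    # solve_projection(cuts, level, x_bar, y_bar) -> y_k  (closest point with mu <= level)
    # solve_nlp(y_k)                              -> x_k  (subproblem with y_k fixed)
    cuts = [(x_bar, y_bar)]
    LB = -math.inf
    k = 1
    while f(x_bar, y_bar) - LB > eps and k <= max_iter:
        LB = solve_master(cuts)
        level = (1.0 - alpha) * f(x_bar, y_bar) + alpha * LB   # the level f_hat^k
        y_k = solve_projection(cuts, level, x_bar, y_bar)
        x_k = solve_nlp(y_k)
        cuts.append((x_k, y_k))
        if f(x_k, y_k) <= f(x_bar, y_bar):                     # accept improving points only
            x_bar, y_bar = x_k, y_k
        k += 1
    return x_bar, y_bar, LB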

The role of the constraint $\mu \le \hat{f}^k$ is to force progress: if $(x^*, y^*, \mu^*)$ is a minimizer of the projection problem with squared distance $r^k = \|x^* - \bar{x}\|^2 + \|y^* - \bar{y}\|^2$, there may well be points $(x, y, \mu)$ of the outer approximation with $r^k > \|x - \bar{x}\|^2 + \|y - \bar{y}\|^2$, but any such closer point must violate $\mu \le \hat{f}^k$. The level constraint thus determines how far the trial point is pushed away from the incumbent towards the lower bound.

To incorporate second-order information, define the Lagrangian $L : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^l \to \mathbb{R}$,

\[
L(x, y, \lambda) = f(x,y) + \sum_{j=1}^{l} \lambda_j\, g_j(x,y), \qquad \lambda_j \ge 0 \ \ \forall j,
\]

where $f, g_1, \dots, g_l$ are assumed twice differentiable in $x$ and $y$. Its second-order Taylor expansion around $(\bar{x}, \bar{y})$ is

\[
L(\bar{x}, \bar{y}, \lambda) + \nabla_{x,y} L(\bar{x}, \bar{y}, \lambda)^T \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} + \frac{1}{2} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}^T \nabla^2_{x,y} L(\bar{x}, \bar{y}, \lambda) \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}.
\]

Here $\nabla_{x,y} L$ denotes the gradient and $\nabla^2_{x,y} L$ the Hessian of the Lagrangian with respect to $x$ and $y$, and $\Delta x = x - \bar{x}$, $\Delta y = y - \bar{y}$. Since $\lambda \ge 0$ and the functions are convex, $\nabla^2_{x,y} L(\bar{x}, \bar{y}, \lambda)$ is positive semidefinite, so the quadratic model is convex; the multipliers $\lambda$ are obtained from the NLP subproblem solved at $(\bar{x}, \bar{y})$. The quadratic model is minimized over the outer approximation: the objective cuts $f(x^i,y^i) + \nabla f(x^i,y^i)^T \big[\, x - x^i;\ y - y^i \,\big] \le \mu$, $i = 1,\dots,k$, together with the level constraint $\mu \le \hat{f}^k$ built from $(\bar{x}, \bar{y})$ and $LB^k$, ensure that only points whose outer-approximation value is at most $\hat{f}^k$ are considered.

The next integer assignment $y^{k+1}$ is then obtained from the second-order (MIQP) master problem

\[
\begin{aligned}
\min_{x,y,\mu}\quad & \nabla_{x,y} L(\bar{x}, \bar{y}, \lambda)^T \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} + \frac{1}{2} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}^T \nabla^2_{x,y} L(\bar{x}, \bar{y}, \lambda) \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} \\
\text{s.t.}\quad & \mu \le \hat{f}^k, \\
& f(x^i,y^i) + \nabla f(x^i,y^i)^T \begin{bmatrix} x - x^i \\ y - y^i \end{bmatrix} \le \mu, \quad i = 1,\dots,k, \\
& g_j(x^i,y^i) + \nabla g_j(x^i,y^i)^T \begin{bmatrix} x - x^i \\ y - y^i \end{bmatrix} \le 0, \quad i = 1,\dots,k,\ j \in A^i, \\
& Ax + By \le b, \qquad x \in \mathbb{R}^n,\ y \in \mathbb{Z}^m,\ \mu \in \mathbb{R},
\end{aligned}
\]

with $\Delta x = x - \bar{x}$ and $\Delta y = y - \bar{y}$, where $(\bar{x}, \bar{y})$ is the current best point and $\lambda$ the corresponding multipliers from the NLP subproblem. The level $\hat{f}^k$ depends on $\alpha$ as before: with $\alpha = 1$ it coincides with the lower bound $LB^k$, while smaller values of $\alpha$ keep the trial point closer to the incumbent.
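The quadratic objective of this master problem only requires the gradient and Hessian of the Lagrangian at the incumbent; a small evaluation helper (illustrative only, with grad_L and hess_L assumed to be precomputed numpy arrays) is:

import numpy as np

def lagrangian_quadratic_model(grad_L, hess_L, x_bar, y_bar):
    # Second-order model of the Lagrangian around (x_bar, y_bar):
    #   q(x, y) = grad_L^T d + 0.5 * d^T hess_L d,  with d = [x - x_bar; y - y_bar].
    z_bar = np.concatenate([x_bar, y_bar])
    def q(x, y):
        d = np.concatenate([x, y]) - z_bar
        return grad_L @ d + 0.5 * d @ hess_L @ d
    return q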

The resulting algorithm mirrors the regularized one. Given $\epsilon \ge 0$, $\alpha \in\, ]0, 1]$ and a feasible starting point $(\bar{x}, \bar{y})$ with multipliers $\lambda$, add the linearizations at $(\bar{x}, \bar{y})$, set $k = 1$ and $LB^0 = -\infty$. While $f(\bar{x}, \bar{y}) - LB^{k-1} > \epsilon$: solve the master problem to obtain $LB^k$ and compute $\hat{f}^k$; solve the second-order master problem to obtain $y^k$; solve the NLP subproblem with $y^k$ fixed to obtain $x^k$ and multipliers $\lambda^k$; add the linearizations at $(x^k, y^k)$; if $f(x^k, y^k) \le f(\bar{x}, \bar{y})$, set $(\bar{x}, \bar{y}, \lambda) = (x^k, y^k, \lambda^k)$; set $k = k+1$. On termination $(\bar{x}, \bar{y})$ is returned. On the illustrative example, starting from $(x^0, y^0)$ with $\alpha = 0.5$, the method terminates once $LB^3 = f(\bar{x}, \bar{y})$, i.e. after three iterations.

[Figure: iterations of the second-order method on the example, showing the level constraint $\mu \le \hat{f}^k$.]

The following property of convex functions is needed: a differentiable convex function $\varphi(x, y)$ satisfies

\[
\varphi(x, y) \ge \varphi(x^0, y^0) + \nabla \varphi(x^0, y^0)^T \begin{bmatrix} x - x^0 \\ y - y^0 \end{bmatrix} \qquad \forall\, (x, y), (x^0, y^0) \in D_\varphi,
\]

where $D_\varphi$ denotes its (convex) domain; hence the linearizations never cut off feasible points. Using this, one can show that an integer assignment $y^k$ cannot be generated twice as long as the level satisfies $\hat{f}^k < f(\bar{x}, \bar{y})$. Let $(\hat{x}^i, \hat{y}^i)$ denote the previously visited points. Since $(\bar{x}, \bar{y})$ is the best point found so far,

\[
\hat{f}^k < f(\bar{x}, \bar{y}) \le f(\hat{x}^i, \hat{y}^i) \qquad \forall i.
\]

Each visited point contributes the objective cut

\[
f(\hat{x}^i, \hat{y}^i) + \nabla f(\hat{x}^i, \hat{y}^i)^T \begin{bmatrix} x - \hat{x}^i \\ y - \hat{y}^i \end{bmatrix} \le \mu.
\]

Writing $\hat{\mu}^i = f(\hat{x}^i, \hat{y}^i)$ for the objective value at these points, it follows that $\hat{f}^k < f(\bar{x}, \bar{y}) \le \hat{\mu}^i$ for all $i$.

Any point $(x, y, \mu)$ feasible for the master problem must satisfy $\mu \le \hat{f}^k$, so the objective cuts above force

\[
\nabla f(\hat{x}^i, \hat{y}^i)^T \begin{bmatrix} x - \hat{x}^i \\ y - \hat{y}^i \end{bmatrix} < 0 \qquad \forall i.
\]

Consider in particular $(\tilde{x}, \tilde{y}) \in \{(\hat{x}^i, \hat{y}^i)\}$, the solution of an NLP subproblem, which together with multipliers $\lambda, \gamma$ satisfies the KKT conditions

\[
\begin{aligned}
& \nabla_x f(\tilde{x}, \tilde{y}) + \sum_{j=1}^{l} \lambda_j \nabla_x g_j(\tilde{x}, \tilde{y}) + A^T \gamma = 0, \\
& g_j(\tilde{x}, \tilde{y}) \le 0, \quad j = 1,\dots,l, \\
& A\tilde{x} + B\tilde{y} \le b, \\
& \lambda, \gamma \ge 0, \\
& \lambda_j\, g_j(\tilde{x}, \tilde{y}) = 0, \quad j = 1,\dots,l, \\
& (A\tilde{x} + B\tilde{y} - b)^T \gamma = 0.
\end{aligned}
\]

Suppose a point of the master problem had the same integer assignment $\tilde{y}$, and write $\Delta x = x - \tilde{x}$. For every $j$ with $\lambda_j > 0$, complementarity gives $g_j(\tilde{x}, \tilde{y}) = 0$, and the constraint cut $g_j(\tilde{x}, \tilde{y}) + \nabla g_j(\tilde{x}, \tilde{y})^T \big[\, \Delta x;\ 0 \,\big] \le 0$ then yields

\[
\lambda_j \nabla_x g_j(\tilde{x}, \tilde{y})^T \Delta x \le 0, \qquad j = 1,\dots,l.
\]

Similarly, the linear constraints together with $\gamma \ge 0$ and complementarity give $\gamma^T A \Delta x \le 0$, while the objective cut gives $\nabla_x f(\tilde{x}, \tilde{y})^T \Delta x < 0$. Summing these inequalities,

\[
\nabla_x f(\tilde{x}, \tilde{y})^T \Delta x + \sum_{j=1}^{l} \lambda_j \nabla_x g_j(\tilde{x}, \tilde{y})^T \Delta x + \gamma^T A \Delta x < 0.
\]

On the other hand, the KKT stationarity condition implies that for every $\Delta x$

\[
\nabla_x f(\tilde{x}, \tilde{y})^T \Delta x + \sum_{j=1}^{l} \lambda_j \nabla_x g_j(\tilde{x}, \tilde{y})^T \Delta x + \gamma^T A \Delta x = \Big( \nabla_x f(\tilde{x}, \tilde{y}) + \sum_{j=1}^{l} \lambda_j \nabla_x g_j(\tilde{x}, \tilde{y}) + A^T \gamma \Big)^T \Delta x = 0,
\]

a contradiction. Hence no point with the integer assignment $\tilde{y}$ is feasible for the master problem, and no integer assignment $y$ is generated twice.

The numerical comparison uses a set of convex MINLP test problems; the instances are grouped according to the share of variables that appear in nonlinear terms, distinguishing problems with $n_{\mathrm{nonlin}}/(n+m) > 0.5$ from the remaining ones, where $n_{\mathrm{nonlin}}$ is the number of such variables and $n + m$ the total number of variables.

In the implementation of the second-order method, the Hessian of the Lagrangian is regularized whenever it is not numerically positive semidefinite (a tolerance of $10^{-9}$ is used): every diagonal element is shifted,

\[
\nabla^2_{x,y} L(i, i) := \nabla^2_{x,y} L(i, i) + \lambda_{\min} \qquad \forall i,
\]

where $\lambda_{\min}$ is related to the most negative eigenvalue of the Hessian, so that the shifted matrix becomes positive semidefinite.
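A sketch of such a diagonal shift (the rule for choosing $\lambda_{\min}$ is an assumption; here it is the amount needed to raise the smallest eigenvalue to the tolerance):

import numpy as np

def regularize_hessian(H, tol=1e-9):
    # Shift the diagonal of an (approximate) Lagrangian Hessian so that the
    # quadratic term of the master problem is convex:
    #   H[i, i] := H[i, i] + lam_min  for all i.
    H = np.asarray(H, dtype=float)
    lam_min = max(0.0, tol - np.linalg.eigvalsh(H).min())
    return H + lam_min * np.eye(H.shape[0])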

Two termination criteria are used: an absolute one, $f(\bar{x}, \bar{y}) - LB \le \epsilon$, and a relative one,

\[
\frac{f(\bar{x}, \bar{y}) - LB}{|f(\bar{x}, \bar{y})|} \le \epsilon_{\mathrm{rel}},
\]

with $\epsilon = 10^{-5}$; a relative tolerance $\epsilon_{\mathrm{rel}}$ and a time limit in seconds are imposed as well.
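The two stopping tests amount to the following check (the default for eps_rel is a placeholder, since its value is not given above):

def converged(f_incumbent, LB, eps=1e-5, eps_rel=1e-3):
    # Absolute gap:  f(x_bar, y_bar) - LB <= eps
    # Relative gap: (f(x_bar, y_bar) - LB) / |f(x_bar, y_bar)| <= eps_rel
    # The default eps_rel here is an assumption, not the value used in the experiments.
    gap = f_incumbent - LB
    return gap <= eps or gap <= eps_rel * abs(f_incumbent)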


The solvers are compared by means of performance profiles: for every instance the ratio $\tau$ of a solver's solution time to the best solution time is recorded, and the profiles show the fraction of instances solved within a factor $\tau$ of the best solver (plotted up to $\tau = 3$); analogous profiles with ratios $\tau_{\mathrm{iter}}$ are reported for the number of iterations (up to $\tau_{\mathrm{iter}} = 1.5$).
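The profiles can be computed directly from a matrix of solution times (a standard construction, sketched here under the assumption that failed runs are recorded as infinity):

import numpy as np

def performance_ratios(times):
    # times[s, p]: solution time of solver s on problem p (np.inf if it failed).
    # The ratio tau[s, p] compares each solver to the best solver on that problem.
    times = np.asarray(times, dtype=float)
    best = times.min(axis=0)
    return times / best

def profile(ratios, tau):
    # Fraction of problems each solver brings within a factor tau of the best solver.
    return (ratios <= tau).mean(axis=1)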

[Figure: performance profiles with respect to solution time ($\tau_t$) and number of iterations ($\tau_{\mathrm{iter}}$).]

The value $\alpha = 0.5$ is used as the default in the comparisons; the influence of the parameter $\alpha$ on the performance of the regularized and second-order methods is examined separately.

[Figure: results for different values of $\alpha$.]


[Table: test-instance characteristics, with columns $m_{\mathrm{int}}$ (integer variables), $m_{\mathrm{bin}}$ (binary variables), $n$ (continuous variables), $n+m$ (total number of variables), $n_{\mathrm{nonlin}}$ (variables appearing in nonlinear terms) and the ratio $n_{\mathrm{nonlin}}/(n+m)$.]


[Table: lower and upper bounds ($LB$, $UB$) obtained, together with the relative gap $(UB - LB)/|LB|$.]


