Introduction to Optimization Techniques. Nonlinear Programming
- Griffin Perry
1 Introduction to Optimization Techniques. Nonlinear Programming
2 Optimal Solutions
Consider the optimization problem

    min f(x)  subject to  x \in F,  where F \subseteq R^n.

Definition: x^* \in F is optimal (a global minimum) for this problem if f(x^*) \le f(x) for all x \in F.

Definition: x^* \in F is a local minimum if there is an \varepsilon > 0 so that f(x^*) \le f(x) for all x \in F \cap N(x^*, \varepsilon), where

    N(x^*, \varepsilon) = \{ x \in R^n : \|x - x^*\| < \varepsilon \}

and \|\cdot\| is a norm on R^n.
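The distinction between a local and a global minimum can be seen numerically. A small sketch, using an assumed function f(x) = (x^2 - 1)^2 + 0.3x on F = [-2, 2] (illustrative data, not from the notes): both x near -1 and x near +1 are local minima, but only the left one is global.

```python
# Local vs. global minima for the assumed f(x) = (x^2 - 1)^2 + 0.3*x on [-2, 2].
from scipy.optimize import minimize_scalar

f = lambda x: (x**2 - 1.0)**2 + 0.3 * x

# Search each basin separately: both points are local minima of f on F.
left  = minimize_scalar(f, bounds=(-2.0, 0.0), method="bounded")
right = minimize_scalar(f, bounds=( 0.0, 2.0), method="bounded")

print(left.x, right.x)       # two local minimizers, near -1 and +1
print(left.fun < right.fun)  # True: only the left one is the global minimum
```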
3 Lagrange Multiplier Method
Now consider the optimization problem

    min f(x)  subject to  x \in X,  g(x) \le 0        (GP)

where X \subseteq R^n, f : X \to R, g : X \to R^m, and we take the feasible region to be F = \{ x \in X : g(x) \le 0 \}.

Definition: The Lagrangian for (GP) is the function

    L(x, \lambda) = f(x) + \lambda^T g(x)

where \lambda^T g(x) is the inner product of the vector \lambda with the vector g(x). That is,

    L(x, \lambda) = f(x) + \sum_{i=1}^{m} \lambda_i g_i(x).
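The definition translates directly into code. A minimal sketch, where the particular f and g below are illustrative assumptions (not from the notes):

```python
import numpy as np

# Lagrangian L(x, lam) = f(x) + lam^T g(x) for an assumed instance of (GP).

def f(x):
    # assumed objective: f(x) = x1^2 + x2^2
    return float(x[0]**2 + x[1]**2)

def g(x):
    # assumed constraint g(x) <= 0: g1(x) = 1 - x1 - x2
    return np.array([1.0 - x[0] - x[1]])

def lagrangian(x, lam):
    # L(x, lam) = f(x) + sum_i lam_i * g_i(x)
    return f(x) + float(lam @ g(x))

x = np.array([1.0, 1.0])
lam = np.array([2.0])
print(lagrangian(x, lam))  # f = 2, g1 = -1, so L = 2 + 2*(-1) = 0.0
```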
4 Lagrange Multiplier Method
Theorem 1 (Lagrange): Let \lambda \ge 0, \lambda \in R^m (i.e., \lambda is a nonnegative m-vector) and let x^* \in X solve the relatively unconstrained problem

    min_{x \in X} L(x, \lambda).

If x^* \in F and \lambda^T g(x^*) = 0, then x^* is optimal for (GP).

Proof: By assumption,

    f(x^*) + \lambda^T g(x^*) \le f(x) + \lambda^T g(x)   for all x \in X.

But \lambda^T g(x^*) = 0 then implies

    f(x^*) \le f(x) + \lambda^T g(x)   for all x \in X.

But F \subseteq X and \lambda^T g(x) \le 0 for x \in F; therefore, for all x \in F,

    f(x^*) \le f(x) + \lambda^T g(x) \le f(x).

Note: The theorem assumes an appropriate \lambda exists and we know its value.
5 Lagrange Multiplier Method
Ex 1: (\lambda may not exist)

    min x  subject to  x \in R,  x^2 \le 0.

Then F = \{0\} and therefore x^* = 0 is optimal. Let \lambda = 0; then

    min_{x \in X} L(x, 0) = min_{x \in R} x = -\infty,

so x^* = 0 is not optimal for min_{x \in X} L(x, 0). If \lambda > 0,

    min_{x \in X} L(x, \lambda) = min_{x \in R} (x + \lambda x^2),

and x(\lambda) = -1/(2\lambda) \ne 0 is optimal for min_{x \in X} L(x, \lambda), while x^* = 0 is not.
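The failure in Example 1 can be checked numerically: for any \lambda > 0 the unconstrained minimizer of L(x, \lambda) = x + \lambda x^2 is x = -1/(2\lambda), which is never the optimal point x^* = 0. A sketch:

```python
# Example 1 (min x s.t. x^2 <= 0): no multiplier recovers x* = 0, since the
# unconstrained minimizer of L(x, lam) = x + lam*x^2 is -1/(2*lam) for lam > 0.
from scipy.optimize import minimize_scalar

def argmin_L(lam):
    # Numerically minimize L(., lam) over R.
    return minimize_scalar(lambda x: x + lam * x**2).x

for lam in [0.5, 1.0, 4.0]:
    print(lam, argmin_L(lam))  # minimizer ~ -1/(2*lam), never 0
```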
6 Saddle Points for Optimality
Ex 2: Show there is no such \lambda (for Theorem 1) for the problem

    min -x^2  subject to  x \in R_+,  x - 1 \le 0.

Definition: For problem GP, (x^*, \lambda^*) is said to be a saddle point of the Lagrangian if x^* \in X, \lambda^* \ge 0 and

    L(x^*, \lambda) \le L(x^*, \lambda^*) \le L(x, \lambda^*)   for all \lambda \in R_+^m and for all x \in X.
7 Saddle Points for Optimality
Theorem 2: (x^*, \lambda^*) \in X \times R_+^m is a saddle point of the Lagrangian of GP if and only if

    (i)   x^* solves min_{x \in X} L(x, \lambda^*),
    (ii)  x^* \in X, g(x^*) \le 0 (i.e., x^* \in F),
    (iii) \lambda^{*T} g(x^*) = 0 (complementary slackness).

Proof: Assume (x^*, \lambda^*) \in X \times R_+^m is a saddle point. Then L(x^*, \lambda^*) \le L(x, \lambda^*) for all x \in X is precisely the statement (i): x^* solves min_{x \in X} L(x, \lambda^*).
8 Saddle Points for Optimality
Also, since L(x^*, \lambda) \le L(x^*, \lambda^*) for all \lambda \in R_+^m, we have

    f(x^*) + \lambda^T g(x^*) \le f(x^*) + \lambda^{*T} g(x^*)   for all \lambda \ge 0,

i.e.,

    (1)   (\lambda - \lambda^*)^T g(x^*) \le 0   for all \lambda \ge 0.

Assume g(x^*) \le 0 fails. Without loss of generality (wlog) we may assume g_1(x^*) > 0. Let \lambda_1 = \lambda_1^* + 1 and \lambda_i = \lambda_i^*, i \ge 2. Then \lambda \ge 0 and

    (\lambda - \lambda^*)^T g(x^*) = (\lambda_1 - \lambda_1^*) g_1(x^*) = g_1(x^*) > 0,

and this contradicts (1). Therefore, (ii): x^* \in X, g(x^*) \le 0.
9 Saddle Points for Optimality
Now, (1) implies (by taking \lambda = 0)

    \lambda^{*T} g(x^*) \ge 0.

But we've just shown that g(x^*) \le 0 and we've assumed \lambda^* \ge 0; therefore \lambda^{*T} g(x^*) \le 0 and, hence,

    (iii)   \lambda^{*T} g(x^*) = 0.

Conversely, assume (x^*, \lambda^*) \in X \times R_+^m satisfies conditions (i)-(iii). Condition (i) is precisely the statement

    L(x^*, \lambda^*) \le L(x, \lambda^*)   for all x \in X,

and we now need to show L(x^*, \lambda) \le L(x^*, \lambda^*) for all \lambda \in R_+^m.
10 Saddle Points for Optimality
For \lambda \in R_+^m we have \lambda^T g(x^*) \le 0 = \lambda^{*T} g(x^*) and therefore

    f(x^*) + \lambda^T g(x^*) \le f(x^*) = f(x^*) + \lambda^{*T} g(x^*),

or L(x^*, \lambda) \le L(x^*, \lambda^*) for all \lambda \in R_+^m.

Corollary: If problem GP has a saddle point (x^*, \lambda^*), then x^* is optimal for GP.

Proof: If (x^*, \lambda^*) is a saddle point, then conditions (i)-(iii) hold and therefore Theorem 1 applies.

Note: Examples 1 and 2 show that not all problems have saddle points (even though the problem may have an optimal solution).
11 Saddle Points for Optimality
HW 1: Let (x^*, \lambda^*) be a saddle point for GP and let \bar{x} \ne x^* be optimal for GP. Show whether or not (\bar{x}, \lambda^*) is a saddle point.

HW 1a: Let (x^*, \lambda^*), (\bar{x}, \bar{\lambda}) be two saddle points for GP. Show whether or not (x^*, \bar{\lambda}) is a saddle point.

HW 2: Using the definition of the Lagrangian for GP, derive the Lagrangians for

    (a) max f(x), x \in X, g(x) \le 0;   (b) max f(x), x \in X, g(x) \ge 0;   (c) min f(x), x \in X, g(x) \ge 0.
12 Saddle Points for Optimality
HW 3: Using the definition of the Lagrangian for GP, derive the Lagrangian for

    min f(x),  x \in X,  h(x) = 0,  r(x) \le 0,

and show that the multipliers for the vector function h(x) are unrestricted in sign. (Hint: First write h(x) = 0 as h(x) \le 0, -h(x) \le 0.)

HW 4 (a useful lower bound): For problem (GP) show that, for all \lambda \in R_+^m,

    min_{x \in X} L(x, \lambda) \le min_{x \in F} f(x).

(Hint: recall, for x \in F, \lambda^T g(x) \le 0.)
13 Dual Problem
For problem GP

    min f(x),  x \in X,  g(x) \le 0,        (GP)

we define the dual problem, denoted by (D), to be

    max_{\lambda \ge 0} L^*(\lambda)        (D)

where L^*(\lambda) = min_{x \in X} L(x, \lambda) (L^* is called the dual function).

Note: The above HW shows that for all \lambda \in R_+^m (i.e., for all \lambda \ge 0) we have L^*(\lambda) \le min_{x \in F} f(x) and hence

    max_{\lambda \ge 0} L^*(\lambda) \le min_{x \in F} f(x).

This is called the Weak Duality Theorem.
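Weak duality is easy to observe numerically. A sketch on the assumed problem min x^2 subject to 1 - x <= 0 with X = R (illustrative data, not from the notes): its optimal value is f(x^*) = 1 at x^* = 1, and L(x, \lambda) = x^2 + \lambda(1 - x) is minimized at x = \lambda/2, giving L^*(\lambda) = \lambda - \lambda^2/4.

```python
# Weak duality check on the assumed problem min x^2 s.t. 1 - x <= 0, X = R,
# where L*(lam) = lam - lam^2/4 (closed form, minimizer x = lam/2).

def dual_function(lam):
    x = lam / 2.0                    # unconstrained minimizer of L(., lam)
    return x**2 + lam * (1.0 - x)    # = lam - lam^2/4

f_star = 1.0                         # optimal value of the primal, at x* = 1
for lam in [0.0, 0.5, 1.0, 2.0, 3.0]:
    assert dual_function(lam) <= f_star + 1e-12  # L*(lam) <= min_{x in F} f(x)

print(dual_function(2.0))  # 1.0: here the dual maximum equals f*, no gap
```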
14 Dual Problem
Ex 3: Consider the linear program

    min c^T x,  x \in X,  Ax \ge b,  x \ge 0.

Let X = \{ x \in R^n : x \ge 0 \} and take the Lagrangian to be

    L(x, \lambda) = c^T x + \lambda^T (b - Ax) = (c^T - \lambda^T A) x + \lambda^T b,

so that

    L^*(\lambda) = min_{x \ge 0} [ (c^T - \lambda^T A) x + \lambda^T b ].
15 Dual Problem
Notational digression: Let A be an m \times n matrix. We let a_i denote the i-th row of A and we let a^j denote the j-th column of A. Therefore, we have

    A = [ a_1 ; a_2 ; ... ; a_m ]  (rows)   or   A = [ a^1  a^2  ...  a^n ]  (columns).

By Ax is meant

    Ax = \sum_{j=1}^{n} a^j x_j

(where a number times a vector is the number times each component of the vector).
16 Dual Problem
We also have

    Ax = ( a_1 x, a_2 x, ..., a_m x )^T.

By y^T A is meant

    y^T A = \sum_{i=1}^{m} y_i a_i = y_1 a_1 + y_2 a_2 + ... + y_m a_m.

Now, for the linear program we have

    L^*(\lambda) = min_{x \ge 0} [ (c^T - \lambda^T A) x + \lambda^T b ]
                 = min_{x \ge 0} [ \sum_{j=1}^{n} (c_j - \lambda^T a^j) x_j + \sum_{i=1}^{m} \lambda_i b_i ]
                 = \sum_{j=1}^{n} min_{x_j \ge 0} (c_j - \lambda^T a^j) x_j + \lambda^T b.
17 Dual Problem
Now, if c^T - \lambda^T A \ge 0 fails, there exists at least one index k so that c_k - \lambda^T a^k < 0 and therefore

    min_{x_k \ge 0} (c_k - \lambda^T a^k) x_k = -\infty,

and hence, if c^T - \lambda^T A \ge 0 fails, we have L^*(\lambda) = -\infty. On the other hand, if c^T - \lambda^T A \ge 0, then each c_k - \lambda^T a^k \ge 0 and, hence,

    min_{x_k \ge 0} (c_k - \lambda^T a^k) x_k = 0.

Therefore, if c^T - \lambda^T A \ge 0, we have L^*(\lambda) = \lambda^T b.
18 Dual Problem
Hence, max_{\lambda \ge 0} L^*(\lambda) may be rewritten as

    max \lambda^T b   subject to   c^T - \lambda^T A \ge 0,  \lambda \ge 0,

or

    max b^T \lambda   subject to   A^T \lambda \le c,  \lambda \ge 0,

and this, of course, is the usual linear programming dual for

    min c^T x,  Ax \ge b,  x \ge 0.
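The derived primal/dual pair can be checked with an off-the-shelf LP solver. A sketch using scipy, with made-up data c, A, b (assumptions chosen so both problems are feasible and bounded):

```python
# Check the LP dual just derived: min c^T x, Ax >= b, x >= 0 versus
# max b^T lam, A^T lam <= c, lam >= 0. With feasible/bounded data, the
# optimal values coincide (strong LP duality).
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 6.0])

# Primal: linprog handles A_ub x <= b_ub, so write Ax >= b as -Ax <= -b;
# default bounds already enforce x >= 0.
primal = linprog(c, A_ub=-A, b_ub=-b)

# Dual: max b^T lam <=> min -b^T lam, subject to A^T lam <= c, lam >= 0.
dual = linprog(-b, A_ub=A.T, b_ub=c)

print(primal.fun, -dual.fun)  # equal optimal values
```

For this data both optima are 10 (primal at x = (2, 2), dual at lam = (1, 1)).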
19 Dual Problem
HW 5: For the linear program

    min c^T x,  Ax \ge b,  x \ge 0,

take X = R^n and let

    L(x, \lambda, \gamma) = c^T x + \lambda^T (b - Ax) - \gamma^T x.

Following the line of argument above, develop the dual problem

    max_{\lambda \ge 0, \gamma \ge 0} L^*(\lambda, \gamma).

We now characterize saddle points in terms of duality.
20 Optimality via Duality
Theorem 3: For problem (GP) the pair (x^*, \lambda^*) \in X \times R_+^m is a saddle point if and only if

    (a) x^* solves GP,  (b) \lambda^* solves D, and  (c) f(x^*) = L^*(\lambda^*).

Proof: Assume (x^*, \lambda^*) is a saddle point. By Theorems 1 and 2 we automatically have (a): x^* solves GP. Also,

    L^*(\lambda^*) = min_{x \in X} L(x, \lambda^*) = min_{x \in X} [ f(x) + \lambda^{*T} g(x) ] = f(x^*) + \lambda^{*T} g(x^*) = f(x^*),

where the last two equalities follow from conditions (i) and (iii) of Theorem 2. So L^*(\lambda^*) = f(x^*) and condition (c) holds. By weak duality, we have L^*(\lambda) \le f(x^*) for all \lambda \in R_+^m, and hence L^*(\lambda^*) = f(x^*) implies \lambda^* solves max_{\lambda \ge 0} L^*(\lambda), and hence (b) holds.
21 Optimality via Duality
Conversely, assume conditions (a), (b), (c) hold (and, of course, \lambda^* \ge 0). To show (x^*, \lambda^*) is a saddle point. Condition (c) states

    (1)   f(x^*) = L^*(\lambda^*) = min_{x \in X} L(x, \lambda^*) \le L(x^*, \lambda^*) = f(x^*) + \lambda^{*T} g(x^*).

But condition (a) implies x^* \in F and, hence, \lambda^{*T} g(x^*) \le 0, and it must then be the case that f(x^*) + \lambda^{*T} g(x^*) \le f(x^*). Therefore, we must have

    f(x^*) + \lambda^{*T} g(x^*) = f(x^*)   and   \lambda^{*T} g(x^*) = 0,

so condition (iii) of Theorem 2 holds. Also, x^* \in F implies condition (ii) of Theorem 2 holds. Hence, it only remains to show that condition (i) holds; i.e., to show x^* solves min_{x \in X} L(x, \lambda^*). But this follows immediately from (1), since \lambda^{*T} g(x^*) = 0 implies (using (1)) that min_{x \in X} L(x, \lambda^*) = L(x^*, \lambda^*), and condition (i) of Theorem 2 holds.
22 Optimality via Duality
Ex 1 (revisited):

    min x,  x \in X = R,  x^2 \le 0.

Then L(x, \lambda) = x + \lambda x^2 and L^*(\lambda) = min_{x \in R} (x + \lambda x^2). For \lambda = 0,

    L^*(0) = min_{x \in R} x = -\infty.

For \lambda > 0, the minimizer is x(\lambda) = -1/(2\lambda), so

    L^*(\lambda) = -1/(4\lambda)  if \lambda > 0,   and   L^*(\lambda) = -\infty  if \lambda = 0,
23 Optimality via Duality
and while sup_{\lambda \ge 0} L^*(\lambda) = 0, we see that max_{\lambda \ge 0} L^*(\lambda) has no optimal solution; that is, there is no \lambda \ge 0 so that L^*(\lambda) = 0.

Ex 2 (revisited):

    min -x^2,  x \in R_+,  x - 1 \le 0.        (P)

For every \lambda \ge 0 we have L(x, \lambda) = -x^2 + \lambda(x - 1) and

    L^*(\lambda) = min_{x \in R_+} ( -x^2 + \lambda(x - 1) ) = -\infty,

since -x^2 + \lambda(x - 1) \to -\infty as x \to \infty.
24 Optimality via Duality
Therefore L^*(\lambda) = -\infty for every \lambda \ge 0, while x^* = 1 solves (P) with f(x^*) = -1. Hence

    max_{\lambda \ge 0} L^*(\lambda) < min_{x \in F} f(x).

[NOTE: when this situation occurs we say that there is a duality gap. That is, if x^* solves GP and \lambda^* solves D and L^*(\lambda^*) < f(x^*), we have a duality gap.]

Therefore, any problem with a duality gap cannot have a saddle point.
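For Example 1 (min x subject to x^2 <= 0), the dual function is L^*(\lambda) = -1/(4\lambda) for \lambda > 0 and -\infty at \lambda = 0. A short sketch showing that its supremum 0 is approached but never attained:

```python
# Example 1 revisited: sup_{lam >= 0} L*(lam) = 0 = f(x*), yet no lam attains it.

def dual_fn(lam):
    if lam <= 0.0:
        return float("-inf")       # L*(0) = min_x x = -inf
    return -1.0 / (4.0 * lam)      # minimizer x = -1/(2*lam)

values = [dual_fn(10.0**k) for k in range(1, 7)]
print(values)  # strictly negative, increasing toward 0, never reaching it
```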
25 Optimality via Duality
HW 6: Consider the problem, denoted by P_I,

    min c^T x,  x \in X_I,  Ax \ge b,        (P_I)

where

    X_I = \{ x \in R^n : x_j \ge 0 and integer-valued, j = 1, ..., n \}.

[NOTE: (P_I) is a version of the so-called linear integer programming problem.] Let (P) denote the associated linear program

    min c^T x,  x \in X,  Ax \ge b,        (P)

where X = \{ x \in R^n : x \ge 0 \}.
26 Optimality via Duality
Let L_I^* denote the dual function for P_I and let L^* denote the dual function for P. Show whether or not L_I^* \ge L^*.

HW 7: Consider the problem

    min x,  x \in X,  x^2 - 2x = 0,   where X = [0, 1]

(Note: the constraint is an equality constraint).

    (a) Derive L^*(\lambda).
    (b) Decide whether or not P has a saddle point.
27 Karush-Kuhn-Tucker Points
Consider the optimization problem

    min f(x),  x \in X,  g(x) \le 0,        (GDP)

where f is differentiable on the interior of X and also each g_i, i = 1, ..., m, is differentiable on the interior of X. We use the symbol GDP for "general differentiable problem". As before,

    L(x, \lambda) = f(x) + \lambda^T g(x).
28 Karush-Kuhn-Tucker Points
Definition: We say (x^*, \lambda^*) \in (int X) \times R_+^m is a Karush-Kuhn-Tucker point (KKT point) if

    (i')  \nabla_x L(x^*, \lambda^*) = 0,
    (ii)  x^* \in X, g(x^*) \le 0,
    (iii) \lambda^{*T} g(x^*) = 0,

where

    \nabla_x L(x^*, \lambda^*) = ( \partial L(x^*, \lambda^*)/\partial x_1, ..., \partial L(x^*, \lambda^*)/\partial x_n )^T,

the gradient, with respect to x, of the Lagrangian evaluated at x^*.
29 Karush-Kuhn-Tucker Points
Ex 2 (revisited):

    min -x^2,  x \in R_+,  x - 1 \le 0;        (P)

then x^* = 1 is optimal for P. Consider L(x, \lambda) = -x^2 + \lambda(x - 1). Then

    \nabla_x L(x, \lambda) = \lambda - 2x,   so   \nabla_x L(x, 2) = 2(1 - x).

Therefore, if we take \lambda^* = 2 we have \nabla_x L(x^*, \lambda^*) = \nabla_x L(1, 2) = 0. Also, x^* = 1 \in int R_+, g(x^*) = x^* - 1 = 0 \le 0, and \lambda^{*T} g(x^*) = 2(0) = 0. Therefore, we see that (x^*, \lambda^*) = (1, 2) is a Karush-Kuhn-Tucker point for (P) [Recall: (P) has no saddle point].
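Checking the three KKT conditions is mechanical. A sketch, under assumed problem data of this kind (min -x^2 subject to g(x) = x - 1 <= 0, X = R_+), at the candidate (x^*, \lambda^*) = (1, 2):

```python
# KKT check for min -x^2 s.t. g(x) = x - 1 <= 0, X = R_+ (assumed data),
# at the candidate point (x*, lam*) = (1, 2).

def grad_L(x, lam):
    return -2.0 * x + lam          # d/dx [ -x^2 + lam*(x - 1) ]

def g(x):
    return x - 1.0

x_star, lam_star = 1.0, 2.0
cond_i   = abs(grad_L(x_star, lam_star)) < 1e-12   # (i')  stationarity
cond_ii  = g(x_star) <= 0.0                        # (ii)  primal feasibility
cond_iii = abs(lam_star * g(x_star)) < 1e-12       # (iii) compl. slackness

print(cond_i, cond_ii, cond_iii)  # all True: a KKT point, yet no saddle point
```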
30 Karush-Kuhn-Tucker Points
Ex 1 (revisited):

    min x,  x \in R,  x^2 \le 0.        (P)

Let x^* = 0, the only feasible point. Now L(x, \lambda) = x + \lambda x^2 and

    \nabla_x L(x, \lambda) = 1 + 2\lambda x,   so   \nabla_x L(x^*, \lambda) = \nabla_x L(0, \lambda) = 1 \ne 0   for all \lambda.

Therefore, (P) has no KKT point (as well as no S.P.).
31 Karush-Kuhn-Tucker Points
HW 8: Consider the linear program (A is m \times n)

    min c^T x,  x \in R^n,  Ax \ge b,  x \ge 0,        (P)

and let

    L(x, \lambda, \gamma) = c^T x + \lambda^T (b - Ax) - \gamma^T x.

Show that (P) has an optimal solution x^* if, and only if, there exist vectors \lambda^* \in R_+^m, \gamma^* \in R_+^n, so that (x^*, (\lambda^*, \gamma^*)) is a saddle point. Is it necessarily true that (x^*, (\lambda^*, \gamma^*)) is also a Karush-Kuhn-Tucker point? Why? [HINT: Don't be afraid to use your knowledge of linear programming.]
32 Economic Motivation of Duality
Assume GP provides our optimal production cost:

    min f(x),  x \in X,  g(x) \le 0.        (GP)

We are faced with the following offer. The Dual Co. will buy us out as follows: The Dual Co. provides us with a vector of prices \lambda = (\lambda_1, \lambda_2, ..., \lambda_m) \ge 0. We then choose x \in X and the Dual Co. will pay us

    -\lambda^T g(x) = -\sum_{i=1}^{m} \lambda_i g_i(x)

(think of the vector g(x) as being the vector of resources used when we choose x \in X). On the other hand, since we will not be producing, we are to pay the Dual Co. the savings in production costs (i.e., f(x)). Therefore, the net payment to the Dual Co. is

    f(x) + \lambda^T g(x) = L(x, \lambda).
33 Economic Motivation of Duality
Of course, when faced with \lambda \ge 0 we would like to pay the Dual Co. as little as possible; i.e., we would like to pay

    L^*(\lambda) = min_{x \in X} L(x, \lambda),

and, of course, the Dual Co. would like to choose a vector \lambda so that we pay the Dual Co. as much as possible. That is, the Dual Co. would like to choose a \lambda^* \ge 0 so that

    L^*(\lambda^*) = max_{\lambda \ge 0} L^*(\lambda).

Now, we already know that max_{\lambda \ge 0} L^*(\lambda) \le min_{x \in F} f(x) (i.e., our largest possible payment to the Dual Co. is no larger than our optimal production cost).
34 Economic Motivation of Duality
If

    max_{\lambda \ge 0} L^*(\lambda) < min_{x \in F} f(x)        (duality gap),

the Dual Co., presumably, will not want to buy us out, since the amount of money it receives from us must be smaller than its optimal production cost after it buys us out. Therefore, a necessary condition for a rational Dual Co. to make us an offer in the first place is that

    max_{\lambda \ge 0} L^*(\lambda) = min_{x \in F} f(x).

Therefore, assume this condition is met. Now assume that when faced with \lambda \ge 0 we choose an x \in X so that g(x) \le 0 fails (wlog, assume g_1(x) > 0). The Dual Co. will then argue that it did not correctly estimate \lambda_1, and it will offer a new price vector \bar{\lambda} \ge 0 where, say,

    \bar{\lambda}_1 = \lambda_1 + \Delta,  \bar{\lambda}_i = \lambda_i,  i \ge 2,  with \Delta > 0

(i.e., it will raise or increase \lambda_1).
35 Economic Motivation of Duality
Proof:

    L(x, \bar{\lambda}) = f(x) + (\lambda_1 + \Delta) g_1(x) + \lambda_2 g_2(x) + ... + \lambda_m g_m(x)
                        = L(x, \lambda) + \Delta g_1(x) > L(x, \lambda);

i.e., our net payment to the Dual Co. will increase if the Dual Co. is allowed to change its price offer \lambda and we stick to our previous choice of the vector x. [Of course, since the Dual Co. has changed its \lambda, we'll insist on being allowed to change our x.] Therefore, a necessary condition for the Dual Co. to be satisfied with its own price offer is that g(x) \le 0. On the other hand, suppose g(x) \le 0 but \lambda^T g(x) \ne 0. Then \lambda^T g(x) < 0 and therefore, wlog, we may assume \lambda_1 g_1(x) < 0.
36 Economic Motivation of Duality
Show that the Dual Co. will then want to decrease the price \lambda_1. Therefore, a necessary condition for the Dual Company to be satisfied with its offer is

    \lambda^T g(x) = 0,  g(x) \le 0,  and  max_{\lambda \ge 0} L^*(\lambda) = min_{x \in F} f(x).

Hence, a necessary condition for us to actually be bought out is that our optimal production optimization problem (GP) has a saddle point.
37 Crude Idea of a Dual Algorithm for GP
Step 0: Set k = 1, select \lambda^1 \ge 0.

Step 1: Let x^k \in X solve min_{x \in X} L(x, \lambda^k). If x^k \in F = \{ x \in X : g(x) \le 0 \} and (\lambda^k)^T g(x^k) = 0, go to Step 3.

Step 2: Let

    I_k = \{ i : g_i(x^k) > 0 \},   C_k = \{ j : \lambda_j^k > 0, g_j(x^k) < 0 \}.

(NOTE: I_k \cup C_k \ne \emptyset and I_k \cap C_k = \emptyset.) For each i \in I_k (if I_k \ne \emptyset) let \lambda_i^{k+1} > \lambda_i^k. For each j \in C_k (if C_k \ne \emptyset), let 0 \le \lambda_j^{k+1} < \lambda_j^k. Set k \leftarrow k + 1 and return to Step 1.
38 Crude Idea of a Dual Algorithm for GP
Step 3: Stop; x^k is optimal for GP ((x^k, \lambda^k) is a S.P.).

NOTE: The above algorithm is not well-defined, for the following reasons:

    (a) At Step 2 we have not specified by how much to increase the prices associated with violated constraints, nor have we specified by how much to decrease the prices associated with feasible constraints for which complementary slackness fails.
    (b) Even if GP has a saddle point we, as yet, do not know if this algorithm converges to a S.P.
    (c) Since this algorithm is seeking a saddle point for GP, the procedure is automatically in trouble if (GP) does not have a S.P.

This algorithm, of course, was motivated by the above economic discussion; we'll have more to say about these types of procedures later in the course.
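The algorithm leaves the size of the price changes unspecified; one common concrete choice (an assumption here, not prescribed by the notes) is the projected subgradient step \lambda \leftarrow max(0, \lambda + t g(x)), which raises prices on violated constraints and lowers them on slack ones, exactly as Step 2 requires. A sketch on the assumed problem min x^2 subject to g(x) = 1 - x <= 0 with X = R, which has the saddle point (x^*, \lambda^*) = (1, 2):

```python
# Dual price-adjustment scheme with a fixed-step projected subgradient update,
# on min x^2 s.t. g(x) = 1 - x <= 0, X = R (assumed data; saddle point (1, 2)).

def solve_inner(lam):
    # Step 1: argmin_x L(x, lam) = x^2 + lam*(1 - x)  =>  x = lam/2.
    return lam / 2.0

def dual_ascent(lam=0.0, step=0.5, iters=200):
    for _ in range(iters):
        x = solve_inner(lam)
        viol = 1.0 - x                     # g(x): > 0 violated, < 0 slack
        if abs(viol) < 1e-9 and abs(lam * viol) < 1e-9:
            break                          # Step 3: feasible + compl. slackness
        lam = max(0.0, lam + step * viol)  # Step 2: raise/lower the price
    return x, lam

x_k, lam_k = dual_ascent()
print(x_k, lam_k)  # approaches the saddle point (1.0, 2.0)
```

The stopping test mirrors Step 1's check (feasibility plus complementary slackness); the fixed step size is the simplest of the unspecified update rules the note warns about.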
More informationAlgorithms for parallel processor scheduling with distinct due windows and unit-time jobs
BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES Vol. 57, No. 3, 2009 Algoriths for parallel processor scheduling with distinct due windows and unit-tie obs A. JANIAK 1, W.A. JANIAK 2, and
More informationDistributed Subgradient Methods for Multi-agent Optimization
1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions
More information3.3 Variational Characterization of Singular Values
3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and
More informationDuality. Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities
Duality Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities Lagrangian Consider the optimization problem in standard form
More informationOn Conditions for Linearity of Optimal Estimation
On Conditions for Linearity of Optial Estiation Erah Akyol, Kuar Viswanatha and Kenneth Rose {eakyol, kuar, rose}@ece.ucsb.edu Departent of Electrical and Coputer Engineering University of California at
More informationChap 2. Optimality conditions
Chap 2. Optimality conditions Version: 29-09-2012 2.1 Optimality conditions in unconstrained optimization Recall the definitions of global, local minimizer. Geometry of minimization Consider for f C 1
More informationA Better Algorithm For an Ancient Scheduling Problem. David R. Karger Steven J. Phillips Eric Torng. Department of Computer Science
A Better Algorith For an Ancient Scheduling Proble David R. Karger Steven J. Phillips Eric Torng Departent of Coputer Science Stanford University Stanford, CA 9435-4 Abstract One of the oldest and siplest
More informationIntroduction to Machine Learning. Recitation 11
Introduction to Machine Learning Lecturer: Regev Schweiger Recitation Fall Seester Scribe: Regev Schweiger. Kernel Ridge Regression We now take on the task of kernel-izing ridge regression. Let x,...,
More informationarxiv: v1 [cs.ds] 29 Jan 2012
A parallel approxiation algorith for ixed packing covering seidefinite progras arxiv:1201.6090v1 [cs.ds] 29 Jan 2012 Rahul Jain National U. Singapore January 28, 2012 Abstract Penghui Yao National U. Singapore
More informationChapter 6 1-D Continuous Groups
Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:
More informationOPTIMIZATION in multi-agent networks has attracted
Distributed constrained optiization and consensus in uncertain networks via proxial iniization Kostas Margellos, Alessandro Falsone, Sione Garatti and Maria Prandini arxiv:603.039v3 [ath.oc] 3 May 07 Abstract
More informationIE 5531: Engineering Optimization I
IE 5531: Engineering Optimization I Lecture 7: Duality and applications Prof. John Gunnar Carlsson September 29, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 29, 2010 1
More informationGeneralized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili,
Australian Journal of Basic and Applied Sciences, 5(3): 35-358, 20 ISSN 99-878 Generalized AOR Method for Solving Syste of Linear Equations Davod Khojasteh Salkuyeh Departent of Matheatics, University
More informationSlide10. Haykin Chapter 8: Principal Components Analysis. Motivation. Principal Component Analysis: Variance Probe
Slide10 Motivation Haykin Chapter 8: Principal Coponents Analysis 1.6 1.4 1.2 1 0.8 cloud.dat 0.6 CPSC 636-600 Instructor: Yoonsuck Choe Spring 2015 0.4 0.2 0 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 How can we
More informationlecture 36: Linear Multistep Mehods: Zero Stability
95 lecture 36: Linear Multistep Mehods: Zero Stability 5.6 Linear ultistep ethods: zero stability Does consistency iply convergence for linear ultistep ethods? This is always the case for one-step ethods,
More informationMidterm 1 Sample Solution
Midter 1 Saple Solution NOTE: Throughout the exa a siple graph is an undirected, unweighted graph with no ultiple edges (i.e., no exact repeats of the sae edge) and no self-loops (i.e., no edges fro a
More information26 Impulse and Momentum
6 Ipulse and Moentu First, a Few More Words on Work and Energy, for Coparison Purposes Iagine a gigantic air hockey table with a whole bunch of pucks of various asses, none of which experiences any friction
More informationSupport Vector Machines. Machine Learning Series Jerry Jeychandra Blohm Lab
Support Vector Machines Machine Learning Series Jerry Jeychandra Bloh Lab Outline Main goal: To understand how support vector achines (SVMs) perfor optial classification for labelled data sets, also a
More informationE0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis
E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds
More informationAnswers to Econ 210A Midterm, October A. The function f is homogeneous of degree 1/2. To see this, note that for all t > 0 and all (x 1, x 2 )
Question. Answers to Econ 20A Midter, October 200 f(x, x 2 ) = ax {x, x 2 } A. The function f is hoogeneous of degree /2. To see this, note that for all t > 0 and all (x, x 2 ) f(tx, x 2 ) = ax {tx, tx
More informationEE/AA 578, Univ of Washington, Fall Duality
7. Duality EE/AA 578, Univ of Washington, Fall 2016 Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationBlock designs and statistics
Bloc designs and statistics Notes for Math 447 May 3, 2011 The ain paraeters of a bloc design are nuber of varieties v, bloc size, nuber of blocs b. A design is built on a set of v eleents. Each eleent
More informationSupplement to: Subsampling Methods for Persistent Homology
Suppleent to: Subsapling Methods for Persistent Hoology A. Technical results In this section, we present soe technical results that will be used to prove the ain theores. First, we expand the notation
More informationOBJECTIVES INTRODUCTION
M7 Chapter 3 Section 1 OBJECTIVES Suarize data using easures of central tendency, such as the ean, edian, ode, and idrange. Describe data using the easures of variation, such as the range, variance, and
More informationSoft-margin SVM can address linearly separable problems with outliers
Non-linear Support Vector Machines Non-linearly separable probles Hard-argin SVM can address linearly separable probles Soft-argin SVM can address linearly separable probles with outliers Non-linearly
More informationCS Lecture 13. More Maximum Likelihood
CS 6347 Lecture 13 More Maxiu Likelihood Recap Last tie: Introduction to axiu likelihood estiation MLE for Bayesian networks Optial CPTs correspond to epirical counts Today: MLE for CRFs 2 Maxiu Likelihood
More informationNumerical Optimization
Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,
More informationNow multiply the left-hand-side by ω and the right-hand side by dδ/dt (recall ω= dδ/dt) to get:
Equal Area Criterion.0 Developent of equal area criterion As in previous notes, all powers are in per-unit. I want to show you the equal area criterion a little differently than the book does it. Let s
More informationProbability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers Roy D. Yates and David J.
Probability and Stochastic Processes: A Friendly Introduction for Electrical and oputer Engineers Roy D. Yates and David J. Goodan Proble Solutions : Yates and Goodan,1..3 1.3.1 1.4.6 1.4.7 1.4.8 1..6
More information