6.854J / 18.415J Advanced Algorithms Fall 2008
MIT OpenCourseWare 6.854J / 18.415J Advanced Algorithms Fall 2008. For information about citing these materials or our Terms of Use, visit:
18.415/6.854 Advanced Algorithms                                October 27, 2008
Lecturer: Michel X. Goemans                                           Lecture 14

1 Introduction

In this lecture we look at using interior point algorithms for solving linear programs, and more generally convex programs. Originally developed in 1984 by Narendra Karmarkar, interior point algorithms have spawned many variants (with keywords such as path following, primal-dual, potential reduction, etc.), especially through the late 80s and early 90s. In the late 90s, people began to realize that interior point algorithms could also be used to solve semidefinite programs (or, even more generally, convex programs). As much as possible, we will discuss linear programming, semidefinite programming, and even a larger class called conic programming in a unified way.

2 Linear Programming

We will start with linear programming. Remember that in linear programming, we have:

Primal: Given A ∈ R^{m×n}, c ∈ R^n and b ∈ R^m, find x ∈ R^n:

    Min c^T x
    s.t. Ax = b,
         x ≥ 0.

Its dual linear program is:

Dual: Find y ∈ R^m:

    Max b^T y
    s.t. A^T y ≤ c.

We can introduce non-negative slack variables and rewrite this as:

Dual: Find y ∈ R^m, s ∈ R^n:

    Max b^T y
    s.t. A^T y + s = c,
         s ≥ 0.

We know, by complementary slackness, that a feasible solution x in the primal and a feasible solution (y, s) in the dual are both optimal (for the primal and the dual resp.) iff x^T s = 0. Since this is the inner product of two non-negative vectors, we can equivalently say: x_j s_j = 0 for all j.

2.1 Using the Interior Point Algorithm

The interior point algorithm will iteratively maintain a strictly feasible solution in the primal, such that for all values of j, x_j > 0. Similarly in the dual, it will maintain a y and an s such that for all values of j, s_j > 0. Because of this strict inequality, we can never reach our optimality
condition stated above; however, we will get very close, and once we do, we can show that a jump from this non-optimal solution (for either the primal or the dual) to a vertex of improved cost (of the corresponding program) will provide an optimal solution to the (primal or dual) program.

In some linear programs, it may not be possible to start with a strictly positive solution. For example, it may be that x_j = 0 for some j in every feasible solution, so we may be unable to find a strictly feasible solution with which to start the algorithm. This can be dealt with easily, but we will not discuss it here. We will assume that the primal and dual both have strictly feasible solutions.

3 Semidefinite Programming

As introduced in the previous lecture, in semidefinite programming our variables are the entries of a symmetric positive semidefinite matrix X. Let S^n denote the set of all real, symmetric n × n matrices. For two such matrices A and B, we define an inner product

    A • B = Σ_{i,j} A_ij B_ij = Trace(A^T B) = Trace(AB).

Semidefinite programming (as a minimization problem) is

    Min C • X
    s.t. A_i • X = b_i,   i = 1...m,
         X ⪰ 0.

Remember that for a symmetric matrix M, M ⪰ 0 means that M is positive semidefinite, meaning that all of its (real) eigenvalues satisfy λ ≥ 0, or equivalently, x^T M x ≥ 0 for all x.

3.1 Dual for SDP

When working with linear programs, we know the existence of a dual linear program with a strong property: any feasible dual solution provides a lower bound on the optimum primal value and, if either program is feasible and bounded, the optimum primal and optimum dual values are equal. Does a similar dual for a semidefinite program exist? The answer is yes, although we will need some additional condition. We claim that the dual takes the following form.

Dual: Find y ∈ R^m and S ∈ S^n:

    Max b^T y
    s.t. Σ_i y_i A_i + S = C,
         S ⪰ 0.
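As a quick numerical illustration (this numpy snippet is my own, not part of the notes), the inner product A • B coincides with Trace(AB) for symmetric matrices, and is nonnegative whenever both matrices are positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    # V^T V is always positive semidefinite (the same decomposition as in
    # the Cholesky-based proof of nonnegativity below)
    V = rng.standard_normal((n, n))
    return V.T @ V

A, B = random_psd(4), random_psd(4)

# A . B = sum_{i,j} A_ij B_ij = Trace(A^T B) = Trace(AB) for symmetric A
inner = np.sum(A * B)
assert np.isclose(inner, np.trace(A @ B))

# Both matrices PSD  =>  A . B >= 0
assert inner >= 0
```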
3.1.1 Weak Duality

For weak duality, consider any feasible solution X in the primal and any feasible solution (y, S) in the dual. We have:

    C • X = (Σ_i y_i A_i + S) • X = Σ_i y_i (A_i • X) + S • X = Σ_i y_i b_i + S • X = b^T y + S • X ≥ b^T y,

the last inequality following from Lemma 1 below. This is true for any primal and dual feasible solutions, and therefore we have z ≥ w, where:

    z = min{C • X : X feasible for primal},
    w = max{b^T y : (y, S) feasible for dual}.

Lemma 1 For any A, B ⪰ 0, we have A • B ≥ 0.

Proof of Lemma 1: Any positive semidefinite matrix A admits a Cholesky decomposition: A = V^T V for some n × n matrix V. Thus,

    A • B = Trace(AB) = Trace(V^T V B) = Trace(V B V^T),

the last equality following from the fact that, for (not necessarily symmetric) square matrices C and D, we have Trace(CD) = Trace(DC). But V B V^T is positive semidefinite (since x^T V B V^T x ≥ 0 for all x), and thus its trace is nonnegative, proving the result.

A similar lemma was used when we were talking about linear programming, namely that if a, b ∈ R^n with a, b ≥ 0 then a^T b ≥ 0.

3.1.2 Strong Duality

In general, it is not true that z = w. Several things can go wrong. In defining z, we wrote z = min C • X. However, that min is not really a min, but rather an infimum: it might happen that the infimum value can be approached arbitrarily closely but no solution attains it precisely. Similarly in the dual, the supremum may not be attained. In addition, in semidefinite programming it is possible that the primal has a finite value while the dual is infeasible. In linear programming this was not the case: if the primal had a finite feasible value and was bounded, the dual was also finite and with the same value. In semidefinite programming, the primal can be finite while the dual is infeasible, or vice versa. Furthermore, both the primal and dual could be finite, but of differing values. That all said, in the typical case you do have strong duality (z = w), though only under certain conditions.

3.1.3 Introducing a Regularity Condition

Assume that the primal and dual have strictly feasible solutions.
This means that for the primal there exists X such that

    A_i • X = b_i,   i = 1...m,
    X ≻ 0.
Here A ≻ 0 denotes that A is a positive definite matrix, meaning that a^T A a > 0 for all a ≠ 0, or equivalently that all its eigenvalues satisfy λ > 0. Likewise, in the dual, there exist y and S such that:

    Σ_i y_i A_i + S = C,
    S ≻ 0.

If we assume this regularity condition that we have defined above, then the primal value z is finite and attained (i.e. it is not just an infimum, but actually a minimum), the dual value w is attained, and furthermore z = w. This is given without proof.

4 Conic Programming

Conic programming is a generalization of both linear programming and semidefinite programming. First, we need the definition of a cone:

Definition 1 A cone is a subset C of R^n with the property that for any v ∈ C and λ ∈ R_+, λv is also in C.

Conic programming is constrained optimization over K, a closed convex cone, with a given inner product ⟨x, y⟩. We can, for example, take K = R^n_+ and ⟨x, y⟩ = x^T y for x, y ∈ R^n; this will lead to linear programming. Conic programming, like LP and SDP, has both a primal and a dual form; the primal is:

Primal: Given A ∈ R^{m×n}, b ∈ R^m, and c ∈ R^n:

    min ⟨c, x⟩
    s.t. Ax = b,
         x ∈ K.

More generally, we could view K as a cone in any space, and then A is a linear operator from that space to R^m. To form the dual of a conic program, we first need the polar cone K* of K. The polar cone is defined to be the set of all s such that ⟨s, x⟩ ≥ 0 for all x in K. For instance, the polar cone of R^n_+ is R^n_+ itself (indeed, if s_j < 0 then s ∉ K* since ⟨e_j, s⟩ < 0; conversely, if s ≥ 0 then ⟨x, s⟩ ≥ 0 for all x ∈ K). In the case that K* = K, we say that K is self-polar. Similarly, the polar cone of PSD, the set of positive semidefinite matrices, is also itself. We also define the adjoint (operator) A* of A to be such that, for all x and y, ⟨A* y, x⟩ = ⟨y, Ax⟩. For example, if the inner product is the standard dot product and A is the matrix corresponding to a linear transformation from R^n to R^m, then A* = A^T. To write the conic dual, we introduce variables y ∈ R^m and s ∈ R^n and optimize:

Dual:

    max ⟨b, y⟩
    s.t. A* y + s = c,
         s ∈ K*.
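The polar cone and adjoint definitions can be sanity-checked numerically. The following sketch (my own, with arbitrary random data) verifies the adjoint identity ⟨A^T y, x⟩ = ⟨y, Ax⟩ for the standard dot product, and the self-polarity inequality for the nonnegative orthant:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
x = rng.standard_normal(5)
y = rng.standard_normal(3)

# With the standard dot product, the adjoint of A is A^T: <A^T y, x> = <y, Ax>
assert np.isclose((A.T @ y) @ x, y @ (A @ x))

# Self-polarity of the nonnegative orthant: s >= 0 and x >= 0  =>  <s, x> >= 0
s = np.abs(rng.standard_normal(5))
x_pos = np.abs(rng.standard_normal(5))
assert s @ x_pos >= 0
```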
4.0.4 Weak Duality

We can prove weak duality (that the value of the primal is at least the value of the dual) as follows. Let x be any primal feasible solution and (y, s) be any dual feasible solution. Then

    ⟨c, x⟩ = ⟨A* y + s, x⟩ = ⟨A* y, x⟩ + ⟨s, x⟩ = ⟨y, Ax⟩ + ⟨s, x⟩ = ⟨b, y⟩ + ⟨s, x⟩ ≥ ⟨b, y⟩,

where we have used the definition of K* to show that ⟨s, x⟩ ≥ 0. This means that z, the infimum value of the primal, is at least the supremum value w of the dual.
4.0.5 Strong Duality

In the general case, we do not know that the two values will be equal. But we have the following statement (analogous to the regularity condition for SDP): if there exists an x in the interior of K such that Ax = b, and an s in the interior of K* with A* y + s = c for some y, then the primal and the dual both attain their optimal values, and those values are equal.

4.1 Semidefinite Programming as a Special Case of Conic Programming

LP is a special case of conic programming, if we let K = R^n_+ and take the inner product to be the standard dot product ⟨a, b⟩ = a^T b. We can also make any SDP into a conic program; first, we need a way of transforming semidefinite matrices into vectors. Since we are optimizing over symmetric matrices, we introduce a map svec(M) that only takes the lower triangle of the matrix (including the diagonal). To be able to use the standard dot product with these vectors, svec multiplies all of the off-diagonal entries by √2. So svec maps X to

    (x_11, x_22, ..., x_nn, √2 x_12, √2 x_13, ..., √2 x_(n−1)n).

As a result:

    ⟨svec(X), svec(Y)⟩ = Σ_i x_ii y_ii + Σ_{1≤i<j≤n} (√2 x_ij)(√2 y_ij) = Σ_{i,j} x_ij y_ij = Tr(XY) = X • Y.

This means that using the basic dot product as the inner product is compatible with the inner product used in SDP. So we can formulate an SDP as a conic program by letting K = {svec(X) : X ⪰ 0}, which is a closed convex cone. To show convexity, we need to show that if A and B are matrices in PSD, then λA + (1−λ)B is also in PSD for 0 ≤ λ ≤ 1. Indeed, for any vector v, we have

    v^T (λA + (1−λ)B) v = λ (v^T A v) + (1−λ) (v^T B v) ≥ 0.

Then, we can let the matrix A be built from the corresponding A_i of the semidefinite program, so that A svec(X) = (A_i • X)_{i=1,...,m}. Now that the semidefinite program is cast into a conic program, we could write the conic dual, and one could verify that what we get is precisely the dual of the semidefinite program we defined earlier.
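A minimal sketch (my own; `svec` implemented as described above) checking that the √2 scaling makes the standard dot product reproduce the SDP inner product:

```python
import numpy as np

def svec(X):
    # Diagonal, then the strictly lower triangle scaled by sqrt(2)
    n = X.shape[0]
    diag = np.diag(X)
    off = np.sqrt(2.0) * X[np.tril_indices(n, k=-1)]
    return np.concatenate([diag, off])

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)); X = M + M.T   # arbitrary symmetric matrices
N = rng.standard_normal((4, 4)); Y = N + N.T

# <svec(X), svec(Y)> = Tr(XY) = X . Y
assert np.isclose(svec(X) @ svec(Y), np.trace(X @ Y))
```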
Instead of mapping the space of symmetric matrices (say p × p) into R^n (with n = p(p+1)/2) using svec(·), one could simply define K = {X ∈ S^p : X ⪰ 0} and ⟨X, Y⟩ = X • Y. Our linear operator A : S^p → R^m then maps X into (A_i • X)_{i=1,...,m}. Its adjoint A* : R^m → S^p is defined by

    ⟨A*(y), X⟩ := ⟨y, A(X)⟩ = Σ_{i=1}^m y_i (A_i • X),

implying that A* maps y to Σ_{i=1}^m y_i A_i. The dual SDP now arises as the dual conic program.

4.2 Barrier Functions

To solve the conic program, we will require a barrier function F. This is a function from int(K), the interior of K, to R such that:

1. F is strictly convex,
2. F(x_i) → ∞ as x_i → x ∈ ∂K, where ∂K is the boundary of K.
We will use the barrier function to penalize candidate solutions that are close to the boundary of K, keeping the current point inside K. Good barrier functions, which result in a fast overall algorithm, have more properties that will be described in a later lecture.

For K = R^n_+, a good barrier function is F(x) = −Σ_i log(x_i). As any one of the coordinates approaches 0, the corresponding log approaches −∞, so the total function goes to +∞. One can also check that this function is strictly convex.

For K = svec(PSD_p), or more simply K = PSD_p (the set of symmetric p × p positive semidefinite matrices), the interior of K is the set of positive definite matrices, which all have strictly positive determinants. (This is because the determinant is equal to the product of the eigenvalues, which are all strictly positive for a positive definite matrix.) So we can use the following barrier function: F(X) = −log(det(X)). As X approaches the boundary of K, the determinant goes to zero, and F goes to infinity. One can also check that this function is strictly convex (its Hessian, the matrix of second derivatives, can be shown to be positive definite).

4.3 A Primal-Dual Interior-Point Method

Once we have a barrier function, we set the objective function of the primal to ⟨c, x⟩ + μF(x), where μ is a parameter that we will adjust through the course of the algorithm. Assuming that we start with an initial candidate that belongs to int(K), we can ignore the constraint that x ∈ K, since it will be enforced through the barrier function: there is an infinite penalty for leaving K. Our primal barrier problem BP(μ) will be:

    min{⟨c, x⟩ + μF(x) : Ax = b}.

Analogously, for the dual, we change the objective function to ⟨b, y⟩ − μF(s), where F is a barrier function for the dual cone; we can also eliminate the constraint that s ∈ K*. Our dual barrier problem, BD(μ), is:

    max{⟨b, y⟩ − μF(s) : A* y + s = c}.

The basic method of the algorithm is to have a current value of μ, and keep track of the optimal solutions to the primal BP(μ) and dual BD(μ).
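Returning to the two barrier functions above, here is a small numerical check (my own snippet) that each penalty grows as the point approaches the boundary of its cone:

```python
import numpy as np

# Barrier for the nonnegative orthant: F(x) = -sum_i log(x_i)
F_orthant = lambda x: -np.sum(np.log(x))

# Barrier for the PSD cone: F(X) = -log det(X)
F_psd = lambda X: -np.log(np.linalg.det(X))

# The penalty blows up as a point approaches the boundary of the cone
interior = np.array([1.0, 1.0])
near_boundary = np.array([1.0, 1e-9])       # one coordinate close to zero
assert F_orthant(near_boundary) > F_orthant(interior)

# Same behavior for PSD matrices: one eigenvalue close to zero
assert F_psd(np.diag([1.0, 1.0, 1e-9])) > F_psd(np.eye(3))
```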
As long as μ is not zero, there is a unique optimum solution for both, since the objective function is the sum of a linear function and a strictly convex function, which is itself strictly convex. We will steadily decrease μ and keep track of the optimal solutions as they change; the path the optimum solutions trace out is called the central path (or central trajectory). We will show that the (primal and dual) central paths converge to optimum solutions of the original primal and dual programs. In the special case of linear programming, once we are sufficiently close, we can round the current solution to the nearest vertex to obtain an optimum solution. For semidefinite programming, though, we do not have such an algorithm to convert a solution for small enough μ into an optimum solution.

Let us characterize the optimum solutions to BP(μ) and BD(μ). We now derive the so-called KKT optimality conditions. If there were no constraints in the conic program, then the minimum would be found where the gradient of the objective function is zero. If there are affine constraints like Ax = b, however, the minimum will occur where the gradient is normal to the affine space of feasible solutions. Otherwise, we could move along the projection of the gradient onto the feasible space and improve our objective function.

For simplicity, let us first look at the case when K = R^n_+ and the barrier function is F(x) = −Σ_i log(x_i). The objective function of the primal is ⟨c, x⟩ + μF(x), and the partial derivatives are

    ∂/∂x_j (⟨c, x⟩ + μF(x)) = c_j − μ/x_j,
so the gradient is c − μx^{−1}, where x^{−1} denotes the vector (1/x_i)_i. But since this gradient must be normal to the affine space of solutions of Ax = b, it must be of the form A^T y for some y. So if we let s = μx^{−1}, then we know c − s is of the form A^T y, or equivalently,

    A^T y + s = c,
    s = μx^{−1}.

The last constraint is equivalent to

    x_j s_j = μ    (1)

for all j. Now, looking at the dual: the gradient with respect to y is b, which must be of the form Ax for some x. The gradient with respect to s is μs^{−1}, which must equal the same x. This means that

    Ax = b,
    s = μx^{−1},

and the last equality is again equivalent to (1). So if we denote by x(μ) the optimum solution to the primal BP(μ) and by (y(μ), s(μ)) the optimum solution to the dual BD(μ), one observes that each of them is a certificate of optimality for the other and furthermore:

    x_j(μ) s_j(μ) = μ.

This means that the duality gap in the original primal/dual pair of linear programs is x^T s = nμ, and therefore the duality gap goes to 0 as μ goes to 0. Thus the central path (x(μ), y(μ), s(μ)) converges to optimum solutions of both the primal and dual linear programs.
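To make the central-path conditions concrete, the following sketch (my own toy instance, not from the notes) computes the central-path point x(μ) for the LP min c^T x over the simplex {x ≥ 0 : Σ_j x_j = 1}. Here the stationarity condition c_j − μ/x_j = y gives x_j = μ/(c_j − y), and y can be found by bisection so that the x_j sum to 1; the script then verifies equation (1) and the duality gap nμ:

```python
import numpy as np

# Toy LP over the simplex:  min c^T x  s.t.  1^T x = 1, x >= 0.
# Barrier problem: min c^T x - mu * sum(log x_j)  s.t.  1^T x = 1.
# Stationarity gives c_j - mu/x_j = y, i.e. x_j = mu/(c_j - y),
# with the multiplier y chosen so that sum_j x_j = 1.
c = np.array([1.0, 2.0, 4.0])
n = len(c)
mu = 0.1

# sum_j mu/(c_j - y) is increasing in y on (-inf, min(c)); bisect for the root
lo, hi = c.min() - n * mu - 1.0, c.min() - 1e-12
for _ in range(200):
    y = 0.5 * (lo + hi)
    if np.sum(mu / (c - y)) < 1.0:
        lo = y
    else:
        hi = y

x = mu / (c - y)                  # primal central-path point x(mu)
s = c - y                         # dual slack s(mu) = c - A^T y, with A = 1^T

assert np.isclose(x.sum(), 1.0) and (x > 0).all() and (s > 0).all()
assert np.allclose(x * s, mu)     # equation (1): x_j(mu) s_j(mu) = mu
assert np.isclose(x @ s, n * mu)  # duality gap n*mu, vanishing as mu -> 0
```

Decreasing `mu` and re-running the bisection traces out the central path of this instance; as μ → 0 the point x(μ) approaches the vertex e_1, the optimum of the toy LP.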
80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.
More informationKernel Methods and SVMs Extension
Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general
More informationWeek 5: Neural Networks
Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple
More informationDifference Equations
Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1
More informationSupport Vector Machines CS434
Support Vector Machnes CS434 Lnear Separators Many lnear separators exst that perfectly classfy all tranng examples Whch of the lnear separators s the best? + + + + + + + + + Intuton of Margn Consder ponts
More informationVector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence.
Vector Norms Chapter 7 Iteratve Technques n Matrx Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematcs Unversty of Calforna, Berkeley Math 128B Numercal Analyss Defnton A vector norm
More informationREAL ANALYSIS I HOMEWORK 1
REAL ANALYSIS I HOMEWORK CİHAN BAHRAN The questons are from Tao s text. Exercse 0.0.. If (x α ) α A s a collecton of numbers x α [0, + ] such that x α
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 17. a ij x (k) b i. a ij x (k+1) (D + L)x (k+1) = b Ux (k)
STAT 309: MATHEMATICAL COMPUTATIONS I FALL 08 LECTURE 7. sor method remnder: n coordnatewse form, Jacob method s = [ b a x (k) a and Gauss Sedel method s = [ b a = = remnder: n matrx form, Jacob method
More informationAffine and Riemannian Connections
Affne and Remannan Connectons Semnar Remannan Geometry Summer Term 2015 Prof Dr Anna Wenhard and Dr Gye-Seon Lee Jakob Ullmann Notaton: X(M) space of smooth vector felds on M D(M) space of smooth functons
More informationMATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS
MATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS These are nformal notes whch cover some of the materal whch s not n the course book. The man purpose s to gve a number of nontrval examples
More informatione - c o m p a n i o n
OPERATIONS RESEARCH http://dxdoorg/0287/opre007ec e - c o m p a n o n ONLY AVAILABLE IN ELECTRONIC FORM 202 INFORMS Electronc Companon Generalzed Quantty Competton for Multple Products and Loss of Effcency
More information3.1 ML and Empirical Distribution
67577 Intro. to Machne Learnng Fall semester, 2008/9 Lecture 3: Maxmum Lkelhood/ Maxmum Entropy Dualty Lecturer: Amnon Shashua Scrbe: Amnon Shashua 1 In the prevous lecture we defned the prncple of Maxmum
More informationLinear Approximation with Regularization and Moving Least Squares
Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...
More information2 More examples with details
Physcs 129b Lecture 3 Caltech, 01/15/19 2 More examples wth detals 2.3 The permutaton group n = 4 S 4 contans 4! = 24 elements. One s the dentty e. Sx of them are exchange of two objects (, j) ( to j and
More informationApproximate D-optimal designs of experiments on the convex hull of a finite set of information matrices
Approxmate D-optmal desgns of experments on the convex hull of a fnte set of nformaton matrces Radoslav Harman, Mára Trnovská Department of Appled Mathematcs and Statstcs Faculty of Mathematcs, Physcs
More informationBOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS
BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all
More information1 GSW Iterative Techniques for y = Ax
1 for y = A I m gong to cheat here. here are a lot of teratve technques that can be used to solve the general case of a set of smultaneous equatons (wrtten n the matr form as y = A), but ths chapter sn
More informationExpected Value and Variance
MATH 38 Expected Value and Varance Dr. Neal, WKU We now shall dscuss how to fnd the average and standard devaton of a random varable X. Expected Value Defnton. The expected value (or average value, or
More informationMore metrics on cartesian products
More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of
More informationECE559VV Project Report
ECE559VV Project Report (Supplementary Notes Loc Xuan Bu I. MAX SUM-RATE SCHEDULING: THE UPLINK CASE We have seen (n the presentaton that, for downlnk (broadcast channels, the strategy maxmzng the sum-rate
More informationMATH Homework #2
MATH609-601 Homework #2 September 27, 2012 1. Problems Ths contans a set of possble solutons to all problems of HW-2. Be vglant snce typos are possble (and nevtable). (1) Problem 1 (20 pts) For a matrx
More informationDynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence)
/24/27 Prevew Fbonacc Sequence Longest Common Subsequence Dynamc programmng s a method for solvng complex problems by breakng them down nto smpler sub-problems. It s applcable to problems exhbtng the propertes
More information2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification
E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton
More informationNP-Completeness : Proofs
NP-Completeness : Proofs Proof Methods A method to show a decson problem Π NP-complete s as follows. (1) Show Π NP. (2) Choose an NP-complete problem Π. (3) Show Π Π. A method to show an optmzaton problem
More informationSolving the Quadratic Eigenvalue Complementarity Problem by DC Programming
Solvng the Quadratc Egenvalue Complementarty Problem by DC Programmng Y-Shua Nu 1, Joaqum Júdce, Le Th Hoa An 3 and Pham Dnh Tao 4 1 Shangha JaoTong Unversty, Maths Departement and SJTU-Parstech, Chna
More informationMaximizing the number of nonnegative subsets
Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum
More informationCSCE 790S Background Results
CSCE 790S Background Results Stephen A. Fenner September 8, 011 Abstract These results are background to the course CSCE 790S/CSCE 790B, Quantum Computaton and Informaton (Sprng 007 and Fall 011). Each
More informationSL n (F ) Equals its Own Derived Group
Internatonal Journal of Algebra, Vol. 2, 2008, no. 12, 585-594 SL n (F ) Equals ts Own Derved Group Jorge Macel BMCC-The Cty Unversty of New York, CUNY 199 Chambers street, New York, NY 10007, USA macel@cms.nyu.edu
More informationPerron Vectors of an Irreducible Nonnegative Interval Matrix
Perron Vectors of an Irreducble Nonnegatve Interval Matrx Jr Rohn August 4 2005 Abstract As s well known an rreducble nonnegatve matrx possesses a unquely determned Perron vector. As the man result of
More informationTransfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system
Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng
More informationLecture 20: November 7
0-725/36-725: Convex Optmzaton Fall 205 Lecturer: Ryan Tbshran Lecture 20: November 7 Scrbes: Varsha Chnnaobreddy, Joon Sk Km, Lngyao Zhang Note: LaTeX template courtesy of UC Berkeley EECS dept. Dsclamer:
More information1 Matrix representations of canonical matrices
1 Matrx representatons of canoncal matrces 2-d rotaton around the orgn: ( ) cos θ sn θ R 0 = sn θ cos θ 3-d rotaton around the x-axs: R x = 1 0 0 0 cos θ sn θ 0 sn θ cos θ 3-d rotaton around the y-axs:
More information