Lecture 4: Random-order model for the k-secretary problem
Algoritmos e Incerteza (PUC-Rio INF2979), Lecture 4: Random-order model for the k-secretary problem. Lecturer: Marco Molinaro. April 3rd. Scribe: Joaquim Dias Garcia.

In this lecture we continue with the random-order models introduced in Lecture 3. We go deeper into this class of models by extending the secretary problem to the $k$-secretary problem. The Learning a Classifier idea from the previous lecture is applied to devise an algorithm, which we then analyze using the probability tools of Lecture 3. Now let's go back a little.

0.1 The secretary problem: a brief review

Going straight to the point, and omitting the storytelling presented in Lecture 3, the secretary problem goes like this. A problem instance is given by the set $\{v_1, v_2, \ldots, v_n\}$, where $n$ is given and all $v_i$ are positive real numbers. These values are presented to us in random order: $v_{I_1}, v_{I_2}, v_{I_3}, \ldots$ At each time step we either pick the value presented or abandon it. If we pick the value $v_i$, that is our solution value and we cannot choose any other $v_j$; if we discard it, we simply wait for the next value. After $n$ steps the process ends and the solution is the value we have chosen. The off-line optimum is clearly $OPT = \max_i v_i$, since the whole instance is known in advance.

In Lecture 3 we developed the following algorithm A1: during steps $1$ to $n/2$, compute $M$, the maximum of the values that show up; from step $n/2 + 1$ on, simply select the first value that is larger than $M$. (We assume $n$ is even just to simplify the notation and modeling.) Also from Lecture 3 we had:

Lemma 0.1. $\mathbb{E}[A1] = \frac{1}{n!} \sum_{\sigma \in \Sigma_n} A1(\sigma) \ge \frac{1}{4}\, OPT$, which means that A1 is $1/4$-approximate.

Here, $\Sigma_n$ is the set of permutations of the numbers $1$ to $n$. This notation follows from the observation that every possible input obtained from the instance $\{v_1, v_2, \ldots, v_n\}$ is some permutation of these values.
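A1 is short enough to state in code. The sketch below (my own rendering, with hypothetical names; not from the notes) also estimates $\mathbb{E}[A1]/OPT$ by simulation, which Lemma 0.1 guarantees to be at least $1/4$.

```python
import random

def a1(values):
    """Algorithm A1: observe the first n/2 values, record their maximum M,
    then accept the first later value exceeding M (0 if none is accepted)."""
    n = len(values)
    m = max(values[: n // 2])            # maximum seen in the observation phase
    for v in values[n // 2:]:            # selection phase
        if v > m:
            return v
    return 0.0

random.seed(0)
instance = [float(i) for i in range(1, 21)]   # 20 distinct positive values
opt = max(instance)

trials = 20000
total = 0.0
for _ in range(trials):
    perm = instance[:]
    random.shuffle(perm)                 # random-order arrival
    total += a1(perm)

ratio = total / trials / opt
print(f"empirical E[A1]/OPT ~ {ratio:.3f}")   # Lemma 0.1 guarantees >= 1/4
```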
0.2 A digression...

Besides the expected value of the algorithm, other interesting questions can be raised. For instance: can we give a probability bound on the algorithm's worst-case performance?
Example: Can we bound $P\big(val(A1) \le \frac{1}{8} OPT\big)$? That is, what is an upper bound on the probability that A1 leads to a bad solution? We start by rewriting the probability of this event as
$$P\Big(val(A1) \le \tfrac{1}{8}\, OPT\Big) = P\Big(OPT - val(A1) \ge \tfrac{7}{8}\, OPT\Big),$$
then note, from Lemma 0.1, that $\mathbb{E}[OPT - val(A1)] \le \frac{3}{4}\, OPT$. Now we use Markov's inequality, $P(X \ge a) \le \mathbb{E}[X]/a$, with $X = OPT - val(A1)$ and $a = \frac{7}{8}\, OPT$. Clearly $X$ is always non-negative, since A1 is never strictly better than $OPT$. Hence,
$$P\Big(OPT - val(A1) \ge \tfrac{7}{8}\, OPT\Big) \le \frac{(3/4)\, OPT}{(7/8)\, OPT} = \frac{6}{7}.$$

0.3 The final solution

Theorem 0.2. There is an algorithm that is $1/e$-approximate, and this is the best one can do even with randomized algorithms.

The main idea is to sample data up to step $n/e$ and be clever in the bounding. For more on the secretary problem the reader is referred to [4] for a historical review and to [5] for many extensions. The secretary problem also appears in the fun math book [9], where it is called the marriage problem.

1 Learning a Classifier

In this course we study two main techniques to tackle problems in the random-order model:

- Learn a Classifier (see for example [3, 1, 8] and many others);
- the Primal method (see for example [6]).

The rest of this lecture focuses on the first method.

1.1 The k-secretary Problem

This problem follows the same setup as the secretary problem, but differs in the following: instead of choosing one item, we have to choose $k$ of them in order to maximize our objective function, which is the sum of the $k$ items chosen [7, 2].
It is worth emphasizing that we keep the rule that decisions are irrevocable: once we select an item we can never discard it, and once we discard an item we never have a chance to select it again. Also, the algorithm does nothing once $k$ items have been chosen. Once again, the off-line optimal solution is trivial: just select the $k$ most valuable items; the optimal value, $OPT$, is their sum. The following assumption will be useful to simplify the analysis:

Assumption (no ties). From now on, all the items in the problem instance have distinct values.

1.2 The off-line optimal

For the sake of clarity, we describe the off-line optimal algorithm, because it provides important insight for the on-line solution. First, let us plot the input data on a line, with the largest values on the left and the smallest on the right. It is clear that in order to optimize our objective function we must:

- find $\lambda^*$, the $k$-th largest value, which is one of the values $v_i$;
- choose all items $i$ with $v_i \ge \lambda^*$.

[Figure 1: Optimal solution: the $k$ best values in red; the optimal classifier $\lambda^*$ is one of the values $v_i$.]

The following definition will be very important to define optimal solution sets and other solution candidates.

Definition 1.1. $pick(\lambda) = \{i : v_i \ge \lambda\}$, i.e., the set of all items with value at least $\lambda$.
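Definition 1.1 and the off-line optimum are easy to put in code. The helper below is a small illustration of mine (the function names are assumptions, not from the notes):

```python
def pick(values, lam):
    """pick(lambda): indices of all items with value at least lambda."""
    return {i for i, v in enumerate(values) if v >= lam}

def offline_opt(values, k):
    """Return (lambda_star, OPT): the k-th largest value and the sum of
    the k most valuable items, i.e. val(pick(lambda_star))."""
    lam_star = sorted(values, reverse=True)[k - 1]
    return lam_star, sum(values[i] for i in pick(values, lam_star))

values = [9.0, 3.0, 7.0, 1.0, 5.0]
lam_star, opt = offline_opt(values, k=3)
print(lam_star, opt)   # 5.0 21.0: pick(5.0) selects the values 9, 7 and 5
```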
We shall study the set $pick(\lambda)$ to better understand the problem. Given $\lambda^*$, the $k$-th largest value, it is trivial to note that $pick(\lambda^*)$ is the optimal solution, i.e., the one with largest value: $OPT = val(pick(\lambda^*))$. Moreover, we can easily note that:

- if $\lambda < \lambda^*$ then $|pick(\lambda)| > |pick(\lambda^*)| = k$, so $\lambda$ gives an infeasible solution;
- if $\lambda > \lambda^*$ then $val(pick(\lambda)) < val(pick(\lambda^*))$,

where $|X|$ denotes the cardinality of the set $X$. Therefore we must choose $\lambda$ carefully, since the first case leads to infeasible solutions and the second to sub-optimal objective values.

Now, suppose that we choose $\lambda$ such that $|pick(\lambda)| = k/2$; what is the value of this feasible solution? We can certainly bound it. Firstly, the number of elements of $OPT$ not picked by $\lambda$, namely $|pick(\lambda^*) \setminus pick(\lambda)| = k/2$, is exactly the number of picked elements $|pick(\lambda)|$. Moreover, the non-picked elements $pick(\lambda^*) \setminus pick(\lambda)$ all have value at most the minimum value picked by $\lambda$, namely $\min(pick(\lambda))$. Together, these imply that $val(pick(\lambda)) \ge \frac{1}{2}\, val(pick(\lambda^*)) = \frac{1}{2}\, OPT$.

The same argument can be used to analyze the case $|pick(\lambda)| = \beta k$ for any $\beta \in [0,1]$.

Lemma 1.2. If $|pick(\lambda)| = \beta k$, with $\beta \in [0,1]$, then $val(pick(\lambda)) \gtrsim \beta\, OPT$ (we use the approximation symbol to emphasize that we are not considering rounding).

1.3 An online algorithm

In order to construct an online algorithm via the Learn a Classifier technique, we combine what we learned in the previous section (look for some value $\hat\lambda$ with $\hat\lambda \ge \lambda^*$ but $\hat\lambda \approx \lambda^*$, so that we have a feasible and near-optimal solution) with the core idea of our online algorithm for the secretary problem: use the first values of the input sequence to estimate a classifier, i.e., a cutoff value.

To be generic, we can use the first $\alpha n$ values of the sequence to estimate $\hat\lambda$ such that $|pick(\hat\lambda)| \approx k$ over the whole data. A $\hat\lambda$ with this property will, on average, pick $\alpha k$ items among the first $\alpha n$ in the instance: $|pick(\hat\lambda) \cap S_\alpha| \approx \alpha k$, where $S_\alpha = \{v_{I_1}, \ldots, v_{I_{\alpha n}}\}$ is the set of the first $\alpha n$ elements that appear in the instance.

We have the following Algorithm A2:

1. Observe the items from step $1$ to $\alpha n$;
2. Compute the largest value $\hat\lambda$ that would make us pick $(1-\varepsilon)\alpha k$ items in the sample ($\hat\lambda$ will actually be one of the $v_i$);
3. From step $\alpha n + 1$ to $n$, choose all items with value at least $\hat\lambda$.
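The three steps above can be transcribed directly. The sketch below is my own rendering under stated assumptions (the name `a2` and the explicit cap of $k$ accepted items are mine, not from the notes), with a small Monte Carlo estimate of $val(A2)/OPT$.

```python
import math
import random

def a2(stream, k, alpha, eps):
    """Algorithm A2 on a randomly ordered stream of distinct values."""
    n = len(stream)
    m = int(alpha * n)
    # Step 1: observe the sample.
    sample = sorted(stream[:m], reverse=True)
    # Step 2: largest value that picks (1 - eps) * alpha * k sample items.
    target = max(1, min(len(sample), math.floor((1 - eps) * alpha * k)))
    lambda_hat = sample[target - 1]
    # Step 3: accept items with value >= lambda_hat, at most k of them.
    chosen = []
    for v in stream[m:]:
        if v >= lambda_hat and len(chosen) < k:
            chosen.append(v)
    return chosen

random.seed(2)
n, k = 400, 40
instance = [float(i) for i in range(1, n + 1)]
opt = float(sum(range(n - k + 1, n + 1)))   # sum of the k largest values

ratios = []
for _ in range(200):
    perm = instance[:]
    random.shuffle(perm)
    ratios.append(sum(a2(perm, k, alpha=0.3, eps=0.1)) / opt)

print(f"average val(A2)/OPT ~ {sum(ratios) / len(ratios):.3f}")
```

Note that the sample items themselves are never selected, which is exactly the $\alpha\, OPT$ loss term that appears later in the analysis.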
In the second step, the multiplier $(1-\varepsilon)$ is important for a reason we shall see soon. We have the following theorem for this algorithm:

Theorem 1.3. With the appropriate choice of $\varepsilon$ and $\alpha$, Algorithm A2 is a $1 - O\big((\frac{\log n}{k})^{1/3}\big)$-approximation.

1.4 Analyzing Algorithm A2

The main component of the analysis is the following.

Theorem 1.4. In Algorithm A2, $P\big(\{|pick(\hat\lambda)| > k\} \cup \{|pick(\hat\lambda)| < (1-2\varepsilon)k\}\big) < \varepsilon$ for an appropriate choice of $\varepsilon, \alpha$.

The proof of this theorem follows the Chernoff + union bound approach. So the first piece we need is the following concentration inequality.

Lemma 1.5. Consider a wrong classifier $v_i$, namely one satisfying either $|pick(v_i)| > k$ or $|pick(v_i)| < (1-2\varepsilon)k$. Then $P\big(|pick(v_i) \cap S_\alpha| = (1-\varepsilon)\alpha k\big) \le \exp\{-\varepsilon^2 \alpha k\}$. In particular, the probability that such a wrong classifier is even considered by Algorithm A2 in its Step 2 is at most $\exp\{-\varepsilon^2 \alpha k\}$.

Proof sketch. To get the idea, consider a very wrong $v_i$, one that picks $|pick(v_i)| \ge 2k$ items. On average, the number of elements that $v_i$ picks in the sample is $2\alpha k$; picking only $(1-\varepsilon)\alpha k$ in the sample is very far from this average, so by Chernoff/Bernstein's inequality this happens with tiny probability. We will ignore the multiplier $(1-\varepsilon)$ in this example because it is inconsequential and makes the expressions cleaner.

In a bit more detail, we want to bound the probability $P\big(|pick(v_i) \cap S_\alpha| = \alpha k\big)$, which is at most $P\big(|pick(v_i) \cap S_\alpha| \le \alpha k\big)$. We are going to use Bernstein's inequality,
$$P\big(Z \ge \mathbb{E}[Z] + t\big) \le \exp\Big\{-\min\Big\{\frac{t^2}{4\,\mathbb{V}[Z]},\, t\Big\}\Big\}$$
(an analogous bound holds for the lower tail, which is the one we need here). We use the fact that $\mathbb{E}\big[|pick(v) \cap S_\alpha|\big] = 2\alpha k$ and rewrite the probability we are trying to bound as $P\big(|pick(v) \cap S_\alpha| \le 2\alpha k - \alpha k\big)$. To see that we can use Bernstein's inequality, define the random variable $X_t$, which is one if $v$ picks the element that shows up in step $t$ and zero otherwise. Hence $|pick(v) \cap S_\alpha| = X_1 + \cdots + X_{\alpha n}$. If we had sampled the values $v_i$ with replacement, the $X_t$'s would all be independent and we could use Bernstein's inequality directly.
This is not the case: the random-order model means we are sampling the $v_i$'s without replacement. But, as we mentioned last lecture, Bernstein's inequality still works under sampling without replacement. So we can bound
$$P\big(X_1 + \cdots + X_{\alpha n} \le 2\alpha k - \alpha k\big) \le \exp\Big\{-\min\Big\{\frac{(\alpha k)^2}{4\,\mathbb{V}[X_1 + \cdots + X_{\alpha n}]},\, \alpha k\Big\}\Big\},$$
where $\mathbb{V}[X]$ denotes the variance of the random variable $X$. We also saw in Lecture 3 that, for independent random variables $x_i \in [0,1]$, $\mathbb{V}[X_1 + \cdots + X_{\alpha n}] \le \mathbb{E}[X_1 + \cdots + X_{\alpha n}] = 2\alpha k$, and that this also holds in the sampling-without-replacement case. So we can rewrite the bound above as
$$\exp\Big\{-\min\Big\{\frac{(\alpha k)^2}{4 \cdot 2\alpha k},\, \alpha k\Big\}\Big\} = e^{-\alpha k/8},$$
which (up to the constant in the exponent) gives the bound of Lemma 1.5 for this case. The same argument can be made for any wrong classifier satisfying the hypothesis of the lemma, in particular for the case $|pick(v_i)| < (1-2\varepsilon)k$; the argument is symmetrical. This concludes the proof sketch of the lemma.

Now we go back to prove Theorem 1.4: since we know how to bound a single bad choice of the classifier, we use the union bound to control the probability of learning any bad classifier.

Proof of Theorem 1.4. In order to be protected against all bad classifiers we use the union bound. Firstly, define
$$BAD = \big\{v_i : |pick(v_i)| > k \text{ or } |pick(v_i)| < (1-2\varepsilon)k\big\};$$
hence we can write our ultimate goal as
$$P\big(\{|pick(\hat\lambda)| > k\} \cup \{|pick(\hat\lambda)| < (1-2\varepsilon)k\}\big) = P\big(\hat\lambda \in BAD\big) \le \sum_{v \in BAD} P(\hat\lambda = v)$$
by the union bound. By construction of the algorithm (Step 2), $P(\hat\lambda = v) \le P\big(v \text{ picks } (1-\varepsilon)\alpha k \text{ elements in the sample}\big)$. Now we apply Lemma 1.5 and obtain
$$\sum_{v \in BAD} P(\hat\lambda = v) \le n \exp\{-\varepsilon^2 \alpha k\}.$$
By choosing appropriate values for $\alpha$ and $\varepsilon$ we get $n \exp\{-\varepsilon^2 \alpha k\} \le \varepsilon$.

It is possible to convert the probability bound of Theorem 1.4 into the following:
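A quick Monte Carlo check (my own sketch, with illustrative parameter choices) of the flavor of Lemma 1.5: a very wrong classifier that picks $2k$ items overall only rarely picks as few as $\alpha k$ items in a random sample of size $\alpha n$.

```python
import random

# Parameters for the experiment (my own choices, purely illustrative).
n, k, alpha = 1000, 50, 0.2
sample_size = int(alpha * n)            # |S_alpha| = alpha * n = 200

# A "very wrong" classifier: it picks 2k fixed items out of the n.
picked = set(range(2 * k))

random.seed(3)
trials = 20000
bad = 0
for _ in range(trials):
    # Random-order model: the sample is drawn without replacement.
    sample = random.sample(range(n), sample_size)
    hits = sum(1 for i in sample if i in picked)   # |pick(v) ∩ S_alpha|
    # The expectation is 2 * alpha * k = 20; count how often hits <= alpha * k.
    if hits <= int(alpha * k):
        bad += 1

print(f"P[<= alpha*k sample picks] ~ {bad / trials:.4f}")   # tiny
```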
Corollary 1.6. $\mathbb{E}\big[val(pick(\hat\lambda))\big] \ge (1-\varepsilon)(1-2\varepsilon)\, OPT$.

Proof. Let $\mathcal{E}$ be the event that the bound from Theorem 1.4 holds, so $P(\mathcal{E}) \ge 1 - \varepsilon$. Then
$$\mathbb{E}\big[val(pick(\hat\lambda))\big] = \mathbb{E}\big[val(pick(\hat\lambda)) \mid \mathcal{E}\big] P(\mathcal{E}) + \mathbb{E}\big[val(pick(\hat\lambda)) \mid \bar{\mathcal{E}}\big] P(\bar{\mathcal{E}}) \ge \mathbb{E}\big[val(pick(\hat\lambda)) \mid \mathcal{E}\big] (1-\varepsilon),$$
where the inequality follows from the fact that in every scenario $val(pick(\hat\lambda))$ is non-negative. But in every scenario under the event $\mathcal{E}$, the number of picked items satisfies $|pick(\hat\lambda)| \in [(1-2\varepsilon)k,\, k]$, which together with Lemma 1.2 gives $val(pick(\hat\lambda)) \ge (1-2\varepsilon)\, OPT$. In particular, $\mathbb{E}\big[val(pick(\hat\lambda)) \mid \mathcal{E}\big] \ge (1-2\varepsilon)\, OPT$. This concludes the proof.

However, our Algorithm A2 used the first $\alpha n$ values as the sample to estimate $\hat\lambda$, so none of these values could be chosen. Thus we have
$$\mathbb{E}[val(A2)] = \mathbb{E}\big[val(pick(\hat\lambda))\big] - \mathbb{E}\big[val(pick(\hat\lambda) \cap S_\alpha)\big] \ge \mathbb{E}\big[val(pick(\hat\lambda))\big] - \mathbb{E}\big[OPT_{(1-\varepsilon)\alpha k}(S_\alpha)\big],$$
where $OPT_{(1-\varepsilon)\alpha k}(S_\alpha)$ is the value of picking the best $(1-\varepsilon)\alpha k$ items in the sample $S_\alpha$ (notice that by definition $|pick(\hat\lambda) \cap S_\alpha| = (1-\varepsilon)\alpha k$). Finally, we can bound the expected value of this optimal solution in the sample.

Lemma 1.7. $\mathbb{E}\big[OPT_{(1-\varepsilon)\alpha k}(S_\alpha)\big] \le \alpha\, OPT$.

Proof. The proof is actually a duality-based argument, which gives a cleaner and more intuitive interpretation (see for example [1]); we have streamlined the exposition so as not to require knowledge of duality. We use $[x]^+ = \max\{x, 0\}$ to denote the positive part of $x$. Recalling that $OPT$ is the value of the set of items of value at least $\lambda^*$, of which there are $k$, we break up the value of these items as $v_i = \lambda^* + [v_i - \lambda^*]^+$ and obtain
$$OPT = k\lambda^* + \sum_i [v_i - \lambda^*]^+. \qquad (1)$$
Notice this sum can range over all items, because the ones not included in $OPT$ have $[v_i - \lambda^*]^+ = 0$. Now let $U$ be the set of the top $k' = (1-\varepsilon)\alpha k$ items in the sample $S_\alpha$; we want to show $\mathbb{E}[val(U)] \le \alpha\, OPT$. Since for all items we have the upper bound $v_i \le \lambda^* + [v_i - \lambda^*]^+$, we obtain
$$\mathbb{E}[val(U)] \le |U|\lambda^* + \mathbb{E}\Big[\sum_{v \in U} [v - \lambda^*]^+\Big] \le |U|\lambda^* + \mathbb{E}\Big[\sum_{v \in S_\alpha} [v - \lambda^*]^+\Big].$$
The first term on the RHS equals $(1-\varepsilon)\alpha k \lambda^* \le \alpha k \lambda^*$, and the second term equals $\alpha \sum_i [v_i - \lambda^*]^+$, since each item has probability $\alpha$ of being in the sample $S_\alpha$. Together these imply that
$$\mathbb{E}[val(U)] \le \alpha\Big(k\lambda^* + \sum_i [v_i - \lambda^*]^+\Big).$$
Putting this together with equation (1), we obtain the desired bound.

Thus, we finally conclude that $\mathbb{E}[val(A2)] \ge \big((1-\varepsilon)(1-2\varepsilon) - \alpha\big)\, OPT \ge (1 - 3\varepsilon - \alpha)\, OPT$. Choosing $\varepsilon = \alpha = \big(\frac{\log(n/\varepsilon)}{k}\big)^{1/3}$ should recover Theorem 1.3.

An important observation is that the argument can be improved by replacing the union bound, which is weak here and gives rather loose bounds, with some finer technique, yielding an optimal $1 - O(1/\sqrt{k})$-approximation and recovering the result of [7].

References

[1] S. Agrawal, Z. Wang, and Y. Ye. A dynamic near-optimal algorithm for online linear programming. Operations Research, 62(4), 2014.

[2] M. Babaioff, N. Immorlica, D. Kempe, and R. Kleinberg. A knapsack secretary problem with applications. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, Springer, 2007.

[3] N. R. Devanur and T. P. Hayes. The adwords problem: online keyword matching with budgeted bidders under random permutations. In ACM Conference on Electronic Commerce, pages 71-78, 2009.

[4] T. S. Ferguson. Who solved the secretary problem? Statistical Science, 1989.

[5] P. Freeman. The secretary problem and its extensions: A review. International Statistical Review / Revue Internationale de Statistique, 1983.

[6] T. Kesselheim, A. Tönnis, K. Radke, and B. Vöcking. Primal beats dual on online packing LPs in the random-order model. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing (STOC), ACM, 2014.

[7] R. Kleinberg. A multiple-choice secretary algorithm with applications to online auctions. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SIAM, 2005.

[8] M. Molinaro and R. Ravi. The geometry of online packing linear programs. Mathematics of Operations Research, 39(1):46-59, 2014.

[9] M. Parker. Things to Make and Do in the Fourth Dimension. Penguin UK, 2014.
More informationWeek Cuts, Branch & Bound, and Lagrangean Relaxation
Week 11 1 Integer Linear Programming This week we will discuss solution methods for solving integer linear programming problems. I will skip the part on complexity theory, Section 11.8, although this is
More informationPoints: The first problem is worth 10 points, the others are worth 15. Maximize z = x y subject to 3x y 19 x + 7y 10 x + y = 100.
Math 5 Summer Points: The first problem is worth points, the others are worth 5. Midterm # Solutions Find the dual of the following linear programming problem. Maximize z = x y x y 9 x + y x + y = x, y
More informationCS599: Algorithm Design in Strategic Settings Fall 2012 Lecture 11: Ironing and Approximate Mechanism Design in Single-Parameter Bayesian Settings
CS599: Algorithm Design in Strategic Settings Fall 2012 Lecture 11: Ironing and Approximate Mechanism Design in Single-Parameter Bayesian Settings Instructor: Shaddin Dughmi Outline 1 Recap 2 Non-regular
More information1 The linear algebra of linear programs (March 15 and 22, 2015)
1 The linear algebra of linear programs (March 15 and 22, 2015) Many optimization problems can be formulated as linear programs. The main features of a linear program are the following: Variables are real
More information2 Upper-bound of Generalization Error of AdaBoost
COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #10 Scribe: Haipeng Zheng March 5, 2008 1 Review of AdaBoost Algorithm Here is the AdaBoost Algorithm: input: (x 1,y 1 ),...,(x m,y
More informationM340(921) Solutions Practice Problems (c) 2013, Philip D Loewen
M340(921) Solutions Practice Problems (c) 2013, Philip D Loewen 1. Consider a zero-sum game between Claude and Rachel in which the following matrix shows Claude s winnings: C 1 C 2 C 3 R 1 4 2 5 R G =
More informationPartitions and Covers
University of California, Los Angeles CS 289A Communication Complexity Instructor: Alexander Sherstov Scribe: Dong Wang Date: January 2, 2012 LECTURE 4 Partitions and Covers In previous lectures, we saw
More informationLecture 4: Codes based on Concatenation
Lecture 4: Codes based on Concatenation Error-Correcting Codes (Spring 206) Rutgers University Swastik Kopparty Scribe: Aditya Potukuchi and Meng-Tsung Tsai Overview In the last lecture, we studied codes
More informationDR.RUPNATHJI( DR.RUPAK NATH )
Contents 1 Sets 1 2 The Real Numbers 9 3 Sequences 29 4 Series 59 5 Functions 81 6 Power Series 105 7 The elementary functions 111 Chapter 1 Sets It is very convenient to introduce some notation and terminology
More informationCS261: A Second Course in Algorithms Lecture #12: Applications of Multiplicative Weights to Games and Linear Programs
CS26: A Second Course in Algorithms Lecture #2: Applications of Multiplicative Weights to Games and Linear Programs Tim Roughgarden February, 206 Extensions of the Multiplicative Weights Guarantee Last
More information5.4 Continuity: Preliminary Notions
5.4. CONTINUITY: PRELIMINARY NOTIONS 181 5.4 Continuity: Preliminary Notions 5.4.1 Definitions The American Heritage Dictionary of the English Language defines continuity as an uninterrupted succession,
More informationCSC 5170: Theory of Computational Complexity Lecture 5 The Chinese University of Hong Kong 8 February 2010
CSC 5170: Theory of Computational Complexity Lecture 5 The Chinese University of Hong Kong 8 February 2010 So far our notion of realistic computation has been completely deterministic: The Turing Machine
More informationReview Solutions, Exam 2, Operations Research
Review Solutions, Exam 2, Operations Research 1. Prove the weak duality theorem: For any x feasible for the primal and y feasible for the dual, then... HINT: Consider the quantity y T Ax. SOLUTION: To
More informationOPTIMISATION /09 EXAM PREPARATION GUIDELINES
General: OPTIMISATION 2 2008/09 EXAM PREPARATION GUIDELINES This points out some important directions for your revision. The exam is fully based on what was taught in class: lecture notes, handouts and
More informationModern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur
Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Lecture 02 Groups: Subgroups and homomorphism (Refer Slide Time: 00:13) We looked
More informationBBM402-Lecture 20: LP Duality
BBM402-Lecture 20: LP Duality Lecturer: Lale Özkahya Resources for the presentation: https://courses.engr.illinois.edu/cs473/fa2016/lectures.html An easy LP? which is compact form for max cx subject to
More informationAn introductory example
CS1 Lecture 9 An introductory example Suppose that a company that produces three products wishes to decide the level of production of each so as to maximize profits. Let x 1 be the amount of Product 1
More informationA Las Vegas approximation algorithm for metric 1-median selection
A Las Vegas approximation algorithm for metric -median selection arxiv:70.0306v [cs.ds] 5 Feb 07 Ching-Lueh Chang February 8, 07 Abstract Given an n-point metric space, consider the problem of finding
More information1 Motivation for Instrumental Variable (IV) Regression
ECON 370: IV & 2SLS 1 Instrumental Variables Estimation and Two Stage Least Squares Econometric Methods, ECON 370 Let s get back to the thiking in terms of cross sectional (or pooled cross sectional) data
More informationThe dual simplex method with bounds
The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the
More informationHow to Take the Dual of a Linear Program
How to Take the Dual of a Linear Program Sébastien Lahaie January 12, 2015 This is a revised version of notes I wrote several years ago on taking the dual of a linear program (LP), with some bug and typo
More informationMATH 521, WEEK 2: Rational and Real Numbers, Ordered Sets, Countable Sets
MATH 521, WEEK 2: Rational and Real Numbers, Ordered Sets, Countable Sets 1 Rational and Real Numbers Recall that a number is rational if it can be written in the form a/b where a, b Z and b 0, and a number
More informationMath101, Sections 2 and 3, Spring 2008 Review Sheet for Exam #2:
Math101, Sections 2 and 3, Spring 2008 Review Sheet for Exam #2: 03 17 08 3 All about lines 3.1 The Rectangular Coordinate System Know how to plot points in the rectangular coordinate system. Know the
More informationUniversity of California Berkeley CS170: Efficient Algorithms and Intractable Problems November 19, 2001 Professor Luca Trevisan. Midterm 2 Solutions
University of California Berkeley Handout MS2 CS170: Efficient Algorithms and Intractable Problems November 19, 2001 Professor Luca Trevisan Midterm 2 Solutions Problem 1. Provide the following information:
More information8.1 Polynomial Threshold Functions
CS 395T Computational Learning Theory Lecture 8: September 22, 2008 Lecturer: Adam Klivans Scribe: John Wright 8.1 Polynomial Threshold Functions In the previous lecture, we proved that any function over
More informationSVD, Power method, and Planted Graph problems (+ eigenvalues of random matrices)
Chapter 14 SVD, Power method, and Planted Graph problems (+ eigenvalues of random matrices) Today we continue the topic of low-dimensional approximation to datasets and matrices. Last time we saw the singular
More informationP (E) = P (A 1 )P (A 2 )... P (A n ).
Lecture 9: Conditional probability II: breaking complex events into smaller events, methods to solve probability problems, Bayes rule, law of total probability, Bayes theorem Discrete Structures II (Summer
More informationORDERS OF ELEMENTS IN A GROUP
ORDERS OF ELEMENTS IN A GROUP KEITH CONRAD 1. Introduction Let G be a group and g G. We say g has finite order if g n = e for some positive integer n. For example, 1 and i have finite order in C, since
More informationRose-Hulman Undergraduate Mathematics Journal
Rose-Hulman Undergraduate Mathematics Journal Volume 17 Issue 1 Article 5 Reversing A Doodle Bryan A. Curtis Metropolitan State University of Denver Follow this and additional works at: http://scholar.rose-hulman.edu/rhumj
More informationCS261: A Second Course in Algorithms Lecture #8: Linear Programming Duality (Part 1)
CS261: A Second Course in Algorithms Lecture #8: Linear Programming Duality (Part 1) Tim Roughgarden January 28, 2016 1 Warm-Up This lecture begins our discussion of linear programming duality, which is
More information