Combinatorial Optimization: AMS. Amitabh Basu, Department of Applied Mathematics and Statistics, Johns Hopkins University.


1 Combinatorial Optimization: AMS. Amitabh Basu, Department of Applied Mathematics and Statistics, Johns Hopkins University. Spring.

2-4 Two types of optimization problems.

Type I (Combinatorial Optimization): $n$ items with weights $w_1, \dots, w_n$ and values $c_1, \dots, c_n$, and a knapsack with capacity $W$. Find the subset of items of greatest total value that fits into the knapsack.

Type II (Continuous Optimization): $n$ food items, each with a price $p_i$ per unit weight and nutritional content per unit weight (fat, carbohydrates, vitamins, etc.). Find the combination of food items (weights in pounds) of least cost that meets all nutritional requirements.

1. The brute-force approach for Type I does not scale.
2. Classical techniques (calculus, convexity) are available for Type II but do not apply to Type I.
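
Both problems are easy to state, but their computational behavior differs sharply. As a minimal illustration of why brute force fails for Type I, the sketch below (the weights, values, and capacity are made-up toy numbers) enumerates all $2^n$ subsets:

```python
from itertools import combinations

def knapsack_brute_force(weights, values, W):
    """Try all 2^n subsets; return the best total value and the subset."""
    n = len(weights)
    best_value, best_subset = 0, ()
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            if sum(weights[i] for i in subset) <= W:
                value = sum(values[i] for i in subset)
                if value > best_value:
                    best_value, best_subset = value, subset
    return best_value, best_subset

# Toy instance (made-up numbers). With n = 4 there are 16 subsets to check;
# with n = 100 there would be 2^100, i.e. about 10^30.
print(knapsack_brute_force([2, 3, 4, 5], [3, 4, 5, 6], W=5))  # -> (7, (0, 1))
```

The Type II (diet) problem, in contrast, is a linear program: minimize $p^T x$ subject to $Nx \ge r$ and $x \ge 0$, where $N$ holds nutritional content per unit weight. A minimal sketch with invented data, using scipy (linprog expects "less than or equal" constraints, hence the sign flips):

```python
import numpy as np
from scipy.optimize import linprog

p = np.array([2.0, 3.5, 1.5])           # price per pound of each food (made up)
N = np.array([[10.0, 20.0, 5.0],        # nutrient content per pound (made up);
              [4.0,  1.0,  8.0]])       # rows = nutrients, columns = foods
r = np.array([50.0, 20.0])              # minimum requirement per nutrient

# linprog minimizes c @ x subject to A_ub @ x <= b_ub;
# rewrite N @ x >= r as -N @ x <= -r. Weights must be nonnegative.
res = linprog(c=p, A_ub=-N, b_ub=-r, bounds=[(0, None)] * 3)
print(res.x, res.fun)                   # optimal pounds per food, total cost
```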

5 A Transportation Problem

6 A Simpler Problem: The Transshipment Problem
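
The network figures on these two slides did not survive the transcription. As a stand-in, the transshipment problem can be modeled as min-cost flow through intermediate nodes; below is a minimal sketch on an invented three-node network (the node names, costs, and capacities are illustrative only), using networkx:

```python
import networkx as nx

G = nx.DiGraph()
# networkx convention: negative demand = supply node, positive = demand node.
G.add_node("factory", demand=-10)
G.add_node("depot", demand=0)            # pure transshipment node
G.add_node("store", demand=10)
G.add_edge("factory", "depot", weight=2, capacity=10)   # cost 2 per unit
G.add_edge("depot", "store", weight=1, capacity=10)
G.add_edge("factory", "store", weight=5, capacity=10)   # direct but pricier

flow = nx.min_cost_flow(G)               # routes everything via the depot
print(flow, nx.cost_of_flow(G, flow))    # total cost 30, not 50
```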

7 A Scheduling Problem (the original slide shows Jobs 1-4 and Machines 1-5 in a diagram)
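
Reading the lost diagram as an assignment problem (each job placed on a distinct machine at some cost, which is one standard scheduling model), here is a minimal sketch with an invented 4x5 cost matrix, using scipy's Hungarian-algorithm routine:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[j, m] = cost of running Job j+1 on Machine m+1 (invented numbers);
# 4 jobs, 5 machines, so one machine is left idle.
cost = np.array([[4, 1, 3, 2, 6],
                 [2, 0, 5, 3, 4],
                 [3, 2, 2, 1, 5],
                 [4, 2, 3, 0, 2]])

rows, cols = linear_sum_assignment(cost)  # optimal job -> machine matching
for j, m in zip(rows, cols):
    print(f"Job {j + 1} -> Machine {m + 1} (cost {cost[j, m]})")
print("total cost:", cost[rows, cols].sum())
```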

8-13 A Problem from Astronomy: partition a set of observed galaxies into groups. Use physics to derive an evaluation function that scores a given partition (the correlation function in astronomy). With 1000 galaxies, the number of possible partitions is astronomically large: even if each partition could be evaluated in a tiny fraction of a second, evaluating all of them would take years.
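
The slide's exact figures were lost in transcription, but the back-of-the-envelope argument is easy to reproduce. Assuming, purely for illustration, that a partition splits the galaxies into two groups and that one partition can be scored per microsecond:

```python
import math

partitions = 2**1000                 # two-way splits of 1000 galaxies (assumed)
evals_per_second = 10**6             # one evaluation per microsecond (assumed)
seconds_per_year = 365 * 24 * 3600

years = partitions / (evals_per_second * seconds_per_year)
print(f"about 10^{math.floor(math.log10(years))} years")   # about 10^287 years
```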

14-19 Statistical/Machine Learning. Linear regression: given points $x_1, \dots, x_D \in \mathbb{R}^n$ and labels $y_1, \dots, y_D \in \mathbb{R}$, find the best linear function that fits this data:

$$\min_{\beta \in \mathbb{R}^n} \sum_{i=1}^{D} (y_i - \beta^T x_i)^2$$

Better statistical guarantees hold if we enforce sparsity on $\beta$, i.e., add the constraint $\|\beta\|_0 \le K$, which caps the number of nonzero coefficients. See "Best Subset Selection via a Modern Optimization Lens" by Bertsimas, King, and Mazumder, Annals of Statistics, 2016. A similar problem arises in compressed sensing and sparse coding.
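
For intuition, the $\ell_0$-constrained problem can be solved exactly for tiny $n$ by enumerating every support of size at most $K$ and solving least squares on each; the enumeration is itself combinatorial, which is the point of the slide. A minimal numpy sketch on synthetic data (the dimensions, seed, and planted signal are all invented for illustration):

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
D, n, K = 50, 8, 2
X = rng.normal(size=(D, n))
beta_true = np.zeros(n)
beta_true[[1, 5]] = [3.0, -2.0]                 # planted sparse signal
y = X @ beta_true + 0.1 * rng.normal(size=D)

best = (np.inf, None, None)                     # (RSS, support, coefficients)
for k in range(1, K + 1):
    for S in combinations(range(n), k):         # every support of size <= K
        cols = X[:, list(S)]
        beta_S, *_ = np.linalg.lstsq(cols, y, rcond=None)
        resid = y - cols @ beta_S
        if resid @ resid < best[0]:
            best = (resid @ resid, S, beta_S)

print(best[1], best[2])   # should recover the planted support (1, 5) here
```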
