ORF 363/COS 323 Final Exam, Fall 2017

Name:

Princeton University
Instructor: A.A. Ahmadi
ORF 363/COS 323 Final Exam, Fall 2017
January 17, 2018
AIs: B. El Khadir, C. Dibek, G. Hall, J. Zhang, J. Ye, S. Uysal

1. Please write out and sign the following pledge on top of the first page of your exam: "I pledge my honor that I have not violated the Honor Code or the rules specified by the instructor during this examination."

2. Don't forget to write your name on the exam. Make a copy of your solutions and keep it.

3. The exam is not to be discussed with anyone except possibly the professor and the TAs. You can only ask clarification questions, and only as public (and preferably non-anonymous) questions on Piazza. No emails.

4. You are allowed to consult the lecture notes, your own notes, the reference books of the course as indicated on the syllabus, the problem sets and their solutions (yours and ours), the midterm and its solutions (yours and ours), the practice midterm and final exams and their solutions, and all Piazza posts, but nothing else. You can only use the Internet in case you run into problems related to MATLAB or CVX.

5. You are allowed to refer to facts proven in the notes or problem sets without reproving them.

6. For all problems involving MATLAB or CVX, show your code. The MATLAB output that you present should come from your code.

7. Unless you have been granted an extension because of overlapping finals, the exam is to be turned in on Friday (January 19, 2018) at 10 AM in the instructor's office (Sherrerd 329). If you cannot make it on Friday and decide to turn in your exam sooner, or if your deadline is different under the rules of the exam, you have to drop your exam off in the ORF 363 box of the ORFE undergraduate lounge (Sherrerd 123). If you do that, you need to write down the date and time on the first page of your exam and sign it. You can also submit the exam electronically on Blackboard as a single PDF file.

8. Good luck!

Grading
Problem 1: 25 pts
Problem 2: 25 pts
Problem 3: 25 pts
Problem 4: 25 pts
TOTAL: 100 pts

Problem 1: Minimizing the probability of cheating

At some universities like Princeton, professors have it easy. The students uphold such a high standard of academic integrity that there is no reason to worry about complications around cheating. This is sadly not the case at Cheatston University, where Professor Paranoid is about to hold a take-home final exam. Because he is worried about cheating, Professor Paranoid goes through the trouble of designing two different sets of exam questions of equal difficulty level. He is now faced with the task of deciding which students should get Exam A and which should get Exam B. To make this decision, Professor Paranoid's strategy is to make sure that, to the extent possible, students who are friends with each other get different exams. More formally, suppose that Professor Paranoid has access to the friendship network of his students; this is a graph G(V, E) with n students as nodes and an edge between two nodes if and only if the two students are friends. If we denote the adjacency matrix [1] of this graph by A, then the optimization problem that Professor Paranoid wants to solve is the following:

    f* :=  max_{x ∈ R^n}   (1/4) Σ_{i,j} A_ij (1 − x_i x_j)
           s.t.   x_i^2 = 1,   i = 1, ..., n.                        (1)

(a) We say that a friendship is cheat-free if the two students involved in this friendship get different exams. Argue why the optimal value f* of problem (1) is equal to the maximum possible number of cheat-free friendships. Show that the above problem is not a convex optimization problem.

It turns out that problem (1) is NP-hard to solve. Nevertheless, you know from ORF 363 that semidefinite programming is a powerful tool for approximately solving NP-hard problems. Let's see exactly how.

(b) Upper bounding. Consider the semidefinite program

    f_SDP :=  min_{X ∈ S^{n×n}}   Tr(AX)
              s.t.   X_ii = 1,   i = 1, ..., n,                      (2)
                     X ⪰ 0.

[1] Recall that the adjacency matrix is a symmetric matrix that has zeros on the diagonal and whose (i, j) entry equals 1 if students i and j are friends and 0 otherwise.

Let UB_SDP := (1/4) Σ_{i,j} A_ij − (1/4) f_SDP. Show that f* ≤ UB_SDP.

(c) Lower bounding. To find a suboptimal solution to (1), Professor Paranoid gives out the exams according to the following strategy [2]:

    x̂_i = sign(X*(i, 1)),   i = 1, ..., n,

where X* is the optimal solution to the SDP in (2) that the solver returns. [3] The objective value of (1) at x̂ clearly gives a lower bound on f*. Let's call this lower bound LB_SDP. So we have

    LB_SDP ≤ f* ≤ UB_SDP.

To examine the quality of these bounds, we are going to compute the ratio LB_SDP/UB_SDP on some random instances of this problem. (If this ratio is 1, we have solved problem (1) exactly.)

Suppose there are 70 students in the class and that any two randomly chosen students are friends with each other with probability 0.3. Generate 50 random instances of such a friendship network by running the following code (50 times):

    n=70; p=.3; A=zeros(n);
    for i=1:n-1
        for j=i+1:n
            h=rand;
            if h<p
                A(i,j)=1; A(j,i)=1;
            end
        end
    end

In each run, compute LB_SDP/UB_SDP and then show the histogram of this ratio over your 50 runs.

[2] Here, the function sign(z) returns +1 if z ≥ 0 and −1 otherwise.
[3] If you are curious, this heuristic is motivated by the fact that if the upper bound in part (b) is tight, then there is always a rank-1 optimal solution to the SDP whose first column (or any other column, actually) is a ±1 vector that is optimal for (1).
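For concreteness, here is a minimal MATLAB/CVX sketch of how a single run of this experiment could be carried out (not the official solution; it assumes CVX is installed, that A has just been generated by the code above, and the variable names f_sdp, UB, LB, xhat, ratio are my own):

    % Sketch of one run: solve the SDP relaxation (2), form the bounds
    % from parts (b) and (c), and record their ratio.
    cvx_begin sdp quiet
        variable X(n,n) symmetric
        minimize( trace(A*X) )
        subject to
            diag(X) == 1;              % X_ii = 1 for all i
            X >= 0;                    % in sdp mode this means X is PSD
    cvx_end
    f_sdp = cvx_optval;
    UB    = sum(A(:))/4 - f_sdp/4;     % UB_SDP from part (b)
    xhat  = sign(X(:,1));              % rounding heuristic from part (c) ...
    xhat(xhat == 0) = 1;               % ... with sign(0) taken to be +1
    LB    = sum(sum(A .* (1 - xhat*xhat')))/4;   % objective of (1) at xhat
    ratio = LB/UB;

Collecting the value of ratio over the 50 runs and then calling, e.g., histogram on the resulting vector would produce the requested plot.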

Problem 2: True or False? (Provide a proof or a counterexample.)

(a) For k = 0, 1, ..., consider the iterations x_{k+1} = x_k + α_k d_k used to minimize a function f: R^n → R. If the directions d_k are descent directions for all k, the step size is set to α_k = 1/2^k, and the function f is convex, then the iterations satisfy f(x_{k+1}) ≤ f(x_k) for all k.

(b) For k = 0, 1, ..., consider the iterations x_{k+1} = x_k + α_k d_k used to minimize a differentiable function f: R^n → R. If the directions d_k are descent directions for all k, the function f is convex and nonnegative, and the step sizes are chosen to guarantee that f(x_{k+1}) < f(x_k) for all k, then the sequence {x_k} converges to a point x̄ ∈ R^n that satisfies f(x̄) = 0.

(c) Consider the following SDP:

    min_{X ∈ S^{n×n}}   Tr(CX)
    s.t.   Tr(A_i X) = b_i,   i = 1, ..., m,                         (3)
           X ⪰ 0.

Let b := (b_1, ..., b_m)^T, and suppose there exists a vector y ∈ R^m such that b^T y = 0 and Σ_{i=1}^m A_i y_i ⪯ C. Then, there exists a matrix X* ∈ S^{n×n} that is feasible to (3) and such that Tr(CX*) ≤ Tr(CX) for any matrix X that is feasible to (3).

Problem 3: Nearest correlation matrix

You are the CEO of HoneyMoney Technologies LLC, a new hedge fund firm in NYC whose proprietary optimization algorithms have Wall Street raving. Your main competitor, RenaissancE Technologies [4], has sent in a spy, disguised as a summer intern, to interfere with your investments. The spy has gotten his hands on your correlation matrix C of n important stocks [5], to which he has added some random noise, leaving you with a matrix Ĉ. We remark that to be a valid correlation matrix, a matrix must be symmetric, positive semidefinite, and have all diagonal entries equal to one. The spy has been careful enough to make sure that the resulting matrix Ĉ is symmetric and has ones on the diagonal, but he hasn't noticed that his change has made Ĉ not positive semidefinite.

[4] Not to be confused with Renaissance Technologies that would never do such a thing.
[5] If you are curious, the correlation matrix is an n × n symmetric matrix used frequently in investment banking. Its (i, j)-th entry is a number between -1 and 1, with numbers close to 1 meaning that stocks i and j are likely to move up together, close to -1 meaning that the two stocks are likely to move in opposite directions, and close to zero meaning that they are likely uncorrelated. The problem of finding the closest correlation matrix to a given matrix is an important problem in financial engineering; see e.g. this article.
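The three defining properties of a correlation matrix are easy to check numerically. A minimal MATLAB sketch (my own illustration; Chat is a hypothetical variable name holding a candidate matrix Ĉ):

    % Sketch: check symmetry, unit diagonal, and positive semidefiniteness.
    isSym   = isequal(Chat, Chat');                   % symmetric?
    unitDia = all(abs(diag(Chat) - 1) < 1e-12);       % ones on the diagonal?
    isPSD   = min(eig((Chat + Chat')/2)) >= -1e-9;    % PSD up to a small tolerance?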

(a) Suppose we have

    Ĉ = [ 1.00   0.76   0.07   0.96
          0.76   1.00   0.18   0.07
          0.07   0.18   1.00   0.41
          0.96   0.07   0.41   1.00 ].

Recover the original matrix by finding the nearest correlation matrix to Ĉ in Frobenius norm [6] (i.e., the correlation matrix C that minimizes ||C − Ĉ||_F). Give your optimal solution.

(b) Show that for any symmetric matrix Ĉ, the problem of finding the closest correlation matrix to Ĉ in Frobenius norm has a unique solution.

(c) Suppose Ĉ is symmetric, has ones on the diagonal, but is not positive semidefinite. Show that the optimal solution to the problem of finding the closest correlation matrix in Frobenius norm to Ĉ must have at least one zero eigenvalue. (Hint: You may use the fact that the minimum eigenvalue of a symmetric matrix is a continuous function of its entries.)

[6] Recall what the Frobenius norm is from a previous problem set.
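A minimal MATLAB/CVX sketch of how one might set up the projection in part (a) (assuming CVX is installed; the entries of Chat are transcribed from the matrix above, and this is only one possible formulation, not the official solution):

    % Sketch: nearest correlation matrix to Chat in Frobenius norm via CVX.
    Chat = [1.00 0.76 0.07 0.96;
            0.76 1.00 0.18 0.07;
            0.07 0.18 1.00 0.41;
            0.96 0.07 0.41 1.00];
    n = size(Chat, 1);
    cvx_begin sdp quiet
        variable C(n,n) symmetric
        minimize( norm(C - Chat, 'fro') )   % Frobenius-norm distance
        subject to
            diag(C) == 1;                   % unit diagonal
            C >= 0;                         % positive semidefinite (sdp mode)
    cvx_end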

Problem 4: Nash equilibria in 2-player games

A bimatrix game is a game between two players, each having a finite number of strategies. If Player 1 has m strategies and Player 2 has n strategies, then the game is fully defined by two real m × n payoff matrices A and B. When Player 1 plays strategy i and Player 2 plays strategy j, the payoffs that they get are respectively A_ij and B_ij. In the case where Player 1 chooses to only play strategy i, we represent her choice by a vector x ∈ R^m which consists of all zeros except for a 1 in the i-th position. This is called a pure strategy. However, Player 1 can also choose to play a mixed strategy, where she chooses to play a convex combination of her m strategies. In either case, Player 1's strategy is always an element of the unit simplex, which by definition is the following set:

    Δ_m = { x ∈ R^m :  Σ_{i=1}^m x_i = 1,  x_i ≥ 0 }.

Likewise, Player 2's strategy will be an element of Δ_n. Under a pair of mixed strategies (x, y) ∈ Δ_m × Δ_n, the payoff of Player 1 is x^T A y and the payoff of Player 2 is x^T B y.

A pair of strategies (x*, y*) constitutes a Nash equilibrium if no player can improve his/her payoff by a unilateral deviation:

    x* ∈ Δ_m,   y* ∈ Δ_n,
    x*^T A y*  ≥  x^T A y*    for all x ∈ Δ_m,                       (4)
    x*^T B y*  ≥  x*^T B y    for all y ∈ Δ_n.

This means that if Player 2 sticks to his strategy y*, then there is no incentive for Player 1 to change her strategy from x* to some other strategy, and, conversely, if Player 1 sticks to her strategy x*, then there is no incentive for Player 2 to change his strategy from y* to some other strategy. Hence, we are at some sort of an equilibrium (Trump will not bomb Kim Jong-un, Kim Jong-un will not bomb Trump, and life will remain good). Nash's big result was to prove that independent of A and B, a (Nash) equilibrium always exists. [7] His paper, however, did not give an algorithm for finding such equilibria.

(a) Show that a pair of strategies (x*, y*) is a Nash equilibrium if and only if it satisfies

    x* ∈ Δ_m,   y* ∈ Δ_n,
    x*^T A y*  ≥  e_i^T A y*,   i = 1, ..., m,                       (5)
    x*^T B y*  ≥  x*^T B e_i,   i = 1, ..., n,

where e_i is a vector (in R^m or R^n) with only zeros, except for a 1 in the i-th position.

(b) An important class of bimatrix games is the so-called zero-sum games. These are games whose payoff matrices satisfy B = −A. This means that the two players are in direct competition (any win for Player 1 is a loss for Player 2 and vice versa). For example, the classic rock-paper-scissors game is a zero-sum game. Show that in a zero-sum game, a pair of strategies (x*, y*) is a Nash equilibrium if and only if it satisfies

    x* ∈ Δ_m,   y* ∈ Δ_n,
    x*^T A e_j  ≥  e_i^T A y*   for all (i, j) ∈ {1, ..., m} × {1, ..., n}.

(c) Consider the zero-sum game given by

    A = [ 1  3  3
          0  1  2
          3  1  1 ],    B = −A.

Find a Nash equilibrium of this game. Is the Nash equilibrium unique? Justify.

[7] In fact, he proved the result more generally for games among a finite number of players.
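As a sanity check one could run (not the intended pen-and-paper argument): in a zero-sum game, an equilibrium can be computed by solving the two linear programs behind the minimax theorem. Below is a minimal MATLAB/CVX sketch; it assumes CVX is installed and that the payoff matrix A of part (c) has already been entered in the workspace.

    % Sketch: equilibrium of the zero-sum game (A, -A) via the minimax LPs.
    [m, n] = size(A);
    cvx_begin quiet                      % Player 1: choose x to maximize the
        variables x(m) t                 % worst-case payoff over Player 2's columns
        maximize( t )
        subject to
            A'*x >= t*ones(n,1);
            sum(x) == 1; x >= 0;
    cvx_end
    cvx_begin quiet                      % Player 2: choose y to minimize the
        variables y(n) s                 % best payoff Player 1 can get over rows
        minimize( s )
        subject to
            A*y <= s*ones(m,1);
            sum(y) == 1; y >= 0;
    cvx_end
    % By LP duality / the minimax theorem, (x, y) is a Nash equilibrium and
    % t = s is the value of the game.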