Convex Optimization, Game Theory, and Variational Inequality Theory in Multiuser Communication Systems


1 Convex Optimization, Game Theory, and Variational Inequality Theory in Multiuser Communication Systems Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) Gesualdo Scutari State University of New York (SUNY) at Buffalo, NY, USA ELEC Convex Optimization Fall , HKUST, Hong Kong

2 Acknowledgement The results on game theory and variational inequality theory are joint work with our collaborators: Sergio Barbarossa, Dipartimento INFOCOM, University of Rome La Sapienza, Italy Francisco Facchinei, Dipartimento di Informatica e Sistemistica, University of Rome La Sapienza, Italy Jong-Shi Pang, Dept. of Industrial & Enterprise Systems Engineering, UIUC, IL, USA. 1

3 Outline of Lecture Part I: Overview of Convex Optimization and Game Theory Part II: Variational Inequality Theory: The Theory Part III: Variational Inequality Theory: Applications Daniel P. Palomar & Gesualdo Scutari 2

4 Part I: Convex Optimization and Game Theory: A Quick Overview Daniel P. Palomar & Gesualdo Scutari 3

5 Part I - Outline Convex optimization (no need, you are experts already!) The minimum principle Elegant KKT conditions Game theory: Nash equilibrium and Pareto optimality Existence/uniqueness theorems Algorithms. Daniel P. Palomar & Gesualdo Scutari 4

6 Convex Optimization Daniel P. Palomar & Gesualdo Scutari 5

7 Convex Optimization By now, my dear students, you should all be extremely familiar (and well on your way to becoming experts) with convex optimization. Surely you know the basic concepts: convexity, classes of convex problems, Lagrange duality, KKT conditions, relaxation methods, decomposition methods, numerical algorithms, robust worst-case optimization, and a myriad of applications. We just need to review the minimum principle and a compact and elegant way of writing the KKT conditions. Daniel P. Palomar & Gesualdo Scutari 6

8 Minimum Principle For an unconstrained convex optimization problem:
minimize_x f(x)
an optimal solution x* must satisfy ∇f(x*) = 0. The extension of this optimality criterion to a constrained convex optimization problem
minimize_x f(x) subject to x ∈ K
is the so-called minimum principle:
(y − x*)ᵀ ∇f(x*) ≥ 0, ∀y ∈ K.
Daniel P. Palomar & Gesualdo Scutari 7

9 Geometrical Interpretation of the Minimum Principle A feasible x* that satisfies the minimum principle: ∇f(x*) forms a nonobtuse angle with all feasible vectors d = y − x* emanating from x*. [Figure: feasible set K, vector d = y − x*, gradient ∇f(x*), and surfaces of equal cost f(x).] Daniel P. Palomar & Gesualdo Scutari 8

10 Geometrical Interpretation of the Minimum Principle A feasible x that does NOT satisfy the minimum principle: there exist other feasible points y ≠ x such that f(y) < f(x). [Figure: feasible set K, points x and y, direction d = y − x, gradient ∇f(x), and surfaces of equal cost f(x).] Daniel P. Palomar & Gesualdo Scutari 9

11 Lagrangian Consider an optimization problem in standard form (not necessarily convex)
minimize_x f₀(x) subject to fᵢ(x) ≤ 0, i = 1,..., m, and hᵢ(x) = 0, i = 1,..., p
with variable x ∈ Rⁿ, domain D, and optimal value p*. The Lagrangian is a function L : Rⁿ × Rᵐ × Rᵖ → R, with dom L = D × Rᵐ × Rᵖ, defined as
L(x, λ, ν) = f₀(x) + Σᵢ₌₁ᵐ λᵢ fᵢ(x) + Σᵢ₌₁ᵖ νᵢ hᵢ(x)
where λᵢ is the Lagrange multiplier associated with fᵢ(x) ≤ 0 and νᵢ is the Lagrange multiplier associated with hᵢ(x) = 0. Daniel P. Palomar & Gesualdo Scutari 10

12 Karush-Kuhn-Tucker (KKT) Conditions KKT conditions (for differentiable fᵢ, hᵢ):
1. primal feasibility: fᵢ(x*) ≤ 0, i = 1,..., m, and hᵢ(x*) = 0, i = 1,..., p
2. dual feasibility: λ* ≥ 0
3. complementary slackness: λᵢ* fᵢ(x*) = 0 for i = 1,..., m
4. zero gradient of Lagrangian with respect to x: ∇f₀(x*) + Σᵢ₌₁ᵐ λᵢ* ∇fᵢ(x*) + Σᵢ₌₁ᵖ νᵢ* ∇hᵢ(x*) = 0
Daniel P. Palomar & Gesualdo Scutari 11

13 Elegant and Compact KKT Conditions We can make use of the compact notation a ⊥ b ⟺ aᵀb = 0 to write the primal-dual feasibility and complementary slackness KKT conditions as
0 ≤ λᵢ* ⊥ −fᵢ(x*) ≥ 0, for i = 1,..., m.
By stacking the fᵢ's, hᵢ's, and λᵢ's in vectors, we can finally write the KKT conditions in the following compact form:
0 ≤ λ* ⊥ −f(x*) ≥ 0, h(x*) = 0, and ∇f₀(x*) + ∇fᵀ(x*) λ* + ∇hᵀ(x*) ν* = 0.
Daniel P. Palomar & Gesualdo Scutari 12
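As a quick sanity check, the compact complementarity conditions can be verified numerically on a toy one-dimensional problem of our own choosing (not from the slides): minimize x² subject to f₁(x) = 1 − x ≤ 0, whose solution is x* = 1 on the boundary with multiplier λ* = 2.

```python
import numpy as np

# Hypothetical toy problem: minimize x^2 subject to f1(x) = 1 - x <= 0.
# The constraint is active at the solution, so the multiplier is positive.
x_star = 1.0
lam_star = 2.0           # from stationarity: 2*x* + lam*(-1) = 0

f1 = 1.0 - x_star        # constraint value at x*
grad_f0 = 2.0 * x_star   # gradient of the objective at x*
grad_f1 = -1.0           # gradient of the constraint

# Compact KKT: 0 <= lam*  ⊥  -f1(x*) >= 0, plus zero gradient of the Lagrangian.
assert lam_star >= 0 and -f1 >= 0                  # primal-dual feasibility
assert abs(lam_star * f1) < 1e-12                  # complementary slackness
assert abs(grad_f0 + lam_star * grad_f1) < 1e-12   # stationarity
print("KKT conditions verified at x* =", x_star)
```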

14 Game Theory Daniel P. Palomar & Gesualdo Scutari 13

15 Game Theory and Nash Equilibrium Game theory deals with problems with interacting decision-makers (called players). Noncooperative game theory is a branch of game theory for the resolution of conflicts among selfish players, each of whom tries to optimize his own objective function. A solution to such a competitive game is an equilibrium point where each user is unilaterally happy and does not want to deviate: it is the so-called Nash Equilibrium (NE). We will consider two important types of noncooperative games: Nash Equilibrium Problems (NEP) and Generalized NEP (GNEP). Daniel P. Palomar & Gesualdo Scutari 14

16 Nash Equilibrium Problems (NEP) Mathematically, we can define a NEP as a set of coupled optimization problems:
(G): minimize_{xᵢ} fᵢ(xᵢ, x₋ᵢ) subject to xᵢ ∈ Kᵢ, for i = 1,..., Q
where:
xᵢ ∈ R^{nᵢ} is the strategy of player i
x₋ᵢ ≜ (xⱼ)ⱼ≠ᵢ are the strategies of all the players except i
Kᵢ ⊆ R^{nᵢ} is the strategy set of player i
fᵢ(xᵢ, x₋ᵢ) is the cost function of player i.
How to define a solution of the game? Daniel P. Palomar & Gesualdo Scutari 15

17 Solution of G: A (pure strategy) Nash Equilibrium (NE) is a feasible x* = (xᵢ*)ᵢ₌₁^Q such that
fᵢ(xᵢ*, x₋ᵢ*) ≤ fᵢ(yᵢ, x₋ᵢ*), ∀yᵢ ∈ Kᵢ, ∀i = 1,..., Q.
A NE is a strategy profile where every player is unilaterally happy. Life is not so easy: a (pure strategy) NE may not exist or may not be unique. Even when the NE is unique, there is no guarantee of convergence of iterative (best-response) algorithms. How to analyze a game? How to design convergent distributed algorithms? Daniel P. Palomar & Gesualdo Scutari 16
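Both caveats show up already in very small games. The following toy two-player game (our own illustrative example, with made-up costs fᵢ(xᵢ, x₋ᵢ) = ½xᵢ² + 2xᵢx₋ᵢ on Kᵢ = [−1, 1]) has multiple NEs, and simultaneous best-response iterates cycle without ever reaching one:

```python
import numpy as np

# Illustrative two-player game (not from the lecture):
# f_i(x_i, x_-i) = 0.5*x_i**2 + 2*x_i*x_-i,  K_i = [-1, 1].
# Each cost is strongly convex in the player's own variable, yet the
# game has several NEs and simultaneous best responses can cycle.

def best_response(x_other):
    # argmin over [-1,1] of 0.5*x**2 + 2*x*x_other  ->  clip(-2*x_other)
    return float(np.clip(-2.0 * x_other, -1.0, 1.0))

def is_ne(x1, x2, tol=1e-9):
    return abs(x1 - best_response(x2)) < tol and abs(x2 - best_response(x1)) < tol

# (0,0), (1,-1), and (-1,1) are all Nash equilibria -> no uniqueness here.
assert is_ne(0.0, 0.0) and is_ne(1.0, -1.0) and is_ne(-1.0, 1.0)

# Simultaneous (Jacobi) best response from (1, 0.5) cycles between
# (-1,-1) and (1,1), never reaching any of the equilibria.
x = (1.0, 0.5)
trajectory = [x]
for _ in range(6):
    x = (best_response(x[1]), best_response(x[0]))
    trajectory.append(x)
print(trajectory)  # alternates (-1,-1), (1,1), (-1,-1), ...
```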

18 Minimum Principle for NEPs A NE is defined as a simultaneous solution of each of the single-player optimization problems: given the other players' x₋ᵢ, xᵢ* must be a solution to
minimize_{xᵢ} fᵢ(xᵢ, x₋ᵢ*) subject to xᵢ ∈ Kᵢ.
The minimum principle for a convex game is then, for each i = 1,..., Q:
(yᵢ − xᵢ*)ᵀ ∇_{xᵢ} fᵢ(xᵢ*, x₋ᵢ*) ≥ 0, ∀yᵢ ∈ Kᵢ.
Daniel P. Palomar & Gesualdo Scutari 17

19 KKT Conditions for NEPs Suppose now that each set Kᵢ is described by a set of equalities and inequalities: Kᵢ = {xᵢ : gᵢ(xᵢ) ≤ 0, hᵢ(xᵢ) = 0}. Then we can write the coupled KKT conditions for the convex NEP, for each i = 1,..., Q, as
0 ≤ λᵢ* ⊥ −gᵢ(xᵢ*) ≥ 0, hᵢ(xᵢ*) = 0,
and
∇_{xᵢ} fᵢ(x*) + ∇gᵢᵀ(xᵢ*) λᵢ* + ∇hᵢᵀ(xᵢ*) νᵢ* = 0.
Daniel P. Palomar & Gesualdo Scutari 18

20 Pareto Optimality Pareto optimality is the natural extension of the concept of optimality in classical optimization to multi-objective optimization. A point x is Pareto optimal if its set of objectives (f₁(x),..., f_Q(x)) cannot be improved (element by element) by another point. Note that some objectives of a Pareto-optimal point may be improved by another point, but not all the objectives at the same time. Therefore, a multi-objective optimization problem has an associated Pareto-optimal frontier rather than a single optimal point. Daniel P. Palomar & Gesualdo Scutari 19

21 In the context of game theory, one may formulate the multi-objective (possibly nonconvex) version of the game as
minimize_{x₁,...,x_Q} Σᵢ₌₁^Q wᵢ fᵢ(xᵢ, x₋ᵢ) subject to xᵢ ∈ Kᵢ, ∀i
where the weights wᵢ explore (the convex hull of) the Pareto-optimal frontier. One common choice is equal weights wᵢ = 1, obtaining the sum-performance (social optimum). We can then compare the performance achieved by a NE versus that achieved by a Pareto-optimal point in terms of sum-performance. This is related to the concept of price of anarchy in game theory. The loss of performance can be large in some cases; in most practical cases, however, it does not seem to be too significant. Daniel P. Palomar & Gesualdo Scutari 20

22 As an example, we plot the sum-performance (sum-rate) achieved by the NE as well as the sum-Pareto-optimal point in an ad-hoc wireless multiuser system. [Figure: average sum-rate (b/s/Hz) versus INR (dB) for the Nash Equilibrium and the GP Algorithm (uniform power and random initializations), at SNR = 0, 5, 10, 15 dB.] Daniel P. Palomar & Gesualdo Scutari 21

23 Solution Analysis: Existence of NE Theorem [(existence theorem) Debreu-Fan-Glicksberg (1952)]. Given the game G = ⟨K, f⟩, suppose that: the action space K is compact and convex; the cost functions fᵢ(xᵢ, x₋ᵢ) are continuous in x ∈ K and quasi-convex in xᵢ ∈ Kᵢ, for any given x₋ᵢ. Then, there exists a pure strategy NE. Existence may also follow from the special structure of the game, for example: potential games, supermodular games. Daniel P. Palomar & Gesualdo Scutari 22

24 Solution Analysis: Uniqueness of NE for Convex Games Theorem [Rosen uniqueness theorem (1965)]. Given the game G = ⟨K, f⟩, suppose that: the action space K is compact and convex; the cost functions fᵢ(xᵢ, x₋ᵢ) are continuous in x ∈ K and quasi-convex in xᵢ ∈ Kᵢ, for any given x₋ᵢ; and the Diagonal Strict Convexity (DSC) property holds true: given g(x, r) ≜ (rᵢ ∇_{xᵢ} fᵢ(x))ᵢ₌₁^Q,
DSC: ∃ r > 0 : (x − y)ᵀ (g(x, r) − g(y, r)) > 0, ∀x, y ∈ K, x ≠ y.
Then, there exists a unique pure strategy NE. The DSC property is not easy to check and may be too restrictive (we want the conditions to be satisfied for all x ∈ K). Daniel P. Palomar & Gesualdo Scutari 23

25 Solution Analysis: Uniqueness of NE for Standard Games Let us define the best response Bᵢ(x₋ᵢ) of each player i as the set of optimal solutions of player i's optimization problem for any given x₋ᵢ:
Bᵢ(x₋ᵢ) ≜ {xᵢ ∈ Kᵢ : fᵢ(xᵢ, x₋ᵢ) ≤ fᵢ(yᵢ, x₋ᵢ), ∀yᵢ ∈ Kᵢ}.
Standard function [Yates, 1995]: a function g : K → Rᵐ₊ is said to be standard if it has the two following properties:
Monotonicity: ∀x, y ∈ K, x ≥ y ⟹ g(x) ≥ g(y) (component-wise)
Scalability: ∀α > 1, ∀x ∈ K, g(αx) < α g(x).
Daniel P. Palomar & Gesualdo Scutari 24

26 If the best-response map B(x) = (Bᵢ(x₋ᵢ))ᵢ₌₁^Q is a standard function (which requires each Bᵢ(x₋ᵢ) to be single-valued), then the NE of the game G = ⟨K, f⟩ is unique. This requirement is quite strong and in general is not satisfied in practice by the games of our interest. Daniel P. Palomar & Gesualdo Scutari 25

27 Algorithms for Achieving NE Best-response (BR) based dynamics: convergence requires contraction or monotonicity of the BR map; contraction can be studied using classical fixed-point theory if the BR is known in closed form; monotonicity is guaranteed if the BR is a standard function (too restrictive). ODE approximation: the idea is to introduce a continuous-time ordinary differential equation (ODE) whose equilibrium coincides with the NE of the game: convergence to a NE is guaranteed under global asymptotic stability (GAS) of the equilibrium of the ODE; sufficient conditions for DSC imply GAS. Daniel P. Palomar & Gesualdo Scutari 26

28 Summary You already knew the basic concepts in convex optimization theory. We have reviewed the minimum principle and the KKT conditions. Within the context of game theory, we have defined the concepts of NE, Pareto optimality, and have formulated NEPs. We have given the state of the art for the solution analysis of NEPs as well as practical algorithms. Daniel P. Palomar & Gesualdo Scutari 27

29 References
Convex optimization:
S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge Univ. Press.
D. P. Palomar and Y. C. Eldar (Eds.), Convex Optimization in Signal Processing and Communications. Cambridge University Press.
Special Issue of IEEE Signal Processing Magazine on Convex Optimization for Signal Processing, May. (Editors: Y. Eldar, T. Luo, K. Ma, D. Palomar, and N. Sidiropoulos.)
Ben-Tal and Nemirovski, Lectures on Modern Convex Optimization. SIAM.
Game theory:
M. J. Osborne and A. Rubinstein, A Course in Game Theory. Cambridge: MIT Press.
J. P. Aubin, Mathematical Methods of Game and Economic Theory. Amsterdam, The Netherlands: Elsevier.
Special Issue of IEEE Jour. on Selected Areas in Communications on Game Theory in Communication Systems, Sept. (Editors: J. Huang, D. Palomar, N. Mandayam, J. Walrand, S. B. Wicker, and T. Basar.)
Special Issue of IEEE Signal Processing Magazine on Game Theory in Signal Proc. and Comm., Sept. (Editors: E. A. Jorswieck, E. G. Larsson, M. Luise, and H. V. Poor.)
G. Scutari, D. P. Palomar, F. Facchinei, and J.-S. Pang, "Convex Optimization, Game Theory, and Variational Inequality Theory," IEEE Signal Processing Magazine, vol. 27, no. 3, May.
Daniel P. Palomar & Gesualdo Scutari 28

30 Part II: Variational Inequality Theory: The Theory Daniel P. Palomar & Gesualdo Scutari 29

31 Part II - Outline VI as a general framework Alternative formulations of VI: KKT conditions, primal-dual form Solution analysis of VI: existence, uniqueness, solution as a projection Algorithms for VI NEP as a VI: solution analysis and algorithms GNEP as a VI: variational equilibria and algorithms Daniel P. Palomar & Gesualdo Scutari 30

32 VI as a General Framework Daniel P. Palomar & Gesualdo Scutari 31

33 The VI Problem Given a set K ⊆ Rⁿ and a mapping F : K → Rⁿ, the VI problem VI(K, F) is to find a vector x* ∈ K such that
(y − x*)ᵀ F(x*) ≥ 0, ∀y ∈ K.
Daniel P. Palomar & Gesualdo Scutari 32

34 Geometrical Interpretation of the VI A feasible point x* that is a solution of the VI(K, F): F(x*) forms a nonobtuse angle with all the feasible vectors y − x*. [Figure: feasible set K, point x*, vectors F(x*) and y − x*.] Daniel P. Palomar & Gesualdo Scutari 33

35 Geometrical Interpretation of the VI A feasible point x that is NOT a solution of the VI(K, F). [Figure: feasible set K, point x, vector F(x), and a feasible point y with (y − x)ᵀF(x) < 0.] Daniel P. Palomar & Gesualdo Scutari 34

36 Convex Optimization as a VI Convex optimization problem:
minimize_x f(x) subject to x ∈ K
where K ⊆ Rⁿ is a convex set and f : Rⁿ → R is a convex function. Minimum principle: the problem above is equivalent to finding a point x* ∈ K such that
(y − x*)ᵀ ∇f(x*) ≥ 0, ∀y ∈ K ⟺ VI(K, ∇f)
which is a special case of VI with F = ∇f. Daniel P. Palomar & Gesualdo Scutari 35

37 It seems that a VI is more general than a convex optimization problem only when F ≠ ∇f. But is that really so significant? The answer is affirmative: the VI(K, F) encompasses a wider range of problems than classical optimization whenever F ≠ ∇f (i.e., F does not have a symmetric Jacobian). Some examples of relevant problems that can be cast as a VI include NEPs, GNEPs, systems of equations, nonlinear complementarity problems, fixed-point problems, saddle-point problems, etc. Daniel P. Palomar & Gesualdo Scutari 36

38 NEP as a VI The minimum principle for a convex game is, for each i = 1,..., Q:
(yᵢ − xᵢ*)ᵀ ∇_{xᵢ} fᵢ(xᵢ*, x₋ᵢ*) ≥ 0, ∀yᵢ ∈ Kᵢ.
We can write it in a more compact way as
(y − x*)ᵀ F(x*) ≥ 0, ∀y ∈ K
by defining K = K₁ × ⋯ × K_Q and F = (∇_{xᵢ} fᵢ)ᵢ₌₁^Q (note that F ≠ ∇f). Hence, the interpretation of the NEP as a VI:
min_{xᵢ ∈ Kᵢ} fᵢ(xᵢ, x₋ᵢ), i = 1,..., Q ⟺ VI(K, F).
Daniel P. Palomar & Gesualdo Scutari 37

39 System of Equations as a VI In some engineering problems, we may not want to minimize a function but instead to find a solution to a system of equations: F(x) = 0. This can be cast as a VI by choosing K = Rⁿ. Hence, F(x) = 0 ⟺ VI(Rⁿ, F). Daniel P. Palomar & Gesualdo Scutari 38

40 Nonlinear Complementarity Problem (NCP) as a VI The NCP is a unifying mathematical framework that includes linear programming, quadratic programming, and bi-matrix games. The NCP(F) is to find a vector x* such that
NCP(F): 0 ≤ x* ⊥ F(x*) ≥ 0.
An NCP can be cast as a VI by choosing K = Rⁿ₊: NCP(F) ⟺ VI(Rⁿ₊, F). Daniel P. Palomar & Gesualdo Scutari 39

41 Linear Complementarity Problem (LCP) as a VI An LCP is just an NCP with an affine function F(x) = Mx + q:
LCP(M, q): 0 ≤ x ⊥ Mx + q ≥ 0.
An LCP can be cast as a VI by choosing K = Rⁿ₊: LCP(M, q) ⟺ VI(Rⁿ₊, Mx + q). Examples of LCP: quadratic (linear) programming, canonical bimatrix games, Nash-Cournot games. Daniel P. Palomar & Gesualdo Scutari 40
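A small LCP with a positive definite M can be solved directly through its VI form: applying the projection iteration x ← max(0, x − τ(Mx + q)) onto Rⁿ₊ (a minimal sketch with made-up data M, q of our own choosing):

```python
import numpy as np

# Toy LCP(M, q): find x >= 0 with Mx + q >= 0 and x^T (Mx + q) = 0.
# M is positive definite, so the equivalent VI(R^n_+, Mx+q) has a unique solution.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])

# Fixed-point/projection iteration for the VI; the projection onto the
# nonnegative orthant is just a componentwise max with zero.
x = np.zeros(2)
tau = 0.2                       # small enough for the iteration to contract here
for _ in range(500):
    x = np.maximum(0.0, x - tau * (M @ x + q))

w = M @ x + q
assert np.all(x >= 0) and np.all(w >= -1e-8)   # feasibility
assert abs(x @ w) < 1e-8                        # complementarity
print(x)  # ~ [1/3, 1/3]: here the solution of Mx + q = 0 is already nonnegative
```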

42 Quadratic (Linear) Programming as an LCP Consider the following QP (not necessarily convex):
minimize_x ½xᵀPx + cᵀx + r subject to Ax ≤ b, x ≥ 0
where P ∈ R^{n×n} is symmetric (the case P = 0 gives rise to an LP), A ∈ R^{m×n}, c ∈ Rⁿ, b ∈ Rᵐ, r ∈ R. The KKT conditions are necessary for optimality (and sufficient if P ⪰ 0):
λ ≜ Px + c + Aᵀµ ≥ 0, x ≥ 0, λᵀx = 0,
b − Ax ≥ 0, µ ≥ 0, µᵀ(b − Ax) = 0.
Daniel P. Palomar & Gesualdo Scutari 41

43 The KKT conditions can be written more compactly as
0 ≤ x ⊥ Px + Aᵀµ + c ≥ 0
0 ≤ µ ⊥ −Ax + b ≥ 0
which can be rewritten as the LCP(M, q):
0 ≤ [x; µ] ⊥ M [x; µ] + q ≥ 0, with M = [P, Aᵀ; −A, 0] and q = [c; b].
Daniel P. Palomar & Gesualdo Scutari 42

44 Fixed-Point Problem as a VI In other engineering problems, we may need to find the (unconstrained) fixed point of a mapping G(x): find x ∈ Rⁿ such that x = G(x). This can be easily cast as a VI by noticing that a fixed point of G(x) corresponds to a solution of a system of equations with function F(x) = x − G(x):
x = G(x) ⟺ VI(Rⁿ, I − G).
Similarly, for constrained fixed-point equations: find x ∈ K such that x = G(x) ⟺ VI(K, I − G). Daniel P. Palomar & Gesualdo Scutari 43

45 Saddle-Point Problem as a VI Given two sets X ⊆ Rⁿ and Y ⊆ Rᵐ, and a function L(x, y), the saddle-point problem is to find a pair (x*, y*) ∈ X × Y such that
L(x*, y) ≤ L(x*, y*) ≤ L(x, y*), ∀(x, y) ∈ X × Y
or, equivalently, L(x*, y*) = min_{x∈X} max_{y∈Y} L(x, y) = max_{y∈Y} min_{x∈X} L(x, y). Assuming L(x, y) continuously differentiable and convex-concave, if the saddle point exists, then
(x − x*)ᵀ ∇ₓL(x*, y*) ≥ 0, ∀x ∈ X
(y − y*)ᵀ (−∇_y L(x*, y*)) ≥ 0, ∀y ∈ Y
Daniel P. Palomar & Gesualdo Scutari 44

46 which is equivalent to the VI(K, F) with K = X × Y and F = [∇ₓL(x, y); −∇_y L(x, y)]. Daniel P. Palomar & Gesualdo Scutari 45

47 Alternative Formulations of VI: KKT Conditions Suppose that the (convex) feasible set K of VI(K, F) is described by a set of inequalities and equalities K = {x : g(x) ≤ 0, h(x) = 0}. The KKT conditions of VI(K, F) are
0 = F(x) + ∇gᵀ(x) λ + ∇hᵀ(x) ν
0 ≤ λ ⊥ −g(x) ≥ 0
0 = h(x).
Daniel P. Palomar & Gesualdo Scutari 46

48 To derive the KKT conditions it suffices to realize that if x* is a solution to VI(K, F) then it must solve the following convex optimization problem, and vice versa:
minimize_y yᵀF(x*) subject to y ∈ K.
(Otherwise, there would be a point y with yᵀF(x*) < x*ᵀF(x*), which would imply (y − x*)ᵀF(x*) < 0.) The KKT conditions of the VI follow from the KKT conditions of this problem, noting that the gradient of the objective is F(x*). Daniel P. Palomar & Gesualdo Scutari 47

49 Alternative Formulations of VI: Primal-Dual Representation We can now capitalize on the KKT conditions of VI(K, F) to derive an alternative representation of the VI involving not only the primal variable x but also the dual variables λ and ν. Consider VI(K̃, F̃) with K̃ = Rⁿ × Rᵐ₊ × Rᵖ and
F̃(x, λ, ν) = [F(x) + ∇gᵀ(x) λ + ∇hᵀ(x) ν; −g(x); h(x)].
The KKT conditions of VI(K̃, F̃) coincide with those of VI(K, F). Hence, both VIs are equivalent. Daniel P. Palomar & Gesualdo Scutari 48

50 VI(K, F) is the original (primal) representation, whereas VI(K̃, F̃) is the so-called primal-dual form, as it makes explicit both primal and dual variables. In fact, this primal-dual form is the VI representation of the KKT conditions of the original VI. Daniel P. Palomar & Gesualdo Scutari 49

51 Generalization of VI: Quasi-VI The QVI is an extension of a VI in which the defining set of the problem varies with the variable. Let K : Rⁿ ∋ x ↦ K(x) ⊆ Rⁿ be a point-to-set mapping from Rⁿ into subsets of Rⁿ; let F : Rⁿ ∋ x ↦ F(x) ∈ Rⁿ be a vector function on Rⁿ. The QVI defined by the pair (K, F) is to find a vector x* ∈ K(x*) such that
(y − x*)ᵀ F(x*) ≥ 0, ∀y ∈ K(x*).
If K(x) = K ⊆ Rⁿ for all x, the QVI reduces to the VI. If F(x) = 0, the QVI reduces to the classical fixed-point problem of the point-to-set map K(x): find an x* such that x* ∈ K(x*). Daniel P. Palomar & Gesualdo Scutari 50

52 Solution Analysis of VI Daniel P. Palomar & Gesualdo Scutari 51

53 Existence of a Solution Theorem. Let K ⊆ Rⁿ be compact and convex, and let F : K → Rⁿ be continuous. Then the VI(K, F) has a nonempty and compact solution set. Without boundedness of K, the existence of a solution requires some additional properties of the vector function F (as for convex optimization problems): coercivity, or monotonicity/P-properties. Daniel P. Palomar & Gesualdo Scutari 52

54 Monotonicity Properties of F: Outline Monotonicity properties of vector functions Convex programming - a special case: monotonicity properties are satisfied immediately by gradient maps of convex functions In a sense, role of monotonicity in VIs is similar to that of convexity in optimization Existence (uniqueness) of solutions of VIs and convexity of solution sets under monotonicity properties Daniel P. Palomar & Gesualdo Scutari 53

55 Monotonicity Properties of F (I) A mapping F : K → Rⁿ is said to be
(i) monotone on K if (x − y)ᵀ(F(x) − F(y)) ≥ 0, ∀x, y ∈ K
(ii) strictly monotone on K if (x − y)ᵀ(F(x) − F(y)) > 0, ∀x, y ∈ K, x ≠ y
(iii) strongly monotone on K if there exists a constant c_sm > 0 such that (x − y)ᵀ(F(x) − F(y)) ≥ c_sm ‖x − y‖², ∀x, y ∈ K.
The constant c_sm is called the strong monotonicity constant. Daniel P. Palomar & Gesualdo Scutari 54

56 Monotonicity Properties of F (II) [Figure: examples of (a) a monotone, (b) a strictly monotone, and (c) a strongly monotone scalar function F(x).] Daniel P. Palomar & Gesualdo Scutari 55

57 Monotonicity Properties of F (III) Relations among the monotonicity properties and their connection with the positive semidefiniteness of the Jacobian matrix JF(x) of F:
strongly monotone ⟺ JF(x) ⪰ cI ≻ 0
strictly monotone ⟸ JF(x) ≻ 0
monotone ⟺ JF(x) ⪰ 0
For an affine map F(x) = Ax + b, where A ∈ R^{n×n} (not necessarily symmetric) and b ∈ Rⁿ, we have: strongly monotone ⟺ strictly monotone ⟺ A ≻ 0, and monotone ⟺ A ⪰ 0 (in the sense of the symmetric part of A). Daniel P. Palomar & Gesualdo Scutari 56
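For an affine map this check is purely spectral: (x − y)ᵀ(F(x) − F(y)) = (x − y)ᵀA(x − y) depends only on the symmetric part (A + Aᵀ)/2, so monotonicity can be classified from its eigenvalues (a small sketch with illustrative matrices of our own choosing):

```python
import numpy as np

# Classify the monotonicity of F(x) = Ax + b from the eigenvalues of the
# symmetric part of A; b plays no role since it cancels in F(x) - F(y).
def monotonicity(A, tol=1e-12):
    lam_min = np.linalg.eigvalsh((A + A.T) / 2.0).min()
    if lam_min > tol:
        return "strongly monotone"   # with constant c_sm = lam_min
    if lam_min >= -tol:
        return "monotone"
    return "not monotone"

A1 = np.array([[2.0, -1.0], [1.0, 2.0]])   # nonsymmetric, symmetric part = 2I
A2 = np.array([[0.0, -1.0], [1.0, 0.0]])   # skew-symmetric, symmetric part = 0
A3 = np.array([[1.0, 3.0], [3.0, 1.0]])    # indefinite symmetric part

print(monotonicity(A1))  # strongly monotone
print(monotonicity(A2))  # monotone
print(monotonicity(A3))  # not monotone
```

Note that A2 generates a pure rotation field, which is monotone but is not the gradient of any function: a first concrete instance of F ≠ ∇f.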

58 Monotonicity Properties of F (IV) If F = ∇f, the monotonicity properties can be related to the convexity properties of f:
a) f convex ⟺ ∇f monotone ⟺ ∇²f ⪰ 0
b) f strictly convex ⟺ ∇f strictly monotone ⟸ ∇²f ≻ 0
c) f strongly convex ⟺ ∇f strongly monotone ⟺ ∇²f ⪰ cI ≻ 0
[Figure: (a)-(c) examples of convex, strictly convex, and strongly convex functions f(x); (d)-(f) their corresponding monotone gradient maps ∇f(x).]
Daniel P. Palomar & Gesualdo Scutari 57

59 Why Are Monotone Mappings Important? Arise from important classes of optimization/game-theoretic problems. Can articulate existence/uniqueness statements for such problems and VIs. Convergence properties of algorithms may sometimes (but not always) be restricted to such monotone problems. Daniel P. Palomar & Gesualdo Scutari 58

60 Solution Analysis Under Monotonicity Theorem. Let K ⊆ Rⁿ be a closed convex set, and let F : K → Rⁿ be continuous.
(a) If F is monotone on K, the solution set of VI(K, F) is closed and convex (possibly empty).
(b) If F is strictly monotone on K, the VI(K, F) has at most one solution.
(c) If F is strongly monotone on K, the VI(K, F) has a unique solution.
(d) If F is Lipschitz continuous and strongly monotone on Ω ⊇ K, there exists a c > 0 such that
‖x − x*‖ ≤ c ‖F_K^nat(x)‖, ∀x ∈ Ω
where x* is the unique solution of the VI and F_K^nat(x) ≜ x − Π_K(x − F(x)) (note that F_K^nat(x*) = 0).
Daniel P. Palomar & Gesualdo Scutari 59

61 Remarks: Strict monotonicity of F does not guarantee the existence of a solution. For example, F(x) = eˣ is strictly monotone, but the VI(R, eˣ) does not have a solution. The result in (d) provides an upper bound on the distance from the solution. For partitioned VIs (i.e., K = ∏ᵢ₌₁^Q Kᵢ, with Kᵢ ⊆ R^{nᵢ}, and F = (Fᵢ)ᵢ₌₁^Q), the existence and uniqueness conditions can be made weaker (there are necessary and sufficient conditions). Daniel P. Palomar & Gesualdo Scutari 60

62 Parallelism with Convex Problems Theorem. Let K ⊆ Rⁿ be a closed convex set, and let f : K → R be continuously differentiable (so F ≜ ∇f is continuous). Consider the optimization problem
(P): minimize_{x∈K} f(x).
(a) If f is convex (⟺ F is monotone) on K, the solution set of (P) [of the VI(K, F)] is closed and convex.
(b) If f is strictly convex (⟺ F is strictly monotone) on K, problem (P) [the VI(K, F)] has at most one solution.
(c) If f is strongly convex (⟺ F is strongly monotone) on K, problem (P) [the VI(K, F)] has a unique solution.
Daniel P. Palomar & Gesualdo Scutari 61

63 Characterization of Solution of VI as a Projection The solution of a VI(K, F) can be interpreted as the Euclidean projection of a proper map onto the convex set K. Let K ⊆ Rⁿ be closed and convex and let F : K → Rⁿ be arbitrary. Then
x* is a solution of the VI(K, F) ⟺ x* = Π_K(x* − F(x*)).
The fixed-point interpretation can be very useful for both analytical and computational purposes: basic existence results for a solution of the VI follow from fixed-point theorems (e.g., Corollary 2), and it enables the development of a large family of iterative methods (e.g., projection methods). Daniel P. Palomar & Gesualdo Scutari 62

64 Graphical interpretation: (a) x* is a solution of the VI(K, F) if and only if x* = Π_K(x* − F(x*)); (b) a feasible x that is not a solution of the VI, and thus x ≠ Π_K(x − F(x)). [Figure: feasible set K showing (a) the fixed point x* = Π_K(x* − F(x*)) and (b) a non-solution x with Π_K(x − F(x)) ≠ x.] Daniel P. Palomar & Gesualdo Scutari 63

65 Waterfilling: A New Look at an Old Result Capacity of parallel Gaussian channels {λₖ}ₖ₌₁^N ≥ 0 under average power constraints:
maximize_p Σₖ₌₁^N log(1 + pₖλₖ) subject to Σₖ₌₁^N pₖ ≤ P_T, 0 ≤ pₖ ≤ pₖ^max, 1 ≤ k ≤ N.
Optimal power allocation: the waterfilling solution
pₖ* = [µ − λₖ⁻¹]₀^{pₖ^max}, 1 ≤ k ≤ N
(where [·]₀^{pmax} denotes clipping to [0, pmax]) and µ is found to satisfy Σₖ₌₁^N pₖ* = P_T. History: Shannon 1949, Holsinger 1964, Gallager. For some problems, this waterfilling expression is not convenient. Daniel P. Palomar & Gesualdo Scutari 64
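Since the total allocated power is nondecreasing in the water level µ, µ can be found by simple bisection. A minimal sketch of the capped waterfilling, with made-up channel gains and power budget (assuming the per-channel caps can absorb the budget):

```python
import numpy as np

# Capped waterfilling p_k = clip(mu - 1/lam_k, 0, p_max_k), with the water
# level mu found by bisection so that sum(p) = P_T (illustrative numbers).
def waterfilling(lam, P_T, p_max):
    inv = 1.0 / lam

    def total_power(mu):
        return np.clip(mu - inv, 0.0, p_max).sum()

    lo, hi = 0.0, inv.max() + P_T    # total_power(hi) >= P_T when sum(p_max) >= P_T
    for _ in range(100):             # bisection on the monotone map mu -> total power
        mu = 0.5 * (lo + hi)
        if total_power(mu) < P_T:
            lo = mu
        else:
            hi = mu
    return np.clip(mu - inv, 0.0, p_max)

lam = np.array([1.0, 0.5, 0.1])      # channel gains (made-up)
p_max = np.array([2.0, 2.0, 2.0])    # per-channel caps (made-up)
p = waterfilling(lam, P_T=3.0, p_max=p_max)
assert abs(p.sum() - 3.0) < 1e-6     # power budget met with equality
assert np.all(p >= 0) and np.all(p <= p_max + 1e-9)
print(p)  # stronger channels (larger lam_k) receive more power
```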

66 New Interpretation of Waterfilling via VI Theorem [ScuPhD 05, ScuPalBar TSP07]: The waterfilling p* = [µ − λ⁻¹]₀^{p^max} is the unique solution of the affine VI(K, F), where
K = {p ∈ Rᴺ : Σₖ₌₁^N pₖ = P_T, 0 ≤ pₖ ≤ pₖ^max for all k}
and F(p) = p + λ⁻¹. Corollary [ScuPalBar TSP08]: The waterfilling solution can be rewritten as the projection p* = Π_K[−λ⁻¹]:
p* ∈ SOL(K, F) ⟺ p* = Π_K[p* − F(p*)] = Π_K[−λ⁻¹].
Daniel P. Palomar & Gesualdo Scutari 65

67 Algorithms for VI Daniel P. Palomar & Gesualdo Scutari 66

68 Algorithms for VI Newton Methods for VIs Equation-Based Algorithms for Complementarity Problems KKT-Based Approaches for VIs Merit Function-Based Approaches for VIs Projection Methods for VIs Tikhonov Regularization and Proximal-Point Methods for VIs Daniel P. Palomar & Gesualdo Scutari 67

69 Projection Methods: Introduction Projection methods are conceptually simple methods for solving a monotone VI(K, F) over a closed convex set K. Their advantages: easily implementable and computationally inexpensive (when K has structure that makes the projection onto K easy); suitable for large-scale problems; often used as sub-procedures in faster and more complex methods (enabling moves into promising regions). Their main disadvantage is slow progress, since they do not use higher-order information. Daniel P. Palomar & Gesualdo Scutari 68

70 Basic Idea of Projection Algorithms Fact: recall that, given K ⊆ Rⁿ closed and convex, x* is a solution of the VI(K, F) if and only if x* is a fixed point of the mapping Φ(x) ≜ Π_K(x − F(x)), i.e., x* = Φ(x*) [note that Φ : K → K]. This fact motivates the following simple fixed-point iterative scheme:
x⁽ⁿ⁺¹⁾ = Φ(x⁽ⁿ⁾), x⁽⁰⁾ ∈ K
which produces a sequence whose accumulation points are fixed points of Φ. If one could ensure that Φ is a contraction in some norm, then one could use fixed-point iterates to find a fixed point of Φ and, hence, a solution to VI(K, F). Daniel P. Palomar & Gesualdo Scutari 69

71 Basic Projection Method Algorithm 1: Projection algorithm with constant step size
(S.0): Choose any x⁽⁰⁾ ∈ K and a step size τ > 0; set n = 0.
(S.1): If x⁽ⁿ⁾ = Π_K(x⁽ⁿ⁾ − F(x⁽ⁿ⁾)), then STOP.
(S.2): Compute x⁽ⁿ⁺¹⁾ = Π_K(x⁽ⁿ⁾ − τ F(x⁽ⁿ⁾)).
(S.3): Set n ← n + 1; go to (S.1).
In order to ensure the convergence of the sequence {x⁽ⁿ⁾}ₙ (or a subsequence) to a fixed point of Φ, one needs some conditions on the mapping F and the step size τ. (Note that instead of a scalar step size, one can also use a positive definite matrix.) Daniel P. Palomar & Gesualdo Scutari 70

72 Basic Projection Method: Convergence Conditions Theorem. Let F : K → Rⁿ, where K ⊆ Rⁿ is closed and convex. Suppose F is strongly monotone and Lipschitz continuous on K:
(x − y)ᵀ(F(x) − F(y)) ≥ c_F ‖x − y‖² and ‖F(x) − F(y)‖ ≤ L_F ‖x − y‖, ∀x, y ∈ K,
and let τ < 2c_F/L_F². Then, the mapping Π_K(x − τF(x)) is a contraction in the Euclidean norm with contraction factor
η = √(1 − L_F² τ (2c_F/L_F² − τ)).
Therefore, any sequence {x⁽ⁿ⁾}ₙ generated by Algorithm 1 converges to the unique solution of the VI(K, F). Daniel P. Palomar & Gesualdo Scutari 71
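Algorithm 1 can be sketched on a small affine VI over a box, with made-up data of our own: A is nonsymmetric (so F is not a gradient map), its symmetric part is positive definite (so F is strongly monotone), and τ is picked below the bound 2c_F/L_F²:

```python
import numpy as np

# Projection algorithm with constant step size on VI(K, F),
# F(x) = Ax + b, K = [0,1]^2 (illustrative data).
A = np.array([[3.0, 1.0], [-1.0, 2.0]])
b = np.array([-1.0, -2.0])

F = lambda x: A @ x + b
proj = lambda x: np.clip(x, 0.0, 1.0)   # Euclidean projection onto the box K

c_F = np.linalg.eigvalsh((A + A.T) / 2.0).min()   # strong-monotonicity constant
L_F = np.linalg.norm(A, 2)                        # Lipschitz constant of F
tau = c_F / L_F**2                                # safely below 2*c_F/L_F**2

x = np.zeros(2)
for _ in range(2000):
    x = proj(x - tau * F(x))          # x^(n+1) = Pi_K(x^(n) - tau*F(x^(n)))

# At a solution the natural map vanishes: x* = Pi_K(x* - F(x*)).
assert np.linalg.norm(x - proj(x - F(x))) < 1e-6
print(x)  # -> approx [0., 1.]: here F vanishes at a boundary point of K
```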

73 Pros: convergence of the whole sequence to the unique solution of the VI; easy to implement (for nice K): one projection step per iteration. Cons: strong requirements on F (strong monotonicity and the Lipschitz property); we do not always have access to L_F and c_F, hence we do not know how small τ should be to ensure convergence. We can trade these cons for higher computational complexity; remedies include using a variable step size, the extra-gradient method, hyperplane projection methods, etc. Daniel P. Palomar & Gesualdo Scutari 72

74 NEP as a VI Daniel P. Palomar & Gesualdo Scutari 73

75 NEP as a VI: Solution Analysis and Algorithms We have seen that the NEP G = ⟨K, f⟩:
minimize_{xᵢ} fᵢ(xᵢ, x₋ᵢ) subject to xᵢ ∈ Kᵢ, i = 1,..., Q,
is equivalent (under mild conditions) to the VI(K, F), where K = K₁ × ⋯ × K_Q and F = (∇_{xᵢ} fᵢ)ᵢ₌₁^Q. Building on the VI framework, we can now derive conditions for the existence/uniqueness of a NE and devise distributed algorithms along with their convergence properties. The state-of-the-art results are given in [Scu-Facc-PanPal IT10(sub)]. Daniel P. Palomar & Gesualdo Scutari 74

76 NEP as a VI: Existence and Uniqueness of a NE Theorem. Given the NEP G = ⟨K, f⟩, suppose that each Kᵢ is closed and convex, fᵢ(xᵢ, x₋ᵢ) is continuously differentiable and convex in xᵢ for any x₋ᵢ, and let F(x) ≜ (∇_{xᵢ} fᵢ(x))ᵢ₌₁^Q. Then the following statements hold.
(a) Suppose that for every i the strategy set Kᵢ is bounded. Then the NEP [the VI(K, F)] has a nonempty and compact solution set;
(b) Suppose that F(x) is strictly monotone on K. Then the NEP [the VI(K, F)] has at most one solution;
(c) Suppose that F(x) is strongly monotone on K. Then the NEP [the VI(K, F)] has a unique solution.
Daniel P. Palomar & Gesualdo Scutari 75

77 Matrix Conditions for the Monotonicity of F We provide sufficient conditions for F to be (strictly/strongly) monotone. Let us introduce the matrices JF_low and Υ_F, defined as
[JF_low]ᵢⱼ ≜ inf_{x∈K} {∇²_{xᵢxᵢ} fᵢ(x)} if i = j, and inf_{x∈K} {½(∇²_{xᵢxⱼ} fᵢ(x) + ∇²_{xⱼxᵢ} fⱼ(x))} otherwise,
[Υ_F]ᵢⱼ ≜ αᵢ^min if i = j, and −βᵢⱼ^max otherwise,
where αᵢ^min ≜ inf_{z∈K} λ_min(∇²_{xᵢxᵢ} fᵢ(z)) and βᵢⱼ^max ≜ sup_{z∈K} ‖∇²_{xᵢxⱼ} fᵢ(z)‖. Daniel P. Palomar & Gesualdo Scutari 76

78 Proposition [Scu-Facc-PanPal IT10(sub)]. Let F(x) ≜ (∇_{xᵢ} fᵢ(x))ᵢ₌₁^Q be continuously differentiable with bounded derivatives on K. Then the following statements hold:
(a) If JF_low is a (strictly) copositive matrix or Υ_F is a positive semidefinite (definite) matrix, then F is monotone (strictly monotone) on K;
(b) If JF_low or Υ_F is a positive definite matrix, then F is strongly monotone on K, with strong monotonicity constant c_sm(F) = λ_least(JF_low) [or c_sm(F) = λ_least(Υ_F)].
Sufficient conditions for Υ_F ≻ 0 are, for some w > 0:
(1/wᵢ) Σ_{j≠i} βᵢⱼ^max wⱼ / αᵢ^min < 1, ∀i, or (1/wⱼ) Σ_{i≠j} βᵢⱼ^max wᵢ / αᵢ^min < 1, ∀j.
Daniel P. Palomar & Gesualdo Scutari 77

79 Remarks: The uniqueness results stated in parts (b)-(c) of the Theorem do not require the set K to be bounded. Note that if Υ_F ≻ 0 (the NE is unique), it must be that αᵢ^min = inf_{z∈K} λ_min(∇²_{xᵢxᵢ} fᵢ(z)) > 0 for all i. This implies the uniform strong convexity of fᵢ(·, x₋ᵢ) for all x₋ᵢ (the optimal solution of each player's optimization problem is unique). The uniqueness conditions are also sufficient for global convergence of the best-response asynchronous distributed algorithms described later on. Daniel P. Palomar & Gesualdo Scutari 78

80 Algorithms for NEPs Algorithms for strongly monotone NEPs: Totally asynchronous best-response algorithm [Scu-Facc-PanPal IT10(sub)]. Algorithms for monotone NEPs: Proximal Decomposition Algorithms [Scu-Facc-PanPal IT10(sub)]. Daniel P. Palomar & Gesualdo Scutari 79

81 Best-response Algorithms: Basic Idea. A NE x* is defined as a simultaneous solution of each of the single-player optimization problems. Introduce the set of optimal solutions to the i-th optimization problem for any given x_{−i},
B_i(x_{−i}) ≜ {x_i ∈ K_i : f_i(x_i, x_{−i}) ≤ f_i(y_i, x_{−i}), ∀y_i ∈ K_i},
also termed the best-response map. The NEs of the NEP can then be reinterpreted as the fixed points of the best-response map B(x) ≜ (B_i(x_{−i}))_{i=1}^Q, i.e., x* ∈ B(x*).

82 This fact motivates the class of so-called best-response algorithms: fixed-point-based iterative schemes where, at every iteration, each player updates his own strategy by choosing a point in B_i(x_{−i}) (solving his own optimization problem given x_{−i}). If one can ensure that B is a contraction in some norm, then one can use fixed-point iterates to find a fixed point of B and hence a solution of the NEP.

83 Asynchronous Best-Response Algorithm. In the Asynchronous Best-Response Algorithm, users update their strategies in a totally asynchronous way based on B_i(x_{−i}): they may update at arbitrary and different times, more or less frequently than the others, and they may use an outdated measure of the strategies of the others.

84 Asynchronous Best-Response Algorithm. Algorithm 2: Asynchronous Best-Response Algorithm.
(S.0): Choose any feasible starting point x^(0) = (x_i^(0))_{i=1}^Q; set n = 0.
(S.1): If x^(n) satisfies a suitable termination criterion: STOP.
(S.2): For i = 1, ..., Q, compute x_i^(n+1) as
x_i^(n+1) = argmin_{x_i ∈ K_i} f_i(x_i, x_{−i}^(τ^i(n))) if n ∈ T_i, and x_i^(n+1) = x_i^(n) otherwise,
where T_i is the set of iteration indices at which player i updates and τ^i(n) is the (possibly outdated) index of the rivals' strategies used by player i.
(S.3): n ← n + 1; go to (S.1).
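A minimal sketch of such a best-response scheme (sequential Gauss-Seidel updates, the special case of Algorithm 2 with T_i = all n and no delays), on a hypothetical two-player quadratic game whose best responses are available in closed form:

```python
import numpy as np

# Toy 2-player NEP (hypothetical payoffs, not from the slides):
#   minimize_{x_i in [0,1]}  0.5*(x_i - c_i)^2 + b * x_i * x_j
# Closed-form best response: B_i(x_j) = clip(c_i - b*x_j, 0, 1).
c, b = np.array([0.6, 0.8]), 0.3   # |b| < 1 => B is a contraction

def best_response(i, x):
    j = 1 - i
    return float(np.clip(c[i] - b * x[j], 0.0, 1.0))

x = np.zeros(2)
for n in range(100):                      # sequential (Gauss-Seidel) sweeps
    x_old = x.copy()
    for i in range(2):
        x[i] = best_response(i, x)
    if np.abs(x - x_old).max() < 1e-10:   # termination criterion (S.1)
        break

# At a NE, x* is a fixed point of the best-response map: x* = B(x*).
print("x* =", x)
assert all(abs(x[i] - best_response(i, x)) < 1e-8 for i in range(2))
```

For this game Υ_F = [[1, −0.3], [−0.3, 1]] ≻ 0, so the contraction/convergence conditions of the slides hold.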

85 Theorem. [Scu-Facc-PanPal IT10(sub)] Given the NEP G = <K, f>, suppose that Υ_F ≻ 0. Then any sequence {x^(n)}_{n=0}^∞ generated by the asynchronous best-response algorithm converges to the unique NE of G, for any feasible updating schedule of the players. Remarks: Under Υ_F ≻ 0, the best-response mapping B(x) is a contraction (equivalently, the function F = (∇_{x_i} f_i)_{i=1}^Q is strongly monotone on K). If f_i(·, x_{−i}) is not uniformly strongly convex for all x_{−i} (i.e., Υ_F ≻ 0 fails), the algorithm is not guaranteed to converge. How can we deal with (non-strongly) monotone NEPs?

86 Algorithms for Monotone NEPs: Main Idea. We know how to solve a strongly monotone NEP. To solve a merely monotone NEP, the proposed approach is to make it strongly monotone by a proper regularization. The regularization has to be chosen so that: one can recover the NEs of the original game from those of the regularized game; one can solve the regularized game via distributed algorithms. Ingredients: the equivalence between the VI and the NEP; proximal decomposition algorithms for monotone VIs.

87 GNEP as a VI Daniel P. Palomar & Gesualdo Scutari 86

88 GNEP: Basic Definitions. The GNEP extends the classical NEP setting by allowing each player's strategy set to depend on the rival players' strategies x_{−i}. Let K_i(x_{−i}) ⊆ R^{n_i} be the feasible set of player i when the other players choose x_{−i}. The aim of each player i, given x_{−i}, is to choose a strategy x_i ∈ K_i(x_{−i}) that solves the problem: minimize_{x_i} f_i(x_i, x_{−i}) subject to x_i ∈ K_i(x_{−i}). A Generalized Nash Equilibrium (GNE) is a feasible point x* such that, for each player i = 1, ..., Q: f_i(x_i*, x_{−i}*) ≤ f_i(x_i, x_{−i}*), ∀x_i ∈ K_i(x_{−i}*).

89 Facts: The GNEP can be rewritten as a quasi-variational inequality (QVI). However, QVIs are much harder problems than VIs, and only a few results are available. Thus the GNEP, in its full generality, is almost intractable, and the VI approach does not help either. We therefore restrict our attention to a particular class of (more tractable) equilibrium problems: the so-called GNEPs with jointly convex shared constraints.

90 GNEPs With Jointly Convex Shared Constraints (JCSC). A GNEP is termed a GNEP with jointly convex shared constraints if the feasible sets are defined as K_i(x_{−i}) ≜ {x_i ∈ K_i : g(x_i, x_{−i}) ≤ 0}, where: K_i ⊆ R^{n_i} is the closed and convex set of individual constraints of player i; g(x_i, x_{−i}) ≤ 0 represents the set of shared coupling constraints (the same for all players); and g ≜ (g_j)_{j=1}^m : R^n → R^m is (jointly) convex in x. If there are no shared constraints, the GNEP reduces to a NEP.

91 Geometrical Interpretation. Define the joint feasible set K: K = {x : g(x_i, x_{−i}) ≤ 0, x_i ∈ K_i, i = 1, ..., Q}. It is easy to see that K_i(x_{−i}) = {x_i : (x_i, x_{−i}) ∈ K}. [Figure: the set K in the (x_1, x_2) plane, with the slices K_1(x_2) and K_2(x_1) at a point x = (x_1, x_2).]

92 Connection Between GNEPs with JCSC and VIs. GNEPs with JCSC are still very difficult problems; however, at least some of their solutions can be studied and computed using the VI approach. Given the GNEP with JCSC, let K = {x : g(x_i, x_{−i}) ≤ 0, x_i ∈ K_i, ∀i} and F ≜ (∇_{x_i} f_i)_{i=1}^Q, and consider the VI(K, F). Lemma. Every solution of the VI(K, F) is a solution of the GNEP with JCSC; the converse in general is not true. The solutions of the GNEP that are also solutions of the VI are termed Variational Equilibria (VE).

93 VEs: Game-Theoretical Interpretation. Recall that the VEs are the solutions of the VI(K, F), with K = {x : g(x_i, x_{−i}) ≤ 0, x_i ∈ K_i, ∀i} and F = (∇_{x_i} f_i)_{i=1}^Q. Connection between the VI(K, F) and NEPs: x* ∈ K is a VE if and only if x*, together with a suitable λ*, is a solution of the NEP with pricing
G_{λ*}: minimize_{x_i ∈ K_i} f_i(x_i, x_{−i}) + (λ*)ᵀ g(x), i = 1, ..., Q,
and furthermore 0 ≤ λ* ⟂ g(x*) ≤ 0.

94 Remarks: We can interpret λ* as prices paid by the players for the common resource represented by the shared constraints. The complementarity condition says that the players actually pay only when the resource becomes scarce. Thus, the NEP with pricing can be seen as a penalized version of the GNEP with JCSC, where the shared constraints are enforced by making the players pay the common price λ*. Mathematically, λ* is the common KKT multiplier of the shared constraints. We are now able to reduce the solution analysis and computation of a VE to those of the equilibrium of a NEP, to which we can in principle apply the theory developed so far.

95 VEs: Solution Analysis and Algorithms. Solution analysis: since the VEs are solutions of a VI, existence and uniqueness conditions follow from the VI theory developed so far. Algorithms: similarly, we can devise algorithms for VEs based on the VI(K, F); however, they will not be distributed, since the set K does not have a Cartesian structure (there is a coupling among the players' strategies). How to attack the problem: building on the equivalence between the VI(K, F) and the NEP with pricing, we can overcome this issue. To do so, however, we still need some more work.

96 Toward Distributed Algs: A NCP Reformulation. We rewrite the NEP with pricing as a VI whose feasible set has a Cartesian structure. For simplicity, we focus only on strongly monotone games [Υ_F ≻ 0]. Step 1: The NEP with pricing, together with the complementarity condition (CC), can be rewritten as
G_λ: VI(K, F + ∇g λ),  (CC): 0 ≤ λ ⟂ g(x) ≤ 0,
where K = ∏_{i=1}^Q K_i and F = (∇_{x_i} f_i)_{i=1}^Q. The VI(K, F + ∇g λ) is strongly monotone and thus has a unique solution x*(λ) [the unique NE of G_λ].

97 Step 2: We rewrite G_λ together with the CC as a nonlinear complementarity problem (NCP). Define the map Φ(λ): R^m_+ ∋ λ ↦ −g(x*(λ)), which measures the violation of the shared constraints at x*(λ). Theorem. [Scu-Facc-PanPal IT10(sub)] If Υ_F ≻ 0, the (strongly monotone) NEP with pricing plus CC is equivalent to the NCP in the price tuple λ:
NCP(Φ): 0 ≤ λ ⟂ Φ(λ) ≥ 0, i.e., the VI(R^m_+, Φ).
The NCP reformulation is instrumental to devising distributed algorithms: we can now use the algorithms developed so far for strongly monotone VIs.

98 Distributed Projection Algorithms based on the NCP. Algorithm 3: Projection Algorithm with Variable Steps (PAVS).
(S.0): Choose any λ^(0) ≥ 0; set n = 0.
(S.1): If λ^(n) satisfies a suitable termination criterion: STOP.
(S.2): Given λ^(n), compute x*(λ^(n)) as the unique NE of G_{λ^(n)}: x*(λ^(n)) = SOL(K, F + ∇g λ^(n)).
(S.3): Choose τ_n > 0 and update the price vector according to λ^(n+1) = [λ^(n) − τ_n Φ(λ^(n))]^+.
(S.4): Set n ← n + 1; go to (S.1).
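A sketch of the PAVS price update on a toy GNEP with a single shared constraint (all problem data below is hypothetical; the inner NE is available in closed form, so step (S.2) is exact):

```python
import numpy as np

# Toy GNEP (hypothetical data):
#   player i: minimize_{x_i >= 0}  0.5*x_i^2 - d_i*x_i
#   shared constraint: g(x) = x_1 + x_2 - 1 <= 0
# Here F(x) = x - d is strongly monotone with constant 1 and ||grad g|| = sqrt(2),
# so c_coc(Phi) = 1/2 and any step tau < 1 satisfies the theoretical bound.
d = np.array([1.0, 0.8])
tau = 0.4

def inner_ne(lam):
    """Unique NE of the priced game G_lambda (closed form for this toy problem)."""
    return np.maximum(d - lam, 0.0)

lam = 0.0
for n in range(200):
    x = inner_ne(lam)                 # (S.2): solve VI(K, F + grad(g)*lambda)
    Phi = 1.0 - x.sum()               # Phi(lambda) = -g(x*(lambda))
    lam = max(lam - tau * Phi, 0.0)   # (S.3): projected price update

x = inner_ne(lam)
print("lambda* =", lam, " x* =", x)   # the shared constraint is active here
```

For these numbers the iteration converges to λ* = 0.4 and x* = (0.6, 0.4), at which the shared resource is exactly saturated (x_1 + x_2 = 1 and λ* > 0, consistent with the complementarity condition).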

99 Theorem. [Scu-Facc-PanPal IT10(sub)] Suppose Υ_F ≻ 0. If the scalars τ_n are chosen so that 0 < inf_n τ_n ≤ sup_n τ_n < 2 c_coc(Φ), where c_coc(Φ) ≜ ĉ_sm(F)/c_Lip²(g), c_Lip(g) ≜ max_{x∈K} ‖∇g(x)ᵀ‖₂, and ĉ_sm(F) is the strong-monotonicity constant of F, then the sequence {λ^(n)}_{n=0}^∞ generated by the PAVS converges to a solution of the NCP(Φ). Inner loop: the NE x*(λ^(n)) of G_{λ^(n)} can be computed using the asynchronous best-response algorithm (convergence is guaranteed under Υ_F ≻ 0). Algorithms for monotone games: following the same idea as for monotone NEPs, we can make the monotone NEP with pricing strongly monotone via a proximal regularization [Scu-Facc-PanPal IT10(sub)].

100 Part III: Variational Inequality Theory: Applications Daniel P. Palomar & Gesualdo Scutari 99

101 Part III - Outline. Application of NEP: Ad-Hoc Networks. Application of GNEP: QoS Ad-Hoc Networks. Application of NEP: Robust CR Systems with Individual Constraints. Application of GNEP with Shared Constraints: Cognitive Radio Systems. Application of GNEP with Shared Constraints: Routing in Communication Networks.


103 Application of NEP: Ad-Hoc Networks Daniel P. Palomar & Gesualdo Scutari 102

104 Competitive Ad-Hoc Networks. Consider a decentralized and competitive network of users fighting for the resources (i.e., the spectrum). Baseband signal model: vector Gaussian interference channel (IC):
y_q = H_qq x_q (desired signal) + Σ_{r≠q} H_qr x_r (interference) + w_q.

105 The Frequency-Selective Case Daniel P. Palomar & Gesualdo Scutari 104

106 The channel matrices are diagonal: H_qr = diag{(H_qr(k))_{k=1}^N}. The optimization variables are the power allocations over the carriers: p_q = (p_q(k))_{k=1}^N. There is a power budget for each user: Σ_{k=1}^N p_q(k) ≤ P_q. The payoff of user q is its information rate:
r_q(p_q, p_{−q}) = Σ_{k=1}^N log(1 + sinr_q(k)), with sinr_q(k) = |H_qq(k)|² p_q(k) / (1 + Σ_{r≠q} |H_qr(k)|² p_r(k)).
The feasible set for the variables is P_q = {p_q ∈ R^N_+ : Σ_{k=1}^N p_q(k) = P_q}.

107 Game Formulation for Ad-Hoc Networks. Each of the Q users selfishly maximizes its own rate subject to its constraints:
G: maximize_{p_q} Σ_{k=1}^N log(1 + sinr_q(k)) subject to p_q ∈ P_q, q = 1, ..., Q.
Given the strategies of the others p_{−q}, the best response of each user is the waterfilling solution
p_q* = wf_q(p_{−q}) ≜ (μ_q 1 − interf_q(p_{−q}))^+, where interf_q(k; p_{−q}) ≜ (1 + Σ_{r≠q} |H_qr(k)|² p_r(k)) / |H_qq(k)|², k = 1, ..., N,
and the water level μ_q is chosen to satisfy the power budget.
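The scalar water level μ_q can be found by simple bisection, since the allocated power is monotone in μ_q. A runnable sketch (with toy interference values; the budget is met with equality, as in the set P_q):

```python
import numpy as np

# Single-user waterfilling best response: wf_q(p_{-q}) = (mu*1 - interf)^+,
# with the water level mu found by bisection so that the budget is met exactly.
def waterfilling(interf, P):
    lo, hi = interf.min(), interf.max() + P
    for _ in range(100):                      # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - interf, 0.0).sum() > P:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - interf, 0.0)

# interf_q(k) = (1 + sum_{r != q} |H_qr(k)|^2 p_r(k)) / |H_qq(k)|^2 (toy values)
interf = np.array([0.5, 1.0, 2.0, 4.0])
p = waterfilling(interf, P=2.0)
print("p =", p, " total power =", p.sum())
```

For these values the water level settles at μ = 1.75, pouring power only into the two least-interfered carriers.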

108 Any NE is a simultaneous waterfilling of all the users: p_q* = wf_q(p_{−q}*), q = 1, ..., Q, i.e., p* = wf(p*). Main issues: Does a NE exist? Is the NE unique? How can a NE be reached? Open problem for years: different researchers have actively worked on this problem since 2001 [Yu-Gin-Ciof 01], [Yam-Luo 04], [Luo-Pang 06], [Scu-Pal-Bar 06], ... VI theory provides the answers.

109 How to Attack This Problem? One option is to write the NEP as a VI via the minimum principle and try to show monotonicity properties of the function F. A more elaborate option is to write the KKT conditions of the NEP and, after some manipulations, rewrite the NEP as an affine VI (AVI) and try to show monotonicity properties. In this problem, since the best response can be written in closed form, we can instead try to show a contraction property of the best-response mapping. Interestingly, the fixed-point characterization of the solution of the AVI turns out to coincide with the alternative, simple representation of the waterfilling solution as a projection. This reduces the analysis to a norm condition on a matrix, which is equivalent to the strong monotonicity of the AVI.


111 Why is it so difficult to study this game? We need to prove that the waterfilling mapping is a contraction: ‖wf(p^(1)) − wf(p^(2))‖ ≤? c ‖p^(1) − p^(2)‖. Using the definition of the waterfilling operator:
wf(p^(1)) − wf(p^(2)) = (μ^(1) − M p^(1))^+ − (μ^(2) − M p^(2))^+,
which involves both (μ^(1) − μ^(2)) and (M p^(1) − M p^(2)), with the water levels depending implicitly on the powers. We are stuck!!!

112 Why is it so difficult to study this game? We need to prove that the waterfilling mapping is a contraction: ‖wf(p^(1)) − wf(p^(2))‖ ≤? c ‖p^(1) − p^(2)‖. Using our interpretation of the waterfilling operator as a projection:
‖wf(p^(1)) − wf(p^(2))‖ = ‖Π_K(−M p^(1)) − Π_K(−M p^(2))‖ ≤ ‖M(p^(1) − p^(2))‖ ≤ ‖M‖ ‖p^(1) − p^(2)‖,
by the nonexpansiveness of the projector. We are done!!!
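The identity behind this step — waterfilling over P_q = {p ≥ 0 : 1ᵀp = P} is the Euclidean projection of −interf_q onto that simplex — can be checked numerically. A sketch with random toy data, using the standard sort-based simplex-projection routine:

```python
import numpy as np

# Euclidean projection onto {p >= 0 : sum(p) = P} (sort-based method).
def project_simplex(y, P):
    u = np.sort(y)[::-1]
    css = np.cumsum(u) - P
    rho = np.nonzero(u - css / (np.arange(len(y)) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(y - theta, 0.0)

# Waterfilling by bisection on the water level mu.
def waterfilling(interf, P):
    lo, hi = interf.min(), interf.max() + P
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        lo, hi = (mu, hi) if np.maximum(mu - interf, 0).sum() <= P else (lo, mu)
    return np.maximum(mu - interf, 0.0)

rng = np.random.default_rng(0)
interf = rng.uniform(0.1, 3.0, size=8)
wf = waterfilling(interf, P=1.5)
proj = project_simplex(-interf, P=1.5)
print("max |wf - proj| =", np.abs(wf - proj).max())   # essentially zero
```

Both routines solve the same strictly convex problem (with θ = −μ), which is exactly the projection interpretation used in the contraction argument.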

113 Existence and Uniqueness of the NE. Theorem [Scu-Pal-Bar TSP08]: The solution set of the game is nonempty and compact. The NE is unique if ρ(H^max) < 1, where
[H^max]_{qr} ≜ max_k |H_qr(k)|² / |H_qq(k)|² if q ≠ r, and 0 otherwise.
Sufficient condition (low MUI): Σ_{r≠q} max_k |H_qr(k)|² / |H_qq(k)|² < 1, q = 1, ..., Q.


115 State-of-the-Art Algorithms: Asynchronous IWFA. Do the players reach an equilibrium if each one selfishly performs his waterfilling solution wf_q(p_{−q}) against the others? Asynchronous IWFA [Scu-Pal-Bar IT08]: users update their power allocations in a totally asynchronous way based on wf_q(·); users may update at arbitrary and different times, more or less frequently than others; users may use an outdated measure of the interference; distributed implementation: local measures of the MUI and weak constraints on synchronization. Theorem [Scu-Pal-Bar IT08]: The asynchronous IWFA converges to the unique NE if ρ(H^max) < 1.
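A runnable sketch of the sequential IWFA on a randomly generated frequency-selective IC (all channel values are synthetic; the cross-gains are scaled down so that the uniqueness condition ρ(H^max) < 1 holds):

```python
import numpy as np

def waterfilling(interf, P):
    lo, hi = interf.min(), interf.max() + P       # bisection on the water level
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        lo, hi = (mu, hi) if np.maximum(mu - interf, 0).sum() <= P else (lo, mu)
    return np.maximum(mu - interf, 0.0)

rng = np.random.default_rng(1)
Q, N, P = 3, 16, 1.0
H2 = rng.uniform(0.5, 1.5, size=(Q, Q, N))        # |H_qr(k)|^2, synthetic
for q in range(Q):
    for r in range(Q):
        if r != q:
            H2[q, r] *= 0.05                       # weak multiuser interference

# Uniqueness condition: spectral radius of H_max strictly below one.
Hmax = np.array([[0.0 if q == r else (H2[q, r] / H2[q, q]).max()
                  for r in range(Q)] for q in range(Q)])
assert np.abs(np.linalg.eigvals(Hmax)).max() < 1

p = np.zeros((Q, N))
for it in range(200):                              # sequential IWFA sweeps
    p_old = p.copy()
    for q in range(Q):
        interf = (1 + sum(H2[q, r] * p[r] for r in range(Q) if r != q)) / H2[q, q]
        p[q] = waterfilling(interf, P)
    if np.abs(p - p_old).max() < 1e-9:
        break
print("converged after", it + 1, "sweeps")
```

At convergence, every user's allocation is its own waterfilling against the others — the simultaneous-waterfilling fixed-point characterization of the NE.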

116 The MIMO Case Daniel P. Palomar & Gesualdo Scutari 115

117 Ad-Hoc Networks - The MIMO Game. Game-theoretic formulation [Scu-Pal-Bar TSP08]:
maximize_{Q_q} log det(I + H_qq^H R_{−q}(Q_{−q})^{−1} H_qq Q_q) subject to Q_q ⪰ 0, Tr(Q_q) ≤ P_q, q = 1, ..., Q,
where R_{−q}(Q_{−q}) = R_{n_q} + Σ_{r≠q} H_rq Q_r H_rq^H. Can we analyze this MIMO formulation in a similar way? The answer is affirmative when the channel matrices are square and invertible.

118 Difficulties in the MIMO Case. However, the answer is negative in the general case!! Some expected conjectures do not hold when the channel matrices are not square: the waterfilling-as-projection interpretation does not follow from the square case by simply replacing the inverse with some generalized inverse; the VI cannot be rewritten as an AVI, and the reverse-order law for generalized inverses does not hold; the multiuser waterfilling is not a contraction under the conditions valid for the square case. Full characterization of the game for arbitrary MIMO channels (not square, not invertible) [Scu-Pal-Bar TSP09]: solution analysis (existence and uniqueness of the NE); distributed algorithms (asynchronous MIMO IWFA).

119 Convergence of the IWFA. [Figure: users' rates vs. iteration index n for the sequential and the simultaneous IWFA; both converge to the equilibrium rates.]

120 Application of NEP: CR Systems with Individual Constraints Daniel P. Palomar & Gesualdo Scutari 119

121 CR Systems. Consider now an established network of primary users on top of which some secondary users play the previous game. Hierarchical CR networks: PU = primary users (legacy spectrum holders); SU = secondary users (unlicensed users). Opportunistic communications: SUs can use the spectrum subject to inducing a limited interference, or no interference at all, on the PUs.

122 Standard Waterfilling Does Not Work. The standard iterative waterfilling algorithm does not work here because it violates the interference constraint. [Figure: PSD of the interference at the primary receiver vs. carrier; the generated interference exceeds the interference limit on several carriers.]

123 Signal Model for CR Systems. Same signal model as for ad-hoc networks, with additional individual per-carrier interference constraints:
|H_qp^(P,S)(k)|² p_q(k) ≤ I_pq(k), k = 1, ..., N, p = 1, ..., P,
where |H_qp^(P,S)(k)|² is the cross-channel gain between the q-th secondary user and the p-th primary user, and I_pq(k) is the maximum level of interference tolerable by primary user p from secondary user q over subchannel k. Equivalently, we can write these constraints as mask constraints:
p_q(k) ≤ p_q^max(k) ≜ min_{p=1,...,P} I_pq(k) / |H_qp^(P,S)(k)|², k = 1, ..., N.

124 Game Formulation for CR Systems. Each of the Q users selfishly maximizes its own rate subject to its constraints:
maximize_{p_q} Σ_{k=1}^N log(1 + sinr_q(k)) subject to Σ_{k=1}^N p_q(k) ≤ P_q and 0 ≤ p_q(k) ≤ p_q^max(k), ∀k, for q = 1, ..., Q.
The best response in this case also has a nice and simple closed-form expression, based on a modified waterfilling with clipping from above: wf_q(p_{−q}) ≜ [μ_q 1 − interf_q(p_{−q})]_0^{p_q^max}. Similar analysis and algorithms as for the previous game, based on our interpretation of the waterfilling operator as a projection [Scu-Pal-Bar TSP07].
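The modified best response is again computable by bisection on the water level; the only change is the clipping at the per-carrier mask p_q^max. A sketch with toy numbers (note the masks may make the full budget unattainable, in which case every carrier is simply filled to its cap):

```python
import numpy as np

def capped_waterfilling(interf, P, pmax):
    # wf_q = [mu*1 - interf]_0^{pmax}: waterfilling clipped at the mask p_q^max
    target = min(P, pmax.sum())          # caps may make the budget unattainable
    lo, hi = interf.min(), (interf + pmax).max()
    for _ in range(100):                 # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        lo, hi = (mu, hi) if np.clip(mu - interf, 0.0, pmax).sum() <= target else (lo, mu)
    return np.clip(mu - interf, 0.0, pmax)

interf = np.array([0.5, 1.0, 2.0, 4.0])  # toy interference-plus-noise levels
pmax   = np.array([0.6, 0.6, 0.6, 0.6])  # per-carrier masks p_q^max(k)
p = capped_waterfilling(interf, P=2.0, pmax=pmax)
print("p =", p)                           # no carrier exceeds its mask
```

Compared with plain waterfilling, the caps force power to spill over to worse carriers: here the three best carriers hit the mask and the leftover budget goes to the worst one.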

125 Application of GNEP with Shared Constraints: Cognitive Radio Systems Daniel P. Palomar & Gesualdo Scutari 124

126 Signal Model for CR Systems. Same signal model as for ad-hoc networks, with additional individual per-carrier interference constraints:
|H_qp^(P,S)(k)|² p_q(k) ≤ I_pq(k), k = 1, ..., N, p = 1, ..., P,
where |H_qp^(P,S)(k)|² is the cross-channel gain between the q-th secondary user and the p-th primary user, and I_pq(k) is the maximum level of interference tolerable by primary user p from secondary user q over subchannel k. Equivalently, these constraints can be written as mask constraints:
p_q(k) ≤ p_q^max(k) ≜ min_{p=1,...,P} I_pq(k) / |H_qp^(P,S)(k)|², k = 1, ..., N.

127 The interference constraints are now satisfied. [Figure: PSD of the interference at the primary receiver vs. carrier for the classic and the conservative IWFA; the conservative IWFA stays below the interference limit.] However... this method may be too conservative, as the level of interference from each secondary user is limited individually in a conservative way.

128 Revised Signal Model for CR Systems. The really important quantity is not the individual interference generated by each secondary user but the aggregate interference generated by all of them. We can then limit the aggregate interference via: per-carrier constraints,
Σ_{q=1}^Q |H_qp^(P,S)(k)|² p_q(k) ≤ I_p(k), p = 1, ..., P, k = 1, ..., N;
or interference-temperature limits,
Σ_{k=1}^N Σ_{q=1}^Q |H_qp^(P,S)(k)|² p_q(k) ≤ I_p, p = 1, ..., P.

129 CR Game with Coupling Constraint. Proposed game-theoretical formulation [Pan-Scu-Pal-Fac TSP10]:
G: maximize_{p_q ≥ 0} Σ_{k=1}^N log(1 + sinr_q(k)) subject to Σ_{k=1}^N p_q(k) ≤ P_q, for q = 1, ..., Q,
together with the shared constraints Σ_{q=1}^Q |H_qp^(P,S)(k)|² p_q(k) ≤ I_p(k), p = 1, ..., P, k = 1, ..., N.

130 Indeed, this new reformulation achieves our goal without accidentally violating the limit. [Figure: PSD of the interference at the primary receiver vs. carrier for the classic, conservative, and flexible IWFA; the flexible IWFA keeps the aggregate interference within the interference limit.]

131 or being too conservative. [Figure: sum rate of the SUs (bit/c.u.) vs. the interference threshold I_peak at the primary receiver, comparing the classic, flexible, and conservative IWFA.]

132 How to Deal with the Coupling Constraint. The price to pay for including the coupling constraints is twofold: on the mathematical level, it complicates the analysis and the design of the game, and no previous game-theoretic results can be used; on the practical side, this new game must include some mechanism to compute the aggregate interference. For the mathematical analysis and design, we need more advanced tools: VI theory for the analysis of GNEPs with shared constraints.

133 Formulation of the Game with Coupling Constraint. Recall the game formulation with the coupling constraints [Pan-Scu-Pal-Fac TSP10]:
G: maximize_{p_q ≥ 0} r_q(p_q, p_{−q}) ≜ Σ_{k=1}^N log(1 + sinr_q(k)) subject to Σ_{k=1}^N p_q(k) ≤ P_q, for q = 1, ..., Q,
with the shared constraints Σ_{q=1}^Q |H_qp^(P,S)(k)|² p_q(k) ≤ I_p(k), p = 1, ..., P, k = 1, ..., N.
This is a GNEP with shared constraints. It can be rewritten as a VI problem (caveat: only the variational solutions, i.e., those with common multipliers, are considered).

134 Formulation of the Game with Coupling Constraint via Pricing. Consider a game with pricing:
G_λ: maximize_{p_q ≥ 0} r_q(p_q, p_{−q}) − Σ_{p=1}^P Σ_{k=1}^N λ_{p,k} |H_qp^(P,S)(k)|² p_q(k) subject to Σ_{k=1}^N p_q(k) ≤ P_q, ∀q,
where the prices λ_{p,k} ≥ 0 are chosen such that
(CC): 0 ≤ λ_{p,k} ⟂ I_p(k) − Σ_{q=1}^Q |H_qp^(P,S)(k)|² p_q(k) ≥ 0, ∀p, k.
Parametric NEP with the CC outside: classical game theory fails here. We will now rewrite the game G_λ plus (CC) as a VI problem.
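A sketch of the resulting "flexible IWFA" on synthetic data: an outer projected price update enforcing the per-carrier aggregate constraint of a single primary user, with one inner best-response sweep of the priced game per price update (a heuristic simplification — the algorithms in the slides solve the inner game to convergence; all channel values and step sizes below are made up rather than taken from the theoretical bound):

```python
import numpy as np

rng = np.random.default_rng(2)
Q, N, P, Ilim, tau = 2, 4, 1.0, 0.1, 0.05
H2 = rng.uniform(0.5, 1.5, size=(Q, Q, N))      # |H_qr(k)|^2 among SUs
H2[0, 1] *= 0.05
H2[1, 0] *= 0.05                                 # weak SU-to-SU interference
G2 = rng.uniform(0.5, 1.5, size=(Q, N))          # |H_qp(k)|^2, SU q -> PU

def priced_wf(interf, price, P):
    # KKT of the priced problem: p(k) = (1/(mu + price(k)) - interf(k))^+,
    # with mu >= 0 set by bisection so that sum(p) <= P (mu ~ 0 if slack).
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mu = np.sqrt(lo * hi)                    # bisection in log scale
        p = np.maximum(1.0 / (mu + price) - interf, 0.0)
        lo, hi = (mu, hi) if p.sum() > P else (lo, mu)
    return np.maximum(1.0 / (mu + price) - interf, 0.0)

p, lam = np.zeros((Q, N)), np.zeros(N)
for outer in range(2000):
    for q in range(Q):                           # inner best-response sweep
        interf = (1 + sum(H2[q, r] * p[r] for r in range(Q) if r != q)) / H2[q, q]
        p[q] = priced_wf(interf, lam * G2[q], P)
    agg = (G2 * p).sum(axis=0)                   # aggregate interference at the PU
    lam = np.maximum(lam + tau * (agg - Ilim), 0.0)  # projected price update

print("aggregate interference:", (G2 * p).sum(axis=0), "limit:", Ilim)
```

The prices rise exactly on the carriers where the aggregate constraint bites, reproducing the complementarity condition (CC) at convergence.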

135 Theorem (Game as a VI) [Pan-Scu-Pal-Fac TSP10]: The game G is equivalent to the VI(K, F), where
K ≜ {p ∈ R^{NQ}_+ : Σ_{k=1}^N p_q(k) ≤ P_q ∀q, and Σ_{q=1}^Q |H_qp^(P,S)(k)|² p_q(k) ≤ I_p(k) ∀p, k} and F(p) ≜ −(∇_{p_q} r_q(p))_{q=1}^Q.
The correspondence is: G ↔ (p*, λ*) and VI(K, F) ↔ p*, where p = powers of the SUs (primal variables) and λ = price vector (multipliers).


More information

Lecture 9 Monotone VIs/CPs Properties of cones and some existence results. October 6, 2008

Lecture 9 Monotone VIs/CPs Properties of cones and some existence results. October 6, 2008 Lecture 9 Monotone VIs/CPs Properties of cones and some existence results October 6, 2008 Outline Properties of cones Existence results for monotone CPs/VIs Polyhedrality of solution sets Game theory:

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY Uplink Downlink Duality Via Minimax Duality. Wei Yu, Member, IEEE (1) (2)

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY Uplink Downlink Duality Via Minimax Duality. Wei Yu, Member, IEEE (1) (2) IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY 2006 361 Uplink Downlink Duality Via Minimax Duality Wei Yu, Member, IEEE Abstract The sum capacity of a Gaussian vector broadcast channel

More information

Support Vector Machines

Support Vector Machines Support Vector Machines Support vector machines (SVMs) are one of the central concepts in all of machine learning. They are simply a combination of two ideas: linear classification via maximum (or optimal

More information

Math 273a: Optimization Subgradients of convex functions

Math 273a: Optimization Subgradients of convex functions Math 273a: Optimization Subgradients of convex functions Made by: Damek Davis Edited by Wotao Yin Department of Mathematics, UCLA Fall 2015 online discussions on piazza.com 1 / 42 Subgradients Assumptions

More information

Duality. Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities

Duality. Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities Duality Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities Lagrangian Consider the optimization problem in standard form

More information

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March

More information

A Brief Review on Convex Optimization

A Brief Review on Convex Optimization A Brief Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one convex, two nonconvex sets): A Brief Review

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Part IB Optimisation

Part IB Optimisation Part IB Optimisation Theorems Based on lectures by F. A. Fischer Notes taken by Dexter Chua Easter 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after

More information

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness. CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity

More information

CSCI : Optimization and Control of Networks. Review on Convex Optimization

CSCI : Optimization and Control of Networks. Review on Convex Optimization CSCI7000-016: Optimization and Control of Networks Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one

More information

Uniqueness of Generalized Equilibrium for Box Constrained Problems and Applications

Uniqueness of Generalized Equilibrium for Box Constrained Problems and Applications Uniqueness of Generalized Equilibrium for Box Constrained Problems and Applications Alp Simsek Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology Asuman E.

More information

Lecture 3. Optimization Problems and Iterative Algorithms

Lecture 3. Optimization Problems and Iterative Algorithms Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex

More information

Duality. Geoff Gordon & Ryan Tibshirani Optimization /

Duality. Geoff Gordon & Ryan Tibshirani Optimization / Duality Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Duality in linear programs Suppose we want to find lower bound on the optimal value in our convex problem, B min x C f(x) E.g., consider

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

Lecture 18: Optimization Programming

Lecture 18: Optimization Programming Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming

More information

Game Theoretic Approach to Power Control in Cellular CDMA

Game Theoretic Approach to Power Control in Cellular CDMA Game Theoretic Approach to Power Control in Cellular CDMA Sarma Gunturi Texas Instruments(India) Bangalore - 56 7, INDIA Email : gssarma@ticom Fernando Paganini Electrical Engineering Department University

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Optimal Linear Precoding Strategies for Wideband Non-Cooperative Systems based on Game Theory-Part I: Nash Equilibria

Optimal Linear Precoding Strategies for Wideband Non-Cooperative Systems based on Game Theory-Part I: Nash Equilibria Optimal Linear Precoding Strategies for Wideband Non-Cooperative Systems based on Game Theory-Part I: Nash Equilibria Gesualdo Scutari 1, Daniel P. Palomar 2, and Sergio Barbarossa 1 E-mail: {aldo.scutari,sergio}@infocom.uniroma1.it,

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach

More information

Lecture Notes on Support Vector Machine

Lecture Notes on Support Vector Machine Lecture Notes on Support Vector Machine Feng Li fli@sdu.edu.cn Shandong University, China 1 Hyperplane and Margin In a n-dimensional space, a hyper plane is defined by ω T x + b = 0 (1) where ω R n is

More information

Machine Learning. Support Vector Machines. Manfred Huber

Machine Learning. Support Vector Machines. Manfred Huber Machine Learning Support Vector Machines Manfred Huber 2015 1 Support Vector Machines Both logistic regression and linear discriminant analysis learn a linear discriminant function to separate the data

More information

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.1-4.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following

More information

Tutorial on Convex Optimization: Part II

Tutorial on Convex Optimization: Part II Tutorial on Convex Optimization: Part II Dr. Khaled Ardah Communications Research Laboratory TU Ilmenau Dec. 18, 2018 Outline Convex Optimization Review Lagrangian Duality Applications Optimal Power Allocation

More information

Lagrange duality. The Lagrangian. We consider an optimization program of the form

Lagrange duality. The Lagrangian. We consider an optimization program of the form Lagrange duality Another way to arrive at the KKT conditions, and one which gives us some insight on solving constrained optimization problems, is through the Lagrange dual. The dual is a maximization

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

Introduction to Machine Learning Lecture 7. Mehryar Mohri Courant Institute and Google Research

Introduction to Machine Learning Lecture 7. Mehryar Mohri Courant Institute and Google Research Introduction to Machine Learning Lecture 7 Mehryar Mohri Courant Institute and Google Research mohri@cims.nyu.edu Convex Optimization Differentiation Definition: let f : X R N R be a differentiable function,

More information

WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS?

WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS? WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS? Francisco Facchinei a,1 and Christian Kanzow b a Università di Roma La Sapienza Dipartimento di Informatica e

More information

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 XVI - 1 Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 A slightly changed ADMM for convex optimization with three separable operators Bingsheng He Department of

More information

subject to (x 2)(x 4) u,

subject to (x 2)(x 4) u, Exercises Basic definitions 5.1 A simple example. Consider the optimization problem with variable x R. minimize x 2 + 1 subject to (x 2)(x 4) 0, (a) Analysis of primal problem. Give the feasible set, the

More information

Optimization for Machine Learning

Optimization for Machine Learning Optimization for Machine Learning (Problems; Algorithms - A) SUVRIT SRA Massachusetts Institute of Technology PKU Summer School on Data Science (July 2017) Course materials http://suvrit.de/teaching.html

More information

Convex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014

Convex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014 Convex Optimization Dani Yogatama School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA February 12, 2014 Dani Yogatama (Carnegie Mellon University) Convex Optimization February 12,

More information

Lagrange Relaxation and Duality

Lagrange Relaxation and Duality Lagrange Relaxation and Duality As we have already known, constrained optimization problems are harder to solve than unconstrained problems. By relaxation we can solve a more difficult problem by a simpler

More information

Distributed Multiuser Optimization: Algorithms and Error Analysis

Distributed Multiuser Optimization: Algorithms and Error Analysis Distributed Multiuser Optimization: Algorithms and Error Analysis Jayash Koshal, Angelia Nedić and Uday V. Shanbhag Abstract We consider a class of multiuser optimization problems in which user interactions

More information

Machine Learning. Lecture 6: Support Vector Machine. Feng Li.

Machine Learning. Lecture 6: Support Vector Machine. Feng Li. Machine Learning Lecture 6: Support Vector Machine Feng Li fli@sdu.edu.cn https://funglee.github.io School of Computer Science and Technology Shandong University Fall 2018 Warm Up 2 / 80 Warm Up (Contd.)

More information

Pattern Classification, and Quadratic Problems

Pattern Classification, and Quadratic Problems Pattern Classification, and Quadratic Problems (Robert M. Freund) March 3, 24 c 24 Massachusetts Institute of Technology. 1 1 Overview Pattern Classification, Linear Classifiers, and Quadratic Optimization

More information

Solving a Class of Generalized Nash Equilibrium Problems

Solving a Class of Generalized Nash Equilibrium Problems Journal of Mathematical Research with Applications May, 2013, Vol. 33, No. 3, pp. 372 378 DOI:10.3770/j.issn:2095-2651.2013.03.013 Http://jmre.dlut.edu.cn Solving a Class of Generalized Nash Equilibrium

More information

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table

More information

Constrained Optimization

Constrained Optimization 1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange

More information

ECE Optimization for wireless networks Final. minimize f o (x) s.t. Ax = b,

ECE Optimization for wireless networks Final. minimize f o (x) s.t. Ax = b, ECE 788 - Optimization for wireless networks Final Please provide clear and complete answers. PART I: Questions - Q.. Discuss an iterative algorithm that converges to the solution of the problem minimize

More information

Linear programming II

Linear programming II Linear programming II Review: LP problem 1/33 The standard form of LP problem is (primal problem): max z = cx s.t. Ax b, x 0 The corresponding dual problem is: min b T y s.t. A T y c T, y 0 Strong Duality

More information

MIT Algebraic techniques and semidefinite optimization February 14, Lecture 3

MIT Algebraic techniques and semidefinite optimization February 14, Lecture 3 MI 6.97 Algebraic techniques and semidefinite optimization February 4, 6 Lecture 3 Lecturer: Pablo A. Parrilo Scribe: Pablo A. Parrilo In this lecture, we will discuss one of the most important applications

More information

10 Numerical methods for constrained problems

10 Numerical methods for constrained problems 10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside

More information

Uses of duality. Geoff Gordon & Ryan Tibshirani Optimization /

Uses of duality. Geoff Gordon & Ryan Tibshirani Optimization / Uses of duality Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Remember conjugate functions Given f : R n R, the function is called its conjugate f (y) = max x R n yt x f(x) Conjugates appear

More information

Variational inequality formulation of chance-constrained games

Variational inequality formulation of chance-constrained games Variational inequality formulation of chance-constrained games Joint work with Vikas Singh from IIT Delhi Université Paris Sud XI Computational Management Science Conference Bergamo, Italy May, 2017 Outline

More information

Written Examination

Written Examination Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes

More information

Nonlinear Optimization for Optimal Control

Nonlinear Optimization for Optimal Control Nonlinear Optimization for Optimal Control Pieter Abbeel UC Berkeley EECS Many slides and figures adapted from Stephen Boyd [optional] Boyd and Vandenberghe, Convex Optimization, Chapters 9 11 [optional]

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

A Solution Method for Semidefinite Variational Inequality with Coupled Constraints

A Solution Method for Semidefinite Variational Inequality with Coupled Constraints Communications in Mathematics and Applications Volume 4 (2013), Number 1, pp. 39 48 RGN Publications http://www.rgnpublications.com A Solution Method for Semidefinite Variational Inequality with Coupled

More information

Stationary Points of Bound Constrained Minimization Reformulations of Complementarity Problems1,2

Stationary Points of Bound Constrained Minimization Reformulations of Complementarity Problems1,2 JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 94, No. 2, pp. 449-467, AUGUST 1997 Stationary Points of Bound Constrained Minimization Reformulations of Complementarity Problems1,2 M. V. SOLODOV3

More information

Statistical Machine Learning from Data

Statistical Machine Learning from Data Samy Bengio Statistical Machine Learning from Data 1 Statistical Machine Learning from Data Support Vector Machines Samy Bengio IDIAP Research Institute, Martigny, Switzerland, and Ecole Polytechnique

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko

More information

Introduction to Convex Optimization

Introduction to Convex Optimization Introduction to Convex Optimization Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Optimization

More information

Duality in Linear Programs. Lecturer: Ryan Tibshirani Convex Optimization /36-725

Duality in Linear Programs. Lecturer: Ryan Tibshirani Convex Optimization /36-725 Duality in Linear Programs Lecturer: Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: proximal gradient descent Consider the problem x g(x) + h(x) with g, h convex, g differentiable, and

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Interior Point Algorithms for Constrained Convex Optimization

Interior Point Algorithms for Constrained Convex Optimization Interior Point Algorithms for Constrained Convex Optimization Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Inequality constrained minimization problems

More information

6.254 : Game Theory with Engineering Applications Lecture 8: Supermodular and Potential Games

6.254 : Game Theory with Engineering Applications Lecture 8: Supermodular and Potential Games 6.254 : Game Theory with Engineering Applications Lecture 8: Supermodular and Asu Ozdaglar MIT March 2, 2010 1 Introduction Outline Review of Supermodular Games Reading: Fudenberg and Tirole, Section 12.3.

More information

8. Geometric problems

8. Geometric problems 8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 Minimum volume ellipsoid around a set Löwner-John ellipsoid

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

Chapter 2. Optimization. Gradients, convexity, and ALS

Chapter 2. Optimization. Gradients, convexity, and ALS Chapter 2 Optimization Gradients, convexity, and ALS Contents Background Gradient descent Stochastic gradient descent Newton s method Alternating least squares KKT conditions 2 Motivation We can solve

More information

Algorithms for constrained local optimization

Algorithms for constrained local optimization Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained

More information