Lopsided Convergence: an Extension and its Quantification

Johannes O. Royset
Operations Research Department, Naval Postgraduate School, Monterey, California, USA
joroyset@nps.edu

Roger J-B Wets
Department of Mathematics, University of California, Davis, Davis, California, USA
rjbwets@ucdavis.edu

Abstract. Much of the development of lopsided convergence for bifunctions defined on product spaces was in response to motivating applications. A wider class of applications requires an extension that allows for arbitrary domains, not only product spaces. This leads to an extension of the definition and its implications, which include the convergence of solutions and optimal values of a broad class of minsup problems. In the process we relax the definition of lopsided convergence even for the classical situation of product spaces. We now capture applications in optimization under stochastic ambiguity, Generalized Nash games, and many others. We also introduce the lop-distance between bifunctions, which leads to the first quantification of lopsided convergence. This quantification facilitates the study of convergence rates of methods for solving a series of problems including minsup problems, Generalized Nash games, and various equilibrium problems.

Keywords: lopsided convergence, lop-convergence, lop-distance, Attouch-Wets distance, epi-convergence, hypo-convergence, minsup problems, Generalized Nash games

AMS Classification: 49M99, 65K10, 90C15, 90C33, 90C47

Date: April 11, 2018

1 Introduction

The notion of lopsided convergence of bifunctions (= bivariate functions defined on a product space) was introduced in [3] for extended real-valued bifunctions. The focus on extended real-valued functions was motivated by the incentive to keep the development in concordance with the elegant duality results of Rockafellar [25, Chapters 33-37] and the subsequent convergence theory for saddle functions [4, 2, 19]. But this paradigm turned out to become unmanageable when confronted with a series of applications that required dealing with bifunctions that were not of the convex-concave type. Eventually, this led to restricting the convergence theory for bifunctions to real-valued bivariate functions (only) defined on specific subsets of the product space, cf. [21, 28] and especially [20].

Lopsided convergence is emerging as a central tool in the study of linear and nonlinear complementarity problems, fixed points, variational inequalities, inclusions, noncooperative games, mathematical programs with equilibrium constraints, optimality conditions, Walras and Nash equilibrium problems, optimization under stochastic ambiguity, and robust optimization; see the recent developments in [21, 28, 29]. Already, Aubin and Ekeland [7, Chapter 6] brought to the fore the ineluctable connections between some of these applications when dealing with existence issues.

Prior studies deal exclusively with bifunctions defined on a product space, but applications in Generalized Nash games, robust optimization, stochastic optimization with decision-dependent measures, and generalized semi-infinite programming require extensions to bifunctions for which the second variable's domain depends on the first variable. For example, in a minsup problem (we prefer "minsup problem" to "minimax problem" as the inner maximization may not be attained in much of our development) this corresponds to the situation when the inner maximization has a feasible region that depends on the outer minimization variable; in a Generalized Nash game, the need arises when the set of feasible actions for any given agent depends on the actions of the other agents. We extend the definition of lopsided convergence to deal with these situations and establish an array of results addressing this wider setting. Specifically, we show that for this extended notion of lopsided convergence, optimal solutions and optimal values of approximating minsup problems tend to those of an original minsup problem. We also relax the definition of lopsided convergence for bifunctions defined on a product space and, therefore, broaden the area of application even in this classical situation. In the process, we recast, and in a couple of instances refine, the fundamental implications of epi-convergence in the present framework, i.e., for finite-valued functions defined on a subset of a metric space.

For the first time we quantify lopsided convergence by defining the lop-distance. The lop-distance between two bifunctions is given in terms of the Attouch-Wets distance [5] between the sup-projections of the bifunctions with respect to the second variable. Thus, we place the emphasis firmly on the outer minimization in a minsup problem at the expense of the inner maximization. This imbalance indeed motivated the terminology "lopsided." In the context of minsup problems, Generalized Nash games, and many other situations, this perspective is reasonable as the inner problem is certainly secondary, as illustrated below; the solution of the outer minimization is primary. For example, this leads to estimates of the rate of convergence of that outer solution as demonstrated in [29]. We note that this point of view differs from that of epi/hypo-convergence [4] and the analysis of the convex/concave case [25, Chapters 33-37]. There the focus is on finding saddle-point pairs, which implies a certain balance between the inner and outer problems and indeed symmetry in the convex/concave case. Of course, our new viewpoint remains applicable in the convex/concave case and it yields somewhat sharper results not discussed here.

The article proceeds in §2 with a couple of motivating examples. In §3, we give the new definition, provide sufficient conditions, and also discuss foundations related to epi-convergence. Consequences of lopsided convergence, with illustrations from Generalized Nash games, are established in §4, and the lop-distance is introduced in §5. Throughout, we let $(X, d_X)$ and $(Y, d_Y)$ be two metric spaces and consider finite-valued bifunctions defined on nonempty subsets of $X \times Y$.

2 Motivation

In contrast to this article's predecessors, e.g., [28], which deal with bifunctions of the form $F : C \times D \subset X \times Y \to \mathbb{R}$ with $D$ a (fixed) subset of $Y$, we consider bifunctions with arbitrary domains. That is, the set $D \subset Y$ of permissible values for the second variable might depend on $x \in X$, the first variable. This extension is essential to deal with two main application areas as illustrated next, but also a wide range of generalizations of models in [21].

2.1 Optimization under Stochastic Ambiguity

Consider the minsup problem
$$\min_{x \in C} \sup_{y \in D(x)} F(x, y),$$
where $D : C \rightrightarrows Y$ is a set-valued mapping, $C = \operatorname{dom} D = \{x : D(x) \neq \emptyset\} \subset X$, and $Y$ is the space of probability distributions defined on $\mathbb{R}^m$, with an appropriate metric. It identifies an optimization model with stochastic ambiguity where $C$ is a collection of feasible decisions, $D(x)$ an ambiguity set of probability distributions for every $x \in C$, and $F$ a bifunction that depends both on the decision and the distribution. For example, $F(x, P) = \mathbb{E}^P[\varphi(x, \xi)]$, where $\xi$ is a random vector with distribution function $P$ and the expectation is therefore taken with respect to a distribution function that is determined by the inner maximization problem. In applications, it is sometimes crucial to allow the ambiguity set to depend in a nontrivial manner on the decision $x$ to capture situations where the decision maker affects the uncertainty, as modeled here via the set $D(x)$; see [33, 29] as well as [30, 11, 13, 31] for related models. In [29], we leverage the results of the present paper to establish convergence of solutions of approximating optimization problems on $\mathbb{R}^n$ under stochastic ambiguity to those of an actual problem and also illustrate the use of the lop-distance defined in §5 for that context.
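As a concrete finite-dimensional illustration of this model (a hypothetical sketch of ours, not part of the formal development), the following Python snippet solves a tiny minsup problem by brute force: the decision ranges over a grid, the ambiguity set $D(x)$ is a decision-dependent finite family of discrete distributions on a three-point support, and $F(x, P) = \mathbb{E}^P[\varphi(x, \xi)]$. The loss $\varphi$, the support, and the construction of $D(x)$ are illustrative choices only.

```python
import numpy as np

# Hypothetical loss: phi(x, xi) = (x - xi)**2
def phi(x, xi):
    return (x - xi) ** 2

xi_vals = np.array([-1.0, 0.0, 1.0])          # support of the random variable xi

def ambiguity_set(x):
    """Decision-dependent ambiguity set D(x): a finite family of
    probability vectors on xi_vals whose spread grows with |x|."""
    base = np.array([1/3, 1/3, 1/3])
    tilt = min(abs(x), 1.0) * np.array([0.3, 0.0, -0.3])
    return [base, base + tilt, base - tilt]    # each sums to 1, entries in [0, 1]

def F(x, P):
    """F(x, P) = E_P[phi(x, xi)] for a discrete distribution P."""
    return float(np.dot(P, phi(x, xi_vals)))

def sup_projection(x):
    """f(x) = sup_{P in D(x)} F(x, P)."""
    return max(F(x, P) for P in ambiguity_set(x))

C = np.linspace(-2.0, 2.0, 401)                # grid over the feasible decisions
f_vals = [sup_projection(x) for x in C]
x_star = C[int(np.argmin(f_vals))]
print(f"approximate minsup-point: x = {x_star:.3f}, value = {min(f_vals):.3f}")
```

The brute-force enumeration stands in for whatever solution method is appropriate in a given application; the point is only to make the decision-dependent structure of $D(x)$ concrete.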

2.2 Generalized Nash Games

As we shall see, the study of Generalized Nash games naturally leads to the study of bifunctions that are defined on a (proper) subset of a product space. An equilibrium of a Generalized Nash game with a finite set $A$ of agents is a solution $\bar{x} = (\bar{x}_a,\, a \in A)$ that satisfies
$$\bar{x}_a \in \operatorname*{argmin}_{x_a \in D_a(\bar{x}_{-a})} c_a(x_a, \bar{x}_{-a}), \quad \text{for all } a \in A,$$
where $x_{-a} = (x_{a'} : a' \in A \setminus \{a\})$, $c_a$ is the cost function for agent $a$, and $D_a(\bar{x}_{-a})$ is the set of available strategies for agent $a$, which depends on the choices of strategies by the other agents. The idea of using bifunctions to characterize equilibria of such games goes back at least to [24]; see also the review [14]. One approach leverages the Nikaido-Isoda bifunction, which is given by
$$F(x, y) = \sum_{a \in A} \big[ c_a(x_a, x_{-a}) - c_a(y_a, x_{-a}) \big] \quad \text{for } x \in C,\ y \in D(x),$$
with
$$C = \big\{ x : x_a \in D_a(x_{-a}) \text{ for all } a \in A \big\} \quad \text{and} \quad D(x) = \prod_{a \in A} D_a(x_{-a}).$$
Clearly, the (effective) domain of this bifunction might be rather involved. To align with the notation elsewhere, we think of $D_a(x_{-a})$ as a subset of a metric space $X_a$ of conceivable strategies for agent $a$, and the underlying metric space for the first variable in the bifunction is $X = \prod_{a \in A} X_a$, equipped with the product metric. Thus, $C \subset X$. Certainly, $D_a : X_{-a} \rightrightarrows X_a$, where $X_{-a} = \prod_{a' \in A \setminus \{a\}} X_{a'}$, but the mapping $D$ can be restricted to $C$ as other points are irrelevant, i.e., $D : C \rightrightarrows X$. We therefore have that the other underlying metric space, the one for the second variable in the bifunction, is $Y = X$ in this case. Moreover, $c_a : X_a \times X_{-a} \to \mathbb{R}$. In our notation, highlighting the role of minsup-points, we then obtain the following characterization (see, e.g., [14]).

2.1 Proposition (characterization of equilibrium) In the notation of this subsection, $\bar{x}$ is an equilibrium if and only if it is a minsup-point of $F$ with nonpositive minsup-value, i.e.,
$$\bar{x} \in \operatorname*{argmin}_{x \in C} \sup_{y \in D(x)} F(x, y) \quad \text{and} \quad \sup_{y \in D(\bar{x})} F(\bar{x}, y) \le 0.$$

Proof. If $\bar{x}$ is a Nash equilibrium, then $c_a(\bar{x}_a, \bar{x}_{-a}) \le c_a(y_a, \bar{x}_{-a})$ for all $y_a \in D_a(\bar{x}_{-a})$, $a \in A$. Thus, $F(\bar{x}, y) \le 0$ for all $y \in D(\bar{x})$. For any $x \in C$, $x \in D(x)$ and therefore $\sup_{y \in D(x)} F(x, y) \ge 0$. In particular, $\sup_{y \in D(\bar{x})} F(\bar{x}, y) = F(\bar{x}, \bar{x}) = 0$. Consequently, $\bar{x} \in \operatorname*{argmin}_{x \in C} \sup_{y \in D(x)} F(x, y)$. For the converse, let $\bar{x}$ be a minsup-point of $F$ with $\sup_{y \in D(\bar{x})} F(\bar{x}, y) \le 0$. Then,
$$0 \ge \sup_{y \in D(\bar{x})} F(\bar{x}, y) = \sum_{a \in A} \Big[ c_a(\bar{x}_a, \bar{x}_{-a}) - \inf_{y_a \in D_a(\bar{x}_{-a})} c_a(y_a, \bar{x}_{-a}) \Big].$$
The upper bound of zero and the fact that $\bar{x}_a \in D_a(\bar{x}_{-a})$ for all $a \in A$ imply that each term in the sum must be zero and the conclusion follows.
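As a sanity check on this characterization (a hypothetical illustration of ours; the cost functions and the coupled constraint mapping are invented for this purpose), the Python sketch below evaluates $\sup_{y \in D(\bar{x})} F(\bar{x}, y)$ by brute force for a two-agent game with scalar strategies. A (numerically) zero value at the candidate point is consistent with Proposition 2.1.

```python
import numpy as np

# Hypothetical two-agent game with scalar strategies in [0, 1].
def cost(xa, x_other):
    # Both agents share the same cost form c_a(x_a, x_{-a}).
    return (xa - 1.0) ** 2 + xa * x_other

def D_a(x_other):
    # Coupled constraint: agent a may pick x_a in [0, 1 - x_{-a}/4].
    return 0.0, 1.0 - 0.25 * x_other

def best_value(x_other, grid):
    lo, hi = D_a(x_other)
    ya = grid[(grid >= lo) & (grid <= hi)]
    return float(np.min(cost(ya, x_other)))

def sup_F(x, grid):
    """sup_{y in D(x)} F(x, y) for the Nikaido-Isoda bifunction F."""
    return sum(cost(x[a], x[1 - a]) - best_value(x[1 - a], grid) for a in (0, 1))

grid = np.linspace(0.0, 1.0, 301)
x_bar = (2.0 / 3.0, 2.0 / 3.0)   # candidate equilibrium for these costs
print("sup of F at the candidate point:", round(sup_F(x_bar, grid), 6))  # ~0
```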

In §4.3, we show how stability and approximation of the solutions of Generalized Nash games can be examined through the lens of lopsided convergence. The results are widely applicable as the strategy spaces of the agents are only required to be metric spaces, and an agent's constraint set may neither be convex nor compact and could very well depend on the other agents' choices of strategies, a centerpiece of the new definition of lopsided convergence. Thus, we extend results in [18, 21], which both consider finite-dimensional strategies and constraint sets independent of other agents' choices of strategies.

3 Lopsided Convergence

As already mentioned in the Introduction, to encompass the family of applications sketched out in §2 and others, we need to extend the definition of lopsided convergence, and the resulting theory, to a larger class of bifunctions than those considered in earlier work. The domain, $\operatorname{dom} F$, of a finite-valued bifunction $F$ will no longer be restricted to a product subset of $X \times Y$ but could be any subset. The family of all such bifunctions is denoted by
$$\operatorname{bfcns}(X, Y) := \big\{ F : \operatorname{dom} F \to \mathbb{R} \ :\ \emptyset \neq \operatorname{dom} F \subset X \times Y \big\}.$$
It is on this family that we introduce the new definition of lopsided convergence.

3.1 Definition

The (first) $x$-variable of a bifunction takes a primary role in our development, leading us to the following description of the domain of a bifunction. We associate with a bifunction $F$ the set
$$C := \{ x \in X : \exists\, y \in Y \text{ such that } (x, y) \in \operatorname{dom} F \}$$
and the set-valued mapping $D : C \rightrightarrows Y$ such that
$$D(x) := \{ y \in Y : (x, y) \in \operatorname{dom} F \} \quad \text{for } x \in C.$$
Thus, $\operatorname{dom} F = \{ (x, y) \in X \times Y : x \in C,\ y \in D(x) \}$, with $\operatorname{dom} D = \{ x \in X : D(x) \neq \emptyset \} = C$. When $\operatorname{dom} F$ is a product set, this agrees with having $D$ a constant mapping; the output of that mapping is then also denoted by $D$ (instead of $D(x)$). Figure 1 illustrates the case with a product set (left portion) and the general case (right portion). Throughout, $\operatorname{dom} F$ is described in terms of such $C$ and $D$.

Figure 1: Domains of bifunctions: product set (left) and general (right).

In applications, a bifunction of interest might be defined on a large subset of $X \times Y$, possibly everywhere, but the context requires restrictions to some smaller subset, for example dictated by constraints imposed on the variables. If these constraints are that $x \in C \subset X$ and $y \in D(x)$ for some $D : C \rightrightarrows Y$, then $F$ in our notation would become the original bifunction restricted to the set $\{(x, y) \in X \times Y : x \in C,\ y \in D(x)\}$, which becomes $\operatorname{dom} F$. In other applications, a bifunction $F$ might be the only problem data given. These differences are immaterial to the following development as both are captured by considering $\operatorname{bfcns}(X, Y)$.

With $\mathbb{N} := \{1, 2, 3, \ldots\}$, collections of bifunctions are often denoted by $\{F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$. Analogously to the notation above, $\operatorname{dom} F^\nu$ is described by a set $C^\nu \subset X$ and a set-valued mapping $D^\nu : C^\nu \rightrightarrows Y$ such that $\operatorname{dom} F^\nu = \{ (x, y) \in X \times Y : x \in C^\nu,\ y \in D^\nu(x) \}$. If not specified otherwise, the index $\nu$ runs over $\mathbb{N}$, so that $x^\nu \to x$ means that the whole sequence $\{x^\nu, \nu \in \mathbb{N}\}$ converges to $x$. Let $\mathcal{N}^\#$ be all the subsets of $\mathbb{N}$ determined by subsequences, i.e., $N \in \mathcal{N}^\#$ is an infinite collection of strictly increasing natural numbers. Thus, $\{x^\nu, \nu \in N\}$ is a subsequence of $\{x^\nu, \nu \in \mathbb{N}\}$; its convergence to $x$ is noted by $x^\nu \to_N x$. The extended definition of lopsided convergence takes the following form.

3.1 Definition (lopsided convergence) Let $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ with domains described by $(C, D)$ and $(C^\nu, D^\nu)$, respectively. Then, $F^\nu$ converges lopsided, or lop-converges, to $F$, written $F^\nu \to_{\rm lop} F$, when

(a) for all $N \in \mathcal{N}^\#$, $x^\nu \in C^\nu \to_N x \in C$, and $y \in D(x)$, there exist $y^\nu \in D^\nu(x^\nu) \to_N y$ such that $\liminf_{\nu \in N} F^\nu(x^\nu, y^\nu) \ge F(x, y)$; and for all $N \in \mathcal{N}^\#$ and $x^\nu \in C^\nu \to_N x \notin C$, there exist $y^\nu \in D^\nu(x^\nu)$ such that $F^\nu(x^\nu, y^\nu) \to_N \infty$;

(b) for all $x \in C$, there exist $x^\nu \in C^\nu \to x$ such that for all $N \in \mathcal{N}^\#$ and $y^\nu \in D^\nu(x^\nu) \to_N y \in Y$, $\limsup_{\nu \in N} F^\nu(x^\nu, y^\nu) \le F(x, y)$ if $y \in D(x)$ and $F^\nu(x^\nu, y^\nu) \to_N -\infty$ otherwise.

Lop-convergence does not have a direct geometric interpretation. However, as discussed in §4, it is intimately tied to epi- and hypo-convergence, which are easily understood in terms of the convergence of epigraphs and hypographs; see §3.4. We can therefore understand, in part, lop-convergence through these geometric interpretations. A preview of the conclusions reached in §4 builds intuition at this stage: If the bifunctions $F^\nu$ and $F$ do not depend on $y$, then lop-convergence collapses to epi-convergence. Under mild assumptions, $F^\nu \to_{\rm lop} F$ implies that $\sup_{y \in D^\nu(\cdot)} F^\nu(\cdot, y)$ epi-converges to $\sup_{y \in D(\cdot)} F(\cdot, y)$. We also have that at every $x \in C$, $F^\nu(x^\nu, \cdot)$ hypo-converges to $F(x, \cdot)$ for some $x^\nu \to x$.

We note that even in the case when $\operatorname{dom} F$ and $\operatorname{dom} F^\nu$ are product sets for all $\nu$, our definition represents a relaxation of the requirements in prior definitions, as $\{y^\nu, \nu \in \mathbb{N}\}$ in condition (a) of Definition 3.1 for the case $x \notin C$ is not required to converge to $y$.

Contrast with earlier definition for product sets. Suppose that $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ have $\operatorname{dom} F = C \times D$ and $\operatorname{dom} F^\nu = C^\nu \times D^\nu$ for sets $D, D^\nu \subset Y$. In [20, 28], lop-convergence is defined to take place when (the need to check subsequences $N \in \mathcal{N}^\#$ is understood, but not explicitly stated in these references)

(a') for all $N \in \mathcal{N}^\#$, $x^\nu \in C^\nu \to_N x \in X$, and $y \in D$, there exist $y^\nu \in D^\nu \to_N y$ such that $\liminf_{\nu \in N} F^\nu(x^\nu, y^\nu) \ge F(x, y)$ if $x \in C$ and $F^\nu(x^\nu, y^\nu) \to_N \infty$ otherwise; and

(b') for all $x \in C$, there exist $x^\nu \in C^\nu \to x$ such that for all $N \in \mathcal{N}^\#$ and $y^\nu \in D^\nu \to_N y \in Y$, $\limsup_{\nu \in N} F^\nu(x^\nu, y^\nu) \le F(x, y)$ if $y \in D$ and $F^\nu(x^\nu, y^\nu) \to_N -\infty$ otherwise.

The condition (b') is exactly condition (b) of Definition 3.1 for the case of product sets. However, condition (a') is stronger than condition (a) of Definition 3.1, as illustrated by the following trivial example. Let $X = Y = \mathbb{R}$, $C^\nu = C = (0, 1]$, $D^\nu = D = [0, 1]$, and $F^\nu(x, y) = F(x, y) = 1/(x + y)$ for $(x, y)$ in their domains. It is clear that (a') fails for $y = 1$, $x = 0$, and $x^\nu = 1/\nu$, as there is no $y^\nu \to y$ such that $F^\nu(x^\nu, y^\nu) \to \infty$. However, condition (a) of Definition 3.1 holds, as one can take $y^\nu = 1/\nu$ in the case $x \notin C$; then $F^\nu(x^\nu, y^\nu) = \nu/2 \to \infty$. If $x \in C$, then one can take $y^\nu = y$ and obtain that $F^\nu(x^\nu, y^\nu) = F(x^\nu, y) \to F(x, y)$ by continuity of $F$. For condition (b) of Definition 3.1, one can take $x^\nu = x$ and only be concerned with $y \in D$ due to the closedness of $D$. Continuity of $F$ then allows us to conclude that $F^\nu \to_{\rm lop} F$. In this case, with $F^\nu = F$, convergence is indeed natural and Definition 3.1 addresses this situation.

3.2 About Sufficiency

One can come up with a wide collection of sufficient conditions for lop-convergence in terms of the way the components of a sequence of bifunctions $F^\nu$ converge to those of the limiting bifunction $F$; note that it will be necessary to broaden convergence notions for functions and mappings to take into account the fact that the domains of these bifunctions are generally not identical. The following results are only meant to illustrate the possibilities and might not be as taut as possible, and they are certainly not necessary.

The more interesting ones come from specific applications such as those laid out in [21, 12], involving bifunctions whose domains are product sets, and the more general family, when the domains are not restricted to product sets, such as the examples in §4.3 and those described in [29].

In the following, convergence of sequences in $X \times Y$ is always in the sense of the product topology, i.e., $(x^\nu, y^\nu) \in X \times Y \to (x, y) \in X \times Y$ if $\max\{d_X(x^\nu, x), d_Y(y^\nu, y)\} \to 0$. Convergence of sets is always in the sense of Painlevé-Kuratowski. Specifically, in a metric space, the outer limit of a sequence of sets $\{A^\nu, \nu \in \mathbb{N}\}$, denoted by $\operatorname{OutLim} A^\nu$, is the collection of points $x$ to which a subsequence of $\{x^\nu \in A^\nu, \nu \in \mathbb{N}\}$ converges. The inner limit, denoted by $\operatorname{InnLim} A^\nu$, is the collection of points to which a sequence of $\{x^\nu \in A^\nu, \nu \in \mathbb{N}\}$ converges. If both limits exist and are identical to $A$, we say that $A^\nu$ (set-)converges to $A$, which is denoted by $A^\nu \to A$; see [9, 26].

3.2 Theorem (sufficiency when $C = C^\nu$) For bifunctions $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ with domains described by $(C, D)$ and $(C^\nu, D^\nu)$, respectively, $F^\nu \to_{\rm lop} F$ when

(a) $C = C^\nu$, $\nu \in \mathbb{N}$, are closed;

(b) the mappings $D^\nu$ continuously converge to $D$, relative to $C$, i.e., for all $x^\nu \in C \to x \in C$, $D^\nu(x^\nu) \to D(x)$; and

(c) the bifunctions $F^\nu$ continuously converge to $F$, relative to their domains, i.e., for all $(x^\nu, y^\nu) \in \operatorname{dom} F^\nu \to (x, y) \in \operatorname{dom} F$, $F^\nu(x^\nu, y^\nu) \to F(x, y)$.

Proof. Since $C$ is closed, any $x^\nu \in C \to x$ always entails $x \in C$. Moreover, continuous convergence of the mappings $D^\nu$ to $D$, relative to $C$, implies $D^\nu(x^\nu) \to D(x)$ which, in turn, implies that for any $y \in D(x)$ one can find $y^\nu \in D^\nu(x^\nu)$ converging to $y$. From (c), the continuous convergence of the bifunctions $F^\nu$ to $F$ implies $F^\nu(x^\nu, y^\nu) \to F(x, y)$, which immediately yields condition (a) of Definition 3.1. To verify condition (b) of Definition 3.1, given any $x \in C$, by choosing the sequence $\{x^\nu = x, \nu \in \mathbb{N}\}$, $D^\nu(x) \to D(x)$ follows from continuous convergence of the mappings. Thus, whenever $y^\nu \in D^\nu(x) \to y$, $y \in D(x)$. In turn, this means that we only have to check whether $\limsup F^\nu(x, y^\nu) \le F(x, y)$ which, of course, is satisfied since $F^\nu(x, y^\nu) \to F(x, y)$ in view of assumption (c).
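The hypotheses of Theorem 3.2 are easy to check for concrete data. The following Python sketch (a hypothetical example of ours, not from the formal development) uses $C = C^\nu = [0, 1]$, $D^\nu(x) = [0, x + 1/\nu]$ converging continuously to $D(x) = [0, x]$, and $F^\nu(x, y) = xy + 1/\nu$ converging continuously to $F(x, y) = xy$; Theorem 3.2 then gives lop-convergence. As a by-product, the script shows the associated sup-projections converging, anticipating the consequences derived in §4.

```python
import numpy as np

# Hypothetical data satisfying the hypotheses of Theorem 3.2 (see the lead-in).
def sup_projection(x, eps):
    """sup over y in [0, x + eps] of x*y + eps; eps = 1/nu, and eps = 0 is the limit."""
    y = np.linspace(0.0, x + eps, 2001)
    return float(np.max(x * y + eps))

xs = np.linspace(0.0, 1.0, 11)
for nu in (10, 100, 1000):
    gap = max(abs(sup_projection(x, 1.0 / nu) - sup_projection(x, 0.0)) for x in xs)
    print(f"nu = {nu:4d}: max |f^nu(x) - f(x)| over the grid = {gap:.4f}")
# The gap shrinks as nu grows, consistent with lop-convergence of F^nu to F.
```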

Next, we deal with the situation when the domains of the bifunctions are product sets. In fact, the next statement can be viewed as a refinement of the earlier theorem. We recall that a function $f : C \to \mathbb{R}$, with $C \subset X$ and $X$ any metric space, is lower semicontinuous (lsc) when, for all $x^\nu \in C \to x \in X$, $\liminf f(x^\nu) \ge f(x)$ if $x \in C$ and $f(x^\nu) \to \infty$ otherwise.

3.3 Proposition (sufficiency under product sets) Let $\{F : C \times D \to \mathbb{R},\ F^\nu : C \times D^\nu \to \mathbb{R},\ \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ with $C$ closed and sets $D^\nu \subset Y$ set-converging to $D \subset Y$. If, in terms of some lsc bifunction $\bar{F} : X \times Y \to \mathbb{R}$, $F = \bar{F}$ on $C \times D$ and $F^\nu = \bar{F}$ on $C \times D^\nu$, then $F^\nu \to_{\rm lop} F$, provided that, for all $x \in C$, $\bar{F}(x, \cdot)$ is usc.

Proof. First, consider condition (a) of Definition 3.1, which now simplifies to finding, for every $y \in D$ and $x^\nu \in C \to x \in C$, a sequence $y^\nu \in D^\nu \to y$ such that $\liminf \bar{F}(x^\nu, y^\nu) \ge \bar{F}(x, y)$. This condition follows from the set-convergence of $D^\nu$ to $D$ and the lower semicontinuity of $\bar{F}$. Second, for condition (b) of Definition 3.1, we select $x^\nu = x \in C$ for all $\nu$. Since $D^\nu \to D$, any sequence $y^\nu \in D^\nu \to y$ implies $y \in D$. Thus, the condition simplifies to $\limsup \bar{F}(x, y^\nu) \le \bar{F}(x, y)$, which holds in view of the last assumption of the proposition.

Finally, we record a result for the case when there are approximations in the set controlling the (first) $x$-variable, which may occur, for example, in sensitivity analysis, semi-infinite optimization, stochastic programming, and problems with an infinite-dimensional space $X$, all of which may trigger approximations.

3.4 Proposition (sufficiency when $C \neq C^\nu$) For bifunctions $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ with domains described by $(C, D)$ and $(C^\nu, D^\nu)$, respectively, $F^\nu \to_{\rm lop} F$ when

(a) $C^\nu \to C$;

(b) for some continuous set-valued mapping $\bar{D} : X \rightrightarrows Y$, $D = \bar{D}$ on $C$ and $D^\nu = \bar{D}$ on $C^\nu$;

(c) for some continuous bifunction $\bar{F} : X \times Y \to \mathbb{R}$, $F = \bar{F}$ on $\operatorname{dom} F$ and $F^\nu = \bar{F}$ on $\operatorname{dom} F^\nu$.

Proof. Since $C^\nu \to C$, every $x^\nu \in C^\nu \to x$ must have $x \in C$. Thus, condition (a) of Definition 3.1 simplifies to finding, for every $x^\nu \in C^\nu \to x \in C$ and $y \in \bar{D}(x)$, a sequence $y^\nu \in \bar{D}(x^\nu) \to y$ with $\liminf \bar{F}(x^\nu, y^\nu) \ge \bar{F}(x, y)$. Since $\bar{D}$ is continuous, such a sequence exists, and the inequality then follows from the continuity of $\bar{F}$. We next turn to condition (b) of Definition 3.1. Since $\bar{D}$ is continuous, $y^\nu \in \bar{D}(x^\nu) \to y$ implies that $y \in \bar{D}(x)$ whenever $x^\nu \to x$. Thus, the condition simplifies to finding, for every $x \in C$, a sequence $x^\nu \in C^\nu \to x$ such that $\limsup \bar{F}(x^\nu, y^\nu) \le \bar{F}(x, y)$ for all $y^\nu \in \bar{D}(x^\nu) \to y \in \bar{D}(x)$. Since $C^\nu \to C$, there is certainly a sequence $x^\nu \in C^\nu \to x$ for all $x \in C$. The inequality then holds in view of the continuity of $\bar{F}$.

It is easy to find generalizations of the preceding results; for example, rather than requiring continuous convergence of the $F^\nu$ in Theorem 3.2, one could be satisfied with some semicontinuous convergence complemented with a pointwise upper semicontinuity condition. At this point, we shall not get involved in all the possibilities, as eventually one is bound to be mostly interested in conditions that apply in specific applications. However, we caution that certain natural conditions are not sufficient, as exemplified next.

Failure of lop-convergence under graphical convergence. Consider the following situation where $F^\nu = F$ for all $\nu \in \mathbb{N}$, with domains described by $C = C^\nu = \mathbb{R}$ and $D^\nu(x) = D(x) = [-1, 1]$ if $x \le 0$, and $\{0\}$ otherwise. Certainly the mappings $D^\nu$ graphically converge to $D$ since they are identical. However, when considering condition (a) of Definition 3.1 with $x^\nu > 0 \to x = 0$ and $y = 1/2 \in D(x)$, there are no $y^\nu \in D^\nu(x^\nu) \to y$ and lop-convergence fails. We note that in this case pointwise convergence $D^\nu(x) \to D(x)$ holds for all $x \in \mathbb{R}$ and, consequently, the mappings are equi-osc [26, Theorem 5.40]. We next give a more involved example where pointwise convergence again holds, but now for problems with different solutions.

Failure of lop-convergence under pointwise set-convergence. Suppose that $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(\mathbb{R}, \mathbb{R})$ with domains described by $C = C^\nu = [0, 1]$, $D(x) = D^\nu(x) = \{0\}$ for $x \in [0, 1)$, $D^\nu(1) = [0, 1 + 1/\nu]$, and $D(1) = [0, 1]$. Moreover, let $F(x, y) = F^\nu(x, y) = 0$ if $x \in [0, 1)$, and $F(1, y) = F^\nu(1, y) = -2 + y$ if $y \le 1$ and $F^\nu(1, y) = 1$ if $y > 1$. Clearly, for every $x \in [0, 1]$, $D^\nu(x) \to D(x)$. However, lop-convergence of $F^\nu$ to $F$ fails: for $x = 1$, $x^\nu = 1 - 1/\nu$, and $y = 1$, there exists no sequence $\{y^\nu, \nu \in \mathbb{N}\}$ with $y^\nu \in D^\nu(x^\nu) = \{0\}$ that converges to $y$, as required by condition (a) of Definition 3.1. Here, $\sup_{y \in D^\nu(x)} F^\nu(x, y) = 0$ if $x \in [0, 1)$ and $\sup_{y \in D^\nu(x)} F^\nu(x, y) = 1$ if $x = 1$, while $\sup_{y \in D(x)} F(x, y) = 0$ if $x \in [0, 1)$ and $\sup_{y \in D(1)} F(1, y) = -1$. Thus, the optimal value of $\min_{x \in C^\nu} \sup_{y \in D^\nu(x)} F^\nu(x, y)$, which is 0, does not converge to the optimal value of $\min_{x \in C} \sup_{y \in D(x)} F(x, y)$, which is $-1$. Since a main purpose of a notion of variational convergence of bifunctions is to ensure convergence of such optimal values, it is clear that pointwise set-convergence is not strong enough.
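A quick numerical check of this last example can be carried out as follows; the Python sketch below simply transcribes the data of the example and evaluates the two sup-projections on a grid, confirming that the approximate minsup-value is 0 while the actual one is $-1$.

```python
import numpy as np

def F_limit(x, y):
    return 0.0 if x < 1.0 else (-2.0 + y)

def F_nu(x, y):
    if x < 1.0:
        return 0.0
    return (-2.0 + y) if y <= 1.0 else 1.0

def D_limit(x):
    return np.array([0.0]) if x < 1.0 else np.linspace(0.0, 1.0, 101)

def D_nu(x, nu):
    return np.array([0.0]) if x < 1.0 else np.linspace(0.0, 1.0 + 1.0 / nu, 101)

xs = np.linspace(0.0, 1.0, 101)
nu = 100
f_nu = [max(F_nu(x, y) for y in D_nu(x, nu)) for x in xs]       # sup-projection of F^nu
f_lim = [max(F_limit(x, y) for y in D_limit(x)) for x in xs]    # sup-projection of F
print("min-sup of approximation:", min(f_nu))    # 0, attained on [0, 1)
print("min-sup of actual problem:", min(f_lim))  # -1, attained at x = 1
```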

3.3 Tightness

A slight strengthening of lop-convergence, which amounts to a relaxed compactness assumption, becomes beneficial in §4 when deriving consequences.

3.5 Definition (ancillary-tight lop-convergence) The lop-convergence of $\{F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ to $F \in \operatorname{bfcns}(X, Y)$ is ancillary-tight when for every $\varepsilon > 0$ and sequence $x^\nu \to x$ selected in condition (b) of Definition 3.1, there exist a compact set $B_\varepsilon \subset Y$ and an integer $\nu_\varepsilon$ such that
$$\sup_{y \in D^\nu(x^\nu) \cap B_\varepsilon} F^\nu(x^\nu, y) \ \ge\ \sup_{y \in D^\nu(x^\nu)} F^\nu(x^\nu, y) - \varepsilon \quad \text{for all } \nu \ge \nu_\varepsilon,$$
where $D^\nu$ describes $\operatorname{dom} F^\nu$.

As usual, we interpret the supremum over an empty subset of $\mathbb{R}$ as $-\infty$. The added requirement for ancillary-tightness is satisfied if all $D^\nu(x^\nu)$ are contained in a compact set, but many other possibilities exist. For example, in [29] ancillary-tightness is connected to tightness in the sense of probability theory when $Y$ is a space of distribution functions with a metric corresponding to convergence in distribution. Applications in optimization under stochastic ambiguity can be of this form; see §2.1. In this case, if $\{D^\nu(x^\nu), \nu \in \mathbb{N}\}$ is tight in the sense of probability theory for every sequence $x^\nu \to x$ selected in condition (b) of Definition 3.1, then lop-convergence implies ancillary-tight lop-convergence, i.e., ancillary-tightness is automatic.

If ancillary-tightness is combined with a similar condition for the outer minimization, we obtain a further strengthening of the notion.

3.6 Definition (tight lop-convergence) The ancillary-tight lop-convergence of bifunctions $\{F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ to $F \in \operatorname{bfcns}(X, Y)$ is tight when for any $\varepsilon > 0$ one can find a compact set $A_\varepsilon \subset X$ and an integer $\nu_\varepsilon$ such that
$$\inf_{x \in C^\nu \cap A_\varepsilon} \sup_{y \in D^\nu(x)} F^\nu(x, y) \ \le\ \inf_{x \in C^\nu} \sup_{y \in D^\nu(x)} F^\nu(x, y) + \varepsilon \quad \text{for all } \nu \ge \nu_\varepsilon,$$
where $(C^\nu, D^\nu)$ describes $\operatorname{dom} F^\nu$.

The infimum of an empty subset of $\mathbb{R}$ is interpreted as $\infty$. This further strengthening of the requirements would be satisfied if all $C^\nu$ are contained in a compact set, but this is certainly not a necessity.

3.4 Epi- and Hypo-Convergence: A Summary

Before we develop consequences of lop-convergence, we give some background facts about epi- and hypo-convergence for (univariate) functions; see [1, 8, 26] for comprehensive treatments. We present results for real-valued functions defined on (nonempty) subsets of $(X, d_X)$; in many ways, this is just a variant of the more traditional framework that considers extended real-valued functions, cf. [8, 26], but slight sharpenings of some results become possible. Our focus will, thus, be on
$$\operatorname{fcns}(X) = \big\{ f : C \to \mathbb{R} \ :\ \text{for some } \emptyset \neq C \subset X \big\}.$$
In this section, for functions $f, f^\nu \in \operatorname{fcns}(X)$, we denote by $C$ and $C^\nu$ their respective domains.

The epigraph of $f$, $\operatorname{epi} f \subset X \times \mathbb{R}$, consists of all points that lie on or above the graph of $f$; $f$ is lsc if $\operatorname{epi} f$ is a closed subset of $X \times \mathbb{R}$ (with respect to the product topology generated by $d_X$ and the usual metric on $\mathbb{R}$) and, provided $X$ is a linear space (statements about convexity/concavity are the only ones that require a linear space in this paper), it is convex if its epigraph is convex. The hypograph of $f$, $\operatorname{hypo} f \subset X \times \mathbb{R}$, consists of all points that lie on or below the graph of $f$; $f$ is upper semicontinuous (usc) if $\operatorname{hypo} f$ is closed and, provided $X$ is a linear space, it is concave if its hypograph is convex.

A sequence of functions $\{f^\nu, \nu \in \mathbb{N}\} \subset \operatorname{fcns}(X)$ epi-converges to a function $f \in \operatorname{fcns}(X)$, written $f^\nu \to_e f$, when the epigraphs $\operatorname{epi} f^\nu$ set-converge to $\operatorname{epi} f$; similarly, they hypo-converge, written $f^\nu \to_h f$, if the hypographs $\operatorname{hypo} f^\nu$ set-converge to $\operatorname{hypo} f$. Equivalently, epi-convergence can also be defined as follows:

3.7 Definition (epi- and hypo-convergence) For $\{f, f^\nu, \nu \in \mathbb{N}\} \subset \operatorname{fcns}(X)$ with domains $C$ and $C^\nu$, respectively, we have that $f^\nu \to_e f$ if and only if

(a) for all $N \in \mathcal{N}^\#$ and $x^\nu \in C^\nu \to_N x$, $\liminf_{\nu \in N} f^\nu(x^\nu) \ge f(x)$ if $x \in C$ and $f^\nu(x^\nu) \to_N \infty$ otherwise;

(b) for all $x \in C$, there exist $x^\nu \in C^\nu \to x$ such that $\limsup f^\nu(x^\nu) \le f(x)$.

The functions $f^\nu$ are said to epi-converge tightly to $f$ when $f^\nu \to_e f$ and, for all $\varepsilon > 0$, one can find a compact set $B_\varepsilon \subset X$ and an index $\nu_\varepsilon$ such that
$$\inf_{x \in C^\nu \cap B_\varepsilon} f^\nu(x) \le \inf_{x \in C^\nu} f^\nu(x) + \varepsilon \quad \text{for all } \nu \ge \nu_\varepsilon.$$
Moreover, $f^\nu \to_h f$ if and only if $-f^\nu \to_e -f$, and the functions hypo-converge tightly if the functions $-f^\nu$ epi-converge tightly to $-f$.

As follows immediately from the properties of set-limits, an epi-limit is always lsc and, provided that $X$ is linear, it is convex whenever the functions $f^\nu$ are convex. Moreover, a hypo-limit is always usc and, provided that $X$ is linear, it is concave whenever the functions $f^\nu$ are concave. The topology induced by epi-convergence is metrizable, a property we leverage in §5.

For $f \in \operatorname{fcns}(X)$ with domain $C$, optimal values are denoted by $\inf f := \inf\{f(x) : x \in C\}$ and $\sup f := \sup\{f(x) : x \in C\}$ and, with $\varepsilon \ge 0$, (near-)optimal solutions by
$$\varepsilon\text{-}\operatorname{argmin} f := \{x \in C : f(x) \le \inf f + \varepsilon\} \quad \text{and} \quad \varepsilon\text{-}\operatorname{argmax} f := \{x \in C : f(x) \ge \sup f - \varepsilon\}.$$
Since $C$ is nonempty because $f \in \operatorname{fcns}(X)$, $\inf f < \infty$. Moreover, $\inf f = -\infty$ implies that $\operatorname{argmin} f = \emptyset$.

Convergence of optimal solutions and optimal values is summarized in the next theorem. The result and proof are mostly the same as those of [26, Theorem 7.31] and [20, Theorems 2.5 and 2.8], which consider $X = \mathbb{R}^n$, but are stated here for completeness with some clarifications and improvements, especially regarding the role of finiteness of $\inf f$ and convergence of near-optimal solutions. We refer to [4] for early results of this kind.

3.8 Theorem (epi- and hypo-convergence: basic properties) Consider $\{f, f^\nu, \nu \in \mathbb{N}\} \subset \operatorname{fcns}(X)$. If $f^\nu \to_e f$, then the following hold:

(a) $\limsup(\inf f^\nu) \le \inf f$ and, for all $\varepsilon^\nu \searrow 0$, $\operatorname{OutLim}(\varepsilon^\nu\text{-}\operatorname{argmin} f^\nu) \subset \operatorname{argmin} f$.

(b) If $\{x^\nu \in \operatorname{argmin} f^\nu, \nu \in N\}$ converges for some $N \in \mathcal{N}^\#$, then $\lim_{\nu \in N}(\inf f^\nu) = \inf f$.

(c) $\inf f^\nu \to \inf f > -\infty$ $\iff$ $f^\nu \to_e f$ tightly.

(d) $\inf f^\nu \to \inf f$ and $\varepsilon > 0$ $\implies$ $\operatorname{InnLim}(\varepsilon\text{-}\operatorname{argmin} f^\nu) \supset \operatorname{argmin} f$.

(e) $\inf f^\nu \to \inf f$ and $X$ separable $\implies$ there exist $\varepsilon^\nu \searrow 0$ such that $\varepsilon^\nu\text{-}\operatorname{argmin} f^\nu \to \operatorname{argmin} f$. (We deduce from a counterexample in [10] that the separability assumption cannot be relaxed.)

(f) There exist $\varepsilon^\nu \searrow 0$ such that $\varepsilon^\nu\text{-}\operatorname{argmin} f^\nu \to \operatorname{argmin} f \neq \emptyset$ $\implies$ $\inf f^\nu \to \inf f > -\infty$.

If $f^\nu \to_h f$, then $\liminf_\nu(\sup f^\nu) \ge \sup f$ and (a)-(f) hold with min/inf replaced by max/sup, $> -\infty$ by $< \infty$, and tight epi-convergence by tight hypo-convergence.

Proof. Let $C$ and $C^\nu$ be the domains of $f$ and $f^\nu$, respectively. For part (a), we first suppose that $\inf f$ is finite and let $\varepsilon > 0$. There exists $x \in C$ such that $f(x) \le \inf f + \varepsilon$ and also, by condition (b) of Definition 3.7, $x^\nu \in C^\nu \to x$ such that $\limsup f^\nu(x^\nu) \le f(x)$. Thus,
$$\limsup(\inf f^\nu) \le \limsup f^\nu(x^\nu) \le f(x) \le \inf f + \varepsilon.$$
Second, suppose that $\inf f = -\infty$ and let $\delta > 0$. Then, there exists $x \in C$ such that $f(x) \le -\delta$ and also, by condition (b) of Definition 3.7, $x^\nu \in C^\nu \to x$ such that $\limsup f^\nu(x^\nu) \le f(x)$. Thus, $\limsup(\inf f^\nu) \le \limsup f^\nu(x^\nu) \le f(x) \le -\delta$. Since $\varepsilon$ and $\delta$ are arbitrary, the first result of part (a) is established. For the second result, suppose that $\bar{x} \in \operatorname{OutLim}(\varepsilon^\nu\text{-}\operatorname{argmin} f^\nu)$. Then, there exist $N \in \mathcal{N}^\#$ and $x^\nu \in \varepsilon^\nu\text{-}\operatorname{argmin} f^\nu \to_N \bar{x}$. Thus,
$$\limsup_{\nu \in N} f^\nu(x^\nu) \le \limsup_{\nu \in N}(\inf f^\nu + \varepsilon^\nu) \le \inf f < \infty.$$
In view of condition (a) of Definition 3.7, this implies that $\bar{x} \in C$ and also
$$f(\bar{x}) \le \liminf_{\nu \in N} f^\nu(x^\nu) \le \limsup_{\nu \in N} f^\nu(x^\nu) \le \inf f.$$
Hence, $\bar{x} \in \operatorname{argmin} f$ and the proof of part (a) is complete.

For part (b), let $\bar{x}$ be the limit of $\{x^\nu \in \operatorname{argmin} f^\nu, \nu \in N\}$. In view of part (a), $\bar{x} \in \operatorname{argmin} f$ and also $\limsup(\inf f^\nu) \le \inf f$. Condition (a) of Definition 3.7 implies that $\liminf_{\nu \in N}(\inf f^\nu) = \liminf_{\nu \in N} f^\nu(x^\nu) \ge f(\bar{x}) = \inf f$ and the conclusion holds.

Part (c), necessity. Suppose that $\inf f^\nu \to \inf f > -\infty$ and let $\varepsilon > 0$. Then, there exists $\nu_1 \in \mathbb{N}$ such that $\inf f \le \inf f^\nu + \varepsilon/3$ for all $\nu \ge \nu_1$ and also $\bar{x} \in C$ such that $f(\bar{x}) \le \inf f + \varepsilon/3$. In view of condition (b) of Definition 3.7, there exist $x^\nu \in C^\nu \to \bar{x}$ and $\nu_2 \ge \nu_1$ such that $f^\nu(x^\nu) \le f(\bar{x}) + \varepsilon/3$ for all $\nu \ge \nu_2$. Let $B \subset X$ be a compact set containing $\{x^\nu, \nu \in \mathbb{N}\}$. Thus, for $\nu \ge \nu_2$,
$$\inf_{x \in C^\nu \cap B} f^\nu(x) \le f^\nu(x^\nu) \le f(\bar{x}) + \varepsilon/3 \le \inf f + 2\varepsilon/3 \le \inf f^\nu + \varepsilon.$$

Part (c), sufficiency. For the sake of a contradiction, let $\inf f = -\infty$. Then, (a) implies that $\inf f^\nu \to -\infty$. Since the epi-convergence is tight, there exists a compact set $B \subset X$ such that $\inf_{x \in C^\nu \cap B} f^\nu(x) \to -\infty$ and therefore also a sequence $\{x^\nu \in C^\nu \cap B, \nu \in \mathbb{N}\}$ such that $f^\nu(x^\nu) \to -\infty$. The compactness of $B$ implies that for some $N \in \mathcal{N}^\#$ and $x \in B$, $\lim_{\nu \in N} x^\nu = x$. In view of condition (a) of Definition 3.7, $\liminf_{\nu \in N} f^\nu(x^\nu) \ge f(x) \in \mathbb{R}$ if $x \in C$ and $f^\nu(x^\nu) \to_N \infty$ otherwise. However, both cases contradict $f^\nu(x^\nu) \to -\infty$ and thus $\inf f > -\infty$.

We next show that $\liminf(\inf f^\nu) \ge \inf f$, which, together with (a), completes the proof of (c). We start by showing that, for any compact set $B \subset X$, $\{\inf_{x \in C^\nu \cap B} f^\nu(x), \nu \in \mathbb{N}\}$, except possibly for a finite number of indexes, is bounded away from $-\infty$. For the sake of a contradiction, suppose that for $N \in \mathcal{N}^\#$ we have $\inf_{x \in C^\nu \cap B} f^\nu(x) < \inf f - 1$ for all $\nu \in N$. Then, there exists $\{x^\nu \in C^\nu \cap B, \nu \in N\}$ such that $f^\nu(x^\nu) < \inf f - 1$ for all $\nu \in N$. Since $B$ is compact, there is a cluster point $\bar{x} \in B$ of $\{x^\nu, \nu \in N\}$.

By condition (a) of Definition 3.7 and the boundedness from above of $\{f^\nu(x^\nu), \nu \in N\}$, $\bar{x} \in C$. Then, the same condition gives that $\liminf_{\nu \in N} f^\nu(x^\nu) \ge f(\bar{x}) \ge \inf f$, which is a contradiction. Hence, $\{\inf_{x \in C^\nu \cap B} f^\nu(x), \nu \in \mathbb{N}\}$, except possibly for a finite number of indexes, is bounded away from $-\infty$.

Next, for $\varepsilon > 0$, let the compact set $B_\varepsilon \subset X$ and $\nu_\varepsilon$ be such that $\inf_{x \in C^\nu \cap B_\varepsilon} f^\nu(x) \le \inf f^\nu + \varepsilon$ for all $\nu \ge \nu_\varepsilon$, which holds by the tightness assumption. Since $\{\inf_{x \in C^\nu \cap B_\varepsilon} f^\nu(x), \nu \in \mathbb{N}\}$ is eventually bounded away from $-\infty$, there exist $\bar{\nu}_\varepsilon \ge \nu_\varepsilon$ and $x^\nu \in C^\nu \cap B_\varepsilon$ such that $f^\nu(x^\nu) \le \inf_{x \in C^\nu \cap B_\varepsilon} f^\nu(x) + \varepsilon$ for all $\nu \ge \bar{\nu}_\varepsilon$. Thus, for $\nu \ge \bar{\nu}_\varepsilon$,
$$f^\nu(x^\nu) \le \inf_{x \in C^\nu \cap B_\varepsilon} f^\nu(x) + \varepsilon \le \inf f^\nu + 2\varepsilon.$$
In view of (a), we conclude that $\{f^\nu(x^\nu), \nu \in \mathbb{N}\}$ is bounded from above. The compactness of $B_\varepsilon$ implies that there exist $N \in \mathcal{N}^\#$ and $\bar{x} \in B_\varepsilon$ such that $\lim_{\nu \in N} x^\nu = \bar{x}$. In view of condition (a) of Definition 3.7, $\bar{x} \in C$ because $\{f^\nu(x^\nu), \nu \in N\}$ is bounded from above. The same condition then implies that
$$\liminf_{\nu \in N}(\inf f^\nu) + 2\varepsilon \ge \liminf_{\nu \in N}\Big(\inf_{x \in C^\nu \cap B_\varepsilon} f^\nu(x)\Big) + \varepsilon \ge \liminf_{\nu \in N} f^\nu(x^\nu) \ge f(\bar{x}) \ge \inf f.$$
Since this argument holds not only for $\{f^\nu, \nu \in \mathbb{N}\}$ but also for all subsequences, $\liminf(\inf f^\nu) + 2\varepsilon \ge \inf f$. We reach the conclusion of part (c) after recognizing that $\varepsilon$ is arbitrary.

For part (d), let $\bar{x} \in \operatorname{argmin} f$. Condition (b) of Definition 3.7 implies that there exist $x^\nu \in C^\nu \to \bar{x}$ and $\nu_1 \in \mathbb{N}$ such that $f^\nu(x^\nu) \le f(\bar{x}) + \varepsilon/2$ for all $\nu \ge \nu_1$. Since $\inf f$ is finite, there is also $\nu_2 \ge \nu_1$ such that $\inf f \le \inf f^\nu + \varepsilon/2$ for all $\nu \ge \nu_2$. Thus, $f^\nu(x^\nu) \le f(\bar{x}) + \varepsilon/2 = \inf f + \varepsilon/2 \le \inf f^\nu + \varepsilon$ for all $\nu \ge \nu_2$ and we conclude that $\bar{x} \in \operatorname{InnLim}(\varepsilon\text{-}\operatorname{argmin} f^\nu)$.

Part (e). From (a), $\operatorname{OutLim}(\varepsilon^\nu\text{-}\operatorname{argmin} f^\nu) \subset \operatorname{argmin} f$ for any $\varepsilon^\nu \searrow 0$. Thus it suffices to show that $\operatorname{InnLim}(\varepsilon^\nu\text{-}\operatorname{argmin} f^\nu) \supset \operatorname{argmin} f$ for some $\varepsilon^\nu \searrow 0$. If $\operatorname{argmin} f = \emptyset$, then the inclusion is automatic. Thus, we can assume that $\operatorname{argmin} f \neq \emptyset$ and $\inf f$ is finite. Theorem 3.1 in [10] states the following result: For any collection $\{h, h^\nu, \nu \in \mathbb{N}\}$ of extended real-valued lsc functions on a separable metric space, if $h^\nu \to_e h$ and $\alpha \in \mathbb{R}$, then there exists $\alpha^\nu \to \alpha$ such that $\{x : h^\nu(x) \le \alpha^\nu\} \to \{x : h(x) \le \alpha\}$. To apply this theorem here, let $\operatorname{lsc} f^\nu$ be the lsc-regularization of $f^\nu$ on $X$, i.e., the highest lsc function on $X$ not exceeding $f^\nu$ extended to the whole of $X$ by assigning it the value $\infty$ outside its domain. Since $\operatorname{lsc} f^\nu \to_e f$, which already is lsc and can be extended in the same manner, it follows that there exists $\alpha^\nu \to \inf f$ such that
$$\{x \in X : (\operatorname{lsc} f^\nu)(x) \le \alpha^\nu\} \to \{x \in X : f(x) \le \inf f\} = \operatorname{argmin} f.$$
Set $\{\varepsilon^\nu = 1/\nu + \max[0, \alpha^\nu - \inf f^\nu], \nu \in \mathbb{N}\}$ and let $\bar{x} \in \operatorname{argmin} f$. In view of the above set-convergence, there exists a sequence $\{x^\nu, \nu \in \mathbb{N}\}$, with $(\operatorname{lsc} f^\nu)(x^\nu) \le \alpha^\nu$, converging to $\bar{x}$.

The definition of the lsc-regularization implies that there exists $\{y^\nu \in C^\nu, \nu \in \mathbb{N}\}$ such that $d_X(x^\nu, y^\nu) \le 1/\nu$ and $f^\nu(y^\nu) \le (\operatorname{lsc} f^\nu)(x^\nu) + 1/\nu$. Thus,
$$f^\nu(y^\nu) \le (\operatorname{lsc} f^\nu)(x^\nu) + 1/\nu \le \alpha^\nu + 1/\nu \le \varepsilon^\nu + \inf f^\nu$$
and therefore $y^\nu \in \varepsilon^\nu\text{-}\operatorname{argmin} f^\nu$. Consequently, $\bar{x} \in \operatorname{InnLim}(\varepsilon^\nu\text{-}\operatorname{argmin} f^\nu)$, which implies that $\operatorname{InnLim}(\varepsilon^\nu\text{-}\operatorname{argmin} f^\nu) \supset \operatorname{argmin} f$. Since $\inf f^\nu \to \inf f$, $\varepsilon^\nu \to 0$ and the conclusion holds.

For part (f), let $\bar{x} \in \operatorname{argmin} f$. The assumption permits the selection of $\bar{\nu} \in \mathbb{N}$ and a sequence $\{x^\nu \in \varepsilon^\nu\text{-}\operatorname{argmin} f^\nu, \nu \ge \bar{\nu}\} \to \bar{x}$. Thus, by condition (a) of Definition 3.7,
$$\inf f = f(\bar{x}) \le \liminf f^\nu(x^\nu) \le \liminf(\inf f^\nu + \varepsilon^\nu) = \liminf(\inf f^\nu).$$
In view of part (a), the conclusion follows.
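The workhorse facts of Theorem 3.8 are easy to observe numerically. The following Python sketch (a hypothetical example of ours, not from the formal development) uses $f^\nu(x) = (x - 1/\nu)^2 + 1/\nu$ on $C^\nu = [0, 1]$, which converges continuously, hence epi-converges tightly, to $f(x) = x^2$ on $C = [0, 1]$; the minimum values and minimizers converge as parts (c)-(e) predict.

```python
import numpy as np

# Hypothetical tightly epi-converging sequence on the compact set C = [0, 1].
C = np.linspace(0.0, 1.0, 10001)

def f_nu(x, nu):
    return (x - 1.0 / nu) ** 2 + 1.0 / nu

f = C ** 2
for nu in (10, 100, 1000):
    vals = f_nu(C, nu)
    i = int(np.argmin(vals))
    print(f"nu = {nu:4d}: inf f^nu = {vals[i]:.4f} at x = {C[i]:.4f}")
print(f"limit     : inf f    = {f.min():.4f} at x = {C[int(np.argmin(f))]:.4f}")
# inf f^nu -> inf f and the minimizers converge, consistent with Theorem 3.8(c)-(e).
```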

4 Consequences of Lopsided Convergence

The main consequence of lop-convergence of bifunctions $F^\nu$ to $F$ is the convergence of solutions of the approximating minsup problems $\min_{x \in C^\nu} \sup_{y \in D^\nu(x)} F^\nu(x, y)$ to those of an actual minsup problem $\min_{x \in C} \sup_{y \in D(x)} F(x, y)$, where $(C, D)$ and $(C^\nu, D^\nu)$ describe $\operatorname{dom} F$ and $\operatorname{dom} F^\nu$, respectively. We give detailed results below, but start with fundamental properties associated with lopsided convergence.

4.1 Basic Properties

For degenerate bifunctions that only depend on their first argument, lopsided convergence is equivalent to epi-convergence, a direct consequence of the definitions, as stated next.

4.1 Proposition (lop-convergence subsumes epi-convergence) Suppose that the bifunctions $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ have $\operatorname{dom} F = C \times Y$, $\operatorname{dom} F^\nu = C^\nu \times Y$, $F(x, y) = F(x, y')$ for all $x \in C$ and $y, y' \in Y$, and $F^\nu(x, y) = F^\nu(x, y')$ for all $x \in C^\nu$ and $y, y' \in Y$. Then, for any $y \in Y$,
$$F^\nu \text{ lop-converges to } F \iff F^\nu(\cdot, y) : C^\nu \to \mathbb{R} \text{ epi-converges to } F(\cdot, y) : C \to \mathbb{R}.$$

Lop-convergence implies hypo-convergence for certain functions, as seen next.

4.2 Proposition (hypo-convergence of slices) For $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ with domains described by $(C, D)$ and $(C^\nu, D^\nu)$, respectively, we have that $F^\nu \to_{\rm lop} F$ implies that for all $x \in C$, there exists $x^\nu \in C^\nu \to x$ such that the functions $F^\nu(x^\nu, \cdot) : D^\nu(x^\nu) \to \mathbb{R}$ hypo-converge to $F(x, \cdot) : D(x) \to \mathbb{R}$.

Proof. From condition (b) of Definition 3.1 there exists $x^\nu \in C^\nu \to x$ such that the functions $\{-F^\nu(x^\nu, \cdot) : D^\nu(x^\nu) \to \mathbb{R}, \nu \in \mathbb{N}\}$ and $-F(x, \cdot) : D(x) \to \mathbb{R}$ satisfy condition (a) of Definition 3.7. From condition (a) of Definition 3.1, for any $y \in D(x)$ one can find $y^\nu \in D^\nu(x^\nu) \to y$ such that condition (b) of Definition 3.7 holds for $\{-F^\nu(x^\nu, \cdot) : D^\nu(x^\nu) \to \mathbb{R}, \nu \in \mathbb{N}\}$ and $-F(x, \cdot) : D(x) \to \mathbb{R}$. Thus, the former functions epi-converge to the latter, i.e., $F^\nu(x^\nu, \cdot)$ hypo-converges to $F(x, \cdot)$.

The sup-projection (in $y$) of $F \in \operatorname{bfcns}(X, Y)$ with domain described by $(C, D)$ exists if $\sup_{y \in D(x)} F(x, y) < \infty$ for some $x \in C$. (In parametric optimization and elsewhere, sup-projections of bifunctions are sometimes called optimal value functions.) When it exists, the sup-projection of $F$ is the real-valued function $f$ given by $f(x) = \sup_{y \in D(x)} F(x, y)$ whenever $x \in C$ and $\sup_{y \in D(x)} F(x, y) < \infty$. This means that the domain of the sup-projection is a subset of $C$, which could be strict. Bifunctions without sup-projections are somewhat pathological and in the case of minsup problems correspond to infeasibility. Since $D(x) \neq \emptyset$ for $x \in C$, $\sup_{y \in D(x)} F(x, y) > -\infty$ and, thus, sup-projections are indeed real-valued. We denote by $f$ and $f^\nu$ the sup-projections of $F$ and $F^\nu$, respectively. It is clear that the sup-projection of $F^\nu$ with domain defined by $(C^\nu, D^\nu)$ might not exist, i.e., $\sup_{y \in D^\nu(x)} F^\nu(x, y) = \infty$ for all $x \in C^\nu$, even if that of $F$ does and $F^\nu \to_{\rm lop} F$, as the following example shows.

Absence of sup-projection. Consider the bifunctions $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(\mathbb{R}, \mathbb{R})$ with domains described by $C = C^\nu = [0, 1]$, $D = D^\nu = \{y \in \mathbb{R} : y \ge 0\}$, and values $F(x, y) = yx$ and $F^\nu(x, y) = y(x + 1/\nu)$ when defined. Clearly, the sup-projection of $F$ has domain $\{0\}$. However, there is no $\nu$ for which $F^\nu$ has a sup-projection. This situation takes place even though one can show that $F^\nu \to_{\rm lop} F$.

A consequence of lopsided convergence for the epigraphs of sup-projections is given next.

4.3 Theorem (containment of sup-projections) For bifunctions $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$, with corresponding sup-projections $\{f, f^\nu, \nu \in \mathbb{N}\}$, $F^\nu \to_{\rm lop} F$ implies that
$$\operatorname{OutLim}(\operatorname{epi} f^\nu) \subset \operatorname{epi} f.$$

Proof. Let $(C, D)$ and $(C^\nu, D^\nu)$ describe the domains of $F$ and $F^\nu$, respectively. Suppose that $(x, \alpha) \in \operatorname{OutLim}(\operatorname{epi} f^\nu)$. Then there exist $N \in \mathcal{N}^\#$ and $\{(x^\nu, \alpha^\nu), \nu \in N\}$, with $x^\nu \in C^\nu$, $\sup_{y \in D^\nu(x^\nu)} F^\nu(x^\nu, y) \le \alpha^\nu$, $x^\nu \to_N x$, and $\alpha^\nu \to_N \alpha$. If $x \notin C$, then we can construct, by condition (a) of Definition 3.1, a sequence $y^\nu \in D^\nu(x^\nu)$ such that $F^\nu(x^\nu, y^\nu) \to_N \infty$. However,
$$\alpha^\nu \ge \sup_{y \in D^\nu(x^\nu)} F^\nu(x^\nu, y) \ge F^\nu(x^\nu, y^\nu), \quad \nu \in N,$$
implies a contradiction since $\alpha^\nu \to_N \alpha \in \mathbb{R}$. Thus, $x \in C$.

If $\sup_{y \in D(x)} F(x, y) = \infty$, then there exists $y \in D(x)$ such that $F(x, y) \ge \alpha + 1$. Condition (a) of Definition 3.1 ensures that there exists a sequence $y^\nu \in D^\nu(x^\nu) \to_N y$ such that $\liminf_{\nu \in N} F^\nu(x^\nu, y^\nu) \ge F(x, y)$. Consequently,
$$\alpha = \liminf_{\nu \in N} \alpha^\nu \ge \liminf_{\nu \in N} \sup_{y \in D^\nu(x^\nu)} F^\nu(x^\nu, y) \ge \liminf_{\nu \in N} F^\nu(x^\nu, y^\nu) \ge F(x, y) \ge \alpha + 1,$$
which is a contradiction. Hence, it suffices to consider the case with $\sup_{y \in D(x)} F(x, y)$ finite. Given any $\varepsilon > 0$ arbitrarily small, pick $y_\varepsilon \in D(x)$ such that $F(x, y_\varepsilon) \ge \sup_{y \in D(x)} F(x, y) - \varepsilon$. Then condition (a) of Definition 3.1 again yields $y^\nu \in D^\nu(x^\nu) \to_N y_\varepsilon$ such that
$$\liminf_{\nu \in N} \sup_{y \in D^\nu(x^\nu)} F^\nu(x^\nu, y) \ge \liminf_{\nu \in N} F^\nu(x^\nu, y^\nu) \ge F(x, y_\varepsilon) \ge \sup_{y \in D(x)} F(x, y) - \varepsilon,$$
implying $\liminf_{\nu \in N} \sup_{y \in D^\nu(x^\nu)} F^\nu(x^\nu, y) \ge \sup_{y \in D(x)} F(x, y)$. Since
$$\alpha = \liminf_{\nu \in N} \alpha^\nu \ge \liminf_{\nu \in N} \sup_{y \in D^\nu(x^\nu)} F^\nu(x^\nu, y) \ge \sup_{y \in D(x)} F(x, y),$$
the conclusion follows.

It is clear from the following example that lop-convergence of bifunctions does not guarantee epi-convergence of the corresponding sup-projections.

Absence of tightness. Consider $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(\mathbb{R}, \mathbb{R})$ with domains $\mathbb{R} \times \mathbb{R}$ and values $F(x, y) = 0$ for $(x, y) \in \mathbb{R} \times \mathbb{R}$, and $F^\nu(x, y) = 1$ if $x \in \mathbb{R}$ and $y = \nu$, and zero otherwise. It is easy to show that $F^\nu \to_{\rm lop} F$. The sup-projection of $F^\nu$ has value 1 for all $x \in \mathbb{R}$ and that of $F$ has value 0 for all $x \in \mathbb{R}$. Thus, the inclusion in Theorem 4.3 is strict. We observe that ancillary-tightness fails in this instance, and that property is indeed key to eliminating such possibilities. When $Y$ is a space of distribution functions, these questions translate into those about tightness in the sense of probability theory; see the remark after Definition 3.5 and [29].

4.4 Theorem (epi-convergence of sup-projections) Suppose the bifunctions $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ have corresponding sup-projections $\{f, f^\nu, \nu \in \mathbb{N}\}$. If $F^\nu$ lop-converges ancillary-tightly to $F$, then $\operatorname{epi} f^\nu \to \operatorname{epi} f$, also written $f^\nu \to_e f$.

Proof. Let $(C, D)$ and $(C^\nu, D^\nu)$ describe the domains of $F$ and $F^\nu$, respectively, and let $x \in \operatorname{dom} f$. Now, choose $x^\nu \in C^\nu \to x$ such that $F^\nu(x^\nu, \cdot) : D^\nu(x^\nu) \to \mathbb{R}$ hypo-converge to $F(x, \cdot) : D(x) \to \mathbb{R}$, cf. Proposition 4.2. In fact, these functions hypo-converge tightly as an immediate consequence of ancillary-tightness. Thus,
$$\sup_{y \in D^\nu(x^\nu)} F^\nu(x^\nu, y) \to \sup_{y \in D(x)} F(x, y)$$
via Theorem 3.8. This fact together with Theorem 4.3 establishes the conclusion.

It is clear that ancillary-tight lop-convergence is a stronger condition than epi-convergence of the corresponding sup-projections, i.e., the converse of Theorem 4.4 fails.

For example, the bifunctions $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(\mathbb{R}, \mathbb{R})$ with domains described by $C = C^\nu = [0, 1]$ and $D = D^\nu = [0, 1]$, and values $F(x, y) = 1$, $F^\nu(x, y) = y$ for all $(x, y) \in C \times D$, have sup-projections that are identical and, certainly, $f^\nu \to_e f$. However, condition (a) of Definition 3.1 fails and therefore $F^\nu$ does not lop-converge to $F$.

We end this subsection by listing consequences for semicontinuity, concavity, and convexity.

4.5 Proposition (usc and concavity) For bifunctions $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ with domains described by $(C, D)$ and $(C^\nu, D^\nu)$, respectively, and any $x \in C$, the lop-convergence $F^\nu \to_{\rm lop} F$ implies that the (univariate) function $F(x, \cdot) : D(x) \to \mathbb{R}$ is usc. When $Y$ is a linear space and, for all $x^\nu \in C^\nu \to x$, $F^\nu(x^\nu, \cdot) : D^\nu(x^\nu) \to \mathbb{R}$ is concave, it also implies that $F(x, \cdot)$ is concave.

Proof. In view of Proposition 4.2, for every $x \in C$, there exists $x^\nu \in C^\nu \to x$ such that $F^\nu(x^\nu, \cdot)$ hypo-converges to $F(x, \cdot)$, which therefore must be usc. If the $F^\nu(x^\nu, \cdot)$ are concave, their hypo-limit must also be concave, which establishes the conclusions.

4.6 Proposition (lsc and convexity) Suppose the bifunctions $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ have domains described by $(C, D)$ and $(C^\nu, D^\nu)$, respectively, and corresponding sup-projections $\{f, f^\nu, \nu \in \mathbb{N}\}$. If $F^\nu$ lop-converges ancillary-tightly to $F$, then $f$ is lsc, and also convex provided that $X$ is a linear space, $D^\nu$ is constant on $C^\nu$, and $F^\nu(\cdot, y)$ is convex for all $y \in D^\nu$.

Proof. In view of Theorem 4.4, $f$ is an epi-limit and thus lsc. Convexity is guaranteed if $f^\nu = \sup_{y \in D^\nu(\cdot)} F^\nu(\cdot, y)$, $\nu \in \mathbb{N}$, are convex, which holds by the stated assumptions.

4.2 Consequences for Minsup Problems

We have now reached the presentation of the main results of the paper. The minsup-value of a bifunction $F \in \operatorname{bfcns}(X, Y)$ is defined as
$$\operatorname{infsup} F := \inf_{x \in C} \sup_{y \in D(x)} F(x, y),$$
which clearly is the optimal value of the minsup problem $\min_{x \in C} \sup_{y \in D(x)} F(x, y)$. The corresponding $\varepsilon$-optimal solutions, referred to as $\varepsilon$-minsup-points of $F$, are given by
$$\varepsilon\text{-}\operatorname{argminsup} F := \big\{ x \in C : \sup_{y \in D(x)} F(x, y) \le \operatorname{infsup} F + \varepsilon \big\}, \quad \text{for } \varepsilon \ge 0.$$
If $\varepsilon = 0$, we simply refer to such points as minsup-points.

4.7 Theorem (bounds on minsup-value) Suppose that $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ have sup-projections, $F^\nu \to_{\rm lop} F$, and $\{x^\nu \in \operatorname{argminsup} F^\nu, \nu \in \mathbb{N}\}$ exist. Then, the following hold:

(a) If $\{x^\nu, \nu \in N\}$ converges for some $N \in \mathcal{N}^\#$, then $\liminf_{\nu \in N}(\operatorname{infsup} F^\nu) \ge \operatorname{infsup} F$.

(b) If the lop-convergence is ancillary-tight, then $\limsup(\operatorname{infsup} F^\nu) \le \operatorname{infsup} F$ and, for all $\varepsilon^\nu \searrow 0$, $\operatorname{OutLim}(\varepsilon^\nu\text{-}\operatorname{argminsup} F^\nu) \subset \operatorname{argminsup} F$.

(c) If the lop-convergence is ancillary-tight and $\{x^\nu, \nu \in N\}$ converges for some $N \in \mathcal{N}^\#$, then $\lim_{\nu \in N}(\operatorname{infsup} F^\nu) = \operatorname{infsup} F$.

Proof. Let $f$ and $f^\nu$ be the sup-projections of $F$ and $F^\nu$, respectively. We first consider (a). Let $N \in \mathcal{N}^\#$ and $\bar{x}$ be such that $x^\nu \to_N \bar{x}$. Theorem 4.3 applies, so that $\operatorname{OutLim}(\operatorname{epi} f^\nu) \subset \operatorname{epi} f$, which in turn ensures that condition (a) of Definition 3.7 holds. Thus, if $\bar{x} \in \operatorname{dom} f$, then $\liminf_{\nu \in N}(\operatorname{infsup} F^\nu) = \liminf_{\nu \in N} f^\nu(x^\nu) \ge f(\bar{x}) \ge \operatorname{infsup} F$. If $\bar{x} \notin \operatorname{dom} f$, then $\liminf_{\nu \in N}(\operatorname{infsup} F^\nu) = \liminf_{\nu \in N} f^\nu(x^\nu) = \infty$ and the conclusion holds. Second, consider (b) and (c): Theorem 4.4 implies that $f^\nu \to_e f$ and thus the conclusion is a direct application of Theorem 3.8.

Tight lop-convergence implies the following strengthening of the result for minsup-points and values.

4.8 Theorem (approximating minsup-points) Suppose that $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ have sup-projections and $F^\nu \to_{\rm lop} F$ tightly. Then,

(a) $\operatorname{infsup} F^\nu \to \operatorname{infsup} F$, which is finite;

(b) for $\varepsilon > 0$, $\operatorname{InnLim}(\varepsilon\text{-}\operatorname{argminsup} F^\nu) \supset \operatorname{argminsup} F$;

(c) for $X$ separable, there exist $\varepsilon^\nu \searrow 0$ such that $\varepsilon^\nu\text{-}\operatorname{argminsup} F^\nu \to \operatorname{argminsup} F$.

Proof. The assumptions imply those of Theorem 4.4 and thus the sup-projection of $F^\nu$ epi-converges to that of $F$. In view of Definition 3.6, the epi-convergence is tight; see Definition 3.7. The conclusion is then a direct application of Theorem 3.8.

It is well known that the infimum of an lsc function with a domain contained in a compact set is attained. Consequently, if the sup-projection $f$ of some bifunction $F \in \operatorname{bfcns}(X, Y)$ is lsc, which holds under mild assumptions (cf. Proposition 5.1), and $\operatorname{dom} f$ is contained in some compact set, then there exists a minimizer of $f$ and thus also a minsup-point of $F$. We next state a result that relaxes the compactness requirement.

4.9 Theorem (existence of minsup-points) Suppose that $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, Y)$ have sup-projections, those of $F^\nu$, denoted by $f^\nu$, are lsc, $F^\nu \to_{\rm lop} F$ ancillary-tightly, and there are compact sets $\{B^\nu \subset X, \nu \in \mathbb{N}\}$ such that $\operatorname{dom} f^\nu \subset B^\nu$. Then, $\{x^\nu \in \operatorname{argminsup} F^\nu, \nu \in \mathbb{N}\}$ exist and every cluster point of $\{x^\nu, \nu \in \mathbb{N}\}$ is a minsup-point of $F$.

Proof. The discussion prior to the theorem ensures the existence of minsup-points of $F^\nu$ for every $\nu$. The result is then a consequence of Theorem 4.7.

Theorem 4.9 does not ensure the existence of a cluster point of $\{x^\nu, \nu \in \mathbb{N}\}$. Still, it provides an approach for establishing the existence of a minsup-point of $F$: first construct a sequence $\{F^\nu, \nu \in \mathbb{N}\}$, with the required properties, that lop-converges ancillary-tightly to $F$ and, second, prove that $\{x^\nu, \nu \in \mathbb{N}\}$ has a cluster point.
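To see Theorem 4.8 at work on a toy instance (a hypothetical example of ours, not from the formal development), take $X = Y = \mathbb{R}$, $C = C^\nu = [0, 1]$, $F^\nu = F$ with $F(x, y) = (x - y)^2$, and $D^\nu(x) = [0, x + 1/\nu]$ converging continuously to $D(x) = [0, x]$. Theorem 3.2 then gives lop-convergence, and the compactness of the sets involved gives tightness, so the approximate minsup-values and minsup-points should converge to those of the actual problem; the brute-force Python check below is consistent with that.

```python
import numpy as np

# Hypothetical approximating minsup problems: D^nu(x) = [0, x + 1/nu], F(x, y) = (x - y)**2.
def minsup(eps, grid=np.linspace(0.0, 1.0, 1001)):
    best_x, best_val = None, np.inf
    for x in grid:
        ys = np.linspace(0.0, x + eps, 501)
        val = np.max((x - ys) ** 2)          # sup over y in D^nu(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

for nu in (10, 100, 1000):
    x_nu, v_nu = minsup(1.0 / nu)
    print(f"nu = {nu:4d}: minsup-value = {v_nu:.6f}, minsup-point = {x_nu:.3f}")
x0, v0 = minsup(0.0)
print(f"actual    : minsup-value = {v0:.6f}, minsup-point = {x0:.3f}")
```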

4.3 Application to Generalized Nash Games

From §2.2, we know that the solutions of a Generalized Nash game are fully characterized by the associated Nikaido-Isoda bifunction; see Proposition 2.1. Here, we illustrate the application of lopsided convergence in this context and in the process obtain stability results for Generalized Nash games in metric spaces with agent-constraints that depend on the other agents' choices of strategies. We consider the actual game
$$\bar{x}_a \in \operatorname*{argmin}_{x_a \in D_a(\bar{x}_{-a})} c_a(x_a, \bar{x}_{-a}), \quad \text{for all } a \in A,$$
where the notation follows that of §2.2, and the approximating game
$$\bar{x}_a \in \operatorname*{argmin}_{x_a \in D_a^\nu(\bar{x}_{-a})} c_a^\nu(x_a, \bar{x}_{-a}), \quad \text{for all } a \in A,$$
where $c_a^\nu : X_a \times X_{-a} \to \mathbb{R}$ is the approximating cost function and $D_a^\nu : X_{-a} \rightrightarrows X_a$ is the approximating set-valued mapping that defines the constraints for agent $a \in A$. The Nikaido-Isoda bifunction associated with the actual game is defined in §2.2. Similarly, the one for the approximating game is
$$F^\nu(x, y) = \sum_{a \in A} \big[ c_a^\nu(x_a, x_{-a}) - c_a^\nu(y_a, x_{-a}) \big] \quad \text{for } x \in C^\nu \subset X,\ y \in D^\nu(x) \subset X,$$
with
$$C^\nu = \big\{ x \in X : x_a \in D_a^\nu(x_{-a}) \text{ for all } a \in A \big\} \quad \text{and} \quad D^\nu(x) = \prod_{a \in A} D_a^\nu(x_{-a}).$$
The set-valued mapping $D^\nu : C^\nu \rightrightarrows X$. We assume throughout this subsection that there exist $\bar{x}, \bar{x}^\nu \in X$ such that $\bar{x}_a \in D_a(\bar{x}_{-a})$ and $\bar{x}_a^\nu \in D_a^\nu(\bar{x}_{-a}^\nu)$ for all $a \in A$ and $\nu \in \mathbb{N}$, i.e., there are feasible solutions. Thus, $F, F^\nu \in \operatorname{bfcns}(X, X)$.

The approximating Nikaido-Isoda bifunction lop-converges to the actual bifunction under natural assumptions, as seen next. This result and the associated propositions extend in many ways Corollary 7.5 of [21], which assumes convex and compact agent-constraint sets that are independent of other agents' strategies as well as other assumptions, and also [18], which also adopts the independence assumption and, in addition, constraint sets that are not approximated.

4.10 Theorem (lop-convergence of Nikaido-Isoda bifunctions) For Nikaido-Isoda bifunctions $\{F, F^\nu, \nu \in \mathbb{N}\} \subset \operatorname{bfcns}(X, X)$ defined in terms of cost functions $\{c_a, c_a^\nu,\, a \in A,\, \nu \in \mathbb{N}\}$ and constraint mappings $\{D_a, D_a^\nu,\, a \in A,\, \nu \in \mathbb{N}\}$, we have $F^\nu \to_{\rm lop} F$ provided that

(a) the mappings $D_a^\nu$ continuously converge to $D_a$, i.e., for all $x^\nu \in X \to x \in X$, $D_a^\nu(x_{-a}^\nu) \to D_a(x_{-a})$ for all $a \in A$;

(b) the cost functions $c_a^\nu$ continuously converge to $c_a$, i.e., for all $x^\nu \in X \to x \in X$, $c_a^\nu(x_a^\nu, x_{-a}^\nu) \to c_a(x_a, x_{-a})$ for all $a \in A$; and