Semilinear Elliptic PDE 2
Prof. Dr. Thomas Østergaard Sørensen, summer term 2016
Marcel Schaub, July 8, 2016

Contents

0  Introduction
1  Notation, repetition, generalisations
   1.1  Repetition PDE 2 and Semilinear Elliptic PDE's 1
   1.2  The Mountain Pass Theorem
   1.3  Generalisation of the Direct Method to vector-valued functions
   1.4  Null-Lagrangians
   1.5  Brouwer's Fixed Point Theorem
2  More on Minimax Methods
   2.1  Another Application of the Mountain Pass Theorem
   2.2  The Saddle Point Theorem
   2.3  Application to non-resonant problems
   2.4  Outlook
3  Constrained Minimization
   3.1  Proof 1 of 3.1: Minimization on spheres
   3.2  Proof 2 of 3.1: Minimization on the Nehari manifold
4  Fixed Point Methods
   4.1  Recall PDE 2
   4.2  Proof 1 of Theorem 4.11, via 4.6
   4.3  Proof 2 of Theorem 4.11, via 4.7
5  Method of lower and upper solutions
6  Outlook
   6.1  Boundary conditions
   6.2  Conditions on the right-hand side
   6.3  The differential operator
   6.4  p-Laplacian
   6.5  Choquard/Pekar equations
   6.6  Methods for classical solutions

0 Introduction

We continue to prove existence of weak solutions to semilinear elliptic PDE. The typical example is of course $-\Delta u = g(x,u)$, where $u\colon\Omega\to\mathbb{R}$ and $\Omega\subseteq\mathbb{R}^N$ is open. The plan is as follows:

(1) Continue the study of minimax methods ("Saddle Point Theorem").
(2) Study constrained minimization.
(3) Minimization techniques when compactness (of the Sobolev embedding) is lost/not available.

Parts (1)-(3) are all variational, that is, $u$ is a weak solution iff $J'(u)=0$.

(4) Non-variational methods: fixed point theory, method of lower and upper solutions.

1 Notation, repetition, generalisations

1.1 Repetition PDE 2 and Semilinear Elliptic PDE's 1

Let $\Omega\subseteq\mathbb{R}^N$ be open and, for now, bounded.¹ For simplicity $N\geq 3$ ($N=2$ is also possible!), $u\colon\Omega\to\mathbb{R}$, $Du=\nabla u\colon\Omega\to\mathbb{R}^N$. The interest is to study semilinear elliptic boundary value problems (BVP)

$$-\Delta u = g(x,u,\nabla u) \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega, \tag{1.1}$$

where "$u=0$ on $\partial\Omega$" is referred to as the Dirichlet boundary condition. More generally, one could replace $-\Delta u$ by any uniformly elliptic partial differential operator of second order,

$$Lu = -\sum_{i,j=1}^N \partial_{x_i}\big(a_{ij}(x)\,\partial_{x_j}u\big) + \sum_{i=1}^N b_i(x)\,\partial_{x_i}u + c(x)u.$$

¹ Unlike in the previous course, we will consider unbounded sets in the future.

$u$ is a classical solution to (1.1) iff $u\in C^2(\Omega)\cap C^0(\overline\Omega)$ and $u$ solves (1.1) pointwise. We call $u$ a weak solution iff $u\in H^1_0(\Omega)$ and

$$\int_\Omega\big[\nabla u\cdot\nabla v - g(x,u)v\big]\,dx = 0 \qquad \forall\,v\in H^1_0(\Omega). \tag{1.2}$$

The connection between the two is that $u$ being a weak solution, plus regularity theory, (mostly) leads to classical solutions. We study the existence of weak solutions by the variational approach ($b_i\equiv 0$, $c(x)=q(x)$, $a_{ij}=\delta_{ij}$). The idea is that, defining $J\colon H^1_0(\Omega)\to\mathbb{R}$,

$$J(u) = \int_\Omega\Big[\tfrac12|\nabla u|^2 - G(x,u)\Big]\,dx, \qquad G(x,t) = \int_0^t g(x,s)\,ds, \tag{1.3}$$

we have that $u$ is a weak solution of (1.1) iff

$$J'(u)v = 0 \qquad \forall\,v\in H^1_0(\Omega), \tag{1.4}$$

iff $J'(u)=0$ ($u$ is a critical point of $J$). For this we studied differential calculus on Banach and Hilbert spaces (nonlinear functional analysis). Properties of Sobolev spaces (embeddings, Poincaré and Sobolev inequalities, etc.) are assumed known (from PDE 2).

Theorem 1.1. Let $f\colon\mathbb{R}\to\mathbb{R}$ be continuous. Assume that there exist $a,b>0$ such that

$$|f(t)| \leq a + b|t|^{2^*-1} \qquad \forall\,t\in\mathbb{R}. \tag{$f_1$}$$

Let $g(x,u) = -q(x)u + h(x) + f(u)$ with $q\in L^\infty(\Omega)$, $h\in L^2(\Omega)$, and $G$ as in (1.3). Then $J$ in (1.3) is Fréchet differentiable on $H^1_0(\Omega)$ (and on $H^1(\Omega)$ if, for example, $\partial\Omega$ is $C^1$).

Example (Linear case). Fredholm theory and Lax-Milgram (PDE 2).

Theorem 1.2. Let $\Omega\subseteq\mathbb{R}^N$, $N\geq 3$, open and bounded, $q\in L^\infty(\Omega)$. Then:

(1) There exists a sequence $\{\lambda_k\}_{k\in\mathbb{N}}\subseteq\mathbb{R}$, $\{\varphi_k\}_{k\in\mathbb{N}}\subseteq H^1_0(\Omega)\setminus\{0\}$ such that

(i) We have

$$-\Delta\varphi_k + q(x)\varphi_k = \lambda_k\varphi_k \ \text{ in } \Omega, \qquad \varphi_k = 0 \ \text{ on } \partial\Omega. \tag{1.5}$$

(ii) $\lambda_k \to \infty$.

(iii) $\{\varphi_k\}_{k\in\mathbb{N}}$ is an ONB of $L^2(\Omega)$.

We call $\lambda_k$ and $\varphi_k$ the eigenvalues and eigenfunctions of $-\Delta + q(x)$, respectively (there is more information on $\lambda_1$ and $\varphi_1$ available).

(2) Let $h\in L^2(\Omega)$ and consider

$$-\Delta u + q(x)u = \lambda u + h(x) \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega. \tag{1.6}$$

Then

(i) Equation (1.6) has a unique weak solution for all $h\in L^2(\Omega)$ iff $\lambda$ is not an eigenvalue of $-\Delta + q(x)$.

(ii) If $\lambda = \lambda_k$ for some $k$, then (1.6) has a solution for a given $h\in L^2(\Omega)$ iff $h$ is orthogonal in $L^2(\Omega)$ to $\ker(-\Delta + q(x) - \lambda)$.

Note. (1.6) is of the form $-\Delta u + q(x)u = f(u) + h(x)$ with $f(t)=\lambda t$ (satisfies $(f_1)$; is linear).

We did some minimization. The themes were convexity, coercivity, weak lower semicontinuity (reflexivity!). We discussed detailed results on when each of these holds for functionals of the type $\int_\Omega F(u)\,dx$, $F(t)=\int_0^t f(s)\,ds$; here we only state the results on existence of weak solutions to (1.1).

Theorem 1.3. For all $h\in L^2(\Omega)$ there exists a unique weak solution $u\in H^1_0(\Omega)$ to the Dirichlet boundary value problem for $-\Delta u + q(x)u = f(u) + h(x)$ if $(f_1)$ holds and

$$f(t)t \leq 0 \qquad \forall\,t\in\mathbb{R}, \tag{$f_2$}$$
$$(f(t)-f(s))(t-s) \leq 0 \qquad \forall\,t,s\in\mathbb{R}. \tag{$f_3$}$$

Corollary 1.4. For $p\in(1,2^*]$:

(i) For all $h\in L^2(\Omega)$ there exists a unique weak solution $u\in H^1_0(\Omega)$ to the Dirichlet boundary value problem $-\Delta u + |u|^{p-2}u = h(x)$.

(ii) $u\equiv 0$ is the only weak solution to the Dirichlet boundary value problem for the equation $-\Delta u + |u|^{p-2}u = 0$.

Theorem 1.5. Let $\lambda_1 = \lambda_1(-\Delta+q) > 0$, $h\in L^2(\Omega)$, $f\colon\mathbb{R}\to\mathbb{R}$ continuous, and assume that there exist $a>0$, $b\in(0,\lambda_1)$ such that

$$|f(t)| \leq a + b|t| \qquad \forall\,t\in\mathbb{R}. \tag{$f_4$}$$

Then there exists a weak solution $u\in H^1_0(\Omega)$ to the Dirichlet boundary value problem $-\Delta u + q(x)u = f(u) + h(x)$.

Note. (1) No uniqueness. (2) $(f_4)$ is implied by

$$|f(t)| \leq a + b|t|^\sigma \qquad \forall\,t\in\mathbb{R} \tag{$\widetilde f_4$}$$

if such $\sigma\in(0,1)$, $a,b>0$ exist (as well as by "$f$ bounded").

Theorem 1.6. For all $h\in L^2(\Omega)$ there exists a weak solution $u\in H^1_0(\Omega)$ to the Dirichlet boundary value problem for $-\Delta u + q(x)u = f(u) + h(x)$ if $f\colon\mathbb{R}\to\mathbb{R}$ is continuous and $(f_1)$ and $(f_2)$ hold.

All of this was on minimization, or the "Direct Method of the Calculus of Variations". The purpose was to prove that the functional

$$J(u) = \int_\Omega\Big(\tfrac12|\nabla u|^2 + \tfrac12 q(x)u^2 - F(u) - h(x)u\Big)\,dx$$

attains its minimum on $H^1_0(\Omega)$; then $J'(u)=0$ and so $u$ is a weak solution to the Dirichlet boundary value problem.

Note. What does the graph of $f$ look like in the results above?!

- $f$ may grow at most like $|t|^{2^*-1}$ at $\pm\infty$; superlinear growth is possible. This gives the differentiability (and well-definedness) of $J$, Theorem 1.1.
- If $f$ grows linearly, and $\lambda\neq\lambda_k$ for all $k$, then $-\Delta u + q(x)u = \lambda u + h(x)$ has a unique solution $u$ for all $h$, Theorem 1.2 (2)(i).
- If $\lambda = \lambda_k$ for some $k$, then $-\Delta u + q(x)u = \lambda u + h(x)$ has weak solutions $u$ (several, not just one) iff $h\perp\ker(-\Delta + q(x) - \lambda_k)$, Theorem 1.2 (2)(ii).
- If $(f_4)$ holds, then $-\Delta u + q(x)u = f(u) + h(x)$ has (non-unique) weak solutions $u$ for all $h\in L^2(\Omega)$, Theorem 1.5.
- There was the sign condition $(f_2)$ and the non-increasing condition $(f_3)$. They ensure a unique weak solution (the only growth condition is $(f_1)$) for all $h\in L^2(\Omega)$, Theorem 1.3.

The second topic (after the "Direct Method") is minimax methods. These are techniques to find critical points ($J'(u)=0$) that are not global minima. The goal is to characterise a critical value $c\in\mathbb{R}$ of $J$ ($J(u)=c$, $J'(u)=0$ for some $u$) as a minimax over a suitable class of subsets $\mathcal S\subseteq 2^{H^1_0(\Omega)}$, i.e.

$$c = \inf_{A\in\mathcal S}\ \sup_{u\in A} J(u).$$

We discussed sublevel sets and their topology, deformations (of those sublevels), (minus) the gradient flow, Palais-Smale sequences and the Palais-Smale condition. We also talked about (admissible) minimax classes. The main concrete example follows (we will soon see another, new one).

1.2 The Mountain Pass Theorem

Theorem 1.7 (Mountain Pass Theorem, Ambrosetti & Rabinowitz). Let $H$ be a Hilbert space, $J\in C^{1,1}(H)$, $J(0)=0$. Assume that $J$ has the "Mountain Pass geometry": there exist $\rho>0$ and $\alpha>0$ such that

(i) $J(u)\geq\alpha>0$ for all $u\in H$ with $\|u\|=\rho$.
(ii) There exists $v\in H$ with $\|v\|>\rho$ and $J(v)<\alpha$.

Let

$$\Gamma := \big\{\gamma([0,1]) \;\big|\; \gamma\colon[0,1]\to H,\ \gamma \text{ continuous},\ \gamma(0)=0,\ \gamma(1)=v\big\}$$

and (by abuse of notation)

$$c = \inf_{\gamma([0,1])\in\Gamma}\ \sup_{u\in\gamma([0,1])} J(u) = \inf_{\gamma\in\Gamma}\ \max_{t\in[0,1]} J(\gamma(t)).$$

Then there exists a Palais-Smale sequence for $J$ at level $c$. If $J$ satisfies $(PS)_c$, then there exists a critical point at level $c$.

The main ingredient in the proof was the Deformation Lemma. The MPT is one concrete consequence (we will see another) of the main abstract result of critical point, or minimax, theory:

Theorem 1.8. Let $H$ be a Hilbert space, $J\in C^{1,1}(H)$. Let $\Gamma$ be an admissible² minimax class at level $c$ with respect to $J$. Then there exists a Palais-Smale sequence for $J$ at level $c$. If $J$ satisfies $(PS)_c$, then there exists a critical point for $J$ at level $c$.

² I.e. $c := \inf_{A\in\Gamma}\sup_{u\in A}J(u)$ is finite and there exists $\varepsilon_0>0$ such that for all $\varepsilon\in(0,\varepsilon_0)$, $\Gamma$ is invariant under all deformations that fix the sublevel set $J^{c-2\varepsilon}$.

The main application to semilinear elliptic PDE, for superlinear but subcritical growth:

Theorem 1.9. Let $\Omega\subseteq\mathbb{R}^N$, $N\geq 3$, open and bounded, $q\in L^\infty(\Omega)$, $q\geq 0$, $f\colon\mathbb{R}\to\mathbb{R}$ continuous, and assume that

(a) There exist $p\in(2,2^*)$ and $C>0$ such that

$$|f(t)-f(s)| \leq C|t-s|\big(|s|+|t|+1\big)^{p-2}. \tag{$f_6$}$$

(b) We have

$$\lim_{t\to 0}\frac{f(t)}{t} = 0. \tag{$f_7$}$$

(c) There exist $M>0$ and $\mu>2$ such that

$$f(t)t \geq \mu F(t) \qquad \forall\,t\in\mathbb{R},\ |t|\geq M. \tag{$f_8$}$$

(d) There is $t_0\in\mathbb{R}$ with $|t_0|\geq M$ (with $M$ as in $(f_8)$) such that

$$F(t_0) > 0. \tag{$f_9$}$$

Then there exists at least one non-trivial weak solution of

$$-\Delta u + q(x)u = f(u) \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega.$$

Easier to read:

Corollary 1.10. For $p\in(2,2^*)$, there exists a non-trivial weak solution $u\neq 0$ to the Dirichlet boundary value problem for $-\Delta u + q(x)u = |u|^{p-2}u$.

Note carefully the difference in sign to Corollary 1.4. This finishes the "repetition" part!

1.3 Generalisation of the Direct Method to vector-valued functions

The first new material concerning semilinear elliptic PDE that we will present will be

(a) Another application of the Mountain Pass Theorem 1.7.
(b) Another choice of minimax class in Theorem 1.8, leading to the Saddle Point Theorem (SPT).
(c) Application of the Saddle Point Theorem to semilinear elliptic PDE.

First, however, some general remarks on minimization, other types of problems, and an application which we will need (Brouwer's Fixed Point Theorem). We have studied critical points and minimization (the "Direct Method") of

$$J(u) = \int_\Omega\Big[\tfrac12|\nabla u|^2 - G(x,u)\Big]\,dx, \qquad u\in H^1_0(\Omega),$$

where $u\colon\Omega\to\mathbb{R}$. More generally, the Direct Method can be applied to more general energy functionals, for example

$$E(\omega) := \int_\Omega L\big(x,\omega(x),\nabla\omega(x)\big)\,dx,$$

where $\omega\colon\Omega\to\mathbb{R}^m$, $m\geq 1$, is now vector-valued and

$$L\colon\Omega\times\mathbb{R}^m\times\mathbb{R}^{mN}\to\mathbb{R}, \qquad (x,z,P)\mapsto L(x,z,P),$$

where $P$ is an $m\times N$ matrix. $L$ is the Lagrange function, or the Lagrangian. This problem (minimizing $E$) is normally augmented by a boundary condition, for example (Dirichlet, but "inhomogeneous")

$$\omega = \varphi \ \text{ on } \partial\Omega, \qquad \varphi\colon\partial\Omega\to\mathbb{R}^m \text{ given}.$$

If $\omega$ minimises, then it satisfies a PDE:

Proposition 1.11. If $\omega\colon\overline\Omega\to\mathbb{R}^m$ is smooth, $\omega=\varphi$ on $\partial\Omega$, and $\omega$ minimises $E$ among smooth functions with this boundary value, then

$$-\sum_{i=1}^N\frac{\partial}{\partial x_i}\Big(L_{p^k_i}(x,\omega,\nabla\omega)\Big) + L_{z^k}(x,\omega,\nabla\omega) = 0, \qquad k=1,\ldots,m. \tag{1.7}$$

Here the notation is

$$\omega(x) = \begin{pmatrix}\omega^1(x)\\ \vdots\\ \omega^m(x)\end{pmatrix}, \qquad \nabla\omega(x) = \begin{pmatrix}\nabla\omega^1(x)\\ \vdots\\ \nabla\omega^m(x)\end{pmatrix} = \begin{pmatrix}\partial_1\omega^1 & \cdots & \partial_N\omega^1\\ \vdots & & \vdots\\ \partial_1\omega^m & \cdots & \partial_N\omega^m\end{pmatrix} = (\partial_j\omega^i),$$

where $P=(P^i_j)\in\mathbb{R}^{m\times N}$, $z=(z^1,\ldots,z^m)$, and $L_{p^k_i}$ and $L_{z^k}$ are the corresponding partial derivatives. The proof of Proposition 1.11 is as for Lemma 2.16 from last semester (see also [E], pp. 459-464, and Růžička, pp. 9-17).

Note. (1.7) is a system of $m$ quasilinear PDEs: the Euler-Lagrange equations for $E(\omega)$. Applications of problems with vector-valued $\omega$ arise, for example, in elasticity theory and minimal surfaces.

1.4 Null-Lagrangians

Definition 1.12. The function $L$ (the Lagrangian) is called a null-Lagrangian iff the corresponding Euler-Lagrange equation (1.7) is satisfied by all smooth $\omega\colon\Omega\to\mathbb{R}^m$.

Null-Lagrangians can be used to characterise weak sequential continuity of energy functionals of the type $\int_\Omega L(\nabla\omega)\,dx$. The purpose here is to give an analytic proof of Brouwer's fixed point theorem.

Theorem 1.13. Let $L$ be a (smooth...) null-Lagrangian and let $\omega,\widetilde\omega$ be smooth with $\omega=\widetilde\omega$ on $\partial\Omega$. Then $E(\omega)=E(\widetilde\omega)$. I.e. $E(\omega)$ only depends on the boundary values of $\omega\colon\overline\Omega\to\mathbb{R}^m$.

Proof. Define $j\colon[0,1]\to\mathbb{R}$ by $j(\tau)=E(\tau\omega+(1-\tau)\widetilde\omega)$. Then, using smoothness of $\omega,\widetilde\omega$ and $L$ (say, $\omega,\widetilde\omega\in C^1(\overline\Omega)$ should be enough), we get

$$j'(\tau) = \int_\Omega\sum_{k=1}^m\Big\{\sum_{i=1}^N L_{p^k_i}\big(x,\tau\omega+(1-\tau)\widetilde\omega,\tau\nabla\omega+(1-\tau)\nabla\widetilde\omega\big)\,\partial_i(\omega^k-\widetilde\omega^k) + L_{z^k}\big(\cdots\big)\,(\omega^k-\widetilde\omega^k)\Big\}\,dx,$$

and, integrating by parts in the first term (no boundary contribution, since $\omega=\widetilde\omega$ on $\partial\Omega$),

$$j'(\tau) = \int_\Omega\sum_{k=1}^m\Big\{-\sum_{i=1}^N\partial_i\Big(L_{p^k_i}\big(x,\tau\omega+(1-\tau)\widetilde\omega,\tau\nabla\omega+(1-\tau)\nabla\widetilde\omega\big)\Big) + L_{z^k}\big(\cdots\big)\Big\}(\omega^k-\widetilde\omega^k)\,dx = 0,$$

since $L$ is a null-Lagrangian. Hence $j$ is constant on $[0,1]$, so $j(0)=j(1)$, i.e. $E(\widetilde\omega)=E(\omega)$. □

We need a bit of linear algebra in order to prove that the determinant is a null-Lagrangian. For $A=(a^i_j)\in\mathbb{R}^{N\times N}$, the cofactor matrix $\operatorname{cof}A$ is defined by $(\operatorname{cof}A)^k_i := (-1)^{i+k}\det A^k_i$, where $A^k_i$ is obtained from $A$ by removing the $k$-th row and the $i$-th column.
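For orientation, here is the case $N=2$ written out (a small check, not needed later): for

$$A = \begin{pmatrix}a^1_1 & a^1_2\\ a^2_1 & a^2_2\end{pmatrix}, \qquad \operatorname{cof}A = \begin{pmatrix}a^2_2 & -a^2_1\\ -a^1_2 & a^1_1\end{pmatrix}, \qquad \det A = a^1_1a^2_2 - a^1_2a^2_1.$$

With $A=\nabla\omega$, i.e. $a^i_j=\partial_j\omega^i$, the divergence of the first row of $\operatorname{cof}\nabla\omega$ is

$$\partial_1(\operatorname{cof}\nabla\omega)^1_1 + \partial_2(\operatorname{cof}\nabla\omega)^1_2 = \partial_1\partial_2\omega^2 - \partial_2\partial_1\omega^2 = 0$$

by Schwarz's theorem (and similarly for the second row). This is exactly the statement of Lemma 1.14 below in the case $N=2$.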

Lemma 1.14. Let $\omega\colon\mathbb{R}^N\to\mathbb{R}^N$ be smooth ($C^2$, say). Then, for all $k=1,\ldots,N$,

$$\sum_{i=1}^N\partial_{x_i}\big(\operatorname{cof}(\nabla\omega)\big)^k_i = 0.$$

One can read this as: the divergence of the $k$-th row of $\operatorname{cof}(\nabla\omega)$ vanishes.

The proof needs one fact from linear algebra: for $P\in\mathbb{R}^{N\times N}$ we have the formula $(\det P)\,\mathbb 1_N = P^{\!\top}(\operatorname{cof}P)$.

Proof. For $i,j=1,\ldots,N$ we have

$$(\det P)\,\delta_{ij} = \big[P^{\!\top}(\operatorname{cof}P)\big]^i_j = \sum_{k=1}^N p^k_i\,(\operatorname{cof}P)^k_j. \tag{1.8}$$

Choose $i=j=s$, so that

$$\det P = \sum_{k=1}^N p^k_s(\operatorname{cof}P)^k_s, \qquad s=1,\ldots,N.$$

Hence

$$\frac{\partial\det P}{\partial p^r_s} = \sum_{k=1}^N\Big[\delta_{kr}(\operatorname{cof}P)^k_s + p^k_s\,\frac{\partial(\operatorname{cof}P)^k_s}{\partial p^r_s}\Big] = (\operatorname{cof}P)^r_s, \tag{1.9}$$

since $(\operatorname{cof}P)^k_s = (-1)^{k+s}\det(A\setminus\{k,s\})$ is independent of $(p^1_s,\ldots,p^N_s)$.

Let now $P=\nabla\omega=(\partial_s\omega^r)$. Then, by the chain rule and (1.9),

$$\partial_{x_j}\big(\det(\nabla\omega)\big) = \sum_{r,s=1}^N(\partial_j\partial_s\omega^r)\,(\operatorname{cof}\nabla\omega)^r_s, \qquad j=1,\ldots,N.$$

Use this and (1.8) and sum over $j$, to get, for $i=1,\ldots,N$:

$$\sum_{j,r,s=1}^N\delta_{ij}\,(\operatorname{cof}\nabla\omega)^r_s\,\partial_j\partial_s\omega^r = \sum_{k,j=1}^N\Big\{\partial_j\partial_i\omega^k\,(\operatorname{cof}\nabla\omega)^k_j + \partial_i\omega^k\,\partial_j(\operatorname{cof}\nabla\omega)^k_j\Big\}. \tag{1.10}$$

Note that for $i=1,\ldots,N$ we have

$$\sum_{j,r,s=1}^N\delta_{ij}\,(\operatorname{cof}\nabla\omega)^r_s\,\partial_j\partial_s\omega^r = \sum_{r,s=1}^N(\operatorname{cof}\nabla\omega)^r_s\,\partial_i\partial_s\omega^r = \sum_{j,k=1}^N(\operatorname{cof}\nabla\omega)^k_j\,\partial_i\partial_j\omega^k = \sum_{j,k=1}^N(\operatorname{cof}\nabla\omega)^k_j\,\partial_j\partial_i\omega^k$$

by Schwarz. I.e. (1.10) becomes

$$\sum_{k=1}^N\partial_i\omega^k\Big(\sum_{j=1}^N\partial_j(\operatorname{cof}\nabla\omega)^k_j\Big) = 0, \qquad i=1,\ldots,N.$$

I.e., with $P=\nabla\omega$, the vector

$$y = \Big(\sum_{j=1}^N\partial_j(\operatorname{cof}\nabla\omega)^k_j\Big)_{k=1}^N \tag{1.11}$$

solves $(\nabla\omega)^{\!\top}y = 0$ (this is (1.11)). This means: if $x_0\in\mathbb{R}^N$ is such that $\det(\nabla\omega(x_0))\neq 0$, it follows that $y_k=\sum_{j=1}^N\partial_j(\operatorname{cof}\nabla\omega(x_0))^k_j = 0$, $k=1,\ldots,N$ (which was to be proved). If $\det(\nabla\omega(x_0))=0$, then choose $\varepsilon_0>0$ such that $\det(\nabla\omega(x_0)+\varepsilon\mathbb 1_N)\neq 0$ for all $0<\varepsilon\leq\varepsilon_0$. This is possible since $\det(\nabla\omega(x_0)+\varepsilon\mathbb 1_N)$ is a polynomial in $\varepsilon$, hence has only finitely many (isolated) zeroes. Now repeat the above calculation for $\widetilde\omega := \omega+\varepsilon x$ and pass to the limit $\varepsilon\to 0$ at the end. □

Theorem 1.15 (Landers 1942; J. Ball 1976). The determinant $L(P)=\det P$, $P\in\mathbb{R}^{N\times N}$, is a null-Lagrangian.

Proof. We need to prove that any smooth $\omega\colon\Omega\to\mathbb{R}^N$ satisfies the corresponding Euler-Lagrange equation (see (1.7)), i.e.

$$\sum_{i=1}^N\partial_i\big(L_{p^k_i}(\nabla\omega)\big) = 0, \qquad k=1,\ldots,N. \tag{1.12}$$

From the calculation in the proof of 1.14 (see (1.9)) we know $L_{p^k_i}(\nabla\omega) = (\operatorname{cof}\nabla\omega)^k_i$, $i,k=1,\ldots,N$, and so (1.12) is nothing but the result of Lemma 1.14. □

Remark 1.16. (1) As a fact, $L=L(P)$ is a null-Lagrangian if and only if there exist matrices $B=(b^i_j)$, $C=(c^i_j)\in\mathbb{R}^{N\times N}$ and $a,d\in\mathbb{R}$ such that

$$L(P) = a + \sum_{i,j=1}^N b^i_jP^i_j + \sum_{i,j=1}^N c^i_j(\operatorname{cof}P)^i_j + d\det P.$$

Furthermore, the energy functional $E(\omega)=\int_\Omega L(\nabla\omega)\,dx$ for such Lagrangians is weakly sequentially continuous in appropriate function spaces.

(2) A further explicit example (apart from $L(P)=\det P$) is $L(P)=\operatorname{tr}(P^2)-\operatorname{tr}(P)^2$.

1.5 Brouwer's Fixed Point Theorem

We can now prove Brouwer's fixed point theorem.

Theorem 1.17 (Brouwer 1912 (?)). Every continuous map from a closed ball in $\mathbb{R}^N$ into itself has a fixed point: let $f\colon B\to B$, where $B:=\overline{B_R(x_0)}$, be continuous. Then there exists an $x\in B$ such that $f(x)=x$.

The proof will follow from the following theorem (interesting in itself).

Theorem 1.18. $\partial B$ is not a retract of $B$. This means: there exists no continuous map $g\colon B\to\partial B$ such that $g|_{\partial B}=\mathrm{id}_{\partial B}$ (i.e. $g(x)=x$ for all $x\in\partial B$).

Proof of 1.17. Assume for contradiction that $f\colon B\to B$ is continuous and $f(x)\neq x$ for all $x\in B$. For all $x\in B$ there exists a unique ray starting at $f(x)$ and passing through $x$; let $g(x)$ be the point where it intersects $\partial B$. This is well defined, since $x\neq f(x)$ for all $x\in B$. Also, $g$ is continuous (!!) in $x\in B$ and $g(x)=x$ for all $x\in\partial B$. Hence this is a contradiction to Theorem 1.18. □

Proof of Theorem 1.18. We first show that there exists no smooth map $\omega\colon B\to\partial B$ such that $\omega(x)=x$ for all $x\in\partial B$. Assume for contradiction that such a map $\omega$ exists, and let $\widetilde\omega$ be the identity on $B$, i.e. $\widetilde\omega(x)=x$ for all $x\in B$. Then $\omega(x)=\widetilde\omega(x)$ for all $x\in\partial B$. Hence, by 1.13 and 1.15 (the determinant is a null-Lagrangian),

$$\int_B\det\nabla\omega\,dx = \int_B\det\nabla\widetilde\omega\,dx = \operatorname{Vol}(B) \neq 0, \tag{1.13}$$

since $\det\nabla\widetilde\omega(x)=1$ for all $x\in B$. On the other hand, since $\omega(x)\in\partial B$ for all $x\in B$ (take $B=\overline{B_1(0)}$), we have $|\omega(x)|^2=1$ for all $x\in B$. Differentiating gives $(\nabla\omega(x))^{\!\top}\omega(x)=0$. Since $|\omega(x)|=1$, $\omega(x)\neq 0$; hence $0$ is an eigenvalue of $(\nabla\omega(x))^{\!\top}$ for every $x\in B$ (and $\omega(x)$ is an eigenvector). But then $\det(\nabla\omega(x))=0$ for all $x\in B$, contradicting (1.13).

It remains to prove that no continuous map $\omega\colon B\to\partial B$ with $\omega(x)=x$ for all $x\in\partial B$ can exist. Assume, for contradiction, that such an $\omega$ exists. Extend $\omega$ to a map (called the same) on $\mathbb{R}^N$ by $\omega(x)=x$ for $x\in\mathbb{R}^N\setminus B$. Then $\omega\colon\mathbb{R}^N\to\mathbb{R}^N$ is continuous and $|\omega(x)|\geq 1$ for all $x\in\mathbb{R}^N$ (since $\omega(x)\in\partial B$ for all $x\in B$). Hence $\omega(x)\neq 0$ for all $x\in\mathbb{R}^N$. Let

$$\omega_\varepsilon(x) := (\eta_\varepsilon*\omega)(x) := \int_{\mathbb{R}^N}\eta_\varepsilon(x-y)\,\omega(y)\,dy$$

with $\eta_\varepsilon(x)=\varepsilon^{-N}\eta(x/\varepsilon)$, where $\eta$ is a standard mollifier with $\eta(x)=0$ for $|x|\geq 1$, $\int_{\mathbb{R}^N}\eta\,dx=1$, and $\eta$ radially symmetric, i.e. by abuse $\eta(x)=\eta(|x|)$. We claim that there exists an $\varepsilon>0$ such that $\omega_\varepsilon(x)\neq 0$ for all $x\in\mathbb{R}^N$. First, for $x\in\mathbb{R}^N\setminus B_2(0)$ and $\varepsilon>0$ sufficiently small,

$$\omega_\varepsilon(x) = \int_{B_\varepsilon(0)}\eta_\varepsilon(|y|)\,\omega(x-y)\,dy = \int_{B_\varepsilon(0)}\eta_\varepsilon(|y|)\,(x-y)\,dy,$$

since $\omega(x-y)=x-y$ for $|x-y|>1$. Hence this equals

$$x\int_{B_\varepsilon(0)}\eta_\varepsilon(|y|)\,dy - \int_{B_\varepsilon(0)}y\,\eta_\varepsilon(|y|)\,dy = x,$$

since $\int_{B_\varepsilon(0)}\eta_\varepsilon(|y|)\,dy=1$ and $\int_{B_\varepsilon(0)}y\,\eta_\varepsilon(|y|)\,dy=0$ ($\eta_\varepsilon(|y|)$ is radially symmetric and $y$ is odd/antisymmetric). Secondly, on $\overline{B_2(0)}$, $\omega_\varepsilon\to\omega$ uniformly on compact subsets as $\varepsilon\to 0$, since $\omega$ is continuous (see, for example, PDE 2, SS 2015, Problem Sheet 1, Q4). Hence, from

(i) $\omega_\varepsilon(x)=x$ for all $x\in\mathbb{R}^N\setminus B_2(0)$, and
(ii) $|\omega(x)|\geq 1$ for all $x\in\overline{B_2(0)}$,

it follows that there exists an $\varepsilon>0$ such that $|\omega_\varepsilon(x)|\geq\tfrac12$ for all $x\in\mathbb{R}^N$; in particular $\omega_\varepsilon(x)\neq 0$ for all $x\in\mathbb{R}^N$ (and $\omega_\varepsilon$ is smooth). Then define

$$\widetilde\omega(x) := \frac{2\,\omega_\varepsilon(x)}{|\omega_\varepsilon(x)|},$$

which is smooth and satisfies $\widetilde\omega\colon\overline{B_2(0)}\to\partial B_2(0)$ and $\widetilde\omega(x)=x$ for all $x\in\partial B_2(0)$. This contradicts the first part of the proof, with $B=\overline{B_2(0)}$ instead of $\overline{B_1(0)}$. □
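Note (the case $N=1$). Both statements are elementary in one dimension and may help fix the picture. For $B=[-1,1]$, $\partial B=\{-1,1\}$, a retraction $g\colon[-1,1]\to\{-1,1\}$ with $g(\pm1)=\pm1$ would be a continuous map from a connected set onto a disconnected one, which is impossible. And Brouwer reduces to the intermediate value theorem: for continuous $f\colon[-1,1]\to[-1,1]$, the function $x\mapsto f(x)-x$ is $\geq 0$ at $x=-1$ and $\leq 0$ at $x=1$, hence vanishes somewhere.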

We will use the following consequence of Brouwer's fixed point theorem (in fact, it is an equivalent formulation).

Theorem 1.19. Let $B_R = \{x\in\mathbb{R}^N \mid |x|\leq R\}$ and let $\varphi\colon B_R\to\mathbb{R}^N$ be a continuous function with $\varphi(x)=x$ if $|x|=R$. Then there exists $x\in B_R$ with $\varphi(x)=0$.

Proof. Let $\psi\colon B_R\to B_R$ be defined by

$$\psi(x) := \begin{cases} x-\varphi(x), & |x-\varphi(x)|\leq R,\\[2pt] R\,\dfrac{x-\varphi(x)}{|x-\varphi(x)|}, & |x-\varphi(x)|>R.\end{cases}$$

Then $\psi$ is well defined, continuous, and $\psi(B_R)\subseteq B_R$. By Brouwer (1.17), there exists $x\in B_R$ with $\psi(x)=x$. If $|x-\varphi(x)|>R$, then

$$x = \psi(x) = R\,\frac{x-\varphi(x)}{|x-\varphi(x)|},$$

hence $|x|=R$. This (by the assumption on $\varphi$) implies that $\varphi(x)=x$, so $x-\varphi(x)=0$, contradicting $|x-\varphi(x)|>R$. Hence $|x-\varphi(x)|\leq R$, so from $\psi(x)=x$ it follows that $x=\psi(x)=x-\varphi(x)$, i.e. $\varphi(x)=0$. □

We now return to minimax methods.

2 More on Minimax Methods

2.1 Another Application of the Mountain Pass Theorem

We start by giving another application of the Mountain Pass Theorem 1.7; recall that the first applications of 1.7 were Theorem 1.9 and Corollary 1.10.

Theorem 2.1. Let $\Omega\subseteq\mathbb{R}^N$, $N\geq 3$, open and bounded, $q\in L^\infty(\Omega)$, $q\geq 0$. Let $p\in(2,2^*)$ and $\lambda<\lambda_1(-\Delta+q)$. Then there exists a non-trivial weak solution $u\neq 0$ of

$$-\Delta u + q(x)u = \lambda u + |u|^{p-2}u \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega. \tag{2.1}$$

Remark 2.2. Weak solutions of (2.1) are exactly the critical points of $J\colon H^1_0(\Omega)\to\mathbb{R}$,

$$J(u) = \frac12\|u\|_q^2 - \frac\lambda2\int_\Omega u^2\,dx - \frac1p\int_\Omega|u|^p\,dx = \frac12\|u\|_q^2 - \int_\Omega F(u)\,dx,$$

with

$$\|u\|_q^2 = \int_\Omega\big(|\nabla u|^2 + qu^2\big)\,dx, \qquad f(u) = \lambda u + |u|^{p-2}u, \qquad F(t) = \int_0^t f(s)\,ds,$$

as usual; we will find a critical point using the Mountain Pass Theorem. Note that $f(u)=\lambda u + |u|^{p-2}u$ satisfies $(f_6)$, $(f_8)$, $(f_9)$ in Theorem 1.9, but not $(f_7)$, so we cannot apply 1.9. However (see Lemma 3.33 last semester), from $(f_6)$ it follows that $J\in C^{1,1}(H^1_0(\Omega))$. From $(f_6)$ and $(f_8)$ it also follows that $J$ satisfies the Palais-Smale condition $(PS)_c$ for all $c\in\mathbb{R}$ (see Lemma 3.34 last semester).

We recall what $(PS)_c$ and a Palais-Smale sequence are.

Definition 2.3. Let $X$ be a Banach space, $J\colon X\to\mathbb{R}$ (Fréchet) differentiable. Then

(1) $J$ satisfies (PS) iff every Palais-Smale sequence for $J$ has a convergent subsequence.
(2) $J$ satisfies $(PS)_c$ for $c\in\mathbb{R}$ iff every Palais-Smale sequence at level $c$ for $J$ has a convergent subsequence.

Definition 2.4. Let $X$ be a Banach space, $J\colon X\to\mathbb{R}$ a Fréchet differentiable functional.

(1) A sequence $\{u_k\}_{k\in\mathbb{N}}\subseteq X$ is called a Palais-Smale sequence (PS sequence) for $J$ iff
  (i) $\{J(u_k)\}_{k\in\mathbb{N}}$ is bounded in $\mathbb{R}$, and
  (ii) $J'(u_k)\to 0$ in $X^*$.

(2) Let $c\in\mathbb{R}$. If
  (i) $J(u_k)\to c$ in $\mathbb{R}$, and
  (ii) $J'(u_k)\to 0$ in $X^*$,
then $\{u_k\}_{k\in\mathbb{N}}$ is called a Palais-Smale sequence for $J$ at level $c$ (and $c$ is called a Palais-Smale level for $J$).

Lemma 2.5. Let $J$ be as in Remark 2.2; then $J$ satisfies $(PS)_c$ for all $c\in\mathbb{R}$.

Proof. See Lemma 3.34 last semester (using $(f_6)$, $(f_8)$ and $(f_9)$), or see [BS, p. 166] for an explicit proof for this $f$. □

To apply the Mountain Pass Theorem to prove Theorem 2.1, it remains to prove:

Lemma 2.6. Let $J$ be as in Remark 2.2. Then $J$ has the Mountain Pass geometry (see 1.7).

Last semester (Lemma 3.35) we proved a similar statement to prove Theorem 1.9, but this used $(f_7)$, which $f(u)=\lambda u+|u|^{p-2}u$ does not satisfy.

Proof. It is clear that $J(0)=0$. For all $u\in H^1_0(\Omega)$ we have

$$J(u) = \frac12\|u\|_q^2 - \frac\lambda2\int_\Omega u^2\,dx - \frac1p\int_\Omega|u|^p\,dx \geq \frac12\|u\|_q^2 - \frac{\lambda}{2\lambda_1(-\Delta+q)}\|u\|_q^2 - C\|u\|_q^p,$$

where we used the variational characterisation of $\lambda_1$, the Sobolev embedding, and the equivalence of norms on $H^1_0(\Omega)$. Hence

$$J(u) \geq \frac12\Big(1-\frac{\lambda}{\lambda_1(-\Delta+q)}\Big)\|u\|_q^2 - C\|u\|_q^p.$$

Choose $\rho>0$ so small that

$$\alpha := \frac12\Big(1-\frac{\lambda}{\lambda_1(q)}\Big)\rho^2 - C\rho^p > 0.$$

This is possible since $\lambda<\lambda_1(q)$ and $p>2$. Then $J(u)\geq\alpha$ if $\|u\|_q=\rho$. Finally, for all $u\in H^1_0(\Omega)\setminus\{0\}$ and $t>0$ we have

$$J(tu) = \frac{t^2}2\|u\|_q^2 - \frac{\lambda t^2}2\int_\Omega u^2\,dx - \frac{t^p}p\int_\Omega|u|^p\,dx = t^2\Big(\frac12\|u\|_q^2 - \frac\lambda2\int_\Omega u^2\,dx\Big) - \frac{t^p}p\int_\Omega|u|^p\,dx \xrightarrow{\,t\to\infty\,} -\infty.$$

Hence there exists $v\in H^1_0(\Omega)$ with $\|v\|_q>\rho$ and $J(v)<\alpha$ (or even $J(v)<0$). Hence $J$ has the Mountain Pass geometry. □

Proof of Theorem 2.1. By 1.7 there exists a critical point $u\in H^1_0(\Omega)$ for $J$ at a positive level $c\geq\alpha>0$. This critical point is a non-trivial weak solution of (2.1) in Theorem 2.1 (non-trivial since $J(u)=c>0=J(0)$). □

2.2 The Saddle Point Theorem

We move on to another choice of minimax class in Theorem 1.8 (other than $\Gamma$ in the Mountain Pass Theorem 1.7), which gives another concrete minimax theorem.

Theorem 2.7 (Saddle Point Theorem (SPT), Rabinowitz 1978). Let $H$ be a Hilbert space, $J\in C^{1,1}(H)$. Let $E_n\subseteq H$ be an $n$-dimensional linear subspace of $H$, and denote $V:=E_n^\perp$ (the orthogonal complement in $H$), so that $H=E_n\oplus V$. For $R>0$, let $B^n_R$ be the closed ball in $E_n$ with centre $0$ and radius $R$:

$$B^n_R := \{y\in E_n \mid \|y\|_H\leq R\},$$

and let $\partial B^n_R$ be its boundary in $E_n$. Assume that there exists an $R>0$ such that

$$\max_{u\in\partial B^n_R}J(u) < \inf_{u\in V}J(u).$$

Then there exists a Palais-Smale sequence at some level $c\geq\inf_V J$.

[Figure: the saddle point geometry for $n=1$ in $\mathbb{R}^2=E_1\oplus V$; the graph of $J|_V$ lies above $\inf_V J$, while $J|_{\partial B^1_R}$ lies below.]

Remark. $\partial B^n_R$ is compact, so $\max_{u\in\partial B^n_R}J(u)$ is attained.

Proof. We will apply Theorem 1.8. Define

$$\Gamma := \big\{\varphi(B^n_R) \;\big|\; \varphi\colon B^n_R\to H \text{ continuous},\ \varphi(u)=u\ \forall\,u\in\partial B^n_R\big\},$$

i.e. the (images of) continuous maps of the finite-dimensional closed ball $B^n_R$ into $H$ fixing the boundary $\partial B^n_R$. Then its minimax level is

$$c := \inf_{\varphi\in\Gamma}\ \max_{u\in B^n_R}J(\varphi(u)).$$

The goal is to prove that $\Gamma$ is admissible.

(a) We need to show that $c\in\mathbb{R}$. Note that $c<\infty$: since $B^n_R$ is compact, $\sup_{u\in B^n_R}J(\varphi(u)) = \max_{u\in B^n_R}J(\varphi(u))$ is attained. We show that $c>-\infty$. Let $P\colon H\to E_n$ be the orthogonal projection onto $E_n$. Then $u\in V$ iff $P(u)=0$, and $u\in E_n$ iff $P(u)=u$. Let $\varphi\in\Gamma$ (i.e. $\varphi(B^n_R)\in\Gamma$; by abuse we identify the set $\varphi(B^n_R)$ with the map $\varphi\colon B^n_R\to H$). Define

$$\psi\colon B^n_R\to E_n, \qquad u\mapsto\psi(u):=P(\varphi(u)).$$

Then $\psi$ is continuous (both $\varphi$ and $P$ are), and for all $u\in\partial B^n_R$ we have $\psi(u)=P(\varphi(u))=P(u)=u$, since $\varphi(u)=u$ for $u\in\partial B^n_R$ when $\varphi\in\Gamma$, and since $u\in\partial B^n_R\subseteq E_n$. Hence the map $\psi\colon B^n_R\to E_n$ is continuous and $\psi|_{\partial B^n_R}=\mathrm{id}_{\partial B^n_R}$. By Theorem 1.19 (the consequence of Brouwer), there exists $u_\varphi\in B^n_R$ such that $\psi(u_\varphi)=0$. Hence, by definition of $\psi$, $P(\varphi(u_\varphi))=0$, so $\varphi(u_\varphi)\in V$. Hence

$$\max_{u\in B^n_R}J(\varphi(u)) \geq J(\varphi(u_\varphi)) \geq \inf_{u\in V}J(u) > \max_{u\in\partial B^n_R}J(u) > -\infty$$

by assumption. This holds for all $\varphi\in\Gamma$; hence, taking the infimum over $\Gamma$, we get

$$c = \inf_{\varphi\in\Gamma}\max_{v\in B^n_R}J(\varphi(v)) \geq \inf_{u\in V}J(u) > -\infty. \tag{2.2}$$

(b) We need to prove that there exists $\varepsilon_0>0$ such that for all $\varepsilon\in(0,\varepsilon_0)$ the set $\Gamma$ is invariant under all deformations that fix the sublevel set $J^{c-2\varepsilon}$. Since we have assumed $\inf_{u\in V}J(u)>\max_{u\in\partial B^n_R}J(u)$, we can choose $\varepsilon_0>0$ such that

$$\inf_{u\in V}J(u) - 2\varepsilon_0 > \max_{u\in\partial B^n_R}J(u).$$

Let $\varepsilon\in(0,\varepsilon_0)$; then we also have $\inf_V J - 2\varepsilon > \max_{\partial B^n_R}J$. Let $\eta$ be a deformation that fixes $J^{c-2\varepsilon}$. The aim is to prove that if $A:=\varphi(B^n_R)\in\Gamma$, then $\eta(t,A)\in\Gamma$ for all $t\in[0,1]$. Define, for fixed $t\in[0,1]$,

$$\widetilde\varphi\colon B^n_R\to H, \qquad u\mapsto\widetilde\varphi(u):=\eta(t,\varphi(u)).$$

Then $\widetilde\varphi$ is continuous. To prove that $\eta(t,\varphi(B^n_R))\in\Gamma$ is to prove that $\widetilde\varphi(B^n_R)\in\Gamma$; we need to prove $\widetilde\varphi(u)=u$ for all $u\in\partial B^n_R$. So let $u\in\partial B^n_R$. Then $\varphi(u)=u$ (since $\varphi\in\Gamma$, or rather $\varphi(B^n_R)\in\Gamma$). Also, by (2.2),

$$J(u) \leq \max_{v\in\partial B^n_R}J(v) < \inf_{v\in V}J(v) - 2\varepsilon \leq c - 2\varepsilon.$$

Hence $u\in J^{c-2\varepsilon}$, which is fixed by $\eta$ (by assumption). Hence $\widetilde\varphi(u)=\eta(t,\varphi(u))=\eta(t,u)=u$, since $\varphi(u)=u$ and $\eta$ fixes $J^{c-2\varepsilon}$.

So $\Gamma$ is admissible, and applying Theorem 1.8 ends the proof. □

2.3 Application to non-resonant problems

We now apply the Saddle Point Theorem 2.7 to a concrete problem for semilinear elliptic PDE. The Saddle Point Theorem is typically used for asymptotically linear problems: the nonlinearity grows linearly at $\pm\infty$. Recall the discussion in Chapter 1 of the interaction with the spectrum $\{\lambda_k\}_k$ in the case of linear growth.¹ We will study the differential equation $-\Delta u + q(x)u = f(x,u)$ with²

$$f(x,t) = \lambda t + \widetilde f(t) + h(x), \tag{2.3}$$

with $h\in L^2(\Omega)$ and

$$\widetilde f\in C^1(\mathbb{R}), \qquad \widetilde f \text{ bounded}, \qquad |\widetilde f'(t)| \leq C\big(1+|t|^{2^*-2}\big). \tag{$f_{10}$}$$

Then

$$\frac{f(x,t)}{t} \xrightarrow{\,t\to\pm\infty\,} \lambda \qquad \text{for a.e. } x\in\Omega. \tag{2.4}$$

Hence $f$ grows linearly in $t$ at infinity; this is an asymptotically linear problem.

¹ See 1.2 for the linear case, 1.5 for linear/affine growth, and the discussion of the graphs of $f=f(u)$.
² In fact, $\widetilde f$ bounded and $(f_6)$ would suffice.

Definition 2.8. Let $\Omega\subseteq\mathbb{R}^N$, $N\geq 3$, open and bounded, $q\in L^\infty(\Omega)$, $q\geq 0$, and let $\{\lambda_k\}_{k\in\mathbb{N}}$ be the sequence of eigenvalues of $-\Delta+q(x)$. Let $h\in L^2(\Omega)$. If $\lambda\neq\lambda_k$ for all $k\in\mathbb{N}$, then the Dirichlet boundary value problem

$$-\Delta u + q(x)u = \lambda u + \widetilde f(u) + h(x) \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega, \tag{2.5}$$

is said to be non-resonant. If $\lambda=\lambda_k$ for some $k$, then (2.5) is said to be resonant (or at resonance) with the $k$-th eigenvalue.

It is understood that $\widetilde f$ is such that (2.4) holds, for Definition 2.8 to make sense. Resonant problems are more difficult to treat; sometimes they may have no solutions (see Theorem 1.2, 2(ii) for the linear case). The Saddle Point Theorem is perfect for non-resonant problems. Recall that the case $\lambda<\lambda_1(-\Delta+q)$ was already studied in 1.5 and is treated by the Direct Method. Hence the interest is in $\lambda>\lambda_1$ but $\lambda\neq\lambda_k$ for all $k$.

Theorem 2.9. Let $\Omega\subseteq\mathbb{R}^N$, $N\geq 3$, be open and bounded, $q\in L^\infty(\Omega)$, $q\geq 0$, $h\in L^2(\Omega)$. Let $\{\lambda_k\}_{k\in\mathbb{N}}$ be the sequence of eigenvalues of $-\Delta+q(x)$, and assume that $(f_{10})$ holds, i.e. $\widetilde f\in C^1(\mathbb{R})$, $\widetilde f$ is bounded, and $|\widetilde f'(t)|\leq C(1+|t|^{2^*-2})$. If $\lambda\neq\lambda_k$ for all $k\in\mathbb{N}$, then the Dirichlet boundary value problem

$$-\Delta u + q(x)u = \lambda u + \widetilde f(u) + h(x) \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega,$$

has at least one weak solution $u\in H^1_0(\Omega)$.

Remark. The case $\widetilde f\equiv 0$ is Theorem 1.2, 2(i) (or part of it; here we get no uniqueness).

We will apply the Saddle Point Theorem 2.7. Hence we need to check the conditions for the functional $J\colon H^1_0(\Omega)\to\mathbb{R}$,

$$J(u) = \frac12\|u\|_q^2 - \frac\lambda2\int_\Omega u^2\,dx - \int_\Omega\widetilde F(u)\,dx - \int_\Omega hu\,dx, \tag{2.6}$$

with $\widetilde F(t)=\int_0^t\widetilde f(s)\,ds$. That $J\in C^{1,1}(H^1_0(\Omega))$ follows as usual from $(f_{10})$ (see the proof of Lemma 3.33 last semester). Hence it remains to check the geometry and the Palais-Smale condition.

Lemma 2.10. The functional $J$ in (2.6) satisfies $(PS)_c$ for all $c\in\mathbb{R}$ under the conditions of 2.9.

Proof. Let $\{u_k\}_{k\in\mathbb{N}}\subseteq H^1_0(\Omega)$ be a Palais-Smale sequence at level $c$, i.e. $J(u_k)\to c$ and $J'(u_k)\to 0$. We will prove that this implies that $\{u_k\}_{k\in\mathbb{N}}$ is bounded in $H^1_0(\Omega)$. From this the lemma will follow; the proof is as for Lemma 3.34 from last semester.³

³ See also [BS, p. 169]: boundedness of $\widetilde f$ is more than enough.

Assume for contradiction that, up to a subsequence, $\|u_k\|_q\to\infty$. Since $J'(u_k)\to 0$ in $H^1_0(\Omega)^*$, we have that for all $v\in H^1_0(\Omega)$ and all $k\in\mathbb{N}$ there exists $c_k=c_k(v)$ such that

$$J'(u_k)v = c_k\|v\|_q, \qquad c_k\to 0.$$

Recall that

$$J'(u_k)v = (u_k,v)_q - \lambda\int_\Omega u_kv\,dx - \int_\Omega\widetilde f(u_k)v\,dx - \int_\Omega hv\,dx. \tag{2.7}$$

Define $\psi_k := u_k/\|u_k\|_q$ (okay, since $\|u_k\|_q\to\infty$). Then $\{\psi_k\}_{k\in\mathbb{N}}\subseteq H^1_0(\Omega)$ is bounded, so, up to a subsequence, we may assume that there exists $\psi\in H^1_0(\Omega)$ such that $\psi_k\rightharpoonup\psi$. Now, dividing by $\|u_k\|_q$ in (2.7), we obtain

$$(\psi_k,v)_q - \lambda\int_\Omega\psi_kv\,dx - \frac1{\|u_k\|_q}\int_\Omega\widetilde f(u_k)v\,dx - \frac1{\|u_k\|_q}\int_\Omega hv\,dx = \frac{c_k\|v\|_q}{\|u_k\|_q}. \tag{2.8}$$

Now, since $\widetilde f$ is bounded, passing to the limit $k\to\infty$ in (2.8) gives

$$(\psi,v)_q - \lambda\int_\Omega\psi v\,dx = 0. \tag{2.9}$$

We will prove that $\psi\neq 0$. This will lead to a contradiction, since (2.9) then states that $\psi\in H^1_0(\Omega)\setminus\{0\}$ is a weak solution of

$$-\Delta\psi + q(x)\psi = \lambda\psi \ \text{ in } \Omega, \qquad \psi = 0 \ \text{ on } \partial\Omega,$$

which implies that $\lambda$ is an eigenvalue of $-\Delta+q(x)$, contradicting the non-resonance condition $\lambda\neq\lambda_k$ for all $k\in\mathbb{N}$.

Assume for contradiction that $\psi\equiv 0$. Since $J'(u_k)\to 0$ and $\|\psi_k\|_q=1$, we have $J'(u_k)\psi_k =: d_k\to 0$. Hence the expression (2.7) gives

$$(u_k,\psi_k)_q - \lambda\int_\Omega u_k\psi_k\,dx - \int_\Omega\widetilde f(u_k)\psi_k\,dx - \int_\Omega h\psi_k\,dx = d_k \to 0.$$

Dividing by $\|u_k\|_q$, this implies

$$\|\psi_k\|_q^2 - \lambda\int_\Omega\psi_k^2\,dx = \frac1{\|u_k\|_q}\int_\Omega\widetilde f(u_k)\psi_k\,dx + \frac1{\|u_k\|_q}\int_\Omega h\psi_k\,dx + \frac{d_k}{\|u_k\|_q} \to 0. \tag{2.10}$$

Hence, since $\psi_k\rightharpoonup\psi\equiv 0$, by compact embedding $\psi_k\to\psi\equiv 0$ in $L^2(\Omega)$. From (2.10) it then follows that $\|\psi_k\|_q^2\to 0$ as $k\to\infty$. This is a contradiction to $\|\psi_k\|_q=1$ for all $k$. Hence $\{u_k\}$ is bounded in $H^1_0(\Omega)$. □

Now we are going to prove that $J$ satisfies the geometric condition of the Saddle Point Theorem 2.7. For this we need the following lemma.

Lemma 2.11. Let $\Omega\subseteq\mathbb{R}^N$, $N\geq 3$, open and bounded, let $q\in L^\infty(\Omega)$, $q\geq 0$, let $\{\lambda_k\}_k$ be the eigenvalues in increasing order and $\{\varphi_k\}_{k\in\mathbb{N}}$ the eigenfunctions of $-\Delta+q$. Let

$$\|u\|_q^2 = \int_\Omega|\nabla u|^2\,dx + \int_\Omega qu^2\,dx, \qquad u\in H^1_0(\Omega).$$

Set $E_n := \operatorname{span}\{\varphi_1,\ldots,\varphi_n\}$. For all $u\in E_n$ we have

$$\|u\|_2^2 \geq \frac1{\lambda_n}\|u\|_q^2,$$

and for all $v\in E_n^\perp = \overline{\operatorname{span}\{\varphi_k \mid k\geq n+1\}}^{\,\|\cdot\|_q}$ we have

$$\|v\|_2^2 \leq \frac1{\lambda_{n+1}}\|v\|_q^2.$$

Recall (see Remark 1.13 (ii) last semester) that for all $u\in H^1_0(\Omega)$ we have $u=\sum_{k=1}^\infty\alpha_k\varphi_k$ in $L^2(\Omega)$ and $\|u\|_2^2=\sum_{k=1}^\infty\alpha_k^2$, with $\alpha_k=\int_\Omega u\varphi_k\,dx$. As well, $u=\sum_{k=1}^\infty\beta_k\,\frac{\varphi_k}{\sqrt{\lambda_k}}$ in $H^1_0(\Omega)$ and $\|u\|_q^2=\sum_{k=1}^\infty\beta_k^2$, with

$$\beta_k = \int_\Omega\nabla u\cdot\nabla\frac{\varphi_k}{\sqrt{\lambda_k}}\,dx + \int_\Omega qu\,\frac{\varphi_k}{\sqrt{\lambda_k}}\,dx.$$

Then $\{\varphi_k/\sqrt{\lambda_k}\}_k$ is an ONB of $H^1_0(\Omega)$ with respect to $(\cdot,\cdot)_q$. Here $\beta_k=\sqrt{\lambda_k}\,\alpha_k$.

Proof. For $u\in E_n$ and $v\in E_n^\perp$ we have

$$\int_\Omega u^2\,dx = \sum_{k=1}^n\alpha_k^2 = \sum_{k=1}^n\frac{\beta_k^2}{\lambda_k} \geq \frac1{\lambda_n}\sum_{k=1}^n\beta_k^2 = \frac1{\lambda_n}\|u\|_q^2,$$

$$\int_\Omega v^2\,dx = \sum_{k=n+1}^\infty\alpha_k^2 = \sum_{k=n+1}^\infty\frac{\beta_k^2}{\lambda_k} \leq \frac1{\lambda_{n+1}}\sum_{k=n+1}^\infty\beta_k^2 = \frac1{\lambda_{n+1}}\|v\|_q^2. \qquad\Box$$

Lemma 2.12. The functional $J$ in (2.6) satisfies: there exists $R>0$ such that

$$\max_{u\in\partial B^n_R}J(u) < \inf_{u\in V}J(u),$$

with, as above, $E_n=\operatorname{span}\{\varphi_1,\ldots,\varphi_n\}$, where $n$ is such that $\lambda_n<\lambda<\lambda_{n+1}$, and $V=E_n^\perp$ (in $H^1_0(\Omega)$, with respect to $(\cdot,\cdot)_q$); $\partial B^n_R\subseteq E_n$ denotes the boundary in $E_n$ of $B^n_R=\{x\in E_n \mid \|x\|_q\leq R\}$.

Proof. Recall that $\lambda\neq\lambda_k$ for all $k\in\mathbb{N}$, $\lambda>\lambda_1$ and $\lambda_k\to\infty$. Hence there exists an $n\in\mathbb{N}$ such that $\lambda_n<\lambda<\lambda_{n+1}$. For $u\in E_n$, by 2.11, we have

$$J(u) = \frac12\|u\|_q^2 - \frac\lambda2\|u\|_2^2 - \int_\Omega\widetilde F(u)\,dx - \int_\Omega hu\,dx \leq \frac12\|u\|_q^2 - \frac{\lambda}{2\lambda_n}\|u\|_q^2 + C\|u\|_q,$$

since $\widetilde f$ is bounded and because of the Sobolev embedding and the equivalence of norms on $H^1_0(\Omega)$. Hence

$$J(u) \leq \frac12\Big(1-\frac\lambda{\lambda_n}\Big)\|u\|_q^2 + C\|u\|_q. \tag{2.11}$$

Recall that $B^n_R$ is the closed ball with centre $0$ and radius $R$ in $E_n$. Hence (2.11) implies that for all $u\in\partial B^n_R$,

$$J(u) \leq \frac12\Big(1-\frac\lambda{\lambda_n}\Big)R^2 + CR;$$

since $\lambda-\lambda_n>0$, it follows that $\max_{u\in\partial B^n_R}J(u)\to-\infty$ as $R\to\infty$. On the other hand, for all $u\in V=E_n^\perp$, by 2.11, we have

$$J(u) = \frac12\|u\|_q^2 - \frac\lambda2\|u\|_2^2 - \int_\Omega\widetilde F(u)\,dx - \int_\Omega hu\,dx \geq \frac12\|u\|_q^2 - \frac{\lambda}{2\lambda_{n+1}}\|u\|_q^2 - C\|u\|_q = \frac12\Big(1-\frac\lambda{\lambda_{n+1}}\Big)\|u\|_q^2 - C\|u\|_q = g(\|u\|_q)$$

with $g(x)=Ax^2-Cx$, $A>0$; since $\lambda/\lambda_{n+1}<1$, this implies $\inf_{u\in V}J(u)>-\infty$. Hence, for $R$ large enough, $\max_{u\in\partial B^n_R}J(u)<\inf_{u\in V}J(u)$. □

We now move on to a problem at resonance (recall 2.8), i.e. $\lambda=\lambda_k$ for some $k\in\mathbb{N}$. This is harder (longer...) and demands imposing further conditions on the data $\widetilde f$ and $h$ in the nonlinearity $f(x,t)=\lambda t+\widetilde f(t)+h(x)$ (see (2.3)). Typically,⁴ given $\widetilde f$ (with some conditions), one can prove existence of weak solutions to the Dirichlet boundary value problem under certain conditions on $h\in L^2(\Omega)$.

⁴ Compare also the linear (inhomogeneous) case in 1.2 (2)(ii).

Here we will study the Landesman-Lazer condition. In this section we throughout assume⁵ $q\equiv 0$. Hence we will study

$$-\Delta u = \lambda u + \widetilde f(u) + h(x) \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega. \tag{2.12}$$

⁵ All we are doing would also work with the usual $q\in L^\infty(\Omega)$, $q\geq 0$, or even $\lambda_1(-\Delta+q)>0$.

This is a nonlinear generalisation ($\widetilde f\not\equiv 0$) of the inhomogeneous linear problem (see 1.2)

$$-\Delta u = \lambda u + h(x) \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega. \tag{2.13}$$

Recall 1.2 (2): if $\lambda$ is an eigenvalue of $-\Delta$ in $H^1_0(\Omega)$, then (2.13) has a solution if (in fact, iff...) $h\perp\ker(-\Delta-\lambda)$. Changing $\widetilde f$ (from $\widetilde f\equiv 0$ in (2.13)) makes the condition on $h$ more complicated and will involve $\widetilde f$.

Definition 2.13 (Landesman-Lazer condition). Let $\{\lambda_k\}_{k\in\mathbb{N}}$ denote the eigenvalues and $\{\varphi_k\}_{k\in\mathbb{N}}$ the eigenfunctions of $-\Delta$ in $H^1_0(\Omega)$ (as defined in 1.2 (i)). Let $\widetilde f$ satisfy $(f_{10})$, i.e. $\widetilde f\in C^1(\mathbb{R})$, $\widetilde f$ bounded, $|\widetilde f'(t)|\leq C(1+|t|^{2^*-2})$, and assume:

(i) The limits

$$f_l := \lim_{t\to-\infty}\widetilde f(t), \qquad f_r := \lim_{t\to+\infty}\widetilde f(t)$$

exist.

(ii) $f_l > f_r$.

(iii) For some $n,k\in\mathbb{N}$ we have $\lambda_n < \lambda_{n+1} = \lambda_{n+2} = \cdots = \lambda_{n+k} < \lambda_{n+k+1}$.

(iv) $\lambda = \lambda_{n+1} = \cdots = \lambda_{n+k}$.

Let $E := \operatorname{span}\{\varphi_{n+1},\ldots,\varphi_{n+k}\}\subseteq H^1_0(\Omega)$. Then the Landesman-Lazer condition for $h\in L^2(\Omega)$ is: for every $\varphi\in E\setminus\{0\}$,

$$f_r\int_\Omega\varphi^-\,dx - f_l\int_\Omega\varphi^+\,dx < \int_\Omega h\varphi\,dx < f_l\int_\Omega\varphi^-\,dx - f_r\int_\Omega\varphi^+\,dx, \tag{LL}$$

with $\varphi^+=\max\{\varphi,0\}$ and $\varphi^-=-\min\{\varphi,0\}$ (so that $\varphi=\varphi^+-\varphi^-$ and $|\varphi|=\varphi^++\varphi^-$).

Remarks 2.14. (1) Concerning (ii): the opposite case $f_r>f_l$ can also be treated.

(2) The assumptions $(f_{10})$, (iii) and (iv) say that the problem (2.12) is at resonance (with a multiple eigenvalue if $k\geq 2$).

(3) Condition (iii) says that the eigenvalue $\lambda_{n+1}$ is $k$-fold degenerate and the $k$-dimensional subspace $E$ of $H^1_0(\Omega)$ consists of exactly all (weak) solutions of $-\Delta\varphi=\lambda\varphi$ (recall (iv)).

(4) Note that $\varphi\in E$ implies $-\varphi\in E$. Hence, any one of the two inequalities in (LL) implies the other.

(5) The motivation/meaning of (LL) is the following.

(a) Assume that $f_l=-f_r$, where $f_r<0$. Then (LL) becomes

$$-f_l\int_\Omega\varphi^-\,dx - f_l\int_\Omega\varphi^+\,dx < \int_\Omega h\varphi\,dx < f_l\int_\Omega\varphi^-\,dx + f_l\int_\Omega\varphi^+\,dx,$$

so, since $\varphi^++\varphi^-=|\varphi|$,

$$-f_l\int_\Omega|\varphi|\,dx < \int_\Omega h\varphi\,dx < f_l\int_\Omega|\varphi|\,dx.$$

Note that this is satisfied if $h\perp E=\ker(-\Delta-\lambda)$. Compare again with 1.2 (2)(ii): it is a generalised orthogonality condition. The $L^2$-scalar product of $h$ and $\varphi$ (for all $\varphi\in E\setminus\{0\}$) must not be too large (but can be non-zero), compared to the limits of $\widetilde f$ at $\pm\infty$ (see (i) in 2.13).

(b) In fact, under slightly stronger conditions, assumption (LL) becomes a necessary condition for the existence of solutions to (2.12). Assume that $f_r<\widetilde f(t)<f_l$ for all $t\in\mathbb{R}$. Then we claim that the existence of a weak solution to (2.12) implies (LL). To see this, let $u$ solve (2.12) and take any $\varphi\in E\setminus\{0\}$ to get

$$0 = \int_\Omega\nabla u\cdot\nabla\varphi\,dx - \lambda\int_\Omega\varphi u\,dx - \int_\Omega\widetilde f(u)\varphi\,dx - \int_\Omega h\varphi\,dx = -\int_\Omega\widetilde f(u)\varphi\,dx - \int_\Omega h\varphi\,dx.$$

Hence

$$\int_\Omega h\varphi\,dx = -\int_\Omega\widetilde f(u)\varphi\,dx = \int_\Omega\widetilde f(u)\varphi^-\,dx - \int_\Omega\widetilde f(u)\varphi^+\,dx < f_l\int_\Omega\varphi^-\,dx - f_r\int_\Omega\varphi^+\,dx,$$

which is one of the (and hence the other) conditions in (LL). One needs that $\varphi^-$ and $\varphi^+$ are different from $0$ on a set of positive measure.

Theorem 2.15. Let $\Omega\subseteq\mathbb{R}^N$, $N\geq 3$, be open and bounded. Assume that $\widetilde f\colon\mathbb{R}\to\mathbb{R}$ satisfies $(f_{10})$ and that the limits $f_l$ and $f_r$ in 2.13 (i) exist and satisfy 2.13 (ii). Let $\lambda\in\mathbb{R}$ be as in 2.13 (iii)-(iv). Then, for any $h\in L^2(\Omega)$ satisfying (LL) in 2.13, there exists (at least) one weak solution $u\in H^1_0(\Omega)$ of

$$-\Delta u = \lambda u + \widetilde f(u) + h(x) \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega.$$

Weak solutions of (2.12) are, as always, critical points of

$$J(u) = \frac12\int_\Omega|\nabla u|^2\,dx - \frac\lambda2\int_\Omega u^2\,dx - \int_\Omega\widetilde F(u)\,dx - \int_\Omega hu\,dx, \tag{2.14}$$

with $\widetilde F(t)=\int_0^t\widetilde f(s)\,ds$. We will prove 2.15 by applying the Saddle Point Theorem 2.7. Recall that it is sufficient to prove:

(1) $J\in C^{1,1}(H^1_0(\Omega))$.
(2) $J$ satisfies (PS).
(3) $J$ satisfies the geometry in the Saddle Point Theorem.
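For later reference, under $(f_{10})$ the derivative of $J$ in (2.14) is the usual one,

$$J'(u)v = \int_\Omega\nabla u\cdot\nabla v\,dx - \lambda\int_\Omega uv\,dx - \int_\Omega\widetilde f(u)v\,dx - \int_\Omega hv\,dx, \qquad u,v\in H^1_0(\Omega);$$

this is the expression that enters (2.15) in the proof of Lemma 2.16 below.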

Now, (1) holds as usual ($(f_{10})$ is enough).

Lemma 2.16. The functional $J$ in (2.14) satisfies $(PS)_c$ for every $c\in\mathbb{R}$ under the conditions of 2.15.

Proof. Recall that $\{\varphi_k\}_k\subseteq H^1_0(\Omega)$ are the eigenfunctions of $-\Delta$ in $H^1_0(\Omega)$ corresponding to the eigenvalues $\{\lambda_k\}_k$; we let $E:=\operatorname{span}\{\varphi_{n+1},\ldots,\varphi_{n+k}\}$, where $\lambda_n<\lambda_{n+1}=\cdots=\lambda_{n+k}<\lambda_{n+k+1}$. We write $H^1_0(\Omega)=E\oplus E^\perp$ and $u=w+v$, $w\in E$, $v\in E^\perp$. Let $\{u_k\}_{k\in\mathbb{N}}$ be a Palais-Smale sequence for $J$ at level $c\in\mathbb{R}$, i.e.

$$J(u_k)\to c, \qquad J'(u_k)\to 0.$$

Recall that it is enough to prove that $\{u_k\}_{k\in\mathbb{N}}$ is bounded in $H^1_0(\Omega)$; then a subsequence converges strongly, see Lemma 3.34 last semester and the proof of Lemma 2.10. As above, we write $u_k=w_k+v_k$, $w_k\in E$, $v_k\in E^\perp$, $k\in\mathbb{N}$. Then, by definition of $E$ and $\lambda$,

$$\int_\Omega\nabla w_k\cdot\nabla z\,dx - \lambda\int_\Omega w_kz\,dx = 0 \qquad \forall\,z\in H^1_0(\Omega).$$

As in the proof of 2.10, from $J'(u_k)\to 0$ in $(H^1_0(\Omega))^*$ we have: for every $z\in H^1_0(\Omega)$ and every $k\in\mathbb{N}$ there is $c_k=c_k(z)$ such that $J'(u_k)z=c_k\|z\|$, $c_k\to 0$ as $k\to\infty$. Here $\|z\|:=\|\nabla z\|_{L^2(\Omega)}$ is a norm on $H^1_0(\Omega)$ by Poincaré. Recall that for each $z\in H^1_0(\Omega)$ we have

$$J'(u_k)z = \int_\Omega(\nabla w_k+\nabla v_k)\cdot\nabla z\,dx - \lambda\int_\Omega w_kz\,dx - \lambda\int_\Omega v_kz\,dx - \int_\Omega\widetilde f(u_k)z\,dx - \int_\Omega hz\,dx$$
$$= \int_\Omega\nabla v_k\cdot\nabla z\,dx - \lambda\int_\Omega v_kz\,dx - \int_\Omega\widetilde f(u_k)z\,dx - \int_\Omega hz\,dx = c_k\|z\|. \tag{2.15}$$

Repeating the argument in the proof of 2.10 (there for $u_k$, or rather $\psi_k=u_k/\|u_k\|$), one shows that $\{v_k\}_k$ is bounded⁶ in $H^1_0(\Omega)$. Hence it remains to prove that $\{w_k\}_k$ is bounded in $H^1_0(\Omega)$. This was not a problem in the proof of 2.10, since there $E=\{0\}$.

⁶ If it were unbounded, a subsequence would satisfy $\|v_k\|\to\infty$, $k\to\infty$, and with $\psi_k:=v_k/\|v_k\|$ one sees that $\psi_k$ converges to an eigenfunction for $\lambda$ in $E^\perp$, which is impossible by definition of $E$, since the limit is non-zero.

Assume for contradiction that $\|w_k\|\to\infty$, $k\to\infty$. Then $\{w_k/\|w_k\|\}_{k\in\mathbb{N}}\subseteq E\subseteq H^1_0(\Omega)$ is bounded, and since $\dim E=k<\infty$ there exists a subsequence converging in the $\|\cdot\|$-norm (i.e. in $H^1_0(\Omega)$) to some $\varphi\in E$ (we get strong convergence since $\dim E<\infty$). We can assume that (up to a subsequence) $\frac{w_k(x)}{\|w_k\|}\to\varphi(x)$ a.e. in $\Omega$. Note that $\varphi\neq 0$, since $\|w_k/\|w_k\|\,\|=1$ and the convergence is in norm. As $\{v_k\}_k$ is bounded in $H^1_0(\Omega)$, we can (up to a subsequence) assume that $v_k(x)\to v(x)$ a.e. in $\Omega$ (embedding and subsequence). Hence

$$u_k(x) = \|w_k\|\,\frac{w_k(x)}{\|w_k\|} + v_k(x) \to \begin{cases}+\infty & \text{where }\varphi(x)>0,\\ -\infty & \text{where }\varphi(x)<0,\end{cases}\qquad\text{a.e.}$$

We used that $\varphi$ is (a.e. in $\Omega$) different from $0$: $\varphi\in E$, hence $\varphi$ is an eigenfunction of $-\Delta$,

$$-\Delta\varphi = \lambda\varphi \ \text{ in } \Omega, \qquad \varphi = 0 \ \text{ on } \partial\Omega,$$

which implies $\varphi\neq 0$ a.e. in $\Omega$.

Define $f^\infty\colon\Omega\to\mathbb{R}$ by

$$f^\infty(x) = \begin{cases}f_r & \text{where }\varphi(x)>0,\\ f_l & \text{where }\varphi(x)<0.\end{cases}$$

Then, since $u_k(x)\to\pm\infty$ a.e. as above, $\widetilde f(u_k(x))\to f^\infty(x)$ for a.e. $x\in\Omega$. Now, $\widetilde f$ is bounded and so is $\Omega$, so, choosing $z=\varphi$ in (2.15), dominated convergence gives $\widetilde f(u_k)\to f^\infty$ in every $L^p(\Omega)$, $p\in[1,\infty)$, and letting $k\to\infty$ we get

$$\int_\Omega f^\infty\varphi\,dx + \int_\Omega h\varphi\,dx = 0, \tag{2.16}$$

where we recall that $\int_\Omega\nabla v_k\cdot\nabla\varphi\,dx - \lambda\int_\Omega v_k\varphi\,dx = 0$, since $\varphi\in E$. Now (2.16) reads

$$\int_\Omega h\varphi\,dx = -\int_\Omega f^\infty\varphi\,dx = f_l\int_\Omega\varphi^-\,dx - f_r\int_\Omega\varphi^+\,dx$$

by definition of $f^\infty$. This contradicts the Landesman-Lazer condition (LL). Hence the sequence $\{w_k\}_k\subseteq H^1_0(\Omega)$ is bounded, so $\{u_k\}_k\subseteq H^1_0(\Omega)$ is bounded; the rest of the proof is as usual. □

Lemma 2.17. The functional $J$ in (2.14) satisfies: there exists an $R>0$ such that

$$\max_{u\in\partial B^n_R}J(u) < \inf_{u\in V}J(u),$$

with $E_n=\operatorname{span}\{\varphi_1,\ldots,\varphi_n\}$ and $V=E_n^\perp=\overline{\operatorname{span}\{\varphi_k \mid k\geq n+1\}}$. Here $\partial B^n_R\subseteq E_n$ is the boundary in $E_n$ of $B^n_R=\{x\in E_n \mid \|x\|\leq R\}$.

Remark. Note that the space $E$ used in the proof of Lemma 2.16 is now part of $V$; this is what makes the following proof more complicated than that of 2.12.

As in the proof of 2.12, we prove the following two results.

Lemma 2.18. The functional $J$ in (2.14) satisfies:

(a) $\max_{u\in\partial B^n_R}J(u)\to-\infty$ as $R\to\infty$;

(b) $\inf_{u\in V}J(u)>-\infty$.

Proof. (a) This is exactly as in 2.12, since we still have (by 2.11) $\|\nabla u\|_2^2\leq\lambda_n\|u\|_2^2$ for all $u\in E_n$.

(b) Let $V=E\oplus F$ with $E=\operatorname{span}\{\varphi_{n+1},\ldots,\varphi_{n+k}\}$ (as before) and

$$F = \overline{\operatorname{span}\{\varphi_j \mid j\geq n+k+1\}}.$$

Hence $F$ is the orthogonal complement of $E$ in $V$, and for $u\in V$ we write $u=w+v$ with $w\in E$ and $v\in F$. Note that $w\perp v$ both in $H^1_0(\Omega)$ and in $L^2(\Omega)$, since $\varphi_k\perp\varphi_j$ for all $k\neq j$ in both spaces; see the proof of 2.11. Also (by 2.11) we have

$$\|\nabla v\|_2^2 \geq \lambda_{n+k+1}\|v\|_2^2 \qquad \forall\,v\in F.$$

Hence, for all $u\in V$, writing $u=w+v$ as above and using $\int_\Omega|\nabla w|^2\,dx=\lambda\int_\Omega w^2\,dx$, we have

$$J(u) = \frac12\int_\Omega|\nabla u|^2\,dx - \frac\lambda2\int_\Omega u^2\,dx - \int_\Omega\widetilde F(u)\,dx - \int_\Omega hu\,dx \geq \frac12\Big(1-\frac{\lambda}{\lambda_{n+k+1}}\Big)\int_\Omega|\nabla v|^2\,dx - \int_\Omega\widetilde F(u)\,dx - \int_\Omega hu\,dx$$
$$=: \mu\|v\|^2 - \int_\Omega\widetilde F(u)\,dx - \int_\Omega hu\,dx. \tag{2.17}$$

We have $\mu>0$ by the assumption on $\lambda$ and the choice of $n,k$. As in the proof of 2.12, by the boundedness of $\widetilde f$,

$$\Big|\int_\Omega\widetilde F(u)\,dx\Big| + \Big|\int_\Omega hu\,dx\Big| \leq C\|u\| \leq C\|w\| + C\|v\|,$$

so

$$J(u) \geq \mu\|v\|^2 - C\|v\| - C\|w\|.$$

Assume for contradiction that there exists a sequence $\{u_k\}_k\subseteq H^1_0(\Omega)$ with decomposition $u_k=w_k+v_k\in V$ and such that $J(u_k)\to-\infty$. First of all, we claim⁷ that $\|w_k\|/\|v_k\|\to\infty$. To see this, we have

$$J(u_k) \geq \mu\|v_k\|^2 - C\|v_k\| - C\|w_k\|, \tag{2.18}$$

hence $\|w_k\|\to\infty$. If $\{v_k\}_{k\in\mathbb{N}}$ is bounded, the claim follows. If $\|v_k\|\to\infty$ (or rather, along a subsequence), then

$$-\infty \leftarrow J(u_k) \geq \mu\|v_k\|^2 - C\|v_k\| - C\|w_k\| = \|v_k\|\Big(\mu\|v_k\| - C - C\,\frac{\|w_k\|}{\|v_k\|}\Big),$$

and the claim follows. As in the proof of 2.16, we may assume that there exist $\varphi\in E\setminus\{0\}$ and $v\in F$ such that (up to a subsequence)

$$\frac{w_k(x)}{\|w_k\|}\to\varphi(x), \qquad \frac{v_k(x)}{\|v_k\|}\to v(x) \qquad\text{a.e. in }\Omega.$$

Since $\{w_k/\|w_k\|\}_k\subseteq E$ is bounded and $\|v_k\|/\|w_k\|\to 0$, the strong convergence of $w_k/\|w_k\|$ gives

$$\frac{u_k(x)}{\|w_k\|} = \frac{w_k(x)}{\|w_k\|} + \frac{\|v_k\|}{\|w_k\|}\,\frac{v_k(x)}{\|v_k\|} \to \varphi(x) \qquad\text{a.e.},$$

since the last term converges to $0$. That $\varphi\neq 0$ holds follows since $\dim E<\infty$ (see 2.16). Hence

$$u_k(x)\to\begin{cases}+\infty & \text{where }\varphi(x)>0,\\ -\infty & \text{where }\varphi(x)<0,\end{cases}\qquad\text{and}\qquad \frac1{\|w_k\|}\,\widetilde F(u_k(x))\to f^\infty(x)\,\varphi(x) \quad\text{a.e.},$$

⁷ We assume $v_k\neq 0$ for all but finitely many $k$; if not, then (up to a subsequence) $v_k=0$ for all $k$. Still, (2.18) shows that $\|w_k\|\to\infty$, and the rest of the proof carries through, replacing $v_k$ by $0$ and $u_k$ by $w_k$; the denominators are no problem at the respective places.

with (as in the proof of 2.16) $f^\infty\colon\Omega\to\mathbb{R}$,

$$f^\infty(x) = \begin{cases}f_r & \text{where }\varphi(x)>0,\\ f_l & \text{where }\varphi(x)<0.\end{cases}$$

To see this, use l'Hôpital (recall that $\widetilde f$ is bounded):

$$\lim_{t\to-\infty}\frac{\widetilde F(t)}{t} = \lim_{t\to-\infty}\frac{\int_0^t\widetilde f(s)\,ds}{t} = \lim_{t\to-\infty}\frac{\widetilde f(t)}{1} = f_l, \tag{2.19}$$

and similarly $\lim_{t\to+\infty}\widetilde F(t)/t = f_r$. Note that for almost every $x$ in $\{y \mid \varphi(y)>0\}$ and for $k$ large, we have

$$\frac1{\|w_k\|}\,\widetilde F(u_k(x)) = \frac{u_k(x)}{\|w_k\|}\cdot\frac{\widetilde F(u_k(x))}{u_k(x)} = \Big(\frac{w_k(x)}{\|w_k\|} + \frac{\|v_k\|}{\|w_k\|}\,\frac{v_k(x)}{\|v_k\|}\Big)\frac{\widetilde F(u_k(x))}{u_k(x)}.$$

From earlier, for $k\to\infty$, we have

$$\frac{w_k(x)}{\|w_k\|}\to\varphi(x), \qquad \frac{v_k(x)}{\|w_k\|} = \frac{\|v_k\|}{\|w_k\|}\,\frac{v_k(x)}{\|v_k\|}\to 0\cdot v(x) = 0$$

almost everywhere. By (2.19), $\widetilde F(u_k(x))/u_k(x)\to f_r$ a.e. in $\{y \mid \varphi(y)>0\}$; also $\widetilde F(u_k(x))/u_k(x)\to f_l$ a.e. in $\{y \mid \varphi(y)<0\}$. This proves the claim.

The claim now is that we even have the $L^1(\Omega)$-convergence

$$\frac1{\|w_k\|}\,\widetilde F(u_k) \to f^\infty\varphi.$$

The sequence $\{u_k/\|u_k\|\}_k\subseteq H^1_0(\Omega)$ is bounded, hence (by compact embedding) has a strongly convergent subsequence in $L^1(\Omega)$, and (1.3 (ii) last semester) there exists a $u\in L^1(\Omega)$ such that $\big|\frac{u_k(x)}{\|u_k\|}\big|\leq u(x)$ a.e. in $\Omega$. Recall that $\{\|u_k\|/\|w_k\|\}_k\subseteq\mathbb{R}$ is bounded, since $\|w_k\|/\|v_k\|\to\infty$ and $u_k=w_k+v_k$ (triangle inequality). Hence, since $|\widetilde F(t)|\leq b|t|$ for all $t$ and some $b>0$ (by boundedness of $\widetilde f$), we have

$$\frac{|\widetilde F(u_k(x))|}{\|w_k\|} \leq b\,\frac{|u_k(x)|}{\|w_k\|} = b\,\frac{\|u_k\|}{\|w_k\|}\,\frac{|u_k(x)|}{\|u_k\|} \leq C\,u(x) \in L^1(\Omega).$$

Hence, by Lebesgue's dominated convergence theorem, the claim follows.

As the sequence $\{u_k/\|w_k\|\}_k\subseteq H^1_0(\Omega)$ is bounded (see above), up to a subsequence it converges to $\varphi$, weakly in $H^1_0(\Omega)$ and strongly in $L^2(\Omega)$ (compact embedding). It also follows that

$$J(u_k) \geq \mu\|v_k\|^2 - \int_\Omega\widetilde F(u_k)\,dx - \int_\Omega hu_k\,dx \geq -\|w_k\|\Big(\int_\Omega\frac1{\|w_k\|}\widetilde F(u_k)\,dx + \int_\Omega h\,\frac{u_k}{\|w_k\|}\,dx\Big)$$
$$= -\|w_k\|\Big(\int_\Omega f^\infty\varphi\,dx + \int_\Omega h\varphi\,dx + o(1)\Big) = -\|w_k\|\Big(f_r\int_\Omega\varphi^+\,dx - f_l\int_\Omega\varphi^-\,dx + \int_\Omega h\varphi\,dx + o(1)\Big).$$

By the assumption (LL), we have

$$f_r\int_\Omega\varphi^+\,dx - f_l\int_\Omega\varphi^-\,dx + \int_\Omega h\varphi\,dx < 0 \qquad \forall\,\varphi\in E\setminus\{0\}.$$

Hence $J(u_k)\to+\infty$, contradicting the choice of $\{u_k\}_k$. □
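To make the Landesman-Lazer condition concrete, here is a model perturbation (a standard illustration) satisfying all assumptions of Theorem 2.15: take $\widetilde f(t)=-\arctan t$. It is $C^1$ and bounded with $|\widetilde f'(t)|=\frac1{1+t^2}\leq 1$, so $(f_{10})$ holds, and $f_l=\frac\pi2>-\frac\pi2=f_r$. By Remark 2.14 (5)(a), (LL) reduces to

$$\Big|\int_\Omega h\varphi\,dx\Big| < \frac\pi2\int_\Omega|\varphi|\,dx \qquad \forall\,\varphi\in E\setminus\{0\},$$

which holds in particular whenever $h\perp E$ in $L^2(\Omega)$. For such $h$, Theorem 2.15 gives a weak solution of $-\Delta u=\lambda u-\arctan u+h(x)$ at resonance $\lambda=\lambda_{n+1}=\cdots=\lambda_{n+k}$.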

2.4 Outlook

Often, more generally (?!), one is interested in the "nonlinear eigenvalue problem"

$$-\Delta u + q(x)u = \lambda f(u) \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega. \tag{2.20}$$

The interest is in the question of existence of one or more weak solutions in $H^1_0(\Omega)$ and their dependence on $\lambda\in\mathbb{R}$. Typically $f(0)=0$, so $u\equiv 0$ is always a solution. Most methods come from bifurcation theory (which we might roughly discuss later). However, one can do things with variational methods.

Example (of what possibly to expect). Consider

$$-\Delta u + q(x)u = \lambda|u|^{p-2}u \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega, \tag{2.21}$$

for $\Omega$ open, bounded, $q\in L^\infty(\Omega)$, $q\geq 0$, $p\in(2,2^*)$ as usual. By Corollary 1.10 (re-reading the proof), for every $\lambda>0$ there exists a weak solution $u\neq 0$ of (2.21). Also, $u\equiv 0$ is a solution. Note that also $\lambda<0$ was covered, in Corollary 1.4; there the problem has no non-trivial solutions ($u\equiv 0$ still is a solution). One can prove (we will discuss this later) that $u\geq 0$ for the non-trivial solution found in Corollary 1.10.

Let $u$ be a weak solution of (2.21) for $\lambda=1$. Then $u_\lambda := \lambda^{-\frac1{p-2}}u$ is a solution of (2.21) (for $\lambda$). Note that $\{u_\lambda\}_{\lambda>0}$ satisfies

$$\|u_\lambda\| \xrightarrow{\,\lambda\to 0\,} \infty, \qquad \|u_\lambda\| \xrightarrow{\,\lambda\to\infty\,} 0$$

("dependence of the solution on $\lambda$"). Such a thing holds more generally:

Theorem 2.19. Let $\Omega\subseteq\mathbb{R}^N$ be open and bounded, $q\in L^\infty(\Omega)$, $q\geq 0$, and assume $f\colon\mathbb{R}\to\mathbb{R}$ admits the existence of some $\mu>2$ such that

$$f(t)t \geq \mu F(t) \qquad \forall\,t\in\mathbb{R}. \tag{$f_{11}$}$$

Furthermore, assume that there exist $p\in(2,2^*)$, $C>0$ such that

$$|f(t)-f(s)| \leq C|t-s|\big(|s|+|t|+1\big)^{p-2}, \tag{$f_6$}$$

and that we have

$$\lim_{t\to 0}\frac{f(t)}{t} = 0. \tag{$f_7$}$$

Then:

(i) If there exist $r>0$, $M_1>0$ and $t_1>0$ such that $f(t)\geq M_1t^{r-1}$ for all $t\in[0,t_1]$, then, for all $\lambda>0$, there exists a non-trivial weak solution $u_\lambda\in H^1_0(\Omega)$ with $u_\lambda\geq 0$ and $\|u_\lambda\|\to 0$ as $\lambda\to\infty$.

(ii) If $F(t)>0$ for all $t>0$, then, for all $\lambda>0$, there exists a non-trivial weak solution $u_\lambda\in H^1_0(\Omega)$ with $u_\lambda\geq 0$ and $\|u_\lambda\|\to\infty$ as $\lambda\to 0$.

This (when both hold) gives a behaviour as for (2.21). Note also that

$$f(t) := \begin{cases}t^{p-1}+t^{r-1}, & t>0,\\ 0, & t\leq 0,\end{cases} \qquad 2<p<r<2^*,$$

is an example that 2.19 covers which is not homogeneous (in $t$), as (2.21) is.

Proof (idea). The solution is given by the Mountain Pass Theorem (in both cases). The hard part is the asymptotics. This is done by constructing particular minimax classes for (i) and (ii):

(i) $\Gamma := \{\gamma([0,1]) \mid \gamma\colon[0,1]\to H^1_0(\Omega) \text{ continuous},\ \gamma(0)=0,\ \gamma(1)=\psi\}$ with $\psi$ fixed, chosen such that $\psi\in H^1_0(\Omega)\setminus\{0\}$ and $0\leq\psi(x)\leq t_1$ for all $x\in\Omega$, with $t_1$ as in (i).

(ii) For every $\lambda>0$,

$$\Gamma_\lambda := \{\gamma([0,1]) \mid \gamma\colon[0,1]\to H^1_0(\Omega) \text{ continuous},\ \gamma(0)=0,\ \gamma(1)=t_\lambda\psi\}$$

with $\psi\in H^1_0(\Omega)\setminus\{0\}$ fixed, $\psi\geq 0$, and $t_\lambda$ chosen (!) such that $J_\lambda(t_\lambda\psi)<0$ for the functional $J_\lambda$ corresponding to (2.20),

$$J_\lambda(u) = \frac12\|u\|_q^2 - \lambda\int_\Omega F(u)\,dx.$$

Another example of what can happen (other than in 2.19 (i) and (ii)) is to consider, for $1<r<2<s<2^*$,

$$f(t) := \begin{cases}t^{s-1}, & t\in[0,1],\\ t^{r-1}, & t\geq 1,\\ 0, & t\leq 0.\end{cases} \tag{2.22}$$

Let $F$ and $J_\lambda$ be as above.

Proposition 2.20. Let $\Omega\subseteq\mathbb{R}^N$ open and bounded, $N\geq 3$, $q\in L^\infty(\Omega)$, $q\geq 0$. Assume $u_\lambda\in H^1_0(\Omega)$, $u_\lambda\geq 0$, is a weak solution of

$$-\Delta u + q(x)u = \lambda f(u) \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega, \tag{2.23}$$

with $f$ as in (2.22) and $\lambda\in[0,\lambda_1]$, where $\lambda_1$ is the first eigenvalue of $-\Delta+q$. Then $u_\lambda\equiv 0$, i.e. the only non-negative solution of (2.23) is the trivial one.

Theorem 2.21. Let $\Omega\subseteq\mathbb{R}^N$, $N\geq 3$, be open and bounded, $q\in L^\infty(\Omega)$, $q\geq 0$, and $f$ as in (2.22) above. Then there exists a $\lambda^*>0$ such that for every $\lambda>\lambda^*$ there are weak solutions $u_\lambda, v_\lambda\in H^1_0(\Omega)$, $u_\lambda\geq 0$, $v_\lambda\geq 0$, of (2.23), and

$$J_\lambda(v_\lambda) = \min_{u\in H^1_0(\Omega)}J_\lambda(u) < 0 < \alpha_\lambda \leq J_\lambda(u_\lambda) \tag{2.24}$$

for some $\alpha_\lambda$. Here $u_\lambda$ is a Mountain Pass critical point. Moreover, $\|v_\lambda\|_q\to\infty$ and $\|u_\lambda\|_q\to 0$ as $\lambda\to\infty$.

Proof (idea). For $v_\lambda$: it can be shown that $\inf_{u\in H^1_0(\Omega)}J_\lambda(u)\to-\infty$ as $\lambda\to\infty$. The Direct Method for $\lambda>\lambda^*$ (with $\lambda^*>0$ such that $\inf_u J_\lambda(u)<0$ for all $\lambda>\lambda^*$) gives a non-trivial critical point/minimiser $v_\lambda$. For $u_\lambda$ and $\lambda$ large, there exist $\alpha_\lambda,\rho_\lambda>0$ such that $\|u\|_q=\rho_\lambda$ implies $J_\lambda(u)\geq\alpha_\lambda>0$. Then use the Mountain Pass Theorem to get a non-trivial solution. □

Note. We will return to the positivity $v_\lambda\geq 0$, $u_\lambda\geq 0$; that $v_\lambda\neq 0$, $u_\lambda\neq 0$ follows from (2.24). I.e., for $\lambda>\lambda^*$ fixed, we have the picture [PIC]. These types of problems are mostly studied using bifurcation techniques.

3 Constrained Minimization

We shall briefly discuss this topic, only to study problems which are not really constrained (but the methods work). The goal is:

Theorem 3.1. Let $\Omega\subseteq\mathbb{R}^N$, $N\geq 3$, open and bounded, $q\in L^\infty(\Omega)$, $q\geq 0$, and $p\in(2,2^*)$. Then there exists at least one non-negative, non-trivial weak solution $u\in H^1_0(\Omega)$ of

$$-\Delta u + q(x)u = |u|^{p-2}u \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega. \tag{3.1}$$

Remark. (1) We already know this (via minimax theory (MPT), see Corollary 1.10!). Here we are interested in the method of proof.

(2) We will give two (more) proofs. The first relies on scaling of the corresponding functional

$$J(u) = \frac12\|u\|_q^2 - \frac1p\|u\|_p^p. \tag{3.2}$$

The second proof is more versatile (see [BS] for generalisations).

(3) Recall that $J$ in (3.2) is differentiable and critical points are weak solutions of (3.1). Also, for all $u\neq 0$, we have

$$J(tu) = \frac{t^2}2\|u\|_q^2 - \frac{t^p}p\|u\|_p^p \xrightarrow{\,t\to\infty\,} -\infty,$$

since $p>2$. Hence $J$ is not bounded from below.

3.1 Proof 1 of 3.1: Minimization on spheres

For $\beta>0$, let

$$\Sigma_\beta := \{u\in H^1_0(\Omega) \mid \|u\|_p^p = \beta\} = G^{-1}(\beta)$$

denote a sphere of $L^p(\Omega)$ (i.e. not of $H^1_0(\Omega)$), where $G\colon H^1_0(\Omega)\to\mathbb{R}$, $G(u)=\|u\|_p^p$. Then, for all $u\in\Sigma_\beta\subseteq H^1_0(\Omega)$, we have

$$J(u) = \frac12\|u\|_q^2 - \frac1p\beta,$$

so $J|_{\Sigma_\beta}\geq-\frac1p\beta$ is bounded from below and coercive (on $\Sigma_\beta$). We will set

$$m_\beta = \inf_{u\in\Sigma_\beta}\|u\|_q^2 = \inf_{u\in\Sigma_\beta}N(u), \qquad\text{where } N(u)=\|u\|_q^2.$$

The goal is to prove that $m_\beta$ is attained at, say, $u\in\Sigma_\beta$, and that $u$ solves (3.1) in 3.1. Heuristically (the true proof needs more nonlinear functional analysis): we have $\Sigma_\beta=G^{-1}(\beta)$ and $G\in C^1(H^1_0(\Omega))$, and $G'(u)u=pG(u)=p\beta\neq 0$ for all $u\in G^{-1}(\beta)$, so, by the Implicit Function Theorem (in Banach spaces...), $G^{-1}(\beta)$ is a $C^1$-manifold. By Lagrange's Theorem, if $u$ minimizes $N$ on $G^{-1}(\beta)$, then there exists $\lambda\in\mathbb{R}$ (a Lagrange multiplier) with $N'(u)=\lambda G'(u)$, i.e.

$$\int_\Omega\nabla u\cdot\nabla v\,dx + \int_\Omega q(x)uv\,dx = \lambda\int_\Omega|u|^{p-2}uv\,dx \qquad \forall\,v\in H^1_0(\Omega).$$

Multiplying $u$ by a suitable number (see the Outlook at the end of the previous chapter) then gives a solution to (3.1) in 3.1.

Note. This rescaled minimizer no longer belongs to $\Sigma_\beta$.

Lemma 3.2. For every $\beta>0$ there exists $u\in\Sigma_\beta$ with $u\geq 0$ (a.e. on $\Omega$) such that $N(u)=\|u\|_q^2=m_\beta=\inf_{v\in\Sigma_\beta}N(v)$.

Lemma 3.3. Let $u$ be a minimizer as in the previous Lemma 3.2. Then $u$ satisfies

$$\int_\Omega\nabla u\cdot\nabla v\,dx + \int_\Omega q(x)uv\,dx = \frac{m_\beta}{\beta}\int_\Omega|u|^{p-2}uv\,dx \qquad \forall\,v\in H^1_0(\Omega).$$

We continue to prove Theorem 3.1 using 3.2 and 3.3. We need to get rid of the Lagrange multiplier $\frac{m_\beta}{\beta}$. Let $u$ be a minimizer of $N$ over $\Sigma_\beta$ and set $u=c\,w$ for some $c\in\mathbb{R}$ (to be determined). Then, by 3.3, we have

$$c\,(w,v)_q = \frac{m_\beta}{\beta}\,c^{p-1}\int_\Omega|w|^{p-2}wv\,dx \qquad \forall\,v\in H^1_0(\Omega).$$

Choosing $c := \big(\beta/m_\beta\big)^{\frac1{p-2}}$, we have that $w$ is non-negative and

$$(w,v)_q = \int_\Omega|w|^{p-2}wv\,dx \qquad \forall\,v\in H^1_0(\Omega).$$

Note that $u\in\Sigma_\beta$ but $w\notin\Sigma_\beta$; we could try to do the same for $\Sigma_\gamma$, $\gamma\neq\beta$ (to get another solution), but this does not give anything new, see [BS] 2.36. It remains to prove Lemmas 3.2 and 3.3.

Proof of Lemma 3.2. Let $\{u_k\}_{k\in\mathbb{N}}\subseteq\Sigma_\beta$ be a minimizing sequence for $N(u)=\|u\|_q^2$; then $\{|u_k|\}_{k\in\mathbb{N}}$ is also a minimizing sequence (see Lemma 1.1 (iii) last semester). Hence we can assume $u_k(x)\geq 0$ a.e. As this sequence is bounded in $H^1_0(\Omega)$, up to a subsequence, $u_k\rightharpoonup u$ in $H^1_0(\Omega)$, $u_k\to u$ in $L^p(\Omega)$ and $u_k(x)\to u(x)$ a.e. in $\Omega$. As the $\|\cdot\|_q$-norm is weakly lower semicontinuous, we have $\|u\|_q^2\leq m_\beta$. Also, $\|u\|_p^p=\beta$ and $u(x)\geq 0$ a.e. in $\Omega$. Hence $u\in\Sigma_\beta$, and so $\|u\|_q^2=m_\beta$. After all, $u$ is not identically $0$, as $u\in\Sigma_\beta$. Note the importance of $p<2^*$! □

Proof of Lemma 3.3. As $u$ is a constrained minimizer of $N$ (on $\Sigma_\beta$), we do not know that the differential of $N$ is zero. We will, in practice, prove that the differential of $N$ at $u$ vanishes on the tangent space to $\Sigma_\beta$ at $u$. Fix $v\in H^1_0(\Omega)$; then there exists $\varepsilon>0$ such that for all $s\in(-\varepsilon,\varepsilon)$ the function $u+sv$ is not identically $0$, since $u\neq 0$. Thus

$$t(s) := \Big(\frac{\beta}{\int_\Omega|u+sv|^p\,dx}\Big)^{1/p}$$

is well defined. Then $t\colon(-\varepsilon,\varepsilon)\to(0,\infty)$ with $\|t(s)(u+sv)\|_p^p=\beta$,