Thus $\partial u/\partial x_i$ is a weak solution to the corresponding equation with the right-hand side $\partial f/\partial x_i \in L^2$. Thus $u \in W^{3,2}_{\mathrm{loc}}(\Omega)$. Iterating further, and using a generalized Sobolev embedding, gives that $u$ is smooth.

Theorem 3.33 (Local smoothness). Let $a_{ij}, b_i, c \in C^\infty(\Omega)$ and $f \in C^\infty(\Omega)$. Further, let $u \in W^{1,2}(\Omega)$ be a weak solution to $Lu = f$. Then $u \in C^\infty(\Omega)$.

3.5.2. Global $L^2$-regularity. A global regularity result also holds.

Definition 3.34. We write $\partial\Omega \in C^k$ if for each point $x_0 \in \partial\Omega$ there are $r > 0$ and a $C^k$-function $\gamma : \mathbb{R}^{n-1} \to \mathbb{R}$ such that, upon relabeling and reorienting the coordinate axes if necessary, it holds that
\[
\Omega \cap B(x_0, r) = \{x \in B(x_0, r) : x_n > \gamma(x_1, \ldots, x_{n-1})\}.
\]
For bounded $\Omega$ we also denote
\[
C^k(\overline{\Omega}) = \{u \in C^k(\Omega) : D^\alpha u \text{ is uniformly continuous in } \Omega \text{ for all } |\alpha| \le k\}.
\]

Theorem 3.35 (Global regularity). Let $a_{ij} \in C^1(\overline{\Omega})$, $b_i, c \in L^\infty(\Omega)$ and $f \in L^2(\Omega)$. Further, assume that $\partial\Omega \in C^2$, let $g \in W^{2,2}(\Omega)$, and let $u \in W^{1,2}(\Omega)$ be a weak solution to
\[
Lu = f \quad \text{in } \Omega, \qquad u - g \in W^{1,2}_0(\Omega).
\]
Then $u \in W^{2,2}(\Omega)$ and
\[
\|u\|_{W^{2,2}(\Omega)} \le c\big(\|f\|_{L^2(\Omega)} + \|u\|_{L^2(\Omega)} + \|g\|_{W^{2,2}(\Omega)}\big),
\]
where $c$ may depend on $\Omega$ and $a_{ij}, b_i, c$, but not on $u$.

Remark 3.36 (Warning). One might be tempted to think that all kinds of properties of the boundary value function are inherited by the solution, as long as the boundary and the coefficients are regular enough. This is false, however! Let $\alpha \in (1/2, 1)$ and
\[
\Omega = \Big\{z \in \mathbb{C} : |z| > 0,\ \operatorname{Arg} z \in \Big(-\frac{\pi}{2\alpha}, \frac{\pi}{2\alpha}\Big)\Big\}.
\]
Denote $z = x + iy = re^{i\theta}$ with $\theta \in (-\pi, \pi]$. Since $\log z = i\theta + \log r$,
\[
z^{\alpha} := e^{\alpha \log z} = e^{\alpha \log r}\, e^{i\alpha\theta} = r^{\alpha}\big(\cos(\alpha\theta) + i\sin(\alpha\theta)\big).
\]
We take for granted that $z^{\alpha}$ is an analytic function in $\Omega$, and thus its real part
\[
u(x,y) := \operatorname{Re} z^{\alpha} = r^{\alpha}\cos(\alpha\theta)
\]
is a harmonic function. Then for $x > 0$ it holds that $u(x,0) = x^{\alpha}$, which is merely Hölder continuous at the origin, even though $u \equiv 0$ on $\partial\Omega$. A harmonic function is thus locally, but not necessarily globally, Lipschitz.

A similar phenomenon occurs even if the boundary is smooth. Indeed, consider the upper half plane and
\[
\begin{cases} \Delta u = 0, & (x,y) \in \mathbb{R}^2_+, \\ u(x,0) = g(x), \end{cases} \tag{3.3}
\]
where $g(x) = |x|$ close to $0$, continued in a suitable bounded and smooth fashion to the whole of $\mathbb{R}$. Then $y \mapsto u(0,y)$ is only Hölder continuous close to $y = 0$. Similarly, if $g \in C^1(\mathbb{R})$ in (3.3), it does not always follow that $u \in C^1(\overline{\mathbb{R}^2_+})$.
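The corner example can be checked numerically. The following is my own sketch (not part of the notes; the value $\alpha = 0.75$ is an arbitrary sample in $(1/2,1)$): it verifies by finite differences that $u = \operatorname{Re} z^{\alpha}$ is harmonic away from the origin, and that the difference quotients of the boundary behavior $u(x,0) = x^{\alpha}$ blow up at $0$, so $u$ is not Lipschitz up to the corner.

```python
# Numerical sanity check (my own sketch, not part of the notes): for a sample
# alpha in (1/2, 1), u = Re(z^alpha) is harmonic away from the origin, while
# u(x, 0) = x^alpha is only Hölder continuous at 0.
alpha = 0.75

def u(x, y):
    # real part of z^alpha, computed via Python's complex power
    return ((x + 1j * y) ** alpha).real

def fd_laplacian(x, y, h=1e-4):
    # five-point finite-difference Laplacian
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h) - 4 * u(x, y)) / h ** 2

lap = fd_laplacian(0.5, 0.3)   # should be close to 0: u is harmonic here

# Difference quotients of u(x, 0) = x^alpha at the corner grow like x^(alpha - 1),
# so u is not Lipschitz up to the boundary (note u(0, 0) = 0).
quotients = [u(x, 0.0) / x for x in (1e-2, 1e-4, 1e-6)]
```

The growing quotients are exactly the failure of a boundary Lipschitz estimate at the reentrant corner.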
3.5.3. Local $L^p$-regularity.

Example 3.37 (Calderón–Zygmund type inequality). First consider a classical approach to the problem $\Delta u = f$, where $f \in L^p$, $1 < p < \infty$. A solution $u$ is of the form (we say nothing about a domain or uniqueness, as this is just the idea on a general level)
\[
u(x) = C \int \frac{f(y)}{|x-y|^{n-2}}\,dy.
\]
One of the questions in the regularity theory of PDEs is whether $u$ has second derivatives in $L^p$, i.e. $\frac{\partial^2 u}{\partial x_i \partial x_j} \in L^p$? If we formally differentiate $u$, we get
\[
\frac{\partial^2 u}{\partial x_i \partial x_j} = C \int_{\mathbb{R}^n} f(y)\, \underbrace{\frac{\partial^2}{\partial x_i \partial x_j}\, \frac{1}{|x-y|^{n-2}}}_{|\cdot| \le C/|x-y|^{n}}\,dy.
\]
It follows that $\int_{\mathbb{R}^n} f(y)\, \frac{\partial^2}{\partial x_i \partial x_j} |x-y|^{2-n}\,dy$ defines (the precise definitions are beyond our scope here) a singular integral $Tf(x)$. A typical theorem in the theory of singular integrals says
\[
\|Tf\|_{p} \le C \|f\|_{p},
\]
and thus we can deduce that $\frac{\partial^2 u}{\partial x_i \partial x_j} \in L^p$. Further, we get that $u \in W^{2,p}$. This was established by Calderón and Zygmund (1952, Acta Math.), and thus the above inequality is often called the Calderón–Zygmund inequality.

Theorem 3.38 (Local $L^p$-regularity). Let $1 < p < \infty$ and $f \in L^p(\Omega)$. Further, let $u \in W^{1,2}(\Omega)$ be a weak solution to $\Delta u = f$. Then $u \in W^{2,p}_{\mathrm{loc}}(\Omega)$ and for any $\Omega' \Subset \Omega$
\[
\|u\|_{W^{2,p}(\Omega')} \le c\big(\|f\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)}\big),
\]
where $c = c(p, n, \Omega', \Omega)$.
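The borderline decay of the kernel in Example 3.37 can be checked directly. Below is my own sanity check (not from the notes), taking the sample dimension $n = 3$, where $K(x) = |x|^{2-n} = 1/|x|$: its second derivatives have the critical size $\sim 1/|x|^{3} = 1/|x|^{n}$, which is why $Tf$ is a singular, rather than absolutely convergent, integral.

```python
import numpy as np

# Sanity check (my own, not from the notes): in dimension n = 3 the kernel
# K(x) = |x|^{2-n} = 1/|x| has second derivatives of size ~ 1/|x|^3 = 1/|x|^n.
def K(x):
    return 1.0 / np.linalg.norm(x)

def fd_d2K_11(x, h=1e-5):
    # finite-difference second derivative of K in the x_1 direction
    e1 = np.array([h, 0.0, 0.0])
    return (K(x + e1) + K(x - e1) - 2.0 * K(x)) / h ** 2

x = np.array([0.3, 0.2, 0.1])
# closed form: d^2/dx_1^2 (1/|x|) = (3 x_1^2 - |x|^2) / |x|^5
exact = (3 * x[0] ** 2 - np.dot(x, x)) / np.linalg.norm(x) ** 5

# K is homogeneous of degree -1, so its second derivatives are homogeneous of
# degree -3: doubling |x| divides them by 2^3 = 8.
ratio = fd_d2K_11(2 * x) / fd_d2K_11(x)
```

The homogeneity ratio $1/8$ is the numerical face of the bound $|\partial^2 K| \le C/|x|^3$.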
3.5.4. $C^{0,\alpha}$ regularity for weak solutions, De Giorgi's method. For expository reasons we only consider the Laplacian. Nonetheless, the method also applies to equations with bounded measurable coefficients.

Lemma 3.39 (Caccioppoli's inequality). Let $u \in W^{1,2}_{\mathrm{loc}}(\Omega)$ be a weak solution to $\Delta u = 0$ in $\Omega$. Then there exists a constant $c = c(n)$ such that
\[
\int_{B(x_0,r)} |D(u-k)^+|^2\,dx \le \frac{c}{(R-r)^2}\int_{B(x_0,R)} ((u-k)^+)^2\,dx,
\]
where $k \in \mathbb{R}$ and $0 < r < R < \infty$ are such that $B(x_0,R) \subset \Omega$, and $u^+ = \max(u, 0)$.

Proof. Let $\eta \in C_0^\infty(B(x_0,R))$ be a cut-off function such that $0 \le \eta \le 1$, $\eta = 1$ in $B(x_0,r)$, $|D\eta| \le \frac{c}{R-r}$, and take the test function $\varphi = \eta^2 (u-k)^+ \in W^{1,2}_0(B(x_0,R))$. Since
\[
D\varphi = \eta^2 D(u-k)^+ + 2\eta\,(u-k)^+ D\eta,
\]
we obtain, using the weak formulation,
\[
\int \eta^2\, Du \cdot D(u-k)^+\,dx = -2\int \eta\,(u-k)^+\, Du \cdot D\eta\,dx. \tag{3.4}
\]
Recall
\[
D(u-k)^+ = \begin{cases} Du & \text{a.e. in } \{u > k\}, \\ 0 & \text{a.e. in } \{u \le k\}. \end{cases}
\]
Thus a.e.
\[
Du \cdot D(u-k)^+ = |D(u-k)^+|^2,
\]
and combining this with (3.4), we get
\[
\begin{aligned}
\int \eta^2 |D(u-k)^+|^2\,dx &= -2\int \eta\,(u-k)^+\, D(u-k)^+ \cdot D\eta\,dx \\
&\le 2\int \eta\, |D(u-k)^+|\,(u-k)^+\,|D\eta|\,dx \\
&\overset{\text{Young}}{\le} \varepsilon \int \eta^2 |D(u-k)^+|^2\,dx + C(\varepsilon)\int ((u-k)^+)^2 |D\eta|^2\,dx. \tag{3.5}
\end{aligned}
\]
From this the result follows by absorbing the first term on the right-hand side into the left (choosing $\varepsilon < 1$), and recalling the definition of $\eta$. $\square$

Theorem 3.40 (ess sup estimate). Let $u \in W^{1,2}_{\mathrm{loc}}(\Omega)$ be a weak solution to $\Delta u = 0$ in $\Omega$. Then there exists $c = c(n)$ such that
\[
\operatorname*{ess\,sup}_{B(x_0, r/2)} u \le k_0 + c\Big(\fint_{B(x_0,r)} ((u-k_0)^+)^2\,dx\Big)^{1/2},
\]
where $k_0 \in \mathbb{R}$ and $B(x_0,r) \subset \Omega$.

Proof. Let $r/2 \le \rho < \sigma \le r$ and $\eta \in C_0^\infty(B(x_0,\sigma))$ with $0 \le \eta \le 1$, $\eta = 1$ in $B(x_0,\rho)$, $|D\eta| \le \frac{c}{\sigma - \rho}$, and use the test function $v = \eta\,(u-k)^+$. The proof is based on the use of the Sobolev inequality, Caccioppoli's inequality, and iteration. To be more precise,
\[
\fint_{B(x_0,\sigma)} |Dv|^2\,dx \le 2\fint_{B(x_0,\sigma)} \eta^2 |D(u-k)^+|^2 + ((u-k)^+)^2 |D\eta|^2\,dx \overset{\text{Cacc., def. of } \eta}{\le} \frac{c}{(\sigma-\rho)^2}\fint_{B(x_0,\sigma)} ((u-k)^+)^2\,dx. \tag{3.6}
\]
Further, with $\kappa = \frac{n}{n-2}$ (here $n \ge 3$),
\[
\Big(\fint_{B(x_0,\rho)} ((u-k)^+)^{2\kappa}\,dx\Big)^{1/\kappa} \le c\Big(\fint_{B(x_0,\sigma)} v^{2\kappa}\,dx\Big)^{1/\kappa} \overset{\text{Sobolev}}{\le} c\,r^2 \fint_{B(x_0,\sigma)} |Dv|^2\,dx, \tag{3.7}
\]
since $v \in W^{1,2}_0(B(x_0,\sigma))$; note that averages over the comparable balls $B(x_0,\rho)$ and $B(x_0,\sigma)$, $r/2 \le \rho < \sigma \le r$, differ only by a factor $c(n)$. Combining (3.7) and (3.6), we get
\[
\Big(\fint_{B(x_0,\rho)} ((u-k)^+)^{2\kappa}\,dx\Big)^{1/\kappa} \le \frac{c\,r^2}{(\sigma-\rho)^2}\fint_{B(x_0,\sigma)} ((u-k)^+)^2\,dx. \tag{3.8}
\]
Define
\[
A(k,\rho) = B(x_0,\rho) \cap \{x : u(x) > k\}
\]
and observe
\[
\begin{aligned}
\fint_{B(x_0,\rho)} ((u-k)^+)^2\,dx &= \frac{1}{|B(x_0,\rho)|}\int_{A(k,\rho)} ((u-k)^+)^2\,dx \\
&\overset{\text{Hölder}}{\le} \frac{1}{|B(x_0,\rho)|}\Big(\int_{A(k,\rho)} ((u-k)^+)^{2\kappa}\,dx\Big)^{1/\kappa} |A(k,\rho)|^{1-1/\kappa} \\
&\le \Big(\fint_{B(x_0,\rho)} ((u-k)^+)^{2\kappa}\,dx\Big)^{1/\kappa}\Big(\frac{|A(k,\rho)|}{|B(x_0,\rho)|}\Big)^{1-1/\kappa} \\
&\overset{(3.8)}{\le} \frac{c\,r^2}{(\sigma-\rho)^2}\Big(\frac{|A(k,\sigma)|}{|B(x_0,\sigma)|}\Big)^{1-1/\kappa}\fint_{B(x_0,\sigma)} ((u-k)^+)^2\,dx.
\end{aligned}
\]
If $h < k$, then
\[
(k-h)^2\,|A(k,\sigma)| = \int_{A(k,\sigma)} (k-h)^2\,dx \overset{u > k \text{ in } A(k,\sigma)}{\le} \int_{A(k,\sigma)} (u-h)^2\,dx \overset{h < k}{\le} \int_{A(h,\sigma)} (u-h)^2\,dx \le \int_{B(x_0,\sigma)} ((u-h)^+)^2\,dx. \tag{3.9}
\]
By this, denoting
\[
u(h,\rho) := \Big(\fint_{B(x_0,\rho)} ((u-h)^+)^2\,dx\Big)^{1/2}, \tag{3.10}
\]
we get
\[
\frac{|A(k,\sigma)|}{|B(x_0,\sigma)|} \le \frac{1}{(k-h)^2}\fint_{B(x_0,\sigma)} ((u-h)^+)^2\,dx = \frac{u(h,\sigma)^2}{(k-h)^2}. \tag{3.11}
\]
Using this with the first estimate above, we get
\[
u(k,\rho)^2 \le \frac{c\,r^2}{(\sigma-\rho)^2}\Big(\frac{|A(k,\sigma)|}{|B(x_0,\sigma)|}\Big)^{1-1/\kappa} u(k,\sigma)^2 \overset{(3.11)}{\le} \frac{c\,r^2}{(\sigma-\rho)^2}\,u(k,\sigma)^2\Big(\frac{u(h,\sigma)}{k-h}\Big)^{2(1-1/\kappa)}. \tag{3.12}
\]
This implies, since $u(k,\sigma) \le u(h,\sigma)$ for $h < k$,
\[
u(k,\rho) \le \frac{c\,r}{\sigma - \rho}\,\frac{u(h,\sigma)^{1+\beta}}{(k-h)^{\beta}}, \tag{3.13}
\]
where $\beta := 1 - 1/\kappa$.

Auxiliary claim: For $k_0 \in \mathbb{R}$ it holds that $u(k_0 + d, r/2) = 0$, where
\[
d := c^{1/\beta}\, 2^{(1+3\beta+\beta^2)/\beta^2}\, u(k_0, r)
\]
and $c$, $\beta$ are as above.

Proof of the auxiliary claim: Let
\[
k_j = k_0 + d\,(1 - 2^{-j}), \qquad \rho_j = \frac{r}{2} + 2^{-j}\,\frac{r}{2}, \qquad j = 0, 1, 2, \ldots,
\]
so that $\rho_0 = r$, $\rho_j \searrow r/2$ and $k_j \nearrow k_0 + d$ as $j \to \infty$. Then we show by induction that
\[
u(k_j, \rho_j) \le \mu^{-j}\,u(k_0, r), \qquad j = 0, 1, 2, \ldots, \tag{3.14}
\]
where $\mu = 2^{(1+\beta)/\beta}$. Indeed, $j = 0$ follows immediately since $\rho_0 = r$. Assume then that the claim holds for some $j$, and observe that
\[
\rho_j - \rho_{j+1} = (2^{-j} - 2^{-j-1})\,\frac{r}{2} = 2^{-j-2}\,r, \qquad k_{j+1} - k_j = (2^{-j} - 2^{-j-1})\,d = 2^{-j-1}\,d.
\]
Using these with (3.13), we have
\[
u(k_{j+1}, \rho_{j+1}) \le \frac{c\,r}{\rho_j - \rho_{j+1}}\,\frac{u(k_j, \rho_j)^{1+\beta}}{(k_{j+1} - k_j)^{\beta}} \le c\,2^{(j+2)+\beta(j+1)}\,\frac{\big(\mu^{-j}\,u(k_0,r)\big)^{1+\beta}}{d^{\beta}},
\]
where we used the induction assumption to estimate $u(k_j, \rho_j)$. Then, recalling the shorthand notations, we get
\[
u(k_{j+1}, \rho_{j+1}) \le \mu^{-(j+1)}\,u(k_0,r)\,\underbrace{c\,2^{2+\beta+(1+\beta)/\beta}\Big(\frac{u(k_0,r)}{d}\Big)^{\beta}}_{=\,1 \text{ by the choice of } d} = \mu^{-(j+1)}\,u(k_0,r),
\]
and thus the induction is complete. Further, since $k_j \nearrow k_0 + d$, this implies for every $j$
\[
0 \le u(k_0 + d, r/2)^2 = \fint_{B(x_0, r/2)} ((u - (k_0+d))^+)^2\,dx \le \fint_{B(x_0, r/2)} ((u - k_j)^+)^2\,dx,
\]
and, since $\rho_j \searrow r/2$,
\[
\fint_{B(x_0, r/2)} ((u - k_j)^+)^2\,dx \le \frac{|B(x_0,\rho_j)|}{|B(x_0,r/2)|}\,\fint_{B(x_0,\rho_j)} ((u - k_j)^+)^2\,dx \le c\,u(k_j, \rho_j)^2 \overset{(3.14)}{\longrightarrow} 0.
\]
It follows that $u(k_0 + d, r/2) = 0$, and this ends the proof of the auxiliary claim.

By using the auxiliary claim, we now finish the proof of the ess sup estimate. Indeed,
\[
0 = u(k_0 + d, r/2) = \Big(\fint_{B(x_0, r/2)} ((u - (k_0 + d))^+)^2\,dx\Big)^{1/2},
\]
where $d = c^{1/\beta}\,2^{(1+3\beta+\beta^2)/\beta^2}\,u(k_0, r)$. Thus a.e. in $B(x_0, r/2)$ it holds that
\[
u \le k_0 + d = k_0 + c\Big(\fint_{B(x_0,r)} ((u-k_0)^+)^2\,dx\Big)^{1/2},
\]
from which the claim
\[
\operatorname*{ess\,sup}_{B(x_0, r/2)} u \le k_0 + c\Big(\fint_{B(x_0,r)} ((u-k_0)^+)^2\,dx\Big)^{1/2}
\]
follows. $\square$
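The decay mechanism in the auxiliary claim can be illustrated numerically. The sketch below is my own toy (sample values $c = 1$, $\beta = 1/2$, with $Y_0$ playing the role of $u(k_0,r)$): feeding the worst case allowed by (3.13) back into itself, with $d$ chosen as in the claim, produces exactly the geometric decay $\mu^{-j}$.

```python
# Toy illustration (my own, not from the notes) of the De Giorgi iteration:
# the worst-case recursion Y_{j+1} = c 2^{(j+2)+beta(j+1)} Y_j^{1+beta} / d^beta
# allowed by (3.13) decays like mu^{-j}, mu = 2^{(1+beta)/beta}, once d is
# chosen as in the auxiliary claim.
c, beta, Y0 = 1.0, 0.5, 1.0          # sample values; Y0 stands for u(k_0, r)
mu = 2 ** ((1 + beta) / beta)         # = 8 for beta = 1/2
d = c ** (1 / beta) * 2 ** ((1 + 3 * beta + beta ** 2) / beta ** 2) * Y0

Y = [Y0]
for j in range(20):
    Y.append(c * 2 ** ((j + 2) + beta * (j + 1)) * Y[-1] ** (1 + beta) / d ** beta)
```

With this choice of $d$ the growing factor $2^{(j+2)+\beta(j+1)}$ is exactly cancelled by the superlinear power $Y_j^{1+\beta}$, so $Y_j = \mu^{-j} Y_0$; any larger $d$ only makes the decay faster.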
Corollary 3.41. Let $u$ be a weak solution to $\Delta u = 0$ in $\Omega$. Then there exists $c = c(n)$ such that
\[
\operatorname*{ess\,sup}_{B(x_0, r/2)} |u| \le c\Big(\fint_{B(x_0,r)} u^2\,dx\Big)^{1/2}
\]
for all $B(x_0,r) \subset \Omega$.

Proof. Choose $k_0 = 0$ in the previous result and observe that
\[
\operatorname*{ess\,sup}_{B(x_0, r/2)} u \le c\Big(\fint_{B(x_0,r)} (u^+)^2\,dx\Big)^{1/2}.
\]
Since $-u$ is also a solution, we obtain
\[
-\operatorname*{ess\,inf}_{B(x_0, r/2)} u = \operatorname*{ess\,sup}_{B(x_0, r/2)} (-u) \le c\Big(\fint_{B(x_0,r)} ((-u)^+)^2\,dx\Big)^{1/2}.
\]
Combining the estimates,
\[
\operatorname*{ess\,sup}_{B(x_0, r/2)} |u| \le \max\Big\{\operatorname*{ess\,sup}_{B(x_0, r/2)} u,\ -\operatorname*{ess\,inf}_{B(x_0, r/2)} u\Big\} \le c\Big(\fint_{B(x_0,r)} u^2\,dx\Big)^{1/2}. \qquad \square
\]

The above result implies that weak solutions (unlike Sobolev functions in general) are locally bounded. The next lemma is needed in order to prove Hölder continuity for weak solutions.

Lemma 3.42 (Measure decay). Let $u$ be a weak solution to $\Delta u = 0$ in $\Omega$, $B(x_0, 2r) \subset \Omega$, and
\[
m(2r) = \operatorname*{ess\,inf}_{B(x_0, 2r)} u, \qquad M(2r) = \operatorname*{ess\,sup}_{B(x_0, 2r)} u, \qquad |A(k_0, r)| \le \gamma\,|B(x_0,r)|, \quad 0 < \gamma < 1,
\]
where $A(k_0, r) = B(x_0,r) \cap \{x : u(x) > k_0\}$ and $k_0 = (m(2r) + M(2r))/2$. Then
\[
\lim_{k \nearrow M(2r)} |A(k, r)| = 0.
\]
We postpone the proof and go to the proof of the Hölder continuity immediately.

Theorem 3.43 (Hölder continuity). If $u$ is a weak solution to $\Delta u = 0$ in $\Omega$, then $u$ is locally Hölder continuous (or, to be more precise, has such a representative).
Proof. Let $k_0 = (m(2r) + M(2r))/2$, similarly to the previous lemma. Without loss of generality we may assume that
\[
|B(x_0,r) \cap \{x : u(x) > k_0\}| = |A(k_0, r)| \le \tfrac12\,|B(x_0,r)|, \tag{3.15}
\]
since otherwise, if $|A(k_0,r)| > \tfrac12 |B(x_0,r)|$, it holds that
\[
|B(x_0,r) \cap \{x : -u(x) > -k_0\}| < \tfrac12\,|B(x_0,r)|,
\]
and the argument below works for $-u$ and $-k_0$ instead. Here we need that both $u$ and $-u$ are solutions. Using the ess sup estimate with
\[
k_l = M(2r) - 2^{-(l+1)}\big(M(2r) - m(2r)\big),
\]
we have
\[
M(r/2) \le k_l + c\Big(\fint_{B(x_0,r)} ((u - k_l)^+)^2\,dx\Big)^{1/2} \le k_l + c\,\big(M(2r) - k_l\big)\Big(\frac{|A(k_l, r)|}{|B(x_0,r)|}\Big)^{1/2},
\]
since the integrand can be nonzero only in the set $A(k_l, r)$, where $(u - k_l)^+ \le M(2r) - k_l$. By Lemma 3.42, we may choose $l$ large enough so that
\[
c\Big(\frac{|A(k_l, r)|}{|B(x_0,r)|}\Big)^{1/2} \le \frac12.
\]
This fixes $l$. Combining the previous two estimates, we obtain
\[
M(r/2) \le k_l + \tfrac12\big(M(2r) - k_l\big) = M(2r) - 2^{-(l+2)}\big(M(2r) - m(2r)\big).
\]
From this we get
\[
M(r/2) - m(r/2) \le M(r/2) - m(2r) \le \big(1 - 2^{-(l+2)}\big)\big(M(2r) - m(2r)\big).
\]
Using the notation
\[
\operatorname*{osc}_{B(x_0,\rho)} u := M(\rho) - m(\rho) = \operatorname*{ess\,sup}_{B(x_0,\rho)} u - \operatorname*{ess\,inf}_{B(x_0,\rho)} u
\]
and $\theta := 1 - 2^{-(l+2)} < 1$, the above reads as
\[
\operatorname*{osc}_{B(x_0, r/2)} u \le \theta \operatorname*{osc}_{B(x_0, 2r)} u.
\]
The Hölder continuity follows from this by a standard iteration. To be more precise, for $0 < r \le R$ choose $j \in \mathbb{N}$ such that $4^{-j} \le \frac{r}{R} < 4^{-j+1}$. Then
\[
\operatorname*{osc}_{B(x_0, r)} u \le \theta^{j-1} \operatorname*{osc}_{B(x_0, 4^{j-1} r)} u \overset{4^{j-1} r \le R}{\le} \theta^{j-1} \operatorname*{osc}_{B(x_0, R)} u \le c\Big(\frac{r}{R}\Big)^{\alpha} \operatorname*{osc}_{B(x_0, R)} u, \tag{3.16}
\]
where we denoted $\alpha = \log(1/\theta)/\log 4 \in (0,1)$, so that $\theta = 4^{-\alpha}$, and observed that $4^{-j} \le \frac{r}{R}$ implies
\[
\theta^{j-1} = \theta^{-1}\,4^{-j\alpha} \le \theta^{-1}\Big(\frac{r}{R}\Big)^{\alpha} = c\Big(\frac{r}{R}\Big)^{\alpha}.
\]
Let $y \in \Omega$ be such that $|x_0 - y| \le \frac18 \operatorname{dist}(x_0, \partial\Omega)$, and set $R = \operatorname{dist}(x_0, \partial\Omega)$, $r = |x_0 - y|$ (actually this holds only for a.e. pair of points, but then the deduction below can be used to define the Hölder continuous representative). Then by the estimate (3.16),
\[
\begin{aligned}
|u(x_0) - u(y)| &\le \operatorname*{osc}_{B(x_0, r)} u \overset{(3.16)}{\le} c\Big(\frac{8\,|x_0 - y|}{3R}\Big)^{\alpha}\operatorname*{osc}_{B(x_0, \frac38 R)} u \\
&\le c\Big(\frac{|x_0 - y|}{R}\Big)^{\alpha}\,\operatorname*{ess\,sup}_{B(x_0, \frac38 R)} |u| \overset{\text{ess sup est.}}{\le} c\Big(\frac{|x_0 - y|}{R}\Big)^{\alpha}\Big(\fint_{B(x_0, \frac68 R)} u^2\,dx\Big)^{1/2} \le c\,|x_0 - y|^{\alpha}.
\end{aligned}
\]
It remains to prove the lemma used above.
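The dyadic iteration in (3.16) can be checked mechanically. Below is my own numerical sketch (the value $l = 3$ is an arbitrary sample): iterating the decay factor $\theta$ once per radius ratio $4$ yields the Hölder exponent $\alpha = \log(1/\theta)/\log 4$, and the inequality $\theta^{j-1} \le \theta^{-1}(r/R)^{\alpha}$ holds for every radius ratio.

```python
import math

# Numerical sketch (my own; l = 3 is an arbitrary sample value) of the
# iteration (3.16): osc decays by a factor theta per radius ratio 4, which
# yields the Hölder exponent alpha = log(1/theta)/log(4).
l = 3
theta = 1 - 2 ** -(l + 2)                    # oscillation decay factor, < 1
alpha = math.log(1 / theta) / math.log(4)    # resulting Hölder exponent

def dyadic_bound(t):
    """For t = r/R in (0,1), choose j with 4^{-j} <= t < 4^{-j+1}.
    Iterating the decay j-1 times gives theta^{j-1} <= (1/theta) * t^alpha."""
    j = math.ceil(math.log(1 / t) / math.log(4))
    return theta ** (j - 1), (1 / theta) * t ** alpha
```

Note how small $\alpha$ is for moderate $l$; De Giorgi's method gives some positive Hölder exponent, not an optimal one.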
Proof of Lemma 3.42. The proof is based on deriving an estimate containing $|A(h,r)| - |A(k,r)|$ by using Poincaré's and Caccioppoli's inequalities. To this end, we let $k > h > k_0$ and define an auxiliary function
\[
v(x) = \begin{cases} k - h, & u(x) \ge k, \\ u(x) - h, & h < u(x) < k, \\ 0, & u(x) \le h. \end{cases}
\]
It immediately follows that
\[
|B(x_0,r) \cap \{x : v(x) = 0\}| = |B(x_0,r) \cap \{x : u(x) \le h\}| = |B(x_0,r)| - |A(h,r)| \overset{k_0 < h}{\ge} |B(x_0,r)| - |A(k_0,r)| \overset{\text{assump.}}{\ge} (1-\gamma)\,|B(x_0,r)|.
\]
From this, denoting $v_{B(x_0,r)} := \fint_{B(x_0,r)} v\,dx$, we get
\[
k - h - v_{B(x_0,r)} = \fint_{B(x_0,r)} (k - h - v)\,dx \ge \frac{1}{|B(x_0,r)|}\int_{\{x : v(x) = 0\}} (k - h)\,dx \ge (1-\gamma)(k - h).
\]
Integrating this over $A(k,r)$ and using Poincaré's inequality, we obtain
\[
\begin{aligned}
(1-\gamma)(k-h)\,|A(k,r)| &\le \int_{A(k,r)} \big(k - h - v_{B(x_0,r)}\big)\,dx \overset{v = k-h \text{ in } A(k,r)}{=} \int_{A(k,r)} \big(v - v_{B(x_0,r)}\big)\,dx \\
&\le \int_{B(x_0,r)} \big|v - v_{B(x_0,r)}\big|\,dx \overset{\text{Hölder}}{\le} |B(x_0,r)|^{1/n}\Big(\int_{B(x_0,r)} \big|v - v_{B(x_0,r)}\big|^{\frac{n}{n-1}}\,dx\Big)^{\frac{n-1}{n}} \\
&\overset{\text{Poincaré, } p=1}{\le} c\,r \int_{B(x_0,r)} |Dv|\,dx.
\end{aligned}
\]
Then, using the above estimate together with Hölder's and Caccioppoli's inequalities,
\[
\begin{aligned}
(k-h)\,|A(k,r)| &\le c\,r \int_{B(x_0,r)} |Dv|\,dx \overset{\text{def. } v}{=} c\,r \int_{A(h,r)\setminus A(k,r)} |Du|\,dx \\
&\overset{\text{Hölder}}{\le} c\,r\Big(\int_{B(x_0,r)} |D(u-h)^+|^2\,dx\Big)^{1/2}\big(|A(h,r)| - |A(k,r)|\big)^{1/2} \\
&\overset{\text{Cacc.}}{\le} c\Big(\int_{A(h,2r)} (u-h)^2\,dx\Big)^{1/2}\big(|A(h,r)| - |A(k,r)|\big)^{1/2}, \tag{3.17}
\end{aligned}
\]
where $c = c(n,\gamma)$, and at the last step we observed that $\frac{r}{2r - r} \le c$. Next replace $k$ and $h$ in the above inequality by $k_j$ and $k_{j-1}$, where
\[
k_j = M(2r) - 2^{-(j+1)}\big(M(2r) - m(2r)\big).
\]
Then, since
\[
k_j - k_{j-1} = \big(2^{-j} - 2^{-(j+1)}\big)\big(M(2r) - m(2r)\big) = 2^{-(j+1)}\big(M(2r) - m(2r)\big),
\]
we get by (3.17) that
\[
2^{-(j+1)}\big(M(2r) - m(2r)\big)\,|A(k_j,r)| = (k_j - k_{j-1})\,|A(k_j,r)| \le c\Big(\int_{A(k_{j-1}, 2r)} (u - k_{j-1})^2\,dx\Big)^{1/2}\big(|A(k_{j-1},r)| - |A(k_j,r)|\big)^{1/2}.
\]
Then, observing that in $A(k_{j-1}, 2r)$
\[
u - k_{j-1} \le M(2r) - k_{j-1} = 2^{-j}\big(M(2r) - m(2r)\big),
\]
and using the above estimate, we obtain
\[
2^{-(j+1)}\big(M(2r) - m(2r)\big)\,|A(k_j,r)| \le c\,2^{-j}\big(M(2r) - m(2r)\big)\,|B(x_0, 2r)|^{1/2}\big(|A(k_{j-1},r)| - |A(k_j,r)|\big)^{1/2}.
\]
Cancelling $2^{-(j+1)}\big(M(2r) - m(2r)\big)$ on both sides and choosing $l$, $l \ge j$, we end up with
\[
|A(k_l,r)| \le |A(k_j,r)| \le c\,|B(x_0, 2r)|^{1/2}\big(|A(k_{j-1},r)| - |A(k_j,r)|\big)^{1/2}.
\]
Taking squares and summing over $j$, this gives by telescoping
\[
l\,|A(k_l,r)|^2 = \sum_{j=1}^{l} |A(k_l,r)|^2 \le c\,|B(x_0,2r)|\sum_{j=1}^{l} \big(|A(k_{j-1},r)| - |A(k_j,r)|\big) \le c\,|B(x_0,2r)|\,|A(k_0,r)| \le c\,|B(x_0,2r)|^2.
\]
Dividing by $l$, we finally get
\[
\lim_{l \to \infty} |A(k_l,r)| = 0. \qquad \square
\]

3.6. Comparison and maximum principles. In this section we consider
\[
Lu = -\sum_{i,j=1}^{n} D_i\big(a_{ij}(x) D_j u(x)\big) + c(x)\,u(x) = f.
\]

Theorem 3.44 (Comparison principle). Let $u, w \in W^{1,2}(\Omega)$ be weak solutions to $Lu = f$ and $Lw = f$, and $(u - w)^+ \in W^{1,2}_0(\Omega)$. Then there is $c_0 > 0$ such that if $c \ge -c_0$, it holds that $u \le w$ in $\Omega$.

Proof. The idea is the same as in the proof of the uniqueness. First,
\[
\int \sum_{i,j=1}^{n} a_{ij} D_j u\, D_i v + c\,u\,v\,dx = \int f v\,dx, \qquad \int \sum_{i,j=1}^{n} a_{ij} D_j w\, D_i v + c\,w\,v\,dx = \int f v\,dx
\]
for every $v \in W^{1,2}_0(\Omega)$. By subtracting the equations,
\[
\int \sum_{i,j=1}^{n} a_{ij} D_j (u-w)\, D_i v + c\,(u-w)\,v\,dx = 0.
\]
Now we choose $v = (u-w)^+ \in W^{1,2}_0(\Omega)$ and estimate, using ellipticity,
\[
0 = \int \sum_{i,j=1}^{n} a_{ij} D_j (u-w)\, D_i (u-w)^+ + c\,(u-w)(u-w)^+\,dx \ge \int |D(u-w)^+|^2 + c\,((u-w)^+)^2\,dx.
\]
Since
\[
\int c\,((u-w)^+)^2\,dx \ge -\frac{1}{2\mu}\int ((u-w)^+)^2\,dx
\]
with the choice $c_0 = 1/(2\mu)$, where $\mu$ is the constant in Poincaré's inequality $\int v^2\,dx \le \mu \int |Dv|^2\,dx$, we have, combining the facts and recalling Poincaré's inequality,
\[
0 \ge \int |D(u-w)^+|^2\,dx - \frac{1}{2\mu}\int ((u-w)^+)^2\,dx \ge \Big(1 - \frac{\mu}{2\mu}\Big)\int |D(u-w)^+|^2\,dx = \frac12 \int |D(u-w)^+|^2\,dx.
\]
Again using Poincaré's inequality (recall exercise set 1), we see that $(u-w)^+ = 0$ a.e., that is, $u \le w$ a.e. $\square$

Remark 3.45. By analyzing the above proof, we see that the following also holds: let $u, w \in W^{1,2}(\Omega)$, and let $u$ and $w$ be sub- and supersolutions respectively, i.e.
\[
\int \sum_{i,j=1}^{n} a_{ij} D_j u\, D_i v + c\,u\,v\,dx \le \int f v\,dx, \qquad \int \sum_{i,j=1}^{n} a_{ij} D_j w\, D_i v + c\,w\,v\,dx \ge \int f v\,dx
\]
for every $v \ge 0$, $v \in W^{1,2}_0(\Omega)$, and let $(u-w)^+ \in W^{1,2}_0(\Omega)$. Then
\[
u \le w \quad \text{in } \Omega.
\]

For the next theorem, we define
\[
\sup_{\partial\Omega} u := \inf\big\{l \in \mathbb{R} : (u - l)^+ \in W^{1,2}_0(\Omega)\big\}.
\]

Theorem 3.46 (Weak maximum principle). Let $u \in W^{1,2}(\Omega)$ be a weak solution to $-\sum_{i,j=1}^{n} D_i\big(a_{ij}(x) D_j u(x)\big) + c(x)\,u(x) = 0$ with $c \ge 0$. Then
\[
\operatorname*{ess\,sup}_{\Omega} u \le \sup_{\partial\Omega} u^+.
\]

Proof. Set $M := \sup_{\partial\Omega} u^+ \ge 0$. It holds that $(u - M)^+ \in W^{1,2}_0(\Omega)$. To see this, choose a decreasing sequence $l_i \to M$ so that $(u - l_i)^+ = (u^+ - l_i)^+ \in W^{1,2}_0(\Omega)$. Then, since $\Omega$ is bounded, it follows that $u - l_i \to u - M$ in $W^{1,2}(\Omega)$. As taking the positive part is continuous in $W^{1,2}(\Omega)$, it holds that $(u - l_i)^+ \to (u - M)^+$ in $W^{1,2}(\Omega)$, and thus the claim $(u - M)^+ \in W^{1,2}_0(\Omega)$ follows.
We may use $v = (u - M)^+$ as a test function in
\[
\int \sum_{i,j=1}^{n} a_{ij} D_j u\, D_i v + c\,u\,v\,dx = 0, \qquad \int \sum_{i,j=1}^{n} a_{ij} D_j M\, D_i v + c\,M\,v\,dx \ge 0,
\]
where $M \ge 0$, $c \ge 0$ and $v \ge 0$ were used (note that $D_j M = 0$). We subtract these to get
\[
\int |D(u - M)^+|^2 + c\,((u - M)^+)^2\,dx \le 0.
\]
From this it follows that $u \le M$ a.e. $\square$

3.6.1. Strong maximum principle. The strong comparison principle for weak solutions follows from Harnack-type arguments that we have not proven yet. Nonetheless, we show that, due to the classical theory, this is something to be expected anyway. Recall
\[
C^2(\overline{\Omega}) = \{u \in C^2(\Omega) : D^\alpha u \text{ is uniformly continuous on bounded subsets of } \Omega \text{ for all } |\alpha| \le 2\}.
\]
The argument does not rely on the divergence form. For simplicity of notation we consider
\[
Lu = -\sum_{i,j=1}^{n} a_{ij}\, D_i D_j u = 0.
\]
By the interior ball condition for $\Omega$ at $x_0 \in \partial\Omega$, we mean that there is a ball $B \subset \Omega$ such that $x_0 \in \partial B$.

Lemma 3.47 (Hopf). Let $u \in C^2(\Omega) \cap C(\overline{\Omega})$ satisfy $Lu \le 0$, and suppose that there is $x_0 \in \partial\Omega$ satisfying the interior ball condition for $B$ and
\[
u(x_0) > u(x) \quad \text{for all } x \in \Omega.
\]
Then
\[
\frac{\partial u}{\partial \nu}(x_0) > 0,
\]
where $\nu$ is the exterior unit normal of $B$ at $x_0$.

Proof. We may assume that $B = B(0, r)$ and $u(x_0) \ge 0$. Set for $\lambda > 0$
\[
v(x) = e^{-\lambda |x|^2} - e^{-\lambda r^2}, \qquad x \in B(0,r).
\]
Then
\[
D_j v = -2\lambda x_j\, e^{-\lambda |x|^2}
\]
and
\[
D_i D_j v = \big(-2\lambda \delta_{ij} + 4\lambda^2 x_i x_j\big)\,e^{-\lambda |x|^2}.
\]
Thus
\[
Lv = -\sum_{i,j=1}^{n} a_{ij}\, D_i D_j v = \sum_{i,j=1}^{n} a_{ij}\big(2\lambda\delta_{ij} - 4\lambda^2 x_i x_j\big)e^{-\lambda|x|^2} \overset{\text{ellipticity}}{\le} \Big(2\lambda\sum_{i=1}^{n} a_{ii} - 4\lambda^2 |x|^2\Big)e^{-\lambda|x|^2}.
\]
Thus, for large enough $\lambda$, we have
\[
Lv \le \Big(2\lambda\sum_{i=1}^{n} a_{ii} - 4\lambda^2 |x|^2\Big)e^{-\lambda|x|^2} \le 0, \qquad x \in B(0,r)\setminus B(0, r/2).
\]
By the assumption $u(x_0) > u(x)$ for all $x \in \Omega$, for small enough $\varepsilon > 0$ it holds that
\[
u(x_0) \ge u(x) + \varepsilon v(x) \quad \text{on } \partial B(0, r/2).
\]
The same holds on $\partial B(0, r)$, since there $v = 0$. We have
\[
L\big(u + \varepsilon v - u(x_0)\big) = Lu + \varepsilon Lv \le 0,
\]
and therefore the weak maximum principle for classical solutions (ex.) implies
\[
u + \varepsilon v - u(x_0) \le 0 \quad \text{in } B(0,r)\setminus B(0, r/2).
\]
But
\[
u(x_0) + \varepsilon v(x_0) - u(x_0) = 0,
\]
so that
\[
\frac{\partial\big(u + \varepsilon v - u(x_0)\big)}{\partial \nu}(x_0) \ge 0.
\]
This yields
\[
\frac{\partial u}{\partial \nu}(x_0) \ge -\varepsilon\,\frac{\partial v}{\partial \nu}(x_0) = -\varepsilon\,\frac{x_0}{r}\cdot Dv(x_0) = -\varepsilon\,\frac{x_0}{r}\cdot\big({-2\lambda x_0\,e^{-\lambda |x_0|^2}}\big) = 2\varepsilon\lambda r\,e^{-\lambda r^2} > 0. \qquad \square
\]

Remark 3.48. The nontrivial point in Hopf's lemma is that the inequality $\frac{\partial u}{\partial \nu}(x_0) > 0$ is strict!
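The sign computation for the barrier can be sanity-checked numerically. A minimal sketch of my own (it takes $a_{ij} = \delta_{ij}$, i.e. $L = -\Delta$, with arbitrary sample values $n = 3$, $r = 1$): then $Lv = (2\lambda n - 4\lambda^2 |x|^2)e^{-\lambda|x|^2}$, which is negative on the annulus for $\lambda$ large, while the outward normal derivative of $v$ at $|x| = r$ is strictly negative, which is the source of the strict Hopf inequality.

```python
import numpy as np

# Sketch (my own): Hopf barrier for a_ij = delta_ij, i.e. L = -Laplacian,
# with sample values n = 3, r = 1.
n, r = 3, 1.0
lam = 2 * n / r ** 2 + 1.0     # large enough: 4*lam^2*(r/2)^2 > 2*lam*n

def v(x):
    return np.exp(-lam * np.dot(x, x)) - np.exp(-lam * r ** 2)

def Lv(x):
    # closed form: Lv = -Delta v = (2*lam*n - 4*lam^2*|x|^2) e^{-lam*|x|^2}
    s = np.dot(x, x)
    return (2 * lam * n - 4 * lam ** 2 * s) * np.exp(-lam * s)

def fd_Lv(x, h=1e-4):
    # finite-difference check of -Delta v
    tot = 0.0
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        tot += v(x + e) + v(x - e) - 2 * v(x)
    return -tot / h ** 2

# Lv is radial here, so sampling radii in [r/2, r] along one direction suffices.
direction = np.array([1.0, 2.0, -2.0]) / 3.0   # a unit vector
vals = [Lv(t * direction) for t in np.linspace(r / 2, r, 25)]

# Outward normal derivative of the barrier at |x| = r is strictly negative.
dv_dnu = -2 * lam * r * np.exp(-lam * r ** 2)
```

The negative `dv_dnu` is exactly the quantity that, multiplied by $-\varepsilon$, produces the strict positive lower bound for $\partial u/\partial\nu(x_0)$ in the proof.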
Theorem 3.49 (Strong maximum principle). Let $u \in C^2(\Omega) \cap C(\overline{\Omega})$ satisfy $Lu \le 0$, and let $\Omega$ be a bounded, open and connected set. Then, if $u$ attains its maximum in the interior of $\Omega$, it follows that $u \equiv \sup_{\Omega} u$.

Proof. Let $M := \max_{\overline{\Omega}} u$ and
\[
C = \{x \in \Omega : u(x) = M\}, \qquad V = \{x \in \Omega : u(x) < M\}.
\]
Let us make the counter proposition that $V$ is not empty. Take a point $y \in V$ with $\operatorname{dist}(y, C) < \operatorname{dist}(y, \partial\Omega)$, which exists since $\operatorname{dist}(C, V) = 0$ by the continuity of $u$. Let $B = B(y, \rho) \subset V$ be a largest possible ball in $V$ centered at $y$. Then $\partial B$ touches $C$ at some point $x_0$, and thus $V$ satisfies the interior ball condition at this point. By Hopf's lemma,
\[
\frac{\partial u}{\partial \nu}(x_0) > 0,
\]
but this is a contradiction, since $x_0$ is an interior maximum point of $u$, implying $Du(x_0) = 0$. $\square$

4. Linear parabolic equations

Next we study generalizations of the heat equation. We denote
\[
\Omega_T = \Omega \times (0,T) \quad \text{and} \quad \partial_p \Omega_T = \big(\overline{\Omega} \times \{0\}\big) \cup \big(\partial\Omega \times (0,T)\big).
\]

Definition 4.1 (Parabolic Sobolev space). The Sobolev space $L^2(0,T; W^{1,2}(\Omega))$ consists of all measurable functions $u(x,t)$ such that $u(\cdot, t)$ belongs to $W^{1,2}(\Omega)$ for almost every $0 < t < T$, the map $t \mapsto u(\cdot, t)$ is measurable as a mapping from $(0,T)$ to $W^{1,2}(\Omega)$, and the norm
\[
\Big(\int_0^T \int_{\Omega} \big(|u(x,t)|^2 + |Du(x,t)|^2\big)\,dx\,dt\Big)^{1/2}
\]
is finite. The definition of the space $L^2(0,T; W^{1,2}_0(\Omega))$ is analogous.
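To make the norm concrete, here is a quick sketch of my own (the function $u(x,t) = e^{-t}\sin(\pi x)$ on $\Omega = (0,1)$ with $T = 2$ is an arbitrary sample, not from the notes): it compares a midpoint-rule quadrature of the $L^2(0,T; W^{1,2}(\Omega))$ norm with its closed form.

```python
import numpy as np

# Sample computation (my own example): squared L^2(0,T; W^{1,2}(0,1)) norm of
# u(x,t) = e^{-t} sin(pi x), by midpoint quadrature vs. the closed form
#   ||u||^2 = int_0^T e^{-2t} dt * int_0^1 (sin^2(pi x) + pi^2 cos^2(pi x)) dx
#           = (1 - e^{-2T})/2 * (1 + pi^2)/2.
T = 2.0
nx = nt = 400
dx, dt = 1.0 / nx, T / nt
x = (np.arange(nx) + 0.5) * dx          # midpoints of cells in (0, 1)
t = (np.arange(nt) + 0.5) * dt          # midpoints of cells in (0, T)
X, Tt = np.meshgrid(x, t)
U = np.exp(-Tt) * np.sin(np.pi * X)             # u(x, t)
Ux = np.exp(-Tt) * np.pi * np.cos(np.pi * X)    # spatial derivative Du
norm_sq = np.sum(U ** 2 + Ux ** 2) * dx * dt
exact = (1 - np.exp(-2 * T)) / 2 * (1 + np.pi ** 2) / 2
```

Note that only the spatial gradient $Du$ enters the norm; no regularity in $t$ beyond measurability is required, which is the point of the definition.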