
Variational Methods

and Elliptic Equations

Lecture notes (preliminary version)

TU Eindhoven 2004/2005

G. Prokert


Contents

1 Normed spaces, Banach spaces
2 Inner Product spaces, Hilbert spaces
3 The Hilbert space L²(Ω)
4 Test functions
5 Weak derivatives
6 The Sobolev space W^{1,2}(Ω)
7 The space W^{1,2}_0(Ω)
8 Linear operators
9 Orthogonal projections in Hilbert spaces
10 The theorems of Riesz and Lax-Milgram
11 Application to elliptic boundary value problems
11.1 Linear second order elliptic equations with homogeneous Dirichlet boundary condition
11.2 Linear second order elliptic equations with homogeneous natural boundary condition
11.3 The Neumann problem for the Poisson equation
11.4 The Stokes equations with homogeneous Dirichlet boundary conditions
A Appendix: Lebesgue integration
A.1 Measure theory
A.2 Integration theory
References


1 Normed spaces, Banach spaces

Let K be the field R or C. Let E be a vector space over K.

Definitions: A mapping ‖ · ‖ from E to R is called a norm iff:

• ‖x‖ ≥ 0 for all x ∈ E,

• ‖x‖ = 0 ⇔ x = 0,

• ‖λx‖ = |λ|‖x‖ for all x ∈ E, λ ∈ K,

• ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x, y ∈ E (triangle inequality).

Then E together with ‖ · ‖ is called a normed space.

A simple consequence of the triangle inequality is the so-called inverse triangle inequality: For all x, y ∈ E,

| ‖x‖ − ‖y‖ | ≤ ‖x − y‖. (1.1)

(Check!)

The distance of two points (elements) in E is

d(x, y) := ‖x − y‖.

The open ball around x₀ in E with radius r > 0 is

B_E(x₀, r) := {x ∈ E | d(x, x₀) < r}.

The distance of a point x ∈ E from a (nonempty) subset A ⊂ E is

d(x, A) := inf_{y∈A} d(x, y).

The closure of a subset A ⊂ E is

Ā := {x ∈ E | d(x, A) = 0}.

A subset A ⊂ E is called closed iff A = Ā. A sequence (xₙ) in E is called convergent to x* ∈ E iff d(xₙ, x*) → 0 as n → ∞. x* is called the limit of the sequence (xₙ). We write xₙ → x* in E or lim_{n→∞} xₙ = x*.

Lemma 1.1 (Elementary limit properties)

Assume xₙ → x* in E, yₙ → y* in E, λₙ → λ* in K. Then

(i) xₙ + yₙ → x* + y* in E,

(ii) λₙxₙ → λ*x* in E,

(iii) ‖xₙ‖ → ‖x*‖.

Proof: (i) and (ii): Straightforward. (Check!)

(iii) By (1.1),

| ‖xn‖ − ‖x∗‖ | ≤ ‖xn − x∗‖ → 0,

hence ‖xn‖ → ‖x∗‖.


Lemma 1.2 (Closure and limit points) Assume A ⊂ E. A point y ∈ E belongs to Ā if and only if there is a sequence (xₙ) in A with xₙ → y in E. In particular, A is closed if and only if all convergent sequences in A have their limit in A.

Definition: A subset A ⊂ E is called dense in E iff Ā = E.

Lemma 1.3 (Dense sets)
A subset A ⊂ E is dense in E if and only if for any ε > 0 and any x ∈ E there is a y ∈ A such that d(x, y) < ε.

Definition: A sequence (xₙ) in E is called a Cauchy sequence iff d(xₙ, xₘ) → 0 as n, m → ∞. More precisely,

∀ε > 0 ∃N = N(ε) ∀m, n > N : d(xₘ, xₙ) < ε.

Every convergent sequence is a Cauchy sequence. (Check!) Every Cauchy sequence that has a convergent subsequence is convergent. (Check!)

Definition: A normed space (E, ‖ · ‖) is called a Banach space if it is complete, i.e., if every Cauchy sequence in E converges.

Definition: Let (E, ‖ · ‖) be a normed space and f₁, f₂, … ∈ E. The series ∑_{n=1}^∞ fₙ is called convergent if the sequence (s_N) of its partial sums s_N := ∑_{n=1}^N fₙ is convergent. The series ∑_{n=1}^∞ fₙ is called absolutely convergent if ∑_{n=1}^∞ ‖fₙ‖ < ∞.

Lemma 1.4 (Completeness criterion for normed spaces)
Let (E, ‖·‖) be a normed space in which every absolutely convergent series is convergent. Then (E, ‖ · ‖) is complete, i.e. a Banach space.

Proof: Let (gₙ) be a Cauchy sequence in E. This sequence has a subsequence (g_{n_k}) such that

‖g_{n_{k+1}} − g_{n_k}‖ ≤ 2^{−k}.

(Check!) Now set f_k := g_{n_{k+1}} − g_{n_k}. Then ∑_{k=1}^∞ ‖f_k‖ < ∞, so by assumption the partial sums s_N = ∑_{k=1}^N f_k = g_{n_{N+1}} − g_{n_1} converge. Hence the subsequence (g_{n_k}) converges, and therefore the Cauchy sequence (gₙ) converges as well.

2 Inner Product spaces, Hilbert spaces

Let E be a vector space over K. A mapping (·, ·) from E × E to K is called an inner product iff

• (x, y) = conj((y, x)) for all x, y ∈ E (complex conjugation; for K = R this is just symmetry),

• (λx, y) = λ(x, y) for all x, y ∈ E, λ ∈ K,

• (x + y, z) = (x, z) + (y, z) for all x, y, z ∈ E,

• (x, x) ≥ 0, and (x, x) = 0 ⇔ x = 0.

Then E together with (·, ·) is called an Inner Product space. Additionally we define

‖x‖ := (x, x)^{1/2}. (2.1)


Lemma 2.1 (Elementary properties of Inner Product spaces)

(i) Cauchy-Schwarz inequality: For all x, y ∈ E:

|(x, y)| ≤ ‖x‖‖y‖.

(ii) ‖ · ‖ given by (2.1) is a norm on E, hence (E, ‖ · ‖) is a normed space.

(iii) Parallelogram identity: For all x, y ∈ E:

‖x+ y‖2 + ‖x− y‖2 = 2(‖x‖2 + ‖y‖2).

Proof: (i): The result is obviously true for (x, y) = 0. Assume (x, y) ≠ 0. Then ‖x‖, ‖y‖ ≠ 0 and

0 ≤ ‖ x/‖x‖ − y/‖y‖ ‖² = 2 − 2 Re (x, y) / (‖x‖‖y‖),

hence

Re (x, y) ≤ ‖x‖‖y‖.

Replacing x by x̃ := conj((x, y)) x in the last inequality yields

|(x, y)|² ≤ |(x, y)| ‖x‖‖y‖,

and the result follows by dividing by |(x, y)|.

(ii): We have by (i)

‖x + y‖² = ‖x‖² + ‖y‖² + 2 Re (x, y) ≤ ‖x‖² + ‖y‖² + 2‖x‖‖y‖ = (‖x‖ + ‖y‖)²,

and this implies the triangle inequality. The other properties are easy to check.

(iii) Straightforward.

By (ii), all concepts defined for normed spaces (convergence, distance, closure etc.) are also defined for inner product spaces. In particular:

Definition: A complete inner product space is called a Hilbert space.

Definition: Two elements x, y ∈ E are called orthogonal iff (x, y) = 0. We write x ⊥ y. An element x is called orthogonal to a subset A ⊂ E iff x ⊥ y for all y ∈ A. We write x ⊥ A.

Lemma 2.2 (Pythagoras’ theorem)
Let x, y ∈ E. If x ⊥ y then ‖x + y‖² = ‖x‖² + ‖y‖².

Proof: Straightforward.

Lemma 2.3 (Orthogonality and closure)
Assume x ∈ E, A ⊂ E. If x ⊥ A then x ⊥ Ā.

Proof: Straightforward.
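The two key identities of this section are easy to check numerically. The following sketch (using NumPy, with randomly chosen complex vectors — an illustration, not part of the notes) verifies the Cauchy-Schwarz inequality and the parallelogram identity for the standard inner product on C⁵:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

inner = np.vdot(y, x)                # sum of x_i * conj(y_i); np.vdot conjugates its first argument
nx, ny = np.linalg.norm(x), np.linalg.norm(y)

# Cauchy-Schwarz inequality: |(x, y)| <= ||x|| ||y||
assert abs(inner) <= nx * ny + 1e-12

# parallelogram identity: ||x+y||^2 + ||x-y||^2 = 2(||x||^2 + ||y||^2)
lhs = np.linalg.norm(x + y)**2 + np.linalg.norm(x - y)**2
rhs = 2 * (nx**2 + ny**2)
assert abs(lhs - rhs) < 1e-10
```

The parallelogram identity is special to inner product spaces: it fails, for example, for the maximum norm, which is why that norm is not generated by any inner product.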


3 The Hilbert space L²(Ω)

Let Ω ⊂ Rᵐ be a domain, i.e. an open and connected subset of Rᵐ. Let µ denote the Lebesgue measure (see Appendix). For brevity we write

∫_Ω f := ∫_Ω f(x) dµ(x).

Definition: ℒ²(Ω) is the set of all measurable functions f : Ω −→ K such that |f|² is integrable. Due to

∫_Ω |f + g|² ≤ ∫_Ω (|f| + |g|)² ≤ 2 ( ∫_Ω |f|² + ∫_Ω |g|² )

it is easy to see that ℒ²(Ω) is a vector space with respect to pointwise addition and scalar multiplication. Moreover, set

N := {f ∈ ℒ²(Ω) | f(x) = 0 a.e.}.

N is a linear subspace of ℒ²(Ω) (Check!). On ℒ²(Ω) we define the relation ∼ by

f ∼ g ⇐⇒ f − g ∈ N.

This is an equivalence relation (Check!). Hence it defines equivalence classes

[f] := {g ∈ ℒ²(Ω) | f ∼ g}, f ∈ ℒ²(Ω).

On these equivalence classes, we define an addition and a scalar multiplication by

[f ] + [g] := [f + g],

λ[f ] := [λf ],

f, g ∈ ℒ²(Ω), λ ∈ K. These operations are well-defined, i.e. independent of the representants of the equivalence classes (Check!).

Theorem and Definition: The set of these equivalence classes,

L²(Ω) := ℒ²(Ω)/N := {[f] | f ∈ ℒ²(Ω)},

is a linear space with respect to these operations (Check!).

Remark: In the sequel, we will write f instead of [f] for elements of L²(Ω). Note, however, that then f = g has to be interpreted as f(x) = g(x) for almost all x ∈ Ω.

Lemma 3.1 (Inner product on L²(Ω))
The mapping (·, ·) : L²(Ω) × L²(Ω) −→ K given by

(f, g) := ∫_Ω f ḡ

is an inner product on L²(Ω).


Proof: At first we note that (·, ·) is independent of the chosen representants, i.e. if f₁ ∼ f₂ and g₁ ∼ g₂ then (f₁, g₁) = (f₂, g₂) (Check!). To show that f ḡ is integrable we note that

|f ḡ| = |f||g| ≤ ½ (|f|² + |g|²),

hence f ḡ is integrable because of f, g ∈ L²(Ω).

Assume now (f, f) = ∫_Ω |f|² = 0. Set

N := {x ∈ Ω | f(x) ≠ 0}, Aₙ := {x ∈ Ω | |f(x)|² > 1/n}.

Then

0 ≤ (1/n) µ(Aₙ) ≤ ∫_{Aₙ} |f|² ≤ ∫_Ω |f|² = 0,

hence µ(Aₙ) = 0 for all n ∈ N. Furthermore, N = ⋃_{n=1}^∞ Aₙ and therefore

µ(N) ≤ ∑_{n=1}^∞ µ(Aₙ) = 0,

i.e. f(x) = 0 for almost all x ∈ Ω, i.e. f = 0 in L²(Ω).

The other properties are easy to check.

Corollary: Any function f ∈ L²(Ω) is integrable on any subset A ⊂ Ω having finite measure. In particular, |f(x)| < ∞ for almost all x ∈ Ω.

Proof: Define

g(x) := f(x)/|f(x)| if x ∈ A and f(x) ≠ 0, and g(x) := 0 if x ∉ A or f(x) = 0.

Then ∫_Ω |g|² ≤ µ(A) < ∞, hence g ∈ L²(Ω), and

∫_A |f| = (f, g) < ∞.

The norm generated by (·, ·) will be denoted by ‖ · ‖₂. Thus

‖f‖₂² := ∫_Ω |f|².
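The L² inner product can be approximated by quadrature. The sketch below (NumPy with the composite trapezoid rule on a uniform grid — an illustrative setup, not part of the notes) confirms that sin 2x and sin 3x are orthogonal in L²((0, π)) and that ‖sin 2x‖₂² = π/2:

```python
import numpy as np

x = np.linspace(0.0, np.pi, 20001)
h = x[1] - x[0]

def l2_inner(fv, gv):
    # composite trapezoid rule for (f, g) = ∫_0^π f ḡ dx
    w = fv * np.conj(gv)
    return h * (w.sum() - 0.5 * (w[0] + w[-1]))

u = np.sin(2 * x)
v = np.sin(3 * x)

assert abs(l2_inner(u, v)) < 1e-6               # sin 2x ⊥ sin 3x in L²((0, π))
assert abs(l2_inner(u, u) - np.pi / 2) < 1e-6   # squared norm equals π/2
```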

Theorem 3.2 (Completeness of L²(Ω))
The inner product space (L²(Ω), (·, ·)) is a Hilbert space.

Proof: Consider a sequence (f_k) in L²(Ω) such that

∑_{k=1}^∞ ‖f_k‖₂ = M < ∞.

We will show that ∑_{k=1}^∞ f_k, i.e. lim_{n→∞} ∑_{k=1}^n f_k, exists in L²(Ω); this implies the theorem by Lemma 1.4.


For x ∈ Ω, n ∈ N define

hₙ(x) := ∑_{k=1}^n |f_k(x)|, h(x) := ∑_{k=1}^∞ |f_k(x)|.

(Note that we do not assume that h(x) is finite almost everywhere.) Then

‖hₙ‖₂ ≤ ∑_{k=1}^n ‖ |f_k| ‖₂ = ∑_{k=1}^n ‖f_k‖₂ ≤ M.

Moreover, hₙ(x)² ↑ h(x)² for all x ∈ Ω. By the monotone convergence theorem,

∫_Ω h² = lim_{n→∞} ∫_Ω hₙ² ≤ M²,

hence h ∈ L²(Ω), and in particular h(x) < ∞ a.e.. Set N := {x ∈ Ω | h(x) = ∞} and

f(x) := ∑_{k=1}^∞ f_k(x) for x ∈ Ω\N, f(x) := 0 for x ∈ N.

Then |f(x)| ≤ h(x) for all x ∈ Ω, hence f ∈ L²(Ω). Moreover, by the definition of f,

| f(x) − ∑_{k=1}^n f_k(x) |² → 0 a.e. in Ω as n → ∞,

and

| f(x) − ∑_{k=1}^n f_k(x) |² = | ∑_{k=n+1}^∞ f_k(x) |² ≤ ( ∑_{k=1}^∞ |f_k(x)| )² ≤ h²(x) a.e. in Ω.

By the dominated convergence theorem,

∫_Ω | f(x) − ∑_{k=1}^n f_k(x) |² dx → 0 as n → ∞,

i.e. ‖f − ∑_{k=1}^n f_k‖₂ → 0 or f = ∑_{k=1}^∞ f_k in L²(Ω).

4 Test functions

Definition: Any vector α = (α₁, …, αₘ) ∈ Nᵐ is called an (m-dimensional) multiindex. We use multiindex notation for the description of partial derivatives and write

∂^α f := ∂^{|α|} f / (∂x₁^{α₁} … ∂xₘ^{αₘ}), |α| := α₁ + … + αₘ.

For A, B ⊂ Rᵐ we define

A + B := {x + y | x ∈ A, y ∈ B},

dist(A, B) := inf{d(x, y) | x ∈ A, y ∈ B}.
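For polynomials, applying ∂^α is purely mechanical. The helper `partial_alpha` below is a hypothetical illustration (the dictionary representation of a polynomial, mapping exponent tuples to coefficients, is an assumption of this sketch):

```python
from math import prod

def partial_alpha(f, alpha):
    """Apply ∂^α to a polynomial f given as {exponent tuple: coefficient}."""
    out = {}
    for exps, c in f.items():
        # a monomial survives only if every exponent is at least α_i
        if all(e >= a for e, a in zip(exps, alpha)):
            # each variable contributes the falling factorial e(e-1)...(e-a+1)
            factor = prod(prod(range(e - a + 1, e + 1)) for e, a in zip(exps, alpha))
            out[tuple(e - a for e, a in zip(exps, alpha))] = c * factor
    return out

alpha = (1, 2)               # |α| = 1 + 2 = 3
f = {(2, 3): 1}              # f(x₁, x₂) = x₁² x₂³
assert sum(alpha) == 3
assert partial_alpha(f, alpha) == {(1, 1): 12}   # ∂^α f = 12 x₁ x₂
```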


Let Ω ⊂ Rᵐ be a domain, f : Ω −→ K. The support of f is defined as

supp f := closure of {x ∈ Ω | f(x) ≠ 0}.

f is called finite (in Ω) iff supp f is bounded and supp f ⊂ Ω. Note that then supp f is compact and has a positive distance to the boundary of Ω.

A function φ : Ω −→ K is called smooth iff all its partial derivatives (of all orders) exist and are continuous on Ω. The set of all smooth functions on Ω is denoted by C^∞(Ω). It is a linear space.

A function φ : Ω −→ K is called a test function (on Ω) if φ is smooth and finite (in Ω). The set of all test functions on Ω is denoted by C₀^∞(Ω). It is a linear subspace of L²(Ω). (Check!)

Example: Set

ψ(x) := e^{−1/(1−|x|²)} for |x| < 1, ψ(x) := 0 for |x| ≥ 1.

Then ψ is a test function on all domains Ω that contain the closed unit ball B(0, 1). (Check!)
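A quick numerical look at the one-dimensional version of ψ illustrates why it is smooth despite the case distinction: it decays faster than any power as |x| → 1, so all derivatives match the zero function at the boundary (a sketch using NumPy):

```python
import numpy as np

def psi(x):
    # 1D mollifier kernel: exp(-1/(1-x^2)) for |x| < 1, and 0 otherwise
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

vals = psi(np.array([0.0, 0.999, 1.0, 2.0]))
assert abs(vals[0] - np.exp(-1.0)) < 1e-15   # ψ(0) = e^{-1}
assert vals[1] < 1e-200                      # extremely rapid decay toward |x| = 1
assert vals[2] == 0.0 and vals[3] == 0.0     # exact vanishing outside the ball
```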

For h > 0 and finite and continuous f on Rᵐ we define

ψ_h(x) := ψ(x/h) / ∫_{Rᵐ} ψ(x/h) dx, (4.1)

f_h(x) := (f ∗ ψ_h)(x) := ∫_{Rᵐ} f(x − y) ψ_h(y) dy.

Remark: f ∗ ψ_h is called the convolution product of f and ψ_h. Note that in general the convolution product of two finite continuous functions on Rᵐ is well defined, finite and continuous. (Why?)

Lemma 4.1 (Regularization by Convolution)

(i) ∂^α f_h = f ∗ ∂^α ψ_h, in particular, f_h ∈ C^∞(Rᵐ),

(ii) supp f_h ⊂ supp f + B(0, h),

(iii) f_h ∈ L²(Rᵐ) and ‖f − f_h‖₂ → 0 as h ↓ 0.

Proof: (i) Substituting z := x − y we get

f_h(x) = ∫_{Rᵐ} f(z) ψ_h(x − z) dz.

For j = 1, …, m, the functions (x, z) ↦ f(z)ψ_h(x − z) and (x, z) ↦ f(z)∂ⱼψ_h(x − z) are continuous and have bounded support. Hence integration and differentiation with respect to the parameter x may be interchanged, i.e.

∂ⱼf_h(x) = ∫_{Rᵐ} f(z) ∂ⱼψ_h(x − z) dz = (f ∗ ∂ⱼψ_h)(x).

Now (i) follows by repeated application of the same arguments.


(ii) Exercise!

(iii) It follows from (i) and (ii) that f_h ∈ L²(Rᵐ). Let ε > 0 be given. As f is uniformly continuous, there is an h₀ = h₀(ε) ∈ (0, 1) such that

|f(x) − f(x − y)| < ε for x, y ∈ Rᵐ, |y| ≤ h₀. (4.2)

Because of ∫_{Rᵐ} ψ_h(y) dy = 1 we have, for any x ∈ Rᵐ and any h ∈ (0, h₀),

f(x) = ∫_{Rᵐ} f(x) ψ_h(y) dy,

f(x) − f_h(x) = ∫_{Rᵐ} (f(x) − f(x − y)) ψ_h(y) dy = ∫_{|y|≤h} (f(x) − f(x − y)) ψ_h(y) dy,

|f(x) − f_h(x)| ≤ ∫_{|y|≤h} |f(x) − f(x − y)| ψ_h(y) dy ≤ ε ∫_{|y|≤h} ψ_h(y) dy = ε,

using (4.2) in the last estimate. As f and f_h both vanish outside the bounded set supp f + B(0, 1), we get

‖f − f_h‖₂² = ∫_{Rᵐ} |f(x) − f_h(x)|² dx ≤ ∫_{supp f + B(0,1)} ε² dx = ε² µ(supp f + B(0, 1)).

Hence ‖f − f_h‖₂ → 0 as h ↓ 0.

We will use the previous lemma to prove the following fundamental result on the space L²(Ω):

Lemma 4.2 (Approximation by test functions)
The subspace C₀^∞(Ω) is dense in L²(Ω).

Proof: For any function f ∈ L²(Ω) and any ε > 0 we have to find a function g ∈ C₀^∞(Ω) such that ‖f − g‖₂ < ε (cf. Lemma 1.3). Without loss of generality, we can assume that f is real and f ≥ 0 (Why?). The function g will be constructed in four steps.

1. Approximate f by a bounded finite function g₁:
For n ∈ N set

Ωₙ := {x ∈ Ω | |x| < n, dist(x, Rᵐ\Ω) > 1/n, f(x) ≤ n}; fₙ(x) := f(x) for x ∈ Ωₙ, fₙ(x) := 0 for x ∈ Ω\Ωₙ.

Then all fₙ are bounded, nonnegative, and finite. For all x ∈ Ω we have fₙ(x)² ↑ f(x)² and, by the monotone convergence theorem,

‖f − fₙ‖₂² = ∫_Ω (f(x) − fₙ(x))² dx = ∫_{Ω\Ωₙ} f(x)² dx = ∫_Ω f(x)² dx − ∫_Ω fₙ(x)² dx → 0

as n → ∞. Thus fₙ → f in L²(Ω). By choosing g₁ = fₙ with n sufficiently large we get ‖f − g₁‖₂ < ε/4.

2. Approximate g₁ by a step function g₂:
For n ∈ N, x ∈ Ω define

sₙ(x) := max{ k/n | k ∈ N, k/n ≤ g₁(x) }.

Then all sₙ are finite, nonnegative step functions (taking only a finite number of values). Furthermore,

‖g₁ − sₙ‖₂² = ∫_{supp g₁} |g₁ − sₙ|² ≤ µ(supp g₁) · 1/n² → 0.

Thus sₙ → g₁ in L²(Ω). By choosing g₂ = sₙ with n sufficiently large we get ‖g₁ − g₂‖₂ < ε/4.

3. Approximate g₂ by a finite continuous function g₃:
The step function g₂ can be written as

g₂ = ∑_{k=1}^N αₖ χ_{Aₖ}, 0 < αₖ ≤ M, χ_{Aₖ}(x) := 1 for x ∈ Aₖ, χ_{Aₖ}(x) := 0 for x ∉ Aₖ,

where all Aₖ are measurable and bounded and Āₖ ⊂ Ω. We start by approximating a single step function χ_A, A = Aₖ for some k. From measure theory it is known that there are a compact set K and an open set U such that K ⊂ A ⊂ U, Ū ⊂ Ω bounded, and µ(U\K) < (ε/(4MN))². Define

χ̃_A(x) := 1 for x ∈ K,
χ̃_A(x) := max{0, 1 − 2 dist(x, K)/dist(K, Ω\U)} for x ∈ U\K,
χ̃_A(x) := 0 for x ∈ Ω\U.

Then χ̃_A is continuous, has support in Ω and

‖χ̃_A − χ_A‖₂ ≤ µ(U\K)^{1/2} < ε/(4MN).

(Check!) Define

g₃ := ∑_{k=1}^N αₖ χ̃_{Aₖ}.

Then ‖g₃ − g₂‖₂ < ε/4, and g₃ is finite and continuous.

4. Approximate g₃ by a test function g:
Define g := (g₃)_h := g₃ ∗ ψ_h with h > 0. Lemma 4.1 ensures that g is a test function in Ω and ‖g − g₃‖₂ < ε/4 if h is chosen sufficiently small.

Summarizing, we get

‖f − g‖₂ ≤ ‖f − g₁‖₂ + ‖g₁ − g₂‖₂ + ‖g₂ − g₃‖₂ + ‖g₃ − g‖₂ ≤ ε.
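The regularization f ∗ ψ_h that drives this construction can be observed numerically. The sketch below (NumPy; the grid, the discrete kernel normalization and the particular "hat" function f are assumptions of the discretization) mollifies a continuous function with compact support and checks that ‖f − f_h‖₂ decreases as h ↓ 0:

```python
import numpy as np

def psi(t):
    # 1D mollifier kernel from the example above
    out = np.zeros_like(t)
    m = np.abs(t) < 1
    out[m] = np.exp(-1.0 / (1.0 - t[m] ** 2))
    return out

x = np.linspace(-3.0, 3.0, 3001)
dx = x[1] - x[0]
f = np.maximum(0.0, 1.0 - np.abs(x))     # continuous, finite "hat" function

def mollify(f, h):
    k = psi(x / h)
    k /= k.sum() * dx                    # discrete analogue of the normalization in (4.1)
    return np.convolve(f, k, mode="same") * dx

err = [np.sqrt(np.sum((f - mollify(f, h)) ** 2) * dx) for h in (0.5, 0.25, 0.125)]
assert err[0] > err[1] > err[2]          # ||f - f_h||_2 shrinks as h decreases
```

Each f_h is smooth, while f itself has a kink at 0 and at ±1; the mollified functions round off these kinks on a scale of order h.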

We will need the following generalization of Lemma 4.1:

Lemma 4.3 (Regularization of L²-functions)
Assume f ∈ L²(Ω). Then all statements of Lemma 4.1 remain true.

The proof uses the dominated convergence theorem to justify the interchanging of differentiation and convolution. For the details we refer to [Ev98], Appendix C, Theorem 6.


5 Weak derivatives

Definition: Assume f ∈ L²(Ω), i ∈ {1, …, m}. A function g ∈ L²(Ω) is called the i-th weak partial derivative of f iff

∫_Ω f ∂ᵢv dx = −∫_Ω g v dx ∀v ∈ C₀^∞(Ω).

Note that not all f ∈ L²(Ω) have weak derivatives.
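A standard concrete example: on Ω = (−1, 1), f(x) = |x| has weak derivative g(x) = sign x. The sketch below checks the defining identity ∫ f v′ dx = −∫ g v dx numerically for one test function (the grid and the particular bump v are assumptions of the discretization):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 40001)
dx = x[1] - x[0]
f = np.abs(x)          # f(x) = |x|, not differentiable at 0
g = np.sign(x)         # candidate weak derivative

# smooth test function supported in (-0.3, 0.7), a shifted/scaled bump
t = (x - 0.2) / 0.5
v = np.where(np.abs(t) < 1, np.exp(-1.0 / (1.0 - np.minimum(t**2, 1 - 1e-12))), 0.0)
vprime = np.gradient(v, dx)

lhs = np.sum(f * vprime) * dx      # ∫ f v' dx
rhs = -np.sum(g * v) * dx          # -∫ g v dx
assert abs(lhs - rhs) < 1e-6
```

This checks the identity for one v only; the definition of course requires it for all test functions.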

Lemma 5.1 (Elementary properties of weak derivatives)

(i) The weak partial derivatives are unique.

(ii) Assume f ∈ C¹(Ω) ∩ L²(Ω) and ∂ᵢf ∈ L²(Ω). Then ∂ᵢf is the weak derivative of f.

Proof: (i) Assume f has two weak partial derivatives g₁, g₂ ∈ L²(Ω). Then

∫_Ω (g₁ − g₂) v dx = 0 ∀v ∈ C₀^∞(Ω),

i.e. g₁ − g₂ ⊥ C₀^∞(Ω). By Lemmas 2.3 and 4.2, this implies g₁ − g₂ ⊥ L²(Ω), therefore g₁ = g₂.

(ii) Assume without loss of generality i = 1. Fix v ∈ C₀^∞(Ω). Extend v outside Ω by 0. Fix a bounded open set U such that supp v ⊂ U and Ū ⊂ Ω. Set

χ_U(x) := 1 for x ∈ U, χ_U(x) := 0 for x ∉ U,

and η := χ_U ∗ ψ_h with 0 < h < min(dist(supp v, Rᵐ\U), dist(U, Rᵐ\Ω)). Then η ∈ C₀^∞(Ω) and η ≡ 1 on supp v (Check!). Now define f̃ by

f̃(x) := f(x)η(x) for x ∈ Ω, f̃(x) := 0 for x ∉ Ω.

Then f̃ = f and ∂₁f̃ = ∂₁f on supp v. Now, by Fubini’s theorem,

∫_Ω f ∂₁v dx = ∫_{supp v} f̃ ∂₁v dx = ∫_{Rᵐ} f̃ ∂₁v dx = ∫_{Rᵐ⁻¹} ( ∫_R f̃ ∂₁v dx₁ ) dx₂ … dxₘ
= −∫_{Rᵐ⁻¹} ( ∫_R ∂₁f̃ v dx₁ ) dx₂ … dxₘ = −∫_{Rᵐ} ∂₁f̃ v dx = −∫_{supp v} ∂₁f̃ v dx
= −∫_{supp v} ∂₁f v dx = −∫_Ω ∂₁f v dx.

Lemma 5.1 allows us to speak about the weak partial derivative of a function f ∈ L²(Ω) and to write ∂ᵢf for it. Analogously, we call g ∈ L²(Ω) the weak partial derivative of order α, α ∈ Nᵐ, iff

∫_Ω f ∂^α v dx = (−1)^{|α|} ∫_Ω g v dx ∀v ∈ C₀^∞(Ω).

Again, this weak partial derivative is unique and coincides with the strong one if the latter exists.


Lemma 5.2 (Vanishing weak gradient)
Assume f ∈ L²(Ω), ∂ᵢf = 0 for i = 1, …, m. Then there is a constant c such that f = c almost everywhere.

Proof: For (sufficiently small) δ > 0 define

Ω_δ := {x ∈ Ω | dist(x, Rᵐ\Ω) > δ}.

It is sufficient to show that f = c a.e. on Ω_δ for any δ > 0. Choose v ∈ C₀^∞(Ω_δ) and h ∈ (0, δ). Then by Lemma 4.1, v ∗ ψ_h ∈ C₀^∞(Ω) and ∂ᵢ(v ∗ ψ_h) = ∂ᵢv ∗ ψ_h, i = 1, …, m. Therefore

0 = ∫_Ω ∂ᵢf (v ∗ ψ_h) dx = −∫_Ω f ∂ᵢ(v ∗ ψ_h) dx = −∫_Ω f (∂ᵢv ∗ ψ_h) dx
= −∫_Ω f(x) ∫_{Ω_δ} ∂ᵢv(y) ψ_h(x − y) dy dx = −∫_{Ω_δ} ( ∫_Ω f(x) ψ_h(y − x) dx ) ∂ᵢv(y) dy
= −∫_{Ω_δ} (f ∗ ψ_h)(y) ∂ᵢv(y) dy,

using Fubini’s theorem and the symmetry ψ_h(x − y) = ψ_h(y − x).

By Lemma 4.3, f ∗ ψ_h is smooth and in L²(Ω), and ∂ᵢ(f ∗ ψ_h) = f ∗ ∂ᵢψ_h is in L²(Ω) as well. The above calculation shows that the weak derivatives of f ∗ ψ_h vanish on Ω_δ, and by Lemma 5.1 (ii) this is the usual derivative. Hence there is a constant c_h such that f ∗ ψ_h = c_h a.e. on Ω_δ. Letting h ↓ 0 and using Lemma 4.3 again gives

f = lim_{h↓0} f ∗ ψ_h = lim_{h↓0} c_h = c in L²(Ω_δ),

because the L²-limit of a sequence of constant functions has to be (a.e.) constant. (Why?)

For m = 1, we can generalize the Fundamental Theorem of Calculus in the following way:

Lemma 5.3 (“Weak” fundamental theorem of calculus)
Let I ⊂ R be an interval, g ∈ L²(I), x₀ ∈ I and

f(x) := ∫_{x₀}^x g(t) dt, x ∈ I.

Then f is continuous and has weak derivative g.

Proof: Continuity follows from the dominated convergence theorem (Check!). For v ∈ C₀^∞(I), fix a, b ∈ I such that supp v ⊂ [a, b] and x₀ ∈ [a, b]. Then we have

∫_I f v′ dx = −∫_a^{x₀} ( ∫_x^{x₀} g(t)v′(x) dt ) dx + ∫_{x₀}^b ( ∫_{x₀}^x g(t)v′(x) dt ) dx
= −∫_a^{x₀} ( ∫_a^t g(t)v′(x) dx ) dt + ∫_{x₀}^b ( ∫_t^b g(t)v′(x) dx ) dt
= −∫_a^{x₀} g(t) ( ∫_a^t v′(x) dx ) dt + ∫_{x₀}^b g(t) ( ∫_t^b v′(x) dx ) dt
= −∫_a^b g(t)v(t) dt,

where Fubini’s theorem was used in the second step. This proves f′ = g as a weak derivative.
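Lemma 5.3 can be illustrated numerically even for a discontinuous g: the primitive f is continuous, and the defining identity of the weak derivative holds. (The step function g, the grid, the trapezoid-rule primitive and the bump v below are assumptions of this sketch.)

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 40001)
dx = x[1] - x[0]
g = np.where(x < 0, -1.0, 2.0)                 # discontinuous g ∈ L²((-1, 1))
# primitive f(x) = ∫_{-1}^x g(t) dt via the composite trapezoid rule
f = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * dx)))

# f is Lipschitz continuous although g jumps:
assert np.max(np.abs(np.diff(f))) <= 2 * dx + 1e-12

# smooth test function supported in (-0.3, 0.7)
t = (x - 0.2) / 0.5
v = np.where(np.abs(t) < 1, np.exp(-1.0 / (1.0 - np.minimum(t**2, 1 - 1e-12))), 0.0)
vprime = np.gradient(v, dx)

lhs = np.sum(f * vprime) * dx                  # ∫ f v' dx
rhs = -np.sum(g * v) * dx                      # -∫ g v dx
assert abs(lhs - rhs) < 1e-3
```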


6 The Sobolev space W^{1,2}(Ω)

Definition:

W^{1,2}(Ω) := {u ∈ L²(Ω) | the weak derivatives ∂ᵢu exist in L²(Ω), i = 1, …, m}.

This subspace of L²(Ω) is called a Sobolev space. (Check that W^{1,2}(Ω) is indeed a linear space!)

On W^{1,2}(Ω) we define the inner product (·, ·)_{W^{1,2}} by

(u, v)_{W^{1,2}} := (u, v) + ∑_{i=1}^m (∂ᵢu, ∂ᵢv),

where (·, ·) on the right denotes the inner product of L²(Ω), and the corresponding norm ‖ · ‖_{W^{1,2}} by

‖u‖²_{W^{1,2}} := ‖u‖²_{L²(Ω)} + ∑_{i=1}^m ‖∂ᵢu‖²_{L²(Ω)}.
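For a concrete function this norm can be evaluated by quadrature. For u(x) = sin(πx) on Ω = (0, 1) one has ‖u‖₂² = 1/2 and ‖u′‖₂² = π²/2, so ‖u‖²_{W^{1,2}} = (1 + π²)/2; the sketch below (NumPy, trapezoid rule) confirms this:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10001)
h = x[1] - x[0]
u = np.sin(np.pi * x)
du = np.pi * np.cos(np.pi * x)       # here the classical derivative is the weak one

def l2sq(w):
    # composite trapezoid rule for ∫_0^1 |w|² dx
    return h * (np.sum(w**2) - 0.5 * (w[0]**2 + w[-1]**2))

sobolev_sq = l2sq(u) + l2sq(du)      # squared W^{1,2} norm
assert abs(sobolev_sq - (1 + np.pi**2) / 2) < 1e-8
```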

Examples:
1. If Ω has finite measure then any function in C¹(Ω̄) is in W^{1,2}(Ω). (Why?)

2. Assume m = 2,

Ω := {x ∈ R² | |x| < 1/2},

and let u : Ω −→ R be given by u(x) := log |log |x||. We will show that u ∈ W^{1,2}(Ω): Clearly u ∈ L²(Ω) (Check!). Assume v ∈ C₀^∞(Ω) and set

Ω_ε := {x ∈ R² | ε < |x| < 1/2}.

Note that u is smooth on Ω_ε. By dominated convergence and integration by parts,

∫_Ω u ∂ᵢv dx = lim_{ε↓0} ∫_{Ω_ε} u ∂ᵢv dx = lim_{ε↓0} ( −∫_{Ω_ε} ∂ᵢu v dx − ∫_{|x|=ε} (xᵢ/|x|) u v dS_ε ),

i = 1, 2. On Ω_ε we have ∂ᵢu = gᵢ with

gᵢ(x) := xᵢ / (|x|² log |x|), x ∈ Ω,

where gᵢ ∈ L²(Ω) (Check!). Hence gᵢv is integrable on Ω and therefore

lim_{ε↓0} ∫_{Ω_ε} ∂ᵢu v dx = ∫_Ω gᵢ v dx.

Furthermore,

| ∫_{|x|=ε} (xᵢ/|x|) u v dS_ε | ≤ C ε log |log ε| → 0 as ε ↓ 0.

(Check!) Thus

∫_Ω u ∂ᵢv dx = −∫_Ω gᵢ v dx, v ∈ C₀^∞(Ω),


i.e. ∂ᵢu = gᵢ in the sense of weak derivatives. This shows u ∈ W^{1,2}(Ω).

Danger: This example shows that for m ≥ 2 functions in W^{1,2}(Ω) are not necessarily continuous!
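The integrals behind this example can be checked numerically. With the substitution t = log r one gets ∫_Ω |∇u|² dx = 2π ∫_{−∞}^{−log 2} t^{−2} dt = 2π/log 2 < ∞, while u itself is unbounded near the origin (the quadrature grid and its cut-off below are assumptions of the sketch):

```python
import numpy as np

# |log r| ranges over (log 2, ∞) on Ω; integrate s^{-2} on a log-spaced grid
s = np.logspace(np.log10(np.log(2.0)), 6, 200001)
w = s**-2.0
integral = np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(s))   # ≈ ∫_{log 2}^∞ s^{-2} ds = 1/log 2
grad_sq = 2 * np.pi * integral                           # ≈ ∫_Ω |∇u|² dx
assert abs(grad_sq - 2 * np.pi / np.log(2.0)) < 1e-3     # finite Dirichlet energy

# u(x) = log|log|x|| grows without bound as |x| → 0:
r = 10.0 ** np.arange(-3.0, -16.0, -3.0)
u = np.log(np.abs(np.log(r)))
assert np.all(np.diff(u) > 0)
```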

For m = 1, however, we have a simpler situation:

Theorem 6.1 (W^{1,2}-functions in 1D)
Let −∞ ≤ a < b ≤ ∞, I = (a, b).

(i) Any equivalence class of functions in W^{1,2}(I) contains a representant which is continuous on I.

(ii) For this representant, we have

sup_{x∈I} |u(x)| ≤ C‖u‖_{W^{1,2}}

with C depending only on the length of I.

(iii) If a = −∞ then lim_{x→−∞} u(x) = 0; if b = ∞ then lim_{x→∞} u(x) = 0.

Proof: (i) Fix u ∈ W^{1,2}(I), x₀ ∈ I and define ũ : I −→ K by

ũ(x) := ∫_{x₀}^x u′(t) dt, x ∈ I.

By Lemma 5.3, ũ is continuous and ũ′ = u′, and therefore by Lemma 5.2 there is a c ∈ K such that u = ũ + c a.e. in I. In particular, this implies that u is equivalent to a continuous function. We will keep the notation u for this continuous representant.

(ii) Note that

u(x) − u(y) = ∫_y^x u′(t) dt, x, y ∈ I (6.1)

with u from the proof of (i). (Check!) Assume first −∞ < a < b < ∞ and define

ū := (1/(b − a)) ∫_a^b u dx, u₁ := u − ū.

Then ∫ u₁ dx = 0, u₁′ = u′, and as u₁ is continuous, there is a y₀ ∈ I such that u₁(y₀) = 0 (Check!). Now, for x ∈ I,

|u(x)| ≤ |u₁(x)| + |ū| = | ∫_{y₀}^x u₁′(t) dt | + (1/(b − a)) | ∫_a^b u(t) dt | ≤ C(‖u′‖₂ + ‖u‖₂) ≤ C‖u‖_{W^{1,2}},

using (6.1) and the Cauchy-Schwarz inequality. This proves (ii) for bounded intervals.

If b = ∞ then, for any x ∈ I, we restrict u to the subinterval (x, x + 1) ⊂ I and use the result for bounded intervals which has just been proved. Thus,

|u(x)| ≤ C‖u‖_{W^{1,2}((x,x+1))} ≤ C‖u‖_{W^{1,2}((a,∞))}. (6.2)


If a = −∞ we proceed analogously.

(iii) Assume b = ∞. By the dominated convergence theorem,

‖u‖²_{W^{1,2}((x,x+1))} = ∫_x^{x+1} |u|² dt + ∫_x^{x+1} |u′|² dt → 0 as x → ∞,

and from the first inequality in (6.2) we get lim_{x→∞} u(x) = 0. If a = −∞ we proceed analogously.

To show completeness of W^{1,2}(Ω) with respect to this norm we will use the following lemma:

Lemma 6.2 (Convergence of weak derivatives)
Let (uₙ) be a sequence in W^{1,2}(Ω) such that uₙ → u in L²(Ω) and ∂ᵢuₙ → wᵢ in L²(Ω) for some u, wᵢ ∈ L²(Ω), i = 1, …, m. Then ∂ᵢu = wᵢ. In particular, u ∈ W^{1,2}(Ω).

Proof: Fix i ∈ {1, …, m}, v ∈ C₀^∞(Ω). Due to uₙ → u in L²(Ω) we have, by the Cauchy-Schwarz inequality,

| ∫_Ω uₙ ∂ᵢv dx − ∫_Ω u ∂ᵢv dx | ≤ ‖uₙ − u‖_{L²(Ω)} ‖∂ᵢv‖_{L²(Ω)} → 0

and thus

∫_Ω u ∂ᵢv dx = lim_{n→∞} ∫_Ω uₙ ∂ᵢv dx.

Analogously, from ∂ᵢuₙ → wᵢ in L²(Ω) we get

∫_Ω wᵢ v dx = lim_{n→∞} ∫_Ω ∂ᵢuₙ v dx.

Therefore,

∫_Ω u ∂ᵢv dx = lim_{n→∞} ∫_Ω uₙ ∂ᵢv dx = −lim_{n→∞} ∫_Ω ∂ᵢuₙ v dx = −∫_Ω wᵢ v dx

for all v ∈ C₀^∞(Ω). This means ∂ᵢu = wᵢ.

Theorem 6.3 (The Hilbert space W^{1,2}(Ω))
The space W^{1,2}(Ω) is complete with respect to the norm ‖ · ‖_{W^{1,2}}.

Proof: Let (uₙ) be a Cauchy sequence in W^{1,2}(Ω). This implies that (uₙ) is a Cauchy sequence in L²(Ω) and (∂ᵢuₙ) is a Cauchy sequence in L²(Ω) for i = 1, …, m (Check!). As L²(Ω) is complete by Theorem 3.2 there are functions u, wᵢ ∈ L²(Ω) such that uₙ → u and ∂ᵢuₙ → wᵢ in L²(Ω). By Lemma 6.2, this implies u ∈ W^{1,2}(Ω) and ∂ᵢuₙ → ∂ᵢu in L²(Ω). Therefore uₙ → u in W^{1,2}(Ω). (Check!)

Differing from the situation in L²(Ω), the test functions are in general not dense in W^{1,2}(Ω). This is true only for Ω = Rᵐ:

Lemma 6.4 (Approximation by test functions in W^{1,2}(Rᵐ))
C₀^∞(Rᵐ) is dense in W^{1,2}(Rᵐ).


Proof: Let f ∈ W^{1,2}(Rᵐ) and ε > 0 be given.

1. Approximate f by g₁ ∈ W^{1,2}(Rᵐ) ∩ C^∞(Rᵐ):
Let ψ_h be defined as in (4.1). By Lemma 4.3, we have f ∗ ψ_h ∈ C^∞(Rᵐ) ∩ L²(Rᵐ) and f ∗ ψ_h → f in L²(Rᵐ) for h ↓ 0. Moreover, for i = 1, …, m we get ∂ᵢ(f ∗ ψ_h) = f ∗ ∂ᵢψ_h. Note that y ↦ ψ_h(x − y) is in C₀^∞(Rᵐ) for all x ∈ Rᵐ. Therefore,

(f ∗ ∂ᵢψ_h)(x) = ∫_{Rᵐ} f(y) ∂ᵢψ_h(x − y) dy = −∫_{Rᵐ} f(y) (∂/∂yᵢ)[ψ_h(x − y)] dy
= ∫_{Rᵐ} ∂ᵢf(y) ψ_h(x − y) dy = (∂ᵢf ∗ ψ_h)(x).

Hence ∂ᵢ(f ∗ ψ_h) = ∂ᵢf ∗ ψ_h → ∂ᵢf in L²(Rᵐ) for all i = 1, …, m by Lemma 4.3, and therefore f ∗ ψ_h → f in W^{1,2}(Rᵐ) for h ↓ 0. Consequently, if h is chosen small enough and g₁ := f ∗ ψ_h then ‖f − g₁‖_{W^{1,2}} < ε/2.

2. Approximate g₁ by g ∈ C₀^∞(Rᵐ):
Fix a function η ∈ C₀^∞(Rᵐ) such that 0 ≤ η ≤ 1 and η(0) = 1, and define ηₙ ∈ C₀^∞(Rᵐ) for n ∈ N by ηₙ(x) := η(x/n). Then g₁ηₙ ∈ C₀^∞(Rᵐ) for all n and ηₙ(x) → 1 for all x ∈ Rᵐ as n → ∞. Therefore

‖g₁ − g₁ηₙ‖₂² = ∫_{Rᵐ} |g₁|²(1 − ηₙ)² dx → 0

as n → ∞ by the dominated convergence theorem. Moreover, for i = 1, …, m,

‖∂ᵢ(g₁ − g₁ηₙ)‖₂ ≤ ‖∂ᵢg₁ − (∂ᵢg₁)ηₙ‖₂ + ‖g₁ ∂ᵢηₙ‖₂.

For the first term on the right, we find

‖∂ᵢg₁ − (∂ᵢg₁)ηₙ‖₂ → 0 as n → ∞

as above, and for the second term we estimate

‖g₁ ∂ᵢηₙ‖₂ ≤ ‖g₁‖₂ max_{x∈Rᵐ} |∂ᵢηₙ(x)| = (1/n) ‖g₁‖₂ max_{x∈Rᵐ} |∂ᵢη(x)| → 0 as n → ∞.

Hence g₁ηₙ → g₁ in W^{1,2}(Rᵐ) as n → ∞, and if n is sufficiently large and g := g₁ηₙ then

‖f − g‖_{W^{1,2}} ≤ ‖f − g₁‖_{W^{1,2}} + ‖g₁ − g‖_{W^{1,2}} ≤ ε/2 + ε/2 = ε.

Definition: Let E, F be normed spaces, let U ⊂ E. A function u : U −→ F is called Lipschitz continuous iff there is a constant L > 0 such that

‖u(x) − u(y)‖_F ≤ L‖x − y‖_E, x, y ∈ U.

Let Ω ⊂ Rᵐ be a domain. The set ∂Ω := Ω̄\Ω is called the boundary of Ω.

Let m > 1. A domain Ω ⊂ Rᵐ is called a Lipschitz domain iff ∂Ω can locally be represented as graph of a Lipschitz continuous function, i.e. iff for any x₀ ∈ ∂Ω there are an orthonormal basis e₁, …, eₘ of Rᵐ, an ε > 0 and a Lipschitz continuous function Φ : B_{Rᵐ⁻¹}(0, ε) → R such that

∂Ω ∩ B_{Rᵐ}(x₀, ε) = { x₀ + ∑_{k=1}^{m−1} zₖeₖ + Φ(z)eₘ | |z| < ε } ∩ B_{Rᵐ}(x₀, ε).

We state the following results without proof:

Theorem 6.5 (Approximation by smooth functions in W^{1,2}(Ω))

(i) W^{1,2}(Ω) ∩ C^∞(Ω) is dense in W^{1,2}(Ω).

(ii) Let m = 1 or let Ω be a Lipschitz domain. Then

C₀^∞(Rᵐ)|_Ω := {u|_Ω | u ∈ C₀^∞(Rᵐ)}

is a dense subset of W^{1,2}(Ω).

Corollary: Let Ω be a bounded interval or a bounded Lipschitz domain. Then C¹(Ω̄) is a dense subset of W^{1,2}(Ω).

Proof: Exercise!

7 The space W^{1,2}_0(Ω)

Definition: The space W^{1,2}_0(Ω) is the closure of C₀^∞(Ω) in W^{1,2}(Ω).

As a closed subspace of the Hilbert space W^{1,2}(Ω), the space W^{1,2}_0(Ω) is a Hilbert space as well (with respect to the same inner product).

According to Lemma 6.4, W^{1,2}_0(Rᵐ) = W^{1,2}(Rᵐ). In general, however, W^{1,2}_0(Ω) is strictly smaller than W^{1,2}(Ω). It contains those functions from W^{1,2}(Ω) which have “zero boundary values” in a generalized sense.

Danger: As functions in W^{1,2}(Ω) are defined only up to a set of measure zero, they do not have well-defined boundary values in the usual sense of continuous extension!

If m = 1, however, we choose the continuous representant which, by Theorem 6.1, has a continuous extension to the boundary. Then:

Lemma 7.1 (Vanishing boundary values in 1D)
Let I be an interval. If x₀ ∈ ∂I and u ∈ W^{1,2}_0(I) then u(x₀) = 0.

Proof: Fix u ∈ W^{1,2}_0(I). By definition of W^{1,2}_0(I), there is a sequence (uₙ) in C₀^∞(I) such that uₙ → u in W^{1,2}. From Theorem 6.1 we know that u(x₀) exists and, since uₙ(x₀) = 0,

|u(x₀)| = |u(x₀) − uₙ(x₀)| ≤ sup_{x∈I} |(u − uₙ)(x)| ≤ C‖u − uₙ‖_{W^{1,2}} → 0 as n → ∞.

This implies the assertion.

In the general case, the following result is easy to see:

Lemma 7.2 (Functions vanishing near ∂Ω)
Assume u ∈ W^{1,2}(Ω) and there is a compact set K ⊂ Ω such that u = 0 a.e. in Ω\K. Then u ∈ W^{1,2}_0(Ω).


Proof: Approximate u by u_h := u ∗ ψ_h (cf. the proof of Lemma 6.4). The details are left as an exercise.

More generally, we have the following result which we will not prove. Define

C₀(Ω̄) := {u ∈ C(Ω̄) | u|_{∂Ω} = 0}.

Theorem 7.3 (Continuous functions in W^{1,2}_0(Ω))

(i) W^{1,2}(Ω) ∩ C₀(Ω̄) ⊂ W^{1,2}_0(Ω).

(ii) Assume m = 1 or Ω is a Lipschitz domain. Then

W^{1,2}(Ω) ∩ C₀(Ω̄) = W^{1,2}_0(Ω) ∩ C(Ω̄).

We are going to prove two inequalities which will be crucial for the treatment of second-order elliptic boundary value problems:

Lemma 7.4 (Poincaré inequality¹)
Let Ω ⊂ Rᵐ be a bounded domain.

(i) There is a constant C depending only on Ω such that

‖u‖₂ ≤ C ‖ |∇u| ‖₂ (7.1)

for all u ∈ W^{1,2}_0(Ω).

(ii) Assume additionally m = 1 or Ω is a Lipschitz domain. Then there is a constant C depending only on Ω such that

‖u‖₂ ≤ C ( ‖ |∇u| ‖₂ + | ∫_Ω u dx | ) (7.2)

for all u ∈ W^{1,2}(Ω).

Proof of (i): 1. We show (7.1) for u ∈ C₀^∞(Ω): Extend u by zero to Rᵐ. Write x = (x₁, …, xₘ). As Ω is bounded there are a, b ∈ R such that a ≤ x₁ ≤ b for x ∈ Ω. Then u(a, x₂, …, xₘ) = 0 and therefore, by the Cauchy-Schwarz inequality,

|u(x)| = | ∫_a^{x₁} ∂₁u(t, x₂, …, xₘ) dt | ≤ C ( ∫_a^{x₁} |∂₁u(t, x₂, …, xₘ)|² dt )^{1/2} ≤ C ( ∫_R |∇u(t, x₂, …, xₘ)|² dt )^{1/2}

for all x ∈ Rᵐ. Squaring and integrating over R with respect to x₁ gives

∫_R |u(x₁, …, xₘ)|² dx₁ = ∫_a^b |u(x₁, …, xₘ)|² dx₁ ≤ (b − a) max_{a≤x₁≤b} |u(x₁, …, xₘ)|² ≤ C ∫_R |∇u(t, x₂, …, xₘ)|² dt.

¹ also known as Friedrichs’ inequality


Using this, we find

$$\int_\Omega |u(x)|^2\, dx = \int_{\mathbb{R}^m} |u(x)|^2\, dx \stackrel{\text{Fubini}}{=} \int_{\mathbb{R}^{m-1}} \left(\int_{\mathbb{R}} |u(x_1, \dots, x_m)|^2\, dx_1\right) dx_2 \dots dx_m \le C \int_{\mathbb{R}^{m-1}} \left(\int_{\mathbb{R}} |\nabla u(t, x_2, \dots, x_m)|^2\, dt\right) dx_2 \dots dx_m \stackrel{\text{Fubini}}{=} C \int_{\mathbb{R}^m} |\nabla u(x)|^2\, dx = C \int_\Omega |\nabla u(x)|^2\, dx.$$

This implies (7.1) for test functions $u$.

2. Let now $u \in W^{1,2}_0(\Omega)$ be arbitrary. There is a sequence $(u_n)$ in $C^\infty_0(\Omega)$ such that $u_n \stackrel{W^{1,2}}{\longrightarrow} u$. This implies $u_n \stackrel{L^2}{\longrightarrow} u$, $\partial_i u_n \stackrel{L^2}{\longrightarrow} \partial_i u$ for $i = 1, \dots, m$, $\|u_n\|_2 \to \|u\|_2$, and $\|\,|\nabla u_n|\,\|_2 \to \|\,|\nabla u|\,\|_2$ for $n \to \infty$ (Check!). From Step 1 we have

$$\|u_n\|_2 \le C\|\,|\nabla u_n|\,\|_2.$$

Now (7.1) follows by taking the limit $n \to \infty$.

Proof of (ii) in the case $m = 1$: We have $\Omega = (a, b)$, $a, b \in \mathbb{R}$. Without loss of generality, let $\mathbb{K} = \mathbb{R}$. We choose the continuous representative for $u \in W^{1,2}((a, b))$ and estimate

$$|u(x) - u(y)| = \left|\int_y^x u'(t)\, dt\right| \stackrel{\text{Cauchy--Schwarz}}{\le} C\left(\int_a^b |u'(t)|^2\, dt\right)^{1/2} = C\|u'\|_2.$$

The result now follows by squaring this inequality and integrating with respect to $x$ and $y$ over $(a, b)$. The details are left as an exercise.
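The inequality (7.1) can also be observed numerically. The following sketch (grid and sample functions are illustrative and not from the text) tests it on $\Omega = (0,1)$, where for functions vanishing at both endpoints the best possible constant is $1/\pi$, attained by $u(x) = \sin(\pi x)$:

```python
import numpy as np

# Numerical check of the Poincare inequality (7.1) on Omega = (0,1):
# ||u||_2 <= C ||u'||_2 for u vanishing at the boundary.  The best
# constant on (0,1) is 1/pi, attained by u(x) = sin(pi x).

x = np.linspace(0.0, 1.0, 2001)

def integrate(f):
    # composite trapezoidal rule on the grid x
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def ratio(u):
    du = np.gradient(u, x)          # finite-difference derivative
    return np.sqrt(integrate(u**2) / integrate(du**2))

u1 = np.sin(np.pi * x)      # extremal function: ratio close to 1/pi
u2 = x * (1.0 - x)          # another admissible function

assert ratio(u1) <= 1.0 / np.pi + 1e-3
assert ratio(u2) <= 1.0 / np.pi + 1e-3
```

For a function not vanishing at the boundary, such as $u \equiv 1$, the right-hand side of (7.1) is zero while the left-hand side is not, which is exactly why the inequality is restricted to $W^{1,2}_0(\Omega)$.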

8 Linear operators

Let $E$ and $F$ be normed spaces. Their norms will be denoted by $\|\cdot\|_E$ and $\|\cdot\|_F$, respectively.

Definitions: A mapping $A : E \longrightarrow F$ is called a linear operator iff

• $A(x + y) = Ax + Ay$ for all $x, y \in E$,

• $A(\lambda x) = \lambda Ax$ for all $x \in E$, $\lambda \in \mathbb{K}$.

A linear operator $A$ is called bounded iff there is a $C > 0$ such that

$$\|Ax\|_F \le C\|x\|_E \quad \text{for all } x \in E,$$

or, equivalently,

$$\|Ax\|_F \le C \quad \text{for all } x \in E \text{ with } \|x\|_E \le 1.$$

A linear operator $A$ is called continuous iff for any sequence $(x_n)$ in $E$:

$$x_n \stackrel{E}{\to} x_* \;\Rightarrow\; Ax_n \stackrel{F}{\to} Ax_*.$$

Lemma 8.1 (Boundedness and continuity)
A linear operator is bounded if and only if it is continuous.


Proof: Let $A$ be bounded and $x_n \stackrel{E}{\to} x_*$. Then

$$\|Ax_n - Ax_*\|_F = \|A(x_n - x_*)\|_F \le C\|x_n - x_*\|_E \stackrel{n\to\infty}{\longrightarrow} 0,$$

hence $Ax_n \stackrel{F}{\to} Ax_*$.

Conversely, let $A$ be continuous. We argue by contradiction: assume $A$ is not bounded. Then there is a sequence $(x_n)$ in $E$ such that $\|x_n\|_E = 1$ and $\|Ax_n\|_F \ge n$. Set $y_n := x_n/n$. Then

$$\|Ay_n\|_F \ge 1, \tag{8.1}$$

$$\|y_n\|_E = \frac{1}{n} \stackrel{n\to\infty}{\longrightarrow} 0,$$

hence $y_n \stackrel{E}{\to} 0$. The continuity of $A$ implies $Ay_n \stackrel{F}{\to} A0 = 0$, hence $\|Ay_n\|_F \stackrel{n\to\infty}{\longrightarrow} 0$, but this clearly contradicts (8.1). Hence $A$ is bounded.

The set of all bounded linear operators from $E$ to $F$ is denoted by $L(E,F)$. This set

is a vector space with respect to the addition and scalar multiplication given by

$$(A + B)x := Ax + Bx, \qquad (\lambda A)x := \lambda(Ax)$$

for $A, B \in L(E,F)$, $x \in E$, $\lambda \in \mathbb{K}$. (Check!) For $A \in L(E,F)$ we define

$$\|A\|_{L(E,F)} := \sup_{x \in E \setminus \{0\}} \frac{\|Ax\|_F}{\|x\|_E} = \sup_{x \in E,\ \|x\|_E \le 1} \|Ax\|_F = \sup_{x \in E,\ \|x\|_E = 1} \|Ax\|_F. \tag{8.2}$$

Lemma 8.2 (The normed space $L(E,F)$)
The mapping $\|\cdot\|_{L(E,F)}$ is a norm on $L(E,F)$. If $F$ is a Banach space, then $L(E,F)$ is a Banach space.

Proof: The first part of the lemma is easy to check. To show the second part, assume $F$ is a Banach space and let $(A_n)$ be a Cauchy sequence in $L(E,F)$. For any $x \in E$, we have

$$\|A_n x - A_m x\|_F \le \|A_n - A_m\|_{L(E,F)} \|x\|_E \stackrel{n,m\to\infty}{\longrightarrow} 0,$$

hence $(A_n x)$ is a Cauchy sequence in $F$. Therefore, it is convergent, and we set, for any $x \in E$,

$$A_* x := \lim_{n\to\infty} A_n x.$$

It is easy to see that $A_*$ is a linear operator. (Check!) We have to show that it is bounded and that $A_n \stackrel{L(E,F)}{\longrightarrow} A_*$, or equivalently, $\|A_n - A_*\|_{L(E,F)} \stackrel{n\to\infty}{\longrightarrow} 0$. Choose $\varepsilon > 0$ and $x \in E$ with $\|x\|_E \le 1$. There is an $n_0 \in \mathbb{N}$ such that $\|A_n - A_m\|_{L(E,F)} < \varepsilon/2$ for all $m, n \ge n_0$, and there is an $m \ge n_0$ such that $\|A_m x - A_* x\|_F < \varepsilon/2$. Consequently,

$$\|(A_n - A_*)x\|_F \le \|A_n - A_m\|_{L(E,F)} + \|A_m x - A_* x\|_F \le \varepsilon \tag{8.3}$$

for all $n \ge n_0$. This shows $A_n - A_* \in L(E,F)$, hence also $A_* = A_n - (A_n - A_*) \in L(E,F)$, and taking the supremum over $x \in E$, $\|x\|_E \le 1$ in (8.3) gives $\|A_n - A_*\|_{L(E,F)} \stackrel{n\to\infty}{\longrightarrow} 0$.

Definition: The (Banach) space $L(E,\mathbb{K})$ is called the dual space of $E$; it is denoted by $E'$, and its elements are called (bounded) linear functionals.


9 Orthogonal projections in Hilbert spaces

Let $H$ be a Hilbert space with inner product $(\cdot,\cdot)$, and let $H_1$ be a closed subspace of $H$. (Note that then $(H_1, (\cdot,\cdot))$ is also a Hilbert space - check!) For fixed $x \in H$ we consider the following approximation problem:

$$\|x - y\| \longrightarrow \min! \qquad y \in H_1. \tag{M1}$$

Theorem 9.1 (Orthogonal projections)

(i) Problem (M1) has a unique solution $y_* \in H_1$.

(ii) $y_*$ is the unique solution of the variational equality

$$(x - y_*, h) = 0 \quad \forall h \in H_1. \tag{V1}$$

(iii) For any $x \in H$, there is precisely one pair of elements $x_1, x_2$ such that

$$x = x_1 + x_2, \quad x_1 \in H_1, \quad x_2 \perp H_1.$$

Moreover, $x_1 = y_*$.

(iv) The mapping $P : H \longrightarrow H_1$ defined by $Px := y_*$ is in $L(H,H_1)$ and satisfies

$$P^2 := P \circ P = P.$$

If $H_1 \ne \{0\}$ then $\|P\|_{L(H,H_1)} = 1$.

Proof: 1. We show that (M1) has a solution: Let

$$d := d(x, H_1) = \inf_{y \in H_1} \|x - y\|$$

be the distance from $x$ to $H_1$. There is a sequence $(y_k)$ in $H_1$ such that

$$\|x - y_k\|^2 \stackrel{k\to\infty}{\longrightarrow} d^2.$$

Fix $k, l \in \mathbb{N}$ and set $a := x - y_k$, $b := x - y_l$. Then $a + b = 2\big(x - \frac{y_k + y_l}{2}\big)$. As $H_1$ is a subspace, $\frac{y_k + y_l}{2} \in H_1$ and therefore $\|a + b\|^2 \ge 4d^2$. By the parallelogram identity,

$$\|y_k - y_l\|^2 = \|a - b\|^2 = 2(\|a\|^2 + \|b\|^2) - \|a + b\|^2 \le 2(\|x - y_k\|^2 + \|x - y_l\|^2) - 4d^2 \stackrel{k,l\to\infty}{\longrightarrow} 0.$$

Hence $(y_k)$ is a Cauchy sequence. As $H$ is a Hilbert space, there is a $y_* \in H$ with $y_k \stackrel{H}{\to} y_*$. As $H_1$ is closed, $y_* \in H_1$ by Lemma 1.2, and by Lemma 1.1

$$\|x - y_*\| = \|x - \lim_{k\to\infty} y_k\| = \|\lim_{k\to\infty} (x - y_k)\| = \lim_{k\to\infty} \|x - y_k\| = d.$$

Therefore, $y_*$ solves (M1).


2. We show that any solution $y_*$ of (M1) solves (V1): The equality in (V1) is obviously true for $h = 0$. Assume $h \in H_1 \setminus \{0\}$, $\lambda \in \mathbb{R}$. Then $y_* + \lambda h \in H_1$, and therefore

$$\|x - y_* - \lambda h\|^2 \ge \|x - y_*\|^2$$

or, after using the definition of the norm and simplifying,

$$-2\lambda \operatorname{Re}(x - y_*, h) + \lambda^2 \|h\|^2 \ge 0$$

for all $\lambda \in \mathbb{R}$. Setting $\lambda = \frac{\operatorname{Re}(x - y_*, h)}{\|h\|^2}$ yields

$$\operatorname{Re}(x - y_*, h) = 0$$

for all $h \in H_1 \setminus \{0\}$. If $\mathbb{K} = \mathbb{R}$, this is our result; if $\mathbb{K} = \mathbb{C}$ we replace $h$ by $ih$ and find

$$\operatorname{Re}(x - y_*, ih) = \operatorname{Re}(-i(x - y_*, h)) = \operatorname{Im}(x - y_*, h) = 0.$$

This shows (V1) also for $\mathbb{K} = \mathbb{C}$.

3. We show that the solution of (V1) is unique: Let $y_1, y_2$ be two solutions of (V1). Subtracting the corresponding variational equalities yields $(y_1 - y_2, h) = 0$ for all $h \in H_1$. Setting $h := y_1 - y_2$ yields $\|y_1 - y_2\|^2 = 0$, hence $y_1 = y_2$. As any solution of (M1) is also a solution of (V1), the solution of (M1) is also unique. Hence (i) and (ii) are proved.

4. We show (iii): By (ii), the elements $x_1 := y_*$ and $x_2 := x - y_*$ have the properties demanded in (iii). To show uniqueness, assume a pair $x_1, x_2 \in H$ has the properties given in (iii). This implies

$$(x - x_1, h) = 0 \quad \forall h \in H_1.$$

By (ii), $x_1 = y_*$ and $x_2 = x - y_*$.

5. We show (iv): For arbitrary $x, z \in H$ we have from (iii)

$$x = Px + (x - Px), \quad Px \in H_1, \quad x - Px \perp H_1, \tag{9.1}$$

$$z = Pz + (z - Pz), \quad Pz \in H_1, \quad z - Pz \perp H_1, \tag{9.2}$$

$$x + z = P(x+z) + \big(x + z - P(x+z)\big), \quad P(x+z) \in H_1, \quad x + z - P(x+z) \perp H_1. \tag{9.3}$$

On the other hand, by adding (9.1) and (9.2) we get

$$x + z = P(x) + P(z) + \big(x + z - (P(x) + P(z))\big),$$

with $P(x) + P(z) \in H_1$ and $x + z - (P(x) + P(z)) \perp H_1$. Comparing this to (9.3) and using the uniqueness statement of (iii), we get

$$P(x + z) = P(x) + P(z).$$

Analogously, one shows

$$P(\lambda x) = \lambda P(x), \quad x \in H, \ \lambda \in \mathbb{K}.$$

Hence $P$ is a linear operator. By Pythagoras' theorem,

$$\|x\|^2 = \|Px\|^2 + \|x - Px\|^2,$$


hence $\|Px\| \le \|x\|$ for all $x \in H$ and therefore $P \in L(H,H_1)$ with $\|P\|_{L(H,H_1)} \le 1$. Moreover, clearly $Px = x$ iff $x \in H_1$, thus $P^2 = P$. If $H_1 \ne \{0\}$ then there is an $x_0 \in H_1 \setminus \{0\}$, and $Px_0 = x_0$ implies $\|P\|_{L(H,H_1)} \ge 1$.

Definitions: The linear operator defined in Theorem 9.1 (iv) is called the orthogonal projection of $H$ onto $H_1$. The decomposition in (iii) is called the orthogonal decomposition of $x$ with respect to $H_1$. Any sequence $(y_n)$ in $H_1$ such that

$$\|x - y_n\| \to \inf_{y \in H_1} \|x - y\|$$

is called a minimizing sequence for (M1).
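In $H = \mathbb{R}^n$ the orthogonal projection onto the subspace spanned by the columns of a matrix $B$ has the explicit form $P = B(B^TB)^{-1}B^T$. The following sketch (with an illustrative random subspace) verifies the properties listed in Theorem 9.1:

```python
import numpy as np

# Orthogonal projection in H = R^5 onto H1 = span of the columns of B,
# illustrating Theorem 9.1: P^2 = P, ||Px|| <= ||x||, x - Px is
# orthogonal to H1, and Px is the best approximation from H1.

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 2))          # basis of the closed subspace H1

# projection matrix P = B (B^T B)^{-1} B^T
P = B @ np.linalg.solve(B.T @ B, B.T)

x = rng.standard_normal(5)
Px = P @ x

assert np.allclose(P @ P, P)             # P^2 = P
assert np.allclose(B.T @ (x - Px), 0)    # x - Px is perpendicular to H1
assert np.linalg.norm(Px) <= np.linalg.norm(x) + 1e-12   # ||P|| <= 1

# Px solves the approximation problem (M1): any other element of H1
# lies at least as far from x.
y = B @ rng.standard_normal(2)
assert np.linalg.norm(x - Px) <= np.linalg.norm(x - y)
```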

10 The theorems of Riesz and Lax-Milgram

Let $H$ be a Hilbert space. Any element $y \in H$ determines a bounded linear functional $g_y$ on $H$ via

$$g_y(x) := (x, y).$$

The following important theorem states that any bounded linear functional on $H$ can be represented in that way.

Theorem 10.1 (Riesz representation theorem)
For any $f \in H'$, there is precisely one $y \in H$ such that

$$f(x) = (x, y) \quad \forall x \in H. \tag{10.1}$$

Moreover, $\|f\|_{H'} = \|y\|_H$.

Proof: 1. If $f = 0$ then clearly (10.1) is satisfied with $y = 0$. Assume $f \ne 0$ and choose $x_0 \in H$ such that $f(x_0) = 1$. The subset

$$H_1 := \{x \in H \mid f(x) = 0\}$$

is a closed subspace of $H$. (Check!) Let $P$ denote the orthogonal projection of $H$ onto $H_1$ and set $u := x_0 - Px_0$. Then $u \ne 0$, $u \perp H_1$ by Theorem 9.1, and

$$f(u) = f(x_0) - \underbrace{f(Px_0)}_{=0} = 1.$$

Therefore, for any $x \in H$,

$$f(x - f(x)u) = f(x) - f(x)f(u) = 0,$$

hence $x - f(x)u \in H_1$. As $u \perp H_1$ we get

$$(x, u) = f(x)\|u\|^2,$$

and $f(x) = (x, y)$ with $y := u/\|u\|^2$.


2. To show the uniqueness of $y$, assume

$$f(x) = (x, y_1) = (x, y_2) \quad \forall x \in H.$$

Then $(x, y_1 - y_2) = 0$ for all $x \in H$, and choosing $x := y_1 - y_2$ yields $\|y_1 - y_2\| = 0$, hence $y_1 = y_2$.

3. We show $\|f\|_{H'} = \|y\|_H$: From (10.1) and the Cauchy-Schwarz inequality we get

$$|f(x)| \le \|y\| \|x\|,$$

hence $\|f\|_{H'} \le \|y\|$. If $y = 0$, this implies equality. Otherwise, from $f(y) = |f(y)| = \|y\|^2$ we get

$$\|y\| = \frac{|f(y)|}{\|y\|} \le \|f\|_{H'},$$

and the equality is proved.

Definitions: A mapping $a : H \times H \longrightarrow \mathbb{K}$ is called a sesquilinear form iff:

• $a$ is linear in the first argument:

$$a(\lambda x + \mu y, z) = \lambda a(x, z) + \mu a(y, z), \quad x, y, z \in H, \ \lambda, \mu \in \mathbb{K},$$

• $a$ is antilinear in the second argument:

$$a(x, \lambda y + \mu z) = \bar\lambda\, a(x, y) + \bar\mu\, a(x, z), \quad x, y, z \in H, \ \lambda, \mu \in \mathbb{K}.$$

If $\mathbb{K} = \mathbb{R}$ then $a$ is linear in both arguments and is called a bilinear form.

A sesquilinear form is called bounded iff there is a $C > 0$ such that

$$|a(x, y)| \le C\|x\| \, \|y\|, \quad x, y \in H. \tag{10.2}$$

A sesquilinear form is called $H$-elliptic iff there is a $c > 0$ such that

$$\operatorname{Re} a(x, x) \ge c\|x\|^2, \quad x \in H. \tag{10.3}$$

Let $f \in H'$ be given. We consider the variational problem

$$a(x, y) = f(x) \quad \forall x \in H. \tag{V2}$$

Theorem 10.2 (Lax-Milgram theorem)
Assume that $a$ is a bounded, $H$-elliptic sesquilinear form. Then, for any $f \in H'$ there is precisely one $y \in H$ such that (V2) holds. It satisfies the estimate

$$\|y\|_H \le \frac{1}{c}\|f\|_{H'}$$

with $c$ from (10.3).


Remark: The inner product of $H$ is a bounded, $H$-elliptic sesquilinear form itself. In the special case $a(\cdot,\cdot) = (\cdot,\cdot)$, the Lax-Milgram theorem coincides with the Riesz representation theorem.

Proof of Theorem 10.2: 1. We reformulate (V2) as an operator equation in $H$: For any fixed $y$, the mapping $x \mapsto a(x, y)$ is a bounded linear functional on $H$, hence by the Riesz representation theorem, there is a unique $z \in H$ such that

$$a(x, y) = (x, z) \quad \forall x \in H.$$

We define a mapping $A : H \longrightarrow H$ by $Ay := z$. Again by the Riesz representation theorem, there is a unique $w_f \in H$ such that

$$f(x) = (x, w_f) \quad \forall x \in H.$$

Hence (V2) is equivalent to

$$(x, Ay) = (x, w_f) \quad \forall x \in H,$$

or to

$$Ay = w_f. \tag{10.4}$$

2. We show that (10.4) has a solution for all right-hand sides, i.e. that

$$R := \{Ay \mid y \in H\} = H.$$

2.1. $A$ is linear: For any $x, y_1, y_2 \in H$, $\lambda, \mu \in \mathbb{K}$ we have

$$(x, A(\lambda y_1 + \mu y_2)) = a(x, \lambda y_1 + \mu y_2) = \bar\lambda\, a(x, y_1) + \bar\mu\, a(x, y_2) = \bar\lambda(x, Ay_1) + \bar\mu(x, Ay_2) = (x, \lambda Ay_1 + \mu Ay_2),$$

and therefore $A(\lambda y_1 + \mu y_2) = \lambda Ay_1 + \mu Ay_2$. (Check!) Therefore $R$ is a linear subspace of $H$. (Check!)

2.2. $A$ is bounded: For any $y \in H$,

$$\|Ay\|^2 = (Ay, Ay) = a(Ay, y) \le C\|Ay\| \, \|y\|.$$

Hence $\|Ay\| \le C\|y\|$.

2.3. $R$ is closed: Note at first that

$$c\|y\|^2 \le \operatorname{Re} a(y, y) = \operatorname{Re}(y, Ay) \le \|y\| \, \|Ay\|,$$

therefore

$$\|Ay\| \ge c\|y\|. \tag{10.5}$$

Now let $(z_k)$ be a sequence in $R$ with $z_k \stackrel{H}{\to} z_*$. There is a sequence $(y_k)$ in $H$ such that $Ay_k = z_k$. Due to (10.5) we have

$$\|y_k - y_l\| \le \frac{1}{c}\|Ay_k - Ay_l\| = \frac{1}{c}\|z_k - z_l\| \stackrel{k,l\to\infty}{\longrightarrow} 0,$$


thus $(y_k)$ is a Cauchy sequence, hence it converges to some $y_* \in H$. As $A$ is continuous, we have

$$z_* = \lim_{k\to\infty} z_k = \lim_{k\to\infty} Ay_k = A \lim_{k\to\infty} y_k = Ay_* \in R.$$

Hence $R$ is closed by Lemma 1.2.

2.4. $R = H$: Let $P$ denote the orthogonal projection onto $R$. Assume there is an $x \in H \setminus R$. Then $x_0 := x - Px \ne 0$ and $x_0 \perp R$. Therefore,

$$c\|x_0\|^2 \le \operatorname{Re} a(x_0, x_0) = \operatorname{Re}(x_0, Ax_0) = 0,$$

in contradiction to $x_0 \ne 0$.

3. To show uniqueness for the solution of (10.4) it is sufficient to remark that $Ay_1 = Ay_2$ implies by (10.5)

$$c\|y_1 - y_2\| \le \|A(y_1 - y_2)\| = 0,$$

hence $y_1 = y_2$.

4. The estimate on $y$ follows from

$$\|y\|^2 \le \frac{1}{c}\operatorname{Re} a(y, y) \le \frac{1}{c}\operatorname{Re} f(y) \le \frac{1}{c}|f(y)| \le \frac{1}{c}\|f\|_{H'}\|y\|.$$

Assume now additionally that

$$a(x, y) = \overline{a(y, x)}, \quad x, y \in H. \tag{10.6}$$

In connection with (V2) we are going to consider the minimization problem

$$J(y) := \tfrac{1}{2}a(y, y) - \operatorname{Re} f(y) \longrightarrow \min! \qquad y \in H. \tag{M2}$$

(Note that $J$ is real-valued because of $a(y, y) = \overline{a(y, y)}$, i.e. $a(y, y) \in \mathbb{R}$.)

Theorem 10.3 (Quadratic minimization problems)
Let all the assumptions of Theorem 10.2 and (10.6) be satisfied. Then the solution of (V2) is also the unique solution of (M2).

Proof: Let $y$ denote the solution of (V2). Then, for any $x \in H$,

$$J(x) - J(y) = \tfrac{1}{2}\big(a(x, x) - a(y, y)\big) - \operatorname{Re} f(x - y) = \tfrac{1}{2}\big(a(x, x) - a(y, y) - 2\operatorname{Re} a(x - y, y)\big).$$

By (10.6),

$$2\operatorname{Re} a(x - y, y) = a(x - y, y) + \overline{a(x - y, y)} = a(x - y, y) + a(y, x - y) = a(x, y) + a(y, x) - 2a(y, y),$$

and hence

$$J(x) - J(y) = \tfrac{1}{2}\big(a(x, x) - a(x, y) - a(y, x) + a(y, y)\big) = \tfrac{1}{2}a(x - y, x - y) \ge \tfrac{c}{2}\|x - y\|^2.$$

Therefore, $J(x) > J(y)$ for $x \ne y$, i.e. $y$ is the unique global minimum of $J$.
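A finite-dimensional sketch may help fix ideas: on $H = \mathbb{R}^3$ with $a(x, y) = x^T A y$ for a symmetric positive definite matrix $A$ (an illustrative choice, not from the text), the variational equation (V2) becomes the linear system $Ay = w$, and Theorem 10.3 says its solution minimizes $J$:

```python
import numpy as np

# Finite-dimensional sketch of Lax-Milgram and Theorem 10.3 on H = R^3.
# The bilinear form a(x, y) = x^T A y is bounded; it is H-elliptic since
# A is symmetric positive definite.  The solution y of a(x, y) = f(x)
# for all x minimizes J(v) = (1/2) a(v, v) - f(v).

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # symmetric positive definite
w = np.array([1.0, 0.0, -1.0])   # f(x) = w^T x

# a(x, y) = f(x) for all x  <=>  A y = w
y = np.linalg.solve(A, w)

def J(v):
    return 0.5 * v @ A @ v - w @ v

# J(x) - J(y) = (1/2) a(x - y, x - y) >= (c/2) |x - y|^2, with the
# ellipticity constant c = smallest eigenvalue of A
c = np.linalg.eigvalsh(A).min()
rng = np.random.default_rng(3)
for _ in range(5):
    x = rng.standard_normal(3)
    gap = J(x) - J(y)
    assert gap >= 0.5 * c * np.dot(x - y, x - y) - 1e-9
```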


11 Application to elliptic boundary value problems

11.1 Linear second order elliptic equations with homogeneous Dirichlet boundary condition

Let $\Omega \subset \mathbb{R}^m$ be a domain.

Definition: Let $L^\infty(\Omega)$ denote the space of all (equivalence classes of) essentially bounded measurable functions on $\Omega$, i.e.

$$L^\infty(\Omega) := \{u : \Omega \longrightarrow \mathbb{K} \mid |u(x)| \le C \text{ a.e. in } \Omega \text{ for some } C \ge 0\}.$$

On $L^\infty(\Omega)$ we introduce the norm $\|\cdot\|_\infty$ by

$$\|u\|_\infty := \inf\{C \ge 0 \mid |u(x)| \le C \text{ a.e. in } \Omega\}.$$

(Check that $(L^\infty(\Omega), \|\cdot\|_\infty)$ is indeed a normed space!)

Let $W^{1,\infty}(\Omega)$ denote the space of all functions in $L^\infty(\Omega)$ having weak derivatives in $L^\infty(\Omega)$.

Lemma 11.1 (Pointwise products in $L^2(\Omega)$)
Let $a \in L^\infty(\Omega)$, $u \in L^2(\Omega)$. Then the pointwise product $au$ given by $(au)(x) = a(x)u(x)$, $x \in \Omega$, is in $L^2(\Omega)$ and

$$\|au\|_2 \le \|a\|_\infty \|u\|_2.$$

Proof: Exercise!

From now on, let $\Omega$ be bounded and $\mathbb{K} = \mathbb{R}$. Consider the general second-order linear boundary value problem

$$\begin{aligned}
-\partial_i(a_{ij}(x)\partial_j u(x)) + b_i(x)\partial_i u(x) + c(x)u(x) &= \phi(x) && \text{in } \Omega,\\
u &= 0 && \text{on } \partial\Omega,
\end{aligned} \tag{11.1}$$

where $i, j = 1, \dots, m$ and we sum over all indices occurring twice in the same product. The function $u$ is unknown, and the following data are given:

$$a_{ij}, c \in L^\infty(\Omega), \quad b_i \in W^{1,\infty}(\Omega), \quad \phi \in L^2(\Omega). \tag{11.2}$$

Our crucial assumption for the problem (11.1) to be elliptic is

$$a_{ij}(x)\xi_i\xi_j \ge \lambda|\xi|^2, \quad x \in \Omega, \ \xi \in \mathbb{R}^m \tag{11.3}$$

for some $\lambda > 0$, called the ellipticity constant. Moreover, we assume

$$-\tfrac{1}{2}\partial_i b_i(x) + c(x) \ge 0, \quad x \in \Omega. \tag{11.4}$$

(In both inequalities, it is sufficient that they hold for almost all $x \in \Omega$.)

Examples: The following problems fit into the general framework given above (Check!):

1. The Poisson problem:

$$-\Delta u = \phi \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega.$$

2. A general convection-diffusion-reaction equation:

$$-\Delta u + \vec b \cdot \nabla u + cu = \phi \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,$$

if $-\tfrac{1}{2}\operatorname{div}\vec b + c \ge 0$.

3. The Sturm-Liouville problem

$$-(pu')' + qu = \phi \ \text{in } (0,1), \qquad u(0) = u(1) = 0,$$

with $p \in C^1([0,1])$, $q \in C([0,1])$, $p(x) > 0$, $q(x) \ge 0$ for $x \in [0,1]$.
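To see how Example 3 fits the framework, one can check the hypotheses (11.3) and (11.4) directly. The sketch below does this on a grid for illustrative coefficients $p(x) = 1 + x^2$, $q(x) = x$ (these particular choices are not from the text):

```python
import numpy as np

# The Sturm-Liouville problem (Example 3) fits the framework
# (11.1)-(11.4) with m = 1, a_11 = p, b_1 = 0, c = q.  For the
# illustrative choice p(x) = 1 + x^2, q(x) = x we check the
# ellipticity condition (11.3) and the sign condition (11.4).

x = np.linspace(0.0, 1.0, 1001)
p = 1.0 + x**2
q = x

lam = p.min()            # (11.3): a_11(x) xi^2 >= lam xi^2 with lam = min p
assert lam > 0
assert np.all(q >= 0)    # (11.4): b_1 = 0, so the condition reduces to q >= 0
```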

Definition: A weak solution to the problem (11.1) is a function $u \in W^{1,2}_0(\Omega)$ such that (11.1)$_1$ holds in the sense of weak derivatives.

Danger: The boundary condition in the second equation has to be interpreted in a generalized sense unless $u$ has a continuous extension to $\overline\Omega$.

Lemma 11.2 (Strong solutions are weak solutions)
Assume $a_{ij} \in C^1(\Omega)$ and $u \in C^2(\Omega) \cap C^1(\overline\Omega)$ solves (11.1). Then $u$ is a weak solution of (11.1).

Proof: By our assumptions, all derivatives in (11.1)$_1$ exist as classical derivatives, therefore also as weak ones. It remains to show that $u \in W^{1,2}_0(\Omega)$. As $\Omega$ is bounded, we get $u \in W^{1,2}(\Omega)$ from $u \in C^1(\overline\Omega)$ and finally $u \in W^{1,2}_0(\Omega)$ from (11.1)$_2$ and Theorem 7.3 (i).

Lemma 11.3 (Weak or variational formulation)
A function $u \in W^{1,2}_0(\Omega)$ is a weak solution to (11.1) iff

$$a(u, v) = f(v) \quad \forall v \in W^{1,2}_0(\Omega), \tag{11.5}$$

where

$$a(u, v) := \int_\Omega (a_{ij}\partial_j u\, \partial_i v + b_i\partial_i u\, v + cuv)\, dx, \qquad f(v) := \int_\Omega \phi v\, dx.$$

Definition: The variational equation (11.5) is called the weak or variational formulation of (11.1).

To prove Lemma 11.3, and for later use, we will need the following simple result:

Lemma 11.4 (Boundedness of $a$ and $f$)

(i) $a$ is a bounded bilinear form on $W^{1,2}(\Omega)$.

(ii) $f$ is a bounded linear form on $W^{1,2}(\Omega)$ with

$$\|f\|_{(W^{1,2}(\Omega))'} \le \|\phi\|_2.$$


Proof: Exercise!

Proof of Lemma 11.3: "$\Rightarrow$": Let $u$ be a weak solution to (11.1). By our definitions,

$$a(u, v) = f(v) \quad \forall v \in C^\infty_0(\Omega). \tag{11.6}$$

Now for $v \in W^{1,2}_0(\Omega)$ arbitrary, choose a sequence $(v_n)$ of test functions such that $v_n \stackrel{W^{1,2}}{\longrightarrow} v$. By the continuity of $a$ and $f$ and by (11.6),

$$a(u, v) = \lim_{n\to\infty} a(u, v_n) = \lim_{n\to\infty} f(v_n) = f(v).$$

"$\Leftarrow$": Let $u \in W^{1,2}_0(\Omega)$ solve (11.5). Then, in particular, (11.6) holds, and therefore $u$ is a weak solution to (11.1).

Lemma 11.5 (Smooth weak solutions are strong solutions)
Let $m = 1$ or let $\Omega$ be a Lipschitz domain. Assume $a_{ij} \in C^1(\Omega)$ and $u \in C^2(\Omega) \cap C(\overline\Omega)$ is a weak solution to (11.1). Then (11.1)$_1$ is satisfied in the sense of classical derivatives, and (11.1)$_2$ holds.

Proof: The function $a_{ij}\partial_j u$ is continuously differentiable for $i = 1, \dots, m$. Hence, for arbitrary $v \in C^\infty_0(\Omega)$, we have by partial integration on $\operatorname{supp} v$

$$\int_\Omega a_{ij}\partial_j u\, \partial_i v\, dx = -\int_\Omega \partial_i(a_{ij}\partial_j u)\, v\, dx.$$

Consequently, writing

$$z := -\partial_i(a_{ij}\partial_j u) + b_i\partial_i u + cu - \phi,$$

we get

$$\int_\Omega zv\, dx = (z, v) = 0 \quad \forall v \in C^\infty_0(\Omega)$$

and therefore, by Lemma 4.2, also for all $v \in L^2(\Omega)$. Hence $z = 0$, i.e. (11.1)$_1$ holds (almost everywhere in $\Omega$). The validity of (11.1)$_2$ follows from Theorem 7.3 (ii).

According to these results, the analysis of (11.1) proceeds in two steps:

1. Show existence and uniqueness of a weak solution.

2. Under the assumption of higher regularity for the data, prove higher regularity of the solution.

We will concentrate here on the first step.

Theorem 11.6 (Existence and uniqueness of a weak solution to (11.1))
Problem (11.5) has precisely one solution $u \in W^{1,2}_0(\Omega)$. It satisfies an estimate

$$\|u\|_{W^{1,2}} \le C\|\phi\|_2$$

with $C$ independent of $\phi$.


Proof: We set $H := W^{1,2}_0(\Omega)$ and want to apply the Lax-Milgram theorem 10.2. By Lemma 11.4, both $a$ and $f$ are bounded, thus it remains to check that $a$ is $W^{1,2}_0(\Omega)$-elliptic. Assume first that $u \in C^\infty_0(\Omega)$. We have

$$\int_\Omega b_i\partial_i u\, u\, dx = \tfrac{1}{2}\int_\Omega b_i\partial_i(u^2)\, dx = -\tfrac{1}{2}\int_\Omega \partial_i b_i\, u^2\, dx.$$

Consequently, by (11.3), (11.4) and the Poincaré inequality (7.1),

$$a(u, u) = \int_\Omega \big(a_{ij}\partial_i u\, \partial_j u + (-\tfrac{1}{2}\partial_i b_i + c)u^2\big)\, dx \ge \lambda\int_\Omega |\nabla u|^2\, dx \ge \frac{\lambda}{2}\left(\int_\Omega |\nabla u|^2\, dx + \frac{1}{C^2}\|u\|_2^2\right) \ge \alpha\|u\|_{W^{1,2}}^2$$

with $\alpha = \min\big(\frac{\lambda}{2}, \frac{\lambda}{2C^2}\big) > 0$.

Taking now $u \in W^{1,2}_0(\Omega)$ arbitrary and approximating it by a sequence $(u_n)$ of test functions with $u_n \stackrel{W^{1,2}}{\longrightarrow} u$ we find

$$a(u, u) = \lim_{n\to\infty} a(u_n, u_n) \ge \lim_{n\to\infty} \alpha\|u_n\|_{W^{1,2}}^2 = \alpha\|u\|_{W^{1,2}}^2.$$

(Check!) Now our result follows from Theorem 10.2 and Lemma 11.4.
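The weak formulation (11.5) is also the starting point for numerical methods. The following Galerkin sketch (illustrative, not from the text) uses piecewise linear finite elements for the simplest instance of (11.1), namely $-u'' = \phi$ on $(0,1)$ with $a_{ij} = \delta_{ij}$, $b_i = 0$, $c = 0$, and compares the discrete solution of $a(u, v) = f(v)$ with a known exact solution:

```python
import numpy as np

# Galerkin sketch for -u'' = phi on (0,1), u(0) = u(1) = 0, the simplest
# case of (11.1).  We discretize the weak formulation a(u, v) = f(v)
# with piecewise linear finite elements; phi = pi^2 sin(pi x) gives the
# exact solution u(x) = sin(pi x).

n = 100                      # number of interior nodes
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# stiffness matrix: a(phi_i, phi_j) = int phi_i' phi_j' dx for hat functions
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

phi = np.pi**2 * np.sin(np.pi * x)
F = h * phi                  # f(phi_i) approximated by one-point quadrature

u = np.linalg.solve(K, F)
u_exact = np.sin(np.pi * x)

err = np.max(np.abs(u - u_exact))
assert err < 1e-3            # second-order accuracy: err = O(h^2)
```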

11.2 Linear second order elliptic equations with homogeneous natural boundary condition

Assume $\mathbb{K} = \mathbb{R}$.

If $m > 1$, assume that $\Omega$ is a Lipschitz domain and assume additionally that $\Omega$ has a piecewise $C^1$ boundary, i.e. $\partial\Omega$ consists of a finite number of $C^1$-hypersurfaces, i.e. zero level sets of $C^1$ functions with nonvanishing gradient. A boundary point $x \in \partial\Omega$ is called regular iff there is an $\varepsilon > 0$ such that $\partial\Omega \cap B(x, \varepsilon)$ is a $C^1$-hypersurface.

On the set $(\partial\Omega)^*$ of regular points of $\partial\Omega$, we introduce the outer normal vector field $n : (\partial\Omega)^* \longrightarrow \mathbb{R}^m$ by the following demands: For all $x \in (\partial\Omega)^*$:

• $n(x)$ is orthogonal to $\partial\Omega$ at $x$,

• $|n(x)| = 1$,

• $n(x)$ is directed outwards, i.e. $x + \tau n(x) \notin \Omega$ for all sufficiently small $\tau > 0$.

It can be shown that such an $n$ exists, is uniquely determined by these demands, and is continuous. It is possible to introduce a Lebesgue measure and a corresponding integral on $\partial\Omega$ such that the integral of continuous functions on $(\partial\Omega)^*$ can be calculated in the usual way. Moreover, it can be shown that the set of the boundary points that are not regular is a set of measure zero and therefore does not play a role when integrals are calculated. For the domain $\Omega$, Gauss' integral theorem is valid, and therefore we find

$$\int_\Omega \partial_i u\, v\, dx = -\int_\Omega u\, \partial_i v\, dx + \int_{\partial\Omega} uv\, n_i\, dS, \quad i = 1, \dots, m, \quad u, v \in C^1(\Omega) \cap C(\overline\Omega). \tag{11.7}$$

(Check! What is the corresponding formula for $m = 1$?)


We consider the general second-order linear boundary value problem

$$\begin{aligned}
-\partial_i(a_{ij}\partial_j u) + b_i\partial_i u + cu &= \phi && \text{in } \Omega,\\
a_{ij}\partial_j u\, n_i &= 0 && \text{on } (\partial\Omega)^*.
\end{aligned} \tag{11.8}$$

Here we assume again (11.2) for the data.

Danger: For the boundary condition (11.8)$_2$ to make sense, we have to demand that the functions $a_{ij}$ have well-defined values almost everywhere on $\partial\Omega$. This is the case if, for example, $a_{ij}$ is piecewise continuous on $\overline\Omega$. However, as we will see below, this is not necessary if we restrict our attention to weak solutions.

Again we assume (11.3) and replace (11.4) by the stronger demands

$$-\tfrac{1}{2}\partial_i b_i + c \ge \mu > 0 \tag{11.9}$$

for almost all $x \in \Omega$ and

$$b_i n_i \ge 0 \ \text{on } (\partial\Omega)^*. \tag{11.10}$$

Examples:

1. A general convection-diffusion-reaction equation:

$$-\Delta u + \vec b \cdot \nabla u + cu = \phi \ \text{in } \Omega, \qquad \nabla u \cdot n =: \frac{\partial u}{\partial n} = 0 \ \text{on } (\partial\Omega)^*,$$

if $-\tfrac{1}{2}\operatorname{div}\vec b + c \ge \mu > 0$. (Check!)

The directional derivative $\dfrac{\partial u}{\partial n}$ is called the outer normal derivative of $u$.

2. The Sturm-Liouville problem

$$-(pu')' + qu = \phi \ \text{in } (0,1), \qquad u'(0) = u'(1) = 0,$$

with $p \in C^1([0,1])$, $q \in C([0,1])$, $p(x) > 0$, $q(x) > 0$ for $x \in [0,1]$.

Recall the definitions of $a$ and $f$ from Lemma 11.3.

Definition: A weak solution to (11.8) is a function $u \in W^{1,2}(\Omega)$ such that

$$a(u, v) = f(v) \quad \forall v \in W^{1,2}(\Omega). \tag{11.11}$$

The variational equation (11.11) is called the weak or variational formulation of our boundary value problem (11.8).

The relations between the strong and weak formulations are as follows:

Lemma 11.7

(i) (Weak solutions)

Let $u$ be a weak solution to (11.8). Then $u$ solves (11.8)$_1$ in the sense of weak derivatives.


(ii) (Strong solutions are weak solutions)

Assume $a_{ij} \in C^1(\Omega) \cap C(\overline\Omega)$ and let $u \in C^2(\Omega) \cap C^1(\overline\Omega)$ be a (strong) solution of (11.8). Then $u$ is a weak solution of (11.8).

(iii) (Smooth weak solutions are strong solutions)

Assume $a_{ij} \in C^1(\Omega) \cap C(\overline\Omega)$ and let $u \in C^2(\Omega) \cap C^1(\overline\Omega)$ be a weak solution of (11.8). Then $u$ solves (11.8)$_1$ in the sense of strong derivatives, and (11.8)$_2$ holds.

Proof: (i): Eq. (11.11) implies in particular

$$a(u, v) = f(v) \quad \forall v \in C^\infty_0(\Omega),$$

and this is by definition (11.8)$_1$ in the sense of weak derivatives.

(ii): Let $v \in C^1(\overline\Omega)$. Multiplying (11.8)$_1$ by $v$ and integrating over $\Omega$ yields

$$-\int_\Omega \partial_i(a_{ij}\partial_j u)\, v\, dx + \int_\Omega (b_i\partial_i u\, v + cuv)\, dx = \int_\Omega \phi v\, dx = f(v).$$

Applying (11.7) in the first integral yields

$$a(u, v) - \int_{\partial\Omega} a_{ij}\partial_j u\, n_i\, v\, dS = f(v), \tag{11.12}$$

and the boundary integral term vanishes due to (11.8)$_2$. Therefore

$$a(u, v) = f(v) \quad \forall v \in C^1(\overline\Omega).$$

The general case $v \in W^{1,2}(\Omega)$ follows now from the fact (Corollary of Theorem 6.5) that $C^1(\overline\Omega)$ is dense in $W^{1,2}(\Omega)$. (The details are left as an exercise.)

(iii): We know from (i) that (11.8)$_1$ is satisfied in the weak sense. By assumption, the functions $a_{ij}\partial_j u$ are in $C^1(\Omega)$ for $i = 1, \dots, m$, and therefore the strong derivatives exist and coincide with the weak ones. This shows that (11.8)$_1$ is satisfied in the sense of strong derivatives.

To show (11.8)$_2$, assume that $z(x_0) := a_{ij}(x_0)\partial_j u(x_0) n_i(x_0) \ne 0$ for some $x_0 \in (\partial\Omega)^*$. We assume without loss of generality $z(x_0) > 0$. As $z$ is continuous, there is an $\varepsilon > 0$ such that $z(x) > 0$ for all $x \in \partial\Omega \cap B(x_0, \varepsilon)$. Now choose $v \in C^1(\overline\Omega)$ such that $v \ge 0$, $v(x_0) > 0$ and $\operatorname{supp} v \subset B(x_0, \varepsilon)$. (Give an example!) Then

$$\int_{\partial\Omega} a_{ij}\partial_j u\, n_i\, v\, dS > 0. \tag{11.13}$$

(Check!) On the other hand, we derive (11.12) as in (ii). Therefore, as $v \in W^{1,2}(\Omega)$ and $u$ solves (11.11),

$$\int_{\partial\Omega} a_{ij}\partial_j u\, n_i\, v\, dS = 0,$$

contradicting (11.13). Hence (11.8)$_2$ is satisfied.

The boundary differential operator $a_{ij}\partial_j u\, n_i$ appears "naturally" by integration by parts, cf. (11.12). Therefore, (11.8)$_2$ is called the (homogeneous) natural boundary condition for the equation (11.8)$_1$.

Existence and uniqueness of the weak solution can now be proved by similar means as in the case of Dirichlet boundary conditions. Note, however, that we will work in the larger space $W^{1,2}(\Omega)$ here and cannot use the Poincaré inequality (7.1) for this reason.


Theorem 11.8 (Existence and uniqueness of a weak solution to (11.8))
Problem (11.11) has precisely one solution $u \in W^{1,2}(\Omega)$. It satisfies an estimate

$$\|u\|_{W^{1,2}} \le C\|\phi\|_2$$

with $C$ independent of $\phi$.

Proof: We set $H = W^{1,2}(\Omega)$ and want to apply the Lax-Milgram theorem 10.2. In view of Lemma 11.4, it only remains to check that $a$ is $W^{1,2}(\Omega)$-elliptic. As in the proof of Theorem 11.6 we get

$$a(u, u) = \int_\Omega \big(a_{ij}\partial_i u\, \partial_j u + (-\tfrac{1}{2}\partial_i b_i + c)u^2\big)\, dx + \tfrac{1}{2}\int_{\partial\Omega} b_i n_i u^2\, dS \stackrel{(11.3),(11.9),(11.10)}{\ge} \lambda\int_\Omega |\nabla u|^2\, dx + \mu\int_\Omega u^2\, dx \ge \min(\lambda, \mu)\|u\|_{W^{1,2}}^2$$

for any $u \in C^1(\overline\Omega)$. By an approximation argument we conclude

$$a(u, u) \ge \min(\lambda, \mu)\|u\|_{W^{1,2}}^2 \quad \forall u \in W^{1,2}(\Omega).$$

(Exercise!) Now our result follows from Theorem 10.2 and Lemma 11.4.

11.3 The Neumann problem for the Poisson equation

As in the previous subsection, let $\mathbb{K} = \mathbb{R}$ and let $\Omega$ be a bounded Lipschitz domain with piecewise $C^1$ boundary. We consider the problem

$$-\Delta u = \phi \ \text{in } \Omega, \qquad \frac{\partial u}{\partial n} = 0 \ \text{on } (\partial\Omega)^*. \tag{11.14}$$

The boundary condition (11.14)$_2$ is called Neumann boundary condition, and the problem is called the Neumann problem for the Poisson equation.

Although this boundary condition is natural for (11.14)$_1$, this problem is not covered by the general one in the previous subsection because (11.9) is not satisfied. The weak formulation is

$$a(u, v) = \int_\Omega \nabla u \cdot \nabla v\, dx = \int_\Omega \phi v\, dx = f(v) \quad \forall v \in W^{1,2}(\Omega) \tag{11.15}$$

(cf. (11.11)). We have the same relations between weak and strong solutions as in Lemma 11.7.

However, the situation concerning the solvability of (11.15) is more complicated. In general, there will be no solution for arbitrary $\phi \in L^2(\Omega)$, and if there is a solution, it is not unique. Indeed, setting $v \equiv 1$ in (11.15) yields

$$\int_\Omega \phi\, dx = 0 \tag{11.16}$$

as a necessary solvability condition, and if a function $u \in W^{1,2}(\Omega)$ solves (11.15) then the same is true for all functions $\tilde u := u + c$ with arbitrary $c \in \mathbb{R}$. (Check!)

On the other hand, all (weak) solutions can be obtained in that way:


Lemma 11.9 (Weak solutions to (11.14) differ by a constant)
Let $u_1, u_2 \in W^{1,2}(\Omega)$ be two solutions of (11.15). Then there is a constant $c \in \mathbb{R}$ such that $u_2 = u_1 + c$.

Proof: Set $w := u_2 - u_1$. Then

$$\int_\Omega \nabla w \cdot \nabla v\, dx = 0 \quad \forall v \in W^{1,2}(\Omega),$$

and setting $v := w$ yields $\|\,|\nabla w|\,\|_2^2 = 0$, i.e. $\partial_i w = 0$ in $L^2(\Omega)$ for all $i = 1, \dots, m$. By Lemma 5.2 this means $w = c$ a.e. for some constant $c \in \mathbb{R}$, and the lemma is proved.

In view of these results, it is sensible to demand

$$\int_\Omega u\, dx = 0 \tag{11.17}$$

as an auxiliary condition to enforce uniqueness of the solution.

Theorem 11.10 (Conditional existence and uniqueness for weak solutions of (11.14))
Assume (11.16). There is precisely one $u \in W^{1,2}(\Omega)$ satisfying (11.15) and (11.17). It satisfies an estimate

$$\|u\|_{W^{1,2}} \le C\|\phi\|_2$$

with $C$ independent of $\phi$.

Proof: We introduce the subspace

$$H := \left\{ v \in W^{1,2}(\Omega) \ \Big|\ \int_\Omega v\, dx = 0 \right\}.$$

As $H$ is closed in $W^{1,2}(\Omega)$ (Check!), it is a Hilbert space with respect to the inner product of $W^{1,2}(\Omega)$. We are going to apply the Lax-Milgram theorem 10.2. It follows from Lemma 11.4 that $a$ and $f$ are bounded on $H$ as well. We will show that $a$ is $H$-elliptic. By the Poincaré inequality (7.2), we have

$$a(u, u) = \int_\Omega |\nabla u|^2\, dx \ge \tfrac{1}{2}\int_\Omega |\nabla u|^2\, dx + c\|u\|_2^2 \ge c'\|u\|_{W^{1,2}}^2$$

for any $u \in H$ with positive constants $c, c'$.

Therefore, the Lax-Milgram theorem yields that there is precisely one $u \in H$ such that

$$a(u, v) = f(v) \quad \forall v \in H. \tag{11.18}$$

Let now $v \in W^{1,2}(\Omega)$ be arbitrary. We set

$$\bar v := \frac{\int_\Omega v\, dx}{\int_\Omega dx}, \qquad v_1 := v - \bar v.$$

Then $v_1 \in H$ and $f(\bar v) = 0$ due to (11.16) (Check!), and therefore

$$a(u, v) = a(u, v_1) \stackrel{(11.18)}{=} f(v_1) = f(v) - f(\bar v) = f(v).$$

This proves our theorem.
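For $m = 1$ the situation of Theorem 11.10 can be reproduced numerically. The sketch below (an illustrative finite-difference discretization, not from the text) solves $-u'' = \phi$ on $(0,1)$ with $u'(0) = u'(1) = 0$ for $\phi(x) = \cos(\pi x)$, which satisfies the compatibility condition (11.16); the mean-zero condition (11.17) singles out the solution $u(x) = \cos(\pi x)/\pi^2$:

```python
import numpy as np

# Sketch for the 1D Neumann problem (11.14): -u'' = phi on (0,1),
# u'(0) = u'(1) = 0.  phi = cos(pi x) satisfies (11.16), and
# u(x) = cos(pi x) / pi^2 is the unique solution with mean zero (11.17).

N = 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
phi = np.cos(np.pi * x)

# second-difference matrix with homogeneous Neumann conditions
# (ghost-point treatment: u(-h) = u(h), u(1+h) = u(1-h))
A = np.zeros((N + 1, N + 1))
for i in range(1, N):
    A[i, i - 1], A[i, i], A[i, i + 1] = -1.0, 2.0, -1.0
A[0, 0], A[0, 1] = 2.0, -2.0
A[N, N], A[N, N - 1] = 2.0, -2.0
A /= h**2

# A is singular (constants lie in its kernel); fix the solution by
# appending the discrete mean-zero constraint and solving least squares
weights = np.full(N + 1, h)
weights[0] = weights[-1] = h / 2          # trapezoidal quadrature weights
system = np.vstack([A, weights])
rhs = np.append(phi, 0.0)
u, *_ = np.linalg.lstsq(system, rhs, rcond=None)

u_exact = np.cos(np.pi * x) / np.pi**2
assert np.max(np.abs(u - u_exact)) < 1e-3
assert abs(weights @ u) < 1e-6            # discrete version of (11.17)
```

Replacing $\phi$ by a function with nonzero mean makes the least-squares residual of the discrete system visibly nonzero, mirroring the failure of the solvability condition (11.16).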


11.4 The Stokes equations with homogeneous Dirichlet boundary conditions

The Stokes equations describe the stationary motion of a viscous incompressible liquid with negligible inertial forces. The motion is driven by an external force field $\phi = (\phi_1, \dots, \phi_m)^T$. We describe the motion by the velocity field $u = (u_1, \dots, u_m)^T$ of the liquid and its (scalar) pressure field $p$. We assume that the liquid moves inside a given fixed domain $\Omega \subset \mathbb{R}^m$ and is at rest at its boundary $\partial\Omega$ (so-called no-slip boundary condition).

In this case, the Stokes equations read

$$\begin{aligned}
-\Delta u + \nabla p &= \phi && \text{in } \Omega,\\
\operatorname{div} u &= 0 && \text{in } \Omega,\\
u &= 0 && \text{on } \partial\Omega,
\end{aligned} \tag{11.19}$$

where

$$\operatorname{div} u := \sum_{i=1}^m \partial_i u_i. \tag{11.20}$$

Note that (11.19)$_1$ is a vector-valued equation. In coordinates it reads

$$-\Delta u_i + \partial_i p = \phi_i, \quad i = 1, \dots, m.$$

To discuss the solvability of (11.19) we first have to introduce function spaces of vector-valued functions.

Definition: Let

$$(L^2(\Omega))^m := \{u : \Omega \longrightarrow \mathbb{R}^m \mid u_i \in L^2(\Omega), \ i = 1, \dots, m\}.$$

The space $(L^2(\Omega))^m$ is a Hilbert space with respect to the inner product $(\cdot,\cdot)_{(L^2(\Omega))^m}$ given by

$$(u, v)_{(L^2(\Omega))^m} := \sum_{i=1}^m (u_i, v_i)_{L^2(\Omega)} = \sum_{i=1}^m \int_\Omega u_i v_i\, dx.$$

(Check!)

Definition: Let

$$(W^{1,2}(\Omega))^m := \{u \in (L^2(\Omega))^m \mid u_i \in W^{1,2}(\Omega), \ i = 1, \dots, m\}.$$

The space $(W^{1,2}(\Omega))^m$ is a Hilbert space with respect to the inner product $(\cdot,\cdot)_{(W^{1,2}(\Omega))^m}$ given by

$$(u, v)_{(W^{1,2}(\Omega))^m} := \sum_{i=1}^m (u_i, v_i)_{W^{1,2}} = \sum_{i=1}^m \int_\Omega (u_i v_i + \nabla u_i \cdot \nabla v_i)\, dx.$$

(Check!)

Definition: Let

$$(C^\infty_0(\Omega))^m := \{u : \Omega \longrightarrow \mathbb{R}^m \mid u_i \in C^\infty_0(\Omega), \ i = 1, \dots, m\}$$

and let $(W^{1,2}_0(\Omega))^m$ be the closure of $(C^\infty_0(\Omega))^m$ in $(W^{1,2}(\Omega))^m$. Then

$$(W^{1,2}_0(\Omega))^m = \{u \in (W^{1,2}(\Omega))^m \mid u_i \in W^{1,2}_0(\Omega), \ i = 1, \dots, m\}.$$


(Check!) Finally, note that for the operator given by (11.20) (in the sense of weak derivatives) we have

$$\operatorname{div} \in L\big((W^{1,2}(\Omega))^m, L^2(\Omega)\big). \tag{11.21}$$

(Check!) Consequently, the space

$$W^{1,2}_{0,\sigma}(\Omega) := \{u \in (W^{1,2}_0(\Omega))^m \mid \operatorname{div} u = 0\}$$

is a closed subspace of $(W^{1,2}(\Omega))^m$, hence a Hilbert space with respect to $(\cdot,\cdot)_{(W^{1,2}(\Omega))^m}$.²

For the weak formulation of the Stokes equations we introduce on $(W^{1,2}(\Omega))^m$ the bilinear form $a$ given by

$$a(u, v) := \int_\Omega \nabla u_i \cdot \nabla v_i\, dx, \quad u, v \in (W^{1,2}(\Omega))^m,$$

and the linear form $f$ given by

$$f(v) := \int_\Omega \phi_i v_i\, dx, \quad v \in (W^{1,2}(\Omega))^m.$$

Again, we assume that Ω is bounded.

Lemma 11.11 (The (bi-)linear forms a and f)

(i) a is a bounded bilinear form on (W^{1,2}(Ω))^m.

(ii) f is a bounded linear form on (W^{1,2}(Ω))^m.

(iii) a is (W^{1,2}_0(Ω))^m-elliptic.

Proof: Exercise!

Definition: A vector field u ∈ W^{1,2}_{0,σ}(Ω) satisfying the variational equality

    a(u, v) = f(v)   for all v ∈ W^{1,2}_{0,σ}(Ω)          (11.22)

is called a weak solution of the Stokes equations.

Note: The weak formulation does not contain the pressure field p. If one is interested in the pressure as well, a different weak formulation has to be used.
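The ellipticity claimed in Lemma 11.11 (iii) can be sketched as follows, assuming the Friedrichs/Poincaré-type inequality ‖w‖_{L^2(Ω)} ≤ C_Ω ‖∇w‖_{L^2(Ω)} for w ∈ W^{1,2}_0(Ω) on the bounded domain Ω, as used for the scalar problems earlier in this chapter:

```latex
a(u,u) = \sum_{i=1}^m \int_\Omega |\nabla u_i|^2\,dx
\;\ge\; \frac{1}{1+C_\Omega^2}\sum_{i=1}^m \int_\Omega \bigl(u_i^2 + |\nabla u_i|^2\bigr)\,dx
\;=\; \frac{1}{1+C_\Omega^2}\,\|u\|_{(W^{1,2}(\Omega))^m}^2
```

for u ∈ (W^{1,2}_0(Ω))^m, since ∫_Ω u_i^2 dx ≤ C_Ω^2 ∫_Ω |∇u_i|^2 dx for each component.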

Lemma 11.12 (Strong solutions are weak solutions)
Let u ∈ (C^2(Ω) ∩ C^1(Ω̄))^m, p ∈ C^1(Ω) ∩ C(Ω̄) be a solution to (11.19). Then u solves (11.22).

Proof: Due to our assumptions and Theorem 7.3 we have u ∈ W^{1,2}_{0,σ}(Ω). Let v ∈ (C_0^∞(Ω))^m. Then, by integration by parts,

    f(v) = Σ_{i=1}^m ∫_Ω φ_i v_i dx = Σ_{i=1}^m ∫_Ω (−∆u_i v_i + ∂_i p v_i) dx = a(u, v) − ∫_Ω p div v dx.
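The computation above combines two componentwise integrations by parts; since each v_i ∈ C_0^∞(Ω) vanishes near ∂Ω, no boundary terms occur:

```latex
\int_\Omega (-\Delta u_i)\,v_i\,dx = \int_\Omega \nabla u_i\cdot\nabla v_i\,dx,
\qquad
\sum_{i=1}^m \int_\Omega (\partial_i p)\,v_i\,dx
= -\int_\Omega p \sum_{i=1}^m \partial_i v_i\,dx
= -\int_\Omega p\,\mathrm{div}\,v\,dx.
```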

² A vector field u that satisfies div u = 0 is called solenoidal. This motivates the index σ in the notation W^{1,2}_{0,σ}(Ω).


Now let v ∈ W^{1,2}_{0,σ}(Ω). Then there is a sequence (v_n) in (C_0^∞(Ω))^m such that v_n → v in (W^{1,2}(Ω))^m. Note that due to (11.21) this implies div v_n → 0 in L^2(Ω). Consequently, by Lemma 11.11 (i) and (ii),

    f(v) = lim_{n→∞} f(v_n) = lim_{n→∞} ( a(u, v_n) − ∫_Ω p div v_n dx ) = a(u, v).

(Check!) This shows the assertion.

Theorem 11.13 (Existence and uniqueness of weak solutions for the Stokes equations)
For any φ ∈ (L^2(Ω))^m there is precisely one u ∈ W^{1,2}_{0,σ}(Ω) satisfying (11.22). It satisfies an estimate

    ‖u‖_{(W^{1,2}(Ω))^m} ≤ C ‖φ‖_{(L^2(Ω))^m}

with C independent of φ.

Proof: Exercise!
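One possible route (a sketch only; the intended argument may differ in detail): W^{1,2}_{0,σ}(Ω) is a Hilbert space, a is bounded on it and, by Lemma 11.11 (iii) and the closedness of W^{1,2}_{0,σ}(Ω) in (W^{1,2}_0(Ω))^m, elliptic with some constant c > 0, and f is bounded; the Lax–Milgram theorem of Section 10 then yields a unique solution u of (11.22). Testing with v = u gives the estimate:

```latex
c\,\|u\|_{(W^{1,2}(\Omega))^m}^2 \;\le\; a(u,u) \;=\; f(u)
\;\le\; \|\varphi\|_{(L^2(\Omega))^m}\,\|u\|_{(W^{1,2}(\Omega))^m},
\qquad\text{hence}\qquad
\|u\|_{(W^{1,2}(\Omega))^m} \le \tfrac{1}{c}\,\|\varphi\|_{(L^2(\Omega))^m}.
```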


A Appendix: Lebesgue integration

A.1 Measure theory

Let X be a set and A a set of subsets of X. Then A is called a σ-algebra iff

(i) X ∈ A,

(ii) if A ∈ A then A^c ∈ A,

(iii) if A_1, A_2, … ∈ A then ⋃_{n∈ℕ} A_n ∈ A.

If A is a σ-algebra then (X, A) is called a measurable space. The elements of A are called measurable sets.

Let Σ be a set of subsets of X. Let Ω be the set of all σ-algebras A such that Σ ⊂ A. Define

    A(Σ) := ⋂_{A∈Ω} A.

Then A(Σ) is a σ-algebra (check!). It is the smallest σ-algebra containing Σ and is called the σ-algebra generated by Σ.
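On a finite set X the generated σ-algebra can also be computed "from below", by closing Σ under complements and (here necessarily finite) unions until nothing new appears; on a finite X this fixpoint coincides with A(Σ). The following sketch and its example collection are illustrative only.

```python
from itertools import combinations

def generated_sigma_algebra(X, Sigma):
    """Close Sigma under complement and union; on a finite X this is A(Sigma)."""
    X = frozenset(X)
    A = {frozenset(), X} | {frozenset(S) for S in Sigma}
    changed = True
    while changed:
        changed = False
        for S in list(A):
            if X - S not in A:          # close under complements
                A.add(X - S)
                changed = True
        for S, T in combinations(list(A), 2):
            if S | T not in A:          # close under (finite) unions
                A.add(S | T)
                changed = True
    return A

# Example: X = {1, 2, 3}, Sigma = {{1}} gives {∅, {1}, {2, 3}, {1, 2, 3}}.
alg = generated_sigma_algebra({1, 2, 3}, [{1}])
```

Closure under intersections need not be imposed separately: it follows from complements and unions via De Morgan, and the loop picks it up automatically.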

Let τ be a set of subsets of X. Then τ is called a topology on X iff

(i) ∅, X ∈ τ,

(ii) if U, V ∈ τ then U ∩ V ∈ τ,

(iii) if I is any index set and U_i ∈ τ for all i ∈ I then ⋃_{i∈I} U_i ∈ τ.

The elements of a topology are called open sets. If τ is a topology on X then B(X) := A(τ) is called the Borel σ-algebra on X. The elements of B(X) are called Borel sets. For all m ∈ ℕ we write B_m = B(ℝ^m) and B = B_1, where τ is the usual topology of open sets on ℝ^m.

Define [0, ∞] = [0, ∞) ∪ {∞}. The order relation ≤ on [0, ∞) can be extended to an order relation ≤ on [0, ∞] by defining x ≤ ∞ for all x ∈ [0, ∞]. Then any nonempty subset of [0, ∞] has a supremum. The concept of limit can also be extended to [0, ∞]; then any increasing sequence in [0, ∞] is convergent. Addition and multiplication on [0, ∞) can be extended to [0, ∞] by defining x + ∞ = ∞ + x = ∞ for all x ∈ [0, ∞] and x · ∞ = ∞ · x = ∞ for all x ∈ (0, ∞]. Moreover, we define 0 · ∞ = ∞ · 0 = 0. The set [0, ∞] is given the smallest topology containing all sets of the form (a, b), [0, b), and (a, ∞] with 0 ≤ a < b ≤ ∞.

Let (X, A) be a measurable space. A measure on A is a function µ: A → [0, ∞] such that

(i) µ(∅) = 0,

(ii) for all pairwise disjoint sets A_1, A_2, … ∈ A we have µ(⋃_{n∈ℕ} A_n) = Σ_{n=1}^∞ µ(A_n). (This property is called σ-additivity of µ.)


Then the triple (X, A, µ) is called a measure space. For all A ∈ A, µ(A) is called the measure of A. If A ∈ A and µ(A) = 0, then any subset B ⊂ A is called a µ-negligible subset; note that not always B ∈ A. If P(x) is a statement for all x ∈ X, then we say that P holds almost everywhere (a.e.) if {x ∈ X : P(x) is false} is negligible. If (X, A, µ) is a measure space, then the σ-algebra generated by A and all µ-negligible subsets is called the completion of A.

Theorem A.1 For all m ∈ ℕ there is a unique measure µ: B_m → [0, ∞] such that

    µ([a_1, b_1) × … × [a_m, b_m)) = (b_1 − a_1) · … · (b_m − a_m)

for all a_1 < b_1, …, a_m < b_m. Let Λ_m be the completion of B_m in the measure space (ℝ^m, B_m, µ). Then there is a unique measure λ_m: Λ_m → [0, ∞] such that λ_m(A) = µ(A) for all A ∈ B_m.

This measure λ_m is called the Borel–Lebesgue measure. We write λ = λ_1. The Borel–Lebesgue measure has the following properties:

(i) λ_m(K) < ∞ for all compact K ⊂ ℝ^m,

(ii) λ_m(A) = sup{µ(K) : K ⊂ A, K compact} for all A ∈ Λ_m,

(iii) λ_m(A) = inf{µ(U) : U ⊃ A, U open} for all A ∈ Λ_m.

A.2 Integration theory

Let (X, A_1) and (Y, A_2) be two measurable spaces. A mapping f: X → Y is called a measurable mapping if f^{-1}(B) ∈ A_1 for all B ∈ A_2. (Recall: f^{-1}(B) := {x ∈ X : f(x) ∈ B}.)

Let (X, A) be a measurable space. A function f: X → ℝ is called a measurable function if f is a measurable map from (X, A) to (ℝ, B). Analogously, a function f: X → ℂ or f: X → [0, ∞] is called a measurable function if f is a measurable map from (X, A) to (ℂ, B(ℂ)) or ([0, ∞], B([0, ∞])), respectively. Any continuous function f: ℝ^m → ℂ is measurable from (ℝ^m, B_m) to (ℂ, B(ℂ)). A step function is a measurable function f: X → ℝ such that f(X) is finite.

Let (X, A, µ) be a measure space. For a step function f: X → [0, ∞) define ∫f ∈ [0, ∞] by

    ∫f = Σ_{k=1}^n a_k µ(A_k),

where f = Σ_{k=1}^n a_k 1_{A_k} and A_1, …, A_n are pairwise disjoint. Let now f: X → [0, ∞] be a measurable function. Define ∫f ∈ [0, ∞] by

    ∫f = sup{ ∫g : 0 ≤ g ≤ f and g is a step function }.
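The supremum in this definition can be illustrated numerically for the Lebesgue measure on [0, 1): step functions that are constant on the cells of a partition and take the infimum of f on each cell give lower bounds ∫g that increase towards ∫f as the partition is refined. A minimal sketch; the choice f(x) = x², with ∫f = 1/3, is illustrative.

```python
def lower_sum(f, a, b, n):
    """∫g for the step function g = Σ_k m_k 1_{[x_k, x_{k+1})} with m_k = inf f
    on the k-th cell; for the increasing f used below this inf is f(x_k)."""
    h = (b - a) / n
    return sum(f(a + k * h) * h for k in range(n))

# Increasing sequence of lower integrals approaching ∫ x^2 dλ = 1/3 from below.
approx = [lower_sum(lambda x: x * x, 0.0, 1.0, n) for n in (10, 100, 1000, 10000)]
```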

The following theorems are of fundamental importance.

Theorem A.2 (Monotone convergence theorem, Beppo Levi theorem) Let (X, A, µ) be a measure space. Let f, f_1, f_2, …: X → [0, ∞] be measurable functions and assume f_n ↑ f pointwise. Then also ∫f_n ↑ ∫f.
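With the counting measure on ℕ the integral is a series, and the theorem contains the familiar fact that the partial sums of a nonnegative series increase to its sum. A small sketch with the illustrative choice f(k) = 1/k², so that ∫f = π²/6:

```python
import math

# On (N, counting measure): f_n = f * 1_{{1,...,n}} increases pointwise to
# f(k) = 1/k^2, so Theorem A.2 gives ∫ f_n = Σ_{k<=n} 1/k^2 ↑ ∫ f = π^2/6.
def integral_f_n(n):
    return sum(1.0 / k ** 2 for k in range(1, n + 1))

vals = [integral_f_n(n) for n in (10, 100, 1000, 10000)]
```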


Lemma A.3 (Fatou’s Lemma) Let (X, A, µ) be a measure space. Let f_1, f_2, …: X → [0, ∞] be measurable functions. Then

    ∫ lim inf_{n→∞} f_n ≤ lim inf_{n→∞} ∫f_n.

For a function f: X → ℝ define f^+, f^-: X → [0, ∞) by f^+ = max(f, 0) and f^- = max(−f, 0). Then f = f^+ − f^-.

A measurable function f: X → ℝ is called integrable iff ∫|f| < ∞. Note that then also ∫f^+ < ∞ and ∫f^- < ∞. If f is integrable, define ∫f := ∫f^+ − ∫f^-. A measurable function f: X → ℂ is called integrable iff ∫|f| < ∞. If f is integrable, define ∫f := ∫Re f + i ∫Im f.

Remark: If necessary, we also write ∫f(x) dx = ∫f(x) dµ(x) = ∫f dµ for ∫f.

The following theorem provides a practical tool for interchanging limit and integral:

Theorem A.4 (Lebesgue’s dominated convergence theorem) Let (X, A, µ) be a measure space. Let f, f_1, f_2, …: X → ℂ be measurable functions and assume lim_{n→∞} f_n = f a.e. Assume there is an integrable function g such that for all n ∈ ℕ we have |f_n| ≤ g a.e. Then lim_{n→∞} ∫|f_n − f| = 0 and lim_{n→∞} ∫f_n = ∫f.
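A standard one-dimensional illustration: f_n(x) = xⁿ on [0, 1) converges pointwise to 0 and is dominated by g ≡ 1, which is integrable on [0, 1); the theorem therefore predicts ∫f_n → 0, in agreement with ∫_0^1 xⁿ dx = 1/(n + 1). A numerical sketch (the midpoint rule and the grid size are illustrative choices):

```python
# f_n(x) = x^n on [0, 1): f_n → 0 pointwise, |f_n| <= 1, and ∫ 1 dλ = 1 < ∞,
# so Theorem A.4 gives ∫ f_n → 0.  Exactly: ∫_0^1 x^n dx = 1/(n+1).
def int_x_pow(n, cells=2000):
    h = 1.0 / cells
    return sum(((k + 0.5) * h) ** n * h for k in range(cells))

vals = [int_x_pow(n) for n in (1, 5, 25, 125)]  # decreases towards 0
```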

Let (X, A, µ) be a measure space. Let E ∈ A. Define

    A|_E = {A ∩ E : A ∈ A}.

Let µ_E be the restriction of µ to A|_E. For a function f: E → ℂ define

    ∫_E f dµ := ∫f dµ_E

if the right hand side exists. If f: [a, b] → ℝ is a Riemann-integrable function we denote the Riemann integral by ∫_a^b f(x) dx.

Lemma A.5 Assume a, b ∈ ℝ, a < b, and let f: [a, b] → ℝ be a (properly) Riemann integrable function. Then f is measurable from (ℝ, Λ_1) to (ℝ, B) and

    ∫_a^b f(x) dx = ∫_{[a,b]} f dλ.

Here λ is the Borel–Lebesgue measure on Λ_1.

Remark: For all m ∈ ℕ and measurable f: (ℝ^m, Λ_m) → ℝ there exists a measurable function g: (ℝ^m, B_m) → ℝ such that f = g a.e.

The following theorem holds for the σ-algebras B_m, B_n, and B_{m+n} on ℝ^m, ℝ^n, and ℝ^{m+n} = ℝ^m × ℝ^n, respectively, and also with the σ-algebras Λ_m, Λ_n, and Λ_{m+n}.

Theorem A.6 (Fubini) Assume m, n ∈ ℕ.


I. Let f: ℝ^m × ℝ^n → [0, ∞] be a measurable function. For all x ∈ ℝ^m, define f_x: ℝ^n → [0, ∞] by f_x(y) = f(x, y). Then f_x is a measurable function for almost all x. The mapping x ↦ ∫f(x, y) dy is an (almost everywhere defined) measurable function from ℝ^m to [0, ∞]. Moreover,

    ∫f = ∫ (∫f_x) dx.

Thus

    ∫f = ∫ (∫f(x, y) dy) dx.

Analogous results hold if one integrates over x first.

II. Let f: ℝ^m × ℝ^n → ℂ be an integrable function. For all x ∈ ℝ^m define f_x: ℝ^n → ℂ by f_x(y) = f(x, y). Then f_x is integrable for almost all x ∈ ℝ^m. Define F: ℝ^m → ℂ by

    F(x) = ∫f_x  if f_x is integrable,   F(x) = 0  else.

Then

    ∫f = ∫F(x) dx.

Thus

    ∫f = ∫ (∫f(x, y) dy) dx.

Analogous results hold if one integrates over x first.
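The discrete analogue of the theorem (iterated sums for counting measures; not literally covered by the statement above, but the same phenomenon) illustrates both parts and the role of the hypotheses: a double series with nonnegative terms may be summed in either order, while for the classical signed array b(k, k) = 1, b(k, k+1) = −1, which fails the integrability hypothesis of part II, the two iterated sums differ. A minimal sketch with illustrative arrays:

```python
# Nonnegative entries (part I): both summation orders agree.
# Illustrative array a(i, j) = 2^-i * 3^-j, truncated; full sum is 2 * 3/2 = 3.
N = 60
a = [[2.0 ** -i * 3.0 ** -j for j in range(N)] for i in range(N)]
rows_first = sum(sum(row) for row in a)
cols_first = sum(sum(a[i][j] for i in range(N)) for j in range(N))

# Signed, non-integrable entries: b(k, k) = 1, b(k, k+1) = -1, else 0.
def b(k, j):
    return 1 if j == k else (-1 if j == k + 1 else 0)

# Each row has exactly the entries 1 and -1, so summing rows first gives 0;
# column 0 contains only b(0, 0) = 1, every later column sums to 1 - 1 = 0,
# so summing columns first gives 1.
rows_first_b = sum(b(k, k) + b(k, k + 1) for k in range(1000))
cols_first_b = b(0, 0) + sum(b(j, j) + b(j - 1, j) for j in range(1, 1000))
```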

Theorem A.7 (Sets of Lebesgue measure zero) A set M ⊂ ℝ^n has Lebesgue measure zero if and only if for any ε > 0 there is a sequence of (open) balls B(x_1, r_1), B(x_2, r_2), … in ℝ^n such that Σ_{k=1}^∞ r_k^n < ε and M ⊂ ⋃_{k=1}^∞ B(x_k, r_k).
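A consequence worth making concrete: every countable set is a null set, since an enumeration x_1, x_2, … can be covered by balls whose radii shrink geometrically. The sketch below does this for n = 1 with the rationals in [0, 1] of bounded denominator (an illustrative finite stand-in for the countable set); each x_k trivially lies in B(x_k, r_k), while Σ r_k stays below ε regardless of how many points are covered.

```python
from fractions import Fraction

# Enumerate the rationals p/q in [0, 1] with denominator q < 30 (illustrative).
points = sorted({Fraction(p, q) for q in range(1, 30) for p in range(q + 1)})

# Radii r_k = eps / 2^(k+2), so that (with n = 1) sum_k r_k^1 = eps/2 < eps,
# no matter how long the enumeration is.
eps = 1e-3
radii = [eps / 2 ** (k + 2) for k in range(len(points))]
total = sum(radii)
```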

References

[Ev98] Evans, L.C.: Partial Differential Equations, AMS 1998
