Greedy algorithm (How to select sample points)

Goal: Selection of sample points µ1, . . . , µN such that

VN = span{uδ(µ1), . . . , uδ(µN)} ≈ Mδ.

Thursday, August 23, 2012


Estimated error feedback (a posteriori estimates): Consider

µ → solve a(uN(µ), vN; µ) = f(vN; µ), ∀vN ∈ VN → ηN(µ),

where ηN(µ) is an a posteriori estimate of ‖uN(µ) − uδ(µ)‖Vδ.


Greedy algorithm:

Set N = 1, choose µ1 ∈ P arbitrarily.

1. Compute uδ(µN) (truth problem: computationally expensive).

2. Set VN = span{VN−1, uδ(µN)}.

3. Find µN+1 = arg max_{µ∈P} ηN(µ).

4. Set N := N + 1 and go to 1 while max_{µ∈P} ηN(µ) > Tol.
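The loop above can be sketched in a few lines; `solve_truth` and `error_estimator` are hypothetical placeholders for the truth solver and the estimator ηN, and the continuous parameter space P is replaced by a finite training set (a common practical assumption not stated on the slide):

```python
import numpy as np

def greedy(P_train, solve_truth, error_estimator, tol, N_max=50):
    """Weak greedy sampling sketch: P_train is a finite training subset of P,
    solve_truth(mu) returns the truth snapshot u_delta(mu) as a vector,
    error_estimator(mu, V) returns eta_N(mu) for the current snapshot list V."""
    mu = P_train[0]                          # choose mu_1 arbitrarily
    samples, V = [], []
    for _ in range(N_max):
        samples.append(mu)
        V.append(solve_truth(mu))            # 1. expensive truth solve
        # V_N = span{V_{N-1}, u_delta(mu_N)} is represented by the snapshot list.
        etas = [error_estimator(m, V) for m in P_train]   # 3. estimate errors
        if max(etas) <= tol:                 # stopping criterion
            break
        mu = P_train[int(np.argmax(etas))]   # next sample: worst-estimated mu
    return samples, V
```

Only one truth solve is performed per basis enlargement; all other parameter values are screened through the cheap estimator.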


• Only N truth problems need to be solved (compare the POD approach).
• Error control through the estimator for any parameter value.


Greedy algorithm: illustration

One-dimensional parameter space

[Figure: error estimate and true error plotted over the one-dimensional parameter range k ∈ [10, 20], log scale from 1 down to 1×10⁻⁸; successive panels show both curves dropping as the greedy algorithm adds sample points.]

Greedy algorithm

[Figure: two panels of error estimate and true error over k ∈ [10, 20], log scale from 1 down to 1×10⁻⁸, comparing the effect of the marking strategy.]

Dörfler marking vs. greedy selection:


Greedy sampling

Characteristics of greedy RB-space construction:

o Cheap: N truth problems need to be solved for an N-dimensional RB-space.
o Accuracy is guaranteed by theoretical results.
o Provides a hierarchical family of reduced basis spaces: VN−1 ⊂ VN.


Practical hint: The basis functions uδ(µ1), . . . , uδ(µN) may be nearly linearly dependent and lead to a poor conditioning of the system matrix

Bij(µ) = a(uδ(µj), uδ(µi);µ).


Remedy: Apply the Gram-Schmidt orthonormalization procedure to the set of basis functions using the inner product (·, ·)V. This leads to a new set {ξ1, . . . , ξN} of basis functions with better conditioning properties.
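A minimal sketch of this remedy, assuming the snapshots are truth coefficient vectors and (·, ·)V is represented by a symmetric positive definite Gram matrix X (hypothetical names):

```python
import numpy as np

def gram_schmidt(snapshots, X):
    """Modified Gram-Schmidt w.r.t. the inner product (u, v)_V = u^T X v.
    Returns basis vectors xi_1, ..., xi_N with (xi_i, xi_j)_V = delta_ij."""
    basis = []
    for u in snapshots:
        v = u.copy()
        for xi in basis:
            v -= (xi @ X @ v) * xi               # remove component along xi
        norm = np.sqrt(v @ X @ v)
        if norm > 1e-12 * np.sqrt(u @ X @ u):    # drop (near-)dependent snapshots
            basis.append(v / norm)
    return basis
```

Besides the orthonormality, the tolerance check discards snapshots that are numerically in the span of the previous ones, which is exactly the conditioning problem the slide describes.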


Theoretical estimates of the greedy algorithm

Here, PN : V → VN denotes the projection onto VN.

Exact greedy algorithm:

Set N = 1, choose µ1 ∈ P arbitrarily.

1. Set VN = span{VN−1, u(µN)}.

2. Find µN+1 = arg max_{µ∈P} ‖u(µ) − PN u(µ)‖V.

3. Set N := N + 1 and go to 1 while max_{µ∈P} ‖u(µ) − PN u(µ)‖V > Tol.

Differences to the practical greedy implementation:

1. Exact solutions u(µ) are available (instead of uδ(µ)).

2. The exact error ‖u(µ) − PN u(µ)‖V is assumed to be known/computable.

3. The projection PN is used instead of the Galerkin projection based on the bilinear form a(·, ·; µ): find uN(µ) ∈ VN s.t.

a(uN(µ), vN; µ) = f(vN; µ), ∀vN ∈ VN.


Theoretical estimates of the greedy algorithm

Recall: V is a Hilbert space and

M = {u(µ) : µ ∈ P} ⊂ V is the exact solution manifold.

What is provided by the exact greedy algorithm? An N-dimensional subspace VN^gr ⊂ Vδ with approximation error

d(VN^gr) = sup_{v∈M} inf_{vN∈VN^gr} ‖v − vN‖V.

VN^gr is of the particular form VN^gr = span{uδ(µ1), . . . , uδ(µN)}.


What does optimal mean? Find the N-dimensional subspace VN ⊂ Vδ such that the approximation error

d(VN) = sup_{v∈M} inf_{vN∈VN} ‖v − vN‖V

is minimized. We denote this space by VN^Kol and call d(VN^Kol) the Kolmogorov width.


Note: The best approximation space VN^Kol is in general not a subspace of M.


[Figure: Kolmogorov width and the upper bound for the greedy algorithm versus N (log-log, N = 1 to 100, values from 10⁻⁴ up to 10⁶), for M = 1, α = 2 ⇒ C = 524288.]

Theorem (polynomial decay of greedy approximation). Suppose that d(V0^Kol) ≤ M and

d(VN^Kol) ≤ M N^−α, N > 0,

for some M > 0 and α > 0. Then

d(VN^gr) ≤ C M N^−α, N > 0,

with C := q^{1/2} (4q)^α and q := ⌈2^{α+1}⌉².
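As a sanity check on the constant: with α = 2 one gets q = ⌈2^{α+1}⌉² = 64 and C = √q (4q)^α = 8 · 256² = 524288, matching the value annotated in the figure. A minimal computation:

```python
import math

def greedy_constant(alpha):
    """C = sqrt(q) * (4q)^alpha with q = ceil(2^(alpha+1))^2,
    the constant appearing in the polynomial-decay theorem."""
    q = math.ceil(2 ** (alpha + 1)) ** 2
    return math.sqrt(q) * (4 * q) ** alpha
```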


[Figure: Kolmogorov width and the upper bound for the greedy algorithm versus N (N = 5 to 20, log scale from 1 down to 10⁻³⁸).]

Theorem (exponential decay of greedy approximation). There holds

d(VN^gr) ≤ (2/√3) 2^N d(VN^Kol).

In the plot, d(VN^Kol) = e^{−1.5N}.


Remarks:

1. More estimates are available.

2. Estimates of the form d(VN^gr) ≤ C d(VN^Kol) cannot be established (a counter-example exists).

3. The assumptions on the greedy algorithm can be relaxed:
o Error estimates η(µ) can be used instead of the exact error.
o Discrete solutions uδ(µ) can be considered instead of the exact solutions u(µ).
o This yields different estimates, of course.


References:

[1] Binev et al. Convergence rates for greedy algorithms in reduced basis methods. SIAM J. Math. Anal. 43(3):1457–1472, 2011.

[2] Buffa et al. A priori convergence of the greedy algorithm for the parametrized reduced basis method. ESAIM Math. Model. Numer. Anal. 46(3):595–603, 2012.


Open problem: Given a parametrized problem, establish the decay of its Kolmogorov width.


Efficiency

How to solve “For µ ∈ P, find the solution uN(µ) ∈ VN of

a(uN(µ), vN; µ) = f(vN; µ), ∀vN ∈ VN”

fast?

Further assumption: The output functional s(·;µ) is linear.

It shall be avoided that, if VN = span{ξ1, . . . , ξN}, the matrix

(Aµ)ij = a(ξj, ξi; µ), ∀ 1 ≤ i, j ≤ N,

needs to be reassembled for each new parameter value µ ∈ P. Why? Because the assembly process depends on 𝒩 = dim(Vδ).


Efficiency

Assumption (affine parameter dependence):

a(w, v; µ) = Σ_{m=1}^{Ma} Θa^m(µ) am(w, v),

f(v; µ) = Σ_{m=1}^{Mf} Θf^m(µ) fm(v),

s(v; µ) = Σ_{m=1}^{Ms} Θs^m(µ) sm(v),

where

Θa^m, Θf^m, Θs^m : P → R are µ-dependent functions,
am : V × V → R are µ-independent forms,
fm, sm : V → R are µ-independent forms.


Efficiency: example

Example: Convection-diffusion

a(u, v; ε) = ε ∫₀¹ u′(x) v′(x) dx + ∫₀¹ u′(x) v(x) dx,

a1(u, v) = ∫₀¹ u′(x) v′(x) dx,  Θa^1(ε) = ε,

a2(u, v) = ∫₀¹ u′(x) v(x) dx,  Θa^2(ε) = 1.
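A quick numerical check of this affine splitting, using a central finite-difference discretization of (0, 1) (an assumption for illustration; the slides fix no discretization):

```python
import numpy as np

def assemble(eps, n=50):
    """1D convection-diffusion on (0, 1), homogeneous Dirichlet BCs:
    A1 discretizes the diffusion term -u'', A2 the convection term u',
    central differences on n interior points, mesh size h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    main = 2.0 * np.ones(n)
    off = np.ones(n - 1)
    A1 = (np.diag(main) - np.diag(off, 1) - np.diag(off, -1)) / h**2
    A2 = (np.diag(off, 1) - np.diag(off, -1)) / (2.0 * h)
    # Affine structure: A(eps) = Theta_a^1(eps)*A1 + Theta_a^2(eps)*A2
    #                          = eps*A1 + 1*A2, with A1, A2 eps-independent.
    return A1, A2, eps * A1 + A2
```

A1 and A2 are assembled once; for any new ε the system matrix is just the weighted sum, which is the whole point of the affine assumption.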


Example: Heat conduction on thermal blocks

a(w, v; µ) = Σ_{i=1}^{15} µi ∫_{Ri} ∇w · ∇v + ∫_{R16} ∇w · ∇v,

ai(w, v) = ∫_{Ri} ∇w · ∇v,  Θa^i(µ) = µi,  i = 1, . . . , 15,

a16(w, v) = ∫_{R16} ∇w · ∇v,  Θa^16(µ) = 1.



Affine assumption: Off-line/on-line decomposition

Off-line: Given VN = span{ξi | i = 1, . . . , N}, precompute

(Am)i,j = am(ξj, ξi), ∀ 1 ≤ i, j ≤ N,

(Fm)i = fm(ξi), ∀ 1 ≤ i ≤ N,

(Sm)i = sm(ξi), ∀ 1 ≤ i ≤ N.

Rem.: This step depends on 𝒩 = dim(Vδ).
Rem.: The size of Am is N², that of Fm and Sm is N.

On-line: For each new parameter value µ ∈ P:

1. Assemble (cost depends on M and N, i.e. ∼ M N² resp. ∼ M N)

Aµ = Σ_{m=1}^{Ma} Θa^m(µ) Am,  Fµ = Σ_{m=1}^{Mf} Θf^m(µ) Fm.

2. Solve Aµ u(µ) = Fµ (cost depends on N, i.e. ∼ N³ for an LU factorization).

3. Compute

s(uN(µ); µ) = Σ_{m=1}^{Ms} Σ_{n=1}^{N} un(µ) Θs^m(µ) (Sm)n.
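The two phases can be sketched as follows; the forms am, fm, sm and the Θ functions are assumed given as callables, and the basis functions ξi as truth coefficient vectors (hypothetical names):

```python
import numpy as np

def offline(xis, a_forms, f_forms, s_forms):
    """Precompute (A_m)_ij = a_m(xi_j, xi_i), (F_m)_i = f_m(xi_i),
    (S_m)_i = s_m(xi_i). Cost depends on the truth dimension; done once."""
    A = [np.array([[am(xj, xi) for xj in xis] for xi in xis]) for am in a_forms]
    F = [np.array([fm(xi) for xi in xis]) for fm in f_forms]
    S = [np.array([sm(xi) for xi in xis]) for sm in s_forms]
    return A, F, S

def online(mu, A, F, S, theta_a, theta_f, theta_s):
    """Cost depends only on N and the number of affine terms, not on dim(V_delta)."""
    A_mu = sum(t(mu) * Am for t, Am in zip(theta_a, A))   # ~ M_a * N^2
    F_mu = sum(t(mu) * Fm for t, Fm in zip(theta_f, F))   # ~ M_f * N
    u = np.linalg.solve(A_mu, F_mu)                       # ~ N^3
    s = sum(t(mu) * (Sm @ u) for t, Sm in zip(theta_s, S))
    return u, s
```

Calling `online` for many parameter values reuses the same precomputed A, F, S, which is the structure exploited by the slides' cost counts.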


Off-line/On-line procedure: Is it worth it?

Computational costs:

o Off-line procedure: Toff.

o One on-line evaluation: Ton.

o One truth solve: Ttr ≫ Ton.


Evaluation for M parameter values:

1. Brute-force approach: M · Ttr.
2. Reduced basis method: Toff + M · Ton.


Theoretical considerations:

o The parameter space P is a continuum (not a discrete set): M can be arbitrarily high.

o Whenever the number M of parameter evaluations is high enough, the reduced basis method is always cheaper.
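The comparison above gives a closed-form break-even point: the RB method is cheaper as soon as Toff + M·Ton < M·Ttr, i.e. M > Toff/(Ttr − Ton). A minimal sketch:

```python
import math

def break_even(T_off, T_on, T_tr):
    """Smallest number of parameter evaluations M for which
    T_off + M*T_on < M*T_tr (assumes T_tr > T_on)."""
    return math.floor(T_off / (T_tr - T_on)) + 1

def speed_up(M, T_off, T_on, T_tr):
    """Ratio of brute-force cost to RB cost for M evaluations."""
    return (M * T_tr) / (T_off + M * T_on)
```

With the realistic numbers from the next slide (Toff = 10⁴, Ton = 1, Ttr = 10²), the RB method pays off from M = 102 evaluations on, and the speed-up approaches Ttr/Ton = 100 for large M.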


[Figure: speed-up versus the number of parameter evaluations M (10³ to 10⁶, log scale), rising from 0 and saturating near 100.]

Realistic example:

o Ton = 1.
o Ttr = 10² (a realistic speed-up).
o Toff = 10⁴ (N = 100).

Brute force: M · Ttr = M · 10².
RBM: Toff + M · Ton = 10⁴ + M.


Schematic overview of the Reduced Basis Method (so far)

Offline procedure:

1. Construct the reduced basis space VN empirically (greedy).
2. Precompute the matrices Am and the vectors Fm, Sm.

Online procedure:

µ → solve for uN(µ) → s(uN(µ); µ)

which consists of

1. Assemble

Aµ = Σ_{m=1}^{Ma} Θa^m(µ) Am,  Fµ = Σ_{m=1}^{Mf} Θf^m(µ) Fm.

2. Solve Aµ u(µ) = Fµ (an N-dimensional linear system, N ≪ 𝒩), and set

uN(µ) = Σ_{n=1}^{N} un(µ) ξn.

3. Compute s(uN(µ); µ).

Characteristics: independent of 𝒩 = dim(Vδ): cheap. Feasible in a many-query context.


Idea: Restrict the solution space from Vδ to VN ≈ {uδ(µ) : µ ∈ P} empirically. → Discard unnecessary modes.


A posteriori error estimation


A posteriori estimate

So far we assumed the a posteriori estimation process

µ → solve a(uN(µ), vN; µ) = f(vN; µ), ∀vN ∈ VN → ηN(µ),

where ηN(µ) is an a posteriori estimate of ‖uN(µ) − uδ(µ)‖Vδ. The estimator

o estimates the discrete error ‖uδ(µ) − uN(µ)‖V by ηN(µ);

o is crucial for the selection process in the greedy algorithm (off-line);

o should be cheap, i.e., independent of 𝒩 = dim(Vδ) (off- and on-line);

o certifies the model order reduction error with a computable bound:

‖uδ(µ) − uN(µ)‖V ≤ ηN(µ).


Why only ‖uδ(µ) − uN(µ)‖V? Split the total error by the triangle inequality:

‖u(µ) − uN(µ)‖V ≤ ‖u(µ) − uδ(µ)‖V + ‖uδ(µ) − uN(µ)‖V.

The first term is the error of the truth approximation: it is given by the approximation space Vδ. The second term is the error of the model order reduction: it depends on the reduced basis space VN and needs to be estimated.

Assumption:

sup_{µ∈P} ‖u(µ) − uδ(µ)‖V ≤ tol

by the choice of Vδ.


The error estimate η

Truth solution: Find uδ(µ) ∈ Vδ such that

a(uδ(µ), vδ; µ) = f(vδ; µ), ∀vδ ∈ Vδ.

Define the error

e(µ) = uδ(µ) − uN(µ) ∈ Vδ.


e(µ) is the error committed by the model order reduction, i.e. by restricting the solution space from Vδ to VN.


The error equation yields

a(e(µ), vδ; µ) = r(vδ; µ), ∀vδ ∈ Vδ, with the residual

r(vδ; µ) = f(vδ; µ) − a(uN(µ), vδ; µ), ∀vδ ∈ Vδ.


Using the Riesz representation theorem yields ê(µ) ∈ Vδ s.t.

(ê(µ), vδ)V = r(vδ; µ), ∀vδ ∈ Vδ,

‖r(·; µ)‖Vδ′ = sup_{vδ∈Vδ} r(vδ; µ)/‖vδ‖V = ‖ê(µ)‖V.


Error certification

Recall: The solvability requires αδ(µ) > 0 s.t.

αδ(µ) ‖vδ‖V ≤ sup_{wδ∈Vδ} a(vδ, wδ; µ)/‖wδ‖V,  or  αδ(µ) ‖vδ‖V² ≤ a(vδ, vδ; µ),

for all vδ ∈ Vδ, from which one recovers the estimator

ηN(µ) = ‖ê(µ)‖V / αLB(µ),  0 < αLB(µ) ≤ αδ(µ).
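In a finite-dimensional truth space the estimator can be sketched as follows; the truth system, the Gram matrix X of (·, ·)V, and the lower bound alpha_LB are assumed given (all names hypothetical):

```python
import numpy as np

def eta_N(A_mu, f_mu, u_N_coeffs, basis, X, alpha_LB):
    """Residual-based estimator eta_N(mu) = ||e_hat(mu)||_V / alpha_LB(mu).
    A_mu, f_mu: truth system for parameter mu; basis: columns are xi_1..xi_N;
    X: Gram matrix of the V-inner product on the truth space."""
    u_N = basis @ u_N_coeffs            # reduced solution in truth coordinates
    r = f_mu - A_mu @ u_N               # residual vector r_i = r(phi_i; mu)
    e_hat = np.linalg.solve(X, r)       # Riesz representative: X e_hat = r
    return np.sqrt(e_hat @ X @ e_hat) / alpha_LB
```

Note this naive version still solves a truth-sized system for ê(µ); the off-line/on-line decomposition of the residual norm (discussed later in the slides) removes that dependence.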


Computing αLB(µ): a difficult task, but doable! (later)


Theorem: The estimator is reliable:

‖uδ(µ) − uN(µ)‖V ≤ ηN(µ).


Proof (coercive problem):

‖uδ(µ) − uN(µ)‖V² = ‖e(µ)‖V² ≤ (1/αδ(µ)) a(e(µ), e(µ); µ) = (1/αδ(µ)) r(e(µ); µ)

= (1/αδ(µ)) (ê(µ), e(µ))V ≤ (1/αδ(µ)) ‖ê(µ)‖V ‖e(µ)‖V

≤ (1/αLB(µ)) ‖ê(µ)‖V ‖e(µ)‖V = ηN(µ) ‖e(µ)‖V.

Dividing by ‖e(µ)‖V proves the claim. □


Proof (saddle-point problem):

‖uδ(µ) − uN(µ)‖V = ‖e(µ)‖V ≤ (1/αδ(µ)) sup_{wδ∈Vδ} a(e(µ), wδ; µ)/‖wδ‖V

= (1/αδ(µ)) sup_{wδ∈Vδ} r(wδ; µ)/‖wδ‖V = (1/αδ(µ)) sup_{wδ∈Vδ} (ê(µ), wδ)V/‖wδ‖V

≤ (1/αδ(µ)) ‖ê(µ)‖V ≤ (1/αLB(µ)) ‖ê(µ)‖V = ηN(µ). □

Corollary: The effectivity is bounded from below:

ηN(µ) / ‖uδ(µ) − uN(µ)‖V ≥ 1.


Theorem: The estimator is efficient:

ηN(µ) ≤ (γδ(µ)/αLB(µ)) ‖uδ(µ) − uN(µ)‖V.


Proof:

‖ê(µ)‖V² = (ê(µ), ê(µ))V = r(ê(µ); µ) = a(e(µ), ê(µ); µ) ≤ γδ(µ) ‖e(µ)‖V ‖ê(µ)‖V,

and thus

ηN(µ) = ‖ê(µ)‖V / αLB(µ) ≤ (γδ(µ)/αLB(µ)) ‖e(µ)‖V. □


Output functional

o Sharper bounds can be obtained using a primal-dual approach: this requires also assembling a reduced basis for the dual problem.

Theorem: If the output functional s : V → R is linear, then the error in the output functional can be bounded by

|s(uδ(µ)) − s(uN(µ))| ≤ ‖s‖Vδ′ ηN(µ).

Proof:

|s(uδ(µ)) − s(uN(µ))| = |s(uδ(µ) − uN(µ))| ≤ sup_{vδ∈Vδ} (s(vδ)/‖vδ‖V) ‖uδ(µ) − uN(µ)‖V ≤ ‖s‖Vδ′ ηN(µ). □


On-line computation of the error estimate

Question: How can ηN(µ) be computed independently of 𝒩 = dim(Vδ)?

Recall:

ηN(µ) = ‖ê(µ)‖Vδ / αLB(µ).

o µ → αLB(µ) will be discussed later.
o Here: computation of ‖ê(µ)‖Vδ.

Goal: Decompose ‖ê(µ)‖V into a precomputable, parameter-independent off-line part and a parameter-dependent on-line part that is independent of 𝒩.

Recall:

(ê(µ), vδ)V = r(vδ; µ) = f(vδ; µ) − a(uN(µ), vδ; µ), ∀vδ ∈ Vδ.
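The standard way to reach this goal (sketched here as an assumption, since the slide breaks off) expands ê(µ) over µ-independent residual pieces: their pairwise V-inner products form a small Gram matrix G computed off-line, and on-line ‖ê(µ)‖V² = c(µ)ᵀ G c(µ) for a short coefficient vector c(µ) built from the Θ's and the reduced coefficients un(µ):

```python
import numpy as np

def offline_residual(R_components, X):
    """Precompute the Gram matrix G_qq' = (e_q, e_q')_V of the Riesz
    representatives X e_q = R_q of the mu-independent residual pieces.
    This is the only step whose cost depends on dim(V_delta)."""
    E = [np.linalg.solve(X, R) for R in R_components]
    return np.array([[ei @ X @ ej for ej in E] for ei in E])

def online_residual_norm(coeffs, G):
    """||e_hat(mu)||_V = sqrt(c(mu)^T G c(mu)); cost independent of dim(V_delta)."""
    c = np.asarray(coeffs)
    return np.sqrt(max(c @ G @ c, 0.0))  # clip tiny negative round-off
```

Here the pieces Rq collect the affine terms of f and of a(·, ξn; µ) for each basis function, with on-line coefficients Θf^m(µ) and −Θa^m(µ) un(µ) respectively; by bilinearity the decomposed norm agrees exactly with the direct one.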