Data-driven approximation of vector fields and differential forms with the spectral exterior calculus

Dimitris Giannakis
Courant Institute of Mathematical Sciences, New York University

Advances in Manifold Learning and Applications, Joint Mathematics Meetings
January 9, 2021

Collaborators: Tyrus Berry, Shuddho Das, Joanna Slawinska
Approximating the Laplacian on functions
Δ = −div grad = δd   (manifold Laplacian)   ←→   L = D − K   (graph Laplacian)
• Pointwise consistency: for fixed f : M → R, the graph Laplacian applied to the vector of samples of f converges to Δf. (Belkin & Niyogi 2003; Coifman & Lafon 2006; Hein et al. 2006; Singer 2006; Berry & Harlim 2016; Berry & Sauer 2016; Vaughn et al. 2019)
• Spectral consistency: spec L converges to spec Δ, and the eigenspaces also converge. (Belkin & Niyogi 2007; Shi 2015; Trillos & Slepcev 2018; Dunson et al. 2019; Trillos et al. 2020; Wormell et al. 2020)
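As a minimal illustration of this correspondence (not the specific kernel or normalization behind the cited results), the sketch below assembles a kernel matrix K, degrees, and a symmetrically normalized graph Laplacian from points sampled on the unit circle, and compares its rescaled spectrum with the Laplace-Beltrami eigenvalues {0, 1, 1, 4, 4, . . .}. The Gaussian kernel, the bandwidth eps, and the 4/eps rescaling are illustrative choices.

```python
# Minimal sketch: graph Laplacian approximation of the Laplace-Beltrami
# operator, checked on uniformly sampled points of the unit circle.
# Kernel, bandwidth, and normalization are illustrative choices.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

n, eps = 1000, 5e-3
theta = 2 * np.pi * np.arange(n) / n
X = np.column_stack([np.cos(theta), np.sin(theta)])      # samples on S^1 in R^2

K = np.exp(-cdist(X, X, "sqeuclidean") / eps)            # kernel matrix
d = K.sum(axis=1)                                        # degrees
L_sym = np.eye(n) - K / np.sqrt(np.outer(d, d))          # I - D^{-1/2} K D^{-1/2}

# For uniform sampling density, (4/eps) * spec(L_sym) approximates the
# Laplace-Beltrami spectrum {0, 1, 1, 4, 4, 9, 9, ...} up to O(eps) bias.
lam = (4.0 / eps) * eigh(L_sym, eigvals_only=True)[:7]
print(np.round(lam, 2))
```

More refined constructions behind the cited results (density normalizations, variable-bandwidth kernels) improve robustness to nonuniform sampling.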
Approximating the exterior calculus
• Discrete exterior calculus (DEC): Approximate k-forms using k-simplicial cochains. (Hirani 2003; Desbrun et al. 2005; Rufat et al. 2014)
• Finite-element exterior calculus (FEC): Approximate using a triangulation and associated finite-element bases. (Arnold et al. 2006, 2010; Budninskiy et al. 2019)
• Fourier space discretizations: Approximate using a chain complex in Fourier space. (Lessig 2021)
Spectral exterior calculus (SEC)
1 Computing eigenforms of the 1-Laplacian.
2 Learning vector fields of dynamical systems.
Product rule (carré du champ identity)

2 grad f · grad g = (Δf)g + f(Δg) − Δ(fg),   f, g ∈ C∞(M)
Laplace-Beltrami eigenfunctions Δφj = λj φj:

grad φi · grad φj = (1/2) ∑k (λi + λj − λk) cijk φk,   cijk = ⟨φi φj, φk⟩.
Consistent approximations of λj , φj , and cijk are available from data sampled on M using graph-theoretic techniques.
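A sketch of how these ingredients might be assembled from samples, assuming phi holds eigenfunction values at the data points (orthonormal with respect to the empirical measure) and lam the corresponding eigenvalues; the array names and the plain Monte Carlo average are assumptions for illustration.

```python
# Sketch: estimate the triple-product coefficients c_ijk = <phi_i phi_j, phi_k>
# by averaging over the samples, then evaluate grad(phi_i) . grad(phi_j) via
# the carre du champ identity. Illustrative inputs:
#   phi : (n_samples, N) eigenfunction values, orthonormal w.r.t. the samples
#   lam : (N,) corresponding Laplace-Beltrami eigenvalues
import numpy as np

def triple_products(phi):
    n = phi.shape[0]
    return np.einsum("mi,mj,mk->ijk", phi, phi, phi) / n      # c_ijk

def grad_dot_grad(phi, lam):
    c = triple_products(phi)                                   # (N, N, N)
    w = lam[:, None, None] + lam[None, :, None] - lam[None, None, :]
    # grad(phi_i) . grad(phi_j) evaluated at the samples: shape (n_samples, N, N)
    return 0.5 * np.einsum("ijk,mk->mij", w * c, phi)
```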
C∞(M)-module structure
For M compact, gradients of finitely many eigenfunctions generate X(M) as a C∞(M)-module:
v = ∑_{j=1}^J fj grad φj

• fj : C∞(M) functions.

Every L² vector field v has an L²-convergent expansion,

v = ∑_{i=0}^∞ ∑_{j=1}^J vij φi grad φj,   vij ∈ R.
• Analogous results hold for k-forms with grad replaced by d .
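For instance, on the unit circle with eigenfunctions φ1 = √2 cos θ and φ2 = √2 sin θ (normalized with respect to the uniform probability measure, so that θ ↦ (φ1(θ), φ2(θ)) is already an embedding with J = 2), one has grad φ1 = −√2 sin θ ∂θ and grad φ2 = √2 cos θ ∂θ, so the coordinate vector field is recovered as the finite combination

∂θ = (−sin θ/√2) grad φ1 + (cos θ/√2) grad φ2.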
Frames of inner product spaces
A sequence u0, u1, . . . of elements of an inner product space V is said to be a frame if there exist C1,C2 > 0 such that
C1 ‖v‖² ≤ ∑k |⟨uk, v⟩V|² ≤ C2 ‖v‖²,   ∀v ∈ V.

• Analysis operator: T : V → ℓ², Tv = (⟨uk, v⟩V)k.
• Synthesis operator: T∗ : ℓ² → V, T∗(ck) = ∑k ck uk.
• Gram operator: G : ℓ² → ℓ², G = TT∗.
• Frame operator: S : V → V, S = T∗T.
• Dual frame: {u′k} with u′k = S⁻¹uk, satisfying

v = ∑k ⟨uk, v⟩V u′k,   ∀v ∈ V.
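A small self-contained illustration of these operators, using a toy frame of three unit vectors in V = R² rather than the SEC frames themselves; the array names are illustrative.

```python
# Toy frame of R^2: three unit vectors at 120 degrees. Build the analysis
# operator T, synthesis operator T*, Gram operator G = T T*, frame operator
# S = T* T, and the canonical dual frame u'_k = S^{-1} u_k, then verify the
# reconstruction formula v = sum_k <u_k, v> u'_k.
import numpy as np

angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
U = np.column_stack([np.cos(angles), np.sin(angles)])   # rows are u_k, shape (3, 2)

T = U                              # analysis:  (T v)_k = <u_k, v>
T_star = U.T                       # synthesis: T*(c) = sum_k c_k u_k
G = T @ T_star                     # Gram operator on l^2 (here R^3)
S = T_star @ T                     # frame operator on V
U_dual = U @ np.linalg.inv(S).T    # rows are u'_k = S^{-1} u_k

v = np.array([0.3, -1.2])
v_rec = U_dual.T @ (T @ v)         # sum_k <u_k, v> u'_k
assert np.allclose(v_rec, v)       # dual-frame reconstruction recovers v
```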
SEC frames for vector fields
Theorem 1. The sets
B_X^J = {bij : i ∈ {0, 1, . . .}, j ∈ {1, . . . , J}},
B_X = {b̃ij : i ∈ {0, 1, . . .}, j ∈ {1, 2, . . .}},

where bij = φi grad φj,   b̃ij = e^{−λj} bij,

and J is an integer such that x ∈ M ↦ (φ1(x), . . . , φJ(x)) ∈ R^J is an embedding, are frames for L²X(M).
Proof. Use Cauchy-Schwarz inequalities for the Riemannian, Hodge, and ℓ² inner products, in conjunction with the bounds:
• ‖φj‖_{L∞} ≤ C λj^{(m−1)/4} (Hörmander 1968).
• ‖grad φj‖_{L∞X} ≤ C λj^{1/2} ‖φj‖_{L∞} (Shi & Xu 2010).
• j = C λj^{m/2} + o(λj^{(m−1)/2}), where m = dim M (Weyl 1911).
SEC frames for k-forms
Theorem 2a. The sets
B_k^J = {bij1···jk : i ∈ {0, 1, . . .}, j1, . . . , jk ∈ {1, . . . , J}},
B_k = {b̃ij1···jk : i ∈ {0, 1, . . .}, j1, . . . , jk ∈ {1, 2, . . .}},

where

bij1···jk = φi dφj1 ∧ · · · ∧ dφjk,   b̃ij1···jk = e^{−(λj1 + ··· + λjk)} bij1···jk,

are frames for L²Ω^k(M).
SEC frames for k-forms
Theorem 2b. The sets

B_{k,1} = {b̃¹ij1···jk : i ∈ {0, 1, . . .}, j1, . . . , jk ∈ {1, 2, . . .}},

where

b¹ij1···jk = (φi / ‖φi‖H¹) dφj1 ∧ · · · ∧ dφjk,   b̃¹ij1···jk = e^{−(λj1 + ··· + λjk)} b¹ij1···jk,

and ‖φi‖H¹ = (1 + λi)^{1/2}, are frames for the Sobolev space H¹Ω^k(M). In particular, B_{1,1} is a frame for H¹Ω¹(M), the space appearing in the 1-Laplacian eigenvalue problem below.
Eigenforms of 1-Laplacian
Δ1 = δd + dδ

Δ1 ψj = νj ψj,   ψj ∈ Ω¹(M),   νj ≥ 0

• Weak form: Find ψj ∈ H¹Ω¹(M) \ {0} and νj ∈ R such that

⟨dω, dψj⟩ + ⟨δω, δψj⟩ = νj ⟨ω, ψj⟩,   ∀ω ∈ H¹Ω¹(M).

• SEC approximation: matrix generalized eigenvalue problem for frame-coefficient vectors uj,

A uj = ν̂j B uj,   ν̂j ≈ νj,   ψ̂j = ∑_{i,k} (uj)ik b¹ik ≈ ψj.
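Assuming the stiffness-type matrix A and mass-type matrix B have already been assembled in a truncated frame (their construction from the λj, φj, cijk data is not shown), the generalized eigenvalue problem could be solved along the following lines; the small ridge term is an illustrative way of handling the rank deficiency that frame redundancy can introduce in B.

```python
# Sketch: solve A u = nu B u for the leading SEC eigenvalues/eigenform coefficients.
import numpy as np
from scipy.linalg import eigh

def sec_eigenforms(A, B, n_eig=8, ridge=1e-10):
    """Return the n_eig smallest generalized eigenvalues and coefficient vectors."""
    B_reg = B + ridge * np.eye(B.shape[0])   # frame redundancy can make B singular
    nu, U = eigh(A, B_reg)                   # symmetric generalized eigenproblem
    return nu[:n_eig], U[:, :n_eig]

# Smoke test with random symmetric positive matrices standing in for A and B.
rng = np.random.default_rng(0)
M1, M2 = rng.standard_normal((2, 40, 40))
nu, U = sec_eigenforms(M1 @ M1.T, M2 @ M2.T)
print(nu)
# The j-th approximate eigenform is psi_j ~ sum_{ik} U[:, j][ik] * b^1_{ik},
# recovered by applying the synthesis operator of the truncated H^1 frame.
```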
Circle
• The eigenforms of Δ1 are sin(kθ) dθ and cos(kθ) dθ, and the eigenvalues νk are the same as those of Δ0 (the Laplacian on functions),

νk ∈ {0, 1, 1, 4, 4, . . . , j², j², . . .}.
Torus embedded in R3
Leading SEC eigenvalues of Δ1: 0.0040, 0.0093, 0.2574, 0.2575, 0.2575, 0.2587, 0.8061, 0.8067. The two near-zero eigenvalues correspond to the harmonic 1-forms of the torus (b1 = 2).
2-sphere
Leading SEC eigenvalues of Δ1: 1.9349, 1.9521, 1.9781, 1.9817, 2.0042, 2.0172, 5.8001, 5.8142 (exact values for the unit sphere: 2 with multiplicity 6, then 6).
Genus-2 surface
Leading SEC eigenvalues of Δ1: 0.0021, 0.0026, 0.0026, 0.0041, 0.0893, 0.0901, 0.2151, 0.2175. The four near-zero eigenvalues correspond to the harmonic 1-forms of a genus-2 surface (b1 = 4).
Mobius band
Leading SEC eigenvalues of Δ1: 0.0242, 1.0415, 1.0449, 3.8684, 3.8948, 8.0352, 8.1018, 8.9369.
Lorenz 63 attractor
Leading SEC eigenvalues of Δ1: 0.0011, 0.0017, 0.0030, 0.0072, 0.0105, 0.0109, 0.0205, 0.0262.
Learning vector fields of dynamical systems
ẋ(t) = v|x(t),   v ∈ X(M)

Given:
• Embedding F : M → R^d, expanded in Laplace-Beltrami eigenfunctions as F = ∑j Fj φj.

Learning algorithm:

1 Apply the analysis operator, v′ = Tv = (⟨bij, v⟩) ∈ ℓ². (Gives the coefficients of v in the dual frame.)
2 Apply the dual Gram operator, c = G⁺v′ ∈ ℓ². (Gives the coefficients of v in the primal frame.)
3 Reconstruct using the synthesis operator, v̂ = T∗c ∈ X(M).
4 Apply the pushforward map into data space, v⃗ = F∗v̂ = v̂F ∈ R^d.

The procedure is not equivalent to supervised learning of the coordinate functions F∗v.
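A minimal sketch of steps 1-4 in a truncated frame, assuming the analysis coefficients ⟨bij, v⟩, the Gram matrix G, and the sampled pushforwards F∗bij are precomputed; all array names are hypothetical.

```python
# Sketch of the SEC vector-field reconstruction in a truncated frame.
# Illustratively named inputs:
#   v_prime : (n_frame,)            analysis coefficients <b_ij, v>   (step 1)
#   G       : (n_frame, n_frame)    Gram matrix of the truncated frame
#   b_push  : (n_frame, n_samp, d)  values of F_* b_ij at the samples
import numpy as np

def sec_reconstruct(v_prime, G, b_push, rcond=1e-8):
    c = np.linalg.pinv(G, rcond=rcond) @ v_prime     # step 2: dual Gram operator G^+
    # steps 3-4: synthesis + pushforward, v_vec(x_m) = sum_ij c_ij (F_* b_ij)(x_m)
    return np.einsum("f,fmd->md", c, b_push)
```

Because the reconstruction goes through the frame coefficients rather than fitting the components of F∗v directly, it differs from supervised regression of the coordinate functions, as noted above.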
Variable-speed rotation on circle
θ̇(t) = v|θ(t) = (e^{κ cos θ} sin θ + 1)² + 1,   κ = 4
[Figure: orbits of the variable-speed rotation in the data plane (x, y); operator-based reconstruction using frames.]
• Left: Dynamical evolution based on interpolation of F∗v |θ ∈ R2.
• Right: Dynamical evolution based on SEC approximation.
Oxtoby system on 2-torus
θ̇(t) = v|θ(t)

v1(θ) = v2(θ) + (1 − α)(1 − cos θ2),   v2(θ) = α(1 − cos(θ1 − θ2)),   α = √20
[Figure 16 of Giannakis (2019): phase-space diagram of the fixed-point system on the 2-torus and components x1 = cos θ1, x3 = cos θ2 of the standard flat embedding F(θ) = (cos θ1, sin θ1, cos θ2, sin θ2) of T² in R⁴.]
[Figure: true vs. SEC reconstruction of the vector field.]
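A sketch of generating trajectory data for this system with the stated vector field and α = √20, mapped into R⁴ through the standard flat embedding; the integration span and tolerances are illustrative, while the initial condition (π/2, π/2) and the sampling interval 0.01 follow the setup described in Giannakis (2019).

```python
# Sketch: integrate the Oxtoby (fixed-point) system on the 2-torus and map the
# trajectory through the flat embedding F = (cos th1, sin th1, cos th2, sin th2).
import numpy as np
from scipy.integrate import solve_ivp

ALPHA = np.sqrt(20.0)

def oxtoby(t, theta):
    th1, th2 = theta
    v2 = ALPHA * (1.0 - np.cos(th1 - th2))
    v1 = v2 + (1.0 - ALPHA) * (1.0 - np.cos(th2))
    return [v1, v2]

t_eval = np.arange(0.0, 100.0, 0.01)                    # illustrative time span
sol = solve_ivp(oxtoby, (t_eval[0], t_eval[-1]), [np.pi / 2, np.pi / 2],
                t_eval=t_eval, rtol=1e-9, atol=1e-9)
th1, th2 = sol.y
X = np.column_stack([np.cos(th1), np.sin(th1), np.cos(th2), np.sin(th2)])  # data in R^4
```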
Conclusions
A spectral formulation of exterior calculus was developed with a number of useful features for manifold learning applications:
• The framework is based entirely on the eigenvalues and eigenfunctions of the Laplacian on functions, allowing the use of graph-theoretic techniques with well-established consistency properties.
• No auxiliary structures such as simplicial complexes are required.
• Classical approaches in approximation theory of operators can be employed to construct Galerkin schemes for operators such as the 1-Laplacian with spectral convergence guarantees.
• Computational cost decoupled from ambient space dimension and dataset size.
Reference
• Berry, T., and D. Giannakis (2020). Spectral exterior calculus. Commun. Pure Appl. Math., 73, 689–770. doi:10.1002/cpa.21885.