Sparse recovery for spherical harmonic expansions
Rachel Ward
Courant Institute, New York University
Workshop on Sparsity and Cosmology, Nice, May 31, 2011
Cosmic Microwave Background Radiation (CMB) map
• Temperature is measured as T(θ, ϕ) = ∑_{ℓ=0}^{∞} ∑_{k=−ℓ}^{ℓ} a_{(ℓ,k)} Y_ℓ^k(θ, ϕ), where the Y_ℓ^k are spherical harmonics
• Red band: measurements are corrupted by galactic signal
CMB map is compressible in spherical harmonics
• Consider the coefficient vector a = (a_{(ℓ,k)}) and

    T(θ, ϕ) ≈ ∑_{ℓ=0}^{n} ∑_{k=−ℓ}^{ℓ} a_{(ℓ,k)} Y_ℓ^k(θ, ϕ).
• This vector is predicted and observed to be compressible.
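As a concrete illustration of the expansion above (a sketch, not the speaker's code), the snippet below evaluates a truncated series, building Y_ℓ^k from associated Legendre functions via `scipy.special.lpmv`; the coefficients are random placeholders, not CMB data:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def Y(l, k, theta, phi):
    """Spherical harmonic of degree l and order k (|k| <= l);
    theta is the azimuthal angle, phi the polar angle."""
    m = abs(k)
    norm = np.sqrt((2 * l + 1) / (4 * np.pi) * factorial(l - m) / factorial(l + m))
    val = norm * lpmv(m, l, np.cos(phi)) * np.exp(1j * k * theta)
    return (-1) ** m * val if k < 0 else val  # Y_l^{-m} = (-1)^m conj(Y_l^m)

def truncated_T(coeffs, theta, phi, n):
    """T(theta, phi) ~ sum_{l=0}^{n} sum_{k=-l}^{l} a_{(l,k)} Y_l^k(theta, phi)."""
    return sum(coeffs[(l, k)] * Y(l, k, theta, phi)
               for l in range(n + 1) for k in range(-l, l + 1))

rng = np.random.default_rng(0)
n = 3
# Illustrative random coefficients indexed by (degree, order)
coeffs = {(l, k): rng.standard_normal()
          for l in range(n + 1) for k in range(-l, l + 1)}
val = truncated_T(coeffs, 1.0, 0.7, n)
```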
Spherical harmonics: Fourier analysis on the sphere
• The Y_ℓ^k are products of complex exponentials and orthogonal Jacobi polynomials
• The Y_ℓ^k are orthonormal with respect to the spherical surface measure sin(ϕ) dϕ dθ
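The orthonormality claim is easy to sanity-check numerically; the sketch below (not from the talk) approximates the inner product against sin(ϕ) dϕ dθ with a midpoint rule:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def Y(l, k, theta, phi):
    """Degree-l, order-k spherical harmonic (theta azimuthal, phi polar)."""
    m = abs(k)
    norm = np.sqrt((2 * l + 1) / (4 * np.pi) * factorial(l - m) / factorial(l + m))
    val = norm * lpmv(m, l, np.cos(phi)) * np.exp(1j * k * theta)
    return (-1) ** m * val if k < 0 else val

# Midpoint rule on [0, 2*pi) x [0, pi] against the measure sin(phi) dphi dtheta
n = 400
theta = (np.arange(n) + 0.5) * 2 * np.pi / n
phi = (np.arange(n) + 0.5) * np.pi / n
TH, PH = np.meshgrid(theta, phi, indexing="ij")
w = np.sin(PH) * (2 * np.pi / n) * (np.pi / n)

def inner(l1, k1, l2, k2):
    return np.sum(Y(l1, k1, TH, PH) * np.conj(Y(l2, k2, TH, PH)) * w)

same = inner(2, 1, 2, 1)   # expect ~1
cross = inner(2, 1, 3, 1)  # expect ~0
```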
CMB map inpainting via ℓ1-minimization
• (Abrial, Moudden, Starck, Fadili, Delabrouille, Nguyen '08): Propose full-sky CMB map inpainting from partial measurements T(θ_j, ϕ_j). Obtain coefficients a = (a_{(ℓ,k)}) by solving the ℓ1-minimization problem:

    a = argmin_c ‖c‖_1  s.t.  ∑_{ℓ=0}^{√N−1} ∑_{k=−ℓ}^{ℓ} c_{(ℓ,k)} Y_ℓ^k(θ_j, ϕ_j) = T(θ_j, ϕ_j)

• D = √N is a prescribed maximal degree
• Theoretical justification?
The spherical sampling matrix
• In matrix form, the constraints in the ℓ1-minimization problem are Φc = T, where Φ ∈ C^{m×N} is the spherical sampling matrix

    Φ = [ 1   Y_1^1(θ_1, ϕ_1)   ⋯   Y_ℓ^k(θ_1, ϕ_1)   ⋯ ]
        [ 1   Y_1^1(θ_2, ϕ_2)   ⋯   Y_ℓ^k(θ_2, ϕ_2)   ⋯ ]
        [ ⋮          ⋮                    ⋮              ]
        [ 1   Y_1^1(θ_m, ϕ_m)   ⋯   Y_ℓ^k(θ_m, ϕ_m)   ⋯ ]

• We assume these measurements are underdetermined: m < N.
• Compressed sensing etc.: if Φ acts as an approximate isometry on sparse vectors, then compressible vectors are stably recovered via ℓ1-minimization
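A minimal sketch of assembling Φ, for illustrative values D = 4 (so N = D² = 16 columns) and m = 10 sample points; the helper `Y` and the column ordering are assumptions of this sketch:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def Y(l, k, theta, phi):
    """Degree-l, order-k spherical harmonic (theta azimuthal, phi polar)."""
    m = abs(k)
    norm = np.sqrt((2 * l + 1) / (4 * np.pi) * factorial(l - m) / factorial(l + m))
    val = norm * lpmv(m, l, np.cos(phi)) * np.exp(1j * k * theta)
    return (-1) ** m * val if k < 0 else val

def spherical_sampling_matrix(points, max_degree):
    """Row j holds Y_l^k(theta_j, phi_j) for all 0 <= l < max_degree, |k| <= l."""
    cols = [(l, k) for l in range(max_degree) for k in range(-l, l + 1)]
    Phi = np.array([[Y(l, k, th, ph) for (l, k) in cols] for (th, ph) in points])
    return Phi, cols

rng = np.random.default_rng(1)
D = 4                    # maximal degree, so N = D**2 = 16 columns
m = 10                   # m < N: underdetermined
points = np.column_stack([rng.uniform(0, 2 * np.pi, m),   # azimuthal angles
                          rng.uniform(0, np.pi, m)])      # polar angles
Phi, cols = spherical_sampling_matrix(points, D)
```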
Restricted Isometry Property (RIP)
Definition
[Candès, Romberg, Tao '06] The restricted isometry constant δ_s of a matrix Φ ∈ C^{m×N} is the smallest number such that for all s-sparse x ∈ C^N,

    (1 − δ_s)‖x‖_2² ≤ ‖Φx‖_2² ≤ (1 + δ_s)‖x‖_2²

• It is open to construct deterministic matrices satisfying the RIP in the regime m ≍ s log^p(N).
• If Φ ∈ R^{m×N} has i.i.d. Gaussian or Bernoulli entries and m ≥ C δ^{−2} s log(N/s), then δ_s ≤ δ with high probability.
• [CRT '06, RV '08, R '09] If m = O(s log⁴(N)), the RIP holds w.h.p. for Φ associated to bounded orthonormal systems.
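Since δ_s involves a maximum over all (N choose s) column supports, computing it exactly is intractable in general; the sketch below (illustrative parameters, not from the talk) lower-bounds δ_s for a normalized Gaussian matrix by sampling random supports:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, s = 100, 200, 5
Phi = rng.standard_normal((m, N)) / np.sqrt(m)  # normalized i.i.d. Gaussian

# delta_s is a maximum over all s-column submatrices; exhaustive search is
# exponential, so we estimate a lower bound from random supports.
delta_est = 0.0
for _ in range(200):
    S = rng.choice(N, size=s, replace=False)
    sig = np.linalg.svd(Phi[:, S], compute_uv=False)
    delta_est = max(delta_est, abs(sig[0] ** 2 - 1), abs(sig[-1] ** 2 - 1))
```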
RIP matrices are good for sparse recovery
• [CRT '06, C '08, Foucart '10] If Φ ∈ C^{m×N} satisfies δ_{2s} ≤ δ_0 (δ_0 = .46 is valid), y = Φx is observed, and

    x̂ = argmin_z ‖z‖_1 subject to Φz = y,

then

    ‖x̂ − x‖_2 ≲ ‖x − x_s‖_1 / √s,

where x_s is the best s-term approximation to x.
• If x is s-sparse, then x̂ = x is recovered exactly.
• If x is well-approximated by an s-sparse vector, then x̂ ≈ x.
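A small demonstration of exact recovery (a sketch with illustrative parameters): for a real Gaussian matrix, basis pursuit can be posed as a linear program and solved with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
m, N, s = 30, 60, 4
Phi = rng.standard_normal((m, N)) / np.sqrt(m)

# An s-sparse ground truth and its (noiseless) measurements
x = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x[support] = rng.standard_normal(s)
y = Phi @ x

# Basis pursuit as a linear program: write z = u - v with u, v >= 0 and
# minimize sum(u) + sum(v) subject to [Phi, -Phi] @ [u; v] = y.
res = linprog(c=np.ones(2 * N), A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]
```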
Sparse recovery for bounded orthonormal systems
    Ψ = [ ψ_1(x_1)   ψ_2(x_1)   ⋯   ψ_N(x_1) ]
        [    ⋮          ⋮              ⋮     ]
        [ ψ_1(x_m)   ψ_2(x_m)   ⋯   ψ_N(x_m) ]

• Suppose (ψ_j)_{j=1}^{N} on a compact domain D are orthonormal with respect to a measure dν
• Suppose x_1, …, x_m ∈ D are chosen i.i.d. from dν.
• Suppose max_{j∈{1,…,N}} ‖ψ_j‖_∞ ≤ K.
Theorem (Rudelson, Vershynin ’08, Rauhut ’09)
If m ≥ C K² δ^{−2} s log³(s) log(N), then the matrix (1/√m)Ψ satisfies δ_s ≤ δ with probability at least 1 − N^{−γ log³(s)}.
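As a quick numerical illustration (not part of the proof), the randomly sampled Fourier system has K = 1, and the renormalized matrix (1/√m)Ψ has nearly orthonormal columns once m is large:

```python
import numpy as np

rng = np.random.default_rng(3)
N, m = 32, 2000
xs = rng.uniform(0, 1, m)                              # i.i.d. samples from d(nu) = dx
Psi = np.exp(2j * np.pi * np.outer(xs, np.arange(N)))  # psi_j(x) = e^{2 pi i j x}
A = Psi / np.sqrt(m)

K = np.max(np.abs(Psi))                                # uniform bound, here K = 1
gram_err = np.max(np.abs(A.conj().T @ A - np.eye(N)))  # -> 0 as m grows
```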
Examples of bounded orthonormal systems
• Fourier: ψ_j(x) = e^{2πijx}, D = [0, 1], dν = dx, K = 1 (also a discrete analog)
• Chebyshev polynomials T_j(x): D = [−1, 1], dν = (1 − x²)^{−1/2} dx, K = √2
• RIP for Ψ means that functions which admit s-sparse expansions with respect to the ψ_j can be recovered from their values at m sample points provided

    m ≥ C K² s log³(s) log(N),

and functions with compressible expansions can be recovered approximately
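Both claims for the Chebyshev system (K = √2 and orthonormality with respect to the arcsine measure) are easy to check numerically; a sketch, using the substitution x = cos(t) under which √2 T_j(cos t) = √2 cos(jt):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Orthonormalized Chebyshev system w.r.t. d(nu) = (1 - x^2)^{-1/2} dx / pi:
# psi_0 = 1 and psi_j = sqrt(2) T_j for j >= 1, so K = sup |psi_j| = sqrt(2).
x = np.linspace(-1, 1, 10001)
K = max(np.max(np.abs(np.sqrt(2) * Chebyshev.basis(j)(x))) for j in range(1, 20))

# Orthonormality via x = cos(t): midpoint rule on [0, pi]
t = (np.arange(2000) + 0.5) * np.pi / 2000
ip_cross = np.mean(2 * np.cos(3 * t) * np.cos(5 * t))  # expect ~0
ip_same = np.mean(2 * np.cos(3 * t) ** 2)              # expect ~1
```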
Examples of bounded orthonormal systems
[Rauhut, W '10]: the preconditioned Legendre system Q(x)L_j(x)
• The L_j are normalized Legendre polynomials
• Q(x) = C(1 − x²)^{1/4}, dν(x) = π^{−1}(1 − x²)^{−1/2} dx, and K = 2
• Q(x) is a preconditioner; this implies sparse recovery in the Legendre system
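A numerical sanity check (not from the talk) that the preconditioned functions (1 − x²)^{1/4} L_n(x) are uniformly bounded in n, even though the normalized Legendre polynomials themselves grow like n^{1/2} at the endpoints; the constant C in Q is omitted here:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

x = np.linspace(-1, 1, 20001)
sup = 0.0
for n_deg in range(50):
    # L2-normalized Legendre polynomial on [-1, 1]: L_n = sqrt(n + 1/2) P_n
    Ln = np.sqrt(n_deg + 0.5) * Legendre.basis(n_deg)(x)
    sup = max(sup, np.max((1 - x ** 2) ** 0.25 * np.abs(Ln)))
```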
Examples of bounded orthonormal systems
[Rauhut,W ′10] : More generally, preconditioned Jacobi systemQα(x)pαj (x)
• pαj s are polynomials orthonormal w.r.t. dν(x) = (1− x2)αdx
• [Krasikov 07:] |Qαpαj (x)| ≤ Cα1/4
• Qα(x) = (1− x2)α/2+1/4, dν(x) = (1− x2)−1/2dx , andK = Cα1/4
That is, Chebyshev sampling is universal for recovering sparsepolynomial expansions
The spherical harmonics
• The spherical harmonics can be written as

    Y_ℓ^k(θ, ϕ) = e^{ikθ} p_{ℓ−|k|}^{|k|}(cos ϕ)(sin ϕ)^{|k|},   (θ, ϕ) ∈ [0, 2π) × [0, π],

  for −ℓ ≤ k ≤ ℓ and ℓ ≥ 0
• Growth rates for complex exponentials and Jacobi polynomials give:

    sup_{0≤ℓ≤N^{1/2}−1, −ℓ≤k≤ℓ} |sin(ϕ)^{1/2} Y_ℓ^k(θ, ϕ)| ≤ C N^{1/8}

• This motivates the strategy of sampling uniformly from the product measure dϕ dθ.
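The slow N^{1/8} = D^{1/4} growth of the preconditioned system can be observed numerically; the sketch below (the helper `absY` is an assumption of this sketch) computes the sup over a ϕ-grid for a few maximal degrees D = N^{1/2}:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def absY(l, k, phi):
    """|Y_l^k| at polar angle phi (the modulus is independent of theta)."""
    m = abs(k)
    norm = np.sqrt((2 * l + 1) / (4 * np.pi) * factorial(l - m) / factorial(l + m))
    return np.abs(norm * lpmv(m, l, np.cos(phi)))

phi = np.linspace(0, np.pi, 2001)
sups = {}
for D in (4, 8, 16):  # degrees 0 <= l < D, i.e. N = D**2 coefficients
    sups[D] = max(np.max(np.sqrt(np.sin(phi)) * absY(l, k, phi))
                  for l in range(D) for k in range(-l, l + 1))
```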
Location of sampling points matters
Figure: Phase transitions for sparse recovery on the sphere. [Two phase-transition panels, (a) and (b), plotting s/m against m/N, both axes ranging over 0.2 to 1.]

• We form "random" s-sparse coefficient vectors c = (c_{ℓ,k}) of degree D = N^{1/2} = 16 and choose m sampling points from (a) the product measure dϕ dθ and (b) the uniform spherical measure sin ϕ dϕ dθ. Black indicates recovery.
Sparse recovery in spherical harmonic systems
Theorem (Rauhut, W ’11)
Suppose that (θ_1, ϕ_1), …, (θ_m, ϕ_m), with m ≥ C s log³(s) N^{1/4} log(N), are drawn independently from the uniform measure on B = [0, 2π) × [0, π]. Let Φ be the m × N spherical sampling matrix and let QΦ be its preconditioned version. With high probability the following holds for all harmonic polynomials

    g(θ, ϕ) = ∑_{ℓ=0}^{N^{1/2}−1} ∑_{k=−ℓ}^{ℓ} c_{ℓ,k} Y_ℓ^k(θ, ϕ).

Suppose that noisy sample values y_j = g(θ_j, ϕ_j) + η_j are observed, and that ‖η‖_∞ ≤ ε. Let

    ĉ = argmin_z ‖z‖_1 subject to ‖QΦz − Qy‖_2 ≤ √m ε.

Then

    ‖ĉ − c‖_2 ≤ C_1 σ_s(c)_1 / √s + C_2 ε,

where σ_s(c)_1 is the ℓ1-norm error of the best s-term approximation to c.
Conclusions
• Our results provide a measure of theoretical justification for the good numerical results obtained for CMB map inpainting via ℓ1-minimization
• Our results may be of interest for other problems in geophysics, astronomy, and medical imaging.
Open problems
For practical implementation, we would rather sample from a discrete grid. In experiments, the sparse recovery results for discrete vs. continuous sampling are indistinguishable. Proof?
Open problems
In our proof, we require m ≳ s N^{1/4} log⁴(N) sampling points (or rows in Φ) for ℓ1-minimization to recover s-sparse spherical polynomials of degree N^{1/2}.
We should be able to improve this to m ≳ s log^p(N)...
In practice, different models of sparsity are better suited to the sphere, such as rotationally invariant sparsity sets, or sparsity in certain linear combinations of spherical harmonic coefficients.