Capacity-achieving Sparse Regression Codes via Approximate Message Passing


Transcript of Capacity-achieving Sparse Regression Codes via Approximate Message Passing

Page 1:

Capacity-achieving Sparse Regression Codes via Approximate Message Passing

Ramji Venkataramanan

University of Cambridge

Joint work with Cynthia Rush (Yale) & Adam Greig (Cambridge)

ITA 2015


Page 2:

Want to construct efficient channel codes for the AWGN channel:

y = x + w,   w ∼ N(0, σ²)

Power constraint: (1/n) Σ_{i=1}^n x_i² ≤ P

GOAL: Codes with fast encoding & decoding, with probability of decoding error → 0 at rates approaching

C = (1/2) log(1 + P/σ²)
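As a numerical anchor (my own sketch, not from the slides), here is this capacity formula evaluated at snr = P/σ² = 15, the setting used in the experiments later in the talk; logs are base 2, so rates are in bits per channel use.

```python
import math

# AWGN capacity C = 0.5 * log2(1 + P / sigma^2), in bits per channel use.
snr = 15.0                      # P / sigma^2, as in the experiments later
C = 0.5 * math.log2(1 + snr)    # = 2.0 bits for snr = 15
R = 0.7 * C                     # the rate used in the empirical plots
print(f"C = {C:.2f} bits, R = 0.7C = {R:.2f} bits")
```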

Sparse Regression Codes (SPARCs)

Introduced by Barron and Joseph [’10, ’12]

Efficient, asymptotically capacity-achieving decoders proposed by [Barron-Joseph], [Barron-Cho]

In this talk:
- Fast, asymptotically capacity-achieving AMP decoder
- Good empirical performance at practical block lengths


Page 4:

Codebook Construction

[Diagram: design matrix A (n rows) and coefficient vector β^T = (0, c_1, 0, …, 0, c_2, 0, …, 0, c_L, 0, …, 0)]

A: design matrix or 'dictionary' with independent N(0, 1/n) entries

Codewords Aβ - sparse linear combinations of columns of A


Page 5:

SPARC Codebook

[Diagram: β^T has L sections (Section 1, Section 2, …, Section L) of M columns each, with one non-zero entry c_ℓ per section; A has n rows and ML columns]

Choosing M and L:

For a rate-R codebook, need M^L = 2^{nR}

Choose M polynomial in n, e.g., M = √n ⇒ L ∼ n/log n

Size of A: polynomial in n
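A small sketch (mine; the helper name is made up) of how these parameters fit together: given M and rate R, L follows from M^L = 2^{nR}, and the dictionary A is drawn with i.i.d. N(0, 1/n) entries. Toy sizes are used so the matrix fits comfortably in memory.

```python
import numpy as np

def num_sections(n, R, M):
    """L satisfying M^L = 2^(nR), i.e. L = nR / log2(M)."""
    return round(n * R / np.log2(M))

n, R, M = 400, 1.4, 64                   # e.g. R = 0.7C at snr = 15
L = num_sections(n, R, M)                # grows like n / log n
rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, M * L))   # i.i.d. N(0, 1/n)
print(L, A.shape)                        # size of A is polynomial in n
```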


Page 8:

Power Allocation

[Diagram: same codebook picture, with the non-zero entry of section ℓ equal to √(nP_ℓ): β^T = (0, √(nP_1), 0, …, 0, √(nP_2), 0, …, 0, √(nP_L), 0, …, 0)]

Coefficients c_1 = √(nP_1), …, c_L = √(nP_L) chosen such that

Σ_ℓ P_ℓ = P

Examples:
1) Flat: P_ℓ = P/L
2) Exponentially decaying: P_ℓ ∝ 2^{−κℓ/L}, with constant κ > 0

Allocation such that P_ℓ = Θ(1/L)
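A sketch of both example allocations and of the encoder they feed (my own code under the talk's definitions; the function names are made up). Each section contributes one column of A, scaled by √(nP_ℓ).

```python
import numpy as np

def power_allocation(L, P, kappa=None):
    """Flat if kappa is None; otherwise P_l ∝ 2^(-kappa*l/L), normalized so sum = P."""
    if kappa is None:
        return np.full(L, P / L)               # flat: P_l = P / L
    p = 2.0 ** (-kappa * np.arange(1, L + 1) / L)
    return P * p / p.sum()                     # exponentially decaying

def encode(A, msg, Pl, n, M):
    """msg[l] in {0, ..., M-1} picks the non-zero column of section l."""
    L = len(Pl)
    beta = np.zeros(M * L)
    beta[np.arange(L) * M + msg] = np.sqrt(n * Pl)
    return A @ beta, beta                      # codeword A*beta, and beta itself

# e.g. Pl = power_allocation(L, P, kappa=2 * C) gives the 2^(-2Cl/L) allocation used later
```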

Page 9:

Decoding

[Diagram: same codebook picture as on the previous slide]

y = Aβ + w

Want efficient algorithm to decode β from y


Page 10:

Approximate Message Passing

Approximation of loopy belief propagation for dense graphs [Donoho-Maleki-Montanari '09], [Rangan '11], [Krzakala et al '12], [Schniter '11], …

For problems such as compressed sensing

y = Aβ + w

A is an n × N measurement matrix, β i.i.d. with known prior

AMP iteratively produces estimates β^1, β^2, …

Faster than ℓ_1-based convex optimization, similar performance

Rigorous asymptotic analysis [Bayati-Montanari '11] when A is Gaussian, undersampling ratio (n/N) is constant

For each t,   lim_{n→∞} ‖β − β^t‖²/n = σ_t²

σ_t² can be computed via a scalar iteration — state evolution


Page 12:

AMP for SPARC

[Diagram: same codebook picture as before]

y = Aβ + w,   w i.i.d. ∼ N(0, σ²)   (1)

In SPARCs,

β has one non-zero per section, section size M → ∞
The undersampling ratio n/(ML) → 0.

AMP decoder can be derived by approximating a min-sum-like message passing algorithm for (1)


Pages 13-15:

AMP Decoder

Set β^0 = 0. For t ≥ 0:

z^t = y − Aβ^t + (z^{t−1}/τ_{t−1}²) (P − ‖β^t‖²/n),

β^{t+1}_i = η^t_i(β^t + A^*z^t),   for i = 1, …, ML

For i ∈ section ℓ,

η^t_i(s) = √(nP_ℓ) · exp(s_i √(nP_ℓ)/τ_t²) / Σ_{j∈sec(ℓ)} exp(s_j √(nP_ℓ)/τ_t²).

For large enough n, β^t + A^*z^t has distribution close to β + τ_t Z, where Z is i.i.d. ∼ N(0, 1).

Then β^{t+1} = η^t(s) = E[β | β + τ_t Z = s]. It is
- the MMSE estimate of β given the observation β + τ_t Z
- ∝ the posterior probability of entry i of β being non-zero
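A compact numpy sketch of exactly these updates (my own rendering; not the authors' code). Two practical choices are mine and should be read as assumptions: τ_t² is estimated online as ‖z^t‖²/n rather than taken from state evolution, and the section-wise softmax is stabilized by subtracting the per-section maximum.

```python
import numpy as np

def amp_decode(y, A, Pl, n, M, L, T):
    """AMP decoder sketch: returns beta^T after T iterations."""
    P = Pl.sum()
    c = np.sqrt(n * Pl)                        # section coefficients sqrt(n P_l)
    beta = np.zeros(M * L)
    z, tau2 = y.copy(), None                   # z^{-1} = 0, so z^0 = y
    for _ in range(T):
        if tau2 is not None:                   # residual with Onsager correction
            z = y - A @ beta + (z / tau2) * (P - beta @ beta / n)
        tau2 = z @ z / n                       # online estimate of tau_t^2
        s = (beta + A.T @ z).reshape(L, M)     # test statistic, section by section
        u = s * c[:, None] / tau2              # exponents s_i * sqrt(n P_l) / tau_t^2
        u -= u.max(axis=1, keepdims=True)      # stabilize the softmax
        e = np.exp(u)
        beta = (c[:, None] * e / e.sum(axis=1, keepdims=True)).ravel()
    return beta

def hard_decision(beta, M, L):
    """Decoded message: the largest entry in each section."""
    return beta.reshape(L, M).argmax(axis=1)
```

With the earlier snippets, a full run is: x, beta_true = encode(A, msg, Pl, n, M); y = x + noise; msg_hat = hard_decision(amp_decode(y, A, Pl, n, M, L, T), M, L).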

Pages 16-20:

The test statistics

Suppose

z^t = y − Aβ^t + (z^{t−1}/τ_{t−1}²) (P − ‖β^t‖²/n)

Then

β^t + A^*z^t = β + A^*w + (I − A^*A)(β − β^t)

where the entries of A^*w are ∼ N(0, σ²) and the entries of (I − A^*A) are ≈ N(0, 1/n).
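A quick Monte Carlo check (mine, not from the talk) of the two distributional claims above: the entries of A^*w concentrate around N(0, σ²), and the off-diagonal entries of I − A^*A have variance ≈ 1/n.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, sigma2 = 1000, 2000, 1.0
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))
w = rng.normal(0.0, np.sqrt(sigma2), size=n)

print(np.var(A.T @ w))                          # ≈ sigma^2 = 1.0
E = np.eye(N) - A.T @ A
off_diag = E[~np.eye(N, dtype=bool)]
print(n * np.var(off_diag))                     # ≈ 1.0, i.e. entries ~ N(0, 1/n)
```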

Page 21:

The constants τ_t

β^t + A^*z^t ∼ β + τ_t Z

τ_t² is the variance of the noise in the test statistic after step t:

τ_0² = σ² + P,
τ_{t+1}² = σ² + E[‖β − β^{t+1}‖²]/n = σ² + P(1 − x_{t+1}),

where

x_{t+1} = Σ_{ℓ=1}^L (P_ℓ/P) E[ exp((√(nP_ℓ)/τ_t)(U_1^ℓ + √(nP_ℓ)/τ_t)) / { exp((√(nP_ℓ)/τ_t)(U_1^ℓ + √(nP_ℓ)/τ_t)) + Σ_{j=2}^M exp((√(nP_ℓ)/τ_t) U_j^ℓ) } ]

{U_j^ℓ} are i.i.d. ∼ N(0, 1)
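The expectation defining x_{t+1} is an average of L scalar Gaussian integrals, so one state-evolution step is cheap to evaluate by Monte Carlo. A sketch (mine); the ratio inside the expectation is rewritten relative to the true term, as 1/(1 + Σ_j exp(·)), to avoid overflow when τ_t is small.

```python
import numpy as np

def se_step(tau2, Pl, sigma2, n, M, samples=500, seed=2):
    """One state-evolution step: returns (x_{t+1}, tau_{t+1}^2) by Monte Carlo."""
    rng = np.random.default_rng(seed)
    P, tau = Pl.sum(), np.sqrt(tau2)
    x = 0.0
    for P_l in Pl:
        a = np.sqrt(n * P_l) / tau                   # sqrt(n P_l) / tau_t
        U = rng.standard_normal((samples, M))
        # ratio = exp(a(U_1 + a)) / (exp(a(U_1 + a)) + sum_{j>=2} exp(a U_j))
        #       = 1 / (1 + sum_{j>=2} exp(a(U_j - U_1) - a^2))
        expo = a * (U[:, 1:] - U[:, :1]) - a * a
        ratio = 1.0 / (1.0 + np.exp(expo).sum(axis=1))
        x += (P_l / P) * ratio.mean()
    return x, sigma2 + P * (1.0 - x)

# Starting point: tau_0^2 = sigma2 + P, e.g. se_step(sigma2 + P, Pl, sigma2, n, M).
```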

Page 22:

State Evolution

Thus, when β^t + A^*z^t ∼ β + τ_t Z:

β^{t+1} ∼ η^t(β + τ_t Z)
β^{t+2} ∼ η^{t+1}(β + τ_{t+1} Z)
...

τ_{t+1}² = σ² + P(1 − x_{t+1}), with x_{t+1} as defined on the previous page.

KEY property

x_t increases with t for a finite number of steps T_n

x_t ≈ 1 ⇒ τ_t² ≈ σ², i.e., the test statistic is ∼ β + σZ, i.e.,

AMP has effectively converted the A matrix to an identity!


Page 25:

x_t vs. t

[Figure: plot of x_t versus iteration t]

SPARC: M = 512, L = 1024, snr = 15, R = 0.7C, P_ℓ ∝ 2^{−2Cℓ/L}

x_t = E[β^*β^t]/(nP)

"Power-weighted fraction of correctly decoded sections in β^t"

Page 26:

Asymptotics

A nice closed-form expression can be obtained for x̄_t := lim_{n→∞} x_t

Example: With P_ℓ ∝ 2^{−2Cℓ/L},

x̄_t := lim x_t = [(1 + snr) − (1 + snr)^{1−ξ_{t−1}}] / snr,

τ̄_t² := lim τ_t² = σ² + P(1 − x̄_t) = σ² (1 + snr)^{1−ξ_{t−1}}

where ξ_{−1} = 0 and, for t ≥ 0,

ξ_t = min{ (1/(2C)) log(C/R) + ξ_{t−1}, 1 }.

For R < C, x̄_t ↗ 1 and τ̄_t² ↘ σ² in exactly T* = ⌈2C / log(C/R)⌉ steps

Run AMP decoder for T* steps to get β^1, …, β^{T*} → β
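The closed form is immediate to iterate; a sketch (mine). The base of the logarithm cancels as long as C, R and log(C/R) are taken in the same base, so base 2 is used throughout.

```python
import math

def asymptotic_se(snr, R_frac=0.7):
    """Iterate the closed-form state evolution for P_l ∝ 2^(-2Cl/L)."""
    C = 0.5 * math.log2(1 + snr)
    R = R_frac * C
    step = math.log2(C / R) / (2 * C)               # increment of xi per step
    T_star = math.ceil(2 * C / math.log2(C / R))    # steps until x_bar = 1
    xi_prev = 0.0                                   # xi_{-1} = 0
    for t in range(T_star + 1):
        x_bar = ((1 + snr) - (1 + snr) ** (1 - xi_prev)) / snr   # x_bar_t
        tau2_bar = (1 + snr) ** (1 - xi_prev)       # tau_bar_t^2 / sigma^2
        print(t, round(x_bar, 4), round(tau2_bar, 4))
        xi_prev = min(step + xi_prev, 1.0)          # xi_t

asymptotic_se(snr=15)   # x_bar reaches 1 (tau2 -> sigma^2) at t = T* = 8
```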


Page 28:

Main Result

The section error rate of a decoder for SPARC S is

E_sec(S) := (1/L) Σ_{ℓ=1}^L 1{β̂_ℓ ≠ β_ℓ}.

Theorem

Fix any rate R < C, and b > 0. Consider a sequence of rate-R SPARCs {S_n} indexed by block length n, with design matrix parameters L and M = L^b, and power allocation P_ℓ ∝ 2^{−2Cℓ/L}.

Then the section error rate of the AMP decoder converges to zero almost surely, i.e., for any ε > 0,

lim_{n_0→∞} P(E_sec(S_n) < ε, ∀ n ≥ n_0) = 1.

Proof: Show that asymptotically

s^t = β^t + A^*z^t ∼ β + τ_t Z, with τ_t given by state evolution,

‖β − β^t‖²/n → P(1 − x̄_t) a.s., for t ≤ T*
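For completeness, the section error rate of a decoded run is a one-line comparison (a sketch; names follow the earlier snippets).

```python
import numpy as np

def section_error_rate(beta_hat, msg, M, L):
    """E_sec: fraction of sections whose decoded column differs from the sent one."""
    decoded = beta_hat.reshape(L, M).argmax(axis=1)
    return float(np.mean(decoded != msg))
```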


Page 30:

Empirical Performance

SPARC: M = 512, L = 1024, snr = 15, R = 0.7C

[Figure: empirical performance of the AMP decoder at finite block lengths]

Power allocation plays a key role at finite block lengths!

Page 31:

Summary

AMP provably achieves R < C, complexity linear in matrix size

[Barbier-Krzakala '14]: AMP decoder with different update rules

Future directions:

Rules of thumb for power allocation

Use Hadamard instead of Gaussian design matrix:

AMP complexity reduced to ∼ n^{1+ε}; much less memory

Scaling laws a la polar codes, spatially coupled codes

AMP encoder for SPARC lossy compression

Implement binning, superposition by nesting SPARC channel & source codes: fast codes for Wyner-Ziv, Gelfand-Pinsker …

http://arxiv.org/abs/1501.05892
