
Stochastic Integration via Error-Correcting Codes

Dimitris Achlioptas Pei Jiang

UC Santa Cruz

Simons Institute, May 2016


High-Dimensional Integration

The Setting

A (large) domain Ω = D_1 × ⋯ × D_n, where D_1, …, D_n are finite, and a non-negative function f : Ω → ℝ.

The Goal (“Stochastic Approximate Integration”)

Probabilistically, approximately estimate

Z = ∑_{σ∈Ω} f(σ).

Non-negativity of f ⟹ no cancellations.

Applications
– Probabilistic inference via graphical models (partition function)
– Automatic test-input generation in verification (model counting)
– Generic alternative to MCMC


Quality Guarantee

For any accuracy ε > 0, with effort proportional to s·n/ε²,

Pr_A[ 1 − ε < Ẑ/Z < 1 + ε ] = 1 − exp(−Θ(s)).


Rest of the Talk

– Ω = {0,1}^n, i.e., D_i = {0,1} for all i ∈ [n]
– We seek a 32-approximation; typically Z is of order exp(n)



General Idea

For i from 0 to n:
  Repeat Θ(ε⁻²) times:
    – Generate a random R_i ⊆ Ω of size ∼ 2^{n−i}, as an ECC
    – Find y_i = max_{σ∈R_i} f(σ)
Combine the y_i in a straightforward way to get Ẑ.
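In schematic form (an illustrative sketch only: the thinning sets R_i are sampled naively point by point here, rather than as an error-correcting code, the y_i are combined as the stratified lower sum that the following slides derive, and `t` plays the role of the Θ(ε⁻²) repetitions):

```python
import random
import statistics

def estimate_Z(f, n, t, seed=0):
    """Skeleton of the estimator: for each level i, take the median of t
    maxima of f over random thinning sets R_i with Pr[sigma in R_i] = 2^-i
    (sampled naively here, point by point, instead of via a code), then
    combine the medians as the stratified lower sum."""
    rng = random.Random(seed)
    m = []
    for i in range(n + 1):
        runs = []
        for _ in range(t):
            # brute-force sampling over the whole domain; feasible only for tiny n
            R = [s for s in range(2 ** n) if rng.random() < 2.0 ** -i]
            runs.append(max((f(s) for s in R), default=0.0))
        m.append(statistics.median(runs))
    # combine: m_0 + sum_i m_{i+1} * 2^i  (the lower sum of the stratification)
    return m[0] + sum(m[i + 1] * 2 ** i for i in range(n))
```

For the constant function f ≡ 1 on {0,1}^n the true value is Z = 2^n, and the lower-sum combination can never exceed it.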


[Ermon, Gomes, Sabharwal and Selman 10-15]

Estimation by Stratification

Thought Experiment

Sort Ω by decreasing f-value; w.l.o.g.

f(σ_1) ≥ f(σ_2) ≥ f(σ_3) ≥ ⋯ ≥ f(σ_{2^i}) ≥ ⋯ ≥ f(σ_{2^n}).

Imagine we could get our hands on the n + 1 numbers b_i = f(σ_{2^i}).


If we let

U := b_0 + ∑_{i=0}^{n−1} b_i·2^i    and    L := b_0 + ∑_{i=0}^{n−1} b_{i+1}·2^i,

then L ≤ Z ≤ U ≤ 2L.
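These bounds are easy to sanity-check numerically (an illustrative sketch; the b_i are read off a sorted list of 2^n random values, exactly as in the thought experiment):

```python
import random

random.seed(1)
n = 6
# an arbitrary non-negative f on a domain of size 2^n, sorted into decreasing order
vals = sorted((random.expovariate(1.0) for _ in range(2 ** n)), reverse=True)
Z = sum(vals)

b = [vals[2 ** i - 1] for i in range(n + 1)]      # b_i = f(sigma_{2^i})
U = b[0] + sum(b[i] * 2 ** i for i in range(n))   # upper stratified sum
L = b[0] + sum(b[i + 1] * 2 ** i for i in range(n))  # lower stratified sum
assert L <= Z <= U <= 2 * L
```

Each stratum (σ_{2^i}, σ_{2^{i+1}}] contains 2^i points whose f-values lie between b_{i+1} and b_i, which is exactly why L ≤ Z ≤ U; the factor-2 gap follows by comparing the two sums term by term.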


[Figure: f plotted against quantile (2, 4, 8, 16, 32, 64), with the corresponding upper and lower stratified sums shown as staircase bounds.]


Theorem (EGSS)

To get a 2^{2c+1}-approximation it suffices to find, for each 0 ≤ i ≤ n, a number b̂_i with

b_{i+c} ≤ b̂_i ≤ b_{i−c}.


Corollary (when c = 2)

To get a 32-approximation it suffices to find, for each 0 ≤ i ≤ n, a number b̂_i with

b_{i+2} ≤ b̂_i ≤ b_{i−2}.


Refinement by Repetition

Lemma (Concentration of Measure)

Let X be any r.v. such that

Pr[X ≤ Upper] ≥ 1/2 + δ    and    Pr[X ≥ Lower] ≥ 1/2 + δ.

If X_1, X_2, …, X_t are independent samples of X, then

Pr[Lower ≤ Median(X_1, X_2, …, X_t) ≤ Upper] ≥ 1 − 2·exp(−δ²t).
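The median amplification can be checked numerically. The sketch below computes, exactly via the binomial distribution, the probability that the median of t samples escapes above Upper when each sample does so with probability 1/2 − δ (the worst case allowed by the hypothesis), and compares it to the exp(−δ²t) decay:

```python
from math import comb, exp

def median_escape_prob(t, p):
    """Exact probability that the median of t independent samples exceeds Upper,
    when each sample individually exceeds Upper with probability p: for odd t,
    the median exceeds Upper iff more than t//2 samples do."""
    return sum(comb(t, k) * p ** k * (1 - p) ** (t - k)
               for k in range((t // 2) + 1, t + 1))

delta = 0.1
for t in (11, 51, 101):
    fail = median_escape_prob(t, 0.5 - delta)
    # one-sided version of the lemma's bound
    assert fail <= exp(-delta ** 2 * t)
```

By symmetry the same bound controls escapes below Lower, giving the factor 2 in the lemma.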


The Basic Plan

Thinning Sets

We will consider random sets R_i such that, for every σ ∈ Ω,

Pr[σ ∈ R_i] = 2^{−i}.

Our estimator for b_i = f(σ_{2^i}) will be

m_i = max_{σ∈R_i} f(σ).

Recall that f(σ_1) ≥ f(σ_2) ≥ f(σ_3) ≥ ⋯ ≥ f(σ_{2^i}) ≥ f(σ_{2^i+1}) ≥ ⋯ ≥ f(σ_{2^n}).


Lemma (Avoiding Overestimation is Easy)

Pr[m_i > b_{i−2}] ≤ Pr[R_i ∩ {σ_1, σ_2, σ_3, …, σ_{2^{i−2}}} ≠ ∅]
                 ≤ 2^{i−2} · 2^{−i}        (union bound)
                 = 1/4.


Getting Down to Business: Avoiding Underestimation

To avoid underestimation, i.e., to achieve m_i ≥ b_{i+2}, we need

X_i = |R_i ∩ {σ_1, σ_2, σ_3, …, σ_{2^{i+2}}}| > 0.

Observe that

E[X_i] = 2^{i+2} · 2^{−i} = 4.

So, we have two exponential-sized sets, {σ_1, σ_2, σ_3, …, σ_{2^{i+2}}} and R_i (of size ∼ 2^{n−i}), which must intersect with probability 1/2 + δ, while having expected intersection size only 4.


Pseudorandom Sets

It Boils Down to This

We need to design a random set R such that:
– Pr[σ ∈ R] = 2^{−i} for every σ ∈ {0,1}^n        (e.g., a random subcube of dimension n − i)
– Describing R can be done in poly(n) time        (ditto)
– For fixed S ⊆ {0,1}^n, the variance of X = |R ∩ S| is minimized

Minimizing the variance amounts to minimizing

∑_{σ ≠ σ′ ∈ S} Pr[σ′ ∈ R | σ ∈ R].

Since we know nothing about the geometry of S, a sensible goal is

Pr[σ′ ∈ R | σ ∈ R] = Pr[σ′ ∈ R]        (pairwise independence).

How can this be reconciled with R being “simple to describe”?
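For intuition (an illustrative check, not from the slides): a random subcube meets the first two requirements yet is far from pairwise independent — conditioning on σ ∈ R boosts the membership probability of a Hamming neighbor σ′ from 2^{−i} all the way to (n − i)/n. Exhaustive enumeration over all subcubes confirms this for small n:

```python
from itertools import combinations, product

n, i = 4, 2

def subcube_members(fixed_coords, fixed_bits, n):
    """All sigma in {0,1}^n agreeing with fixed_bits on fixed_coords."""
    return {s for s in product((0, 1), repeat=n)
            if all(s[c] == b for c, b in zip(fixed_coords, fixed_bits))}

# every random subcube of dimension n - i: choose i coordinates and their values
cubes = [subcube_members(T, bits, n)
         for T in combinations(range(n), i)
         for bits in product((0, 1), repeat=i)]

sigma = (0, 0, 0, 0)
neighbor = (1, 0, 0, 0)          # differs from sigma in exactly one coordinate

p_single = sum(sigma in R for R in cubes) / len(cubes)
p_joint = sum(sigma in R and neighbor in R for R in cubes) / len(cubes)

assert p_single == 2 ** -i                    # requirement 1 holds
assert p_joint / p_single == (n - i) / n      # conditional prob (n-i)/n, not 2^-i
assert p_joint != p_single ** 2               # so: not pairwise independent
```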


Error-Correcting Codes

Uncle Claude to the Rescue

Linear Error-Correcting Codes

Let

R = {σ ∈ {0,1}^n : Aσ = b},

where both A ∈ {0,1}^{i×n} and b ∈ {0,1}^i are uniformly random, and arithmetic is over GF(2).

The probability that both σ and σ′ (σ ≠ σ′) are codewords is

Pr[A(σ′ − σ) = 0 ∧ Aσ = b] = Pr[A(σ′ − σ) = 0] · Pr[Aσ = b],

since the first event depends only on A while, for every fixed A, the event Aσ = b has probability exactly 2^{−i} over the uniformly random b. As σ′ − σ ≠ 0, each factor equals 2^{−i}, so Pr[σ, σ′ ∈ R] = 2^{−2i}: memberships are pairwise independent.
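The factorization above, and the resulting pairwise independence, can be verified exhaustively for small parameters (an illustrative sketch enumerating every pair (A, b) over GF(2)):

```python
from itertools import product

n, i = 3, 2
sigma, sigma2 = (1, 0, 0), (0, 1, 1)   # two fixed distinct points

def in_code(A, b, s):
    """Check A s = b over GF(2), with A given as i rows of n bits."""
    return all(sum(a_j * s_j for a_j, s_j in zip(row, s)) % 2 == rhs
               for row, rhs in zip(A, b))

pairs = [(A, b)
         for A in product(product((0, 1), repeat=n), repeat=i)
         for b in product((0, 1), repeat=i)]

p1 = sum(in_code(A, b, sigma) for A, b in pairs) / len(pairs)
p2 = sum(in_code(A, b, sigma2) for A, b in pairs) / len(pairs)
p12 = sum(in_code(A, b, sigma) and in_code(A, b, sigma2)
          for A, b in pairs) / len(pairs)

assert p1 == p2 == 2 ** -i    # each point is a codeword with probability 2^-i
assert p12 == p1 * p2         # pairwise independence: 2^-2i
```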


Are We Done Yet?

Recapping
– Define R_i via i random parity constraints, with ∼ n/2 variables each
– Estimate b_i by maximizing f subject to the constraints

Experiment: n = 10 × 10 ferromagnetic Ising grid; coupling strengths and external fields near criticality.

[Figure: best solution value versus number of parity constraints i, comparing Affine Map, the line 100 − i, Dense Parity, and Sparse Parity.]


First Contribution: Random Affine Maps (Exploiting Linearity)

Let G ∈ {0,1}^{(n−i)×n} be the generator matrix of R, i.e.,

R = {σ ∈ {0,1}^n : σ = xG, x ∈ {0,1}^{n−i}}.

Instead of solving the constrained optimization problem

max_{σ ∈ {0,1}^n, Aσ = b} f(σ),

solve the unconstrained optimization problem

max_{x ∈ {0,1}^{n−i}} f(xG)

over the exponentially smaller set {0,1}^{n−i}.

[Figure: best solution value versus number of parity constraints i, comparing Affine Map, the line 100 − i, Dense Parity, and Sparse Parity.]
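As a sketch of how such a reparametrization can be computed (illustrative only, not the authors' code; `solve_gf2` and `maximize_over_code` are hypothetical helper names): Gaussian elimination over GF(2) turns Aσ = b into a particular solution σ_0 plus a basis of the nullspace of A, so the solution set is exactly {σ_0 ⊕ xG}, and the maximization ranges over 2^{n−i} points instead of 2^n.

```python
from itertools import product

def solve_gf2(A, b, n):
    """Gauss-eliminate A sigma = b over GF(2); rows of A are n-bit ints.
    Returns (sigma0, basis): a particular solution and a nullspace basis,
    or None if the system is inconsistent."""
    pivots = {}                            # pivot column -> (reduced row, rhs bit)
    for row, rhs in zip(A, b):
        for c, (pr, pb) in pivots.items():  # clear existing pivot columns
            if (row >> c) & 1:
                row ^= pr
                rhs ^= pb
        if row == 0:
            if rhs:
                return None                # 0 = 1: inconsistent
            continue
        pivots[row.bit_length() - 1] = (row, rhs)
    for c in list(pivots):                 # back-substitution
        pr, pb = pivots[c]
        for c2 in list(pivots):
            if c2 != c and (pivots[c2][0] >> c) & 1:
                pivots[c2] = (pivots[c2][0] ^ pr, pivots[c2][1] ^ pb)
    sigma0 = sum(1 << c for c, (_, pb) in pivots.items() if pb)
    basis = []
    for f_col in (c for c in range(n) if c not in pivots):
        v = 1 << f_col                     # free variable set to 1
        for c, (pr, _) in pivots.items():
            if (pr >> f_col) & 1:
                v |= 1 << c
        basis.append(v)
    return sigma0, basis

def maximize_over_code(f, A, b, n):
    """max of f over {sigma : A sigma = b}, via the affine parametrization:
    a search over 2^(n - rank A) points instead of 2^n."""
    sol = solve_gf2(A, b, n)
    if sol is None:
        return None                        # empty solution set
    sigma0, basis = sol
    best = f(sigma0)
    for x in product((0, 1), repeat=len(basis)):
        sigma = sigma0
        for x_j, g in zip(x, basis):
            if x_j:
                sigma ^= g
        best = max(best, f(sigma))
    return best
```

Here the brute-force inner loop stands in for whatever optimizer is actually used; the point is only that it runs over the reduced domain.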


Fact

Working with an explicit representation of f is often crucial for efficient maximization.


Second Contribution: Use Low-Density Parity-Check Codes

Extremely sparse equations, but with variable regularity.
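For illustration only (a naive sketch, not the LDPC ensemble used in the talk; `sparse_parity_checks` is a hypothetical helper): parity checks that each touch only a few variables, chosen greedily so that variable degrees stay near-regular.

```python
import random

def sparse_parity_checks(n, i, row_weight=3, seed=0):
    """Build i sparse parity constraints over n variables: each constraint
    uses `row_weight` distinct variables, always drawn from the variables
    of currently minimum degree, so variable degrees stay near-regular."""
    rng = random.Random(seed)
    degree = [0] * n
    rows = []
    for _ in range(i):
        # least-used variables first, ties broken at random
        order = sorted(range(n), key=lambda v: (degree[v], rng.random()))
        row = sorted(order[:row_weight])
        for v in row:
            degree[v] += 1
        rows.append(row)
    return rows, degree
```

Contrast this with the “long XORs” above, where each constraint contains ∼ n/2 variables.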


– Scales to problems with several thousand variables
– Running time when proving satisfiability is comparable to that on the original instance
– In all problems where the ground truth is known: equally accurate as long XORs, and 2–1000× faster


[Figure: per-formula scatter of log2 |log2(Z_LDPC / Z_LXOR)| against log2(time with LXOR / time with LDPC); each point represents one CNF formula.]


Thanks!
