Happy ABC: Expectation-Propagation for Summary-Less, Likelihood-Free Inference

Transcript of slides: conferences.inf.ed.ac.uk/bayes250/slides/chopin.pdf

Page 1

Happy ABC: Expectation-Propagation for

Summary-Less, Likelihood-Free Inference

Nicolas Chopin

CREST (ENSAE)

joint work with Simon Barthelmé (TU Berlin)


Page 2

Basic ABC

Data: y*, prior p(θ), model p(y|θ). The likelihood p(y|θ) is intractable.

1. Sample θ ∼ p(θ).

2. Sample y ∼ p(y|θ).

3. Accept θ iff ‖s(y) − s(y*)‖ ≤ ε.
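As a point of reference, here is a minimal Python sketch of this rejection sampler; prior_sampler, model_sampler and summary are placeholder callables for the model at hand, not part of the slides.

```python
import numpy as np

def rejection_abc(y_star, prior_sampler, model_sampler, summary, eps, n_draws):
    """Basic rejection ABC: keep every theta whose simulated summary
    lands within eps of the observed summary."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()            # 1. theta ~ p(theta)
        y = model_sampler(theta)           # 2. y ~ p(y | theta)
        if np.linalg.norm(summary(y) - summary(y_star)) <= eps:
            accepted.append(theta)         # 3. accept iff close enough
    return np.array(accepted)
```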


Page 3

ABC target

The previous algorithm targets:

$$ p_\varepsilon(\theta \mid y^\star) \propto p(\theta) \int p(y \mid \theta)\, \mathbf{1}\{\|s(y) - s(y^\star)\| \leq \varepsilon\}\, \mathrm{d}y $$

which approximates the true posterior p(θ|y*). Two levels of approximation:

1. Non-parametric error, governed by the "bandwidth" ε: p_ε(θ|y*) → p(θ|s(y*)) as ε → 0.

2. Bias introduced by the summary statistic s, since p(θ|s(y*)) ≠ p(θ|y*).

Note that p(θ|s(y*)) ≈ p(θ|y*) may be a reasonable approximation, but p(y*) and p(s(y*)) have no clear relation: hence standard ABC cannot reliably approximate the evidence.


Page 4

EP-ABC target

Assume that the data y decomposes into (y_1, ..., y_n), and consider the ABC approximation:

$$ p_\varepsilon(\theta \mid y^\star) \propto p(\theta) \prod_{i=1}^{n} \left\{ \int p(y_i \mid y^\star_{1:i-1}, \theta)\, \mathbf{1}\{\|y_i - y^\star_i\| \leq \varepsilon\}\, \mathrm{d}y_i \right\} \qquad (1) $$

Standard ABC cannot target this approximate posterior, because the probability that ‖y_i − y*_i‖ ≤ ε for all i simultaneously is exponentially small in n. But (1) does not depend on any summary statistic s, and p_ε(θ|y*) → p(θ|y*) as ε → 0: a single level of approximation. The EP-ABC algorithm computes a Gaussian approximation of (1).


Page 5

Noisy ABC interpretation

Note that the EP-ABC target of the previous slide can be interpreted as the correct posterior distribution of a model where the datapoints are corrupted with U[−ε, ε] noise, following Wilkinson (2008).


Page 6

EP: an introduction

Introduced in Machine Learning by Minka (2001). Consider a generic posterior:

$$ \pi(\theta) = p(\theta \mid y) \propto p(\theta) \prod_{i=1}^{n} l_i(\theta) \qquad (2) $$

where the l_i are n contributions to the likelihood. The aim is to approximate π with

$$ q(\theta) \propto \prod_{i=0}^{n} f_i(\theta) \qquad (3) $$

where the f_i's are the "sites". To obtain a Gaussian approximation, take f_i(θ) ∝ exp(−½ θᵗ Q_i θ + r_iᵗ θ), so that:

$$ q(\theta) \propto \exp\left\{ -\frac{1}{2}\theta^t \left( \sum_{i=0}^{n} Q_i \right) \theta + \left( \sum_{i=0}^{n} r_i \right)^t \theta \right\} \qquad (4) $$

where the Q_i and r_i are the site parameters.
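In this natural-parameter representation, combining the sites and recovering the moments of q is just addition and one matrix inverse; a small sketch of our own, not from the slides:

```python
import numpy as np

def moments_of_q(Q_sites, r_sites):
    """Recover (mu, Sigma) of the Gaussian q from the site parameters:
    Q = sum_i Q_i, r = sum_i r_i, then Sigma = Q^{-1}, mu = Q^{-1} r."""
    Q = sum(Q_sites)
    r = sum(r_sites)
    Sigma = np.linalg.inv(Q)
    mu = Sigma @ r
    return mu, Sigma
```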

Page 7

Site update

We wish to minimise KL(π‖q). To that aim, we update each site (Q_i, r_i) in turn, as follows. Consider the hybrid:

$$ h_i(\theta) \propto q_{-i}(\theta)\, l_i(\theta), \qquad q_{-i}(\theta) = \prod_{j \neq i} f_j(\theta) $$

and adjust (Q_i, r_i) so that KL(h_i‖q) is minimal. One may easily prove that this can be done by moment matching, i.e. compute:

$$ \mu_h = \mathbb{E}_{h_i}[\theta], \qquad \Sigma_h = \mathbb{E}_{h_i}[\theta\theta^t] - \mu_h \mu_h^t $$

set Q_h = Σ_h⁻¹, r_h = Σ_h⁻¹ μ_h, then adjust (Q_i, r_i) so that (Q_h, r_h) and (Q, r) = (∑_{i=0}^n Q_i, ∑_{i=0}^n r_i) (the natural parameters of q) match:

$$ Q_i \leftarrow \Sigma_h^{-1} - Q_{-i}, \qquad r_i \leftarrow \Sigma_h^{-1}\mu_h - r_{-i}, $$

where Q_{−i} = ∑_{j≠i} Q_j and r_{−i} = ∑_{j≠i} r_j.
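In code, the site update amounts to the following sketch (our illustration; hybrid_moments stands in for the problem-specific computation of μ_h and Σ_h):

```python
import numpy as np

def ep_site_update(Q_sites, r_sites, i, hybrid_moments):
    """One EP site update in natural parameters.
    hybrid_moments(Q_cav, r_cav) must return (mu_h, Sigma_h), the first
    two moments of the hybrid h_i = q_{-i} * l_i."""
    # Cavity q_{-i}: all sites except i.
    Q_cav = sum(Q_sites) - Q_sites[i]
    r_cav = sum(r_sites) - r_sites[i]
    mu_h, Sigma_h = hybrid_moments(Q_cav, r_cav)
    # Moment matching: make the natural parameters of q match the hybrid.
    Q_h = np.linalg.inv(Sigma_h)
    Q_sites[i] = Q_h - Q_cav
    r_sites[i] = Q_h @ mu_h - r_cav
```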


Page 8

EP quick summary

• Convergence is usually obtained after a few complete cycles over all the sites.

• Output is a Gaussian distribution which is "closest" to the target π, in the KL sense.

• We use the Gaussian family for q, but one may take another exponential family.

• Feasibility of EP is determined by how easy it is to compute the first and second moments of the hybrid distribution (i.e. a Gaussian density q_{−i} times a single likelihood contribution l_i).


Page 9

EP-ABC

Going back to the EP-ABC target:

$$ p_\varepsilon(\theta \mid y^\star) \propto p(\theta) \prod_{i=1}^{n} \left\{ \int p(y_i \mid y^\star_{1:i-1}, \theta)\, \mathbf{1}\{\|y_i - y^\star_i\| \leq \varepsilon\}\, \mathrm{d}y_i \right\} \qquad (5) $$

we take

$$ l_i(\theta) = \int p(y_i \mid y^\star_{1:i-1}, \theta)\, \mathbf{1}\{\|y_i - y^\star_i\| \leq \varepsilon\}\, \mathrm{d}y_i. $$

In that case, the hybrid distribution is a Gaussian times l_i. The moments are not available in closed form (obviously), but they are easily obtained using some form of ABC for a single observation.


Page 10

EP-ABC site update

Inputs: ε, y*, i, and the moment parameters μ_{−i}, Σ_{−i} of the Gaussian pseudo-prior q_{−i}.

1. Draw M variates θ^[m] from the N(μ_{−i}, Σ_{−i}) distribution.

2. For each θ^[m], draw y_i^[m] ∼ p(y_i | y*_{1:i−1}, θ^[m]).

3. Compute the empirical moments

$$ M_{\mathrm{acc}} = \sum_{m=1}^{M} \mathbf{1}\{\|y_i^{[m]} - y_i^\star\| \leq \varepsilon\}, \qquad \mu_h = \frac{1}{M_{\mathrm{acc}}} \sum_{m=1}^{M} \theta^{[m]}\, \mathbf{1}\{\|y_i^{[m]} - y_i^\star\| \leq \varepsilon\} \qquad (6) $$

$$ \Sigma_h = \frac{1}{M_{\mathrm{acc}}} \sum_{m=1}^{M} \theta^{[m]} (\theta^{[m]})^t\, \mathbf{1}\{\|y_i^{[m]} - y_i^\star\| \leq \varepsilon\} - \mu_h \mu_h^t. \qquad (7) $$

Return Z_h = M_acc/M, μ_h and Σ_h.
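A minimal sketch of this simulation step (our illustration; sample_yi stands in for a simulator of p(y_i | y*_{1:i−1}, θ) and is assumed to return a length-d array):

```python
import numpy as np

def ep_abc_site_moments(mu_cav, Sigma_cav, y_i_star, sample_yi, eps, M,
                        rng=np.random.default_rng()):
    """Monte Carlo estimates of (Z_h, mu_h, Sigma_h) for one EP-ABC site."""
    # 1. Draw M parameters from the Gaussian pseudo-prior q_{-i}.
    thetas = rng.multivariate_normal(mu_cav, Sigma_cav, size=M)
    # 2. Simulate one pseudo-observation per parameter draw.
    ys = np.array([sample_yi(theta) for theta in thetas])
    # 3. Empirical moments over the accepted draws.
    accept = np.linalg.norm(ys - y_i_star, axis=-1) <= eps
    m_acc = int(accept.sum())
    acc = thetas[accept]
    mu_h = acc.mean(axis=0)
    Sigma_h = np.cov(acc, rowvar=False, bias=True)  # divides by m_acc, as in (7)
    return m_acc / M, mu_h, Sigma_h
```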


Page 11

Numerical stability

We are turning a deterministic, fixed-point algorithm into a stochastic algorithm, hence numerical stability may be an issue. Solutions:

• We dynamically adjust M, the number of simulated points at a given site, so that the number of accepted points exceeds some threshold.

• We use Quasi-Monte Carlo in the θ dimension.

• Slow EP updates may also be used.


Page 12

Acceleration in the IID case

In the IID case, p(y_i | y_{1:i−1}, θ) = p(y_i | θ), and the simulation step y_i^[m] ∼ p(y_i | θ^[m]) is the same for all sites, so it is possible to recycle simulations across sites, using importance sampling.
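One way to implement this recycling (a sketch under our own assumptions, not the authors' code): reweight draws made under one Gaussian pseudo-prior by a ratio of Gaussian densities before computing the moments for another site.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def recycled_moments(thetas, accept, mu_old, Sig_old, mu_new, Sig_new):
    """Reweight draws theta[m] ~ N(mu_old, Sig_old), with their ABC
    acceptance flags, as if drawn from the pseudo-prior N(mu_new, Sig_new)."""
    w = mvn.pdf(thetas, mu_new, Sig_new) / mvn.pdf(thetas, mu_old, Sig_old)
    w = w * accept                 # rejected draws get zero weight
    z_h = w.mean()                 # IS estimate of the acceptance probability
    mu_h = (w[:, None] * thetas).sum(axis=0) / w.sum()
    d = thetas - mu_h
    Sig_h = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0) / w.sum()
    return z_h, mu_h, Sig_h
```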


Page 13

First example: alpha-stable distributions

An IID univariate model taken from Peters et al. (2010). The observations are alpha-stable, with common distribution defined through the characteristic function

$$ \Phi_X(t) = \begin{cases} \exp\left\{ i\delta t - \gamma^\alpha |t|^\alpha \left[ 1 + i\beta \tan\tfrac{\pi\alpha}{2}\, \mathrm{sgn}(t) \left( |\gamma t|^{1-\alpha} - 1 \right) \right] \right\} & \alpha \neq 1 \\[4pt] \exp\left\{ i\delta t - \gamma |t| \left[ 1 + i\beta \tfrac{2}{\pi}\, \mathrm{sgn}(t) \log |\gamma t| \right] \right\} & \alpha = 1 \end{cases} $$

The density is not available in closed form.

Data: n = 1200 AUD/GBP log-returns computed from daily exchange rates.
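Since EP-ABC only needs forward simulation, one option (our suggestion, not from the slides) is SciPy's stable sampler; note that its parameterization may differ from the characteristic function written above.

```python
from scipy.stats import levy_stable

def sample_yi(theta):
    """Draw one alpha-stable variate given theta = (alpha, beta, gamma, delta).
    Caveat: scipy's levy_stable uses Nolan's parameterization, which is not
    identical to the Zolotarev-type form in the formula above."""
    alpha, beta, gamma, delta = theta
    return levy_stable.rvs(alpha, beta, loc=delta, scale=gamma)
```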


Page 14

Results from alpha-stable example

[Figure: marginal posterior densities of α, β, γ and δ.]

Marginal posterior distributions of α, β, γ and δ for the alpha-stable model: MCMC output from the exact algorithm (histograms, 60 h), approximate posteriors provided by EP-ABC (solid line, 40 min), and kernel density estimates computed from an MCMC-ABC sample based on the summary statistics proposed by Peters et al. (dashed line, 50 times more simulations).

Page 15

Second example: Lotka-Volterra processes

The stochastic Lotka-Volterra process describes the evolution of two species, Y1 (prey) and Y2 (predator):

$$ Y_1 \xrightarrow{r_1} 2Y_1, \qquad Y_1 + Y_2 \xrightarrow{r_2} 2Y_2, \qquad Y_2 \xrightarrow{r_3} \varnothing $$

We take θ = (log r_1, log r_2, log r_3), and we observe the process at discrete times. The model is Markov: p(y*_i | y*_{1:i−1}, θ) = p(y*_i | y*_{i−1}, θ).
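EP-ABC then only needs to simulate the process forward over one inter-observation interval, which the Gillespie algorithm does exactly; a sketch (our illustration, not from the slides):

```python
import numpy as np

def gillespie_lv(y1, y2, r1, r2, r3, t_end, rng=np.random.default_rng()):
    """Exact simulation of the stochastic Lotka-Volterra process up to
    time t_end, starting from counts (y1, y2)."""
    t = 0.0
    while True:
        rates = np.array([r1 * y1, r2 * y1 * y2, r3 * y2])
        total = rates.sum()
        if total == 0:                      # both species extinct
            return y1, y2
        t += rng.exponential(1.0 / total)   # time to the next reaction
        if t > t_end:
            return y1, y2
        event = rng.choice(3, p=rates / total)
        if event == 0:      # prey birth: Y1 -> 2 Y1
            y1 += 1
        elif event == 1:    # predation: Y1 + Y2 -> 2 Y2
            y1 -= 1
            y2 += 1
        else:               # predator death: Y2 -> empty
            y2 -= 1
```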


Page 16

Simulated data

[Figure: simulated data: prey and predator population sizes plotted against time.]

Page 17

Results

[Figure: posterior approximations for r_1, r_2 and r_3, at ε = 3 and ε = 1.]

PMCMC approximations of the ABC target (histograms) for ε = 3 (top); EP-ABC approximations for ε = 3 (top) and ε = 1 (bottom).


Page 18

Third example: reaction times

Subject must choose between k alternatives. Evidence e_j(t) in favour of choice j follows a Brownian motion with drift:

$$ \tau\, \mathrm{d}e_j(t) = m_j\, \mathrm{d}t + \mathrm{d}W^j_t. $$

A decision is taken when one evidence process "wins the race"; see the figure below.

[Figure: evidence for "Signal Absent" and "Signal Present" racing towards their respective thresholds; x-axis: time (ms).]
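For completeness, a crude Euler-type simulator of this race model (our illustration; the single common threshold and the parameter names are assumptions, not from the slides):

```python
import numpy as np

def simulate_race(drifts, tau, threshold, dt=1.0, t_max=2000.0,
                  rng=np.random.default_rng()):
    """Simulate k racing diffusions tau de_j = m_j dt + dW_t^j and
    return (winning alternative, reaction time); (None, t_max) if no
    process reaches the threshold in time."""
    e = np.zeros(len(drifts))
    t = 0.0
    while t < t_max:
        t += dt
        # Euler step: de_j = (m_j dt + sqrt(dt) N(0,1)) / tau
        e += (np.asarray(drifts) * dt
              + np.sqrt(dt) * rng.standard_normal(e.shape)) / tau
        if e.max() >= threshold:
            return int(e.argmax()), t      # first process to cross wins
    return None, t_max
```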


Page 19

Data

1860 observations from a single human subject, who must choose between "signal absent" and "signal present".

[Figure: reaction time (ms) against relative target contrast at positions A, B and C, separately for "signal absent" and "signal present" responses.]


Page 20

Results

[Figure: individual datapoints (real data) against the posterior predictive density, with the predictive mean and 10%/90% quantiles.]


Page 21

Conclusion

• EP-ABC features two levels of approximation: EP, and ABC (ε, no summary statistics).

• Standard ABC also has two levels of approximation: ABC (ε), plus summary statistics.

• EP-ABC is fast (minutes), because it integrates one datapoint at a time (not all of them together).

• EP-ABC also approximates the evidence.

• The current scope of EP-ABC is restricted to models such that one may sample from p(y_i | y*_{1:i−1}, θ).

• Convergence of EP-ABC is an open problem.


Page 22

Quote, references

"It seems quite absurd to reject an EP-based approach, if the only alternative is an ABC approach based on summary statistics, which introduces a bias which seems both larger (according to our numerical examples) and more arbitrary, in the sense that in real-world applications one has little intuition and even less mathematical guidance as to why p(θ|s(y)) should be close to p(θ|y) for a given set of summary statistics."

• Barthelmé, S. and Chopin, N. (2011). ABC-EP: Expectation Propagation for Likelihood-free Bayesian Computation. ICML 2011 (Proceedings of the 28th International Conference on Machine Learning), L. Getoor and T. Scheffer (eds), 289-296.

• Barthelmé, S. and Chopin, N. (2011). Expectation-Propagation for Summary-Less, Likelihood-Free Inference. arXiv:1107.5959.
