Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013


Description

This discussion was given by Stefano Cabras after my talk, at the Padova workshop on recent advances in statistical inference.

Transcript of Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013

DISCUSSION of

Bayesian Computation via empirical likelihood

Stefano Cabras, [email protected]

Universidad Carlos III de Madrid (Spain)
Università di Cagliari (Italy)

Padova, 21-Mar-2013

Summary

◮ Problem:
  ◮ a statistical model f(y | θ);
  ◮ a prior π(θ) on θ;
◮ we want to obtain the posterior

    πN(θ | y) ∝ LN(θ) π(θ).

◮ BUT
  ◮ IF LN(θ) is not available:
    ◮ THEN all life ABC;
  ◮ IF it is not even possible to simulate from f(y | θ):
    ◮ THEN replace LN(θ) with LEL(θ) (the proposed BCel procedure):

      π(θ | y) ∝ LEL(θ) × π(θ).
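To make the BCel step above concrete, here is a minimal Python sketch of the prior-sampling version of the idea for a toy location problem. The mean constraint h(y, θ) = y − θ, the N(0, 10²) prior, the simulated "observed" data and all function names are illustrative assumptions of mine, not the procedure's actual code.

```python
import numpy as np
from scipy.optimize import brentq

def el_logratio(y, mu):
    """Log empirical-likelihood ratio for the mean constraint E[Y - mu] = 0.

    Maximises sum(log(n * w_i)) over weights w_i >= 0 with sum(w_i) = 1 and
    sum(w_i * (y_i - mu)) = 0; returns -inf when mu lies outside the range of
    the data, where the constraint is infeasible.
    """
    z = y - mu
    if z.min() >= 0 or z.max() <= 0:
        return -np.inf
    eps = 1e-10
    lo, hi = -1.0 / z.max() + eps, -1.0 / z.min() - eps   # keep 1 + lam*z_i > 0
    score = lambda lam: np.sum(z / (1.0 + lam * z))       # strictly decreasing in lam
    lam = brentq(score, lo, hi)                           # unique root
    return -np.sum(np.log1p(lam * z))

rng = np.random.default_rng(1)
y_obs = rng.normal(loc=2.0, scale=1.0, size=100)          # stand-in for the observed sample

# BCel by importance sampling from the prior: weight each prior draw by its
# empirical likelihood and use the weighted draws as the pseudo-posterior.
M = 5000
theta = rng.normal(0.0, 10.0, size=M)                     # theta_m ~ pi(theta)
logw = np.array([el_logratio(y_obs, t) for t in theta])
w = np.exp(logw - np.max(logw[np.isfinite(logw)]))        # infeasible draws get weight 0
w /= w.sum()

post_mean = np.sum(w * theta)
post_sd = np.sqrt(np.sum(w * (theta - post_mean) ** 2))
print(f"pseudo-posterior mean ~ {post_mean:.2f}, sd ~ {post_sd:.2f}")
```

Any adaptive importance sampling or MCMC scheme could replace this plain importance sampler; the point is only that, once h is chosen, no simulation from f(y | θ) is required.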

... what remains of f(y | θ)?

◮ Recall that the Empirical Likelihood is defined, for an iid sample, by means of a set of constraints:

    Ef(y|θ)[h(Y, θ)] = 0.

◮ The relation between θ and the observations Y is model-conditioned and expressed by h(Y, θ);
◮ Constraints are model driven, so there is still a timid trace of f(y | θ) in BCel.
◮ Examples:
  ◮ The coalescent model example is illuminating in suggesting the score of the pairwise likelihood;
  ◮ The residuals in GARCH models.
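To fix ideas on what such constraint functions look like, here are two toy stand-ins written in Python (a location model and a location-scale model); they are illustrative choices of mine, not the coalescent or GARCH constraints mentioned above.

```python
import numpy as np

def h_location(y, theta):
    """Location model: E[Y - theta] = 0, i.e. theta is the mean of Y."""
    return y - theta

def h_location_scale(y, theta):
    """Location-scale model, theta = (mu, sigma): under f(y | theta) the
    standardized residuals have zero mean and unit variance."""
    mu, sigma = theta
    r = (y - mu) / sigma
    return np.column_stack([r, r ** 2 - 1.0])   # one column per constraint
```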

... a suggestion

What if we do not even know h(·)?

... how to elicit h(·) automatically

◮ Set h(Y, θ) = Y − g(θ), where

    g(θ) = Ef(y|θ)(Y | θ)

  is the regression function of Y given θ;
◮ g(θ) should be replaced by an estimator ĝ(θ).

How to estimate g(θ)?

◮ Use a once-and-for-all pilot-run simulation study:¹
  1. Consider a grid (or regular lattice) of θ made of M points: θ1, . . . , θM;
  2. Simulate the corresponding y1, . . . , yM;
  3. Regress y1, . . . , yM on θ1, . . . , θM, obtaining ĝ(θ).

¹ ... similar to Fearnhead, P. and D. Prangle (JRSS-B, 2012) or Cabras, Castellanos, Ruli (ERCIM 2012, Oviedo).
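A short Python sketch of these three steps, with hypothetical names: `simulate` stands for a user-supplied sampler from f(y | θ) (summarised here by its sample mean), θ is taken scalar, and a low-degree polynomial fit stands in for whatever regression or smoother one prefers.

```python
import numpy as np

def pilot_run_ghat(simulate, theta_grid, degree=3, rng=None):
    """Once-and-for-all pilot run:
    1. take a grid theta_1, ..., theta_M;
    2. simulate the corresponding y_1, ..., y_M;
    3. regress the y_m on the theta_m to obtain g_hat(theta)."""
    rng = rng if rng is not None else np.random.default_rng()
    y_sim = np.array([np.mean(simulate(t, rng)) for t in theta_grid])
    coef = np.polyfit(theta_grid, y_sim, deg=degree)      # any regression would do
    return lambda theta: np.polyval(coef, theta)

# The automatically elicited constraint is then h(y, theta) = y - g_hat(theta).
```

For a vector θ the polynomial fit would simply be replaced by a multivariate regression; the pilot run is performed once, before touching the observed data.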

Page 26: Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013

... example: y ∼ N(|θ|, 1)For a pilot run of M = 1000 we have g(θ) = |θ|.

−10 −5 0 5 10

05

10

Pilot−Run s.s.

θ

y

g(θ)

Page 27: Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013

... example: y ∼ N(|θ|, 1)Suppose to draw a n = 100 sample from θ = 2:

Histogram of y

y

Fre

quen

cy

0 1 2 3 4

05

1015

20

Page 28: Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013

... example: y ∼ N(|θ|, 1)The Empirical Likelihood is this

−4 −2 0 2 4

1.0

1.5

2.0

2.5

θ

Em

p. L

ik.
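The following sketch mimics this example end to end: it takes the pilot-run estimate ĝ(θ) = |θ| as given, draws an n = 100 sample at θ = 2, and profiles the empirical likelihood over a grid of θ. Because ĝ depends on θ only through |θ|, the profile is symmetric around 0, which is precisely the issue raised in the next point. The code is my own illustration, not the talk's.

```python
import numpy as np
from scipy.optimize import brentq

def el_logratio(y, mu):
    """Log EL ratio for the mean constraint E[Y] = mu (same solver as in the earlier sketch)."""
    z = y - mu
    if z.min() >= 0 or z.max() <= 0:
        return -np.inf                                    # mu outside the data range
    eps = 1e-10
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)),
                 -1.0 / z.max() + eps, -1.0 / z.min() - eps)
    return -np.sum(np.log1p(lam * z))

g_hat = np.abs                                            # pilot run with M = 1000 gives g_hat(theta) = |theta|
rng = np.random.default_rng(2)
y = rng.normal(loc=g_hat(2.0), scale=1.0, size=100)       # n = 100 sample from theta = 2

theta_grid = np.linspace(-4, 4, 161)
el_profile = np.array([el_logratio(y, g_hat(t)) for t in theta_grid])
# el_profile is symmetric in theta: theta = 2 and theta = -2 receive the same
# empirical likelihood, since the constraint only involves |theta|.
```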

1st Point: Do we necessarily have to use f(y | θ)?

◮ The above data may have been drawn from (e.g.) a Half-Normal;
◮ How is this reflected in BCel?
  ◮ For a given dataset y,
  ◮ and h(Y, θ) fixed,
  ◮ LEL(θ) is the same regardless of f(y | θ).

Can we ignore f(y | θ)?

2nd Point: Sample free vs Simulation free

◮ The Empirical Likelihood is "simulation free" but not "sample free", i.e.
  ◮ LEL(θ) → LN(θ) as n → ∞,
  ◮ implying π(θ | y) → πN(θ | y) asymptotically in n.
◮ ABC is "sample free" but not "simulation free", i.e.
  ◮ π(θ | ρ(s(y), sobs) < ε) → πN(θ | y) as ε → 0,
  ◮ implying convergence in the number of simulations if s(y) were sufficient.

A quick answer recommends using BCel,
BUT would a small sample rather recommend ABC?

3rd Point: How to validate a pseudo-posterior π(θ | y) ∝ LEL(θ) × π(θ)?

◮ The use of pseudo-likelihoods is not new in the Bayesian setting:
  ◮ Empirical Likelihoods:
    ◮ Lazar (Biometrika, 2003): examples and coverages of C.I.;
    ◮ Mengersen et al. (PNAS, 2012): examples and coverages of C.I.;
    ◮ ...
  ◮ Modified Likelihoods:
    ◮ Ventura et al. (JASA, 2009): second-order matching properties;
    ◮ Chang and Mukerjee (Stat. & Prob. Letters, 2006): examples;
    ◮ ...
  ◮ Quasi-Likelihoods:
    ◮ Lin (Statist. Methodol., 2006): examples;
    ◮ Greco et al. (JSPI, 2008): robustness properties;
    ◮ Ventura et al. (JSPI, 2010): examples and coverages of C.I.;
    ◮ ...

3rd Point: How to validate a pseudo-posterior π(θ | y) ∝ LEL(θ) × π(θ)?

◮ Monahan & Boos (Biometrika, 1992) proposed a notion of validity:

  π(θ | y) should obey the laws of probability in a fashion that is consistent with statements derived from Bayes' rule.

◮ Very difficult!

How do we validate the pseudo-posterior π(θ | y) when this is not possible?
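One operational reading of the Monahan & Boos criterion is a coverage check under joint simulation: draw θ from the prior, y from the model, evaluate H = Π(θ | y), the (pseudo-)posterior CDF at the true θ, and test whether H is Uniform(0, 1). The Python sketch below only illustrates the mechanics on a conjugate normal–normal toy, where the exact posterior stands in for the pseudo-posterior CDF; for BCel one would plug in the weighted-sample CDF instead. The toy model and all names are my own assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, sigma, tau = 20, 1.0, 2.0                 # data sd, and N(0, tau^2) prior on theta
H = []
for _ in range(2000):
    theta = rng.normal(0.0, tau)             # theta ~ pi(theta)
    y = rng.normal(theta, sigma, size=n)     # y ~ f(y | theta)
    # exact conjugate posterior, standing in for the pseudo-posterior
    post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)
    post_mean = post_var * y.sum() / sigma**2
    H.append(stats.norm.cdf(theta, loc=post_mean, scale=np.sqrt(post_var)))

# Validity in this coverage sense <=> H is Uniform(0, 1).
print(stats.kstest(H, "uniform"))
```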

... Last point: ABC is still a terrific tool

◮ ... a lot of references:
  ◮ statistical journals;
  ◮ Twitter;
  ◮ Xi'an's blog (xianblog.wordpress.com)
◮ ... it is tailored to approximate LN(θ).

Where is the A in BCel?
