
Distributed Parametric-Nonparametric

Estimation in Networked Control Systems

Damiano Varagnolo

Department of Information Engineering - University of Padova

April, 18th 2011

entirely written in LaTeX 2ε using Beamer and TikZ


introduction

Research topics

• multi-agent systems
• smart grids
• parametric regression
• nonparametric regression
• estimation of # of agents
• application of consensus
• distributed optimization

This talk focuses on nonparametric regression and on the estimation of the # of agents.


introduction

Multi-agent systems: examples of applications

• multi-robot coordination
• underwater exploration
• smart grids deployment
• wind power management
• wildlife monitoring
• smart highways deployment
• smart surveillance implementation

introduction

First problem considered in this talk

Assumption: noisy measurements of

    f(x, t) : R³ × R → R

that are

• non-uniformly sampled in space x
• non-uniformly sampled in time t
• taken by different agents

Objective: smooth in space (x) and forecast in time (t) the quantity f(x, t)

introduction

Example 1 - channel gains in geographical areas

source: Dall'Anese et al., 2011

• x ∈ R² : position
• t : time
• f(x, t) : channel gain

introduction

Example 2 - wave power extraction

source: www.graysharboroceanenergy.com

• x ∈ R² : position
• t : time
• f(x, t) : sea level

introduction

Example 3 - multi-robot exploration

source: http://www-robotics.jpl.nasa.gov

• x ∈ R² : position
• f(x) : ground level

introduction

Difficulties related to this problem

Information-related difficulties
• non-uniform sampling both in time and in space
• unknown dynamics of f
• unknown or extremely complex correlations in time and space

Hardware-related difficulties
• energy, computational, memory and bandwidth limitations

Framework-related difficulties
• mobile and time-varying network

introduction

State of the art

Proposed distributed solutions:

• Least Squares: Choi et al. 2009, Predd et al. 2006, Boyd et al. 2005
• Maximum Likelihood: Schizas et al. 2008, Barbarossa and Scutari 2007, Boyd et al. 2010
• Kalman Filtering: Cressie and Wikle 2002, Olfati-Saber 2007, Carli et al. 2008
• Kriging: Dall'Anese et al. 2011, Cortés 2010
• Other Learning Techniques: Nguyen et al. 2005, Bazerque et al. 2010

These cover nonparametric approaches and both static and dynamic scenarios.


introduction

State of the art - Vision

Obtain

    f(x, t) = Ψ(past measurements)

with Ψ being

• distributed
• capable of both smoothing and prediction

Our approach is nonparametric: Ψ(·) lives in an infinite-dimensional space.

introduction

Why should we use a nonparametric approach?

Motivations

• it could be difficult or even impossible to define a parametric model (e.g. when only regularity assumptions are available)
• parametric models could involve a large number of parameters (and could require nonlinear optimization techniques)
• nonparametric estimators lead to convex optimization problems
• they are consistent, i.e. the estimate of f converges to f when # measurements → ∞ (De Nicolao, Ferrari-Trecate, 1999)

introduction

State of the art - where we actually contributed

Our small puzzle-piece:

• agents estimate the same f
• static scenario (f independent of t), e.g. exploration: no need to discard old measurements
• Regularization Network approach (Poggio and Girosi 1990)

introduction

Our goal

Obtain a simple, self-evaluating and auto-tuning multi-agent regression strategy.

framework & assumptions

Framework

Agents:

• noisily sample the same f
• have limited computational & communication capabilities
• take 1 measurement per agent (for ease of notation)
• take M measurements in total

framework & assumptions

Measurement model

    y_m = f(x_m) + ν_m    (1)

• f : X ⊂ R^d → R unknown (X compact)
• ν_m ⊥ x_m, zero mean, variance σ²
• x_m ~ μ i.i.d. (agents know μ!)

Examples of μ: uniform, jitter, generic.

distributed estimators

Considered cost function

    Q(f) = \sum_{m=1}^{M} (y_m - f(x_m))^2 + \gamma \|f\|_K^2

where f lives in an infinite-dimensional space, γ is the regularization factor, and K : X × X → R is a Mercer kernel.

Centralized optimal solution as a Regularization Network

    f_c = \sum_{m=1}^{M} c_m K(x_m, \cdot)

    \begin{bmatrix} c_1 \\ \vdots \\ c_M \end{bmatrix}
    = \left( \begin{bmatrix} K(x_1, x_1) & \cdots & K(x_1, x_M) \\ \vdots & \ddots & \vdots \\ K(x_M, x_1) & \cdots & K(x_M, x_M) \end{bmatrix} + \gamma I \right)^{-1}
    \begin{bmatrix} y_1 \\ \vdots \\ y_M \end{bmatrix}
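To make the centralized solution concrete, the following is a minimal numerical sketch of the Regularization Network above; the Gaussian kernel, the test function f, the noise level and γ are illustrative assumptions, not choices made in the talk.

```python
# Minimal sketch of the centralized Regularization Network f_c.
# Assumptions (not from the slides): Gaussian kernel, X = [0, 1],
# f(x) = sin(2*pi*x), gamma = 0.1.
import numpy as np

def K(x1, x2, width=0.1):
    """Gaussian (Mercer) kernel, an illustrative choice."""
    return np.exp(-((x1 - x2) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(0)
M, gamma = 100, 0.1
x = rng.uniform(0, 1, M)                                   # x_m ~ mu (uniform here)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(M)   # y_m = f(x_m) + nu_m

G = K(x[:, None], x[None, :])                  # M x M Gram matrix K(x_i, x_j)
c = np.linalg.solve(G + gamma * np.eye(M), y)  # the O(M^3) bottleneck

def f_c(x_new):
    """Evaluate f_c(x) = sum_m c_m K(x_m, x)."""
    return K(np.atleast_1d(x_new)[:, None], x[None, :]) @ c
```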


distributed estimators

Drawbacks

Recall

    f_c = \sum_{m=1}^{M} c_m K(x_m, \cdot), \qquad c = (\bar{K} + \gamma I)^{-1} y, \quad \bar{K}_{ij} := K(x_i, x_j)

• computational cost: O(M³) (inversion of an M × M matrix)
• transmission cost: O(M) (knowledge of the whole {x_m, y_m}_{m=1}^{M})

⇒ need to find alternative solutions

distributed estimators

Alternative centralized optimal solution (1st of 2)

The structure of K implies

    K(x_1, x_2) = \sum_{e=1}^{+\infty} \lambda_e \phi_e(x_1) \phi_e(x_2)    (λ_e = eigenvalue, φ_e = eigenfunction)

    f(x) = \sum_{e=1}^{+\infty} b_e \phi_e(x)

⇒ the measurement model can be rewritten as

    y_m = C_m b + ν_m,    C_m := [φ_1(x_m), φ_2(x_m), ...],    b := [b_1, b_2, ...]^T    (2)

distributed estimators

Alternative centralized optimal solution (2nd of 2)

    b_c = \left( \frac{\gamma}{M} \mathrm{diag}\!\left(\frac{1}{\lambda_e}\right) + \frac{1}{M} \sum_{m=1}^{M} C_m^T C_m \right)^{-1} \left( \frac{1}{M} \sum_{m=1}^{M} C_m^T y_m \right)    (3)

This involves infinite-dimensional objects (the inverse of an infinite matrix) ⇒ b_c cannot be computed exactly.

distributed estimators

Suboptimal finite-dimensional solution

New estimator

    b_r = \left( \frac{\gamma}{M} \mathrm{diag}\!\left(\frac{1}{\lambda_e}\right) + \frac{1}{M} \sum_{m=1}^{M} (C_m^E)^T C_m^E \right)^{-1} \left( \frac{1}{M} \sum_{m=1}^{M} (C_m^E)^T y_m \right)

• computable (involves E × E matrices and E-dimensional vectors)
• minimizes Q_E(b) := \sum_{m=1}^{M} (y_m - C_m^E b)^2 + \gamma \sum_{e=1}^{E} b_e^2 / \lambda_e

Drawbacks

1. O(E³) computational effort
2. O(E²) transmission effort
3. must know M
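As a concrete (and purely illustrative) instance of the estimator b_r above, the sketch below takes X = [0, 1], μ uniform, the cosine basis as the first E eigenfunctions (it is orthonormal under μ), and hypothetical eigenvalues λ_e = 1/e²; none of these choices come from the talk.

```python
# Sketch of the reduced estimator b_r. Assumptions (not from the slides):
# X = [0,1], mu uniform, eigenfunctions = cosine basis (orthonormal w.r.t. mu),
# hypothetical eigenvalues lambda_e = 1/e^2.
import numpy as np

def phi(x, E):
    """Row m: C^E_m = [phi_1(x_m), ..., phi_E(x_m)], with phi_1 = 1 and
    phi_e(x) = sqrt(2) cos((e-1) pi x) for e >= 2."""
    x = np.atleast_1d(x)
    out = np.sqrt(2.0) * np.cos(np.pi * np.arange(E) * x[:, None])
    out[:, 0] = 1.0
    return out

rng = np.random.default_rng(0)
M, E, gamma = 100, 20, 0.1
lam = 1.0 / np.arange(1, E + 1) ** 2                 # hypothetical lambda_e
x = rng.uniform(0, 1, M)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(M)

C = phi(x, E)                                        # stacked C^E_m, shape (M, E)
A_mat = (gamma / M) * np.diag(1.0 / lam) + (C.T @ C) / M  # E x E: O(E^3) to solve
b_r = np.linalg.solve(A_mat, (C.T @ y) / M)          # minimizer of Q_E

f_r = lambda x_new: phi(x_new, E) @ b_r              # f(x) = sum_e b_e phi_e(x)
```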


distributed estimators

Derivation of the distributed estimator

Start from

    b_r = \left( \frac{\gamma}{M} \mathrm{diag}\!\left(\frac{1}{\lambda_e}\right) + \frac{1}{M} \sum_{m=1}^{M} (C_m^E)^T C_m^E \right)^{-1} \left( \frac{1}{M} \sum_{m=1}^{M} (C_m^E)^T y_m \right)

and consider the approximations

    M → M_g (guess)

    \frac{1}{M} \sum_{m=1}^{M} (C_m^E)^T C_m^E \;\to\; \mathbb{E}_\mu\!\left[ (C_m^E)^T C_m^E \right] = I

distributed estimators

Derivation of the distributed estimator

Obtain:

    b_d = \left( \frac{\gamma}{M_g} \mathrm{diag}\!\left(\frac{1}{\lambda_e}\right) + I \right)^{-1} \left( \frac{1}{M} \sum_{m=1}^{M} (C_m^E)^T y_m \right)

where the average is computable with average consensus.

Advantages

1. O(E) computational effort
2. O(E) transmission effort
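A sketch of how b_d could be computed in a fully distributed fashion, reusing phi, lam, gamma and E from the previous sketch: each agent holds the local statistic (C^E_m)^T y_m, a standard average-consensus iteration (here over a ring with Metropolis weights, an illustrative topology) spreads the average, and every agent then inverts the diagonal E × E matrix locally using its guess M_g.

```python
# Sketch of the distributed estimator b_d (reuses phi, lam, gamma, E above).
# Assumptions (not from the slides): ring topology, Metropolis weights,
# a fixed number of consensus iterations.
import numpy as np

M, Mg = 100, 95                                  # true network size vs guess M_g
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, M)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(M)

S = phi(x, E) * y[:, None]                       # row m: local statistic (C^E_m)^T y_m
W = np.zeros((M, M))                             # doubly stochastic consensus matrix
for m in range(M):                               # ring: each agent has 2 neighbours
    W[m, m] = W[m, (m - 1) % M] = W[m, (m + 1) % M] = 1.0 / 3.0

for _ in range(5000):                            # enough iterations for a 100-node ring
    S = W @ S                                    # rows -> (1/M) sum_m (C^E_m)^T y_m

# each agent (here agent 0) finishes locally: O(E) work, O(E) transmissions
b_d = S[0] / (gamma / (Mg * lam) + 1.0)          # ((gamma/Mg) diag(1/lam) + I)^{-1} s
```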

distributed estimators

Summary of proposed estimation schemes

    estimator      hypothesis space    computation    transmission
    b_c            original            O(M³)          O(M)
    b_r(E)         reduced             O(E³)          O(E²)
    b_d(E, M_g)    reduced             O(E)           O(E)

The distances between these estimators are quantified next.


distributed estimators

Quantification of performances

Assumption: E, M_g already chosen, b_d already computed.

    \|b_c - b_d\|_2 \;\le\; \frac{1}{M} \sum_{m=1}^{M} |r_m| + \|U_M b_d\|_2 + \|U_C b_d\|_2

where

• the r_m are local residuals, computable through distributed Monte Carlo
• U_M ∝ \frac{1}{M_{\min}} - \frac{1}{M_{\max}}
• U_C ∝ I - \frac{1}{M} \sum_{m=1}^{M} (C_m^E)^T C_m^E


distributed estimators

Tuning of the parameters - key ideas

Assumption: some information on the energy of f is available.

Parameters to be estimated:

• number of eigenfunctions E: assure that ‖b_c − b_r(E)‖ is sufficiently small
• number of measurements M_g: minimize the bound on ‖b_c − b_d(E, M_g)‖


distributed estimators

Regression strategy effectiveness example (M = 100, E = 20, M_min = 90, M_max = 110, SNR ≈ 2.5)

[figure: noisy measurements and the true f plotted over x ∈ [0, 1]]

distributed estimators

Regression strategy effectiveness example (M = 100, E = 20, M_min = 90, M_max = 110, SNR ≈ 2.5)

[figure: the estimates b_c and b_d plotted over x ∈ [0, 1]]

distributed estimators

Accuracy of the computed bound (M = 100, E = 20, M_min = 90, M_max = 110)

[figure: scatter of the computed bound against the true distance ‖b_d − b_c‖₂]

distributed estimators

Comparison with oracle (M = 100, E = 20, M_min = 90, M_max = 110)

[figure: scatter of ‖b_d^ours − b_c‖₂ against ‖b_d^oracle − b_c‖₂]

conclusions

Conclusions and future works for this part

Conclusions. The strategy:

• is effective and easy to implement
• has self-evaluation capabilities
• has self-tuning capabilities

Future works

• exploit statistical knowledge about M
• incorporate the effects of a finite number of consensus steps
• extend to dynamic scenarios (long-term objective)

Part Two


Introduction

Privacy-aware number of agents estimation

Estimation of the number of agents (A) can be important in:

• distributed estimation
• analysis of connectivity

We assume privacy concerns → do not use IDs!

Our goal: obtain an easily implementable distributed estimator satisfying the constraints.

A simple example

The basic idea

Algorithm:

1. local generation: each agent a draws y_a ~ N(0, 1)
2. average consensus: every agent obtains y_ave = (1/A) ∑_{a=1}^{A} y_a ~ N(0, 1/A)
3. Maximum Likelihood: the estimate of A^{-1} is y_ave²

The scheme does not require sending IDs.


A simple example

Reformulating the idea as a block scheme

    local: N(0, 1) → y_1, y_2, ..., y_A
    distributed: average consensus
    local: ML → estimate of A
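A toy sketch of the block scheme above, with the consensus stage idealized as an exact average (the network size and the seed are of course illustrative):

```python
# Toy sketch of the basic idea: generation -> average consensus -> ML.
# The consensus step is idealized as an exact average.
import numpy as np

rng = np.random.default_rng(0)
A = 50                               # true (unknown) number of agents
y = rng.standard_normal(A)           # local generation: y_a ~ N(0, 1), no IDs
y_ave = y.mean()                     # what average consensus leaves at every agent
A_hat = 1.0 / y_ave ** 2             # ML: y_ave ~ N(0, 1/A), so 1/A is estimated by y_ave^2
```

A single scalar gives a very noisy estimate of A; the generalizations on the next slide repeat the scheme with r parallel draws per agent.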

A simple example

Plausible ways to generalize the idea

    local: p(·) → y_a^1, ..., y_a^r for each agent    (• N(μ, σ²)  • U[α, β]  • ??)
    distributed: fusion rule F    (• average  • max  • ??)
    local: estimator Ψ → estimate of A    (• ML  • MMSE  • MAP  • ??)

Theoretical results

Which cost function we consider

Notice: we want to estimate A^{-1} instead of A; the estimator is denoted \widehat{A^{-1}}.

Considered cost function

    E\left[ \left( \widehat{A^{-1}} - A^{-1} \right)^2 \right]    (≡ variance if \widehat{A^{-1}} is unbiased)

Why?

• convenient in order to obtain mathematical results
• in our cases, asymptotically in r:

    \lim_{r \to +\infty} E\left[ \left( \frac{\widehat{A^{-1}} - A^{-1}}{A^{-1}} \right)^2 \right] = E\left[ \left( \frac{\widehat{A} - A}{A} \right)^2 \right]

Theoretical results

Theoretical results: average-consensus + ML

    p(·) → y_a^1, ..., y_a^r → average consensus → ML → estimate of A

Assumptions

• the y_a are generated through Gaussian distributions N(μ, σ²)
• the fusion of the y_a is through average consensus

Results: the ML estimators

• are writable in closed form and are MVUE (Minimum Variance, Unbiased Estimators)
• have performance var\left( \frac{\widehat{A^{-1}} - A^{-1}}{A^{-1}} \right) = \frac{2}{r} (independent of μ and σ²)
• (conjecture, by the Law of Large Numbers) if r → +∞ the performance is independent of p(·)

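For the zero-mean, unit-variance case the 2/r figure can be verified directly; a short derivation sketch (the slide states that the result actually holds for generic μ and σ²):

```latex
% y_ave^k : outcome of the k-th of r parallel average-consensus runs
\[
  y_{\mathrm{ave}}^{k} \sim \mathcal{N}\!\Big(0, \tfrac{1}{A}\Big)
  \;\Longrightarrow\;
  \widehat{A^{-1}} = \frac{1}{r} \sum_{k=1}^{r} \big(y_{\mathrm{ave}}^{k}\big)^{2},
  \qquad
  \mathbb{E}\big[\widehat{A^{-1}}\big] = \frac{1}{A},
\]
\[
  \mathrm{var}\!\left(\frac{\widehat{A^{-1}} - A^{-1}}{A^{-1}}\right)
  = A^{2}\,\mathrm{var}\big(\widehat{A^{-1}}\big)
  = A^{2}\cdot\frac{1}{r}\,\mathrm{var}\big((y_{\mathrm{ave}}^{k})^{2}\big)
  = A^{2}\cdot\frac{1}{r}\cdot\frac{2}{A^{2}}
  = \frac{2}{r}.
\]
```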

Theoretical results

Theoretical results: max-consensus + ML

    p(·) → y_a^1, ..., y_a^r → max consensus → ML → estimate of A

Assumptions

• the cumulative distribution P(·) of the y_a is strictly monotonic and continuous
• the fusion of the y_a is through max-consensus

Results: the ML estimators

• are writable in closed form and are MVUE (Minimum Variance, Unbiased Estimators)
• have performance var\left( \frac{\widehat{A^{-1}} - A^{-1}}{A^{-1}} \right) = \frac{1}{r}, independent of P(·)
• the performance is twice as good as with average consensus

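A minimal sketch of the max-consensus pipeline for p(·) = U[0, 1] (an illustrative choice): the network maximum F_k has cdf t^A, hence −log F_k is exponential with rate A, and averaging the r parallel outcomes gives an unbiased estimate of A^{-1} whose relative variance is exactly 1/r, matching the slide.

```python
# Sketch of max-consensus + ML with y ~ U[0,1] (illustrative choice of p).
# Max-consensus is idealized as an exact maximum over the agents.
import numpy as np

rng = np.random.default_rng(0)
A, r = 100, 70                         # agents, parallel consensus instances
y = rng.uniform(0, 1, (A, r))          # r local draws per agent, no IDs
F = y.max(axis=0)                      # max-consensus outcome, P(F <= t) = t^A

inv_A_hat = -np.log(F).mean()          # -log F ~ Exp(A): unbiased for 1/A
A_hat = 1.0 / inv_A_hat                # relative variance of inv_A_hat: 1/r
```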

Simulative results

Results of various simulated systems (1)

N(0, 1), average consensus, ML, A = 10

[figure: empirical p(Â | A) for r = 10, 40, 70, 100]

Simulative results

Results of various simulated systems (2)

U[0, 1], max consensus, ML, A = 10

[figure: empirical p(Â | A) for r = 10, 40, 70, 100]

Simulative results

Results of various simulated systems (3)

U[0, 1], max consensus, ML, A = 100

[figure: empirical p(Â | A) for r = 10, 40, 70, 100]

Conclusions. . . and future extensions

Conclusions

• effective and robust algorithm
• quantifiable performance
• relies on statistical concepts → preserves privacy
• inherits the good qualities of consensus strategies

Future extensions

• analyze optimal quantization strategies
• find optimal distributions for average consensus
• use the strategy for topological change detection purposes

Distributed Parametric-Nonparametric

Estimation in Networked Control Systems

Damiano Varagnolo

Department of Information Engineering - University of Padova

April, 18th 2011

damiano.varagnolo@dei.unipd.it
www.dei.unipd.it/~varagnolo/  (google: damiano varagnolo)

licensed under the Creative Commons BY-NC-SA 2.5 Italy License

entirely written in LaTeX 2ε using Beamer and TikZ

Publications

Distributed Optimization

• F. Zanella, D. Varagnolo, A. Cenedese, G. Pillonetto, L. Schenato. Newton-Raphson consensus for distributed convex optimization. IEEE Conference on Decision and Control (CDC 2011) (submitted)

Applications of Consensus

• S. Bolognani, S. Del Favero, L. Schenato, D. Varagnolo. Consensus-based distributed sensor calibration and least-square parameter identification in WSNs. International Journal of Robust and Nonlinear Control, 2010

• S. Bolognani, S. Del Favero, L. Schenato, D. Varagnolo. Distributed sensor calibration and least-square parameter identification in WSNs using consensus algorithms. Proceedings of the Allerton Conference on Communication, Control and Computing (Allerton 2008)

Publications

Parametric Regression

• D. Varagnolo, G. Pillonetto, L. Schenato. Distributed consensus-based Bayesian estimation: sufficient conditions for performance characterization. 2010 American Control Conference, 2010

• D. Varagnolo, P. Chen, L. Schenato, S. Sastry. Performance analysis of different routing protocols in wireless sensor networks for real-time estimation. IEEE Conference on Decision and Control (CDC 2008)

Publications

Nonparametric Regression / Classification

• D. Varagnolo, G. Pillonetto, L. Schenato. Distributed parametric and nonparametric regression with on-line performance bounds computation. Automatica (submitted)

• D. Varagnolo, G. Pillonetto, L. Schenato. Auto-tuning procedures for distributed nonparametric regression algorithms. IEEE Conference on Decision and Control (CDC 2011) (submitted)

• S. Del Favero, D. Varagnolo, F. Dinuzzo, L. Schenato, G. Pillonetto. On the discardability of data in Support Vector Classification problems. IEEE Conference on Decision and Control (CDC 2011) (submitted)

• D. Varagnolo, G. Pillonetto, L. Schenato. Distributed Function and Time Delay Estimation using Nonparametric Techniques. IEEE Conference on Decision and Control (CDC 2009)

Publications

Number of Sensors Estimation

• D. Varagnolo, G. Pillonetto, L. Schenato. Distributed statistical estimation of the number of nodes in Sensor Networks. IEEE Conference on Decision and Control (CDC 2010)

Smart Grids

• E. Bitar, A. Giani, R. Rajagopal, D. Varagnolo, P. Khargonekar, K. Poolla, P. Varaiya. Optimal Contracts for Wind Power Producers in Electricity Markets. IEEE Conference on Decision and Control (CDC 2010)
