
Representation Power of Feedforward Neural Networks

Based on work by Barron (1993), Cybenko (1989), Kolmogorov (1957)

Matus Telgarsky <[email protected]>

Feedforward Neural Networks

- Two node types:
  - Linear combinations: $x \mapsto \sum_i w_i x_i + w_0$.
  - Sigmoid-thresholded linear combinations: $x \mapsto \sigma(\langle w, x\rangle + w_0)$.
- What can a network of these nodes represent?
  $\sum_{i=1}^{n} w_i x_i$ (one layer),
  $\sum_{i=1}^{n} w_i\, \sigma\Big(\sum_{j=1}^{n_i} w_{ji} x_j + w_{i0}\Big)$ (two layers),
  and so on.
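For concreteness, a minimal numpy sketch of a two-layer network built from exactly these two node types (hidden sigmoid units feeding one linear combination); the sizes and weights below are arbitrary placeholders, not anything from the slides.

    import numpy as np

    def sigmoid(z):
        # standard sigmoid sigma_s(z) = 1 / (1 + exp(-z))
        return 1.0 / (1.0 + np.exp(-z))

    def two_layer_nn(x, W, w0, v, v0):
        # hidden layer: sigmoid-thresholded linear combinations sigma(<w_i, x> + w_{i0})
        hidden = sigmoid(W @ x + w0)
        # output node: a plain linear combination of the hidden units
        return v @ hidden + v0

    # arbitrary example: n = 3 inputs, 5 hidden units, random placeholder weights
    rng = np.random.default_rng(0)
    W, w0 = rng.normal(size=(5, 3)), rng.normal(size=5)
    v, v0 = rng.normal(size=5), 0.0
    print(two_layer_nn(np.array([0.2, 0.5, 0.9]), W, w0, v, v0))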


Forget about 1 layer.

- Target set $[0,3]$; target function $\mathbf{1}[x \in [1,2]]$.
- Standard sigmoid $\sigma_s(x) := 1/(1 + e^{-x})$.
- Consider the sigmoid output at $x = 2$.
- $w \ge 0$ and $\sigma(2w + w_0) \ge 1/2$: mess up on the right side.
- $w \ge 0$ and $\sigma(2w + w_0) < 1/2$: mess up on the middle bump.
  [Figures: the two failure cases plotted over $[0,3]$, with the level $0.5$ marked.]
- Can symmetrize ($w < 0$); no matter what, error $\ge 1/2$.
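A quick numerical sanity check of this barrier, as a sketch: grid-search a single sigmoid unit $\sigma_s(wx + w_0)$ against the target $\mathbf{1}[x \in [1,2]]$ on $[0,3]$; the best sup-norm error found never beats $1/2$. The grid ranges below are arbitrary.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    xs = np.linspace(0.0, 3.0, 601)
    target = ((xs >= 1.0) & (xs <= 2.0)).astype(float)   # 1[x in [1, 2]]

    best = np.inf
    for w in np.linspace(-20.0, 20.0, 81):               # coarse grid over the weight
        for w0 in np.linspace(-40.0, 40.0, 161):         # and the bias
            err = np.max(np.abs(target - sigmoid(w * xs + w0)))   # sup-norm error
            best = min(best, err)

    print(best)   # 0.5: no single sigmoid unit beats the 1/2 barrier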


Meaning of "Universal Approximation"

Target set $[0,1]^n$; target function $f \in C([0,1]^n)$.

- For any $\epsilon > 0$, there exists a NN $\hat f$ with
  $x \in [0,1]^n \implies |f(x) - \hat f(x)| < \epsilon$.
- This gives NNs $\hat f_i \to f$ pointwise.
- For any $\epsilon > 0$, there exist a NN $\hat f$ and $S \subseteq [0,1]^n$ with $m(S) \ge 1 - \epsilon$ such that
  $x \in S \implies |f(x) - \hat f(x)| < \epsilon$.
- If (for instance) $\hat f$ is bounded on $S^c$, this gives NNs $\hat f_i \to f$ $m$-a.e.

Goal: 2-NNs approximate continuous functions over $[0,1]^n$.


Outline

- 2-nn via functional analysis (Cybenko, 1989).
- 2-nn via greedy approx (Barron, 1993).
- 3-nn via histograms (Folklore).
- 3-nn via wizardry (Kolmogorov, 1957).

Overview of Functional Analysis proof (Cybenko, 1989)

- Hidden layer as a basis:
  $B := \{\sigma(\langle w, x\rangle + w_0) : w \in \mathbb{R}^n,\ w_0 \in \mathbb{R}\}$.
- Want to show $\mathrm{cl}(\mathrm{span}(B)) = C([0,1]^n)$.
- Work via contradiction: if $f \in C([0,1]^n)$ is far from $\mathrm{cl}(\mathrm{span}(B))$, we can bridge the gap with a sigmoid.


Abstracting σ

- Cybenko needs σ to be discriminatory:
  $\mu = 0 \iff \forall w, w_0\ \int \sigma(\langle w, x\rangle + w_0)\, d\mu(x) = 0$.
- Satisfied by the standard choices
  $\sigma_s(x) = \frac{1}{1 + e^{-x}}$ and $\frac{1}{2}(\tanh(x) + 1) = \frac{1}{2}\Big(\frac{e^x - e^{-x}}{e^x + e^{-x}} + 1\Big) = \sigma_s(2x)$.
- Most results today only need σ to approximate $\mathbf{1}[x \ge 0]$:
  $\sigma(x) \to 1$ as $x \to +\infty$, and $\sigma(x) \to 0$ as $x \to -\infty$.
  Combined with σ being bounded and measurable, this gives discriminatory (Cybenko, 1989, Lemma 1).
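Two small numerical checks of these claims (a sketch; the grid and scale values are arbitrary): the rescaled tanh coincides with $\sigma_s(2x)$, and $\sigma_s(Bx)$ approaches the step $\mathbf{1}[x \ge 0]$ away from $0$ as $B$ grows.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    xs = np.linspace(-5.0, 5.0, 201)

    # (tanh(x) + 1) / 2 coincides with sigma_s(2 x).
    print(np.allclose((np.tanh(xs) + 1.0) / 2.0, sigmoid(2.0 * xs)))   # True

    # sigma_s(B x) approaches the step 1[x >= 0] away from x = 0 as B grows.
    step = (xs >= 0.0).astype(float)
    for B in (1.0, 10.0, 100.0, 1000.0):
        gap = np.max(np.abs(step - sigmoid(B * xs))[np.abs(xs) > 0.1])
        print(B, gap)   # the gap shrinks toward 0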


Proof of Cybenko (1989)

- Consider the closed subspace
  $S := \mathrm{cl}\big(\mathrm{span}\{\sigma(\langle w, x\rangle + w_0) : w \in \mathbb{R}^n,\ w_0 \in \mathbb{R}\}\big)$.
- Suppose (for contradiction) there exists $f \in C([0,1]^n) \setminus S$.
- There exists a bounded linear $L \ne 0$ with $L|_S = 0$.
  - Can think of this as projecting $f$ onto $S^\perp$.
  - Don't need inner products: Hahn-Banach Theorem.
- There exists $\mu \ne 0$ so that $L(h) = \int h(x)\, d\mu(x)$.
  - In $\mathbb{R}^n$, linear functions have the form $\langle \cdot, y\rangle$ for some $y \in \mathbb{R}^n$.
  - In infinite dimensions, integrals give inner products.
  - A form of the Riesz Representation Theorem.
- Contradiction: σ is discriminatory.
  - $L|_S = 0$ implies $\forall w, w_0\ \int \sigma(\langle w, x\rangle + w_0)\, d\mu(x) = 0$.
  - But "discriminatory" means $\mu = 0 \iff \forall w, w_0\ \int \cdots = 0$.


What was Cybenko's proof about?

- Set $S := \mathrm{cl}(\mathrm{span}\{\sigma(\langle w, x\rangle + w_0) : w \in \mathbb{R}^n,\ w_0 \in \mathbb{R}\})$.
- Another way to look at it:
  - Given $f \in C([0,1]^n)$ and a current approximant $\hat f \in S$.
  - The residual $f - \hat f$ must have a nonzero projection onto $S$.
  - Add the residual's projection into $\hat f$; does this now match $f$?
- Can this be turned into an algorithm?


Outline

- 2-nn via functional analysis (Cybenko, 1989).
- 2-nn via greedy approx (Barron, 1993).
- 3-nn via histograms (Folklore).
- 3-nn via wizardry (Kolmogorov, 1957).

The method of Barron (1993, Section 8)

Input: basis set $B$, target $f \in \mathrm{cl}(\mathrm{conv}(B))$.

1. Choose arbitrary $f_0 \in B$.
2. For $t = 1, 2, \ldots$:
   2.1 Choose $(\alpha_t, g_t)$ approximately minimizing (tolerance $O(1/t^2)$)
       $\inf_{\alpha \in [0,1],\, g \in B} \|f - (\alpha f_{t-1} + (1 - \alpha) g)\|_2^2$.
   2.2 Update $f_t := \alpha_t f_{t-1} + (1 - \alpha_t) g_t$.

- Can rescale $B$ so $f \in \mathrm{cl}(\mathrm{conv}(B))$ (or tweak the algorithm).
- Is this familiar? Yes: gradient descent, coordinate descent, Frank-Wolfe, projection pursuit, basis pursuit, boosting... as usual, cf. Zhang (2003).
- Barron carefully shows the rate $\|f - f_t\|_2^2 = O(1/t)$.
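A minimal numpy sketch of this greedy scheme under simplifying assumptions: functions are discretized on a grid, the basis is a finite dictionary of $\pm$ sigmoid ridge functions, and the target is built as an explicit convex combination of dictionary elements so that the Input condition $f \in \mathrm{cl}(\mathrm{conv}(B))$ holds; step 2.1 is solved exactly by a closed-form line search in $\alpha$ for each candidate $g$.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Discretize [0, 1]: a "function" is its vector of values on this grid.
    xs = np.linspace(0.0, 1.0, 401)

    # Finite dictionary B of +/- sigmoid ridge functions sigma(w x + w0).
    rng = np.random.default_rng(0)
    B = [s * sigmoid(w * xs + w0)
         for s in (-1.0, 1.0)
         for w in rng.uniform(-30.0, 30.0, 40)
         for w0 in rng.uniform(-30.0, 30.0, 10)]

    # A target built inside conv(B), matching the Input assumption f in cl(conv(B)).
    f = 0.25 * B[3] + 0.35 * B[200] + 0.40 * B[777]

    ft = B[0]                                        # step 1: arbitrary f0 in B
    for t in range(1, 51):                           # step 2: greedy rounds
        best = None
        for g in B:                                  # search the whole dictionary
            d, c = ft - g, f - g
            denom = float(d @ d)
            alpha = 0.0 if denom == 0.0 else float(np.clip((c @ d) / denom, 0.0, 1.0))
            err = float(np.mean((f - (alpha * ft + (1.0 - alpha) * g)) ** 2))
            if best is None or err < best[0]:
                best = (err, alpha, g)
        err, alpha, g = best
        ft = alpha * ft + (1.0 - alpha) * g          # step 2.2: convex update
        if t % 10 == 0:
            print(t, err)                            # nonincreasing; Barron guarantees O(1/t)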


Greedy approx as coordinate descent

- Intuition: a linear operator $A$ with the basis $B$ as "columns":
  $f_t = A w_t$, $g = A e_j$.
- Split the update into two steps:
  $\inf_{g \in B} \|f - (f_{t-1} + g)\|_2^2 = \inf_i \|f - (A w_{t-1} + A e_i)\|_2^2$,
  then $\inf_{\alpha \in [0,1]} \|f - (\alpha f_{t-1} + (1 - \alpha) g_t)\|_2^2$.
- Suppose $\|g\|_2 = 1$ for all $g \in B$; the first step is equivalent to
  $\sup_{g \in B} \langle g, f_{t-1} - f\rangle = \sup_i (A e_i)^\top (A w_{t-1} - f)$.
- The second version is a coordinate descent update!
- (Standard rate proofs use the curvature of $\|\cdot\|_2^2$.)
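A small numerical check of this equivalence, as a sketch with a random matrix standing in for the dictionary: with unit-norm columns, exhaustively minimizing the one-step objective over atoms picks the same index as simply reading off the largest inner product with the residual, i.e. a coordinate-selection rule.

    import numpy as np

    rng = np.random.default_rng(1)
    m, k = 50, 20                                   # ambient dimension, number of atoms
    A = rng.normal(size=(m, k))
    A /= np.linalg.norm(A, axis=0)                  # unit-norm columns, i.e. ||g||_2 = 1

    f = rng.normal(size=m)                          # target
    w = rng.normal(size=k)
    ft = A @ w                                      # current approximant f_{t-1} = A w_{t-1}

    # First formulation: exhaustively try adding each column.
    errs = [np.sum((f - (ft + A[:, i])) ** 2) for i in range(k)]

    # Second formulation: read off inner products with the residual f - f_{t-1}.
    scores = A.T @ (f - ft)

    # Same selected index: searching the dictionary is a coordinate-wise rule on A^T(Aw - f).
    print(int(np.argmin(errs)), int(np.argmax(scores)))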


Story for 2-layer NNs so far

- Can $L_2$-approximate any $f \in C([0,1]^n)$ at rate $O(1/\sqrt{t})$...
- ...so why aren't we using this algorithm for neural nets?
- This is not an easy optimization problem:
  $\inf_{\alpha \in [0,1],\, w \in \mathbb{R}^n,\, w_0 \in \mathbb{R}} \|f - (\alpha f_{t-1} + (1 - \alpha)\sigma(\langle w, \cdot\rangle + w_0))\|_2^2$.
- For a general basis $B$, one must check each basis element...
- From an algorithmic perspective, much remains to be done.


Lower bound from Barron (1993, Section 10)

- The greedy algorithm gets to search the whole basis.
- If an $n$-element basis $B_n$ is fixed, then
  $\sup_{f \in \mathrm{cl}(\mathrm{span}_C(B))}\ \inf_{\hat f \in \mathrm{span}(B_n)} \|f - \hat f\|_2 = \Omega\!\Big(\frac{1}{n^{1/d}}\Big)$,
  the curse of dimension!
- Can defeat this with more layers (cf. Kolmogorov (1957)).


Outline

- 2-nn via functional analysis (Cybenko, 1989).
- 2-nn via greedy approx (Barron, 1993).
- 3-nn via histograms (Folklore).
- 3-nn via wizardry (Kolmogorov, 1957).

Folklore proof of NN power

- What's an easy way to write a function as a sum of others?
- Project onto your favorite basis (Fourier, Chebyshev, ...)!
- Let's use this to build NNs.
- Easiest basis: histograms!


Histogram basis for $C([0,1]^n)$

- Grid $[0,1]^n$ into $\{R_i\}_{i=1}^m$; consider
  $\hat f(x) = \sum_{i=1}^m a_i\, \mathbf{1}[x \in R_i]$,
  where $a_i = f(x)$ for some $x \in R_i$.
- Fact: for any $\epsilon > 0$, there exists a histogram $\hat f$ with
  $x \in [0,1]^n \implies |f(x) - \hat f(x)| < \epsilon$.
  Proof: a continuous function over a compact set is uniformly continuous.
- Now just write the individual histogram bars as NNs.
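A small sketch of this fact on $[0,1]^2$ (the target function below is an arbitrary choice): bin the square into $k \times k$ cells, set each bar to the target's value at the cell center, and watch the uniform error shrink as $k$ grows.

    import numpy as np

    def f(x, y):
        # an arbitrary continuous target on [0, 1]^2
        return np.sin(3.0 * x) * np.cos(2.0 * y) + x * y

    xs = np.linspace(0.0, 1.0, 301)
    X, Y = np.meshgrid(xs, xs)

    for k in (2, 4, 8, 16, 32):
        # cell index of every evaluation point, and the cell centers
        ix = np.minimum((X * k).astype(int), k - 1)
        iy = np.minimum((Y * k).astype(int), k - 1)
        centers = (np.arange(k) + 0.5) / k
        hist = f(centers[ix], centers[iy])           # bar height = f at the cell center
        print(k, np.max(np.abs(f(X, Y) - hist)))     # sup-norm error shrinks with k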


Histograms via 0/1 NNs

- Write $\mathbf{1}[x \in R]$ as a sum/composition of $\mathbf{1}[\langle w, x\rangle \ge w_0]$.
- 2-D example. First carve out the axes.
  [Figure: each cell of the grid is labeled with how many of the four halfspace indicators it satisfies (2, 3, or 4); only the target rectangle reaches 4.]
- Now pass through a second layer $\mathbf{1}[\text{input} \ge 3.5]$.
  [Figure: the result is 1 on the target rectangle and 0 elsewhere.]
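A sketch of this two-layer 0/1 construction for one rectangle, taking $[1,2]^2$ as the target cell to echo the figure: the first layer emits the four halfspace indicators, and the second layer thresholds their sum at $3.5$.

    import numpy as np

    def box_indicator(x, y):
        # first layer: the four halfspace indicators 1[<w, x> >= w0] carving out the axes
        h = np.array([x >= 1.0, x <= 2.0, y >= 1.0, y <= 2.0], dtype=float)
        # each cell satisfies 2, 3, or 4 of them; only the target cell reaches 4,
        # so the second layer thresholds the sum at 3.5
        return float(h.sum() >= 3.5)

    for (x, y) in [(1.5, 1.5), (0.5, 1.5), (2.5, 2.5), (1.0, 2.0)]:
        print((x, y), box_indicator(x, y))   # 1.0 exactly on [1, 2]^2, else 0.0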


Histograms via sigmoidal NNs

- Write $\mathbf{1}[x \in R]$ as a sum/composition of $\sigma(\langle w, x\rangle + w_0)$.
- 2-D example. First carve out the axes, now with a fuzz region.
  [Figure: as before, but the cell values are only approximate, e.g. $>2-2\epsilon$, $<1+2\epsilon$, $<2+4\epsilon$, $<3+4\epsilon$, and $>4-4\epsilon$ on the target rectangle.]
- Now pass through a second layer $\sigma(\text{huge} \cdot (\text{input} - 3.5))$.
  [Figure: the output is $>1-\epsilon$ on the target rectangle.]


Histograms apx via sigmoidal NNs, some math

- A histogram region is clearly a 2-layer 0/1 NN:
  $\mathbf{1}[\forall i\ x_i \ge l_i \wedge x_i \le u_i] = \mathbf{1}\Big[\sum_{i=1}^n \big(\mathbf{1}[x_i \ge l_i] + \mathbf{1}[x_i \le u_i]\big) \ge 2n - \tfrac{1}{2}\Big]$.
- Choose $B$ huge; approximate $\mathbf{1}[\langle w, x\rangle \ge w_0]$ with $\sigma(B(\langle w, x\rangle - w_0))$.
- $\epsilon/2$-approximate $f$ with a histogram $\{a_i, R_i\}_{i=1}^m$; $\epsilon/(2m)$-approximate each bar with sigmoids.
- Final NN: $\hat f(x) := \sum_{i=1}^m a_i \cdot (\text{sigmoidal approximation to } \mathbf{1}[x \in R_i])$.
- Up to the fuzz: there is $S \subseteq [0,1]^n$ with $m(S) \ge 1 - \epsilon$ and
  $x \in S \implies |f(x) - \hat f(x)| < \epsilon$.
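A sketch of the sigmoidal version of one bar, with an arbitrary steepness $B$: each threshold becomes $\sigma(B(\langle w, x\rangle - w_0))$, the sum is passed through a steep outer sigmoid, and away from the cell boundary the output is within a small fuzz of the true indicator. The full approximant is then the weighted sum with weights $a_i$ of one such soft box per cell, as in the final-NN bullet above.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def soft_box(x, y, l=(1.0, 1.0), u=(2.0, 2.0), B=200.0):
        # first layer: sigmoidal stand-ins for the 2n halfspace indicators
        s = (sigmoid(B * (x - l[0])) + sigmoid(B * (u[0] - x))
             + sigmoid(B * (y - l[1])) + sigmoid(B * (u[1] - y)))
        # second layer: a steep sigmoid thresholding the sum at 2n - 1/2 = 3.5
        return sigmoid(B * (s - 3.5))

    # ~1 well inside the box, ~0 well outside; only a thin fuzz near the boundary is off
    for (x, y) in [(1.5, 1.5), (0.5, 0.5), (2.5, 1.5), (1.02, 1.5)]:
        print((x, y), round(float(soft_box(x, y)), 4))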


Folklore proof discussion

- Curse of dimension!
- Still, high-level features are useful.

Outline

- 2-nn via functional analysis (Cybenko, 1989).
- 2-nn via greedy approx (Barron, 1993).
- 3-nn via histograms (Folklore).
- 3-nn via wizardry (Kolmogorov, 1957).

Preface to the Kolmogorov Superposition Theorem

- In 1957, at 19, Kolmogorov's student Vladimir Arnold solved Hilbert's 13th problem:
  Can the solution to $x^7 + ax^3 + bx^2 + cx + 1 = 0$ be written using a finite number of 2-variable functions?
- Kolmogorov's generalization is a NN approximation theorem.
- The "transfer functions" (i.e., sigmoids) are not fixed across nodes.


The Kolmogorov Superposition Theorem

Theorem (Kolmogorov, 1957).
There exist continuous $\{\psi_{p,q} : p \in [n],\ q \in [2n+1]\}$ so that for any $f \in C([0,1]^n)$ there exist continuous $\{\chi_q\}_{q=1}^{2n+1}$ with
$$f(x) = \sum_{q=1}^{2n+1} \chi_q\Big(\sum_{p=1}^{n} \psi_{p,q}(x_p)\Big).$$

- The proof is also histogram based.
  - It must remove the "fuzz" gaps.
  - It must remove the dependence on a particular $\epsilon > 0$.
- The magic is within the $\psi_{p,q}$!!

Page 103: Representation Power of Feedforward Neural Networksdasgupta/254-deep/matus.pdf · Representation Power of Feedforward Neural Networks Based on work by Barron (1993), Cybenko (1989),

The Kolmogorov Superposition Theorem

Theorem. (Kolmogorov, 1957)

There exist continuous ψp,q : p ∈ [n], q ∈ [2n+ 1], so that

forany f ∈ C([0, 1]n), there exist continuous χq

2n+1

q=1 ,

f(x) =

2n+1

∑q=1

χq

n

∑p=1

ψp,q(xp)

I The proof is also histogram based.

I It must remove the “fuzz” gaps.I It must remove dependence on a particular ε > 0.

I The magic is within ψp,q!!

Page 104: Representation Power of Feedforward Neural Networksdasgupta/254-deep/matus.pdf · Representation Power of Feedforward Neural Networks Based on work by Barron (1993), Cybenko (1989),

The Kolmogorov Superposition Theorem

Theorem. (Kolmogorov, 1957)

There exist continuous ψp,q : p ∈ [n], q ∈ [2n+ 1], so that

forany f ∈ C([0, 1]n), there exist continuous χq

2n+1

q=1 ,

f(x) =

2n+1

∑q=1

χq

n∑p=1

ψp,q(xp)

I The proof is also histogram based.

I It must remove the “fuzz” gaps.I It must remove dependence on a particular ε > 0.

I The magic is within ψp,q!!

Page 105: Representation Power of Feedforward Neural Networksdasgupta/254-deep/matus.pdf · Representation Power of Feedforward Neural Networks Based on work by Barron (1993), Cybenko (1989),

The Kolmogorov Superposition Theorem

Theorem. (Kolmogorov, 1957)

There exist continuous ψp,q : p ∈ [n], q ∈ [2n+ 1], so that

forany f ∈ C([0, 1]n), there exist continuous χq2n+1

q=1 ,

f(x) =

2n+1∑q=1

χq

n∑p=1

ψp,q(xp)

I The proof is also histogram based.

I It must remove the “fuzz” gaps.I It must remove dependence on a particular ε > 0.

I The magic is within ψp,q!!

Page 106: Representation Power of Feedforward Neural Networksdasgupta/254-deep/matus.pdf · Representation Power of Feedforward Neural Networks Based on work by Barron (1993), Cybenko (1989),

The Kolmogorov Superposition Theorem

Theorem. (Kolmogorov, 1957)

There exist continuous ψp,q : p ∈ [n], q ∈ [2n+ 1], so that forany f ∈ C([0, 1]n), there exist continuous χq2n+1

q=1 ,

f(x) =

2n+1∑q=1

χq

n∑p=1

ψp,q(xp)

I The proof is also histogram based.

I It must remove the “fuzz” gaps.I It must remove dependence on a particular ε > 0.

I The magic is within ψp,q!!

Page 107: Representation Power of Feedforward Neural Networksdasgupta/254-deep/matus.pdf · Representation Power of Feedforward Neural Networks Based on work by Barron (1993), Cybenko (1989),

The Kolmogorov Superposition Theorem

Theorem. (Kolmogorov, 1957)

There exist continuous ψp,q : p ∈ [n], q ∈ [2n+ 1], so that forany f ∈ C([0, 1]n), there exist continuous χq2n+1

q=1 ,

f(x) =

2n+1∑q=1

χq

n∑p=1

ψp,q(xp)

I The proof is also histogram based.

I It must remove the “fuzz” gaps.I It must remove dependence on a particular ε > 0.

I The magic is within ψp,q!!

Page 108: Representation Power of Feedforward Neural Networksdasgupta/254-deep/matus.pdf · Representation Power of Feedforward Neural Networks Based on work by Barron (1993), Cybenko (1989),

The Kolmogorov Superposition Theorem

Theorem. (Kolmogorov, 1957)

There exist continuous ψp,q : p ∈ [n], q ∈ [2n+ 1], so that forany f ∈ C([0, 1]n), there exist continuous χq2n+1

q=1 ,

f(x) =

2n+1∑q=1

χq

n∑p=1

ψp,q(xp)

I The proof is also histogram based.

I It must remove the “fuzz” gaps.

I It must remove dependence on a particular ε > 0.

I The magic is within ψp,q!!

Page 109: Representation Power of Feedforward Neural Networksdasgupta/254-deep/matus.pdf · Representation Power of Feedforward Neural Networks Based on work by Barron (1993), Cybenko (1989),

The Kolmogorov Superposition Theorem

Theorem. (Kolmogorov, 1957)

There exist continuous ψp,q : p ∈ [n], q ∈ [2n+ 1], so that forany f ∈ C([0, 1]n), there exist continuous χq2n+1

q=1 ,

f(x) =

2n+1∑q=1

χq

n∑p=1

ψp,q(xp)

I The proof is also histogram based.

I It must remove the “fuzz” gaps.I It must remove dependence on a particular ε > 0.

I The magic is within ψp,q!!

Page 110: Representation Power of Feedforward Neural Networks

The Kolmogorov Superposition Theorem

Theorem. (Kolmogorov, 1957)

There exist continuous ψ_{p,q}, p ∈ [n], q ∈ [2n+1], so that for any f ∈ C([0,1]^n) there exist continuous (χ_q)_{q=1}^{2n+1} with

f(x) = ∑_{q=1}^{2n+1} χ_q( ∑_{p=1}^{n} ψ_{p,q}(x_p) ).

I The proof is also histogram based.

I It must remove the “fuzz” gaps.

I It must remove dependence on a particular ε > 0.

I The magic is within ψ_{p,q}!!
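
Read as a network, the superposition above is a two-hidden-layer architecture: n(2n+1) inner univariate units ψ_{p,q}, then 2n+1 outer univariate units χ_q, then a plain sum. Below is a minimal Python sketch of evaluating such a network; the callables psi and chi are hypothetical placeholders, not Kolmogorov’s functions.

# A minimal sketch of evaluating a function in the superposition form above
# (illustration only; the callables psi[p][q] and chi[q] are hypothetical
# placeholders, not Kolmogorov's functions).
from typing import Callable, Sequence

def superposition(x: Sequence[float],
                  psi: Sequence[Sequence[Callable[[float], float]]],
                  chi: Sequence[Callable[[float], float]]) -> float:
    """f(x) = sum_{q=1}^{2n+1} chi_q( sum_{p=1}^{n} psi_{p,q}(x_p) )."""
    n = len(x)
    assert len(chi) == 2 * n + 1 and all(len(row) == 2 * n + 1 for row in psi)
    return sum(chi[q](sum(psi[p][q](x[p]) for p in range(n)))
               for q in range(2 * n + 1))

# Toy wiring check (n = 2): with psi_{p,q}(t) = t and chi_q(s) = s / (2n+1),
# the network computes x_1 + x_2.
n = 2
psi = [[(lambda t: t) for _ in range(2 * n + 1)] for _ in range(n)]
chi = [(lambda s: s / (2 * n + 1)) for _ in range(2 * n + 1)]
print(superposition([0.3, 0.4], psi, chi))  # ≈ 0.7

The toy instantiation only checks the wiring; the content of the theorem is that suitable continuous ψ_{p,q} and χ_q always exist.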

Page 123: Representation Power of Feedforward Neural Networks

Constructing ψ_{p,q}

I Given resolution k ∈ Z_{++}, build a staggered gridding.

[figure: the staggered gridding for a shift q]

I Let S^q_{k,i_1,...,i_n} range over the cells.

I Construct ψ_{p,q} so that for each k and q, ∑_{p=1}^{n} ψ_{p,q} maps each S^q_{k,i_1,...,i_n} to its own specific interval of R (a discontinuous caricature of this separation appears below).

I Holds for all q: gracefully handles the fuzz regions.

I Holds for all k: simultaneously handles all resolutions.

I The functions ψ_{p,q} are fractals.
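
To see what “each cell gets its own piece of R” buys, here is a discontinuous caricature in Python (illustration only; Kolmogorov’s ψ_{p,q} achieve the separation continuously, and for every resolution k at once): encode the cell index of each coordinate into its own base-k “digit”, so the inner sum already determines the cell.

# A discontinuous caricature of the separation property (illustration only;
# Kolmogorov's psi_{p,q} do this continuously and for all k simultaneously):
# give coordinate p its own base-k "digit", so the inner sum determines the cell.
def psi_digit(x_p: float, p: int, k: int) -> float:
    i_p = min(int(x_p * k), k - 1)   # which of the k one-dimensional cells x_p is in
    return i_p * float(k) ** (-p)    # place that index in digit position p

def cell_code(x, k):
    return sum(psi_digit(x_p, p, k) for p, x_p in enumerate(x))

# Distinct cells of the k-by-k grid on [0,1]^2 receive distinct codes:
k = 4
codes = {cell_code([(i + 0.5) / k, (j + 0.5) / k], k) for i in range(k) for j in range(k)}
assert len(codes) == k * k

Here each cell is collapsed to a single value; the real ψ_{p,q} instead send each cell into its own small interval, which is what lets χ_q read off (roughly) which cell x is in.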

Page 130: Representation Power of Feedforward Neural Networks

Constructing χ_q

I Recall the goal:

f(x) = ∑_{q=1}^{2n+1} χ_q( ∑_{p=1}^{n} ψ_{p,q}(x_p) ).

I Start with χ^q_0 := 0; eventually χ_q := lim_{r→∞} χ^q_r.

I To build χ^q_{r+1} from χ^q_r:

I Choose k_{r+1} so that the fluctuations over the gridding are sufficiently small.

I Make χ^q_{r+1} take some value of f in each cell, with smooth interpolation in the fuzz.

I Every x ∈ [0,1]^n hits at least n+1 cells S^q_{k,i_1,...,i_n}; i.e., since q ranges over [2n+1], more than half of the 2n+1 approximations are good (a toy check of this counting appears below).

I (I’m leaving a lot out =( )
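
The counting claim can be sanity-checked numerically on a toy staggered gridding (an illustrative stand-in, not the proof’s exact grids): give shift q the gap (“fuzz”) of width 1/(k(2n+1)) at offset q inside every block of width 1/k. Gaps of different shifts are then disjoint, so each coordinate lands in fuzz for at most one shift, and every x ∈ [0,1]^n sits inside a proper cell for at least n+1 of the 2n+1 shifts.

# A toy staggered gridding (illustration only, not the proof's exact grids):
# at resolution k, block [i/k, (i+1)/k) reserves the sub-interval of width
# delta = 1/(k*(2n+1)) starting at offset q*delta as the "fuzz" of shift q.
# Gaps of different shifts are disjoint, so each coordinate is in fuzz for at
# most one q, and each x in [0,1]^n is in a proper cell for >= n+1 shifts.
import random

def in_fuzz(x_p: float, q: int, k: int, n: int) -> bool:
    delta = 1.0 / (k * (2 * n + 1))
    offset = x_p - int(x_p * k) / k      # position of x_p inside its 1/k block
    return q * delta <= offset < (q + 1) * delta

def num_good_shifts(x, k, n):
    return sum(1 for q in range(2 * n + 1)
               if not any(in_fuzz(x_p, q, k, n) for x_p in x))

n, k = 3, 10
random.seed(0)
for _ in range(10000):
    x = [random.random() for _ in range(n)]
    assert num_good_shifts(x, k, n) >= n + 1
print("every sampled x lies in a proper cell for at least n+1 of the 2n+1 shifts")

This is exactly the “more than half the approximations are good” vote that drives the convergence of χ^q_r.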

Page 131: Representation Power of Feedforward Neural Networks

Discussion 1

Page 134: Representation Power of Feedforward Neural Networks

Discussion 2

I It needs different transfer functions...

I ...but these can be approximated by multiple layers! (A rough sketch follows below.)

I It shows the value of powerful nodes, whereas the standard apx results just suggest a very wide hidden layer.
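
A rough sketch of the second bullet in Python (illustration only, not from the slides): any continuous univariate transfer function on [0,1] can itself be approximated by one hidden layer of steep, fixed sigmoids arranged as a staircase, so substituting such subnetworks for the exotic ψ_{p,q} and χ_q yields a deeper network built from a single fixed sigmoid. The names target, m, and a below are illustrative choices.

# A rough sketch (illustration only): approximate a continuous univariate
# transfer function on [0,1] by one hidden layer of steep, fixed sigmoids
# arranged as a staircase.  `target`, `m`, and `a` are illustrative choices.
import math

def sigmoid(z: float) -> float:
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def sigmoid_staircase(target, m: int = 50, a: float = 2500.0):
    """Return t -> target(c_0) + sum of m-1 sigmoid steps approximating target."""
    centers = [(j + 0.5) / m for j in range(m)]
    jumps = [(j / m, target(centers[j]) - target(centers[j - 1]))
             for j in range(1, m)]
    return lambda t: target(centers[0]) + sum(h * sigmoid(a * (t - b))
                                              for b, h in jumps)

g = sigmoid_staircase(lambda t: t * t)
print(max(abs(g(i / 500) - (i / 500) ** 2) for i in range(501)))  # roughly 1/m here

With m steps the staircase error is roughly the target’s variation at scale 1/m, plus negligible sigmoid tail error once a is large relative to m.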

Page 135: Representation Power of Feedforward Neural Networks

Conclusion

I Some fancy mechanisms give 2-NNs f_i → f for any f ∈ C([0,1]^n).

I Histogram constructions hint at the power of deeper networks.

Page 136: Representation Power of Feedforward Neural Networks

Thanks!

Andrew R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930–945, 1993.

George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2:303–314, 1989.

Andrei N. Kolmogorov. On the representation of continuous functions of several variables as superpositions of continuous functions of one variable and addition. Dokl. Akad. Nauk SSSR, 114(5):953–956, 1957. English translation by V. M. Volosov.

Tong Zhang. Sequential greedy approximation for certain convex optimization problems. IEEE Transactions on Information Theory, 49(3):682–691, 2003.