The Perceptron

Transcript of The Perceptron.

Page 1:

The Perceptron

Page 2:

Prehistory

W.S. McCulloch & W. Pitts (1943). “A logical calculus of the ideas immanent in nervous activity”, Bulletin of Mathematical Biophysics, 5, 115-137.

[Diagram: a McCulloch-Pitts unit computing logical AND. The inputs x and y each have weight +1, and a constant input 1 has weight -2, giving the weighted sum x + y - 2. The output is 0 if the sum < 0, else 1.]

Truth Table for Logical AND

x  y  sum (x + y - 2)  output (x & y)
1  1   0                1
1  0  -1                0
0  1  -1                0
0  0  -2                0

• This seminal paper pointed out that simple artificial “neurons” could be made to perform basic logical operations such as AND, OR and NOT.
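A minimal Python sketch of such a unit, assuming the weights and threshold rule shown in the diagram above (the function name and calling convention are illustrative):

```python
def mcculloch_pitts_unit(x, y, wx, wy, bias):
    """Weighted sum of two binary inputs plus a bias term, thresholded at zero."""
    s = wx * x + wy * y + bias
    return 0 if s < 0 else 1

# Logical AND: weights +1, +1 and bias -2, as in the diagram above.
for x in (0, 1):
    for y in (0, 1):
        print(x, y, mcculloch_pitts_unit(x, y, 1, 1, -2))  # 1 only when x = y = 1
```

The OR unit on the next slide has the same structure; only the bias changes from -2 to -1.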

Page 3:

Nervous Systems as Logical Circuits

Groups of these “neuronal” logic gates could carry out any computation, even though each neuron was very limited.

[Diagram: a McCulloch-Pitts unit computing logical OR. The inputs x and y each have weight +1, and a constant input 1 has weight -1, giving the weighted sum x + y - 1. The output is 0 if the sum < 0, else 1.]

Truth Table for Logical OR

x  y  sum (x + y - 1)  output (x | y)
1  1   1                1
1  0   0                1
0  1   0                1
0  0  -1                0

• Could computers built from these simple units reproduce the computational power of biological brains?

• Were biological neurons performing logical operations?

Page 4:

The Perceptron

Subsequent progress was inspired by the invention of learning rules based on ideas from neuroscience… Rosenblatt’s Perceptron* could automatically learn to categorise or classify input vectors into types.

It obeyed the following rule: if the sum of the weighted inputs exceeds a threshold, output 1, else output -1.

output = 1 if Σ inputi * weighti > threshold
output = -1 if Σ inputi * weighti < threshold

[Diagram: a perceptron. The inputs are multiplied by their weights and summed, Σ xi wi, and the thresholded sum gives the output.]

*Frank Rosenblatt (1962). Principles of Neurodynamics, Spartan, New York, NY.
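A short, hedged Python sketch of this rule (names are illustrative; the slide leaves the case where the sum exactly equals the threshold unspecified, so it is mapped to -1 here):

```python
def perceptron_output(inputs, weights, threshold):
    """Rosenblatt's rule: +1 if the weighted sum exceeds the threshold, else -1."""
    total = sum(x * w for x, w in zip(inputs, weights))
    # The sum-equals-threshold case is not specified on the slide; treat it as -1.
    return 1 if total > threshold else -1

print(perceptron_output([1.0, 0.5], [0.4, 0.6], threshold=0.5))  # 0.7 > 0.5, so +1
```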

Page 5:

Classifier

• Consider a network as a classifier
• Network parameters are adapted so that it discriminates between classes
• For m classes, the classifier partitions the feature space into m decision regions
• The line or curve separating the classes is the decision boundary
• In more than 2 dimensions this is a surface (e.g., a hyperplane)
• For 2 classes we can view the net output as a discriminant function y(x, w) where:
  y(x, w) = 1 if x in C1
  y(x, w) = -1 if x in C2
• Need some training data with known classes to generate an error function for the network
• Need a (supervised) learning algorithm to adjust the weights

Page 6:

Linear discriminant functions

[Figure: two-class data in a two-dimensional feature space (Feature 1 vs. Feature 2), showing Decision Region 1 and Decision Region 2 separated by the Decision Boundary.]

A linear discriminant function is a mapping which partitions feature space using a linear function (a straight line, or a hyperplane)

Thus in 2 dimensions the decision boundary is a straight line

Simple form of classifier:

“separate the two classes using a straight line in feature space”

Page 7:

The Perceptron as a Classifier

For d-dimensional data the perceptron consists of d weights, a bias and a thresholding activation function. For 2D data we have:

[Diagram: a perceptron with inputs x1, x2 and a constant input 1, connected to the output by weights w1, w2 and bias weight w0.]

a = w0 + w1 x1 + w2 x2,   y = g(a) ∈ {-1, +1}

1. Weighted sum of the inputs
2. Pass through the Heaviside (threshold) function:
   g(a) = -1 if a < 0
   g(a) = +1 if a >= 0

The output is the class decision.

If we group the weights as a vector w we therefore have the net output y given by:

y = g(w . x + w0)

View the bias as another weight from an input which is constantly on.

Page 8:

Interpretation of weights

Since the Heaviside function is thresholded at 0, the decision boundary is at:

a = w . x + w0 = w0 + w1 x1 + w2 x2 = 0

[Diagram: the weight vector w is normal to the decision boundary, which lies at a perpendicular distance |w0| / ||w|| from the origin.]

So the perceptron functions as a linear discriminant function.

Rearranging, we get x2 = - (w0 + w1 x1)/w2, unless w2 = 0, in which case the boundary is x1 = - w0/w1; if w2 = w1 = 0, the classification depends only on the sign of w0.
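For concreteness, a small sketch of this rearrangement (the weight values are illustrative, not from the slides):

```python
def boundary_x2(x1, w0, w1, w2):
    """Return the x2 on the decision boundary w0 + w1*x1 + w2*x2 = 0 for a given x1."""
    if w2 == 0:
        raise ValueError("w2 == 0: the boundary is the vertical line x1 = -w0/w1")
    return -(w0 + w1 * x1) / w2

# With w0 = -1, w1 = 2, w2 = 1 the boundary is the line x2 = 1 - 2*x1.
print(boundary_x2(0.0, -1, 2, 1))  # 1.0
print(boundary_x2(1.0, -1, 2, 1))  # -1.0
```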

Page 9:

• Perceptron can be extended to discriminate between k classes by having k output nodes:

• x is in class Cj if yj(x) >= yk(x) for all k

• Resulting decision boundaries divide the feature space into convex decision regions

yj = g( Σk wjk xk + wj0 )

[Diagram: a single-layer network with inputs x1 … xd and a constant input 1, connected to outputs y1 … yk; the weight to output j from input k is wjk, and output k has bias weight wk0. The resulting decision regions C1, C2, C3 are convex.]
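A minimal numpy sketch of the k-output extension (the weight matrix and data below are illustrative; since g is monotonic, comparing the linear outputs directly gives the same class decision):

```python
import numpy as np

def multiclass_outputs(x, W, w0):
    """One linear output per class: output j is W[j] . x + w0[j]."""
    return W @ x + w0

W = np.array([[1.0, 0.0],      # illustrative weights: 3 classes, 2 features
              [0.0, 1.0],
              [-1.0, -1.0]])
w0 = np.array([0.0, 0.0, 0.5])
x = np.array([2.0, 1.0])

y = multiclass_outputs(x, W, w0)
print(int(np.argmax(y)))       # x is assigned to the class with the largest output
```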

Page 10:

Other activation functions can also be used (usually chosen to be monotonic). NB discriminant is still linear.

Use of the sigmoidal logistic activation function:

g(a) = 1 / (1 + e^(-a))

together with data drawn from Gaussian or Bernoulli class-conditional distributions (P(x | Ck)) means that the network outputs can be interpreted as the posterior probabilities P(Ck | x)

Generalised linear discriminants

Linear discriminants can be made more general by including non-linear functions (basis functions) φk which transform the input data. Thus the outputs become:

yj = g( Σk wjk φk(x) + wj0 )
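A small sketch combining the logistic activation with fixed basis functions; the Gaussian form of φk and all the numbers are illustrative choices, not something specified on the slide:

```python
import numpy as np

def logistic(a):
    """Sigmoidal logistic activation g(a) = 1 / (1 + exp(-a))."""
    return 1.0 / (1.0 + np.exp(-a))

def generalised_discriminant(x, centres, w, w0):
    """y = g( sum_k w_k * phi_k(x) + w0 ) with Gaussian basis functions phi_k."""
    phi = np.exp(-np.sum((x - centres) ** 2, axis=1))  # one basis value per centre
    return logistic(w @ phi + w0)

centres = np.array([[0.0, 0.0], [1.0, 1.0]])  # illustrative basis-function centres
print(generalised_discriminant(np.array([0.5, 0.5]), centres, np.array([2.0, -1.0]), 0.1))
```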

Page 11:

Network Learning

The standard procedure for training the weights is by gradient descent.

For this process we have a set of training data from known classes, used in conjunction with an error function E(w) (e.g. sum-of-squares error) to specify an error for each instantiation of the network.

Then do:   w_new = w_old - η ∇E(w)

So, for each weight:   wjk(t+1) = wjk(t) - η ∂E/∂wjk

where ∇E(w) is a vector representing the gradient and η is the learning rate (small, positive).

1. This moves us downhill in the direction -∇E(w) (steepest downhill, since ∇E(w) is the direction of steepest increase).
2. How far we go is determined by the value of η.

Page 12:

Moving Downhill: Move in direction of negative derivative

[Figure: E(w) plotted against w1; at the current w1 the slope dE(w)/dw1 is positive, and E(w) decreases to the left.]

dE(w)/dw1 > 0

w1 <= w1 - η dE(w)/dw1

i.e., the rule decreases w1

Page 13:

Moving Downhill: Move in direction of negative derivative

[Figure: E(w) plotted against w1; at the current w1 the slope dE(w)/dw1 is negative, and E(w) decreases to the right.]

dE(w)/dw1 < 0

w1 <= w1 - η dE(w)/dw1

i.e., the rule increases w1

Page 14:

Illustration of Gradient Descent

[Figure: the error surface E(w) plotted over the weight space (w0, w1).]

Page 15:

Illustration of Gradient Descent

[Figure: the error surface E(w) plotted over the weight space (w0, w1).]

Page 16:

Illustration of Gradient Descent

[Figure: the error surface E(w) over the weight space (w0, w1). The direction of steepest descent is the direction of the negative gradient.]

Page 17:

Illustration of Gradient Descent

[Figure: the error surface E(w) over the weight space (w0, w1), showing a step from the original point in weight space to a new point in weight space.]

Page 18:

General Gradient Descent Algorithm

• Define an objective function E(w)

• Algorithm:
  – pick an initial set of weights w, e.g. randomly
  – evaluate ∇E(w) at w
    • note: this can be done numerically or in closed form
  – update all the weights:
    • w_new = w_old - η ∇E(w)
  – check if ∇E(w) is approximately 0
    • if so, we have converged to a “flat minimum”
    • if not, we move again in weight space
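A minimal numerical sketch of this loop (the quadratic objective, the learning rate and the tolerances are illustrative choices, not from the slides):

```python
import numpy as np

def gradient_descent(grad_E, w, eta=0.1, tol=1e-6, max_steps=1000):
    """Generic gradient descent: w <- w - eta * grad_E(w) until the gradient is ~0."""
    for _ in range(max_steps):
        g = grad_E(w)
        if np.linalg.norm(g) < tol:   # gradient approximately 0: a "flat" point
            break
        w = w - eta * g
    return w

# Illustrative objective E(w) = (w[0] - 3)^2 + (w[1] + 1)^2 and its gradient:
grad = lambda w: 2.0 * (w - np.array([3.0, -1.0]))
print(gradient_descent(grad, np.array([0.0, 0.0])))  # converges close to [3, -1]
```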

Page 19:

• Equivalent to hill-climbing
• Can be problems knowing when to stop
• Local minima
  – can have multiple local minima (note: for the perceptron, E(w) only has a single global minimum, so this is not a problem)
  – gradient descent goes to the closest local minimum
    • solution: random restarts from multiple places in weight space

Page 20:

Sequential Gradient Descent

In standard gradient descent (the batch version), we get the network output for all data points and estimate the error gradient from the difference between outputs and targets (for the current weights).

Sequential gradient descent: get an approximation to the full gradient based on the ith training vector xi only.

Use:   wjk(t+1) = wjk(t) - η ∂Ei/∂wjk   where Ei is the error due to xi

This allows us to update the weights as we cycle through each input:

- tends to be faster in practice
- don’t have to store all outputs and vectors
- can be used to adapt weights on-line
- can track slow moving changes in the data
- stochasticity can help to escape from local minima
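A hedged sketch of the per-pattern update loop. The linear model and the squared per-pattern error Ei used here are illustrative assumptions (the perceptron's own error function is introduced on the next slides):

```python
import numpy as np

def sequential_gradient_descent(X, t, w, eta=0.05, epochs=20):
    """Update w after every single training vector, using only that pattern's gradient."""
    for _ in range(epochs):
        for x_i, t_i in zip(X, t):
            y_i = w @ x_i                  # linear output for this pattern
            grad_i = (y_i - t_i) * x_i     # gradient of E_i = 0.5 * (y_i - t_i)^2
            w = w - eta * grad_i
    return w

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy inputs (illustrative)
t = np.array([1.0, -1.0, 0.0])                      # toy targets
print(sequential_gradient_descent(X, t, np.zeros(2)))
```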

Page 21:

Error function

Need to define an error function to start the training procedure

Also need to define target values ti for each input pattern xi in the training data set X:

ti = 1 if pattern xi is in C1 and –1 if xi is in C2

An obvious starting point is to use the number of training patterns that are currently misclassified.

This is equivalent to the sum-of-squares error function:

E(w) = (1/2) Σi |y(xi) – ti| = (1/4) Σi (y(xi) – ti)²

However, thinking about the resulting Error Surface highlights some bad properties of this error for gradient descent

Page 22:

In particular a smooth change in the weights w will not result in a smooth change in the error:

[Figure: the error E plotted against a weight w is piecewise constant, jumping between integer counts (e.g. from 4 to 5) as w varies.]

Either weight change has no effect on error

Or a pattern is reclassified causing a discontinuity in the error surface

This means we get no info from the error gradient (not great for a gradient descent procedure …)

i.e. we cannot distinguish this:   x o   (misclassified pattern close to the boundary)

from this:   x        o   (misclassified pattern far from the boundary)

Page 23:

Therefore we want an error function which takes into account the distance of misclassified patterns from the boundary.

Use the Perceptron Criterion:

Eperc(w) = - Σ w.xi ti, summed over all misclassified xi

Since if xi is in C1 but is classified in C2, then ti = +1 and w.xi < 0, so w.xi ti = w.xi < 0; and if xi is in C2 but is classified in C1, then ti = -1 and w.xi >= 0, so w.xi ti = - w.xi <= 0. Hence every misclassified pattern makes a non-negative contribution to Eperc(w).

[Diagram: the weight vector w, a pattern xi, and its perpendicular distance d from the decision boundary; w.xi is proportional to d.]

Also from a geometrical interpretation of the weights we can show that w.xi is proportional to the absolute distance to the decision boundary
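A small numpy sketch of the criterion. Here the bias is assumed to be folded into w as the weight on a constant input of 1, and the data are illustrative:

```python
import numpy as np

def perceptron_criterion(w, X, t):
    """E_perc(w) = - sum of (w . x_i) * t_i over the misclassified patterns (always >= 0)."""
    scores = (X @ w) * t               # w.x_i t_i is positive iff x_i is correctly classified
    return -scores[scores <= 0].sum()  # only misclassified patterns contribute

# Toy data: the first component of each x_i is the constant bias input 1 (illustrative).
X = np.array([[1.0, 2.0, 1.0],
              [1.0, 1.0, -2.0]])
t = np.array([1.0, -1.0])
w = np.array([0.0, 1.0, 0.0])
print(perceptron_criterion(w, X, t))   # the second pattern is misclassified, so E_perc = 1.0
```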

Page 24:

Applying the sequential gradient descent algorithm to this error function we get:

w(t+1) = w(t) + η xj tj   for each misclassified xj

Equivalently we can use:

w(t+1) = w(t) + η xj (tj – yj)

Which is a form of the adaline learning algorithm

The Perceptron Convergence Theorem (Rosenblatt, 1962) states that this algorithm is guaranteed to converge to a solution for linearly separable data.

The idea of the proof is to consider ||w(t+1) – ŵ|| – ||w(t) – ŵ||, where ŵ is a weight vector that solves the problem, and to show that this decreases with t (see e.g. Bishop (1995)).
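A sketch of the resulting learning loop in Python. The bias is assumed to be folded into w as the weight on a constant input of 1, and η, the data and the stopping rule are illustrative:

```python
import numpy as np

def train_perceptron(X, t, eta=1.0, max_epochs=100):
    """Sequential perceptron rule: w <- w + eta * x_j * t_j for each misclassified x_j."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for x_j, t_j in zip(X, t):
            if (w @ x_j) * t_j <= 0:       # misclassified (or exactly on the boundary)
                w = w + eta * x_j * t_j
                errors += 1
        if errors == 0:                    # no misclassifications left: converged
            break
    return w

# Linearly separable toy data; the first component of each x_j is the constant bias input.
X = np.array([[1.0, 2.0, 1.0], [1.0, 1.5, 2.0], [1.0, -1.0, -1.5], [1.0, -2.0, -0.5]])
t = np.array([1.0, 1.0, -1.0, -1.0])
print(train_perceptron(X, t))
```

For linearly separable data such as this toy set, the convergence theorem above guarantees that the loop terminates with zero misclassifications.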

Page 25:

[Figure: two-class data (x’s and o’s) split by the current decision boundary into the regions w.x < 0 and w.x > 0. The pattern x3 is misclassified, so (t3 – y3) = -2; the current weight vector w(t) is shown.]

Page 26:

[Figure: as on the previous slide, with the correction vector η x3 (t3 – y3) drawn next to the current weight vector w(t).]

Page 27:

[Figure: the update w(t+1) = w(t) + η x3 (t3 – y3) being applied to the weight vector.]

Page 28:

[Figure: the new weight vector w(t+1) and the corresponding decision boundary between w.x < 0 and w.x > 0; the data (x’s and o’s) and x3 as before.]

Page 29:

The Fall of the Perceptron

Marvin Minsky & Seymour Papert (1969). Perceptrons, MIT Press, Cambridge, MA.

• Before long researchers had begun to discover the Perceptron’s limitations.

• Unless input categories were “linearly separable”, a perceptron could not learn to discriminate between them.

• Unfortunately, it appeared that many important categories were not linearly separable.

• E.g., those inputs to an XOR gate that give an output of 1 (namely 10 & 01) are not linearly separable from those that do not (00 & 11).

Page 30:

The Fall of the Perceptron

[Figure: individuals plotted by Successful vs. Unsuccessful and Many vs. Few Hours in the Gym per Week; the Footballers and the Academics occupy diagonally opposite corners, in an XOR pattern.]

In this example, a perceptron would not be able to discriminate between the footballers and the academics…

…despite the simplicity of their relationship:

Academics = Successful XOR Gym

This failure caused the majority of researchers to walk away.

Page 31:

The simple XOR example masks a deeper problem ...

[Figure: four shapes, numbered 1 to 4, which differ only at their left- and right-hand ends (marked by dashed circles); some are connected and some are disconnected.]

Consider a perceptron classifying shapes as connected or disconnected and taking inputs from the dashed circles in 1.

In going from shape 1 to shape 2, the change at the right-hand end alone must be sufficient to change the classification (i.e. to push the linear sum through 0).

Similarly, the change at the left-hand end alone must be sufficient to change the classification.

Therefore changing both ends (shape 4) must push the sum even further across the threshold, even though this double change restores the original connectedness, so the perceptron must misclassify one of the shapes.

The problem is that, with a single layer of processing, local knowledge cannot be combined into global knowledge. So add more layers ...

Page 32:

THE PERCEPTRON CONTROVERSY There is no doubt that Minsky and Papert's book was a block to the funding of research in neural networks for more than ten years. The book was widely interpreted as showing that neural networks are basically limited and fatally flawed.

What IS controversial is whether Minsky and Papert shared and/or promoted this belief. Following the rebirth of interest in artificial neural networks, Minsky and Papert claimed that they had not intended such a broad interpretation of the conclusions they reached in the book Perceptrons.

However, Jianfeng was present at MIT in 1974, and reached a different conclusion on the basis of the internal reports circulating at MIT. What were Minsky and Papert actually saying to their colleagues in the period after the publication of their book?

Page 33:

Minsky and Papert describe a neural network with a hidden layer as follows:

GAMBA PERCEPTRON: A number of linear threshold systems have their outputs connected to the inputs of a linear threshold system. Thus we have a linear threshold function of many linear threshold functions.

Minsky and Papert then state:

Virtually nothing is known about the computational capabilities of this latter kind of machine. We believe that it can do little more than can a low order perceptron. (This, in turn, would mean, roughly, that although they could recognize (sp) some relations between the points of a picture, they could not handle relations between such relations to any significant extent.) That we cannot understand mathematically the Gamba perceptron very well is, we feel, symptomatic of the early state of development of elementary computational theories.

Page 34:

In summary, Minsky and Papert, with intellectual honesty, confessed that they were not able to prove that, even with hidden layers, feed-forward neural nets were useless, but they expressed strong confidence that they were quite inadequate computational learning devices.

NB Minsky and Papert restrict discussion to "linear threshold" rather than the sigmoid threshold functions prevalent in ANN.

Conclusion? Don’t believe everything you hear …