LEARNING WITH NONTRIVIAL TEACHER: LEARNING USING PRIVILEGED INFORMATION

Transcript of the 39-slide presentation.

Page 1: LEARNING WITH NONTRIVIAL TEACHER: LEARNING USING PRIVILEGED INFORMATION

Vladimir Vapnik

Columbia University, NEC-labs
Page 2: ROSENBLATT’S PERCEPTRON AND THE CLASSICAL MACHINE LEARNING PARADIGM

ROSENBLATT’S SCHEME:

1. Transform input vectors of space X into space Z.
2. Using training data

(x1, y1), ..., (xℓ, yℓ)   (1)

construct a separating hyperplane in space Z.

[Diagram: mapping from space X into space Z.]

GENERAL MATHEMATICAL SCHEME:

1. From a given collection of functions f(x, α), α ∈ Λ, choose one that minimizes the number of misclassifications on the training data (1).

Page 3: MAIN RESULTS OF THE VC THEORY

1. There exist two and only two factors responsible for generalization:

a) The percent of training errors νtrain.
b) The capacity of the set of functions from which one chooses the desired function (the VC dimension VCdim).

2a. The following bound on the probability of test error Ptest is valid:

Ptest ≤ νtrain + O∗(√(VCdim/ℓ)),

where ℓ is the number of observations.

2b. When νtrain = 0 the following bound is valid:

Ptest ≤ O∗(VCdim/ℓ).

These bounds are achievable.
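As a numeric illustration (not from the slides), the following Python sketch compares how the two rates scale with ℓ; the VC dimension, training error, and the O∗ constant (treated as 1) are assumed values:

```python
# Illustrative comparison of the two VC bounds' rates; all constants are
# assumed (O* treated as 1, VCdim = 50, nu_train = 0.05).
import math

vc_dim = 50
nu_train = 0.05

for l in [1_000, 10_000, 100_000]:
    fast = vc_dim / l                        # separable case: O(VCdim / l)
    slow = nu_train + math.sqrt(vc_dim / l)  # general case: nu + O(sqrt(VCdim / l))
    print(f"l = {l:>7}: O(VCdim/l) ~ {fast:.4f}, "
          f"nu + O(sqrt(VCdim/l)) ~ {slow:.4f}")
```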

Page 4: NEW LEARNING MODEL — LEARNING WITH A NONTRIVIAL TEACHER

Let us include a teacher in the learning process.

During the learning process a teacher supplies training examples with additional information, which can include comments, comparisons, explanations, logical, emotional or metaphorical reasoning, and so on.

This additional (privileged) information is available only for the training examples. It is not available for test examples.

Privileged information exists for almost any learning problem and can play a crucial role in the learning process: it can significantly increase the speed of learning.

Page 5: THE BASIC MODELS

The classical learning model: given training pairs

(x1, y1), ..., (xℓ, yℓ), xi ∈ X, yi ∈ {−1, 1}, i = 1, ..., ℓ,

find among a given set of functions f(x, α), α ∈ Λ the function y = f(x, α∗) that minimizes the probability of incorrect classifications Ptest.

The LUPI learning model: given training triplets

(x1, x∗1, y1), ..., (xℓ, x∗ℓ, yℓ), xi ∈ X, x∗i ∈ X∗, yi ∈ {−1, 1}, i = 1, ..., ℓ,

find among a given set of functions f(x, α), α ∈ Λ the function y = f(x, α∗) that minimizes the probability of incorrect classifications Ptest.

Page 6: GENERALIZATION OF PERCEPTRON — SVM

Generalization 1: Large margin.

Minimize the functional

R = (w, w)

subject to the constraints

yi[(w, zi) + b] ≥ 1, i = 1, ..., ℓ.

The solution (wℓ, bℓ) has the bound

Ptest ≤ O∗(VCdim/ℓ).
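A minimal sketch of this large-margin problem, assuming scikit-learn: a very large C makes the soft-margin solver approximate the hard-margin constraints yi[(w, zi) + b] ≥ 1; the data are synthetic.

```python
# Hard-margin SVM approximated with scikit-learn's SVC: C -> infinity
# recovers min (w, w) subject to y_i[(w, z_i) + b] >= 1 on separable data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=1e6).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
print("smallest margin:", (y * (X @ w + b)).min())  # ~1 on separable data
```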

Page 7: GENERALIZATION OF PERCEPTRON — SVM

Generalization 2: Nonseparable case.

Minimize the functional

R(w, b) = (w, w) + C ∑_{i=1}^{ℓ} ξi

subject to constraints

yi[(w, zi) + b] ≥ 1 − ξi, ξi ≥ 0, i = 1, ..., ℓ.

The solution (wℓ, bℓ) has the bound

Ptest ≤ νtrain + O∗(√(VCdim/ℓ)).


Page 8: WHY IS THE DIFFERENCE SO BIG?

• In the separable case, using ℓ examples one estimates the n parameters of w.

• In the non-separable case one estimates n + ℓ parameters (n parameters of the vector w and ℓ parameters of the slacks).

Suppose that we know a set of functions ξ(x, δ) ≥ 0, δ ∈ D, with finite VC dimension VCdim∗, that contains the function

ξ = ξ(x) = ξ(x, δ0)

(let δ be an m-dimensional vector). In this situation, to find the optimal hyperplane in the non-separable case one needs to estimate n + m parameters using ℓ observations.

Can the rate of convergence in this case be faster?

Page 9: THE KEY OBSERVATION: ORACLE SVM

Suppose we are given triplets

(x1, ξ01, y1), ..., (xℓ, ξ0ℓ, yℓ),

where ξ0i = ξ0(xi), i = 1, ..., ℓ are the slack values with respect to the best hyperplane. Then to find the approximation (wbest, bbest) we minimize the functional

R(w, b) = (w,w)

subject to constraints

yi[(w, xi) + b] ≥ ri, ri = 1 − ξ0(xi), i = 1, ..., ℓ.

Proposition 1. For Oracle SVM the following bound holds:

Ptest ≤ νtrain + O∗(VCdim/ℓ).
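Oracle SVM is a quadratic program; here is a minimal sketch with cvxpy (an assumed dependency), where the data are synthetic and the oracle slacks ξ0i are stand-in values rather than slacks of a true best hyperplane.

```python
# Oracle SVM sketch: minimize (w, w) subject to
# y_i[(w, x_i) + b] >= 1 - xi0_i, with the slacks xi0_i given by an "oracle".
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.6, (20, 2)), rng.normal(2, 0.6, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
xi0 = 0.3 * rng.random(40)            # stand-in oracle slack values, >= 0

w, b = cp.Variable(2), cp.Variable()
constraints = [cp.multiply(y, X @ w + b) >= 1 - xi0]
cp.Problem(cp.Minimize(cp.sum_squares(w)), constraints).solve()
print("w =", w.value, " b =", float(b.value))
```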

Page 10: ILLUSTRATION — I

[Figure: sample training data in the (x1, x2) plane, showing class I and class II.]

Page 11: ILLUSTRATION — II

[Figure: error rate vs. training data size for SVM and Oracle SVM with a linear kernel K; the Bayes error is shown for reference, and error rates range roughly from 12% to 20%.]

Page 12: WHAT CAN A REAL TEACHER DO — I

One cannot expect that a teacher knows the values of the slacks. However, the teacher can:

• Supply students with a correcting space X∗ and a set of functions ξ(x∗, δ), δ ∈ D, in this space (with VC dimension h∗) which contains a function

ξi = ξ(x∗i, δbest)

that approximates the oracle slack function ξ0 = ξ0(x∗) well.

• During the training process, supply students with triplets

(x1, x∗1, y1), ..., (xℓ, x∗ℓ, yℓ)

in order to estimate simultaneously both the correcting (slack) function

ξ = ξ(x∗, δℓ)

and the decision hyperplane (the pair (wℓ, bℓ)).

Page 13: WHAT CAN A REAL TEACHER DO — II

The problem of learning with a teacher is to minimize the functional

R(w, b, δ) = (w, w) + C ∑_{i=1}^{ℓ} ξ(x∗i, δ)

subject to the constraints ξ(x∗i, δ) ≥ 0 and the constraints

yi[(w, xi) + b] ≥ 1 − ξ(x∗i, δ), i = 1, ..., ℓ.

Proposition 2. With probability 1 − η the following bound holds true:

P(y[(wℓ, x) + bℓ] < 0) ≤ P(1 − ξ(x∗, δℓ) < 0) + A √( ((n + h∗)(ln(2ℓ/(n + h∗)) + 1) − ln η) / ℓ ).

The problem is how good the teacher is: how fast the probability P(1 − ξ(x∗, δℓ) < 0) converges to the probability P(1 − ξ(x∗, δ0) < 0).

Page 14: THE BOTTOM LINE

The goal of a teacher is, by introducing both the space X∗ and the set of slack functions ξ(x∗, δ), δ ∈ ∆, in this space, to try to speed up the rate of convergence of the learning process from O(1/√ℓ) to O(1/ℓ).

The difference between standard and fast methods is in the number of examples needed for training: ℓ for the standard methods and √ℓ for the fast methods (i.e., 100,000 vs. 320, or 1,000 vs. 32).

Page 15: IDEA OF SVM ALGORITHM

• Transform the training pairs

(x1, y1), ..., (xℓ, yℓ)

into the pairs

(z1, y1), ..., (zℓ, yℓ)

by mapping vectors x ∈ X into z ∈ Z.

• Find in Z the hyperplane that minimizes the functional

R(w, b) = (w, w) + C ∑_{i=1}^{ℓ} ξi

subject to constraints

yi[(w, zi) + b] ≥ 1 − ξi, ξi ≥ 0, i = 1, ..., ℓ.

• Use the inner product in Z space in the form

(zi, zj) = K(xi, xj).
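The kernel step can be sketched with scikit-learn by handing the SVM a precomputed Gram matrix (zi, zj) = K(xi, xj) instead of mapping x into Z explicitly; the RBF kernel and the synthetic data here are illustrative choices.

```python
# Kernel trick sketch: train on the Gram matrix K instead of explicit z's.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
y = np.where(np.linalg.norm(X, axis=1) > 1.0, 1, -1)  # not linearly separable

K = rbf_kernel(X, X, gamma=0.5)            # (z_i, z_j) = K(x_i, x_j)
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
print("training accuracy:", clf.score(K, y))
```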

Page 16: DUAL SPACE SOLUTION FOR SVM

The decision function has the form

f(x, α) = sgn[ ∑_{i=1}^{ℓ} αi yi K(xi, x) + b ]   (2)

where αi ≥ 0, i = 1, ..., ℓ are the values which maximize the functional

R(α) = ∑_{i=1}^{ℓ} αi − (1/2) ∑_{i,j=1}^{ℓ} αi αj yi yj K(xi, xj)   (3)

subject to the constraints

∑_{i=1}^{ℓ} αi yi = 0, 0 ≤ αi ≤ C, i = 1, ..., ℓ.

Here the kernel K(·, ·) is used for two different purposes:

1. In (2), to define a set of expansion functions K(xi, x).
2. In (3), to define the similarity between vectors xi and xj.
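A sketch of the dual problem (3) with cvxpy (assumed available); a Cholesky factor expresses the quadratic term as a sum of squares so the solver sees an explicitly concave objective. The data, kernel, and C are illustrative.

```python
# Dual SVM sketch: maximize sum(alpha) - 1/2 * alpha' Q alpha,
# Q_ij = y_i y_j K(x_i, x_j), s.t. sum(alpha_i y_i) = 0, 0 <= alpha_i <= C.
import cvxpy as cp
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(2)
l, C = 30, 1.0
X = rng.normal(size=(l, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
Q = np.outer(y, y) * rbf_kernel(X, X, gamma=1.0)
L = np.linalg.cholesky(Q + 1e-8 * np.eye(l))   # Q is PSD; jitter for stability

alpha = cp.Variable(l)
objective = cp.Maximize(cp.sum(alpha) - 0.5 * cp.sum_squares(L.T @ alpha))
constraints = [alpha @ y == 0, alpha >= 0, alpha <= C]
cp.Problem(objective, constraints).solve()
print("support vectors:", int((alpha.value > 1e-6).sum()))
```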

Page 17: IDEA OF SVM+ ALGORITHM

• Transform the training triplets

(x1, x∗1, y1), ..., (xℓ, x∗ℓ, yℓ)

into the triplets

(z1, z∗1, y1), ..., (zℓ, z∗ℓ, yℓ)

by mapping vectors x ∈ X into vectors z ∈ Z and x∗ ∈ X∗ into z∗ ∈ Z∗.

• Define the slack function in the form

ξi = (w∗, z∗i) + b∗

and find in space Z the hyperplane that minimizes the functional

R(w, b, w∗, b∗) = (w, w) + γ(w∗, w∗) + C ∑_{i=1}^{ℓ} [(w∗, z∗i) + b∗]+,

subject to constraints

yi[(w, zi) + b] ≥ 1 − [(w∗, z∗i) + b∗], i = 1, ..., ℓ.

• Use the inner products in the Z and Z∗ spaces in the kernel form

(zi, zj) = K(xi, xj), (z∗i, z∗j) = K∗(x∗i, x∗j).
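For the linear case (identity feature maps), the SVM+ primal above is a convex program; here is a minimal cvxpy sketch with synthetic decision and privileged vectors, where all parameter values are illustrative assumptions.

```python
# SVM+ primal sketch (linear case): slacks are modeled by the correcting
# function xi_i = (w*, x*_i) + b*, with the positive part [.]_+ in the cost.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
l, C, gamma = 40, 1.0, 0.1
X = np.vstack([rng.normal(-1.5, 1, (20, 2)), rng.normal(1.5, 1, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
X_star = rng.normal(size=(l, 3))            # stand-in privileged vectors x*

w, b = cp.Variable(2), cp.Variable()
w_s, b_s = cp.Variable(3), cp.Variable()
xi = X_star @ w_s + b_s                     # correcting (slack) function
objective = cp.Minimize(cp.sum_squares(w) + gamma * cp.sum_squares(w_s)
                        + C * cp.sum(cp.pos(xi)))
constraints = [cp.multiply(y, X @ w + b) >= 1 - xi]
cp.Problem(objective, constraints).solve()
print("training errors:", int((y * (X @ w.value + b.value) < 0).sum()))
```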

Page 18: DUAL SPACE SOLUTION FOR SVM+

The decision function has the form

f(x, α) = sgn[ ∑_{i=1}^{ℓ} αi yi K(xi, x) + b ],

where αi, i = 1, ..., ℓ are the values that maximize the functional

R(α, β) = ∑_{i=1}^{ℓ} αi − (1/2) ∑_{i,j=1}^{ℓ} αi αj yi yj K(xi, xj) − (1/2γ) ∑_{i,j=1}^{ℓ} (αi − βi)(αj − βj) K∗(x∗i, x∗j)

subject to the constraints

∑_{i=1}^{ℓ} αi yi = 0, ∑_{i=1}^{ℓ} (αi − βi) = 0

and the constraints

αi ≥ 0, 0 ≤ βi ≤ C.
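A sketch of this dual with cvxpy, following the functional as reconstructed above (in particular, the 1/(2γ) factor in front of the K∗ term is my reading of the garbled slide); Cholesky factors keep the objective explicitly concave, and all data are synthetic.

```python
# SVM+ dual sketch: maximize
#   sum(a) - 1/2 sum a_i a_j y_i y_j K(x_i, x_j)
#          - 1/(2*gamma) sum (a_i - b_i)(a_j - b_j) K*(x*_i, x*_j)
# s.t. sum(a_i y_i) = 0, sum(a_i - b_i) = 0, a_i >= 0, 0 <= b_i <= C.
import cvxpy as cp
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(4)
l, C, gamma = 30, 1.0, 1.0
X, X_star = rng.normal(size=(l, 2)), rng.normal(size=(l, 3))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
Lk = np.linalg.cholesky(rbf_kernel(X, X) + 1e-8 * np.eye(l))
Ls = np.linalg.cholesky(rbf_kernel(X_star, X_star) + 1e-8 * np.eye(l))

a, beta = cp.Variable(l), cp.Variable(l)
obj = (cp.sum(a) - 0.5 * cp.sum_squares(Lk.T @ cp.multiply(y, a))
       - 1 / (2 * gamma) * cp.sum_squares(Ls.T @ (a - beta)))
cons = [a @ y == 0, cp.sum(a - beta) == 0, a >= 0, beta >= 0, beta <= C]
cp.Problem(cp.Maximize(obj), cons).solve()
print("max alpha:", float(a.value.max()))
```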

Page 19: ADVANCED TECHNICAL MODEL AS PRIVILEGED INFORMATION

Classification of proteins into families

The problem is: given amino-acid sequences of proteins, construct a rule to classify families of proteins. The decision space X is the space of amino-acid sequences. The privileged information space X∗ is the space of 3D structures of the proteins.

Page 20: CLASSIFICATION OF PROTEIN FAMILIES

Improvement of SVM+ over SVM (error rate decrease):

11 cases: no improvement
15 cases: improvement by 1-1.25 times
12 cases: improvement by 1.25-1.6 times
17 cases: improvement by 1.6-2.5 times
13 cases: improvement by 2.5-5 times
9 cases: improvement by 5+ times

Significant improvement in classification accuracy is achieved without additional data. Two factors contribute to the improvement in classification accuracy: privileged information during training (3D structures), and the new mathematical algorithm (SVM+).

Page 21: CLASSIFICATION OF PROTEINS: DETAILS

Error rates (%) for selected protein superfamily pairs:

Protein superfamily pair   SVM    SVM+   SVM (3D)
a.26.1-vs-c.68.1           7.3    7.3    0
a.26.1-vs-g.17.1           16.4   14.3   0
a.118.1-vs-b.82.1          19.2   6.4    0
a.118.1-vs-d.2.1           41.5   24.5   3.8
a.118.1-vs-d.14.1          13.1   13.1   2.2
a.118.1-vs-e.8.1           22.8   2.3    2.3
b.1.18-vs-b.55.1           14.6   13.5   0
b.18.1-vs-b.55.1           31.5   15.1   0
b.18.1-vs-c.55.1           36.2   36.2   0
b.18.1-vs-c.55.3           38.1   36.6   0
b.18.1-vs-d.92.1           25     11.8   0
b.29.1-vs-b.30.5           16.9   16.9   3.6
b.29.1-vs-b.55.1           10     5.5    0
b.29.1-vs-b.80.1           8.3    5.9    0
b.29.1-vs-b.121.4          35.9   16.8   5.3
b.29.1-vs-c.47.1           10.2   3.2    0
b.30.5-vs-b.55.1           25.5   14.6   0
b.30.5-vs-b.80.1           43.3   6.7    0
b.55.1-vs-b.82.1           11.8   10.3   0
b.55.1-vs-d.14.1           20.9   19.4   0
b.55.1-vs-d.15.1           17.7   12.7   0
b.80.1-vs-b.82.1           4.7    4.7    0
b.82.1-vs-b.121.4          7.9    3.4    0
b.121.4-vs-d.14.1          29.5   23.9   0
b.121.4-vs-d.92.1          15.3   9.2    0
c.36.1-vs-c.68.1           8.9    0      0
c.36.1-vs-e.8.1            12.8   2.2    0
c.47.1-vs-c.69.1           1.9    0.6    0
c.52.1-vs-b.80.1           11.8   5.9    0
c.55.1-vs-c.55.3           45.1   28.2   22.5

For some pairs, 3D structure is essential for classification, and SVM+ does not improve the classification of SVM. For other pairs, SVM+ provides significant improvement over SVM (several times).

Page 22: FUTURE EVENTS AS PRIVILEGED INFORMATION

Time series prediction

Given pairs

(x1, y1), ..., (xℓ, yℓ),

find the rule

yt = f(xt),

where

xt = (x(t), ..., x(t − m)).

For regression model of time series:

yt = x(t + ∆).

For classification model of time series:

yt = 1 if x(t + ∆) > x(t), and yt = −1 if x(t + ∆) ≤ x(t).
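A small sketch (with an invented sine series and arbitrary m and ∆) of how the classification pairs are assembled:

```python
# Build (x_t, y_t) pairs for the classification model:
# y_t = +1 if x(t + Delta) > x(t), else -1.
import numpy as np

x = np.sin(np.linspace(0, 20, 500))        # stand-in time series
m, delta = 3, 5

pairs = []
for t in range(m, len(x) - delta):
    x_t = x[t - m:t + 1][::-1]             # (x(t), x(t-1), ..., x(t-m))
    y_t = 1 if x[t + delta] > x[t] else -1
    pairs.append((x_t, y_t))
print(len(pairs), "pairs; first label:", pairs[0][1])
```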

Page 23: MACKEY-GLASS TIME SERIES PREDICTION

Let the data be generated by the Mackey-Glass equation:

dx(t)/dt = −a x(t) + b x(t − τ) / (1 + x^10(t − τ)),

where a, b, and τ (the delay) are parameters. The training triplets (x1, x∗1, y1), ..., (xℓ, x∗ℓ, yℓ) are defined as follows:

xt = (x(t), x(t − 1), x(t − 2), x(t − 3)),

x∗t = (x(t + ∆ − 1), x(t + ∆ − 2), x(t + ∆ + 1), x(t + ∆ + 2)).
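A sketch that simulates the Mackey-Glass equation with a crude Euler step and assembles the triplets of this slide; the parameter values and step size are illustrative assumptions, not taken from the talk.

```python
# Euler simulation of dx/dt = -a x(t) + b x(t - tau) / (1 + x(t - tau)^10),
# then construction of (x_t, x*_t, y_t) triplets with y_t = x(t + Delta).
import numpy as np

a, b, tau, steps = 0.1, 0.2, 17, 600       # assumed parameters, dt = 1
x = np.full(steps + tau, 1.2)              # constant history as initial state
for t in range(tau, steps + tau - 1):
    x[t + 1] = x[t] + (-a * x[t] + b * x[t - tau] / (1 + x[t - tau] ** 10))
x = x[tau:]                                # drop the artificial history

delta = 8
triplets = []
for t in range(3, len(x) - delta - 2):
    x_t = x[[t, t - 1, t - 2, t - 3]]
    x_star = x[[t + delta - 1, t + delta - 2, t + delta + 1, t + delta + 2]]
    triplets.append((x_t, x_star, x[t + delta]))
print(len(triplets), "triplets")
```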

Page 24: INTERPOLATION AND EXTRAPOLATION

Page 25: ILLUSTRATION

Page 26: HOLISTIC DESCRIPTION AS PRIVILEGED INFORMATION

Classification of digit 5 and digit 8 from the NIST database.

Given triplets (xi, x∗i, yi), i = 1, ..., ℓ, find the classification rule y = f(x), where x∗i is the holistic description of the digit xi.


Page 27: YING-YANG STYLE DESCRIPTIONS

Straightforward, very active, hard, very masculine with rather clear intention. A sportsman or a warrior. Aggressive and ruthless, eager to dominate everybody, clever and accurate, more emotional than rational, very resolute. No compromise accepted. Strong individuality, egoistic. Honest. Hot, able to give much pain. Hard. Belongs to surface. Individual, no desire to be sociable. First moving, second thinking. Will never give a second thought to whatever. Upward-seeking. 40 years old.

A young man is energetic and seriously absorbed in his career. He is not absolutely precise and accurate. He seems a bit aggressive, mostly due to a lack of sense of humor. He is too busy with himself to be open to the world. He has a simple mind and evident plans connected with everyday needs. He feels good in familiar surroundings. Solid soil and earth are his native space. He is upward-seeking but does not understand air.

Page 28: CODES FOR HOLISTIC DESCRIPTION

1. Active (0-5), 2. Passive (0-5), 3. Feminine (0-5), 4. Masculine (0-5), 5. Hard (0-5), 6. Soft (0-5), 7. Occupancy (0-3), 8. Strength (0-3), 9. Hot (0-3), 10. Cold (0-3), 11. Aggressive (0-3), 12. Controlling (0-3), 13. Mysterious (0-3), 14. Clear (0-3), 15. Emotional (0-3), 16. Rational (0-3), 17. Collective (0-3), 18. Individual (0-3), 19. Serious (0-3), 20. Light-minded (0-3), 21. Hidden (0-3), 22. Evident (0-3), 23. Light (0-3), 24. Dark (0-3), 25. Upward-seeking (0-3), 26. Downward-seeking (0-3), 27. Water flowing (0-3), 28. Solid earth (0-3), 29. Interior (0-2), 30. Surface (0-2), 31. Air (0-3).

http://ml.nec-labs.com/download/data/svm+/mnist.priviledged
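A hypothetical sketch of how one such poetic description could be turned into the numeric code vector x∗; the scale names and caps follow the slide, while the scores themselves are invented for illustration.

```python
# Encode a holistic description on the slide's 31 scales; a score of 0 means
# the quality is absent. The scores below are invented, not from the database.
SCALES = [("Active", 5), ("Passive", 5), ("Feminine", 5), ("Masculine", 5),
          ("Hard", 5), ("Soft", 5), ("Occupancy", 3), ("Strength", 3),
          ("Hot", 3), ("Cold", 3), ("Aggressive", 3), ("Controlling", 3),
          ("Mysterious", 3), ("Clear", 3), ("Emotional", 3), ("Rational", 3),
          ("Collective", 3), ("Individual", 3), ("Serious", 3),
          ("Light-minded", 3), ("Hidden", 3), ("Evident", 3), ("Light", 3),
          ("Dark", 3), ("Upward-seeking", 3), ("Downward-seeking", 3),
          ("Water flowing", 3), ("Solid earth", 3), ("Interior", 2),
          ("Surface", 2), ("Air", 3)]

scores = {"Active": 5, "Masculine": 5, "Hard": 5, "Hot": 3,
          "Aggressive": 3, "Individual": 3, "Upward-seeking": 3, "Surface": 2}

x_star = [min(scores.get(name, 0), cap) for name, cap in SCALES]
print(len(x_star), "components:", x_star)
```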

Page 29: RESULTS

Page 30: HOLISTIC SPACE VS. ADVANCED TECHNICAL SPACE

Page 31: IMAGE CLASSIFICATION I

Dog/cat classification, PASCAL 2006 (Geng, Qi, Tian, Yu). Images were resized to grayscale 80 × 100 pixels (50 + 50 training set).

Privileged information. A holistic description for some cat is as follows: The ear is small in proportion to its face; the mouth is narrow and non-prominent; the nose is small and its color is light; the head is short and rounded; one can hardly see its lip on the face; the color of the whole body is very bright and rich; the whole body is seen; there are several cats in the picture; the image is clear. A holistic description for some dog is as follows: The ear is large in proportion to its face; the mouth is wide and prominent; the nose is large and black; the face is very long; the lip is also long, just like a zipper on the face; the color of the whole body is very dark and lacks diversity; only part of the body is seen; there is only one dog in the picture; the image is clear.

Page 32: IMAGE CLASSIFICATION II

Feature space. Holistic descriptions are translated into feature vectors with the following components: the length of the ear in proportion to its face (0-5); the width of the mouth (0-5); the prominence of the mouth (0-6); the size of the nose (0-6); the color of the nose (0-4); the length of the head (0-5); the appearance of the head (0-4); the length of the lip (0-6); whether the whole body is seen (1-2); the number of animals (0-6); the clearness of the image (0-5).

The results: [figure].

Page 33: DUAL SPACE SOLUTION FOR SVM+

(This slide repeats the dual space solution for SVM+ from Page 18.)

Page 34: TWO EXAMPLES OF POSSIBLE PRIVILEGED INFORMATION

• Semi-scientific models (say, Elliott waves, or informal human-machine inference) as privileged information to improve formal models.

• An alternative theory to improve the theory of interest (say, use Eastern medicine as privileged information to improve rules of Western medicine).

Page 35: HOLISTIC (YING-YANG) DESCRIPTIONS OF PULSE

• Shallow pulse (Yang). The shallow pulse flows on the surface. You press it and it seems full; you press stronger and it becomes weak. It is like a slight breeze whirling up a bird’s tuft, like wind swaying leaves, like water which sways a chip of wood when the wind is blowing.

• Deep pulse (Ying). The deep pulse is similar to a stone wrapped in cotton wool: it is soft from the outside and hard inside. It lies at the bottom like a stone thrown into the water.

• Free pulse (Ying). Such a pulse is irregular. It reminds one of a pearl rolling in a plate. It flows like drop after drop, sliding like pearl after pearl.

• String pulse (Ying in Yang). This pulse makes the impression of a tight violin string. Its beating is direct and long like a string.

• Skin pulse (Ying). Its beating is elastic and resilient like a drum. The pulse is shallow and reminds one of touching drum skin.

• Inconspicuous pulse. The beating is exceptionally soft and gentle as well as shallow and thin. It reminds one of a silk cloth flowing in the water.

Page 36: RELATION TO DIFFERENT BRANCHES OF SCIENCE

• Statistics: Non-symmetric models in predictive statistics (advanced and future events as privileged information in regression and time series analysis).

• Cognitive science: The roles of the right and left parts of the brain (the existence and unity of two different information spaces: analytic and holistic).

• Psychology: Emotional logics in inference problems.

• Philosophy of Science: The difference between the analysis of a Simple World and a Complex World (unity of analytic and holistic models of complex worlds).

Page 37: LIMITS OF THE CLASSICAL MODELS OF SCIENCE

• WHEN THE SOLUTION IS SIMPLE, GOD IS ANSWERING.

• WHEN THE NUMBER OF FACTORS COMING INTO PLAY IN A PHENOMENOLOGICAL COMPLEX IS TOO LARGE, SCIENTIFIC METHODS IN MOST CASES FAIL.

A. Einstein

Page 38: THE BOTTOM LINE

• Machine Learning science is not only about computers. It is also a science about humans: the unity of their logics, emotions, and cultures.

• Machine Learning is the discipline that can produce and analyze facts that lead to an understanding of a model of science for the Complex World which is not based entirely on logic (let us call it the Soft Science).

Page 39: LITERATURE

1. Vladimir Vapnik. Estimation of Dependences Based on Empirical Data, 2nd ed.: Empirical Inference Science. Springer, 2006.

2. Vapnik V., Vashist A., Pavlovitch N. Learning using hidden information: Master-class learning. In Proceedings of the NATO Workshop on Mining Massive Data Sets for Security, pp. 3-14. IOS Press, 2008.

3. Vapnik V., Vashist A., Pavlovitch N. Learning using hidden information (learning with teacher). In Proceedings of IJCNN, pp. 3188-3195, 2009.

4. Vladimir Vapnik, Akshay Vashist. A new learning paradigm: Learning using privileged information. Neural Networks 22(5-6), pp. 544-557, 2009.

5. D. Pechyony, R. Izmailov, A. Vashist, V. Vapnik. SMO-style algorithms for learning using privileged information. In Proceedings of the 2010 International Conference on Data Mining (DMIN), 2010.

6. Dmitri Pechyony, Vladimir Vapnik. On the theory of learning with privileged information. NIPS, 2010.

Digit database (with Poetic and Ying-Yang descriptions by N. Pavlovitch):
http://ml.nec-labs.com/download/data/svm+/mnist.priviledged/