
Transcript of: Review of Probability Theory (cs229.stanford.edu/summer2020/cs229-prob-slide.pdf)

Review of Probability Theory

Zahra Koochak and Jeremy Irvin

Elements of Probability

Sample Space Ω
  e.g. Ω = {HH, HT, TH, TT} for two coin tosses

Event A ⊆ Ω
  e.g. A = {HH, HT}, or A = Ω

Event Space F

Probability Measure P : F → R
  P(A) ≥ 0 for all A ∈ F
  P(Ω) = 1
  If A1, A2, ... is a disjoint set of events (Ai ∩ Aj = ∅ when i ≠ j), then
    P(⋃i Ai) = ∑i P(Ai)
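
A minimal sketch (not from the slides), assuming a fair two-coin-toss experiment: represent the probability measure as a dictionary over Ω and check the axioms above, including additivity for disjoint events.

    from fractions import Fraction

    # Uniform probability measure on the two-coin-toss sample space.
    omega = {"HH", "HT", "TH", "TT"}
    P = {outcome: Fraction(1, 4) for outcome in omega}

    def prob(event):
        """P(A) = sum of the probabilities of the outcomes in A."""
        return sum(P[w] for w in event)

    A1 = {"HH"}            # disjoint events
    A2 = {"HT", "TH"}

    assert prob(omega) == 1                               # P(Ω) = 1
    assert prob(A1 | A2) == prob(A1) + prob(A2)           # additivity for disjoint events
    print(prob(A1 | A2))                                  # 3/4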

Conditional Probability and Independence

Let B be any event such that P(B) ≠ 0.

  P(A|B) := P(A ∩ B) / P(B)

A ⊥ B if and only if P(A ∩ B) = P(A)P(B)

Equivalently, A ⊥ B if and only if

  P(A|B) = P(A ∩ B) / P(B) = P(A)P(B) / P(B) = P(A)
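
A minimal sketch (not from the slides), reusing the two-coin-toss space: compute P(A|B) from the definition and check the product condition for independence, where A = "first toss is heads" and B = "second toss is heads" are illustrative events.

    from fractions import Fraction

    P = {w: Fraction(1, 4) for w in ("HH", "HT", "TH", "TT")}

    def prob(event):
        return sum(P[w] for w in event)

    A = {w for w in P if w[0] == "H"}    # first toss is heads
    B = {w for w in P if w[1] == "H"}    # second toss is heads

    print(prob(A & B) / prob(B))                   # P(A|B) = 1/2
    print(prob(A & B) == prob(A) * prob(B))        # True, so A ⊥ B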

Random Variables (RV)

ω0 = HHHTHTTHTT

A RV is a function X : Ω → R
  # of heads: X(ω0) = 5
  # of tosses until the first tails: X(ω0) = 4

Val(X) := X(Ω)
  e.g. for the # of heads in 10 tosses, Val(X) = {0, 1, ..., 10}
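
A minimal sketch (not from the slides): random variables written as plain Python functions from an outcome to a number, evaluated on the outcome ω0 from the slide.

    omega0 = "HHHTHTTHTT"

    def num_heads(w):                    # one RV X : Ω → R
        return w.count("H")

    def tosses_until_tails(w):           # another RV on the same Ω
        return w.index("T") + 1

    print(num_heads(omega0))             # 5
    print(tosses_until_tails(omega0))    # 4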

Cumulative Distribution Function (CDF)

FX : R → [0, 1]

  FX(x) = P(X ≤ x) := P({ω | X(ω) ≤ x})
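
A minimal sketch (not from the slides), assuming X = "# of heads in two fair tosses": compute FX(x) directly from the definition P({ω | X(ω) ≤ x}).

    from fractions import Fraction
    from itertools import product

    outcomes = ["".join(w) for w in product("HT", repeat=2)]
    P = {w: Fraction(1, 4) for w in outcomes}
    X = lambda w: w.count("H")

    def F(x):
        # F_X(x) = P({ω | X(ω) ≤ x})
        return sum(P[w] for w in outcomes if X(w) <= x)

    print([F(x) for x in (-1, 0, 1, 2)])   # [0, 1/4, 3/4, 1]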

Discrete vs. Continuous RV

Discrete RV: Val(X) countable
  P(X = k) := P({ω | X(ω) = k})
  Probability Mass Function (PMF) pX : Val(X) → [0, 1]
    pX(x) := P(X = x)
    ∑_{x ∈ Val(X)} pX(x) = 1

Continuous RV: Val(X) uncountable
  P(a ≤ X ≤ b) := P({ω | a ≤ X(ω) ≤ b})
  Probability Density Function (PDF) fX : R → R
    fX(x) := d/dx FX(x)
    fX(x) ≠ P(X = x)
    ∫_{−∞}^{∞} fX(x) dx = 1, where fX(x) dx can be read as P(x ≤ X ≤ x + dx)
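
A minimal sketch (not from the slides): a PMF that sums to 1 (# of heads in two fair tosses) and a PDF that integrates to 1 (the standard normal density, integrated numerically with a simple Riemann sum); both distributions are illustrative choices.

    import numpy as np

    pmf = {0: 0.25, 1: 0.5, 2: 0.25}        # discrete: Σ p_X(x) = 1
    print(sum(pmf.values()))                 # 1.0

    f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # N(0, 1) density
    xs = np.linspace(-10, 10, 200_001)
    dx = xs[1] - xs[0]
    print(f(xs).sum() * dx)                  # ≈ 1.0, even though f(x) ≠ P(X = x)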

Expected Value and Variance

g : R → R

Expected Value

  Let X be a discrete RV with PMF pX.
    E[g(X)] := ∑_{x ∈ Val(X)} g(x) pX(x)

  Let X be a continuous RV with PDF fX.
    E[g(X)] := ∫_{−∞}^{∞} g(x) fX(x) dx

Variance

  Var(X) := E[(X − E[X])²] = E[X²] − E[X]²
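
A minimal sketch (not from the slides), for the discrete case: compute E[g(X)] and Var(X) from the definitions, using the "# of heads in two fair tosses" PMF as the example distribution.

    from fractions import Fraction

    pmf = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

    def E(g):
        # E[g(X)] = Σ g(x) p_X(x)
        return sum(g(x) * p for x, p in pmf.items())

    mean = E(lambda x: x)                                 # 1
    var = E(lambda x: (x - mean) ** 2)                    # 1/2
    print(mean, var, E(lambda x: x ** 2) - mean ** 2)     # the two variance forms agree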

Example Distributions

Distribution        PDF or PMF                                          Mean         Variance
Bernoulli(p)        p if x = 1; 1 − p if x = 0                          p            p(1 − p)
Binomial(n, p)      (n choose k) p^k (1 − p)^(n−k), k = 0, 1, ..., n    np           np(1 − p)
Geometric(p)        p(1 − p)^(k−1), k = 1, 2, ...                       1/p          (1 − p)/p²
Poisson(λ)          e^(−λ) λ^k / k!, k = 0, 1, ...                      λ            λ
Uniform(a, b)       1/(b − a) for x ∈ (a, b)                            (a + b)/2    (b − a)²/12
Gaussian(µ, σ²)     (1/(σ√(2π))) e^(−(x−µ)²/(2σ²)) for x ∈ (−∞, ∞)      µ            σ²
Exponential(λ)      λ e^(−λx) for x ≥ 0, λ > 0                          1/λ          1/λ²
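
A minimal sketch (not from the slides): sample a few of the distributions in the table with NumPy's generator and compare the empirical mean and variance against the closed forms; the parameter values are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, lam = 10, 0.3, 2.0

    cases = {
        "Binomial":    (rng.binomial(n, p, 100_000), n * p, n * p * (1 - p)),
        "Geometric":   (rng.geometric(p, 100_000), 1 / p, (1 - p) / p**2),
        "Exponential": (rng.exponential(1 / lam, 100_000), 1 / lam, 1 / lam**2),
    }
    for name, (x, mean, var) in cases.items():
        print(name, x.mean(), mean, x.var(), var)   # empirical vs. table values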

Two Random Variables

Bivariate CDF
  FXY(x, y) = P(X ≤ x, Y ≤ y)

Bivariate PMF
  pXY(x, y) = P(X = x, Y = y)

Marginal PMF
  pX(x) = ∑_y pXY(x, y)

Bivariate PDF
  fXY(x, y) = ∂²FXY(x, y) / ∂x∂y

Marginal PDF
  fX(x) = ∫_{−∞}^{∞} fXY(x, y) dy
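
A minimal sketch (not from the slides): compute a marginal PMF pX(x) = ∑_y pXY(x, y) from a small, made-up joint PMF over {0, 1} × {0, 1}.

    from fractions import Fraction

    p_xy = {(0, 0): Fraction(1, 8), (0, 1): Fraction(3, 8),
            (1, 0): Fraction(3, 8), (1, 1): Fraction(1, 8)}

    def p_x(x):
        # marginalize out y
        return sum(p for (xx, _), p in p_xy.items() if xx == x)

    print(p_x(0), p_x(1))   # 1/2 1/2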

Bayes' Theorem

- Given the conditional probability of an event, P(x|y)
- Want to find the "reverse" conditional probability, P(y|x)

  P(y|x) = P(x|y)P(y) / P(x)

  where P(x) = ∑_{y′ ∈ Val(Y)} P(x|y′)P(y′)

If X and Y are continuous:

  f(y|x) = f(x|y)f(y) / f(x)

  where f(x) = ∫_{y′ ∈ Val(Y)} f(x|y′)f(y′) dy′
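
A minimal sketch (not from the slides) of the discrete case: compute the posterior P(y|x) from a prior and a likelihood, with made-up numbers; the labels "y1"/"y2" and the probabilities are illustrative only.

    from fractions import Fraction

    prior = {"y1": Fraction(1, 2), "y2": Fraction(1, 2)}        # P(y)
    likelihood = {"y1": Fraction(2, 3), "y2": Fraction(1, 3)}   # P(x|y)

    evidence = sum(likelihood[y] * prior[y] for y in prior)     # P(x) by total probability
    posterior = {y: likelihood[y] * prior[y] / evidence for y in prior}
    print(posterior)   # {'y1': Fraction(2, 3), 'y2': Fraction(1, 3)}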

Example for Bayes' Rule

- You randomly choose a treasure chest to open, and then randomly choose a coin from that treasure chest. If the coin you choose is gold, then what is the probability that you chose chest A?

  a) 1/3    b) 2/3    c) 1    d) None
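
The chest contents are shown only in the slide's figure, so the setup below is an assumption, not part of the transcript: chest A holds two gold coins and chest B holds one gold and one silver. Under that assumed setup, Bayes' rule gives the answer as follows.

    from fractions import Fraction

    # ASSUMED contents (not given in the transcript): A = {gold, gold}, B = {gold, silver}.
    prior = {"A": Fraction(1, 2), "B": Fraction(1, 2)}     # chest chosen uniformly
    p_gold = {"A": Fraction(2, 2), "B": Fraction(1, 2)}    # P(gold | chest)

    evidence = sum(p_gold[c] * prior[c] for c in prior)    # P(gold)
    print(p_gold["A"] * prior["A"] / evidence)             # 2/3 under this assumed setup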

Independence

Two random variables X and Y are independent if:

- pXY(x, y) = pX(x)pY(y)
- pY|X(y|x) = pY(y)

For continuous random variables, the same conditions hold with the PMF pXY(x, y) replaced by the PDF fXY(x, y).

Example for independent random variables

- Spin a spinner numbered 1 to 7, and toss a coin. What is the probability of getting an odd number on the spinner and a tail on the coin?

  pXY(x, y) = pX(x)pY(y) = (1/2) × (4/7) = 2/7
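
A minimal sketch (not from the slides), assuming a fair spinner and a fair coin: verify the 2/7 answer by enumerating the product sample space instead of using independence.

    from fractions import Fraction
    from itertools import product

    spinner, coin = range(1, 8), ("H", "T")
    outcomes = list(product(spinner, coin))
    p = Fraction(1, len(outcomes))                       # uniform joint probability

    event = [(s, c) for s, c in outcomes if s % 2 == 1 and c == "T"]
    print(len(event) * p)                                # 2/7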

Expectation

- X, Y: two continuous random variables
- g : R² → R: a function of X and Y

  E[g(X, Y)] = ∫_{x ∈ Val(X)} ∫_{y ∈ Val(Y)} g(x, y) fXY(x, y) dy dx

Example

  g(x, y) = 3x,  fXY(x, y) = 4xy for 0 < x < 1, 0 < y < 1

  E[g(X, Y)] = ∫_0^1 ∫_0^1 3x · 4xy dx dy = 2
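
A minimal sketch (not from the slides): evaluate the slide's double integral ∫∫ 3x · 4xy dx dy over the unit square with a midpoint Riemann sum, which should land close to the exact value 2.

    import numpy as np

    n = 2000
    x = (np.arange(n) + 0.5) / n          # midpoints in (0, 1)
    y = (np.arange(n) + 0.5) / n
    X, Y = np.meshgrid(x, y)

    g = 3 * X                              # g(x, y) = 3x
    f = 4 * X * Y                          # f_XY(x, y) = 4xy on (0,1) x (0,1)
    print((g * f).mean())                  # ≈ 2.0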

Covariance of two random variables X and Y

  Cov[X, Y] = E[(X − E[X])(Y − E[Y])]
            = E[XY] − E[X]E[Y]

If X and Y are independent, then:

  E[XY] = E[X]E[Y]  →  Cov[X, Y] = 0

In general:

  Var[X + Y] = E[(X + Y)²] − (E[X + Y])²
  Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y]
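
A minimal sketch (not from the slides): check Cov[X, Y] = E[XY] − E[X]E[Y] and the Var[X + Y] identity on simulated, deliberately dependent samples; the linear dependence y = 0.5x + noise is an illustrative choice.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=200_000)
    y = 0.5 * x + rng.normal(size=200_000)        # correlated with x

    cov = (x * y).mean() - x.mean() * y.mean()    # E[XY] − E[X]E[Y]
    print(cov, np.cov(x, y, bias=True)[0, 1])                 # both ≈ 0.5
    print((x + y).var(), x.var() + y.var() + 2 * cov)         # ≈ equal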

Multivariate Gaussian (Normal) distribution

x ∈ Rⁿ. Models p(x1), p(x2), ..., p(xn) at the same time.
Parameters: µ ∈ Rⁿ, Σ ∈ Rⁿˣⁿ (covariance matrix)

  p(x; µ, Σ) = (1 / ((2π)^(n/2) |Σ|^(1/2))) exp(−(1/2)(x − µ)ᵀ Σ⁻¹ (x − µ))
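
A minimal sketch (not from the slides): evaluate the density formula above directly with NumPy for a 2-dimensional example; the particular µ and Σ are illustrative.

    import numpy as np

    def gaussian_pdf(x, mu, sigma):
        # p(x; µ, Σ) = exp(-(1/2)(x-µ)ᵀ Σ⁻¹ (x-µ)) / ((2π)^(n/2) |Σ|^(1/2))
        n = len(mu)
        diff = x - mu
        norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(sigma))
        return np.exp(-0.5 * diff @ np.linalg.solve(sigma, diff)) / norm

    mu = np.zeros(2)
    sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
    print(gaussian_pdf(np.zeros(2), mu, sigma))   # density at the mean ≈ 0.1838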

Multivariate Gaussian (Normal) examples
[figures]

Conditional Probability and Expectation

Remember: let B be any event such that P(B) ≠ 0.

  P(A|B) := P(A ∩ B) / P(B)

If X, Y are RVs on the same probability space, we have

  P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y)

  E[X | Y = y] = ∑_x x P(X = x, Y = y) / P(Y = y)

E[X|Y] is actually a random variable:

  E[X|Y](y) = E[X|Y = y] is a function of Y
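
A minimal sketch (not from the slides): compute E[X | Y = y] from the formula above for a small made-up joint PMF, showing that the result is a function of y.

    from fractions import Fraction

    p_xy = {(0, 0): Fraction(1, 8), (1, 0): Fraction(3, 8),
            (0, 1): Fraction(2, 8), (1, 1): Fraction(2, 8)}

    def cond_exp(y):
        # E[X | Y = y] = Σ_x x P(X = x, Y = y) / P(Y = y)
        p_y = sum(p for (_, yy), p in p_xy.items() if yy == y)
        return sum(x * p for (x, yy), p in p_xy.items() if yy == y) / p_y

    print(cond_exp(0), cond_exp(1))   # 3/4 1/2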

Law of Total Expectation

Let X, Y be RVs on the same probability space. Then

  E[X] = E[E[X|Y]]

A brief proof for X, Y discrete and finite:

  E[E[X|Y]] = E[∑_x x P(X = x | Y)]
            = ∑_y (∑_x x P(X = x | Y = y)) P(Y = y)
            = ∑_y ∑_x x P(X = x, Y = y)
            = ∑_x x (∑_y P(X = x, Y = y))
            = ∑_x x P(X = x)
            = E[X]
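
A minimal sketch (not from the slides): verify E[X] = E[E[X|Y]] numerically on the same small joint PMF used above.

    from fractions import Fraction

    p_xy = {(0, 0): Fraction(1, 8), (1, 0): Fraction(3, 8),
            (0, 1): Fraction(2, 8), (1, 1): Fraction(2, 8)}

    E_X = sum(x * p for (x, _), p in p_xy.items())

    E_of_cond = Fraction(0)
    for y in {yy for (_, yy) in p_xy}:
        p_y = sum(p for (_, yy), p in p_xy.items() if yy == y)
        e_x_given_y = sum(x * p for (x, yy), p in p_xy.items() if yy == y) / p_y
        E_of_cond += e_x_given_y * p_y              # outer expectation over Y

    print(E_X, E_of_cond)   # 5/8 5/8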

More Conditioned Bayes Rule

  P(a|b, c) = P(b|a, c)P(a|c) / P(b|c)

It is actually the same as the Bayes rule,

  P(a|b) = P(b|a)P(a) / P(b)

with a random variable c that all the probabilities are conditioned on.

A proof:

  P(b|a, c)P(a|c) / P(b|c) = P(b, a, c)P(a|c) / (P(a, c)P(b|c))
                           = P(b, a, c)P(a, c) / (P(a, c)P(b|c)P(c))
                           = P(b, a, c) / (P(b|c)P(c))
                           = P(b, a, c) / P(b, c)
                           = P(a|b, c)
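
A minimal sketch (not from the slides): check the conditioned Bayes rule numerically on a small made-up joint PMF over three binary variables (a, b, c), with exact fractions so the two sides match exactly.

    from fractions import Fraction
    from itertools import product

    triples = list(product((0, 1), repeat=3))            # all (a, b, c) values
    weights = [1, 2, 3, 4, 5, 6, 7, 8]                    # arbitrary positive weights
    p = {abc: Fraction(w, sum(weights)) for abc, w in zip(triples, weights)}

    def P(**fixed):
        # probability that the named variables take the given values
        keys = ("a", "b", "c")
        return sum(q for abc, q in p.items()
                   if all(abc[keys.index(k)] == v for k, v in fixed.items()))

    a, b, c = 1, 0, 1
    lhs = P(a=a, b=b, c=c) / P(b=b, c=c)                                    # P(a|b,c)
    rhs = (P(a=a, b=b, c=c) / P(a=a, c=c)) * (P(a=a, c=c) / P(c=c)) \
          / (P(b=b, c=c) / P(c=c))                                          # P(b|a,c)P(a|c)/P(b|c)
    print(lhs == rhs)   # True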