© Stanley Chan 2020. All Rights Reserved.
ECE 302: Lecture 4.7 Gaussian Random Variable
Prof. Stanley Chan
School of Electrical and Computer Engineering, Purdue University

Outline
Uniform, Exponential, Gaussian
Today’s lecture:
• Definition of Gaussian
• Mean and variance
• Skewness and kurtosis
Definition

Let X be a Gaussian random variable. The PDF of X is

f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), (1)

where (µ, σ²) are the parameters of the distribution. We write

X ∼ Gaussian(µ, σ²) or X ∼ N(µ, σ²)

to say that X is drawn from a Gaussian distribution with parameters (µ, σ²).
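As a quick sanity check of (1), here is a minimal Python sketch of the Gaussian PDF; the function name `gaussian_pdf` is our own, not from the slides.

```python
import math

def gaussian_pdf(x, mu=0.0, sigma2=1.0):
    """Evaluate the Gaussian PDF f_X(x) with mean mu and variance sigma2."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

# The peak of N(0, 1) is at x = 0, with height 1/sqrt(2*pi) ≈ 0.3989
print(round(gaussian_pdf(0.0), 4))  # → 0.3989
```

Note that the PDF is symmetric about µ, which is why the mean derivation below works out so cleanly.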
Interpreting the mean and variance
Figure: Gaussian PDFs with different µ (µ = −3, −0.3, 0, 1.2, 4) and different σ (σ = 0.8, 1, 2, 3, 4).
Proving the mean

If X ∼ N(µ, σ²), then

E[X] = µ, and Var[X] = σ². (2)

Proof (mean): E[X] = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} x \, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \, dx. Substituting y = x − µ splits this into \int y e^{-y^2/(2\sigma^2)} dy, which vanishes by odd symmetry, plus µ times the integral of the PDF, which is 1. Hence E[X] = µ.
Proving the variance

E[X] = µ, and Var[X] = σ². (3)

Proof (variance): Var[X] = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} (x-\mu)^2 e^{-\frac{(x-\mu)^2}{2\sigma^2}} \, dx. Substituting y = (x − µ)/σ gives \frac{\sigma^2}{\sqrt{2\pi}} \int_{-\infty}^{\infty} y^2 e^{-y^2/2} \, dy = \sigma^2, since the last integral equals \sqrt{2\pi} by integration by parts.
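The two derivations above can be checked numerically. The sketch below approximates E[X] and Var[X] by a midpoint Riemann sum over the PDF; the helper names `moment` and `gaussian_pdf` are ours, not from the slides.

```python
import math

def gaussian_pdf(x, mu, sigma2):
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

def moment(g, mu, sigma2, lo=-50.0, hi=50.0, n=200000):
    # Midpoint-rule approximation of E[g(X)] for X ~ N(mu, sigma2)
    dx = (hi - lo) / n
    return sum(
        g(lo + (k + 0.5) * dx) * gaussian_pdf(lo + (k + 0.5) * dx, mu, sigma2)
        for k in range(n)
    ) * dx

mu, sigma2 = 1.5, 4.0
mean = moment(lambda x: x, mu, sigma2)
var = moment(lambda x: (x - mean) ** 2, mu, sigma2)
print(round(mean, 4), round(var, 4))  # → 1.5 4.0
```

The integration window [−50, 50] is wide enough here (25 standard deviations) that the truncated tails are negligible.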
Standard Gaussian PDF

Definition

A standard Gaussian (or standard Normal) random variable X has the PDF

f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}. (4)

That is, X ∼ N(0, 1) is a Gaussian with µ = 0 and σ² = 1.

Figure: Definition of the CDF of the standard Gaussian Φ(x).
Standard Gaussian CDF

Definition

The CDF of the standard Gaussian is defined as the Φ(·) function

Φ(x) \stackrel{\text{def}}{=} F_X(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{t^2}{2}} \, dt. (5)

The standard Gaussian’s CDF is related to a so-called error function, defined as

\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} \, dt,

via

Φ(x) = \frac{1}{2}\left(1 + \mathrm{erf}\left(\frac{x}{\sqrt{2}}\right)\right).
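The Φ–erf relation above is exactly how Φ can be computed in practice, since Python’s standard library ships `math.erf`. A minimal sketch:

```python
import math

def Phi(x):
    # Standard Gaussian CDF via the error function: Phi(x) = (1 + erf(x/sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(round(Phi(0.0), 4))  # → 0.5 (half the mass lies below the mean)
print(round(Phi(1.0), 4))  # → 0.8413
```

The value Φ(1) ≈ 0.8413 is the familiar entry from standard normal tables.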
CDF of arbitrary Gaussian

If X ∼ N(µ, σ²), then

F_X(x) = Φ\left(\frac{x-\mu}{\sigma}\right).

To see this, write F_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{x} e^{-\frac{(t-\mu)^2}{2\sigma^2}} \, dt. Substituting y = \frac{t-\mu}{\sigma}, and using the definition of the standard Gaussian, we have

\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\frac{x-\mu}{\sigma}} e^{-\frac{y^2}{2}} \, dy = Φ\left(\frac{x-\mu}{\sigma}\right).
Other results

Let X ∼ N(µ, σ²). Then the following results hold:

• Φ(y) = 1 − Φ(−y).
• P[X ≤ b] = Φ((b − µ)/σ).
• P[a < X ≤ b] = Φ((b − µ)/σ) − Φ((a − µ)/σ).
• P[|X| ≥ a] = 1 − Φ((a − µ)/σ) + Φ((−a − µ)/σ).

To see the third result, note that

P[a < X ≤ b] = P[X ≤ b] − P[X ≤ a] = Φ((b − µ)/σ) − Φ((a − µ)/σ).
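These identities can be sanity-checked numerically with `math.erf`. The sketch below (our own, not from the slides) evaluates P[a < X ≤ b] for an interval that happens to be one standard deviation on each side of the mean:

```python
import math

def Phi(x):
    # Standard Gaussian CDF via erf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mu, sigma = 2.0, 3.0
a, b = -1.0, 5.0          # i.e. mu - sigma and mu + sigma
# P[a < X <= b] = Phi((b - mu)/sigma) - Phi((a - mu)/sigma)
p = Phi((b - mu) / sigma) - Phi((a - mu) / sigma)
print(round(p, 4))  # → 0.6827, the Gaussian one-sigma probability
```

The symmetry identity Φ(y) = 1 − Φ(−y) follows from erf being an odd function.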
Skewness and Kurtosis

Definition

For a random variable X with PDF f_X(x), define the following central moments:

mean = E[X] \stackrel{\text{def}}{=} µ,
variance = E[(X − µ)²] \stackrel{\text{def}}{=} σ²,
skewness = E[((X − µ)/σ)³] \stackrel{\text{def}}{=} γ,
kurtosis = E[((X − µ)/σ)⁴] \stackrel{\text{def}}{=} κ.
Skewness

Measures how asymmetric the distribution is. A Gaussian has skewness 0.

Figure: Skewness of a distribution measures how asymmetric the distribution is. In this example, the skewness values are: orange = 0.8943, black = 0, blue = −1.414.
Kurtosis

kurtosis = E[((X − µ)/σ)⁴].

Measures how heavy the tails are. A Gaussian has kurtosis 3. Some people prefer the excess kurtosis κ − 3; a Gaussian has excess kurtosis 0.

Figure: Kurtosis of a distribution measures how heavy-tailed the distribution is. The panels compare excess kurtosis > 0, = 0, and < 0. In this example, the excess kurtosis values are: orange = 2.8567, black = 0, blue = −0.1242.
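The definitions of skewness and kurtosis translate directly into sample estimates: replace each expectation with an average over the data. A minimal sketch, verified on simulated Gaussian samples (our own helper names, stdlib only):

```python
import random

def skewness(xs):
    # Sample skewness: average standardized third power
    n = len(xs)
    m = sum(xs) / n
    s = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    return sum(((x - m) / s) ** 3 for x in xs) / n

def kurtosis(xs):
    # Sample kurtosis: average standardized fourth power (Gaussian -> 3)
    n = len(xs)
    m = sum(xs) / n
    s = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    return sum(((x - m) / s) ** 4 for x in xs) / n

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100000)]
print(round(skewness(xs), 1), round(kurtosis(xs), 1))  # ≈ 0.0 and 3.0
```

This is how the Titanic age statistics later in the lecture would be computed from raw data.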
Skewness and Kurtosis

| Random variable | Mean µ | Variance σ² | Skewness γ | Excess kurtosis κ − 3 |
|---|---|---|---|---|
| Bernoulli(p) | p | p(1 − p) | (1 − 2p)/√(p(1 − p)) | (6p² − 6p + 1)/(p(1 − p)) |
| Binomial(n, p) | np | np(1 − p) | (1 − 2p)/√(np(1 − p)) | (6p² − 6p + 1)/(np(1 − p)) |
| Geometric(p) | 1/p | (1 − p)/p² | (2 − p)/√(1 − p) | 6 + p²/(1 − p) |
| Poisson(λ) | λ | λ | 1/√λ | 1/λ |
| Exponential(λ) | 1/λ | 1/λ² | 2 | 6 |

Table: The first few moments of commonly used random variables.
Example: Titanic

On April 15, 1912, RMS Titanic sank after hitting an iceberg, killing 1502 of the 2224 passengers and crew. A hundred years later, we want to analyze the data. On https://www.kaggle.com/c/titanic/ there is a dataset collecting the identities, ages, genders, etc. of the passengers.

| Statistics | Group 1 (Died) | Group 2 (Survived) |
|---|---|---|
| Mean | 30.6262 | 28.3437 |
| Standard Deviation | 14.1721 | 14.9510 |
| Skewness | 0.5835 | 0.1795 |
| Excess Kurtosis | 0.2652 | −0.0772 |
Example: Titanic

The means and standard deviations of the two groups are similar, but the skewness and kurtosis can tell the difference.

Figure: Age histograms of the two groups in the Titanic dataset, https://www.kaggle.com/c/titanic/.
Origin of Gaussian

Why do Gaussians have bell shapes? What is the origin of the Gaussian?

When we sum many independent random variables, the resulting random variable approaches a Gaussian. This is known as the Central Limit Theorem. The theorem applies to essentially any random variable with finite mean and variance. Summing random variables is equivalent to convolving their PDFs, and convolving PDFs infinitely many times yields the bell shape.
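The Central Limit Theorem is easy to see by simulation. The sketch below sums 12 independent Uniform(0, 1) variables, so each sum has mean 6 and variance 12 · (1/12) = 1, and checks that roughly 68% of the sums land within one standard deviation of the mean, as a Gaussian predicts:

```python
import random

random.seed(1)
N, n = 100000, 12
# Each sum of 12 Uniform(0,1)'s has mean 6 and variance 1
sums = [sum(random.random() for _ in range(n)) for _ in range(N)]
within_one_sigma = sum(1 for s in sums if abs(s - 6.0) <= 1.0) / N
print(round(within_one_sigma, 2))  # ≈ 0.68, the Gaussian one-sigma rule
```

The individual uniforms are flat, yet their sum already behaves like a bell curve; this is the dice experiment on the next slide in miniature.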
The experiment of throwing many dice

Figure: Histograms of (a) X₁, (b) X₁ + X₂, (c) X₁ + … + X₅, and (d) X₁ + … + X₁₀₀. When adding uniform random variables, the overall distribution becomes more and more like a Gaussian.
Sum of X and Y = Convolution of f_X and f_Y

Example: convolving two rectangles gives a triangle.

We will show this result in a later lecture: for independent X and Y, the PDF of X + Y is the convolution (f_X ∗ f_Y)(x). In particular, (f_X ∗ f_X)(x) is the triangle function obtained by convolving the rectangle with itself.

If you convolve infinitely many times, then in the Fourier domain you will have

F{f_X ∗ f_X ∗ … ∗ f_X} = F{f_X} · F{f_X} · … · F{f_X}.
Figure: Convolving the PDF of a uniform distribution with itself is equivalent to multiplying its Fourier transform by itself in the Fourier domain. As the number of convolutions grows, the product gradually becomes Gaussian.
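The repeated-convolution picture can be sketched directly. Below, a discretized rectangle (a uniform PDF) is convolved with itself several times; the result stays a valid PDF (total mass 1) and is symmetric and bell-shaped, exactly as the figure shows. The `convolve` helper is our own simple implementation, not a library call:

```python
def convolve(f, g):
    # Plain discrete convolution of two sequences
    out = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

rect = [1.0 / 20] * 20   # discretized Uniform PDF (entries sum to 1)
pdf = rect
for _ in range(4):       # four more convolutions -> PDF of a sum of 5 uniforms
    pdf = convolve(pdf, rect)

total = sum(pdf)                                            # mass is preserved
sym_err = max(abs(a - b) for a, b in zip(pdf, pdf[::-1]))   # shape is symmetric
print(round(total, 6), sym_err < 1e-12)
```

Each convolution multiplies the Fourier transforms, so after many steps the transform, and hence the PDF, tends to a Gaussian shape.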
Origin of Gaussian

What happens if you convolve a PDF infinitely many times? You will get a Gaussian. This is known as the Central Limit Theorem.

Why are Gaussians everywhere? We seldom look at individual random variables; we usually look at their sum or average. Whenever we have a sum, the Central Limit Theorem kicks in. Summing random variables is equivalent to convolving their PDFs, and convolving PDFs infinitely many times yields the bell shape. This result applies to any random variables that are summed independently (with finite mean and variance).
Questions?