Chapter 3: Probability, Random Variables, Random Processes
EE456 – Digital Communications, Professor Ha Nguyen
September 2015
What is a Random Variable?
(Figure 1: An example of a random variable — a mapping from the sample space Ω (e.g., the outcomes 1, 2, . . . , 6 of a die roll) to the set of real numbers R.)
Sample space: Collection of all possible outcomes of a random experiment.
Strictly speaking, a random variable is a mapping from the sample space to the set of real numbers.
Loosely speaking, a random variable is a numerical quantity that can take on different values.
Cumulative Distribution Function (cdf)
Of course, not all random variables are the same.
A random variable is completely characterized (described) by a cumulative distribution function (cdf) or a probability density function (pdf).
A cdf or a pdf can be used to determine the probability (i.e., chance, level of confidence) that a random variable will take a value in a certain range (such as negative, positive, between 1 and 2, etc.).
The cdf of a random variable is defined as:
Fx(x) = P(x ≤ x), i.e., the probability that the random variable x takes a value no greater than x.
The cdf has the following properties:
1. 0 ≤ Fx(x) ≤ 1.
2. Fx(x) is nondecreasing: Fx(x1) ≤ Fx(x2) if x1 ≤ x2.
3. Fx(−∞) = 0 and Fx(+∞) = 1.
4. P(a < x ≤ b) = Fx(b) − Fx(a).
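These properties are easy to check empirically. The sketch below is in Python/NumPy rather than the Matlab used later in these notes; the sample size, seed, and evaluation grid are arbitrary illustrative choices. It builds an empirical cdf from Gaussian samples and verifies properties 1–4:

```python
import numpy as np

# Empirical cdf of a Gaussian sample (illustrative sketch; the sample size,
# seed, and grid below are arbitrary choices, not from the notes).
rng = np.random.default_rng(0)
samples = np.sort(rng.standard_normal(100_000))

def ecdf(x):
    """Empirical cdf: fraction of samples <= x."""
    return np.searchsorted(samples, x, side="right") / samples.size

xs = np.linspace(-4, 4, 81)
F = ecdf(xs)

assert np.all((0 <= F) & (F <= 1))          # property 1: 0 <= F(x) <= 1
assert np.all(np.diff(F) >= 0)              # property 2: nondecreasing
assert ecdf(-1e9) == 0 and ecdf(1e9) == 1   # property 3: limits at -inf, +inf
# property 4: P(a < x <= b) = F(b) - F(a); for a = -1, b = 1 this is ~0.6827
p = ecdf(1.0) - ecdf(-1.0)
assert abs(p - 0.6827) < 0.01
```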
Typical Plots of a cdf
A random variable can be discrete, continuous or mixed.
(Plots (a), (b), (c): typical cdfs Fx(x), each rising from 0 to 1 over −∞ < x < ∞, for discrete, continuous, and mixed random variables.)
Probability Density Function (pdf)
The pdf is defined as the derivative of the cdf: fx(x) = dFx(x)/dx.
(Figure: a pdf curve fx(x) = dFx(x)/dx; for a region Π, P(x ∈ Π) = ∫_Π fx(x)dx, the area under fx(x) over Π.)
Basic properties of a pdf:
1. fx(x) ≥ 0 (a valid pdf must be nonnegative).
2. ∫_{−∞}^{∞} fx(x)dx = 1 (the total area under a pdf curve must be 1).
3. P(x1 ≤ x ≤ x2) = P(x ≤ x2) − P(x ≤ x1) = Fx(x2) − Fx(x1) = ∫_{x1}^{x2} fx(x)dx.
4. In general, P(x ∈ Π) = ∫_Π fx(x)dx.
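These pdf properties can be verified numerically. A minimal sketch (Python/NumPy, standing in for the notes' Matlab; the Gaussian pdf and the grid are illustrative choices) integrates the standard Gaussian pdf by the trapezoidal rule:

```python
import numpy as np

# Numerical sanity check of the pdf properties for the standard Gaussian pdf
# f(x) = exp(-x^2/2)/sqrt(2*pi); grid limits chosen so the tails are negligible.
x = np.linspace(-8.0, 8.0, 200_001)
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

assert np.all(f >= 0)                                   # property 1: nonnegative

# property 2: total area = 1 (manual trapezoidal rule)
area = np.sum((f[1:] + f[:-1]) / 2 * np.diff(x))
assert abs(area - 1.0) < 1e-6

# property 3: P(x1 <= x <= x2) as the area between x1 = -1 and x2 = 1
mask = (x >= -1) & (x <= 1)
p = np.sum((f[mask][1:] + f[mask][:-1]) / 2 * np.diff(x[mask]))
assert abs(p - 0.682689) < 1e-4
```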
Bernoulli Random Variable
(Sketches of the pdf fx(x), impulses of weight 1 − p at x = 0 and p at x = 1, and the staircase cdf Fx(x).)
A discrete random variable that takes two values, 1 and 0, with probabilities p and 1 − p, respectively.
A good model for a binary data source whose output is 1 or 0.
Can also be used to model the channel errors.
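A quick simulation confirms the Bernoulli moments E{x} = p and var{x} = p(1 − p). This is an illustrative sketch in Python/NumPy (not the notes' Matlab); p, the seed, and the sample size are arbitrary:

```python
import numpy as np

# Simulate a Bernoulli(p) binary source and compare empirical moments
# with the analytic values E{x} = p and var{x} = p(1-p).
rng = np.random.default_rng(1)
p = 0.3                                        # arbitrary illustrative choice
x = (rng.random(1_000_000) < p).astype(float)  # 1 with prob p, else 0

assert abs(x.mean() - p) < 0.005               # E{x} = p
assert abs(x.var() - p * (1 - p)) < 0.005      # var{x} = p(1-p) = 0.21
```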
Uniform Random Variable
(Sketches of the pdf fx(x), constant at 1/(b − a) on [a, b], and the cdf Fx(x), a ramp from 0 at x = a to 1 at x = b.)
A continuous random variable that takes values between a and b with equal probabilities over intervals of equal length.
The phase of a received sinusoidal carrier is usually modeled as a uniform random variable between 0 and 2π. Quantization error is also typically modeled as uniform.
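The uniform random variable on [a, b] has mean (a + b)/2 and variance (b − a)²/12, which a short simulation confirms. A hedged sketch in Python/NumPy (the carrier-phase interval [0, 2π) is borrowed from the bullet above; seed and sample size are arbitrary):

```python
import numpy as np

# Uniform on [a, b]: mean (a+b)/2, variance (b-a)^2/12, checked by simulation.
rng = np.random.default_rng(2)
a, b = 0.0, 2.0 * np.pi          # e.g. a received carrier phase in [0, 2*pi)
u = rng.uniform(a, b, 1_000_000)

assert abs(u.mean() - (a + b) / 2) < 0.01         # mean = pi here
assert abs(u.var() - (b - a) ** 2 / 12) < 0.02    # variance = (2*pi)^2/12
```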
Gaussian (or Normal) Random Variable
(Sketches of the pdf fx(x), a bell curve with peak 1/√(2πσ²) at x = µ, and the cdf Fx(x), which passes through 1/2 at x = µ.)
A continuous random variable whose pdf is:
fx(x) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)),
where µ and σ² are parameters. Usually denoted as N(µ, σ²).
Most important and frequently encountered random variable in communications.
Uniform or Gaussian?
(Four plots of 100 observed outcomes vs. trial number: two with outcomes confined to [−2, 2] and two ranging over [−5, 5]; the reader is invited to judge from each scatter whether the underlying random variable is uniform or Gaussian.)
Gaussian RVs with Different Average (Mean) Values
Notice the connection between the pdf and observed values
(Two pairs of plots, each showing 100 observed outcomes vs. trial number next to the corresponding pdf fx(x): Gaussian random variables with different mean values.)
Gaussian RVs with Different Average (Mean) Squared Values
Notice the connection between the pdf and observed values
(Two pairs of plots, each showing 100 observed outcomes vs. trial number next to the corresponding pdf fx(x): Gaussian random variables with different mean-squared values.)
Histogram of Observed Values: Predicting the pdf Based on the Observed Values
(Two pairs of plots: 1000 observed outcomes vs. trial number, and the corresponding histograms of bin counts vs. bin, from which the shape of the underlying pdf can be predicted.)
Expectations (Statistical Averages) of a Random Variable
Expectations (statistical averages, or moments) play an important role in the characterization of a random variable.
The expected value (also called the mean value, or first moment) of the random variable x is defined as
mx = E{x} ≡ ∫_{−∞}^{∞} x fx(x)dx,
where E{·} denotes the statistical expectation operator.
In general, the nth moment of x is defined as
E{x^n} ≡ ∫_{−∞}^{∞} x^n fx(x)dx.
For n = 2, E{x²} is known as the mean-squared value of the random variable.
The nth central moment of the random variable x is:
E{(x − mx)^n} = ∫_{−∞}^{∞} (x − mx)^n fx(x)dx.
When n = 2 the central moment is called the variance, commonly denoted as σ²x:
σ²x = var(x) = E{(x − mx)²} = ∫_{−∞}^{∞} (x − mx)² fx(x)dx.
The variance provides a measure of the variable's "randomness".
The mean and variance of a random variable give a partial description of its pdf.
Relationship between the variance and the first and second moments:
σ²x = E{x²} − [E{x}]² = E{x²} − m²x.
An electrical engineering interpretation: the AC power equals the total power minus the DC power.
The square root of the variance is known as the standard deviation, and can be interpreted as the root-mean-squared (RMS) value of the AC component.
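The "AC power = total power − DC power" identity can be checked on any set of observations. A minimal sketch (Python/NumPy instead of the notes' Matlab; the data values are arbitrary):

```python
import numpy as np

# var{x} = E{x^2} - (E{x})^2, checked on an arbitrary fixed set of observations.
x = np.array([1.0, 2.0, 2.0, 3.0, 5.0, 5.0, 6.0])

total_power = np.mean(x**2)        # E{x^2}, the mean-squared value
dc_power = np.mean(x)**2           # (E{x})^2 = m_x^2
ac_power = total_power - dc_power  # should equal the variance

assert np.isclose(ac_power, np.var(x))           # sigma_x^2 = E{x^2} - m_x^2
assert np.isclose(np.sqrt(ac_power), np.std(x))  # standard deviation = RMS of AC part
```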
Examples
Example 1: Find the mean and variance of the Bernoulli random variable defined on Page 6.
Example 2: Find the mean and variance of the uniform random variable defined on Page 7.
Example 3: Prove that the mean and variance of the Gaussian random variable defined on Page 8 are exactly µ and σ², respectively.
Example 4: Consider two random variables, x and y, with the pdfs given in the figure below. Then answer the following:
– Are the means of x and y the same or different?
– Are the variances of x and y the same or different? If they are different, which random variable has the larger variance?
– Compute the means and variances of x and y.
(Figure: the pdfs fx(x) and fy(y), each supported on [−1, 1] with peak value 1.)
Statistical (or Ensemble) Averages vs. Empirical (Sample) Averages
Statistical averages of a random variable are found from its pdf. For the statistical averages to be useful, the pdf has to be accurate enough.
Empirical averages are obtained from the actual observations (no pdf is needed).
Let x1, x2, . . . , xN be actual observations (outcomes) of the random variable x. The empirical averages of x and x² are calculated as follows:
x̄ = (Σ_{n=1}^{N} xn)/N, (1)
x̄² = (Σ_{n=1}^{N} x²n)/N. (2)
If the number of observations N, i.e., the data size, is large enough, the above empirical averages should be very close to the statistical averages E{x} = ∫_{−∞}^{∞} x fx(x)dx and E{x²} = ∫_{−∞}^{∞} x² fx(x)dx.
Example 1: The Matlab command randn(1,10) generates 10 values of a Gaussian random variable whose pdf is N(0, 1). Compute the "mean" and "mean-squared value" of the random variable based on the following actual observations:
0.5377 1.8339 -2.2588 0.8622 0.3188 -1.3077 -0.4336 0.3426 3.5784 2.7694
Example 2: Repeat the calculations for 10 values of a uniform random variable generated with 2*rand(1,10):
1.3115 0.0714 1.6983 1.8680 1.3575 1.5155 1.4863 0.7845 1.3110 0.3424
Example
The Matlab function randn defines a Gaussian random variable of zero mean and unit variance. Generate a vector of L = 10^6 samples of a zero-mean Gaussian random variable x with variance σ² = 10^−8. This can be done as follows: x=sigma*randn(1,L).
(a) Based on the sample vector x, verify the mean and variance of random variable x.
(b) Based on the sample vector x, compute the "probability" that x > A, where A = −10^−4. Compare the result with that found in Question 2-(c).
(c) Using the Matlab command hist, plot the histogram of sample vector x with 100 bins. Next obtain and plot the pdf from the histogram. Then also plot on the same figure the theoretical Gaussian pdf (note the value of the variance for the pdf). Do the histogram pdf and theoretical pdf fit well?
Solution:
L=10^6;                  % Length of the noise vector
sigma=sqrt(10^(-8));     % Standard deviation of the noise
A=-10^(-4);
x=sigma*randn(1,L);
% (a) Verify mean and variance:
mean_x=sum(x)/L;                    % the same as mean(x)
variance_x=sum((x-mean_x).^2)/L;    % the same as var(x,1)
% mean_x = -3.7696e-008
% variance_x = 1.0028e-008
% (b) Compute P(x>A):
P=length(find(x>A))/L;
% P = 0.8410
% (c) Histogram and Gaussian pdf fit:
N_bins=100;
[y,x_center]=hist(x,N_bins);  % Bins the elements of x into N_bins equally spaced
                              % containers, returning the count in each container
                              % in y and the bin centers in x_center.
dx=(max(x)-min(x))/N_bins;    % Width of each bin.
hist_pdf=(y/L)/dx;            % Approximates the pdf as a constant over each bin.
pl(1)=plot(x_center,hist_pdf,'color','blue','linestyle','--','linewidth',1.0);
hold on;
x0=[-5:0.001:5]*sigma;        % Range of random variable x
true_pdf=1/(sqrt(2*pi)*sigma)*exp(-x0.^2/(2*sigma^2));
pl(2)=plot(x0,true_pdf,'color','r','linestyle','-','linewidth',1.0);
xlabel('\itx','FontName','Times New Roman','FontSize',16);
ylabel('\itf_\bf x(\itx)','FontName','Times New Roman','FontSize',16);
legend(pl,'pdf from histogram','true pdf');
set(gca,'FontSize',16,'XGrid','on','YGrid','on','GridLineStyle',':','MinorGridLineStyle','none','FontName','Times New Roman');
(Plot: the pdf estimated from the histogram (dashed) and the true Gaussian pdf (solid), fx(x) vs. x; with σ² = 10^−8 the pdf peaks near 4000 and the x-axis spans roughly ±5 × 10^−4.)
Q-function
(Figure: the standard Gaussian pdf (1/√(2π)) e^{−λ²/2}; the shaded tail area to the right of x equals Q(x).)
Q(x) ≡ (1/√(2π)) ∫_{x}^{∞} exp(−λ²/2) dλ.
(Plot: Q(x) vs. x for 0 ≤ x ≤ 6 on a logarithmic scale, decreasing from 10^0 toward 10^−10.)
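In practice Q(x) is computed from the complementary error function via the standard identity Q(x) = (1/2) erfc(x/√2). A minimal sketch (Python's standard-library math.erfc, rather than the notes' Matlab):

```python
import math

# Q-function via the complementary error function: Q(x) = 0.5*erfc(x/sqrt(2)).
def Q(x: float) -> float:
    """Tail probability of a standard Gaussian: P(X > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

assert abs(Q(0.0) - 0.5) < 1e-12       # half the area lies to the right of 0
assert abs(Q(1.0) - 0.158655) < 1e-4   # the familiar ~15.87% one-sigma tail
assert Q(6.0) < 1e-8                   # deep-tail value, below 10^-8
```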
Example
The noise voltage in an electric circuit can be modeled as a Gaussian random variable with zero mean and variance σ². Show that the probability that the noise voltage exceeds some level A can be expressed as Q(A/σ).
Solution:
fx(x) = (1/(√(2π)σ)) e^{−(x−µ)²/(2σ²)}  (with µ = 0)  = (1/(√(2π)σ)) e^{−x²/(2σ²)}.
The Q-function is defined as:
Q(t) = (1/√(2π)) ∫_{t}^{∞} e^{−λ²/2} dλ.
The probability that the noise voltage exceeds some level A is
P(x > A) = ∫_{A}^{∞} fx(x)dx = (1/(√(2π)σ)) ∫_{A}^{∞} e^{−x²/(2σ²)} dx
(substituting λ = x/σ, so dλ = dx/σ)
= (1/√(2π)) ∫_{A/σ}^{∞} e^{−λ²/2} dλ = Q(A/σ).
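The identity P(x > A) = Q(A/σ) is easy to confirm by Monte Carlo simulation. A hedged sketch in Python/NumPy (σ and A below are arbitrary choices, giving A/σ = 1):

```python
import math
import numpy as np

# Monte Carlo check of P(x > A) = Q(A/sigma) for zero-mean Gaussian noise.
rng = np.random.default_rng(3)
sigma, A = 2.0, 2.0                      # arbitrary values with A/sigma = 1
x = sigma * rng.standard_normal(1_000_000)

p_emp = np.mean(x > A)                                    # empirical exceedance rate
p_theory = 0.5 * math.erfc((A / sigma) / math.sqrt(2.0))  # Q(A/sigma) ~ 0.1587

assert abs(p_emp - p_theory) < 0.005
```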
Joint cdf and Joint pdf of Multiple Random Variables
Let x and y be two random variables. Each random variable can be described/characterized separately by its own cdf, pdf, or statistical averages.
When two (or more) random variables are related (even remotely) to the same physical phenomenon, there can be a relationship between them.
It can therefore be very helpful to describe/characterize multiple random variables jointly.
The joint cdf is defined as
Fx,y(x, y) = P(x ≤ x, y ≤ y).
Similarly, the joint pdf is:
fx,y(x, y) = ∂²Fx,y(x, y)/(∂x∂y).
When the joint pdf is integrated over one of the variables, one obtains the pdf of the other variable (also called the marginal pdf):
∫_{−∞}^{∞} fx,y(x, y)dx = fy(y),  ∫_{−∞}^{∞} fx,y(x, y)dy = fx(x).
A joint pdf is always nonnegative. Furthermore, the total volume under a joint pdf is unity:
∫_{−∞}^{∞} ∫_{−∞}^{∞} fx,y(x, y)dxdy = Fx,y(∞, ∞) = 1.
The conditional pdf of the random variable y, given that the value of the random variable x is equal to x, is defined as
fy(y|x) = fx,y(x, y)/fx(x) if fx(x) ≠ 0, and 0 otherwise.
Two random variables x and y are statistically independent if and only if
fy(y|x) = fy(y), or equivalently fx,y(x, y) = fx(x)fy(y).
The joint moment is defined as
E{x^j y^k} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x^j y^k fx,y(x, y)dxdy.
The joint central moment is
E{(x − mx)^j (y − my)^k} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − mx)^j (y − my)^k fx,y(x, y)dxdy,
where mx = E{x} and my = E{y}. The most important moments are
E{xy} ≡ ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy fx,y(x, y)dxdy  (correlation),
cov{x, y} ≡ E{(x − mx)(y − my)} = E{xy} − mx my  (covariance).
Let σ²x and σ²y be the variances of x and y. The covariance normalized w.r.t. σxσy is called the correlation coefficient:
ρx,y = cov{x, y}/(σxσy).
ρx,y indicates the degree of linear dependence between two RVs.
It can be shown that |ρx,y| ≤ 1.
ρx,y = ±1 implies an increasing/decreasing linear relationship.
If ρx,y = 0, x and y are said to be uncorrelated.
It is easy to verify that if x and y are independent, then ρx,y = 0: independence implies lack of correlation.
However, lack of correlation (no linear relationship) does not in general imply statistical independence.
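The classic counterexample: y = x² with x symmetric about zero is completely determined by x, yet uncorrelated with it, since E{xy} = E{x³} = 0. A sketch in Python/NumPy (seed and sample size arbitrary):

```python
import numpy as np

# y = x^2 with x uniform on [-1, 1]: fully dependent on x but uncorrelated
# with it, since cov{x, y} = E{x^3} - E{x}E{x^2} = 0.
rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, 1_000_000)
y = x**2                                  # a deterministic function of x

rho = np.corrcoef(x, y)[0, 1]
assert abs(rho) < 0.02                    # uncorrelated (rho ~ 0)...
# ...yet clearly not independent: knowing x fixes y exactly.
```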
Examples of Uncorrelated and Correlated Random Variables
(Two scatter plots of (x, y) observations on [0, 1.5] × [0, 1.5]: one showing uncorrelated data, the other correlated data.)
Other Examples of Uncorrelated and Correlated Random Variables
(Two more scatter plots of (x, y) observations, with x in [−2, 2] and y in [−5, 5]: one showing uncorrelated data, the other correlated data.)
Interpretation of Correlation Coefficient
(Four scatter plots illustrating the correlation coefficient: (a) ρx,y ≈ 0.71 on [0, 1.5]; (b) ρx,y ≈ 0.71 on a scale 100 times larger; (c) ρx,y ≈ 0.97; (d) ρx,y ≈ −0.97.)
Jointly Gaussian Distribution (Bivariate)
fx,y(x, y) = 1/(2πσxσy√(1 − ρ²x,y)) × exp{ −1/(2(1 − ρ²x,y)) × [ (x − mx)²/σ²x − 2ρx,y(x − mx)(y − my)/(σxσy) + (y − my)²/σ²y ] },
where mx, my, σx, σy are the means and variances.
ρx,y is indeed the correlation coefficient.
Marginal densities are Gaussian: fx(x) ∼ N(mx, σ²x) and fy(y) ∼ N(my, σ²y).
When ρx,y = 0, fx,y(x, y) = fx(x)fy(y), i.e., the random variables x and y are statistically independent.
For jointly Gaussian random variables, uncorrelatedness means statistical independence!
Weighted sum of two jointly Gaussian random variables is also Gaussian.
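A standard way to generate a jointly Gaussian pair with a prescribed ρ uses the weighted-sum property: y = ρx + √(1 − ρ²)z with x, z independent N(0, 1) gives a unit-variance Gaussian y with correlation ρ to x. A sketch in Python/NumPy (ρ = 0.7, seed, and sample size are arbitrary choices):

```python
import numpy as np

# Generate a jointly Gaussian pair (x, y) with correlation coefficient rho
# from two independent N(0,1) variables: y = rho*x + sqrt(1-rho^2)*z.
rng = np.random.default_rng(5)
rho = 0.7                                 # arbitrary target correlation
x = rng.standard_normal(1_000_000)
z = rng.standard_normal(1_000_000)
y = rho * x + np.sqrt(1 - rho**2) * z

assert abs(np.corrcoef(x, y)[0, 1] - rho) < 0.005  # measured correlation ~ 0.7
assert abs(y.var() - 1.0) < 0.01                   # y is still unit-variance Gaussian
```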
Joint pdf and Contours for σx = σy = 1 and ρx,y = 0
(Surface plot of fx,y(x, y) and its contours for σx = σy = 1, ρx,y = 0: the contours are circles centered at the origin.)
Joint pdf and Contours for σx = σy = 1 and ρx,y = 0.3
(Surface plot of fx,y(x, y), a cross-section, and the contours for σx = σy = 1, ρx,y = 0.3: the contours are ellipses tilted along the line y = x.)
Joint pdf and Contours for σx = σy = 1 and ρx,y = 0.7
(Surface plot of fx,y(x, y), a cross-section, and the contours for σx = σy = 1, ρx,y = 0.7: the ellipses are narrower, concentrating the probability along the line y = x.)
Random Processes (Random Signals)
Loosely speaking, a random process is a random variable evolving in time.
The figures below plot time functions of several different random processes.
(Sample functions of four random processes plotted vs. time t: (a) thermal noise; (b) uniform phase; (c) Rayleigh fading process; (d) binary random data taking values ±V with bit duration Tb.)
(Figure: an ensemble of realizations of a random process plotted against time; sampling every realization at a given time instant, e.g. t1, t2, or tk, yields a random variable taking values on the real line.)
Denote the random process by x(t). The time functions x1(t), x2(t), x3(t), . . . are specific realizations of the process.
At any time instant t = tk, we have a random variable x(tk).
At any two time instants, say t1 and t2, we have two different random variables x(t1) and x(t2).
Any relationship between the random variables x(t1) and x(t2) is completely described by the joint pdf fx(t1),x(t2)(x1, x2; t1, t2).
Statistical Averages of a Random Process
The value of a random process at any time instant, say t1, is a random variable x1 = x(t1). Such a random variable has mean and mean-squared values (i.e., first-order and second-order statistics).
Most random processes encountered in communications have the property that the mean and mean-squared values are constants over time:
(P1) mx = E{x(t)} = ∫_{−∞}^{∞} x fx(x)dx.
(P2) MSVx = E{x²(t)} = ∫_{−∞}^{∞} x² fx(x)dx.
The correlation between two random variables x1 = x(t1) and x2 = x(t2) is:
Rx(t1, t2) = E{x(t1)x(t2)} = ∫_{x1=−∞}^{∞} ∫_{x2=−∞}^{∞} x1 x2 fx1,x2(x1, x2; t1, t2)dx1dx2.
Most random processes in communications have the property that the correlation depends only on the time difference τ = t2 − t1:
(P3) Rx(τ) = E{x(t1)x(t1 + τ)} = E{x(t)x(t + τ)}, ∀t.
Random processes that satisfy properties (P1), (P2), (P3) are called wide-sense stationary (WSS).
It can be shown that the autocorrelation function Rx(τ) completely describes the power spectral density of the random process.
Properties of the Autocorrelation Function
1. Rx(τ) = Rx(−τ). It is an even function of τ because the same set of product values is averaged across the ensemble, regardless of the direction of translation.
2. |Rx(τ)| ≤ Rx(0). The maximum always occurs at τ = 0, though there may be other values of τ for which it is as big. Further, Rx(0) is the mean-squared value of the random process.
3. If for some τ0 we have Rx(τ0) = Rx(0), then for all integers k, Rx(kτ0) = Rx(0).
4. If mx ≠ 0, then Rx(τ) will have a constant component equal to m²x.
5. Autocorrelation functions cannot have an arbitrary shape. The restriction on the shape arises from the fact that the Fourier transform of an autocorrelation function must be greater than or equal to zero, i.e., F{Rx(τ)} ≥ 0.
Example of a Random Process
A random process is generated as follows: x(t) = e^{−a|t|}, where a is a random variable with pdf fa(a) = u(a) − u(a − 1) (1/seconds).
(a) Sketch several members of the ensemble.
(b) For a specific time t, over what values of amplitude does the random variable x(t) range?
(c) Find the mean and mean-squared value of x(t). Is the process x(t) wide-sense stationary (WSS)?
(d) Determine the first-order pdf of x(t).
(a) Plot of several member functions of x(t) = e^{−a|t|}:
(Four plots over −2 ≤ t ≤ 2, for a ≈ 0.8214, 0.4447, 0.6154, and 0.7919; each decays from 1 at t = 0.)
(b) Since a ranges between 0 and 1, for a specific time t the random variable x(t) ranges from e^{−|t|} (when a = 1) to 1 (when a = 0).
(c) x(t) is considered as a function of a, with t a parameter. The mean and mean-squared values of x(t) are:
E{x(t)} = ∫_{−∞}^{∞} x(t) fa(a)da = ∫_{0}^{1} e^{−a|t|} da = [e^{−a|t|}/(−|t|)]_{0}^{1} = (1/|t|)(1 − e^{−|t|}).
E{x²(t)} = ∫_{−∞}^{∞} x²(t) fa(a)da = ∫_{0}^{1} e^{−2a|t|} da = [e^{−2a|t|}/(−2|t|)]_{0}^{1} = (1/(2|t|))(1 − e^{−2|t|}).
Since both the mean and mean-squared value are functions of time t, the process x(t) is not wide-sense stationary.
(d) For x ≤ e^{−|t|} or x > 1, we have fx(t)(x; t) = 0.
For e^{−|t|} < x ≤ 1, the equation x = g(a) = e^{−a|t|} has only one solution, a = −(1/|t|) ln x. Furthermore,
|dg(a)/da| evaluated at a = −(1/|t|) ln x  =  |−|t| e^{−a|t|}| at that point  =  |t|x.
Therefore
fx(t)(x; t) = fa(a = −(1/|t|) ln x) / (|t|x) = 1/(|t|x).
To conclude:
fx(t)(x; t) = 1/(|t|x) for e^{−|t|} < x ≤ 1, and 0 otherwise.
Power Spectral Density of a Random Process; Effects of an LTI System on Random Processes
Power Spectral Density (PSD): describes how the average power of the process is distributed in frequency.
It can be shown that the power spectral density and the autocorrelation function are a Fourier transform pair:
Rx(τ) = ∫_{f=−∞}^{∞} Sx(f) e^{j2πfτ}df  ←→  Sx(f) = ∫_{τ=−∞}^{∞} Rx(τ) e^{−j2πfτ}dτ.
(Block diagram: input x(t), with mx and Rx(τ) ←→ Sx(f), drives a linear, time-invariant (LTI) system h(t) ←→ H(f), producing output y(t), with my and Ry(τ) ←→ Sy(f), and cross-correlation Rx,y(τ).)
my = E{y(t)} = E{ ∫_{−∞}^{∞} h(λ)x(t − λ)dλ } = mx H(0),
Sy(f) = |H(f)|² Sx(f),
Ry(τ) = h(τ) ∗ h(−τ) ∗ Rx(τ).
Thermal Noise in Communication Systems
A natural noise source is thermal noise, whose amplitude statistics are well modeled as Gaussian with zero mean.
The autocorrelation and PSD are well modeled as:
Rw(τ) = (kθG/t0) e^{−|τ|/t0}  (watts),
Sw(f) = 2kθG/(1 + (2πf t0)²)  (watts/Hz),
where k = 1.38 × 10^{−23} joule/°K is Boltzmann's constant, G is the conductance of the resistor (mhos), θ is the temperature in degrees Kelvin, and t0 is the statistical average of the time intervals between collisions of free electrons in the resistor (on the order of 10^{−12} sec).
(Plots comparing thermal noise with ideal white noise: (a) the power spectral density Sw(f), approximately flat out to tens of GHz and equal to N0/2 for white noise; (b) the autocorrelation Rw(τ), very narrow on the scale of fractions of a picosecond and equal to (N0/2)δ(τ) for white noise.)
The noise PSD is approximately flat over the frequency range of 0 to 10 GHz ⇒ let the spectrum be flat from 0 to ∞:
Sw(f) = N0/2  (watts/Hz),
where N0 = 4kθG is a constant.
Noise that has a uniform spectrum over the entire frequency range is referred to as white noise.
The autocorrelation of white noise is
Rw(τ) = (N0/2)δ(τ)  (watts).
Since Rw(τ) = 0 for τ ≠ 0, any two different samples of white noise, no matter how close in time they are taken, are uncorrelated.
Since the noise samples of white noise are uncorrelated, if the noise is both white and Gaussian (for example, thermal noise), then the noise samples are also independent.
Example
Suppose that a (WSS) white noise process x(t), of zero mean and power spectral density N0/2, is applied to the input of the filter below.
(a) Find and sketch the power spectral density and autocorrelation function of the random process y(t) at the output of the filter.
(b) What are the mean and variance of the output process y(t)?
(Circuit: x(t) drives a series inductor L followed by a shunt resistor R; the output y(t) is taken across R.)
H(f) = R/(R + j2πfL) = 1/(1 + j2πfL/R).
Sy(f) = (N0/2) · 1/(1 + (2πL/R)² f²)  ←→  Ry(τ) = (N0R/(4L)) e^{−(R/L)|τ|}.
(Sketches: Sy(f) (watts/Hz), peaking at N0/2 at f = 0, and Ry(τ) (watts), peaking at N0R/(4L) at τ = 0.)
Bandlimiting White Gaussian Noise
The input noise x(t) is modeled as a wide-sense stationary, white, Gaussian random process with zero mean and two-sided power spectral density N0/2.
(Figure: x(t) is passed through an ideal low-pass filter H(f) with unit gain for |f| ≤ W and zero gain elsewhere, producing the output y(t).)
(a) Find and sketch the power spectral density of y(t).
(b) Find and sketch the autocorrelation function of y(t).
(c) What are the average DC level and the average power of y(t)?
(d) Suppose that the output noise is sampled every Ts seconds to obtain the noise samples y(kTs), k = 0, 1, 2, . . .. Find the smallest value of Ts so that the noise samples are statistically independent. Explain.
(a) The power spectral density (PSD) of y(t) is
Sy(f) = Sx(f) · |H(f)|² = N0/2 for |f| ≤ W, and 0 otherwise.
(Sketches: the flat input PSD Sx(f) = N0/2 for all f, and the output PSD Sy(f) = N0/2 restricted to |f| ≤ W.)
(b) The autocorrelation can be found as the inverse Fourier transform of the PSD:
Ry(τ) = F^{−1}{Sy(f)} = ∫_{−∞}^{+∞} Sy(f) e^{j2πfτ}df = ∫_{−W}^{W} (N0/2) e^{j2πfτ}df
= (N0/(4πτ)) ∫_{−W}^{W} e^{j2πfτ}(2πτ)df  (substituting x = 2πτf)  = (N0/(4πτ)) ∫_{−2πWτ}^{2πWτ} e^{jx}dx
= (N0/(4πτ)) · 2 sin(2πWτ) = N0 sin(2πWτ)/(2πτ) = N0 · W · sinc(2Wτ).
(Plot of Ry(τ)/(N0W) vs. Wτ: a sinc pulse with peak 1 at τ = 0 and zero crossings at Wτ = ±0.5, ±1, ±1.5, . . .)
(c) The DC level of y(t) is
my = E{y(t)} = mx · H(0) = 0 · 1 = 0.
The average power of y(t) is
σ²y = E{y²(t)} = Ry(0) = N0W.
(d) Since the input x(t) is a Gaussian process, the output y(t) is also a Gaussian process and the samples y(kTs) are Gaussian random variables. For Gaussian random variables, statistical independence is equivalent to uncorrelatedness. Thus one needs to find the smallest value of τ (or Ts) for which the autocorrelation is zero. From the graph of Ry(τ), the answer is:
τmin = (Ts)min = 1/(2W).
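The sinc zero crossings are easy to confirm numerically. A minimal sketch in Python/NumPy (N0 and W below are arbitrary illustrative values; note np.sinc is the normalized sinc, sin(πu)/(πu), matching the convention used here):

```python
import numpy as np

# Ry(tau) = N0*W*sinc(2*W*tau): samples taken Ts = 1/(2W) apart fall exactly
# on the zero crossings, hence are uncorrelated (and, being Gaussian, independent).
N0, W = 2.0, 5.0                      # arbitrary illustrative values

def Ry(tau):
    return N0 * W * np.sinc(2 * W * tau)   # np.sinc(u) = sin(pi*u)/(pi*u)

Ts = 1.0 / (2 * W)
assert abs(Ry(0.0) - N0 * W) < 1e-12       # Ry(0) = N0*W, the average power
ks = np.arange(1, 6)
assert np.all(np.abs(Ry(ks * Ts)) < 1e-12) # zeros at all multiples of 1/(2W)
```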
Random Sequences
A random sequence can be obtained by sampling a continuous-time random process. How do we characterize a random sequence?
Let x[1], x[2], . . . be a sequence of indexed random variables. The various definitions for continuous-time random processes can be applied, except that the time variable t is replaced by the sequence index n:
Mean function: mx[n] = E{x[n]}.
Variance function: σ²x[n] = E{(x[n] − mx[n])²}.
Correlation function: Rx(n, k) = E{x[n]x[k]}.
Covariance function: Cx(n, k) = E{(x[n] − mx[n])(x[k] − mx[k])}.
A Gaussian random sequence is a sequence of random variables for which any finite number of members of the sequence are jointly Gaussian.
When the autocorrelation is a function of only the difference between n and k, i.e., Rx(n, k) = Rx[n − k], or Rx[m] = E{x[n]x[n − m]}, then the DTFT of Rx[m] gives the power spectral density (PSD) of the random sequence x[n]:
Sx(e^{jω}) = Σ_{m=−∞}^{∞} Rx[m] e^{−jωm}.
An important case is the white random sequence, where Rx[k] = σ²δ[k] and Sx(e^{jω}) = σ²: the sequence is completely uncorrelated, and all the average power in the sequence is equally shared by all frequencies in the sequence!
Effects of an LTI System on Random Sequences
(Block diagram: input x[n], with mx and Rx[k] ←→ Sx(e^{jω}), drives an LTI system h[n] ←→ H(e^{jω}), producing output y[n], with my and Ry[k] ←→ Sy(e^{jω}), and cross-correlation Rx,y[k].)
my = E{y[n]} = E{ Σ_{k=−∞}^{∞} h[k]x[n − k] } = mx · Σ_{k=−∞}^{∞} h[k] = mx · H(e^{jω})|_{ω=0},
Sy(e^{jω}) = |H(e^{jω})|² Sx(e^{jω}),
Ry[k] = h[k] ∗ h[−k] ∗ Rx[k].
Example: Digital Filtering of a Random Sequence
Let x[n] be a white random sequence with mx[n] = 0 and σ²x[n] = σ². Suppose x[n] is the input to a digital filter (LTI system) with impulse response h[n] = a^n u[n], where |a| < 1. Find and sketch the power spectral density and autocorrelation function of the output sequence y[n].
Solution: The frequency response of this filter is
H(e^{jω}) = Σ_{n=0}^{∞} a^n e^{−jnω} = 1/(1 − a e^{−jω}).
Since the input PSD is Sx(e^{jω}) = σ², it follows that
Sy(e^{jω}) = |H(e^{jω})|² Sx(e^{jω}) = |1/(1 − a e^{−jω})|² σ² = σ²/(1 + a² − 2a cos ω),
Ry[k] = IDTFT{Sy(e^{jω})} = (σ²/(1 − a²)) a^{|k|}.
(Sketches: the input autocorrelation Rx[k], a single impulse at k = 0, and flat input PSD Sx(e^{jω}); the output PSD Sy(e^{jω}) and the geometrically decaying output autocorrelation Ry[k].)
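The closed form Ry[k] = (σ²/(1 − a²)) a^{|k|} can be verified by computing the inverse DTFT of Sy(e^{jω}) numerically. A sketch in Python/NumPy (a, σ², and the grid density are arbitrary choices with |a| < 1):

```python
import numpy as np

# Inverse-DTFT check of Ry[k] = sigma^2 * a^|k| / (1 - a^2):
# Ry[k] = (1/2pi) * integral over [-pi, pi] of Sy(e^{jw}) e^{jwk} dw.
a, sigma2 = 0.5, 1.0                            # arbitrary, |a| < 1
w = np.linspace(-np.pi, np.pi, 200_001)
Sy = sigma2 / (1 + a**2 - 2 * a * np.cos(w))    # output PSD from the solution

for k in range(4):
    integrand = Sy * np.exp(1j * w * k)
    # trapezoidal rule for the inverse-DTFT integral
    Ry_k = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(w)).real / (2 * np.pi)
    assert abs(Ry_k - sigma2 * a**k / (1 - a**2)) < 1e-6
```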
Sampling a Continuous-Time Random Process with an ADC
(Figure: the continuous-time noise w(t), with flat PSD Sw(f) = N0/2, is passed through an ideal anti-aliasing filter Hs(f) with unit gain for |f| ≤ Fs/2, then sampled at t = nTs (Ts = 1/Fs) to produce the sequence w[n]. The filtered PSD is N0/2 for |f| ≤ Fs/2, and the sampled sequence has the flat PSD Sw(e^{jω}) = N0Fs/2 for −π ≤ ω ≤ π.)
The power of the noise sequence is (1/2π) ∫_{−π}^{π} Sw(e^{jω})dω = N0Fs/2.
Sending a Random Sequence to a DAC
(Figure: the noise sequence w[n], with flat PSD Sw(e^{jω}) = N0Fs/2 for −π ≤ ω ≤ π, is sent to a DAC whose impulse response h(t) is a rectangular pulse of duration Ts (Ts = 1/Fs), producing the continuous-time process w(t) with a sinc²-shaped PSD Sw(f). Annotated on the figure: the DAC frequency response H(f) = Ts sinc(f/Fs); the power of the noise sequence, (1/2π) ∫_{−π}^{π} Sw(e^{jω})dω = N0Fs/2; and the total noise power, ∫_{−∞}^{∞} Sw(f)df.)
Relationship Between the PSD of w[n] and the PSD of w(t)
The random process w(t) is constructed from the random sequence w[n] as
w(t) = Σ_{n=−∞}^{∞} w[n] δ(t − nTs).
To find the PSD of w(t), truncate the random process to a time interval from −T = −NTs to T = NTs, obtaining wT(t) = Σ_{n=−N}^{N} w[n] δ(t − nTs).
Take the Fourier transform of the truncated process:
WT(f) = Σ_{n=−N}^{N} w[n] F{δ(t − nTs)} = Σ_{n=−N}^{N} w[n] e^{−j2πfnTs}.
Apply the basic definition of the PSD:
Sw(f) = lim_{T→∞} E{|WT(f)|²}/(2T) = lim_{N→∞} (1/((2N + 1)Ts)) E{ |Σ_{n=−N}^{N} w[n] e^{−j2πfnTs}|² } = (1/Ts) Σ_{k=−∞}^{∞} Rw[k] e^{−j2πkfTs},
where Rw[k] = E{w[n]w[n − k]} is the autocorrelation function of the random sequence w[n]. Note also that Rw[k] = Rw[−k].
Let ω = 2πfTs and recognize that Sw(e^{jω}) = Σ_{k=−∞}^{∞} Rw[k] e^{−jkω} is exactly the PSD of the random sequence w[n]. Then the PSD of w(t) is
Sw(f) = (1/Ts) Sw(e^{jω})|_{ω=2πfTs}.