Transcript

Normal Distribution

Proposition

If X has a normal distribution with mean µ and standard deviation σ, then

Z = (X − µ)/σ

has a standard normal distribution. Thus

P(a ≤ X ≤ b) = P((a − µ)/σ ≤ Z ≤ (b − µ)/σ) = Φ((b − µ)/σ) − Φ((a − µ)/σ)

P(X ≤ a) = Φ((a − µ)/σ)    P(X ≥ b) = 1 − Φ((b − µ)/σ)


Normal Distribution

Example (Problem 38): There are two machines available for cutting corks intended for use in wine bottles. The first produces corks with diameters that are normally distributed with mean 3cm and standard deviation 0.1cm. The second produces corks with diameters that have a normal distribution with mean 3.04cm and standard deviation 0.02cm. Acceptable corks have diameters between 2.9cm and 3.1cm. Which machine is more likely to produce an acceptable cork?

P(2.9 ≤ X1 ≤ 3.1) = P((2.9 − 3)/0.1 ≤ Z ≤ (3.1 − 3)/0.1)
= P(−1 ≤ Z ≤ 1) = 0.8413 − 0.1587 = 0.6826

P(2.9 ≤ X2 ≤ 3.1) = P((2.9 − 3.04)/0.02 ≤ Z ≤ (3.1 − 3.04)/0.02)
= P(−7 ≤ Z ≤ 3) = 0.9987 − 0 = 0.9987

Since 0.9987 > 0.6826, the second machine is more likely to produce an acceptable cork.
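The two standardizations above can be checked numerically with only the standard library; the helpers `phi` and `normal_prob` below are illustrative names, with Φ written in terms of the error function:

```python
from math import erf, sqrt

def phi(z):
    # standard normal cdf: Phi(z) = (1 + erf(z / sqrt(2))) / 2
    return 0.5 * (1 + erf(z / sqrt(2)))

def normal_prob(a, b, mu, sigma):
    # P(a <= X <= b) for X ~ N(mu, sigma^2), by standardizing
    return phi((b - mu) / sigma) - phi((a - mu) / sigma)

p1 = normal_prob(2.9, 3.1, 3.0, 0.1)    # machine 1
p2 = normal_prob(2.9, 3.1, 3.04, 0.02)  # machine 2
print(round(p1, 4), round(p2, 4))
```

The tiny differences from the slide's 0.6826 come from the table values being rounded to four decimals.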


Normal Distribution

Example (Problem 44): If bolt thread length is normally distributed, what is the probability that the thread length of a randomly selected bolt is (a) within 1.5 SDs of its mean value? (b) between 1 and 2 SDs from its mean value?

P(µ − 1.5σ ≤ X ≤ µ + 1.5σ) = P((µ − 1.5σ − µ)/σ ≤ Z ≤ (µ + 1.5σ − µ)/σ)
= P(−1.5 ≤ Z ≤ 1.5)
= 0.9332 − 0.0668 = 0.8664

2 · P(µ + σ ≤ X ≤ µ + 2σ) = 2P((µ + σ − µ)/σ ≤ Z ≤ (µ + 2σ − µ)/σ)
= 2P(1 ≤ Z ≤ 2)
= 2(0.9772 − 0.8413) = 0.2718
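Note that both answers are free of µ and σ, since the events are stated in units of SDs from the mean. A quick check of both probabilities (an illustrative sketch, not part of the text):

```python
from math import erf, sqrt

def phi(z):
    # standard normal cdf
    return 0.5 * (1 + erf(z / sqrt(2)))

# (a) within 1.5 SDs of the mean: P(-1.5 <= Z <= 1.5)
p_a = phi(1.5) - phi(-1.5)

# (b) between 1 and 2 SDs from the mean, on either side: 2 * P(1 <= Z <= 2)
p_b = 2 * (phi(2.0) - phi(1.0))

print(round(p_a, 4), round(p_b, 4))
```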


Normal Distribution

Proposition

{(100p)th percentile for N(µ, σ²)} = µ + {(100p)th percentile for N(0, 1)} · σ

Example (Problem 39): The width of a line etched on an integrated circuit chip is normally distributed with mean 3.000 µm and standard deviation 0.140 µm. What width value separates the widest 10% of all such lines from the other 90%?

ηN(3,0.140²)(90) = 3.0 + 0.140 · ηN(0,1)(90) = 3.0 + 0.140 · 1.28 = 3.1792
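The same percentile can be computed directly with `statistics.NormalDist` (Python 3.8+); the slide uses the table value 1.28 for the standard normal 90th percentile, so its answer differs slightly from the unrounded one below:

```python
from statistics import NormalDist

# 90th percentile of N(3, 0.140^2) = mu + sigma * (90th percentile of N(0, 1))
z90 = NormalDist().inv_cdf(0.90)        # unrounded; tables round this to 1.28
width = 3.0 + 0.140 * z90
print(round(z90, 4), round(width, 4))

# equivalent one-step computation
print(round(NormalDist(3.0, 0.140).inv_cdf(0.90), 4))
```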


Normal Distribution

Proposition

Let X be a binomial rv based on n trials with success probability p. Then if the binomial probability histogram is not too skewed, X has approximately a normal distribution with µ = np and σ = √(npq), where q = 1 − p. In particular, for x = a possible value of X,

P(X ≤ x) = B(x; n, p) ≈ (area under the normal curve to the left of x + 0.5) = Φ((x + 0.5 − np)/√(npq))

In practice, the approximation is adequate provided that both np ≥ 10 and nq ≥ 10, since there is then enough symmetry in the underlying binomial distribution.
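The effect of the +0.5 continuity correction can be seen numerically. The sketch below (an illustrative check with assumed parameters n = 25, p = 0.4, which satisfy the rule of thumb) compares the exact binomial cdf with the normal approximation, with and without the correction:

```python
from math import comb, erf, sqrt

def phi(z):
    # standard normal cdf
    return 0.5 * (1 + erf(z / sqrt(2)))

def binom_cdf(x, n, p):
    # exact B(x; n, p) = P(X <= x) for X ~ Bin(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

n, p = 25, 0.4                      # np = 10 >= 10 and nq = 15 >= 10
q = 1 - p
mu, sigma = n * p, sqrt(n * p * q)

for x in (8, 10, 12):
    exact = binom_cdf(x, n, p)
    approx = phi((x + 0.5 - mu) / sigma)   # with continuity correction
    crude = phi((x - mu) / sigma)          # without
    print(x, round(exact, 4), round(approx, 4), round(crude, 4))
```

The corrected values track the exact cdf noticeably better than the uncorrected ones.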


Normal Distribution

A graphical explanation for

P(X ≤ x) = B(x; n, p) ≈ (area under the normal curve to the left of x + 0.5) = Φ((x + 0.5 − np)/√(npq))


Normal Distribution

Example (Problem 54): Suppose that 10% of all steel shafts produced by a certain process are nonconforming but can be reworked (rather than having to be scrapped). Consider a random sample of 200 shafts, and let X denote the number among these that are nonconforming and can be reworked. What is the (approximate) probability that X is between 15 and 25 (inclusive)?

In this problem n = 200, p = 0.1 and q = 1 − p = 0.9. Thus np = 20 > 10 and nq = 180 > 10, so the normal approximation applies.

P(15 ≤ X ≤ 25) = B(25; 200, 0.1) − B(14; 200, 0.1)

≈ Φ((25 + 0.5 − 20)/√(200 · 0.1 · 0.9)) − Φ((14 + 0.5 − 20)/√(200 · 0.1 · 0.9))

= Φ(1.30) − Φ(−1.30)

= 0.9032 − 0.0968

= 0.8064
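The approximation can be compared against the exact binomial sum; the sketch below (an illustrative check, using unrounded z-values rather than table entries) computes both:

```python
from math import comb, erf, sqrt

def phi(z):
    # standard normal cdf
    return 0.5 * (1 + erf(z / sqrt(2)))

n, p = 200, 0.1
q = 1 - p
mu, sigma = n * p, sqrt(n * p * q)   # 20 and about 4.243

def binom_cdf(x):
    # exact B(x; n, p)
    return sum(comb(n, k) * p**k * q**(n - k) for k in range(x + 1))

exact = binom_cdf(25) - binom_cdf(14)
approx = phi((25 + 0.5 - mu) / sigma) - phi((14 + 0.5 - mu) / sigma)
print(round(exact, 4), round(approx, 4))
```

With unrounded z = ±5.5/√18 ≈ ±1.2964 the approximation comes out near 0.805, very close to the exact binomial value.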


Exponential Distribution

Definition: X is said to have an exponential distribution with parameter λ (λ > 0) if the pdf of X is

f(x; λ) = λe^(−λx) for x ≥ 0, and 0 otherwise

Remark:
1. Usually we use X ∼ EXP(λ) to denote that the random variable X has an exponential distribution with parameter λ.
2. In some sources, the pdf of the exponential distribution is given by

f(x; θ) = (1/θ)e^(−x/θ) for x ≥ 0, and 0 otherwise

The difference is that λ → 1/θ, i.e. θ = 1/λ.

Exponential Distribution

Proposition

If X ∼ EXP(λ), then

E(X) = 1/λ and V(X) = 1/λ²

And the cdf for X is

F(x; λ) = 1 − e^(−λx) for x ≥ 0, and 0 for x < 0
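The mean, variance, and cdf can be verified by simulation; this is an illustrative Monte Carlo sketch (the rate λ = 2 and the sample size are arbitrary choices):

```python
import random
from math import exp

random.seed(42)
lam = 2.0
n = 200_000
sample = [random.expovariate(lam) for _ in range(n)]

mean = sum(sample) / n                               # should be near 1/lam = 0.5
var = sum((x - mean) ** 2 for x in sample) / n       # should be near 1/lam^2 = 0.25

# empirical cdf at x0 = 0.5 vs F(x0) = 1 - e^(-lam*x0)
x0 = 0.5
ecdf = sum(x <= x0 for x in sample) / n
print(round(mean, 3), round(var, 3), round(ecdf, 3), round(1 - exp(-lam * x0), 3))
```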


Exponential Distribution

Proof:

E(X) = ∫_0^∞ xλe^(−λx) dx
= (1/λ) ∫_0^∞ (λx)e^(−λx) d(λx)
= (1/λ) ∫_0^∞ ye^(−y) dy        (substituting y = λx)
= (1/λ)[−ye^(−y)|_0^∞ + ∫_0^∞ e^(−y) dy]        (integration by parts: u = y, v = −e^(−y))
= (1/λ)[0 + (−e^(−y)|_0^∞)]
= 1/λ


Exponential Distribution

Proof (continued):

E(X²) = ∫_0^∞ x²λe^(−λx) dx
= (1/λ²) ∫_0^∞ (λx)²e^(−λx) d(λx)
= (1/λ²) ∫_0^∞ y²e^(−y) dy        (y = λx)
= (1/λ²)[−y²e^(−y)|_0^∞ + ∫_0^∞ 2ye^(−y) dy]        (integration by parts)
= (1/λ²)[0 + 2(−ye^(−y)|_0^∞ + ∫_0^∞ e^(−y) dy)]        (integration by parts)
= (2/λ²)[0 + (−e^(−y)|_0^∞)]
= 2/λ²


Exponential Distribution

Proof (continued):

V(X) = E(X²) − [E(X)]² = 2/λ² − (1/λ)² = 1/λ²

F(x) = ∫_0^x λe^(−λy) dy
= ∫_0^x e^(−λy) d(λy)
= ∫_0^(λx) e^(−z) dz        (z = λy)
= −e^(−z)|_0^(λx) = 1 − e^(−λx)


Exponential Distribution

Example (Problem 108): The article “Determination of the MTF of Positive Photoresists Using the Monte Carlo Method” (Photographic Sci. and Engr., 1983: 254-260) proposes the exponential distribution with parameter λ = 0.93 as a model for the distribution of a photon's free path length (µm) under certain circumstances. Suppose this is the correct model.

a. What is the expected path length, and what is the standard deviation of path length?

b. What is the probability that path length exceeds 3.0?

c. What value is exceeded by only 10% of all path lengths?
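Using E(X) = 1/λ, V(X) = 1/λ², and F(x; λ) = 1 − e^(−λx), the three parts reduce to short closed-form computations; the sketch below works them out (the variable names are illustrative):

```python
from math import exp, log, sqrt

lam = 0.93

# a. for an exponential rv, mean and standard deviation are both 1/lam
mean = 1 / lam
sd = sqrt(1 / lam**2)

# b. P(X > 3.0) = 1 - F(3.0; lam) = e^(-lam * 3.0)
p_exceed = exp(-lam * 3.0)

# c. the value x with P(X > x) = 0.10: e^(-lam*x) = 0.1, so x = ln(10)/lam
x90 = log(10) / lam

print(round(mean, 3), round(sd, 3), round(p_exceed, 4), round(x90, 3))
```

So the expected path length and SD are both about 1.075 µm, the exceedance probability is about 0.061, and roughly 2.48 µm is exceeded by only 10% of path lengths.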


Exponential Distribution

Proposition

Suppose that the number of events occurring in any time intervalof length t has a Poisson distribution with parameter αt (where α,the rate of the event process, is the expected number of eventsoccurring in 1 unit of time) and that numbers of occurrences innonoverlappong intervals are independent of one another. Thenthe distribution of elapsed time between the occurrence of twosuccessive events is exponential with parameter λ = α.

e.g.the number of customers visiting Costco in each hour =⇒Poisson distribution;the time between every two successive customers visiting Costco=⇒ Exponential distribution.
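The connection can be illustrated by simulation: build a process from exponential inter-arrival gaps and check that the counts per unit-time window behave like a Poisson(α) rv (mean ≈ variance ≈ α). This is an illustrative sketch with an assumed rate α = 4:

```python
import random

random.seed(7)
alpha = 4.0        # events per unit time
T = 50_000.0       # total simulated time

# build a Poisson process from exponential(alpha) inter-arrival gaps
t, counts, current = 0.0, [], 0
next_window = 1.0
while True:
    t += random.expovariate(alpha)
    if t > T:
        break
    while t > next_window:        # close any windows the gap skipped over
        counts.append(current)
        current = 0
        next_window += 1.0
    current += 1

mean_count = sum(counts) / len(counts)
var_count = sum((c - mean_count) ** 2 for c in counts) / len(counts)
print(round(mean_count, 2), round(var_count, 2))  # both should be near alpha
```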


Exponential Distribution

Example (Example 4.22): Suppose that calls are received at a 24-hour hotline according to a Poisson process with rate α = 0.5 call per day. Then the number of days X between successive calls has an exponential distribution with parameter value 0.5. The probability that more than 3 days elapse between calls is

P(X > 3) = 1 − P(X ≤ 3) = 1 − F(3; 0.5) = e^(−0.5·3) = 0.223.

The expected time between successive calls is 1/0.5 = 2 days.


Exponential Distribution

“Memoryless” Property: Let X = the time a certain component lasts (in hours), and assume the component lifetime is exponentially distributed with parameter λ. Then what is the probability that the component can last at least an additional t hours after working for t0 hours, i.e. what is P(X ≥ t + t0 | X ≥ t0)?

P(X ≥ t + t0 | X ≥ t0) = P({X ≥ t + t0} ∩ {X ≥ t0}) / P(X ≥ t0)
= P(X ≥ t + t0) / P(X ≥ t0)
= (1 − F(t + t0; λ)) / (1 − F(t0; λ))
= e^(−λ(t+t0)) / e^(−λt0)
= e^(−λt)
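The identity can be confirmed numerically from the survival function P(X ≥ x) = e^(−λx); the values of λ, t, t0 below are arbitrary illustrative choices:

```python
from math import exp

lam = 0.8

def surv(x):
    # P(X >= x) = 1 - F(x; lam) = e^(-lam*x) for x >= 0
    return exp(-lam * x)

t, t0 = 2.0, 5.0
conditional = surv(t + t0) / surv(t0)   # P(X >= t + t0 | X >= t0)
unconditional = surv(t)                 # P(X >= t)
print(round(conditional, 6), round(unconditional, 6))
```

The two printed probabilities agree (both equal e^(−λt)), regardless of how t0 is chosen.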


Exponential Distribution

“Memoryless” Property (continued): However, we have

P(X ≥ t) = 1 − F(t; λ) = e^(−λt)

Therefore, we have

P(X ≥ t) = P(X ≥ t + t0 | X ≥ t0)

for any positive t and t0. In words, the distribution of additional lifetime is exactly the same as the original distribution of lifetime, so at each point in time the component shows no effect of wear. In other words, the distribution of remaining lifetime is independent of current age.


Gamma Distribution

Definition: For α > 0, the gamma function Γ(α) is defined by

Γ(α) = ∫_0^∞ x^(α−1) e^(−x) dx

Properties of the gamma function:
1. For any α > 1, Γ(α) = (α − 1) · Γ(α − 1) [via integration by parts];
2. For any positive integer n, Γ(n) = (n − 1)!;
3. Γ(1/2) = √π.

e.g. Γ(4) = (4 − 1)! = 6 and Γ(5/2) = (3/2) · Γ(3/2) = (3/2)[(1/2) · Γ(1/2)] = (3/4)√π
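All three properties, and the worked examples, can be checked directly against `math.gamma`:

```python
from math import gamma, pi, sqrt, factorial

# property 2: Gamma(n) = (n - 1)! for positive integers n
assert all(gamma(n) == factorial(n - 1) for n in range(1, 8))

# property 3: Gamma(1/2) = sqrt(pi)
print(gamma(0.5), sqrt(pi))

# worked example: Gamma(5/2) = (3/4) * sqrt(pi), via the recurrence (property 1)
print(gamma(2.5), 0.75 * sqrt(pi))
print(gamma(4))   # = 3! = 6
```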


Gamma Distribution

Definition: A continuous random variable X is said to have a gamma distribution if the pdf of X is

f(x; α, β) = (1/(β^α Γ(α))) x^(α−1) e^(−x/β) for x ≥ 0, and 0 otherwise

where the parameters α and β satisfy α > 0, β > 0. The standard gamma distribution has β = 1, so the pdf of a standard gamma rv is

f(x; α) = (1/Γ(α)) x^(α−1) e^(−x) for x ≥ 0, and 0 otherwise


Gamma Distribution

Remark:
1. We use X ∼ GAM(α, β) to denote that the rv X has a gamma distribution with parameters α and β.
2. If we let α = 1 and β = 1/λ, then we get the exponential distribution:

f(x; 1, 1/λ) = (1/((1/λ)Γ(1))) x^(1−1) e^(−x/(1/λ)) = λe^(−λx) for x ≥ 0, and 0 otherwise

3. When X is a standard gamma rv (β = 1), the cdf of X,

F(x; α) = ∫_0^x (y^(α−1) e^(−y) / Γ(α)) dy

is called the incomplete gamma function. There are extensive tables of F(x; α) available (Appendix Table A.4).

Gamma Distribution

Proposition

If X ∼ GAM(α, β), then

E(X) = αβ and V(X) = αβ²

Furthermore, for any x > 0, the cdf of X is given by

P(X ≤ x) = F(x; α, β) = F(x/β; α)

where F(•; α) is the incomplete gamma function.


Gamma Distribution

Example: The survival time (in days) of a white rat that was subjected to a certain level of X-ray radiation is a random variable X ∼ GAM(5, 4). Then what is

a. the probability that the survival time is at most 16 days;

b. the probability that the survival time is between 16 days and 20 days (not inclusive);

c. the expected survival time.
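Using P(X ≤ x) = F(x/β; α), parts (a) and (b) reduce to incomplete gamma values F(4; 5) and F(5; 5), and part (c) is E(X) = αβ. The sketch below evaluates the incomplete gamma function by simple numerical integration instead of Table A.4 (the helper name `std_gamma_cdf` is illustrative):

```python
from math import gamma, exp

alpha, beta = 5, 4    # X ~ GAM(5, 4), survival time in days

def std_gamma_cdf(x, a, steps=20_000):
    # incomplete gamma function F(x; a), via the trapezoidal rule
    h = x / steps
    f = lambda y: y ** (a - 1) * exp(-y) / gamma(a)
    s = 0.5 * (f(0.0) + f(x)) + sum(f(i * h) for i in range(1, steps))
    return s * h

# a. P(X <= 16) = F(16/beta; alpha) = F(4; 5)
p_a = std_gamma_cdf(16 / beta, alpha)

# b. P(16 < X < 20) = F(5; 5) - F(4; 5)
p_b = std_gamma_cdf(20 / beta, alpha) - p_a

# c. E(X) = alpha * beta
print(round(p_a, 3), round(p_b, 3), alpha * beta)
```

These match the tabled values F(4; 5) = 0.371 and F(5; 5) = 0.560, and the expected survival time is 20 days.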


Chi-Squared Distribution

Definition: Let ν be a positive integer. Then a random variable X is said to have a chi-squared distribution with parameter ν if the pdf of X is the gamma density with α = ν/2 and β = 2. The pdf of a chi-squared rv is thus

f(x; ν) = (1/(2^(ν/2) Γ(ν/2))) x^((ν/2)−1) e^(−x/2) for x ≥ 0, and 0 for x < 0

The parameter ν is called the number of degrees of freedom (df) of X. The symbol χ² is often used in place of “chi-squared”.


Chi-Squared Distribution

Remark:
1. Usually, we use X ∼ χ²(ν) to denote that X is a chi-squared rv with parameter ν;
2. If X1, X2, . . . , Xn are n independent standard normal rv's, then X1² + X2² + · · · + Xn² has the same distribution as χ²(n).

Weibull Distribution

Definition: A random variable X is said to have a Weibull distribution with parameters α and β (α > 0, β > 0) if the pdf of X is

f(x; α, β) = (α/β^α) x^(α−1) e^(−(x/β)^α) for x ≥ 0, and 0 for x < 0

Remark:
1. The family of Weibull distributions was introduced by the Swedish physicist Waloddi Weibull in 1939.
2. We use X ∼ WEB(α, β) to denote that the rv X has a Weibull distribution with parameters α and β.

Weibull Distribution

Remark:
3. When α = 1, the pdf becomes

f(x; β) = { (1/β) e^(−x/β),   x ≥ 0
          { 0,                x < 0

which is the pdf of an exponential distribution with parameter λ = 1/β. Thus we see that the exponential distribution is a special case of both the gamma and Weibull distributions.
4. There are gamma distributions that are not Weibull distributions and vice versa, so neither family is a subset of the other.
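The α = 1 special case can be verified directly. A small Python check, using β = 2 as an illustrative scale, confirms that the Weibull pdf collapses to the exponential pdf with λ = 1/β:

```python
import math

def weibull_pdf(x, alpha, beta):
    # f(x; alpha, beta) = (alpha / beta**alpha) * x**(alpha-1) * exp(-(x/beta)**alpha)
    if x < 0:
        return 0.0
    return (alpha / beta**alpha) * x**(alpha - 1) * math.exp(-(x / beta)**alpha)

def exp_pdf(x, lam):
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

beta = 2.0   # illustrative scale parameter
for x in [0.1, 1.0, 3.0, 7.5]:
    assert math.isclose(weibull_pdf(x, 1.0, beta), exp_pdf(x, 1.0 / beta))
print("Weibull(alpha=1, beta) matches Exponential(lambda=1/beta)")
```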

Weibull Distribution

Proposition
Let X be a random variable such that X ∼ WEB(α, β). Then

E(X) = β Γ(1 + 1/α)   and   V(X) = β² { Γ(1 + 2/α) − [Γ(1 + 1/α)]² }

The cdf of X is

F(x; α, β) = { 1 − e^(−(x/β)^α),   x ≥ 0
             { 0,                  x < 0
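These formulas can be cross-checked numerically. The sketch below, with illustrative values α = 2 and β = 1.5, compares the closed-form mean against a Riemann sum built from increments of the cdf:

```python
import math

def weibull_cdf(x, alpha, beta):
    return 1 - math.exp(-(x / beta)**alpha) if x >= 0 else 0.0

def weibull_mean(alpha, beta):
    return beta * math.gamma(1 + 1/alpha)

def weibull_var(alpha, beta):
    g1 = math.gamma(1 + 1/alpha)
    return beta**2 * (math.gamma(1 + 2/alpha) - g1**2)

alpha, beta = 2.0, 1.5   # illustrative values
dx = 0.001
# E(X) ~ sum of x * dF(x) over a fine grid (tail beyond x = 20 is negligible)
num_mean = sum(x * (weibull_cdf(x + dx, alpha, beta) - weibull_cdf(x, alpha, beta))
               for x in [i * dx for i in range(20000)])
assert abs(num_mean - weibull_mean(alpha, beta)) < 1e-2
print(round(weibull_mean(alpha, beta), 4), round(weibull_var(alpha, beta), 4))  # → 1.3293 0.4829
```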

Weibull Distribution

Example:
The shear strength (in pounds) of a spot weld is a Weibull-distributed random variable, X ∼ WEB(400, 2/3).
a. Find P(X > 410).
b. Find P(X > 410 | X > 390).
c. Find E(X) and V(X).
d. Find the 95th percentile.
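Part d amounts to inverting the cdf: setting 1 − e^(−(x/β)^α) = p gives x = β(−ln(1 − p))^(1/α). A sketch with generic illustrative parameters (not the shear-strength ones):

```python
import math

def weibull_quantile(p, alpha, beta):
    """Invert F(x) = 1 - exp(-(x/beta)**alpha) = p  =>  x = beta * (-ln(1-p))**(1/alpha)."""
    return beta * (-math.log(1 - p))**(1 / alpha)

# Sanity check: the cdf evaluated at the quantile returns p
alpha, beta = 2.0, 1.5   # illustrative values
x95 = weibull_quantile(0.95, alpha, beta)
assert math.isclose(1 - math.exp(-(x95 / beta)**alpha), 0.95)
```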

Weibull Distribution

In practical situations, γ = min(X) > 0 and X − γ has a Weibull distribution.

Example (Problem 74):
Let X = the time (in 10⁻¹ weeks) from shipment of a defective product until the customer returns the product. Suppose that the minimum return time is γ = 3.5 and that the excess X − 3.5 over the minimum has a Weibull distribution with parameters α = 2 and β = 1.5.
a. What is the cdf of X?
b. What are the expected return time and variance of return time?
c. Compute P(X > 5).
d. Compute P(5 ≤ X ≤ 8).
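The parts above can be sketched directly from the shifted cdf F(x) = 1 − e^(−((x − 3.5)/1.5)^2) for x ≥ 3.5, since the shift γ adds to the mean and leaves the variance unchanged:

```python
import math

ALPHA, BETA, GAMMA = 2.0, 1.5, 3.5   # parameters from the problem

def cdf_x(x):
    """cdf of X when X - GAMMA ~ Weibull(ALPHA, BETA)."""
    if x < GAMMA:
        return 0.0
    return 1 - math.exp(-((x - GAMMA) / BETA)**ALPHA)

mean_x = GAMMA + BETA * math.gamma(1 + 1/ALPHA)   # shift adds to the mean
var_x = BETA**2 * (math.gamma(1 + 2/ALPHA) - math.gamma(1 + 1/ALPHA)**2)
p_gt_5 = 1 - cdf_x(5)                 # = exp(-1)
p_5_to_8 = cdf_x(8) - cdf_x(5)
print(round(mean_x, 4), round(var_x, 4), round(p_gt_5, 4), round(p_5_to_8, 4))
# → 4.8293 0.4829 0.3679 0.3678
```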


Lognormal Distribution

Definition
A nonnegative rv X is said to have a lognormal distribution if the rv Y = ln(X) has a normal distribution. The resulting pdf of a lognormal rv when ln(X) is normally distributed with parameters µ and σ is

f(x; µ, σ) = { (1/(√(2π) σ x)) e^(−[ln(x)−µ]²/(2σ²)),   x ≥ 0
             { 0,                                       x < 0

Remark:
1. We use X ∼ LOGN(µ, σ²) to denote that the rv X has a lognormal distribution with parameters µ and σ.
2. Notice here that the parameter µ is not the mean and σ² is not the variance, i.e.

µ ≠ E(X) and σ² ≠ V(X)

Lognormal Distribution

Proposition
If X ∼ LOGN(µ, σ²), then

E(X) = e^(µ+σ²/2) and V(X) = e^(2µ+σ²) · (e^(σ²) − 1)

The cdf of X is

F(x; µ, σ) = P(X ≤ x) = P[ln(X) ≤ ln(x)]
           = P(Z ≤ (ln(x) − µ)/σ) = Φ((ln(x) − µ)/σ),   x ≥ 0

where Φ(z) is the cdf of the standard normal rv Z.
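A short numerical check of these formulas, with illustrative µ = 0 and σ = 1 (the cdf expression implies the median of a lognormal rv is e^µ, since Φ(0) = 1/2):

```python
import math

def lognorm_mean(mu, sigma):
    return math.exp(mu + sigma**2 / 2)

def lognorm_var(mu, sigma):
    return math.exp(2*mu + sigma**2) * (math.exp(sigma**2) - 1)

def lognorm_cdf(x, mu, sigma):
    """F(x) = Phi((ln x - mu)/sigma) for x > 0; Phi computed via math.erf."""
    if x <= 0:
        return 0.0
    z = (math.log(x) - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 0.0, 1.0   # illustrative values
assert math.isclose(lognorm_cdf(math.exp(mu), mu, sigma), 0.5)  # median = e^mu
print(round(lognorm_mean(mu, sigma), 4))   # e^(1/2) → 1.6487
```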

Lognormal Distribution

Example (Problem 115)
Let Ii be the input current to a transistor and I0 be the output current. Then the current gain is proportional to ln(I0/Ii). Suppose the constant of proportionality is 1 (which amounts to choosing a particular unit of measurement), so that current gain = X = ln(I0/Ii). Assume X is normally distributed with µ = 1 and σ = 0.05.
a. What is the probability that the output current is more than twice the input current?
b. What are the expected value and variance of the ratio of output to input current?
c. What value r is such that there is only a 5% chance that the ratio of output to input current exceeds r?
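All three parts follow from the fact that the ratio I0/Ii = e^X is lognormal with parameters µ = 1 and σ = 0.05; a sketch (taking z_0.95 ≈ 1.645 as given):

```python
import math

def phi(z):  # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 1.0, 0.05
# a. P(I0 > 2*Ii) = P(X > ln 2) with X ~ N(mu, sigma)
p_a = 1 - phi((math.log(2) - mu) / sigma)
# b. R = I0/Ii = e^X is lognormal: E(R) = e^(mu + sigma^2/2), V(R) = e^(2mu+sigma^2)(e^(sigma^2)-1)
e_r = math.exp(mu + sigma**2 / 2)
v_r = math.exp(2*mu + sigma**2) * (math.exp(sigma**2) - 1)
# c. P(R > r) = 0.05  =>  ln r = mu + z_0.95 * sigma, with z_0.95 ~ 1.645
r = math.exp(mu + 1.645 * sigma)
print(round(p_a, 4), round(e_r, 4), round(r, 4))   # → 1.0 2.7217 2.9513
```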

Beta Distribution

Definition
A random variable X is said to have a beta distribution with parameters α, β (both positive), A, and B if the pdf of X is

f(x; α, β, A, B) = { (1/(B−A)) · (Γ(α+β)/(Γ(α)·Γ(β))) · ((x−A)/(B−A))^(α−1) · ((B−x)/(B−A))^(β−1),   A ≤ x ≤ B
                   { 0,                                                                              otherwise

The case A = 0, B = 1 gives the standard beta distribution.

Remark: We use X ∼ BETA(α, β, A, B) to denote that the rv X has a beta distribution with parameters α, β, A, and B.

Beta Distribution

Proposition
If X ∼ BETA(α, β, A, B), then

E(X) = A + (B − A) · α/(α + β)   and   V(X) = (B − A)² αβ / [(α + β)²(α + β + 1)]
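These moment formulas can be cross-checked against a Riemann sum of x·f(x) using the pdf from the definition; the sketch uses illustrative parameters α = 2, β = 3, A = 0, B = 10:

```python
import math

def beta_mean(alpha, beta, A, B):
    return A + (B - A) * alpha / (alpha + beta)

def beta_var(alpha, beta, A, B):
    return (B - A)**2 * alpha * beta / ((alpha + beta)**2 * (alpha + beta + 1))

def beta_pdf(x, alpha, beta, A, B):
    if not A <= x <= B:
        return 0.0
    c = math.gamma(alpha + beta) / (math.gamma(alpha) * math.gamma(beta))
    u = (x - A) / (B - A)
    return c / (B - A) * u**(alpha - 1) * (1 - u)**(beta - 1)

a, b, A, B = 2.0, 3.0, 0.0, 10.0   # illustrative parameters
dx = 0.001
num_mean = sum(x * beta_pdf(x, a, b, A, B) * dx for x in [A + i*dx for i in range(10000)])
assert abs(num_mean - beta_mean(a, b, A, B)) < 1e-2
print(beta_mean(a, b, A, B))   # A + (B-A)*2/5 → 4.0
```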

Beta Distribution

Example (Problem 127)
An individual's credit score is a number calculated from that person's credit history which helps a lender determine how much he/she should be loaned or what credit limit should be established for a credit card. An article in the Los Angeles Times gave data which suggested that a beta distribution with parameters A = 150, B = 850, α = 8, β = 2 would provide a reasonable approximation to the distribution of American credit scores. [Note: credit scores are integer-valued.]

a. Let X represent a randomly selected American credit score.What are the mean value and standard deviation of thisrandom variable? What is the probability that X is within 1standard deviation of its mean value?

b. What is the approximate probability that a randomly selectedscore will exceed 750 (which lenders consider a very goodscore)?
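A numerical sketch of both parts from the stated parameters (the exceedance probability in part b is approximated here by a Riemann sum of the pdf, standing in for a table or software lookup):

```python
import math

A, B, alpha, beta = 150.0, 850.0, 8.0, 2.0   # parameters from the problem

mean = A + (B - A) * alpha / (alpha + beta)
var = (B - A)**2 * alpha * beta / ((alpha + beta)**2 * (alpha + beta + 1))
sd = math.sqrt(var)
print(round(mean, 1), round(sd, 1))   # → 710.0 84.4

# b. P(X > 750) via a Riemann sum of the pdf on [750, B]
def pdf(x):
    c = math.gamma(alpha + beta) / (math.gamma(alpha) * math.gamma(beta))
    u = (x - A) / (B - A)
    return c / (B - A) * u**(alpha - 1) * (1 - u)**(beta - 1)

dx = 0.01
p = sum(pdf(x) * dx for x in [750 + i*dx for i in range(10000)])
print(round(p, 3))   # ≈ 0.376
```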
