
Statistics and Probability Letters 90 (2014) 78–84

Contents lists available at ScienceDirect

Statistics and Probability Letters

journal homepage: www.elsevier.com/locate/stapro

Estimating the parameters of an α-stable distribution using the existence of moments of order statistics

Mohammad Mohammadi, Adel Mohammadpour ∗

Department of Statistics, Faculty of Mathematics & Computer Science, Amirkabir University of Technology (Tehran Polytechnic), 424, Hafez Ave., Tehran, Iran

Article info

Article history: Received 29 November 2013; Received in revised form 3 March 2014; Accepted 12 March 2014; Available online 26 March 2014

Keywords: α-stable distribution; Order statistics; Estimation; Asymptotic distribution

Abstract

Necessary and sufficient conditions for the existence of moments of order statistics of α-stable random variables are introduced. Using the obtained results, all parameters of the α-stable distribution are estimated.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

The existence of moments is an advantage for parameter estimation, and there are several straightforward and efficient moment-based estimators in classical statistics. However, the non-existence of the variance for non-Gaussian stable random variables is an obstacle to introducing such estimators. On the other hand, the variances of many order statistics of an α-stable distribution do exist. This is the main reason to make inference through the order statistics of a random sample from an α-stable distribution. In other words, one can adapt well-known order-statistics-based estimators, such as L-estimators and best linear unbiased or invariant estimators, to the α-stable distribution parameters. In this paper, we prove a basic lemma about the existence of moments of order statistics of stable distributions and propose new estimators using its results.

A random variable X is said to have an α-stable (non-Gaussian α-stable) distribution if there are parameters 0 < α < 2, σ ≥ 0, −1 ≤ β ≤ 1 and a real number µ such that its characteristic function has the following form:

E exp(itX) =
    exp{ −σ^α |t|^α [1 − iβ (sign t) tan(πα/2)] + itµ }    if α ≠ 1,
    exp{ −σ |t| [1 + iβ (2/π)(sign t) log|t|] + itµ }       if α = 1.

Since an α-stable distribution is characterized by four parameters (α, the index of stability; σ, the scale; β, the skewness; and µ, the location parameter), we denote it by Sα(σ, β, µ) and write X ∼ Sα(σ, β, µ) to indicate that the random variable X has the α-stable distribution Sα(σ, β, µ). From the characteristic function, X ∼ Sα(σ, β, µ) if and only if

X d= σZ + µ                          if α ≠ 1,
X d= σZ + µ + β(2/π)σ log σ          if α = 1,    (1)

∗ Corresponding author. Tel.: +98 2164542500; fax: +98 2166497930. E-mail addresses: [email protected] (M. Mohammadi), [email protected], [email protected] (A. Mohammadpour).

http://dx.doi.org/10.1016/j.spl.2014.03.008
0167-7152/© 2014 Elsevier B.V. All rights reserved.


where Z ∼ Sα(1, β, 0). The notation ‘‘d=’’ denotes equality in distribution. An α-stable random variable is symmetric about µ if and only if β = 0; for more information on the basic concepts of α-stable random variables, see Samorodnitsky and Taqqu (1994).
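Relation (1) also suggests a way to simulate from Sα(σ, β, µ) once Z ∼ Sα(1, β, 0) can be generated. A minimal sketch in Python, using the classical Chambers–Mallows–Stuck method for Z (the method is standard but not part of this paper; the function names are ours):

```python
import math
import random

def standard_stable(alpha, beta):
    """Draw Z ~ S_alpha(1, beta, 0) via the Chambers-Mallows-Stuck method."""
    u = random.uniform(-math.pi / 2, math.pi / 2)  # U ~ Uniform(-pi/2, pi/2)
    w = random.expovariate(1.0)                    # W ~ Exp(1), independent of U
    if alpha != 1:
        b = math.atan(beta * math.tan(math.pi * alpha / 2)) / alpha
        s = (1 + beta ** 2 * math.tan(math.pi * alpha / 2) ** 2) ** (1 / (2 * alpha))
        return (s * math.sin(alpha * (u + b)) / math.cos(u) ** (1 / alpha)
                * (math.cos(u - alpha * (u + b)) / w) ** ((1 - alpha) / alpha))
    # alpha == 1 case
    return (2 / math.pi) * ((math.pi / 2 + beta * u) * math.tan(u)
            - beta * math.log((math.pi / 2) * w * math.cos(u) / (math.pi / 2 + beta * u)))

def stable(alpha, beta, sigma=1.0, mu=0.0):
    """Draw X ~ S_alpha(sigma, beta, mu) using relation (1)."""
    z = standard_stable(alpha, beta)
    if alpha != 1:
        return sigma * z + mu
    return sigma * z + mu + beta * (2 / math.pi) * sigma * math.log(sigma)
```

For β = 0 the draw is symmetric about µ, which matches the characterization above.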

The class of α-stable distributions has been used for modeling in finance, telecommunications and medicine; see e.g. Kabašinskas et al. (2009), Nolan (2003) and Gallardo (2000). These applications motivate estimating the parameters.

A few methods have been proposed for estimating the four parameters of α-stable distributions. McCulloch (1986) used sample quantiles to find consistent estimators. Kogon and Williams (1998) estimated the parameters by the empirical characteristic function, and Nolan (2001) computed maximum likelihood estimators for α ≥ 0.4. Also, Antoniadis et al. (2006) proposed a wavelet-based estimation method. In this article all parameters are estimated using the existence of moments of order statistics. We show through a simulation study that the presented estimators work well for small tail indexes. It is also shown that the presented estimator for 1/α has a smaller asymptotic variance in comparison to some popular estimators.

In Section 2, necessary and sufficient conditions for the existence of moments of order statistics of α-stable random variables are introduced. In Section 3, using the results of Section 2, we estimate the α-stable distribution parameters, and in Section 4 the obtained estimators are compared to some popular estimators. The paper is concluded in Section 5.

2. Moments of order statistics

For an α-stable random variable X ∼ Sα(σ, β, µ) with 0 < α < 2, we have

lim_{λ→∞} λ^α P(X > λ) = C_α (1 + β)/2 σ^α,
lim_{λ→∞} λ^α P(X < −λ) = C_α (1 − β)/2 σ^α,    (2)

where C_α = (1 − α)/(Γ(2 − α) cos(πα/2)) if α ≠ 1 and C_α = 2/π if α = 1. This property is called the regularly varying property; for a proof see Samorodnitsky and Taqqu (1994). By the regularly varying property, we have the following theorem.
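The constant C_α is continuous in α: near α = 1 we have cos(πα/2) ≈ −(π/2)(α − 1), so the ratio (1 − α)/(Γ(2 − α) cos(πα/2)) tends to 2/π. A quick numerical check (illustrative only; the function name is ours):

```python
import math

def stable_tail_constant(alpha):
    """C_alpha from the regularly varying tail property (2)."""
    if alpha == 1:
        return 2 / math.pi
    return (1 - alpha) / (math.gamma(2 - alpha) * math.cos(math.pi * alpha / 2))

# C_alpha varies smoothly through alpha = 1:
values = [stable_tail_constant(a) for a in (0.999, 1.0, 1.001)]
```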

Theorem 2.1. Let m be a positive number. Let X_i ∼ Sα(σ_i, β_i, µ_i), i = 1, . . . , n, be a sequence of independent α-stable random variables with 0 < α < 2, and let X_{1:n} ≤ X_{2:n} ≤ · · · ≤ X_{n:n} be the corresponding order statistics.

(I) Suppose −1 < β_i < 1 for i = 1, . . . , n. In order that E X_{k:n}^m exists, it is necessary and sufficient that α^{−1}m < k < n + 1 − α^{−1}m.
(II) Suppose α ≥ 1 and β_i = 1 or β_i = −1 for i = 1, . . . , n. In order that E X_{k:n}^m exists, it is sufficient that α^{−1}m < k < n + 1 − α^{−1}m.
(III) Suppose α < 1 and β_i = 1 or β_i = −1 for i = 1, . . . , n. In order that E X_{k:n}^m exists, it is necessary and sufficient that k < n + 1 − α^{−1}m or α^{−1}m < k, respectively.
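In case (I), the condition can be checked mechanically. A small helper (illustrative, not from the paper) returning the admissible ranks k for given m, n and α:

```python
def moment_exists(k, n, m, alpha):
    """Case (I) of Theorem 2.1: E X_{k:n}^m exists iff m/alpha < k < n + 1 - m/alpha."""
    return m / alpha < k < n + 1 - m / alpha

def admissible_ranks(n, m, alpha):
    """All ranks k whose m-th order-statistic moment exists in case (I)."""
    return [k for k in range(1, n + 1) if moment_exists(k, n, m, alpha)]
```

For example, with n = 10, m = 2 and α = 1.5, the second moment of X_{k:10} exists for k = 2, . . . , 9, while for very small α no rank has a finite second moment.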

Proof. It can be shown that

P(X_{k:n} > λ) = Σ_{j=n−k+1}^{n} (−1)^{j−(n−k+1)} (j−1 choose n−k) Σ_{1≤i_1<i_2<···<i_j≤n} P(X_{i_1} > λ, . . . , X_{i_j} > λ),
P(X_{k:n} < −λ) = Σ_{j=k}^{n} (−1)^{j−k} (j−1 choose k−1) Σ_{1≤i_1<i_2<···<i_j≤n} P(X_{i_1} < −λ, . . . , X_{i_j} < −λ);

see Samorodnitsky (1986) for a proof. From independence and the regularly varying property, equations (2), we have

lim_{λ→∞} λ^{(n−k+1)α} P(X_{i_1} > λ, . . . , X_{i_j} > λ)
    = ∏_{h=1}^{n−k+1} lim_{λ→∞} λ^α P(X_{i_h} > λ) = ∏_{h=1}^{n−k+1} C_α (1 + β_{i_h})/2 σ_{i_h}^α    if j = n − k + 1,
    = 0                                                                                                if j > n − k + 1,

and

lim_{λ→∞} λ^{kα} P(X_{i_1} < −λ, . . . , X_{i_j} < −λ)
    = ∏_{h=1}^{k} lim_{λ→∞} λ^α P(X_{i_h} < −λ) = ∏_{h=1}^{k} C_α (1 − β_{i_h})/2 σ_{i_h}^α    if j = k,
    = 0                                                                                         if j > k.


Therefore,

lim_{λ→∞} λ^{(n−k+1)α} P(X_{k:n} > λ) = Σ_{1≤i_1<i_2<···<i_{n−k+1}≤n} ∏_{h=1}^{n−k+1} C_α (1 + β_{i_h})/2 σ_{i_h}^α,    (3)
lim_{λ→∞} λ^{kα} P(X_{k:n} < −λ) = Σ_{1≤i_1<i_2<···<i_k≤n} ∏_{h=1}^{k} C_α (1 − β_{i_h})/2 σ_{i_h}^α.    (4)

Take d_{1,k} and d_{2,k} equal to the right-hand sides of (3) and (4), respectively. Suppose −1 < β_i < 1 for i = 1, . . . , n. We have

m^{−1} E|X_{k:n}|^m = m^{−1} ∫_0^{+∞} P(|X_{k:n}|^m > ν) dν
    = m^{−1} ∫_0^{ϵ} P(|X_{k:n}|^m > ν) dν + ∫_{ϵ^{1/m}}^{+∞} λ^{m−1} [ P(X_{k:n} > λ) + P(X_{k:n} < −λ) ] dλ,

where ϵ is an arbitrary positive number. The first term in the last equality is finite. Using (3) and (4), the second term is finite if and only if

∫_{ϵ^{1/m}}^{+∞} λ^{m−1} [ d_{1,k}/λ^{(n−k+1)α} + d_{2,k}/λ^{kα} ] dλ < ∞,

and this holds if and only if α^{−1}m < k < n + 1 − α^{−1}m. Part (II) can be proved in the same way. It is known that every α-stable random variable with α < 1 and β = 1 or β = −1 is concentrated on the interval [µ, +∞) or (−∞, µ], respectively, where µ is the location parameter. By these facts and a procedure similar to the proof of case (I), one can prove (III). □

Remark 2.2. The location and scale parameters have no influence on the existence of moments of order statistics, and the influence of the skewness parameter is limited.

3. Estimation

Our approach to estimating the parameters is based on Theorem 2.1. This section is divided into three subsections. First, estimators of the location and scale parameters are introduced. Second, we use this result to estimate the tail index, and its asymptotic behavior is investigated. Finally, we give an estimator for the skewness parameter.

3.1. Estimation of µ and σ

A class of unbiased estimators for the location parameter in the symmetric case is introduced first. Then, two classes of unbiased estimators of the pair of location and scale parameters are presented for the asymmetric case.

Let X_i ∼ Sα(σ, 0, µ) and Z_i ∼ Sα(1, 0, 0), i = 1, . . . , n, be two sequences of independent α-stable random variables. By (1) we have X_{i:n} d= σ Z_{i:n} + µ for i = 1, . . . , n. On the other hand, E Z_{i:n} = −E Z_{n−i+1:n}, i = 1, . . . , n, whenever the expected value exists. Take

C(k) = 1/(n − 2k + 2) Σ_{i=k}^{n−k+1} X_{i:n},    (5)

for integer k in the interval (α^{−1}, n + 1 − α^{−1}) such that 2k ≤ n + 1. If n is an odd number, then E C(k) = µ. Let [x] be the integer part of x. If k = α^{−1}m + 1 when α^{−1}m is a positive integer, or k = [α^{−1}m] + 1 otherwise, then by Theorem 2.1, E C(k)^m exists. The obtained result can be summarized in the following lemma.

Lemma 3.1. Let n be an odd number. Suppose X_i ∼ Sα(σ, 0, µ), i = 1, . . . , n, is an independent and identically distributed (i.i.d.) sequence of α-stable random variables. Then (5) is an unbiased estimator for µ.

In the case of i.i.d. random variables, asymptotic distributions of the estimators obtained in Lemma 3.1 are considered in Csörgő et al. (1986). Also, (5) is a special case of the trimmed mean; for more details, see DasGupta (2008), Chapter 7.

Remark 3.2. In Lemma 3.1, the dependence of C(k) on the parameter α can be circumvented by a restriction on α. For example, if we assume that α ≥ 0.1, then Lemma 3.1 is valid for integer k in the interval (10, n + 1 − 10).
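The estimator (5) is simply a symmetrically trimmed mean; a minimal sketch (the function name is ours):

```python
def c_of_k(sample, k):
    """Trimmed-mean location estimator (5): average of X_{k:n}, ..., X_{n-k+1:n}.

    k is 1-based, as in the paper, and must satisfy 2k <= n + 1.
    """
    x = sorted(sample)             # order statistics X_{1:n} <= ... <= X_{n:n}
    n = len(x)
    if not 2 * k <= n + 1:
        raise ValueError("need 2k <= n + 1")
    kept = x[k - 1:n - k + 1]      # X_{k:n}, ..., X_{n-k+1:n}
    return sum(kept) / len(kept)   # len(kept) = n - 2k + 2
```

With k chosen as in Remark 3.2, the estimator discards the extreme order statistics whose moments may fail to exist.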

The method described above can be extended to the asymmetric case. Based on (1), for α ≠ 1, we can write the following equations:

E X_{i:n} = σ E Z_{i:n} + µ,
E X_{j:n} = σ E Z_{j:n} + µ,


for i, j = k, . . . , n − k + 1, i ≠ j (k is defined as in Lemma 3.1). Solving these equations in terms of µ and σ, and then replacing E X_{i:n} and E X_{j:n} with X_{i:n} and X_{j:n}, respectively, we find the following classes:

C_1(k) = { (X_{i:n} E Z_{j:n} − X_{j:n} E Z_{i:n}) / (E Z_{j:n} − E Z_{i:n}) ; i, j = k, . . . , n − k + 1, i ≠ j },    (6)
C_2(k) = { (X_{j:n} − X_{i:n}) / (E Z_{j:n} − E Z_{i:n}) ; i, j = k, . . . , n − k + 1, i ≠ j }.    (7)

For every ζ_{X,Z} ∈ C_1(k) and ϕ_{X,Z} ∈ C_2(k) we have E(ζ_{X,Z}) = µ and E(ϕ_{X,Z}) = σ, respectively. A generalization of these results is given in the following lemma.

Lemma 3.3. Let X_i ∼ Sα_i(σ, β_i, µ) and Z_i ∼ Sα_i(1, β_i, 0), i = 1, . . . , n, be two sequences of independent stable random variables, where α_i and β_i are known and α_i ≠ 1 for every i. In the case of α_i = 1, assume that β_i = 0. Then, for k ∈ (max_i α_i^{−1}, n + 1 − max_i α_i^{−1}), C_1(k) and C_2(k) in (6) and (7) are two classes of unbiased estimators for µ and σ, respectively.

In Lemma 3.3, since the tail indexes and skewness parameters are known, one can calculate E Z_{j:n}. In applications, in the i.i.d. case, E Z_{i:n} and E Z_{j:n} can be estimated after estimating α and β; this difficulty is resolved using the methods introduced below. Another estimator of E Z_{j:n} is explained in Section 4.
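One element of each class in (6) and (7) can be computed as follows (a sketch with our function name; the expected order statistics E Z_{i:n}, E Z_{j:n} are assumed to be supplied, e.g. by the numerical methods of Section 4):

```python
def location_scale_pair(x_i, x_j, ez_i, ez_j):
    """One element of C1(k) and C2(k) in (6)-(7): solve
    EX_i = sigma*EZ_i + mu and EX_j = sigma*EZ_j + mu with X in place of EX."""
    if ez_i == ez_j:
        raise ValueError("need EZ_i != EZ_j")
    mu_hat = (x_i * ez_j - x_j * ez_i) / (ez_j - ez_i)  # element of C1(k): estimates mu
    sigma_hat = (x_j - x_i) / (ez_j - ez_i)             # element of C2(k): estimates sigma
    return mu_hat, sigma_hat
```

If the order statistics happened to equal their expectations, X_{i:n} = σ E Z_{i:n} + µ, the pair (µ, σ) is recovered exactly, which is the sense in which the estimators are unbiased.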

3.2. Estimation of α

In this subsection, estimation of the tail index is considered. A class of consistent estimators for 1/α is introduced, and the asymptotic behavior of the introduced estimators is obtained.

In the following, f_X(·) and F_X(·) denote the probability density function and the distribution function of a random variable X, respectively. Also, ξ^X_p satisfies F_X(ξ^X_p) = p. In the sequel, unless otherwise specified, all limit relations are assumed to hold as n → +∞. Let h/n → p, 0 < p < 1; then, based on a classical non-parametric theorem,

√n (X_{h:n} − ξ^X_p) →D N(0, σ²),    (8)

where σ² = p(1 − p)/f_X(ξ^X_p)². The notations ‘‘→D’’ and ‘‘→P’’ denote convergence in distribution and convergence in probability, respectively.

Let X_i ∼ Sα(σ, β, µ), i = 1, 2, . . . , m, be an i.i.d. sequence. In the case of α = 1, assume that β = 0. From (1) it can be shown that

X_1 + · · · + X_m d= m^{1/α} X_1 + µ(m − m^{1/α}).    (9)

Let σ_m be the scale parameter of Y = Σ_{i=1}^{m} X_i. From (9) we have σ_m = m^{1/α} σ. Therefore, 1/α = (log σ_m − log σ)/log m. Substituting the estimators of σ_m and σ obtained in Lemma 3.3 yields an estimator for 1/α:

1/α̂_{i,j}(m, n) = [ log( (Y_{j:n} − Y_{i:n})/E(Z_{j:n} − Z_{i:n}) ) − log( (X_{j:n} − X_{i:n})/E(Z_{j:n} − Z_{i:n}) ) ] / log m
               = [ log(Y_{j:n} − Y_{i:n}) − log(X_{j:n} − X_{i:n}) ] / log m,

where Y_{1:n}, . . . , Y_{n:n} are the order statistics corresponding to an i.i.d. sequence Y_1, . . . , Y_n with Y_1 d= Y. Choosing m = 2 uses the most information from the sample; therefore, in the following we assume that m = 2. With a sample of size N = 3n, if we put Y_i = X_{2i−1+n} + X_{2i+n}, i = 1, . . . , n, then two independent samples X_1, . . . , X_n and Y_1, . . . , Y_n can be constructed. The following theorem gives the asymptotic distribution of the presented estimator for 1/α.
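A minimal sketch of 1/α̂_{i,j}(2, n) under the N = 3n splitting above (our function name; the ranks i, j are passed as fractions of n). Since no stable sampler is assumed here, the sanity check below uses Gaussian data, which is 2-stable, so the target value is 1/α = 0.5:

```python
import math

def inv_alpha_hat(sample, p1=0.25, p2=0.75):
    """Estimate 1/alpha from an i.i.d. sample of size N = 3n.

    X_1..X_n and Y_i = X_{2i-1+n} + X_{2i+n} are independent samples; by (9)
    with m = 2, the Y-sample has scale 2^{1/alpha} * sigma, so the log-ratio of
    order-statistic spacings divided by log 2 estimates 1/alpha.
    """
    n = len(sample) // 3
    x = sorted(sample[:n])
    y = sorted(sample[n + 2 * i] + sample[n + 2 * i + 1] for i in range(n))
    i, j = int(p1 * n), int(p2 * n)
    return (math.log(y[j] - y[i]) - math.log(x[j] - x[i])) / math.log(2)

# sanity check on Gaussian data (2-stable, so 1/alpha = 0.5):
import random
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(30000)]  # N = 3n with n = 10000
```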

Theorem 3.4. Let X_i and Y_i = X_{2i−1+n} + X_{2i+n}, i = 1, . . . , n, be two independent sequences constructed from an i.i.d. sequence X_1, . . . , X_N ∼ Sα(σ, β, µ), N = 3n. Let

1/α̂_{i,j}(2, n) = [ log(Y_{j:n} − Y_{i:n}) − log(X_{j:n} − X_{i:n}) ] / log 2.

Assume that i/n → p_1 and j/n → p_2, where 0 < p_1 < p_2 < 1. Then,

A_n^{−1/2} ( 1/α̂_{i,j}(2, n) − 1/α ) →D N(0, 1),    (10)


where

(log 2)² A_n = (p_2 − p_1)(1 − p_2 + p_1) / [ n(1 − p_1) f_{Q_1}(ξ^{Q_1}_{p_2−p_1})² ] + (p_2 − p_1)(1 − p_2 + p_1) / [ n(1 − p_1) f_{Q_2}(ξ^{Q_2}_{p_2−p_1})² ],

Q_1 = log(Y − ξ^Y_{p_1}) I(Y > ξ^Y_{p_1}) and Q_2 = log(X − ξ^X_{p_1}) I(X > ξ^X_{p_1}).

Proof. For large enough n, we expect to have an i.i.d. sequence of size [n(1 − p_1)] with common distribution function equal to that of the random variable Q_1. Therefore, log(Y_{j:n} − ξ^Y_{p_1}), for large enough n, can be considered as the [(p_2 − p_1)n(1 − p_1)]th order statistic of an i.i.d. sequence of size [n(1 − p_1)] from Q_1. With this discussion, and from (8), we have

√n ( log(Y_{j:n} − ξ^Y_{p_1}) − ξ^{Q_1}_{p_2−p_1} ) →D N(0, η_1²),

where

η_1² = (p_2 − p_1)(1 − p_2 + p_1) / [ (1 − p_1) f_{Q_1}( log(ξ^Y_{p_2} − ξ^Y_{p_1}) )² ].

Since log(Y_{j:n} − ξ^Y_{p_1}) →P log(ξ^Y_{p_2} − ξ^Y_{p_1}), we have ξ^{Q_1}_{p_2−p_1} = log(ξ^Y_{p_2} − ξ^Y_{p_1}). We have

√n ( log(Y_{j:n} − ξ^Y_{p_1}) − log(ξ^Y_{p_2} − ξ^Y_{p_1}) ) = √n ( log(Y_{j:n} − Y_{i:n} + Y_{i:n} − ξ^Y_{p_1}) − log(ξ^Y_{p_2} − ξ^Y_{p_1}) ).

Now, from the fact that Y_{i:n} − ξ^Y_{p_1} →P 0, we can conclude that

√n ( log(Y_{j:n} − Y_{i:n}) − log(ξ^Y_{p_2} − ξ^Y_{p_1}) ) →D N(0, η_1²).

Using a similar argument, we have

√n ( log(X_{j:n} − X_{i:n}) − log(ξ^X_{p_2} − ξ^X_{p_1}) ) →D N(0, η_2²),

where

η_2² = (p_2 − p_1)(1 − p_2 + p_1) / [ (1 − p_1) f_{Q_2}( log(ξ^X_{p_2} − ξ^X_{p_1}) )² ].

Using (1), we have

ξ^Y_p = 2^{1/α} σ ξ^Z_p + 2µ                         if α ≠ 1,
ξ^Y_p = 2σ ξ^Z_p + 2( µ + β(2/π) σ log(2σ) )         if α = 1,

ξ^X_p = σ ξ^Z_p + µ                                  if α ≠ 1,
ξ^X_p = σ ξ^Z_p + µ + β(2/π) σ log σ                 if α = 1.

These equations imply log(2)/α = log(ξ^Y_{p_2} − ξ^Y_{p_1}) − log(ξ^X_{p_2} − ξ^X_{p_1}). Finally, by the independence of the two sequences X_1, . . . , X_n and Y_1, . . . , Y_n, we conclude that

log(2) √n ( (1/log 2)( log(Y_{j:n} − Y_{i:n}) − log(X_{j:n} − X_{i:n}) ) − 1/α ) →D N(0, η_1² + η_2²).

This proves the theorem. □

We can use an i.i.d. sequence of size N = 2n for estimating 1/α, as stated in the following corollary. The proof can be carried out with the arguments in the proof of Theorem 3.4.

Corollary 3.5. Let X_i and Y_i = X_{2i−1} + X_{2i}, i = 1, . . . , n, be two sequences constructed from an i.i.d. sequence X_1, . . . , X_N ∼ Sα(σ, β, µ), N = 2n. Then,

1/α̂_{i,j}(2, n) →P 1/α.

For constructing the estimator in the case of α = 1, we needed β = 0; this limitation is removed in the theorem. Also, based on relation (10), we can find an asymptotic confidence interval for α.

3.3. Estimation of β

In this subsection, an estimator for β based on the relations in (2) and sample quantiles is given. For computing the estimator, we need an estimator for α; we use the estimator for α given in Section 3.2.


Table 1
Approximated MSE of the PE and KW estimators in estimating α and β.

                 MSE of estimators of α          MSE of estimators of β
α      Method    β = 0    β = 0.5   β = 1        β = 0    β = 0.5   β = 1
0.1    PE        0.0001   0.0002    0.0002       0.0762   0.0577    0.0000
       KW        0.0001   0.0013    0.0015       0.6036   0.2268    0.0084
0.25   PE        0.0089   0.0108    0.0120       0.0781   0.0536    0.0001
       KW        0.0017   0.0019    0.0022       0.0367   0.1380    0.0000
0.75   PE        0.0089   0.0108    0.0120       0.0815   0.0512    0.0016
       KW        0.0031   0.0032    0.0041       0.0142   0.0163    0.0047
1.25   PE        0.0353   0.0373    0.0475       0.0777   0.0690    0.0727
       KW        0.0051   0.0057    0.0062       0.0204   0.0216    0.0074
1.75   PE        0.0533   0.0487    0.0497       0.0436   0.0818    0.2697
       KW        0.0061   0.0060    0.0056       0.0934   0.0859    0.0368

Let X be not totally skewed to the right or left. Consider ξ^X_p as in Section 3.2. Based on the relations in (2), for β ≠ 1 we have

lim_{p→0} [ |ξ^X_{1−p}|^α P(X > ξ^X_{1−p}) ] / [ |ξ^X_p|^α P(X < ξ^X_p) ] = (1 + β)/(1 − β).

Since P(X > ξ^X_{1−p}) = P(X < ξ^X_p) = p, therefore

lim_{p→0} |ξ^X_{1−p}|^α / |ξ^X_p|^α = (1 + β)/(1 − β).

This relation helps us find an estimator for β. We can estimate α using the method described in Section 3.2, and, by (8), ξ^X_p can be estimated by the pth sample quantile. Let α̂ and ξ̂^X_p denote the estimators of α and ξ^X_p, respectively. For p near 0, the parameter β can be estimated by

β̂ = ( |ξ̂^X_{1−p}|^α̂ − |ξ̂^X_p|^α̂ ) / ( |ξ̂^X_{1−p}|^α̂ + |ξ̂^X_p|^α̂ ).    (11)

For an α-stable random variable X ∼ Sα(σ, β, µ) with β ≠ −1, we use the relation

lim_{p→0} [ |ξ^X_p|^α P(X < ξ^X_p) ] / [ |ξ^X_{1−p}|^α P(X > ξ^X_{1−p}) ] = (1 − β)/(1 + β).

By a similar discussion, we again arrive at estimator (11) for β; therefore, estimator (11) works for all β ∈ [−1, 1]. From formula (11) we find that when X is totally skewed to the right or left with location 0, we can still use estimator (11). In the next section it is shown that the estimator works well for p = 0.01.
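A minimal sketch of (11), taking α̂ as given (in practice it comes from Section 3.2). It is checked on symmetric Cauchy data, i.e. S_1(1, 0, 0), generated by the inverse-transform method (the function name is ours):

```python
import math
import random

def beta_hat(sample, alpha_hat, p=0.01):
    """Skewness estimator (11) from the p-th and (1-p)-th sample quantiles."""
    x = sorted(sample)
    n = len(x)
    lo = abs(x[int(p * n)]) ** alpha_hat         # |xi^X_p|^alpha
    hi = abs(x[int((1 - p) * n)]) ** alpha_hat   # |xi^X_{1-p}|^alpha
    return (hi - lo) / (hi + lo)

# symmetric Cauchy sample, S_1(1, 0, 0), via the inverse CDF tan(pi*(u - 1/2))
random.seed(1)
cauchy = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(200000)]
```

For this symmetric sample β̂ should be near 0; as β → ±1 one tail dominates and the quantile ratio drives the estimate toward ±1.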

4. Comparison

Theorem 3.4 allows a theoretical comparison for α. In Paulauskas and Vaičiulis (2011), four estimators for 1/α are compared with each other; their convergence rates are slower than √n. Using the results of Theorem 3.4, it can be shown that the presented estimator for 1/α is better than these four estimators with respect to the convergence rate.

We compared the proposed estimators (PE) with the estimators presented in Kogon and Williams (1998) (KW) through a simulation study. The simulation study showed that the KW method and the method introduced in McCulloch (1986) perform similarly, so we only consider the KW method for comparison. We use the mean square error (MSE) criterion. Simulation results are obtained with σ = 1 and µ = 0 for different α and β. Table 1 is dedicated to the simulation study for α and β. We use the estimators in Corollary 3.5 with i and j equal to the 25th and 75th sample percentiles. The simulation study is done with 1000 iterations for a sample size of 500. Table 1 shows that, for small α, the PE is better than the KW estimator in estimating α. For the estimation of β, we use formula (11) with p = 0.01 and α replaced by its estimate. In the last three columns of Table 1, MSEs for the estimation of β are given. For small α, the proposed estimator has the smaller MSE.

After the estimation of α and β, E Z_{i:n} and E Z_{j:n} can be computed numerically or through simulation. We explain these methods in the following. E Z_{i:n} can be computed numerically from

E Z_{i:n} = ∫_R [ n! / ((i − 1)!(n − i)!) ] u f(u) F(u)^{i−1} (1 − F(u))^{n−i} du,

where f(·) and F(·) are the probability density function and the distribution function of Z, respectively; for computing f(·) and F(·), see Zolotarev (1986), page 78. Alternatively, we can estimate E Z_{i:n} by simulation. If i is chosen such that E Z_{i:n} exists, then by the strong law of large numbers,

(1/n) Σ_{j=1}^{n} Z^j_{i:n} → E Z_{i:n}   a.s.,

as n → +∞, where Z^j_{i:n}, j = 1, 2, . . . , is an i.i.d. sequence of random variables with the same distribution as Z_{i:n}.

Table 2
Approximated MSE of the PE and KW estimators in estimating σ and µ.

                 MSE of estimators of σ          MSE of estimators of µ
α      Method    β = 0    β = 0.5   β = 1        β = 0    β = 0.5   β = 1
0.1    PE        0.2009   1.7991    0.6917       33.630   0.0073    17.433
       KW        a        1.8818    12.042       a        a         a
0.25   PE        0.0325   0.1044    0.0695       0.2038   0.0057    0.0575
       KW        0.0101   0.0404    0.0888       0.0026   0.1036    0.6095
0.75   PE        0.0013   0.0060    0.0054       0.1056   0.0757    0.3751
       KW        0.0010   0.0009    0.0018       0.0101   0.0255    0.0958
1.25   PE        0.0004   0.0011    0.0061       0.0399   0.1024    0.3218
       KW        0.0003   0.0003    0.0004       0.0097   0.0192    0.0515
1.75   PE        0.0003   0.0004    0.0006       0.0009   0.0040    0.0171
       KW        0.0001   0.0001    0.0001       0.0009   0.0009    0.0017

a MSE is too large and is not useful in application.
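The integral for E Z_{i:n} can be evaluated with any quadrature rule once f and F are available. Since a stable density is not in the Python standard library, the sketch below checks the formula on the standard normal instead, where E Z_{2:2} = E max(Z_1, Z_2) = 1/√π is known in closed form; the same code applies unchanged to a stable f and F:

```python
import math

def expected_order_statistic(i, n, pdf, cdf, lo=-8.0, hi=8.0, steps=16000):
    """E Z_{i:n} = int n!/((i-1)!(n-i)!) * u * f(u) * F(u)^{i-1} * (1-F(u))^{n-i} du,
    approximated by the trapezoidal rule on [lo, hi]."""
    coeff = math.factorial(n) / (math.factorial(i - 1) * math.factorial(n - i))
    h = (hi - lo) / steps
    total = 0.0
    for s in range(steps + 1):
        u = lo + s * h
        g = coeff * u * pdf(u) * cdf(u) ** (i - 1) * (1 - cdf(u)) ** (n - i)
        total += g if 0 < s < steps else g / 2  # half weight at the end points
    return total * h

# check against the standard normal, where E max(Z1, Z2) = 1/sqrt(pi)
norm_pdf = lambda u: math.exp(-u * u / 2) / math.sqrt(2 * math.pi)
norm_cdf = lambda u: (1 + math.erf(u / math.sqrt(2))) / 2
```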

The distribution function of Z depends only on α and β. Moreover, based on (8), E Z_{[np]:n} can be estimated by ξ̂^Z_p. Using the cumulative distribution function of Z, we can find ξ̂^Z_p; see Zolotarev (1986), page 78. For the estimation of µ and σ we use the following estimators:

µ̂ = (1/3) Σ_{i=1}^{3} ( X_{[nq_i]:n} ξ̂^Z_{p_i} − X_{[np_i]:n} ξ̂^Z_{q_i} ) / ( ξ̂^Z_{p_i} − ξ̂^Z_{q_i} )   and   σ̂ = ( X_{[0.75n]:n} − X_{[0.25n]:n} ) / ( ξ̂^Z_{0.75} − ξ̂^Z_{0.25} ),    (12)

where p_1 = p_2 = 0.75, p_3 = 0.5, q_1 = q_3 = 0.25, and q_2 = 0.5. The notation ξ̂^Z_p denotes the estimator of ξ^Z_p. We take a sample size of 5000 with 1000 iterations. Through a simulation study, we found that the estimators in (12) have small MSE among the other estimators (see Lemma 3.3); therefore, we used these estimators for µ and σ. Also, in this case, one can use U-statistics to find estimators with smaller variances. Simulation studies showed that the KW method is faster than the newly presented method. Table 2 shows that when α is near 0.1 the proposed estimators work better than the KW estimators. When α is near 2, the accuracy of the two estimators for σ is similar.

5. Conclusion

Four new estimators for the four parameters of α-stable distributions, based on the existence of moments of order statistics, were proposed. Through a simulation study, the proposed estimators were compared with the KW estimators. A general aspect of the simulation results is that, for small α, the presented estimators have smaller MSE.

References

Antoniadis, A., Feuerverger, A., Gonçalves, P., 2006. Wavelet-based estimation for univariate stable laws. Ann. Inst. Stat. Math. 58 (4), 779–807.
Csörgő, M., Csörgő, S., Horváth, L., Mason, D., 1986. Normal and stable convergence of integral functions of the empirical distribution function. Ann. Probab. 14 (1), 86–118.
DasGupta, A., 2008. Asymptotic Theory of Statistics and Probability. Springer, New York.
Gallardo, J.R., 2000. Fractional stable noise processes and their application to traffic models and fast simulation of broadband telecommunications networks. Ph.D. dissertation, Department of Electrical and Computer Science, George Washington University.
Kabašinskas, A., Rachev, S., Sakalauskas, L., Sun, W., Belovas, I., 2009. Alpha-stable paradigm in financial markets. J. Comput. Anal. Appl. 11 (4), 641–668.
Kogon, S., Williams, D., 1998. Characteristic function based estimation of stable parameters. In: Adler, R., Feldman, R., Taqqu, M. (Eds.), A Practical Guide to Heavy Tailed Data. Birkhäuser, Boston, MA, pp. 311–338.
McCulloch, J.H., 1986. Simple consistent estimators of stable distribution parameters. Commun. Stat. Simul. 15 (4), 1109–1136.
Nolan, J.P., 2001. Maximum likelihood estimation of stable parameters. In: Barndorff-Nielsen, O.E., Mikosch, T., Resnick, S.I. (Eds.), Lévy Processes: Theory and Applications. Birkhäuser, Boston, pp. 379–400.
Nolan, J.P., 2003. Modeling financial distributions with stable distributions. In: Handbook of Heavy Tailed Distributions in Finance, Chapter 3. Elsevier Science, Amsterdam.
Paulauskas, V., Vaičiulis, M., 2011. Once more on comparison of tail index estimators. arXiv:1104.1242v1.
Samorodnitsky, G., 1986. Extrema of skewed stable processes. Stochastic Process. Appl. 30, 17–39.
Samorodnitsky, G., Taqqu, M.S., 1994. Stable Non-Gaussian Random Processes. Chapman & Hall, New York.
Zolotarev, V., 1986. One-Dimensional Stable Distributions. American Mathematical Society.