
A unified kernel function approach to polynomial

interior-point algorithms for the Cartesian

P∗(κ)-SCLCP ∗

G. Q. Wang^{a,b} and D. T. Zhu^{b}

(April 27, 2010; Revised March 12, 2011)

a College of Advanced Vocational Technology, Shanghai University of Engineering Science,

Shanghai 200437, P.R. China

b Department of Mathematics, Shanghai Normal University, Shanghai 200234, P.R. China

e-mail: guoq [email protected], [email protected]

Abstract

Recently, Bai et al. [Bai Y.Q., El Ghami M., Roos C., 2004. A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM Journal on Optimization, 15(1), 101-128] provided a unified approach and comprehensive treatment of interior-point methods for linear optimization based on the class of eligible kernel functions. In this paper we generalize the analysis presented in the above paper to the Cartesian P∗(κ)-linear complementarity problem over symmetric cones via the machinery of Euclidean Jordan algebras. The symmetry of the resulting search directions is forced by using the Nesterov-Todd scaling scheme. The iteration bounds for the algorithms are derived in a systematic scheme and highly depend on the choice of the eligible kernel function. Moreover, we derive iteration bounds that match the currently best known iteration bounds for large- and small-update methods, namely O((1 + 2κ)√r log r log(r/ε)) and O((1 + 2κ)√r log(r/ε)), respectively, where r denotes the rank of the associated Euclidean Jordan algebra and ε the desired accuracy.

Keywords: Symmetric cone linear complementarity problem; Interior-point methods; Euclidean Jordan algebra; Large- and small-update methods; Kernel functions; Iteration bound.

AMS Subject Classification: 90C33; 90C51.

1 Introduction

Let (V, ◦) be the Cartesian product of a finite number of Euclidean Jordan algebras, i.e., V = V_1 × · · · × V_N with its cone of squares K = K_1 × · · · × K_N, where each V_j is an n_j-dimensional simple Euclidean Jordan algebra with n = ∑_{j=1}^{N} n_j, and K_j is the corresponding cone of squares of V_j with rank(V_j) = r_j

∗The research of the first author is supported by the National Natural Science Foundation of China (No. 11001169) and the China Postdoctoral Science Foundation (No. 20100480604). The research of the second author is supported by the National Natural Science Foundation of China (No. 10871130), the Ph.D. Foundation Grant (No. 20093127110005) and the Shanghai Leading Academic Discipline Project (No. T0401).


(r = ∑_{j=1}^{N} r_j). For a linear transformation A : V → V and a q ∈ V, the linear complementarity problem

over symmetric cones, denoted by SCLCP, is to find x, s ∈ V such that

x ∈ K, s = A(x) + q ∈ K, and x ◦ s = 0.

The SCLCP is a wide class of problems that contains the linear complementarity problem (LCP), the second-order cone linear complementarity problem (SOCLCP) and the semidefinite linear complementarity problem (SDLCP) as special cases. Moreover, the Karush-Kuhn-Tucker (KKT) condition of symmetric optimization (SO) can be written in the form of SCLCP [50]. For a brief survey on the recent developments related to symmetric cone complementarity problems (SCCP), we refer to [51].

We call SCLCP the Cartesian P∗(κ)-SCLCP if the linear transformation A has the Cartesian P∗(κ)-property, i.e.,

(1 + 4κ) ∑_{ν∈I_+(x)} 〈x^{(ν)}, [A(x)]^{(ν)}〉 + ∑_{ν∈I_-(x)} 〈x^{(ν)}, [A(x)]^{(ν)}〉 ≥ 0,

where κ is some nonnegative constant, ν ∈ I = {1, · · ·, N}, and I_+(x) = {ν : 〈x^{(ν)}, [A(x)]^{(ν)}〉 > 0} and I_-(x) = {ν : 〈x^{(ν)}, [A(x)]^{(ν)}〉 < 0} are two index sets. It is a straightforward extension of the P∗(κ)-matrix introduced by Kojima et al. [25]. In fact, when K = R^n_+, the nonnegative orthant in R^n, which corresponds to n_1 = · · · = n_N = 1 and n = N, the Cartesian P∗(κ)-SCLCP becomes the P∗(κ)-linear complementarity problem (P∗(κ)-LCP). Moreover, it is evident that for κ = 0 the P∗(0)-SCLCP is the so-called monotone SCLCP [17], and 0 ≤ κ_1 ≤ κ_2 implies P∗(κ_1) ⊂ P∗(κ_2). The linear transformation A has the Cartesian P∗-property if it has the Cartesian P∗(κ)-property for some nonnegative κ, i.e.,

P∗ = ∪_{κ≥0} P∗(κ).

We also recall that the linear transformation A has
(a) the Cartesian P-property, if for any x ∈ K with x ≠ 0, there exists an index ν ∈ I such that 〈x^{(ν)}, [A(x)]^{(ν)}〉 > 0;
(b) the Cartesian P_0-property, if for any x ∈ K with x ≠ 0, there exists an index ν ∈ I such that x^{(ν)} ≠ 0 and 〈x^{(ν)}, [A(x)]^{(ν)}〉 ≥ 0.
It is clear that the Cartesian P∗ class contains the Cartesian P class and is in turn a special case of the Cartesian P_0 class, i.e., P ⊂ P∗ ⊂ P_0. The concept of the Cartesian P_0- and P-properties for a linear transformation over the space of symmetric matrices was first introduced by Chen and Qi [10], and later extended by Pan and Chen [34] and Luo and Xiu [29] to the space of second-order cones and the general Euclidean Jordan algebras, respectively. A small numerical check of the Cartesian P∗(κ) inequality is sketched below.
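The following minimal Python sketch (not part of the original paper; the matrix A, the block partition and the value of κ are purely illustrative assumptions) evaluates the Cartesian P∗(κ) inequality for a single block-partitioned vector x, using the standard inner product on each block. The property itself requires the inequality to hold for all x ∈ V.

import numpy as np

def cartesian_p_star_holds(A, x, blocks, kappa):
    """Check the Cartesian P_*(kappa) inequality for one vector x.

    A      : (n, n) matrix representing the linear transformation (assumption).
    x      : (n,) vector partitioned into blocks x^(1), ..., x^(N).
    blocks : list of index arrays, one per block (illustrative partition).
    kappa  : nonnegative constant.
    """
    Ax = A @ x
    inner = [x[b] @ Ax[b] for b in blocks]      # <x^(nu), [A(x)]^(nu)>
    plus = sum(v for v in inner if v > 0)       # sum over I_+(x)
    minus = sum(v for v in inner if v < 0)      # sum over I_-(x)
    return (1 + 4 * kappa) * plus + minus >= 0

# toy data: two blocks of sizes 2 and 3 (illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
x = rng.standard_normal(5)
blocks = [np.arange(0, 2), np.arange(2, 5)]
print(cartesian_p_star_holds(A, x, blocks, kappa=0.5))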

Until now, the majority of well-known polynomial interior-point methods (IPMs) have used the so-called central path as a guideline to the optimal set, together with some variant of Newton's method to follow the central path approximately. Kojima et al. [25] first proved the existence and uniqueness of the central path for the P∗(κ)-LCP and generalized the primal-dual interior-point algorithm for LO to the P∗(κ)-LCP. Since then, several efficient algorithms have been provided for the P∗(κ)-LCP [30, 39], the P∗ complementarity problem (P∗-CP) [52], the SOCLCP [19, 22] and the SDLCP [26, 43, 44]. By using the machinery of Euclidean Jordan algebras, Faybusovich [17] first studied the interior-point algorithm for the monotone SCLCP and proved the existence and uniqueness of the central path. Gowda and Sznajder [20] presented some global uniqueness and solvability results for SCLCP. Luo and Xiu [29] first established a theoretical framework of path-following interior-point algorithms for the Cartesian P∗(κ)-SCLCP and proved the global convergence and the iteration complexities of the proposed algorithms. Yoshise [50] proposed the homogeneous model for monotone SCCP and discussed its theoretical aspects. For some other methods on SCCP we refer to [11, 23, 27, 35].

It is generally agreed that primal-dual IPMs are the most efficient from a computational point of view. The choice of the so-called barrier update parameter θ plays an important role both in the theory and the practice of IPMs. Usually, if θ is a constant independent of the dimension n of the problem, for instance θ = 1/2, then we call the algorithm a large-update method. If θ depends on the dimension of the problem, such as θ = 1/√n, then the algorithm is called a small-update method. However, there is still a gap between


the practical behavior of these algorithms and their theoretical performance results. The so-called large-update IPMs have superior practical performance but relatively weak theoretical results, while the so-called small-update IPMs enjoy the best known worst-case iteration bounds but perform poorly in computational practice. Peng et al. [37, 38] presented primal-dual IPMs for LO, SDO, and SOCO based on self-regular proximities and derived the currently best known iteration bounds for large- and small-update methods, which almost closes the gap between the iteration bounds for large- and small-update methods. Bai et al. proposed primal-dual IPMs for LO [3, 5, 6, 7], SDO [46] and SOCO [8] based on the so-called eligible kernel functions, which are not necessarily self-regular, and also obtained the same favourable iteration bounds for large- and small-update methods as in [37, 38]. Vieira [45] extended the primal-dual IPMs for LO [8] to SO based on the eligible kernel functions. Recently, Cho et al. [13, 14, 15], Bai et al. [4], and Wang and Bai [47] generalized primal-dual interior-point algorithms for LO to the P∗(κ)-LCP based on some specific eligible kernel functions, respectively. Lesaja and Roos [28] provided a unified approach and comprehensive treatment of IPMs for the P∗(κ)-LCP based on the class of eligible kernel functions. Wang and Zhu [48, 49] extended the results for LO based on a specific eligible kernel function in [5] to the Cartesian P∗(κ)-SOCLCP and the Cartesian P∗(κ)-SCLCP, respectively. They derived the currently best known iteration bounds, with an additional factor 1 + 2κ, for large- and small-update methods.

It is well known that kernel functions play an important role in the design and analysis of interior-point algorithms. They are used not only for determining the search directions but also for measuring the distance between the given iterate and the corresponding µ-center for the algorithms. In general, each kernel function gives rise to an interior-point algorithm. An interesting question here is whether we can directly extend the unified approaches and comprehensive treatments of IPMs for LO in [8] and the P∗(κ)-LCP in [28], based on the entire class of eligible kernel functions, to the Cartesian P∗(κ)-SCLCP. As we will see later, since the symmetric cone is nonpolyhedral, the LO and P∗(κ)-LCP theory cannot be trivially generalized to the Cartesian P∗(κ)-SCLCP context.

The purpose of the paper is to present a unified approach and comprehensive treatment of polynomial interior-point algorithms for the Cartesian P∗(κ)-SCLCP based on the eligible kernel functions. We show in this paper that every eligible kernel function gives rise to an interior-point algorithm for the Cartesian P∗(κ)-SCLCP. We adopt the basic analysis used in [8, 28] and revise it to suit the Cartesian P∗(κ)-SCLCP case. These analytic tools reveal that the iteration bounds highly depend on the choice of the eligible kernel function, especially on its inverse functions and derivatives. The currently best known iteration bounds for large- and small-update methods are also obtained, namely O((1 + 2κ)√r log r log(r/ε)) and O((1 + 2κ)√r log(r/ε)), respectively. As far as we know, this is the first paper that provides a unified kernel function approach to polynomial interior-point algorithms for the P∗(κ)-SCLCP. Moreover, this unifies the analysis for the P∗(κ)-LCP, the Cartesian P∗(κ)-SOCLCP and the Cartesian P∗(κ)-SDLCP.

The outline of the paper is as follows. In Section 2, we briefly review some well-known results on the theory of Euclidean Jordan algebras and their associated symmetric cones that are used in this paper. In Section 3, we introduce the concept of the eligible kernel functions and recall some useful properties of the eligible kernel functions and their corresponding barrier functions. In Section 4, we first discuss the existence of the central path, then we mainly derive the new search directions for the Cartesian P∗(κ)-SCLCP based on the eligible kernel functions. The generic polynomial interior-point algorithm for the Cartesian P∗(κ)-SCLCP is also presented. In Section 5, we analyze the algorithms and derive the iteration bounds for the algorithms in a systematic scheme. Finally, some conclusions and remarks follow in Section 6.

Notations used throughout the paper are as follows. R^n, R^n_+ and R^n_{++} denote the set of all vectors (with n components), the set of nonnegative vectors and the set of positive vectors, respectively. R^{n×n} denotes the set of n × n real matrices. S^n, S^n_+ and S^n_{++} denote the cone of symmetric, symmetric positive semidefinite and symmetric positive definite n × n matrices, respectively. E_n denotes the n × n identity matrix. We use the matrix inner product A • B = Tr(A^T B). When λ is a vector we denote the diagonal matrix Λ with entries λ_i by diag(λ). The Kronecker product of two matrices A and B is denoted by A ⊗ B. We denote the largest eigenvalue of x by λ_max(x), and analogously the smallest eigenvalue of x by λ_min(x). The Löwner partial ordering "⪰_K" of R^n defined by a cone K is defined by x ⪰_K s if x − s ∈ K.


The interior of K is denoted as K_+, and we write x ≻_K s if x − s ∈ K_+. Finally, if g(x) ≥ 0 is a real-valued function of a real nonnegative variable, the notation g(x) = O(x) means that g(x) ≤ cx for some positive constant c, and g(x) = Θ(x) means that c_1 x ≤ g(x) ≤ c_2 x for two positive constants c_1 and c_2.

2 Preliminaries

2.1 Euclidean Jordan algebras and their associated symmetric cones

In this section we briefly review some well-known results on the theory of Euclidean Jordan algebras and their associated symmetric cones. These will serve as the basic analytic tools for the analysis of our interior-point algorithms for the Cartesian P∗(κ)-SCLCP. Our presentation is mainly based on the monograph [16] and the references [18, 21, 40, 42, 45]. To ease the discussion, we assume in this subsection that the cone K is defined with N = 1.

Definition 2.1 Let V be an n-dimensional vector space over R along with a bilinear map ◦ : (x, y) ↦ x ◦ y ∈ V. Then (V, ◦) is a Euclidean Jordan algebra if for all x, y ∈ V
(i) x ◦ y = y ◦ x (commutativity);
(ii) x ◦ (x² ◦ y) = x² ◦ (x ◦ y), where x² = x ◦ x (Jordan's axiom);
(iii) there exists a symmetric positive definite quadratic form Q on V such that Q(x ◦ y, z) = Q(x, y ◦ z);
(iv) there exists an identity element e ∈ V, i.e., an element e such that e ◦ x = x ◦ e = x for all x ∈ V.

Since "◦" is bilinear, for every x ∈ V there exists a matrix L(x) such that x ◦ y = L(x)y for every y. In particular, L(x)e = x and L(x)x = x². For each x ∈ V, we define

P(x) := 2L(x)² − L(x²), (1)

where L(x)² = L(x)L(x). The map P(x) is called the quadratic representation of x in V, which is an essential concept in the theory of Jordan algebras and will play an important role in the analysis of the algorithms.

Definition 2.2 An element x ∈ V is called invertible if there exists a y = ∑_{i=0}^{k} α_i x^i, for some finite k < ∞ and real numbers α_i, such that x ◦ y = y ◦ x = e; this y is denoted as x^{-1}.

If V is a Euclidean Jordan algebra, then its cone of squares is the set

K(V) := {x² : x ∈ V}.

The following theorem gives some major properties of the cone of squares in a Euclidean Jordan algebra.

Theorem 2.3 (Theorem III.2.1, Proposition III.2.2 in [16]) Let V be a Euclidean Jordan algebra. Then K(V) is a symmetric cone, and it is the set of elements x in V for which L(x) is positive semidefinite. Furthermore, if x is invertible, then

P(x)K_+ = K_+.

Example 2.4 (The second-order cone L^n) Let x, s ∈ L^n, where L^n is the second-order (also called Lorentz or ice-cream) cone, i.e.,

L^n = { (x_1; · · ·; x_n) ∈ R^n : x_1² ≥ ∑_{i=2}^{n} x_i², x_1 ≥ 0 }.

Then (L^n, ◦) is a Euclidean Jordan algebra with identity element e = (1; 0; · · ·; 0) ∈ R^n if we define the bilinear map ◦ by

x ◦ s = (x^T s; x_1 s_2 + s_1 x_2; · · ·; x_1 s_n + s_1 x_n),


and the symmetric, positive definite quadratic form Q by

Q(x, s) = x^T s.

As a consequence we have

L(x) = [ x_1, x_{2:n}^T ; x_{2:n}, x_1 E_{n-1} ],  and  P(x) = [ ‖x‖², 2x_1 x_{2:n}^T ; 2x_1 x_{2:n}, det(x)E_{n-1} + 2x_{2:n}x_{2:n}^T ],

where the semicolon separates the rows of the 2 × 2 block matrices.

Example 2.5 (The positive semidefinite cone S^n_+) Let X, S ∈ S^n_+. Then (S^n_+, ◦) is a Euclidean Jordan algebra with identity element E if we define the bilinear map ◦ by

X ◦ S = (XS + SX)/2,

and the symmetric, positive definite quadratic form Q by

Q(X, S) = Tr(XS).

As a consequence we have

L(X) = (X ⊗ E + E ⊗ X)/2,  and  P(X) = X ⊗ X.

A symmetric cone K in a Euclidean space V is said to be irreducible if there do not exist non-trivial subspaces V_1, V_2 and symmetric cones K_1 ⊂ V_1, K_2 ⊂ V_2 such that V is the direct sum of V_1 and V_2, and K the direct sum of K_1 and K_2.

Lemma 2.6 (Proposition II.4.5 in [16]) Any symmetric cone K is, in a unique way, the direct sum of irreducible symmetric cones.

A Euclidean Jordan algebra is called simple if it cannot be represented as the orthogonal direct sum of two Euclidean Jordan algebras. The simple Euclidean Jordan algebras have been classified into five cases, and consequently there are five kinds of irreducible symmetric cones (e.g., [16, 21, 42]).

Since a Euclidean Jordan algebra V is power associative, i.e., x^m ◦ x^n = x^{m+n}, the algebra generated by a single element x ∈ V is associative. We can define the concepts of rank, characteristic polynomial, eigenvalues, trace, and determinant for it in the following way [42].

For any x ∈ V, let r be the smallest integer such that the set {e, x, x², · · ·, x^r} is linearly dependent. Then r is the degree of x, which we denote as deg(x). The rank of V, rank(V), is the largest deg(x) over all elements x ∈ V. An element x ∈ V is called regular if its degree equals the rank of V.

Remark 2.7 In the sequel, unless stated otherwise, we always assume V is a Euclidean Jordan algebra with rank(V) = r.

For a regular element x ∈ V, since {e, x, x², · · ·, x^r} is linearly dependent, there are real numbers a_1(x), · · ·, a_r(x) such that the minimal polynomial of the regular element x is given by

f(λ; x) = λ^r − a_1(x)λ^{r-1} + · · · + (−1)^r a_r(x),

which is the characteristic polynomial of the regular element x. The coefficient a_1(x) is called the trace of x, denoted tr(x), and the coefficient a_r(x) is called the determinant of x, denoted det(x).

An element c ∈ V is said to be an idempotent if c ≠ 0 and c² = c. Two idempotents c_1 and c_2 are said to be orthogonal if c_1 ◦ c_2 = 0. We say that {c_1, · · ·, c_r} is a complete system of orthogonal primitive idempotents, or a Jordan frame, if each c_i is a primitive idempotent, c_i ◦ c_j = 0 for i ≠ j, and ∑_{i=1}^{r} c_i = e. Note that Jordan frames always contain r primitive idempotents, where r is the rank of V.


Theorem 2.8 (Spectral decomposition, Theorem III.1.2 in [16]) Let x ∈ V. Then there exist a Jordan frame {c_1, · · ·, c_r} and real numbers λ_1(x), · · ·, λ_r(x) such that

x = ∑_{i=1}^{r} λ_i(x) c_i.

The numbers λ_i(x) (with their multiplicities) are the eigenvalues of x. Furthermore,

tr(x) = ∑_{i=1}^{r} λ_i(x),  and  det(x) = ∏_{i=1}^{r} λ_i(x).

In fact, the above λ_1(x), · · ·, λ_r(x) are exactly the roots of the characteristic polynomial f(λ; x). Note that e = c_1 + · · · + c_r has eigenvalue 1 with multiplicity r; it follows that tr(e) = r and det(e) = 1. Moreover, we can easily verify that

x ∈ K ⇔ λ_i(x) ≥ 0, i = 1, · · ·, r,  and  x ∈ K_+ ⇔ λ_i(x) > 0, i = 1, · · ·, r. (2)

Example 2.9 (Spectral decomposition for L^n) Let x ∈ L^n. The so-called spectral decomposition of x is given by

x = λ_1 c_1 + λ_2 c_2,

where

λ_1 := x_1 + ‖x_{2:n}‖,  and  λ_2 := x_1 − ‖x_{2:n}‖,

and

c_1 := (1/2)(1; x_{2:n}/‖x_{2:n}‖),  and  c_2 := (1/2)(1; −x_{2:n}/‖x_{2:n}‖)

are the eigenvalues and the corresponding unit eigenvectors of x. In fact, {c_1, c_2} is a Jordan frame of x. Moreover, the rank of L^n is 2.
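As a numerical illustration of Example 2.9 (a sketch, not taken from the paper), the following Python fragment computes the two eigenvalues and the Jordan frame of a vector in L^n and verifies the reconstruction x = λ_1 c_1 + λ_2 c_2 and the identity det(x) = λ_1 λ_2; it assumes x_{2:n} ≠ 0 so that the frame is well defined.

import numpy as np

def soc_spectral(x):
    """Spectral decomposition of x in the second-order cone algebra L^n."""
    x1, x2n = x[0], x[1:]
    nrm = np.linalg.norm(x2n)                       # assumes x2n != 0
    lam1, lam2 = x1 + nrm, x1 - nrm
    c1 = 0.5 * np.concatenate(([1.0],  x2n / nrm))
    c2 = 0.5 * np.concatenate(([1.0], -x2n / nrm))
    return lam1, lam2, c1, c2

x = np.array([2.0, 0.5, -1.0, 0.3])                 # illustrative point
lam1, lam2, c1, c2 = soc_spectral(x)
print(np.allclose(lam1 * c1 + lam2 * c2, x))        # reconstruction: True
print(lam1 * lam2, x[0]**2 - np.linalg.norm(x[1:])**2)   # det(x) two ways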

Example 2.10 (Spectral decomposition for S^n_+) Let X ∈ S^n_+. The so-called spectral decomposition of X is given by

X = ∑_{i=1}^{n} λ_i q_i q_i^T,

where λ_i and q_i (i = 1, · · ·, n) are the eigenvalues and the corresponding unit eigenvectors of X. In fact, {q_1 q_1^T, · · ·, q_n q_n^T} is a Jordan frame of X. Moreover, the rank of S^n_+ is n.

The importance of the decomposition is that it enables us to extend the definition of any real-valued, continuous univariate function to elements of a Euclidean Jordan algebra, using the eigenvalues.

Definition 2.11 Let x ∈ V with the spectral decomposition as defined in Theorem 2.8. The vector-valued function ψ(x) is defined by

ψ(x) := ψ(λ_1(x)) c_1 + · · · + ψ(λ_r(x)) c_r. (3)

From Definition 2.11, for any x ∈ V, we obtain (e.g., [21, 42])

• Square root: x^{1/2} := (λ_1(x))^{1/2} c_1 + · · · + (λ_r(x))^{1/2} c_r, whenever λ_i(x) ≥ 0 for i = 1, · · ·, r, and undefined otherwise;

• Inverse: x^{-1} := (λ_1(x))^{-1} c_1 + · · · + (λ_r(x))^{-1} c_r, whenever λ_i(x) ≠ 0 for i = 1, · · ·, r, and undefined otherwise;

• Square: x² := (λ_1(x))² c_1 + · · · + (λ_r(x))² c_r.


In fact, we have x² = x ◦ x and (x^{1/2})² = x. If x^{-1} is well defined, then x ◦ x^{-1} = e. Furthermore, if ψ(t) is differentiable, then the derivative ψ′(t) exists and we also have the vector-valued function ψ′(x), namely

ψ′(x) := ψ′(λ_1(x)) c_1 + · · · + ψ′(λ_r(x)) c_r. (4)

It should be noted that ψ′(x) does not denote the derivative of the vector-valued function ψ(x) defined by (3); it is simply the vector-valued function induced by the derivative ψ′(t) of the function ψ(t).

Remark 2.12 In the rest of the paper, when we use the function ψ(·) and its derivative ψ′(·), they denote a vector-valued function if the argument is in V and a univariate function if the argument is in R.

In what follows we introduce another important decomposition, the Peirce decomposition, on the space V. Let c be an idempotent element in a Euclidean Jordan algebra V; the eigenvalues of the self-adjoint operator L(c) are 0, 1/2 and 1 (cf. Proposition III.1.3 in [16]). Furthermore, the eigenspace corresponding to each eigenvalue of L(c) is the set of x such that L(c)x = λx, or equivalently c ◦ x = λx, for λ = 0, 1/2, 1.

Theorem 2.13 (Peirce decomposition, Theorem IV.2.1 in [16]) Let x ∈ V with the spectral decomposition as defined in Theorem 2.8. Then we have

V = ⊕_{i≤m} V_{im},

where

V_{ii} = {x : x ◦ c_i = x},  and  V_{im} = {x : x ◦ c_i = (1/2)x = x ◦ c_m},  1 ≤ i < m ≤ r,

are the Peirce spaces of V. Then for any x ∈ V, there exist x_i ∈ R, c_i ∈ V_{ii} and x_{im} ∈ V_{im} (i < m) such that

x = ∑_{i=1}^{r} x_i c_i + ∑_{i<m} x_{im}.

Let x = ∑_{i=1}^{r} λ_i(x) c_i be the spectral decomposition of x ∈ V, and let f be a real-valued function on an open subset D of R; assume that the eigenvalues of x are in D. We denote by D_x f(x) and D_x² f(x) the derivative and the second derivative of f at x, respectively.

The following two theorems give explicitly the derivatives of the real-valued separable spectral function and the vector-valued separable spectral function, respectively.

Theorem 2.14 (Theorem 38 in [2]) Let F(x) = ∑_{i=1}^{r} f(λ_i(x)). If f is continuously differentiable in D, then F(x) is continuously differentiable at x and

D_x F(x) = ∑_{i=1}^{r} f′(λ_i(x)) c_i.

Theorem 2.15 (Lemma 1 in [24]) Let G(x) = ∑_{i=1}^{r} f(λ_i(x)) c_i. If f is continuously differentiable in D, then G(x) is continuously differentiable at x and

D_x G(x) = ∑_{i=1}^{r} f′(λ_i(x)) x_i c_i + ∑_{i<m} [ (f(λ_i(x)) − f(λ_m(x))) / (λ_i(x) − λ_m(x)) ] x_{im},  1 ≤ i < m ≤ r.

When λ_i(x) = λ_m(x) the quotient is understood as the derivative of f at λ_i(x), i.e., f′(λ_i(x)).

For any x, s ∈ V, we say that x and s operator commute if L(x)L(s) = L(s)L(x). In other words, x and s operator commute if for all z ∈ V, x ◦ (s ◦ z) = s ◦ (x ◦ z) (see, e.g., [42]).


Theorem 2.16 (Lemma X.2.2 in [16]) Let x, s ∈ V. The elements x and s operator commute if and only if they share a Jordan frame, that is,

x = λ1(x)c1 + · · ·+ λr(x)cr, and s = λ1(s)c1 + · · ·+ λr(s)cr

for a Jordan frame {c1, · · ·, cr}.

It follows immediately that

x ◦ s = λ1(x)λ1(s)c1 + · · ·+ λr(x)λr(s)cr (5)

if x, s ∈ V share the Jordan frame {c_1, · · ·, c_r}.

For any x, s ∈ V, we define the canonical inner product of x and s as follows:

〈x, s〉 := tr(x ◦ s), (6)

and the Frobenius norm of x as follows:

‖x‖_F := √〈x, x〉. (7)

It follows that

‖x‖_F = √tr(x²) = √( ∑_{i=1}^{r} λ_i²(x) ). (8)

Furthermore, we have

|λ_max(x)| ≤ ‖x‖_F,  and  |λ_min(x)| ≤ ‖x‖_F. (9)

Lemma 2.17 (Lemma 14 in [42]) Let x, s ∈ V . Then

λmin(x)− ‖s‖F ≤ λmin(x+ s) ≤ λmax(x+ s) ≤ λmax(x) + ‖s‖F .

The following lemma gives the so-called NT-scaling of V, which plays an important role in the design of the interior-point algorithm for the Cartesian P∗(κ)-SCLCP.

Lemma 2.18 (NT-scaling, Lemma 3.2 in [18]) Let x, s ∈ K_+. Then there exists a unique w ∈ K_+ such that

x = P(w)s.

Moreover,

w = P(x)^{1/2}( (P(x^{1/2})s)^{-1/2} )  [ = P(s^{-1/2})( (P(s^{1/2})x)^{1/2} ) ].

The point w is called the scaling point of x and s (in this order).

As a consequence there exists v ∈ K_+ such that

v = P(w)^{-1/2}x = P(w)^{1/2}s.

It should be noted that P(w)^{1/2} and its inverse P(w)^{-1/2} are automorphisms of K.
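The following Python sketch (illustrative only, using the Lorentz-cone algebra of Example 2.4; the data points are assumptions) computes the NT-scaling point w of Lemma 2.18 numerically by applying univariate functions through the spectral decomposition and forming P(x) as an explicit matrix, and then checks the defining identity x = P(w)s.

import numpy as np

def soc_fun(x, f):
    """Apply a scalar function to x in L^n via its spectral decomposition
    (assumes the off-center part x_{2:n} is nonzero)."""
    x1, x2n = x[0], x[1:]
    nrm = np.linalg.norm(x2n)
    lam1, lam2 = x1 + nrm, x1 - nrm
    c1 = 0.5 * np.concatenate(([1.0],  x2n / nrm))
    c2 = 0.5 * np.concatenate(([1.0], -x2n / nrm))
    return f(lam1) * c1 + f(lam2) * c2

def soc_quad_rep(x):
    """Quadratic representation P(x) from Example 2.4 as an explicit matrix."""
    n = x.size
    x1, x2n = x[0], x[1:]
    detx = x1**2 - x2n @ x2n
    P = np.empty((n, n))
    P[0, 0] = x @ x
    P[0, 1:] = 2 * x1 * x2n
    P[1:, 0] = 2 * x1 * x2n
    P[1:, 1:] = detx * np.eye(n - 1) + 2 * np.outer(x2n, x2n)
    return P

# two interior points of L^4 (illustrative data)
x = np.array([3.0, 0.5, -1.0, 0.2])
s = np.array([2.0, 0.3,  0.4, -0.1])
Pxh = soc_quad_rep(soc_fun(x, np.sqrt))            # P(x^{1/2}) = P(x)^{1/2}
w = Pxh @ soc_fun(Pxh @ s, lambda t: t**-0.5)      # w = P(x^{1/2})((P(x^{1/2})s)^{-1/2})
print(np.allclose(soc_quad_rep(w) @ s, x))         # Lemma 2.18: x = P(w)s -> True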

It is well known that two matrices X and Y are said to be similar if they share the same set of eigenvalues; in this case we write X ∼ Y. Analogously, we define similarity for two elements x, s ∈ V as follows.

Definition 2.19 Let x, s ∈ V. We say that x and s are similar, briefly denoted as x ∼ s, if and only if x and s share the same set of eigenvalues.


It follows immediately from Definition 2.19 and Theorem 2.8 that if x and s in V are similar, then

det(x) = det(s),  and  tr(x) = tr(s).

Furthermore, we have

x ∈ K (K_+) ⇔ s ∈ K (K_+).

Lemma 2.20 (Proposition 21 in [42]) Let x, s, z ∈ K+. Then

(i) P (x1/2)s ∼ P (s1/2)x.

(ii) P (x1/2)s ∼ P ((P (z)x)1/2)P (z−1)s.

Lemma 2.21 (Proposition 3.2.4 in [45]) Let x, s ∈ K+ and w be the scaling point of x and s. Then

(P (x1/2)s)1/2 ∼ P (w1/2)s.

2.2 Back to the general case

Before ending this section, we proceed by adapting the definitions and properties stated so far in this section to the general case, when the cone underlying the given P∗(κ)-SCLCP is the Cartesian product of N symmetric cones K_j, where N > 1. First we partition any vector x ∈ V according to the dimensions of the successive cones K_j, so

x = (x^{(1)}; · · ·; x^{(N)}) ∈ V ⇔ x^{(j)} ∈ V_j, 1 ≤ j ≤ N, (10)

and

x = (x^{(1)}; · · ·; x^{(N)}) ∈ K ⇔ x^{(j)} ∈ K_j, 1 ≤ j ≤ N. (11)

We define the algebra (V, ◦) as a direct product of Jordan algebras with the product defined as follows:

x ◦ s = (x^{(1)} ◦ s^{(1)}; · · ·; x^{(N)} ◦ s^{(N)}). (12)

Obviously, if e^{(j)} ∈ V_j is the identity element in the Jordan algebra for the j-th cone, then the vector

e = (e^{(1)}; · · ·; e^{(N)}) (13)

is the identity element in (V, ◦). One can easily verify that tr(e) = r. The Lyapunov transformation and the quadratic representation of V can be adjusted to

L(x) = diag(L(x^{(1)}), · · ·, L(x^{(N)})), (14)

and

P(x) = diag(P(x^{(1)}), · · ·, P(x^{(N)})). (15)

Let x^{(j)} = ∑_{i=1}^{r_j} λ_i(x^{(j)}) c_i^{(j)} be the spectral decomposition of x^{(j)} ∈ V_j for each j, 1 ≤ j ≤ N. It follows from Theorem 2.8 and Theorem 2.13 that the spectral decomposition and the Peirce decomposition of x ∈ V can be adapted to

x = ( ∑_{i=1}^{r_1} λ_i(x^{(1)}) c_i^{(1)}; · · ·; ∑_{i=1}^{r_N} λ_i(x^{(N)}) c_i^{(N)} ), (16)

and

x = ( ∑_{i=1}^{r_1} x_i^{(1)} c_i^{(1)} + ∑_{i<m_1} x_{im_1}^{(1)}; · · ·; ∑_{i=1}^{r_N} x_i^{(N)} c_i^{(N)} + ∑_{i<m_N} x_{im_N}^{(N)} ), (17)

respectively. We call {c_1^{(1)}, · · ·, c_{r_1}^{(1)}, · · ·, c_1^{(N)}, · · ·, c_{r_N}^{(N)}} the Jordan frame of x ∈ V.


The canonical inner product can be adjusted to

〈x, s〉 = ∑_{j=1}^{N} 〈x^{(j)}, s^{(j)}〉. (18)

Furthermore, we have

tr(x) = ∑_{j=1}^{N} tr(x^{(j)}),  ‖x‖_F = √( ∑_{j=1}^{N} ‖x^{(j)}‖_F² ),  and  det(x) = ∏_{j=1}^{N} det(x^{(j)}). (19)

The NT-scaling scheme in this general case is now obtained as follows. Let x^{(j)}, s^{(j)} ∈ (K_j)_+ and let w^{(j)} be the scaling point in (K_j)_+ for each j, 1 ≤ j ≤ N. Then

P(w^{(j)})^{-1/2} x^{(j)} = P(w^{(j)})^{1/2} s^{(j)}. (20)

The scaling point of x and s in K is then defined by

w = (w^{(1)}; · · ·; w^{(N)}). (21)

Since P(w^{(j)}) is symmetric and positive definite for each j, 1 ≤ j ≤ N, the matrix

P(w) = diag(P(w^{(1)}), · · ·, P(w^{(N)})) (22)

is symmetric and positive definite as well and represents an automorphism of K such that P(w)s = x. Therefore P(w) can be used to rescale x and s to the same vector

v = (v^{(1)}; · · ·; v^{(N)}). (23)

As a consequence, we adapt the definitions of ψ(x) and ψ′(x) according to v as follows:

ψ(v) = (ψ(v^{(1)}); · · ·; ψ(v^{(N)})), (24)

and

ψ′(v) = (ψ′(v^{(1)}); · · ·; ψ′(v^{(N)})). (25)

Finally, we define

λ_max(v) = max{λ_max(v^{(j)}) : 1 ≤ j ≤ N}, (26)

and

λ_min(v) = min{λ_min(v^{(j)}) : 1 ≤ j ≤ N}. (27)
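A minimal sketch (an illustrative data structure, not from the paper) of how the block-wise quantities of this subsection can be handled: an element of V is stored through its per-block eigenvalue arrays, which is all that the spectral quantities in (19), (26) and (27) require. The eigenvalue data below are assumptions.

import numpy as np

# block eigenvalue vectors of an element x = (x^(1); x^(2)) with r_1 = 2, r_2 = 3
eigs = [np.array([2.0, 0.5]),
        np.array([1.5, 1.0, 0.25])]

trace   = sum(lam.sum() for lam in eigs)                  # tr(x), cf. (19)
frob    = np.sqrt(sum((lam**2).sum() for lam in eigs))    # ||x||_F, cf. (19)
det     = np.prod([np.prod(lam) for lam in eigs])         # det(x), cf. (19)
lam_max = max(lam.max() for lam in eigs)                  # lambda_max(x), cf. (26)
lam_min = min(lam.min() for lam in eigs)                  # lambda_min(x), cf. (27)
print(trace, frob, det, lam_max, lam_min)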

3 The eligible kernel (barrier) functions and their properties

In this section we introduce the concept of the eligible kernel functions and recall some useful properties of the eligible kernel functions and their corresponding barrier functions. For more details we refer the reader to [8, 37].

A univariate function ψ : (0, ∞) → [0, ∞) is said to be a kernel function [37] if it satisfies

ψ′(1) = ψ(1) = 0, (28-a)

ψ′′(t) > 0, t > 0, (28-b)

lim_{t→0} ψ(t) = lim_{t→∞} ψ(t) = ∞. (28-c)


 i   Eligible kernel functions ψ_i(t)                                            Ref.
 1   (t² − 1)/2 − log t                                                          e.g., [41]
 2   (1/2)(t − 1/t)²                                                             [36]
 3   (t² − 1)/2 + (t^{1-q} − 1)/(q − 1),  q > 1                                  [37]
 4   (t² − 1)/2 + (t^{1-q} − 1)/(q(q − 1)) − ((q − 1)/q)(t − 1),  q > 1          [37]
 5   (t² − 1)/2 + (e^{1/t} − e)/e                                                [8]
 6   (t² − 1)/2 − ∫_1^t e^{1/ξ − 1} dξ                                           [8]
 7   (t² − 1)/2 + (e^{q(1/t − 1)} − 1)/q,  q ≥ 1                                 [1]
 8   (t² − 1)/2 − ∫_1^t e^{q(1/ξ − 1)} dξ,  q ≥ 1                                [3]
 9   (t² − 1)/2 + ((e − 1)²/e) · 1/(e^t − 1) − (e − 1)/e                         [7]
10   t − 1 + (t^{1-q} − 1)/(q − 1),  q > 1                                       [6]
11   (t^{p+1} − 1)/(p + 1) − log t,  p ∈ [0, 1]                                  [12]
12   (t^{p+1} − 1)/(p + 1) + (t^{1-q} − 1)/(q − 1),  p ∈ [0, 1],  q > 1          [5]

Table 1. Twelve eligible kernel functions

Note that this implies that ψ(t) is strictly convex and minimal at t = 1, with ψ(1) = 0. Moreover, (28-c) expresses that ψ(t) has the barrier property. Also note that ψ(t) is completely determined by its second derivative, because the above properties imply that

ψ(t) = ∫_1^t ∫_1^ξ ψ′′(ζ) dζ dξ,  t > 0. (29)

In this paper we consider the so-called eligible kernel functions [8], i.e., the kernel function ψ(t) satisfies four of the following five conditions, namely the first and the last three conditions.

tψ′′(t) + ψ′(t) > 0, t < 1, (30-a)

tψ′′(t)− ψ′(t) > 0, t > 1, (30-b)

ψ′′′(t) < 0, t > 0, (30-c)

2ψ′′(t)2 − ψ′(t)ψ′′′(t) > 0, t < 1, (30-d)

ψ′′(t)ψ′(βt)− βψ′(t)ψ′′(βt) > 0, t > 1, β > 1. (30-e)

It should be pointed out that the first four conditions are logically independent, and that the fifth condition is a consequence of (30-b) and (30-c). Since (30-b) is much simpler to check than (30-e), in many cases it is convenient to know that ψ(t) is eligible if it satisfies the first four conditions.

The twelve eligible kernel functions are listed in Table 1. For more details we refer to the given references.
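As an illustration (a sketch under the stated definitions, not code from the paper), the classical kernel function ψ_1(t) = (t² − 1)/2 − log t from Table 1 is given below together with its first three derivatives; a few sample points confirm the kernel conditions (28) and the eligibility conditions (30-a)–(30-d) numerically.

import numpy as np

# psi_1 from Table 1 and its derivatives
psi   = lambda t: (t**2 - 1) / 2 - np.log(t)
psi1  = lambda t: t - 1 / t            # psi'
psi2  = lambda t: 1 + 1 / t**2         # psi''
psi3  = lambda t: -2 / t**3            # psi'''

assert psi(1.0) == 0 and psi1(1.0) == 0                 # (28-a)
ts = np.linspace(0.1, 5.0, 200)
assert np.all(psi2(ts) > 0)                             # (28-b)
assert np.all(ts * psi2(ts) + psi1(ts) > 0)             # (30-a); for psi_1 it holds for all t
lo, hi = ts[ts < 1], ts[ts > 1]
assert np.all(hi * psi2(hi) - psi1(hi) > 0)             # (30-b)
assert np.all(psi3(ts) < 0)                             # (30-c)
assert np.all(2 * psi2(lo)**2 - psi1(lo) * psi3(lo) > 0)  # (30-d)
print("psi_1 passes the sampled eligibility checks")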

Corresponding to the eligible kernel function ψ(t), we define the barrier function Ψ(v) as follows:

Ψ(v) := Ψ(x, s; µ) := tr(ψ(v)) = ∑_{j=1}^{N} tr(ψ(v^{(j)})) = ∑_{j=1}^{N} ∑_{i=1}^{r_j} ψ(λ_i(v^{(j)})). (31)

According to the properties of the eligible kernel functions, we can conclude that Ψ(v) is nonnegative and strictly convex with respect to v ≻_K 0 and vanishes at its global minimal point v = e, i.e.,

Ψ(v) = 0 ⇔ ψ(v) = 0 ⇔ ψ′(v) = 0 ⇔ v = e.


It follows immediately from Theorem 2.14 that

D_v Ψ(v) = ( ∑_{i=1}^{r_1} ψ′(λ_i(v^{(1)})) c_i^{(1)}; · · ·; ∑_{i=1}^{r_N} ψ′(λ_i(v^{(N)})) c_i^{(N)} ) = ψ′(v). (32)

This means that the derivative of the barrier function Ψ(v) defined by (31) coincides with the vector-valued function ψ′(v) defined by (24).

The following lemma gives some equivalent representations of condition (30-a).

Lemma 3.1 (Lemma 2.1 in [8]) The following three properties are equivalent:
(i) ψ(√(t_1 t_2)) ≤ (1/2)(ψ(t_1) + ψ(t_2)), for t_1, t_2 > 0;
(ii) ψ′(t) + tψ′′(t) > 0, t > 0;
(iii) ψ(e^ξ) is convex.

Recall that the property described in Lemma 3.1 is called exponential convexity, or shortly e-convexity. This property has been proven to be very useful in the analysis of interior-point algorithms based on kernel functions [8, 37].

As a consequence of Lemma 3.1, we have the following theorem, which is crucial for the analysis of theinterior-point algorithms for the Cartesian P∗(κ)-SCLCP.

Theorem 3.2 (Theorem 4.3.2 in [45]) If x, s ∈ K_+, one has

Ψ( (P(x)^{1/2}s)^{1/2} ) ≤ (1/2)( Ψ(x) + Ψ(s) ).

For the analysis of the algorithms, we define the norm-based proximity measure δ(v) as follows:

δ(v) := (1/2)‖ψ′(v)‖_F = (1/2)√( ∑_{j=1}^{N} ∑_{i=1}^{r_j} ψ′(λ_i(v^{(j)}))² ). (33)

One can easily verify that δ(v) ≥ 0, and δ(v) = 0 if and only if Ψ(v) = 0.
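Both the barrier function (31) and the proximity measure (33) depend only on the eigenvalues λ_i(v^{(j)}) of the scaled blocks. A short sketch (using ψ_1 from Table 1 and illustrative eigenvalue data) makes this explicit; both quantities vanish exactly when every eigenvalue equals 1.

import numpy as np

psi  = lambda t: (t**2 - 1) / 2 - np.log(t)   # psi_1 from Table 1
dpsi = lambda t: t - 1 / t                    # psi_1'

# eigenvalues of the scaled blocks v^(1), ..., v^(N) (illustrative data)
v_eigs = [np.array([1.2, 0.8]), np.array([0.9, 1.1, 1.05])]

Psi   = sum(psi(lam).sum() for lam in v_eigs)                       # (31)
delta = 0.5 * np.sqrt(sum((dpsi(lam)**2).sum() for lam in v_eigs))  # (33)
print(Psi, delta)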

The following theorem is a reformulation of Theorem 4.9 in [8], whose proof depends on condition (30-d).

Theorem 3.3 (Theorem 4.9 in [8]) Let ϱ : [0, ∞) → [1, ∞) be the inverse function of the eligible kernel function ψ(t) for t ≥ 1. For any positive vector z ∈ R^n, one has

√( ∑_{i=1}^{n} ψ′(z_i)² ) ≥ ψ′( ϱ( ∑_{i=1}^{n} ψ(z_i) ) ).

It follows from (31) and (33) that Ψ(v) and δ(v) depend only on the eigenvalues λ_i(v^{(j)}) of v^{(j)} for each j, 1 ≤ j ≤ N. This observation makes it possible to apply Theorem 3.3, with z being the vector in R^r consisting of all the eigenvalues of v.

Theorem 3.4 If v ∈ K_+, one has

δ(v) ≥ (1/2) ψ′(ϱ(Ψ(v))).

In the analysis of the algorithm, we need to consider the derivatives of the function Ψ(x(t)) with respect to t, where x(t) = x_0 + tu ∈ K_+ with t ∈ R and u ∈ V. It follows from (16) and (17) that the spectral decomposition of x(t) is

x(t) = ( ∑_{i=1}^{r_1} λ_i(x(t)^{(1)}) c_i^{(1)}; · · ·; ∑_{i=1}^{r_N} λ_i(x(t)^{(N)}) c_i^{(N)} ), (34)


and the Peirce decomposition of u is

u = ( ∑_{i=1}^{r_1} u_i^{(1)} c_i^{(1)} + ∑_{i<m_1} u_{im_1}^{(1)}; · · ·; ∑_{i=1}^{r_N} u_i^{(N)} c_i^{(N)} + ∑_{i<m_N} u_{im_N}^{(N)} ). (35)

From Theorem 2.14 and Theorem 2.15, after some elementary reductions, we have the following theorem, which gives explicitly the first two derivatives of the general function Ψ(x(t)) with respect to t and bounds the second-order derivative.

Theorem 3.5 (Section 4.4 in [45]) Let ψ(t) be any eligible kernel function. Then

D_t Ψ(x(t)) = ∑_{j=1}^{N} tr( D_x Ψ(x(t)^{(j)}) ◦ x′(t)^{(j)} ) = ∑_{j=1}^{N} tr( ( ∑_{i=1}^{r_j} ψ′(λ_i(x(t)^{(j)})) c_i^{(j)} ) ◦ u^{(j)} ),

and

D_t² Ψ(x(t)) = ∑_{j=1}^{N} ∑_{i=1}^{r_j} ψ′′(λ_i^{(j)}) (u_i^{(j)})² + ∑_{j=1}^{N} ∑_{i<m_j} [ (ψ′(λ_i^{(j)}) − ψ′(λ_{m_j}^{(j)})) / (λ_i^{(j)} − λ_{m_j}^{(j)}) ] tr( (u_{im_j}^{(j)})² ),

where λ_i^{(j)} = λ_i(x(t)^{(j)}) and λ_m^{(j)} = λ_m(x(t)^{(j)}). Furthermore, under the assumption that l < m implies λ_l^{(j)} ≥ λ_m^{(j)}, we have

D_t² Ψ(x(t)) ≤ ∑_{j=1}^{N} ∑_{i=1}^{r_j} ψ′′(λ_i^{(j)}) (u_i^{(j)})² + ∑_{j=1}^{N} ∑_{i<m_j} ψ′′(λ_{m_j}^{(j)}) tr( (u_{im_j}^{(j)})² ).

4 Polynomial interior-point algorithms for the Cartesian P∗(κ)-SCLCP

In this section we first discuss the existence and uniqueness of the central path for the Cartesian P∗(κ)-SCLCP, then we mainly derive the new search directions for the Cartesian P∗(κ)-SCLCP based on the eligible kernel functions. The generic polynomial interior-point algorithm for the Cartesian P∗(κ)-SCLCP is also presented.

4.1 The central path for the Cartesian P∗(κ)-SCLCP

Analogously to the P∗(κ)-LCP case, the concept of the central path can also be extended to the Cartesian P∗(κ)-SCLCP. The existence and uniqueness of the central path for the Cartesian P∗(κ)-SCLCP were discussed by Luo and Xiu [29]. The following two lemmas are provided to ensure the existence and uniqueness of the central path for the Cartesian P∗(κ)-SCLCP. For more details we refer to [29] and its P∗(κ)-LCP counterpart [25].

Lemma 4.1 (Lemma 3.1 in [29]) For any (x_1, y_1), (x_2, y_2) ∈ U, if A : V → V has the Cartesian P∗(κ)-property, then the identity H(x_1, y_1) = H(x_2, y_2) implies (x_1, y_1) = (x_2, y_2), where U = {(x, y) ∈ K_+ × K_+ : x ◦ y ∈ K_+} and H(x, y) = (x ◦ y, y − A(x) − q)^T.

Lemma 4.2 (Lemma 3.2 in [29]) Suppose that A : V → V has the Cartesian P∗(κ)-property and there exists a strictly feasible solution (x_0, y_0) ∈ U = {(x, y) ∈ K_+ × K_+ : x ◦ y ∈ K_+} to the Cartesian P∗(κ)-SCLCP. Then the system x ◦ y = u, (x, y) ∈ U has a solution for any u ∈ K_+.


Throughout the paper we assume that the Cartesian P∗(κ)-SCLCP satisfies the interior-point condition (IPC), i.e., there exists (x⁰ ≻_K 0, s⁰ ≻_K 0) such that s⁰ = A(x⁰) + q. For this and other properties of the Cartesian P∗(κ)-SCLCP, we refer to [29]. Assuming the IPC holds, by relaxing the complementarity slackness x ◦ s = 0 we obtain the following system:

A(x) − s = −q,
x ◦ s = µe,    x, s ≻_K 0, (36)

where µ > 0 is a parameter. The parameterized system (36) has a unique solution for each µ > 0. This solution is denoted as (x(µ), s(µ)), and we call (x(µ), s(µ)) the µ-center of the Cartesian P∗(κ)-SCLCP. The set of µ-centers (with µ running through all positive real numbers) gives a homotopy path, which is called the central path of the Cartesian P∗(κ)-SCLCP. If µ → 0, then the limit of the central path exists, and since the limit points satisfy the complementarity condition x ◦ s = 0, the limit yields an optimal solution of the Cartesian P∗(κ)-SCLCP.

4.2 The new search directions for the Cartesian P∗(κ)-SCLCP

The basic idea of IPMs is to follow the central path and approach the optimal set of the Cartesian P∗(κ)-SCLCP by letting µ go to zero. To obtain the search directions for the Cartesian P∗(κ)-SCLCP, the usual approach is to use Newton's method and to linearize the system (36). For any strictly feasible x ≻_K 0 and s ≻_K 0, we want to find displacements ∆x and ∆s such that

A(x + ∆x) − (s + ∆s) = −q,
(x + ∆x) ◦ (s + ∆s) = µe. (37)

Neglecting the term ∆x ◦ ∆s in the left-hand side expression of the second equation, we obtain the following Newton system for the search directions ∆x and ∆s:

A(∆x) − ∆s = 0,
s ◦ ∆x + x ◦ ∆s = µe − x ◦ s. (38)

Due to the fact that x and s do not operator commute in general, i.e., L(x)L(s) ≠ L(s)L(x), this system does not always have a unique solution. It is well known that this difficulty can be solved by applying a scaling scheme. This goes as follows.

Lemma 4.3 (Lemma 28 in [42]) Let u ∈ K_+. Then

x ◦ s = µe ⇔ P(u)x ◦ P(u)^{-1}s = µe.

Now we replace the second equation of the system (37) by

P(u)(x + ∆x) ◦ P(u)^{-1}(s + ∆s) = µe. (39)

Applying Newton's method again, and neglecting the term P(u)∆x ◦ P(u)^{-1}∆s, we get

A(∆x) − ∆s = 0,
P(u)^{-1}(s) ◦ P(u)∆x + P(u)(x) ◦ P(u)^{-1}∆s = µe − P(u)(x) ◦ P(u)^{-1}(s). (40)

By choosing u appropriately, this system can be used to define the commutative class of search directions (e.g., [31, 42]). In the literature the following three choices are well known: u = s^{1/2}, u = x^{1/2}, and u = w^{-1/2}, where w is the NT-scaling point of x and s. The first two choices lead to the so-called


xs-direction and sx-direction, respectively. In this paper we consider the third choice, i.e., the so-called NT-scaling scheme [32, 33]. Let u = w^{-1/2}, where

w = P(x)^{1/2}( (P(x^{1/2})s)^{-1/2} )  [ = P(s^{-1/2})( (P(s^{1/2})x)^{1/2} ) ]. (41)

Furthermore, we define

v := P(w)^{-1/2}x / √µ  [ = P(w)^{1/2}s / √µ ], (42)

and

Ā := P(w)^{1/2} A P(w)^{1/2},  d_x := P(w)^{-1/2}∆x / √µ,  d_s := P(w)^{1/2}∆s / √µ. (43)

It should be mentioned that the transformation Ā also has the Cartesian P∗(κ)-property if the linear transformation A has the Cartesian P∗(κ)-property (cf. Proposition 3.4 in [29]). Using (42) and (43), after some elementary reductions, we obtain the scaled Newton system as follows:

Ā(d_x) − d_s = 0,
d_x + d_s = v^{-1} − v. (44)

Since the linear transformation A has the Cartesian P∗(κ)-property, the system (44) has a unique solution. This fact holds due to the following lemma.

Lemma 4.4 (Theorem 3.6 in [29]) Let A : V → V be a linear transformation with the Cartesian P∗(κ)-property. For any (x, y) ∈ N(τ), the scaled Newton system has a unique solution, where N(τ) is the τ-neighborhood of the central path.

A crucial observation is that the right-hand side v^{-1} − v in the second equation of system (44) equals minus the gradient of the classical logarithmic barrier function

Ψ_c(v) := tr(ψ_c(v)) = ∑_{j=1}^{N} tr(ψ_c(v^{(j)})) = ∑_{j=1}^{N} ∑_{i=1}^{r_j} ( (λ_i(v^{(j)})² − 1)/2 − log λ_i(v^{(j)}) ). (45)

Then the system (44) is equivalent to the following system:

Ā(d_x) − d_s = 0,
d_x + d_s = −ψ′_c(v). (46)

So far we have described the scheme that defines the classical NT search direction for the Cartesian P∗(κ)-SCLCP. The approach in this paper differs in only one detail. Given any eligible kernel function ψ(t) and the associated vector-valued function ψ′(v) defined by (25), we replace the right-hand side of the second equation in (46) by −ψ′(v), i.e., minus the derivative of the barrier function Ψ(v). Thus we consider the following system:

Ā(d_x) − d_s = 0,
d_x + d_s = −ψ′(v). (47)

The new search directions d_x and d_s are obtained by solving (47), and then ∆x and ∆s are computed via (43). If (x, s) ≠ (x(µ), s(µ)), then (∆x, ∆s) is nonzero. By taking a default step size α along the search directions, we obtain the new iterate (x_+, s_+) according to

x_+ := x + α∆x,  and  s_+ := s + α∆s. (48)


Furthermore, we can easily verify that

x ◦ s = µe ⇔ v = e ⇔ ψ′(v) = 0 ⇔ ψ(v) = 0 ⇔ Ψ(v) = 0. (49)

Hence, the value of Ψ(v) can be considered as a measure for the distance between the given iterate (x, s) and the µ-center (x(µ), s(µ)).
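When the scaled transformation Ā is available as a matrix, the system (47) reduces to a single linear solve: substituting d_s = Ā d_x into d_x + d_s = −ψ′(v) gives (I + Ā)d_x = −ψ′(v). The minimal dense sketch below is illustrative only; the matrix playing the role of Ā, the vector standing in for ψ′(v), and the dimensions are assumptions (a monotone, i.e., P∗(0), instance is used so that I + Ā is clearly nonsingular; for the general P∗(κ) case unique solvability is guaranteed by Lemma 4.4).

import numpy as np

def scaled_newton_directions(A_bar, grad_psi_v):
    """Solve (47): A_bar(d_x) = d_s and d_x + d_s = -psi'(v)."""
    n = grad_psi_v.size
    d_x = np.linalg.solve(np.eye(n) + A_bar, -grad_psi_v)
    d_s = A_bar @ d_x
    return d_x, d_s

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A_bar = M @ M.T                        # positive semidefinite stand-in for the scaled map
grad = rng.standard_normal(4)          # stand-in for psi'(v)
d_x, d_s = scaled_newton_directions(A_bar, grad)
print(np.allclose(d_x + d_s, -grad))   # second equation of (47): True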

4.3 The generic interior-point algorithm for the Cartesian P∗(κ)-SCLCP

We define the τ-neighborhood of the central path as follows:

N(τ) = {(x, s) ∈ K_+ × K_+ : s = A(x) + q, Ψ(v) ≤ τ}.

It is clear from the above description that the closeness of (x, s) to (x(µ), s(µ)) is measured by the value of Ψ(v), with τ > 0 as a threshold value. If Ψ(v) ≤ τ, then we start a new outer iteration by performing a µ-update; otherwise we enter an inner iteration by computing the search directions at the current iterates with respect to the current value of µ and apply (48) to get new iterates. If necessary, we repeat the procedure until we find iterates that are in the τ-neighborhood of (x(µ), s(µ)). Then µ is again reduced by the factor 1 − θ with 0 < θ < 1, and we apply Newton's method targeting the new µ-centers, and so on. This process is repeated until µ is small enough, say until rµ < ε; at this stage we have found an ε-approximate solution of the Cartesian P∗(κ)-SCLCP. The parameters τ, θ and the step size α should be chosen in such a way that the algorithm is 'optimized' in the sense that the number of iterations required by the algorithm is as small as possible.

The generic polynomial interior-point algorithm for the Cartesian P∗(κ)-SCLCP is now presented in Figure 1. We will prove that the algorithm is well defined in Section 5.

Interior-Point Algorithm for the Cartesian P∗(κ)-SCLCP

Input:
  a threshold parameter τ ≥ 1;
  an accuracy parameter ε > 0;
  a fixed barrier update parameter θ, 0 < θ < 1;
  a strictly feasible (x⁰, s⁰) and µ⁰ = 〈x⁰, s⁰〉/r such that Ψ(x⁰, s⁰; µ⁰) ≤ τ.

begin
  x := x⁰; s := s⁰; µ := µ⁰;
  while rµ ≥ ε do
  begin
    µ := (1 − θ)µ;
    while Ψ(x, s; µ) > τ do
    begin
      solve system (47) and use (43) to obtain (∆x, ∆s);
      choose a suitable step size α;
      update x := x + α∆x, s := s + α∆s;
    end
  end
end

Figure 1: Algorithm
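For orientation, the following Python transcription of Figure 1 is a structural sketch only: the callables barrier_value, newton_directions and choose_step_size are hypothetical placeholders for the quantities Ψ(x, s; µ), the solution of (47) combined with the scaling (43), and the default step size analyzed in Section 5, respectively.

def generic_ipm(x, s, mu, r, tau, eps, theta,
                barrier_value, newton_directions, choose_step_size):
    """Generic interior-point loop of Figure 1 (problem data supplied as callables)."""
    while r * mu >= eps:                       # outer loop
        mu = (1 - theta) * mu                  # mu-update
        while barrier_value(x, s, mu) > tau:   # inner loop
            dx, ds = newton_directions(x, s, mu)    # system (47) + scaling (43)
            alpha = choose_step_size(x, s, mu, dx, ds)
            x = x + alpha * dx
            s = s + alpha * ds
    return x, s, mu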


5 Analysis of the algorithms

5.1 Growth behavior of the barrier function during an outer iteration

It should be mentioned that during the course of the algorithm the largest values of Ψ(v) occur just after the update of µ. So next we derive an estimate for the effect of a µ-update on the value of Ψ(v). The following theorem is a reformulation of Theorem 3.2 in [8], whose proof depends on condition (30-e).

Theorem 5.1 (Theorem 3.2 in [8]) For any positive vector z ∈ R^n and any β ≥ 1, one has

∑_{i=1}^{n} ψ(βz_i) ≤ nψ( βϱ( (1/n) ∑_{i=1}^{n} ψ(z_i) ) ).

The equality holds if and only if there exists a scalar a ≥ 1 such that z_i = a for all i.

It follows from (31) that

Ψ(βv) = ∑_{j=1}^{N} ∑_{i=1}^{r_j} ψ(βλ_i(v^{(j)})),

which means that Ψ(βv) depends only on the eigenvalues λ_i(v^{(j)}) of v^{(j)} for each j, 1 ≤ j ≤ N. By applying Theorem 5.1, with z being the vector in R^r consisting of all the eigenvalues of v, the theorem below immediately follows.

Theorem 5.2 If v ∈ K_+ and β ≥ 1, one has

Ψ(βv) ≤ rψ( βϱ( Ψ(v)/r ) ).

Corollary 5.3 Let 0 ≤ θ < 1 and v_+ = v/√(1 − θ). If Ψ(v) ≤ τ, one has

Ψ(v_+) ≤ rψ( ϱ(τ/r)/√(1 − θ) ).

Proof: With β = 1/√(1 − θ) ≥ 1 and Ψ(v) ≤ τ, the corollary follows immediately from Theorem 5.2. □

As we will show in the next subsection, each inner iteration gives rise to a decrease of the value of Ψ(v). Hence during the course of the algorithm the largest values of Ψ(v) occur just after the µ-updates. Therefore, Corollary 5.3 provides a uniform upper bound for Ψ(v) during the execution of the algorithm. In the sequel we use the notation

Ψ(v_+) ≤ L_ψ(r, θ, τ) := rψ( ϱ(τ/r)/√(1 − θ) ) (50)

to denote this uniform upper bound for Ψ(v).
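For a concrete kernel function the bound (50) is easy to evaluate numerically. The sketch below does this for ψ_1 from Table 1, computing the inverse function ϱ of ψ on [1, ∞) by bisection; the parameter values r, θ and τ are illustrative assumptions.

import math

psi = lambda t: (t**2 - 1) / 2 - math.log(t)   # psi_1 from Table 1

def varrho(s, hi=2.0, tol=1e-12):
    """Inverse of psi on [1, oo): the t >= 1 with psi(t) = s (bisection)."""
    while psi(hi) < s:          # grow the bracket until psi(hi) >= s
        hi *= 2.0
    lo = 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid) < s else (lo, mid)
    return 0.5 * (lo + hi)

def L_psi(r, theta, tau):
    """Uniform upper bound (50) on Psi(v_+) after a mu-update."""
    return r * psi(varrho(tau / r) / math.sqrt(1 - theta))

print(L_psi(r=50, theta=0.5, tau=3.0))   # illustrative parameter values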

5.2 Decrease of the barrier function during an inner iteration

In each inner iteration, after the default step we have the new iterates x_+ = x + α∆x and s_+ = s + α∆s. The step size α is strictly feasible if and only if x + α∆x ∈ K_+ and s + α∆s ∈ K_+, i.e., if and only if x^{(j)} + α∆x^{(j)} ∈ (K_j)_+ and s^{(j)} + α∆s^{(j)} ∈ (K_j)_+ for each j, 1 ≤ j ≤ N. After some elementary reductions, we have

x_+^{(j)} = √µ P(w^{(j)})^{1/2}(v^{(j)} + αd_x^{(j)}),  and  s_+^{(j)} = √µ P(w^{(j)})^{-1/2}(v^{(j)} + αd_s^{(j)}).


Note that during an inner iteration the parameter µ is fixed. Hence, after the default step the new scaled vector v_+^{(j)} is given by

v_+^{(j)} = P(w_+^{(j)})^{-1/2} P(w^{(j)})^{1/2} (v^{(j)} + αd_x^{(j)}) = P(w_+^{(j)})^{1/2} P(w^{(j)})^{-1/2} (v^{(j)} + αd_s^{(j)}),

where, according to Lemma 2.18,

w_+^{(j)} = P(x_+^{(j)})^{1/2}( (P(x_+^{(j)})^{1/2} s_+^{(j)})^{-1/2} ).

By Lemma 2.21, we have

√µ v_+^{(j)} = P(w_+^{(j)})^{1/2} s_+^{(j)} ∼ ( P(x_+^{(j)})^{1/2} s_+^{(j)} )^{1/2}.

Lemma 2.20-(ii) with z = (w^{(j)})^{1/2} implies that

( µ P(v^{(j)} + αd_x^{(j)})^{1/2} (v^{(j)} + αd_s^{(j)}) )^{1/2} ∼ ( µ P( (P(w^{(j)})^{1/2}(v^{(j)} + αd_x^{(j)}))^{1/2} ) P(w^{(j)})^{-1/2}(v^{(j)} + αd_s^{(j)}) )^{1/2}.

Note that

( P(x_+^{(j)})^{1/2} s_+^{(j)} )^{1/2} = ( µ P( (P(w^{(j)})^{1/2}(v^{(j)} + αd_x^{(j)}))^{1/2} ) P(w^{(j)})^{-1/2}(v^{(j)} + αd_s^{(j)}) )^{1/2}.

We can conclude that v_+^{(j)} is similar to (P(v^{(j)} + αd_x^{(j)})^{1/2}(v^{(j)} + αd_s^{(j)}))^{1/2}. This shows that the eigenvalues of v_+^{(j)} are precisely the same as those of

v̄_+^{(j)} := ( P(v^{(j)} + αd_x^{(j)})^{1/2}(v^{(j)} + αd_s^{(j)}) )^{1/2}.

By Theorem 3.2, this implies that, for each j,

Ψ(v_+^{(j)}) = Ψ(v̄_+^{(j)}) ≤ (1/2)( Ψ(v^{(j)} + αd_x^{(j)}) + Ψ(v^{(j)} + αd_s^{(j)}) ).

Taking the sum over all j, 1 ≤ j ≤ N, we get

Ψ(v_+) = Ψ(v̄_+) ≤ (1/2)( Ψ(v + αd_x) + Ψ(v + αd_s) ).

Hence, denoting the decrease in Ψ(v) during an inner iteration as

f(α) := Ψ(v_+) − Ψ(v),

we have

f(α) ≤ f_1(α) := (1/2)( Ψ(v + αd_x) + Ψ(v + αd_s) ) − Ψ(v),

which means that f_1(α) gives an upper bound for the decrease of the barrier function Ψ(v). One can easily verify that f(0) = f_1(0) = 0. It should be pointed out that f_1(α) is convex, whereas f(α) is in general not convex.

It follows from Theorem 3.5 that

f′_1(α) = (1/2) ∑_{j=1}^{N} ( tr(ψ′(v^{(j)} + αd_x^{(j)}) ◦ d_x^{(j)}) + tr(ψ′(v^{(j)} + αd_s^{(j)}) ◦ d_s^{(j)}) )
        = (1/2)( tr(ψ′(v + αd_x) ◦ d_x) + tr(ψ′(v + αd_s) ◦ d_s) ).


This gives, by (47),

f′_1(0) = (1/2) tr(ψ′(v) ◦ (d_x + d_s)) = −(1/2) tr(ψ′(v) ◦ ψ′(v)) = −(1/2)‖ψ′(v)‖_F² = −2δ(v)² < 0. (51)

We can conclude that f_1(α) is monotonically decreasing in a neighborhood of α = 0.

Let η^{(j)} = v^{(j)} + αd_x^{(j)} and γ^{(j)} = v^{(j)} + αd_s^{(j)} for each j, 1 ≤ j ≤ N. To simplify the notation we use (and will use below) the following conventions:

d_{xi}^{(j)} := (d_x^{(j)})_i,  d_{si}^{(j)} := (d_s^{(j)})_i,  d_{xim_j}^{(j)} := (d_x^{(j)})_{im_j},  and  d_{sim_j}^{(j)} := (d_s^{(j)})_{im_j}. (52)

Using Theorem 3.5 again, we have

f′′_1(α) ≤ (1/2) ∑_{j=1}^{N} ( ∑_{i=1}^{r_j} ψ′′(λ_i(η^{(j)})) (d_{xi}^{(j)})² + ∑_{i<m_j} ψ′′(λ_i(η^{(j)})) tr((d_{xim_j}^{(j)})²) )
         + (1/2) ∑_{j=1}^{N} ( ∑_{i=1}^{r_j} ψ′′(λ_i(γ^{(j)})) (d_{si}^{(j)})² + ∑_{i<m_j} ψ′′(λ_i(γ^{(j)})) tr((d_{sim_j}^{(j)})²) ). (53)

Since the linear transformation A has the Cartesian P∗(κ)-property, we obtain

(1 + 4κ) ∑_{ν∈J_+(∆x)} 〈∆x^{(ν)}, [A(∆x)]^{(ν)}〉 + ∑_{ν∈J_-(∆x)} 〈∆x^{(ν)}, [A(∆x)]^{(ν)}〉 ≥ 0, (54)

where J_+(∆x) = {ν : 〈∆x^{(ν)}, [A(∆x)]^{(ν)}〉 > 0} and J_-(∆x) = {ν : 〈∆x^{(ν)}, [A(∆x)]^{(ν)}〉 < 0} are two index sets. It follows from (43) and A(∆x) = ∆s that

〈d_x, d_s〉 = 〈∆x, ∆s〉/µ.

Thus we can rewrite (54) as

(1 + 4κ) ∑_{ν∈J_+} 〈d_x^{(ν)}, d_s^{(ν)}〉 + ∑_{ν∈J_-} 〈d_x^{(ν)}, d_s^{(ν)}〉 ≥ 0. (55)

In order to facilitate the discussion, we denote

δ := δ(v),  δ_+ := ∑_{ν∈J_+} 〈d_x^{(ν)}, d_s^{(ν)}〉,  and  δ_- := −∑_{ν∈J_-} 〈d_x^{(ν)}, d_s^{(ν)}〉. (56)

Lemma 5.4 One has

δ_+ ≤ δ²,  and  δ_- ≤ (1 + 4κ)δ².

Proof: We have

(1/4) ∑_{ν∈J_+} ‖d_x^{(ν)} + d_s^{(ν)}‖_F² − ∑_{ν∈J_+} 〈d_x^{(ν)}, d_s^{(ν)}〉 = (1/4) ∑_{ν∈J_+} ‖d_x^{(ν)} − d_s^{(ν)}‖_F² ≥ 0.

This implies that

∑_{ν∈J_+} 〈d_x^{(ν)}, d_s^{(ν)}〉 ≤ (1/4) ∑_{ν∈J_+} ‖d_x^{(ν)} + d_s^{(ν)}‖_F².

Hence, we have, by (56),

δ_+ = ∑_{ν∈J_+} 〈d_x^{(ν)}, d_s^{(ν)}〉 ≤ (1/4) ∑_{ν∈J_+} ‖d_x^{(ν)} + d_s^{(ν)}‖_F² ≤ (1/4) ∑_{ν∈J} ‖d_x^{(ν)} + d_s^{(ν)}‖_F² = (1/4)‖d_x + d_s‖_F² = δ².


It follows immediately from (55) that

(1 + 4κ)δ_+ − δ_- ≥ 0.

Then

δ_- ≤ (1 + 4κ)δ_+ ≤ (1 + 4κ)δ².

This proves the lemma. □

Lemma 5.5 One has

‖d_x‖_F ≤ 2√(1 + 2κ) δ,  and  ‖d_s‖_F ≤ 2√(1 + 2κ) δ.

Proof: It follows from Lemma 5.4 that

4δ² = ‖d_x + d_s‖_F² = ‖d_x‖_F² + ‖d_s‖_F² + 2(δ_+ − δ_-) ≥ ‖d_x‖_F² + ‖d_s‖_F² − (8κ/(1 + 4κ)) δ_-.

Thus

‖d_x‖_F² + ‖d_s‖_F² ≤ 4δ² + (8κ/(1 + 4κ)) δ_- ≤ 4(1 + 2κ)δ². (57)

This implies the lemma. □

Lemma 5.6 One has

f′′_1(α) ≤ 2(1 + 2κ)δ² ψ′′( λ_min(v) − 2α√(1 + 2κ) δ ).

Proof: It follows from Lemma 2.17 and Lemma 5.5 that, for each j, 1 ≤ j ≤ N,

λ_i((v + αd_x)^{(j)}) ≥ λ_min((v + αd_x)^{(j)}) ≥ λ_min(v^{(j)}) − ‖αd_x^{(j)}‖_F ≥ λ_min(v^{(j)}) − 2α√(1 + 2κ) δ.

Since ψ′′(t) is monotonically decreasing in t ∈ (0, +∞), we have

ψ′′(λ_i((v + αd_x)^{(j)})) ≤ ψ′′( λ_min(v) − 2α√(1 + 2κ) δ ),  j = 1, 2, · · ·, N.

Similarly, we have

ψ′′(λ_i((v + αd_s)^{(j)})) ≤ ψ′′( λ_min(v) − 2α√(1 + 2κ) δ ),  j = 1, 2, · · ·, N.

d(j)x =

rj∑i=1

d(j)xi c

(j)i +

∑i<mj

d(j)ximj

be the Peirce decomposition of d(j)x with respect to the Jordan frame {c(j)1 , c

(j)2 , · · ·, c(j)rj }, and

d(j)s =

rj∑i=1

d(j)si b

(j)i +

∑i<mj

d(j)simj

be the Peirce decomposition of d(j)s with respect to the Jordan frame {b(j)1 , b

(j)2 , · · ·, b(j)rj }. We can conclude

that

‖dx‖2F =

N∑j=1

‖d(j)x ‖2F =

N∑j=1

〈d(j)x , d(j)

x 〉 =

N∑j=1

rj∑i=1

(d(j)xi )2 +

∑i<mj

tr((d(j)ximj

)2)

,

‖ds‖2F =

N∑j=1

‖d(j)s ‖2F =

N∑j=1

〈d(j)s , d(j)

s 〉 =

N∑j=1

rj∑i=1

(d(j)si )2 +

∑i<mj

tr((d(j)simj

)2)

.


Due to (53), we have

f′′_1(α) ≤ (1/2) ψ′′( λ_min(v) − 2α√(1 + 2κ) δ ) ∑_{j=1}^{N} ( ∑_{i=1}^{r_j} (d_{xi}^{(j)})² + ∑_{i<m_j} tr((d_{xim_j}^{(j)})²) )
         + (1/2) ψ′′( λ_min(v) − 2α√(1 + 2κ) δ ) ∑_{j=1}^{N} ( ∑_{i=1}^{r_j} (d_{si}^{(j)})² + ∑_{i<m_j} tr((d_{sim_j}^{(j)})²) )
       ≤ (1/2) ψ′′( λ_min(v) − 2α√(1 + 2κ) δ ) ( ‖d_x‖_F² + ‖d_s‖_F² )
       ≤ 2(1 + 2κ)δ² ψ′′( λ_min(v) − 2α√(1 + 2κ) δ ).

The last inequality holds due to (57). This completes the proof of the lemma. □

Next we will choose a suitable step size for the algorithm presented in Figure 1. It should be chosen such that x_+ and s_+ are feasible and such that Ψ(v_+) − Ψ(v) decreases sufficiently. The following strategy for selecting the step size is almost a "word-by-word" extension of the LO case [8]. For the sake of completeness, we give the proofs.

Lemma 5.7 If the step size α satisfies

−ψ′( λ_min(v) − 2α√(1 + 2κ) δ ) + ψ′(λ_min(v)) ≤ 2δ/√(1 + 2κ), (58)

one has f_1(α) ≤ 0.

Proof: It follows from (51) and Lemma 5.6 that

f′_1(α) = f′_1(0) + ∫_0^α f′′_1(ξ) dξ
        ≤ −2δ² + 2(1 + 2κ)δ² ∫_0^α ψ′′( λ_min(v) − 2ξ√(1 + 2κ) δ ) dξ
        = −2δ² − √(1 + 2κ) δ ∫_0^α ψ′′( λ_min(v) − 2ξ√(1 + 2κ) δ ) d( λ_min(v) − 2ξ√(1 + 2κ) δ )
        = −2δ² − √(1 + 2κ) δ ( ψ′( λ_min(v) − 2α√(1 + 2κ) δ ) − ψ′(λ_min(v)) )
        ≤ −2δ² + √(1 + 2κ) δ · 2δ/√(1 + 2κ)
        = 0.

This proves the lemma. □

Lemma 5.8 Let ρ : [0, ∞) → (0, 1] denote the inverse function of the restriction of −(1/2)ψ′(t) to the interval (0, 1], where ψ(t) is the eligible kernel function. Then the largest possible value ᾱ of the step size α satisfying (58) is given by

ᾱ = (1/(2√(1 + 2κ) δ)) ( ρ(δ) − ρ(Lδ) ), (59)

where L = 1 + 1/√(1 + 2κ).

Proof: It follows from (30-c) that ψ′′(t) is decreasing. Hence the derivative of the left-hand side of inequality (58) with respect to λ_min(v), i.e., −ψ′′( λ_min(v) − 2α√(1 + 2κ) δ ) + ψ′′(λ_min(v)), is negative.


This means that the left-hand side of inequality (58) is a decreasing function of λ_min(v). Therefore, for fixed δ, the smaller λ_min(v) is, the smaller ᾱ will be. So we derive

    δ = ½‖ψ′(v)‖ ≥ ½|ψ′(λ_min(v))| ≥ −½ψ′(λ_min(v)).

The equality holds if and only if λ_min(v) is the only eigenvalue of v^(j), 1 ≤ j ≤ N, that differs from 1, and λ_min(v) < 1 (in which case ψ′(λ_min(v)) < 0). So the worst situation for the step size occurs when λ_min(v) satisfies

    −½ψ′(λ_min(v)) = δ.   (60)

The derivative of the left-hand side of inequality (58) with respect to α is

    2√(1+2κ) δ ψ″(λ_min(v) − 2α√(1+2κ) δ) ≥ 0.

Hence, the left-hand side of inequality (58) is increasing in α. Thus, the largest possible value of α satisfying (58) occurs when

    −ψ′(λ_min(v) − 2α√(1+2κ) δ) + ψ′(λ_min(v)) = 2δ/√(1+2κ).

From (60), the above equation can be written as

    −½ψ′(λ_min(v) − 2α√(1+2κ) δ) = (1 + 1/√(1+2κ)) δ = Lδ.   (61)

It follows from (60), (61) and the definition of ρ that

    λ_min(v) = ρ(δ)  and  λ_min(v) − 2α√(1+2κ) δ = ρ(Lδ).

Thus, the largest possible value ᾱ of the step size satisfying (58) is given by

    ᾱ = (1/(2√(1+2κ) δ)) (ρ(δ) − ρ(Lδ)).

The proof of the lemma is completed. □

Lemma 5.9 Let ρ and ᾱ be as defined in Lemma 5.8. One has

    ᾱ ≥ 1/((1+2κ) ψ″(ρ(Lδ))).

Proof: From the definition of ρ, we have

    −ψ′(ρ(δ)) = 2δ.

Taking the derivative with respect to δ, we get

    −ψ″(ρ(δ)) ρ′(δ) = 2,

which yields

    ρ′(δ) = −2/ψ″(ρ(δ)) < 0.

Hence ρ is monotonically decreasing. An immediate consequence of (59) is

    ᾱ = (1/(2δ√(1+2κ))) ∫_{Lδ}^{δ} ρ′(σ) dσ = (1/(δ√(1+2κ))) ∫_{δ}^{Lδ} dσ/ψ″(ρ(σ)).


To obtain a lower bound for ᾱ, we want to replace the integrand of the last integral by its minimal value. So we want to know when ψ″(ρ(σ)) is maximal for σ ∈ [δ, Lδ]. Note that ψ″(t) is monotonically decreasing, so ψ″(ρ(σ)) is maximal for σ ∈ [δ, Lδ] when ρ(σ) is minimal. Since ρ is monotonically decreasing, this occurs when σ = Lδ. Therefore

    ᾱ = (1/(δ√(1+2κ))) ∫_{δ}^{Lδ} dσ/ψ″(ρ(σ)) ≥ (1/(δ√(1+2κ))) · (1/ψ″(ρ(Lδ))) ∫_{δ}^{Lδ} dσ = 1/((1+2κ) ψ″(ρ(Lδ))),

which proves the lemma. □

In the sequel we use the notation

    α̃ := 1/((1+2κ) ψ″(ρ(Lδ))),   (62)

and we will use α̃ as the default step size. It is obvious that ᾱ ≥ α̃.
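To make the quantities ρ, ᾱ and α̃ concrete, the following small Python sketch evaluates (59) and (62) for the classical logarithmic kernel ψ(t) = (t²−1)/2 − log t (kernel i = 1 in Table 1), for which ρ has the closed form ρ(s) = √(s²+1) − s. This is only an illustration under our own naming; the sample values of δ and κ are hypothetical.

    import math

    # Illustrative sketch for the logarithmic kernel psi(t) = (t^2 - 1)/2 - log t,
    # for which psi'(t) = t - 1/t and psi''(t) = 1 + 1/t^2.

    def psi_pp(t):
        # second derivative of the logarithmic kernel
        return 1.0 + 1.0 / t**2

    def rho(s):
        # inverse of -psi'(t)/2 on (0, 1]: 1/t - t = 2s  =>  t = sqrt(s^2 + 1) - s
        return math.sqrt(s * s + 1.0) - s

    def largest_step_size(delta, kappa):
        # alpha_bar from (59), with L = 1 + 1/sqrt(1 + 2*kappa) as in Lemma 5.8
        L = 1.0 + 1.0 / math.sqrt(1.0 + 2.0 * kappa)
        return (rho(delta) - rho(L * delta)) / (2.0 * math.sqrt(1.0 + 2.0 * kappa) * delta)

    def default_step_size(delta, kappa):
        # alpha_tilde from (62), the lower bound of Lemma 5.9
        L = 1.0 + 1.0 / math.sqrt(1.0 + 2.0 * kappa)
        return 1.0 / ((1.0 + 2.0 * kappa) * psi_pp(rho(L * delta)))

    delta, kappa = 1.0, 0.25      # hypothetical values
    print(largest_step_size(delta, kappa), default_step_size(delta, kappa))
    # Lemma 5.9 guarantees that the first value is at least the second.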

Lemma 5.10 (Lemma 3.12 in [36]) Let h(t) be a twice differentiable convex function with h(0) = 0, h′(0) < 0, and let h(t) attain its (global) minimum at t* > 0. If h″(t) is increasing for t ∈ [0, t*], then

    h(t) ≤ t h′(0)/2,  0 ≤ t ≤ t*.

Lemma 5.11 If the step size α is such that α ≤ ᾱ, then

    f(α) ≤ −αδ².

Proof: Let the univariate function h be such that

    h(0) = f₁(0),  h′(0) = f₁′(0) = −2δ²,  h″(α) = 2(1+2κ) δ² ψ″(λ_min(v) − 2α√(1+2κ) δ).

According to Lemma 5.6, we have f₁″(α) ≤ h″(α), which implies f₁′(α) ≤ h′(α) and f₁(α) ≤ h(α). Taking α ≤ ᾱ, with ᾱ as defined by (59), and using the fundamental theorem of calculus, we derive

    h′(α) = h′(0) + ∫₀^α h″(ξ) dξ
          = −2δ² + 2(1+2κ) δ² ∫₀^α ψ″(λ_min(v) − 2ξ√(1+2κ) δ) dξ
          = −2δ² + √(1+2κ) δ (ψ′(λ_min(v)) − ψ′(λ_min(v) − 2α√(1+2κ) δ))
          ≤ −2δ² + √(1+2κ) δ · 2δ/√(1+2κ) = 0.

The last inequality holds due to the definition of ᾱ, which guarantees that if α ≤ ᾱ then inequality (58) holds. Since ψ‴(t) < 0 for t > 0, ψ″(t) is decreasing in t. Therefore h″(α) is increasing in α. We have, by Lemma 5.10,

    f₁(α) ≤ h(α) ≤ ½ α h′(0) = −αδ².

As we mentioned before, f₁(α) is an upper bound for f(α); hence the lemma is proved. □

Combining the results of Lemma 5.9 and Lemma 5.11, we obtain the following theorem.

Theorem 5.12 With α̃ being the default step size, as given by (62), one has

    f(α̃) ≤ − δ²/((1+2κ) ψ″(ρ(Lδ))).   (63)


Lemma 5.13 The right-hand side expression in (63) is monotonically decreasing in δ(v).

Proof: Let t = ρ(Lδ), which implies 0 < t ≤ 1 and is equivalent to

    −½ψ′(t) = −½ψ′(ρ(Lδ)) = Lδ.

Since ρ is monotonically decreasing, t is monotonically decreasing if δ increases. Hence the right-hand side expression in (63) is monotonically decreasing in δ if and only if the function

    g(t) = (ψ′(t))² / (4L²(1+2κ) ψ″(t))

is monotonically decreasing for 0 < t ≤ 1. Note that g(1) = 0 and

    g′(t) = ψ′(t) (2(ψ″(t))² − ψ′(t)ψ‴(t)) / (4L²(1+2κ) (ψ″(t))²).

Hence, since ψ′(1) = 0 and ψ′(t) ≤ 0 for 0 < t ≤ 1, g(t) is monotonically decreasing for 0 < t ≤ 1 if and only if

    2(ψ″(t))² − ψ′(t)ψ‴(t) ≥ 0,  0 < t ≤ 1.

The last inequality is satisfied due to condition (30-d). Hence the lemma is proved. □
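As a quick sanity check of Lemma 5.13, one can verify numerically that g(t) is non-increasing on (0, 1] for a concrete eligible kernel. The Python sketch below does this for the logarithmic kernel ψ(t) = (t²−1)/2 − log t; the grid and the value of κ are arbitrary illustrative choices.

    # Numerical sanity check that g(t) = (psi'(t))^2 / (4 L^2 (1 + 2*kappa) psi''(t))
    # is non-increasing on (0, 1] for the logarithmic kernel (illustrative only).
    import math

    kappa = 0.5
    L = 1.0 + 1.0 / math.sqrt(1.0 + 2.0 * kappa)

    def g(t):
        psi_p = t - 1.0 / t            # psi'(t)
        psi_pp = 1.0 + 1.0 / t**2      # psi''(t)
        return psi_p**2 / (4.0 * L**2 * (1.0 + 2.0 * kappa) * psi_pp)

    grid = [0.01 * i for i in range(1, 101)]     # t = 0.01, 0.02, ..., 1.00
    values = [g(t) for t in grid]
    print(all(values[i] >= values[i + 1] for i in range(len(values) - 1)))   # expect True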

Combining the results of Theorem 5.12 and Theorem 3.4, while also using Lemma 5.13, we obtain

    f(α̃) ≤ − (ψ′(%(Ψ(v))))² / (4(1+2κ) ψ″(ρ((L/2) ψ′(%(Ψ(v)))))).   (64)

This expresses the decrease in Ψ(v) during an inner iteration completely in terms of ψ, its first and second derivatives, and the inverse functions ρ and %.

5.3 Iteration bounds for the algorithms

We need to count how many inner iterations are required to return to the situation where Ψ(v) ≤ τ. We denote the value of Ψ(v) after the µ-update by Ψ₀; the subsequent values in the same outer iteration are denoted by Ψ_k, k = 1, 2, ..., K, where K denotes the total number of inner iterations in the outer iteration.

Following the approach in [8] for LO, we let the constants β > 0 and γ ∈ (0, 1] be such that, for Ψ(v) ≥ τ, one has

    (ψ′(%(Ψ(v))))² / (4(1+2κ) ψ″(ρ((L/2) ψ′(%(Ψ(v)))))) ≥ β Ψ(v)^(1−γ).

Recall that the left-hand side expression is increasing in Ψ(v). Therefore such numbers β and γ certainly exist (take, e.g., γ = 1 and β equal to the value of the left-hand side expression for Ψ(v) = τ).

Lemma 5.14 (Lemma 14 in [37]) Let t₀, t₁, ..., t_K be a sequence of positive numbers such that

    t_{k+1} ≤ t_k − β t_k^(1−γ),  k = 0, 1, ..., K−1,

where β > 0 and 0 < γ ≤ 1. Then K ≤ ⌈t₀^γ/(βγ)⌉.

Theorem 5.15 One has

    K ≤ ⌈Ψ₀^γ/(βγ)⌉.


Proof: The definition of K implies Ψ_{K−1} > τ, Ψ_K ≤ τ, and

    Ψ_{k+1} ≤ Ψ_k − β Ψ_k^(1−γ),  k = 0, 1, ..., K−1.

The theorem follows immediately from Lemma 5.14 with t_k = Ψ_k. □
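The counting argument in Theorem 5.15 can also be checked numerically. The Python sketch below iterates the worst-case recursion of Lemma 5.14 and compares the observed number of inner iterations with ⌈Ψ₀^γ/(βγ)⌉; the chosen values of Ψ₀, β, γ and τ are hypothetical and serve only as an illustration.

    import math

    def inner_iterations(psi0, beta, gamma, tau):
        # Simulate the worst-case decrease Psi_{k+1} = Psi_k - beta * Psi_k^(1 - gamma)
        # and count the steps until Psi_k <= tau.
        psi, k = psi0, 0
        while psi > tau:
            psi -= beta * psi ** (1.0 - gamma)
            k += 1
        return k

    psi0, beta, gamma, tau = 100.0, 0.5, 0.5, 1.0      # hypothetical values
    K = inner_iterations(psi0, beta, gamma, tau)
    bound = math.ceil(psi0 ** gamma / (beta * gamma))  # the bound of Theorem 5.15
    print(K, bound, K <= bound)                        # the observed K never exceeds the bound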

Theorem 5.15 provides an estimate for the number of inner iterations between two successive barrier parameter updates, in terms of Ψ₀ and the constants β and γ. An upper bound for the total number of iterations is obtained by multiplying (the upper bound for) the number K of inner iterations by the number of outer iterations, which is bounded above by (1/θ) log(r/ε) (cf. Lemma II.17 in [41]). Thus we obtain the following upper bound on the total number of iterations, namely

    (Ψ₀^γ/(θβγ)) log(r/ε).

From Corollary 5.3, we obtain the following iteration bound for the algorithms:

    (1/(θβγ)) (r ψ(%(τ/r)/√(1−θ)))^γ log(r/ε).   (65)

Note that this iteration bound is completely determined by the eligible kernel function ψ(t) and the parameters θ and τ in the algorithms. Remarkably, it has exactly the same form as the bound in [8] for LO, with the rank r playing the role of n.

5.4 Application to the eligible kernel functions

As a result, the computations required to obtain the iteration bounds for large- and small-update methods based on the eligible kernel functions can be performed in a systematic way by using the following scheme, which is essentially the same scheme as was presented in [8], Section 6.1, for the LO case.


• Step 0: Input an eligible kernel function ψ(t); an update parameter θ, 0 < θ < 1; a threshold parameter τ; and an accuracy parameter ε.

• Step 1: Solve the equation −½ψ′(t) = s to get ρ(s), the inverse function of −½ψ′(t), t ∈ (0, 1]. If the equation is hard to solve, derive a lower bound for ρ(s).

• Step 2: Calculate the decrease of Ψ(v) in terms of δ for the default step size α̃ from

    f(α̃) ≤ − δ²/((1+2κ) ψ″(ρ(Lδ))).

• Step 3: Solve the equation ψ(t) = s to get %(s), the inverse function of ψ(t), t ≥ 1. If the equation is hard to solve, derive lower and upper bounds for %(s).

• Step 4: Derive a lower bound for δ(v) in terms of Ψ(v) by using

    δ(v) ≥ ½ ψ′(%(Ψ(v))).

• Step 5: Use the results of Step 3 and Step 4 to find positive constants β and γ, with γ ∈ (0, 1], such that

    f(α̃) ≤ −β Ψ(v)^(1−γ).

• Step 6: Calculate the uniform upper bound Ψ₀ for Ψ(v) from

    Ψ₀ ≤ L_ψ(r, θ, τ) = r ψ(%(τ/r)/√(1−θ)).

• Step 7: Derive an upper bound for the total number of iterations from

    (Ψ₀^γ/(θβγ)) log(r/ε).

• Step 8: Set τ = O(r) and θ = Θ(1) to obtain an iteration bound for large-update methods, or set τ = O(1) and θ = Θ(1/√r) to obtain an iteration bound for small-update methods.
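As a small numerical illustration of Steps 3 and 6, the Python sketch below approximates %(s), the inverse of ψ(t) on t ≥ 1, by bisection for the logarithmic kernel ψ(t) = (t²−1)/2 − log t, and then evaluates the upper bound of Step 6 for a large-update and a small-update choice of (θ, τ). The values of r, θ and τ are hypothetical, and the kernel-specific constants β and γ of Step 5 are not computed here.

    import math

    def psi(t):
        # logarithmic kernel: psi(t) = (t^2 - 1)/2 - log t
        return 0.5 * (t * t - 1.0) - math.log(t)

    def varrho(s, hi=1e8, tol=1e-12):
        # Step 3: invert psi on [1, +infinity) by bisection (psi is increasing there).
        lo = 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if psi(mid) < s:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    def psi0_upper_bound(r, theta, tau):
        # Step 6: Psi_0 <= L_psi(r, theta, tau) = r * psi( varrho(tau / r) / sqrt(1 - theta) ).
        return r * psi(varrho(tau / r) / math.sqrt(1.0 - theta))

    r = 50
    print(psi0_upper_bound(r, theta=0.5, tau=float(r)))               # large-update regime
    print(psi0_upper_bound(r, theta=1.0 / math.sqrt(r), tau=1.0))     # small-update regime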

As a consequence, since the resulting iteration bounds for LO based on a wide class of eligible kernel functions have been computed in [3, 5, 6, 7, 8, 12, 36, 37, 38, 41], similar to the analysis presented in [49] based on the specific eligible kernel function ψ₁₂(t), we immediately get the iteration bounds for the Cartesian P∗(κ)-SCLCP based on the other eligible kernel functions. The resulting iteration bounds are summarized in the third and fourth columns of Table 1.

Small-update methods based on the eligible kernel functions in Table 1 all have the same complexity as the small-update method based on the logarithmic barrier function, namely O((1+2κ)√r log(r/ε)); as is well known, until now this is the best iteration bound for small-update methods. It must be noted that where appropriate (i.e., for i ∈ {3, 4, 7, 8, 10, 12}) one has to take q = O(1) to achieve this bound. One may easily verify that for i ∈ {3, 4, 12} the best iteration bound for large-update methods is obtained by taking p = 1 and q = ½ log r. This gives the iteration bound O((1+2κ)√r log r log(r/ε)), which is the currently best known bound for large-update methods. The same bound is achieved for the eligible kernel functions in the table with i = 7 and i = 8, also by taking q = O(log r).
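To see where the extra log r factor in the large-update bound comes from, note that with q = ½ log r (natural logarithm) one has

    r^((q+1)/(2q)) = r^(1/2 + 1/(2q)) = √r · r^(1/log r) = e√r,

so the bound O((1+2κ) q r^((q+1)/(2q)) log(r/ε)) for i = 3 indeed reduces to O((1+2κ)√r log r log(r/ε)).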


 1.  ψ₁(t) = (t²−1)/2 − log t;   large-update: O((1+2κ) r log(r/ε));   small-update: O((1+2κ) √r log(r/ε))
 2.  ψ₂(t) = ½(t − 1/t)²;   large-update: O((1+2κ) r^(2/3) log(r/ε));   small-update: O((1+2κ) √r log(r/ε))
 3.  ψ₃(t) = (t²−1)/2 + (t^(1−q)−1)/(q−1), q > 1;   large-update: O((1+2κ) q r^((q+1)/(2q)) log(r/ε));   small-update: O((1+2κ) q² √r log(r/ε))
 4.  ψ₄(t) = (t²−1)/2 + (t^(1−q)−1)/(q(q−1)) − ((q−1)/q)(t−1), q > 1;   large-update: O((1+2κ) q r^((q+1)/(2q)) log(r/ε));   small-update: O((1+2κ) q² √r log(r/ε))
 5.  ψ₅(t) = (t²−1)/2 + (e^(1/t) − e)/e;   large-update: O((1+2κ) √r (log r)² log(r/ε));   small-update: O((1+2κ) √r log(r/ε))
 6.  ψ₆(t) = (t²−1)/2 − ∫₁ᵗ e^(1/ξ−1) dξ;   large-update: O((1+2κ) √r (log r)² log(r/ε));   small-update: O((1+2κ) √r log(r/ε))
 7.  ψ₇(t) = (t²−1)/2 + (e^(q(1/t−1)) − 1)/q, q ≥ 1;   large-update: O((1+2κ) q √r log(r/ε));   small-update: O((1+2κ) q √(qr) log(r/ε))
 8.  ψ₈(t) = (t²−1)/2 − ∫₁ᵗ e^(q(1/ξ−1)) dξ, q ≥ 1;   large-update: O((1+2κ) q √r log(r/ε));   small-update: O((1+2κ) q √(qr) log(r/ε))
 9.  ψ₉(t) = (t²−1)/2 + ((e−1)²/e) · 1/(eᵗ−1) − (e−1)/e;   large-update: O((1+2κ) r^(3/4) log(r/ε));   small-update: O((1+2κ) √r log(r/ε))
10.  ψ₁₀(t) = t − 1 + (t^(1−q)−1)/(q−1), q > 1;   large-update: O((1+2κ) q r log(r/ε));   small-update: O((1+2κ) q² √r log(r/ε))
11.  ψ₁₁(t) = (t^(p+1)−1)/(p+1) − log t, p ∈ [0, 1];   large-update: O((1+2κ) r log(r/ε));   small-update: O((1+2κ) √r log(r/ε))
12.  ψ₁₂(t) = (t^(p+1)−1)/(p+1) + (t^(1−q)−1)/(q−1), p ∈ [0, 1], q > 1;   large-update: O((1+2κ) q r^((p+q)/(q(1+p))) log(r/ε));   small-update: O((1+2κ) q² √r log(r/ε))

Table 1: Iteration bounds for large- and small-update methods

6 Conclusions and remarks

In this paper, we generalized the recent work on IPMs for LO [8] and P∗(κ)-LCP [28] to the Cartesian P∗(κ)-SCLCP by using the machinery of Euclidean Jordan algebras. The iteration bounds for the algorithms are derived in a systematic scheme, which highly depends on the choice of the eligible kernel functions, especially on the inverse functions of the eligible kernel functions and their derivatives. Moreover, the currently best known iteration bounds for large- and small-update methods are obtained, namely O((1+2κ)√r log r log(r/ε)) and O((1+2κ)√r log(r/ε)), respectively, which reduces the gap between the practical behavior of the algorithms and their theoretical performance results.

Some interesting topics for further research remain. The search directions used in this paper are based on the NT-scaling scheme. It would be interesting to investigate whether similar algorithms can be designed using other scaling schemes while still obtaining polynomial-time iteration bounds. Next, extensions to nonlinear complementarity problems deserve to be investigated. Finally, numerical tests would be of interest for studying the practical behavior of the algorithms and comparing them with other existing approaches.

Acknowledgments

The authors would like to thank the referees for their useful suggestions, which improved the presentation of this paper. The research of the first author is supported by the National Natural Science Foundation of China (No. 11001169) and the China Postdoctoral Science Foundation (No. 20100480604). The research of the second author is supported by the National Natural Science Foundation of China (No. 10871130), the Ph.D. Foundation Grant (No. 20093127110005) and the Shanghai Leading Academic Discipline Project (No. T0401).

References

[1] Amini K. and Haseli A., 2007. A new proximity function generating the best known iteration bounds for both large-update and small-update interior-point methods. ANZIAM Journal 49(2), 259-270.

[2] Baes M., 2007. Convexity and differentiability properties of spectral functions and spectral mappings on Euclidean Jordan algebras. Linear Algebra and its Applications 422(2-3), 664-700.


[3] Bai Y.Q., Guo J.L., and Roos C., 2009. A new kernel function yielding the best known iteration bounds for primal-dual interior-point algorithms. Acta Mathematica Sinica, English Series 25(12), 2169-2178.

[4] Bai Y.Q., Lesaja G., and Roos C., 2008. A new class of polynomial interior-point algorithms for P∗(κ) linear complementarity problems. Pacific Journal of Optimization 4(1), 19-41.

[5] Bai Y.Q., Lesaja G., Roos C., Wang G.Q., and Ghami M. El, 2008. A class of large-update and small-update primal-dual interior-point algorithms for linear optimization. Journal of Optimization Theory and Applications 138(3), 341-359.

[6] Bai Y.Q. and Roos C., 2005. A primal-dual interior-point method based on a new kernel function with linear growth rate. In L. Caccetta and V. Rehbock, editors, Proceedings of Symposium on Industrial Optimisation and the 9th Australian Optimisation Day, pages 15-28, Curtin University of Technology, Australia.

[7] Bai Y.Q., Roos C., and Ghami M. El, 2002. A primal-dual interior-point method for linear optimization based on a new proximity function. Optimization Methods and Software 17(6), 985-1008.

[8] Bai Y.Q., Ghami M. El, and Roos C., 2004. A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM Journal on Optimization 15(1), 101-128.

[9] Bai Y.Q., Wang G.Q., and Roos C., 2009. Primal-dual interior-point algorithms for second-order cone optimization based on kernel functions. Nonlinear Analysis: Theory, Methods and Applications 70(10), 3584-3602.

[10] Chen X. and Qi H.D., 2006. Cartesian P-property and its applications to the semidefinite linear complementarity problem. Mathematical Programming 106(1), 177-201.

[11] Chua C.B. and Peng Y., 2010. A continuation method for nonlinear complementarity problems over symmetric cones. SIAM Journal on Optimization 20(5), 2560-2583.

[12] Ghami M. El, Ivanov I.D., Roos C., and Steihaug T., 2008. A polynomial-time algorithm for LO based on generalized logarithmic barrier functions. International Journal of Applied Mathematics 21(1), 99-115.

[13] Cho G.M., 2008. A new large-update interior point algorithm for P∗(κ) linear complementarity problems. Journal of Computational and Applied Mathematics 216(1), 265-278.

[14] Cho G.M. and Kim M.K., 2006. A new large-update interior point algorithm for P∗(κ) LCPs based on kernel functions. Applied Mathematics and Computation 182(2), 1169-1183.

[15] Cho G.M., Kim M.K., and Lee Y.H., 2007. Complexity of large-update interior point algorithm for P∗(κ) linear complementarity problems. Computers and Mathematics with Applications 53(6), 948-960.

[16] Faraut J. and Koranyi A., 1994. Analysis on Symmetric Cones. Oxford University Press, New York, USA.

[17] Faybusovich L., 1997. Euclidean Jordan algebras and interior-point algorithms. Positivity 1, 331-357.

[18] Faybusovich L., 2002. A Jordan-algebraic approach to potential-reduction algorithms. Mathematische Zeitschrift 239(1), 117-129.

[19] Fukushima M., Luo Z.Q., and Tseng P., 2001. Smoothing functions for second-order cone complementarity problems. SIAM Journal on Optimization 12(2), 436-460.

[20] Gowda M.S. and Sznajder R., 2007. Some global uniqueness and solvability results for linear complementarity problems over symmetric cones. SIAM Journal on Optimization 18(2), 461-481.

[21] Gu G., Zangiabadi M., and Roos C., 2009. Full Nesterov-Todd step infeasible interior-point method for symmetric optimization. Submitted to European Journal of Operational Research.

[22] Hayashi S., Yamashita N., and Fukushima M., 2005. A combined smoothing and regularization method for monotone second-order cone complementarity problems. SIAM Journal on Optimization 15(2), 593-615.

[23] Huang Z.H., Hu S.L., and Han J.Y., 2009. Convergence of a smoothing algorithm for symmetric cone complementarity problems with a nonmonotone line search. Science in China Series A: Mathematics 52(4), 833-848.

[24] Koranyi A., 1984. Monotone functions on formally real Jordan algebras. Mathematische Annalen 269(1), 73-76.

[25] Kojima M., Megiddo N., Noma T., and Yoshise A., 1991. A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Computer Science, Vol. 538, Springer-Verlag, New York, USA.

[26] Kojima M., Mizuno S., and Hara S., 1997. Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices. SIAM Journal on Optimization 7(1), 86-125.


[27] Kong L.C., Sun J., and Xiu N.H., 2008. A regularized smoothing Newton method for symmetric cone complementarity problems. SIAM Journal on Optimization 19(3), 1028-1047.

[28] Lesaja G. and Roos C., 2010. Unified analysis of kernel-based interior-point methods for P∗(κ)-linear complementarity problems. SIAM Journal on Optimization 20(6), 3014-3039.

[29] Luo Z.Y. and Xiu N.H., 2009. Path-following interior point algorithms for the Cartesian P∗(κ)-LCP over symmetric cones. Science in China Series A: Mathematics 52(8), 1769-1784.

[30] Miao J., 1995. A quadratically convergent O((1 + κ)√n L)-iteration algorithm for the P∗(κ)-matrix linear complementarity problem. Mathematical Programming 69(3), 355-368.

[31] Muramatsu M., 2002. On a commutative class of search directions for linear programming over symmetric cones. Journal of Optimization Theory and Applications 112(3), 595-625.

[32] Nesterov Y.E. and Todd M.J., 1997. Self-scaled barriers and interior-point methods for convex programming. Mathematics of Operations Research 22(1), 1-42.

[33] Nesterov Y.E. and Todd M.J., 1998. Primal-dual interior-point methods for self-scaled cones. SIAM Journal on Optimization 8(2), 324-364.

[34] Pan S.H. and Chen J.S., 2009. A regularization method for the second-order cone complementarity problem with the Cartesian P0-property. Nonlinear Analysis, Theory, Methods and Applications 70(4), 1475-1491.

[35] Pan S.H. and Chen J.S., 2009. Growth behavior of two classes of merit functions for the symmetric cone complementarity problems. Journal of Optimization Theory and Applications 141(1), 167-191.

[36] Peng J., Roos C., and Terlaky T., 2000. New complexity analysis of the primal-dual Newton method for linear optimization. Annals of Operations Research 99(1-4), 23-39.

[37] Peng J., Roos C., and Terlaky T., 2002. Self-regular functions and new search directions for linear and semi-definite optimization. Mathematical Programming 93(1), 129-171.

[38] Peng J., Roos C., and Terlaky T., 2002. A new class of polynomial primal-dual interior-point methods for second-order cone optimization based on self-regular proximities. SIAM Journal on Optimization 13(1), 179-203.

[39] Potra F.A. and Stoer J., 2009. On a class of superlinearly convergent polynomial time interior point methods for sufficient LCP. SIAM Journal on Optimization 20(3), 1333-1363.

[40] Rangarajan B.K., 2006. Polynomial convergence of infeasible-interior-point methods over symmetric cones. SIAM Journal on Optimization 16(4), 1211-1229.

[41] Roos C., Terlaky T., and Vial J.-Ph., 1997. Theory and Algorithms for Linear Optimization. An Interior-Point Approach. John Wiley and Sons, Chichester, UK.

[42] Schmieta S.H. and Alizadeh F., 2003. Extension of primal-dual interior-point algorithms to symmetric cones. Mathematical Programming 96(3), 409-438.

[43] Shida M., Shindoh S., and Kojima M., 1998. Existence and uniqueness of search directions in interior-point algorithms for the SDP and the monotone SDLCP. SIAM Journal on Optimization 8(2), 387-396.

[44] Sim C.K., 2011. Superlinear convergence of an infeasible predictor-corrector path-following interior point algorithm for a semidefinite linear complementarity problem using the HKM direction. SIAM Journal on Optimization 21(1), 102-126.

[45] Vieira M.V.C., 2007. Jordan algebraic approach to symmetric optimization. Ph.D. thesis, Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, The Netherlands.

[46] Wang G.Q., Bai Y.Q., and Roos C., 2005. Primal-dual interior-point algorithms for semidefinite optimization based on a simple kernel function. Journal of Mathematical Modelling and Algorithms 4(4), 409-433.

[47] Wang G.Q. and Bai Y.Q., 2009. Polynomial interior-point algorithms for P∗(κ) horizontal linear complementarity problem. Journal of Computational and Applied Mathematics 233(2), 248-263.

[48] Wang G.Q. and Zhu D.T., 2010. A class of polynomial interior-point algorithms for the Cartesian P∗(κ) second-order cone linear complementarity problem. Nonlinear Analysis, Theory, Methods and Applications 73(12), 3705-3722.

[49] Wang G.Q., 2009. A new class of polynomial interior-point algorithms for the Cartesian P∗(κ)-LCP over symmetric cones. Submitted to Journal of Optimization Theory and Applications.

[50] Yoshise A., 2006. Interior point trajectories and a homogeneous model for nonlinear complementarity problems over symmetric cones. SIAM Journal on Optimization 17(4), 1129-1153.


[51] Yoshise A., 2010. Complementarity problems over symmetric cones. Discussion Paper Series, No. 1262, University of Tsukuba, Tsukuba, Japan.

[52] Zhao Y.B. and Li D., 2005. A new path-following algorithm for nonlinear P∗ complementarity problems. Computational Optimization and Applications 34(2), 183-214.
