# A smoothing-type algorithm for solving inequalities under the order induced by a symmetric cone

Nan Lu^{1} and Ying Zhang^{2}

*Journal of Inequalities and Applications* **2011**, 2011:4

https://doi.org/10.1186/1029-242X-2011-4

© Lu and Zhang; licensee Springer. 2011

**Received: **3 October 2010

**Accepted: **16 June 2011

**Published: **16 June 2011

## Abstract

In this article, we consider the numerical method for solving the system of inequalities under the order induced by a symmetric cone with the function involved being monotone. Based on a perturbed smoothing function, the underlying system of inequalities is reformulated as a system of smooth equations, and a smoothing-type method is proposed to solve it iteratively so that a solution of the system of inequalities is found. By means of the theory of Euclidean Jordan algebras, the algorithm is proved to be well defined, and to be globally convergent under weak assumptions and locally quadratically convergent under suitable assumptions. Preliminary numerical results indicate that the algorithm is effective.

**AMS subject classifications:** 90C33, 65K10.

## 1 Introduction

Let *V* be a finite-dimensional vector space over ℜ with an inner product 〈·,·〉. If there exists a bilinear transformation from *V* × *V* to *V*, denoted by "○," such that for any *x*, *y*, *z* ∈ *V*,

*x* ○ *y* = *y* ○ *x*, *x* ○ (*x*^{2} ○ *y*) = *x*^{2} ○ (*x* ○ *y*), 〈*x* ○ *y*, *z*〉 = 〈*x*, *y* ○ *z*〉, (1.1)

where *x*^{2} := *x* ○ *x*, then (*V*, ○, 〈·,·〉) is called a Euclidean Jordan algebra. Let *K* := {*x*^{2} : *x* ∈ *V*}; then *K* is a symmetric cone [1]. Thus, *K* could induce a partial order ≽: for any *x* ∈ *V*, *x* ≽ 0 means *x*∈ *K*. Similarly, *x* ≻ 0 means *x* ∈ int*K* where int*K* denotes the interior of *K*; and *x* ≼ 0 means -*x* ≽ 0.
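For concreteness, these axioms can be checked numerically in the Jordan algebra associated with the second-order cone, where the product is *x* ○ *y* = (〈*x*, *y*〉, *x*_{1}*ȳ* + *y*_{1}*x̄*). The following is an illustrative sketch; the function name `jordan_prod` and the random test vectors are our own choices, not part of the article:

```python
import numpy as np

def jordan_prod(x, y):
    # Jordan product associated with the second-order cone on R^n:
    # x o y = (<x, y>, x1 * y_bar + y1 * x_bar)
    z = np.empty_like(x)
    z[0] = x @ y
    z[1:] = x[0] * y[1:] + y[0] * x[1:]
    return z

rng = np.random.default_rng(0)
x, y, z = (rng.standard_normal(5) for _ in range(3))
x2 = jordan_prod(x, x)

# commutativity: x o y = y o x
assert np.allclose(jordan_prod(x, y), jordan_prod(y, x))
# Jordan identity: x o (x^2 o y) = x^2 o (x o y)
assert np.allclose(jordan_prod(x, jordan_prod(x2, y)),
                   jordan_prod(x2, jordan_prod(x, y)))
# associativity of the inner product: <x o y, z> = <x, y o z>
assert np.allclose(jordan_prod(x, y) @ z, x @ jordan_prod(y, z))
print("Jordan algebra axioms hold for the second-order cone product")
```

Note that the product is bilinear and commutative but not associative in general; only the weaker Jordan identity holds.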

For any *x* ∈ *V*, let *x*_{+} := *P*_{K}(*x*) denote the (orthogonal) projection of *x* onto *K*. By the Moreau decomposition [2], we can define *x*_{-} := (-*x*)_{+}, so that *x* = *x*_{+} - *x*_{-}. In this article, we consider the following system of inequalities under the order induced by the symmetric cone *K*:

*f* (*x*) ≼ 0, (1.2)

where *f* : *V* → *V* is a transformation (see two such transformations: the *Löwner operator* defined in [3], and the *relaxation transformation* defined in [4]). We assume that *f* is continuously differentiable. Recall that a transformation *f* : *V* → *V* is said to be continuously differentiable if the linear operator ∇*f* (*x*) : *V* → *V* is continuous at each *x* ∈ *V*, where ∇*f* (*x*), satisfying

lim_{||h||→0} ||*f* (*x* + *h*) - *f* (*x*) - ∇*f* (*x*)*h*|| / ||*h*|| = 0,

is the Fréchet derivative of *f* at *x*.

When *K* = ℜ^{n}_{+}, (1.2) reduces to the usual system of inequalities over ℜ^{n}. In this case, the system of inequalities has been studied extensively because of its various applications in data analysis, set separation problems, computer-aided design problems, image reconstruction, and detecting the feasibility of nonlinear programming; many iterative methods exist for solving such inequalities (see, for example, [5–9]). It is well known that the positive semidefinite matrix cone, the second-order cone, and the nonnegative orthant ℜ^{n}_{+} are the most common and most studied symmetric cones, with many applications in practice. Thus, the investigation of (1.2) provides a unified theoretical framework for studying the respective systems of inequalities under the orders induced by the nonnegative orthant, the second-order cone, and the positive semidefinite matrix cone. This is one of the factors that motivated us to investigate (1.2).

Another motivating factor comes from detecting the feasibility of optimization problems. A main method for solving symmetric cone programming problems is the interior point method (IPM, for short). A usual requirement in an IPM is that a feasible interior point of the problem is known in advance. In general, however, finding a feasible interior point is as difficult as solving the optimization problem itself. Consider an optimization problem with the constraint given by (1.2) where the interior of the feasible set is nonempty. If an algorithm can solve (1.2) effectively, then the same algorithm can be applied to *f* (*x*) + ε*e* ≼ 0 to generate an interior point of the solution set of (1.2), where ε > 0 is a sufficiently small real number and *e* is the unique element in *V* such that *x* ○ *e* = *e* ○ *x* = *x* holds for all *x* ∈ *V* (i.e., the identity of *V*). Thus, a feasible interior point of a conic optimization problem can be found in this way.

It is well known that smoothing-type algorithms are a powerful tool for solving many optimization problems. On one hand, smoothing-type algorithms have been developed to solve symmetric cone complementarity problems (see, for example, [10–14]) and symmetric cone linear programming (see, for example, [15, 16]). On the other hand, smoothing-type algorithms have also been developed to solve the system of inequalities under the order induced by ℜ^{n}_{+} (see, for example, [17–19]). In view of these recent studies, a natural question is *how to develop a smoothing-type algorithm to solve the system of inequalities under the order induced by a symmetric cone*. The objective of this article is to answer this question.

Note that

*f* (*x*) ≼ 0 ⇔ -*f* (*x*) ∈ *K* ⇔ *f* (*x*)_{-} = -*f* (*x*) ⇔ *f* (*x*)_{+} = *f* (*x*)_{-} + *f* (*x*) = 0;

that is, the system of inequalities (1.2) is equivalent to the following system of equations:

*f* (*x*)_{+} = 0.

By means of (1.4), we extend a smoothing-type algorithm to solve (1.2). By investigating the solvability of the system of Newton equations, we show that the algorithm is well defined. In particular, we show that the algorithm is globally and locally quadratically convergent under some assumptions.

The rest of this article is organized as follows. In the next section, we briefly review some basic concepts of Euclidean Jordan algebras and symmetric cones, and then present some useful results which will be used later. In Section 3, we investigate a smoothing-type algorithm for solving the system of inequalities (1.2) and show that the algorithm is well defined by proving the solvability of the system of Newton equations. In Section 4, we discuss the global and local quadratic convergence of the algorithm. Preliminary numerical results for the system of inequalities under the order induced by the second-order cone are reported in Section 5; some final remarks are provided in Section 6.

## 2 Preliminaries

### 2.1 Euclidean Jordan Algebra

In this subsection, we first recall some basic concepts and results over Euclidean Jordan algebras. For a comprehensive treatment of Jordan algebras, the reader is referred to [1] by Faraut and Korányi.

Suppose that (*V*, ○, 〈·,·〉) is a Euclidean Jordan algebra with identity *e*. An element *c* ∈ *V* is called an idempotent if *c* ○ *c* = *c*. An idempotent *c* is primitive if it is nonzero and cannot be expressed as the sum of two nonzero idempotents. For any *x* ∈ *V*, let *m*(*x*) be the minimal positive integer such that {*e*, *x*, *x*^{2},..., *x*^{m(x)}} is linearly dependent. Then the rank of *V*, denoted by Rank(*V*), is defined as max{*m*(*x*) : *x* ∈ *V*}. A set of primitive idempotents {*c*_{1}, *c*_{2},..., *c*_{k}} is called a Jordan frame if *c*_{i} ○ *c*_{j} = 0 for any *i*, *j* ∈ {1,..., *k*} with *i* ≠ *j* and *c*_{1} + *c*_{2} + ··· + *c*_{k} = *e*.

**Theorem 2.1** *(Spectral Decomposition Theorem* [1]) *Let* (*V*, ○, 〈·,·〉) *be a Euclidean Jordan algebra with Rank*(*V*) = *r. Then for any x* ∈ *V*, *there exist a Jordan frame* {*c*_{1}(*x*),..., *c*_{r}(*x*)} *and real numbers λ*_{1}(*x*),..., *λ*_{r}(*x*) *such that x* = *λ*_{1}(*x*)*c*_{1}(*x*) + ··· + *λ*_{r}(*x*)*c*_{r}(*x*). *The numbers λ*_{1}(*x*),..., *λ*_{r}(*x*) *(with their multiplicities) are uniquely determined by x*.
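In the Jordan algebra of the second-order cone (rank *r* = 2), the spectral decomposition is fully explicit: the eigenvalues are *x*_{1} ± ||*x̄*|| and the Jordan frame is ½(1, ±*x̄*/||*x̄*||). The following sketch illustrates this; the function name `soc_spectral` is our own choice:

```python
import numpy as np

def soc_spectral(x):
    """Spectral decomposition in the second-order-cone algebra (rank 2):
    x = lam1*c1 + lam2*c2 with eigenvalues x[0] +/- ||x[1:]||."""
    x0, xbar = x[0], x[1:]
    nrm = np.linalg.norm(xbar)
    # any unit vector works when xbar = 0; pick the first coordinate direction
    w = xbar / nrm if nrm > 0 else np.eye(len(xbar))[0]
    lam1, lam2 = x0 + nrm, x0 - nrm
    c1 = 0.5 * np.concatenate(([1.0],  w))
    c2 = 0.5 * np.concatenate(([1.0], -w))
    return (lam1, lam2), (c1, c2)

x = np.array([1.0, 3.0, 4.0])                  # ||x_bar|| = 5
(lam1, lam2), (c1, c2) = soc_spectral(x)
assert np.isclose(lam1, 6.0) and np.isclose(lam2, -4.0)
assert np.allclose(lam1 * c1 + lam2 * c2, x)   # reconstruction of x
print(lam1, lam2)
```

Here *c*_{1}, *c*_{2} form a Jordan frame: each is idempotent, *c*_{1} ○ *c*_{2} = 0, and *c*_{1} + *c*_{2} = *e* = (1, 0,..., 0).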

Each *λ*_{i}(*x*) (*i* ∈ {1,..., *r*}) is called an eigenvalue of *x*, and it is a continuous function of *x* (see [20]). Define Tr(*x*) := *λ*_{1}(*x*) + ··· + *λ*_{r}(*x*), where Tr(*x*) denotes the trace of *x*. For any *x* ∈ *V*, define a linear transformation ℒ_{x} by ℒ_{x}*y* = *x* ○ *y* for any *y* ∈ *V*. In particular, when *K* is the nonnegative orthant cone ℜ^{n}_{+}, for any *x* = (*x*_{1},..., *x*_{n})^{T}, *y* = (*y*_{1},..., *y*_{n})^{T} ∈ ℜ^{n}, the Jordan product is the componentwise product *x* ○ *y* = (*x*_{1}*y*_{1},..., *x*_{n}*y*_{n})^{T}.

For any *x*, *y* ∈ *V*, *x* and *y* operator commute if ℒ_{x} and ℒ_{y} commute, i.e., ℒ_{x}ℒ_{y} = ℒ_{y}ℒ_{x}. It is well known that *x* and *y* operator commute if and only if *x* and *y* have their spectral decompositions with respect to a common Jordan frame. We define the inner product 〈·,·〉 by 〈*x*, *y*〉 := Tr(*x* ○ *y*) for any *x*, *y* ∈ *V*. Thus, the norm on *V* induced by the inner product is ||*x*|| := 〈*x*, *x*〉^{1/2}.

An element *x* ∈ *V* is said to be invertible if there exists a *y* in the subalgebra generated by *x* such that *x* ○ *y* = *y* ○ *x* = *e*; such a *y* is written as *x*^{-1}. If *x*^{2} = *y* and *x* ∈ *K*, then *x* can be written as *y*^{1/2}. Given *x* ∈ *V* with *x* = *λ*_{1}(*x*)*c*_{1}(*x*) + ··· + *λ*_{r}(*x*)*c*_{r}(*x*), where {*c*_{1}(*x*),..., *c*_{r}(*x*)} is a Jordan frame and *λ*_{1}(*x*),..., *λ*_{r}(*x*) are the eigenvalues of *x*, we have *x*_{+} = Σ_{i=1}^{r} max{*λ*_{i}(*x*), 0}*c*_{i}(*x*) and *x*_{-} = Σ_{i=1}^{r} max{-*λ*_{i}(*x*), 0}*c*_{i}(*x*). Furthermore, if *λ*_{i}(*x*) ≥ 0 for all *i* ∈ {1,..., *r*}, then *x*^{1/2} = Σ_{i=1}^{r} *λ*_{i}(*x*)^{1/2}*c*_{i}(*x*); and if *λ*_{i}(*x*) > 0 for all *i* ∈ {1,..., *r*}, then *x*^{-1} = Σ_{i=1}^{r} *λ*_{i}(*x*)^{-1}*c*_{i}(*x*). More generally, we extend the definition of any real-valued analytic function *g* to elements of Euclidean Jordan algebras via their eigenvalues, i.e., *g*(*x*) := Σ_{i=1}^{r} *g*(*λ*_{i}(*x*))*c*_{i}(*x*), where *x* ∈ *V* has the spectral decomposition *x* = Σ_{i=1}^{r} *λ*_{i}(*x*)*c*_{i}(*x*).
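This eigenvalue-based extension gives, for instance, the projections *x*_{+} and *x*_{-} directly. A small sketch for the second-order cone follows (the helper name `loewner` is our own); it also verifies the Moreau decomposition *x* = *x*_{+} - *x*_{-} and the orthogonality 〈*x*_{+}, *x*_{-}〉 = 0:

```python
import numpy as np

def loewner(g, x):
    """Apply a real function g to x through its spectral decomposition in the
    second-order-cone algebra: g(x) = g(lam1)*c1 + g(lam2)*c2."""
    x0, xbar = x[0], x[1:]
    nrm = np.linalg.norm(xbar)
    w = xbar / nrm if nrm > 0 else np.eye(len(xbar))[0]
    c1 = 0.5 * np.concatenate(([1.0],  w))
    c2 = 0.5 * np.concatenate(([1.0], -w))
    return g(x0 + nrm) * c1 + g(x0 - nrm) * c2

x = np.array([1.0, 3.0, 4.0])                     # eigenvalues 6 and -4
x_plus  = loewner(lambda t: max(t, 0.0), x)       # projection of x onto K^3
x_minus = loewner(lambda t: max(-t, 0.0), x)      # projection of -x onto K^3

assert np.allclose(x_plus - x_minus, x)           # Moreau: x = x_+ - x_-
assert abs(x_plus @ x_minus) < 1e-12              # <x_+, x_-> = 0
assert x_plus[0] >= np.linalg.norm(x_plus[1:]) - 1e-12   # x_+ lies in K^3
print(x_plus)
```

The same one-line pattern yields *x*^{1/2} with `g = sqrt` (when all eigenvalues are nonnegative) and *x*^{-1} with `g = lambda t: 1/t` (when all eigenvalues are positive).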

Next, we recall the Peirce decomposition of *V*. Fix a Jordan frame {*c*_{1},..., *c*_{r}} in a Euclidean Jordan algebra *V*; for *i*, *j* ∈ {1,..., *r*}, define

*V*_{ii} := {*x* ∈ *V* : *x* ○ *c*_{i} = *x*} and *V*_{ij} := {*x* ∈ *V* : *x* ○ *c*_{i} = ½*x* = *x* ○ *c*_{j}}, *i* ≠ *j*.

**Theorem 2.2** *(Peirce Decomposition Theorem* [1]) *The space V is the orthogonal direct sum of the spaces V*_{ij} (*i* ≤ *j*). *Furthermore*, *V*_{ij} ○ *V*_{ij} ⊂ *V*_{ii} + *V*_{jj}, *V*_{ij} ○ *V*_{jk} ⊂ *V*_{ik} *if i* ≠ *k*, *and V*_{ij} ○ *V*_{kl} = {0} *if* {*i*, *j*} ∩ {*k*, *l*} = ∅.

Thus, given a Jordan frame {*c*_{1},..., *c*_{r}}, we can write any element *x* ∈ *V* as *x* = Σ_{i=1}^{r} *x*_{i}*c*_{i} + Σ_{i<j} *x*_{ij}, where *x*_{i} ∈ ℜ and *x*_{ij} ∈ *V*_{ij}.

### 2.2 Basic Results

In this subsection, we present several basic results which will be used in our later analysis.

**Proposition 2.1** *If x* ≽ 0, *y* ≽ 0, *and x - y* ≽ 0, *then*
.

**Proof**. The proof is similar to Proposition 8 in [20]; hence we omit it.

**Proposition 2.2** *For any sequence* {*a*^{k}} ⊆ *V and any given Jordan frame* {*c*_{1},..., *c*_{r}}, *suppose that, for any k*, *a*^{k} = Σ_{i=1}^{r} *a*_{i}^{k}*c*_{i} + Σ_{i<j} *a*_{ij}^{k} *is the Peirce decomposition of a*^{k} *with respect to* {*c*_{1},..., *c*_{r}}. *Then*,

*(i) if there exists an index i* ∈ {1,..., *r*} *such that a*_{i}^{k} → ∞, *then λ*_{max}(*a*^{k}) → ∞; *and*

*(ii) if there exists an index i* ∈ {1,..., *r*} *such that a*_{i}^{k} → -∞, *then λ*_{min}(*a*^{k}) → -∞,

*where λ*_{max}(*a*^{k}) *and λ*_{min}(*a*^{k}) *denote the largest and the smallest eigenvalues of a*^{k}, *respectively*.

**Proof**. For any *k*, let *a*^{k} = Σ_{i=1}^{r} *λ*_{i}(*a*^{k})*e*_{i}(*a*^{k}) be the spectral decomposition of *a*^{k}, with {*e*_{1}(*a*^{k}),..., *e*_{r}(*a*^{k})} being a Jordan frame. Then, for any *i* ∈ {1,..., *r*}, we have the estimate (2.1).

Since ℒ_{e} is positive definite by [1, Proposition III.2.2] and *c*_{i} ≠ 0, it follows that 〈*e*, *c*_{i}〉 > 0 and ||*c*_{i}|| > 0. Thus, from (2.1) we have that *λ*_{max}(*a*^{k}) → ∞ when *a*_{i}^{k} → ∞, which implies that result (*i*) holds.

Similarly, (2.1) yields the corresponding lower estimate for any *i* ∈ {1,..., *r*}, and hence *λ*_{min}(*a*^{k}) → -∞ when *a*_{i}^{k} → -∞, which implies that result (*ii*) holds.

**Proposition 2.3** *Let ϕ*(·,·) *be defined by* (1.4). *Then the following results hold:*

*(i) ϕ is continuously differentiable at any* (*μ*, *y*) ∈ ℜ_{++} × *V*, *where* ℜ_{++} := {*α* ∈ ℜ | *α* > 0}, (*h*, *v*) ∈ ℜ × *V*, *and Dϕ*(*μ*, *y*) *denotes the Fréchet derivative of the transformation ϕ at* (*μ*, *y*).

*(ii) ϕ*(0, *y*) = 2*y*_{+}, *and ϕ*(0, ·) *is strongly semismooth at any y* ∈ *V*.

*(iii) ϕ* (*μ*, *y*) = 0 *if and only if μ = 0 and y*_{+} = 0.

**Proof**. (*i*): The results can be obtained in a similar way to [11, Lemma 3.1]; hence we omit the proof.

(*ii*): The equality *ϕ*(0, *y*) = 2*y*_{+} follows directly from the definition of *ϕ*. In addition, [3, Proposition 3.3] says that *y*_{+} is strongly semismooth at any *y* ∈ *V*. Thus, *ϕ*(0, ·) is strongly semismooth at any *y* ∈ *V*.

(*iii*): The last equality implies *μ* = 0. This, together with (*ii*), yields the desired result.

## 3 A smoothing Newton algorithm

From Proposition 2.3(iii) it follows that *H*(*μ*_{*}, *x**, *y**) = 0 if and only if *μ*_{*} = 0, *y** = *f*(*x**) and *f*(*x**)_{+} = 0, i.e., *x** solves the system of inequalities (1.2).

For any *z* = (*μ*, *x*, *y*) ∈ ℜ_{++} × *V* × *V*, the transformation *H* is continuously differentiable, where *DH*(*z*) denotes the Fréchet derivative of the transformation *H* at *z* and (*h*, *u*, *v*) ∈ ℜ × *V* × *V*. Therefore, we may apply a Newton-type method to the smooth equation *H*(*z*) = 0, keeping *μ* > 0 and driving *H*(*z*) → 0 at each iteration, so that a solution of (1.2) can be found.

**Algorithm 3.1**
*(A Smoothing Newton Algorithm)*

**Step 0** *Choose δ* ∈ (0, 1). *Let γ be given in the definition of β*(·), *and let* (*x*^{0}, *y*^{0}) ∈ *V* × *V* *be an arbitrary element. Set z*^{0} = (*μ*_{0}, *x*^{0}, *y*^{0}) *and k* = 0.

**Step 1** *If* ||*H*(*z*^{k})|| = 0, *then stop*.

**Step 2** *Compute* Δ*z*^{k} = (Δ*μ*_{k}, Δ*x*^{k}, Δ*y*^{k}) *by solving the system of Newton equations* (3.4), *where DH*(*z*^{k}) *denotes the Fréchet derivative of the transformation H at z*^{k}.

**Step 3** *Let λ*_{k} *be the maximum of the values* 1, *δ*, *δ*^{2},... *satisfying the line search criterion* (3.5).

**Step 4** *Set z*^{k+1} = *z*^{k} + *λ*_{k}Δ*z*^{k} *and k* = *k* + 1. *Go to Step 1*.

In order to show that Algorithm 3.1 is well defined, we need to show that the system of Newton equations (3.4) is solvable, and the line search (3.5) will terminate finitely. The latter result can be proved in a similar way as those standard discussions in the literature. Thus, we only need to prove the former result, i.e., the solvability of the system of Newton equations.
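To make the iteration concrete, here is a highly simplified sketch of a smoothing-Newton scheme for the special case *K* = ℜ^{n}_{+}, using the CHKS-type smoothing *ϕ*(*μ*, *y*) = *y* + (*y*^{2} + 4*μ*^{2})^{1/2}, which satisfies *ϕ*(0, *y*) = 2*y*_{+} as in Proposition 2.3(ii). All names, the reduction factor 0.5 for *μ*, and the omission of the line search and the *β*(·) safeguard are our own simplifications, not the paper's Algorithm 3.1:

```python
import numpy as np

def solve_inequalities(f, Df, n, mu=1.0, tol=1e-8, max_iter=100):
    """Sketch of a smoothing Newton method for f(x) <= 0 over R^n_+.
    Solves H(z) = (mu, y - f(x) - mu*x, phi(mu,y) + mu*y) = 0 approximately,
    shrinking mu at every Newton step."""
    x = np.zeros(n)
    y = f(x)
    for _ in range(max_iter):
        root = np.sqrt(y**2 + 4*mu**2)
        phi = y + root                       # phi(mu, y), phi(0, y) = 2*y_+
        H = np.concatenate(([mu], y - f(x) - mu*x, phi + mu*y))
        if np.linalg.norm(H) < tol:
            break
        I = np.eye(n)
        J = np.zeros((2*n + 1, 2*n + 1))     # Jacobian of H in (mu, x, y)
        J[0, 0] = 1.0
        J[1:n+1, 0] = -x
        J[1:n+1, 1:n+1] = -Df(x) - mu*I
        J[1:n+1, n+1:] = I
        J[n+1:, 0] = 4*mu/root + y           # d(phi + mu*y)/d(mu)
        J[n+1:, n+1:] = np.diag(1 + y/root + mu)   # d(phi + mu*y)/d(y)
        rhs = -H
        rhs[0] = -0.5*mu                     # drive mu toward 0.5*mu
        step = np.linalg.solve(J, rhs)
        mu += step[0]
        x += step[1:n+1]
        y += step[n+1:]
    return x

# toy monotone f(x) = x - 1: the inequality f(x) <= 0 holds iff x <= 1
f  = lambda x: x - 1.0
Df = lambda x: np.eye(len(x))
x = solve_inequalities(f, Df, 3)
print(x, f(x))
```

As in Theorem 3.1, the lower-triangular block structure of the Jacobian shows why monotonicity of *f* suffices: the (*y*, *y*) block is a positive diagonal and the (*x*, *x*) block is -*Df*(*x*) - *μI*, which is nonsingular whenever *Df*(*x*) is positive semidefinite and *μ* > 0.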

**Theorem 3.1** *Suppose that f is a continuously differentiable monotone transformation. Then, the system of Newton equations* (3.4) *is solvable*.

**Proof**. For this purpose, we only need to show that *DH*(*z*) is invertible for all *z* ∈ ℜ_{++} × *V* × *V*. Suppose that *DH*(*z*)Δ*z* = 0; then by (3.4) we obtain (3.6) and (3.7).

By Proposition 2.1, we have that (1 + *μ*)*c*_{μ} ≻ |*y*| ≽ -*y*, and hence (1 + *μ*)*c*_{μ} + *y* ≻ 0. Then, by [1, Proposition III.2.2], we know that ℒ_{(1+μ)c_μ+y} is positive definite, and so Δ*y* = 0 holds from (3.7). Since *Df*(*x*) is positive semidefinite because *f* is monotone, the second system of equations in (3.6) gives Δ*x* = 0, which, together with the first system of equations in (3.6), implies that *DH*(*z*) is invertible for all *z* ∈ ℜ_{++} × *V* × *V*.

The proof is complete.

**Lemma 3.1** *Suppose that f is a continuously differentiable monotone transformation and that* {*z*^{k}} = {(*μ*_{k}, *x*^{k}, *y*^{k})} ⊆ ℜ × *V* × *V* *is a sequence generated by Algorithm 3.1. Then:*

*(i) The sequences* {Ψ(*z*^{k})}, {||*H*(*z*^{k})||}, *and* {*β*(*z*^{k})} *are monotonically decreasing*.

*(ii) Define the neighborhood set via the function β*(·) *in* (3.3) *and the constant γ given in Step 0 of Algorithm 3.1; then z*^{k} *lies in this set for all k*.

*(iii) The sequence* {*μ*_{k}} *is monotonically decreasing and μ*_{k} > 0 *for all k*.

**Proof**. (*i*): From (3.5) it is easy to see that the sequence {Ψ(*z*^{k})} is monotonically decreasing, and hence the sequences {||*H*(*z*^{k})||} and {*β*(*z*^{k})} are monotonically decreasing.

(*ii*): We prove this result by induction. First, it is evident from the choice of the starting point that the claim holds for *k* = 0. Second, assume that the claim holds for some index *m*; then a chain of (in)equalities establishes the claim for *m* + 1, where the first equality follows from the equation in (3.4) and Step 4, the first inequality from the induction hypothesis, and the last inequality from (*i*). Hence the claim holds for all *k*.

(*iii*): It follows from (3.4) that *μ*_{k+1} is determined recursively from *μ*_{k}; since *μ*_{0} > 0, we get *μ*_{k} > 0 for all *k* by recursion. In addition, by (*ii*), we obtain an estimate which implies that {*μ*_{k}} is monotonically decreasing.

The proof is complete.

## 4 Convergence of algorithm 3.1

In this section, we discuss the global and local quadratic convergence of Algorithm 3.1. We begin with the following lemma, a generalization of [21, Lemma 4.1], which will be used in our analysis of the boundedness of the iterative sequences.

**Lemma 4.1** *Let f be a continuously differentiable monotone function and* {*u*^{k}} ⊆ *V be a sequence satisfying* ||*u*^{k}|| → ∞. *Then there exist a subsequence, which we write without loss of generality as* {*u*^{k}}, *and an index i* ∈ {1,..., *r*} *such that either λ*_{i}(*u*^{k}) → ∞ *and f*_{i}(*u*^{k}) *is bounded below, or λ*_{i}(*u*^{k}) → -∞ *and f*_{i}(*u*^{k}) *is bounded above, where u*^{k} = Σ_{i=1}^{r} *λ*_{i}(*u*^{k})*e*_{i}(*u*^{k}) *is the spectral decomposition of u*^{k} *and f*(*u*^{k}) = Σ_{i=1}^{r} *f*_{i}(*u*^{k})*e*_{i}(*u*^{k}) + Σ_{i<j} *f*_{ij}(*u*^{k}) *is the Peirce decomposition of f*(*u*^{k}) *with respect to* {*e*_{1}(*u*^{k}),..., *e*_{r}(*u*^{k})}.

**Proof**. Let *v*^{k} be constructed from *u*^{k} as in the proof of [21, Lemma 4.1]. From the definition of *v*^{k} and the assumption that *f* is monotone, it follows that, for all *k*, (4.1) holds.
For any *i* ∈ *J*, we have |*λ*_{i}(*u*^{k})| → ∞, and hence either *λ*_{i}(*u*^{k}) → ∞ or *λ*_{i}(*u*^{k}) → -∞. If *λ*_{i}(*u*^{k}) → ∞, then (4.1) shows that *f*_{i}(*u*^{k}) is bounded below by inf_{k} *f*_{i}(*v*^{k}); if *λ*_{i}(*u*^{k}) → -∞, then (4.1) shows that *f*_{i}(*u*^{k}) is bounded above by sup_{k} *f*_{i}(*v*^{k}). Thus, the proof is complete.

**Theorem 4.1** *Suppose that f is a continuously differentiable monotone function. Then the sequence* {*z*^{k}} *generated by Algorithm 3.1 is bounded, and every accumulation point of* {*x*^{k}} *is a solution of the system of inequalities* (1.2).

**Proof**. By Lemma 3.1, the sequences {*μ*_{k}} and {Ψ(*z*^{k})} are nonnegative and monotonically decreasing. From (3.1) and (3.3), we have that {*y*^{k} - *f*(*x*^{k}) - *μ*_{k}*x*^{k}} and {*ϕ*(*μ*_{k}, *y*^{k}) + *μ*_{k}*y*^{k}} are bounded. Let *g*(*μ*_{k}, *x*^{k}, *y*^{k}) := *y*^{k} - *f*(*x*^{k}) - *μ*_{k}*x*^{k}; then {*g*(*μ*_{k}, *x*^{k}, *y*^{k})} is bounded and *y*^{k} = *g*(*μ*_{k}, *x*^{k}, *y*^{k}) + *f*(*x*^{k}) + *μ*_{k}*x*^{k}. Suppose that *x*^{k} has the spectral decomposition *x*^{k} = Σ_{i=1}^{r} *λ*_{i}(*x*^{k})*e*_{i}(*x*^{k}), and consider the Peirce decompositions of *f*(*x*^{k}) and *g*(*μ*_{k}, *x*^{k}, *y*^{k}) with respect to the Jordan frame {*e*_{1}(*x*^{k}),..., *e*_{r}(*x*^{k})}; then the Peirce decomposition of *y*^{k} with respect to {*e*_{1}(*x*^{k}),..., *e*_{r}(*x*^{k})} is given in (4.4).

Now suppose that {*x*^{k}} is unbounded; we derive a contradiction. Since *f* is a continuously differentiable monotone function, by Lemma 4.1 we can take a subsequence if necessary, without loss of generality still denoted by {*x*^{k}}, and an index *i*_{0} ∈ {1,..., *r*} such that either *λ*_{i_0}(*x*^{k}) → ∞ and *f*_{i_0}(*x*^{k}) is bounded below, or *λ*_{i_0}(*x*^{k}) → -∞ and *f*_{i_0}(*x*^{k}) is bounded above. Together with (4.4), it follows that the corresponding eigenvalue of *y*^{k} diverges in the same direction. By Proposition 2.2, we further obtain (4.5). Suppose that *y*^{k} has the spectral decomposition *y*^{k} = Σ_{i=1}^{r} *λ*_{i}(*y*^{k})*e*_{i}(*y*^{k}); then *ϕ*(*μ*_{k}, *y*^{k}) + *μ*_{k}*y*^{k} has the spectral decomposition (4.6).
We now consider two cases.

**Case 1**: *λ*_{i_0}(*x*^{k}) → ∞. It follows from (4.5) that *λ*_{max}(*y*^{k}) → ∞, which together with (4.6) implies that the eigenvalue of *ϕ*(*μ*_{k}, *y*^{k}) + *μ*_{k}*y*^{k} corresponding to *e*_{max}(*y*^{k}) tends to ∞, where *e*_{max}(*y*^{k}) denotes the element corresponding to *λ*_{max}(*y*^{k}) in the spectral decomposition of *y*^{k}.

**Case 2**: *λ*_{i_0}(*x*^{k}) → -∞. It follows from (4.5) that *λ*_{min}(*y*^{k}) → -∞, which together with (4.6) implies that the eigenvalue of *ϕ*(*μ*_{k}, *y*^{k}) + *μ*_{k}*y*^{k} corresponding to *e*_{min}(*y*^{k}) is unbounded, where *e*_{min}(*y*^{k}) denotes the element corresponding to *λ*_{min}(*y*^{k}) in the spectral decomposition of *y*^{k}. Since this eigenvalue diverges when *λ*_{min}(*y*^{k}) → -∞, it follows that ||*ϕ*(*μ*_{k}, *y*^{k}) + *μ*_{k}*y*^{k}||^{2} → ∞ as *k* → ∞.

In either case, we get ||*ϕ*(*μ*_{k}, *y*^{k}) + *μ*_{k}*y*^{k}|| → ∞ as *k* → ∞, which contradicts the fact that {*ϕ*(*μ*_{k}, *y*^{k}) + *μ*_{k}*y*^{k}} is bounded. Hence, {*x*^{k}} is bounded. Since the function *f* is continuous, and *y*^{k} = *g*(*μ*_{k}, *x*^{k}, *y*^{k}) + *f*(*x*^{k}) + *μ*_{k}*x*^{k} for all *k*, it follows that {*y*^{k}} is bounded. Therefore, the sequence {(*x*^{k}, *y*^{k})} is bounded.

Moreover, by Lemma 3.1, the sequences {*μ*_{k}}, {||*H*(*z*^{k})||}, and {Ψ(*z*^{k})} are nonnegative and monotonically decreasing, and hence they are convergent; denote their limits by *μ*_{*}, *H**, and Ψ*, respectively. We claim that *H** = 0. In the following, we assume *H** ≠ 0 and derive a contradiction. Under this assumption, it is easy to show that *H** > 0, *μ*_{*} > 0, and Ψ* > 0. Since *μ*_{*} > 0 and the sequence {||*H*(*z*^{k})||} is bounded, we obtain from the first part of the proof that the sequence {(*x*^{k}, *y*^{k})} is bounded. Thus, taking a subsequence if necessary, we may assume that there exists a point *z** = (*μ*_{*}, *x**, *y**) ∈ ℜ_{+} × *V* × *V* such that lim_{k→∞} *z*^{k} = *z**, and hence *H** = ||*H*(*z**)|| and Ψ* = Ψ(*z**). Since ||*H*(*z**)|| > 0, we have Ψ(*z**) > 0, and from (3.5) it follows that lim_{k→∞} *λ*_{k} = 0. Thus, for any sufficiently large *k*, the stepsize does not satisfy the line search criterion (3.5). Since *μ*_{*} > 0, it follows that Ψ(*z*) is continuously differentiable at *z**. Letting *k* → ∞ in the resulting inequality leads to a contradiction. So *H** = 0. Thus, by a simple continuity argument, we obtain that *x** is a solution of the system of inequalities (1.2). This shows that the desired result holds.

Now, we discuss the local quadratic convergence of Algorithm 3.1. For this purpose, we need the strong semismoothness of the transformation *H*, which follows from Proposition 2.3(ii). In a similar way to [17, Theorem 3.2], we can obtain the local quadratic convergence of Algorithm 3.1.

**Theorem 4.2** *Suppose that f is a continuously differentiable monotone function. Let the sequence* {*z*^{k}} *be generated by Algorithm 3.1 and z** := (*μ*_{*}, *x**, *y**) *be an accumulation point of* {*z*^{k}}. *If every W* ∈ *∂H*(*z**) *is nonsingular, then the whole sequence* {*z*^{k}} *converges to z** *quadratically, where ∂H*(*z**) *denotes Clarke's generalized Jacobian of H at z**.

## 5 Numerical experiments

Throughout our experiments, the parameters used in Algorithm 3.1 are *δ* = 0.5, *σ* = 0.0001, and *γ* = 0.20. The algorithm is terminated whenever ||*H*(*z*)|| ≤ 10^{-6}, or the step length *α* ≤ 10^{-6}, or the number of iterations exceeds 500. The starting points in the following test problems are randomly chosen from the interval [-1, 1]. In our experiments, the function *H* defined by (3.1) is replaced by a scaled variant, where *c* is a constant; this does not affect the theoretical results obtained in the previous sections. Denote

*K*^{m} := {(*x*_{1}, *x*_{2}) ∈ ℜ × ℜ^{m-1} : *x*_{1} ≥ ||*x*_{2}||};

then *K*^{m} is an *m*-dimensional second-order cone.

First, we test the following problem.

**Example 5.1** *Consider the system of inequalities* (1.2) *with f*(*x*) := *Mx* + *q and the order induced by the second-order cone K* := *K*^{n_1} × ··· × *K*^{n_m}, *where M* = *BB*^{T} *with B* ∈ ℜ^{n×n} *being a matrix every entry of which is randomly chosen from the interval* [0, 1], *and q* ∈ ℜ^{n} *being a vector every component of which is* 1.
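The data of Example 5.1 can be generated along the following lines (a sketch with a hypothetical seed and a smaller *n* than in the experiments); it also checks that *M* = *BB*^{T} is positive semidefinite, so that *f* is monotone, as required by Theorem 3.1:

```python
import numpy as np

# Sketch of the test-problem generator of Example 5.1; the seed and the size
# n = 50 are our own choices, not the paper's (which uses n = 400,...,4000).
rng = np.random.default_rng(42)
n = 50
B = rng.uniform(0.0, 1.0, size=(n, n))   # entries drawn from [0, 1]
M = B @ B.T                              # M = B B^T
q = np.ones(n)                           # all components equal to 1
f = lambda x: M @ x + q

# M is symmetric positive semidefinite, hence f(x) = Mx + q is monotone
eigs = np.linalg.eigvalsh(M)
assert np.allclose(M, M.T)
assert np.all(eigs >= -1e-8)
print("smallest eigenvalue of M:", eigs.min())
```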

We test problems with *n* = 400, 800,..., 4000 and each *n*_{i} = 10. Ten random problems of each size are generated, so we have 100 random problems in total. Table 1 shows the average number of iterations (iter), the average CPU time in seconds (cpu), and the average residual norm ||*H*(*z*)|| (res) over the 10 test problems of each size, with random initializations. Figure 1 shows the convergence behavior of one of the largest test problems, i.e., *n* = 4000.

**Table 1** Average performances of Algorithm 3.1 for ten problems

| n | iter | cpu | res |
|---|---|---|---|
| 400 | 24.500 | 1.053 | 1.552e-007 |
| 800 | 29.500 | 4.365 | 2.953e-007 |
| 1200 | 22.800 | 7.194 | 3.429e-007 |
| 1600 | 24.333 | 13.580 | 4.467e-007 |
| 2000 | 12.667 | 11.038 | 2.146e-007 |
| 2400 | 15.444 | 19.891 | 3.419e-007 |
| 2800 | 15.667 | 31.102 | 3.693e-008 |
| 3200 | 12.100 | 36.105 | 1.249e-007 |
| 3600 | 14.500 | 59.270 | 2.043e-007 |
| 4000 | 16.625 | 92.832 | 3.888e-007 |

Second, we test the following problem, which is taken from [22].

*and the order induced by the second-order cone K* := *K*^{3} × *K*^{2}.

This problem is tested 20 times for 20 random starting points. The average iteration number is 5.250, the average CPU time is 0.002, and the average residual norm ||*H*(*z*)|| is 1.197*e*-007.

From the numerical results, it is easy to see that Algorithm 3.1 is effective for the problems tested. We have also tested some other inequalities, and the performance of Algorithm 3.1 is similar.

## 6 Remarks

In this article, we proposed a smoothing-type algorithm for solving the system of inequalities under the order induced by a symmetric cone. By means of the theory of Euclidean Jordan algebras, we showed that the system of Newton equations is solvable. Furthermore, we showed that the algorithm is well defined and globally convergent under weak assumptions, and we investigated its local quadratic convergence. Moreover, the proposed algorithm places no restrictions on the starting point and solves only one system of equations at each iteration. The preliminary numerical experiments show that the algorithm is effective.

## Declarations

### Acknowledgements

This study was partially supported by the National Natural Science Foundation of China (Grant No. 10871144) and the Seed Foundation of Tianjin University (Grant No. 60302023).

## References

- Faraut J, Korányi A: *Analysis on Symmetric Cones.* Oxford Mathematical Monographs, Oxford University Press, New York; 1994.
- Moreau JJ: **Décomposition orthogonale d'un espace hilbertien selon deux cônes mutuellement polaires.** *CR Acad Sci Paris* 1962, **255:** 238–240.
- Sun D, Sun J: **Löwner's operator and spectral functions in Euclidean Jordan algebras.** *Math Oper Res* 2008, **33:** 421–445. 10.1287/moor.1070.0300
- Lu N, Huang ZH, Han J: **Properties of a class of nonlinear transformation over Euclidean Jordan algebras with applications to complementarity problems.** *Numer Funct Anal Optim* 2009, **30:** 799–821. 10.1080/01630560903123304
- Daniel JW: **Newton's method for nonlinear inequalities.** *Numer Math* 1973, **21:** 381–387. 10.1007/BF01436488
- Macconi M, Morini B, Porcelli M: **Trust-region quadratic methods for nonlinear systems of mixed equalities and inequalities.** *Appl Numer Math* 2009, **59:** 859–876. 10.1016/j.apnum.2008.03.028
- Mayne DQ, Polak E, Heunis AJ: **Solving nonlinear inequalities in a finite number of iterations.** *J Optim Theory Appl* 1981, **33:** 207–221. 10.1007/BF00935547
- Morini B, Porcelli M: **TRESNEI, a Matlab trust-region solver for systems of nonlinear equalities and inequalities.** *Comput Optim Appl*
- Sahba M: **On the solution of nonlinear inequalities in a finite number of iterations.** *Numer Math* 1985, **46:** 229–236. 10.1007/BF01390421
- Huang ZH, Hu SL, Han J: **Global convergence of a smoothing algorithm for symmetric cone complementarity problems with a nonmonotone line search.** *Sci China Ser A* 2009, **52:** 833–848.
- Huang ZH, Ni T: **Smoothing algorithms for complementarity problems over symmetric cones.** *Comput Optim Appl* 2010, **45:** 557–579. 10.1007/s10589-008-9180-y
- Kong LC, Sun J, Xiu NH: **A regularized smoothing Newton method for symmetric cone complementarity problems.** *SIAM J Optim* 2008, **19:** 1028–1047. 10.1137/060676775
- Liu XH, Gu WZ: **Smoothing Newton algorithm based on a regularized one-parametric class of smoothing functions for generalized complementarity problems over symmetric cones.** *J Ind Manag Optim* 2010, **6:** 363–380.
- Lu N, Huang ZH: **Convergence of a non-interior continuation algorithm for the monotone SCCP.** *Acta Math Appl Sin (English Series)* 2010, **26:** 543–556. 10.1007/s10255-010-0024-z
- Liu XH, Huang ZH: **A smoothing Newton algorithm based on a one-parametric class of smoothing functions for linear programming over symmetric cones.** *Math Meth Oper Res* 2009, **70:** 385–404. 10.1007/s00186-008-0274-1
- Liu YJ, Zhang LW, Wang YH: **Analysis of smoothing method for symmetric conic linear programming.** *J Appl Math Comput* 2006, **22:** 133–148.
- Huang ZH, Zhang Y, Wu W: **A smoothing-type algorithm for solving system of inequalities.** *J Comput Appl Math* 2008, **220:** 355–363. 10.1016/j.cam.2007.08.024
- Zhang Y, Huang ZH: **A nonmonotone smoothing-type algorithm for solving a system of equalities and inequalities.** *J Comput Appl Math* 2010, **233:** 2312–2321. 10.1016/j.cam.2009.10.016
- Zhu JG, Liu HW, Li XL: **A regularized smoothing-type algorithm for solving a system of inequalities with a** *P*_{0}**-function.** *J Comput Appl Math* 2010, **233:** 2611–2619. 10.1016/j.cam.2009.11.007
- Gowda MS, Sznajder R, Tao J: **Some** *P***-properties for linear transformations on Euclidean Jordan algebras.** *Linear Algebra Appl* 2004, **393:** 203–232.
- Gowda MS, Tawhid MA: **Existence and limiting behavior of trajectories associated with** *P*_{0}**-equations.** *Comput Optim Appl* 1999, **12:** 229–251.
- Hayashi S, Yamashita N, Fukushima M: **A combined smoothing and regularization method for monotone second-order cone complementarity problems.** *SIAM J Optim* 2005, **15:** 593–615. 10.1137/S1052623403421516

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.