• Research
• Open Access

# A smoothing-type algorithm for solving inequalities under the order induced by a symmetric cone

Journal of Inequalities and Applications 2011, 2011:4

https://doi.org/10.1186/1029-242X-2011-4

• Received: 3 October 2010
• Accepted: 16 June 2011

## Abstract

In this article, we consider the numerical method for solving the system of inequalities under the order induced by a symmetric cone with the function involved being monotone. Based on a perturbed smoothing function, the underlying system of inequalities is reformulated as a system of smooth equations, and a smoothing-type method is proposed to solve it iteratively so that a solution of the system of inequalities is found. By means of the theory of Euclidean Jordan algebras, the algorithm is proved to be well defined, and to be globally convergent under weak assumptions and locally quadratically convergent under suitable assumptions. Preliminary numerical results indicate that the algorithm is effective.

AMS subject classifications: 90C33, 65K10.

## Keywords

• symmetric cone
• Euclidean Jordan algebra
• smoothing-type algorithm
• global convergence
• local quadratic convergence

## 1 Introduction

Let V be a finite-dimensional vector space over ℝ with an inner product 〈·,·〉. If there exists a bilinear map from V × V to V, denoted by "∘", such that for any x, y, z ∈ V,

x ∘ y = y ∘ x,  x ∘ (x² ∘ y) = x² ∘ (x ∘ y),  〈x ∘ y, z〉 = 〈y, x ∘ z〉,

where x² := x ∘ x, then (V, ∘, 〈·,·〉) is called a Euclidean Jordan algebra. Let K := {x² : x ∈ V}; then K is a symmetric cone. Thus, K induces a partial order ⪰: for any x ∈ V, x ⪰ 0 means x ∈ K. Similarly, x ≻ 0 means x ∈ int K, where int K denotes the interior of K; and x ⪯ 0 means −x ⪰ 0.

Let Π_K(x) denote the (orthogonal) projection of x onto K. By the Moreau decomposition, we can define

x₊ := Π_K(x),  x₋ := x₊ − x,   (1.1)

so that x = x₊ − x₋ with x₊, x₋ ∈ K and 〈x₊, x₋〉 = 0. The system of inequalities under the order induced by the symmetric cone K is given by

f(x) ⪯ 0,   (1.2)

where f : V → V is a transformation (two typical examples are the Löwner operator and the relaxation transformation studied in the literature). We assume that f is continuously differentiable. Recall that a transformation f : V → V is said to be continuously differentiable if its Fréchet derivative Df(x) : V → V exists and is continuous at each x ∈ V.
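For the second-order cone, for instance, the projection Π_K and the decomposition in (1.1) are available in closed form from the eigenvalues of x. The following Python sketch (the function name is ours, not from the article) verifies the Moreau decomposition x = x₊ − x₋ with x₊, x₋ ∈ K and 〈x₊, x₋〉 = 0:

```python
import numpy as np

def soc_proj(x):
    """Project x = (x1; x2) onto the second-order cone K = {x : x1 >= ||x2||}."""
    x1, x2 = x[0], x[1:]
    n2 = np.linalg.norm(x2)
    if x1 >= n2:          # x already lies in K
        return x.copy()
    if x1 <= -n2:         # -x lies in K, so the projection is 0
        return np.zeros_like(x)
    # intermediate case: keep only the nonnegative eigenvalue (x1 + ||x2||)/... part
    lam = (x1 + n2) / 2.0
    return np.concatenate(([lam], lam * x2 / n2))

x = np.array([0.3, 1.0, -2.0])
x_plus = soc_proj(x)           # x_+ := Pi_K(x)
x_minus = x_plus - x           # x_- := x_+ - x, cf. (1.1)
# Moreau decomposition: x = x_+ - x_-, both parts in K, and <x_+, x_-> = 0
assert np.allclose(x, x_plus - x_minus)
assert abs(float(np.dot(x_plus, x_minus))) < 1e-10
assert x_plus[0] >= np.linalg.norm(x_plus[1:]) - 1e-12
assert x_minus[0] >= np.linalg.norm(x_minus[1:]) - 1e-12
```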

When K = ℝⁿ₊, (1.2) reduces to the usual system of inequalities over ℝⁿ. In this case, the system of inequalities has been studied extensively because of its various applications in data analysis, set separation problems, computer-aided design, image reconstruction, and detecting the feasibility of nonlinear programming, and many iterative methods already exist for solving such inequalities. It is well known that the positive semidefinite matrix cone, the second-order cone, and the nonnegative orthant are the most common and most widely studied symmetric cones, with many practical applications. Thus, investigating (1.2) provides a unified theoretical framework for studying the corresponding systems of inequalities under the orders induced by the nonnegative orthant, the second-order cone, and the positive semidefinite matrix cone. This is one of the factors that motivated us to investigate (1.2).

Another motivating factor comes from detecting the feasibility of optimization problems. A main method for solving symmetric cone programming problems is the interior-point method (IPM, for short). A usual requirement of an IPM is that a feasible interior point of the problem be known in advance. In general, however, finding a feasible interior point is as difficult as solving the optimization problem itself. Consider an optimization problem whose constraint is given by (1.2) and whose feasible set has nonempty interior. If an algorithm can solve (1.2) effectively, then the same algorithm can be applied to f(x) + εe ⪯ 0 to generate an interior point of the solution set of (1.2), where ε > 0 is a sufficiently small real number and e is the unique element of V such that x ∘ e = e ∘ x = x for all x ∈ V (i.e., the identity of V). Thus, a feasible interior point of a conic optimization problem can be found in this way.

It is well known that smoothing-type algorithms are a powerful tool for solving many optimization problems. On the one hand, smoothing-type algorithms have been developed to solve symmetric cone complementarity problems (see, for example, ) and symmetric cone linear programming (see, for example, [15, 16]). On the other hand, smoothing-type algorithms have also been developed to solve the system of inequalities under the order induced by the nonnegative orthant (see, for example, ). In light of these recent studies, a natural question is how to develop a smoothing-type algorithm for the system of inequalities under the order induced by a general symmetric cone. The objective of this article is to answer this question.

By the definition of "⪯" and the second equality in (1.1), we have

f(x) ⪯ 0 ⟺ −f(x) ∈ K ⟺ f(x)₋ = −f(x) ⟺ f(x)₊ = f(x)₋ + f(x) = 0;

that is, the system of inequalities (1.2) is equivalent to the following system of equations:

f(x)₊ = 0.   (1.3)

Since the transformation involved in (1.3) is nonsmooth, the classical Newton method cannot be applied directly to solve (1.3). In this article, we introduce the smoothing function

ϕ(μ, y) := y + (y² + 4μ²e)^{1/2}.   (1.4)

By means of (1.4), we extend a smoothing-type algorithm to solve (1.2). By investigating the solvability of the system of Newton equations, we show that the algorithm is well defined. In particular, we show that the algorithm is globally and locally quadratically convergent under some assumptions.
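Assuming ϕ has the form ϕ(μ, y) = y + (y² + 4μ²e)^{1/2} given in (1.4), in the nonnegative-orthant algebra (where the Jordan product, and hence the square root, act componentwise) the smoothing property ϕ(0, y) = 2y₊ and the approximation error for μ > 0 can be checked numerically; this sketch is ours, under that assumption:

```python
import numpy as np

def phi(mu, y):
    """Smoothing function phi(mu, y) = y + sqrt(y^2 + 4 mu^2 e), componentwise
    (the Euclidean Jordan algebra of the nonnegative orthant)."""
    return y + np.sqrt(y * y + 4.0 * mu * mu)

y = np.array([-1.5, 0.0, 2.0])
# phi(0, y) = 2 y_+ = 2 * max(y, 0), the nonsmooth map of (1.3)
assert np.allclose(phi(0.0, y), 2.0 * np.maximum(y, 0.0))
# phi is smooth for mu > 0 and converges to 2 y_+ as mu -> 0:
# componentwise, |sqrt(t^2 + 4 mu^2) - |t|| <= 2 mu
for mu in (1.0, 0.1, 0.001):
    gap = np.linalg.norm(phi(mu, y) - 2.0 * np.maximum(y, 0.0))
    assert gap <= 2.0 * mu * np.sqrt(len(y))
```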

The rest of this article is organized as follows. In the next section, we first briefly review some basic concepts on Euclidean Jordan algebras and symmetric cones, and then present some useful results which will be used later. In Section 3, we investigate a smoothing-type algorithm for solving the system of inequalities (1.2) and show that the algorithm is well defined by proving the solvability of the system of Newton equations. In Section 4, we discuss the global and local quadratic convergence of the algorithm. Preliminary numerical results for the system of inequalities under the order induced by the second-order cone are reported in Section 5; some final remarks are provided in Section 6.

## 2 Preliminaries

### 2.1 Euclidean Jordan Algebra

In this subsection, we first recall some basic concepts and results on Euclidean Jordan algebras. For a comprehensive treatment of Jordan algebras, the reader is referred to the monograph by Faraut and Korányi.

Suppose that (V, ∘, 〈·,·〉) is a Euclidean Jordan algebra with identity e. An element c ∈ V is called an idempotent if c ∘ c = c. An idempotent c is primitive if it is nonzero and cannot be expressed as the sum of two other nonzero idempotents. For any x ∈ V, let m(x) be the minimal positive integer such that {e, x, x², ..., x^{m(x)}} is linearly dependent. Then the rank of V, denoted by Rank(V), is defined as max{m(x) : x ∈ V}. A set of primitive idempotents {c₁, c₂, ..., c_k} is called a Jordan frame if c_i ∘ c_j = 0 for any i, j ∈ {1, ..., k} with i ≠ j and Σ_{i=1}^{k} c_i = e.

Theorem 2.1 (Spectral Decomposition Theorem) Let (V, ∘, 〈·,·〉) be a Euclidean Jordan algebra with Rank(V) = r. Then for any x ∈ V, there exist a Jordan frame {c₁(x), ..., c_r(x)} and real numbers λ₁(x), ..., λ_r(x) such that x = Σ_{i=1}^{r} λ_i(x) c_i(x). The numbers λ₁(x), ..., λ_r(x) (with their multiplicities) are uniquely determined by x.

Every λ_i(x) (i ∈ {1, ..., r}) is called an eigenvalue of x, and each eigenvalue is a continuous function of x (see ). Define Tr(x) := Σ_{i=1}^{r} λ_i(x), where Tr(x) denotes the trace of x. For any x ∈ V, define the linear transformation ℒ_x by ℒ_x y := x ∘ y for any y ∈ V. In particular, when K is the nonnegative orthant cone ℝⁿ₊, for any x = (x₁, ..., xₙ)ᵀ, y = (y₁, ..., yₙ)ᵀ ∈ ℝⁿ,

x ∘ y = (x₁y₁, ..., xₙyₙ)ᵀ;

when K is the second-order cone, for any x = (x₁; x₂), y = (y₁; y₂) ∈ ℝ × ℝ^{n−1},

x ∘ y = (〈x, y〉; x₁y₂ + y₁x₂);

when K is the positive semidefinite cone, for any n × n real symmetric matrices X, Y,

X ∘ Y = (XY + YX)/2.

For any x, y ∈ V, x and y operator commute if ℒ_x and ℒ_y commute, i.e., ℒ_x ℒ_y = ℒ_y ℒ_x. It is well known that x and y operator commute if and only if x and y have spectral decompositions with respect to a common Jordan frame. We define the inner product 〈·,·〉 by 〈x, y〉 := Tr(x ∘ y) for any x, y ∈ V. Thus, the norm on V induced by this inner product is ||x|| := 〈x, x〉^{1/2} = (Σ_{i=1}^{r} λ_i(x)²)^{1/2}.
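The three Jordan products above are easy to exercise numerically. A small sketch (function names are ours) checks commutativity and the respective identity elements; note that none of these products is associative in general:

```python
import numpy as np

# Jordan products for the three classical cones.

def jp_orthant(x, y):
    """Nonnegative orthant: componentwise product; identity e = (1, ..., 1)."""
    return x * y

def jp_soc(x, y):
    """Second-order cone: x o y = (<x,y>; x1*y2 + y1*x2); identity e = (1; 0)."""
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def jp_psd(X, Y):
    """Positive semidefinite cone: X o Y = (XY + YX)/2; identity e = I."""
    return (X @ Y + Y @ X) / 2.0

rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), rng.standard_normal(4)
e = np.zeros(4); e[0] = 1.0
assert np.allclose(jp_soc(x, y), jp_soc(y, x))   # commutativity
assert np.allclose(jp_soc(e, x), x)              # e is the identity

A = rng.standard_normal((3, 3)); S = A + A.T
B = rng.standard_normal((3, 3)); T = B + B.T
assert np.allclose(jp_psd(S, T), jp_psd(T, S))
assert np.allclose(jp_psd(np.eye(3), S), S)
assert np.allclose(jp_orthant(np.ones(4), x), x)
```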

An element x ∈ V is said to be invertible if there exists a y in the subalgebra generated by x such that x ∘ y = y ∘ x = e; such y is unique and is written as x⁻¹. If x² = y and x ∈ K, then x is written as y^{1/2}. Given x ∈ V with x = Σ_{i=1}^{r} λ_i(x) c_i(x), where {c₁(x), ..., c_r(x)} is a Jordan frame and λ₁(x), ..., λ_r(x) are the eigenvalues of x, we have x² = Σ_{i=1}^{r} λ_i(x)² c_i(x) and x₊ = Σ_{i=1}^{r} max{λ_i(x), 0} c_i(x). Furthermore, if λ_i(x) ≥ 0 for all i ∈ {1, ..., r}, then x^{1/2} = Σ_{i=1}^{r} λ_i(x)^{1/2} c_i(x); and if λ_i(x) > 0 for all i ∈ {1, ..., r}, then x⁻¹ = Σ_{i=1}^{r} λ_i(x)⁻¹ c_i(x). More generally, we extend the definition of any real-valued analytic function g to elements of Euclidean Jordan algebras via their eigenvalues, i.e., g(x) := Σ_{i=1}^{r} g(λ_i(x)) c_i(x), where x ∈ V has the spectral decomposition x = Σ_{i=1}^{r} λ_i(x) c_i(x).
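In the second-order-cone algebra, the spectral decomposition and the induced square root take a simple closed form: λ_{1,2}(x) = x₁ ∓ ||x₂|| with c_{1,2}(x) = ½(1; ∓x₂/||x₂||). The following sketch (ours, under these standard formulas) verifies that the eigenvalue-wise square root is indeed a Jordan-algebraic square root:

```python
import numpy as np

def jp_soc(x, y):
    """Jordan product of the second-order cone: x o y = (<x,y>; x1*y2 + y1*x2)."""
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def soc_spectral(x):
    """Spectral decomposition x = lam1*c1 + lam2*c2 with
    lam_{1,2} = x1 -+ ||x2||, c_{1,2} = (1/2)(1; -+ x2/||x2||)."""
    x1, x2 = x[0], x[1:]
    n2 = np.linalg.norm(x2)
    # if x2 = 0 the two eigenvalues coincide and any unit direction works
    w = x2 / n2 if n2 > 0 else np.zeros_like(x2)
    c1 = 0.5 * np.concatenate(([1.0], -w))
    c2 = 0.5 * np.concatenate(([1.0], w))
    return (x1 - n2, c1), (x1 + n2, c2)

def soc_sqrt(x):
    """x^{1/2} = sqrt(lam1)*c1 + sqrt(lam2)*c2, defined for x in K."""
    (l1, c1), (l2, c2) = soc_spectral(x)
    return np.sqrt(l1) * c1 + np.sqrt(l2) * c2

x = np.array([3.0, 1.0, 2.0])          # in K, since 3 >= ||(1, 2)|| = sqrt(5)
(l1, c1), (l2, c2) = soc_spectral(x)
assert np.allclose(l1 * c1 + l2 * c2, x)          # x = lam1*c1 + lam2*c2
assert np.allclose(jp_soc(c1, c2), np.zeros(3))   # Jordan frame: c1 o c2 = 0
r = soc_sqrt(x)
assert np.allclose(jp_soc(r, r), x)               # r o r = x, i.e. r = x^{1/2}
```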

We recall the Peirce decomposition theorem on the space V. Fix a Jordan frame {c₁, ..., c_r} in a Euclidean Jordan algebra V; for i, j ∈ {1, ..., r}, define

V_ii := {x ∈ V : x ∘ c_i = x} = ℝc_i,  V_ij := {x ∈ V : x ∘ c_i = x/2 = x ∘ c_j}, i ≠ j.

Theorem 2.2 (Peirce Decomposition Theorem) The space V is the orthogonal direct sum of the spaces V_ij (i ≤ j). Furthermore,

V_ij ∘ V_ij ⊂ V_ii + V_jj;  V_ij ∘ V_jk ⊂ V_ik, if i ≠ k;  V_ij ∘ V_kl = {0}, if {i, j} ∩ {k, l} = ∅.

Thus, given a Jordan frame {c₁, ..., c_r}, we can write any element x ∈ V as x = Σ_{i=1}^{r} x_i c_i + Σ_{i<j} x_ij, where x_i ∈ ℝ and x_ij ∈ V_ij.

### 2.2 Basic Results

In this subsection, we present several basic results which will be used in our later analysis.

Proposition 2.1 If x ⪰ 0, y ⪰ 0, and x − y ⪰ 0, then x^{1/2} − y^{1/2} ⪰ 0.

Proof. The proof is similar to Proposition 8 in ; hence we omit it.

Proposition 2.2 For any sequence {a_k} ⊂ V and any given Jordan frame {c₁, ..., c_r}, suppose that, for any k,

a_k = Σ_{i=1}^{r} a_i^k c_i + Σ_{i<j} a_ij^k

is the Peirce decomposition of a_k with respect to {c₁, ..., c_r}. Then,

(i) if there exists an index i ∈ {1, ..., r} such that a_i^k → +∞, then λ_max(a_k) → +∞; and

(ii) if there exists an index i ∈ {1, ..., r} such that a_i^k → −∞, then λ_min(a_k) → −∞,

where λmax (a k ) and λmin (a k ) denote the largest and the smallest eigenvalues of a k , respectively.

Proof. For any k, let a_k = Σ_{j=1}^{r} λ_j(a_k) e_j(a_k) be the spectral decomposition of a_k with {e₁(a_k), ..., e_r(a_k)} being a Jordan frame. Then, for any i ∈ {1, ..., r}, we have

a_i^k = 〈a_k, c_i〉/||c_i||² = Σ_{j=1}^{r} λ_j(a_k)〈e_j(a_k), c_i〉/||c_i||² ≤ λ_max(a_k)〈e, c_i〉/||c_i||²,   (2.1)

where the inequality uses 〈e_j(a_k), c_i〉 ≥ 0 (by the self-duality of K) and Σ_{j=1}^{r} e_j(a_k) = e. Since e is positive definite by [1, Proposition III.2.2] and c_i ≠ 0, it follows that 〈e, c_i〉 > 0 and ||c_i|| > 0. Thus, from (2.1) we have that λ_max(a_k) → +∞ when a_i^k → +∞, which implies that result (i) holds.

Similarly, for any i ∈ {1, ..., r}, a_i^k ≥ λ_min(a_k)〈e, c_i〉/||c_i||², and hence λ_min(a_k) → −∞ when a_i^k → −∞, which implies that result (ii) holds.

Proposition 2.3 Let ϕ(·,·) be defined by (1.4). Then, the following results hold:

(i) ϕ is continuously differentiable at any (μ, y) ∈ ℝ₊₊ × V with

Dϕ(μ, y)(h, v) = v + ℒ_{c_μ}⁻¹(y ∘ v + 4μh e),  c_μ := (y² + 4μ²e)^{1/2},

where ℝ₊₊ := {α ∈ ℝ : α > 0}, (h, v) ∈ ℝ × V, and Dϕ(μ, y) denotes the Fréchet derivative of the transformation ϕ at (μ, y).

(ii) ϕ(0, y) = 2y₊, and ϕ(0, ·) is strongly semismooth at any y ∈ V.

(iii) ϕ (μ, y) = 0 if and only if μ = 0 and y+ = 0.

Proof. (i): The results can be obtained in a way similar to [11, Lemma 3.1]; hence we omit the proof.

(ii) ϕ(0, y) = y + (y²)^{1/2} = y + |y| = 2y₊. In addition, [3, Proposition 3.3] shows that y ↦ y₊ is strongly semismooth at any y ∈ V. Thus, ϕ(0, ·) is strongly semismooth at any y ∈ V.

(iii) If ϕ(μ, y) = 0, then (y² + 4μ²e)^{1/2} = −y, and hence y² + 4μ²e = y². The last equality implies μ = 0. This, together with (ii), yields the desired result.

## 3 A smoothing Newton algorithm

Define the transformation H : ℝ × V × V → ℝ × V × V by

H(z) = H(μ, x, y) := (μ, y − f(x) − μx, ϕ(μ, y) + μy).   (3.1)

From Proposition 2.3(iii) it follows that H(μ*, x*, y*) = 0 if and only if μ* = 0, y* = f(x*), and f(x*)₊ = 0, i.e., x* solves the system of inequalities (1.2).

By Proposition 2.3(i), for any z = (μ, x, y) ∈ ℝ₊₊ × V × V, the transformation H is continuously differentiable with

DH(z)(h, u, v) = (h, v − Df(x)u − μu − hx, Dϕ(μ, y)(h, v) + hy + μv),   (3.2)

where DH(z) denotes the Fréchet derivative of the transformation H at z and (h, u, v) ∈ ℝ × V × V. Therefore, we may apply a Newton-type method to the smooth equation H(z) = 0 at each iteration while keeping μ > 0 and driving H(z) → 0, so that a solution of (1.2) can be found.

Given μ̄ ∈ ℝ₊₊, choose γ ∈ (0, 1) such that γμ̄ < 1. Define transformations Ψ and β as

Ψ(z) := ||H(z)||²,  β(z) := γ min{1, Ψ(z)}.   (3.3)

Algorithm 3.1 (A Smoothing Newton Algorithm)

Step 0 Choose δ ∈ (0, 1) and σ ∈ (0, 1/2). Let μ₀ := μ̄ and γ be given in the definition of β(·), and let (x₀, y₀) ∈ V × V be an arbitrary element. Set z₀ := (μ₀, x₀, y₀) and z̄ := (1, 0, 0) ∈ ℝ × V × V. Set k := 0.

Step 1 If ||H(z_k)|| = 0, then stop.

Step 2 Compute Δz_k := (Δμ_k, Δx_k, Δy_k) ∈ ℝ × V × V by solving the system of Newton equations

DH(z_k)Δz_k = −H(z_k) + β(z_k)μ̄ z̄,   (3.4)

where DH(z_k) denotes the Fréchet derivative of the transformation H at z_k.

Step 3 Let λ_k := δ^{m_k}, where m_k is the smallest nonnegative integer m satisfying the line search criterion

Ψ(z_k + δ^m Δz_k) ≤ [1 − σ(1 − γμ̄)δ^m] Ψ(z_k).   (3.5)

Step 4 Set z_{k+1} := z_k + λ_k Δz_k and k := k + 1. Go to Step 1.
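To make the mechanics concrete, the following Python sketch runs a smoothing Newton iteration of this type in the nonnegative-orthant case, where all Jordan-algebraic operations are componentwise. It uses H(z) = (μ; y − f(x) − μx; ϕ(μ, y) + μy) with ϕ(μ, y) = y + (y² + 4μ²)^{1/2}, but simplifies the exact parameter choices and line search of Algorithm 3.1 (a plain backtracking on ||H|| is used), so it is an illustration rather than a faithful transcription:

```python
import numpy as np

def smoothing_newton(f, Df, n, tol=1e-9, max_iter=300, seed=0):
    """Smoothing Newton sketch for f(x) <= 0 componentwise (orthant case):
    drive H(mu, x, y) = (mu; y - f(x) - mu*x; phi(mu, y) + mu*y) to zero,
    with phi(mu, y) = y + sqrt(y^2 + 4 mu^2) applied componentwise."""
    rng = np.random.default_rng(seed)
    mu, x, y = 1.0, rng.standard_normal(n), rng.standard_normal(n)

    def H(mu, x, y):
        return np.concatenate(
            ([mu], y - f(x) - mu * x, y + np.sqrt(y * y + 4 * mu * mu) + mu * y))

    for _ in range(max_iter):
        Hz = H(mu, x, y)
        if np.linalg.norm(Hz) <= tol:
            break
        w = np.sqrt(y * y + 4 * mu * mu)           # w > 0 whenever mu > 0
        J = np.zeros((2 * n + 1, 2 * n + 1))       # Jacobian of H at (mu, x, y)
        J[0, 0] = 1.0                              # d mu / d mu
        J[1:n + 1, 0] = -x                         # second block w.r.t. mu
        J[1:n + 1, 1:n + 1] = -Df(x) - mu * np.eye(n)
        J[1:n + 1, n + 1:] = np.eye(n)
        J[n + 1:, 0] = 4 * mu / w + y              # third block w.r.t. mu
        J[n + 1:, n + 1:] = np.diag(1 + y / w + mu)
        beta = 0.2 * min(1.0, Hz @ Hz)             # perturbation, keeps mu > 0
        rhs = -Hz
        rhs[0] += beta
        d = np.linalg.solve(J, rhs)
        lam = 1.0                                  # simple backtracking on ||H||
        while lam > 1e-12:
            trial = H(mu + lam * d[0], x + lam * d[1:n + 1], y + lam * d[n + 1:])
            if np.linalg.norm(trial) <= (1 - 1e-4 * lam) * np.linalg.norm(Hz):
                break
            lam *= 0.5
        mu = mu + lam * d[0]
        x = x + lam * d[1:n + 1]
        y = y + lam * d[n + 1:]
    return x

# A monotone affine example: f(x) = M x + q with M symmetric positive definite.
n = 4
M = np.eye(n) + 0.1 * np.ones((n, n))
q = np.ones(n)
x_sol = smoothing_newton(lambda v: M @ v + q, lambda v: M, n)
assert np.max(M @ x_sol + q) <= 1e-5   # f(x_sol) <= 0 holds componentwise
```

The arbitrary random starting point mirrors the fact that the algorithm imposes no restriction on initialization.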

In order to show that Algorithm 3.1 is well defined, we need to show that the system of Newton equations (3.4) is solvable and that the line search (3.5) terminates finitely. The latter can be proved in the same way as the standard arguments in the literature. Thus, we only need to prove the former, i.e., the solvability of the system of Newton equations.

Theorem 3.1 Suppose that f is a continuously differentiable monotone transformation. Then, the system of Newton equations (3.4) is solvable.

Proof. For this purpose, we only need to show that DH(z) is invertible for all z ∈ ℝ₊₊ × V × V. Suppose that DH(z)Δz = 0; by (3.2) we have

Δμ = 0,  Δy − Df(x)Δx − μΔx − Δμ x = 0,  Dϕ(μ, y)(Δμ, Δy) + Δμ y + μΔy = 0,   (3.6)

where Δz := (Δμ, Δx, Δy) ∈ ℝ × V × V. Then, from the first and third systems of equations in (3.6), it follows that

ℒ_{(1+μ)c_μ+y} Δy = 0,   (3.7)

where c_μ := (y² + 4μ²e)^{1/2}. By Proposition 2.1, c_μ ⪰ (y²)^{1/2} = |y| ⪰ −y and c_μ ⪰ (4μ²e)^{1/2} = 2μe, and hence (1 + μ)c_μ + y ⪰ μc_μ ⪰ 2μ²e ≻ 0. Then, by [1, Proposition III.2.2], we know that ℒ_{(1+μ)c_μ+y} is positive definite, and so Δy = 0 holds from (3.7). Since Df(x) is positive semidefinite by the fact that f is monotone, the second system of equations in (3.6) gives (Df(x) + μI)Δx = 0, and hence Δx = 0. This, together with the first system of equations in (3.6), implies that Δz = 0; therefore DH(z) is invertible for all z ∈ ℝ₊₊ × V × V.

The proof is complete.

Lemma 3.1 Suppose that f is a continuously differentiable monotone transformation and {z_k} = {(μ_k, x_k, y_k)} ⊂ ℝ₊₊ × V × V is the sequence generated by Algorithm 3.1. Then:

(i) The sequences {Ψ(z k )}, {||H (z k )||}, and {β (z k )} are monotonically decreasing.

(ii) Define Ω := {z = (μ, x, y) ∈ ℝ₊₊ × V × V : μ ≥ β(z)μ̄}, where the constant γ is given in Step 0 of Algorithm 3.1 and the function β(·) is defined by (3.3); then z_k ∈ Ω for all k.

(iii) The sequence {μ k } is monotonically decreasing and μ k > 0 for all k.

Proof. (i) From (3.5) it is easy to see that the sequence {Ψ(z k )} is monotonically decreasing, and hence, sequences {||H(z k )||} and {β(z k )} are monotonically decreasing.

(ii) We prove this result by induction. First, it is evident from the choice of the starting point that z₀ ∈ Ω, since μ₀ = μ̄ and β(z₀) ≤ γ < 1. Second, if we assume that z_m ∈ Ω for some index m, then

μ_{m+1} = (1 − λ_m)μ_m + λ_m β(z_m)μ̄ ≥ (1 − λ_m)β(z_m)μ̄ + λ_m β(z_m)μ̄ = β(z_m)μ̄ ≥ β(z_{m+1})μ̄,

where the first equality follows from the first equation in (3.4) and Step 4, the first inequality from the assumption μ_m ≥ β(z_m)μ̄, and the last inequality from (i). This shows that z_{m+1} ∈ Ω, and hence z_k ∈ Ω for all k.

(iii) It follows from (3.4) and Step 4 that μ_{k+1} = (1 − λ_k)μ_k + λ_k β(z_k)μ̄. Since μ₀ > 0 and β(z_k) ≥ 0, we get μ_k > 0 for all k by induction. In addition, by (ii), we have

μ_{k+1} = (1 − λ_k)μ_k + λ_k β(z_k)μ̄ ≤ (1 − λ_k)μ_k + λ_k μ_k = μ_k,

which implies that {μ_k} is monotonically decreasing.

The proof is complete.

## 4 Convergence of algorithm 3.1

In this section, we discuss the global and local quadratic convergence of Algorithm 3.1. We begin with the following lemma, a generalization of [21, Lemma 4.1], which will be used in our analysis of the boundedness of the iteration sequence.

Lemma 4.1 Let f be a continuously differentiable monotone transformation and {u_k} ⊂ V be a sequence satisfying ||u_k|| → ∞. Then there exist a subsequence, which we write without loss of generality as {u_k}, and an index i ∈ {1, ..., r} such that either λ_i(u_k) → ∞ and f_i(u_k) is bounded below, or λ_i(u_k) → −∞ and f_i(u_k) is bounded above, where u_k = Σ_{i=1}^{r} λ_i(u_k) e_i(u_k) is the spectral decomposition of u_k, and f(u_k) = Σ_{i=1}^{r} f_i(u_k) e_i(u_k) + Σ_{i<j} f_ij(u_k) is the Peirce decomposition of f(u_k) with respect to {e₁(u_k), ..., e_r(u_k)}.

Proof. Since ||u_k|| → ∞, by passing to a subsequence if necessary, we may assume that there is a nonempty index set J ⊆ {1, ..., r} such that |λ_i(u_k)| → ∞ for every i ∈ J and {λ_i(u_k)} is bounded for every i ∉ J. Define a bounded sequence {v_k} with v_k := Σ_{i=1}^{r} v_i^k e_i(u_k), where

v_i^k := 0 if i ∈ J and v_i^k := λ_i(u_k) if i ∉ J,

and u_k = Σ_{i=1}^{r} λ_i(u_k) e_i(u_k) is the spectral decomposition of u_k. From the definition of v_k and the assumption of f being monotone, it follows that, for all k,

0 ≤ 〈u_k − v_k, f(u_k) − f(v_k)〉 = Σ_{i∈J} λ_i(u_k) [f_i(u_k) − f_i(v_k)] ||e_i(u_k)||².   (4.1)

For any i ∈ J, we have |λ_i(u_k)| → ∞, and hence either λ_i(u_k) → ∞ or λ_i(u_k) → −∞. If λ_i(u_k) → ∞, then (4.1) shows that f_i(u_k) is bounded below by inf_k f_i(v_k); if λ_i(u_k) → −∞, then (4.1) shows that f_i(u_k) is bounded above by sup_k f_i(v_k). Thus, the proof is complete.

Theorem 4.1 Suppose that f is a continuously differentiable monotone transformation. Then the sequence {z_k} generated by Algorithm 3.1 is bounded, and every accumulation point of {x_k} is a solution of the system of inequalities (1.2).

Proof. By Lemma 3.1, the sequences {μ_k} and {Ψ(z_k)} are nonnegative and monotonically decreasing. From (3.1) and (3.3), we have

||y_k − f(x_k) − μ_k x_k||² + ||ϕ(μ_k, y_k) + μ_k y_k||² ≤ Ψ(z_k) ≤ Ψ(z₀).

Thus, {y_k − f(x_k) − μ_k x_k} and {ϕ(μ_k, y_k) + μ_k y_k} are bounded. Let g(μ_k, x_k, y_k) := y_k − f(x_k) − μ_k x_k; then {g(μ_k, x_k, y_k)} is bounded and y_k = g(μ_k, x_k, y_k) + f(x_k) + μ_k x_k. Suppose that x_k has the spectral decomposition x_k = Σ_{i=1}^{r} λ_i(x_k) e_i(x_k); then the Peirce decompositions of f(x_k) and g(μ_k, x_k, y_k) with respect to the Jordan frame {e₁(x_k), ..., e_r(x_k)} are

f(x_k) = Σ_{i=1}^{r} f_i(x_k) e_i(x_k) + Σ_{i<j} f_ij(x_k),   (4.2)

g(μ_k, x_k, y_k) = Σ_{i=1}^{r} g_i(μ_k, x_k, y_k) e_i(x_k) + Σ_{i<j} g_ij(μ_k, x_k, y_k),   (4.3)

respectively. By (4.2) and (4.3), we have that the Peirce decomposition of y_k with respect to {e₁(x_k), ..., e_r(x_k)} is

y_k = Σ_{i=1}^{r} [g_i(μ_k, x_k, y_k) + f_i(x_k) + μ_k λ_i(x_k)] e_i(x_k) + Σ_{i<j} [g_ij(μ_k, x_k, y_k) + f_ij(x_k)].   (4.4)
In the following, we assume that {x_k} is unbounded and derive a contradiction. Since f is a continuously differentiable monotone transformation, by Lemma 4.1 we can take a subsequence if necessary, without loss of generality denoted by {x_k}, and an index i₀ ∈ {1, ..., r} such that either λ_{i₀}(x_k) → ∞ and f_{i₀}(x_k) is bounded below, or λ_{i₀}(x_k) → −∞ and f_{i₀}(x_k) is bounded above. Together with (4.4), it follows that either λ_{i₀}(x_k) → ∞ and y_{i₀}^k → ∞, or λ_{i₀}(x_k) → −∞ and y_{i₀}^k → −∞, where y_{i₀}^k denotes the i₀-th Peirce coefficient of y_k in (4.4). By Proposition 2.2, we further obtain that

λ_max(y_k) → ∞ if λ_{i₀}(x_k) → ∞;  λ_min(y_k) → −∞ if λ_{i₀}(x_k) → −∞.   (4.5)
Suppose that y_k has the spectral decomposition y_k = Σ_{i=1}^{r} λ_i(y_k) e_i(y_k); then ϕ(μ_k, y_k) + μ_k y_k has the spectral decomposition

ϕ(μ_k, y_k) + μ_k y_k = Σ_{i=1}^{r} [λ_i(y_k) + (λ_i(y_k)² + 4μ_k²)^{1/2} + μ_k λ_i(y_k)] e_i(y_k).   (4.6)

We now consider two cases.

Case 1. λ_{i₀}(x_k) → ∞. It follows from (4.5) that λ_max(y_k) → ∞, which together with (4.6) implies that the coefficient of e_max(y_k) in (4.6), namely

λ_max(y_k) + (λ_max(y_k)² + 4μ_k²)^{1/2} + μ_k λ_max(y_k),

tends to ∞, where e_max(y_k) denotes the element corresponding to λ_max(y_k) in the spectral decomposition of y_k.

Case 2. λ_{i₀}(x_k) → −∞. It follows from (4.5) that λ_min(y_k) → −∞, which together with (4.6) implies that the coefficient of e_min(y_k) in (4.6) is

λ_min(y_k) + (λ_min(y_k)² + 4μ_k²)^{1/2} + μ_k λ_min(y_k),

where e_min(y_k) denotes the element corresponding to λ_min(y_k) in the spectral decomposition of y_k. Since this coefficient tends to −∞ when λ_min(y_k) → −∞, so

|λ_min(y_k) + (λ_min(y_k)² + 4μ_k²)^{1/2} + μ_k λ_min(y_k)| → ∞,

and hence ||ϕ(μ_k, y_k) + μ_k y_k||² → ∞ as k → ∞.

In either case, we get ||ϕ(μ_k, y_k) + μ_k y_k|| → ∞ as k → ∞, which contradicts the fact that {ϕ(μ_k, y_k) + μ_k y_k} is bounded. Hence, {x_k} is bounded. Since the function f is continuous, by noticing that y_k = g(μ_k, x_k, y_k) + f(x_k) + μ_k x_k for all k, it follows that {y_k} is bounded. Therefore, the sequence {(x_k, y_k)} is bounded.

By Lemma 3.1, we have that the sequences {μ_k}, {||H(z_k)||}, and {Ψ(z_k)} are nonnegative and monotonically decreasing, and hence they are convergent. Denote

μ* := lim_{k→∞} μ_k,  H* := lim_{k→∞} ||H(z_k)||,  Ψ* := lim_{k→∞} Ψ(z_k).
We show that H* = 0. In the following, we assume H* ≠ 0 and derive a contradiction. Under this assumption, it is easy to show that H* > 0, μ* > 0, and Ψ* > 0. Since μ* > 0 and the sequence {||H(z_k)||} is bounded, we obtain from the first result that the sequence {(x_k, y_k)} is bounded. Thus, subsequencing if necessary, we may assume that there exists a point z* = (μ*, x*, y*) ∈ ℝ₊₊ × V × V such that lim_{k→∞} z_k = z*, and hence H* = ||H(z*)|| and Ψ* = Ψ(z*). Since ||H(z*)|| > 0, so Ψ(z*) > 0; from (3.5), it follows that lim_{k→∞} λ_k = 0. Thus, for any sufficiently large k, the stepsize λ̂_k := λ_k/δ does not satisfy the line search criterion (3.5), i.e.,

Ψ(z_k + λ̂_k Δz_k) > [1 − σ(1 − γμ̄)λ̂_k] Ψ(z_k),

which implies that

[Ψ(z_k + λ̂_k Δz_k) − Ψ(z_k)] / λ̂_k > −σ(1 − γμ̄) Ψ(z_k).

Since μ* > 0, it follows that Ψ(·) is continuously differentiable at z*. Letting k → ∞, the above inequality gives

DΨ(z*)Δz* ≥ −σ(1 − γμ̄) Ψ(z*),

where Δz* solves (3.4) at z*. On the other hand, by (3.4),

DΨ(z*)Δz* = 2〈H(z*), DH(z*)Δz*〉 = −2Ψ(z*) + 2β(z*)μ̄ μ*.

Together with β(z*) = γ min{1, Ψ(z*)} and μ* ≤ ||H(z*)|| = Ψ(z*)^{1/2}, it follows that DΨ(z*)Δz* ≤ −2(1 − γμ̄)Ψ(z*), which contradicts the fact that DΨ(z*)Δz* ≥ −σ(1 − γμ̄)Ψ(z*) with σ ∈ (0, 1/2). So H* = 0. Thus, by a simple continuity argument, we obtain that x* is a solution of the system of inequalities (1.2). This shows that the desired result holds.

Now, we discuss the local quadratic convergence of Algorithm 3.1. For this purpose, we need the strong semismoothness of the transformation H, which can be obtained from Proposition 2.3(ii). In a way similar to [17, Theorem 3.2], we can obtain the local quadratic convergence of Algorithm 3.1.

Theorem 4.2 Suppose that f is a continuously differentiable monotone transformation. Let the sequence {z_k} be generated by Algorithm 3.1 and z* := (μ*, x*, y*) be an accumulation point of {z_k}. If every W ∈ ∂H(z*) is nonsingular, where ∂H(z*) denotes Clarke's generalized Jacobian of H at z*, then the whole sequence {z_k} converges to z* with ||z_{k+1} − z*|| = O(||z_k − z*||²).

## 5 Numerical experiments

In this section, in order to evaluate the efficiency of Algorithm 3.1, we report some numerical results for solving the system of inequalities under the order induced by the second-order cone (SOCIS, for short). All experiments are done on a PC with a 2.4 GHz CPU and 2.0 GB of RAM, and all codes are written in MATLAB. Throughout the experiments, the parameters used are δ = 0.5, σ = 0.0001, and γ = 0.20. The algorithm is terminated whenever ||H(z)|| ≤ 10⁻⁶, the step length is no more than 10⁻⁶, or the number of iterations exceeds 500. The starting points in the test problems are randomly chosen from the interval [−1, 1]. In our experiments, the function H defined by (3.1) is replaced by a scaled variant involving a constant c; this does not affect any of the theoretical results obtained in the previous sections. Denote

K^m := {(x₁; x₂) ∈ ℝ × ℝ^{m−1} : x₁ ≥ ||x₂||};

then K^m is an m-dimensional second-order cone.
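Membership in K^m, and in a product cone K^{n₁} × ··· × K^{n_m} of the kind used in the test problems below, reduces to one norm comparison per block; a small Python sketch (function names are ours):

```python
import numpy as np

def in_soc(x, tol=0.0):
    """True if x = (x1; x2) lies in the second-order cone K^m."""
    return x[0] >= np.linalg.norm(x[1:]) - tol

def in_product_soc(x, block_sizes, tol=0.0):
    """Membership in K^{n1} x ... x K^{nm}, splitting x into consecutive blocks."""
    offsets = np.cumsum([0] + list(block_sizes))
    return all(in_soc(x[a:b], tol) for a, b in zip(offsets[:-1], offsets[1:]))

assert in_soc(np.array([2.0, 1.0, 1.0]))        # 2 >= sqrt(2)
assert not in_soc(np.array([1.0, 2.0, 0.0]))    # 1 < 2
# blocks (1.0, 0.5) in K^2 and (3.0, 0.0, -2.0) in K^3
assert in_product_soc(np.array([1.0, 0.5, 3.0, 0.0, -2.0]), [2, 3])
```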

First, we test the following problem.

Example 5.1 Consider the system of inequalities (1.2) with f(x) := Mx + q and the order induced by the second-order cone K := K^{n₁} × ··· × K^{n_m}, where M = BBᵀ with B ∈ ℝ^{n×n} being a matrix every entry of which is randomly chosen from the interval [0, 1], and q ∈ ℝⁿ being a vector every component of which is 1.

For this example, the test problems are generated with sizes n = 400, 800, ..., 4000 and each n_i = 10. Ten random problems of each size are generated, so we have 100 random problems in total. Table 1 shows the average iteration number (iter), the average CPU time in seconds (cpu), and the average residual norm ||H(z)|| (res) over the 10 test problems of each size with random initializations. Figure 1 shows the convergence behavior of one of the largest test problems (n = 4000) by plotting the logarithm of the residual norm ||H(z)|| against the iteration count.

Second, we test the following problem, which is taken from the literature, with the order induced by the second-order cone K := K³ × K².

This problem is tested 20 times with 20 random starting points. The average iteration number is 5.250, the average CPU time is 0.002 s, and the average residual norm ||H(z)|| is 1.197e−07.

From the numerical results, it is easy to see that Algorithm 3.1 is effective for the problems tested. We have also tested some other systems of inequalities, and the performance of Algorithm 3.1 is similar.

## 6 Remarks

In this article, we proposed a smoothing-type algorithm for solving the system of inequalities under the order induced by a symmetric cone. By means of the theory of Euclidean Jordan algebras, we showed that the system of Newton equations is solvable. Furthermore, we showed that the algorithm is well defined and is globally convergent under weak assumptions. We also investigated the local quadratic convergence of the algorithm. Moreover, the proposed algorithm imposes no restriction on the starting point and solves only one system of equations at each iteration. The preliminary numerical experiments show that the algorithm is effective.

## Declarations

### Acknowledgements

This study was partially supported by the National Natural Science Foundation of China (Grant No. 10871144) and the Seed Foundation of Tianjin University (Grant No. 60302023).

## Authors’ Affiliations

(1)
Department of Mathematics, Xidian University, Xi'an, 710071, P.R. China
(2)
Department of Mathematics, School of Science, Tianjin University, Tianjin, 300072, P.R. China

## References 