A prediction-correction inexact alternating direction method for convex nonlinear second-order cone programming with linear constraints

Abstract

The convex nonlinear second-order cone programming problem with linear constraints is equivalent to a convex programming problem with separable structure. A prediction-correction inexact alternating direction method is proposed for this separable convex programming problem. In the proposed method, the convex objective function is not required to be Lipschitz continuous and only needs to satisfy an inequality. The global convergence result is given. Numerical results demonstrate that our method is efficient for some randomly generated second-order cone programming problems at low accuracy. In addition, our method can be extended to the convex nonlinear circular cone programming problem with linear constraints. We also give simulation results for three-fingered grasping force optimization problems.

1 Introduction

In this paper, we consider the convex nonlinear second-order cone programming (CNSOCP) with linear constraints

$$ \begin{aligned} &\min f(x) \\ &\quad \mbox{s.t. } Ax = b, \quad x\in K , \end{aligned} $$
(1)

where \(f:\mathbb{R}^{n}\rightarrow \mathbb{R}\) is a nonlinear continuously differentiable convex function, \(A\in \mathbb{R}^{m \times n}\) is a full row rank matrix, \(b\in \mathbb{R}^{m} \) is a vector, \(x=[x_{1},\ldots ,x_{N}]\in \mathbb{R}^{n_{1}}\times \cdots \times \mathbb{R}^{n_{N}}\) is viewed as a column vector in \(\mathbb{R}^{n_{1}+\cdots +n_{N}}\), and \(\sum_{i=1}^{N}n_{i}=n\). In addition, K is a Cartesian product of second-order cones

$$ K=K^{n_{1}}\times K^{n_{2}}\times \cdots \times K^{n_{N}}, $$

\(x_{i} \in K^{n_{i}}\), and \(K^{n_{i}}\) is the \(n_{i}\)-dimensional second-order cone

$$ K^{n_{i}}:= \left \{ x_{i}= \begin{bmatrix} x_{i_{1}} \\ x_{i_{0}} \end{bmatrix} \in \mathbb{R}^{n_{i}-1}\times \mathbb{R}: \Vert x_{i_{1}} \Vert \leq x_{i_{0}} \right \} , $$
(2)

where \(\|\cdot \|\) denotes the Euclidean norm.

The projection \(P_{K^{n_{i}}}(x_{i})\) on the second-order cone \(K^{n_{i}}\) is [1, 2]

$$ P_{K^{n_{i}}}(x_{i})= \bigl(\lambda _{1}(x_{i}) \bigr)_{+}c_{1}(x_{i})+ \bigl(\lambda _{2}(x _{i}) \bigr)_{+}c_{2}(x_{i}), \quad i=1,2, \ldots , N, $$
(3)

where \(s_{+} := \max (0,s)\), \(\lambda _{1}(x_{i})=x_{i_{0}}-\|x_{i_{1}}\|\), \(\lambda _{2}(x_{i})=x _{i_{0}} +\|x_{i_{1}}\|\), \(c_{1}(x_{i})=\frac{1}{2}\begin{bmatrix} w \\ 1 \end{bmatrix}\), \(c_{2}(x_{i})=\frac{1}{2}\begin{bmatrix} -w \\ 1 \end{bmatrix}\), with \(w=\frac{-x_{i_{1}}}{\|x_{i_{1}}\|} \) if \(x_{i_{1}} \neq 0\), and \(w\) any vector in \(\mathbb {R}^{n_{i}-1}\) satisfying \(\|w\|=1\) if \(x_{i_{1}}=0\). Then the projection \(P_{K}(x)\) on the cone K is

$$ P_{K}(x)= \bigl[P_{K^{n_{1}}}(x_{1}), \ldots , P_{K^{n_{N}}}(x _{N}) \bigr]. $$
(4)
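
For illustration, the blockwise projection (3)–(4) admits a simple closed-form computation. The following NumPy sketch is ours; the explicit three-case analysis is equivalent to (3), and each block \(x_{i}\) is stored with its last entry playing the role of \(x_{i_{0}}\), as in (2).

```python
import numpy as np

def proj_soc(xi):
    """Projection of one block x_i = [x_{i1}; x_{i0}] onto K^{n_i}; equivalent to (3)."""
    x1, x0 = xi[:-1], xi[-1]
    nrm = np.linalg.norm(x1)
    if nrm <= x0:                       # both spectral values nonnegative: x_i is in the cone
        return xi.copy()
    if nrm <= -x0:                      # both spectral values nonpositive: projection is the origin
        return np.zeros_like(xi)
    lam2 = x0 + nrm                     # only the (lambda_2)_+ term of (3) survives
    c2 = 0.5 * np.concatenate([x1 / nrm, [1.0]])
    return lam2 * c2

def proj_K(x, dims):
    """Blockwise projection onto K = K^{n_1} x ... x K^{n_N}, cf. (4); dims = [n_1, ..., n_N]."""
    blocks, start = [], 0
    for n_i in dims:
        blocks.append(proj_soc(x[start:start + n_i]))
        start += n_i
    return np.concatenate(blocks)
```

For example, `proj_K(np.array([3.0, 4.0, 1.0]), [3])` returns `[1.8, 2.4, 3.0]`, which lies on the boundary of \(K^{3}\).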

The second-order cone programming has wide applications in engineering problems, such as FIR filter design, antenna array weight design, and truss design [3, 4]. Many methods have been proposed for solving linear second-order cone programming (LSOCP) [5–8]. However, the study of nonlinear second-order cone programming (NSOCP) is much more recent and still in its preliminary phase. In paper [9], a primal-dual interior point method was proposed for solving nonlinear second-order cone programming. In paper [10], theoretical properties of an augmented Lagrangian method for solving nonlinear second-order cone optimization problems were considered. The SQP-type method and trust region SQP-filter method were developed for NSOCP in papers [11] and [12]. In paper [13], a \(\mbox{S}l_{1}\mbox{QP}\) based algorithm with trust region technique was proposed for solving nonlinear second-order cone programming problems. A homotopy method was presented for nonlinear second-order cone programming in paper [14], and global convergence was proven under mild conditions.

The alternating direction method has been an effective first-order approach for solving optimization problems such as variational inequality problems [15], linear programming [16], and semidefinite programming [17]. In paper [18], a modified alternating direction method was presented for the convex nonlinear semidefinite programming problem. The method gives a prototype for nonlinear semidefinite programming. The algorithm needs to compute the projection on the semidefinite cone and on the convex constraint set. The projection onto a general convex constraint set is difficult to compute except for simple sets, such as those defined by linear constraints.

Inspired by paper [18], a prediction-correction inexact alternating direction method is proposed for convex second-order cone programming problems with linear constraints. In the algorithm, the problem is reformulated as an equivalent convex nonlinear programming problem with separable structure. Different from the direct extension of the method in paper [18], we do not need to compute the projection onto the linear constraint set, and we only compute the projection onto the second-order cone. Moreover, the convex objective function in the proposed method is not required to be Lipschitz continuous and only needs to satisfy an inequality. We prove the global convergence. Some random second-order cone programming examples are used to test the efficiency of our proposed approach, which show that our method is efficient for these examples at low accuracy. In addition, we extend our proposed method to the convex nonlinear circular cone programming problem with linear constraints. The simulation results of the three-fingered grasping force optimization problems are given.

2 A prediction-correction inexact alternating direction method for CNSOCP problems with linear constraints

First, we reformulate problem (1) as an equivalent convex programming problem with separable structure:

$$ \begin{aligned} &\min f(x) \\ &\quad \mbox{s.t. } Ax = b \\ &\quad \hphantom{\mbox{s.t. }} x=y,\quad y\in K. \end{aligned} $$
(5)

Under Slater’s condition, strong duality holds for problem (5). Hence, \(x^{*}\) is an optimal solution of (5) if and only if there exists \(w^{*}=(x^{*},y^{*},\lambda ^{*},\mu ^{*})\in \varOmega =\mathbb {R}^{n} \times K\times \mathbb {R}^{m} \times \mathbb {R}^{n}\) satisfying the following KKT system in variational inequality form:

$$ \textstyle\begin{cases} \langle x-x^{*}, \nabla f(x^{*})-A^{T}\lambda ^{*}-\mu ^{*}\rangle \geq 0, \quad \forall x \in \mathbb {R}^{n}, \\ \langle y-y^{*}, \mu ^{*}\rangle \geq 0, \quad \forall y \in K, \\ Ax^{*}=b, \\ x^{*}=y^{*}, \end{cases} $$
(6)

where \(\langle \cdot ,\cdot \rangle \) denotes the inner product of two vectors.

The augmented Lagrangian function for the separable convex nonlinear programming problem (5) is

$$ L(x,y,\lambda ,\mu )=f(x)-\lambda ^{T}(Ax-b)-\mu ^{T}(x-y) +\frac{1}{2 \beta _{1}} \Vert Ax-b \Vert ^{2}+ \frac{1}{2\beta _{2}} \Vert x-y \Vert ^{2} , $$
(7)

where \(\lambda \in \mathbb {R}^{m}\), \(\mu \in \mathbb {R}^{n}\), \(\beta _{1}, \beta _{2}>0\).

The exact alternating direction method based on (7) is given as follows.

The exact alternating direction method

Given \(w^{0}=(x^{0},y^{0},\lambda ^{0},\mu ^{0})\in \mathbb {R}^{n} \times \mathbb {R}^{n}\times \mathbb {R}^{m} \times \mathbb {R}^{n}\), and \(\beta _{1}, \beta _{2}>0\). For \(k=0, 1, 2, \ldots \), perform the following steps:

Step 1. Find \(x^{k+1}\in \mathbb {R}^{n}\), which satisfies

$$ \begin{aligned}[b] & \biggl\langle x-x^{k+1}, \nabla f \bigl(x^{k+1} \bigr)-A^{T}\lambda ^{k}-\mu ^{k}+\frac{1}{ \beta _{1}}A^{T} \bigl(Ax^{k+1}-b \bigr) \\ &\quad {} +\frac{1}{\beta _{2}} \bigl(x^{k+1}-y^{k} \bigr) \biggr\rangle \geq 0,\quad \forall x \in \mathbb {R}^{n}. \end{aligned} $$
(8)

Step 2. Find \(y^{k+1}\in K\), which satisfies

$$ \biggl\langle y-y^{k+1}, \mu ^{k}- \frac{1}{\beta _{2}} \bigl(x^{k+1}-y^{k+1} \bigr) \biggr\rangle \geq 0,\quad \forall y \in K. $$
(9)

Step 3. Update the Lagrange multiplier by

$$ \lambda ^{k+1}=\lambda ^{k}-\frac{1}{\beta _{1}} \bigl(Ax^{k+1}-b \bigr). $$
(10)

Step 4. Update the Lagrange multiplier by

$$ \mu ^{k+1}=\mu ^{k}-\frac{1}{\beta _{2}} \bigl(x^{k+1}-y^{k+1} \bigr). $$
(11)
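
For orientation, the following Python sketch shows one pass of the exact method, assuming the user supplies f, its gradient grad_f, and a cone projection project_K (for example, the projection sketch given after (4)); Step 1 is an unconstrained smooth minimization handled here by a generic SciPy solver, which is precisely the expensive inner computation that the inexact variant below avoids. The function name and the use of SciPy are our own choices, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def exact_adm_step(x, y, lam, mu, f, grad_f, project_K, A, b, beta1, beta2):
    """One iteration (Steps 1-4) of the exact alternating direction method for (5)."""
    # Step 1: minimize the augmented Lagrangian (7) over x (an unconstrained smooth problem).
    def L_x(v):
        r1, r2 = A @ v - b, v - y
        return (f(v) - lam @ r1 - mu @ r2
                + r1 @ r1 / (2.0 * beta1) + r2 @ r2 / (2.0 * beta2))
    def L_x_grad(v):
        r1, r2 = A @ v - b, v - y
        return grad_f(v) - A.T @ lam - mu + A.T @ r1 / beta1 + r2 / beta2
    x_new = minimize(L_x, x, jac=L_x_grad, method="L-BFGS-B").x
    # Step 2: the variational inequality (9) reduces to a projection onto K (cf. (16)).
    y_new = project_K(x_new - beta2 * mu)
    # Steps 3-4: multiplier updates (10)-(11).
    lam_new = lam - (A @ x_new - b) / beta1
    mu_new = mu - (x_new - y_new) / beta2
    return x_new, y_new, lam_new, mu_new
```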

It is time consuming to solve problem (8) exactly. Therefore, a prediction-correction inexact alternating direction method is presented to solve problem (5). In the proposed method, we will convert Step 1 and Step 2 to simple projection operations. For this purpose, all we need is the following fact from convex geometry.

Lemma 1

([19])

Let \(\varTheta \) be a closed convex set in a Hilbert space and \(P_{\varTheta }(x)\) be the projection of \(x\) onto \(\varTheta \). Then

$$ \langle z-y, y-x\rangle \geq 0, \quad \forall z\in \varTheta \quad \Longleftrightarrow\quad y=P_{\varTheta }(x). $$
(12)

Taking \(x=\hat{x}^{k}-\alpha _{1}(\nabla f(\hat{x}^{k})-A^{T}\lambda ^{k}-\mu ^{k}+\frac{1}{\beta _{1}}A^{T}(A\hat{x}^{k}-b)+\frac{1}{\beta _{2}}(\hat{x}^{k}-y^{k}))\), \(y=\hat{x}^{k}\), and \(\varTheta =\mathbb {R}^{n}\) in (12), we see that (8) (with \(x^{k+1}\) denoted by \(\hat{x}^{k}\)) is equivalent to the following nonlinear equation:

$$ \hat{x}^{k}=P_{R^{n}} \biggl[\hat{x}^{k}- \alpha _{1} \biggl(\nabla f \bigl(\hat{x} ^{k} \bigr)-A^{T}\lambda ^{k}- \mu ^{k}+ \frac{1}{\beta _{1}}A^{T} \bigl(A\hat{x}^{k}-b \bigr)+ \frac{1}{ \beta _{2}} \bigl(\hat{x}^{k}-y^{k} \bigr) \biggr) \biggr], $$
(13)

where \(\alpha _{1}\) can be any positive number.

Taking \(x=\hat{y}^{k}-\alpha _{2} (\mu ^{k}-\frac{1}{\beta _{2}}( \hat{x}^{k}-\hat{y}^{k}) )\), \(y=\hat{y}^{k}\), and \(\varTheta =K\) in (12), we see that (9) (with \(y^{k+1}\) denoted by \(\hat{y}^{k}\)) is equivalent to the following nonlinear equation:

$$ \hat{y}^{k}=P_{K} \biggl[\hat{y}^{k}- \alpha _{2} \biggl(\mu ^{k}- \frac{1}{ \beta _{2}} \bigl( \hat{x}^{k}-\hat{y}^{k} \bigr) \biggr) \biggr], $$
(14)

where \(\alpha _{2}\) can be any positive number.

It is time consuming to solve problem (13) directly due to the presence of the terms \(\nabla f(\hat{x}^{k})\) and \(A^{T}A\hat{x}^{k}\) in (13). An inexact approach, similar to the method in paper [18], is used to solve problem (13). Let

$$ R_{1} \bigl(x^{k},\hat{x}^{k} \bigr)= \biggl(1-\frac{\alpha _{1}}{\beta _{2}}-\gamma _{1}\frac{\alpha _{1}}{\beta _{1}} \biggr) \bigl(x^{k}-\hat{x}^{k} \bigr)-\alpha _{1} \bigl(\nabla f \bigl(x^{k} \bigr)-\nabla f \bigl( \hat{x}^{k} \bigr) \bigr) $$

and

$$ R_{2} \bigl(x^{k},\hat{x}^{k} \bigr)= \frac{\alpha _{1}}{\beta _{1}} \bigl( \bigl(\gamma _{1}I_{n}- A^{T}A \bigr) \bigl(x^{k}-\hat{x}^{k} \bigr) \bigr), $$

where \(\gamma _{1}>\lambda _{\mathrm{max}}(A^{T}A)\), and \(\lambda _{\mathrm{max}}(A^{T}A)\) is the largest eigenvalue of \(A^{T}A\).

Instead of computing (13), we compute

$$ \begin{aligned}[b] \hat{x}^{k} &= P_{R^{n}} \biggl[\hat{x}^{k}-\alpha _{1} \biggl(\nabla f \bigl( \hat{x}^{k} \bigr)-A ^{T}\lambda ^{k}-\mu ^{k}+\frac{1}{\beta _{1}}A^{T} \bigl(A \hat{x}^{k}-b \bigr) \\ &\quad {}+ \frac{1}{\beta _{2}} \bigl(\hat{x}^{k}-y^{k} \bigr) \biggr)+R_{1} \bigl(x^{k},\hat{x}^{k} \bigr)+R _{2} \bigl(x^{k},\hat{x}^{k} \bigr) \biggr] \\ & = x^{k}-\alpha _{1} \biggl(\nabla f \bigl(x^{k} \bigr)-A^{T}\lambda ^{k}-\mu ^{k}+\frac{1}{ \beta _{1}}A^{T} \bigl(Ax^{k}-b \bigr)+ \frac{1}{\beta _{2}} \bigl(x^{k}-y^{k} \bigr) \biggr). \end{aligned} $$
(15)

Obviously, (15) can be evaluated explicitly (the projection onto \(\mathbb {R}^{n}\) is the identity), and its solution is used as an approximation to the solution of variational inequality (8).

Let \(\alpha _{2}=\beta _{2}\) in (14). Then we have

$$ \hat{y}^{k}=P_{K} \biggl[-\alpha _{2} \biggl(\mu ^{k}- \frac{1}{\beta _{2}}\hat{x}^{k} \biggr) \biggr]. $$
(16)

Now we present the prediction-correction inexact alternating direction method. To simplify the following analysis, we denote

$$ G= \begin{pmatrix} I_{n} & 0 & 0 & 0 \\ 0 & \frac{\alpha _{1}}{\beta _{2}}I_{n} & 0 & 0 \\ 0 & 0 & \alpha _{1}\beta _{1}I_{m} & 0 \\ 0 & 0 & 0 & \alpha _{1}\beta _{2}I_{n} \end{pmatrix}. $$

The prediction-correction inexact alternating direction method

Step 0. Given \(w^{0}=(x^{0},y^{0},\lambda ^{0},\mu ^{0}) \in \mathbb {R}^{n}\times \mathbb {R}^{n}\times \mathbb {R}^{m} \times \mathbb {R}^{n}\), \(\beta _{1}, \beta _{2}>0\), \(\eta \in (0,1)\), and \(\alpha _{0}= \frac{\eta }{1+\eta (\frac{1}{\beta _{2}}+\frac{\gamma _{1}}{\beta _{1}})}\). Set \(k=0\).

Step 1. The prediction step: for a given \(w^{k}=(x^{k}, y ^{k},\lambda ^{k}, \mu ^{k})\), set

$$ \textstyle\begin{cases} \hat{x}^{k}=x^{k}-\alpha _{1} (\nabla f(x^{k})-A^{T}\lambda ^{k}- \mu ^{k}+\frac{1}{\beta _{1}}A^{T}(Ax^{k}-b)+\frac{1}{\beta _{2}}(x^{k}-y ^{k}) ), \\ \hat{y}^{k}=P_{K} [\hat{x}^{k}-\alpha _{2}\mu ^{k} ], \\ \hat{\lambda }^{k}=\lambda ^{k}-\frac{1}{\beta _{1}}(A\hat{x}^{k}-b), \\ \hat{\mu }^{k}=\mu ^{k}-\frac{1}{\beta _{2}}(\hat{x}^{k}-\hat{y}^{k}), \end{cases} $$

where \(\alpha _{1}=(0.1)^{i}\alpha _{0}\), with \(i\in N\) (the set of natural numbers) the smallest index such that the following inequality holds (an Armijo-type line search strategy):

$$ \begin{aligned}[b] & \biggl\Vert \alpha _{1} \bigl( \nabla f \bigl(x^{k} \bigr)-\nabla f \bigl(\hat{x}^{k} \bigr) \bigr)+\frac{\alpha _{1}}{ \beta _{1}}A^{T}A \bigl(x ^{k}- \hat{x}^{k} \bigr) \biggr\Vert \\ &\quad \leq \biggl(\eta \biggl(1-\frac{\alpha _{1}}{\beta _{2}} \biggr)+(1- \eta ) \frac{\alpha _{1}\gamma _{1}}{\beta _{1}} \biggr) \bigl\Vert x^{k}-\hat{x} ^{k} \bigr\Vert . \end{aligned} $$
(17)

Step 2. The correction step: compute the next iteration point \(w^{k+1}=(x^{k+1}, y^{k+1},\lambda ^{k+1}, \mu ^{k+1})\) by the following equation:

$$ w^{k+1}=P_{\varOmega } \bigl( w^{k}-\rho _{k}d \bigl(w^{k}, \hat{w}^{k} \bigr) \bigr), $$
(18)

where the stepsize \(\rho _{k}\) is determined by

$$ \rho _{k}=\nu \rho _{k}^{*}=\nu \frac{\langle w^{k}-\hat{w}^{k}, d(w^{k}, \hat{w}^{k})\rangle _{G}}{ \Vert d(w^{k}, \hat{w}^{k}) \Vert _{G}^{2}}, \quad \nu \in (0, 2), $$

and

$$ d \bigl(w^{k}, \hat{w}^{k} \bigr)= \begin{pmatrix} R_{1}(x^{k},\hat{x}^{k})+R_{2}(x^{k},\hat{x}^{k}) \\ y^{k}-\hat{y}^{k} \\ \lambda ^{k}-\hat{\lambda }^{k} \\ \mu ^{k}-\hat{\mu }^{k} \end{pmatrix}. $$

Here, \(\langle \xi _{1}, \xi _{2} \rangle _{G}=\xi _{1}^{T}G\xi _{2}\) for any vectors \(\xi _{1}, \xi _{2}\in \mathbb {R}^{3n+m}\) and \(\|\xi \|_{G}=\sqrt{\xi ^{T}G\xi }\) for any vector \(\xi \in \mathbb {R}^{3n+m}\).
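
To make the correction step concrete, here is a minimal NumPy sketch of (18), assuming grad_f and the cone projection project_K are supplied by the user; the variable names and the representation of G by its diagonal block weights are ours, and the default value of ν is only illustrative.

```python
import numpy as np

def correction_step(w, w_hat, grad_f, project_K, A, alpha1, beta1, beta2, gamma1, nu=0.9):
    """Correction step (18): w^{k+1} = P_Omega(w^k - rho_k * d(w^k, w_hat^k))."""
    (x, y, lam, mu), (xh, yh, lamh, muh) = w, w_hat
    # R_1, R_2 and the direction d(w^k, w_hat^k).
    R1 = (1.0 - alpha1 / beta2 - gamma1 * alpha1 / beta1) * (x - xh) \
         - alpha1 * (grad_f(x) - grad_f(xh))
    R2 = (alpha1 / beta1) * (gamma1 * (x - xh) - A.T @ (A @ (x - xh)))
    d = (R1 + R2, y - yh, lam - lamh, mu - muh)
    dw = (x - xh, y - yh, lam - lamh, mu - muh)
    # G-weighted inner product and squared norm (G is the block-diagonal matrix defined above).
    wts = (1.0, alpha1 / beta2, alpha1 * beta1, alpha1 * beta2)
    inner = sum(c * (u @ v) for c, u, v in zip(wts, dw, d))
    norm2 = sum(c * (v @ v) for c, v in zip(wts, d))
    if norm2 == 0.0:                       # w^k = w_hat^k already solves the KKT system
        return x, y, lam, mu
    rho = nu * inner / norm2               # rho_k = nu * rho_k^*, nu in (0, 2)
    # Projection onto Omega = R^n x K x R^m x R^n: only the y-block needs projecting.
    return (x - rho * d[0],
            project_K(y - rho * d[1]),
            lam - rho * d[2],
            mu - rho * d[3])
```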

Remark

If there exists a constant L satisfying the following inequality:

$$ \bigl\Vert \nabla f \bigl(\hat{x}^{k} \bigr)-\nabla f \bigl(x^{k} \bigr) \bigr\Vert \leq L \bigl\Vert \hat{x}^{k}-x^{k} \bigr\Vert , $$
(19)

then we can choose \(\alpha _{1}\) such that

$$ \alpha _{1}\leq \frac{\eta }{L+\eta (\frac{1}{\beta _{2}}+\frac{\gamma _{1}}{\beta _{1}})} $$
(20)

with certain \(0<\eta <1\). It is easily proven that inequality (17) holds if inequality (20) holds. But, in contrast to (20), inequality (17) does not require \(\nabla f(x)\) to be Lipschitz continuous. We use an Armijo-type line search strategy, which decreases \(\alpha _{1}\) geometrically, until condition (17) holds.
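
The prediction step together with this geometric line search for \(\alpha _{1}\) can be sketched as follows (again with user-supplied grad_f and project_K; \(\alpha _{2}=\beta _{2}\) as in (16), and the default parameter values are our own illustrative choices, not prescribed by the method).

```python
import numpy as np

def prediction_step(x, y, lam, mu, grad_f, project_K, A, b,
                    beta1, beta2, gamma1, eta=0.5, shrink=0.1, max_tries=50):
    """Prediction step with the Armijo-type search (17) for alpha_1 (alpha_2 = beta_2)."""
    alpha0 = eta / (1.0 + eta * (1.0 / beta2 + gamma1 / beta1))
    g = grad_f(x) - A.T @ lam - mu + A.T @ (A @ x - b) / beta1 + (x - y) / beta2
    alpha1 = alpha0
    for _ in range(max_tries):
        x_hat = x - alpha1 * g                              # explicit prediction, cf. (15)
        lhs = np.linalg.norm(alpha1 * (grad_f(x) - grad_f(x_hat))
                             + (alpha1 / beta1) * (A.T @ (A @ (x - x_hat))))
        rhs = (eta * (1.0 - alpha1 / beta2)
               + (1.0 - eta) * alpha1 * gamma1 / beta1) * np.linalg.norm(x - x_hat)
        if lhs <= rhs:                                      # condition (17) holds
            break
        alpha1 *= shrink                                    # alpha_1 = (0.1)^i * alpha_0
    y_hat = project_K(x_hat - beta2 * mu)                   # cf. (16)
    lam_hat = lam - (A @ x_hat - b) / beta1
    mu_hat = mu - (x_hat - y_hat) / beta2
    return (x_hat, y_hat, lam_hat, mu_hat), alpha1
```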

3 The convergence result

In this section, we extend and modify the convergence results of the alternating direction method for convex nonlinear semidefinite programming in paper [18] and study the convergence of our proposed method.

Lemma 2

The sequence \(\hat{w}^{k}=(\hat{x}^{k}, \hat{y}^{k}, \hat{\lambda } ^{k}, \hat{\mu }^{k})\) generated by the prediction-correction inexact alternating direction method satisfies

$$ \begin{aligned}[b] & \bigl\langle \hat{w}^{k}-w^{*}, d \bigl(w^{k}, \hat{w}^{k} \bigr) \bigr\rangle _{G} \\ &\quad = \bigl\langle \hat{x}^{k}-x^{*}, R_{1} \bigl(x^{k}, \hat{x}^{k} \bigr)+R_{2} \bigl(x ^{k}, \hat{x}^{k} \bigr) \bigr\rangle + \frac{\alpha _{1}}{\beta _{2}} \bigl\langle \hat{y} ^{k}-y^{*},y^{k}- \hat{y}^{k} \bigr\rangle \\ &\qquad {} +\alpha _{1}\beta _{1} \bigl\langle \hat{ \lambda }^{k}-\lambda ^{*}, \lambda ^{k}- \hat{\lambda }^{k} \bigr\rangle + \alpha _{1}\beta _{2} \bigl\langle \hat{\mu }^{k}-\mu ^{*}, \mu ^{k}-\hat{\mu }^{k} \bigr\rangle \geq 0, \end{aligned} $$
(21)

where \(w^{*}=(x^{*}, y^{*}, \lambda ^{*}, \mu ^{*})\) is a KKT point of system (6).

Proof

Taking \(y=\hat{y}^{k}\) in the second inequality in system (6), we obtain

$$ \bigl\langle \hat{y}^{k}-y^{*}, \mu ^{*} \bigr\rangle \geq 0. $$
(22)

Using the definition of \(\hat{\mu }^{k}\) (cf. (11)) and taking \(y=y^{*}\) in (9) (applied to \(\hat{y}^{k}\)), we have

$$ \bigl\langle y^{*}-\hat{y}^{k}, \hat{\mu }^{k} \bigr\rangle \geq 0. $$
(23)

Combining (22) and (23), we have

$$ \bigl\langle \hat{y}^{k}-y^{*}, \mu ^{*}- \hat{\mu }^{k} \bigr\rangle \geq 0. $$
(24)

In addition, from (9) and (11), we have

$$ \bigl\langle y^{k}-\hat{y}^{k}, \hat{\mu }^{k} \bigr\rangle \geq 0, \bigl\langle \hat{y}^{k}-y^{k}, \mu ^{k} \bigr\rangle \geq 0. $$

Combining the two inequalities above, we have

$$ \bigl\langle \hat{y}^{k}-y^{k}, \mu ^{k}- \hat{\mu }^{k} \bigr\rangle \geq 0. $$
(25)

Note that (15) can be written equivalently as

$$ \begin{aligned} & \biggl\langle x-\hat{x}^{k}, \alpha _{1} \biggl(\nabla f \bigl(\hat{x}^{k} \bigr)-A^{T} \hat{\lambda }^{k}-\hat{\mu }^{k}+\frac{1}{\beta _{2}} \bigl(\hat{y^{k}}-y ^{k} \bigr) \biggr) \\ &\quad {}- R_{1} \bigl(x^{k}, \hat{x}^{k} \bigr)-R_{2} \bigl(x^{k}, \hat{x}^{k} \bigr) \biggr\rangle \geq 0,\quad \forall x\in R^{n}. \end{aligned} $$

Setting \(x=x^{*}\), we have

$$ \begin{aligned}[b] & \biggl\langle x^{*}- \hat{x}^{k}, \alpha _{1} \biggl(\nabla f \bigl( \hat{x}^{k} \bigr)-A ^{T}\hat{\lambda }^{k}- \hat{\mu }^{k}+\frac{1}{\beta _{2}} \bigl( \hat{y}^{k}-y ^{k} \bigr) \biggr) \\ &\quad {}-R_{1} \bigl(x^{k}, \hat{x}^{k} \bigr)-R_{2} \bigl(x^{k}, \hat{x}^{k} \bigr) \biggr\rangle \geq 0. \end{aligned} $$
(26)

Taking \(x=\hat{x}^{k}\) in the first inequality in system (6), we have

$$ \alpha _{1} \bigl\langle \hat{x}^{k}-x^{*}, \nabla f \bigl(x^{*} \bigr)-A^{T}\lambda ^{*}-\mu ^{*} \bigr\rangle \geq 0. $$
(27)

Combining (26) and (27), we have

$$ \begin{aligned}[b] & \bigl\langle \hat{x}^{k}-x^{*}, \alpha _{1}A^{T} \bigl(\hat{\lambda }^{k}- \lambda ^{*} \bigr) \bigr\rangle + \bigl\langle \hat{x}^{k}-x^{*}, \alpha _{1} \bigl(\hat{ \mu }^{k}-\mu ^{*} \bigr) \bigr\rangle \\ &\qquad {}+ \biggl\langle \hat{x}^{k}-x^{*}, \frac{\alpha _{1}}{\beta _{2}} \bigl(y^{k}-\hat{y} ^{k} \bigr) \biggr\rangle + \bigl\langle \hat{x}^{k}-x^{*},R_{1} \bigl(x^{k}, \hat{x}^{k} \bigr)+R_{2} \bigl(x^{k}, \hat{x}^{k} \bigr) \bigr\rangle \\ &\quad \geq \alpha _{1} \bigl\langle \hat{x}^{k}-x^{*}, \nabla f \bigl(\hat{x}^{k} \bigr)- \nabla f \bigl(x^{*} \bigr) \bigr\rangle \geq 0. \end{aligned} $$
(28)

Based on the first part on the left-hand side of (28) and the third equation in system (6), we have

$$ \begin{aligned}[b] \bigl\langle \hat{x}^{k}-x^{*}, \alpha _{1}A^{T} \bigl(\hat{\lambda }^{k}- \lambda ^{*} \bigr) \bigr\rangle &=\alpha _{1} \bigl\langle A\hat{x}^{k}-Ax ^{*},\hat{\lambda }^{k}-\lambda ^{*} \bigr\rangle \\ &=\alpha _{1} \bigl\langle A\hat{x}^{k}-b,\hat{\lambda }^{k}-\lambda ^{*} \bigr\rangle \\ &=\alpha _{1}\beta _{1} \bigl\langle \hat{\lambda }^{k}-\lambda ^{*}, \lambda ^{k}-\hat{ \lambda }^{k} \bigr\rangle . \end{aligned} $$
(29)

By (11), (24), the last equation in system (6), and the second part on the left-hand side of (28), we have

$$ \begin{aligned}[b] & \alpha _{1} \bigl\langle \hat{x}^{k}-x^{*}, \hat{\mu }^{k}-\mu ^{*} \bigr\rangle + \alpha _{1} \bigl\langle \hat{y} ^{k}-y^{*}, \mu ^{*}-\hat{\mu }^{k} \bigr\rangle \\ &\quad =\alpha _{1} \bigl\langle \hat{\mu }^{k}-\mu ^{*}, \hat{x}^{k}-x^{*}- \hat{y}^{k}+y^{*} \bigr\rangle \\ &\quad =\alpha _{1} \bigl\langle \hat{\mu }^{k}-\mu ^{*}, \hat{x}^{k}- \hat{y}^{k} \bigr\rangle \\ &\quad =\alpha _{1}\beta _{2} \bigl\langle \hat{\mu }^{k}-\mu ^{*}, \mu ^{k}- \hat{\mu }^{k} \bigr\rangle . \end{aligned} $$
(30)

In addition, from the third part on the left-hand side of (28), we have

$$ \begin{aligned}[b] & \frac{\alpha _{1}}{\beta _{2}} \bigl\langle \hat{x}^{k}-x^{*}, y^{k}- \hat{y}^{k} \bigr\rangle \\ &\quad = \frac{\alpha _{1}}{\beta _{2}} \bigl\langle \hat{y}^{k}-y^{*}, y ^{k}-\hat{y}^{k} \bigr\rangle + \frac{\alpha _{1}}{\beta _{2}} \bigl\langle \hat{x} ^{k}-\hat{y}^{k}, y^{k}-\hat{y}^{k} \bigr\rangle \\ &\quad =\frac{\alpha _{1}}{\beta _{2}} \bigl\langle \hat{y}^{k}-y^{*}, y ^{k}-\hat{y}^{k} \bigr\rangle -\alpha _{1} \bigl\langle \hat{y} ^{k}-y^{k}, \mu ^{k}- \hat{\mu }^{k} \bigr\rangle . \end{aligned} $$
(31)

It follows from (24), (25), and (28)–(31) that

$$ \bigl\langle \hat{w}^{k}-w^{*}, d \bigl(w^{k}, \hat{w}^{k} \bigr) \bigr\rangle _{G}\geq 0. $$

 □

Lemma 3

If \(\alpha _{1}\) satisfies (17), then for any k, we have

$$ \bigl\langle w^{k}-\hat{w}^{k}, d \bigl(w^{k}, \hat{w}^{k} \bigr) \bigr\rangle _{G}\geq \biggl(1-\alpha _{1} \biggl( \frac{1}{\beta _{2}}+\frac{\gamma _{1}}{ \beta _{1}} \biggr) \biggr) (1-\eta ) \bigl\Vert w^{k}-\hat{w}^{k} \bigr\Vert _{G}^{2}. $$
(32)

Proof

Since \(\alpha _{1}\) satisfies (17), the following inequality holds:

$$ \begin{aligned} & \bigl\langle x^{k}- \hat{x}^{k}, R_{1}+R_{2} \bigr\rangle \\ &\quad = \biggl(1-\frac{\alpha _{1}}{\beta _{2}} \biggr) \bigl\Vert x^{k}- \hat{x}^{k} \bigr\Vert _{2}^{2}-\alpha _{1} \bigl(x^{k}-\hat{x}^{k} \bigr)^{T} \bigl(\nabla f \bigl(x^{k} \bigr)-\nabla f \bigl( \hat{x}^{k} \bigr) \bigr) \\ &\qquad {}-\frac{\alpha _{1}}{\beta _{1}} \bigl(x^{k}-\hat{x}^{k} \bigr)^{T}A^{T}A \bigl(x ^{k}- \hat{x}^{k} \bigr) \\ &\quad \geq \biggl(1-\alpha _{1} \biggl(\frac{1}{\beta _{2}}+ \frac{\gamma _{1}}{\beta _{1}} \biggr) \biggr) (1-\eta ) \bigl\Vert x^{k}- \hat{x}^{k} \bigr\Vert _{2}^{2}. \end{aligned} $$

By the inequality above, we have

$$ \begin{aligned} & \bigl\langle w^{k}- \hat{w}^{k}, d \bigl(w^{k}, \hat{w}^{k} \bigr) \bigr\rangle _{G} \\ &\quad = \bigl\langle x^{k}-\hat{x}^{k}, R_{1}+R_{2} \bigr\rangle +\frac{\alpha _{1}}{ \beta _{2}} \bigl\Vert y^{k}-\hat{y}^{k} \bigr\Vert _{2}^{2} +\alpha _{1}\beta _{1} \bigl\Vert \lambda ^{k}-\hat{\lambda }^{k} \bigr\Vert _{2}^{2}+ \alpha _{1} \beta _{2} \bigl\Vert \mu ^{k}- \hat{\mu }^{k} \bigr\Vert _{2}^{2} \\ &\quad \geq \biggl(1-\alpha _{1} \biggl(\frac{1}{\beta _{2}}+ \frac{\gamma _{1}}{\beta _{1}} \biggr) \biggr) (1-\eta ) \bigl\Vert x^{k}- \hat{x}^{k} \bigr\Vert _{2}^{2}+ \frac{\alpha _{1}}{\beta _{2}} \bigl\Vert y^{k}-\hat{y}^{k} \bigr\Vert _{2}^{2} \\ &\qquad {}+\alpha _{1}\beta _{1} \bigl\Vert \lambda ^{k}-\hat{\lambda }^{k} \bigr\Vert _{2}^{2}+ \alpha _{1} \beta _{2} \bigl\Vert \mu ^{k}-\hat{\mu }^{k} \bigr\Vert _{2}^{2} \\ &\quad \geq \biggl(1-\alpha _{1} \biggl(\frac{1}{\beta _{2}}+ \frac{\gamma _{1}}{\beta _{1}} \biggr) \biggr) (1-\eta ) \bigl\Vert w^{k}- \hat{w}^{k} \bigr\Vert _{G}^{2}. \end{aligned} $$

 □

Now we give the convergence result.

Theorem 1

The sequence \(w^{k}=(x^{k}, y^{k}, \lambda ^{k}, \mu ^{k})\) generated by the prediction-correction inexact alternating direction method converges to a KKT point \(w^{*}=(x^{*}, y^{*}, \lambda ^{*}, \mu ^{*})\) of system (6).

Proof

It is easy to prove that solving the optimality condition (6) for problem (5) is equivalent to finding a zero point of the residual function

$$ e(w)=\left \Vert \textstyle\begin{array}{@{}l@{}} x-P_{R^{n}} (x-\alpha _{1}(\nabla f(x)-A^{T}\lambda -\mu ) ) \\ y-P_{K} (y-\alpha _{2}\mu ) \\ Ax-b \\ x-y \end{array}\displaystyle \right \Vert _{G}. $$
(33)

By (15), we have

$$ \begin{aligned}[b] \hat{x}^{k}&= P_{R^{n}} \biggl[\hat{x}^{k}-\alpha _{1} \biggl(\nabla f \bigl( \hat{x}^{k} \bigr)-A ^{T}\hat{\lambda }^{k}- \hat{\mu }^{k}+ \frac{1}{\beta _{2}} \bigl( \hat{y} ^{k}-y^{k} \bigr) \biggr) \\ &\quad {} +R_{1} \bigl(x^{k}, \hat{x}^{k} \bigr)+R_{2} \bigl(x^{k}, \hat{x}^{k} \bigr) \biggr]. \end{aligned} $$
(34)

From (14), we have

$$ \hat{y}^{k}= P_{K} \bigl[\hat{y}^{k}- \alpha _{2}\hat{\mu }^{k} \bigr]. $$
(35)

Based on (33)–(35) and the nonexpansion property of the projection operator, we have

$$ \begin{aligned}[b] \bigl\Vert e \bigl(\hat{w}^{k} \bigr) \bigr\Vert _{G}& \leq \left \Vert \textstyle\begin{array}{@{}l@{}} \frac{\alpha _{1}}{\beta _{2}}(y^{k}-\hat{y}^{k})+R_{1}(x^{k}, \hat{x} ^{k})+R_{2}(x^{k}, \hat{x}^{k}) \\ 0 \\ \beta _{1} (\lambda ^{k}-\hat{\lambda }^{k}) \\ \beta _{2}( \mu ^{k}-\hat{\mu }^{k}) \end{array}\displaystyle \right \Vert _{G} \\ &\leq \left \Vert \textstyle\begin{array}{@{}l@{}} R_{1}(x^{k}, \hat{x}^{k})+R_{2}(x^{k}, \hat{x}^{k}) \\ 0 \\ \beta _{1} (\lambda ^{k}-\hat{\lambda }^{k}) \\ \beta _{2}( \mu ^{k}-\hat{\mu }^{k}) \end{array}\displaystyle \right \Vert _{G}+\left \Vert \textstyle\begin{array}{@{}l@{}} \frac{\alpha _{1}}{\beta _{2}}(y^{k}-\hat{y}^{k}) \\ 0 \\ 0 \\ 0 \end{array}\displaystyle \right \Vert _{G} \\ &\leq \delta \bigl\Vert w^{k}-\hat{w}^{k} \bigr\Vert _{G}, \end{aligned} $$
(36)

where δ is a positive constant depending on parameters \(\alpha _{1}\), \(\beta _{1}\), \(\beta _{2}\). By inequality (17), the value of δ is set as

$$ \delta =\sqrt{\max \biggl\{ \beta _{1},\beta _{2}, \frac{\alpha _{1}}{\beta _{2}}, 2 \biggl(1-\frac{\alpha _{1}}{\beta _{2}} \biggr) \biggr\} }. $$
(37)

Thus, based on Lemmas 2 and 3, we have

$$ \begin{aligned} & \bigl\Vert w^{k+1}-w^{*} \bigr\Vert ^{2}_{G} \\ &\quad \leq \bigl\Vert w^{k}-\rho _{k} d \bigl(w^{k},\hat{w}^{k} \bigr)-w^{*} \bigr\Vert _{G}^{2} \\ &\quad = \bigl\Vert w^{k}-w^{*} \bigr\Vert ^{2}_{G}-2\rho _{k} \bigl\langle w^{k}-w^{*}, d \bigl(w ^{k}, \hat{w}^{k} \bigr) \bigr\rangle _{G}+\rho _{k}^{2} \bigl\Vert d \bigl(w^{k}, \hat{w}^{k} \bigr) \bigr\Vert _{G}^{2} \\ &\quad = \bigl\Vert w^{k}-w^{*} \bigr\Vert _{G}^{2}-2\rho _{k} \bigl\langle w^{k}-\hat{w}^{k}+\hat{w} ^{k}-w^{*}, d \bigl(w^{k}, \hat{w}^{k} \bigr) \bigr\rangle _{G}+\rho _{k}^{2} \bigl\Vert d \bigl(w^{k}, \hat{w}^{k} \bigr) \bigr\Vert _{G}^{2} \\ &\quad \leq \bigl\Vert w^{k}-w^{*} \bigr\Vert ^{2}_{G}-2\rho _{k} \bigl\langle w^{k}-\hat{w} ^{k}, d \bigl(w^{k}, \hat{w}^{k} \bigr) \bigr\rangle _{G}+\rho _{k}^{2} \bigl\Vert d \bigl(w ^{k}, \hat{w}^{k} \bigr) \bigr\Vert _{G}^{2} \\ &\quad = \bigl\Vert w^{k}-w^{*} \bigr\Vert ^{2}_{G}-v(2-v)\rho _{k}^{*} \bigl\langle w^{k}- \hat{w}^{k}, d \bigl(w^{k}, \hat{w}^{k} \bigr) \bigr\rangle _{G} \\ &\quad \leq \bigl\Vert w^{k}-w^{*} \bigr\Vert ^{2}_{G}-v(2-v)\rho _{k}^{*} \biggl(1-\alpha _{1} \biggl(\frac{1}{ \beta _{2}}+ \frac{\gamma _{1}}{\beta _{1}} \biggr) \biggr) (1-\eta ) \bigl\Vert w^{k}- \hat{w}^{k} \bigr\Vert ^{2}_{G} \\ &\quad \leq \bigl\Vert w^{k}-w^{*} \bigr\Vert ^{2}_{G}-v(2-v)\rho _{k}^{*} \biggl(1-\alpha _{1} \biggl(\frac{1}{ \beta _{2}}+ \frac{\gamma _{1}}{\beta _{1}} \biggr) \biggr) (1-\eta )\frac{1}{ \delta ^{2}} \bigl\Vert e \bigl(\hat{w}^{k} \bigr) \bigr\Vert _{G}^{2}. \end{aligned} $$

From the above inequality, we have

$$ \bigl\Vert w^{k+1}-w^{*} \bigr\Vert ^{2}_{G} \leq \bigl\Vert w^{k}-w^{*} \bigr\Vert ^{2}_{G},\quad k=1, 2, \ldots . $$
(38)

That is, the sequence \(\{w^{k}\}\) is bounded. From the above inequality, we have

$$ \sum_{k=0}^{\infty }v(2-v)\rho _{k}^{*} \biggl(1-\alpha _{1} \biggl( \frac{1}{ \beta _{2}}+ \frac{\gamma _{1}}{\beta _{1}} \biggr) \biggr) (1-\eta ) \bigl\Vert w^{k}- \hat{w}^{k} \bigr\Vert ^{2}_{G}< + \infty . $$

This implies that \(\lim_{k\rightarrow \infty }\|w^{k}-\hat{w}^{k}\| _{G}=0\). Thus, the sequence \(\{\hat{w}^{k}\}\) is also bounded, and there exists at least one cluster point of \(\{\hat{w}^{k}\}\).

From the above inequality, we have

$$ \sum_{k=0}^{\infty }v(2-v)\rho _{k}^{*} \biggl(1-\alpha _{1} \biggl( \frac{1}{ \beta _{2}}+ \frac{\gamma _{1}}{\beta _{1}} \biggr) \biggr) (1-\eta ) \bigl\Vert e \bigl( \hat{w} ^{k} \bigr) \bigr\Vert ^{2}_{G}< +\infty . $$

This implies that \(\lim_{k\rightarrow \infty }\|e(\hat{w}^{k})\|_{G}=0\).

Let \(\bar{w}\) be a cluster point of \(\{\hat{w}^{k}\}\) and let the subsequence \(\{\hat{w}^{k_{j}}\}\) converge to \(\bar{w}\). We have

$$ \lim _{j\rightarrow \infty } \bigl\Vert w^{k_{j}}-\bar{w} \bigr\Vert _{G} = \lim _{j\rightarrow \infty } \bigl\Vert w^{k_{j}}- \hat{w}^{k_{j}} \bigr\Vert _{G} =0 $$

and

$$ \bigl\Vert e(\bar{w}) \bigr\Vert _{G} = \lim _{j\rightarrow \infty } \bigl\Vert e \bigl(\hat{w}^{k_{j}} \bigr) \bigr\Vert _{G} =0. $$

Therefore, \(\bar{w}\) satisfies system (6). Setting \(w^{*}=\bar{w}\), we have

$$ \bigl\Vert w^{k+1}-\bar{w } \bigr\Vert _{G} \leq \bigl\Vert w^{k}-\bar{w} \bigr\Vert _{G}, $$

so \(\|w^{k}-\bar{w}\|_{G}\) is nonincreasing. Since the subsequence \(\{w^{k_{j}}\}\) converges to \(\bar{w}\), the whole sequence \(\{w^{k}\}\) satisfies \(\lim_{k\rightarrow \infty }w^{k} = \bar{w}\). □
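
In an implementation, the residual \(e(w)\) in (33) also provides a practical optimality measure. A minimal NumPy sketch, assuming user-supplied grad_f and project_K and representing the matrix G of Sect. 2 by its diagonal block weights:

```python
import numpy as np

def residual_G(x, y, lam, mu, grad_f, project_K, A, b,
               alpha1, alpha2, beta1, beta2):
    """G-weighted norm of the KKT residual e(w) in (33)."""
    r_x = alpha1 * (grad_f(x) - A.T @ lam - mu)   # P_{R^n} is the identity map
    r_y = y - project_K(y - alpha2 * mu)
    r_lam = A @ x - b
    r_mu = x - y
    wts = (1.0, alpha1 / beta2, alpha1 * beta1, alpha1 * beta2)
    blocks = (r_x, r_y, r_lam, r_mu)
    return np.sqrt(sum(c * (r @ r) for c, r in zip(wts, blocks)))
```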

4 Simulation experiments

In this section we present computational results of the proposed method. All the algorithms are run in the MATLAB 7.0 environment on a personal computer with an Intel Core 1.80 GHz processor and 2.00 GB of RAM.

In the prediction-correction inexact alternating direction method, we set \(\beta _{1}=0.8\), \(\beta _{2}=0.8\), \(\gamma _{1}=\lambda _{\mathrm{max}}(A ^{T}A)+0.0001\). The initial points \(x^{0}\), \(y^{0}\) are randomly generated, and \(\lambda ^{0}\), \(\mu ^{0}\) are set to zero.

Example 4.1

In the first test example, we test the following CNSOCP, which was derived from paper [11]:

$$ \begin{aligned} &\min y^{T}Qy+\sum _{i=1}^{n} \bigl(d_{i}y^{4}_{i}+f_{i}y_{i} \bigr) \\ &\quad \mbox{s.t. } By+ \begin{pmatrix} b_{n_{1}} \\ \vdots \\ b_{n_{N}} \end{pmatrix}\in K, \end{aligned} $$

where the elements of the matrix B are randomly generated from the interval \([0,2]\), \(b_{j} = e^{j}\), and \(d_{i}\) and \(f_{i}\) are randomly generated from the intervals \([0,1]\) and \([-1,1]\), respectively. In addition, Q is given by \(Q = C^{T}C\), where C is an \(n\times n\) matrix whose elements are randomly generated from the interval \([0,1]\).

Let

$$ z=By+ \begin{pmatrix} b_{n_{1}} \\ \vdots \\ b_{n_{N}} \end{pmatrix}. $$

Then, in the form of problem (1), we have

$$ A=[B -I_{n}],\qquad x= \begin{bmatrix} y \\ z \end{bmatrix}\in \mathbb{R}^{n} \times K,\qquad b=- \begin{pmatrix} b_{n_{1}} \\ \vdots \\ b_{n_{N}} \end{pmatrix} . $$
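
A sketch of how an instance of Example 4.1 can be generated and cast into the form (1) is given below; the particular random-number calls, the choice of a square B, and the function names are our own assumptions, not a verbatim reproduction of the authors' test generator.

```python
import numpy as np

def make_example_41(dims, seed=0):
    """Random CNSOCP instance of Example 4.1 in the form (1); dims = [n_1, ..., n_N]."""
    rng = np.random.default_rng(seed)
    n = sum(dims)
    B = rng.uniform(0.0, 2.0, size=(n, n))
    C = rng.uniform(0.0, 1.0, size=(n, n))
    Q = C.T @ C
    d = rng.uniform(0.0, 1.0, size=n)
    fvec = rng.uniform(-1.0, 1.0, size=n)
    b_cone = np.exp(np.arange(1, n + 1))       # b_j = e^j
    # Objective in the variable x = [y; z]: f(x) = y^T Q y + sum_i (d_i y_i^4 + f_i y_i);
    # the z-block does not enter the objective, and only z is constrained to lie in K.
    def f(x):
        yv = x[:n]
        return yv @ Q @ yv + np.sum(d * yv**4 + fvec * yv)
    def grad_f(x):
        yv = x[:n]
        return np.concatenate([2.0 * Q @ yv + 4.0 * d * yv**3 + fvec,
                               np.zeros(n)])
    A = np.hstack([B, -np.eye(n)])             # A = [B, -I_n]
    b = -b_cone
    return f, grad_f, A, b
```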

Obviously, the objective gradients of the problems in Example 4.1 are not globally Lipschitz continuous, but inequality (17) holds for each k. The detailed test problems are shown in Table 1. In Table 1, the first three test problems are given in [11]. The other 12 test problems are generated by the method in Example 4.1 with larger problem sizes.

Table 1 Comparison results for the medium-scale test problems

For the test problems in Table 1, we compare our proposed method with the SQP-type algorithm in paper [11]; the comparison results are listed in Table 1. The SQP-type algorithm is implemented using the SeDuMi solver [20] to solve the subproblems by transforming them into LSOCPs. In the SQP-type algorithm, the parameters are set similarly to those in paper [11]. The stopping criterion is \(\frac{\|\Delta x^{k}\|}{\|x^{k}\|} < 10^{-3}\). Let \(\Delta f(x^{k})=f(x^{k})-f(x^{k-1})\). In Example 4.1, our algorithm is terminated when

$$ \max \biggl\{ \frac{ \Vert x^{k}-x^{k-1} \Vert }{ \Vert x^{k} \Vert },\frac{ \Vert y^{k}-y ^{k-1} \Vert }{ \Vert y^{k} \Vert },\frac{ \Vert \lambda ^{k}-\lambda ^{k-1} \Vert }{ \Vert \lambda ^{k} \Vert }, \frac{ \Vert \mu ^{k}-\mu ^{k-1} \Vert }{ \Vert \mu ^{k} \Vert },\frac{ \vert \Delta f(x ^{k}) \vert }{ \vert f(x^{k}) \vert } \biggr\} \leq \epsilon $$

for \(\epsilon =10^{-3}\) and \(v=0.9\).
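
The stopping test above can be coded directly; a small sketch follows (the tiny safeguard in the denominators is our addition to avoid division by zero).

```python
import numpy as np

def stop_example_41(w, w_prev, f_val, f_prev, eps=1e-3):
    """Relative-change stopping criterion of Example 4.1; w = (x, y, lam, mu)."""
    ratios = [np.linalg.norm(a - b) / max(np.linalg.norm(a), 1e-12)
              for a, b in zip(w, w_prev)]
    ratios.append(abs(f_val - f_prev) / max(abs(f_val), 1e-12))
    return max(ratios) <= eps
```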

In Table 1, an entry of the form “\(2 \times 5\)” in the “SOC” column means that there are two 5-dimensional second-order cones, and “\(\alpha _{1}\)” denotes the parameter \(\alpha _{1}\) in the prediction-correction inexact alternating direction method. For the test problems, the average iteration number and average CPU time are used to evaluate the performance of the proposed method. In Table 1, “Time” represents the average CPU time (in seconds) and “Iter.” denotes the average number of iterations. In addition, “PCIADM” represents the prediction-correction inexact alternating direction method, and “SQP” represents the SQP-type algorithm in [11].

The results in Table 1 show that the prediction-correction inexact alternating direction method costs less CPU time than the SQP-type algorithm in [11], although its average number of iteration steps is higher. Furthermore, the prediction-correction inexact alternating direction method is a first-order algorithm. Therefore, it appears to be well suited for large-scale second-order cone problems with a large number of second-order cones and dense constraint matrices when only low accuracy is required.

Example 4.2

In this example, the grasping force optimization problem for the multi-fingered robotic hand [21, 22] is used to test the performance of the proposed PCIADM. For the robotic hand with m fingers, the optimization problem can be formulated as a convex quadratic circular cone programming problem

$$ \begin{aligned} &\min \frac{1}{2} f^{T}f \\ &\quad \mbox{s.t. } Gf=-w_{\mathrm{ext}} \\ &\quad \hphantom{\mbox{s.t. }} \bigl\Vert (f_{i1},f_{i2})^{T} \bigr\Vert \leq \nu f_{i3} \quad (i=1,2,\ldots ,m), \end{aligned} $$
(39)

where \(f=[f_{11},f_{12},\ldots ,f_{m3}]\) is the grasping force, G is the grasping transformation matrix, \(w_{\mathrm{ext}}\) is the time-varying external wrench, and ν is the friction coefficient.

In this example, we consider a three-fingered grasping force optimization example [22]. The three-finger robot hand grasps a polyhedral object with the grasp points \([0,1,0]^{T}\), \([1,0.5,0]^{T}\), and \([0,-1,0]^{T}\), and the robot hand moves along a vertical circular trajectory of radius r with constant velocity \(v_{1}\). Let \(x=[f_{13},f_{11},f_{12},f_{23},f_{21}, f_{22},f_{33},f _{31}, f_{32}]^{T}\). Then problem (39) is reformulated as a convex quadratic circular cone programming problem:

$$ \begin{aligned} &\min \frac{1}{2} x^{T}Qx \\ &\quad \mbox{s.t. } Ax=b \\ &\quad \hphantom{\mbox{s.t. }} \bigl\Vert (x_{2},x_{3}) \bigr\Vert \leq \tan \theta _{1} x_{1} \\ &\quad \hphantom{\mbox{s.t. }} \bigl\Vert (x_{5},x_{6}) \bigr\Vert \leq \tan \theta _{2} x_{4} \\ &\quad \hphantom{\mbox{s.t. }} \bigl\Vert (x_{8},x_{9}) \bigr\Vert \leq \tan \theta _{3} x_{7} , \end{aligned} $$
(40)

where \(Q=\operatorname{diag}(1,1,1,1,1,1,1,1,1)\) is the \(9\times 9\) identity matrix, \(\theta _{1}=\theta _{2}=\theta _{3}=\arctan (\nu )\),

$$ A= \begin{pmatrix} 0 & 0 & 1 & -1 & 0 & 0 & 0 & 1 & 0 \\ -1 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 & -1 & 0 & 0 & 0 & -1 \\ 0 & -1 & 0 & 0 & -0.5 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0.5 & 0 & -1 & 0 & 1 & 0 \end{pmatrix},\qquad b= \begin{pmatrix} 0 \\ -f_{c} \sin \theta (t) \\ Mg-f_{c} \cos \theta (t) \\ 0 \\ 0 \\ 0 \end{pmatrix}. $$

Here, M is the mass of the polyhedral object, \(g=9.8~\mbox{m/s}^{2}\), \(f_{c}=Mv _{1}^{2}/r\) is the centripetal force, t is the time, and \(\theta (t)=v _{1}t/r\in [0,2\pi ]\). In this example, we set the data as follows: \(M=0.1~\mbox{kg}\), \(r=0.2~\mbox{m}\), \(v_{1}= 0.4\pi~\mbox{m/s}\), and \(\nu =0.6\).
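
For reference, the time-varying right-hand side b in (40) can be evaluated directly from the data above; the function name and default arguments in the following sketch are ours.

```python
import numpy as np

def wrench_rhs(t, M=0.1, r=0.2, v1=0.4 * np.pi, g=9.8):
    """Right-hand side b(t) of (40) for the circular trajectory of Example 4.2."""
    f_c = M * v1**2 / r                 # centripetal force f_c = M * v1^2 / r
    theta = v1 * t / r                  # theta(t) = v1 * t / r
    return np.array([0.0,
                     -f_c * np.sin(theta),
                     M * g - f_c * np.cos(theta),
                     0.0, 0.0, 0.0])
```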

For circular cones with half-aperture angles \(\theta _{i} \in (0,\frac{\pi }{2})\), \(i=1,2, \ldots ,N\), the \(n_{i}\)-dimensional circular cone \(L_{\theta _{i}}\) is defined as

$$ L_{\theta _{i}}= \left \{ x_{i}= \begin{bmatrix} x_{i_{1}} \\ x_{i_{0}} \end{bmatrix} \in R^{{n_{i}-1}}\times R: \Vert x_{i_{1}} \Vert \leq \tan \theta _{i}x _{i_{0}} \right \} . $$

Our proposed method can also be extended to the convex nonlinear circular cone programming problem with linear constraints. In the extended method, the projection onto the second-order cone is replaced by the projection onto the circular cone. The computation of the projection onto the circular cone is given in paper [23].
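
A minimal sketch of this projection, following the closed form described in [23], is given below; we store the axis component \(x_{i_{0}}\) last, and for \(\theta =\pi /4\) the formula reduces to the second-order cone projection (3). The function name is ours.

```python
import numpy as np

def proj_circular_cone(z, theta):
    """Projection of z = [z_1; z_0] onto L_theta = {||z_1|| <= tan(theta) * z_0}, cf. [23]."""
    z1, z0 = z[:-1], z[-1]
    t = np.tan(theta)
    nrm = np.linalg.norm(z1)
    if nrm <= t * z0:                   # already in the cone
        return z.copy()
    if t * nrm <= -z0:                  # in the negative dual cone: projection is the origin
        return np.zeros_like(z)
    # Otherwise project onto the boundary ray spanned by (t * z_1 / ||z_1||, 1).
    coeff = (z0 + t * nrm) / (1.0 + t**2)
    return np.concatenate([coeff * t * z1 / nrm, [coeff]])
```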

Since b is time varying, we need to solve multiple force optimization problems to a given accuracy. Furthermore, the external wrench b of the next problem is not far from that of the previous problem. Based on this feature of the force optimization problems, we simply use the previously computed optimal force vector x as the starting point for the next force optimization problem.

In Example 4.2, our algorithm is terminated when

$$ \max \bigl\{ \bigl\Vert x^{k}-x^{k-1} \bigr\Vert , \bigl\Vert y^{k}-y^{k-1} \bigr\Vert , \bigl\Vert \lambda ^{k}- \lambda ^{k-1} \bigr\Vert , \bigl\Vert \mu ^{k}-\mu ^{k-1} \bigr\Vert , \bigl\vert \Delta f \bigl(x^{k} \bigr) \bigr\vert \bigr\} \leq \epsilon $$

for \(\epsilon =10^{-4}\) and \(v=1.6\).
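
This warm-starting strategy can be organized as in the following sketch, where pciadm_solve is a hypothetical driver for the proposed method that accepts a starting point and returns the primal-dual iterate \((x, y, \lambda , \mu )\), and wrench_rhs is the helper sketched above.

```python
import numpy as np

def solve_force_sequence(pciadm_solve, A, wrench_rhs, n_steps=4000, tol=1e-4):
    """Solve the sequence of force optimization problems with warm starts."""
    forces = []
    w = None                              # previous iterate (x, y, lam, mu); None = default start
    for k in range(n_steps + 1):
        t = k / n_steps                   # t = 0 : 1/n_steps : 1
        b = wrench_rhs(t)                 # only the right-hand side changes between problems
        w = pciadm_solve(A, b, start=w, tol=tol)   # hypothetical solver interface
        forces.append(w[0])               # store the optimal force vector x
    return np.array(forces)
```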

For 4000 force optimization problems with \(t=0:1/4000:1\), the average iteration number and average CPU time are used to evaluate the performance of the prediction-correction inexact alternating direction method and of an interior-point method. Primal-dual interior point methods are among the most efficient classes of methods for SOCP. Here the MATLAB code for the primal-dual interior point method comes from the SeDuMi software package [20]. However, SeDuMi cannot solve the grasping force optimization problem (39) directly, so we need to transform (39) into a linear second-order cone programming problem [24]:

$$ \begin{aligned} &\min t \\ &\quad \mbox{s.t. } Gf+w_{\mathrm{ext}}=0 \\ &\quad \hphantom{\mbox{s.t. } } \sqrt{(t-1)^{2}+2 \Vert f \Vert ^{2} } \leq t+1 \\ &\quad \hphantom{\mbox{s.t. } } \bigl\Vert \bigl(f_{x}^{(i)},f_{y}^{(i)} \bigr)^{T} \bigr\Vert \leq \tan \theta _{i} f_{z}^{(i)},\quad i=1,2, \ldots ,m. \end{aligned} $$

In the SeDuMi software, the stopping criterion is \(\mbox{pars.eps}=10^{-4}\).

Here, during the 4000 force optimization problems, the value of b is recalculated for each problem. The test results are shown in Table 2. We also give the test results for 2000 force optimization problems with \(t=0:1/2000:1\) in Table 2.

Table 2 The test results for the multiple force optimization problems

The results in Table 2 show that the prediction-correction inexact alternating direction method requires more iteration steps than the interior-point method. On the other hand, the prediction-correction inexact alternating direction method requires less CPU time than SeDuMi.

The optimal forces for the 4000 force optimization problems solved by the prediction-correction inexact alternating direction method are shown in Fig. 1. In addition, Fig. 2 gives optimal force \(f_{z}^{1}\), \(f _{x}^{1}\), \(f_{y}^{1}\), \(f_{z}^{2}\) by the two methods.

Figure 1 The trajectories of the optimal forces for the 4000 force optimization problems solved by the PCIADM

Figure 2 Comparative results of the two methods: the optimal forces \(f_{z}^{1}\), \(f _{x}^{1}\), \(f_{y}^{1}\), \(f_{z}^{2}\) computed by the two methods

The results in Fig. 2 show that the optimal forces computed by the two methods are almost identical. Figures 1–2 and Table 2 demonstrate that our method is efficient for the grasping force optimization problems.

5 Conclusion

In this paper, the convex nonlinear second-order cone programming problem with linear constraints is reformulated as an equivalent convex programming problem with separable structure. A prediction-correction inexact alternating direction method is proposed to solve this separable convex programming problem. In the method, the gradient of the objective function is not required to be Lipschitz continuous. In addition, the proposed method does not require solving sub-variational inequality problems exactly. At each iteration, we only need to compute the metric projection onto the second-order cone. The proposed prediction-correction inexact alternating direction method does not require second-order information, and it is easy to implement.

References

  1. Outrata, J.V., Sun, D.F.: On the coderivative of the projection operator onto the second-order cone. Set-Valued Var. Anal. 16, 999–1014 (2008)

  2. Kong, L.C., Tuncel, L., Xiu, N.H.: Clarke generalized Jacobian of the projection onto symmetric cones. Set-Valued Var. Anal. 17, 135–151 (2009)

  3. Lu, W.S., Hinamoto, T.: Optimal design of IIR digital filters with robust stability using conic-quadratic-programming updates. IEEE Trans. Signal Process. 51, 1581–1592 (2003)

  4. Luo, Z.Q.: Applications of convex optimization in signal processing and digital communication. Math. Program. 97B, 177–207 (2003)

  5. Alizadeh, F., Goldfarb, D.: Second-order cone programming. Math. Program. 95, 3–51 (2003)

  6. Fukushima, M., Luo, Z.Q., Tseng, P.: Smoothing functions for second-order cone complementarity problems. SIAM J. Optim. 12, 436–460 (2002)

  7. Chen, J.S., Tseng, P.: An unconstrained smooth minimization reformulation of the second-order cone complementarity problems. Math. Program. 104, 293–327 (2005)

  8. Kanzow, C., Ferenczi, I., Fukushima, M.: On the local convergence of semismooth Newton methods for linear and nonlinear second-order cone programs without strict complementarity. SIAM J. Optim. 20, 297–320 (2009)

  9. Yamashita, H., Yabe, H.: A primal-dual interior point method for nonlinear optimization over second-order cones. Optim. Methods Softw. 24, 407–426 (2009)

  10. Liu, Y.J., Zhang, L.W.: Convergence of the augmented Lagrangian method for nonlinear optimization problems over second-order cones. J. Optim. Theory Appl. 139, 557–575 (2008)

  11. Kato, H., Fukushima, M.: An SQP-type algorithm for nonlinear second-order cone programs. Optim. Lett. 1, 129–144 (2007)

  12. Zhang, X., Liu, Z., Liu, S.: A trust region SQP-filter method for nonlinear second-order cone programming. Comput. Math. Appl. 63, 1569–1576 (2012)

  13. Okuno, T., Yasuda, K., Hayashi, S.: \(\mbox{S}l_{1}\mbox{QP}\) based algorithm with trust region technique for solving nonlinear second-order cone programming problems. Interdiscip. Inf. Sci. 21, 97–107 (2015)

  14. Li, Y., Yu, B., Li, Y.X.: A homotopy method for nonlinear second-order cone programming. Numer. Algorithms 68, 355–365 (2015)

  15. He, B.S., Liao, L.Z., Han, D., Yang, H.: A new inexact alternating directions method for monotone variational inequalities. Math. Program. 92, 103–118 (2002)

  16. Eckstein, J., Bertsekas, D.P.: An alternating direction method for linear programming. LIDS-P, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology (1967)

  17. Sun, J., Zhang, S.: A modified alternating direction method for convex quadratically constrained quadratic semidefinite programs. Eur. J. Oper. Res. 207, 1210–1220 (2010)

  18. Zhang, S., Ang, J., Sun, J.: An alternating direction method for solving convex nonlinear semidefinite programming problems. Optimization 62, 527–543 (2013)

  19. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)

  20. Sturm, J.F.: Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 11–12, 625–653 (1999)

  21. Boyd, S., Wegbreit, B.: Fast computation of optimal contact forces. IEEE Trans. Robot. 23, 1117–1132 (2007)

  22. Ko, C.H., Chen, J.S., Ching, Y.Y.: Recurrent neural networks for solving second-order cone programs. Neurocomputing 74, 3646–3653 (2011)

  23. Zhou, J.C., Chen, J.S.: Properties of circular cone and spectral factorization associated with circular cone. J. Nonlinear Convex Anal. 214, 807–816 (2013)

  24. Zhao, X.Y.: A semismooth Newton-CG augmented Lagrangian method for large scale linear and convex quadratic SDPs. Ph.D. thesis, National University of Singapore (2009)

Acknowledgements

The authors are very grateful to the editor and two anonymous referees for their valuable comments which led to the improvements of the paper.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Funding

This work was supported by the National Science Basic Research Plan in ShaanXi Province of China (2015JM1031) and the Fundamental Research Funds for the Central Universities (JB150713).

Author information

Contributions

YZ designed the prediction-correction inexact alternating direction method for the convex nonlinear second-order cone programming with linear constraints, gave convergence analysis, and was the major contributor in writing the manuscript. HL performed the experiments of the three-fingered grasping force optimization problems. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yaling Zhang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Zhang, Y., Liu, H. A prediction-correction inexact alternating direction method for convex nonlinear second-order cone programming with linear constraints. J Inequal Appl 2020, 10 (2020). https://doi.org/10.1186/s13660-020-2285-2
