# LQP method with a new optimal step size rule for nonlinear complementarity problems

## Abstract

Inspired and motivated by the results of Bnouhachem et al. (Hacet. J. Math. Stat. 41(1):103-117, 2012), we propose a new modified LQP method with a new optimal step size, assuming the underlying function F is co-coercive. Under some mild conditions, we show that the method is globally convergent. Some preliminary computational results are given to illustrate the efficiency of the proposed method.

## 1 Introduction

The nonlinear complementarity problem (NCP) is to determine a vector $$x \in R^{n}$$ such that

$$x\geq0,\quad F(x)\geq0 \quad \mbox{and} \quad x^{T} F(x)= 0,$$
(1.1)

where F is a nonlinear mapping from $$R^{n}$$ into itself. Complementarity problems, introduced by Lemke [1] and Cottle and Dantzig [2] in the early 1960s, have attracted great attention from researchers (see, e.g., [3, 4] and the references therein). On the one hand, there have been many theoretical results on the existence of solutions and their structural properties. On the other hand, many attempts have been made to develop implementable algorithms for the solution of the NCP. A popular way to solve the NCP is to reformulate it as finding a zero point of the operator $$T(x)=F(x)+N_{R^{n}_{+}}(x)$$, i.e., find $$x^{*}\in R^{n}_{+}$$ such that $$0\in T(x^{*})$$, where $$N_{R^{n}_{+}}(\cdot)$$ is the normal cone operator to $$R^{n}_{+}$$ defined by

$$N_{R^{n}_{+}}(x)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \{y\in R^{n}:y^{T}(v-x)\le0, \forall v\in R^{n}_{+}\} &\mbox{if } x\in R^{n}_{+},\\ \emptyset &\mbox{otherwise}. \end{array}\displaystyle \right .$$

The proximal point algorithm (PPA), proposed by Martinet [5] and studied by Rockafellar [6], is recognized as a powerful and successful algorithm for finding a zero of a maximal monotone operator. Starting from any initial $$x^{0}\in R^{n}$$ and given positive reals $$\beta_{k}\geq\beta>0$$, the PPA iteratively updates $$x^{k+1}$$ as the solution of the following problem:

$$0\in\beta_{k}T(x)+\nabla_{x}q \bigl(x,x^{k}\bigr),$$
(1.2)

where

$$q\bigl(x,x^{k}\bigr) = \frac{1}{2} \bigl\| x - x^{k}\bigr\| ^{2},$$
(1.3)

is a quadratic function of x. In place of the usual quadratic term, many researchers have used nonlinear functions $$r(x,x^{k})$$; see, for example, [6–9]. Auslender et al. [10, 11] proposed a new type of proximal interior method by replacing the second term of (1.2) by

$$x-(1-\mu)x^{k}-\mu X^{2}_{k}{x}^{-1}$$
(1.4)

or

$$x-{x}^{k}+\mu X_{k}\log\biggl(\frac{x}{x^{k}}\biggr),$$
(1.5)

where $$\mu\in(0,1)$$ is a given constant, $$X_{k}=\operatorname{diag}(x^{k}_{1},x^{k}_{2},\ldots,x^{k}_{n})$$, and $${x}^{-1}$$ is an n-vector whose jth element is $$\frac{1}{x_{j}}$$. It is easy to see that, at the kth iteration, solving (1.1) by the LQP method is equivalent to solving the following system of nonlinear equations:

$$\beta_{k}F(x)+x-(1-\mu)x^{k}-\mu X^{2}_{k}{x}^{-1}=0$$
(1.6)

or

$$\beta_{k}F(x)+x-x^{k}+\mu X_{k}\log \biggl(\frac{x}{x^{k}}\biggr)=0.$$
(1.7)

Solving the subproblem (1.6) or (1.7) exactly is typically a difficult task in practice. To overcome this difficulty, He et al. [12], Bnouhachem [13, 14], Bnouhachem and Yuan [15], Bnouhachem and Noor [16, 17], Bnouhachem et al. [18, 19], Noor and Bnouhachem [20], and Xu et al. [21] introduced LQP-based prediction-correction methods which do not suffer from this difficulty and make the LQP method more practical. Each iteration of these methods contains a prediction step and a correction step: the predictor is obtained by solving the LQP system approximately under a significantly relaxed accuracy criterion, and the new iterate is computed either directly by an explicit formula derived from the original LQP method, as in [12], or by using the projection operator, as in [14, 20, 21]. Inspired and motivated by this research, we suggest and analyze a new LQP method for solving the nonlinear complementarity problem (1.1) by applying a new step size $$\alpha_{k}$$ to Bnouhachem’s LQP method [18]. We also study the global convergence of the proposed modified LQP method under some mild conditions.

Throughout this paper we assume that F is co-coercive with modulus $$c > 0$$, that is, $$\langle F(x)-F(y),x-y\rangle\ge c\|F(x)-F(y)\|^{2}$$ for all $$x,y\in R^{n}_{+}$$, and that the solution set of (1.1), denoted by $$\Omega^{*}$$, is nonempty.
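For intuition, co-coercivity can be checked numerically for a concrete mapping. As an illustrative example (not taken from the paper), the affine mapping $$F(x)=Mx+q$$ with M symmetric positive semidefinite is co-coercive with modulus $$c=1/\lambda_{\max}(M)$$; a quick sketch in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
M = A.T @ A                          # symmetric positive semidefinite
q = rng.standard_normal(n)
F = lambda x: M @ x + q              # affine, hence monotone

c = 1.0 / np.linalg.eigvalsh(M)[-1]  # modulus c = 1/lambda_max(M)

# check <F(x)-F(y), x-y> >= c ||F(x)-F(y)||^2 on random pairs
for _ in range(100):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    lhs = (F(x) - F(y)) @ (x - y)
    assert lhs >= c * np.linalg.norm(F(x) - F(y)) ** 2 - 1e-10
```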

## 2 The proposed method and some properties

In this section, we suggest and analyze the new modified LQP method for solving the NCP (1.1). For given $$x^{k}>0$$ and $$\beta_{k}>0$$, each iteration of the proposed method consists of two steps: the first step offers a predictor $$\tilde{x}^{k}$$, and the second step produces the new iterate $$x^{k+1}$$.

Prediction step: Find an approximate solution $$\tilde{x}^{k}$$ of (1.6), called the predictor, such that

$$0\approx\beta_{k}F\bigl(\tilde{x}^{k}\bigr)+ \tilde{x}^{k}-(1-\mu)x^{k}-\mu X_{k}^{2} \bigl(\tilde{x}^{k}\bigr)^{-1}=\xi^{k}:= \beta_{k}\bigl(F\bigl(\tilde{x}^{k}\bigr)-F \bigl(x^{k}\bigr)\bigr)$$
(2.1)

where $$\xi^{k}$$ satisfies

$$\bigl\| \xi^{k}\bigr\| \leq\eta\bigl\| x^{k}- \tilde{x}^{k} \bigr\| ,\quad 0< \eta< 1.$$
(2.2)

Correction step: For $$0< \rho<1$$, the new iterate $$x^{k+1}(\alpha_{k})$$ is defined by

$$x^{k+1}(\alpha_{k})= \rho x^{k}+(1- \rho)P_{R^{n}_{+}} \bigl[x^{k}-\alpha_{k} d \bigl(x^{k},\beta_{k}\bigr) \bigr],$$
(2.3)

where

$$d\bigl(x^{k},\beta_{k}\bigr):= \bigl(x^{k}-\tilde{x}^{k}\bigr)+{\frac{\beta_{k}}{1+\mu}}F\bigl( \tilde{x}^{k}\bigr)$$
(2.4)

and $$\alpha_{k}$$ is a positive scalar. We will discuss how to choose a suitable $$\alpha_{k}$$ later.

### Remark 2.1

Equation (2.1) can be written as

$$\beta_{k} F\bigl(x^{k}\bigr) + \tilde{x}^{k} - (1-\mu)x^{k} - \mu X_{k}^{2} \bigl(\tilde{x}^{k}\bigr)^{-1} = 0,$$
(2.5)

and the solution of (2.5) can be componentwise obtained by

$$\tilde{x}_{j}^{k}=\frac{(1-\mu) x_{j}^{k}-\beta_{k} F_{j} (x^{k})+\sqrt{[(1-\mu)x_{j}^{k}-\beta_{k} F_{j}(x^{k})]^{2}+ 4\mu (x_{j}^{k})^{2}}}{2}.$$
(2.6)

Moreover, for any $$x^{k}>0$$ we always have $$\tilde{x}^{k}> 0$$.
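Formula (2.6) is cheap to evaluate. The following sketch (with illustrative values for $$x^{k}$$, $$F(x^{k})$$, $$\beta_{k}$$, and μ) computes the predictor componentwise and verifies that it solves (2.5) exactly and stays positive:

```python
import numpy as np

def lqp_predictor(x, Fx, beta, mu):
    """Componentwise positive root of (2.5), i.e. formula (2.6)."""
    s = (1 - mu) * x - beta * Fx
    return (s + np.sqrt(s**2 + 4 * mu * x**2)) / 2

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 2.0, size=4)   # current iterate x^k > 0
Fx = rng.standard_normal(4)         # F(x^k) for some mapping F
beta, mu = 0.8, 0.01                # illustrative parameter values

xt = lqp_predictor(x, Fx, beta, mu)
residual = beta * Fx + xt - (1 - mu) * x - mu * x**2 / xt
assert np.all(xt > 0)               # predictor stays positive
assert np.allclose(residual, 0)     # (2.5) is solved exactly
```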

We now consider the criterion for $$\alpha_{k}$$, which ensures that $$x^{k+1}(\alpha_{k})$$ is closer to the solution set than $$x^{k}$$. For this purpose, we define

$$\Theta(\alpha_{k})=\bigl\| x^{k}-x^{*} \bigr\| ^{2}-\bigl\| x^{k+1}(\alpha_{k})-x^{*}\bigr\| ^{2}.$$
(2.7)

### Theorem 2.1

[18]

Let $$x^{*}\in\Omega^{*}$$ and let $$x^{k+1}(\alpha_{k})$$ be defined by (2.3). Then we have

\begin{aligned} \Theta(\alpha_{k}) \geq& (1-\rho)\biggl\{ 2 \alpha_{k}\bigl(x^{k}-\tilde{x}^{k} \bigr)^{T}D\bigl(x^{k},\beta_{k}\bigr) - \alpha_{k}^{2} \bigl(\bigl\| D\bigl(x^{k}, \beta_{k}\bigr)\bigr\| ^{2}+2{D\bigl(x^{k},\beta _{k}\bigr)}^{T}\bigl(x^{k}-\tilde {x}^{k} \bigr) \bigr) \\ &{}+ \biggl({\frac{2\alpha_{k}}{1+\mu}}\biggl(1-\mu-{\frac{\beta _{k}}{4c}}\biggr)- \alpha_{k}^{2} \biggr)\bigl\| x^{k}-\tilde{x}^{k} \bigr\| ^{2} \biggr\} , \end{aligned}
(2.8)

where

$$D\bigl(x^{k},\beta_{k}\bigr):=\bigl(x^{k}- \tilde{x}^{k}\bigr)+{\frac{1}{1+\mu}}\xi^{k}.$$

### Lemma 2.1

[18]

For given $$x^{k}\in R^{n}_{++}$$, let $$\tilde{x}^{k}$$ satisfy condition (2.2). Then we have the following:

\begin{aligned} \bigl(x^{k}-\tilde{x}^{k}\bigr)^{T}D \bigl(x^{k},\beta_{k}\bigr)\geq \biggl({\frac{1-\eta}{1+\mu }} \biggr)\bigl\| x^{k}-\tilde{x}^{k}\bigr\| ^{2} \end{aligned}
(2.9)

and

\begin{aligned} \bigl(x^{k}-\tilde{x}^{k}\bigr)^{T}D \bigl(x^{k},\beta_{k}\bigr)\geq{\frac{1}{2}}\bigl\| D \bigl(x^{k},\beta _{k}\bigr)\bigr\| ^{2}. \end{aligned}
(2.10)

From (2.8), we have

\begin{aligned} \Theta(\alpha_{k}) \geq& (1-\rho)\biggl\{ 2 \alpha_{k}\bigl(x^{k}-\tilde{x}^{k} \bigr)^{T}D\bigl(x^{k},\beta_{k}\bigr) - \alpha_{k}^{2} \bigl(\bigl\| D\bigl(x^{k}, \beta_{k}\bigr)\bigr\| ^{2}+2{D\bigl(x^{k},\beta _{k}\bigr)}^{T}\bigl(x^{k}-\tilde {x}^{k} \bigr) \bigr) \\ &{}+ \biggl({\frac{2\alpha_{k}}{1+\mu}}\biggl(1-\mu-{\frac{\beta _{k}}{4c}}\biggr)- \alpha_{k}^{2} \biggr)\bigl\| x^{k}-\tilde{x}^{k} \bigr\| ^{2} \biggr\} \\ =&(1-\rho)\biggl\{ 2\alpha_{k} \biggl(\bigl(x^{k}- \tilde{x}^{k}\bigr)^{T}D\bigl(x^{k},\beta _{k}\bigr)+{\frac {1}{1+\mu}}\biggl(1-\mu-{\frac{\beta_{k}}{4c}}\biggr) \bigl\| x^{k}-\tilde{x}^{k}\bigr\| ^{2} \biggr) \\ &{}-\alpha_{k}^{2} \bigl(\bigl\| D\bigl(x^{k}, \beta_{k}\bigr)\bigr\| ^{2}+2{D\bigl(x^{k},\beta _{k}\bigr)}^{T}\bigl(x^{k}-\tilde {x}^{k} \bigr)+\bigl\| x^{k}-\tilde{x}^{k}\bigr\| ^{2} \bigr)\biggr\} \\ =&(1-\rho)\biggl\{ 2\alpha_{k} \biggl(\bigl(x^{k}- \tilde{x}^{k}\bigr)^{T}D\bigl(x^{k},\beta _{k}\bigr)+{\frac {1}{1+\mu}}\biggl(1-\mu-{\frac{\beta_{k}}{4c}}\biggr) \bigl\| x^{k}-\tilde{x}^{k}\bigr\| ^{2} \biggr) \\ &{}-\alpha_{k}^{2}\bigl\| D\bigl(x^{k}, \beta_{k}\bigr)+x^{k}-\tilde{x}^{k}\bigr\| ^{2} \biggr\} \\ =&(1-\rho)\Psi(\alpha_{k}), \end{aligned}
(2.11)

where

\begin{aligned} \Psi(\alpha_{k}) :=&2\alpha_{k} \biggl( \bigl(x^{k}-\tilde{x}^{k}\bigr)^{T}D \bigl(x^{k},\beta_{k}\bigr)+ {\frac{1}{1+\mu}}\biggl(1- \mu-{\frac{\beta_{k}}{4c}}\biggr)\bigl\| x^{k}-\tilde{x}^{k}\bigr\| ^{2} \biggr) \\ &{}-\alpha_{k}^{2}\bigl\| D\bigl(x^{k}, \beta_{k}\bigr)+x^{k}-\tilde{x}^{k}\bigr\| ^{2}. \end{aligned}
(2.12)

## 3 Convergence analysis

In this section, we prove some useful results which will be used in the subsequent analysis and then investigate how to choose the new step size $$\alpha_{k}$$.

Note that $$\Psi(\alpha_{k})$$ is a quadratic function of $$\alpha_{k}$$ and it reaches its maximum at

$$\alpha^{*}_{k}={\frac{(x^{k}-\tilde{x}^{k})^{T}D(x^{k},\beta_{k})+{\frac {1}{1+\mu }}(1-\mu-{\frac{\beta_{k}}{4c}})\|x^{k}-\tilde{x}^{k}\|^{2}}{\|D(x^{k},\beta _{k})+x^{k}-\tilde{x}^{k}\|^{2}}}$$
(3.1)

and

\begin{aligned} \Psi\bigl(\alpha^{*}_{k}\bigr) =\alpha^{*}_{k} \biggl(\bigl(x^{k}-\tilde{x}^{k}\bigr)^{T}D \bigl(x^{k},\beta _{k}\bigr)+{\frac{1}{1+\mu}}\biggl(1- \mu-{\frac{\beta_{k}}{4c}}\biggr)\bigl\| x^{k}-\tilde {x}^{k}\bigr\| ^{2} \biggr). \end{aligned}
(3.2)
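To see where (3.1) and (3.2) come from, write $$\Psi(\alpha)=2\alpha N-\alpha^{2}M$$, where N denotes the large bracketed term in the numerator of (3.1) and $$M=\|D(x^{k},\beta_{k})+x^{k}-\tilde{x}^{k}\|^{2}$$; elementary calculus gives the maximizer $$\alpha^{*}=N/M$$ and the maximum value $$\Psi(\alpha^{*})=N^{2}/M=\alpha^{*}N$$, which is exactly (3.2). A minimal numerical check with arbitrary positive values of N and M:

```python
# Psi(alpha) = 2*alpha*N - alpha**2 * M is maximized at alpha* = N/M,
# with Psi(alpha*) = N**2/M = alpha* * N  (this is (3.1)-(3.2)).
N, M = 0.73, 1.9            # arbitrary positive values
psi = lambda a: 2 * a * N - a**2 * M

a_star = N / M
assert abs(psi(a_star) - a_star * N) < 1e-12
# alpha* is indeed a maximizer:
for eps in (1e-3, 0.1, 0.5):
    assert psi(a_star) >= psi(a_star + eps)
    assert psi(a_star) >= psi(a_star - eps)
```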

In the next theorem we show that $$\alpha^{*}_{k}$$ and $$\Psi(\alpha^{*}_{k})$$ are bounded away from zero whenever $$x^{k}\ne\tilde{x}^{k}$$; this is one of the keys to proving the global convergence results.

### Theorem 3.1

For given $$x^{k}\in R^{n}_{++}$$, let $$\tilde{x}^{k}$$ satisfy the condition (2.2) and $$\beta_{k}$$ satisfy

$$0< \beta_{l}\leq\inf_{k=0}^{\infty} \beta_{k}\leq\sup_{k=0}^{\infty} \beta_{k}\leq\beta_{u}< 4c(1-\mu),$$

then we have the following:

$$\alpha^{*}_{k}\geq{\frac{1-\eta}{5-4\eta+\mu}}>0$$
(3.3)

and

$$\Psi\bigl(\alpha^{*}_{k}\bigr)\geq {\frac{(1-\eta)^{2}}{(5-4\eta+\mu)(1+\mu)}} \bigl\| x^{k}-\tilde{x}^{k}\bigr\| ^{2}.$$
(3.4)

### Proof

It follows from (2.9) and (2.10) that

\begin{aligned} \alpha^{*}_{k} =&{\frac{(x^{k}-\tilde{x}^{k})^{T}D(x^{k},\beta_{k})+{\frac {1}{1+\mu }}(1-\mu-{\frac{\beta_{k}}{4c}})\|x^{k}-\tilde{x}^{k}\|^{2}}{\|D(x^{k},\beta _{k})+x^{k}-\tilde{x}^{k}\|^{2}}} \\ \geq&{\frac{(x^{k}-\tilde{x}^{k})^{T}D(x^{k},\beta_{k})}{\|D(x^{k},\beta _{k})+x^{k}-\tilde{x}^{k}\|^{2}}} \\ =&{\frac{(x^{k}-\tilde{x}^{k})^{T}D(x^{k},\beta_{k})}{\|D(x^{k},\beta_{k})\| ^{2}+2{D(x^{k},\beta_{k})}^{T}(x^{k}-\tilde{x}^{k})+\|x^{k}-\tilde{x}^{k}\| ^{2}}} \\ \geq&{\frac{(x^{k}-\tilde{x}^{k})^{T}D(x^{k},\beta_{k})}{ (4+\frac {1+\mu }{1-\eta} ){D(x^{k},\beta_{k})}^{T}(x^{k}-\tilde{x}^{k})}} \\ =&{\frac{1-\eta}{5-4\eta+\mu}}>0. \end{aligned}

Using (3.2), (3.3), and (2.9), we have

\begin{aligned} \Psi\bigl(\alpha^{*}_{k}\bigr) \geq& \biggl({\frac{1-\eta}{5-4\eta+\mu}} \biggr) \biggl({\frac{1-\eta }{1+\mu }}+{\frac{1}{1+\mu}}\biggl(1-\mu-{\frac{\beta_{k}}{4c}} \biggr) \biggr)\bigl\| x^{k}-\tilde {x}^{k}\bigr\| ^{2} \\ \geq&{\frac{(1-\eta)^{2}}{(5-4\eta+\mu)(1+\mu)}}\bigl\| x^{k}-\tilde {x}^{k} \bigr\| ^{2}. \end{aligned}

□

### Remark 3.1

Note that $$\alpha^{*}_{k_{2}}=\min\{(1-\mu-{\frac{\beta_{k}}{4c}})/(1+\mu),{\frac {(x^{k}-\tilde{x}^{k})^{T}D(x^{k},\beta_{k})}{\|D(x^{k},\beta_{k})\|^{2} +2{D(x^{k},\beta_{k})}^{T}(x^{k}-\tilde{x}^{k})}}\}$$ is the optimal step size used in [18]. Since $$\alpha^{*}_{k}$$ maximizes the profit function $$\Psi(\alpha_{k})$$, we have

$$\Psi\bigl(\alpha^{*}_{k}\bigr)\geq\Psi\bigl( \alpha^{*}_{k_{2}}\bigr).$$
(3.5)

Inequality (3.5) shows that the proposed method is expected to make more progress than the method in [18] at each iteration, which explains theoretically why the proposed method outperforms the method in [18].

For fast convergence, we take a relaxation factor $$\gamma\in[1,2)$$ and set the step size $$\alpha_{k}$$ in (2.3) to $$\alpha_{k}=\gamma \alpha^{*}_{k}$$. It then follows from (2.11), (2.12), and Theorem 3.1 that

\begin{aligned}& \begin{aligned}[b] \Theta(\alpha_{k}) &\geq \gamma(2-\gamma) (1-\rho)\Psi \bigl(\alpha^{*}_{k}\bigr)\\ &\geq\gamma(2-\gamma) (1-\rho){\frac{(1-\eta)^{2}}{(5-4\eta+\mu )(1+\mu )}}\bigl\| x^{k}- \tilde{x}^{k}\bigr\| ^{2}. \end{aligned} \end{aligned}
(3.6)

Then, from the definition of $$\Theta(\alpha_{k})$$ and (3.6), there is a constant

$$\tau:=\gamma(2-\gamma) (1-\rho){\frac{(1-\eta)^{2}}{(5-4\eta+\mu )(1+\mu)}}>0$$

such that

$$\bigl\| x^{k+1}(\alpha_{k})-x^{*}\bigr\| ^{2}\leq \bigl\| x^{k}-x^{*}\bigr\| ^{2}-\tau\bigl\| x^{k}-\tilde{x}^{k} \bigr\| ^{2},\quad \forall x^{*}\in\Omega^{*} .$$
(3.7)

The following result can be proved by similar arguments to those in [12, 14, 18, 21]. Hence the proof will be omitted.

### Theorem 3.2

[12, 14, 18, 21]

If $$\inf^{\infty}_{k=0}\beta_{k}=\beta_{l}>0$$, then the sequence $$\{x^{k}\}$$ generated by the proposed method converges to some $$x^{\infty}$$ which is a solution of NCP.

The detailed algorithm is as follows.

• Step 0. Let $$\beta_{0}=1$$, $$\eta:=0.9$$ ($$\eta<1$$), $$0<\rho<1$$, $$0<\mu<1$$, $$\gamma=1.9$$, $$\epsilon=10^{-7}$$, $$k=0$$, and $$x^{0}>0$$.

• Step 1. If $$\|\min(x^{k},F(x^{k}))\|_{\infty}\leq\epsilon$$, then stop. Otherwise, go to Step 2.

• Step 2. (Prediction step)

\begin{aligned}& s : = (1-\mu) x^{k} - \beta_{k} F\bigl(x^{k} \bigr),\qquad \tilde{x}_{i}^{k} := \Bigl(s_{i} + \sqrt{(s_{i})^{2}+4\mu\bigl(x_{i}^{k} \bigr)^{2}} \Bigr)/2, \\& \xi^{k} := \beta_{k} \bigl(F\bigl(\tilde{x}^{k} \bigr) - F\bigl(x^{k}\bigr)\bigr), \qquad r:= \bigl\| \xi^{k} \bigr\| / \bigl\| x^{k}-\tilde{x}^{k}\bigr\| \end{aligned}

while ($$r> \eta$$)

\begin{aligned}& \beta_{k} := \beta_{k}*0.8/r, \\& s: =(1-\mu) x^{k} - \beta_{k} F\bigl(x^{k}\bigr), \qquad \tilde{x}_{i}^{k}: = \Bigl(s_{i} +\sqrt {(s_{i})^{2}+4\mu\bigl(x_{i}^{k} \bigr)^{2}} \Bigr)/2, \\& \xi^{k} := \beta_{k} \bigl(F\bigl(\tilde{x}^{k} \bigr) - F\bigl(x^{k}\bigr)\bigr), \qquad r:= \bigl\| \xi^{k} \bigr\| / \bigl\| x^{k}-\tilde{x}^{k}\bigr\| . \end{aligned}

end while

• Step 3. (Correction step)

\begin{aligned}& D\bigl(x^{k},\beta_{k}\bigr):=\bigl(x^{k}- \tilde{x}^{k}\bigr)+{\frac{1}{1+\mu}}\xi^{k},\qquad d \bigl(x^{k},\beta_{k}\bigr):=\bigl(x^{k}- \tilde{x}^{k}\bigr)+{\frac{\beta_{k}}{1+\mu }}F\bigl(\tilde {x}^{k} \bigr), \\& \alpha^{*}_{k}={\frac{(x^{k}-\tilde {x}^{k})^{T}D(x^{k},\beta _{k})+{\frac{1}{1+\mu}}(1-\mu-{\frac{\beta_{k}}{4c}})\|x^{k}-\tilde {x}^{k}\| ^{2}}{\|D(x^{k},\beta_{k})+x^{k}-\tilde{x}^{k}\|^{2}}},\qquad \alpha_{k}=\gamma \alpha^{*}_{k}, \\& x^{k+1}=\rho x^{k}+(1-\rho)P_{R_{+}^{n}} \bigl[x^{k}-\alpha_{k}d\bigl(x^{k}, \beta_{k}\bigr)\bigr]. \end{aligned}
• Step 4.

$$\beta_{k+1}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{\beta_{k}*0.7}{r}, & \mbox{if }r\leq0.3;\\ \beta_{k},& \mbox{otherwise}. \end{array}\displaystyle \right .$$
• Step 5. $$k:=k+1$$; go to Step 1.
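The five steps above can be sketched in Python (the paper's experiments used Matlab). For illustration we use the simple mapping $$F(x)=x+q$$, which is co-coercive with any modulus $$c\le1$$ and whose NCP solution is $$x^{*}=\max(-q,0)$$ componentwise; the mapping and problem size are our own illustrative choices, while the parameter values follow the algorithm statement. This is a sketch, not the authors' original code.

```python
import numpy as np

def solve_ncp(F, x0, c, mu=0.01, rho=0.01, gamma=1.9, eta=0.9,
              eps=1e-7, max_iter=5000):
    """Prediction-correction LQP method (Steps 0-5) for the NCP (1.1)."""
    x, beta = x0.astype(float), 1.0
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(np.minimum(x, Fx), np.inf) <= eps:      # Step 1
            break
        while True:                                               # Step 2
            s = (1 - mu) * x - beta * Fx
            xt = (s + np.sqrt(s**2 + 4 * mu * x**2)) / 2          # (2.6)
            xi = beta * (F(xt) - Fx)
            nrm = np.linalg.norm(x - xt)
            if nrm == 0:
                return x                    # x already solves (1.6)
            r = np.linalg.norm(xi) / nrm
            if r <= eta:
                break
            beta *= 0.8 / r
        D = (x - xt) + xi / (1 + mu)                              # Step 3
        d = (x - xt) + beta * F(xt) / (1 + mu)
        num = (x - xt) @ D + (1 - mu - beta / (4 * c)) / (1 + mu) * nrm**2
        alpha = gamma * num / np.linalg.norm(D + x - xt) ** 2     # (3.1)
        x = rho * x + (1 - rho) * np.maximum(x - alpha * d, 0.0)  # (2.3)
        if r <= 0.3:                                              # Step 4
            beta *= 0.7 / r
    return x

q = np.array([-2.0, 1.0, -0.5, 3.0])
x = solve_ncp(lambda v: v + q, x0=np.ones(4), c=0.9)
```

On this toy problem the iterates converge to $$\max(-q,0)$$ componentwise, satisfying the complementarity conditions (1.1).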

## 4 Preliminary computational results

In this section, we consider two examples to illustrate the efficiency and the performance of the proposed algorithm.

### 4.1 Numerical experiments I

We consider the nonlinear complementarity problems

$$x\ge0, \quad F(x)\ge0, \qquad x^{T}F(x)=0,$$
(4.1)

where

$$F(x)=D(x)+Mx+q,$$

$$D(x)$$ and $$Mx+q$$ are the nonlinear part and linear part of $$F(x)$$, respectively.

We form the linear part in the test problems similarly to Harker and Pang [4]. The matrix $$M=A^{T}A +B$$, where A is an $$n\times n$$ matrix whose entries are randomly generated in the interval $$(-5,+5)$$, and a skew-symmetric matrix B is generated in the same way. The vector q is generated from a uniform distribution in the interval $$(-500,500)$$ or in $$(-500,0)$$. In $$D(x)$$, the nonlinear part of $$F(x)$$, the components are chosen to be $$D_{j} (x)= d_{j} *\arctan(x_{j})$$, where $$d_{j}$$ is a random variable in $$(0,1)$$.
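The construction above can be sketched as follows (the skew-symmetric matrix is built here as $$C-C^{T}$$ from a random C, one common way to generate it; function and variable names are our own):

```python
import numpy as np

def make_test_problem(n, q_interval=(-500, 500), seed=0):
    """Build F(x) = D(x) + M x + q with M = A^T A + B, B skew-symmetric."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-5, 5, (n, n))
    C = rng.uniform(-5, 5, (n, n))
    B = C - C.T                          # skew-symmetric matrix
    M = A.T @ A + B                      # PSD part plus skew part
    q = rng.uniform(*q_interval, n)
    d = rng.uniform(0, 1, n)
    F = lambda x: d * np.arctan(x) + M @ x + q   # D(x) + Mx + q
    return F, M, q

F, M, q = make_test_problem(100)
```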

In all tests we take the logarithmic proximal parameter $$\mu=0.01$$, $$\rho=0.01$$, and $$c=0.9$$. All iterations start with $$x^{0}=(1,\ldots,1)^{T}$$ and $$\beta_{0}=1$$, and the stopping criterion is

$$\bigl\| \min\bigl(x^{k},F\bigl(x^{k}\bigr)\bigr)\bigr\| _{\infty}\leq10^{-7} .$$

All codes were written in Matlab, and we compare the proposed method with that in [18]. The test results for problem (4.1) are reported in Tables 1 and 2, where k is the number of iterations and l denotes the number of evaluations of the mapping F.

Tables 1 and 2 show that the proposed method is more efficient. The numerical results indicate that the proposed method saves about 49-65 percent of the iterations and about 48-63 percent of the evaluations of the function F.

### 4.2 Numerical experiments II

In this subsection, we apply the proposed method to the traffic equilibrium problems and present corresponding numerical results.

Consider a network $$[N,L]$$ of nodes N and directed links L; a path consists of a finite sequence of connecting links with a certain orientation. Let a, b, etc. denote the links, and let p, q, etc. denote the paths. We let ω denote an origin/destination (O/D) pair of nodes of the network and $$P_{\omega}$$ denote the set of all paths connecting the O/D pair ω. Note that the path-arc incidence matrix and the path-O/D pair incidence matrix, denoted by A and B, respectively, are determined by the given network and O/D pairs. To see how to convert a traffic equilibrium problem into a variational inequality, we consider the simple example depicted in Figure 1.

For the given example in Figure 1, the path-arc incidence matrix A and the path-O/D pair incidence matrix B have the following forms:

$$A = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{pmatrix},$$

where the columns of A correspond to links 1-5, the columns of B correspond to the O/D pairs $$\omega_{1}$$ and $$\omega_{2}$$, and the rows of both matrices correspond to the paths.

Let $$x_{p}$$ represent the traffic flow on path p and $$f_{a}$$ denote the link load on link a, then the arc-flow vector f is given by

$$f=A^{T} x.$$

Let $$d_{\omega}$$ denote the traffic amount between O/D pair ω, which must satisfy

$$d_{\omega} = \sum_{p\in P_{\omega}} x_{p}.$$

Thus, the O/D pair-traffic amount vector d is given by

$$d= B^{T} x.$$

Let $$t(f)=\{ t_{a}, a\in L\}$$ be the vector of link travel costs, which is a function of the link flow. A user traveling on path p incurs a (path) travel cost $$\theta_{p}$$. For given link travel cost vector t, the path travel cost vector θ is given by

$$\theta= A t(f) \quad\mbox{and thus}\quad \theta(x) = A t\bigl(A^{T} x \bigr).$$

Associated with every O/D pair ω, there is a travel disutility $$\lambda_{\omega}(d)$$. Since both the path costs and the travel disutilities are functions of the flow pattern x, the traffic network equilibrium problem is to seek the path flow pattern $$x^{*}$$ such that

$$x^{*} \geq0,\quad \bigl(x - x^{*}\bigr)^{T} F\bigl(x^{*}\bigr) \ge0,\quad \forall x\geq0,$$
(4.2)

where

$$F_{p}(x)= \theta_{p}(x) - \lambda_{\omega}\bigl(d(x) \bigr),\quad \forall \omega, p\in P_{\omega},$$

and thus

$$F(x) = At\bigl(A^{T}x\bigr) - B\lambda\bigl(B^{T}x\bigr).$$
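Putting the pieces together for the Figure 1 network: with the incidence matrices above and illustrative affine choices for the link cost t and the disutility λ (the actual functions used in the experiments come from [22] and Tables 3 and 4; the coefficients below are hypothetical), the mapping F of (4.2) can be assembled as:

```python
import numpy as np

# Path-arc (A) and path-O/D pair (B) incidence matrices from Figure 1
A = np.array([[0, 0, 1, 0, 0],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 1, 0],
              [0, 1, 0, 0, 1]], dtype=float)
B = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]], dtype=float)

# Illustrative (hypothetical) affine cost and disutility functions
t = lambda f: 1.0 + 0.5 * f        # link travel cost t(f)
lam = lambda d: -0.2 * d + 10.0    # disutility, cf. (4.3)

def F(x):
    """F(x) = A t(A^T x) - B lambda(B^T x), cf. (4.2)."""
    return A @ t(A.T @ x) - B @ lam(B.T @ x)
```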

We apply the proposed method to the example taken from [22] (Example 7.5 in [22]), which consists of 25 nodes, 37 links, and six O/D pairs. The network is depicted in Figure 2.

For this example, there are 55 paths in total for the six given O/D pairs, and hence the dimension of the variable x is 55. Therefore, the path-arc incidence matrix A is a $$55\times37$$ matrix and the path-O/D pair incidence matrix B is a $$55\times6$$ matrix. The user cost of traversing link a is given in Table 3. The disutility function is given by

$$\lambda_{\omega}(d) = -m_{\omega}d_{\omega} + q_{\omega}$$
(4.3)

and the coefficients $$m_{\omega}$$ and $$q_{\omega}$$ in the disutility function of different O/D pairs for this example are given in Table 4.

The test results for problem (4.2) for different ε are reported in Table 5, where k is the number of iterations and l denotes the number of evaluations of the mapping F. The stopping criterion is

$$\frac{\|\min\{x,F(x)\}\|_{\infty}}{\|\min\{x^{0},F(x^{0})\}\|_{\infty}} \le\varepsilon.$$

Table 5 shows that the new method is more flexible and efficient for solving a traffic equilibrium problem. Moreover, it demonstrates computationally that the new method is more effective than the method presented in [18], in the sense that the new method needs fewer iterations and fewer evaluations of F, which clearly illustrates its efficiency.

## References

1. Lemke, CE: Bimatrix equilibrium points and mathematical programming. Manag. Sci. 11, 681-689 (1965)

2. Cottle, RW, Dantzig, GB: Complementary pivot theory of mathematical programming. Linear Algebra Appl. 1, 103-125 (1968)

3. Ferris, MC, Pang, JS: Engineering and economic applications of complementarity problems. SIAM Rev. 39, 669-713 (1997)

4. Harker, PT, Pang, JS: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math. Program. 48, 161-220 (1990)

5. Martinet, B: Détermination approchée d’un point fixe d’une application pseudo-contractante. C. R. Acad. Sci. Paris 274, 163-165 (1972)

6. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)

7. Eckstein, J: Approximate iterations in Bregman-function-based proximal algorithms. Math. Program. 83, 113-123 (1998)

8. Güler, O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403-419 (1991)

9. Teboulle, M: Convergence of proximal-like algorithms. SIAM J. Optim. 7, 1069-1083 (1997)

10. Auslender, A, Teboulle, M, Ben-Tiba, S: A logarithmic-quadratic proximal method for variational inequalities. Comput. Optim. Appl. 12, 31-40 (1999)

11. Auslender, A, Teboulle, M, Ben-Tiba, S: Interior proximal and multiplier methods based on second order homogeneous kernels. Math. Oper. Res. 24, 646-668 (1999)

12. He, BS, Liao, LZ, Yuan, XM: A LQP based interior prediction-correction method for nonlinear complementarity problems. J. Comput. Math. 24(1), 33-44 (2006)

13. Bnouhachem, A: A new inexactness criterion for approximate logarithmic-quadratic proximal methods. Numer. Math., Theory Methods Appl. 15(1), 74-81 (2006)

14. Bnouhachem, A: An LQP method for pseudomonotone variational inequalities. J. Glob. Optim. 36(3), 351-363 (2006)

15. Bnouhachem, A, Yuan, XM: An extended LQP method for monotone nonlinear complementarity problems. J. Optim. Theory Appl. 135(3), 343-353 (2007)

16. Bnouhachem, A, Noor, MA: A new predictor-corrector method for pseudomonotone nonlinear complementarity problems. Int. J. Comput. Math. 85, 1023-1038 (2008)

17. Bnouhachem, A, Noor, MA: An interior proximal point algorithm for nonlinear complementarity problems. Nonlinear Anal. Hybrid Syst. 4(3), 371-380 (2010)

18. Bnouhachem, A, Noor, MA, Khalfaoui, M, Zhaohan, S: An approximate proximal point algorithm for nonlinear complementarity problems. Hacet. J. Math. Stat. 41(1), 103-117 (2012)

19. Bnouhachem, A, Noor, MA, Khalfaoui, M, Zhaohan, S: A new logarithmic-quadratic proximal method for nonlinear complementarity problems. Appl. Math. Comput. 215, 695-706 (2009)

20. Noor, MA, Bnouhachem, A: Modified proximal point methods for nonlinear complementarity problems. J. Comput. Appl. Math. 197, 395-405 (2006)

21. Xu, Y, He, BS, Yuan, X: A hybrid inexact logarithmic-quadratic proximal method for nonlinear complementarity problems. J. Math. Anal. Appl. 322, 276-287 (2006)

22. Nagurney, A, Zhang, D: Projected Dynamical Systems and Variational Inequalities with Applications. Kluwer Academic, Dordrecht (1996)

## Acknowledgements

The authors are very grateful to the referees for their careful reading, comments, and suggestions, which helped us improve the presentation of this paper.

## Author information

Authors

### Corresponding author

Correspondence to Abdellah Bnouhachem.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors have made equal contributions. All authors read and approved the final manuscript.

## Rights and permissions

Reprints and permissions

Ou-yassine, A., Bnouhachem, A. & Benssi, F. LQP method with a new optimal step size rule for nonlinear complementarity problems. J Inequal Appl 2015, 216 (2015). https://doi.org/10.1186/s13660-015-0733-1