Open Access

LQP method with a new optimal step size rule for nonlinear complementarity problems

Journal of Inequalities and Applications 2015, 2015:216

https://doi.org/10.1186/s13660-015-0733-1

Received: 20 April 2015

Accepted: 12 June 2015

Published: 1 July 2015

Abstract

Inspired and motivated by results of Bnouhachem et al. (Hacet. J. Math. Stat. 41(1):103-117, 2012), we propose a new modified LQP method by using a new optimal step size, where the underlying function F is co-coercive. Under some mild conditions, we show that the method is globally convergent. Some preliminary computational results are given to illustrate the efficiency of the proposed method.

Keywords

nonlinear complementarity problems; co-coercive operator; logarithmic-quadratic proximal method

1 Introduction

The nonlinear complementarity problem (NCP) is to determine a vector \(x \in R^{n} \) such that
$$ x\geq0,\quad F(x)\geq0 \quad \mbox{and} \quad x^{T} F(x)= 0, $$
(1.1)
where F is a nonlinear mapping from \(R^{n}\) into itself. Complementarity problems, introduced by Lemke [1] and Cottle and Dantzig [2] in the early 1960s, have attracted great attention from researchers (see, e.g., [3, 4] and the references therein). On the one hand, there have been many theoretical results on the existence of solutions and their structural properties. On the other hand, many attempts have been made to develop implementable algorithms for the solution of the NCP. A popular way to solve the NCP is to reformulate it as finding a zero point of the operator \(T(x)=F(x)+N_{R^{n}_{+}}(x)\), i.e., find \(x^{*}\in R^{n}_{+}\) such that \(0\in T(x^{*})\), where \(N_{R^{n}_{+}}(\cdot)\) is the normal cone operator to \(R^{n}_{+}\) defined by
$$N_{R^{n}_{+}}(x)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \{y\in R^{n}:y^{T}(v-x)\le0, \forall v\in R^{n}_{+}\} &\mbox{if } x\in R^{n}_{+},\\ \emptyset &\mbox{otherwise}. \end{array}\displaystyle \right . $$
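A vector solves (1.1) exactly when the natural residual \(\min(x,F(x))\) vanishes componentwise, and \(\|\min(x,F(x))\|_{\infty}\) is the error measure used in the numerical sections below. A minimal Python sketch (the mapping F here is illustrative, not one from the text):

```python
import numpy as np

# x solves the NCP (1.1) exactly when min(x, F(x)) = 0 componentwise;
# the experiments later in the paper use the infinity norm of this
# residual as the stopping test. F below is an illustrative mapping.

def ncp_residual(x, F):
    """Infinity norm of the natural residual min(x, F(x))."""
    return np.max(np.abs(np.minimum(x, F(x))))

F = lambda x: x - 1.0   # illustrative: the solution is x = (1, ..., 1)
print(ncp_residual(np.array([1.0, 1.0]), F))   # residual 0 at the solution
```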
The proximal point algorithm (PPA), proposed by Martinet [5] and further studied by Rockafellar [6], is recognized as a powerful and successful algorithm for finding zeros of maximal monotone operators. Starting from any initial \(x^{0}\in R^{n}\) and given positive reals \(\beta_{k}\geq\beta>0\), the PPA iteratively updates \(x^{k+1}\) by solving the following problem:
$$ 0\in\beta_{k}T(x)+\nabla_{x}q \bigl(x,x^{k}\bigr), $$
(1.2)
where
$$ q\bigl(x,x^{k}\bigr) = \frac{1}{2} \bigl\| x - x^{k}\bigr\| ^{2}, $$
(1.3)
is a quadratic function of x. In place of the usual quadratic term, many researchers have used nonlinear functions \(r(x,x^{k})\); see, for example, [6-9]. Auslender et al. [10, 11] proposed a new type of proximal interior method by replacing the second term of (1.2) by
$$ x-(1-\mu)x^{k}-\mu X^{2}_{k}{x}^{-1} $$
(1.4)
or
$$ x-{x}^{k}+\mu X_{k}\log\biggl(\frac{x}{x^{k}}\biggr), $$
(1.5)
where \(\mu\in(0,1)\) is a given constant, \(X_{k}=\operatorname{diag}(x^{k}_{1},x^{k}_{2},\ldots,x^{k}_{n})\), and \({x}^{-1}\) is an n-vector whose jth element is \(\frac{1}{x_{j}}\). It is easy to see that, at the kth iteration, solving (1.1) by the LQP method is equivalent to solving the following system of nonlinear equations:
$$ \beta_{k}F(x)+x-(1-\mu)x^{k}-\mu X^{2}_{k}{x}^{-1}=0 $$
(1.6)
or
$$ \beta_{k}F(x)+x-x^{k}+\mu X_{k}\log \biggl(\frac{x}{x^{k}}\biggr)=0. $$
(1.7)
Solving the subproblem (1.6) or (1.7) exactly is typically too demanding in practice. To overcome this difficulty, He et al. [12], Bnouhachem [13, 14], Bnouhachem and Yuan [15], Bnouhachem and Noor [16, 17], Bnouhachem et al. [18, 19], Noor and Bnouhachem [20], and Xu et al. [21] introduced LQP-based prediction-correction methods which do not suffer from this difficulty and make the LQP method more practical. Each iteration of these methods contains a prediction and a correction: the predictor is obtained by solving the LQP system approximately under a significantly relaxed accuracy criterion, and the new iterate is computed directly by an explicit formula derived from the original LQP method in [12], while in [14, 20, 21] the new iterate is computed by using the projection operator. Inspired and motivated by this research, we suggest and analyze a new LQP method for solving the nonlinear complementarity problem (1.1) by applying a new step size \(\alpha_{k}\) to Bnouhachem’s LQP method [18]. We also study the global convergence of the proposed modified LQP method under some mild conditions.

Throughout this paper we assume that F is co-coercive with modulus \(c > 0\), that is, \(\langle F(x)-F(y),x-y\rangle\ge c\|F(x)-F(y)\|^{2}\) for all \(x,y\in R^{n}_{+}\), and that the solution set of (1.1), denoted by \(\Omega^{*}\), is nonempty.

2 The proposed method and some properties

In this section, we suggest and analyze the new modified LQP method for solving NCP (1.1). For given \(x^{k}>0\) and \(\beta_{k}>0\), each iteration of the proposed method consists of two steps: the first step offers a predictor \(\tilde{x}^{k}\) and the second step produces the new iterate \(x^{k+1}\).

Prediction step: Find an approximate solution \(\tilde{x}^{k}\) of (1.6), called predictor, such that
$$ 0\approx\beta_{k}F\bigl(\tilde{x}^{k}\bigr)+ \tilde{x}^{k}-(1-\mu)x^{k}-\mu X_{k}^{2} \bigl(\tilde{x}^{k}\bigr)^{-1}=\xi^{k}:= \beta_{k}\bigl(F\bigl(\tilde{x}^{k}\bigr)-F \bigl(x^{k}\bigr)\bigr) $$
(2.1)
where the residual \(\xi^{k}\) satisfies
$$ \bigl\| \xi^{k}\bigr\| \leq\eta\bigl\| x^{k}- \tilde{x}^{k} \bigr\| ,\quad 0< \eta< 1. $$
(2.2)
Correction step: For \(0< \rho<1\), the new iterate \(x^{k+1}(\alpha_{k})\) is defined by
$$ x^{k+1}(\alpha_{k})= \rho x^{k}+(1- \rho)P_{R^{n}_{+}} \bigl[x^{k}-\alpha_{k} d \bigl(x^{k},\beta_{k}\bigr) \bigr], $$
(2.3)
where
$$ d\bigl(x^{k},\beta_{k}\bigr):= \bigl(x^{k}-\tilde{x}^{k}\bigr)+{\frac{\beta_{k}}{1+\mu}}F\bigl( \tilde{x}^{k}\bigr) $$
(2.4)
and \(\alpha_{k}\) is a positive scalar; how to choose a suitable \(\alpha_{k}\) will be discussed later.

Remark 2.1

Equation (2.1) can be written as
$$ \beta_{k} F\bigl(x^{k}\bigr) + \tilde{x}^{k} - (1-\mu)x^{k} - \mu X_{k}^{2} \bigl(\tilde{x}^{k}\bigr)^{-1} = 0, $$
(2.5)
and the solution of (2.5) can be componentwise obtained by
$$ \tilde{x}_{j}^{k}=\frac{(1-\mu) x_{j}^{k}-\beta_{k} F_{j} (x^{k})+\sqrt{[(1-\mu)x_{j}^{k}-\beta_{k} F_{j}(x^{k})]^{2}+ 4\mu (x_{j}^{k})^{2}}}{2}. $$
(2.6)
Moreover, for any \(x^{k}>0\) we always have \(\tilde{x}^{k}> 0\).
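The closed-form predictor (2.6) is easy to sketch in code; the numbers below are illustrative, and the comments record the two properties stated in the remark, namely that \(\tilde{x}^{k}\) solves (2.5) and stays positive:

```python
import numpy as np

# A sketch of the prediction step: tilde{x}^k is the positive root of
# the componentwise quadratic obtained from (2.5). beta and mu are as
# in the text; x and F(x) below are illustrative data.

def lqp_predictor(x, Fx, beta, mu):
    s = (1.0 - mu) * x - beta * Fx
    return 0.5 * (s + np.sqrt(s**2 + 4.0 * mu * x**2))

x = np.array([1.0, 2.0])
Fx = np.array([0.5, -1.0])
xt = lqp_predictor(x, Fx, beta=1.0, mu=0.01)
# xt > 0, and beta*F(x) + xt - (1-mu)*x - mu*x**2/xt = 0, i.e. (2.5) holds
```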
We now consider the criterion for \(\alpha_{k}\), which ensures that \(x^{k+1}(\alpha_{k})\) is closer to the solution set than \(x^{k}\). For this purpose, we define
$$ \Theta(\alpha_{k})=\bigl\| x^{k}-x^{*} \bigr\| ^{2}-\bigl\| x^{k+1}(\alpha_{k})-x^{*}\bigr\| ^{2}. $$
(2.7)

Theorem 2.1

[18]

Let \(x^{*}\in\Omega^{*}\), \(x^{k+1}(\alpha_{k})\) be defined by (2.3), then we have
$$\begin{aligned} \Theta(\alpha_{k}) \geq& (1-\rho)\biggl\{ 2 \alpha_{k}\bigl(x^{k}-\tilde{x}^{k} \bigr)^{T}D\bigl(x^{k},\beta_{k}\bigr) - \alpha_{k}^{2} \bigl(\bigl\| D\bigl(x^{k}, \beta_{k}\bigr)\bigr\| ^{2}+2{D\bigl(x^{k},\beta _{k}\bigr)}^{T}\bigl(x^{k}-\tilde {x}^{k} \bigr) \bigr) \\ &{}+ \biggl({\frac{2\alpha_{k}}{1+\mu}}\biggl(1-\mu-{\frac{\beta _{k}}{4c}}\biggr)- \alpha_{k}^{2} \biggr)\bigl\| x^{k}-\tilde{x}^{k} \bigr\| ^{2} \biggr\} , \end{aligned}$$
(2.8)
where
$$D\bigl(x^{k},\beta_{k}\bigr):=\bigl(x^{k}- \tilde{x}^{k}\bigr)+{\frac{1}{1+\mu}}\xi^{k}. $$

Lemma 2.1

[18]

For given \(x^{k}\in R^{n}_{++}\), let \(\tilde{x}^{k}\) satisfy the condition (2.2), then we have the following:
$$\begin{aligned} \bigl(x^{k}-\tilde{x}^{k}\bigr)^{T}D \bigl(x^{k},\beta_{k}\bigr)\geq \biggl({\frac{1-\eta}{1+\mu }} \biggr)\bigl\| x^{k}-\tilde{x}^{k}\bigr\| ^{2} \end{aligned}$$
(2.9)
and
$$\begin{aligned} \bigl(x^{k}-\tilde{x}^{k}\bigr)^{T}D \bigl(x^{k},\beta_{k}\bigr)\geq{\frac{1}{2}}\bigl\| D \bigl(x^{k},\beta _{k}\bigr)\bigr\| ^{2}. \end{aligned}$$
(2.10)
From (2.8), we have
$$\begin{aligned} \Theta(\alpha_{k}) \geq& (1-\rho)\biggl\{ 2 \alpha_{k}\bigl(x^{k}-\tilde{x}^{k} \bigr)^{T}D\bigl(x^{k},\beta_{k}\bigr) - \alpha_{k}^{2} \bigl(\bigl\| D\bigl(x^{k}, \beta_{k}\bigr)\bigr\| ^{2}+2{D\bigl(x^{k},\beta _{k}\bigr)}^{T}\bigl(x^{k}-\tilde {x}^{k} \bigr) \bigr) \\ &{}+ \biggl({\frac{2\alpha_{k}}{1+\mu}}\biggl(1-\mu-{\frac{\beta _{k}}{4c}}\biggr)- \alpha_{k}^{2} \biggr)\bigl\| x^{k}-\tilde{x}^{k} \bigr\| ^{2} \biggr\} \\ =&(1-\rho)\biggl\{ 2\alpha_{k} \biggl(\bigl(x^{k}- \tilde{x}^{k}\bigr)^{T}D\bigl(x^{k},\beta _{k}\bigr)+{\frac {1}{1+\mu}}\biggl(1-\mu-{\frac{\beta_{k}}{4c}}\biggr) \bigl\| x^{k}-\tilde{x}^{k}\bigr\| ^{2} \biggr) \\ &{}-\alpha_{k}^{2} \bigl(\bigl\| D\bigl(x^{k}, \beta_{k}\bigr)\bigr\| ^{2}+2{D\bigl(x^{k},\beta _{k}\bigr)}^{T}\bigl(x^{k}-\tilde {x}^{k} \bigr)+\bigl\| x^{k}-\tilde{x}^{k}\bigr\| ^{2} \bigr)\biggr\} \\ =&(1-\rho)\biggl\{ 2\alpha_{k} \biggl(\bigl(x^{k}- \tilde{x}^{k}\bigr)^{T}D\bigl(x^{k},\beta _{k}\bigr)+{\frac {1}{1+\mu}}\biggl(1-\mu-{\frac{\beta_{k}}{4c}}\biggr) \bigl\| x^{k}-\tilde{x}^{k}\bigr\| ^{2} \biggr) \\ &{}-\alpha_{k}^{2}\bigl\| D\bigl(x^{k}, \beta_{k}\bigr)+x^{k}-\tilde{x}^{k}\bigr\| ^{2} \biggr\} \\ =&(1-\rho)\Psi(\alpha_{k}), \end{aligned}$$
(2.11)
where
$$\begin{aligned} \Psi(\alpha_{k}) :=&2\alpha_{k} \biggl( \bigl(x^{k}-\tilde{x}^{k}\bigr)^{T}D \bigl(x^{k},\beta_{k}\bigr)+ {\frac{1}{1+\mu}}\biggl(1- \mu-{\frac{\beta_{k}}{4c}}\biggr)\bigl\| x^{k}-\tilde{x}^{k}\bigr\| ^{2} \biggr) \\ &{}-\alpha_{k}^{2}\bigl\| D\bigl(x^{k}, \beta_{k}\bigr)+x^{k}-\tilde{x}^{k}\bigr\| ^{2}. \end{aligned}$$
(2.12)

3 Convergence analysis

In this section, we prove some useful results which will be used in the subsequent analysis and then investigate the strategy for choosing the new step size \(\alpha_{k}\).

Note that \(\Psi(\alpha_{k})\) is a quadratic function of \(\alpha_{k}\) and it reaches its maximum at
$$ \alpha^{*}_{k}={\frac{(x^{k}-\tilde{x}^{k})^{T}D(x^{k},\beta_{k})+{\frac {1}{1+\mu }}(1-\mu-{\frac{\beta_{k}}{4c}})\|x^{k}-\tilde{x}^{k}\|^{2}}{\|D(x^{k},\beta _{k})+x^{k}-\tilde{x}^{k}\|^{2}}} $$
(3.1)
and
$$\begin{aligned} \Psi\bigl(\alpha^{*}_{k}\bigr) =\alpha^{*}_{k} \biggl(\bigl(x^{k}-\tilde{x}^{k}\bigr)^{T}D \bigl(x^{k},\beta _{k}\bigr)+{\frac{1}{1+\mu}}\biggl(1- \mu-{\frac{\beta_{k}}{4c}}\biggr)\bigl\| x^{k}-\tilde {x}^{k}\bigr\| ^{2} \biggr). \end{aligned}$$
(3.2)
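The quantities in (2.12) and (3.1) translate directly into code. The sketch below uses illustrative vectors (with \(e=x^{k}-\tilde{x}^{k}\) and \(D=D(x^{k},\beta_{k})\)) and checks numerically that \(\alpha^{*}_{k}\) is indeed the maximizer of the concave quadratic Ψ:

```python
import numpy as np

# A sketch of the profit function (2.12) and its maximizer (3.1).
# e plays the role of x^k - tilde{x}^k, D that of D(x^k, beta_k);
# the vectors and parameter values below are illustrative.

def psi(alpha, e, D, beta, mu, c):
    num = e @ D + (1.0 - mu - beta / (4.0 * c)) * (e @ e) / (1.0 + mu)
    return 2.0 * alpha * num - alpha**2 * np.dot(D + e, D + e)

def optimal_step(e, D, beta, mu, c):
    num = e @ D + (1.0 - mu - beta / (4.0 * c)) * (e @ e) / (1.0 + mu)
    return num / np.dot(D + e, D + e)

beta, mu, c = 1.0, 0.01, 0.9
e = np.array([1.0, -2.0, 0.5])
xi = 0.5 * e                     # satisfies (2.2) with eta = 0.5
D = e + xi / (1.0 + mu)
a = optimal_step(e, D, beta, mu, c)
# Psi is concave in alpha, so Psi(a) >= Psi(alpha) for every other alpha
```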
In the next theorem we show that \(\alpha^{*}_{k}\) and \(\Psi(\alpha^{*}_{k})\) are bounded away from zero whenever \(x^{k}\ne\tilde{x}^{k}\); this is one of the keys to proving the global convergence results.

Theorem 3.1

For given \(x^{k}\in R^{n}_{++}\), let \(\tilde{x}^{k}\) satisfy the condition (2.2) and \(\beta_{k}\) satisfy
$$0< \beta_{l}\leq\inf_{k=0}^{\infty} \beta_{k}\leq\sup_{k=0}^{\infty} \beta_{k}\leq\beta_{u}< 4c(1-\mu), $$
then we have the following:
$$ \alpha^{*}_{k}\geq{\frac{1-\eta}{5-4\eta+\mu}}>0 $$
(3.3)
and
$$ \Psi\bigl(\alpha^{*}_{k}\bigr)\geq {\frac{(1-\eta)^{2}}{(5-4\eta+\mu)(1+\mu)}} \bigl\| x^{k}-\tilde{x}^{k}\bigr\| ^{2}. $$
(3.4)

Proof

It follows from (2.9) and (2.10) that
$$\begin{aligned} \alpha^{*}_{k} =&{\frac{(x^{k}-\tilde{x}^{k})^{T}D(x^{k},\beta_{k})+{\frac {1}{1+\mu }}(1-\mu-{\frac{\beta_{k}}{4c}})\|x^{k}-\tilde{x}^{k}\|^{2}}{\|D(x^{k},\beta _{k})+x^{k}-\tilde{x}^{k}\|^{2}}} \\ \geq&{\frac{(x^{k}-\tilde{x}^{k})^{T}D(x^{k},\beta_{k})}{\|D(x^{k},\beta _{k})+x^{k}-\tilde{x}^{k}\|^{2}}} \\ =&{\frac{(x^{k}-\tilde{x}^{k})^{T}D(x^{k},\beta_{k})}{\|D(x^{k},\beta_{k})\| ^{2}+2{D(x^{k},\beta_{k})}^{T}(x^{k}-\tilde{x}^{k})+\|x^{k}-\tilde{x}^{k}\| ^{2}}} \\ \geq&{\frac{(x^{k}-\tilde{x}^{k})^{T}D(x^{k},\beta_{k})}{ (4+\frac {1+\mu }{1-\eta} ){D(x^{k},\beta_{k})}^{T}(x^{k}-\tilde{x}^{k})}} \\ =&{\frac{1-\eta}{5-4\eta+\mu}}>0. \end{aligned}$$
Using (3.2), (3.3), and (2.9), we have
$$\begin{aligned} \Psi\bigl(\alpha^{*}_{k}\bigr) \geq& \biggl({\frac{1-\eta}{5-4\eta+\mu}} \biggr) \biggl({\frac{1-\eta }{1+\mu }}+{\frac{1}{1+\mu}}\biggl(1-\mu-{\frac{\beta_{k}}{4c}} \biggr) \biggr)\bigl\| x^{k}-\tilde {x}^{k}\bigr\| ^{2} \\ \geq&{\frac{(1-\eta)^{2}}{(5-4\eta+\mu)(1+\mu)}}\bigl\| x^{k}-\tilde {x}^{k} \bigr\| ^{2}. \end{aligned}$$
 □

Remark 3.1

Note that \(\alpha^{*}_{k_{2}}=\min\{(1-\mu-{\frac{\beta_{k}}{4c}})/(1+\mu),{\frac {(x^{k}-\tilde{x}^{k})^{T}D(x^{k},\beta_{k})}{\|D(x^{k},\beta_{k})\|^{2} +2{D(x^{k},\beta_{k})}^{T}(x^{k}-\tilde{x}^{k})}}\} \) is the optimal step size used in [18]. Since \(\alpha^{*}_{k}\) maximizes the profit function \(\Psi(\alpha_{k})\), we have
$$ \Psi\bigl(\alpha^{*}_{k}\bigr)\geq\Psi\bigl( \alpha^{*}_{k_{2}}\bigr). $$
(3.5)

Inequality (3.5) shows that the proposed method is expected to make more progress than the method in [18] at each iteration, and thus it explains theoretically why the proposed method outperforms the method in [18].

For fast convergence, we take a relaxation factor \(\gamma\in[1,2)\) and set the step size \(\alpha_{k}\) in (2.3) to \(\alpha_{k}=\gamma \alpha^{*}_{k}\); it then follows from (2.11), (2.12), and Theorem 3.1 that
$$\begin{aligned}& \begin{aligned}[b] \Theta(\alpha_{k}) &\geq \gamma(2-\gamma) (1-\rho)\Psi \bigl(\alpha^{*}_{k}\bigr)\\ &\geq\gamma(2-\gamma) (1-\rho){\frac{(1-\eta)^{2}}{(5-4\eta+\mu )(1+\mu )}}\bigl\| x^{k}- \tilde{x}^{k}\bigr\| ^{2}. \end{aligned} \end{aligned}$$
(3.6)
Then, from the definition of \(\Theta(\alpha_{k})\) and (3.6), there is a constant
$$\tau:=\gamma(2-\gamma) (1-\rho){\frac{(1-\eta)^{2}}{(5-4\eta+\mu )(1+\mu)}}>0 $$
such that
$$ \bigl\| x^{k+1}(\alpha_{k})-x^{*}\bigr\| ^{2}\leq \bigl\| x^{k}-x^{*}\bigr\| ^{2}-\tau\bigl\| x^{k}-\tilde{x}^{k} \bigr\| ^{2},\quad \forall x^{*}\in\Omega^{*} . $$
(3.7)

The following result can be proved by arguments similar to those in [12, 14, 18, 21]; hence the proof is omitted.

Theorem 3.2

[12, 14, 18, 21]

If \(\inf^{\infty}_{k=0}\beta_{k}=\beta_{l}>0\), then the sequence \(\{x^{k}\}\) generated by the proposed method converges to some \(x^{\infty}\), which is a solution of the NCP.

The detailed algorithm is as follows.
  • Step 0. Let \(\beta_{0}=1\), \(\eta=0.9\), \(0<\rho<1\), \(0<\mu<1\), \(\gamma=1.9\), \(\epsilon=10^{-7}\), \(k=0\), and \(x^{0}>0\).

  • Step 1. If \(\|\min(x^{k},F(x^{k}))\|_{\infty}\leq\epsilon\), then stop. Otherwise, go to Step 2.

  • Step 2. (Prediction step)
    $$\begin{aligned}& s : = (1-\mu) x^{k} - \beta_{k} F\bigl(x^{k} \bigr),\qquad \tilde{x}_{i}^{k} := \Bigl(s_{i} + \sqrt{(s_{i})^{2}+4\mu\bigl(x_{i}^{k} \bigr)^{2}} \Bigr)/2, \\& \xi^{k} := \beta_{k} \bigl(F\bigl(\tilde{x}^{k} \bigr) - F\bigl(x^{k}\bigr)\bigr), \qquad r:= \bigl\| \xi^{k} \bigr\| / \bigl\| x^{k}-\tilde{x}^{k}\bigr\| \end{aligned}$$
    while (\(r> \eta\))
    $$\begin{aligned}& \beta_{k} := \beta_{k}*0.8/r, \\& s: =(1-\mu) x^{k} - \beta_{k} F\bigl(x^{k}\bigr), \qquad \tilde{x}_{i}^{k}: = \Bigl(s_{i} +\sqrt {(s_{i})^{2}+4\mu\bigl(x_{i}^{k} \bigr)^{2}} \Bigr)/2, \\& \xi^{k} := \beta_{k} \bigl(F\bigl(\tilde{x}^{k} \bigr) - F\bigl(x^{k}\bigr)\bigr), \qquad r:= \bigl\| \xi^{k} \bigr\| / \bigl\| x^{k}-\tilde{x}^{k}\bigr\| . \end{aligned}$$
    end while
  • Step 3. (Correction step)
    $$\begin{aligned}& D\bigl(x^{k},\beta_{k}\bigr):=\bigl(x^{k}- \tilde{x}^{k}\bigr)+{\frac{1}{1+\mu}}\xi^{k},\qquad d \bigl(x^{k},\beta_{k}\bigr):=\bigl(x^{k}- \tilde{x}^{k}\bigr)+{\frac{\beta_{k}}{1+\mu }}F\bigl(\tilde {x}^{k} \bigr), \\& \alpha^{*}_{k}={\frac{(x^{k}-\tilde {x}^{k})^{T}D(x^{k},\beta _{k})+{\frac{1}{1+\mu}}(1-\mu-{\frac{\beta_{k}}{4c}})\|x^{k}-\tilde {x}^{k}\| ^{2}}{\|D(x^{k},\beta_{k})+x^{k}-\tilde{x}^{k}\|^{2}}},\qquad \alpha_{k}=\gamma \alpha^{*}_{k}, \\& x^{k+1}=\rho x^{k}+(1-\rho)P_{R_{+}^{n}} \bigl[x^{k}-\alpha_{k}d\bigl(x^{k}, \beta_{k}\bigr)\bigr]. \end{aligned}$$
  • Step 4.
    $$\beta_{k+1}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{\beta_{k}*0.7}{r}, & \mbox{if }r\leq0.3;\\ \beta_{k},& \mbox{otherwise}. \end{array}\displaystyle \right . $$
  • Step 5. \(k:=k+1\); go to Step 1.
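The steps above can be sketched compactly in Python with NumPy. This is a sketch under the stated defaults (\(\beta_{0}=1\), \(\eta=0.9\), \(\gamma=1.9\), \(\epsilon=10^{-7}\)), not the authors' Matlab code; the mapping F, the parameters ρ, μ, c, and the starting point \(x^{0}>0\) are supplied by the caller, and a small guard protects the ratio r against division by zero:

```python
import numpy as np

# A sketch of Steps 0-5 of the proposed method (prediction, correction,
# and the beta update). Parameter defaults follow Step 0 in the text.

def lqp_solve(F, x0, rho=0.01, mu=0.01, c=0.9, eta=0.9,
              gamma=1.9, eps=1e-7, max_iter=5000):
    x, beta = x0.astype(float), 1.0
    for k in range(max_iter):
        Fx = F(x)
        if np.max(np.abs(np.minimum(x, Fx))) <= eps:        # Step 1
            return x, k
        while True:                                         # Step 2: prediction
            s = (1.0 - mu) * x - beta * Fx
            xt = 0.5 * (s + np.sqrt(s**2 + 4.0 * mu * x**2))
            xi = beta * (F(xt) - Fx)
            r = np.linalg.norm(xi) / max(np.linalg.norm(x - xt), 1e-30)
            if r <= eta:
                break
            beta *= 0.8 / r
        e = x - xt                                          # Step 3: correction
        D = e + xi / (1.0 + mu)
        d = e + beta * F(xt) / (1.0 + mu)
        alpha = gamma * (e @ D + (1.0 - mu - beta / (4.0 * c))
                         * (e @ e) / (1.0 + mu)) / np.dot(D + e, D + e)
        x = rho * x + (1.0 - rho) * np.maximum(x - alpha * d, 0.0)
        if r <= 0.3:                                        # Step 4: update beta
            beta *= 0.7 / r
    return x, max_iter
```

For instance, for the co-coercive mapping \(F(x)=x-a\) with \(a>0\), the unique solution of (1.1) is \(x^{*}=a\), and the iterates approach it from any positive starting point.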

4 Preliminary computational results

In this section, we consider two examples to illustrate the efficiency and the performance of the proposed algorithm.

4.1 Numerical experiments I

We consider the nonlinear complementarity problems
$$ x\ge0, \quad F(x)\ge0, \qquad x^{T}F(x)=0, $$
(4.1)
where
$$F(x)=D(x)+Mx+q, $$
\(D(x)\) and \(Mx+q\) are the nonlinear part and linear part of \(F(x) \), respectively.

We form the linear part of the test problems similarly to Harker and Pang [4]. The matrix \(M=A^{T}A +B\), where A is an \(n\times n\) matrix whose entries are randomly generated in the interval \((-5,+5)\), and the skew-symmetric matrix B is generated in the same way. The vector q is generated from a uniform distribution in the interval \((-500,500)\) or in \((-500,0)\). In \(D(x)\), the nonlinear part of \(F(x)\), the components are chosen to be \(D_{j} (x)= d_{j} *\arctan(x_{j})\), where \(d_{j}\) is a random variable in \((0,1)\).
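The construction just described might be coded as follows (a sketch; the seed and dimension are arbitrary, and the properties noted in the comments hold by construction):

```python
import numpy as np

# A sketch of the test-problem generator: linear part M x + q with
# M = A^T A + B (B skew-symmetric), nonlinear part d_j * arctan(x_j).
# Seed and dimension are arbitrary choices for illustration.

rng = np.random.default_rng(0)
n = 100
A = rng.uniform(-5.0, 5.0, (n, n))
C = rng.uniform(-5.0, 5.0, (n, n))
B = C - C.T                        # skew-symmetric by construction
M = A.T @ A + B                    # symmetric part A^T A is positive semidefinite
q = rng.uniform(-500.0, 500.0, n)
d = rng.uniform(0.0, 1.0, n)

def F(x):
    return d * np.arctan(x) + M @ x + q
```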

In all tests we take the logarithmic proximal parameter \(\mu=0.01\), \(\rho=0.01\), and \(c=0.9\). All iterations start with \(x^{0}=(1,\ldots,1)^{T}\) and \(\beta_{0}=1\), and we stop whenever
$$\bigl\| \min\bigl(x^{k},F\bigl(x^{k}\bigr)\bigr)\bigr\| _{\infty}\leq10^{-7} . $$
All codes were written in Matlab, and we compare the proposed method with the method in [18]. The test results for problem (4.1) are reported in Tables 1 and 2, where k is the number of iterations and l denotes the number of evaluations of the mapping F.
Table 1

Numerical results for problem (4.1) with \(\pmb{q \in(-500, 500)}\)

  n       The method in [18]          The proposed method
          k     l      CPU (Sec.)     k     l     CPU (Sec.)
  200     297   651    0.068          117   279   0.016
  300     329   708    0.094          129   310   0.029
  400     333   721    0.13           169   367   0.08
  500     368   801    0.22           171   381   0.12
  700     364   751    0.41           142   334   0.12
  1000    339   743    1.74           139   328   0.66

Table 2

Numerical results for problem (4.1) with \(\pmb{q \in(-500, 0)}\)

  n       The method in [18]          The proposed method
          k     l      CPU (Sec.)     k     l     CPU (Sec.)
  200     578   1246   0.09           217   495   0.04
  300     584   1257   0.14           212   497   0.06
  400     769   1586   0.26           284   633   0.11
  500     821   1762   0.38           282   645   0.15
  700     699   1524   0.61           245   571   0.27
  1000    813   1709   3.37           294   679   1.45

Tables 1 and 2 show that the proposed method is more efficient. The numerical results indicate that the proposed method saves about 49-65 percent of the iterations and about 48-63 percent of the evaluations of the function F.

4.2 Numerical experiments II

In this subsection, we apply the proposed method to traffic equilibrium problems and present the corresponding numerical results.

Consider a network \([N,L]\) of nodes N and directed links L; a path consists of a finite sequence of connecting links with a certain orientation. Let a, b, etc. denote the links, and let p, q, etc. denote the paths. We let ω denote an origin/destination (O/D) pair of nodes of the network and \(P_{\omega}\) denote the set of all paths connecting the O/D pair ω. Note that the path-arc incidence matrix and the path-O/D pair incidence matrix, denoted by A and B, respectively, are determined by the given network and O/D pairs. To see how to convert a traffic equilibrium problem into a variational inequality, we consider a simple example, as depicted in Figure 1.
Figure 1

An illustrative example of given directed network and the O/D pairs.

For the given example in Figure 1, the path-arc incidence matrix A and the path-O/D pair incidence matrix B have the following forms:
$$\begin{aligned}& \textstyle\begin{array}{@{}rc@{}} \mbox{No.}\ \mbox{link}& \underline{\textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 1 & 2 & 3 & 4 & 5 \end{array}\displaystyle } \\ \displaystyle A &= \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 \end{pmatrix}, \end{array}\displaystyle \qquad \textstyle\begin{array}{@{}rc@{}} \mathrm{No.\ O/D pair} &\underline{\textstyle\begin{array}{@{}c@{\quad}c@{}} \omega_{1} & \omega_{2} \end{array}\displaystyle } \\ \displaystyle B &= \begin{pmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{pmatrix}. \end{array}\displaystyle \end{aligned}$$
Let \(x_{p}\) represent the traffic flow on path p and \(f_{a}\) denote the link load on link a, then the arc-flow vector f is given by
$$f=A^{T} x. $$
Let \(d_{\omega}\) denote the traffic amount between O/D pair ω, which must satisfy
$$d_{\omega} = \sum_{p\in P_{\omega}} x_{p}. $$
Thus, the O/D pair-traffic amount vector d is given by
$$d= B^{T} x. $$
Let \(t(f)=\{ t_{a}, a\in L\}\) be the vector of link travel costs, which is a function of the link flow. A user traveling on path p incurs a (path) travel cost \(\theta_{p}\). For given link travel cost vector t, the path travel cost vector θ is given by
$$\theta= A t(f) \quad\mbox{and thus}\quad \theta(x) = A t\bigl(A^{T} x \bigr). $$
Associated with every O/D pair ω, there is a travel disutility \(\lambda_{\omega}(d)\). Since both the path costs and the travel disutilities are functions of the flow pattern x, the traffic network equilibrium problem is to seek the path flow pattern \(x^{*}\) such that
$$ x^{*} \geq0,\quad \bigl(x - x^{*}\bigr)^{T} F\bigl(x^{*}\bigr) \ge0,\quad \forall x\geq0, $$
(4.2)
where
$$ F_{p}(x)= \theta_{p}(x) - \lambda_{\omega}\bigl(d(x) \bigr),\quad \forall \omega, p\in P_{\omega}, $$
and thus
$$ F(x) = At\bigl(A^{T}x\bigr) - B\lambda\bigl(B^{T}x\bigr). $$
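For the small network of Figure 1, assembling this mapping takes only a few lines. The incidence matrices below are the ones displayed earlier, while the link cost t and disutility λ are illustrative placeholders, not the functions of Table 3 and (4.3):

```python
import numpy as np

# F(x) = A t(A^T x) - B lambda(B^T x) for the network of Figure 1.
# A and B are the incidence matrices shown above; t and lam are
# illustrative affine placeholders.

A = np.array([[0, 0, 1, 0, 0],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 1, 0],
              [0, 1, 0, 0, 1]], dtype=float)                 # path-arc
B = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)  # path-O/D pair

t = lambda f: 2.0 * f + 1.0     # illustrative link travel cost
lam = lambda d: -d + 10.0       # illustrative disutility (cf. (4.3))

def F(x):
    f = A.T @ x                 # arc-flow vector
    d = B.T @ x                 # O/D pair-traffic amount vector
    return A @ t(f) - B @ lam(d)

theta = F(np.ones(4))           # F at the unit path-flow pattern
```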
We apply the proposed method to the example taken from [22] (Example 7.5 in [22]), which consists of 25 nodes, 37 links, and six O/D pairs. The network is depicted in Figure 2.
Figure 2

A directed network with 25 nodes and 37 links.

For this example, there are in total 55 paths for the six given O/D pairs, and hence the dimension of the variable x is 55. Therefore, the path-arc incidence matrix A is a \(55\times37\) matrix and the path-O/D pair incidence matrix B is a \(55\times6\) matrix. The user cost of traversing link a is given in Table 3. The disutility function is given by
$$ \lambda_{\omega}(d) = -m_{\omega}d_{\omega} + q_{\omega} $$
(4.3)
and the coefficients \(m_{\omega}\) and \(q_{\omega}\) in the disutility function of different O/D pairs for this example are given in Table 4.
Table 3

The link traversing cost functions \(\pmb{t_{a}(f)}\) in the example

\(t_{1}(f)=5\cdot10^{-5}f_{1}^{4}+5f_{1}+2f_{2}+500\)
\(t_{2}(f)=3\cdot10^{-5}f_{2}^{4}+4f_{2}+4f_{1}+200\)
\(t_{3}(f)=5\cdot10^{-5}f_{3}^{4}+3f_{3}+f_{4}+350\)
\(t_{4}(f)=3\cdot10^{-5}f_{4}^{4}+6f_{4}+3f_{5}+400\)
\(t_{5}(f)=6\cdot10^{-5}f_{5}^{4}+6f_{5}+4f_{6}+600\)
\(t_{6}(f)=7f_{6}+3f_{7}+500\)
\(t_{7}(f)=8\cdot10^{-5}f_{7}^{4}+8f_{7}+2f_{8}+400\)
\(t_{8}(f)=4\cdot10^{-5}f_{8}^{4}+5f_{8}+2f_{9}+650\)
\(t_{9}(f)=10^{-5}f_{9}^{4}+6f_{9}+2f_{10}+700\)
\(t_{10}(f)=4f_{10}+f_{12}+800\)
\(t_{11}(f)=7\cdot10^{-5}f_{11}^{4}+7f_{11}+4f_{12}+650\)
\(t_{12}(f)=8f_{12}+2f_{13}+700\)
\(t_{13}(f)=10^{-5}f_{13}^{4}+7f_{13}+3f_{18}+600\)
\(t_{14}(f)=8f_{14}+3f_{15}+500\)
\(t_{15}(f)=3\cdot10^{-5}f_{15}^{4}+9f_{15}+2f_{14}+200\)
\(t_{16}(f)=8f_{16}+5f_{12}+300\)
\(t_{17}(f)=3\cdot10^{-5}f_{17}^{4}+7f_{17}+2f_{15}+450\)
\(t_{18}(f)=5f_{18}+f_{16}+300\)
\(t_{19}(f)=8f_{19}+3f_{17}+600\)
\(t_{20}(f)=3\cdot10^{-5}f_{20}^{4}+6f_{20}+f_{21}+300\)
\(t_{21}(f)=4\cdot10^{-5}f_{21}^{4}+4f_{21}+f_{22}+400\)
\(t_{22}(f)=2\cdot10^{-5}f_{22}^{4}+6f_{22}+f_{23}+500\)
\(t_{23}(f)=3\cdot10^{-5}f_{23}^{4}+9f_{23}+2f_{24}+350\)
\(t_{24}(f)=2\cdot10^{-5}f_{24}^{4}+8f_{24}+f_{25}+400\)
\(t_{25}(f)=3\cdot10^{-5}f_{25}^{4}+9f_{25}+3f_{26}+450\)
\(t_{26}(f)=6\cdot10^{-5}f_{26}^{4}+7f_{26}+8f_{27}+300\)
\(t_{27}(f)=3\cdot10^{-5}f_{27}^{4}+8f_{27}+3f_{28}+500\)
\(t_{28}(f)=3\cdot10^{-5}f_{28}^{4}+7f_{28}+650\)
\(t_{29}(f)=3\cdot10^{-5}f_{29}^{4}+3f_{29}+f_{30}+450\)
\(t_{30}(f)=4\cdot10^{-5}f_{30}^{4}+7f_{30}+2f_{31}+600\)
\(t_{31}(f)=3\cdot10^{-5}f_{31}^{4}+8f_{31}+f_{32}+750\)
\(t_{32}(f)=6\cdot10^{-5}f_{32}^{4}+8f_{32}+3f_{33}+650\)
\(t_{33}(f)=4\cdot10^{-5}f_{33}^{4}+9f_{33}+2f_{31}+750\)
\(t_{34}(f)=6\cdot10^{-5}f_{34}^{4}+7f_{34}+3f_{30}+550\)
\(t_{35}(f)=3\cdot10^{-5}f_{35}^{4}+8f_{35}+3f_{32}+600\)
\(t_{36}(f)=2\cdot10^{-5}f_{36}^{4}+8f_{36}+4f_{31}+750\)
\(t_{37}(f)=6\cdot10^{-5}f_{37}^{4}+5f_{37}+f_{36}+350\)
Table 4

The O/D pairs and the parameters in (4.3) of the example

  (O,D) pair ω       (1,20)   (1,25)   (2,20)   (3,25)   (1,24)   (11,25)
  \(m_{\omega}\)     1        6        10       5        7        9
  \(q_{\omega}\)     1,000    800      2,000    6,000    8,000    7,000
  \(|P_{\omega}|\)   10       15       9        6        10       5

The test results for problem (4.2) for different ε are reported in Table 5, where k is the number of iterations and l denotes the number of evaluations of the mapping F. The stopping criterion is
$$\frac{\|\min\{x,F(x)\}\|_{\infty}}{\|\min\{x^{0},F(x^{0})\}\|_{\infty}} \le\varepsilon. $$
Table 5

Numerical results for different ε

  ε              The method in [18]         The proposed method
                 k     l     CPU (Sec.)     k     l     CPU (Sec.)
  \(10^{-5}\)    201   445   0.04           90    216   0.11
  \(10^{-6}\)    263   580   0.034          115   276   0.01
  \(10^{-7}\)    321   708   0.054          150   352   0.019
  \(10^{-8}\)    380   837   0.058          183   426   0.018
  \(10^{-9}\)    438   963   0.061          211   491   0.02

Table 5 shows that the new method is flexible and efficient for solving a traffic equilibrium problem. Moreover, it demonstrates computationally that the new method is more effective than the method presented in [18], in the sense that the new method needs fewer iterations and fewer evaluations of F, which clearly illustrates its efficiency.

Declarations

Acknowledgements

The authors are very grateful to the referees for their careful reading, comments, and suggestions, which helped us improve the presentation of this paper.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Laboratoire d’Ingénierie des Systémes et Technologies de l’Information, ENSA, Ibn Zohr University
(2)
School of Management Science and Engineering, Nanjing University

References

  1. Lemke, CE: Bimatrix equilibrium points and mathematical programming. Manag. Sci. 11, 681-689 (1965)
  2. Cottle, RW, Dantzig, GB: Complementary pivot theory of mathematical programming. Linear Algebra Appl. 1, 103-125 (1968)
  3. Ferris, MC, Pang, JS: Engineering and economic applications of complementarity problems. SIAM Rev. 39, 669-713 (1997)
  4. Harker, PT, Pang, JS: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math. Program. 48, 161-220 (1990)
  5. Martinet, B: Détermination approchée d’un point fixe d’une application pseudo-contractante. C. R. Acad. Sci. Paris 274, 163-165 (1972)
  6. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)
  7. Eckstein, J: Approximate iterations in Bregman-function-based proximal algorithms. Math. Program. 83, 113-123 (1998)
  8. Güler, O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403-419 (1991)
  9. Teboulle, M: Convergence of proximal-like algorithms. SIAM J. Optim. 7, 1069-1083 (1997)
  10. Auslender, A, Teboulle, M, Ben-Tiba, S: A logarithmic-quadratic proximal method for variational inequalities. Comput. Optim. Appl. 12, 31-40 (1999)
  11. Auslender, A, Teboulle, M, Ben-Tiba, S: Interior proximal and multiplier methods based on second order homogeneous kernels. Math. Oper. Res. 24, 646-668 (1999)
  12. He, BS, Liao, LZ, Yuan, XM: A LQP based interior prediction-correction method for nonlinear complementarity problems. J. Comput. Math. 24(1), 33-44 (2006)
  13. Bnouhachem, A: A new inexactness criterion for approximate logarithmic-quadratic proximal methods. Numer. Math., Theory Methods Appl. 15(1), 74-81 (2006)
  14. Bnouhachem, A: An LQP method for pseudomonotone variational inequalities. J. Glob. Optim. 36(3), 351-363 (2006)
  15. Bnouhachem, A, Yuan, XM: An extended LQP method for monotone nonlinear complementarity problems. J. Optim. Theory Appl. 135(3), 343-353 (2007)
  16. Bnouhachem, A, Noor, MA: A new predictor-corrector method for pseudomonotone nonlinear complementarity problems. Int. J. Comput. Math. 85, 1023-1038 (2008)
  17. Bnouhachem, A, Noor, MA: An interior proximal point algorithm for nonlinear complementarity problems. Nonlinear Anal. Hybrid Syst. 4(3), 371-380 (2010)
  18. Bnouhachem, A, Noor, MA, Khalfaoui, M, Zhaohan, S: An approximate proximal point algorithm for nonlinear complementarity problems. Hacet. J. Math. Stat. 41(1), 103-117 (2012)
  19. Bnouhachem, A, Noor, MA, Khalfaoui, M, Zhaohan, S: A new logarithmic-quadratic proximal method for nonlinear complementarity problems. Appl. Math. Comput. 215, 695-706 (2009)
  20. Noor, MA, Bnouhachem, A: Modified proximal point methods for nonlinear complementarity problems. J. Comput. Appl. Math. 197, 395-405 (2006)
  21. Xu, Y, He, BS, Yuan, X: A hybrid inexact logarithmic-quadratic proximal method for nonlinear complementarity problems. J. Math. Anal. Appl. 322, 276-287 (2006)
  22. Nagurney, A, Zhang, D: Projected Dynamical Systems and Variational Inequalities with Applications. Kluwer Academic, Dordrecht (1996)

Copyright

© Ou-yassine et al. 2015