# On LQP alternating direction method for solving variational inequality problems with separable structure

## Abstract

In this paper, we propose a logarithmic-quadratic proximal alternating direction method for structured variational inequalities. The new iterate is obtained by a convex combination of the previous point and the one generated by a projection-type method along a new descent direction. Global convergence of the new method is proved under certain assumptions. We also report some numerical results to illustrate the efficiency of the proposed method.

MSC: 90C33, 49J40.

## 1 Introduction

The problem we are concerned with in this paper is the following variational inequality: find $u\in \mathrm{\Omega }$ such that

${\left({u}^{\prime }-u\right)}^{T}F\left(u\right)\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }{u}^{\prime }\in \mathrm{\Omega },$
(1.1)

with

$\begin{array}{r}u=\left(\begin{array}{c}x\\ y\end{array}\right),\phantom{\rule{2em}{0ex}}F\left(u\right)=\left(\begin{array}{c}f\left(x\right)\\ g\left(y\right)\end{array}\right)\phantom{\rule{1em}{0ex}}\text{and}\\ \mathrm{\Omega }=\left\{\left(x,y\right)\mid x\in {\mathcal{R}}_{++}^{n},y\in {\mathcal{R}}_{++}^{m},Ax+By=b\right\},\end{array}$
(1.2)

where $A\in {\mathcal{R}}^{l×n}$, $B\in {\mathcal{R}}^{l×m}$ are given matrices, $b\in {\mathcal{R}}^{l}$ is a given vector, and $f:{\mathcal{R}}_{++}^{n}\to {\mathcal{R}}^{n}$, $g:{\mathcal{R}}_{++}^{m}\to {\mathcal{R}}^{m}$ are given monotone operators. Studies and applications of such problems can be found in . By attaching a Lagrange multiplier vector $\lambda \in {\mathcal{R}}^{l}$ to the linear constraint $Ax+By=b$, the problem (1.1)-(1.2) can be reformulated as finding $w\in \mathcal{W}$ such that

${\left({w}^{\prime }-w\right)}^{T}Q\left(w\right)\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }{w}^{\prime }\in \mathcal{W},$
(1.3)

where

$w=\left(\begin{array}{c}x\\ y\\ \lambda \end{array}\right),\phantom{\rule{2em}{0ex}}Q\left(w\right)=\left(\begin{array}{c}f\left(x\right)-{A}^{T}\lambda \\ g\left(y\right)-{B}^{T}\lambda \\ Ax+By-b\end{array}\right),\phantom{\rule{2em}{0ex}}\mathcal{W}={\mathcal{R}}_{++}^{n}×{\mathcal{R}}_{++}^{m}×{\mathcal{R}}^{l}.$
(1.4)

The problem (1.3)-(1.4) is referred to as SVI (structured variational inequalities).

The alternating direction method (ADM), originally proposed by Gabay and Mercier  and Gabay , is a powerful method for solving the structured problem (1.3)-(1.4), since it decomposes the original problem into a series of lower-dimensional subproblems. The classical proximal alternating direction method (PADM)  is an effective numerical approach for solving variational inequalities with a separable structure. To make the PADM more efficient and practical, He et al.  proposed a modified PADM as follows. For a given $\left({x}^{k},{y}^{k},{\lambda }^{k}\right)\in {\mathcal{R}}_{++}^{n}×{\mathcal{R}}_{++}^{m}×{\mathcal{R}}^{l}$, the new iterate $\left({x}^{k+1},{y}^{k+1},{\lambda }^{k+1}\right)$ is obtained via the following steps.

Step 1. Solve the following system of nonlinear equations to obtain ${x}^{k+1}$:

$\begin{array}{c}{\left({x}^{\prime }-{x}^{k+1}\right)}^{T}\left\{f\left({x}^{k+1}\right)-{A}^{T}\left[{\lambda }^{k}-{H}_{k}\left(A{x}^{k+1}+B{y}^{k}-b\right)\right]+{R}_{k}\left({x}^{k+1}-{x}^{k}\right)\right\}\hfill \\ \phantom{\rule{1em}{0ex}}\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }{x}^{\prime }\in {\mathcal{R}}_{++}^{n}.\hfill \end{array}$
(1.5)

Step 2. Solve the following system of nonlinear equations to obtain ${y}^{k+1}$:

$\begin{array}{c}{\left({y}^{\prime }-{y}^{k+1}\right)}^{T}\left\{g\left({y}^{k+1}\right)-{B}^{T}\left[{\lambda }^{k}-{H}_{k}\left(A{x}^{k+1}+B{y}^{k+1}-b\right)\right]+{S}_{k}\left({y}^{k+1}-{y}^{k}\right)\right\}\hfill \\ \phantom{\rule{1em}{0ex}}\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }{y}^{\prime }\in {\mathcal{R}}_{++}^{m}.\hfill \end{array}$
(1.6)

Step 3. Update ${\lambda }^{k}$ via

${\lambda }^{k+1}={\lambda }^{k}-{H}_{k}\left(A{x}^{k+1}+B{y}^{k+1}-b\right).$
(1.7)

Yuan and Li  developed a logarithmic-quadratic proximal (LQP)-based decomposition method by applying LQP terms to regularize the ADM subproblems, substituting the terms $R\left(x-{x}^{k}\right)$ and $S\left(y-{y}^{k}\right)$ in the alternating direction method (1.5)-(1.7) by $R\left[\left(x-{x}^{k}\right)+\mu \left({x}^{k}-{X}_{k}^{2}{x}^{-1}\right)\right]$ and $S\left[\left(y-{y}^{k}\right)+\mu \left({y}^{k}-{Y}_{k}^{2}{y}^{-1}\right)\right]$, respectively. The new iterate $\left({x}^{k+1},{y}^{k+1},{\lambda }^{k+1}\right)$ in  is obtained via the following procedure: from a given ${w}^{k}=\left({x}^{k},{y}^{k},{\lambda }^{k}\right)\in {\mathcal{R}}_{++}^{n}×{\mathcal{R}}_{++}^{m}×{\mathcal{R}}^{l}$ and $\mu \in \left(0,1\right)$, $\left({x}^{k+1},{y}^{k+1},{\lambda }^{k+1}\right)$ is obtained by solving the following system:

$\begin{array}{c}f\left(x\right)-{A}^{T}\left[{\lambda }^{k}-H\left(Ax+B{y}^{k}-b\right)\right]+R\left[\left(x-{x}^{k}\right)+\mu \left({x}^{k}-{X}_{k}^{2}{x}^{-1}\right)\right]=0,\hfill \\ g\left(y\right)-{B}^{T}\left[{\lambda }^{k}-H\left(A{x}^{k+1}+By-b\right)\right]+S\left[\left(y-{y}^{k}\right)+\mu \left({y}^{k}-{Y}_{k}^{2}{y}^{-1}\right)\right]=0,\hfill \\ {\lambda }^{k+1}={\lambda }^{k}-H\left(A{x}^{k+1}+B{y}^{k+1}-b\right).\hfill \end{array}$

Note that the LQP method was originally presented in . Later, Bnouhachem et al. [17, 18] proposed a new inexact LQP alternating direction method that solves a series of related systems of nonlinear equations. Very recently, Li  presented an LQP-based prediction-correction method, in which the new iterate is obtained by a convex combination of the previous point and the one generated by a projection-type method along a descent direction.

In the present paper, inspired by the works cited above and by recent developments in this direction, we propose a new LQP-based prediction-correction method; the new iterate is obtained by a convex combination of the previous point and the one generated by a projection-type method along another descent direction. Under the same conditions as those in , we prove the global convergence of the proposed algorithm. It is proved theoretically that the lower bound of the progress made by the proposed method is greater than that of Li’s method . The effectiveness and superiority of the proposed method are verified by our preliminary numerical experiments.

## 2 The proposed method

In this section, we recall some basic definitions and properties that will be used frequently in our later analysis. Some useful results already proved in the literature are also summarized. The first lemma provides some basic properties of the projection onto Ω.

Lemma 2.1 Let G be a symmetric positive definite matrix and Ω be a nonempty closed convex subset of ${R}^{l}$. We denote by ${P}_{\mathrm{\Omega },G}\left(\cdot \right)$ the projection onto Ω under the G-norm, i.e.,

${P}_{\mathrm{\Omega },G}\left(v\right)=argmin\left\{{\parallel v-u\parallel }_{G}\mid u\in \mathrm{\Omega }\right\}.$

Then we have the following inequalities:

${\left(z-{P}_{\mathrm{\Omega },G}\left[z\right]\right)}^{T}G\left({P}_{\mathrm{\Omega },G}\left[z\right]-v\right)\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }z\in {R}^{l},v\in \mathrm{\Omega };$
(2.1)
${\parallel {P}_{\mathrm{\Omega },G}\left[u\right]-{P}_{\mathrm{\Omega },G}\left[v\right]\parallel }_{G}\le {\parallel u-v\parallel }_{G},\phantom{\rule{1em}{0ex}}\mathrm{\forall }u,v\in {R}^{l};$
(2.2)
${\parallel u-{P}_{\mathrm{\Omega },G}\left[z\right]\parallel }_{G}^{2}\le {\parallel z-u\parallel }_{G}^{2}-{\parallel z-{P}_{\mathrm{\Omega },G}\left[z\right]\parallel }_{G}^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }z\in {R}^{l},u\in \mathrm{\Omega }.$
(2.3)
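The three inequalities are standard; as an illustration, they can be checked numerically in the simplest setting where G is a diagonal positive definite matrix and Ω is the nonnegative orthant, in which case the G-projection decouples componentwise into clipping at zero. The following sketch (dimensions, seed, and sampling ranges are our own illustrative choices) verifies (2.1)-(2.3) on random data:

```python
import random

def proj(z):
    # G-projection onto the nonnegative orthant; for a diagonal G the
    # minimization decouples componentwise, so it reduces to clipping.
    return [max(zi, 0.0) for zi in z]

def g_norm2(v, g):
    # squared G-norm for a diagonal G with positive diagonal g
    return sum(gi * vi * vi for gi, vi in zip(g, v))

random.seed(0)
n = 5
g = [random.uniform(0.5, 2.0) for _ in range(n)]  # diagonal of G

for _ in range(1000):
    z = [random.uniform(-3, 3) for _ in range(n)]
    u = [random.uniform(0, 3) for _ in range(n)]  # u in Omega
    v = [random.uniform(0, 3) for _ in range(n)]  # v in Omega
    pz = proj(z)
    # (2.1): (z - P[z])^T G (P[z] - v) >= 0
    assert sum(gi * (zi - pi) * (pi - vi)
               for gi, zi, pi, vi in zip(g, z, pz, v)) >= -1e-12
    # (2.2): ||P[z] - P[v]||_G <= ||z - v||_G
    pv = proj(v)
    d1 = g_norm2([a - b for a, b in zip(pz, pv)], g)
    d2 = g_norm2([a - b for a, b in zip(z, v)], g)
    assert d1 <= d2 + 1e-12
    # (2.3): ||u - P[z]||_G^2 <= ||z - u||_G^2 - ||z - P[z]||_G^2
    lhs = g_norm2([a - b for a, b in zip(u, pz)], g)
    rhs = g_norm2([a - b for a, b in zip(z, u)], g) \
        - g_norm2([a - b for a, b in zip(z, pz)], g)
    assert lhs <= rhs + 1e-12
print("projection inequalities verified")
```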

Throughout this paper we make the following standard assumptions.

Assumption A $f\left(x\right)$ is monotone with respect to ${\mathcal{R}}_{++}^{n}$ and $g\left(y\right)$ is monotone with respect to ${\mathcal{R}}_{++}^{m}$.

Assumption B The solution set of SVI, denoted by ${\mathcal{W}}^{\ast }$, is nonempty.

Now, we suggest and consider the new LQP alternating direction method (LQP-ADM) for solving SVI as follows.

Prediction step: For a given ${w}^{k}=\left({x}^{k},{y}^{k},{\lambda }^{k}\right)\in {\mathcal{R}}_{++}^{n}×{\mathcal{R}}_{++}^{m}×{\mathcal{R}}^{l}$, and $\mu \in \left(0,1\right)$, the predictor ${\stackrel{˜}{w}}^{k}=\left({\stackrel{˜}{x}}^{k},{\stackrel{˜}{y}}^{k},{\stackrel{˜}{\lambda }}^{k}\right)\in {\mathcal{R}}_{++}^{n}×{\mathcal{R}}_{++}^{m}×{\mathcal{R}}^{l}$ is obtained via solving the following system:

$f\left(x\right)-{A}^{T}\left[{\lambda }^{k}-H\left(Ax+B{y}^{k}-b\right)\right]+R\left[\left(x-{x}^{k}\right)+\mu \left({x}^{k}-{X}_{k}^{2}{x}^{-1}\right)\right]=0,$
(2.4a)
$g\left(y\right)-{B}^{T}\left[{\lambda }^{k}-H\left(A{\stackrel{˜}{x}}^{k}+By-b\right)\right]+S\left[\left(y-{y}^{k}\right)+\mu \left({y}^{k}-{Y}_{k}^{2}{y}^{-1}\right)\right]=0,$
(2.4b)
${\stackrel{˜}{\lambda }}^{k}={\lambda }^{k}-H\left(A{\stackrel{˜}{x}}^{k}+B{\stackrel{˜}{y}}^{k}-b\right).$
(2.4c)

Correction step: The new iterate ${w}^{k+1}\left({\alpha }_{k}\right)=\left({x}^{k+1},{y}^{k+1},{\lambda }^{k+1}\right)$ is given by

${w}^{k+1}\left({\alpha }_{k}\right)=\left(1-\sigma \right){w}^{k}+\sigma {P}_{\mathcal{W}}\left[{w}^{k}-{\alpha }_{k}{G}^{-1}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)\right],\phantom{\rule{1em}{0ex}}\sigma \in \left(0,1\right),$
(2.5)

where

${\alpha }_{k}=\frac{{\phi }_{k}}{{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2}},$
(2.6)
$\begin{array}{c}{\phi }_{k}:={\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{M}^{2}+{\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}\left(B{y}^{k}-B{\stackrel{˜}{y}}^{k}\right),\hfill \\ d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)=\left(\begin{array}{c}f\left({\stackrel{˜}{x}}^{k}\right)-{A}^{T}{\stackrel{˜}{\lambda }}^{k}+{A}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\\ g\left({\stackrel{˜}{y}}^{k}\right)-{B}^{T}{\stackrel{˜}{\lambda }}^{k}+{B}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\\ A{\stackrel{˜}{x}}^{k}+B{\stackrel{˜}{y}}^{k}-b\end{array}\right)\hfill \end{array}$
(2.7)

and

$G=\left(\begin{array}{ccc}\left(1+\mu \right)R& 0& 0\\ 0& \left(1+\mu \right)S+{B}^{T}HB& 0\\ 0& 0& {H}^{-1}\end{array}\right),\phantom{\rule{2em}{0ex}}M=\left(\begin{array}{ccc}R& 0& 0\\ 0& S+{B}^{T}HB& 0\\ 0& 0& {H}^{-1}\end{array}\right).$

Remark 2.1 If ${x}^{k+1}={\stackrel{˜}{x}}^{k}$, ${y}^{k+1}={\stackrel{˜}{y}}^{k}$ and ${\lambda }^{k+1}={\stackrel{˜}{\lambda }}^{k}$ in (2.4a), (2.4b), and (2.4c), respectively, we obtain the method proposed in .

We need the following result in the convergence analysis of the proposed method.

Lemma 2.2 

Let $q\left(u\right)\in {\mathcal{R}}^{n}$ be a monotone mapping of u with respect to ${\mathcal{R}}_{++}^{n}$ and let $R\in {\mathcal{R}}^{n×n}$ be a positive definite diagonal matrix. For a given ${u}^{k}>0$, let ${U}_{k}:=diag\left({u}_{1}^{k},{u}_{2}^{k},\dots ,{u}_{n}^{k}\right)$ and let ${u}^{-1}$ be the n-vector whose jth element is $1/{u}_{j}$. Then the equation

$q\left(u\right)+R\left[\left(u-{u}^{k}\right)+\mu \left({u}^{k}-{U}_{k}^{2}{u}^{-1}\right)\right]=0$
(2.8)

has a unique positive solution u. Moreover, for any $v\ge 0$, we have

${\left(v-u\right)}^{T}q\left(u\right)\ge \frac{1+\mu }{2}\left({\parallel u-v\parallel }_{R}^{2}-{\parallel {u}^{k}-v\parallel }_{R}^{2}\right)+\frac{1-\mu }{2}{\parallel {u}^{k}-u\parallel }_{R}^{2}.$
(2.9)
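Equation (2.8) is a coupled nonlinear system in general, but in the special case where q is frozen at a known constant vector (as happens in inexact LQP schemes that evaluate q at the previous iterate) and $R=diag\left({r}_{1},\dots ,{r}_{n}\right)$, it decouples: multiplying the jth component by ${u}_{j}$ gives the quadratic ${r}_{j}{u}_{j}^{2}+\left({q}_{j}-\left(1-\mu \right){r}_{j}{u}_{j}^{k}\right){u}_{j}-\mu {r}_{j}{\left({u}_{j}^{k}\right)}^{2}=0$, whose constant term is negative, so it has exactly one positive root. A minimal sketch under this assumption:

```python
import math

def lqp_root(qj, rj, ukj, mu):
    """Unique positive root of r*u^2 + (q - (1-mu)*r*uk)*u - mu*r*uk^2 = 0,
    i.e. the jth component of (2.8) with q held constant."""
    b = qj - (1.0 - mu) * rj * ukj
    disc = b * b + 4.0 * rj * mu * rj * ukj * ukj
    return (-b + math.sqrt(disc)) / (2.0 * rj)

def solve_lqp(q, r, uk, mu):
    # componentwise solve of the decoupled system
    return [lqp_root(qj, rj, ukj, mu) for qj, rj, ukj in zip(q, r, uk)]

mu = 0.5
uk = [1.0, 2.0, 0.5]
r = [1.0, 2.0, 4.0]
q = [0.5, -1.0, 3.0]
u = solve_lqp(q, r, uk, mu)
# verify positivity and the residual of (2.8)
for uj, qj, rj, ukj in zip(u, q, r, uk):
    assert uj > 0
    res = qj + rj * ((uj - ukj) + mu * (ukj - ukj * ukj / uj))
    assert abs(res) < 1e-10
print("positive solutions:", u)
```

Note that for $q=0$ the root is exactly ${u}_{j}^{k}$, as expected from (2.8).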

In the next theorem we show that ${\alpha }_{k}$ is bounded below away from zero, which is one of the keys to proving the global convergence results.

Theorem 2.1 For a given ${w}^{k}\in {\mathcal{R}}_{++}^{n}×{\mathcal{R}}_{++}^{m}×{\mathcal{R}}^{l}$, let ${\stackrel{˜}{w}}^{k}$ be generated by (2.4a)-(2.4c). Then we have the following:

${\phi }_{k}\ge \frac{1}{2}\left({\parallel A{\stackrel{˜}{x}}^{k}+B{y}^{k}-b\parallel }_{H}^{2}+{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2}\right)$
(2.10)

and

${\alpha }_{k}\ge \frac{1}{2}.$
(2.11)

Proof It follows from (2.7) that

$\begin{array}{rl}{\phi }_{k}=& {\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{M}^{2}+{\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}\left(B{y}^{k}-B{\stackrel{˜}{y}}^{k}\right)\\ =& {\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}+{\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}+{\parallel B{y}^{k}-B{\stackrel{˜}{y}}^{k}\parallel }_{H}^{2}+{\parallel {\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\parallel }_{{H}^{-1}}^{2}\\ +{\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}\left(B{y}^{k}-B{\stackrel{˜}{y}}^{k}\right).\end{array}$
(2.12)

Using (2.4c), we have

$\begin{array}{c}{\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}\left(B{y}^{k}-B{\stackrel{˜}{y}}^{k}\right)+\frac{1}{2}\left({\parallel B{y}^{k}-B{\stackrel{˜}{y}}^{k}\parallel }_{H}^{2}+{\parallel {\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\parallel }_{{H}^{-1}}^{2}\right)\hfill \\ \phantom{\rule{1em}{0ex}}={\left(A{\stackrel{˜}{x}}^{k}+B{\stackrel{˜}{y}}^{k}-b\right)}^{T}H\left(B{y}^{k}-B{\stackrel{˜}{y}}^{k}\right)+\frac{1}{2}\left({\parallel B{y}^{k}-B{\stackrel{˜}{y}}^{k}\parallel }_{H}^{2}+{\parallel A{\stackrel{˜}{x}}^{k}+B{\stackrel{˜}{y}}^{k}-b\parallel }_{H}^{2}\right)\hfill \\ \phantom{\rule{1em}{0ex}}=\frac{1}{2}{\parallel A{\stackrel{˜}{x}}^{k}+B{y}^{k}-b\parallel }_{H}^{2}.\hfill \end{array}$
(2.13)

Substituting (2.13) into (2.12), we get

$\begin{array}{rcl}{\phi }_{k}& =& \frac{1}{2}\left({\parallel A{\stackrel{˜}{x}}^{k}+B{y}^{k}-b\parallel }_{H}^{2}+{\parallel B{y}^{k}-B{\stackrel{˜}{y}}^{k}\parallel }_{H}^{2}+{\parallel {\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\parallel }_{{H}^{-1}}^{2}\right)+{\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}+{\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}\\ =& \frac{1}{2}\left({\parallel A{\stackrel{˜}{x}}^{k}+B{y}^{k}-b\parallel }_{H}^{2}+{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2}+\left(1-\mu \right){\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}+\left(1-\mu \right){\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}\right)\\ \ge & \frac{1}{2}\left({\parallel A{\stackrel{˜}{x}}^{k}+B{y}^{k}-b\parallel }_{H}^{2}+{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2}\right).\end{array}$

Therefore, it follows from (2.6) and (2.10) that

${\alpha }_{k}\ge \frac{1}{2}$

and this completes the proof. □
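To make the prediction step and the bounds (2.10)-(2.11) concrete, consider a toy scalar instance of our own choosing: $f\left(x\right)=x$, $g\left(y\right)=y$, $A=B=H=R=S=1$, $b=2$, and $\mu =1/2$, so that $G=diag\left(1.5,2.5,1\right)$ and $M=diag\left(1,2,1\right)$. Each of (2.4a)-(2.4b) then reduces, after multiplying through by the unknown, to a quadratic with a unique positive root, and ${\phi }_{k}$ and ${\alpha }_{k}$ can be evaluated directly:

```python
import math

mu, b = 0.5, 2.0  # toy instance: f(x)=x, g(y)=y, A=B=H=R=S=1

def predictor(x, y, lam):
    # (2.4a) becomes 3u^2 + (y - b - lam - 0.5*x)u - 0.5*x^2 = 0; positive root
    c = y - b - lam - 0.5 * x
    xt = (-c + math.sqrt(c * c + 6.0 * x * x)) / 6.0
    # (2.4b), with A*x evaluated at the freshly computed predictor xt
    c = xt - b - lam - 0.5 * y
    yt = (-c + math.sqrt(c * c + 6.0 * y * y)) / 6.0
    # (2.4c)
    return xt, yt, lam - (xt + yt - b)

x, y, lam = 1.0, 2.0, 0.0
xt, yt, lamt = predictor(x, y, lam)

# residuals of (2.4a)-(2.4b) at the predictor
r1 = xt - (lam - (xt + y - b)) + (xt - x) + mu * (x - x * x / xt)
r2 = yt - (lam - (xt + yt - b)) + (yt - y) + mu * (y - y * y / yt)
assert abs(r1) < 1e-10 and abs(r2) < 1e-10

# phi_k and alpha_k per (2.6)-(2.7), with M = diag(1,2,1), G = diag(1.5,2.5,1)
dx, dy, dl = x - xt, y - yt, lam - lamt
phi = dx * dx + 2.0 * dy * dy + dl * dl + dl * dy   # ||.||_M^2 + cross term
ng = 1.5 * dx * dx + 2.5 * dy * dy + dl * dl        # ||w^k - wt^k||_G^2
alpha = phi / ng

# Theorem 2.1: phi_k >= (||A*xt + B*y - b||_H^2 + ||w - wt||_G^2)/2, alpha_k >= 1/2
res = xt + y - b
assert phi >= 0.5 * (res * res + ng) - 1e-12
assert alpha >= 0.5
print("alpha_k =", alpha)
```

For this starting point the predictor is $\left(1/2,4/3,1/6\right)$ and ${\alpha }_{k}\approx 0.70$, comfortably above the bound $1/2$.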

## 3 Basic results

In this section, we prove some basic properties, which will be used to establish the convergence of the proposed method. The following results are obtained by applying Lemma 2.2 to the LQP systems in the prediction step of the proposed method.

Lemma 3.1 For given ${w}^{k}=\left({x}^{k},{y}^{k},{\lambda }^{k}\right)\in {\mathcal{R}}_{++}^{n}×{\mathcal{R}}_{++}^{m}×{\mathcal{R}}^{l}$, let ${\stackrel{˜}{w}}^{k}$ be generated by (2.4a)-(2.4c). Then for any ${w}^{\ast }=\left({x}^{\ast },{y}^{\ast },{\lambda }^{\ast }\right)\in {\mathcal{W}}^{\ast }$, we have

${\left({w}^{k}-{w}^{\ast }\right)}^{T}G\left({w}^{k}-{\stackrel{˜}{w}}^{k}\right)\ge {\phi }_{k}.$
(3.1)

Proof Applying Lemma 2.2 to (2.4a) (by setting ${u}^{k}={x}^{k}$, $u={\stackrel{˜}{x}}^{k}$, $v={x}^{\ast }$ in (2.9)) and

$q\left(u\right)=f\left({\stackrel{˜}{x}}^{k}\right)-{A}^{T}\left[{\lambda }^{k}-H\left(A{\stackrel{˜}{x}}^{k}+B{y}^{k}-b\right)\right],$

we get

$\begin{array}{c}{\left({x}^{\ast }-{\stackrel{˜}{x}}^{k}\right)}^{T}\left\{f\left({\stackrel{˜}{x}}^{k}\right)-{A}^{T}\left[{\lambda }^{k}-H\left(A{\stackrel{˜}{x}}^{k}+B{y}^{k}-b\right)\right]\right\}\hfill \\ \phantom{\rule{1em}{0ex}}\ge \frac{1+\mu }{2}\left({\parallel {\stackrel{˜}{x}}^{k}-{x}^{\ast }\parallel }_{R}^{2}-{\parallel {x}^{k}-{x}^{\ast }\parallel }_{R}^{2}\right)+\frac{1-\mu }{2}{\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}.\hfill \end{array}$
(3.2)

Recall

${\left({x}^{\ast }-{\stackrel{˜}{x}}^{k}\right)}^{T}R\left({x}^{k}-{\stackrel{˜}{x}}^{k}\right)=\frac{1}{2}\left({\parallel {\stackrel{˜}{x}}^{k}-{x}^{\ast }\parallel }_{R}^{2}-{\parallel {x}^{k}-{x}^{\ast }\parallel }_{R}^{2}\right)+\frac{1}{2}{\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}.$
(3.3)

Multiplying (3.3) by $1+\mu$ and subtracting (3.2), we obtain

${\left({x}^{\ast }-{\stackrel{˜}{x}}^{k}\right)}^{T}\left\{\left(1+\mu \right)R\left({x}^{k}-{\stackrel{˜}{x}}^{k}\right)-f\left({\stackrel{˜}{x}}^{k}\right)+{A}^{T}{\stackrel{˜}{\lambda }}^{k}-{A}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\right\}\le \mu {\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}.$
(3.4)

Similarly, applying Lemma 2.2 to (2.4b), substituting ${u}^{k}={y}^{k}$, $u={\stackrel{˜}{y}}^{k}$, $v={y}^{\ast }$, and replacing R, n with S, m, respectively, in (2.9) and

$q\left(u\right)=g\left({\stackrel{˜}{y}}^{k}\right)-{B}^{T}\left[{\lambda }^{k}-H\left(A{\stackrel{˜}{x}}^{k}+B{\stackrel{˜}{y}}^{k}-b\right)\right],$

we get

$\begin{array}{c}{\left({y}^{\ast }-{\stackrel{˜}{y}}^{k}\right)}^{T}\left\{g\left({\stackrel{˜}{y}}^{k}\right)-{B}^{T}\left[{\lambda }^{k}-H\left(A{\stackrel{˜}{x}}^{k}+B{\stackrel{˜}{y}}^{k}-b\right)\right]\right\}\hfill \\ \phantom{\rule{1em}{0ex}}\ge \frac{1+\mu }{2}\left({\parallel {\stackrel{˜}{y}}^{k}-{y}^{\ast }\parallel }_{S}^{2}-{\parallel {y}^{k}-{y}^{\ast }\parallel }_{S}^{2}\right)+\frac{1-\mu }{2}{\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}.\hfill \end{array}$
(3.5)

Recall

${\left({y}^{\ast }-{\stackrel{˜}{y}}^{k}\right)}^{T}S\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)=\frac{1}{2}\left({\parallel {\stackrel{˜}{y}}^{k}-{y}^{\ast }\parallel }_{S}^{2}-{\parallel {y}^{k}-{y}^{\ast }\parallel }_{S}^{2}\right)+\frac{1}{2}{\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}.$
(3.6)

Multiplying (3.6) by $1+\mu$ and subtracting (3.5), we have

${\left({y}^{\ast }-{\stackrel{˜}{y}}^{k}\right)}^{T}\left\{\left(1+\mu \right)S\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)-g\left({\stackrel{˜}{y}}^{k}\right)+{B}^{T}{\stackrel{˜}{\lambda }}^{k}\right\}\le \mu {\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}.$
(3.7)

Since $\left({x}^{\ast },{y}^{\ast },{\lambda }^{\ast }\right)$ is a solution of SVI, ${\stackrel{˜}{x}}^{k}\in {\mathcal{R}}_{++}^{n}$ and ${\stackrel{˜}{y}}^{k}\in {\mathcal{R}}_{++}^{m}$, we have

$\begin{array}{c}{\left({\stackrel{˜}{x}}^{k}-{x}^{\ast }\right)}^{T}\left(f\left({x}^{\ast }\right)-{A}^{T}{\lambda }^{\ast }\right)\ge 0,\hfill \\ {\left({\stackrel{˜}{y}}^{k}-{y}^{\ast }\right)}^{T}\left(g\left({y}^{\ast }\right)-{B}^{T}{\lambda }^{\ast }\right)\ge 0\hfill \end{array}$

and

$A{x}^{\ast }+B{y}^{\ast }-b=0.$

Using the monotonicity of f and g, we obtain

${\left(\begin{array}{c}{\stackrel{˜}{x}}^{k}-{x}^{\ast }\\ {\stackrel{˜}{y}}^{k}-{y}^{\ast }\\ {\stackrel{˜}{\lambda }}^{k}-{\lambda }^{\ast }\end{array}\right)}^{T}\left(\begin{array}{c}f\left({\stackrel{˜}{x}}^{k}\right)-{A}^{T}{\stackrel{˜}{\lambda }}^{k}\\ g\left({\stackrel{˜}{y}}^{k}\right)-{B}^{T}{\stackrel{˜}{\lambda }}^{k}\\ A{\stackrel{˜}{x}}^{k}+B{\stackrel{˜}{y}}^{k}-b\end{array}\right)\ge {\left(\begin{array}{c}{\stackrel{˜}{x}}^{k}-{x}^{\ast }\\ {\stackrel{˜}{y}}^{k}-{y}^{\ast }\\ {\stackrel{˜}{\lambda }}^{k}-{\lambda }^{\ast }\end{array}\right)}^{T}\left(\begin{array}{c}f\left({x}^{\ast }\right)-{A}^{T}{\lambda }^{\ast }\\ g\left({y}^{\ast }\right)-{B}^{T}{\lambda }^{\ast }\\ A{x}^{\ast }+B{y}^{\ast }-b\end{array}\right)\ge 0.$
(3.8)

Adding (3.4), (3.7), and (3.8), we get

$\begin{array}{rcl}{\left({w}^{\ast }-{\stackrel{˜}{w}}^{k}\right)}^{T}G\left({w}^{k}-{\stackrel{˜}{w}}^{k}\right)& =& {\left({x}^{\ast }-{\stackrel{˜}{x}}^{k}\right)}^{T}\left(\left(1+\mu \right)R\left({x}^{k}-{\stackrel{˜}{x}}^{k}\right)\right)\\ +{\left({y}^{\ast }-{\stackrel{˜}{y}}^{k}\right)}^{T}\left(\left(1+\mu \right)S\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)+{B}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\right)\\ +{\left({\lambda }^{\ast }-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}\left(A{\stackrel{˜}{x}}^{k}+B{\stackrel{˜}{y}}^{k}-b\right)\\ \le & \mu {\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}+{\left({x}^{\ast }-{\stackrel{˜}{x}}^{k}\right)}^{T}{A}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\\ +{\left({y}^{\ast }-{\stackrel{˜}{y}}^{k}\right)}^{T}{B}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)+\mu {\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}\\ =& \mu {\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}-{\left(A{\stackrel{˜}{x}}^{k}+B{\stackrel{˜}{y}}^{k}-b\right)}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)+\mu {\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}\\ =& \mu {\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}-{\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}\left(B{y}^{k}-B{\stackrel{˜}{y}}^{k}\right)+\mu {\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2},\end{array}$
(3.9)

where the last equality follows from (2.4c). It follows from (3.9) that

$\begin{array}{rcl}{\left({w}^{k}-{w}^{\ast }\right)}^{T}G\left({w}^{k}-{\stackrel{˜}{w}}^{k}\right)& \ge & {\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2}-\mu {\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}-\mu {\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}\\ +{\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}\left(B{y}^{k}-B{\stackrel{˜}{y}}^{k}\right)\\ =& {\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}+{\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}+{\parallel B{y}^{k}-B{\stackrel{˜}{y}}^{k}\parallel }_{H}^{2}+{\parallel {\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\parallel }_{{H}^{-1}}^{2}\\ +{\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}\left(B{y}^{k}-B{\stackrel{˜}{y}}^{k}\right).\end{array}$

Using the definition of ${\phi }_{k}$ the assertion of this lemma is proved. □

Theorem 3.1 Let ${w}^{\ast }\in {\mathcal{W}}^{\ast }$, ${w}^{k+1}\left({\alpha }_{k}\right)$ be defined by (2.5) and

$\mathrm{\Theta }\left({\alpha }_{k}\right):={\parallel {w}^{k}-{w}^{\ast }\parallel }_{G}^{2}-{\parallel {w}^{k+1}\left({\alpha }_{k}\right)-{w}^{\ast }\parallel }_{G}^{2},$
(3.10)

then we have

$\mathrm{\Theta }\left({\alpha }_{k}\right)\ge \sigma \left({\parallel {w}^{k}-{w}_{\ast }^{k}-{\alpha }_{k}\left({w}^{k}-{\stackrel{˜}{w}}^{k}\right)\parallel }_{G}^{2}+2{\alpha }_{k}{\phi }_{k}-{\alpha }_{k}^{2}{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2}\right),$
(3.11)

where

${w}_{\ast }^{k}=\left({x}_{\ast }^{k},{y}_{\ast }^{k},{\lambda }_{\ast }^{k}\right):={P}_{\mathcal{W}}\left[{w}^{k}-{\alpha }_{k}{G}^{-1}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)\right].$
(3.12)

Proof Similarly to the derivations of (3.4) and (3.7), we have

${\left({x}_{\ast }^{k}-{\stackrel{˜}{x}}^{k}\right)}^{T}\left\{\left(1+\mu \right)R\left({x}^{k}-{\stackrel{˜}{x}}^{k}\right)-f\left({\stackrel{˜}{x}}^{k}\right)+{A}^{T}{\stackrel{˜}{\lambda }}^{k}-{A}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\right\}\le \mu {\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}$
(3.13)

and

$\begin{array}{c}{\left({y}_{\ast }^{k}-{\stackrel{˜}{y}}^{k}\right)}^{T}\left\{\left(1+\mu \right)S\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)-g\left({\stackrel{˜}{y}}^{k}\right)+{B}^{T}{\stackrel{˜}{\lambda }}^{k}-{B}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)+{B}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\right\}\hfill \\ \phantom{\rule{1em}{0ex}}\le \mu {\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}.\hfill \end{array}$
(3.14)

It follows from (3.13) and (3.14) that

$\begin{array}{c}{\left(\begin{array}{c}{x}_{\ast }^{k}-{\stackrel{˜}{x}}^{k}\\ {y}_{\ast }^{k}-{\stackrel{˜}{y}}^{k}\\ {\lambda }_{\ast }^{k}-{\stackrel{˜}{\lambda }}^{k}\end{array}\right)}^{T}\left(\begin{array}{c}\left(1+\mu \right)R\left({x}^{k}-{\stackrel{˜}{x}}^{k}\right)-f\left({\stackrel{˜}{x}}^{k}\right)+{A}^{T}{\stackrel{˜}{\lambda }}^{k}-{A}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\\ \left(\left(1+\mu \right)S+{B}^{T}HB\right)\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)-g\left({\stackrel{˜}{y}}^{k}\right)+{B}^{T}{\stackrel{˜}{\lambda }}^{k}-{B}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\\ {H}^{-1}\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)-\left(A{\stackrel{˜}{x}}^{k}+B{\stackrel{˜}{y}}^{k}-b\right)\end{array}\right)\hfill \\ \phantom{\rule{1em}{0ex}}\le \mu {\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}+\mu {\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2},\hfill \end{array}$

which implies

$2{\alpha }_{k}{\left({w}_{\ast }^{k}-{\stackrel{˜}{w}}^{k}\right)}^{T}\left(G\left({w}^{k}-{\stackrel{˜}{w}}^{k}\right)-d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)\right)-2{\alpha }_{k}\mu {\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}-2{\alpha }_{k}\mu {\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}\le 0.$
(3.15)

Since ${w}^{\ast }\in {\mathcal{W}}^{\ast }$ and ${w}_{\ast }^{k}={P}_{\mathcal{W}}\left[{w}^{k}-{\alpha }_{k}{G}^{-1}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)\right]$, it follows from (2.3) that

${\parallel {w}_{\ast }^{k}-{w}^{\ast }\parallel }_{G}^{2}\le {\parallel {w}^{k}-{\alpha }_{k}{G}^{-1}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)-{w}^{\ast }\parallel }_{G}^{2}-{\parallel {w}^{k}-{\alpha }_{k}{G}^{-1}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)-{w}_{\ast }^{k}\parallel }_{G}^{2}.$
(3.16)

From (2.5), we get

$\begin{array}{rcl}{\parallel {w}^{k+1}\left({\alpha }_{k}\right)-{w}^{\ast }\parallel }_{G}^{2}& =& {\parallel \left(1-\sigma \right)\left({w}^{k}-{w}^{\ast }\right)+\sigma \left({w}_{\ast }^{k}-{w}^{\ast }\right)\parallel }_{G}^{2}\\ =& {\left(1-\sigma \right)}^{2}{\parallel {w}^{k}-{w}^{\ast }\parallel }_{G}^{2}+{\sigma }^{2}{\parallel {w}_{\ast }^{k}-{w}^{\ast }\parallel }_{G}^{2}\\ +2\sigma \left(1-\sigma \right){\left({w}^{k}-{w}^{\ast }\right)}^{T}G\left({w}_{\ast }^{k}-{w}^{\ast }\right).\end{array}$

Using the following identity:

$2{\left(a+b\right)}^{T}Gb={\parallel a+b\parallel }_{G}^{2}-{\parallel a\parallel }_{G}^{2}+{\parallel b\parallel }_{G}^{2}$

for $a={w}^{k}-{w}_{\ast }^{k}$, $b={w}_{\ast }^{k}-{w}^{\ast }$ and (3.16), we obtain

$\begin{array}{rcl}{\parallel {w}^{k+1}\left({\alpha }_{k}\right)-{w}^{\ast }\parallel }_{G}^{2}& =& {\left(1-\sigma \right)}^{2}{\parallel {w}^{k}-{w}^{\ast }\parallel }_{G}^{2}+{\sigma }^{2}{\parallel {w}_{\ast }^{k}-{w}^{\ast }\parallel }_{G}^{2}+\sigma \left(1-\sigma \right)\left\{{\parallel {w}^{k}-{w}^{\ast }\parallel }_{G}^{2}\\ -{\parallel {w}^{k}-{w}_{\ast }^{k}\parallel }_{G}^{2}+{\parallel {w}_{\ast }^{k}-{w}^{\ast }\parallel }_{G}^{2}\right\}\\ =& \left(1-\sigma \right){\parallel {w}^{k}-{w}^{\ast }\parallel }_{G}^{2}+\sigma {\parallel {w}_{\ast }^{k}-{w}^{\ast }\parallel }_{G}^{2}-\sigma \left(1-\sigma \right){\parallel {w}^{k}-{w}_{\ast }^{k}\parallel }_{G}^{2}\\ \le & \left(1-\sigma \right){\parallel {w}^{k}-{w}^{\ast }\parallel }_{G}^{2}+\sigma {\parallel {w}^{k}-{\alpha }_{k}{G}^{-1}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)-{w}^{\ast }\parallel }_{G}^{2}\\ -\sigma {\parallel {w}^{k}-{\alpha }_{k}{G}^{-1}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)-{w}_{\ast }^{k}\parallel }_{G}^{2}-\sigma \left(1-\sigma \right){\parallel {w}^{k}-{w}_{\ast }^{k}\parallel }_{G}^{2}.\end{array}$
(3.17)

Using the definition of $\mathrm{\Theta }\left({\alpha }_{k}\right)$ and (3.17), we get

$\begin{array}{rcl}\mathrm{\Theta }\left({\alpha }_{k}\right)& \ge & \sigma {\parallel {w}^{k}-{w}_{\ast }^{k}\parallel }_{G}^{2}+2\sigma {\alpha }_{k}{\left({w}_{\ast }^{k}-{w}^{k}\right)}^{T}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)\\ +2\sigma {\alpha }_{k}{\left({w}^{k}-{w}^{\ast }\right)}^{T}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right).\end{array}$
(3.18)

Using the monotonicity of f and g, we obtain

${\left(\begin{array}{c}{\stackrel{˜}{x}}^{k}-{x}^{\ast }\\ {\stackrel{˜}{y}}^{k}-{y}^{\ast }\\ {\stackrel{˜}{\lambda }}^{k}-{\lambda }^{\ast }\end{array}\right)}^{T}\left(\begin{array}{c}f\left({\stackrel{˜}{x}}^{k}\right)-{A}^{T}{\stackrel{˜}{\lambda }}^{k}\\ g\left({\stackrel{˜}{y}}^{k}\right)-{B}^{T}{\stackrel{˜}{\lambda }}^{k}\\ A{\stackrel{˜}{x}}^{k}+B{\stackrel{˜}{y}}^{k}-b\end{array}\right)\ge {\left(\begin{array}{c}{\stackrel{˜}{x}}^{k}-{x}^{\ast }\\ {\stackrel{˜}{y}}^{k}-{y}^{\ast }\\ {\stackrel{˜}{\lambda }}^{k}-{\lambda }^{\ast }\end{array}\right)}^{T}\left(\begin{array}{c}f\left({x}^{\ast }\right)-{A}^{T}{\lambda }^{\ast }\\ g\left({y}^{\ast }\right)-{B}^{T}{\lambda }^{\ast }\\ A{x}^{\ast }+B{y}^{\ast }-b\end{array}\right)\ge 0$

and consequently

$\begin{array}{rcl}{\left({\stackrel{˜}{w}}^{k}-{w}^{\ast }\right)}^{T}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)& \ge & {\left({\stackrel{˜}{w}}^{k}-{w}^{\ast }\right)}^{T}\left(\begin{array}{c}{A}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\\ {B}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\\ 0\end{array}\right)\\ =& {\left(A{\stackrel{˜}{x}}^{k}+B{\stackrel{˜}{y}}^{k}-b\right)}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\\ =& {\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}B\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\end{array}$

and it follows that

${\left({w}^{k}-{w}^{\ast }\right)}^{T}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)\ge {\left({w}^{k}-{\stackrel{˜}{w}}^{k}\right)}^{T}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)+{\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}B\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right).$
(3.19)

Applying (3.19) to the last term in the right side of (3.18), we obtain

$\begin{array}{rcl}\mathrm{\Theta }\left({\alpha }_{k}\right)& \ge & \sigma {\parallel {w}^{k}-{w}_{\ast }^{k}\parallel }_{G}^{2}+2\sigma {\alpha }_{k}{\left({w}_{\ast }^{k}-{w}^{k}\right)}^{T}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)\\ +2\sigma {\alpha }_{k}\left\{{\left({w}^{k}-{\stackrel{˜}{w}}^{k}\right)}^{T}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)+{\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}B\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\right\}\\ =& \sigma \left\{{\parallel {w}^{k}-{w}_{\ast }^{k}\parallel }_{G}^{2}+2{\alpha }_{k}{\left({w}_{\ast }^{k}-{\stackrel{˜}{w}}^{k}\right)}^{T}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)\\ +2{\alpha }_{k}{\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}B\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\right\}.\end{array}$
(3.20)

Adding (3.15) (multiplied by σ) to (3.20), we get

$\begin{array}{rcl}\mathrm{\Theta }\left({\alpha }_{k}\right)& \ge & \sigma \left\{{\parallel {w}^{k}-{w}_{\ast }^{k}\parallel }_{G}^{2}+2{\alpha }_{k}{\left({w}_{\ast }^{k}-{\stackrel{˜}{w}}^{k}\right)}^{T}G\left({w}^{k}-{\stackrel{˜}{w}}^{k}\right)-2{\alpha }_{k}\mu {\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}\\ -2{\alpha }_{k}\mu {\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}+2{\alpha }_{k}{\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}B\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\right\}\\ =& \sigma \left\{{\parallel {w}^{k}-{w}_{\ast }^{k}-{\alpha }_{k}\left({w}^{k}-{\stackrel{˜}{w}}^{k}\right)\parallel }_{G}^{2}-{\alpha }_{k}^{2}{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2}+2{\alpha }_{k}{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2}\\ -2{\alpha }_{k}\mu {\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}-2{\alpha }_{k}\mu {\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}^{2}+2{\alpha }_{k}{\left({\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\right)}^{T}B\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\right\}\end{array}$

and using the notation of ${\phi }_{k}$ in (2.7), the theorem is proved. □

From a computational point of view, a relaxation factor $\gamma \in \left(0,2\right)$ is preferable in the correction step. We are now in a position to prove the contractive property of the iterative sequence.

Theorem 3.2 Let ${w}^{\ast }\in {\mathcal{W}}^{\ast }$ be a solution of SVI and let ${w}^{k+1}\left(\gamma {\alpha }_{k}\right)$ be generated by (2.5) with ${\alpha }_{k}$ replaced by $\gamma {\alpha }_{k}$. Then the sequences $\left\{{w}^{k}\right\}$ and $\left\{{\stackrel{˜}{w}}^{k}\right\}$ are bounded, and

${\parallel {w}^{k+1}\left(\gamma {\alpha }_{k}\right)-{w}^{\ast }\parallel }_{G}^{2}\le {\parallel {w}^{k}-{w}^{\ast }\parallel }_{G}^{2}-c{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2},$
(3.21)

where

$c:=\frac{\sigma \gamma \left(2-\gamma \right)}{4}>0.$

Proof It follows from (3.11), (2.10), and (2.11) that

$\begin{array}{rcl}{\parallel {w}^{k+1}\left(\gamma {\alpha }_{k}\right)-{w}^{\ast }\parallel }_{G}^{2}& \le & {\parallel {w}^{k}-{w}^{\ast }\parallel }_{G}^{2}-\sigma \left(2\gamma {\alpha }_{k}{\phi }_{k}-{\gamma }^{2}{\alpha }_{k}^{2}{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2}\right)\\ =& {\parallel {w}^{k}-{w}^{\ast }\parallel }_{G}^{2}-\gamma \left(2-\gamma \right){\alpha }_{k}\sigma {\phi }_{k}\\ \le & {\parallel {w}^{k}-{w}^{\ast }\parallel }_{G}^{2}-\frac{\sigma \gamma \left(2-\gamma \right)}{4}\left({\parallel A{\stackrel{˜}{x}}^{k}+B{y}^{k}-b\parallel }_{H}^{2}+{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2}\right).\end{array}$

Since $\gamma \in \left(0,2\right)$, we have

${\parallel {w}^{k+1}-{w}^{\ast }\parallel }_{G}\le {\parallel {w}^{k}-{w}^{\ast }\parallel }_{G}\le \cdots \le {\parallel {w}^{0}-{w}^{\ast }\parallel }_{G}$

and thus $\left\{{w}^{k}\right\}$ is a bounded sequence.

Summing the inequality (3.21) over $k$, it follows that

$\sum _{k=0}^{\mathrm{\infty }}c{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2}\le {\parallel {w}^{0}-{w}^{\ast }\parallel }_{G}^{2}<+\mathrm{\infty },$

which means that

$\underset{k\to \mathrm{\infty }}{lim}{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}=0.$
(3.22)

Since $\left\{{w}^{k}\right\}$ is a bounded sequence, we conclude that $\left\{{\stackrel{˜}{w}}^{k}\right\}$ is also bounded. □

## 4 Convergence of the proposed method

In this section, we prove the global convergence of the proposed method. The following results can be proved by using the technique of Lemma 5.1 and Theorem 5.1 in .

Lemma 4.1 For given ${w}^{k}=\left({x}^{k},{y}^{k},{\lambda }^{k}\right)\in {\mathcal{R}}_{++}^{n}×{\mathcal{R}}_{++}^{m}×{\mathcal{R}}^{l}$, let ${\stackrel{˜}{w}}^{k}=\left({\stackrel{˜}{x}}^{k},{\stackrel{˜}{y}}^{k},{\stackrel{˜}{\lambda }}^{k}\right)$ be generated by (2.4a)-(2.4c). Then for any $w=\left(x,y,\lambda \right)\in \mathcal{W}$, we have

${\left(x-{\stackrel{˜}{x}}^{k}\right)}^{T}\left(f\left({\stackrel{˜}{x}}^{k}\right)-{A}^{T}{\stackrel{˜}{\lambda }}^{k}+{A}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\right)\ge {\left({x}^{k}-{\stackrel{˜}{x}}^{k}\right)}^{T}R\left\{\left(1+\mu \right)x-\left(\mu {x}^{k}+{\stackrel{˜}{x}}^{k}\right)\right\}$
(4.1)

and

${\left(y-{\stackrel{˜}{y}}^{k}\right)}^{T}\left(g\left({\stackrel{˜}{y}}^{k}\right)-{B}^{T}{\stackrel{˜}{\lambda }}^{k}\right)\ge {\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)}^{T}S\left\{\left(1+\mu \right)y-\left(\mu {y}^{k}+{\stackrel{˜}{y}}^{k}\right)\right\}.$
(4.2)

Proof Applying Lemma 2.1 to the prediction step of LQP-ADM (by setting ${u}^{k}={x}^{k}$, $u={\stackrel{˜}{x}}^{k}$, $q\left(u\right)=f\left({\stackrel{˜}{x}}^{k}\right)-{A}^{T}{\stackrel{˜}{\lambda }}^{k}+{A}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)$, and $v=x$ in (2.9)), it follows that

$\begin{array}{c}{\left(x-{\stackrel{˜}{x}}^{k}\right)}^{T}\left(f\left({\stackrel{˜}{x}}^{k}\right)-{A}^{T}{\stackrel{˜}{\lambda }}^{k}+{A}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)\right)\hfill \\ \phantom{\rule{1em}{0ex}}\ge \frac{1+\mu }{2}\left({\parallel {\stackrel{˜}{x}}^{k}-x\parallel }_{R}^{2}-{\parallel {x}^{k}-x\parallel }_{R}^{2}\right)+\frac{1-\mu }{2}{\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}.\hfill \end{array}$

By a simple manipulation, we have

$\begin{array}{c}\frac{1+\mu }{2}\left({\parallel {\stackrel{˜}{x}}^{k}-x\parallel }_{R}^{2}-{\parallel {x}^{k}-x\parallel }_{R}^{2}\right)+\frac{1-\mu }{2}{\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}\hfill \\ \phantom{\rule{1em}{0ex}}=\left(1+\mu \right){x}^{T}R{x}^{k}-\left(1+\mu \right){x}^{T}R{\stackrel{˜}{x}}^{k}-\left(1-\mu \right){\left({\stackrel{˜}{x}}^{k}\right)}^{T}R{x}^{k}-\mu {\parallel {x}^{k}\parallel }_{R}^{2}+{\parallel {\stackrel{˜}{x}}^{k}\parallel }_{R}^{2}\hfill \\ \phantom{\rule{1em}{0ex}}=\left(1+\mu \right){x}^{T}R\left({x}^{k}-{\stackrel{˜}{x}}^{k}\right)-{\left({x}^{k}-{\stackrel{˜}{x}}^{k}\right)}^{T}R\left(\mu {x}^{k}+{\stackrel{˜}{x}}^{k}\right)\hfill \\ \phantom{\rule{1em}{0ex}}={\left({x}^{k}-{\stackrel{˜}{x}}^{k}\right)}^{T}R\left\{\left(1+\mu \right)x-\left(\mu {x}^{k}+{\stackrel{˜}{x}}^{k}\right)\right\},\hfill \end{array}$

and the assertion (4.1) is proved. Similarly we can prove the assertion (4.2). □

Now, we are ready to prove the convergence of the proposed method.

Theorem 4.1 The sequence $\left\{{w}^{k}\right\}$ generated by the proposed method converges to some ${w}^{\mathrm{\infty }}$ which is a solution of SVI.

Proof It follows from (3.22) that

$\underset{k\to \mathrm{\infty }}{lim}{\parallel {x}^{k}-{\stackrel{˜}{x}}^{k}\parallel }_{R}=0,\phantom{\rule{2em}{0ex}}\underset{k\to \mathrm{\infty }}{lim}{\parallel {y}^{k}-{\stackrel{˜}{y}}^{k}\parallel }_{S}=0$
(4.3)

and

$\underset{k\to \mathrm{\infty }}{lim}{\parallel {\lambda }^{k}-{\stackrel{˜}{\lambda }}^{k}\parallel }_{{H}^{-1}}=\underset{k\to \mathrm{\infty }}{lim}{\parallel A{\stackrel{˜}{x}}^{k}+B{\stackrel{˜}{y}}^{k}-b\parallel }_{H}=0.$
(4.4)

Moreover, (4.1) and (4.2) imply that

${\left(x-{\stackrel{˜}{x}}^{k}\right)}^{T}\left(f\left({\stackrel{˜}{x}}^{k}\right)-{A}^{T}{\stackrel{˜}{\lambda }}^{k}\right)\ge {\left({x}^{k}-{\stackrel{˜}{x}}^{k}\right)}^{T}R\left\{\left(1+\mu \right)x-\left(\mu {x}^{k}+{\stackrel{˜}{x}}^{k}\right)\right\}-{\left(x-{\stackrel{˜}{x}}^{k}\right)}^{T}{A}^{T}HB\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)$

and

${\left(y-{\stackrel{˜}{y}}^{k}\right)}^{T}\left(g\left({\stackrel{˜}{y}}^{k}\right)-{B}^{T}{\stackrel{˜}{\lambda }}^{k}\right)\ge {\left({y}^{k}-{\stackrel{˜}{y}}^{k}\right)}^{T}S\left\{\left(1+\mu \right)y-\left(\mu {y}^{k}+{\stackrel{˜}{y}}^{k}\right)\right\}.$

We deduce from (4.3) that

$\left\{\begin{array}{l}{lim}_{k\to \mathrm{\infty }}{\left(x-{\stackrel{˜}{x}}^{k}\right)}^{T}\left\{f\left({\stackrel{˜}{x}}^{k}\right)-{A}^{T}{\stackrel{˜}{\lambda }}^{k}\right\}\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in {\mathcal{R}}_{++}^{n},\\ {lim}_{k\to \mathrm{\infty }}{\left(y-{\stackrel{˜}{y}}^{k}\right)}^{T}\left\{g\left({\stackrel{˜}{y}}^{k}\right)-{B}^{T}{\stackrel{˜}{\lambda }}^{k}\right\}\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in {\mathcal{R}}_{++}^{m}.\end{array}$
(4.5)

Since $\left\{{w}^{k}\right\}$ is bounded, it has at least one cluster point. Let ${w}^{\mathrm{\infty }}$ be a cluster point of $\left\{{w}^{k}\right\}$ and let the subsequence $\left\{{w}^{{k}_{j}}\right\}$ converge to ${w}^{\mathrm{\infty }}$. It follows from (4.4) and (4.5) that

$\left\{\begin{array}{l}{lim}_{j\to \mathrm{\infty }}{\left(x-{x}^{{k}_{j}}\right)}^{T}\left\{f\left({x}^{{k}_{j}}\right)-{A}^{T}{\lambda }^{{k}_{j}}\right\}\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in {\mathcal{R}}_{++}^{n},\\ {lim}_{j\to \mathrm{\infty }}{\left(y-{y}^{{k}_{j}}\right)}^{T}\left\{g\left({y}^{{k}_{j}}\right)-{B}^{T}{\lambda }^{{k}_{j}}\right\}\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in {\mathcal{R}}_{++}^{m},\\ {lim}_{j\to \mathrm{\infty }}\left(A{x}^{{k}_{j}}+B{y}^{{k}_{j}}-b\right)=0.\end{array}$

Consequently

$\left\{\begin{array}{l}{\left(x-{x}^{\mathrm{\infty }}\right)}^{T}\left\{f\left({x}^{\mathrm{\infty }}\right)-{A}^{T}{\lambda }^{\mathrm{\infty }}\right\}\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in {\mathcal{R}}_{++}^{n},\\ {\left(y-{y}^{\mathrm{\infty }}\right)}^{T}\left\{g\left({y}^{\mathrm{\infty }}\right)-{B}^{T}{\lambda }^{\mathrm{\infty }}\right\}\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in {\mathcal{R}}_{++}^{m},\\ A{x}^{\mathrm{\infty }}+B{y}^{\mathrm{\infty }}-b=0,\end{array}$

which means that ${w}^{\mathrm{\infty }}$ is a solution of SVI.

Now we prove that the sequence $\left\{{w}^{k}\right\}$ converges to ${w}^{\mathrm{\infty }}$. Since

$\underset{k\to \mathrm{\infty }}{lim}{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}=0\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\left\{{\stackrel{˜}{w}}^{{k}_{j}}\right\}\to {w}^{\mathrm{\infty }}$

for any $ϵ>0$, there exists an $l>0$ such that

$\parallel {\stackrel{˜}{w}}^{{k}_{l}}-{w}^{\mathrm{\infty }}\parallel <\frac{ϵ}{2}\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\parallel {w}^{{k}_{l}}-{\stackrel{˜}{w}}^{{k}_{l}}\parallel <\frac{ϵ}{2}.$
(4.6)

Therefore, for any $k\ge {k}_{l}$, it follows from (3.21) and (4.6) that

$\parallel {w}^{k}-{w}^{\mathrm{\infty }}\parallel \le \parallel {w}^{{k}_{l}}-{w}^{\mathrm{\infty }}\parallel \le \parallel {w}^{{k}_{l}}-{\stackrel{˜}{w}}^{{k}_{l}}\parallel +\parallel {\stackrel{˜}{w}}^{{k}_{l}}-{w}^{\mathrm{\infty }}\parallel <ϵ.$

This implies that the sequence $\left\{{w}^{k}\right\}$ converges to ${w}^{\mathrm{\infty }}$, which is a solution of SVI. □

## 5 Comparison

Let

${w}_{I}^{k+1}\left({\alpha }_{k}\right):={P}_{W}\left[{w}^{k}-{\alpha }_{k}{G}^{-1}d\left({w}^{k},{\stackrel{˜}{w}}^{k}\right)\right]$
(5.1)

and

${w}_{II}^{k+1}\left({\alpha }_{k}\right):={P}_{W}\left[{w}^{k}-{\alpha }_{k}\left({w}^{k}-{\stackrel{˜}{w}}^{k}\right)\right]$
(5.2)

represent the new iterates generated by the algorithm presented in this paper and Li’s algorithm in , respectively, where $\sigma =1$. Let

${\mathrm{\Theta }}_{I}\left({\alpha }_{k}\right):={\parallel {w}^{k}-{w}^{\ast }\parallel }_{G}^{2}-{\parallel {w}_{I}^{k+1}\left({\alpha }_{k}\right)-{w}^{\ast }\parallel }_{G}^{2}$

and

${\mathrm{\Theta }}_{II}\left({\alpha }_{k}\right):={\parallel {w}^{k}-{w}^{\ast }\parallel }_{G}^{2}-{\parallel {w}_{II}^{k+1}\left({\alpha }_{k}\right)-{w}^{\ast }\parallel }_{G}^{2}$

measure the progress made by the respective iterates. From (3.11), we have

${\mathrm{\Theta }}_{I}\left({\alpha }_{k}\right)\ge {q}_{I}\left({\alpha }_{k}\right):={\parallel {w}^{k}-{w}_{I}^{k+1}\left({\alpha }_{k}\right)-{\alpha }_{k}\left({w}^{k}-{\stackrel{˜}{w}}^{k}\right)\parallel }_{G}^{2}+2{\alpha }_{k}{\phi }_{k}-{\alpha }_{k}^{2}{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2}.$

Theorem 3.5 of  indicates that

${\mathrm{\Theta }}_{II}\left({\alpha }_{k}\right)\ge {q}_{II}\left({\alpha }_{k}\right):=2{\alpha }_{k}{\phi }_{k}-{\alpha }_{k}^{2}{\parallel {w}^{k}-{\stackrel{˜}{w}}^{k}\parallel }_{G}^{2}.$

Note that the optimal step sizes used in both methods are identical. It is easy to prove that

${q}_{I}\left({\alpha }_{k}\right)\ge {q}_{II}\left({\alpha }_{k}\right).$
(5.3)
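Indeed, (5.3) follows by comparing the two lower bounds term by term: their difference is a squared $G$-norm, hence nonnegative:

```latex
q_I(\alpha_k) - q_{II}(\alpha_k)
  = \bigl\| w^k - w_I^{k+1}(\alpha_k) - \alpha_k \bigl(w^k - \tilde{w}^k\bigr) \bigr\|_G^2
  \ \ge\ 0 .
```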

Inequality (5.3) shows that the lower bound on the progress made by the proposed method at each iteration is at least as large as that of the method in , which explains theoretically why the proposed method is expected to outperform it.

## 6 Preliminary computational results

In this section, we report some numerical results for the proposed method. We consider the following optimization problem with matrix variables:

$min\left\{\frac{1}{2}{\parallel X-C\parallel }_{F}^{2}|X\in {S}_{+}^{n}\right\},$
(6.1)

where ${\parallel \cdot \parallel }_{F}$ is the matrix Frobenius norm, i.e., ${\parallel C\parallel }_{F}={\left({\sum }_{i=1}^{n}{\sum }_{j=1}^{n}{|{C}_{ij}|}^{2}\right)}^{1/2}$,

${S}_{+}^{n}=\left\{H\in {\mathcal{R}}^{n×n}\mid {H}^{T}=H,H⪰0\right\}.$

Note that the matrix Frobenius norm is induced by the inner product

$〈A,B〉=Trace\left({A}^{T}B\right).$
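As a quick sanity check, the following Python sketch (illustrative only; the paper's experiments were run in Matlab) verifies that the trace inner product coincides with the entrywise sum and induces the Frobenius norm:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# <A, B> = Trace(A^T B) equals the entrywise sum of the products A_ij * B_ij
inner = np.trace(A.T @ B)
assert np.isclose(inner, np.sum(A * B))

# The induced norm sqrt(<C, C>) is exactly the Frobenius norm
C = rng.standard_normal((4, 4))
assert np.isclose(np.sqrt(np.trace(C.T @ C)), np.linalg.norm(C, "fro"))
```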

Note that the problem (6.1) is equivalent to the following:

$min\left\{\frac{1}{2}{\parallel X-C\parallel }_{F}^{2}+\frac{1}{2}{\parallel Y-C\parallel }_{F}^{2}|X-Y=0,X\in {S}_{+}^{n},Y\in {S}_{+}^{n}\right\},$
(6.2)

which is equivalent to the following variational inequality: find ${u}^{\ast }=\left({X}^{\ast },{Y}^{\ast },{Z}^{\ast }\right)\in \mathrm{\Omega }={S}_{+}^{n}×{S}_{+}^{n}×{\mathcal{R}}^{n×n}$ such that

$\left\{\begin{array}{l}〈X-{X}^{\ast },\left({X}^{\ast }-C\right)-{Z}^{\ast }〉\ge 0,\\ 〈Y-{Y}^{\ast },\left({Y}^{\ast }-C\right)+{Z}^{\ast }〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }u=\left(X,Y,Z\right)\in \mathrm{\Omega },\\ {X}^{\ast }-{Y}^{\ast }=0.\end{array}$
(6.3)

The problem (6.3) is a special case of (1.3)-(1.4) with matrix variables where $A={I}_{n×n}$, $B=-{I}_{n×n}$, $b=0$, $f\left(X\right)=X-C$, $g\left(Y\right)=Y-C$ and $\mathcal{W}={S}_{+}^{n}×{S}_{+}^{n}×{\mathcal{R}}^{n×n}$.
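For this special case the exact solution of (6.1) is available in closed form: the projection of the symmetric part of $C$ onto ${S}_{+}^{n}$, obtained by zeroing out the negative eigenvalues. The sketch below (a Python illustration under these assumptions, not the paper's Matlab test code) computes this projection and checks the variational inequality (6.3) with ${X}^{\ast }={Y}^{\ast }$ and ${Z}^{\ast }=0$, in which case both inequalities in (6.3) reduce to the projection optimality condition:

```python
import numpy as np

def proj_psd(C):
    """Frobenius-norm projection of C onto S_+^n: symmetrize,
    then clip the negative eigenvalues to zero."""
    S = (C + C.T) / 2.0
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T

rng = np.random.default_rng(0)
n = 5
C = rng.random((n, n))            # C = rand(n), as in the experiments
X_star = proj_psd(C)              # candidate X* = Y*

# X* must be symmetric positive semidefinite
assert np.min(np.linalg.eigvalsh(X_star)) >= -1e-12

# Check <X - X*, X* - C> >= 0 for random test points X in S_+^n
# (the antisymmetric part of C contributes nothing to this inner product)
for _ in range(100):
    M = rng.standard_normal((n, n))
    X = M @ M.T                    # random PSD matrix
    assert np.trace((X - X_star).T @ (X_star - C)) >= -1e-10
```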

For simplicity, we take $R=r{I}_{n×n}$, $S=s{I}_{n×n}$, and $H={I}_{n×n}$, where $r>0$ and $s>0$ are scalars. In all tests we take $\mu =0.5$, $C=rand\left(n\right)$, and the initial point $\left({X}^{0},{Y}^{0},{Z}^{0}\right)=\left({I}_{n×n},{I}_{n×n},{0}_{n×n}\right)$. The iteration is stopped as soon as

$max\left\{\parallel {X}^{k}-{\stackrel{˜}{X}}^{k}\parallel ,\parallel {Y}^{k}-{\stackrel{˜}{Y}}^{k}\parallel ,\parallel {Z}^{k}-{\stackrel{˜}{Z}}^{k}\parallel \right\}\le {10}^{-6}.$

All codes were written in Matlab, and we compare the proposed method with that in . The iteration numbers, denoted by k, and the computational times for the problem (6.1) with different dimensions are given in Tables 1-3.

Tables 1-3 show that the proposed method is flexible and efficient: it needs fewer iterations and less computational time than the method presented in , which clearly illustrates its efficiency and justifies the theoretical assertions.

Remark 6.1 For the example used in the numerical experiments of , we obtained the same results as reported there, so we do not repeat them here.

## 7 Conclusions

In this paper, we propose a new logarithmic-quadratic proximal alternating direction method (LQP-ADM) for solving structured variational inequalities. Each iteration of the new LQP-ADM consists of a prediction step, in which a prediction point is obtained as in , and a correction step, in which the new iterate is generated as a convex combination of the previous iterate and the point produced by a projection-type method along a new descent direction. Global convergence of the proposed method is proved under mild assumptions. Further, it is proved that the lower bound of the progress made by the proposed method is greater than that of the method in . Some preliminary numerical results are reported to verify the efficiency of the proposed LQP-ADM and thus justify the theoretical assertions.

## References

1. Eckstein J, Bertsekas DB: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55: 293-318. 10.1007/BF01581204

2. Fortin M, Glowinski R: Augmented Lagrangian Methods: Applications to the Solution of Boundary-Valued Problems. North-Holland, Amsterdam; 1983.

3. Gabay D: Applications of the method of multipliers to variational inequalities. In Augmented Lagrange Methods: Applications to the Solution of Boundary-Valued Problems. Edited by: Fortin M, Glowinski R. North-Holland, Amsterdam; 1983:299-331.

4. Gabay D, Mercier B: A dual algorithm for the solution of nonlinear variational problems via finite-element approximations. Comput. Math. Appl. 1976, 2: 17-40. 10.1016/0898-1221(76)90003-1

5. Glowinski R: Numerical Methods for Nonlinear Variational Problems. Springer, New York; 1984.

6. Glowinski R, Le Tallec P SIAM Studies in Applied Mathematics. In Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics. SIAM, Philadelphia; 1989.

7. Teboulle M: Convergence of proximal-like algorithms. SIAM J. Optim. 1997, 7: 1069-1083. 10.1137/S1052623495292130

8. He BS, Yang H: Some convergence properties of a method of multipliers for linearly constrained monotone variational inequalities. Oper. Res. Lett. 1998, 23: 151-161. 10.1016/S0167-6377(98)00044-3

9. Kontogiorgis S, Meyer RR: A variable-penalty alternating directions method for convex optimization. Math. Program. 1998, 83: 29-53.

10. Jiang ZK, Bnouhachem A: A projection-based prediction-correction method for structured monotone variational inequalities. Appl. Math. Comput. 2008, 202: 747-759. 10.1016/j.amc.2008.03.018

11. Tao M, Yuan XM:On the $O\left(1/t\right)$ convergence rate of alternating direction method with logarithmic-quadratic proximal regularization. SIAM J. Optim. 2012,22(4):1431-1448. 10.1137/110847639

12. Chen G, Teboulle M: A proximal-based decomposition method for convex minimization problems. Math. Program. 1994, 64: 81-101. 10.1007/BF01582566

13. Eckstein J: Some saddle-function splitting methods for convex programming. Optim. Methods Softw. 1994, 4: 75-83. 10.1080/10556789408805578

14. He BS, Liao LZ, Han DR, Yang H: A new inexact alternating directions method for monotone variational inequalities. Math. Program. 2002, 92: 103-118. 10.1007/s101070100280

15. Yuan XM, Li M: An LQP-based decomposition method for solving a class of variational inequalities. SIAM J. Optim. 2011,21(4):1309-1318. 10.1137/070703557

16. Auslender A, Teboulle M, Ben-Tiba S: A logarithmic-quadratic proximal method for variational inequalities. Comput. Optim. Appl. 1999, 12: 31-40. 10.1023/A:1008607511915

17. Bnouhachem A, Benazza H, Khalfaoui M: An inexact alternating direction method for solving a class of structured variational inequalities. Appl. Math. Comput. 2013, 219: 7837-7846. 10.1016/j.amc.2013.01.067

18. Bnouhachem A, Xu MH: An inexact LQP alternating direction method for solving a class of structured variational inequalities. Comput. Math. Appl. 2014, 67: 671-680. 10.1016/j.camwa.2013.12.010

19. Li M: A hybrid LQP-based method for structured variational inequalities. Int. J. Comput. Math. 2012,89(10):1412-1425. 10.1080/00207160.2012.688822

## Author information

### Corresponding author

Correspondence to Abdellah Bnouhachem.

### Competing interests

The author declares that he has no competing interests.

## Rights and permissions


Bnouhachem, A. On LQP alternating direction method for solving variational inequality problems with separable structure. J Inequal Appl 2014, 80 (2014). https://doi.org/10.1186/1029-242X-2014-80


### Keywords

• variational inequalities
• monotone operator 