# A parallel resolvent method for solving a system of nonlinear mixed variational inequalities

## Abstract

In this paper, we introduce a system of generalized nonlinear mixed variational inequalities and establish its approximate solvability by using a parallel resolvent technique. Our results may be viewed as an extension and improvement of previously known results for variational inequalities.

## 1 Introduction and preliminaries

Variational inequality theory, introduced by Stampacchia [1] in 1964, has developed into an interesting branch of the mathematical and engineering sciences with a wide range of applications in industry, finance, economics, and the pure and applied sciences. In 2001, Verma [2] introduced a new system of strongly monotone variational inequalities and studied the approximate solvability of the system by means of a projection method. The basic idea of this technique is to establish the equivalence between variational inequalities and fixed point problems. This equivalence has been used to develop several projection-type iterative methods for solving variational inequalities and related optimization problems. Several extensions and generalizations of the system of strongly monotone variational inequalities have been considered by many authors. Inspired and motivated by research in this area, we introduce a system of generalized nonlinear mixed variational inequalities involving two different nonlinear operators. It is well known that if the nonlinear term in a mixed variational inequality is proper, convex, and lower semicontinuous, then one can establish the equivalence between the mixed variational inequality and a fixed point problem. Using the parallel algorithm considered in , we suggest and analyze a parallel iterative method for solving this system. Our results may be viewed as an extension and improvement of recent results.

Let $\mathcal{H}$ be a real Hilbert space whose inner product and norm are denoted by $〈\cdot ,\cdot 〉$ and $\parallel \cdot \parallel$, respectively. Let K be a nonempty closed convex subset of $\mathcal{H}$. Let ${T}_{1},{T}_{2}:K×K\to \mathcal{H}$ be two nonlinear operators, and let ${\phi }_{1},{\phi }_{2}:\mathcal{H}\to \mathbb{R}\cup \left\{+\mathrm{\infty }\right\}$ be proper convex lower semi-continuous functions on $\mathcal{H}$. We consider the following system of generalized nonlinear mixed variational inequalities (abbreviated as SNMVI): Find $\left({x}^{\ast },{y}^{\ast }\right)\in K×K$ such that

$\left\{\begin{array}{c}〈\rho {T}_{1}\left({y}^{\ast },{x}^{\ast }\right)+g\left({x}^{\ast }\right)-g\left({y}^{\ast }\right),g\left(x\right)-g\left({x}^{\ast }\right)〉+{\phi }_{1}\left(g\left(x\right)\right)-{\phi }_{1}\left(g\left({x}^{\ast }\right)\right)\hfill \\ \phantom{\rule{1em}{0ex}}\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }g\left(x\right)\in K,\hfill \\ 〈\eta {T}_{2}\left({x}^{\ast },{y}^{\ast }\right)+g\left({y}^{\ast }\right)-g\left({x}^{\ast }\right),g\left(x\right)-g\left({x}^{\ast }\right)〉+{\phi }_{2}\left(g\left(x\right)\right)-{\phi }_{2}\left(g\left({y}^{\ast }\right)\right)\hfill \\ \phantom{\rule{1em}{0ex}}\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }g\left(x\right)\in K,\hfill \end{array}$
(1.1)

where $g:K\to K$ is a mapping and $\rho ,\eta >0$.

Note that if ${\phi }_{1}={\phi }_{2}={\delta }_{K}$ and $g=I$, where I is the identity operator and ${\delta }_{K}$ is the indicator function of K defined by

${\delta }_{K}\left(x\right)=\left\{\begin{array}{cc}0,\hfill & x\in K,\hfill \\ +\mathrm{\infty },\hfill & x\notin K,\hfill \end{array}$

then problem (1.1) reduces to the following system of nonlinear variational inequalities (SNVI) considered in  of finding $\left({x}^{\ast },{y}^{\ast }\right)\in K×K$ such that

$\left\{\begin{array}{cc}〈\rho {T}_{1}\left({y}^{\ast },{x}^{\ast }\right)+{x}^{\ast }-{y}^{\ast },x-{x}^{\ast }〉\ge 0,\hfill & \mathrm{\forall }x\in K,\hfill \\ 〈\eta {T}_{2}\left({x}^{\ast },{y}^{\ast }\right)+{y}^{\ast }-{x}^{\ast },x-{x}^{\ast }〉\ge 0,\hfill & \mathrm{\forall }x\in K.\hfill \end{array}$
(1.2)

If ${T}_{1}={T}_{2}=T$ and $g=I$, where I is the identity operator, then problem (1.1) is equivalent to the following system of nonlinear mixed variational inequalities considered in [7, 8] of finding $\left({x}^{\ast },{y}^{\ast }\right)\in K×K$ such that

$\left\{\begin{array}{cc}〈\rho T\left({y}^{\ast },{x}^{\ast }\right)+{x}^{\ast }-{y}^{\ast },x-{x}^{\ast }〉+{\phi }_{1}\left(x\right)-{\phi }_{1}\left({x}^{\ast }\right)\ge 0,\hfill & \mathrm{\forall }x\in K,\hfill \\ 〈\eta T\left({x}^{\ast },{y}^{\ast }\right)+{y}^{\ast }-{x}^{\ast },x-{x}^{\ast }〉+{\phi }_{2}\left(x\right)-{\phi }_{2}\left({y}^{\ast }\right)\ge 0,\hfill & \mathrm{\forall }x\in K.\hfill \end{array}$
(1.3)

If ${\phi }_{1}={\phi }_{2}={\delta }_{K}$ and ${T}_{1},{T}_{2}:K\to \mathcal{H}$ are univariate mappings, then problem (1.1) reduces to the following system of nonlinear variational inequalities (SNVI) considered in  of finding $\left({x}^{\ast },{y}^{\ast }\right)\in K×K$ such that

$\left\{\begin{array}{cc}〈\rho {T}_{1}\left({y}^{\ast }\right)+g\left({x}^{\ast }\right)-g\left({y}^{\ast }\right),g\left(x\right)-g\left({x}^{\ast }\right)〉\ge 0,\hfill & \mathrm{\forall }g\left(x\right)\in K,\hfill \\ 〈\eta {T}_{2}\left({x}^{\ast }\right)+g\left({y}^{\ast }\right)-g\left({x}^{\ast }\right),g\left(x\right)-g\left({x}^{\ast }\right)〉\ge 0,\hfill & \mathrm{\forall }g\left(x\right)\in K,\hfill \end{array}$
(1.4)

where $g:K\to K$ is a mapping.

If ${T}_{1}={T}_{2}=T$, $g=I$ and ${\phi }_{1}={\phi }_{2}={\delta }_{K}$, where T is a univariate mapping defined by $T:K\to \mathcal{H}$, then problem (1.1) reduces to the following system of variational inequalities (SVI) considered in  of finding $\left({x}^{\ast },{y}^{\ast }\right)\in K×K$ such that

$\left\{\begin{array}{cc}〈\rho T\left({y}^{\ast }\right)+{x}^{\ast }-{y}^{\ast },x-{x}^{\ast }〉\ge 0,\hfill & \mathrm{\forall }x\in K,\hfill \\ 〈\eta T\left({x}^{\ast }\right)+{y}^{\ast }-{x}^{\ast },x-{x}^{\ast }〉\ge 0,\hfill & \mathrm{\forall }x\in K.\hfill \end{array}$
(1.5)

We also need the following well-known results.

Definition 1.1 Define the norm $\parallel \cdot \parallel$ on $\mathcal{H}×\mathcal{H}$ by

$\parallel \left(u,v\right)\parallel =\parallel u\parallel +\parallel v\parallel ,\phantom{\rule{1em}{0ex}}\mathrm{\forall }\left(u,v\right)\in \mathcal{H}×\mathcal{H}.$

Definition 1.2 For any maximal monotone operator T, the resolvent operator associated with T, for any $\lambda >0$, is defined by

${J}_{T}^{\lambda }\left(u\right)={\left(I+\lambda T\right)}^{-1}\left(u\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }u\in \mathcal{H}.$

Remark 1.1 It is well known that the subdifferential ∂φ of a proper convex lower semi-continuous function $\phi :\mathcal{H}\to \mathbb{R}\cup \left\{+\mathrm{\infty }\right\}$ is a maximal monotone operator. We can define its resolvent operator by

${J}_{\phi }^{\lambda }\left(u\right)={\left(I+\lambda \partial \phi \right)}^{-1}\left(u\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }u\in \mathcal{H},$

where $\lambda >0$ and ${J}_{\phi }^{\lambda }$ is defined everywhere on $\mathcal{H}$.

Lemma 1.1 

For given $z\in \mathcal{H}$, an element $u\in \mathcal{H}$ satisfies the inequality

$〈u-z,x-u〉+\lambda \phi \left(x\right)-\lambda \phi \left(u\right)\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in \mathcal{H},$

if and only if $u={J}_{\phi }^{\lambda }\left(z\right)$, where ${J}_{\phi }^{\lambda }\left(u\right)={\left(I+\lambda \partial \phi \right)}^{-1}\left(u\right)$ is the resolvent operator and $\lambda >0$.

If φ is the indicator function of a closed convex set $K\subseteq \mathcal{H}$, then the resolvent operator ${J}_{\phi }^{\lambda }\left(\cdot \right)$ reduces to the projection operator ${P}_{K}\left(\cdot \right)$. It is well known that ${J}_{\phi }^{\lambda }$ is nonexpansive, i.e.,

$\parallel {J}_{\phi }^{\lambda }\left(u\right)-{J}_{\phi }^{\lambda }\left(v\right)\parallel \le \parallel u-v\parallel ,\phantom{\rule{1em}{0ex}}\mathrm{\forall }u,v\in \mathcal{H}.$
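To make the resolvent concrete, the following minimal sketch (our own illustration; the choice $\phi \left(u\right)=|u|$ and all function names are assumptions, not taken from the paper) computes ${J}_{\phi }^{\lambda }$ in the one-dimensional case $\phi \left(u\right)=|u|$, where it reduces to the classical soft-thresholding operator, and checks the nonexpansiveness inequality above on a grid of points.

```python
# Sketch of Definition 1.2 / Remark 1.1 in one dimension with phi(u) = |u|.
# Here J_phi^lambda(u) = (I + lambda * d(phi))^{-1}(u) is soft-thresholding.

def resolvent_abs(u, lam):
    """Resolvent of phi = |.|: the unique v with v + lam * s = u, s in d|v|."""
    if u > lam:
        return u - lam
    if u < -lam:
        return u + lam
    return 0.0

# Numerical check of nonexpansiveness: |J(u) - J(v)| <= |u - v|.
grid = [-3.0 + 0.25 * k for k in range(25)]
assert all(abs(resolvent_abs(u, 1.0) - resolvent_abs(v, 1.0)) <= abs(u - v) + 1e-12
           for u in grid for v in grid)
```

The same check applies verbatim to the projection ${P}_{K}$, which is the special case $\phi ={\delta }_{K}$.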

Based on Lemma 1.1, similar to that in  and , the following statement gives an equivalent characterization of problem (1.1).

Lemma 1.2 Problem (1.1) is equivalent to finding $\left({x}^{\ast },{y}^{\ast }\right)\in K×K$ such that

$\left\{\begin{array}{c}g\left({x}^{\ast }\right)={J}_{{\phi }_{1}}^{1}\left[g\left({y}^{\ast }\right)-\rho {T}_{1}\left({y}^{\ast },{x}^{\ast }\right)\right],\hfill \\ g\left({y}^{\ast }\right)={J}_{{\phi }_{2}}^{1}\left[g\left({x}^{\ast }\right)-\eta {T}_{2}\left({x}^{\ast },{y}^{\ast }\right)\right],\hfill \end{array}$
(1.6)

where ${J}_{{\phi }_{i}}^{1}={\left(I+\partial {\phi }_{i}\right)}^{-1}$, $i=1,2$.

Proof Suppose that $\left({x}^{\ast },{y}^{\ast }\right)\in K×K$ is a solution of the following system of generalized nonlinear mixed variational inequalities:

$\left\{\begin{array}{c}〈\rho {T}_{1}\left({y}^{\ast },{x}^{\ast }\right)+g\left({x}^{\ast }\right)-g\left({y}^{\ast }\right),g\left(x\right)-g\left({x}^{\ast }\right)〉+{\rho }^{\prime }{\phi }_{1}\left(g\left(x\right)\right)-{\rho }^{\prime }{\phi }_{1}\left(g\left({x}^{\ast }\right)\right)\hfill \\ \phantom{\rule{1em}{0ex}}\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }g\left(x\right)\in K,\hfill \\ 〈\eta {T}_{2}\left({x}^{\ast },{y}^{\ast }\right)+g\left({y}^{\ast }\right)-g\left({x}^{\ast }\right),g\left(x\right)-g\left({x}^{\ast }\right)〉+{\eta }^{\prime }{\phi }_{2}\left(g\left(x\right)\right)-{\eta }^{\prime }{\phi }_{2}\left(g\left({y}^{\ast }\right)\right)\hfill \\ \phantom{\rule{1em}{0ex}}\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }g\left(x\right)\in K,\hfill \end{array}$
(1.7)

where $g:K\to K$ is a mapping and $\rho >0$, ${\rho }^{\prime }>0$, $\eta >0$, ${\eta }^{\prime }>0$. Using Lemma 1.1, we can easily show that problem (1.7) is equivalent to

$\left\{\begin{array}{c}g\left({x}^{\ast }\right)={J}_{{\phi }_{1}}^{{\rho }^{\prime }}\left[g\left({y}^{\ast }\right)-\rho {T}_{1}\left({y}^{\ast },{x}^{\ast }\right)\right],\hfill \\ g\left({y}^{\ast }\right)={J}_{{\phi }_{2}}^{{\eta }^{\prime }}\left[g\left({x}^{\ast }\right)-\eta {T}_{2}\left({x}^{\ast },{y}^{\ast }\right)\right],\hfill \end{array}$
(1.8)

where ${J}_{{\phi }_{1}}^{{\rho }^{\prime }}={\left(I+{\rho }^{\prime }\partial {\phi }_{1}\right)}^{-1}$, ${J}_{{\phi }_{2}}^{{\eta }^{\prime }}={\left(I+{\eta }^{\prime }\partial {\phi }_{2}\right)}^{-1}$. Let ${\rho }^{\prime }={\eta }^{\prime }=1$. Then problem (1.7) reduces to problem (1.1) and ${J}_{{\phi }_{i}}^{1}={\left(I+\partial {\phi }_{i}\right)}^{-1}$, $i=1,2$. This completes the proof. □

Remark 1.2 If ${T}_{1}={T}_{2}=T$ and $g=I$, where I is the identity operator, then Lemma 1.2 reduces to Lemma 1.2 in .

Definition 1.3 A mapping $T:K×K\to \mathcal{H}$ is said to be

1. (1)

relaxed g-$\left(\gamma ,r\right)$-cocoercive if there exist constants $\gamma >0$ and $r>0$ such that for all $x,y\in K$,

$〈T\left(x,u\right)-T\left(y,v\right),g\left(x\right)-g\left(y\right)〉\ge \left(-\gamma \right){\parallel T\left(x,u\right)-T\left(y,v\right)\parallel }^{2}+r{\parallel g\left(x\right)-g\left(y\right)\parallel }^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }u,v\in K;$
2. (2)

g-μ-Lipschitz continuous in the first variable if there exists a constant $\mu >0$ such that for all $x,y\in K$,

$\parallel T\left(x,u\right)-T\left(y,v\right)\parallel \le \mu \parallel g\left(x\right)-g\left(y\right)\parallel ,\phantom{\rule{1em}{0ex}}\mathrm{\forall }u,v\in K.$

Remark 1.3 If T is a univariate mapping and $g=I$, where I is the identity operator, then Definition 1.3 reduces to the standard definitions of relaxed $\left(\gamma ,r\right)$-cocoercivity and Lipschitz continuity, respectively.

Definition 1.4 A mapping $g:K\to \mathcal{H}$ is said to be α-expansive if there exists a constant $\alpha >0$ such that for all $x,y\in K$,

$\parallel g\left(x\right)-g\left(y\right)\parallel \ge \alpha \parallel x-y\parallel .$

Lemma 1.3 

Suppose that $\left\{{\delta }_{n}\right\}$ is a nonnegative sequence satisfying the following inequality:

${\delta }_{n+1}\le \left(1-{\lambda }_{n}\right){\delta }_{n}+{\sigma }_{n},\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge {n}_{0},$

where ${n}_{0}$ is a nonnegative integer, ${\lambda }_{n}\in \left[0,1\right]$ with ${\sum }_{n=0}^{\mathrm{\infty }}{\lambda }_{n}=\mathrm{\infty }$, and ${\sigma }_{n}=o\left({\lambda }_{n}\right)$. Then ${lim}_{n\to \mathrm{\infty }}{\delta }_{n}=0$.
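Lemma 1.3 can be illustrated numerically. The sketch below (a hypothetical instance, not part of the original analysis) takes ${\lambda }_{n}=1/\sqrt{n+1}$, whose series diverges, and ${\sigma }_{n}=1/\left(n+1\right)=o\left({\lambda }_{n}\right)$, and iterates the recursion with equality, which dominates any sequence satisfying the inequality.

```python
import math

# Illustration of Lemma 1.3: delta_{n+1} = (1 - lambda_n) * delta_n + sigma_n
# with lambda_n = 1/sqrt(n+1) (divergent sum) and sigma_n = 1/(n+1) = o(lambda_n).
delta = 10.0  # arbitrary starting value
for n in range(100_000):
    lam = 1.0 / math.sqrt(n + 1)
    sigma = 1.0 / (n + 1)
    delta = (1 - lam) * delta + sigma

# After many steps delta has been driven close to 0 (roughly sigma_n / lambda_n).
assert 0 <= delta < 0.05
```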

## 2 Algorithms

In this section, we suggest a parallel algorithm associated with the resolvent operator for solving the SNMVI (1.1). Our results extend and improve the corresponding results in [2, 3, 7, 11, 12]. In fact, using Lemma 1.2, we suggest the following iterative method for solving problem (1.1).

Algorithm 2.1 For arbitrarily chosen initial points ${x}_{0},{y}_{0}\in K$ (and $g\left({x}_{0}\right),g\left({y}_{0}\right)\in K$), compute the sequences $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ such that

$\left\{\begin{array}{c}g\left({x}_{n+1}\right)=\left(1-{\alpha }_{n}\right)g\left({x}_{n}\right)+{\alpha }_{n}{J}_{{\phi }_{1}}^{1}\left[g\left({y}_{n}\right)-\rho {T}_{1}\left({y}_{n},{x}_{n}\right)\right],\hfill \\ g\left({y}_{n+1}\right)=\left(1-{\beta }_{n}\right)g\left({y}_{n}\right)+{\beta }_{n}{J}_{{\phi }_{2}}^{1}\left[g\left({x}_{n}\right)-\eta {T}_{2}\left({x}_{n},{y}_{n}\right)\right],\hfill \end{array}$
(2.1)

where ${J}_{{\phi }_{i}}^{1}={\left(I+\partial {\phi }_{i}\right)}^{-1}$, $i=1,2$, is the resolvent operator, $\rho ,\eta >0$, ${\alpha }_{n}\in \left[0,1\right]$ and ${\beta }_{n}\in \left[0,1\right]$ for all $n\ge 0$.

As reported in , one of the attractive features of Algorithm 2.1 is that it is suitable for implementation on a two-processor computer. In other words, ${x}_{n+1}$ and ${y}_{n+1}$ can be computed in parallel, so Algorithm 2.1 is a so-called parallel resolvent method. We refer the interested reader to the papers  and the references therein for more examples and ideas of parallel iterative methods.

If ${\phi }_{1}={\phi }_{2}={\delta }_{K}$ and $g=I$, where ${\delta }_{K}$ is the indicator function of K, then Algorithm 2.1 reduces to the following algorithm.

Algorithm 2.2 For arbitrarily chosen initial points ${x}_{0},{y}_{0}\in K$, compute the sequences $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ such that

$\left\{\begin{array}{c}{x}_{n+1}=\left(1-{\alpha }_{n}\right){x}_{n}+{\alpha }_{n}{P}_{K}\left[{y}_{n}-\rho {T}_{1}\left({y}_{n},{x}_{n}\right)\right],\hfill \\ {y}_{n+1}=\left(1-{\beta }_{n}\right){y}_{n}+{\beta }_{n}{P}_{K}\left[{x}_{n}-\eta {T}_{2}\left({x}_{n},{y}_{n}\right)\right],\hfill \end{array}$
(2.2)

where $\rho ,\eta >0$, ${\alpha }_{n}\in \left[0,1\right]$ and ${\beta }_{n}\in \left[0,1\right]$ for all $n\ge 0$.
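As a sanity check, here is a minimal one-dimensional instance of Algorithm 2.2 (our own toy example; the set K, the operators ${T}_{1}$, ${T}_{2}$ and the constant c are assumptions, not from the paper): $K=\left[0,1\right]$, ${T}_{1}\left(y,x\right)=y-c$ and ${T}_{2}\left(x,y\right)=x-c$, for which $\left({x}^{\ast },{y}^{\ast }\right)=\left(c,c\right)$ solves system (1.2).

```python
# Toy instance of the parallel projection scheme (2.2) on K = [0, 1].

def project_K(u, lo=0.0, hi=1.0):
    """Projection P_K onto the interval [lo, hi]."""
    return max(lo, min(hi, u))

def parallel_projection(c=0.3, rho=0.5, eta=0.5, alpha=0.5, beta=0.5, iters=300):
    x, y = 1.0, 0.0  # arbitrary initial points in K
    for _ in range(iters):
        # Both updates use only (x_n, y_n), so they can run on two processors.
        x_new = (1 - alpha) * x + alpha * project_K(y - rho * (y - c))
        y_new = (1 - beta) * y + beta * project_K(x - eta * (x - c))
        x, y = x_new, y_new
    return x, y

x_star, y_star = parallel_projection()
assert abs(x_star - 0.3) < 1e-6 and abs(y_star - 0.3) < 1e-6
```

With $\gamma =0$, $r=\mu =1$ and $\rho =\eta =0.5$, the contraction constants of Section 3 equal 0.5, so the constant choices ${\alpha }_{n}={\beta }_{n}=0.5$ satisfy condition (i) of Theorem 3.1.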

If ${T}_{1}={T}_{2}=T$ and $g=I$, then Algorithm 2.1 reduces to the following algorithm.

Algorithm 2.3 For arbitrarily chosen initial points ${x}_{0},{y}_{0}\in K$, compute the sequences $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ such that

$\left\{\begin{array}{c}{x}_{n+1}=\left(1-{\alpha }_{n}\right){x}_{n}+{\alpha }_{n}{J}_{{\phi }_{1}}^{1}\left[{y}_{n}-\rho T\left({y}_{n},{x}_{n}\right)\right],\hfill \\ {y}_{n+1}=\left(1-{\beta }_{n}\right){y}_{n}+{\beta }_{n}{J}_{{\phi }_{2}}^{1}\left[{x}_{n}-\eta T\left({x}_{n},{y}_{n}\right)\right],\hfill \end{array}$
(2.3)

where ${J}_{{\phi }_{i}}^{1}={\left(I+\partial {\phi }_{i}\right)}^{-1}$, $i=1,2$, is the resolvent operator, $\rho ,\eta >0$, ${\alpha }_{n}\in \left[0,1\right]$ and ${\beta }_{n}\in \left[0,1\right]$ for all $n\ge 0$.

If ${\phi }_{1}={\phi }_{2}={\delta }_{K}$ and ${T}_{1},{T}_{2}:K\to \mathcal{H}$ are univariate mappings, then Algorithm 2.1 reduces to the following algorithm.

Algorithm 2.4 For arbitrarily chosen initial points ${x}_{0},{y}_{0}\in K$ (and $g\left({x}_{0}\right),g\left({y}_{0}\right)\in K$), compute the sequences $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ such that

$\left\{\begin{array}{c}g\left({x}_{n+1}\right)=\left(1-{\alpha }_{n}\right)g\left({x}_{n}\right)+{\alpha }_{n}{P}_{K}\left[g\left({y}_{n}\right)-\rho {T}_{1}\left({y}_{n}\right)\right],\hfill \\ g\left({y}_{n+1}\right)=\left(1-{\beta }_{n}\right)g\left({y}_{n}\right)+{\beta }_{n}{P}_{K}\left[g\left({x}_{n}\right)-\eta {T}_{2}\left({x}_{n}\right)\right],\hfill \end{array}$
(2.4)

where $\rho ,\eta >0$, ${\alpha }_{n}\in \left[0,1\right]$ and ${\beta }_{n}\in \left[0,1\right]$ for all $n\ge 0$.

If ${T}_{1}={T}_{2}=T$, $g=I$ and ${\phi }_{1}={\phi }_{2}={\delta }_{K}$, where T is a univariate mapping defined by $T:K\to \mathcal{H}$, then Algorithm 2.1 reduces to the following algorithm.

Algorithm 2.5 For arbitrarily chosen initial points ${x}_{0},{y}_{0}\in K$, compute the sequences $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ such that

$\left\{\begin{array}{c}{x}_{n+1}=\left(1-{\alpha }_{n}\right){x}_{n}+{\alpha }_{n}{P}_{K}\left[{y}_{n}-\rho T\left({y}_{n}\right)\right],\hfill \\ {y}_{n+1}=\left(1-{\beta }_{n}\right){y}_{n}+{\beta }_{n}{P}_{K}\left[{x}_{n}-\eta T\left({x}_{n}\right)\right],\hfill \end{array}$
(2.5)

where $\rho ,\eta >0$, ${\alpha }_{n}\in \left[0,1\right]$ and ${\beta }_{n}\in \left[0,1\right]$ for all $n\ge 0$.

## 3 Main results

In this section, based on Algorithm 2.1, we present the approximate solvability of problem (1.1) for mappings that are relaxed g-$\left(\gamma ,r\right)$-cocoercive and g-μ-Lipschitz continuous in the first variable in the setting of real Hilbert spaces.

Theorem 3.1 Let $\mathcal{H}$ be a real Hilbert space. Let K be a nonempty closed convex subset of $\mathcal{H}$, and let ${T}_{i}:K×K\to \mathcal{H}$ be relaxed g-$\left({\gamma }_{i},{r}_{i}\right)$-cocoercive and g-${\mu }_{i}$-Lipschitz continuous in the first variable for $i=1,2$. Let $g:K\to K$ be an α-expansive mapping. Suppose that $\left({x}^{\ast },{y}^{\ast }\right)\in K×K$ is the unique solution to problem (1.1) and $\left\{{x}_{n}\right\}$, $\left\{{y}_{n}\right\}$ are generated by Algorithm 2.1. If $\left\{{\alpha }_{n}\right\}$ and $\left\{{\beta }_{n}\right\}$ are two sequences in $\left[0,1\right]$ satisfying the following conditions:

1. (i)

${\alpha }_{n}-{\theta }_{2}{\beta }_{n}\ge 0$ and ${\beta }_{n}-{\theta }_{1}{\alpha }_{n}\ge 0$ such that ${\sum }_{n=0}^{\mathrm{\infty }}\left({\alpha }_{n}-{\theta }_{2}{\beta }_{n}\right)=\mathrm{\infty }$ and ${\sum }_{n=0}^{\mathrm{\infty }}\left({\beta }_{n}-{\theta }_{1}{\alpha }_{n}\right)=\mathrm{\infty }$,

2. (ii)

${\theta }_{1}=\sqrt{1+2\rho {\gamma }_{1}{{\mu }_{1}}^{2}-2\rho {r}_{1}+{\rho }^{2}{{\mu }_{1}}^{2}}$ such that $0<{\theta }_{1}<1$,

3. (iii)

${\theta }_{2}=\sqrt{1+2\eta {\gamma }_{2}{{\mu }_{2}}^{2}-2\eta {r}_{2}+{\eta }^{2}{{\mu }_{2}}^{2}}$ such that $0<{\theta }_{2}<1$,

then the sequences $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ converge to ${x}^{\ast }$ and ${y}^{\ast }$, respectively.

Proof Since $\left({x}^{\ast },{y}^{\ast }\right)\in K×K$ is the unique solution to problem (1.1), from Lemma 1.2 it follows that

$\left\{\begin{array}{c}g\left({x}^{\ast }\right)={J}_{{\phi }_{1}}^{1}\left[g\left({y}^{\ast }\right)-\rho {T}_{1}\left({y}^{\ast },{x}^{\ast }\right)\right],\hfill \\ g\left({y}^{\ast }\right)={J}_{{\phi }_{2}}^{1}\left[g\left({x}^{\ast }\right)-\eta {T}_{2}\left({x}^{\ast },{y}^{\ast }\right)\right].\hfill \end{array}$
(3.1)

We first evaluate $\parallel g\left({x}_{n+1}\right)-g\left({x}^{\ast }\right)\parallel$ for all $n\ge 0$. From (2.1) and the nonexpansive property of the resolvent operator, we have

$\begin{array}{r}\parallel g\left({x}_{n+1}\right)-g\left({x}^{\ast }\right)\parallel \\ \phantom{\rule{1em}{0ex}}=\parallel \left(1-{\alpha }_{n}\right)g\left({x}_{n}\right)+{\alpha }_{n}{J}_{{\phi }_{1}}^{1}\left[g\left({y}_{n}\right)-\rho {T}_{1}\left({y}_{n},{x}_{n}\right)\right]\\ \phantom{\rule{2em}{0ex}}-\left(1-{\alpha }_{n}\right)g\left({x}^{\ast }\right)-{\alpha }_{n}{J}_{{\phi }_{1}}^{1}\left[g\left({y}^{\ast }\right)-\rho {T}_{1}\left({y}^{\ast },{x}^{\ast }\right)\right]\parallel \\ \phantom{\rule{1em}{0ex}}\le \left(1-{\alpha }_{n}\right)\parallel g\left({x}_{n}\right)-g\left({x}^{\ast }\right)\parallel \\ \phantom{\rule{2em}{0ex}}+{\alpha }_{n}\parallel {J}_{{\phi }_{1}}^{1}\left[g\left({y}_{n}\right)-\rho {T}_{1}\left({y}_{n},{x}_{n}\right)\right]-{J}_{{\phi }_{1}}^{1}\left[g\left({y}^{\ast }\right)-\rho {T}_{1}\left({y}^{\ast },{x}^{\ast }\right)\right]\parallel \\ \phantom{\rule{1em}{0ex}}\le \left(1-{\alpha }_{n}\right)\parallel g\left({x}_{n}\right)-g\left({x}^{\ast }\right)\parallel +{\alpha }_{n}\parallel g\left({y}_{n}\right)-g\left({y}^{\ast }\right)-\rho \left({T}_{1}\left({y}_{n},{x}_{n}\right)-{T}_{1}\left({y}^{\ast },{x}^{\ast }\right)\right)\parallel .\end{array}$
(3.2)

Notice that ${T}_{1}$ is relaxed g-$\left({\gamma }_{1},{r}_{1}\right)$-cocoercive and g-${\mu }_{1}$-Lipschitz continuous in the first variable. Then we have

$\begin{array}{r}{\parallel g\left({y}_{n}\right)-g\left({y}^{\ast }\right)-\rho \left({T}_{1}\left({y}_{n},{x}_{n}\right)-{T}_{1}\left({y}^{\ast },{x}^{\ast }\right)\right)\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}={\parallel g\left({y}_{n}\right)-g\left({y}^{\ast }\right)\parallel }^{2}-2\rho 〈g\left({y}_{n}\right)-g\left({y}^{\ast }\right),{T}_{1}\left({y}_{n},{x}_{n}\right)-{T}_{1}\left({y}^{\ast },{x}^{\ast }\right)〉\\ \phantom{\rule{2em}{0ex}}+{\rho }^{2}{\parallel {T}_{1}\left({y}_{n},{x}_{n}\right)-{T}_{1}\left({y}^{\ast },{x}^{\ast }\right)\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}\le {\parallel g\left({y}_{n}\right)-g\left({y}^{\ast }\right)\parallel }^{2}+2\rho {\gamma }_{1}{\parallel {T}_{1}\left({y}_{n},{x}_{n}\right)-{T}_{1}\left({y}^{\ast },{x}^{\ast }\right)\parallel }^{2}\\ \phantom{\rule{2em}{0ex}}-2\rho {r}_{1}{\parallel g\left({y}_{n}\right)-g\left({y}^{\ast }\right)\parallel }^{2}+{\rho }^{2}{\parallel {T}_{1}\left({y}_{n},{x}_{n}\right)-{T}_{1}\left({y}^{\ast },{x}^{\ast }\right)\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}\le {\theta }_{1}^{2}{\parallel g\left({y}_{n}\right)-g\left({y}^{\ast }\right)\parallel }^{2},\end{array}$
(3.3)

where ${\theta }_{1}=\sqrt{1+2\rho {\gamma }_{1}{{\mu }_{1}}^{2}-2\rho {r}_{1}+{\rho }^{2}{{\mu }_{1}}^{2}}<1$ in view of assumption (ii). Substituting (3.3) into (3.2), we have

$\parallel g\left({x}_{n+1}\right)-g\left({x}^{\ast }\right)\parallel \le \left(1-{\alpha }_{n}\right)\parallel g\left({x}_{n}\right)-g\left({x}^{\ast }\right)\parallel +{\alpha }_{n}{\theta }_{1}\parallel g\left({y}_{n}\right)-g\left({y}^{\ast }\right)\parallel .$
(3.4)

Similarly, since ${T}_{2}$ is relaxed g-$\left({\gamma }_{2},{r}_{2}\right)$-cocoercive and g-${\mu }_{2}$-Lipschitz continuous in the first variable, we have

$\parallel g\left({y}_{n+1}\right)-g\left({y}^{\ast }\right)\parallel \le \left(1-{\beta }_{n}\right)\parallel g\left({y}_{n}\right)-g\left({y}^{\ast }\right)\parallel +{\beta }_{n}{\theta }_{2}\parallel g\left({x}_{n}\right)-g\left({x}^{\ast }\right)\parallel ,$
(3.5)

where ${\theta }_{2}=\sqrt{1+2\eta {\gamma }_{2}{{\mu }_{2}}^{2}-2\eta {r}_{2}+{\eta }^{2}{{\mu }_{2}}^{2}}<1$ in view of assumption (iii). It follows from (3.4) and (3.5) that

$\begin{array}{r}\parallel \left(g\left({x}_{n+1}\right),g\left({y}_{n+1}\right)\right)-\left(g\left({x}^{\ast }\right),g\left({y}^{\ast }\right)\right)\parallel \\ \phantom{\rule{1em}{0ex}}\le \left[1-\left({\alpha }_{n}-{\theta }_{2}{\beta }_{n}\right)\right]\parallel g\left({x}_{n}\right)-g\left({x}^{\ast }\right)\parallel +\left[1-\left({\beta }_{n}-{\theta }_{1}{\alpha }_{n}\right)\right]\parallel g\left({y}_{n}\right)-g\left({y}^{\ast }\right)\parallel \\ \phantom{\rule{1em}{0ex}}\le max\left\{{w}_{1n},{w}_{2n}\right\}\left(\parallel \left(g\left({x}_{n}\right),g\left({y}_{n}\right)\right)-\left(g\left({x}^{\ast }\right),g\left({y}^{\ast }\right)\right)\parallel \right),\end{array}$
(3.6)

where ${w}_{1n}=1-\left({\alpha }_{n}-{\theta }_{2}{\beta }_{n}\right)$ and ${w}_{2n}=1-\left({\beta }_{n}-{\theta }_{1}{\alpha }_{n}\right)$.

From assumption (i) and Lemma 1.3, we can obtain

$\underset{n\to \mathrm{\infty }}{lim}\parallel \left(g\left({x}_{n+1}\right),g\left({y}_{n+1}\right)\right)-\left(g\left({x}^{\ast }\right),g\left({y}^{\ast }\right)\right)\parallel =0,$

and so

$\underset{n\to \mathrm{\infty }}{lim}\parallel g\left({x}_{n+1}\right)-g\left({x}^{\ast }\right)\parallel =0\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\underset{n\to \mathrm{\infty }}{lim}\parallel g\left({y}_{n+1}\right)-g\left({y}^{\ast }\right)\parallel =0,$

which implies that sequences $\left\{g\left({x}_{n}\right)\right\}$ and $\left\{g\left({y}_{n}\right)\right\}$ converge to $g\left({x}^{\ast }\right)$ and $g\left({y}^{\ast }\right)$, respectively. Since g is α-expansive, it follows that $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ converge to ${x}^{\ast }$ and ${y}^{\ast }$, respectively. This completes the proof. □
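Conditions (i)-(iii) of Theorem 3.1 are easy to verify numerically for given data. The sketch below (hypothetical parameter values, chosen only for illustration) evaluates ${\theta }_{1}$ and ${\theta }_{2}$ as in estimate (3.3) and checks that constant sequences ${\alpha }_{n}$, ${\beta }_{n}$ satisfy condition (i); since the gaps ${\alpha }_{n}-{\theta }_{2}{\beta }_{n}$ and ${\beta }_{n}-{\theta }_{1}{\alpha }_{n}$ are constant and positive, both series in (i) diverge.

```python
import math

# Hypothetical parameters illustrating conditions (i)-(iii) of Theorem 3.1.
def theta(step, gamma, r, mu):
    """theta = sqrt(1 + 2*step*gamma*mu^2 - 2*step*r + step^2 * mu^2)."""
    return math.sqrt(1 + 2 * step * gamma * mu**2 - 2 * step * r + step**2 * mu**2)

rho, gamma1, r1, mu1 = 0.5, 0.1, 0.6, 1.0
eta, gamma2, r2, mu2 = 0.5, 0.1, 0.6, 1.0
theta1 = theta(rho, gamma1, r1, mu1)
theta2 = theta(eta, gamma2, r2, mu2)
assert 0 < theta1 < 1 and 0 < theta2 < 1   # conditions (ii) and (iii)

alpha, beta = 0.95, 0.95                   # constant sequences in [0, 1]
assert alpha - theta2 * beta >= 0          # condition (i): constant positive
assert beta - theta1 * alpha >= 0          # gaps, so both series diverge
```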

The following theorems can be obtained from Theorem 3.1 immediately.

Theorem 3.2 

Let $\mathcal{H}$ be a real Hilbert space. Let K be a nonempty closed convex subset of $\mathcal{H}$, and let ${T}_{i}:K×K\to \mathcal{H}$ be relaxed $\left({\gamma }_{i},{r}_{i}\right)$-cocoercive and ${\mu }_{i}$-Lipschitz continuous in the first variable for $i=1,2$. Suppose that $\left({x}^{\ast },{y}^{\ast }\right)\in K×K$ is the unique solution to problem (1.2) and $\left\{{x}_{n}\right\}$, $\left\{{y}_{n}\right\}$ are generated by Algorithm 2.2. If $\left\{{\alpha }_{n}\right\}$ and $\left\{{\beta }_{n}\right\}$ are two sequences in $\left[0,1\right]$ satisfying the following conditions:

1. (1)

${\alpha }_{n}-{\theta }_{2}{\beta }_{n}\ge 0$ and ${\beta }_{n}-{\theta }_{1}{\alpha }_{n}\ge 0$ such that ${\sum }_{n=0}^{\mathrm{\infty }}\left({\alpha }_{n}-{\theta }_{2}{\beta }_{n}\right)=\mathrm{\infty }$ and ${\sum }_{n=0}^{\mathrm{\infty }}\left({\beta }_{n}-{\theta }_{1}{\alpha }_{n}\right)=\mathrm{\infty }$,

2. (2)

${\theta }_{1}=\sqrt{1+2\rho {\gamma }_{1}{{\mu }_{1}}^{2}-2\rho {r}_{1}+{\rho }^{2}{{\mu }_{1}}^{2}}$ such that $0<{\theta }_{1}<1$,

3. (3)

${\theta }_{2}=\sqrt{1+2\eta {\gamma }_{2}{{\mu }_{2}}^{2}-2\eta {r}_{2}+{\eta }^{2}{{\mu }_{2}}^{2}}$ such that $0<{\theta }_{2}<1$,

then the sequences $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ converge to ${x}^{\ast }$ and ${y}^{\ast }$, respectively.

Theorem 3.3 

Let $\mathcal{H}$ be a real Hilbert space. Let K be a nonempty closed convex subset of $\mathcal{H}$, and let ${T}_{i}:K\to \mathcal{H}$ be relaxed g-$\left({\gamma }_{i},{r}_{i}\right)$-cocoercive and g-${\mu }_{i}$-Lipschitz continuous for $i=1,2$. Let $g:K\to K$ be an α-expansive mapping. Suppose that $\left({x}^{\ast },{y}^{\ast }\right)\in K×K$ is the unique solution to problem (1.4) and $\left\{{x}_{n}\right\}$, $\left\{{y}_{n}\right\}$ are generated by Algorithm 2.4. If $\left\{{\alpha }_{n}\right\}$ and $\left\{{\beta }_{n}\right\}$ are two sequences in $\left[0,1\right]$ satisfying the following conditions:

1. (1)

$0\le {\alpha }_{n},{\beta }_{n}\le 1$, ${\alpha }_{n}-{\theta }_{2}{\beta }_{n}\ge 0$ and ${\beta }_{n}-{\theta }_{1}{\alpha }_{n}\ge 0$ such that ${\sum }_{n=0}^{\mathrm{\infty }}\left({\alpha }_{n}-{\theta }_{2}{\beta }_{n}\right)=\mathrm{\infty }$ and ${\sum }_{n=0}^{\mathrm{\infty }}\left({\beta }_{n}-{\theta }_{1}{\alpha }_{n}\right)=\mathrm{\infty }$,

2. (2)

${\theta }_{1}=\sqrt{1+2\rho {\gamma }_{1}{{\mu }_{1}}^{2}-2\rho {r}_{1}+{\rho }^{2}{{\mu }_{1}}^{2}}$ such that $0<{\theta }_{1}<1$,

3. (3)

${\theta }_{2}=\sqrt{1+2\eta {\gamma }_{2}{{\mu }_{2}}^{2}-2\eta {r}_{2}+{\eta }^{2}{{\mu }_{2}}^{2}}$ such that $0<{\theta }_{2}<1$,

then the sequences $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ converge to ${x}^{\ast }$ and ${y}^{\ast }$, respectively.

## References

1. Stampacchia G: Formes bilineaires coercitives sur les ensembles convexes. C. R. Math. Acad. Sci. Paris 1964, 258: 4413–4416.

2. Verma RU: Projection methods, algorithms and a new system of nonlinear variational inequalities. Comput. Math. Appl. 2001, 41: 1025–1031. 10.1016/S0898-1221(00)00336-9

3. Chang SS, Joseph Lee HW, Chan CK: Generalized system for relaxed cocoercive variational inequalities in Hilbert spaces. Appl. Math. Lett. 2007, 20: 329–334. 10.1016/j.aml.2006.04.017

4. Fang YP, Huang NJ, Cao YJ, Kang SM: Stable iterative algorithms for a class of general nonlinear variational inequalities. Adv. Nonlinear Var. Inequal. 2002, 5(2):1–9.

5. Fang YP, Huang NJ: H -Monotone operator and resolvent operator technique for variational inclusions. Appl. Math. Comput. 2003, 145(2–3):795–803. 10.1016/S0096-3003(03)00275-3

6. Fang YP, Huang NJ: H -Accretive operators and resolvent operator technique for solving variational inclusions in Banach spaces. Appl. Math. Lett. 2004, 17(6):647–653. 10.1016/S0893-9659(04)90099-7

7. He Z, Gu F: Generalized system for relaxed cocoercive mixed variational inequalities in Hilbert spaces. Appl. Math. Comput. 2009, 214: 26–30. 10.1016/j.amc.2009.03.056

8. Narin P: A resolvent operator technique for approximate solving of generalized system mixed variational inequality and fixed point problems. Appl. Math. Lett. 2010, 23: 440–445. 10.1016/j.aml.2009.12.001

9. Nie NH, Liu Z, Kim KH, Kang SM: A system of nonlinear variational inequalities involving strong monotone and pseudocontractive mappings. Adv. Nonlinear Var. Inequal. 2003, 6: 91–99.

10. Verma RU: Generalized system for relaxed cocoercive variational inequalities and its projection methods. J. Optim. Theory Appl. 2004, 121: 203–210.

11. Verma RU: General convergence analysis for two-step projection methods and applications to variational problems. Appl. Math. Lett. 2005, 18(11):1286–1292. 10.1016/j.aml.2005.02.026

12. Yang HJ, Zhou LJ, Li QG: A parallel projection method for a system of nonlinear variational inequalities. Appl. Math. Comput. 2010, 217: 1971–1975. 10.1016/j.amc.2010.06.053

13. Brezis H: Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. North-Holland Mathematics Studies, vol. 5; Notas de Matemática, vol. 50. North-Holland, Amsterdam; 1973.

14. Weng XL: Fixed point iteration for local strictly pseudocontractive mappings. Proc. Am. Math. Soc. 1991, 113: 727–731. 10.1090/S0002-9939-1991-1086345-8

15. Bertsekas D, Tsitsiklis J: Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, Englewood Cliffs; 1989.

16. Hoffmann KH, Zou J: Parallel algorithms of Schwarz variant for variational inequalities. Numer. Funct. Anal. Optim. 1992, 13: 449–462. 10.1080/01630569208816491

17. Hoffmann KH, Zou J: Parallel solution of variational inequality problems with nonlinear source terms. IMA J. Numer. Anal. 1996, 16: 31–45. 10.1093/imanum/16.1.31

## Acknowledgements

This work was supported by the Natural Science Foundation of China (60804065, 11371015), the Key Project of Chinese Ministry of Education (211163), Sichuan Youth Science and Technology Foundation (2012JQ0032).

## Author information


### Corresponding author

Correspondence to Ke Guo.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors contributed significantly in writing the paper. All authors read and approved the final manuscript.


Guo, K., Jiang, Y. & Feng, SQ. A parallel resolvent method for solving a system of nonlinear mixed variational inequalities. J Inequal Appl 2013, 509 (2013). https://doi.org/10.1186/1029-242X-2013-509


### Keywords

• resolvent operator
• parallel projection
• relaxed cocoercive
• generalized nonlinear mixed variational inequalities 