Modified subgradient extragradient method for system of variational inclusion problem and finite family of variational inequalities problem in real Hilbert space

Abstract

In this article, we introduce a modified form of the generalized system of variational inclusions, called the generalized system of modified variational inclusion problems (GSMVIP), which reduces to the classical variational inclusion and variational inequality problems. Motivated by several recent results related to the subgradient extragradient method, we propose a new subgradient extragradient method for finding a common element of the set of solutions of the GSMVIP and the set of solutions of a finite family of variational inequality problems. Under suitable assumptions, strong convergence theorems are proved in the framework of a real Hilbert space. In addition, numerical results indicate that the proposed method is effective.

1 Introduction

Throughout this paper, let H be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and norm \(\Vert \cdot \Vert \), and let C be a nonempty closed convex subset of H. A mapping \(T:C \rightarrow C\) is called nonexpansive if \(\Vert Tx-Ty \Vert \leq \Vert x-y \Vert \) for all \(x,y \in C\). We denote by \(F(T)\) the set of fixed points of T, that is, \(F(T) = \{ x \in C : Tx = x \} \). It is well known that \(F(T)\) is closed and convex.

Let \(B: H \rightarrow H\) be a mapping and \(M: H \rightarrow 2^{H}\) be a multi-valued mapping. The variational inclusion problem is to find \(x \in H\) such that

$$ \theta \in Bx + Mx , $$
(1)

where θ is the zero vector in H. The set of solutions of (1) is denoted by \(VI(H,B,M)\). This problem has received much attention due to its applications in a large variety of problems arising in convex programming, variational inequalities, split feasibility problems, and minimization problems. To be more precise, some concrete problems in machine learning, image processing, and linear inverse problems can be modeled mathematically by this formulation.

The variational inequality problem (VIP) for a mapping \(A : C \rightarrow H\) is to find a point \(u \in C\) such that

$$\begin{aligned} \langle Au,v-u \rangle \geq 0,\quad \forall v \in C. \end{aligned}$$
(2)

The set of solutions of the variational inequality problem is denoted by \(VI(C,A)\). This problem is an important tool in economics, engineering and mathematics. It includes, as special cases, many problems of nonlinear analysis such as optimization, optimal control problems, saddle point problems and mathematical programming; see, for example, [14].

It is well known that one of the most popular methods for solving the VIP is the extragradient method proposed by Korpelevich [5]. The extragradient method requires the calculation of two projections onto the feasible set C in each iteration. As analyzed in some remarks of the authors in [6], when C has a closed-form description, as in the case of a ball or a half-space, the projection onto C can be computed easily; otherwise, computing the projection can be expensive, which affects the efficiency of the method. In recent years, the extragradient method has received great attention from many authors, who improved it in various ways; see, e.g., [7–13] and the references therein.

In 2011, Censor et al. [12] proposed the subgradient extragradient method for solving variational inequality problems as follows:

$$\begin{aligned} \textstyle\begin{cases} y_{n} = P_{C}(x_{n} - \lambda Ax_{n}), \\ T_{n} = \{x \in H : \langle x_{n} - \lambda Ax_{n} - y_{n} , x - y_{n} \rangle \leq 0\}, \\ x_{n+1} = P_{T_{n}}(x_{n} - \lambda Ay_{n}), \end{cases}\displaystyle \end{aligned}$$
(3)

for each \(n \geq 1\), where \(\lambda \in (0,1/L)\) and L is the Lipschitz constant of A. In this method, they replaced the second projection in Korpelevich’s extragradient method by a projection onto a half-space, which can be computed explicitly.
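To make the structure of (3) concrete, the following is a minimal sketch in Python. It assumes an affine monotone operator \(A(x) = Mx - q\) and a Euclidean ball as the feasible set C; the operator, the radius, and the starting point are illustrative choices, not taken from [12].

```python
import numpy as np

# Minimal sketch of the subgradient extragradient method (3).
M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite => A is monotone
q = np.array([2.0, 1.0])
A = lambda x: M @ x - q

L = np.linalg.norm(M, 2)                 # Lipschitz constant of A
lam = 0.9 / L                            # step size lambda in (0, 1/L)

def proj_ball(x, r=1.0):
    """Projection onto C = {x : ||x|| <= r}."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def proj_halfspace(w, d, y):
    """Projection onto T = {x : <d, x - y> <= 0}, explicit for a half-space."""
    viol = d @ (w - y)
    return w if viol <= 0.0 else w - (viol / (d @ d)) * d

x = np.array([5.0, -3.0])
for n in range(200):
    y = proj_ball(x - lam * A(x))
    d = x - lam * A(x) - y               # normal vector of the half-space T_n
    if np.allclose(d, 0.0):              # T_n degenerates to H; y already solves the VIP
        x = y
        break
    x = proj_halfspace(x - lam * A(y), d, y)

print("approximate solution of VI(C, A):", x)
```

Only the first projection (onto the ball) depends on the geometry of C; the second projection, onto the half-space \(T_{n}\), always has the closed form used in proj_halfspace.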

Motivated by the problem (1), in this paper we introduce a new system of variational inclusions in a real Hilbert space as follows:

Let H be a real Hilbert space, let \(A : H \rightarrow H\) be a mapping, and let \(M_{A}, M_{B} : H \rightarrow 2^{H}\) be set-valued mappings. We consider the problem of finding \(x^{*} \in H\) such that

$$ \theta \in Ax^{*} + M_{A} x^{*} \quad \text{and}\quad \theta \in Ax^{*} + M_{B} x^{*}, $$
(4)

where θ is the zero vector in H; we call this problem a generalized system of modified variational inclusion problems (in short, GSMVIP). The set of solutions of (4) is denoted by Ω, i.e., \(\Omega = \{ x^{*}\in H:\theta \in Ax^{*}+M_{A}x^{*}\text{ and }\theta \in Ax^{*} + M_{B} x^{*} \} \). In particular, if \(M_{A} = M_{B}\), then the problem (4) reduces to the problem (1), and if \(J_{M_{A},\lambda _{A}} = J_{M_{B},\lambda _{B}} = P_{C}\), then the problem (4) reduces to the VIP.

In 2012, Kangtunyakarn [14] modified the set of variational inequality problems as follows:

$$\begin{aligned} VI &\bigl(C,aA + (1-a)B \bigr) \\ &\quad = \bigl\{ x \in C : \bigl\langle y-x , \bigl(aA + (1-a)B \bigr)x \bigr\rangle \geq 0, \forall y \in C, a \in (0,1) \bigr\} , \end{aligned}$$
(5)

where A and B are the mappings of C into H.

In order to develop efficient algorithms for finding a solution of a finite family of variational inequality problems, inspired by problem (5), we define the new half-space \(Q_{n} = \{z \in H : \langle (I-\lambda \sum_{i=1}^{N} a_{i}A_{i})x_{n}-y_{n},y_{n}-z \rangle \geq 0 \} \), which serves as a tool to prove the strong convergence theorem. In particular, if we put \(N=1\), then \(Q_{n}\) reduces to \(T_{n}\) in the subgradient extragradient method (3). Note, however, that the sequence \(\{ x_{n} \} \) generated by (3) converges only weakly to a solution of the variational inequality problem.

In this paper, motivated by the recent research [7, 12] and [14], we introduce the new problem (4) and a new iterative scheme for finding a common element of the set of solutions of a finite family of variational inequality problems and the set of solutions of the proposed problem (4) in a real Hilbert space. Then we establish and prove a strong convergence theorem under some proper conditions. Furthermore, we also give various examples to support our main result.

2 Preliminaries

In this section, we give some useful lemmas that will be needed to prove our main result.

Let C be a nonempty closed convex subset of a real Hilbert space H. We denote strong convergence and weak convergence by the notations → and ⇀, respectively. For every \(x \in H\), there exists a unique nearest point \(P_{C} x \in C\) such that

$$\begin{aligned} \Vert x - P_{C} x \Vert \leq \Vert x - y \Vert ,\quad \forall y \in C. \end{aligned}$$

\(P_{C}\) is called the metric projection of H onto C. It follows that

$$\begin{aligned} \Vert x - y \Vert ^{2} \geq \Vert x - P_{C} x \Vert ^{2} + \Vert y - P_{C} x \Vert ^{2}, \quad \text{for all }x \in H, y \in C. \end{aligned}$$
(6)

Lemma 2.1

([15])

Given \(x \in H\) and \(y \in C\), we have \(y = P_{C} x\) if and only if the following inequality holds:

$$\begin{aligned} \langle x-y,y-z \rangle \geq 0, \quad\forall z \in C. \end{aligned}$$

Definition 2.2

Let \(M : H \rightarrow 2^{H}\) be a multi-valued mapping.

(i) The graph \(G(M)\) of M is defined by

$$\begin{aligned} G(M) := \bigl\{ (x,u) \in H \times H : u \in M(x) \bigr\} , \end{aligned}$$

(ii) the operator M is called a maximal monotone operator if M is monotone, i.e.

$$\begin{aligned} \langle u-v , x - y \rangle \geq 0 \quad\forall u \in M(x), v \in M(y), \end{aligned}$$

and the graph \(G(M)\) of M is not properly contained in the graph of any other monotone operator. It is clear that a monotone mapping M is maximal if and only if for any \((x,u) \in H \times H\), \(\langle u-v,x-y \rangle \geq 0\) for every \((y,v) \in G(M)\) implies that \(u \in M(x)\).

Let \(M: H \rightarrow 2^{H}\) be a multi-valued maximal monotone mapping. Then the single-valued mapping \(J_{M,\lambda }: H \rightarrow H\) defined by

$$\begin{aligned} J_{M,\lambda }(u) = (I + \lambda M)^{-1} (u), \quad\forall u \in H, \end{aligned}$$

is called the resolvent operator associated with M, where λ is a positive number and I is the identity mapping; see [16]. Note that \(J_{M,\lambda }\) is a nonexpansive mapping.
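For the affine operators used in the examples below, the resolvent has a simple closed form. The following sketch (in Python, with illustrative names) computes \(J_{M,\lambda }\) for \(M(x) = cx + d\) acting componentwise, which is single valued and maximal monotone when \(c \geq 0\):

```python
# Resolvent of the affine operator M(x) = c*x + d (componentwise, c >= 0):
# solving x + lam*(c*x + d) = u for x gives J_{M,lam}(u) = (u - lam*d)/(1 + lam*c).
def resolvent_affine(u, c, d, lam):
    return (u - lam * d) / (1.0 + lam * c)

# e.g. M(x) = 2x - 1 with lam = 1/2 gives J(u) = u/2 + 1/4, matching (14) below:
print(resolvent_affine(2.0, c=2.0, d=-1.0, lam=0.5))   # -> 1.25
```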

Definition 2.3

Let \(A: C \rightarrow H\) be a mapping.

(i) A is called μ-Lipschitz continuous if there exists \(\mu \geq 0\) such that

$$\begin{aligned} \Vert Ax - Ay \Vert \leq \mu \Vert x-y \Vert , \quad\forall x,y \in C. \end{aligned}$$

(ii) A is called α-inverse strongly monotone if there exists a positive real number α such that

$$\begin{aligned} \langle x-y,Ax - Ay \rangle \geq \alpha \Vert Ax - Ay \Vert ^{2} ,\quad \forall x,y \in C. \end{aligned}$$

Lemma 2.4

([14])

Let C be a nonempty closed convex subset of a real Hilbert space H and let \(A,B: C \rightarrow H\) be α- and β-inverse strongly monotone mappings, respectively, with \(\alpha ,\beta > 0\) and \(VI(C,A) \cap VI(C,B) \neq \emptyset \). Then

$$\begin{aligned} VI \bigl(C, aA + (1-a)B \bigr) = VI(C,A) \cap VI(C,B),\quad \forall a \in (0,1). \end{aligned}$$

Furthermore, if \(0 < \gamma < \min \{ 2\alpha ,2\beta \} \), we find that \(I - \gamma (aA + (1-a)B)\) is a nonexpansive mapping.

Remark 2.5

For every \(i = 1,2,\ldots,N\), let \(A_{i} : C\rightarrow H \) be an \(\alpha _{i} \)-inverse strongly monotone mapping, let \(\eta = \min_{i=1,2,\ldots,N} \{ \alpha _{i} \} \), and assume \(\bigcap_{i=1}^{N} VI(C,A_{i}) \neq \emptyset \). Then

$$\begin{aligned} VI \Biggl(C,\sum_{i=1}^{N} a_{i}A_{i} \Biggr) = \bigcap_{i=1}^{N} VI(C,A_{i}), \end{aligned}$$
(7)

where \(\sum_{i=1}^{N} a_{i} = 1\) and \(0< a_{i}<1\) for every \(i = 1,2,\ldots,N\). Moreover, \(\sum_{i=1}^{N} a_{i}A_{i}\) is monotone and \(\frac{1}{\eta }\)-Lipschitz continuous.

Proof

It is easy to see that \(\sum_{i=k+1}^{N}\frac{a_{i}}{\prod_{j=1}^{k}(1-a_{j})}A_{i}\) is an inverse strongly monotone mapping for each \(k = 1,2,\ldots,N-1\).

Take \(N = 3\) and let \(VI(C,A_{1}) \cap VI(C,A_{2}) \cap VI(C,A_{3}) \neq \emptyset \). By using Lemma 2.4, we have

$$\begin{aligned} VI(C,a_{1}A_{1} + a_{2}A_{2} +a_{3}A_{3}) &= VI \biggl(C,a_{1}A_{1} + (1-a_{1}) \biggl( \frac{a_{2}}{1-a_{1}}A_{2} + \frac{a_{3}}{1-a_{1}}A_{3} \biggr) \biggr) \\ &= VI(C,A_{1}) \cap VI \biggl(C,\frac{a_{2}}{1-a_{1}}A_{2} + \frac{a_{3}}{1-a_{1}}A_{3} \biggr) \\ &= VI(C,A_{1}) \cap VI(C,A_{2}) \cap VI(C,A_{3}), \end{aligned}$$
(8)

where \(a_{1} , a_{2} ,a_{3} \in (0,1)\) and \(\sum_{i=1}^{3} a_{i} = 1\).

Take \(N = 4\) and let \(\bigcap_{i=1}^{4} VI(C,A_{i}) \neq \emptyset \). By using Lemma 2.4 and (8), we have

$$\begin{aligned} & VI(C,a_{1}A_{1} + a_{2}A_{2} +a_{3}A_{3} +a_{4}A_{4}) \\ &\quad= VI \biggl(C,(1 - a_{4}) \biggl( \frac{a_{1}}{1-a_{4}}A_{1} + \frac{a_{2}}{1-a_{4}}A_{2} +\frac{a_{3}}{1-a_{4}}A_{3} \biggr) +a_{4}A_{4} \biggr) \\ &\quad= VI \biggl(C,\frac{a_{1}}{1-a_{4}}A_{1} + \frac{a_{2}}{1-a_{4}}A_{2}+\frac{a_{3}}{1-a_{4}}A_{3} \biggr) \cap VI(C,A_{4}) \\ &\quad= VI(C,A_{1}) \cap VI(C,A_{2}) \cap VI(C,A_{3}) \cap VI(C,A_{4}), \end{aligned}$$
(9)

where \(a_{1} , a_{2} ,a_{3}, a_{4} \in (0,1)\) and \(\sum_{i=1}^{4} a_{i} = 1\).

In the same way, if \(\bigcap_{i=1}^{N} VI(C,A_{i}) \neq \emptyset \), we obtain

$$\begin{aligned} VI \Biggl(C,\sum_{i=1}^{N} a_{i}A_{i} \Biggr) = \bigcap_{i=1}^{N} VI(C,A_{i}), \end{aligned}$$
(10)

where \(a_{i} \in (0,1)\), for each \(i = 1,2,\ldots,N\), and \(\sum_{i=1}^{N} a_{i} = 1\). □

Lemma 2.6

In a real Hilbert space H, the following well-known results hold:

(i) For all \(x,y \in H\) and \(\alpha \in [0,1]\),

$$\begin{aligned} \bigl\Vert \alpha x + (1-\alpha )y \bigr\Vert ^{2} = \alpha \Vert x \Vert ^{2} + (1 - \alpha ) \Vert y \Vert ^{2} - \alpha (1 - \alpha ) \Vert x - y \Vert ^{2}, \end{aligned}$$

(ii) \(\Vert x + y \Vert ^{2} \leq \Vert x \Vert ^{2} + 2 \langle y, x+y \rangle \) for all \(x,y \in H\).

Lemma 2.7

([17])

Let C be a nonempty closed and convex subset of a real Hilbert space H. If \(T:C \rightarrow C\) is a nonexpansive mapping with \(F(T) \neq \emptyset \), then the mapping \(I-T\) is demiclosed at 0, i.e., if \(\{ x_{n} \} \) is a sequence in C weakly converging to \(x \in C\) and if \(\{ x_{n} - T x_{n} \} \) converges strongly to 0, then \(x \in F(T)\).

Lemma 2.8

([17])

Let \(\{ s_{n} \} \) be a sequence of nonnegative real numbers satisfying

$$\begin{aligned} s_{n+1} \leq (1-\alpha _{n})s_{n} + \delta _{n}, \quad\forall n \geq 0, \end{aligned}$$

where \(\{\alpha _{n}\}\) is a sequence in (0,1) and \(\{ \delta _{n} \} \) is a sequence such that

(1) \(\sum_{n=1}^{\infty } \alpha _{n}= \infty \);

(2) \(\limsup_{n \rightarrow \infty }\frac{\delta _{n}}{\alpha _{n}} \leq 0\) or \(\sum_{n=1}^{\infty } |\delta _{n}| < \infty \).

Then \(\lim_{n \rightarrow \infty } s_{n} = 0\).

Lemma 2.9

([17])

Each Hilbert space H satisfies Opial’s condition, i.e., for any sequence \(\{ x_{n} \} \) with \(x_{n} \rightharpoonup x\), the inequality

$$\begin{aligned} \liminf_{n \rightarrow \infty } \Vert x_{n} - x \Vert < \liminf _{n \rightarrow \infty } \Vert x_{n} - y \Vert \end{aligned}$$

holds for every \(y \in H\) with \(x \neq y\).

Lemma 2.10

([16])

\(u \in H\) is a solution of variational inclusion (1) if and only if \(u=J_{M,\lambda }(u-\lambda Bu)\), \(\forall \lambda > 0\), i.e.,

$$\begin{aligned} VI(H,B,M) = F \bigl(J_{M,\lambda }(I-\lambda B) \bigr), \quad\forall \lambda >0. \end{aligned}$$

If B is α-inverse strongly monotone and \(\lambda \in (0,2\alpha ]\), then \(VI(H,B,M)\) is a closed convex subset of H.

The next lemma presents the association of the fixed point of a nonlinear mapping and the solution of GSMVIP under suitable conditions on the parameters.

Lemma 2.11

Let H be a real Hilbert space and let \(A_{G} : H \rightarrow H\) be an α-inverse strongly monotone mapping. Let \(M_{A}, M_{B} : H \rightarrow 2^{H}\) be multi-valued maximal monotone mappings with \(\Omega \neq \emptyset \). Then \(x^{*} \in \Omega \) if and only if \(x^{*} = Gx^{*}\), where \(G : H \rightarrow H\) is a mapping defined by

$$\begin{aligned} G(x) = J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}) \bigl(bx+(1-b)J_{M_{B}, \lambda _{B}}(I-\lambda _{B}A_{G})x \bigr), \end{aligned}$$

for all \(x \in H\), \(b \in (0,1)\) and \(\lambda _{A},\lambda _{B} \in (0,2\alpha )\). Moreover, we see that G is a nonexpansive mapping.

Proof

\((\Rightarrow )\) Let \(x^{*} \in \Omega \). Then \(\theta \in A_{G}x^{*} + M_{A}x^{*}\) and \(\theta \in A_{G}x^{*} + M_{B}x^{*}\), that is, \(x^{*} \in VI(H,A_{G},M_{A})\) and \(x^{*} \in VI(H,A_{G},M_{B})\).

From Lemma 2.10, we have \(x^{*} \in F(J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}))\) and \(x^{*} \in F(J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G}))\).

It implies that

$$\begin{aligned} x^{*} = J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G})x^{*} \end{aligned}$$
(11)

and

$$\begin{aligned} x^{*} = J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G})x^{*}. \end{aligned}$$
(12)

By the definition of G, (11) and (12), we have

$$\begin{aligned} G \bigl(x^{*} \bigr) &= J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}) \bigl(bx^{*}+(1-b)J_{M_{B}, \lambda _{B}}(I- \lambda _{B}A_{G})x^{*} \bigr) \\ &= x^{*}. \end{aligned}$$

\((\Leftarrow )\) Let \(x^{*} = G(x^{*})\). Applying the same method as in Lemma 2.1 (2) of [16], we find that \(J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G})\) and \(J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G})\) are nonexpansive mappings.

Since \(x^{*} = G(x^{*})\), we have

$$\begin{aligned} x^{*} = G \bigl(x^{*} \bigr) = J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}) \bigl(bx^{*}+(1-b)J_{M_{B}, \lambda _{B}}(I- \lambda _{B}A_{G})x^{*} \bigr). \end{aligned}$$

Let \(y \in \Omega \). Then \(\theta \in A_{G}y + M_{A}y\) and \(\theta \in A_{G}y + M_{B}y\).

From Lemma 2.10, it follows that \(y \in F(J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G})) \cap F(J_{M_{B}, \lambda _{B}}(I-\lambda _{B}A_{G}))\). Then

$$\begin{aligned} \bigl\Vert x^{*} - y \bigr\Vert ^{2} ={}& \bigl\Vert J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}) \bigl(bx^{*}+(1-b)J_{M_{B}, \lambda _{B}}(I-\lambda _{B}A_{G})x^{*} \bigr) - y \bigr\Vert ^{2} \\ ={}& \bigl\Vert J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}) \bigl(bx^{*}+(1-b)J_{M_{B}, \lambda _{B}}(I-\lambda _{B}A_{G})x^{*} \bigr) \\ &{} -J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}) y \bigr\Vert ^{2} \\ \leq {}& \bigl\Vert \bigl(bx^{*}+(1-b)J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G})x^{*} \bigr) - y \bigr\Vert ^{2} \\ ={}& \bigl\Vert b \bigl(x^{*} - y \bigr) + (1-b) \bigl(J_{M_{B},\lambda _{B}}(I- \lambda _{B}A_{G})x^{*} - y \bigr) \bigr\Vert ^{2} \\ ={}& b \bigl\Vert x^{*} - y \bigr\Vert ^{2} + (1-b) \bigl\Vert J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G})x^{*} - y \bigr\Vert ^{2} \\ &{}- b(1-b) \bigl\Vert x^{*}-J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G})x^{*} \bigr\Vert ^{2} \\ \leq {}& b \bigl\Vert x^{*} - y \bigr\Vert ^{2} + (1-b) \bigl\Vert x^{*} - y \bigr\Vert ^{2} - b(1-b) \bigl\Vert x^{*} \\ &{} -J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G})x^{*} \bigr\Vert ^{2} \\ ={}& \bigl\Vert x^{*} - y \bigr\Vert ^{2}- b(1-b) \bigl\Vert x^{*}-J_{M_{B},\lambda _{B}}(I- \lambda _{B}A_{G})x^{*} \bigr\Vert ^{2}. \end{aligned}$$
(13)

Since \(b(1-b) > 0\), it implies that \(\Vert x^{*} - J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G})x^{*} \Vert = 0\).

That is, \(x^{*} \in F(J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G}))\).

Since \(x^{*} = G(x^{*}) \) and \(x^{*} \in F(J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G}))\), we have

$$\begin{aligned} x^{*} &= J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}) \bigl(bx^{*}+(1-b)J_{M_{B}, \lambda _{B}}(I-\lambda _{B}A_{G})x^{*} \bigr) \\ &= J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}) \bigl(bx^{*}+(1-b)x^{*} \bigr) \\ &= J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G})x^{*}. \end{aligned}$$

Therefore \(x^{*} \in F(J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}))\).

From Lemma 2.10, \(x^{*} \in F(J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}))\) and \(x^{*} \in F(J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G}))\), we have \(\theta \in A_{G}x^{*} + M_{A}x^{*}\) and \(\theta \in A_{G}x^{*} + M_{B}x^{*}\). Then \(x^{*} \in \Omega \).

Applying the same computation as in (13) to arbitrary points of H, we can conclude that G is a nonexpansive mapping. □

We give some examples to support Lemma 2.11 and to show that Lemma 2.11 fails if some of its conditions do not hold.

Example 2.12

Let \(H = \mathcal{R}^{2}\) be the two-dimensional space of real numbers with the inner product \(\langle \cdot ,\cdot \rangle : \mathcal{R}^{2} \times \mathcal{R}^{2} \rightarrow \mathcal{R}\) defined by \(\langle \mathbf{x},\mathbf{y} \rangle = x_{1}y_{1} + x_{2}y_{2}\) for all \(\mathbf{x} = (x_{1},x_{2}), \mathbf{y} = (y_{1},y_{2}) \in \mathcal{R}^{2}\), and with the usual norm \(\Vert \cdot \Vert : \mathcal{R}^{2} \rightarrow \mathcal{R}\) given by \(\Vert \mathbf{x} \Vert = \sqrt{x_{1}^{2} + x_{2}^{2}}\). Let \(A_{G}: \mathcal{R}^{2} \rightarrow \mathcal{R}^{2}\) be defined by \(A_{G}((x_{1},x_{2})) = (x_{1} - 5,x_{2} - 5)\). Let \(M_{A} : \mathcal{R}^{2} \rightarrow 2^{\mathcal{R}^{2}}\) be defined by \(M_{A}(x_{1},x_{2}) = \{ (2x_{1}-1,2x_{2} - 1) \} \) and \(M_{B} : \mathcal{R}^{2} \rightarrow 2^{\mathcal{R}^{2}}\) be defined by \(M_{B}(x_{1},x_{2}) = \{ (\frac{x_{1}}{2}+2,\frac{x_{2}}{2}+2) \} \). Show that \((2,2) \in F(G)\).

Solution. It is obvious that \(\Omega = \{(2,2)\}\). Choose \(\lambda _{A} = \frac{1}{2}\). From \(M_{A}(x_{1},x_{2}) = \{ (2x_{1}-1,2x_{2} - 1) \} \) and the resolvent of \(M_{A}\), \(J_{M_{A},\lambda _{A}}x = (I+\lambda _{A} M_{A})^{-1}x\) for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\), we have

$$\begin{aligned} J_{M_{A},\lambda _{A}}(x) = \frac{x}{2} + \frac{1}{4}, \end{aligned}$$
(14)

for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). Choose \(\lambda _{B} = 1\). From \(M_{B}(x_{1},x_{2}) = \{ (\frac{x_{1}}{2}+2,\frac{x_{2}}{2}+2) \} \) and the resolvent of \(M_{B}\), \(J_{M_{B},\lambda _{B}}x= (I+ \lambda _{B} M_{B})^{-1}x\) for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\), we have

$$\begin{aligned} J_{M_{B},\lambda _{B}}(x) = \frac{2x}{3} - \frac{4}{3}, \end{aligned}$$
(15)

for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). It is easy to see that \(A_{G}\) is 1-inverse strongly monotone. Choose \(b = \frac{1}{4}\). From (14) and (15), we have

$$\begin{aligned} G(x) &= J_{M_{A},\frac{1}{2}} \biggl(I-\frac{1}{2}A_{G} \biggr) \biggl(\frac{1}{4}x+ \frac{3}{4}J_{M_{B},1}(I-1A_{G})x \biggr) \\ &= \frac{x}{16}+\frac{30}{16}, \end{aligned}$$

for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). By Lemma 2.11, we have \((2,2) \in F(G)\).
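As a numerical check of this computation, the following sketch iterates G, built from the closed-form resolvents (14) and (15) applied componentwise, from an arbitrary starting point; the iterates settle at the solution (2,2):

```python
import numpy as np

# Numerical check of Example 2.12: iterate G and verify that (2, 2) is fixed.
A_G = lambda x: x - 5.0                       # A_G(x) = x - 5, componentwise
J_A = lambda u: u / 2.0 + 0.25                # resolvent (14), lambda_A = 1/2
J_B = lambda u: 2.0 * u / 3.0 - 4.0 / 3.0     # resolvent (15), lambda_B = 1
b = 0.25

def G(x):
    inner = b * x + (1.0 - b) * J_B(x - 1.0 * A_G(x))
    return J_A(inner - 0.5 * A_G(inner))      # equals x/16 + 30/16

x = np.array([7.0, -4.0])
for _ in range(50):                           # G is a contraction here
    x = G(x)
print(x)                                      # -> [2. 2.]
```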

Example 2.13

Let \(H = \mathcal{R}^{2}\) with the inner product and norm of Example 2.12, and let \(A_{G}\), \(M_{A}\), and \(M_{B}\) be as in Example 2.12. Show that \((2,2) \notin F(G)\) when the parameters \(\lambda _{A}, \lambda _{B}\) are chosen outside \((0,2\alpha )\).

Solution. It is obvious that \(\Omega = \{(2,2)\}\). Choose \(\lambda _{A} = 2\). From \(M_{A}(x_{1},x_{2}) = \{ (2x_{1}-1,2x_{2} - 1) \} \) and the resolvent of \(M_{A}\), \(J_{M_{A},\lambda _{A}}x = (I+\lambda _{A} M_{A})^{-1}x\) for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\), we have

$$\begin{aligned} J_{M_{A},\lambda _{A}}(x) = \frac{x}{5} + \frac{2}{5}, \end{aligned}$$
(16)

for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). Choose \(\lambda _{B} = 4\). From \(M_{B}(x_{1},x_{2}) = \{ (\frac{x_{1}}{2}+2,\frac{x_{2}}{2}+2) \} \) and the resolvent of \(M_{B}\), \(J_{M_{B},\lambda _{B}}x= (I+ \lambda _{B} M_{B})^{-1}x\) for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\), we have

$$\begin{aligned} J_{M_{B},\lambda _{B}}(x) = \frac{x}{3} - \frac{8}{3}, \end{aligned}$$
(17)

for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). Choose \(b = \frac{1}{4}\). From (16), (17) and \(A_{G}\) being 1-inverse strongly monotone, we have

$$\begin{aligned} G(x) &= J_{M_{A},2}(I-2A_{G}) \biggl(\frac{1}{4} x+ \frac{3}{4} J_{M_{B},4}(I-4 A_{G})x \biggr) \\ &= \frac{x}{10}+\frac{9}{5}, \end{aligned}$$

for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). By Lemma 2.11, we have \((2,2) \notin F(G)\).

Lemma 2.14

([18])

Let \(\{ \Gamma _{n} \} \) be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence \(\{ \Gamma _{n_{j}} \} \) of \(\{\Gamma _{n}\}\) such that \(\Gamma _{n_{j}} < \Gamma _{{n_{j}}+1}\) for all \(j \geq 0\). Also consider the sequence of integers \(\{ \tau (n) \} _{n \geq n_{0}}\) defined by

$$\begin{aligned} \tau (n) = \max \{ k \leq n : \Gamma _{k} < \Gamma _{k+1} \} . \end{aligned}$$

Then \(\{ \tau (n) \} _{n \geq n_{0}}\) is a nondecreasing sequence verifying \(\lim_{n \rightarrow \infty } \tau (n) = \infty \) and, for all \(n \geq n_{0}\),

$$\begin{aligned} \max \{ \Gamma _{\tau (n)}, \Gamma _{n} \} \leq \Gamma _{ \tau (n)+1} . \end{aligned}$$

Lemma 2.15

Let H be a real Hilbert space and, for every \(i = 1,2,\ldots,N\), let \(A_{i} : H \rightarrow H \) be an \(\alpha _{i}\)-inverse strongly monotone mapping with \(\eta = \min_{i=1,2,\ldots,N} \{ \alpha _{i} \} \). Let \(\{ x_{n} \} _{n=1}^{\infty }\) and \(\{ y_{n} \} _{n=1}^{\infty }\) be sequences generated by \(y_{n} = P_{C}(I-\lambda \sum_{i=1}^{N} a_{i}A_{i})x_{n}\), let \(Q_{n} = \{ z \in H : \langle (I-\lambda \sum_{i=1}^{N} a_{i}A_{i})x_{n}-y_{n},y_{n}-z \rangle \geq 0 \} \), and let \(x^{*} \in \bigcap_{i=1}^{N} VI(C,A_{i})\). Then the following inequality is fulfilled:

$$\begin{aligned} &\Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - x^{*} \Biggr\Vert ^{2} \\ &\quad\leq \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} - \biggl(1-\frac{\lambda }{\eta } \biggr) \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\Vert ^{2} \\ &\qquad{} - \biggl(1-\frac{\lambda }{\eta } \biggr) \Vert x_{n} - y_{n} \Vert ^{2}, \end{aligned}$$

where \(\sum_{i=1}^{N} a_{i} = 1\), \(0 < a_{i} < 1\) for every \(i = 1,2,\ldots,N\), and \(\lambda \in (0,\eta )\).

Proof

Since \(x^{*} \in \bigcap_{i=1}^{N} VI(C,A_{i})\), we have \(x^{*} \in VI(C,A_{i})\) for every \(i = 1,2,\ldots,N\). From this and (6), we obtain

$$\begin{aligned} &\Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - x^{*} \Biggr\Vert ^{2} \\ &\quad \leq \Biggl\Vert x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n} - x^{*} \Biggr\Vert ^{2} \\ &\qquad{} - \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - \Biggl(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) \Biggr\Vert ^{2} \\ &\quad=\bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} \\ &\qquad{} - 2\lambda \Biggl\langle P_{Q_{n}} \Biggl(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - x^{*}, \sum_{i=1}^{N} a_{i}A_{i}y_{n} \Biggr\rangle \\ &\qquad{} - \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - x_{n} \Biggr\Vert ^{2} . \end{aligned}$$
(18)

From the monotonicity of \(\sum_{i=1}^{N} a_{i}A_{i}\), we have

$$\begin{aligned} 0 \leq {}& \Biggl\langle \sum_{i=1}^{N} a_{i}A_{i}y_{n} - \sum _{i=1}^{N} a_{i}A_{i}x^{*}, y_{n} - x^{*} \Biggr\rangle \\ ={}& \Biggl\langle \sum_{i=1}^{N} a_{i}A_{i}y_{n} , y_{n} - x^{*} \Biggr\rangle - \Biggl\langle \sum_{i=1}^{N} a_{i}A_{i}x^{*}, y_{n} - x^{*} \Biggr\rangle \\ \leq{} & \Biggl\langle \sum_{i=1}^{N} a_{i}A_{i}y_{n} , y_{n} - x^{*} \Biggr\rangle \\ ={}& \Biggl\langle \sum_{i=1}^{N} a_{i}A_{i}y_{n} , y_{n} - P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) \Biggr\rangle \\ &{} + \Biggl\langle \sum_{i=1}^{N} a_{i}A_{i}y_{n} ,P_{Q_{n}} \Biggl(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - x^{*} \Biggr\rangle . \end{aligned}$$

It implies that

$$\begin{aligned} &\Biggl\langle x^{*} - P_{Q_{n}} \Biggl(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) ,\sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr\rangle \\ &\quad\leq \Biggl\langle \sum_{i=1}^{N} a_{i}A_{i}y_{n} , y_{n} - P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) \Biggr\rangle . \end{aligned}$$
(19)

From (18) and (19), we have

$$\begin{aligned} &\Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - x^{*} \Biggr\Vert ^{2} \\ &\quad\leq \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} + 2\lambda \Biggl\langle \sum_{i=1}^{N} a_{i}A_{i}y_{n} , y_{n} - P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) \Biggr\rangle \\ &\qquad{} - \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - x_{n} \Biggr\Vert ^{2} \\ &\quad= \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} - \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\Vert ^{2} - \Vert y_{n} - x_{n} \Vert ^{2} \\ &\qquad{} - 2 \Biggl\langle P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n}, y_{n} - x_{n} \Biggr\rangle \\ &\qquad{} + 2\lambda \Biggl\langle \sum_{i=1}^{N} a_{i}A_{i}y_{n} , y_{n} - P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) \Biggr\rangle \\ &\quad= \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} - \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\Vert ^{2} - \Vert y_{n} - x_{n} \Vert ^{2} \\ &\qquad{} + 2 \Biggl\langle x_{n} - y_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} , P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\rangle \\ &\quad= \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} - \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\Vert ^{2} - \Vert y_{n} - x_{n} \Vert ^{2} \\ &\qquad{} + 2 \Biggl\langle \Biggl(I - \lambda \sum_{i=1}^{N} a_{i}A_{i} \Biggr)x_{n} - y_{n} , P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\rangle \\ &\qquad{}+ 2 \Biggl\langle \lambda \sum_{i=1}^{N} a_{i}A_{i} x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} , P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\rangle \\ &\quad\leq \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} - \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\Vert ^{2} - \Vert y_{n} - x_{n} \Vert ^{2} \\ &\qquad{} + 2\lambda \Biggl\Vert \sum_{i=1}^{N} a_{i}A_{i} x_{n} - \sum _{i=1}^{N} a_{i}A_{i} y_{n} \Biggr\Vert \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\Vert \\ &\quad\leq \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} - \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\Vert ^{2} - \Vert y_{n} - x_{n} \Vert ^{2} \\ &\qquad{} + 2\frac{\lambda }{\eta } \Vert x_{n} - y_{n} \Vert \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\Vert \\ &\quad = \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} - \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum _{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\Vert ^{2} - \Vert y_{n} - x_{n} \Vert ^{2} \\ &\qquad{} + \frac{\lambda }{\eta } \Biggl( \Vert x_{n} - y_{n} \Vert ^{2} + \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\Vert ^{2} \Biggr) \\ &\quad= \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} - \biggl(1- \frac{\lambda }{\eta } \biggr) \Biggl\Vert P_{Q_{n}} \Biggl(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n} \Biggr) - y_{n} \Biggr\Vert ^{2} \\ &\qquad{} - \biggl(1- \frac{\lambda }{\eta } \biggr) \Vert y_{n} - x_{n} \Vert ^{2}. \end{aligned}$$
(20)

 □

3 Main result

In this section, we prove the strong convergence of the sequence acquired from the proposed iterative method for finding a common element of the set of solutions of a finite family of variational inequality problems and the set of solutions of the proposed problem.

Theorem 3.1

Let H be a real Hilbert space. For \(i = 1,2,\ldots,N\), let \(A_{i} : H \rightarrow H\) be \(\alpha _{i}\)-inverse strongly monotone mappings and let \(A_{G} : H \rightarrow H \) be an \(\alpha _{G}\)-inverse strongly monotone mapping. Define the mapping \(G:H \rightarrow H\) by \(G(x) = J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G})(bx+(1-b)J_{M_{B}, \lambda _{B}}(I-\lambda _{B}A_{G})x)\) for all \(x \in H\), \(b \in (0,1)\) and \(\lambda _{A},\lambda _{B} \in (0,2\alpha _{G})\). Assume that \(\Gamma = \bigcap_{i=1}^{N} VI(C,A_{i}) \cap F(G) \neq \emptyset \). Let the sequences \(\{ y_{n} \} \) and \(\{ x_{n} \} \) be generated by \(x_{1} , u \in H\) and

$$\begin{aligned} \textstyle\begin{cases} y_{n} = P_{C}(I-\lambda \sum_{i=1}^{N} a_{i}A_{i})x_{n}, \\ Q_{n} = \{ z \in H : \langle (I-\lambda \sum_{i=1}^{N} a_{i}A_{i})x_{n}-y_{n},y_{n}-z \rangle \geq 0 \} , \\ x_{n+1} = \alpha _{n}u + \beta _{n} P_{Q_{n}}(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n}) + \gamma _{n} Gx_{n}, \end{cases}\displaystyle \end{aligned}$$
(21)

where \(\sum_{i=1}^{N} a_{i} = 1, 0 < a_{i} < 1 , \{ \alpha _{n} \} , \{ \beta _{n} \} , \{ \gamma _{n} \} \subset [0, 1]\) with \(\alpha _{n}+\beta _{n}+\gamma _{n} = 1\), \(\lambda \in (0,\eta )\) with \(\eta = \min_{i=1,2,\ldots,N} \{ \alpha _{i} \} \).

Suppose the following conditions hold:

(i) \(\sum_{n=0}^{\infty }\alpha _{n} = \infty \), \(\lim_{n \rightarrow \infty } \alpha _{n} = 0\),

(ii) \(0< c<\beta _{n}\), \(\gamma _{n} \leq d <1\).

Then \(\{ x_{n} \} \) converges strongly to \(x^{*} \in \Gamma \) where \(x^{*} = P_{\Gamma }u\).

Proof

First, we show that \(\{ x_{n} \} \) is bounded. Let \(z_{n} = P_{Q_{n}}(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n})\).

We consider

$$\begin{aligned} x_{n+1} &= \alpha _{n}u + \beta _{n} z_{n} + \gamma _{n} Gx_{n} \\ &= \alpha _{n}u + (1-\alpha _{n}) \biggl( \frac{\beta _{n} z_{n} + \gamma _{n} Gx_{n}}{1-\alpha _{n}} \biggr) \\ &= \alpha _{n}u + (1-\alpha _{n})t_{n}, \end{aligned}$$

where \(t_{n} = \frac{\beta _{n} z_{n} + \gamma _{n} Gx_{n}}{1-\alpha _{n}}\). Letting \(x^{*} \in \Gamma = \bigcap_{i=1}^{N} VI(C,A_{i}) \cap F(G)\), we have

$$\begin{aligned} \bigl\Vert t_{n} - x^{*} \bigr\Vert ^{2} ={}& \biggl\Vert \frac{\beta _{n} z_{n} + \gamma _{n} Gx_{n}}{1-\alpha _{n}} - x^{*} \biggr\Vert ^{2} \\ ={}& \biggl\Vert \frac{\beta _{n} z_{n} + \gamma _{n} Gx_{n} - (1-\alpha _{n})x^{*}}{1-\alpha _{n}} \biggr\Vert ^{2} \\ ={}& \frac{\beta _{n}}{1-\alpha _{n}} \bigl\Vert z_{n} - x^{*} \bigr\Vert ^{2} + \frac{\gamma _{n}}{1-\alpha _{n}} \bigl\Vert Gx_{n} - x^{*} \bigr\Vert ^{2} \\ &{} - \frac{\beta _{n}\gamma _{n}}{(1-\alpha _{n})^{2}} \Vert z_{n} - Gx_{n} \Vert ^{2}. \end{aligned}$$
(22)

From definition of \(x_{n+1}\) and (22), we consider

$$\begin{aligned} \bigl\Vert x_{n+1} - x^{*} \bigr\Vert ^{2} ={}& \bigl\Vert \alpha _{n}u + (1-\alpha _{n})t_{n} - x^{*} \bigr\Vert ^{2} \\ = {}&\bigl\Vert \alpha _{n} \bigl(u-x^{*} \bigr) + (1- \alpha _{n}) \bigl(t_{n} - x^{*} \bigr) \bigr\Vert ^{2} \\ ={}& \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + (1-\alpha _{n}) \bigl\Vert t_{n} - x^{*} \bigr\Vert ^{2} - \alpha _{n}(1-\alpha _{n}) \Vert u - t_{n} \Vert ^{2} \\ ={}& \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + (1-\alpha _{n}) \biggl[ \frac{\beta _{n}}{1-\alpha _{n}} \bigl\Vert z_{n} - x^{*} \bigr\Vert ^{2} \\ &{} + \frac{\gamma _{n}}{1-\alpha _{n}} \bigl\Vert Gx_{n} - x^{*} \bigr\Vert ^{2} - \frac{\beta _{n}\gamma _{n}}{(1-\alpha _{n})^{2}} \Vert z_{n} - Gx_{n} \Vert ^{2} \biggr] \\ &{} - \alpha _{n}(1-\alpha _{n}) \Vert u - t_{n} \Vert ^{2} \\ ={}& \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + \beta _{n} \bigl\Vert z_{n} - x^{*} \bigr\Vert ^{2} + \gamma _{n} \bigl\Vert Gx_{n} - x^{*} \bigr\Vert ^{2} \\ &{} -\frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2} - \alpha _{n}(1-\alpha _{n}) \Vert u - t_{n} \Vert ^{2} \\ \leq {}& \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + \beta _{n} \bigl\Vert z_{n} - x^{*} \bigr\Vert ^{2} + \gamma _{n} \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} \\ &{} -\frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2} - \alpha _{n}(1-\alpha _{n}) \Vert u - t_{n} \Vert ^{2} . \end{aligned}$$
(23)

By Lemma 2.15 and \(\lambda \in (0,\eta )\), we have

$$\begin{aligned} \bigl\Vert z_{n} - x^{*} \bigr\Vert ^{2} \leq \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} . \end{aligned}$$
(24)

From (23) and (24), we get

$$\begin{aligned} \bigl\Vert x_{n+1} - x^{*} \bigr\Vert ^{2} \leq {}& \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + \beta _{n} \bigl\Vert z_{n} - x^{*} \bigr\Vert ^{2} + \gamma _{n} \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} \\ &{} -\frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2} - \alpha _{n}(1-\alpha _{n}) \Vert u - t_{n} \Vert ^{2} \\ \leq {}& \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + \beta _{n} \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} + \gamma _{n} \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} \\ &{}-\frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2} - \alpha _{n}(1-\alpha _{n}) \Vert u - t_{n} \Vert ^{2} \\ ={}& \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + (1-\alpha _{n}) \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} \\ &{} -\frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2} - \alpha _{n}(1-\alpha _{n}) \Vert u - t_{n} \Vert ^{2} \\ \leq{} & \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + (1-\alpha _{n}) \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} \\ \leq {}& \max \bigl\{ \bigl\Vert u-x^{*} \bigr\Vert ^{2} , \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} \bigr\} \\ &{} \vdots \\ \leq {}& \max \bigl\{ \bigl\Vert u-x^{*} \bigr\Vert ^{2} , \bigl\Vert x_{1} - x^{*} \bigr\Vert ^{2} \bigr\} . \end{aligned}$$
(25)

By induction,

$$\begin{aligned} \bigl\Vert x_{n+1} - x^{*} \bigr\Vert ^{2} \leq \max \bigl\{ \bigl\Vert u-x^{*} \bigr\Vert ^{2} , \bigl\Vert x_{1} - x^{*} \bigr\Vert ^{2} \bigr\} , \end{aligned}$$

hence \(\{ x_{n} \} \) is a bounded sequence.

Next, from (23) and Lemma 2.15, we obtain

$$\begin{aligned} \bigl\Vert x_{n+1} - x^{*} \bigr\Vert ^{2} \leq {}& \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + \beta _{n} \bigl\Vert z_{n} - x^{*} \bigr\Vert ^{2} + \gamma _{n} \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} \\ &{} -\frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2} \\ \leq {}& \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + \beta _{n} \biggl[ \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} - \biggl(1-\frac{\lambda }{\eta } \biggr) \Vert z_{n} - y_{n} \Vert ^{2} \\ &{} - \biggl(1-\frac{\lambda }{\eta } \biggr) \Vert x_{n} - y_{n} \Vert ^{2} \biggr] + \gamma _{n} \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} - \frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2} \\ ={}& \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + (1-\alpha _{n}) \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} - \beta _{n} \biggl(1- \frac{\lambda }{\eta } \biggr) \Vert z_{n} - y_{n} \Vert ^{2} \\ &{} - \beta _{n} \biggl(1-\frac{\lambda }{\eta } \biggr) \Vert x_{n} - y_{n} \Vert ^{2} - \frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2} \\ \leq {}& \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} - \beta _{n} \biggl(1- \frac{\lambda }{\eta } \biggr) \Vert z_{n} - y_{n} \Vert ^{2} \\ &{} - \beta _{n} \biggl(1-\frac{\lambda }{\eta } \biggr) \Vert x_{n} - y_{n} \Vert ^{2} - \frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2} . \end{aligned}$$

It implies that

$$\begin{aligned} &\beta _{n} \biggl(1-\frac{\lambda }{\eta } \biggr) \Vert z_{n} - y_{n} \Vert ^{2} + \beta _{n} \biggl(1- \frac{\lambda }{\eta } \biggr) \Vert x_{n} - y_{n} \Vert ^{2} + \frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2} \\ &\quad\leq \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} - \bigl\Vert x_{n+1} - x^{*} \bigr\Vert ^{2}. \end{aligned}$$
(26)

Let \(S_{n} := \beta _{n}(1-\frac{\lambda }{\eta }) \Vert z_{n} - y_{n} \Vert ^{2} + \beta _{n}(1-\frac{\lambda }{\eta }) \Vert x_{n} - y_{n} \Vert ^{2} + \frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2}\).

Then we have

$$ S_{n} \leq \alpha _{n} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} - \bigl\Vert x_{n+1} - x^{*} \bigr\Vert ^{2}. $$
(27)

Now, we consider two possible cases:

Case 1. Put \(\Gamma _{n} := \Vert x_{n} - x^{*} \Vert ^{2}\) for all \(n \in \mathcal{N}\).

Assume that there is \(n_{0} \geq 0\) such that, for each \(n \geq n_{0}\), \(\Gamma _{n+1} \leq \Gamma _{n}\).

In this case, \(\lim_{n \rightarrow \infty } \Gamma _{n}\) exists and \(\lim_{n \rightarrow \infty } (\Gamma _{n} - \Gamma _{n+1}) = 0\).

Since \(\lim_{n \rightarrow \infty } \alpha _{n} = 0\), it follows from (27) that \(\lim_{n \rightarrow \infty } S_{n} = 0\).

Therefore, we have \(\lim_{n \rightarrow \infty } \beta _{n}(1-\frac{\lambda }{\eta }) \Vert z_{n} - y_{n} \Vert ^{2} = 0\), \(\lim_{n \rightarrow \infty } \beta _{n}(1-\frac{\lambda }{\eta }) \Vert x_{n} - y_{n} \Vert ^{2} = 0\) and \(\lim_{n \rightarrow \infty } \frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2} = 0\).

From conditions (i) and (ii), we obtain

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert z_{n} - y_{n} \Vert = \lim_{n \rightarrow \infty } \Vert x_{n} - y_{n} \Vert = \lim_{n \rightarrow \infty } \Vert z_{n} - Gx_{n} \Vert = 0. \end{aligned}$$
(28)

Hence, we obtain

$$\begin{aligned} \Vert x_{n} - Gx_{n} \Vert \leq \Vert x_{n} - y_{n} \Vert + \Vert y_{n} - z_{n} \Vert + \Vert z_{n} - Gx_{n} \Vert . \end{aligned}$$

From (28), we have

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert x_{n} - Gx_{n} \Vert = 0. \end{aligned}$$
(29)

We now show that \(\limsup_{n \rightarrow \infty } \langle u - x^{*}, x_{n} - x^{*} \rangle \leq 0\).

We can choose a subsequence \(\{ x_{n_{i}} \} \) of \(\{ x_{n} \} \) such that

$$ \limsup_{n \rightarrow \infty } \bigl\langle u - x^{*}, x_{n} - x^{*} \bigr\rangle = \lim_{i \rightarrow \infty } \bigl\langle u - x^{*}, x_{n_{i}} - x^{*} \bigr\rangle . $$
(30)

Because \(\{ x_{n} \} \) is a bounded sequence in H, there exists a subsequence of \(\{ x_{n} \} \) that converges weakly to an element of H. Without loss of generality, we can assume that \(x_{n_{i}} \rightharpoonup w\) where \(w \in H\). Since \(\lim_{n \rightarrow \infty } \Vert x_{n} - z_{n} \Vert = 0\) by (28) and the triangle inequality, we have \(z_{n_{i}} \rightharpoonup w\).

Since \(\lim_{n \rightarrow \infty } \Vert x_{n} - y_{n} \Vert = 0\), we also have \(y_{n_{i}} \rightharpoonup w\).

Assume that \(w \notin \bigcap_{i=1}^{N} VI(C,A_{i}) \). So, we have \(w \notin F(P_{C}(I-\lambda \sum_{i=1}^{N} a_{i}A_{i}))\).

Then we have \(w \neq P_{C}(I-\lambda \sum_{i=1}^{N} a_{i}A_{i})w\). By the nonexpansiveness of \(P_{C}(I-\lambda \sum_{i=1}^{N} a_{i}A_{i})\), (28) and Opial’s property, we have

$$\begin{aligned} &\liminf_{n \rightarrow \infty } \Vert x_{n_{i}}-w \Vert \\ &\quad< \liminf_{n \rightarrow \infty } \Biggl\Vert x_{n_{i}}-P_{C} \Biggl(I-\lambda \sum_{i=1}^{N} a_{i}A_{i} \Biggr)w \Biggr\Vert \\ &\quad\leq \liminf_{n \rightarrow \infty } \Biggl( \Vert x_{n_{i}}-y_{n_{i}} \Vert + \Biggl\Vert y_{n_{i}}-P_{C} \Biggl(I- \lambda \sum _{i=1}^{N} a_{i}A_{i} \Biggr)w \Biggr\Vert \Biggr) \\ &\quad\leq \liminf_{n \rightarrow \infty } \Biggl( \Vert x_{n_{i}}-y_{n_{i}} \Vert \\ &\qquad{} + \Biggl\Vert P_{C} \Biggl(I-\lambda \sum _{i=1}^{N} a_{i}A_{i} \Biggr)x_{n_{i}}-P_{C} \Biggl(I- \lambda \sum _{i=1}^{N} a_{i}A_{i} \Biggr)w \Biggr\Vert \Biggr) \\ &\quad\leq \liminf_{n \rightarrow \infty } \Vert x_{n_{i}}-w \Vert . \end{aligned}$$

This is a contradiction; we have \(w \in VI(C,\sum_{i=1}^{N}a_{i}A_{i})\). From Remark 2.5, we have

$$ w \in \bigcap_{i=1}^{N} VI(C,A_{i}). $$
(31)

Assume that \(w \notin F(G)\). Then we have \(w \neq Gw\). From (29) and Opial’s property, we have

$$\begin{aligned} \liminf_{n \rightarrow \infty } \Vert x_{n_{i}}-w \Vert &< \liminf _{n \rightarrow \infty } \Vert x_{n_{i}}-Gw \Vert \\ &\leq \liminf_{n \rightarrow \infty } \bigl( \Vert x_{n_{i}}-Gx_{n_{i}} \Vert + \Vert Gx_{n_{i}}-Gw \Vert \bigr) \\ &\leq \liminf_{n \rightarrow \infty } \bigl( \Vert x_{n_{i}}-Gx_{n_{i}} \Vert + \Vert x_{n_{i}}-w \Vert \bigr) \\ &\leq \liminf_{n \rightarrow \infty } \Vert x_{n_{i}}-w \Vert . \end{aligned}$$

This is a contradiction; we have

$$\begin{aligned} w \in F(G). \end{aligned}$$
(32)

From (31) and (32), we have \(w \in \bigcap_{i=1}^{N} VI(C,A_{i})\cap F(G)\).

Therefore, we get

$$ \limsup_{n \rightarrow \infty } \bigl\langle u - x^{*}, x_{n} - x^{*} \bigr\rangle = \lim_{i \rightarrow \infty } \bigl\langle u - x^{*}, x_{n_{i}} - x^{*} \bigr\rangle = \bigl\langle u - x^{*}, w - x^{*} \bigr\rangle \leq 0, $$
(33)

where \(x^{*} = P_{\Gamma }u\).

Next, we show that \(\{ x_{n} \} \) converges strongly to \(x^{*}\), where \(x^{*} = P_{\Gamma }u\).

From the nonexpansiveness of G, (22) and (24), we have

$$\begin{aligned} \bigl\Vert t_{n} - x^{*} \bigr\Vert ^{2} ={}& \frac{\beta _{n}}{1-\alpha _{n}} \bigl\Vert z_{n} - x^{*} \bigr\Vert ^{2} + \frac{\gamma _{n}}{1-\alpha _{n}} \bigl\Vert Gx_{n} - x^{*} \bigr\Vert ^{2} \\ &{} - \frac{\beta _{n}\gamma _{n}}{(1-\alpha _{n})^{2}} \Vert z_{n} - Gx_{n} \Vert ^{2} \\ \leq {}& \frac{\beta _{n}}{1-\alpha _{n}} \bigl\Vert z_{n} - x^{*} \bigr\Vert ^{2} + \frac{\gamma _{n}}{1-\alpha _{n}} \bigl\Vert Gx_{n} - x^{*} \bigr\Vert ^{2} \\ \leq {}& \frac{\beta _{n}}{1-\alpha _{n}} \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} + \frac{\gamma _{n}}{1-\alpha _{n}} \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} \\ ={}& \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} . \end{aligned}$$
(34)

From the definition of \(x_{n}\), (34) and \(x^{*} = P_{\Gamma }u\), we have

$$\begin{aligned} \bigl\Vert x_{n+1} - x^{*} \bigr\Vert ^{2} &= \bigl\Vert \alpha _{n} \bigl(u - x^{*} \bigr) + (1 - \alpha _{n}) \bigl(t_{n} - x^{*} \bigr) \bigr\Vert ^{2} \\ &\leq (1 - \alpha _{n}) \bigl\Vert t_{n} - x^{*} \bigr\Vert ^{2} + 2\alpha _{n} \bigl\langle u - x^{*}, x_{n+1}- x^{*} \bigr\rangle \\ &\leq (1 - \alpha _{n}) \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} + 2\alpha _{n} \bigl\langle u - x^{*}, x_{n+1}- x^{*} \bigr\rangle . \end{aligned}$$
(35)

By applying Lemma 2.8 to (35), we find that the sequence \(\{ x_{n} \} \) converges strongly to \(x^{*}\).

Case 2. Assume that there exists a subsequence \(\{ \Gamma _{n_{i}} \} \subset \{ \Gamma _{n} \} \) such that \(\Gamma _{n_{i}} \leq \Gamma _{n_{i} + 1}\) for all \(i \in \mathcal{N}\). In this case, we can define \(\tau : \mathcal{N} \rightarrow \mathcal{N}\) by \(\tau (n) = \max \{ k \leq n : \Gamma _{k} < \Gamma _{k+1} \} \).

Then we have \(\tau (n) \rightarrow \infty \) as \(n \rightarrow \infty \) and \(\Gamma _{\tau (n)} < \Gamma _{\tau (n)+1}\). So, we have from (26)

$$\begin{aligned} &\beta _{\tau (n)} \biggl(1-\frac{\lambda }{\eta } \biggr) \Vert z_{\tau (n)} - y_{\tau (n)} \Vert ^{2} + \beta _{\tau (n)} \biggl(1- \frac{\lambda }{\eta } \biggr) \Vert x_{\tau (n)} - y_{\tau (n)} \Vert ^{2} \\ &\qquad{} + \frac{\beta _{\tau (n)} \gamma _{\tau (n)}}{1-\alpha _{\tau (n)}} \Vert z_{ \tau (n)} - Gx_{\tau (n)} \Vert ^{2} \\ &\quad\leq \alpha _{\tau (n)} \bigl\Vert u-x^{*} \bigr\Vert ^{2} + \bigl\Vert x_{\tau (n)} - x^{*} \bigr\Vert ^{2} - \bigl\Vert x_{{\tau (n)}+1} - x^{*} \bigr\Vert ^{2}. \end{aligned}$$

Arguing as in Case 1, we have

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert z_{\tau (n)} - y_{\tau (n)} \Vert = \lim_{n \rightarrow \infty } \Vert x_{\tau (n)} - y_{\tau (n)} \Vert = \lim_{n \rightarrow \infty } \Vert z_{\tau (n)} - Gx_{\tau (n)} \Vert = 0. \end{aligned}$$
(36)

Because \(\{ x_{\tau (n)} \} \) is a bounded sequence, there exists a subsequence \(\{ x_{\tau (n_{j})} \} \) such that

$$ \limsup_{n \rightarrow \infty } \bigl\langle u - x^{*}, x_{\tau (n)} - x^{*} \bigr\rangle = \lim_{j \rightarrow \infty } \bigl\langle u - x^{*}, x_{\tau (n_{j})} - x^{*} \bigr\rangle . $$

Following the same argument as the proof of Case 1 for \(\{ x_{\tau (n_{j})} \} \), we have

$$ \limsup_{n \rightarrow \infty } \bigl\langle u - x^{*}, x_{\tau (n)+1} - x^{*} \bigr\rangle \leq 0 $$

and

$$ \bigl\Vert x_{\tau (n)+1} - x^{*} \bigr\Vert ^{2} \leq (1 - \alpha _{\tau (n)}) \bigl\Vert x_{ \tau (n)} - x^{*} \bigr\Vert ^{2} + 2\alpha _{\tau (n)} \bigl\langle u - x^{*}, x_{{\tau (n)}+1}- x^{*} \bigr\rangle , $$

where \(\alpha _{\tau (n)} \rightarrow 0\), \(\sum_{n=1}^{\infty }\alpha _{\tau (n)} = \infty \) and \(\limsup_{n \rightarrow \infty } \langle u - x^{*}, x_{\tau (n)+1} - x^{*} \rangle \leq 0\).

Hence, by Lemma 2.8, we have \(\lim_{n \rightarrow \infty } \Vert x_{\tau (n)} - x^{*} \Vert = 0\) and \(\lim_{n \rightarrow \infty } \Vert x_{\tau (n)+1} - x^{*} \Vert = 0\).

Therefore, by Lemma 2.14, we have

$$ 0 \leq \bigl\Vert x_{n} - x^{*} \bigr\Vert \leq \max \bigl\{ \bigl\Vert x_{\tau (n)} - x^{*} \bigr\Vert , \bigl\Vert x_{n} - x^{*} \bigr\Vert \bigr\} \leq \bigl\Vert x_{\tau (n)+1} - x^{*} \bigr\Vert . $$

Hence, \(\{ x_{n} \} \) converges strongly to \(x^{*} = P_{\Gamma }u\). This completes the proof of the main theorem. □

4 Application

In 2013, Kangtunyakarn [14] introduced a modification of the system of variational inequalities as follows: find \((x^{*},z^{*})\in C \times C \) such that

$$\begin{aligned} \textstyle\begin{cases} \langle x^{*}-(I-\lambda _{1} D_{1})(ax^{*} + (1-a)z^{*}), x-x^{*} \rangle \geq 0,\quad \forall x \in C, \\ \langle z^{*}-(I-\lambda _{2} D_{2})x^{*}, x - z^{*} \rangle \geq 0, \quad\forall x \in C, \end{cases}\displaystyle \end{aligned}$$
(37)

where \(D_{1}, D_{2} : C \rightarrow H\) are two mappings, \(\lambda _{1}, \lambda _{2} \geq 0\) and \(a \in [0,1]\).

Let h be a proper lower semicontinuous convex function of H into \((-\infty ,+\infty ]\). The subdifferential ∂h of h is defined by

$$\begin{aligned} \partial h(x) = \bigl\{ z \in H: h(x) + \langle z, u-x \rangle \leq h(u) , \forall u \in H \bigr\} \end{aligned}$$

for all \(x \in H\). From Rockafellar [19], we find that ∂h is a maximal monotone operator. Let C be a nonempty closed convex subset of H and \(i_{C}\) be the indicator function of C, i.e.,

$$ i_{C}(x) = \textstyle\begin{cases} 0 & \text{if } x \in C, \\ +\infty & \text{if }x \notin C. \end{cases} $$

Then \(i_{C}\) is a proper, lower semicontinuous and convex function on H, and so the subdifferential \(\partial i_{C}\) of \(i_{C}\) is a maximal monotone operator. The resolvent operator \(J_{\partial i_{C},\lambda }\) of \(i_{C}\) for \(\lambda > 0\) can be defined by \(J_{\partial i_{C},\lambda }(x) = (I + \lambda \partial i_{C})^{-1}(x)\), \(x \in H\). We have \(J_{\partial i_{C},\lambda }(x) = P_{C} x\) for all \(x \in H\) and \(\lambda > 0\); a one-dimensional sketch of this identity is given below. As a special case, if \(M_{A} = M_{B} = \partial i_{C}\) in Lemma 2.11, we find that \(J_{M_{A},\lambda _{A}}= J_{M_{B},\lambda _{B}} = P_{C}\). So we obtain the following result.
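A minimal sketch, assuming \(C = [a,b] \subset \mathcal{R}\) (the endpoints used below are illustrative): in one dimension the resolvent \((I + \lambda \partial i_{C})^{-1}\) is simply clamping to \([a,b]\), for any \(\lambda > 0\).

```python
# For C = [a, b], the resolvent (I + lam * subdifferential(i_C))^{-1} returns
# the point of [a, b] nearest to x, i.e. the metric projection P_C, for any lam > 0.
def proj_interval(x, a, b):
    return min(max(x, a), b)

print(proj_interval(12.3, 1.0, 10.0))   # -> 10.0
print(proj_interval(0.2, 1.0, 10.0))    # -> 1.0
```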

Lemma 4.1

([14])

Let C be a nonempty closed convex subset of a real Hilbert space H and let \(D_{1}, D_{2} : C \rightarrow H\) be mappings. For every \(\lambda _{1}, \lambda _{2} > 0\) and \(b \in [0,1]\), the following statements are equivalent:

(a) \((x^{*},z^{*})\in C \times C \) is a solution of problem (37),

(b) \(x^{*}\) is a fixed point of the mapping \(\widehat{G}: C \rightarrow C\), i.e., \(x^{*} \in F(\widehat{G})\), defined by

$$\begin{aligned} \widehat{G}(x) = P_{C}(I-\lambda _{1}D_{1}) \bigl(bx+(1-b)P_{C}(I-\lambda _{2}D_{2})x \bigr), \end{aligned}$$
(38)

where \(z^{*} = P_{C}(I-\lambda _{2}D_{2})x^{*}\).

Theorem 4.2

Let H be a real Hilbert space. For \(i = 1,2,\ldots,N\), let \(A_{i} : H \rightarrow H\) be \(\alpha _{i}\)-inverse strongly monotone mappings and let \(A_{G} : H \rightarrow H \) be an \(\alpha _{G}\)-inverse strongly monotone mapping. Define the mapping \(\widehat{G}: H \rightarrow H\) by (38). Assume that \(\Gamma = \bigcap_{i=1}^{N} VI(C,A_{i}) \cap F(\widehat{G}) \neq \emptyset \). Let the sequences \(\{ y_{n} \} \) and \(\{ x_{n} \} \) be generated by \(x_{1} , u \in H\) and

$$\begin{aligned} \textstyle\begin{cases} y_{n} = P_{C}(I-\lambda \sum_{i=1}^{N} a_{i}A_{i})x_{n}, \\ Q_{n} = \{ z \in H : \langle (I-\lambda \sum_{i=1}^{N} a_{i}A_{i})x_{n}-y_{n},y_{n}-z \rangle \geq 0 \} , \\ x_{n+1} = \alpha _{n}u + \beta _{n} P_{Q_{n}}(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n}) + \gamma _{n} \widehat{G}x_{n}, \end{cases}\displaystyle \end{aligned}$$
(39)

where \(\sum_{i=1}^{N} a_{i} = 1, 0 < a_{i} < 1 , \{ \alpha _{n} \} , \{ \beta _{n} \} , \{ \gamma _{n} \} \subset [0, 1]\) with \(\alpha _{n}+\beta _{n}+\gamma _{n} = 1\), \(\lambda \in (0,\eta )\) with \(\eta = \min_{i=1,2,\ldots,N} \{ \alpha _{i} \} \).

Suppose the following conditions hold:

(i) \(\sum_{n=0}^{\infty }\alpha _{n} = \infty \), \(\lim_{n \rightarrow \infty } \alpha _{n} = 0\).

(ii) \(0< c<\beta _{n}\), \(\gamma _{n} \leq d <1\).

Then \(\{ x_{n} \} \) converges strongly to \(x^{*} \in \Gamma \) where \(x^{*} = P_{\Gamma }u\).

Proof

Taking \(J_{M_{A},\lambda _{A}} = J_{M_{B},\lambda _{B}}= P_{C}\) in Theorem 3.1, we obtain the desired conclusion. □

In order to apply our main result, we give the following lemma.

Lemma 4.3

([14])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(T,S : C \rightarrow C\) be nonexpansive mappings. Define a mapping \(B^{A} : C \rightarrow C\) by \(B^{A}x = T(aI + (1-a)S)x\) for every \(x \in C\) and \(a \in (0,1)\). Then \(F(B^{A}) = F(T) \cap F(S)\) and \(B^{A}\) is a nonexpansive mapping.

We now apply Theorem 3.1, together with Lemma 4.3 ([14]), to find a solution of the variational inclusion problem.

Lemma 4.4

Let H be a real Hilbert space and let \(A_{G} : H \rightarrow H\) be an \(\alpha _{G}\)-inverse strongly monotone mapping. Let \(M_{A}, M_{B} : H \rightarrow 2^{H}\) be multi-valued maximal monotone mappings with \(VI(H,A_{G},M_{A}) \cap VI(H,A_{G},M_{B}) \neq \emptyset \). Define the mapping \(G: H \rightarrow H\) as in Lemma 2.11 for all \(x \in H\), \(b \in (0,1)\) and \(\lambda _{A},\lambda _{B} \in (0,2\alpha _{G})\). Then \(F(G) = VI(H,A_{G},M_{A}) \cap VI(H,A_{G},M_{B})\).

Proof

From Lemma 2.11, we find that G is nonexpansive and that \(J_{M_{A},\lambda _{A}}(I - \lambda _{A} A_{G})\) and \(J_{M_{B},\lambda _{B}}(I - \lambda _{B} A_{G})\) are nonexpansive. Since

$$\begin{aligned} G(x) = J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}) \bigl(bx+(1-b)J_{M_{B}, \lambda _{B}}(I-\lambda _{B}A_{G})x \bigr) \end{aligned}$$

and Lemma 4.3, we have

$$\begin{aligned} F(G) = F \bigl(J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}) \bigr) \cap F \bigl(J_{M_{B}, \lambda _{B}}(I-\lambda _{B}A_{G}) \bigr). \end{aligned}$$

By Lemma 2.10, we have

$$\begin{aligned} F(G) = VI(H,A_{G},M_{A}) \cap VI(H,A_{G},M_{B}). \end{aligned}$$

 □

Theorem 4.5

Let H be a real Hilbert space. For \(i = 1,2,\ldots,N\), let \(A_{i} : H \rightarrow H\) be \(\alpha _{i}\)-inverse strongly monotone mappings and let \(A_{G} : H \rightarrow H \) be an \(\alpha _{G}\)-inverse strongly monotone mapping. Define the mapping \(G:H \rightarrow H\) by \(G(x) = J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G})(bx+(1-b)J_{M_{B}, \lambda _{B}}(I-\lambda _{B}A_{G})x)\) for all \(x \in H\), \(b \in (0,1)\) and \(\lambda _{A},\lambda _{B} \in (0,2\alpha _{G})\). Assume that \(\Gamma = \bigcap_{i=1}^{N} VI(C,A_{i}) \cap VI(H,A_{G},M_{A}) \cap VI(H,A_{G},M_{B}) \neq \emptyset \). Let the sequences \(\{ y_{n} \} \) and \(\{ x_{n} \} \) be generated by \(x_{1} , u \in H\) and

$$\begin{aligned} \textstyle\begin{cases} y_{n} = P_{C}(I-\lambda \sum_{i=1}^{N} a_{i}A_{i})x_{n}, \\ Q_{n} = \{ z \in H : \langle (I-\lambda \sum_{i=1}^{N} a_{i}A_{i})x_{n}-y_{n},y_{n}-z \rangle \geq 0 \} , \\ x_{n+1} = \alpha _{n}u + \beta _{n} P_{Q_{n}}(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n}) + \gamma _{n} Gx_{n}, \end{cases}\displaystyle \end{aligned}$$
(40)

where \(\sum_{i=1}^{N} a_{i} = 1, 0 < a_{i} < 1 , \{ \alpha _{n} \} , \{ \beta _{n} \} , \{ \gamma _{n} \} \subset [0, 1]\) with \(\alpha _{n}+\beta _{n}+\gamma _{n} = 1\), \(\lambda \in (0,\eta )\) with \(\eta = \min_{i=1,2,\ldots,N} \{ \alpha _{i} \} \).

Suppose the following conditions hold:

(i) \(\sum_{n=0}^{\infty }\alpha _{n} = \infty \), \(\lim_{n \rightarrow \infty } \alpha _{n} = 0\).

(ii) \(0< c<\beta _{n}\), \(\gamma _{n} \leq d <1\).

Then \(\{ x_{n} \} \) converges strongly to \(x^{*} \in \Gamma \) where \(x^{*} = P_{\Gamma }u\).

Proof

From Lemma 4.4 and Theorem 3.1, we obtain the desired conclusion. □

Remark 4.6

If \(VI(H,A_{G},M_{A}) \cap VI(H,A_{G},M_{B}) \neq \emptyset \), then \(VI(H,A_{G},M_{A}) \cap VI(H,A_{G},M_{B}) = \Omega \).

5 Example and numerical results

In this section, we give an example supporting Theorem 3.1.

Example 5.1

Let \(H = \mathcal{R}^{2}\) be the two-dimensional space of real numbers with the inner product \(\langle \cdot ,\cdot \rangle : \mathcal{R}^{2} \times \mathcal{R}^{2} \rightarrow \mathcal{R}\) defined by \(\langle x,y \rangle = x_{1}y_{1} + x_{2}y_{2}\) and the usual norm \(\Vert \cdot \Vert : \mathcal{R}^{2} \rightarrow \mathcal{R}\) given by \(\Vert x \Vert = \sqrt{x_{1}^{2} + x_{2}^{2}}\) for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). Let \(C_{1} = \{ (x_{1},x_{2}) \in H : -2x_{1} + x_{2} \leq 1 \} \) and \(C_{2} = \{ (x_{1},x_{2}) \in H : 4x_{1} - 2x_{2} \leq 3 \} \). Define the mapping \(A_{1} : C_{1} \rightarrow \mathcal{R}^{2}\) by \(A_{1}(x_{1},x_{2}) = (\frac{3x_{1}}{2},\frac{3x_{2}}{2})\) and the mapping \(A_{2} : C_{2} \rightarrow \mathcal{R}^{2}\) by \(A_{2}(x_{1},x_{2}) =(2x_{1},2x_{2})\). Let the mapping \(A_{G} : \mathcal{R}^{2} \rightarrow \mathcal{R}^{2}\) be defined by \(A_{G}(x_{1},x_{2}) = (x_{1}+1,x_{2}+1)\). Let \(C = C_{1} \cap C_{2}\). Since the boundary lines of \(C_{1}\) and \(C_{2}\) are parallel, C is a slab with common normal \((-2,1)\), and we have

$$\begin{aligned} P_{C}(x_{1},x_{2}) = \textstyle\begin{cases} (x_{1} + \frac{2(s-1)}{5} , x_{2} - \frac{s-1}{5}) & \text{if } s > 1, \\ (x_{1} , x_{2}) & \text{if } -\frac{3}{2} \leq s \leq 1, \\ (x_{1} + \frac{2(s+\frac{3}{2})}{5} , x_{2} - \frac{s+\frac{3}{2}}{5}) & \text{if } s < -\frac{3}{2}, \end{cases}\displaystyle \end{aligned}$$

where \(s = -2x_{1} + x_{2}\).

Let \(x_{1},u \in \mathcal{R}^{2}\), \(\{x_{n}\}_{n=0}^{\infty }\) and \(\{y_{n}\}_{n=0}^{\infty }\) be generated by

$$\begin{aligned} \textstyle\begin{cases} y_{n} = P_{C}(I-\lambda \sum_{i=1}^{2} a_{i}A_{i})x_{n}, \\ Q_{n} = \{z \in H : \langle (I-\lambda \sum_{i=1}^{2} a_{i}A_{i})x_{n}-y_{n},y_{n}-z \rangle \geq 0\}, \\ x_{n+1} = \alpha _{n}u + \beta _{n} P_{Q_{n}}(x_{n} - \lambda \sum_{i=1}^{2} a_{i}A_{i}y_{n}) + \gamma _{n} Gx_{n}, \end{cases}\displaystyle \end{aligned}$$
(41)

where \(\alpha _{n} = \frac{1}{12n}\), \(\beta _{n} = \frac{5n-2}{12n}\), \(\gamma _{n} = \frac{7n+1}{12n}\), so that \(\{ \alpha _{n} \} , \{ \beta _{n} \} , \{ \gamma _{n} \} \subset [0, 1]\), and \(a = 0.5 \in (0,1)\). Show that \(\{ x_{n} \} \) and \(\{ y_{n} \} \) converge strongly to \((0,0)\).

Solution. Since \(A_{1},A_{2}\) and \(A_{G}\) are \(\frac{2}{3}\)-, \(\frac{1}{2}\)- and 1-inverse strongly monotone mappings, respectively, we have \(\eta = \min \{ \frac{2}{3},\frac{1}{2} \} = \frac{1}{2}\). Choosing \(\lambda _{A} = \frac{1}{2} ,\lambda _{B} =1 \in (0,2\alpha _{G}) \) and \(b = \frac{1}{4}\), we obtain \(G(x_{1},x_{2}) = (\frac{x_{1}}{16} , \frac{x_{2}}{16}) \). Choose \(\lambda = \frac{1}{4} \in (0,\eta )\). It is easy to see that the sequences \(\{ \alpha _{n} \} , \{ \beta _{n} \} \) and \(\{ \gamma _{n} \} \) satisfy all conditions in Theorem 3.1 and \((0,0) \in VI(C,A_{1}) \cap VI(C,A_{2}) \cap F(G)\). From Theorem 3.1, we can conclude that the sequences \(\{ x_{n} \} \) and \(\{ y_{n} \} \) converge strongly to \((0,0)\).
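
For instance, the constant for \(A_{1}\) can be verified directly: since \(A_{1} = \frac{3}{2}I\),

$$\begin{aligned} \langle A_{1}x-A_{1}y , x-y \rangle = \frac{3}{2} \Vert x-y \Vert ^{2} = \frac{2}{3} \biggl\Vert \frac{3}{2}(x-y) \biggr\Vert ^{2} = \frac{2}{3} \Vert A_{1}x-A_{1}y \Vert ^{2}, \end{aligned}$$

so \(A_{1}\) is \(\frac{2}{3}\)-inverse strongly monotone; the constants for \(A_{2}\) and \(A_{G}\) follow in the same way.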

Example 5.2

Let \(H = L_{2}([-1,1])\) with inner product \(\langle f,g \rangle = \int _{-1}^{1} f(t)g(t)\,dt\) and the associated norm given by \(\Vert f \Vert := \sqrt{\int _{-1}^{1} \vert f(t) \vert ^{2}\,dt}\) for all \(f,g \in L_{2}([-1,1])\). Take \(C = \{ x \in H : \Vert x \Vert \leq 2 \} \). Define the mapping \(A_{1} : L_{2}([-1,1]) \rightarrow L_{2}([-1,1]) \) by \(A_{1}(h(t)) = h(t)-2t\) for all \(t \in [-1,1]\). Define the mapping \(A_{2} : L_{2}([-1,1]) \rightarrow L_{2}([-1,1])\) by \(A_{2}(h(t)) =\frac{3}{2}h(t) - 3t\) for all \(t \in [-1,1]\). Let the mapping \(A_{G} : L_{2}([-1,1]) \rightarrow L_{2}([-1,1])\) be defined by \(A_{G}(h(t)) = h(t) - 5t\) for all \(t \in [-1,1]\). We have

$$ P_{C} \bigl(f \bigr) = \textstyle\begin{cases} f; & \text{if }\Vert f \Vert \leq 2, \\ \frac{2f}{ \Vert f \Vert } ;& \text{if }\Vert f \Vert > 2. \end{cases} $$

Let \(x_{1},u \in L_{2}([-1,1])\), and let \(\{ x_{n} \} _{n=0}^{\infty }\) and \(\{ y_{n} \} _{n=0}^{\infty }\) be generated by (21) with \(N=2\), where \(\alpha _{n} = \frac{1}{12n}\), \(\beta _{n} = \frac{5n-2}{12n}\), \(\gamma _{n} = \frac{7n+1}{12n}\), so that \(\{ \alpha _{n} \} , \{ \beta _{n} \} , \{ \gamma _{n} \} \subset [0, 1]\), and \(a = 0.4 \in (0,1)\). Show that \(\{ x_{n} \} \) and \(\{ y_{n} \} \) converge strongly to 2t.

Solution. Since \(A_{1},A_{2}\) and \(A_{G}\) are \(\frac{1}{2}\)-, \(\frac{1}{3}\)- and 1-inverse strongly monotone mappings, respectively, we have \(\eta = \min \{ \frac{1}{2},\frac{1}{3} \} = \frac{1}{3}\). Choosing \(\lambda _{A} = \frac{1}{2} ,\lambda _{B} =1 \in (0,2\alpha _{G}) \) and \(b = \frac{1}{4}\), we obtain \(G(h(t)) = \frac{h(t)}{16} \). Choose \(\lambda = \frac{1}{4} \in (0,\eta )\). It is easy to see that the sequences \(\{ \alpha _{n} \} , \{ \beta _{n} \} \) and \(\{ \gamma _{n} \} \) satisfy all conditions in Theorem 3.1 and \(2t \in VI(C,A_{1}) \cap VI(C,A_{2}) \cap F(G)\). From Theorem 3.1, we can conclude that the sequences \(\{ x_{n} \} \) and \(\{ y_{n} \} \) converge strongly to 2t.
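
Numerically, the only nontrivial ingredient of this example is the projection onto the ball C, which the displayed formula makes explicit. The following Python sketch is our own illustration, assuming functions are represented by their values on a uniform grid and the \(L_{2}\) norm is approximated by the trapezoid rule; both discretization choices are ours, not the paper's.

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 2001)     # uniform grid on [-1, 1]

def l2_norm(f):
    # ||f|| = sqrt(int_{-1}^{1} f(t)^2 dt), approximated by the trapezoid rule
    w = f * f
    dt = t[1] - t[0]
    return np.sqrt(dt * (w.sum() - 0.5 * (w[0] + w[-1])))

def P_C(f):
    # Projection onto C = {x : ||x|| <= 2}, as in the displayed formula
    nf = l2_norm(f)
    return f if nf <= 2.0 else 2.0 * f / nf

f = 5.0 * t                          # ||5t|| = 5*sqrt(2/3) > 2, so f is rescaled
print(l2_norm(P_C(f)))               # prints approximately 2
```

The operators \(A_{1}\), \(A_{2}\), \(A_{G}\) and the mapping G act pointwise on such grid representations, so the iteration (21) carries over unchanged.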

Example 5.3

Let \(f,g:H \rightarrow \mathcal{R}\) be convex, differentiable functions. Consider the following convex optimization problems:

$$\begin{aligned} \min_{x \in C} f(x) \end{aligned}$$
(42)

and

$$\begin{aligned} \min_{x \in C} g(x) \end{aligned}$$
(43)

It is well known that \(x^{*} \in C\) solves (42) and (43) if and only if \(x^{*} \in C\) satisfies the following variational inequalities:

$$\begin{aligned} \bigl\langle \nabla f \bigl(x^{*} \bigr), x - x^{*} \bigr\rangle \geq 0,\quad \forall x \in C, \end{aligned}$$
(44)

and

$$\begin{aligned} \bigl\langle \nabla g \bigl(x^{*} \bigr), x - x^{*} \bigr\rangle \geq 0, \quad\forall x \in C, \end{aligned}$$
(45)

that is, \(x^{*} \in VI(C,\nabla f) \cap VI(C,\nabla g)\). Let \(H = \mathcal{R}\). Take \(C = [1,10]\). Define the mapping \(f : [1,10] \rightarrow \mathcal{R}\) by \(f(x)=\frac{(x-1)^{2}}{3}+1\). Define the mapping \(g : [1,10] \rightarrow \mathcal{R}\) by \(g(x)=\frac{x^{2}}{2}-\ln {x}-\frac{1}{2}\).

Let \(x_{1},u \in \mathcal{R}\). From (21), we find that \(\{ x_{n} \} _{n=0}^{\infty }\) and \(\{ y_{n} \} _{n=0}^{\infty }\) are generated by

$$\begin{aligned} \textstyle\begin{cases} y_{n} = P_{C}(I-\lambda (a_{1} \nabla f + a_{2} \nabla g ))x_{n}, \\ Q_{n} = \{z \in H : \langle (I-\lambda (a_{1} \nabla f + a_{2} \nabla g ))x_{n}-y_{n},y_{n}-z\rangle \geq 0\}, \\ x_{n+1} = \alpha _{n}u + \beta _{n} P_{Q_{n}}(x_{n} - \lambda (a_{1} \nabla f + a_{2} \nabla g )y_{n}) + \gamma _{n} Gx_{n}, \end{cases}\displaystyle \end{aligned}$$
(46)

where \(\alpha _{n} = \frac{1}{12n}\), \(\beta _{n} = \frac{5n-2}{12n}\), \(\gamma _{n} = \frac{7n+1}{12n}\), so that \(\{ \alpha _{n} \} , \{ \beta _{n} \} , \{ \gamma _{n} \} \subset [0, 1]\), and \(a = 0.5 \in (0,1)\). Show that \(\{ x_{n} \} \) and \(\{ y_{n} \} \) converge strongly to 1.

Solution. The functions f and g are convex and differentiable with \(\nabla f(x)= \frac{2(x-1)}{3}\) and \(\nabla g(x) =x-\frac{1}{x}\). It follows that ∇f and ∇g are \(\frac{2}{3}\)- and \(\frac{1}{2}\)-inverse strongly monotone mappings, respectively, so that \(\eta = \frac{1}{2}\). Choosing \(\lambda _{A} = \frac{1}{2} ,\lambda _{B} =1 \in (0,2\alpha _{G}) \) and \(b = \frac{1}{4}\), we obtain \(G(x) = \frac{x}{12}+\frac{11}{12} \). Choose \(\lambda = \frac{1}{4} \in (0,\eta )\). It is easy to see that the sequences \(\{ \alpha _{n} \} , \{ \beta _{n} \} \) and \(\{ \gamma _{n} \} \) satisfy all conditions in Theorem 3.1 and \(1 \in VI(C,\nabla f) \cap VI(C,\nabla g) \cap F(G)\). From Theorem 3.1, we can conclude that the sequences \(\{ x_{n} \} \) and \(\{ y_{n} \} \) converge strongly to 1.
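
Since this example is one-dimensional, the whole of scheme (46) fits in a few lines. The following Python sketch is our own illustration: it reads \(a = 0.5\) as the equal weights \(a_{1} = a_{2} = 0.5\), takes \({\mathbf{u}} = 3\) as in Fig. 3, and the starting point \(x_{1} = 5\) is our choice.

```python
def grad_f(x): return 2.0 * (x - 1.0) / 3.0
def grad_g(x): return x - 1.0 / x
def P_C(x):    return min(max(x, 1.0), 10.0)   # projection onto C = [1, 10]
def G(x):      return x / 12.0 + 11.0 / 12.0
def A(x):      return 0.5 * grad_f(x) + 0.5 * grad_g(x)

lam = 0.25
x, u = 5.0, 3.0                     # x_1 = 5 (our choice), u = 3 as in Fig. 3
for n in range(1, 10001):
    alpha, beta, gamma = 1/(12*n), (5*n - 2)/(12*n), (7*n + 1)/(12*n)
    y = P_C(x - lam * A(x))
    v = (x - lam * A(x)) - y        # Q_n = {z : v*(y - z) >= 0}
    w = x - lam * A(y)
    z = w if v * (w - y) <= 0 else y   # one-dimensional half-space projection
    x = alpha * u + beta * z + gamma * G(x)
print(x)                            # approaches 1
```

Running this loop, the iterates decrease toward \(x^{*} = 1\), consistent with the convergence asserted by Theorem 3.1.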

Remark 5.4

According to Tables 1–3 and Figs. 1–3, our Algorithm (21) converges to an element of the set \(\bigcap_{i=1}^{N} VI(C,A_{i}) \cap F(G)\) at a faster rate than Algorithm (3). Therefore, our algorithm is more efficient.

Figure 1

Comparison between Algorithms (21) and (3) for Example 1 with \({\mathbf{u}} = (5,5)\) and \(N = 15 \)

Figure 2

Comparison between Algorithms (21) and (3) for Example 2 with \({\mathbf{u}} = 3t\) and \(N = 15 \)

Figure 3

Comparison between Algorithms (21) and (3) for Example 3 with \({\mathbf{u}} = 3\) and \(N = 15 \)

Table 1 Detailed analysis of computational methods (21) and (3) for Example 1 with \({\mathbf{u}} = (5,5)\), \(N = 15 \), \(E(x_{1}^{n}) = \Vert x_{1}^{n +1} - x_{1}^{n} \Vert , n\in N_{0} \) and \(E(x_{2}^{n}) = \Vert x_{2}^{n +1} - x_{2}^{n} \Vert , n\in N_{0} \)
Table 2 Detailed analysis of computational methods (21) and (3) for Example 2 with \({\mathbf{u}} = 3t\), \(N = 15 \) and \(E(x_{n}) = \Vert x_{n +1} - x_{n} \Vert , n\in N_{0} \)
Table 3 Detailed analysis of computational methods (21) and (3) for Example 3 with \({\mathbf{u}} = 3\), \(N = 15 \) and \(E(x_{n}) = \Vert x_{n +1} - x_{n} \Vert , n\in N_{0}\)

6 Conclusion

In this paper, we have proposed a new problem, called a generalized system of modified variational inclusion problems (GSMVIP). This problem can be reduced to the classical variational inclusion problem and the classical variational inequality problem. Moreover, we have studied the half-space

$$ Q_{n} = \Biggl\{ z \in H : \Biggl\langle \Biggl(I-\lambda \sum _{i=1}^{N} a_{i}A_{i} \Biggr)x_{n}-y_{n},y_{n}-z \Biggr\rangle \geq 0 \Biggr\} , $$

which can be reduced to \(T_{n}\) in Algorithm (3). In order to solve the GSMVIP and a finite family of variational inequality problems, we have presented a new subgradient extragradient algorithm which uses \(Q_{n}\) and shown that it converges to a common solution of the GSMVIP and the finite family of variational inequality problems under suitable conditions. Therefore, our algorithm improves the algorithm proposed by Censor et al. [12]. The efficiency of the proposed algorithm has also been illustrated by several numerical experiments.

Availability of data and materials

Not applicable.

References

  1. Aubin, J.P., Ekeland, I.: Applied Nonlinear Analysis. Wiley, New York (1984)
  2. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Student 63, 123–145 (1994)
  3. Dafermos, S.C.: Traffic equilibrium and variational inequalities. Transp. Sci. 14, 42–54 (1980)
  4. Dafermos, S.C., Mckelvey, S.C.: Partitionable variational inequalities with applications to network and economic equilibrium. J. Optim. Theory Appl. 73, 243–268 (1992)
  5. Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekon. Mat. Met. 12, 747–756 (1976)
  6. Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
  7. Hieu, D.V., Thong, D.V.: New extragradient-like algorithms for strongly pseudomonotone variational inequalities. J. Glob. Optim. 70, 385–399 (2018)
  8. Hieu, D.V., Thong, D.V.: A new projection method for a class of variational inequalities. Appl. Anal. 98(13), 2423–2439 (2019)
  9. Malitsky, Y.V., Semenov, V.V.: A hybrid method without extrapolation step for solving variational inequality problems. J. Glob. Optim. 61, 193–202 (2015)
  10. Malitsky, Y.V.: Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim. 25, 502–520 (2015)
  11. Yao, Y., Marino, G., Muglia, L.: A modified Korpelevich’s method convergent to the minimum-norm solution of a variational inequality. Optimization 63, 559–569 (2014)
  12. Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
  13. Solodov, M.V., Svaiter, B.F.: A new projection method for variational inequality problems. SIAM J. Control Optim. 37, 765–776 (1999)
  14. Kangtunyakarn, A.: An iterative algorithm to approximate a common element of the set of common fixed points for a finite family of strict pseudo-contractions and the set of solutions for a modified system of variational inequalities. Fixed Point Theory Appl. 2013, 143 (2013)
  15. Brezis, H.: Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. Math. Stud. 5, 759–775 (1973)
  16. Zhang, S.S., Lee, J.H.M., Chan, C.K.: Algorithms of common solutions for quasi variational inclusion and fixed point problems. Appl. Math. Mech. 29, 571–578 (2008)
  17. Xu, H.K.: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659–678 (2003)
  18. Maingé, P.E.: A hybrid extragradient viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47, 1499–1515 (2008)
  19. Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
  20. Xu, H.K.: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279–291 (2004)
  21. Maingé, P.E.: Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 325, 469–479 (2007)


Acknowledgements

This work is supported by King Mongkut’s Institute of Technology Ladkrabang.

Funding

Not applicable.

Author information


Contributions

The two authors contributed equally and significantly in writing this article. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Atid Kangtunyakarn.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Kheawborisut, A., Kangtunyakarn, A. Modified subgradient extragradient method for system of variational inclusion problem and finite family of variational inequalities problem in real Hilbert space. J Inequal Appl 2021, 53 (2021). https://doi.org/10.1186/s13660-021-02583-1



  • DOI: https://doi.org/10.1186/s13660-021-02583-1
