
An intermixed method for solving the combination of mixed variational inequality problems and fixed-point problems

Abstract

In this paper, we introduce an intermixed algorithm with a viscosity technique for finding a common solution of the combination of mixed variational inequality problems and the fixed-point problem of a nonexpansive mapping in a real Hilbert space. Moreover, in the second section of this paper we develop the mathematical tools related to the combination of mixed variational inequality problems. Utilizing these tools, a strong convergence theorem is established for the proposed algorithm. Furthermore, utilizing our main result, we derive additional conclusions concerning the split-feasibility problem and the constrained convex-minimization problem. Finally, we provide numerical experiments to illustrate the convergence behavior of the proposed algorithm.

1 Introduction

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\rightarrow C \) be a nonlinear mapping. A point \(x\in C\) is called a fixed point of T if \(Tx=x\). The set of fixed points of T is the set \(\mathrm{Fix}(T):= \{x\in C: Tx=x\}\). A mapping T of C into itself is called nonexpansive if

$$\begin{aligned} \Vert Tx-Ty \Vert \leq \Vert x-y \Vert ,\quad \forall x,y \in C. \end{aligned}$$

Note that the mapping \(I-T\) is demiclosed at zero iff \(x \in \mathrm{Fix}(T)\) whenever \(x_{n} \rightharpoonup x\) and \(x_{n}-Tx_{n} \to 0\) (see [1]). It is well known that if \(T:H \to H\) is nonexpansive, then \(I-T\) is demiclosed at zero. A mapping \(g:C \rightarrow C\) is said to be a contraction if there exists a constant \(\alpha \in (0,1)\) such that

$$\begin{aligned} \bigl\Vert g(x)-g(y) \bigr\Vert \leq \alpha \Vert x-y \Vert , \quad \forall x,y\in C. \end{aligned}$$

Let \(A: C \to H\) be a mapping and \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function on H. Now, we consider the mixed variational inequality problem: Find a point \(x^{*}\in C\) such that

$$\begin{aligned} \bigl\langle y-x^{*}, Ax^{*} \bigr\rangle +f(y)-f \bigl(x^{*} \bigr)\geq 0, \end{aligned}$$
(1.1)

for all \(y\in C\). The set of solutions of problem (1.1) is denoted by \(VI(C,A,f)\). The problem (1.1) was originally considered by Lescarret [2] and Browder [3] in relation to its various applications in mathematical physics. General equilibrium and oligopolistic equilibrium problems, which can be stated as mixed variational inequality problems, were studied by Konnov and Volotskaya [4]. Fixed-point problems and resolvent equations are well known to be equivalent to mixed variational inequality problems. In 1997, Noor [5] proposed and analyzed a new iterative method for solving mixed variational inequality problems using the resolvent-equations technique as follows:

$$\begin{aligned} \textstyle\begin{cases} z_{n}=x_{n}-\rho A x_{n}, \\ w_{n}=z_{n}-J_{\rho f}z_{n}+\rho A J_{\rho f}z_{n}, \\ x_{n+1}=x_{n}-\gamma w_{n},\quad \forall n\geq 1, \end{cases}\displaystyle \end{aligned}$$
(1.2)

where A is a monotone and Lipschitz continuous operator, \(\rho >0\) is a constant, \(J_{\rho f}=(I+\rho \partial f )^{-1}\) is the resolvent operator and I is the identity operator. In 2008, Noor et al. [6] introduced an iterative algorithm to solve the mixed variational inequalities as follows:

$$\begin{aligned} x_{n+1} = (1-\alpha _{n})x_{n}+ \alpha _{n}J_{\rho f}[x_{n}- \rho A x_{n}], \quad \forall n\geq 1, \end{aligned}$$
(1.3)

where \(0\leq \alpha _{n} \leq 1\) and A is strongly monotone and Lipschitz continuous. In recent years, several researchers have investigated problem (1.1) in various directions; see, for example, [5, 7–16] and the references therein.
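To make schemes of this type concrete, the following Python sketch runs iteration (1.3) on \(\mathbb{R}^{n}\) with \(f(x)=\lambda \Vert x \Vert _{1}\), whose resolvent \(J_{\rho f}\) is the classical soft-thresholding operator, and an affine strongly monotone operator \(Ax=Mx+q\). All data (M, q, λ, ρ, and the constant relaxation parameter) are illustrative choices of ours, not taken from [5] or [6].

```python
import numpy as np

def soft_threshold(z, t):
    # resolvent J_{t f} of f = ||.||_1, i.e., the proximal map of t*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
n = 5
P = rng.standard_normal((n, n))
M = P @ P.T + n * np.eye(n)          # positive definite, so A is strongly monotone
q = rng.standard_normal(n)
A = lambda x: M @ x + q              # Lipschitz continuous with constant ||M||

lam = 0.5
rho = 1.0 / np.linalg.norm(M, 2)     # step size rho > 0
alpha = 0.5                          # constant relaxation parameter alpha_n

x = np.zeros(n)
for _ in range(2000):
    # iteration (1.3): x_{n+1} = (1 - alpha_n) x_n + alpha_n J_{rho f}[x_n - rho A x_n]
    x = (1 - alpha) * x + alpha * soft_threshold(x - rho * A(x), rho * lam)

# residual of the fixed-point characterization x = J_{rho f}(x - rho A x)
print(np.linalg.norm(x - soft_threshold(x - rho * A(x), rho * lam)))
```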

Note that if C is a closed convex subset of H and \(f(x)=\delta _{C}(x)\), for all \(x\in C\), where \(\delta _{C}\) is the indicator function of C defined by \(\delta _{C}(x) = 0\) if \(x \in C\), and \(\delta _{C}(x)=\infty \) otherwise, then the mixed variational inequality problem (1.1) reduces to the following classical variational inequality problem: find a point \(x^{*}\in C\) such that

$$\begin{aligned} \bigl\langle y-x^{*}, Ax^{*} \bigr\rangle \geq 0, \quad\forall y\in C. \end{aligned}$$
(1.4)

The set of solutions of problem (1.4) is denoted by \(VI(C,A)\). The variational inequality problem was introduced and studied by Stampacchia in 1966 [17]. The solution of the variational inequality problem is well known to be equivalent to the following fixed-point equation for finding a point \(x^{*}\in C\) such that

$$\begin{aligned} x^{*}=P_{C}(I-\gamma A)x^{*}, \end{aligned}$$

where \(\gamma > 0\) is an arbitrary constant and \(P_{C}\) is the metric projection from H onto C (see [18]). This problem is useful in economics, engineering, and mathematics. Many nonlinear analysis problems, such as optimization, optimal control problems, saddle-point problems, and mathematical programming, are included as special cases; see, for example, [19–22]. Furthermore, various methods have been devised for solving problem (1.4) and fixed-point problems; see, for example, [23–33] and the references therein.
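The fixed-point characterization above immediately suggests the iteration \(x_{n+1}=P_{C}(I-\gamma A)x_{n}\). The following sketch demonstrates it numerically on an illustrative instance (a box constraint and an affine monotone operator, neither taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
P = rng.standard_normal((n, n))
M = P @ P.T + np.eye(n)                 # positive definite, so A is monotone
q = rng.standard_normal(n)
A = lambda x: M @ x + q
P_C = lambda x: np.clip(x, -1.0, 1.0)   # metric projection onto C = [-1, 1]^n

gamma = 1.0 / np.linalg.norm(M, 2)      # small constant step size
x = np.zeros(n)
for _ in range(1000):
    x = P_C(x - gamma * A(x))           # x_{n+1} = P_C(I - gamma A)x_n

# the limit satisfies the fixed-point equation, hence solves VI (1.4)
print(np.linalg.norm(x - P_C(x - gamma * A(x))))
```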

The intermixed algorithm introduced by Yao et al. [34] is currently one of the most effective methods for solving the fixed-point problem of a nonlinear mapping. Its characteristic feature is that the two sequences are coupled: the update of \(\{x_{n}\}\) involves \(\{y_{n}\}\), and the update of \(\{y_{n}\}\) involves \(\{x_{n}\}\). Yao et al. studied the intermixed algorithm for two strict pseudocontractions S and T as follows: for arbitrarily given \(x_{1}\in C\) and \(y_{1}\in C\), let the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) be generated iteratively by

$$\begin{aligned} \textstyle\begin{cases} x_{n+1}=(1-\beta _{n})x_{n}+\beta _{n}P_{C}[\alpha _{n}f(y_{n})+(1-k- \alpha _{n})x_{n}+kTx_{n}],\quad \forall n\geq 1, \\ y_{n+1}=(1-\beta _{n})y_{n}+\beta _{n}P_{C}[\alpha _{n}g(x_{n})+(1-k- \alpha _{n})y_{n}+kSy_{n}], \quad \forall n\geq 1, \end{cases}\displaystyle \end{aligned}$$
(1.5)

where \(S,T:C\rightarrow C\) are λ-strict pseudocontractions, \(f:C\rightarrow H\) is a \(\rho _{1}\)-contraction, \(g:C\rightarrow H\) is a \(\rho _{2}\)-contraction, \(k\in (0,1-\lambda )\) is a constant, and \(\{\alpha _{n}\}\), \(\{\beta _{n}\}\) are two real-number sequences in \((0,1)\). They also proved that the proposed algorithms converge strongly and independently to the fixed points of the two strict pseudocontractions.

In 2012, Kangtunyakarn [35] modified the set of variational inequality problems as follows:

$$\begin{aligned} &VI \bigl(C,aA+(1-a)B \bigr)= \bigl\{ x\in C: \bigl\langle y-x, \bigl(aA+(1-a)B \bigr)x \bigr\rangle \geq 0, \forall y \in C \bigr\} , \\ &\quad \forall a \in (0,1), \end{aligned}$$
(1.6)

where A and B are mappings of C into H. If \(A=B\), then problem (1.6) reduces to the classical variational inequality problem. Moreover, he gave a new iterative method for solving the proposed problem in Hilbert spaces.

In this article, motivated and inspired by Kangtunyakarn [35], we introduce the following modification of the mixed variational inequality problem: the combination of mixed variational inequality problems is to find \(x^{*}\in C\) such that

$$\begin{aligned} \bigl\langle y-x^{*}, \bigl(aA+(1-a)B \bigr)x^{*} \bigr\rangle +f(y)-f \bigl(x^{*} \bigr)\geq 0, \end{aligned}$$
(1.7)

for all \(y\in C\) and \(a\in (0,1)\), where \(A,B: C\to H\) are mappings. The set of all solutions to this problem is denoted by \(VI(C,aA+(1-a)B,f)\). In particular, if \(A=B\), then the problem (1.7) reduces to the mixed variational inequality problem (1.1).

Question. Can we design an intermixed algorithm for solving the combination of mixed variational inequality problems (1.7) above?

In this paper, we give a positive answer to this question. Motivated and inspired by the works in the literature and by the ongoing research in these directions, we introduce a new intermixed algorithm with a viscosity technique for finding a solution of the combination of mixed variational inequality problems and the fixed-point problem of a nonexpansive mapping in a real Hilbert space. Moreover, in the second section of this paper we develop the mathematical tools related to the combination of mixed variational inequality problems (1.7). Utilizing these tools, a strong convergence theorem is established for the proposed algorithm. Furthermore, utilizing our main result, we derive additional conclusions concerning the split-feasibility problem and the constrained convex-minimization problem. Finally, we provide numerical experiments to illustrate the convergence behavior of the proposed algorithm.

This paper is organized as follows. In Sect. 2, we first recall some basic definitions and lemmas. In Sect. 3, we prove and analyze the strong convergence of the proposed algorithm. In Sect. 4, we also consider the relaxation version of the proposed method. In Sect. 5, some numerical experiments are provided.

2 Preliminaries

Let C be a nonempty, closed, and convex subset of a Hilbert space H. The notation I stands for the identity operator on a Hilbert space. Let \(\{x_{n}\}\) be a sequence in H. Weak and strong convergence of \(\{x_{n}\}\) to \(x \in H\) are denoted by \(x_{n} \rightharpoonup x\) and \(x_{n} \rightarrow x\), respectively.

Definition 2.1

A mapping \(A:C \to H\) is called

(i) monotone if

    $$\begin{aligned} \langle Ax-Ay,x-y \rangle \geq 0\quad \text{for all } x,y \in C; \end{aligned}$$
(ii) L-Lipschitz continuous if there exists \(L > 0\) such that

    $$\begin{aligned} \Vert Ax-Ay \Vert \leq L \Vert x-y \Vert \quad\text{for all } x,y \in C; \end{aligned}$$
(iii) α-inverse strongly monotone if there exists \(\alpha > 0\) such that

    $$\begin{aligned} \langle Ax-Ay, x-y \rangle \geq \alpha \Vert Ax-Ay \Vert ^{2}\quad \text{for all } x,y \in C; \end{aligned}$$
(iv) firmly nonexpansive if

    $$\begin{aligned} \Vert Ax-Ay \Vert ^{2}\leq \langle x-y, Ax-Ay \rangle \quad \text{for all } x,y \in C. \end{aligned}$$

Throughout this paper, the domain of any function \(f: H \to \mathbb{R}\cup \{+\infty \}\), denoted by \(\operatorname{dom} f\), is defined as \(\operatorname{dom} f:= \{x \in H: f(x) < + \infty \}\). The domain of continuity of f is \(\operatorname{cont} f = \{ x\in H: f(x)\in \mathbb{R} \text{ and } f \text{ is continuous at } x \}\).

Definition 2.2

([36])

Let \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a function. Then,

(i) f is proper if \(\{ x\in H: f(x) < \infty \}\neq \emptyset \);

(ii) f is lower semicontinuous if \(\{ x\in H: f(x) \leq a\}\) is closed for each \(a \in \mathbb{R}\);

(iii) f is convex if \(f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)\) for every \(x,y\in H\) and \(t\in [0,1]\);

(iv) f is Gâteaux differentiable at \(x \in H\) if there is \(\nabla f(x)\in H\) such that

    $$\begin{aligned} \lim_{t \to 0}\frac {f(x+ty)-f(x)}{t}= \bigl\langle y,\nabla f(x) \bigr\rangle \end{aligned}$$

    for each \(y \in H\);

(v) f is Fréchet differentiable at \(x \in H\) if there is \(\nabla f(x)\) such that

    $$\begin{aligned} \lim_{y \to 0} \frac {f(x+y)-f(x)-\langle \nabla f(x), y \rangle}{ \Vert y \Vert }= 0. \end{aligned}$$

Let \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function on H. The subset

$$\begin{aligned} \partial f(x) = \bigl\{ z \in H: \langle z, y - x \rangle + f(x) \leq f(y), \forall y \in H \bigr\} \end{aligned}$$

is called the subdifferential of f at \(x\in H\). The function f is said to be subdifferentiable at x if \(\partial f(x) \neq \emptyset \). An element of \(\partial f(x)\) is called a subgradient of f at x. It is well known that the subdifferential ∂f is a maximal monotone operator.

Proposition 2.1

([37], Proposition 17.31)

Let \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a proper and convex function, and let \(x \in \operatorname{dom} f\). Then, the following hold:

(i) Suppose that f is Gâteaux differentiable at x. Then \(\partial f(x) = \{ \nabla f(x) \}\).

(ii) Suppose that \(x \in \operatorname{cont} f\) and that \(\partial f(x)\) consists of a single element u. Then, f is Gâteaux differentiable at x and \(u = \nabla f(x)\).

Definition 2.3

([38])

For any maximal monotone operator A, the resolvent operator associated with A, for any \(\gamma >0\), is defined as

$$\begin{aligned} J_{\gamma A}(x)=(I+\gamma A )^{-1}(x),\quad \forall x\in H, \end{aligned}$$

where I is the identity operator.

It is well known that an operator A is maximal monotone if and only if its resolvent operator \(J_{\gamma A}\) is defined everywhere on H; moreover, \(J_{\gamma A}\) is single valued and nonexpansive. If f is a proper, convex, and lower-semicontinuous function, then its subdifferential ∂f is a maximal monotone operator. In this case, we can define the resolvent operator

$$\begin{aligned} J_{\gamma f}(x)=(I+\gamma \partial f)^{-1}(x),\quad \forall x \in H, \end{aligned}$$

associated with the subdifferential ∂f, where \(\gamma >0\) is a constant.
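For intuition, two standard resolvents can be written in closed form: for \(f=\lambda \Vert \cdot \Vert _{1}\) on \(\mathbb{R}^{n}\), \(J_{\gamma f}\) is the soft-thresholding operator, and for \(f=\delta _{C}\), \(J_{\gamma f}=P_{C}\) for every \(\gamma >0\). The sketch below (with illustrative data of our choosing) also checks numerically the firm nonexpansiveness of \(J_{\gamma f}\), a property proved in Lemma 2.6 below.

```python
import numpy as np

def resolvent_l1(x, gamma, lam=1.0):
    # f(x) = lam * ||x||_1  =>  J_{gamma f} is soft-thresholding at level gamma*lam
    return np.sign(x) * np.maximum(np.abs(x) - gamma * lam, 0.0)

def resolvent_box(x, lo=-1.0, hi=1.0):
    # f = delta_C with C = [lo, hi]^n  =>  J_{gamma f} = P_C for every gamma > 0
    return np.clip(x, lo, hi)

# numerical check of firm nonexpansiveness: ||Jx - Jy||^2 <= <Jx - Jy, x - y>
x = np.array([2.0, -0.3, 0.7])
y = np.array([-1.5, 0.2, 0.1])
Jx, Jy = resolvent_l1(x, 0.5), resolvent_l1(y, 0.5)
print(np.dot(Jx - Jy, Jx - Jy) <= np.dot(Jx - Jy, x - y))  # True
```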

Recall that the (nearest point) projection \(P_{C}\) from H onto C assigns to each \(x \in H\) the unique point \(P_{C}x \in C\) satisfying the property

$$\begin{aligned} \Vert x- P_{C}x \Vert = \min_{y \in C} \Vert x- y \Vert . \end{aligned}$$

Lemma 2.2

([39])

For a given \(z\in H\) and \(u\in C\),

$$\begin{aligned} u=P_{C}z\quad \Leftrightarrow \quad\langle u-z,v-u\rangle \geq 0, \quad \forall v\in C. \end{aligned}$$

Furthermore, \(P_{C}\) is a firmly nonexpansive mapping of H onto C.

Lemma 2.3

([40])

For a given \(x \in H\), let \(P_{C}: H \to C\) be the metric projection. Then,

  1. (a)

    \(z=P_{C}x\) if and only if \(\langle x-z,y-z \rangle \leq 0, \forall y \in C\);

  2. (b)

    \(z=P_{C}x\) if and only if \(\|x-z\|^{2}\leq \|x-y\|^{2}-\|y-z\|^{2}, \forall y \in C\);

  3. (c)

    \(\langle P_{C}x-P_{C}y,x-y \rangle \geq \|P_{C}x-P_{C}y\|^{2}, \forall x,y \in H\).

Lemma 2.4

([41])

Let \(\{s_{n}\}\) be a sequence of nonnegative real numbers satisfying

$$\begin{aligned} s_{n+1} \leq (1-\alpha _{n})s_{n}+\delta _{n},\quad \forall n\geq 1, \end{aligned}$$

where \(\{\alpha _{n}\}\) is a sequence in (0,1) and \(\{\delta _{n}\}\) is a sequence such that

(i) \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);

(ii) \(\limsup_{n\rightarrow \infty} \frac{\delta _{n}}{\alpha _{n}}\leq 0 \textit{ or } \sum_{n=1}^{ \infty}|\delta _{n}|<\infty \).

Then, \(\lim_{n\rightarrow \infty}s_{n}=0\).
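The following toy computation illustrates Lemma 2.4 with the illustrative choices \(\alpha _{n}=1/n\) (so \(\sum \alpha _{n}=\infty \)) and \(\delta _{n}=\alpha _{n}/n\) (so \(\delta _{n}/\alpha _{n}\to 0\)); these parameters are ours, chosen only to show the mechanism.

```python
# illustrative parameters: alpha_n = 1/n, delta_n = alpha_n / n
s = 1.0
for n in range(1, 200001):
    alpha = 1.0 / n
    s = (1 - alpha) * s + alpha / n
print(s)  # tends to 0, as Lemma 2.4 predicts
```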

Lemma 2.5

Let C be a nonempty closed convex subset of H and let \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function and let \(A,B: C\to H\) be α- and β-inverse strongly monotone operators with \(\varepsilon = \min \{ \alpha, \beta \}\) and \(VI(C,A,f)\cap VI(C,B,f) \neq \emptyset \). Then,

$$\begin{aligned} VI(C,A,f)\cap VI(C,B,f) = VI \bigl(C,aA+(1-a)B,f \bigr) \end{aligned}$$
(2.1)

for all \(a \in (0,1)\).

Proof

If \(x\in VI(C,A,f)\cap VI(C,B,f)\), then multiplying the defining inequalities for A and for B by a and \(1-a\), respectively, and adding them yields

$$\begin{aligned} VI(C,A,f)\cap VI(C,B,f) \subseteq VI \bigl(C,aA+(1-a)B,f \bigr). \end{aligned}$$
(2.2)

To prove the reverse inclusion, let \(x_{0}\in VI(C,aA+(1-a)B,f)\) and \(x^{*}\in VI(C,A,f)\cap VI(C,B,f)\). Then, we have

$$\begin{aligned} \bigl\langle y-x_{0}, \bigl(aA+(1-a)B \bigr)x_{0} \bigr\rangle +f(y)-f(x_{0})\geq 0,\quad \forall y\in C. \end{aligned}$$
(2.3)

By (2.2), we also have \(x^{*}\in VI(C,aA+(1-a)B,f)\), and hence

$$\begin{aligned} \bigl\langle y-x^{*}, \bigl(aA+(1-a)B \bigr)x^{*} \bigr\rangle +f(y)-f \bigl(x^{*} \bigr)\geq 0, \quad \forall y\in C. \end{aligned}$$
(2.4)

Taking \(y=x^{*}\) in (2.3) and \(y=x_{0}\) in (2.4), we have

$$\begin{aligned} \bigl\langle x^{*}-x_{0}, \bigl(aA+(1-a)B \bigr)x_{0} \bigr\rangle +f \bigl(x^{*} \bigr)-f(x_{0}) \geq 0 \end{aligned}$$
(2.5)

and

$$\begin{aligned} \bigl\langle x_{0}-x^{*}, \bigl(aA+(1-a)B \bigr)x^{*} \bigr\rangle +f(x_{0})-f \bigl(x^{*} \bigr)\geq 0. \end{aligned}$$
(2.6)

By combining (2.5), (2.6), and the definition of \(A,B\), we obtain

$$\begin{aligned} 0&\geq \bigl\langle x_{0}-x^{*}, a \bigl(Ax_{0}-Ax^{*} \bigr)+(1-a) \bigl(Bx_{0}-Bx^{*} \bigr) \bigr\rangle \\ &= a \bigl\langle x_{0}-x^{*},Ax_{0}-Ax^{*} \bigr\rangle +(1-a) \bigl\langle x_{0}-x^{*}, Bx_{0}-Bx^{*} \bigr\rangle \\ &\geq a\alpha \bigl\Vert Ax_{0}-Ax^{*} \bigr\Vert ^{2}+(1-a)\beta \bigl\Vert Bx_{0}-Bx^{*} \bigr\Vert ^{2}, \end{aligned}$$

which implies that

$$\begin{aligned} Ax_{0}=Ax^{*},\qquad Bx_{0}=Bx^{*}. \end{aligned}$$

Let \(y\in C\). From \(x^{*}\in VI(C,A,f)\) and \(Ax_{0}=Ax^{*}\), we have

$$\begin{aligned} \langle y -x_{0}, Ax_{0} \rangle +f(y)-f(x_{0})={}& \bigl\langle y -x^{*}, Ax^{*} \bigr\rangle + \bigl\langle x^{*} -x_{0}, Ax_{0} \bigr\rangle \\ &{}+f(y)-f \bigl(x^{*} \bigr)+f \bigl(x^{*} \bigr)-f(x_{0}) \\ \geq{}& \bigl\langle x^{*} -x_{0}, Ax_{0} \bigr\rangle +f \bigl(x^{*} \bigr)-f(x_{0}). \end{aligned}$$
(2.7)

From \(Bx_{0}=Bx^{*}\), (2.5), and \(x^{*}\in VI(C,B,f)\), we obtain

$$\begin{aligned} \bigl\langle x^{*} -x_{0}, aAx_{0} \bigr\rangle +af \bigl(x^{*} \bigr)-af(x_{0}) ={}& \bigl\langle x^{*} -x_{0}, aAx_{0}+(1-a)Bx_{0} \bigr\rangle \\ &{}- \bigl\langle x^{*} -x_{0}, (1-a)Bx_{0} \bigr\rangle + af \bigl(x^{*} \bigr)-af(x_{0}) \\ ={}& \bigl\langle x^{*} -x_{0}, aAx_{0}+(1-a)Bx_{0} \bigr\rangle + f \bigl(x^{*} \bigr)-f(x_{0}) \\ &{}-f \bigl(x^{*} \bigr)+f(x_{0})- \bigl\langle x^{*} -x_{0}, (1-a)Bx_{0} \bigr\rangle \\ &{}+ af \bigl(x^{*} \bigr)-af(x_{0}) \\ \geq{} & \bigl\langle x_{0}-x^{*}, (1-a)Bx^{*} \bigr\rangle +(1-a)f(x_{0}) \\ &{}-(1-a)f \bigl(x^{*} \bigr) \\ ={}&(1-a) \bigl( \bigl\langle x_{0}-x^{*}, Bx^{*} \bigr\rangle + f(x_{0})- f \bigl(x^{*} \bigr) \bigr) \\ \geq {}& 0. \end{aligned}$$

Since \(a\in (0,1)\), we have

$$\begin{aligned} \bigl\langle x^{*} -x_{0}, Ax_{0} \bigr\rangle +f \bigl(x^{*} \bigr)-f(x_{0}) \geq 0. \end{aligned}$$
(2.8)

From (2.7) and (2.8), we have

$$\begin{aligned} \langle y -x_{0}, Ax_{0} \rangle +f(y)-f(x_{0}) \geq 0. \end{aligned}$$
(2.9)

This implies that

$$\begin{aligned} x_{0}\in VI(C,A,f). \end{aligned}$$
(2.10)

Using the same method as (2.10), we have

$$\begin{aligned} x_{0}\in VI(C,B,f). \end{aligned}$$
(2.11)

From (2.10) and (2.11), we obtain \(x_{0}\in VI(C,A,f)\cap VI(C,B,f)\). Hence, we can conclude that

$$\begin{aligned} VI \bigl(C,aA+(1-a)B,f \bigr) \subseteq VI(C,A,f)\cap VI(C,B,f). \end{aligned}$$
(2.12)

From (2.2) and (2.12), we obtain

$$\begin{aligned} VI(C,A,f)\cap VI(C,B,f)=VI \bigl(C,aA+(1-a)B,f \bigr). \end{aligned}$$
(2.13)

 □

Lemma 2.6

Let \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function on H. Let \(A: C\to H\) be a mapping. Then, \(\mathrm{Fix}(J_{\gamma f}(I-\gamma A))=VI(C,A,f)\), where \(J_{\gamma f}:H \to H\), defined as \(J_{\gamma f}=(I+\gamma \partial f )^{-1}\), is the resolvent operator, I is the identity operator, and \(\gamma >0\) is a constant.

Proof

Let \(z\in H\). Then,

$$\begin{aligned} z\in \mathrm{Fix} \bigl(J_{\gamma f}(I-\gamma A) \bigr) \quad& \Leftrightarrow\quad z=J_{\gamma f}(I- \gamma A)z \\ &\Leftrightarrow\quad z=(I+\gamma \partial f )^{-1}(I-\gamma A)z \\ &\Leftrightarrow \quad(I-\gamma A)z \in (I+\gamma \partial f )z \\ &\Leftrightarrow\quad - A z \in \partial f(z) \\ &\Leftrightarrow \quad\langle -Az, y-z \rangle \leq f(y)-f(z),\quad \forall y \in C \\ &\Leftrightarrow\quad z\in VI(C,A,f). \end{aligned}$$
(2.14)

Next, we will show that \(J_{\gamma f}\) is a firmly nonexpansive mapping.

Let \(p=J_{\gamma f}(x)=(I+\gamma \partial f )^{-1}x\) and \(q=J_{\gamma f}(y)=(I+\gamma \partial f )^{-1}y\). It follows that \(x\in (I+\gamma \partial f )p\) and \(y \in (I+\gamma \partial f )q\).

From the definition of \(\partial f(p)\) and \(\partial f(q)\), we have

$$\begin{aligned} \frac {x-p}{\gamma} \in \partial f(p)\quad \text{and}\quad \frac {y-q}{\gamma} \in \partial f(q). \end{aligned}$$

This implies that

$$\begin{aligned} \biggl\langle \frac {x-p}{\gamma}, c- p \biggr\rangle \leq f(c) - f(p)\quad \text{and}\quad \biggl\langle \frac {y-q}{\gamma}, c- q \biggr\rangle \leq f(c) - f(q) \end{aligned}$$

for all \(c\in H\). Then,

$$\begin{aligned} \biggl\langle \frac {x-p}{\gamma}, q- p \biggr\rangle \leq f(q) - f(p) \end{aligned}$$
(2.15)

and

$$\begin{aligned} \biggl\langle \frac {y-q}{\gamma}, p- q \biggr\rangle \leq f(p) - f(q). \end{aligned}$$
(2.16)

By combining (2.15) and (2.16), we obtain

$$\begin{aligned} \biggl\langle \frac {x-p}{\gamma}- \frac {y-q}{\gamma}, q- p \biggr\rangle \leq 0, \end{aligned}$$
(2.17)

which implies that

$$\begin{aligned} \langle x-y+q-p, q- p\rangle \leq 0. \end{aligned}$$
(2.18)

Then, we have

$$\begin{aligned} \Vert q-p \Vert ^{2}\leq \langle y-x, q- p\rangle. \end{aligned}$$

From the definition of \(p,q\), we have

$$\begin{aligned} \bigl\Vert J_{\gamma f}(y)-J_{\gamma f}(x) \bigr\Vert ^{2} \leq \bigl\langle J_{\gamma f}(y)-J_{ \gamma f}(x), y- x \bigr\rangle . \end{aligned}$$

Therefore, \(J_{\gamma f}\) is a firmly nonexpansive mapping. □

Remark 2.7

From Lemma 2.5 and Lemma 2.6, we have

$$\begin{aligned} VI(C,A,f)\cap VI(C,B,f) &= VI \bigl(C,aA+(1-a)B,f \bigr) \\ & = \mathrm{Fix} \bigl(J_{\gamma f} \bigl(I- \gamma \bigl(aA+(1-a)B \bigr) \bigr) \bigr) \end{aligned}$$
(2.19)

for all \(\gamma >0\) and \(a \in (0,1)\).
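As a sanity check of Remark 2.7, consider the following toy instance on \(\mathbb{R}^{3}\) (all data invented for illustration): \(A(x)=x-s\) and \(B(x)=2(x-s)\) are 1- and \(\frac{1}{2}\)-inverse strongly monotone and share the common solution s, and \(f=\delta _{C}\) for a box C containing s, so that \(J_{\gamma f}=P_{C}\). Iterating \(K=J_{\gamma f}(I-\gamma (aA+(1-a)B))\) then recovers the common solution of the two mixed variational inequality problems.

```python
import numpy as np

s = np.array([0.3, -0.4, 0.2])                   # common solution, interior to C
A = lambda x: x - s                              # 1-inverse strongly monotone
B = lambda x: 2.0 * (x - s)                      # 1/2-inverse strongly monotone
P_C = lambda x: np.clip(x, -1.0, 1.0)            # resolvent of f = delta_C

a, gamma = 0.5, 0.4                              # gamma in (0, 2*min{1, 1/2}) = (0, 1)
K = lambda x: P_C(x - gamma * (a * A(x) + (1 - a) * B(x)))

x = np.array([5.0, 5.0, -5.0])
for _ in range(200):
    x = K(x)
print(np.allclose(x, s))  # True: Fix(K) recovers the common solution of both VIs
```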

3 Main results

In this section, we introduce a new intermixed algorithm with viscosity technique using Lemmas 2.5 and 2.6 as an important tool for finding a solution of the combination of mixed variational inequality problems and the fixed-point problem of a nonexpansive mapping in a real Hilbert space and establish its strong convergence under some mild conditions.

Theorem 3.1

Let C be a nonempty, closed, and convex subset of H. For every \(i=1,2\), let \(f_{i}: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function, let \(A_{i},B_{i}: C\to H\) be \(\delta ^{A}_{i}\)- and \(\delta ^{B}_{i}\)-inverse strongly monotone operators, respectively, with \(\delta _{i}= \min \{\delta ^{A}_{i}, \delta ^{B}_{i} \}\) and let \(T_{i}: C\to C\) be nonexpansive mappings. Assume that \(\Omega _{i}=\mathrm{Fix}(T_{i})\cap VI(C,A_{i},f_{i})\cap VI(C,B_{i},f_{i}) \neq \emptyset \), for all \(i=1,2\). Let \(g_{1},g_{2}:H \rightarrow H\) be \(\sigma _{1}\)- and \(\sigma _{2}\)-contraction mappings with \(\sigma _{1},\sigma _{2}\in (0,1)\) and \(\sigma =\max \{\sigma _{1},\sigma _{2}\}\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1},y_{1}\in C\) and

$$\begin{aligned} \textstyle\begin{cases} w_{n}=b_{2}y_{n}+(1-b_{2})T_{2}y_{n}, \\ y_{n+1}=(1-\beta _{n})w_{n}+\beta _{n}P_{C}(\alpha _{n}g_{2}(x_{n})\\ \phantom{y_{n+1}=}{}+(1- \alpha _{n})J_{\gamma f}^{2}(y_{n}-\gamma _{2} (a_{2}A_{2}+(1-a_{2})B_{2})y_{n})), \\ z_{n}=b_{1}x_{n}+(1-b_{1})T_{1}x_{n}, \\ x_{n+1}=(1-\beta _{n})z_{n}+\beta _{n}P_{C}(\alpha _{n}g_{1}(y_{n})\\ \phantom{x_{n+1}=}{}+(1- \alpha _{n})J_{\gamma f}^{1}(x_{n}-\gamma _{1} (a_{1}A_{1}+(1-a_{1})B_{1})x_{n})), \quad \forall n \geq 1, \end{cases}\displaystyle \end{aligned}$$
(3.1)

where \(\{\beta _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(\gamma _{i} \in (0,2\delta _{i})\), \(a_{i},b_{i} \in (0,1)\), and \(J^{i}_{\gamma f}:H \to H\), defined by \(J_{\gamma f}^{i}=(I+\gamma _{i} \partial f_{i} )^{-1}\), is the resolvent operator for all \(i=1,2\). Assume that the following conditions hold:

(i) \(\lim_{n\rightarrow \infty}\alpha _{n}=0\) and \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);

(ii) \(0<\overline{l}\leq \beta _{n} \leq l\) for all \(n\in \mathbb{N}\) and for some \(\overline{l},l>0\);

(iii) \(\sum_{n=1}^{\infty}|\alpha _{n+1}-\alpha _{n}|< \infty, \sum_{n=1}^{\infty}|\beta _{n+1}-\beta _{n}|< \infty \).

Then, \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to \(x^{*}=P_{\Omega _{1}}g_{1}(y^{*})\) and \(y^{*}=P_{\Omega _{2}}g_{2}(x^{*})\), respectively.
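Before proving the theorem, we illustrate algorithm (3.1) on a small, fully synthetic instance in \(\mathbb{R}^{2}\) (every choice below is ours for illustration, not part of the theorem): for \(i=1,2\) we take \(A_{i}(x)=x-s_{i}\) and \(B_{i}(x)=2(x-s_{i})\), \(f_{i}=\delta _{C}\) with \(C=[-2,2]^{2}\) so that \(J^{i}_{\gamma f}=P_{C}\), and \(T_{i}\) a rotation by 90° about \(s_{i}\) (nonexpansive with \(\mathrm{Fix}(T_{i})=\{s_{i}\}\)); hence \(\Omega _{i}=\{s_{i}\}\) and the iterates should approach \(s_{1}\) and \(s_{2}\).

```python
import numpy as np

s1, s2 = np.array([0.5, -0.5]), np.array([-1.0, 0.8])
R = np.array([[0.0, -1.0], [1.0, 0.0]])            # 90-degree rotation (an isometry)
P_C = lambda x: np.clip(x, -2.0, 2.0)              # J_{gamma f} for f = delta_C

T1 = lambda x: s1 + R @ (x - s1)                   # nonexpansive, Fix(T1) = {s1}
T2 = lambda x: s2 + R @ (x - s2)
g1 = lambda y: 0.3 * y                             # 0.3-contractions (illustrative)
g2 = lambda x: 0.3 * x + np.array([0.2, -0.1])
F1 = lambda x: 0.5 * (x - s1) + 0.5 * 2.0 * (x - s1)   # a1*A1 + (1-a1)*B1, a1 = 1/2
F2 = lambda y: 0.5 * (y - s2) + 0.5 * 2.0 * (y - s2)
gamma1 = gamma2 = 0.4                              # in (0, 2*delta_i) = (0, 1)
b1 = b2 = 0.5

x, y = np.array([2.0, 2.0]), np.array([-2.0, -2.0])
for n in range(1, 5001):
    alpha, beta = 1.0 / n, 0.5                     # satisfy conditions (i)-(iii)
    w = b2 * y + (1 - b2) * T2(y)                  # uses the current y_n
    y_next = (1 - beta) * w + beta * P_C(alpha * g2(x)
             + (1 - alpha) * P_C(y - gamma2 * F2(y)))
    z = b1 * x + (1 - b1) * T1(x)                  # uses the current x_n
    x = (1 - beta) * z + beta * P_C(alpha * g1(y)
        + (1 - alpha) * P_C(x - gamma1 * F1(x)))
    y = y_next

print(np.linalg.norm(x - s1), np.linalg.norm(y - s2))  # both small
```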

Proof

First, we show that \(\{x_{n}\}\) and \(\{y_{n}\}\) are bounded.

We claim that \(J_{\gamma f}^{i}(I-\gamma _{i}(a_{i}A_{i}+(1-a_{i})B_{i}))\) is nonexpansive for all \(i=1,2\). To show this, let \(x,y \in C\); then

$$\begin{aligned} &\bigl\Vert J_{\gamma f}^{i} \bigl(I-\gamma _{i} \bigl(a_{i}A_{i}+(1-a_{i})B_{i} \bigr) \bigr)x-J_{ \gamma f}^{i} \bigl(I-\gamma _{i} \bigl(a_{i}A_{i}+(1-a_{i})B_{i} \bigr) \bigr)y \bigr\Vert ^{2} \\ &\quad\leq \bigl\Vert \bigl(I-\gamma _{i} \bigl(a_{i}A_{i}+(1-a_{i})B_{i} \bigr) \bigr)x- \bigl(I-\gamma _{i} \bigl(a_{i}A_{i}+(1-a_{i})B_{i} \bigr) \bigr)y \bigr\Vert ^{2} \\ &\quad= \bigl\Vert x-y-\gamma _{i} \bigl( \bigl(a_{i}A_{i}+(1-a_{i})B_{i} \bigr)x- \bigl(a_{i}A_{i}+(1-a_{i})B_{i} \bigr)y \bigr) \bigr\Vert ^{2} \\ &\quad= \bigl\Vert x-y-\gamma _{i} \bigl(a_{i}(A_{i}x-A_{i}y)+(1-a_{i}) (B_{i}x-B_{i}y) \bigr) \bigr\Vert ^{2} \\ &\quad= \Vert x-y \Vert ^{2}-2\gamma _{i} \bigl\langle a_{i}(A_{i}x-A_{i}y)+(1-a_{i}) (B_{i}x-B_{i}y), x-y \bigr\rangle \\ &\qquad{}+ \gamma _{i}^{2} \bigl\Vert a_{i}(A_{i}x-A_{i}y)+(1-a_{i}) (B_{i}x-B_{i}y) \bigr\Vert ^{2} \\ &\quad\leq \Vert x-y \Vert ^{2}-2\gamma _{i}a_{i} \langle A_{i}x-A_{i}y, x-y \rangle -2\gamma _{i}(1-a_{i}) \langle B_{i}x-B_{i}y, x-y \rangle \\ &\qquad{}+ \gamma _{i}^{2}a_{i} \Vert A_{i}x-A_{i}y \Vert ^{2}+(1-a_{i}) \gamma _{i}^{2} \Vert B_{i}x-B_{i}y \Vert ^{2} \\ &\quad\leq \Vert x-y \Vert ^{2}-2\gamma _{i}a_{i} \delta ^{A}_{i} \Vert A_{i}x-A_{i}y \Vert ^{2}-2 \gamma _{i}(1-a_{i})\delta ^{B}_{i} \Vert B_{i}x-B_{i}y \Vert ^{2} \\ &\qquad{}+ \gamma _{i}^{2}a_{i} \Vert A_{i}x-A_{i}y \Vert ^{2}+(1-a_{i}) \gamma _{i}^{2} \Vert B_{i}x-B_{i}y \Vert ^{2} \\ &\quad\leq \Vert x-y \Vert ^{2}-2\gamma _{i}a_{i} \delta _{i} \Vert A_{i}x-A_{i}y \Vert ^{2}-2 \gamma _{i}(1-a_{i})\delta _{i} \Vert B_{i}x-B_{i}y \Vert ^{2} \\ &\qquad{}+ \gamma _{i}^{2}a_{i} \Vert A_{i}x-A_{i}y \Vert ^{2}+(1-a_{i}) \gamma _{i}^{2} \Vert B_{i}x-B_{i}y \Vert ^{2} \\ &\quad\leq \Vert x-y \Vert ^{2}+a_{i}\gamma _{i}(\gamma _{i}-2\delta _{i}) \Vert A_{i}x-A_{i}y \Vert ^{2}+(1-a_{i}) \gamma _{i}(\gamma _{i}-2\delta _{i}) \Vert B_{i}x-B_{i}y \Vert ^{2} \\ &\quad\leq \Vert x-y \Vert ^{2}. \end{aligned}$$
(3.2)

Assume that \(x^{*}\in \Omega _{1}\) and \(y^{*}\in \Omega _{2}\).

From the definition of \(z_{n}\) and the nonexpansiveness of \(T_{1}\), we have

$$\begin{aligned} \bigl\Vert z_{n}-x^{*} \bigr\Vert ={}& \bigl\Vert b_{1}x_{n}+(1-b_{1})T_{1}x_{n}-x^{*} \bigr\Vert \\ \leq {}& b_{1} \bigl\Vert x_{n}-x^{*} \bigr\Vert +(1-b_{1}) \bigl\Vert T_{1}x_{n}-x^{*} \bigr\Vert \\ \leq {}& b_{1} \bigl\Vert x_{n}-x^{*} \bigr\Vert +(1-b_{1}) \bigl\Vert x_{n}-x^{*} \bigr\Vert \\ ={}& \bigl\Vert x_{n}-x^{*} \bigr\Vert . \end{aligned}$$
(3.3)

Similarly, we have \(\|w_{n}-y^{*}\| \leq \|y_{n}-y^{*}\|\).

Putting \(K_{i}=J_{\gamma f}^{i}(I-\gamma _{i}(a_{i}A_{i}+(1-a_{i})B_{i}))\) for all \(i=1,2\), from the definition of \(x_{n}\), the nonexpansiveness of \(K_{i}\) for all \(i=1,2\), and (3.3), we have

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ={}& \bigl\Vert (1-\beta _{n})z_{n}+\beta _{n}P_{C} \bigl(\alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})K_{1}x_{n} \bigr)-x^{*} \bigr\Vert \\ \leq {}& (1-\beta _{n}) \bigl\Vert z_{n}-x^{*} \bigr\Vert +\beta _{n} \bigl\Vert \alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})K_{1}x_{n}-x^{*} \bigr\Vert \\ \leq {}&(1-\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert +\beta _{n} \bigl(\alpha _{n} \bigl\Vert g_{1}(y_{n})-x^{*} \bigr\Vert +(1-\alpha _{n}) \bigl\Vert K_{1}x_{n}-x^{*} \bigr\Vert \bigr) \\ \leq {}&(1-\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert +\beta _{n} \bigl(\alpha _{n} \bigl\Vert g_{1}(y_{n})-x^{*} \bigr\Vert +(1-\alpha _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert \bigr) \\ = {}&(1-\alpha _{n}\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert +\alpha _{n}\beta _{n} \bigl\Vert g_{1}(y_{n})-x^{*} \bigr\Vert \\ \leq {}&(1-\alpha _{n}\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert +\alpha _{n}\beta _{n} \bigl( \bigl\Vert g_{1}(y_{n})-g_{1} \bigl(y^{*} \bigr) \bigr\Vert + \bigl\Vert g_{1} \bigl(y^{*} \bigr)-x^{*} \bigr\Vert \bigr) \\ \leq {}&(1-\alpha _{n}\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert +\alpha _{n}\beta _{n} \sigma _{1} \bigl\Vert y_{n}-y^{*} \bigr\Vert +\alpha _{n}\beta _{n} \bigl\Vert g_{1} \bigl(y^{*} \bigr)-x^{*} \bigr\Vert \\ \leq{} &(1-\alpha _{n}\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert +\alpha _{n}\beta _{n} \sigma \bigl\Vert y_{n}-y^{*} \bigr\Vert +\alpha _{n}\beta _{n} \bigl\Vert g_{1} \bigl(y^{*} \bigr)-x^{*} \bigr\Vert . \end{aligned}$$
(3.4)

Similarly, we obtain

$$\begin{aligned} \bigl\Vert y_{n+1}-y^{*} \bigr\Vert \leq (1-\alpha _{n}\beta _{n}) \bigl\Vert y_{n}-y^{*} \bigr\Vert + \alpha _{n}\beta _{n}\sigma \bigl\Vert x_{n}-x^{*} \bigr\Vert +\alpha _{n}\beta _{n} \bigl\Vert g_{2} \bigl(x^{*} \bigr)-y^{*} \bigr\Vert . \end{aligned}$$
(3.5)

Combining (3.4) and (3.5), we have

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*} \bigr\Vert + \bigl\Vert y_{n+1}-y^{*} \bigr\Vert \leq {}& (1-\alpha _{n} \beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert +\alpha _{n}\beta _{n}\sigma \bigl\Vert y_{n}-y^{*} \bigr\Vert \\ &{}+\alpha _{n}\beta _{n} \bigl\Vert g_{1} \bigl(y^{*} \bigr)-x^{*} \bigr\Vert \\ &{}+(1-\alpha _{n}\beta _{n}) \bigl\Vert y_{n}-y^{*} \bigr\Vert +\alpha _{n}\beta _{n} \sigma \bigl\Vert x_{n}-x^{*} \bigr\Vert \\ &{}+\alpha _{n}\beta _{n} \bigl\Vert g_{2} \bigl(x^{*} \bigr)-y^{*} \bigr\Vert \\ ={}& (1-\alpha _{n}\beta _{n}) \bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert + \bigl\Vert y_{n}-y^{*} \bigr\Vert \bigr) \\ &{}+\alpha _{n}\beta _{n}\sigma \bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert + \bigl\Vert y_{n}-y^{*} \bigr\Vert \bigr) \\ &{}+\alpha _{n}\beta _{n} \bigl( \bigl\Vert g_{1} \bigl(y^{*} \bigr)-x^{*} \bigr\Vert + \bigl\Vert g_{2} \bigl(x^{*} \bigr)-y^{*} \bigr\Vert \bigr) \\ ={}& \bigl(1-\alpha _{n}\beta _{n}(1-\sigma ) \bigr) \bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert + \bigl\Vert y_{n}-y^{*} \bigr\Vert \bigr) \\ &{}+\alpha _{n}\beta _{n} \bigl( \bigl\Vert g_{1} \bigl(y^{*} \bigr)-x^{*} \bigr\Vert + \bigl\Vert g_{2} \bigl(x^{*} \bigr)-y^{*} \bigr\Vert \bigr). \end{aligned}$$

We can deduce from induction that

$$\begin{aligned} \bigl\Vert x_{n}-x^{*} \bigr\Vert + \bigl\Vert y_{n}-y^{*} \bigr\Vert \leq \max \biggl\{ \bigl\Vert x_{1}-x^{*} \bigr\Vert + \bigl\Vert y_{1}-y^{*} \bigr\Vert , \frac { \Vert g_{1}(y^{*})-x^{*} \Vert + \Vert g_{2}(x^{*})-y^{*} \Vert }{1-\sigma} \biggr\} , \end{aligned}$$

for every \(n \in \mathbb{N}\). This implies that \(\{x_{n}\}\) and \(\{y_{n}\}\) are bounded. Consequently, \(\{z_{n}\}\) and \(\{w_{n}\}\) are also bounded.

Next, we show that \(\|x_{n+1}-x_{n}\|\rightarrow 0\) and \(\|y_{n+1}-y_{n}\|\rightarrow 0\) as \(n \to \infty \).

Set \(Q_{n}=P_{C}(\alpha _{n}g_{1}(y_{n})+(1-\alpha _{n})K_{1}x_{n})\) and \(Q_{n}^{*}=P_{C}(\alpha _{n}g_{2}(x_{n})+(1-\alpha _{n})K_{2}y_{n})\). By the nonexpansiveness of \(K_{i}\) for \(i=1,2\), we have

$$\begin{aligned} \Vert Q_{n}- Q_{n-1} \Vert ={}& \bigl\Vert P_{C} \bigl(\alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})K_{1}x_{n} \bigr)-P_{C} \bigl( \alpha _{n-1}g_{1}(y_{n-1})+(1-\alpha _{n-1})K_{1}x_{n-1} \bigr) \bigr\Vert \\ \leq{} & \bigl\Vert \bigl(\alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})K_{1}x_{n} \bigr)- \bigl(\alpha _{n-1}g_{1}(y_{n-1})+(1- \alpha _{n-1})K_{1}x_{n-1} \bigr) \bigr\Vert \\ ={}& \bigl\Vert \alpha _{n}g_{1}(y_{n})- \alpha _{n}g_{1}(y_{n-1})+\alpha _{n}g_{1}(y_{n-1})+(1- \alpha _{n})K_{1}x_{n}-(1- \alpha _{n})K_{1}x_{n-1} \\ &{}+(1-\alpha _{n})K_{1}x_{n-1}-\alpha _{n-1}g_{1}(y_{n-1})-(1-\alpha _{n-1})K_{1}x_{n-1} \bigr\Vert \\ ={}& \bigl\Vert \alpha _{n} \bigl(g_{1}(y_{n})-g_{1}(y_{n-1}) \bigr)+(\alpha _{n}-\alpha _{n-1})g_{1}(y_{n-1})+(1- \alpha _{n}) (K_{1}x_{n}-K_{1}x_{n-1}) \\ &{}+(\alpha _{n-1}-\alpha _{n})K_{1}x_{n-1} \bigr\Vert \\ \leq{} & \alpha _{n} \bigl\Vert g_{1}(y_{n})-g_{1}(y_{n-1}) \bigr\Vert + \vert \alpha _{n}- \alpha _{n-1} \vert \bigl\Vert g_{1}(y_{n-1}) \bigr\Vert \\ &{} +(1-\alpha _{n}) \Vert K_{1}x_{n}-K_{1}x_{n-1} \Vert \\ &{}+ \vert \alpha _{n}-\alpha _{n-1} \vert \Vert K_{1}x_{n-1} \Vert \\ \leq{} & \alpha _{n}\sigma _{1} \Vert y_{n}-y_{n-1} \Vert + \vert \alpha _{n}-\alpha _{n-1} \vert \bigl\Vert g_{1}(y_{n-1}) \bigr\Vert +(1-\alpha _{n}) \Vert x_{n}-x_{n-1} \Vert \\ &{}+ \vert \alpha _{n}-\alpha _{n-1} \vert \Vert K_{1}x_{n-1} \Vert \\ \leq {}& \alpha _{n}\sigma \Vert y_{n}-y_{n-1} \Vert + \vert \alpha _{n}-\alpha _{n-1} \vert \bigl\Vert g_{1}(y_{n-1}) \bigr\Vert +(1-\alpha _{n}) \Vert x_{n}-x_{n-1} \Vert \\ &{}+ \vert \alpha _{n}-\alpha _{n-1} \vert \Vert K_{1}x_{n-1} \Vert . \end{aligned}$$
(3.6)

From the definition of \(z_{n}\) and the nonexpansiveness of \(T_{1}\), we have

$$\begin{aligned} \Vert z_{n}-z_{n-1} \Vert ={}& \bigl\Vert b_{1}x_{n}+(1-b_{1})T_{1}x_{n}-b_{1}x_{n-1}-(1-b_{1})T_{1}x_{n-1} \bigr\Vert \\ \leq {}& \bigl\Vert b_{1}(x_{n}-x_{n-1})+(1-b_{1}) (T_{1}x_{n}-T_{1}x_{n-1}) \bigr\Vert \\ \leq {}& b_{1} \Vert x_{n}-x_{n-1} \Vert +(1-b_{1}) \Vert T_{1}x_{n}-T_{1}x_{n-1} \Vert \\ \leq {}& b_{1} \Vert x_{n}-x_{n-1} \Vert +(1-b_{1}) \Vert x_{n}-x_{n-1} \Vert \\ ={}& \Vert x_{n}-x_{n-1} \Vert . \end{aligned}$$
(3.7)

Similarly, we obtain

$$\begin{aligned} \Vert w_{n}-w_{n-1} \Vert \leq & \Vert y_{n}-y_{n-1} \Vert . \end{aligned}$$
(3.8)

From the definition of \(x_{n}\), (3.6), and (3.7), we have

$$\begin{aligned} \Vert x_{n+1}-x_{n} \Vert ={}& \bigl\Vert (1-\beta _{n})z_{n}+\beta _{n}Q_{n}- \bigl((1- \beta _{n-1})z_{n-1}+ \beta _{n-1}Q_{n-1} \bigr) \bigr\Vert \\ \leq {}& (1-\beta _{n}) \Vert z_{n}-z_{n-1} \Vert + \vert \beta _{n-1}-\beta _{n} \vert \Vert z_{n-1} \Vert \\ &{}+ \beta _{n} \Vert Q_{n}-Q_{n-1} \Vert + \vert \beta _{n}-\beta _{n-1} \vert \Vert Q_{n-1} \Vert \\ \leq {}& (1-\beta _{n}) \Vert x_{n}-x_{n-1} \Vert + \vert \beta _{n-1}-\beta _{n} \vert \Vert z_{n-1} \Vert \\ &{}+ \beta _{n} \bigl(\alpha _{n}\sigma \Vert y_{n}-y_{n-1} \Vert + \vert \alpha _{n}- \alpha _{n-1} \vert \bigl\Vert g_{1}(y_{n-1}) \bigr\Vert \\ &{}+(1-\alpha _{n}) \Vert x_{n}-x_{n-1} \Vert + \vert \alpha _{n}-\alpha _{n-1} \vert \Vert K_{1}x_{n-1} \Vert \bigr) \\ &{}+ \vert \beta _{n}-\beta _{n-1} \vert \Vert Q_{n-1} \Vert \\ = {}& (1-\beta _{n}) \Vert x_{n}-x_{n-1} \Vert + \vert \beta _{n-1}-\beta _{n} \vert \Vert z_{n-1} \Vert \\ &{}+ \beta _{n}\alpha _{n}\sigma \Vert y_{n}-y_{n-1} \Vert +\beta _{n} \vert \alpha _{n}- \alpha _{n-1} \vert \bigl\Vert g_{1}(y_{n-1}) \bigr\Vert \\ &{}+\beta _{n}(1-\alpha _{n}) \Vert x_{n}-x_{n-1} \Vert +\beta _{n} \vert \alpha _{n}- \alpha _{n-1} \vert \Vert K_{1}x_{n-1} \Vert \\ & + \vert \beta _{n}-\beta _{n-1} \vert \Vert Q_{n-1} \Vert \\ \leq{} & (1-\alpha _{n}\beta _{n}) \Vert x_{n}-x_{n-1} \Vert + \vert \beta _{n-1}- \beta _{n} \vert \bigl( \Vert z_{n-1} \Vert + \Vert Q_{n-1} \Vert \bigr) \\ &{}+ \alpha _{n}\beta _{n}\sigma \Vert y_{n}-y_{n-1} \Vert + \vert \alpha _{n}-\alpha _{n-1} \vert \bigl( \bigl\Vert g_{1}(y_{n-1}) \bigr\Vert + \Vert K_{1}x_{n-1} \Vert \bigr). \end{aligned}$$
(3.9)

Using the same method as derived in (3.9), we have

$$\begin{aligned} \Vert y_{n+1}-y_{n} \Vert \leq {}& (1- \alpha _{n}\beta _{n}) \Vert y_{n}-y_{n-1} \Vert + \vert \beta _{n-1}-\beta _{n} \vert \bigl( \Vert w_{n-1} \Vert + \bigl\Vert Q_{n-1}^{*} \bigr\Vert \bigr) \\ &{}+ \alpha _{n}\beta _{n}\sigma \Vert x_{n}-x_{n-1} \Vert + \vert \alpha _{n}- \alpha _{n-1} \vert \bigl( \bigl\Vert g_{2}(x_{n-1}) \bigr\Vert + \Vert K_{2}y_{n-1} \Vert \bigr). \end{aligned}$$
(3.10)

From (3.9) and (3.10), we have

$$\begin{aligned} \Vert x_{n+1}-x_{n} \Vert + \Vert y_{n+1}-y_{n} \Vert \leq {}& \bigl(1-(1-\sigma )\beta _{n} \alpha _{n} \bigr) \bigl( \Vert x_{n}-x_{n-1} \Vert + \Vert y_{n}-y_{n-1} \Vert \bigr) \\ &{}+ \vert \beta _{n-1}-\beta _{n} \vert \bigl( \Vert z_{n-1} \Vert + \Vert w_{n-1} \Vert + \Vert Q_{n-1} \Vert + \bigl\Vert Q_{n-1}^{*} \bigr\Vert \bigr) \\ &{}+ \vert \alpha _{n}-\alpha _{n-1} \vert \bigl( \bigl\Vert g_{1}(y_{n-1}) \bigr\Vert + \Vert K_{1}x_{n-1} \Vert \\ &{}+ \bigl\Vert g_{2}(x_{n-1}) \bigr\Vert + \Vert K_{2}y_{n-1} \Vert \bigr). \end{aligned}$$

Applying Lemma 2.4 together with conditions (i)–(iii), we can conclude that

$$\begin{aligned} \lim_{n\to \infty} \Vert x_{n+1}-x_{n} \Vert =0\quad \text{and}\quad \lim_{n\to \infty} \Vert y_{n+1}-y_{n} \Vert =0. \end{aligned}$$
(3.11)

Next, we show that \(\|x_{n}-U_{n}\|\rightarrow 0\), \(\|y_{n}-V_{n}\|\rightarrow 0\), \(\|x_{n}-T_{1}x_{n}\|\rightarrow 0\), and \(\|y_{n}-T_{2}y_{n}\|\rightarrow 0\) as \(n\to \infty \), where \(U_{n}=\alpha _{n}g_{1}(y_{n})+(1-\alpha _{n})K_{1}x_{n}\) and \(V_{n}=\alpha _{n}g_{2}(x_{n})+(1-\alpha _{n})K_{2}y_{n}\).

Let \(x^{*}\in \Omega _{1}\) and \(y^{*}\in \Omega _{2}\). From the definition of \(z_{n}\), we obtain

$$\begin{aligned} \bigl\Vert z_{n}-x^{*} \bigr\Vert ^{2}\leq {}& b_{1} \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+(1-b_{1}) \bigl\Vert T_{1}x_{n}-x^{*} \bigr\Vert ^{2}-b_{1}(1-b_{1}) \Vert x_{n}-T_{1}x_{n} \Vert ^{2} \\ \leq {}& b_{1} \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+(1-b_{1}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}-b_{1}(1-b_{1}) \Vert x_{n}-T_{1}x_{n} \Vert ^{2} \\ \leq{} & \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}-b_{1}(1-b_{1}) \Vert x_{n}-T_{1}x_{n} \Vert ^{2}. \end{aligned}$$
(3.12)

In a similar way, we have

$$\begin{aligned} \bigl\Vert w_{n}-y^{*} \bigr\Vert ^{2}\leq \bigl\Vert y_{n}-y^{*} \bigr\Vert ^{2}-b_{2}(1-b_{2}) \Vert y_{n}-T_{2}y_{n} \Vert ^{2}. \end{aligned}$$
(3.13)

From the definition of \(x_{n}\), (3.3), and (3.12), we obtain

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} ={}& \bigl\Vert (1-\beta _{n})z_{n}+\beta _{n}P_{C}U_{n}-x^{*} \bigr\Vert ^{2} \\ ={}& (1-\beta _{n}) \bigl\Vert z_{n}-x^{*} \bigr\Vert ^{2}+\beta _{n} \bigl\Vert P_{C}U_{n}-x^{*} \bigr\Vert ^{2} \\ &{}- (1-\beta _{n})\beta _{n} \Vert z_{n}-P_{C}U_{n} \Vert ^{2} \\ \leq {}& (1-\beta _{n}) \bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}-b_{1}(1-b_{1}) \Vert x_{n}-T_{1}x_{n} \Vert ^{2} \bigr) \\ &{}+\beta _{n} \bigl\Vert \alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})K_{1}x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}- (1-\beta _{n})\beta _{n} \Vert z_{n}-P_{C}U_{n} \Vert ^{2} \\ ={}& (1-\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}-b_{1}(1-b_{1}) (1-\beta _{n}) \Vert x_{n}-T_{1}x_{n} \Vert ^{2} \\ &{}+\beta _{n} \bigl\Vert \alpha _{n} \bigl(g_{1}(y_{n})-K_{1}x_{n} \bigr)+K_{1}x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}- (1-\beta _{n})\beta _{n} \Vert z_{n}-P_{C}U_{n} \Vert ^{2} \\ \leq {}& (1-\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}-b_{1}(1-b_{1}) (1-\beta _{n}) \Vert x_{n}-T_{1}x_{n} \Vert ^{2}+ \beta _{n} \bigl( \bigl\Vert K_{1}x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}+2\alpha _{n} \bigl\langle g_{1}(y_{n})-K_{1}x_{n}, \alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})K_{1}x_{n}-x^{*} \bigr\rangle \bigr) \\ &{}- (1-\beta _{n})\beta _{n} \Vert z_{n}-P_{C}U_{n} \Vert ^{2} \\ \leq {}& (1-\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}-b_{1}(1-b_{1}) (1-\beta _{n}) \Vert x_{n}-T_{1}x_{n} \Vert ^{2}+ \beta _{n} \bigl( \bigl\Vert K_{1}x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}+2\alpha _{n} \bigl\Vert g_{1}(y_{n})-K_{1}x_{n} \bigr\Vert \bigl\Vert \alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})K_{1}x_{n}-x^{*} \bigr\Vert \bigr) \\ &{}- (1-\beta _{n})\beta _{n} \Vert z_{n}-P_{C}U_{n} \Vert ^{2} \\ \leq {}& (1-\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}-b_{1}(1-b_{1}) (1-\beta _{n}) \Vert x_{n}-T_{1}x_{n} \Vert ^{2}+\beta _{n} \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}+2\alpha _{n}\beta _{n} \bigl\Vert g_{1}(y_{n})-K_{1}x_{n} \bigr\Vert \bigl\Vert \alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})K_{1}x_{n}-x^{*} \bigr\Vert \\ &{}- (1-\beta _{n})\beta _{n} \Vert z_{n}-P_{C}U_{n} \Vert ^{2} \\ = {}& \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+2 \alpha _{n}\beta _{n} \bigl\Vert g_{1}(y_{n})-K_{1}x_{n} \bigr\Vert \bigl\Vert \alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})K_{1}x_{n}-x^{*} \bigr\Vert \\ &{}-b_{1}(1-b_{1}) (1-\beta _{n}) \Vert x_{n}-T_{1}x_{n} \Vert ^{2}- (1- \beta _{n}) \beta _{n} \Vert z_{n}-P_{C}U_{n} \Vert ^{2}. \end{aligned}$$
(3.14)

It follows from (3.14) that

$$\begin{aligned} &b_{1}(1-b_{1}) (1-\beta _{n}) \Vert x_{n}-T_{1}x_{n} \Vert ^{2}+(1- \beta _{n}) \beta _{n} \Vert z_{n}-P_{C}U_{n} \Vert ^{2} \\ & \quad\leq \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} \\ &\qquad{}+2\alpha _{n}\beta _{n} \bigl\Vert g_{1}(y_{n})-K_{1}x_{n} \bigr\Vert \bigl\Vert \alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})K_{1}x_{n}-x^{*} \bigr\Vert \\ &\quad\leq \Vert x_{n}-x_{n+1} \Vert \bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert + \bigl\Vert x_{n+1}-x^{*} \bigr\Vert \bigr) \\ &\qquad{}+2\alpha _{n}\beta _{n} \bigl\Vert g_{1}(y_{n})-K_{1}x_{n} \bigr\Vert \bigl\Vert \alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})K_{1}x_{n}-x^{*} \bigr\Vert . \end{aligned}$$

By (3.11) and conditions (i) and (ii), we obtain

$$\begin{aligned} \lim_{n \to \infty} \Vert P_{C}U_{n}-z_{n} \Vert = \lim_{n \to \infty} \Vert x_{n}-T_{1}x_{n} \Vert =0. \end{aligned}$$
(3.15)

From the definition of \(y_{n}\) and applying the same method as (3.15), we have

$$\begin{aligned} \lim_{n \to \infty} \Vert P_{C}V_{n}-w_{n} \Vert = \lim_{n \to \infty} \Vert y_{n}-T_{2}y_{n} \Vert =0. \end{aligned}$$
(3.16)

From Lemma 2.3, we obtain

$$\begin{aligned} \bigl\Vert P_{C}U_{n}-x^{*} \bigr\Vert ^{2} \leq \bigl\Vert U_{n}-x^{*} \bigr\Vert ^{2}- \Vert U_{n}-P_{C}U_{n} \Vert ^{2}. \end{aligned}$$
(3.17)

From the definition of \(U_{n}\), we obtain

$$\begin{aligned} \bigl\Vert U_{n}-x^{*} \bigr\Vert ^{2} ={}& \bigl\Vert \alpha _{n} \bigl(g_{1}(y_{n})-x^{*} \bigr)+(1-\alpha _{n}) \bigl(K_{1}x_{n}-x^{*} \bigr) \bigr\Vert ^{2} \\ \leq {}& \alpha _{n} \bigl\Vert g_{1}(y_{n})-x^{*} \bigr\Vert ^{2}+(1-\alpha _{n}) \bigl\Vert K_{1}x_{n}-x^{*} \bigr\Vert ^{2} \\ \leq {}& \alpha _{n} \bigl\Vert g_{1}(y_{n})-x^{*} \bigr\Vert ^{2}+(1-\alpha _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}. \end{aligned}$$
(3.18)

From (3.3), (3.17), and (3.18), we obtain

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} ={}& \bigl\Vert (1-\beta _{n}) \bigl(z_{n}-x^{*} \bigr)+\beta _{n} \bigl(P_{C}U_{n}-x^{*} \bigr) \bigr\Vert ^{2} \\ \leq {}& (1-\beta _{n}) \bigl\Vert z_{n}-x^{*} \bigr\Vert ^{2}+\beta _{n} \bigl\Vert P_{C}U_{n}-x^{*} \bigr\Vert ^{2} \\ \leq {}& (1-\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+\beta _{n} \bigl( \bigl\Vert U_{n}-x^{*} \bigr\Vert ^{2}- \Vert U_{n}-P_{C}U_{n} \Vert ^{2} \bigr) \\ \leq {}& (1-\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}+\beta _{n} \bigl(\alpha _{n} \bigl\Vert g_{1}(y_{n})-x^{*} \bigr\Vert ^{2}+(1-\alpha _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}- \Vert U_{n}-P_{C}U_{n} \Vert ^{2} \bigr) \\ \leq {}& (1-\alpha _{n}\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} +\beta _{n} \alpha _{n} \bigl\Vert g_{1}(y_{n})-x^{*} \bigr\Vert ^{2}-\beta _{n} \Vert U_{n}-P_{C}U_{n} \Vert ^{2}, \end{aligned}$$

from which it follows that

$$\begin{aligned} \beta _{n} \Vert U_{n}-P_{C}U_{n} \Vert ^{2}\leq {}& (1-\alpha _{n}\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2}+\alpha _{n}\beta _{n} \bigl\Vert g_{1}(y_{n})-x^{*} \bigr\Vert ^{2} \\ \leq{} & \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2}+\alpha _{n}\beta _{n} \bigl\Vert g_{1}(y_{n})-x^{*} \bigr\Vert ^{2} \\ \leq {}& \Vert x_{n}-x_{n+1} \Vert \bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert + \bigl\Vert x_{n+1}-x^{*} \bigr\Vert \bigr)+\alpha _{n} \beta _{n} \bigl\Vert g_{1}(y_{n})-x^{*} \bigr\Vert ^{2}. \end{aligned}$$

From \(\|x_{n+1}-x_{n}\|\rightarrow 0 \text{ as } n\rightarrow \infty \) and the conditions (i) and (ii), we have

$$\begin{aligned} \lim_{n\to \infty} \Vert U_{n}-P_{C}U_{n} \Vert =0. \end{aligned}$$
(3.19)

From the definition of \(V_{n}\) and applying the same argument as (3.19), we also obtain

$$\begin{aligned} \lim_{n\to \infty} \Vert V_{n}-P_{C}V_{n} \Vert =0. \end{aligned}$$
(3.20)

Observe that

$$\begin{aligned} z_{n}-x_{n} = (1-b_{1}) (T_{1}x_{n}-x_{n}). \end{aligned}$$
(3.21)

From (3.15) and (3.21), we obtain

$$\begin{aligned} \lim_{n\to \infty} \Vert z_{n}-x_{n} \Vert =0. \end{aligned}$$
(3.22)

Similarly, we also have

$$\begin{aligned} \lim_{n\to \infty} \Vert w_{n}-y_{n} \Vert =0. \end{aligned}$$
(3.23)

Consider

$$\begin{aligned} \Vert x_{n}-U_{n} \Vert ={}& \Vert x_{n}-z_{n}+z_{n}-P_{C}U_{n}+P_{C}U_{n}-U_{n} \Vert \\ \leq {}& \Vert x_{n}-z_{n} \Vert + \Vert z_{n}-P_{C}U_{n} \Vert + \Vert P_{C}U_{n}-U_{n} \Vert . \end{aligned}$$

From (3.15) and (3.19), we have

$$\begin{aligned} \lim_{n\to \infty} \Vert x_{n}-U_{n} \Vert =0. \end{aligned}$$
(3.24)

From the definition of \(y_{n}\) and applying the same method as (3.24), we also have

$$\begin{aligned} \lim_{n\to \infty} \Vert y_{n}-V_{n} \Vert =0. \end{aligned}$$
(3.25)

Next, we show that \(\|x_{n}-K_{1}x_{n}\|\rightarrow 0\) and \(\|y_{n}-K_{2}y_{n}\|\rightarrow 0\) as \(n\rightarrow \infty \), where \(K_{i}=J_{\gamma f}^{i}(I-\gamma _{i}(a_{i}A_{i}+(1-a_{i})B_{i}))\) for all \(i=1,2\).

Observe that

$$\begin{aligned} U_{n}-x_{n} = \alpha _{n} \bigl(g_{1}(y_{n})-x_{n} \bigr)+(1-\alpha _{n}) (K_{1}x_{n}-x_{n}), \end{aligned}$$

from which it follows that

$$\begin{aligned} (1-\alpha _{n}) \Vert K_{1}x_{n}-x_{n} \Vert \leq \Vert U_{n}-x_{n} \Vert +\alpha _{n} \bigl\Vert g_{1}(y_{n})-x_{n} \bigr\Vert . \end{aligned}$$

From (3.24) and the condition (i), we have

$$\begin{aligned} \lim_{n\to \infty} \Vert K_{1}x_{n}-x_{n} \Vert =\lim_{n\to \infty} \bigl\Vert J_{\gamma f}^{1} \bigl(I-\gamma _{1} \bigl(a_{1}A_{1}+(1-a_{1})B_{1} \bigr) \bigr)x_{n}-x_{n} \bigr\Vert =0. \end{aligned}$$
(3.26)

Applying the same argument as (3.26), we also obtain

$$\begin{aligned} \lim_{n\to \infty} \Vert K_{2}y_{n}-y_{n} \Vert =\lim_{n\to \infty} \bigl\Vert J_{\gamma f}^{2} \bigl(I-\gamma _{2} \bigl(a_{2}A_{2}+(1-a_{2})B_{2} \bigr) \bigr)y_{n}-y_{n} \bigr\Vert =0. \end{aligned}$$
(3.27)

Next, we show that \(\limsup_{n\rightarrow \infty}\langle g_{1}(y^{*})-x^{*},U_{n}-x^{*} \rangle \leq 0\), where \(x^{*}=P_{\Omega _{1}}g_{1}(y^{*})\) and \(\limsup_{n\rightarrow \infty}\langle g_{2}(x^{*})-y^{*},V_{n}-y^{*} \rangle \leq 0\), where \(y^{*}=P_{\Omega _{2}}g_{2}(x^{*})\).

Indeed, take a subsequence \(\{U_{n_{k}}\}\) of \(\{U_{n}\}\) such that

$$\begin{aligned} \limsup_{n\rightarrow \infty} \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n}-x^{*} \bigr\rangle = \lim_{k\rightarrow \infty} \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n_{k}}-x^{*} \bigr\rangle . \end{aligned}$$

Since \(\{x_{n}\}\) is bounded, without loss of generality, we may assume that \(x_{n_{k}}\rightharpoonup p \text{ as } k\rightarrow \infty \). From (3.24), we obtain \(U_{n_{k}}\rightharpoonup p \text{ as } k\rightarrow \infty \).

Next, we show that \(p\in \Omega _{1}=\mathrm{Fix}(T_{1})\cap VI(C,A_{1},f_{1})\cap VI(C,B_{1},f_{1})\).

Since \(K_{1}\) is nonexpansive, \(I-K_{1}\) is demiclosed at zero. From (3.26) and the demiclosedness of \(I-K_{1}\) at zero, we obtain \(p\in \mathrm{Fix}(K_{1})=\mathrm{Fix}(J_{\gamma f}^{1}(I-\gamma _{1}(a_{1}A_{1}+(1-a_{1})B_{1})))\). By Remark 2.7, we have \(p\in VI(C,A_{1},f_{1})\cap VI(C,B_{1},f_{1})\).

Since \(T_{1}\) is nonexpansive, \(I-T_{1}\) is demiclosed at zero. From (3.15) and the demiclosedness of \(I-T_{1}\) at zero, we obtain \(p \in \mathrm{Fix}(T_{1})\). Therefore, \(p\in \Omega _{1}=\mathrm{Fix}(T_{1})\cap VI(C,A_{1},f_{1})\cap VI(C,B_{1},f_{1})\).

Since \(U_{n_{k}}\rightharpoonup p\) as \(k\rightarrow \infty \) and \(p\in \Omega _{1}\), Lemma 2.2 yields

$$\begin{aligned} \limsup_{n\rightarrow \infty} \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n}-x^{*} \bigr\rangle ={}& \limsup_{k\rightarrow \infty} \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n_{k}}-x^{*} \bigr\rangle \\ ={}& \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},p-x^{*} \bigr\rangle \\ \leq {}& 0. \end{aligned}$$
(3.28)

Similarly, take a subsequence \(\{V_{n_{k}}\}\) of \(\{V_{n}\}\) such that

$$\begin{aligned} \limsup_{n\rightarrow \infty} \bigl\langle g_{2} \bigl(x^{*} \bigr)-y^{*},V_{n}-y^{*} \bigr\rangle = \lim_{k\rightarrow \infty} \bigl\langle g_{2} \bigl(x^{*} \bigr)-y^{*},V_{n_{k}}-y^{*} \bigr\rangle . \end{aligned}$$

Since \(\{y_{n}\}\) is bounded, without loss of generality, we may assume that \(y_{n_{k}}\rightharpoonup q \text{ as } k\rightarrow \infty \). From (3.25), we obtain \(V_{n_{k}}\rightharpoonup q \text{ as } k\rightarrow \infty \).

Following the same method as in (3.28), we obtain

$$\begin{aligned} \limsup_{n\rightarrow \infty} \bigl\langle g_{2} \bigl(x^{*} \bigr)-y^{*},V_{n}-y^{*} \bigr\rangle \leq 0. \end{aligned}$$
(3.29)

Finally, we show that \(\{x_{n}\}\) converges strongly to \(x^{*}\), where \(x^{*}=P_{\Omega _{1}}g_{1}(y^{*})\) and \(\{y_{n}\}\) converges strongly to \(y^{*}\), where \(y^{*}=P_{\Omega _{2}}g_{2}(x^{*})\).

Recall that \(U_{n}=\alpha _{n}g_{1}(y_{n})+(1-\alpha _{n})K_{1}x_{n}\) and \(V_{n}=\alpha _{n}g_{2}(x_{n})+(1-\alpha _{n})K_{2}y_{n}\).

From the definition of \(x_{n}\), we obtain

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} ={}& \bigl\Vert (1-\beta _{n})z_{n}+\beta _{n}P_{C} \bigl( \alpha _{n}g_{1}(y_{n})+(1-\alpha _{n})K_{1}x_{n} \bigr)-x^{*} \bigr\Vert ^{2} \\ \leq {}& (1-\beta _{n}) \bigl\Vert z_{n}-x^{*} \bigr\Vert ^{2}+\beta _{n} \bigl\Vert P_{C} \bigl(\alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})K_{1}x_{n} \bigr)-x^{*} \bigr\Vert ^{2} \\ \leq {}& (1-\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+\beta _{n} \bigl\Vert \alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})K_{1}x_{n}-x^{*} \bigr\Vert ^{2} \\ ={}& (1-\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}+\beta _{n} \bigl\Vert \alpha _{n} \bigl(g_{1}(y_{n})-x^{*} \bigr)+(1-\alpha _{n}) \bigl(K_{1}x_{n}-x^{*} \bigr) \bigr\Vert ^{2} \\ \leq {}& (1-\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}+\beta _{n} \bigl((1-\alpha _{n}) \bigl\Vert K_{1}x_{n}-x^{*} \bigr\Vert ^{2}+2 \alpha _{n} \bigl\langle g_{1}(y_{n})-x^{*},U_{n}-x^{*} \bigr\rangle \bigr) \\ \leq {}& (1-\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}+\beta _{n}(1-\alpha _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+2\alpha _{n}\beta _{n} \bigl\langle g_{1}(y_{n})-x^{*},U_{n}-x^{*} \bigr\rangle \\ = {}& (1-\alpha _{n}\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+2\alpha _{n}\beta _{n} \bigl\langle g_{1}(y_{n})-x^{*},U_{n}-x^{*} \bigr\rangle \\ ={}& (1-\alpha _{n}\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}+2\alpha _{n}\beta _{n} \bigl( \bigl\langle g_{1}(y_{n})-g_{1} \bigl(y^{*} \bigr),U_{n}-x^{*} \bigr\rangle + \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n}-x^{*} \bigr\rangle \bigr) \\ \leq{} & (1-\alpha _{n}\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}+2\alpha _{n}\beta _{n} \bigl( \bigl\Vert g_{1}(y_{n})-g_{1} \bigl(y^{*} \bigr) \bigr\Vert \bigl\Vert U_{n}-x^{*} \bigr\Vert + \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n}-x^{*} \bigr\rangle \bigr) \\ \leq {}& (1-\alpha _{n}\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}+2\alpha _{n}\beta _{n} \bigl\Vert g_{1}(y_{n})-g_{1} \bigl(y^{*} \bigr) \bigr\Vert \bigl( \Vert U_{n}-x_{n+1} \Vert + \bigl\Vert x_{n+1}-x^{*} \bigr\Vert \bigr) \\ &{}+2\alpha _{n}\beta _{n} \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n}-x^{*} \bigr\rangle \\ \leq {}& (1-\alpha _{n}\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}+2\alpha _{n}\beta _{n}\sigma \bigl\Vert y_{n}-y^{*} \bigr\Vert \Vert U_{n}-x_{n+1} \Vert +2 \alpha _{n}\beta _{n}\sigma \bigl\Vert y_{n}-y^{*} \bigr\Vert \bigl\Vert x_{n+1}-x^{*} \bigr\Vert \\ &{}+2\alpha _{n}\beta _{n} \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n}-x^{*} \bigr\rangle \\ \leq {}& (1-\alpha _{n}\beta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} \\ &{}+2\alpha _{n}\beta _{n}\sigma \bigl\Vert y_{n}-y^{*} \bigr\Vert \Vert U_{n}-x_{n+1} \Vert + \alpha _{n}\beta _{n}\sigma \bigl( \bigl\Vert y_{n}-y^{*} \bigr\Vert ^{2}+ \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} \bigr) \\ &{}+2\alpha _{n}\beta _{n} \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n}-x^{*} \bigr\rangle , \end{aligned}$$

which yields that

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} \leq {}& \frac{1-\alpha _{n}\beta _{n}}{1-\alpha _{n}\beta _{n}\sigma} \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+ \frac{2\alpha _{n}\beta _{n}\sigma}{1-\alpha _{n}\beta _{n}\sigma} \bigl\Vert y_{n}-y^{*} \bigr\Vert \Vert U_{n}-x_{n+1} \Vert \\ &{}+ \frac{\alpha _{n}\beta _{n}\sigma}{1-\alpha _{n}\beta _{n}\sigma} \bigl\Vert y_{n}-y^{*} \bigr\Vert ^{2}+ \frac{2\alpha _{n}\beta _{n}}{1-\alpha _{n}\beta _{n}\sigma} \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n}-x^{*} \bigr\rangle \\ ={}& \biggl(1- \frac{\alpha _{n}\beta _{n}-\alpha _{n}\beta _{n}\sigma}{1-\alpha _{n}\beta _{n}\sigma} \biggr) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+ \frac{2\alpha _{n}\beta _{n}\sigma}{1-\alpha _{n}\beta _{n}\sigma} \bigl\Vert y_{n}-y^{*} \bigr\Vert \Vert U_{n}-x_{n+1} \Vert \\ &{}+ \frac{\alpha _{n}\beta _{n}\sigma}{1-\alpha _{n}\beta _{n}\sigma} \bigl\Vert y_{n}-y^{*} \bigr\Vert ^{2}+ \frac{2\alpha _{n}\beta _{n}}{1-\alpha _{n}\beta _{n}\sigma} \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n}-x^{*} \bigr\rangle \\ ={}& \biggl(1- \frac{\alpha _{n}\beta _{n}(1-\sigma )}{1-\alpha _{n}\beta _{n}\sigma} \biggr) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+ \frac{2\alpha _{n}\beta _{n}\sigma}{1-\alpha _{n}\beta _{n}\sigma} \bigl\Vert y_{n}-y^{*} \bigr\Vert \Vert U_{n}-x_{n+1} \Vert \\ &{}+ \frac{\alpha _{n}\beta _{n}\sigma}{1-\alpha _{n}\beta _{n}\sigma} \bigl\Vert y_{n}-y^{*} \bigr\Vert ^{2}+ \frac{2\alpha _{n}\beta _{n}}{1-\alpha _{n}\beta _{n}\sigma} \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n}-x^{*} \bigr\rangle . \end{aligned}$$
(3.30)

Similarly, as previously stated, we have

$$\begin{aligned} \bigl\Vert y_{n+1}-y^{*} \bigr\Vert ^{2} \leq {}& \biggl(1- \frac{\alpha _{n}\beta _{n}(1-\sigma )}{1-\alpha _{n}\beta _{n}\sigma} \biggr) \bigl\Vert y_{n}-y^{*} \bigr\Vert ^{2}+ \frac{2\alpha _{n}\beta _{n}\sigma}{1-\alpha _{n}\beta _{n}\sigma} \bigl\Vert x_{n}-x^{*} \bigr\Vert \Vert V_{n}-y_{n+1} \Vert \\ &{}+ \frac{\alpha _{n}\beta _{n}\sigma}{1-\alpha _{n}\beta _{n}\sigma} \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+ \frac{2\alpha _{n}\beta _{n}}{1-\alpha _{n}\beta _{n}\sigma} \bigl\langle g_{2} \bigl(x^{*} \bigr)-y^{*},V_{n}-y^{*} \bigr\rangle . \end{aligned}$$
(3.31)

From (3.30) and (3.31), we deduce that

$$\begin{aligned} & \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2}+ \bigl\Vert y_{n+1}-y^{*} \bigr\Vert ^{2} \\ &\quad\leq \biggl( 1- \frac{\alpha _{n}\beta _{n}(1-\sigma )}{1-\alpha _{n}\beta _{n}\sigma} \biggr) \bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+ \bigl\Vert y_{n}-y^{*} \bigr\Vert ^{2} \bigr) \\ &\qquad{}+ \frac{2\alpha _{n}\beta _{n}\sigma}{1-\alpha _{n}\beta _{n}\sigma} \bigl( \bigl\Vert y_{n}-y^{*} \bigr\Vert \Vert U_{n}-x_{n+1} \Vert + \bigl\Vert x_{n}-x^{*} \bigr\Vert \Vert V_{n}-y_{n+1} \Vert \bigr) \\ &\qquad{}+ \frac{\alpha _{n}\beta _{n}\sigma}{1-\alpha _{n}\beta _{n}\sigma} \bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+ \bigl\Vert y_{n}-y^{*} \bigr\Vert ^{2} \bigr) \\ &\qquad{}+ \frac{2\alpha _{n}\beta _{n}}{1-\alpha _{n}\beta _{n}\sigma} \bigl( \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n}-x^{*} \bigr\rangle + \bigl\langle g_{2} \bigl(x^{*} \bigr)-y^{*},V_{n}-y^{*} \bigr\rangle \bigr) \\ &\quad= \biggl(1- \frac{\alpha _{n}\beta _{n}(1-2\sigma )}{1-\alpha _{n}\beta _{n}\sigma} \biggr) \bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+ \bigl\Vert y_{n}-y^{*} \bigr\Vert ^{2} \bigr) \\ &\qquad{}+ \frac{2\alpha _{n}\beta _{n}\sigma}{1-\alpha _{n}\beta _{n}\sigma} \bigl( \bigl\Vert y_{n}-y^{*} \bigr\Vert \Vert U_{n}-x_{n+1} \Vert + \bigl\Vert x_{n}-x^{*} \bigr\Vert \Vert V_{n}-y_{n+1} \Vert \bigr) \\ &\qquad{}+ \frac{2\alpha _{n}\beta _{n}}{1-\alpha _{n}\beta _{n}\sigma} \bigl( \bigl\langle g_{1} \bigl(y^{*} \bigr)-x^{*},U_{n}-x^{*} \bigr\rangle + \bigl\langle g_{2} \bigl(x^{*} \bigr)-y^{*},V_{n}-y^{*} \bigr\rangle \bigr). \end{aligned}$$
(3.32)

By (3.11), (3.24), (3.25), (3.28), (3.29), condition (i), and Lemma 2.4, we have \(\lim_{n\rightarrow \infty}(\|x_{n}-x^{*}\|+\|y_{n}-y^{*} \|)=0\). This implies that the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to \(x^{*}=P_{\Omega _{1}}g_{1}(y^{*})\) and \(y^{*}=P_{\Omega _{2}}g_{2}(x^{*})\), respectively.

This completes the proof. □

As direct consequences of Theorem 3.1, we obtain the following results.

Corollary 3.2

Let C be a nonempty, closed, and convex subset of H. For every \(i=1,2\), let \(f_{i}: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function, let \(A_{i},B_{i}: C\to H\) be \(\delta ^{A}_{i}\)- and \(\delta ^{B}_{i}\)-inverse strongly monotone operators, respectively, with \(\delta _{i}= \min \{\delta ^{A}_{i}, \delta ^{B}_{i} \}\). Assume that \(VI(C,A_{i},f_{i})\cap VI(C,B_{i},f_{i}) \neq \emptyset \), for all \(i=1,2\). Let \(g_{1},g_{2}:H \rightarrow H\) be \(\sigma _{1}\)- and \(\sigma _{2}\)-contraction mappings with \(\sigma _{1},\sigma _{2}\in (0,1)\) and \(\sigma =\max \{\sigma _{1},\sigma _{2}\}\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1},y_{1}\in C\) and

$$\begin{aligned} \textstyle\begin{cases} y_{n+1}=(1-\beta _{n})y_{n}+\beta _{n}P_{C}(\alpha _{n}g_{2}(x_{n})\\ \phantom{y_{n+1}=}{}+(1- \alpha _{n})J_{\gamma f}^{2}(y_{n}-\gamma _{2} (a_{2}A_{2}+(1-a_{2})B_{2})y_{n})) \\ x_{n+1}=(1-\beta _{n})x_{n}+\beta _{n}P_{C}(\alpha _{n}g_{1}(y_{n})\\ \phantom{x_{n+1}=}{}+(1- \alpha _{n})J_{\gamma f}^{1}(x_{n}-\gamma _{1} (a_{1}A_{1}+(1-a_{1})B_{1})x_{n})),\quad \forall n \geq 1, \end{cases}\displaystyle \end{aligned}$$
(3.33)

where \(\{\beta _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(\gamma _{i} \in (0,2\delta _{i})\), \(a_{i}\in (0,1)\), and \(J_{\gamma f}^{i}=(I+\gamma _{i} \partial f_{i} )^{-1}\) is the resolvent operator for all \(i=1,2\). Assume that the following conditions hold:

(i) \(\lim_{n\rightarrow \infty}\alpha _{n}=0\) and \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);

(ii) \(0<\overline{l}\leq \beta _{n} \leq l\) for all \(n\in \mathbb{N}\) and for some \(\overline{l},l>0\);

(iii) \(\sum_{n=1}^{\infty}|\alpha _{n+1}-\alpha _{n}|< \infty \) and \(\sum_{n=1}^{\infty}|\beta _{n+1}-\beta _{n}|< \infty \).

Then, \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to \(x^{*}=P_{VI(C,A_{1},f_{1})\cap VI(C,B_{1},f_{1}) }g_{1}(y^{*})\) and \(y^{*}= P_{VI(C,A_{2},f_{2})\cap VI(C,B_{2},f_{2}) }g_{2}(x^{*})\), respectively.

Proof

Setting \(T_{1} \equiv T_{2} \equiv I\) in Theorem 3.1, we obtain the desired result. □

Corollary 3.3

Let C be a nonempty, closed, and convex subset of H. Let \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function. Let \(A,B: C\to H\) be \(\delta ^{A}\)- and \(\delta ^{B}\)-inverse strongly monotone operators, respectively, with \(\delta = \min \{\delta ^{A}, \delta ^{B} \}\), and let \(T: C\to C\) be a nonexpansive mapping. Assume that \(\Omega =\mathrm{Fix}(T)\cap VI(C,A,f)\cap VI(C,B,f) \neq \emptyset \). Let \(g:H \rightarrow H\) be a σ-contraction mapping with \(\sigma \in (0,1)\). Let the sequence \(\{x_{n}\}\) be generated by \(x_{1} \in C\) and

$$\begin{aligned} \textstyle\begin{cases} z_{n}=bx_{n}+(1-b)Tx_{n} \\ x_{n+1}=(1-\beta _{n})z_{n}+\beta _{n}P_{C}(\alpha _{n}g(x_{n})\\ \phantom{x_{n+1}=}{}+(1- \alpha _{n})J_{\gamma f}(x_{n}-\gamma (aA+(1-a)B)x_{n})), \quad\forall n \geq 1, \end{cases}\displaystyle \end{aligned}$$
(3.34)

where \(\{\beta _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(\gamma \in (0,2\delta )\), \(a,b \in (0,1)\), and \(J_{\gamma f}=(I+\gamma \partial f )^{-1}\) is the resolvent operator. Assume that the following conditions hold:

(i) \(\lim_{n\rightarrow \infty}\alpha _{n}=0\) and \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);

(ii) \(0<\overline{l}\leq \beta _{n} \leq l\) for all \(n\in \mathbb{N}\) and for some \(\overline{l},l>0\);

(iii) \(\sum_{n=1}^{\infty}|\alpha _{n+1}-\alpha _{n}|< \infty \) and \(\sum_{n=1}^{\infty}|\beta _{n+1}-\beta _{n}|< \infty \).

Then, \(\{x_{n}\}\) converges strongly to \(x^{*}=P_{\Omega}g(x^{*})\).

Proof

Setting \(g\equiv g_{1} \equiv g_{2}\), \(f \equiv f_{1} \equiv f_{2}\), \(T \equiv T_{1} \equiv T_{2}\), \(A \equiv A_{1} \equiv A_{2}\), \(B \equiv B_{1} \equiv B_{2}\), \(w_{n}=z_{n}\), and \(x_{n}=y_{n}\) in Theorem 3.1, we obtain the desired result. □

Remark 3.4

We remark here that Corollary 3.3 modifies Algorithm 3.2 in [6] in the following aspects:

1. The assumption on the operator is relaxed from a strongly monotone and Lipschitz continuous operator to two inverse strongly monotone operators.

2. A nonexpansive mapping and a contraction mapping are incorporated into the iterative algorithm.
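To make scheme (3.34) easy to experiment with, we include a minimal Python sketch below. It is only an illustration under our own naming conventions, not a reference implementation: the projection proj_C, the resolvent J_gamma_f, the operators A and B, the nonexpansive mapping T, and the contraction g are user-supplied callables, and the parameter sequences are the ones used later in Sect. 5.

```python
import numpy as np

def iterate_334(x1, proj_C, J_gamma_f, A, B, T, g, gamma,
                a=0.5, b=0.5, tol=1e-5, max_iter=10_000):
    """Run scheme (3.34) with alpha_n = 1/(3n) and beta_n = (n+1)/(6n),
    stopping when ||x_{n+1} - x_n|| < tol (the criterion of Sect. 5)."""
    x = np.asarray(x1, dtype=float)
    for n in range(1, max_iter + 1):
        alpha_n, beta_n = 1 / (3 * n), (n + 1) / (6 * n)
        z = b * x + (1 - b) * T(x)                      # averaging with T
        inner = J_gamma_f(x - gamma * (a * A(x) + (1 - a) * B(x)))
        x_new = (1 - beta_n) * z + beta_n * proj_C(alpha_n * g(x)
                                                   + (1 - alpha_n) * inner)
        if np.linalg.norm(x_new - x) < tol:             # stopping criterion
            return x_new, n
        x = x_new
    return x, max_iter
```

Any of the corollaries above can be tested by supplying the corresponding mappings; the concrete instances of Sect. 5 below fit this template directly.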

4 Applications

In this section, we reduce our main problem to the split-feasibility problem and the constrained convex-minimization problem.

4.1 The split-feasibility problem

Let C and Q be nonempty, closed, and convex subsets of Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. The split-feasibility problem (SFP) is to find a point

$$\begin{aligned} x\in C \quad\text{such that } Ax\in Q, \end{aligned}$$
(4.1)

where \(A: H_{1}\rightarrow H_{2}\) is a bounded linear operator. The set of all solutions of the (SFP) is denoted by \(\Gamma = \{x \in C: Ax \in Q\}\). The split-feasibility problem is the first instance of the split-inverse problem and was introduced by Censor and Elfving [42] in Euclidean spaces. Many mathematical problems, such as the constrained least-squares problem, the linear split-feasibility problem, and the linear programming problem, can be cast in the split-feasibility framework, which also arises in real-world applications, for example, in signal processing, image recovery, intensity-modulated radiation therapy, and pattern recognition; see [43–46]. Consequently, the split-feasibility problem has been widely studied by many authors; see [47–52] and the references therein.

Proposition 4.1

([48])

Given \(x^{*}\in H_{1}\), the following statements are equivalent.

(i) \(x^{*}\) solves the (SFP), i.e., \(x^{*}\in \Gamma \);

(ii) \(P_{C}(I-\lambda A^{*}(I-P_{Q})A)x^{*}=x^{*}\), where \(\lambda >0\) and \(A^{*}\) is the adjoint of A;

(iii) \(x^{*}\) solves the variational inequality problem of finding \(x^{*}\in C\) such that

$$\begin{aligned} \bigl\langle \nabla \mathcal{G} \bigl(x^{*} \bigr), x-x^{*} \bigr\rangle \geq 0,\quad \forall x\in C, \end{aligned}$$
(4.2)

where \(\nabla \mathcal{G}=A^{*}(I-P_{Q})A\).
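Proposition 4.1(ii) suggests a simple numerical sanity check: iterate the operator \(P_{C}(I-\lambda A^{*}(I-P_{Q})A)\) and verify that the limit is feasible for the (SFP). The snippet below is a toy illustration of ours (not taken from [48]): C and Q are Euclidean unit balls, A is a random matrix, and \(\lambda < 2/\|A\|^{2}\), which makes the map averaged, so Picard iteration converges.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))             # bounded linear operator H1 -> H2

def proj_ball(u, r=1.0):                    # projection onto {u : ||u|| <= r}
    nu = np.linalg.norm(u)
    return u if nu <= r else r * u / nu

lam = 1.0 / np.linalg.norm(A, 2) ** 2       # lam in (0, 2/||A||^2)

def fixed_point_map(x):                     # x -> P_C(x - lam A^T (I-P_Q) A x)
    return proj_ball(x - lam * A.T @ (A @ x - proj_ball(A @ x)))

x = rng.standard_normal(5)
for _ in range(2000):                       # Picard iteration on the averaged map
    x = fixed_point_map(x)

# the limit lies in C, and its image under A is (numerically) in Q
print(np.linalg.norm(x), np.linalg.norm(A @ x - proj_ball(A @ x)))
```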

If C is a closed and convex subset of H and f is the indicator function of C, then it is well known that \(J_{\gamma f}=P_{C}\), the metric projection of H onto the closed convex set C. Putting \(A_{i}=B_{i}\) for all \(i=1,2\) in Theorem 3.1, we therefore obtain the following result.

Theorem 4.2

Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces and let C, Q be nonempty, closed, and convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Let \(A_{1},A_{2}:H_{1} \to H_{2}\) be bounded linear operators with adjoints \(A_{1}^{*}\) and \(A_{2}^{*}\), and let \(L_{1}\) and \(L_{2}\) be the spectral radii of \(A_{1}^{*}A_{1}\) and \(A_{2}^{*}A_{2}\), respectively. Let \(T_{i}: C\to C\) be nonexpansive mappings and let \(\Gamma _{i}\) denote the solution set of the (SFP) associated with \(A_{i}\). Assume that \(\Xi _{i}=\mathrm{Fix}(T_{i})\cap \Gamma _{i} \neq \emptyset \), for all \(i=1,2\). Let \(g_{1},g_{2}:H_{1} \rightarrow H_{1}\) be \(\sigma _{1}\)- and \(\sigma _{2}\)-contraction mappings with \(\sigma _{1},\sigma _{2}\in (0,1)\) and \(\sigma =\max \{\sigma _{1},\sigma _{2}\}\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1},y_{1}\in C\) and

$$\begin{aligned} \textstyle\begin{cases} w_{n}=b_{2}y_{n}+(1-b_{2})T_{2}y_{n} \\ y_{n+1}=(1-\beta _{n})w_{n}+\beta _{n}P_{C}(\alpha _{n}g_{2}(x_{n})+(1- \alpha _{n})P_{C}(I-\gamma _{2} \nabla \mathcal{G}_{2})y_{n}) \\ z_{n}=b_{1}x_{n}+(1-b_{1})T_{1}x_{n} \\ x_{n+1}=(1-\beta _{n})z_{n}+\beta _{n}P_{C}(\alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})P_{C}(I-\gamma _{1} \nabla \mathcal{G}_{1})x_{n}),\quad \forall n \geq 1, \end{cases}\displaystyle \end{aligned}$$
(4.3)

where \(\nabla \mathcal{G}_{i}=A_{i}^{*}(I-P_{Q})A_{i}\), \(\gamma _{i} \in (0,\frac{2}{L_{i}})\), \(\{\beta _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(b_{i} \in (0,1)\) for all \(i=1,2\). Assume that the following conditions hold:

(i) \(\lim_{n\rightarrow \infty}\alpha _{n}=0\) and \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);

(ii) \(0<\overline{l}\leq \beta _{n} \leq l\) for all \(n\in \mathbb{N}\) and for some \(\overline{l},l>0\);

(iii) \(\sum_{n=1}^{\infty}|\alpha _{n+1}-\alpha _{n}|< \infty \) and \(\sum_{n=1}^{\infty}|\beta _{n+1}-\beta _{n}|< \infty \).

Then, \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to \(x^{*}=P_{\Xi _{1}}g_{1}(y^{*})\) and \(y^{*}=P_{\Xi _{2}}g_{2}(x^{*})\), respectively.

Proof

Let \(x,y\in C\) and \(\nabla \mathcal{G}_{i}=A_{i}^{*}(I-P_{Q})A_{i}\), for all \(i=1,2\). First, we show that \(\nabla \mathcal{G}_{i}\) is \(\frac {1}{L_{i}}\)-inverse strongly monotone for all \(i=1,2\).

Consider

$$\begin{aligned} \bigl\Vert \nabla \mathcal{G}_{i}(x)-\nabla \mathcal{G}_{i}(y) \bigr\Vert ^{2} ={}& \bigl\Vert A_{i}^{*}(I-P_{Q})A_{i}x-A_{i}^{*}(I-P_{Q})A_{i}y \bigr\Vert ^{2} \\ \leq {}& L_{i} \bigl\Vert (I-P_{Q})A_{i}x-(I-P_{Q})A_{i}y \bigr\Vert ^{2}. \end{aligned}$$
(4.4)

From the properties of \(P_{Q}\), we have

$$\begin{aligned} & \bigl\Vert (I-P_{Q}) A_{i}x -(I-P_{Q})A_{i}y \bigr\Vert ^{2} \\ &\quad= \bigl\langle (I-P_{Q})A_{i}x-(I-P_{Q})A_{i}y,(I-P_{Q})A_{i}x-(I-P_{Q})A_{i}y \bigr\rangle \\ &\quad= \bigl\langle (I-P_{Q})A_{i}x-(I-P_{Q})A_{i}y,A_{i}x-A_{i}y \bigr\rangle \\ &\qquad{}- \bigl\langle (I-P_{Q})A_{i}x-(I-P_{Q})A_{i}y,P_{Q}A_{i}x-P_{Q}A_{i}y \bigr\rangle \\ &\quad= \bigl\langle A_{i}^{*}(I-P_{Q})A_{i}x-A_{i}^{*}(I-P_{Q})A_{i}y,x-y \bigr\rangle \\ &\qquad{}- \bigl\langle (I-P_{Q})A_{i}x-(I-P_{Q})A_{i}y,P_{Q}A_{i}x-P_{Q}A_{i}y \bigr\rangle \\ &\quad= \bigl\langle A_{i}^{*}(I-P_{Q})A_{i}x-A_{i}^{*}(I-P_{Q})A_{i}y,x-y \bigr\rangle \\ &\qquad{}- \bigl\langle (I-P_{Q})A_{i}x,P_{Q}A_{i}x-P_{Q}A_{i}y \bigr\rangle \\ &\qquad{}+ \bigl\langle (I-P_{Q})A_{i}y,P_{Q}A_{i}x-P_{Q}A_{i}y \bigr\rangle \\ &\quad\leq \bigl\langle A_{i}^{*}(I-P_{Q})A_{i}x-A_{i}^{*}(I-P_{Q})A_{i}y,x-y \bigr\rangle . \end{aligned}$$
(4.5)

Substituting (4.5) into (4.4), we have

$$\begin{aligned} \bigl\Vert \nabla \mathcal{G}_{i}(x)-\nabla \mathcal{G}_{i}(y) \bigr\Vert ^{2} \leq{} & L_{i} \bigl\langle A_{i}^{*}(I-P_{Q})A_{i}x-A_{i}^{*}(I-P_{Q})A_{i}y,x-y \bigr\rangle \\ ={}& L_{i} \bigl\langle \nabla \mathcal{G}_{i}(x)-\nabla \mathcal{G}_{i}(y),x-y \bigr\rangle . \end{aligned}$$

It follows that

$$\begin{aligned} \bigl\langle \nabla \mathcal{G}_{i}(x)-\nabla \mathcal{G}_{i}(y),x-y \bigr\rangle \geq \frac{1}{L_{i}} \bigl\Vert \nabla \mathcal{G}_{i}(x)- \nabla \mathcal{G}_{i}(y) \bigr\Vert ^{2}. \end{aligned}$$

Then, \(\nabla \mathcal{G}_{i}\) is \(\frac{1}{L_{i}}\)-inverse strongly monotone, for all \(i=1,2\). Hence, we can conclude Theorem 4.2 from Proposition 4.1 and Theorem 3.1. □
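The bound just proved is easy to test numerically. In the toy script below (our own setup, with Q a Euclidean unit ball and A a random matrix), the inequality \(\langle \nabla \mathcal{G}(x)-\nabla \mathcal{G}(y),x-y\rangle \geq \frac{1}{L}\Vert \nabla \mathcal{G}(x)-\nabla \mathcal{G}(y)\Vert ^{2}\) is checked on random pairs, with L the spectral radius of \(A^{*}A\).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))
L = np.linalg.norm(A, 2) ** 2                 # spectral radius of A^T A

def proj_Q(z, r=1.0):                         # projection onto the unit ball
    nz = np.linalg.norm(z)
    return z if nz <= r else r * z / nz

grad_G = lambda x: A.T @ (A @ x - proj_Q(A @ x))   # A^*(I - P_Q)A

for _ in range(1000):
    x, y = rng.standard_normal(6), rng.standard_normal(6)
    d = grad_G(x) - grad_G(y)
    assert d @ (x - y) >= d @ d / L - 1e-9    # (1/L)-inverse strong monotonicity
print("checked on 1000 random pairs")
```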

4.2 The constrained convex-minimization problem

Let C be a nonempty, closed, and convex subset of H. Consider the constrained convex-minimization problem: find \(x^{*}\in C\) such that

$$\begin{aligned} \mathcal{Q} \bigl(x^{*} \bigr)= \min _{x\in C} \mathcal{Q}(x), \end{aligned}$$
(4.6)

where \(\mathcal{Q}:H\to \mathbb{R}\) is a continuously differentiable function. Assume that (4.6) is consistent (i.e., it has a solution) and let Ψ denote its solution set. It is known that the gradient projection algorithm (GPA) plays an important role in solving constrained convex-minimization problems. It is well known that a necessary condition of optimality for a point \(x^{*} \in C\) to be a solution of the minimization problem (4.6) is that \(x^{*}\) solves the variational inequality:

$$\begin{aligned} x^{*}\in C, \bigl\langle \nabla \mathcal{Q} \bigl(x^{*} \bigr), x-x^{*} \bigr\rangle \geq 0, \quad\forall x\in C. \end{aligned}$$
(4.7)

That is, \(\Psi = VI(C,\nabla \mathcal{Q})\) whenever \(\Psi \neq \emptyset \).
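To illustrate this relation, consider the toy problem (ours, not from the paper) of minimizing \(\mathcal{Q}(x)=\frac{1}{2}\Vert x-c\Vert ^{2}\) over a box. Since \(\nabla \mathcal{Q}\) is 1-inverse strongly monotone, the gradient projection iteration \(x\mapsto P_{C}(x-\gamma \nabla \mathcal{Q}(x))\) with \(\gamma \in (0,2)\) converges to the unique solution, here \(P_{C}(c)\):

```python
import numpy as np

c = np.array([2.0, -3.0, 0.5])
lo, hi = -np.ones(3), np.ones(3)            # C = [-1, 1]^3

grad_Q = lambda x: x - c                    # Q(x) = 0.5 ||x - c||^2, L_Q = 1
proj_C = lambda x: np.clip(x, lo, hi)       # projection onto the box

x, gamma = np.zeros(3), 1.0                 # gamma in (0, 2/L_Q)
for _ in range(100):
    x = proj_C(x - gamma * grad_Q(x))       # gradient projection step

print(x)                                    # -> [ 1. -1.  0.5] = P_C(c)
```

The following theorem is derived from these results.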

Theorem 4.3

Let C be a nonempty, closed, and convex subset of H. For every \(i=1,2\), let \(\mathcal{Q}_{i}:H\to \mathbb{R}\) be a continuously differentiable function whose gradient \(\nabla \mathcal{Q}_{i}\) is \(\frac{1}{L_{\mathcal{Q}_{i}}}\)-inverse strongly monotone, and let \(\Psi _{i}\) denote the solution set of (4.6) with \(\mathcal{Q}=\mathcal{Q}_{i}\). Let \(T_{i}: C\to C\) be nonexpansive mappings. Assume that \(\Theta _{i}=\mathrm{Fix}(T_{i})\cap \Psi _{i}\neq \emptyset \), for all \(i=1,2\). Let \(g_{1},g_{2}:H \rightarrow H\) be \(\sigma _{1}\)- and \(\sigma _{2}\)-contraction mappings with \(\sigma _{1},\sigma _{2}\in (0,1)\) and \(\sigma =\max \{\sigma _{1},\sigma _{2}\}\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1},y_{1}\in C\) and

$$\begin{aligned} \textstyle\begin{cases} w_{n}=b_{2}y_{n}+(1-b_{2})T_{2}y_{n} \\ y_{n+1}=(1-\beta _{n})w_{n}+\beta _{n}P_{C}(\alpha _{n}g_{2}(x_{n})+(1- \alpha _{n})P_{C}(I-\gamma _{2}\nabla \mathcal{Q}_{2})y_{n}) \\ z_{n}=b_{1}x_{n}+(1-b_{1})T_{1}x_{n} \\ x_{n+1}=(1-\beta _{n})z_{n}+\beta _{n}P_{C}(\alpha _{n}g_{1}(y_{n})+(1- \alpha _{n})P_{C}(I-\gamma _{1}\nabla \mathcal{Q}_{1})x_{n}),\quad \forall n \geq 1, \end{cases}\displaystyle \end{aligned}$$
(4.8)

where \(\{\beta _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(\gamma _{i} \in (0,\frac{2}{L_{\mathcal{Q}_{i}}})\), \(b_{i} \in (0,1)\) for all \(i=1,2\). Assume that the following conditions hold:

(i) \(\lim_{n\rightarrow \infty}\alpha _{n}=0\) and \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);

(ii) \(0<\overline{l}\leq \beta _{n} \leq l\) for all \(n\in \mathbb{N}\) and for some \(\overline{l},l>0\);

(iii) \(\sum_{n=1}^{\infty}|\alpha _{n+1}-\alpha _{n}|< \infty \) and \(\sum_{n=1}^{\infty}|\beta _{n+1}-\beta _{n}|< \infty \).

Then, \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to \(x^{*}=P_{\Theta _{1}}g_{1}(y^{*})\) and \(y^{*}=P_{\Theta _{2}}g_{2}(x^{*})\), respectively.

Proof

The conclusion follows by arguing as in the proof of Theorem 4.2. □

5 Numerical experiments

In this section, we give examples to support our main theorem. In the following examples, we choose \(\alpha _{n}=\frac {1}{3n}\), \(\beta _{n}=\frac {n+1}{6n}\), \(a_{1}=0.50\), \(a_{2}= 0.25\), \(b_{1}=0.40\), and \(b_{2}=0.45\). The stopping criterion used for our computation is \(\|x_{n+1}-x_{n} \|< 10^{-5}\) and \(\|y_{n+1}-y_{n}\|< 10^{-5}\).

Example 5.1

Let \(\mathbb{R}\) be the set of real numbers and let \(C=[1,10]\). Then, we have \(P_{C}x=\max \{\min \{x,10\},1 \}\), for all \(x\in \mathbb{R}\). For every \(i=1,2\), let \(A_{i},B_{i}:C \to \mathbb{R}\) be defined by \(A_{1}(x)= \frac {3x}{5}-\frac {3}{5}\), \(A_{2}(x)= \frac {2x}{5}-\frac {2}{5}\), \(B_{1}(x)= \frac {2x}{3}-\frac {2}{3}\), and \(B_{2}(x)= \frac {x}{6}-\frac {1}{6}\), for all \(x\in C\). For every \(i=1,2\), let \(f_{i}:\mathbb{R}\to \mathbb{R}\) be defined by \(f_{1}(x)=x^{2}\) and \(f_{2}(x)=2x^{2}\), for all \(x\in \mathbb{R}\). Then, we have \(J_{\gamma f}^{1}, J_{\gamma f}^{2}:\mathbb{R} \to \mathbb{R}\) defined by \(J_{\gamma f}^{1}(x)=\frac {x}{2}\) and \(J_{\gamma f}^{2}(x)=\frac {5x}{9}\) (corresponding to \(\gamma _{1}=0.5\) and \(\gamma _{2}=0.2\)), respectively. For every \(i=1,2\), let \(T_{i}:C \to C\) be defined by \(T_{1}(x)= \frac {x}{2}+\frac {1}{2}\) and \(T_{2}(x)= \frac {x}{3}+\frac {2}{3}\), for all \(x\in C\). For every \(i=1,2\), let \(g_{i}:\mathbb{R} \to \mathbb{R}\) be defined by \(g_{1}(x)= \frac {x}{5}\) and \(g_{2}(x)= \frac {x}{4}\), for all \(x\in \mathbb{R}\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1}\), \(y_{1}\in C\) and

$$\begin{aligned} \textstyle\begin{cases} w_{n}=0.45y_{n}+0.55T_{2}y_{n} \\ y_{n+1}= (1-\frac {n+1}{6n} )w_{n}+\frac {n+1}{6n}P_{C} (\frac {1}{3n}g_{2}(x_{n})+ (1-\frac {1}{3n} )J_{ \gamma f}^{2}(y_{n}-0.2(0.25A_{2}+0.75B_{2})y_{n}) ) \\ z_{n}=0.4x_{n}+0.6T_{1}x_{n} \\ x_{n+1}= (1-\frac {n+1}{6n} )z_{n}+\frac {n+1}{6n}P_{C} (\frac {1}{3n}g_{1}(y_{n})+ (1-\frac {1}{3n} )J_{ \gamma f}^{1}(x_{n}-0.5 (0.5A_{1}+0.5B_{1})x_{n}) ). \end{cases}\displaystyle \end{aligned}$$

According to the definitions of \(A_{i}, B_{i}, T_{i}, f_{i}\), for all \(i=1,2\), we obtain \(1\in \mathrm{Fix}(T_{i})\cap VI(C, A_{i},f_{i})\cap VI(C,B_{i},f_{i})\). From Theorem 3.1, we can conclude that the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to 1.
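The scheme in Example 5.1 is simple enough to transcribe directly; the following Python sketch (our own transcription, using the parameter values fixed at the beginning of this section and the initial values of Table 1) should reproduce the behavior reported there.

```python
P_C = lambda x: max(min(x, 10.0), 1.0)        # projection onto C = [1, 10]
A1, A2 = lambda x: 0.6*x - 0.6, lambda x: 0.4*x - 0.4
B1, B2 = lambda x: (2*x - 2)/3, lambda x: (x - 1)/6
J1, J2 = lambda x: x/2, lambda x: 5*x/9       # resolvents of f1, f2
T1, T2 = lambda x: x/2 + 0.5, lambda x: x/3 + 2/3
g1, g2 = lambda x: x/5, lambda x: x/4

x, y, n = 10.0, 8.0, 1                        # initial values of Table 1
while True:
    a_n, b_n = 1/(3*n), (n + 1)/(6*n)
    w = 0.45*y + 0.55*T2(y)
    y_next = (1 - b_n)*w + b_n*P_C(a_n*g2(x)
              + (1 - a_n)*J2(y - 0.2*(0.25*A2(y) + 0.75*B2(y))))
    z = 0.4*x + 0.6*T1(x)
    x_next = (1 - b_n)*z + b_n*P_C(a_n*g1(y)
              + (1 - a_n)*J1(x - 0.5*(0.5*A1(x) + 0.5*B1(x))))
    if abs(x_next - x) < 1e-5 and abs(y_next - y) < 1e-5:
        break                                 # stopping criterion of Sect. 5
    x, y, n = x_next, y_next, n + 1

print(n, x_next, y_next)                      # both iterates approach 1
```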

The numerical and graphical results of Example 5.1 are shown in Table 1 and Figs. 1 and 2.

Figure 1. The convergence behavior of \(\{x_{n}\}\) and \(\{y_{n}\}\) with \(x_{1}=10\) and \(y_{1}=8\) in Example 5.1.

Figure 2. Error plot of \(\|x_{n+1}-x_{n} \|\) and \(\|y_{n+1}-y_{n}\|\) in Example 5.1 (the y-axis is on a log scale).

Table 1 The values of \(\{x_{n}\}\) and \(\{y_{n}\}\) with initial values \(x_{1}=10\), \(y_{1}=8\) in Example 5.1

Next, we consider the problem in the infinite-dimensional Hilbert space.

Example 5.2

Let \(H=L_{2}([0,1])\) with the inner product defined by

$$\begin{aligned} \langle x,y\rangle:= \int _{0}^{1}x(t)y(t)\,dt, \quad\forall x,y\in H \end{aligned}$$

and the induced norm by

$$\begin{aligned} \Vert x \Vert := \biggl( \int _{0}^{1} \bigl\vert x(t) \bigr\vert ^{2}\,dt \biggr)^{\frac{1}{2}}, \quad\forall x\in H. \end{aligned}$$

Let \(C:= \{x\in L_{2}([0,1]): \|x\|\leq 1 \}\) be the unit ball. Then, we have

$$\begin{aligned} P_{C} \bigl(x(t) \bigr) = \textstyle\begin{cases} x(t) &\text{if } \Vert x \Vert \leq 1, \\ \frac {x(t)}{ \Vert x \Vert } &\text{if } \Vert x \Vert >1. \end{cases}\displaystyle \end{aligned}$$
(5.1)

For every \(i=1,2\), let \(A_{i},B_{i}:C \to H\) be defined by \(A_{1}(x)(t)= x(t)\), \(A_{2}(x)(t)= \frac {3x(t)}{2}\), \(B_{1}(x)(t)= 2x(t)\), and \(B_{2}(x)(t)= \frac {5x(t)}{3}\), for all \(t \in [0,1]\), \(x\in C\). For every \(i=1,2\), let \(f_{i}:H\to \mathbb{R}\) be defined by \(f_{1}(x)=\frac {3}{2} \Vert x \Vert ^{2}\) and \(f_{2}(x)=\frac {1}{2} \Vert x \Vert ^{2}\), for all \(x\in H\). Then, we have \(J_{\gamma f}^{1}, J_{\gamma f}^{2}:H \to H\) defined by \(J_{\gamma f}^{1}(x)=\frac {4x}{7}\) and \(J_{\gamma f}^{2}(x)=\frac {5x}{6}\) (corresponding to \(\gamma _{1}=0.25\) and \(\gamma _{2}=0.2\)), respectively. For every \(i=1,2\), let \(T_{i}:C \to C\) be defined by \(T_{1}(x)= \frac {x}{2}\) and \(T_{2}(x)= \frac {x}{3}\), for all \(x\in C\). For every \(i=1,2\), let \(g_{i}:H \to H\) be defined by \(g_{1}(x)= \frac {x}{9}\) and \(g_{2}(x)= \frac {x}{16}\), for all \(x\in H\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1}\), \(y_{1}\in C\) and

$$\begin{aligned} \textstyle\begin{cases} w_{n}=0.45y_{n}+0.55T_{2}y_{n}, \\ y_{n+1}= (1-\frac {n+1}{6n} )w_{n}\\ \phantom{y_{n+1}=}{}+\frac {n+1}{6n}P_{C} (\frac {1}{3n}g_{2}(x_{n})+ (1-\frac {1}{3n} )J_{ \gamma f}^{2}(y_{n}-0.2(0.25A_{2}+0.75B_{2})y_{n}) ), \\ z_{n}=0.4x_{n}+0.6T_{1}x_{n}, \\ x_{n+1}= (1-\frac {n+1}{6n} )z_{n}\\ \phantom{x_{n+1}=}{}+\frac {n+1}{6n}P_{C} (\frac {1}{3n}g_{1}(y_{n})+ (1-\frac {1}{3n} )J_{ \gamma f}^{1}(x_{n}-0.25(0.5A_{1}+0.5B_{1})x_{n}) ). \end{cases}\displaystyle \end{aligned}$$
(5.2)

According to the definitions of \(A_{i}, B_{i}, T_{i}, f_{i}\), for all \(i=1,2\), the solution of this problem is \(x(t)=\mathbf{0}\), where \(\mathbf{0}\in \mathrm{Fix}(T_{i})\cap VI(C,A_{i},f_{i})\cap VI(C,B_{i},f_{i})\). From Theorem 3.1, we can conclude that the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to \(x(t)=\mathbf{0}\).

We test the algorithms for three different starting points and use \(\|x_{n+1}-x_{n} \|< 10^{-5}\) and \(\|y_{n+1}-y_{n}\|< 10^{-5}\) as the stopping criterion; a discretized sketch for Case 1 is given after the list of cases.

Case 1: \(x_{1}=0.2t\) and \(y_{1}=0.8t\);

Case 2: \(x_{1}=e^{-2t}\) and \(y_{1}=t^{2}\);

Case 3: \(x_{1}=\sin (t)\) and \(y_{1}=\cos (t)\).
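Since \(L_{2}([0,1])\) cannot be represented exactly on a computer, a natural way to reproduce these cases is to discretize \([0,1]\) and approximate the \(L_{2}\) norm by a Riemann sum. The sketch below (our own discretization, shown for Case 1) mirrors the loop of Example 5.1 with the mappings of this example and \(\gamma _{1}=0.25\).

```python
import numpy as np

t = np.linspace(0, 1, 1001)
h = t[1] - t[0]
l2 = lambda u: np.sqrt(h * np.sum(u**2))      # Riemann-sum L2([0,1]) norm

def P_C(u):                                   # projection onto the unit ball
    nu = l2(u)
    return u if nu <= 1 else u / nu

A1, A2 = lambda u: u, lambda u: 1.5*u
B1, B2 = lambda u: 2*u, lambda u: 5*u/3
J1, J2 = lambda u: 4*u/7, lambda u: 5*u/6     # resolvents of f1, f2
T1, T2 = lambda u: u/2, lambda u: u/3
g1, g2 = lambda u: u/9, lambda u: u/16

x, y, n = 0.2*t, 0.8*t, 1                     # Case 1 starting points
while True:
    a_n, b_n = 1/(3*n), (n + 1)/(6*n)
    w = 0.45*y + 0.55*T2(y)
    y_next = (1 - b_n)*w + b_n*P_C(a_n*g2(x)
              + (1 - a_n)*J2(y - 0.2*(0.25*A2(y) + 0.75*B2(y))))
    z = 0.4*x + 0.6*T1(x)
    x_next = (1 - b_n)*z + b_n*P_C(a_n*g1(y)
              + (1 - a_n)*J1(x - 0.25*(0.5*A1(x) + 0.5*B1(x))))
    if l2(x_next - x) < 1e-5 and l2(y_next - y) < 1e-5:
        break
    x, y, n = x_next, y_next, n + 1

print(n, l2(x_next), l2(y_next))              # norms decay toward 0
```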

The computational and graphical results of Example 5.2 are shown in Tables 2, 3, and 4 and Figs. 3, 4, 5, and 6.

Figure 3. The convergence behavior of \(\{x_{n}(t)\}\) and \(\{y_{n}(t)\}\) with \(x_{1}=0.2t\) and \(y_{1}=0.8t\) (Case 1) in Example 5.2 (the y-axis is on a log scale).

Figure 4. The convergence behavior of \(\{x_{n}(t)\}\) and \(\{y_{n}(t)\}\) with \(x_{1}=e^{-2t}\) and \(y_{1}=t^{2}\) (Case 2) in Example 5.2 (the y-axis is on a log scale).

Figure 5. The convergence behavior of \(\{x_{n}(t)\}\) and \(\{y_{n}(t)\}\) with \(x_{1}=\sin (t)\) and \(y_{1}=\cos (t)\) (Case 3) in Example 5.2 (the y-axis is on a log scale).

Figure 6. Error plot of \(\|x_{n+1}-x_{n} \|\) and \(\|y_{n+1}-y_{n}\|\) in Example 5.2 (the y-axis is on a log scale).

Table 2 Computational results of Case 1 for Example 5.2
Table 3 Computational results of Case 2 for Example 5.2
Table 4 Computational results of Case 3 for Example 5.2

We next give a comparison between Algorithm (5.3), which is derived from Corollary 3.3, and Algorithm 3.2 in [6].

Example 5.3

In this example, we use the same mappings and parameters as in Example 5.2. Putting \(\{x_{n}\}=\{y_{n}\}\) and \(\{w_{n}\}=\{z_{n}\}\), the mappings \(A_{1}\equiv A_{2}\equiv B_{1} \equiv B_{2}\), \(f_{1} \equiv f_{2}\), \(g_{1} \equiv g_{2}\), and \(T_{1} \equiv T_{2}\equiv I\), we can rewrite (3.34) as follows:

$$\begin{aligned} x_{n+1}= \biggl(1-\frac {n+1}{6n} \biggr)x_{n}+ \frac {n+1}{6n}P_{C} \biggl( \frac {1}{3n}g_{1}(x_{n})+ \biggl(1- \frac {1}{3n} \biggr)J_{\gamma f}^{1}(x_{n}-0.25 A_{1}x_{n}) \biggr). \end{aligned}$$
(5.3)

Also, we modify Algorithm 3.2 in [6] by taking \(A\equiv A_{1}\), which is an inverse strongly monotone operator, and choosing the same mappings and parameters as in Example 5.2. Hence, it can be rewritten as follows:

$$\begin{aligned} x_{n+1}=\frac {1}{3n}x_{n}+ \biggl(1- \frac {1}{3n} \biggr)J_{\gamma f}^{1}(x_{n}-0.25A_{1}x_{n}). \end{aligned}$$
(5.4)

The comparison of Algorithm (5.3) and Algorithm (5.4), which is modified from Algorithm 3.2 in [6], in terms of the CPU time and the number of iterations with different starting points, is reported in Table 5.
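For readers who wish to rerun this comparison, the harness below (our own script, using the discretization of Example 5.2 and the starting point \(x_{1}=0.2t\)) iterates (5.3) and (5.4) under the common stopping rule and reports the iteration counts. The figures in Table 5 come from the authors' implementation, so exact counts may differ with other discretizations or settings.

```python
import numpy as np

t = np.linspace(0, 1, 1001); h = t[1] - t[0]
l2  = lambda u: np.sqrt(h * np.sum(u**2))     # Riemann-sum L2 norm
P_C = lambda u: u if l2(u) <= 1 else u / l2(u)
A1, J1, g1 = (lambda u: u), (lambda u: 4*u/7), (lambda u: u/9)

def run(step, x1, tol=1e-5, max_iter=100_000):
    """Iterate until ||x_{n+1} - x_n|| < tol; return the iteration count."""
    x, n = x1.copy(), 1
    while n < max_iter:
        x_next = step(x, n)
        if l2(x_next - x) < tol:
            return n
        x, n = x_next, n + 1
    return n

step_53 = lambda x, n: ((1 - (n+1)/(6*n))*x + ((n+1)/(6*n))
           * P_C(g1(x)/(3*n) + (1 - 1/(3*n))*J1(x - 0.25*A1(x))))   # (5.3)
step_54 = lambda x, n: x/(3*n) + (1 - 1/(3*n))*J1(x - 0.25*A1(x))   # (5.4)

x1 = 0.2*t
print("(5.3):", run(step_53, x1), "iterations; (5.4):", run(step_54, x1))
```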

Table 5 Numerical values of Algorithm (5.3) and Algorithm (5.4)

Remark 5.4

From our numerical experiments in Examples 5.1, 5.2, and 5.3, we make the following observations.

1. Table 1 and Figs. 1 and 2 show that \(\{x_{n}\}\) and \(\{y_{n}\}\) converge to 1, where \(1\in \mathrm{Fix}(T_{i})\cap VI(C,A_{i},f_{i})\cap VI(C,B_{i},f_{i})\), for all \(i=1,2\). The convergence of \(\{x_{n}\}\) and \(\{y_{n}\}\) in Example 5.1 is guaranteed by Theorem 3.1.

2. Tables 2, 3, and 4 and Figs. 3, 4, 5, and 6 show that \(\{x_{n}\}\) and \(\{y_{n}\}\) converge to \(x(t)=\mathbf{0}\), where \(\mathbf{0}\in \mathrm{Fix}(T_{i})\cap VI(C,A_{i},f_{i})\cap VI(C,B_{i},f_{i})\), for all \(i=1,2\). The convergence of \(\{x_{n}\}\) and \(\{y_{n}\}\) in Example 5.2 is guaranteed by Theorem 3.1.

3. From Table 5, we see that the sequence generated by our Algorithm (5.3) converges faster than that of Algorithm (5.4), which is modified from Algorithm 3.2 in [6], in terms of both the number of iterations and the CPU time.

6 Conclusion

In this paper, we have proposed a new problem, called the combination of mixed variational inequality problems (1.7), which can be reduced to the classical variational inequality problem (1.4). Using the viscosity technique, we have introduced a new intermixed algorithm for finding a common solution of the combination of mixed variational inequality problems and the fixed-point problem of a nonexpansive mapping in a real Hilbert space. Moreover, we have proved Lemmas 2.5 and 2.6, which relate to the combination of mixed variational inequality problems (1.7), in Sect. 2. Under suitable conditions, a strong convergence theorem (Theorem 3.1) has been established for the proposed Algorithm (3.1). We have applied our theorem to solve the split-feasibility problem and the constrained convex-minimization problem. The effectiveness of the proposed method is illustrated by numerical examples in Hilbert spaces (see Tables 1, 2, 3, 4, and 5 and Figs. 1, 2, 3, 4, 5, and 6). The obtained results improve and extend several previously published results in this field.

Availability of data and materials

Not applicable.

References

1. Browder, F.E.: Semicontractive and semiaccretive nonlinear mappings in Banach spaces. Bull. Am. Math. Soc. 74, 660–665 (1968)
2. Lescarret, C.: Cas d'addition des applications monotones maximales dans un espace de Hilbert. C. R. Acad. Sci. Paris, Ser. I 261, 1160–1163 (1965) (in French)
3. Browder, F.E.: On the unification of the calculus of variations and the theory of monotone nonlinear operators in Banach spaces. Proc. Natl. Acad. Sci. USA 56, 419–425 (1966)
4. Konnov, I.V., Volotskaya, E.O.: Mixed variational inequalities and economic equilibrium problems. J. Appl. Math. 2(6), 289–314 (2002)
5. Noor, M.A.: A new iterative method for monotone mixed variational inequalities. Math. Comput. Model. 26(7), 29–34 (1997)
6. Noor, M.A., Noor, K.I., Yaqoob, H.: On general mixed variational inequalities. Acta Appl. Math. 110, 227–246 (2010)
7. Noor, M.A.: An implicit method for mixed variational inequalities. Appl. Math. Lett. 11, 109–113 (1998)
8. Noor, M.A.: Mixed quasi variational inequalities. Appl. Math. Comput. 146, 553–578 (2003)
9. Noor, M.A.: Proximal methods for mixed quasi variational inequalities. J. Optim. Theory Appl. 115, 447–451 (2002)
10. Noor, M.A.: Fundamentals of mixed quasi variational inequalities. Int. J. Pure Appl. Math. 15, 137–258 (2004)
11. Noor, M.A., Bnouhachem, A.: Self-adaptive methods for mixed quasi variational inequalities. J. Math. Anal. Appl. 312, 514–526 (2005)
12. Bnouhachem, A., Noor, M.A., Rassias, T.M.: Three-steps iterative algorithms for mixed variational inequalities. Appl. Math. Comput. 183, 436–446 (2006)
13. Noor, M.A.: Numerical methods for monotone mixed variational inequalities. Adv. Nonlinear Var. Inequal. 1, 51–79 (1998)
14. Bnouhachem, A.: A self-adaptive method for solving general mixed variational inequalities. J. Math. Anal. Appl. 309(1), 136–150 (2005)
15. Wang, Z.B., Chen, Z.Y., Xiao, Y.B., Zhang, C.: A new projection-type method for solving multi-valued mixed variational inequalities without monotonicity. Appl. Anal. 99(9), 1453–1466 (2020)
16. Jolaoso, L.O., Shehu, Y., Yao, J.C.: Inertial extragradient type method for mixed variational inequalities without monotonicity. Math. Comput. Simul. 192, 353–369 (2022)
17. Stampacchia, G.: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. 258, 4413–4416 (1964)
18. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. SIAM, Philadelphia (2000)
19. Aubin, J.P., Ekeland, I.: Applied Nonlinear Analysis. Wiley, New York (1984)
20. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Program. 63, 123–145 (1994)
21. Dafermos, S.C.: Traffic equilibrium and variational inequalities. Transp. Sci. 14, 42–54 (1980)
22. Dafermos, S.C., McKelvey, S.C.: Partitionable variational inequalities with applications to network and economic equilibrium. J. Optim. Theory Appl. 73, 243–268 (1992)
23. Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 21, 93–108 (2020)
24. Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: Pseudomonotone variational inequalities and fixed points. Fixed Point Theory 22, 543–558 (2021)
25. Ceng, L.C., Latif, A., Al-Mazrooei, A.E.: Composite viscosity methods for common solutions of general mixed equilibrium problem, variational inequalities and common fixed points. J. Inequal. Appl. 2015, Article ID 217 (2015)
26. Zhao, T.Y., Wang, D.Q., Ceng, L.C., et al.: Quasi-inertial Tseng's extragradient algorithms for pseudomonotone variational inequalities and fixed point problems of quasi-nonexpansive operators. Numer. Funct. Anal. Optim. 42, 69–90 (2020)
27. Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints. Optimization 70, 1337–1358 (2021)
28. Ceng, L.C., Shang, M.: Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 70, 715–740 (2021)
29. Ceng, L.C., Yuan, Q.: Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems. J. Inequal. Appl. 2019, Article ID 274 (2019)
30. Ceng, L.C., Köbis, E., Zhao, X.: On general implicit hybrid iteration method for triple hierarchical variational inequalities with hierarchical variational inequality constraints. Optimization 69, 1961–1986 (2020)
31. Ceng, L.C., Yao, J.C., Shehu, Y.: On Mann implicit composite subgradient extragradient methods for general systems of variational inequalities with hierarchical variational inequality constraints. J. Inequal. Appl. 2022, Article ID 78 (2022)
32. Ceng, L.C., Latif, A., Al-Mazrooei, A.E.: Hybrid viscosity methods for equilibrium problems, variational inequalities, and fixed point problems. Appl. Anal. 95, 1088–1117 (2016)
33. Ceng, L.C., Coroian, I., Qin, X., Yao, J.C.: A general viscosity implicit iterative algorithm for split variational inclusions with hierarchical variational inequality constraints. Fixed Point Theory 20, 469–482 (2019)
34. Yao, Z., Kang, S.M., Li, H.J.: An intermixed algorithm for strict pseudocontractions in Hilbert spaces. Fixed Point Theory Appl. 2015, Article ID 206 (2015)
35. Kangtunyakarn, A.: An iterative algorithm to approximate a common element of the set of common fixed points for a finite family of strict pseudocontractions and of the set of solutions for a modified system of variational inequalities. Fixed Point Theory Appl. 2013, Article ID 143 (2013)
36. Chuang, C.S.: Algorithms and convergence theorems for mixed equilibrium problems in Hilbert spaces. Numer. Funct. Anal. Optim. 40(8), 953–979 (2019)
37. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. CMS Books in Mathematics. Springer, Berlin (2017)
38. Brezis, H.: Opérateurs Maximaux Monotones et Semi-groupes de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam (1973)
39. Takahashi, W.: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
40. Ming, T., Liu, L.: General iterative methods for equilibrium and constrained convex minimization problem. J. Optim. Theory Appl. 63, 1367–1385 (2014)
41. Xu, H.K.: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659–678 (2003)
42. Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
43. Kotzer, T., Cohen, N., Shamir, J.: Extended and alternative projections onto convex sets: theory and applications. Technical Report No. EE 900, Dept. of Electrical Engineering, Technion, Haifa, Israel (1993)
44. Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
45. Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
46. Censor, Y., Elfving, T., Kopf, N., Bortfeld, T.: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071–2084 (2005)
47. Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)
48. Ceng, L.C., Ansari, Q.H., Yao, J.C.: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 64, 633–642 (2012)
49. López, G., Martín-Márquez, V., Wang, F., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28(8), 085004 (2012)
50. Vinh, N.T., Hoai, P.T.: Some subgradient extragradient type algorithms for solving split feasibility and fixed point problems. Math. Methods Appl. Sci. 39, 3808–3823 (2016)
51. Gibali, A., Mai, D.T., Vinh, N.T.: A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. 15(2), 963–984 (2019)
52. Vinh, V.T., Cholamjiak, P., Suantai, S.: A new CQ algorithm for solving split feasibility problems in Hilbert spaces. Bull. Malays. Math. Sci. Soc. 42, 2517–2534 (2019)

Acknowledgements

The authors would like to thank the referees for valuable comments and suggestions for improving this work. The first author would like to thank Rajamangala University of Technology Thanyaburi (RMUTT) under The Science, Research and Innovation Promotion Funding (TSRI) (Contract No. FRB660012/0168 and under project number FRB66E0635) for financial support.

Funding

This research was supported by The Science, Research and Innovation Promotion Funding (TSRI) (Grant No. FRB660012/0168). This research block grant was managed under the Rajamangala University of Technology Thanyaburi (FRB66E0635).

Author information

Contributions

AK dealt with the conceptualization, formal analysis, supervision, writing—review and editing. WK writing—original draft, formal analysis, computation. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Atid Kangtunyakarn.

Ethics declarations

Competing interests

The authors declare no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Khuangsatung, W., Kangtunyakarn, A. An intermixed method for solving the combination of mixed variational inequality problems and fixed-point problems. J Inequal Appl 2023, 1 (2023). https://doi.org/10.1186/s13660-022-02908-8

