Strong convergence of an inertial iterative algorithm for variational inequality problem, generalized equilibrium problem, and fixed point problem in a Banach space

Abstract

We propose and analyze an inertial iterative algorithm to approximate a common solution of a generalized equilibrium problem, a variational inequality problem, and a fixed point problem in the framework of a 2-uniformly convex and uniformly smooth real Banach space. Further, we establish the strong convergence of the proposed iterative method. Finally, we give an application and a numerical example to illustrate the applicability of the main algorithm.

Introduction

Let C be a nonempty closed convex subset of a real Banach space X, and let \(X^{*}\) be the dual space of X; the pairing between X and \(X^{*}\) is denoted by \(\langle \cdot ,\cdot \rangle \). The mapping \(J: X\to 2^{X^{*}}\) defined by

$$ J(x)=\bigl\{ x^{*}\in X^{*}: \bigl\langle x^{*},x\bigr\rangle = \Vert x \Vert ^{2}= \bigl\Vert x^{*} \bigr\Vert ^{2}\bigr\} , \quad \forall x\in X, $$
(1.1)

is called the normalized duality mapping.

Let \(g,b:C\times C\to \mathbb{R}\) be bifunctions, where \(\mathbb{R}\) is the set of real numbers. We study the generalized equilibrium problem (in short, GEP), which is to find \(x\in C\) such that

$$ g(x,y)+b(x,y)-b(x,x)\geq 0, \quad \forall y\in C. $$
(1.2)

The solution set of (1.2) is denoted by \(\operatorname{Sol}( \operatorname{GEP}(\mbox{1.2}))\). If \(b(x,y)=0\), \(\forall x,y\in C\), then (1.2) reduces to the equilibrium problem (in short, EP): Find \(x\in C\) such that

$$ g(x,y)\geq 0, \quad \forall y\in C, $$
(1.3)

which was studied by Blum and Oettli [1]. The solution set of (1.3) is denoted by \(\operatorname{Sol}(\operatorname{EP} (\mbox{1.3}))\).

The equilibrium problem is of great importance in the development of various fields of science and engineering. It includes various mathematical problems as special cases, such as the variational inclusion problem, variational inequality problem, mathematical programming problem, saddle point problem, complementarity problem, Nash equilibrium problem in noncooperative games, minimization problem, minimax inequality problem, and fixed point problem (see [13]). If we consider \(g(x,y)=h(y)-h(x)\), where \(h :C \to \mathbb{R}\) is a nonlinear function, then (1.3) becomes the optimization problem: Find \(x \in C\) such that

$$ h(x)\leq h(y), \quad \forall y \in C. $$
(1.4)

If we consider \(g(x,y)=\langle y-x, Dx\rangle \), \(\forall x,y\in C\), where \(D:C\to X^{*}\) is a nonlinear mapping, then (1.3) becomes the variational inequality problem (in short, VIP): Find \(x\in C\) such that

$$ \langle y-x, Dx\rangle \geq 0, \quad \forall y \in C, $$
(1.5)

which was studied by Hartman and Stampacchia [4]. The set of solutions of (1.5) is denoted by \(\operatorname{Sol}( \operatorname{VIP}(\mbox{1.5}))\).

In 2006, using the extragradient iterative method for VIP(1.5) given in [5], Nadezhkina and Takahashi [6] introduced and studied the following extragradient method and proved its strong convergence:

$$ \left . \textstyle\begin{array}{l} x_{0}\in C\subseteq H, \\ u_{n}=P_{C}(x_{n}-r_{n} Dx_{n}), \\ y_{n}=\alpha _{n}x_{n}+(1-\alpha _{n})TP_{C}(x_{n}-r_{n} Du_{n}), \\ C_{n}=\{z\in C: \Vert y_{n}-z \Vert ^{2}\leq \Vert x_{n}-z \Vert ^{2}\}, \\ Q_{n}=\{z\in C:\langle x_{n}-z,x-x_{n}\rangle \geq 0\}, \\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{0}. \end{array}\displaystyle \right \} $$
(1.6)

For further generalizations of iterative method (1.6), see [7–10].
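The cost structure of (1.6) is easiest to see in the classical extragradient step it builds on. The following is a minimal numerical sketch in a Euclidean setting (a Hilbert space, so \(P_{C}\) is the metric projection), with a hypothetical monotone 1-Lipschitz operator D given by a rotation and C the closed unit ball; the unique solution of the corresponding VIP is the origin.

```python
# Sketch of the extragradient step underlying (1.6), for the VIP
# <y - x, Dx> >= 0 on C = closed unit ball in R^2 (all choices hypothetical).

def D(x):
    # skew-symmetric (hence monotone), 1-Lipschitz operator: 90-degree rotation
    return (x[1], -x[0])

def proj_ball(x):
    # Euclidean projection onto the closed unit ball C
    n = (x[0] ** 2 + x[1] ** 2) ** 0.5
    return x if n <= 1.0 else (x[0] / n, x[1] / n)

def extragradient_step(x, r=0.5):
    # one iteration: two projections and two evaluations of D
    dx = D(x)
    u = proj_ball((x[0] - r * dx[0], x[1] - r * dx[1]))     # predictor
    du = D(u)
    return proj_ball((x[0] - r * du[0], x[1] - r * du[1]))  # corrector

x = (1.0, 0.5)
for _ in range(200):
    x = extragradient_step(x)
```

With step size \(r<1/L\) (here \(L=1\)), the iterates approach the solution 0.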

One drawback of algorithm (1.6) is that it computes values of the mapping D at two different points and requires two projections onto the admissible set C to pass to the next iteration. To overcome this drawback partially, Malitsky and Semenov [12], adopting the idea of Popov [11], recently showed that with another choice of \(C_{n}\) the extrapolation step \(u_{n}=P _{C}(x_{n}-r_{n} Dx_{n})\) can be dropped from (1.6), and introduced the following iteration without an extrapolation step, proving its strong convergence:

$$ \left . \textstyle\begin{array}{l} x_{0}, z_{0}\in C\subseteq H, \\ z_{n+1} =P_{C}(x_{n}-\lambda Dz_{n}), \\ C_{n} =\{z\in H: \Vert z_{n+1}-z \Vert ^{2}\leq \Vert x_{n}-z \Vert ^{2}+k \Vert x_{n}-x_{n-1} \Vert ^{2} \\ \hphantom{C_{n} =} {}- (1-\frac{1}{k}-\lambda L ) \Vert z_{n+1}-z_{n} \Vert ^{2}+\lambda L \Vert x_{n}-x_{n-1} \Vert ^{2}\}, \\ Q_{n} =\{z\in H:\langle x_{n}-z,x-x_{n}\rangle \geq 0\}, \\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{0}, \end{array}\displaystyle \right \} $$
(1.7)

where L is a Lipschitz constant and \(\lambda >0\), \(k>0\) are parameters. We note that algorithm (1.7) needs only one projection and one evaluation of D per iteration. The iterative method given in [12] extended the methods given in [5, 6]. Further, Dong and Lu [13] extended (1.7) and demonstrated by a numerical example that their algorithm can be faster than algorithm (1.6). Very recently, Kazmi et al. [14] extended (1.7) to the mixed equilibrium problem.

In 2009, Takahashi et al. [15] introduced the following iterative method and proved its strong convergence for a relatively nonexpansive mapping to approximate a common solution of a fixed point problem and an equilibrium problem in a Banach space:

$$ \left . \textstyle\begin{array}{l} x_{0}\in C, \\ u_{n}=J^{-1}(\alpha _{n}Jx_{n}+(1-\alpha _{n})JTx_{n}), \\ z_{n}\in C \text{ such that } g(z_{n},y)+\frac{1}{r_{n}}\langle y-z _{n},Jz_{n}-Ju_{n}\rangle \geq 0,\quad \forall y\in C, \\ C_{n}=\{z\in C:\phi (z,z_{n})\leq \phi (z,x_{n})\}, \\ Q_{n}=\{z\in C : \langle x_{n}-z,Jx-Jx_{n}\rangle \geq 0\}, \\ x_{n+1}=\varPi _{C_{n}\cap Q_{n}}{x_{0}}, \quad \forall n\geq 0, \end{array}\displaystyle \right \} $$
(1.8)

where \(\varPi _{C}:X\to C\) is the generalized projection. For further extensions of [13, 15], see [16–18].

On the other hand, Maingé [19] extended and unified the Krasnosel’skiı̌–Mann algorithm as follows:

$$ \left . \textstyle\begin{array}{l} w_{n} = x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ x_{n+1}=(1-\alpha _{n})w_{n}+\alpha _{n}Tw_{n}, \end{array}\displaystyle \right \} $$
(1.9)

for each \(n\geq 1\) and proved weak convergence for a nonexpansive mapping T under some conditions.
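A minimal 1-D sketch of iteration (1.9) under hypothetical choices: the map \(T(x)=x/2+1\) (nonexpansive, in fact a contraction, with fixed point 2), constant \(\theta _{n}=0.3\), and \(\alpha _{n}=0.5\).

```python
# 1-D sketch of the inertial Krasnosel'skii-Mann iteration (1.9).
# T below is a hypothetical example map; its unique fixed point is 2.

def T(x):
    # nonexpansive (in fact contractive) map with fixed point 2
    return 0.5 * x + 1.0

def inertial_km(x0, theta=0.3, alpha=0.5, iters=100):
    x_prev, x = x0, x0
    for _ in range(iters):
        w = x + theta * (x - x_prev)                    # inertial term
        x_prev, x = x, (1 - alpha) * w + alpha * T(w)   # Krasnosel'skii-Mann step
    return x

x = inertial_km(10.0)
```

Since T here is a contraction, the iterates converge to the fixed point 2 regardless of the inertial parameter; in the general nonexpansive case Maingé's conditions on \(\theta _{n}\) are needed.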

The term \(\theta _{n}(x_{n}-x_{n-1})\) in (1.9) is called the inertial term. It plays a crucial role in speeding up the convergence of iterative method (1.9); for details see [19–27]. It is worth mentioning that if \(\theta _{n}=0\), then iterative method (1.9) reduces to a Krasnosel’skiı̌–Mann type iterative method; for details, see [28–30]. Due to this importance, a number of researchers have been working on inertial type methods; see, for example, the inertial Douglas–Rachford splitting method [31], inertial forward–backward splitting methods [32, 33], the inertial forward–backward–forward method [34], and the inertial proximal ADMM [35]. It is also worth mentioning that the convergence analysis of inertial type iterative methods is still largely unexplored in the setting of Banach spaces.

Therefore, inspired and motivated by the work in [12, 15, 19], we introduce and study a hybrid iterative algorithm for approximating a common solution of GEP(1.2), VIP(1.5), and a fixed point problem for a relatively nonexpansive mapping. Further, we prove a strong convergence theorem in a uniformly smooth and 2-uniformly convex Banach space. Finally, we give a numerical example to justify the main theorem and demonstrate that our proposed inertial iterative algorithm is faster than the algorithms due to [15, 16].

Preliminaries

Weak and strong convergence are denoted by the symbols \(\rightharpoonup \) and →, respectively. Let N denote the unit sphere of a Banach space X, that is, \(N=\{x\in X: \|x\|=1\}\). If \(\frac{\|x+y\|}{2} < 1\), \(\forall x,y\in N\) with \(x\neq y\), then X is said to be strictly convex. If for any \(\varepsilon \in (0,2]\) there exists \(\delta >0\) such that

$$ \Vert x-y \Vert \geq \varepsilon\quad \text{implies}\quad \frac{ \Vert x+y \Vert }{2} \leq 1- \delta\quad \text{for any } x,y\in N, $$
(2.1)

then X is said to be uniformly convex. Note that every uniformly convex Banach space is reflexive and strictly convex. X is smooth if \(\lim_{t\to 0}\frac{\|x+ty\|-\|x\|}{t}\) exists for all \(x,y\in N\), and uniformly smooth if this limit exists uniformly in \(x,y\in N\). X is said to enjoy the Kadec–Klee property if for any sequence \(\{x_{n}\}\subset X\) and \(x\in X\) with \(x_{n}\rightharpoonup x\) and \(\|x_{n}\|\to \|x\|\), we have \(\|x_{n}-x\|\to 0\) as \(n\to \infty \). Every uniformly convex Banach space enjoys the Kadec–Klee property. Moreover, if X is smooth, then J is single-valued; if X is uniformly smooth, then J is uniformly norm-to-norm continuous on bounded subsets of X; and if X is strictly convex, then J is strictly monotone.

The Lyapunov function \(\phi : X\times X\to \mathbb{R}\) is defined by

$$ \phi (x,y)= \Vert x \Vert ^{2}-2\langle x,Jy \rangle + \Vert y \Vert ^{2}, \quad \forall x,y\in X. $$
(2.2)

It is obvious that

$$\begin{aligned}& \bigl( \Vert x \Vert - \Vert y \Vert \bigr)^{2} \leq \phi (x,y) \leq \bigl( \Vert x \Vert + \Vert y \Vert \bigr)^{2}, \quad \forall x,y\in X, \end{aligned}$$
(2.3)
$$\begin{aligned}& \phi \bigl(x,J^{-1}\bigl(\lambda Jy+(1-\lambda )Jz \bigr)\bigr)\leq \lambda \phi (x,y)+(1- \lambda )\phi (x,z), \quad \forall x,y,z\in X, \lambda \in [0,1], \end{aligned}$$
(2.4)

and

$$ \phi (x,y)\leq \Vert x \Vert \Vert Jx-Jy \Vert + \Vert y \Vert \Vert x-y \Vert , \quad \forall x,y\in X. $$
(2.5)
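In a Hilbert space the duality mapping J is the identity, so \(\phi (x,y)=\|x-y\|^{2}\). The following quick numerical check of the two-sided bound (2.3) uses a few hypothetical sample vectors in \(\mathbb{R}^{2}\).

```python
# In a Hilbert space J is the identity, so the Lyapunov function (2.2)
# reduces to phi(x, y) = ||x - y||^2.  Check the two-sided bound (2.3)
# on hypothetical sample vectors.

def norm(v):
    return sum(t * t for t in v) ** 0.5

def phi(x, y):
    # phi(x, y) = ||x||^2 - 2<x, Jy> + ||y||^2 with J = identity
    return norm(x) ** 2 - 2 * sum(a * b for a, b in zip(x, y)) + norm(y) ** 2

samples = [((1.0, 2.0), (3.0, -1.0)), ((0.5, 0.0), (0.0, 0.5)), ((2.0, 2.0), (2.0, 2.0))]
bounds_hold = all(
    (norm(x) - norm(y)) ** 2 - 1e-9 <= phi(x, y) <= (norm(x) + norm(y)) ** 2 + 1e-9
    for x, y in samples
)
```

The third sample also illustrates Remark 2.1: \(\phi (x,y)=0\) exactly when \(x=y\).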

Remark 2.1

If X is a reflexive, strictly convex, and smooth Banach space, then \(\forall x,y\in X\), \(\phi (x,y)=0 \Leftrightarrow x=y\).

Lemma 2.2

([36])

Let X be a 2-uniformly convex Banach space. Then for all \(x,y\in X\) the following inequality holds:

$$ \Vert x-y \Vert \leq \frac{2}{c^{2}} \Vert Jx-Jy \Vert , $$

where \(c\in (0, 1]\) is the 2-uniform convexity constant of X.

Lemma 2.3

([37])

Let X be a smooth and uniformly convex Banach space, and let \(\{x_{n}\}\) and \(\{y_{n}\}\) be two sequences in X such that either \(\{x_{n}\}\) or \(\{y_{n}\}\) is bounded. If \(\lim_{n\to \infty } \phi (x_{n},y_{n})=0\), then \(\lim_{n\to \infty }\|x_{n}-y_{n} \|=0\).

Remark 2.4

If \(\{x_{n}\}\) and \(\{y_{n}\}\) are bounded, then by (2.5) it is obvious that the converse of Lemma 2.3 also holds.

Definition 2.1

Let \(T: C\to C\) be a mapping. Then:

  1. (i)

    \(\operatorname{Fix}(T)=\{x\in C: Tx=x \}\) is the collection of all fixed points of T;

  2. (ii)

A point \(x_{0}\in C\) is called an asymptotic fixed point of T if C contains a sequence \(\{x_{n}\}\) such that \(x_{n}\rightharpoonup x_{0}\) and \(\lim_{n\to \infty }\|Tx_{n}-x _{n}\|=0\). \(\widehat{ \operatorname{Fix}}(T)\) denotes the collection of all asymptotic fixed points of T;

  3. (iii)

    T is said to be relatively nonexpansive if

    $$ \widehat{\operatorname{Fix}}(T)={\operatorname{Fix}}(T)\neq \emptyset \quad \text{and} \quad \phi (p,Tx)\leq \phi (p,x), \quad \forall x\in C, p\in { \operatorname{Fix}}(T). $$

Lemma 2.5

([38])

Let X be a reflexive, strictly convex, and smooth Banach space, and let C be a nonempty closed convex subset of X. Let \(T:C\to C\) be a relatively nonexpansive mapping. Then \({\operatorname{Fix}}(T)\) is a closed convex subset of C.

Lemma 2.6

([39])

Let C be a nonempty closed convex subset of X, and let D be a monotone and hemicontinuous mapping of C into \(X^{*}\). Then the solution set \(\operatorname{Sol}({\operatorname{VIP}}(C, D))\) is closed and convex.

Lemma 2.7

([37])

Let C be a nonempty closed convex subset of a real reflexive, strictly convex, and smooth Banach space X, and let \(x\in X\). Then there exists a unique element \(x_{0}\in C\) such that \(\phi (x_{0},x)=\inf_{y\in C}\phi (y,x)\).

Definition 2.2

([40])

A mapping \(\varPi _{C}: X\to C\) is said to be a generalized projection if, for any point \(x\in X\), \(\varPi _{C}{x}=\bar{x}\), where \(\bar{x}\) is the unique solution of the minimization problem \(\phi (\bar{x},x)=\inf_{y\in C}\phi (y,x)\).

Lemma 2.8

([40])

Let X be a reflexive, strictly convex, and smooth Banach space, and let C be a nonempty closed convex subset of X. Then

$$ \phi (x,\varPi _{C}{y})+\phi (\varPi _{C}{y},y)\leq \phi (x,y), \quad \forall x\in C \textit{ and } y\in X. $$
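In a Hilbert space, where J is the identity and \(\varPi _{C}\) coincides with the metric projection \(P_{C}\), Lemma 2.8 reduces to the familiar inequality \(\|x-P_{C}y\|^{2}+\|P_{C}y-y\|^{2}\leq \|x-y\|^{2}\) for \(x\in C\). A sketch of a numerical check on the hypothetical set \(C=[0,1]\subset \mathbb{R}\), where \(P_{C}\) is a clamp:

```python
# Hilbert-space specialization of Lemma 2.8 on the hypothetical set C = [0, 1]:
# ||x - P_C y||^2 + ||P_C y - y||^2 <= ||x - y||^2 for all x in C, y in R.

def P(y):
    # metric projection onto C = [0, 1]
    return min(1.0, max(0.0, y))

def lemma_2_8_holds(x, y, tol=1e-12):
    p = P(y)
    return (x - p) ** 2 + (p - y) ** 2 <= (x - y) ** 2 + tol

checks = [lemma_2_8_holds(x, y) for x in (0.0, 0.3, 1.0) for y in (-2.0, 0.4, 3.5)]
```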

Lemma 2.9

([40])

Let X be a reflexive, strictly convex, and smooth Banach space, let C be a nonempty closed convex subset of X, and let \(x\in X\) and \(z\in C\). Then

$$ z=\varPi _{C}{x}\quad \Longleftrightarrow \quad \langle z-y,Jx-Jz\rangle \geq 0, \quad \forall y\in C. $$

Assumption 2.1

Let \(g:C\times C\longrightarrow \mathbb{R}\) be a bifunction satisfying the following:

  1. (i)

    \(g(x,x)=0\), \(\forall x \in C\);

  2. (ii)

g is monotone, that is, \(g(x,y)+g(y,x)\leq 0\), \(\forall x,y \in C\);

  3. (iii)

\(\limsup_{t \to 0^{+}} g(tz+(1-t)x,y)\leq g(x,y)\), \(\forall x,y,z \in C\);

  4. (iv)

    For each \(x \in C\), \(y\to g(x,y)\) is convex and lower semicontinuous.

Assumption 2.2

Let \(b: C\times C\to \mathbb{R}\) be a bifunction satisfying the following:

  1. (i)

    b is skew-symmetric, i.e., \(b(x,x)-b(x,y)-b(y,x)+b(y,y) \geq 0\), \(\forall x,y\in C\);

  2. (ii)

    b is convex in the second argument;

  3. (iii)

    b is continuous.

Lemma 2.10

([41])

Let C be a closed convex subset of a uniformly smooth, strictly convex, and reflexive Banach space X, let \(g:C\times C\longrightarrow \mathbb{R}\) be a bifunction satisfying Assumption 2.1, and let \(b:C\times C\to \mathbb{R}\) satisfy Assumption 2.2. For all \(r>0\) and \(x\in X\), define a mapping \(T_{r}: X\to C\) as follows:

$$ T_{r}{x}= \biggl\{ z\in C :g(z,y)+ \frac{1}{r}\langle y-z,Jz-Jx\rangle +b(z,y)-b(z,z) \geq 0,\forall y\in C \biggr\} ,\quad \forall x\in X. $$
(2.6)

Then the following hold:

  1. (a)

\(T_{r}\) is single-valued;

  2. (b)

\(T_{r}\) is a firmly nonexpansive type mapping, i.e., for all \(x, y\in X\),

$$ \langle T_{r}{x}-T_{r}{y},JT_{r}{x}-JT_{r}{y} \rangle \leq \langle T _{r}{x}-T_{r}{y},Jx-Jy \rangle ; $$
  3. (c)

\(\operatorname{Fix}(T_{r})= \operatorname{Sol}( \operatorname{GEP}(\mbox{1.2}))\) is closed and convex;

  4. (d)

\(T_{r}\) is quasi-ϕ-nonexpansive;

  5. (e)

\(\phi (q,T_{r}{x})+\phi (T_{r}{x},x)\leq \phi (q,x)\), \(\forall q\in \operatorname{Fix}(T_{r})\), \(x\in X\).
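To make Lemma 2.10 concrete, consider the toy instance \(X=\mathbb{R}\) (a Hilbert space, so J is the identity), with the hypothetical bifunctions \(g(z,y)=z(y-z)\) (monotone, since \(g(x,y)+g(y,x)=-(x-y)^{2}\leq 0\)) and \(b\equiv 0\) on \(C=\mathbb{R}\). Relation (2.6) then forces \(z+\frac{1}{r}(z-x)=0\), so the resolvent has the closed form \(T_{r}x=x/(1+r)\), and properties (b) and (d) can be verified directly:

```python
# Toy instance of Lemma 2.10 in X = R with the hypothetical bifunction
# g(z, y) = z*(y - z), b = 0, C = R: the resolvent is T_r x = x / (1 + r),
# the GEP solution set is {0}, and Fix(T_r) = {0}.

def T_r(x, r):
    # closed-form resolvent from relation (2.6) for this toy g
    return x / (1.0 + r)

r = 2.0
pts = [-3.0, -0.5, 0.0, 1.0, 4.0]
# (b) firm nonexpansiveness: <Tx - Ty, Tx - Ty> <= <Tx - Ty, x - y>
firm = all(
    (T_r(x, r) - T_r(y, r)) ** 2 <= (T_r(x, r) - T_r(y, r)) * (x - y) + 1e-12
    for x in pts for y in pts
)
# (d) quasi-phi-nonexpansiveness with respect to the solution 0
quasi = all(abs(T_r(x, r)) <= abs(x) for x in pts)
```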

In the sequel, we make use of the function \(\varPhi :X\times X^{*}\to \mathbb{R}\), defined by

$$ \varPhi \bigl(x,x^{*}\bigr)= \Vert x \Vert ^{2}-2\bigl\langle x, x^{*}\bigr\rangle + \bigl\Vert x^{*} \bigr\Vert ^{2}. $$

Observe that \(\varPhi (x,x^{*})=\phi (x, J^{-1}x^{*})\).

Lemma 2.11

([40])

Let X be a smooth, strictly convex, and reflexive Banach space with dual \(X^{*}\). Then

$$ \varPhi \bigl(x,x^{*}\bigr)+2\bigl\langle J^{-1}x^{*}-x, y^{*}\bigr\rangle \leq \varPhi \bigl(x,x ^{*}+y^{*} \bigr),\quad \forall x\in X \textit{ and all } x^{*}, y^{*} \in X^{*}. $$

Main result

In this section, we prove a strong convergence theorem for the inertial hybrid iterative algorithm to approximate a common solution of GEP(1.2), VIP(1.5), and a fixed point problem for a relatively nonexpansive mapping in a uniformly smooth and 2-uniformly convex real Banach space.

Iterative Algorithm 3.1

Let the sequences \(\{x_{n}\}\) and \(\{z_{n}\}\) be generated by the iterative algorithm:

$$ \left . \textstyle\begin{array}{l} x_{0}=x_{-1},\qquad z_{0}\in C, \qquad C_{0}:=C, \\ w_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}) , \\ y_{n}=\varPi _{C}J^{-1}(Jw_{n}-\mu _{n}Dw_{n}), \\ u_{n}=J^{-1}(\alpha _{n}Jz_{n}+(1-\alpha _{n})JTy_{n}), \\ z_{n+1}=T_{r_{n}}u_{n}, \\ C_{n}=\{z\in C:\phi (z,z_{n+1})\leq \alpha _{n}\phi (z,z_{n})+(1-\alpha _{n})\phi (z,w_{n})\}, \\ Q_{n}=\{z\in C : \langle x_{n}-z,Jx_{n}-Jx_{0}\rangle \leq 0\}, \\ x_{n+1}=\varPi _{C_{n}\cap Q_{n}}{x_{0}}, \quad \forall n\geq 0, \end{array}\displaystyle \right \} $$
(3.1)

where \(\{\alpha _{n}\}\subset [0,1]\), \(r_{n}\in [a,\infty )\) for some \(a>0\), \(\{\theta _{n}\}\subset (0,1)\), and \(\{\mu _{n}\}\subset (0,\infty )\).
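To illustrate the structure of Algorithm 3.1, here is a 1-D numerical sketch under strong simplifying assumptions (all concrete choices below are hypothetical, not the full setting of the theory): \(X=\mathbb{R}\) (a Hilbert space, so J is the identity, \(\varPi _{C}=P_{C}\), and \(\phi (x,y)=(x-y)^{2}\)), \(C=[-1,1]\), \(D(x)=x\) (inverse strongly monotone with \(\gamma =1\)), \(T(x)=x/2\), and \(g(z,y)=z(y-z)\) with \(b\equiv 0\), for which \(T_{r}x=x/(1+r)\). Here \(\varGamma =\{0\}\), and in \(\mathbb{R}\) each of \(C_{n}\) and \(Q_{n}\) is the intersection of C with a half-line, so \(x_{n+1}\) is a clamp of \(x_{0}\) to an interval.

```python
# 1-D sketch of Algorithm 3.1 under the hypothetical Hilbert-space choices
# described above.  In R, the constraint defining C_n is linear in v, and
# Q_n is a half-line, so C_n ∩ Q_n is an interval [lo, hi].

def clamp(v, lo, hi):
    return min(hi, max(lo, v))

def run(x0=1.0, z0=1.0, theta=0.5, mu=0.3, r=1.0, iters=100):
    x_prev, x, z = x0, x0, z0
    for n in range(iters):
        alpha = 1.0 / (n + 2)                 # alpha_n -> 0
        w = x + theta * (x - x_prev)          # inertial extrapolation
        y = clamp((1 - mu) * w, -1.0, 1.0)    # y_n = P_C(w_n - mu*D(w_n)), D = id
        u = alpha * z + (1 - alpha) * 0.5 * y         # u_n, with T(x) = x/2
        z_new = u / (1.0 + r)                 # z_{n+1} = T_r u_n = u_n/(1+r)
        # C_n: (v - z_new)^2 <= alpha*(v - z)^2 + (1 - alpha)*(v - w)^2,
        # which expands to the linear constraint 2*m*v <= rhs:
        m = alpha * z + (1 - alpha) * w - z_new
        rhs = alpha * z * z + (1 - alpha) * w * w - z_new * z_new
        lo, hi = -1.0, 1.0                    # start from C = [-1, 1]
        if m > 1e-15:
            hi = min(hi, rhs / (2 * m))
        elif m < -1e-15:
            lo = max(lo, rhs / (2 * m))
        # Q_n: (x_n - v)*(x_n - x_0) <= 0
        if x > x0:
            lo = max(lo, x)
        elif x < x0:
            hi = min(hi, x)
        x_prev, x, z = x, clamp(x0, lo, hi), z_new   # x_{n+1} = P_{C_n ∩ Q_n} x_0
    return x

x_final = run()
```

The parameters \(\mu =0.3<c^{2}\gamma /2=0.5\) (with \(c=1\) in this Hilbert setting), \(\alpha _{n}=1/(n+2)\to 0\), \(\theta _{n}=0.5\), and \(r_{n}=1\) match the hypotheses of the convergence theorem, and the iterates approach \(\varPi _{\varGamma }x_{0}=0\).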

Theorem 3.2

Let C be a nonempty, closed, and convex subset of a 2-uniformly convex and uniformly smooth real Banach space X, and let \(X^{*}\) be the dual of X. Let \(D:X\to X^{*}\) be a γ-inverse strongly monotone mapping with constant \(\gamma \in (0,1)\), let \(g:C\times C\to \mathbb{R}\) be a bifunction satisfying Assumption 2.1, and let \(b:C\times C\to \mathbb{R}\) satisfy Assumption 2.2. Let \(T:C\to C\) be a relatively nonexpansive mapping such that \(\varGamma := \operatorname{Sol}(\operatorname{GEP}(\mbox{1.2}))\cap \operatorname{Sol}(\operatorname{VIP}(\mbox{1.5}))\cap {\operatorname{Fix}}(T) \neq \emptyset \). Let the sequences \(\{x_{n}\}\) and \(\{z_{n}\}\) be generated by iterative algorithm (3.1) with control sequences \(\{\alpha _{n}\}\subset [0,1]\) such that \(\lim_{n\to \infty }\alpha _{n}=0\), \(r_{n}\in [a,\infty )\) for some \(a>0\), \(\{\theta _{n}\}\subset (0,1)\), and \(\{\mu _{n}\}\subset (0,\infty )\) satisfying the condition \(0<\liminf_{n\to \infty }\mu _{n}\leq \limsup_{n\to \infty }\mu _{n}<\frac{c^{2}\gamma }{2}\), where c is the 2-uniformly convex constant of X. Then \(\{x_{n}\}\) converges strongly to \(\hat{x}\in \varGamma \), where \(\hat{x}=\varPi _{\varGamma } x_{0}\) and \(\varPi _{\varGamma }x_{0}\) is the generalized projection of X onto Γ.

We first establish some lemmas needed for the proof of the main result.

Lemma 3.3

For each \(n\geq 0\), Γ and \(C_{n}\cap Q_{n}\) are closed and convex.

Proof

It follows from Lemmas 2.5, 2.6, and 2.10 that Γ is a nonempty closed and convex set, and hence \(\varPi _{\varGamma }x_{0}\) is well defined. Evidently, \(C_{0}=C\) is closed and convex, and the closedness of \(C_{n}\) is also obvious. We only prove the convexity of \(C_{n}\). For \(q_{1}, q_{2}\in C_{n}\), we have \(q_{1}, q_{2}\in C\) and \(tq_{1}+(1-t)q_{2}\in C\) for \(t\in (0,1)\), and

$$ \phi (q_{1},z_{n+1})\leq \alpha _{n}\phi (q_{1}, z_{n})+(1-\alpha _{n}) \phi (q_{1},w_{n}) $$
(3.2)

and

$$ \phi (q_{2},z_{n+1})\leq \alpha _{n}\phi (q_{2}, z_{n})+(1-\alpha _{n}) \phi (q_{2},w_{n}). $$
(3.3)

The above two inequalities are equivalent to

$$\begin{aligned} &2\alpha _{n}\langle q_{1}, Jz_{n}\rangle +2(1-\alpha _{n})\langle q _{1},Jw_{n} \rangle -2\langle q_{1},Jz_{n+1}\rangle \\ &\quad \leq \alpha _{n} \Vert z_{n} \Vert ^{2}+(1-\alpha _{n}) \Vert w_{n} \Vert ^{2}- \Vert z_{n+1} \Vert ^{2} \end{aligned}$$
(3.4)

and

$$\begin{aligned} &2\alpha _{n}\langle q_{2}, Jz_{n}\rangle +2(1-\alpha _{n})\langle q _{2},Jw_{n} \rangle -2\langle q_{2},Jz_{n+1}\rangle \\ &\quad \leq \alpha _{n} \Vert z_{n} \Vert ^{2}+(1-\alpha _{n}) \Vert w_{n} \Vert ^{2}- \Vert z_{n+1} \Vert ^{2}. \end{aligned}$$
(3.5)

It follows from (3.4) and (3.5) that

$$\begin{aligned} &2\alpha _{n}\bigl\langle tq_{1}+(1-t)q_{2}, Jz_{n}\bigr\rangle +2(1-\alpha _{n}) \bigl\langle tq_{1}+(1-t)q_{2},Jw_{n} \bigr\rangle -2 \bigl\langle tq_{1}+(1-t)q_{2},Jz _{n+1}\bigr\rangle \\ &\quad \leq \alpha _{n} \Vert z_{n} \Vert ^{2}+(1-\alpha _{n}) \Vert w_{n} \Vert ^{2}- \Vert z_{n+1} \Vert ^{2}. \end{aligned}$$
(3.6)

Hence, we have

$$ \phi \bigl(tq_{1}+(1-t)q_{2},z_{n+1} \bigr)\leq \alpha _{n}\phi \bigl(tq_{1}+(1-t)q _{2}, z_{n}\bigr)+(1-\alpha _{n})\phi \bigl(tq_{1}+(1-t)q_{2},w_{n}\bigr), $$
(3.7)

which implies that \(tq_{1}+(1-t)q_{2}\in C_{n}\), hence \(C_{n}\) is closed and convex for all \(n\geq 0\). By using the definition of \(Q_{n}\), it is obvious that \(Q_{n}\) is closed and convex. This implies that \(C_{n}\cap Q_{n}\), \(\forall n\geq 0\) is closed and convex. □

Lemma 3.4

For each \(n\geq 0\), \(\varGamma \subset C_{n}\cap Q_{n}\), and the sequence \(\{x_{n}\}\) is well defined.

Proof

Let \(p\in \varGamma \). Then we have

$$\begin{aligned} \phi (p,z_{n+1}) =&\phi (p,T_{r_{n}}u_{n}) \\ \leq & \phi (p,u_{n}) \\ =& \phi \bigl(p,J^{-1}\bigl(\alpha _{n}Jz_{n}+(1- \alpha _{n})JTy_{n}\bigr)\bigr) \\ \leq &\alpha _{n}\phi (p,z_{n})+(1-\alpha _{n})\phi (p,Ty_{n}) \\ \leq &\alpha _{n}\phi (p,z_{n})+(1-\alpha _{n})\phi (p,y_{n}). \end{aligned}$$
(3.8)

Additionally, by Lemmas 2.2 and 2.11, we obtain

$$\begin{aligned} \phi (p,y_{n}) =&\phi \bigl(p,\varPi _{C}J^{-1}(Jw_{n}- \mu _{n}Dw_{n})\bigr) \\ \leq &\phi \bigl(p,J^{-1}(Jw_{n}-\mu _{n}Dw_{n})\bigr) \\ =&\varPhi (p,Jw_{n}-\mu _{n}Dw_{n}) \\ \leq & \varPhi \bigl(p,(Jw_{n}-\mu _{n}Dw_{n})+ \mu _{n}Dw_{n}\bigr)-2\bigl\langle J^{-1}(Jw_{n}-\mu _{n}Dw_{n})-p ,\mu _{n}Dw_{n}\bigr\rangle \\ =&\varPhi (p,Jw_{n})-2\mu _{n}\bigl\langle J^{-1}(Jw_{n}-\mu _{n}Dw_{n})-p ,Dw_{n}\bigr\rangle \\ =&\phi (p,w_{n})-2\mu _{n}\langle w_{n}-p, Dw_{n}\rangle -2\mu _{n}\bigl\langle J ^{-1}(Jw_{n}-\mu _{n}Dw_{n})-w_{n} ,Dw_{n}\bigr\rangle \\ \leq &\phi (p,w_{n})-2\mu _{n}\langle w_{n}-p, Dw_{n}-Dp\rangle -2\mu _{n}\bigl\langle J^{-1}(Jw_{n}-\mu _{n}Dw_{n})-w_{n} ,Dw_{n}\bigr\rangle \\ \leq &\phi (p,w_{n})-2\mu _{n}\gamma \Vert Dw_{n} \Vert ^{2}+2\mu _{n} \bigl\Vert J^{-1}(Jw _{n}-\mu _{n}Dw_{n})-J^{-1}(Jw_{n}) \bigr\Vert \Vert Dw_{n} \Vert \\ \leq &\phi (p,w_{n})-2\mu _{n}\gamma \Vert Dw_{n} \Vert ^{2}+\frac{4\mu _{n} ^{2}}{c^{2}} \Vert Dw_{n} \Vert ^{2} \\ =&\phi (p,w_{n})-2\mu _{n}\biggl(\gamma - \frac{2\mu _{n}}{c^{2}}\biggr) \Vert Dw_{n} \Vert ^{2}, \end{aligned}$$
(3.9)

which, combined with \(\mu _{n}<\frac{c^{2}\gamma }{2}\), leads to

$$ \phi (p, y_{n})\leq \phi (p,w_{n}). $$
(3.10)

By (3.8) and (3.10), we have

$$ \phi (p,z_{n+1})\leq \alpha _{n}\phi (p,z_{n})+(1-\alpha _{n})\phi (p,w _{n}), $$
(3.11)

which implies that \(p\in C_{n}\). Thus, \(\varGamma \subset C_{n}\), \(\forall n\geq 0\). Next, we show by induction that \(\varGamma \subset C _{n}\cap Q_{n}\), \(\forall n\geq 0\). Since \(Q_{0}=C\), we have \(\varGamma \subset C_{0}\cap Q_{0}\). Suppose \(\varGamma \subset C_{k}\cap Q_{k}\) for some \(k\geq 0\). Then there exists \(x_{k+1}\in C_{k}\cap Q _{k}\) such that \(x_{k+1}=\varPi _{C_{k}\cap Q_{k}}x_{0}\). From the definition of \(x_{k+1}\), we have, for all \(z\in C_{k}\cap Q_{k}\), \(\langle x_{k+1}-z,Jx_{0}-Jx_{k+1}\rangle \geq 0\). Since \(\varGamma \subset C_{k}\cap Q_{k}\), we have

$$ \langle x_{k+1}-p,Jx_{0}-Jx_{k+1} \rangle \geq 0, \quad \forall p\in \varGamma , $$
(3.12)

and hence \(\varGamma \subset Q_{k+1}\). Since also \(\varGamma \subset C_{k+1}\), we obtain \(\varGamma \subset C_{k+1} \cap Q_{k+1}\). Therefore, \(\varGamma \subset C_{n}\cap Q_{n}\), \(\forall n\geq 0\), and hence \(x_{n+1}=\varPi _{C_{n}\cap Q_{n}}x_{0}\) is well defined \(\forall n \geq 0\). Thus, \(\{x_{n}\}\) is well defined. □

Lemma 3.5

The sequences \(\{x_{n}\}\), \(\{y_{n}\}\), \(\{z_{n}\}\), \(\{w_{n}\}\), and \(\{u_{n}\}\) generated by iterative algorithm (3.1) are bounded.

Proof

By the definition of \(Q_{n}\), \(x_{n}=\varPi _{Q_{n}}x_{0}\). Using \(x_{n}=\varPi _{Q_{n}}x_{0}\) and Lemma 2.8, we obtain

$$\begin{aligned} \phi (x_{n},x_{0}) =&\phi (\varPi _{Q_{n}}x_{0},x_{0}) \\ \leq &\phi (u,x_{0})-\phi (u,\varPi _{Q_{n}}x_{0}) \leq \phi (u,x_{0}), \quad \forall u\in \varGamma \subset Q_{n}. \end{aligned}$$

This shows that \(\{\phi (x_{n}, x_{0})\}\) is bounded and hence, by (2.3), \(\{x_{n}\}\) is bounded. Further,

$$\begin{aligned} \phi (p,x_{n}) =& \phi (p,\varPi _{C_{n-1}\cap Q_{n-1}}x_{0}) \\ \leq & \phi (p,x_{0})-\phi (x_{n},x_{0}) \end{aligned}$$

implies that \(\{\phi (p,x_{n})\}\) is bounded, and from the fact \(\phi (p,Tx_{n})\leq \phi (p,x_{n})\), \(\forall p\in \varGamma \), that \(\{Tx_{n}\}\) is also bounded. Therefore, \(\{w_{n}\}\) and \(\{y_{n}\}\) are also bounded. Now set \(M=\max \{\phi (p,z_{0}),\sup_{n}\phi (p,w _{n})\}\). Then obviously \(\phi (p,z_{0})\leq M\). Suppose \(\phi (p,z_{n}) \leq M\) for some n; then from (3.11)

$$ \phi (p,z_{n+1})\leq \alpha _{n}M+(1-\alpha _{n})M\leq M. $$

Thus, \(\{\phi (p,z_{n+1})\}\) is bounded and hence \(\{z_{n}\}\) is also bounded. □

Lemma 3.6

We have \(x_{n}\to \hat{x}\), \(u_{n}\to \hat{x}\), and \(z_{n+1} \to \hat{x}\) as \(n\to \infty \), where \(\hat{x}\) is some point in C.

Proof

Since \(x_{n+1}=\varPi _{C_{n}\cap Q_{n}}x_{0}\in Q_{n}\) and \(x_{n} = \varPi _{Q_{n}}x_{0}\), we get

$$ \phi (x_{n},x_{0})\leq \phi (x_{n+1},x_{0}), \quad \forall n\geq 0. $$

This shows that \(\{\phi (x_{n},x_{0})\}\) is nondecreasing and hence from boundedness of \(\{\phi (x_{n},x_{0})\}\), \(\lim_{n\to \infty } \phi (x_{n},x_{0})\) exists. Further,

$$\begin{aligned} \phi (x_{n+1},x_{n}) =&\phi (x_{n+1},\varPi _{Q_{n}}x_{0}) \\ \leq & \phi (x_{n+1},x_{0})-\phi (\varPi _{Q_{n}}x_{0},x_{0}) \\ =& \phi (x_{n+1},x_{0})-\phi (x_{n},x_{0}), \quad \forall n\geq 0, \end{aligned}$$

and hence

$$ \lim_{n\to \infty }\phi (x_{n+1},x_{n})=0. $$
(3.13)

Since X is uniformly convex and smooth, by Lemma 2.3, we have

$$ \lim_{n\to \infty } \Vert x_{n+1}-x_{n} \Vert =0. $$
(3.14)

Since X is reflexive and \(\{x_{n}\}\) is bounded, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}}\rightharpoonup \hat{x}\). Since \(C_{n}\cap Q_{n}\) is closed and convex, \(\hat{x} \in C_{n}\cap Q_{n}\). Using weak lower semicontinuity of \(\|\cdot \| ^{2}\), we obtain

$$\begin{aligned} \phi (\hat{x},x_{0}) =& \Vert \hat{x} \Vert ^{2}-2 \langle \hat{x},Jx_{0}\rangle + \Vert x_{0} \Vert ^{2} \\ \leq & \liminf_{k\to \infty }\bigl( \Vert x_{n_{k}} \Vert ^{2}-2\langle x _{n_{k}},Jx_{0}\rangle + \Vert x_{0} \Vert ^{2}\bigr) \\ =& \liminf_{k\to \infty }\phi (x_{n_{k}},x_{0}) \\ \leq & \limsup_{k\to \infty }\phi (x_{n_{k}},x_{0}) \\ \leq & \phi (\hat{x},x_{0}), \end{aligned}$$

which implies that \(\lim_{k\to \infty }\phi (x_{n_{k}},x_{0})= \phi (\hat{x},x_{0})\), and hence we have \(\lim_{k\to \infty } \|x_{n_{k}}\|= \|\hat{x}\|\). Further, from the Kadec–Klee property of X, \(x_{n_{k}}\to \hat{x}\) as \(k\to \infty \). Since \(\lim_{n\to \infty }\phi (x_{n},x_{0})\) exists, \(\lim_{n\to \infty }\phi (x_{n},x_{0})= \phi (\hat{x},x_{0})\). If there exists some subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{j}}\to \tilde{x}\) as \(j \to \infty \), then

$$\begin{aligned} \phi (\hat{x},\tilde{x}) =& \lim_{k,j\to \infty }\phi (x_{n _{k}},x_{n_{j}}) \\ =& \lim_{k,j\to \infty }\phi (x_{n_{k}},\varPi _{Q_{n_{j}}}x_{0}) \\ \leq & \lim_{k,j\to \infty }\bigl\{ \phi (x_{n_{k}},x_{0})- \phi (x _{n_{j}},x_{0})\bigr\} =0, \end{aligned}$$

which shows \(\hat{x}=\tilde{x}\) and thus \(x_{n} \to \hat{x}\) as \(n\to \infty \).

From the definition of \(w_{n}\), we have \(\|w_{n}-x_{n}\|=\|\theta _{n}(x _{n}-x_{n-1})\|\leq \|x_{n}-x_{n-1}\|\), which implies by (3.14) that

$$ \lim_{n\to \infty } \Vert w_{n}-x_{n} \Vert =0. $$
(3.15)

Since \(\{w_{n}\}\) is bounded, by Remark 2.4 we get

$$ \lim_{n\to \infty }\phi (x_{n}, w_{n})=0. $$
(3.16)

By (3.14) and (3.15), we have

$$ \lim_{n\to \infty } \Vert x_{n+1}- w_{n} \Vert =0, $$
(3.17)

using Remark 2.4

$$ \lim_{n\to \infty }\phi (x_{n+1}, w_{n})=0. $$
(3.18)

As \(x_{n+1}=\varPi _{C_{n}\cap Q_{n}}x_{0}\in C_{n}\), therefore

$$ \phi (x_{n+1},z_{n+1})\leq \alpha _{n}\phi (x_{n+1},z_{n})+(1-\alpha _{n})\phi (x_{n+1},w_{n}) . $$
(3.19)

By (3.18), (3.19), the boundedness of \(\{\phi (x_{n+1},z_{n})\}\), and the assumption \(\lim_{n\to \infty }\alpha _{n}=0\),

$$ \lim_{n\to \infty }\phi (x_{n+1},z_{n+1})=0. $$

Using (2.3), we get

$$ \lim_{n\to \infty }\bigl( \Vert x_{n+1} \Vert - \Vert z_{n+1} \Vert \bigr)=0, $$

and using \(\lim_{n\to \infty }\|x_{n}\|=\|\hat{x}\|\), we have

$$ \lim_{n\to \infty } \Vert z_{n+1} \Vert = \Vert \hat{x} \Vert . $$
(3.20)

Hence, we have

$$ \lim_{n\to \infty } \Vert Jz_{n+1} \Vert =\lim_{n\to \infty } \Vert z _{n+1} \Vert = \Vert \hat{x} \Vert = \Vert J\hat{x} \Vert . $$
(3.21)

This shows that \(\{\|Jz_{n+1}\|\}\) is bounded. Since X and \(X^{*}\) are reflexive, we may assume that \(Jz_{n+1}\rightharpoonup x ^{*} \in X^{*}\). By reflexivity of X, we see that \(J(X)=X^{*}\), that is, there exists \(x\in X\) such that \(Jx=x^{*}\). Since

$$\begin{aligned}& \phi (x_{n+1},z_{n+1})= \Vert x_{n+1} \Vert ^{2}-2\langle x_{n+1},Jz_{n+1} \rangle + \Vert z_{n+1} \Vert ^{2}, \\& \phi (x_{n+1},z_{n+1})= \Vert x_{n+1} \Vert ^{2}-2\langle x_{n+1},Jz_{n+1} \rangle + \Vert Jz_{n+1} \Vert ^{2}. \end{aligned}$$

Taking \(\liminf_{n\to \infty }\) in the above equality and using the weak lower semicontinuity of the norm, we have

$$\begin{aligned} 0 \geq & \Vert \hat{x} \Vert ^{2}-2\bigl\langle \hat{x}, x^{*}\bigr\rangle + \bigl\Vert x^{*} \bigr\Vert ^{2} \\ =& \Vert \hat{x} \Vert ^{2}-2\langle \hat{x}, Jx\rangle + \Vert Jx \Vert ^{2} \\ =& \Vert \hat{x} \Vert ^{2}-2\langle \hat{x}, Jx\rangle + \Vert x \Vert ^{2} \\ =& \phi (\hat{x}, x), \end{aligned}$$

i.e., \(\hat{x}=x\), and hence \(x^{*}=J\hat{x}\). This implies that \(Jz_{n+1}\rightharpoonup J\hat{x}\in X^{*}\). Since \(X^{*}\) has the Kadec–Klee property, by (3.21) we obtain

$$ \lim_{n\to \infty } \Vert Jz_{n+1}-J\hat{x} \Vert = 0. $$

As \(J^{-1}: X^{*}\to X\) is demicontinuous, \(z_{n+1}\rightharpoonup \hat{x}\). Using (3.20) and the Kadec–Klee property of X, we obtain

$$ \lim_{n\to \infty }z_{n+1}=\hat{x}. $$
(3.22)

Next, by using the weak lower semicontinuity of \(\|\cdot \|^{2}\), we arrive at

$$\begin{aligned} \phi (p,\hat{x}) =& \Vert p \Vert ^{2}-2\langle p, J\hat{x} \rangle + \Vert \hat{x} \Vert ^{2} \\ \leq & \liminf_{n\to \infty }\bigl( \Vert p \Vert ^{2}-2\langle p, Jz_{n+1} \rangle + \Vert z_{n+1} \Vert ^{2}\bigr) \\ =& \liminf_{n\to \infty }\phi (p,z_{n+1}) \\ \leq & \limsup_{n\to \infty }\phi (p,z_{n+1}) \\ =& \limsup_{n\to \infty }\bigl( \Vert p \Vert ^{2}-2 \langle p, Jz_{n+1} \rangle + \Vert z_{n+1} \Vert ^{2}\bigr) \\ \leq & \phi (p,\hat{x}), \end{aligned}$$

and hence

$$ \lim_{n\to \infty }\phi (p,z_{n+1})=\phi (p,\hat{x}). $$
(3.23)

Since \(x_{n}\to \hat{x}\) as \(n\to \infty \), combined with (3.22), we have

$$ \lim_{n\to \infty } \Vert x_{n}-z_{n+1} \Vert =0. $$
(3.24)

Since J is uniformly continuous, we get

$$ \lim_{n\to \infty } \Vert Jx_{n}-Jz_{n+1} \Vert =0. $$
(3.25)

By using the definition of Lyapunov function, we have, \(\forall p \in \varGamma \),

$$\begin{aligned} \phi (p,x_{n})-\phi (p,z_{n+1}) =& \Vert x_{n} \Vert ^{2}- \Vert z_{n+1} \Vert ^{2}-2 \langle p, Jx_{n}-Jz_{n+1}\rangle \\ \leq & \Vert x_{n}-z_{n+1} \Vert \bigl( \Vert x_{n} \Vert + \Vert z_{n+1} \Vert \bigr)+2 \Vert p \Vert \Vert Jx_{n}-Jz _{n+1} \Vert . \end{aligned}$$

From (3.24) and (3.25)

$$ \lim_{n\to \infty }\bigl\{ \phi (p,x_{n})- \phi (p,z_{n+1})\bigr\} =0. $$
(3.26)

From (3.23) and (3.26)

$$ \lim_{n\to \infty }\phi (p,x_{n})=\phi (p,\hat{x}). $$
(3.27)

Again, by using weak lower semicontinuity of \(\|\cdot \|^{2}\), we obtain

$$\begin{aligned} \phi (p,\hat{x}) =& \Vert p \Vert ^{2}-2\langle p, J\hat{x} \rangle + \Vert \hat{x} \Vert ^{2} \\ \leq & \liminf_{n\to \infty }\bigl( \Vert p \Vert ^{2}-2\langle p, Jw_{n} \rangle + \Vert w_{n} \Vert ^{2}\bigr) \\ =& \liminf_{n\to \infty }\phi (p,w_{n}) \\ \leq & \limsup_{n\to \infty }\phi (p,w_{n}) \\ =& \limsup_{n\to \infty }\bigl( \Vert p \Vert ^{2}-2 \langle p, Jw_{n}\rangle + \Vert w_{n} \Vert ^{2}\bigr) \\ \leq & \phi (p,\hat{x}), \end{aligned}$$

which implies

$$ \lim_{n\to \infty }\phi (p,w_{n})=\phi (p,\hat{x}). $$
(3.28)

Thus, for any \(p\in \varGamma \subset C_{n}\), by (3.8) and (3.10),

$$\begin{aligned} \phi (p,u_{n}) =& \phi \bigl(p,J^{-1} \bigl(\alpha _{n}Jz_{n}+(1-\alpha _{n})JTy _{n}\bigr)\bigr) \\ \leq & \alpha _{n}\phi (p,z_{n})+(1-\alpha _{n})\phi (p,y_{n}) \\ \leq & \alpha _{n}\phi (p,z_{n})+(1-\alpha _{n})\phi (p,w_{n}). \end{aligned}$$
(3.29)

From (3.23), (3.28), (3.29), and \(\lim_{n\to \infty }\alpha _{n}=0\),

$$ \lim_{n\to \infty }\phi (p,u_{n})=\phi (p,\hat{x}). $$
(3.30)

From Lemma 2.10(e), we obtain, for any \(p\in \varGamma \) and \(z_{n+1}=T_{r_{n}}u_{n}\),

$$\begin{aligned} \phi (z_{n+1},u_{n}) =& \phi (T_{r_{n}}u_{n},u_{n}) \\ \leq & \phi (p,u_{n})-\phi (p,T_{r_{n}}u_{n}) \\ =& \phi (p,u_{n})-\phi (p,z_{n+1}). \end{aligned}$$
(3.31)

It follows from (3.23), (3.30), and (3.31) that

$$ \lim_{n\to \infty }\phi (z_{n+1},u_{n})=0, $$

and hence from (2.3) we have

$$ \lim_{n\to \infty }\bigl( \Vert z_{n+1} \Vert - \Vert u_{n} \Vert \bigr)=0. $$

From (3.20)

$$ \lim_{n\to \infty } \Vert u_{n} \Vert = \Vert \hat{x} \Vert , $$
(3.32)

and hence

$$ \lim_{n\to \infty } \Vert Ju_{n} \Vert = \Vert J\hat{x} \Vert , $$
(3.33)

i.e., \(\{\|Ju_{n}\|\}\) is bounded in \(X^{*}\). Since \(X^{*}\) is reflexive, we can assume that \(Ju_{n}\rightharpoonup u^{*}\in X^{*}\) as \(n\to \infty \). Since \(J(X)=X^{*}\), there exists \(u\in X\) such that \(Ju=u^{*}\). Since

$$\begin{aligned} \phi (z_{n+1},u_{n}) =& \Vert z_{n+1} \Vert ^{2}-2\langle z_{n+1},Ju_{n}\rangle + \Vert u_{n} \Vert ^{2} \\ =& \Vert z_{n+1} \Vert ^{2}-2\langle z_{n+1},Ju_{n}\rangle + \Vert Ju_{n} \Vert ^{2}. \end{aligned}$$

Taking \(\liminf_{n\to \infty }\) in the above equality and using \(\lim_{n\to \infty }\phi (z_{n+1},u_{n})=0\) together with the weak lower semicontinuity of the norm, we have

$$\begin{aligned} 0 \geq & \Vert \hat{x} \Vert ^{2}-2\bigl\langle \hat{x},u^{*}\bigr\rangle + \bigl\Vert u^{*} \bigr\Vert ^{2} \\ =& \Vert \hat{x} \Vert ^{2}-2\langle \hat{x},Ju\rangle + \Vert Ju \Vert ^{2} \\ =& \Vert \hat{x} \Vert ^{2}-2\langle \hat{x},Ju\rangle + \Vert u \Vert ^{2} \\ =&\phi (\hat{x}, u). \end{aligned}$$

Using Remark 2.1, we have \(\hat{x}=u\), i.e., \(u^{*}=J\hat{x}\). Therefore \(Ju_{n}\rightharpoonup J\hat{x}\in X^{*}\). From the Kadec–Klee property of \(X^{*}\) and (3.33), we obtain

$$ \lim_{n\to \infty } \Vert Ju_{n}-J \hat{x} \Vert =0. $$
(3.34)

Since \(J^{-1}\) is demicontinuous, (3.34) implies \(u_{n}\rightharpoonup \hat{x}\). From the Kadec–Klee property of X and (3.32), we obtain

$$ \lim_{n\to \infty }u_{n}=\hat{x}. $$

 □

Proof of Theorem 3.2

By (3.8) and (3.9), we have

$$\begin{aligned} \phi (p,u_{n}) \leq & \alpha _{n}\phi (p,z_{n})+(1-\alpha _{n})\phi (p,y _{n}) \\ \leq &\alpha _{n}\phi (p,z_{n})+(1-\alpha _{n})\phi (p,w_{n})+2(1- \alpha _{n})\mu _{n}\biggl(\frac{2\mu _{n}}{c^{2}}-\gamma \biggr) \Vert Dw_{n} \Vert ^{2}, \end{aligned}$$

which implies that

$$\begin{aligned} 2(1-\alpha _{n})\mu _{n}\biggl(\gamma -\frac{2\mu _{n}}{c^{2}}\biggr) \Vert Dw_{n} \Vert ^{2} \leq & \phi (p,w_{n})-\phi (p,u_{n}) \\ &{}+\alpha _{n}\bigl[\phi (p,z_{n})-\phi (p,w_{n})\bigr], \end{aligned}$$
(3.35)

and hence, from (3.28), (3.30), (3.35), \(\lim_{n\to \infty }\alpha _{n}=0\), and \(\mu _{n}(\gamma -\frac{2\mu _{n}}{c^{2}})>0\), we get

$$ \lim_{n\to \infty } \Vert Dw_{n} \Vert =0. $$
(3.36)

Since D is γ-inverse strongly monotone, it is \(\frac{1}{ \gamma }\)-Lipschitz continuous. It follows from \(\lim_{n\to \infty }w_{n}=\hat{x}\) and (3.36) that \(\hat{x}\in D^{-1}(0)\). Hence \(\hat{x}\in \operatorname{Sol}( \operatorname{VIP}(\mbox{1.5}))\).

Furthermore, (3.1) combined with (3.36) yields that

$$\begin{aligned} \lim_{n\to \infty } \Vert y_{n}-\hat{x} \Vert =& \lim_{n\to \infty } \bigl\Vert \varPi _{C}J^{-1}(Jw_{n}- \mu _{n}Dw_{n})-\varPi _{C}\hat{x} \bigr\Vert \\ \leq &\lim_{n\to \infty } \bigl\Vert J^{-1}(Jw_{n}- \mu _{n}Dw_{n})- \hat{x} \bigr\Vert \\ =& 0. \end{aligned}$$
(3.37)

Using Lemmas 2.2 and 2.11, we estimate

$$\begin{aligned} \phi (w_{n}, y_{n}) =&\phi \bigl(w_{n}, \varPi _{C}J^{-1}(Jw_{n}-\mu _{n}Dw_{n})\bigr) \\ \leq &\phi \bigl(w_{n},J^{-1}(Jw_{n}-\mu _{n}Dw_{n})\bigr) \\ \leq &\varPhi \bigl(w_{n},(Jw_{n}-\mu _{n}Dw_{n})\bigr) \\ \leq &\varPhi \bigl(w_{n},(Jw_{n}-\mu _{n}Dw_{n})+\mu _{n}Dw_{n} \bigr)-2\bigl\langle J ^{-1}(Jw_{n}-\mu _{n}Dw_{n})-w_{n}, \mu _{n}Dw_{n}\bigr\rangle \\ =& \phi (w_{n},w_{n})+2\bigl\langle J^{-1}(Jw_{n}-\mu _{n}Dw_{n})-w_{n}, - \mu _{n}Dw_{n}\bigr\rangle \\ =&2\mu _{n}\bigl\langle J^{-1}(Jw_{n}-\mu _{n}Dw_{n})-w_{n}, -Dw_{n} \bigr\rangle \\ \leq & 2\mu _{n} \bigl\Vert J^{-1}(Jw_{n}-\mu _{n}Dw_{n})-J^{-1}Jw_{n} \bigr\Vert \Vert Dw_{n} \Vert \\ \leq & \frac{4}{c^{2}}\mu _{n}^{2} \Vert Dw_{n} \Vert ^{2}, \end{aligned}$$
(3.38)

then using (3.36) we have

$$ \lim_{n\to \infty }\phi (w_{n}, y_{n})=0, $$
(3.39)

which implies by Lemma 2.3 that

$$ \lim_{n\to \infty } \Vert w_{n}-y_{n} \Vert =0. $$
(3.40)

Since \(\lim_{n\to \infty }\phi (z_{n+1},u_{n})=0\), from Lemma 2.3 we have

$$ \lim_{n\to \infty } \Vert z_{n+1}-u_{n} \Vert =0. $$
(3.41)

By the uniform continuity of J,

$$ \lim_{n\to \infty } \Vert Jz_{n+1}-Ju_{n} \Vert =0. $$
(3.42)

Since \(r_{n}\geq a\) and using (3.42), we get

$$ \lim_{n\to \infty }\frac{ \Vert Jz_{n+1}-Ju_{n} \Vert }{r_{n}}=0. $$
(3.43)

By \(z_{n+1}=T_{r_{n}}u_{n}\), we have

$$ g(z_{n+1},y)+\frac{1}{r_{n}}\langle y-z_{n+1},Jz_{n+1}-Ju_{n} \rangle +b(y,z_{n+1})-b(z_{n+1},z_{n+1})\geq 0, \quad \forall y\in C. $$

It follows from Assumption 2.1(ii) that

$$\begin{aligned} \frac{1}{r_{n}}\langle y-z_{n+1},Jz_{n+1}-Ju_{n} \rangle \geq & -g(z _{n+1},y)-b(y,z_{n+1})+b(z_{n+1},z_{n+1}) \\ \geq & g(y,z_{n+1})-b(y,z_{n+1})+b(z_{n+1},z_{n+1}). \end{aligned}$$

Letting \(n\to \infty \), by (3.43) and the lower semicontinuity of \(g(y,\cdot )\), we have

$$ g(y,\hat{x})-b(y,\hat{x})+b(\hat{x},\hat{x})\leq 0, \quad \forall y\in C. $$

Setting \(y_{t}:=ty+(1-t)\hat{x}\) for \(t\in (0,1]\) and \(y\in C\), we have \(y_{t}\in C\), and thus

$$ g(y_{t},\hat{x})-b(y_{t},\hat{x})+b(\hat{x},\hat{x}) \leq 0. $$

It follows from Assumption 2.1(i)–(iv) that

$$\begin{aligned} 0 =&g(y_{t},y_{t}) \\ \leq & tg(y_{t},y)+(1-t)g(y_{t},\hat{x}) \\ \leq & tg(y_{t},y)+(1-t)\bigl[b(y_{t},\hat{x})-b( \hat{x},\hat{x})\bigr] \\ \leq & tg(y_{t},y)+(1-t)\bigl[b(y,\hat{x})-b(\hat{x},\hat{x}) \bigr]. \end{aligned}$$

Letting \(t\to 0\), we have from Assumption 2.1(iii)

$$ g(\hat{x},y)+b(y,\hat{x})-b(\hat{x},\hat{x})\geq 0, \quad \forall y\in C. $$

Therefore \(\hat{x}\in \operatorname{Sol}(\operatorname{GEP} (\mbox{1.2}))\).

Next, we show that \(\hat{x}\in \operatorname{Fix}(T)\). In view of \(u_{n}=J^{-1}(\alpha _{n}Jz_{n}+(1-\alpha _{n})JTy_{n})\), we have

$$ Jz_{n+1}-Ju_{n}=\alpha _{n}(Jz_{n+1}-Jz_{n})+(1- \alpha _{n}) (Jz_{n+1}-JTy _{n}). $$

Hence, we have

$$ (1-\alpha _{n}) \Vert Jz_{n+1}-JTy_{n} \Vert \leq \Vert Jz_{n+1}-Ju_{n} \Vert +\alpha _{n} \Vert Jz_{n+1}-Jz_{n} \Vert . $$
(3.44)

From \(\lim_{n\to \infty }\alpha _{n}=0\), (3.42), and (3.44), and then using the uniform continuity of \(J^{-1}\) on bounded sets, we obtain

$$\begin{aligned}& \lim_{n\to \infty } \Vert Jz_{n+1}-JTy_{n} \Vert =0, \end{aligned}$$
(3.45)
$$\begin{aligned}& \lim_{n\to \infty } \Vert z_{n+1}-Ty_{n} \Vert =0. \end{aligned}$$
(3.46)

Further, using (3.40) and (3.46), the inequality

$$ \Vert Ty_{n}-y_{n} \Vert \leq \Vert Ty_{n}-z_{n+1} \Vert + \Vert z_{n+1}-w_{n} \Vert + \Vert w_{n}-y _{n} \Vert $$

implies

$$ \lim_{n\to \infty } \Vert Ty_{n}-y_{n} \Vert =0. $$
(3.47)

From (3.17), (3.40), and (3.41) it follows that the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), \(\{u_{n}\}\), \(\{w_{n}\}\), and \(\{z_{n}\}\) all converge to the same limit x̂; hence from (3.47) we have

$$ \lim_{n\to \infty } \Vert Tx_{n}-x_{n} \Vert =0, $$
(3.48)

which implies that \(\hat{x}= T\hat{x}\), i.e., \(\hat{x}\in \operatorname{Fix}(T)\). Then \(\hat{x}\in \varGamma \).

Finally, we show \(\hat{x}= \varPi _{\varGamma }x_{0}\). Taking \(k\to \infty \) in (3.12), we have

$$ \langle \hat{x}-p, Jx_{0}-J\hat{x}\rangle \geq 0, \quad \forall p \in \varGamma . $$

Now, by Lemma 2.9, \(\hat{x}= \varPi _{\varGamma }x_{0}\). This completes the proof. □

If X is a Hilbert space, then \(J=I\) and \(\phi (x,y)=\|x-y\|^{2}\), \(\forall x,y\in C\). Thus from Theorem 3.2 we obtain the following corollary.

Corollary 3.1

Let C be a nonempty, closed, and convex subset of a Hilbert space X. Let \(D:X\to X^{*}\) be a γ-inverse strongly monotone mapping with constant \(\gamma \in (0,1)\), let \(g:C\times C\to \mathbb{R}\) be a bifunction satisfying Assumption 2.1, and let \(b:C\times C\to \mathbb{R}\) satisfy Assumption 2.2. Let \(T:C\to C\) be a nonexpansive mapping such that \(\varGamma := \operatorname{Sol}(\operatorname{GEP}(\mbox{1.2}))\cap \operatorname{Sol}( \operatorname{VIP}(\mbox{1.5}))\cap \operatorname{Fix}(T) \neq \emptyset \). Let the sequences \(\{x_{n}\}\) and \(\{z_{n}\}\) be generated by the iterative algorithm:

$$ \left . \textstyle\begin{array}{l} x_{0}=x_{-1},\qquad z_{0}\in C, \qquad C_{0}:=C, \\ w_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}) , \\ y_{n}=P_{C}(w_{n}-\mu _{n}Dw_{n}), \\ u_{n}=\alpha _{n}z_{n}+(1-\alpha _{n})Ty_{n}, \\ z_{n+1}=T_{r_{n}}u_{n}, \\ C_{n}=\{z\in C: \Vert z_{n+1}-z \Vert ^{2}\leq \alpha _{n} \Vert z_{n}-z \Vert ^{2}+(1- \alpha _{n}) \Vert w_{n}-z \Vert ^{2}\}, \\ Q_{n}=\{z\in C : \langle x_{n}-z,x_{n}-x_{0}\rangle \leq 0\}, \\ x_{n+1}=P_{C_{n}\cap Q_{n}}{x_{0}},\quad \forall n\geq 0, \end{array}\displaystyle \right \} $$
(3.49)

where \(\{\alpha _{n}\}\) is a sequence in \([0,1]\) such that \(\lim_{n\to \infty }\alpha _{n}=0\), \(r_{n}\in [a,\infty )\) for some \(a>0\), \(\{\theta _{n}\}\in (0,1)\), and \(\{\mu _{n}\}\in (0,\infty )\) satisfies the condition \(0<\liminf_{n\to \infty }\mu _{n}\leq \limsup_{n\to \infty }\mu _{n}<\frac{c^{2}\gamma }{2}\), where c is the 2-uniformly convex constant of X. Then \(\{x_{n}\}\) converges strongly to \(\hat{x}\in \varGamma \), where \(\hat{x}=P_{\varGamma } x_{0}\) and \(P_{\varGamma }x_{0}\) is the metric projection of \(x_{0}\) onto Γ.

Numerical example

Let \(X=\mathbb{R}\) be a Hilbert space with the norm \(\|x\|=|x|\), \(\forall x\in X\). We now give a numerical example that illustrates Theorem 3.2.

Example 4.1

Let \(X=\mathbb{R}\) and \(C=X\), where X is a Hilbert space, and let \(g:C\times C\to \mathbb{R}\) be defined by \(g(x,y)=x(y-x)\), \(\forall x,y \in C\), and \(b:C\times C\to \mathbb{R}\) be defined by \(b(x,y)=xy\), \(\forall x,y\in C\). Let the mapping \(D:\mathbb{R}\to \mathbb{R}\) be defined by \(Dx=\frac{x}{2}\), and let \(T: C\to C\) be defined by \(Tx= \frac{1}{3}x\). Set \(\{\mu _{n}\}=\{\frac{0.9}{n}\}\), \(r_{n}= \frac{1}{4}\), \(\theta _{n}=0.9\), and \(\{\alpha _{n}\}=\{\frac{1}{n^{3}} \}\), \(\forall n\geq 1\). Then the sequences \(\{x_{n}\}\), \(\{u_{n}\}\), and \(\{z_{n}\}\) generated by hybrid iterative algorithm (3.1) converge to \(\hat{x}=0\in \varGamma \).

Proof

Note that, for the case \(C=X\), where X is a Hilbert space, the sets \(C_{n}\) and \(Q_{n}\) in iterative algorithm (3.1) are half-spaces. Therefore, the projection onto the intersection of the sets \(C_{n}\) and \(Q_{n}\) can be computed by a formula similar to the one given in [42]. It is easy to observe that g and b satisfy Assumption 2.1 and Assumption 2.2, respectively, and \(\operatorname{Sol}(\operatorname{GEP}(\mbox{1.2}))=\{0\} \neq \emptyset \). Further, it is easy to observe that D is a \(\frac{1}{2}\)-inverse strongly monotone mapping with \(\operatorname{Sol}(\operatorname{VIP}(\mbox{1.5}))=\{0\}\neq \emptyset \), and that T is a relatively nonexpansive mapping with \(\operatorname{Fix}(T)=\{0\}\). Therefore, \(\varGamma :=\operatorname{Sol}(\operatorname{GEP}(\mbox{1.2})) \cap \operatorname{Sol}(\operatorname{VIP}(\mbox{1.5}))\cap \operatorname{Fix}(T)=\{0\}\neq \emptyset \). After simplification, hybrid iterative scheme (3.1) reduces to the following scheme: Given initial values \(x_{0}\), \(x_{1}\), \(z_{0}\),

$$\begin{aligned} \textstyle\begin{cases} w_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ y_{n} =P_{C}(w_{n}-\mu _{n}Dw_{n})=w_{n}-\mu _{n}\frac{w_{n}}{2} \quad (\text{since } C=X, P_{C} \text{ is the identity}), \\ u_{n}=\alpha _{n}z_{n}+ \frac{(1-\alpha _{n})}{3}y_{n}; \qquad z_{n+1} = \frac{2u_{n}}{3}; \\ C_{n}= [e_{n}, \infty ), \quad \text{where } e_{n}:=\frac{z_{n+1}^{2}-w_{n}^{2}+\alpha _{n}(w_{n}^{2}-z_{n}^{2})}{2z _{n+1}-2w_{n}+2\alpha _{n}(w_{n}-z_{n})}; \\ Q_{n}=[{x_{n}}, \infty ); \\ x_{n+1}=P_{C_{n}\cap Q_{n}}{x_{0}}, \quad \forall n\geq 0. \end{cases}\displaystyle \end{aligned}$$
(4.1)
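The closed form \(z_{n+1}=\frac{2u_{n}}{3}\) of the resolvent step can be checked directly from the definition of \(T_{r_{n}}\), using only the data of Example 4.1: with \(g(x,y)=x(y-x)\), \(b(x,y)=xy\), and \(r_{n}=\frac{1}{4}\), the point \(z=T_{r_{n}}u\) must satisfy

$$ g(z,y)+\frac{1}{r_{n}}(y-z) (z-u)+b(y,z)-b(z,z)=(y-z)\bigl[2z+4(z-u)\bigr]\geq 0, \quad \forall y\in \mathbb{R}, $$

and since \(y-z\) takes both signs, this forces \(2z+4(z-u)=0\), i.e., \(z=\frac{2u}{3}\). The same computation with \(b=0\) gives \(z=\frac{4u}{5}\), which is the resolvent step appearing in Algorithms 4.2–4.4 below.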

Finally, using the software Matlab 7.8.0, we obtain Figures 1 and 2 and Table 1, which show that \(\{x_{n}\}\), \(\{u_{n}\}\), and \(\{z_{n}\}\) converge to \(\hat{x}=0\) as \(n \to +\infty \).
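The reduced scheme (4.1) is also easy to simulate directly. The following minimal Python sketch (the function name and the interval bookkeeping are ours, not from the paper) tracks \(C_{n}\) and \(Q_{n}\) as half-lines of \(\mathbb{R}\): the half-line \(C_{n}\) points left or right depending on the sign of the linear coefficient, which we handle explicitly, and we start at \(n=1\) so that \(\mu _{n}=0.9/n\) is defined.

```python
import math

def scheme_41(x0=1.0, x1=1.0, z0=3.0, n_iter=60):
    """Sketch of scheme (4.1) in X = R with g(x,y) = x(y-x), b(x,y) = xy,
    Dx = x/2, Tx = x/3, r_n = 1/4, theta_n = 0.9, mu_n = 0.9/n, alpha_n = 1/n^3.
    C_n and Q_n are half-lines; their intersection is tracked as [lo, hi]."""
    x_prev, x, z = x0, x1, z0
    for n in range(1, n_iter + 1):
        theta, mu, alpha = 0.9, 0.9 / n, 1.0 / n ** 3
        w = x + theta * (x - x_prev)
        y = w - mu * w / 2.0                      # P_C is the identity since C = X = R
        u = alpha * z + (1.0 - alpha) * y / 3.0   # Ty = y/3
        z_next = 2.0 * u / 3.0                    # resolvent step T_{1/4} u
        # C_n = {v : (z_next - v)^2 <= alpha (z - v)^2 + (1 - alpha)(w - v)^2},
        # a half-line whose direction depends on the sign of the linear coefficient.
        a = z_next - alpha * z - (1.0 - alpha) * w
        b = z_next ** 2 - alpha * z ** 2 - (1.0 - alpha) * w ** 2
        lo, hi = -math.inf, math.inf
        if a > 0:
            lo = b / (2.0 * a)
        elif a < 0:
            hi = b / (2.0 * a)
        # Q_n = {v : (x_n - v)(x_n - x0) <= 0}
        if x > x0:
            lo = max(lo, x)
        elif x < x0:
            hi = min(hi, x)
        # projection of x0 onto [lo, hi] (assumed nonempty)
        x_prev, x, z = x, min(max(x0, lo), hi), z_next
    return x
```

For the initial values of Figure 1 (\(x_{0}=x_{1}=1\), \(z_{0}=3\)), the iterates decrease rapidly toward \(\hat{x}=0\), consistent with Table 1.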

Figure 1

Convergence of \(\{x_{n}\}\), \(\{z_{n}\}\) and \(\{u_{n}\}\) when \(x_{0}=1\), \(x_{1}=1\), \(z_{0}=3\)

Figure 2

Convergence of \(\{x_{n}\}\), \(\{z_{n}\}\) and \(\{u_{n}\}\) when \(x_{0}=-1\), \(x_{1}=-2\), \(z_{0}=-3\)

Table 1 Values of \(x_{n}\), \(z_{n}\) and \(u_{n}\)

 □

For \(D=0\), \(b(x,y)=0\), we now demonstrate that iterative algorithm (3.1) with the conditions given in Theorem 3.1 approximates a common element of the solution set of \(\operatorname{EP} (\mbox{1.3})\) and the fixed point set of T. Further, we observe that it converges faster than iterative algorithm (3.1) of [16] and iterative algorithm (1.8) of [15] for a nonexpansive mapping.

Setting \(D=0\) and \(b(x,y)=0\) in Example 4.1, iterative algorithm (4.1), iterative algorithm (3.1) of [16], and iterative algorithm (1.8) of [15] reduce to the following iterative algorithms:

Iterative Algorithm 4.2

Given initial values \(x_{0}\), \(x_{1}\), \(z_{0}\),

$$ \textstyle\begin{cases} w_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}) \\ y_{n} =w_{n}, \\ u_{n}=\alpha _{n}z_{n}+ \frac{(1-\alpha _{n})}{3}y_{n}; \qquad z_{n+1} =\frac{4u_{n}}{5}; \\ C_{n}= [e_{n}, \infty ), \quad \text{where } e_{n}:=\frac{z_{n+1}^{2}-w_{n}^{2}+\alpha _{n}(w_{n}^{2}-z_{n}^{2})}{2z _{n+1}-2w_{n}+2\alpha _{n}(w_{n}-z_{n})}; \\ Q_{n}=[{x_{n}}, \infty ); \\ x_{n+1}=P_{C_{n}\cap Q_{n}}{x_{0}}, \quad \forall n\geq 0; \end{cases} $$
(4.2)

and

Iterative Algorithm 4.3

Given initial values \(x_{0}\), \(z_{0}\),

$$ \textstyle\begin{cases} u_{n}=\alpha _{n}z_{n}+ \frac{(1-\alpha _{n})}{3}x_{n}; \qquad z_{n+1} =\frac{4u_{n}}{5}; \\ C_{n}= [e_{n}, \infty ), \quad \text{where } e_{n}:=\frac{z_{n+1}^{2}-x_{n}^{2}+\alpha _{n}(x_{n}^{2}-z_{n}^{2})}{2z _{n+1}-2x_{n}+2\alpha _{n}(x_{n}-z_{n})}; \\ Q_{n}=[{x_{n}}, \infty ); \\ x_{n+1}=P_{C_{n}\cap Q_{n}}{x_{0}}, \quad \forall n\geq 0; \end{cases} $$
(4.3)

and

Iterative Algorithm 4.4

Given initial values \(x_{0}\),

$$ \textstyle\begin{cases} u_{n}=\alpha _{n}x_{n}+ \frac{(1-\alpha _{n})}{3}x_{n}; \qquad z_{n} =\frac{4u_{n}}{5}; \\ C_{n}= [e_{n}, \infty ), \quad \text{where } e_{n}:=\frac{z_{n}+x_{n}}{2}; \\ Q_{n}=[{x_{n}}, \infty ); \\ x_{n+1}=P_{C_{n}\cap Q_{n}}{x_{0}}, \quad \forall n\geq 0, \end{cases} $$
(4.4)

respectively.
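For a quick numerical check of the two baselines, the following Python sketch (function names are ours; in Algorithm 4.3 we take \(y_{n}=x_{n}\), since the inertial and extragradient steps are absent) runs Algorithms 4.3 and 4.4 on \(\mathbb{R}\), again handling the direction of each half-line \(C_{n}\) explicitly rather than assuming it points to the right:

```python
import math

def kazmi_43(x0=1.0, z0=3.0, n_iter=40):
    """Sketch of Algorithm 4.3 (hybrid method of [16], no inertia), with y_n = x_n."""
    x, z = x0, z0
    for n in range(1, n_iter + 1):
        alpha = 1.0 / n ** 3
        u = alpha * z + (1.0 - alpha) * x / 3.0
        z_next = 4.0 * u / 5.0                  # resolvent step with b = 0
        # C_n = {v : (z_next - v)^2 <= alpha (z - v)^2 + (1 - alpha)(x - v)^2}
        a = z_next - alpha * z - (1.0 - alpha) * x
        b = z_next ** 2 - alpha * z ** 2 - (1.0 - alpha) * x ** 2
        lo, hi = -math.inf, math.inf
        if a > 0:
            lo = b / (2.0 * a)
        elif a < 0:
            hi = b / (2.0 * a)
        if x > x0:                              # Q_n = {v : (x_n - v)(x_n - x0) <= 0}
            lo = max(lo, x)
        elif x < x0:
            hi = min(hi, x)
        x, z = min(max(x0, lo), hi), z_next     # project x0 onto [lo, hi]
    return x

def takahashi_44(x0=1.0, n_iter=40):
    """Sketch of Algorithm 4.4 (Takahashi-type method of [15])."""
    x = x0
    for n in range(1, n_iter + 1):
        alpha = 1.0 / n ** 3
        u = alpha * x + (1.0 - alpha) * x / 3.0
        z = 4.0 * u / 5.0
        lo, hi = -math.inf, math.inf
        mid = (z + x) / 2.0
        if x > z:                               # C_n = {v : |z_n - v| <= |x_n - v|}
            hi = mid
        elif x < z:
            lo = mid
        if x > x0:
            lo = max(lo, x)
        elif x < x0:
            hi = min(hi, x)
        x = min(max(x0, lo), hi)
    return x
```

Both sequences decay geometrically to \(\hat{x}=0\) (asymptotically by a factor of roughly \(\frac{19}{30}\) per step in Algorithm 4.4), in agreement with Figures 3 and 4.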

Hence, the sequence \(\{x_{n}\}\) generated by Iterative Algorithms 4.2, 4.3, and 4.4 converges strongly to \(\hat{x}=0\).

Finally, using the software Matlab 7.8, we present the following figures, which show that the sequence \(\{x_{n}\}\) converges to \(\hat{x}=0 \in \varGamma \). Figure 3 shows the convergence of \(\{x_{n}\}\) when \(x_{0} = 1\), \(x _{1}=2\), \(z_{0} = 3\) for Algorithms 4.2, 4.3, and 4.4, while Fig. 4 shows the convergence of \(\{x_{n}\}\) when \(x_{0} = -1\), \(x_{1}=-2\), \(z_{0} =-3\) for Algorithms 4.2 and 4.3. It is evident from the figures that the sequence \(\{x_{n}\}\) generated by Iterative Algorithm 4.2 converges faster than the sequences generated by Iterative Algorithms 4.3 and 4.4.

Figure 3

Convergence of \(\{ x_{n} \}\) when \(x_{0}=1\), \(x_{1}=2\), \(z_{0}=3\)

Figure 4

Convergence of \(\{ x_{n} \}\) when \(x_{0} =-1\), \(x_{1} =-2\), \(z_{0} =-3\)

Concluding remark 4.1

We observe that

  1. (i)

Iterative algorithm (3.1) is quite different from algorithm (1.8) of Takahashi and Zembayashi [15] and algorithm (3.1) of [16].

  2. (ii)

Corollary 3.1 is new and different from Theorem 3.2 of Takahashi and Zembayashi [15] and algorithm (1.9) of Maingé [19].

  3. (iii)

A numerical example was given to illustrate the efficiency of the proposed hybrid inertial iterative algorithm, that is, the proposed algorithms in Theorem 3.2 and Corollary 3.1 for \(D=0\) and \(b(x,y)=0 \) converge faster than the algorithms presented in [16] and [15].

References

  1. 1.

    Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)

  2. 2.

Daniele, P., Giannessi, F., Maugeri, A.: Equilibrium Problems and Variational Models. Nonconvex Optimization and Its Applications, vol. 68. Kluwer Academic, Norwell (2003)

  3. 3.

    Moudafi, A.: Second order differential proximal methods for equilibrium problems. J. Inequal. Pure Appl. Math. 4(1), Art. 18 (2003)

  4. 4.

Hartman, P., Stampacchia, G.: On some non-linear elliptic differential-functional equations. Acta Math. 115, 271–310 (1966)

  5. 5.

    Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)

  6. 6.

Nadezhkina, N., Takahashi, W.: Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings. SIAM J. Optim. 16(4), 1230–1241 (2006)

  7. 7.

    Ceng, L.C., Hadjisavvas, N., Wong, N.C.: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 46, 635–646 (2010)

  8. 8.

    Ceng, L.C., Guu, S.M., Yao, J.C.: Finding common solution of variational inequality, a general system of variational inequalities and fixed point problem via a hybrid extragradient method. Fixed Point Theory Appl. 2011, Article ID 626159, 22 pages (2011)

  9. 9.

    Ceng, L.C., Wang, C.Y., Yao, J.C.: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 67, 375–390 (2008)

  10. 10.

Rouhani, B.D., Kazmi, K.R., Rizvi, S.H.: A hybrid-extragradient-convex approximation method for a system of unrelated mixed equilibrium problems. Trans. Math. Program. Appl. 1(8), 82–95 (2013)

  11. 11.

    Popov, L.D.: A modification of the Arrow–Hurwicz method for searching for saddle points. Mat. Zametki 28(5), 777–784 (1980)

  12. 12.

    Malitsky, Y.V., Semenov, V.V.: A hybrid method without extrapolating step for solving variational inequality problems. J. Glob. Optim. 61, 193–202 (2015)

  13. 13.

    Dong, Q.L., Lu, Y.Y.: A new hybrid algorithm for a nonexpansive mapping. Fixed Point Theory Appl. 2015, 37, 22 pages (2015)

  14. 14.

    Kazmi, K.R., Rizvi, S.H., Ali, R.: A hybrid iterative method without extrapolating step for solving mixed equilibrium problem. Creative Math. Inform. 24(2), 165–172 (2015)

  15. 15.

Takahashi, W., Zembayashi, K.: Strong and weak convergence theorems for equilibrium problems and relatively nonexpansive mappings in Banach spaces. Nonlinear Anal. 70, 45–57 (2009)

  16. 16.

    Kazmi, K.R., Ali, R.: Common solution to an equilibrium problem and a fixed point problem for an asymptotically quasi-ϕ-nonexpansive mapping in intermediate sense. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 111, 877–889 (2017)

  17. 17.

    Kazmi, K.R., Ali, R., Yousuf, S.: Generalized equilibrium and fixed point problems for Bregman relatively nonexpansive mappings in Banach spaces. J. Fixed Point Theory Appl. 20, 151 (2018)

  18. 18.

    Dong, Q.L., Kazmi, K.R., Ali, R., Li, X.H.: Inertial Krasnoselskii–Mann type hybrid algorithms for solving hierarchical fixed point problems. J. Fixed Point Theory Appl. 21, 57 (2019)

  19. 19.

    Maingé, P.E.: Convergence theorem for inertial KM-type algorithms. J. Comput. Appl. Math. 219, 223–236 (2008)

  20. 20.

    Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)

  21. 21.

    Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)

  22. 22.

    Dong, Q.L., Yuan, H.B., Cho, Y.J., Rassias, T.M.: Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. 12(1), 87–102 (2018)

  23. 23.

Maingé, P.E.: Regularized and inertial algorithms for common fixed points of nonlinear operators. J. Math. Anal. Appl. 344, 876–887 (2008)

  24. 24.

    Reich, S.: Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 67, 274–276 (1979)

  25. 25.

    Dong, Q.L., Jiang, D., Cholamjiak, P., Shehu, Y.: A strong convergence result involving an inertial forward-backward algorithm for monotone inclusions. J. Fixed Point Theory Appl. 19, 3097–3118 (2017)

  26. 26.

    Cholamjiak, W., Cholamjiak, P., Suantai, S.: An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 20, 42 (2018)

  27. 27.

    Suantai, S., Cholamjiak, P., Pholasa, N.: The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 14, 1595–1615 (2018)

  28. 28.

    Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)

  29. 29.

    Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)

  30. 30.

    Yang, Q., Zhao, J.: Generalized KM theorems and their applications. Inverse Probl. 22, 833–844 (2006)

  31. 31.

    Bot, R.I., Csetnek, E.R., Hendrich, C.: Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 256, 472–487 (2015)

  32. 32.

    Attouch, H., Peypouquet, J., Redont, P.: A dynamical approach to an inertial forward-backward algorithm for convex minimization. SIAM J. Optim. 24(4), 232–256 (2014)

  33. 33.

    Lorenz, D., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51, 311–325 (2015)

  34. 34.

    Bot, R.I., Csetnek, E.R.: An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems. Numer. Algorithms 71, 519–540 (2016)

  35. 35.

    Chan, R.H., Ma, S., Yang, J.F.: Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 8(4), 2239–2267 (2015)

  36. 36.

    Xu, H.K.: Inequalities in Banach spaces with applications. Nonlinear Anal. 16, 1127–1138 (1991)

  37. 37.

    Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13, 938–945 (2002)

  38. 38.

    Matsushita, S., Takahashi, W.: Weak and strong convergence theorems for relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2004, 37–47 (2004)

  39. 39.

    Nakajo, K.: Strong convergence for gradient projection method and relatively nonexpansive mappings in Banach spaces. Appl. Math. Comput. 271, 251–258 (2017)

  40. 40.

Alber, Y.I.: Metric and generalized projection operators in Banach spaces: properties and applications. In: Kartsatos, A.G. (ed.) Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Lect. Notes Pure Appl. Math., vol. 178, pp. 15–50. Dekker, New York (1996)

  41. 41.

    Farid, M., Irfan, S.S., Khan, M.F., Khan, S.A.: Strong convergence of gradient projection method for generalized equilibrium problem in a Banach space. J. Inequal. Appl. 2017, 297 (2017)

  42. 42.

    He, S., Yang, C., Duan, P.: Realization of the hybrid method for Mann iterations. Appl. Math. Comput. 217, 4239–4247 (2010)


Acknowledgements

This work was supported by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (D-022-363-1440). The authors, therefore, gratefully acknowledge the DSR technical and financial support.

Availability of data and materials

Not applicable.

Funding

This work was supported by the King Abdulaziz University represented by the Deanship of Scientific Research on the inertial support for this research under grant number D-022-363-1440.

Author information

All authors contributed equally to this work. All authors read and approved the final manuscript.

Correspondence to Mohammad Farid.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


Cite this article

Alansari, M., Ali, R. & Farid, M. Strong convergence of an inertial iterative algorithm for variational inequality problem, generalized equilibrium problem, and fixed point problem in a Banach space. J Inequal Appl 2020, 42 (2020). https://doi.org/10.1186/s13660-020-02313-z


MSC

  • 47H09
  • 47H05
  • 47J25

Keywords

  • Generalized equilibrium problem
  • Variational inequality problem
  • Relatively nonexpansive mapping
  • Inertial hybrid iterative method