Strong convergence of an inertial iterative algorithm for variational inequality problem, generalized equilibrium problem, and fixed point problem in a Banach space
Journal of Inequalities and Applications volume 2020, Article number: 42 (2020)
Abstract
We propose and analyze an inertial iterative algorithm to approximate a common solution of a generalized equilibrium problem, a variational inequality problem, and a fixed point problem in the framework of a 2-uniformly convex and uniformly smooth real Banach space. Further, we study the convergence analysis of our proposed iterative method. Finally, we give an application and a numerical example to illustrate the applicability of the main algorithm.
1 Introduction
Let C be a nonempty closed convex subset of a real Banach space X, and let \(X^{*}\) be the dual space of X; the pairing between X and \(X^{*}\) is denoted by \(\langle \cdot ,\cdot \rangle \). The mapping \(J: X\to 2^{X^{*}}\) defined by
$$ Jx=\bigl\{ x^{*}\in X^{*}: \bigl\langle x,x^{*}\bigr\rangle = \Vert x \Vert ^{2}= \bigl\Vert x^{*} \bigr\Vert ^{2}\bigr\} , \quad \forall x\in X, $$
is called the normalized duality mapping.
Let \(g,b:C\times C\to \mathbb{R}\) be bifunctions, where \(\mathbb{R}\) is the set of real numbers. We study the generalized equilibrium problem (in short, GEP), which is to find \(x\in C\) such that
The solution set of (1.2) is denoted by \(\operatorname{Sol}(\operatorname{GEP}(\mbox{1.2}))\). If we take \(b(x,y)=0\), \(\forall x,y\in C\), then (1.2) reduces to the equilibrium problem (in short, EP): Find \(x\in C\) such that
$$ g(x,y)\geq 0, \quad \forall y\in C, $$
which was studied by Blum and Oettli [1]. The solution set of (1.3) is denoted by \(\operatorname{Sol}(\operatorname{EP}(\mbox{1.3}))\).
The equilibrium problem is of great importance in the development of various fields of science and engineering. It contains many mathematical problems as special cases, such as the variational inclusion problem, variational inequality problem, mathematical programming problem, saddle point problem, complementarity problem, Nash equilibrium problem in noncooperative games, minimization problem, minimax inequality problem, and fixed point problem (see [1–3]). If we take \(g(x,y)=h(y)-h(x)\), where \(h :C \to \mathbb{R}\) is a nonlinear function, then (1.3) becomes the optimization problem: Find \(x \in C\) such that
$$ h(x)\leq h(y), \quad \forall y\in C. $$
If we take \(g(x,y)=\langle y-x, Dx\rangle \), \(\forall x,y\in C\), where \(D:C\to X^{*}\) is a nonlinear mapping, then (1.3) becomes the variational inequality problem (in short, VIP): Find \(x\in C\) such that
$$ \langle y-x, Dx\rangle \geq 0, \quad \forall y\in C, $$
which was studied by Hartman and Stampacchia [4]. The set of solutions of (1.5) is denoted by \(\operatorname{Sol}(\operatorname{VIP}(\mbox{1.5}))\).
In 2006, using the extragradient iterative method for VIP(1.5) given in [5], Nadezhkina and Takahashi [6] introduced the following extragradient method and proved its strong convergence:
For further generalizations of iterative method (1.6), see [7–10].
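The basic two-projection extragradient step underlying (1.6) can be sketched numerically. The sketch below is only an illustration on an assumed toy operator (a rotation on \(\mathbb{R}^{2}\), which is monotone but not strongly monotone); it is Korpelevich's basic step [5], not the full hybrid method (1.6) of [6], which additionally intersects the sets \(C_{n}\), \(Q_{n}\):

```python
import numpy as np

def D(x):
    # Skew (rotation) operator: <D(x) - D(y), x - y> = 0, so D is monotone
    # and 1-Lipschitz; the unique solution of VIP(C, D) with C = R^2 is x = 0.
    return np.array([x[1], -x[0]])

def P_C(x):
    return x  # C = R^2 here, so the projection onto C is the identity

lam = 0.5  # step size; the extragradient method needs lam < 1/L with L = 1
x = np.array([1.0, 1.0])
for _ in range(300):
    y = P_C(x - lam * D(x))  # extrapolation step: u_n = P_C(x_n - r_n D x_n)
    x = P_C(x - lam * D(y))  # correction step: D is evaluated again at y
# x is now close to the solution 0
```

For this operator the plain projected-gradient step \(x \leftarrow P_{C}(x-\lambda Dx)\) increases the norm by the factor \(\sqrt{1+\lambda ^{2}}\) at every iteration, while the extragradient iterates contract; this is the standard motivation for the extrapolation step.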
One drawback of algorithm (1.6) is that each iteration requires two evaluations of the mapping D at different points and two projections onto the admissible set C. To overcome this drawback partially, adopting the idea of Popov [11], Malitsky and Semenov [12] recently showed that, with a different choice of \(C_{n}\), the extrapolation step \(u_{n}=P_{C}(x_{n}-r_{n} Dx_{n})\) can be dropped from (1.6), and they introduced the following iteration without an extrapolation step and proved its strong convergence:
where L is a Lipschitz constant and \(\lambda >0\), \(k>0\) are parameters. Note that algorithm (1.7) needs only one projection and one evaluation of D per iteration. The iterative method given in [12] extends the methods given in [5, 6]. Further, Dong and Lu [13] extended (1.7) and showed by a numerical example that their algorithm can be faster than algorithm (1.6). Very recently, Kazmi et al. [14] extended (1.7) to the mixed equilibrium problem.
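The saving can be made concrete with Popov's scheme [11] itself, which reuses the previous value \(Dy_{n}\) so that each iteration costs one projection and one evaluation of D. The sketch below is a hedged one-dimensional illustration with assumed data \(C=\mathbb{R}\), \(D(x)=x\); it is not the exact hybrid algorithm (1.7) of [12], whose halfspace \(C_{n}\) is omitted here:

```python
def D(x):
    return x  # monotone and 1-Lipschitz on R; the VIP solution is x = 0

lam = 0.2  # step size; Popov's analysis requires lam < 1/(3L)
x, y = 1.0, 1.0
for _ in range(200):
    dy = D(y)             # the ONLY operator evaluation in this iteration
    x_new = x - lam * dy
    y = x_new - lam * dy  # reuses dy instead of recomputing D at a new point
    x = x_new
# x is now close to the solution 0
```

Compare with the extragradient step, which would evaluate D twice per iteration.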
In 2009, Takahashi and Zembayashi [15] introduced the following iterative method and proved strong convergence for a relatively nonexpansive mapping to approximate a common solution of a fixed point problem and an equilibrium problem in a Banach space:
where \(\varPi _{C}:X\to C\) is the generalized projection. For further extensions of [13, 15], see [16–18].
On the other hand, Maingé [19] extended and unified the Krasnosel’skiı̌–Mann algorithm as follows:
for each \(n\geq 1\) and proved weak convergence for a nonexpansive mapping T under certain conditions.
The term \(\theta _{n}(x_{n}-x_{n-1})\) in (1.9) is called the inertial term. It plays a crucial role in speeding up the convergence of iterative method (1.9); for details see [19–27]. It is worth mentioning that, if we take \(\theta _{n}=0\), then iterative method (1.9) becomes a Krasnosel’skiı̌–Mann type iterative method; for details, see [28–30]. Due to this importance, a number of researchers have been working on inertial type methods; see, for details, the inertial Douglas–Rachford splitting method [31], inertial forward-backward splitting methods [32, 33], the inertial forward-backward-forward method [34], and inertial proximal ADMM [35]. Further, it is worth mentioning that the convergence analysis of inertial type iterative methods remains largely unexplored in the setting of Banach spaces.
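The role of \(\theta _{n}(x_{n}-x_{n-1})\) can be sketched with one common form of the inertial Krasnosel’skiı̌–Mann iteration. The map T, the constant parameters θ and α, and the iteration count below are illustrative assumptions, not taken from [19]:

```python
def T(x):
    return 0.5 * x  # nonexpansive on R with Fix(T) = {0}

theta, alpha = 0.3, 0.5  # inertial weight and relaxation parameter (assumed)
x_prev, x = 2.0, 1.0
for _ in range(100):
    w = x + theta * (x - x_prev)                   # inertial term theta_n (x_n - x_{n-1})
    x_prev, x = x, (1 - alpha) * w + alpha * T(w)  # KM relaxation toward T(w)
# x is now close to the fixed point 0
```

With θ = 0 this reduces to the plain Krasnosel’skiı̌–Mann iteration; the extrapolated point w reuses the direction of the last displacement, which is what speeds up convergence in practice.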
Therefore, inspired and motivated by the work in [12, 15, 19], we introduce and study a hybrid iterative algorithm for approximating a common solution of GEP(1.2), VIP(1.5), and a fixed point problem for a relatively nonexpansive mapping. Further, we prove a strong convergence theorem in a uniformly smooth and 2-uniformly convex Banach space. Finally, we give a numerical example to justify the main theorem and to demonstrate that our proposed inertial iterative algorithm is faster than the algorithms of [15, 16].
2 Preliminaries
Throughout, weak and strong convergence are denoted by the symbols ⇀ and →, respectively. Let \(N=\{x\in X: \|x\|=1\}\) denote the unit sphere of a Banach space X. X is said to be strictly convex if \(\frac{\|x+y\|}{2} < 1\), \(\forall x,y\in N\) with \(x\neq y\). If for any \(\varepsilon \in (0,2]\) there exists \(\delta >0\) such that
$$ \Vert x-y \Vert \geq \varepsilon \quad \Longrightarrow \quad \biggl\Vert \frac{x+y}{2} \biggr\Vert \leq 1-\delta , \quad \forall x,y\in N, $$
then X is said to be uniformly convex. Every uniformly convex Banach space is reflexive and strictly convex. X is said to be smooth if \(\lim_{t\to 0}\frac{\|x+ty\|-\|x\|}{t}\) exists for all \(x,y\in N\), and uniformly smooth if this limit is attained uniformly in \(x,y\in N\). X is said to enjoy the Kadec–Klee property if, for any sequence \(\{x_{n}\}\subset X\) and \(x\in X\) with \(x_{n}\rightharpoonup x\) and \(\|x_{n}\|\to \|x\|\), we have \(\|x_{n}-x\|\to 0\) as \(n\to \infty \). Every uniformly convex Banach space enjoys the Kadec–Klee property. Moreover, J is single-valued if X is smooth, J is uniformly norm-to-norm continuous on bounded subsets of X if X is uniformly smooth, and J is strictly monotone if X is strictly convex.
The Lyapunov function \(\phi : X\times X\to \mathbb{R}\) is defined by
$$ \phi (x,y)= \Vert x \Vert ^{2}-2\langle x,Jy\rangle + \Vert y \Vert ^{2}, \quad \forall x,y\in X. $$
It is obvious that
$$ \bigl( \Vert x \Vert - \Vert y \Vert \bigr)^{2}\leq \phi (x,y)\leq \bigl( \Vert x \Vert + \Vert y \Vert \bigr)^{2} $$
and
$$ \phi (x,y)=\phi (x,z)+\phi (z,y)+2\langle x-z,Jz-Jy\rangle , \quad \forall x,y,z\in X. $$
Remark 2.1
If X is a reflexive, strictly convex, and smooth Banach space, then \(\forall x,y\in X\), \(\phi (x,y)=0 \Leftrightarrow x=y\).
Lemma 2.2
([36])
Let X be a 2-uniformly convex Banach space. Then for all \(x,y\in X\) the following inequality holds:
$$ \Vert x-y \Vert \leq \frac{2}{c^{2}} \Vert Jx-Jy \Vert , $$
where \(c\in (0, 1]\) is the 2-uniformly convexity constant of X.
Lemma 2.3
([37])
Let X be a smooth and uniformly convex Banach space, and let \(\{x_{n}\}\) and \(\{y_{n}\}\) be two sequences in X such that either \(\{x_{n}\}\) or \(\{y_{n}\}\) is bounded. If \(\lim_{n\to \infty } \phi (x_{n},y_{n})=0\), then \(\lim_{n\to \infty }\|x_{n}-y_{n}\|=0\).
Remark 2.4
If \(\{x_{n}\}\) and \(\{y_{n}\}\) are bounded, then by using (2.5) it is obvious that the converse of Lemma 2.3 is also true.
Definition 2.1
Let \(T: C\to C\) be a mapping. Then:
- (i)
\(\operatorname{Fix}(T)=\{x\in C: Tx=x \}\) is the collection of all fixed points of T;
- (ii)
A point \(x_{0}\in C\) is defined as an asymptotic fixed point of T if C contains a sequence \(\{x_{n}\}\) such that \(x_{n}\rightharpoonup x_{0}\) and \(\lim_{n\to \infty }\|Tx_{n}-x _{n}\|=0\). \(\widehat{ \operatorname{Fix}}(T)\) denotes the collection of all asymptotic fixed points of T;
- (iii)
T is said to be relatively nonexpansive if
$$ \widehat{\operatorname{Fix}}(T)={\operatorname{Fix}}(T)\neq \emptyset \quad \text{and} \quad \phi (p,Tx)\leq \phi (p,x), \quad \forall x\in C, p\in { \operatorname{Fix}}(T). $$
Lemma 2.5
([38])
Let X be a reflexive, strictly convex, and smooth Banach space, and let C be a nonempty closed convex subset of X. Let \(T:C\to C\) be a relatively nonexpansive mapping. Then \({\operatorname{Fix}}(T)\) is a closed convex subset of C.
Lemma 2.6
([39])
Let C be a nonempty closed convex subset of X, and let D be a monotone and hemicontinuous mapping of C into \(X^{*}\). Then \({\operatorname{VIP}}(C, D)\) is closed and convex.
Lemma 2.7
([37])
Let C be a nonempty closed convex subset of a real reflexive, strictly convex, and smooth Banach space X, and let \(x\in X\). Then there exists a unique element \(x_{0}\in C\) such that \(\phi (x_{0},x)=\inf_{y\in C}\phi (y,x)\).
Definition 2.2
([40])
A mapping \(\varPi _{C}: X\to C\) is said to be a generalized projection if, for any point \(x\in X\), \(\varPi _{C}{x}=\bar{x}\), where x̄ is a solution of the minimization problem \(\phi (\bar{x},x)=\inf_{y\in C}\phi (y,x)\).
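In a Hilbert space \(J=I\) and \(\phi (x,y)=\|x-y\|^{2}\), so the generalized projection \(\varPi _{C}\) reduces to the ordinary metric projection \(P_{C}\). A minimal sketch for \(C=[\mathit{lo},\mathit{hi}]\subset \mathbb{R}\) (the interval bounds are assumed for illustration):

```python
def proj_interval(x, lo, hi):
    # Metric projection onto C = [lo, hi]: the unique minimizer of |y - x|
    # over y in C; in a Hilbert space this coincides with Pi_C.
    return min(max(x, lo), hi)
```

For example, `proj_interval(2.5, 0.0, 1.0)` returns 1.0 (clamping from above), while points already in C are left unchanged.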
Lemma 2.8
([40])
Let X be a reflexive, strictly convex, and smooth Banach space, and let C be a nonempty closed convex subset of X. Then
$$ \phi (x,\varPi _{C}y)+\phi (\varPi _{C}y,y)\leq \phi (x,y), \quad \forall x\in C, y\in X. $$
Lemma 2.9
([40])
Let X be a reflexive, strictly convex, and smooth Banach space, let C be a nonempty closed convex subset of X, and let \(x\in X\) and \(z\in C\). Then
$$ z=\varPi _{C}x \quad \Longleftrightarrow \quad \langle z-y, Jx-Jz\rangle \geq 0, \quad \forall y\in C. $$
Assumption 2.1
Let \(g:C\times C\longrightarrow \mathbb{R}\) be a bifunction satisfying the following:
- (i)
\(g(x,x)=0\), \(\forall x \in C\);
- (ii)
g is monotone, that is, \(g(x,y)+g(y,x)\leq 0\), \(\forall x,y \in C\);
- (iii)
\(\limsup_{t \to 0^{+}} g(tz+(1-t)x,y)\leq g(x,y)\), \(\forall x,y,z \in C\);
- (iv)
For each \(x \in C\), \(y\to g(x,y)\) is convex and lower semicontinuous.
Assumption 2.2
Let \(b: C\times C\to \mathbb{R}\) be a bifunction satisfying the following:
- (i)
b is skew-symmetric, i.e., \(b(x,x)-b(x,y)-b(y,x)+b(y,y) \geq 0\), \(\forall x,y\in C\);
- (ii)
b is convex in the second argument;
- (iii)
b is continuous.
Lemma 2.10
([41])
Let C be a closed convex subset of a uniformly smooth, strictly convex, and reflexive Banach space X, let \(g:C\times C\longrightarrow \mathbb{R}\) be a bifunction satisfying Assumption 2.1, and let \(b:C\times C\to \mathbb{R}\) satisfy Assumption 2.2. For all \(r>0\) and \(x\in X\), define a mapping \(T_{r}: X\to C\) as follows:
Then the following hold:
- (a)
\(T_{r}{x}\) is single-valued;
- (b)
\(T_{r}\) is a firmly nonexpansive type mapping, i.e., for all \(x, y\in X\),
$$ \langle T_{r}{x}-T_{r}{y},JT_{r}{x}-JT_{r}{y} \rangle \leq \langle T_{r}{x}-T_{r}{y},Jx-Jy \rangle ; $$
- (c)
\(\operatorname{Fix}(T_{r})= \operatorname{Sol}(\operatorname{GEP}(\mbox{1.2}))\) is closed and convex;
- (d)
\(T_{r}\) is quasi-ϕ-nonexpansive;
- (e)
\(\phi (q,T_{r}{x})+\phi (T_{r}{x},x)\leq \phi (q,x)\), \(\forall q\in \operatorname{Fix}(T_{r})\).
In the sequel, we make use of the function \(\varPhi :X\times X^{*}\to \mathbb{R}\) defined by
$$ \varPhi \bigl(x,x^{*}\bigr)= \Vert x \Vert ^{2}-2\bigl\langle x,x^{*}\bigr\rangle + \bigl\Vert x^{*} \bigr\Vert ^{2}, \quad \forall x\in X, x^{*}\in X^{*}. $$
Observe that \(\varPhi (x,x^{*})=\phi (x, J^{-1}x^{*})\).
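This identity can be checked in one line, assuming the standard definitions \(\varPhi (x,x^{*})=\|x\|^{2}-2\langle x,x^{*}\rangle +\|x^{*}\|^{2}\) and \(\phi (x,y)=\|x\|^{2}-2\langle x,Jy\rangle +\|y\|^{2}\), together with \(\|J^{-1}x^{*}\|=\|x^{*}\|\):

```latex
\phi\bigl(x, J^{-1}x^{*}\bigr)
  = \Vert x\Vert^{2} - 2\bigl\langle x, J\bigl(J^{-1}x^{*}\bigr)\bigr\rangle
    + \bigl\Vert J^{-1}x^{*}\bigr\Vert^{2}
  = \Vert x\Vert^{2} - 2\bigl\langle x, x^{*}\bigr\rangle
    + \bigl\Vert x^{*}\bigr\Vert^{2}
  = \varPhi\bigl(x, x^{*}\bigr).
```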
Lemma 2.11
([40])
Let X be a smooth, strictly convex, and reflexive Banach space with \(X^{*}\) as its dual. Then
$$ \varPhi \bigl(x,x^{*}\bigr)+2\bigl\langle J^{-1}x^{*}-x,y^{*}\bigr\rangle \leq \varPhi \bigl(x,x^{*}+y^{*}\bigr), \quad \forall x\in X, x^{*},y^{*}\in X^{*}. $$
3 Main result
In this section, we prove a strong convergence theorem for the inertial hybrid iterative algorithm to approximate a common solution of GEP(1.2), VIP(1.5), and a fixed point problem for a relatively nonexpansive mapping in a uniformly smooth and 2-uniformly convex real Banach space.
Iterative Algorithm 3.1
Let the sequences \(\{x_{n}\}\) and \(\{z_{n}\}\) be generated by the iterative algorithm:
where \(\{\alpha _{n}\}\subset [0,1]\), \(r_{n}\in [a,\infty )\) for some \(a>0\), \(\{\theta _{n}\}\subset (0,1)\), and \(\{\mu _{n}\}\subset (0,\infty )\).
Theorem 3.2
Let C be a nonempty, closed, and convex subset of a 2-uniformly convex and uniformly smooth real Banach space X, and let \(X^{*}\) be the dual of X. Let \(D:X\to X^{*}\) be a γ-inverse strongly monotone mapping with constant \(\gamma \in (0,1)\); let \(g:C\times C\to \mathbb{R}\) be a bifunction satisfying Assumption 2.1, and let \(b:C\times C\to \mathbb{R}\) satisfy Assumption 2.2. Let \(T:C\to C\) be a relatively nonexpansive mapping such that \(\varGamma := \operatorname{Sol}(\operatorname{GEP}(\mbox{1.2}))\cap \operatorname{Sol}(\operatorname{VIP}(\mbox{1.5}))\cap {\operatorname{Fix}}(T) \neq \emptyset \). Let the sequences \(\{x_{n}\}\) and \(\{z_{n}\}\) be generated by iterative algorithm (3.1) with control sequences \(\{\alpha _{n}\}\subset [0,1]\) such that \(\lim_{n\to \infty }\alpha _{n}=0\), \(r_{n}\in [a,\infty )\) for some \(a>0\), \(\{\theta _{n}\}\subset (0,1)\), and \(\{\mu _{n}\}\subset (0,\infty )\) satisfying \(0<\liminf_{n\to \infty }\mu _{n}\leq \limsup_{n\to \infty }\mu _{n}<\frac{c^{2}\gamma }{2}\), where c is the 2-uniformly convexity constant of X. Then \(\{x_{n}\}\) converges strongly to \(\hat{x}\in \varGamma \), where \(\hat{x}=\varPi _{\varGamma } x_{0}\) and \(\varPi _{\varGamma }x_{0}\) is the generalized projection of X onto Γ.
We now give some lemmas needed for the main result.
Lemma 3.3
For each \(n\geq 0\), Γ and \(C_{n}\cap Q_{n}\) are closed and convex.
Proof
It follows from Lemmas 2.5–2.6 and Lemma 2.10 that Γ is a nonempty closed and convex set, and hence \(\varPi _{\varGamma }x_{0}\) is well defined. Evidently, \(C_{0}=C\) is closed and convex. Further, the closedness of \(C_{n}\) is also obvious. We only prove the convexity of \(C_{n}\). For \(q_{1}, q_{2}\in C_{n}\), we have \(q_{1}, q_{2}\in C\), \(tq_{1}+(1-t)q_{2}\in C\), where \(t\in (0,1)\), and
and
The above two inequalities are equivalent to
and
It follows from (3.4) and (3.5) that
Hence, we have
which implies that \(tq_{1}+(1-t)q_{2}\in C_{n}\), hence \(C_{n}\) is closed and convex for all \(n\geq 0\). By using the definition of \(Q_{n}\), it is obvious that \(Q_{n}\) is closed and convex. This implies that \(C_{n}\cap Q_{n}\), \(\forall n\geq 0\) is closed and convex. □
Lemma 3.4
For each \(n\geq 0\), \(\varGamma \subset C_{n}\cap Q_{n}\), and the sequence \(\{x_{n}\}\) is well defined.
Proof
Let \(p\in \varGamma \); then we have
Additionally, by Lemmas 2.2 and 2.11, we obtain
which, combined with \(\mu _{n}<\frac{c^{2}\gamma }{2}\), leads to
which implies that \(p\in C_{n}\). Thus, \(\varGamma \subset C_{n}\), \(\forall n\geq 0\). Next, we show by induction that \(\varGamma \subset C _{n}\cap Q_{n}\), \(\forall n\geq 0\). Since \(Q_{0}=C\), we have \(\varGamma \subset C_{0}\cap Q_{0}\). Let \(\varGamma \subset C_{k}\cap Q_{k}\) for some \(k> 0\). Then there exists \(x_{k+1}\in C_{k}\cap Q _{k}\) such that \(x_{k+1}=\varPi _{C_{k}\cap Q_{k}}x_{0}\). From the definition of \(x_{k+1}\), we have, for all \(z\in C_{k}\cap Q_{k}\), that \(\langle x_{k+1}-z,Jx_{0}-Jx_{k+1}\rangle \geq 0\). Since \(\varGamma \subset C_{k}\cap Q_{k}\), we have
and hence \(p\in Q_{k+1}\). Thus, we obtain \(\varGamma \subset C_{k+1} \cap Q_{k+1}\) as \(\varGamma \subset C_{n}\) for all n. Therefore, \(\varGamma \subset C_{n}\cap Q_{n}\), \(\forall n\geq 0\), and hence \(x_{n+1}=\varPi _{C_{n}\cap Q_{n}}x_{0}\) is well defined \(\forall n \geq 0\). Thus, \(\{x_{n}\}\) is well defined. □
Lemma 3.5
The sequences \(\{x_{n}\}\), \(\{y_{n}\}\), \(\{z_{n}\}\), \(\{w_{n}\}\), and \(\{u_{n}\}\) generated by iterative algorithm (3.1) are bounded.
Proof
By the definition of \(Q_{n}\), \(x_{n}=\varPi _{Q_{n}}x_{0}\). Using \(x_{n}=\varPi _{Q_{n}}x_{0}\) and Lemma 2.8, we obtain
This shows that \(\{\phi (x_{n}, x_{0})\}\) is bounded and hence from (2.3) that \(\{x_{n}\}\) is bounded. Further,
implies that \(\{\phi (p,x_{n})\}\) is bounded, and by the fact that \(\phi (p,Tx_{n})\leq \phi (p,x_{n})\), \(\forall p\in \varGamma \), \(\{Tx_{n}\}\) is also bounded. Therefore, \(\{w_{n}\}\) and \(\{y_{n}\}\) are also bounded. Now set \(M=\max \{\phi (p,z_{0}),\sup_{n}\phi (p,w_{n})\}\). Then obviously \(\phi (p,z_{0})\leq M\). Let \(\phi (p,z_{n})\leq M\) for some n; then from (3.11)
Thus, \(\{\phi (p,z_{n+1})\}\) is bounded and hence \(\{z_{n}\}\) is also bounded. □
Lemma 3.6
We have \(x_{n}\to \hat{x}\), \(u_{n}\to \hat{x}\), and \(z_{n+1}\to \hat{x}\) as \(n\to \infty \), where x̂ is some point in C.
Proof
Since \(x_{n+1}=\varPi _{C_{n}\cap Q_{n}}x_{0}\in Q_{n}\) and \(x_{n}=\varPi _{Q_{n}}x_{0}\), we get
This shows that \(\{\phi (x_{n},x_{0})\}\) is nondecreasing, and hence, by the boundedness of \(\{\phi (x_{n},x_{0})\}\), \(\lim_{n\to \infty } \phi (x_{n},x_{0})\) exists. Further,
and hence
Since X is uniformly convex and smooth, by Lemma 2.3, we have
Since X is reflexive and \(\{x_{n}\}\) is bounded, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}}\rightharpoonup \hat{x}\). Since \(C_{n}\cap Q_{n}\) is closed and convex, \(\hat{x} \in C_{n}\cap Q_{n}\). Using weak lower semicontinuity of \(\|\cdot \| ^{2}\), we obtain
which implies that \(\lim_{k\to \infty }\phi (x_{n_{k}},x_{0})= \phi (\hat{x},x_{0})\), and hence we have \(\lim_{k\to \infty } \|x_{n_{k}}\|= \|\hat{x}\|\). Further, from the Kadec–Klee property of X, \(x_{n_{k}}\to \hat{x}\) as \(k\to \infty \). Since \(\lim_{n\to \infty }\phi (x_{n},x_{0})\) exists, \(\lim_{n\to \infty }\phi (x_{n},x_{0})= \phi (\hat{x},x_{0})\). If there exists some subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{j}}\to \tilde{x}\) as \(j \to \infty \), then
which shows \(\hat{x}=\tilde{x}\) and thus \(x_{n} \to \hat{x}\) as \(n\to \infty \).
From the definition of \(w_{n}\), we have \(\|w_{n}-x_{n}\|=\|\theta _{n}(x _{n}-x_{n-1})\|\leq \|x_{n}-x_{n-1}\|\), which implies by (3.14) that
Since \(\{w_{n}\}\) is bounded, by Remark 2.4 we get
and, again by Remark 2.4,
Since \(x_{n+1}=\varPi _{C_{n}\cap Q_{n}}x_{0}\in C_{n}\), we have
By (3.16), (3.19), and assumption \(\lim_{n\to \infty }\alpha _{n}=0\),
Using (2.3), we get
and using \(\lim_{n\to \infty }\|x_{n}\|=\|\hat{x}\|\), we have
Hence, we have
This shows that \(\{\|Jz_{n+1}\|\}\) is bounded. Since X and \(X^{*}\) are reflexive, we may assume that \(Jz_{n+1}\rightharpoonup x ^{*} \in X^{*}\). By reflexivity of X, we see that \(J(X)=X^{*}\), that is, there exists \(x\in X\) such that \(Jx=x^{*}\). Since
By using \(\liminf_{n\to \infty }\) in the above equality, we have
i.e., \(\hat{x}=x\), and hence \(x^{*}=J\hat{x}\). This implies that \(Jz_{n+1}\rightharpoonup J\hat{x}\in X^{*}\). Since \(X^{*}\) enjoys the Kadec–Klee property, from (3.21) we obtain
As \(J^{-1}: X^{*}\to X\) is demicontinuous, \(z_{n+1}\rightharpoonup \hat{x}\). Using (3.20) and the Kadec–Klee property of X,
Next, by using the weak lower semicontinuity of \(\|\cdot \|^{2}\), we arrive at
and hence
Since \(x_{n}\to \hat{x}\) as \(n\to \infty \), using (3.22) we have
Since J is uniformly continuous, we get
By using the definition of Lyapunov function, we have, \(\forall p \in \varGamma \),
Again, by using weak lower semicontinuity of \(\|\cdot \|^{2}\), we obtain
which implies
Thus, for any \(p\in \varGamma \subset C_{n}\) and by (3.8) and (3.10),
From (3.23), (3.28), (3.29), and \(\lim_{n\to \infty }\alpha _{n}=0\),
From Lemma 2.10(e), we obtain, for any \(p\in \varGamma \) and \(z_{n+1}=T_{r_{n}}u_{n}\),
It follows from (3.23), (3.30), and (3.31) that
and hence from (2.3) we have
From (3.20)
and hence
i.e., \(\{\|Ju_{n}\|\}\) is bounded in \(X^{*}\). Since \(X^{*}\) is reflexive, we can assume that \(Ju_{n}\rightharpoonup u^{*}\in X^{*}\) as \(n\to \infty \). Since \(J(X)=X^{*}\), there exists \(u\in X\) such that \(Ju=u^{*}\). Since
By using \(\liminf_{n\to \infty }\) in the above equality, we have
Using Remark 2.1, we have \(\hat{x}=u\), i.e., \(u^{*}=J\hat{x}\). Therefore \(Ju_{n}\rightharpoonup J\hat{x}\in X^{*}\). From the Kadec–Klee property of \(X^{*}\) and (3.33), we obtain
As \(J^{-1}\) is demicontinuous, (3.34) implies \(u_{n}\rightharpoonup \hat{x}\). From the Kadec–Klee property of X and (3.32), we obtain
 □
Proof of Theorem 3.2
which implies that
and hence from (3.28), (3.30), (3.35), and \(\lim_{n\to \infty }\alpha _{n}=0\), \(\mu _{n}(\gamma -\frac{2\mu _{n}}{c^{2}})>0\),
Since D is γ-inverse strongly monotone, it is \(\frac{1}{ \gamma }\)-Lipschitz continuous. It follows from \(\lim_{n\to \infty }w_{n}=\hat{x}\) and (3.36) that \(\hat{x}\in D^{-1}(0)\). Hence \(\hat{x}\in \operatorname{Sol}( \operatorname{VIP}(\mbox{1.5}))\).
Furthermore, (3.1) combined with (3.36) yields that
Using Lemmas 2.2 and 2.11, we estimate
then using (3.36) we have
which implies by Lemma 2.3 that
As \(\lim_{n\to \infty }\phi (z_{n+1},u_{n})=0\), hence from Lemma 2.3 we have
By the uniform continuity of J,
Since \(r_{n}\geq a\) and using (3.42), we get
By \(z_{n+1}=T_{r_{n}}u_{n}\), we have
It follows from Assumption 2.1(ii) that
Letting \(n\to \infty \), by (3.43) and the lower semicontinuity of \(g\) in its second argument, we have
For \(t\in (0,1]\) and \(y\in C\), set \(y_{t}:=ty+(1-t)\hat{x}\); then \(y_{t}\in C\), and thus
It follows from Assumption 2.1(i)–(iv) that
Letting \(t\to 0^{+}\), we have from Assumption 2.1(iii)
Therefore \(\hat{x}\in \operatorname{Sol}(\operatorname{GEP} (\mbox{1.2}))\).
Next, we show that \(\hat{x}\in \operatorname{Fix}(T)\). In view of \(u_{n}=J^{-1}(\alpha _{n}Jz_{n}+(1-\alpha _{n})JTy_{n})\), we have
Hence, we have
From assumption \(\lim_{n\to \infty }\alpha _{n}=0\), (3.42), and (3.44), we obtain
Further, using (3.40) and (3.46), the inequality
implies
From (3.17), (3.40), and (3.41) it follows that \(\{x_{n}\}\), \(\{y_{n}\}\), \(\{u_{n}\}\), \(\{w_{n}\}\), and \(\{z_{n}\}\) all have the same asymptotic behavior, hence from (3.47) we have
which implies that \(\hat{x}= T\hat{x}\), i.e., \(\hat{x}\in \operatorname{Fix}(T)\). Then \(\hat{x}\in \varGamma \).
Finally, we show \(\hat{x}= \varPi _{\varGamma }x_{0}\). Taking \(k\to \infty \) in (3.12), we have
Now, by Lemma 2.9, \(\hat{x}= \varPi _{\varGamma }x_{0}\). This completes the proof. □
If X is a Hilbert space, then \(J=I\) and \(\phi (x,y)=\|x-y\|^{2}\), \(\forall x,y\in C\). Then from Theorem 3.2 we get the following corollaries.
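Indeed, with the Lyapunov function in its standard form \(\phi (x,y)=\|x\|^{2}-2\langle x,Jy\rangle +\|y\|^{2}\) and \(J=I\) in a Hilbert space, the reduction is immediate:

```latex
\phi(x,y) = \Vert x\Vert^{2} - 2\langle x, y\rangle + \Vert y\Vert^{2}
          = \Vert x - y\Vert^{2}, \qquad \forall x,y \in X.
```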
Corollary 3.1
Let C be a nonempty, closed, and convex subset of a Hilbert space X. Let \(D:X\to X\) be a γ-inverse strongly monotone mapping with constant \(\gamma \in (0,1)\), let \(g:C\times C\to \mathbb{R}\) be a bifunction satisfying Assumption 2.1, and let \(b:C\times C\to \mathbb{R}\) satisfy Assumption 2.2. Let \(T:C\to C\) be a nonexpansive mapping such that \(\varGamma := \operatorname{Sol}(\operatorname{GEP}(\mbox{1.2}))\cap \operatorname{Sol}(\operatorname{VIP}(\mbox{1.5}))\cap \operatorname{Fix}(T) \neq \emptyset \). Let the sequences \(\{x_{n}\}\) and \(\{z_{n}\}\) be generated by the iterative algorithm:
where \(\{\alpha _{n}\}\) is a sequence in \([0,1]\) such that \(\lim_{n\to \infty }\alpha _{n}=0\), \(r_{n}\in [a,\infty )\) for some \(a>0\), \(\{\theta _{n}\}\subset (0,1)\), and \(\{\mu _{n}\}\subset (0,\infty )\) satisfying \(0<\liminf_{n\to \infty }\mu _{n}\leq \limsup_{n\to \infty }\mu _{n}<\frac{c^{2}\gamma }{2}\), where c is the 2-uniformly convexity constant of X (\(c=1\) in a Hilbert space). Then \(\{x_{n}\}\) converges strongly to \(\hat{x}\in \varGamma \), where \(\hat{x}=\varPi _{\varGamma } x_{0}\) and \(\varPi _{\varGamma }\) is the generalized (here, metric) projection of X onto Γ.
4 Numerical example
Let \(X=\mathbb{R}\) be a Hilbert space with the norm \(\|x\|=|x|\), \(\forall x\in X\). We now give a numerical example that justifies Theorem 3.2.
Example 4.1
Let \(X=\mathbb{R}\), \(C=X\), where X is a Hilbert space, and let \(g:C\times C\to \mathbb{R}\) be defined by \(g(x,y)=x(y-x)\), \(\forall x,y \in C\), and \(b:C\times C\to \mathbb{R}\) be defined by \(b(x,y)=xy\), \(\forall x,y\in C\). Let the mapping \(D:\mathbb{R}\to \mathbb{R}\) be defined by \(Dx=\frac{x}{2}\), and let \(T: C\to C\) be defined by \(Tx=\frac{1}{3}x\). Setting \(\{\mu _{n}\}=\{\frac{0.9}{n}\}\), \(r_{n}=\frac{1}{4}\), \(\theta _{n}=0.9\), and \(\{\alpha _{n}\}=\{\frac{1}{n^{3}}\}\), \(\forall n\geq 0\), the sequences \(\{x_{n}\}\), \(\{u_{n}\}\), and \(\{z_{n}\}\) generated by hybrid iterative algorithm (3.1) converge to \(\hat{x}=0\in \varGamma \).
Proof
Note that, for the case \(C=X\), where X is a Hilbert space, the sets \(C_{n}\) and \(Q_{n}\) in iterative algorithm (3.1) are half-spaces. Therefore, the projection onto the intersection of \(C_{n}\) and \(Q_{n}\) can be computed by a formula similar to the one given in [42]. It is easy to observe that g and b satisfy Assumption 2.1 and Assumption 2.2, respectively, and that \(\operatorname{Sol}(\operatorname{GEP}(\mbox{1.2}))=\{0\}\neq \emptyset \). Further, it is easy to observe that D is a \(\frac{1}{2}\)-inverse strongly monotone mapping with \(\operatorname{Sol}(\operatorname{VIP}(\mbox{1.5}))=\{0\}\neq \emptyset \), and that T is a relatively nonexpansive mapping with \(\operatorname{Fix}(T)=\{0\}\). Therefore, \(\varGamma :=\operatorname{Sol}(\operatorname{GEP}(\mbox{1.2})) \cap \operatorname{Sol}(\operatorname{VIP}(\mbox{1.5}))\cap \operatorname{Fix}(T)=\{0\}\neq \emptyset \). After simplification, hybrid iterative scheme (3.1) reduces to the following scheme: Given initial values \(x_{0}\), \(x_{1}\), \(z_{0}\),
Finally, using Matlab 7.8.0, we obtain Figures 1 and 2 and Table 1, which show that \(\{x_{n}\}\), \(\{u_{n}\}\), and \(\{z_{n}\}\) converge to \(\hat{x}=0\) as \(n \to +\infty \).
 □
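The ingredients of Example 4.1 can also be checked numerically. The sketch below verifies the \(\frac{1}{2}\)-inverse strong monotonicity of D and the skew-symmetry of b on sample points, and runs a simplified inertial iteration toward the common solution 0. The loop is an illustrative simplification with an assumed constant step \(\mu =0.2\) (within the bound \(\mu <\frac{c^{2}\gamma }{2}=0.25\)); it is not the exact reduced scheme (4.1):

```python
D = lambda x: x / 2.0   # candidate 1/2-inverse strongly monotone mapping
T = lambda x: x / 3.0   # nonexpansive with Fix(T) = {0}
b = lambda x, y: x * y  # candidate skew-symmetric bifunction

for x, y in [(1.0, -2.0), (3.5, 0.2), (-1.0, 4.0)]:
    # gamma-inverse strong monotonicity: <Dx - Dy, x - y> >= gamma |Dx - Dy|^2
    assert (D(x) - D(y)) * (x - y) >= 0.5 * (D(x) - D(y)) ** 2
    # skew-symmetry: b(x,x) - b(x,y) - b(y,x) + b(y,y) = (x - y)^2 >= 0
    assert b(x, x) - b(x, y) - b(y, x) + b(y, y) >= 0

theta, mu = 0.9, 0.2      # theta_n = 0.9 as in Example 4.1; mu assumed constant
x_prev, x = 1.0, 2.0
for _ in range(200):
    w = x + theta * (x - x_prev)  # inertial step
    y = w - mu * D(w)             # gradient-type step for VIP(1.5)
    x_prev, x = x, T(y)           # fixed-point step for T
# x is now close to the common solution 0
```

The iterates approach \(\hat{x}=0\), in agreement with \(\varGamma =\{0\}\).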
For \(D=0\), \(b(x,y)=0\), we now demonstrate that iterative algorithm (3.1), under the conditions given in Theorem 3.2, approximates a common element of the solution set of \(\operatorname{EP}(\mbox{1.3})\) and the fixed point set of T. Further, we observe that it is faster than iterative algorithm (3.1) of [16] and iterative algorithm (1.8) of [15] for a nonexpansive mapping.
Setting \(D=0\), \(b(x,y)=0\) in Example 4.1, iterative algorithm (4.1), iterative algorithm (3.1) of [16], and iterative algorithm (1.8) of [15] reduce to the following iterative algorithms:
Iterative Algorithm 4.2
Given initial values \(x_{0}\), \(x_{1}\), \(z_{0}\),
and
Iterative Algorithm 4.3
Given initial values \(x_{0}\), \(z_{0}\),
and
Iterative Algorithm 4.4
Given initial values \(x_{0}\),
respectively.
Hence, the sequence \(\{x_{n}\}\) defined by Iterative Algorithm 4.2, Iterative Algorithm 4.3 as well as by Iterative Algorithm 4.4 converges strongly to \(\hat{x}=0\).
Finally, using Matlab 7.8.0, we obtain the following figures, which show that the sequence \(\{x_{n}\}\) converges to \(\hat{x}=0 \in \varOmega \). Figure 3 shows the convergence of \(\{x_{n}\}\) for \(x_{0} = 1\), \(x_{1}=2\), \(z_{0} = 3\) for Algorithms 4.2, 4.3, and 4.4, while Fig. 4 shows the convergence of \(\{x_{n}\}\) for \(x_{0} = -1\), \(x_{1}=-2\), \(z_{0} =-3\) for Algorithms 4.2–4.3. It is evident from the figures that the sequence \(\{x_{n}\}\) obtained by Iterative Algorithm 4.2 converges faster than those obtained by Iterative Algorithms 4.3 and 4.4.
Concluding remark 4.1
We observe that
- (i)
Iterative algorithm (3.1) is quite different from algorithm (1.8) given by Takahashi and Zembayashi [15] and algorithm (3.1) given in [16].
- (ii)
Corollary 3.1 is new and different from Theorem 3.2 of Takahashi and Zembayashi [15] and scheme (1.9) of Maingé [19].
- (iii)
A numerical example was given to demonstrate the efficiency of the proposed hybrid inertial iterative algorithm; that is, the proposed algorithms in Theorem 3.2 and Corollary 3.1 with \(D=0\) and \(b(x,y)=0\) converge faster than the algorithms presented in [16] and [15].
References
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
Daniele, P., Giannessi, F., Mougeri, A.E.: Equilibrium Problems and Variational Models. Nonconvex Optimization and Its Application, vol. 68. Kluwer Academic, Norwell (2003)
Moudafi, A.: Second order differential proximal methods for equilibrium problems. J. Inequal. Pure Appl. Math. 4(1), Art. 18 (2003)
Hartman, P., Stampacchia, G.: On some non-linear elliptic differential-functional equations. Acta Math. 115, 271–310 (1966)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)
Nadezhkina, N., Takahashi, W.: Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz continuous monotone mapping. SIAM J. Optim. 16(40), 1230–1241 (2006)
Ceng, L.C., Hadjisavvas, N., Wong, N.C.: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 46, 635–646 (2010)
Ceng, L.C., Guu, S.M., Yao, J.C.: Finding common solution of variational inequality, a general system of variational inequalities and fixed point problem via a hybrid extragradient method. Fixed Point Theory Appl. 2011, Article ID 626159, 22 pages (2011)
Ceng, L.C., Wang, C.Y., Yao, J.C.: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 67, 375–390 (2008)
Rouhani, B.D., Kazmi, K.R., Rizvi, S.H.: A hybrid-extragradient-convex approximation method for a system of unrelated mixed equilibrium problems. Trans. Math. Program. Appl. 1(8), 82–95 (2013)
Popov, L.D.: A modification of the Arrow–Hurwicz method for searching for saddle points. Mat. Zametki 28(5), 777–784 (1980)
Malitsky, Y.V., Semenov, V.V.: A hybrid method without extrapolating step for solving variational inequality problems. J. Glob. Optim. 61, 193–202 (2015)
Dong, Q.L., Lu, Y.Y.: A new hybrid algorithm for a nonexpansive mapping. Fixed Point Theory Appl. 2015, 37, 22 pages (2015)
Kazmi, K.R., Rizvi, S.H., Ali, R.: A hybrid iterative method without extrapolating step for solving mixed equilibrium problem. Creative Math. Inform. 24(2), 165–172 (2015)
Takahashi, W., Zembayashi, K.: Strong and weak convergence theorem for equilibrium problems and relatively nonexpansive mappings in Banach spaces. Nonlinear Anal. 70, 45–57 (2009)
Kazmi, K.R., Ali, R.: Common solution to an equilibrium problem and a fixed point problem for an asymptotically quasi-ϕ-nonexpansive mapping in intermediate sense. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 111, 877–889 (2017)
Kazmi, K.R., Ali, R., Yousuf, S.: Generalized equilibrium and fixed point problems for Bregman relatively nonexpansive mappings in Banach spaces. J. Fixed Point Theory Appl. 20, 151 (2018)
Dong, Q.L., Kazmi, K.R., Ali, R., Li, X.H.: Inertial Krasnoselskii–Mann type hybrid algorithms for solving hierarchical fixed point problems. J. Fixed Point Theory Appl. 21, 57 (2019)
Maingé, P.E.: Convergence theorem for inertial KM-type algorithms. J. Comput. Appl. Math. 219, 223–236 (2008)
Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Dong, Q.L., Yuan, H.B., Cho, Y.J., Rassias, T.M.: Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. 12(1), 87–102 (2018)
Maingé, P.E.: Regularized and inertial algorithms for common fixed points of nonlinear operators. J. Math. Anal. Appl. 344, 876–877 (2008)
Reich, S.: Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 67, 274–276 (1979)
Dong, Q.L., Jiang, D., Cholamjiak, P., Shehu, Y.: A strong convergence result involving an inertial forward-backward algorithm for monotone inclusions. J. Fixed Point Theory Appl. 19, 3097–3118 (2017)
Cholamjiak, W., Cholamjiak, P., Suantai, S.: An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 20, 42 (2018)
Suantai, S., Cholamjiak, P., Pholasa, N.: The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 14, 1595–1615 (2018)
Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)
Yang, Q., Zhao, J.: Generalized KM theorems and their applications. Inverse Probl. 22, 833–844 (2006)
Bot, R.I., Csetnek, E.R., Hendrich, C.: Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 256, 472–487 (2015)
Attouch, H., Peypouquet, J., Redont, P.: A dynamical approach to an inertial forward-backward algorithm for convex minimization. SIAM J. Optim. 24(4), 232–256 (2014)
Lorenz, D., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51, 311–325 (2015)
Bot, R.I., Csetnek, E.R.: An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems. Numer. Algorithms 71, 519–540 (2016)
Chan, R.H., Ma, S., Yang, J.F.: Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 8(4), 2239–2267 (2015)
Xu, H.K.: Inequalities in Banach spaces with applications. Nonlinear Anal. 16, 1127–1138 (1991)
Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13, 938–945 (2002)
Matsushita, S., Takahashi, W.: Weak and strong convergence theorems for relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2004, 37–47 (2004)
Nakajo, K.: Strong convergence for gradient projection method and relatively nonexpansive mappings in Banach spaces. Appl. Math. Comput. 271, 251–258 (2017)
Alber, Y.I.: Metric and generalized projection operators in Banach spaces: properties and applications. In: Kartsatos, A.G. (ed.) Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Lect. Notes Pure Appl. Math., vol. 178, pp. 15–50. Dekker, New York (1996)
Farid, M., Irfan, S.S., Khan, M.F., Khan, S.A.: Strong convergence of gradient projection method for generalized equilibrium problem in a Banach space. J. Inequal. Appl. 2017, 297 (2017)
He, S., Yang, C., Duan, P.: Realization of the hybrid method for Mann iterations. Appl. Math. Comput. 217, 4239–4247 (2010)
Acknowledgements
This work was supported by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (D-022-363-1440). The authors, therefore, gratefully acknowledge the DSR technical and financial support.
Availability of data and materials
Not applicable.
Funding
This work was supported by King Abdulaziz University, represented by the Deanship of Scientific Research (DSR), under grant number D-022-363-1440.
Contributions
All authors contributed equally to this work. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Alansari, M., Ali, R. & Farid, M. Strong convergence of an inertial iterative algorithm for variational inequality problem, generalized equilibrium problem, and fixed point problem in a Banach space. J Inequal Appl 2020, 42 (2020). https://doi.org/10.1186/s13660-020-02313-z