
An inertial iterative algorithm for generalized equilibrium problems and Bregman relatively nonexpansive mappings in Banach spaces

Abstract

The aim of this paper is to introduce and study an inertial hybrid iterative method for solving generalized equilibrium problems involving Bregman relatively nonexpansive mappings in Banach spaces. We establish the strong convergence of the proposed algorithm. Finally, we present some consequences and a computational example to demonstrate the efficiency and relevance of the main result.

Introduction

Throughout the paper, unless otherwise stated, let Y be a reflexive Banach space with \(Y^{*}\) its dual, let \(K\neq \emptyset \) be a closed convex subset of Y and denote by \(\mathbb{R}\) the set of real numbers. Consider the following generalized equilibrium problem (in short, GEP): Find \(u_{0}\in K\) such that

$$ G(u_{0},u) + b(u_{0},u)-b(u_{0},u_{0}) \geq 0,\quad \forall u\in K, $$
(1.1)

where \(G,b:K\times K\to \mathbb{R}\) are bifunctions. We write Sol(GEP(1.1)) for the solution set of (1.1). If \(b\equiv 0\), then GEP(1.1) reduces to the equilibrium problem (in short, EP): Find \(u_{0}\in K\) such that

$$ G(u_{0},u)\geq 0,\quad \forall u\in K. $$
(1.2)

It is known that equilibrium problems have had a great impact on the development of several branches of science and engineering, and many well-known problems can be cast as equilibrium problems. It has been shown that the theory of equilibrium problems provides a natural, novel, and unified framework for several problems arising in nonlinear analysis, optimization, economics, finance, game theory, and engineering. Equilibrium problems include many mathematical problems as particular cases, for example, the mathematical programming problem, variational inclusion problem, variational inequality problem, complementarity problem, saddle point problem, Nash equilibrium problem in noncooperative games, minimax inequality problem, minimization problem, and fixed point problem, see [13]. For example, if we set \(G(u_{0},u)=\langle Du_{0},u-u_{0}\rangle \) and \(b(u_{0},u)=0\), \(\forall u_{0}, u\in K\), where \(D:K\to Y^{*}\) is a nonlinear mapping, then EP(1.2) reduces to the classical variational inequality problem (in short, VIP): Find \(u_{0}\in K\) such that

$$ \langle Du_{0}, u-u_{0}\rangle \geq 0, \quad\forall u \in K, $$
(1.3)

which was introduced by Hartman and Stampacchia [4]. The solution set of VIP(1.3) is denoted by Sol(VIP(1.3)).

In 1976, Korpelevich [5] introduced the following iterative method for VIP in a Hilbert space H:

$$ \textstyle\begin{cases} u_{0}\in C\subseteq H, \\ v_{n}={\mathrm{proj}}_{C}(u_{n}-\sigma Du_{n}), \\ u_{n+1}={\mathrm{proj}}_{C}(u_{n}-\sigma Dv_{n}), \quad n\geq 0, \end{cases} $$

where \(\sigma >0\), \({\mathrm{proj}}_{C}\) denotes the projection of H onto C, and D is a monotone and Lipschitz continuous mapping. This method is called the extragradient iterative method.
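In a finite-dimensional setting the extragradient method is straightforward to implement. The sketch below is our illustration only: the box constraint set C and the monotone linear operator D are assumed choices, not taken from the paper.

```python
import numpy as np

# Finite-dimensional sketch of Korpelevich's extragradient method.
# Illustrative choices (ours): H = R^2, C = [-1, 1]^2, and D(u) = M u with
# M = S + 0.1*I, S skew-symmetric, so D is monotone and Lipschitz.

def proj_C(u):
    """Projection of H onto the box C = [-1, 1]^2."""
    return np.clip(u, -1.0, 1.0)

S = np.array([[0.0, 1.0], [-1.0, 0.0]])
M = S + 0.1 * np.eye(2)
D = lambda u: M @ u

sigma = 0.5 / np.linalg.norm(M, 2)   # step size sigma < 1/L, L the Lipschitz constant
u = np.array([0.9, -0.7])
for _ in range(1000):
    v = proj_C(u - sigma * D(u))     # predictor (extragradient) step
    u = proj_C(u - sigma * D(v))     # corrector step

# Here D(0) = 0 and 0 lies in the interior of C, so u* = 0 solves the VIP.
print(np.linalg.norm(u))
```

Since this D is in fact strongly monotone, the iterates contract toward the solution rapidly.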

In 2006, Nadezhkina et al. [6] proposed a hybrid extragradient algorithm involving a nonexpansive mapping T on C and studied the convergence of

$$ \textstyle\begin{cases} u_{0}\in C\subseteq H, \\ x_{n}={\mathrm{proj}}_{C}(u_{n}-\sigma _{n} Du_{n}), \\ v_{n}=\alpha _{n}u_{n}+(1-\alpha _{n})T{\mathrm{proj}}_{C}(u_{n}-\sigma _{n} Dx_{n}), \\ C_{n}=\{z\in C: \Vert v_{n}-z \Vert ^{2}\leq \Vert u_{n}-z \Vert ^{2}\}, \\ D_{n}=\{z\in C:\langle u_{n}-z,u_{0}-u_{n}\rangle \geq 0\}, \\ u_{n+1}={\mathrm{proj}}_{C_{n}\cap D_{n}}u_{0}, \quad n\geq 0. \end{cases} $$
(1.4)

The iterative algorithm (1.4) has been extended by many authors; see [7–15] for details. The idea considered in [6] has been generalized in [16] from a Hilbert space to a Banach space Y as

$$ \textstyle\begin{cases} u_{0}\in K\subseteq Y , \\ v_{n} =J^{-1}(\alpha _{n} Ju_{n}+(1-\alpha _{n})JTu_{n}), \\ C_{n} = \{z\in K: \Psi (z,v_{n})\leq \Psi (z,u_{n})\}, \\ D_{n} =\{z\in K:\langle u_{n}-z,Ju_{0}-Ju_{n}\rangle \geq 0\}, \\ u_{n+1}=\Pi _{C_{n}\cap D_{n}}u_{0}, \end{cases} $$

where \(\Pi _{K}\) denotes the generalized projection of Y onto K, Ψ is the Lyapunov function such that \(\Psi (u,v)=\|v\|^{2}-2\langle v, Ju\rangle +\|u\|^{2}\), \(\forall u,v \in Y\), \(J:Y\to 2^{Y^{*}}\) is the normalized duality mapping, with \(J^{-1}\) denoting its inverse. For further work, see [17–20].

In 1967, Bregman [21] discovered an important technique based on the Bregman distance function. This technique is very useful not only in the design and analysis of iterative methods but also in solving optimization and feasibility problems and in approximating solutions of equilibrium, fixed point, and variational inequality problems; for details, see [22–25].

In 2010, Reich et al. [26] introduced an iterative algorithm in Banach spaces involving maximal monotone operators. In the light of the Bregman projection, various iterative algorithms have been studied by researchers in this field, see, for instance, [27–31].

In 2008, Maingé [32] developed an inertial Krasnosel’skiı̌–Mann algorithm as follows:

$$ \textstyle\begin{cases} t_{n} = u_{n}+\theta _{n}(u_{n}-u_{n-1}), \\ u_{n+1}=(1-\alpha _{n})t_{n}+\alpha _{n}Tt_{n}. \end{cases} $$

The convergence of such algorithms has been analyzed by various researchers, who have also illustrated their importance in data analysis and imaging problems; see [33–41] for details. It is notable that research on inertial iterative algorithms in Banach spaces is still largely unexplored.
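For intuition, the inertial step can be tried on a simple nonexpansive map in the plane. The sketch below is our construction; the map T and the constant parameters \(\theta _{n}\), \(\alpha _{n}\) are illustrative choices, not taken from [32].

```python
import numpy as np

# Inertial Krasnosel'skii-Mann iteration for a nonexpansive map T.
# Illustrative choice (ours): T = rotation by 90 degrees in R^2, which is
# nonexpansive with unique fixed point 0; plain iterates u_{n+1} = T u_n
# merely circle the origin, while the averaged inertial scheme converges.
T = lambda u: np.array([-u[1], u[0]])

theta, alpha = 0.3, 0.5        # constant inertial and relaxation parameters
u_prev = u = np.array([1.0, 2.0])
for _ in range(500):
    t = u + theta * (u - u_prev)                    # inertial extrapolation
    u_prev, u = u, (1 - alpha) * t + alpha * T(t)   # KM relaxation step

print(np.linalg.norm(u))
```

The relaxation (averaging with the identity) is what turns the non-convergent rotation iterates into a convergent scheme; the inertial term reuses the previous displacement as momentum.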

Inspired by the work in [20, 30, 32], we establish an inertial hybrid iterative algorithm involving Bregman relatively nonexpansive mapping to find a common solution of GEP(1.1) and fixed point problem in a Banach space. Moreover, we analyze its convergence for our main result. At last, we list some consequences and a computational example to emphasize the efficiency and relevancy of the main result.

Preliminaries

Assume \(g:Y\to (-\infty , +\infty ]\) is a proper, convex, and lower semicontinuous mapping. The Fenchel conjugate of g, \(g^{*}:Y^{*}\to (-\infty , +\infty ]\), is defined as

$$ g^{*}(u_{0})=\sup \bigl\{ \langle u_{0},u \rangle -g(u):u\in Y\bigr\} ,\quad u_{0}\in Y^{*}.$$
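As a quick sanity check (our illustration, not part of the paper): for \(g(u)=u^{2}/2\) on \(Y=\mathbb{R}\), the supremum is attained at \(u=u_{0}\) and \(g^{*}(u_{0})=u_{0}^{2}/2\), which can be confirmed numerically by maximizing over a grid.

```python
import numpy as np

# Numerical sketch of the Fenchel conjugate g*(xi) = sup_u { xi*u - g(u) }
# for g(u) = u**2 / 2 on Y = R, where the sup is attained at u = xi and
# g*(xi) = xi**2 / 2.  (Our illustration; the grid bounds are arbitrary.)

def conjugate(g, xi, grid):
    return np.max(xi * grid - g(grid))

g = lambda u: 0.5 * u**2
grid = np.linspace(-10.0, 10.0, 200001)   # fine grid covering the maximizers
for xi in (-2.0, 0.5, 3.0):
    print(xi, conjugate(g, xi, grid), 0.5 * xi**2)
```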

For any \(w\in {\mathrm{int(dom}}g)\), the interior of the domain of g, and any \(u\in Y\), the right-hand derivative of g at w in the direction u is

$$ g^{0}(w,u)=\lim_{\lambda \to 0^{+}} \frac{g(w+\lambda u)-g(w)}{\lambda }.$$

The mapping g is called Gateaux differentiable at w if the limit above exists for each u; in this case, \(g^{0}(w,u)\) coincides with \(\langle \nabla g(w),u\rangle \), where \(\nabla g(w)\) denotes the gradient of g at w. It is called Frechet differentiable at w if this limit is attained uniformly in u with \(\|u\|=1\), and uniformly Frechet differentiable on \(K\subseteq Y\) if the limit is attained uniformly for \(w\in K\) and \(\|u\|=1\).

The mapping g is called Legendre if the following hold [22]:

  1. (i)

    \({\mathrm{int(dom}}g)\neq \emptyset \), g is Gateaux differentiable on \({\mathrm{int(dom}}g)\), \({\mathrm{dom}}\nabla g={\mathrm{int(dom}}g)\);

  2. (ii)

    \({\mathrm{int(dom}}g^{*})\neq \emptyset \), \(g^{*}\) is Gateaux differentiable on \({\mathrm{int(dom}}g^{*})\), \({\mathrm{dom}}\nabla g^{*}={\mathrm{int(dom}}g^{*})\).

We have the following [22]:

  1. (i)

    g is Legendre iff \(g^{*}\) is Legendre mapping;

  2. (ii)

    \((\partial g)^{-1}=\partial g^{*}\);

  3. (iii)

    \(\nabla g=(\nabla g^{*})^{-1}\), \({\mathrm{ran}}\nabla g={\mathrm{dom}}\nabla g^{*}={\mathrm{int(dom}}g^{*})\), \({\mathrm{ran}}\nabla g^{*}={\mathrm{dom}}\nabla g={\mathrm{int(dom}}g)\);

  4. (iv)

    the mappings g and \(g^{*}\) are strictly convex on \({\mathrm{int(dom}}g)\) and \({\mathrm{int(dom}}g^{*})\).

Definition 2.1

([21])

Let \(g:Y\rightarrow (-\infty ,+\infty ]\) be Gateaux differentiable and convex. The bifunction \(D_{g}: {\mathrm{dom}}g\times {\mathrm{int(dom}}g)\rightarrow [0,+\infty )\) defined by

$$ D_{g}(u,w)= g(u)- g(w)- \bigl\langle \nabla g(w), u-w\bigr\rangle ,\quad w\in { \mathrm{int(dom}}g), u\in {\mathrm{dom}}g, $$

is called the Bregman distance with respect to g.

We list some important properties of \(D_{g}\) [42]: for \(u, u_{1},u_{2}\in {\mathrm{dom}}g\) and \(w_{1},w_{2}\in {\mathrm{int(dom}}g)\),

  1. (i)

    Two point identity:

    $$ D_{g}(w_{1},w_{2})+ D_{g}(w_{2},w_{1})= \bigl\langle \nabla g(w_{1})- \nabla g(w_{2}),w_{1}-w_{2} \bigr\rangle ;$$
  2. (ii)

    Three point identity:

    $$ D_{g}(u,w_{1})+ D_{g}(w_{1},w_{2})- D_{g}(u,w_{2})= \bigl\langle \nabla g(w_{2})- \nabla g(w_{1}),u-w_{1}\bigr\rangle ;$$
  3. (iii)

    Four point identity:

    $$ D_{g}(u_{1},w_{1})- D_{g}(u_{1},w_{2})- D_{g}(u_{2},w_{1})+ D_{g}(u_{2},w_{2})= \bigl\langle \nabla g(w_{2})- \nabla g(w_{1}),u_{1}-u_{2} \bigr\rangle .$$
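The definition and the identities above are easy to verify numerically. The sketch below (our illustration) instantiates \(D_{g}\) for two classical choices of g, the halved squared norm and the negative entropy, and checks the three point identity:

```python
import numpy as np

# Numerical sketch of the Bregman distance
#   D_g(u, w) = g(u) - g(w) - <grad g(w), u - w>
# for two classical choices of g (illustrative, not from the paper).

def bregman(g, grad_g, u, w):
    return g(u) - g(w) - grad_g(w) @ (u - w)

# g(u) = 1/2 ||u||^2  gives  D_g(u, w) = 1/2 ||u - w||^2.
g_sq = lambda u: 0.5 * (u @ u)
grad_sq = lambda u: u

# g(u) = sum u_i log u_i (negative entropy on the positive orthant)
# gives the Kullback-Leibler divergence.
g_ent = lambda u: np.sum(u * np.log(u))
grad_ent = lambda u: np.log(u) + 1.0

u = np.array([0.2, 0.5, 0.3])
w = np.array([0.4, 0.4, 0.2])
v = np.array([0.1, 0.6, 0.3])

# the two expressions below agree
print(bregman(g_sq, grad_sq, u, w), 0.5 * np.sum((u - w) ** 2))

# Three point identity:
#   D(u, w1) + D(w1, w2) - D(u, w2) = <grad g(w2) - grad g(w1), u - w1>
lhs = (bregman(g_ent, grad_ent, u, w) + bregman(g_ent, grad_ent, w, v)
       - bregman(g_ent, grad_ent, u, v))
rhs = (grad_ent(v) - grad_ent(w)) @ (u - w)
print(abs(lhs - rhs))   # zero up to floating point error
```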

Definition 2.2

([26, 28])

Let \(T: K\to {\mathrm{int(dom}}g)\) be a mapping and \(F(T)=\{u\in K: Tu=u \}\), where \(F(T)\) is the set of fixed points of T. Then

  1. (i)

    a point \(u_{0}\in K\) is called an asymptotic fixed point of T if K contains a sequence \(\{u_{n}\}\) with \(u_{n} \rightharpoonup u_{0}\) such that \(\lim_{n\to \infty }\|Tu_{n}-u_{n}\|=0\). We denote by \(\widehat{\mathit{{F}}}(T)\) the set of all asymptotic fixed points of T;

  2. (ii)

    T is called Bregman quasinonexpansive if

    $$ F(T)\neq \emptyset ;\qquad D_{g}(u_{0},Tu)\leq D_{g}(u_{0},u),\quad \forall u \in K, u_{0} \in F(T);$$
  3. (iii)

    T is called Bregman relatively nonexpansive if

    $$ F(T)=\widehat{\mathit{{F}}}(T)\neq \emptyset ;\qquad D_{g}(u_{0},Tu) \leq D_{g}(u_{0},u),\quad \forall u\in K, u_{0}\in F(T);$$
  4. (iv)

    T is called Bregman firmly nonexpansive if \(\forall u_{1},u_{2}\in K\),

    $$ \bigl\langle \nabla g(Tu_{1})- \nabla g(Tu_{2}),Tu_{1}-Tu_{2} \bigr\rangle \leq \bigl\langle \nabla g(u_{1})- \nabla g(u_{2}),Tu_{1}-Tu_{2}\bigr\rangle ,$$

    or, correspondingly,

    $$ \begin{aligned} &D_{g}(Tu_{1},Tu_{2})+ D_{g}(Tu_{2},Tu_{1})+ D_{g}(Tu_{1},u_{1})+ D_{g}(Tu_{2},u_{2}) \\ &\quad \leq D_{g}(Tu_{1},u_{2})+ D_{g}(Tu_{2},u_{1}). \end{aligned} $$

Example 2.1

([29])

Let Y be a real reflexive Banach space and \(A:Y\to 2^{Y^{*}}\) be a maximal monotone mapping. If \(A^{-1}(0)\neq \emptyset \) and the Legendre function \(g:Y\to (-\infty ,+\infty ]\) is bounded on bounded subsets of Y and uniformly Frechet differentiable, then the resolvent with respect to A,

$$ {\mathrm{res}}_{A}^{g}(u)=(\nabla g+A)^{-1} \circ \nabla g(u),$$

is a single-valued, closed, and Bregman relatively nonexpansive mapping from Y onto \(D(A)\), and \(F({\mathrm{res}}_{A}^{g})=A^{-1}(0)\).
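In a Hilbert space with \(g(u)=\frac{1}{2}\|u\|^{2}\) we have \(\nabla g=I\), so \({\mathrm{res}}_{A}^{g}=(I+A)^{-1}\) is the classical resolvent. A concrete sketch (our one-dimensional illustration, not from [29]): for \(A=\partial |\cdot |\), the resolvent is the soft-thresholding map, and \(F({\mathrm{res}}_{A}^{g})=A^{-1}(0)=\{0\}\).

```python
import numpy as np

# Resolvent (I + A)^{-1} for the maximal monotone A = subdifferential of |.|
# on R: the soft-thresholding map.  Our illustrative Hilbert-space example.

def soft_threshold(u):
    """res_A(u): solves z + sign(z) = u for z != 0 (and z = 0 if |u| <= 1),
    i.e. z = sign(u) * max(|u| - 1, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - 1.0, 0.0)

u = np.array([-2.5, -0.4, 0.0, 0.7, 3.0])
z = soft_threshold(u)
print(z)                                  # fixed point set is {0}
print(soft_threshold(np.array([0.0])))    # 0 stays fixed
```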

Definition 2.3

([21])

Let \(g: Y\rightarrow (-\infty ,+ \infty ]\) be a Gateaux differentiable and convex function. The Bregman projection of \(w\in {\mathrm{int(dom}}g)\) onto K, a nonempty closed convex subset of \({\mathrm{int(dom}}g)\), is the unique vector \({\mathrm{proj}}_{K}^{g} w\in K\) satisfying

$$ D_{g}\bigl({\mathrm{proj}}_{K}^{g}(w),w \bigr)=\inf \bigl\{ D_{g}(u,w):u\in K\bigr\} .$$

Remark 2.2

([27])

  1. (i)

    If Y is a smooth Banach space and \(g(u)=\frac{1}{2}\|u\|^{2}\), \(\forall u\in Y\), then \({\mathrm{proj}}_{K}^{g}(u)\) reduces to the generalized projection \(\Pi _{K}(u)\) [43], defined by

    $$ \Psi \bigl(\Pi _{K}(u),u\bigr)=\min_{v\in K}\Psi (v,u),$$

    where Ψ is the Lyapunov function such that \(\Psi (u,v)=\|v\|^{2}-2\langle v, Ju\rangle +\|u\|^{2}\), \(\forall u,v \in Y\), \(J:Y\to 2^{Y^{*}}\) is the normalized duality mapping;

  2. (ii)

    If Y is a Hilbert space and \(g(u)=\frac{1}{2}\|u\|^{2}\), \(\forall u\in Y\), then \({\mathrm{proj}}_{K}^{g}(u)\) reduces to the metric projection of u onto K.
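Beyond the two classical cases in the remark, Bregman projections can have simple closed forms for other choices of g. A small sketch (our illustration): for the negative entropy \(g(u)=\sum_{i}u_{i}\log u_{i}\), whose Bregman distance is the Kullback–Leibler divergence, the Bregman projection of a positive vector w onto the probability simplex is the normalization \(w/\sum_{i}w_{i}\).

```python
import numpy as np

# Bregman projection onto the probability simplex for the negative entropy
# g(u) = sum_i u_i log u_i, whose Bregman distance D_g is the KL divergence.
# (Our illustrative example; the paper's setting is far more general.)

def kl(u, w):
    return np.sum(u * np.log(u / w) - u + w)

def proj_simplex_kl(w):
    # minimizer of kl(., w) over {u >= 0, sum u = 1} is the normalization
    return w / np.sum(w)

w = np.array([0.5, 1.5, 1.0])
p = proj_simplex_kl(w)

# p should beat every other point of the simplex in KL distance to w
rng = np.random.default_rng(0)
worst = min(kl(rng.dirichlet(np.ones(3)), w) for _ in range(1000))
print(p, kl(p, w) <= worst + 1e-12)
```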

Definition 2.4

([23])

Let \(g: Y \rightarrow (-\infty ,+ \infty ]\) be a Gateaux differentiable and convex function. Then, g is called:

  1. (i)

    totally convex at \(w\in {\mathrm{int(dom}}g)\) if its modulus of total convexity at w, i.e., the mapping \(v_{g}: {\mathrm{int(dom}}g)\times [0, +\infty )\to [0, +\infty )\) such that

    $$ v_{g}(w,s)= \inf \bigl\{ D_{g}(v,w):v\in { \mathrm{dom}}g, \Vert v-w \Vert =s\bigr\} ,$$

    is positive for \(s>0\);

  2. (ii)

    totally convex if it is totally convex at each point of \(w\in {\mathrm{int(dom}}g)\);

  3. (iii)

    totally convex on bounded sets if \(v_{g}(B,s)\) is positive for any nonempty bounded set \(B\subseteq Y\) and \(s>0\), where

    $$ v_{g}(B,s)= \inf \bigl\{ v_{g}(w,s):w\in B\cap { \mathrm{dom}}g\bigr\} .$$

Definition 2.5

([23, 26])

A mapping \(g: Y\rightarrow (-\infty ,+ \infty ]\) is called:

  1. (i)

    coercive if \(\lim_{\|u\|\to +\infty }\frac{g(u)}{\|u\|}=+\infty \);

  2. (ii)

    sequentially consistent if for any \(\{u_{n}\}, \{v_{n}\}\subseteq Y\) with \(\{u_{n}\}\) bounded,

    $$ \lim_{n\to \infty }D_{g}(v_{n},u_{n})=0 \quad\Rightarrow \quad\lim_{n\to \infty } \Vert v_{n}-u_{n} \Vert =0.$$

Lemma 2.3

([24])

Let \(g:Y\rightarrow (-\infty ,+ \infty ]\) be a convex function whose domain contains at least two points. Then, g is sequentially consistent iff it is totally convex on bounded sets.

Lemma 2.4

([44])

Let \(g: Y\rightarrow (-\infty ,+ \infty ]\) be uniformly Frechet differentiable and bounded on \(K\subseteq Y\), a bounded set. Then, g is uniformly continuous on K and \(\nabla g\) is uniformly continuous on K from the strong topology of Y to the strong topology of \(Y^{*}\).

Lemma 2.5

([26])

Let \(g: Y\rightarrow (-\infty ,+ \infty ]\) be a Gateaux differentiable and totally convex function. If \(u_{0}\in Y\) and \(\{D_{g}(u_{n},u_{0})\}\) is bounded, then \(\{u_{n}\}\) is also bounded.

Lemma 2.6

([24])

Let \(g: Y\rightarrow (-\infty ,+ \infty ]\) be a Gateaux differentiable and totally convex function on \({\mathrm{int(dom}} g)\). Let \(w\in {\mathrm{int(dom}}g)\) and \(K \subseteq {\mathrm{int(dom}}g)\), a nonempty closed convex set. If \(v\in K\), then the following statements are equivalent:

  1. (i)

    \(v\in K\) is the Bregman projection of w onto K with respect to g, i.e., \(v= {\mathrm{proj}}_{K}^{g}(w)\);

  2. (ii)

    the vector v is the unique solution of the variational inequality

    $$ \bigl\langle \nabla g(w)- \nabla g(v), v-u\bigr\rangle \geq 0,\quad \forall u\in K;$$
  3. (iii)

    the vector v is the unique solution of the inequality

    $$ D_{g}(u,v)+ D_{g}(v,w)\leq D_{g}(u,w),\quad \forall u\in K.$$
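In the Euclidean case \(g(u)=\frac{1}{2}\|u\|^{2}\), where \({\mathrm{proj}}_{K}^{g}\) is the metric projection, parts (ii) and (iii) of Lemma 2.6 are easy to confirm numerically. The check below is our illustration, with K an arbitrary box:

```python
import numpy as np

# Numerical check of Lemma 2.6 (ii)-(iii) in the Euclidean case
# g(u) = ||u||^2 / 2 (grad g = id, D_g(a, b) = ||a - b||^2 / 2), with K a
# box, where the Bregman projection is the coordinatewise clip.  Our example.

rng = np.random.default_rng(1)
Dist = lambda a, b: 0.5 * np.sum((a - b) ** 2)

w = np.array([2.0, -3.0, 0.3])
v = np.clip(w, -1.0, 1.0)          # proj_K^g(w) for K = [-1, 1]^3

for _ in range(1000):
    u = rng.uniform(-1.0, 1.0, 3)  # arbitrary point of K
    assert (w - v) @ (v - u) >= -1e-12            # part (ii)
    assert Dist(u, v) + Dist(v, w) <= Dist(u, w) + 1e-12  # part (iii)
print("checks passed")
```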

Lemma 2.7

([28])

Let \(g: Y\rightarrow (-\infty ,+\infty ]\) be Legendre and \(T:K\to K\) be a Bregman quasinonexpansive mapping with respect to g. Then, \(F(T)\) is closed and convex.

Lemma 2.8

([26])

Let \(g: Y\rightarrow (-\infty ,+ \infty ]\) be a Gateaux differentiable and totally convex function, \(u_{0}\in Y\) and \(K\subseteq Y\), a nonempty closed convex set. Suppose that \(\{u_{n}\}\) is bounded and any weak subsequential limit of \(\{u_{n}\}\) belongs to K. If \(D_{g}(u_{n},u_{0})\leq D_{g}({\mathrm{proj}}_{K}^{g}u_{0},u_{0})\) then \(\{u_{n}\}\) strongly converges to \({\mathrm{proj}}_{K}^{g}u_{0}\).

Assumption 2.1

Let \(G: K\times K\longrightarrow \mathbb{R}\) satisfy:

  1. (i)

    \(G(u,u)=0\), \(\forall u \in K\);

  2. (ii)

    G is monotone, i.e., \(G(u_{1},u_{2})+G(u_{2},u_{1})\leq 0 \), \(\forall u_{1},u_{2} \in K\);

  3. (iii)

    for each \(u_{1},u_{2},u_{3} \in K\), \(\limsup_{s \to 0^{+}} G(su_{3}+(1-s)u_{1},u_{2})\leq G(u_{1},u_{2})\);

  4. (iv)

    for each \(u \in K\), \(v\to G(u,v)\) is convex and lower semicontinuous.

Assumption 2.2

Let \(b: K\times K\to \mathbb{R}\) satisfy:

  1. (i)

    b is skew-symmetric, i.e., \(b(u_{1},u_{1})-b(u_{1},u_{2})-b(u_{2},u_{1})+b(u_{2},u_{2})\geq 0\), \(\forall u_{1},u_{2}\in K\);

  2. (ii)

    b is convex in the second argument;

  3. (iii)

    b is continuous.

Resolvent operator

The resolvent of \(G:K\times K\to \mathbb{R}\) with respect to b is the operator \({\mathrm{res}}_{G,b}^{g}: Y\to 2^{K}\) defined as follows:

$$ \begin{aligned}[b] {\mathrm{res}}_{G,b}^{g}(u)={}& \bigl\{ u_{0}\in K :G(u_{0},v)+ \bigl\langle \nabla g(u_{0})- \nabla g(u),v- u_{0}\bigr\rangle \\ &{}+ b(u_{0},v)- b(u_{0},u_{0}) \geq 0,\forall v\in K\bigr\} ,\quad \forall u \in Y. \end{aligned} $$
(3.1)

We obtain some properties of the resolvent operator \({\mathrm{res}}_{G,b}^{g}\).
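To make (3.1) concrete, consider the Hilbert-space specialization \(g(u)=\frac{1}{2}u^{2}\) on \(Y=K=\mathbb{R}\) with \(b\equiv 0\) (our illustrative choice, not the paper's general setting): the defining inequality becomes \(G(z,v)+(z-u)(v-z)\geq 0\) for all v. For \(G(z,v)=v^{2}-z^{2}\) one finds \({\mathrm{res}}_{G,b}^{g}(u)=u/3\), since then \(G(z,v)+(z-u)(v-z)=(v-z)(v+2z-u)=(v-z)^{2}\).

```python
import numpy as np

# Sketch of the resolvent in the Hilbert-space specialization g(u) = u**2/2,
# b = 0, K = R (our illustrative choice): res(u) is the z with
#   G(z, v) + (z - u)(v - z) >= 0  for all v.
# For G(z, v) = v**2 - z**2, the solution is z = u/3, since then the
# left-hand side equals (v - z)**2 >= 0.

def G(z, v):
    return v**2 - z**2

def resolvent(u):
    return u / 3.0

u = 2.4
z = resolvent(u)
# verify the defining inequality on a grid of test points v
vs = np.linspace(-10, 10, 2001)
gap = np.min(G(z, vs) + (z - u) * (vs - z))
print(gap)   # nonnegative (up to floating point error)
```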

Lemma 3.1

Let \(g: Y\to (-\infty , +\infty ]\) be a Gateaux differentiable and coercive function. Let \(G,b:K\times K\to \mathbb{R}\) satisfy Assumptions 2.1 and 2.2, respectively, and let \({\mathrm{res}}_{G,b}^{g}: Y\to 2^{K}\) be defined by (3.1). Then, the following hold:

  1. (i)

    \(\mathrm{dom}({\mathrm{res}}_{G,b}^{g})=Y\);

  2. (ii)

    \({\mathrm{res}}_{G,b}^{g}\) is single-valued;

  3. (iii)

    \({\mathrm{res}}_{G,b}^{g}\) is a Bregman firmly nonexpansive type mapping, that is, \(\forall u,v\in Y\),

    $$\begin{aligned}& \bigl\langle \nabla g\bigl({\mathrm{res}}_{G,b}^{g}u \bigr)- \nabla g\bigl({\mathrm{res}}_{G,b}^{g}v\bigr),{ \mathrm{res}}_{G,b}^{g}u- {\mathrm{res}}_{G,b}^{g}v \bigr\rangle \\& \quad \leq \bigl\langle \nabla g(u)- \nabla g(v),{\mathrm{res}}_{G,b}^{g}u- {\mathrm{res}}_{G,b}^{g}v\bigr\rangle ; \end{aligned}$$
  4. (iv)

    \(F({\mathrm{res}}_{G,b}^{g})={{\mathrm{Sol(GEP}}\textit{(1.1)})}\) is closed and convex;

  5. (v)

    \(D_{g}(q,{\mathrm{res}}_{G,b}^{g}{u})+D_{g}({\mathrm{res}}_{G,b}^{g}{u},u)\leq D_{g}(q,u)\), \(\forall q\in F({\mathrm{res}}_{G,b}^{g})\);

  6. (vi)

    \({\mathrm{res}}_{G,b}^{g}\) is Bregman quasinonexpansive.

Proof

(i) The proof follows the lines of the proof of Lemma 1 in [30]. For the sake of completeness, we give it here. First, we show that for any \(\xi \in Y^{*}\), there exists \(u\in K\) such that

$$ G(u,v)+b(u,v)- b(u,u)+ g(v)- g(u)- \langle \xi , v-u\rangle \geq 0 $$
(3.2)

for any \(v\in K\). Since g is coercive, the function \(h:Y\times Y\to (-\infty , +\infty ]\) defined by

$$ h(u,v)= g(v)-g(u)-\langle \xi , v-u\rangle $$

satisfies

$$ \lim_{ \Vert u-v \Vert \to +\infty }\frac{h(u,v)}{ \Vert u-v \Vert }=-\infty , $$

for each fixed \(v\in K\). Therefore, it follows from Theorem 1 in [1], together with Assumptions 2.1 and 2.2, that (3.2) holds. Now, we prove that (3.2) implies that

$$ G(u,v)+b(u,v)- b(u,u)+\bigl\langle \nabla g(u), v-u\bigr\rangle - \langle \xi , v-u \rangle \geq 0 $$
(3.3)

for any \(v\in K\). We know that (3.2) holds for \(v=tu+(1-t)\bar{v}\), where \(\bar{v}\in K\) and \(t\in (0,1)\). Hence

$$ \begin{aligned}[b] &G\bigl(u,tu+(1-t)\bar{v}\bigr)+b \bigl(u,tu+(1-t)\bar{v}\bigr)- b(u,u) \\ &\quad {}+ g\bigl(tu+(1-t)\bar{v}\bigr)- g(u)- \bigl\langle \xi , tu+(1-t)\bar{v}-u\bigr\rangle \geq 0,\quad \forall \bar{v} \in K. \end{aligned} $$
(3.4)

Since

$$ g\bigl(tu+(1-t)\bar{v}\bigr)- g(u)\leq \bigl\langle \nabla g\bigl(tu+ (1-t) \bar{v}\bigr), tu + (1-t) \bar{v}-u\bigr\rangle , $$

we get from (3.4), Assumption 2.1 (iv) and Assumption 2.2 (ii) that

$$ \begin{aligned}&tG(u,u)+ (1-t)G(u,\bar{v})+ tb(u,u)+ (1-t)b(u, \bar{v})- b(u,u) \\ &\quad {}+\bigl\langle \nabla g\bigl(tu+ (1-t)\bar{v}\bigr), tu + (1-t)\bar{v}-u\bigr\rangle \\ &\quad {}-\bigl\langle \xi , tu+(1-t)\bar{v}-u\bigr\rangle \geq 0,\quad \forall \bar{v} \in K. \end{aligned} $$

From Assumption 2.1 (i), we have

$$ \begin{aligned}&(1-t)G(u,\bar{v})+ (1-t)b(u,\bar{v})- (1-t)b(u,u) \\ &\quad {}+ \bigl\langle \nabla g\bigl(tu+ (1-t)\bar{v}\bigr),(1-t) (\bar{v}-u)\bigr\rangle - \bigl\langle \xi ,(1-t) (\bar{v}-u)\bigr\rangle \geq 0 \end{aligned} $$

and

$$ (1-t)\bigl[G(u,\bar{v})+ b(u,\bar{v})- b(u,u)+ \bigl\langle \nabla g\bigl(tu+ (1-t) \bar{v}\bigr),(\bar{v}-u)\bigr\rangle - \bigl\langle \xi ,(\bar{v}-u)\bigr\rangle \bigr] \geq 0.$$

Therefore

$$ G(u,\bar{v})+ b(u,\bar{v})- b(u,u)+ \bigl\langle \nabla g\bigl(tu+ (1-t)\bar{v} \bigr),( \bar{v}-u)\bigr\rangle - \bigl\langle \xi ,(\bar{v}-u)\bigr\rangle \geq 0,\quad\forall \bar{v} \in K.$$

Since g is Gateaux differentiable, \(\nabla g\) is norm-to-weak\(^{*}\) continuous. Therefore, letting \(t\rightarrow 1^{-}\), we obtain that

$$ G(u,\bar{v})+ b(u,\bar{v})- b(u,u)+ \bigl\langle \nabla g(u),(\bar{v}-u) \bigr\rangle - \bigl\langle \xi ,(\bar{v}-u)\bigr\rangle \geq 0,\quad\forall \bar{v} \in K.$$

Hence, for any \(w\in Y\), taking \(\xi = \nabla g(w)\), we obtain \(u\in K\) such that

$$ G(u,\bar{v})+ b(u,\bar{v})- b(u,u)+ \bigl\langle \nabla g(u),(\bar{v}-u) \bigr\rangle - \bigl\langle \nabla g(w) ,(\bar{v}-u)\bigr\rangle \geq 0,\quad \forall \bar{v} \in K,$$

i.e.,

$$ G(u,\bar{v})+ b(u,\bar{v})- b(u,u)+ \bigl\langle \nabla g(u)-\nabla g(w),( \bar{v}-u)\bigr\rangle \geq 0,\quad\forall \bar{v} \in K,$$

that is, \(u\in {\mathrm{res}}_{G,b}^{g}(w)\). Hence \(\mathrm{dom}({\mathrm{res}}_{G,b}^{g})=Y\).

(ii) For \(u\in Y\), let \(z_{1}, z_{2}\in {\mathrm{res}}_{G,b}^{g}(u)\). Then \(z_{1},z_{2}\in K\) and hence

$$ G(z_{1},z_{2})+\bigl\langle \nabla g(z_{1})- \nabla g(u),z_{2}- z_{1} \bigr\rangle + b(z_{1},z_{2})- b(z_{1},z_{1}) \geq 0$$

and

$$ G(z_{2},z_{1})+\bigl\langle \nabla g(z_{2})- \nabla g(u),z_{1}- z_{2} \bigr\rangle + b(z_{2},z_{1})- b(z_{2},z_{2}) \geq 0.$$

Adding these two inequalities and using Assumption 2.1 (ii), we have

$$ \bigl\langle \nabla g(z_{1})- \nabla g(z_{2}),z_{2}- z_{1}\bigr\rangle + b(z_{1},z_{2})- b(z_{1},z_{1})+ b(z_{2},z_{1})- b(z_{2},z_{2}) \geq 0.$$

Since b is skew-symmetric, we have

$$ \bigl\langle \nabla g(z_{1})- \nabla g(z_{2}),z_{2}- z_{1}\bigr\rangle \geq 0. $$
(3.5)

By interchanging the position of \(z_{1}\) and \(z_{2}\), we have

$$ \bigl\langle \nabla g(z_{2})- \nabla g(z_{1}),z_{1}- z_{2}\bigr\rangle \geq 0. $$
(3.6)

Adding (3.5) and (3.6), we have

$$ 2\bigl\langle \nabla g(z_{1})- \nabla g(z_{2}),z_{2}- z_{1}\bigr\rangle \geq 0.$$

This implies that

$$ \bigl\langle \nabla g(z_{2})- \nabla g(z_{1}),z_{2}- z_{1}\bigr\rangle \leq 0. $$
(3.7)

Since g is convex and Gateaux differentiable, we have

$$ \bigl\langle \nabla g(z_{2})- \nabla g(z_{1}),z_{2}- z_{1}\bigr\rangle \geq 0. $$
(3.8)

By (3.7) and (3.8), we have

$$ \bigl\langle \nabla g(z_{2})- \nabla g(z_{1}),z_{2}- z_{1}\bigr\rangle = 0.$$

Since g is a Legendre function, \(z_{1}=z_{2}\). Hence, \({\mathrm{res}}_{G,b}^{g}\) is single-valued.

(iii) For \(u,v\in K\), we have

$$ \begin{aligned} &G\bigl({\mathrm{res}}_{G,b}^{g}u,{ \mathrm{res}}_{G,b}^{g}v\bigr)+\bigl\langle \nabla g \bigl({\mathrm{res}}_{G,b}^{g}v\bigr)-\nabla g(v) ,{ \mathrm{res}}_{G,b}^{g}u- { \mathrm{res}}_{G,b}^{g}v \bigr\rangle \\ &\quad {}+ b\bigl({\mathrm{res}}_{G,b}^{g}v,{ \mathrm{res}}_{G,b}^{g}u\bigr)- b\bigl({ \mathrm{res}}_{G,b}^{g}v,{ \mathrm{res}}_{G,b}^{g}v \bigr) \geq 0 \end{aligned} $$

and

$$ \begin{aligned} &G\bigl({\mathrm{res}}_{G,b}^{g}v,{ \mathrm{res}}_{G,b}^{g}u\bigr)+\bigl\langle \nabla g \bigl({\mathrm{res}}_{G,b}^{g}u\bigr)-\nabla g(u) ,{ \mathrm{res}}_{G,b}^{g}v- { \mathrm{res}}_{G,b}^{g}u \bigr\rangle \\ &\quad {}+ b\bigl({\mathrm{res}}_{G,b}^{g}u,{ \mathrm{res}}_{G,b}^{g}v\bigr)- b\bigl({ \mathrm{res}}_{G,b}^{g}u,{ \mathrm{res}}_{G,b}^{g}u \bigr) \geq 0. \end{aligned} $$

Adding the above two inequalities, then using the skew symmetry of b and Assumption 2.1(ii), we have

$$ \bigl\langle \nabla g\bigl({\mathrm{res}}_{G,b}^{g}u \bigr)-\nabla g(u)- \nabla g\bigl({\mathrm{res}}_{G,b}^{g}v \bigr)+ \nabla g(v), {\mathrm{res}}_{G,b}^{g}v- { \mathrm{res}}_{G,b}^{g}u\bigr\rangle \geq 0,$$

hence

$$ \begin{aligned}&\bigl\langle \nabla g\bigl({\mathrm{res}}_{G,b}^{g}u \bigr)-\nabla g\bigl({\mathrm{res}}_{G,b}^{g}v\bigr), { \mathrm{res}}_{G,b}^{g}(u)- {\mathrm{res}}_{G,b}^{g}(v) \bigr\rangle \\ &\quad \leq \bigl\langle \nabla g(u)-\nabla g(v), {\mathrm{res}}_{G,b}^{g}(u)- {\mathrm{res}}_{G,b}^{g}(v) \bigr\rangle . \end{aligned} $$

This means that \({\mathrm{res}}_{G,b}^{g}\) is a Bregman firmly nonexpansive mapping.

(iv) We now show that \(F({\mathrm{res}}_{G,b}^{g})={{\mathrm{Sol(GEP}}\text{(1.1)})}\). We have

$$\begin{aligned} u\in F\bigl({\mathrm{res}}_{G,b}^{g} \bigr) \quad \Leftrightarrow\quad & u\in {\mathrm{res}}_{G,b}^{g}(u) \\ \quad \Leftrightarrow \quad & G(u,v)+\bigl\langle \nabla g(u)-\nabla g(u),v-u\bigr\rangle +b(u,v)-b(u,u) \geq 0,\quad\forall v\in K \\ \quad \Leftrightarrow\quad & G(u,v)+b(u,v)-b(u,u)\geq 0,\quad\forall v\in K \\ \quad \Leftrightarrow\quad & u\in {{\mathrm{Sol(GEP}}\text{(1.1)})}. \end{aligned}$$
(3.9)

Further, since \({\mathrm{res}}_{G,b}^{g}\) is a Bregman firmly nonexpansive type mapping, it follows from [44, Lemma 1.3.1] that \(F({\mathrm{res}}_{G,b}^{g})\) is a closed and convex subset of K. Therefore, from (3.9), we obtain that \({{\mathrm{Sol(GEP}}\text{(1.1)})}= F({\mathrm{res}}_{G,b}^{g})\) is closed and convex.

(v) For \(u,v\in K\), from part (iii), we have

$$ \begin{aligned}&\bigl\langle \nabla g\bigl({\mathrm{res}}_{G,b}^{g}u \bigr)-\nabla g\bigl({\mathrm{res}}_{G,b}^{g}v\bigr), { \mathrm{res}}_{G,b}^{g}(u)- {\mathrm{res}}_{G,b}^{g}(v) \bigr\rangle \\ &\quad \leq \bigl\langle \nabla g(u)-\nabla g(v), {\mathrm{res}}_{G,b}^{g}(u)- {\mathrm{res}}_{G,b}^{g}(v) \bigr\rangle . \end{aligned} $$

which, by Definition 2.2 (iv), is equivalent to

$$ \begin{aligned} &D_{g}\bigl({\mathrm{res}}_{G,b}^{g}(u), {\mathrm{res}}_{G,b}^{g}(v)\bigr)+D_{g}\bigl({ \mathrm{res}}_{G,b}^{g}(v), {\mathrm{res}}_{G,b}^{g}(u) \bigr) \\ &\quad \leq D_{g}\bigl({\mathrm{res}}_{G,b}^{g}(u), v\bigr)- D_{g}\bigl({\mathrm{res}}_{G,b}^{g}(u), u\bigr)+ D_{g}\bigl({\mathrm{res}}_{G,b}^{g}(v), u\bigr)- D_{g}\bigl({\mathrm{res}}_{G,b}^{g}(v), v\bigr). \end{aligned} $$

Taking \(v=q\in F({\mathrm{res}}_{G,b}^{g})\), we see that

$$ \begin{aligned}&D_{g}\bigl({\mathrm{res}}_{G,b}^{g}(u), q\bigr)+D_{g}\bigl(q, {\mathrm{res}}_{G,b}^{g}(u) \bigr) \\ &\quad \leq D_{g}\bigl({\mathrm{res}}_{G,b}^{g}(u), q\bigr)- D_{g}\bigl({\mathrm{res}}_{G,b}^{g}(u), u\bigr)+ D_{g}(q, u)- D_{g}(q, q). \end{aligned} $$

Hence

$$ D_{g}\bigl(q, {\mathrm{res}}_{G,b}^{g}(u) \bigr)+D_{g}\bigl({\mathrm{res}}_{G,b}^{g}(u), u\bigr) \leq D_{g}(q, u). $$
(3.10)

(vi) Inequality (3.10) shows that \({\mathrm{res}}_{G,b}^{g}\) is a Bregman quasinonexpansive mapping. □

Main result

We develop a strongly convergent inertial hybrid iterative algorithm for finding a common solution of GEP(1.1) and the fixed point problem for a Bregman relatively nonexpansive mapping in a reflexive Banach space.

Theorem 4.1

Let \(K\subseteq Y\) be a nonempty closed convex set with \(K\subseteq {\mathrm{int(dom}}g)\), where \(g :Y\rightarrow (-\infty , +\infty ]\) is a coercive Legendre function which is bounded, uniformly Frechet differentiable, and totally convex on bounded subsets of Y. Let \(G,b: K\times K\rightarrow \mathbb{R}\) satisfy Assumptions 2.1 and 2.2, respectively. Let \(T: K\to K\) be a Bregman relatively nonexpansive mapping. Let \(\Omega ={\mathrm{Sol(GEP\textit{(1.1)})}}\cap F(T)\neq \emptyset \). Let \(\{x_{n}\},\{z_{n}\} \subseteq K\) be generated by

$$ \textstyle\begin{cases} x_{0}= x_{-1}\in K, \\ u_{n}= x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ v_{n}=\nabla g^{*}(\alpha _{n}\nabla g(u_{n})+(1-\alpha _{n})\nabla g(Tu_{n})), \\ w_{n}=\nabla g^{*}(\beta _{n}\nabla g(Tu_{n})+(1-\beta _{n})\nabla g(v_{n})), \\ z_{n}={\mathrm{res}}_{G,b}^{g}w_{n}, \\ C_{n}=\{z\in K:D_{g}(z,z_{n})\leq D_{g}(z,u_{n})\}, \\ Q_{n}=\{z\in K : \langle \nabla g(x_{0})-\nabla g(x_{n}), z-x_{n} \rangle \leq 0\}, \\ x_{n+1}={\mathrm{proj}}_{C_{n}\cap Q_{n}}^{g}{x_{0}},\quad\textit{for all}\ n \geq 0, \end{cases} $$
(4.1)

where \(\{\theta _{n}\}\subseteq (0,1)\), \(\{\alpha _{n}\}, \{\beta _{n}\}\subseteq [0,1]\) with \(\lim_{n\to \infty }\alpha _{n}=0\). Then, \(\{x_{n}\}\) converges strongly to \({\mathrm{proj}}_{\Omega }^{g}{x_{0}}\).
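A one-dimensional numerical sketch of (4.1) is given below. It is entirely our illustration: we take the Hilbert-space case \(g(u)=u^{2}/2\), so \(\nabla g=\nabla g^{*}=I\) and \(D_{g}(a,b)=(a-b)^{2}/2\), with \(K=\mathbb{R}\), \(b\equiv 0\), \(G(z,v)=v^{2}-z^{2}\) (whose resolvent is \(w\mapsto w/3\)), and the Bregman relatively nonexpansive map \(Tu=-u/2\). Here \(\Omega =\{0\}\), so \(x_{n}\) should approach \({\mathrm{proj}}_{\Omega }^{g}x_{0}=0\). In one dimension \(C_{n}\) and \(Q_{n}\) are half-lines, so the projection step is a clip onto an interval.

```python
import numpy as np

# One-dimensional sketch of algorithm (4.1), Hilbert-space case g(u)=u**2/2.
# All concrete choices are ours: K = R, b = 0, G(z, v) = v**2 - z**2 with
# resolvent w -> w/3, and T(u) = -u/2 (Bregman relatively nonexpansive,
# F(T) = {0}).  Then Omega = Sol(GEP) ∩ F(T) = {0}.

def halfspace(a, b):
    """{p : a*p <= b} as an interval of the line (here b >= 0 when a == 0)."""
    if a > 0:
        return (-np.inf, b / a)
    if a < 0:
        return (b / a, np.inf)
    return (-np.inf, np.inf)

res = lambda w: w / 3.0
T = lambda u: -u / 2.0

x_prev = x = x0 = 5.0          # x_{-1} = x_0 = 5
theta, beta = 0.5, 0.5
for n in range(60):
    alpha = 1.0 / (n + 2)                      # alpha_n -> 0
    u = x + theta * (x - x_prev)               # inertial step
    v = alpha * u + (1 - alpha) * T(u)
    w = beta * T(u) + (1 - beta) * v
    z = res(w)
    # C_n = {p : (p-z)^2 <= (p-u)^2}  <=>  2(u-z) p <= u^2 - z^2
    c_lo, c_hi = halfspace(2 * (u - z), u**2 - z**2)
    # Q_n = {p : (x0 - x_n)(p - x_n) <= 0}
    q_lo, q_hi = halfspace(x0 - x, (x0 - x) * x)
    lo, hi = max(c_lo, q_lo), min(c_hi, q_hi)
    x_prev, x = x, min(max(x0, lo), hi)        # x_{n+1} = proj_[lo,hi](x0)

print(abs(x))   # close to proj_Omega(x0) = 0
```

Since \(0\in \Omega \subset C_{n}\cap Q_{n}\) throughout (as the proof below guarantees), the interval \([lo, hi]\) always contains 0 and the clip is well defined.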

Proof

For convenience, we divide the proof into several steps:

Step I. Ω and \(C_{n}\cap Q_{n}\) are closed and convex, \(\forall n\geq 0\).

By Lemmas 2.7 and 3.1, Ω is closed and convex and therefore \({\mathrm{proj}}_{\Omega }^{g}{x_{0}}\) is well defined.

Obviously, \(Q_{n}\) is closed and convex. Moreover, since \(D_{g}(z,z_{n})\leq D_{g}(z,u_{n})\) is equivalent to the linear inequality \(\langle \nabla g(u_{n})-\nabla g(z_{n}), z\rangle \leq g(z_{n})-g(u_{n})+\langle \nabla g(u_{n}),u_{n}\rangle -\langle \nabla g(z_{n}),z_{n}\rangle \), the set \(C_{n}\) is closed and convex for each \(n\geq 0\). Thus, \(C_{n} \cap Q_{n}\) is closed and convex, \(\forall n\geq 0\).

Step II. \(\Omega \subset C_{n}\cap Q_{n}\), \(\forall n\geq 0\) and \(\{x_{n}\}\) is well defined.

Let \(p\in \Omega \), then

$$ \begin{aligned}[b] D_{g}(p,z_{n})&=D_{g} \bigl(p,{\mathrm{res}}_{G,b}^{g}w_{n}\bigr) \\ &\leq D_{g}(p,w_{n}) \\ &=D_{g}\bigl(p,\nabla g^{*}\bigl(\beta _{n} \nabla g(Tu_{n})+(1-\beta _{n}) \nabla g(v_{n})\bigr)\bigr) \\ &\leq \beta _{n}D_{g}(p,Tu_{n})+ (1-\beta _{n})D_{g}(p,v_{n}) \\ &\leq \beta _{n}D_{g}(p,u_{n})+ (1-\beta _{n})D_{g}(p,v_{n}), \end{aligned} $$
(4.2)

and

$$ \begin{aligned}[b] D_{g}(p,v_{n})&=D_{g}\bigl(p, \nabla g^{*}\bigl(\alpha _{n}\nabla g(u_{n})+(1- \alpha _{n})\nabla g(Tu_{n})\bigr)\bigr) \\ &\leq \alpha _{n}D_{g}(p,u_{n})+ (1-\alpha _{n})D_{g}(p,Tu_{n}) \\ &\leq \alpha _{n}D_{g}(p,u_{n})+ (1-\alpha _{n})D_{g}(p,u_{n}) \\ &= D_{g}(p,u_{n}). \end{aligned} $$
(4.3)

Substituting (4.3) into (4.2), we have

$$ D_{g}(p,z_{n})\leq D_{g}(p,u_{n}). $$

Thus, \(p\in C_{n}\). Therefore, \(\Omega \subset C_{n}\), \(\forall n\geq 0\). Further, by induction we show that \(\Omega \subset C_{n}\cap Q_{n}\), \(\forall n\geq 0\). As \(Q_{0}=K\), we have \(\Omega \subset C_{0}\cap Q_{0}\). Suppose that \(\Omega \subset C_{m}\cap Q_{m}\) for some \(m\geq 0\). Then \(x_{m+1}={\mathrm{proj}}_{C_{m}\cap Q_{m}}^{g}x_{0}\in C_{m}\cap Q_{m}\) is well defined. From the definition of \(x_{m+1}\), we get \(\langle \nabla g(x_{0})-\nabla g(x_{m+1}), x_{m+1}-z\rangle \geq 0\), \(\forall z\in C_{m}\cap Q_{m}\). Since \(\Omega \subset C_{m}\cap Q_{m}\), we have

$$ \bigl\langle \nabla g(x_{0})-\nabla g(x_{m+1}), p-x_{m+1}\bigr\rangle \leq 0,\quad \forall p\in \Omega $$

which implies \(\Omega \subset Q_{m+1}\). Hence, \(\Omega \subset C_{m+1}\cap Q_{m+1}\), and by induction \(\Omega \subset C_{n}\cap Q_{n}\), \(\forall n\geq 0\). Consequently, \(x_{n+1}={\mathrm{proj}}_{C_{n}\cap Q_{n}}^{g}x_{0}\) is well defined, \(\forall n\geq 0\), and so \(\{x_{n}\}\) is well defined.

Step III. The sequences \(\{x_{n}\}\), \(\{u_{n}\}\), \(\{v_{n}\}\), \(\{z_{n}\}\), and \(\{w_{n}\}\) are bounded.

Using the definition of \(Q_{n}\) and Lemma 2.6 (ii), we get \(x_{n}={\mathrm{proj}}_{Q_{n}}^{g}x_{0}\). Then, using Lemma 2.6 (iii), we obtain

$$ \begin{aligned}[b] D_{g}(x_{n},x_{0})&=D_{g} \bigl({\mathrm{proj}}_{Q_{n}}^{g}x_{0},x_{0} \bigr) \\ &\leq D_{g}(u,x_{0})-D_{g}\bigl(u,{ \mathrm{proj}}_{Q_{n}}^{g}x_{0}\bigr)\leq D_{g}(u,x_{0}),\quad \forall u\in \Omega \subset Q_{n}. \end{aligned} $$
(4.4)

This implies that \(\{D_{g}(x_{n}, x_{0})\}\) is bounded and hence \(\{x_{n}\}\) is bounded by Lemma 2.5.

Now,

$$ D_{g}(p,x_{n})=D_{g}\bigl(p,{ \mathrm{proj}}_{C_{n-1}\cap Q_{n-1}}^{g}x_{0}\bigr) \leq D_{g}(p,x_{0})-D_{g}(x_{n},x_{0}) $$

implies that \(\{D_{g}(p,x_{n})\}\) is bounded. Since \(\|u_{n}-x_{n}\|\leq \|x_{n}-x_{n-1}\|\), the sequence \(\{u_{n}\}\) is bounded, and \(D_{g}(p,Tu_{n})\leq D_{g}(p,u_{n})\), \(\forall p\in \Omega \), yields that \(\{Tu_{n}\}\) is bounded. Therefore, \(\{v_{n}\}\), \(\{w_{n}\}\), and \(\{z_{n}\}\) are bounded.

Step IV. \(\lim_{n\to \infty }\|x_{n+1}-x_{n}\|=0\); \(\lim_{n\to \infty }\|x_{n}-u_{n}\|=0\); \(\lim_{n\to \infty }\|z_{n}-u_{n}\|=0\); \(\lim_{n\to \infty }\|z_{n}-w_{n}\|=0\); \(\lim_{n\to \infty }\|u_{n}-w_{n}\|=0\), and \(\lim_{n\to \infty }\|u_{n}-Tu_{n}\|=0\).

Since \(x_{n+1}={\mathrm{proj}}_{C_{n}\cap Q_{n}}^{g}x_{0}\in Q_{n}\) and \(x_{n}={\mathrm{proj}}_{Q_{n}}^{g}x_{0}\), we get

$$ D_{g}(x_{n},x_{0})\leq D_{g}(x_{n+1},x_{0}), \quad\forall n\geq 0,$$

which implies \(\{D_{g}(x_{n},x_{0})\}\) is nondecreasing. By the boundedness of \(\{D_{g}(x_{n},x_{0})\}\), \(\lim_{n\to \infty }D_{g}(x_{n},x_{0})\) exists and is finite. Further,

$$ \begin{aligned} D_{g}(x_{n+1},x_{n})&=D_{g} \bigl(x_{n+1},{\mathrm{proj}}_{Q_{n}}^{g}x_{0} \bigr) \\ &\leq D_{g}(x_{n+1},x_{0})-D_{g} \bigl({\mathrm{proj}}_{Q_{n}}^{g}x_{0},x_{0} \bigr) \\ &= D_{g}(x_{n+1},x_{0})-D_{g}(x_{n},x_{0}) \end{aligned} $$

which yields

$$ \lim_{n\to \infty }D_{g}(x_{n+1},x_{n})=0. $$

Using Lemma 2.3,

$$ \lim_{n\to \infty } \Vert x_{n+1}-x_{n} \Vert =0. $$
(4.5)

From the definition of \(u_{n}\) and \(\theta _{n}\in (0,1)\), \(\|u_{n}-x_{n}\|=\theta _{n}\|x_{n}-x_{n-1}\|\leq \|x_{n}-x_{n-1}\|\), which implies by (4.5) that

$$ \lim_{n\to \infty } \Vert u_{n}-x_{n} \Vert =0. $$
(4.6)

Since

$$ \Vert u_{n}- x_{n+1} \Vert \leq \Vert u_{n}- x_{n} \Vert + \Vert x_{n}- x_{n+1} \Vert ,$$

it follows from (4.5) and (4.6) that

$$ \lim_{n\to \infty } \Vert u_{n}-x_{n+1} \Vert = 0. $$
(4.7)

Using Lemma 2.4 and the fact that g is uniformly Frechet differentiable, we get

$$\begin{aligned} \lim_{n\to \infty } \bigl\vert g(u_{n})- g(x_{n+1}) \bigr\vert = 0 \end{aligned}$$
(4.8)

and

$$\begin{aligned} \lim_{n\to \infty } \bigl\Vert \nabla g(u_{n})-\nabla g(x_{n+1}) \bigr\Vert = 0. \end{aligned}$$

By the definition of \(D_{g}\), we get

$$ D_{g}(x_{n+1},u_{n})= g(x_{n+1})-g(u_{n})- \bigl\langle \nabla g(u_{n}),x_{n+1}-u_{n} \bigr\rangle . $$
(4.9)
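For readers implementing the method, the quantity in (4.9) is cheap to evaluate once g and ∇g are fixed. The following Python sketch (our illustration; the function names are not from the paper) computes \(D_{g}\) for the choice \(g(u)=\frac{2}{3}u^{2}\) used later in Example 6.1, for which the distance collapses to the closed form \(D_{g}(x,y)=\frac{2}{3}(x-y)^{2}\).

```python
# Bregman distance D_g(x, y) = g(x) - g(y) - <grad g(y), x - y>,
# specialized to Y = R with g(u) = (2/3) u^2 and grad g(u) = (4/3) u.
def g(u):
    return (2.0 / 3.0) * u * u

def grad_g(u):
    return (4.0 / 3.0) * u

def bregman(x, y):
    return g(x) - g(y) - grad_g(y) * (x - y)

# For this quadratic g the distance equals (2/3)(x - y)^2, hence is symmetric
# here, although D_g is not symmetric for a general Legendre function g.
pairs = [(0.0, 1.0), (-2.5, 0.3), (4.0, 4.0)]
checks = [abs(bregman(x, y) - (2.0 / 3.0) * (x - y) ** 2) for x, y in pairs]
```

In this scalar quadratic setting the Bregman projection \({\mathrm{proj}}^{g}\) onto an interval therefore coincides with the usual metric projection, a fact exploited in Example 6.1.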

Since g is bounded on bounded subsets of Y, \(\nabla g\) maps bounded subsets of Y to bounded subsets of \(Y^{*}\); moreover, since g is uniformly Frechet differentiable, g is uniformly continuous on bounded subsets of Y. Hence, by (4.7)–(4.9),

$$ \lim_{n\to \infty }D_{g}(x_{n+1},u_{n})=0. $$
(4.10)

As \(x_{n+1}={\mathrm{proj}}_{C_{n}\cap Q_{n}}^{g}x_{0}\in C_{n}\), we have

$$ D_{g}(x_{n+1},z_{n})\leq D_{g}(x_{n+1},u_{n}), $$
(4.11)

and hence by (4.10) and (4.11),

$$ \lim_{n\to \infty }D_{g}(x_{n+1},z_{n})= 0.$$

Thanks to Lemma 2.3,

$$ \lim_{n\to \infty } \Vert x_{n+1}-z_{n} \Vert = 0. $$
(4.12)

Taking into account

$$ \Vert z_{n}- u_{n} \Vert \leq \Vert z_{n}- x_{n+1} \Vert + \Vert x_{n+1}- u_{n} \Vert ,$$

by (4.7) and (4.12), we get

$$ \lim_{n\to \infty } \Vert z_{n}-u_{n} \Vert = 0. $$
(4.13)

By Lemma 2.4,

$$\begin{aligned} \lim_{n\to \infty } \bigl\vert g(z_{n})- g(u_{n}) \bigr\vert = 0 \end{aligned}$$
(4.14)

and

$$\begin{aligned} \lim_{n\to \infty } \bigl\Vert \nabla g(z_{n})-\nabla g(u_{n}) \bigr\Vert = 0. \end{aligned}$$
(4.15)

Next, we estimate the difference

$$ \begin{aligned}[b]& D_{g}(p,u_{n})-D_{g}(p,z_{n}) \\ &\quad = g(p)- g(u_{n})-\bigl\langle \nabla g(u_{n}),p-u_{n} \bigr\rangle - g(p)+ g(z_{n})+ \bigl\langle \nabla g(z_{n}),p-z_{n}\bigr\rangle \\ &\quad = g(z_{n})- g(u_{n})+ \bigl\langle \nabla g(z_{n}),p-z_{n}\bigr\rangle - \bigl\langle \nabla g(u_{n}),p-u_{n}\bigr\rangle \\ &\quad = g(z_{n})- g(u_{n})+ \bigl\langle \nabla g(z_{n}), u_{n}- z_{n}\bigr\rangle + \bigl\langle \nabla g(z_{n})- \nabla g(u_{n}), p- u_{n}\bigr\rangle . \end{aligned} $$
(4.16)

Since \(\{z_{n}\}\), \(\{u_{n}\}\), \(\{\nabla g(z_{n})\}\), and \(\{\nabla g(u_{n})\}\) are bounded, using (4.13)–(4.16), we get

$$\begin{aligned} \lim_{n\rightarrow \infty } \bigl\vert D_{g}(p,u_{n})- D_{g}(p,z_{n}) \bigr\vert = 0. \end{aligned}$$
(4.17)

Further, it follows from Lemma 3.1(v) that

$$ \begin{aligned}[b] D_{g}(z_{n},w_{n}) &\leq D_{g}(p,w_{n})- D_{g}(p,z_{n}) \\ &\leq D_{g}\bigl(p,\nabla g^{*}\bigl(\beta _{n}\nabla g(Tu_{n})+ (1-\beta _{n}) \nabla g(v_{n})\bigr)\bigr)- D_{g}(p,z_{n}) \\ &\leq \beta _{n} D_{g}(p,Tu_{n})+ (1- \beta _{n}) D_{g}(p,u_{n})- D_{g}(p,z_{n}) \\ &\leq D_{g}(p,u_{n})- D_{g}(p,z_{n}). \end{aligned} $$
(4.18)

Since \(\{D_{g}(p,u_{n})\}\) and \(\{D_{g}(p,z_{n})\}\) are bounded, by (4.17) and (4.18),

$$ \lim_{n\rightarrow \infty } D_{g}(z_{n},w_{n})= 0,$$

and hence

$$ \lim_{n\rightarrow \infty } \Vert z_{n}- w_{n} \Vert = 0. $$
(4.19)

From (4.13) and (4.19), we get

$$ \lim_{n\rightarrow \infty } \Vert u_{n}- w_{n} \Vert = 0. $$
(4.20)

Using uniform Frechet differentiability of g, Lemma 2.4, (4.19), and (4.20), we have

$$\begin{aligned} \lim_{n\rightarrow \infty } \bigl\Vert \nabla g(z_{n})- \nabla g(w_{n}) \bigr\Vert &= 0, \end{aligned}$$
(4.21)
$$\begin{aligned} \lim_{n\rightarrow \infty } \bigl\Vert \nabla g(u_{n})- \nabla g(w_{n}) \bigr\Vert &= 0. \end{aligned}$$
(4.22)

Note that

$$ \begin{aligned}[b] & \bigl\Vert \nabla g(u_{n})- \nabla g(w_{n}) \bigr\Vert \\ &\quad = \bigl\Vert \nabla g(u_{n})- \nabla g\bigl(\nabla g^{*}\bigl(\beta _{n}\nabla g(Tu_{n})+ (1- \beta _{n})\nabla g(v_{n})\bigr)\bigr) \bigr\Vert \\ &\quad = \bigl\Vert \nabla g(u_{n})-\beta _{n}\nabla g(Tu_{n})- (1-\beta _{n}) \nabla g(v_{n}) \bigr\Vert \\ &\quad = \bigl\Vert \beta _{n}\bigl(\nabla g(u_{n})-\nabla g(Tu_{n})\bigr)+ (1-\beta _{n}) \bigl( \nabla g(u_{n})-\nabla g(v_{n})\bigr) \bigr\Vert \\ &\quad = \bigl\Vert \beta _{n}\bigl(\nabla g(u_{n})-\nabla g(Tu_{n})\bigr) \\ &\qquad{} + (1-\beta _{n}) \bigl(\nabla g(u_{n})-\nabla g\bigl(\nabla g^{*}\bigl( \alpha _{n}\nabla g(u_{n})+ (1-\alpha _{n})\nabla g(Tu_{n}) \bigr)\bigr)\bigr) \bigr\Vert \\ &\quad = \bigl\Vert \beta _{n}\bigl(\nabla g(u_{n})-\nabla g(Tu_{n})\bigr)+(1-\beta _{n}) (1- \alpha _{n}) \bigl(\nabla g(u_{n})-\nabla g(Tu_{n}) \bigr) \bigr\Vert \\ &\quad = \bigl[1-\alpha _{n}(1-\beta _{n})\bigr] \bigl\Vert \nabla g(u_{n})-\nabla g(Tu_{n}) \bigr\Vert . \end{aligned} $$
(4.23)

By (4.22), (4.23), and using \(\lim_{n\to \infty }\alpha _{n}= 0\), we get

$$\begin{aligned} \lim_{n\rightarrow \infty } \bigl\Vert \nabla g(u_{n})- \nabla g(Tu_{n}) \bigr\Vert = 0. \end{aligned}$$
(4.24)

Moreover, we have from (4.24) that

$$\begin{aligned} \lim_{n\rightarrow \infty } \Vert u_{n}- Tu_{n} \Vert = 0. \end{aligned}$$
(4.25)

Step V. \(\bar{x} \in \Omega \).

First, we prove that \(\bar{x} \in F(T)\). As \(\{x_{n}\}\) is bounded, there exists a subsequence \(\{x_{n_{k}}\}\subseteq \{x_{n}\}\) such that \(x_{n_{k}}\rightharpoonup \bar{x} \in K\) as \(k\rightarrow \infty \). Due to (4.6), (4.13), (4.19), and (4.20), the sequences \(\{x_{n}\}\), \(\{u_{n}\}\), \(\{w_{n}\}\), and \(\{z_{n}\}\) have the same asymptotic behavior, and thus the corresponding subsequences satisfy \(u_{n_{k}}\rightharpoonup \bar{x}\), \(w_{n_{k}}\rightharpoonup \bar{x}\), and \(z_{n_{k}}\rightharpoonup \bar{x}\) as \(k\rightarrow \infty \). Combining \(u_{n_{k}}\rightharpoonup \bar{x} \) with (4.25) shows that

$$\begin{aligned} \lim_{k\rightarrow \infty } \Vert u_{n_{k}}- Tu_{n_{k}} \Vert = 0. \end{aligned}$$

Since T is a Bregman relatively nonexpansive mapping, \(\widehat{\mathit{{F}}}(T)=F(T)\), and hence \(\bar{x} \in F(T)\).

Next, we prove that \(\bar{x}\in {\mathrm{Sol(GEP\text{(1.1)})}}\). As \(z_{n}={\mathrm{res}}_{G,b}^{g}w_{n}\), we have

$$ G(z_{n_{k}},v)+\bigl\langle \nabla g(z_{n_{k}})-\nabla g(w_{n_{k}}), v-z_{n_{k}} \bigr\rangle +b(v, z_{n_{k}})-b(z_{n_{k}},z_{n_{k}})\geq 0,\quad \forall v\in K. $$

Using Assumption 2.1, we have

$$ \bigl\langle \nabla g(z_{n_{k}})-\nabla g(w_{n_{k}}), v-z_{n_{k}}\bigr\rangle \geq G(v,z_{n_{k}})-b(v, z_{n_{k}})+b(z_{n_{k}},z_{n_{k}}), \quad \forall v \in K. $$
(4.26)

Letting \(k\to \infty \) in (4.26) and using (4.21), the boundedness of \(\{v-z_{n_{k}}\}\), \(z_{n_{k}}\rightharpoonup \bar{x}\), and the assumptions on G and b, we obtain

$$ 0\geq G(v, \bar{x})-b(v, \bar{x})+b(\bar{x}, \bar{x}).$$

Consider \(v_{s}:=sv+(1-s)\bar{x}\), \(\forall s\in (0,1]\) and \(v\in K\). Then, \(v_{s}\in K\) and hence

$$ G(v_{s}, \bar{x})-b(v_{s}, \bar{x})+b(\bar{x}, \bar{x})\leq 0.$$

Now,

$$ \begin{aligned} 0&= G(v_{s}, v_{s}) \\ &\leq s G(v_{s},v)+(1-s)G(v_{s}, \bar{x}) \\ &\leq s G(v_{s},v)+(1-s)\bigl[b(v_{s}, \bar{x})-b( \bar{x}, \bar{x})\bigr] \\ &\leq s G(v_{s},v)+(1-s)s\bigl[b(v, \bar{x})-b(\bar{x}, \bar{x}) \bigr]. \end{aligned} $$

Dividing by \(s>0\) and letting \(s\to 0^{+}\) (so that \(v_{s}\to \bar{x}\)), by the assumptions on G we get

$$ G(\bar{x}, v)+b(v, \bar{x})-b(\bar{x}, \bar{x})\geq 0,\quad \forall v\in K,$$

which implies \(\bar{x}\in {\mathrm{Sol(GEP\text{(1.1)})}}\). Therefore, \(\bar{x}\in \Omega \).

Step VI. \(x_{n}\rightarrow \bar{x}= {\mathrm{proj}}_{\Omega }^{g}x_{0}\).

Let \(\tilde{u}={\mathrm{proj}}_{\Omega }^{g}x_{0}\). Since \(x_{n+1}= {\mathrm{proj}}_{C_{n}\cap Q_{n}}^{g}x_{0}\) and \(\tilde{u}\in \Omega \subset C_{n}\cap Q_{n}\), by (4.4) we see that

$$ D_{g}(x_{n+1},x_{0})\leq D_{g}\bigl({\mathrm{proj}}_{\Omega }^{g}x_{0}, x_{0}\bigr). $$

Using Lemma 2.8, \(\{x_{n}\}\) converges strongly to \(\tilde{u}={\mathrm{proj}}_{\Omega }^{g}x_{0}\). Hence, by the uniqueness of the limit, \(\bar{x}={\mathrm{proj}}_{\Omega }^{g}x_{0}\) and \(x_{n}\rightarrow \bar{x}\). □

Consequences

If \(g(x)=\frac{1}{2}\|x\|^{2}\), \(\forall x\in Y\), then a Bregman relatively nonexpansive mapping reduces to a relatively nonexpansive mapping.

Corollary 5.1

Let \(G, b: K\times K\rightarrow \mathbb{R}\) be bifunctions satisfying Assumptions 2.1 and 2.2, respectively. Let T be a relatively nonexpansive mapping on K. Let \(\Omega ={\mathrm{Sol(GEP\textit{(1.1)})}}\cap F(T)\neq \emptyset \). Let \(\{x_{n}\},\{z_{n}\} \subseteq K\) be generated by

$$ \textstyle\begin{cases} x_{0}= x_{-1}\in K, \\ u_{n}= x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ v_{n}=J^{-1}(\alpha _{n}J(u_{n})+(1-\alpha _{n})J(Tu_{n})), \\ w_{n}=J^{-1}(\beta _{n}J(Tu_{n})+(1-\beta _{n})J(v_{n})), \\ z_{n}={\mathrm{res}}_{G,b}w_{n}, \\ C_{n}=\{z\in K:\Psi (z,z_{n})\leq \Psi (z,u_{n})\}, \\ Q_{n}=\{z\in K: \langle J (x_{0})-J(x_{n}), z-x_{n}\rangle \leq 0\}, \\ x_{n+1}={\mathrm{\prod }}_{C_{n}\cap Q_{n}}{x_{0}},\quad \textit{for all } n\geq 0, \end{cases} $$

where Ψ is defined in Remark 2.2, \(\{\theta _{n}\}\subseteq (0,1)\), \(\{\alpha _{n}\}, \{\beta _{n}\}\subseteq [0,1]\) with \(\lim_{n\to \infty }\alpha _{n}=0\). Then, \(\{x_{n}\}\) converges strongly to \({\mathrm{\prod }}_{\Omega }{x_{0}}\).

Also, if \({\mathrm{Sol(GEP\text{(1.1)})}}=K\), then Theorem 4.1 can be rewritten as follows.

Corollary 5.2

Let T be a relatively nonexpansive mapping on K with \(F(T)\neq \emptyset \). Let \(\{x_{n}\},\{z_{n}\} \subseteq K\) be generated by

$$ \textstyle\begin{cases} x_{0}= x_{-1}\in K, \\ u_{n}= x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ v_{n}=J^{-1}(\alpha _{n}J(u_{n})+(1-\alpha _{n})J(Tu_{n})), \\ z_{n}=J^{-1}(\beta _{n}J(Tu_{n})+(1-\beta _{n})J(v_{n})), \\ C_{n}=\{z\in K:\Psi (z,z_{n})\leq \Psi (z,u_{n})\}, \\ Q_{n}=\{z\in K : \langle J (x_{0})-J(x_{n}), z-x_{n}\rangle \leq 0\}, \\ x_{n+1}={\mathrm{\prod }}_{C_{n}\cap Q_{n}}{x_{0}},\quad \textit{for all } n\geq 0, \end{cases} $$

where Ψ is defined in Remark 2.2, \(\{\theta _{n}\}\subseteq (0,1)\), \(\{\alpha _{n}\}, \{\beta _{n}\}\subseteq [0,1]\) with \(\lim_{n\to \infty }\alpha _{n}=0\). Then, \(\{x_{n}\}\) converges strongly to \({\mathrm{\prod }}_{F(T)}{x_{0}}\).

Moreover, if \({\mathrm{Sol(GEP\text{(1.1)})}}=K\), then, using Example 2.1 for a maximal monotone operator \(A: Y\to 2^{Y^{*}}\), we have the following.

Corollary 5.3

Let \(K\subseteq Y\) with \(K\subseteq {\mathrm{int(dom}}g)\), where \(g :Y\rightarrow (-\infty , +\infty ]\) is a coercive Legendre function which is bounded, uniformly Frechet differentiable, and totally convex on bounded subsets of Y. Let \(A: Y\to 2^{Y^{*}}\) be a maximal monotone operator with \(A^{-1}(0)\neq \emptyset \). Let \(\{x_{n}\},\{z_{n}\} \subseteq K\) be generated by

$$ \textstyle\begin{cases} x_{0}= x_{-1}\in K, \\ u_{n}= x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ v_{n}=\nabla g^{*}(\alpha _{n}\nabla g(u_{n})+(1-\alpha _{n})\nabla g({ \mathrm{res}}_{A}^{g}u_{n})), \\ w_{n}=\nabla g^{*}(\beta _{n}\nabla g({\mathrm{res}}_{A}^{g}u_{n})+(1- \beta _{n})\nabla g(v_{n})), \\ z_{n}=w_{n}, \\ C_{n}=\{z\in K:D_{g}(z,z_{n})\leq D_{g}(z,u_{n})\}, \\ Q_{n}=\{z\in K : \langle \nabla g(x_{0})-\nabla g(x_{n}), z-x_{n} \rangle \leq 0\}, \\ x_{n+1}={\mathrm{proj}}_{C_{n}\cap Q_{n}}^{g}{x_{0}},\quad \textit{for all } n \geq 0, \end{cases} $$

where \(\{\theta _{n}\}\subseteq (0,1)\), \(\{\alpha _{n}\}, \{\beta _{n}\}\subseteq [0,1]\) with \(\lim_{n\to \infty }\alpha _{n}=0\). Then, \(\{x_{n}\}\) converges strongly to \({\mathrm{proj}}_{A^{-1}(0)}x_{0}\).
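To make Corollary 5.3 concrete, here is a Python sketch under toy choices of our own (none of these specifics are from the paper): \(Y=\mathbb{R}\), \(g(u)=\frac{1}{2}u^{2}\) (so \(\nabla g=\nabla g^{*}=\mathrm{id}\) and \({\mathrm{proj}}^{g}\) is the metric projection), and \(A(u)=2u\), a maximal monotone operator with \(A^{-1}(0)=\{0\}\) and Bregman resolvent \({\mathrm{res}}_{A}^{g}=(\nabla g+A)^{-1}\circ \nabla g: u\mapsto u/3\). The sets \(C_{n}\) and \(Q_{n}\) are half-lines whose direction is read off from the signs of \(z_{n}-u_{n}\) and \(x_{n}-x_{0}\).

```python
# Toy scalar instance of the scheme in Corollary 5.3 (illustrative choices):
# g(u) = u^2/2, A(u) = 2u, res_A^g(u) = u/3, A^{-1}(0) = {0}.
def run_corollary(x0=1.0, theta=0.5, iters=80):
    res = lambda u: u / 3.0                    # (grad g + A)^{-1} o grad g
    x_prev, x = x0, x0                         # x_{-1} = x_0
    for n in range(1, iters + 1):
        al, be = 1.0 / n**3, 1.0 / n**2        # alpha_n, beta_n (our choice)
        u = x + theta * (x - x_prev)           # inertial extrapolation
        v = al * u + (1.0 - al) * res(u)
        w = be * res(u) + (1.0 - be) * v
        z = w                                  # res_{G,b}^g = id since Sol(GEP(1.1)) = K
        e = 0.5 * (z + u)                      # midpoint: boundary of C_n
        lo, hi = float("-inf"), float("inf")
        if z > u:                              # C_n = {s : |s - z| <= |s - u|}
            lo = max(lo, e)
        elif z < u:
            hi = min(hi, e)
        if x > x0:                             # Q_n = {s : (x0 - x_n)(s - x_n) <= 0}
            lo = max(lo, x)
        elif x < x0:
            hi = min(hi, x)
        x_prev, x = x, min(max(x0, lo), hi)    # metric projection of x0 onto [lo, hi]
    return x
```

The iterates decrease from \(x_{0}=1\) toward \({\mathrm{proj}}_{A^{-1}(0)}x_{0}=0\), in line with the corollary.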

Numerical example

Figure 1 Convergence of sequences

Table 1 Values of \(x_{n}\) and \(z_{n}\)

Example 6.1

Let \(Y=\mathbb{R}\), \(K=[r_{1},r_{2}]\), where \(r_{1},r_{2}\in \mathbb{R}\) with \(r_{1}\leq \frac{1}{2}\leq r_{2}\) are arbitrary but fixed (so that \(\frac{1}{2}\in K\) and \(T(K)\subseteq K\) below), and \(g:\mathbb{R}\to \mathbb{R}\) with \(g(u)=\frac{2}{3}u^{2}\). Obviously, \(g:\mathbb{R}\to \mathbb{R}\) is a coercive Legendre function which is bounded, uniformly Frechet differentiable, and totally convex on bounded subsets of \(\mathbb{R}\), and \(\nabla g(u)=\frac{4}{3}u\). As \(g^{*}(u^{*})=\sup \{\langle u^{*}, u \rangle -g(u): u\in \mathbb{R}\}\), we get \(g^{*}(w)=\frac{3}{8}w^{2}\) and \(\nabla g^{*}(w)=\frac{3}{4}w\). Let \(G:K\times K\to \mathbb{R}\) with \(G(u,v)=(u-1)(v-u)\), \(\forall u,v\in K\), and let \(b:K\times K\to \mathbb{R}\) be such that \(b(u,v)=uv\), \(\forall u,v\in K\). Obviously, G and b satisfy Assumptions 2.1 and 2.2, respectively, and \({\mathrm{Sol(GEP\text{(1.1)})}}=\{\frac{1}{2}\}\neq \emptyset \). Let \(T: K\to K\) be such that \(Tx=\frac{x+1}{3}\). Clearly, T is a Bregman relatively nonexpansive mapping and \(F(T)=\{\frac{1}{2}\}\). Thus, \(\Omega =\{\frac{1}{2}\}\neq \emptyset \). Suppose \(\{\alpha _{n}\}=\{\frac{1}{n^{3}}\}\), \(\{\beta _{n}\}=\{\frac{1}{n^{2}}\}\), and \(\theta _{n}=0.6\). After simplification, the hybrid iterative scheme (4.1) becomes: given \(x_{0}\), \(x_{1}\),

$$\begin{aligned}& u_{n}= x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\& v_{n}=\alpha _{n}u_{n}+(1-\alpha _{n}) \biggl(\frac{u_{n}+1}{3} \biggr), \\& w_{n}=\beta _{n} \biggl(\frac{u_{n}+1}{3} \biggr)+(1-\beta _{n})v_{n};\qquad z_{n}= \frac{4w_{n}+3}{10}, \\& C_{n}= [e_{n}, \infty ),\quad \text{where }e_{n}:= \frac{z_{n}+u_{n}}{2};\qquad Q_{n}=[{x_{n}}, \infty ); \\& x_{n+1}={\mathrm{proj}}_{C_{n}\cap Q_{n}}^{g}{x_{0}}, \quad \forall n\geq 1. \end{aligned}$$

Then, the sequences \(\{x_{n}\}\) and \(\{z_{n}\}\) generated by (4.1) converge to \(\bar{x}=\frac{1}{2}\in \Omega \).
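As a check, the simplified scheme can be run in a few lines of Python (our illustration, not the authors' code; the starting points \(x_{0}=x_{1}=0\) are our choice). Since \(D_{g}(z,a)=\frac{2}{3}(z-a)^{2}\) here, the Bregman projection coincides with the metric projection and \(C_{n}\), \(Q_{n}\) are half-lines; the sketch determines their direction from the signs of \(z_{n}-u_{n}\) and \(x_{n}-x_{0}\) rather than assuming \(C_{n}=[e_{n},\infty )\), which also covers iterations where the inertial step overshoots \(\frac{1}{2}\).

```python
# Scalar instance of scheme (4.1) from Example 6.1:
# g(u) = (2/3)u^2, T(u) = (u+1)/3, res_{G,b}^g(w) = (4w+3)/10,
# D_g(z,a) = (2/3)(z-a)^2, so proj^g is the usual metric projection.
def run(x0=0.0, x1=0.0, theta=0.6, iters=80):
    T = lambda u: (u + 1.0) / 3.0
    x_prev, x = x0, x1
    for n in range(1, iters + 1):
        al, be = 1.0 / n**3, 1.0 / n**2        # alpha_n, beta_n
        u = x + theta * (x - x_prev)           # inertial extrapolation
        v = al * u + (1.0 - al) * T(u)         # nabla g is linear, so a convex combination
        w = be * T(u) + (1.0 - be) * v
        z = (4.0 * w + 3.0) / 10.0             # z_n = res_{G,b}^g(w_n)
        e = 0.5 * (z + u)                      # midpoint e_n separating C_n
        lo, hi = float("-inf"), float("inf")
        if z > u:                              # C_n = {s : |s - z_n| <= |s - u_n|}
            lo = max(lo, e)
        elif z < u:
            hi = min(hi, e)
        if x > x0:                             # Q_n = {s : (x0 - x_n)(s - x_n) <= 0}
            lo = max(lo, x)
        elif x < x0:
            hi = min(hi, x)
        x_prev, x = x, min(max(x0, lo), hi)    # project x0 onto [lo, hi]
    return x
```

With these choices the iterates approach \(\bar{x}=\frac{1}{2}\) from below, matching the behavior reported in Figure 1 and Table 1.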

Conclusion

In this paper, we introduced and studied an inertial hybrid iterative method for finding a common solution of the GEP and the FPP for a Bregman relatively nonexpansive mapping in a Banach space, and we established its strong convergence. From both theoretical and applied points of view, inertial methods involving Bregman relatively nonexpansive mappings are of interest for data analysis and certain imaging problems. It is worth mentioning that the convergence analysis of inertial iterative methods in the setting of Banach spaces is still largely unexplored.

Availability of data and materials

Not applicable.

Abbreviations

GEP:

Generalized equilibrium problem

EP:

Equilibrium problem

VIP:

Variational inequality problem

FPP:

Fixed point problem

References

  1. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 10, 123–145 (1994)

  2. Daniele, P., Giannessi, F., Maugeri, A. (eds.): Equilibrium Problems and Variational Models. Nonconvex Optimization and Its Applications, vol. 68. Kluwer Academic Publishers, Norwell (2003)

  3. Moudafi, A.: Second order differential proximal methods for equilibrium problems. J. Inequal. Pure Appl. Math. 4(1), Article ID 18 (2003)

  4. Hartman, P., Stampacchia, G.: On some non-linear elliptic differential-functional equation. Acta Math. 115, 271–310 (1966)

  5. Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)

  6. Nadezhkina, N., Takahashi, W.: Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz continuous monotone mapping. SIAM J. Optim. 16(40), 1230–1241 (2006)

  7. Ceng, L.C., Guu, S.M., Yao, J.C.: Finding common solution of variational inequality, a general system of variational inequalities and fixed point problem via a hybrid extragradient method. Fixed Point Theory Appl. 2011, Article ID 626159 (2011)

  8. Ceng, L.C., Hadjisavvas, N., Wong, N.C.: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 46, 635–646 (2010)

  9. Ceng, L.C., Wang, C.Y., Yao, J.C.: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 67, 375–390 (2008)

  10. Rouhani, B.D., Kazmi, K.R., Rizvi, S.H.: A hybrid-extragradient-convex approximation method for a system of unrelated mixed equilibrium problems. Trans. Math. Program. Appl. 1(8), 82–95 (2013)

  11. Farid, M.: The subgradient extragradient method for solving mixed equilibrium problems and fixed point problems in Hilbert spaces. J. Appl. Numer. Optim. 1, 335–345 (2019)

  12. Yao, Y., Li, H., Postolache, M.: Iterative algorithms for split equilibrium problems of monotone operators and fixed point problems of pseudo-contractions. Optimization (2020). https://doi.org/10.1080/02331934.2020.1857757

  13. Zhang, C., Zhu, Z., Yao, Y., Liu, Q.: Homotopy method for solving mathematical programs with bounded box-constrained variational inequalities. Optimization 68 (2019)

  14. Zhao, X., Kobis, M.A., Yao, Y., Yao, J.C.: A projected subgradient method for nondifferentiable quasiconvex multiobjective optimization problems. J. Optim. Theory Appl. 190 (2021)

  15. Zhu, L.J., Yao, Y., Postolache, M.: Projection methods with line search technique for pseudomonotone equilibrium problems and fixed point problems. UPB Sci. Bull., Ser. A 83(1), 3–14 (2021)

  16. Matsushita, S., Takahashi, W.: A strong convergence theorem for relatively nonexpansive mappings in a Banach space. J. Approx. Theory 134, 257–266 (2005)

  17. Dung, N.V., Hieu, N.T.: A new hybrid projection algorithm for equilibrium problems and asymptotically quasi-ϕ-nonexpansive mappings in Banach spaces. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 113, 2017–2035 (2019)

  18. Kazmi, K.R., Ali, R.: Common solution to an equilibrium problem and a fixed point problem for an asymptotically quasi-ϕ-nonexpansive mapping in intermediate sense. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 111, 877–889 (2017)

  19. Kazmi, K.R., Farid, M.: Some iterative schemes for generalized vector equilibrium problems and relatively nonexpansive mappings in Banach spaces. Math. Sci. 7, 19 (2013)

  20. Takahashi, W., Zembayashi, K.: Strong and weak convergence theorem for equilibrium problems and relatively nonexpansive mappings in Banach spaces. Nonlinear Anal. 70, 45–57 (2009)

  21. Bregman, L.M.: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200–217 (1967)

  22. Bauschke, H.H., Borwein, J.M., Combettes, P.L.: Essential smoothness, essential strict convexity, and Legendre function in Banach spaces. Commun. Contemp. Math. 3, 615–647 (2001)

  23. Butnariu, D., Iusem, A.N.: Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization. Applied Optimization, vol. 40. Springer, Dordrecht (2000)

  24. Butnariu, D., Resmerita, E.: Bregman distances, totally convex functions, and a method for solving operator equations in Banach spaces. Abstr. Appl. Anal. 2006 (2006), 39 pages

  25. Huang, Y.Y., Jeng, J.C., Kuo, T.Y., Hong, C.C.: Fixed point and weak convergence theorems for point-dependent λ-hybrid mappings in Banach spaces. Fixed Point Theory Appl. 2011, 105 (2011)

  26. Reich, S., Sabach, S.: Two strong convergence theorems for a proximal method in reflexive Banach spaces. Numer. Funct. Anal. Optim. 31, 22–44 (2010)

  27. Agarwal, R.P., Chen, J.W., Cho, Y.J.: Strong convergence theorems for equilibrium problems and weak Bregman relatively nonexpansive mappings in Banach spaces. J. Inequal. Appl. 2013, 119 (2013)

  28. Chen, J.W., Wan, Z.P., Yuan, L.Y., Zheng, Y.: Approximation of fixed points of weak Bregman relatively nonexpansive mappings in Banach spaces. Int. J. Math. Math. Sci. 2011 (2011) 23 pages

  29. Kassay, G., Reich, S., Sabach, S.: Iterative methods for solving systems of variational inequalities in reflexive Banach spaces. SIAM J. Optim. 21, 1319–1344 (2011)

  30. Reich, S., Sabach, S.: Two strong convergence theorems for Bregman strongly nonexpansive operators in reflexive Banach spaces. Nonlinear Anal. 73, 122–135 (2010)

  31. Suantai, S., Cho, Y.J., Cholamjiak, P.: Halpern’s iteration for Bregman strongly nonexpansive mappings in reflexive Banach space. Comput. Math. Appl. 64, 489–499 (2012)

  32. Maingé, P.E.: Convergence theorem for inertial KM-type algorithms. J. Comput. Appl. Math. 219, 223–236 (2008)

  33. Alansari, M., Ali, R., Farid, M.: Strong convergence of an inertial iterative algorithm for variational inequality problem, generalized equilibrium problem, and fixed point problem in a Banach space. J. Inequal. Appl. 2020, 42 (2020). https://doi.org/10.1186/s13660-020-02313-z

  34. Bot, R.I., Csetnek, E.R., Hendrich, C.: Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 256, 472–487 (2015)

  35. Dong, Q.L., Kazmi, K.R., Ali, R., Li, X.H.: Inertial Krasnoselskii–Mann type hybrid algorithms for solving hierarchical fixed point problems. J. Fixed Point Theory Appl. 21, 57 (2019)

  36. Dong, Q.L., Peng, Y., Yao, Y.: Alternated inertial projection methods for the split equality problem. J. Nonlinear Convex Anal. 22, 53–67 (2021)

  37. Dong, Q.L., Yuan, H.B., Cho, Y.J., Rassias, T.M.: Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. 12(1), 87–102 (2018)

  38. Farid, M., Cholamjiak, W., Ali, R., Kazmi, K.R.: A new shrinking projection algorithm for a generalized mixed variational-like inequality problem and asymptotically quasi-ϕ-nonexpansive mapping in a Banach space. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 115, 114 (2021)

  39. Khan, S.A., Suantai, S., Cholamjiak, W.: Shrinking projection methods involving inertial forward-backward splitting methods for inclusion problems. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 113(2), 645–656 (2019)

  40. Liu, L., Cho, S.Y., Yao, J.C.: Convergence analysis of an inertial Tseng’s extragradient algorithm for solving pseudomonotone variational inequalities and applications. J. Nonlinear Var. Anal. 5, 627–644 (2021)

  41. Ogbuisi, F.U., Iyiola, O.S., Ngnotchouye, J.M.T., Shumba, T.M.M.: On inertial type self-adaptive iterative algorithms for solving pseudomonotone equilibrium problems and fixed point problems. J. Nonlinear Funct. Anal. 2021, Article ID 4 (2021)

  42. Reich, S., Sabach, S.: A projection method for solving nonlinear problems in reflexive Banach spaces. J. Fixed Point Theory Appl. 9(1), 101–116 (2011)

  43. Alber, Y.I.: Metric and generalized projection operators in Banach spaces: properties and applications. In: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Lect. Notes Pure Appl. Math., vol. 178, pp. 15–50. Dekker, New York (1996)

  44. Reich, S., Sabach, S.: A strong convergence theorem for a proximal-type algorithm in reflexive Banach space. J. Nonlinear Convex Anal. 10, 471–485 (2009)

Acknowledgements

This work was supported by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (FP-082-43). The authors, therefore, gratefully acknowledge the DSR technical and financial support.

Funding

The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia has funded this project under grant No. (FP-082-43).

Author information

Contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Rehan Ali.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Alansari, M., Farid, M. & Ali, R. An inertial iterative algorithm for generalized equilibrium problems and Bregman relatively nonexpansive mappings in Banach spaces. J Inequal Appl 2022, 11 (2022). https://doi.org/10.1186/s13660-021-02749-x

MSC

  • 47H09
  • 47H05
  • 47J25

Keywords

  • Bregman relatively nonexpansive mapping
  • Fixed point problem
  • Generalized equilibrium problem
  • Inertial hybrid iterative method