# On the existence and approximation of solutions of generalized equilibrium problem on Hadamard manifolds

## Abstract

In this paper, we study the existence of solutions of the generalized equilibrium problem (GEP) in the framework of a Hadamard manifold. Using the KKM lemma, we prove the existence of a solution of the GEP and establish the properties of the resolvent function associated with the problem under consideration. Furthermore, we introduce an iterative algorithm for approximating a common solution of the GEP and a fixed point problem. Using the proposed method, we obtain and prove a strong convergence theorem for approximating a solution of the GEP, which is also a fixed point of a nonexpansive mapping, under some mild conditions. We give an application of our convergence result to a solution of the convex minimization problem. To illustrate the convergence of the method, we report some numerical experiments. The results in this paper extend the study of the GEP from the linear setting to Hadamard manifolds.

## Introduction

Fixed point theory is an area of nonlinear analysis that has been extensively studied by mathematicians. Fixed point theorems, in particular, are applied in proving the existence of solutions of differential equations, integral equations, and optimization problems. Let K be a nonempty, closed, and convex subset of a space X, and let $$T: K \to K$$ be a mapping. The fixed point problem (FPP) seeks a point $$x \in K$$ such that

\begin{aligned} x=Tx. \end{aligned}

We denote by $$\mathrm{Fix}(T)$$ the fixed point set of the mapping T, i.e., $$\mathrm{Fix}(T)=\{x\in K: x=Tx\}$$.
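When T is a contraction, its fixed point can be approximated by simple Picard iteration. The following minimal sketch is our own illustration (not from the paper), using the contraction $$T(x)=\cos x$$ on $$[0,1]$$:

```python
import math

def picard(T, x0, tol=1e-10, max_iter=1000):
    """Picard iteration x_{n+1} = T(x_n); converges when T is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction on [0, 1]; the iteration converges to its unique fixed point.
p = picard(math.cos, 1.0)
```

At the returned point p, the residual $$|\cos p - p|$$ is below the tolerance, i.e., p is an approximate solution of the FPP for $$T=\cos$$.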

Let K be a nonempty, closed, and convex subset of a real Banach space X; then the variational inequality problem (VIP) is to find a point $$x \in K$$ such that

\begin{aligned} \langle Ax,y-x \rangle \geq 0,\quad \forall y \in K, \end{aligned}
(1.1)

where $$A: K \to X^{*}$$ is a nonlinear mapping, and $$X^{*}$$ is the dual space of X. The theory of the VIP was introduced independently by Kinderlehrer and Stampacchia , Stampacchia , and Fichera . The VIP applies in traffic network equilibrium modeling, economic equilibrium modeling, and bimatrix equilibria. The model has also been used in the analysis of piecewise linear resistive circuits, elasticity, and structural analysis [9, 11, 21].

An important generalization of the VIP is the so-called equilibrium problem (EP), which was first introduced by Ky Fan  as a minimax inequality. The term equilibrium problem was coined in a paper by Blum and Oettli , which followed an earlier paper by Muu and Oettli . In the latter paper, some standard examples of the EP were discussed, namely: the optimization problem, the variational inequality problem, and the fixed point problem. The EP provides a unified framework for the study of several optimization problems, such as the saddle point problem, minimax inequality, complementarity problem, and so on. The applications of the EP have been reported in several articles (for more details, see [4, 7, 8, 26]). The EP is given as: find $$x \in K$$ such that

\begin{aligned} F(x,y) \ge 0,\quad \forall y \in K, \end{aligned}
(1.2)

where $$F: K\times K \to \mathbb{R}$$ is a bifunction.

By combining the concepts of the VIP and EP, Takahashi and Takahashi  considered the generalized equilibrium problem (GEP) with the following formulation: Let K be a nonempty, closed, and convex subset of a real Hilbert space H. Let $$F: K \times K \to \mathbb{R}$$ be a bifunction and $$A: K \to H$$ be a nonlinear mapping, then the GEP consists of obtaining a point $$x \in K$$ satisfying

\begin{aligned} F(x,y)+\langle Ax,y-x \rangle \geq 0,\quad \forall y \in K. \end{aligned}
(1.3)

The relationship between the GEP, the EP, and the VIP is obvious: the EP and the VIP are obtained from the GEP by setting $$A \equiv 0$$ and $$F \equiv 0$$, respectively. The motivation for the study of the GEP stems from the fact that many important problems can be modeled as a problem of finding points that solve the GEP. In particular, the GEP is applicable in sensor networks, data compression, robustness to marginal changes and equilibrium stability, etc. (see [2, 3, 15, 18, 30, 37]).

On the other hand, there has been a growing interest in extending some concepts and ideas of nonlinear analysis from Euclidean spaces to Riemannian manifolds. The advantages of doing this are well documented. For example, by choosing a suitable Riemannian metric, optimization problems with nonconvex objective functions can be viewed as convex [20, 31, 41]. Also, from the perspective of Riemannian geometry, constrained optimization problems can be seen as unconstrained [31, 32, 41]. For these reasons, such an extension becomes necessary and natural.

In 1999, Németh  introduced the study of the VIP on a Hadamard manifold as follows: Let M be a Hadamard manifold, TM the tangent bundle of M, K a nonempty, closed, and geodesic convex subset of M, and exp the exponential mapping with inverse $$\exp^{-1}$$. Then the VIP is to seek a point $$x \in K$$ such that

\begin{aligned} \bigl\langle Ax,\exp _{x}^{-1}y \bigr\rangle \geq 0,\quad \forall y \in K, \end{aligned}
(1.4)

where $$A: K \to TM$$ is a single-valued vector field. The author in  extended and generalized some basic existence and uniqueness results for the classical VIP from the Euclidean framework to Hadamard manifolds. Since then, there have been several results on the approximation of solutions of the VIP in this framework (see [19, 23, 25, 27] and the references therein).

In 2012, an existence result for the equilibrium problem was introduced on Riemannian manifolds by Colao et al. . By developing the KKM lemma in this framework, they obtained an existence result for the EP, where the associated bifunction is monotone. Zhou and Huang  also studied the relationship between the vector variational inequality and the vector optimization problem on a Hadamard manifold. Other existence results in this direction include that of Tang et al.  for a class of hemivariational inequality problems and that of Salahudin  for the equilibrium problem.

Following Takahashi and Takahashi , we extend the study of the generalized equilibrium problem to the framework of a Hadamard manifold under the following formulation: Let K be a nonempty, closed, and geodesic convex subset of a Hadamard manifold M. Let $$F: K \times K \to \mathbb{R}$$ be a bifunction and $$A: K \to TM$$ be a single-valued vector field, then the GEP consists of finding a point $$x \in K$$ such that

\begin{aligned} F(x,y)+\bigl\langle Ax,\exp _{x}^{-1}y \bigr\rangle \geq 0, \quad\forall y \in K, \end{aligned}
(1.5)

where $$\exp^{-1}$$ is the inverse of the exponential mapping $$\exp:TM \to M$$, with TM the tangent bundle of M. This class of problems is more general since it includes the variational inequality and equilibrium problems as special cases.

Furthermore, the iterative approximation of solutions of various optimization problems is another interesting area of research in nonlinear analysis, especially in the twin concepts of fixed point and optimization theory. For extensive literature on iterative methods for solving variational inequalities, see [16, 17, 20, 22, 25, 29] and the references therein. Also, for methods of approximating a solution of the EP in both linear and nonlinear spaces, see [1, 20, 28, 40]. We refer the reader to [18, 30] for methods of solving the GEP in Hilbert and Banach spaces.

In this paper, our motivation is twofold: first, we study the existence of a solution of the generalized equilibrium problem, thus extending the work of Takahashi and Takahashi  from the linear setting to the framework of a Hadamard manifold. Second, we introduce an effective algorithm for approximating a common solution of the GEP and a fixed point problem for a nonexpansive mapping in a Hadamard manifold. Using the proposed method, we obtain a strong convergence result under some mild conditions and monotonicity of the bifunction.

The rest of the paper is organized as follows: We recall some geometry of the Hadamard manifold and give some useful results and important definitions in Sect. 2. In Sect. 3, we establish the existence result for the GEP and prove the uniqueness of the resolvent operator of the associated function and bifunction. In Sect. 4, we propose a strong convergence algorithm for approximating a common solution of the GEP and FPP and discuss the convergence analysis of the method. In Sect. 5, we give an application of one of our results to an optimization problem. We report some numerical experiments in Sect. 6 and give a conclusion of the work in Sect. 7.

## Preliminaries

Let M be an m-dimensional manifold and $$x \in M$$; let $$T_{x} M$$ be the tangent space of M at x. We denote by $$TM=\bigcup_{x \in M}T_{x} M$$ the tangent bundle of M. A smoothly varying family of inner products $$\mathcal{R}_{x}\langle \cdot, \cdot \rangle$$ on the tangent spaces $$T_{x} M$$ is called a Riemannian metric on M. The norm corresponding to the inner product $$\mathcal{R}_{x} \langle \cdot,\cdot \rangle$$ on $$T_{x} M$$ is denoted by $$\|\cdot \|_{x}$$. In this paper, we will adopt $$\|\cdot \|$$ for the norm and drop the subscript x. A differentiable manifold M endowed with a Riemannian metric $$\mathcal{R}\langle \cdot,\cdot \rangle$$ is called a Riemannian manifold. In what follows, we denote the Riemannian metric $$\mathcal{R} \langle \cdot,\cdot \rangle$$ by $$\langle \cdot,\cdot \rangle$$ when no confusion arises. Given a piecewise smooth curve $$\gamma:[a,b] \to M$$ joining x to y (i.e., $$\gamma (a)=x$$ and $$\gamma (b)=y$$), we define the length $$l(\gamma )$$ of γ by $$l(\gamma )=\int _{a}^{b}\|\gamma ^{\prime}(t)\|\,dt$$. The Riemannian distance $$d(x,y)$$ is the minimal length over the set of all such curves joining x to y; the induced metric topology coincides with the original topology on M. Let $$\nabla$$ be the Levi–Civita connection associated with the Riemannian metric. Let γ be a smooth curve in M. A vector field X along γ is said to be parallel if $$\nabla _{\gamma ^{\prime}}X={\mathbf{0}}$$, where 0 is the zero tangent vector. If $$\gamma ^{\prime}$$ itself is parallel along γ, we say that γ is a geodesic; in this case, $$\|\gamma ^{\prime}\|$$ is constant. When $$\|\gamma ^{\prime}\|=1$$, γ is said to be normalized. A geodesic γ joining x to y in M is said to be minimal if $$l(\gamma )=d(x,y)$$. A Riemannian manifold M equipped with the Riemannian distance d is a metric space $$(M,d)$$. A Riemannian manifold M is said to be complete if for all $$x \in M$$, all geodesics emanating from x are defined for all $$t \in \mathbb{R}$$.
The Hopf–Rinow theorem states that if M is complete, then any pair of points in M can be joined by a minimizing geodesic. Moreover, if $$(M,d)$$ is a complete metric space, then every closed and bounded subset of M is compact. Also, if M is a complete Riemannian manifold, then the exponential map $$\exp _{x}: T_{x} M \to M$$ at $$x \in M$$ is defined by

\begin{aligned} \exp _{x}v=\gamma _{v}(1,x), \quad\forall v \in T_{x} M, \end{aligned}

where $$\gamma _{v}(\cdot,x)$$ is the geodesic starting from x with velocity v (i.e., $$\gamma _{v}(0,x)=x$$ and $$\gamma _{v}^{\prime}(0,x)=v$$). Then, for any t, we have $$\exp _{x}tv=\gamma _{v}(t,x)$$ and $$\exp _{x}{\mathbf{0}}=\gamma _{v}(0,x)=x$$. Note that the mapping $$\exp _{x}$$ is differentiable on $$T_{x} M$$ for every $$x \in M$$. The exponential map has a differentiable inverse $$\exp _{x}^{-1}: M \to T_{x} M$$. For any $$x,y \in M$$, we have $$d(x,y)=\|\exp _{y}^{-1}x\|=\|\exp _{x}^{-1}y\|$$ (see  for more details). A subset $$K \subset M$$ is said to be convex if for any two points $$x,y \in K$$, the geodesic γ joining x to y is contained in K, i.e., if $$\gamma:[a,b] \to M$$ is a geodesic such that $$x=\gamma (a)$$ and $$y=\gamma (b)$$, then $$\gamma ((1-t)a+tb) \in K$$ for all $$t \in [0,1]$$. Throughout the sequel, unless otherwise stated, we denote by K a nonempty, closed, and convex subset of M. A complete, simply connected Riemannian manifold of nonpositive sectional curvature is called a Hadamard manifold. From now on, we denote by M a finite-dimensional Hadamard manifold with constant sectional curvature $$\kappa \leq 0$$.
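As a concrete illustration of these maps (a toy example of ours, not from the paper), consider the one-dimensional manifold $$M=(0,\infty )$$ with the metric $$\langle u,v \rangle _{x}=uv/x^{2}$$; it is flat (isometric to $$\mathbb{R}$$ via $$x \mapsto \log x$$), hence a Hadamard manifold, and its exponential map, inverse, and distance have closed forms:

```python
import math

# Toy one-dimensional Hadamard manifold: M = (0, inf) with metric <u, v>_x = u*v / x**2.
# It is flat (isometric to R via x -> log x), hence trivially Hadamard.

def exp_map(x, v):
    """exp_x(v) = x * e^(v/x): point reached from x along the geodesic with velocity v."""
    return x * math.exp(v / x)

def log_map(x, y):
    """exp_x^{-1}(y) = x * log(y/x): initial velocity of the geodesic from x to y."""
    return x * math.log(y / x)

def norm(x, v):
    """Norm of the tangent vector v at x induced by the metric."""
    return abs(v) / x

def dist(x, y):
    """Riemannian distance d(x, y) = |log(y/x)|."""
    return abs(math.log(y / x))

# As in the text: d(x, y) = ||exp_x^{-1} y||, and exp_x inverts log_map.
x, y = 2.0, 5.0
gap = dist(x, y) - norm(x, log_map(x, y))   # equals 0 up to rounding
```

The identity $$d(x,y)=\|\exp _{x}^{-1}y\|$$ from the text can thus be checked numerically in this model.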

We now present some valuable results and definitions that will be helpful in the convergence analysis of our main result.

### Proposition 2.1

()

Let $$x \in M$$. The exponential mapping $$\exp _{x}: T_{x} M \to M$$ is a diffeomorphism, and for any two points $$x,y \in M$$, there exists a unique normalized geodesic joining x to y, which is expressed by the formula

\begin{aligned} \gamma (t)=\exp _{x}t\exp _{x}^{-1}y,\quad \forall t \in [0,1]. \end{aligned}

A geodesic triangle $$\Delta (x,y,z)$$ of a Riemannian manifold M is a set containing three points x, y, z and three minimizing geodesics joining these points.

### Proposition 2.2

()

Let $$\Delta (x,y,z)$$ be a geodesic triangle in M. Then,

\begin{aligned} d^{2}(x,y)+d^{2}(y,z)-2\bigl\langle \exp _{y}^{-1}x,\exp _{y}^{-1}z \bigr\rangle \leq d^{2}(z,x) \end{aligned}
(2.1)

and

\begin{aligned} d^{2}(x,y) \leq \bigl\langle \exp _{x}^{-1}z, \exp _{x}^{-1}y \bigr\rangle + \bigl\langle \exp _{y}^{-1}z,\exp _{y}^{-1}x \bigr\rangle . \end{aligned}
(2.2)

Moreover, if α is the angle at x, then we have

\begin{aligned} \bigl\langle \exp _{x}^{-1}y,\exp _{x}^{-1}z \bigr\rangle =d(y,x)d(x,z)\cos \alpha. \end{aligned}

Also,

\begin{aligned} \bigl\Vert \exp _{x}^{-1}y \bigr\Vert ^{2}= \bigl\langle \exp _{x}^{-1}y,\exp _{x}^{-1}y \bigr\rangle =d^{2}(x,y). \end{aligned}

For any $$p \in M$$ and $$K \subset M$$, there exists a unique point $$q \in K$$ such that $$d(p,q) \leq d(p,r)$$ for all $$r \in K$$. The point q is called the projection of p onto the convex set K and is denoted $$P_{K}(p)$$.

### Lemma 2.3

()

For any $$p \in M$$, there exists a unique projection $$q=P_{K}(p)$$. Furthermore, the following inequality holds:

\begin{aligned} \bigl\langle \exp _{q}^{-1}p,\exp _{q}^{-1}r \bigr\rangle \leq 0,\quad \forall r \in K. \end{aligned}
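In the one-dimensional toy model $$M=(0,\infty )$$ with metric $$\langle u,v \rangle _{x}=uv/x^{2}$$ (an illustration of ours, not from the paper), the projection onto a geodesically convex set $$K=[a,b]$$ reduces to clamping in log-coordinates, and the inequality of Lemma 2.3 can be checked numerically:

```python
import math

# Projection onto K = [a, b] (a geodesically convex set) in the toy Hadamard
# manifold M = (0, inf) with metric <u, v>_x = u*v / x**2: in log-coordinates the
# manifold is just R, so P_K amounts to clamping log(p) to [log(a), log(b)].

def log_map(x, y):
    """exp_x^{-1}(y) = x * log(y/x) in this model."""
    return x * math.log(y / x)

def inner(x, u, v):
    """Riemannian inner product of tangent vectors u, v at the point x."""
    return u * v / x ** 2

def project(p, a, b):
    """Metric projection P_K(p) for K = [a, b]."""
    return math.exp(min(max(math.log(p), math.log(a)), math.log(b)))

# Verify the variational characterization of Lemma 2.3 at sample points r in K.
p, a, b = 0.5, 1.0, 3.0
q = project(p, a, b)
ok = all(inner(q, log_map(q, p), log_map(q, r)) <= 1e-12
         for r in [1.0, 1.5, 2.0, 2.5, 3.0])
```

Here p lies to the left of K, so $$q=a$$; the tangent vector toward p and the tangent vectors toward points of K then point in opposite directions, making the inner product nonpositive, exactly as the lemma asserts.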

A mapping $$f:M \to M$$ is said to be a ψ-contraction if

\begin{aligned} d\bigl(f(x),f(y)\bigr) \leq \psi \bigl( d(x,y)\bigr),\quad \forall x,y \in M, \end{aligned}

where $$\psi: [0,+\infty ) \to [0,+\infty )$$ is a function satisfying:

1. (i)

$$\psi (s) < s$$ for all $$s >0$$,

2. (ii)

ψ is continuous.

For more properties of this class of mappings, we refer the reader to . Let K be a nonempty, closed, and convex subset of a Hadamard manifold M. A mapping $$T: K \to K$$ is called firmly nonexpansive (see ) if for any $$x,y \in K$$, the function $$\Phi: [0,1] \to [0,\infty )$$ defined by

\begin{aligned} \Phi (t)=d\bigl(\gamma _{1}(t),\gamma _{2}(t) \bigr) \end{aligned}

is nonincreasing, where $$\gamma _{1}$$ and $$\gamma _{2}$$ are the geodesics joining x to Tx and y to Ty, respectively. From this definition, it is easy to see that every firmly nonexpansive mapping T is nonexpansive, that is, for any $$x,y \in K$$,

\begin{aligned} d(Tx,Ty) \leq d(x,y). \end{aligned}

The following result was proved in :

### Proposition 2.4

A mapping $$T:K \to K$$ is firmly nonexpansive if and only if for any $$x,y \in K$$

\begin{aligned} \bigl\langle \exp _{Tx}^{-1}Ty,\exp _{Tx}^{-1}x \bigr\rangle +\bigl\langle \exp _{Ty}^{-1}Tx, \exp _{Ty}^{-1}y\bigr\rangle \leq 0. \end{aligned}

The following lemma establishes the relation between geodesic triangles in Riemannian manifolds and triangles in $$\mathbb{R}^{2}$$ (see ).

### Lemma 2.5

()

Let $$\Delta (x_{1},x_{2},x_{3})$$ be a geodesic triangle in M. Then, there exists a comparison triangle $$\Delta (\bar{x}_{1},\bar{x}_{2},\bar{x}_{3})$$ in $$\mathbb{R}^{2}$$ for $$\Delta (x_{1},x_{2},x_{3})$$ such that $$d(x_{i},x_{i+1})=\|\bar{x}_{i}-\bar{x}_{i+1}\|$$, with the indices taken modulo 3. This triangle is unique up to isometry of $$\mathbb{R}^{2}$$.

The triangle $$\Delta (\bar{x}_{1},\bar{x}_{2},\bar{x}_{3})$$ in Lemma 2.5 is said to be the comparison triangle for $$\Delta (x_{1},x_{2},x_{3}) \subset M$$. The points $$\bar{x}_{1}$$, $$\bar{x}_{2}$$ and $$\bar{x}_{3}$$ are called comparison points of the points $$x_{1},x_{2}$$, and $$x_{3}$$ in M.

A function $$h: M \to \mathbb{R}$$ is called geodesically convex if for any geodesic γ in M, the composition $$h \circ \gamma: [a,b] \to \mathbb{R}$$ is convex, that is,

\begin{aligned} h \circ \gamma \bigl(ta+(1-t)b\bigr) \leq t h \circ \gamma (a)+(1-t)h\circ \gamma (b),\quad a,b \in \mathbb{R}, \forall t \in [0,1]. \end{aligned}

The subdifferential of a function $$h: M \to \mathbb{R}$$ at the point $$x \in M$$ is given by

\begin{aligned} \partial h(x):=\bigl\{ u \in T_{x} M: h(y) \geq h(x)+\bigl\langle u, \exp _{x}^{-1}y \bigr\rangle , \forall y \in M \bigr\} . \end{aligned}

The elements of $$\partial h(x)$$ are called the subgradients of h at x. The set $$\partial h(x)$$ is a closed convex set, and it is known to be nonempty if h is convex on M.

### Lemma 2.6

()

Let $$x_{0} \in M$$ and $$\{x_{n}\} \subset M$$ such that $$x_{n} \to x_{0}$$. Then, for any $$y \in M$$, we have $$\exp _{x_{n}}^{-1}y \to \exp _{x_{0}}^{-1}y$$ and $$\exp _{y}^{-1}x_{n} \to \exp _{y}^{-1}x_{0}$$.

### Proposition 2.7

()

Let M be a Hadamard manifold and $$d: M \times M \to \mathbb{R}$$ the distance function. Then, the function d is convex with respect to the product Riemannian metric. In other words, given any pair of geodesics $$\gamma _{1}: [0,1] \to M$$ and $$\gamma _{2}: [0,1] \to M$$, for all $$t \in [0,1]$$,

\begin{aligned} d\bigl(\gamma _{1}(t),\gamma _{2}(t)\bigr) \leq (1-t)d \bigl(\gamma _{1}(0),\gamma _{2}(0)\bigr)+td\bigl( \gamma _{1}(1),\gamma _{2}(1)\bigr). \end{aligned}

In particular, for each $$y \in M$$, the function $$d(\cdot,y): M \to \mathbb{R}$$ is a convex function.

The following proposition, given as Lemma 3.1 in , plays a key role in the proof of the existence of a solution of the GEP (1.5).

### Proposition 2.8

(KKM Principle). Let K be a nonempty, closed, and convex subset of a Hadamard manifold M. Let $$G: K \to 2^{K}$$ be a mapping such that for each $$x \in K$$, $$G(x)$$ is closed. Suppose that

1. (i)

there exists $$x_{0} \in K$$ such that $$G(x_{0})$$ is compact,

2. (ii)

for all $$\{x_{i}\}_{i=1}^{m} \subset K$$, $$\operatorname{conv}(\{x_{i}\}_{i=1}^{m}) \subset \bigcup_{i=1}^{m}G(x_{i})$$.

Then, $$\bigcap_{x \in K}G(x) \neq \emptyset$$.

Let K be a nonempty, closed, and convex subset of a Hadamard manifold M and $$\mathcal{X}(K)$$ denote the set of all single valued vector fields $$A: K \to TM$$ such that $$Ax \in T_{x}M$$ for every $$x \in K$$. Then, a vector field $$A \in \mathcal{X}(K)$$ is called monotone if

\begin{aligned} \bigl\langle Ax,\exp _{x}^{-1}y \bigr\rangle + \bigl\langle Ay,\exp _{y}^{-1}x \bigr\rangle \leq 0, \quad \forall x,y \in K. \end{aligned}

### Proposition 2.9

()

Let M be a Hadamard manifold of constant curvature. Given $$x \in M$$ and $$z \in T_{x} M$$, the set $$L_{x,z}=\{y\in M: \langle z,\exp _{x}^{-1}y \rangle < 0 \}$$ is convex.

### Lemma 2.10

()

Let $$p \in M$$, $$x,y \in T_{p} M$$, and $$\lambda \in [0,1]$$. Then, the following properties hold.

1. (i)

$$\|\lambda x+(1-\lambda )y\|^{2}=\lambda \|x\|^{2}+(1-\lambda )\|y\|^{2}- \lambda (1-\lambda )\|x-y\|^{2}$$;

2. (ii)

$$\|x\pm y\|^{2}=\|x\|^{2}\pm 2\langle x,y \rangle +\|y\|^{2}$$;

3. (iii)

$$\|x+y\|^{2} \leq \|x\|^{2} +2\langle y,x+y \rangle$$.

### Lemma 2.11

()

Let $$\{a_{n}\}$$ be a sequence of nonnegative real numbers, $$\{\alpha _{n} \}$$ be a sequence of real numbers in $$(0,1)$$ such that $$\sum_{n =1}^{\infty}\alpha _{n}=\infty$$ and $$\{b_{n}\}$$ be a sequence of real numbers. Assume that

\begin{aligned} a_{n+1} \leq (1-\alpha _{n})a_{n}+\alpha _{n} b_{n},\quad \forall n \geq 1. \end{aligned}

If $$\limsup_{k \to \infty} b_{n_{k}}\leq 0$$ for every subsequence $$\{a_{n_{k}}\}$$ of $$\{a_{n}\}$$ satisfying the condition

\begin{aligned} \liminf_{k \to \infty}(a_{n_{k} +1}-a_{n_{k}}) \geq 0, \end{aligned}

then $$\lim_{n \to \infty}a_{n}=0$$.
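A quick numerical illustration of Lemma 2.11 (ours, for intuition only): taking $$\alpha _{n}=1/(n+1)$$, so that $$\sum \alpha _{n}=\infty$$, and $$b_{n}=1/(n+1) \to 0$$, the recursion drives $$a_{n}$$ to zero:

```python
# Numerical illustration of Lemma 2.11: with alpha_n = 1/(n+1) (so that the sum of
# alpha_n diverges) and b_n = 1/(n+1) -> 0, the recursion forces a_n -> 0.

def run(a0, n_steps):
    a = a0
    for n in range(1, n_steps + 1):
        alpha = 1.0 / (n + 1)
        b = 1.0 / (n + 1)                 # limsup b_n <= 0 holds since b_n -> 0
        a = (1 - alpha) * a + alpha * b   # equality is the worst case of the inequality
    return a

a_final = run(5.0, 10000)   # decays toward 0 (roughly O(log(n)/n) for these choices)
```

After 10,000 steps the iterate is already well below $$10^{-2}$$, consistent with the conclusion of the lemma.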

## Existence result

In this section, we prove the existence of a solution of the GEP (1.5). The following remark will be used in the proof of the first theorem.

### Remark 3.1

Let K be a nonempty, closed, and convex subset of a Hadamard manifold M. For all $$x \in K$$, let $$F(x,\cdot )$$ be convex; then the set $$\mathcal{L}_{x,z}:= \{y \in K: F(x,y)+\langle z,\exp _{x}^{-1}y \rangle < 0\}$$, where $$z \in T_{x} M$$, is convex. Indeed, by the convexity of $$F(x,\cdot )$$ and Proposition 2.9, $$\mathcal{L}_{x,z}$$ is the strict sublevel set of the sum of two convex functions and is therefore convex.

### Theorem 3.2

Let K be a nonempty, closed, and convex subset of a Hadamard manifold M. Let $$A: K \to TM$$ be a single-valued monotone vector field and $$F: K \times K \to \mathbb{R}$$ be a bifunction such that $$F(x,x)=0$$ for all $$x \in K$$, satisfying

1. (A1)

F is monotone. That is, $$F(x,y)+F(y,x) \leq 0$$, for all $$x,y \in K$$.

2. (A2)

For all $$x \in K$$, $$F(x,\cdot )$$ is convex.

3. (A3)

There exists a compact subset $$D \subset K$$ containing a point $$u_{0} \in D$$ such that $$F(x,u_{0})+\langle Ax,\exp _{x}^{-1}u_{0} \rangle < 0$$ whenever $$x \in K \setminus D$$.

Then, the generalized equilibrium problem (1.5) is solvable.

### Proof

For each $$y \in K$$, let $$G: K \to 2^{K}$$ be a multivalued mapping defined by $$G(y)=\{x \in K: F(x,y)+\langle Ax,\exp _{x}^{-1}y\rangle \ge 0 \}$$. Since $$F(y,y)=0$$ for all $$y \in K$$, we have $$y \in G(y)$$, so that $$G(y) \neq \emptyset$$. We obtain by (A2) that $$G(y)$$ is closed in K for all $$y \in K$$. Next, we claim that G is a KKM mapping. Assume the contrary; then there exists $$\bar{y} \in K$$ such that $$\bar{y} \in \operatorname{conv}(\{y_{i}\}_{i=1}^{n})$$ but $$\bar{y} \notin \bigcup_{i=1}^{n}G(y_{i})$$. That is,

\begin{aligned} F(\bar{y},y_{i})+\bigl\langle A\bar{y}, \exp _{\bar{y}}^{-1}y_{i} \bigr\rangle < 0, \quad\forall i=1,2, \ldots, n. \end{aligned}

This implies that for any $$i\in I=\{1,2, \ldots,n\}$$, $$y_{i} \in \{y \in K: F(\bar{y},y)+\langle A \bar{y},\exp _{\bar{y}}^{-1}y \rangle <0 \}$$, which is convex by Remark 3.1. Thus,

\begin{aligned} \bar{y} \in \operatorname{conv}\bigl(\{y_{i}\}_{i=1}^{n}\bigr) \subset \bigl\{ y \in K: F(\bar{y},y)+ \bigl\langle A\bar{y}, \exp _{\bar{y}}^{-1}y \bigr\rangle < 0\bigr\} . \end{aligned}

That is

\begin{aligned} 0=F(\bar{y},\bar{y})+\bigl\langle A\bar{y},\exp _{\bar{y}}^{-1} \bar{y} \bigr\rangle < 0, \end{aligned}

which is a contradiction. Hence, G is a KKM mapping.

Now from (A3), there exists a compact subset D of K with $$u_{0} \in D$$ such that for any $$x \in K \setminus D$$, we have

\begin{aligned} F(x,u_{0})+\bigl\langle Ax,\exp _{x}^{-1}u_{0} \bigr\rangle < 0, \end{aligned}

implying

\begin{aligned} G(u_{0})=\bigl\{ x \in K: F(x,u_{0})+\bigl\langle Ax,\exp _{x}^{-1}u_{0} \bigr\rangle \geq 0 \bigr\} \subset D. \end{aligned}

Thus, $$G(u_{0})$$ is compact. It follows from Proposition 2.8 that $$\bigcap_{y \in K}G(y) \neq \emptyset$$. This implies that there exists $$x^{*} \in K$$ such that

\begin{aligned} F\bigl(x^{*},y\bigr)+\bigl\langle Ax^{*},\exp _{x^{*}}^{-1}y \bigr\rangle \geq 0, \quad\forall y \in K. \end{aligned}

That is, the GEP (1.5) is solvable. □

### Lemma 3.3

Let K be a nonempty, closed, and convex subset of a Hadamard manifold M. Let $$F: K \times K \to \mathbb{R}$$ be a bifunction satisfying assumptions (A1)–(A3), and let $$A: K \to TM$$ be a monotone vector field. For $$r > 0$$, define a set-valued mapping $$T_{r}^{F,A}: K \to 2^{K}$$ by

\begin{aligned} T_{r}^{F,A}(x)=\biggl\{ z \in K: F(z,y)+\bigl\langle Az,\exp _{z}^{-1}y \bigr\rangle - \frac{1}{r}\bigl\langle \exp _{z}^{-1}x,\exp _{z}^{-1}y \bigr\rangle \geq 0, \forall y \in K\biggr\} \end{aligned}

for all $$x \in K$$. Then, the following hold:

1. (i)

$$T_{r}^{F,A}$$ is single-valued;

2. (ii)

$$T_{r}^{F,A}$$ is firmly nonexpansive;

3. (iii)

$$\mathrm{Fix}(T_{r}^{F,A})=\mathrm{GEP}(F,A)$$;

4. (iv)

$$\mathrm{GEP}(F,A)$$ is closed and convex;

5. (v)

Let $$0 < r \leq s$$, then for all $$x \in K$$,

\begin{aligned} d\bigl(x,T_{r}^{F,A}x\bigr) \leq 2d\bigl(x,T_{s}^{F,A}x \bigr) \end{aligned}
6. (vi)

For all $$x \in K$$ and $$p \in \mathrm{Fix}(T_{r}^{F,A})$$,

\begin{aligned} d^{2}\bigl(p,T_{r}^{F,A}x\bigr)+d^{2} \bigl(x,T_{r}^{F,A}x\bigr)\leq d^{2}(p,x). \end{aligned}

### Proof

(i) Let $$x \in K$$ and $$z_{1},z_{2} \in T_{r}^{F,A}x$$. Then

\begin{aligned} F(z_{1},z_{2})+\bigl\langle Az_{1},\exp _{z_{1}}^{-1}z_{2} \bigr\rangle - \frac{1}{r} \bigl\langle \exp _{z_{1}}^{-1}x,\exp _{z_{1}}^{-1}z_{2} \bigr\rangle \geq 0 \end{aligned}

and

\begin{aligned} F(z_{2},z_{1})+\bigl\langle Az_{2},\exp _{z_{2}}^{-1}z_{1} \bigr\rangle - \frac{1}{r} \bigl\langle \exp _{z_{2}}^{-1}x,\exp _{z_{2}}^{-1}z_{1} \bigr\rangle \geq 0. \end{aligned}

Adding both inequalities and using (A1), we obtain

\begin{aligned} \frac{1}{r} \bigl\langle \exp _{z_{1}}^{-1}x,\exp _{z_{1}}^{-1}z_{2} \bigr\rangle +\frac{1}{r}\bigl\langle \exp _{z_{2}}^{-1}x,\exp _{z_{2}}^{-1}z_{1} \bigr\rangle \leq 0, \end{aligned}

Multiplying through by $$r > 0$$ and applying (2.2), we have

\begin{aligned} d^{2}(z_{1},z_{2}) \leq \bigl\langle \exp _{z_{1}}^{-1}x, \exp _{z_{1}}^{-1}z_{2} \bigr\rangle +\bigl\langle \exp _{z_{2}}^{-1}x,\exp _{z_{2}}^{-1}z_{1} \bigr\rangle \leq 0. \end{aligned}

Hence, $$d(z_{1},z_{2})=0$$, that is, $$z_{1}=z_{2}$$. Therefore, $$T_{r}^{F,A}$$ is single-valued.

(ii) We show that $$T_{r}^{F,A}$$ is firmly nonexpansive. Choose $$z_{1}$$ and $$z_{2}$$ in K such that for $$x,y \in K$$, we have $$z_{1}=T_{r}^{F,A}x$$ and $$z_{2}=T_{r}^{F,A}y$$. Then,

\begin{aligned} F(z_{1},z_{2})+\bigl\langle Az_{1},\exp _{z_{1}}^{-1}z_{2}\bigr\rangle - \frac{1}{r}\bigl\langle \exp _{z_{1}}^{-1}x,\exp _{z_{1}}^{-1}z_{2} \bigr\rangle \geq0 \end{aligned}

and

\begin{aligned} F(z_{2},z_{1})+\bigl\langle Az_{2},\exp _{z_{2}}^{-1}z_{1} \bigr\rangle - \frac{1}{r} \bigl\langle \exp _{z_{2}}^{-1}y,\exp _{z_{2}}^{-1}z_{1} \bigr\rangle \geq 0. \end{aligned}

Adding both inequalities and using the fact that A and F are monotone, we obtain

\begin{aligned} &\bigl\langle \exp _{z_{1}}^{-1}x,\exp _{z_{1}}^{-1}z_{2} \bigr\rangle +\bigl\langle \exp _{z_{2}}^{-1}y,\exp _{z_{2}}^{-1}z_{1} \bigr\rangle \\ &\quad \leq r \bigl(F(z_{1},z_{2})+F(z_{2},z_{1})+ \bigl\langle Az_{1},\exp _{z_{1}}^{-1}z_{2} \bigr\rangle +\bigl\langle Az_{2},\exp _{z_{2}}^{-1}z_{1} \bigr\rangle \bigr) \leq 0. \end{aligned}

Thus, by Proposition 2.4, $$T_{r}^{F,A}$$ is firmly nonexpansive.

(iii) Observe that

\begin{aligned} x \in {\mathrm{Fix}}\bigl(T_{r}^{F,A}\bigr)\quad &\iff\quad x=T_{r}^{F,A}x \\ &\iff\quad F(x,y)+\bigl\langle Ax,\exp _{x}^{-1}y\bigr\rangle - \frac{1}{r}\bigl\langle \exp _{x}^{-1}x,\exp _{x}^{-1}y \bigr\rangle \geq 0\quad \forall y \in K \\ &\iff\quad F(x,y)+\bigl\langle Ax,\exp _{x}^{-1}y\bigr\rangle \geq 0 \quad\forall y \in K \\ &\iff\quad x \in {\mathrm{GEP}(F,A)} \end{aligned}

(iv) It is known that firmly nonexpansive mappings are nonexpansive, and that the fixed point set of a nonexpansive mapping is closed and convex. Thus, (iv) follows from (ii) and (iii).

(v) Fix $$x \in K$$ and let $$0 < r \leq s$$. Let $$z_{1}=T_{r}^{F,A}x$$ and $$z_{2}=T_{s}^{F,A}x$$; then

\begin{aligned} F(z_{1},z_{2})+\bigl\langle Az_{1},\exp _{z_{1}}^{-1}z_{2} \bigr\rangle - \frac{1}{r} \bigl\langle \exp _{z_{1}}^{-1}x,\exp _{z_{1}}^{-1}z_{2} \bigr\rangle \geq 0 \end{aligned}

and

\begin{aligned} F(z_{2},z_{1})+\bigl\langle Az_{2},\exp _{z_{2}}^{-1}z_{1}\bigr\rangle - \frac{1}{s} \bigl\langle \exp _{z_{2}}^{-1}x,\exp _{z_{2}}^{-1}z_{1} \bigr\rangle \ge 0. \end{aligned}

Adding both inequalities and using the monotonicity of A and F, we obtain

\begin{aligned} \frac{1}{r}\bigl\langle \exp _{z_{1}}^{-1}x,\exp _{z_{1}}^{-1}z_{2} \bigr\rangle +\frac{1}{s}\bigl\langle \exp _{z_{2}}^{-1}x,\exp _{z_{2}}^{-1}z_{1} \bigr\rangle \leq 0, \end{aligned}

which implies by (2.1) that

\begin{aligned} \frac{1}{r}\bigl(d^{2}(x,z_{1})+d^{2}(z_{2},z_{1})-d^{2}(x,z_{2}) \bigr)+ \frac{1}{s}\bigl(d^{2}(x,z_{2})+d^{2}(z_{1},z_{2})-d^{2}(x,z_{1}) \bigr)\leq 0. \end{aligned}

Thus,

\begin{aligned} d^{2}(x,z_{1})+d^{2}(z_{2},z_{1})-d^{2}(x,z_{2}) \leq \frac{r}{s}\bigl(d^{2}(x,z_{1})-d^{2}(x,z_{2})-d^{2}(z_{1},z_{2}) \bigr) \end{aligned}

implying

\begin{aligned} \biggl(1+\frac{r}{s} \biggr)d^{2}(z_{1},z_{2}) \leq \biggl(1- \frac{r}{s} \biggr)d^{2}(x,z_{2})+ \biggl(\frac{r}{s}-1 \biggr)d^{2}(x,z_{1}), \end{aligned}

that is,

\begin{aligned} \biggl(1+\frac{r}{s} \biggr)d^{2}\bigl(T_{s}^{F,A}x,T_{r}^{F,A}x \bigr)\leq \biggl(1-\frac{r}{s} \biggr)d^{2}\bigl(x,T_{s}^{F,A}x \bigr)+ \biggl(\frac{r}{s}-1 \biggr)d^{2}\bigl(x,T_{r}^{F,A}x \bigr). \end{aligned}

Since $$r \leq s$$, we obtain

\begin{aligned} \biggl(1+\frac{r}{s} \biggr)d^{2}\bigl(T_{s}^{F,A}x,T_{r}^{F,A}x \bigr)\leq \biggl(1-\frac{r}{s} \biggr)d^{2}\bigl(x,T_{s}^{F,A}x \bigr). \end{aligned}

Hence,

\begin{aligned} d\bigl(T_{s}^{F,A}x,T_{r}^{F,A}x\bigr) \leq \sqrt{ \biggl(1-\frac{r}{s} \biggr)}d\bigl(x,T_{s}^{F,A}x \bigr). \end{aligned}

Finally, we obtain by the triangle inequality that

\begin{aligned} d\bigl(x,T_{r}^{F,A}x\bigr) &\leq d\bigl(T_{r}^{F,A}x,T_{s}^{F,A}x \bigr)+d\bigl(T_{s}^{F,A}x,x\bigr) \\ &\leq 2d\bigl(x,T_{s}^{F,A}x\bigr). \end{aligned}
(3.1)

(vi) For $$x,y \in K$$, let $$z_{1}=T_{r}^{F,A}x$$ and $$z_{2}=T_{r}^{F,A}y$$, then by (ii), we have

\begin{aligned} \bigl\langle \exp _{z_{1}}^{-1}z_{2},\exp _{z_{1}}^{-1}x\bigr\rangle +\bigl\langle \exp _{z_{2}}^{-1}z_{1},\exp _{z_{2}}^{-1}y \bigr\rangle \leq 0. \end{aligned}

It implies by Proposition 2.2 that

\begin{aligned} &\frac{1}{2}\bigl[d^{2}(z_{1},z_{2})+d^{2}(x,z_{1})-d^{2}(x,z_{2}) \bigr]+ \frac{1}{2}\bigl[d^{2}(z_{1},z_{2})+d^{2}(y,z_{2})-d^{2}(y,z_{1}) \bigr] \leq 0 \\ &\quad\implies\quad d^{2}(z_{1},z_{2})+ \frac{1}{2}d^{2}(x,z_{1})+\frac{1}{2}d^{2}(y,z_{2}) \leq \frac{1}{2}d^{2}(x,z_{2})+\frac{1}{2}d^{2}(y,z_{1}). \end{aligned}

Now suppose $$y=p \in \mathrm{Fix}(T_{r}^{F,A})$$, then $$p=T_{r}^{F,A}p=z_{2}$$. The above inequality thus becomes

\begin{aligned} d^{2}\bigl(p,T_{r}^{F,A}x\bigr)+d^{2} \bigl(x,T_{r}^{F,A}x\bigr) \leq d^{2}(p,x). \end{aligned}
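To make the resolvent concrete, consider the flat Hadamard manifold $$M=\mathbb{R}$$, where $$\exp _{x}v=x+v$$ and $$\exp _{x}^{-1}y=y-x$$. With the monotone bifunction $$F(x,y)=y^{2}-x^{2}$$ and $$A \equiv 0$$ (a worked example of ours, not from the paper), the resolvent has the closed form $$T_{r}^{F,A}x=x/(1+2r)$$, which can be checked against the defining inequality:

```python
# Flat case M = R: exp_x(v) = x + v and exp_x^{-1}(y) = y - x, so the resolvent
# condition of Lemma 3.3 reads  F(z, y) - (1/r)(x - z)(y - z) >= 0  for all y.
# For F(x, y) = y**2 - x**2 (monotone, F(x, x) = 0) and A = 0, the unique solution
# is z = x / (1 + 2r): the left-hand side then simplifies to (y - z)**2 >= 0.

def resolvent(x, r):
    """Closed form of T_r^{F,A} x for F(x, y) = y^2 - x^2 and A = 0 on R."""
    return x / (1.0 + 2.0 * r)

def defining_lhs(x, z, y, r):
    """Left-hand side of the resolvent inequality at the test point y."""
    return (y ** 2 - z ** 2) - (1.0 / r) * (x - z) * (y - z)

x, r = 3.0, 0.5
z = resolvent(x, r)          # z = 1.5 here
holds = all(defining_lhs(x, z, y, r) >= -1e-12
            for y in [-2.0, -1.0, 0.0, z, 1.0, 2.0])
# Lemma 3.3 (iii): Fix(T_r^{F,A}) = GEP(F, A) = {0}, since F(0, y) = y^2 >= 0.
```

This also illustrates (i) and (iii) of the lemma: the resolvent is single-valued, and its only fixed point, $$x=0$$, is exactly the solution of the GEP for this F and A.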

## Convergence result

In this section, we state and prove our convergence result.

### Theorem 4.1

Let K be a nonempty, closed, and convex subset of a Hadamard manifold M. Let $$A: K \to TM$$ be a monotone vector field and $$F: K \times K \to \mathbb{R}$$ be a bifunction such that $$F(x,x)=0$$ for all $$x \in K$$, satisfying conditions (A1)–(A3). Let $$f: M \to M$$ be a ψ-contraction and $$S: K \to K$$ be a nonexpansive mapping. Assume $$\mathrm{Fix}(S) \cap \mathrm{GEP}(F,A) \neq \emptyset$$. For arbitrary $$x_{1} \in K$$ and sequences $$\{r_{n}\} \subset (0,\infty )$$, $$\{\alpha _{n}\}, \{\beta _{n}\} \subset (0,1)$$, let the sequence $$\{x_{n}\}$$ be defined iteratively by

\begin{aligned} \textstyle\begin{cases} y_{n}=\exp _{x_{n}}(1-\beta _{n})\exp _{x_{n}}^{-1}Sx_{n}, \\ x_{n+1}=\exp _{f(x_{n})}(1-\alpha _{n})\exp _{f(x_{n})}^{-1}T_{r_{n}}^{F,A}y_{n}. \end{cases}\displaystyle \end{aligned}
(4.1)

Suppose the following conditions hold:

1. (i)

$$\lim_{n \to \infty}\alpha _{n}=0$$ and $$\sum_{n =1}^{\infty}\alpha _{n}=\infty$$;

2. (ii)

$$0< a\leq \beta _{n} \leq b < 1$$ for some $$a,b > 0$$ for all $$n \geq 1$$;

3. (iii)

$$0 < r \leq r_{n}$$.

If $$0 < \kappa =\sup \{\frac{\psi (d(x_{n},p))}{d(x_{n},p)}: x_{n} \ne p, n \ge 0\}< 1$$ for all $$p \in \mathrm{Fix}(S) \cap \mathrm{GEP}(F,A)$$, then the sequence $$\{x_{n}\}$$ converges to a point $$p \in \mathrm{Fix}(S) \cap \mathrm{GEP}(F,A)$$.
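The scheme (4.1) can be sketched numerically in the flat case $$M=\mathbb{R}$$, where $$\exp _{x}v=x+v$$. All concrete choices of F, A, S, f, and the parameters below are ours, for illustration only:

```python
# Scheme (4.1) in the flat case M = R (exp_x(v) = x + v, exp_x^{-1}(y) = y - x).
# Illustrative choices: F(x, y) = y**2 - x**2 with A = 0, whose resolvent is
# T_r(x) = x / (1 + 2r); S(x) = x/2 (nonexpansive); and the contraction f(x) = x/4.
# Then Fix(S) and GEP(F, A) intersect exactly in {0}, and x_n -> 0.

def S(x):
    return x / 2.0

def f(x):
    return x / 4.0

def T(x, r):
    return x / (1.0 + 2.0 * r)

def iterate(x1, n_steps, r=1.0):
    x = x1
    for n in range(1, n_steps + 1):
        alpha = 1.0 / (n + 1)    # alpha_n -> 0 and sum alpha_n = infinity
        beta = 0.5               # 0 < a <= beta_n <= b < 1
        y = x + (1 - beta) * (S(x) - x)            # y_n on the geodesic from x_n to S x_n
        x = f(x) + (1 - alpha) * (T(y, r) - f(x))  # x_{n+1} on the geodesic from f(x_n) to T y_n
    return x

x_final = iterate(5.0, 60)   # approaches the common solution 0
```

With these particular choices the iterates contract geometrically toward the common solution 0, in line with the strong convergence asserted by the theorem.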

### Proof

Let $$p \in \mathrm{Fix}(S) \cap \mathrm{GEP}(F,A)$$. We can rewrite $$y_{n}$$ as $$y_{n}=\gamma _{n}^{1}(1-\beta _{n})$$, where $$\gamma _{n}^{1}: [0,1] \to M$$ is the geodesic joining $$x_{n}$$ to $$Sx_{n}$$. Then, by the nonexpansivity of S, we have

\begin{aligned} d(y_{n},p) &= d\bigl(\gamma _{n}^{1}(1- \beta _{n}),p\bigr) \\ &\leq \beta _{n} d\bigl(\gamma _{n}^{1}(0),p \bigr)+(1-\beta _{n})d\bigl(\gamma _{n}^{1}(1),p \bigr) \\ &= \beta _{n} d(x_{n},p)+(1-\beta _{n})d(Sx_{n},p) \\ &\leq \beta _{n} d(x_{n},p)+(1-\beta _{n})d(x_{n},p) \\ &=d(x_{n},p). \end{aligned}
(4.2)

Note also that $$x_{n+1}$$ can be written in the form $$x_{n+1}=\gamma _{n}^{2}(1-\alpha _{n})$$, where $$\gamma _{n}^{2}: [0,1] \to M$$ is the geodesic joining $$f(x_{n})$$ to $$T_{r_{n}}^{F,A}y_{n}$$, i.e., $$\gamma _{n}^{2}(0)=f(x_{n})$$ and $$\gamma _{n}^{2}(1)=T_{r_{n}}^{F,A}y_{n}$$. By the convexity of the Riemannian distance and the fact that $$0 < \kappa =\sup \{\frac{\psi (d(x_{n},p))}{d(x_{n},p)}: x_{n} \ne p, n \ge 0\}< 1$$, we obtain

\begin{aligned} d(x_{n+1},p)&=d\bigl(\gamma _{n}^{2}(1-\alpha _{n}),p\bigr) \\ &\leq \alpha _{n}d\bigl(\gamma _{n}^{2}(0),p \bigr)+(1-\alpha _{n})d\bigl(\gamma _{n}^{2}(1),p \bigr) \\ &= \alpha _{n} d\bigl(f(x_{n}),p\bigr)+(1-\alpha _{n})d\bigl(T_{r_{n}}^{F,A}y_{n},p\bigr) \\ &\leq \alpha _{n} \bigl[d\bigl(f(x_{n}),f(p)\bigr)+d \bigl(f(p),p\bigr)\bigr]+(1-\alpha _{n})d\bigl(T_{r_{n}}^{F,A}y_{n},T_{r_{n}}^{F,A}p \bigr) \\ &\leq \alpha _{n}\bigl[\psi \bigl( d(x_{n},p)\bigr)+d \bigl(f(p),p\bigr)\bigr]+(1-\alpha _{n})d(y_{n},p) \\ &\leq \alpha _{n}\bigl[\kappa d(x_{n},p)+d\bigl(f(p),p \bigr)\bigr]+(1-\alpha _{n})d(x_{n},p) \\ &\leq \bigl[1-\alpha _{n}(1-\kappa )\bigr]d(x_{n},p)+ \alpha _{n} d\bigl(f(p),p\bigr) \\ &\leq \max \biggl\{ d(x_{n},p),\frac {1}{(1-\kappa )}d\bigl(f(p),p\bigr) \biggr\} \\ &\vdots \\ &\leq \max \biggl\{ d(x_{1},p),\frac {1}{(1-\kappa )}d\bigl(f(p),p\bigr) \biggr\} . \end{aligned}

This implies that the sequence $$\{x_{n}\}$$ is bounded. It follows from (4.2) that $$\{y_{n}\}$$ is bounded, and thus $$\{Sx_{n}\}$$ and $$\{T_{r_{n}}^{F,A}y_{n}\}$$ are bounded.

Fix $$n \geq 1$$ and let $$u_{n}=f(x_{n})$$, $$v_{n}=T_{r_{n}}^{F,A}y_{n}$$, $$w=f(p)$$, $$v=Sp$$, $$p_{n}=x_{n}$$, and $$q_{n}=Sx_{n}$$. Consider the geodesic triangles $$\Delta (u_{n},v_{n},p)$$, $$\Delta (w,v_{n},p)$$, $$\Delta (u_{n},v_{n},w)$$, and $$\Delta (p_{n},q_{n},p)$$. Then, by Lemma 2.5, there exist comparison triangles $$\Delta (u_{n}^{\prime},v_{n}^{\prime},p^{\prime})$$, $$\Delta (w^{\prime},v_{n}^{\prime},p^{\prime})$$, $$\Delta (u_{n}^{\prime},v_{n}^{\prime},w^{\prime})$$, and $$\Delta (p_{n}^{\prime},q_{n}^{\prime},p^{\prime})$$ such that

\begin{aligned} &d(p_{n},q_{n})= \bigl\Vert p_{n}^{\prime}-q_{n}^{\prime} \bigr\Vert , d(p_{n},p)= \bigl\Vert p_{n}^{ \prime}-p^{\prime} \bigr\Vert \quad\text{and}\quad d(q_{n},p)= \bigl\Vert q_{n}^{\prime}-p^{ \prime} \bigr\Vert ,\\ &d(u_{n},v_{n})= \bigl\Vert u_{n}^{\prime}-v_{n}^{\prime} \bigr\Vert , d(u_{n},p)= \bigl\Vert u_{n}^{ \prime}-p^{\prime} \bigr\Vert \quad \text{and}\quad d(v_{n},p)= \bigl\Vert v_{n}^{\prime}-p^{ \prime} \bigr\Vert \end{aligned}

and

\begin{aligned} d(w,p)= \bigl\Vert w^{\prime}-p^{\prime} \bigr\Vert , d(u_{n},w)= \bigl\Vert u_{n}^{\prime}-w^{ \prime} \bigr\Vert \quad\text{and}\quad d(q_{n},v)= \bigl\Vert q_{n}^{\prime}-v^{\prime} \bigr\Vert . \end{aligned}

Let θ and $$\theta ^{\prime}$$ be the angles at p and $$p^{\prime}$$ in the triangles $$\Delta (w,x_{n+1},p)$$ and $$\Delta (w^{\prime},x_{n+1}^{\prime},p^{\prime})$$, respectively. Hence $$\theta \leq \theta ^{\prime}$$ and $$\cos \theta ^{\prime} \leq \cos \theta$$. Let $$y_{n}^{\prime}$$ and $$x_{n+1}^{\prime}$$ be the comparison points of $$y_{n}$$ and $$x_{n+1}$$, respectively; then

\begin{aligned} y_{n}^{\prime}=\beta _{n} p_{n}^{\prime} +(1-\beta _{n})q_{n}^{\prime} \quad\text{and}\quad x_{n+1}^{\prime}=\alpha _{n}u_{n}^{\prime}+(1- \alpha _{n})v_{n}^{ \prime}. \end{aligned}

Then by Lemma 2.10, we have

\begin{aligned} d^{2}(x_{n+1},p) &\leq \bigl\Vert x_{n+1}^{\prime}-p^{\prime} \bigr\Vert ^{2} \\ &= \bigl\Vert \alpha _{n}u_{n}^{\prime}+(1-\alpha _{n})v_{n}^{\prime}-p^{\prime} \bigr\Vert ^{2} \\ &\leq \bigl\Vert \alpha _{n}\bigl(u_{n}^{\prime}-w^{\prime} \bigr)+(1-\alpha _{n}) \bigl(v_{n}^{ \prime}-p^{\prime} \bigr) \bigr\Vert ^{2}+2\alpha _{n} \bigl\langle x_{n+1}^{\prime}-p^{\prime},w^{ \prime}-p^{\prime} \bigr\rangle \\ &\leq (1-\alpha _{n}) \bigl\Vert v_{n}^{\prime}-p^{\prime} \bigr\Vert ^{2} +\alpha _{n} \bigl\Vert u_{n}^{ \prime}-w^{\prime} \bigr\Vert ^{2} +2 \alpha _{n} \bigl\Vert x_{n+1}^{\prime}-p^{\prime} \bigr\Vert \bigl\Vert w^{\prime}-p^{\prime} \bigr\Vert \cos \theta ^{\prime} \\ &\leq (1-\alpha _{n})d^{2}(v_{n},p)+\alpha _{n} d^{2}(u_{n},w)+2\alpha _{n}d(x_{n+1},p)d(w,p) \cos \theta \\ &=(1-\alpha _{n})d^{2}\bigl(T_{r_{n}}^{F,A}y_{n},p \bigr)+\alpha _{n}d^{2}\bigl(f(x_{n}),f(p)\bigr)+2 \alpha _{n} d(x_{n+1},p)d\bigl(f(p),p\bigr) \cos \theta \\ &\le (1-\alpha _{n})d^{2}\bigl(T_{r_{n}}^{F,A}y_{n},p \bigr)+\alpha _{n}\psi ^{2}\bigl(d(x_{n},p)\bigr)+2 \alpha _{n} d(x_{n+1},p)d\bigl(f(p),p\bigr) \cos \theta . \end{aligned}

Using Lemma 3.3, $$0 < \kappa =\sup \{\frac{\psi (d(x_{n},p))}{d(x_{n},p)}: x_{n} \ne p, n \ge 0\}< 1$$ and the fact that $$\langle \exp _{p}^{-1}x_{n+1}, \exp _{p}^{-1}f(p) \rangle =d(x_{n+1},p)d(f(p),p) \cos \theta$$, we obtain

\begin{aligned} &d^{2}(x_{n+1},p) \\ &\quad \leq (1-\alpha _{n})d^{2} \bigl(T_{r_{n}}^{F,A}y_{n},p\bigr)+ \alpha _{n}\kappa d^{2}(x_{n},p)+2\alpha _{n} \bigl\langle \exp _{p}^{-1}x_{n+1}, \exp _{p}^{-1}f(p) \bigr\rangle \\ &\quad\leq (1-\alpha _{n})d^{2}(y_{n},p)-(1-\alpha _{n})d^{2}\bigl(y_{n},T_{r_{n}}^{F,A}y_{n} \bigr)+ \alpha _{n}\kappa d^{2}(x_{n},p)+2\alpha _{n} \bigl\langle \exp _{p}^{-1}x_{n+1}, \exp _{p}^{-1}f(p) \bigr\rangle \\ &\quad\leq (1-\alpha _{n})d^{2}(x_{n},p)+\alpha _{n}\kappa d^{2}(x_{n},p)+2\alpha _{n} \bigl\langle \exp _{p}^{-1}x_{n+1}, \exp _{p}^{-1}f(p) \bigr\rangle -(1-\alpha _{n})d^{2}\bigl(y_{n},T_{r_{n}}^{F,A}y_{n} \bigr) \\ &\quad= \bigl[1-\alpha _{n}(1-\kappa )\bigr]d^{2}(x_{n},p)+ \alpha _{n}(1-\kappa )b_{n}-(1- \alpha _{n})d^{2} \bigl(y_{n},T_{r_{n}}^{F,A}y_{n}\bigr), \end{aligned}
(4.3)

where $$b_{n} =\frac{2}{(1-\kappa )}\langle \exp _{p}^{-1}x_{n+1},\exp _{p}^{-1}f(p) \rangle$$. It follows from (4.3) that

\begin{aligned} (1-\alpha _{n})d^{2}\bigl(y_{n},T_{r_{n}}^{F,A}y_{n} \bigr) \leq d^{2}(x_{n},p)-d^{2}(x_{n+1},p)+ \alpha _{n}(1-\kappa )M^{\prime}, \end{aligned}
(4.4)

where $$M^{\prime}=\sup_{n \in \mathbb{N}}b_{n}$$.

We proceed to show that $$\{x_{n}\}$$ converges strongly to $$p=P_{\mathrm{Fix}(S) \cap \mathrm{GEP}(F,A)}f(p)$$. Let $$a_{n}=d^{2}(x_{n},p)$$ and $$\delta _{n}=\alpha _{n}(1-\kappa )$$, then we have that

\begin{aligned} a_{n+1}\leq (1-\delta _{n})a_{n}+ \delta _{n} b_{n} \end{aligned}

holds from (4.3). Next, we show that $$\limsup_{k \to \infty} b_{n_{k}} \leq 0$$ whenever a subsequence $$\{a_{n_{k}}\}$$ of $$\{a_{n}\}$$ satisfies

\begin{aligned} \liminf_{k \to \infty}(a_{n_{k} +1}-a_{n_{k}}) \geq 0. \end{aligned}

Suppose such a subsequence exists, then by (4.4) and condition (i), we obtain

\begin{aligned} \limsup_{k \to \infty}(1-\alpha _{n_{k}})d^{2} \bigl(y_{n_{k}},T_{r_{n_{k}}}^{F,A}y_{n_{k}}\bigr)& \leq \limsup_{k \to \infty}(a_{n_{k}}-a_{n_{k} +1})+(1- \kappa )M^{\prime}\lim_{k \to \infty}\alpha _{n_{k}} \\ &=-\liminf_{k \to \infty}(a_{n_{k} +1}-a_{n_{k}}) \\ &\leq 0, \end{aligned}
(4.5)

thus

\begin{aligned} \lim_{k \to \infty}d\bigl(y_{n_{k}},T_{r_{n_{k}}}^{F,A}y_{n_{k}} \bigr)=0. \end{aligned}
(4.6)

By Lemma 3.3, this implies that

\begin{aligned} \lim_{k \to \infty}d\bigl(y_{n_{k}},T_{r}^{F,A}y_{n_{k}} \bigr)=0. \end{aligned}
(4.7)

Observe also that

\begin{aligned} d^{2}(y_{n},p) &\leq \bigl\Vert y_{n}^{\prime}-p^{\prime} \bigr\Vert ^{2} \\ & = \bigl\Vert \beta _{n}\bigl(p_{n}^{\prime}-p^{\prime} \bigr)+(1-\beta _{n}) \bigl(q_{n}^{ \prime}-p^{\prime} \bigr) \bigr\Vert ^{2} \\ &=\beta _{n} \bigl\Vert p_{n}^{\prime}-p^{\prime} \bigr\Vert ^{2}+(1-\beta _{n}) \bigl\Vert q_{n}^{ \prime}-p^{\prime} \bigr\Vert ^{2}-\beta _{n}(1-\beta _{n}) \bigl\Vert p_{n}^{\prime}-q_{n}^{ \prime} \bigr\Vert ^{2} \\ &\leq \beta _{n} d^{2}(p_{n},p)+(1-\beta _{n})d^{2}(q_{n},p)-\beta _{n}(1- \beta _{n})d^{2}(p_{n},q_{n}) \\ &= \beta _{n} d^{2}(x_{n},p)+(1-\beta _{n})d^{2}(Sx_{n},Sp)-\beta _{n}(1- \beta _{n})d^{2}(x_{n},Sx_{n}) \\ &\leq d^{2}(x_{n},p)-\beta _{n}(1-\beta _{n})d^{2}(x_{n},Sx_{n}). \end{aligned}
(4.8)

Using this in (4.3), we have

\begin{aligned} d^{2}(x_{n+1},p) \leq {}&(1-\alpha _{n})d^{2}(y_{n},p) +\alpha _{n}\kappa d^{2}(x_{n},p)+2 \alpha _{n} \bigl\langle \exp _{p}^{-1}x_{n+1},\exp _{p}^{-1}f(p) \bigr\rangle \\ \leq{}& (1-\alpha _{n})\bigl[d^{2}(x_{n},p)-\beta _{n}(1-\beta _{n})d^{2}(x_{n},Sx_{n}) \bigr] \\ &{}+\alpha _{n}\kappa d^{2}(x_{n},p)+2\alpha _{n} \bigl\langle \exp _{p}^{-1}x_{n+1},\exp _{p}^{-1}f(p) \bigr\rangle \\ \leq{}& \bigl[1-\alpha _{n}(1-\kappa )\bigr]d^{2}(x_{n},p)+2 \alpha _{n}\bigl\langle \exp _{p}^{-1}x_{n+1}, \exp _{p}^{-1}f(p) \bigr\rangle \\ &{}-\beta _{n}(1-\beta _{n}) (1- \alpha _{n})d^{2}(x_{n},Sx_{n}). \end{aligned}

Proceeding as before, we obtain

\begin{aligned} \limsup_{k \to \infty}\beta _{n_{k}}(1-\beta _{n_{k}}) (1- \alpha _{n_{k}})d^{2}(x_{n_{k}},Sx_{n_{k}})&\leq \limsup_{k \to \infty}(a_{n_{k}}-a_{n_{k} +1})+(1-\kappa )M^{\prime}\lim_{k \to \infty}\alpha _{n_{k}} \\ &= -\liminf_{k \to \infty}(a_{n_{k} +1}-a_{n_{k}}) \\ &\leq 0, \end{aligned}

which, using conditions (i) and (ii), implies that

\begin{aligned} \lim_{k \to \infty}d(x_{n_{k}},Sx_{n_{k}})=0. \end{aligned}
(4.9)

Now, from the convexity of the Riemannian distance, we have

\begin{aligned} d\bigl(x_{n_{k} +1},T_{r_{n_{k}}}^{F,A}y_{n_{k}}\bigr)&=d \bigl(\gamma _{n_{k}}^{2}(1- \alpha _{n_{k}}),T_{r_{n_{k}}}^{F,A}y_{n_{k}} \bigr) \\ &\leq \alpha _{n_{k}}d\bigl(\gamma _{n_{k}}^{2}(0),T_{r_{n_{k}}}^{F,A}y_{n_{k}} \bigr)+(1- \alpha _{n_{k}})d\bigl(\gamma _{n_{k}}^{2}(1),T_{r_{n_{k}}}^{F,A}y_{n_{k}} \bigr) \\ &=\alpha _{n_{k}}d\bigl(f(x_{n_{k}}),T_{r_{n_{k}}}^{F,A}y_{n_{k}} \bigr)+(1- \alpha _{n_{k}})d\bigl(T_{r_{n_{k}}}^{F,A}y_{n_{k}},T_{r_{n_{k}}}^{F,A}y_{n_{k}} \bigr) \\ &\leq \alpha _{n_{k}}d\bigl(f(x_{n_{k}}),T_{r_{n_{k}}}^{F,A}y_{n_{k}} \bigr), \end{aligned}

which by condition (i), implies

\begin{aligned} d\bigl(x_{n_{k} +1},T_{r_{n_{k}}}^{F,A}y_{n_{k}}\bigr) \to 0 \quad\text{as } k \to \infty. \end{aligned}
(4.10)

In a similar vein, we have

\begin{aligned} d(y_{n_{k}},x_{n_{k}}) &=d\bigl(\gamma _{n_{k}}^{1}(1- \beta _{n_{k}}),x_{n_{k}}\bigr) \\ &\leq \beta _{n_{k}}d\bigl(\gamma _{n_{k}}^{1}(0),x_{n_{k}} \bigr)+(1-\beta _{n_{k}})d\bigl( \gamma _{n_{k}}^{1}(1),x_{n_{k}} \bigr) \\ &= \beta _{n_{k}}d(x_{n_{k}},x_{n_{k}})+(1-\beta _{n_{k}})d(Sx_{n_{k}},x_{n_{k}}) \\ &= (1-\beta _{n_{k}})d(x_{n_{k}},Sx_{n_{k}}), \end{aligned}

which, by (4.9), implies that

\begin{aligned} \lim_{k \to \infty}d(y_{n_{k}},x_{n_{k}})=0. \end{aligned}
(4.11)

It is easy to see from (4.6) and (4.10) that

\begin{aligned} d(x_{n_{k} +1},y_{n_{k}}) \to 0 \quad\text{as } k \to \infty. \end{aligned}

Using this and (4.11), we get

\begin{aligned} \lim_{k \to \infty}d(x_{n_{k} +1},x_{n_{k}})=0. \end{aligned}
(4.12)

To conclude this process, we now show that $$\limsup_{k \to \infty} b_{n_{k}} \leq 0$$. Indeed, since $$\{x_{n_{k}}\}$$ is bounded, there exists a subsequence $$\{x_{n_{k_{j}}}\}$$ of $$\{x_{n_{k}}\}$$, which converges weakly to $$q \in M$$. Thus, we obtain by (4.12) that

\begin{aligned} \limsup_{k \to \infty} \bigl\langle \exp _{p}^{-1}f(p),\exp _{p}^{-1}x_{n_{k} +1} \bigr\rangle &= \lim_{j \to \infty}\bigl\langle \exp _{p}^{-1}f(p), \exp _{p}^{-1}x_{n_{k_{j}}+1} \bigr\rangle \\ & =\bigl\langle \exp _{p}^{-1}f(p), \exp _{p}^{-1}q \bigr\rangle . \end{aligned}
(4.13)

It also follows from $$x_{n_{k_{j}}}\rightharpoonup q$$ and (4.11) that $$y_{n_{k_{j}}} \rightharpoonup q$$. Therefore, by (4.9) and (4.7), we obtain $$q \in \mathrm{Fix}(S)$$ and $$q\in \mathrm{Fix}(T_{r}^{F,A})=\mathrm{GEP}(F,A)$$, respectively. Thus, $$q \in \mathrm{Fix}(S) \cap \mathrm{GEP}(F,A)$$. From Lemma 2.3, (4.13), and $$p=P_{\mathrm{Fix}(S) \cap \mathrm{GEP}(F,A)}f(p)$$, we get

\begin{aligned} \limsup_{k \to \infty} \bigl\langle \exp _{p}^{-1}f(p), \exp _{p}^{-1}x_{n_{k} +1} \bigr\rangle &=\lim _{j \to \infty}\bigl\langle \exp _{p}^{-1}f(p), \exp _{p}^{-1}x_{n_{k_{j}}+1} \bigr\rangle \\ &=\bigl\langle \exp _{p}^{-1}f(p),\exp _{p}^{-1}q \bigr\rangle \\ &\leq 0. \end{aligned}
(4.14)

Therefore, we conclude by Lemma 2.11 on (4.3) that $$x_{n} \to p$$. □
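The final step rests on the standard sequence lemma invoked here as Lemma 2.11 (a Saejung–Yotkaew-type result): if $$a_{n+1} \leq (1-\delta _{n})a_{n}+\delta _{n}b_{n}$$ with $$\delta _{n} \in (0,1)$$, $$\sum \delta _{n}=\infty$$, and the appropriate limsup condition on $$\{b_{n}\}$$ holds, then $$a_{n} \to 0$$. A minimal numerical sketch of this mechanism (the particular choices of $$\delta _{n}$$ and $$b_{n}$$ below are for illustration only and are not taken from the paper):

```python
# Illustration of the recursion a_{n+1} <= (1 - d_n) a_n + d_n b_n:
# with d_n in (0,1), sum d_n = infinity, and b_n -> 0, a_n tends to 0.
def recursive_sequence(a1, n_max):
    a = a1
    for n in range(1, n_max + 1):
        d_n = 1.0 / (n + 1)   # step sizes whose series diverges
        b_n = 1.0 / n         # perturbations with b_n -> 0
        a = (1 - d_n) * a + d_n * b_n
    return a

# a_n shrinks toward 0 even though each step adds a small positive term
print(recursive_sequence(10.0, 10_000))
```

Running the sketch with a large initial value shows the iterates decaying toward zero, which is exactly how the proof forces $$d^{2}(x_{n},p) \to 0$$.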

The following is a corollary of our main result. For $$A=0$$, we obtain a result for approximating a common solution of an equilibrium problem and a fixed point of a nonexpansive mapping.

### Corollary 4.2

Let K be a nonempty, closed, and convex subset of a Hadamard manifold M. Let $$F: K \times K \to \mathbb{R}$$ with $$F(x,x)=0$$ for all $$x \in K$$ be a bifunction satisfying conditions (A1)–(A3). Let $$f: M \to M$$ be a ψ-contraction and $$S: K \to K$$ be a nonexpansive mapping. Assume $$\mathrm{Fix}(S) \cap \mathrm{GEP}(F,0) \neq \emptyset$$. For arbitrary $$x_{1} \in K$$ and sequences $$\{r_{n}\} \subset (0,\infty )$$, $$\{\beta _{n}\}, \{\alpha _{n}\} \subset (0,1)$$, let the sequence $$\{x_{n}\}$$ be defined iteratively by

\begin{aligned} \textstyle\begin{cases} y_{n}=\exp _{x_{n}}(1-\beta _{n})\exp _{x_{n}}^{-1}Sx_{n}, \\ x_{n+1}=\exp _{f(x_{n})}(1-\alpha _{n})\exp _{f(x_{n})}^{-1}T_{r_{n}}^{F,0}y_{n}. \end{cases}\displaystyle \end{aligned}
(4.15)

Suppose the following conditions hold:

1. (i)

$$\lim_{n \to \infty}\alpha _{n}=0$$ and $$\sum_{n =1}^{\infty}\alpha _{n}=\infty$$;

2. (ii)

$$0< a\leq \beta _{n} \leq b < 1$$ for some $$a,b > 0$$ for all $$n \geq 1$$;

3. (iii)

$$0 < r \leq r_{n}$$.

If $$0 < \kappa =\sup \{\frac{\psi (d(x_{n},p))}{d(x_{n},p)}: x_{n} \ne p, n \ge 0\}< 1$$ for all $$p \in \mathrm{Fix}(S) \cap \mathrm{GEP}(F,0)$$, then the sequence $$\{x_{n}\}$$ converges to a point $$p \in \mathrm{Fix}(S) \cap \mathrm{GEP}(F,0)$$.

Suppose that $$M=H$$ is a real Hilbert space. Then we have the following as a consequence of Theorem 4.1:

### Corollary 4.3

Let K be a nonempty, closed, and convex subset of a real Hilbert space H. Let $$A: K \to H$$ be a monotone operator, and let $$F: K \times K \to \mathbb{R}$$ with $$F(x,x)=0$$ for all $$x \in K$$ be a bifunction satisfying conditions (A1)–(A3). Let $$f: H \to H$$ be a contraction and $$S: K \to K$$ be a nonexpansive mapping. Assume $$\mathrm{Fix}(S) \cap \mathrm{GEP}(F,A) \neq \emptyset$$. For arbitrary $$x_{1} \in K$$ and sequences $$\{r_{n}\} \subset (0,\infty )$$, $$\{\beta _{n}\}, \{\alpha _{n}\} \subset (0,1)$$, let the sequence $$\{x_{n}\}$$ be defined iteratively by

\begin{aligned} \textstyle\begin{cases} y_{n}=\beta _{n} x_{n} +(1-\beta _{n})Sx_{n}, \\ x_{n+1}=\alpha _{n}f(x_{n})+(1-\alpha _{n})T_{r_{n}}^{F,A}y_{n}. \end{cases}\displaystyle \end{aligned}
(4.16)

Suppose the following conditions hold:

1. (i)

$$\lim_{n \to \infty}\alpha _{n}=0$$ and $$\sum_{n =1}^{\infty}\alpha _{n}=\infty$$;

2. (ii)

$$0< a\leq \beta _{n} \leq b < 1$$ for some $$a,b > 0$$ for all $$n \geq 1$$;

3. (iii)

$$0 < r \leq r_{n}$$.

Then, the sequence $$\{x_{n}\}$$ converges to a point $$p \in \mathrm{Fix}(S) \cap \mathrm{GEP}(F,A)$$.
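Scheme (4.16) is easy to try directly in the Hilbert-space setting. The sketch below is a toy instance of ours, not the paper's experiment: it takes $$K=H=\mathbb{R}$$, $$F \equiv 0$$, and the monotone operator $$Ax=x$$, for which the resolvent reduces to the classical one, $$T_{r}^{0,A}=(I+rA)^{-1}$$, so that $$\mathrm{GEP}(0,A)=\{0\}$$; S is the identity and $$f(x)=\frac{x}{2}$$ is a contraction.

```python
# Toy instance of iteration (4.16) in H = R (illustrative assumptions:
# F = 0, A x = x, S = identity, f(x) = x/2; the common solution is 0).
def iterate_4_16(x1, n_max, r=1.0):
    x = x1
    for n in range(1, n_max + 1):
        alpha = 1.0 / (n + 1)
        beta = 0.5
        y = beta * x + (1 - beta) * x        # S = identity, so y_n = x_n
        t = y / (1 + r)                      # T_r^{F,A} y = (I + r A)^{-1} y
        x = alpha * (x / 2) + (1 - alpha) * t  # viscosity step with f(x) = x/2
    return x

print(iterate_4_16(5.0, 50))  # iterates approach the solution 0
```

With these choices each step halves the iterate, so the sequence converges to the unique common solution $$0$$, consistent with the corollary.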

## Application

In this section, we apply our main result from Sect. 4 to the problem of finding a common solution of a fixed point problem and a convex minimization problem. In particular, we consider the minimization of a sum of convex functions of the following form:

\begin{aligned} \min_{x\in M}h_{1}(x)+h_{2}(x), \end{aligned}
(5.1)

where M is a Hadamard manifold, $$h_{1}: M \to \mathbb{R} \cup \{\infty \}$$ is a proper, lower semicontinuous, and convex function, and $$h_{2}: M \to \mathbb{R}$$ is a convex and differentiable function. We note that the problem of finding $$x \in M$$ such that $$\langle Ax,\exp _{x}^{-1}y \rangle \ge 0$$ for all $$y \in M$$ is the optimality condition of the convex minimization problem

\begin{aligned} \min_{x\in M}h_{2}(x), \end{aligned}

when $$A=\nabla h_{2}$$. On the other hand, the Moreau–Yosida proximal mapping $$h_{1,\lambda}:M \to M$$ of the function $$h_{1}$$, defined by

\begin{aligned} h_{1,\lambda}(x)=\arg \min_{y \in M} \biggl(h_{1}(y)+ \frac{1}{2\lambda}d^{2}(x,y) \biggr) \end{aligned}

is the resolvent of the bifunction $$F: M \times M \to \mathbb{R}$$ defined by $$F(x,y)=h_{1}(y)-h_{1}(x)$$. It is known that for any $$x \in M$$ and $$\lambda > 0$$, the point $$z_{\lambda}=h_{1,\lambda}(x)$$ exists and satisfies $$\frac{1}{\lambda}\exp _{z_{\lambda}}^{-1}x \in \partial h_{1}(z_{\lambda})$$. The mapping $$h_{1,\lambda}$$ is nonexpansive, and every fixed point of $$h_{1,\lambda}$$ is a solution of the minimization problem

\begin{aligned} \min_{x\in M}h_{1}(x). \end{aligned}
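In the flat case $$M=\mathbb{R}$$ (a Hadamard manifold of zero curvature), $$h_{1,\lambda}$$ is the classical proximal mapping, and its fixed points are exactly the minimizers of $$h_{1}$$. As an illustration of ours (this instance is not taken from the paper), take $$h_{1}(x)=|x|$$, whose proximal map is the well-known soft-thresholding operator:

```python
# Proximal map of h1(x) = |x| on the real line:
# prox_{lam|.|}(x) = argmin_y |y| + (1/(2*lam)) * (x - y)^2,
# which evaluates to soft thresholding.
def soft_threshold(x, lam):
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

print(soft_threshold(3.0, 1.0))    # 2.0
print(soft_threshold(-0.5, 1.0))   # 0.0
print(soft_threshold(0.0, 1.0))    # 0.0: the fixed point is argmin |x|
```

The unique fixed point of this map is $$0=\arg \min |x|$$, matching the statement above.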

Using $$h=h_{1}+h_{2}$$ and the adaptations above, we obtain the following result for approximating a common solution of the fixed point and convex minimization problems: find $$x \in M$$ such that $$x \in \mathrm{Fix}(S) \cap \arg \min h$$, where $$\arg \min h$$ is the solution set of

\begin{aligned} \min_{x\in M}h(x). \end{aligned}

### Theorem 5.1

Let K be a nonempty, closed, and convex subset of a Hadamard manifold M. Let $$h_{1}: M \to \mathbb{R} \cup \{\infty \}$$ be a proper, lower semicontinuous, and convex function, and let $$h_{2}: M \to \mathbb{R}$$ be a convex and differentiable function, with $$h=h_{1}+h_{2}$$. Let $$f: M \to M$$ be a ψ-contraction and $$S: K \to K$$ be a nonexpansive mapping. Assume $$\mathrm{Fix}(S) \cap \arg\min h \ne \emptyset$$. For arbitrary $$x_{1} \in K$$ and sequences $$\{r_{n}\} \subset (0,\infty )$$, $$\{\beta _{n}\}, \{\alpha _{n}\} \subset (0,1)$$, let the sequence $$\{x_{n}\}$$ be defined iteratively by

\begin{aligned} \textstyle\begin{cases} y_{n}=\exp _{x_{n}}(1-\beta _{n})\exp _{x_{n}}^{-1}Sx_{n}, \\ x_{n+1}=\exp _{f(x_{n})}(1-\alpha _{n})\exp _{f(x_{n})}^{-1}h_{r_{n}}y_{n}. \end{cases}\displaystyle \end{aligned}
(5.2)

Suppose the following conditions hold:

1. (i)

$$\lim_{n \to \infty}\alpha _{n}=0$$ and $$\sum_{n =1}^{\infty}\alpha _{n}=\infty$$;

2. (ii)

$$0< a\leq \beta _{n} \leq b < 1$$ for some $$a,b > 0$$ for all $$n \geq 1$$;

3. (iii)

$$0 < r \leq r_{n}$$.

If $$0 < \kappa =\sup \{\frac{\psi (d(x_{n},p))}{d(x_{n},p)}: x_{n} \ne p, n \ge 0\}< 1$$ for all $$p \in \mathrm{Fix}(S) \cap \arg\min h$$, then the sequence $$\{x_{n}\}$$ converges to a point $$p \in \mathrm{Fix}(S) \cap \arg \min h$$.

## Numerical example

In this section, we present some numerical illustrations in the framework of Hadamard manifolds to demonstrate the convergence of Algorithm (4.1). All programs are written in Matlab R2022a and run on a PC with an Intel(R) Core(TM) i7 @ 2.40 GHz processor and 8.00 GB RAM.

Let $$M:=\mathbb{R}^{++}=\{x \in \mathbb{R}: x > 0\}$$ and let $$(M,\langle \cdot,\cdot \rangle )$$ be the Riemannian manifold with the Riemannian metric defined by $$\langle p,q\rangle =\frac{1}{x^{2}}pq$$ for all $$p,q \in T_{x} M$$, where $$T_{x} M$$ is the tangent space at $$x \in M$$; here $$T_{x} M=\mathbb{R}$$ for each $$x \in M$$. The Riemannian distance $$d: M \times M \to \mathbb{R}^{+}$$ is defined by $$d(x,y)=|\ln \frac{x}{y}|$$ for all $$x,y \in M$$. Then $$(M,\langle \cdot,\cdot \rangle )$$ is a Hadamard manifold, and the unique geodesic $$\gamma: \mathbb{R} \to M$$ starting from $$\gamma (0)=x$$ with $$q=\gamma ^{\prime}(0) \in T_{x} M$$ is given by $$\gamma (t)=x\exp ^{\frac{qt}{x}}$$. Thus,

\begin{aligned} \exp _{x}qt=x\exp ^{\frac{qt}{x}}. \end{aligned}

The inverse exponential map is defined by

\begin{aligned} \exp _{x}^{-1}y=\gamma ^{\prime}(0)=x\ln \frac{y}{x}. \end{aligned}

### Example 6.1

Let $$K=[1,+\infty )$$ be a geodesic convex subset of $$\mathbb{R}^{++}$$, let $$F: K \times K \to \mathbb{R}$$ be the bifunction defined for all $$x,y \in K$$ by $$F(x,y)=-\frac{1}{2}\ln \frac{y}{x}$$, and let $$A:K \to \mathbb{R}$$ be the single-valued vector field defined by $$Ax=x\ln x$$ for all $$x \in K$$. Then it is easy to see that Assumptions (A1)–(A4) are satisfied; by Lemma 3.3, we can find $$z \in K$$ such that

\begin{aligned} 0 & \leq F(z,y)+\bigl\langle Az,\exp _{z}^{-1}y\bigr\rangle -\frac{1}{r} \bigl\langle \exp _{z}^{-1}x,\exp _{z}^{-1}y \bigr\rangle \\ &=-\frac{1}{2}\ln \frac{y}{z}+\biggl\langle z\ln z,z\ln \frac{y}{z}\biggr\rangle - \frac{1}{r}\biggl\langle z\ln \frac{x}{z},z\ln \frac{y}{z}\biggr\rangle \\ &=-\frac{1}{2}\ln \frac{y}{z}+\ln z\ln \frac{y}{z}- \frac{1}{r}\ln \frac{x}{z}\ln \frac{y}{z}, \end{aligned}

Since this holds for all $$y \in K$$, the coefficient of $$\ln \frac{y}{z}$$ must vanish, that is,

\begin{aligned} &\frac{1}{r}\ln \frac{x}{z}=\ln z-\frac{1}{2} \\ &\quad\Rightarrow\quad \ln x+\frac{r}{2}=r\ln z+\ln z \\ &\quad\Rightarrow \quad\ln z=\frac{2\ln x+r}{2(r+1)} \\ &\quad\Rightarrow\quad z=\exp \biggl(\frac{2\ln x+r}{2(r+1)} \biggr). \end{aligned}

Therefore, $$z=T_{r}^{F,A}x=\exp (\frac{2\ln x+r}{2(r+1)} )$$.
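As a quick sanity check (ours, not part of the paper's experiment), note that a fixed point of this resolvent solves $$\ln z = \frac{2\ln z + r}{2(r+1)}$$, i.e. $$\ln z=\frac{1}{2}$$, so the GEP solution here is $$z=\sqrt{e}$$ for every $$r>0$$:

```python
import math

# Resolvent of Example 6.1 on M = R^{++}: T_r x = exp((2 ln x + r) / (2(r+1))).
def T(x, r):
    return math.exp((2.0 * math.log(x) + r) / (2.0 * (r + 1.0)))

z = math.sqrt(math.e)       # candidate GEP solution, ln z = 1/2
print(abs(T(z, 0.5) - z))   # ~0: z is a fixed point of T_r for any r > 0
print(abs(T(z, 3.0) - z))
```

The fixed point being independent of r is consistent with Lemma 3.3, which identifies $$\mathrm{Fix}(T_{r}^{F,A})$$ with the solution set of the GEP.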

Let $$f: M \to M$$ be defined by $$f(x)=\frac{x}{2}$$. Choose $$r_{n}=\frac{1}{2}$$, $$\alpha _{n}=\frac{1}{n+1}$$ and $$\beta _{n}=\frac{1}{2n+3}$$. Using $$E_{n}=d^{2}(x_{n},x_{n+1})\leq \epsilon$$ with $$\epsilon = 10^{-4}$$ as the stopping criterion, we perform this experiment for varying values of $$x_{1}$$.

1. Case (1):

$$x_{1}=0.896$$;

2. Case (2):

$$x_{1}=1.062$$;

3. Case (3):

$$x_{1}=\ln 2+e^{2}$$;

4. Case (4):

$$x_{1}=\ln (\sqrt{2})$$.

The report of this experiment is given in Fig. 1.
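The iteration itself can be reproduced in a few lines. The paper's code is in Matlab; the sketch below is a Python re-implementation under one extra assumption, since the nonexpansive mapping S is not specified in Example 6.1: we take S to be the identity (which is nonexpansive, with $$\mathrm{Fix}(S)=K$$). Geodesics on this manifold satisfy $$\ln \gamma (t)=(1-t)\ln a+t\ln b$$, so scheme (4.1) can be run in logarithmic coordinates:

```python
import math

# Algorithm (4.1) on M = R^{++} for Example 6.1, assuming S = identity.
# The geodesic point exp_a(t * exp_a^{-1} b) equals exp((1-t) ln a + t ln b).
def geo(a, b, t):
    return math.exp((1 - t) * math.log(a) + t * math.log(b))

def T(x, r):  # resolvent computed in Example 6.1
    return math.exp((2 * math.log(x) + r) / (2 * (r + 1)))

def algorithm_4_1(x1, n_max):
    f = lambda x: x / 2   # the mapping f of Example 6.1
    S = lambda x: x       # assumption: S = identity (not specified in the text)
    x = x1
    for n in range(1, n_max + 1):
        alpha = 1.0 / (n + 1)
        beta = 1.0 / (2 * n + 3)
        r = 0.5
        y = geo(x, S(x), 1 - beta)          # y_n
        x = geo(f(x), T(y, r), 1 - alpha)   # x_{n+1}
    return x

print(algorithm_4_1(2.0, 2000))  # approaches sqrt(e) ~ 1.6487, the GEP solution
```

Under this assumption the common solution set is $$\mathrm{GEP}(F,A)=\{\sqrt{e}\}$$, and the iterates indeed settle near $$\sqrt{e}$$ as $$\alpha _{n} \to 0$$.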

### Example 6.2

Let $$M:=\mathbb{R}_{2}^{++}=\{x=(x_{1},x_{2})\in \mathbb{R}^{2}: x_{i}>0, i=1,2 \}$$, and let $$(M, \langle \cdot,\cdot \rangle )$$ be the Hadamard manifold with the metric defined by $$\langle p,q \rangle:=p^{T}P(x)q$$ for $$x \in \mathbb{R}_{2}^{++}$$ and $$p,q \in T_{x} \mathbb{R}_{2}^{++}$$, where $$P(x)$$ is the diagonal matrix $$P(x)=\operatorname{diag} (\frac{1}{x_{1}^{2}},\frac{1}{x_{2}^{2}} )$$. In addition, the Riemannian distance is defined by $$d(x,y)=\sqrt{\ln ^{2} \frac{x_{1}}{y_{1}}+\ln ^{2} \frac{x_{2}}{y_{2}}}$$, where $$x=(x_{1},x_{2})$$, $$y=(y_{1},y_{2}) \in M$$. Let $$K=\{x=(x_{1},x_{2}):1 \leq x_{i} \leq 5, i=1,2\}$$ be a closed, geodesic convex subset of $$\mathbb{R}_{2}^{++}$$. Let $$F: K \times K \to \mathbb{R}$$ and $$A:K \to \mathbb{R}^{2}$$ be defined as in Example 6.1; then $$T_{r}^{F,A}x=\exp (\frac{2\ln x+r}{2(r+1)} )$$, applied componentwise, for all $$x=(x_{1},x_{2}) \in K$$. Let $$f: M \to M$$ be defined by $$f(x)=\frac{x}{16}$$. Choose $$r_{n}=\frac{3}{4}$$, $$\alpha _{n}=\frac{1}{n+1}$$, and $$\beta _{n}=\frac{1}{2n+5}$$. We use $$E_{n}=d(x_{n+1},x_{n}) \leq \epsilon$$ as the terminating criterion with $$\epsilon =10^{-4}$$. For this numerical experiment, we consider the following cases of the starting point $$x_{1}$$.

1. Case (I):

$$x_{1}=(\ln (\sqrt{2}), \ln (1.1))$$;

2. Case (II):

$$x_{1}= (0.89,1.12)$$;

3. Case (III):

$$x_{1}=(2.03,0.09)$$;

4. Case (IV):

$$x_{1}=(1.36,1.36)$$.

The report of this experiment is given in Fig. 2.

## Conclusion

By combining the notions of an equilibrium problem and a variational inequality problem, we introduced the concept of a generalized equilibrium problem in a Hadamard manifold. We studied the existence of the solution of the GEP and proved the properties of the associated resolvent function. Further, we proposed an iterative algorithm for approximating a common solution of the GEP and a fixed point problem. Using the proposed method, we proved a strong convergence theorem for approximating a solution of the GEP, which is also a fixed point of a nonexpansive mapping. Some numerical experiments were also given to illustrate the convergence of the method.

## Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

## References

1. Ali-Akbari, M.: A subgradient extragradient method for equilibrium problems on Hadamard manifolds. Int. J. Nonlinear Anal. Appl. 13(1), 75–84 (2022)

2. Ansari, Q.H., Babu, F., Zeeshan, M.: Implicit and explicit viscosity methods for hierarchical variational inequalities on Hadamard manifolds. Fixed Point Theory 23(2), 447–472 (2022)

3. Babu, F., Ali, A., Alkhaldi, A.H.: An extragradient method for non-monotone equilibrium problems on Hadamard manifolds with applications. Appl. Numer. Math. 180, 85–103 (2022)

4. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63(1–4), 123–145 (1994)

5. Boyd, D.W., Wong, J.S.: On nonlinear contractions. Proc. Am. Math. Soc. 20, 335–341 (1969)

6. Bridson, M.R., Haefliger, A.: Metric Spaces of Non-positive Curvature. Grundlehren der Mathematischen Wissenschaften (Fundamental Principles of Mathematical Sciences), vol. 319. Springer, Berlin (1999). https://doi.org/10.1007/978-3-662-12494-9

7. Chen, J., Liu, S.: Extragradient-like method for pseudomonotone equilibrium problems on Hadamard manifolds. J. Inequal. Appl. 2020, 205 (2020). https://doi.org/10.1186/s13660-020-02473-y

8. Colao, V., López, G., Marino, G., Martín-Márquez, V.: Equilibrium problems in Hadamard manifolds. J. Math. Anal. Appl. 388(1), 61–77 (2012)

9. Cottle, R.W., Giannessi, F., Lions, J.L.: Variational Inequalities and Complementarity Problems: Theory and Applications. Wiley, New York (1980)

10. do Carmo, M.P.: Riemannian Geometry. Mathematics: Theory and Applications. Birkhäuser, Boston (1992). Translated from the second Portuguese edition by Francis Flaherty. https://doi.org/10.1007/978-1-4757-2201-7

11. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. II. Springer Series in Operations Research. Springer, New York (2003)

12. Fan, K.: A minimax inequality and its application. In: Shisha, O. (ed.) Inequalities, 3, pp. 103–113. Academic, New York (1972)

13. Ferreira, O.P., Oliveira, P.R.: Proximal point algorithm on Riemannian manifolds. Optimization 51(2), 257–270 (2002)

14. Fichera, G.: Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Mem. Accad. Naz. Lincei 8(7), 91–140 (1964)

15. Filali, D., Dilshad, M., Akram, M., Babu, F., Ahmad, I.: Viscosity method for hierarchical variational inequalities and variational inclusions on Hadamard manifolds. J. Inequal. Appl. 2021, Article ID 66 (2021)

16. Jolaoso, L.O., Aphane, M.: Bregman subgradient extragradient method with monotone self-adjustment stepsize for solving pseudo-monotone variational inequalities and fixed point problem. J. Ind. Manag. Optim. (2020). https://doi.org/10.3934/jimo.2020178

17. Jolaoso, L.O., Oyewole, O.K., Aremu, K.O.: A Bregman subgradient extragradient method with self-adaptive technique for solving variational inequalities in reflexive Banach spaces. Optimization. https://doi.org/10.1080/02331934.2021.1925669

18. Jolaoso, L.O., Oyewole, O.K., Aremu, K.O., Mewomo, O.T.: A new efficient algorithm for finding common fixed points of multivalued demicontractive mappings and solutions of split generalized equilibrium problems in Hilbert spaces. Int. J. Comput. Math. (2020). https://doi.org/10.1080/00207160.2020.1856823

19. Khammahawong, K., Kumam, P., Chaipunya, P., Plubtieng, S.: New Tseng’s extragradient methods for pseudomonotone variational inequality problems in Hadamard manifolds. Fixed Point Theory Algorithms Sci. Eng. 2021, 5 (2021). https://doi.org/10.1186/s13663-021-00689-1

20. Khammahawong, K., Kumam, P., Chaipunya, P., Yao, J.C., Wen, C.F., Jirakitpuwapat, W.: An extragradient algorithm for strongly pseudomonotone equilibrium problems on Hadamard manifolds. Thai J. Math. 18(1), 350–371 (2020)

21. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Academic, New York (1980)

22. Korpelevich, G.M.: An extragradient method for finding saddle points and for other problems. Ekon. Mat. Metody. 12, 747–756 (1976)

23. Li, C., López, G., Martín-Márquez, V.: Monotone vector fields and the proximal point algorithm on Hadamard manifolds. J. Lond. Math. Soc. 79(3), 663–683 (2009). https://doi.org/10.1112/jlms/jdn087

24. Li, C., López, G., Martín-Márquez, V., Wang, J.-H.: Resolvents of set-valued monotone vector fields in Hadamard manifolds. Set-Valued Anal. 19, 361–383 (2011)

25. Li, C., Yao, J.C.: Variational inequalities for set-valued vector fields on Riemannian manifolds: convexity of the solution set and the proximal point algorithm. SIAM J. Control Optim. 50(4), 2486–2514 (2012)

26. Muu, L.D., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 18, 1159–1166 (1992)

27. Németh, S.Z.: Variational inequalities on Hadamard manifolds. Nonlinear Anal. 52, 1491–1498 (2003)

28. Nguyen, T.T.V., Strodiot, J.J., Nguyen, V.H.: Hybrid methods for solving simultaneously an equilibrium problem and countably many fixed point problems in a Hilbert space. J. Optim. Theory Appl. 160(3), 809–831 (2014)

29. Oyewole, O.K., Abass, H.A., Mebawondu, A.A., Aremu, K.O.: A Tseng extragradient method for solving variational inequality problems in Banach spaces. Numer. Algorithms (2021). https://doi.org/10.1007/s11075-021-01133-6

30. Phuengrattana, W., Lerkchaiyaphum, K.: On solving the split generalized equilibrium problem and the fixed point problem for a countable family of nonexpansive multivalued mappings. Fixed Point Theory Appl. 2018, Article ID 6 (2018)

31. Rapcsák, T.: Geodesic convexity in nonlinear optimization. J. Optim. Theory Appl. 69(1), 169–183 (1991)

32. Rapcsák, T.: Nonconvex Optimization and Its Applications, Smooth Nonlinear Optimization in Rn. Kluwer Academic, Dordrecht (1997)

33. Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operator in Banach spaces. Nonlinear Anal. 75, 742–750 (2012)

34. Sakai, T.: Riemannian Geometry. Vol. 149, Translations of Mathematical Monographs. Am. Math. Soc., Providence (1996). Translated from the 1992 Japanese original by the author

35. Salahuddin, S.: The existence of solution for equilibrium problems in Hadamard manifolds. Trans. A. Razmadze Math. Inst. 171(3), 381–388 (2017)

36. Stampacchia, G.: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413–4416 (1964)

37. Takahashi, S., Takahashi, W.: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 69(1), 1025–1033 (2008)

38. Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)

39. Tang, G.J., Zhou, L.W., Huang, N.J.: Existence results for a class of hemivariational inequality problems on Hadamard manifolds. Optimization 65(7), 1451–1461 (2016)

40. Tran, D.Q., Dung, M.L., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57(6), 749–776 (2008)

41. Udriste, C.: Convex Functions and Optimization Methods on Riemannian Manifolds. Mathematics and Its Applications, vol. 297. Kluwer Academic, Dordrecht (1994)

42. Wang, J.H., López, G., Martín-Márquez, V., Li, C.: Monotone and accretive vector fields on Riemannian manifolds. J. Optim. Theory Appl. 146, 691–708 (2010)

43. Wang, X., Li, C., Yao, J.C.: On some basic results related to affine functions on Riemannian manifolds. J. Optim. Theory Appl. 170, 783–803 (2016)

44. Zhou, L.-W., Huang, N.-J.: Existence of solutions for vector optimization on Hadamard manifolds. J. Optim. Theory Appl. 157(1), 44–53 (2013)

## Acknowledgements

The authors appreciate the support of their institutions.


## Author information


### Contributions

OKO conceptualized the research problem. OKO, KOA and LOJ validated the existence and approximation results. OKO, LOA and MA conducted the numerical experiments and established the result. All authors analyzed and interpreted the data from the numerical experiments. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to K. O. Aremu.

## Ethics declarations

Not applicable.

### Competing interests

The authors declare no competing interests.
