Extragradient-like method for pseudomonotone equilibrium problems on Hadamard manifolds

Abstract

This paper presents an extragradient-like method for solving pseudomonotone equilibrium problems satisfying a Lipschitz-type condition on Hadamard manifolds. The algorithm requires only the existence of the Lipschitz-type constants of the bifunction, not their values: the stepsize of each iteration is determined from the adjacent iterates. Convergence of the algorithm is analyzed, and its application to variational inequalities is also provided. Finally, several numerical experiments verify the effectiveness of the algorithms.

1 Introduction

The equilibrium problem \((\operatorname{EP})\) provides a unified setting for many problems, such as optimization problems and complementarity problems. In the past few decades, it has been studied extensively in linear spaces (e.g., Hadjisavvas et al. [1], Bianchi et al. [2, 3], Blum et al. [4]).

It is natural to extend these concepts and methods from linear spaces to Riemannian manifolds. By choosing a suitable Riemannian metric, a nonconvex optimization problem can be transformed into a convex one, and a constrained optimization problem can be transformed into an unconstrained one. Some classical algorithms have been extended from linear spaces to Riemannian manifolds, for example, by Ferreira et al. [5, 6], Li et al. [7], and Tang et al. [8]. Related work on Hadamard manifolds can be found in Kristály [9], Li et al. [10, 11], Ceng et al. [12], Zhou et al. [13], and so on.

In 2012, Colao et al. [14] studied the equilibrium problem on Hadamard manifolds. Let E be a nonempty closed convex subset of a Hadamard manifold \(\mathbb{M}\), and let \(S : E \times E \longrightarrow \mathbb{R}\) be a bifunction satisfying \(S(x, x)=0\), \(\forall x \in E\). The equilibrium problem is to find \(x \in E\) such that

$$ S (x, y ) \geq 0, \quad \forall y \in E. $$
(EP)

We denote the solution set of equilibrium problem (EP) by \(\operatorname{EP}(S; E)\), and we always assume that \(\operatorname{EP}(S; E)\neq \emptyset \). In the special case where \(S(x, y)=\langle V(x), \exp ^{-1}_{x}{y}\rangle \), with \(V : E\rightarrow T\mathbb{M}\) a vector field satisfying \(V(x) \in T_{x}\mathbb{M}\), \(\forall x \in E\), and \(\exp^{-1}\) the inverse of the exponential map, (EP) becomes the variational inequality problem: find \(x\in E\) such that

$$ \bigl\langle V (x ), \exp ^{-1}_{x}{y} \bigr\rangle \geq 0, \quad \forall y \in E . $$
(VI)

The solution set of variational inequality problem (VI) is denoted by \(\operatorname{VI}(V,E)\).

It is known that the KKM lemma is an important tool for studying the existence of solutions of equilibrium problems. Colao et al. [14] extended Fan’s KKM lemma [15] to Hadamard manifolds and obtained the existence of solutions to equilibrium problem (EP) there. For related results, see, for instance, Yang and Pu [16], Tang et al. [17], Chen et al. [18], Batista et al. [19], and Zhou et al. [20–22].

Furthermore, the existence of solutions of equilibrium problems or variational inequality problems on Riemannian manifolds has been established in several references. In particular, Li et al. [23] obtained existence and uniqueness results for variational inequality problems on Riemannian manifolds, while Li and Yao [24] provided existence theorems for variational inequalities for set-valued mappings on Riemannian manifolds. Very recently, Wang et al. [25] obtained the existence of solutions and the convexity of the solution set for the equilibrium problem on Riemannian manifolds.

Many ideas and methods for solving equilibrium problems or variational inequality problems have been developed in linear spaces. Korpelevich [26] first designed an extragradient method for solving variational inequality problems, and Censor et al. [27] proposed the subgradient extragradient method inspired by it. In 2019, Thong and Hieu [28] introduced an inertial subgradient extragradient algorithm based on the subgradient extragradient method in [27]. Then Ceng et al. [29] and Yao et al. [30] obtained inertial algorithms for finding a common solution of the variational inequality problem and the fixed-point problem by using a subgradient approach. As for equilibrium problems, Quoc et al. [31] obtained an extragradient method for solving pseudomonotone equilibrium problems, and Nguyen et al. [32] provided an iterative method for finding a common solution to an equilibrium problem and a fixed-point problem based on the extragradient method in [31]. Then, in 2020, Yao et al. [33] improved and extended the main result of [32] to a general case.

In recent years, algorithms for solving equilibrium problem (EP) on Hadamard manifolds have received a lot of attention, for example, from Colao et al. [14], Salahuddin [34], and Li et al. [35]. Recently, Cruz Neto et al. [36] extended the result of Nguyen et al. [32] and obtained an extragradient method for solving the equilibrium problem on Hadamard manifolds, which is described as follows: choose \(\lambda _{k}>0\), compute

$$ \textstyle\begin{cases} y^{k}=\arg \min_{z \in E} \{ S (x^{k}, z )+ \frac{1}{2 \lambda _{k}} d^{2} (x^{k}, z ) \} , \\ x^{k+1}=\arg \min_{z \in E} \{ S (y^{k}, z )+ \frac{1}{2 \lambda _{k}} d^{2} (x^{k}, z ) \} , \end{cases} $$
(1)

where \(0<\lambda _{k}<\beta <\min \{ \alpha _{1}^{-1}, \alpha _{2}^{-1} \} \), and \(\alpha _{1}\), \(\alpha _{2}\) are constants related to the Lipschitz-type constants. It should be noted that the Lipschitz-type constants are unknown in general and are difficult to estimate, especially for complex nonlinear problems.

Recently, Hieu et al. [37, 38] and Yang et al. [39, 40] introduced some proximal-like algorithms in the linear setting. The stepsize of these algorithms is computed from information at adjacent iterations, so knowledge of the Lipschitz constants is unnecessary.

Inspired by the work above, we present a new extragradient-like method for (EP) on Hadamard manifolds. Compared with [36], our algorithm is performed without prior knowledge of the Lipschitz-type constants. Moreover, the values at adjacent iteration points determine the stepsize of the next iteration, which can effectively improve the efficiency of the iteration. We note that, if \(\mathbb{M} = \mathbb{R}\), then our algorithm is an improvement of the algorithm presented in Hieu et al. [38].

The organization of the paper is as follows. In Sect. 2, we recall some basic facts about Riemannian manifolds that will be used throughout the paper; for more details, see [41, 42]. In Sect. 3, we introduce the extragradient-like algorithm and analyze its convergence. Finally, in Sect. 4, we present two experiments to verify the algorithms.

2 Preliminaries

Suppose \(\mathbb{M}\) is a simply connected n-dimensional Riemannian manifold, ∇ is the Levi-Civita connection, and γ is a smooth curve on \(\mathbb{M}\). Let V be the unique vector field satisfying \(\nabla _{\gamma '(t)}V=0\) (\(\forall t \in [a, b]\)) and \(V(\gamma (a))=v\). Then the parallel transport \(\mathrm{P}_{\gamma , \gamma (b), \gamma (a)}: T_{\gamma (a)} \mathbb{M} \rightarrow T_{\gamma (b)} \mathbb{M}\) on the tangent bundle \(T\mathbb{M}\) along γ is defined by

$$\begin{aligned} \mathrm{P}_{\gamma , \gamma (b), \gamma (a)}(v)=V\bigl(\gamma (b)\bigr), \quad \forall a, b \in \mathbb{R} \text{ and } v \in T_{\gamma (a)} \mathbb{M}. \end{aligned}$$

If γ is a minimal geodesic joining p to q, then we use \(\mathrm{P}_{q, p}\) instead of \(\mathrm{P}_{\gamma , q ,p}\).

A Riemannian manifold \(\mathbb{M}\) is complete if, for any \(p\in \mathbb{M}\), all geodesics \(\gamma (t)\) emanating from p are defined for all \(t \in \mathbb{R}\).

Suppose \(\mathbb{M}\) is complete and let \(\gamma (\cdot ) = \gamma _{v}(\cdot , p)\) be the geodesic with \(\gamma (0)=p\) and \(\gamma '(0)=v\). The exponential map \(\exp _{p}: T_{p}\mathbb{M} \rightarrow \mathbb{M}\) at p is defined by \(\exp _{p}v = \gamma _{v}(1, p) \), \(\forall v \in T_{p}\mathbb{M}\); then \(\exp _{p}tv = \gamma _{v}(t, p)\), \(\forall t\in \mathbb{R}\). We note that, for every \(p \in \mathbb{M}\), \(\exp _{p}\) is differentiable on \(T_{p}\mathbb{M}\).

A complete, simply connected Riemannian manifold of nonpositive sectional curvature is called a Hadamard manifold. In this paper, we assume that \(\mathbb{M}\) is an n-dimensional Hadamard manifold.

Proposition 2.1

([43])

Let \(p\in \mathbb{M}\). Then \(\exp _{p}: T_{p}\mathbb{M}\rightarrow \mathbb{M} \) is a diffeomorphism, and for any \(p, q \in \mathbb{M}\), there exists a unique normalized geodesic \(\gamma _{q,p}\) joining p to q.

A geodesic triangle \(\triangle (p_{1}, p_{2}, p_{3})\) of a Riemannian manifold is a set consisting of three points \(p_{1}\), \(p_{2}\), \(p_{3}\) and three minimal geodesics joining these points.

Proposition 2.2

([42])

Let \(\triangle (p_{1}, p_{2}, p_{3} )\) be a geodesic triangle on a Hadamard manifold \(\mathbb{M}\). Then

$$\begin{aligned} &d^{2}(p_{1}, p_{2})+d^{2}(p_{2}, p_{3})-2\bigl\langle \exp _{p_{2}}^{-1} p_{1}, \exp _{p_{2}}^{-1} p_{3}\bigr\rangle \leq d^{2}(p_{3}, p_{1}), \end{aligned}$$
(2)

where\(\exp _{p_{2}}^{-1}\)is the inverse of\(\exp _{p_{2}}\).

Proposition 2.3

([44])

Let \(\triangle (p_{1}, p_{2}, p_{3})\) be a geodesic triangle on \(\mathbb{M}\). Then there exist three comparison points \(p_{1}'\), \(p_{2}'\), \(p_{3}'\) in \(\mathbb{R}^{2}\) such that

$$\begin{aligned} d(p_{1}, p_{2})= \bigl\Vert p_{1}^{\prime }-p_{2}^{\prime } \bigr\Vert , \quad\quad d(p_{2}, p_{3})= \bigl\Vert p_{2}^{\prime }-p_{3}^{\prime } \bigr\Vert , \quad \quad d(p_{3}, p_{1})= \bigl\Vert p_{1}^{\prime }-p_{3}^{\prime } \bigr\Vert . \end{aligned}$$

Lemma 2.4

([7])

Let \(\triangle (p_{1}, p_{2}, p_{3})\) be a geodesic triangle on \(\mathbb{M}\) and \(\Delta (p_{1}', p_{2}', p_{3}')\) be its comparison triangle.

  1. (1)

    Let α, β, γ be the angles of \(\Delta (p_{1}, p_{2}, p_{3})\) at the vertices \(p_{1}\), \(p_{2}\), \(p_{3}\), and \(\alpha '\), \(\beta '\), \(\gamma '\) be the angles of \(\Delta (p_{1}', p_{2}', p_{3}')\) at the vertices \(p_{1}'\), \(p_{2}'\), \(p_{3}'\). Then

    $$ \alpha ' \geq \alpha ,\quad\quad \beta ' \geq \beta ,\quad\quad \gamma ' \geq \gamma . $$
  2. (2)

    Let z be a point on the geodesic joining \(p_{1}\) to \(p_{2}\), and let \(z'\in [p_{1}', p_{2}']\) be the comparison point satisfying \(d(z, p_{1})= \Vert z'-p_{1}' \Vert \) and \(d(z, p_{2})= \Vert z'-p_{2}' \Vert \). Then

    $$ d(z, p_{3}) \leq \bigl\Vert z'-p_{3}' \bigr\Vert . $$

Lemma 2.5

([45])

Let \(x_{0} \in \mathbb{M}\), \(\{x_{n}\}\subset \mathbb{M}\), and \(x_{n} \rightarrow x_{0}\). Then, for all \(y\in \mathbb{M}\),

$$ \exp _{x_{n}}^{-1} y \rightarrow \exp _{x_{0}}^{-1} y, \quad\quad \exp _{y}^{-1} x_{n} \rightarrow \exp _{y}^{-1} x_{0}. $$

Definition 2.6

([46])

A subset \(E \subset \mathbb{M}\) is said to be convex if, for any \(p, q \in E\), the geodesic connecting p and q is contained in E.

Definition 2.7

([41])

Let ω be a real-valued function on \(\mathbb{M}\). Then ω is said to be convex if, for any geodesic γ on \(\mathbb{M}\), the composition \(\omega \circ \gamma : [a,b] \rightarrow \mathbb{R}\) is convex.

Definition 2.8

([34])

Let \(\omega : \mathbb{M} \rightarrow \mathbb{R}\) be convex and \(z \in \mathbb{M} \). A vector \(u \in T_{z} \mathbb{M}\) is called a subgradient of ω at z if

$$ \omega (y) \geq \omega (z)+ \bigl\langle u, \exp _{z}^{-1} y \bigr\rangle , \quad \forall y \in \mathbb{M}. $$

The set of all subgradients of ω at z is called the subdifferential of ω at z and is denoted by \(\partial \omega (z)\); it is a closed convex set. The domain of ∂ω is \(\mathcal{D}(\partial \omega )=\{z \in \mathbb{M} : \partial \omega (z) \neq \emptyset \}\).

Proposition 2.9

([6])

Let \(\omega : \mathbb{M} \rightarrow \mathbb{R}\) be a convex function on a Hadamard manifold \(\mathbb{M}\). Then \(\mathcal{D}(\partial \omega )=\mathbb{M}\).

Definition 2.10

([45])

Let \(\mathbb{M}\) be a Hadamard manifold and \(\omega : \mathbb{M} \rightarrow \mathbb{R}\) be a proper, convex, lower semicontinuous function with \(\mathcal{D}(\omega )=\mathbb{M}\). The proximal mapping \(\operatorname{prox}_{\lambda \omega }: \mathbb{M} \rightarrow \mathbb{M}\) is defined as

$$ \operatorname{prox}_{\lambda \omega }(z):=\mathop{\operatorname{argmin}}_{y \in \mathbb{M}} \biggl\{ \omega (y)+\frac{1}{2 \lambda } d^{2}(z, y) \biggr\} , \quad \forall z \in \mathbb{M}, \lambda >0. $$
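To make the definition concrete, the following is a minimal numerical sketch in the zero-curvature model \(\mathbb{M}=\mathbb{R}^{m}\), where \(d(z,y)=\Vert z-y \Vert \); the objective ω and the parameter values are illustrative assumptions, and the subproblem is solved with a general-purpose optimizer rather than in closed form.

```python
# Proximal mapping in the zero-curvature model M = R^m, where d(z, y) = ||z - y||.
# The objective omega, the point z, and lam are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def prox(omega, z, lam):
    # prox_{lam*omega}(z) = argmin_y { omega(y) + d^2(z, y) / (2*lam) }
    obj = lambda y: omega(y) + np.dot(z - y, z - y) / (2.0 * lam)
    return minimize(obj, x0=z).x

z = np.array([1.0, -2.0, 3.0])
p = prox(lambda y: np.sum(y**2), z, lam=0.5)
# For omega(y) = ||y||^2 the closed form is z / (1 + 2*lam) = z / 2, which checks the solver.
```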

From [6, Lemma 4.2], \(\operatorname{prox}_{\lambda \omega }(\cdot )\) is single-valued with \(\mathcal{D}(\operatorname{prox}_{\lambda \omega })=\mathbb{M}\), and for each \(z \in \mathbb{M}\) there exists a unique point \(p=\operatorname{prox}_{\lambda \omega }(z)\), which is characterized by

$$ \exp _{p}^{-1} z \in \lambda \partial \omega (p). $$

Combining this and Definition 2.8, we have the following.

Lemma 2.11

Let ω be a lower semicontinuous, proper and convex function on a Hadamard manifold \(\mathbb{M}\), and let \(z, p \in \mathbb{M}\), \(\lambda >0\). If \(p=\operatorname{prox}_{\lambda \omega }(z)\), then, for all \(y \in \mathbb{M}\),

$$\begin{aligned} \bigl\langle \exp ^{-1}_{p}{y}, \exp ^{-1}_{p}{z} \bigr\rangle \leqslant \lambda \bigl(\omega (y)-\omega (p)\bigr). \end{aligned}$$

Remark 1

From Lemma 2.11, if \(z=\operatorname{prox}_{\lambda \omega }(z)\), then

$$\begin{aligned} z \in \operatorname{Argmin}\bigl\{ \omega (y) : y \in E\bigr\} := \Bigl\{ z \in E : \omega (z)=\min_{y \in E} \omega (y) \Bigr\} . \end{aligned}$$

For a closed and convex \(E \subseteq \mathbb{M}\), the projection \(P_{E} : \mathbb{M} \rightarrow E\) is defined for all \(z \in \mathbb{M}\) by \(P_{E}(z)=\operatorname{argmin}_{y\in E}d(z,y)\).

Definition 2.12

([47])

For a bifunction \(S: E \times E \rightarrow \mathbb{R}\), \(\forall (z, y) \in E \times E\):

  1. (1)

    If \(S(z, y)+S(y, z) \leq 0\), then S is called monotone.

  2. (2)

    If \(S(z, y) \geq 0 \Rightarrow S(y, z) \leq 0\), then S is called pseudomonotone.

Definition 2.13

([48])

Let \(\mathbb{M}\) be a Hadamard manifold, \(E\subset \mathbb{M}\), and \(S : E \times E \rightarrow \mathbb{R}\). Then S satisfies a Lipschitz-type condition if there exist \(k_{1}, k_{2} > 0\) such that

$$\begin{aligned} S(x, y)+S(y, z) \geq S(x, z)-k_{1}d^{2}(x,y)-k_{2}d^{2}(y,z), \quad \forall x, y,z \in E. \end{aligned}$$

Lemma 2.14

([49])

Let \(\{a_{n}\}_{n\in \mathbb{N}}\) (\(a_{n}>0\)) and \(\{b_{n}\}_{n\in \mathbb{N}} \) (\(b_{n}>0\)) be two real sequences, and suppose there exists \(N>0\) such that \(a_{n+1} \leq a_{n}-b_{n}\) for all \(n>N\). Then \(\{a_{n}\}_{n\in \mathbb{N}}\) is convergent and \(\lim_{n\rightarrow \infty } b_{n} = 0\).

In addition, corresponding to Definitions 2.12 and 2.13, we have the following definitions for the variational inequality (VI). Let V be a single-valued vector field and \(\mathcal{D}(V)\) be the domain of V.

Definition 2.15

([50])

If there exists a constant \(L > 0\) such that

$$ \bigl\Vert \mathrm{P}_{y, x} V(x)-V(y) \bigr\Vert \leq L d(x, y),\quad \forall x, y \in \mathbb{M}, $$

then V is called Lipschitz continuous.

Definition 2.16

([43])

If, for all x, y \(\in \mathcal{D}(V)\),

$$ \bigl\langle V(x), \exp _{x}^{-1} y\bigr\rangle \geq 0\quad \Rightarrow \quad \bigl\langle V(y), \exp _{y}^{-1} x\bigr\rangle \leq 0, $$

then V is called pseudomonotone.

3 Main result

In this section, inspired by the algorithms in Hieu et al. [37, 38] and Yang et al. [39, 40], we introduce an extragradient-like algorithm for solving the equilibrium problem (EP) and analyze the convergence of the sequences it generates. Finally, we apply the algorithm to the variational inequality problem (VI) as a particular case.

Unless explicitly stated otherwise, E is a nonempty closed convex subset of \(\mathbb{M}\), and the bifunction S satisfies the following conditions:

\((A1)\):

For each \(z\in E\), S is pseudomonotone on E, i.e., \(S(z, y) \geq 0 \Rightarrow S(y, z) \leq 0\);

\((A2)\):

S satisfies the Lipschitz-type condition on E, i.e., \(S(x, y)+S(y, z) \geq S(x, z)-k_{1}d^{2}(x, y)-k_{2}d^{2}(y,z)\);

\((A3)\):

\(S (x, \cdot )\) is convex and subdifferentiable on E for each fixed \(x \in E\);

\((A4)\):

\(S(\cdot , y)\) is upper semicontinuous, \(\forall y \in E\).

In order to describe the new algorithm more conveniently, we write \([a]_{+} = \max \{0, a\}\) and adopt the conventions \(\frac{0}{0}=+\infty \) and \(\frac{1}{0}=+\infty \).

Algorithm 3.1

(Extragradient-like algorithm for solving (EP))

Initialization: :

Choose \(x_{0}, \overline{x}_{0}, \overline{x}_{1}\in E\), \(\lambda _{1}>0\), \(\delta \in (0, 1) \), \(\theta \in (0, 1] \), \(\alpha \in (0, 1) \), \(\varphi \in (1-\frac{1-\theta }{2-\theta }\alpha ,1)\).

Iterative Steps: :

Suppose \(x_{n-1}\), \(\overline{x}_{n-1}\), \(\overline{x}_{n}\) are obtained.

Step 1:

Calculate

$$ \textstyle\begin{cases} x_{n}=\gamma _{{x_{n-1}},{\overline{x}_{n}}}{(\varphi )}, \\ \overline{x}_{n+1}=\operatorname{prox}_{\lambda _{n} S(\overline{x}_{n},\cdot )}(x_{n}). \end{cases} $$

If \(\overline{x}_{n+1} = x_{n} = \overline{x}_{n}\), then stop: \(\overline{x}_{n} \) is a solution. Otherwise,

Step 2:

Compute

$$ \lambda _{n+1}= \min \biggl\{ {\lambda _{n}, \frac{\alpha \delta \theta }{4\varphi \varLambda } \bigl(d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+ d^{2}(\overline{x}_{n+1}, \overline{x}_{n}) \bigr) } \biggr\} , $$

where \(\varLambda = [S(\overline{x}_{n-1},\overline{x}_{n+1}) -S( \overline{x}_{n-1},\overline{x}_{n})-S(\overline{x}_{n},\overline{x}_{n+1}) ]_{+}\). Set \(n := n + 1\) and return to Step 1.
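The following is a hedged numerical sketch of Algorithm 3.1 in the zero-curvature model \(\mathbb{M}=\mathbb{R}^{m}\), where the geodesic point of Step 1 reduces to \(x_{n}=\overline{x}_{n}+\varphi (x_{n-1}-\overline{x}_{n})\) (cf. Remark 4 below) and the prox subproblem is solved numerically; the bifunction S, the box feasible set, the tolerance-based stopping rule (that of Sect. 4), and the default parameter values are illustrative assumptions.

```python
# Algorithm 3.1 in the zero-curvature model M = R^m (a sketch, not the general
# manifold version): exp_x v = x + v, d(x, y) = ||x - y||.
import numpy as np
from scipy.optimize import minimize

def algorithm_3_1(S, x0, xbar0, xbar1, lam1, bounds,
                  alpha=0.95, delta=0.9, theta=0.75, eps=1e-10, max_iter=500):
    # any phi in (1 - (1-theta)/(2-theta)*alpha, 1); here the midpoint of that interval
    phi = 0.5 * (1.0 + (1.0 - (1.0 - theta) / (2.0 - theta) * alpha))
    x_prev, xb_prev, xb, lam = x0, xbar0, xbar1, lam1
    for _ in range(max_iter):
        x = xb + phi * (x_prev - xb)                       # Step 1: geodesic point
        obj = lambda y: S(xb, y) + np.dot(x - y, x - y) / (2.0 * lam)
        xb_next = minimize(obj, x, bounds=bounds).x        # prox_{lam*S(xbar_n, .)}(x_n)
        if np.dot(xb_next - x, xb_next - x) + np.dot(xb - x, xb - x) < eps:
            return xb_next                                 # stopping criterion of Sect. 4
        # Step 2: Lambda = [S(..) - S(..) - S(..)]_+ with the convention 0/0 = 1/0 = +inf,
        # i.e., keep lam unchanged whenever Lambda <= 0
        Lam = max(S(xb_prev, xb_next) - S(xb_prev, xb) - S(xb, xb_next), 0.0)
        num = alpha * delta * theta / (4.0 * phi) * (
            np.dot(xb - xb_prev, xb - xb_prev) + np.dot(xb_next - xb, xb_next - xb))
        lam = min(lam, num / Lam) if Lam > 0 else lam
        x_prev, xb_prev, xb = x, xb, xb_next
    return xb
```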

Remark 2

Suppose that conditions \((A1)\)–\((A4)\) hold and \(\overline{x}_{n+1} = x_{n} = \overline{x}_{n}\). Then by Lemma 2.11 we obtain

$$ S(\overline{x}_{n},y)\geq S(\overline{x}_{n}, \overline{x}_{n+1}) + \frac{1}{\lambda _{n}}\bigl\langle \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}}, \exp ^{-1}_{\overline{x}_{n+1}}{y} \bigr\rangle \geq 0,\quad \forall y\in E. $$

So \(\overline{x}_{n+1}\in \operatorname{EP}(S; E)\).

Remark 3

By Definition 2.13, if the hypothesis \((A2)\) holds, then there exist \(k_{1}>0\), \(k_{2}>0\) such that

$$ \begin{aligned} S(\overline{x}_{n-1},\overline{x}_{n+1}) -S(\overline{x}_{n-1}, \overline{x}_{n}) -S( \overline{x}_{n},\overline{x}_{n+1}) &\leq k_{1}d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+k_{2}d^{2}(\overline{x}_{n+1}, \overline{x}_{n}) \\ &\leq \max \{ k_{1}, k_{2} \} \bigl(d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+ d^{2}( \overline{x}_{n+1},\overline{x}_{n}) \bigr), \end{aligned} $$

then \(\{ \lambda _{n} \} \) is bounded from below by \(\min \{\lambda _{1}, \frac{\alpha \delta \theta }{4\varphi \max \{k_{1}, k_{2}\}}\}\). Moreover, \(\{ \lambda _{n} \} \) is monotonically nonincreasing. Thus, \(\lim_{n \rightarrow \infty } \lambda _{n}\) exists (i.e. \(\lim_{n \rightarrow \infty } \lambda _{n}=\lambda >0\)). It should be noted that, if \(S(\overline{x}_{n-1},\overline{x}_{n+1}) -S(\overline{x}_{n-1}, \overline{x}_{n}) -S(\overline{x}_{n},\overline{x}_{n+1})\leq 0\), then \(\lambda _{n+1}:=\lambda _{n}\).

Remark 4

From \(x_{n}=\gamma _{{x_{n-1}}, {\overline{x}_{n}}}{(\varphi )}\), we have \(x_{n}=\exp _{\overline{x}_{n}}{\varphi \exp ^{-1}_{\overline{x}_{n}}{x_{n-1}}}\), which implies that \(x_{n-1}\), \({\overline{x}_{n}}\), \({x_{n}}\) lie on the same geodesic. From [51], we have

$$\begin{aligned} &\exp ^{-1}_{x_{n-1}}{\overline{x}_{n}}= \frac{1}{1-\varphi }\exp ^{-1}_{x_{n-1}}{x_{n}}, \end{aligned}$$
(3)
$$\begin{aligned} &\exp ^{-1}_{\overline{x}_{n}}{x_{n}}=\exp ^{-1}_{\overline{x}_{n}} \bigl( \exp _{\overline{x}_{n}}{\varphi \exp ^{-1}_{\overline{x}_{n}}{x_{n-1}}} \bigr) =\varphi \exp ^{-1}_{\overline{x}_{n}}{x_{n-1}}, \end{aligned}$$
(4)
$$\begin{aligned} &\exp ^{-1}_{x_{n}}{\overline{x}_{n}}= \frac{-\varphi }{1-\varphi }\exp ^{-1}_{x_{n}}{x_{n-1}}. \end{aligned}$$
(5)

By the definition of \(\overline{x}_{n+1}\) and Remark 2, if Algorithm 3.1 terminates after finitely many iterations, then \(\overline{x}_{n+1}\in \operatorname{EP}(S; E)\). Otherwise, we have Lemma 3.1 and Theorem 3.2.

Lemma 3.1

Suppose \((A1)\)–\((A4)\) hold and \(\operatorname{EP}(S; E)\neq \emptyset \), and let \(\{x_{n}\}\) be the sequence generated by Algorithm 3.1. Then \(\{x_{n}\}\) is bounded.

Proof

Since \(\overline{x}_{n+1}=\operatorname{prox}_{\lambda _{n} S(\overline{x}_{n},\cdot )}(x_{n})\), by Lemma 2.11, \(\forall z\in E\), we obtain

$$\begin{aligned} &\bigl\langle \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}},\exp ^{-1}_{ \overline{x}_{n+1}}{z}\bigr\rangle \leq \lambda _{n} \bigl(S(\overline{x}_{n},z)-S( \overline{x}_{n}, \overline{x}_{n+1})\bigr), \end{aligned}$$
(6)
$$\begin{aligned} &\bigl\langle \exp ^{-1}_{\overline{x}_{n}}{x_{n-1}},\exp ^{-1}_{ \overline{x}_{n}}{z}\bigr\rangle \leq \lambda _{n-1} \bigl(S(\overline{x}_{n-1},z)-S( \overline{x}_{n-1}, \overline{x}_{n})\bigr). \end{aligned}$$
(7)

Let \(s\in \operatorname{EP}(S; E)\). Substituting \(z:=s\) into (6) and \(z:=\overline{x}_{n+1}\) into (7), we have

$$\begin{aligned} &\bigl\langle \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}},\exp ^{-1}_{ \overline{x}_{n+1}}{s} \bigr\rangle \leq \lambda _{n} \bigl(S(\overline{x}_{n},s)-S( \overline{x}_{n}, \overline{x}_{n+1})\bigr), \end{aligned}$$
(8)
$$\begin{aligned} &\bigl\langle \exp ^{-1}_{\overline{x}_{n}}{x_{n-1}}, \exp ^{-1}_{ \overline{x}_{n}}{\overline{x}_{n+1}}\bigr\rangle \leq \lambda _{n-1}\bigl(S( \overline{x}_{n-1},\overline{x}_{n+1}) -S(\overline{x}_{n-1}, \overline{x}_{n})\bigr) . \end{aligned}$$
(9)

Since S is pseudomonotone, and \(s\in \operatorname{EP}(S; E)\), we obtain \(S(s,\overline{x}_{n})\geq 0\), so \(S(\overline{x}_{n},s)\leq 0\). From (8) and \(\lambda _{n}>0\), it follows that

$$\begin{aligned} \bigl\langle \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}}, \exp ^{-1}_{ \overline{x}_{n+1}}{s}\bigr\rangle \leq -\lambda _{n}S( \overline{x}_{n}, \overline{x}_{n+1}). \end{aligned}$$
(10)

Combining (9) and (4), we obtain for \(\lambda _{n}>0\) that

$$\begin{aligned} \frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi }\bigl\langle \exp ^{-1}_{ \overline{x}_{n}}{x_{n}}, \exp ^{-1}_{\overline{x}_{n}}{ \overline{x}_{n+1}} \bigr\rangle \leq \lambda _{n}\bigl(S( \overline{x}_{n-1},\overline{x}_{n+1})-S( \overline{x}_{n-1}, \overline{x}_{n})\bigr). \end{aligned}$$
(11)

On the other hand, applying inequality (2) of Proposition 2.2 gives

$$\begin{aligned} &2 \bigl\langle \exp _{\overline{x}_{n}}^{-1} x_{n}, \exp _{ \overline{x}_{n}}^{-1} \overline{x}_{n+1} \bigr\rangle \geq d^{2} (\overline{x}_{n}, x_{n} )+d^{2} (\overline{x}_{n}, \overline{x}_{n+1} )-d^{2} (x_{n}, \overline{x}_{n+1} ), \end{aligned}$$
(12)
$$\begin{aligned} &2 \bigl\langle \exp _{\overline{x}_{n+1}}^{-1} x_{n}, \exp _{ \overline{x}_{n+1}}^{-1} s \bigr\rangle \geq d^{2} ( \overline{x}_{n+1}, x_{n} )+d^{2} ( \overline{x}_{n+1}, s )-d^{2} (x_{n}, s ). \end{aligned}$$
(13)

Multiplying both sides of inequality (12) by \(\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi }>0\) and adding the resulting inequality to inequality (13), we get

$$\begin{aligned} &2 \bigl\langle \exp _{\overline{x}_{n+1}}^{-1} x_{n}, \exp _{ \overline{x}_{n+1}}^{-1} s \bigr\rangle +2 \frac{\lambda _{n}}{\lambda _{n-1}} \frac{1}{\varphi } \bigl\langle \exp _{\overline{x}_{n}}^{-1} x_{n}, \exp _{\overline{x}_{n}}^{-1} \overline{x}_{n+1} \bigr\rangle \\ &\quad \geq d^{2} (x_{n}, \overline{x}_{n+1} )+d^{2} ( \overline{x}_{n+1}, s )-d^{2} (x_{n}, s ) \\ &\quad\quad{} +\frac{\lambda _{n}}{\lambda _{n-1}} \frac{1}{\varphi } \bigl(d^{2} (x_{n}, \overline{x}_{n} )+d^{2} ( \overline{x}_{n+1}, \overline{x}_{n} )-d^{2} (x_{n}, \overline{x}_{n+1} ) \bigr) \\ &\quad =\biggl(1-\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi }\biggr)d^{2}(x_{n}, \overline{x}_{n+1})-d^{2}(x_{n}, s) \\ &\quad\quad{} +\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi } \bigl(d^{2}(x_{n}, \overline{x}_{n})+d^{2}(\overline{x}_{n+1}, \overline{x}_{n}) \bigr)+d^{2}( \overline{x}_{n+1}, s). \end{aligned}$$
(14)

Combining (14), (10), and (11), we get, for \(\lambda _{n}>0\),

$$\begin{aligned} d^{2}(\overline{x}_{n+1}, s) \leq{}&\biggl( \frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi }-1\biggr)d^{2}(x_{n}, \overline{x}_{n+1})+d^{2}(x_{n}, s) \\ & {} +2\lambda _{n+1}\frac{\lambda _{n}}{\lambda _{n+1}} \bigl(S( \overline{x}_{n-1}, \overline{x}_{n+1})-S(\overline{x}_{n-1}, \overline{x}_{n})-S( \overline{x}_{n},\overline{x}_{n+1}) \bigr) \\ & {} -\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi } \bigl(d^{2}(x_{n}, \overline{x}_{n})+d^{2}(\overline{x}_{n+1}, \overline{x}_{n}) \bigr). \end{aligned}$$
(15)

By the definition of \(\lambda _{n+1}\) and (15), we obtain

$$\begin{aligned} d^{2}(\overline{x}_{n+1}, s) \leq{}& \biggl( \frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi }-1\biggr)d^{2}(x_{n}, \overline{x}_{n+1})+d^{2}(x_{n}, s) \\ & {} + \frac{1}{2}\delta \frac{\lambda _{n}}{\lambda _{n+1}} \frac{1}{\varphi }\alpha \theta \bigl(d^{2}(\overline{x}_{n}, \overline{x}_{n-1})+d^{2}( \overline{x}_{n}, \overline{x}_{n+1})\bigr) \\ & {} -\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi }\bigl(d^{2}(x_{n}, \overline{x}_{n})+d^{2}(\overline{x}_{n+1}, \overline{x}_{n})\bigr). \end{aligned}$$
(16)

From Remark 3, \(\lambda _{n}\rightarrow \lambda >0\), and \(0<\delta <1\). Hence, there exists \(N\geq 0\) such that, for all \(n \geq N\), \(0<\lambda _{n} \frac{\delta }{\lambda _{n+1}}<1\), \(\frac{\lambda _{n}}{\lambda _{n-1}} > \alpha \), and \(\frac{\lambda _{n}}{\lambda _{n-1}} \frac{1}{\varphi }-1 \leq \frac{\lambda _{n-1}}{\lambda _{n-1}} \frac{1}{\varphi }-1= \frac{1}{\varphi }-1\). Thus, from (16), we have

$$\begin{aligned} d^{2}(\overline{x}_{n+1}, s) \leq{}& \biggl( \frac{1}{\varphi }-1\biggr)d^{2}(x_{n}, \overline{x}_{n+1})+d^{2}(x_{n}, s)-\frac{\alpha }{\varphi }\bigl(d^{2}(x_{n}, \overline{x}_{n})+d^{2}(\overline{x}_{n+1}, \overline{x}_{n})\bigr) \\ & {} + \frac{1}{2}\delta \frac{1}{\varphi }\alpha \theta \bigl(d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+d^{2}( \overline{x}_{n}, \overline{x}_{n+1})\bigr), \quad \forall n \geq N. \end{aligned}$$
(17)

Now, we estimate the term \(d^{2}(\overline{x}_{n+1}, s)\) in (17). Fix \(n \geq 0\) and set \(p=\overline{x}_{n+1}\), \(q=x_{n}\) in the geodesic triangle \(\triangle (s, p, q)\). Then, using Proposition 2.3 in the comparison triangle \(\triangle (s', p', q')\), we have

$$ d(s, \overline{x}_{n+1})= d(s, p)= \bigl\Vert p'-s' \bigr\Vert , \quad\quad d(s, x_{n})= d(s, q)= \bigl\Vert q'-s' \bigr\Vert . $$

Recall from Algorithm 3.1 that \(x_{n+1}=\exp _{\overline{x}_{n+1}}\varphi \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}}\). The comparison point of \(x_{n+1}\) is \(x_{n+1}'=(1-\varphi )p'+\varphi q'\). Let β and \(\beta '\) denote the angles at s and \(s'\), respectively. From Lemma 2.4(1), we have \(\beta \leq \beta '\), thus \(\cos \beta ' \leq \cos \beta \). Then, from Lemma 2.4(2) we have

$$\begin{aligned} d^{2}(x_{n+1}, s) \leq{}& \bigl\Vert (1-\varphi )p'+\varphi q'-s' \bigr\Vert ^{2} \\ ={}& \bigl\Vert (1-\varphi ) \bigl(p'-s'\bigr)+ \varphi \bigl(q'-s'\bigr) \bigr\Vert ^{2} \\ ={}&(1-\varphi )^{2} \bigl\Vert p'-s' \bigr\Vert ^{2}+\varphi ^{2} \bigl\Vert q'-s' \bigr\Vert ^{2} +2\varphi (1- \varphi ) \bigl\Vert p'-s' \bigr\Vert \bigl\Vert q'-s' \bigr\Vert \cos \beta ' \\ \leq{}&(1-\varphi )^{2}d^{2}(p, s)+\varphi ^{2}d^{2}(q,s) +2\varphi (1- \varphi )d(p, s)d(q, s)\cos \beta \\ ={}&(1-\varphi )^{2}d^{2}(\overline{x}_{n+1}, s)+ \varphi ^{2}d^{2}(x_{n},s) +2\varphi (1-\varphi ) \bigl\langle \exp ^{-1}_{s}{\overline{x}_{n+1}}, \exp ^{-1}_{s}{x_{n}}\bigr\rangle . \end{aligned}$$
(18)

Using the Cauchy–Schwarz inequality, we obtain

$$\begin{aligned} 2\bigl\langle \exp ^{-1}_{s}{x_{n}},\exp ^{-1}_{s}{\overline{x}_{n+1}} \bigr\rangle &\leq 2 \bigl\Vert \exp ^{-1}_{s}{x_{n}} \bigr\Vert \bigl\Vert \exp ^{-1}_{s}{\overline{x}_{n+1}} \bigr\Vert \\ &\leq d^{2}(s,x_{n})+d^{2}(s, \overline{x}_{n+1}). \end{aligned}$$
(19)

By substituting (19) into (18), we get

$$\begin{aligned} d^{2}(\overline{x}_{n+1}, s)\geq - \frac{\varphi }{1-\varphi }d^{2}(x_{n}, s)+\frac{1}{1-\varphi }d^{2}(x_{n+1},s). \end{aligned}$$
(20)

Consequently, combining (17) and (20), we have for all \(n\geq N\)

$$\begin{aligned} &\frac{1}{1-\varphi }d^{2}(x_{n+1}, s)-\frac{\varphi }{1-\varphi }d^{2}(x_{n}, s) \\ &\quad \leq d^{2}(x_{n}, s)+\biggl(\frac{1}{\varphi }-1 \biggr)d^{2}(x_{n}, \overline{x}_{n+1}) - \frac{\alpha }{\varphi }\bigl(d^{2}(x_{n}, \overline{x}_{n})+d^{2}( \overline{x}_{n+1}, \overline{x}_{n})\bigr) \\ &\quad\quad{} +\frac{\alpha }{2\varphi }\theta \bigl(d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+d^{2}( \overline{x}_{n}, \overline{x}_{n+1})\bigr). \end{aligned}$$
(21)

Next, we need Lemma 2.14 to complete the proof. From (21) and (2), we get

$$\begin{aligned} &\frac{1}{1-\varphi }d^{2}(x_{n+1}, s) +\frac{\alpha \theta }{2\varphi }d^{2}( \overline{x}_{n+1}, \overline{x}_{n}) \\ &\quad \leq \frac{\varphi }{1-\varphi }d^{2}(x_{n}, s)+ \frac{\alpha \theta }{2\varphi }d^{2}(\overline{x}_{n+1}, \overline{x}_{n})+d^{2}(x_{n}, s)+\biggl( \frac{1}{\varphi }-1\biggr)d^{2}(x_{n}, \overline{x}_{n+1}) \\ &\quad\quad{} -\frac{\alpha }{\varphi }\bigl(d^{2}(x_{n}, \overline{x}_{n})+d^{2}( \overline{x}_{n+1}, \overline{x}_{n})\bigr) + \frac{\alpha \theta }{2\varphi }\bigl(d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+d^{2}( \overline{x}_{n}, \overline{x}_{n+1})\bigr) \\ &\quad =\frac{1}{1-\varphi }d^{2}(x_{n}, s)+ \frac{\alpha \theta }{2\varphi }d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+ \frac{(\theta -1)\alpha }{\varphi } d^{2}( \overline{x}_{n+1}, \overline{x}_{n}) \\ &\quad\quad{} +\biggl(\frac{1}{\varphi }-1\biggr)d^{2}( \overline{x}_{n+1},x_{n})- \frac{\alpha }{\varphi }d^{2}(x_{n}, \overline{x}_{n}) \\ &\quad \leq \frac{1}{1-\varphi }d^{2}(x_{n},s)+ \frac{\alpha \theta }{2\varphi }d^{2}(\overline{x}_{n}, \overline{x}_{n-1}) +\frac{(\theta -1)\alpha }{\varphi } d^{2}( \overline{x}_{n+1}, \overline{x}_{n})+\biggl( \frac{1}{\varphi }-1\biggr)d^{2}(\overline{x}_{n+1},x_{n}) \\ &\quad\quad{} -\frac{\alpha }{\varphi } \bigl(d^{2}(x_{n}, \overline{x}_{n+1})+d^{2}( \overline{x}_{n}, \overline{x}_{n+1})-2\bigl\langle \exp ^{-1}_{ \overline{x}_{n+1}}{x_{n}}, \exp ^{-1}_{\overline{x}_{n+1}}{ \overline{x}_{n}}\bigr\rangle \bigr). \end{aligned}$$
(22)

Moreover, since \(\theta \in (0, 1] \), we have \(2-\theta >0\), and it follows from the Cauchy–Schwarz inequality that

$$\begin{aligned} &2\bigl\langle \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}},\exp ^{-1}_{ \overline{x}_{n+1}}{\overline{x}_{n}}\bigr\rangle \leq 2 \bigl\Vert \exp ^{-1}_{ \overline{x}_{n+1}}{x_{n}} \bigr\Vert \bigl\Vert \exp ^{-1}_{\overline{x}_{n+1}}{ \overline{x}_{n}} \bigr\Vert \\ &\quad \leq \frac{1}{2-\theta } \bigl\Vert \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}} \bigr\Vert ^{2} +(2-\theta ) \bigl\Vert \exp ^{-1}_{\overline{x}_{n+1}}{ \overline{x}_{n}} \bigr\Vert ^{2} \\ &\quad =\frac{1}{2-\theta } d^{2}({\overline{x}_{n+1}}, {x_{n}})+(2-\theta )d^{2}({ \overline{x}_{n+1}}, { \overline{x}_{n}}). \end{aligned}$$
(23)

Inequalities (22) and (23) imply

$$\begin{aligned} \begin{aligned}[b] &\frac{1}{1-\varphi }d^{2}(x_{n+1}, s)+\frac{\alpha \theta }{2\varphi }d^{2}( \overline{x}_{n+1}, \overline{x}_{n}) \\ &\quad \leq \frac{1}{1-\varphi }d^{2}(x_{n}, s)+ \frac{\alpha \theta }{2\varphi }d^{2}(\overline{x}_{n}, \overline{x}_{n-1})+\biggl( \frac{1}{\varphi }-1-\frac{\alpha }{\varphi }+ \frac{\alpha }{\varphi (2-\theta )}\biggr)d^{2}(\overline{x}_{n+1},x_{n}). \end{aligned} \end{aligned}$$
(24)

Now we set

$$\begin{aligned} &a_{n}=\frac{1}{1-\varphi }d^{2}(x_{n}, s)+ \frac{\alpha \theta }{2\varphi }d^{2}(\overline{x}_{n}, \overline{x}_{n-1}), \\ &b_{n}=-\biggl(\frac{1}{\varphi }-1-\frac{\alpha }{\varphi }+ \frac{\alpha }{\varphi (2-\theta )}\biggr)d^{2}(\overline{x}_{n+1}, x_{n}). \end{aligned}$$

It follows from \(\varphi \in (1-\frac{1-\theta }{2-\theta }\alpha ,1)\) that \(b_{n}>0\); then, from (24), we have \(a_{n+1}\leq a_{n}-b_{n} \) for all \(n\geq N\). We conclude from Lemma 2.14 that \(\{a_{n}\}\) is bounded, \(\lim_{n\rightarrow \infty }a_{n}\) exists, \(\lim_{n\rightarrow \infty }b_{n}=0\), and hence \(\lim_{n\rightarrow \infty }d(\overline{x}_{n+1},x_{n})=0\).

Moreover, by using the triangle inequality, it follows that

$$ \begin{aligned} &d(\overline{x}_{n}, x_{n})+d(x_{n}, \overline{x}_{n-1})\geq d( \overline{x}_{n}, \overline{x}_{n-1}), \\ &d(x_{n}, x_{n-1})+d(x_{n-1}, \overline{x}_{n-1}) \geq d(x_{n}, \overline{x}_{n-1}). \end{aligned} $$

Combining this with Eqs. (4) and (5), we obtain

$$\begin{aligned} \begin{gathered} \lim_{n\rightarrow \infty }d( \overline{x}_{n}, x_{n})=\lim_{n\rightarrow \infty }d(x_{n}, x_{n-1})=0, \\ \lim_{n\rightarrow \infty }d(\overline{x}_{n}, \overline{x}_{n-1})= \lim_{n\rightarrow \infty }d(\overline{x}_{n+1}, x_{n})=0, \\ \lim_{n\rightarrow \infty }a_{n}= \frac{1}{1-\varphi }\lim_{n\rightarrow \infty } d^{2}(x_{n}, s). \end{gathered} \end{aligned}$$
(25)

Thus, we see that \(\{x_{n}\}\) and \(\{\overline{x}_{n}\}\) are bounded. □

Theorem 3.2

Assume that \((A1)\)–\((A4)\) hold and \(\operatorname{EP}(S; E)\neq \emptyset \). Then the sequence \(\{x_{n}\}\) generated by Algorithm 3.1 converges to a solution of the equilibrium problem (EP).

Proof

By Lemma 3.1, \(\{x_{n}\}\) and \(\{\overline{x}_{n}\}\) are bounded, so there exists a subsequence \(\{x_{l}\}\) of \(\{x_{n}\}\) that converges to some \(x^{*}\in E\). It follows from (25) that

$$\begin{aligned} \lim_{l\rightarrow \infty }d(\overline{x}_{l}, x_{l})=\lim_{l\rightarrow \infty }d(x_{l}, \overline{x}_{l+1})= \lim_{l\rightarrow \infty }d( \overline{x}_{l}, \overline{x}_{l-1})=0. \end{aligned}$$
(26)

It follows from inequality (6) that

$$\begin{aligned} \lambda _{l} S(\overline{x}_{l}, z)\geq \lambda _{l}S(\overline{x}_{l}, \overline{x}_{l+1})+ \bigl\langle \exp ^{-1}_{\overline{x}_{l+1}}{x_{l}}, \exp ^{-1}_{\overline{x}_{l+1}}{z}\bigr\rangle , \quad \forall z\in E. \end{aligned}$$
(27)

On the other hand, since S satisfies the Lipschitz-type condition, we have

$$\begin{aligned} \lambda _{l}S(\overline{x}_{l},\overline{x}_{l+1}) \geq{}& \lambda _{l}\bigl(S( \overline{x}_{l-1}, \overline{x}_{l+1})-S(\overline{x}_{l-1}, \overline{x}_{l}) \bigr) \\ &{} -\lambda _{l}k_{1}d^{2}( \overline{x}_{l}, \overline{x}_{l-1})- \lambda _{l}k_{2}d^{2}(\overline{x}_{l}, \overline{x}_{l+1}). \end{aligned}$$
(28)

From Eqs. (11) and (28), it follows that

$$\begin{aligned} \lambda _{l}S(\overline{x}_{l},\overline{x}_{l+1}) \geq{}& \frac{\lambda _{l}}{\lambda _{l-1}}\frac{1}{\varphi } \bigl\langle \exp ^{-1}_{ \overline{x}_{l}}{x_{l}}, \exp ^{-1}_{\overline{x}_{l}}{ \overline{x}_{l+1}} \bigr\rangle \\ & {} -\lambda _{l}k_{1}d^{2}( \overline{x}_{l}, \overline{x}_{l-1})- \lambda _{l}k_{2}d^{2}(\overline{x}_{l}, \overline{x}_{l+1}). \end{aligned}$$
(29)

Now, combining (27) and (29), we get, for all \(z \in E\),

$$\begin{aligned} S(\overline{x}_{l},z)\geq{}& \frac{1}{\lambda _{l-1}}\frac{1}{\varphi } \bigl\langle \exp ^{-1}_{\overline{x}_{l}}{x_{l}},\exp ^{-1}_{\overline{x}_{l}}{ \overline{x}_{l+1}}\bigr\rangle + \frac{1}{\lambda _{l}}\bigl\langle \exp ^{-1}_{ \overline{x}_{l+1}}{x_{l}}, \exp ^{-1}_{\overline{x}_{l+1}}{z}\bigr\rangle \\ & {} -k_{1}d^{2}(\overline{x}_{l}, \overline{x}_{l-1})-k_{2}d^{2}( \overline{x}_{l}, \overline{x}_{l+1}). \end{aligned}$$
(30)

From Lemma 2.5, \((A4)\), (26), (30), the boundedness of \(\{x_{n}\}\), and \(\lim_{n\rightarrow \infty }\lambda _{n}=\lambda >0\), we obtain

$$\begin{aligned} S\bigl(x^{*},z\bigr)\geq 0, \quad \forall z\in E. \end{aligned}$$
(31)

So we obtain \(x^{*}\in \operatorname{EP}(S; E)\).

Next, we prove that \(\{{x_{n}}\}_{n\in \mathbb{N}}\) has a unique cluster point. Suppose to the contrary that \(\{{x_{n}}\}_{n\in \mathbb{N}}\) has at least two cluster points \(\overline{x}_{1}, \overline{x}_{2}\in \operatorname{EP}(S; E)\), and let \(\{{x_{n_{i}}}\}\) and \(\{{x_{n_{j}}}\}\) be subsequences such that \(x_{n_{i}}\rightarrow \overline{x}_{1}\) as \(i\rightarrow \infty \) and \(x_{n_{j}}\rightarrow \overline{x}_{2}\) as \(j\rightarrow \infty \). By Proposition 2.2, we have

$$\begin{aligned} \lim_{n\to \infty }d^{2}(x_{n}, \overline{x}_{2}) &= \lim_{i\to \infty }d^{2}(x_{n_{i}}, \overline{x}_{2}) \\ &\geq \lim_{i\to \infty } \bigl({d^{2}(x_{n_{i}}, \overline{x}_{1})+d^{2}( \overline{x}_{1}, \overline{x}_{2})-2 \bigl\langle \exp _{ \overline{x}_{1}}^{-1} x_{n_{i}}, \exp _{\overline{x}_{1}}^{-1} \overline{x}_{2} \bigr\rangle } \bigr) \\ &= \lim_{n\to \infty }{ d^{2}(x_{n}, \overline{x}_{1})+d^{2}( \overline{x}_{1}, \overline{x}_{2})} \end{aligned}$$
(32)

and

$$\begin{aligned} \lim_{n\to \infty }d^{2}(x_{n}, \overline{x}_{1})&=\lim_{j\to \infty }d^{2}(x_{n_{j}}, \overline{x}_{1}) \\ &\geq \lim_{j\to \infty } \bigl({ d^{2}(x_{n_{j}}, \overline{x}_{2})+d^{2}( \overline{x}_{2}, \overline{x}_{1})-2 \bigl\langle \exp _{\overline{x}_{2}}^{-1} x_{n_{j}}, \exp _{\overline{x}_{2}}^{-1} \overline{x}_{1} \bigr\rangle } \bigr) \\ &=\lim_{n\to \infty }{ d^{2}(x_{n}, \overline{x}_{2})+d^{2}( \overline{x}_{2}, \overline{x}_{1})}. \end{aligned}$$
(33)

Summing (32) and (33) gives \(0 \geq 2d^{2}(\overline{x}_{1}, \overline{x}_{2})\), so \(\overline{x}_{1}=\overline{x}_{2}\). Hence \(\{{x_{n}}\}_{n\in \mathbb{N}}\) has a unique cluster point, and the whole sequence converges to \(x^{*}\in \operatorname{EP}(S; E)\). □

Remark 5

From Algorithm 3.1, we can obtain a new method for solving the pseudomonotone variational inequality (VI). If the vector field V is Lipschitz continuous and pseudomonotone, then conditions \((A1)\)–\((A4)\) hold for \(S(x, y)=\langle V(x), \exp ^{-1}_{x}{y}\rangle \) with \(k_{1}=k_{2}=\frac{L}{2}\). So we get the following algorithm for solving (VI).

Algorithm 3.2

(Extragradient-like algorithm for solving (VI))

Initialization: :

Choose \(x_{0}, \overline{x}_{0}, \overline{x}_{1}\in E\), \(\lambda _{1}>0\), \(\delta \in (0, 1) \), \(\theta \in (0, 1] \), \(\alpha \in (0, 1) \), \(\varphi \in (1-\frac{1-\theta }{2-\theta }\alpha ,1)\).

Iterative Steps: :

Suppose \(x_{n-1}\), \(\overline{x}_{n-1}\), \(\overline{x}_{n}\) are obtained.

Step 1:

Calculate

$$ \textstyle\begin{cases} x_{n}=\gamma _{{x_{n-1}},{\overline{x}_{n}}}{(\varphi )}, \\ \overline{x}_{n+1}= P_{E}(\exp _{x_{n}}{-\lambda _{n}V(\overline{x}_{n})}). \end{cases} $$

If \(\overline{x}_{n+1} = x_{n} = \overline{x}_{n}\), then stop: \(\overline{x}_{n} \) is a solution. Otherwise,

Step 2:

Compute

$$ \lambda _{n+1}= \min \biggl\{ \lambda _{n}, { \frac{\alpha \delta \theta }{4\varphi \varLambda } \bigl(d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+ d^{2}(\overline{x}_{n+1}, \overline{x}_{n}) \bigr) } \biggr\} , $$

where \(\varLambda = [\langle \mathrm{P}_{\overline{x}_{n},\overline{x}_{n-1}}V( \overline{x}_{n-1})-V(\overline{x}_{n}), \exp ^{-1}_{\overline{x}_{n}}{ \overline{x}_{n+1}}\rangle ]_{+}\). Set \(n := n + 1\) and return.
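In the same hedged spirit as the sketch of Algorithm 3.1, the following specializes Algorithm 3.2 to the zero-curvature model \(\mathbb{M}=\mathbb{R}^{m}\), where parallel transport is the identity and \(P_{E}\) reduces to the Euclidean projection onto a box; the vector field V, the bounds lo, hi, and the parameter values are illustrative assumptions.

```python
# Algorithm 3.2 in the zero-curvature model M = R^m with a box feasible set
# E = {x : lo <= x_i <= hi} (a sketch, not the general manifold version).
import numpy as np

def algorithm_3_2(V, x0, xbar0, xbar1, lam1, lo, hi,
                  alpha=0.95, delta=0.9, theta=0.75, eps=1e-10, max_iter=1000):
    phi = 0.5 * (1.0 + (1.0 - (1.0 - theta) / (2.0 - theta) * alpha))
    proj = lambda z: np.clip(z, lo, hi)                    # P_E for a box constraint
    x_prev, xb_prev, xb, lam = x0, xbar0, xbar1, lam1
    for _ in range(max_iter):
        x = xb + phi * (x_prev - xb)                       # Step 1: geodesic point
        xb_next = proj(x - lam * V(xb))                    # P_E(exp_{x_n}(-lam_n V(xbar_n)))
        if np.dot(xb_next - x, xb_next - x) + np.dot(xb - x, xb - x) < eps:
            return xb_next
        # Step 2: Lambda = [<V(xbar_{n-1}) - V(xbar_n), xbar_{n+1} - xbar_n>]_+
        Lam = max(np.dot(V(xb_prev) - V(xb), xb_next - xb), 0.0)
        num = alpha * delta * theta / (4.0 * phi) * (
            np.dot(xb - xb_prev, xb - xb_prev) + np.dot(xb_next - xb, xb_next - xb))
        lam = min(lam, num / Lam) if Lam > 0 else lam
        x_prev, xb_prev, xb = x, xb, xb_next
    return xb
```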

As for the convergence of Algorithm 3.2: if Algorithm 3.2 terminates after finitely many iterations, then \(\overline{x}_{n+1} = x_{n} = \overline{x}_{n}\), so that \(\overline{x}_{n}=P_{E}(\exp _{\overline{x}_{n}}({-\lambda V(\overline{x}_{n})}))\), and \(\overline{x}_{n}\in \operatorname{VI}(V,E)\) follows directly from [43]. Otherwise, the sequence \(\{x_{n}\}\) generated by Algorithm 3.2 converges to some \(x^{*}\in \operatorname{VI}(V,E)\) as \(n\rightarrow \infty \). The analysis is completely analogous to that of Theorem 3.2 and is omitted here.

4 Numerical experiments

In this section, we perform two experiments to show the numerical behavior of the proposed algorithms. We take \(\mathbb{M}=\mathbb{R}^{m}_{++}=\{x=(x_{1},\ldots,x_{m})^{T} : x_{i}>0, i=1,\ldots,m\}\) and carry out two experiments, Test 1 and Test 2, to verify the effectiveness of Algorithms 3.1 and 3.2, respectively.

We choose \(\alpha =0.95\), \(\delta =0.90\), \(\theta =0.5, 0.75, 0.90\), and \(\varphi \in (1-\frac{1-\theta }{2-\theta }\alpha , 1)\) is a random number; \(\overline{x}_{1}\), \(x_{0}\), \(\overline{x}_{0}\) are generated by the Matlab code 10*rand(m,1). The termination criterion is

$$\begin{aligned} \varepsilon \geq d^{2}(\overline{x}_{n+1},x_{n})+d^{2}( \overline{x}_{n}, x_{n}) . \end{aligned}$$

Example 4.1

Let \(\mathbb{R}_{++}=\{x \in \mathbb{R}: x>0\}\) and \(\mathbb{M}_{1}= (\mathbb{R}_{++},\langle \cdot , \cdot \rangle )\) be the Riemannian manifold with \(\langle x, y\rangle := x y\), \(\forall x,y\in \mathbb{R}_{++}\). It can be seen from Ref. [52] that the sectional curvature of \(\mathbb{M}_{1}\) is zero, thus \(\mathbb{M}_{1}\) is a Hadamard manifold. Suppose that \(x, y\in \mathbb{M}_{1}\) and \(v\in T_{x}\mathbb{M}_{1}\) with \(\Vert v \Vert _{2}=1\), then

$$\begin{aligned} \textstyle\begin{cases} d(x, y):= \vert \ln (\frac{x}{y}) \vert , \\ \exp _{x} tv=x e^{ (v / x ) t}, \quad t\in (0,+\infty ), \\ \exp ^{-1}_{x}y=x \ln (\frac{y}{x} ). \end{cases}\displaystyle \end{aligned}$$
(34)

Let \(\mathbb{R}_{++}^{m}\) be the product space of \(\mathbb{R}_{++}\), that is, \(\mathbb{R}_{++}^{m}=\{(x_{1}, x_{2},\ldots,x_{m})^{T}: x_{i}\in \mathbb{R}_{++}, i=1,2,\ldots,m\}\). Let \(\mathbb{M}= (\mathbb{R}_{++}^{m},\langle \cdot , \cdot \rangle )\) be the m-dimensional Hadamard manifold with metric \(\langle u, v\rangle := u^{T} v\) and distance \(d(x, y):= \Vert \ln x - \ln y \Vert = (\sum_{i=1}^{m}\ln ^{2}(x_{i} / y_{i}) )^{1/2}\), where \(x, y\in \mathbb{M}\), \(x=(x_{i})\), \(y=(y_{i})\), \(i=1,2,\ldots, m\).
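For reference, the following is a hedged sketch of these manifold operations, obtained by applying the one-dimensional formulas (34) componentwise; combining the componentwise distances in the ℓ₂ sense is an assumption consistent with the product structure.

```python
# Geometry of M = (R_{++}^m, <.,.>) used in the experiments (componentwise model).
import numpy as np

def dist(x, y):
    return np.linalg.norm(np.log(x / y))      # d(x, y) = ||ln x - ln y||

def exp_map(x, v):
    return x * np.exp(v / x)                  # exp_x v, componentwise

def log_map(x, y):
    return x * np.log(y / x)                  # exp_x^{-1} y = x ln(y/x), componentwise
```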

Test 1

In this test, we verify the effectiveness of Algorithm 3.1 in \(\mathbb{M}= (\mathbb{R}^{m}_{++},\langle \cdot , \cdot \rangle )\). We consider an extension of the Nash equilibrium model introduced in [53, 54], with the bifunction

$$\begin{aligned} S(x,y) = \langle P_{1}x+P_{2}y+p, y-x \rangle , \end{aligned}$$

and the feasible set \(E\subset \mathbb{M}\) given by

$$ E:=\bigl\{ x=(x_{1}, x_{2},\ldots,x_{m})^{T}: 1\leq x_{i}\leq 100, i=1,\ldots, m\bigr\} , $$

where \(x,y\in E\), \(p=(p_{1},p_{2},\ldots,p_{m})^{T} \in \mathbb{R}^{m}\) is chosen randomly with entries in \([1, m]\), and \(P_{1}\) and \(P_{2}\) are two square matrices of order m such that \(P_{2}\) is symmetric positive semidefinite and \(P_{2}-P_{1}\) is negative semidefinite.

From [54], we know that S is pseudomonotone. Moreover, from [31, Lemma 6.2], S satisfies \((A2)\) with the Lipschitz-type constants \(k_{1} = k_{2} = \frac{ \Vert P_{2}-P_{1} \Vert }{2}\). Assumptions \((A3)\) and \((A4)\) are automatically fulfilled, so Algorithm 3.1 can be applied in this case.
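The paper does not specify how \(P_{1}\), \(P_{2}\), and p are generated, so the following recipe is one hedged way to produce random data with the required structure:

```python
# Random test data for Test 1: P2 symmetric positive semidefinite and
# P2 - P1 negative semidefinite; this construction is an assumption.
import numpy as np

def make_test_data(m, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, m))
    P2 = A @ A.T                              # symmetric positive semidefinite
    B = rng.standard_normal((m, m))
    P1 = P2 + B @ B.T                         # P2 - P1 = -B B^T, negative semidefinite
    p = rng.uniform(1.0, m, size=m)           # entries of p in [1, m]
    return P1, P2, p
```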

For the numerical experiment, we take \(\lambda _{1}=\frac{1}{ \Vert P_{2}-P_{1} \Vert }\) and \(m = 20, 300, 500\). For each m, we generated two random samples with different choices of \(P_{1}\), \(P_{2}\), and p. The number of iterations (Iter.) and the computing time (Time), measured in seconds, are reported in Table 1.

Table 1 Performance of Algorithm 3.1: number of iterations (Iter.) and computing time (Time) in seconds for \(m=20, 300, 500\)

Test 2

We consider the performance of Algorithm 3.2 in \(\mathbb{M}= (\mathbb{R}^{m}_{++},\langle \cdot , \cdot \rangle )\). Let the feasible set \(E:=\{x=(x_{1}, x_{2},\ldots,x_{m})^{T}: 1\leq x_{i}\leq 10, i=1,\ldots, m\}\) be a closed convex subset of \(\mathbb{R}^{m}_{++}\) and \(V : E \rightarrow T\mathbb{M}\) be the single-valued vector field defined componentwise by

$$\begin{aligned} V(x):=(x_{1}\ln {x_{1}}, x_{2}\ln {x_{2}}, \ldots, x_{m}\ln {x_{m}})^{T}, \quad \forall x\in E. \end{aligned}$$

According to [55, Example 1], V is monotone and Lipschitz continuous. Therefore, conditions \((A1)\) and \((A2)\) hold, and \((A3)\) and \((A4)\) are automatically satisfied, so Algorithm 3.2 can be applied in this case.

For the numerical experiment, we take \(\lambda _{1}=0.4\) and \(m=200,300,500\), and we generate three random samples with different choices of the initial points. The number of iterations (Iter.) and the computing time (Time), measured in seconds, are reported in Table 2.

Table 2 Performance of Algorithm 3.2: number of iterations (Iter.) and computing time (Time) in seconds for \(m=200, 300, 500\)

5 Conclusions

In this paper, a new algorithm for solving equilibrium problems on Hadamard manifolds is presented, in which the bifunction is pseudomonotone and satisfies a Lipschitz-type condition. Compared with the existing algorithm in [36], the advantage of this algorithm is that the Lipschitz-type constants may be unknown.

References

  1. Hadjisavvas, N., Schaible, S.: Quasimonotone variational inequalities in Banach spaces. J. Optim. Theory Appl. 90(1), 95–111 (1996)
  2. Bianchi, M., Schaible, S.: Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 90(1), 31–43 (1996)
  3. Bianchi, M., Schaible, S.: Equilibrium problems under generalized convexity and generalized monotonicity. J. Glob. Optim. 30(2–3), 121–134 (2004)
  4. Blum, E., Oettli, W.: Variational principles for equilibrium problems. In: Parametric Optimization and Related Topics, III, Güstrow, 1991. Approx. Optim., vol. 3, pp. 79–88. Peter Lang, Frankfurt am Main (1993)
  5. Ferreira, O.P., Pérez, L.R.L., Németh, S.Z.: Singularities of monotone vector fields and an extragradient-type algorithm. J. Glob. Optim. 31(1), 133–151 (2005)
  6. Ferreira, O.P., Oliveira, P.R.: Proximal point algorithm on Riemannian manifolds. Optimization 51(2), 257–270 (2002)
  7. Li, C., López, G., Martín-Márquez, V.: Iterative algorithms for nonexpansive mappings on Hadamard manifolds. Taiwan. J. Math. 14(2), 541–559 (2010)
  8. Tang, G.J., Zhou, L.W., Huang, N.J.: The proximal point algorithm for pseudomonotone variational inequalities on Hadamard manifolds. Optim. Lett. 7(4), 779–790 (2013)
  9. Kristály, A.: Location of Nash equilibria: a Riemannian geometrical approach. Proc. Am. Math. Soc. 138(5), 1803–1810 (2010)
  10. Li, S.L., Li, C., Liou, Y.C., Yao, J.C.: Existence of solutions for variational inequalities on Riemannian manifolds. Nonlinear Anal. 71(11), 5695–5706 (2009)
  11. Li, C., López, G., Martín-Márquez, V., Wang, J.H.: Resolvents of set-valued monotone vector fields in Hadamard manifolds. Set-Valued Var. Anal. 19(3), 361–383 (2011)
  12. Ceng, L.C., Li, X., Qin, X.: Parallel proximal point methods for systems of vector optimization problems on Hadamard manifolds without convexity. Optimization 69(2), 357–383 (2020)
  13. Zhou, L.W., Huang, N.J.: Existence of solutions for vector optimization on Hadamard manifolds. J. Optim. Theory Appl. 157(1), 44–53 (2013)
  14. Colao, V., López, G., Marino, G., Martín-Márquez, V.: Equilibrium problems in Hadamard manifolds. J. Math. Anal. Appl. 388(1), 61–77 (2012)
  15. Fan, K.: A generalization of Tychonoff’s fixed point theorem. Math. Ann. 142, 305–310 (1960/61)
  16. Yang, Z., Pu, Y.J.: Existence and stability of solutions for maximal element theorem on Hadamard manifolds with applications. Nonlinear Anal. 75(2), 516–525 (2012)
  17. Tang, G.J., Zhou, L.W., Huang, N.J.: Existence results for a class of hemivariational inequality problems on Hadamard manifolds. Optimization 65(7), 1451–1461 (2016)
  18. Chen, S.L., Huang, N.J.: Vector variational inequalities and vector optimization problems on Hadamard manifolds. Optim. Lett. 10(4), 753–767 (2016)
  19. Batista, E.E.A., Bento, G.C., Ferreira, O.P.: An existence result for the generalized vector equilibrium problem on Hadamard manifolds. J. Optim. Theory Appl. 167(2), 550–557 (2015)
  20. Zhou, L.W., Huang, N.J.: A revision on geodesic pseudo-convex combination and Knaster–Kuratowski–Mazurkiewicz theorem on Hadamard manifolds. J. Optim. Theory Appl. 182(3), 1186–1198 (2019)
  21. Zhou, L.W., Xiao, Y.B., Huang, N.J.: New characterization of geodesic convexity on Hadamard manifolds with applications. J. Optim. Theory Appl. 172(3), 824–844 (2017)
  22. Zhou, L.W., Huang, N.J.: Existence of solutions for vector optimization on Hadamard manifolds. J. Optim. Theory Appl. 157(1), 44–53 (2013)
  23. Li, S.L., Li, C., Liou, Y.C., Yao, J.C.: Existence of solutions for variational inequalities on Riemannian manifolds. Nonlinear Anal. 71(11), 5695–5706 (2009)
  24. Li, C., Yao, J.C.: Variational inequalities for set-valued vector fields on Riemannian manifolds: convexity of the solution set and the proximal point algorithm. SIAM J. Control Optim. 50(4), 2486–2514 (2012)
  25. Wang, X., López, G., Li, C., Yao, J.C.: Equilibrium problems on Riemannian manifolds with applications. J. Math. Anal. Appl. 473(2), 866–891 (2019)
  26. Korpelevič, G.M.: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12(4), 747–756 (1976)
  27. Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148(2), 318–335 (2011)
  28. Thong, D.V., Hieu, D.V.: Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Algorithms 80(4), 1283–1307 (2019)
  29. Ceng, L.C., Qin, X., Shehu, Y., Yao, J.C.: Mildly inertial subgradient extragradient method for variational inequalities involving an asymptotically nonexpansive and finitely many nonexpansive mappings. Mathematics 7, 881 (2019)
  30. Yao, Y.H., Postolache, M., Yao, J.C.: Convergence of an extragradient algorithm for fixed point and variational inequality problems. J. Nonlinear Convex Anal. 20(12), 2623–2631 (2019)
  31. Tran, D.Q., Dung, M.L., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57(6), 749–776 (2008)
  32. Nguyen, T.T.V., Strodiot, J.J., Nguyen, V.H.: Hybrid methods for solving simultaneously an equilibrium problem and countably many fixed point problems in a Hilbert space. J. Optim. Theory Appl. 160(3), 809–831 (2014)
  33. Yao, Y., Shahzad, N., Yao, J.C.: Projected subgradient algorithms for pseudomonotone equilibrium problems and fixed points of pseudocontractive operators. Mathematics 8, 461 (2020)
  34. Salahuddin: The existence of solution for equilibrium problems in Hadamard manifolds. Trans. A. Razmadze Math. Inst. 171(3), 381–388 (2017)
  35. Li, X.B., Zhou, L.W., Huang, N.J.: Gap functions and descent methods for equilibrium problems on Hadamard manifolds. J. Nonlinear Convex Anal. 17(4), 807–826 (2016)
  36. Cruz Neto, J.X., Santos, P.S.M., Soares, P.A. Jr.: An extragradient method for equilibrium problems on Hadamard manifolds. Optim. Lett. 10(6), 1327–1336 (2016)
  37. Hieu, D., Strodiot, J.J., Muu, L.: Modified golden ratio algorithms for solving equilibrium problems (2019). https://arxiv.org/abs/1907.04013
  38. Hieu, D.V.: New inertial algorithm for a class of equilibrium problems. Numer. Algorithms 80(4), 1413–1436 (2019)
  39. Yang, J., Liu, H., Liu, Z.: Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 67(12), 2247–2258 (2018)
  40. Yang, J., Liu, H.: A self-adaptive method for pseudomonotone equilibrium problems and variational inequalities. Comput. Optim. Appl. 75(2), 423–440 (2020)
  41. do Carmo, M.P.: Riemannian Geometry. Mathematics: Theory & Applications. Birkhäuser Boston, Boston (1992)
  42. Sakai, T.: Riemannian Geometry. Translations of Mathematical Monographs, vol. 149. Am. Math. Soc., Providence (1996)
  43. Tang, G.J., Huang, N.J.: Korpelevich’s method for variational inequality problems on Hadamard manifolds. J. Glob. Optim. 54(3), 493–509 (2012)
  44. Bridson, M.R., Haefliger, A.: Metric Spaces of Non-positive Curvature. Grundlehren der Mathematischen Wissenschaften, vol. 319. Springer, Berlin (1999)
  45. Li, X.B., Zhou, L.W., Huang, N.J.: Gap functions and global error bounds for generalized mixed variational inequalities on Hadamard manifolds. J. Optim. Theory Appl. 168(3), 830–849 (2016)
  46. Udrişte, C.: Convex Functions and Optimization Methods on Riemannian Manifolds. Mathematics and Its Applications, vol. 297. Kluwer Academic, Dordrecht (1994)
  47. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63(1–4), 123–145 (1994)
  48. Mastroeni, G.: On auxiliary principle for equilibrium problems. In: Equilibrium Problems and Variational Models, Erice, 2000. Nonconvex Optim. Appl., vol. 68, pp. 289–298. Kluwer Academic, Norwell (2003)
  49. Malitsky, Y.: Proximal extrapolated gradient methods for variational inequalities. Optim. Methods Softw. 33(1), 140–164 (2018)
  50. Wang, J.H., López, G., Martín-Márquez, V., Li, C.: Monotone and accretive vector fields on Riemannian manifolds. J. Optim. Theory Appl. 146(3), 691–708 (2010)
  51. Ansari, Q.H., Islam, M., Yao, J.-C.: Nonsmooth variational inequalities on Hadamard manifolds. Appl. Anal. 99(2), 340–358 (2020)
  52. Ansari, Q.H., Babu, F., Yao, J.C.: Regularization of proximal point algorithms in Hadamard manifolds. J. Fixed Point Theory Appl. 21(1), 25 (2019)
  53. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. II. Springer Series in Operations Research. Springer, Berlin (2003)
  54. Khammahawong, K., Kumam, P., Chaipunya, P., Yao, J.C., Wen, C.F., Jirakitpuwapat, W.: An extragradient algorithm for strongly pseudomonotone equilibrium problems on Hadamard manifolds. Thai J. Math. 18(1), 350–371 (2020)
  55. Ansari, Q.H., Babu, F.: Proximal point algorithm for inclusion problems in Hadamard manifolds with applications. Optim. Lett. (2019). https://doi.org/10.1007/s11590-019-01483-0

Availability of data and materials

All data analyzed during this research are included in this paper.

Funding

The project was supported by the National Natural Science Foundation of China under Grant 61877046, 11602184.

Contributions

The authors contributed equally to this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Junfeng Chen.

Ethics declarations

Competing interests

The authors have no competing interests in this paper.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Chen, J., Liu, S. Extragradient-like method for pseudomonotone equilibrium problems on Hadamard manifolds. J Inequal Appl 2020, 205 (2020). https://doi.org/10.1186/s13660-020-02473-y
