Viscosity method for hierarchical variational inequalities and variational inclusions on Hadamard manifolds

Abstract

This article aims to introduce and analyze a viscosity method for hierarchical variational inequalities involving a ϕ-contraction mapping defined over the common solution set of a variational inclusion and the fixed points of a nonexpansive mapping on Hadamard manifolds. Several consequences of the proposed method and its convergence theorem are presented. The convergence results of this article generalize and extend some existing results from Hilbert/Banach spaces and from Hadamard manifolds. We also present an application to a nonsmooth optimization problem. Finally, we illustrate the convergence analysis of the proposed method by some computational numerical experiments on a Hadamard manifold.

1 Introduction

The variational inclusion in Hilbert space \(\mathbb{H}\) can be stated as

$$ \text{Find } x\in K \text{ such that }x\in (M+F)^{-1}(0), $$
(1)

where K is a nonempty, closed and convex subset of \(\mathbb{H}\), \(M : K \to \mathbb{H}\) is an operator, \(F : \mathbb{H} \rightrightarrows \mathbb{H}\) is a set-valued operator and \((M+F)^{-1}(0)\) is the set of zeros of \(M+F\). If \(M=0\), then the inclusion problem (1) reduces to

$$ \text{Find } x\in K \text{ such that } x\in F^{-1}(0). $$
(2)

For a set-valued maximal monotone operator \(F : \mathbb{H} \rightrightarrows \mathbb{H}\) in Hilbert spaces, problem (2) was studied by Rockafellar [20]. The iconic method to solve the inclusion problem (2) is the proximal point method, which was first suggested by Martinet [15] and later generalized by Rockafellar [20]. Many mathematical problems arising in nonlinear analysis, such as optimization, variational inequality problems, economics and partial differential equations, reduce to the inclusion problem (2). Therefore, in the recent past, many authors have extended and generalized the inclusion problem (2) in different directions; see, for example, [1, 3, 4, 8, 9, 11–14, 22, 24, 26] and the references therein.

The fixed point problem of a nonexpansive self mapping \(T:K\to K\) can be stated as

$$\begin{aligned} \text{Find } x\in K \text{ such that } x\in \operatorname{Fix}(T). \end{aligned}$$
(3)

The common solution of the fixed point problem (3) for a nonexpansive self mapping T and the variational inclusion problem (1) was discussed by Takahashi et al. [24] in Hilbert spaces; the corresponding problem is

$$ \text{Find } x\in K \text{ such that } x\in \operatorname{Fix}(T)\cap (M+F)^{-1}(0). $$
(4)

Later, Manaka and Takahashi [14] studied problem (4) with a nonspreading mapping T in Hilbert spaces. Very recently, Al-Homidan et al. [1] extended the work of [14, 24] to the setting of Hadamard manifolds. Moudafi [16] introduced the viscosity method to study the hierarchical variational inequality problem, which involves a contraction mapping f over the nonempty closed convex set Fix(T) in Hilbert spaces, that is,

$$ \text{Find } {x^{\star }} \in \operatorname{Fix}(T) \text{ such that } \bigl\langle x^{\star } - f\bigl(x^{\star } \bigr), x^{\star } - x \bigr\rangle \leq 0,\quad \forall x\in \text{Fix}(T). $$
(5)

If the set \(\operatorname{Fix} (T)\) is a nonempty closed and convex subset of \(\mathbb{H}\), then problem (5) reduces to the following equivalent form:

$$ \text{Find } x^{\star } \in \operatorname{Fix}(T) \text{ such that } x^{\star } = P_{\operatorname{Fix}(T)}f\bigl(x^{\star } \bigr), $$
(6)

where \(P_{\operatorname{Fix}(T)}\) denotes the projection onto \(\operatorname{Fix}(T)\).

Xu [27] extended the hierarchical variational inequality problem (6) to uniformly smooth Banach spaces. The advantage of this method is that it allows us to replace the fixed point set by the solution sets of some nonlinear problems which satisfy various variational inequalities. Very recently, Al-Homidan et al. [2] used this idea to extend the viscosity method for the hierarchical variational inequality problem involving a weak contraction mapping and discussed several of its special cases on Hadamard manifolds. During the last ten years, many problems in nonlinear analysis, such as fixed point problems, variational inequality problems, equilibrium problems and optimization problems, have been extended from linear spaces, namely Banach and Hilbert spaces, to nonlinear spaces because of their applications in many areas of science; see [1–3, 5, 8–13, 18, 25] and the references therein.

Inspired by the work discussed in [1, 2, 14, 16], our motive is to present the viscosity method for the following hierarchical variational inequality problem (HVIP) involving ϕ-contraction mapping in the framework of Hadamard manifold \(\mathbb{M}\): Find \({x^{\star }} \in \operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})\) such that

$$ \Re \bigl( \exp _{x^{\star }}^{-1}f \bigl(x^{\star }\bigr), \exp _{x^{\star }}^{-1}x \bigr) \leq 0,\quad \forall x\in \operatorname{Fix}(T)\cap (M+F)^{-1}({ \mathbf{0}}), $$
(7)

where 0 is the zero tangent vector, K is a nonempty, closed and convex subset of a Hadamard manifold \(\mathbb{M}\), \(f: K \to K\) is a ϕ-contraction mapping and \(T : K \to K\) is a nonexpansive mapping with \(\operatorname{Fix} (T)\neq \emptyset \), \(M :K \to T\mathbb{M}\) is a single-valued vector field and \(F :K \rightrightarrows T\mathbb{M}\) is a set-valued vector field such that \((M+F)^{-1}({\mathbf{0}}) \neq \emptyset \), \(\Re (\cdot , \cdot )\) is the Riemannian metric and exp−1 is the inverse exponential mapping. Equivalently, problem (7) can be written as: Find \({x^{\star }} \in \operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})\)

$$ \text{such that } x^{\star }= P_{\operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})}f\bigl(x^{ \star }\bigr) . $$

The rest of the paper is organized as follows: The next section consists of some preliminaries and auxiliary results of Riemannian manifolds. In Sect. 3, we propose a viscosity method to solve considered HVIP and establish a convergence result of the considered method. Some special cases and an application to nonsmooth optimization problem are also discussed in the subsequent sections that extend and improve some existing results in linear spaces and in Hadamard manifolds. In the last section, we analyze the convergence of the proposed viscosity method by some computational numerical experiments.

2 Preliminaries

Let \(\mathbb{M}\) be a finite dimensional differentiable manifold. For any element \(q\in \mathbb{M}\), we denote the tangent space of \(\mathbb{M}\) at q by \(T_{q}\mathbb{M}\) and the tangent bundle by \(T\mathbb{M}=\bigcup_{q\in \mathbb{M}}T_{q}\mathbb{M}\). The tangent space \(T_{q}\mathbb{M}\) at q is a vector space of the same dimension as \(\mathbb{M}\). An inner product \(\Re _{q}(\cdot ,\cdot )\) on \(T_{q}\mathbb{M}\) is a Riemannian metric on \(T_{q}\mathbb{M}\). A tensor \(\Re (\cdot ,\cdot ) : q \longmapsto \Re _{q}(\cdot ,\cdot )\) is called a Riemannian metric on \(\mathbb{M}\) if, for each \(q\in \mathbb{M}\), \(\Re _{q}(\cdot ,\cdot )\) is a Riemannian metric on \(T_{q}\mathbb{M}\). We always assume that \(\mathbb{M}\) is endowed with a Riemannian metric \(\Re _{q}(\cdot ,\cdot )\) and the corresponding norm \(\|\cdot\|_{q}\), so that \(\mathbb{M}\) is a Riemannian manifold. The angle between \({\mathbf{0}}\neq x\), \(y\in T_{q}\mathbb{M}\), denoted by \(\angle _{q}(x,y)\), is defined by \(\cos \angle _{q}(x,y)=\frac{\Re _{q}(x,y)}{\|x\|\|y\|}\). For simplicity, we write \(\|\cdot\|_{q}=\|\cdot\|\), \(\Re _{q}(\cdot ,\cdot )=\Re (\cdot ,\cdot )\) and \(\angle _{q}(x,y)=\angle (x,y)\), where 0 is the zero tangent vector.

For a piecewise smooth curve \(\gamma :[a, b]\to \mathbb{M}\) joining q to r (i.e. \(\gamma (a)=q\) and \(\gamma (b)=r\)), the length \(\mathcal{L}\) of γ is defined as

$$ \mathcal{L}(\gamma )= \int _{a}^{b} \bigl\Vert \gamma ^{\prime }(s) \bigr\Vert \,ds, \quad \text{where } \gamma'(s) \in T_{\gamma(s)}\mathbb{M}\text{, for all } s \in [a, b]. $$

The Riemannian distance \(d(q, r)\), obtained by minimizing the length over the set of all such curves joining q to r, induces the original topology on \(\mathbb{M}\).

Let ∇ be the Levi–Civita connection associated with the Riemannian manifold \(\mathbb{M}\). A smooth mapping \(U : \mathbb{M} \to T\mathbb{M}\) is said to be a single-valued vector field if, for each \(q \in \mathbb{M}\), a tangent vector \(U(q) \in T_{q}\mathbb{M}\) is assigned. A vector field U is said to be parallel along a smooth curve γ if \(\nabla _{\gamma ^{\prime }(s)}U={\mathbf{0}}\). If \(\gamma ^{\prime }\) is parallel along γ, i.e., \(\nabla _{\gamma ^{\prime }(s)}\gamma ^{\prime }(s)={\mathbf{0}}\), then γ is called a geodesic; in this case \(\|\gamma ^{\prime }\|\) is constant, and if \(\|\gamma ^{\prime }\|=1\), then γ is said to be a normalized geodesic. A geodesic joining q to r in \(\mathbb{M}\) is called a minimal geodesic if its length is equal to \(d(q, r)\).

A Riemannian manifold is said to be (geodesically) complete, if for any \(q\in \mathbb{M}\), all geodesics emanating from q, are defined for all \(s\in (-\infty , \infty )\). We know by the Hopf–Rinow theorem [23] that, in a Riemannian manifold \(\mathbb{M}\), the following are equivalent:

  1. (1)

    \(\mathbb{M}\) is complete,

  2. (2)

    any pair of points in \(\mathbb{M}\) can be joined by a minimal geodesic,

  3. (3)

    \((\mathbb{M}, d)\) is a complete metric space,

  4. (4)

    bounded closed subsets are compact.

Let \(\gamma :[0,1]\to \mathbb{M}\) be a geodesic joining q to r. Then

$$ d\bigl(\gamma (s_{1}), \gamma (s_{2})\bigr)= \vert s_{1}-s_{2} \vert d(q, r),\quad \forall s_{1}, s_{2}\in [0,1]. $$
(8)

Assuming \(\mathbb{M}\) is a complete Riemannian manifold, the exponential mapping \(\exp_{q}:T_{q}\mathbb{M}\to \mathbb{M}\) at q is defined by \(\exp_{q}(\vartheta )=\gamma _{\vartheta }(1, q)\) for each \(\vartheta \in T_{q}\mathbb{M}\), where \(\gamma (\cdot )= \gamma _{\vartheta }(\cdot , q)\) is the geodesic starting at q with velocity ϑ (i.e., \(\gamma (0)=q\) and \(\gamma ^{\prime }(0)=\vartheta \)). We know that \(\exp_{q}(s\vartheta )=\gamma _{\vartheta }(s, q)\) for each real number s. One can easily see that \(\exp_{q}{\mathbf{0}}=\gamma _{\vartheta }(0, q)=q\). The exponential mapping \(\exp_{q}\) is differentiable on \(T_{q}\mathbb{M}\) for any \(q\in \mathbb{M}\).

A complete, simply connected Riemannian manifold of non-positive sectional curvature is called Hadamard manifold. From now on, we will suppose that \(\mathbb{M}\) is a finite dimensional Hadamard manifold.

Proposition 1

([23])

Let \(\mathbb{M}\) be a Hadamard manifold. Then \(\exp_{q}:T_{q} \mathbb{M}\to \mathbb{M}\) is diffeomorphism for all \(q\in \mathbb{M}\) and for any two points \(q, r\in \mathbb{M}\), there exists a unique normalized geodesic \(\gamma :[0,1]\to \mathbb{M}\) joining \(q=\gamma (0)\) to \(r=\gamma (1)\) which is in fact a minimal geodesic denoted by

$$ \gamma (s)=\exp_{q} s \exp_{q}^{-1} r,\quad \forall s\in [0,1]. $$
(9)

A subset \(K\subset \mathbb{M}\) is said to be convex if, for any two points \(q, r\in K\), any geodesic joining q to r is contained in K, that is, if \(\gamma :[a, b]\to \mathbb{M} \) is a geodesic such that \(q=\gamma (a)\) and \(r=\gamma (b)\), then \(\gamma ((1-s)a+sb)\in K\) for all \(s\in [0, 1]\). From now on, \(K\subset \mathbb{M}\) will denote a nonempty, closed and convex subset of a Hadamard manifold \(\mathbb{M}\). The projection mapping onto K is defined by

$$ P_{K}(q)=\bigl\{ r\in K: d(q, r)\leq d(q, p), \forall p\in K\bigr\} ,\quad \forall q \in \mathbb{M}. $$
(10)

A function \(g : K \to \mathbb{R}\) is said to be convex if for any geodesic \(\gamma : [a, b]\to \mathbb{M}\), the composition function \(g\circ \gamma : [a, b]\to \mathbb{R}\) is convex, that is,

$$ (g\circ \gamma ) \bigl(as + (1-s)b\bigr) \leq s (g\circ \gamma ) (a) + (1-s) (g \circ \gamma ) (b),\quad \forall s\in [0, 1] \mbox{ and } \forall a, b \in \mathbb{R}. $$

Proposition 2

([23])

The Riemannian distance \(d: \mathbb{M} \times \mathbb{M}\to \mathbb{R}\) is a convex function with respect to the product Riemannian metric, i.e., given any pair of geodesics \(\gamma _{1} : [0,1] \rightarrow \mathbb{M}\) and \(\gamma _{2}: [0,1] \rightarrow \mathbb{M}\), the following inequality holds for all \(s \in [0,1]\):

$$ d\bigl(\gamma _{1}(s), \gamma _{2}(s) \bigr) \leq (1-s)d\bigl(\gamma _{1}(0), \gamma _{2}(0) \bigr) + sd\bigl(\gamma _{1}(1), \gamma _{2}(1)\bigr). $$
(11)

In particular, for each \(q \in \mathbb{M}\), the function \(d(\cdot , q): \mathbb{M} \to \mathbb{R}\) is a convex function.

If \(\mathbb{M}\) is a finite dimensional manifold of dimension n, then Proposition 1 shows that \(\mathbb{M}\) is diffeomorphic to the Euclidean space \(\mathbb{R}^{n}\). Thus, \(\mathbb{M}\) has the same topology and differential structure as \(\mathbb{R}^{n}\). Moreover, Hadamard manifolds and Euclidean spaces share some geometrical properties. We describe some of them in the following results.

Recall that a geodesic triangle \(\Delta (q_{1}, q_{2}, q_{3})\) of Riemannian manifold is a set consisting of three points \(q_{1}\), \(q_{2}\) and \(q_{3}\) and the three minimal geodesics \(\gamma _{j}\) joining \(q_{j}\) to \(q_{j+1}\), where \(j=1, 2,3 \bmod (3)\).

Lemma 1

([13])

Let \(\Delta (q_{1},q_{2},q_{3})\) be a geodesic triangle in Hadamard manifold \(\mathbb{M}\). Then there exist \(q_{1}^{\prime }, q_{2}^{\prime }, q_{3}^{\prime }\in \mathbb{R}^{2}\) such that

$$ d(q_{1}, q_{2})= \bigl\Vert q_{1}^{\prime }-q_{2}^{\prime } \bigr\Vert , \qquad d(q_{2}, q_{3})= \bigl\Vert q_{2}^{\prime }-q_{3}^{\prime } \bigr\Vert ,\quad \textit{and}\quad d(q_{3}, q_{1})= \bigl\Vert q_{3}^{\prime }-q_{1}^{\prime } \bigr\Vert . $$

The points \(q_{1}^{\prime }\), \(q_{2}^{\prime }\), \(q_{3}^{\prime }\) are called the comparison points of \(q_{1}\), \(q_{2}\), \(q_{3}\), respectively. The triangle \(\Delta (q_{1}^{\prime },q_{2}^{\prime },q_{3}^{\prime })\) is called the comparison triangle of the geodesic triangle \(\Delta (q_{1},q_{2},q_{3})\); it is unique up to an isometry of \(\mathbb{R}^{2}\).

Lemma 2

([13])

Let \(\Delta (q_{1},q_{2},q_{3})\) be a geodesic triangle in Hadamard manifold \(\mathbb{M}\) and \(\Delta (q_{1}^{\prime },q_{2}^{\prime },q_{3}^{\prime })\in \mathbb{R}^{2}\) be its comparison triangle.

  1. (i)

    Let θ, φ, ψ (respectively, \(\theta ^{\prime }\), \(\varphi ^{\prime }\), \(\psi ^{\prime }\)) be the angles of \(\Delta (q_{1},q_{2},q_{3})\) (respectively, \(\Delta (q_{1}^{\prime },q_{2}^{\prime },q_{3}^{\prime })\)) at the vertices \((q_{1},q_{2},q_{3})\) (respectively, \(q_{1}^{\prime }\), \(q_{2}^{\prime }\), \(q_{3}^{\prime }\)). Then the following inequality holds:

    $$ \theta ^{\prime } \geq \theta ,\qquad \varphi ^{\prime }\geq \varphi ,\qquad \psi ^{\prime }\geq \psi . $$
  2. (ii)

    Let p be a point on the geodesic joining \(q_{1}\) to \(q_{2}\) and \(p^{\prime }\) be its comparison point in the interval \([q_{1}^{\prime }, q_{2}^{\prime }]\). Suppose that \(d(p,q_{1})=\|p^{\prime }-q_{1}^{\prime }\|\) and \(d(p,q_{2})=\|p^{\prime }-q_{2}^{\prime }\|\). Then

    $$ d(p,q_{3})\leq \bigl\Vert p^{\prime }-q_{3}^{\prime } \bigr\Vert . $$

Proposition 3

(Comparison theorem for triangle, [23])

Let \(\Delta (q_{1},q_{2},q_{3})\) be a geodesic triangle. For each \(j=1,2,3 \bmod (3)\), denote by \({\gamma }_{j}:[0, l_{j}]\to \mathbb{M}\) the geodesic joining \(q_{j}\) to \(q_{j+1}\), and set \(l_{j}=\mathcal{L}(\gamma _{j})\), \(\alpha _{j}=\angle (\gamma _{j}^{\prime }(0), - \gamma _{j-1}^{\prime }(l_{j-1}))\). Then

$$\begin{aligned}& \alpha _{1}+\alpha _{2}+\alpha _{3}\leq \pi , \end{aligned}$$
(12)
$$\begin{aligned}& l_{j}^{2}+l_{j+1}^{2}-2 l_{j}l_{j+1} \cos {\alpha }_{j+1}\leq l_{j-1}^{2}. \end{aligned}$$
(13)

In terms of distance and exponential mapping, the above inequality can be rewritten as

$$ d^{2}(q_{j}, q_{j+1})+d^{2}(q_{j+1}, q_{j+2})-2 \Re \bigl( \exp _{q_{j+1}}^{-1} q_{j}, \exp _{q_{j+1}}^{-1} q_{j+2}\bigr)\leq d^{2}(q_{j-1}, q_{j}) $$
(14)

since

$$ \Re \bigl( \exp _{q_{j+1}}^{-1} q_{j}, \exp _{q_{j+1}}^{-1} q_{j+2} \bigr)=d(q_{j}, q_{j+1})d(q_{j+1}, q_{j+2})\cos {\alpha _{j+1}}. $$
(15)

Proposition 4

([25])

Let K be a closed convex subset of a Hadamard manifold \(\mathbb{M}\). Then \(P_{K}(q)\) is a singleton for each \(q \in \mathbb{M}\). Also, for any point \(q \in \mathbb{M}\), the following assertion holds:

$$ \Re \bigl( \exp_{{P_{K}}(q)}^{-1}q, \exp_{{P_{K}}(q)}^{-1}r \bigr)\leq 0, \quad \forall r\in K. $$
(16)

The set of all single-valued vector fields \(M :\mathbb{M} \to T\mathbb{M}\) such that \(M(q)\in T_{q}(\mathbb{M})\) for all \(q\in \mathbb{M}\) is denoted by \(\Omega (\mathbb{M})\). We denote by \(\chi (\mathbb{M})\) the set of all set-valued vector fields \(F :\mathbb{M} \rightrightarrows T\mathbb{M}\) such that \(F(q)\subseteq T_{q}(\mathbb{M})\) for all \(q\in D(F)\), where \(D(F)\) is the domain of F defined by \(D(F)=\{q\in \mathbb{M}:F(q)\neq \emptyset \}\).

Definition 1

([12, 17])

A single-valued vector field \(M\in \Omega (\mathbb{M})\) is said to be

  1. (i)

    monotone, if for all \(q, r\in \mathbb{M}\),

    $$ \Re \bigl(M(q), \exp_{q}^{-1}r \bigr)\leq \Re \bigl(M(r), -\exp_{r}^{-1}q \bigr); $$
  2. (ii)

    a mapping \(T : K\subseteq \mathbb{M} \to \mathbb{M}\) is said to be firmly nonexpansive, if for all \(q, r\in K\), the mapping \(\varphi :[0,1]\to [0, \infty )\) defined by

    $$ \varphi (s)=d\bigl(\exp_{q} s \exp_{q}^{-1} T(q), \exp_{r} s \exp_{r}^{-1} T(r)\bigr),\quad \forall s\in [0,1], $$

    is nonincreasing.

Firmly nonexpansive mappings are nonexpansive; see [12].

Definition 2

([7])

A set-valued vector field \(F\in \chi (\mathbb{M})\) is said to be monotone, if for all \(q, r\in D(F)\),

$$ \Re \bigl(u, \exp_{q}^{-1}r \bigr)\leq \Re \bigl(v, - \exp_{r}^{-1}q \bigr),\quad \forall u\in F(q), \forall v\in F(r). $$

Definition 3

([12])

Let \(F\in \chi (\mathbb{M})\). The resolvent of F of order \(\lambda >0\) is set-valued mapping \(J_{\lambda }^{F}:\mathbb{M}\rightrightarrows D(F)\) defined by

$$ J_{\lambda }^{F}(q)=\bigl\{ r\in \mathbb{M}:q\in \exp_{r}\lambda F(r) \bigr\} ,\quad \forall q\in \mathbb{M}. $$

Theorem 1

([12])

Let \(\lambda >0\) and \(F\in \chi (\mathbb{M})\). Then the vector field F is monotone if and only if \(J_{\lambda }^{F}\) is single-valued and firmly nonexpansive.

The following ϕ-contraction mapping was introduced by Boyd and Wong [6] in the setting of metric spaces.

Definition 4

([6])

A mapping \(f : \mathbb{M} \to \mathbb{M}\) is said to be a ϕ-contraction, if

$$ d\bigl(f(q), f(r)\bigr) \leq \phi \bigl(d(q, r)\bigr),\quad \forall q, r \in \mathbb{M}, $$

where \(\phi : [0, +\infty ) \to [0, +\infty )\) is called a comparison function; it satisfies the following conditions:

  1. (i)

    \(\phi (s) < s\) for all \(s>0\);

  2. (ii)

    ϕ is continuous.

Remark 1

  1. (i)

    \(\phi (s) = \ln (1+s)\) for all \(s\geq 0\) satisfies the conditions (i)–(ii).

  2. (ii)

    If \(\phi (s) = \varrho s\) for all \(s\geq 0\), where \(\varrho \in (0, 1)\), then f is a contraction mapping with Lipschitz constant ϱ.

  3. (iii)

    ϕ-contraction mappings are nonexpansive.

We recall some facts from [21], which will be used in the sequel.

  1. (a)

    For any \(q, r \in \mathbb{R}^{n}\) and \(\kappa >0\), the following identity holds:

    $$ \bigl\Vert \kappa q + (1-\kappa )r \bigr\Vert ^{2} = \kappa ^{2} \Vert q \Vert ^{2} + (1-\kappa )^{2} \Vert r \Vert ^{2}+ 2\kappa (1-\kappa )\langle q, r \rangle . $$
    (17)
  2. (b)

    If \(\{\alpha _{n}\} \subseteq [0, 1)\) is a sequence of real numbers, then

    $$ \sum_{n=1}^{\infty }\alpha _{n} = +\infty\quad \Leftrightarrow\quad \prod_{n=1}^{\infty }(1 \pm \alpha _{n}) = 0. $$

3 Main results

We propose the following viscosity method for (HVIP) on Hadamard manifolds.

Algorithm 1

Let K be a nonempty, closed and convex subset of a Hadamard manifold \(\mathbb{M}\). Let \(M:K\to T\mathbb{M}\) be a single-valued vector field and \(F:\mathbb{M} \rightrightarrows T{\mathbb{M}}\) be a set-valued vector field such that \(D(F) \subseteq K\), and let \(f, T:K\to K\) be self mappings. For an arbitrary \(u_{0}\in K\), \({\alpha _{n}}\in (0, 1)\) and \(\lambda >0\), compute the sequences \(\{v_{n}\}\) and \(\{u_{n}\}\) as follows:

$$\begin{aligned}& v_{n} = J^{F}_{\lambda } \bigl[\exp_{u_{n}}\bigl(-\lambda M(u_{n})\bigr) \bigr], \\& u_{n+1} = \exp_{f(u_{n})}(1-\alpha _{n}) \exp_{f(u_{n})}^{-1}T(v_{n}), \end{aligned}$$

or, equivalently

$$ u_{n+1}=\gamma _{n}(1-\alpha _{n}), \quad \forall n\geq 0, $$
(18)

where \(\gamma _{n}:[0,1]\to {\mathbb{M}}\) is the geodesic joining \(f(u_{n})\) to \(T(v_{n})\), that is, \(\gamma _{n}(0)=f(u_{n})\) and \(\gamma _{n}(1)=T(v_{n})\) for all \(n\geq 0\).

For the convergence of Algorithm 1, we impose the following conditions on the sequence \(\{{\alpha _{n}}\}\):

(\({\mathrm{A}_{1}}\)):

\(\lim_{n\to \infty }\alpha _{n}=0\);

(\({\mathrm{A}_{2}}\)):

\(\sum_{n=0}^{\infty }\alpha _{n}=+\infty \);

(\({\mathrm{A}_{3}}\)):

\(\sum_{n= 0}^{\infty }|\alpha _{n+1}-\alpha _{n}|<\infty \).
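
For readers who want a computational template, the iteration of Algorithm 1 can be sketched as follows. This is only an illustrative outline (not part of the original paper): the manifold primitives exp_map and exp_inv, the resolvent, the vector field M, the mappings f and T and the sequence alpha are assumed to be supplied by the user, and the generic names below are ours.

```python
def viscosity_iteration(u0, exp_map, exp_inv, resolvent, M, f, T,
                        alpha, lam, n_iters):
    """Sketch of the iteration of Algorithm 1 (hypothetical generic interface).

    exp_map(p, v) -- exponential map exp_p(v)
    exp_inv(p, q) -- inverse exponential map exp_p^{-1}(q), a tangent vector at p
    resolvent(q)  -- resolvent J_lambda^F of the set-valued vector field F
    M(p)          -- single-valued vector field
    f, T          -- phi-contraction and nonexpansive self mappings of K
    alpha(n)      -- sequence in (0, 1) satisfying (A1)-(A3)
    lam           -- the parameter lambda > 0
    """
    u = u0
    for n in range(n_iters):
        v = resolvent(exp_map(u, -lam * M(u)))                      # v_n
        # u_{n+1} = gamma_n(1 - alpha_n): the point on the geodesic
        # joining f(u_n) to T(v_n)
        u = exp_map(f(u), (1 - alpha(n)) * exp_inv(f(u), T(v)))
    return u
```

A concrete instantiation of these primitives on the Hadamard manifold \(\mathbb{R}_{++}\) is given in Sect. 6.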

We make the following assumption on a single-valued vector field \(M : K \to T\mathbb{M}\), which also appeared in [1] in the setting of Hadamard manifolds.

Assumption 1

Let K be a nonempty subset of a Hadamard manifold \(\mathbb{M}\). A single-valued vector field \(M:K\to T\mathbb{M}\) is said to satisfy the contraction type assumption if, for any \(q,r\in K\) and any \(\lambda >0\), the following holds:

$$ d \bigl(\exp_{q}\big(-\lambda M(q)\big), \exp_{r}\big(-\lambda M(r)\big) \bigr) \leq (1-\eta ) d(q, r),\quad \eta \in [0,1). $$
(19)

Proposition 5

([1])

For any \(q\in K\), the following assertions are equivalent:

  1. (i)

    \(q\in (M+F)^{-1}({\mathbf{0}})\);

  2. (ii)

    \(q=J^{F}_{\lambda } [ \exp_{q}(-\lambda M(q)) ]\), for all \(\lambda >0\).

Remark 2

It can be easily seen that, in \(\mathbb{M}\), for a nonexpansive mapping T, the set \(\operatorname{Fix}(T)\) is geodesic convex; for more details, see [1, 12]. Together with Assumption 1, we see that \(J_{\lambda }^{F} (\exp(-\lambda M) )\) is nonexpansive. By Proposition 5, it follows that \(\operatorname{Fix} (J_{\lambda }^{F} (\exp(-\lambda M) ) )= (M+F )^{-1}({\mathbf{0}})\). Therefore \((M+F )^{-1}({\mathbf{0}})\) is closed and convex in \(\mathbb{M}\). Hence, \(\operatorname{Fix}(T)\cap (M+F )^{-1}({\mathbf{0}})\) is closed and convex in \(\mathbb{M}\).

Theorem 2

Let \(\mathbb{M}\) be a Hadamard manifold and K be a nonempty, closed and convex subset of \(\mathbb{M}\). Let \(T:K\to K\) be a nonexpansive mapping and \(f:K\to K\) be a ϕ-contraction mapping with comparison function \(\phi : [0, +\infty ) \to [0, +\infty )\). Let \(M:K\to T\mathbb{M}\) be a continuous vector field satisfying Assumption 1 and \(F:\mathbb{M} \rightrightarrows T\mathbb{M}\) be a set-valued monotone vector field such that \(D(F) \subseteq K\). Suppose that \(\{\alpha _{n}\}\) is a sequence in \((0,1)\) satisfying (\(\mathrm{A}_{1}\))–(\(\mathrm{A}_{3}\)), that \(\operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})\neq \emptyset \) and that \(0< \sigma = \sup \{\phi (d(u_{n}, u^{*}))/d(u_{n}, u^{*}) : u_{n} \neq u^{*}, n\in \mathbb{N}\} <1\) for all \(u^{*} \in \operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})\). Then the sequence obtained by Algorithm 1 converges to the solution of HVIP (7), which is a fixed point of the mapping \(P_{\operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})}f\).

Proof

We break the proof into six steps.

Step 1. We show that \(\{u_{n}\}\), \(\{v_{n}\}\), \(\{f(u_{n})\}\), \(\{\exp_{u_{n}}(\lambda M(u_{n}) )\}\) and \(\{T(v_{n})\}\) are bounded.

Let \(u^{*}\) be a solution of HVIP (7); then \(u^{*}\in \operatorname{Fix}(T) \) and \(u^{*}\in (M+F )^{-1}({\mathbf{0}})\). By Proposition 5, the nonexpansiveness of \(J^{F}_{\lambda } ( \exp_{\cdot }(-\lambda M ) )\) and Assumption 1, we have

$$\begin{aligned} d\bigl(v_{n}, u^{*} \bigr) =&d \bigl( J^{F}_{\lambda } \bigl[ \exp_{u_{n}}\bigl(-\lambda M (u_{n})\bigr) \bigr], J^{F}_{\lambda } \bigl[ \exp_{u^{*}}\bigl(- \lambda M\bigl(u^{*}\bigr)\bigr) \bigr] \bigr) \\ \leq & d \bigl(\exp_{u_{n}}\bigl(-\lambda M (u_{n})\bigr), \exp_{u^{*}}\bigl(- \lambda M \bigl(u^{*}\bigr)\bigr) \bigr) \\ \leq & (1-\eta ) d\bigl(u_{n}, u^{*}\bigr). \end{aligned}$$
(20)

Since \(u_{n+1}=\gamma _{n}(1-\alpha _{n})\), by convexity of the Riemannian distance, we have

$$\begin{aligned} d\bigl(u_{n+1}, u^{*} \bigr) =&d\bigl(\gamma _{n}(1-\alpha _{n}), u^{*}\bigr) \\ \leq & \alpha _{n} d\bigl(\gamma _{n}(0), u^{*}\bigr)+(1-\alpha _{n}) d\bigl( \gamma _{n}(1), u^{*}\bigr) \\ =& \alpha _{n} d\bigl(f(u_{n}), u^{*} \bigr)+ (1-\alpha _{n}) d\bigl(T(v_{n}), u^{*}\bigr) \\ \leq &\alpha _{n} \bigl[d\bigl(f(u_{n}), f \bigl(u^{*}\bigr)\bigr)+d\bigl(f\bigl(u^{*}\bigr), u^{*}\bigr) \bigr] +(1- \alpha _{n})d \bigl(T(v_{n}), T\bigl(u^{*}\bigr)\bigr) \\ \leq & \alpha _{n} \bigl[\phi \bigl(d\bigl(u_{n}, u^{*}\bigr)\bigr)+d\bigl(f\bigl(u^{*}\bigr), u^{*}\bigr) \bigr]+(1-\alpha _{n})d\bigl(v_{n}, u^{*}\bigr) \\ \leq & \alpha _{n} \bigl[\phi \bigl(d\bigl(u_{n}, u^{*}\bigr)\bigr)+d\bigl(f\bigl(u^{*}\bigr), u^{*}\bigr) \bigr]+(1-\alpha _{n}) (1-\eta )d \bigl(u_{n}, u^{*}\bigr) \\ \leq &\alpha _{n} \bigl[\phi \bigl(d\bigl(u_{n}, u^{*}\bigr)\bigr)+d\bigl(f\bigl(u^{*} \bigr),u^{*}\bigr) \bigr]+(1- \alpha _{n})d \bigl(u_{n}, u^{*}\bigr). \end{aligned}$$

Since \(0< \sigma = \sup \{\phi (d(u_{n}, u^{*}))/d(u_{n}, u^{*}) : u_{n} \neq u^{*}, n\in \mathbb{N}\} < 1\), the above inequality yields

$$\begin{aligned} d\bigl(u_{n+1}, u^{*}\bigr) \leq & \alpha _{n} \sigma d\bigl(u_{n}, u^{*} \bigr)+(1- \alpha _{n})d\bigl(u_{n}, u^{*} \bigr)+\alpha _{n} d\bigl(f\bigl(u^{*}\bigr), u^{*}\bigr) \\ =& \bigl(1-\alpha _{n}(1-\sigma )\bigr)d\bigl(u_{n}, u^{*}\bigr)+\alpha _{n} d\bigl(f\bigl(u^{*} \bigr), u^{*}\bigr) \\ \vdots & \\ \leq & \max \biggl\{ d\bigl(u_{0},u^{*}\bigr), \frac{1}{1-\sigma }d\bigl(f\bigl(u^{*}\bigr), u^{*} \bigr) \biggr\} , \end{aligned}$$

which implies that \(\{u_{n}\}\) is bounded. By (20), \(\{v_{n}\}\) is also bounded. Since T is a nonexpansive mapping, f is a ϕ-contraction and by Assumption 1, we conclude that \(\{T(v_{n})\}\), \(\{f(u_{n})\}\) and \(\{\exp_{u_{n}}(-\lambda M(u_{n}))\}\) are also bounded.
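
For completeness, the induction hidden behind the vertical dots in the display above can be spelled out as follows (a standard estimate, recorded here only for the reader's convenience). Since \(\alpha _{n}(1-\sigma )\in (0,1)\), the right-hand side below is a convex combination, so

$$ d\bigl(u_{n+1}, u^{*}\bigr)\leq \bigl(1-\alpha _{n}(1-\sigma )\bigr) d\bigl(u_{n}, u^{*}\bigr)+\alpha _{n}(1-\sigma ) \frac{d(f(u^{*}), u^{*})}{1-\sigma } \leq \max \biggl\{ d\bigl(u_{n}, u^{*}\bigr), \frac{1}{1-\sigma }d\bigl(f\bigl(u^{*}\bigr), u^{*}\bigr) \biggr\} , $$

and iterating this estimate down to \(n=0\) yields the maximum bound displayed above.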

Step 2. We show that \(\lim_{n\to \infty }d(u_{n+1},u_{n})=0 \).

Since T is nonexpansive, and f is a ϕ-contraction, by using (8), (11) and Proposition 2, we obtain

$$\begin{aligned} d(u_{n+1},u_{n}) =&d \bigl(\gamma _{n}(1-\alpha _{n}),\gamma _{n-1}(1- \alpha _{n-1}) \bigr) \\ \leq & d \bigl(\gamma _{n}(1-\alpha _{n}),\gamma _{n-1}(1-\alpha _{n}) \bigr)+ d \bigl(\gamma _{n-1}(1-\alpha _{n}),\gamma _{n-1}(1-\alpha _{n-1}) \bigr) \\ \leq & \alpha _{n} d \bigl( \gamma _{n}(0), \gamma _{n-1}(0) \bigr)+(1- \alpha _{n})d \bigl( \gamma _{n}(1), \gamma _{n-1}(1) \bigr) \\ &{} + \vert \alpha _{n}-\alpha _{n-1} \vert d \bigl( f(u_{n-1}), T(v_{n-1}) \bigr) \\ \leq & \alpha _{n} d \bigl(f(u_{n}), f(u_{n-1}) \bigr)+(1-\alpha _{n})d \bigl( T(v_{n}), T(v_{n-1}) \bigr) \\ &{} + \vert \alpha _{n}-\alpha _{n-1} \vert d \bigl( f(u_{n-1}), T(v_{n-1}) \bigr) \\ \leq & \alpha _{n} \phi \bigl(d(u_{n}, u_{n-1})\bigr)+(1-\alpha _{n}) d(v_{n}, v_{n-1}) \\ &{} + \vert \alpha _{n}-\alpha _{n-1} \vert d \bigl( f(u_{n-1}), T(v_{n-1}) \bigr). \end{aligned}$$
(21)

Again, by using the nonexpansive property of \(J^{F}_{\lambda }\) and Assumption 1, we get

$$\begin{aligned} d(v_{n}, v_{n-1}) =&d \bigl( J^{F}_{\lambda } \bigl( \exp_{u_{n}}\bigl(- \lambda M(u_{n})\bigr) \bigr), J^{F}_{\lambda } \bigl( \exp_{u_{n-1}}\bigl(-\lambda M(u_{n-1})\bigr) \bigr) \bigr) \\ \leq & d \bigl(\exp_{u_{n}}\bigl(-\lambda M(u_{n})\bigr), \exp_{u_{n-1}}\bigl(- \lambda M(u_{n-1})\bigr) \bigr) \\ \leq & (1-\eta ) d(u_{n}, u_{n-1}). \end{aligned}$$
(22)

Since \(\{u_{n}\}\) and \(\{f(u_{n})\}\) are bounded, there exist constants \(K_{1}\) and \(K_{2}\) such that \(d(u_{n}, u^{*})\leq K_{1}\) and \(d(f(u_{n}), u^{*})\leq K_{2}\). Thus, we have

$$\begin{aligned} d \bigl(f(u_{n-1}), T(v_{n-1}) \bigr) \leq & d\bigl(f(u_{n-1}),u^{*}\bigr)+d \bigl(T(v_{n-1}), u^{*}\bigr) \\ \leq & d\bigl(f(u_{n-1}), u^{*}\bigr)+d \bigl(v_{n-1}, u^{*}\bigr) \\ \leq & d\bigl(f(u_{n-1}), u^{*}\bigr)+d \bigl(u_{n-1}, u^{*}\bigr) \\ \leq & K_{1}+ K_{2}:=K_{3}. \end{aligned}$$
(23)

By combining this inequality with (21) and (22), we have

$$\begin{aligned} d (u_{n+1}, u_{n} ) \leq & \alpha _{n} \phi \bigl(d(u_{n}, u_{n-1})\bigr) +(1- \alpha _{n}) (1-\eta ) d(u_{n}, u_{n-1})+ \vert \alpha _{n}-\alpha _{n-1} \vert K_{3} \\ < & \alpha _{n} d(u_{n}, u_{n-1}) +(1-\alpha _{n}) (1-\eta ) d(u_{n}, u_{n-1}) + \vert \alpha _{n}-\alpha _{n-1} \vert K_{3} \\ =&(1-\bar{\alpha }_{n}) d(u_{n}, u_{n-1}) + \delta _{n} K_{3}, \end{aligned}$$
(24)

where \(\bar{\alpha }_{n} = \eta (1-\alpha _{n})\) and \(\delta _{n}=|\alpha _{n}-\alpha _{n-1}|\) for each \(n \geq 0\). Since \(\{u_{n}\}\) is bounded, there exists a constant \(K_{4}\) such that \(d (u_{n+1}, u_{n} )\leq K_{4}\). For \(m\leq n\), from (24), we have

$$\begin{aligned} d (u_{n+1}, u_{n} ) \leq & \prod_{i=m}^{n}(1- \bar{\alpha }_{i}) d(u_{m}, u_{m-1})+K_{3} \sum_{j=m}^{n} \Biggl\{ \delta _{j}\prod_{i=j+1}^{n}(1-\bar{ \alpha }_{i}) \Biggr\} \\ \leq & K_{4}\prod_{i=m}^{n}(1- \bar{\alpha }_{i}) +K_{3}\sum _{j=m}^{n} \Biggl\{ \delta _{j}\prod _{i=j+1}^{n}(1- \bar{\alpha }_{i}) \Biggr\} . \end{aligned}$$
(25)

Taking \(n\to \infty \), we get

$$ \lim_{n\to \infty }d(u_{n+1}, u_{n})\leq K_{4}\prod_{i=m}^{ \infty }(1- \bar{\alpha }_{i}) +K_{3}\sum _{j=m}^{\infty } \Biggl\{ \delta _{j}\prod _{i=j+1}^{\infty }(1-\bar{\alpha }_{i}) \Biggr\} . $$
(26)

From condition (\(\mathrm{A}_{3}\)), \(\lim_{n\to \infty }\delta _{n} = 0\). Thus, from (\(\mathrm{A}_{2}\)) and (\(\mathrm{A}_{3}\)), we deduce that \(\lim_{m\to \infty }\sum_{j=m}^{\infty } \{ \delta _{j}\prod_{i=j+1}^{\infty }(1-\bar{\alpha }_{i}) \} = 0\) and by condition (\(\mathrm{A}_{2}\)), \(\lim_{m\to \infty }\prod_{i=m}^{\infty }(1- \bar{\alpha }_{i})=0\). Hence, by taking \(m\to \infty \), we obtain

$$ \lim_{n\to \infty }d(u_{n+1}, u_{n})=0. $$

Step 3. Next, we show that \(\lim_{n\to \infty }d(u_{n},v_{n})=0 \). Since f is a ϕ-contraction, by using (18) and (20), we have

$$\begin{aligned} d(u_{n}, v_{n}) \leq& d \bigl(u_{n}, u^{*}\bigr)+d\bigl(v_{n}, u^{*}\bigr) \\ \leq &d\bigl(u_{n}, u^{*}\bigr)+(1-\eta ) d \bigl(u_{n}, u^{*}\bigr) \\ =& (2-\eta ) d\bigl(u_{n}, u^{*}\bigr) \\ =& (2-\eta ) \bigl\{ d\bigl( \gamma _{n-1} (1-\alpha _{n-1}), u^{*}\bigr) \bigr\} \\ \leq & (2-\eta ) \bigl\{ \alpha _{n-1}d\bigl( \gamma _{n-1}(0),u^{*}\bigr)+ (1- \alpha _{n-1})d \bigl(\gamma _{n-1}(1),u^{*}\bigr) \bigr\} \\ \leq & (2-\eta ) \bigl\{ \alpha _{n-1}d\bigl( f(u_{n-1}), u^{*}\bigr)+ (1- \alpha _{n-1})d\bigl(T(v_{n-1}),u^{*} \bigr) \bigr\} \\ \leq & (2-\eta ) \bigl\{ \alpha _{n-1}\bigl[d\bigl( f(u_{n-1}), f\bigl(u^{*}\bigr)\bigr)+d\bigl(f \bigl(u^{*}\bigr), u^{*}\bigr)\bigr] \\ &{}+ (1-\alpha _{n-1})d\bigl(T(v_{n-1}),u^{*}\bigr) \bigr\} \\ \leq & (2-\eta ) \bigl\{ \alpha _{n-1} \phi \bigl(d \bigl(u_{n-1}, u^{*}\bigr)\bigr) \\ &{}+\alpha _{n-1}d\bigl(f\bigl(u^{*}\bigr), u^{*}\bigr)+ (1-\alpha _{n-1}) (1-\eta )d \bigl(u_{n-1}, u^{*}\bigr) \bigr\} \\ < & (2-\eta ) \bigl\{ \alpha _{n-1} d\bigl(u_{n-1}, u^{*}\bigr)+\alpha _{n-1}d\bigl(f\bigl(u^{*} \bigr), u^{*}\bigr) \\ &{} + (1-\alpha _{n-1}) (1-\eta )d\bigl(u_{n-1}, u^{*}\bigr) \bigr\} \\ =&(2-\eta ) \bigl\{ \bigl[1-\eta (1-\alpha _{n-1}) \bigr]d \bigl(u_{n-1}, u^{*}\bigr)+ \alpha _{n-1}d \bigl(f\bigl(u^{*}\bigr), u^{*}\bigr) \bigr\} \\ =&(2-\eta ) \bigl\{ (1-\bar{\alpha }_{n-1})d\bigl(u_{n-1}, u^{*}\bigr)+ \alpha _{n-1}d\bigl(f\bigl(u^{*} \bigr), u^{*}\bigr) \bigr\} , \end{aligned}$$
(27)

where \(\bar{\alpha }_{n} = \eta (1-\alpha _{n})\) for each \(n \geq 0\). Let \(m\leq n\), then it follows that

$$\begin{aligned} d(u_{n}, v_{n}) < & (2-\eta ) K_{1}\prod_{j=m}^{n-1}(1- \bar{\alpha }_{j}) \\ &{} +(2-\eta ) \sum_{j=m}^{n-1} \Biggl\{ \alpha _{j}\prod_{i=j+1}^{n-1}(1- \bar{\alpha }_{i}) \Biggr\} d\bigl(f\bigl(u^{*}\bigr), u^{*}\bigr). \end{aligned}$$
(28)

Taking \(n\to \infty \) implies that

$$\begin{aligned} \lim_{n\to \infty }d(u_{n}, v_{n}) < & (2-\eta ) K_{1}\prod _{j=m}^{ \infty }(1-\bar{\alpha }_{j}) \\ &{} +(2-\eta ) \sum_{j=m}^{\infty } \Biggl\{ \alpha _{j}\prod_{i=j+1}^{\infty }(1- \bar{\alpha }_{i}) \Biggr\} d\bigl(f\bigl(u^{*}\bigr), u^{*}\bigr). \end{aligned}$$
(29)

From (\(\mathrm{A}_{2}\)), it follows that \(\lim_{m\to \infty }\prod_{j=m}^{\infty }(1- \bar{\alpha }_{j})=0\) and from (\(\mathrm{A}_{1}\)) and (\(\mathrm{A}_{2}\)), \(\lim_{m\to \infty }\sum_{j=m}^{\infty } \{ \alpha _{j}\prod_{i=j+1}^{\infty }(1-\bar{\alpha }_{i}) \}=0\). Hence, by taking \(m\to \infty \), we get

$$ \lim_{n\to \infty }d(u_{n}, v_{n})=0. $$
(30)

Step 4. Boundedness of \(\{u_{n}\}\) implies that there exists a subsequence \(\{u_{n_{k}}\}\) of \(\{u_{n}\}\) such that \(u_{n_{k}}\to z\) as \(k\to \infty \). Now, we show that \(z\in \operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})\). Since \(v_{n}=J_{\lambda }^{F} ( \exp_{u_{n}}(-\lambda M(u_{n})) )\), by using the continuity of \(J_{\lambda }^{F} ( \exp_{\cdot }(-\lambda M) )\) and (30), we have

$$\begin{aligned} 0 =&\lim_{k\to \infty }d(u_{n_{k}}, v_{n_{k}}) \\ =& \lim_{k\to \infty } d \bigl(u_{n_{k}},J_{\lambda }^{F} \bigl( \exp_{u_{n_{k}}}\bigl(-\lambda M(u_{n_{k}})\bigr) \bigr) \bigr) \\ =&d \bigl(z,J_{\lambda }^{F} \bigl( \exp_{z} \bigl(-\lambda M(z)\bigr) \bigr) \bigr), \end{aligned}$$
(31)

that is, \(z\in (M+F)^{-1}({\mathbf{0}})\).

Again, by using the convexity of the Riemannian distance, we get

$$\begin{aligned} d\bigl(u_{n+1}, T(v_{n}) \bigr) =& d \bigl(\gamma _{n}(1-\alpha _{n}), T(v_{n}) \bigr) \\ \leq &\alpha _{n} d \bigl( \gamma _{n}(0), T(v_{n}) \bigr)+(1-\alpha _{n})d \bigl( \gamma _{n}(1), T(v_{n}) \bigr) \\ =&\alpha _{n} d \bigl( f(u_{n}), T(v_{n}) \bigr)+(1-\alpha _{n})d \bigl( T(v_{n}), T(v_{n}) \bigr) \\ =&\alpha _{n} d \bigl( f(u_{n}), T(v_{n}) \bigr). \end{aligned}$$
(32)

Since \(\{u_{n}\}\) is bounded and f is a ϕ-contraction, we have

$$\begin{aligned} d\bigl(f(u_{n}), T(v_{n}) \bigr) \leq & d\bigl(f(u_{n}), f\bigl(u^{*}\bigr)\bigr)+d\bigl(f\bigl(u^{*}\bigr), u^{*}\bigr)+d\bigl(u^{*}, T(v_{n})\bigr) \\ \leq & \phi \bigl(d\bigl(u_{n}, u^{*}\bigr)\bigr)+d\bigl(f\bigl(u^{*}\bigr), u^{*}\bigr)+d\bigl( T(v_{n}), u^{*}\bigr) \\ \leq & d\bigl(u_{n}, u^{*}\bigr)+d\bigl(f\bigl(u^{*}\bigr), u^{*}\bigr)+d\bigl(v_{n}, u^{*}\bigr) \\ \leq & d\bigl(u_{n}, u^{*}\bigr)+d\bigl(f\bigl(u^{*}\bigr), u^{*}\bigr)+ (1-\eta )d\bigl(u_{n}, u^{*}\bigr) \\ \leq & (2-\eta ) K_{1}+d\bigl(f\bigl(u^{*}\bigr), u^{*}\bigr) =: \bar{K}. \end{aligned}$$
(33)

This, together with condition (\(\mathrm{A}_{1}\)) and (32), implies that

$$ \lim_{n\to \infty }d\bigl(u_{n+1}, T(v_{n})\bigr)\leq \lim_{n\to \infty }\alpha _{n} \bar{K} =0. $$
(34)

Also, from (30) and with a subsequence \(\{v_{n_{k}}\}\) of \(\{v_{n}\}\), we have

$$ \lim_{k\to \infty }d(v_{n_{k}}, z)\leq \lim_{k\to \infty }d(u_{n_{k}}, v_{n_{k}})+ \lim_{k\to \infty }d(u_{n_{k}}, z)=0, $$
(35)

that is, \(v_{n_{k}}\) converges to z as \(k \to \infty \). Then we get

$$\begin{aligned} d\bigl(T(z), z\bigr) \leq & d\bigl(T(z), T(v_{n_{k}})\bigr)+d\bigl(T(v_{n_{k}}), u_{{n_{k}}+1} \bigr)+d(u_{{n_{k}}+1}, z) \\ \leq & d(z, v_{n_{k}})+d\bigl(T(v_{n_{k}}), u_{{n_{k}}+1}\bigr)+d(u_{{n_{k}}+1}, z) \to 0,\quad k\to \infty , \end{aligned}$$
(36)

and so \(z\in \operatorname{Fix}(T)\). Thus we have \(z\in \operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}}) \).

Step 5. We show that \(\limsup_{n\to \infty }\Re ( \exp_{w}^{-1} f(w), \exp_{w}^{-1} T(v_{n}) )\leq 0\), where w is a fixed point of the mapping \(P_{\operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})}f\).

Since \(z\in {\operatorname{Fix}(T)}\cap (M+F)^{-1}({\mathbf{0}})\) and \(w = P_{{\operatorname{Fix(T)}}\cap (M+F)^{-1}({\mathbf{0}})}f(w)\), by Proposition 4 we have \(\Re ( \exp_{w}^{-1} f(w), \exp_{w}^{-1} z )\leq 0\). Boundedness of \(\{v_{n}\}\) implies that \(\{\Re ( \exp_{w}^{-1} f(w), \exp_{w}^{-1}T(v_{n})) \}\) is bounded. Then we have

$$ \limsup_{n\to \infty } \Re \bigl( \exp_{w}^{-1} f(w), \exp_{w}^{-1}T(v_{n}) \bigr)=\lim_{k\to \infty } \Re \bigl(\exp_{w}^{-1} f(w), \exp_{w}^{-1}T(v_{n_{k}}) \bigr). $$
(37)

Since \(v_{n_{k}}\to z\) as \(k\to \infty \) and by using continuity of T, we obtain

$$ \lim_{k\to \infty } \Re \bigl( \exp_{w}^{-1} f(w), \exp_{w}^{-1}T(v_{n_{k}}) \bigr)= \Re \bigl(\exp_{w}^{-1} f(w), \exp_{w}^{-1}T(z) \bigr) \leq 0. $$

Therefore,

$$ \limsup_{n\to \infty }\Re \bigl( \exp_{w}^{-1} f(w), \exp_{w}^{-1} T(v_{n}) \bigr)\leq 0. $$
(38)

Step 6. Finally, we show that \(\lim_{n\to \infty } d(u_{n}, w)=0\).

We fix \(n\geq 0\) and set \(v=f(u_{n})\), \(q=T(v_{n})\) and consider geodesic triangles \(\Delta (v,q,w)\), \(\Delta (f(w),q,v)\) and \(\Delta (f(w),q,w)\), and their comparison triangles \(\Delta (v^{\prime },q^{\prime },w^{\prime })\), \(\Delta (f(w)',q',v')\) and \(\Delta (f(w)^{\prime },q^{\prime },w^{\prime })\). From Lemma 1, we have

$$\begin{aligned}& d\bigl(f(u_{n}), w\bigr)= d(v, w)= \bigl\Vert v^{\prime }- w^{\prime } \bigr\Vert \quad \mbox{and}\quad d\bigl(T(v_{n}), w\bigr)= d(q, w)= \bigl\Vert q^{\prime }- w^{\prime } \bigr\Vert , \\& d\bigl(f(w), w\bigr)= \bigl\Vert f(w)^{\prime }- w^{\prime } \bigr\Vert \quad \mbox{and} \quad d\bigl(T(v_{n}), w\bigr)= d(q, w)= \bigl\Vert q^{\prime }- w^{\prime } \bigr\Vert . \end{aligned}$$

Recall that \(u_{n+1}=\exp_{f(u_{n})}(1-\alpha _{n})\exp_{f(u_{n})}^{-1}T(v_{n})= \exp_{v}(1-\alpha _{n})\exp_{v}^{-1}q\). The comparison point of \(u_{n+1}\) in \(\mathbb{R}^{2}\) is \(x^{\prime }_{n+1}=\alpha _{n} v^{\prime }+(1-\alpha _{n})q^{\prime }\). Let φ and \(\varphi ^{\prime }\) denote the angles at q and \(q^{\prime }\) in the triangles \(\Delta (f(w),q,w)\) and \(\Delta (f(w)^{\prime },q^{\prime },w^{\prime })\), respectively. Therefore, \(\varphi \leq \varphi ^{\prime }\), and then \(\cos \varphi '\leq \cos \varphi \). By Lemma 2(ii) and the nonexpansive property of T and the ϕ-contraction property of f, we have

$$\begin{aligned}& d^{2}(u_{n+1}, w) \\& \quad \leq \bigl\Vert x^{\prime }_{n+1}-w^{\prime } \bigr\Vert ^{2} \\& \quad = \bigl\Vert \alpha _{n} v^{\prime }+(1-\alpha _{n})q^{\prime }-w^{\prime } \bigr\Vert ^{2} \\& \quad = \bigl\Vert \alpha _{n} \bigl(v^{\prime }-w^{\prime } \bigr)+(1-\alpha _{n}) \bigl(q^{\prime }-w^{\prime }\bigr) \bigr\Vert ^{2} \\& \quad = \alpha _{n}^{2} \bigl\Vert v^{\prime }-w^{\prime } \bigr\Vert ^{2}+(1-\alpha _{n})^{2} \bigl\Vert q^{\prime }-w^{\prime } \bigr\Vert ^{2} +2 \alpha _{n} (1-\alpha _{n})\bigl\langle v^{\prime }-w^{\prime }, q^{\prime }-w^{\prime } \bigr\rangle \cos \varphi ^{\prime } \\& \quad \leq \alpha _{n}^{2} \bigl\Vert v^{\prime }-w^{\prime } \bigr\Vert ^{2}+(1-\alpha _{n})^{2} \bigl\Vert q^{\prime }-w^{\prime } \bigr\Vert ^{2} +2 \alpha _{n} (1-\alpha _{n})\bigl\langle v^{\prime }-f(w)^{\prime }, q^{\prime }-w^{\prime } \bigr\rangle \\& \qquad {} +2 \alpha _{n} (1-\alpha _{n})\bigl\langle f(w)^{\prime } - w', q^{\prime }-w^{\prime } \bigr\rangle \cos \varphi ^{\prime } \\& \quad \leq \alpha _{n}^{2} d^{2} \bigl(f(u_{n}), w\bigr)+(1-\alpha _{n})^{2} d^{2}\bigl(T(v_{n}), w\bigr) + 2 \alpha _{n} (1-\alpha _{n}) \bigl\Vert v^{\prime }-f(w)^{\prime } \bigr\Vert \bigl\Vert q^{\prime }-w^{\prime } \bigr\Vert \\& \qquad {} + 2 \alpha _{n} (1-\alpha _{n}) d\bigl(f(w), w\bigr)d \bigl(T(v_{n}), w\bigr)\cos \varphi . \end{aligned}$$

By the Cauchy–Schwarz inequality, we obtain

$$\begin{aligned}& d^{2}(u_{n+1}, w) \\& \quad \leq \alpha _{n}^{2} d^{2} \bigl(f(u_{n}), w\bigr)+(1-\alpha _{n})^{2} d^{2}\bigl(T(v_{n}), w\bigr)+ 2 \alpha _{n} (1-\alpha _{n})d\bigl(v,f(w)\bigr)d(q, w) \\& \qquad {} + 2 \alpha _{n} (1-\alpha _{n}) d\bigl(f(w), w\bigr)d \bigl(T(v_{n}), w\bigr)\cos \varphi \\& \quad \leq \alpha _{n}^{2} d^{2} \bigl(f(u_{n}), w\bigr)+(1-\alpha _{n})^{2} d^{2}(v_{n}, w) + 2 \alpha _{n} (1-\alpha _{n})d\bigl(f(u_{n}),f(w)\bigr)d\bigl(T(v_{n}), w\bigr) \\& \qquad {} + 2 \alpha _{n} (1-\alpha _{n}) \Re \bigl( \exp_{w}^{-1} f(w), \exp_{w}^{-1} T(v_{n})\bigr) \\& \quad \leq \alpha _{n}^{2} d^{2} \bigl(f(u_{n}), w\bigr)+(1-\alpha _{n})^{2} d^{2}(u_{n}, w) + 2 \alpha _{n} (1-\alpha _{n})\phi \bigl(d(u_{n},w)\bigr)d(u_{n}, w) \\& \qquad {} + 2 \alpha _{n} (1-\alpha _{n}) \Re \bigl( \exp_{w}^{-1} f(w), \exp_{w}^{-1} T(v_{n})\bigr) \\& \quad \leq (1-\alpha _{n}) d^{2}(u_{n}, w) + \alpha _{n}^{2} d^{2}\bigl(f(u_{n}), w\bigr)+2 \alpha _{n}\phi \bigl(d(u_{n},w) \bigr)d(u_{n}, w) \\& \qquad {} + 2 \alpha _{n} \Re \bigl( \exp_{w}^{-1} f(w), \exp_{w}^{-1} T(v_{n}) \bigr) \\& \quad < (1-\alpha _{n}) d^{2}(u_{n}, w) + \alpha _{n}^{2} d^{2}\bigl(f(u_{n}), w \bigr)+2 \alpha _{n}d^{2}(u_{n},w) \\& \qquad {} + 2 \alpha _{n} \Re \bigl( \exp_{w}^{-1} f(w), \exp_{w}^{-1} T(v_{n}) \bigr) \\& \quad = (1+\alpha _{n})d^{2}(u_{n}, w) + \alpha _{n}\beta _{n}, \end{aligned}$$

where \(\beta _{n}=\alpha _{n}d^{2}(f(u_{n}), w)+2\Re ( \exp_{w}^{-1} f(w), \exp_{w}^{-1} T(v_{n}) )\). By condition (\(\mathrm{A}_{1}\)) and (38), \(\lim_{n\to \infty }\beta _{n}= 0\). Let \(m\leq n\). Then the above inequality becomes

$$ d^{2}(u_{n+1}, w)< K_{1}\prod_{j=m}^{n}(1+ \alpha _{j})+ \sum_{j=m}^{n} \Biggl\{ \alpha _{j}\prod_{i=j+1}^{n}(1+ \alpha _{i}) \Biggr\} \beta _{j}. $$
(39)

By taking \(n\to \infty \), it follows that

$$ \lim_{n\to \infty }d^{2}(u_{n+1}, w)< K_{1}\prod_{j=m}^{ \infty }(1+ \alpha _{j})+ \sum_{j=m}^{\infty } \Biggl\{ \alpha _{j} \prod_{i=j+1}^{\infty }(1+ \alpha _{i}) \Biggr\} \beta _{j}. $$
(40)

By condition (\(\mathrm{A}_{1}\)) and (\(\mathrm{A}_{2}\)), \(\lim_{m\to \infty }\prod_{j=m}^{\infty }(1+\alpha _{j})=0\) and \(\lim_{m\to \infty }\sum_{j=m}^{\infty } \{ \alpha _{j}\prod_{i=j+1}^{\infty }(1+\alpha _{i}) \}=0\). Since \(\lim_{n\to \infty }\beta _{n}= 0\), for any \(\varepsilon >0\) there exists \(k \in \mathbb{N}\) such that \(\beta _{j} < \varepsilon \) for all \(j \geq k\). Thus, taking the limit as \(m \to \infty \) in the inequality (40), we obtain

$$ \lim_{n\to \infty }d(u_{n}, w)=0. $$

This completes the proof. □

4 Consequences

If f is a contraction on K, then we obtain the following corollary of Theorem 2, which can be seen as an extension of the work in [22] from Banach spaces to Hadamard manifolds.

Corollary 1

Let K be a nonempty, closed and convex subset of a Hadamard manifold \(\mathbb{M}\), \(f:K\to K\) be a contraction mapping and \(T:K\to K\) be a nonexpansive mapping. Let \(M:K \to T\mathbb{M}\) be a continuous single-valued vector field satisfying Assumption 1 and \(F:\mathbb{M} \rightrightarrows T\mathbb{M}\) be a set-valued monotone vector field such that \(D(F) \subseteq K\). If \(\operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})\neq \emptyset \), then the sequence generated by Algorithm 1 converges to \(z \in \operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})\), where z is a fixed point of the mapping \(P_{\operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})}f\).

If \(f = I\), the identity mapping, in Algorithm 1, then the following result is an extension, from Hilbert spaces to Hadamard manifolds, of the results discussed in [14, 24]. Moreover, the following result also appeared in [1] in the setting of Hadamard manifolds.

Corollary 2

Let K be a nonempty, closed and convex subset of a Hadamard manifold \(\mathbb{M}\) and \(T:K\to K\) be a nonexpansive mapping. Let \(M:K \to T\mathbb{M}\) be a continuous vector field satisfying Assumption 1 and \(F:\mathbb{M} \rightrightarrows T\mathbb{M}\) be a monotone vector field such that \(D(F) \subseteq K\). If \(\operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})\neq \emptyset \), then the sequence \(\{u_{n}\}\) generated by Algorithm 1 converges to \(z \in \operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})\), where \(z= \lim_{n\to \infty }P_{\operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})}u_{n}\).

5 Nonsmooth optimization problem

In this section, we study the composite minimization of a smooth and a nonsmooth real-valued function defined on a Hadamard manifold \(\mathbb{M}\). Let \(\mathcal{Y}, \mathcal{Z} : \mathbb{M} \to \mathbb{R}\) be real-valued functions such that \(\mathcal{Y}\) is lower semicontinuous and convex, and \(\mathcal{Z}\) is differentiable. We address the following minimization problem: to find

$$ \min_{q\in \mathbb{M}}\bigl\{ (\mathcal{Y} + \mathcal{Z}) (q)\bigr\} . $$
(41)

Assume that S is the solution set of the problem (41). The directional derivative of a function \(\mathcal{Z} : \mathbb{M} \to \mathbb{R}\) at q in the direction \(u\in T_{q}\mathbb{M}\) is defined by

$$ \mathcal{Z}'(q; u) := \lim_{s\to 0^{+}} \frac{\mathcal{Z}(\exp _{q}su) - \mathcal{Z}(q)}{s}. $$

The gradient of \(\mathcal{Z}\) at \(q\in \mathbb{M}\) is defined by \(\Re (\nabla \mathcal{Z}(q), u) = \mathcal{Z}'(q; u)\) for all \(u\in T_{q}\mathbb{M}\). The subdifferential [23] \(\partial \mathcal{Y} : \mathbb{M} \rightrightarrows T\mathbb{M}\) of \(\mathcal{Y}\) at q is defined as

$$ \partial \mathcal{Y}(q) := \bigl\{ u \in T_{q} \mathbb{M} : \Re \bigl( u, \exp _{q}^{-1}p\bigr) \leq \mathcal{Y}(p) - \mathcal{Y}(q), \forall p \in \mathbb{M} \bigr\} . $$
(42)

The equivalence between the minimization problem (41) and the inclusion problem \({\mathbf{0}}\in \nabla \mathcal{Z}(q) + \partial \mathcal{Y}(q)\), discussed in [3], is given by

$$ q\in \mathbf{S}\quad \Leftrightarrow\quad {\mathbf{0}}\in \nabla \mathcal{Z}(q) + \partial \mathcal{Y}(q). $$
(43)

Lemma 3

([11])

Let \(\mathcal{Y} : \mathbb{M} \to \mathbb{R}\) be a lower semicontinuous and convex function on a Hadamard manifold \(\mathbb{M}\). Then the subdifferential \(\partial \mathcal{Y}\) of \(\mathcal{Y}\) is a monotone vector field.

By replacing \(M= \nabla \mathcal{Z}\) and \(F=\partial \mathcal{Y} \) in Algorithm 1, we obtain the following algorithm.

Algorithm 2

Let K be a nonempty, closed and convex subset of a Hadamard manifold \(\mathbb{M}\). Let \(\mathcal{Y}, \mathcal{Z} : \mathbb{M} \to \mathbb{R}\) be real-valued functions such that \(\mathcal{Y}\) is lower semicontinuous and convex, and \(\mathcal{Z}\) is differentiable. For an arbitrary \(u_{0}\in K\) and \(\lambda >0\), compute the sequences \(\{v_{n}\}\) and \(\{u_{n}\}\) as follows:

$$\begin{aligned}& v_{n} = J^{\partial \mathcal{Y}}_{\lambda } \bigl[ \exp_{u_{n}}\bigl(-\lambda \nabla \mathcal{Z}(u_{n})\bigr) \bigr], \\& u_{n+1} = \exp_{f(u_{n})}(1-\alpha _{n}) \exp_{f(u_{n})}^{-1}T(v_{n}), \end{aligned}$$

where \({\alpha _{n}}\in (0, 1)\) satisfies the conditions (\(\mathrm{A}_{1}\))–(\(\mathrm{A}_{3}\)).
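
To make Algorithm 2 concrete, the following sketch (our illustration, not taken from the paper) specializes it to the Euclidean case \(\mathbb{M}=\mathbb{R}^{n}\), where \(\exp_{q}(v)=q+v\), the resolvent \(J^{\partial \mathcal{Y}}_{\lambda }\) is the proximity operator of \(\lambda \mathcal{Y}\), and the update for \(u_{n+1}\) becomes the convex combination \(\alpha _{n} f(u_{n})+(1-\alpha _{n})T(v_{n})\). The choices \(\mathcal{Z}(x)=\frac{1}{2}\|Ax-b\|^{2}\), \(\mathcal{Y}(x)=\mu \|x\|_{1}\), \(T=I\) and \(f(x)=x/2\) (a contraction, hence a ϕ-contraction with \(\phi (s)=s/2\)), as well as the data A, b, μ and the step λ, are arbitrary assumptions made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
mu = 0.1
lam = 1.0 / np.linalg.norm(A, 2) ** 2      # small step so the gradient step is contractive (Assumption 1)

grad_Z = lambda x: A.T @ (A @ x - b)       # gradient of Z(x) = 0.5 * ||Ax - b||^2
prox_Y = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t * mu, 0.0)  # prox of t*mu*||.||_1 (soft-thresholding)
f = lambda x: 0.5 * x                      # phi-contraction with phi(s) = s / 2
T = lambda x: x                            # nonexpansive mapping (identity)

u = np.zeros(A.shape[1])
for n in range(1, 2001):
    v = prox_Y(u - lam * grad_Z(u), lam)   # v_n = J_lambda^{dY}(exp_{u_n}(-lam grad Z(u_n)))
    a = 1.0 / (n + 1)                      # alpha_n satisfying (A1)-(A3)
    u = a * f(u) + (1 - a) * T(v)          # Euclidean form of u_{n+1} = gamma_n(1 - alpha_n)

print("approximate solution selected by the viscosity scheme:", u)
```

In this Euclidean setting the scheme is a viscosity-regularized forward–backward (ISTA-type) iteration; the viscosity term f singles out one particular solution of (41) whenever the solution set is not a singleton.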

The following result is an extension of the result discussed in [22] from Banach spaces to Hadamard manifolds, where they assumed f to be a contraction mapping.

Theorem 3

Let K be a nonempty, closed and convex subset of a Hadamard manifold \(\mathbb{M}\), \(f:K\to K\) be a ϕ-contraction mapping and \(T:K\to K\) be a nonexpansive mapping. Let \(\mathcal{Y}, \mathcal{Z} : \mathbb{M} \to \mathbb{R}\) be real-valued functions such that \(\mathcal{Y}\) is lower semicontinuous and convex, and \(\mathcal{Z}\) is differentiable, such that \(\operatorname{Fix}(T)\cap \mathbf{S}\neq \emptyset \) and \(\nabla \mathcal{Z}\) satisfies Assumption 1. Then the sequence generated by Algorithm 2 converges to a fixed point of the mapping \(P_{\operatorname{Fix}(T)\cap (\nabla \mathcal{Z} + \partial \mathcal{Y})^{-1}({\mathbf{0}})}f\), which is in fact a solution of (43).

6 Computational experiment

Let \(\mathbb{M} = \mathbb{R}_{++}= \{p \in \mathbb{R} : p > 0\}\) be a Hadamard manifold with the Riemannian metric \(\Re ( \cdot , \cdot )\) defined by \(\Re (w_{1}, w_{2}) := H(p)w_{1}w_{2}\) for all \(w_{1}, w_{2} \in T_{p}\mathbb{M} \), where \(H: \mathbb{R}_{++} \to (0, +\infty )\) is given by \(H(p) = p^{-2}\). The tangent space \(T_{p}\mathbb{M}\) at \(p\in \mathbb{M}\) is equal to \(\mathbb{R}\) for all \(p\in \mathbb{M}\). The Riemannian distance \(d : \mathbb{M} \times \mathbb{M} \to [0, +\infty )\) is given by

$$ d(p, q) := \vert \ln p - \ln q \vert ,\quad \forall p, q \in \mathbb{M}. $$

The unique geodesic \(\gamma : \mathbb{R} \to \mathbb{M}\) joining \(\gamma (0) = p\) and \(\gamma (1) = q\) is given by \(\gamma (t) := p^{1-t}q^{t}\). The inverse exponential mapping is given by

$$ \exp _{p}^{-1}q=\dot{\gamma }(0) =p\ln \frac{q}{p}. $$

For further details, we refer to [19].
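
The manifold operations above are straightforward to code. The following short Python sketch (ours, for illustration only) implements the distance, the exponential map and its inverse on \(\mathbb{M} = \mathbb{R}_{++}\); the formula \(\exp _{p}(v) = p e^{v/p}\) is obtained by inverting the expression for \(\exp _{p}^{-1}q\) given above.

```python
import math

def dist(p, q):
    # Riemannian distance on R_{++}: d(p, q) = |ln p - ln q|
    return abs(math.log(p) - math.log(q))

def exp_map(p, v):
    # exponential map exp_p(v) = p * e^{v / p} (inverse of exp_inv below)
    return p * math.exp(v / p)

def exp_inv(p, q):
    # inverse exponential map exp_p^{-1}(q) = p * ln(q / p)
    return p * math.log(q / p)

# quick consistency checks
p, q, t = 0.2, 0.7, 0.4
assert abs(exp_map(p, exp_inv(p, q)) - q) < 1e-12                       # exp_p(exp_p^{-1} q) = q
assert abs(exp_map(p, t * exp_inv(p, q)) - p**(1 - t) * q**t) < 1e-12   # geodesic gamma(t) = p^{1-t} q^t
assert abs(dist(p, q) - abs(exp_inv(p, q)) / p) < 1e-12                 # |v|_p = |v| / p for v in T_p M
```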

Let \(K = (0, 1]\) be a closed convex subset of \(\mathbb{M} = \mathbb{R}_{++}\). Now, we define a single-valued vector field \(M : K \to \mathbb{R}\) as

$$ M(p) := p + p\ln p,\quad \forall p\in K. $$

Then M satisfies Assumption 1. Indeed, for any \(p, q\in K\) and any \(0< \lambda \leq 1\), we have

$$\begin{aligned}& d \bigl(\exp _{p}\bigl(-\lambda M(p)\bigr), \exp _{q}\bigl(-\lambda M(q)\bigr) \bigr) \\& \quad = d \bigl( \exp _{p}\bigl(-\lambda (p+ p\ln p)\bigr), \exp _{q}\bigl(-\lambda (q + q\ln q)\bigr) \bigr) \\& \quad = d \bigl( p^{1-\lambda }e^{-\lambda }, q^{1-\lambda }e^{-\lambda } \bigr) = \biggl\vert \ln \frac{p^{1-\lambda }}{q^{1-\lambda }} \biggr\vert \\& \quad = (1-\lambda )d(p, q). \end{aligned}$$

A set-valued vector field \(F : \mathbb{M} \rightrightarrows \mathbb{R}\) with \(D(F) = K\) is defined by

$$ F (p)= \textstyle\begin{cases} -p, & \text{if } 0< p < 1, \\ {[0, 1]}, &\text{if } p=1. \end{cases} $$

Notice that F is a monotone vector field on K. Clearly, the solution set of the inclusion problem, \((M+F)^{-1}({\mathbf{0}})\), is \(\{1\}\). The resolvent of F, for any \(p \in \mathbb{M}\) and any \(\lambda > 0\), is given by

$$ J_{\lambda }^{F}(p) = \textstyle\begin{cases} pe^{\lambda }, & \text{if } 0< p < 1, \\ 1, & \text{if } p= 1. \end{cases} $$
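
For the reader's convenience, here is a short derivation of this closed form (our computation, using \(\exp _{r}(v) = r e^{v/r}\), which follows from the inverse exponential map given above). By Definition 3, \(r\in J_{\lambda }^{F}(p)\) means \(p\in \exp _{r}(\lambda F(r))\). For \(0< r<1\) we have \(\lambda F(r)=\{-\lambda r\}\), so

$$ p = \exp _{r}(-\lambda r) = r e^{-\lambda r/r} = r e^{-\lambda } \quad \Longleftrightarrow \quad r = p e^{\lambda }, $$

while for \(r=1\) we have \(\exp _{1}(\lambda F(1)) = \{e^{s} : s\in [0, \lambda ]\}\ni 1\), which is consistent with \(J_{\lambda }^{F}(1)=1\).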

Now, let \(f : K \to K\) be defined as

$$ f(p) = e^{\frac{\ln p}{1-\ln p}},\quad \forall p\in K. $$

Then f is a ϕ-contraction mapping with the comparison function \(\phi (s) = \frac{s}{1+s}\). Indeed, for any \(p, q\in K\),

$$ d\bigl(f(p), f(q)\bigr) = \biggl\vert \frac{\ln p}{1-\ln p} - \frac{\ln q}{1-\ln q} \biggr\vert = \frac{ \vert \ln p - \ln q \vert }{(1-\ln p)(1-\ln q)}. $$
(44)

Since \(0 < p\), \(q \leq 1\), we have \(-\infty < \ln p\), \(\ln q \leq 0\). Therefore, the inequality \(1+|\ln p - \ln q| \leq (1-\ln p)(1-\ln q)\) holds. This together with (44) shows that we have

$$ d\bigl(f(p), f(q)\bigr) \leq \frac{ \vert \ln p - \ln q \vert }{1+ \vert \ln p - \ln q \vert } = \phi \bigl(d(p, q)\bigr), $$

where \(\phi (s) = \frac{s}{1+s}\) for all \(s \geq 0\). Clearly, ϕ satisfies all the conditions of Definition 4. Note that f is not a contraction mapping on K. Let \(T : K \to K\) be the nonexpansive mapping given by \(T(p) = p\) for all \(p\in K\); hence \(\operatorname{Fix} (T) = (0, 1]\). Therefore, the common solution set \(\operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})\) is \(\{1\}\), and the fixed point of the mapping \(P_{\operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})}f\) is 1. Indeed, choose \(\bar{p} = 1 \in \operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})\); then, for any \(q \in \operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})\), we have

$$ \exp _{\bar{p}}^{-1}f(\bar{p}) = 0, \quad \text{and} \quad \exp _{\bar{p}}^{-1}q = \bar{p}\ln \frac{q}{\bar{p}}. $$

Hence, we have

$$ \Re \bigl(\exp _{\bar{p}}^{-1}f(\bar{p}), \exp _{\bar{p}}^{-1}q \bigr) = 0,\quad \forall q\in \operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}}), $$

that is, the set of fixed points of the mapping \(P_{\operatorname{Fix}(T)\cap (M+F)^{-1}({\mathbf{0}})}f\) is \(\{1\}\). Let \(\alpha _{n} = \frac{1}{n+1}\) or \(\alpha _{n} = \frac{1}{(n+1)^{3/2}}\) and \(\lambda = \frac{1}{3}\). Then \(\alpha _{n}\) satisfies the conditions (\(\mathrm{A}_{1}\))–(\(\mathrm{A}_{3}\)) required by Algorithm 1. With the initial points \(u_{1} = 0.2\) and \(u_{1} = 0.5\), Algorithm 1 converges to the solution of the HVIP, as shown in Table 1, Table 2, Fig. 1 and Fig. 2. The computational codes were run on a desktop PC with an Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz and 2.00 GB RAM under GNU Octave version 4.2.2-1ubuntu1.
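
The experiment can be reproduced with the following short Python sketch (our re-implementation for illustration; the authors used GNU Octave, and the tolerance, iteration cap and printed output below are our own choices). It assembles the manifold operations, the vector field M, the resolvent of F and the mappings f and T exactly as defined above and runs the iteration of Algorithm 1.

```python
import math

exp_map = lambda p, v: p * math.exp(v / p)                  # exponential map exp_p(v) on R_{++}
exp_inv = lambda p, q: p * math.log(q / p)                  # inverse exponential map exp_p^{-1}(q)

M = lambda p: p + p * math.log(p)                           # single-valued vector field M(p) = p + p ln p
f = lambda p: math.exp(math.log(p) / (1.0 - math.log(p)))   # phi-contraction f(p) = e^{ln p / (1 - ln p)}
T = lambda p: p                                             # nonexpansive mapping T(p) = p

def resolvent_F(p, lam):
    # closed form of J_lambda^F given in the text
    return 1.0 if p >= 1.0 else p * math.exp(lam)

def algorithm1(u, lam=1.0/3.0, alpha=lambda n: 1.0/(n + 1), tol=1e-6, max_iter=10000):
    for n in range(1, max_iter + 1):
        v = resolvent_F(exp_map(u, -lam * M(u)), lam)                   # v_n
        u_next = exp_map(f(u), (1 - alpha(n)) * exp_inv(f(u), T(v)))    # u_{n+1} = gamma_n(1 - alpha_n)
        if abs(u_next - u) < tol:                                       # stopping criterion |u_{n+1} - u_n| < 10^{-6}
            return u_next, n
        u = u_next
    return u, max_iter

for u1 in (0.2, 0.5):
    sol, iters = algorithm1(u1)
    print(f"u_1 = {u1}: u_n = {sol:.6f} after {iters} iterations (exact solution: 1)")
```

For the second choice of parameters, replace the default alpha above by lambda n: 1.0 / (n + 1) ** 1.5.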

Figure 1

Computational convergence of Algorithm 1 and error term \(|u_{n+1} - u_{n}|\) with the choices of scalars \(\lambda = \frac{1}{3}\) and \(\alpha _{n} = \frac{1}{n+1}\) and different initial points \(u_{1} = 0.2\) or \(u_{1} = 0.5\)

Figure 2

Computational convergence of Algorithm 1 and error term \(|u_{n+1} - u_{n}|\) with the choices of scalars \(\lambda = \frac{1}{3}\) and \(\alpha _{n} = \frac{1}{(n+1)^{3/2}}\) and different initial points \(u_{1} = 0.2\), \(u_{1} = 0.5\)

Table 1 Computed iterates and errors of Algorithm 1 for the parameters \(\lambda = \frac{1}{3}\) and \(\alpha _{n} = \frac{1}{n+1}\), with initial points \(u_{1} = 0.2\) and \(u_{1} = 0.5\) and the stopping criterion \(|u_{n+1} - u_{n}| < 10^{-6}\)
Table 2 Computed iterates and errors of Algorithm 1 for the parameters \(\lambda = \frac{1}{3}\) and \(\alpha _{n} = \frac{1}{(n+1)^{3/2}}\), with initial points \(u_{1} = 0.2\) and \(u_{1} = 0.5\) and the stopping criterion \(|u_{n+1} - u_{n}| < 10^{-6}\)

7 Conclusions

In this article, we have introduced a viscosity method for hierarchical variational inequalities involving a ϕ-contraction mapping defined over the common solution set of a variational inclusion and a fixed point problem. Some consequences of the proposed method are also provided. Furthermore, an application of the proposed viscosity method to a nonsmooth optimization problem is presented. Finally, the convergence analysis of the proposed method is illustrated by some computational numerical experiments on a Hadamard manifold.

Availability of data and materials

Not applicable.

References

  1. Al-Homidan, S., Ansari, Q.H., Babu, F.: Halpern and Mann type algorithms for fixed points and inclusion problems on Hadamard manifolds. Numer. Funct. Anal. Optim. 40(6), 621–653 (2019)

  2. Al-Homidan, S., Ansari, Q.H., Babu, F., Yao, J.C.: Viscosity method with a ϕ-contraction mapping for hierarchical variational inequalities on Hadamard manifolds. Fixed Point Theory 21(2), 561–584 (2020)

  3. Ansari, Q.H., Babu, F.: Proximal point algorithm for inclusion problems in Hadamard manifolds with applications. Optim. Lett. (2019). https://doi.org/10.1007/s11590-019-01483-0

  4. Ansari, Q.H., Ceng, L.C., Gupta, H.: Triple hierarchical variational inequalities. In: Ansari, Q.H. (ed.) Nonlinear Analysis: Approximation Theory, Optimization and Applications, pp. 231–280. Birkhäuser/Springer, New Delhi (2014)

  5. Ansari, Q.H., Islam, M., Yao, J.C.: Nonsmooth variational inequalities on Hadamard manifolds. Appl. Anal. 99(2), 340–358 (2020). Correction: 359–360

  6. Boyd, D.W., Wong, J.S.: On nonlinear contractions. Proc. Am. Math. Soc. 20, 335–341 (1969)

  7. Da Cruz Neto, J.X., Ferreira, O.P., Lucambio Pérez, L.R.: Monotone point-to-set vector fields. Balk. J. Geom. Appl. 5(1), 69–79 (2000)

  8. Dilshad, M.: Solving Yosida inclusion problem in Hadamard manifold. Arab. J. Math. 9, 357–366 (2020)

  9. Dilshad, M., Khan, A., Akram, M.: Splitting type viscosity methods for inclusion and fixed point problems on Hadamard manifolds. AIMS Math. 6(5), 5205–5221 (2021)

  10. Huang, S.: Approximations with weak contractions in Hadamard manifolds. Linear Nonlinear Anal. 1(2), 317–328 (2015)

  11. Li, C., López, G., Márquez, V.M.: Monotone vector fields and the proximal point algorithm on Hadamard manifolds. J. Lond. Math. Soc. 79(3), 663–683 (2009)

  12. Li, C., López, G., Márquez, V.M., Wang, J.H.: Resolvent of set-valued monotone vector fields in Hadamard manifolds. Set-Valued Anal. 19(3), 361–383 (2011)

  13. Li, C., López, G., Martín-Márquez, V.: Iterative algorithms for nonexpansive mappings on Hadamard manifolds. Taiwan. J. Math. 14(2), 541–559 (2010)

  14. Manaka, H., Takahashi, W.: Weak convergence theorems for maximal monotone operators with nonspreading mappings in a Hilbert space. CUBO Math. J. 13(1), 11–24 (2011)

  15. Martinet, B.: Régularisation d’inequations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Opér. 4, 154–158 (1970)

  16. Moudafi, A.: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241, 46–55 (2000)

  17. Németh, S.Z.: Monotone vector fields. Publ. Math. (Debr.) 54, 437–449 (1999)

  18. Németh, S.Z.: Variational inequalities on Hadamard manifolds. Nonlinear Anal. 52, 1491–1498 (2003)

  19. Rapcsák, T.: Smooth Nonlinear Optimization in \(\mathbb{R}^{n}\). Kluwer Academic, Dordrecht (1997)

  20. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976)

  21. Rudin, W.: Real and Complex Analysis, 3rd edn. McGraw-Hill, New York (1987)

  22. Sahu, D.R., Ansari, Q.H., Yao, J.C.: The prox-Tikhonov forward method and application. Taiwan. J. Math. 19, 481–503 (2015)

  23. Sakai, T.: Riemannian Geometry. Translations of Mathematical Monographs. Am. Math. Soc., Providence (1996)

  24. Takahashi, S., Takahashi, W., Toyoda, M.: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147(1), 27–41 (2010)

  25. Walter, R.: On the metric projections onto convex sets in Riemannian spaces. Arch. Math. XXV, 91–98 (1974)

  26. Wong, N.C., Sahu, D.R., Yao, J.C.: Solving variational inequalities involving nonexpansive type mappings. Nonlinear Anal. 69, 4732–4753 (2008)

  27. Xu, H.K.: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279–291 (2004)


Acknowledgements

This research was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-track Research Funding Program. The authors sincerely thank the unknown referees for their valuable suggestions and useful comments that have led to the present form of the original manuscript.

Funding

This research was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-Track Research Funding Program.

Author information


Contributions

The authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mohammad Dilshad.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Filali, D., Dilshad, M., Akram, M. et al. Viscosity method for hierarchical variational inequalities and variational inclusions on Hadamard manifolds. J Inequal Appl 2021, 66 (2021). https://doi.org/10.1186/s13660-021-02598-8
