Research | Open access
Iterative methods for vector equilibrium and fixed point problems in Hilbert spaces
Journal of Inequalities and Applications, volume 2022, Article number: 131 (2022)
Abstract
In this paper, algorithms for finding common solutions of strong vector equilibrium problems and fixed point problems of multivalued mappings are considered. First, a Minty vector equilibrium problem is introduced, and the relationship between the Minty vector equilibrium problem and the strong vector equilibrium problem is discussed. Then, by applying the Minty vector equilibrium problem, projection iterative methods are proposed and some convergence results are established in Hilbert spaces. The main results obtained in this paper develop and improve some recent works in this field.
1 Introduction
Let \(\mathcal{H}\) be a real Hilbert space and X be a nonempty subset of \(\mathcal{H}\). Let \(f:X\times X\to \mathbb{R}\) be a bifunction satisfying \(f(x,x)=0\) for all \(x\in X\). The scalar equilibrium problem consists in finding \(x^{*}\in X\) such that
$$ f\bigl(x^{*},y\bigr)\geq 0, \quad \forall y\in X. \quad (\mathrm{EP}) $$
As pointed out by Blum and Oettli [1], (EP) provides a unifying framework for several important problems, such as the optimization problem, saddle point problem, Nash equilibrium problem, fixed point problem, variational inequality and complementarity problem.
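For instance, with a mapping \(F:X\to \mathcal{H}\), the choice
$$ f(x,y)=\bigl\langle F(x), y-x\bigr\rangle , \quad x,y\in X, $$
satisfies \(f(x,x)=0\), and (EP) then becomes the classical variational inequality of finding \(x^{*}\in X\) with \(\langle F(x^{*}), y-x^{*}\rangle \geq 0\) for all \(y\in X\).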
Let Z be a real Hausdorff topological vector space and C a convex cone of Z. Let \(f: X \times X \to Z\) be a vector-valued bifunction satisfying \(f(x,x)=0\) for all \(x\in X\). In 1997, Ansari et al. [2] introduced the following vector equilibrium problems: find \(x^{*}\in X\) such that
$$ f\bigl(x^{*},y\bigr)\in C, \quad \forall y\in X, \quad (\mathrm{SVEP}) $$
or find \(x^{*}\in X\) such that
$$ f\bigl(x^{*},y\bigr)\notin -\operatorname{int}C, \quad \forall y\in X, \quad (\mathrm{WVEP}) $$
where \(\operatorname{int}C\) denotes the topological interior of C. Provided the cone C is proper (i.e., \(C\neq Z\)), any solution of (SVEP) must also be a solution of (WVEP).
Clearly, each of these problems constitutes a valid extension of (EP).
Existence of solutions is a fundamental question for equilibrium problems. In the past decades, it has been intensively studied by many authors, and a large number of existence results have been obtained in the literature. In most cases, a monotonicity or coerciveness condition on the equilibrium function f and/or a compactness condition on the feasible set X are imposed. For details, we refer the readers to the monographs [3–5] and the references therein.
The design of solution methods is another fundamental question for equilibrium problems. In recent years, many effective methods for solving scalar equilibrium problems have been proposed; for details, we refer the readers to [6–21] and the references therein. Recently, methods for finding a solution of vector equilibrium problems have also been explored. In 2009, by using a scalarization method, Cheng and Liu [22] suggested a projection iterative algorithm for finding solutions of a weak vector equilibrium problem by solving a corresponding convex feasibility problem in an n-dimensional Euclidean space. In 2012, by applying a regularization technique, Li and Wang [23] proposed a viscosity approximation method for finding a common element of the set of fixed points of a nonexpansive mapping and of the set of solutions of a strong vector equilibrium problem in a Hilbert space. Later, Shan and Huang [24] extended this viscosity approximation method for finding a common element of the set of fixed points of an infinite family of nonexpansive mappings, of the set of solutions of a generalized mixed vector equilibrium problem, and of the set of solutions of a variational inequality problem. In 2015, by applying the Gerstewitz nonlinear scalarization function, Wang and Li [25] presented a projection iterative algorithm for finding solutions of a strong vector equilibrium problem by solving a corresponding scalar optimization problem. Afterwards, by extending and developing the iterative method used in [25], Huang, Wang, and Mao [26] introduced a general iterative algorithm for solving a strong vector equilibrium problem. In 2018, Wang, Huang, and Zhu [27] further suggested a projection iterative algorithm for solving a strong vector equilibrium problem with variable domination structure. Very recently, by using the auxiliary principle, Chadli, Ansari, and Al-Homidan [28] also proposed an algorithm for bilevel vector equilibrium problems.
Motivated by the works mentioned above, in this paper, we shall investigate iterative methods for finding common solutions of strong vector equilibrium problems and fixed point problems of multivalued mappings. The organization of this paper is as follows. In Sect. 2, a Minty vector equilibrium problem is introduced and the relationship between the Minty vector equilibrium problem and the strong vector equilibrium problem is discussed. Some definitions and known results are also recalled in this section. In Sect. 3, by employing the Minty vector equilibrium problem, projection iterative methods are suggested for finding common solutions of a strong vector equilibrium problem and a fixed point problem of a multivalued mapping. Moreover, some convergence results are established under suitable conditions of cone continuity and convexity. The main results obtained in this paper generalize and improve the corresponding ones of Van, Strodiot, Nguyen, and Vuong [29], Iusem and Sosa [8], Shan and Huang [24], and Huang, Wang, and Mao [26].
2 Preliminaries
In this paper, let \(\mathcal{H}\) be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and the induced norm \(\| \cdot \| \). When \(\{x^{k}\}\) is a sequence in \(\mathcal{H}\), we denote strong convergence of \(\{x^{k}\}\) to \(x\in \mathcal{H}\) as \(k\to \infty \) by \(x^{k}\to x\) and weak convergence by \(x^{k}\rightharpoonup x\).
It is easy to see that
$$ \bigl\| t x+(1-t)y\bigr\| ^{2}=t\| x\| ^{2}+(1-t)\| y\| ^{2}-t(1-t)\| x-y\| ^{2} $$
for all \(x,y\in \mathcal{H}\) and all \(t\in \mathbb{R}\).
Let K be a nonempty closed and convex subset of \(\mathcal{H}\). For every element \(x\in \mathcal{H}\), there exists a unique nearest point in K, denoted by \(P_{K}(x)\), such that
$$ \bigl\| x-P_{K}(x)\bigr\| \leq \| x-y\| , \quad \forall y\in K. $$
Then \(P_{K}\) is called the metric projection of \(\mathcal{H}\) onto K.
Lemma 2.1
The metric projection has the following basic properties:

(i)
\(\langle x-P_{K}(x), y-P_{K}(x)\rangle \leq 0\) for all \(x\in \mathcal{H}\) and \(y\in K\);

(ii)
\(\| P_{K}(x)-y\| ^{2}\leq \| x-y\| ^{2}-\| x-P_{K}(x)\| ^{2} \) for all \(x\in \mathcal{H}\) and \(y\in K\);

(iii)
\(\| P_{K}(x)-P_{K}(y)\| ^{2}\leq \langle x-y, P_{K}(x)-P_{K}(y)\rangle \) for all \(x,y\in \mathcal{H}\);

(iv)
\(\| P_{K}(x)-P_{K}(y)\| \leq \| x-y\| \) for all \(x,y\in \mathcal{H}\).
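These properties can be checked numerically in a simple finite-dimensional instance (a minimal sketch, not part of the paper): take \(K=[0,1]^{2}\) in \(\mathcal{H}=\mathbb{R}^{2}\), where the metric projection is coordinatewise clamping, and test properties (i) and (iv) on random points:

```python
# Sanity check of Lemma 2.1 (i) and (iv) for K = [0,1]^2 in R^2,
# where P_K is coordinatewise clamping. Illustrative only.
import random

def proj_box(x):
    """Metric projection onto K = [0,1]^2."""
    return [min(max(t, 0.0), 1.0) for t in x]

def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-3, 3) for _ in range(2)]
    z = [random.uniform(-3, 3) for _ in range(2)]
    y = proj_box(z)                          # some point of K
    px, pz = proj_box(x), proj_box(z)
    # (i): <x - P_K(x), y - P_K(x)> <= 0 for every y in K
    assert dot([a - b for a, b in zip(x, px)],
               [a - b for a, b in zip(y, px)]) <= 1e-12
    # (iv): ||P_K(x) - P_K(z)|| <= ||x - z||
    dp = sum((a - b) ** 2 for a, b in zip(px, pz)) ** 0.5
    dx = sum((a - b) ** 2 for a, b in zip(x, z)) ** 0.5
    assert dp <= dx + 1e-12
print("ok")
```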
Let X be a nonempty subset of \(\mathcal{H}\) and \(S:X\to 2^{X}\) a multivalued mapping. The fixed point problem associated with S can be formulated as:
$$ \text{find } x^{*}\in X \text{ such that } x^{*}\in S\bigl(x^{*}\bigr). \quad (\mathrm{FPP}) $$
Denote by \(\operatorname{Fix}(S)\) the set of all fixed points of S, i.e., \(\operatorname{Fix}(S)\) denotes the solution set of (FPP).
Remark 2.1
When \(S:X\to X\) is a vector-valued mapping, (FPP) reduces to finding \(x^{*}\in X\) such that \(x^{*}=S(x^{*})\).
For dealing with (SVEP), in this paper, we need the following Minty vector equilibrium problem: find \(x^{*}\in X\) such that
$$ f\bigl(y,x^{*}\bigr)\in -C, \quad \forall y\in X. \quad (\mathrm{MVEP}) $$
Denote by \(\mathcal{S}_{0}\), \(\mathcal{S}_{E}\), and \(\mathcal{S}_{M}\) the solution sets of (EP), (SVEP), and (MVEP), respectively.
Next, we shall give some definitions and known results needed in this paper.
Definition 2.1
([30])
Let E and Z be two real Hausdorff topological vector spaces, \(X\subseteq E\) a nonempty subset, and \(C\subseteq Z\) a closed convex cone. A mapping \(g:X\to Z\) is said to be

(i)
C-upper semicontinuous (for short, C-u.s.c.) (resp. C-lower semicontinuous (for short, C-l.s.c.)) at \(x_{0}\in X\) if, for any neighborhood V of 0 in Z, there exists a neighborhood U of \(x_{0}\) in E such that
$$\begin{aligned} & g(x)\in g(x_{0})+V-C,\quad \forall x\in U\cap X \\ &\quad \bigl(\text{resp.}\ g(x)\in g(x_{0})+V+C, \forall x\in U \cap X\bigr); \end{aligned}$$ 
(ii)
C-u.s.c. (resp. C-l.s.c.) on X if it is C-u.s.c. (resp. C-l.s.c.) at every point \(x\in X\);

(iii)
C-continuous on X if it is both C-u.s.c. and C-l.s.c. on X.
Remark 2.2
From the above definition, it is easy to see that a mapping \(g: X\to Z\) is C-u.s.c. at \(x_{0}\in X\) if and only if −g is C-l.s.c. at \(x_{0}\).
Remark 2.3
If \(g: X\to Z\) is continuous on X, then it is C-continuous on X. Conversely, if g is C-continuous on X, then it is continuous provided that the cone C has a closed convex bounded base (see [30, Theorem 5.3, pp. 22–23]).
The following lemma plays an important role in our convergence analysis.
Lemma 2.2
Let E and Z be two real Hausdorff topological vector spaces, \(X\subseteq E\) a nonempty closed subset, and \(C\subseteq Z\) a closed convex cone. Let \(g, h:X\to Z\) be two vector-valued mappings. Let \(\{x_{\alpha}\}\) and \(\{y_{\alpha}\}\) be two nets in X such that \(x_{\alpha}\to x_{0}\in X\) and \(y_{\alpha}\to y_{0}\in X\). Let \(\{w_{\alpha}\}\) be a given net in Z such that \(w_{\alpha}\to w_{0}\in Z\).

(i)
If g is C-l.s.c. at \(x_{0}\) and h is C-u.s.c. at \(y_{0}\) and
$$ g(x_{\alpha})\in h(y_{\alpha})+w_{\alpha}-C\quad \textit{for all } \alpha , $$then \(g(x_{0})\in h(y_{0})+w_{0}-C\);

(ii)
If g is C-u.s.c. at \(x_{0}\) and h is C-l.s.c. at \(y_{0}\) and
$$ g(x_{\alpha})\in h(y_{\alpha})+w_{\alpha}+C\quad \textit{for all } \alpha , $$then \(g(x_{0})\in h(y_{0})+w_{0}+C\).
Proof
(i) Suppose to the contrary that \(g(x_{0})\notin h(y_{0})+w_{0}-C\), i.e., \(g(x_{0})-h(y_{0})-w_{0}\notin -C\). Then, by the closedness of C, there exists some neighborhood V of the origin in Z such that
$$ \bigl(g(x_{0})-h(y_{0})-w_{0}+V\bigr)\cap (-C)=\emptyset . $$
Notice that C is a convex cone, so \(-C-C-C=-C\). We can further obtain
$$ \bigl(g(x_{0})-h(y_{0})-w_{0}+V\bigr)\cap (-C-C-C)=\emptyset . \quad (2.1) $$
For the above neighborhood V of the origin in Z, it is known from the theory of topological vector spaces that there exists a balanced neighborhood \(V'\) of the origin in Z such that \(V'+V'+V'\subseteq V\). For the neighborhood \(V'\), since g is C-l.s.c. at \(x_{0}\), there exists a neighborhood \(U(x_{0})\) of \(x_{0}\) such that
$$ g(x)\in g(x_{0})+V'+C, \quad \forall x\in U(x_{0})\cap X. $$
As \(\{x_{\alpha}\}\subseteq X\) and \(x_{\alpha}\to x_{0}\in X\), there must exist some \(\alpha _{1}\) such that, for every \(\alpha \geq \alpha _{1}\),
$$ g(x_{\alpha})\in g(x_{0})+V'+C. \quad (2.2) $$
On the other hand, since h is C-u.s.c. at \(y_{0}\), there exists a neighborhood \(U(y_{0})\) of \(y_{0}\) such that
$$ h(y)\in h(y_{0})+V'-C, \quad \forall y\in U(y_{0})\cap X. $$
Since \(\{y_{\alpha}\}\subseteq X\) and \(y_{\alpha}\to y_{0}\in X\), there must exist some \(\alpha _{2}\) such that, for every \(\alpha \geq \alpha _{2}\),
$$ h(y_{\alpha})\in h(y_{0})+V'-C. \quad (2.3) $$
In addition, since \(w_{\alpha}\to w_{0}\), there must exist some \(\alpha _{3}\) such that, for every \(\alpha \geq \alpha _{3}\),
$$ w_{\alpha}\in w_{0}+V'. \quad (2.4) $$
Take any \(\alpha _{0}\) such that \(\alpha _{0}\geq \alpha _{1}\), \(\alpha _{0}\geq \alpha _{2}\), and \(\alpha _{0}\geq \alpha _{3}\). Notice that \(V'\) is a balanced neighborhood and C is a convex cone. Then, by (2.2), (2.3), and (2.4), we have, for any \(\alpha \geq \alpha _{0}\),
$$ g(x_{0})\in g(x_{\alpha})-V'-C\subseteq h(y_{\alpha})+w_{\alpha}-C-V'-C\subseteq h(y_{0})+w_{0}+V'+V'+V'-C-C-C\subseteq h(y_{0})+w_{0}+V-C-C-C. $$
This, together with (2.1), implies that
$$ \bigl(g(x_{0})-h(y_{0})-w_{0}+V\bigr)\cap (-C-C-C)\neq \emptyset , $$
a contradiction. Thus \(g(x_{0})\in h(y_{0})+w_{0}-C\).
(ii) Since g is C-u.s.c. at \(x_{0}\), we know that −g is C-l.s.c. at \(x_{0}\). Similarly, since h is C-l.s.c. at \(y_{0}\), we know that −h is C-u.s.c. at \(y_{0}\). In addition, since \(w_{\alpha}\to w_{0}\), we have \((-w_{\alpha})\to (-w_{0})\). By the assumption, we further have
$$ -g(x_{\alpha})\in -h(y_{\alpha})-w_{\alpha}-C\quad \text{for all } \alpha . $$
Then, it follows from item (i) that
$$ -g(x_{0})\in -h(y_{0})-w_{0}-C. $$
That is, \(g(x_{0})\in h(y_{0})+w_{0}+C\). □
Definition 2.2
([30])
Let E and Z be two real Hausdorff topological vector spaces, \(X\subseteq E\) a nonempty subset, and \(C\subseteq Z\) a closed convex cone. A mapping \(g:X\to Z\) is called lower semicontinuous (for short, l.s.c.) (resp. upper semicontinuous (for short, u.s.c.)) on X if, for any \(z\in Z\), the set
$$ \bigl\{ x\in X: g(x)\in z-C\bigr\} \quad \bigl(\text{resp.}\ \bigl\{ x\in X: g(x)\in z+C\bigr\} \bigr) $$
is closed in X.
Lemma 2.3
([31])
If g is C-u.s.c. (resp. C-l.s.c.) on X, then it is u.s.c. (resp. l.s.c.) on X.
Definition 2.3
Let E and Z be two real Hausdorff topological vector spaces, \(X\subseteq E\) a nonempty convex subset, and \(C\subseteq Z\) a convex cone. A mapping \(h:X\to Z\) is said to be

(i)
C-convex if, for any \(u_{1},u_{2}\in X\) and any \(t\in [0,1]\), one has
$$ h\bigl(t u_{1}+(1-t)u_{2}\bigr)\in t h(u_{1})+(1-t)h(u_{2})-C; $$ 
(ii)
C-quasiconvex if, for any \(z\in Z\), the set \(\{u\in X: h(u)\in z-C\}\) is convex;

(iii)
properly C-quasiconvex if, for any \(u_{1},u_{2}\in X\) and for any \(t\in [0,1]\), one has
$$\begin{aligned}& \text{either}\quad h\bigl(t u_{1}+(1-t)u_{2}\bigr)\in h(u_{1})-C, \\& \text{or}\quad h\bigl(t u_{1}+(1-t)u_{2}\bigr)\in h(u_{2})-C. \end{aligned}$$ 
(iv)
properly C-quasiconcave if −h is properly C-quasiconvex.
Remark 2.4
Obviously, if h is C-convex or properly C-quasiconvex, then it is C-quasiconvex.
Definition 2.4
([33])
Let Z be a real Hausdorff topological vector space and \(C\subseteq Z\) a convex cone. A nonempty set \(M\subseteq Z\) is called upward directed if, for every \(u_{1},u_{2}\in M\), there exists \(u\in M\) such that \(u_{1}\in u-C\) and \(u_{2}\in u-C\).
The following theorem is useful in the convergence analysis of our algorithm, which provides the existence of maximal elements.
Theorem 2.1
([34])
Let E and Z be two real Hausdorff topological vector spaces, \(X\subseteq E\) a nonempty compact subset, and \(C\subseteq Z\) a closed convex cone. Assume that \(f:X\to Z\) is u.s.c. and \(f(X)\) is upward directed. Then, there exists \(\bar{x}\in X\) such that
$$ f(x)\in f(\bar{x})-C, \quad \forall x\in X. $$
Now, we present an important local property of (SVEP), which says that local solutions of (SVEP) are indeed global ones.
Theorem 2.2
([25])
Let E and Z be two real Hausdorff topological vector spaces, \(X\subseteq E\) a nonempty convex subset, and \(C\subseteq Z\) a closed convex cone. Let \(f: X\times X\to Z\) be a vector-valued bifunction such that, for each \(x\in X\), \(f(x,x)=0\) and \(f(x,y)\) is C-convex in y. If there exist an open set \(U\subseteq E\) and \(\bar{x}\in X\cap U\) such that \(f(\bar{x},y)\in C\), \(\forall y\in X\cap U\), then x̄ solves (SVEP).
The following lemma uncovers the relation between (SVEP) and (MVEP).
Lemma 2.4
([26])
Let \(\mathcal{H}\) be a real Hilbert space and X a nonempty closed convex subset of \(\mathcal{H}\). Let Z be a real Hausdorff topological vector space and C a closed convex cone of Z. Let \(f:X\times X\to Z\) be a vector-valued bifunction such that, for any \(x\in X\), \(f(x,x)=0\) and \(f(x,y)\) is C-convex in y and, for each \(y\in X\), \(f(x,y)\) is u.s.c. in x. Then, the solution set of (MVEP) is contained in the solution set of (SVEP), i.e., \(\mathcal{S}_{M}\subseteq \mathcal{S}_{E}\).
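A simple scalar illustration of Lemma 2.4 (an illustrative example, not taken from [26]): take \(\mathcal{H}=Z=\mathbb{R}\), \(C=[0,\infty )\), \(X=[-1,1]\), and \(f(x,y)=x(y-x)\). Here f is continuous, \(f(x,x)=0\), and \(f(x,\cdot )\) is affine, hence C-convex. The condition \(f(y,x^{*})=y(x^{*}-y)\in -C\) for all \(y\in X\) forces \(x^{*}=0\), so \(\mathcal{S}_{M}=\{0\}\); on the other hand, \(f(0,y)=0\in C\) for all y, so \(0\in \mathcal{S}_{E}\), and indeed \(\mathcal{S}_{M}\subseteq \mathcal{S}_{E}\).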
Definition 2.5
([35])
Let \(\mathcal{H}\) be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and the induced norm \(\| \cdot \| \). Let X be a nonempty closed convex subset of \(\mathcal{H}\). Denote by \(\operatorname{CCB}(X)\) the family of nonempty convex closed bounded subsets of X. A multivalued mapping \(T: X \to \operatorname{CCB}(X)\) is said to be ∗-nonexpansive if
$$ \bigl\| P_{T(x)}(x)-P_{T(y)}(y)\bigr\| \leq \| x-y\| , \quad \forall x,y\in X, $$
where \(P_{T(x)}(x)\) denotes the metric projection of x onto the nonempty convex closed bounded subset \(T(x)\).
Remark 2.5
If \(T:X\to X\) is a vector-valued mapping and nonexpansive, i.e., \(\| Tx-Ty\| \leq \| x-y\| \) for all \(x,y\in X\), then T is clearly ∗-nonexpansive. In particular, the identity mapping I on X is ∗-nonexpansive.
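For a genuinely multivalued example (an illustrative construction, not from the paper): on \(X=[-1,1]\), take \(T(x)=[x/2-r, x/2+r]\cap X\) with \(r=1/4\). Each value is a nonempty convex closed bounded interval, and the nearest-point map \(x\mapsto P_{T(x)}(x)\) is piecewise affine with slopes in \(\{1/2,1\}\), hence 1-Lipschitz, so T is ∗-nonexpansive. A quick numerical check:

```python
# T(x) = [x/2 - r, x/2 + r] intersected with [-1, 1], on X = [-1, 1]:
# a multivalued *-nonexpansive mapping (illustrative). p_T(x) = P_{T(x)}(x).
R = 0.25

def p_T(x):
    lo = max(-1.0, x / 2 - R)
    hi = min(1.0, x / 2 + R)
    return min(max(x, lo), hi)      # clamp x into the interval T(x)

pts = [i / 100 for i in range(-100, 101)]
# *-nonexpansiveness: |P_{T(a)}(a) - P_{T(b)}(b)| <= |a - b|
assert all(abs(p_T(a) - p_T(b)) <= abs(a - b) + 1e-12
           for a in pts for b in pts)
print("ok")
```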
Theorem 2.3
([36, Demiclosedness Principle])
Let \(\mathcal{H}\) be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and the induced norm \(\| \cdot \| \). Let X be a nonempty bounded closed convex subset of \(\mathcal{H}\). Denote by \(\mathcal{K}(X)\) the family of nonempty compact convex subsets of X. Let \(T:X\to \mathcal{K}(X)\) be a ∗-nonexpansive mapping. Then \(\operatorname{Fix}(T)\) is convex and closed, and \(I-T\) is demiclosed at 0, i.e., for every sequence \(\{x^{k}\}\subseteq X\) such that \(x^{k}\rightharpoonup x\) and \(d(x^{k},T(x^{k}))\to 0\) as \(k\to \infty \), one has \(x\in T(x)\).
3 Main results
In this section, we shall propose a projection iterative algorithm for finding common solutions of (SVEP) and (FPP) and further investigate its convergence.
From now on, unless otherwise specified, let \(\mathcal{H}\) be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and the induced norm \(\| \cdot \| \). Let X be a nonempty compact convex subset of \(\mathcal{H}\). Let \(\operatorname{CCB}(X)\) be the family of nonempty convex closed bounded subsets of X and \(\mathcal{K}(X)\) be the family of nonempty compact convex subsets of X. It is clear that \(\mathcal{K}(X)\) is included in \(\operatorname{CCB}(X)\). Let Z be a real Hausdorff topological vector space and \(C\subseteq Z\) a closed convex pointed cone. Let \(S:X\to \mathcal{K}(X)\) be a multivalued mapping and \(f: X \times X \to Z\) a vector-valued bifunction satisfying \(f(x,x)=0\) for all \(x\in X\).
Also, the following assumptions are supposed to be satisfied:

(A1)
S is ∗-nonexpansive on X;

(A2)
f is C-continuous on \(X\times X\);

(A3)
For each \(x\in X\), \(f(x,y)\) is C-convex in y;

(A4)
For any given \(y \in X\), for every \(x^{1},x^{2} \in X\), there exists \(x \in X\), \(\| x\| \leq \max \{\| x^{1}\| ,\| x^{2}\| \}\), such that
$$ f\bigl(x^{1},y\bigr)\in f(x,y)-C \quad \text{and} \quad f \bigl(x^{2},y\bigr)\in f(x,y)-C.$$
Remark 3.1
Assumption (A4) holds, for example, when \(f(\cdot ,y)\) is C-u.s.c. and properly C-quasiconcave on X; see Huang, Wang, and Mao [26, Proposition 3.1].
Algorithm 3.1
Step 0. (Initial step) Take an arbitrary point \(e\in C\). Choose two sequences \(\{\lambda _{k}\}\) and \(\{\mu _{k}\}\) satisfying \(\lambda _{k}\geq 0\) and \(\lambda _{k}\to 0\) (\(k\to \infty \)) and \(\{\mu _{k}\}\subseteq [a,b]\) for some \(a,b\in (0,1)\). Select an initial \(x^{0}\in X\). Set \(k=0\) and \(\rho _{0}=\| x^{0}\| \).
Step 1. Define
$$ X^{k}=\bigl\{ x\in X: \| x\| \leq \rho _{k}+1\bigr\} . $$
Step 2. Find \(y^{k}\in X^{k}\) such that
$$ f\bigl(y^{k},x^{k}\bigr)+\lambda _{k}e\in f\bigl(y,x^{k}\bigr)+C, \quad \forall y\in X^{k}. $$
Step 3. Compute \(u^{k}\in X\) as
$$ u^{k}=P_{L_{f}(y^{k})}\bigl(x^{k}\bigr), $$
where \(P_{L_{f}(y^{k})}(\cdot )\) denotes the metric projection onto \(L_{f}(y^{k})=\{x\in X:f(y^{k},x)\in -C\}\).
Step 4. Calculate \(x^{k+1}\) as
$$ x^{k+1}=\mu _{k}x^{k}+(1-\mu _{k})v^{k}, $$
where \(v^{k}=P_{S(u^{k})}(u^{k})\).
Step 5. Compute \(\rho _{k+1}\) as
$$ \rho _{k+1}=\max \bigl\{ \rho _{k}, \bigl\| x^{k+1}\bigr\| \bigr\} . $$
Step 6. Set \(k=k+1\) and return to Step 1.
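To make the steps concrete, the following minimal sketch runs the scheme of Algorithm 3.1 in the scalar case \(Z=\mathbb{R}\), \(C=[0,\infty )\), \(X=[-1,1]\), with S the identity mapping (so \(v^{k}=u^{k}\)) and \(f(x,y)=x(y-x)\), whose unique equilibrium is \(x^{*}=0\); the grid search stands in for the approximate maximization of Step 2, and all numerical choices here are illustrative:

```python
# Scalar specialization of the projection method (illustrative sketch):
# H = Z = R, C = [0, inf), X = [-1, 1], S = identity (so v^k = u^k),
# f(x, y) = x * (y - x), the equilibrium form of the variational
# inequality for F(x) = x; its unique solution is x* = 0.

def f(x, y):
    return x * (y - x)

def project_interval(x, lo, hi):
    """Metric projection of x onto the closed interval [lo, hi]."""
    return min(max(x, lo), hi)

def solve(x0=0.9, iters=60, mu=0.5):
    X_LO, X_HI = -1.0, 1.0
    x = x0
    rho = abs(x)                                 # rho_0 = ||x^0||
    for _ in range(iters):
        # Step 1: X^k = X intersected with the ball of radius rho_k + 1
        lo = max(X_LO, -(rho + 1.0))
        hi = min(X_HI, rho + 1.0)
        # Step 2: y^k approximately maximizes f(., x^k) over X^k;
        # the grid error plays the role of the tolerance lambda_k * e
        grid = [lo + i * (hi - lo) / 400 for i in range(401)]
        y = max(grid, key=lambda t: f(t, x))
        # Step 3: u^k = P_{L_f(y^k)}(x^k), L_f(y^k) = {x in X : f(y^k, x) in -C}
        if y > 0:
            u = project_interval(x, X_LO, y)     # {x : y*(x - y) <= 0} = [-1, y]
        elif y < 0:
            u = project_interval(x, y, X_HI)     # [y, 1]
        else:
            u = x                                # L_f(0) is all of X
        v = u                                    # v^k = P_{S(u^k)}(u^k), S = I
        x = mu * x + (1 - mu) * v                # Step 4
        rho = max(rho, abs(x))                   # Step 5
    return x

print(solve())   # approaches x* = 0 (up to the grid resolution)
```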
Now, we start the convergence analysis of Algorithm 3.1.
Lemma 3.1
Algorithm 3.1 is well-defined.
Proof
Clearly, \(\rho _{k}=\max \{\| x^{0}\| , \| x^{1}\| ,\dots ,\| x^{k}\| \}\), so we see that the sequence \(\{\rho _{k}\}\) is nondecreasing. This implies that \(X^{k}\subseteq X^{k+1}\) for all k. As \(x^{0}\) belongs to \(X^{0}\), all the sets \(X^{k}\) are nonempty and trivially closed. Noting that \(X^{k}\subseteq X\) and X is compact, we obtain that the sets \(X^{k}\) are compact. By assumption (A4), we know that, for any given \(x^{k}\in X^{k}\), the set \(f(X^{k},x^{k})=\{f(y,x^{k}):y\in X^{k}\}\) is upward directed. Moreover, by assumption (A2), the vector-valued mapping \(f(\cdot ,x^{k})\) is C-u.s.c. and so is u.s.c. Then, it follows from Theorem 2.1 that there exists \(y^{k}\in X^{k}\) such that
$$ f\bigl(y,x^{k}\bigr)\in f\bigl(y^{k},x^{k}\bigr)-C, \quad \forall y\in X^{k}. $$
This, together with \(\lambda _{k}\geq 0\) and \(e\in C\), implies that \(y^{k}\) can be selected as the point desired in Step 2.
Clearly, the set \(L_{f}(y^{k})\) is nonempty as \(y^{k}\in L_{f}(y^{k})\). Moreover, \(L_{f}(y^{k})\) is convex as \(f(y^{k},\cdot )\) is C-convex. By assumption (A2), \(f(y^{k},\cdot )\) is C-l.s.c. and so is l.s.c. It follows that the set \(L_{f}(y^{k})\) is closed. Thus, the metric projection of \(x^{k}\) onto \(L_{f}(y^{k})\) exists and is unique. Similarly, as the set \(S(u^{k})\) is nonempty, closed, and convex, the metric projection of \(u^{k}\) onto \(S(u^{k})\) also exists and is unique. Therefore, \(x^{k+1}\) is uniquely determined by Step 4. □
Let \(\{x^{k}\}\), \(\{y^{k}\}\), \(\{u^{k}\}\), \(\{v^{k}\}\), and \(\{\rho _{k}\}\) be the sequences generated by Algorithm 3.1. Let \(L_{\infty} = \bigcap_{k=1}^{\infty} L_{f}(y^{k})\).
Proposition 3.1
For each \(x^{*}\in L_{\infty}\cap \operatorname{Fix}(S)\), one has

(i)
\(\| v^{k}-x^{*}\| \leq \| u^{k}-x^{*}\| \);

(ii)
\(\| u^{k}-x^{*}\| ^{2} \leq \| x^{k}-x^{*}\| ^{2} - \| x^{k}-u^{k}\| ^{2} \leq \| x^{k}-x^{*}\| ^{2}\);

(iii)
\(\| x^{k+1}-x^{*}\| \leq \| x^{k}-x^{*}\| \);

(iv)
the sequence \(\{\| x^{k}-x^{*}\| \}\) is convergent.
Proof
Take any \(x^{*}\in L_{\infty}\cap \operatorname{Fix}(S)\) and let it be fixed.
(i) Since \(x^{*}\in \operatorname{Fix}(S)\), we have \(x^{*}\in S(x^{*})\). Thus \(x^{*}=P_{S(x^{*})}(x^{*})\). By assumption (A1), S is ∗-nonexpansive on X. It follows that
$$ \bigl\| v^{k}-x^{*}\bigr\| =\bigl\| P_{S(u^{k})}\bigl(u^{k}\bigr)-P_{S(x^{*})}\bigl(x^{*}\bigr)\bigr\| \leq \bigl\| u^{k}-x^{*}\bigr\| . \quad (3.1) $$
(ii) Notice that \(x^{*}\in L_{\infty}\subseteq L_{f}(y^{k})\) and \(u^{k}\) is the metric projection of \(x^{k}\) onto \(L_{f}(y^{k})\). Then, by the property of metric projection, one has
$$ \bigl\langle x^{k}-u^{k}, x^{*}-u^{k}\bigr\rangle \leq 0. $$
It follows that
$$ \bigl\| u^{k}-x^{*}\bigr\| ^{2}\leq \bigl\| x^{k}-x^{*}\bigr\| ^{2}-\bigl\| x^{k}-u^{k}\bigr\| ^{2}. \quad (3.2) $$
Hence, we obtain
$$ \bigl\| u^{k}-x^{*}\bigr\| \leq \bigl\| x^{k}-x^{*}\bigr\| . $$
(iii) Noting that \(x^{k+1} = \mu _{k}x^{k} + (1-\mu _{k})v^{k}\), we have
$$ \bigl\| x^{k+1}-x^{*}\bigr\| ^{2}=\mu _{k}\bigl\| x^{k}-x^{*}\bigr\| ^{2}+(1-\mu _{k})\bigl\| v^{k}-x^{*}\bigr\| ^{2}-\mu _{k}(1-\mu _{k})\bigl\| x^{k}-v^{k}\bigr\| ^{2}. \quad (3.3) $$
Then, by (3.3), (3.1), and (3.2), one has
$$ \bigl\| x^{k+1}-x^{*}\bigr\| ^{2}\leq \mu _{k}\bigl\| x^{k}-x^{*}\bigr\| ^{2}+(1-\mu _{k})\bigl\| u^{k}-x^{*}\bigr\| ^{2}\leq \bigl\| x^{k}-x^{*}\bigr\| ^{2}. \quad (3.4) $$
(iv) Clearly, the sequence \(\{\| x^{k}-x^{*}\| \}\) is nonnegative. By item (iii), it is also nonincreasing and thus convergent. □
Proposition 3.2
For each k, let \(w^{k}=P_{S(x^{k})}(x^{k})\). If \(L_{\infty}\cap \operatorname{Fix}(S)\neq \emptyset \), then the sequences \(\{\| x^{k}-w^{k}\| \}\), \(\{\| x^{k}-u^{k}\| \}\), and \(\{\| x^{k}-v^{k}\| \}\) all converge to 0.
Proof
Take any \(x^{*}\in L_{\infty}\cap \operatorname{Fix}(S)\) and let \(x^{*}\) be fixed. Then, by applying successively (3.1) and (3.2) to (3.3), we can obtain
$$ \bigl\| x^{k+1}-x^{*}\bigr\| ^{2}\leq \bigl\| x^{k}-x^{*}\bigr\| ^{2}-\mu _{k}(1-\mu _{k})\bigl\| x^{k}-v^{k}\bigr\| ^{2}. \quad (3.5) $$
Notice that \(\mu _{k}(1-\mu _{k})\geq a(1-b)>0\). We can conclude from (3.5) that
$$ a(1-b)\bigl\| x^{k}-v^{k}\bigr\| ^{2}\leq \bigl\| x^{k}-x^{*}\bigr\| ^{2}-\bigl\| x^{k+1}-x^{*}\bigr\| ^{2}. $$
This, together with the fact that the sequence \(\{\| x^{k}-x^{*}\| \}\) is convergent, yields \(\| x^{k}-v^{k}\| \to 0\) as \(k\to \infty \). Further, by applying (3.4) and (3.2), we can obtain
$$ \bigl\| x^{k+1}-x^{*}\bigr\| ^{2}\leq \bigl\| x^{k}-x^{*}\bigr\| ^{2}-(1-\mu _{k})\bigl\| x^{k}-u^{k}\bigr\| ^{2}. $$
Since \(0<1-b\leq 1-\mu _{k}\), we get
$$ (1-b)\bigl\| x^{k}-u^{k}\bigr\| ^{2}\leq \bigl\| x^{k}-x^{*}\bigr\| ^{2}-\bigl\| x^{k+1}-x^{*}\bigr\| ^{2}. $$
Again, from the convergence of the sequence \(\{\| x^{k}-x^{*}\| \}\), we obtain \(\| x^{k}-u^{k}\| \to 0\) as \(k \to \infty \).
Noting that the multivalued mapping \(S(\cdot )\) is ∗-nonexpansive, we have
$$ \bigl\| w^{k}-v^{k}\bigr\| =\bigl\| P_{S(x^{k})}\bigl(x^{k}\bigr)-P_{S(u^{k})}\bigl(u^{k}\bigr)\bigr\| \leq \bigl\| x^{k}-u^{k}\bigr\| . $$
It follows that
$$ \bigl\| w^{k}-x^{k}\bigr\| \leq \bigl\| w^{k}-v^{k}\bigr\| +\bigl\| v^{k}-x^{k}\bigr\| \leq \bigl\| x^{k}-u^{k}\bigr\| +\bigl\| v^{k}-x^{k}\bigr\| . $$
From this, we get \(\| w^{k}-x^{k}\| \to 0\) as \(k \to \infty \). □
Now, we are ready to prove the convergence of Algorithm 3.1.
Theorem 3.1
Assume that \(L_{\infty}\cap \operatorname{Fix}(S)\neq \emptyset \). Then, the sequence \(\{x^{k}\}\) generated by Algorithm 3.1 converges to some x̄ belonging to the set \(\mathcal{S}_{E}\cap \operatorname{Fix}(S)\).
Proof
For each k, let \(w^{k}=P_{S(x^{k})}(x^{k})\). As \(\{x^{k}\}\) is a sequence in the compact set X, it has a convergent subsequence \(\{x^{k_{j}}\}\) such that \(x^{k_{j}} \to \bar{x}\in X\) as \(j \to \infty \). Similarly, since \(\{y^{k_{j}}\}\) is a sequence in the compact set X, it also has a convergent subsequence. Without loss of generality, we may assume that \(y^{k_{j}}\to \bar{y}\in X\) as \(j \to \infty \).
To show the conclusion, we divide the proof into three steps.
(I) \(\bar{x}\in \operatorname{Fix}(S)\), i.e., xÌ„ is a fixed point of S.
By Proposition 3.2, we know that the sequence \(\{\| x^{k}-w^{k}\| \}\) converges to 0 as \(k\to \infty \). As a consequence, its subsequence \(\{\| x^{k_{j}}-w^{k_{j}}\| \}\) also converges to 0 as \(j\to \infty \). This yields \(d(x^{k_{j}},S(x^{k_{j}})) \to 0\) as \(j \to \infty \). As \(\{x^{k_{j}}\}\) converges to \(\bar{x}\in X\), it also converges weakly to \(\bar{x}\in X\). Notice that S is ∗-nonexpansive on X. We can conclude from the Demiclosedness Principle that \(\operatorname{Fix}(S)\) is convex and closed, and \(I-S\) is demiclosed at 0. Thus \(\bar{x}\in S(\bar{x})\), i.e., x̄ is a fixed point of S.
(II) \(\bar{x}\in \mathcal{S}_{E}\), i.e., x̄ is a solution of (SVEP).
We first show that \(f(\bar{y},\bar{x})=0\).
In fact, by Proposition 3.2, we get \(\| x^{k_{j}}-u^{k_{j}}\| \to 0\) as \(j \to \infty \). This, together with \(x^{k_{j}} \to \bar{x}\), implies \(u^{k_{j}} \to \bar{x}\) as \(j \to \infty \). In addition, by the definition of \(L_{f}(y^{k_{j}})\) and the fact that \(u^{k_{j}}=P_{L_{f}(y^{k_{j}})}(x^{k_{j}})\), the metric projection of \(x^{k_{j}}\) onto \(L_{f}(y^{k_{j}})\), belongs to \(L_{f}(y^{k_{j}})\), we can get \(f(y^{k_{j}},u^{k_{j}}) \in -C\). As f is C-l.s.c., it is l.s.c. Thus, \(f(\bar{y},\bar{x})\in -C\).
On the other hand, observe that \(\rho _{k}=\max \{\| x^{0}\| , \| x^{1}\| ,\dots ,\| x^{k}\| \}\) and so \(x^{k} \in X^{k}\). Then, by Step 2, we have
$$ f\bigl(y^{k},x^{k}\bigr)+\lambda _{k}e\in f\bigl(x^{k},x^{k}\bigr)+C=C. $$
It follows that
$$ f\bigl(y^{k_{j}},x^{k_{j}}\bigr)\in -\lambda _{k_{j}}e+C. $$
Since f is C-u.s.c. and \(\lambda _{k_{j}}\to 0\), we can conclude from Lemma 2.2(ii) that \(f(\bar{y},\bar{x}) \in C\). Notice that the cone C is pointed, so \(C\cap (-C)=\{0\}\). This, together with \(f(\bar{y},\bar{x})\in -C\), gives \(f(\bar{y},\bar{x})=0\).
Next, we show \(\bar{x} \in \mathcal{S}_{E}\).
Indeed, it is known from functional analysis that the set X is bounded as it is compact. Then the sequence \(\{x^{k}\}\) is also bounded as it is contained in X. Moreover, by noting the fact that \(\rho _{k}=\max \{\| x^{0}\| , \| x^{1}\| ,\dots ,\| x^{k}\| \}\), we know that the sequence \(\{\rho _{k}\}\) is bounded. Let \(\bar{\rho}=\sup \{\rho _{k}\}\). Take any \(\delta \in (0,1)\) and let \(B(\delta )\) be the open ball in \(\mathcal{H}\) centered at 0 with radius \(\bar{\rho}+1-\delta \). Then, we have
$$ \| \bar{x}\| \leq \bar{\rho}< \bar{\rho}+1-\delta . $$
This indicates that x̄ belongs to the interior of \(B(\delta )\). We claim
$$ f(y,\bar{x})\in -C, \quad \forall y\in X\cap B(\delta ), \quad (3.6) $$
which means that x̄ is a local solution of (MVEP) on \(X\cap B(\delta )\). It follows from Lemma 2.4 that x̄ is also a local solution of (SVEP) on \(X\cap B(\delta )\). Then, by Theorem 2.2, we know that x̄ is further a global solution of (SVEP), i.e., \(\bar{x}\in \mathcal{S}_{E}\).
Hence, it remains to show that (3.6) holds. In fact, by the definition of supremum, we can choose \(k_{0}\) satisfying \(\rho _{k_{0}} \geq \bar{\rho} - \delta \). Observe that \(\{\rho _{k}\}\) is nondecreasing. We have \(\rho _{k} +1 \geq \bar{\rho} + 1 - \delta \) for all \(k \geq k_{0}\). It follows that \(X \cap B(\delta ) \subseteq X^{k}\) for all \(k \geq k_{0}\). Thus, for each \(y \in X \cap B(\delta )\), we have \(y\in X^{k}\) for all \(k \geq k_{0}\). Further, by applying Step 2, we can obtain
$$ f\bigl(y^{k},x^{k}\bigr)+\lambda _{k}e\in f\bigl(y,x^{k}\bigr)+C, \quad \forall k\geq k_{0}. $$
As f is C-continuous on X, it is both C-u.s.c. and C-l.s.c. on X. Notice that \(\lambda _{k}\to 0\) as \(k\to \infty \). Then, it follows from Lemma 2.2(i) that
$$ f(y,\bar{x})\in f(\bar{y},\bar{x})-C. $$
As \(f(\bar{y},\bar{x})=0\), we get \(f(y,\bar{x})\in -C\). Then, by the arbitrariness of y, we know that (3.6) holds.
(III) The whole sequence \(\{x^{k}\}\) converges to xÌ„ as \(k\to \infty \).
Indeed, for each \(\delta \in (0,1)\), we can obtain from (3.6) that
$$ f(y,\bar{x})\in -C, \quad \forall y\in X \text{ with } \| y\| < \bar{\rho}+1-\delta . $$
Then, by the arbitrariness of δ, we have
$$ f(y,\bar{x})\in -C, \quad \forall y\in X \text{ with } \| y\| < \bar{\rho}+1. \quad (3.7) $$
Since \(f(\cdot ,\bar{x})\) is C-l.s.c., it is also l.s.c. Then, by (3.7), we can further get
$$ f(y,\bar{x})\in -C, \quad \forall y\in X \text{ with } \| y\| \leq \bar{\rho}+1. $$
For each k, by Step 2 of Algorithm 3.1, we know that \(y^{k} \in X^{k}\). Hence \(y^{k} \in X\) and \(\| y^{k}\| \leq \rho _{k} + 1 \leq \bar{\rho} + 1\). It follows that \(f(y^{k}, \bar{x}) \in -C\). This indicates that \(\bar{x}\in L_{\infty}\) and so \(\bar{x}\in L_{\infty}\cap \operatorname{Fix}(S)\). Hence, by Proposition 3.1(iv), the sequence \(\{\| x^{k}-\bar{x}\| \}\) is convergent. This, together with the fact that the subsequence \(\{\| x^{k_{j}}-\bar{x}\| \}\) converges to 0, implies that the whole sequence \(\{\| x^{k}-\bar{x}\| \}\) also converges to 0. It means \(\{x^{k}\}\) converges to x̄ as \(k\to \infty \). □
Remark 3.2
In [24], Shan and Huang studied an iterative method for finding common solutions of a generalized mixed vector equilibrium problem, fixed point problems of an infinite family of nonexpansive mappings, and a variational inequality problem. They suggested a viscosity approximation method and established a convergence result (their Theorem 3.1) under suitable conditions of continuity and convexity. However, Theorem 3.1 of Shan and Huang [24] is very different from Theorem 3.1 of this paper. More precisely,
(i) in Theorem 3.1 of Shan and Huang [24], the mappings associated with the fixed point problems are all vector-valued, while in Theorem 3.1 of this paper, the mapping associated with the fixed point problem is set-valued;
(ii) the method to generate the approximating sequence \(\{x^{k}\}\) is very different. In fact, the approximating sequence \(\{x^{k}\}\) is produced in Theorem 3.1 of Shan and Huang [24] by using a viscosity approximation method, while it is generated in Theorem 3.1 of this paper by using a projection method;
(iii) the assumptions in Theorem 3.1 of this paper are weaker than those in Theorem 3.1 of Shan and Huang [24]. Indeed, the monotonicity condition imposed on the equilibrium mapping in Theorem 3.1 of Shan and Huang [24] is removed in Theorem 3.1 of this paper. Moreover, the continuity condition of the equilibrium mapping in Theorem 3.1 of Shan and Huang [24] is weakened to that of cone continuity in Theorem 3.1 of this paper.
From Theorem 3.1, we can obtain the following convergence result.
Corollary 3.1
Assume that \(\mathcal{S}_{M}\cap \operatorname{Fix}(S)\neq \emptyset \). Then, the sequence \(\{x^{k}\}\) generated by Algorithm 3.1 converges to some point x̄ belonging to the set \(\mathcal{S}_{E}\cap \operatorname{Fix}(S)\).
Proof
It is clear that \(\mathcal{S}_{M}\subseteq L_{\infty}\). Then, the nonemptiness of the intersection \(L_{\infty}\cap \operatorname{Fix}(S)\) can be derived immediately from the assumption. Hence, Theorem 3.1 yields the conclusion. □
If \(S:X\to X\) is a vector-valued mapping and nonexpansive, then it is clearly ∗-nonexpansive. Moreover, for each \(x\in X\), we have \(P_{S(x)}(x)=S(x)\). And so Step 4 in Algorithm 3.1 reduces to

Step 4\(\boldsymbol{'}\). Calculate \(x^{k+1}\) as
$$ x^{k+1}=\mu _{k}x^{k}+(1-\mu _{k})S\bigl(u^{k}\bigr). $$
Thus, we have the following iterative method for finding a common solution of (SVEP) and a fixed point problem with a nonexpansive vectorvalued mapping.
Algorithm 3.2
The iterative steps are the same as those in Algorithm 3.1 except for Step 4 which is replaced by
Step 4\(\boldsymbol{'}\). Calculate \(x^{k+1}\) as
$$ x^{k+1}=\mu _{k}x^{k}+(1-\mu _{k})S\bigl(u^{k}\bigr). $$
Corollary 3.2
Let \(S:X\to X\) be a vector-valued nonexpansive mapping. Let \(f: X \times X \to Z\) be a vector-valued bifunction satisfying \(f(x,x)=0\) for all \(x\in X\). Assume that the assumptions (A2)–(A4) are satisfied. Let \(\{x^{k}\}\) and \(\{y^{k}\}\) be the sequences generated by Algorithm 3.2. If \(L_{\infty}\cap \operatorname{Fix}(S)\neq \emptyset \), then \(\{x^{k}\}\) converges to some point of \(\mathcal{S}_{E}\cap \operatorname{Fix}(S)\).
If \(S=I\) (the identity mapping on X), then S is a vector-valued mapping, which is trivially nonexpansive, and \(\operatorname{Fix}(S)=X\). Moreover, for each \(x\in X\), we have \(P_{S(x)}(x)=x\) and \(d(x,S(x))=0\). And so Step 4 in Algorithm 3.1 reduces to

Step 4\(\boldsymbol{''}\). Calculate \(x^{k+1}\) as
$$ x^{k+1}=\mu _{k}x^{k}+(1-\mu _{k})u^{k}. $$
So, we have the following iterative method for solving (SVEP).
Algorithm 3.3
The iterative steps are the same as those in Algorithm 3.1 except for Step 4 which is replaced by
Step 4\(\boldsymbol{''}\). Calculate \(x^{k+1}\) as
$$ x^{k+1}=\mu _{k}x^{k}+(1-\mu _{k})u^{k}. $$
Corollary 3.3
([26])
Let \(\mathcal{H}\) be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and the induced norm \(\| \cdot \| \). Let X be a nonempty compact convex subset of \(\mathcal{H}\). Let Z be a real Hausdorff topological vector space and \(C\subseteq Z\) a closed convex pointed cone. Let \(f: X \times X \to Z\) be a vector-valued bifunction satisfying \(f(x,x)=0\) for all \(x\in X\). Suppose that the assumptions (A2)–(A4) are satisfied. Let \(\{x^{k}\}\) and \(\{y^{k}\}\) be the sequences generated by Algorithm 3.3.

(i)
If \(L_{\infty}\neq \emptyset \), then \(\{x^{k}\}\) converges to a solution of (SVEP);

(ii)
If (SVEP) has no solution, then \(\{x^{k}\}\) diverges.
Proof
(i) The conclusion follows immediately from Corollary 3.2 and the above explanation.
(ii) Assume that the sequence \(\{x^{k}\}\) converges to some point \(\bar{x}\in X\). Notice that the condition \(L_{\infty}\neq \emptyset \) (missing in this item) is used in the proof of item (i) only to deduce that \(\| x^{k}-u^{k}\| \to 0\) as \(k\to \infty \). When \(\{x^{k}\}\) converges, we can prove that this fact still holds. And then, by following the arguments in the proof of item (i), we can show that x̄ is a solution of (SVEP), which contradicts the hypothesis of this item. Hence \(\{x^{k}\}\) does not converge.
Hence, it remains to show that, when \(\{x^{k}\}\) converges, \(\| x^{k}-u^{k}\| \to 0\) as \(k\to \infty \). In fact, if \(\{x^{k}\}\) converges, then, by Step 4\(\boldsymbol{''}\) of Algorithm 3.3 and the assumptions on \(\{\mu _{k}\}\), we have
$$ (1-b)\bigl\| x^{k}-u^{k}\bigr\| \leq (1-\mu _{k})\bigl\| x^{k}-u^{k}\bigr\| =\bigl\| x^{k}-x^{k+1}\bigr\| . $$
As \(1-b>0\) and \(\{x^{k}\}\) converges, we conclude that \(\| x^{k}-u^{k}\| \to 0\) as \(k\to \infty \). □
In Theorem 3.1, if \(\mathcal{H}\) is the finite-dimensional Euclidean space \(\mathbb{R}^{n}\), then the compactness condition imposed on the feasible set X can be weakened to closedness.
For this, we need the following lemma.
Lemma 3.2
Let \(\{x^{k}\}\), \(\{y^{k}\}\), \(\{u^{k}\}\), and \(\{v^{k}\}\) be the sequences generated by Algorithm 3.1. If \(L_{\infty}\cap \operatorname{Fix}(S)\neq \emptyset \), then the four sequences are all bounded.
Proof
Take an arbitrary \(x^{*}\in L_{\infty}\cap \operatorname{Fix}(S)\) and keep it fixed. Then, by Proposition 3.1(iv), the sequence \(\{\|x^{k}-x^{*}\|\}\) is convergent, and hence bounded. As a consequence, the sequence \(\{x^{k}\}\) is bounded. Further, the boundedness of the sequences \(\{u^{k}\}\) and \(\{v^{k}\}\) follows from Proposition 3.1(ii) and (i), respectively.
On the other hand, since \(\{x^{k}\}\) is bounded, there exists some \(r>0\) such that \(\|x^{k}\|\leq r\) for all \(k \in \mathbb{N}_{+} \). This yields
This indicates that all the sets \(X^{k}\), \(k\in \mathbb{N}_{+}\), are contained in the closed ball centered at 0 with radius \(r+1\). Hence, for each \(k\in \mathbb{N}_{+}\), \(\|y^{k}\|\leq r+1\) since \(y^{k} \in X^{k}\). This means that the sequence \(\{y^{k}\}\) is bounded. □
Theorem 3.2
Let \(\mathcal{H}\) be the n-dimensional Euclidean space \(\mathbb{R}^{n}\) and X a nonempty closed convex subset of \(\mathcal{H}\). Let \(\{x^{k}\}\) and \(\{y^{k}\}\) be the sequences generated by Algorithm 3.1. If \(\operatorname{Fix}(S)\cap L_{\infty}\neq \emptyset \), then \(\{x^{k}\}\) converges to some point x̄ belonging to \(\mathcal{S}_{E}\cap \operatorname{Fix}(S)\).
Proof
Notice that the compactness condition on X is used essentially to obtain the compactness of the subsets \(X^{k}\) (\(k\in \mathbb{N}_{+}\)) in the proof of Lemma 3.1. When \(\mathcal{H}\) is the n-dimensional Euclidean space \(\mathbb{R}^{n}\) and X is a nonempty closed convex subset of \(\mathcal{H}\), each set \(X^{k}\) (\(k\in \mathbb{N}_{+}\)) is clearly compact, as it is a nonempty, bounded, and closed subset of X. Then, by repeating the arguments in the proof of Lemma 3.1, we can show that Algorithm 3.1 is well-defined.
Next, we shall prove that the conclusion is true.
In fact, since \(\{x^{k}\}\) is a bounded sequence in the n-dimensional Euclidean space \(\mathbb{R}^{n}\), it has a convergent subsequence \(\{x^{k_{j}}\}\) with \(x^{k_{j}}\to \bar{x}\) for some \(\bar{x}\in \mathbb{R}^{n}\). As \(\{x^{k_{j}}\}\subseteq X\) and X is closed, we have \(\bar{x}\in X\). Similarly, since \(\{y^{k_{j}}\}\) is another bounded sequence in \(\mathbb{R}^{n}\), it also has a convergent subsequence. Without loss of generality, we may assume that \(y^{k_{j}}\to \bar{y}\in X\) as \(j \to \infty \). Then, by following the rest of the arguments in the proof of Theorem 3.1, we can show that \(\bar{x}\in \mathcal{S}_{E}\cap \operatorname{Fix}(S)\) and \(x^{k} \to \bar{x}\) as \(k \to \infty \). □
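The projection operation that gives the methods of this paper their name can be illustrated concretely. The following minimal Python sketch computes the Euclidean projection \(P_{X}\) for the special case where X is a box in \(\mathbb{R}^{n}\); the box and all numerical values are hypothetical choices for demonstration, not data from Algorithm 3.1, and for a general closed convex X the projection instead requires solving a quadratic program.

```python
# Euclidean projection onto a box X = prod_i [lo[i], hi[i]] in R^n.
# For a box, the projection has a closed form: componentwise clipping.
# The box and the input point below are hypothetical illustrations.

def project_onto_box(x, lo, hi):
    """Return the Euclidean projection of x onto the box prod_i [lo[i], hi[i]]."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

x = [1.5, -0.3, 0.4]
lo = [0.0, 0.0, 0.0]
hi = [1.0, 1.0, 1.0]
px = project_onto_box(x, lo, hi)  # -> [1.0, 0.0, 0.4]
```

Componentwise clipping is exact here because the box is a product of intervals, so the squared-distance minimization decouples coordinate by coordinate.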
Remark 3.3
In Theorem 3.2, the set X can be unbounded.
The following corollary indicates that, when \(Z=\mathbb{R}\) and \(C=\mathbb{R}_{+}=[0,+\infty )\), the assumption (A4) in Theorem 3.2 can be omitted.
Corollary 3.4
Let \(\mathcal{H}\), X, and S be the same as in Theorem 3.2. Let Z be the real number space \(\mathbb{R}\) and \(C=[0,\infty )\) be the standard ordering cone of Z. Let \(f:X\times X\to Z\) be a bifunction satisfying \(f(x,x)=0\) for all \(x\in X\). Suppose that assumptions (A1)–(A3) are satisfied. Let \(\{x^{k}\}\) and \(\{y^{k}\}\) be the sequences generated by Algorithm 3.1. If \(L_{\infty}\cap \operatorname{Fix}(S)\neq \emptyset \), then \(\{x^{k}\}\) converges to some point x̄ belonging to \(\mathcal{S}_{0}\cap \operatorname{Fix}(S)\).
Proof
Notice that assumption (A4), together with (A2), is used in Theorem 3.2 to guarantee the validity of Step 2 in Algorithm 3.1, that is, to guarantee the existence of \(y^{k}\) in Step 2 of Algorithm 3.1. When \(Z=\mathbb{R}\) and \(C=\mathbb{R}_{+}=[0,+\infty )\), assumption (A2) alone suffices to prove this fact, without assumption (A4). The conclusion then follows immediately from Theorem 3.2.
Indeed, when \(Z=\mathbb{R}\) and \(C=\mathbb{R}_{+}=[0,+\infty )\), the C-continuity of a vector-valued mapping reduces to the usual continuity of a real-valued function. Moreover, for each \(k\in \mathbb{N}_{+}\), \(X^{k}\) is clearly compact. Thus, the real-valued function \(f(\cdot ,x^{k})\) attains its maximum value on \(X^{k}\), i.e., there exists some point \(y^{k}\in X^{k}\) such that
That is,
From this, it is easy to see that \(y^{k}\) satisfies Step 2. □
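The existence argument above, where the continuous function \(f(\cdot ,x^{k})\) attains its maximum on the compact set \(X^{k}\), can be sketched numerically. In the following Python illustration, the bifunction f, the ball radius 0.5 used to form \(X^{k}\), and the grid size are all hypothetical choices for demonstration, not the paper's data; the maximum over the compact interval is approximated by a grid search.

```python
# Scalar case Z = R, C = [0, +inf): Step 2 reduces to maximizing the
# continuous real-valued function f(., x^k) over the compact set X^k.
# Everything below (f, the radius 0.5, the grid) is a hypothetical
# illustration of the existence argument, not the authors' data.

def argmax_on_grid(f, xk, grid):
    """Return the grid point y approximately maximizing f(y, xk)."""
    return max(grid, key=lambda y: f(y, xk))

# Hypothetical bifunction with f(x, x) = 0 on X = [0, 1].
def f(y, x):
    return (y - x) * (1.0 - x)

xk = 0.2
lo, hi = max(0.0, xk - 0.5), min(1.0, xk + 0.5)  # X^k = X ∩ closed ball B(x^k, 0.5)
n = 1000
grid = [lo + i * (hi - lo) / n for i in range(n + 1)]

yk = argmax_on_grid(f, xk, grid)  # maximum is attained at the right endpoint
```

Since \(f(y,x^{k})\) is increasing in y for this hypothetical f, the maximizer sits at the right endpoint of \(X^{k}\); the grid search recovers it up to the mesh width.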
Remark 3.4
If \(S=I\) (the identity mapping on X) and \(e=1\), then the above Corollary 3.2 reduces to the main result Theorem 3.3 of Iusem and Sosa [8].
Remark 3.5
For finding common solutions of (EP) and (FPP), Van et al. [29] suggested an extragradient-type method and provided an important convergence result, Theorem 3.11. However, Theorem 3.11 of Van et al. [29] differs from Corollary 3.4 of this paper in the following aspects:
(i) the methods producing the approximating sequence \(\{x^{k}\}\) are very different. In fact, the approximating sequence \(\{x^{k}\}\) in Theorem 3.11 of Van et al. [29] is generated by an extragradient-type method, while in Corollary 3.4 of this paper it is generated by a projection method;
(ii) the assumptions in Corollary 3.4 of this paper are weaker than those in Theorem 3.11 of Van et al. [29]. More precisely, besides all the conditions included in Corollary 3.4 of this paper, the condition that S is lower semicontinuous on X is additionally needed in Theorem 3.11 of Van et al. [29]. Further, the condition \(\operatorname{Fix}(S)\cap L_{\infty}\neq \emptyset \) in Corollary 3.4 of this paper is weaker than \(\mathcal{S}_{*}\neq \emptyset \) in Theorem 3.11 of Van et al. [29], where \(\mathcal{S}_{*}=\{x^{*}\in S(x^{*}): f(y,x^{*})\in C, \forall y\in X\} = \mathcal{S}_{M}\cap \operatorname{Fix}(S)\).
4 Conclusions
The main purpose of this paper was to investigate iterative methods for finding common solutions of strong vector equilibrium problems and fixed point problems of multivalued mappings. With the help of a Minty vector equilibrium problem, projection iterative methods were suggested and some convergence results were established in Hilbert spaces under suitable conditions of cone continuity and convexity. The main results obtained in this paper generalize and improve the corresponding ones of Van, Strodiot, Nguyen and Vuong [29], Iusem and Sosa [8], Shan and Huang [24], and Huang, Wang, and Mao [26].
Availability of data and materials
No additional data were required for the findings of this paper.
References
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
Ansari, Q.H., Oettli, W., Schläger, D.: A generalization of vectorial equilibria. Math. Methods Oper. Res. 46(2), 147–152 (1997)
Chen, G.Y., Huang, X.X., Yang, X.Q.: Vector Optimization: Set-Valued and Variational Analysis. Springer, Berlin (2005)
Daniele, P., Giannessi, F., Maugeri, A. (eds.): Equilibrium Problems and Variational Models. Kluwer Academic, Norwell (2003)
Giannessi, F. (ed.): Vector Variational Inequalities and Vector Equilibria. Kluwer Academic, Dordrecht (2000)
Ceng, L.C., Ansari, Q.H., Schaible, S.: Hybrid extragradient-like methods for generalized mixed equilibrium problems, systems of generalized equilibrium problems and optimization problems. J. Glob. Optim. 53(1), 69–96 (2012)
Colao, V., Marino, G., Xu, H.K.: An iterative method for finding common solutions of equilibrium and fixed point problems. J. Math. Anal. Appl. 344, 340–352 (2008)
Iusem, A.N., Sosa, W.: Iterative algorithms for equilibrium problems. Optimization 52(3), 301–316 (2003)
Qin, X., Cho, Y.J., Kang, S.M.: Viscosity approximation methods for generalized equilibrium problems and fixed point problems with applications. Nonlinear Anal. TMA 72, 99–112 (2010)
Le, H.Y., Nguyen, T.T.H., Le, D.M.: A subgradient algorithm for a class of nonlinear split feasibility problems: application to jointly constrained Nash equilibrium models. J. Glob. Optim. 73, 849–868 (2019)
Yang, J., Liu, H.W.: A self-adaptive method for pseudomonotone equilibrium problems and variational inequalities. Comput. Optim. Appl. 75(2), 423–440 (2020)
Nguyen, T.T.V., Strodiot, J.J., Nguyen, V.H.: A bundle method for solving equilibrium problems. Math. Program. Ser. B 116(1–2), 529–552 (2009)
Peng, J.W., Yao, J.C.: Two extragradient methods for generalized mixed equilibrium problems, nonexpansive mappings and monotone mappings. Comput. Math. Appl. 58, 1287–1301 (2009)
Chang, S.S., Lee, H.W.J., Chan, C.K.: A block hybrid method for solving generalized equilibrium problems and convex feasibility problem. Adv. Comput. Math. 38(3), 563–580 (2013)
Shehu, Y.: Hybrid iterative scheme for fixed point problem, infinite systems of equilibrium and variational inequality problems. Comput. Math. Appl. 63(6), 1089–1103 (2012)
Takahashi, S., Takahashi, W.: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331(1), 506–515 (2007)
Hieu, D.V., Muu, L.D., Anh, P.K.: Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 73, 197–217 (2016)
Quoc, T.D., Anh, P.N., Muu, L.D.: Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 52(1), 139–159 (2012)
Thuy, L.Q., Hai, T.N.: A projected subgradient algorithm for bilevel equilibrium problems and applications. J. Optim. Theory Appl. 175(2), 411–431 (2017)
Alansari, M., Kazmi, K.R., Ali, R.: Hybrid iterative scheme for solving split equilibrium and hierarchical fixed point problems. Optim. Lett. 14(8), 2379–2394 (2020)
Deng, L.M., Hu, R., Fang, Y.P.: Projection extragradient algorithms for solving nonmonotone and non-Lipschitzian equilibrium problems in Hilbert spaces. Numer. Algorithms 86(1), 191–221 (2021)
Cheng, B., Liu, S.Y.: An iterative algorithm for vector equilibrium problems. J. Lanzhou Univ. Nat. Sci. 45(5), 105–109 (2009)
Li, Q.Y., Wang, S.H.: Viscosity approximation methods for strong vector equilibrium problems and fixed point problems. Acta Anal. Funct. Appl. 14(2), 183–192 (2012)
Shan, S.Q., Huang, N.J.: An iterative method for generalized mixed vector equilibrium problems and fixed point of nonexpansive mappings and variational inequalities. Taiwan. J. Math. 16(5), 1681–1705 (2012)
Wang, S.H., Li, Q.Y.: A projection iterative algorithm for strong vector equilibrium problem. Optimization 64(10), 2049–2063 (2015)
Huang, J.X., Wang, S.H., Mao, J.Y.: A general iterative algorithm for vector equilibrium problem. J. Nonlinear Sci. Appl. 10, 4337–4351 (2017)
Wang, S.H., Huang, J.X., Zhu, C.X.: An iterative method for solving the strong vector equilibrium problem with variable domination structure. Optimization 67(6), 865–879 (2018)
Chadli, O., Ansari, Q.H., Al-Homidan, S.: Existence of solutions and algorithms for bilevel vector equilibrium problems: an auxiliary principle technique. J. Optim. Theory Appl. 172, 726–758 (2017)
Van, N.T.T., Strodiot, J.J., Nguyen, V.H., Vuong, P.T.: An extragradient-type method for solving nonmonotone quasi-equilibrium problems. Optimization 67(5), 651–664 (2018)
Luc, D.T.: Theory of Vector Optimization. Lecture Notes in Economics and Mathematical Systems. Springer, New York (1989)
Gong, X.H.: Strong vector equilibrium problems. J. Glob. Optim. 36, 339–349 (2006)
Ferro, F.: A minimax theorem for vector-valued functions. J. Optim. Theory Appl. 60, 19–31 (1989)
Jameson, G.: Ordered Linear Spaces. Lecture Notes in Mathematics. Springer, Berlin/Heidelberg/New York (1970)
Gong, X.H.: The strong minimax theorem and strong saddle point of vector-valued function. Nonlinear Anal. 68, 2228–2241 (2008)
Chanthorn, P., Chaoha, P.: Fixed point sets of set-valued mappings. Fixed Point Theory Appl. 2015, 56 (2015)
Dehghan, H.: Demiclosed principle and convergence of a hybrid algorithm for multivalued ∗-nonexpansive mappings. Fixed Point Theory 14(1), 107–116 (2013)
Acknowledgements
The authors thank the anonymous referees and editors for their valuable suggestions and comments to improve this work.
Funding
This work was supported by the National Natural Science Foundation of China (11661055, 71971102, 12061045), the Natural Science Foundation of Jiangxi Province (20212BAB201028), and the Science and Technology Research Project of Jiangxi Educational Committee (GJJ210866).
Author information
Authors and Affiliations
Contributions
All authors have equal contribution to this article. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wang, S.-h., Zhang, Y.-x. & Huang, W.-j. Iterative methods for vector equilibrium and fixed point problems in Hilbert spaces. J Inequal Appl 2022, 131 (2022). https://doi.org/10.1186/s13660-022-02868-z
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s13660-022-02868-z