
A subgradient extragradient algorithm for solving monotone variational inequalities in Banach spaces

Abstract

In this paper, we introduce an algorithm for solving the classical variational inequality problem with a Lipschitz-continuous and monotone mapping in a Banach space. We modify the subgradient extragradient method with a new and simple iterative step size; the strong convergence of the algorithm is established without knowledge of the Lipschitz constant of the mapping. Finally, a numerical experiment is presented to show the efficiency and advantage of the proposed algorithm. Our results generalize some known results from Hilbert spaces to Banach spaces.

1 Introduction

The variational inequality problem (VIP), first introduced by Hartman and Stampacchia [1] in 1966, is a very important tool for studying engineering mechanics, physics, economics, optimization theory and the applied sciences in a unified and general framework (see [2, 3]). Under appropriate conditions, there are two general approaches for solving the variational inequality problem: one is the regularization method and the other is the projection method. Many projection-type algorithms for solving variational inequality problems have been proposed and analyzed by many authors [4–22]. The gradient method is the simplest one: only one projection onto the feasible set is performed per iteration, but its convergence requires strong monotonicity. To avoid the strong monotonicity hypothesis, Korpelevich [4] proposed an algorithm for solving variational inequalities in Euclidean space, which is called the extragradient method. The subgradient extragradient algorithm was introduced by Censor et al. [5] for solving variational inequalities in real Hilbert spaces. Yao et al. [6] proposed an iterative algorithm for finding a common solution of pseudomonotone variational inequalities and fixed-point problems of pseudocontractive operators in Hilbert spaces.

In the past, most variational inequalities were studied in Euclidean or Hilbert spaces; recently, extragradient-type methods have been extended from Hilbert spaces to Banach spaces (see [23–27]). In [23], the subgradient extragradient method and the Halpern method were combined to propose an algorithm for solving variational inequalities in Banach spaces. In [24], a splitting algorithm was proposed for finding a common zero of a finite family of inclusion problems for accretive operators in Banach spaces. Inspired by the work mentioned above, in this paper we extend the subgradient extragradient algorithm proposed in [8] for solving variational inequalities from Hilbert spaces to Banach spaces. It is worth stressing that our algorithm has a simple structure and that its convergence does not require knowledge of the Lipschitz constant of the mapping. The paper is organized as follows. In Sect. 2, we present some preliminaries that will be needed in the sequel. In Sect. 3, we propose an algorithm and analyze its convergence. Finally, in Sect. 4 we present a numerical example and comparison.

2 Mathematical preliminaries

In this section we introduce some definitions and basic results that will be used in our paper. Assume that X is a real Banach space with dual \(X^{\ast }\); \(\| \cdot \| \) and \(\| \cdot \| _{\ast }\) denote the norms of X and \(X^{\ast }\), respectively; \(\langle x, x^{\ast } \rangle \) denotes the duality pairing of \(X\times X^{\ast }\) for all \(x^{\ast } \in X^{\ast }\) and \(x\in X\); \(x_{n}\longrightarrow x\) denotes strong convergence of a sequence \(\{ x_{n}\}\) of X to \(x\in X\), and \(x_{n}\rightharpoonup x \) denotes weak convergence of \(\{ x_{n}\}\) to \(x\in X\). Let \(S_{X}\) denote the unit sphere of X and \(B_{X}\) the closed unit ball of X. Let C be a nonempty closed convex subset of X and let \(F: C\longrightarrow X^{\ast }\) be a continuous mapping. Consider the variational inequality problem (for short, \(\operatorname{VI}(F, C)\)), which consists in finding a point \(x \in C\) such that

$$ \bigl\langle F(x),y-x\bigr\rangle \geq 0, \quad \forall y\in C. $$
(1)

Let S be the solution set of (1). Finding a point of S is a fundamental problem in optimization theory. It is well known that, in a Hilbert space, x is a solution of \(\operatorname{VI}(F,C)\) if and only if x is a solution of the fixed-point equation \(x=P_{C}(x-\lambda F(x))\), where λ is an arbitrary positive constant. Therefore, fixed-point algorithms can be used to solve \(\operatorname{VI}(F, C)\).
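To make this characterization concrete, the following minimal Python sketch (our own illustration, not part of the results of this paper) evaluates the fixed-point residual \(\|x-P_{C}(x-\lambda F(x))\|\) in the simple case where C is a Euclidean ball, so that the metric projection has a closed form; the mapping F, the radius and the step size are illustrative assumptions.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    # Metric projection onto the Euclidean ball of the given radius (illustrative choice of C).
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def vi_residual(F, x, lam=1.0, proj=proj_ball):
    # ||x - P_C(x - lam * F(x))||: this vanishes exactly when x solves VI(F, C).
    return np.linalg.norm(x - proj(x - lam * F(x)))

# Example: F(x) = x - b is strongly monotone, and the solution of VI(F, C) is P_C(b).
b = np.array([2.0, 0.0])
F = lambda x: x - b
x_star = proj_ball(b)           # equals (1, 0) for the unit ball
print(vi_residual(F, x_star))   # ~0, confirming the fixed-point characterization
```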

We next recall some properties of the Banach space. Let X be a real Banach space and \(X^{*}\) be the corresponding dual space.

Definition 1

Assume that \(C\subseteq X\) is a nonempty set and \(F: C\longrightarrow X ^{\ast }\) is a continuous mapping. Then:

\((A1)\):

The mapping F is monotone, i.e.,

$$ \bigl\langle F(x)-F(y),x-y\bigr\rangle \geq 0, \quad \forall x, y\in C. $$
(2)
\((A2)\):

The mapping F is Lipschitz continuous, i.e., there exists \(L>0\) such that

$$ \bigl\Vert F(x)-F(y) \bigr\Vert \leq L \Vert x-y \Vert , \quad \forall x, y\in C. $$
(3)
\((A3)\):

([28]) The mapping \(F: C\longrightarrow X^{*}\) is called hemicontinuous if, for any \(x,y \in C\) and \(z \in X\), the function \(t\mapsto \langle z,F(tx+(1-t)y)\rangle \) from \([0, 1]\) into \(\mathcal{R}\) is continuous.

The normalized duality mapping \(J_{X}\) (usually written J) of X into \(X^{*}\) is defined by

$$ J(x)=\bigl\{ x^{*}\in X^{*} | \bigl\langle x,x^{*} \bigr\rangle = \bigl\Vert x^{*} \bigr\Vert ^{2}= \Vert x \Vert ^{2} \bigr\} $$

for all \(x\in X\). Let \(q >1\). The generalized duality mapping \(J_{q}:X \rightarrow 2^{X^{*}}\) is defined (for the definitions and properties, see [24]) by

$$ J_{q}(x)=\bigl\{ j_{q}(x)\in X^{*} | \bigl\langle j_{q}(x),x \bigr\rangle = \Vert x \Vert \bigl\Vert j_{q}(x) \bigr\Vert , \bigl\Vert j_{q}(x) \bigr\Vert = \Vert x \Vert ^{q-1}\bigr\} $$

for all \(x\in X\).
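As a concrete example of the normalized duality mapping (the case \(q=2\)), it is well known that in the spaces \(\ell ^{p}\) and \(L^{p}\) with \(1< p< \infty \) the mapping J is single valued and is given pointwise by

$$ J(x)= \Vert x \Vert _{p}^{2-p} \vert x \vert ^{p-2}x , $$

so that \(\langle x,J(x)\rangle =\|x\|_{p}^{2}\) and \(\|J(x)\|_{\ast }=\|x\|_{p}\); in a Hilbert space (\(p=2\)) this reduces to the identity. We also recall that \(L^{p}\) is 2-uniformly convex for \(1< p\leq 2\).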

Let \(U=\{x\in X: \|x\|=1 \}\). The norm of X is said to be Gâteaux differentiable if, for each \(x, y\in U\), the limit

$$ \lim_{t\rightarrow 0}\frac{ \Vert x+ty \Vert - \Vert x \Vert }{t} $$
(4)

exists. In this case, the space X is also called smooth. We know that X is smooth iff J is a single-valued mapping of X into \(X^{*}\), X is reflexive iff J is surjective, and X is strictly convex iff J is one-to-one. Therefore, if X is a smooth, strictly convex and reflexive Banach space, then J is a single-valued bijection, and the inverse mapping \(J^{-1}\) exists and coincides with the duality mapping \(J^{*}\) of \(X^{*}\). More details can be found in [29–31]. If the limit (4) is attained uniformly for \(x, y\in S_{X}\), then X is said to be uniformly smooth. X is said to be strictly convex if \(\| \frac{x+ y}{2}\| <1\) whenever \(x, y\in S_{X}\) and \(x\neq y\). The modulus of convexity \(\delta _{X}\) is defined by

$$ \delta _{X}(\varepsilon )=\inf \biggl\{ 1- \biggl\Vert \frac{x+ y}{2} \biggr\Vert \Big| x,y \in B_{X}, \Vert x-y \Vert \geq \varepsilon \biggr\} , $$
(5)

for all \(\varepsilon \in [0,2]\). A Banach space X is said to be uniformly convex if \(\delta _{X}(\varepsilon )> 0\) for every \(\varepsilon \in (0,2]\). It is well known that a Banach space X is uniformly convex if and only if, for any two sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) in X such that

$$ \lim_{n\rightarrow \infty } \Vert x_{n} \Vert =\lim _{n\rightarrow \infty } \Vert y_{n} \Vert =1 \quad \text{and} \quad \lim_{n\rightarrow \infty } \Vert x_{n}+y_{n} \Vert =2, $$

one has \(\lim_{n\rightarrow \infty } \Vert x_{n}-y_{n} \Vert =0\). A uniformly convex Banach space is strictly convex and reflexive. By [24] we know that a Banach space X is smooth if and only if the duality mapping \(J_{q}\) is single valued, and is uniformly smooth if and only if the duality mapping \(J_{q}\) is single valued and norm-to-norm uniformly continuous on bounded sets of X. Moreover, if there exists \(c>0\) such that \(\delta _{X}(\varepsilon )\geq c\varepsilon ^{2}\) for all \(\varepsilon \in [0,2]\), then X is said to be 2-uniformly convex. It is obvious that every 2-uniformly convex Banach space is uniformly convex, and all Hilbert spaces are uniformly smooth and 2-uniformly convex, and therefore reflexive.

Now, we recall some useful definitions and results. Let \(C\subseteq X\) be a nonempty closed convex subset of a real uniformly convex Banach space X. Then we know that, for any \(z\in X\), there exists a unique element \(\tilde{z} \in C\) such that \(\|z-\tilde{z}\|\leq \|z-y\|\) for all \(y\in C\). Putting \(\tilde{z}=P_{C}z\), the operator \(P_{C}: X \longrightarrow C\) is called the metric projection of X onto C; the generalized projection from \(X^{*}\) onto C is recalled below.

To avoid the hypothesis of strong monotonicity, Korpelevich [4] gave the following extragradient method:

$$ y_{n}=P_{C}\bigl(x_{n}-\lambda F(x_{n})\bigr), \qquad x_{n+1}=P_{C} \bigl(x_{n}-\lambda F(y_{n})\bigr), $$
(6)

where \(\lambda \in (0,\frac{1}{L})\).
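For orientation, a minimal Python sketch of the extragradient iteration (6) in the Euclidean setting is given below; it is only an illustration under our own assumptions (a user-supplied projection onto C and a step size \(\lambda \in (0,\frac{1}{L})\)), not the Banach space method studied later.

```python
import numpy as np

def extragradient(F, proj_C, x0, lam, max_iter=1000, tol=1e-8):
    # Korpelevich's method (6): two projections onto C per iteration, lam in (0, 1/L).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_C(x - lam * F(x))           # predictor step
        if np.linalg.norm(x - y) <= tol:     # x is (approximately) a solution
            break
        x = proj_C(x - lam * F(y))           # corrector step
    return x
```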

The subgradient extragradient algorithm of Censor et al. [5] extends (6) by replacing the second projection onto C with an orthogonal projection onto a constructible half-space, for solving \(\operatorname{VI}(F, C)\) in a real Hilbert space. Their method is of the following form:

$$ y_{n}=P_{C}\bigl(x_{n}-\lambda F(x_{n})\bigr), \qquad x_{n+1}=P_{T_{n}} \bigl(x_{n}-\lambda F(y_{n})\bigr), $$
(7)

where \(T_{n}=\{x\in H|\langle x_{n}-\lambda F(x_{n})-y_{n},x-y_{n} \rangle \leq 0\}\) and \(\lambda \in (0,\frac{1}{L})\). Cai et al. [23] suggested the following method:

$$ \textstyle\begin{cases} x_{0}\in X,\\ y_{n}=P_{C}(Jx_{n}-\lambda _{n} F(x_{n})), \\ T_{n}=\{x \in X|\langle Jx_{n}-\lambda _{n} F(x_{n})-Jy_{n},x-y_{n} \rangle \leq 0\},\\ z_{n}=P_{T_{n}}(Jx_{n}-\lambda _{n} F(y_{n})),\\ x_{n+1}=J ^{-1}(\alpha _{n}Jx_{0}+(1-\alpha _{n})Jz_{n}), \end{cases} $$
(8)

where J is the normalized duality mapping of X into \(X^{*}\), \(\lambda _{n} \in (0,\frac{1}{L})\), \(\{\alpha _{n}\}\subset (0,1)\), \(\alpha _{n}\rightarrow 0\) and \(\sum_{n=1}^{\infty }\alpha _{n}=+\infty \). They proved that the sequence \(\{x_{n}\}\) generated by (8) converges strongly to \(P_{S}Jx_{0}\).

The main drawback of algorithms (7) and (8) is the requirement to know the Lipschitz constant of F, or at least some estimate of it. Yekini and Olaniyi [7] proposed the following subgradient extragradient method:

$$ \textstyle\begin{cases} \text{Given } \rho \in (0, 1), \mu \in (0, 1), \\ y_{n}=P_{C}(x_{n}-\lambda _{n} F(x_{n})), \text{where } \lambda _{n}= \rho ^{l_{n}} \text{ and } l_{n} \text{ is the smallest nonnegative integer } l \\ \text{such that } \lambda _{n} \Vert F(x_{n})-F(y_{n}) \Vert \leq \mu \Vert x_{n}-y_{n} \Vert , \\ z_{n}=P_{T_{n}}(x_{n}-\lambda _{n} F(y_{n})), \text{where } T_{n}=\{x\in H|\langle x_{n}-\lambda _{n} F(x_{n})-y _{n},x-y_{n} \rangle \leq 0\}, \\ x_{n+1}=\alpha _{n}f(x_{n})+(1-\alpha _{n})z_{n}, \text{where } f:H \rightarrow H \text{ is a contraction mapping}. \end{cases} $$
(9)

Algorithm (9) does not require knowledge of the Lipschitz constant, but the line search may involve the computation of additional projections onto C at each iteration.
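The computational point behind (7) and (9) is that the second projection is onto the half-space \(T_{n}\), which is available in closed form; in the Banach space scheme (8) the set \(T_{n}\) is again a half-space, although the generalized projection onto it involves the duality mapping. A short Euclidean sketch of the half-space projection (with hypothetical variable names) is:

```python
import numpy as np

def proj_halfspace(w, a, y):
    # Projection of w onto T = {x : <a, x - y> <= 0}; T is the whole space when a = 0.
    viol = np.dot(a, w - y)
    if viol <= 0.0 or np.dot(a, a) == 0.0:
        return w
    return w - (viol / np.dot(a, a)) * a

# In (7): a = x_n - lam * F(x_n) - y_n and w = x_n - lam * F(y_n), so the second
# "projection" requires no optimization problem to be solved.
```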

In [32], Alber introduced the functional \(V: X^{*} \times X \longrightarrow R\) defined by

$$ V\bigl(x^{*},y\bigr)= \bigl\Vert x^{*} \bigr\Vert ^{2}_{*}-2 \bigl\langle x^{*},y \bigr\rangle + \Vert y \Vert ^{2} . $$
(10)

Clearly,

$$ V\bigl(x^{*},y\bigr)\geq \bigl( \bigl\Vert x^{*} \bigr\Vert _{*}- \Vert y \Vert \bigr)^{2} . $$

The operator \(P_{C}: X^{*}\longrightarrow C\subseteq X\) is called the generalized projection operator if it associates with an arbitrary point \(x^{*}\in X^{*}\) the solution of the minimization problem

$$ V\bigl(x^{*}, \tilde{x^{*}}\bigr)= \inf_{y\in C} V\bigl(x^{*}, y\bigr), $$

where \(\tilde{x^{*}}=P_{C}x^{*}\in C \subset X\) is called the generalized projection of the point \(x^{*}\). For more results about \(P_{C}\), see [32]. The next lemma describes the properties of \(P_{C}\).

Lemma 1

Let C be a nonempty closed convex set in X and \(x^{*}, y^{*} \in X^{*}\), \(\tilde{x^{*}}=P_{C}x^{*}\). Then

$$\begin{aligned} \mathrm{(i)}& \quad \bigl\langle J\tilde{x^{*}}-x^{*}, y- \tilde{x^{*}}\bigr\rangle \geq 0, \quad \forall y\in C. \\ \mathrm{(ii)}&\quad V\bigl(J\tilde{x^{*}},y\bigr)\leq V \bigl(x^{*},y\bigr)-V\bigl(x^{*},\tilde{x^{*}} \bigr), \quad \forall y\in C. \\ \mathrm{(iii)}&\quad V\bigl(x^{*},z\bigr)+2\bigl\langle J^{-1}x^{*}-z, y^{*}\bigr\rangle \leq V \bigl(x^{*}+ y^{*},z\bigr),\quad \forall z\in X. \end{aligned}$$

In [32], Alber also introduced the Lyapunov functional \(\varphi : X \times X\longrightarrow R\) by

$$ \varphi (x,y)= \Vert x \Vert ^{2}-2 \langle Jx,y \rangle + \Vert y \Vert ^{2}, \quad \forall x,y\in X. $$

Then, combining (10), we obtain \(V(x^{*},y)=\varphi (J^{-1}x^{*},y)\), for all \(x^{*}\in X^{*}\), \(y\in X\). Moreover, we have the following lemma (see [33]).

Lemma 2

([33])

Let X be a real 2-uniformly convex Banach space. Then there exists \(\mu \geq 1\) such that, for all \(x,y\in X\),

$$ \frac{1}{\mu } \Vert x-y \Vert ^{2}\leq \varphi (x,y). $$
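For instance, every Hilbert space is 2-uniformly convex; there \(J=I\), so

$$ \varphi (x,y)= \Vert x \Vert ^{2}-2\langle x,y\rangle + \Vert y \Vert ^{2}= \Vert x-y \Vert ^{2}, $$

and Lemma 2 holds with \(\mu =1\). In this special case the functionals V and φ reduce to the squared distance and the generalized projection reduces to the metric projection.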

The following two lemmas will be useful in our subsequent convergence analysis; they are stated and proved in [34, 35].

Lemma 3

([34])

Let \(\{a_{n}\}\) be a sequence of real numbers that does not decrease at infinity in the sense that there exists a subsequence \(\{a_{n_{j}}\}\) of \(\{a_{n}\}\) which satisfies \(a_{n_{j}}< a_{{n_{j}}+1} \) for all \(j\in \mathcal{N}\). Define the sequence \(\{\tau (n)\}_{n \geq n_{0}}\) of integers as follows:

$$ \tau (n)=\max \{k\leq n: a_{k}< a_{k+1}\}, $$

where \(n_{0} \in \mathcal{N}\) is such that \(\{k\leq n_{0}: a_{k}< a_{k+1} \}\) is nonempty. Then the following hold:

  1. (i)

    \(\tau (n)\leq \tau (n+1)\leq\cdots \) , and \(\tau (n)\longrightarrow \infty \);

  2. (ii)

    \(a_{\tau (n)}\leq a_{{\tau (n)}+1}\) and \(a_{n}\leq a _{{\tau (n)}+1}\).

Lemma 4

([35])

Let \(\{a_{n}\}\) be a nonnegative real sequence for which there exists \(N>0\) such that, for all \(n\geq N\), the following relation holds:

$$ a_{n+1}\leq (1-\alpha _{n})a_{n}+\alpha _{n}\sigma _{n}+\gamma _{n}, $$

where (i) \(\{\alpha _{n}\}\subset (0, 1)\), \(\sum_{n=0}^{\infty }\alpha _{n}=\infty \); (ii) \(\{\sigma _{n}\}\) is a sequence such that \(\limsup_{n\rightarrow \infty }\sigma _{n}\leq 0\); (iii) \(\gamma _{n} \geq 0\), \(\sum_{n=0}^{\infty }\gamma _{n} < \infty \). Then \(\lim_{n\rightarrow \infty }a_{n}=0\).

The following result, which will be used in proving our main result, relies on certain estimates and other classical properties of the iterates given in [23].

Lemma 5

([23])

Let \(x^{*}=P_{S} Jx_{0}\). Define \(a_{n}=\varphi (x_{n},x^{*})\) and \(b_{n}=2\langle Jx_{0}-Jx^{*},x_{n+1}-x^{*} \rangle \). Then

  1. (i)

    \(a_{n+1}\leq (1-\alpha _{n})a_{n}+\alpha _{n}b_{n}\),

  2. (ii)

    \(-1\leq \limsup_{n\rightarrow \infty } b_{n}< \infty \).

Lemma 6

([36])

Let C be a nonempty convex subset of a topological vector space X and let \(F:C\rightarrow X^{*}\) be a monotone and hemicontinuous mapping. Then \(x^{*}\in C\) is a solution of (1) if and only if

$$ \bigl\langle F(y),y-x^{*}\bigr\rangle \geq 0, \quad \forall y\in C. $$
(11)

3 Main results

In this section, we introduce a new iterative algorithm for solving monotone variational inequality problems in Banach spaces. In order to present the method and establish its convergence, we make the following assumptions.

Assumption 1

  1. (a)

    The feasible set C is a nonempty closed convex subset of a real 2-uniformly convex Banach space X.

  2. (b)

    \(F:X\rightarrow X^{\ast }\) is monotone on C and L-Lipschitz continuous on X.

  3. (c)

    The solution set S of \(\operatorname{VI}(F,C)\) is nonempty.

Now, we discuss the strong convergence using the following algorithm for solving monotone variational inequality. Our algorithms are of the following forms.

Algorithm A

(Step 0):

Take \(\lambda _{0}>0\) and \(\mu \in (0, 1)\), and let \(x_{0}\in X\) be a given starting point.

(Step 1):

Given the current iterate \(x_{n}\), compute

$$ y_{n}=P_{C}\bigl(Jx_{n}-\lambda _{n} F(x_{n})\bigr). $$
(12)

If \(x_{n} = y_{n}\), then stop: \(x_{n} \) is a solution. Otherwise, go to Step 2.

(Step 2):

Construct the set \(T_{n}=\{x\in X|\langle Jx_{n}- \lambda _{n} F(x_{n})-Jy_{n},x-y_{n} \rangle \leq 0\}\) and compute

$$ z_{n}=P_{T_{n}}\bigl(Jx_{n}-\lambda _{n} F(y_{n})\bigr), \qquad x_{n+1}=J^{-1}\bigl(\alpha _{n} Jx_{0}+(1-\alpha _{n})Jz_{n} \bigr). $$
(13)
(Step 3):

Compute

$$ \lambda _{n+1}= \textstyle\begin{cases} \min \{{\frac{\mu ( \Vert x_{n}-y_{n} \Vert ^{2}+ \Vert z_{n}-y_{n} \Vert ^{2})}{2\langle F(x_{n})-F(y_{n}),z_{n}-y_{n} \rangle },\lambda _{n} }\} , & \text{if } \langle F(x_{n})-F(y_{n}),z_{n}-y_{n} \rangle >0, \\ \lambda _{n}, & \text{otherwise}. \end{cases} $$
(14)

Set \(n := n + 1\) and return to Step 1.
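To illustrate the structure of Algorithm A, the following Python sketch (our own simplified illustration, not the Banach space method itself) implements it in the Euclidean, i.e. Hilbert space, special case: there \(J=I\), the generalized projection reduces to the metric projection, and \(x_{n+1}=\alpha _{n}x_{0}+(1-\alpha _{n})z_{n}\). The projection onto C is user supplied, and the default parameters are simply the values used in Sect. 4.

```python
import numpy as np

def proj_halfspace(w, a, y):
    # Projection onto T_n = {x : <a, x - y> <= 0} (closed form; identity if a = 0).
    viol = np.dot(a, w - y)
    return w if viol <= 0.0 or np.dot(a, a) == 0.0 else w - (viol / np.dot(a, a)) * a

def algorithm_A_euclidean(F, proj_C, x0, lam0=0.7, mu=0.9,
                          alpha=lambda n: 1.0 / (100 * (n + 2)),
                          max_iter=1000, tol=1e-3):
    x0 = np.asarray(x0, dtype=float)
    x, lam = x0.copy(), lam0
    for n in range(max_iter):
        y = proj_C(x - lam * F(x))                    # Step 1 (here J = I)
        if np.linalg.norm(x - y) <= tol:              # stop: x is an approximate solution
            return x
        a = x - lam * F(x) - y                        # normal vector of the half-space T_n
        z = proj_halfspace(x - lam * F(y), a, y)      # Step 2: z_n = P_{T_n}(x_n - lam*F(y_n))
        denom = 2.0 * np.dot(F(x) - F(y), z - y)      # Step 3: step size update (14)
        if denom > 0.0:
            lam = min(mu * (np.linalg.norm(x - y) ** 2 + np.linalg.norm(z - y) ** 2) / denom, lam)
        x = alpha(n) * x0 + (1 - alpha(n)) * z        # x_{n+1} = alpha_n x_0 + (1 - alpha_n) z_n
    return x
```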

We prove the strong convergence theorem for Algorithm A. Firstly, we give the following theorem, which plays a crucial role in the proof of the main theorem.

Theorem 1

Assume that Assumption 1 holds and let \(x_{n}\), \(y_{n}\), \(\lambda _{n}\) be the sequences generated by Algorithm A. Then we have the following results:

  1. (1)

    If \(x_{n}=y_{n}\) for some \(n\in \mathcal{N}\), then \(x_{n} \in S\).

  2. (2)

    The sequence \(\{\lambda _{n}\} \) is nonincreasing with lower bound \(\min \{{\frac{\mu }{L },\lambda _{0} }\}\); therefore, the limit of \(\{\lambda _{n}\} \) exists and is denoted by \(\lambda =\lim_{n\rightarrow \infty }\lambda _{n}\). It is obvious that \(\lambda >0\).

Proof

(1) If \(x_{n}=y_{n}\), then \(x_{n}=P_{C}(Jx_{n}- \lambda _{n} F(x_{n}))\), so \(x_{n} \in C\). By the characterization of the generalized projection \(P_{C}\) onto C, we have

$$ \bigl\langle Jx_{n}-\lambda _{n} F(x_{n})-Jx_{n},x_{n}-x \bigr\rangle \geq 0 \quad \forall x \in C. $$

Therefore,

$$ \bigl\langle -\lambda _{n} F(x_{n}),x_{n}-x \bigr\rangle = \lambda _{n} \bigl\langle F(x _{n}),x-x_{n} \bigr\rangle \geq 0 \quad \forall x \in C. $$

Since \(\lambda _{n}> 0\), we get \(\langle F(x_{n}),x-x_{n}\rangle \geq 0\) for all \(x \in C\), that is, \(x_{n} \in S\).

(2) It is obvious that \(\{\lambda _{n}\} \) is nonincreasing. Since F is a Lipschitz-continuous mapping with constant \(L>0\), in the case of \(\langle F(x_{n})-F(y_{n}),z_{n}-y_{n} \rangle >0\), we have

$$ \frac{\mu ( \Vert x_{n}-y_{n} \Vert ^{2}+ \Vert z_{n}-y_{n} \Vert ^{2})}{2\langle F(x _{n})-F(y_{n}),z_{n}-y_{n} \rangle }\geq \frac{2\mu \Vert x_{n}-y_{n} \Vert \Vert z_{n}-y_{n} \Vert }{2 \Vert F(x_{n})-F(y_{n}) \Vert \Vert z_{n}-y_{n} \Vert }\geq \frac{ \mu \Vert x_{n}-y_{n} \Vert }{L \Vert x_{n}-y_{n} \Vert }= \frac{\mu }{L} . $$
(15)

Clearly, the sequence \(\{\lambda _{n}\} \) has the lower bound \(\min \{{\frac{\mu }{L },\lambda _{0} }\}\).

Since \(\{\lambda _{n}\} \) is nonincreasing and bounded below, the limit of \(\{\lambda _{n}\} \) exists, and we denote \(\lambda =\lim_{n\rightarrow \infty }\lambda _{n}\). Clearly, \(\lambda >0\). □

The following lemma plays a crucial role in the proof of Theorem 2.

Lemma 7

Assume that Assumption 1 holds. Let \(\{x_{n}\}\) be the sequence generated by Algorithm A with \(\{\alpha _{n}\}\subset (0, 1)\). Then the sequence \(\{x_{n}\}\) is bounded.

Proof

Let \(u\in S\). By Lemma 1(ii), we have

$$ \begin{aligned}[b] V(Jz_{n}, u)&=V\bigl(JP_{T_{n}} \bigl(Jx_{n}-\lambda _{n} F(y_{n})\bigr), u\bigr) \\ &\leq V\bigl(Jx_{n}-\lambda _{n} F(y_{n}), u \bigr) - V\bigl(Jx_{n}-\lambda _{n} F(y _{n}), z_{n}\bigr) \\ &= \bigl\Vert Jx_{n}-\lambda _{n} F(y_{n}) \bigr\Vert ^{2}-2\bigl\langle Jx_{n}-\lambda _{n} F(y _{n}),u\bigr\rangle + \Vert u \Vert ^{2} \\ & \quad {} - \bigl\Vert Jx_{n}-\lambda _{n} F(y_{n}) \bigr\Vert ^{2}+2\bigl\langle Jx_{n}- \lambda _{n} F(y_{n}),z_{n}\bigr\rangle - \Vert z_{n} \Vert ^{2} \\ &= -2\langle Jx_{n},u\rangle +2\lambda _{n}\bigl\langle F(y_{n}),u-z_{n} \bigr\rangle +2\langle Jx_{n},z_{n}\rangle + \Vert u \Vert ^{2}- \Vert z_{n} \Vert ^{2} \\ &= V(Jx_{n}, u)- V(Jx_{n}, z_{n})+2\lambda _{n}\bigl\langle F(y_{n}),u-z _{n}\bigr\rangle . \end{aligned} $$
(16)

Since F is monotone, i.e., \(\langle F(y_{n})-F(u),y_{n}-u\rangle \geq 0\) for all \(n\in \mathcal{N}\), and since \(u\in S\), we have

$$ \bigl\langle F(y_{n}),y_{n}-u\bigr\rangle \geq \bigl\langle F(u),y_{n}-u\bigr\rangle \geq 0. $$

Then \(0\leq \langle F(y_{n}),y_{n}-u+z_{n}-z_{n}\rangle = \langle F(y _{n}),y_{n}-z_{n}\rangle -\langle F(y_{n}),u-z_{n}\rangle \). It implies that

$$ \bigl\langle F(y_{n}),y_{n}-z_{n}\bigr\rangle \geq \bigl\langle F(y_{n}),u-z_{n} \bigr\rangle , \quad \forall n\in N. $$
(17)

By the definition of \(T_{n}\), we have \(\langle Jx_{n}-\lambda _{n} F(x _{n})-Jy_{n},z_{n}-y_{n} \rangle \leq 0\). Then

$$\begin{aligned} \begin{aligned}[b] &\bigl\langle Jx_{n}-\lambda _{n} F(y_{n})-Jy_{n},z_{n}-y_{n} \bigr\rangle \\ &\quad =\bigl\langle Jx_{n}-\lambda _{n} F(x_{n})-Jy_{n},z_{n}-y_{n} \bigr\rangle + \lambda _{n}\bigl\langle F(x_{n})- F(y_{n}),z_{n}-y_{n} \bigr\rangle \\ &\quad \leq \lambda _{n}\bigl\langle F(x_{n})- F(y_{n}),z_{n}-y_{n} \bigr\rangle . \end{aligned} \end{aligned}$$
(18)

Applying the definition of \(\lambda _{n+1}\) together with (17) and (18) to (16), we get

$$ \begin{aligned}[b] V(Jz_{n}, u)&\leq V(Jx_{n}, u)- V(Jx_{n}, z_{n})+2\lambda _{n}\bigl\langle F(y_{n}),u-z_{n}\bigr\rangle \\ &\leq V(Jx_{n}, u)- V(Jx_{n}, z_{n})+2\lambda _{n}\bigl\langle F(y_{n}),y _{n}-z_{n} \bigr\rangle \\ &= V(Jx_{n}, u)- V(Jx_{n}, y_{n})- V(Jy_{n}, z_{n})+2\bigl\langle Jx_{n}- \lambda _{n}F(y_{n})-Jy_{n},z_{n}-y_{n} \bigr\rangle \\ &\leq V(Jx_{n}, u)- V(Jx_{n}, y_{n})- V(Jy_{n}, z_{n})+2\lambda _{n} \bigl\langle F(x_{n})- F(y_{n}),z_{n}-y_{n} \bigr\rangle \\ &\leq V(Jx_{n}, u)- V(Jx_{n}, y_{n})- V(Jy_{n}, z_{n})+\lambda _{n}\frac{ \mu }{\lambda _{n+1}} \bigl( \Vert x_{n}-y_{n} \Vert ^{2}+ \Vert z_{n}-y_{n} \Vert ^{2}\bigr). \end{aligned} $$
(19)

By Theorem 1(2), we get \(\lim_{n\rightarrow \infty }\lambda _{n}\frac{\mu }{\lambda _{n+1}}=\mu \) with \(0<\mu <1\), which means that there exists an integer \(N_{0}>0\) such that, for every \(n>N_{0}\), we have \(0<\lambda _{n}\frac{\mu }{\lambda _{n+1}}<1\). Using this in (19), we obtain, for every \(n>N_{0}\),

$$ \begin{aligned} V(Jz_{n}, u)&\leq V(Jx_{n}, u)- V(Jx_{n}, y_{n})- V(Jy_{n}, z_{n})+ \lambda _{n}\frac{\mu }{\lambda _{n+1}}\bigl( \Vert x_{n}-y_{n} \Vert ^{2}+ \Vert z_{n}-y _{n} \Vert ^{2}\bigr) \\ &\leq V(Jx_{n}, u)- (1-\mu ) \bigl(V(Jx_{n}, y_{n})+ V(Jy_{n}, z_{n})\bigr) \\ &\leq V(Jx_{n}, u). \end{aligned} $$

Then, by the definition of \(x_{n+1}\), we have, for every \(n>N_{0}\),

$$ \begin{aligned} V(Jx_{n+1}, u)&= V\bigl(\alpha _{n}Jx_{0}+(1-\alpha _{n})Jz_{n}, u \bigr) \\ &= \bigl\Vert \alpha _{n}Jx_{0}+(1-\alpha _{n})Jz_{n} \bigr\Vert ^{2}-2\bigl\langle \alpha _{n}Jx _{0}+(1-\alpha _{n})Jz_{n},u \bigr\rangle + \Vert u \Vert ^{2} \\ &\leq \alpha _{n} \Vert Jx_{0} \Vert ^{2}-2\alpha _{n} \langle Jx_{0},u\rangle + \alpha _{n} \Vert u \Vert ^{2} \\ & \quad {}+ (1-\alpha _{n}) \Vert Jz_{n} \Vert ^{2}- 2(1-\alpha _{n})\langle Jz_{n},u \rangle +(1-\alpha _{n}) \Vert u \Vert ^{2} \\ &= \alpha _{n}V(Jx_{0}, u)+(1-\alpha _{n})V(Jz_{n}, u) \\ &\leq \alpha _{n}V(Jx_{0}, u)+(1-\alpha _{n})V(Jx_{n}, u) \\ &\leq \max \bigl\{ V(Jx_{0}, u), V(Jx_{n}, u)\bigr\} \\ &\leq \cdots \leq \max \bigl\{ V(Jx_{0}, u), V(Jx_{N_{0}}, u) \bigr\} . \end{aligned} $$

Hence, \(\{V(Jx_{n}, u) \}\) is bounded. Since \(V(Jx_{n}, u)\geq \frac{1}{ \mu }\|x_{n}-u \|^{2}\), we see that \(\{x_{n}\}\) is bounded. □

Theorem 2

Assume that Assumption 1 holds and that the sequence \(\{\alpha _{n}\}\) satisfies \(\{\alpha _{n}\}\subset (0, 1)\), \(\sum_{n=0}^{\infty }\alpha _{n}= \infty \) and \(\lim_{n\rightarrow \infty }\alpha _{n}=0 \). Let \(\{x_{n}\}\) be the sequence generated by Algorithm A. Then \(\{x_{n}\}\) converges strongly to the solution \(x^{*}=P_{S}Jx_{0}\).

Proof

Let \(x^{*}=P_{S}Jx_{0}\). By Lemma 1(i), we have

$$\begin{aligned} \bigl\langle Jx_{0}-Jx^{*},z-x^{*}\bigr\rangle \leq 0, \quad \forall z\in S. \end{aligned}$$

By the proof of Lemma 7, there exists \(N_{0}\geq 0\) such that \(V(Jz_{n}, x^{*})\leq V(Jx_{n}, x^{*})\) for all \(n\geq N_{0}\).

From Lemma 7, we see that the sequence \(\{x_{n}\}\) is bounded; consequently, \(\{y_{n}\}\) and \(\{z_{n}\}\) are bounded. Moreover, by (19), we see that there exists \(N_{0}\geq 0\) such that, for every \(n\geq N_{0}\),

$$ \begin{aligned}[b] V\bigl(Jx_{n+1}, x^{*} \bigr)&= V\bigl(\alpha _{n}Jx_{0}+(1-\alpha _{n})Jz_{n}, x^{*}\bigr) \\ &\leq \alpha _{n}V\bigl(Jx_{0}, x^{*}\bigr)+(1- \alpha _{n})V\bigl(Jz_{n}, x^{*}\bigr) \\ &\leq \alpha _{n}V\bigl(Jx_{0}, x^{*}\bigr)+(1- \alpha _{n})V\bigl(Jx_{n}, x^{*}\bigr)\\ &\quad {}-(1- \alpha _{n}) (1-\mu ) \bigl(V(Jx_{n}, y_{n})+ V(Jy_{n}, z_{n})\bigr). \end{aligned} $$
(20)

Case 1. Set \(a_{n}=\varphi (x_{n}, x^{*})\) as in Lemma 5, and suppose that there exists \(N_{1}\in \mathcal{N}\) (\(N_{1} \geq N_{0}\)) such that the sequence \(\{\varphi (x_{n}, x^{*})\}^{ \infty }_{n=N_{1}}\) is nonincreasing. Then \(\{a_{n}\}^{\infty }_{n=1}\) converges. Using this in (20), we obtain, for \(n> N_{1}\geq N_{0}\),

$$ \begin{aligned}[b] &(1-\alpha _{n}) (1-\mu ) \bigl(V(Jx_{n}, y_{n})+\varphi (y_{n}, z_{n})\bigr) \\ &\quad \leq \alpha _{n}V\bigl(Jx_{0}, x^{*} \bigr)-V\bigl(Jx_{n+1}, x^{*}\bigr)+(1-\alpha _{n})V \bigl(Jx _{n}, x^{*}\bigr) \\ &\quad \leq V\bigl(Jx_{n}, x^{*}\bigr)-V \bigl(Jx_{n+1}, x^{*}\bigr)+\alpha _{n}V \bigl(Jx_{0}-Jx_{n}, x^{*}\bigr). \end{aligned} $$
(21)

Since \(V(Jx_{0}-Jx_{n}, x^{*})\) is bounded and \(\{a_{n}\}^{\infty }_{n=1}\) converges, we have, as \(n\longrightarrow \infty \),

$$ (1-\alpha _{n}) (1-\mu ) \bigl(\varphi (x_{n}, y_{n})+\varphi (y_{n}, z_{n})\bigr) \leq \varphi \bigl(x_{n}, x^{*}\bigr)-\varphi \bigl(x_{n+1}, x^{*}\bigr)+\alpha _{n}\varphi \bigl(x_{0}-x_{n}, x^{*}\bigr)\longrightarrow 0. $$

Noticing that \(\varphi (x_{n}, y_{n})\geq 0\), \(\varphi (y_{n}, z_{n})\geq 0\) and \(0<\mu \), \(\alpha _{n}<1\), and using Lemma 2, we have, as \(n\longrightarrow \infty \),

$$ \Vert x_{n}-y_{n} \Vert ^{2}\longrightarrow 0 \quad \text{and} \quad \Vert y_{n}-z_{n} \Vert ^{2}\longrightarrow 0. $$
(22)

Furthermore, by the definition of \(x_{n+1}\), we have

$$ \Vert Jx_{n+1}-Jz_{n} \Vert =\alpha _{n} \Vert Jx_{0}-Jz_{n} \Vert \longrightarrow 0, \quad \text{as } n\longrightarrow \infty . $$

Since \(J^{-1}\) is norm-to-norm uniformly continuous on bounded subsets of \(X^{*}\), we have \(\|x_{n+1}-z_{n}\|\longrightarrow 0\). Therefore, we get

$$ \Vert x_{n+1}-x_{n} \Vert \leq \Vert x_{n+1}-z_{n} \Vert + \Vert z_{n}-y_{n} \Vert + \Vert y_{n}-x_{n} \Vert \longrightarrow 0, \quad n \longrightarrow \infty . $$

Since \(\{x_{n}\}\) is bounded (Lemma 7) and X is reflexive, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) with \(x_{n_{k}}\rightharpoonup z_{0}\in X\) such that

$$\begin{aligned} \limsup_{n\rightarrow \infty }\bigl\langle Jx_{0}-Jx^{*}, x_{n}-x^{*} \bigr\rangle =\lim_{k\rightarrow \infty }\bigl\langle Jx_{0}-Jx^{*}, x _{n_{k}}-x^{*} \bigr\rangle =\bigl\langle Jx_{0}-Jx^{*}, z_{0}-x^{*} \bigr\rangle \leq 0. \end{aligned}$$
(23)

Then \(y_{n_{k}}\rightharpoonup z_{0}\) and \(z_{0}\in C\). Since F is monotone and \(y_{n_{k}}=P_{C}(Jx_{n_{k}}-\lambda _{n_{k}} F(x_{n_{k}}))\), by Lemma 1(i), we have \(\langle Jx_{n_{k}}-\lambda _{n_{k}}F(x_{n _{k}})-Jy_{n_{k}}, z-y_{n_{k}}\rangle \leq 0\), \(\forall z\in C \). That is, for all \(z\in C\),

$$ \begin{aligned} 0&\leq \langle Jy_{n_{k}}-Jx_{n_{k}},z-y_{n_{k}} \rangle + \lambda _{n_{k}}\bigl\langle F(x_{n_{k}}), z-y_{n_{k}}\bigr\rangle \\ &=\langle Jy_{n_{k}}-Jx_{n_{k}},z-y_{n_{k}}\rangle + \lambda _{n_{k}} \bigl\langle F(x_{n_{k}}), z-x_{n_{k}}\bigr\rangle + \lambda _{n_{k}}\bigl\langle F(x _{n_{k}}), x_{n_{k}}-y_{n_{k}}\bigr\rangle \\ &\leq \langle Jy_{n_{k}}-Jx_{n_{k}},z-y_{n_{k}}\rangle + \lambda _{n_{k}}\bigl\langle F(z), z-x_{n_{k}}\bigr\rangle + \lambda _{n_{k}}\bigl\langle F(x _{n_{k}}), x_{n_{k}}-y_{n_{k}} \bigr\rangle . \end{aligned} $$

Letting \(k\rightarrow \infty \) and using the facts that \(\lim_{k\rightarrow \infty }\|y_{n_{k}}-x_{n_{k}}\|=0\), \(\{y_{n_{k}} \}\) is bounded and \(\lim_{k\rightarrow \infty }\lambda _{n_{k}}= \lambda >0\), we obtain \(\langle F(z), z-z_{0}\rangle \geq 0\), \(\forall z\in C\). By Lemma 6, we have \(z_{0}\in S\).

By Lemma 1(iii) and (19), we have

$$ \begin{aligned} \varphi \bigl(x_{n+1}, x^{*} \bigr)&=V\bigl(Jx_{n+1}, x^{*}\bigr)=V\bigl(\alpha _{n}Jx_{0}+(1- \alpha _{n})Jz_{n}, x^{*}\bigr) \\ &\leq V\bigl(\alpha _{n}Jx_{0}+(1-\alpha _{n})Jz_{n}-\alpha _{n}\bigl(Jx_{0}-Jx ^{*}\bigr), x^{*}\bigr)+2\alpha _{n}\bigl\langle Jx_{0}-Jx^{*},x_{n+1}-x^{*} \bigr\rangle \\ &\leq \alpha _{n}V\bigl(Jx^{*}, x^{*}\bigr)+(1-\alpha _{n})V\bigl(Jz_{n}, x^{*}\bigr)+2\alpha _{n}\bigl\langle Jx_{0}-Jx^{*},x_{n+1}-x^{*} \bigr\rangle \\ &= (1-\alpha _{n})V\bigl(Jz_{n}, x^{*}\bigr)+2 \alpha _{n}\bigl\langle Jx_{0}-Jx^{*},x _{n+1}-x^{*} \bigr\rangle \\ &\leq (1-\alpha _{n})\varphi \bigl(x_{n}, x^{*} \bigr)+2\alpha _{n}\bigl\langle Jx _{0}-Jx^{*},x_{n+1}-x^{*} \bigr\rangle . \end{aligned} $$

It follows from Lemma 5 and Lemma 4 that \(\lim_{n\rightarrow \infty }\varphi (x_{n}, x^{*})=0\), which means

$$ \lim_{n\rightarrow \infty }x_{n}=x^{*}. $$

Case 2. Suppose that there exists a subsequence \(\{x_{m_{j}}\}\) of \(\{x_{n}\}\) such that \(\varphi (x_{m_{j}},x^{*})<\varphi (x_{m_{j}+1},x^{*})\) for all \(j\in \mathcal{N}\). From Lemma 3, there exists a nondecreasing sequence \(\{m_{k}\}\subset \mathcal{N}\) such that \(\lim_{k\rightarrow \infty }m_{k}=\infty \) and the following inequalities hold for all \(k\in \mathcal{N}\):

$$\begin{aligned} \varphi \bigl(x_{m_{k}}, x^{*}\bigr)\leq \varphi \bigl(x_{m_{k}+1}, x^{*}\bigr) \quad \text{and} \quad \varphi \bigl(x_{k}, x^{*}\bigr)\leq \varphi \bigl(x_{m_{k}+1}, x^{*}\bigr). \end{aligned}$$
(24)

By (21), we know

$$\begin{aligned} \begin{aligned}[b] &(1-\alpha _{m_{k}}) \biggl(1-\lambda _{m_{k}} \frac{\mu }{\lambda _{{m_{k}}+1}}\biggr) \bigl( \varphi (x_{m_{k}},y_{m_{k}})+\varphi (y_{m_{k}},z_{m_{k}})\bigr) \\ &\quad \leq \bigl\Vert x_{m_{k}}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x_{{m_{k}}+1}-x^{*} \bigr\Vert ^{2}+\alpha _{m _{k}} \bigl\Vert x_{0}-x^{*} \bigr\Vert ^{2}. \end{aligned} \end{aligned}$$
(25)

Since \(\{x_{m_{k}}\}\) is bounded, there exists a subsequence of \(\{x_{m_{k}}\}\), still denoted \(\{x_{m_{k}}\}\), which converges weakly to some \(z_{0} \in X\). Using the same argument as in the proof of Case 1, and combining (25) with \(\lim_{k\rightarrow \infty }(1-\lambda _{m _{k}}\frac{\mu }{\lambda _{{m_{k}}+1}}) =1-\mu >0\), we obtain

$$ \lim_{k\rightarrow \infty } \Vert x_{m_{k}}-y_{m_{k}} \Vert =0, \qquad \lim_{k\rightarrow \infty } \Vert z_{m_{k}}-y_{m_{k}} \Vert =0, \qquad \lim_{k\rightarrow \infty } \Vert x_{m_{k}+1}-x_{m_{k}} \Vert =0. $$

Similarly we can conclude that

$$\begin{aligned} \limsup_{k\rightarrow \infty }\bigl\langle Jx_{0}-Jx^{*}, x_{m_{k}+1}-x ^{*} \bigr\rangle =\limsup_{k\rightarrow \infty } \bigl\langle Jx_{0}-Jx ^{*}, x_{m_{k}}-x^{*} \bigr\rangle \leq 0. \end{aligned}$$
(26)

It follows from (25) and the proof of case 1 that, for all \(m_{k} \geq N_{0}\), we have

$$\begin{aligned}& \begin{aligned} \bigl\Vert x_{{m_{k}}+1}-x^{*} \bigr\Vert ^{2} &\leq (1-\alpha _{m_{k}}) \bigl\Vert x_{m_{k}}-x^{*} \bigr\Vert ^{2}+2\alpha _{m_{k}}\bigl\langle x_{0}-x^{*},x_{{m_{k}}+1}-x^{*} \bigr\rangle \\ &\leq (1-\alpha _{m_{k}}) \bigl\Vert x_{{m_{k}}+1}-x^{*} \bigr\Vert ^{2}+2\alpha _{m_{k}} \bigl\langle x_{0}-x^{*},x_{{m_{k}}+1}-x^{*} \bigr\rangle , \end{aligned} \\& \begin{aligned} \varphi \bigl(x_{m_{k}+1}, x^{*} \bigr)&\leq (1-\alpha _{m_{k}})\varphi \bigl(x_{m _{k}}, x^{*}\bigr)+\alpha _{m_{k}}\bigl\langle Jx_{0}-Jx^{*},x_{m_{k}+1}-x^{*} \bigr\rangle \\ &\leq (1-\alpha _{m_{k}})\varphi \bigl(x_{m_{k}+1}, x^{*} \bigr)+\alpha _{m_{k}} \bigl\langle Jx_{0}-Jx^{*},x_{m_{k}+1}-x^{*} \bigr\rangle . \end{aligned} \end{aligned}$$

Since \(\alpha _{n}>0\), this implies that \(\forall m_{k}\geq N_{1}\), we have

$$\begin{aligned} \varphi \bigl(x_{m_{k}}, x^{*}\bigr)\leq \varphi \bigl(x_{m_{k}+1}, x^{*}\bigr)\leq \bigl\langle Jx_{0}-Jx^{*},x_{{m_{k}}+1}-x^{*} \bigr\rangle . \end{aligned}$$

And then

$$ \limsup_{k\rightarrow \infty }\varphi \bigl(x_{m_{k}}, x^{*} \bigr) \leq \limsup_{k\rightarrow \infty }\bigl\langle Jx_{0}-Jx^{*}, x _{m_{k}+1}-x^{*} \bigr\rangle \leq 0 , $$

we obtain \(\limsup_{k\rightarrow \infty }\varphi (x_{m_{k}}, x ^{*})=0\), which, by Lemma 2, means \(\lim_{k\rightarrow \infty }\|x_{m_{k}}-x ^{*}\|^{2}=0\). Since \(\|x_{k}-x^{*}\|\leq \|x_{{m_{k}}+1}-x^{*}\|\), we have \(\lim_{k\rightarrow \infty }\|x_{k}-x^{*}\|=0\). Therefore \(x_{k}\rightarrow x^{*}\). This concludes the proof. □

4 Numerical experiments

In this section, we present two numerical experiments related to variational inequalities.

Example 4.1

We compare the proposed algorithm with Algorithm 3.5 in [23]. For Algorithm A and Algorithm 3.5 in [23], we take \(\alpha _{n}=\frac{1}{100(n+2)}\). To terminate the algorithms, we use the stopping condition \(\|y_{n}-x_{n}\|\leq \varepsilon \) with \(\varepsilon =10^{-3}\) for all the algorithms.

Let \(H=L^{2}([0,2\pi ]) \) with norm \(\|x\|=(\int _{0}^{2\pi }|x(t)|^{2}\,dt)^{ \frac{1}{2}} \) and inner product \(\langle x,y\rangle = \int _{0}^{2 \pi }x(t)y(t)\,dt\), \(x, y\in H\). The operator \(F:H\rightarrow H\) is defined by \(Fx(t)=\max (0,x(t))\), \(t\in [0,2\pi ]\), for all \(x\in H\). It can be easily verified that F is Lipschitz continuous and monotone. The feasible set is \(C=\{x\in H: \int _{0}^{2\pi }(t^{2}+1)x(t)\,dt \leq 1 \}\). Observe that \(0\in S\) and so \(S\neq \emptyset \). We take \(\lambda _{0}=0.7 \) and \(\mu =0.9\) for Algorithm A. For Algorithm 3.5 in [23], we take \(\lambda =0.7 \). The numerical results are shown in Table 1.

Table 1 Comparison between the Algorithm A and Algorithm 3.5 in [23]
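For reproducibility, a discretized version of this example can be set up as in the following sketch; the grid size and the trapezoidal quadrature are our own choices. Since C is a half-space of \(L^{2}\), its projection is explicit with respect to the discretized inner product, and the resulting routines can be plugged directly into the Euclidean sketch of Algorithm A given in Sect. 3 (in a Hilbert space \(J=I\)).

```python
import numpy as np

# Discretization of L^2([0, 2*pi]) on a uniform grid (grid size is an arbitrary choice).
m = 200
t = np.linspace(0.0, 2.0 * np.pi, m)
w = np.full(m, t[1] - t[0]); w[0] *= 0.5; w[-1] *= 0.5     # trapezoidal quadrature weights

def inner(u, v):
    # Discretized L^2 inner product <u, v> = int_0^{2 pi} u(t) v(t) dt.
    return float(np.sum(w * u * v))

def F(x):
    # (Fx)(t) = max(0, x(t)): Lipschitz continuous (L = 1) and monotone.
    return np.maximum(0.0, x)

a = t ** 2 + 1.0
def proj_C(x):
    # C = {x : int (t^2 + 1) x(t) dt <= 1} is a half-space, so the projection is explicit.
    viol = inner(a, x) - 1.0
    return x if viol <= 0.0 else x - (viol / inner(a, a)) * a
```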

Example 4.2

This example is classical. The feasible set is \(C=R^{m}\) and \(F(x)=Ax\), where A is the square \(m\times m\) matrix given by

$$ a_{i,j}= \textstyle\begin{cases} -1 , & \text{if } j=m+1-i \text{ and } j>i, \\ 1, & \text{if } j=m+1-i \text{ and } j< i, \\ 0, & \text{otherwise}. \end{cases} $$

This is a classical example of a problem for which the usual gradient method does not converge. For even m, the zero vector is the solution of this example. We take \(\lambda _{0}=0.7\) (\(\lambda _{0}=0.9\)) and \(\mu =0.9\) for Algorithm A. For Algorithm 3.5 in [17], we take \(\lambda =0.7 \) and \(L=1\). For all tests, we take \(x_{0}=(1,\ldots ,1)\). The numerical results are shown in Table 2.

Table 2 Comparison between Algorithm A and Algorithm 3.5 in [17]
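The matrix A of this example can be generated as in the following sketch; the dimension m is an arbitrary choice, while the starting point \(x_{0}=(1,\ldots ,1)\) follows the description above. The skew-symmetry \(A^{T}=-A\) makes \(F(x)=Ax\) monotone with \(\langle Ax,x\rangle =0\) and Lipschitz constant \(L=1\), and since \(C=R^{m}\) the projection onto C is the identity.

```python
import numpy as np

def build_A(m):
    # a_{i,j} = -1 if j = m+1-i and j > i, +1 if j = m+1-i and j < i, 0 otherwise (1-based indices).
    A = np.zeros((m, m))
    for i in range(1, m + 1):
        j = m + 1 - i
        if j > i:
            A[i - 1, j - 1] = -1.0
        elif j < i:
            A[i - 1, j - 1] = 1.0
    return A

m = 10                                    # dimension chosen only for illustration
A = build_A(m)
F = lambda x: A @ x
proj_C = lambda x: x                      # C = R^m
x0 = np.ones(m)
assert np.allclose(A.T, -A)               # skew-symmetry, hence monotonicity of F
```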

Tables 1 and 2 illustrate that the proposed algorithm, which does not use the Lipschitz constant, behaves similarly to the algorithms that require knowledge of it.

5 Conclusions

In this paper, we established a strong convergence result for the monotone variational inequality problem with a Lipschitz-continuous and monotone mapping in 2-uniformly convex Banach spaces. Our algorithm is based on the subgradient extragradient method with a new step size; its convergence is established without knowledge of the Lipschitz constant of the mapping. Our results extend the results of Yang and Liu [8] from Hilbert spaces to uniformly convex and uniformly smooth Banach spaces and provide strong convergence. Finally, a numerical experiment demonstrates the validity and advantage of the proposed method.

References

  1. Hartman, P., Stampacchia, G.: On some non-linear elliptic differential-functional equations. Acta Math. 115, 271–310 (1966)


  2. Aubin, J.P., Ekeland, I.: Applied Nonlinear Analysis. Wiley, New York (1984)


  3. Baiocchi, C., Capelo, A.: Variational and Quasivariational Inequalities. Applications to Free Boundary Problems. Wiley, New York (1984)


  4. Korpelevich, G.M.: The extragradient method for finding saddle points and other problem. Èkon. Mat. Metody 12, 747–756 (1976)


  5. Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)


  6. Yao, Y., Postolache, M., Yao, J.C.: Iterative algorithms for pseudomonotone variational inequalities and fixed point problems of pseudocontractive operators. Mathematics 7, 1189 (2019)


  7. Yekini, S., Olaniyi, S.I.: Strong convergence result for monotone variational inequalities. Numer. Algorithms 76, 259–282 (2017)


  8. Yang, J., Liu, H.W.: A modified projected gradient method for monotone variational inequalities. J. Optim. Theory Appl. 179, 197–211 (2018)


  9. Malitsky, Y.V.: Projected reflected gradient methods for variational inequalities. SIAM J. Optim. 25, 502–520 (2015)


  10. Yao, Y., Postolache, M., Yao, J.C.: Strong convergence of an extragradient algorithm for variational inequality and fixed point problems. UPB Sci. Bull., Ser. A (2013)

  11. Yao, Y., Postolache, M., Liou, Y.C.: Variant extragradient-type method for monotone variational inequalities. Fixed Point Theory Appl. 2013, 185 (2013)


  12. Yao, Y., Postolache, M., Liou, Y.C., Yao, Z.: Construction algorithms for a class of monotone variational inequalities. Optim. Lett. 10, 1519–1528 (2016)


  13. Yang, J., Liu, H.W., Liu, Z.X.: Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 67, 2247–2258 (2018)


  14. Yang, J., Liu, H.W.: Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer. Algorithms 80, 741–752 (2019)


  15. Vinh, N.T., Hoai, P.T.: Some subgradient extragradient type algorithms for solving split feasibility and fixed point problems. Math. Methods Appl. Sci. 39, 3808–3823 (2016)


  16. Duong, V.T., Dang, V.H.: Modified subgradient extragradient algorithms for variational inequality problems and fixed point problems. Optimization 67, 83–102 (2018)


  17. Rapeepan, K., Satit, S.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 163, 399–412 (2014)


  18. Mainge, F.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47, 1499–1515 (2008)


  19. Yao, Y., Postolache, M., Yao, J.C.: An iterative algorithm for solving generalized variational inequalities and fixed points problems. Mathematics 7, 61 (2019)


  20. Dehaish, B.A.B.: Weak and strong convergence of algorithms for the sum of two accretive operators with applications. J. Nonlinear Convex Anal. 16, 1321–1336 (2015)


  21. Qin, X., Yao, J.C.: A viscosity iterative method for a split feasibility problem. J. Nonlinear Convex Anal. 20, 1497–1506 (2019)


  22. Yao, Y., Postolache, M., Yao, J.C.: Iterative algorithms for generalized variational inequalities. UPB Sci. Bull., Ser. A 81, 3–16 (2019)


  23. Cai, G., Gibali, A., Iyiola, O.S., Yekini, S.: A new double-projection method for solving variational inequalities in Banach spaces. J. Optim. Theory Appl. 178, 219–239 (2018)


  24. Chang, S., Wen, C.F., Yao, J.C.: Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces. Optimization 67, 1183–1196 (2018)


  25. Iusem, A.N., Nasri, M.: Korpelevich’s method for variational inequality problems in Banach spaces. J. Glob. Optim. 50, 59–76 (2011)


  26. Chang, S., Wen, C.F., Yao, J.C.: Zero point problem of accretive operators in Banach spaces. Bull. Malays. Math. Soc. 42, 105–118 (2019)


  27. Lian, Z.: A double projection algorithm for quasimonotone variational inequalities in Banach spaces. J. Inequal. Appl. 2018, 256 (2018)


  28. Minty, G.J.: On a monotonicity method for the solution of non-linear equations in Banach spaces. Proc. Natl. Acad. Sci. USA 50, 1038–1041 (1963)


  29. Takahashi, W.: Convex Analysis and Approximation of Fixed Point. Yokohama Publishers, Yokohama (2009)


  30. Takahashi, W.: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)


  31. Beauzamy, B.: Introduction to Banach Spaces and Their Geometry. North-Holland, Amsterdam (1985)


  32. Alber, Y.I.: Metric and generalized projection operator in Banach spaces: properties and applications. In: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Lecture Notes in Pure and Applied Mathematics, vol. 178, pp. 15–50. Dekker, New York (1996)


  33. Aoyama, K., Kohsaka, F.: Strongly relatively nonexpansive sequences generated by firmly nonexpansive-like mappings. Fixed Point Theory Appl. 2014, 95 (2014)


  34. Mainge, P.E.: The viscosity approximation process for quasi-nonexpansive mapping in Hilbert space. Comput. Math. Appl. 59, 74–79 (2010)


  35. Xu, H.K.: Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)


  36. Hadjisavvas, N., Schaible, S.: Quasimonotone variational inequalities in Banach spaces. J. Optim. Theory Appl. 90, 95–111 (1996)



Acknowledgements

The author would like to express his sincere thanks to the editors.

Availability of data and materials

Not applicable for this section.

Funding

This project was supported by the “Qinglan talents” Program of Xianyang Normal University of China (No. XSYQL201801), the Educational Science Foundation of Shaanxi of China (No. 18JK0830), and the Scientific Research Plan Projects of Xianyang Normal University of China (No. 14XSYK003).

Author information


Contributions

The author worked on the results, and he read and approved the final manuscript.

Corresponding author

Correspondence to Fei Ma.

Ethics declarations

Competing interests

The author declares to have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ma, F. A subgradient extragradient algorithm for solving monotone variational inequalities in Banach spaces. J Inequal Appl 2020, 26 (2020). https://doi.org/10.1186/s13660-020-2295-0

