
Self-adaptive subgradient extragradient method with inertial modification for solving monotone variational inequality problems and quasi-nonexpansive fixed point problems

Abstract

In this paper, we introduce a new self-adaptive algorithm for finding a solution of a variational inequality problem involving a monotone operator together with a fixed point problem of a quasi-nonexpansive mapping with a demiclosedness property in a real Hilbert space. The algorithm is based on the subgradient extragradient method and the inertial method, and it can be regarded as an improvement, at each computational step, of the previously known inertial extragradient method. The weak convergence of the algorithm is studied under standard assumptions. It is worth emphasizing that the proposed algorithm does not require knowledge of the Lipschitz constant of the operator. Finally, we provide some numerical experiments to verify the effectiveness and advantage of the proposed algorithm.

1 Introduction

Throughout this paper, let H be a real Hilbert space with the inner product \(\langle \cdot ,\cdot \rangle \) and norm \(\|\cdot \|\). Let C be a nonempty, closed and convex subset of H. Let \(\mathbb{N}\) and \(\mathbb{R}\) be the sets of positive integers and real numbers, respectively.

The variational inequality problem (VIP) is the problem of finding a point \(x^{*}\in C\) such that

$$ \bigl\langle Ax^{*},x-x^{*}\bigr\rangle \geq 0,\quad \forall x\in C. $$
(1)

The solution set of the VIP is denoted by \(VI(C,A)\). The variational inequality problem is an important branch of nonlinear analysis, and it has received a lot of attention from many authors in recent years (see [1,2,3] and the references therein). Under appropriate conditions, there are two general approaches for solving the variational inequality problem: regularization methods and projection methods. Here, we mainly study the projection method.

For every point \(x\in H\), there exists a unique point in C such that

$$ \Vert x-P_{C}x \Vert =\inf \bigl\{ \Vert x-y \Vert :y\in C \bigr\} , $$

where \(P_{C}:H\rightarrow C\) is called the metric projection from H onto C.
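For simple sets the projection \(P_{C}\) has a closed form. For instance, when C is a box (as in the numerical experiments of Sect. 4), \(P_{C}\) reduces to componentwise clipping; the following minimal Python sketch (NumPy assumed, names illustrative) computes it:

```python
import numpy as np

def project_box(x, lo=-2.0, hi=5.0):
    """Metric projection onto the box C = [lo, hi]^m: clip each coordinate."""
    return np.clip(x, lo, hi)

# The closest point in [-2, 5]^3 to (7, -4, 1) is (5, -2, 1).
print(project_box(np.array([7.0, -4.0, 1.0])))
```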

We know that the variational inequality problem can be recast as a fixed point problem: (1) is equivalent to

$$ x^{*}=P_{C}(I-\lambda A)x^{*}, $$
(2)

where \(P_{C}:H\rightarrow C\) is the metric projection and \(\lambda >0\). Thus we generate \(\{x_{n}\}\) in the following manner:

$$ x_{n+1}=P_{C}(I-\lambda A)x_{n}. $$
(3)

This simple algorithm is an extension of the projected gradient method. However, the convergence of this method requires the rather strong assumption that the operator \(A:H\rightarrow H\) is strongly monotone or inverse strongly monotone.

To avoid this strong assumption, Korpelevich [4] proposed an algorithm which was called the extragradient method:

$$ \textstyle\begin{cases} y_{n}=P_{C}(x_{n}-\lambda Ax_{n}), \\ x_{n+1}=P_{C}(x_{n}-\lambda Ay_{n}), \end{cases} $$
(4)

for each \(n=1,2,\ldots \), where \(\lambda \in (0,1/L)\). Originally, this method was used to solve saddle point problems; soon it was extended to Euclidean and Hilbert spaces. In particular, in a Hilbert space this method requires only that the operator A be monotone and L-Lipschitz continuous. If \(VI(C,A)\neq \emptyset \), the sequence \(\{x_{n}\}\) generated by (4) converges weakly to an element of \(VI(C,A)\).
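For concreteness, here is a short Python sketch of iteration (4); the operator `A`, the projection `project_C`, and the stopping rule are user-supplied placeholders, and the first line of the loop body is exactly the projected gradient step (3):

```python
import numpy as np

def extragradient(A, project_C, x0, lam, n_iter=1000, tol=1e-10):
    """Korpelevich's extragradient method (4); lam should lie in (0, 1/L)
    for a monotone, L-Lipschitz continuous operator A."""
    x = x0
    for _ in range(n_iter):
        y = project_C(x - lam * A(x))         # predictor: projected gradient step (3)
        x_new = project_C(x - lam * A(y))     # corrector: second projection, using A(y)
        if np.linalg.norm(x_new - x) <= tol:  # simple stopping rule (an assumption)
            return x_new
        x = x_new
    return x
```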

However, the extragradient method needs to calculate two projections from H onto the closed convex set C per iteration, so it is applicable only when \(P_{C}\) has a closed form, which means that \(P_{C}\) has an explicit expression. In fact, in some cases the projection onto the nonempty closed convex subset C may be difficult to calculate. To overcome this drawback, many authors have improved the method in various ways.

To our knowledge, there are four kinds of methods that overcome this drawback. The first was the modification of the extragradient method proposed by Tseng [5] in 2000, with the following remarkable scheme:

$$ \textstyle\begin{cases} y_{n}=P_{C}(x_{n}-\lambda Ax_{n}), \\ x_{n+1}=y_{n}-\lambda (Ay_{n}-Ax_{n}), \end{cases} $$
(5)

where A is monotone and L-Lipschitz continuous, and \(\lambda \in (0,1/L)\). From (5), we see that this method requires only one projection per iteration, which is simpler than (4). The second was the subgradient extragradient method, proposed by Censor et al. [6] in 2011:

$$ \textstyle\begin{cases} y_{n}=P_{C}(x_{n}-\lambda Ax_{n}), \\ T_{n}=\{w\in H\mid \langle x_{n}-\lambda Ax_{n}-y_{n},w-y_{n}\rangle \leq 0\}, \\ x_{n+1}=P_{T_{n}}(x_{n}-\lambda Ay_{n}), \end{cases} $$
(6)

where A is monotone and L-Lipschitz continuous, and \(\lambda \in (0,1/L)\). The key idea of the subgradient extragradient method is to replace the second projection onto C of the extragradient method by a projection onto a specially constructed half-space, which significantly reduces the difficulty of the calculations.
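A computational advantage of (6) is that the projection onto the half-space \(T_{n}\) is explicit: writing \(T_{n}=\{w\in H\mid \langle a,w-y_{n}\rangle \leq 0\}\) with \(a=x_{n}-\lambda Ax_{n}-y_{n}\), one has \(P_{T_{n}}z=z-\frac{\max \{0,\langle a,z-y_{n}\rangle \}}{\Vert a\Vert ^{2}}a\) whenever \(a\neq 0\). A minimal Python sketch:

```python
import numpy as np

def project_halfspace(z, a, y):
    """Project z onto the half-space T = {w : <a, w - y> <= 0}."""
    viol = np.dot(a, z - y)
    if viol <= 0:
        return z                              # z already lies in T
    return z - (viol / np.dot(a, a)) * a      # step back along the normal a
```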

Before explaining the third method, let us recall the inertial method. In 2001, Alvarez and Attouch [7] applied the inertial technique to obtain an inertial proximal method for finding a zero of a maximal monotone operator, which works as follows:

$$\mbox{find } x_{n+1}\in H \mbox{ such that } 0\in \lambda _{n}A(x_{n+1})+x_{n+1}-x_{n}-\theta _{n}(x_{n}-x_{n-1}), $$

where \(x_{n-1}, x_{n}\in H\), \(\theta _{n}\in [0,1)\) and \(\lambda _{n}>0\). It can also be written in the following form:

$$ x_{n+1}=J^{A}_{\lambda _{n}}\bigl(x_{n}+\theta _{n}(x_{n}-x_{n-1})\bigr), $$
(7)

where \(J^{A}_{\lambda _{n}}\) is the resolvent of A with parameter \(\lambda _{n}\) and the inertia is induced by the term \(\theta _{n}(x_{n}-x_{n-1})\). Recently, considerable interest has been shown in the inertial method, and many authors have constructed fast iterative algorithms by using it. The third method was studied by Q.L. Dong et al. [8] in 2017:

$$ \textstyle\begin{cases} w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \\ y_{n}=P_{C}(w_{n}-\lambda Aw_{n}), \\ d(w_{n},y_{n})=(w_{n}-y_{n})-\lambda (Aw_{n}-Ay_{n}), \\ x_{n+1}=w_{n}-\gamma \beta _{n}d(w_{n},y_{n}), \end{cases} $$
(8)

for each \(n\geq 1\), where \(\gamma \in (0,2)\), \(\lambda >0\),

$$\begin{aligned}& \beta _{n}:=\textstyle\begin{cases} \varphi (w_{n},y_{n})/ \Vert d(w_{n},y_{n}) \Vert ^{2}, & \mbox{if } d(w_{n},y_{n})\neq 0, \\ 0,& \mbox{if } d(w_{n},y_{n})=0, \end{cases}\displaystyle \\& \varphi (w_{n},y_{n})=\bigl\langle w_{n}-y_{n},d(w_{n},y_{n}) \bigr\rangle . \end{aligned}$$

This algorithm incorporates inertial terms into the projection and contraction algorithm and does not need a summability condition on the inertial sequence. The fourth was a self-adaptive algorithm based on Tseng’s extragradient method, proposed by Duong Viet Thong and Dang Van Hieu [9] in 2017. The algorithm is described as follows.

It is worth mentioning that Algorithm 1 does not require knowledge of the Lipschitz constant of the operator A, which distinguishes it from the other three algorithms. If \(VI(C,A)\neq \emptyset \), the sequences \(\{x_{n}\}\) generated by (5), (6), (8) and Algorithm 1 all converge weakly to an element of \(VI(C,A)\). However, although Algorithm 1 does not require the Lipschitz constant, its step size rule may involve the computation of additional projections.

Algorithm 1

A self-adaptive Tseng’s extragradient method (SA-Tseng’s EGM) for the monotone variational inequality problem (shown as a figure in the original article)
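The body of Algorithm 1 appears as a figure in the original article and is not reproduced here. As a rough, non-authoritative reconstruction: a Tseng update (5) combined with an Armijo-type backtracking line search for the step size removes the need for the Lipschitz constant, at the price of one extra projection per trial step, which is the additional cost noted above. The parameter names `gamma`, `l`, `mu` below are illustrative assumptions, not taken from [9]:

```python
import numpy as np

def sa_tseng_step(A, project_C, x, gamma=1.0, l=0.5, mu=0.4, max_trials=50):
    """One hypothetical self-adaptive Tseng step: shrink lam = gamma * l**k until
    lam * ||A(x) - A(y)|| <= mu * ||x - y||, where y = P_C(x - lam * A(x)).
    Every trial value of lam costs one projection."""
    Ax = A(x)
    lam = gamma
    for _ in range(max_trials):
        y = project_C(x - lam * Ax)
        if lam * np.linalg.norm(A(y) - Ax) <= mu * np.linalg.norm(y - x):
            break
        lam *= l                              # backtrack
    return y - lam * (A(y) - Ax)              # Tseng update (5) with the accepted lam
```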

In 2016, Mainge and Gobinddass [10] obtained \(x_{n+1}\) by the following algorithm:

$$ \textstyle\begin{cases} y_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ x_{n+1}=P_{C}(x_{n}-\lambda _{n}Ay_{n}), \end{cases} $$
(9)

where \(\theta _{n}=\frac{\lambda _{n}}{\delta \lambda _{n-1}}\), \(\lambda _{n}k_{n}\leq \varepsilon \delta (\sqrt{2}-1)\), \(\lambda _{n}\leq k\lambda _{n-1}(\delta +\frac{\lambda _{n-1}}{\lambda _{n-2}})^{ \frac{1}{2}}\), \(\{\lambda _{n}\}\subset [\overline{\mu },\overline{ \nu }]\) and

$$ k_{n}:= \textstyle\begin{cases} \frac{ \Vert Ay_{n}-Ay_{n-1} \Vert }{ \Vert y_{n}-y_{n-1} \Vert }, & \mbox{if } y_{n}-y_{n-1}\neq 0, \\ 0, &\mbox{if } y_{n}-y_{n-1}=0. \end{cases} $$

This iterative algorithm requires neither additional projections for determining the step sizes nor knowledge of the Lipschitz constant of the operator.

The fixed point problem is the problem of finding \(x^{*}\in H\) such that

$$ Tx^{*}=x^{*}, $$
(10)

where \(x^{*}\) is called a fixed point of \(T:H\rightarrow H\). The set of fixed points of T is denoted by \(\operatorname{Fix}(T)\). Recently, many iterative methods have been proposed (see [6, 11,12,13,14,15,16,17,18,19,20,21] and the references therein) for finding a common element of \(\operatorname{Fix}(T)\) and \(VI(C,A)\) in a real Hilbert space.

In this paper, motivated and inspired by the above results, we introduce a new self-adaptive subgradient extragradient algorithm with an inertial modification for finding a solution of the variational inequality problem involving a monotone operator and the fixed point problem of a quasi-nonexpansive mapping with a demiclosedness property in a real Hilbert space. The weak convergence theorem is proved in Sect. 3.

This paper is organized as follows. In Sect. 2, we list some lemmas which will be used in the proofs. In Sect. 3, we propose a new algorithm and analyze its weak convergence. In Sect. 4, we give some numerical examples to illustrate the efficiency and advantage of our algorithm.

2 Preliminaries

In this section, we introduce some lemmas which will be used in this paper. Assume H is a real Hilbert space and C is a nonempty closed convex subset of H. In the rest of the paper, we use the symbol \(x_{n}\rightarrow x\) to denote the strong convergence of the sequence \(\{x_{n}\}\) to x as \(n\rightarrow \infty \) and the symbol \(x_{n}\rightharpoonup x\) to denote the weak convergence of \(\{x_{n}\}\) to x as \(n\rightarrow \infty \). If there exists a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) converging weakly to a point z, then z is called a weak cluster point of \(\{x_{n}\}\), and the set of all weak cluster points of \(\{x_{n}\}\) is denoted by \(\omega _{w}(x_{n})\).

Lemma 2.1

([22])

Let H be a real Hilbert space. For each \(x,y\in H\) and \(\lambda \in \mathbb{R}\), we have

  1. (i)

    \(\|x+y\|^{2}=\|x\|^{2}+\|y\|^{2}+2\langle x,y\rangle \);

  2. (ii)

    \(\|\lambda x+(1-\lambda )y\|^{2}=\lambda \|x\|^{2}+(1-\lambda )\|y\|^{2}-\lambda (1-\lambda )\| x-y\|^{2}\).

In the following, we gather some characteristic properties of \(P_{C}\).

Lemma 2.2

([23])

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Then

  1. (i)

    \(\|P_{C}x-P_{C}y\|^{2}\leq \langle x-y,P_{C}x-P_{C}y\rangle \), \(\forall x,y\in H\);

  2. (ii)

    \(\|x-P_{C}x\|^{2}+\|y-P_{C}x\|^{2}\leq \| x-y\|^{2}\), \(\forall x\in H\), \(y\in C\).

Lemma 2.3

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Given \(x\in H\) and \(z\in C\), we have \(z=P_{C}x\) if and only if \(\langle x-z,y-z \rangle \leq 0\), \(\forall y\in C\).

Next, we present some concepts of an operator.

Definition 2.4

([24])

An operator \(A:H\rightarrow H\) is said to be:

  1. (i)

    monotone, if

    $$ \langle x-y,Ax-Ay\rangle \geq 0,\quad \forall x,y\in H; $$
  2. (ii)

    L-Lipschitz continuous with \(L>0\), if

    $$ \Vert Ax-Ay \Vert \leq L \Vert x-y \Vert ,\quad \forall x,y\in H; $$
  3. (iii)

    nonexpansive, if

    $$ \Vert Ax-Ay \Vert \leq \Vert x-y \Vert ,\quad \forall x,y\in H; $$
  4. (iv)

    quasi-nonexpansive, if

    $$ \Vert Ax-p \Vert \leq \Vert x-p \Vert ,\quad \forall x\in H, p\in \operatorname{Fix}(A), $$

    where \(\operatorname{Fix}(A)\neq \emptyset \).

Remark 2.5

([25])

It is well known that every nonexpansive mapping with a nonempty fixed point set is quasi-nonexpansive. However, a quasi-nonexpansive mapping need not be nonexpansive.

Lemma 2.6

([23])

Assume that \(T:H\rightarrow H\) is a nonlinear operator with \(\operatorname{Fix}(T)\neq \emptyset \). The operator \(I-T\) is said to be demiclosed at zero if, for any sequence \(\{x_{n}\}\) in H, the following implication holds:

$$ x_{n}\rightharpoonup x \quad \textit{and}\quad (I-T)x_{n} \rightarrow 0\Rightarrow x\in \operatorname{Fix}(T). $$

Remark 2.7

It is well known that \(I-T\) is demiclosed at zero when the operator T is nonexpansive. However, there exist quasi-nonexpansive mappings T for which \(I-T\) is not demiclosed at zero, as the following example shows. Therefore, in this paper, we need to emphasize that \(T:H\rightarrow H\) is a quasi-nonexpansive mapping such that \(I-T\) is demiclosed at zero.

Example 1

Let H be the real line and \(C=[0,\frac{3}{2}]\). Define the operator T on C by

$$ Tx=\textstyle\begin{cases} \frac{x}{2}, & \mbox{if } x\in [0,1], \\ x\cos 2\pi x, & \mbox{if } x\in (1,\frac{3}{2}]. \end{cases} $$

Indeed, it is easy to see that \(\operatorname{Fix}(T)=\{0\}\).

On the one hand, for any \(x\in [0,1]\), we have

$$ \vert Tx-0 \vert = \biggl\vert \frac{x}{2}-0 \biggr\vert \leq \vert x-0 \vert . $$

On the other hand, for any \(x\in (1,\frac{3}{2}]\), we have

$$ \vert Tx-0 \vert = \vert x\cos 2\pi x \vert = \vert x \vert \cdot \vert \cos 2\pi x \vert \leq \vert x \vert = \vert x-0 \vert . $$

Thus, the operator T is quasi-nonexpansive.

By taking \(\{x_{n}\}\subset (1,\frac{3}{2}]\) and \(x_{n}\rightarrow 1\) as \(n\rightarrow \infty \), we have

$$ \bigl\vert (I-T)x_{n} \bigr\vert = \vert x_{n}-Tx_{n} \vert = \vert x_{n}-x_{n}\cos 2\pi x_{n} \vert = \vert x_{n} \vert \cdot \vert 1-\cos 2\pi x_{n} \vert \rightarrow 0 \quad (n\rightarrow \infty ). $$

But \(1\notin \operatorname{Fix}(T)\), so \(I-T\) is not demiclosed at zero.
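A quick numerical check of this counterexample (a sketch, not part of the original argument): along \(x_{n}=1+\frac{1}{n}\) the residual \(|x_{n}-Tx_{n}|\) vanishes, although the limit 1 is not a fixed point.

```python
import math

def T(x):
    """The quasi-nonexpansive mapping of Example 1 on [0, 3/2]."""
    return x / 2 if x <= 1 else x * math.cos(2 * math.pi * x)

for n in (10, 100, 1000):
    x = 1 + 1 / n                    # x_n -> 1 from the right
    print(n, abs(x - T(x)))          # residual |x_n - T x_n| -> 0
print(abs(1 - T(1)))                 # |1 - T(1)| = 1/2, so 1 is not a fixed point
```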

Lemma 2.8

([7])

Let \(\{\varphi _{n}\}\), \(\{\delta _{n}\}\) and \(\{\alpha _{n}\}\) be sequences in \([0,+\infty )\) such that

$$ \varphi _{n+1}\leq \varphi _{n}+\alpha _{n}(\varphi _{n}-\varphi _{n-1})+ \delta _{n}, \quad \forall n\geq 1, \sum^{+\infty }_{n=1}\delta _{n}< +\infty , $$

and there exists a real number α with \(0\leq \alpha _{n}\leq \alpha <1\) for all \(n\in \mathbb{N}\). Then the following hold:

  1. (i)

    \(\sum^{+\infty }_{n=1}[\varphi _{n}-\varphi _{n-1}]_{+}<+\infty \), where \([t]_{+}:=\max \{t,0\}\);

  2. (ii)

    there exists \(\varphi ^{*}\in [0,+\infty )\) such that \(\lim_{n\rightarrow +\infty }\varphi _{n}=\varphi ^{*}\).

Lemma 2.9

([26])

Let \(A:H\rightarrow H\) be a monotone and L-Lipschitz continuous mapping on C. Let \(S=P_{C}(I- \mu A)\), where \(\mu >0\). If \(\{x_{n}\}\) is a sequence in H satisfying \(x_{n}\rightharpoonup q\) and \(x_{n}-Sx_{n}\rightarrow 0\), then \(q\in VI(C,A)=\operatorname{Fix}(S)\).

Lemma 2.10

([27])

Let C be a nonempty closed and convex subset of a real Hilbert space H and \(\{x_{n}\}\) be a sequence in H. Assume that the following two conditions hold:

  1. (i)

    \(\lim_{n\rightarrow \infty }\|x_{n}-x\|\) exists for each \(x\in C\);

  2. (ii)

    \(\omega _{w}(x_{n})\subset C\).

Then the sequence \(\{x_{n}\}\) converges weakly to a point in C.

3 Main results

In this section, we propose a new self-adaptive iterative algorithm for solving monotone variational inequality problems and quasi-nonexpansive fixed point problems in a Hilbert space. The algorithm combines the subgradient extragradient method with an inertial modification. Under the assumption \(\operatorname{Fix}(T)\cap VI(C,A)\neq \emptyset \), we prove a weak convergence theorem. Let H be a real Hilbert space and C a nonempty closed convex subset of H. Let \(A:H\rightarrow H\) be a monotone and L-Lipschitz continuous operator; in particular, the Lipschitz constant L is not required to be known. Let \(T:H\rightarrow H\) be a quasi-nonexpansive mapping such that \(I-T\) is demiclosed at zero. The algorithm is described as follows.
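Since Algorithm 2 is displayed as a figure below, we also give a hedged Python sketch assembled from the steps that appear in Lemmas 3.1–3.3 and Theorem 3.4 (inertial extrapolation, projection onto C, projection onto the half-space \(T_{n}\), relaxation with T, and the self-adaptive step size); it is meant as a reading aid, not a verbatim transcription of the figure. The default parameter values are those used in the experiments of Sect. 4:

```python
import numpy as np

def isa_segm(A, T, project_C, x0, x1, lam0, mu=0.5, alpha=0.25, beta=0.5,
             n_iter=500):
    """Sketch of Algorithm 2 (ISA-SEGM) as reconstructed from Sect. 3:
    w_n = x_n + alpha*(x_n - x_{n-1}),  y_n = P_C(w_n - lam_n*A(w_n)),
    z_n = P_{T_n}(w_n - lam_n*A(y_n)) with T_n the half-space of (6),
    x_{n+1} = (1 - beta)*w_n + beta*T(z_n),
    lam_{n+1} = min(lam_n, mu*||w_n - y_n|| / ||A(w_n) - A(y_n)||)."""
    x_prev, x, lam = x0, x1, lam0
    for _ in range(n_iter):
        w = x + alpha * (x - x_prev)              # inertial extrapolation
        Aw = A(w)
        y = project_C(w - lam * Aw)
        a = w - lam * Aw - y                      # normal vector of T_n
        u = w - lam * A(y)
        viol = np.dot(a, u - y)
        z = u - (viol / np.dot(a, a)) * a if viol > 0 else u  # z_n = P_{T_n}(u)
        x_prev, x = x, (1 - beta) * w + beta * T(z)
        dA = np.linalg.norm(Aw - A(y))
        if dA > 0:                                # self-adaptive step size update
            lam = min(lam, mu * np.linalg.norm(w - y) / dA)
    return x
```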

Before giving the theorem and its proof, we first present several useful lemmas.

Lemma 3.1

The sequence \(\{\lambda _{n}\}\) generated by Algorithm 2 is monotonically decreasing and bounded below by \(\min \{\frac{\mu }{L},\lambda _{0}\}\).

Algorithm 2

An inertial self-adaptive subgradient extragradient method (ISA-SEGM) for the monotone variational inequality problem (shown as a figure in the original article)

Proof

From its definition, the sequence \(\{\lambda _{n}\}\) is obviously monotonically decreasing.

Since A is L-Lipschitz continuous with \(L>0\), we have

$$ \Vert Ax_{n}-Ay_{n} \Vert \leq L \Vert x_{n}-y_{n} \Vert . $$

In the case of \(Ax_{n}-Ay_{n}\neq 0\), we have

$$ \frac{\mu \Vert x_{n}-y_{n} \Vert }{ \Vert Ax_{n}-Ay_{n} \Vert }\geq \frac{\mu }{L}. $$

By induction, it follows that \(\lambda _{n}\geq \min \{\frac{\mu }{L},\lambda _{0}\}\) for all n; that is, \(\min \{\frac{\mu }{L},\lambda _{0}\}\) is a lower bound of the sequence \(\{\lambda _{n}\}\). □

Lemma 3.2

If \(w_{n}=y_{n}=x_{n+1}\), then \(w_{n} \in \operatorname{Fix}(T)\cap VI(C,A)\).

Proof

If \(w_{n}=y_{n}\), then \(w_{n}=P_{C}(w_{n}-\lambda _{n}Aw_{n})\), and so, by (2), we have \(w_{n}\in VI(C,A)\).

Moreover, since \(y_{n}=P_{C}(w_{n}-\lambda _{n}Aw_{n})\), Lemma 2.3 gives \(\langle w_{n}-\lambda _{n}Aw_{n}-y_{n},x-y_{n}\rangle \leq 0\), \(\forall x\in C\). With \(w_{n}=y_{n}\), the half-space \(T_{n}=\{x\in H\mid \langle w_{n}-\lambda _{n}Aw_{n}-y_{n},x-y_{n}\rangle \leq 0\}\) reduces to \(\{x\in H\mid \langle Ay_{n},x-y_{n}\rangle \geq 0\}\), so \(\langle (y_{n}-\lambda _{n}Ay_{n})-y_{n},x-y_{n}\rangle \leq 0\) for all \(x\in T_{n}\), and Lemma 2.3 yields \(y_{n}=P_{T_{n}}(y_{n}-\lambda _{n}Ay_{n})=P_{T_{n}}(w_{n}-\lambda _{n}Ay_{n})=z_{n}\).

On the other hand, if \(w_{n}=y_{n}=x_{n+1}\), then, since \(z_{n}=w_{n}\), the update \(x_{n+1}=(1-\beta _{n})w_{n}+\beta _{n}Tz_{n}\) gives

$$ w_{n}=(1-\beta _{n})w_{n}+\beta _{n}Tw_{n}, $$

and since \(\beta _{n}>0\), rearranging yields \(Tw_{n}=w_{n}\), which means \(w_{n}\in \operatorname{Fix}(T)\).

Therefore, \(w_{n}\in \operatorname{Fix}(T)\cap VI(C,A)\). □

Lemma 3.3

Let \(\{z_{n}\}\) be the sequence generated by Algorithm 2. Then, for every \(p\in VI(C,A)\) and all sufficiently large n, we have

$$\begin{aligned} \Vert z_{n}-p \Vert ^{2} \leq& \Vert w_{n}-p \Vert ^{2}-(1-\mu \gamma ) \Vert y_{n}-w_{n} \Vert ^{2} \\ &{}-(1-\mu \gamma ) \Vert z_{n}-y_{n} \Vert ^{2}-2\lambda _{n}\langle Ap,y_{n}-p \rangle . \end{aligned}$$
(11)

Proof

Since \(p\in VI(C,A)\) and \(VI(C,A)\subset C\subset T_{n}\), we have

$$\begin{aligned} \Vert z_{n}-p \Vert ^{2} =& \bigl\Vert P_{T_{n}}(w_{n}-\lambda _{n}Ay_{n})-p \bigr\Vert ^{2} \\ \leq &\bigl\langle P_{T_{n}}(w_{n}-\lambda _{n}Ay_{n})-P_{T_{n}}p,w_{n}- \lambda _{n}Ay_{n}-p\bigr\rangle \\ =&\langle z_{n}-p,w_{n}-\lambda _{n}Ay_{n}-p \rangle \\ =&\frac{1}{2} \Vert z_{n}-p \Vert ^{2}+ \frac{1}{2} \Vert w_{n}-\lambda _{n}Ay_{n}-p \Vert ^{2}-\frac{1}{2} \Vert z_{n}-w_{n}+ \lambda _{n}Ay_{n} \Vert ^{2} \\ =&\frac{1}{2} \Vert z_{n}-p \Vert ^{2}+ \frac{1}{2} \Vert w_{n}-p \Vert ^{2}+ \frac{1}{2} \lambda _{n}^{2} \Vert Ay_{n} \Vert ^{2} \\ &{}-\langle w_{n}-p,\lambda _{n}Ay_{n}\rangle -\frac{1}{2} \Vert z_{n}-w _{n} \Vert ^{2}-\frac{1}{2}\lambda _{n}^{2} \Vert Ay_{n} \Vert ^{2} \\ &{}-\langle z_{n}-w_{n},\lambda _{n}Ay_{n} \rangle \\ =&\frac{1}{2} \Vert z_{n}-p \Vert ^{2}+ \frac{1}{2} \Vert w_{n}-p \Vert ^{2}- \frac{1}{2} \Vert z_{n}-w_{n} \Vert ^{2}- \langle z_{n}-p,\lambda _{n}Ay_{n}\rangle . \end{aligned}$$

It implies that

$$ \Vert z_{n}-p \Vert ^{2}\leq \Vert w_{n}-p \Vert ^{2}- \Vert z_{n}-w_{n} \Vert ^{2}-2\langle z _{n}-p,\lambda _{n}Ay_{n} \rangle . $$

Because the operator A is monotone and \(\lambda _{n}>0\), we have

$$ 2\lambda _{n}\langle Ay_{n}-Ap,y_{n}-p\rangle \geq 0. $$

So

$$\begin{aligned} \Vert z_{n}-p \Vert ^{2} \leq & \Vert w_{n}-p \Vert ^{2}- \Vert z_{n}-w_{n} \Vert ^{2}-2\langle z _{n}-p,\lambda _{n}Ay_{n} \rangle \\ &{}+2\lambda _{n}\langle Ay_{n}-Ap,y_{n}-p \rangle \\ =& \Vert w_{n}-p \Vert ^{2}- \Vert z_{n}-w_{n} \Vert ^{2}+2\lambda _{n} \bigl(\langle Ay_{n}-Ap,y _{n}-p\rangle \\ &{}-\langle z_{n}-p,Ay_{n}\rangle \bigr) \\ =& \Vert w_{n}-p \Vert ^{2}- \Vert z_{n}-w_{n} \Vert ^{2}+2\lambda _{n} \bigl(\langle Ay_{n},y _{n}-p\rangle -\langle Ap,y_{n}-p\rangle \\ &{}-\langle z_{n}-p,Ay_{n}\rangle \bigr) \\ =& \Vert w_{n}-p \Vert ^{2}- \Vert z_{n}-w_{n} \Vert ^{2}+2\lambda _{n} \bigl(\langle Ay_{n},y _{n}-z_{n}\rangle -\langle Ap,y_{n}-p\rangle \bigr) \\ =& \Vert w_{n}-p \Vert ^{2}- \Vert z_{n}-w_{n} \Vert ^{2}+2\lambda _{n} \langle Ay_{n}-Aw _{n},y_{n}-z_{n}\rangle \\ &{}+2\lambda _{n}\langle Aw_{n},y_{n}-z_{n} \rangle -2\lambda _{n} \langle Ap,y_{n}-p\rangle . \end{aligned}$$

Since \(\{\lambda _{n}\}\) is monotonically decreasing and bounded below, its limit exists and \(\frac{\lambda _{n}}{\lambda _{n+1}}\geq 1\). We denote \(\lambda =\lim_{n\rightarrow \infty }\lambda _{n}\); then \(\lim_{n\rightarrow \infty }\frac{\lambda _{n}}{\lambda _{n+1}}=1\). Since \(\mu \in (0,1)\), we have \(\frac{1}{\mu }>1\); let \(\gamma =\frac{1}{2}(1+\frac{1}{\mu })\), so that \(1<\gamma <\frac{1}{\mu }\). Therefore, \(\exists N\in \mathbb{N}\) such that \(\frac{\lambda _{n}}{\lambda _{n+1}}<\gamma \) for all \(n>N\), and hence \(0<1-\mu \gamma <1\).

$$\begin{aligned} 2\lambda _{n}\langle Ay_{n}-Aw_{n},y_{n}-z_{n} \rangle \leq &2\lambda _{n} \Vert y_{n}-z_{n} \Vert \cdot \Vert Ay_{n}-Aw_{n} \Vert \\ \leq &2\lambda _{n} \Vert y_{n}-z_{n} \Vert \cdot \frac{\mu }{\lambda _{n+1}} \Vert y _{n}-w_{n} \Vert \\ \leq &2\mu \gamma \Vert y_{n}-z_{n} \Vert \cdot \Vert y_{n}-w_{n} \Vert \\ \leq &\mu \gamma \Vert y_{n}-z_{n} \Vert ^{2}+\mu \gamma \Vert y_{n}-w_{n} \Vert ^{2} . \end{aligned}$$

Since \(z_{n}\in T_{n}\), we have

$$ \langle w_{n}-\lambda _{n}Aw_{n}-y_{n},z_{n}-y_{n} \rangle \leq 0. $$

So

$$\begin{aligned} 2\lambda _{n}\langle Aw_{n},y_{n}-z_{n} \rangle \leq &2\langle y_{n}-w _{n},z_{n}-y_{n} \rangle \\ =& \Vert z_{n}-w_{n} \Vert ^{2}- \Vert y_{n}-w_{n} \Vert ^{2}- \Vert z_{n}-y_{n} \Vert ^{2}. \end{aligned}$$

Therefore

$$\begin{aligned} \Vert z_{n}-p \Vert ^{2} \leq & \Vert w_{n}-p \Vert ^{2}- \Vert z_{n}-w_{n} \Vert ^{2}+2\lambda _{n}\langle Ay_{n}-Aw_{n},y_{n}-z_{n} \rangle \\ &{}+2\lambda _{n}\langle Aw_{n},y_{n}-z_{n} \rangle -2\lambda _{n} \langle Ap,y_{n}-p\rangle \\ \leq & \Vert w_{n}-p \Vert ^{2}- \Vert z_{n}-w_{n} \Vert ^{2}+\mu \gamma \Vert y_{n}-z_{n} \Vert ^{2}+\mu \gamma \Vert y_{n}-w_{n} \Vert ^{2} \\ &{}+ \Vert z_{n}-w_{n} \Vert ^{2}- \Vert y_{n}-w_{n} \Vert ^{2}- \Vert z_{n}-y_{n} \Vert ^{2}-2 \lambda _{n} \langle Ap,y_{n}-p\rangle \\ =& \Vert w_{n}-p \Vert ^{2}-(1-\mu \gamma ) \Vert y_{n}-w_{n} \Vert ^{2}-(1-\mu \gamma ) \Vert z_{n}-y_{n} \Vert ^{2} \\ &{}-2\lambda _{n}\langle Ap,y_{n}-p\rangle . \end{aligned}$$

 □

Theorem 3.4

Assume that the sequence \(\{\alpha _{n}\}\) is non-decreasing with \(0\leq \alpha _{n}\leq \alpha \leq \frac{1}{4}\) and that \(\{\beta _{n}\}\) is a sequence of real numbers with \(0<\beta \leq \beta _{n}\leq \frac{1}{2}\). Then the sequence \(\{x_{n}\}\) generated by Algorithm 2 converges weakly to an element of \(\operatorname{Fix}(T)\cap VI(C,A)\).

Proof

Let \(p\in \operatorname{Fix}(T)\cap VI(C,A)\).

From Lemma 3.3, since \(y_{n}\in C\) and \(p\in VI(C,A)\) give \(\langle Ap,y_{n}-p\rangle \geq 0\), there exists \(N\geq 0\) such that \(\|z_{n}-p\|\leq \|w_{n}-p\|\) for all \(n>N\).

Since T is quasi-nonexpansive, by Lemma 2.1 we have, for all \(n>N\),

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2} =& \bigl\Vert (1-\beta _{n})w_{n}+\beta _{n}Tz_{n}-p \bigr\Vert ^{2} \\ =& \bigl\Vert (1-\beta _{n}) (w_{n}-p)+\beta _{n}(Tz_{n}-p) \bigr\Vert ^{2} \\ =&(1-\beta _{n}) \Vert w_{n}-p \Vert ^{2}+ \beta _{n} \Vert Tz_{n}-p \Vert ^{2}-\beta _{n}(1- \beta _{n}) \Vert Tz_{n}-w_{n} \Vert ^{2} \\ \leq &(1-\beta _{n}) \Vert w_{n}-p \Vert ^{2}+\beta _{n} \Vert z_{n}-p \Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert Tz_{n}-w_{n} \Vert ^{2} \\ \leq &(1-\beta _{n}) \Vert w_{n}-p \Vert ^{2}+\beta _{n} \Vert w_{n}-p \Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert Tz_{n}-w_{n} \Vert ^{2} \\ =& \Vert w_{n}-p \Vert ^{2}-\beta _{n}(1- \beta _{n}) \Vert Tz_{n}-w_{n} \Vert ^{2}. \end{aligned}$$
(12)

Since \(x_{n+1}=(1-\beta _{n})w_{n}+\beta _{n}Tz_{n}\), we can write it as

$$ Tz_{n}-w_{n}=\frac{1}{\beta _{n}}(x_{n+1}-w_{n}). $$
(13)

Combining (12) and (13) and using \(\beta _{n}\leq \frac{1}{2}\), so that \(\frac{1-\beta _{n}}{\beta _{n}}\geq 1\), we have

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2} \leq & \Vert w_{n}-p \Vert ^{2}- \frac{1-\beta _{n}}{\beta _{n}} \Vert x_{n+1}-w_{n} \Vert ^{2} \\ \leq & \Vert w_{n}-p \Vert ^{2}- \Vert x_{n+1}-w_{n} \Vert ^{2}. \end{aligned}$$
(14)

Besides,

$$\begin{aligned} \Vert w_{n}-p \Vert ^{2} =& \bigl\Vert x_{n}+\alpha _{n}(x_{n}-x_{n-1})-p \bigr\Vert ^{2} \\ =& \bigl\Vert (1+\alpha _{n}) (x_{n}-p)-\alpha _{n}(x_{n-1}-p) \bigr\Vert ^{2} \\ =&(1+\alpha _{n}) \Vert x_{n}-p \Vert ^{2}- \alpha _{n} \Vert x_{n-1}-p \Vert ^{2} \\ &{}+\alpha _{n}(1+\alpha _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2} \end{aligned}$$
(15)

and

$$\begin{aligned} \Vert x_{n+1}-w_{n} \Vert ^{2} =& \bigl\Vert x_{n+1}-\bigl(x_{n}+\alpha _{n}(x_{n}-x_{n-1}) \bigr) \bigr\Vert ^{2} \\ =& \Vert x_{n+1}-x_{n} \Vert ^{2}+\alpha _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\alpha _{n}\langle x_{n+1}-x_{n},x_{n}-x_{n-1} \rangle \\ \geq & \Vert x_{n+1}-x_{n} \Vert ^{2}+\alpha _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2} \\ &{}-2\alpha _{n} \Vert x_{n+1}-x_{n} \Vert \cdot \Vert x_{n}-x_{n-1} \Vert \\ \geq & \Vert x_{n+1}-x_{n} \Vert ^{2}+\alpha _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}- \alpha _{n} \Vert x_{n+1}-x_{n} \Vert ^{2} \\ &{}-\alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2} \\ =&(1-\alpha _{n}) \Vert x_{n+1}-x_{n} \Vert ^{2}+\bigl(\alpha _{n}^{2}-\alpha _{n} \bigr) \Vert x _{n}-x_{n-1} \Vert ^{2}. \end{aligned}$$
(16)

Combining (14), (15), (16) and the fact that \(\{\alpha _{n}\}\) is non-decreasing, we obtain

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2} \leq &(1+\alpha _{n}) \Vert x_{n}-p \Vert ^{2}-\alpha _{n} \Vert x _{n-1}-p \Vert ^{2}+\alpha _{n}(1+\alpha _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2} \\ &{}-(1-\alpha _{n}) \Vert x_{n+1}-x_{n} \Vert ^{2}-\bigl(\alpha _{n}^{2}-\alpha _{n}\bigr) \Vert x_{n}-x_{n-1} \Vert ^{2} \\ =&(1+\alpha _{n}) \Vert x_{n}-p \Vert ^{2}- \alpha _{n} \Vert x_{n-1}-p \Vert ^{2}-(1-\alpha _{n}) \Vert x_{n+1}-x_{n} \Vert ^{2} \\ &{}+\bigl[\alpha _{n}(1+\alpha _{n})-\bigl(\alpha _{n}^{2}-\alpha _{n}\bigr)\bigr] \Vert x _{n}-x_{n-1} \Vert ^{2} \\ =&(1+\alpha _{n}) \Vert x_{n}-p \Vert ^{2}- \alpha _{n} \Vert x_{n-1}-p \Vert ^{2}-(1-\alpha _{n}) \Vert x_{n+1}-x_{n} \Vert ^{2} \\ &{}+2\alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2} \\ \leq &(1+\alpha _{n+1}) \Vert x_{n}-p \Vert ^{2}-\alpha _{n} \Vert x_{n-1}-p \Vert ^{2}-(1- \alpha _{n}) \Vert x_{n+1}-x_{n} \Vert ^{2} \\ &{}+2\alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2}. \end{aligned}$$
(17)

Therefore,

$$\begin{aligned} & \Vert x_{n+1}-p \Vert ^{2}-\alpha _{n+1} \Vert x_{n}-p \Vert ^{2}+2\alpha _{n+1} \Vert x_{n+1}-x _{n} \Vert ^{2} \\ &\quad \leq \Vert x_{n}-p \Vert ^{2}-\alpha _{n} \Vert x_{n-1}-p \Vert ^{2}+2\alpha _{n} \Vert x_{n}-x _{n-1} \Vert ^{2} \\ &\qquad {}+2\alpha _{n+1} \Vert x_{n+1}-x_{n} \Vert ^{2}-(1-\alpha _{n}) \Vert x_{n+1}-x _{n} \Vert ^{2} \\ &\quad = \Vert x_{n}-p \Vert ^{2}-\alpha _{n} \Vert x_{n-1}-p \Vert ^{2}+2\alpha _{n} \Vert x_{n}-x _{n-1} \Vert ^{2} \\ &\qquad {}+(2\alpha _{n+1}-1+\alpha _{n}) \Vert x_{n+1}-x_{n} \Vert ^{2}. \end{aligned}$$
(18)

Put \(\varGamma _{n}:=\|x_{n}-p\|^{2}-\alpha _{n}\|x_{n-1}-p\|^{2}+2\alpha _{n}\|x_{n}-x_{n-1}\|^{2}\).

From (18), we obtain

$$ \varGamma _{n+1}-\varGamma _{n}\leq (2\alpha _{n+1}-1+ \alpha _{n}) \Vert x_{n+1}-x _{n} \Vert ^{2}. $$
(19)

Since \(0\leq \alpha _{n}\leq \alpha \leq \frac{1}{4}\), we have \(-(2\alpha _{n+1}-1+\alpha _{n})=1-2\alpha _{n+1}-\alpha _{n}\geq \frac{1}{4}\).

So \(\varGamma _{n+1}-\varGamma _{n}\leq -\delta \|x_{n+1}-x_{n}\|^{2}\leq 0\), where \(\delta =\frac{1}{4}\), which implies that the sequence \(\{\varGamma _{n}\}\) is non-increasing.

Besides,

$$\begin{aligned} \varGamma _{n} =& \Vert x_{n}-p \Vert ^{2}- \alpha _{n} \Vert x_{n-1}-p \Vert ^{2}+2\alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2} \\ \geq & \Vert x_{n}-p \Vert ^{2}-\alpha _{n} \Vert x_{n-1}-p \Vert ^{2} \end{aligned}$$
(20)

and

$$\begin{aligned} \varGamma _{n+1} =& \Vert x_{n+1}-p \Vert ^{2}- \alpha _{n+1} \Vert x_{n}-p \Vert ^{2}+2 \alpha _{n+1} \Vert x_{n+1}-x_{n} \Vert ^{2} \\ \geq &-\alpha _{n+1} \Vert x_{n}-p \Vert ^{2}. \end{aligned}$$
(21)

Therefore,

$$\begin{aligned} \Vert x_{n}-p \Vert ^{2} \leq &\alpha _{n} \Vert x_{n-1}-p \Vert ^{2}+\varGamma _{n} \\ \leq &\alpha \Vert x_{n-1}-p \Vert ^{2}+\varGamma _{1} \\ \leq &\ldots \\ \leq &\alpha ^{n} \Vert x_{0}-p \Vert ^{2}+ \bigl(1+\cdots +\alpha ^{n-1}\bigr)\varGamma _{1} \\ \leq &\alpha ^{n} \Vert x_{0}-p \Vert ^{2}+ \frac{\varGamma _{1}}{1-\alpha }. \end{aligned}$$
(22)

This implies that the sequence \(\{x_{n}\}\) is bounded.

Combining (21) and (22), we have

$$\begin{aligned} -\varGamma _{n+1} \leq &\alpha _{n+1} \Vert x_{n}-p \Vert ^{2} \\ \leq &\alpha \Vert x_{n}-p \Vert ^{2} \\ \leq &\alpha ^{n+1} \Vert x_{0}-p \Vert ^{2}+ \frac{\alpha \varGamma _{1}}{1-\alpha }. \end{aligned}$$
(23)

From (19), we have

$$\begin{aligned} \delta \sum_{n=1}^{k} \Vert x_{n+1}-x_{n} \Vert ^{2} \leq &\varGamma _{1}-\varGamma _{k+1} \\ \leq &\varGamma _{1}+\alpha ^{k+1} \Vert x_{0}-p \Vert ^{2}+\frac{\alpha \varGamma _{1}}{1- \alpha } \\ =&\alpha ^{k+1} \Vert x_{0}-p \Vert ^{2}+ \frac{\varGamma _{1}}{1-\alpha } \\ \leq & \Vert x_{0}-p \Vert ^{2}+\frac{\varGamma _{1}}{1-\alpha }, \end{aligned}$$
(24)

which means

$$ \sum_{n=1}^{\infty } \Vert x_{n+1}-x_{n} \Vert ^{2}< +\infty $$
(25)

and

$$ \lim_{n\rightarrow \infty } \Vert x_{n+1}-x_{n} \Vert =0. $$
(26)

Besides, since \(\alpha _{n}\leq \alpha \), we have

$$\begin{aligned} \Vert x_{n+1}-w_{n} \Vert =& \bigl\Vert x_{n+1}-x_{n}-\alpha _{n}(x_{n}-x_{n-1}) \bigr\Vert \\ \leq & \Vert x_{n+1}-x_{n} \Vert +\alpha _{n} \Vert x_{n}-x_{n-1} \Vert \\ \leq & \Vert x_{n+1}-x_{n} \Vert +\alpha \Vert x_{n}-x_{n-1} \Vert . \end{aligned}$$
(27)

Therefore, by (26) and (27), we can obtain

$$ \Vert x_{n+1}-w_{n} \Vert \rightarrow 0\quad (n\rightarrow \infty ). $$
(28)

From (17), we have

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2} \leq &(1+\alpha _{n}) \Vert x_{n}-p \Vert ^{2}-\alpha _{n} \Vert x _{n-1}-p \Vert ^{2}-(1-\alpha _{n}) \Vert x_{n+1}-x_{n} \Vert ^{2} \\ &{}+2\alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2} \\ \leq &(1+\alpha _{n}) \Vert x_{n}-p \Vert ^{2}-\alpha _{n} \Vert x_{n-1}-p \Vert ^{2}+2 \alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2}. \end{aligned}$$
(29)

Therefore, by (25), (29) and Lemma 2.8, we have

$$ \lim_{n\rightarrow \infty } \Vert x_{n}-p \Vert ^{2}=l $$
(30)

By (15) and (26), we then have

$$ \lim_{n\rightarrow \infty } \Vert w_{n}-p \Vert ^{2}=l $$
(31)

and, since \(w_{n}-x_{n}=\alpha _{n}(x_{n}-x_{n-1})\) and (26) holds, we also have

$$ \lim_{n\rightarrow \infty } \Vert x_{n}-w_{n} \Vert ^{2}=0. $$
(32)

From (12), we have

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2} \leq &(1-\beta _{n}) \Vert w_{n}-p \Vert ^{2}+\beta _{n} \Vert z _{n}-p \Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert Tz_{n}-w_{n} \Vert ^{2} \\ \leq &(1-\beta _{n}) \Vert w_{n}-p \Vert ^{2}+\beta _{n} \Vert z_{n}-p \Vert ^{2}, \end{aligned}$$

which means

$$ \Vert z_{n}-p \Vert ^{2}\geq \frac{ \Vert x_{n+1}-p \Vert ^{2}- \Vert w_{n}-p \Vert ^{2}}{\beta _{n}}+ \Vert w_{n}-p \Vert ^{2}. $$
(33)

Combining (30), (31) and (33) with the fact that \(\beta _{n}\geq \beta >0\), we have

$$ \liminf_{n\rightarrow \infty } \Vert z_{n}-p \Vert ^{2} \geq \lim_{n\rightarrow \infty } \Vert w_{n}-p \Vert ^{2}=l. $$

By (11), we have

$$ \limsup_{n\rightarrow \infty } \Vert z_{n}-p \Vert ^{2} \leq \lim_{n\rightarrow \infty } \Vert w_{n}-p \Vert ^{2}=l. $$

Therefore, we have

$$ \lim_{n\rightarrow \infty } \Vert z_{n}-p \Vert ^{2}=l. $$
(34)

From Lemma 3.3, since \(-2\lambda _{n}\langle Ap,y_{n}-p\rangle \leq 0\), we can obtain, for all \(n>N\),

$$ \Vert z_{n}-p \Vert ^{2}\leq \Vert w_{n}-p \Vert ^{2}-(1-\mu \gamma ) \Vert y_{n}-w_{n} \Vert ^{2} $$

and

$$ \Vert z_{n}-p \Vert ^{2}\leq \Vert w_{n}-p \Vert ^{2}-(1-\mu \gamma ) \Vert z_{n}-y_{n} \Vert ^{2}. $$

By (31) and (34), we have

$$ \lim_{n\rightarrow \infty } \Vert y_{n}-w_{n} \Vert =0 $$
(35)

and

$$ \lim_{n\rightarrow \infty } \Vert z_{n}-y_{n} \Vert =0. $$
(36)

So

$$ \lim_{n\rightarrow \infty } \Vert z_{n}-w_{n} \Vert \leq \lim_{n\rightarrow \infty }\bigl( \Vert z_{n}-y_{n} \Vert + \Vert y_{n}-w_{n} \Vert \bigr)=0, $$

which implies

$$ \lim_{n\rightarrow \infty } \Vert z_{n}-w_{n} \Vert =0. $$
(37)

Therefore, by (12), (30) and (31), we have

$$ \lim_{n\rightarrow \infty } \Vert Tz_{n}-w_{n} \Vert =0. $$
(38)

So

$$ \lim_{n\rightarrow \infty } \Vert Tz_{n}-z_{n} \Vert \leq \lim _{n\rightarrow \infty }\bigl( \Vert Tz_{n}-w_{n} \Vert + \Vert w_{n}-z_{n} \Vert \bigr)=0, $$

which implies

$$ \lim_{n\rightarrow \infty } \Vert Tz_{n}-z_{n} \Vert =0. $$
(39)

Since \(\{x_{n}\}\) is bounded, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) and \(q\in H\) such that \(x_{n_{k}}\rightharpoonup q\).

So, by (32) we have \(w_{n_{k}}\rightharpoonup q\), and by (37) we have \(z_{n_{k}}\rightharpoonup q\).

Since \(z_{n_{k}}\rightharpoonup q\) and \(I-T\) is demiclosed at zero, by Lemma 2.6, we have \(q\in \operatorname{Fix}(T)\).

On the other hand, for all \(n\in \mathbb{N}\) we have \(\lambda _{n}\geq \min \{\frac{\mu }{L},\lambda _{0}\}>0\), so \(\lambda =\lim_{n\rightarrow \infty }\lambda _{n}>0\). We have \(w_{n_{k}}\rightharpoonup q\) and, by (35),

$$ \bigl\Vert w_{n}-P_{C}(I-\lambda _{n}A)w_{n} \bigr\Vert = \Vert w_{n}-y_{n} \Vert \rightarrow 0. $$

Since \(\lambda _{n}\rightarrow \lambda \) and \(\{Aw_{n}\}\) is bounded, it follows that \(\Vert w_{n}-P_{C}(I-\lambda A)w_{n}\Vert \rightarrow 0\). By Lemma 2.9 (applied with \(\mu =\lambda \)), we have \(q\in VI(C,A)\).

Therefore, \(q\in \operatorname{Fix}(T)\cap VI(C,A)\).

By Lemma 2.10, we get the conclusion that the sequence \(\{x_{n}\}\) converges weakly to an element of \(\operatorname{Fix}(T)\cap VI(C,A)\).

This completes the proof. □

4 Numerical experiments

In this section, we give some numerical examples to illustrate the efficiency and advantage of our algorithm in comparison with a well-known algorithm. We compare Algorithm 2 with the weakly convergent Algorithm 1 [19].

We choose \(\alpha _{n}=\frac{1}{4}\), \(\beta _{n}=\frac{1}{2}\), \(\mu =\frac{1}{2}\), \(\lambda _{0}=\frac{1}{7}\). The starting point is \(x_{0}=x_{1}=(1,1,\ldots ,1)\in \mathbb{R}^{m}\). In order to show the convergence of the algorithms, we illustrate the behavior of the sequence \(D_{n}=\|x_{n}-x^{*}\|^{2}\), \(n=0,1,2,\ldots \), as the execution time in seconds elapses, where \(x^{*}\) is the solution of the problem and \(\{x_{n}\}\) is the sequence generated by the algorithms. Now we introduce the examples in detail.

Example 2

Let A be a Lipschitz continuous and monotone mapping and T a quasi-nonexpansive mapping. Assume \(\operatorname{Fix}(T)\cap VI(C,A)\neq \emptyset \), \(C=[-2,5]\) and \(H=\mathbb{R}\). Let A and T be given by

$$\begin{aligned} &Ax:=x+\sin x, \\ &Tx:=\frac{x}{2}\sin x. \end{aligned}$$

In the following, let us verify that A and T satisfy the required conditions.

First, for all \(x,y\in H\), we have

$$\begin{aligned}& \Vert Ax-Ay \Vert = \Vert x+\sin x-y-\sin y \Vert \leq \Vert x-y \Vert + \Vert \sin x-\sin y \Vert \leq 2 \Vert x-y \Vert , \\& \langle Ax-Ay,x-y\rangle =(x+\sin x-y-\sin y) (x-y)=(x-y)^{2}+(\sin x- \sin y) (x-y)\geq 0. \end{aligned}$$

Therefore, \(\|Ax-Ay\|\leq L\|x-y\|\) with \(L=2\) and \(\langle Ax-Ay,x-y\rangle \geq 0\); that is, A is L-Lipschitz continuous and monotone.

Second, for \(Tx=\frac{x}{2}\sin x\): if \(x\neq 0\) and \(Tx=x\), then \(x=\frac{x}{2}\sin x\) and \(\sin x=2\), which is impossible. Therefore \(x=0\), which means \(\operatorname{Fix}(T)=\{0\}\).

For all \(x\in H\),

$$ \Vert Tx-0 \Vert = \biggl\Vert \frac{x}{2}\sin x \biggr\Vert \leq \biggl\Vert \frac{x}{2} \biggr\Vert \leq \Vert x \Vert = \Vert x-0 \Vert , $$

which means T is quasi-nonexpansive.

Besides, taking \(x=2\pi \) and \(y=\frac{3\pi }{2}\), we have

$$ \Vert Tx-Ty \Vert = \biggl\Vert \frac{2\pi }{2}\sin 2\pi - \frac{3\pi }{4}\sin \frac{3 \pi }{2} \biggr\Vert =\frac{3\pi }{4}> \biggl\Vert 2\pi -\frac{3\pi }{2} \biggr\Vert =\frac{\pi }{2}, $$

which means T is not a nonexpansive mapping.

Therefore, A and T satisfy the required conditions. The numerical results for this example are shown in Fig. 1.

Figure 1

Experiment for Example 2

From Fig. 1, we can see that Algorithm 2 converges in a shorter time than the previously studied Algorithm 1 [19].
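For readers who wish to reproduce this experiment, here is a hedged sketch using the routine `isa_segm` from Sect. 3 and the parameter values stated at the beginning of this section (plotting \(D_{n}\) against wall-clock time is omitted):

```python
import numpy as np

A = lambda x: x + np.sin(x)                  # monotone and 2-Lipschitz
T = lambda x: 0.5 * x * np.sin(x)            # quasi-nonexpansive, Fix(T) = {0}
project_C = lambda x: np.clip(x, -2.0, 5.0)  # C = [-2, 5]

x = isa_segm(A, T, project_C, x0=np.array([1.0]), x1=np.array([1.0]),
             lam0=1/7, mu=0.5, alpha=0.25, beta=0.5)
print(np.linalg.norm(x) ** 2)                # D_n = ||x_n - x*||^2 with x* = 0
```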

Example 3

We consider the operator \(T:H\rightarrow H\) with \(Tx=-\frac{1}{2}x\) and a linear operator \(A:\mathbb{R}^{m}\rightarrow \mathbb{R}^{m}\) of the form \(A(x)=Mx+q\) [28, 29], where

$$ M=NN^{T}+S+D, $$

N is an \(m\times m\) matrix, S is an \(m\times m\) skew-symmetric matrix, D is an \(m\times m\) diagonal matrix whose diagonal entries are nonnegative, and \(q\in \mathbb{R}^{m}\) is a vector; therefore M is positive semidefinite (and positive definite when the diagonal entries of D are positive). The feasible set is

$$ C=\bigl\{ x=(x_{1},\ldots ,x_{m})\in \mathbb{R}^{m}:-2\leq x_{i}\leq 5, i=1,2,\ldots ,m\bigr\} . $$

It is obvious that A is monotone and Lipschitz continuous. For the experiments, q is the zero vector, all the entries of N and S are generated randomly and uniformly in \([-2,2]\), and the diagonal entries of D are in \((0,2)\).
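A hedged sketch of the data generation for this example (the matrices are random, so results vary from run to run; `isa_segm` is the sketch given in Sect. 3, and the construction of S below is one simple way to obtain a skew-symmetric matrix with entries in \([-2,2]\)):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 20
N = rng.uniform(-2, 2, (m, m))
B = rng.uniform(-1, 1, (m, m))
S = B - B.T                                  # skew-symmetric, entries in [-2, 2]
D = np.diag(rng.uniform(0, 2, m))            # positive diagonal entries
M = N @ N.T + S + D                          # positive definite
q = np.zeros(m)

A = lambda x: M @ x + q                      # monotone, Lipschitz (L = ||M||)
T = lambda x: -0.5 * x                       # quasi-nonexpansive, Fix(T) = {0}
project_C = lambda x: np.clip(x, -2.0, 5.0)  # C = [-2, 5]^m

x = isa_segm(A, T, project_C, x0=np.ones(m), x1=np.ones(m), lam0=1/7)
print(np.linalg.norm(x) ** 2)                # D_n should decrease toward 0
```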

We can easily see that the solution of the problem in this case is \(x^{*}=0\). In order to illustrate the effectiveness of the algorithm, we show the behavior of \(D_{n}\) as the execution time (in seconds) elapses in Figs. 2, 3 and 4, for \(\mathbb{R}^{20}\), \(\mathbb{R}^{50}\) and \(\mathbb{R}^{100}\), respectively.

Figure 2

Experiment with \(m=20\) for Example 3

Figure 3

Experiment with \(m=50\) for Example 3

Figure 4

Experiment with \(m=100\) for Example 3

According to Figs. 2, 3 and 4, the proposed algorithm has competitive advantages over the existing Algorithm 1 [19].

5 Conclusion

In this paper, we introduced a new self-adaptive algorithm for finding a solution of a variational inequality problem involving a monotone operator together with a fixed point problem of a quasi-nonexpansive mapping with a demiclosedness property in a real Hilbert space. The algorithm combines the subgradient extragradient method with an inertial modification. Under some suitable conditions, we proved the weak convergence of the algorithm. In particular, it is worth emphasizing that the proposed algorithm requires neither additional projections nor knowledge of the Lipschitz constant of the operator. Finally, some numerical experiments were performed to verify the convergence of the algorithm and to compare it with the previously known Algorithm 1 [19].

References

  1. Gibali, A.: Two simple relaxed perturbed extragradient methods for solving variational inequalities in Euclidean spaces. J. Nonlinear Var. Anal. 2, 49–61 (2018)


  2. Yao, Y.H., Shahzad, N.: Strong convergence of a proximal point algorithm with general errors. Optim. Lett. 6, 621–628 (2012)


  3. Yao, Y.H., Chen, R.D., Xu, H.K.: Schemes for finding minimum-norm solutions of variational inequalities. Nonlinear Anal. 72, 3447–3456 (2010)


  4. Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 12, 747–756 (1976)


  5. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)


  6. Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)


  7. Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)


  8. Dong, Q.L., Cho, Y.J., Zhong, L.L.: Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. (2017). https://doi.org/10.1007/s10898-017-0506-0


  9. Thong, D.V., Hieu, D.V.: Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms (2017). https://doi.org/10.1007/s11075-017-0412-Z


  10. Mainge, P.E., Gobinddass, M.L.: Convergence of one-step projected gradient methods for variational inequalities. J. Optim. Theory Appl. 171, 146–168 (2016)


  11. Ceng, L.C., Yao, J.C.: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 10, 1293–1303 (2006)


  12. Iiduka, H., Takahashi, W.: Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings. Nonlinear Anal. 61, 341–350 (2005)


  13. Nadezhkina, N., Takahashi, W.: Strong convergence theorem by a hybrid method for nonexpansive mappings and monotone mappings. SIAM J. Optim. 16, 1230–1241 (2006)


  14. Yao, Y.H., Liou, Y.C., Yao, J.C.: Iterative algorithms for the split variational inequality and fixed point problems under nonlinear transformations. J. Nonlinear Sci. Appl. 10, 843–854 (2017)


  15. Thong, D.V., Hieu, D.V.: An inertial method for solving split common fixed point problems. J. Fixed Point Theory Appl. 19, 3029–3051 (2017)


  16. Qin, X.L., Yao, J.C.: Projection splitting algorithms for nonself operators. J. Nonlinear Convex Anal. 18, 925–935 (2017)


  17. Yang, Y., Yuan, Q.: A hybrid descent iterative algorithm for a split inclusion problem. J. Nonlinear Funct. Anal. 2018, Article ID 42 (2018)


  18. Zegeye, H., Shahzad, N., Yao, Y.H.: Minimum-norm solution of variational inequality and fixed point problem in Banach spaces. Optimization 64, 453–471 (2015)


  19. Thong, D.V., Hieu, D.V.: Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Algorithms (2018). https://doi.org/10.1007/s11075-018-0527-x


  20. Takahashi, W.: Nonlinear Functional Analysis-Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama (2000)


  21. Xu, H.K.: Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. 66(2), 240–256 (2002)


  22. Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)


  23. Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)


  24. Mainge, P.E.: The viscosity approximation process for quasi-nonexpansive mapping in Hilbert space. Comput. Math. Appl. 59, 74–79 (2010)


  25. Chidume, C.E.: Geometric Properties of Banach Spaces and Nonlinear Iterations. Lecture Notes in Mathematics, vol. 1965. Springer, Berlin (2009)


  26. Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163, 399–412 (2014)


  27. Xu, H.K.: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360–378 (2011)


  28. Harker, P.T., Pang, J.S.: A damped-Newton method for the linear complementarity problem. Lect. Appl. Math. 26, 265–284 (1990)


  29. Hieu, D.V., Anh, P.K., Muu, L.D.: Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 66, 75–96 (2017)



Funding

This work was supported by the Financial Funds for the Central Universities (No. 3122018L004) and Scientific research project of Tianjin Municipal Education Commission (No. 2018KJ253).

Author information



Contributions

All the authors read and approved the final manuscript.

Corresponding author

Correspondence to Ming Tian.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Tian, M., Tong, M. Self-adaptive subgradient extragradient method with inertial modification for solving monotone variational inequality problems and quasi-nonexpansive fixed point problems. J Inequal Appl 2019, 7 (2019). https://doi.org/10.1186/s13660-019-1958-1

