Inertial hybrid algorithm for variational inequality problems in Hilbert spaces

Abstract

For variational inequality problems, the inertial projection and contraction method has been studied, but it is only known to converge weakly. In this paper, we propose a strongly convergent iterative method for finding a solution of a variational inequality problem with a monotone mapping by combining the projection and contraction method with an inertial hybrid algorithm. Our result can also be used to solve other related problems in Hilbert spaces.

1 Introduction

The variational inequality (VI) problem plays an important role in nonlinear analysis and optimization. It generalizes the nonlinear complementarity problem and has found considerable applications in many fields. The VI problem was introduced by Fichera [1, 2] for solving the Signorini problem and was later studied by Stampacchia [3] for solving problems in mechanics.

Let H be a real Hilbert space with the inner product \(\langle \cdot ,\cdot \rangle \) and the norm \(\|\cdot \|\). Let C be a nonempty closed convex subset of H. The variational inequality problem is to find a point \(x^{*}\in C\) such that

$$ \bigl\langle Fx^{*},x-x^{*}\bigr\rangle \geq 0,\quad \forall x\in C, $$
(1.1)

where F is a mapping of H into H. The solution set of VI (1.1) is denoted by \(VI(C,F)\).

Using properties of the metric projection, we can easily see that, for any \(\lambda >0\), \(x^{*}\in VI(C,F)\) if and only if

$$ x^{*}=P_{C}(I-\lambda F)x^{*}. $$

Many scholars have studied variational inequality problems, and several iterative methods for solving VI (1.1) have been proposed. A simple iterative method [4] is

$$ x_{n+1}=P_{C}(I-\lambda F)x_{n}, $$
(1.2)

or more generally,

$$ x_{n+1}=P_{C}(I-\lambda _{n}F)x_{n}. $$
(1.3)

The convergence of (1.2) and (1.3) depends on the properties of F. If F is strongly monotone and Lipschitz continuous, (1.2) and (1.3) converge strongly under certain conditions on the parameters. If F is inverse strongly monotone, (1.2) and (1.3) converge weakly under suitable conditions.
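To fix ideas, the following Python (NumPy) sketch implements iteration (1.3); the box constraint set, the mapping F, and the step size below are illustrative choices of ours, not data from the original text.

```python
import numpy as np

def projected_gradient(F, proj_C, x0, lam, n_iter=1000, tol=1e-8):
    # Iteration (1.3): x_{n+1} = P_C(x_n - lam * F(x_n)) with a constant step lam.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x_new = proj_C(x - lam * F(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative run: F(x) = x + sin(x) is the gradient of the convex function
# x^2/2 - cos(x) with 2-Lipschitz gradient, hence inverse strongly monotone,
# so a small constant step suffices; C = [-2, 5], so P_C is clipping.
F = lambda x: x + np.sin(x)
proj_C = lambda x: np.clip(x, -2.0, 5.0)
print(projected_gradient(F, proj_C, x0=np.array([2.0]), lam=0.4))  # ~ [0.]
```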

In 1976, Korpelevich [5] proposed the following so-called extragradient method for solving VI (1.1) when F is monotone and Lipschitz continuous in the finite-dimensional Euclidean space \(\mathbb{R}^{n}\):

$$ \textstyle\begin{cases} x_{1}=x\in C\quad \text{is chosen arbitrarily}, \\ y_{n}=P_{C}(x_{n}-\lambda Fx_{n}), \\ x_{n+1}=P_{C}(x_{n}-\lambda Fy_{n}), \end{cases} $$
(1.4)

for each \(n\in \mathbb{N}\). Under some suitable conditions, the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge to the same point \(z\in VI(C,F)\). The recent variants of Korpelevich’s method can be found in [6].
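A minimal NumPy sketch of the extragradient iteration (1.4), assuming a constant step \(\lambda \in (0,1/L)\) and a user-supplied projection \(P_{C}\):

```python
import numpy as np

def extragradient(F, proj_C, x0, lam, n_iter=1000, tol=1e-8):
    # Korpelevich's method (1.4): a prediction step y_n followed by a
    # correction step that re-evaluates F at y_n; take lam in (0, 1/L).
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = proj_C(x - lam * F(x))       # prediction
        x_new = proj_C(x - lam * F(y))   # correction
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```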

In 1997, He [7] proposed another method for solving the VI with a monotone mapping, the so-called projection and contraction method:

$$ \textstyle\begin{cases} x_{1}=x\in C\quad \text{is chosen arbitrarily}, \\ y_{n}=P_{C}(x_{n}-\lambda Fx_{n}), \\ d(x_{n},y_{n})=(x_{n}-y_{n})-\lambda (Fx_{n}-Fy_{n}), \\ x_{n+1}=x_{n}-\gamma \beta _{n}d(x_{n},y_{n}), \end{cases} $$
(1.5)

for each \(n\in \mathbb{N}\), where \(\gamma \in (0,2)\),

$$ \beta _{n}= \textstyle\begin{cases} \frac{\varphi (x_{n},y_{n})}{ \Vert d(x_{n},y_{n}) \Vert ^{2}},&\text{if }d(x_{n},y_{n})\neq 0, \\ 0,&\text{if }d(x_{n},y_{n})=0, \end{cases} $$

and

$$ \varphi (x_{n},y_{n})=\bigl\langle x_{n}-y_{n},d(x_{n},y_{n}) \bigr\rangle . $$

Under suitable conditions on the parameters, this method converges to a solution of VI (1.1).
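A sketch of He's method (1.5) under the same conventions (constant λ, \(\gamma \in (0,2)\)); note that only one projection onto C is needed per iteration, the correction being an explicit step along \(d(x_{n},y_{n})\):

```python
import numpy as np

def projection_contraction(F, proj_C, x0, lam, gamma=1.0, n_iter=1000, tol=1e-8):
    # He's method (1.5): predict y_n = P_C(x_n - lam*F(x_n)), then correct
    # along d(x_n, y_n) with the adaptive step beta_n = phi_n / ||d_n||^2.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = proj_C(x - lam * F(x))
        d = (x - y) - lam * (F(x) - F(y))
        nd2 = np.dot(d, d)
        if nd2 < 1e-20:                  # d = 0: y_n already solves the VI
            return y
        beta = np.dot(x - y, d) / nd2
        x_new = x - gamma * beta * d
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```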

In 2017, Dong et al. [8] proposed the following so-called inertial projection and contraction method:

$$ \textstyle\begin{cases} x_{0},x_{1}\in H \quad \text{are chosen arbitrarily}, \\ w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \\ y_{n}=P_{C}(w_{n}-\lambda Fw_{n}), \\ d(w_{n},y_{n})=(w_{n}-y_{n})-\lambda (Fw_{n}-Fy_{n}), \\ x_{n+1}=w_{n}-\gamma \beta _{n}d(w_{n},y_{n}), \end{cases} $$
(1.6)

for each \(n\in \mathbb{N}\), where \(\gamma \in (0,2)\),

$$ \beta _{n}=\textstyle\begin{cases} \frac{\varphi (w_{n},y_{n})}{ \Vert d(w_{n},y_{n}) \Vert ^{2}},&\text{if }d(w_{n},y_{n})\neq 0, \\ 0,&\text{if }d(w_{n},y_{n})=0, \end{cases} $$

and

$$ \varphi (w_{n},y_{n})=\bigl\langle w_{n}-y_{n},d(w_{n},y_{n}) \bigr\rangle . $$

They proved that the sequence \(\{x_{n}\}\) generated by (1.6) converges weakly to a point in \(VI(C,F)\) under certain conditions.

In infinite-dimensional spaces, however, weak convergence is often not enough, and a strong convergence result is desirable. Very recently, Dong et al. [9] used the hybrid method to modify an inertial forward-backward algorithm for solving zero point problems in Hilbert spaces:

$$ \textstyle\begin{cases} x_{0},x_{1}\in H \quad \text{are chosen arbitrarily}, \\ y_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \\ z_{n}=(I+r_{n}B)^{-1}(y_{n}-r_{n}Ay_{n}), \\ C_{n}=\{u\in H: \Vert z_{n}-u \Vert ^{2}\leq \Vert x_{n}-u \Vert ^{2}-2\alpha _{n}\langle x_{n}-u,x_{n-1}-x_{n}\rangle \\ \hphantom{C_{n}=} {}+\alpha _{n}^{2} \Vert x_{n-1}-x_{n} \Vert ^{2}\}, \\ Q_{n}=\{u\in H:\langle u-x_{n},x_{0}-x_{n}\rangle \leq 0\}, \\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{0}. \end{cases} $$
(1.7)

They proved that \(\{x_{n}\}\) converges strongly to \(P_{(A+B)^{-1}(0)}x_{0}\) under some suitable conditions.

Based on the work above, we propose an inertial hybrid method for finding a solution of a variational inequality problem with a monotone mapping. As applications, we use the proposed algorithm to solve other related problems in Hilbert spaces.

2 Preliminaries

In this section, we introduce some mathematical symbols, definitions, and lemmas which can be used in the proofs of our main results.

Throughout this paper, let \(\mathbb{N}\) and \(\mathbb{R}\) be the sets of positive integers and real numbers, respectively. Let H be a real Hilbert space with the inner product \(\langle \cdot ,\cdot \rangle \) and norm \(\|\cdot \|\). For a sequence \(\{x_{n}\}\) in H, we write “\(x_{n}\rightharpoonup x\)” to indicate that \(\{x_{n}\}\) converges weakly to x and “\(x_{n}\rightarrow x\)” to indicate that \(\{x_{n}\}\) converges strongly to x. A point z is called a weak cluster point of \(\{x_{n}\}\) if there exists a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) converging weakly to z. We write \(\omega _{w}(x_{n})\) for the set of all weak cluster points of \(\{x_{n}\}\). A fixed point of a mapping \(T:H\rightarrow H\) is a point \(x\in H\) such that \(Tx=x\), and we denote the set of all fixed points of T by \(\mathit{Fix}(T)\).

We introduce definitions of some operators we will use in the following sections.

Definition 2.1

([10–12])

Let \(T:H\rightarrow H\) be a nonlinear operator.

  1. (i)

    T is nonexpansive if

    $$ \Vert Tx-Ty \Vert \leq \Vert x-y \Vert , \quad \forall x,y\in H. $$
  2. (ii)

    T is firmly nonexpansive if

    $$ \langle Tx-Ty, x-y\rangle \geq \Vert Tx-Ty \Vert ^{2}, \quad \forall x,y\in H. $$

    We can easily show that a firmly nonexpansive mapping is always nonexpansive by using the Cauchy–Schwarz inequality.

  3. (iii)

    T is α-averaged, with \(0<\alpha <1\), if

    $$ T=(1-\alpha )I+\alpha S, $$

    where \(S:H\rightarrow H\) is nonexpansive. The term “averaged mapping” was introduced in the early paper by Baillon, Bruck, and Reich [13]. It is obvious that \(\mathit{Fix}(S)=\mathit{Fix}(T)\). We can easily show that a firmly nonexpansive mapping is \(\frac{1}{2}\)-averaged: T is firmly nonexpansive if and only if \(S:=2T-I\) is nonexpansive, in which case \(T=\frac{1}{2}I+\frac{1}{2}S\).

  4. (iv)

    T is L-Lipschitz continuous, with \(L\geq 0\), if

    $$ \Vert Tx-Ty \Vert \leq L \Vert x-y \Vert , \quad \forall x,y\in H. $$

    We call T a contractive mapping when \(0\leq L<1\).

Definition 2.2

([10, 11])

Let \(F:H\rightarrow H\) be a nonlinear mapping.

  1. (i)

    F is monotone if

    $$ \langle Fx-Fy, x-y\rangle \geq 0, \quad \forall x,y\in H. $$
  2. (ii)

    F is η-strongly monotone, with \(\eta >0\), if

    $$ \langle Fx-Fy, x-y\rangle \geq \eta \Vert x-y \Vert ^{2}, \quad \forall x,y \in H. $$
  3. (iii)

    F is v-inverse strongly monotone (v-ism), with \(v>0\), if

    $$ \langle Fx-Fy, x-y\rangle \geq v \Vert Fx-Fy \Vert ^{2}, \quad \forall x,y \in H. $$

    We can easily show that a v-ism mapping is \(\frac{1}{v}\)-Lipschitz continuous by using the Cauchy–Schwarz inequality.

We introduce some definitions and propositions about projections.

Proposition 2.3

([4])

Let C be a nonempty closed convex subset of H. Then, for each \(x\in H\), there exists a unique point \(z\in C\) such that

$$ \Vert x-z \Vert \leq \Vert x-y \Vert ,\quad \forall y\in C. $$

Definition 2.4

([4])

Let C be a nonempty closed convex subset of H. Define

$$ P_{C}x=\arg \min_{y\in C} \Vert y-x \Vert , \quad \forall x\in H. $$

\(P_{C}\) is called the metric projection onto C. We can show that \(P_{C}\) is firmly nonexpansive.
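For a few standard sets the metric projection has a closed form; the following NumPy snippets are illustrative instances (the set data are passed explicitly):

```python
import numpy as np

def proj_box(x, lo, hi):
    # P_C for the box C = {y : lo <= y <= hi} (componentwise clipping).
    return np.clip(x, lo, hi)

def proj_ball(x, center, radius):
    # P_C for the closed ball C = {y : ||y - center|| <= radius}.
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= radius else center + (radius / nd) * d

def proj_halfspace(x, a, b):
    # P_C for the half-space C = {y : <a, y> <= b}, with a != 0.
    viol = np.dot(a, x) - b
    return x if viol <= 0 else x - (viol / np.dot(a, a)) * a
```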

Lemma 2.5

([14, 15])

Let C be a nonempty closed convex subset of a real Hilbert space H. Given \(x\in H\) and \(z\in C\). Then \(z=P_{C}x\) if and only if there holds the inequality

$$ \langle x-z, z-y\rangle \geq 0, \quad \forall y\in C. $$

Lemma 2.6

([14–16])

Let C be a nonempty closed convex subset of a real Hilbert space H. Given \(x\in H\) and \(z\in C\). Then \(z=P_{C}x\) if and only if there holds the inequality

$$ \Vert x-y \Vert ^{2}\geq \Vert x-z \Vert ^{2}+ \Vert y-z \Vert ^{2}, \quad \forall y\in C. $$

More properties of metric projections can be found in [12].

Next, we introduce some definitions and propositions about set-valued mappings.

Definition 2.7

([17])

Let H be a real Hilbert space, and let A be a set-valued mapping of H into \(2^{H}\). The effective domain of A, denoted by \(D(A)\), is defined by

$$ D(A)=\{x\in H: Ax\neq \emptyset \}. $$

The graph of A is defined by

$$ G(A)=\bigl\{ (x,u)\in H\times H: u\in Ax\bigr\} . $$

A set-valued mapping A is called monotone if

$$ \langle x-y, u-v\rangle \geq 0, \quad \forall (x,u), (y,v)\in G(A). $$

A monotone mapping A is called maximal if its graph is not properly contained in the graph of any other monotone mapping.

In practice, the definition of a maximal monotone mapping is inconvenient to use directly; instead, the following characterization is usually used: a monotone mapping A is maximal if and only if, for \((x,u)\in H\times H\), \(\langle x-y, u-v\rangle \geq 0\) for each \((y,v)\in G(A)\) implies \((x,u)\in G(A)\). This characterization is just a reformulation of the definition of maximal monotonicity.

Definition 2.8

([17, 18])

Let \(A:H\rightarrow 2^{H}\) be a mapping and \(r>0\). The resolvent of A is

$$ J_{r}^{A}:=(I+rA)^{-1}. $$

Lemma 2.9

([17, 18])

Let \(A:H\rightarrow 2^{H}\) be a maximal monotone mapping and \(r>0\). Then \(J_{r}^{A}:H\rightarrow D(A)\) is firmly nonexpansive.

In particular, let C be a nonempty closed convex subset of a real Hilbert space H and recall the normal cone [19] to C at \(x\in C\):

$$ N_{C}x=\bigl\{ z\in H: \langle z, y-x\rangle \leq 0, \forall y \in C\bigr\} . $$

We can easily show that \(N_{C}\) is a maximal monotone mapping and that its resolvent is \(P_{C}\). So the resolvent of a maximal monotone mapping can be regarded as a generalization of the metric projection operator.
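As a concrete instance beyond \(P_{C}\) (a standard fact, added here for illustration), the resolvent of the subdifferential of \(f(t)=\vert t\vert \) on \(\mathbb{R}\) is the soft-thresholding operator:

```python
import numpy as np

def resolvent_abs(x, r):
    # J_r = (I + r*A)^{-1} for A the subdifferential of f(t) = |t|:
    # componentwise soft-thresholding with threshold r > 0.
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

print(resolvent_abs(np.array([-3.0, 0.5, 2.0]), r=1.0))  # [-2.  0.  1.]
```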

Lemma 2.10

([19])

Let C be a nonempty closed convex subset of a real Hilbert space H, and let F be a monotone and Lipschitz continuous mapping of C into H. Define

$$ Tv= \textstyle\begin{cases} Fv+N_{C}v, &\forall v\in C, \\ \emptyset ,&\forall v\notin C. \end{cases} $$

Then T is maximal monotone, and \(0\in Tv\) if and only if \(v\in VI(C,F)\).

3 Main result

In this section, we propose a strong convergence algorithm for finding a solution of a variational inequality problem. The algorithm we propose is based on the work in Sect. 1.

Let H be a real Hilbert space. Let C be a nonempty closed convex subset of H. Let F be a mapping of H into H.

Algorithm 1

Choose \(x_{0}\), \(x_{1}\in H\) arbitrarily. Calculate the \((n+1)\)th iterate \(x_{n+1}\) via the formula

$$ \textstyle\begin{cases} w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \\ y_{n}=P_{C}(w_{n}-\lambda _{n}Fw_{n}), \\ d(w_{n},y_{n})=(w_{n}-y_{n})-\lambda _{n}(Fw_{n}-Fy_{n}), \\ z_{n}=w_{n}-\gamma \beta _{n}d(w_{n},y_{n}), \\ C_{n}=\{u\in H: \Vert z_{n}-u \Vert ^{2}\leq \Vert x_{n}-u \Vert ^{2}-2\alpha _{n}\langle x_{n}-u,x_{n-1}-x_{n}\rangle \\ \hphantom{C_{n}=} {}+\alpha _{n}^{2} \Vert x_{n-1}-x_{n} \Vert ^{2}\}, \\ Q_{n}=\{u\in H:\langle u-x_{n},x_{1}-x_{n}\rangle \leq 0\}, \\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{1}, \end{cases} $$
(3.1)

for each \(n\geq 1\), where \(\gamma \in (0,2)\), \(\lambda _{n}>0\), and

$$ \beta _{n}=\textstyle\begin{cases} \frac{\varphi (w_{n},y_{n})}{ \Vert d(w_{n},y_{n}) \Vert ^{2}},&\text{if }d(w_{n},y_{n})\neq 0, \\ 0,&\text{if }d(w_{n},y_{n})=0, \end{cases} $$

where

$$ \varphi (w_{n},y_{n})=\bigl\langle w_{n}-y_{n},d(w_{n},y_{n}) \bigr\rangle . $$

If \(y_{n}=w_{n}\) or \(d(w_{n},y_{n})=0\), then calculate \(x_{n+1}\) and stop; otherwise, set \(n:=n+1\) and return to (3.1) to compute the next iterate \(x_{n+2}\).
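Algorithm 1 is implementable once one can project onto \(C_{n}\cap Q_{n}\). Expanding the squares in the definition of \(C_{n}\) (and using \(w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1})\)) shows that \(C_{n}=\{u:\Vert z_{n}-u\Vert \leq \Vert w_{n}-u\Vert \}\), a half-space, and \(Q_{n}\) is visibly a half-space, so \(x_{n+1}\) solves a tiny quadratic program. The following Python (NumPy) sketch is ours, not the authors' Matlab code; it assumes constant \(\lambda _{n}\equiv \lambda \) and \(\alpha _{n}\equiv \alpha \):

```python
import numpy as np

def proj_two_halfspaces(x, a1, b1, a2, b2, eps=1e-12):
    # Projection of x onto {u: <a1,u> <= b1} ∩ {u: <a2,u> <= b2} (assumed
    # nonempty): the projection is among at most four KKT candidates, and we
    # return the feasible candidate closest to x.
    def feasible(u):
        return np.dot(a1, u) <= b1 + 1e-9 and np.dot(a2, u) <= b2 + 1e-9
    if feasible(x):
        return x
    cands = []
    for a, b in ((a1, b1), (a2, b2)):          # one constraint active
        if np.dot(a, a) > eps:
            u = x - (np.dot(a, x) - b) / np.dot(a, a) * a
            if feasible(u):
                cands.append(u)
    G = np.array([[np.dot(a1, a1), np.dot(a1, a2)],   # both constraints active
                  [np.dot(a2, a1), np.dot(a2, a2)]])
    if abs(np.linalg.det(G)) > eps:
        mu = np.linalg.solve(G, np.array([np.dot(a1, x) - b1,
                                          np.dot(a2, x) - b2]))
        u = x - mu[0] * a1 - mu[1] * a2
        if feasible(u):
            cands.append(u)
    return min(cands, key=lambda u: np.linalg.norm(u - x))

def inertial_hybrid_vi(F, proj_C, x0, x1, lam, alpha, gamma=1.0,
                       n_iter=200, tol=1e-8):
    # A sketch of Algorithm 1 (3.1): lam plays the role of a constant
    # lambda_n in (0, 1/L), gamma is in (0, 2), alpha is the inertial weight.
    x_prev = np.asarray(x0, dtype=float)
    x = np.asarray(x1, dtype=float)
    anchor = x.copy()                          # x_1, the point projected in (3.1)
    for _ in range(n_iter):
        w = x + alpha * (x - x_prev)
        y = proj_C(w - lam * F(w))
        d = (w - y) - lam * (F(w) - F(y))
        nd2 = np.dot(d, d)
        if nd2 < 1e-20:                        # y_n = w_n: w_n solves the VI
            return y
        beta = np.dot(w - y, d) / nd2          # beta_n = phi_n / ||d_n||^2
        z = w - gamma * beta * d
        # C_n = {u : ||z_n - u|| <= ||w_n - u||}
        #     = {u : 2<w_n - z_n, u> <= ||w_n||^2 - ||z_n||^2}
        aC, bC = 2.0 * (w - z), np.dot(w, w) - np.dot(z, z)
        # Q_n = {u : <x_1 - x_n, u> <= <x_1 - x_n, x_n>}
        aQ, bQ = anchor - x, np.dot(anchor - x, x)
        x_next = proj_two_halfspaces(anchor, aC, bC, aQ, bQ)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x_prev, x = x, x_next
    return x
```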

Theorem 3.1

Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(F:H\rightarrow H\) be a monotone and L-Lipschitz continuous mapping with \(L>0\). Assume that \(VI(C,F)\neq \emptyset \) and \(0< a\leq \lambda _{n}\leq b<\frac{1}{L}\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 1. If \(y_{n}=w_{n}\) or \(d(w_{n},y_{n})=0\), then \(x_{n+1}\in VI(C,F)\).

Proof

From the expression of \(d(w_{n},y_{n})\) and the condition imposed on F, we have

$$\begin{aligned}& \bigl\Vert d(w_{n},y_{n}) \bigr\Vert \\& \quad = \bigl\Vert (w_{n}-y_{n})-\lambda _{n}(Fw_{n}-Fy_{n}) \bigr\Vert \\& \quad \geq \Vert w_{n}-y_{n} \Vert -\lambda _{n} \Vert Fw_{n}-Fy_{n} \Vert \\& \quad \geq \Vert w_{n}-y_{n} \Vert -\lambda _{n}L \Vert w_{n}-y_{n} \Vert \\& \quad \geq (1-bL) \Vert w_{n}-y_{n} \Vert . \end{aligned}$$

On the other hand,

$$\begin{aligned}& \bigl\Vert d(w_{n},y_{n}) \bigr\Vert \\& \quad = \bigl\Vert (w_{n}-y_{n})-\lambda _{n}(Fw_{n}-Fy_{n}) \bigr\Vert \\& \quad \leq \Vert w_{n}-y_{n} \Vert +\lambda _{n} \Vert Fw_{n}-Fy_{n} \Vert \\& \quad \leq \Vert w_{n}-y_{n} \Vert +\lambda _{n}L \Vert w_{n}-y_{n} \Vert \\& \quad \leq (1+bL) \Vert w_{n}-y_{n} \Vert . \end{aligned}$$

So we have

$$ (1-bL) \Vert w_{n}-y_{n} \Vert \leq \bigl\Vert d(w_{n},y_{n}) \bigr\Vert \leq (1+bL) \Vert w_{n}-y_{n} \Vert . $$
(3.2)

Hence \(y_{n}=w_{n}\) and \(d(w_{n},y_{n})=0\) are equivalent. If \(y_{n}=w_{n}\), then \(w_{n}=P_{C}(w_{n}-\lambda _{n}Fw_{n})\), so Lemma 2.5 yields \(w_{n}\in VI(C,F)\), from which the desired result follows. □

Theorem 3.2

Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(F:H\rightarrow H\) be a monotone and L-Lipschitz continuous mapping with \(L>0\). Assume that \(VI(C,F)\neq \emptyset \) and \(0< a\leq \lambda _{n}\leq b<\frac{1}{L}\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 1. If \(y_{n}\neq w_{n}\) for each \(n\in \mathbb{N}\), then \(\{x_{n}\}\) converges strongly to \(x^{*}=P_{VI(C,F)}x_{1}\).

Proof

We divide the proof into four steps.

Step 1. We show that \(VI(C,F)\subset C_{n}\cap Q_{n}\) for each \(n\in \mathbb{N}\).

It is obvious that \(C_{n}\) and \(Q_{n}\) are half-spaces for each \(n\in \mathbb{N}\). We first estimate \(\varphi (w_{n},y_{n})\):

$$\begin{aligned}& \varphi (w_{n},y_{n}) \\& \quad =\bigl\langle w_{n}-y_{n},d(w_{n},y_{n}) \bigr\rangle \\& \quad =\bigl\langle w_{n}-y_{n},(w_{n}-y_{n})- \lambda _{n}(Fw_{n}-Fy_{n})\bigr\rangle \\& \quad = \Vert w_{n}-y_{n} \Vert ^{2}-\lambda _{n}\langle w_{n}-y_{n},Fw_{n}-Fy_{n} \rangle \\& \quad \geq \Vert w_{n}-y_{n} \Vert ^{2}- \lambda _{n} \Vert w_{n}-y_{n} \Vert \Vert Fw_{n}-Fy_{n} \Vert \\& \quad \geq \Vert w_{n}-y_{n} \Vert ^{2}-bL \Vert w_{n}-y_{n} \Vert ^{2} \\& \quad =(1-bL) \Vert w_{n}-y_{n} \Vert ^{2}. \end{aligned}$$
(3.3)

On the other hand,

$$\begin{aligned}& \bigl\Vert d(w_{n},y_{n}) \bigr\Vert ^{2} \\& \quad = \bigl\Vert (w_{n}-y_{n})-\lambda _{n}(Fw_{n}-Fy_{n}) \bigr\Vert ^{2} \\& \quad = \Vert w_{n}-y_{n} \Vert ^{2}+\lambda _{n}^{2} \Vert Fw_{n}-Fy_{n} \Vert ^{2}-2\lambda _{n}\langle w_{n}-y_{n},Fw_{n}-Fy_{n} \rangle \\& \quad \leq \Vert w_{n}-y_{n} \Vert ^{2}+ \lambda _{n}^{2} \Vert Fw_{n}-Fy_{n} \Vert ^{2} \\& \quad \leq \Vert w_{n}-y_{n} \Vert ^{2}+b^{2}L^{2} \Vert w_{n}-y_{n} \Vert ^{2} \\& \quad =\bigl(1+b^{2}L^{2}\bigr) \Vert w_{n}-y_{n} \Vert ^{2}. \end{aligned}$$
(3.4)

Combining (3.3) and (3.4), we have

$$ \beta _{n}=\frac{\varphi (w_{n},y_{n})}{ \Vert d(w_{n},y_{n}) \Vert ^{2}}\geq \frac{1-bL}{1+b ^{2}L^{2}}. $$
(3.5)

Let \(u\in VI(C,F)\). We have

$$\begin{aligned}& \Vert z_{n}-u \Vert ^{2} \\& \quad = \bigl\Vert w_{n}-\gamma \beta _{n}d(w_{n},y_{n})-u \bigr\Vert ^{2} \\& \quad = \Vert w_{n}-u \Vert ^{2}-2\gamma \beta _{n}\bigl\langle w_{n}-u,d(w_{n},y_{n}) \bigr\rangle +\gamma ^{2}\beta _{n}^{2} \bigl\Vert d(w_{n},y_{n}) \bigr\Vert ^{2} \\& \quad = \Vert w_{n}-u \Vert ^{2}-2\gamma \beta _{n}\bigl\langle w_{n}-y_{n},d(w_{n},y_{n}) \bigr\rangle -2\gamma \beta _{n}\bigl\langle y_{n}-u,d(w_{n},y_{n}) \bigr\rangle \\& \qquad {}+\gamma ^{2}\beta _{n}^{2} \bigl\Vert d(w_{n},y_{n}) \bigr\Vert ^{2}. \end{aligned}$$
(3.6)

By the definition of \(y_{n}\) and Lemma 2.5,

$$ \langle y_{n}-u,w_{n}-y_{n}-\lambda _{n}Fw_{n}\rangle \geq 0. $$

So we have

$$\begin{aligned}& \bigl\langle y_{n}-u,d(w_{n},y_{n})\bigr\rangle \\& \quad =\bigl\langle y_{n}-u,w_{n}-y_{n}-\lambda _{n}(Fw_{n}-Fy_{n})\bigr\rangle \\& \quad =\langle y_{n}-u,w_{n}-y_{n}-\lambda _{n}Fw_{n}\rangle +\lambda _{n} \langle y_{n}-u,Fy_{n}-Fu\rangle +\lambda _{n}\langle y_{n}-u,Fu\rangle \\& \quad \geq 0. \end{aligned}$$
(3.7)

Combining (3.6) and (3.7), we get

$$\begin{aligned}& \Vert z_{n}-u \Vert ^{2} \\& \quad \leq \Vert w_{n}-u \Vert ^{2}-2\gamma \beta _{n}\bigl\langle w_{n}-y_{n},d(w_{n},y _{n})\bigr\rangle +\gamma ^{2}\beta _{n}^{2} \bigl\Vert d(w_{n},y_{n}) \bigr\Vert ^{2} \\& \quad = \Vert w_{n}-u \Vert ^{2}-2\gamma \beta _{n}\varphi (w_{n},y_{n})+\gamma ^{2} \beta _{n}^{2} \bigl\Vert d(w_{n},y_{n}) \bigr\Vert ^{2} \\& \quad = \Vert w_{n}-u \Vert ^{2}-2\gamma \beta _{n}^{2} \bigl\Vert d(w_{n},y_{n}) \bigr\Vert ^{2}+\gamma ^{2}\beta _{n}^{2} \bigl\Vert d(w_{n},y_{n}) \bigr\Vert ^{2} \\& \quad = \Vert w_{n}-u \Vert ^{2}-\gamma (2-\gamma )\beta _{n}^{2} \bigl\Vert d(w_{n},y_{n}) \bigr\Vert ^{2} \\& \quad = \Vert w_{n}-u \Vert ^{2}-\frac{2-\gamma }{\gamma } \Vert z_{n}-w_{n} \Vert ^{2} \\& \quad \leq \Vert w_{n}-u \Vert ^{2}. \end{aligned}$$
(3.8)

By the expression of \(w_{n}\), we have

$$\begin{aligned}& \Vert w_{n}-u \Vert ^{2} \\& \quad = \bigl\Vert (x_{n}-u)-\alpha _{n}(x_{n-1}-x_{n}) \bigr\Vert ^{2} \\& \quad = \Vert x_{n}-u \Vert ^{2}-2\alpha _{n}\langle x_{n}-u,x_{n-1}-x_{n} \rangle + \alpha _{n}^{2} \Vert x_{n-1}-x_{n} \Vert ^{2}. \end{aligned}$$
(3.9)

It follows from (3.8) and (3.9) that

$$\begin{aligned}& \Vert z_{n}-u \Vert ^{2} \\& \quad \leq \Vert w_{n}-u \Vert ^{2} \\& \quad = \Vert x_{n}-u \Vert ^{2}-2\alpha _{n}\langle x_{n}-u,x_{n-1}-x_{n} \rangle + \alpha _{n}^{2} \Vert x_{n-1}-x_{n} \Vert ^{2}. \end{aligned}$$
(3.10)

Therefore, \(u\in C_{n}\) for each \(n\in \mathbb{N}\). Hence, \(VI(C,F) \subset C_{n}\) for each \(n\in \mathbb{N}\).

For \(n=1\), we have \(Q_{1}=H\) and hence \(VI(C,F)\subset C_{1}\cap Q _{1}\).

Suppose that \(x_{k}\) is given and \(VI(C,F)\subset C_{k}\cap Q_{k}\) for some \(k\in \mathbb{N}\). It follows from \(x_{k+1}=P_{C_{k}\cap Q_{k}}x_{1}\) and Lemma 2.5 that

$$ \langle y-x_{k+1},x_{1}-x_{k+1}\rangle \leq 0,\quad \forall y\in VI(C,F). $$

It means that \(VI(C,F)\subset Q_{k+1}\). Hence, \(VI(C,F)\subset C_{k+1} \cap Q_{k+1}\).

By induction, we obtain \(VI(C,F)\subset C_{n}\cap Q_{n}\) for each \(n\in \mathbb{N}\).

Step 2. We show that \(\{x_{n}\}\) is bounded.

From

$$ \langle y-x_{n},x_{1}-x_{n}\rangle \leq 0, \quad \forall y\in Q_{n} $$

and Lemma 2.5, we have

$$ x_{n}=P_{Q_{n}}x_{1} $$

and hence

$$ \Vert x_{n}-x_{1} \Vert \leq \Vert x_{1}-y \Vert ,\quad \forall y\in Q_{n}. $$

Since \(VI(C,F)\subset Q_{n}\), we have

$$ \Vert x_{n}-x_{1} \Vert \leq \Vert x_{1}-y \Vert ,\quad \forall y\in VI(C,F). $$
(3.11)

In particular, since \(x_{n+1}\in Q_{n}\), we obtain

$$ \Vert x_{n}-x_{1} \Vert \leq \Vert x_{n+1}-x_{1} \Vert . $$
(3.12)

By (3.11), the sequence \(\{\|x_{n}-x_{1}\|\}\) is bounded, and by (3.12) it is nondecreasing. Therefore there exists

$$ c=\lim_{n\rightarrow \infty } \Vert x_{n}-x_{1} \Vert . $$
(3.13)

In particular, \(\{x_{n}\}\) is bounded.

Step 3. We show that \(\omega _{w}(x_{n})\subset VI(C,F)\).

Since \(x_{n}=P_{Q_{n}}x_{1}\) and \(x_{n+1}\in Q_{n}\), Lemma 2.6 gives

$$ \Vert x_{n+1}-x_{n} \Vert ^{2}\leq \Vert x_{n+1}-x_{1} \Vert ^{2}- \Vert x_{n}-x_{1} \Vert ^{2}, $$

and hence

$$ x_{n+1}-x_{n}\rightarrow 0,\quad n\rightarrow \infty . $$
(3.14)

From

$$\begin{aligned} \Vert w_{n}-x_{n} \Vert = &\bigl\Vert x_{n}-\alpha _{n}(x_{n}-x_{n-1})-x_{n} \bigr\Vert \\ =&\alpha _{n} \Vert x_{n}-x_{n-1} \Vert \end{aligned}$$

and (3.14), together with the boundedness of \(\{\alpha _{n}\}\), we have

$$ w_{n}-x_{n}\rightarrow 0,\quad n\rightarrow \infty . $$
(3.15)

Since \(x_{n+1}\in C_{n}\), we have

$$\begin{aligned} \Vert z_{n}-x_{n+1} \Vert ^{2} \leq& \Vert x_{n}-x_{n+1} \Vert ^{2}-2 \alpha _{n}\langle x_{n}-x_{n+1},x_{n-1}-x _{n}\rangle +\alpha _{n}^{2} \Vert x_{n-1}-x_{n} \Vert ^{2} \\ \leq& \Vert x_{n}-x_{n+1} \Vert ^{2}+2 \alpha _{n} \Vert x_{n}-x_{n+1} \Vert \Vert x_{n-1}-x _{n} \Vert +\alpha _{n}^{2} \Vert x_{n-1}-x_{n} \Vert ^{2} \end{aligned}$$

and hence, by (3.14),

$$ z_{n}-x_{n+1}\rightarrow 0,\quad n\rightarrow \infty . $$
(3.16)

Combining (3.14), (3.15), and (3.16), we obtain

$$ z_{n}-w_{n}\rightarrow 0,\quad n\rightarrow \infty . $$
(3.17)

From (3.1), (3.2), (3.5), and (3.17), we have

$$ w_{n}-y_{n}\rightarrow 0,\quad n\rightarrow \infty . $$
(3.18)

Since \(\{x_{n}\}\) is bounded, we can take a subsequence \(\{x_{n_{i}}\}\) such that \(x_{n_{i}}\rightharpoonup z\). By (3.15) and (3.18), we also have \(w_{n_{i}}\rightharpoonup z\) and \(y_{n_{i}}\rightharpoonup z\). Let

$$ Tv=\textstyle\begin{cases} Fv+N_{C}v,&v\in C, \\ \emptyset ,&v\notin C. \end{cases} $$

Then from Lemma 2.10, we know that T is maximal monotone and \(0\in Tv\) if and only if \(v\in VI(C,F)\). For each \((v,w)\in G(T)\), we have

$$ w\in Tv=Fv+N_{C}v $$

and hence

$$ w-Fv\in N_{C}v. $$

By the definition of \(N_{C}\), we obtain

$$ \langle v-p,w-Fv\rangle \geq 0,\quad \forall p\in C. $$
(3.19)

On the other hand, from \(v\in C\) and the expression of \(y_{n}\), we have

$$ \langle w_{n}-\lambda _{n}Fw_{n}-y_{n},y_{n}-v \rangle \geq 0 $$

and hence

$$ \biggl\langle v-y_{n},\frac{y_{n}-w_{n}}{\lambda _{n}}+Fw_{n}\biggr\rangle \geq 0. $$
(3.20)

Therefore, from (3.19) and (3.20), we obtain

$$\begin{aligned}& \langle v-y_{n_{i}},w\rangle \\& \quad \geq \langle v-y_{n_{i}},Fv\rangle \\& \quad \geq \langle v-y_{n_{i}},Fv\rangle -\biggl\langle v-y_{n_{i}},\frac{y_{n_{i}}-w_{n_{i}}}{\lambda _{n_{i}}}+Fw_{n_{i}}\biggr\rangle \\& \quad =\langle v-y_{n_{i}},Fv-Fw_{n_{i}}\rangle -\biggl\langle v-y_{n_{i}},\frac{y_{n_{i}}-w_{n_{i}}}{\lambda _{n_{i}}}\biggr\rangle \\& \quad = \langle v-y_{n_{i}},Fv-Fy_{n_{i}}\rangle +\langle v-y_{n_{i}},Fy_{n_{i}}-Fw_{n_{i}}\rangle -\biggl\langle v-y_{n_{i}},\frac{y_{n_{i}}-w_{n_{i}}}{\lambda _{n_{i}}}\biggr\rangle \\& \quad \geq \langle v-y_{n_{i}},Fy_{n_{i}}-Fw_{n_{i}}\rangle -\biggl\langle v-y_{n_{i}},\frac{y_{n_{i}}-w_{n_{i}}}{\lambda _{n_{i}}}\biggr\rangle . \end{aligned}$$
(3.21)

Letting \(i\rightarrow \infty \) and using (3.18), the Lipschitz continuity of F, and \(\lambda _{n_{i}}\geq a\), we have

$$ \langle v-z,w\rangle \geq 0. $$
(3.22)

Since T is maximal monotone, we have \(0\in Tz\) and hence \(z\in VI(C,F)\). So we obtain \(\omega _{w}(x_{n})\subset VI(C,F)\).

Step 4. We show that \(x_{n}\rightarrow x^{*}\) as \(n\rightarrow \infty \).

Since the norm is convex and weakly lower semicontinuous, and \(z\in VI(C,F)\) by Step 3, it follows from (3.11) that

$$ \bigl\Vert x_{1}-x^{*} \bigr\Vert \leq \Vert x_{1}-z \Vert \leq \liminf_{i\rightarrow \infty } \Vert x _{n_{i}}-x_{1} \Vert \leq \limsup_{i\rightarrow \infty } \Vert x_{n_{i}}-x_{1} \Vert \leq \bigl\Vert x_{1}-x^{*} \bigr\Vert . $$
(3.23)

So we have

$$ \lim_{i\rightarrow \infty } \Vert x_{n_{i}}-x_{1} \Vert = \Vert x_{1}-z \Vert = \bigl\Vert x_{1}-x ^{*} \bigr\Vert . $$
(3.24)

From \(x^{*}=P_{VI(C,F)}x_{1}\) and the uniqueness of the metric projection (Proposition 2.3), we obtain \(z=x^{*}\), i.e., \(\omega _{w}(x_{n})=\{x^{*}\}\). So we have

$$ \lim_{n\rightarrow \infty } \Vert x_{n}-x_{1} \Vert = \bigl\Vert x_{1}-x^{*} \bigr\Vert $$
(3.25)

and

$$ x_{n}\rightharpoonup x^{*},\quad n\rightarrow \infty . $$
(3.26)

Hence \(x_{n}-x_{1}\rightharpoonup x^{*}-x_{1}\). Since H satisfies the Kadec–Klee property and, by (3.25), \(\Vert x_{n}-x_{1} \Vert \rightarrow \Vert x^{*}-x_{1} \Vert \), we obtain \(x_{n}-x_{1}\rightarrow x^{*}-x_{1}\), i.e., \(x_{n}\rightarrow x^{*}\). □

Remark 3.3

If we set \(\alpha _{n}=0\) for each \(n\in \mathbb{N}\), we can get the following algorithm:

$$ \textstyle\begin{cases} y_{n}=P_{C}(x_{n}-\lambda _{n}Fx_{n}), \\ d(x_{n},y_{n})=(x_{n}-y_{n})-\lambda _{n}(Fx_{n}-Fy_{n}), \\ z_{n}=x_{n}-\gamma \beta _{n}d(x_{n},y_{n}), \\ C_{n}=\{u\in H: \Vert z_{n}-u \Vert ^{2}\leq \Vert x_{n}-u \Vert ^{2}\}, \\ Q_{n}=\{u\in H:\langle u-x_{n},x_{1}-x_{n}\rangle \leq 0\}, \\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{1}. \end{cases} $$

4 Applications

In this section, we introduce some applications which are useful in nonlinear analysis and optimization problems in Hilbert spaces.

4.1 Constrained convex minimization problem

Let C be a nonempty closed convex subset of a real Hilbert space H. The constrained convex minimization problem [14] is to find a point \(x^{*}\in C\) such that

$$ f\bigl(x^{*}\bigr)=\min_{x\in C}f(x), $$
(4.1)

where f is a real-valued convex function. We denote the solution set of problem (4.1) by Ω.

We need the following lemma.

Lemma 4.1

([11, 20])

Let H be a real Hilbert space, and let C be a nonempty closed convex subset of H. Let f be a convex function of H into \(\mathbb{R}\). If f is differentiable, then \(z\in \varOmega \) if and only if \(z\in VI(C,\nabla f)\).

Let H be a real Hilbert space. Let C be a nonempty closed convex subset of H. Let f be a real-valued convex function of H. Assume that f is differentiable.

Algorithm 2

Choose \(x_{0}\), \(x_{1}\in H\) arbitrarily. Calculate the \((n+1)\)th iterate \(x_{n+1}\) via the formula

$$ \textstyle\begin{cases} w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \\ y_{n}=P_{C}(w_{n}-\lambda _{n}\nabla f(w_{n})), \\ d(w_{n},y_{n})=(w_{n}-y_{n})-\lambda _{n}(\nabla f(w_{n})-\nabla f(y _{n})), \\ z_{n}=w_{n}-\gamma \beta _{n}d(w_{n},y_{n}), \\ C_{n}=\{u\in H: \Vert z_{n}-u \Vert ^{2}\leq \Vert x_{n}-u \Vert ^{2}-2\alpha _{n}\langle x_{n}-u,x_{n-1}-x_{n}\rangle \\ \hphantom{C_{n}=} {}+\alpha _{n}^{2} \Vert x_{n-1}-x_{n} \Vert ^{2}\}, \\ Q_{n}=\{u\in H:\langle u-x_{n},x_{1}-x_{n}\rangle \leq 0\}, \\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{1}, \end{cases} $$
(4.2)

for each \(n\geq 1\), where \(\gamma \in (0,2)\), \(\lambda _{n}>0\) and

$$ \beta _{n}=\textstyle\begin{cases} \frac{\varphi (w_{n},y_{n})}{ \Vert d(w_{n},y_{n}) \Vert ^{2}},&\text{if }d(w_{n},y_{n})\neq 0, \\ 0,&\text{if }d(w_{n},y_{n})=0, \end{cases} $$

where

$$ \varphi (w_{n},y_{n})=\bigl\langle w_{n}-y_{n},d(w_{n},y_{n}) \bigr\rangle . $$

If \(y_{n}=w_{n}\) or \(d(w_{n},y_{n})=0\), then calculate \(x_{n+1}\) and stop; otherwise, set \(n:=n+1\) and return to (4.2) to compute the next iterate \(x_{n+2}\).

Theorem 4.2

Let C be a nonempty closed convex subset of a real Hilbert space H. Let f be a real-valued convex function on H. Assume that f is differentiable and that \(\nabla f\) is L-Lipschitz continuous with \(L>0\). Assume that \(\varOmega \neq \emptyset \) and \(0< a\leq \lambda _{n}\leq b< \frac{1}{L}\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 2. If \(y_{n}=w_{n}\) or \(d(w_{n},y_{n})=0\), then \(x_{n+1}\in \varOmega \).

Proof

Since f is convex, \(\nabla f\) is monotone. Putting \(F=\nabla f\) in Theorem 3.1, we get the desired result by Lemma 4.1. □

Theorem 4.3

Let C be a nonempty closed convex subset of a real Hilbert space H. Let f be a real-valued convex function on H. Assume that f is differentiable and that \(\nabla f\) is L-Lipschitz continuous with \(L>0\). Assume that \(\varOmega \neq \emptyset \) and \(0< a\leq \lambda _{n}\leq b<\frac{1}{L}\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 2. If \(y_{n}\neq w_{n}\) for each \(n\in \mathbb{N}\), then \(\{x_{n}\}\) converges strongly to \(x^{*}=P_{\varOmega }x_{1}\).

Proof

Since f is convex, \(\nabla f\) is monotone. Putting \(F=\nabla f\) in Theorem 3.2, we get the desired result by Lemma 4.1. □

4.2 Split feasibility problem

Next, we consider the split feasibility problem.

The split feasibility problem (SFP) was proposed by Censor and Elfving [21] in 1994. The SFP is to find a point \(x^{*}\) such that

$$ x^{*}\in C \quad \text{and} \quad Ax^{*}\in Q, $$
(4.3)

where C and Q are nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and A is a bounded linear operator of \(H_{1}\) into \(H_{2}\) with \(A\neq 0\).

In 2004, Byrne [22] proposed the following algorithm for solving (4.3):

$$ x_{n+1}=P_{C}\bigl(x_{n}-\gamma _{n}A^{*}(I-P_{Q})Ax_{n}\bigr). $$
(4.4)
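A hedged NumPy sketch of iteration (4.4) in the finite-dimensional case, where A is a matrix, \(A^{*}=A^{T}\), and a constant step \(\gamma _{n}\equiv \gamma \) is assumed:

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, n_iter=1000, tol=1e-8):
    # Byrne's CQ iteration (4.4); gamma is typically taken in (0, 2/||A||^2)
    # since A^T (I - P_Q) A is 1/||A||^2-inverse strongly monotone.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))   # A^*(I - P_Q)A x
        x_new = proj_C(x - gamma * grad)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```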

Here, we introduce a new algorithm to solve (4.3). We need the following lemmas.

Lemma 4.4

([20])

Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let C and Q be nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Let A be a bounded linear operator of \(H_{1}\) into \(H_{2}\) with \(A\neq 0\). Assume that \(C\cap A^{-1}Q\) is nonempty. Then \(z\in C\cap A^{-1}Q\) if and only if \(z\in VI(C,A^{*}(I-P_{Q})A)\), where \(A^{*}\) is the adjoint operator of A.

Lemma 4.5

([20])

Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let A be a bounded linear operator of \(H_{1}\) into \(H_{2}\) such that \(A\neq 0\). Let Q be a nonempty closed convex subset of \(H_{2}\). Then \(A^{*}(I-P_{Q})A\) is monotone and \(\|A\|^{2}\)-Lipschitz continuous.

We propose the following algorithm for solving SFP (4.3).

Algorithm 3

Choose \(x_{0}\), \(x_{1}\in H_{1}\) arbitrarily. Calculate the \((n+1)\)th iterate \(x_{n+1}\) via the formula

$$ \textstyle\begin{cases} w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \\ y_{n}=P_{C}(w_{n}-\lambda _{n}A^{*}(I-P_{Q})Aw_{n}), \\ d(w_{n},y_{n})=(w_{n}-y_{n})-\lambda _{n}(A^{*}(I-P_{Q})Aw_{n}-A^{*}(I-P _{Q})Ay_{n}), \\ z_{n}=w_{n}-\gamma \beta _{n}d(w_{n},y_{n}), \\ C_{n}=\{u\in H: \Vert z_{n}-u \Vert ^{2}\leq \Vert x_{n}-u \Vert ^{2}-2\alpha _{n}\langle x_{n}-u,x_{n-1}-x_{n}\rangle \\ \hphantom{C_{n}=} {}+\alpha _{n}^{2} \Vert x_{n-1}-x_{n} \Vert ^{2}\}, \\ Q_{n}=\{u\in H:\langle u-x_{n},x_{1}-x_{n}\rangle \leq 0\}, \\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{1}, \end{cases} $$
(4.5)

for each \(n\geq 1\), where \(\gamma \in (0,2)\), \(\lambda _{n}>0\), and

$$ \beta _{n}=\textstyle\begin{cases} \frac{\varphi (w_{n},y_{n})}{ \Vert d(w_{n},y_{n}) \Vert ^{2}},&\text{if }d(w_{n},y_{n})\neq 0, \\ 0,&\text{if }d(w_{n},y_{n})=0, \end{cases} $$

where

$$ \varphi (w_{n},y_{n})=\bigl\langle w_{n}-y_{n},d(w_{n},y_{n}) \bigr\rangle . $$

If \(y_{n}=w_{n}\) or \(d(w_{n},y_{n})=0\), then calculate \(x_{n+1}\) and stop; otherwise, set \(n:=n+1\) and return to (4.5) to compute the next iterate \(x_{n+2}\).

Theorem 4.6

Let C and Q be nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Let A be a bounded linear operator of \(H_{1}\) into \(H_{2}\) with \(A\neq 0\). Set \(\varGamma =C\cap A^{-1}Q\). Assume that \(\varGamma \neq \emptyset \) and \(0< a\leq \lambda _{n}\leq b<\frac{1}{\|A\|^{2}}\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3. If \(y_{n}=w_{n}\) or \(d(w_{n},y_{n})=0\), then \(x_{n+1}\in \varGamma \).

Proof

Putting \(F=A^{*}(I-P_{Q})A\) in Theorem 3.1, we get the desired result by Lemmas 4.4 and 4.5. □

Theorem 4.7

Let C and Q be nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Let A be a bounded linear operator of \(H_{1}\) into \(H_{2}\) with \(A\neq 0\). Set \(\varGamma =C\cap A^{-1}Q\). Assume that \(\varGamma \neq \emptyset \) and \(0< a\leq \lambda _{n}\leq b<\frac{1}{\|A\|^{2}}\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3. If \(y_{n}\neq w_{n}\) for each \(n\in \mathbb{N}\), then \(\{x_{n}\}\) converges strongly to \(x^{*}=P_{\varGamma }x_{1}\).

Proof

Putting \(F=A^{*}(I-P_{Q})A\) in Theorem 3.2, we get the desired result by Lemmas 4.4 and 4.5. □

5 Numerical experiments

In this section, we give some numerical results to illustrate the effectiveness of our iterative scheme in Sect. 3 and compare it with the extragradient method [5] and iterative scheme (1.2). All the programs are written in Matlab 7.10 and run on a desktop PC with an Intel® Core™ i5-2450M CPU @ 2.50 GHz and 4.00 GB RAM. All the projections onto C and \(C_{n}\cap Q_{n}\) are computed by the function quadprog in the Matlab 7.10 Optimization Toolbox.

Example 1

Let \(H=\mathbb{R}\) and \(C=[-2,5]\). Let F be a function given by

$$ Fx:=x+\sin x $$

for each \(x\in \mathbb{R}\). For all \(x,y\in H\), we have

$$\begin{aligned}& \Vert Fx-Fy \Vert = \Vert x+\sin x-y-\sin y \Vert \leq \Vert x-y \Vert + \Vert \sin x-\sin y \Vert \leq 2 \Vert x-y \Vert , \\& \langle Fx-Fy,x-y\rangle =(x+\sin x-y-\sin y) (x-y)=(x-y)^{2}+(\sin x- \sin y) (x-y)\geq 0. \end{aligned}$$

Therefore, F is monotone and 2-Lipschitz continuous.

Choose \(x_{0}=2\), \(\lambda _{n}=\lambda \), \(\alpha _{n}=2\), and \(\gamma =1\) for our iterative scheme (3.1). It is easy to see that \(VI(C,F)=\{0\}\). We denote \(x^{*}=0\) and use \(\|x_{n}-x^{*}\|\leq 10^{-5}\) as the stopping criterion. The numerical results for this example are described in Table 1.

Table 1 Numerical results as regards Example 1
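For readers without Matlab, the setting of Example 1 can be reproduced with the `inertial_hybrid_vi` sketch from Sect. 3; the values of \(x_{1}\), λ, and the inertial weight below are our illustrative choices, not the paper's (the text fixes \(x_{0}=2\) and \(\gamma =1\) and does not report \(x_{1}\)):

```python
import numpy as np

# Example 1 data: F(x) = x + sin(x) on C = [-2, 5]; unique solution x* = 0.
F = lambda x: x + np.sin(x)                 # monotone and 2-Lipschitz
proj_C = lambda x: np.clip(x, -2.0, 5.0)    # P_C for C = [-2, 5]

# lam must satisfy lam < 1/L = 0.5; x1 and alpha are illustrative choices.
x = inertial_hybrid_vi(F, proj_C, x0=np.array([2.0]), x1=np.array([1.0]),
                       lam=0.4, alpha=0.5, gamma=1.0, tol=1e-5)
print(x)                                    # should be close to x* = 0
```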

Example 2

Let \(H=\mathbb{R}^{m}\). We consider a classical problem [23, 24]. The feasible set is \(C=\mathbb{R}^{m}\) and \(F:\mathbb{R}^{m}\rightarrow \mathbb{R}^{m}\) is a linear operator in the form

$$ F(x):=Ax $$

for each \(x\in \mathbb{R}^{m}\), where \(A=(a_{i,j})_{1\leq i,j\leq m}\) is a matrix in \(\mathbb{R}^{m\times m}\) whose terms are given by

$$ a_{i,j}= \textstyle\begin{cases} -1, &\text{if } j=m+1-i \text{ and }j>i, \\ 1,&\text{if } j=m+1-i \text{ and } j< i, \\ 0,&\text{otherwise}. \end{cases} $$

Since A is skew-symmetric (so that \(\langle Ax,x\rangle =0\)), F is monotone, and it is \(\|A\|\)-Lipschitz continuous. This is a classical example of a problem on which the usual gradient method fails to converge. We can easily see that \(VI(C,F)=F^{-1}(0)\) and that the zero vector is the unique element of \(VI(C,F)\). We denote \(x^{*}=(0,0,\ldots ,0)^{T}\).
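The matrix A can be generated directly from the definition of \(a_{i,j}\); a quick check confirms that it is skew-symmetric, which is why \(F(x)=Ax\) is monotone:

```python
import numpy as np

def example2_matrix(m):
    # a_{i,j} = -1 if j = m+1-i and j > i; 1 if j = m+1-i and j < i; else 0
    # (1-based indices as in the text): nonzeros sit on the anti-diagonal.
    A = np.zeros((m, m))
    for i in range(1, m + 1):
        j = m + 1 - i
        if j > i:
            A[i - 1, j - 1] = -1.0
        elif j < i:
            A[i - 1, j - 1] = 1.0
    return A

A = example2_matrix(10)
assert np.allclose(A, -A.T)   # skew-symmetric, so <Ax, x> = 0: F is monotone
```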

Choose \(x_{1}=(1,1,\ldots ,1)^{T}\) and \(\lambda _{n}=\lambda =0.2/\|A\|\) in each iterative scheme. Take \(x_{0}=(2,2,\ldots ,2)^{T}\), \(\alpha _{n}=2\), and \(\gamma =1\) in our iterative scheme (3.1). The numerical results for the cases \(m=10,20,30,40\) are shown in Figs. 1–4, respectively.

Figures 1–4 Numerical results as regards Example 2 for \(m=10,20,30,40\), respectively

6 Conclusion

For variational inequality problems, algorithms (1.2) and (1.3) have been studied, but the conditions they impose on the operator are rather strong. To relax them, He [7] proposed the projection and contraction algorithm. In 2017, Dong et al. [8] proposed the inertial projection and contraction algorithm, which originates from second-order dynamical systems. Recently, Dong et al. [9] obtained a strong convergence result for zero point problems by means of the hybrid method. Motivated by their work, we have proposed an inertial hybrid algorithm for solving variational inequality problems in Hilbert spaces and obtained strong convergence theorems.

References

  1. Fichera, G.: Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, VIII. Ser., Rend., Cl. Sci. Fis. Mat. Nat. 34, 138–142 (1963)

  2. Fichera, G.: Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, Mem., Cl. Sci. Fis. Mat. Nat., Sez. 7, 91–140 (1964)

  3. Stampacchia, G.: Formes bilineaires coercitives sur les ensembles convexes. C. R. Math. Acad. Sci. Paris 258, 4413–4416 (1964)

  4. Xu, H.K.: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360–378 (2011)

  5. Korpelevich, G.M.: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12, 747–756 (1976)

  6. Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 61(9), 1119–1132 (2012)

  7. He, B.S.: A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 35, 69–76 (1997)

  8. Dong, Q.L., Cho, Y.J., Zhong, L.L., Rassias, T.M.: Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 70, 687–704 (2018)

  9. Dong, Q.L., Jiang, D., Cholamjiak, P., Shehu, Y.: A strong convergence result involving an inertial forward-backward algorithm for monotone inclusions. J. Fixed Point Theory Appl. 19(4), 3097–3118 (2017)

  10. Xu, H.K.: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)

  11. Tian, M., Jiang, B.N.: Weak convergence theorem for a class of split variational inequality problems and applications in a Hilbert space. J. Inequal. Appl. 2017, 123 (2017)

  12. Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York (1984)

  13. Baillon, J.-B., Bruck, R.E., Reich, S.: On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houst. J. Math. 4, 1–9 (1978)

  14. Ceng, L.C., Ansari, Q.H., Yao, J.C.: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286–5302 (2011)

  15. Takahashi, W., Toyoda, M.: Weak convergence theorem for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118, 417–428 (2003)

  16. Kopecká, E., Reich, S.: A note on alternating projections in Hilbert space. J. Fixed Point Theory Appl. 12, 41–47 (2012)

  17. Takahashi, W., Xu, H.K., Yao, J.C.: Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 23(2), 205–221 (2015)

  18. Tian, M., Jiao, S.W., Liou, Y.C.: Methods for solving constrained convex minimization problems and finding zeros of the sum of two operators in Hilbert spaces. J. Inequal. Appl. 2015, 227 (2015)

  19. Takahashi, W., Nadezhkina, N.: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191–201 (2006)

  20. Tian, M., Jiang, B.N.: Weak convergence theorem for variational inequality problems with monotone mapping in Hilbert space. J. Inequal. Appl. 2016, 286 (2016)

  21. Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)

  22. Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)

  23. Malitsky, Yu.V.: Projected reflected gradient methods for variational inequalities. SIAM J. Optim. 25(1), 502–520 (2015)

  24. Yang, J., Liu, H.: A modified projected gradient method for monotone variational inequalities. J. Optim. Theory Appl. 179, 197–211 (2018)

Availability of data and materials

Not applicable.

Funding

This work was supported by Tianjin Key Lab for Advanced Signal Processing, Civil Aviation University of China [grant number 2019 ASP-TJ02].

Author information

Contributions

All the authors read and approved the final manuscript.

Corresponding author

Correspondence to Ming Tian.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Tian, M., Jiang, BN. Inertial hybrid algorithm for variational inequality problems in Hilbert spaces. J Inequal Appl 2020, 12 (2020). https://doi.org/10.1186/s13660-020-2286-1
