# Efficient implementation of a modified and relaxed hybrid steepest-descent method for a type of variational inequality

## Abstract

To reduce the difficulty and complexity in computing the projection from a real Hilbert space onto a nonempty closed convex subset, researchers have provided a hybrid steepest-descent method for solving VI(F, K) and a subsequent three-step relaxed version of this method. In a previous study, the latter was used to develop a modified and relaxed hybrid steepest-descent (MRHSD) method. However, choosing an efficient and implementable nonexpansive mapping is still a difficult problem. We first establish the strong convergence of the MRHSD method for variational inequalities under different conditions that simplify the proof, which differs from previous studies. Second, we design an efficient implementation of the MRHSD method for a type of variational inequality problem based on the approximate projection contraction method. Finally, we design a set of practical numerical experiments. The results demonstrate that this is an efficient implementation of the MRHSD method.

## 1 Introduction

Let H be a real Hilbert space with inner product ⟨·, ·⟩ and norm ∥ · ∥, let K be a nonempty closed convex subset of H, and let F: H → H be an operator. Then the variational inequality problem VI(F, K) involves finding x* ∈ K such that

${x}^{*}\in K,\phantom{\rule{2.77695pt}{0ex}}⟨F\left({x}^{*}\right),x-{x}^{*}⟩\ge 0,\phantom{\rule{2.77695pt}{0ex}}\forall x\in K.$
(1)

Variational inequality problems were introduced by Hartman and Stampacchia and subsequently expanded in several classic articles [1, 2]. Variational inequality theory provides a method for unifying the treatment of equilibrium problems encountered in areas as diverse as economics, optimal control, game theory, transportation science, and mechanics. Variational inequality problems have many applications, such as in mathematical optimization problems, complementarity problems, and fixed point problems. Thus, it is important to solve variational inequality problems, and much research has been devoted to this topic.

It is known that

${x}^{*}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{is}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{the}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{solution}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{of}}\phantom{\rule{2.77695pt}{0ex}}VI\left(F,K\right)⇔{x}^{*}={P}_{K}\left[{x}^{*}-\beta F\left({x}^{*}\right)\right],\phantom{\rule{2.77695pt}{0ex}}\beta >0,$

where $P_K$ is the projection from H onto K, i.e.,

${P}_{K}\left(x\right)={\mathsf{\text{argmin}}}_{y\in K}∥x-y∥,\phantom{\rule{2.77695pt}{0ex}}\forall x\in H.$

Thus, we can solve a variational inequality problem as a fixed-point problem under appropriate conditions. For example, if F is a strongly monotone and Lipschitzian mapping on K and β > 0 is small enough, then the mapping $x\mapsto {P}_{K}\left[x-\beta F\left(x\right)\right]$ is a contraction. Hence, Banach's fixed point theorem guarantees convergence of the Picard iterates generated by ${P}_{K}\left[x-\beta F\left(x\right)\right]$. Such a method is called a projection method.
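As an illustration of the projection method, the sketch below (our own Python example, not part of the original article) takes K to be a box, whose projection is componentwise clipping, and F(x) = x − c, so that the unique solution of VI(F, K) is simply P_K(c):

```python
import numpy as np

def project_box(x, lo, hi):
    # Projection onto the box K = {x : lo <= x <= hi} (componentwise clipping).
    return np.clip(x, lo, hi)

def projection_method(F, proj, x0, beta=0.1, tol=1e-10, max_iter=10_000):
    # Picard iteration x <- P_K[x - beta*F(x)]; converges when F is strongly
    # monotone and Lipschitz and beta is small enough (Banach's theorem).
    x = x0
    for _ in range(max_iter):
        x_new = proj(x - beta * F(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative VI: F(x) = x - c with c partly outside the box, so the
# solution of VI(F, K) is the projection of c onto K.
c = np.array([2.0, -1.0, 0.5])
lo, hi = np.zeros(3), np.ones(3)
x_star = projection_method(lambda x: x - c,
                           lambda x: project_box(x, lo, hi),
                           x0=np.zeros(3))
print(x_star)  # close to [1.0, 0.0, 0.5]
```

Here η = κ = 1, so any β ∈ (0, 2) yields a contraction and the iterates converge linearly.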

To reduce the complexity of computing the projection $P_K$, Yamada and Deutsch developed a hybrid steepest-descent method for solving VI(F, K) [7, 8], but choosing an efficient and implementable nonexpansive mapping is still a difficult problem. Subsequently, Xu and Kim [9] and Zeng et al. [10] proved the convergence of the hybrid steepest-descent method. Noor introduced iterations after analyzing several three-step iterative methods. Ding et al. [11] provided a three-step relaxed hybrid steepest-descent method for variational inequalities, and Yao et al. provided a simple proof of the method under different conditions. Our group has described a modified and relaxed hybrid steepest-descent (MRHSD) method that makes greater use of historical information and minimizes information loss.

This article makes three new contributions compared with previous results. First, we prove strong convergence of the MRHSD method under different and suitable restrictions imposed on the parameters (Condition 3.2); the proof of strong convergence differs from previous proofs. Second, based on the approximate projection contraction method, we design an efficient implementation of the MRHSD method for a type of variational inequality problem. Third, we design practical numerical experiments whose results verify that the implementation is efficient. Furthermore, the MRHSD method under Condition 3.2 is more efficient than under Condition 3.1.

The remainder of the article is organized as follows. In Section 2, we review several lemmas and preliminaries. In Section 3, we prove the convergence theorem. We discuss an implementation of the MRHSD method for a type of variational inequality problem in Section 4. Section 5 presents numerical experiments and results applicable to finance and statistics. Section 6 concludes.

## 2 Preliminaries

To prove the convergence theorem, we first introduce several lemmas and the main results reported by others [10, 11, 21].

Lemma 1 Let {x n } and {y n } be bounded sequences in a Banach space X and let {ζ n } be a sequence in [0, 1] with $0<\underset{n\to \infty }{lim inf}{\zeta }_{n}\le \underset{n\to \infty }{lim sup}{\zeta }_{n}<1$.

Suppose

${x}_{n+1}=\left(1-{\zeta }_{n}\right){y}_{n}+{\zeta }_{n}{x}_{n}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\forall n\ge 0$

and

$\underset{n\to \infty }{lim sup}\left(∥{y}_{n+1}-{y}_{n}∥-∥{x}_{n+1}-{x}_{n}∥\right)\le 0\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\forall n\ge 0.$

Then $\underset{n\to \infty }{lim}∥{y}_{n}-{x}_{n}∥=0$.

Lemma 2 Let {s n } be a sequence of non-negative real numbers satisfying the inequality:

${s}_{n+1}\le \left(1-{\alpha }_{n}\right){s}_{n}+{\alpha }_{n}{\tau }_{n}+{\gamma }_{n},\forall n\ge 0,$

where {α n }, {τ n }, and {γ n } satisfy the following conditions:

1. (1)

${\alpha }_{n}\in \left[0,1\right]$, $\sum _{n=0}^{\infty }{\alpha }_{n}=\infty$, or equivalently $\prod _{n=0}^{\infty }\left(1-{\alpha }_{n}\right)=0$;

2. (2)

$\underset{n\to \infty }{lim sup}{\tau }_{n}\le 0$;

3. (3)

${\gamma }_{n}\in \left[0,\infty \right)$, $\sum _{n=0}^{\infty }{\gamma }_{n}<\infty$.

Then $\underset{n\to \infty }{lim}{s}_{n}=0$.

Lemma 3 (Demiclosedness principle) Assume that T is a nonexpansive self-mapping on a nonempty closed convex subset K of a Hilbert space H. If T has a fixed point, then (I - T) is demiclosed; that is, whenever {x n } is a sequence in K converging weakly to some x ∈ K and the sequence {(I - T)x n } converges strongly to some y ∈ H, it follows that (I - T)x = y, where I is the identity operator of H.

The following lemma is an immediate result of the inner product of a Hilbert space.

Lemma 4 In a real Hilbert space H, the following inequality holds:

${∥x+y∥}^{2}\le {∥x∥}^{2}+2⟨y,x+y⟩,\phantom{\rule{2.77695pt}{0ex}}\forall x,\phantom{\rule{2.77695pt}{0ex}}y\in H.$

Lemma 5 Let {α n } be a sequence of nonnegative real numbers with $\underset{n\to \infty }{lim sup}{\alpha }_{n}<\infty$ and let {β n } be a sequence of real numbers with $\underset{n\to \infty }{lim sup}{\beta }_{n}\le 0$. Then

$\underset{n\to \infty }{lim sup}{\alpha }_{n}{\beta }_{n}\le 0.$

A basic property of the projection mapping onto a closed convex subset of a Hilbert space is given in the following lemma.

Lemma 6 Let K be a nonempty closed convex subset of H. For any x, y ∈ H and z ∈ K,

1. (1)

$⟨{P}_{K}\left(x\right)-x,\phantom{\rule{2.77695pt}{0ex}}z-{P}_{K}\left(x\right)⟩\ge 0,$

2. (2)

${∥{P}_{K}\left(x\right)-{P}_{K}\left(y\right)∥}^{2}\le {∥x-y∥}^{2}-{∥{P}_{K}\left(x\right)-x+y-{P}_{K}\left(y\right)∥}^{2}.$

We now introduce some basic assumptions. Let F: H → H be a κ-Lipschitz and η-strongly monotone operator; that is, F satisfies the following conditions:

$∥F\left(x\right)-F\left(y\right)∥\le \kappa ∥x-y∥$

and

$⟨F\left(x\right)-F\left(y\right),x-y⟩\ge \eta {∥x-y∥}^{2},\phantom{\rule{2.77695pt}{0ex}}\forall x,\phantom{\rule{2.77695pt}{0ex}}y\in K.$

Assuming the solution set of VI(F, K) is nonempty, VI(F, K) naturally has a unique solution x* ∈ K under these conditions. Following Yamada [8], and to reduce the complexity of computing the projection $P_K$, we replace the projection $P_K$ with a nonexpansive mapping T: H → H whose fixed point set is Fix(T) = K. We now introduce some notation. For any given numbers λ ∈ (0, 1) and μ ∈ (0, 2η/κ²), we define the mapping ${T}_{\mu }^{\lambda }:H\to H$ as

${T}_{\mu }^{\lambda }x:=Tx-\lambda \mu F\left(Tx\right),\phantom{\rule{2.77695pt}{0ex}}\forall x\in H,$

where ${T}_{\mu }^{\lambda }$ satisfies the following property under some conditions.

Lemma 7 If 0 < μ < 2η/κ² and 0 < λ < 1, then ${T}_{\mu }^{\lambda }$ is a contraction. In fact,

$∥{T}_{\mu }^{\lambda }x-{T}_{\mu }^{\lambda }y∥\le \left(1-\lambda \delta \right)∥x-y∥,\phantom{\rule{2.77695pt}{0ex}}\forall x,\phantom{\rule{2.77695pt}{0ex}}y\in H,$

where $\delta =1-\sqrt{1-\mu \left(2\eta -\mu {\kappa }^{2}\right)}$.
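A quick numerical sanity check of Lemma 7 on an assumed toy example, with F(x) = 2x (so κ = η = 2) and T the identity mapping:

```python
import numpy as np

# Assumed toy example: F(x) = 2x gives kappa = eta = 2, and T = I is a
# trivially nonexpansive mapping. Then mu must lie in (0, 2*eta/kappa**2) = (0, 1).
kappa = eta = 2.0
mu = 0.4
lam = 0.5
delta = 1 - np.sqrt(1 - mu * (2 * eta - mu * kappa**2))  # contraction constant

def T_mu_lam(x):
    # T_mu^lam x = Tx - lam*mu*F(Tx), with T = I and F(x) = 2x.
    return x - lam * mu * (2 * x)

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
lhs = np.linalg.norm(T_mu_lam(x) - T_mu_lam(y))
rhs = (1 - lam * delta) * np.linalg.norm(x - y)
print(lhs <= rhs + 1e-12)  # True: the bound of Lemma 7 holds
```

For these values δ = 0.8, so the map shrinks distances by at least the factor 1 − λδ = 0.6.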

## 3 Convergence theorem

Before the analysis and proof, we first review the MRHSD method and related results.

Algorithm (MRHSD)

Take three fixed numbers t, ρ, γ ∈ (0, 2η/κ²), and let {α n } ⊂ [0, 1), {β n }, {γ n } ⊂ [0, 1], and {λ n }, $\left\{{\lambda }_{n}^{\prime }\right\}$, $\left\{{\lambda }_{n}^{″}\right\}\subset \left(0,\phantom{\rule{2.77695pt}{0ex}}1\right)$. Starting from an arbitrarily chosen initial point x0 ∈ H, compute the sequences {x n }, $\left\{{\stackrel{̄}{x}}_{n}\right\}$, $\left\{{\stackrel{̃}{x}}_{n}\right\}$ such that

Step 1: ${\stackrel{̄}{x}}_{n}={\gamma }_{n}{x}_{n}+\left(1-{\gamma }_{n}\right)\left[T{x}_{n}-{\lambda }_{n+1}^{″}\gamma F\left(T{x}_{n}\right)\right]$,

Step 2: ${\stackrel{̃}{x}}_{n}={\beta }_{n}{x}_{n}+\left(1-{\beta }_{n}\right)\left[T{\stackrel{̄}{x}}_{n}-{\lambda }_{n+1}^{\prime }\rho F\left(T{\stackrel{̄}{x}}_{n}\right)\right]$,

Step 3: ${x}_{n+1}={\alpha }_{n}{\stackrel{̄}{x}}_{n}+\left(1-{\alpha }_{n}\right)\left[T{\stackrel{̃}{x}}_{n}-{\lambda }_{n+1}tF\left(T{\stackrel{̃}{x}}_{n}\right)\right]$,

where T: H → H is a nonexpansive mapping. However, choosing an efficient and implementable nonexpansive mapping T is a difficult problem, and previous studies did not design numerical experiments or describe such a mapping [8–11, 19, 20]. In Section 4, we design an efficient and implementable nonexpansive mapping T for a type of variational problem based on the approximate projection contraction method. We first review the conditions and theorem presented by Xu et al.
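Steps 1–3 of the algorithm can be sketched as follows. The Python fragment below is our illustrative translation: T, F, and all parameter sequences are supplied by the caller, and the names are ours rather than from any original implementation.

```python
import numpy as np

def mrhsd_step(x, n, T, F, alpha, beta, gamma_seq, lam, lam1, lam2,
               t, rho, gamma):
    # One pass of Steps 1-3 of the MRHSD method. alpha, beta, gamma_seq,
    # lam, lam1, lam2 are the parameter sequences (callables of n);
    # t, rho, gamma are the fixed step lengths in (0, 2*eta/kappa**2).
    Tx = T(x)
    x_bar = gamma_seq(n) * x + (1 - gamma_seq(n)) * (Tx - lam2(n + 1) * gamma * F(Tx))  # Step 1
    Txb = T(x_bar)
    x_til = beta(n) * x + (1 - beta(n)) * (Txb - lam1(n + 1) * rho * F(Txb))            # Step 2
    Txt = T(x_til)
    return alpha(n) * x_bar + (1 - alpha(n)) * (Txt - lam(n + 1) * t * F(Txt))          # Step 3

# A hand-checkable run, assuming T = I and F(x) = x (so x* = 0): with all
# sequence values 0.5 and all lambdas 0.1, one step scales x by 0.892375.
x0 = np.array([1.0, -2.0])
x1 = mrhsd_step(x0, 1, T=lambda z: z, F=lambda z: z,
                alpha=lambda n: 0.5, beta=lambda n: 0.5, gamma_seq=lambda n: 0.5,
                lam=lambda n: 0.1, lam1=lambda n: 0.1, lam2=lambda n: 0.1,
                t=1.0, rho=1.0, gamma=1.0)
print(x1)  # 0.892375 * x0
```

The three steps reuse both x_n and the intermediate points, which is how the method retains historical information.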

Condition 3.1

1. (1)

$\sum _{1}^{\infty }\left|{\alpha }_{n}-{\alpha }_{n-1}\right|<\infty$, $\sum _{1}^{\infty }\left|{\beta }_{n}-{\beta }_{n-1}\right|<\infty$, $\sum _{1}^{\infty }\left|{\gamma }_{n}-{\gamma }_{n-1}\right|<\infty$;

2. (2)

$\underset{n\to \infty }{lim}{\alpha }_{n}=0$, $\underset{n\to \infty }{lim}{\beta }_{n}=1$, $\underset{n\to \infty }{lim}{\gamma }_{n}=1$;

3. (3)

$\underset{n\to \infty }{lim}{\lambda }_{n}=0$, $\underset{n\to \infty }{lim}\frac{{\lambda }_{n}}{{\lambda }_{n+1}}=1$, $\sum _{1}^{\infty }{\lambda }_{n}=\infty$;

4. (4)

${\lambda }_{n}\ge max\left\{{\lambda }_{n}^{\prime },\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}{\lambda }_{n}^{″}\right\}$, n ≥ 1.

Theorem 1 Under Condition 3.1, the sequence {x n } generated by the algorithm converges strongly to x* ∈ K, and x* is the unique solution of VI(F, K).

We now provide different conditions and establish a strong convergence theorem for the MRHSD method for variational inequalities under them. Note that Condition 3.2 and the corresponding strong convergence theorem (Theorem 2) are the first contribution of this article.

Condition 3.2

1. (1)

$0<\underset{n\to \infty }{lim inf}{\alpha }_{n}\le \underset{n\to \infty }{lim sup}{\alpha }_{n}<1$, $\underset{n\to \infty }{lim}{\beta }_{n}=1$, $\underset{n\to \infty }{lim}{\gamma }_{n}=1$;

2. (2)

$\underset{n\to \infty }{lim}{\lambda }_{n}=0$, $\sum _{1}^{\infty }{\lambda }_{n}=\infty$;

3. (3)

${\lambda }_{n}\ge max\left\{{\lambda }_{n}^{\prime },\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}{\lambda }_{n}^{″}\right\}$, n ≥ 1.

Theorem 2 Assume that α n , β n , γ n and λ n , ${\lambda }_{n}^{\prime }$, ${\lambda }_{n}^{″}$ satisfy Condition 3.2. Then the sequence {x n } generated by the algorithm converges strongly to x* ∈ K, and x* is the unique solution of VI(F, K).

Proof. We divide the proof into several steps.

Step 1.  The sequences {x n }, $\left\{{\stackrel{̄}{x}}_{n}\right\}$, $\left\{{\stackrel{̃}{x}}_{n}\right\}$ are bounded.

According to Step 1, we have that

$\left\{T{x}_{n}\right\},\left\{T{\stackrel{̄}{x}}_{n}\right\},\left\{T{\stackrel{̃}{x}}_{n}\right\},\left\{F\left(T{x}_{n}\right)\right\},\left\{F\left(T{\stackrel{̄}{x}}_{n}\right)\right\},\left\{F\left(T{\stackrel{̃}{x}}_{n}\right)\right\}$

are also bounded and

$∥{x}_{n}-{x}^{*}∥\le {M}_{0},\forall n\ge 0,$

where ${M}_{0}=max\left\{3∥{x}_{0}-{x}^{*}∥,\phantom{\rule{2.77695pt}{0ex}}3\left(\rho +\gamma +t\right)∥F\left({x}^{*}\right)∥/\tau \right\}$

and

$\begin{array}{c}∥{\stackrel{̃}{x}}_{n}-{x}^{*}∥\le {\beta }_{n}∥{x}_{n}-{x}^{*}∥+\left(1-{\beta }_{n}\right){\lambda }_{n+1}\left(\gamma +\rho \right)∥F\left({x}^{*}\right)∥\le \left(1+\tau \right){M}_{0,}\hfill \\ ∥{\stackrel{̄}{x}}_{n}-{x}^{*}∥\le ∥{x}_{n}-{x}^{*}∥+\left(1-{\gamma }_{n}\right){\lambda }_{n+1}^{″}\gamma ∥F\left({x}^{*}\right)∥\le \left(1+\tau \right){M}_{0}.\hfill \end{array}$

Step 2. $∥{x}_{n+1}-{x}_{n}∥\to 0$.

Indeed, a series of computations yields:

$\begin{array}{cc}\hfill ∥{\stackrel{̄}{x}}_{n}-{\stackrel{̄}{x}}_{n-1}∥& =∥{\gamma }_{n}{x}_{n}-{\gamma }_{n-1}{x}_{n-1}+\left(1-{\gamma }_{n}\right){T}_{\gamma }^{{\lambda }_{n+1}^{″}}{x}_{n}-\left(1-{\gamma }_{n-1}\right){T}_{\gamma }^{{\lambda }_{n}^{″}}{x}_{n-1}∥\hfill \\ \le ∥{\gamma }_{n}{x}_{n}-{\gamma }_{n-1}{x}_{n-1}∥+∥\left(1-{\gamma }_{n}\right){T}_{\gamma }^{{\lambda }_{n+1}^{″}}{x}_{n}-\left(1-{\gamma }_{n-1}\right){T}_{\gamma }^{{\lambda }_{n}^{″}}{x}_{n-1}∥\hfill \\ \le ∥{x}_{n}-{x}_{n-1}∥+\left|\left(1-{\gamma }_{n}\right){\lambda }_{n+1}^{″}-\left(1-{\gamma }_{n-1}\right){\lambda }_{n}^{″}\right|\gamma ∥F\left(T{x}_{n-1}\right)∥\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}+\left|{\gamma }_{n}-{\gamma }_{n-1}\right|\left(∥{x}_{n-1}∥+∥T{x}_{n-1}∥\right).\hfill \end{array}$
(2)

By ${T}_{\rho }^{{\lambda }_{n+1}^{\prime }}{\stackrel{̄}{x}}_{n}=T{\stackrel{̄}{x}}_{n}-{\lambda }_{n+1}^{\prime }\rho F\left(T{\stackrel{̄}{x}}_{n}\right)$, ${T}_{\rho }^{{\lambda }_{n}^{\prime }}{\stackrel{̄}{x}}_{n-1}=T{\stackrel{̄}{x}}_{n-1}-{\lambda }_{n}^{\prime }\rho F\left(T{\stackrel{̄}{x}}_{n-1}\right)$ and (2), we can obtain

$\begin{array}{cc}\hfill ∥{\stackrel{̃}{x}}_{n}-{\stackrel{̃}{x}}_{n-1}∥& =∥{\beta }_{n}{x}_{n}-{\beta }_{n-1}{x}_{n-1}+\left(1-{\beta }_{n}\right){T}_{\rho }^{{\lambda }_{n+1}^{\prime }}{\stackrel{̄}{x}}_{n}-\left(1-{\beta }_{n-1}\right){T}_{\rho }^{{\lambda }_{n}^{\prime }}{\stackrel{̄}{x}}_{n-1}∥\hfill \\ \le ∥{\beta }_{n}{x}_{n}-{\beta }_{n-1}{x}_{n-1}∥+∥\left(1-{\beta }_{n}\right){T}_{\rho }^{{\lambda }_{n+1}^{\prime }}{\stackrel{̄}{x}}_{n}-\left(1-{\beta }_{n-1}\right){T}_{\rho }^{{\lambda }_{n}^{\prime }}{\stackrel{̄}{x}}_{n-1}∥\hfill \\ \le ∥{x}_{n}-{x}_{n-1}∥+\left|\left(1-{\beta }_{n}\right){\lambda }_{n+1}^{\prime }-\left(1-{\beta }_{n-1}\right){\lambda }_{n}^{\prime }\right|\rho ∥F\left(T{\stackrel{̄}{x}}_{n-1}\right)∥\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}+\left(1-{\beta }_{n}\right)\left(1-{\lambda }_{n+1}^{\prime }{\tau }^{\prime }\right)\left|{\gamma }_{n}-{\gamma }_{n-1}\right|\left(∥{x}_{n-1}∥+∥T{x}_{n-1}∥\right)\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}+\left(1-{\beta }_{n}\right)\left(1-{\lambda }_{n+1}^{\prime }{\tau }^{\prime }\right)\left|\left(1-{\gamma }_{n}\right){\lambda }_{n+1}^{″}-{\gamma }_{n-1}{\lambda }_{n}^{″}\right|\gamma ∥F\left(T{x}_{n-1}\right)∥\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}+\left|{\beta }_{n}-{\beta }_{n-1}\right|\left(∥{x}_{n-1}∥+∥T{\stackrel{̄}{x}}_{n-1}∥+∥T{\stackrel{̄}{x}}_{n-1}∥\right).\hfill \end{array}$
(3)

Let

${\mathit{ỹ}}_{n}={T}_{t}^{{\lambda }_{n+1}}{\stackrel{̃}{x}}_{n}=T{\stackrel{̃}{x}}_{n}-{\lambda }_{n+1}tF\left(T{\stackrel{̃}{x}}_{n}\right),$

so we obtain

${x}_{n+1}={\alpha }_{n}{\stackrel{̄}{x}}_{n}+\left(1-{\alpha }_{n}\right){\mathit{ỹ}}_{n}.$

Furthermore,

$\begin{array}{cc}\hfill ∥{\mathit{ỹ}}_{n}-{\mathit{ỹ}}_{n-1}∥& =∥T{\stackrel{̃}{x}}_{n}-T{\stackrel{̃}{x}}_{n-1}+{\lambda }_{n}tF\left(T{\stackrel{̃}{x}}_{n-1}\right)-{\lambda }_{n+1}tF\left(T{\stackrel{̃}{x}}_{n}\right)∥\hfill \\ \le ∥T{\stackrel{̃}{x}}_{n}-T{\stackrel{̃}{x}}_{n-1}∥+{\lambda }_{n}t∥F\left(T{\stackrel{̃}{x}}_{n-1}\right)∥+{\lambda }_{n+1}t∥F\left(T{\stackrel{̃}{x}}_{n}\right)∥\hfill \\ \le ∥{\stackrel{̃}{x}}_{n}-{\stackrel{̃}{x}}_{n-1}∥+{\lambda }_{n}t∥F\left(T{\stackrel{̃}{x}}_{n-1}\right)∥+{\lambda }_{n+1}t∥F\left(T{\stackrel{̃}{x}}_{n}\right)∥.\hfill \end{array}$
(4)

By $\underset{n\to \infty }{lim}{\beta }_{n}=1$, $\underset{n\to \infty }{lim}{\lambda }_{n}=0$ and (3), (4), we obtain:

$\begin{array}{c}∥{\mathit{ỹ}}_{n}-{\mathit{ỹ}}_{n-1}∥-∥{x}_{n}-{x}_{n-1}∥\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\le \left|\left(1-{\beta }_{n}\right){\lambda }_{n+1}^{\prime }-\left(1-{\beta }_{n-1}\right){\lambda }_{n}^{\prime }\right|\rho ∥F\left(T{\stackrel{̃}{x}}_{n-1}\right)∥\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}+\left(1-{\beta }_{n}\right)\left(1-{\lambda }_{n+1}^{\prime }{\tau }^{\prime }\right)\left|{\gamma }_{n}-{\gamma }_{n-1}\right|\left(∥{x}_{n-1}∥+∥T{x}_{n-1}∥\right)\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}+\left(1-{\beta }_{n}\right)\left(1-{\lambda }_{n+1}^{\prime }{\tau }^{\prime }\right)\left|\left(1-{\gamma }_{n}\right){\lambda }_{n+1}^{″}-{\gamma }_{n-1}{\lambda }_{n}^{″}\right|\gamma ∥F\left(T{x}_{n-1}\right)∥\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}+\left|{\beta }_{n}-{\beta }_{n-1}\right|\left(∥{x}_{n-1}∥+∥T{\stackrel{̄}{x}}_{n-1}∥+∥T{\stackrel{̄}{x}}_{n-1}∥\right)\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}+{\lambda }_{n}t∥F\left(T{\stackrel{̃}{x}}_{n-1}\right)∥+{\lambda }_{n+1}t∥F\left(T{\stackrel{̃}{x}}_{n}\right)∥\to 0.\hfill \end{array}$
(5)

According to Lemma 1, we can obtain

$\underset{n\to \infty }{lim}∥{\mathit{ỹ}}_{n-1}-{x}_{n-1}∥=0.$

Furthermore, using the conditions $\underset{n\to \infty }{lim}{\gamma }_{n}=1$, $max\left\{{\lambda }_{n}^{\prime },{\lambda }_{n}^{″}\right\}\le {\lambda }_{n}\to 0$, we obtain

$\begin{array}{cc}\hfill ∥{\stackrel{̄}{x}}_{n}-{x}_{n}∥& =∥-\left(1-{\gamma }_{n}\right){x}_{n}+\left(1-{\gamma }_{n}\right)\left(T{x}_{n}-{\lambda }_{n+1}^{\prime }\gamma F\left(T{x}_{n}\right)\right)∥\hfill \\ \le \left(1-{\gamma }_{n}\right)∥{x}_{n}∥+\left(1-{\gamma }_{n}\right)∥T{x}_{n}∥+{\lambda }_{n+1}^{\prime }\gamma ∥F\left(T{x}_{n}\right)∥\to 0.\hfill \end{array}$
(6)

According to (5) and (6), we conclude that

$\begin{array}{cc}\hfill ∥{x}_{n}-{x}_{n-1}∥& =∥{\alpha }_{n-1}{\stackrel{̄}{x}}_{n-1}+\left(1-{\alpha }_{n-1}\right){\mathit{ỹ}}_{n-1}-{x}_{n-1}∥\hfill \\ \le {\alpha }_{n-1}∥{\stackrel{̄}{x}}_{n-1}-{x}_{n-1}∥+\left(1-{\alpha }_{n-1}\right)∥{\mathit{ỹ}}_{n-1}-{x}_{n-1}∥\to 0,\hfill \end{array}$

so we immediately obtain

$∥{x}_{n+1}-{x}_{n}∥\to 0.$

Step 3. $∥{x}_{n+1}-T{x}_{n}∥\to 0$.

In fact,

$\begin{array}{cc}\hfill ∥{\stackrel{̃}{x}}_{n}-{x}_{n}∥& =∥-\left(1-{\beta }_{n}\right){x}_{n}+\left(1-{\beta }_{n}\right)\left(T{\stackrel{̄}{x}}_{n}-{\lambda }_{n+1}^{\prime }\rho F\left(T{\stackrel{̄}{x}}_{n}\right)\right)∥\hfill \\ \le \left(1-{\beta }_{n}\right)∥{x}_{n}∥+\left(1-{\beta }_{n}\right)∥T{\stackrel{̄}{x}}_{n}∥+{\lambda }_{n+1}^{\prime }\rho ∥F\left(T{\stackrel{̄}{x}}_{n}\right)∥.\hfill \end{array}$
(7)

According to the assumptions $\underset{n\to \infty }{lim}{\beta }_{n}=1$ and $\underset{n\to \infty }{lim}{\lambda }_{n}=0$, we obtain

$∥{\stackrel{̃}{x}}_{n}-{x}_{n}∥\to 0.$

A series of computations yields:

$\begin{array}{cc}\hfill ∥{x}_{n+1}-T{x}_{n}∥& =∥{\alpha }_{n}\left({\stackrel{̄}{x}}_{n}-T{x}_{n}\right)+\left(1-{\alpha }_{n}\right)\left({T}_{t}^{{\lambda }_{n+1}}{\stackrel{̃}{x}}_{n}-T{x}_{n}\right)∥\hfill \\ \le {\alpha }_{n}∥{\stackrel{̄}{x}}_{n}-T{x}_{n}∥+\left(1-{\alpha }_{n}\right)∥T{\stackrel{̃}{x}}_{n}-T{x}_{n}∥\hfill \\ \phantom{\rule{1em}{0ex}}+\left(1-{\alpha }_{n}\right){\lambda }_{n+1}t∥F\left(T{\stackrel{̃}{x}}_{n}\right)∥\hfill \\ \le {\alpha }_{n}∥{\stackrel{̄}{x}}_{n}-T{x}_{n}∥+∥{\stackrel{̃}{x}}_{n}-{x}_{n}∥+{\lambda }_{n+1}t∥F\left(T{\stackrel{̃}{x}}_{n}\right)∥\hfill \\ \le {\alpha }_{n}∥{x}_{n+1}-T{x}_{n}∥+{\alpha }_{n}∥{\stackrel{̄}{x}}_{n}-{x}_{n+1}∥+∥{\stackrel{̃}{x}}_{n}-{x}_{n}∥\hfill \\ \phantom{\rule{1em}{0ex}}+{\lambda }_{n+1}t∥F\left(T{\stackrel{̃}{x}}_{n}\right)∥.\hfill \end{array}$
(8)

Hence, by (6), (7), (8) and Condition 3.2, we obtain:

$∥{x}_{n+1}-T{x}_{n}∥\le \frac{{\alpha }_{n}}{1-{\alpha }_{n}}∥{\stackrel{̄}{x}}_{n}-{x}_{n+1}∥+\frac{∥{\stackrel{̃}{x}}_{n}-{x}_{n}∥}{1-{\alpha }_{n}}+\frac{{\lambda }_{n+1}t∥F\left(T{\stackrel{̃}{x}}_{n}\right)∥}{1-{\alpha }_{n}}\to 0.$
(9)

Corollary 1 $∥{x}_{n}-T{x}_{n}∥\to 0$.

Applying Steps 2 and 3, we get

$∥{x}_{n+1}-T{x}_{n}∥\to 0$

and

$∥{x}_{n+1}-{x}_{n}∥\to 0,$

so

$∥{x}_{n}-T{x}_{n}∥\le ∥{x}_{n+1}-T{x}_{n}∥+∥{x}_{n+1}-{x}_{n}∥\to 0.$

Step 4. $lim{sup}_{n\to \infty }<-F\left({x}^{*}\right),T{\stackrel{̃}{x}}_{n}-{x}^{*}>\le 0$.

For some $\stackrel{̃}{x}\in H$, there exists a subsequence $\left\{T{x}_{{n}_{i}}\right\}$ converging weakly to $\stackrel{̃}{x}$ such that

$\underset{n\to \infty }{lim sup}<-F\left({x}^{*}\right),T{x}_{n}-{x}^{*}>=\underset{n\to \infty }{lim sup}<-F\left({x}^{*}\right),T{x}_{{n}_{i}}-{x}^{*}>.$

According to $\left\{T{x}_{{n}_{i}}\right\}\to \stackrel{̃}{x}$ weakly, Corollary 1, and the demiclosedness principle (Lemma 3), we have

$\stackrel{̃}{x}\in Fix\left(T\right)=K.$

Moreover, x* is the unique solution of VI(F, K), so we obtain:

$\begin{array}{c}\underset{n\to \infty }{lim sup}<-F\left({x}^{*}\right),T{x}_{n}-{x}^{*}>\hfill \\ \phantom{\rule{2.77695pt}{0ex}}=\underset{n\to \infty }{lim sup}<-F\left({x}^{*}\right),\stackrel{̃}{x}-{x}^{*}>\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\le 0.\hfill \end{array}$

Since $∥T{\stackrel{̃}{x}}_{n}-T{x}_{n}∥\le ∥{\stackrel{̃}{x}}_{n}-{x}_{n}∥\to 0$, we immediately conclude that

$\begin{array}{c}\underset{n\to \infty }{lim sup}<-F\left({x}^{*}\right),T{\stackrel{̃}{x}}_{n}-{x}^{*}>\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\le \underset{n\to \infty }{lim sup}<-F\left({x}^{*}\right),T{\stackrel{̃}{x}}_{n}-T{x}_{n}>\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}+\underset{n\to \infty }{lim sup}<-F\left({x}^{*}\right),T{x}_{n}-{x}^{*}>\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\le \underset{n\to \infty }{lim sup}<-F\left({x}^{*}\right),T{x}_{n}-{x}^{*}>\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\le 0.\hfill \end{array}$

Step 5. $∥{x}_{n}-{x}^{*}∥\to 0$. To prove this conclusion, we apply Lemma 2.

By Step 1 and Lemma 4, we have:

$∥{x}_{n+1}-{x}^{*}∥\le \left(1-\left(1-{\alpha }_{n}\right){\lambda }_{n+1}\tau \right)∥{x}_{n}-{x}^{*}∥+\left(1-{\alpha }_{n}\right){\lambda }_{n+1}\tau \phantom{\rule{2.77695pt}{0ex}}{w}_{n+1}^{\prime },$
(10)

where

$\begin{array}{cc}\hfill {w}_{n+1}^{\prime }& =\frac{2t<-F\left({x}^{*}\right),T{\stackrel{̃}{x}}_{n}-{x}^{*}-t{\lambda }_{n+1}F\left(T{\stackrel{̃}{x}}_{n}\right)>}{\tau \left(1-{\alpha }_{n}\right)}\hfill \\ \phantom{\rule{1em}{0ex}}+\frac{{\phi }_{n}}{\tau \left(1-{\alpha }_{n}\right)}+\frac{{\xi }_{n}}{\tau \left(1-{\alpha }_{n}\right)},\hfill \\ \hfill {\phi }_{n}& =\left(1-{\gamma }_{n}\right)\gamma M,\hfill \\ \hfill {\xi }_{n}& =\left(1-{\alpha }_{n}\right){\left(1-{\lambda }_{n+1}\tau \right)}^{2}\left(1-{\beta }_{n}\right)M\hfill \end{array}$

and ${M}_{0}\le M<\infty$.

If we denote

${s}_{n+1}^{\prime }=∥{x}_{n+1}-{x}^{*}∥,{u}_{n}=\left(1-{\alpha }_{n}\right){\lambda }_{n+1}\tau ,$

we can rewrite (10):

${s}_{n+1}^{\prime }\le \left(1-{u}_{n}\right){s}_{n}^{\prime }+{u}_{n}{w}_{n+1}^{\prime }.$

In fact, ${u}_{n}$ and ${w}_{n+1}^{\prime }$ satisfy the conditions of Lemma 2. According to

$\underset{n\to \infty }{lim}{\beta }_{n}=1,\underset{n\to \infty }{lim}{\gamma }_{n}=1,\underset{n\to \infty }{lim}{\lambda }_{n}=0$

and Step 4, we obtain

$\frac{{\phi }_{n}}{\tau \left(1-{\alpha }_{n}\right)}\to 0$

and

$\frac{{\xi }_{n}}{\tau \left(1-{\alpha }_{n}\right)}\to 0.$

Furthermore, $lim{sup}_{n\to \infty }<-F\left({x}^{*}\right),T{\stackrel{̃}{x}}_{n}-{x}^{*}>\le 0$ and ${\lambda }_{n+1}\to 0$, so by Lemma 5 the first term of ${w}_{n+1}^{\prime }$ also has a nonpositive upper limit. Consequently, we obtain

$\underset{n\to \infty }{lim sup}{w}_{n}^{\prime }\le 0,$

and then from Lemma 2, we have

$∥{x}_{n}-{x}^{*}∥\to 0,$

which completes the proof.

## 4 Implementation of the MRHSD method for a type of variational inequality

Now we consider the variational inequality problem VI(F, K1 ∩ K2), which involves finding x* ∈ K1 ∩ K2 such that

${x}^{*}\in {K}_{1}\cap {K}_{2},\phantom{\rule{2.77695pt}{0ex}}⟨F\left({x}^{*}\right),x-{x}^{*}⟩\ge 0,\phantom{\rule{2.77695pt}{0ex}}\forall x\in {K}_{1}\cap {K}_{2},$
(11)

where K1 and K2 are nonempty and closed convex subsets of H.

To reduce the difficulty and complexity of computing the projection $P_K$, we solve VI(F, K1 ∩ K2) by the MRHSD method. We then have to choose an efficient and implementable nonexpansive mapping T. In the spirit of the approximate projection contraction method, we define Tx as:

$Tx=H\left(G\left(x\right)\right)\approx {P}_{K}\left[x\right],$
(12)

where

$G\left(x\right)={P}_{{K}_{2}}\left(x\right),\phantom{\rule{2.77695pt}{0ex}}H\left(x\right)={P}_{{K}_{1}}\left(x\right).$

Assuming that ${P}_{{K}_{2}}\left(x\right)$ and ${P}_{{K}_{1}}\left(x\right)$ can be computed without much difficulty, we can efficiently compute Tx. Because Tx ≈ P K [x], we partly retain the efficiency of the projection contraction method. Obviously, the fixed point set is Fix(T) = K1 ∩ K2 = K and T is nonexpansive.
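As an assumed illustration of this construction (the sets below are ours, not from the article), take K1 to be a halfspace and K2 a box, both of which have closed-form projections; Tx = P_{K1}(P_{K2}(x)) then fixes exactly the points of K1 ∩ K2:

```python
import numpy as np

def proj_box(x, lo, hi):
    # P_{K2}: projection onto the box K2 = {x : lo <= x <= hi}.
    return np.clip(x, lo, hi)

def proj_halfspace(x, a, b):
    # P_{K1}: projection onto the halfspace K1 = {x : a.x <= b}.
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def T(x, lo, hi, a, b):
    # Tx = H(G(x)) = P_{K1}(P_{K2}(x)): an implementable nonexpansive
    # mapping whose fixed-point set is K1 ∩ K2.
    return proj_halfspace(proj_box(x, lo, hi), a, b)

lo, hi = np.zeros(2), np.ones(2)
a, b = np.array([1.0, 1.0]), 1.0
x_in = np.array([0.2, 0.3])      # already in K1 ∩ K2: T leaves it fixed
print(T(x_in, lo, hi, a, b))     # [0.2 0.3]
x_out = np.array([2.0, 2.0])     # outside both sets
print(T(x_out, lo, hi, a, b))    # [0.5 0.5]
```

Each factor is a projection onto a convex set, hence nonexpansive, so their composition T is nonexpansive as well.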

## 5 Numerical experiments

To show the performance of the MRHSD method for VI(F, K1 ∩ K2), we test a set of problems that arise in finance and statistics [12, 22]. Let H L and H U be given n × n symmetric matrices with H L ≤ H U elementwise, and let C be asymmetric, which differs from previous approaches [12, 22]. The problem considered in this section is:

$min\left\{\frac{1}{2}{∥X-C∥}_{F}^{2}\phantom{\rule{2.77695pt}{0ex}}|\phantom{\rule{2.77695pt}{0ex}}X\in K={S}_{+}^{n}\cap \mathcal{B}\right\},$
(13)

where $∥\cdot {∥}_{F}$ is the matrix Frobenius norm, i.e.,

${∥C∥}_{F}={\left(\sum _{i=1}^{n}\sum _{j=1}^{n}{\left|{C}_{ij}\right|}^{2}\right)}^{\frac{1}{2}}.$

Furthermore,

${S}_{+}^{n}=\left\{H\in I{R}^{n×n}|{H}^{T}=H,H\succeq 0\right\}$

and

$\mathcal{B}=\left\{H\in I{R}^{n×n}|{H}^{T}=H,{H}_{L}\le H\le {H}_{U}\right\}.$

Note that the matrix Frobenius norm is induced by the inner product

$⟨A,B⟩=\mathsf{\text{Trace}}\left({A}^{T}B\right).$

It is known that optimization problem (13) is equivalent to the following variational inequality problem:

$⟨{X}^{\prime }-X,\nabla \left(\frac{1}{2}{∥X-C∥}^{2}\right)⟩\ge 0,\forall {X}^{\prime }\in K,$

so we obtain

$⟨{X}^{\prime }-X,X-C⟩\ge 0,\forall {X}^{\prime }\in K.$
(14)

To solve variational inequality problem (14) by the MRHSD method, we take one set of parameter sequences satisfying Condition 3.1.

Condition 3.1.

$\begin{array}{c}{\alpha }_{n}={\lambda }_{n}={\lambda }_{n}^{\prime }={\lambda }_{n}^{″}=\frac{1}{n},\hfill \\ {\beta }_{n}={\gamma }_{n}=1-\frac{1}{n},\hfill \\ \gamma =\rho =t={c}_{0}>0.\hfill \end{array}$

Furthermore, we take two different parameter sequences satisfying Condition 3.2 to demonstrate the different effects for different α n .

Condition 3.2a.

$\begin{array}{c}\left\{\begin{array}{c}{\alpha }_{n}=0.3-1/\left(100*n\right);\phantom{\rule{2.77695pt}{0ex}}n=2k\hfill \\ {\alpha }_{n}=0.1-1/\left(100*n\right);\phantom{\rule{2.77695pt}{0ex}}n=2k-1;\hfill \end{array}\right\\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}{\lambda }_{n}={\lambda }_{n}^{\prime }={\lambda }_{n}^{″}=1/\left(n+1\right);\hfill \\ \left\{\begin{array}{c}{\beta }_{n}=1-1/n;{\gamma }_{n}=1-1/n;\phantom{\rule{2.77695pt}{0ex}}n=2k\hfill \\ {\beta }_{n}=1-1/n;\phantom{\rule{2.77695pt}{0ex}}{\gamma }_{n}=1-1/\left(2n\right);\phantom{\rule{2.77695pt}{0ex}}n=2k-1;\hfill \end{array}\right\\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\gamma =\rho =t={c}_{0}>0.\hfill \end{array}$

Condition 3.2b.

$\begin{array}{c}\left\{\begin{array}{c}{\alpha }_{n}=0.8-1/\left(100*n\right);\phantom{\rule{2.77695pt}{0ex}}n=2k\hfill \\ {\alpha }_{n}=0.3-1/\left(100*n\right);\phantom{\rule{2.77695pt}{0ex}}n=2k-1;\hfill \end{array}\right\\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}{\lambda }_{n}={\lambda }_{n}^{\prime }={\lambda }_{n}^{″}=1/\left(n+1\right);\hfill \\ \left\{\begin{array}{c}{\beta }_{n}=1-1/n;{\gamma }_{n}=1-1/n;\phantom{\rule{2.77695pt}{0ex}}n=2k\hfill \\ {\beta }_{n}=1-1/n;\phantom{\rule{2.77695pt}{0ex}}{\gamma }_{n}=1-1/\left(2n\right);\phantom{\rule{2.77695pt}{0ex}}n=2k-1;\hfill \end{array}\right\\hfill \\ \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\gamma =\rho =t={c}_{0}>0.\hfill \end{array}$
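For concreteness, the schedule of Condition 3.2a can be written as a small helper (Condition 3.1 and Condition 3.2b are analogous); the values below are read directly from the displays above:

```python
def params_32a(n):
    # Parameter schedule of Condition 3.2a (n >= 1); lam = lam' = lam''.
    if n % 2 == 0:                      # n = 2k
        alpha = 0.3 - 1 / (100 * n)
        beta, gamma = 1 - 1 / n, 1 - 1 / n
    else:                               # n = 2k - 1
        alpha = 0.1 - 1 / (100 * n)
        beta, gamma = 1 - 1 / n, 1 - 1 / (2 * n)
    lam = 1 / (n + 1)
    return alpha, beta, gamma, lam

# alpha_n alternates near 0.1 and 0.3 (so 0 < liminf <= limsup < 1),
# beta_n, gamma_n -> 1, and lam_n -> 0 with divergent sum, as Condition 3.2 requires.
print(params_32a(1), params_32a(2))
```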

According to (12), we define TX as

$TX=H\left(G\left(X\right)\right),$
(15)

where

$G\left(X\right)=min\left({H}_{U},max\left(X,{H}_{L}\right)\right),\phantom{\rule{2.77695pt}{0ex}}H\left(X\right)={P}_{{S}_{+}^{n}}\left(X\right),$

which can easily be computed, and the fixed point set is Fix(T) = K. Moreover, according to Theorems 1 and 2, the sequences generated by the algorithm under Conditions 3.1 and 3.2 are convergent.
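In matrix form, G clips elementwise and H projects onto the positive semidefinite cone via an eigenvalue decomposition. The following NumPy sketch is our illustration of (15), not the authors' MATLAB code:

```python
import numpy as np

def G(X, HL, HU):
    # G = P_B: projection onto the box {HL <= X <= HU} (elementwise clipping).
    return np.minimum(HU, np.maximum(X, HL))

def H(X):
    # H = P_{S^n_+}: symmetrize, then drop the negative eigenvalues.
    Xs = (X + X.T) / 2
    w, Q = np.linalg.eigh(Xs)
    return (Q * np.maximum(w, 0)) @ Q.T

def T(X, HL, HU):
    # TX = H(G(X)) as in (15).
    return H(G(X, HL, HU))

# Small instance with the bounds of Section 5: diagonal entries pinned to 1,
# off-diagonal entries clipped into [-0.1, 0.1].
HU = np.full((2, 2), 0.1); np.fill_diagonal(HU, 1.0)
HL = -HU.copy(); np.fill_diagonal(HL, 1.0)
X = np.array([[2.0, 0.5], [0.5, -3.0]])
print(T(X, HL, HU))   # [[1.0, 0.1], [0.1, 1.0]]
```

With these bounds the clipped matrix is diagonally dominant, hence already positive semidefinite, so H leaves it unchanged in this instance.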

The computation started from ones(n, n) in MATLAB and stopped when $∥{x}_{k+1}-{x}_{k}∥\le {10}^{-4}$ or ${10}^{-5}$. All codes were implemented in MATLAB 7.0 and run on an ASUS notebook computer with a 1.70 GHz Pentium processor and 768 MB of memory.

We tested the problem using n = 100, 200, 300, 400, 500. The test results for the MRHSD method under different conditions and tolerances are reported in Tables 1 and 2.

### Test examples

In this example, we generate the data in a manner similar to previous work. Note that it is very difficult to compute the examples using the extended contraction method when C is asymmetric. However, the MRHSD method can efficiently compute them.

The diagonal elements of C are randomly generated in the interval (0, 2) and the off-diagonal elements are randomly generated in the interval (-1, 1):

$\begin{array}{c}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}{\left({H}_{U}\right)}_{jj}={\left({H}_{L}\right)}_{jj}=1,\hfill \\ {\left({H}_{U}\right)}_{ij}=-{\left({H}_{L}\right)}_{ij}=0.1,\forall i\ne j,i,j=1,2,...,n.\hfill \end{array}$

Matlab code:

```matlab
C = zeros(n, n); HU = ones(n, n)*0.1; HL = -HU;
t = 1;   % seed of the recursion below (initial value assumed; any positive integer works)
for i = 1:n
    for j = 1:n
        t = mod(t*42108 + 13846, 46273);   % linear congruential generator
        C(i, j) = t*2/46273 - 1;           % off-diagonal entries in (-1, 1)
    end
end
for i = 1:n
    C(i, i) = abs(C(i, i))*2;              % diagonal entries in (0, 2)
    HU(i, i) = 1; HL(i, i) = 1;
end
```

The numerical results demonstrate that this implementation of the MRHSD method is efficient. Furthermore, the MRHSD method under Condition 3.2 is more efficient than under Condition 3.1. These numerical experiments and results are the third contribution of this article.

## 6 Conclusions and discussions

We have proved strong convergence of the MRHSD method under Condition 3.2, which differs from Condition 3.1. The proof is simplified by the suitable restrictions that Condition 3.2 imposes on the parameters, and the result can be considered an improvement and refinement of previous results. In particular, we designed an efficient implementation of the MRHSD method based on the approximate projection contraction method. Numerical experiments demonstrated that the implementation is efficient and that the MRHSD method under Condition 3.2 is more efficient than under Condition 3.1. However, choosing an efficient and implementable nonexpansive mapping for a general VI(F, K) remains a difficult problem.

## References

1. Hartman P, Stampacchia G: On some nonlinear elliptic differential functional equations. Acta Math 1966, 115: 153–188.

2. Stampacchia G: Variational inequalities in theory and applications of monotone operators. In Proceedings of the NATO Advanced Study Institute. Venice, Italy(Edizioni Oderisi, Gubbio, Italy); 1968:102–192.

3. Harker PT, Pang JS: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math Program 1990, 48: 161–220.

4. Smith MJ: The existence, uniqueness and stability of traffic equilibria. Transport Res 1979, 13B: 295–304.

5. Mathiesen L: Computation of economic equilibria by a sequence of linear complementarity problems. Math Program Stud 1985, 23: 144–162.

6. Huang NJ, Li J, Wu SY: Optimality conditions for vector optimization problems. J Optim Theory Appl 2009, 142: 323–342.

7. Deutsch F, Yamada I: Minimizing certain convex functions over the intersection of the fixed-point sets of nonexpansive mappings. Numer Func Anal Opt 1998, 19: 33–56.

8. Yamada I: The hybrid steepest-descent method for variational inequality problems over the intersection of the fixed-point sets of nonexpansive mappings. Stud Comput Math 2001, 8: 473–504.

9. Xu HK, Kim TH: Convergence of hybrid steepest-descent methods for variational inequalities. J Optim Theory Appl 2003, 119: 185–201.

10. Zeng LC, Wong NC, Yao JC: Convergence analysis of modified hybrid steepest-descent methods with variable parameters for variational inequalities. J Optim Theory Appl 2007, 132: 51–69.

11. Ding XP, Lin YC, Yao JC: Three-step relaxed hybrid steepest-descent methods for variational inequalities. Appl Math Mech 2007, 28: 1029–1036.

12. He BS, Liao LZ, Wang X: Proximal-like contraction methods for monotone variational inequalities in a unified framework.2009. [http://www.optimization-online.org/DB-HTML/2008/12/2163.html]

13. He BS, Yang ZH, Yuan XM: An approximate proximal-extragradient type method for monotone variational inequalities. J Math Anal Appl 2004, 300: 362–374.

14. He BS: A new method for a class of linear variational inequalities. Math Program 1994, 66: 137–144.

15. He BS: A class of projection and contraction methods for monotone variational inequalities. Appl Math Optim 1997, 35: 69–76.

16. Li M, Bnouhachem A: A modified inexact operator splitting method for monotone variational inequalities. J Glob Optim 2008, 41: 417–426.

17. Sun DF: A projection and contraction method for the nonlinear complementarity problem and its extensions. Math Numer Sin 1994, 16: 183–194.

18. Noor MA: New approximation schemes for general variational inequalities. J Math Anal Appl 2005, 251: 217–229.

19. Yao YH, Noor MA, Chen RD, Liou YC: Strong convergence of three-step relaxed hybrid steepest-descent methods for variational inequalities. Appl Math Comput 2008, 201: 175–183.

20. Xu HW, Song EB, Pan HP, Shao H, Sun LM: The modified and relaxed hybrid steepest-descent methods for variational inequalities. In Proceedings of the 1st International Conference on Modelling and Simulation. Volume II. World Academic Press; 2008:169–174.

21. Suzuki T: Strong convergence of Krasnoselskii and Mann's type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J Math Anal Appl 2005, 305: 227–239.

22. Gao Y, Sun DF: Calibrating least squares covariance matrix problems with equality and inequality constraints.2008. [http://www.math.nus.edu.sg/matsundf/CaliMat_June_2008.pdf]

## Acknowledgements

This research was supported by the National Science and Technology Support Program (Grant No. 2011BAH24B06) and the Science Foundation of the Civil Aviation Flight University of China (Grant No. J2010-45).

## Author information


### Corresponding author

Correspondence to Haiwen Xu.

### Competing interests

The author declares that they have no competing interests.


Xu, H. Efficient implementation of a modified and relaxed hybrid steepest-descent method for a type of variational inequality. J Inequal Appl 2012, 93 (2012). https://doi.org/10.1186/1029-242X-2012-93


### Keywords

• hybrid steepest-descent method
• variational inequalities
• approximate projection contraction method
• strong convergence
• nonexpansive mapping 