Iterative methods for solving a class of monotone variational inequality problems with applications
Journal of Inequalities and Applications volume 2015, Article number: 68 (2015)
Abstract
In this paper, we provide a more general regularization method for seeking a solution to a class of monotone variational inequalities in a real Hilbert space, where the regularizer is a hemicontinuous and strongly monotone operator. As a discretization of the regularization method, we propose an iterative method. We then prove that the proposed iterative method converges in norm to a solution of the class of monotone variational inequalities. We also apply our results to the constrained minimization problem and the minimum-norm fixed point problem for a generalized Lipschitz continuous and pseudocontractive mapping. The results presented in the paper improve and extend recent ones in the literature.
Introduction
The present paper is devoted to presenting a more general regularization method for a class of monotone variational inequality problems (MVIPs) with a monotone and hemicontinuous operator F over a nonempty, closed, and convex subset C of a real Hilbert space H. The so-called MVIP is to find a point \(x^{*}\in C\) such that
$$ \bigl\langle Fx^{*}, x-x^{*} \bigr\rangle \ge0, \quad \forall x\in C. $$(1.1)
The set of solutions for MVIP (1.1) is denoted by \(VI(C,F)\).
The monotone variational inequalities were initially investigated by Kinderlehrer and Stampacchia in [1] and have been widely studied by many authors ever since, due to the fact that they cover as diverse disciplines as partial differential equations, optimization, optimal control, mathematical programming, mechanics, and finance (see [1–23]).
Over the past five decades or so, researchers have designed various iterative algorithms for solving MVIP (1.1); see [1–23] and the references therein. An early and typical iterative algorithm for solving MVIP (1.1) seems to be the projected gradient method (PGM), see, for instance, [4, 8], which generates a sequence \(\{x_{n}\}\) by the recursive procedure
$$ x_{n+1}=P_{C}(x_{n}-\mu Fx_{n}),\quad n\ge1, $$(1.2)
where \(P_{C}\) stands for the metric projection from H onto C, I is the identity mapping on H, and μ is an arbitrarily positive number.
It is well known that if F is k-Lipschitz continuous and η-strongly monotone, then MVIP (1.1) has a unique solution. Moreover, if one chooses \(\mu\in (0,\frac{2\eta}{k^{2}})\), then the sequence \(\{x_{n}\}\) generated by PGM (1.2) converges in norm to the unique solution to MVIP (1.1); see, for instance, [4, 8].
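To make the PGM concrete, here is a minimal sketch on a toy one-dimensional instance (the set C, the operator F, and the step size below are our illustrative choices, not from the paper): F(x) = 2x − 1 is 2-Lipschitz and 2-strongly monotone on C = [1, 2], so any \(\mu\in(0,\frac{2\eta}{k^{2}})=(0,1)\) gives norm convergence to the unique VI solution x* = 1.

```python
# Projected gradient method (PGM, 1.2): x_{n+1} = P_C(x_n - mu * F(x_n)).
# Toy instance (ours): C = [1, 2] in R, F(x) = 2x - 1, which is
# k-Lipschitz (k = 2) and eta-strongly monotone (eta = 2),
# so any mu in (0, 2*eta/k**2) = (0, 1) works.

def project_interval(x, lo, hi):
    """Metric projection of x onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

def pgm(F, project, x0, mu, n_iter=200):
    x = x0
    for _ in range(n_iter):
        x = project(x - mu * F(x))
    return x

F = lambda x: 2.0 * x - 1.0
x_star = pgm(F, lambda x: project_interval(x, 1.0, 2.0), x0=2.0, mu=0.4)
# The unique solution of the VI is x* = 1: F(1) = 1 > 0, and
# <F(1), x - 1> >= 0 for every x in [1, 2].
print(x_star)  # 1.0
```

The unconstrained minimizer of the underlying potential lies at 0.5, outside C, so the VI solution sits on the boundary of C, which the projection enforces.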
However, if F is simply k-Lipschitz continuous and monotone, but not η-strongly monotone, then MVIP (1.1) may fail to have a solution, which can be seen from the following counterexample.
Example 1.1
Let \(H=C=\mathbb{R}=(-\infty,\infty)\) and consider the function \(F:C\to H\) defined by \(F(x)=\arctan(x)-\frac{\pi}{2}\), \(x\in C\). Then F is 1-Lipschitz continuous and monotone. It is clear that the equation \(F(x)=0\) has no solutions in H, and hence MVIP (1.1) has no solutions.
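The failure in Example 1.1 can be observed numerically; the script below (ours) checks that \(F(x)=\arctan(x)-\pi/2\) stays strictly negative and that the PGM iterates (with \(C=H=\mathbb{R}\), so the projection is the identity) drift off toward +∞ rather than converge.

```python
import math

# Example 1.1: F is monotone and 1-Lipschitz but F(x) < 0 everywhere,
# so F(x) = 0 (equivalently, MVIP (1.1) with C = R) has no solution.
F = lambda x: math.atan(x) - math.pi / 2.0

print(F(1e6) < 0)  # True: F approaches 0 as x -> +inf but never reaches it

# With C = H = R the projection is the identity, so PGM (1.2) reads
# x_{n+1} = x_n - mu * F(x_n); since -F(x) > 0, the iterates grow forever.
x = 0.0
for _ in range(1000):
    x -= 1.0 * F(x)
print(x)  # keeps growing without bound
```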
For a k-Lipschitz continuous and monotone operator F, even when MVIP (1.1) has a solution, the PGM (1.2) associated with F does not necessarily converge to the solution of MVIP (1.1).
Example 1.2
Let \(H=\mathbb{R}^{2}\), \(C=B=\{x\in H:\|x\|\le1\}\), \(B_{1}=\{x\in B:\|x\|\le\frac{1}{2}\}\), and \(B_{2}=\{x\in B:\frac{1}{2}\le\|x\|\le1\}\). If \(x=(a,b)\in H\), we write \(x^{\bot}=(b,-a)\in H\). Define \(F:C\to H\) by
Then, from Chidume and Mutangadura [17], we know that \(F:C\to H\) is a 6-Lipschitz continuous and monotone operator with the unique zero point \(\theta=(0,0)\in C\), but PGM (1.2) does not converge to θ for any \(\mu\in(0,1)\).
Example 1.2 tells us that PGM (1.2) is invalid for a k-Lipschitz continuous and monotone operator F, and therefore further modifications to PGM (1.2) are needed. In this regard, we recall the following known results.
In 1976, Korpelevich [18] made a kind of modification to PGM (1.2) by introducing the following extragradient method (EM):
$$ \textstyle\begin{cases} y_{n}=P_{C}(x_{n}-\lambda Fx_{n}), \\ x_{n+1}=P_{C}(x_{n}-\lambda Fy_{n}), \end{cases} $$(1.3)
for all \(n\geq1\), where \(\lambda\in(0,\frac{1}{k})\), C is a nonempty, closed, and convex subset of \(\mathbb{R}^{N}\) and F is a k-Lipschitz continuous and monotone operator of C into \(\mathbb{R}^{N}\), where N is a positive integer. Korpelevich proved that if \(VI(C, F)\) is nonempty, then the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) generated by EM (1.3) converge to the same point \(p\in VI(C, F)\), which is a solution to MVIP (1.1).
EM (1.3) contains two metric projections, and it is indeed a composite of two classical projected gradient steps. We also remark in passing that, in an infinite-dimensional Hilbert space, Korpelevich’s EM (1.3) has only weak convergence in general; moreover, it cannot be used to seek the minimum-norm solution of MVIP (1.1).
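The two-projection structure of EM can be sketched as follows. The skew operator F(a, b) = (−b, a) is our toy choice: it is monotone and 1-Lipschitz with \(VI(C,F)=\{(0,0)\}\) for \(C=H=\mathbb{R}^{2}\), so both projections reduce to the identity. On this operator plain PGM spirals outward, while EM contracts toward the solution.

```python
# Korpelevich's extragradient method (EM, 1.3) on a skew (rotation) field.
# Toy instance (ours): F(a, b) = (-b, a), monotone and 1-Lipschitz,
# with VI(C, F) = {(0, 0)} for C = H = R^2 (projections are identities).

def F(x):
    a, b = x
    return (-b, a)

def em_step(x, lam):
    fx = F(x)
    # Extrapolation step: y_n = P_C(x_n - lam * F(x_n))
    y = (x[0] - lam * fx[0], x[1] - lam * fx[1])
    fy = F(y)
    # Correction step: x_{n+1} = P_C(x_n - lam * F(y_n))
    return (x[0] - lam * fy[0], x[1] - lam * fy[1])

x = (1.0, 0.0)
for _ in range(100):
    x = em_step(x, 0.5)
norm = (x[0] ** 2 + x[1] ** 2) ** 0.5
print(norm)  # shrinks toward 0
```

A direct computation shows each EM step multiplies the norm by \(\sqrt{1-\lambda^{2}+\lambda^{4}}<1\) here, whereas one PGM step multiplies it by \(\sqrt{1+\mu^{2}}>1\).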
Recently, Xu and Xu [22] provided a general regularization method for solving MVIP (1.1), where the regularizer is a Lipschitz continuous and strongly monotone operator. They also introduced an iterative method as a discretization of the regularization method. They proved that both the regularization and the iterative methods converge in norm to a solution to MVIP (1.1) under some conditions.
Very recently, Iemoto et al. [23] studied a variational inequality for a hemicontinuous and monotone operator over the fixed point set of a strongly nonexpansive mapping in a real Hilbert space. They proposed an iterative algorithm and analyzed the weak convergence of the proposed algorithm.
On the other hand, the construction of fixed points for pseudocontractive mappings has been studied extensively by several authors since 1974. A good number of results have been reported recently; see, for instance, [24–30].
Question
Can the Lipschitz continuity assumptions be removed or weakened in the results mentioned above?
The purpose of this paper is to answer the question mentioned above. In order to realize this objective, we first establish a new existence and uniqueness theorem for MVIP (1.1), where \(F:C\to H\) is a hemicontinuous and strongly monotone operator. By using the established existence and uniqueness theorem, we then introduce an implicit method for seeking a solution of MVIP (1.1). We also introduce an explicit iterative method. We prove that both the implicit and the explicit iterative methods converge in norm to the same solution of MVIP (1.1). Some applications are also included.
The rest of the paper is organized as follows. Section 2 contains some necessary concepts and useful facts. The new existence and uniqueness theorem for MVIP (1.1) and several convergence results of the proposed algorithms are established in Section 3. Finally, in Section 4, we provide some applications to the constrained minimization problem for a convex and continuously Fréchet differentiable functional and the minimumnorm fixed point problem for a generalized Lipschitz continuous and pseudocontractive mapping.
Preliminaries
Throughout this paper, we will assume that H is a real Hilbert space with inner product \(\langle\cdot,\cdot\rangle\) and its induced norm \(\|\cdot\|\). Let C be a nonempty, closed, and convex subset of H. We denote the strong convergence and weak convergence of \(\{x_{n}\}\) to \(x\in H\) by \(x_{n}\to x\) and \(x_{n}\rightharpoonup x\), respectively. We use ℝ to denote the set of real numbers. Let \(T:C\to H\) be a mapping. We use \(\operatorname{Fix}(T)\) to denote the set of fixed points of T. We also denote by \(\mathfrak{D}(T)\) and \(\mathfrak{R}(T)\) the domain and range of T, respectively. The letter I stands for the identity mapping on H.
In what follows, we shall collect some important concepts, facts, and tools, which will be used in Section 3.
It is well known that the following equalities hold:
for all \(x,y\in H\).
Recall that the metric projection from a real Hilbert space H onto a nonempty, closed, and convex subset C of H is defined as follows: for each \(x\in H\), there exists a unique element \(P_{C}x\in C\) with the property
that is, for any \(x\in H\), \(\bar{x}=P_{C}x\) if and only if
Lemma 2.1
(see [11])
Let \(P_{C}\) be the metric projection from H onto a nonempty, bounded, closed, and convex subset C of a real Hilbert space H. Then the following conclusions hold true:

(C1)
Given \(x\in H\) and \(z\in C\). Then \(z=P_{C}x\) if and only if the inequality
$$ \langle x-z,y-z\rangle\le0,\quad \forall y\in C $$(2.4) holds.

(C2)
$$ \|P_{C}x-P_{C}y\|^{2}\le\langle P_{C}x-P_{C}y,x-y\rangle,\quad \forall x,y\in H, $$(2.5)
in particular, one has
$$ \|P_{C}x-P_{C}y\|\le\|x-y\|,\quad \forall x,y\in H. $$(2.6)
(C3)
$$ \bigl\|(I-P_{C})x-(I-P_{C})y \bigr\|^{2}\le \bigl\langle (I-P_{C})x-(I-P_{C})y, x-y \bigr\rangle , \quad \forall x,y \in H, $$(2.7)
in particular, one has
$$ \bigl\|(I-P_{C})x-(I-P_{C})y \bigr\|\le\|x-y\|,\quad \forall x,y\in H. $$(2.8)
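Properties (C1) and (2.6) of the metric projection can be checked numerically on the closed unit ball of \(\mathbb{R}^{2}\), whose projection has the closed form \(P_{C}x = x/\max\{1,\|x\|\}\); the instance below is our illustration.

```python
import math, random

def proj_ball(x):
    """Metric projection onto the closed unit ball of R^2."""
    n = math.hypot(x[0], x[1])
    return x if n <= 1.0 else (x[0] / n, x[1] / n)

random.seed(0)
x = (3.0, 4.0)
z = proj_ball(x)  # (0.6, 0.8)

# (C1): <x - z, y - z> <= 0 for all y in C (checked on random samples).
for _ in range(1000):
    r, t = random.random(), random.uniform(0, 2 * math.pi)
    y = (r * math.cos(t), r * math.sin(t))
    assert (x[0]-z[0])*(y[0]-z[0]) + (x[1]-z[1])*(y[1]-z[1]) <= 1e-12

# (2.6): the projection is nonexpansive.
u, v = (3.0, 4.0), (-2.0, 5.0)
pu, pv = proj_ball(u), proj_ball(v)
assert math.hypot(pu[0]-pv[0], pu[1]-pv[1]) <= math.hypot(u[0]-v[0], u[1]-v[1])
print(z)
```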
Recall that an operator \(F:C\to H\) is said to be
(i) monotone if \(\langle Fx-Fy, x-y\rangle\ge0\)
for all \(x,y\in C\);
(ii) η-strongly monotone if there exists an \(\eta>0\) such that \(\langle Fx-Fy, x-y\rangle\ge\eta\|x-y\|^{2}\)
for all \(x,y\in C\);
(iii) k-Lipschitz continuous if there exists a positive number k such that \(\|Fx-Fy\|\le k\|x-y\|\)
for all \(x,y\in C\);
(iv) generalized Lipschitz continuous if there exists a positive number c such that \(\|Fx-Fy\|\le c(1+\|x-y\|)\)
for all \(x,y\in C\);
(v) demicontinuous if \(Fx_{n}\rightharpoonup Fx\) as \(n\to\infty\), whenever \(x_{n}\to x\) for any \(\{x_{n}\}\subset C\) and \(x\in C\);
(vi) hemicontinuous if for any \(x,y\in C\) and \(z\in H\), the function \(t\mapsto\bigl\langle F(x+t(y-x)), z\bigr\rangle\)
of \([0,1]\) into ℝ is continuous;
(vii) a mapping \(T:C\to H\) is said to be pseudocontractive if \(\langle Tx-Ty, x-y\rangle\le\|x-y\|^{2}\)
for all \(x,y\in C\);
(viii) an operator \(A\subset H\times H\) is called maximal monotone if it is monotone and it is not properly contained in any other monotone operator. We denote by \(\mathfrak{G}(A)\) the graph of A.
It is well known that T is pseudocontractive if and only if \(F=IT\) is monotone.
Example 2.1
Let \(f:H\to\mathbb{R}\) be a convex and continuously Fréchet differentiable function. Then the gradient ∇f of f is maximal monotone and hemicontinuous.
It is clear that the following implication relation holds:
\(F:C\to H\) is kLipschitz continuous ⇒ F is continuous ⇒ F is demicontinuous ⇒ F is hemicontinuous, however, the converse relation does not hold true, which can be seen from the following counterexample.
Example 2.2
Consider the function \(\varphi:\mathbb{R}^{2}\to \mathbb{R}\) of two variables defined by \(\varphi(x,y)=xy^{2}(x^{2}+y^{4})^{-1}\) for \((x,y)\in\mathbb{R}^{2}\backslash\{(0,0)\}\) and \(\varphi(0,0)=0\). Then φ is hemicontinuous but not demicontinuous.
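A quick numerical check of Example 2.2 (our script): along any segment through the origin, \(\varphi(tx_{0},ty_{0})\to 0=\varphi(0,0)\), while along the parabola \((s^{2},s)\to(0,0)\) the value is constantly 1/2, so φ is not continuous at the origin.

```python
def phi(x, y):
    """phi(x, y) = x*y^2 / (x^2 + y^4), with phi(0, 0) = 0 (Example 2.2)."""
    return 0.0 if (x, y) == (0.0, 0.0) else x * y * y / (x * x + y ** 4)

# Along any line segment through the origin, t -> phi(t*x0, t*y0) is
# continuous at t = 0 (the behaviour hemicontinuity sees):
x0, y0 = 1.0, 1.0
vals = [phi(t * x0, t * y0) for t in (1e-2, 1e-4, 1e-6)]
print(vals)  # tends to phi(0, 0) = 0

# But along the parabola (s^2, s) -> (0, 0), phi is constantly 1/2,
# so phi is not continuous (hence not demicontinuous) at the origin:
print(phi(1e-6, 1e-3))  # 0.5
```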
If \(F:\mathfrak{D}(F)=H\to H\) is monotone, then F is demicontinuous if and only if it is hemicontinuous; see, for instance, [9].
We remark that if an operator \(F:C\to H\) is either k-Lipschitz continuous or has a bounded range \(\mathfrak{R}(F)\), then it is generalized Lipschitz continuous. However, a generalized Lipschitz continuous mapping may fail to be continuous.
Example 2.3
Consider the sign function \(F:\mathbb{R}\to \mathbb{R}\) defined by
Then F is generalized Lipschitz continuous but not continuous.
Lemma 2.2
(see [7])
Let \(A\subset H\times H\) be a maximal monotone operator and let \((x,y)\in H\times H\). If \(\langle x-a, y-b\rangle\ge0\)
for all \((a,b)\in\mathfrak{G}(A)\), then \((x,y)\in\mathfrak{G}(A)\).
Lemma 2.3
(see [7])
Let \(A\subset H\times H\) be a monotone operator. Then A is maximal monotone if and only if \(\mathfrak{R}(I+rA)=H\)
for all \(r>0\).
Let C be a nonempty, closed, and convex subset of H and \(x\in C\). Then we define the set \(N_{C}(x)\subset H\) by
$$ N_{C}(x)=\bigl\{z\in H: \langle z, y-x\rangle\le0, \forall y\in C\bigr\}. $$
Such a set \(N_{C}(x)\) is called the normal cone of C at x.
Lemma 2.4
(see [7])
Let \(F:C\to H\) be monotone and hemicontinuous and let \(N_{C}(x)\) denote the normal cone of C at \(x\in C\). Define
$$ Tx= \textstyle\begin{cases} Fx+N_{C}(x), & x\in C, \\ \emptyset, & x\notin C. \end{cases} $$(2.17)
Then \(T\subset H\times H\) is maximal monotone; moreover, \(T^{-1}0=VI(C,F)\).
From Lemma 2.4, we know that \(VI(C,F)\) is closed and convex, since \(T^{-1}0\) is closed and convex.
Lemma 2.5
Let \(F:C\rightarrow H\) be a hemicontinuous monotone operator and \(x^{*}\in C\). Then, the following variational inequalities are equivalent:

(i)
\(\langle Fx,x-x^{*} \rangle\ge0\), \(\forall x \in C\);

(ii)
\(\langle Fx^{*},x-x^{*} \rangle\ge0\), \(\forall x \in C\).
From Lemma 2.5, we also deduce that \(VI(C,F)\) is closed convex.
Lemma 2.6
(see [31])
Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers satisfying
$$ a_{n+1}\le(1-\gamma_{n})a_{n}+\gamma_{n}\sigma_{n},\quad n\ge0, $$
where \(\{ {\gamma_{n}}\} \subset(0,1)\) and \(\{ {\sigma_{n}}\} \) satisfy

(i)
\(\sum_{n = 0}^{\infty}{\gamma_{n}} = \infty\);

(ii)
either \(\limsup_{n \to\infty}\sigma_{n} \le0\) or \(\sum_{n = 0}^{\infty}\vert \gamma_{n}\sigma_{n}\vert < \infty\).
Then \(\lim_{n\to\infty}a_{n}=0\).
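Lemma 2.6 can be illustrated numerically; the parameter choices \(\gamma_{n}=\sigma_{n}=1/n\) below are our toy instance (they satisfy \(\sum\gamma_{n}=\infty\) and \(\limsup\sigma_{n}\le0\)).

```python
# Numerical illustration of Lemma 2.6: for nonnegative a_n with
#   a_{n+1} <= (1 - g_n) * a_n + g_n * s_n,
# sum g_n = infinity and limsup s_n <= 0 force a_n -> 0.
# Toy choice (ours): g_n = 1/n, s_n = 1/n, with equality in the recursion.
a = 1.0
for n in range(1, 100001):
    g, s = 1.0 / n, 1.0 / n
    a = (1.0 - g) * a + g * s
print(a)  # close to 0 (roughly ln(N)/N for N = 1e5)
```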
Now we are in a position to state and prove the main results of this paper.
Main results
In this section we first establish a new existence and uniqueness theorem for MVIP (1.1) with F being a hemicontinuous and η-strongly monotone operator. We then introduce two kinds of algorithms (one implicit and the other explicit) for solving MVIP (1.1).
Theorem 3.1
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(F:C\to H\) be a hemicontinuous and η-strongly monotone operator. Let \(y\in H\) be an arbitrary but fixed element. Then the monotone variational inequality: find \(x^{*}\in C\) such that
$$ \bigl\langle Fx^{*}-y, x-x^{*}\bigr\rangle \ge0,\quad \forall x\in C, $$(3.1)
has a unique solution.
Proof
Let T be defined by (2.17). Then \(T\subset H\times H\) is maximal monotone by Lemma 2.4. By taking \(r_{n}>0\) with \(r_{n}\to0\) as \(n\to\infty\), in view of Lemma 2.3, we have \(\mathfrak{R}(r_{n}I+T)=H\) for all \(n\ge1\). Consequently, for any \(y\in H\), there exists \(x_{n}\in C\) such that
for all \(n\ge1\). We plan to prove that \(\{x_{n}\}\) is bounded. To this end, using (2.17) and (3.2), we get
for all \(n\ge1\), in particular, we have
Then we can write them by
Since F is ηstrongly monotone, \(z_{n}\in N_{C}(x_{n})\) and \(z_{1}\in N_{C}(x_{1})\), from (3.5) and (3.6) we have
from which it turns out that
for all \(n\ge1\). Inequality (3.7) shows that \(\{x_{n}\}\) is bounded. Without loss of generality, we may assume that \(x_{n}\rightharpoonup x^{*}\) as \(n\to\infty\). Since T is monotone and \(y-r_{n}x_{n}\in Tx_{n}\), we have
for all \((a,b)\in\mathfrak{G}(T)\) and all \(n\ge1\). Letting \(n\to\infty\) and taking the limit in (3.8) yield
for all \((a,b)\in\mathfrak{G}(T)\). By Lemma 2.2, we conclude that \((x^{*},y)\in\mathfrak{G}(T)\), that is, \(y\in Tx^{*}\), and hence we have \(y-Fx^{*}\in N_{C}(x^{*})\) by (2.17), i.e., \(\langle Fx^{*}-y, x-x^{*}\rangle\ge0\), \(\forall x\in C\); that is, \(x^{*}\) is a solution of (3.1). We have proven that the monotone variational inequality (3.1) has a solution. We next prove that this solution is unique. Suppose \(x^{**}\) is another solution of (3.1); then \(\langle Fx^{**}-y, x-x^{**}\rangle\ge0\), \(\forall x\in C\). Thus we have \(\langle Fx^{*}-y, x^{**}-x^{*}\rangle\ge0\) and \(\langle Fx^{**}-y, x^{*}-x^{**}\rangle\ge0\). Adding up these two inequalities yields \(\langle Fx^{*}-Fx^{**}, x^{**}-x^{*}\rangle\ge0\), from which one derives \(x^{**}=x^{*}\) by the strong monotonicity of F. This completes the proof. □
From Theorem 3.1, we deduce immediately the following important result.
Corollary 3.1
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(F:C\to H\) be a hemicontinuous and η-strongly monotone operator. Then the MVIP (1.1) has a unique solution.
Let \(F:C\to H\) be hemicontinuous and monotone and \(R:H\to H\) be hemicontinuous and η-strongly monotone. Choose arbitrarily a point \(u\in H\) and a sequence of positive numbers \(\{\gamma_{n}\}\) with \(\gamma_{n}\to0\) as \(n\to\infty\). Then the operators \(\gamma_{n}(R-u)+F\) are hemicontinuous and \(\eta\gamma_{n}\)-strongly monotone for all \(n\ge1\). By using Theorem 3.1, we conclude that the variational inequality problem
has a unique solution \(y_{n}\in C\) for every fixed \(n\ge1\); this defines a sequence \(\{y_{n}\}\subset C\).
Take \(\gamma_{n}=\frac{{{\alpha_{n}}}}{{{\beta_{n}}}}\) in (3.9). Then (3.9) yields
and hence
It follows from Lemma 2.1(C1) that
Theorem 3.2
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(F:C\to H\) be a hemicontinuous monotone operator and \(R:H\to H\) be a hemicontinuous and η-strongly monotone operator. Let \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) be two sequences in \((0,1]\) that satisfy the following condition:
Assume that \(VI(C,F)\neq\emptyset\). Then the sequence \(\{y_{n}\}\) generated by (3.12) converges in norm to \(x^{*} = P_{VI(C,F)}((I-R)x^{*}+u)\); in particular, if we take \(R=I\) and \(u=0\) in (3.12), then the sequence \(\{y_{n}\}\) generated by (3.12) converges in norm to the minimum-norm solution to MVIP (1.1).
Proof
Since \(VI(C,F)\) is nonempty, and since it is also closed and convex (by the remark following Lemma 2.4), \(P_{VI(C,F)}h\) is well defined for any \(h\in H\). Let \(\{y_{n}\}\) be defined by (3.12). Then (3.12) is equivalent to (3.10). By Lemma 2.5, we have
for all \(n\ge1\). For any \(x^{*}\in VI(C,F)\), we have
By Lemma 2.5, we have
Taking \(x=y_{n}\) in (3.15), we have
for all \(n\ge1\).
Taking \(x=x^{*}\) in (3.10), we have
for all \(n\ge1\).
By using (3.16), (3.17), and the strong monotonicity of F, we get
from which it turns out that
in particular,
for all \(n\ge1\).
Inequality (3.19) shows that \(\{y_{n}\}\) is bounded. Without loss of generality, we may assume that \(y_{n}\rightharpoonup\hat{y}\) as \(n\to\infty\); then \(\hat{y}\in VI(C,F)\). Indeed, using (3.13), we have
Letting \(n\to\infty\) and taking the limit in (3.20) yield
It follows from (3.21) and Lemma 2.5 that
that is, \(\hat{y}\in VI(C,F)\).
Replace \(x^{*}\) by \(\hat{y}\) in (3.18) to yield
for all \(n\ge1\).
Use \(y_{n}\rightharpoonup\hat{y}\) and (3.22) to conclude that \(y_{n}\to \hat{y}\) as \(n\to\infty\).
In view of (3.18), we have
for all \(n\ge1\).
Letting \(n\to\infty\) and taking the limit in (3.23) yield
which implies that
Theorem 3.1 tells us that \(\hat{y}\) is the unique solution of (3.24), which ensures that the whole sequence \(\{y_{n}\}\) converges in norm to \(\hat{y}\) as \(n\to\infty\). Moreover, it follows from Lemma 2.1(C1) and (3.24) that \(\hat{y}=P_{VI(C,F)}[(I-R)\hat{y}+u]\). This completes the proof. □
We next introduce an explicit iterative method for solving MVIP (1.1).
It is natural to consider the following iteration method that generates a sequence \(\{x_{n}\}\) according to the recursion:
where the initial guess \(x_{1}\in C\) and \(u\in H\) are selected arbitrarily, while \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are two sequences of positive numbers in \((0, 1]\).
Theorem 3.3
Let \(F:C\to H\) be a hemicontinuous, generalized Lipschitz continuous, and monotone operator. Let \(R:H\to H\) be a hemicontinuous, generalized Lipschitz continuous, and η-strongly monotone operator. Assume that \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are two sequences in \((0,1]\) that satisfy the following conditions:

(i)
\(\frac{{{\alpha_{n}}}}{{{\beta_{n}}}} \to0\), \(\frac{{{\beta _{n}}^{2}}}{{{\alpha_{n}}}} \to0\) as \(n \to\infty\);

(ii)
\({\alpha_{n}} \to0\) as \(n\to\infty\), \(\sum_{n = 1}^{\infty}{{\alpha_{n}}} = \infty\);

(iii)
\(\frac{ \vert \alpha_{n} - \alpha_{n-1} \vert + \vert \beta_{n} - \beta_{n-1} \vert }{\alpha_{n}^{2}} \to0\) as \(n \to\infty\).
Assume that \(VI(C,F)\neq\emptyset\). Then the sequence \(\{x_{n}\}\) generated by (3.25) converges in norm to \(x^{*} = P_{VI(C,F)}((I-R)x^{*}+u)\).
Proof
Let \(\{y_{n}\}\) be defined by (3.12). By using Theorem 3.2, we know that \(\{y_{n}\}\) converges in norm to \(x^{*} = P_{VI(C,F)}((I-R)x^{*}+u)\). It is sufficient to show that \(x_{n+1}-y_{n}\to0\) as \(n\to\infty\). To this end, we first show that \(\{x_{n}\}\) is bounded.
For any \(p\in VI(C,F)\), we have
By using (3.25), (3.26), (2.1), and Lemma 2.1(C2), we have
Observe that
Since both F and R are generalized Lipschitz continuous, we have
and
Observe also that
In view of conditions (i) and (ii), without loss of generality, we may assume that
for all \(n\ge1\).
Substitute (3.28)-(3.32) into (3.27) and simplify to yield
for all \(n\ge1\). We have shown that \(\{x_{n}\}\) is bounded. Next, we shall prove that \(x_{n+1}-y_{n}\to0\) as \(n\to\infty\).
Since R is η-strongly monotone and F is monotone, we have
and
for all \(n\ge1\). Since both R and F are generalized Lipschitz continuous, we have
and
for all \(n\ge1\).
By using (3.12), Lemma 2.1(C2), the strong monotonicity of R, and the monotonicity of F, we have
which implies that
where M is a positive constant such that \(M\ge\max\{\|Ry_{n-1}\|+\|u\|, \|Fy_{n-1}\|\}\).
In view of conditions (i) and (ii), without loss of generality, we may assume that
for all \(n\ge1\).
By using (3.12), (3.25), (2.1), Lemma 2.1(C2), and (3.33)-(3.39), we get
where we have used conditions (i)-(iii) and \(M_{1}\) is a fixed positive number.
By Lemma 2.6, we conclude that \(\Vert x_{n+1} - y_{n} \Vert \to0\) as \(n \to\infty\), which means that \(\{x_{n}\}\) converges in norm to \(x^{*} = P_{VI(C,F)}((I-R)x^{*}+u)\). This completes the proof. □
Remark 3.1
Theorem 3.3 extends Theorem 3.1 of Xu and Xu [22] to the more general case; moreover, the choice of the iterative parameter sequences \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) does not depend on the generalized Lipschitz constants of R and F.
Remark 3.2
Choose the sequences \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) such that
where \(a<\frac{b+1}{2}\), \(0< b<a\), and \(a<2b\). Then it is clear that conditions (i)-(iii) of Theorem 3.3 are satisfied.
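One natural pair of sequences satisfying these constraints is the power-law choice \(\alpha_{n}=n^{-a}\), \(\beta_{n}=n^{-b}\) (our illustrative choice); the sketch below checks conditions (i)-(iii) numerically for \(a=0.7\), \(b=0.5\).

```python
# Sanity check of conditions (i)-(iii) of Theorem 3.3 for the power-law
# choice alpha_n = n**(-a), beta_n = n**(-b) (our illustrative choice),
# with a = 0.7, b = 0.5 satisfying a < (b+1)/2, 0 < b < a < 2b.
a, b = 0.7, 0.5
alpha = lambda n: n ** (-a)
beta = lambda n: n ** (-b)

for n in (10 ** 3, 10 ** 6):
    r1 = alpha(n) / beta(n)               # condition (i), first limit
    r2 = beta(n) ** 2 / alpha(n)          # condition (i), second limit
    r3 = (abs(alpha(n) - alpha(n - 1))    # condition (iii)
          + abs(beta(n) - beta(n - 1))) / alpha(n) ** 2
    print(n, r1, r2, r3)                  # all three shrink as n grows
```

Here \(r_{1}=n^{b-a}\), \(r_{2}=n^{a-2b}\), and \(r_{3}\sim n^{2a-b-1}\), so the exponent constraints above are exactly what makes all three ratios vanish.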
Applications
In this section, we give some applications of the results established in Section 3.
Consider the constrained convex minimization problem
$$ \min_{x\in C}\varphi(x), $$(4.1)
where C is a closed convex subset of a real Hilbert space H and \(\varphi:H\to\mathbb{R}\) is a real-valued convex function. Assume that φ is continuously Fréchet differentiable with a generalized Lipschitz continuous gradient:
$$ \bigl\|\nabla\varphi(x)-\nabla\varphi(y)\bigr\|\le c\bigl(1+\|x-y\|\bigr) $$(4.2)
for all \(x,y\in H\), where c is a positive constant.
It is well known that the minimization problem (4.1) is equivalent to the following variational inequality problem: find \(x^{*}\in C\) such that
$$ \bigl\langle \nabla\varphi(x^{*}), x-x^{*}\bigr\rangle \ge0,\quad \forall x\in C. $$(4.3)
From Example 2.1, we know that \(\nabla\varphi:H\to H\) is maximal monotone and hemicontinuous.
By virtue of Theorem 3.3, we can deduce the following convergence result.
Theorem 4.1
Assume that (4.1) has a solution. Let \(\{ {x_{n}}\} \) be generated by the following recursion:
where \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are two sequences in \((0,1]\) that satisfy conditions (i)-(iii) in Theorem 3.3. Then \(\{x_{n}\}\) converges in norm to the minimum-norm solution of the constrained minimization problem (4.1).
Proof
Apply Theorem 3.3 to the case where \(F=\nabla\varphi\), \(R=I\), and \(u=0\) to get the conclusion. □
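A minimal numerical sketch of Theorem 4.1 (all concrete choices below are ours): we assume the recursion (4.4) is the regularized projected-gradient step \(x_{n+1}=P_{C}[x_{n}-\beta_{n}\nabla\varphi(x_{n})-\alpha_{n}x_{n}]\), i.e., the \(F=\nabla\varphi\), \(R=I\), \(u=0\) instance of (3.25), and run it on \(\varphi(x,y)=\frac{1}{2}(x-2)^{2}\) over \(C=[-1,1]^{2}\). The solution set of (4.1) is \(\{1\}\times[-1,1]\), whose minimum-norm element is \((1,0)\).

```python
# Theorem 4.1 sketch (our toy instance): regularized projected gradient
#     x_{n+1} = P_C[ x_n - beta_n * grad_phi(x_n) - alpha_n * x_n ]
# for phi(x, y) = (x - 2)**2 / 2 on C = [-1, 1]^2.  Minimizers over C
# form the segment {1} x [-1, 1]; the regularization term -alpha_n * x_n
# selects its minimum-norm element (1, 0).

def clip(t):
    """Coordinatewise projection onto [-1, 1]."""
    return max(-1.0, min(1.0, t))

grad_phi = lambda x, y: (x - 2.0, 0.0)

x, y = -0.5, 0.9
for n in range(1, 20001):
    alpha, beta = n ** -0.7, n ** -0.5   # satisfy conditions (i)-(iii)
    gx, gy = grad_phi(x, y)
    x = clip(x - beta * gx - alpha * x)
    y = clip(y - beta * gy - alpha * y)
print(x, y)  # approaches the minimum-norm solution (1, 0)
```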
Finally, we apply our results to the minimum-norm fixed point problem for pseudocontractive mappings.
Theorem 4.2
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\to C\) be a generalized Lipschitz continuous, hemicontinuous, and pseudocontractive mapping with \(\operatorname{Fix}(T)\neq\emptyset\). Assume that \(\{\alpha_{n}\}\) and \(\{\beta_{n}\} \) are two sequences in \((0,1]\) that satisfy the following conditions:

(i)
\(\frac{{{\alpha_{n}}}}{{{\beta_{n}}}} \to0\), \(\frac{{{\beta _{n}}^{2}}}{{{\alpha_{n}}}} \to0\) as \(n \to\infty\);

(ii)
\({\alpha_{n}} \to0\) as \(n\to\infty\), \(\sum_{n = 1}^{\infty}{{\alpha_{n}}} = \infty\);

(iii)
\(\frac{ \vert \alpha_{n} - \alpha_{n-1} \vert + \vert \beta_{n} - \beta_{n-1} \vert }{\alpha_{n}^{2}} \to0\) as \(n \to\infty\).
Then the sequence \(\{x_{n}\}\) generated by
converges in norm to \(x^{*} = P_{\operatorname{Fix}(T)}u\); in particular, if we take \(u=0\) in (4.5), then the sequence \(\{x_{n}\}\) generated by (4.5) converges in norm to the minimum-norm fixed point of T.
Proof
Write \(F=I-T\). Then (4.5) reduces to the following iterative algorithm:
Since \(T:C\to C\) is a generalized Lipschitz continuous, hemicontinuous, and pseudocontractive mapping, we deduce that F is a generalized Lipschitz continuous, hemicontinuous, and monotone operator. By Lemma 2.1(C1), noting that \(T:C\to C\) is a self-mapping, we see that \(VI(C,F)=\operatorname{Fix}(P_{C}T)=\operatorname{Fix}(T)\ne\emptyset\) by our assumption. Apply Theorem 3.3 to the case where \(F=I-T\) and \(R=I\) to derive the desired conclusion. □
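A matching sketch for Theorem 4.2 (toy instance ours): with \(F=I-T\), \(R=I\), \(u=0\) we assume the (3.25)-type step \(x_{n+1}=P_{C}[x_{n}-\beta_{n}(x_{n}-Tx_{n})-\alpha_{n}x_{n}]\) and apply it to \(T(x)=|x|\) on \(C=[-1,1]\), which is pseudocontractive (one checks directly that \(I-T\) is monotone) with \(\operatorname{Fix}(T)=[0,1]\) and minimum-norm fixed point 0.

```python
# Theorem 4.2 sketch (our toy instance): with F = I - T, R = I, u = 0,
# iterate  x_{n+1} = P_C[ x_n - beta_n*(x_n - T(x_n)) - alpha_n*x_n ].
# T(x) = |x| on C = [-1, 1] is pseudocontractive; Fix(T) = [0, 1],
# and its minimum-norm element is 0.

def clip(t):
    """Projection onto C = [-1, 1]."""
    return max(-1.0, min(1.0, t))

T = abs

x = -0.5
for n in range(1, 20001):
    alpha, beta = n ** -0.7, n ** -0.5   # satisfy conditions (i)-(iii)
    x = clip(x - beta * (x - T(x)) - alpha * x)
print(x)  # driven into Fix(T), then to its minimum-norm element 0
```

The gradient-type term \(-\beta_{n}(x_{n}-Tx_{n})\) pushes the iterate into the fixed point set, after which the regularization term \(-\alpha_{n}x_{n}\) slides it toward the element of smallest norm.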
Corollary 4.1
Let C be a nonempty, bounded, and closed convex subset of a real Hilbert space H. Let \(T:C\to C\) be a hemicontinuous and pseudocontractive mapping. Let \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) be the same as in Theorem 4.2. Then the sequence \(\{x_{n}\}\) generated by (4.5) converges in norm to \({x^{*}} = {P_{\operatorname{Fix}(T)}}u\).
Proof
Write \(F=I-T\). Then \(F:C\to H\) is a generalized Lipschitz continuous, hemicontinuous, and monotone operator. Indeed, since C is bounded, we see that there exists a positive number c such that \(\|Fx-Fy\|\le c\bigl(1+\|x-y\|\bigr)\)
for all \(x,y\in C\). Since \(T:C\to C\) is a hemicontinuous and pseudocontractive mapping, we also know that F is a hemicontinuous and monotone operator. By Minty [15], we know that \(VI(C,F)\ne \emptyset\). Since \(VI(C,F)=\operatorname{Fix}(T)\), we have \(\operatorname{Fix}(T)\ne\emptyset\). Apply Theorem 4.2 to derive the desired conclusion. This completes the proof. □
Remark 4.1
Theorem 4.2 and Corollary 4.1 improve and extend the main results of [24–26] and [29] in the sense that the mapping T under consideration is only assumed to be hemicontinuous and generalized Lipschitz continuous.
Remark 4.2
Our main results presented in this paper can be used to solve the split feasibility problem and the split equality problem; see, for instance, [30–39] for the details.
Conclusion
We studied a class of monotone variational inequality problems (MVIPs) for generalized Lipschitz continuous, hemicontinuous, and monotone operators defined on a nonempty, closed, and convex subset of a real Hilbert space. Firstly, by using maximal monotone operator theory, we established a new existence and uniqueness theorem for a variational inequality problem with a hemicontinuous and strongly monotone operator. Then, by virtue of the existence and uniqueness theorem, we introduced an implicit method and analyzed its strong convergence. We also introduced an explicit iterative method as a discretization of the implicit method. We proved that both the implicit and the explicit methods converge in norm to the same solution of the MVIP. Finally, we applied our main results to the constrained minimization problem and the minimum-norm fixed point problem for generalized Lipschitz continuous, hemicontinuous, and pseudocontractive mappings.
References
 1.
Kinderlehrer, D, Stampacchia, G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)
 2.
Baiocchi, C, Capelo, A: Variational and Quasivariational Inequalities: Applications to Free Boundary Problems. WileyInterscience, New York (1984)
 3.
Giannessi, F, Maugeri, A: Variational Inequalities and Network Equilibrium Problems. Plenum Press, New York (1985)
 4.
Zeidler, E: Nonlinear Functional Analysis and Its Applications. III. Springer, New York (1985)
 5.
Zeidler, E: Nonlinear Functional Analysis and Its Applications. II/B. Springer, New York (1990)
 6.
Takahashi, W: Nonlinear Functional Analysis: Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama (2000)
 7.
Takahashi, W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
 8.
Cegielski, A: Iterative Methods for Fixed Point Problems in Hilbert Spaces. Springer, Berlin (2012)
 9.
Deimling, K: Nonlinear Functional Analysis. Springer, Berlin (1985)
 10.
Agarwal, R, O’Regan, D, Sahu, DR: Fixed Point Theory for LipschitzianType Mappings with Applications. Springer, Berlin (2009)
 11.
Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge (1990)
 12.
Minty, GJ: On the maximal domain of a ‘monotone’ function. Mich. Math. J. 8, 135-157 (1961)
 13.
Minty, GJ: On the monotonicity of the gradient of a convex function. Pac. J. Math. 14, 243-247 (1967)
 14.
Minty, GJ: Monotone (nonlinear) operators in Hilbert spaces. Duke Math. J. 29, 341-346 (1962)
 15.
Minty, GJ: On variational inequalities for monotone operators. Adv. Math. 30, 1-7 (1978)
 16.
Browder, FE: Nonlinear monotone operators and convex sets in Banach spaces. Bull. Am. Math. Soc. 71, 780-785 (1965)
 17.
Chidume, CE, Mutangadura, SA: An example on the Mann iteration method for Lipschitz pseudocontractions. Proc. Am. Math. Soc. 129, 2359-2363 (2001)
 18.
Korpelevich, GM: The extragradient method for finding saddle points and other problems. Matecon 12, 747-756 (1976)
 19.
Yamada, I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In: Butnariu, D, Censor, Y, Reich, S (eds.) Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, pp. 473-504. North-Holland, Amsterdam (2000)
 20.
Iemoto, S, Takahashi, W: Strong convergence theorems by a hybrid steepest descent method for countable nonexpansive mappings in Hilbert spaces. Sci. Math. Jpn. (Online) e-2008, 557-570 (2008)
 21.
Chen, RD, Su, YF, Xu, HK: Regularization and iteration methods for a class of monotone variational inequalities. Taiwan. J. Math. 13, 739-752 (2009)
 22.
Xu, XB, Xu, HK: Regularization and iterative methods for monotone variational inequalities. Fixed Point Theory Appl. 2010, Article ID 765206 (2010)
 23.
Iemoto, S, Hishinuma, K, Iiduka, H: Approximate solutions to variational inequality over the fixed point set of a strongly nonexpansive mapping. Fixed Point Theory Appl. 2014, Article ID 51 (2014)
 24.
Ishikawa, S: Fixed points by a new iteration method. Proc. Am. Math. Soc. 44, 147-150 (1974)
 25.
Bruck, RE Jr.: A strongly convergent iterative solution of \(0\in Ux\) for a maximal monotone operator U in Hilbert spaces. J. Math. Anal. Appl. 48, 114-126 (1974)
 26.
Schu, J: Approximating fixed points of Lipschitzian pseudocontractive mappings. Houst. J. Math. 19, 107-115 (1993)
 27.
Chidume, CE, Zegeye, H: Approximate fixed point sequences and convergence theorems for Lipschitz pseudocontractive maps. Proc. Am. Math. Soc. 132, 831-840 (2004)
 28.
Zegeye, H, Shahzad, N: An algorithm for a common fixed point of a family of pseudocontractive mappings. Fixed Point Theory Appl. 2013, Article ID 234 (2013)
 29.
Yao, YH, et al.: Construction of minimumnorm fixed points of pseudocontractions in Hilbert spaces. J. Inequal. Appl. 2014, Article ID 206 (2014)
 30.
Zhou, HY, Wang, PY: A new iteration method for variational inequalities on set of common fixed points for a finite family of quasipseudocontractions in Hilbert spaces. J. Inequal. Appl. 2014, Article ID 218 (2014)
 31.
Liu, LS: Ishikawa and Mann iteration process with errors for nonlinear strongly accretive mappings in Banach spaces. J. Math. Anal. Appl. 194, 114-125 (1995)
 32.
Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
 33.
Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)
 34.
Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
 35.
Chen, RD, Li, JL, Ren, YJ: Regularization method for the approximate split equality problem in infinitedimensional Hilbert spaces. Abstr. Appl. Anal. 2013, Article ID 813635 (2013)
 36.
Zhou, HY, Wang, PY, Zhou, Y: Minimumnorm fixed point of nonexpansive mappings with applications. Optimization (2013). doi:10.1080/02331934.2013.811667
 37.
Zhou, HY, Wang, PY: A simpler explicit iterative algorithm for a class of variational inequalities in Hilbert spaces. J. Optim. Theory Appl. (2013). doi:10.1007/s10957-013-0470-x
 38.
Zhou, HY, Wang, PY: Adaptively relaxed algorithms for solving the split feasibility problem with a new step size. J. Inequal. Appl. 2014, Article ID 448 (2014)
 39.
Zhou, HY, Zhou, Y, Feng, GH: Iterative methods for seeking minimumnorm solutions of monotone variational inequality problems with applications in Hilbert spaces. J. Inequal. Appl. (submitted)
Acknowledgements
This research was supported by the National Natural Science Foundation of China (11071053).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Cite this article
Zhou, H., Zhou, Y. & Feng, G. Iterative methods for solving a class of monotone variational inequality problems with applications. J Inequal Appl 2015, 68 (2015). https://doi.org/10.1186/s13660-015-0590-y
MSC
 41A65
 47H17
 47J20
Keywords
 monotone variational inequality problem
 minimumnorm solution
 iterative method
 strong convergence
 Hilbert space