Gap functions and error bounds for random generalized variational inequality problems
Journal of Inequalities and Applications volume 2016, Article number: 47 (2016)
Abstract
This paper is devoted to the study of gap functions for random generalized variational inequality problems in a fuzzy environment. Further, by using the residual vector, we compute error bounds for random generalized variational inequality problems, and we generalize the regularized gap function proposed by Fukushima. Furthermore, we study various properties of the generalized regularized gap function for random fuzzy mappings and derive global error bounds for random generalized variational inequality problems. Our results are new and generalize a number of known results on generalized variational inequality problems with fuzzy mappings.
1 Introduction
The theory of gap functions was introduced for the study of convex optimization problems and subsequently applied to variational inequality problems. One of the classical approaches in the analysis of a variational inequality problem is to transform it into an equivalent optimization problem via the notion of a gap function. Recently, some effort has been made to develop gap functions for various classes of variational inequality problems; see for example [1–18]. Besides this, gap functions have also turned out to be very useful in designing new globally convergent algorithms, in analyzing the rate of convergence of some iterative methods, and in deriving error bounds.
The fuzzy set theory introduced by Zadeh [19] in 1965 has emerged as an interesting and fascinating branch of pure and applied sciences. Applications of fuzzy set theory can be found in control engineering and in optimization problems of the mathematical sciences. In the recent past, variational inequalities in the setting of fuzzy mappings have been introduced and studied; they are closely related to fuzzy optimization and decision-making problems. In recent years, many efforts have been made to reformulate variational inequality problems and optimization problems in the setting of fuzzy mappings. As a result, variational inequality problems have been generalized and extended in various directions using novel techniques of fuzzy theory.
In 1989, Chang and Zhu [20] introduced the concept of variational inequalities for fuzzy mappings. Since then, many authors have studied various classes of variational inequalities for fuzzy mappings, addressing existence results, iterative algorithms, etc.; see for example [20–26]. The concept of a random fuzzy mapping was first introduced by Huang [27] while studying a new class of random multi-valued nonlinear generalized variational inclusions. For some related work, we refer to [27–31]. Recently, Dai [28] introduced a new class of generalized mixed variational-like inequalities for random fuzzy mappings, established an existence theorem for the auxiliary problem, and analyzed a new iterative algorithm for finding the solution of generalized mixed variational-like inequality problems.
Gap functions have turned out to be very useful in deriving error bounds, which provide a measure of the distance between the solution set and an arbitrary point. Error bounds play an important role not only in sensitivity analysis but also in the convergence analysis of iterative algorithms for solving variational inequality problems. It is therefore of interest to investigate error bounds for gap functions associated with various variational inequalities. For some related work, we refer to [1, 3–18]. To the best of our knowledge, gap functions and error bounds have not yet been studied in a fuzzy environment.
The purpose of this paper is to study gap functions for random generalized variational inequalities in a random fuzzy environment. Error bounds for random generalized variational inequalities are then established in terms of the residual vector. Further, by using the generalized regularized gap function, we obtain global error bounds for solutions of random generalized variational inequalities with and without a Lipschitz continuity assumption. Finally, we give concluding remarks.
2 Preliminaries and basic formulations
Throughout this paper, let H be a real Hilbert space, whose inner product and norm are denoted by \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|\), respectively. Let \({\mathcal{F}}(H)\) be the collection of all fuzzy sets over H. A mapping \(T:H\to{\mathcal{F}}(H)\) is called a fuzzy mapping on H. If T is a fuzzy mapping on H, then \(T(x)\) (denoted by \(T_{x}\) in the sequel) is a fuzzy set on H and \(T_{x}(y)\) is the membership degree of y in \(T_{x}\). Let \(A \in{\mathcal{F}}(H)\), \(\alpha\in[0, 1]\). Then the set \((A)_{\alpha}= \{x\in H : A(x)\geq\alpha\}\) is called the α-cut set of A.
In this paper, we denote by \((\Omega,\Sigma)\) a measurable space, where Ω is a set and Σ is a σ-algebra of subsets of Ω and also we denote by \({\mathcal{B}}(H)\), \(2^{H}\), \(\operatorname{CB}(H)\), and \(H({\cdot}, {\cdot})\) the class of Borel σ-fields in H, the family of all nonempty subsets of H, the family of all nonempty closed bounded subsets of H, and the Hausdorff metric on \(\operatorname{CB}(H)\), respectively.
Definition 2.1
[27]
A mapping \(x :\Omega\to H\) is said to be measurable if for any \(B\in{\mathcal{B}}(H)\), \(\{t\in\Omega:x(t)\in B\}\in \Sigma\).
Definition 2.2
[27]
A mapping \(f :\Omega\times H\to H\) is called a random operator if for any \(x\in H\), \(f (t, x) = x(t)\) is measurable. A random operator f is said to be continuous if for any \(t\in\Omega\), the mapping \(f (t,\cdot) : H\to H\) is continuous.
Definition 2.3
[27]
A multi-valued mapping \(T:\Omega\to2^{H}\) is said to be measurable if for any \(B\in{\mathcal{B}}(H)\), \(T^{-1}(B)=\{ t\in\Omega:T(t)\cap B\neq\emptyset\}\in\Sigma\).
Definition 2.4
[27]
A mapping \(w :\Omega\to H\) is called a measurable selection of a multi-valued measurable mapping \(T :\Omega\to 2^{H}\) if w is measurable and for any \(t\in\Omega\), \(w(t)\in T (t)\).
Definition 2.5
[27]
A mapping \(T :\Omega\times H\to2^{H}\) is called a random multi-valued mapping if for any \(x\in H\), \(T (\cdot, x)\) is measurable. A random multi-valued mapping \(T :\Omega\times H\to \operatorname{CB}(H)\) is said to be H-continuous if for any \(t\in\Omega\), \(T (t,\cdot)\) is continuous in the Hausdorff metric.
Definition 2.6
[27]
A fuzzy mapping \(T : \Omega\to{\mathcal {F}}(H)\) is called measurable, if for any \(\alpha\in(0, 1]\), \((T(\cdot))_{\alpha}:\Omega\to2^{H}\) is a measurable multi-valued mapping.
Definition 2.7
[27]
A fuzzy mapping \(T : \Omega\times H\to {\mathcal{F}}(H)\) is called a random fuzzy mapping, if for any \(x\in H\), \(T(\cdot, x) :\Omega\to{\mathcal{F}}(H)\) is a measurable fuzzy mapping.
Clearly, random fuzzy mappings include multi-valued mappings, random multi-valued mappings, and fuzzy mappings as special cases.
Let \(\hat{T}:\Omega\times H\to{\mathcal{F}}(H)\) be a random fuzzy mapping satisfying the following condition:
-
(I):
there exists a mapping \(a: H\to[0,1]\) such that
$$({\hat{T}}_{t,x})_{a(x)}\in \operatorname{CB}(H),\quad \forall(t,x)\in\Omega\times H. $$
By using the random fuzzy mapping T̂, we can define a random multi-valued mapping T as follows:
$$T:\Omega\times H\to \operatorname{CB}(H),\qquad T(t,x)=({\hat{T}}_{t,x})_{a(x)},\quad \forall(t,x)\in\Omega\times H. $$
T is called the random multi-valued mapping induced by the random fuzzy mapping T̂.
Let \(a:H\to[0,1]\) be a mapping, let the random fuzzy mapping \(\hat{T}:\Omega\times H\to{\mathcal{F}}(H)\) satisfy the condition (I), and let \(h:\Omega\times H\to H\) be a random operator with \(\operatorname{Im}(h)\cap \operatorname{dom}(\partial\phi)\neq\emptyset\). We consider the following random generalized variational inequality problem (for short, RGVIP): Find measurable mappings \(x,w:\Omega\to H\) such that for all \(t\in\Omega\), \(y(t)\in H\),
$$ w(t)\in T\bigl(t,x(t)\bigr),\qquad \bigl\langle w(t), y(t)-h\bigl(t,x(t)\bigr) \bigr\rangle +\phi\bigl(y(t)\bigr)-\phi\bigl(h\bigl(t,x(t)\bigr)\bigr)\geq0, $$(2.1)
where ∂ϕ denotes the sub-differential of a proper, convex, and lower semicontinuous function \(\phi:H\to\mathbb{R}\cup\{+\infty\}\) with its effective domain being closed.
The pair of measurable mappings \((x,w)\) is called a random solution of the RGVIP (2.1).
Special cases:
-
(I)
If a is the zero operator and \(T:H\to H \) is a single-valued mapping, then the RGVIP (2.1) reduces to the generalized mixed variational inequality problem, denoted by GMVIP, which consists in finding \(x\in H\) such that
$$ \bigl\langle Tx,y-h(x) \bigr\rangle +\phi(y)-\phi\bigl(h(x)\bigr)\geq0,\quad \forall y\in H, $$(2.2)
which was studied by Solodov [14]. He introduced three gap functions for the GMVIP (2.2) and, by using these, obtained error bounds.
-
(II)
If \(h(x)=x\), ∀x, then problem GMVIP (2.2) reduces to mixed variational inequality problem, denoted by MVIP, which consists in finding \(x\in H\) such that
$$ \langle Tx, y-x \rangle+\phi(y)-\phi(x)\geq0, \quad \forall y\in H, $$(2.3)
which was considered by Tang and Huang [15]. They introduced two regularized gap functions for the MVIP (2.3) and studied their differentiability properties.
-
(III)
If the function \(\phi(\cdot)\) is an indicator function of a closed set K in H, then problem MVIP (2.3) reduces to a classical variational inequality problem, denoted by VIP, which consists in finding \(x\in K\) such that
$$ \langle Tx, y-x\rangle\geq0,\quad \forall y\in K, $$(2.4)
which was studied in [1, 3, 4, 6, 11, 15], where local and global error bounds for the VIP (2.4) were derived in terms of regularized gap functions and D-gap functions.
First of all, we recall the following well-known results and concepts.
For the VIP (2.4), it is well known that \(x\in K\) is a solution if and only if
$$x=P_{K}[x-\theta Tx], $$
where \(P_{K}\) is the orthogonal projector onto K and \(\theta>0\) is arbitrary. The norm of \(x-P_{K}[x-\theta Tx]\) is commonly known as the natural residual vector.
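In finite dimensions this characterization is easy to check numerically. The following sketch (our illustration, not part of the paper) solves a small VIP on the box \(K=[0,1]^{2}\) by the projection iteration and verifies that the natural residual vector vanishes at the computed solution; the operator, data, and step size are hypothetical choices.

```python
import numpy as np

# Sketch of the characterization x = P_K[x - theta * T x]: the natural
# residual r(x) = x - P_K[x - theta * T x] vanishes exactly at solutions
# of the VIP (2.4). All data below are illustrative.

def project_box(z, lo=0.0, hi=1.0):
    """Orthogonal projection onto the box K = [lo, hi]^n."""
    return np.clip(z, lo, hi)

def natural_residual(x, T, theta=1.0):
    return x - project_box(x - theta * T(x))

A = np.array([[2.0, 0.5], [0.5, 2.0]])   # symmetric positive definite => T strongly monotone
b = np.array([-1.0, 3.0])
T = lambda x: A @ x + b

# Projection iteration x <- P_K[x - 0.1 * T x]; a contraction here because
# the step 0.1 is small relative to the eigenvalues of A.
x = np.zeros(2)
for _ in range(2000):
    x = project_box(x - 0.1 * T(x))

print(np.linalg.norm(natural_residual(x, T)))  # ~0: x solves the VIP
```

Any \(\theta>0\) may be used in the residual; the fixed point is the same.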
Throughout this paper, unless otherwise stated, the set of minimizers of a function \(f :H\to\mathbb{R}\cup\{+\infty\}\) over a set \(Y\subset H\) is denoted by \(\arg\min_{y\in Y}f(y)\). For a convex function \(\phi: H \to\mathbb{R}\cup\{+\infty\}\) with effective domain \(\operatorname{dom} \phi=\{x\in H: \phi(x)<+\infty\}\), the sub-differential at \(x\in H\) is denoted by \(\partial\phi(x) = \{w\in H: \phi(y)\geq\phi(x) +\langle w, y - x\rangle, \forall y\in H\}\), and a point \(w\in\partial\phi(x)\) is called a sub-gradient of ϕ at x.
Motivated by the proximal map given in [32], we give a similar characterization for the RGVIP (2.1) in the random fuzzy environment by defining the mapping \(P_{\theta(t)}^{\phi}:\Omega\times H\to\operatorname{dom} \phi\) as
$$P_{\theta(t)}^{\phi,x}(t,z)=\arg\min_{y(t)\in H} \biggl\{ \phi\bigl(y(t)\bigr)+{1\over 2\theta(t)}\bigl\Vert y(t)-z(t)\bigr\Vert ^{2} \biggr\} , $$
where \(\theta:\Omega\to(0,+\infty)\) is a measurable function; this is the so-called proximal mapping in H for a random fuzzy mapping. Note that the objective function above is proper and strongly convex. Since \(\operatorname{dom}(\phi)\) is closed, \(P_{\theta(t)}^{\phi,x}(t,z)\) is well defined and single-valued. For any measurable function \(\theta:\Omega\to (0,+\infty)\), define the residual vector
$$R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr)=h(t,x)-P_{\theta(t)}^{\phi,x}\bigl[h(t,x)-\theta(t) w(t)\bigr]. $$
We next show that \(R_{\theta(t)}^{\phi}(t,x(t))\) plays the role of natural residual vector in random fuzzy mapping for the RGVIP (2.1).
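For concrete choices of ϕ the proximal map often has a closed form, which makes the residual vector directly computable. As a hypothetical illustration (not from the paper), take \(\phi(y)=\|y\|_{1}\), whose proximal map is the soft-thresholding operator; the sketch below fixes t and evaluates the residual at two hand-picked points \(h(x)\) and w.

```python
import numpy as np

# For phi(y) = ||y||_1 the proximal map
#   P_theta^phi(z) = argmin_y { phi(y) + ||y - z||^2 / (2 theta) }
# is componentwise soft-thresholding, so the residual vector
#   R(x) = h(x) - P_theta^phi[h(x) - theta * w]
# can be evaluated directly. All data below are illustrative.

def prox_l1(z, theta):
    """Proximal map of the l1-norm: componentwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def residual(h_x, w, theta):
    return h_x - prox_l1(h_x - theta * w, theta)

# R = 0 exactly when -w is a sub-gradient of phi at h(x); here
# the sub-differential of the l1-norm at 0 is [-1, 1]^3, so any w
# with max |w_i| <= 1 gives a zero residual.
print(residual(np.zeros(3), np.array([0.3, -0.8, 1.0]), 0.5))               # [0. 0. 0.]
print(residual(np.array([1.0, 0.0, 0.0]), np.array([0.2, 0.0, 0.0]), 0.5))  # nonzero
```

The second point has a nonzero residual because \(-w\notin\partial\phi(h(x))\) there, matching Lemma 2.1 below.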
Lemma 2.1
For any measurable function \(\theta:\Omega\to (0,+\infty)\) and for each \(t\in\Omega\), the measurable mapping \(x:\Omega \to H\) is a solution of the RGVIP (2.1) if and only if \(R_{\theta (t)}^{\phi}(t,x(t))=0\).
Proof
Let \(R_{\theta(t)}^{\phi}(t,x(t))=0\), which implies that \(h(t,x)=P_{\theta(t)}^{\phi,x}[h(t,x)-\theta(t) w(t)]\). It is equivalent to \(h(t,x)=\arg\min_{y(t)\in H}\{\phi(y(t))+{1\over 2\theta (t)}\|y(t)-(h(t,x)-\theta(t) w(t))\|^{2}\}\). By the optimality conditions (which are necessary and sufficient, by convexity), the latter is equivalent to
$$0\in\partial\phi\bigl(h(t,x)\bigr)+{1\over \theta(t)}\bigl[h(t,x)-\bigl(h(t,x)-\theta(t) w(t)\bigr)\bigr], $$
which implies \(-w(t)\in\partial\phi(h(t,x))\). This in turn is equivalent, by the definition of the sub-gradient, to
$$\phi\bigl(y(t)\bigr)\geq\phi\bigl(h(t,x)\bigr)+\bigl\langle -w(t), y(t)-h(t,x)\bigr\rangle ,\quad \forall y(t)\in H, $$
which implies that \(x(t)\) solves the RGVIP (2.1). Since each step above is an equivalence, the converse also holds.
This completes the proof. □
Definition 2.8
A random multi-valued operator \(T: \Omega \times H \to \operatorname{CB}(H)\) is said to be strongly h-monotone, if there exists a measurable function \(\alpha:\Omega\to(0,+\infty)\) such that
$$\bigl\langle w_{1}(t)-w_{2}(t), h\bigl(t,x_{1}(t)\bigr)-h\bigl(t,x_{2}(t)\bigr)\bigr\rangle \geq\alpha(t)\bigl\Vert x_{1}(t)-x_{2}(t)\bigr\Vert ^{2}, $$
for all \(t\in\Omega\), \(x_{1}(t),x_{2}(t)\in H\), \(w_{1}(t)\in T(t,x_{1}(t))\), and \(w_{2}(t)\in T(t,x_{2}(t))\).
Definition 2.9
A random operator \(h:\Omega\times H\to H\) is said to be Lipschitz continuous, if there exists a measurable function \(L:\Omega\to(0,+\infty)\) such that
$$\bigl\Vert h\bigl(t,x_{1}(t)\bigr)-h\bigl(t,x_{2}(t)\bigr)\bigr\Vert \leq L(t)\bigl\Vert x_{1}(t)-x_{2}(t)\bigr\Vert ,\quad \forall t\in\Omega, x_{1}(t),x_{2}(t)\in H. $$
Definition 2.10
A random multi-valued mapping \(T:\Omega \times H\to \operatorname{CB}(H)\) is said to be Ĥ-Lipschitz continuous, if there exists a measurable function \(\lambda:\Omega\to(0,+\infty)\) such that
$$H\bigl(T\bigl(t,x_{1}(t)\bigr),T\bigl(t,x_{2}(t)\bigr)\bigr)\leq\lambda(t)\bigl\Vert x_{1}(t)-x_{2}(t)\bigr\Vert ,\quad \forall t\in\Omega, x_{1}(t),x_{2}(t)\in H, $$
where \(H(\cdot,\cdot)\) is the Hausdorff metric on \(\operatorname{CB}(H)\).
Now, we give the following lemmas.
Lemma 2.2
[21]
Let \(T:\Omega\times H\to \operatorname{CB}(H)\) be a Ĥ-Lipschitz continuous random multi-valued mapping, then for measurable mapping \(x:\Omega\to H\), the multi-valued mapping \(T(\cdot,x(\cdot)):\Omega\to \operatorname{CB}(H)\) is measurable.
Lemma 2.3
[21]
Let \(T_{1}, T_{2}:\Omega\to \operatorname{CB}(H)\) be two measurable multi-valued mappings, \(\epsilon> 0\) be a constant, and \(w_{1}:\Omega\to H\) be a measurable selection of \(T_{1}\); then there exists a measurable selection \(w_{2}:\Omega\to H\) of \(T_{2}\) such that for all \(t\in\Omega\),
$$\bigl\Vert w_{1}(t)-w_{2}(t)\bigr\Vert \leq(1+\epsilon)H\bigl(T_{1}(t),T_{2}(t)\bigr). $$
Definition 2.11
A function \(G: H\to\mathbb{R}\) is said to be a gap function for the RGVIP (2.1), if it satisfies the following properties:
-
(i)
\(G(x)\geq0\), \(\forall x\in H\);
-
(ii)
\(G(x^{*}) = 0\), if and only if \(x^{*}\in H\) solves the RGVIP (2.1).
Lemma 2.4
Let \(x:\Omega\to H\) be a measurable mapping and \(\theta:\Omega\to(0,+\infty)\) be a measurable function; then for each \(t\in\Omega\) and all \(x(t)\in H\), \(\Vert R_{\theta(t)}^{\phi}(t,x(t))\Vert \) is a gap function for the RGVIP (2.1).
Proof
It is clear that \(\|R_{\theta(t)}^{\phi}(t,x(t))\|\geq0\) for each \(t\in\Omega\), \(x(t)\in H\) and \(\|R_{\theta(t)}^{\phi}(t,x^{*}(t))\|= 0\) if and only if \(h(t,x^{*})=P_{\theta(t)}^{\phi,x^{*}}[h(t,x^{*})-\theta(t) w(t)]\). By the definition of \(P_{\theta(t)}^{\phi,x^{*}}\), \(P_{\theta (t)}^{\phi,x^{*}}[h(t,x^{*})-\theta(t) w(t)]\) satisfies
Hence,
It follows that, for all \(y(t)\in H\),
Thus, \(x^{*}(t)\) solves the RGVIP (2.1).
This completes the proof. □
Next we give conditions under which the RGVIP (2.1) has a unique solution.
Lemma 2.5
Let \((\Omega, \Sigma)\) be a measurable space and H be a real Hilbert space. Suppose that \(h:\Omega\times H\to H\) is a random mapping and \(\phi:H\to\mathbb{R}\cup\{+\infty\}\) is an extended real-valued function. Let the random fuzzy mapping \({\hat{T}}:\Omega \times H\to{\mathcal{F}}(H)\) satisfy the condition (I), and let the random multi-valued mapping \(T:\Omega\times H\to \operatorname{CB}(H)\) induced by the random fuzzy mapping T̂ be strongly h-monotone with measurable function \(\alpha:\Omega\to(0,+\infty)\); then the RGVIP (2.1) has a unique solution.
Proof
Suppose that the measurable mappings \(x_{1},x_{2}:\Omega\to H\) are two solutions of the RGVIP (2.1) with \(x_{1}(t)\neq x_{2}(t)\) for some \(t\in\Omega\). Then we have
Taking \(y(t) =h(t,x_{2}(t))\) in (2.5) and \(y(t) =h(t,x_{1}(t))\) in (2.6), adding the resultants, we have
Since T is strongly h-monotone with measurable function \(\alpha :\Omega\to(0,+\infty)\), therefore
which implies that \(x_{1}(t) =x_{2}(t)\) for all \(t\in\Omega\), proving the uniqueness of the solution of the RGVIP (2.1).
This completes the proof. □
Now, by using the natural residual vector \(R_{\theta(t)}^{\phi}(t,x(t))\), we derive error bounds for the solution of the RGVIP (2.1).
Theorem 2.1
Suppose that for each \(t\in\Omega\), \(x_{0}(t)\in H\) is a solution of the RGVIP (2.1). Let \((\Omega, \Sigma)\) be a measurable space and H be a real Hilbert space. Let the random fuzzy mapping \({\hat{T}}:\Omega\times H\to{\mathcal{F}}(H)\) satisfy the condition (I) and \(T:\Omega\times H\to \operatorname{CB}(H)\) be the random multi-valued mapping induced by the random fuzzy mapping T̂. Let \(h:\Omega\times H\to H\) be a random mapping and \(\phi:H\to\mathbb{R}\cup\{+\infty\}\) be an extended real-valued function such that
-
(i)
for each \(t\in\Omega\), the measurable mapping T is strongly h-monotone and Ĥ-Lipschitz continuous with the measurable functions \(\alpha, \lambda:\Omega\to(0,+\infty)\), respectively;
-
(ii)
for each \(t\in\Omega\), the mapping \(h(t,\cdot)\) is Lipschitz continuous with the measurable function \(L:\Omega\to(0,+\infty)\);
-
(iii)
if there exists a measurable function \(K:\Omega\to (0,+\infty)\) such that
$$\bigl\Vert P_{\theta(t)}^{\phi,x}(z)-P_{\theta(t)}^{\phi,x_{0}}(z) \bigr\Vert \leq K(t)\bigl\Vert x(t)-x_{0}(t)\bigr\Vert ,\quad \forall x(t),x_{0}(t),z(t)\in H, $$
then for any \(x(t)\in H\), \(t\in\Omega\) and \(\theta(t)>{K(t)L(t)\over \alpha(t)-K(t)\lambda(t)(1+\epsilon)}\), we have
Proof
Let for each \(t\in\Omega\), \(x_{0}(t)\in H\) be a solution of the RGVIP (2.1), then
Substituting \(y(t)=P_{\theta(t)}^{\phi,x_{0}}[h(t,x(t))-\theta(t) w(t)]\) in the above inequality, we have
For any fixed \(x(t)\in H\) and measurable function \(\theta:\Omega\to (0,+\infty)\), we observe that
which is equivalent to
By the definition of a sub-differential, we have
Taking \(y(t)=h(t,x_{0}(t))\) in the above we get
This implies that
Adding (2.7) and (2.8), we get
This also can be written as
which implies that
By using the strong h-monotonicity of T, we get
Also the above inequality can be written as
By using the Cauchy-Schwarz inequality along with the triangular inequality, we have
Now using the Ĥ-Lipschitz continuity of T, the Lipschitz continuity of h, and assumption (iii) on \(P_{\theta(t)}^{\phi,x}(\cdot)\), we have
The above can be written again as
Therefore, we have
where \(\theta(t)>{K(t)L(t)\over \alpha(t)-K(t)\lambda(t)(1+\epsilon )}\).
This completes the proof. □
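The flavor of this bound can be checked numerically in the deterministic special case h = identity and ϕ the indicator of a box, where the RGVIP reduces to the VIP (2.4) and \(R_{\theta}^{\phi}\) becomes the natural residual. The sketch below (our illustration; the operator and constants are hypothetical) samples random points and confirms that the ratio \(\|x-x_{0}\|/\|R(x)\|\) stays bounded.

```python
import numpy as np

# Empirical check of an error bound ||x - x0|| <= C * ||R(x)|| for a strongly
# monotone affine T over K = [0,1]^2 (deterministic special case of (2.1)).

A = np.array([[3.0, 1.0], [1.0, 3.0]])       # A positive definite: T strongly monotone
b = np.array([0.5, -2.0])
T = lambda x: A @ x + b
P = lambda z: np.clip(z, 0.0, 1.0)           # projection onto K = [0,1]^2
theta = 0.5
R = lambda x: x - P(x - theta * T(x))        # residual vector

x0 = np.zeros(2)                              # solve the VIP by fixed-point iteration
for _ in range(5000):
    x0 = P(x0 - 0.1 * T(x0))

rng = np.random.default_rng(0)
ratios = []
for _ in range(200):
    x = rng.uniform(-1.0, 2.0, size=2)        # points inside and outside K
    r = np.linalg.norm(R(x))
    if r > 1e-12:
        ratios.append(np.linalg.norm(x - x0) / r)
print(max(ratios))  # stays bounded, as the theorem predicts
```

For strongly monotone Lipschitz operators the constant can be bounded explicitly in terms of the monotonicity and Lipschitz moduli, mirroring the constants in the theorem.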
3 Generalized regularized gap functions for RGVIP (2.1)
Fukushima [4] considered the regularized gap function \(g_{\theta}: H\to \mathbb{R}\) defined by
$$g_{\theta}(x)=\max_{y\in K} \biggl\{ \langle Tx, x-y \rangle-{\theta\over 2}\Vert x-y\Vert ^{2} \biggr\} . $$
The function \(g_{\theta}\) constitutes an equivalent constrained optimization reformulation of the VIP (2.4) in the sense that \(g_{\theta}(x)\geq0\) for all \(x\in K\), and x solves the VIP (2.4) if and only if x minimizes \(g_{\theta}\) on K with \(g_{\theta}(x) = 0\). The regularized gap function was further extended by Wu et al. [18]. They considered the function \(G_{\theta}: H\to\mathbb{R}\) defined by \(G_{\theta}(x)=\max_{y\in K}\{ \langle Tx, x-y \rangle-\theta F(x,y)\}\).
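With the classical choice \(F(x,y)={1\over 2}\|x-y\|^{2}\), Wu et al.'s \(G_{\theta}\) reduces to Fukushima's regularized gap function, and the inner maximization has the closed-form maximizer \(y^{*}=P_{K}(x-Tx/\theta)\). The following sketch (our illustration; T, K, and θ are hypothetical choices) evaluates it exactly.

```python
import numpy as np

# Regularized gap function g_theta(x) = max_{y in K} { <Tx, x-y> - (theta/2)||x-y||^2 }.
# The maximand is strongly concave in y; its maximizer over K is
# y* = P_K(x - Tx / theta), which gives a closed-form value.

def gap(x, T, P, theta):
    y_star = P(x - T(x) / theta)
    d = x - y_star
    return T(x) @ d - 0.5 * theta * (d @ d)

A = np.array([[2.0, 0.0], [0.0, 2.0]])
b = np.array([-1.0, -1.0])
T = lambda x: A @ x + b
P = lambda z: np.clip(z, 0.0, 1.0)             # K = [0,1]^2
theta = 1.0

x_sol = np.array([0.5, 0.5])                   # T(x_sol) = 0, so x_sol solves the VIP
print(gap(x_sol, T, P, theta))                 # 0.0 at the solution
print(gap(np.array([1.0, 0.0]), T, P, theta))  # positive away from it
```

This is exactly the property that makes \(g_{\theta}\) usable as an optimization reformulation: it is nonnegative on K and vanishes precisely at solutions.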
Wu et al. [18] showed that \(G_{\theta}\) constitutes an equivalent constrained differentiable optimization reformulation of the VIP (2.4). Motivated and inspired by the work of [4, 7, 12, 17, 18] in this direction, in this section we construct a generalized regularized gap function \(G_{\theta(t)} :\Omega\times H\to\mathbb{R}\) associated with the RGVIP (2.1), defined by
$$G_{\theta(t)}\bigl(x(t)\bigr)=\max_{y(t)\in H}\Psi_{\theta(t)}\bigl(x(t),y(t)\bigr)=\Psi_{\theta(t)}\bigl(x(t),\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr), $$
where
$$\Psi_{\theta(t)}\bigl(x(t),y(t)\bigr)=\bigl\langle w(t), h\bigl(t,x(t)\bigr)-y(t)\bigr\rangle +\phi\bigl(h\bigl(t,x(t)\bigr)\bigr)-\phi\bigl(y(t)\bigr)-\theta(t)F\bigl(h\bigl(t,x(t)\bigr),y(t)\bigr), $$
\(\pi_{\theta(t)}(x(t))\) denotes the unique minimizer of \(-\Psi _{\theta(t)}(x(t), \cdot)\) on H for each \(t\in\Omega\), and the function \(F :H\times H\to\mathbb{R}\) satisfies the following conditions:
-
(A1)
F is continuously differentiable on \(H\times H\);
-
(A2)
F is nonnegative on \(H\times H\);
-
(A3)
\(F(x(t), \cdot)\) is strongly convex uniformly in \(x(t)\), for each \(t\in\Omega\), i.e., there exists a measurable function \(\beta:\Omega\to(0,+\infty)\) such that, for any \(t\in\Omega\), \(x(t)\in H\),
$$\begin{aligned}& F\bigl(x(t),y_{1}(t)\bigr)-F\bigl(x(t),y_{2}(t)\bigr) \\& \quad \geq \bigl\langle \nabla_{2}F\bigl(x(t),y_{2}(t)\bigr), y_{1}(t)-y_{2}(t) \bigr\rangle +\beta(t)\bigl\Vert y_{1}(t)-y_{2}(t)\bigr\Vert ^{2},\quad \forall y_{1}(t),y_{2}(t)\in H, \end{aligned}$$where \(\nabla_{2}F\) denotes the partial differential of F with respect to the second variable of F;
-
(A4)
\(F(x(t), y(t)) = 0\) if and only if \(x (t)= y(t)\), \(\forall t\in\Omega\);
-
(A5)
\(\nabla_{2}F(x(t),\cdot)\) is uniformly Lipschitz continuous, i.e., there exists a measurable function \(\gamma:\Omega\to (0,+\infty)\) such that, for any \(t\in\Omega\), \(x(t)\in H\),
$$\bigl\Vert \nabla_{2}F\bigl(x(t),y_{1}(t)\bigr)- \nabla_{2}F\bigl(x(t),y_{2}(t)\bigr)\bigr\Vert \leq\gamma(t) \bigl\Vert y_{1}(t)-y_{2}(t)\bigr\Vert ,\quad \forall y_{1}(t),y_{2}(t)\in H. $$
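A canonical example satisfying (A1)-(A5) is \(F(x,y)=\|x-y\|^{2}\), for which \(\nabla_{2}F(x,y)=2(y-x)\), \(\beta(t)\equiv1\), and \(\gamma(t)\equiv2\). The sketch below (our illustration) spot-checks (A2)-(A5) at random points; (A1) holds since F is a polynomial.

```python
import numpy as np

# Spot-check that F(x, y) = ||x - y||^2 satisfies (A2)-(A5) with beta = 1,
# gamma = 2. Here grad_2 F(x, y) = 2 (y - x) and the quadratic expansion of
# F(x, .) is exact, so (A3) holds with equality.

F = lambda x, y: float((x - y) @ (x - y))
g2 = lambda x, y: 2.0 * (y - x)              # gradient in the second argument

rng = np.random.default_rng(1)
x, y1, y2 = rng.normal(size=(3, 4))          # three random points in R^4

a2 = F(x, y1) >= 0.0                                               # (A2)
a3 = F(x, y1) - F(x, y2) >= g2(x, y2) @ (y1 - y2) \
     + (y1 - y2) @ (y1 - y2) - 1e-9                                # (A3), beta = 1
a4 = F(x, x) == 0.0                                                # (A4)
a5 = np.linalg.norm(g2(x, y1) - g2(x, y2)) \
     <= 2.0 * np.linalg.norm(y1 - y2) + 1e-9                       # (A5), gamma = 2
print(a2, a3, a4, a5)  # True True True True
```

With this choice of F the generalized regularized gap function recovers the classical regularized gap function, which is why (A1)-(A5) are natural abstractions of \({1\over2}\|x-y\|^{2}\).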
Lemma 3.1
Suppose F satisfies (A1)-(A4). Then for each \(t\in\Omega\), \(\nabla_{2}F(x(t), y(t)) = 0\) if and only if \(x(t) = y(t)\), \(\forall t\in\Omega\).
Proof
Suppose that for each \(t\in\Omega\), \(\nabla_{2}F(x(t), y(t)) = 0\). Then (A3) implies that \(y(t)\) is the unique minimizer of \(F(x(t),\cdot)\). Since \(F(x(t),\cdot)\) always has the unique minimizer \(x(t)\) by (A2) and (A4), it follows that \(x(t)=y(t)\). Conversely, suppose \(x(t)=y(t)\). Then, by (A4), we have \(F(x(t),y(t))=0\). It follows from (A2) that \(y(t)\) is a global minimizer of \(F(x(t),\cdot)\). Hence, we have \(\nabla _{2}F(x(t), y(t)) = 0\).
This completes the proof. □
Lemma 3.2
Suppose that the function F satisfies (A3). Then, for any \(t\in\Omega\), \(y_{1}(t), y_{2}(t)\in H\), we have
$$\bigl\langle \nabla_{2}F\bigl(x(t),y_{1}(t)\bigr)-\nabla_{2}F\bigl(x(t),y_{2}(t)\bigr), y_{1}(t)-y_{2}(t)\bigr\rangle \geq2\beta(t)\bigl\Vert y_{1}(t)-y_{2}(t)\bigr\Vert ^{2}; $$
that is, \(\nabla_{2}F(x(t),\cdot)\) is strongly monotone with modulus \(2\beta (t)\) on H.
Proof
For any \(t\in\Omega\), \(y_{1}(t), y_{2}(t)\in H\), it follows from (A3) that
$$F\bigl(x(t),y_{1}(t)\bigr)-F\bigl(x(t),y_{2}(t)\bigr)\geq\bigl\langle \nabla_{2}F\bigl(x(t),y_{2}(t)\bigr), y_{1}(t)-y_{2}(t)\bigr\rangle +\beta(t)\bigl\Vert y_{1}(t)-y_{2}(t)\bigr\Vert ^{2} $$
and
$$F\bigl(x(t),y_{2}(t)\bigr)-F\bigl(x(t),y_{1}(t)\bigr)\geq\bigl\langle \nabla_{2}F\bigl(x(t),y_{1}(t)\bigr), y_{2}(t)-y_{1}(t)\bigr\rangle +\beta(t)\bigl\Vert y_{1}(t)-y_{2}(t)\bigr\Vert ^{2}. $$
Adding the above inequalities together, we get the required result.
This completes the proof. □
Lemma 3.3
Suppose that the function F satisfies (A1)-(A5) with the associated measurable functions \(\beta,\gamma:\Omega \to(0,+\infty)\). Then we have
$$F\bigl(x(t),y(t)\bigr)\leq\bigl(\gamma(t)-\beta(t)\bigr)\bigl\Vert x(t)-y(t)\bigr\Vert ^{2},\quad \forall t\in\Omega, x(t),y(t)\in H. $$
Proof
From (A3), taking \(y_{1}(t)=x(t)\) and \(y_{2}(t)=y(t)\), we have
$$F\bigl(x(t),x(t)\bigr)-F\bigl(x(t),y(t)\bigr)\geq\bigl\langle \nabla_{2}F\bigl(x(t),y(t)\bigr), x(t)-y(t)\bigr\rangle +\beta(t)\bigl\Vert x(t)-y(t)\bigr\Vert ^{2}. $$
By using (A4), the above can be written as
$$ -F\bigl(x(t),y(t)\bigr)\geq\bigl\langle \nabla_{2}F\bigl(x(t),y(t)\bigr), x(t)-y(t)\bigr\rangle +\beta(t)\bigl\Vert x(t)-y(t)\bigr\Vert ^{2}. $$(3.2)
From (A5), we have \(\|\nabla_{2}F(x(t),x(t))-\nabla_{2}F(x(t),y(t))\|\leq \gamma(t)\|x(t)-y(t)\|\); since \(\nabla_{2}F(x(t),x(t))=0\) by Lemma 3.1, it can be written as
$$ \bigl\Vert \nabla_{2}F\bigl(x(t),y(t)\bigr)\bigr\Vert \leq\gamma(t)\bigl\Vert x(t)-y(t)\bigr\Vert . $$(3.3)
From inequalities (3.2) and (3.3), we have
This completes the proof. □
Lemma 3.4
Suppose that the function F satisfies (A1)-(A4). Then, for each \(t\in\Omega\), the measurable mapping \(x:\Omega \to H\) is a solution of the RGVIP (2.1) if and only if \(h(t,x(t))=\pi _{\theta(t)}(x(t))\).
Proof
For any \(x(t)\in H\), since \(\pi_{\theta(t)}(x(t))\) minimizes \(-\Psi_{\theta}(x(t), \cdot)\) on H, and \(-\Psi_{\theta}(x(t), \cdot)\) is convex, we have
By the definition of the sub-gradient,
If \(h(t,x(t))=\pi_{\theta(t)}(x(t))\) then, by Lemma 3.1, we see that \(x(t)\) is a solution of the RGVIP (2.1).
Conversely, if \(x(t)\) is a solution of the RGVIP (2.1), then taking \(y(t)=\pi_{\theta(t)}(x(t))\) in (2.1), we obtain
On the other hand, since \(h(t,x(t))\in H\), it follows from (3.4) that
Adding the above two inequalities, we have
By using the strong convexity of \(F(h(t,x(t)),\cdot)\), together with (A2) and (A4), we have
From the last two inequalities, we get \(h(t,x(t))=\pi_{\theta (t)}(x(t))\).
This completes the proof. □
Theorem 3.1
Suppose that for each \(t\in\Omega\), \(x_{0}(t)\in H\) is a solution of the RGVIP (2.1). Assume that for each \(t\in\Omega\), the random multi-valued mapping T is strongly h-monotone and Ĥ-Lipschitz continuous with measurable functions \(\alpha,\lambda :\Omega\to(0,+\infty)\), respectively. If \(h:\Omega\times H\to H\) is Lipschitz continuous with measurable function \(L:\Omega\to(0,+\infty)\) and the function F satisfies (A1)-(A5), then
Proof
Since \(x_{0}(t)\) is a solution of the RGVIP (2.1) and \(\pi _{\theta(t)}(x(t))\in H\) for every \(x(t)\in H\), we have
On the other hand, taking \(y(t)=h(t,x_{0}(t))\) in (3.4)
Adding the two inequalities (3.5) and (3.6), we have
By Lemmas 3.1, 3.2, and (A5) we have
By using the Lipschitz continuity of h, we have
From the inequalities (3.7) and (3.8), we have
Now, from the strong h-monotonicity of T, we have
From the Ĥ-Lipschitz continuity of T and then from inequalities (3.9), we have
Therefore,
This completes the proof. □
Theorem 3.2
Suppose that the function F satisfies (A1)-(A4). Then, for each \(t\in\Omega\) and all \(x(t)\in H\), we have
$$G_{\theta(t)}\bigl(x(t)\bigr)\geq\theta(t)\beta(t)\bigl\Vert h\bigl(t,x(t)\bigr)-\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr\Vert ^{2}\geq0. $$
In particular, \(G_{\theta(t)}(x(t)) = 0\) if and only if \(x(t)\) solves the RGVIP (2.1).
Proof
For any \(x(t)\in H\), taking \(y(t) = h(t,x(t))\) in (3.4), we have
Therefore, we have
From (A3), we have
From (A4), we have
The second assertion is obvious by Lemma 3.4.
This completes the proof. □
From Theorem 3.1 and Theorem 3.2, we can easily get the following global error bound for the RGVIP (2.1).
Theorem 3.3
Suppose that for each \(t\in\Omega\), \(x_{0}(t)\in H\) is a solution of the RGVIP (2.1). Assume that, for each \(t\in\Omega\), the random multi-valued mapping T is strongly h-monotone and Ĥ-Lipschitz continuous with measurable functions \(\alpha,\lambda :\Omega\to(0,+\infty)\), respectively. If \(h:\Omega\times H\to H\) is Lipschitz continuous with measurable function \(L:\Omega\to(0,+\infty)\) and the function F satisfies (A1)-(A5), then \(G_{\theta(t)}(x(t))\) provides a global error bound for the RGVIP (2.1).
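In the deterministic special case h = identity, ϕ the indicator of K, and \(F(x,y)={1\over2}\|x-y\|^{2}\), a global error bound of this kind says the distance to the solution is controlled by \(\sqrt{G_{\theta}(x)}\). A numeric sketch (our illustration; all data hypothetical):

```python
import numpy as np

# Empirical check of a global error bound ||x - x0|| <= c * sqrt(G_theta(x))
# for a strongly monotone affine T on K = [0,1]^2, with F(x, y) = ||x - y||^2 / 2.

A = np.array([[3.0, 1.0], [1.0, 3.0]])       # A positive definite, smallest eigenvalue 2
b = np.array([-2.0, -2.0])
T = lambda x: A @ x + b
P = lambda z: np.clip(z, 0.0, 1.0)           # projection onto K = [0,1]^2
theta = 1.0

def gap(x):
    y_star = P(x - T(x) / theta)             # maximizer for this choice of F
    d = x - y_star
    return T(x) @ d - 0.5 * theta * (d @ d)

x0 = np.linalg.solve(A, -b)                  # (0.5, 0.5): interior solution, T(x0) = 0
rng = np.random.default_rng(2)
ratios = []
for _ in range(200):
    x = rng.uniform(0.0, 1.0, size=2)        # sample points of K
    g = gap(x)
    if g > 1e-12:
        ratios.append(np.linalg.norm(x - x0) / np.sqrt(g))
print(max(ratios))  # bounded above, as a global error bound requires
```

Here one can even bound the ratio by hand: plugging \(y=x_{0}\) into the max gives \(G_{\theta}(x)\geq(\mu-\theta/2)\|x-x_{0}\|^{2}\) with monotonicity modulus \(\mu=2\), so the ratio never exceeds \(1/\sqrt{1.5}\).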
Also we can derive the global error bound for the RGVIP (2.1) without using the Lipschitz continuity of T.
Theorem 3.4
Suppose that for each \(t\in\Omega\), \(x_{0}(t)\in H\) is a solution of the RGVIP (2.1). Assume that for each \(t\in\Omega\), the random multi-valued mapping T is strongly h-monotone and \(h:\Omega \times H\to H\) is Lipschitz continuous with measurable functions \(\alpha ,L:\Omega\to(0,+\infty)\), respectively. If the function F satisfies (A1)-(A5), then
Proof
For each \(t\in\Omega\), fix an arbitrary \(x(t)\in H\). Since \(x_{0}(t)\) is a solution of the RGVIP (2.1), from the definition of \(G_{\theta(t)}(x(t))\) and the strong h-monotonicity of T, we obtain
Since for each \(t\in\Omega\), \(x_{0}(t)\in H\) is a solution of the RGVIP (2.1),
Taking \(y(t)=h(t,x(t))\) in the above inequality, we have
From inequalities (3.10) and (3.11), we have
Moreover, by Lemma 3.3 and the Lipschitz continuity of h, we have
From (3.12) and (3.13), we have
This completes the proof. □
Remark 3.1
For a suitable choice of the mappings T, h, Ï• and the function \(a:H\to[0,1]\), we can obtain several known results [1, 3, 6, 9, 11] as special cases of the main result of this paper.
4 Conclusion
In this paper, the concept of gap functions for random generalized variational inequality problems has been introduced by using the fuzzy residual vector. By using a generalized regularized gap function, we calculated global error bounds, i.e., upper estimates on the distance to the solution set of random generalized variational inequality problems, which is one of the most useful applications of gap functions. The concepts of fuzzy theory can be applied to fuzzy optimization problems and probabilistic models, and many innovative methods can be developed further in this fascinating area of variational inequality theory. Further attention is needed for the study of the relationship between fuzzy sets, random sets, and fuzzy variational inequalities, which might provide useful applications of gap functions and their error bounds.
References
Aussel, D, Dutta, J: On gap functions for multivalued Stampacchia variational inequalities. J. Optim. Theory Appl. 149, 513-527 (2011)
Aussel, D, Correa, R, Marechal, M: Gap functions for quasi variational inequalities and generalized Nash equilibrium problems. J. Optim. Theory Appl. 151, 474-488 (2011)
Aussel, D, Gupta, R, Mehra, A: Gap functions and error bounds for inverse quasi-variational inequality problems. J. Math. Anal. Appl. 407, 270-280 (2013)
Fukushima, M: Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Math. Program. 53, 99-110 (1992)
Fukushima, M: Merit functions for variational inequality and complementarity problems. In: Nonlinear Optimization and Applications, pp. 155-170. Springer, New York (1996)
Gupta, R, Mehra, A: Gap functions and error bounds for quasi variational inequalities. J. Glob. Optim. 53, 737-748 (2012)
Huang, LR, Ng, KF: Equivalent optimization formulations and error bounds for variational inequality problem. J. Optim. Theory Appl. 125, 299-314 (2005)
Khan, SA, Chen, JW: Gap function and global error bounds for generalized mixed quasi variational inequalities. Appl. Math. Comput. 260, 71-81 (2015)
Khan, SA, Chen, JW: Gap functions and error bounds for generalized mixed vector equilibrium problems. J. Optim. Theory Appl. 166, 767-776 (2015)
Khan, SA, Iqbal, J, Shehu, Y: Mixed quasi variational inequalities involving error bounds. J. Inequal. Appl. 2015, 417 (2015)
Noor, MA: Merit functions for general variational inequalities. J. Math. Anal. Appl. 316, 736-752 (2006)
Qu, B, Wang, CY, Zhang, JZ: Convergence and error bound of a method for solving variational inequality problems via the generalized D-gap function. J. Optim. Theory Appl. 119, 535-552 (2003)
Solodov, MV, Tseng, P: Some methods based on the D-gap function for solving monotone variational inequalities. Comput. Optim. Appl. 17, 255-277 (2000)
Solodov, MV: Merit functions and error bounds for generalized variational inequalities. J. Math. Anal. Appl. 287, 405-414 (2003)
Tang, GJ, Huang, NJ: Gap functions and global error bounds for set-valued mixed variational inequalities. Taiwan. J. Math. 17, 1267-1286 (2013)
Yamashita, N, Fukushima, M: Equivalent unconstraint minimization and global error bounds for variational inequality problems. SIAM J. Control Optim. 35, 273-284 (1997)
Yamashita, N, Taji, K, Fukushima, M: Unconstrained optimization reformulations of variational inequality problems. J. Optim. Theory Appl. 92, 439-456 (1997)
Wu, JH, Florian, M, Marcotte, P: A general descent framework for the monotone variational inequality problem. Math. Program. 61, 281-300 (1993)
Zadeh, LA: Fuzzy sets. Inf. Control 8, 338-353 (1965)
Chang, SS, Zhu, YG: On variational inequalities for fuzzy mappings. Fuzzy Sets Syst. 32, 359-367 (1989)
Chang, SS: Fixed Point Theory with Applications. Chongqing Publishing House, Chongqing (1984)
Chang, SS: Coincidence theorems and variational inequalities for fuzzy mappings. Fuzzy Sets Syst. 61, 359-368 (1994)
Chang, SS, Huang, NJ: Generalized complementarity problems for fuzzy mappings. Fuzzy Sets Syst. 55, 227-234 (1993)
Noor, MA: Variational inequalities for fuzzy mappings (I). Fuzzy Sets Syst. 55, 309-312 (1993)
Noor, MA: Variational inequalities for fuzzy mappings (II). Fuzzy Sets Syst. 97, 101-107 (1998)
Noor, MA: Variational inequalities for fuzzy mappings (III). Fuzzy Sets Syst. 110, 101-108 (2000)
Huang, NJ: Random generalized nonlinear variational inclusions for random fuzzy mappings. Fuzzy Sets Syst. 105, 437-444 (1999)
Dai, HX: Generalized mixed variational-like inequality for random fuzzy mappings. J. Comput. Appl. Math. 224, 20-28 (2009)
Huang, NJ: Random generalized set-valued implicit variational inequalities. J. Liaoning Norm. Univ. 18, 89-93 (1995)
Husain, T, Tarafdar, E, Yuan, XZ: Some results on random generalized games and random quasi-variational inequalities. Far East J. Math. Sci. 2, 35-55 (1994)
Yuan, XZ: Non-compact random generalized games and random quasi-variational inequalities. J. Appl. Math. Stoch. Anal. 7, 467-486 (1994)
Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)
Acknowledgements
The authors would like to thank the associated editor and anonymous referees for their valuable comments and suggestions, which have helped to improve the paper.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Khan, S.A., Iqbal, J. & Shehu, Y. Gap functions and error bounds for random generalized variational inequality problems. J Inequal Appl 2016, 47 (2016). https://doi.org/10.1186/s13660-016-0984-5