
Gap functions and error bounds for random generalized variational inequality problems

Abstract

This paper is devoted to the study of gap functions for random generalized variational inequality problems in a fuzzy environment. Further, by using the residual vector, we compute error bounds for random generalized variational inequality problems and generalize the regularized gap function proposed by Fukushima. Furthermore, we study various properties of the generalized regularized gap functions for random fuzzy mappings and derive global error bounds for random generalized variational inequality problems. Our results are new and generalize a number of known results to generalized variational inequality problems with fuzzy mappings.

1 Introduction

The theory of gap functions was introduced for the study of convex optimization problems and subsequently applied to variational inequality problems. One of the classical approaches in the analysis of a variational inequality problem is to transform it into an equivalent optimization problem via the notion of a gap function. Recently, some effort has been made to develop gap functions for various classes of variational inequality problems; see for example [1–18]. Besides this, gap functions have also turned out to be very useful in designing new globally convergent algorithms, in analyzing the rate of convergence of some iterative methods, and in deriving error bounds.

The fuzzy set theory introduced by Zadeh [19] in 1965 has emerged as an interesting and fascinating branch of pure and applied sciences. Applications of fuzzy set theory can be found in control engineering and in optimization problems of the mathematical sciences. In the recent past, variational inequalities in the setting of fuzzy mappings have been introduced and studied; they are closely related to fuzzy optimization and decision-making problems. In recent years, many efforts have been made to reformulate variational inequality problems and optimization problems in the setting of fuzzy mappings. As a result, variational inequality problems have been generalized and extended in various directions using novel techniques of fuzzy theory.

In 1989, Chang and Zhu [20] introduced the concept of variational inequalities for fuzzy mappings. Since then, many authors have studied various classes of variational inequalities for fuzzy mappings, including existence results and iterative algorithms; see for example [20–26]. The concept of a random fuzzy mapping was first introduced by Huang [27] while studying a new class of random multi-valued nonlinear generalized variational inclusions. For some related work, we refer to [27–31]. Recently, Dai [28] introduced a new class of generalized mixed variational-like inequalities for random fuzzy mappings, established an existence theorem for the auxiliary problem, and analyzed a new iterative algorithm for finding the solution of generalized mixed variational-like inequality problems.

Gap functions have turned out to be very useful in deriving error bounds, which provide a measure of the distance between the solution set and an arbitrary point. Error bounds have played an important role not only in sensitivity analysis but also in the convergence analysis of iterative algorithms for solving variational inequality problems. It is therefore of interest to investigate error bounds for gap functions associated with various variational inequalities. For some related work, we refer to [1, 3–18]. To the best of our knowledge, gap functions and error bounds have not yet been studied in a fuzzy environment.

The purpose of this paper is to study gap functions for random generalized variational inequalities in the random fuzzy environment. Error bounds for these problems are then established in terms of the residual vector. Further, by using the generalized regularized gap function, we obtain global error bounds for solutions of random generalized variational inequalities with and without a Lipschitz continuity assumption. Finally, we give concluding remarks.

2 Preliminaries and basic formulations

Throughout this paper, let H be a real Hilbert space, whose inner product and norm are denoted by \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|\), respectively. Let \({\mathcal{F}}(H)\) be the collection of all fuzzy sets over H. A mapping \(T:H\to{\mathcal{F}}(H)\) is called a fuzzy mapping on H. If T is a fuzzy mapping on H, then \(T(x)\) (denoted by \(T_{x}\) in the sequel) is a fuzzy set on H and \(T_{x}(y)\) is the membership function of y in \(T_{x}\). Let \(A \in{\mathcal{F}}(H)\), \(\alpha\in[0, 1]\). Then the set \((A)_{\alpha}= \{x\in H : A(x)\geq\alpha\}\) is called an α-cut set of A.

In this paper, we denote by \((\Omega,\Sigma)\) a measurable space, where Ω is a set and Σ is a σ-algebra of subsets of Ω and also we denote by \({\mathcal{B}}(H)\), \(2^{H}\), \(\operatorname{CB}(H)\), and \(H({\cdot}, {\cdot})\) the class of Borel σ-fields in H, the family of all nonempty subsets of H, the family of all nonempty closed bounded subsets of H, and the Hausdorff metric on \(\operatorname{CB}(H)\), respectively.

Definition 2.1

[27]

A mapping \(x :\Omega\to H\) is said to be measurable if for any \(B\in{\mathcal{B}}(H)\), \(\{t\in\Omega:x(t)\in B\}\in \Sigma\).

Definition 2.2

[27]

A mapping \(f :\Omega\times H\to H\) is called a random operator if for any \(x\in H\), the mapping \(f(\cdot, x):\Omega\to H\) is measurable. A random operator f is said to be continuous if for any \(t\in\Omega\), the mapping \(f (t,\cdot) : H\to H\) is continuous.

Definition 2.3

[27]

A multi-valued mapping \(T:\Omega\to2^{H}\) is said to be measurable if for any \(B\in{\mathcal{B}}(H)\), \(T^{-1}(B)=\{ t\in\Omega:T(t)\cap B\neq\emptyset\}\in\Sigma\).

Definition 2.4

[27]

A mapping \(w :\Omega\to H\) is called a measurable selection of a multi-valued measurable mapping \(T :\Omega\to 2^{H}\) if w is measurable and for any \(t\in\Omega\), \(w(t)\in T (t)\).

Definition 2.5

[27]

A mapping \(T :\Omega\times H\to2^{H}\) is called a random multi-valued mapping if for any \(x\in H\), \(T (\cdot, x)\) is measurable. A random multi-valued mapping \(T :\Omega\times H\to \operatorname{CB}(H)\) is said to be H-continuous if for any \(t\in\Omega\), \(T (t,\cdot)\) is continuous in the Hausdorff metric.

Definition 2.6

[27]

A fuzzy mapping \(T : \Omega\to{\mathcal {F}}(H)\) is called measurable, if for any \(\alpha\in(0, 1]\), \((T(\cdot))_{\alpha}:\Omega\to2^{H}\) is a measurable multi-valued mapping.

Definition 2.7

[27]

A fuzzy mapping \(T : \Omega\times H\to {\mathcal{F}}(H)\) is called a random fuzzy mapping, if for any \(x\in H\), \(T(\cdot, x) :\Omega\to{\mathcal{F}}(H)\) is a measurable fuzzy mapping.

Clearly, random fuzzy mappings include multi-valued mappings, random multi-valued mappings, and fuzzy mappings as special cases.

Let \(\hat{T}:\Omega\times H\to{\mathcal{F}}(H)\) be a random fuzzy mapping satisfying the following condition:

  1. (I):

    there exists a mapping \(a: H\to[0,1]\) such that

    $$({\hat{T}}_{t,x})_{a(x)}\in \operatorname{CB}(H),\quad \forall(t,x)\in\Omega\times H. $$

By using the random fuzzy mapping \(\hat{T}\), we can define a random multi-valued mapping T as follows:

$$T:\Omega\times H\to \operatorname{CB}(H), \qquad (t,x)\to({\hat{T}}_{t,x})_{a(x)}, \quad \forall (t,x)\in\Omega\times H. $$

T is called the random multi-valued mapping induced by the random fuzzy mapping \(\hat{T}\).

Given a mapping \(a:H\to[0,1]\), a random fuzzy mapping \(\hat{T}:\Omega \times H\to{\mathcal{F}}(H)\) satisfying the condition (I), and a random operator \(h:\Omega\times H\to H\) with \(\operatorname{Im}(h)\cap \operatorname{dom}(\partial\phi )\neq\emptyset\), we consider the following random generalized variational inequality problem (for short, RGVIP): Find measurable mappings \(x,w:\Omega\to H\) such that for all \(t\in\Omega\), \(y(t)\in H\),

$$ {\hat{T}}_{t,x(t)}\bigl(w(t)\bigr)\geq a\bigl(x(t)\bigr), \qquad \bigl\langle w(t),y(t)-h\bigl(t,x(t)\bigr) \bigr\rangle +\phi\bigl(y(t)\bigr)-\phi \bigl(h\bigl(t,x(t)\bigr)\bigr)\geq0, $$
(2.1)

where ∂ϕ denotes the sub-differential of a proper, convex, and lower semicontinuous function \(\phi:H\to\mathbb{R}\cup\{+\infty\}\) with its effective domain being closed.

The set of measurable mappings \((x,w)\) is called a random solution of the RGVIP (2.1).

Special cases:

  1. (I)

If a is the zero operator and \(T:H\to H \) is a single-valued mapping, then the RGVIP (2.1) reduces to the generalized mixed variational inequality problem, denoted by GMVIP, which consists in finding \(x\in H\) such that

    $$ \bigl\langle Tx,y-h(x) \bigr\rangle +\phi(y)-\phi\bigl(h(x)\bigr)\geq0,\quad \forall y\in H, $$
    (2.2)

    which was studied by Solodov [14]. He introduced three gap functions for the GMVIP (2.2) and by using these he obtained error bounds.

  2. (II)

If \(h(x)=x\), ∀x, then the GMVIP (2.2) reduces to the mixed variational inequality problem, denoted by MVIP, which consists in finding \(x\in H\) such that

    $$ \langle Tx, y-x \rangle+\phi(y)-\phi(x)\geq0, \quad \forall y\in H, $$
    (2.3)

which was considered by Tang and Huang [15]. They introduced two regularized gap functions for the MVIP (2.3) and studied their differentiability properties.

  3. (III)

If the function \(\phi(\cdot)\) is the indicator function of a closed convex set K in H, then the MVIP (2.3) reduces to the classical variational inequality problem, denoted by VIP, which consists in finding \(x\in K\) such that

    $$ \langle Tx, y-x\rangle\geq0,\quad \forall y\in K, $$
    (2.4)

which was studied in [1, 3, 4, 6, 11, 15], where local and global error bounds for the VIP (2.4) were derived in terms of the regularized gap functions and the D-gap functions.

First of all, we recall the following well-known results and concepts.

For the VIP (2.4), it is well known that \(x\in K\) is a solution if and only if

$$0 =x-P_{K}\bigl[x-\theta T(x)\bigr], $$

where \(P_{K}\) is the orthogonal projection onto K and \(\theta>0\) is arbitrary. The right-hand side of the above equation is commonly known as the natural residual vector.
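For intuition, the natural residual can be evaluated directly in a deterministic one-dimensional special case. The operator \(T\), the interval \(K=[0,2]\), and the value \(\theta=0.5\) below are illustrative assumptions, not data from the paper:

```python
def project_interval(z, lo, hi):
    # Orthogonal projection of z onto the interval K = [lo, hi] in the real line.
    return min(max(z, lo), hi)

def natural_residual(x, T, theta, lo, hi):
    # r(x) = x - P_K[x - theta*T(x)]; r(x) = 0 exactly when x solves the VIP on K.
    return x - project_interval(x - theta * T(x), lo, hi)

# Hypothetical monotone operator T(x) = x - 1 on K = [0, 2]; x* = 1 solves the VIP.
T = lambda x: x - 1.0
print(natural_residual(1.0, T, 0.5, 0.0, 2.0))  # 0.0 at the solution
print(natural_residual(0.0, T, 0.5, 0.0, 2.0))  # -0.5 away from it
```

The residual vanishes exactly at the solution, which is the equivalence exploited throughout the error-bound analysis below.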

Throughout this paper, unless otherwise stated, the set of minimizers of a function \(f :H\to\mathbb{R}\cup\{+\infty\}\) over a set \(Y\subset H\) is denoted by \(\arg\min_{y\in Y}f(y)\). For a convex function \(\phi: H \to\mathbb{R}\cup\{+\infty\}\) with effective domain \(\operatorname{dom} \phi=\{x\in H: \phi(x)<+\infty\}\), the sub-differential at \(x\in H\) is denoted by \(\partial\phi(x) = \{w\in H: \phi(y)\geq\phi(x) +\langle w, y - x\rangle, \forall y\in H\}\), and a point \(w\in\partial\phi(x)\) is called a sub-gradient of ϕ at x.

Motivated by the proximal map given in [32], we give a similar characterization for the RGVIP (2.1) in the random fuzzy environment by defining the mapping \(P_{\theta(t)}^{\phi,z}:\Omega\times H\to\operatorname{dom} \phi\) as

$$P_{\theta(t)}^{\phi,z}(t,z) =\arg\min_{y(t)\in H}\biggl\{ \phi\bigl(y(t)\bigr)+{1\over 2\theta(t)}\bigl\Vert y(t)-z(t)\bigr\Vert ^{2}\biggr\} ,\quad z(t)\in H, t\in\Omega, $$

where \(\theta:\Omega\to(0,+\infty)\) is a measurable function; \(P_{\theta(t)}^{\phi,z}\) is the so-called proximal mapping in H for a random fuzzy mapping. Note that the objective function above is proper and strongly convex. Since \(\operatorname{dom}(\phi)\) is closed, \(P_{\theta(t)}^{\phi,z}(t,z)\) is well defined and single-valued. For any measurable function \(\theta:\Omega\to (0,+\infty)\), define the residual vector

$$R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr)=h\bigl(t,x(t)\bigr) -P_{\theta(t)}^{\phi ,x}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], \quad x(t)\in H. $$

We next show that \(R_{\theta(t)}^{\phi}(t,x(t))\) plays the role of the natural residual vector for the RGVIP (2.1) in the random fuzzy setting.
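Before the formal lemma, a deterministic scalar sketch may help: for \(\phi(x)=|x|\) the proximal map has the familiar soft-thresholding closed form, and the strong convexity of the objective makes it single-valued, as claimed above. The numerical values are illustrative assumptions:

```python
def prox_abs(z, theta):
    # Proximal map of phi(x) = |x|: argmin_y { |y| + (1/(2*theta))*(y - z)**2 },
    # whose closed form is the soft-threshold operator.
    if z > theta:
        return z - theta
    if z < -theta:
        return z + theta
    return 0.0

def objective(y, z, theta):
    # The proper, strongly convex objective defining the proximal map.
    return abs(y) + (y - z) ** 2 / (2.0 * theta)

z, theta = 1.5, 0.5
p = prox_abs(z, theta)  # 1.0
# Strong convexity makes the minimizer unique: p beats every nearby candidate.
assert all(objective(p, z, theta) <= objective(p + k * 0.01, z, theta) + 1e-12
           for k in range(-100, 101))
```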

Lemma 2.1

For any measurable function \(\theta:\Omega\to (0,+\infty)\) and for each \(t\in\Omega\), the measurable mapping \(x:\Omega \to H\) is a solution of the RGVIP (2.1) if and only if \(R_{\theta (t)}^{\phi}(t,x(t))=0\).

Proof

Let \(R_{\theta(t)}^{\phi}(t,x(t))=0\), which implies that \(h(t,x)=P_{\theta(t)}^{\phi,x}[h(t,x)-\theta(t) w(t)]\). It is equivalent to \(h(t,x)=\arg\min_{y(t)\in H}\{\phi(y(t))+{1\over 2\theta (t)}\|y(t)-(h(t,x)-\theta(t) w(t))\|^{2}\}\). By the optimality conditions (which are necessary and sufficient, by convexity), the latter is equivalent to

$$0\in\partial\phi\bigl(h(t,x)\bigr)+{1\over \theta(t)}\bigl(h(t,x)- \bigl(h(t,x)-\theta(t) w(t)\bigr)\bigr)=\partial\phi\bigl(h(t,x)\bigr)+w(t), $$

which implies \(-w(t)\in\partial\phi(h(t,x))\). This in turn is equivalent, by the definition of the sub-gradient, to

$$\phi\bigl(y(t)\bigr)\geq\phi\bigl(h(t,x)\bigr)- \bigl\langle w(t), y(t)-h(t,x) \bigr\rangle ,\quad \forall y(t)\in H, t\in\Omega, $$

which implies that \(x(t)\) solves the RGVIP (2.1).

This completes the proof. □

Definition 2.8

A random multi-valued operator \(T: \Omega \times H \to \operatorname{CB}(H)\) is said to be strongly h-monotone, if there exists a measurable function \(\alpha:\Omega\to(0,+\infty)\) such that

$$\begin{aligned}& \bigl\langle w_{1}(t)-w_{2}(t), h\bigl(t,x_{1}(t) \bigr)-h\bigl(t,x_{2}(t)\bigr)\bigr\rangle \\& \quad \geq\alpha(t)\bigl\Vert x_{1}(t)-x_{2}(t)\bigr\Vert ^{2},\quad \forall w_{i}(t)\in T(t,x_{i}), \forall x_{i}(t)\in H, i=1,2, \forall t\in\Omega. \end{aligned}$$

Definition 2.9

A random operator \(h:\Omega\times H\to H\) is said to be Lipschitz continuous, if there exists a measurable function \(L:\Omega\to(0,+\infty)\) such that

$$\bigl\Vert h\bigl(t,x_{1}(t)\bigr)-h\bigl(t,x_{2}(t) \bigr)\bigr\Vert \leq L(t)\bigl\Vert x_{1}(t)-x_{2}(t) \bigr\Vert ,\quad \forall x_{i}(t)\in H, i=1,2, \forall t\in\Omega. $$

Definition 2.10

A random multi-valued mapping \(T:\Omega \times H\to \operatorname{CB}(H)\) is said to be Ĥ-Lipschitz continuous, if there exists a measurable function \(\lambda:\Omega\to(0,+\infty)\) such that

$${\hat{H}}\bigl(T\bigl(t,x(t)\bigr),T\bigl(t,x_{0}(t)\bigr)\bigr)\leq \lambda(t)\bigl\Vert x(t)-x_{0}(t)\bigr\Vert ,\quad \forall x(t),x_{0}(t)\in H. $$
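To make Definition 2.10 concrete, the following sketch evaluates the Hausdorff distance between finite subsets of the real line; the multi-valued map \(T\) below is an illustrative assumption chosen so that the Lipschitz constant can be read off directly:

```python
def hausdorff(A, B):
    # Hausdorff distance between finite subsets of R: the larger of the two
    # one-sided excesses max_a min_b |a - b| and max_b min_a |a - b|.
    excess = lambda S, U: max(min(abs(s - u) for u in U) for s in S)
    return max(excess(A, B), excess(B, A))

# Hypothetical multi-valued map T(x) = {x, x + 2}; then H(T(x), T(x0)) = |x - x0|,
# so T is H-Lipschitz continuous with constant lambda = 1.
T = lambda x: [x, x + 2.0]
print(hausdorff(T(0.5), T(1.5)))  # 1.0
```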

Now, we give the following lemmas.

Lemma 2.2

[21]

Let \(T:\Omega\times H\to \operatorname{CB}(H)\) be an Ĥ-Lipschitz continuous random multi-valued mapping; then, for any measurable mapping \(x:\Omega\to H\), the multi-valued mapping \(T(\cdot,x(\cdot)):\Omega\to \operatorname{CB}(H)\) is measurable.

Lemma 2.3

[21]

Let \(T_{1}, T_{2}:\Omega\to \operatorname{CB}(H)\) be two measurable multi-valued mappings, \(\epsilon> 0\) be a constant, and \(w_{1}:\Omega\to H\) be a measurable selection of \(T_{1}\), then there exists a measurable selection \(w_{2}:\Omega\to H\) of \(T_{2}\) such that for all \(t\in\Omega\),

$$\bigl\Vert w_{1}(t)-w_{2}(t)\bigr\Vert \leq(1+ \epsilon){\hat{H}}\bigl(T_{1}(t),T_{2}(t)\bigr). $$

Definition 2.11

A function \(G: H\to\mathbb{R}\) is said to be a gap function for the RGVIP (2.1), if it satisfies the following properties:

  1. (i)

    \(G(x)\geq0\), \(\forall x\in H\);

  2. (ii)

    \(G(x^{*}) = 0\), if and only if \(x^{*}\in H\) solves the RGVIP (2.1).

Lemma 2.4

Let \(x:\Omega\to H\) be a measurable mapping and \(\theta:\Omega\to(0,+\infty)\) be a measurable function; then, for all \(x(t)\in H\) and for each \(t\in\Omega\), \(\|R_{\theta(t)}^{\phi}(t,x(t))\|\) is a gap function for the RGVIP (2.1).

Proof

It is clear that \(\|R_{\theta(t)}^{\phi}(t,x(t))\|\geq0\) for each \(t\in\Omega\), \(x(t)\in H\) and \(\|R_{\theta(t)}^{\phi}(t,x^{*}(t))\|= 0\) if and only if \(h(t,x^{*})=P_{\theta(t)}^{\phi,x^{*}}[h(t,x^{*})-\theta(t) w(t)]\). By the definition of \(P_{\theta(t)}^{\phi,x^{*}}\), \(P_{\theta (t)}^{\phi,x^{*}}[h(t,x^{*})-\theta(t) w(t)]\) satisfies

$$\begin{aligned} 0 \in&\partial\phi\bigl(P_{\theta(t)}^{\phi,x^{*}}\bigl[h\bigl(t,x^{*}\bigr)- \theta(t) w(t)\bigr]\bigr) \\ &{}+{1\over \theta(t)}\bigl(P_{\theta(t)}^{\phi,x^{*}} \bigl[h\bigl(t,x^{*}\bigr)-\theta(t) w(t)\bigr]-\bigl(h\bigl(t,x^{*}\bigr)-\theta(t) w(t)\bigr) \bigr). \end{aligned}$$

Hence,

$$-w(t)+{1\over \theta(t)}\bigl(h\bigl(t,x^{*}\bigr)-P_{\theta(t)}^{\phi ,x^{*}} \bigl[h\bigl(t,x^{*}\bigr)-\theta(t) w(t)\bigr]\bigr)\in\partial\phi \bigl(P_{\theta(t)}^{\phi ,x^{*}}\bigl[h\bigl(t,x^{*}\bigr)-\theta(t) w(t)\bigr] \bigr). $$

It follows that, for all \(y(t)\in H\),

$$\begin{aligned}& \phi\bigl(y(t)\bigr)-\phi\bigl(P_{\theta(t)}^{\phi,x^{*}}\bigl[h\bigl(t,x^{*} \bigr)-\theta(t) w(t)\bigr]\bigr) \\& \quad {}+ \biggl\langle w(t)-{1\over \theta(t)}\bigl(h\bigl(t,x^{*} \bigr)-P_{\theta(t)}^{\phi,x^{*}}\bigl[h\bigl(t,x^{*}\bigr)-\theta(t) w(t) \bigr]\bigr), \\& \quad y(t)-P_{\theta(t)}^{\phi,x^{*}}\bigl[h\bigl(t,x^{*}\bigr)-\theta (t) w(t)\bigr] \biggr\rangle \geq0. \end{aligned}$$

Thus, \(x^{*}(t)\) solves the RGVIP (2.1).

This completes the proof. □

Next we study those conditions under which the RGVIP (2.1) has a unique solution.

Lemma 2.5

Let \((\Omega, \Sigma)\) be a measurable space and H be a real Hilbert space. Suppose that \(h:\Omega\times H\to H\) is a random mapping and \(\phi:H\to\mathbb{R}\cup\{+\infty\}\) is a real-valued function. Let the random fuzzy mapping \({\hat{T}}:\Omega \times H\to{\mathcal{F}}(H)\) satisfy the condition (I), and let the random multi-valued mapping \(T:\Omega\times H\to \operatorname{CB}(H)\) induced by the random fuzzy mapping \(\hat{T}\) be strongly h-monotone with the measurable function \(\alpha:\Omega\to(0,+\infty)\); then the RGVIP (2.1) has a unique solution.

Proof

Let the measurable mappings \(x_{1},x_{2}:\Omega\to H\) be two solutions of the RGVIP (2.1). Then we have

$$\begin{aligned}& \bigl\langle w_{1}(t), y(t)-h\bigl(t,x_{1}(t)\bigr) \bigr\rangle +\phi\bigl(y(t)\bigr)-\phi \bigl(h\bigl(t,x_{1}(t)\bigr)\bigr) \geq0, \end{aligned}$$
(2.5)
$$\begin{aligned}& \bigl\langle w_{2}(t), y(t)-h\bigl(t,x_{2}(t)\bigr) \bigr\rangle +\phi\bigl(y(t)\bigr)-\phi \bigl(h\bigl(t,x_{2}(t)\bigr)\bigr) \geq0. \end{aligned}$$
(2.6)

Taking \(y(t) =h(t,x_{2}(t))\) in (2.5) and \(y(t) =h(t,x_{1}(t))\) in (2.6) and adding the resulting inequalities, we have

$$\bigl\langle w_{1}(t)-w_{2}(t), h\bigl(t,x_{2}(t) \bigr)-h\bigl(t,x_{1}(t)\bigr) \bigr\rangle \geq0. $$

Since T is strongly h-monotone with measurable function \(\alpha :\Omega\to(0,+\infty)\), therefore

$$0\leq \bigl\langle w_{1}(t)-w_{2}(t), h \bigl(t,x_{2}(t)\bigr)-h\bigl(t,x_{1}(t)\bigr) \bigr\rangle \leq -\alpha(t)\bigl\Vert x_{1}(t)-x_{2}(t)\bigr\Vert ^{2}, $$

which implies that \(x_{1}(t) =x_{2}(t)\), \(\forall t\in\Omega\); this proves the uniqueness of the solution of the RGVIP (2.1).

This completes the proof. □

Now, by using the natural residual vector \(R_{\theta(t)}^{\phi}(t,x(t))\), we derive the error bounds for the solution of the RGVIP (2.1).

Theorem 2.1

Suppose for each \(t\in\Omega\), \(x_{0}(t)\in H\) is a solution of the RGVIP (2.1). Let \((\Omega, \Sigma)\) be a measurable space, and H be a real Hilbert space. Let the random fuzzy mapping \({\hat{T}}:\Omega\times H\to{\mathcal{F}}(H)\) satisfy the condition (I) and \(T:\Omega\times H\to \operatorname{CB}(H)\) be the random multi-valued mapping induced by the random fuzzy mapping \(\hat{T}\). Let \(h:\Omega\times H\to H\) be a random mapping and \(\phi:H\to\mathbb{R}\cup\{+\infty\}\) be a real-valued function such that

  1. (i)

    for each \(t\in\Omega\), the measurable mapping T is strongly h-monotone and Ĥ-Lipschitz continuous with the measurable functions \(\alpha, \lambda:\Omega\to(0,+\infty)\), respectively;

  2. (ii)

    for each \(t\in\Omega\), the mapping \(h(t,\cdot)\) is Lipschitz continuous with the measurable function \(L:\Omega\to(0,+\infty)\);

  3. (iii)

    if there exists a measurable function \(K:\Omega\to (0,+\infty)\) such that

    $$\bigl\Vert P_{\theta(t)}^{\phi,x}(z)-P_{\theta(t)}^{\phi,x_{0}}(z) \bigr\Vert \leq K(t)\bigl\Vert x(t)-x_{0}(t)\bigr\Vert ,\quad \forall x(t),x_{0}(t),z(t)\in H, $$

then for any \(x(t)\in H\), \(t\in\Omega\) and \(\theta(t)>{K(t)L(t)\over \alpha(t)-K(t)\lambda(t)(1+\epsilon)}\), we have

$$\bigl\Vert x(t)-x_{0}(t)\bigr\Vert \leq{L(t)+\theta(t)\lambda(t)(1+\epsilon)\over \alpha (t)\theta(t)-K(t) (L(t)+\theta(t)\lambda(t)(1+\epsilon) )} \bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr)\bigr\Vert . $$

Proof

Let for each \(t\in\Omega\), \(x_{0}(t)\in H\) be a solution of the RGVIP (2.1), then

$$\bigl\langle w_{0}(t), y(t)-h(t,x_{0})\bigr\rangle +\phi \bigl(y(t)\bigr)-\phi\bigl(h\bigl(t,x_{0}(t)\bigr)\bigr)\geq0,\quad \forall y(t)\in H. $$

Substituting \(y(t)=P_{\theta(t)}^{\phi,x_{0}}[h(t,x(t))-\theta(t) w(t)]\) in the above inequality, we have

$$\begin{aligned}& \bigl\langle w_{0}(t), P_{\theta(t)}^{\phi,x_{0}}\bigl[h \bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x_{0}(t)\bigr) \bigr\rangle \\& \quad {}+\phi \bigl(P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)- \theta(t) w(t)\bigr] \bigr)-\phi\bigl(h\bigl(t,x_{0}(t)\bigr)\bigr) \geq0. \end{aligned}$$
(2.7)

For any fixed \(x(t)\in H\) and measurable function \(\theta:\Omega\to (0,+\infty)\), we observe that

$$\begin{aligned} h\bigl(t,x(t)\bigr)-\theta(t) w(t) \in&\bigl(I+\theta(t)\partial\phi\bigr) \bigl(I+\theta (t)\partial\phi\bigr)^{-1} \bigl(h\bigl(t,x(t)\bigr)- \theta(t) w(t) \bigr) \\ =&\bigl(I+\theta (t)\partial\phi\bigr)P_{\theta(t)}^{\phi,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], \end{aligned}$$

which is equivalent to

$$\begin{aligned} \begin{aligned} &{-}w(t)+{1\over \theta(t)} \bigl[h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi ,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr] \bigr] \\ &\quad \in\partial\phi \bigl(P_{\theta (t)}^{\phi,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr] \bigr). \end{aligned} \end{aligned}$$

By the definition of a sub-differential, we have

$$\begin{aligned}& \biggl\langle w(t)-{1\over \theta(t)} \bigl(h\bigl(t,x(t) \bigr)-P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t) \bigr] \bigr), y(t)-P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)- \theta(t) w(t)\bigr] \biggr\rangle \\& \quad {}+\phi\bigl(y(t)\bigr)-\phi \bigl(P_{\theta(t)}^{\phi,x_{0}}\bigl[h \bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr] \bigr)\geq0. \end{aligned}$$

Taking \(y(t)=h(t,x_{0}(t))\) in the above we get

$$\begin{aligned}& \biggl\langle w(t)-{1\over \theta(t)} \bigl(h\bigl(t,x(t) \bigr)-P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t) \bigr] \bigr), \\& \quad h\bigl(t,x_{0}(t)\bigr)-P_{\theta(t)}^{\phi ,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr] \biggr\rangle \\& \quad {}+\phi\bigl(h\bigl(t,x_{0}(t)\bigr)\bigr)-\phi \bigl(P_{\theta(t)}^{\phi,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta (t) w(t) \bigr] \bigr)\geq0. \end{aligned}$$

This implies that

$$\begin{aligned}& \biggl\langle -w(t)+{1\over \theta(t)} \bigl(h\bigl(t,x(t) \bigr)-P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t) \bigr] \bigr), \\& \quad P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x_{0}(t)\bigr) \biggr\rangle \\& \quad {}+\phi\bigl(h\bigl(t,x_{0}(t)\bigr)\bigr)-\phi \bigl(P_{\theta(t)}^{\phi,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta (t) w(t) \bigr] \bigr)\geq0. \end{aligned}$$
(2.8)

Adding (2.7) and (2.8), we get

$$\begin{aligned}& \biggl\langle w_{0}(t)-w(t)+{1\over \theta(t)} \bigl(h \bigl(t,x(t)\bigr)-P_{\theta (t)}^{\phi,x_{0}}\bigl[h\bigl(t,x(t)\bigr)- \theta(t) w(t)\bigr] \bigr), \\& \quad P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t) \bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x_{0}(t)\bigr) \biggr\rangle \geq0. \end{aligned}$$

This can also be written as

$$\begin{aligned}& \theta(t) \bigl\langle w_{0}(t)-w(t), P_{\theta(t)}^{\phi ,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x(t)\bigr) \bigr\rangle \\& \quad {}+ \theta(t) \bigl\langle w_{0}(t)-w(t), h\bigl(t,x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle \\& \quad {}+ \bigl\langle h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}}\bigl[h \bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], P_{\theta(t)}^{\phi,x_{0}}\bigl[h \bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x(t)\bigr) \bigr\rangle \\& \quad {}+ \bigl\langle h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}}\bigl[h \bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], h\bigl(t,x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle \geq0, \end{aligned}$$

which implies that

$$\begin{aligned}& \theta(t) \bigl\langle w_{0}(t)-w(t), P_{\theta(t)}^{\phi ,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x(t)\bigr) \bigr\rangle \\& \qquad {}+ \bigl\langle h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], h\bigl(t,x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle \\& \quad \geq\theta(t) \bigl\langle w_{0}(t)-w(t), h \bigl(t,x_{0}(t)\bigr)-h\bigl(t,x(t)\bigr) \bigr\rangle \\& \qquad {}+ \bigl\langle h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr] \bigr\rangle . \end{aligned}$$

By using the strong h-monotonicity of T, we get

$$\begin{aligned}& \theta(t) \bigl\langle w_{0}(t)-w(t), P_{\theta(t)}^{\phi ,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x(t)\bigr) \bigr\rangle \\& \qquad {}+ \bigl\langle h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], h\bigl(t,x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle \\& \quad \geq\alpha(t)\theta(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert ^{2}+\bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr) \bigr\Vert ^{2}. \end{aligned}$$

Also the above inequality can be written as

$$\begin{aligned}& \theta(t) \bigl\langle w_{0}(t)-w(t), P_{\theta(t)}^{\phi ,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x(t)\bigr) \\& \qquad {}-P_{\theta(t)}^{\phi ,x} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]+P_{\theta(t)}^{\phi,x} \bigl[h\bigl(t,x(t)\bigr)-\theta (t) w(t)\bigr] \bigr\rangle \\& \qquad {}+ \bigl\langle h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-P_{\theta(t)}^{\phi,x} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr] \\& \qquad {}+P_{\theta (t)}^{\phi,x} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], h\bigl(t,x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle \\& \quad \geq\alpha(t)\theta(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert ^{2}+\bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr) \bigr\Vert ^{2}. \end{aligned}$$

By using the Cauchy-Schwarz inequality along with the triangular inequality, we have

$$\begin{aligned}& \theta(t)\bigl\Vert w_{0}(t)-w(t)\bigr\Vert \bigl\Vert P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t) \bigr]-P_{\theta(t)}^{\phi,x}\bigl[h\bigl(t,x(t)\bigr)-\theta (t) w(t) \bigr]\bigr\Vert \\& \qquad {}+\theta(t)\bigl\Vert w_{0}(t)-w(t)\bigr\Vert \bigl\Vert P_{\theta(t)}^{\phi ,x}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t) \bigr]-h\bigl(t,x(t)\bigr)\bigr\Vert \\& \qquad {}+\bigl\Vert h\bigl(t,x(t)\bigr)- P_{\theta(t)}^{\phi,x} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]\bigr\Vert \bigl\Vert h \bigl(t,x_{0}(t)\bigr)-h\bigl(t,x(t)\bigr)\bigr\Vert \\& \qquad {}+ \bigl\Vert P_{\theta(t)}^{\phi,x}\bigl[h\bigl(t,x(t)\bigr)- \theta(t) w(t)\bigr]-P_{\theta (t)}^{\phi,x_{0}}\bigl[h\bigl(t,x(t)\bigr)- \theta(t) w(t)\bigr]\bigr\Vert \\& \qquad {}\times\bigl\Vert h\bigl(t,x_{0}(t)\bigr)-h \bigl(t,x(t)\bigr)\bigr\Vert \\& \quad \geq\alpha(t)\theta(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert ^{2}+\bigl\Vert R_{\theta (t)}^{\phi}\bigl(t,x(t)\bigr) \bigr\Vert ^{2}. \end{aligned}$$

Now using the Ĥ-Lipschitz continuity of T, the Lipschitz continuity of h, and assumption (iii) on \(P_{\theta(t)}^{\phi,x}(\cdot)\), we have

$$\begin{aligned}& \theta(t)\lambda(t) (1+\epsilon)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert K(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert \\& \qquad {}+\theta(t)\lambda(t) (1+\epsilon)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert \bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr)\bigr\Vert \\& \qquad {}+\bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr)\bigr\Vert L(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert + K(t)L(t)\bigl\Vert x(t)-x_{0}(t)\bigr\Vert ^{2} \\& \quad \geq\alpha(t)\theta(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert ^{2}+\bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr) \bigr\Vert ^{2}. \end{aligned}$$

The above can be written again as

$$\begin{aligned}& K(t)\theta(t)\lambda(t) (1+\epsilon)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert ^{2}+\theta(t)\lambda (t) (1+\epsilon)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert \bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr)\bigr\Vert \\& \qquad {}+L(t)\bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr) \bigr\Vert \bigl\Vert x_{0}(t)-x(t)\bigr\Vert + K(t)L(t)\bigl\Vert x(t)-x_{0}(t)\bigr\Vert ^{2} \\& \quad \geq\alpha(t)\theta(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert ^{2}. \end{aligned}$$

Therefore, we have

$$\begin{aligned} \bigl\Vert x(t)-x_{0}(t)\bigr\Vert \leq&{L(t)+\theta(t)\lambda(t)(1+\epsilon)\over \alpha (t)\theta(t)-K(t) (L(t)+\theta(t)\lambda(t)(1+\epsilon) )} \\ &{}\times \bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr)\bigr\Vert , \quad \forall x(t)\in H, t\in\Omega, \end{aligned}$$

where \(\theta(t)>{K(t)L(t)\over \alpha(t)-K(t)\lambda(t)(1+\epsilon )}\).

This completes the proof. □
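As a sanity check, consider the deterministic special case \(\phi\equiv 0\), \(h=\mathrm{id}\), and a single-valued \(T\): the proximal map becomes the identity, so \(R_{\theta}(x)=\theta T(x)\), and with \(K=0\), \(\epsilon=0\) the bound of Theorem 2.1 reads \(\|x-x_{0}\|\leq\frac{L+\theta\lambda}{\alpha\theta}\|R_{\theta}(x)\|\). The numerical values below are illustrative assumptions:

```python
def error_bound_holds(T, x0, theta, alpha, lam, L, xs, tol=1e-12):
    # With phi = 0 and h = id the proximal map is the identity, so
    # R_theta(x) = x - (x - theta*T(x)) = theta*T(x); the theorem's bound
    # (with K = 0, eps = 0) becomes
    #   |x - x0| <= (L + theta*lam) / (alpha*theta) * |R_theta(x)|.
    c = (L + theta * lam) / (alpha * theta)
    return all(abs(x - x0) <= c * abs(theta * T(x)) + tol for x in xs)

alpha = lam = 2.0                 # T below is strongly monotone and Lipschitz with modulus 2
T = lambda x: 2.0 * (x - 1.0)     # hypothetical operator with unique solution x0 = 1
assert error_bound_holds(T, 1.0, 0.5, alpha, lam, 1.0, [k * 0.1 for k in range(-30, 31)])
```

Here the bound holds with room to spare; the point is only that the residual controls the distance to the solution, exactly as the theorem asserts.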

3 Generalized regularized gap functions for RGVIP (2.1)

Fukushima [4] considered the regularized gap function \(g_{\theta}: H\to \mathbb{R}\) defined by

$$g_{\theta}(x)=\max_{y\in K}\biggl\{ \langle Tx, x-y \rangle-{\theta\over 2} \|x-y\|^{2}\biggr\} ,\quad \mbox{where } \theta>0\mbox{ is a positive parameter}. $$
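For intuition, the following deterministic one-dimensional sketch evaluates \(g_{\theta}\) in closed form: the inner maximization of a concave quadratic over an interval is solved by projection. The operator \(T\), the set \(K=[0,2]\), and \(\theta=1\) are illustrative assumptions:

```python
def project(z, lo=0.0, hi=2.0):
    # Orthogonal projection onto K = [lo, hi].
    return min(max(z, lo), hi)

def gap(x, T, theta=1.0):
    # Fukushima's regularized gap function on K:
    #   g_theta(x) = max_{y in K} { T(x)*(x - y) - (theta/2)*(x - y)**2 };
    # the concave quadratic in y is maximized at y = P_K(x - T(x)/theta).
    y = project(x - T(x) / theta)
    return T(x) * (x - y) - 0.5 * theta * (x - y) ** 2

T = lambda x: x - 1.0   # hypothetical operator; x* = 1 solves the VIP on K = [0, 2]
print(gap(1.0, T))  # 0.0 at the solution
print(gap(0.0, T))  # 0.5, positive away from it
```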

The function \(g_{\theta}\) constitutes an equivalent constrained optimization reformulation of the VIP (2.4) in the sense that x solves the VIP (2.4) if and only if x minimizes \(g_{\theta}\) on K and \(g_{\theta}(x) = 0\); moreover, \(g_{\theta}(x)\geq0\), \(\forall x\in K\). The regularized gap function was further extended by Wu et al. [18]. They considered the function \(G_{\theta}: H\to\mathbb{R}\) defined by \(G_{\theta}(x)=\max_{y\in K}\{ \langle Tx, x-y \rangle-\theta F(x,y)\}\).

Wu et al. [18] showed that \(G_{\theta}\) constitutes an equivalent constrained differentiable optimization reformulation of the VIP (2.4). Motivated and inspired by the work in [4, 7, 12, 17, 18], in this section we construct a generalized regularized gap function \(G_{\theta(t)} :\Omega\times H\to\mathbb{R}\) associated with the RGVIP (2.1), defined by

$$ \begin{aligned} G_{\theta(t)}\bigl(x(t)\bigr)&=\max_{y(t)\in H}\Psi_{\theta(t)} \bigl(x(t),y(t)\bigr) \quad \bigl(\theta:\Omega\to(0,+\infty) \mbox{ a measurable function}\bigr) \\ &=\max_{y(t)\in H} \bigl\{ \bigl\langle w(t), h\bigl(t,x(t)\bigr)-y(t) \bigr\rangle +\phi\bigl(h\bigl(t,x(t)\bigr) \bigr)-\phi\bigl(y(t)\bigr)-\theta (t)F\bigl(h\bigl(t,x(t)\bigr),y(t)\bigr) \bigr\} \\ &= \bigl\langle w(t), h\bigl(t,x(t)\bigr)- \pi_{\theta (t)}\bigl(x(t)\bigr) \bigr\rangle +\phi\bigl(h\bigl(t,x(t)\bigr) \bigr)-\phi\bigl(\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr)-\theta (t)F\bigl(h \bigl(t,x(t)\bigr),\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr), \end{aligned} $$
(3.1)

where \(\pi_{\theta(t)}(x(t))\) denotes the unique minimizer of \(-\Psi _{\theta(t)}(x(t), \cdot)\) on H for each \(t\in\Omega\) and the function \(F :H\times H\to\mathbb{R}\) satisfies the following conditions:

  1. (A1)

    F is continuously differentiable on \(H\times H\);

  2. (A2)

    F is nonnegative on \(H\times H\);

  3. (A3)

    \(F(x(t), \cdot)\) is strongly convex uniformly in \(x(t)\), for each \(t\in\Omega\), i.e., there exists a measurable function \(\beta:\Omega\to(0,+\infty)\) such that, for any \(t\in\Omega\), \(x(t)\in H\),

    $$\begin{aligned}& F\bigl(x(t),y_{1}(t)\bigr)-F\bigl(x(t),y_{2}(t)\bigr) \\& \quad \geq \bigl\langle \nabla_{2}F\bigl(x(t),y_{2}(t)\bigr), y_{1}(t)-y_{2}(t) \bigr\rangle +\beta(t)\bigl\Vert y_{1}(t)-y_{2}(t)\bigr\Vert ^{2},\quad \forall y_{1}(t),y_{2}(t)\in H, \end{aligned}$$

    where \(\nabla_{2}F\) denotes the partial differential of F with respect to the second variable of F;

  4. (A4)

    \(F(x(t), y(t)) = 0\) if and only if \(x (t)= y(t)\), \(\forall t\in\Omega\);

  5. (A5)

    \(\nabla_{2}F(x(t),\cdot)\) is uniformly Lipschitz continuous, i.e., there exists a measurable function \(\gamma:\Omega\to (0,+\infty)\) such that, for any \(t\in\Omega\), \(x(t)\in H\),

    $$\bigl\Vert \nabla_{2}F\bigl(x(t),y_{1}(t)\bigr)- \nabla_{2}F\bigl(x(t),y_{2}(t)\bigr)\bigr\Vert \leq\gamma(t) \bigl\Vert y_{1}(t)-y_{2}(t)\bigr\Vert ,\quad \forall y_{1}(t),y_{2}(t)\in H. $$
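As a concrete illustration (an assumption for illustration only, not part of the hypotheses above), the function \(F(x,y)=\|y-x\|^{2}\) on \(H=\mathbb{R}^{n}\) satisfies (A1)-(A5) with the constant functions \(\beta(t)\equiv1\) and \(\gamma(t)\equiv2\), since \(\nabla_{2}F(x,y)=2(y-x)\). A quick numerical sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice (an assumption, not from the paper): F(x, y) = ||y - x||^2.
# Then grad_2 F(x, y) = 2*(y - x), and (A1)-(A5) hold with beta = 1, gamma = 2.
F = lambda x, y: float(np.dot(y - x, y - x))
grad2F = lambda x, y: 2.0 * (y - x)

beta, gamma = 1.0, 2.0
for _ in range(1000):
    x, y1, y2 = rng.standard_normal((3, 4))
    # (A2): nonnegativity.
    assert F(x, y1) >= 0.0
    # (A3): strong convexity in the second argument with modulus beta.
    lhs = F(x, y1) - F(x, y2)
    rhs = grad2F(x, y2) @ (y1 - y2) + beta * np.dot(y1 - y2, y1 - y2)
    assert lhs >= rhs - 1e-9
    # (A5): Lipschitz continuity of grad_2 F with constant gamma.
    assert np.linalg.norm(grad2F(x, y1) - grad2F(x, y2)) <= gamma * np.linalg.norm(y1 - y2) + 1e-9
```

For this choice, (A3) and (A5) in fact hold with equality, so β = 1 and γ = 2 are the sharp constants.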

Lemma 3.1

Suppose F satisfies (A1)-(A4). Then, for each \(t\in\Omega\), \(\nabla_{2}F(x(t), y(t)) = 0\) if and only if \(x(t) = y(t)\).

Proof

Suppose that, for some \(t\in\Omega\), \(\nabla_{2}F(x(t), y(t)) = 0\). Then (A3) implies that \(y(t)\) is the unique minimizer of \(F(x(t),\cdot)\). Since \(F(x(t),\cdot)\) always has the unique minimizer \(x(t)\) by (A2) and (A4), it follows that \(x(t)=y(t)\). Conversely, suppose \(x(t)=y(t)\). Then, by (A4), \(F(x(t),y(t))=0\), and it follows from (A2) that \(y(t)\) is a global minimizer of \(F(x(t),\cdot)\). Hence, we have \(\nabla _{2}F(x(t), y(t)) = 0\).

This completes the proof. □

Lemma 3.2

Suppose that the function F satisfies (A3). Then, for any \(t\in\Omega\), \(y_{1}(t), y_{2}(t)\in H\), we have

$$\bigl\langle \nabla_{2}F\bigl(x(t),y_{1}(t)\bigr)- \nabla_{2}F\bigl(x(t),y_{2}(t)\bigr), y_{1}(t)-y_{2}(t) \bigr\rangle \geq2\beta(t)\bigl\Vert y_{1}(t)-y_{2}(t)\bigr\Vert ^{2}; $$

that is, \(\nabla_{2}F(x(t),\cdot)\) is strongly monotone with modulus \(2\beta (t)\) on H.

Proof

For any \(t\in\Omega\), \(y_{1}(t), y_{2}(t)\in H\), it follows from (A3) that

$$F\bigl(x(t),y_{1}(t)\bigr)-F\bigl(x(t),y_{2}(t)\bigr)\geq \bigl\langle \nabla_{2}F\bigl(x(t),y_{2}(t)\bigr), y_{1}(t)-y_{2}(t) \bigr\rangle +\beta(t)\bigl\Vert y_{1}(t)-y_{2}(t)\bigr\Vert ^{2} $$

and

$$F\bigl(x(t),y_{2}(t)\bigr)-F\bigl(x(t),y_{1}(t)\bigr)\geq \bigl\langle \nabla_{2}F\bigl(x(t),y_{1}(t)\bigr), y_{2}(t)-y_{1}(t) \bigr\rangle +\beta(t)\bigl\Vert y_{1}(t)-y_{2}(t)\bigr\Vert ^{2}. $$

Adding the above inequalities together, we get the required result.

This completes the proof. □
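For the concrete (assumed) choice \(F(x,y)=\|y-x\|^{2}\), which satisfies (A3) with \(\beta(t)\equiv1\), the conclusion of Lemma 3.2 can be checked numerically; for this F it even holds with equality:

```python
import numpy as np

rng = np.random.default_rng(1)
grad2F = lambda x, y: 2.0 * (y - x)  # grad_2 of the illustrative F(x, y) = ||y - x||^2
beta = 1.0

for _ in range(1000):
    x, y1, y2 = rng.standard_normal((3, 5))
    lhs = (grad2F(x, y1) - grad2F(x, y2)) @ (y1 - y2)
    # Lemma 3.2: grad_2 F(x, .) is strongly monotone with modulus 2*beta.
    assert lhs >= 2.0 * beta * np.dot(y1 - y2, y1 - y2) - 1e-9
```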

Lemma 3.3

Suppose that the function F satisfies (A1)-(A5) with the associated measurable functions \(\beta,\gamma:\Omega \to(0,+\infty)\). Then we have

$$F\bigl(x(t),y(t)\bigr)\leq\bigl(\gamma(t)-\beta(t)\bigr)\bigl\Vert x(t)-y(t) \bigr\Vert ^{2}, \quad \forall x(t),y(t)\in H. $$

Proof

From (A3), we have

$$\begin{aligned}& F\bigl(x(t),x(t)\bigr)-F\bigl(x(t),y(t)\bigr) \\& \quad \geq \bigl\langle \nabla_{2}F\bigl(x(t),y(t)\bigr), x(t)-y(t) \bigr\rangle +\beta(t)\bigl\Vert x(t)-y(t)\bigr\Vert ^{2},\quad \forall x(t),y(t)\in H. \end{aligned}$$

By (A4), \(F(x(t),x(t))=0\), so the above inequality can be written as \(-F(x(t),y(t))\geq \langle\nabla_{2}F(x(t),y(t)), x(t)-y(t) \rangle +\beta(t)\|x(t)-y(t)\|^{2}\). Hence, by the Cauchy-Schwarz inequality,

$$ F\bigl(x(t),y(t)\bigr)\leq\bigl\Vert \nabla_{2}F\bigl(x(t),y(t) \bigr)\bigr\Vert \bigl\Vert x(t)-y(t)\bigr\Vert -\beta(t)\bigl\Vert x(t)-y(t)\bigr\Vert ^{2}. $$
(3.2)

From (A5), we have \(\|\nabla_{2}F(x(t),x(t))-\nabla_{2}F(x(t),y(t))\|\leq \gamma(t)\|x(t)-y(t)\|\). Since \(\nabla_{2}F(x(t),x(t))=0\) by Lemma 3.1, this can be written as

$$ \bigl\Vert \nabla_{2}F\bigl(x(t),y(t)\bigr)\bigr\Vert \leq \gamma(t)\bigl\Vert x(t)-y(t)\bigr\Vert . $$
(3.3)

From inequalities (3.2) and (3.3), we have

$$F\bigl(x(t),y(t)\bigr)\leq\bigl(\gamma(t)-\beta(t)\bigr)\bigl\Vert x(t)-y(t) \bigr\Vert ^{2},\quad \forall x(t),y(t)\in H. $$

This completes the proof. □

Lemma 3.4

Suppose that the function F satisfies (A1)-(A4). Then, for each \(t\in\Omega\), the measurable mapping \(x:\Omega \to H\) is a solution of the RGVIP (2.1) if and only if \(h(t,x(t))=\pi _{\theta(t)}(x(t))\).

Proof

For any \(x(t)\in H\), since \(\pi_{\theta(t)}(x(t))\) minimizes the convex function \(-\Psi_{\theta(t)}(x(t), \cdot)\) on H, we have

$$0 \in\partial\bigl(-\Psi_{\theta(t)}\bigr) \bigl(x(t),\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr) =w(t)+\partial\phi\bigl(\pi_{\theta (t)}\bigl(x(t)\bigr)\bigr)+ \theta(t) \nabla_{2}F\bigl(h\bigl(t,x(t)\bigr),\pi_{\theta(t)}\bigl(x(t)\bigr) \bigr); $$

that is,

$$-w(t)- \theta(t)\nabla_{2}F\bigl(h\bigl(t,x(t)\bigr), \pi_{\theta(t)}\bigl(x(t)\bigr)\bigr)\in\partial \phi\bigl(\pi_{\theta(t)} \bigl(x(t)\bigr)\bigr). $$

By the definition of the sub-gradient,

$$ \begin{aligned} &\phi\bigl(y(t)\bigr)\geq\phi\bigl(\pi_{\theta(t)}\bigl(x(t) \bigr)\bigr) \\ &\hphantom{\phi\bigl(y(t)\bigr)\geq{}}{}- \bigl\langle w(t)+\theta (t)\nabla_{2}F \bigl(h \bigl(t,x(t)\bigr),\pi_{\theta(t)}\bigl(x(t)\bigr) \bigr), y(t)- \pi_{\theta (t)}\bigl(x(t)\bigr) \bigr\rangle ,\quad \textit{i.e.}, \\ & \bigl\langle w(t), y(t)-\pi_{\theta(t)}\bigl(x(t)\bigr) \bigr\rangle +\phi \bigl(y(t)\bigr)- \phi\bigl(\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr) \\ &\quad \geq\theta(t) \bigl\langle -\nabla _{2}F \bigl(h\bigl(t,x(t)\bigr),\pi_{\theta(t)} \bigl(x(t)\bigr) \bigr), y(t)-\pi_{\theta (t)}\bigl(x(t)\bigr) \bigr\rangle . \end{aligned} $$
(3.4)

If \(h(t,x(t))=\pi_{\theta(t)}(x(t))\), then \(\nabla_{2}F(h(t,x(t)),\pi_{\theta(t)}(x(t)))=0\) by Lemma 3.1, and (3.4) reduces to \(\langle w(t), y(t)-h(t,x(t))\rangle+\phi(y(t))-\phi(h(t,x(t)))\geq0\) for all \(y(t)\in H\); that is, \(x(t)\) is a solution of the RGVIP (2.1).

Conversely, if \(x(t)\) is a solution of the RGVIP (2.1), then taking \(y(t)=\pi_{\theta(t)}(x(t))\) in (2.1), we obtain

$$\bigl\langle w(t), \pi_{\theta(t)}\bigl(x(t)\bigr)-h\bigl(t,x(t)\bigr) \bigr\rangle +\phi\bigl(\pi _{\theta(t)}\bigl(x(t)\bigr)\bigr)-\phi\bigl(h \bigl(t,x(t)\bigr)\bigr)\geq0. $$

On the other hand, since \(h(t,x(t))\in H\), it follows from (3.4) that

$$\begin{aligned}& \bigl\langle w(t), h\bigl(t,x(t)\bigr)-\pi_{\theta(t)}\bigl(x(t)\bigr) \bigr\rangle +\phi \bigl(h\bigl(t,x(t)\bigr)\bigr)- \phi\bigl(\pi_{\theta(t)} \bigl(x(t)\bigr)\bigr) \\& \quad \geq\theta(t) \bigl\langle -\nabla_{2}F \bigl(h\bigl(t,x(t) \bigr),\pi_{\theta (t)}\bigl(x(t)\bigr) \bigr), h\bigl(t,x(t)\bigr)- \pi_{\theta(t)}\bigl(x(t)\bigr) \bigr\rangle . \end{aligned}$$

Adding the above two inequalities, we have

$$\theta(t) \bigl\langle \nabla_{2}F \bigl(h\bigl(t,x(t)\bigr), \pi_{\theta(t)}\bigl(x(t)\bigr) \bigr), h\bigl(t,x(t)\bigr)-\pi_{\theta(t)} \bigl(x(t)\bigr) \bigr\rangle \geq0. $$

By the strong convexity (A3) of \(F(h(t,x(t)),\cdot)\), together with (A2) and (A4), we have

$$\begin{aligned}& \bigl\langle \nabla_{2}F \bigl(h\bigl(t,x(t)\bigr), \pi_{\theta(t)}\bigl(x(t)\bigr) \bigr), h\bigl(t,x(t)\bigr)-\pi_{\theta(t)} \bigl(x(t)\bigr) \bigr\rangle +\beta(t)\bigl\Vert h\bigl(t,x(t)\bigr)-\pi _{\theta(t)}\bigl(x(t)\bigr)\bigr\Vert ^{2} \\& \quad \leq F\bigl(h\bigl(t,x(t)\bigr),h\bigl(t,x(t)\bigr)\bigr)- F \bigl(h \bigl(t,x(t)\bigr),\pi_{\theta(t)}\bigl(x(t)\bigr) \bigr)\leq0. \end{aligned}$$

Since \(\theta(t)>0\), combining the last two inequalities gives \(h(t,x(t))=\pi_{\theta(t)}(x(t))\).

This completes the proof. □
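As a sanity check of this fixed-point characterization, consider an assumed toy instance (not from the paper): \(H=\mathbb{R}^{2}\), \(\phi\equiv0\), \(h(t,x)=x\), \(w(t)=x(t)\), \(F(x,y)=\|y-x\|^{2}\), and \(\theta(t)\equiv1\). A direct computation gives \(\pi_{\theta(t)}(x)=x-x/(2\theta)=x/2\), so \(h(t,x(t))=\pi_{\theta(t)}(x(t))\) exactly when \(x(t)=0\), which is the unique solution of this instance:

```python
import numpy as np

theta = 1.0
# Toy instance (assumed for illustration): phi = 0, h(t,x) = x, w(t) = x(t),
# F(x, y) = ||y - x||^2. The maximizer of Psi_theta(x, .) is then x - x/(2*theta).
pi = lambda x: x - x / (2.0 * theta)

x = np.array([1.0, -2.0])
assert np.linalg.norm(x - pi(x)) > 0           # x is not a solution: h(t,x) != pi(x)

x0 = np.zeros(2)                               # unique solution of <x0, y - x0> >= 0 on R^2
assert np.allclose(x0, pi(x0))                 # Lemma 3.4: h(t,x0) = pi(x0)
```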

Theorem 3.1

Let, for each \(t\in\Omega\), \(x_{0}(t)\in H\) be a solution of the RGVIP (2.1). Assume that, for each \(t\in\Omega\), the random multi-valued mapping T is strongly h-monotone and Ĥ-Lipschitz continuous with measurable functions \(\alpha,\lambda :\Omega\to(0,+\infty)\), respectively. If \(h:\Omega\times H\to H\) is Lipschitz continuous with measurable function \(L:\Omega\to(0,+\infty)\) and the function F satisfies (A1)-(A5), then

$$\begin{aligned} \bigl\Vert x(t)-x_{0}(t)\bigr\Vert \leq&{\lambda(t)(1+\epsilon)+\theta(t)\gamma (t)L(t)\over {\alpha(t)}} \\ &{}\times{ \bigl\Vert h\bigl(t,x(t)\bigr)-\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr\Vert }, \quad \forall x(t)\in H, \forall t\in\Omega. \end{aligned}$$

Proof

Since \(x_{0}(t)\) is a solution of the RGVIP (2.1) and \(\pi _{\theta(t)}(x(t))\in H\) for every \(x(t)\in H\), we have

$$ \bigl\langle w_{0}(t), \pi_{\theta(t)}\bigl(x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle +\phi \bigl(\pi_{\theta(t)} \bigl(x(t)\bigr)\bigr)-\phi\bigl(h\bigl(t,x_{0}(t)\bigr)\bigr) \geq0. $$
(3.5)

On the other hand, taking \(y(t)=h(t,x_{0}(t))\) in (3.4), we have

$$\begin{aligned}& \bigl\langle w(t), h\bigl(t,x_{0}(t)\bigr)-\pi_{\theta(t)} \bigl(x(t)\bigr) \bigr\rangle +\phi \bigl(h\bigl(t,x_{0}(t)\bigr)\bigr)- \phi\bigl(\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr) \\& \quad \geq\theta(t) \bigl\langle -\nabla_{2}F \bigl(h\bigl(t,x(t) \bigr),\pi_{\theta (t)}\bigl(x(t)\bigr) \bigr), h\bigl(t,x_{0}(t) \bigr)-\pi_{\theta(t)}\bigl(x(t)\bigr) \bigr\rangle . \end{aligned}$$
(3.6)

Adding the two inequalities (3.5) and (3.6), we have

$$\begin{aligned}& \bigl\langle w(t)-w_{0}(t), \pi_{\theta(t)}\bigl(x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle \\& \quad \leq\theta(t) \bigl\langle \nabla_{2}F \bigl(h\bigl(t,x(t)\bigr),\pi_{\theta (t)}\bigl(x(t)\bigr) \bigr), h\bigl(t,x_{0}(t)\bigr)-\pi_{\theta(t)}\bigl(x(t)\bigr) \bigr\rangle . \end{aligned}$$
(3.7)

By Lemmas 3.1, 3.2, and (A5) we have

$$\begin{aligned}& \theta(t)\bigl\langle \nabla_{2}F\bigl(h\bigl(t,x(t)\bigr), \pi_{\theta(t)}\bigl(x(t)\bigr)\bigr), h\bigl(t,x_{0}(t)\bigr)- \pi_{\theta(t)}\bigl(x(t)\bigr)\bigr\rangle \\& \quad =\theta(t)\bigl\langle \nabla_{2}F\bigl(h\bigl(t,x(t)\bigr), \pi_{\theta(t)}\bigl(x(t)\bigr) \bigr)-\nabla_{2}F\bigl(h\bigl(t,x(t) \bigr),h\bigl(t,x(t)\bigr)\bigr), h\bigl(t,x_{0}(t)\bigr)-h\bigl(t,x(t) \bigr)\bigr\rangle \\& \qquad {}-\theta(t)\bigl\langle \nabla_{2}F\bigl(h\bigl(t,x(t)\bigr),h \bigl(t,x(t)\bigr)\bigr)-\nabla_{2}F \bigl(h\bigl(t,x(t)\bigr), \pi_{\theta(t)}\bigl(x(t)\bigr)\bigr), \\& \qquad h\bigl(t,x(t)\bigr)-\pi_{\theta (t)} \bigl(x(t)\bigr)\bigr\rangle \\& \quad \leq\theta(t)\bigl\Vert \nabla_{2}F\bigl(h\bigl(t,x(t)\bigr), \pi_{\theta(t)}\bigl(x(t)\bigr)\bigr)-\nabla _{2}F\bigl(h\bigl(t,x(t) \bigr),h\bigl(t,x(t)\bigr)\bigr)\bigr\Vert \\& \qquad {}\times\bigl\Vert h\bigl(t,x_{0}(t) \bigr)-h\bigl(t,x(t)\bigr)\bigr\Vert -2\theta(t)\beta(t)\bigl\Vert h\bigl(t,x(t)\bigr)-\pi_{\theta(t)} \bigl(x(t)\bigr)\bigr\Vert ^{2} \\& \quad \leq\theta(t)\gamma(t)\bigl\Vert h\bigl(t,x(t)\bigr)-\pi_{\theta(t)} \bigl(x(t)\bigr)\bigr\Vert \bigl\Vert h\bigl(t,x_{0}(t)\bigr)-h \bigl(t,x(t)\bigr)\bigr\Vert \\& \qquad {}-2\theta(t)\beta(t)\bigl\Vert h\bigl(t,x(t) \bigr)-\pi _{\theta(t)}\bigl(x(t)\bigr)\bigr\Vert ^{2} \\& \quad \leq\theta(t)\gamma(t)\bigl\Vert h\bigl(t,x(t)\bigr)-\pi_{\theta(t)} \bigl(x(t)\bigr)\bigr\Vert \bigl\Vert h\bigl(t,x_{0}(t)\bigr)-h \bigl(t,x(t)\bigr)\bigr\Vert . \end{aligned}$$

By using the Lipschitz continuity of h, we have

$$\begin{aligned}& \theta(t) \bigl\langle \nabla_{2}F \bigl(h\bigl(t,x(t)\bigr), \pi_{\theta(t)}\bigl(x(t)\bigr) \bigr), h\bigl(t,x_{0}(t)\bigr)- \pi_{\theta(t)}\bigl(x(t)\bigr) \bigr\rangle \\& \quad \leq\theta(t)\gamma (t)L(t)\bigl\Vert h\bigl(t,x(t)\bigr)-\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr\Vert \bigl\Vert x(t)-x_{0}(t)\bigr\Vert . \end{aligned}$$
(3.8)

From the inequalities (3.7) and (3.8), we have

$$\begin{aligned}& \bigl\langle w(t)-w_{0}(t), \pi_{\theta(t)}\bigl(x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle \\& \quad \leq\theta(t)\gamma(t)L(t)\bigl\Vert h\bigl(t,x(t)\bigr)-\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr\Vert \bigl\Vert x(t)-x_{0}(t)\bigr\Vert . \end{aligned}$$
(3.9)

Now, from the strong h-monotonicity of T, we have

$$\begin{aligned}& \alpha(t)\bigl\Vert x(t)-x_{0}(t)\bigr\Vert ^{2} \\& \quad \leq \bigl\langle w(t)-w_{0}(t), h\bigl(t,x(t)\bigr)-h\bigl(t,x_{0}(t) \bigr)\bigr\rangle \\& \quad \leq \bigl\langle w(t)-w_{0}(t), h\bigl(t,x(t)\bigr)- \pi_{\theta(t)}\bigl(x(t)\bigr) \bigr\rangle + \bigl\langle w(t)-w_{0}(t), \pi_{\theta(t)}\bigl(x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle \\& \quad \leq \bigl\Vert w(t)-w_{0}(t)\bigr\Vert \bigl\Vert h \bigl(t,x(t)\bigr)-\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr\Vert + \bigl\langle w(t)-w_{0}(t), \pi_{\theta(t)}\bigl(x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle . \end{aligned}$$

From the Ĥ-Lipschitz continuity of T and inequality (3.9), we have

$$\begin{aligned}& \alpha(t)\bigl\Vert x(t)-x_{0}(t)\bigr\Vert ^{2} \\& \quad \leq \lambda(t) (1+\epsilon)\bigl\Vert x(t)-x_{0}(t)\bigr\Vert \bigl\Vert h\bigl(t,x(t)\bigr)-\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr\Vert \\& \qquad {}+\theta(t)\gamma(t) L(t)\bigl\Vert h\bigl(t,x(t)\bigr)-\pi_{\theta(t)} \bigl(x(t)\bigr)\bigr\Vert \bigl\Vert x(t)-x_{0}(t)\bigr\Vert \\& \quad \leq \bigl(\lambda(t) (1+\epsilon)+\theta(t)\gamma(t)L(t)\bigr)\bigl\Vert h \bigl(t,x(t)\bigr)-\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr\Vert \bigl\Vert x(t)-x_{0}(t)\bigr\Vert . \end{aligned}$$

Therefore,

$$\bigl\Vert x(t)-x_{0}(t)\bigr\Vert \leq{\lambda(t)(1+\epsilon)+\theta(t)\gamma (t)L(t)\over {\alpha(t)}}{ \bigl\Vert h\bigl(t,x(t)\bigr)-\pi_{\theta(t)}\bigl(x(t)\bigr)\bigr\Vert }, \quad \forall x(t)\in H. $$

This completes the proof. □

Theorem 3.2

Suppose that the function F satisfies (A1)-(A4). Then we have

$$G_{\theta(t)}\bigl(x(t)\bigr)\geq\theta(t)\beta(t)\bigl\Vert h\bigl(t,x(t) \bigr)-\pi_{\theta (t)}\bigl(x(t)\bigr)\bigr\Vert ^{2},\quad \forall x(t)\in H, \forall t\in\Omega. $$

In particular, \(G_{\theta(t)}(x(t)) = 0\) if and only if \(x(t)\) solves the RGVIP (2.1).

Proof

For any \(x(t)\in H\), taking \(y(t) = h(t,x(t))\) in (3.4), we have

$$\begin{aligned} \begin{aligned} &\bigl\langle w(t), h\bigl(t,x(t)\bigr)-\pi_{\theta(t)}\bigl(x(t)\bigr) \bigr\rangle + \phi \bigl(h\bigl(t,x(t)\bigr)\bigr)-\phi\bigl(\pi_{\theta(t)} \bigl(x(t)\bigr)\bigr) \\ &\quad \geq\theta(t) \bigl\langle -\nabla _{2}F \bigl(h \bigl(t,x(t)\bigr),\pi_{\theta(t)}\bigl(x(t)\bigr) \bigr), h\bigl(t,x(t)\bigr)- \pi_{\theta (t)}\bigl(x(t)\bigr) \bigr\rangle . \end{aligned} \end{aligned}$$

Therefore, we have

$$\begin{aligned} G_{\theta(t)}\bigl(x(t)\bigr) \geq& \bigl\langle -\theta(t)\nabla_{2}F \bigl(h\bigl(t,x(t)\bigr),\pi _{\theta(t)}\bigl(x(t)\bigr) \bigr), h \bigl(t,x(t)\bigr)-\pi_{\theta(t)}\bigl(x(t)\bigr) \bigr\rangle \\ &{}-\theta(t)F \bigl(h\bigl(t,x(t)\bigr),\pi_{\theta(t)}\bigl(x(t)\bigr) \bigr). \end{aligned}$$

From (A3), we have

$$G_{\theta(t)}\bigl(x(t)\bigr)\geq\theta(t)\bigl[-F\bigl(h\bigl(t,x(t)\bigr),h \bigl(t,x(t)\bigr)\bigr) +\beta(t)\bigl\Vert h\bigl(t,x(t)\bigr)- \pi_{\theta(t)}\bigl(x(t)\bigr)\bigr\Vert ^{2}\bigr]. $$

From (A4), we have

$$G_{\theta(t)}\bigl(x(t)\bigr)\geq\theta(t)\beta(t)\bigl\Vert h\bigl(t,x(t) \bigr)-\pi_{\theta (t)}\bigl(x(t)\bigr)\bigr\Vert ^{2}. $$

The second assertion is obvious by Lemma 3.4.

This completes the proof. □
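To make the bound concrete, consider an assumed special case (illustration only): \(H=\mathbb{R}^{2}\), \(\phi\equiv0\), \(h(t,x)=x\), \(F(x,y)=\|y-x\|^{2}\) (so \(\beta(t)\equiv1\)), and a fixed w. Then \(\Psi_{\theta}(x,y)=\langle w, x-y\rangle-\theta\|y-x\|^{2}\), the maximizer is \(\pi_{\theta}(x)=x-w/(2\theta)\), and \(G_{\theta}(x)=\|w\|^{2}/(4\theta)\), which attains the lower bound \(\theta\beta\|h(t,x)-\pi_{\theta}(x)\|^{2}\) of Theorem 3.2 with equality:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, beta = 0.5, 1.0
w = np.array([3.0, -4.0])          # assumed toy datum; ||w|| = 5
x = rng.standard_normal(2)

# Psi_theta(x, .) for the toy instance phi = 0, h(t,x) = x, F(x,y) = ||y - x||^2.
Psi = lambda y: w @ (x - y) - theta * np.dot(y - x, y - x)

# Closed-form maximizer and gap value for this special case.
pi = x - w / (2.0 * theta)
G = Psi(pi)
assert np.isclose(G, np.dot(w, w) / (4.0 * theta))   # G_theta(x) = ||w||^2 / (4 theta)

# Theorem 3.2 lower bound holds (here with equality).
assert G >= theta * beta * np.dot(x - pi, x - pi) - 1e-9

# pi is indeed the maximizer: random candidates never beat it.
for _ in range(1000):
    y = x + rng.standard_normal(2) * 5.0
    assert Psi(y) <= G + 1e-9
```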

From Theorem 3.1 and Theorem 3.2, we can easily get the following global error bound for the RGVIP (2.1).

Theorem 3.3

Let, for each \(t\in\Omega\), \(x_{0}(t)\in H\) be a solution of the RGVIP (2.1). Assume that, for each \(t\in\Omega\), the random multi-valued mapping T is strongly h-monotone and Ĥ-Lipschitz continuous with measurable functions \(\alpha,\lambda :\Omega\to(0,+\infty)\), respectively. If \(h:\Omega\times H\to H\) is Lipschitz continuous with measurable function \(L:\Omega\to(0,+\infty)\) and the function F satisfies (A1)-(A5), then \(G_{\theta(t)}(x(t))\) provides a global error bound for the RGVIP (2.1):

$$\bigl\Vert x(t)-x_{0}(t)\bigr\Vert \leq{\lambda(t)(1+\epsilon)+\theta(t)\gamma(t)L(t)\over {\alpha(t)}{\sqrt{ \theta(t)\beta(t)}}}{ \sqrt{G_{\theta(t)}\bigl(x(t)\bigr) }},\quad \forall x(t)\in H, \forall t\in \Omega. $$

We can also derive a global error bound for the RGVIP (2.1) without using the Ĥ-Lipschitz continuity of T.

Theorem 3.4

Let, for each \(t\in\Omega\), \(x_{0}(t)\in H\) be a solution of the RGVIP (2.1). Assume that, for each \(t\in\Omega\), the random multi-valued mapping T is strongly h-monotone and \(h:\Omega \times H\to H\) is Lipschitz continuous with measurable functions \(\alpha ,L:\Omega\to(0,+\infty)\), respectively. If the function F satisfies (A1)-(A5) and \(\alpha(t)+\theta(t)(\beta(t)-\gamma(t))L^{2}(t)>0\) for each \(t\in\Omega\), then

$$\bigl\Vert x(t)-x_{0}(t)\bigr\Vert \leq{1\over {\sqrt{\alpha(t)+ \theta(t)(\beta(t)-\gamma (t))L^{2}(t)}}}{ \sqrt{G_{\theta(t)}\bigl(x(t)\bigr) }}, \quad \forall x(t)\in H, \forall t\in \Omega. $$

Proof

For each \(t\in\Omega\), fix an arbitrary \(x(t)\in H\). Since \(x_{0}(t)\) is a solution of the RGVIP (2.1), from the definition of \(G_{\theta(t)}(x(t))\) and the strong h-monotonicity of T, we obtain

$$\begin{aligned} G_{\theta(t)}\bigl(x(t)\bigr) \geq& \bigl\langle w(t), h\bigl(t,x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle + \phi\bigl(h\bigl(t,x(t)\bigr) \bigr)-\phi\bigl(h\bigl(t,x_{0}(t)\bigr)\bigr) \\ &{}-\theta(t)F\bigl(h \bigl(t,x(t)\bigr), h\bigl(t,x_{0}(t)\bigr)\bigr) \\ \geq& \bigl\langle w_{0}(t), h\bigl(t,x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle + \alpha(t)\bigl\Vert x(t)-x_{0}(t)\bigr\Vert ^{2} \\ &{}+\phi\bigl(h\bigl(t,x(t)\bigr) \bigr)-\phi\bigl(h\bigl(t,x_{0}(t)\bigr)\bigr)-\theta(t)F\bigl(h \bigl(t,x(t)\bigr), h\bigl(t,x_{0}(t)\bigr)\bigr). \end{aligned}$$
(3.10)

Since for each \(t\in\Omega\), \(x_{0}(t)\in H\) is a solution of the RGVIP (2.1),

$$\bigl\langle w_{0}(t),y(t)- h\bigl(t,x_{0}(t)\bigr)\bigr\rangle +\phi\bigl(y(t)\bigr)-\phi \bigl(h\bigl(t,x_{0}(t)\bigr)\bigr) \geq0. $$

Taking \(y(t)=h(t,x(t))\) in the above inequality, we have

$$ \bigl\langle w_{0}(t),h\bigl(t,x(t)\bigr)- h\bigl(t,x_{0}(t) \bigr)\bigr\rangle +\phi\bigl(h\bigl(t,x(t)\bigr)\bigr)-\phi \bigl(h \bigl(t,x_{0}(t)\bigr)\bigr)\geq0. $$
(3.11)

From inequalities (3.10) and (3.11), we have

$$ G_{\theta(t)}\bigl(x(t)\bigr)\geq\alpha(t)\bigl\Vert x(t)-x_{0}(t) \bigr\Vert ^{2}-\theta(t)F\bigl(h\bigl(t,x(t)\bigr), h \bigl(t,x_{0}(t)\bigr)\bigr). $$
(3.12)

Moreover, by Lemma 3.3 and the Lipschitz continuity of h, we have

$$\begin{aligned} -F\bigl(h\bigl(t,x(t)\bigr), h\bigl(t,x_{0}(t)\bigr)\bigr) \geq&\bigl( \beta(t)-\gamma(t)\bigr)\bigl\Vert h\bigl(t,x(t)\bigr)-h\bigl(t,x_{0}(t) \bigr)\bigr\Vert ^{2} \\ \geq&\bigl(\beta(t)-\gamma(t)\bigr)L^{2}(t) \bigl\Vert x(t)-x_{0}(t)\bigr\Vert ^{2}. \end{aligned}$$
(3.13)

From (3.12) and (3.13), we have

$$\bigl\Vert x(t)-x_{0}(t)\bigr\Vert \leq{1\over {\sqrt{\alpha(t)+ \theta(t)(\beta(t)-\gamma (t))L^{2}(t)}}}{ \sqrt{G_{\theta(t)}\bigl(x(t)\bigr) }}, \quad \forall x(t)\in H. $$

This completes the proof. □
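The bound of Theorem 3.4 can be checked on an assumed toy instance (not from the paper): \(H=\mathbb{R}^{3}\), \(\phi\equiv0\), \(h(t,x)=x\) (so \(L=1\)), \(T(x)=\{x\}\) (strongly h-monotone with \(\alpha=1\)), and \(F(x,y)=\|y-x\|^{2}\) (\(\beta=1\), \(\gamma=2\)). The unique solution is \(x_{0}=0\), the gap function has the closed form \(G_{\theta}(x)=\|x\|^{2}/(4\theta)\), and choosing \(\theta=1/4\) keeps \(\alpha+\theta(\beta-\gamma)L^{2}=1-\theta>0\):

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 0.25                       # chosen so alpha + theta*(beta - gamma)*L^2 = 1 - theta > 0
alpha, beta, gamma, L = 1.0, 1.0, 2.0, 1.0
x0 = np.zeros(3)                   # unique solution of <x0, y - x0> >= 0 for all y in R^3

# Closed-form gap value and error bound for this assumed toy instance.
gap = lambda x: np.dot(x, x) / (4.0 * theta)
err_bound = lambda x: np.sqrt(gap(x) / (alpha + theta * (beta - gamma) * L**2))

for _ in range(1000):
    x = rng.standard_normal(3) * 10.0
    # Theorem 3.4: the distance to the solution is controlled by sqrt(G_theta(x)).
    assert np.linalg.norm(x - x0) <= err_bound(x) + 1e-9
```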

Remark 3.1

For a suitable choice of the mappings T, h, \(\phi\) and the function \(a:H\to[0,1]\), we can obtain several known results [1, 3, 6, 9, 11] as special cases of the main result of this paper.

4 Conclusion

In this paper, gap functions for random generalized variational inequality problems have been introduced by using the fuzzy residual vector. Using a generalized regularized gap function, we derived global error bounds, i.e., upper estimates of the distance to the solution set of a random generalized variational inequality problem, which is one of the most useful applications of gap functions. The concepts of fuzzy theory can be applied to fuzzy optimization problems and probabilistic models, and many innovative methods can be developed further in this area of variational inequality theory. Further attention is needed on the relationship between fuzzy sets, random sets, and fuzzy variational inequalities, which might provide useful applications to gap functions and their error bounds.

References

  1. Aussel, D, Dutta, J: On gap functions for multivalued Stampacchia variational inequalities. J. Optim. Theory Appl. 149, 513-527 (2011)

  2. Aussel, D, Correa, R, Marechal, M: Gap functions for quasi variational inequalities and generalized Nash equilibrium problems. J. Optim. Theory Appl. 151, 474-488 (2011)

  3. Aussel, D, Gupta, R, Mehra, A: Gap functions and error bounds for inverse quasi-variational inequality problems. J. Math. Anal. Appl. 407, 270-280 (2013)

  4. Fukushima, M: Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Math. Program. 53, 99-110 (1992)

  5. Fukushima, M: Merit functions for variational inequality and complementarity problems. In: Nonlinear Optimization and Applications, pp. 155-170. Springer, New York (1996)

  6. Gupta, R, Mehra, A: Gap functions and error bounds for quasi variational inequalities. J. Glob. Optim. 53, 737-748 (2012)

  7. Huang, LR, Ng, KF: Equivalent optimization formulations and error bounds for variational inequality problem. J. Optim. Theory Appl. 125, 299-314 (2005)

  8. Khan, SA, Chen, JW: Gap function and global error bounds for generalized mixed quasi variational inequalities. Appl. Math. Comput. 260, 71-81 (2015)

  9. Khan, SA, Chen, JW: Gap functions and error bounds for generalized mixed vector equilibrium problems. J. Optim. Theory Appl. 166, 767-776 (2015)

  10. Khan, SA, Iqbal, J, Shehu, Y: Mixed quasi variational inequalities involving error bounds. J. Inequal. Appl. 2015, 417 (2015)

  11. Noor, MA: Merit functions for general variational inequalities. J. Math. Anal. Appl. 316, 736-752 (2006)

  12. Qu, B, Wang, CY, Zhang, JZ: Convergence and error bound of a method for solving variational inequality problems via the generalized D-gap function. J. Optim. Theory Appl. 119, 535-552 (2003)

  13. Solodov, MV, Tseng, P: Some methods based on the D-gap function for solving monotone variational inequalities. Comput. Optim. Appl. 17, 255-277 (2000)

  14. Solodov, MV: Merit functions and error bounds for generalized variational inequalities. J. Math. Anal. Appl. 287, 405-414 (2003)

  15. Tang, GJ, Huang, NJ: Gap functions and global error bounds for set-valued mixed variational inequalities. Taiwan. J. Math. 17, 1267-1286 (2013)

  16. Yamashita, N, Fukushima, M: Equivalent unconstrained minimization and global error bounds for variational inequality problems. SIAM J. Control Optim. 35, 273-284 (1997)

  17. Yamashita, N, Taji, K, Fukushima, M: Unconstrained optimization reformulations of variational inequality problems. J. Optim. Theory Appl. 92, 439-456 (1997)

  18. Wu, JH, Florian, M, Marcotte, P: A general descent framework for the monotone variational inequality problem. Math. Program. 61, 281-300 (1993)

  19. Zadeh, LA: Fuzzy sets. Inf. Control 8, 338-353 (1965)

  20. Chang, SS, Zhu, YG: On variational inequalities for fuzzy mappings. Fuzzy Sets Syst. 32, 359-367 (1989)

  21. Chang, SS: Fixed Point Theory with Applications. Chongqing Publishing House, Chongqing (1984)

  22. Chang, SS: Coincidence theorems and variational inequalities for fuzzy mappings. Fuzzy Sets Syst. 61, 359-368 (1994)

  23. Chang, SS, Huang, NJ: Generalized complementarity problems for fuzzy mappings. Fuzzy Sets Syst. 55, 227-234 (1993)

  24. Noor, MA: Variational inequalities for fuzzy mappings (I). Fuzzy Sets Syst. 55, 309-312 (1993)

  25. Noor, MA: Variational inequalities for fuzzy mappings (II). Fuzzy Sets Syst. 97, 101-107 (1998)

  26. Noor, MA: Variational inequalities for fuzzy mappings (III). Fuzzy Sets Syst. 110, 101-108 (2000)

  27. Huang, NJ: Random generalized nonlinear variational inclusions for random fuzzy mappings. Fuzzy Sets Syst. 105, 437-444 (1999)

  28. Dai, HX: Generalized mixed variational-like inequality for random fuzzy mappings. J. Comput. Appl. Math. 224, 20-28 (2009)

  29. Huang, NJ: Random generalized set-valued implicit variational inequalities. J. Liaoning Norm. Univ. 18, 89-93 (1995)

  30. Husain, T, Tarafdar, E, Yuan, XZ: Some results on random generalized games and random quasi-variational inequalities. Far East J. Math. Sci. 2, 35-55 (1994)

  31. Yuan, XZ: Non-compact random generalized games and random quasi-variational inequalities. J. Appl. Math. Stoch. Anal. 7, 467-486 (1994)

  32. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)

Acknowledgements

The authors would like to thank the associated editor and anonymous referees for their valuable comments and suggestions, which have helped to improve the paper.

Author information


Corresponding author

Correspondence to Javid Iqbal.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Khan, S.A., Iqbal, J. & Shehu, Y. Gap functions and error bounds for random generalized variational inequality problems. J Inequal Appl 2016, 47 (2016). https://doi.org/10.1186/s13660-016-0984-5

