Mixed quasi-variational inequalities involving error bounds
Journal of Inequalities and Applications volume 2015, Article number: 417 (2015)
Abstract
In this paper, we define some new notions of gap functions for set-valued mixed quasi-variational inequalities under suitable conditions. Further, we obtain local/global error bounds for the solution of set-valued mixed quasi-variational inequality problems in terms of the residual gap function, the regularized gap function, and the D-gap function. The results obtained in this paper are generalizations and refinements of previously known results for some classes of variational inequality problems.
1 Introduction
The set-valued quasi-variational inequality problems containing a nonlinear term are among the most notable of the several variants of variational inequality problems. It is well known that projection-type methods cannot be extended and generalized to suggest and analyze iterative methods for solving mixed variational inequalities involving nonlinear terms. To overcome this difficulty, resolvent operator methods can be used. In fact, if the nonlinear term in the mixed variational inequality is proper, convex, and lower semicontinuous, then the mixed variational inequality is equivalent to a fixed point problem and to the resolvent equations. In this technique, the given operator is decomposed into a sum of (maximal) monotone operators, whose resolvents are easier to evaluate than the resolvent of the original operator; see, for example, [1–4] and the references therein.
Throughout this paper, let H be a real Hilbert space, whose inner product and norm are denoted by \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|\), respectively. Let \(T: H \to2^{H}\) be a set-valued mapping and \(A(\cdot,\cdot):H\times H\to H\) a single-valued mapping. Let \(f(\cdot,\cdot): H \times H\to\mathbb{R}\cup\{+\infty\}\) be a proper convex bifunction with respect to both arguments such that \(\operatorname{dom}(f)\) is closed. Let \(K:H\to2^{H}\) be a set-valued mapping such that \(K(x)\) is a closed convex set in H for every element x of H. Now, we consider the set-valued mixed quasi-variational inequality problem, denoted by SVMQVIP, which consists in finding \(x\in H\) such that \(u\in T(x)\), \(x\in K(x)\) and
$$\bigl\langle A(u,u), y-x\bigr\rangle + f(x,y)-f(x,x)\geq0,\quad \forall y\in K(x). \qquad (1.1)$$
It can be shown that a wide class of set-valued odd-order and nonsymmetric free, obstacle, moving, equilibrium, and optimization problems arising in the pure and applied sciences can be studied via set-valued quasi-variational inequality problems.
One of the classical approaches in the analysis of variational inequality problems is to transform them into equivalent optimization problems via the notion of a gap function; see, for example, [5–25] and the references therein. This enables us to develop descent-like algorithms to solve the variational inequality problem. Besides this, gap functions have also turned out to be very useful in designing new globally convergent algorithms, in analyzing the rate of convergence of some iterative methods, and in obtaining error bounds, which provide a measure of the distance between the solution set and an arbitrary point. Recently, many error bounds for various kinds of variational inequalities have been established; see, for example, [7–11, 13–22, 24, 25] and the references therein.
If \(f (x,y)=f(y)\), ∀x, \(A(u,u)=T(x)\), and the mapping \(x\to K(x)\) is a constant closed convex set K, then problem SVMQVIP (1.1) collapses to the set-valued mixed variational inequality problem, denoted by SVMVIP, which consists in finding \(x\in H\) and \(u\in T(x)\) such that
$$\langle u, y-x\rangle + f(y)-f(x)\geq0,\quad \forall y\in H, \qquad (1.2)$$
which was considered by Tang and Huang [21]. They introduced two regularized gap functions for SVMVIP (1.2) and studied their differentiability properties.
If T is single-valued, then problem SVMVIP (1.2) reduces to the mixed variational inequality problem, denoted by MVIP, which consists in finding \(x\in H\) such that
$$\bigl\langle T(x), y-x\bigr\rangle + f(y)-f(x)\geq0,\quad \forall y\in H, \qquad (1.3)$$
which was studied by Solodov [20], who introduced three gap functions for MVIP (1.3) and used them to obtain error bounds.
If the function \(f(\cdot)\) is the indicator function of a closed convex set K in H, then problem SVMVIP (1.2) reduces to the set-valued variational inequality problem, denoted by SVVIP, which consists in finding \(x\in K\) and \(u\in T(x)\) such that
$$\langle u, y-x\rangle\geq0,\quad \forall y\in K, \qquad (1.4)$$
studied by Li and Mastroeni [15], who obtained some existence results for global error bounds for a gap function under strong monotonicity. Later, Aussel and Dutta [5] defined gap functions and used them to obtain finiteness and error bound properties for the above set-valued variational inequalities.
If T is single-valued and \(K:H\to2^{H}\) is a set-valued mapping such that \(K(x)\) is a closed convex set in H for each \(x\in H\), then the above problem SVVIP (1.4) becomes the quasi-variational inequality problem, denoted by QVIP, which consists in finding \(x\in K(x)\) such that
$$\bigl\langle T(x), y-x\bigr\rangle\geq0,\quad \forall y\in K(x), \qquad (1.5)$$
which was studied by Gupta and Mehra [10] and Noor [17]. They derived local and global error bounds for quasi-variational inequality problems in terms of the regularized gap function and the D-gap function.
If T is single-valued, then problem SVVIP (1.4) reduces to the variational inequality problem, denoted by VIP, which consists in finding \(x\in K\) such that
$$\bigl\langle T(x), y-x\bigr\rangle\geq0,\quad \forall y\in K, \qquad (1.6)$$
which was considered by many authors to derive gap functions and the corresponding error bounds; see for example [13, 16, 22–25].
Inspired and motivated by the recent research work above, we define some new notions of gap functions for set-valued mixed quasi-variational inequalities and obtain local/global error bounds in terms of the residual gap function, the regularized gap function, and the D-gap function. Since this class is the most general one and includes some previously studied classes of variational inequalities as special cases, our results cover and extend the previously known results. The results presented in this paper generalize and improve the work presented in [7, 10, 11, 14, 17, 20, 21].
This paper is organized as follows: In Section 2, we give some basic definitions and results which will be used in this paper. Further, we establish some conditions under which SVMQVIP (1.1) has a unique solution. Furthermore, by using the residual vector \(R(x,\theta)\) we obtain an error bound for the solution of SVMQVIP (1.1). In Section 3, we introduce a regularized gap function for SVMQVIP (1.1) and derive error bounds with and without the Lipschitz continuity assumption. In Section 4, we introduce the D-gap function and derive global error bounds in terms of the D-gap function for the solution of SVMQVIP (1.1).
2 Preliminaries and basic facts
First of all, we recall the following well-known results and concepts.
Definition 2.1
A bifunction \(f:H\times H\to\mathbb{R}\) is said to be skew-symmetric if \(f(x,x)-f(x,y)-f(y,x)+f(y,y)\geq 0\), \(\forall x,y \in H\).
Definition 2.2
Let κ be the domain of SVMQVIP (1.1). A function \(p: \kappa\to\mathbb{R}\) is said to be a gap function for SVMQVIP (1.1) if it satisfies the following properties:
- (i) \(p(x)\geq0\), \(\forall x\in\kappa\);
- (ii) \(p(x^{*}) = 0\), \(x^{*}\in\kappa\), if and only if \(x^{*}\) solves SVMQVIP (1.1).
For VIP (1.6), it is well known that \(x\in H\) is a solution if, and only if,
$$0 = x - P_{K}\bigl[x-\theta T(x)\bigr],$$
where \(P_{K}\) is the orthogonal projector onto K and \(\theta>0\) is arbitrary. Hence, the norm of the right-hand side in this equation can serve as a gap function for VIP (1.6); it is commonly called the natural residual vector.
We next derive a similar characterization for SVMQVIP (1.1). Recall that the proximal map [26, 27], \(P_{K}^{f}:H\to\operatorname{dom}(f)\), is given by
$$P_{K(x)}^{f}(z)=\arg\min_{y\in K(x)}\biggl\{f(x,y)+{1\over 2\theta}\|y-z\|^{2}\biggr\},\quad z\in H.$$
Note that the objective function above is proper strongly convex. Since \(\operatorname{dom}(f)\) is closed, \(P_{K}^{f}(\cdot)\) is well defined and single-valued. For \(\theta>0\), define
$$R(x,\theta)=x-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr],\quad u\in T(x).$$
We next show that \(R(x, \theta)\) plays the role of the natural residual vector for SVMQVIP (1.1).
Lemma 2.1
Let \(\theta>0\) be arbitrary. An element \(x\in H\) solves SVMQVIP (1.1) if, and only if, \(R(x, \theta)=0\).
Proof
Let \(R(x, \theta)=0\), which implies that \(x=P_{K(x)}^{f}[x-\theta A(u,u)]\). It is equivalent to
By the optimality conditions (which are necessary and sufficient, by convexity), the latter is equivalent to
which implies \(-A(u,u)\in\partial f(x,y)\). This in turn is equivalent, by the definition of the subgradient, to
which implies x solves SVMQVIP (1.1). This completes the proof. □
Remark 2.1
It is easy to see that the natural residual vector \(R(x,\theta)\) is a gap function for SVMQVIP (1.1).
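To make the natural residual concrete, here is a minimal numerical sketch (our own illustration, not part of the paper): take \(H=\mathbb{R}\), \(K(x)\equiv\mathbb{R}\), \(T(x)=\{x\}\), \(A(u,u)=u\), and \(f(x,y)=\lambda|y|\). The proximal map then reduces to the classical soft-thresholding operator, \(x^{*}=0\) is the unique solution, and \(R(x,\theta)\) vanishes exactly there.

```python
# Toy instance (illustration only): H = R, K(x) = R, T(x) = {x},
# A(u, u) = u, f(x, y) = lam*|y|.  The proximal map P^f is then the
# soft-thresholding operator, and x* = 0 is the unique solution.
lam, theta = 0.1, 0.5

def prox(z, t=theta):
    # P_{K(x)}^f(z) = argmin_y { lam*|y| + (1/(2t))*(y - z)**2 }
    shrunk = abs(z) - t * lam
    return (1.0 if z > 0 else -1.0) * max(shrunk, 0.0)

def R(x, t=theta):
    # natural residual vector R(x, theta) = x - P^f[x - theta*A(u, u)]
    return x - prox(x - t * x, t)

assert R(0.0) == 0.0                                        # zero at the solution...
assert all(abs(R(x)) > 0 for x in (-2.0, -0.4, 0.7, 3.0))   # ...and only there
```

This matches the gap-function properties (i) and (ii) of Definition 2.2 for \(p(x)=\|R(x,\theta)\|\) on this instance.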
Definition 2.3
Let \(A:H\times H\to H\) be a single-valued operator and \(T: H \to2^{H}\) a set-valued operator. Then, \(\forall x, x_{0}\in H\), \(u\in T(x)\), \(u_{0}\in T(x_{0})\):
- (a) A is said to be strongly monotone if there exists a constant \(\alpha>0\) such that
$$\bigl\langle A(u,u)-A(u_{0},u_{0}), x-x_{0}\bigr\rangle \geq\alpha\|x-x_{0}\|^{2};$$
- (b) A is said to be cocoercive if there exists a constant \(\tau>0\) such that
$$\bigl\langle A(u,u)-A(u_{0},u_{0}), x-x_{0}\bigr\rangle \geq\tau\bigl\Vert A(u,u)-A(u_{0},u_{0})\bigr\Vert ^{2};$$
- (c) A is said to be Lipschitz continuous if there exists a constant \(\beta>0\) such that
$$\bigl\Vert A(u,u)-A(u_{0},u_{0})\bigr\Vert \leq\beta \|u-u_{0}\|;$$
- (d) T is said to be M-Lipschitz continuous if there exists a constant \(\mu>0\) such that
$$M\bigl(T(x),T(x_{0})\bigr)\leq\mu\|x-x_{0}\|,$$
where \(M(\cdot, \cdot)\) is the Hausdorff metric;
- (e) \(P_{K(x)}^{f}\) is said to be nonexpansive if
$$\bigl\Vert P_{K(x)}^{f}(v)-P_{K(x)}^{f}(w) \bigr\Vert \leq\|v-w\|,\quad \forall x, v, w\in H.$$
Since \(P_{K(x)}^{f}(\cdot)\) is a cocoercive map with modulus 1 on H (see Theorem 1.5.5(c) in [28]), for \(\theta>0\) and \(x\in H\) we define \(e_{x}:H\to H\) by \(e_{x}(z)=z-P_{K(x)}^{f}(z)\). We have the following result, to be used in the sequel.
Lemma 2.2
\(e_{x}(z)\) is a cocoercive map on H with modulus 1.
Proof
For all \(z,w\in H\), we have
implying
Therefore
yielding the required result. □
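Lemma 2.2 can be checked numerically on the assumed one-dimensional toy instance (soft-thresholding as the proximal map, an illustration rather than a proof): cocoercivity with modulus 1 means \(\langle e_{x}(z)-e_{x}(w), z-w\rangle \geq \|e_{x}(z)-e_{x}(w)\|^{2}\).

```python
# Numerical check (assumed 1-D toy instance) of Lemma 2.2:
# e(z) = z - P^f(z) satisfies <e(z) - e(w), z - w> >= |e(z) - e(w)|^2.
import random

random.seed(0)
lam = 0.1

def prox(z, t=1.0):
    # soft-thresholding: prox of lam*|.| with parameter t
    shrunk = abs(z) - t * lam
    return (1.0 if z > 0 else -1.0) * max(shrunk, 0.0)

def e(z):
    return z - prox(z)

for _ in range(10_000):
    z, w = random.uniform(-5, 5), random.uniform(-5, 5)
    lhs = (e(z) - e(w)) * (z - w)      # inner product in H = R
    rhs = (e(z) - e(w)) ** 2
    assert lhs >= rhs - 1e-12, (z, w)
```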
Next we study those conditions under which SVMQVIP (1.1) has a unique solution.
Theorem 2.1
Let A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively. Assume that f is skew-symmetric and T is M-Lipschitz continuous with constant \(\mu>0\). If there exists \(k > 0\) such that, for all \(\theta<{2\alpha\over \beta^{2}\mu^{2}}\),
$$\bigl\Vert P_{K(x)}^{f}(w)-P_{K(x_{0})}^{f}(w)\bigr\Vert \leq k\Vert x-x_{0}\Vert ,\quad \forall x, x_{0}, w\in H, \quad\mbox{and}\quad k+\sqrt{\theta^{2}\beta^{2}\mu^{2}-2\alpha\theta+1}< 1,$$
then SVMQVIP (1.1) has a unique solution.
Proof
(a) Uniqueness. Let \(x_{1}\neq x_{2}\in H\) be two solutions of SVMQVIP (1.1). Then we have
Taking \(y =x_{2}\) in (2.1) and \(y =x_{1}\) in (2.2), adding the resultants and then using the skew-symmetry of f, we have
Since A is strongly monotone with a constant \(\alpha>0\),
which implies that \(x_{1} =x_{2}\), the uniqueness of the solution of SVMQVIP (1.1).
(b) Existence. From Lemma 2.1, it follows that SVMQVIP (1.1) is equivalent to the fixed point problem
$$x=F(x)=P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr],\quad u\in T(x). \qquad (2.3)$$
In order to prove the existence of a solution of SVMQVIP (1.1), it is sufficient to show that \(F(\cdot)\) has a fixed point. Thus, for all \(x, x_{0}\in H\), \(x\neq x_{0}\), we have
We know that
By using the strong monotonicity and the Lipschitz continuity of A with constants \(\alpha>0\) and \(\beta> 0\), respectively, we get
Further, using the M-Lipschitz continuity of T with constant \(\mu>0\), we get
where \(m=k+{\sqrt{\theta^{2}\beta^{2}\mu^{2}-2\alpha\theta+1}}\). From the assumption on k, it follows that \(m < 1\), so the mapping \(F(x)\) defined by (2.3) has a fixed point \(x\in K(x)\) such that \(u\in T(x)\) satisfying SVMQVIP (1.1). This completes the proof. □
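The contraction argument above suggests the natural fixed-point iteration \(x_{k+1}=P_{K(x_{k})}^{f}[x_{k}-\theta A(u_{k},u_{k})]\). A sketch on the assumed toy instance (our illustration): with \(K(x)\equiv\mathbb{R}\) (so \(k=0\)), \(T(x)=\{x\}\), \(A(u,u)=u\) and \(f(x,y)=\lambda|y|\), we have \(\alpha=\beta=\mu=1\) and \(m=\sqrt{\theta^{2}-2\theta+1}=|1-\theta|<1\) for \(0<\theta<2\), so the iteration contracts to the unique solution \(x^{*}=0\).

```python
# Fixed-point iteration x_{k+1} = F(x_k) = P^f[x_k - theta*A(u_k, u_k)]
# on an assumed toy instance: K(x) = R (so k = 0), T(x) = {x},
# A(u, u) = u, f(x, y) = lam*|y|.  Here m = |1 - theta| < 1.
lam, theta = 0.1, 0.5

def prox(z, t=theta):
    shrunk = abs(z) - t * lam
    return (1.0 if z > 0 else -1.0) * max(shrunk, 0.0)

def F(x):
    return prox(x - theta * x)

x = 5.0
for _ in range(100):
    x = F(x)

assert abs(x) < 1e-12   # the iterates contract to the solution x* = 0
```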
Now, by using the natural residual vector \(R(x,\theta)\), we derive error bounds for the solution of SVMQVIP (1.1).
Theorem 2.2
Let \(x_{0}\) be a solution of SVMQVIP (1.1). Let the operator A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively, let T be M-Lipschitz continuous with constant \(\mu> 0\), and let f be skew-symmetric. If there exists \(k<{\alpha\over \beta\mu}\) such that, for any \(\theta>{k\over \alpha-\beta\mu k}\),
then, for any \(x\in H\) and \(\theta>{k\over \alpha-\beta\mu k}\), we have
Proof
Let \(x_{0}\in H\) be a solution of SVMQVIP (1.1), then
Substituting \(y=P_{K(x_{0})}^{f}[x-\theta A(u,u)]\) in the above inequality, we have
For any fixed \(x\in H\), and \(\theta> 0\), we observe that
which is equivalent to
By the definition of a sub-differential, we have
Taking \(y=x_{0}\) in the above we get
This implies that
Adding (2.6) and (2.7), we get
Since f is skew-symmetric,
This also can be written as
which implies that
By using the strong monotonicity of A, we get
Also the above inequality can be written as
By using the Cauchy-Schwarz inequality along with the triangle inequality, we have
Now using the Lipschitz continuity of the operator A and assumption on \(P_{K(x)}^{f}(\cdot)\), we have
Now using the M-Lipschitz continuity of T, we have
Therefore, we have
where \(\theta>{k\over \alpha-\beta\mu k}\). This completes the proof. □
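The displayed constant in Theorem 2.2 is not reproduced here, but the qualitative content, that the distance to the solution is controlled by the natural residual, can be observed numerically on the assumed toy instance (our illustration; the bound \(\|x-x_{0}\|\leq C\|R(x,\theta)\|\) is checked with an empirically computed constant C, not the theorem's constant).

```python
# Empirical error bound on an assumed toy instance (K(x) = R, T(x) = {x},
# A(u, u) = u, f(x, y) = lam*|y|, solution x0 = 0):
# ||x - x0|| <= C * ||R(x, theta)|| with a modest constant C.
lam, theta = 0.1, 0.5

def prox(z, t=theta):
    shrunk = abs(z) - t * lam
    return (1.0 if z > 0 else -1.0) * max(shrunk, 0.0)

def R(x, t=theta):
    return x - prox(x - t * x, t)

xs = [i / 100.0 for i in range(-500, 501) if i != 0]
C = max(abs(x) / abs(R(x)) for x in xs)   # worst-case ratio on [-5, 5] \ {0}
assert C <= 2.0 + 1e-9                    # ratio stays bounded
```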
Remark 2.2
Theorem 2.2 generalizes Theorem 1 of [7].
Besides the above result on the error bound, we provide another error bound for SVMQVIP (1.1).
Theorem 2.3
Assume that \(x_{0}\) is the solution of SVMQVIP (1.1). Let the operator A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively. Let T be M-Lipschitz continuous with constant \(\mu> 0\) and let f be skew-symmetric. If there exists \(k > 0\) such that, for any \(\theta <{2\alpha\over \beta^{2}\mu^{2}}\) and
then, for any \(x\in H\), we have
Proof
For any \(x,x_{0}\in H\), let us consider \(v=x-\theta A(u,u)\) and \(w=x_{0}-\theta A(u_{0},u_{0})\). Now
By using the cocoercive property of \(P_{K(x)}^{f}(\cdot)\), given in Lemma 2.2,
Since
By using the strong monotonicity and the Lipschitz continuity of A we have
Finally, using the M-Lipschitz continuity of T, we get
implies
By a classical argument
Since \(x_{0}\) is a solution of SVMQVIP (1.1), \(R(x_{0},\theta)=0\), and hence
This completes the proof. □
Remark 2.3
3 Regularized gap functions for SVMQVIP (1.1)
In this section, by using an approach due to Fukushima [8], we construct another gap function associated with problem SVMQVIP (1.1), which can be viewed as a regularized gap function. For \(\theta>0\), the function \(G_{\theta}\) is defined by
$$G_{\theta}(x)=\sup_{y\in K(x)}\biggl\{\bigl\langle A(u,u), x-y\bigr\rangle +f(x,x)-f(x,y)-{1\over 2\theta}\|x-y\|^{2}\biggr\},\quad u\in T(x), \qquad (3.1)$$
which is finite-valued everywhere and is differentiable whenever all operators involved in \(G_{\theta}(x)\) are differentiable.
Lemma 3.1
For any \(\theta>0\), \(G_{\theta}(x)\) can be written as
$$G_{\theta}(x)=\bigl\langle A(u,u), R(x,\theta)\bigr\rangle -{1\over 2\theta}\bigl\Vert R(x,\theta)\bigr\Vert ^{2}+f(x,x)-f\bigl(x,P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr]\bigr). \qquad (3.2)$$
Proof
If \(x\notin\operatorname{dom}f\) then equation (3.2) is correct, because \(f\equiv\infty\) while the other terms are all finite (recall that \(P_{K(x)}^{f} (z)\in\operatorname{dom}f\) for any \(z\in H\)).
Consider now any \(x\in\operatorname{dom}f\). Denote by \(t(y)\) the function being maximized in (3.1). Let z be the (unique, by concavity of \(t(y)\)) element at which the maximum is realized in (3.1). Then z is uniquely characterized by the optimality condition
That is to say, \(z =\arg\min_{y\in K(x)}\{f(x,y)+{1\over 2\theta }\|y-(x-\theta A(u,u))\|^{2}\}=P_{K(x)}^{f}[x-\theta A(u,u)]\), where the second equation follows from the definition of the proximal mapping \(P_{K(x)}^{f}(\cdot)\). This completes the proof. □
Next, we show that the function \(G_{\theta}(x)\), \(\theta> 0\), given by (3.1) is a gap function for SVMQVIP (1.1).
Theorem 3.1
If \(\theta> 0\), then we have
$$G_{\theta}(x)\geq{1\over 2\theta}\bigl\Vert R(x,\theta)\bigr\Vert ^{2},\quad \forall x\in H.$$
In particular, \(G_{\theta}(x)= 0\), if and only if x is a solution of SVMQVIP (1.1).
Proof
Fix any \(x\in H\) and \(\theta> 0\). Observe that
which is equivalent to
By the definition of a sub-differential, we have
Taking \(y=x\) in the above inequality, we get
Combining (3.2) with (3.3) and by using the skew-symmetry of f, we get
Clearly, we have \(G_{\theta}(x)\geq0\), for all \(x\in H\).
Now from the above conclusion, if \(G_{\theta}(x)= 0\), then \(R(x,\theta )=0\). Hence by Lemma 2.1, we see that \(x\in H\) is a solution of SVMQVIP (1.1). Conversely, if \(x\in H\) is a solution of SVMQVIP (1.1), then \(x=P_{K(x)}^{f}[x-\theta A(u,u)]\), consequently, from (3.2) we have \(G_{\theta}(x)= 0\). This completes the proof. □
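On the assumed toy instance (\(K(x)\equiv\mathbb{R}\), \(T(x)=\{x\}\), \(A(u,u)=u\), \(f(x,y)=\lambda|y|\); our illustration), \(G_{\theta}\) can be evaluated in closed form through the maximizer \(y^{*}=P_{K(x)}^{f}[x-\theta A(u,u)]\) of Lemma 3.1, cross-checked against a brute-force maximization, and the behavior asserted by Theorem 3.1 (nonnegativity, domination of the residual by the gap, vanishing exactly at solutions) can be verified numerically.

```python
# Regularized gap function on an assumed toy instance: K(x) = R, T(x) = {x},
# A(u, u) = u, f(x, y) = lam*|y|.  G_theta is evaluated at the maximizer
# y* = P^f[x - theta*A(u, u)] and cross-checked by brute force.
lam, theta = 0.1, 0.5

def prox(z, t=theta):
    shrunk = abs(z) - t * lam
    return (1.0 if z > 0 else -1.0) * max(shrunk, 0.0)

def t_fun(x, y, t=theta):   # the function maximized in the definition of G
    return x * (x - y) - (x - y) ** 2 / (2 * t) + lam * (abs(x) - abs(y))

def G(x, t=theta):
    return t_fun(x, prox(x - t * x, t), t)

def G_brute(x, t=theta):    # sup over a fine grid on [-5, 5]
    return max(t_fun(x, -5.0 + i * 1e-3, t) for i in range(10001))

for x in (-2.0, -0.3, 0.0, 0.8, 3.0):
    Rx = x - prox(x - theta * x)
    assert abs(G(x) - G_brute(x)) < 1e-4           # closed form matches the sup
    assert G(x) >= Rx * Rx / (2 * theta) - 1e-12   # gap dominates the residual
assert G(0.0) == 0.0                               # zero exactly at the solution
```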
As a consequence of Theorem 2.2 and Theorem 3.1, we have the following result on error bound in terms of \(G_{\theta}(x)\) for SVMQVIP (1.1).
Corollary 3.1
Assume that \(x_{0}\) is a solution of SVMQVIP (1.1). Let the operator A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively. Let T be M-Lipschitz continuous with constant \(\mu> 0\) and let f be skew-symmetric. If there exists \(k<{\alpha\over \beta\mu}\) such that, for any \(\theta>{k\over \alpha-\beta\mu k}\) and
then, for any \(x\in H\) and \(\theta>{k\over \alpha-\beta\mu k}\), we have
Also by using Theorem 2.3 and Theorem 3.1, we obtain another error bound for SVMQVIP (1.1).
Corollary 3.2
Assume that \(x_{0}\) is the solution of SVMQVIP (1.1). Let the operator A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively. Let T be M-Lipschitz continuous with constant \(\mu> 0\) and let f be skew-symmetric. If there exists \(k > 0\) such that, for any \(\theta <{2\alpha\over \beta^{2}\mu^{2}}\) and
then, for any \(x\in H\), we have
Remark 3.1
Corollary 3.1 and Corollary 3.2 generalize the corresponding results of [7, 10, 17, 20].
Now, we derive an error bound for SVMQVIP (1.1) without using the Lipschitz continuity of A and T.
Theorem 3.2
Let A be strongly monotone with constant \(\alpha>0\) and let f be skew-symmetric. If \(x_{0}\) is a solution of SVMQVIP (1.1), then
Proof
From (3.1), we have
By using the strong monotonicity of A, we have
Since \(x_{0}\) is a solution of SVMQVIP (1.1),
Taking \(y=x\) in the above inequality, we get
Combining (3.4) with (3.5) and using the skew-symmetry of f, we get
which implies
This completes the proof. □
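The proof of Theorem 3.2 yields a bound whose expected form is \((\alpha-{1\over 2\theta})\|x-x_{0}\|^{2}\leq G_{\theta}(x)\) for \(\theta>{1\over 2\alpha}\); we treat that form as an assumption in the following numerical sanity check on the toy instance (\(\alpha=1\), \(\theta=1\), solution \(x_{0}=0\); our illustration).

```python
# Sanity check of the strong-monotonicity error bound (assumed form:
# (alpha - 1/(2*theta)) * ||x - x0||^2 <= G_theta(x) for theta > 1/(2*alpha))
# on a toy instance with alpha = 1, theta = 1, x0 = 0.
lam, theta, alpha = 0.1, 1.0, 1.0

def prox(z, t=theta):
    shrunk = abs(z) - t * lam
    return (1.0 if z > 0 else -1.0) * max(shrunk, 0.0)

def G(x, t=theta):
    y = prox(x - t * x, t)
    return x * (x - y) - (x - y) ** 2 / (2 * t) + lam * (abs(x) - abs(y))

coeff = alpha - 1 / (2 * theta)   # = 0.5 here
xs = [i / 50.0 for i in range(-200, 201)]
assert all(coeff * x * x <= G(x) + 1e-12 for x in xs)
```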
Remark 3.2
Theorem 3.2 generalizes Theorem 5 of [20] and Theorem 3.4 of [17].
4 Global error bounds for SVMQVIP (1.1)
In this section, we consider another gap function associated with SVMQVIP (1.1), which can be viewed as a difference of two regularized gap functions with distinct parameters, known as the D-gap function, which was introduced and studied in [19, 20, 22] for solving variational inequalities and complementarity problems.
For \(x\notin\operatorname{dom}(f)\), the difference \(G_{\theta}(x)-G_{\psi}(x)\) of two regularized gap functions with \(\theta>\psi>0\) is not well defined, since both quantities are infinite. Nevertheless, we define the D-gap function by taking the formal difference of the expressions (3.1) for the two parameters \(\theta>\psi>0\).
The D-gap function associated with SVMQVIP (1.1) is given by
$$D_{\theta,\psi}(x)=G_{\theta}(x)-G_{\psi}(x),\quad \theta>\psi>0. \qquad (4.1)$$
The D-gap function defined by (4.1) can be written as
Further, it can be written as
Next, we derive global error bounds for SVMQVIP (1.1).
Theorem 4.1
\(\forall x\in H\), \(\theta>\psi>0 \), we have
$$\biggl({1\over 2\psi}-{1\over 2\theta}\biggr)\bigl\Vert R(x,\psi)\bigr\Vert ^{2}\leq D_{\theta,\psi}(x)\leq\biggl({1\over 2\psi}-{1\over 2\theta}\biggr)\bigl\Vert R(x,\theta)\bigr\Vert ^{2}.$$
In particular \(D_{\theta,\psi}(x)=0\), if and only if, \(x\in H\) solves SVMQVIP (1.1).
Proof
By the definition of a sub-differential, we have
Taking \(y=P_{K(x)}^{f}[x-\psi A(u,u)]\) in the above inequality, we get
which implies that
Combining (4.2) with (4.3) and using the skew-symmetry of f, we get
which implies the left-most inequality in the assertion.
On the other hand,
which implies that
Similarly to the analysis above, we then obtain
which implies the right-most inequality in the assertion. Combining (4.4) and (4.6), we obtain the required result. The last assertion now follows from Lemma 2.1. □
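The two-sided estimate of Theorem 4.1 can be checked numerically on the assumed toy instance (our illustration; we take the standard Yamashita–Fukushima-type form with constant \({1\over 2\psi}-{1\over 2\theta}\) on both sides as the assumed shape of the bounds).

```python
# Numerical check of the two-sided D-gap estimate, theta > psi > 0, on an
# assumed toy instance (K(x) = R, T(x) = {x}, A(u, u) = u, f(x, y) = lam*|y|):
#   c*||R(x, psi)||^2 <= D(x) <= c*||R(x, theta)||^2,  c = 1/(2*psi) - 1/(2*theta).
lam, theta, psi = 0.1, 1.0, 0.5

def prox(z, t):
    shrunk = abs(z) - t * lam
    return (1.0 if z > 0 else -1.0) * max(shrunk, 0.0)

def G(x, t):
    y = prox(x - t * x, t)
    return x * (x - y) - (x - y) ** 2 / (2 * t) + lam * (abs(x) - abs(y))

def R(x, t):
    return x - prox(x - t * x, t)

c = 1 / (2 * psi) - 1 / (2 * theta)
for x in (-3.0, -0.2, 0.0, 0.05, 0.9, 4.0):
    D = G(x, theta) - G(x, psi)
    assert c * R(x, psi) ** 2 - 1e-12 <= D <= c * R(x, theta) ** 2 + 1e-12
```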
As a consequence of Theorem 2.2 and Theorem 4.1, we obtain the following result on the global error bound for SVMQVIP (1.1).
Corollary 4.1
Assume that \(x_{0}\) is a solution of SVMQVIP (1.1). Let the operator A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively. Let T be M-Lipschitz continuous with constant \(\mu> 0\) and let f be skew-symmetric. If there exists \(k<{\alpha\over \beta\mu}\) such that, for any \(\theta>{k\over \alpha-\beta\mu k}\) and
then, for any \(x\in H\) and \(\theta>{k\over \alpha-\beta\mu k}\), we have
Also by using Theorem 2.3 and Theorem 4.1, we obtain another global error bound for SVMQVIP (1.1).
Corollary 4.2
Assume that \(x_{0}\) is the solution of SVMQVIP (1.1). Let the operator A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively. Let T be M-Lipschitz continuous with constant \(\mu> 0\) and let f be skew-symmetric. If there exists \(k > 0\) such that, for any \(\theta <{2\alpha\over \beta^{2}\mu^{2}}\) and
then, for any \(x\in H\), we have
Remark 4.1
Corollary 4.1 and Corollary 4.2 generalize the corresponding results of [7, 10, 17, 20].
Now, we derive a global error bound for SVMQVIP (1.1) without using the Lipschitz continuity of A and T.
Theorem 4.2
Let \(x_{0}\) be a solution of SVMQVIP (1.1). Suppose that A is strongly monotone with constant \(\alpha>0\) and f is skew-symmetric. Then
Proof
From (4.1), we have
By using the strong monotonicity of A, we have
Since \(x_{0}\) is a solution of SVMQVIP (1.1),
Taking \(y=x\) in the above inequality, we get
Combining (4.7) with (4.8) and using the skew-symmetry of f, we get
which implies
This completes the proof. □
Remark 4.2
Theorem 4.2 generalizes Theorem 7 of [20] and Theorem 3.6 of [17].
References
Noor, MA: Some algorithms for general monotone mixed variational inequalities. Math. Comput. Model. 29, 1-9 (1999)
Noor, MA: Algorithms for general monotone mixed variational inequalities. J. Math. Anal. Appl. 229, 330-343 (1999)
Noor, MA: Splitting methods for pseudomonotone general mixed variational inequalities. J. Glob. Optim. 18, 75-89 (2000)
Noor, MA: Solvability of multivalued general mixed variational inequalities. J. Math. Anal. Appl. 261, 390-402 (2001)
Aussel, D, Dutta, J: On gap functions for multivalued Stampacchia variational inequalities. J. Optim. Theory Appl. 149, 513-527 (2011)
Aussel, D, Correa, R, Marechal, M: Gap functions for quasivariational inequalities and generalized Nash equilibrium problems. J. Optim. Theory Appl. 151, 474-488 (2011)
Aussel, D, Gupta, R, Mehra, A: Gap functions and error bounds for inverse quasi-variational inequality problems. J. Math. Anal. Appl. 407, 270-280 (2013)
Fukushima, M: Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Math. Program. 53, 99-110 (1992)
Fukushima, M: Merit functions for variational inequality and complementarity problems. In: Nonlinear Optimization and Applications, pp. 155-170 (1996)
Gupta, R, Mehra, A: Gap functions and error bounds for quasi-variational inequalities. J. Glob. Optim. 53, 737-748 (2012)
Huan, L, Qu, B, Jiang, J: Merit functions for general mixed quasi-variational inequalities. J. Appl. Math. Comput. 33, 411-421 (2010)
Huang, NJ, Li, J, Wu, SY: Gap functions for a system of generalized vector quasi-equilibrium problems with set-valued mappings. J. Glob. Optim. 41, 401-415 (2008)
Huang, LR, Ng, KF: Equivalent optimization formulations and error bounds for variational inequality problems. J. Optim. Theory Appl. 125, 299-314 (2005)
Khan, SA, Chen, JW: Gap function and global error bounds for generalized mixed quasi-variational inequalities. Appl. Math. Comput. 260, 71-81 (2015)
Li, J, Mastroeni, G: Vector variational inequalities involving set-valued mappings via scalarization with applications to error bounds for gap functions. J. Optim. Theory Appl. 145, 355-372 (2010)
Li, G, Ng, KF: Error bounds of generalized D-gap functions for nonsmooth and nonmonotone variational inequality problems. SIAM J. Optim. 20, 667-690 (2009)
Noor, MA: Merit functions for general variational inequalities. J. Math. Anal. Appl. 316, 736-752 (2006)
Noor, MA: On merit functions for quasi-variational inequalities. J. Math. Inequal. 2, 259-268 (2007)
Solodov, MV, Tseng, P: Some methods based on the D-gap function for solving monotone variational inequalities. Comput. Optim. Appl. 17, 255-277 (2000)
Solodov, MV: Merit functions and error bounds for generalized variational inequalities. J. Math. Anal. Appl. 287, 405-414 (2003)
Tang, GJ, Huang, NJ: Gap functions and global error bounds for set-valued mixed variational inequalities. Taiwan. J. Math. 17, 1267-1286 (2013)
Yamashita, N, Fukushima, M: Equivalent unconstrained minimization and global error bounds for variational inequality problems. SIAM J. Control Optim. 35, 273-284 (1997)
Yamashita, N, Taji, K, Fukushima, M: Unconstrained optimization reformulations of variational inequality problems. J. Optim. Theory Appl. 92, 439-456 (1997)
Zhao, YB, Hu, J: Global bounds for the distance to solutions of co-coercive variational inequalities. Oper. Res. Lett. 35, 409-415 (2007)
Zhao, YB, Li, D: Monotonicity of fixed point and normal mapping associated with variational inequality and its application. SIAM J. Optim. 11, 962-973 (2011)
Moreau, JJ: Proximité et dualité dans un espace Hilbertien. Bull. Soc. Math. Fr. 93, 273-299 (1965)
Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)
Facchinei, F, Pang, JS: Finite Dimensional Variational Inequalities and Complementarity Problems, vol. 1. Springer, Berlin (2003)
Acknowledgements
The authors would like to thank the associated editor and two anonymous referees for their valuable comments and suggestions, which have helped to improve the paper.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Khan, S.A., Iqbal, J. & Shehu, Y. Mixed quasi-variational inequalities involving error bounds. J Inequal Appl 2015, 417 (2015). https://doi.org/10.1186/s13660-015-0945-4