Open Access

Error bounds of regularized gap functions for weak vector variational inequality problems

Journal of Inequalities and Applications 2014, 2014:331

https://doi.org/10.1186/1029-242X-2014-331

Received: 20 June 2014

Accepted: 18 August 2014

Published: 2 September 2014

Abstract

In this paper, by the nonlinear scalarization method, a global error bound of a weak vector variational inequality is established via a regularized gap function. The result extends some existing results in the literature.

MSC: 49K40, 90C31.

Keywords

error bound; regularized gap function; weak vector variational inequality

1 Introduction

Throughout this paper, let $K$ be a closed convex subset of a Euclidean space $\mathbb{R}^n$ and let $F : \mathbb{R}^n \to B(\mathbb{R}^n, \mathbb{R}^m)$ be a continuously differentiable mapping. We consider a weak vector variational inequality (WVVI) of finding $x^* \in K$ such that
\[ \langle F(x^*), x - x^* \rangle \notin -\operatorname{int} C, \quad \forall x \in K, \]

where $C \subseteq \mathbb{R}^m$ is a closed, convex, and pointed cone with nonempty interior $\operatorname{int} C$. (WVVI) was first introduced by Giannessi [1]. It has been shown to have many applications in vector optimization problems and traffic equilibrium problems (e.g., [2, 3]).

Error bounds quantify the distance from a feasible point to the solution set, and they play an important role not only in sensitivity analysis but also in the convergence analysis of iterative algorithms. Recently, various error bounds have been presented for weak vector variational inequalities in [4–7]. By using a scalarization approach of Konnov [8], Li and Mastroeni [5] established error bounds for two kinds of (WVVIs) with set-valued mappings. Via a regularized gap function and a D-gap function for a weak vector variational inequality, Charitha and Dutta [4] obtained error bounds for (WVVI). Recently, by virtue of regularized gap functions, Sun and Chai [6] studied some error bounds for generalized (WVVIs). By using image space analysis, Xu and Li [7] derived a gap function for (WVVI) and then established an error bound for (WVVI) without the convexity of the constraint set. These papers share a common feature: in [6, 7] the solution set of (WVVI) is a singleton, and although the solution set of (WVVI) need not be a singleton in [4, 5], the solution set of the corresponding variational inequality (VI) is a singleton whenever their results reduce to (VI).

In this paper, by the nonlinear scalarization method, we study a global error bound of (WVVI). The paper is organized as follows. In Section 2, we establish a global error bound of (VI) via generalized regularized gap functions. In Section 3, we derive a global error bound of (WVVI) by the nonlinear scalarization method.

2 A global error bound of (VI)

Let $h : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ be a proper lower semicontinuous function, and let $S = \{x \in \mathbb{R}^n \mid h(x) \le 0\}$. We say that $h$ has a global error bound if there exists $\tau > 0$ such that
\[ d(x, S) \le \tau h(x)_+, \quad \forall x \in \mathbb{R}^n, \]
where $h(x)_+ := \max\{h(x), 0\}$, and $d(x, S) := \inf\{\|x - s\| \mid s \in S\}$ if $S$ is nonempty, $d(x, S) = +\infty$ if $S$ is empty. A mapping $f : \mathbb{R}^n \to \mathbb{R}^n$ is said to be coercive on $K$ if
\[ \lim_{x \in K, \, \|x\| \to +\infty} \frac{\langle f(x), x - y \rangle}{\|x\|} = +\infty, \quad \forall y \in K. \]
$f : \mathbb{R}^n \to \mathbb{R}^n$ is said to be strongly monotone on $\mathbb{R}^n$ with modulus $\lambda > 0$ if
\[ \langle f(x) - f(x'), x - x' \rangle \ge \lambda \|x - x'\|^2, \quad \forall x, x' \in \mathbb{R}^n. \]
In this section, we establish a global error bound of (VI): find $x^* \in K$ such that
\[ \langle f(x^*), y - x^* \rangle \ge 0, \quad \forall y \in K, \]

where $f : \mathbb{R}^n \to \mathbb{R}^n$ is a continuously differentiable mapping.
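To make the formulation concrete, the following sketch (not from the paper) tests candidate solutions of (VI) through the standard fixed-point characterization: $x^*$ solves (VI) if and only if $x^* = \Pi_K(x^* - f(x^*))$, where $\Pi_K$ is the Euclidean projection onto $K$. The instance $K = [0, +\infty)$ and $f(x) = x - 1$ is an illustrative choice.

```python
import numpy as np

def proj_K(x):
    """Euclidean projection onto K = [0, +inf)."""
    return max(x, 0.0)

def residual(x, f):
    """Natural residual |x - proj_K(x - f(x))|; it vanishes iff x solves (VI)."""
    return abs(x - proj_K(x - f(x)))

# Illustrative instance (an assumption for this sketch): f(x) = x - 1 on K = [0, +inf).
f = lambda x: x - 1.0

# x* = 1 satisfies <f(x*), y - x*> = 0 >= 0 for every y in K
print(residual(1.0, f))       # 0.0
print(residual(0.5, f) > 0)   # True: x = 0.5 is not a solution
```

The residual used here is the classical natural-map residual for variational inequalities; it plays the same diagnostic role as the gap functions studied below.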

To study the error bound of (VI), we need a class of merit functions that reformulate (VI) as an optimization problem; see [9–16]. One such function is the generalized regularized gap function [17] defined by
\[ f_\gamma(x) := -\inf_{y \in K} \big\{ \langle f(x), y - x \rangle + \gamma \varphi(x, y) \big\}, \quad x \in \mathbb{R}^n, \ \gamma > 0, \]
(1)

where $\varphi : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is a real-valued function with the following properties:

(P1) $\varphi$ is continuously differentiable on $\mathbb{R}^n \times \mathbb{R}^n$.

(P2) $\varphi(x, y) \ge 0$, $\forall x, y \in \mathbb{R}^n$, and the equality holds if and only if $x = y$.

(P3) $\varphi(x, \cdot)$ is uniformly strongly convex on $\mathbb{R}^n$ with modulus $\beta > 0$ in the sense that
\[ \varphi(x, y_1) - \varphi(x, y_2) \ge \langle \nabla_2 \varphi(x, y_2), y_1 - y_2 \rangle + \beta \|y_1 - y_2\|^2, \quad \forall x, y_1, y_2 \in \mathbb{R}^n, \]

where $\nabla_2 \varphi$ denotes the partial derivative of $\varphi$ with respect to the second variable.

(P4) $\nabla_2 \varphi(x, \cdot)$ is uniformly Lipschitz continuous on $\mathbb{R}^n$ with modulus $\alpha$, i.e., for all $x \in \mathbb{R}^n$,
\[ \|\nabla_2 \varphi(x, y_1) - \nabla_2 \varphi(x, y_2)\| \le \alpha \|y_1 - y_2\|, \quad \forall y_1, y_2 \in \mathbb{R}^n. \]
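As a sanity check on these abstract properties, the sketch below samples random points and verifies (P2)–(P4) for the classical choice $\varphi(x, y) = \frac{1}{2}\|y - x\|^2$, for which $\beta = \frac{1}{2}$ and $\alpha = 1$. The choice of $\varphi$ and the sampled points are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, alpha = 0.5, 1.0   # moduli for phi(x, y) = 0.5 * ||y - x||^2

def phi(x, y):
    return 0.5 * np.dot(y - x, y - x)

def grad2_phi(x, y):
    """Partial derivative of phi with respect to the second variable."""
    return y - x

for _ in range(1000):
    x, y1, y2 = rng.normal(size=(3, 4))
    # (P2): nonnegativity
    assert phi(x, y1) >= 0.0
    # (P3): uniform strong convexity in y with modulus beta
    lhs = phi(x, y1) - phi(x, y2)
    rhs = grad2_phi(x, y2) @ (y1 - y2) + beta * np.dot(y1 - y2, y1 - y2)
    assert lhs >= rhs - 1e-12
    # (P4): Lipschitz continuity of grad2_phi in y with modulus alpha
    assert np.linalg.norm(grad2_phi(x, y1) - grad2_phi(x, y2)) <= \
           alpha * np.linalg.norm(y1 - y2) + 1e-12
print("(P2)-(P4) verified on random samples")
```

For this quadratic $\varphi$, (P3) in fact holds with equality, which is why $\beta = \frac{1}{2}$ is the exact strong-convexity modulus.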

Now we recall some properties of φ in (1).

Proposition 2.1 The following statements hold for each $x, y \in K$:

(i) $\langle \nabla_2 \varphi(x, y), u \rangle \le \alpha \|x - y\| \|u\|$, $\forall u \in \operatorname{span}(K - x)$.

(ii) $\beta \|x - y\|^2 \le \varphi(x, y) \le (\alpha - \beta) \|x - y\|^2$.

(iii) $\varphi(x, y) \ge \langle \nabla_2 \varphi(x, y), y - x \rangle - (\alpha - \beta) \|x - y\|^2$.

(iv) $\nabla_2 \varphi(x, y) = 0$ if and only if $x = y$.

Proof Parts (i)-(iii) are taken from [[14], Lemma 2.1] and part (iv) is from [[17], Lemma 2.1]. □

Remark 2.1 In light of (ii) in Proposition 2.1, it holds true that $\alpha \ge 2\beta$.

We now list some basic properties of the generalized regularized gap function $f_\gamma$.

Proposition 2.2 The following conclusions are valid for (VI).

(i) For every $x \in \mathbb{R}^n$, there exists a unique vector $y_\gamma^\varphi(x) \in K$ at which the infimum in (1) is attained, i.e.,
\[ f_\gamma(x) = -\big\langle f(x), y_\gamma^\varphi(x) - x \big\rangle - \gamma \varphi\big(x, y_\gamma^\varphi(x)\big). \]

(ii) $f_\gamma$ is a gap function of (VI).

(iii) $x = y_\gamma^\varphi(x)$ if and only if $x$ is a solution of (VI).

(iv) $f_\gamma$ is continuously differentiable on $\mathbb{R}^n$ with
\[ \nabla f_\gamma(x) = -\nabla f(x)^{\mathsf{T}} \big( y_\gamma^\varphi(x) - x \big) + f(x) - \gamma \nabla_1 \varphi\big(x, y_\gamma^\varphi(x)\big). \]

(v) $y_\gamma^\varphi$ and $f_\gamma$ are both locally Lipschitz on $\mathbb{R}^n$.

(vi) If $f$ is coercive on $K$, then (VI) has a nonempty compact solution set.

(vii) $f_\gamma(x) \ge \beta\gamma \|y_\gamma^\varphi(x) - x\|^2$, $\forall x \in K$.
     

Proof Parts (i)-(iv) are from [16], part (v) from [[18], Lemma 3.1] and part (vi) from [[11], Proposition 2.2.7].

It follows from (ii) and (iii) that we only need to prove (vii) for $x \in K \setminus S$. Since $y_\gamma^\varphi(x)$ is the minimizer of the function
\[ G(\cdot) := \langle f(x), \cdot - x \rangle + \gamma \varphi(x, \cdot) \quad \text{on } K, \]
the first-order optimality condition implies that
\[ \big\langle \nabla G\big(y_\gamma^\varphi(x)\big), y - y_\gamma^\varphi(x) \big\rangle \ge 0, \quad \forall y \in K. \]
Letting $y = x$, we get
\[ \big\langle \nabla G\big(y_\gamma^\varphi(x)\big), x - y_\gamma^\varphi(x) \big\rangle \ge 0, \]
i.e.,
\[ \big\langle f(x) + \gamma \nabla_2 \varphi\big(x, y_\gamma^\varphi(x)\big), \, y_\gamma^\varphi(x) - x \big\rangle \le 0. \]
It follows from (P3) that the mapping $G$ is strongly convex on $\mathbb{R}^n$ with modulus $\beta\gamma > 0$, i.e., for all $y_1, y_2 \in \mathbb{R}^n$,
\[ G(y_1) - G(y_2) \ge \big\langle f(x) + \gamma \nabla_2 \varphi(x, y_2), \, y_1 - y_2 \big\rangle + \beta\gamma \|y_1 - y_2\|^2. \]
Letting $y_1 = x$ and $y_2 = y_\gamma^\varphi(x)$, and noting that $G(x) = 0$ and $f_\gamma(x) = -G(y_\gamma^\varphi(x))$, we obtain
\[ f_\gamma(x) \ge \big\langle f(x) + \gamma \nabla_2 \varphi\big(x, y_\gamma^\varphi(x)\big), \, x - y_\gamma^\varphi(x) \big\rangle + \beta\gamma \|y_\gamma^\varphi(x) - x\|^2. \]

Thus, one has $f_\gamma(x) \ge \beta\gamma \|y_\gamma^\varphi(x) - x\|^2$. □
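For the classical choice $\varphi(x, y) = \frac{1}{2}\|y - x\|^2$ and $K = [0, +\infty)^n$, the minimizer in (1) has the closed form $y_\gamma^\varphi(x) = \Pi_K(x - f(x)/\gamma)$, and part (vii) with $\beta = \frac{1}{2}$ can be checked numerically. The sketch below does this for an illustrative affine map $f$ (an assumption of this sketch, not from the paper).

```python
import numpy as np

beta, gamma = 0.5, 1.0   # beta = 1/2 for phi(x, y) = 0.5 * ||y - x||^2

def f(x):
    """Illustrative affine operator; any continuously differentiable map works."""
    return x - np.ones_like(x)

def y_gamma(x):
    """Closed-form minimizer in (1): projection of x - f(x)/gamma onto K = [0, inf)^n."""
    return np.maximum(x - f(x) / gamma, 0.0)

def f_gamma(x):
    """Generalized regularized gap function (1) for the quadratic phi."""
    y = y_gamma(x)
    return -(f(x) @ (y - x) + gamma * 0.5 * np.dot(y - x, y - x))

rng = np.random.default_rng(1)
for _ in range(1000):
    x = np.abs(rng.normal(size=3))            # sample points in K
    w = y_gamma(x) - x
    # part (vii): f_gamma(x) >= beta * gamma * ||y_gamma(x) - x||^2
    assert f_gamma(x) >= beta * gamma * np.dot(w, w) - 1e-12
print("property (vii) holds on sampled points of K")
```

The assertion mirrors exactly the chain of inequalities in the proof: the first-order optimality condition at $y_\gamma^\varphi(x)$ supplies the nonpositive inner-product term, and strong convexity supplies the quadratic lower bound.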

Theorem 2.1 Let $f$ be coercive on $K$ and $\gamma(\alpha - 2\beta) < \mu$. Assume that $\varphi$ satisfies

(P5) $\big\langle \nabla_1 \varphi\big(x, y_\gamma^\varphi(x)\big) + \nabla_2 \varphi\big(x, y_\gamma^\varphi(x)\big), \, y_\gamma^\varphi(x) - x \big\rangle \ge 0$, $\forall x \in K$.

Suppose further that the following condition holds:
\[ \mu := \inf\bigg\{ \langle d, \nabla f(x) d \rangle \ \bigg|\ x \in K \setminus S, \ d = \frac{y_\gamma^\varphi(x) - x}{\|y_\gamma^\varphi(x) - x\|} \bigg\} > 0, \]
(2)
where $S$ is the solution set of (VI). Then $\sqrt{f_\gamma}$ has a global error bound with the modulus
\[ \max\bigg\{ \frac{2\sqrt{\beta\gamma}}{\mu + 2\gamma\beta - \gamma\alpha}, \ \frac{2\sqrt{\beta\gamma}}{\beta\gamma} \bigg\}. \]
Proof It follows from (vi) of Proposition 2.2 that $S$ is a nonempty compact subset of $K$. If $x \in S$, then the assertion obviously holds. Let $x \in K \setminus S$. Then $f_\gamma(x) > 0$. For brevity, we denote $w := y_\gamma^\varphi(x) - x$ and $d := w / \|w\|$. It follows from [[19], Theorem 2.5] that we only need to prove
\[ \big\langle \nabla \sqrt{f_\gamma}(x), d \big\rangle \le -\min\bigg\{ \frac{\mu + 2\gamma\beta - \gamma\alpha}{2\sqrt{\beta\gamma}}, \ \frac{\sqrt{\beta\gamma}}{2} \bigg\}. \]
(3)
It follows from (iv) of Proposition 2.2 that
\[
\begin{aligned}
\langle \nabla f_\gamma(x), w \rangle &= \big\langle -\nabla f(x)^{\mathsf{T}} w + f(x) - \gamma \nabla_1 \varphi\big(x, y_\gamma^\varphi(x)\big), \, w \big\rangle \\
&= -\langle \nabla f(x) w, w \rangle + \langle f(x), w \rangle + \gamma \varphi\big(x, y_\gamma^\varphi(x)\big) - \gamma \big[ \big\langle \nabla_1 \varphi\big(x, y_\gamma^\varphi(x)\big), w \big\rangle + \varphi\big(x, y_\gamma^\varphi(x)\big) \big] \\
&= -\langle \nabla f(x) w, w \rangle - f_\gamma(x) - \gamma \big[ \big\langle \nabla_1 \varphi\big(x, y_\gamma^\varphi(x)\big), w \big\rangle + \varphi\big(x, y_\gamma^\varphi(x)\big) \big].
\end{aligned}
\]
By (P5) and (2), we have
\[ \langle \nabla f_\gamma(x), w \rangle \le -\mu \|w\|^2 - f_\gamma(x) + \gamma \big[ \big\langle \nabla_2 \varphi\big(x, y_\gamma^\varphi(x)\big), w \big\rangle - \varphi\big(x, y_\gamma^\varphi(x)\big) \big]. \]
It follows from (iii) of Proposition 2.1 that
\[ \langle \nabla f_\gamma(x), w \rangle \le -\mu \|w\|^2 - f_\gamma(x) + \gamma(\alpha - \beta) \|w\|^2. \]
Thus,
\[ \big\langle \nabla \sqrt{f_\gamma}(x), d \big\rangle = \frac{\langle \nabla f_\gamma(x), w \rangle}{2\sqrt{f_\gamma(x)} \, \|w\|} \le -\frac{[\mu - \gamma(\alpha - \beta)] \|w\|^2 + f_\gamma(x)}{2\sqrt{f_\gamma(x)} \, \|w\|}. \]
(4)
In light of (4) and (vii) of Proposition 2.2, we have
\[ \big\langle \nabla \sqrt{f_\gamma}(x), d \big\rangle \le -\frac{[\mu - \gamma(\alpha - \beta)] \|w\|}{2\sqrt{f_\gamma(x)}} - \frac{\sqrt{\beta\gamma}}{2}. \]
If $\mu < \gamma(\alpha - \beta)$, then, since (vii) of Proposition 2.2 gives $\|w\| \le \sqrt{f_\gamma(x)} / \sqrt{\beta\gamma}$, it follows from $\gamma(\alpha - 2\beta) < \mu$ that
\[ \big\langle \nabla \sqrt{f_\gamma}(x), d \big\rangle \le \frac{\gamma(\alpha - \beta) - \mu}{2\sqrt{\beta\gamma}} - \frac{\sqrt{\beta\gamma}}{2} = \frac{\gamma\alpha - \mu - 2\gamma\beta}{2\sqrt{\beta\gamma}} < 0. \]
(5)
If $\mu \ge \gamma(\alpha - \beta)$, then
\[ \big\langle \nabla \sqrt{f_\gamma}(x), d \big\rangle \le -\frac{\sqrt{\beta\gamma}}{2}. \]
(6)

Hence, (3) follows from (5) and (6). The proof is complete. □

Now we use two examples to show that (2) cannot be dropped and that Theorem 2.1 is applicable, respectively.

Example 2.1 Consider $K = \mathbb{R}$, $\varphi(x, y) = \frac{1}{2}|x - y|^2$, $\gamma = \frac{1}{2}$ and $f(x) = x^3$. Then we can easily get that $\alpha = 2\beta = 1$, $y_\gamma^\varphi(x) = x - 2f(x)$, $f_\gamma(x) = x^6$ and $S = \{0\}$. It is clear that $f$ is coercive on $K$ and $\mu = 0$. Thus, (2) does not hold. Moreover, it is obvious that $\sqrt{f_\gamma}$ does not have a global error bound.

Example 2.2 Consider $K = [0, +\infty)$, $\varphi(x, y) = \frac{1}{2}|x - y|^2$, $\gamma = 1$ and $f(x) = x$. Then we can easily get that $\alpha = 2\beta = 1$, $\nabla f(x) = 1$, $y_\gamma^\varphi(x) = 0$, $f_\gamma(x) = \frac{1}{2}x^2$ and $S = \{0\}$. It is clear that $f$ is coercive on $K$ and (2) holds with $\mu = 1$. Thus, it follows from Theorem 2.1 that $\sqrt{f_\gamma}$ has a global error bound.
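Example 2.2 can also be verified numerically. The sketch below evaluates the modulus of Theorem 2.1 for this instance ($\alpha = 1$, $\beta = \frac{1}{2}$, $\gamma = 1$, $\mu = 1$) and checks the resulting global error bound $d(x, S) \le \tau \sqrt{f_\gamma(x)}$ on sample points; the sampling grid is an illustrative choice.

```python
import numpy as np

# Data of Example 2.2: K = [0, inf), f(x) = x, phi(x, y) = 0.5*|x - y|^2
alpha, beta, gamma, mu = 1.0, 0.5, 1.0, 1.0

# Modulus from Theorem 2.1:
# tau = max( 2*sqrt(beta*gamma)/(mu + 2*gamma*beta - gamma*alpha),
#            2*sqrt(beta*gamma)/(beta*gamma) )
tau = max(2 * np.sqrt(beta * gamma) / (mu + 2 * gamma * beta - gamma * alpha),
          2 * np.sqrt(beta * gamma) / (beta * gamma))

f_gamma = lambda x: 0.5 * x**2   # regularized gap function of this instance
dist_S  = lambda x: abs(x)       # distance to the solution set S = {0}

for x in np.linspace(0.0, 10.0, 101):
    # global error bound: d(x, S) <= tau * sqrt(f_gamma(x))
    assert dist_S(x) <= tau * np.sqrt(f_gamma(x)) + 1e-12
print("error bound verified with tau =", tau)
```

Here $\tau = 2\sqrt{2}$, so the bound reads $x \le 2\sqrt{2} \cdot x/\sqrt{2} = 2x$, which indeed holds on all of $K$.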

By [[21], Proposition 2.3(ii)], Huang and Ng [[14], Theorem 2.1] obtained the following conclusion. We now give a slightly different proof, based on Theorem 2.1.

Corollary 2.1 Let $f$ be strongly monotone on $\mathbb{R}^n$ with modulus $\lambda > 0$ and $\gamma(\alpha - 2\beta) < \lambda$. Assume that $\varphi$ satisfies (P5). Then $\sqrt{f_\gamma}$ has a global error bound with the modulus
\[ \max\bigg\{ \frac{2\sqrt{\beta\gamma}}{\lambda + 2\gamma\beta - \gamma\alpha}, \ \frac{2\sqrt{\beta\gamma}}{\beta\gamma} \bigg\}. \]
Proof Let $x \in K \setminus S$ and $w = y_\gamma^\varphi(x) - x$. Since $f$ is continuously differentiable,
\[ f(x + tw) = f(x) + t \nabla f(x) w + o(t), \]
where $o(t)/t \to 0$ as $t \to 0$. Since $f$ is strongly monotone with modulus $\lambda$, one has
\[ \langle f(x + tw) - f(x), \, tw \rangle \ge \lambda \|tw\|^2, \]
which implies that
\[ \langle w, \nabla f(x) w \rangle \ge \lambda \|w\|^2. \]

Thus, (2) holds with $\mu \ge \lambda$. Moreover, the strong monotonicity of $f$ implies the coercivity of $f$ (cf. [[22], Remark 2.1]). Thus, by Theorem 2.1, we get that $\sqrt{f_\gamma}$ has a global error bound. □

3 A global error bound of (WVVI)

In this section, by the nonlinear scalarization method and Theorem 2.1, we discuss a global error bound of (WVVI). The dual cone of $C$ is defined by $C^* := \{\xi \in \mathbb{R}^m : \langle \xi, z \rangle \ge 0, \ \forall z \in C\}$. For each $\xi \in \mathbb{R}^m$, $\|\xi\| := \sup\{|\langle \xi, z \rangle| : \|z\| \le 1\}$, where $\langle \xi, z \rangle$ denotes the value of $\xi$ at $z$. Let $e \in \operatorname{int} C$ and $B_e^* := \{\xi \in C^* : \langle \xi, e \rangle = 1\}$. It is well known that $B_e^*$ is a compact convex base of $C^*$.

Lemma 3.1 [20]
\[ S = \bigcup_{\xi \in C^* \setminus \{0\}} S_\xi = \bigcup_{\xi \in B_e^*} S_\xi, \]

where $S_\xi := \big\{ \bar{x} \in K : \big\langle \sum_{i=1}^m \xi_i F_i(\bar{x}), \, y - \bar{x} \big\rangle \ge 0, \ \forall y \in K \big\}$ and $S$ is the solution set of (WVVI).

Recall the generalized regularized gap function for (WVVI), which is defined by
\[ \phi_\gamma(x) := \min_{\xi \in B_e^*} f_\gamma(x, \xi), \]

where $f_\gamma(x, \xi) = \max_{y \in K} \big\{ \big\langle \sum_{i=1}^m \xi_i F_i(x), \, x - y \big\rangle - \gamma \varphi(x, y) \big\}$. When $\varphi(x, y) = \frac{1}{2}\|x - y\|^2$, the generalized regularized gap function reduces to the regularized gap function defined in [4].
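For a toy instance with $m = 2$ and $C = \mathbb{R}^2_+$, the scalarized gap function $\phi_\gamma$ can be approximated by discretizing $B_e^*$, which for $e = (1, 1)$ is the unit simplex. The mappings $F_1(x) = x$ and $F_2(x) = 2x$ and the quadratic $\varphi$ below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

gamma = 1.0
F = lambda x: np.array([x, 2.0 * x])   # illustrative F = (F_1, F_2) on K = [0, inf)

def f_gamma_xi(x, xi):
    """Scalarized regularized gap function f_gamma(x, xi) for phi = 0.5*|x-y|^2."""
    g = xi @ F(x)                       # scalarized operator (xi . F)(x)
    y = max(x - g / gamma, 0.0)         # closed-form maximizer via projection
    return -(g * (y - x) + gamma * 0.5 * (y - x) ** 2)

def phi_gamma(x, grid=1001):
    """Approximate min over B_e* by sampling xi = (t, 1 - t) on the simplex."""
    ts = np.linspace(0.0, 1.0, grid)
    return min(f_gamma_xi(x, np.array([t, 1.0 - t])) for t in ts)

# The solution set of this toy (WVVI) is S = {0}, and phi_gamma vanishes there
print(phi_gamma(0.0) == 0.0)   # True
print(phi_gamma(1.0) > 0.0)    # True: the gap is positive off the solution set
```

A grid search is used only for transparency; for this instance the minimum over $\xi$ can also be computed in closed form since $f_\gamma(x, \cdot)$ is piecewise smooth on the simplex.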

Theorem 3.1 Let $\gamma(\alpha - 2\beta) < \min_{\xi \in B_e^*} \mu_\xi$. Assume that $\varphi$ satisfies (P5). For each $\xi \in B_e^*$, suppose that $\xi F$ is coercive on $K$, and that the following condition holds:
\[ \mu_\xi := \inf\bigg\{ \langle d, \nabla(\xi F)(x) d \rangle \ \bigg|\ x \in K \setminus S, \ d = \frac{y_\gamma^\varphi(x) - x}{\|y_\gamma^\varphi(x) - x\|} \bigg\} > 0. \]
(7)
Then $\sqrt{\phi_\gamma}$ has a global error bound with the modulus
\[ \max\bigg\{ \max_{\xi \in B_e^*} \frac{2\sqrt{\beta\gamma}}{\mu_\xi + 2\gamma\beta - \gamma\alpha}, \ \frac{2\sqrt{\beta\gamma}}{\beta\gamma} \bigg\}. \]
Proof It follows from (vi) of Proposition 2.2 that $S_\xi$ is a nonempty compact subset of $K$ for each $\xi \in B_e^*$. If $x \in S$, then the assertion obviously holds. Let $x \in K \setminus S$. Then $\phi_\gamma(x) > 0$ and there exists $\xi_0 \in B_e^*$ such that $f_\gamma(x, \xi_0) = \phi_\gamma(x)$. It follows from Theorem 2.1 that
\[ d(x, S_{\xi_0}) \le \tau_{\xi_0} \sqrt{f_\gamma(x, \xi_0)}, \]
where $\tau_\xi = \max\big\{ \frac{2\sqrt{\beta\gamma}}{\mu_\xi + 2\gamma\beta - \gamma\alpha}, \frac{2\sqrt{\beta\gamma}}{\beta\gamma} \big\}$. Thus, by Lemma 3.1, one has
\[ d(x, S) \le d(x, S_{\xi_0}) \le \tau_{\xi_0} \sqrt{f_\gamma(x, \xi_0)} \le \Big( \max_{\xi \in B_e^*} \tau_\xi \Big) \sqrt{\phi_\gamma(x)}. \]

Hence, $\sqrt{\phi_\gamma}$ has a global error bound with the modulus $\max_{\xi \in B_e^*} \tau_\xi$. □

Remark 3.1 If $F_i$ is strongly monotone with modulus $\lambda_i$ for $i = 1, 2, \ldots, m$ and $C = \mathbb{R}^m_+$, it follows from [[21], Proposition 2.3] that
\[ \langle d, \nabla(\xi F)(x) d \rangle \ge \lambda \|d\|^2, \quad \forall d \in \mathbb{R}^n, \ \xi \in B_e^*, \]

where $\lambda := \min_{1 \le i \le m} \lambda_i$. Moreover, the strong monotonicity of $F_i$ implies the coercivity of $F_i$ (cf. [[22], Remark 2.1]) and that (VI) has a unique solution (cf. [[11], Theorem 2.3.3]). Thus, by Theorem 3.1, we get that $\sqrt{\phi_\gamma}$ has a global error bound. Hence, our results extend those of [[4], Theorem 2.9].

Declarations

Acknowledgements

This research was supported by the Natural Science Foundation of Shaanxi Province, China (Grant number: 2014JQ1023).

Authors’ Affiliations

(1)
College of Science, Northwest A&F University

References

  1. Giannessi F: Theorems of the alternative, quadratic programs and complementarity problems. In Variational Inequalities and Complementarity Problems. Edited by: Cottle RW, Giannessi F, Lions JL. Wiley, New York; 1980:151–186.
  2. Chen GY, Goh CJ, Yang XQ: Vector network equilibrium problems and nonlinear scalarization methods. Math. Methods Oper. Res. 1999, 49: 239–253.
  3. Li SJ, Teo KL, Yang XQ: A remark on a standard and linear vector network equilibrium problem with capacity constraints. Eur. J. Oper. Res. 2008, 184: 13–23. doi:10.1016/j.ejor.2005.11.059
  4. Charitha C, Dutta J: Regularized gap functions and error bounds for vector variational inequalities. Pac. J. Optim. 2010, 6: 497–510.
  5. Li J, Mastroeni G: Vector variational inequalities involving set-valued mappings via scalarization with applications to error bounds for gap functions. J. Optim. Theory Appl. 2010, 145: 355–372. doi:10.1007/s10957-009-9625-1
  6. Sun XK, Chai Y: Gap functions and error bounds for generalized vector variational inequalities. Optim. Lett. 2014, 8: 1663–1673. doi:10.1007/s11590-013-0685-7
  7. Xu YD, Li SJ: Gap functions and error bounds for weak vector variational inequalities. Optimization 2014, 63: 1339–1352. doi:10.1080/02331934.2012.721115
  8. Konnov IV: A scalarization approach for vector variational inequalities with applications. J. Glob. Optim. 2005, 32: 517–527. doi:10.1007/s10898-003-2688-x
  9. Auchmuty G: Variational principles for variational inequalities. Numer. Funct. Anal. Optim. 1989, 10: 863–874. doi:10.1080/01630568908816335
  10. Auslender A: Optimisation: Méthodes Numériques. Masson, Paris; 1976.
  11. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin; 2003.
  12. Fukushima M: Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Math. Program. 1992, 53: 99–110. doi:10.1007/BF01585696
  13. Harker PT, Pang JS: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms, and applications. Math. Program. 1990, 48: 161–220. doi:10.1007/BF01582255
  14. Huang LR, Ng KF: Equivalent optimization formulations and error bounds for variational inequality problems. J. Optim. Theory Appl. 2005, 125: 299–314. doi:10.1007/s10957-004-1839-7
  15. Ng KF, Tan LL: Error bounds of regularized gap functions for nonsmooth variational inequality problems. Math. Program. 2007, 110: 405–429. doi:10.1007/s10107-006-0007-2
  16. Wu JH, Florian M, Marcotte P: A general descent framework for the monotone variational inequality problem. Math. Program. 1993, 61: 281–300. doi:10.1007/BF01582152
  17. Yamashita N, Taji K, Fukushima M: Unconstrained optimization reformulations of variational inequality problems. J. Optim. Theory Appl. 1997, 92: 439–456. doi:10.1023/A:1022660704427
  18. Li G, Tang C, Wei Z: Error bound results for generalized D-gap functions of nonsmooth variational inequality problems. J. Comput. Appl. Math. 2010, 233: 2795–2806. doi:10.1016/j.cam.2009.11.025
  19. Ng KF, Zheng XY: Error bounds for lower semicontinuous functions in normed spaces. SIAM J. Optim. 2001, 12: 1–17. doi:10.1137/S1052623499358884
  20. Lee GM, Kim DS, Lee BS, Yen ND: Vector variational inequality as a tool for studying vector optimization problems. Nonlinear Anal. 1998, 34: 745–765. doi:10.1016/S0362-546X(97)00578-6
  21. Jiang HY, Qi LQ: Local uniqueness and convergence of iterative methods for nonsmooth variational inequalities. J. Math. Anal. Appl. 1995, 196: 314–331. doi:10.1006/jmaa.1995.1412
  22. Li G, Ng KF: Error bounds of generalized D-gap functions for nonsmooth and nonmonotone variational inequality problems. SIAM J. Optim. 2009, 20: 667–690. doi:10.1137/070696283

Copyright

© Li; licensee Springer 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.