  • Research
  • Open Access

Mixed quasi-variational inequalities involving error bounds

Journal of Inequalities and Applications 2015, 2015:417

https://doi.org/10.1186/s13660-015-0945-4

  • Received: 28 August 2015
  • Accepted: 15 December 2015

Abstract

In this paper, we define some new notions of gap functions for set-valued mixed quasi-variational inequalities under suitable conditions. Further, we obtain local/global bounds for the solution of set-valued mixed quasi-variational inequality problems in terms of the residual gap function, the regularized gap function, and the D-gap function. The results obtained in this paper are generalizations and refinements of previously known results for some classes of variational inequality problems.

Keywords

  • error bounds
  • sub-differential
  • strong monotone mapping
  • regularized gap function
  • residual vector

MSC

  • 49C25
  • 49J53
  • 90C30
  • 90C33

1 Introduction

Among the several variants of variational inequality problems, the set-valued quasi-variational inequality problems containing a nonlinear term are among the most notable. It is well known that projection-type methods cannot be extended and generalized to suggest and analyze iterative methods for solving mixed variational inequalities involving nonlinear terms. To overcome this difficulty, resolvent operator methods can be used. In fact, if the nonlinear term in the mixed variational inequality is proper, convex, and lower semicontinuous, then the mixed variational inequality is equivalent to a fixed point problem and to the resolvent equations. In this technique, the given operator is decomposed into a sum of (maximal) monotone operators whose resolvents are easier to evaluate than the resolvent of the original operator; see, for example, [1–4] and the references therein.

Throughout this paper, let H be a real Hilbert space whose inner product and norm are denoted by \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|\), respectively. Let \(T: H \to2^{H}\) be a set-valued mapping and \(A(\cdot,\cdot):H\times H\to H\) a single-valued mapping. Let \(f(\cdot,\cdot): H \times H\to\mathbb{R}\cup\{+\infty\}\) be a proper convex bifunction with respect to both arguments such that \(\operatorname{dom}(f)\) is closed. Let \(K:H\to2^{H}\) be a set-valued mapping such that \(K(x)\) is a closed convex set in H for any element x of H. Now, we consider the set-valued mixed quasi-variational inequality problem, denoted by SVMQVIP, which consists in finding \(x\in H\) such that \(u\in T(x)\), \(x\in K(x)\), and
$$ \bigl\langle A(u,u), y-x \bigr\rangle +f(x,y)-f(x,x)\geq0,\quad \forall y\in K(x). $$
(1.1)
It can be shown that a wide class of set-valued odd-order and non-symmetric free, obstacle, moving, equilibrium, and optimization problems arising in the pure and applied sciences can be studied via set-valued quasi-variational inequality problems.

One of the classical approaches in the analysis of variational inequality problems is to transform them into an equivalent optimization problem via the notion of a gap function; see, for example, [5–25] and the references therein. This enables us to develop descent-like algorithms to solve the variational inequality problem. Besides this, gap functions have also turned out to be very useful in designing new globally convergent algorithms, in analyzing the rate of convergence of some iterative methods, and in obtaining error bounds, which provide a measure of the distance between the solution set and an arbitrary point. Recently, many error bounds for various kinds of variational inequalities have been established; see, for example, [7–11, 13–22, 24, 25] and the references therein.

If \(f(x,y)=f(y)\) for all \(x\in H\), \(A(u,u)=u\), and the mapping \(x\to K(x)\) is a constant closed convex set K, then problem SVMQVIP (1.1) collapses to the set-valued mixed variational inequality problem, denoted by SVMVIP, which consists in finding \(x\in H\) such that
$$ \exists u\in T(x)\mbox{:}\quad \langle u, y-x \rangle+f (y)-f (x)\geq 0, \quad \forall y\in K, $$
(1.2)
which was considered by Tang and Huang [21], who introduced two regularized gap functions for SVMVIP (1.2) and studied their differentiability properties.
If T is single-valued, then problem SVMVIP (1.2) reduces to the mixed variational inequality problem, denoted by MVIP, which consists in finding \(x\in H\) such that
$$ \bigl\langle T(x), y-x \bigr\rangle +f (y)-f (x)\geq0,\quad \forall y\in K, $$
(1.3)
which was studied by Solodov [20], who introduced three gap functions for MVIP (1.3) and used them to obtain error bounds.
If the function \(f(\cdot)\) is the indicator function of a closed convex set K in H, then problem SVMVIP (1.2) reduces to the set-valued variational inequality problem, denoted by SVVIP, which consists in finding \(x\in K\) such that
$$ \exists u\in T(x)\mbox{:}\quad \langle u, y-x\rangle\geq0,\quad \forall y\in K, $$
(1.4)
studied by Li and Mastroeni [15], who obtained existence results for global error bounds for the gap function under strong monotonicity. Later, Aussel and Dutta [5] defined gap functions and used them to obtain finiteness and error bound properties for the above set-valued variational inequalities.
If T is single-valued and \(K:H\to2^{H}\) is a set-valued mapping such that \(K(x)\) is a closed convex set in H for each \(x\in H\), then the above problem SVVIP (1.4) becomes the quasi-variational inequality problem, denoted by QVIP, which consists in finding \(x\in K(x)\) such that
$$ \bigl\langle T(x), y-x\bigr\rangle \geq0,\quad \forall y\in K(x), $$
(1.5)
which was studied by Gupta and Mehra [10] and Noor [17]. They derived local and global error bounds for the above quasi-variational inequality problems in terms of the regularized gap function and the D-gap function.
If T is single-valued, then problem SVVIP (1.4) reduces to the variational inequality problem, denoted by VIP, which consists in finding \(x\in K\) such that
$$ \bigl\langle T(x), y-x\bigr\rangle \geq0, \quad \forall y\in K, $$
(1.6)
which was considered by many authors to derive gap functions and the corresponding error bounds; see, for example, [13, 16, 22–25].

Inspired and motivated by the research work above, we define some new notions of gap functions for set-valued mixed quasi-variational inequalities and obtain local/global error bounds in terms of the residual gap function, the regularized gap function, and the D-gap function. Since this class is the most general and includes some previously studied classes of variational inequalities as special cases, our results cover and extend the previously known results. The results presented in this paper generalize and improve the work presented in [7, 10, 11, 14, 17, 20, 21].

This paper is organized as follows: In Section 2, we give some basic definitions and results which will be used in this paper. Further, we establish conditions under which SVMQVIP (1.1) has a unique solution. Furthermore, by using the residual vector \(R(x,\theta)\), we obtain an error bound for the solution of SVMQVIP (1.1). In Section 3, we introduce a regularized gap function for SVMQVIP (1.1) and derive error bounds with and without the Lipschitz continuity assumption. In Section 4, we introduce the D-gap function and derive global error bounds in terms of the D-gap function for the solution of SVMQVIP (1.1).

2 Preliminaries and basic facts

First of all, we recall the following well-known results and concepts.

Definition 2.1

A bifunction \(f:H\times H\to\mathbb{R}\) is said to be skew-symmetric if \(f(x,x)-f(x,y)-f(y,x)+f(y,y)\geq 0\), \(\forall x,y \in H\).
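For intuition, the defining combination can be checked numerically. The bifunction \(f(x,y)=xy\) on \(H=\mathbb{R}\) is an illustrative choice made only for this sketch; for it the combination equals \((x-y)^{2}\geq0\):

```python
# Skew-symmetry check for the illustrative bifunction f(x, y) = x*y on R:
# f(x,x) - f(x,y) - f(y,x) + f(y,y) = x^2 - 2xy + y^2 = (x - y)^2 >= 0.
f = lambda x, y: x * y

for x in (-2.0, 0.0, 1.5):
    for y in (-1.0, 0.5, 3.0):
        combo = f(x, x) - f(x, y) - f(y, x) + f(y, y)
        assert combo >= 0.0 and abs(combo - (x - y) ** 2) < 1e-12
print("f(x, y) = x*y is skew-symmetric on the sampled points")
```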

Definition 2.2

Let κ be the domain of SVMQVIP (1.1). A function \(p: \kappa\to\mathbb{R}\) is said to be a gap function for SVMQVIP (1.1), if it satisfies the following properties:
  1. (i)

    \(p(x)\geq0\), \(\forall x\in\kappa\);

     
  2. (ii)

    \(p(x^{*}) = 0\), \(x^{*}\in\kappa\), if and only if \(x^{*}\) solves SVMQVIP (1.1).

     
For VIP (1.6), it is well known that \(x\in H\) is a solution if, and only if,
$$0 =x-P_{K}\bigl[x-\theta T(x)\bigr], $$
where \(P_{K}\) is the orthogonal projector onto K and \(\theta>0\) is arbitrary. Hence, the norm of the right-hand side of this equation can serve as a gap function for VIP (1.6); it is commonly called the natural residual vector.
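A minimal one-dimensional sketch (the box K = [0, 1] and the map T(x) = x - 1/2 are illustrative assumptions, not data from the paper) shows the natural residual vanishing exactly at the solution of VIP (1.6):

```python
import numpy as np

def natural_residual(x, T, theta, lo, hi):
    # R(x, theta) = x - P_K[x - theta*T(x)], where P_K is the orthogonal
    # projection onto the box K = [lo, hi] (computed by clipping).
    return x - np.clip(x - theta * T(x), lo, hi)

T = lambda x: x - 0.5          # the VIP on K = [0, 1] is solved by x = 0.5
print(natural_residual(0.5, T, 1.0, 0.0, 1.0))  # 0.0 at the solution
print(natural_residual(0.9, T, 1.0, 0.0, 1.0))  # 0.4, nonzero away from it
```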
We next derive a similar characterization for SVMQVIP (1.1). Recall that the proximal map [26, 27], \(P_{K}^{f}:H\to\operatorname{dom}(f)\), is given by
$$P_{K}^{f}(z) =\arg\min_{y\in K(x)}\biggl\{ f(x,y)+{1\over 2\theta}\| y-z\| ^{2}\biggr\} ,\quad z\in H, \theta>0. $$
Note that the objective function above is proper and strongly convex. Since \(\operatorname{dom}(f)\) is closed, \(P_{K}^{f}(\cdot)\) is well defined and single-valued. For \(\theta>0\), define
$$R(x, \theta)=x -P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr],\quad x\in H. $$
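For intuition about the proximal map, consider the illustrative special case \(f(x,y)=\lambda|y|\) with \(K(x)=H=\mathbb{R}\) (an assumption made only for this sketch, not the paper's general setting); then \(P_{K}^{f}\) has the classical soft-thresholding closed form:

```python
def prox_abs(z, theta, lam=1.0):
    # Closed-form minimiser of  lam*|y| + (1/(2*theta))*(y - z)^2  over R,
    # i.e. the proximal map for the illustrative f(x, y) = lam*|y|, K(x) = R.
    # This is soft-thresholding with threshold lam*theta.
    t = lam * theta
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

print(prox_abs(2.0, 0.5))   # 1.5
print(prox_abs(0.3, 0.5))   # 0.0  (points inside the threshold map to 0)
```

The residual \(R(x,\theta)=x-P_{K(x)}^{f}[x-\theta A(u,u)]\) is then computable in closed form as well.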

We next show that \(R(x, \theta)\) plays the role of the natural residual vector for SVMQVIP (1.1).

Lemma 2.1

Let \(\theta>0\) be arbitrary. An element \(x\in H\) solves SVMQVIP (1.1) if, and only if, \(R(x, \theta)=0\).

Proof

Let \(R(x, \theta)=0\), which implies that \(x=P_{K(x)}^{f}[x-\theta A(u,u)]\). This is equivalent to
$$x=\arg\min_{y\in K(x)}\biggl\{ f(x,y)+{1\over 2\theta}\bigl\Vert y-\bigl(x-\theta A(u,u)\bigr)\bigr\Vert ^{2}\biggr\} . $$
By the optimality conditions (which are necessary and sufficient, by convexity), the latter is equivalent to
$$0\in\partial f(x,\cdot) (x)+{1\over \theta}\bigl(x-\bigl(x-\theta A(u,u)\bigr) \bigr)=\partial f(x,\cdot) (x)+A(u,u), $$
which implies \(-A(u,u)\in\partial f(x,\cdot)(x)\), the sub-differential of \(f(x,\cdot)\) at x. This in turn is equivalent, by the definition of the subgradient, to
$$f(x,y)\geq f (x,x)-\bigl\langle A(u,u), y-x \bigr\rangle ,\quad \forall y\in K(x), $$
which implies x solves SVMQVIP (1.1). This completes the proof. □

Remark 2.1

It is easy to see that the natural residual vector \(R(x,\theta)\) is a gap function for SVMQVIP (1.1).

Definition 2.3

Let \(A:H\times H\to H\) be a single-valued operator and \(T: H \to2^{H}\) a set-valued operator. Then, for all \(x, x_{0}\in H\), \(u\in T(x)\), \(u_{0}\in T(x_{0})\):
  1. (a)
    A is said to be strongly monotone, if there exists a constant \(\alpha>0\) such that
    $$\bigl\langle A(u,u)-A(u_{0},u_{0}), x-x_{0}\bigr\rangle \geq\alpha\|x-x_{0}\|^{2}; $$
     
  2. (b)
    A is said to be cocoercive, if there exists a constant \(\tau>0\) such that
    $$\bigl\langle A(u,u)-A(u_{0},u_{0}), x-x_{0}\bigr\rangle \geq\tau\bigl\Vert A(u,u)-A(u_{0},u_{0})\bigr\Vert ^{2}; $$
     
  3. (c)
    A is said to be Lipschitz continuous, if there exists a constant \(\beta>0\) such that
    $$\bigl\Vert A(u,u)-A(u_{0},u_{0})\bigr\Vert \leq\beta \|u-u_{0}\|; $$
     
  4. (d)
    T is said to be M-Lipschitz continuous, if there exists a constant \(\mu>0\) such that
    $$M\bigl(T(x),T(x_{0})\bigr)\leq\mu\|x-x_{0}\|, $$
    where \(M(\cdot, \cdot)\) is the Hausdorff metric;
     
  5. (e)
    \(P_{K(x)}^{f}\) is said to be nonexpansive, if
    $$\bigl\Vert P_{K(x)}^{f}(v)-P_{K(x)}^{f}(w) \bigr\Vert \leq\|v-w\|,\quad \forall x, v, w\in H. $$
     

Since \(P_{K(x)}^{f}(\cdot)\) is a cocoercive map with modulus 1 on H (see Theorem 1.5.5(c) in [28]), for \(x\in H\) we define \(e_{x}:H\to H\) by \(e_{x}(z)=z-P_{K(x)}^{f}(z)\). We have the following result, to be used in the sequel.

Lemma 2.2

\(e_{x}(z)\) is a cocoercive map on H with modulus 1.

Proof

For all \(z,w\in H\), we have
$$\bigl\langle P_{K(x)}^{f}(z)-P_{K(x)}^{f}(w), z-w\bigr\rangle \geq\bigl\Vert P_{K(x)}^{f}(z)-P_{K(x)}^{f}(w) \bigr\Vert ^{2}, $$
implying
$$\bigl\langle z-P_{K(x)}^{f}(z)-\bigl(w-P_{K(x)}^{f}(w) \bigr), P_{K(x)}^{f}(z)-P_{K(x)}^{f}(w)\bigr\rangle \geq0. $$
Therefore
$$\bigl\langle e_{x}(z)-e_{x}(w),z-e_{x}(z)-w+e_{x}(w) \bigr\rangle \geq0, $$
yielding the required result. □
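Lemma 2.2 can be sanity-checked numerically in an illustrative special case where the proximal map has a closed form: \(f(x,y)=|y|\) and \(K(x)=\mathbb{R}\), so that \(P_{K(x)}^{f}\) is soft-thresholding (these choices are assumptions made only for this sketch):

```python
import numpy as np

theta = 1.0
# Proximal map of f(y) = |y| (soft-thresholding), an illustrative choice.
prox = lambda z: np.sign(z) * max(abs(z) - theta, 0.0)
e = lambda z: z - prox(z)          # e(z) = z - P^f(z), as in Lemma 2.2

# Cocoercivity with modulus 1: <e(z) - e(w), z - w> >= ||e(z) - e(w)||^2.
zs = np.linspace(-3.0, 3.0, 13)
for z in zs:
    for w in zs:
        d = e(z) - e(w)
        assert d * (z - w) >= d * d - 1e-12
print("e is cocoercive with modulus 1 on the sampled grid")
```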

Next, we study conditions under which SVMQVIP (1.1) has a unique solution.

Theorem 2.1

Let A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively. Assume that f is skew-symmetric and T is M-Lipschitz continuous with constant \(\mu>0\). If there exist \(k > 0\) and \(\theta<{2\alpha\over \beta^{2}\mu^{2}}\) such that
$$\bigl\Vert P_{K(x)}^{f}(z)-P_{K(x_{0})}^{f}(z) \bigr\Vert \leq k\Vert x-x_{0}\Vert ,\quad \forall x,x_{0},z\in H, \textit{with }k< 1-{\sqrt{\theta^{2} \beta^{2}\mu^{2}-2\alpha\theta+1}}, $$
then SVMQVIP (1.1) has a unique solution.

Proof

(a) Uniqueness. Let \(x_{1}\neq x_{2}\in H\) be two solutions of SVMQVIP (1.1). Then we have
$$\begin{aligned}& \bigl\langle A(u_{1},u_{1}), y-x_{1} \bigr\rangle +f (x_{1},y )-f (x_{1},x_{1} )\geq0, \end{aligned}$$
(2.1)
$$\begin{aligned}& \bigl\langle A(u_{2},u_{2}), y-x_{2} \bigr\rangle +f (x_{2},y )-f (x_{2},x_{2} )\geq0. \end{aligned}$$
(2.2)
Taking \(y =x_{2}\) in (2.1) and \(y =x_{1}\) in (2.2), adding the resultants and then using the skew-symmetry of f, we have
$$\bigl\langle A(u_{1},u_{1})-A(u_{2},u_{2}), x_{2}-x_{1} \bigr\rangle \geq0. $$
Since A is strongly monotone with a constant \(\alpha>0\),
$$0\leq \bigl\langle A(u_{1},u_{1})-A(u_{2},u_{2}), x_{2}-x_{1} \bigr\rangle \leq -\alpha\| x_{1}-x_{2} \|^{2}, $$
which implies that \(x_{1} =x_{2}\), proving the uniqueness of the solution of SVMQVIP (1.1).
(b) Existence. From Lemma 2.1, it follows that SVMQVIP (1.1) is equivalent to the fixed point problem, given by
$$ x=F(x):=P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr]:H\to H. $$
(2.3)
In order to prove the existence of a solution of SVMQVIP (1.1), it is sufficient to show that \(F(\cdot)\) has a fixed point. Thus, for all \(x, x_{0}\in H\), \(x\neq x_{0}\), we have
$$\begin{aligned} \bigl\Vert F(x)-F(x_{0})\bigr\Vert =&\bigl\Vert P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr]-P_{K(x_{0})}^{f} \bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]\bigr\Vert \\ =&\bigl\Vert P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr]-P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr] \\ &{}+P_{K(x_{0})}^{f} \bigl[x-\theta A(u,u)\bigr]-P_{K(x_{0})}^{f}\bigl[x_{0}- \theta A(u_{0},u_{0})\bigr]\bigr\Vert \\ \leq&\bigl\Vert P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr]-P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr]\bigr\Vert \\ &{}+ \bigl\Vert P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u) \bigr]-P_{K(x_{0})}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]\bigr\Vert \\ \leq& k\Vert x-x_{0}\Vert +\bigl\Vert x-x_{0}-\theta \bigl( A(u,u)- A(u_{0},u_{0})\bigr)\bigr\Vert . \end{aligned}$$
(2.4)
We know that
$$\begin{aligned}& \bigl\Vert x-x_{0}-\theta\bigl( A(u,u)- A(u_{0},u_{0}) \bigr)\bigr\Vert ^{2} \\& \quad =\Vert x-x_{0}\Vert ^{2}-2\theta \bigl\langle A(u,u)-A(u_{0},u_{0}), x-x_{0} \bigr\rangle +\theta^{2}\bigl\Vert A(u,u)-A(u_{0},u_{0})\bigr\Vert ^{2}. \end{aligned}$$
By using the strong monotonicity and the Lipschitz continuity of A with constants \(\alpha>0\) and \(\beta> 0\), respectively, we get
$$\begin{aligned}& \bigl\Vert x-x_{0}-\theta\bigl( A(u,u)- A(u_{0},u_{0}) \bigr)\bigr\Vert ^{2} \\& \quad \leq \Vert x-x_{0}\Vert ^{2}-2\theta\alpha \Vert x-x_{0}\Vert ^{2} + \theta^{2}\beta^{2}\Vert u-u_{0}\Vert ^{2}. \end{aligned}$$
Further, using the M-Lipschitz continuity of T with constant \(\mu>0\), we get
$$ \bigl\Vert x-x_{0}-\theta\bigl( A(u,u)- A(u_{0},u_{0}) \bigr)\bigr\Vert ^{2}\leq\bigl(\theta^{2}\beta^{2} \mu ^{2}-2\alpha \theta+1\bigr)\Vert x-x_{0}\Vert ^{2}. $$
(2.5)
From (2.4) and (2.5), we get
$$\bigl\Vert F(x)-F(x_{0})\bigr\Vert \leq \bigl(k+{\sqrt{ \theta^{2}\beta^{2}\mu^{2}-2\alpha \theta +1}} \bigr)\Vert x-x_{0}\Vert =m\Vert x-x_{0}\Vert , $$
where \(m=k+{\sqrt{\theta^{2}\beta^{2}\mu^{2}-2\alpha\theta+1}}\). From the assumption on k, it follows that \(m < 1\), so the mapping F defined by (2.3) is a contraction and hence has a fixed point \(x\in K(x)\) with \(u\in T(x)\) satisfying SVMQVIP (1.1). This completes the proof. □
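The existence argument above is the Banach contraction principle: since F is an m-contraction with \(m<1\), the Picard iteration \(x_{k+1}=F(x_{k})\) converges to the unique fixed point. A minimal sketch with a toy contraction (the map below is illustrative, not the paper's F):

```python
def picard(F, x0, tol=1e-10, max_iter=10_000):
    # Banach fixed-point iteration: for an m-contraction F with m < 1,
    # x_{k+1} = F(x_k) converges geometrically to the unique fixed point.
    x = x0
    for _ in range(max_iter):
        x_next = F(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy contraction with modulus m = 0.5 and fixed point x* = 2.
fp = picard(lambda x: 0.5 * x + 1.0, 0.0)
print(round(fp, 8))  # 2.0
```

In the proof, F additionally depends on \(u\in T(x)\); the toy map only illustrates the contraction mechanism.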

Now, by using the natural residual vector \(R(x,\theta)\), we derive error bounds for the solution of SVMQVIP (1.1).

Theorem 2.2

Let \(x_{0}\) be a solution of SVMQVIP (1.1). Let the operator A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively. Let T be M-Lipschitz continuous with constant \(\mu> 0\) and let f be skew-symmetric. If there exists \(k<{\alpha\over \beta\mu}\) such that, for any \(\theta>{k\over \alpha-\beta\mu k}\),
$$\bigl\Vert P_{K(x)}^{f}(w)-P_{K(x_{0})}^{f}(w) \bigr\Vert \leq k\Vert x-x_{0}\Vert , \quad \forall x,x_{0},w\in H, $$
then, for any \(x\in H\) and \(\theta>{k\over \alpha-\beta\mu k}\), we have
$$\Vert x-x_{0}\Vert \leq{1+\theta\beta\mu\over \alpha\theta-(1+\theta\beta \mu)k}\bigl\Vert R(x, \theta)\bigr\Vert . $$

Proof

Let \(x_{0}\in H\) be a solution of SVMQVIP (1.1), then
$$\bigl\langle A(u_{0},u_{0}), y-x_{0}\bigr\rangle +f (x_{0},y)-f (x_{0},x_{0})\geq0,\quad \forall y\in K(x_{0}). $$
Substituting \(y=P_{K(x_{0})}^{f}[x-\theta A(u,u)]\) in the above inequality, we have
$$ \bigl\langle A(u_{0},u_{0}), P_{K(x_{0})}^{f} \bigl[x-\theta A(u,u)\bigr]-x_{0} \bigr\rangle +f \bigl(x_{0},P_{K(x_{0})}^{f} \bigl[x-\theta A(u,u)\bigr] \bigr)-f (x_{0},x_{0}) \geq0. $$
(2.6)
For any fixed \(x\in H\) and \(\theta> 0\), we observe that
$$x-\theta A(u,u)\in(I+\theta\partial f) (I+\theta\partial f)^{-1} \bigl(x-\theta A(u,u) \bigr)=(I+\theta\partial f)P_{K(x_{0})}^{f} \bigl[x-\theta A(u,u)\bigr], $$
which is equivalent to
$$-A(u,u)+{1\over \theta} \bigl[x-P_{K(x_{0})}^{f}\bigl[x- \theta A(u,u)\bigr] \bigr]\in \partial f \bigl(P_{K(x_{0})}^{f} \bigl[x-\theta A(u,u)\bigr] \bigr). $$
By the definition of a sub-differential, we have
$$\begin{aligned}& \biggl\langle A(u,u)-{1\over \theta} \bigl(x-P_{K(x_{0})}^{f} \bigl[x-\theta A(u,u)\bigr] \bigr), y-P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr] \biggr\rangle \\& \quad {}+f \bigl(P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr],y \bigr)-f \bigl(P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u) \bigr],P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr)\geq0. \end{aligned}$$
Taking \(y=x_{0}\) in the above we get
$$\begin{aligned}& \biggl\langle A(u,u)-{1\over \theta} \bigl(x-P_{K(x_{0})}^{f} \bigl[x-\theta A(u,u)\bigr] \bigr), x_{0}-P_{K(x_{0})}^{f} \bigl[x-\theta A(u,u)\bigr] \biggr\rangle \\& \quad {}+f \bigl(P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u) \bigr],x_{0} \bigr)-f \bigl(P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr],P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr) \geq0. \end{aligned}$$
This implies that
$$\begin{aligned}& \biggl\langle -A(u,u)+{1\over \theta} \bigl(x-P_{K(x_{0})}^{f} \bigl[x-\theta A(u,u)\bigr] \bigr), P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr]-x_{0} \biggr\rangle \\& \quad {}+f \bigl(P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u) \bigr],x_{0} \bigr) \\& \quad {}-f \bigl(P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr],P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr)\geq 0. \end{aligned}$$
(2.7)
Adding (2.6) and (2.7), we get
$$\begin{aligned}& \biggl\langle A(u_{0},u_{0})-A(u,u)+{1\over \theta} \bigl(x-P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr), P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr]-x_{0} \biggr\rangle \\& \quad {}+f \bigl(x_{0}, P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u) \bigr] \bigr)-f (x_{0}, x_{0} )+f \bigl(P_{K(x_{0})}^{f} \bigl[x-\theta A(u,u)\bigr],x_{0} \bigr) \\& \quad {}-f \bigl(P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr], P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr)\geq0. \end{aligned}$$
Since f is skew-symmetric,
$$\biggl\langle A(u_{0},u_{0})-A(u,u)+{1\over \theta} \bigl(x-P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr), P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr]-x_{0} \biggr\rangle \geq0. $$
This also can be written as
$$\begin{aligned}& \theta \bigl\langle A(u_{0},u_{0})-A(u,u), P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr]-x \bigr\rangle +\theta \bigl\langle A(u_{0},u_{0})-A(u,u), x-x_{0} \bigr\rangle \\& \quad {}+ \bigl\langle x-P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u) \bigr], P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr]-x \bigr\rangle \\& \quad {}+ \bigl\langle x-P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr], x-x_{0} \bigr\rangle \geq0, \end{aligned}$$
which implies that
$$\begin{aligned}& \theta \bigl\langle A(u_{0},u_{0})-A(u,u), P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr]-x \bigr\rangle + \bigl\langle x-P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr], x-x_{0} \bigr\rangle \\& \quad \geq \theta \bigl\langle A(u_{0},u_{0})-A(u,u), x_{0}-x \bigr\rangle \\& \qquad {}+ \bigl\langle x-P_{K(x_{0})}^{f} \bigl[x-\theta A(u,u)\bigr], x-P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u) \bigr] \bigr\rangle . \end{aligned}$$
By using the strong monotonicity of A, we get
$$\begin{aligned}& \theta \bigl\langle A(u_{0},u_{0})-A(u,u), P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr]-x \bigr\rangle + \bigl\langle x-P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr], x-x_{0} \bigr\rangle \\& \quad \geq\alpha\theta\|x_{0}-x\|^{2}+\bigl\Vert R(x, \theta)\bigr\Vert ^{2}. \end{aligned}$$
Also the above inequality can be written as
$$\begin{aligned}& \theta \bigl\langle A(u_{0},u_{0})-A(u,u), P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr] \\& \qquad {}-x-P_{K(x)}^{f} \bigl[x-\theta A(u,u)\bigr]+P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr] \bigr\rangle \\& \qquad {}+ \bigl\langle x-P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u) \bigr]-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr]+P_{K(x)}^{f} \bigl[x-\theta A(u,u)\bigr], x-x_{0} \bigr\rangle \\& \quad \geq\alpha \theta\| x_{0}-x\|^{2}+\bigl\Vert R(x, \theta)\bigr\Vert ^{2}. \end{aligned}$$
By using the Cauchy-Schwarz inequality along with the triangle inequality, we have
$$\begin{aligned}& \theta\bigl\Vert A(u_{0},u_{0})-A(u,u)\bigr\Vert \cdot \bigl\Vert P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u) \bigr]-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr]\bigr\Vert \\& \qquad {}+\theta\bigl\Vert A(u_{0},u_{0})-A(u,u)\bigr\Vert \cdot\bigl\Vert P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr]-x\bigr\Vert \\& \qquad {}+\bigl\Vert x- P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr]\bigr\Vert \cdot \Vert x-x_{0}\Vert \\& \qquad {}+ \bigl\Vert P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr]-P_{K(x_{0})}^{f}\bigl[x-\theta A(u,u)\bigr]\bigr\Vert \cdot \Vert x-x_{0}\Vert \\& \quad \geq\alpha\theta \Vert x_{0}-x\Vert ^{2}+\bigl\Vert R(x,\theta)\bigr\Vert ^{2}. \end{aligned}$$
Now using the Lipschitz continuity of the operator A and assumption on \(P_{K(x)}^{f}(\cdot)\), we have
$$\begin{aligned}& \theta\beta \Vert u_{0}-u\Vert \cdot k\Vert x_{0}-x \Vert +\theta\beta \Vert u_{0}-u\Vert \cdot\bigl\Vert R(x,\theta) \bigr\Vert +\bigl\Vert R(x,\theta)\bigr\Vert \cdot \Vert x-x_{0} \Vert + k\Vert x-x_{0}\Vert ^{2} \\& \quad \geq\alpha\theta \Vert x_{0}-x\Vert ^{2}+\bigl\Vert R(x,\theta)\bigr\Vert ^{2}. \end{aligned}$$
Now using the M-Lipschitz continuity of T, we have
$$\begin{aligned}& k\theta\beta\mu \Vert x_{0}-x\Vert ^{2}+\theta\beta\mu \Vert x_{0}-x\Vert \cdot\bigl\Vert R(x,\theta)\bigr\Vert +\bigl\Vert R(x,\theta)\bigr\Vert \cdot \Vert x-x_{0}\Vert + k\Vert x-x_{0}\Vert ^{2} \\& \quad \geq\alpha\theta \Vert x_{0}-x \Vert ^{2}+\bigl\Vert R(x,\theta)\bigr\Vert ^{2}. \end{aligned}$$
Therefore, we have
$$\|x-x_{0}\|\leq{1+\theta\beta\mu\over \alpha\theta-(1+\theta\beta \mu)k}\bigl\Vert R(x,\theta)\bigr\Vert ,\quad \forall x\in H, $$
where \(\theta>{k\over \alpha-\beta\mu k}\). This completes the proof. □

Remark 2.2

Theorem 2.2 generalizes Theorem 1 of [7].
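Theorem 2.2 can be illustrated numerically in the simplest single-valued, unconstrained special case: T(x) = 2x on K = H = R with f = 0, so that α = β = 2, μ = 1, k = 0, and x₀ = 0 is the solution (all of these choices are assumptions made only for this sketch):

```python
import numpy as np

# Illustrative data: T(x) = 2x on K = R, f = 0, solution x0 = 0.
# Here theta = 0.25 > k/(alpha - beta*mu*k) = 0, as the theorem requires.
alpha, beta, mu, k, theta = 2.0, 2.0, 1.0, 0.0, 0.25
T = lambda x: 2.0 * x
R = lambda x: x - (x - theta * T(x))   # the projection onto R is the identity
C = (1 + theta * beta * mu) / (alpha * theta - (1 + theta * beta * mu) * k)

for x in np.linspace(-5.0, 5.0, 11):
    assert abs(x - 0.0) <= C * abs(R(x)) + 1e-12   # the bound of Theorem 2.2
print("error bound of Theorem 2.2 verified with C =", C)
```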

Besides the above result on the error bound, we provide another error bound for SVMQVIP (1.1).

Theorem 2.3

Assume that \(x_{0}\) is the solution of SVMQVIP (1.1). Let the operator A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively. Let T be M-Lipschitz continuous with constant \(\mu> 0\) and let f be skew-symmetric. If there exist \(k > 0\) and \(\theta <{2\alpha\over \beta^{2}\mu^{2}}\) such that
$$\bigl\Vert P_{K(x)}^{f}(z)-P_{K(x_{0})}^{f}(z) \bigr\Vert \leq k\|x-x_{0}\|,\quad \forall x,x_{0},z\in H, \textit{with }k< 1-{\sqrt{\theta^{2}\beta^{2} \mu^{2}-2\alpha\theta+1}}, $$
then, for any \(x\in H\), we have
$$\|x-x_{0}\|\leq{4\over {4\alpha\theta-\beta^{2}\mu^{2}\theta^{2} -4k}}\bigl\Vert R(x,\theta)\bigr\Vert . $$

Proof

For any \(x,x_{0}\in H\), let us consider \(v=x-\theta A(u,u)\) and \(w=x_{0}-\theta A(u_{0},u_{0})\). Now
$$\begin{aligned}& \bigl\langle R(x,\theta)-R(x_{0},\theta),x-x_{0} \bigr\rangle \\& \quad = \bigl\langle x-P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr]-x_{0}+P_{K(x_{0})}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr],x-x_{0} \bigr\rangle \\& \quad = \bigl\langle x-\theta A(u,u)-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr]-\bigl(x_{0}-\theta A(u_{0},u_{0}) \bigr) \\& \qquad {}+P_{K(x_{0})}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr],x-x_{0} \bigr\rangle +\theta \bigl\langle A(u,u)-A(u_{0},u_{0}), x-x_{0} \bigr\rangle \\& \quad = \bigl\langle v-P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr]-w+P_{K(x_{0})}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr],x-x_{0} \bigr\rangle \\& \qquad {}+\theta \bigl\langle A(u,u)-A(u_{0},u_{0}), x-x_{0} \bigr\rangle \\& \quad = \bigl\langle v-P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr]-w+P_{K(x)}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr],x-x_{0} \bigr\rangle \\& \qquad {}+ \bigl\langle P_{K(x_{0})}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]-P_{K(x)}^{f} \bigl[x_{0}-\theta A(u_{0},u_{0})\bigr], x-x_{0} \bigr\rangle \\& \qquad {}+\theta \bigl\langle A(u,u)-A(u_{0},u_{0}), x-x_{0} \bigr\rangle \\& \quad = \bigl\langle v-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr]- \bigl(w-P_{K(x)}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]\bigr), \\& \qquad v+\theta A(u,u)-w-\theta A(u_{0},u_{0}) \bigr\rangle \\& \qquad {}+ \bigl\langle P_{K(x_{0})}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]-P_{K(x)}^{f} \bigl[x_{0}-\theta A(u_{0},u_{0})\bigr], x-x_{0} \bigr\rangle \\& \qquad {}+\theta \bigl\langle A(u,u)-A(u_{0},u_{0}), x-x_{0} \bigr\rangle \\& \quad = \bigl\langle v-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr]- \bigl(w-P_{K(x)}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]\bigr), v-w \bigr\rangle \\& \qquad {}+\theta \bigl\langle v-P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr]-\bigl(w-P_{K(x)}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]\bigr), A(u,u)-A(u_{0},u_{0}) \bigr\rangle \\& \qquad {}+ \bigl\langle P_{K(x_{0})}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]-P_{K(x)}^{f} \bigl[x_{0}-\theta 
A(u_{0},u_{0})\bigr], x-x_{0} \bigr\rangle \\& \qquad {}+\theta \bigl\langle A(u,u)-A(u_{0},u_{0}), x-x_{0} \bigr\rangle . \end{aligned}$$
By using the cocoercive property of \(P_{K(x)}^{f}(\cdot)\), given in Lemma 2.2,
$$\begin{aligned}& \bigl\langle R(x,\theta)-R(x_{0},\theta),x-x_{0} \bigr\rangle \\ & \quad \geq\bigl\Vert v-P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr]-\bigl(w-P_{K(x)}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]\bigr)\bigr\Vert ^{2} \\ & \qquad {}+\theta\bigl\langle v-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr]-\bigl(w-P_{K(x)}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]\bigr), A(u,u)-A(u_{0},u_{0}) \bigr\rangle \\ & \qquad {}-\bigl\Vert P_{K(x)}^{f}\bigl[x_{0}- \theta A(u_{0},u_{0})\bigr]-P_{K(x_{0})}^{f} \bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]\bigr\Vert \cdot\|x-x_{0}\| \\ & \qquad {}+\theta\bigl\langle A(u,u)-A(u_{0},u_{0}), x-x_{0}\bigr\rangle \\ & \quad \geq\bigl\Vert v-P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr]-\bigl(w-P_{K(x)}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]\bigr)\bigr\Vert ^{2} \\ & \qquad {}+\theta\bigl\langle v-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr]-\bigl(w-P_{K(x)}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]\bigr), A(u,u)-A(u_{0},u_{0}) \bigr\rangle \\ & \qquad {}-k\|x-x_{0}\|^{2}+\theta\bigl\langle A(u,u)-A(u_{0},u_{0}), x-x_{0}\bigr\rangle . \end{aligned}$$
(2.8)
Since
$$\begin{aligned}& \bigl\Vert v-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr]- \bigl(w-P_{K(x)}^{f}\bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]\bigr)\bigr\Vert ^{2} \\& \qquad {}+2 \biggl({\theta\over 2} \biggr)\bigl\langle v-P_{K(x)}^{f} \bigl[x-\theta A(u,u)\bigr] \\& \qquad {} -\bigl(w-P_{K(x)}^{f} \bigl[x_{0}-\theta A(u_{0},u_{0})\bigr]\bigr),A(u,u)-A(u_{0},u_{0})\bigr\rangle \\& \quad \geq- \biggl({\theta\over 2} \biggr)^{2}\bigl\Vert A(u,u)-A(u_{0},u_{0})\bigr\Vert ^{2}, \end{aligned}$$
(2.9)
from (2.8) and (2.9) we have
$$\begin{aligned} \begin{aligned} &\bigl\langle R(x,\theta)-R(x_{0},\theta),x-x_{0}\bigr\rangle \\ &\quad \geq-{{\theta }^{2}\over 4}\bigl\Vert A(u,u)-A(u_{0},u_{0}) \bigr\Vert ^{2}-k\|x-x_{0}\|^{2}+\theta\bigl\langle A(u,u)-A(u_{0},u_{0}), x-x_{0}\bigr\rangle . \end{aligned} \end{aligned}$$
By using the strong monotonicity and the Lipschitz continuity of A we have
$$\bigl\langle R(x,\theta)-R(x_{0},\theta),x-x_{0}\bigr\rangle \geq-{\beta^{2}\theta ^{2}\over 4}\|u-u_{0}\|^{2}-k \|x-x_{0}\|^{2}+\alpha\theta\|x-x_{0} \|^{2}. $$
Finally, using the M-Lipschitz continuity of T, we get
$$\bigl\langle R(x,\theta)-R(x_{0},\theta),x-x_{0}\bigr\rangle \geq \biggl(\alpha \theta -{\beta^{2}\mu^{2}\theta^{2}\over 4}-k \biggr)\|x-x_{0} \|^{2}, $$
which implies
$$\bigl\langle R(x,\theta)-R(x_{0},\theta),x-x_{0}\bigr\rangle \geq\xi(\theta)\| x-x_{0}\| ^{2},\quad \mbox{where } \xi( \theta)=\alpha\theta-{\beta^{2}\mu^{2}\theta ^{2}\over 4}-k>0. $$
By a classical argument
$$\bigl\Vert R(x,\theta)-R(x_{0},\theta)\bigr\Vert \geq\xi(\theta) \|x-x_{0}\|,\quad \forall x, x_{0}\in H. $$
Since \(x_{0}\) is a solution of SVMQVIP (1.1), \(R(x_{0},\theta)=0\), and hence
$$\|x-x_{0}\|\leq{4\over {4\alpha\theta-\beta^{2}\mu^{2}\theta^{2} -4k}}\bigl\Vert R(x,\theta)\bigr\Vert . $$
This completes the proof. □

Remark 2.3

Theorem 2.3 generalizes Lemma 1 of [7] and Lemma 2 of [10].

3 Regularized gap functions for SVMQVIP (1.1)

In this section, by using an approach due to Fukushima [8], we construct another gap function associated with problem SVMQVIP (1.1), which can be viewed as a regularized gap function. For \(\theta>0\), the function \(G_{\theta}\) is defined by
$$ G_{\theta}(x)=\max_{y\in K(x)} \biggl\{ \bigl\langle A(u,u), x-y \bigr\rangle -f (x,y )+f (x,x )-{1\over {2\theta}} \| x-y \| ^{2} \biggr\} , $$
(3.1)
which is finite-valued everywhere and is differentiable whenever all operators involved in \(G_{\theta}(x)\) are differentiable.

Lemma 3.1

For any \(\theta>0\), \(G_{\theta}(x)\) can be written as
$$\begin{aligned} G_{\theta}(x) =& \bigl\langle A(u,u), x-P_{K(x)}^{f} \bigl[x-\theta A(u,u)\bigr] \bigr\rangle -f \bigl(x,P_{K(x)}^{f} \bigl[x-\theta A(u,u)\bigr] \bigr) \\ &{}+f (x,x )-{1\over {2\theta}}\bigl\Vert x-P_{K(x)}^{f} \bigl[x-\theta A(u,u)\bigr]\bigr\Vert ^{2},\quad \forall x\in H. \end{aligned}$$
(3.2)

Proof

If \(x\notin\operatorname{dom}(f)\), then equation (3.2) is valid, because \(f(x,x)=+\infty\) on both sides while the other terms are all finite (recall that \(P_{K(x)}^{f} (z)\in\operatorname{dom}(f)\) for any \(z\in H\)).

Consider now any \(x\in\operatorname{dom}(f)\). Denote by \(t(y)\) the function being maximized in (3.1), and let z be the (unique, by strong concavity of t) element at which the maximum in (3.1) is attained. Then z is uniquely characterized by the optimality condition
$$0\in\partial\bigl(-t(z)\bigr)=A(u,u)+\partial f(x,\cdot) (z)+{1\over \theta }(z-x)= \partial f(x,\cdot) (z)+{1\over \theta}\bigl[z-\bigl(x-\theta A(u,u)\bigr) \bigr]. $$
That is to say, \(z =\arg\min_{y\in K(x)}\{f(x,y)+{1\over 2\theta }\|y-(x-\theta A(u,u))\|^{2}\}=P_{K(x)}^{f}[x-\theta A(u,u)]\), where the second equality follows from the definition of the proximal mapping \(P_{K(x)}^{f}(\cdot)\). This completes the proof. □
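To make (3.2) concrete, here is a small numerical sketch in the single-valued special case f = 0 with constant K(x) = [0, 1] (illustrative assumptions; for f = 0 the proximal map reduces to the orthogonal projection). The lower bound \(G_{\theta}(x)\geq{1\over 2\theta}\|R(x,\theta)\|^{2}\) of Theorem 3.1 below can also be checked on this example:

```python
import numpy as np

theta = 0.5
T = lambda x: x - 2.0               # the VIP on K = [0, 1] is solved by x = 1
P = lambda z: np.clip(z, 0.0, 1.0)  # projection onto K (the prox for f = 0)

def G(x):
    # Formula (3.2) with f = 0:  <T(x), x - P> - (1/(2*theta)) * ||x - P||^2,
    # where P = P_K[x - theta*T(x)].
    p = P(x - theta * T(x))
    return T(x) * (x - p) - (x - p) ** 2 / (2.0 * theta)

def R(x):
    return x - P(x - theta * T(x))

for x in np.linspace(0.0, 1.0, 11):
    assert G(x) >= R(x) ** 2 / (2.0 * theta) - 1e-12   # Theorem 3.1's bound
print(abs(G(1.0)))  # 0.0: the gap function vanishes at the solution
```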

Next, we show that the function \(G_{\theta}(x)\) given by (3.1) is a gap function for SVMQVIP (1.1) for \(\theta> 0\).

Theorem 3.1

If \(\theta> 0\), then we have
$$G_{\theta}(x)\geq{1\over 2\theta}\bigl\Vert R(x,\theta)\bigr\Vert ^{2},\quad \forall x\in H. $$
In particular, \(G_{\theta}(x)= 0\), if and only if x is a solution of SVMQVIP (1.1).

Proof

Fix any \(x\in H\) and \(\theta> 0\). Observe that
$$x-\theta A(u,u)\in(I+\theta\partial f) (I+\theta\partial f)^{-1} \bigl(x-\theta A(u,u) \bigr)=(I+\theta\partial f)P_{K(x)}^{f} \bigl[x-\theta A(u,u)\bigr], $$
which is equivalent to
$$-A(u,u)+{1\over \theta} \bigl[x-P_{K(x)}^{f}\bigl[x- \theta A(u,u)\bigr] \bigr]\in \partial f \bigl(P_{K(x)}^{f} \bigl[x-\theta A(u,u)\bigr] \bigr). $$
By the definition of a sub-differential, we have
$$\begin{aligned}& \biggl\langle A(u,u)-{1\over \theta} \bigl(x-P_{K(x)}^{f} \bigl[x-\theta A(u,u)\bigr] \bigr), y-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr] \biggr\rangle \\& \quad {}+f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr],y \bigr)-f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr],P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr)\geq0. \end{aligned}$$
Taking \(y=x\) in the above inequality, we get
$$\begin{aligned}& \biggl\langle A(u,u)-{1\over \theta} \bigl(x-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr), x-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr] \biggr\rangle \\& \quad {}+f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr],x \bigr)-f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr],P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr)\geq0, \\& \bigl\langle A(u,u), x-P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr] \bigr\rangle +f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr],x \bigr) \\& \quad {}-f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr],P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr)\geq {1\over \theta} \bigl\langle R(x,\theta), R(x,\theta ) \bigr\rangle . \end{aligned}$$
(3.3)
Combining (3.2) with (3.3) and by using the skew-symmetry of f, we get
$$G_{\theta}(x)\geq{1\over \theta} \bigl\langle R(x,\theta), R(x, \theta) \bigr\rangle -{1\over {2\theta}}\bigl\Vert R(x,\theta)\bigr\Vert ^{2}={1\over 2\theta}\bigl\Vert R(x,\theta)\bigr\Vert ^{2}. $$
Clearly, we have \(G_{\theta}(x)\geq0\), for all \(x\in H\).

Now, from the above conclusion, if \(G_{\theta}(x)= 0\), then \(R(x,\theta )=0\), and hence, by Lemma 2.1, \(x\in H\) is a solution of SVMQVIP (1.1). Conversely, if \(x\in H\) is a solution of SVMQVIP (1.1), then \(x=P_{K(x)}^{f}[x-\theta A(u,u)]\), and consequently, from (3.2), \(G_{\theta}(x)= 0\). This completes the proof. □
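
Both assertions of Theorem 3.1 can be checked numerically on a simple hypothetical instance (not part of the paper's setting): \(H=\mathbb{R}\), \(f\equiv0\), \(K(x)\equiv[0,\infty)\), and \(A(u,u)\) replaced by the single-valued map \(F(x)=x-1\). Then \(P_{K(x)}^{f}\) is the metric projection onto K and \(G_{\theta}\) reduces to the classical regularized gap function:

```python
# Hypothetical scalar instance (not from the paper): f = 0, K(x) = [0, inf),
# and A(u,u) replaced by F(x) = x - 1, strongly monotone with alpha = 1.
# Then P^f_K is the metric projection and G_theta the regularized gap function.

def proj(z):                             # projection onto K = [0, inf)
    return max(z, 0.0)

def F(x):
    return x - 1.0

def residual(x, theta):                  # R(x, theta) = x - proj(x - theta*F(x))
    return x - proj(x - theta * F(x))

def gap(x, theta):
    # G_theta(x) = max_{y in K} { F(x)*(x - y) - (1/(2*theta))*(x - y)^2 };
    # the maximum is attained at y = proj(x - theta*F(x)) (cf. the proximal
    # characterization above).
    y = proj(x - theta * F(x))
    return F(x) * (x - y) - (x - y) ** 2 / (2 * theta)

theta = 0.5
for x in [0.0, 0.3, 1.0, 2.0, 5.0]:      # sample points of K
    assert gap(x, theta) >= residual(x, theta) ** 2 / (2 * theta) - 1e-12
assert abs(gap(1.0, theta)) < 1e-12      # x0 = 1 solves the VI, so G_theta = 0
```

On this particular instance the projection is inactive at every tested point, so the inequality of Theorem 3.1 in fact holds with equality there.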

As a consequence of Theorem 2.2 and Theorem 3.1, we have the following error bound in terms of \(G_{\theta}(x)\) for SVMQVIP (1.1).

Corollary 3.1

Assume that \(x_{0}\) is a solution of SVMQVIP (1.1). Let the operator A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively, let T be M-Lipschitz continuous with constant \(\mu> 0\), and let f be skew-symmetric. If there exists \(k<{\alpha\over \beta\mu}\) such that
$$\bigl\Vert P_{K(x)}^{f}(w)-P_{K(x_{0})}^{f}(w) \bigr\Vert \leq k\|x-x_{0}\|, \quad \forall x,x_{0},w\in H, $$
then, for any \(x\in H\) and \(\theta>{k\over \alpha-\beta\mu k}\), we have
$$\|x-x_{0}\|\leq{1+\theta\beta\mu\over \alpha\theta-(1+\theta\beta \mu )k}{\sqrt{2\theta}} \sqrt{G_{\theta}(x)},\quad \forall x\in H. $$
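
The bound of Corollary 3.1 can be checked on a hypothetical scalar instance (not from the paper): \(f\equiv0\), \(K(x)\equiv[0,\infty)\), \(A(u,u)\) replaced by \(F(x)=x-1\), so \(\alpha=\beta=\mu=1\) and \(x_{0}=1\). Since \(K(x)\) does not depend on x, the projection condition holds for arbitrarily small \(k>0\), and the bound is evaluated in the limit \(k\to0\):

```python
# Hypothetical scalar instance: f = 0, K(x) = [0, inf) (independent of x),
# F(x) = x - 1, so alpha = beta = mu = 1 and the solution is x0 = 1.  Since K
# does not depend on x, ||P_K(w) - P_K(w)|| = 0 and the projection condition
# holds for arbitrarily small k > 0; the bound is evaluated with k -> 0.

def proj(z):
    return max(z, 0.0)

def F(x):
    return x - 1.0

def gap(x, theta):
    y = proj(x - theta * F(x))
    return F(x) * (x - y) - (x - y) ** 2 / (2 * theta)

alpha = beta = mu = 1.0
theta, x0, k = 1.0, 1.0, 0.0
for x in [0.0, 0.5, 2.0, 10.0]:
    coeff = (1 + theta * beta * mu) / (alpha * theta - (1 + theta * beta * mu) * k)
    bound = coeff * (2 * theta * gap(x, theta)) ** 0.5
    assert abs(x - x0) <= bound + 1e-12
```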

Also by using Theorem 2.3 and Theorem 3.1, we obtain another error bound for SVMQVIP (1.1).

Corollary 3.2

Assume that \(x_{0}\) is a solution of SVMQVIP (1.1). Let the operator A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively, let T be M-Lipschitz continuous with constant \(\mu> 0\), and let f be skew-symmetric. If \(\theta <{2\alpha\over \beta^{2}\mu^{2}}\) and there exists \(k > 0\) such that
$$\bigl\Vert P_{K(x)}^{f}(z)-P_{K(x_{0})}^{f}(z) \bigr\Vert \leq k\|x-x_{0}\|,\quad \forall x,x_{0},z\in H, \textit{with }k< 1-{\sqrt{\theta^{2}\beta^{2} \mu^{2}-2\alpha\theta+1}}, $$
then, for any \(x\in H\), we have
$$\|x-x_{0}\|\leq{4\over {4\alpha\theta-\beta^{2}\mu^{2}\theta^{2} -4k}}{\sqrt {2\theta}} \sqrt{G_{\theta}(x)},\quad \forall x\in H. $$

Remark 3.1

Corollary 3.1 and Corollary 3.2 generalize the corresponding results of [7, 10, 17, 20].

Now, we derive an error bound for SVMQVIP (1.1) without using the Lipschitz continuity of A and T.

Theorem 3.2

Let A be strongly monotone with constant \(\alpha>0\) and let f be skew-symmetric. If \(x_{0}\) is a solution of SVMQVIP (1.1), then
$$\|x-x_{0}\|\leq{1\over {\sqrt{(\alpha-{1\over 2\theta})}}}\sqrt {G_{\theta}(x)}, \quad \forall x\in H, \theta>{1\over 2\alpha}. $$

Proof

Taking \(y=x_{0}\) in (3.1), we have
$$G_{\theta}(x)\geq \bigl\langle A(u,u), x-x_{0} \bigr\rangle -f (x,x_{0} )+f (x,x )-{1\over {2\theta}} \|x-x_{0} \|^{2}. $$
By using the strong monotonicity of A, we have
$$\begin{aligned} G_{\theta}(x) \geq& \bigl\langle A(u_{0},u_{0}), x-x_{0} \bigr\rangle +\alpha \| x-x_{0} \|^{2}-f (x,x_{0} ) \\ &{}+f (x,x )-{1\over {2\theta }} \| x-x_{0} \|^{2}. \end{aligned}$$
(3.4)
Since \(x_{0}\) is a solution of SVMQVIP (1.1),
$$\bigl\langle A(u_{0},u_{0}), y-x_{0} \bigr\rangle +f (x_{0},y)-f (x_{0},x_{0})\geq 0,\quad \forall y\in K(x_{0}). $$
Taking \(y=x\) in the above inequality, we get
$$ \bigl\langle A(u_{0},u_{0}), x-x_{0} \bigr\rangle +f (x_{0},x)-f (x_{0},x_{0})\geq 0. $$
(3.5)
Combining (3.4) with (3.5) and using the skew-symmetry of f, we get
$$G_{\theta}(x)\geq\alpha\|x-x_{0}\|^{2}- {1\over {2\theta}}\|x-x_{0}\|^{2}= \biggl(\alpha- {1\over {2\theta}} \biggr)\|x-x_{0}\|^{2}, $$
which implies
$$\|x-x_{0}\|\leq{1\over {\sqrt{(\alpha-{1\over 2\theta})}}}\sqrt {G_{\theta}(x)}. $$
This completes the proof. □
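
The bound of Theorem 3.2 can be checked on a hypothetical scalar instance (not from the paper): \(f\equiv0\), \(K(x)\equiv[0,\infty)\), and \(A(u,u)\) replaced by \(F(x)=x-1\), which is strongly monotone with \(\alpha=1\) and has the unique solution \(x_{0}=1\); Theorem 3.2 requires \(\theta>1/(2\alpha)\), so we take \(\theta=1\):

```python
# Hypothetical scalar instance: f = 0, K = [0, inf), F(x) = x - 1 (alpha = 1),
# unique solution x0 = 1.  Theorem 3.2 needs theta > 1/(2*alpha) = 0.5.

def proj(z):
    return max(z, 0.0)

def F(x):
    return x - 1.0

def gap(x, theta):
    y = proj(x - theta * F(x))
    return F(x) * (x - y) - (x - y) ** 2 / (2 * theta)

alpha, theta, x0 = 1.0, 1.0, 1.0
for x in [0.0, 0.5, 2.0, 10.0]:
    bound = (gap(x, theta) / (alpha - 1.0 / (2 * theta))) ** 0.5
    assert abs(x - x0) <= bound + 1e-12   # Theorem 3.2
```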

Remark 3.2

Theorem 3.2 generalizes Theorem 5 of [20] and Theorem 3.4 of [17].

4 Global error bounds for SVMQVIP (1.1)

In this section, we consider another gap function associated with SVMQVIP (1.1), known as the D-gap function, which can be viewed as the difference of two regularized gap functions with distinct parameters; it was introduced and studied in [19, 20, 22] for solving variational inequality and complementarity problems.

For \(x\notin\operatorname{dom}(f)\), the difference \(G_{\theta}(x)-G_{\psi}(x)\) of two regularized gap functions with \(\theta>\psi>0\) is not well defined for SVMQVIP (1.1), since both quantities are infinite. Nevertheless, we define the D-gap function by taking a formal difference of (3.1) for the two parameters \(\theta>\psi>0\).

The D-gap function associated with SVMQVIP (1.1) is given by
$$\begin{aligned} D_{\theta,\psi}(x) =&\max_{y\in K(x)} \biggl\{ \bigl\langle A(u,u), x-y\bigr\rangle -f (x,y)+f (x,x) \\ &{}+{1\over {2\psi}}\|x-y\|^{2}-{1\over {2\theta}}\|x-y \|^{2} \biggr\} ,\quad x\in H, \theta>\psi>0. \end{aligned}$$
(4.1)
The D-gap function defined by (4.1) can be written as
$$\begin{aligned} D_{\theta,\psi}(x) =& \bigl\langle A(u,u), P_{K(x)}^{f} \bigl[x-\psi A(u,u)\bigr]-P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr] \bigr\rangle \\ &{}-f \bigl(P_{K(x)}^{f}\bigl[x-\psi A(u,u) \bigr],P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr) \\ &{}+f \bigl(P_{K(x)}^{f}\bigl[x-\psi A(u,u)\bigr],P_{K(x)}^{f} \bigl[x-\psi A(u,u)\bigr] \bigr) \\ &{}+{1\over {2\psi}}\bigl\Vert x-P_{K(x)}^{f} \bigl[x-\psi A(u,u)\bigr]\bigr\Vert ^{2}-{1\over {2\theta}}\bigl\Vert x-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr]\bigr\Vert ^{2}. \end{aligned}$$
Further, it can be written as
$$\begin{aligned} D_{\theta,\psi}(x) =& \bigl\langle A(u,u), R(x,\theta)-R(x,\psi) \bigr\rangle -f \bigl(P_{K(x)}^{f}\bigl[x-\psi A(u,u)\bigr],P_{K(x)}^{f} \bigl[x-\theta A(u,u)\bigr] \bigr) \\ &{}+f \bigl(P_{K(x)}^{f}\bigl[x-\psi A(u,u) \bigr],P_{K(x)}^{f}\bigl[x-\psi A(u,u)\bigr] \bigr) \\ &{}+ {1\over {2\psi}}\bigl\Vert R(x,\psi)\bigr\Vert ^{2}- {1\over {2\theta}}\bigl\Vert R(x,\theta)\bigr\Vert ^{2}. \end{aligned}$$
(4.2)

Next, we derive global error bounds for SVMQVIP (1.1).

Theorem 4.1

For all \(x\in H\) and \(\theta>\psi>0 \), we have
$${1\over 2} \biggl({1\over \psi}-{1\over \theta} \biggr)\bigl\Vert R(x,\psi)\bigr\Vert ^{2}\leq D_{\theta,\psi}(x) \leq{1\over 2} \biggl( {1\over \psi}-{1\over \theta} \biggr)\bigl\Vert R(x,\theta) \bigr\Vert ^{2}. $$
In particular, \(D_{\theta,\psi}(x)=0\) if and only if \(x\in H\) solves SVMQVIP (1.1).

Proof

By the definition of a sub-differential, we have
$$\begin{aligned}& \biggl\langle A(u,u)-{1\over \theta} \bigl(x-P_{K(x)}^{f} \bigl[x-\theta A(u,u)\bigr] \bigr), y-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr] \biggr\rangle \\& \quad {}+f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr],y \bigr)-f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr],P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr)\geq0. \end{aligned}$$
Taking \(y=P_{K(x)}^{f}[x-\psi A(u,u)]\) in the above inequality, we get
$$\begin{aligned}& \biggl\langle A(u,u)-{1\over \theta} \bigl(x-P_{K(x)}^{f} \bigl[x-\theta A(u,u)\bigr] \bigr), P_{K(x)}^{f}\bigl[x-\psi A(u,u)\bigr]-P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr] \biggr\rangle \\& \quad {}+f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr],P_{K(x)}^{f}\bigl[x-\psi A(u,u)\bigr] \bigr) \\& \quad {}-f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr],P_{K(x)}^{f} \bigl[x-\theta A(u,u)\bigr] \bigr)\geq0, \end{aligned}$$
which implies that
$$\begin{aligned}& \bigl\langle A(u,u),R(x,\theta)-R(x,\psi) \bigr\rangle \\& \quad \geq{1\over \theta } \bigl\langle R(x,\theta), R(x,\theta)-R(x, \psi) \bigr\rangle -f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr],P_{K(x)}^{f}\bigl[x-\psi A(u,u)\bigr] \bigr) \\& \qquad {}+f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr],P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr). \end{aligned}$$
(4.3)
Combining (4.2) with (4.3) and using the skew-symmetry of f, we get
$$\begin{aligned} D_{\theta,\psi}(x) \geq&{1\over \theta} \bigl\langle R(x,\theta), R(x, \theta)-R(x,\psi) \bigr\rangle +{1\over {2\psi}}\bigl\Vert R(x,\psi)\bigr\Vert ^{2}-{1\over {2\theta}}\bigl\Vert R(x,\theta)\bigr\Vert ^{2} \\ =&{1\over 2} \biggl({1\over \psi}- {1\over \theta} \biggr)\bigl\Vert R(x,\psi)\bigr\Vert ^{2}+ {1\over \theta} \bigl\langle R(x,\theta), R(x,\theta)-R(x,\psi) \bigr\rangle \\ &{}- {1\over 2\theta}\bigl\Vert R(x,\theta)-R(x,\psi)\bigr\Vert ^{2}- {1\over \theta} \bigl\langle R(x,\psi), R(x,\theta)-R(x, \psi) \bigr\rangle \\ =&{1\over 2} \biggl({1\over \psi}- {1\over \theta} \biggr)\bigl\Vert R(x,\psi)\bigr\Vert ^{2}+ {1\over 2\theta}\bigl\Vert R(x,\theta)-R(x,\psi)\bigr\Vert ^{2} \\ \geq&{1\over 2} \biggl({1\over \psi}- {1\over \theta} \biggr)\bigl\Vert R(x,\psi)\bigr\Vert ^{2}, \end{aligned}$$
(4.4)
which implies the left-most inequality in the assertion.
On the other hand,
$$-A(u,u)+{1\over \psi}R(x,\psi)\in\partial f \bigl( P_{K(x)}^{f}\bigl[x-\psi A(u,u)\bigr] \bigr), $$
which implies that
$$\begin{aligned}& \bigl\langle A(u,u),R(x,\theta)-R(x,\psi) \bigr\rangle \\& \quad \leq{1\over \psi} \bigl\langle R(x,\psi), R(x,\theta)-R(x,\psi) \bigr\rangle -f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr],P_{K(x)}^{f}\bigl[x-\psi A(u,u)\bigr] \bigr) \\& \qquad {}+f \bigl(P_{K(x)}^{f}\bigl[x-\theta A(u,u) \bigr],P_{K(x)}^{f}\bigl[x-\theta A(u,u)\bigr] \bigr). \end{aligned}$$
(4.5)
Similarly to the analysis above, we then obtain
$$\begin{aligned} D_{\theta,\psi}(x) \leq&{1\over \psi}\bigl\langle R(x,\psi), R(x, \theta )-R(x,\psi)\bigr\rangle +{1\over {2\psi}}\bigl\Vert R(x,\psi)\bigr\Vert ^{2}-{1\over {2\theta }}\bigl\Vert R(x,\theta)\bigr\Vert ^{2} \\ =&{1\over 2} \biggl({1\over \psi}- {1\over \theta} \biggr)\bigl\Vert R(x,\theta)\bigr\Vert ^{2}-{1\over 2\psi}\bigl\Vert R(x,\theta)-R(x,\psi)\bigr\Vert ^{2} \\ \leq&{1\over 2} \biggl({1\over \psi}- {1\over \theta} \biggr)\bigl\Vert R(x,\theta) \bigr\Vert ^{2}, \end{aligned}$$
(4.6)
which implies the right-most inequality in the assertion. Combining (4.4) and (4.6), we obtain the required result. The last assertion now follows from Lemma 2.1. □
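
The sandwich inequality of Theorem 4.1 can be checked numerically on a hypothetical scalar instance (not from the paper): \(f\equiv0\), \(K(x)\equiv[0,\infty)\), and \(A(u,u)\) replaced by \(F(x)=x-1\). With \(f\equiv0\), expression (4.2) reduces to the difference \(G_{\theta}(x)-G_{\psi}(x)\) of two regularized gap functions:

```python
# Hypothetical scalar instance (f = 0, K = [0, inf), F(x) = x - 1).  With
# f = 0, expression (4.2) for the D-gap function reduces to the difference
# D_{theta,psi}(x) = G_theta(x) - G_psi(x), theta > psi > 0.

def proj(z):
    return max(z, 0.0)

def F(x):
    return x - 1.0

def residual(x, t):
    return x - proj(x - t * F(x))

def gap(x, t):
    y = proj(x - t * F(x))
    return F(x) * (x - y) - (x - y) ** 2 / (2 * t)

theta, psi = 2.0, 0.5
c = 0.5 * (1 / psi - 1 / theta)          # the constant in Theorem 4.1
for x in [0.0, 0.3, 1.0, 4.0]:
    D = gap(x, theta) - gap(x, psi)
    assert c * residual(x, psi) ** 2 - 1e-12 <= D      # lower estimate
    assert D <= c * residual(x, theta) ** 2 + 1e-12    # upper estimate
assert abs(gap(1.0, theta) - gap(1.0, psi)) < 1e-12    # D = 0 at the solution
```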

As a consequence of Theorem 2.2 and Theorem 4.1, we obtain the following result on the global error bound for SVMQVIP (1.1).

Corollary 4.1

Assume that \(x_{0}\) is a solution of SVMQVIP (1.1). Let the operator A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively, let T be M-Lipschitz continuous with constant \(\mu> 0\), and let f be skew-symmetric. If there exists \(k<{\alpha\over \beta\mu}\) such that
$$\bigl\Vert P_{K(x)}^{f}(w)-P_{K(x_{0})}^{f}(w) \bigr\Vert \leq k\|x-x_{0}\|, \quad \forall x,x_{0},w\in H, $$
then, for any \(x\in H\) and \(\theta>{k\over \alpha-\beta\mu k}\), we have
$$\|x-x_{0}\|\leq{1+\theta\beta\mu\over \alpha\theta-(1+\theta\beta \mu )k}{\sqrt{2\theta\psi\over \theta-\psi}} \sqrt{D_{\theta,\psi }(x)},\quad \forall x\in H. $$
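
The bound of Corollary 4.1 can be checked on a hypothetical scalar instance (not from the paper): \(f\equiv0\), \(K(x)\equiv[0,\infty)\), \(A(u,u)\) replaced by \(F(x)=x-1\), so \(\alpha=\beta=\mu=1\) and \(x_{0}=1\). Since \(K(x)\) does not depend on x, the projection condition holds for arbitrarily small \(k>0\), and the bound is evaluated in the limit \(k\to0\):

```python
# Hypothetical scalar instance: f = 0, K(x) = [0, inf) (independent of x),
# F(x) = x - 1, alpha = beta = mu = 1, solution x0 = 1.  With f = 0 the D-gap
# (4.2) is the difference of two regularized gap functions; k -> 0 below.

def proj(z):
    return max(z, 0.0)

def F(x):
    return x - 1.0

def gap(x, t):
    y = proj(x - t * F(x))
    return F(x) * (x - y) - (x - y) ** 2 / (2 * t)

alpha = beta = mu = 1.0
theta, psi, x0, k = 2.0, 0.5, 1.0, 0.0
coeff = (1 + theta * beta * mu) / (alpha * theta - (1 + theta * beta * mu) * k)
scale = (2 * theta * psi / (theta - psi)) ** 0.5
for x in [0.0, 0.5, 4.0, 10.0]:
    D = gap(x, theta) - gap(x, psi)
    assert abs(x - x0) <= coeff * scale * D ** 0.5 + 1e-12
```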

Also by using Theorem 2.3 and Theorem 4.1, we obtain another global error bound for SVMQVIP (1.1).

Corollary 4.2

Assume that \(x_{0}\) is a solution of SVMQVIP (1.1). Let the operator A be strongly monotone and Lipschitz continuous with constants \(\alpha, \beta> 0\), respectively, let T be M-Lipschitz continuous with constant \(\mu> 0\), and let f be skew-symmetric. If \(\theta <{2\alpha\over \beta^{2}\mu^{2}}\) and there exists \(k > 0\) such that
$$\bigl\Vert P_{K(x)}^{f}(z)-P_{K(x_{0})}^{f}(z) \bigr\Vert \leq k\|x-x_{0}\|, \quad \forall x,x_{0},z\in H, \textit{with }k< 1-{\sqrt{\theta^{2}\beta^{2} \mu^{2}-2\alpha\theta+1}}, $$
then, for any \(x\in H\), we have
$$\|x-x_{0}\|\leq{4\over {4\alpha\theta-\beta^{2}\mu^{2}\theta^{2} -4k}}{\sqrt {2\theta\psi\over \theta-\psi}} \sqrt{D_{\theta,\psi}(x)},\quad \forall x\in H. $$

Remark 4.1

Corollary 4.1 and Corollary 4.2 generalize the corresponding results of [7, 10, 17, 20].

Now, we derive the global error bound for SVMQVIP (1.1) without using the Lipschitz continuity of A and T.

Theorem 4.2

Let \(x_{0}\) be a solution of SVMQVIP (1.1). Suppose that A is strongly monotone with constant \(\alpha>0\) and f is skew-symmetric. Then
$$\|x-x_{0}\|\leq{1\over {\sqrt{[\alpha+{1\over 2} ({1\over \psi}-{1\over \theta} )]}}}\sqrt{D_{\theta,\psi}(x)}, \quad \forall x\in H, \alpha>{1\over 2} \biggl({1\over \theta}- {1\over \psi} \biggr). $$

Proof

Taking \(y=x_{0}\) in (4.1), we have
$$D_{\theta,\psi}(x)\geq \bigl\langle A(u,u), x-x_{0} \bigr\rangle -f (x,x_{0} )+f (x,x )+{1\over {2\psi}}\|x-x_{0} \|^{2}-{1\over {2\theta }}\|x-x_{0}\|^{2}. $$
By using the strong monotonicity of A, we have
$$\begin{aligned} D_{\theta,\psi}(x) \geq&\bigl\langle A(u_{0},u_{0}), x-x_{0}\bigr\rangle +\alpha\| x-x_{0}\| ^{2}-f (x,x_{0})+f (x,x) \\ &{}+{1\over {2\psi}}\|x-x_{0}\|^{2}- {1\over {2\theta}}\|x-x_{0}\|^{2}. \end{aligned}$$
(4.7)
Since \(x_{0}\) is a solution of SVMQVIP (1.1),
$$\bigl\langle A(u_{0},u_{0}), y-x_{0} \bigr\rangle +f (x_{0},y )-f (x_{0},x_{0} )\geq0, \quad \forall y\in K(x_{0}). $$
Taking \(y=x\) in the above inequality, we get
$$ \bigl\langle A(u_{0},u_{0}), x-x_{0} \bigr\rangle +f (x_{0},x )-f (x_{0},x_{0} )\geq0. $$
(4.8)
Combining (4.7) with (4.8) and using the skew-symmetry of f, we get
$$D_{\theta,\psi}(x)\geq\alpha\|x-x_{0}\|^{2}+ {1\over {2\psi}}\|x-x_{0}\| ^{2}-{1\over {2\theta}} \|x-x_{0}\|^{2}= \biggl(\alpha+{1\over {2\psi }}- {1\over {2\theta}} \biggr) \|x-x_{0} \|^{2}, $$
which implies
$$\|x-x_{0}\|\leq{1\over {\sqrt{[\alpha+{1\over 2} ({1\over \psi }-{1\over \theta} )]}}}\sqrt{D_{\theta,\psi}(x)}. $$
This completes the proof. □

Remark 4.2

Theorem 4.2 generalizes Theorem 7 of [20] and Theorem 3.6 of [17].

Declarations

Acknowledgements

The authors would like to thank the associated editor and two anonymous referees for their valuable comments and suggestions, which have helped to improve the paper.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Mathematics, BITS-Pilani, Dubai Campus, Dubai, 345055, UAE
(2)
Department of Mathematical Sciences, Baba Ghulam Shah Badshah University, Rajouri, Jammu and Kashmir, India
(3)
Department of Mathematics, University of Nigeria, Nsukka, Nigeria

References

1. Noor, MA: Some algorithms for general monotone mixed variational inequalities. Math. Comput. Model. 29, 1-9 (1999)
2. Noor, MA: Algorithms for general monotone mixed variational inequalities. J. Math. Anal. Appl. 229, 330-343 (1999)
3. Noor, MA: Splitting methods for pseudomonotone general mixed variational inequalities. J. Glob. Optim. 18, 75-89 (2000)
4. Noor, MA: Solvability of multivalued general mixed variational inequalities. J. Math. Anal. Appl. 261, 390-402 (2001)
5. Aussel, D, Dutta, J: On gap functions for multivalued Stampacchia variational inequalities. J. Optim. Theory Appl. 149, 513-527 (2011)
6. Aussel, D, Correa, R, Marechal, M: Gap functions for quasivariational inequalities and generalized Nash equilibrium problems. J. Optim. Theory Appl. 151, 474-488 (2011)
7. Aussel, D, Gupta, R, Mehra, A: Gap functions and error bounds for inverse quasi-variational inequality problems. J. Math. Anal. Appl. 407, 270-280 (2013)
8. Fukushima, M: Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Math. Program. 53, 99-110 (1992)
9. Fukushima, M: Merit functions for variational inequality and complementarity problems. In: Nonlinear Optimization and Applications, pp. 155-170 (1996)
10. Gupta, R, Mehra, A: Gap functions and error bounds for quasi-variational inequalities. J. Glob. Optim. 53, 737-748 (2012)
11. Huan, L, Qu, B, Jiang, J: Merit functions for general mixed quasi-variational inequalities. J. Appl. Math. Comput. 33, 411-421 (2010)
12. Huang, NJ, Li, J, Wu, SY: Gap functions for a system of generalized vector quasi-equilibrium problems with set-valued mappings. J. Glob. Optim. 41, 401-415 (2008)
13. Huang, LR, Ng, KF: Equivalent optimization formulations and error bounds for variational inequality problems. J. Optim. Theory Appl. 125, 299-314 (2005)
14. Khan, SA, Chen, JW: Gap function and global error bounds for generalized mixed quasi-variational inequalities. Appl. Math. Comput. 260, 71-81 (2015)
15. Li, J, Mastroeni, G: Vector variational inequalities involving set-valued mappings via scalarization with applications to error bounds for gap functions. J. Optim. Theory Appl. 145, 355-372 (2010)
16. Li, G, Ng, KF: Error bounds of generalized D-gap functions for nonsmooth and nonmonotone variational inequality problems. SIAM J. Optim. 20, 667-690 (2009)
17. Noor, MA: Merit functions for general variational inequalities. J. Math. Anal. Appl. 316, 736-752 (2006)
18. Noor, MA: On merit functions for quasi-variational inequalities. J. Math. Inequal. 2, 259-268 (2007)
19. Solodov, MV, Tseng, P: Some methods based on the D-gap function for solving monotone variational inequalities. Comput. Optim. Appl. 17, 255-277 (2000)
20. Solodov, MV: Merit functions and error bounds for generalized variational inequalities. J. Math. Anal. Appl. 287, 405-414 (2003)
21. Tang, GJ, Huang, NJ: Gap functions and global error bounds for set-valued mixed variational inequalities. Taiwan. J. Math. 17, 1267-1286 (2013)
22. Yamashita, N, Fukushima, M: Equivalent unconstrained minimization and global error bounds for variational inequality problems. SIAM J. Control Optim. 35, 273-284 (1997)
23. Yamashita, N, Taji, K, Fukushima, M: Unconstrained optimization reformulations of variational inequality problems. J. Optim. Theory Appl. 92, 439-456 (1997)
24. Zhao, YB, Hu, J: Global bounds for the distance to solutions of co-coercive variational inequalities. Oper. Res. Lett. 35, 409-415 (2007)
25. Zhao, YB, Li, D: Monotonicity of fixed point and normal mapping associated with variational inequality and its application. SIAM J. Optim. 11, 962-973 (2001)
26. Moreau, JJ: Proximité et dualité dans un espace Hilbertien. Bull. Soc. Math. Fr. 93, 273-299 (1965)
27. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)
28. Facchinei, F, Pang, JS: Finite Dimensional Variational Inequalities and Complementarity Problems, vol. 1. Springer, Berlin (2003)
