On the unconstrained optimization reformulations for a class of stochastic vector variational inequality problems
Journal of Inequalities and Applications volume 2023, Article number: 97 (2023)
Abstract
In this paper, a class of stochastic vector variational inequality (SVVI) problems is considered. By employing the idea of a D-gap function, the SVVI problem is reformulated as a deterministic model, namely an unconstrained expected residual minimization (UERM) problem, in contrast to the constrained expected residual minimization reformulation in the work of Zhao et al. Then, the properties of the objective function are investigated, and a sample average approximation approach is proposed for solving the UERM problem. Convergence of the proposed approach to global optimal solutions and stationary points is analyzed. Moreover, we consider another deterministic formulation, i.e., the expected value (EV) formulation, for the SVVI problem and establish a global error bound for it in terms of a D-gap function.
1 Introduction
It is well known that the vector variational inequality (VVI) is an effective tool for studying vector optimization problems [16]. The concept of a VVI was originally introduced in finite-dimensional Euclidean spaces by Giannessi [9]. Since then, the VVI problem has been extensively used in the study of related problems, such as vector equilibrium [10], traffic network equilibrium [6, 18], market equilibrium [2], and so on. Since uncertainty is unavoidable in many real-world problems, it is natural to consider a stochastic version of the vector variational inequality.
Let \((\Omega ,\mathcal{F},\mathbb{P})\) be a probability space, \(K\subseteq \mathbb{R}^{n}\) be a nonempty, closed and convex set, and \(\xi (\omega ):\Omega \rightarrow \Xi \) be a random vector supported on a closed set \(\Xi \subset \mathbb{R}^{r}\). The stochastic vector variational inequality (SVVI) problem is to find \(x^{*}\in K\) such that
$$ \bigl( \bigl(y-x^{*} \bigr)^{\top} F_{1} \bigl(x^{*},\xi \bigr),\dots , \bigl(y-x^{*} \bigr)^{\top} F_{m} \bigl(x^{*},\xi \bigr) \bigr) \notin -\operatorname{int} \mathbb{R}_{+}^{m},\quad \forall y\in K, \text{ a.s. } \xi \in \Xi , $$(1)
where the vector-valued functions \(F_{j}:\mathbb{R}^{n}\times \mathbb{R}^{r}\rightarrow \mathbb{R}^{n}\) (\(j=1,\dots ,m\)) contain certain random variables and “a.s.” is the abbreviation for “almost surely” under the given probability measure. For convenience, we denote \(\xi (\omega )\) by ξ and \(F:=(F_{1},\dots ,F_{m})\). It is easy to see that problem (1) includes some models as special cases, for example:
-
(i)
If Ω contains only one realization, then SVVI (1) reduces to a VVI problem, that is, to find a vector \(x^{*}\in K\) such that
$$ \bigl( \bigl(y-x^{*} \bigr)^{\top} F_{1} \bigl(x^{*} \bigr),\dots , \bigl(y-x^{*} \bigr)^{\top} F_{m} \bigl(x^{*} \bigr) \bigr) \notin -\operatorname{int} \mathbb{R}_{+}^{m},\quad \forall y\in K. $$The interested reader is referred to the monographs [1, 10] and some papers [3, 4, 13, 16, 17, 27] for more details on VVI problems.
-
(ii)
When \(m=1\), the SVVI (1) reduces to a stochastic variational inequality (SVI) problem, that is, to find a vector \(x^{*}\in K\) such that
$$ \bigl(y-x^{*} \bigr)^{\top} F_{1} \bigl(x^{*},\xi \bigr)\geq 0,\quad \forall y\in K, \text{ a.s.} \xi \in \Xi . $$For more details on SVI problems, please refer to some papers, for example [11, 12, 19, 21–25, 28].
-
(iii)
When Ω contains only one realization and \(m=1\), the SVVI (1) reduces to a classical variational inequality (VI) problem, that is, to find a vector \(x^{*}\in K\) such that
$$ \bigl(y-x^{*} \bigr)^{\top} F_{1} \bigl(x^{*} \bigr)\geq 0, \quad \forall y\in K. $$(2)For VI problems, please refer to, for example, the monographs [7, 8].
In the past decades, much attention has been paid to the study of SVI problems. An important issue in this field is how to solve them. Gürkan–Özge–Robinson [11] proposed a sample-path approach, also called the expected value (EV) approach, for dealing with the SVI problem, in which the SVI is converted into a deterministic VI problem. After that, Jiang–Xu [14] reformulated the SVI as an optimization problem based on the EV formulation and studied the global convergence of the presented approach. He et al. [12] utilized a sample average approximation approach for dealing with the SVI based on the EV formulation. In addition, another approach, called the expected residual minimization (ERM) approach, was presented by Chen–Fukushima [5] for solving a stochastic linear complementarity problem. Luo–Lin [23, 24] applied the ERM approach, as a natural extension, to solve SVI problems where the functions \(F(x,\xi )\) are affine and nonlinear, respectively. Later, Ma–Wu–Huang [28] applied the ERM approach to solve a stochastic affine variational inequality problem with nonlinear perturbations based on the work of Luo–Lin [23]. Lu–Li [21] and Lu–Li–Yang [22] introduced a new formulation, called weighted expected residual minimization (WERM), for solving the SVI problem based on the works of Ma–Wu–Huang [28] and Luo–Lin [24], respectively.
In the recent literature, there are only a few studies on SVVI problems [26, 27, 33]. Zhao et al. [33] applied the ERM approach to the SVVI problem, generalizing some results of Luo–Lin [23, 24] from SVI problems to SVVI problems; there, the SVVI is converted into a constrained optimization problem. It is therefore of interest to propose an unconstrained optimization formulation for the SVVI problem. By using the idea of the D-gap function, we propose an unconstrained optimization reformulation for a class of SVVI problems. For more details on the D-gap function, please refer to [12, 15, 20, 32], for instance.
The remainder of this paper is structured as follows: in Sect. 2, we recall some fundamental results that will be used in the following sections. In Sect. 3, we investigate the properties of the objective function θ, i.e., its continuous differentiability and the boundedness of its level sets. In Sect. 4, we use the SAA approach to approximate the expected value of the objective function and investigate the convergence of the SAA approach for global optimal solutions and stationary points. In Sect. 5, another deterministic formulation, i.e., the EV formulation for the SVVI problem, is considered, and the global error bound of the D-gap function \(g_{\alpha \beta}(\cdot ,\cdot )\) based on the EV formulation is given.
2 Preliminaries
Recently, Zhao et al. [33, Eq. (6)] presented an equivalent scalar variational inequality (with certain constraints) formulation of the SVVI problem, which is to find \((x^{*},\lambda ^{*})\in K\times \Lambda \) such that
$$ \bigl(y-x^{*} \bigr)^{\top}\sum_{j=1}^{m}\lambda ^{*}_{j}F_{j} \bigl(x^{*},\xi \bigr)\geq 0,\quad \forall y\in K, \text{ a.s. } \xi \in \Xi , $$(3)
where \(\Lambda := \{\lambda \in \mathbb{R}^{m}:\lambda _{j}\geq 0,\sum_{j=1}^{m} \lambda _{j}=1 \}\).
Generally, because of the random variable \(\xi \in \Xi \), there is no point that satisfies problem (3) for almost every realization. It is therefore particularly significant to give a reasonable deterministic reformulation for problem (3).
Following the ideas of Yamashita–Taji–Fukushima [32, Eq. (6)] and Zhao et al. [33, Eq. (7)], we introduce the D-gap function \(g_{\alpha \beta}:\mathbb{R}^{n}\times \Lambda \times \Xi \rightarrow [0,\infty )\) for SVVI (3) as follows:
$$ g_{\alpha \beta}(x,\lambda ,\xi )=g_{\alpha}(x,\lambda ,\xi )-g_{\beta}(x,\lambda ,\xi ), $$(4)
where \(0<\alpha <\beta \) and \(g_{\gamma}\) (\(\gamma =\alpha ,\beta \)): \(\mathbb{R}^{n}\times \Lambda \times \Xi \rightarrow \mathbb{R}\) is the regularized gap function originating from [33, Eq. (7)] by Zhao et al., which is defined as
$$ g_{\gamma}(x,\lambda ,\xi )=\max_{y\in K} \Biggl\{ (x-y)^{\top}\sum_{j=1}^{m}\lambda _{j}F_{j}(x,\xi )-\frac{\gamma}{2} \Vert x-y \Vert ^{2} \Biggr\} . $$(5)
It is easy to see that
$$ g_{\gamma}(x,\lambda ,\xi )= \bigl(x-H_{\gamma}(x,\lambda ,\xi ) \bigr)^{\top}\sum_{j=1}^{m}\lambda _{j}F_{j}(x,\xi )-\frac{\gamma}{2} \bigl\Vert x-H_{\gamma}(x,\lambda ,\xi ) \bigr\Vert ^{2}, $$(6)
where
$$ H_{\gamma}(x,\lambda ,\xi )=\operatorname{Proj}_{K} \Biggl(x-\frac{1}{\gamma}\sum_{j=1}^{m}\lambda _{j}F_{j}(x,\xi ) \Biggr). $$(7)
In what follows, we assume that the D-gap function \(g_{\alpha \beta}(x,\lambda ,\cdot )\) is integrable on Ξ for each \((x,\lambda )\).
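To make these definitions concrete, the following minimal numerical sketch evaluates (5)–(7) in the special case where K is a box, so that \(\operatorname{Proj}_{K}\) reduces to componentwise clipping. The routine F (returning the \(m\times n\) matrix with rows \(F_{1}(x,\xi ),\dots ,F_{m}(x,\xi )\)), the box bounds, and all names are illustrative assumptions, not part of the paper.

```python
import numpy as np

# Hedged sketch of (5)-(7), assuming K = [lo, up]^n so that Proj_K is
# componentwise clipping. F(x, xi) is an assumed user routine returning
# the m x n matrix whose rows are F_1(x, xi), ..., F_m(x, xi).

def proj_K(z, lo=-1.0, up=1.0):
    # Euclidean projection onto the box [lo, up]^n
    return np.clip(z, lo, up)

def H(x, lam, xi, gamma, F):
    # H_gamma(x, lam, xi) = Proj_K(x - (1/gamma) * sum_j lam_j F_j(x, xi)), cf. (7)
    return proj_K(x - (1.0 / gamma) * (lam @ F(x, xi)))

def g(x, lam, xi, gamma, F):
    # Regularized gap function (5), evaluated via its closed form (6)
    h = H(x, lam, xi, gamma, F)
    Fw = lam @ F(x, xi)                      # weighted sum of the F_j
    return (x - h) @ Fw - 0.5 * gamma * np.dot(x - h, x - h)

def g_ab(x, lam, xi, alpha, beta, F):
    # D-gap function (4) with 0 < alpha < beta
    return g(x, lam, xi, alpha, F) - g(x, lam, xi, beta, F)
```

For instance, with an affine F, a call such as `g_ab(np.zeros(2), np.array([0.5, 0.5]), xi, 1.0, 2.0, F)` returns a single nonnegative sample of the integrand appearing in the UERM objective below.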
Remark 1
-
(i)
If \(m=1\) and \(K=\mathbb{R}^{n}_{+}\), then the D-gap function \(g_{\alpha \beta}\) reduces to the corresponding gap function [20, Eq. (3)] for the stochastic complementarity problem of Liu–Li [20];
-
(ii)
If Ω contains only one realization and \(m=1\), then the D-gap function \(g_{\alpha \beta}\) is the same as the gap function [32, Eq. (6)] considered by Yamashita–Taji–Fukushima [32] with \(\phi (x,y)=(1/2)\|x-y\|^{2}\).
In what follows, similar to Liu–Li [20, Eq. (6)], we present an unconstrained ERM (UERM) formulation for problem (3):
$$ \min_{(x,\lambda )\in \mathbb{R}^{n}\times \Lambda} \theta (x,\lambda ):=\mathbb{E} \bigl[g_{\alpha \beta}(x,\lambda ,\xi ) \bigr], $$(8)
where \(\mathbb{E}\) denotes the mathematical expectation with respect to the law of \(\xi \in \Xi \).
Following the work of Zhao et al. [33, p. 551], we adopt the following assumptions, which will be used in the sequel:
-
(a)
For \(\xi \in \Xi \) and each \(j=1,\dots ,m\), the function \(F_{j}(\cdot ,\xi )\) is a.s. continuously differentiable on \(\mathbb{R}^{n}\).
-
(b)
There exists an integrable function \(\kappa (\xi )\) such that
$$ \mathbb{E} \bigl[\kappa ^{2}(\xi ) \bigr]< +\infty \quad \text{and} \quad \sum_{j=1}^{m} \bigl\Vert F_{j}(x, \xi ) \bigr\Vert +\sum_{j=1}^{m} \bigl\Vert \nabla _{x}F_{j}(x, \xi ) \bigr\Vert _{\mathcal{F}}\leq \kappa (\xi ), $$hold a.s. for any \(x\in \mathbb{R}^{n}\) and \(\xi \in \Xi \). Here the Frobenius norm \(\|\cdot \|_{\mathcal{F}}\) is defined by \(\|A\|_{\mathcal{F}}= (\sum_{i=1}^{m}\sum_{j=1}^{n}a_{ij}^{2} )^{\frac{1}{2}}\) for a given matrix A.
-
(c)
For each \(j=1,\dots ,m\), the function \(F_{j}(\cdot ,\xi )\) is Lipschitz continuous on \(\mathbb{R}^{n}\) with Lipschitz constant \(L_{j}(\xi )\) satisfying \(\mathbb{E}[L^{2}_{j}(\xi )]<+\infty \), i.e.,
$$ \bigl\Vert F_{j}(y,\xi )-F_{j}(x,\xi ) \bigr\Vert \leq L_{j}(\xi ) \Vert y-x \Vert , \quad \forall x,y \in \mathbb{R}^{n}. $$Moreover, we set \(L=\max_{1\leq j\leq m} \mathbb{E} [L_{j}(\xi )]\).
Let us recall some well-known concepts and lemmas, which will be frequently used in the sequel.
Definition 1
Let \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) be a mapping.
-
(i)
It is said to be monotone on \(\mathbb{R}^{n}\) if for any \(x,y\in \mathbb{R}^{n}\),
$$ \bigl(F(y)-F(x) \bigr)^{\top}(y-x)\geq 0; $$ -
(ii)
It is said to be strongly monotone on \(\mathbb{R}^{n}\) with modulus \(\sigma >0\) if for any \(x,y\in \mathbb{R}^{n}\),
$$ \bigl(F(y)-F(x) \bigr)^{\top}(y-x)\geq \sigma \Vert y-x \Vert ^{2}. $$
Definition 2
([14])
Let \(F: \mathbb{R}^{n} \times \mathbb{R}^{r} \rightarrow \mathbb{R}^{n}\) be a mapping and \(K\subseteq \mathbb{R}^{n}\), \(V\subseteq \mathbb{R}^{r}\). Then F is said to be uniformly strongly monotone on K with modulus \(\mu > 0\) over V if for almost every \(\xi \in V\) and any \(x,y \in K\),
$$ \bigl(F(y,\xi )-F(x,\xi ) \bigr)^{\top}(y-x)\geq \mu \Vert y-x \Vert ^{2}. $$
Lemma 1
([7, Theorem 2.3.3])
Let \(K\subseteq \mathbb{R}^{n}\) be a nonempty, closed and convex set and let \(F:K\rightarrow \mathbb{R}^{n}\) be continuous. If F is strongly monotone on K, then \(\mathrm{VI} (K,F)\) has a unique solution.
Lemma 2
([8, Theorem 10.2.3])
Let \(K\subseteq \mathbb{R}^{n}\) be a nonempty, closed and convex set and let \(F:K\rightarrow \mathbb{R}^{n}\) be continuous. The following statements are valid for VI (2):
-
(a)
For every \(x\in K\), the maximum in the definition of \(\theta _{\alpha}(x)\) is attained at the unique point \(y_{c}(x)=\Pi _{K}(x-\alpha ^{-1}F(x))\).
-
(b)
\(\theta _{\alpha}(x)\) is continuous on K and nonnegative on K.
-
(c)
\(\theta _{\alpha}(x)=0\) and \(x\in K\) if and only if x solves the VI problem,
where \(\theta _{\alpha}(x)\) is the regularized gap function for problem (2), which is defined as \(\theta _{\alpha}(x)=\max_{y\in K}\{F(x)^{\top}(x-y) - \frac{\alpha}{2}\|x-y\|^{2}\}\).
Lemma 3
([30, Theorem 16.8])
Suppose that \(f(x,\xi )\) is a measurable and integrable function of ξ for each x in \((a,b)\). Let \(\phi (x)=\int f(x,\xi )\mu (d\xi )\). Suppose that for \(\xi \in \mathcal{A}\), where \(\mathcal{A}\) satisfies \(\mu ( \Omega - \mathcal{A}) =0\), \(f(x,\xi )\) has in \((a,b)\) a derivative \(f^{\prime}(x,\xi )\); suppose further that \(|f^{\prime}(x,\xi )| \leq g(\xi )\) for \(\xi \in \mathcal{A}\) and \(x\in (a,b)\), where g is integrable. Then \(\phi (x)\) has derivative \(\int f^{\prime}(x,\xi ) \mu (d\xi )\) on \((a,b)\).
3 Properties of the objective function θ
In this section, we first investigate the continuous differentiability of the objective function θ. To this end, we give some fundamental lemmas.
Lemma 4
For any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \) and \(\xi \in \Xi \), it holds that
$$ \bigl\Vert x-H_{\gamma}(x,\lambda ,\xi ) \bigr\Vert \leq \frac{2}{\gamma}\sum_{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert ,\quad \gamma =\alpha ,\beta . $$(9)
Proof
We have
$$ \frac{\gamma}{2} \bigl\Vert x-H_{\gamma}(x,\lambda ,\xi ) \bigr\Vert ^{2}\leq \bigl(x-H_{\gamma}(x,\lambda ,\xi ) \bigr)^{\top}\sum_{j=1}^{m}\lambda _{j}F_{j}(x,\xi )\leq \bigl\Vert x-H_{\gamma}(x,\lambda ,\xi ) \bigr\Vert \Biggl\Vert \sum_{j=1}^{m}\lambda _{j}F_{j}(x,\xi ) \Biggr\Vert \leq \bigl\Vert x-H_{\gamma}(x,\lambda ,\xi ) \bigr\Vert \sum_{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert , $$
where the first inequality follows from the nonnegativity of the regularized gap function \(g_{\gamma}\), the second inequality follows from the Cauchy–Schwarz inequality, and the last inequality follows from the fact that \(\lambda _{j}\in [0,1]\). Thus, we get the desired conclusion. This completes the proof. □
Lemma 5
Let \(0<\alpha <\beta \). For any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \) and \(\xi \in \Xi \), one has
$$ \bigl\Vert H_{\alpha}(x,\lambda ,\xi )-H_{\beta}(x,\lambda ,\xi ) \bigr\Vert \leq \biggl(\frac{1}{\alpha}-\frac{1}{\beta} \biggr)\sum_{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert . $$(10)
Proof
It holds that
$$ \bigl\Vert H_{\alpha}(x,\lambda ,\xi )-H_{\beta}(x,\lambda ,\xi ) \bigr\Vert \leq \biggl\Vert \biggl(\frac{1}{\alpha}-\frac{1}{\beta} \biggr)\sum_{j=1}^{m}\lambda _{j}F_{j}(x,\xi ) \biggr\Vert \leq \biggl(\frac{1}{\alpha}-\frac{1}{\beta} \biggr)\sum_{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert , $$
where the first inequality follows from the nonexpansivity property of the projection operator \(\operatorname{Proj}_{K}\), and the second inequality follows from the triangle inequality of the norm and the fact that \(\lambda _{j}\in [0,1]\). This completes the proof. □
Remark 2
If \(m=1\) and \(K=\mathbb{R}^{n}_{+}\), then Lemma 5 reduces to [20, Lemma 2.1] due to Liu–Li.
Theorem 1
Suppose that assumptions (a) and (b) hold. Then, one has
-
(i)
\(g_{\alpha \beta}(\cdot ,\cdot ,\xi )\) is continuously differentiable with respect to \((x,\lambda )\) for any \(\xi \in \Xi \);
-
(ii)
\(\theta (\cdot ,\cdot )\) is continuously differentiable with respect to \((x,\lambda )\) and
$$ \nabla \theta (x,\lambda )=\mathbb{E} \bigl[{\nabla}_{(x,\lambda )}g_{ \alpha \beta}(x, \lambda ,\xi ) \bigr]. $$
Proof
(i) Since the functions \(F_{j}(\cdot ,\xi )\) (\(j=1,\dots ,m\)) are continuously differentiable with respect to x on \(\mathbb{R}^{n}\) by assumption (a), it follows from item (c) of Lemma 2 that the D-gap function \(g_{\alpha \beta}(\cdot ,\cdot ,\xi )\) is continuously differentiable with respect to \((x,\lambda )\) on \(\mathbb{R}^{n}\times \Lambda \) for any \(\xi \in \Xi \).
(ii) From item (c) of Lemma 2, we have
Hence, we obtain that
where the last inequality follows from the triangle and Cauchy–Schwarz inequalities, as well as the fact that \(\lambda _{j}\in [0,1]\). Taking \(\gamma =\alpha ,\beta \) in Lemma 4, we get
$$ \bigl\Vert x-H_{\alpha}(x,\lambda ,\xi ) \bigr\Vert \leq \frac{2}{\alpha}\sum_{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert \leq \frac{2}{\alpha}\kappa (\xi ) $$(13)
and
$$ \bigl\Vert x-H_{\beta}(x,\lambda ,\xi ) \bigr\Vert \leq \frac{2}{\beta}\sum_{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert \leq \frac{2}{\beta}\kappa (\xi ). $$(14)
Thus, we obtain
where the first inequality follows from (12) and the second inequality follows from (13), (14), and (10) in Lemma 5. From Lemma 3, we know that the function θ is continuously differentiable and that \(\nabla \theta (x,\lambda )=\mathbb{E}[{\nabla}_{(x,\lambda )}g_{\alpha \beta}(x,\lambda ,\xi )]\). □
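Theorem 1(ii) licenses interchanging the gradient and the expectation, which is exactly what a sample-based method exploits: the sample average of per-scenario gradients is a consistent estimator of \(\nabla \theta \). A hedged sketch, assuming a routine grad_g for \({\nabla}_{(x,\lambda )}g_{\alpha \beta}\) is available (such a routine is not specified in the paper):

```python
import numpy as np

# Sketch: by Theorem 1(ii), grad theta(x, lam) = E[grad g_ab(x, lam, xi)].
# grad_g(x, lam, xi) is an assumed routine returning the (n + m)-vector
# grad_{(x,lam)} g_ab(x, lam, xi); averaging over iid samples then gives a
# consistent estimator of grad theta(x, lam).

def grad_theta_estimate(x, lam, samples, grad_g):
    return np.mean([grad_g(x, lam, xi) for xi in samples], axis=0)
```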
Next, we investigate the boundedness of the level set of the objective function θ. The level set of θ is defined by
$$ L_{\theta}(c):= \bigl\{ (x,\lambda )\in \mathbb{R}^{n}\times \Lambda :\theta (x,\lambda )\leq c \bigr\} . $$
Lemma 6
Let \(0<\alpha <\beta \). For any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \) and \(\xi \in \Xi \), one has
$$ \frac{\beta -\alpha}{2} \bigl\Vert x-H_{\beta}(x,\lambda ,\xi ) \bigr\Vert ^{2}\leq g_{\alpha \beta}(x,\lambda ,\xi )\leq \frac{\beta -\alpha}{2} \bigl\Vert x-H_{\alpha}(x,\lambda ,\xi ) \bigr\Vert ^{2}. $$(15)
Proof
We only prove the first inequality of (15). By the definition of the D-gap function \(g_{\alpha \beta}\) and Lemma 2, we have
$$ g_{\alpha \beta}(x,\lambda ,\xi )\geq \bigl(x-H_{\beta}(x,\lambda ,\xi ) \bigr)^{\top}\sum_{j=1}^{m}\lambda _{j}F_{j}(x,\xi )-\frac{\alpha}{2} \bigl\Vert x-H_{\beta}(x,\lambda ,\xi ) \bigr\Vert ^{2}-g_{\beta}(x,\lambda ,\xi )=\frac{\beta -\alpha}{2} \bigl\Vert x-H_{\beta}(x,\lambda ,\xi ) \bigr\Vert ^{2}, $$
where the first inequality holds because \(H_{\beta}(x,\lambda ,\xi )\in K\) is feasible for the maximization defining \(g_{\alpha}(x,\lambda ,\xi )\).
In a similar way, we obtain the second inequality of (15). This completes the proof. □
Theorem 2
If K is a compact set, then, for any \(c\geq 0\), the level set \(L_{\theta}(c)\) of the objective function θ is bounded.
Proof
Suppose on the contrary that there exists a \(\bar{c}\geq 0\) such that \(L_{\theta}(\bar{c})\) is unbounded. This implies that there exists a sequence \(\{(x^{k},\lambda ^{k})\}\subset L_{\theta}(\bar{c})\) such that
$$ \lim_{k\rightarrow \infty} \bigl\Vert \bigl(x^{k},\lambda ^{k} \bigr) \bigr\Vert =\infty . $$
Since \(\{\lambda ^{k}\}\subset \Lambda \) and Λ is a compact set, we get that
$$ \lim_{k\rightarrow \infty} \bigl\Vert x^{k} \bigr\Vert =\infty . $$
For each k, we know that \(H_{\beta}(x^{k},\lambda ^{k},\xi )\in K\) by (7). Since the set K is compact, the sequence \(\{H_{\beta}(x^{k},\lambda ^{k},\xi )\}\) is bounded. Therefore, we obtain that \(\|x^{k}-H_{\beta}(x^{k},\lambda ^{k},\xi )\|\rightarrow \infty \) as k tends to ∞.
It follows from the definition of \(\theta (x,\lambda )\) and (15) in Lemma 6 that
$$ \theta \bigl(x^{k},\lambda ^{k} \bigr)=\mathbb{E} \bigl[g_{\alpha \beta} \bigl(x^{k},\lambda ^{k},\xi \bigr) \bigr]\geq \frac{\beta -\alpha}{2}\mathbb{E} \bigl[ \bigl\Vert x^{k}-H_{\beta} \bigl(x^{k},\lambda ^{k},\xi \bigr) \bigr\Vert ^{2} \bigr]. $$(16)
Since \(\|x^{k}-H_{\beta}(x^{k},\lambda ^{k},\xi )\|\rightarrow \infty \), as k tends to ∞, we get that \(\mathbb{E}[\|x^{k}-H_{\beta}(x^{k},\lambda ^{k},\xi )\|^{2}] \rightarrow \infty \), and so
$$ \frac{\beta -\alpha}{2}\mathbb{E} \bigl[ \bigl\Vert x^{k}-H_{\beta} \bigl(x^{k},\lambda ^{k},\xi \bigr) \bigr\Vert ^{2} \bigr]\rightarrow \infty $$
whenever \(0 < \alpha <\beta \). This, together with (16), implies that \(\theta (x^{k},\lambda ^{k})\rightarrow \infty \) as k tends to ∞. This contradicts the fact that \((x^{k},\lambda ^{k}) \in L_{\theta}(\bar{c})\), completing the proof. □
4 Convergence analysis
In this section, we utilize the sample average approximation (SAA) approach (for more about the SAA approach, please refer to, for example, [31]) when dealing with the expected value of the objective function.
Lemma 7
([29])
Let \(\Phi :\Xi \rightarrow \mathbb{R}\) be an integrable function. Then, one has
$$ \lim_{N\rightarrow \infty}\frac{1}{N}\sum_{i=1}^{N}\Phi (\xi _{i})=\mathbb{E} \bigl[\Phi (\xi ) \bigr]\quad \mathrm{w.p.1}, $$(17)
where \(\{\xi _{1},\xi _{2},\dots ,\xi _{N}\}\) is an independent and identically distributed (iid) sample of ξ and “\(\mathrm{w.p.1}\)” means that the convergence holds with probability one.
Consequently, from Lemma 7, the UERM problem (8) is further converted into the following SAA problem:
$$ \min_{(x,\lambda )\in \mathbb{R}^{n}\times \Lambda} \theta _{N}(x,\lambda ):=\frac{1}{N}\sum_{i=1}^{N}g_{\alpha \beta}(x,\lambda ,\xi _{i}). $$(18)
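As a hedged illustration of how (18) might be tackled in practice, the sketch below minimizes \(\theta _{N}\) with an off-the-shelf SLSQP solver, handling \(\lambda \in \Lambda \) through bounds and one equality constraint; g_ab and F are the illustrative routines from the earlier sketch, and nothing here is prescribed by the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the SAA problem (18): minimize theta_N over x in R^n and lam in
# the simplex Lambda. The simplex is enforced by box bounds lam_j in [0, 1]
# plus the equality constraint sum_j lam_j = 1.

def solve_saa(n, m, samples, g_ab, F, alpha=1.0, beta=2.0):
    def theta_N(z):
        x, lam = z[:n], z[n:]
        return np.mean([g_ab(x, lam, xi, alpha, beta, F) for xi in samples])

    z0 = np.concatenate([np.zeros(n), np.full(m, 1.0 / m)])   # feasible start
    bounds = [(None, None)] * n + [(0.0, 1.0)] * m
    cons = [{"type": "eq", "fun": lambda z: np.sum(z[n:]) - 1.0}]
    res = minimize(theta_N, z0, method="SLSQP", bounds=bounds, constraints=cons)
    return res.x[:n], res.x[n:], res.fun
```

By Theorem 3 below, as the sample size N grows, accumulation points of the computed minimizers are global minimizers of (8) with probability one, which is what justifies such a plug-in scheme.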
4.1 Convergence of global optimal solutions
In this subsection, we will investigate the limiting behavior of global optimal solutions. We denote by \(S^{*}\) and \(S_{N}^{*}\) the sets of optimal solutions for problems (8) and (18), respectively.
Lemma 8
For any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \), one has
$$ \lim_{N\rightarrow \infty}\theta _{N}(x,\lambda )=\theta (x,\lambda )\quad \mathrm{w.p.1}. $$(19)
Proof
Since the D-gap function \(g_{\alpha \beta}(x,\lambda ,\cdot )\) is integrable on Ξ, by Lemma 7, we obtain that
$$ \lim_{N\rightarrow \infty}\theta _{N}(x,\lambda )=\lim_{N\rightarrow \infty}\frac{1}{N}\sum_{i=1}^{N}g_{\alpha \beta}(x,\lambda ,\xi _{i})=\mathbb{E} \bigl[g_{\alpha \beta}(x,\lambda ,\xi ) \bigr]=\theta (x,\lambda )\quad \mathrm{w.p.1}. $$
This completes the proof. □
Theorem 3
Suppose that assumptions (a) and (b) hold. Let \((x^{N},\lambda ^{N})\in S_{N}^{*}\) for each N and let \((x^{*},\lambda ^{*})\) be an accumulation point of the sequence \(\{(x^{N},\lambda ^{N})\}\). Then, we have \((x^{*},\lambda ^{*})\in S^{*}\).
Proof
Let \((x^{*},\lambda ^{*})\) be an accumulation point of the sequence \(\{(x^{N},\lambda ^{N})\}\). Without loss of generality, we assume that the sequence \(\{(x^{N},\lambda ^{N})\}\) converges to a point \((x^{*},\lambda ^{*})\). It is obvious that \((x^{*},\lambda ^{*})\in \mathbb{R}^{n}\times \Lambda \). We divide the proof into two parts.
Part 1. We claim that
$$ \lim_{N\rightarrow \infty}\theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)=\theta \bigl(x^{*},\lambda ^{*} \bigr)\quad \mathrm{w.p.1}. $$(20)
In fact, from Theorem 1, for any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \) and \(\xi \in \Xi \), we have
and
It follows from the mean-value theorem that for each \((x^{N},\lambda ^{N})\) and each \(\xi _{i}\),
$$ g_{\alpha \beta} \bigl(x^{N},\lambda ^{N},\xi _{i} \bigr)-g_{\alpha \beta} \bigl(x^{*},\lambda ^{*},\xi _{i} \bigr)={\nabla}_{(x,\lambda )}g_{\alpha \beta} \bigl(y^{Ni},\lambda ^{Ni},\xi _{i} \bigr)^{\top} \begin{pmatrix} x^{N}-x^{*} \\ \lambda ^{N}-\lambda ^{*} \end{pmatrix}, $$(23)
where \((y^{Ni}, \lambda ^{Ni} )\in \mathbb{R}^{n}\times \Lambda \) and \(y^{Ni}=a_{Ni}x^{N}+(1-a_{Ni})x^{*}\), \(\lambda ^{Ni}=a_{Ni}\lambda ^{N}+(1-a_{Ni})\lambda ^{*}\) with \(a_{Ni}\in [0,1]\). Then we get that
where the second inequality follows from (23) and the Cauchy–Schwarz inequality, the third inequality follows from (21) and (22), the fourth inequality follows from the definition of the Frobenius matrix norm, and the last inequality follows from assumption (b). Since the sequence \(\{ (x^{N},\lambda ^{N} ) \}\) converges to the point \((x^{*},\lambda ^{*})\), using assumption (b), we conclude that (20) is true.
Part 2. We next show that \((x^{*},\lambda ^{*})\in S^{*}\). Since
it follows from Lemma 8 and (20) that
Notice that \((x^{N},\lambda ^{N} )\in S_{N}^{*}\) for each N, which means that
$$ \theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)\leq \theta _{N}(x,\lambda ),\quad \forall (x,\lambda )\in \mathbb{R}^{n}\times \Lambda . $$
Taking the limit in the above inequality as \(N\rightarrow \infty \), we obtain that
$$ \theta \bigl(x^{*},\lambda ^{*} \bigr)\leq \theta (x,\lambda ),\quad \forall (x,\lambda )\in \mathbb{R}^{n}\times \Lambda \quad \mathrm{w.p.1}, $$
which implies that \((x^{*},\lambda ^{*})\in S^{*}\) with probability one. □
4.2 Convergence of stationary points
A point \((x^{*},\lambda ^{*})\) is said to be a stationary point for problem (8) if it satisfies
$$ \nabla \theta \bigl(x^{*},\lambda ^{*} \bigr)^{\top} \begin{pmatrix} x-x^{*} \\ \lambda -\lambda ^{*} \end{pmatrix}\geq 0,\quad \forall (x,\lambda )\in \mathbb{R}^{n}\times \Lambda . $$(24)
For each N, a point \((x^{N},\lambda ^{N} )\) is said to be a stationary point for problem (18) if it satisfies
$$ \nabla \theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)^{\top} \begin{pmatrix} x-x^{N} \\ \lambda -\lambda ^{N} \end{pmatrix}\geq 0,\quad \forall (x,\lambda )\in \mathbb{R}^{n}\times \Lambda . $$(25)
Theorem 4
Suppose that assumptions (a), (b), and (c) hold. Let \((x^{N},\lambda ^{N} )\) be a stationary point of problem (18) for each N and \((x^{*},\lambda ^{*})\) be an accumulation point of the sequence \(\{ (x^{N},\lambda ^{N} ) \}\). Then, \((x^{*},\lambda ^{*})\) is a stationary point of problem (8) with probability one.
Proof
In view of the definitions of \(\theta (\cdot ,\cdot )\) and \(\theta _{N}(\cdot ,\cdot )\), we have
$$ \nabla \theta (x,\lambda )=\mathbb{E} \bigl[{\nabla}_{(x,\lambda )}g_{\alpha \beta}(x,\lambda ,\xi ) \bigr] $$
and
$$ \nabla \theta _{N}(x,\lambda )=\frac{1}{N}\sum_{i=1}^{N}{\nabla}_{(x,\lambda )}g_{\alpha \beta}(x,\lambda ,\xi _{i}). $$
Let \(D\subset \mathbb{R}^{n}\times \Lambda \) be a compact set. We have from assumptions (a), (b), and (c) that
$$ \lim_{N\rightarrow \infty}\sup_{(x,\lambda )\in D} \bigl\Vert \nabla \theta _{N}(x,\lambda )-\nabla \theta (x,\lambda ) \bigr\Vert =0\quad \mathrm{w.p.1}. $$(26)
From assumptions (a) and (b), it is easy to see that \(\nabla \theta (\cdot ,\cdot )\) is a continuous function. Let \((x^{*},\lambda ^{*})\) be an accumulation point of the sequence \(\{ (x^{N},\lambda ^{N} ) \}\). Without loss of generality, we assume that \(\lim_{N\rightarrow \infty} (x^{N},\lambda ^{N} )=(x^{*}, \lambda ^{*})\). The sequence \(\{ (x^{N},\lambda ^{N} ) \}\) is contained in a closed neighborhood \(B\subset \mathbb{R}^{n}\times \Lambda \) of \((x^{*},\lambda ^{*})\) for sufficiently large N. Thus we conclude that for any given \(\varepsilon >0\),
$$ \bigl\Vert \nabla \theta \bigl(x^{N},\lambda ^{N} \bigr)-\nabla \theta \bigl(x^{*},\lambda ^{*} \bigr) \bigr\Vert \leq \varepsilon \quad \text{for all sufficiently large } N. $$(27)
From (26), there exists \(N_{0}>0\) such that \((x^{N},\lambda ^{N} )\in B\) for all \(N\geq N_{0}\) and
$$ \bigl\Vert \nabla \theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)-\nabla \theta \bigl(x^{N},\lambda ^{N} \bigr) \bigr\Vert \leq \sup_{(x,\lambda )\in B} \bigl\Vert \nabla \theta _{N}(x,\lambda )-\nabla \theta (x,\lambda ) \bigr\Vert \leq \varepsilon . $$(28)
Since
$$ \bigl\Vert \nabla \theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)-\nabla \theta \bigl(x^{*},\lambda ^{*} \bigr) \bigr\Vert \leq \bigl\Vert \nabla \theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)-\nabla \theta \bigl(x^{N},\lambda ^{N} \bigr) \bigr\Vert + \bigl\Vert \nabla \theta \bigl(x^{N},\lambda ^{N} \bigr)-\nabla \theta \bigl(x^{*},\lambda ^{*} \bigr) \bigr\Vert , $$
we have from (27) and (28) that
$$ \bigl\Vert \nabla \theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)-\nabla \theta \bigl(x^{*},\lambda ^{*} \bigr) \bigr\Vert \leq 2\varepsilon \quad \text{for all sufficiently large } N. $$
Thus, we obtain that
$$ \lim_{N\rightarrow \infty}\nabla \theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)=\nabla \theta \bigl(x^{*},\lambda ^{*} \bigr)\quad \mathrm{w.p.1}. $$
By taking the limit as N tends to ∞ in (25), we obtain (24). That is, \((x^{*},\lambda ^{*})\) is a stationary point of problem (8) with probability one. This completes the proof. □
5 EV formulation and its global error bound
In this section, let us consider another deterministic formulation for the SVVI problem, i.e., the EV formulation (for more details about the EV formulation, please see, for example, [11]), that is, to find \(x^{*} \in K\) such that
$$ \bigl( \bigl(y-x^{*} \bigr)^{\top}\mathbb{E} \bigl[F_{1} \bigl(x^{*},\xi \bigr) \bigr],\dots , \bigl(y-x^{*} \bigr)^{\top}\mathbb{E} \bigl[F_{m} \bigl(x^{*},\xi \bigr) \bigr] \bigr)\notin -\operatorname{int} \mathbb{R}_{+}^{m},\quad \forall y\in K. $$(29)
The notation \(\operatorname{sol}(\mathbb{E}[F(x,\xi )], K)\) denotes the solution set of problem (29), and \(\operatorname{dist}(x,X)\) denotes \(\min_{y\in X} \|x-y\|\). Lee et al. [16] provided the following properties for the deterministic VVI.
Lemma 9
([16, Theorem 4.2])
Suppose that the \(F_{j}\) are strongly monotone on K with modulus \(\beta >0\) and Lipschitz continuous on K with modulus \(l>0\) for all j (\(j=1,\dots ,m\)). Then the solution set of the VVI is compact.
In this section, we give some conditions based on the EV formulation under which the D-gap function provides a global error bound for the SVVI problem (29). Assume that the expected value of the function \(F_{j}(x,\cdot )\) is well defined. Similar to the work of Zhao et al. [33, Eq. (6)], we give an equivalent scalar variational inequality for problem (29), i.e., find \((x^{*},\lambda ^{*}) \in K \times \Lambda \) such that
$$ \bigl(y-x^{*} \bigr)^{\top}\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E} \bigl[F_{j} \bigl(x^{*},\xi \bigr) \bigr]\geq 0,\quad \forall y\in K. $$(30)
From the ideas of Yamashita–Taji–Fukushima [32, Eq. (6)] and Zhao et al. [33, Eq. (7)], the D-gap function \({g}_{\alpha \beta}(\cdot ,\cdot ): \mathbb{R}^{n} \times \Lambda \rightarrow [0,\infty )\) of problem (30) is defined as follows:
$$ g_{\alpha \beta}(x,\lambda )=g_{\alpha}(x,\lambda )-g_{\beta}(x,\lambda ), $$(31)
where \(0<\alpha <\beta \) and \(g_{\gamma}(\cdot ,\cdot )\) \((\gamma =\alpha ,\beta )\) is the regularized gap function, which is defined by
$$ g_{\gamma}(x,\lambda )=\max_{y\in K} \Biggl\{ (x-y)^{\top}\sum_{j=1}^{m}\lambda _{j}\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr]-\frac{\gamma}{2} \Vert x-y \Vert ^{2} \Biggr\} . $$
It is easy to see that
$$ g_{\gamma}(x,\lambda )= \bigl(x-H_{\gamma}(x,\lambda ) \bigr)^{\top}\sum_{j=1}^{m}\lambda _{j}\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr]-\frac{\gamma}{2} \bigl\Vert x-H_{\gamma}(x,\lambda ) \bigr\Vert ^{2}, $$
where
$$ H_{\gamma}(x,\lambda )=\operatorname{Proj}_{K} \Biggl(x-\frac{1}{\gamma}\sum_{j=1}^{m}\lambda _{j}\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr] \Biggr). $$(32)
The following main properties of the D-gap function \(g_{\alpha \beta}(\cdot ,\cdot )\) hold:
-
\(g_{\alpha \beta}(x,\lambda )\geq 0\) for any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \);
-
\(g_{{\alpha}{\beta}}(x^{*},\lambda ^{*})=0\) if and only if \((x^{*},\lambda ^{*})\) solves problem (30).
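Computationally, the EV D-gap function (31)–(32) differs from its UERM counterpart only in that \(\mathbb{E}[F_{j}(x,\xi )]\) must itself be approximated, e.g., by a plain Monte Carlo average. A hedged sketch reusing the illustrative F and box projection from the sketch in Sect. 2:

```python
import numpy as np

# Sketch of (31)-(32): replace E[F_j(x, xi)] by a sample average and then
# apply the deterministic D-gap machinery. F and the box projection are the
# illustrative assumptions introduced earlier, not part of the paper.

def EF(x, samples, F):
    # Monte Carlo estimate of the m x n matrix of expectations E[F_j(x, xi)]
    return np.mean([F(x, xi) for xi in samples], axis=0)

def ev_gap(x, lam, samples, gamma, F, lo=-1.0, up=1.0):
    Fbar = lam @ EF(x, samples, F)                   # sum_j lam_j E[F_j(x, xi)]
    h = np.clip(x - (1.0 / gamma) * Fbar, lo, up)    # H_gamma(x, lam), cf. (32)
    return (x - h) @ Fbar - 0.5 * gamma * np.dot(x - h, x - h)

def ev_d_gap(x, lam, samples, alpha, beta, F):
    # EV D-gap (31): nonnegative, and zero exactly at solutions of (30)
    return ev_gap(x, lam, samples, alpha, F) - ev_gap(x, lam, samples, beta, F)
```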
Remark 3
If \(m=1\), then the D-gap function \(g_{{\alpha \beta}}(\cdot ,\cdot )\) considered in the present paper reduces to the D-gap function [12, Eq. (5)] discussed by He et al.
As usual, \(\mathbb{P}(V)\) denotes the probability of an event V.
Lemma 10
Let \(0<{\alpha}<{\beta}\). For any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \), one has
$$ \frac{\beta -\alpha}{2} \bigl\Vert x-H_{\beta}(x,\lambda ) \bigr\Vert ^{2}\leq g_{\alpha \beta}(x,\lambda )\leq \frac{\beta -\alpha}{2} \bigl\Vert x-H_{\alpha}(x,\lambda ) \bigr\Vert ^{2}. $$(33)
Proof
This proof is similar to that of Lemma 6, so we omit it here. □
Lemma 11
Suppose that assumptions (a) and (c) hold, and each function \(F_{j}(\cdot ,\xi )\) (\(j=1,\dots ,m\)) is monotone on K for almost every \(\xi \in \Xi \). Let \(F_{j}\) (\(j=1,\dots ,m\)) be uniformly strongly monotone on K with modulus \(\mu _{j}>0\) over \(V_{j}\subset \Xi \subset \mathbb{R}^{r}\) with \(\mathbb{P}(V_{j})>0\), \(\mu :=\min_{1\leq j\leq m}\mu _{j}\) and \(\nu :=\min_{1\leq j\leq m}\mathbb{P}(V_{j})\). Then the solution set \(\operatorname{sol}(\mathbb{E}[F(x,\xi )],K)\) is compact.
Proof
For each \(j=1,\dots ,m\), since \(F_{j}\) is uniformly strongly monotone on K with modulus \(\mu _{j}>0\) over \(V_{j}\), we have
$$ \bigl(F_{j}(y,\xi )-F_{j}(x,\xi ) \bigr)^{\top}(y-x)\geq \mu _{j} \Vert y-x \Vert ^{2},\quad \forall x,y\in K, \text{ a.e. } \xi \in V_{j}. $$(34)
In addition, since \(F_{j}(\cdot ,\xi )\) is monotone on \(\mathbb{R}^{n}\) for almost every \(\xi \in \Xi \), we have
$$ \bigl(F_{j}(y,\xi )-F_{j}(x,\xi ) \bigr)^{\top}(y-x)\geq 0,\quad \forall x,y\in \mathbb{R}^{n}, \text{ a.e. } \xi \in \Xi . $$(35)
Combining (34) and (35), for any \(x,y \in \mathbb{R}^{n}\), we get
$$ \bigl(\mathbb{E} \bigl[F_{j}(y,\xi ) \bigr]-\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr] \bigr)^{\top}(y-x)=\mathbb{E} \bigl[ \bigl(F_{j}(y,\xi )-F_{j}(x,\xi ) \bigr)^{\top}(y-x) \bigr]\geq \mu _{j}\mathbb{P}(V_{j}) \Vert y-x \Vert ^{2}\geq \nu \mu \Vert y-x \Vert ^{2}. $$(36)
Hence, for each j (\(j=1,\dots ,m\)), \(\mathbb{E}[F_{j}]\) is strongly monotone with modulus \(\nu \mu >0\). On the other hand, by assumption (c), for any \(x,y \in \mathbb{R}^{n}\), it holds that
$$ \bigl\Vert \mathbb{E} \bigl[F_{j}(y,\xi ) \bigr]-\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr] \bigr\Vert \leq \mathbb{E} \bigl[L_{j}(\xi ) \bigr] \Vert y-x \Vert \leq L \Vert y-x \Vert . $$
Therefore, for each j (\(j=1,\dots ,m\)), \(\mathbb{E}[F_{j}]\) is Lipschitz continuous with modulus \(L>0\) and strongly monotone with modulus \(\nu \mu >0\). Then, from Lemma 9, we obtain that \(\operatorname{sol}(\mathbb{E}[F(x,\xi )],K)\) is compact. □
Theorem 5
Suppose that assumptions (a), (b), and (c) hold, and each function \(F_{j}(\cdot ,\xi )\) (\(j=1,\dots ,m\)) is monotone on K for almost every \(\xi \in \Xi \). Let \(F_{j}\) (\(j=1,\dots ,m\)) be uniformly strongly monotone on K with modulus \(\mu _{j}>0\) over \(V_{j}\subset \Xi \subset \mathbb{R}^{r}\) with \(\mathbb{P}(V_{j})>0\), \(\mu :=\min_{1\leq j\leq m}\mu _{j}\) and \(\nu :=\min_{1\leq j\leq m}\mathbb{P}(V_{j})\). Then, for any \(\lambda \in \Lambda \) and any \(x\in \mathbb{R}^{n}\), one has
$$ \operatorname{dist} \bigl(x,\operatorname{sol} \bigl(\mathbb{E} \bigl[F(x,\xi ) \bigr],K \bigr) \bigr)\leq \frac{L+\beta}{\nu \mu}\sqrt{\frac{2}{\beta -\alpha}g_{\alpha \beta}(x,\lambda )}. $$
Proof
In problem (30), the solution x depends on the choice of \(\lambda \in \Lambda \); once \(\lambda ^{*} \in \Lambda \) is fixed, the corresponding solution is determined. Since each function \(F_{j}\) (\(j=1,\dots ,m\)) is uniformly strongly monotone, using (36) and the fact that \(\sum_{j=1}^{m}\lambda ^{*}_{j}=1\), for any \(x,y\in \mathbb{R}^{n}\), one has
$$ \Biggl(\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E} \bigl[F_{j}(y,\xi ) \bigr]-\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr] \Biggr)^{\top}(y-x)\geq \sum_{j=1}^{m}\lambda ^{*}_{j}\nu \mu \Vert y-x \Vert ^{2}=\nu \mu \Vert y-x \Vert ^{2}, $$
where \(\lambda ^{*}_{j}\in [0,1]\). Hence, \(\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E}[F_{j}]\) is strongly monotone with modulus \(\nu \mu >0\) for fixed \(\lambda ^{*}\in \Lambda \). Therefore, from Lemma 1, there is a unique solution \(x^{*}\) such that
$$ \bigl(y-x^{*} \bigr)^{\top}\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E} \bigl[F_{j} \bigl(x^{*},\xi \bigr) \bigr]\geq 0,\quad \forall y\in K. $$
For any \(x,y\in \mathbb{R}^{n}\), using the triangle inequality and the Lipschitz continuity of \(\mathbb{E}[F_{j}]\), we obtain
$$ \Biggl\Vert \sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E} \bigl[F_{j}(y,\xi ) \bigr]-\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr] \Biggr\Vert \leq \sum_{j=1}^{m}\lambda ^{*}_{j}L \Vert y-x \Vert =L \Vert y-x \Vert , $$(38)
where \(\lambda ^{*}_{j}\in [0,1]\). Hence, \(\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E}[F_{j}]\) is Lipschitz continuous with modulus \(L>0\). Then, we claim that there exists a constant \(r>0\) such that
$$ \bigl\Vert x-x^{*} \bigr\Vert \leq r \bigl\Vert x-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert ,\quad \forall x\in \mathbb{R}^{n}. $$(39)
In fact, given \(x\in K\), from item (a) of Lemma 2, we know that \(H_{{\beta}}(x,\lambda ^{*})\), defined by (32), is the unique solution of the following strongly convex minimization problem:
$$ \min_{y\in K} \Biggl\{ (y-x)^{\top}\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr]+\frac{\beta}{2} \Vert y-x \Vert ^{2} \Biggr\} . $$
Hence, \(H_{{\beta}}(x,\lambda ^{*})\) fulfills, for all \(y\in K\), the following optimality condition:
$$ \bigl(y-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr)^{\top} \Biggl(\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr]+\beta \bigl(H_{\beta} \bigl(x,\lambda ^{*} \bigr)-x \bigr) \Biggr)\geq 0. $$
Taking \(y=x^{*}\) in the above inequality, we obtain
$$ \bigl(x^{*}-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr)^{\top} \Biggl(\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr]+\beta \bigl(H_{\beta} \bigl(x,\lambda ^{*} \bigr)-x \bigr) \Biggr)\geq 0. $$(40)
From (36), the function \(\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E}[F_{j}(\cdot ,\xi )]\) is strongly monotone with modulus μν. Hence, the solution \(x^{*}\) is unique and, taking \(y=H_{\beta}(x,\lambda ^{*})\in K\) in the inequality it satisfies, we get
$$ \bigl(H_{\beta} \bigl(x,\lambda ^{*} \bigr)-x^{*} \bigr)^{\top}\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E} \bigl[F_{j} \bigl(x^{*},\xi \bigr) \bigr]\geq 0. $$(41)
Adding (40) and (41) and writing \(G(\cdot ):=\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E}[F_{j}(\cdot ,\xi )]\) for brevity, we have
$$ \bigl(x^{*}-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr)^{\top} \bigl(G(x)-G \bigl(x^{*} \bigr) \bigr)+\beta \bigl(x^{*}-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr)^{\top} \bigl(H_{\beta} \bigl(x,\lambda ^{*} \bigr)-x \bigr)\geq 0. $$
This can be rewritten as follows:
$$ \bigl(x-x^{*} \bigr)^{\top} \bigl(G(x)-G \bigl(x^{*} \bigr) \bigr)\leq \bigl(x-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr)^{\top} \bigl(G(x)-G \bigl(x^{*} \bigr) \bigr)+\beta \bigl(x^{*}-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr)^{\top} \bigl(H_{\beta} \bigl(x,\lambda ^{*} \bigr)-x \bigr). $$
Noticing that \({\beta}>0\) and simultaneously utilizing the Cauchy–Schwarz inequality, we get that
$$ \bigl(x-x^{*} \bigr)^{\top} \bigl(G(x)-G \bigl(x^{*} \bigr) \bigr)\leq \bigl\Vert x-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert \bigl\Vert G(x)-G \bigl(x^{*} \bigr) \bigr\Vert +\beta \bigl\Vert x-x^{*} \bigr\Vert \bigl\Vert x-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert . $$
This, together with (36) and (38), implies that
$$ \nu \mu \bigl\Vert x-x^{*} \bigr\Vert ^{2}\leq (L+\beta ) \bigl\Vert x-x^{*} \bigr\Vert \bigl\Vert x-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert , $$
which means that
$$ \bigl\Vert x-x^{*} \bigr\Vert \leq \frac{L+\beta}{\nu \mu} \bigl\Vert x-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert . $$(43)
As a result, the needed constant \(r>0\) is given by \(r=\frac{L+{\beta}}{\nu \mu}\), and (39) is true. From the first inequality of (33) in Lemma 10, we have
$$ \bigl\Vert x-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert \leq \sqrt{\frac{2}{\beta -\alpha}g_{\alpha \beta} \bigl(x,\lambda ^{*} \bigr)}. $$(44)
Combining (43) with (44) and using the arbitrariness of \(\lambda ^{*} \in \Lambda \), we get the desired conclusion. This completes the proof. □
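The proof yields a computable certificate: combining (43) with (44) bounds the distance to the EV solution set by \(\frac{L+\beta}{\nu \mu}\sqrt{\frac{2}{\beta -\alpha}g_{\alpha \beta}(x,\lambda )}\). A hedged numerical sketch, assuming the constants L, μ, ν are known (in practice they would have to be estimated):

```python
import math

# Sketch of the global error bound from Theorem 5: given the constants from
# assumptions (b)-(c) and the uniform strong monotonicity hypothesis, a
# computed D-gap value certifies a distance to sol(E[F(x, xi)], K).

def error_bound(gap_value, L, mu, nu, alpha, beta):
    # dist(x, sol) <= (L + beta) / (nu * mu) * sqrt(2 * gap_value / (beta - alpha))
    return (L + beta) / (nu * mu) * math.sqrt(2.0 * gap_value / (beta - alpha))

# Example with hypothetical constants: L = 2, mu = 0.5, nu = 0.3,
# alpha = 1, beta = 2, and D-gap value 0.045:
# error_bound(0.045, 2, 0.5, 0.3, 1, 2) == (4 / 0.15) * 0.3 == 8.0
```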
In the rest of this section, we investigate the boundedness of the level set of the D-gap function \(g_{{\alpha}{\beta}}(\cdot ,\cdot )\), which is defined by
$$ L_{g_{\alpha \beta}}(\eta ):= \bigl\{ (x,\lambda )\in \mathbb{R}^{n}\times \Lambda :g_{\alpha \beta}(x,\lambda )\leq \eta \bigr\} . $$
Corollary 1
Suppose that assumptions (a), (b), and (c) hold, and each function \(F_{j}(\cdot ,\xi )\) (\(j=1,\dots ,m\)) is monotone on K for almost every \(\xi \in \Xi \). Let \(F_{j}\) (\(j=1,\dots ,m\)) be uniformly strongly monotone on K with modulus \(\mu _{j}>0\) over \(V_{j}\subset \Xi \subset \mathbb{R}^{r}\) with \(\mathbb{P}(V_{j})>0\), \(\mu :=\min_{1\leq j\leq m}\mu _{j}\) and \(\nu :=\min_{1\leq j\leq m}\mathbb{P}(V_{j})\). Then, for any \(\eta \geq 0\), the level set \(L_{g_{{\alpha}{\beta}}}(\eta )\) is bounded.
Proof
Suppose on the contrary that there exists an \(\bar{\eta}\geq 0\) such that \(L_{g_{{\alpha}{\beta}}}(\bar{\eta})\) is unbounded. This implies that there exists a sequence \(\{(x^{k},\lambda ^{k})\}\subset L_{g_{{\alpha}{\beta}}}(\bar{\eta})\) such that
$$ \lim_{k\rightarrow \infty} \bigl\Vert \bigl(x^{k},\lambda ^{k} \bigr) \bigr\Vert =\infty . $$
Since \(\{\lambda ^{k}\}\subset \Lambda \) and Λ is a compact set, we get that
$$ \lim_{k\rightarrow \infty} \bigl\Vert x^{k} \bigr\Vert =\infty . $$
By the proof of Theorem 5, one has
$$ \bigl\Vert x^{k}-x^{*} \bigr\Vert \leq \frac{L+\beta}{\nu \mu} \bigl\Vert x^{k}-H_{\beta} \bigl(x^{k},\lambda ^{k} \bigr) \bigr\Vert , $$
where \(x^{*}\in \operatorname{sol}(\mathbb{E}[F(x,\xi )],K)\). This means that
$$ g_{\alpha \beta} \bigl(x^{k},\lambda ^{k} \bigr)\geq \frac{\beta -\alpha}{2} \bigl\Vert x^{k}-H_{\beta} \bigl(x^{k},\lambda ^{k} \bigr) \bigr\Vert ^{2}\geq \frac{\beta -\alpha}{2} \biggl(\frac{\nu \mu}{L+\beta} \biggr)^{2} \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}. $$
Then, taking the limit of the right-hand side of the above inequality and noticing that \(\operatorname{sol}(\mathbb{E}[F(x,\xi )],K)\) is compact while \(\|x^{k}\|\rightarrow \infty \), we have
$$ \lim_{k\rightarrow \infty}\frac{\beta -\alpha}{2} \biggl(\frac{\nu \mu}{L+\beta} \biggr)^{2} \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}=\infty , $$
which means that \(g_{{\alpha}{\beta}}(x^{k},\lambda ^{k})\rightarrow \infty \) as k tends to ∞. This contradicts the fact that \((x^{k},\lambda ^{k}) \in L_{g_{{\alpha}{\beta}}}(\bar{\eta})\), completing the proof. □
Remark 4
We would like to point out that the condition that each function \(F_{j}\) (\(j=1,\dots ,m\)) is Lipschitz continuous with Lipschitz constant \(L_{j}(\xi )\) used in Theorem 5 and Corollary 1 can be replaced by a requirement that K is a compact set.
We only prove the corresponding result of Theorem 5. In fact, since K is a compact set, there exists a number \(b>0\) such that \(K\subset \{x\mid \|x\|\leq b\}\). Let \(\mathfrak{B}\) be the closed ball with radius 3b, that is, \(\mathfrak{B}=\{x\mid \|x\|\leq 3b\}\). For any \(x\in \mathbb{R}^{n}\), let us consider two possible cases:
(Case (i): \(x\in \mathfrak{B}\)) Since each function \(F_{j}(\cdot ,\xi )\) (\(j=1,\dots ,m\)) is continuously differentiable, each function \(F_{j}(\cdot ,\xi )\) (\(j=1,\dots ,m\)) is Lipschitz continuous on \(\mathfrak{B}\). From the proof of Theorem 5, for all \(x\in \mathfrak{B}\), we have \(\|x-x^{*}\|\leq \frac{L+{\beta}}{\nu \mu}\|H_{{\beta}}(x,\lambda ^{*})-x \|\).
(Case (ii): \(x\notin \mathfrak{B}\)) For an arbitrary \(x\notin \mathfrak{B}\), we have \(\|x\|\geq 3b\). Since \(x^{*}\in K\), \(H_{{\beta}}(x,\lambda ^{*})\in K\) and K is a compact set, we obtain that \(\|x^{*}\|\leq b\) and \(\|H_{{\beta}}(x,\lambda ^{*})\|\leq b\). Hence, we have
$$ \bigl\Vert x-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert \geq \Vert x \Vert - \bigl\Vert H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert \geq 3b-b=2b. $$(45)
It follows from the triangle inequality that
$$ \bigl\Vert x-x^{*} \bigr\Vert \leq \bigl\Vert x-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert + \bigl\Vert H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert + \bigl\Vert x^{*} \bigr\Vert \leq \bigl\Vert x-H_{\beta} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert +2b. $$
This, together with (45), implies that \(\|x-x^{*}\|\leq 2\|x-H_{{\beta}}(x,\lambda ^{*})\|\).
Setting \(\bar{r}=\max \{\frac{L+{\beta}}{\nu \mu},2 \}\), we have \(\|x-x^{*}\|\leq \bar{r}\|x-H_{{\beta}}(x,\lambda ^{*})\|\), which plays the role of (39). Then by repeating the rest of the proof of Theorem 5, we get the desired conclusion.
Similarly, the analogue for Corollary 1 is true in the case where the set K is compact.
6 Concluding remarks
In the present paper, mainly motivated by the works [20, 32, 33], we have presented a UERM approach (i.e., problem (8)) for solving the SVVI problem (1). Several properties of the objective function θ were discussed, namely, its continuous differentiability and the boundedness of its level sets. Furthermore, the well-known sample average approximation approach was employed for solving problem (8), and the convergence of the proposed approach for global optimal solutions and stationary points was analyzed. Finally, we considered another deterministic formulation, i.e., the EV formulation for the SVVI problem, and gave the global error bound of the D-gap function \(g_{{\alpha}{\beta}}(\cdot ,\cdot )\) based on the EV formulation.
We would like to mention that the UERM approach presented in this paper is formalistic, since the projection operator onto K is still needed in the calculation of \(g_{\alpha \beta}\). Thus, in order to convert this formalistic approach into practical methods, it is interesting to investigate the following aspects:
-
(i)
How to design effective algorithms that exploit the structure of the constraint set K;
-
(ii)
The sample complexity of the SAA approach proposed in this paper;
-
(iii)
The convergence rate of the SAA approach.
Availability of data and materials
Not applicable.
References
Ansari, Q.H., Köbis, E., Yao, J.C.: Vector Variational Inequalities and Vector Optimization: Theory and Applications. Springer, Cham (2018)
Bianchi, M., Konnov, I.V., Pini, R.: Limit vector variational inequalities and market equilibrium problems. Optim. Lett. 15, 817–832 (2021)
Charitha, C., Dutta, J.: Regularized gap functions and error bounds for vector variational inequalities. Pac. J. Optim. 6, 497–510 (2010)
Charitha, C., Dutta, J., Lalitha, C.S.: Gap functions for vector variational inequalities. Optimization 64, 1499–1520 (2015)
Chen, X.J., Fukushima, M.: Expected residual minimization method for stochastic linear complementarity problems. Math. Oper. Res. 30, 1022–1038 (2005)
Daniele, P., Maugeri, A.: Vector variational inequalities and modelling of a continuum traffic equilibrium problem. In: Giannessi, F. (ed.) Vector Variational Inequalities and Vector Equilibria, Mathematical Theories, pp. 97–111. Kluwer Academic, Dordrecht (2000)
Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. I. Springer, New York (2003)
Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. II. Springer, New York (2003)
Giannessi, F.: Theorems of alternative, quadratic programs and complementarity problems. In: Cottle, R.W., Giannessi, F., Lions, J.L. (eds.) Variational Inequalities and Complementarity Problems, pp. 151–186. Wiley, New York (1980)
Giannessi, F. (ed.): Vector Variational Inequalities and Vector Equilibria: Mathematical Theories. Nonconvex Optimization and Its Applications, vol. 38. Kluwer Academic, Dordrecht (2000)
Gürkan, G., Özge, A.Y., Robinson, S.M.: Sample-path solution of stochastic variational inequalities. Math. Program. 84, 313–333 (1999)
He, S.X., Zhang, P., Hu, X., Hu, R.: A sample average approximation method based on a D-gap function for stochastic variational inequality problems. J. Ind. Manag. Optim. 10, 977–987 (2014)
Huang, N.J., Li, J., Yang, X.Q.: Weak sharpness for gap functions in vector variational inequalities. J. Math. Anal. Appl. 394, 449–457 (2012)
Jiang, H.Y., Xu, H.F.: Stochastic approximation approaches to the stochastic variational inequality problem. IEEE Trans. Autom. Control 53, 1462–1475 (2008)
Kanzow, C., Fukushima, M.: Theoretical and numerical investigation of the D-gap function for box constrained variational inequalities. Math. Program. 83, 55–87 (1998)
Lee, G.M., Kim, D.S., Lee, B.S., Yen, N.D.: Vector variational inequality as a tool for studying vector optimization problems. Nonlinear Anal. 34, 745–765 (1998)
Lee, G.M., Yen, N.D.: A result on vector variational inequalities with polyhedral constraint sets. J. Optim. Theory Appl. 109, 193–197 (2001)
Li, S.J., Chen, G.Y.: On relations between multiclass, multicriteria traffic network equilibrium models and vector variational inequalities. J. Syst. Sci. Syst. Eng. 15, 284–297 (2006)
Lin, G.H., Fukushima, M.: Stochastic equilibrium problems and stochastic mathematical programs with equilibrium constraints: a survey. Pac. J. Optim. 6, 455–482 (2010)
Liu, J.X., Li, S.J.: Unconstrained optimization reformulation for stochastic nonlinear complementarity problems. Appl. Anal. 100, 1158–1179 (2021)
Lu, F., Li, S.J.: Method of weighted expected residual for solving stochastic variational inequality problems. Appl. Math. Comput. 269, 651–663 (2015)
Lu, F., Li, S.J., Yang, J.: Convergence analysis of weighted expected residual method for nonlinear stochastic variational inequality problems. Math. Methods Oper. Res. 82, 229–242 (2015)
Luo, M.J., Lin, G.H.: Expected residual minimization method for stochastic variational inequality problems. J. Optim. Theory Appl. 140, 103–116 (2009)
Luo, M.J., Lin, G.H.: Convergence results of ERM method for nonlinear stochastic variational inequality problems. J. Optim. Theory Appl. 142, 569–581 (2009)
Luo, M.J., Lin, G.H.: Stochastic variational inequality problems with additional constraints and their applications in supply chain network equilibria. Pac. J. Optim. 7, 263–279 (2011)
Luo, M.J., Zhang, K.: Convergence analysis of the approximation problems for solving stochastic vector variational inequality problems. Complexity 2020, Article ID 1203627 (2020)
Ma, H.Q., Huang, N.J., Wu, M., O'Regan, D.: A new gap function for vector variational inequalities with an application. J. Appl. Math. 2013, Article ID 423040 (2013)
Ma, H.Q., Wu, M., Huang, N.J.: Expected residual minimization method for stochastic variational inequality problems with nonlinear perturbations. Appl. Math. Comput. 219, 6256–6267 (2013)
Niederreiter, H.: Random Number Generation and Quasi-Monte Carlo Methods. SIAM, Philadelphia (1992)
Billingsley, P.: Probability and Measure. Wiley, New York (1995)
Shapiro, A., Xu, H.F.: Stochastic mathematical programs with equilibrium constraints, modelling and sample average approximation. Optimization 57, 395–418 (2008)
Yamashita, N., Taji, K., Fukushima, M.: Unconstrained optimization reformulations of variational inequality problems. J. Optim. Theory Appl. 92, 439–456 (1997)
Zhao, Y., Zhang, J., Yang, X.M., Lin, G.H.: Expected residual minimization formulation for a class of stochastic vector variational inequalities. J. Optim. Theory Appl. 175, 545–566 (2017)
Acknowledgements
The authors are grateful to the editor and referee for their valuable comments and suggestions.
Funding
This work was supported by National Natural Science Foundation of China (No. 11961006), Guangxi Natural Science Foundation (2020GXNSFAA159100) and Innovation Project of Guangxi Graduate Education (gxun-chxs2021055).
Author information
Contributions
Dan-Dan Dong contributed to methodology and writing-original draft; Guo-ji Tang contributed to conceptualization, methodology and supervision; Hui-ming Qiu contributed to the revision of the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Cite this article
Dong, Dd., Tang, Gj. & Qiu, Hm. On the unconstrained optimization reformulations for a class of stochastic vector variational inequality problems. J Inequal Appl 2023, 97 (2023). https://doi.org/10.1186/s13660-023-03011-2
Keywords
- Stochastic vector variational inequality
- D-gap function
- Sample average approximation
- Convergence
- Error bound