
On the unconstrained optimization reformulations for a class of stochastic vector variational inequality problems

Abstract

In this paper, a class of stochastic vector variational inequality (SVVI) problems is considered. By employing the idea of a D-gap function, the SVVI problem is reformulated as a deterministic model, namely an unconstrained expected residual minimization (UERM) problem, whereas it was reformulated as a constrained expected residual minimization problem in the work of Zhao et al. The properties of the objective function are then investigated, and a sample average approximation approach is proposed for solving the UERM problem. Convergence of the proposed approach for global optimal solutions and stationary points is analyzed. Moreover, we consider another deterministic formulation, i.e., the expected value (EV) formulation, for an SVVI problem, and the global error bound of a D-gap function based on the EV formulation is given.

1 Introduction

It is well known that the vector variational inequality (VVI) is an effective tool for studying vector optimization problems [16]. The concept of VVI was originally introduced in finite-dimensional Euclidean spaces by Giannessi [9]. Since then, the VVI problem has been extensively considered in studying some related problems, such as vector equilibrium [10], traffic network equilibrium [6, 18], market equilibrium [2], and so on. Since we often encounter uncertain problems in the real world, it is meaningful to consider the stochastic version of a vector variational inequality.

Let \((\Omega ,\mathcal{F},\mathbb{P})\) be a probability space, \(K\subseteq \mathbb{R}^{n}\) be a nonempty, closed and convex set, and \(\xi (\omega ):\Omega \rightarrow \Xi \) be a random vector supported on a closed set \(\Xi \subset \mathbb{R}^{r}\). The stochastic vector variational inequality (SVVI) problem is to find \(x^{*}\in K\) such that

$$\begin{aligned}& \bigl( \bigl(y-x^{*} \bigr)^{\top}F_{1} \bigl(x^{*},\xi (\omega ) \bigr),\dots , \bigl(y-x^{*} \bigr)^{ \top}F_{m} \bigl(x^{*},\xi (\omega ) \bigr) \bigr) \\& \quad \notin \operatorname{-int} \mathbb{R}^{m}_{+}, \quad \forall y\in K, \text{a.s. } \xi (\omega )\in \Xi , \end{aligned}$$
(1)

where the vector-valued functions \(F_{j}:\mathbb{R}^{n}\times \mathbb{R}^{r}\rightarrow \mathbb{R}^{n}\) (\(j=1,\dots ,m\)) depend on the random vector and “a.s.” is the abbreviation for “almost surely” under the given probability measure. For convenience, we denote \(\xi (\omega )\) by ξ and \(F:=(F_{1},\dots ,F_{m})\). It is easy to see that problem (1) includes some models as special cases, for example:

  1. (i)

    If Ω contains only a single realization, then SVVI (1) reduces to a VVI problem, that is, to find a vector \(x^{*}\in K\) such that

    $$ \bigl( \bigl(y-x^{*} \bigr)^{\top} F_{1} \bigl(x^{*} \bigr),\dots , \bigl(y-x^{*} \bigr)^{\top} F_{m} \bigl(x^{*} \bigr) \bigr) \notin \operatorname{-int} \mathbb{R}_{+}^{m},\quad \forall y\in K. $$

    The interested reader is referred to the monographs [1, 10] and some papers [3, 4, 13, 16, 17, 27] for more details on VVI problems.

  2. (ii)

    When \(m=1\), the SVVI (1) reduces to a stochastic variational inequality (SVI) problem, that is, to find a vector \(x^{*}\in K\) such that

    $$ \bigl(y-x^{*} \bigr)^{\top} F_{1} \bigl(x^{*},\xi \bigr)\geq 0,\quad \forall y\in K, \text{ a.s.} \xi \in \Xi . $$

    For more details on SVI problems, please refer to some papers, for example [11, 12, 19, 21–25, 28].

  3. (iii)

    When Ω contains only a single realization and \(m=1\), the SVVI (1) reduces to a classical variational inequality (VI) problem, that is, to find a vector \(x^{*}\in K\) such that

    $$ \bigl(y-x^{*} \bigr)^{\top} F_{1} \bigl(x^{*} \bigr)\geq 0, \quad \forall y\in K. $$
    (2)

    For VI problems, please refer to, for example, the monographs [7, 8].

In the past decades, much attention has been paid to the study of SVI problems. An important issue in this field is how to solve them. Gürkan–Özge–Robinson [11] proposed a sample-path approach, also called the expected value (EV) approach, for dealing with the SVI problem, in which the SVI is converted into a deterministic VI problem. After that, Jiang–Xu [14] reformulated the SVI as an optimization problem based on the EV formulation and studied the global convergence of the presented approach. He et al. [12] utilized a sample average approximation approach for dealing with the SVI based on the EV formulation. In addition, another approach, called expected residual minimization (ERM), was presented by Chen–Fukushima [5] for solving a stochastic linear complementarity problem. Luo–Lin [23, 24] extended the ERM approach to SVI problems in which the functions \(F(x,\xi )\) are affine and nonlinear, respectively. Later, Ma–Wu–Huang [28] applied the ERM approach to a stochastic affine variational inequality problem with nonlinear perturbations based on the work of Luo–Lin [23]. Lu–Li [21] and Lu–Li–Yang [22] introduced a new formulation, called weighted expected residual minimization (WERM), for solving the SVI problem based on the works of Ma–Wu–Huang [28] and Luo–Lin [24], respectively.

In the recent literature, there are few studies about SVVI problems [26, 27, 33]. Zhao et al. [33] applied the ERM approach for solving the SVVI problem, which generalized some results of Luo–Lin in [23, 24] from an SVI problem to an SVVI problem. There, the SVVI is converted into a constrained optimization problem. We would like to point out that it is interesting to propose an unconstrained optimization formulation to deal with the SVVI problem. By using the idea of the D-gap function, we propose an unconstrained optimization reformulation for a class of SVVI problems. For more details on the D-gap function, please refer to [12, 15, 20, 32], for instance.

The remainder of this paper is structured as follows: in Sect. 2, we recall some fundamental results that will be used in the following sections. In Sect. 3, we investigate the properties of the objective function θ, i.e., its continuous differentiability and the boundedness of its level sets. In Sect. 4, we use the SAA approach to approximate the expected value of the objective function and investigate the convergence of the SAA approach for global optimal solutions and stationary points. In Sect. 5, another deterministic formulation, i.e., the EV formulation for the SVVI problem, is considered, and the global error bound of the D-gap function \(g_{\alpha \beta}(\cdot ,\cdot )\) based on the EV formulation is given.

2 Preliminaries

Recently, Zhao et al. [33, Eq. (6)] presented an equivalent scalar variational inequality (with certain constraints) formulation of the SVVI problem, which is to find \((x^{*},\lambda ^{*})\in K\times \Lambda \) such that

$$ \bigl(y-x^{*} \bigr)^{\top}\sum _{j=1}^{m}\lambda _{j}^{*}F_{j} \bigl(x^{*}, \xi \bigr)\geq 0,\quad \forall y\in K, \text{a.s.} \xi \in \Xi , $$
(3)

where \(\Lambda := \{\lambda \in \mathbb{R}^{m}:\lambda _{j}\geq 0,\sum_{j=1}^{m} \lambda _{j}=1 \}\).

In general, owing to the presence of the random variable \(\xi \in \Xi \), problem (3) may have no solution satisfying the inequality for almost every \(\xi \in \Xi \). It is therefore particularly significant to give a reasonable deterministic reformulation of problem (3).

Following the ideas of Yamashita–Taji–Fukushima [32, Eq. (6)] and Zhao et al. [33, Eq. (7)], we introduce the D-gap function \(g_{\alpha \beta}:\mathbb{R}^{n}\times \Lambda \times \Xi \rightarrow [0,\infty )\) for SVVI (3) as follows:

$$ g_{\alpha \beta}(x,\lambda ,\xi ):=g_{\alpha}(x,\lambda ,\xi )-g_{ \beta}(x,\lambda ,\xi ),\quad x\in \mathbb{R}^{n}, \lambda \in \Lambda , \text{a.s. } \xi \in \Xi , $$
(4)

where \(0<\alpha <\beta \) and \(g_{\gamma}\) (\(\gamma =\alpha ,\beta \)): \(\mathbb{R}^{n}\times \Lambda \times \Xi \rightarrow \mathbb{R}\) is the regularized gap function of Zhao et al. [33, Eq. (7)], which is defined as

$$ g_{\gamma}(x,\lambda ,\xi ):=\max_{y\in K} \Biggl\{ (x-y)^{\top} \sum_{j=1}^{m}\lambda _{j}F_{j}(x,\xi )-\frac{\gamma}{2} \Vert x-y \Vert ^{2} \Biggr\} . $$
(5)

It is easy to see that

$$ g_{\gamma}(x,\lambda ,\xi )= \bigl(x-H_{\gamma}(x,\lambda ,\xi ) \bigr)^{\top} \sum_{j=1}^{m}\lambda _{j}F_{j}(x,\xi )-\frac{\gamma}{2} \bigl\Vert x-H_{ \gamma}(x,\lambda ,\xi ) \bigr\Vert ^{2}, $$
(6)

where

$$ H_{\gamma}(x,\lambda ,\xi ):=\operatorname{Proj}_{K} \Biggl(x-\gamma ^{-1} \sum_{j=1}^{m} \lambda _{j}F_{j}(x,\xi ) \Biggr). $$
(7)
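To make the construction concrete, here is a minimal numerical sketch of (4)–(7); the box constraint \(K=[0,1]^{2}\) (so that \(\operatorname{Proj}_{K}\) is a componentwise clip), the two affine maps standing in for \(F_{j}(x,\xi )\), and all numerical values are our own illustrative assumptions, not data from the paper.

```python
import numpy as np

# box K = [0, 1]^2, so Proj_K is a componentwise clip (illustrative assumption)
lo, hi = 0.0, 1.0

def proj_K(z):
    return np.clip(z, lo, hi)

# two hypothetical affine maps standing in for F_j(x, xi), j = 1, 2
A = [np.array([[3.0, 1.0], [1.0, 2.0]]), np.eye(2)]
b = [np.array([1.0, -1.0]), np.array([0.5, 0.5])]

def F(x, xi):
    return [Aj @ x + xi * bj for Aj, bj in zip(A, b)]

def H(gamma, x, lam, xi):
    # H_gamma(x, lam, xi) = Proj_K(x - gamma^{-1} sum_j lam_j F_j(x, xi)), cf. (7)
    Phi = sum(l * f for l, f in zip(lam, F(x, xi)))
    return proj_K(x - Phi / gamma)

def g(gamma, x, lam, xi):
    # regularized gap function g_gamma via the closed form (6)
    Phi = sum(l * f for l, f in zip(lam, F(x, xi)))
    d = x - H(gamma, x, lam, xi)
    return d @ Phi - 0.5 * gamma * d @ d

def g_ab(alpha, beta, x, lam, xi):
    # D-gap function g_{alpha beta} = g_alpha - g_beta, cf. (4)
    return g(alpha, x, lam, xi) - g(beta, x, lam, xi)

x, lam, xi = np.array([0.3, 1.7]), np.array([0.6, 0.4]), 0.8
print(g_ab(0.5, 2.0, x, lam, xi))  # nonnegative; zero exactly at solutions of (3)
```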

In what follows, we assume that the D-gap function \(g_{\alpha \beta}(x,\lambda ,\cdot )\) is integrable on Ξ for each \((x,\lambda )\).

Remark 1

  1. (i)

    If \(m=1\) and \(K=\mathbb{R}^{n}_{+}\), then the D-gap function \(g_{\alpha \beta}\) reduces to the corresponding gap function [20, Eq. (3)] for the stochastic complementarity problem of Liu–Li [20];

  2. (ii)

    If Ω involves only a single realization and \(m=1\), then the D-gap function \(g_{\alpha \beta}\) is the same as the gap function [32, Eq. (6)] considered by Yamashita–Taji–Fukushima [32] with \(\phi (x,y)=(1/2)\|x-y\|^{2}\).

In what follows, similar to Liu–Li [20, Eq. (6)], we present an unconstrained ERM (UERM) formulation for problem (3):

$$ \min_{x\in \mathbb{R}^{n},\lambda \in \Lambda} \theta (x, \lambda ):=\mathbb{E} \bigl[g_{\alpha \beta}(x,\lambda ,\xi ) \bigr] , $$
(8)

where \(\mathbb{E}\) denotes the mathematical expectation with respect to the law of \(\xi \in \Xi \).

Following the work of Zhao et al. [33, p. 551], we adopt the following assumptions, which will be used in the sequel:

  1. (a)

    For almost every \(\xi \in \Xi \) and each \(j=1,\dots ,m\), the function \(F_{j}(\cdot ,\xi )\) is continuously differentiable on \(\mathbb{R}^{n}\).

  2. (b)

    There exists an integrable function \(\kappa (\xi )\) such that

    $$ \mathbb{E} \bigl[\kappa ^{2}(\xi ) \bigr]< +\infty \quad \text{and} \quad \sum_{j=1}^{m} \bigl\Vert F_{j}(x, \xi ) \bigr\Vert +\sum_{j=1}^{m} \bigl\Vert \nabla _{x}F_{j}(x, \xi ) \bigr\Vert _{\mathcal{F}}\leq \kappa (\xi ), $$

    holds a.s. for any \(x\in \mathbb{R}^{n}\). Here the Frobenius norm \(\|\cdot \|_{\mathcal{F}}\) of a matrix \(A=(a_{ij})\in \mathbb{R}^{m\times n}\) is defined by \(\|A\|_{\mathcal{F}}= (\sum_{i=1}^{m}\sum_{j=1}^{n}a_{ij}^{2} )^{\frac{1}{2}}\).

  3. (c)

    For each \(j=1,\dots ,m\), the function \(F_{j}(\cdot ,\xi )\) is Lipschitz continuous on \(\mathbb{R}^{n}\) with Lipschitz constant \(L_{j}(\xi )\) satisfying \(\mathbb{E}[L^{2}_{j}(\xi )]<+\infty \), i.e.,

    $$ \bigl\Vert F_{j}(y,\xi )-F_{j}(x,\xi ) \bigr\Vert \leq L_{j}(\xi ) \Vert y-x \Vert , \quad \forall x,y \in \mathbb{R}^{n}. $$

    Moreover, we set \(L:=\max_{1\leq j\leq m} \mathbb{E} [L_{j}(\xi )]\).

Let us recall some well-known concepts and lemmas, which will be frequently used in the sequel.

Definition 1

Let \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) be a mapping.

  1. (i)

    It is said to be monotone on \(\mathbb{R}^{n}\) if for any \(x,y\in \mathbb{R}^{n}\),

    $$ \bigl(F(y)-F(x) \bigr)^{\top}(y-x)\geq 0; $$
  2. (ii)

    It is said to be strongly monotone on \(\mathbb{R}^{n}\) with modulus \(\sigma >0\) if for any \(x,y\in \mathbb{R}^{n}\),

    $$ \bigl(F(y)-F(x) \bigr)^{\top}(y-x)\geq \sigma \Vert y-x \Vert ^{2}. $$

Definition 2

([14])

Let \(F: \mathbb{R}^{n} \times \mathbb{R}^{r} \rightarrow \mathbb{R}^{n}\) be a mapping and \(K\subseteq \mathbb{R}^{n}\), \(V\subseteq \mathbb{R}^{r}\). Then F is said to be uniformly strongly monotone on K with modulus \(\mu > 0\) over V, if for almost every \(\xi \in V\) and any \(x,y \in K\),

$$ \bigl(F(y,\xi )-F(x,\xi ) \bigr)^{\top}(y-x)\geq \mu \Vert y-x \Vert ^{2}. $$

Lemma 1

([7, Theorem 2.3.3])

Let \(K\subseteq \mathbb{R}^{n}\) be a nonempty, closed and convex set and let \(F:K\rightarrow \mathbb{R}^{n}\) be continuous. If F is strongly monotone on K, then \(\mathrm{VI} (K,F)\) has a unique solution.

Lemma 2

([8, Theorem 10.2.3])

Let \(K\subseteq \mathbb{R}^{n}\) be a nonempty, closed and convex set and let \(F:K\rightarrow \mathbb{R}^{n}\) be continuous. The following statements are valid for VI (2):

  1. (a)

    For every \(x\in K\), the maximum in the definition of \(\theta _{\alpha}(x)\) (given below) is attained at the unique point \(y_{\alpha}(x)=\Pi _{K}(x-\alpha ^{-1}F(x))\).

  2. (b)

    \(\theta _{\alpha}\) is continuous and nonnegative on K.

  3. (c)

    A point \(x\in K\) satisfies \(\theta _{\alpha}(x)=0\) if and only if x solves VI (2),

where \(\theta _{\alpha}(x)\) is the regularized gap function for problem (2), which is defined as \(\theta _{\alpha}(x)=\max_{y\in K}\{F(x)^{\top}(x-y) - \frac{\alpha}{2}\|x-y\|^{2}\}\).

Lemma 3

([30, Theorem 16.8])

Suppose that \(f(x,\xi )\) is a measurable and integrable function of ξ for each x in \((a,b)\). Let \(\phi (x)=\int f(x,\xi )\mu (d\xi )\). Suppose that for \(\xi \in \mathcal{A}\), where \(\mathcal{A}\) satisfies \(\mu ( \Omega - \mathcal{A}) =0\), \(f(x,\xi )\) has in \((a,b)\) a derivative \(f^{\prime}(x,\xi )\); suppose further that \(|f^{\prime}(x,\xi )| \leq g(\xi )\) for \(\xi \in \mathcal{A}\) and \(x\in (a,b)\), where g is integrable. Then \(\phi (x)\) has derivative \(\int f^{\prime}(x,\xi ) \mu (d\xi )\) on \((a,b)\).

3 Properties of the objective function θ

In this section, we first investigate the continuous differentiability of the objective function θ. To this end, we give some fundamental lemmas.

Lemma 4

For any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \) and \(\xi \in \Xi \), it holds that

$$ \bigl\Vert x-H_{\gamma}(x,\lambda ,\xi ) \bigr\Vert \leq \frac{2}{\gamma}\sum_{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert . $$
(9)

Proof

We have

$$ \begin{aligned} \frac{\gamma}{2} \bigl\Vert x-H_{\gamma}(x, \lambda ,\xi ) \bigr\Vert ^{2}& \leq \bigl(x-H_{\gamma}(x, \lambda ,\xi ) \bigr)^{\top}\sum_{j=1}^{m} \lambda _{j}F_{j}(x,\xi ) \\ &\leq \bigl\Vert x-H_{\gamma}(x,\lambda ,\xi ) \bigr\Vert \sum _{j=1}^{m} \lambda _{j} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert \\ &\leq \bigl\Vert x-H_{\gamma}(x,\lambda ,\xi ) \bigr\Vert \sum _{j=1}^{m} \bigl\Vert F_{j}(x, \xi ) \bigr\Vert , \end{aligned} $$

where the first inequality follows from the nonnegativity of the regularized gap function \(g_{\gamma}\), the second inequality follows from the Cauchy–Schwarz inequality, and the last inequality follows from the fact that \(\lambda _{j}\in [0,1]\). Thus, we get the desired conclusion. This completes the proof. □

Lemma 5

Let \(0<\alpha <\beta \). For any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \) and \(\xi \in \Xi \), one has

$$ \bigl\Vert H_{\beta}(x,\lambda ,\xi )-H_{\alpha}(x, \lambda ,\xi ) \bigr\Vert \leq \bigl( \alpha ^{-1}-\beta ^{-1} \bigr)\sum_{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert . $$
(10)

Proof

It holds that

$$ \begin{aligned} & \bigl\Vert H_{\beta}(x,\lambda ,\xi )-H_{\alpha}(x,\lambda ,\xi ) \bigr\Vert \\ &\quad = \Biggl\Vert \operatorname{Proj}_{K} \Biggl(x-\beta ^{-1}\sum_{j=1}^{m} \lambda _{j}F_{j}(x, \xi ) \Biggr)-\operatorname{Proj}_{K} \Biggl(x-\alpha ^{-1} \sum_{j=1}^{m} \lambda _{j}F_{j}(x, \xi ) \Biggr) \Biggr\Vert \\ &\quad \leq \Biggl\Vert \Biggl(x-\beta ^{-1}\sum _{j=1}^{m}\lambda _{j}F_{j}(x, \xi ) \Biggr)- \Biggl(x-\alpha ^{-1}\sum_{j=1}^{m} \lambda _{j}F_{j}(x, \xi ) \Biggr) \Biggr\Vert \\ &\quad \leq \bigl(\alpha ^{-1}-\beta ^{-1} \bigr)\sum _{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert , \end{aligned} $$

where the first inequality follows from the nonexpansivity property of the projection operator \(\operatorname{Proj}_{K}\), and the second inequality follows from the triangle inequality of the norm and the fact that \(\lambda _{j}\in [0,1]\). This completes the proof. □

Remark 2

If \(m=1\) and \(K=\mathbb{R}^{n}_{+}\), then Lemma 5 reduces to [20, Lemma 2.1] due to Liu–Li.

Theorem 1

Suppose that assumptions (a) and (b) hold. Then, one has

  1. (i)

    \(g_{\alpha \beta}(\cdot ,\cdot ,\xi )\) is continuously differentiable with respect to \((x,\lambda )\) for any \(\xi \in \Xi \);

  2. (ii)

    \(\theta (\cdot ,\cdot )\) is continuously differentiable with respect to \((x,\lambda )\) and

    $$ \nabla \theta (x,\lambda )=\mathbb{E} \bigl[{\nabla}_{(x,\lambda )}g_{ \alpha \beta}(x, \lambda ,\xi ) \bigr]. $$

Proof

(i) Since the functions \(F_{j}(\cdot ,\xi )\) (\(j=1,\dots ,m\)) are continuously differentiable with respect to x on \(\mathbb{R}^{n}\) by assumption (a), it follows from item (c) of Lemma 2 that the D-gap function \(g_{\alpha \beta}(\cdot ,\cdot ,\xi )\) is continuously differentiable with respect to \((x,\lambda )\) on \(\mathbb{R}^{n}\times \Lambda \) for any \(\xi \in \Xi \).

(ii) From item (c) of Lemma 2, we have

$$\begin{aligned} &\nabla _{(x,\lambda )}g_{\alpha \beta}(x,\lambda ,\xi ) \\ &\quad = \begin{pmatrix} \sum_{j=1}^{m}\lambda _{j}\nabla _{x}F_{j}(x,\xi )(H_{\beta}(x, \lambda ,\xi )-H_{\alpha}(x,\lambda ,\xi ))+\alpha (H_{\alpha}(x, \lambda ,\xi )-x)-\beta (H_{\beta}(x,\lambda ,\xi )-x) \\ (H_{\beta}(x,\lambda ,\xi )-H_{\alpha}(x,\lambda ,\xi ))^{\top }F_{1}(x, \xi ) \\ \vdots \\ (H_{\beta}(x,\lambda ,\xi )-H_{\alpha}(x,\lambda ,\xi ))^{\top }F_{m}(x, \xi ) \end{pmatrix}. \end{aligned}$$
(11)

Hence, we obtain that

$$\begin{aligned}& \bigl\Vert \nabla _{(x,\lambda )}g_{\alpha \beta}(x, \lambda ,\xi ) \bigr\Vert \\& \quad \leq \Biggl\Vert \sum_{j=1}^{m} \lambda _{j}\nabla _{x}F_{j}(x, \xi ) \bigl(H_{\beta}(x,\lambda ,\xi )-H_{\alpha}(x,\lambda ,\xi ) \bigr)+ \alpha \bigl(H_{ \alpha}(x,\lambda ,\xi )-x \bigr) \\& \qquad {} -\beta \bigl(H_{\beta}(x,\lambda ,\xi )-x \bigr) \Biggr\Vert +\sum_{j=1}^{m} \bigl\Vert F_{j}(x, \xi ) \bigl(H_{\beta}(x,\lambda ,\xi )-H_{\alpha}(x,\lambda ,\xi ) \bigr) \bigr\Vert \\& \quad \leq \sum_{j=1}^{m} \bigl\Vert \nabla _{x}F_{j}(x,\xi ) \bigr\Vert \bigl\Vert H_{\beta}(x, \lambda ,\xi )-H_{\alpha}(x,\lambda ,\xi ) \bigr\Vert +\alpha \bigl\Vert H_{\alpha}(x, \lambda ,\xi )-x \bigr\Vert \\& \qquad {} +\beta \bigl\Vert H_{\beta}(x,\lambda ,\xi )-x \bigr\Vert + \sum_{j=1}^{m} \bigl\Vert F_{j}(x, \xi ) \bigr\Vert \bigl\Vert H_{\beta}(x,\lambda , \xi )-H_{\alpha}(x,\lambda , \xi ) \bigr\Vert , \end{aligned}$$
(12)

where the last inequality follows from the triangle and Cauchy–Schwarz inequalities, as well as the fact that \(\lambda _{j}\in [0,1]\). Taking \(\gamma =\alpha ,\beta \) in Lemma 4, we get

$$ \bigl\Vert x-H_{\alpha}(x,\lambda ,\xi ) \bigr\Vert \leq \frac{2}{\alpha}\sum_{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert $$
(13)

and

$$ \bigl\Vert x-H_{\beta}(x,\lambda ,\xi ) \bigr\Vert \leq \frac{2}{\beta}\sum_{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert . $$
(14)

Thus, we obtain

$$\begin{aligned}& \bigl\Vert \nabla _{(x,\lambda )}g_{\alpha \beta}(x, \lambda ,\xi ) \bigr\Vert \\& \quad \leq \Biggl(\sum_{j=1}^{m} \bigl\Vert \nabla _{x}F_{j}(x,\xi ) \bigr\Vert + \sum _{j=1}^{m} \bigl\Vert F_{j}(x, \xi ) \bigr\Vert \Biggr) \bigl( \bigl\Vert H_{\beta}(x, \lambda ,\xi )-H_{\alpha}(x,\lambda ,\xi ) \bigr\Vert \bigr) \\& \qquad {} +\alpha \bigl\Vert H_{\alpha}(x,\lambda ,\xi )-x \bigr\Vert +\beta \bigl\Vert H_{\beta}(x, \lambda ,\xi )-x \bigr\Vert \\& \quad \leq \bigl(\alpha ^{-1}-\beta ^{-1} \bigr)\sum _{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert \Biggl(\sum_{j=1}^{m} \bigl\Vert \nabla _{x}F_{j}(x,\xi ) \bigr\Vert +\sum _{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert \Biggr)+4\sum_{j=1}^{m} \bigl\Vert F_{j}(x, \xi ) \bigr\Vert \\& \quad = \Biggl(4+ \bigl(\alpha ^{-1}-\beta ^{-1} \bigr) \Biggl(\sum_{j=1}^{m} \bigl\Vert \nabla _{x}F_{j}(x,\xi ) \bigr\Vert +\sum _{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert \Biggr) \Biggr)\sum_{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert , \end{aligned}$$

where the first inequality follows from (12) and the second inequality follows from (13), (14), and (10) in Lemma 5. This bound, together with assumption (b) and Lemma 3, shows that θ is continuously differentiable and that \(\nabla \theta (x,\lambda )=\mathbb{E}[{\nabla}_{(x,\lambda )}g_{ \alpha \beta}(x,\lambda ,\xi )]\). □
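As a numerical sanity check of the gradient formula (11), the following sketch compares (11) with central finite differences of \(g_{\alpha \beta}\); the affine maps \(F_{j}\) (for which \(\nabla _{x}F_{j}(x,\xi )=A_{j}^{\top}\) is constant), the box constraint K, and all constants are hypothetical choices of ours.

```python
import numpy as np

lo, hi = 0.0, 1.0                # box K = [0, 1]^2, Proj_K = clip (assumption)
al, be = 0.5, 2.0                # 0 < alpha < beta
A = [np.array([[3.0, 1.0], [1.0, 2.0]]), np.eye(2)]
b = [np.array([1.0, -1.0]), np.array([0.5, 0.5])]

def F(x, xi):
    return [Aj @ x + xi * bj for Aj, bj in zip(A, b)]

def H(gamma, x, lam, xi):
    Phi = sum(l * f for l, f in zip(lam, F(x, xi)))
    return np.clip(x - Phi / gamma, lo, hi)

def g_ab(x, lam, xi):
    # D-gap function assembled from the closed form (6)
    Phi = sum(l * f for l, f in zip(lam, F(x, xi)))
    out = 0.0
    for gamma, sign in ((al, 1.0), (be, -1.0)):
        d = x - H(gamma, x, lam, xi)
        out += sign * (d @ Phi - 0.5 * gamma * d @ d)
    return out

x, lam, xi = np.array([0.3, 1.7]), np.array([0.6, 0.4]), 0.8
Ha, Hb = H(al, x, lam, xi), H(be, x, lam, xi)

# analytic gradient from (11); for affine F_j, nabla_x F_j(x, xi) = A_j^T
gx = sum(l * Aj.T for l, Aj in zip(lam, A)) @ (Hb - Ha) \
     + al * (Ha - x) - be * (Hb - x)
gl = np.array([(Hb - Ha) @ f for f in F(x, xi)])

# central finite differences of g_ab in x and in lambda
eps = 1e-6
fd_x = np.array([(g_ab(x + eps * e, lam, xi) - g_ab(x - eps * e, lam, xi))
                 / (2 * eps) for e in np.eye(2)])
fd_l = np.array([(g_ab(x, lam + eps * e, xi) - g_ab(x, lam - eps * e, xi))
                 / (2 * eps) for e in np.eye(2)])
print(np.allclose(gx, fd_x, atol=1e-4), np.allclose(gl, fd_l, atol=1e-4))
```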

Next, we investigate the boundedness of the level set of the objective function θ. The level set of θ is defined by

$$ L_{\theta}(c):= \bigl\{ (x,\lambda )\in \mathbb{R}^{n}\times \Lambda : \theta (x,\lambda )\leq c \bigr\} . $$

Lemma 6

Let \(0<\alpha <\beta \). For any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \) and \(\xi \in \Xi \), one has

$$ \frac{\beta -\alpha}{2} \bigl\Vert x-H_{\beta}(x,\lambda ,\xi ) \bigr\Vert ^{2}\leq g_{ \alpha \beta}(x,\lambda ,\xi )\leq \frac{\beta -\alpha}{2} \bigl\Vert x-H_{ \alpha}(x,\lambda ,\xi ) \bigr\Vert ^{2}. $$
(15)

Proof

We only prove the first inequality of (15). By the definition of D-gap function \(g_{\alpha \beta}\) and Lemma 2, we have

$$\begin{aligned} g_{\alpha \beta}(x,\lambda ,\xi ) =& \bigl(x-H_{\alpha}(x, \lambda ,\xi ) \bigr)^{\top}\sum_{j=1}^{m} \lambda _{j}F_{j}(x, \xi )- \frac{\alpha}{2} \bigl\Vert x-H_{\alpha}(x,\lambda ,\xi ) \bigr\Vert ^{2} \\ &{} - \bigl(x-H_{\beta}(x,\lambda ,\xi ) \bigr)^{\top}\sum _{j=1}^{m} \lambda _{j}F_{j}(x, \xi )+\frac{\beta}{2} \bigl\Vert x-H_{\beta}(x,\lambda , \xi ) \bigr\Vert ^{2} \\ \geq & \bigl(x-H_{\beta}(x,\lambda ,\xi ) \bigr)^{\top}\sum _{j=1}^{m} \lambda _{j}F_{j}(x, \xi )-\frac{\alpha}{2} \bigl\Vert x-H_{\beta}(x,\lambda , \xi ) \bigr\Vert ^{2} \\ &{} - \bigl(x-H_{\beta}(x,\lambda ,\xi ) \bigr)^{\top}\sum _{j=1}^{m} \lambda _{j}F_{j}(x, \xi )+\frac{\beta}{2} \bigl\Vert x-H_{\beta}(x,\lambda , \xi ) \bigr\Vert ^{2} \\ =& \frac{\beta -\alpha}{2} \bigl\Vert x-H_{\beta}(x,\lambda ,\xi ) \bigr\Vert ^{2}. \end{aligned}$$

In a similar way, we obtain the second inequality of (15). This completes the proof. □
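The sandwich inequality (15) is easy to probe numerically. The short sketch below checks it at random points, using the same illustrative toy data as above (our assumptions, not the paper's).

```python
import numpy as np

lo, hi = 0.0, 1.0
al, be = 0.5, 2.0
A = [np.array([[3.0, 1.0], [1.0, 2.0]]), np.eye(2)]
b = [np.array([1.0, -1.0]), np.array([0.5, 0.5])]

def H_and_g(gamma, x, lam, xi):
    # returns H_gamma from (7) and g_gamma from the closed form (6)
    Fx = [Aj @ x + xi * bj for Aj, bj in zip(A, b)]
    Phi = sum(l * f for l, f in zip(lam, Fx))
    Hg = np.clip(x - Phi / gamma, lo, hi)
    d = x - Hg
    return Hg, d @ Phi - 0.5 * gamma * d @ d

rng = np.random.default_rng(0)
for _ in range(1000):
    x = 3.0 * rng.normal(size=2)
    lam = rng.random(2)
    lam /= lam.sum()                      # a random point of the simplex Lambda
    xi = rng.normal()
    Ha, ga = H_and_g(al, x, lam, xi)
    Hb, gb = H_and_g(be, x, lam, xi)
    lower = 0.5 * (be - al) * np.linalg.norm(x - Hb) ** 2
    upper = 0.5 * (be - al) * np.linalg.norm(x - Ha) ** 2
    assert lower - 1e-9 <= ga - gb <= upper + 1e-9
print("the bounds (15) hold at 1000 random points")
```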

Theorem 2

If K is a compact set, then, for any \(c\geq 0\), the level set \(L_{\theta}(c)\) of the objective function θ is bounded.

Proof

Suppose on the contrary that there exists a \(\bar{c}\geq 0\) such that \(L_{\theta}(\bar{c})\) is unbounded. This implies that there exists a sequence \(\{(x^{k},\lambda ^{k})\}\subset L_{\theta}(\bar{c})\) such that

$$ \lim_{k\rightarrow \infty} \bigl\Vert \bigl(x^{k},\lambda ^{k} \bigr) \bigr\Vert =+\infty . $$

Since \(\{\lambda ^{k}\}\subset \Lambda \) and Λ is a compact set, we get that

$$ \lim_{k\rightarrow \infty} \bigl\Vert x^{k} \bigr\Vert =+\infty . $$

For each k, we know that \(H_{\beta}(x^{k},\lambda ^{k},\xi )\in K\) by (7). Since the set K is compact, the sequence \(\{H_{\beta}(x^{k},\lambda ^{k},\xi )\}\) is bounded. Therefore, \(\|x^{k}-H_{\beta}(x^{k},\lambda ^{k},\xi )\|\rightarrow \infty \) as k tends to ∞.

It follows from the definition of \(\theta (x,\lambda )\) and (15) in Lemma 6 that

$$ \begin{aligned} \theta \bigl(x^{k},\lambda ^{k} \bigr)&=\mathbb{E} \bigl[g_{\alpha \beta} \bigl(x^{k}, \lambda ^{k},\xi \bigr) \bigr] \\ &\geq \mathbb{E} \biggl[\frac{\beta -\alpha}{2} \bigl\Vert x^{k}-H_{\beta} \bigl(x^{k}, \lambda ^{k},\xi \bigr) \bigr\Vert ^{2} \biggr] \\ &=\frac{\beta -\alpha}{2}\mathbb{E} \bigl[ \bigl\Vert x^{k}-H_{\beta} \bigl(x^{k},\lambda ^{k}, \xi \bigr) \bigr\Vert ^{2} \bigr]. \end{aligned} $$
(16)

Since \(\|x^{k}-H_{\beta}(x^{k},\lambda ^{k},\xi )\|\rightarrow \infty \) as k tends to ∞, we get that \(\mathbb{E}[\|x^{k}-H_{\beta}(x^{k},\lambda ^{k},\xi )\|^{2}] \rightarrow \infty \), and so, since \(0 < \alpha <\beta \),

$$ \lim_{k\rightarrow \infty} \biggl(\frac{\beta -\alpha}{2} \mathbb{E} \bigl[ \bigl\Vert x^{k}-H_{\beta} \bigl(x^{k},\lambda ^{k},\xi \bigr) \bigr\Vert ^{2} \bigr] \biggr) =\infty . $$

This, together with (16), implies that \(\theta (x^{k},\lambda ^{k})\rightarrow \infty \) as k tends to ∞. This contradicts the fact that \((x^{k},\lambda ^{k}) \in L_{\theta}(\bar{c})\), completing the proof. □

4 Convergence analysis

In this section, we utilize the sample average approximation (SAA) approach (for more about the SAA approach, please refer to, for example, [31]) when dealing with the expected value of the objective function.

Lemma 7

([29])

Let \(\Phi :\Xi \rightarrow \mathbb{R}\) be an integrable function. Then, one has

$$ \mathbb{E} \bigl[\Phi (\xi ) \bigr]=\lim_{N\rightarrow \infty} \frac{1}{N}\sum_{i=1}^{N}\Phi (\xi _{i}), \quad \mathrm{{w.p. 1}}, $$
(17)

where \(\{\xi _{1},\xi _{2},\dots ,\xi _{N}\}\) is an independent and identically distributed (i.i.d.) sample of ξ and “\(\mathrm{w.p. 1}\)” means that the convergence holds with probability one.

Consequently, from Lemma 7, the UERM problem (8) is further converted into the following SAA problem:

$$ \min_{x\in \mathbb{R}^{n},\lambda \in \Lambda} \theta _{N}(x, \lambda ):=\frac{1}{N}\sum_{i=1}^{N}g_{\alpha \beta}(x, \lambda ,\xi _{i}). $$
(18)
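The following minimal end-to-end sketch of the SAA scheme (18) is illustrative only: the toy maps \(F_{j}\), the box constraint K, the sample size \(N=200\), and the use of SciPy's SLSQP solver to handle the simplex constraint on λ are our own choices, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

lo, hi = 0.0, 1.0
al, be = 0.5, 2.0
A = [np.array([[3.0, 1.0], [1.0, 2.0]]), np.eye(2)]
b = [np.array([1.0, -1.0]), np.array([0.5, 0.5])]

def g_ab(x, lam, xi):
    # D-gap function (4) via the closed form (6) with Proj_K = clip
    Fx = [Aj @ x + xi * bj for Aj, bj in zip(A, b)]
    Phi = sum(l * f for l, f in zip(lam, Fx))
    out = 0.0
    for gamma, sign in ((al, 1.0), (be, -1.0)):
        d = x - np.clip(x - Phi / gamma, lo, hi)
        out += sign * (d @ Phi - 0.5 * gamma * d @ d)
    return out

rng = np.random.default_rng(1)
xis = rng.normal(size=200)                       # i.i.d. sample of xi, N = 200

def theta_N(z):
    # sample average objective (18); z packs (x, lambda)
    return np.mean([g_ab(z[:2], z[2:], xi) for xi in xis])

cons = [{"type": "eq", "fun": lambda z: z[2:].sum() - 1.0}]  # sum_j lambda_j = 1
bnds = [(None, None)] * 2 + [(0.0, 1.0)] * 2                 # lambda_j >= 0
res = minimize(theta_N, np.array([0.5, 0.5, 0.5, 0.5]),
               method="SLSQP", bounds=bnds, constraints=cons)
print(res.x, res.fun)      # theta_N near zero indicates an approximate solution
```

Since only λ is constrained (to the simplex Λ), one could alternatively eliminate the constraint, e.g., by a softmax parametrization of λ, making the SAA problem fully unconstrained in the spirit of (8).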

4.1 Convergence of global optimal solutions

In this subsection, we will investigate the limiting behavior of global optimal solutions. We denote by \(S^{*}\) and \(S_{N}^{*}\) the sets of optimal solutions for problems (8) and (18), respectively.

Lemma 8

For any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \), one has

$$ \theta (x,\lambda )=\lim_{N\rightarrow \infty}\theta _{N}(x, \lambda ),\quad {\mathrm{{w.p. 1.}}} $$
(19)

Proof

Since the D-gap function \(g_{\alpha \beta}(x,\lambda ,\cdot )\) is integrable on Ξ, by Lemma 7, we obtain that

$$\begin{aligned} \lim_{N \rightarrow \infty} \theta _{N} (x,\lambda ) &= \lim _{N \rightarrow \infty} \frac{1}{N} \sum_{i=1}^{N} g_{\alpha \beta}(x,\lambda ,\xi _{i}) = \mathbb{E} \bigl[g_{\alpha \beta}(x, \lambda ,\xi ) \bigr] = \theta (x,\lambda ). \end{aligned}$$

This completes the proof. □

Theorem 3

Suppose that the assumptions (a) and (b) hold. Let \((x^{N},\lambda ^{N})\in S_{N}^{*}\) for each N and let \((x^{*},\lambda ^{*})\) be an accumulation point of the sequence \(\{(x^{N},\lambda ^{N})\}\). Then, we have \((x^{*},\lambda ^{*})\in S^{*}\).

Proof

Let \((x^{*},\lambda ^{*})\) be an accumulation point of the sequence \(\{(x^{N},\lambda ^{N})\}\). Without loss of generality, we assume that the sequence \(\{(x^{N},\lambda ^{N})\}\) converges to a point \((x^{*},\lambda ^{*})\). It is obvious that \((x^{*},\lambda ^{*})\in \mathbb{R}^{n}\times \Lambda \). We divide the proof into two parts.

Part 1. We claim that

$$ \lim_{N\rightarrow \infty} \bigl(\theta _{N} \bigl(x^{N}, \lambda ^{N} \bigr)-\theta _{N} \bigl(x^{*},\lambda ^{*} \bigr) \bigr)=0. $$
(20)

In fact, from Theorem 1, for any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \) and \(\xi \in \Xi \), we have

$$ \bigl\Vert \nabla _{x}g_{\alpha \beta}(x,\lambda , \xi ) \bigr\Vert \leq \sum_{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert \Biggl(4+ \bigl(\alpha ^{-1}-\beta ^{-1} \bigr)\sum_{j=1}^{m} \bigl\Vert \nabla _{x}F_{j}(x,\xi ) \bigr\Vert \Biggr) $$
(21)

and

$$ \bigl\Vert \nabla _{\lambda}g_{\alpha \beta}(x,\lambda , \xi ) \bigr\Vert \leq \bigl(\alpha ^{-1}+ \beta ^{-1} \bigr) \Biggl(\sum_{j=1}^{m} \bigl\Vert F_{j}(x,\xi ) \bigr\Vert \Biggr)^{2}. $$
(22)

It follows from the mean-value theorem that for each \((x^{N},\lambda ^{N})\) and each \(\xi _{i}\),

$$ \begin{aligned}[b] & \bigl\vert g_{\alpha \beta} \bigl(x^{N},\lambda ^{N},\xi _{i} \bigr)-g_{\alpha \beta} \bigl(x^{*},\lambda ^{*},\xi _{i} \bigr) \bigr\vert \\ &\quad = \bigl\vert \nabla _{x}g_{ \alpha \beta} \bigl(y^{Ni},\lambda ^{Ni},\xi _{i} \bigr)^{\top} \bigl(x^{N}-x^{*} \bigr)+\nabla _{\lambda}g_{\alpha \beta} \bigl(y^{Ni},\lambda ^{Ni}, \xi _{i} \bigr)^{\top} \bigl(\lambda ^{N}-\lambda ^{*} \bigr) \bigr\vert , \end{aligned} $$
(23)

where \((y^{Ni}, \lambda ^{Ni} )\in \mathbb{R}^{n}\times \Lambda \) and \(y^{Ni}=a_{Ni}x^{N}+(1-a_{Ni})x^{*}\), \(\lambda ^{Ni}=a_{Ni}\lambda ^{N}+(1-a_{Ni})\lambda ^{*}\) with \(a_{Ni}\in [0,1]\). Then we get that

$$\begin{aligned}& \bigl\vert \theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)-\theta _{N} \bigl(x^{*}, \lambda ^{*} \bigr) \bigr\vert \\& \quad \leq \frac{1}{N}\sum_{i=1}^{N} \bigl\vert g_{ \alpha \beta} \bigl(x^{N},\lambda ^{N},\xi _{i} \bigr)-g_{\alpha \beta} \bigl(x^{*}, \lambda ^{*},\xi _{i} \bigr) \bigr\vert \\& \quad \leq \frac{1}{N}\sum_{i=1}^{N} \bigl( \bigl\Vert \nabla _{x}g_{ \alpha \beta} \bigl(y^{Ni}, \lambda ^{Ni},\xi _{i} \bigr) \bigr\Vert \bigl\Vert x^{N}-x^{*} \bigr\Vert + \bigl\Vert \nabla _{\lambda}g_{\alpha \beta} \bigl(y^{Ni},\lambda ^{Ni}, \xi _{i} \bigr) \bigr\Vert \bigl\Vert \lambda ^{N}- \lambda ^{*} \bigr\Vert \bigr) \\& \quad \leq \frac{1}{N}\sum_{i=1}^{N} \Biggl(\sum_{j=1}^{m} \bigl\Vert F_{j} \bigl(y^{Ni},\xi _{i} \bigr) \bigr\Vert \Biggr) \Biggl(4+ \bigl(\alpha ^{-1}- \beta ^{-1} \bigr)\sum _{j=1}^{m} \bigl\Vert \nabla _{x}F_{j} \bigl(y^{Ni}, \xi _{i} \bigr) \bigr\Vert \Biggr) \bigl\Vert x^{N}-x^{*} \bigr\Vert \\& \qquad {}+\frac{1}{N}\sum_{i=1}^{N} \bigl( \alpha ^{-1}+\beta ^{-1} \bigr) \Biggl(\sum _{j=1}^{m} \bigl\Vert F_{j} \bigl(y^{Ni},\xi _{i} \bigr) \bigr\Vert \Biggr)^{2} \bigl\Vert \lambda ^{N}-\lambda ^{*} \bigr\Vert \\& \quad \leq \frac{1}{N}\sum_{i=1}^{N} \Biggl(\sum_{j=1}^{m} \bigl\Vert F_{j} \bigl(y^{Ni},\xi _{i} \bigr) \bigr\Vert \Biggr) \Biggl(4+ \bigl(\alpha ^{-1}- \beta ^{-1} \bigr)\sum _{j=1}^{m} \bigl\Vert \nabla _{x}F_{j} \bigl(y^{Ni}, \xi _{i} \bigr) \bigr\Vert _{\mathcal{F}} \Biggr) \bigl\Vert x^{N}-x^{*} \bigr\Vert \\& \qquad {}+\frac{1}{N}\sum_{i=1}^{N} \bigl( \alpha ^{-1}+\beta ^{-1} \bigr) \Biggl(\sum _{j=1}^{m} \bigl\Vert F_{j} \bigl(y^{Ni},\xi _{i} \bigr) \bigr\Vert \Biggr)^{2} \bigl\Vert \lambda ^{N}-\lambda ^{*} \bigr\Vert \\& \quad \leq \frac{1}{N}\sum_{i=1}^{N} \kappa (\xi _{i}) \bigl( 4 + \bigl( \alpha ^{-1} -\beta ^{-1} \bigr) \kappa (\xi _{i}) \bigr) \bigl\Vert x^{N}-x^{*} \bigr\Vert \\& \qquad {}+ \bigl(\alpha ^{-1} +\beta ^{-1} \bigr) \frac{1}{N} \sum_{i=1}^{N} \kappa ^{2}(\xi _{i}) \bigl\Vert \lambda ^{N}-\lambda ^{*} \bigr\Vert \\& \quad \stackrel{N\rightarrow \infty}{\longrightarrow}0, \end{aligned}$$

where the second inequality follows from (23) and the Cauchy–Schwarz inequality, the third inequality follows from (21) and (22), the fourth inequality follows from the definition of the Frobenius matrix norm, and the last inequality follows from assumption (b). Since the sequence \(\{ (x^{N},\lambda ^{N} ) \}\) converges to \((x^{*},\lambda ^{*})\) and, by Lemma 7 and assumption (b), the sample averages \(\frac{1}{N}\sum_{i=1}^{N}\kappa (\xi _{i})\) and \(\frac{1}{N}\sum_{i=1}^{N}\kappa ^{2}(\xi _{i})\) converge w.p. 1 to the finite limits \(\mathbb{E}[\kappa (\xi )]\) and \(\mathbb{E}[\kappa ^{2}(\xi )]\), we conclude that (20) is true.

Part 2. We next show that \((x^{*},\lambda ^{*})\in S^{*}\). Since

$$ \bigl\vert \theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)-\theta \bigl(x^{*},\lambda ^{*} \bigr) \bigr\vert \leq \bigl\vert \theta _{N} \bigl(x^{N}, \lambda ^{N} \bigr)-\theta _{N} \bigl(x^{*}, \lambda ^{*} \bigr) \bigr\vert + \bigl\vert \theta _{N} \bigl(x^{*},\lambda ^{*} \bigr)-\theta \bigl(x^{*}, \lambda ^{*} \bigr) \bigr\vert , $$

it follows from Lemma 8 and (20) that

$$ \lim_{N\rightarrow \infty}\theta _{N} \bigl(x^{N}, \lambda ^{N} \bigr)=\theta \bigl(x^{*},\lambda ^{*} \bigr) \quad \mathrm{{w.p. 1.} } $$

Notice that \((x^{N},\lambda ^{N} )\in S_{N}^{*}\) for each N, which means that

$$ \theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)\leq \theta _{N}(x,\lambda ), \quad \forall (x,\lambda )\in \mathbb{R}^{n} \times \Lambda . $$

Taking the limit in the above inequality as \(N\rightarrow \infty \), we obtain that

$$ \theta \bigl(x^{*},\lambda ^{*} \bigr)\leq \theta (x,\lambda ), \quad \forall (x, \lambda )\in \mathbb{R}^{n}\times \Lambda , \mathrm{{w.p. 1}}, $$

which implies that \((x^{*},\lambda ^{*})\in S^{*}\) with probability one. □

4.2 Convergence of stationary points

A point \((x^{*},\lambda ^{*})\) is said to be a stationary point for problem (8) if it satisfies

$$ \nabla \theta \bigl(x^{*},\lambda ^{*} \bigr)=0. $$
(24)

For each N, a point \((x^{N},\lambda ^{N} )\) is said to be a stationary point for problem (18) if it satisfies

$$ \nabla \theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)=0. $$
(25)

Theorem 4

Suppose that assumptions (a), (b), and (c) hold. Let \((x^{N},\lambda ^{N} )\) be a stationary point of problem (18) for each N and \((x^{*},\lambda ^{*})\) be an accumulation point of the sequence \(\{ (x^{N},\lambda ^{N} ) \}\). Then, \((x^{*},\lambda ^{*})\) is a stationary point of problem (8) with probability one.

Proof

In view of the definitions of \(\theta (\cdot ,\cdot )\) and \(\theta _{N}(\cdot ,\cdot )\), we have

$$ \nabla \theta (x,\lambda )= \begin{pmatrix} \mathbb{E} \bigl[\sum_{j=1}^{m}\lambda _{j}\nabla _{x}F_{j}(x,\xi ) \bigl(H_{\beta}(x,\lambda ,\xi )-H_{\alpha}(x,\lambda ,\xi ) \bigr) \bigr]+\alpha \mathbb{E} \bigl[H_{\alpha}(x,\lambda ,\xi )-x \bigr]-\beta \mathbb{E} \bigl[H_{\beta}(x,\lambda ,\xi )-x \bigr] \\ \mathbb{E} \bigl[ \bigl(H_{\beta}(x,\lambda ,\xi )-H_{\alpha}(x,\lambda ,\xi ) \bigr)^{\top }F_{1}(x,\xi ) \bigr] \\ \vdots \\ \mathbb{E} \bigl[ \bigl(H_{\beta}(x,\lambda ,\xi )-H_{\alpha}(x,\lambda ,\xi ) \bigr)^{\top }F_{m}(x,\xi ) \bigr] \end{pmatrix} $$

and

$$ \nabla \theta _{N}(x,\lambda )= \begin{pmatrix} \frac{1}{N}\sum_{i=1}^{N} \bigl(\sum_{j=1}^{m}\lambda _{j}\nabla _{x}F_{j}(x,\xi _{i}) \bigl(H_{\beta}(x,\lambda ,\xi _{i})-H_{\alpha}(x,\lambda ,\xi _{i}) \bigr) \bigr)+\frac{\alpha}{N}\sum_{i=1}^{N} \bigl(H_{\alpha}(x,\lambda ,\xi _{i})-x \bigr)-\frac{\beta}{N}\sum_{i=1}^{N} \bigl(H_{\beta}(x,\lambda ,\xi _{i})-x \bigr) \\ \frac{1}{N}\sum_{i=1}^{N} \bigl( \bigl(H_{\beta}(x,\lambda ,\xi _{i})-H_{\alpha}(x,\lambda ,\xi _{i}) \bigr)^{\top }F_{1}(x,\xi _{i}) \bigr) \\ \vdots \\ \frac{1}{N}\sum_{i=1}^{N} \bigl( \bigl(H_{\beta}(x,\lambda ,\xi _{i})-H_{\alpha}(x,\lambda ,\xi _{i}) \bigr)^{\top }F_{m}(x,\xi _{i}) \bigr) \end{pmatrix}. $$

Let \(D\subset \mathbb{R}^{n}\times \Lambda \) be a compact set. We have from assumptions (a), (b), and (c) that

$$ \lim_{N\rightarrow \infty}\sup_{(x,\lambda )\in D} \bigl\Vert \nabla \theta _{N}(x,\lambda )-\nabla \theta (x,\lambda ) \bigr\Vert =0\quad \text{w.p. 1.} $$
(26)

From assumptions (a) and (b), it is easy to see that \(\nabla \theta (\cdot ,\cdot )\) is a continuous function. Let \((x^{*},\lambda ^{*})\) be an accumulation point of the sequence \(\{ (x^{N},\lambda ^{N} ) \}\). Without loss of generality, we assume that \(\lim_{N\rightarrow \infty} (x^{N},\lambda ^{N} )=(x^{*}, \lambda ^{*})\). The sequence \(\{ (x^{N},\lambda ^{N} ) \}\) is contained in a closed neighborhood \(B\subset \mathbb{R}^{n}\times \Lambda \) of \((x^{*},\lambda ^{*})\) for sufficiently large N. Thus, by the continuity of \(\nabla \theta \), for any given \(\varepsilon >0\) we have, for all sufficiently large N,

$$ \bigl\Vert \nabla \theta \bigl(x^{N},\lambda ^{N} \bigr)-\nabla \theta \bigl(x^{*}, \lambda ^{*} \bigr) \bigr\Vert \leq \frac{\varepsilon}{2}. $$
(27)

From (26), there exists \(N_{0}>0\) such that \((x^{N},\lambda ^{N} )\in B\) for all \(N\geq N_{0}\) and

$$ \bigl\Vert \nabla \theta _{N} \bigl(x^{N}, \lambda ^{N} \bigr)-\nabla \theta \bigl(x^{N},\lambda ^{N} \bigr) \bigr\Vert \leq \frac{\varepsilon}{2}. $$
(28)

Since

$$\begin{aligned}& \bigl\Vert \nabla \theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)-\nabla \theta \bigl(x^{*}, \lambda ^{*} \bigr) \bigr\Vert \\& \quad \leq \bigl\Vert \nabla \theta _{N} \bigl(x^{N}, \lambda ^{N} \bigr)-\nabla \theta \bigl(x^{N},\lambda ^{N} \bigr) \bigr\Vert + \bigl\Vert \nabla \theta \bigl(x^{N},\lambda ^{N} \bigr)-\nabla \theta \bigl(x^{*},\lambda ^{*} \bigr) \bigr\Vert , \end{aligned}$$

we have from (27) and (28) that

$$ \bigl\Vert \nabla \theta _{N} \bigl(x^{N},\lambda ^{N} \bigr)-\nabla \theta \bigl(x^{*}, \lambda ^{*} \bigr) \bigr\Vert \leq \varepsilon . $$

Thus, we obtain that

$$ \lim_{N\rightarrow \infty}\nabla \theta _{N} \bigl(x^{N}, \lambda ^{N} \bigr)=\nabla \theta \bigl(x^{*},\lambda ^{*} \bigr) \quad \text{w.p. 1}. $$

By taking the limit as N tends to ∞ in (25), we obtain (24). That is, \((x^{*},\lambda ^{*})\) is a stationary point of problem (8) with probability one. This completes the proof. □

5 EV formulation and its global error bound

In this section, let us consider another deterministic formulation for the SVVI problem, i.e., the EV formulation (for more details about the EV formulation, please see, for example, [11]), that is, to find \(x^{*} \in K\) such that

$$\begin{aligned}& \bigl( \bigl(y-x^{*} \bigr)^{\top}\mathbb{E} \bigl[F_{1} \bigl(x^{*},\xi \bigr) \bigr],\dots , \bigl(y-x^{*} \bigr)^{ \top}\mathbb{E} \bigl[F_{m} \bigl(x^{*},\xi \bigr) \bigr] \bigr) \notin \operatorname{-int} \mathbb{R}^{m}_{+}, \quad \forall y\in K. \end{aligned}$$
(29)

The notation \(\operatorname{sol}(\mathbb{E}[F(x,\xi )], K)\) denotes the solution set of problem (29), and \(\operatorname{dist}(x,X):=\min_{y\in X} \|x-y\|\). Lee et al. [16] provided the following properties for the deterministic VVI.

Lemma 9

([16, Theorem 4.2])

Suppose that \(F_{j}\) are strongly monotone on K with modulus \(\beta >0\) and Lipschitz continuous on K with modulus \(l>0\) for all j (\(j=1,\dots ,m\)). Then the solution set is compact.

In this section, based on the EV formulation, we give conditions under which the D-gap function provides a global error bound for the SVVI problem (29). Assume that the expected value of the function \(F_{j}(x,\cdot )\) is well defined for each j. Similar to the work of Zhao et al. [33, Eq. (6)], we give an equivalent scalar variational inequality for problem (29), i.e., find \((x^{*},\lambda ^{*}) \in K \times \Lambda \) such that

$$ \bigl(y-x^{*} \bigr)^{\top}\sum _{j=1}^{m}\lambda _{j}^{*} \mathbb{E} \bigl[F_{j} \bigl(x^{*}, \xi \bigr) \bigr]\geq 0, \quad \forall y\in K. $$
(30)

From the ideas of Yamashita–Taji–Fukushima [32, Eq. (6)] and Zhao et al. [33, Eq. (7)], the D-gap function \({g}_{\alpha \beta}:\mathbb{R}^{n} \times \Lambda \rightarrow [0,\infty )\) of problem (30) is defined as follows:

$$ {g}_{\alpha \beta}(x,\lambda ):={g}_{\alpha}(x,\lambda )-{g}_{\beta}(x, \lambda ), $$

where \(0<\alpha <\beta \) and \(g_{\gamma}(\cdot ,\cdot )\) \((\gamma =\alpha ,\beta )\) is the regularized gap function, which is defined by

$$\begin{aligned} {g}_{\gamma}(x,\lambda ):=\max_{y\in K} \Biggl\{ (x-y)^{\top} \sum_{j=1}^{m} \lambda _{j}\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr]- \frac{\gamma}{2} \Vert x-y \Vert ^{2} \Biggr\} . \end{aligned}$$

It is easy to see that

$$\begin{aligned} {g}_{\gamma}(x,\lambda )= \bigl(x-H_{\gamma}(x,\lambda ) \bigr)^{\top}\sum_{j=1}^{m}\lambda _{j}\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr]- \frac{{\gamma}}{2} \bigl\Vert x-H_{{\gamma}}(x,\lambda ) \bigr\Vert ^{2}, \end{aligned}$$
(31)

where

$$ H_{{\gamma}}(x,\lambda ):=\operatorname{Proj}_{K} \Biggl(x-{\gamma}^{-1}\sum_{j=1}^{m} \lambda _{j}\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr] \Biggr). $$
(32)

The following main properties of the D-gap function \(g_{\alpha \beta}(\cdot ,\cdot )\) hold:

  • \(g_{\alpha \beta}(x,\lambda )\geq 0\) for any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \);

  • \(g_{{\alpha}{\beta}}(x^{*},\lambda ^{*})=0\) if and only if \((x^{*},\lambda ^{*})\) solves problem (30).

Remark 3

If \(m=1\), then the D-gap function \(g_{{\alpha \beta}}(\cdot ,\cdot )\) considered in the present paper reduces to the D-gap function [12, Eq. (5)] discussed by He et al.

As usual, \(\mathbb{P}(V)\) denotes the probability of an event V.

Lemma 10

Let \(0<{\alpha}<{\beta}\). For any \((x,\lambda )\in \mathbb{R}^{n}\times \Lambda \), one has

$$ \frac{{\beta}-{\alpha}}{2} \bigl\Vert x-H_{{\beta}}(x,\lambda ) \bigr\Vert ^{2}\leq g_{{ \alpha \beta}}(x,\lambda )\leq \frac{{\beta}-{\alpha}}{2} \bigl\Vert x-H_{{ \alpha}}(x,\lambda ) \bigr\Vert ^{2}. $$
(33)

Proof

This proof is similar to that of Lemma 6, so we omit it here. □

Lemma 11

Suppose that assumptions (a) and (c) hold, and each function \(F_{j}(\cdot ,\xi )\) (\(j=1,\dots ,m\)) is monotone on K for almost every \(\xi \in \Xi \). Let \(F_{j}\) (\(j=1,\dots ,m\)) be uniformly strongly monotone on K with modulus \(\mu _{j}>0\) over \(V_{j}\subset \Xi \subset \mathbb{R}^{r}\) with \(\mathbb{P}(V_{j})>0\), and set \(\mu :=\min_{1\leq j\leq m}\mu _{j}\) and \(\nu :=\min_{1\leq j\leq m}\mathbb{P}(V_{j})\). Then the solution set \(\operatorname{sol}(\mathbb{E}[F(x,\xi )],K)\) is compact.

Proof

For each \(j=1,\dots ,m\), since \(F_{j}\) is uniformly strongly monotone on K with modulus \(\mu _{j}>0\) over \(V_{j}\), we have

$$ \begin{aligned} \int _{V_{j}} \bigl(F_{j}(y,\xi )-F_{j}(x, \xi ) \bigr)^{\top} (y-x) \mathbb{P}(d\xi ) &\geq \int _{V_{j}}\mu _{j} \Vert y-x \Vert ^{2}\mathbb{P}(d \xi ) \\ &=\mu _{j} \Vert y-x \Vert ^{2} \int _{V_{j}}\mathbb{P}(d\xi ) \\ &=\mu _{j} \Vert y-x \Vert ^{2}\mathbb{P}(V_{j}) \\ &\geq \mu \nu \Vert y-x \Vert ^{2}. \end{aligned} $$
(34)

In addition, since \(F_{j}(\cdot ,\xi )\) is monotone on \(\mathbb{R}^{n}\) for almost every \(\xi \in \Xi \), we have

$$ \int _{\Xi \setminus V_{j}} \bigl(F_{j}(y,\xi )-F_{j}(x, \xi ) \bigr)^{\top} (y-x) \mathbb{P}(d\xi )\geq \int _{\Xi \setminus V_{j}}0\mathbb{P}(d\xi )=0. $$
(35)

Combining (34) and (35), for any \(x,y \in \mathbb{R}^{n}\), we get

$$ \begin{aligned} \bigl(\mathbb{E} \bigl[F_{j}(y,\xi ) \bigr]- \mathbb{E} \bigl[F_{j}(x,\xi ) \bigr] \bigr)^{\top} (y-x) ={}& \int _{V_{j}} \bigl(F_{j}(y,\xi )-F_{j}(x, \xi ) \bigr)^{\top} (y-x)\mathbb{P}(d\xi ) \\ &{} + \int _{\Xi \setminus V_{j}} \bigl(F_{j}(y,\xi )-F_{j}(x, \xi ) \bigr)^{ \top} (y-x)\mathbb{P}(d\xi ) \\ \geq{}& \nu \mu \Vert y-x \Vert ^{2}. \end{aligned} $$

Hence, for each j (\(j=1,\dots ,m\)), \(\mathbb{E}[F_{j}]\) is strongly monotone with modulus \(\nu \mu >0\). On the other hand, by assumption (c), for any \(x,y \in \mathbb{R}^{n}\), it holds that

$$\begin{aligned} \bigl\Vert \mathbb{E} \bigl[F_{j} (y,\xi ) \bigr] - \mathbb{E} \bigl[F_{j} (x,\xi ) \bigr] \bigr\Vert \leq{}& \int _{\Xi} \bigl\Vert F_{j} (y,\xi ) - F_{j} (x,\xi ) \bigr\Vert \mathbb{P}(d\xi ) \\ \leq {}& \int _{\Xi} L_{j}(\xi ) \Vert y - x \Vert \mathbb{P}(d\xi ) \\ \leq {}& L \Vert y - x \Vert . \end{aligned}$$

Therefore, for each j (\(j=1,\dots ,m\)), \(\mathbb{E}[F_{j}]\) is Lipschitz continuous with modulus \(L>0\) and strongly monotone with modulus \(\nu \mu >0\). Then, from Lemma 9, we obtain that \(\operatorname{sol}(\mathbb{E}[F(x,\xi )],K)\) is compact. □

Theorem 5

Suppose that assumptions (a), (b), and (c) hold, and each function \(F_{j}(\cdot ,\xi )\) (\(j=1,\dots ,m\)) is monotone on K for almost every \(\xi \in \Xi \). Let \(F_{j}\) (\(j=1,\dots ,m\)) be uniformly strongly monotone on K with modulus \(\mu _{j}>0\) over \(V_{j}\subset \Xi \subset \mathbb{R}^{r}\) with \(\mathbb{P}(V_{j})>0\), and set \(\mu :=\min_{1\leq j\leq m}\mu _{j}\) and \(\nu :=\min_{1\leq j\leq m}\mathbb{P}(V_{j})\). Then, for any \(\lambda \in \Lambda \), one has

$$ \operatorname{dist} \bigl(x, \operatorname{sol} \bigl(\mathbb{E} \bigl[F(x,\xi ) \bigr],K \bigr) \bigr) \leq \frac{L+{\beta}}{\nu \mu}\sqrt{\frac{2}{{\beta}-{\alpha}}} \sqrt{g_{{ \alpha}{\beta}}(x, \lambda )}. $$

Proof

In problem (30), the solution x depends on the choice of λ∈Λ; once \(\lambda ^{*} \in \Lambda \) is fixed, the corresponding solution is determined. Since each function \(F_{j}\) (\(j=1,\dots ,m\)) is uniformly strongly monotone over \(V_{j}\) and monotone elsewhere, arguing as in the proof of Lemma 11, for any \(x,y\in \mathbb{R}^{n}\) one has

$$\begin{aligned}& \begin{aligned} & \Biggl(\sum _{j=1}^{m}\lambda ^{*}_{j} \mathbb{E} \bigl[F_{j}(y, \xi ) \bigr]-\sum _{j=1}^{m} \lambda ^{*}_{j} \mathbb{E} \bigl[F_{j}(x,\xi ) \bigr] \Biggr)^{\top }(y-x) \\ &\quad = \sum_{j=1}^{m} \int _{V_{j}}\lambda ^{*}_{j} \bigl(F_{j}(y, \xi )-F_{j}(x,\xi ) \bigr)^{\top }(y-x) \mathbb{P}(d\xi ) \\ &\qquad {} +\sum_{j=1}^{m} \int _{\Xi \setminus V_{j}}\lambda ^{*}_{j} \bigl(F_{j}(y, \xi )-F_{j}(x,\xi ) \bigr)^{\top }(y-x) \mathbb{P}(d\xi ) \\ &\quad \geq \nu \mu \Vert y-x \Vert ^{2}, \end{aligned} \end{aligned}$$
(36)

where \(\lambda ^{*}_{j}\in [0,1]\). Hence, \(\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E}[F_{j}]\) is strongly monotone with modulus \(\nu \mu >0\) for the fixed \(\lambda ^{*}\in \Lambda \). Therefore, from Lemma 1, problem (30) with \(\lambda =\lambda ^{*}\) has a unique solution \(x^{*}\), that is,

$$ \bigl(y - x^{*} \bigr)^{\top} \sum _{j=1}^{m} \lambda ^{*}_{j} \mathbb{E} \bigl[F_{j} \bigl(x^{*},\xi \bigr) \bigr] \geq 0 , \quad \forall y \in K. $$
(37)

For any \(x,y\in \mathbb{R}^{n}\), using the Cauchy–Schwarz inequality and the Lipschitz continuity in assumption (c), we obtain

$$ \begin{aligned} & \Biggl(\sum _{j=1}^{m}\lambda ^{*}_{j} \mathbb{E} \bigl[F_{j}(y, \xi ) \bigr]-\sum _{j=1}^{m} \lambda ^{*}_{j} \mathbb{E} \bigl[F_{j}(x,\xi ) \bigr] \Biggr)^{\top }(y-x) \\ &\quad = \sum_{j=1}^{m} \int _{\Xi}\lambda ^{*}_{j} \bigl(F_{j}(y, \xi )-F_{j}(x,\xi ) \bigr)^{\top }(y-x) \mathbb{P}(d\xi ) \\ &\quad \leq L \Vert y-x \Vert ^{2}, \end{aligned} $$
(38)

where \(\lambda ^{*}_{j}\in [0,1]\). Hence, \(\sum_{j=1}^{m}\lambda ^{*}_{j}\mathbb{E}[F_{j}]\) is Lipschitz continuous with modulus \(L>0\). Then, we claim that there exists a constant \(r>0\) such that

$$ \operatorname{dist} \bigl(x, \operatorname{sol} \bigl(\mathbb{E} \bigl[F(x, \xi ) \bigr],K \bigr) \bigr) \leq \bigl\Vert x-x^{*} \bigr\Vert \leq r \bigl\Vert H_{{\beta}} \bigl(x,\lambda ^{*} \bigr)-x \bigr\Vert . $$
(39)

In fact, given \(x\in \mathbb{R}^{n}\), from item (a) of Lemma 2, we know that \(H_{{\beta}}(x,\lambda ^{*})\), defined by (32), is the unique solution of the following strongly convex minimization problem:

$$ \min_{y\in K} \Biggl\{ \Biggl\langle \sum _{j=1}^{m}\lambda ^{*}_{j} \mathbb{E} \bigl[F_{j}(x,\xi ) \bigr],y-x \Biggr\rangle + \frac{{\beta}}{2} \Vert y-x \Vert ^{2} \Biggr\} . $$

Hence, \(H_{{\beta}}(x,\lambda ^{*})\) fulfills, for all \(y\in K\), the following optimality condition:

$$ \Biggl\langle \sum_{j=1}^{m}\lambda ^{*}_{j} \mathbb{E} \bigl[F_{j}(x,\xi ) \bigr]-{ \beta} \bigl(x-H_{{\beta}} \bigl(x,\lambda ^{*} \bigr) \bigr),y-H_{{\beta}} \bigl(x,\lambda ^{*} \bigr) \Biggr\rangle \geq 0. $$

Taking \(y=x^{*}\) in the above inequality, we obtain

$$ \Biggl\langle \sum_{j=1}^{m} \lambda ^{*}_{j}\mathbb{E} \bigl[F_{j}(x,\xi ) \bigr]-{ \beta} \bigl(x-H_{{\beta}} \bigl(x,\lambda ^{*} \bigr) \bigr),H_{{\beta}} \bigl(x,\lambda ^{*} \bigr)-x^{*} \Biggr\rangle \leq 0. $$
(40)

Since \(x^{*}\) solves (37), taking \(y=H_{{\beta}}(x,\lambda ^{*})\in K\) in (37), we obtain

$$ \Biggl\langle \sum_{j=1}^{m} \lambda ^{*}_{j}\mathbb{E} \bigl[F_{j} \bigl(x^{*},\xi \bigr) \bigr],x^{*}-H_{{ \beta}} \bigl(x,\lambda ^{*} \bigr) \Biggr\rangle \leq 0. $$
(41)

Adding (40) and (41), we have

$$ \Biggl\langle \sum_{j=1}^{m}\lambda ^{*}_{j} \mathbb{E} \bigl[F_{j}(x,\xi ) \bigr]- \sum_{j=1}^{m} \lambda ^{*}_{j} \mathbb{E} \bigl[F_{j} \bigl(x^{*},\xi \bigr) \bigr]-{ \beta} \bigl(x-H_{{\beta}} \bigl(x, \lambda ^{*} \bigr) \bigr),H_{{\beta}} \bigl(x,\lambda ^{*} \bigr)-x^{*} \Biggr\rangle \leq 0. $$

This can be rewritten as follows:

$$ \begin{aligned} & \Biggl\langle \sum_{j=1}^{m} \lambda ^{*}_{j} \mathbb{E} \bigl[F_{j}(x,\xi ) \bigr]-\sum_{j=1}^{m}\lambda ^{*}_{j} \mathbb{E} \bigl[F_{j} \bigl(x^{*},\xi \bigr) \bigr],x-x^{*} \Biggr\rangle \\ &\quad \leq -{\beta} \bigl\Vert H_{{\beta}} \bigl(x,\lambda ^{*} \bigr)-x \bigr\Vert ^{2} \\ &\qquad {}- \Biggl\langle \sum _{j=1}^{m} \lambda ^{*}_{j} \mathbb{E} \bigl[F_{j}(x,\xi ) \bigr]-\sum _{j=1}^{m}\lambda ^{*}_{j} \mathbb{E} \bigl[F_{j} \bigl(x^{*},\xi \bigr) \bigr],H_{{ \beta}} \bigl(x,\lambda ^{*} \bigr)-x \Biggr\rangle \\ &\qquad {} +{\beta} \bigl\langle H_{{\beta}} \bigl(x,\lambda ^{*} \bigr)-x,x^{*}-x \bigr\rangle . \end{aligned} $$
(42)

Noticing that \({\beta}>0\) and using the Cauchy–Schwarz inequality, we get

$$ \begin{aligned} &\Biggl\langle \sum _{j=1}^{m} \lambda ^{*}_{j} \mathbb{E} \bigl[F_{j}(x,\xi ) \bigr]- \sum _{j=1}^{m}\lambda ^{*}_{j} \mathbb{E} \bigl[F_{j} \bigl(x^{*},\xi \bigr) \bigr],x-x^{*} \Biggr\rangle \\ &\quad \leq \Biggl\Vert \sum _{j=1}^{m}\lambda ^{*}_{j} \mathbb{E} \bigl[F_{j}(x, \xi ) \bigr]-\sum _{j=1}^{m} \lambda ^{*}_{j} \mathbb{E} \bigl[F_{j} \bigl(x^{*}, \xi \bigr) \bigr] \Biggr\Vert \bigl\Vert H_{{\beta}} \bigl(x,\lambda ^{*} \bigr)-x \bigr\Vert \\ & \qquad {}+{\beta} \bigl\Vert H_{{\beta}} \bigl(x,\lambda ^{*} \bigr)-x \bigr\Vert \bigl\Vert x-x^{*} \bigr\Vert . \end{aligned} $$

This, together with (36) and (38), implies that

$$ \nu \mu \bigl\Vert x-x^{*} \bigr\Vert ^{2}\leq L \bigl\Vert x-x^{*} \bigr\Vert \bigl\Vert H_{{\beta}} \bigl(x, \lambda ^{*} \bigr)-x \bigr\Vert +{\beta} \bigl\Vert H_{{\beta}} \bigl(x,\lambda ^{*} \bigr)-x \bigr\Vert \bigl\Vert x-x^{*} \bigr\Vert , $$

which means that

$$ \operatorname{dist} \bigl(x, \operatorname{sol} \bigl(\mathbb{E} \bigl[F(x, \xi ) \bigr],K \bigr) \bigr) \leq \bigl\Vert x-x^{*} \bigr\Vert \leq \frac {L+{\beta}}{\nu \mu} \bigl\Vert H_{{\beta}} \bigl(x,\lambda ^{*} \bigr)-x \bigr\Vert . $$
(43)

As a result, the needed constant \(r>0\) is defined by \(r=\frac{L+{\beta}}{\nu \mu}\). Thus, (39) is true. From the first inequality of (33) in Lemma 10, we have

$$ \bigl\Vert x-H_{{\beta}} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert \leq \sqrt{ \frac{2}{{\beta}-{\alpha}}}\sqrt{g_{{\alpha}{\beta}} \bigl(x, \lambda ^{*} \bigr)}. $$
(44)

Combining (43) with (44) and using the arbitrariness of \(\lambda ^{*} \in \Lambda \), we get the desired conclusion. This completes the proof. □
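The bound of Theorem 5 can be checked on a toy EV instance in which \(\mathbb{E}[F_{j}(x,\xi )]\) is available in closed form. In the sketch below, the affine data (with additive mean-zero noise, so that \(\mathbb{E}[F_{j}(x,\xi )]=A_{j}x+c_{j}\)), the box constraint K, and the projected fixed-point loop used to solve (30) for a fixed λ are all our own illustrative assumptions.

```python
import numpy as np

lo, hi = 0.0, 1.0
al, be = 0.5, 2.0
A = [np.array([[3.0, 1.0], [1.0, 2.0]]), np.eye(2)]
c = [np.array([1.0, -1.0]), np.array([0.5, 0.5])]
lam = np.array([0.6, 0.4])                   # a fixed point of the simplex

def proj_K(z):
    return np.clip(z, lo, hi)

def Phi(x):
    # E[F_j(x, xi)] = A_j x + c_j when the noise is additive with mean zero
    return sum(l * (Aj @ x + cj) for l, Aj, cj in zip(lam, A, c))

def g(gamma, x):
    # EV regularized gap function (31) with H_gamma from (32)
    d = x - proj_K(x - Phi(x) / gamma)
    return d @ Phi(x) - 0.5 * gamma * d @ d

mu = min(np.linalg.eigvalsh((Aj + Aj.T) / 2).min() for Aj in A)  # strong monot.
L = max(np.linalg.norm(Aj, 2) for Aj in A)                       # Lipschitz
nu = 1.0                                     # here V_j = Xi, so P(V_j) = 1

# solve the scalarized EV problem (30) for this lambda by projected iteration
xs = np.zeros(2)
for _ in range(5000):
    xs = proj_K(xs - 0.1 * Phi(xs))

rng = np.random.default_rng(2)
for _ in range(100):
    x = 3.0 * rng.normal(size=2)
    bound = (L + be) / (nu * mu) * np.sqrt(2.0 / (be - al)) \
            * np.sqrt(g(al, x) - g(be, x))
    assert np.linalg.norm(x - xs) <= bound + 1e-6
print("Theorem 5 bound verified at 100 random points")
```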

In the rest of this section, we investigate the boundedness of the level set of the D-gap function \(g_{{\alpha}{\beta}}(\cdot ,\cdot )\). The level set of the D-gap function \(g_{{\alpha}{\beta}}(\cdot ,\cdot )\) is defined by

$$ L_{g_{{\alpha}{\beta}}}(\eta ):= \bigl\{ (x,\lambda )\in \mathbb{R}^{n} \times \Lambda :g_{{\alpha}{\beta}}(x,\lambda )\leq \eta \bigr\} . $$

Corollary 1

Suppose that assumptions (a), (b), and (c) hold, and each function \(F_{j}(\cdot ,\xi )\) (\(j=1,\dots ,m\)) is monotone on K for almost every \(\xi \in \Xi \). Let \(F_{j}\) (\(j=1,\dots ,m\)) be uniformly strongly monotone on K with modulus \(\mu _{j}>0\) over \(V_{j}\subset \Xi \subset \mathbb{R}^{r}\) with \(\mathbb{P}(V_{j})>0\), and set \(\mu :=\min_{1\leq j\leq m}\mu _{j}\) and \(\nu :=\min_{1\leq j\leq m}\mathbb{P}(V_{j})\). Then, for any \(\eta \geq 0\), the level set \(L_{g_{{\alpha}{\beta}}}(\eta )\) is bounded.

Proof

Suppose on the contrary that there exists a \(\bar{\eta}\geq 0\) such that \(L_{g_{{\alpha}{\beta}}}(\bar{\eta})\) is unbounded. This implies that there exists a sequence \(\{(x^{k},\lambda ^{k})\}\subset L_{g_{{\alpha}{\beta}}}(\bar{\eta})\) such that

$$ \lim_{k\rightarrow \infty} \bigl\Vert \bigl(x^{k},\lambda ^{k} \bigr) \bigr\Vert =+\infty . $$

Since \(\{\lambda ^{k}\}\subset \Lambda \) and Λ is a compact set, we get that

$$ \lim_{k\rightarrow \infty} \bigl\Vert x^{k} \bigr\Vert =+\infty . $$

By the proof of Theorem 5, one has

$$ \operatorname{dist} \bigl(x^{k}, \operatorname{sol} \bigl(\mathbb{E} \bigl[F \bigl(x^{k},\xi \bigr) \bigr],K \bigr) \bigr) \leq \frac{L+{\beta}}{\nu \mu} \sqrt{\frac{2}{{\beta}-{\alpha}}}\sqrt{g_{{ \alpha}{\beta}} \bigl(x^{k},\lambda ^{k} \bigr)}. $$

This means that

$$ g_{{\alpha}{\beta}} \bigl(x^{k},\lambda ^{k} \bigr)\geq \frac{\nu ^{2}\mu ^{2}({\beta}-{\alpha})}{2(L+{\beta})^{2}} \operatorname{dist}^{2} \bigl(x^{k}, \operatorname{sol} \bigl(\mathbb{E} \bigl[F \bigl(x^{k},\xi \bigr) \bigr],K \bigr) \bigr). $$

Since the solution set \(\operatorname{sol}(\mathbb{E}[F(x,\xi )],K)\) is compact by Lemma 11 and \(\|x^{k}\|\rightarrow \infty \), we have \(\operatorname{dist}(x^{k}, \operatorname{sol}(\mathbb{E}[F(x^{k},\xi )],K))\rightarrow \infty \), and hence

$$ \lim_{k\rightarrow \infty} \biggl( \frac{\nu ^{2}\mu ^{2}({\beta}-{\alpha})}{2(L+{\beta})^{2}} \operatorname{dist}^{2} \bigl(x^{k}, \operatorname{sol} \bigl(\mathbb{E} \bigl[F \bigl(x^{k}, \xi \bigr) \bigr],K \bigr) \bigr) \biggr)=\infty , $$

which means that \(g_{{\alpha}{\beta}}(x^{k},\lambda ^{k})\rightarrow \infty \) as k tends to ∞. This contradicts the fact that \((x^{k},\lambda ^{k}) \in L_{g_{{\alpha}{\beta}}}(\bar{\eta})\), completing the proof. □

Remark 4

We would like to point out that the condition that each function \(F_{j}\) (\(j=1,\dots ,m\)) is Lipschitz continuous with Lipschitz constant \(L_{j}(\xi )\) used in Theorem 5 and Corollary 1 can be replaced by the requirement that K is a compact set.

We prove only the counterpart of Theorem 5. In fact, since K is a compact set, there exists a positive number \(b>0\) such that \(K\subset \{x|\|x\|\leq b\}\). Let \(\mathfrak{B}\) be the closed ball with radius 3b, that is, \(\mathfrak{B}=\{x|\|x\|\leq 3b\}\). For any \(x\in \mathbb{R}^{n}\), let us consider two possible cases:

(Case (i): \(x\in \mathfrak{B}\)) Since each function \(F_{j}(\cdot ,\xi )\) (\(j=1,\dots ,m\)) is continuously differentiable, it is Lipschitz continuous on the compact set \(\mathfrak{B}\). From the proof of Theorem 5, for all \(x\in \mathfrak{B}\), we have \(\|x-x^{*}\|\leq \frac{L+{\beta}}{\nu \mu}\|H_{{\beta}}(x,\lambda ^{*})-x \|\).

(Case (ii): \(x\notin \mathfrak{B}\)) For an arbitrary \(x\notin \mathfrak{B}\), we have \(\|x\|\geq 3b\). Since \(x^{*}\in K\), \(H_{{\beta}}(x,\lambda ^{*})\in K\) and K is a compact set, we obtain that \(\|x^{*}\|\leq b\) and \(\|H_{{\beta}}(x,\lambda ^{*})\|\leq b\). Hence, we have

$$ \bigl\Vert x-x^{*} \bigr\Vert \leq \bigl\Vert x-H_{{\beta}} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert + \bigl\Vert H_{{\beta}} \bigl(x, \lambda ^{*} \bigr) \bigr\Vert + \bigl\Vert x^{*} \bigr\Vert \leq \bigl\Vert x-H_{{\beta}} \bigl(x, \lambda ^{*} \bigr) \bigr\Vert +2b. $$
(45)

It follows from the triangle inequality that

$$ \bigl\Vert x-H_{{\beta}} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert \geq \Vert x \Vert - \bigl\Vert H_{{\beta}} \bigl(x,\lambda ^{*} \bigr) \bigr\Vert \geq 3b-b =2b. $$

This, together with (45), implies that \(\|x-x^{*}\|\leq 2\|x-H_{{\beta}}(x,\lambda ^{*})\|\).

Setting \(\bar{r}=\max \{\frac{L+{\beta}}{\nu \mu},2 \}\), we have \(\|x-x^{*}\|\leq \bar{r}\|x-H_{{\beta}}(x,\lambda ^{*})\|\), which plays the role of (39). Then by repeating the rest of the proof of Theorem 5, we get the desired conclusion.

Similarly, the analogue for Corollary 1 is true in the case where the set K is compact.

6 Concluding remarks

In the present paper, mainly motivated by the works [20, 32, 33], we have presented a UERM approach (i.e., problem (8)) for solving the SVVI problem (1). Several properties of the objective function θ were discussed, namely, continuous differentiability and the boundedness of the level set. Furthermore, the well-known sample average approximation approach was employed for solving problem (8), and the convergence of the proposed approach for global optimal solutions and stationary points was analyzed. Finally, we considered another deterministic formulation, i.e., the EV formulation for the SVVI problem, and gave the global error bound of the D-gap function \(g_{{\alpha}{\beta}}(\cdot ,\cdot )\) based on the EV formulation.

We would like to mention that the UERM approach presented in this paper is formalistic, since the projection operator on K is still needed in the calculation of \(g_{\alpha \beta}\). Thus, in order to convert the formalistic approach into practical methods, it is interesting to investigate the following aspects:

  1. (i)

    How to design effective algorithms that exploit the structure of the constraint set K;

  2. (ii)

    The sample complexity of the SAA approach proposed in this paper;

  3. (iii)

    The convergence rate of the SAA approach.

Availability of data and materials

Not applicable.

References

  1. Ansari, Q.H., Köbis, E., Yao, J.C.: Vector Variational Inequalities and Vector Optimization: Theory and Applications. Springer, Switzerland (2018)


  2. Bianchi, M., Konnov, I.V., Pini, R.: Limit vector variational inequalities and market equilibrium problems. Optim. Lett. 15, 817–832 (2021)


  3. Charitha, C., Dutta, J.: Regularized gap functions and error bounds for vector variational inequalities. Pac. J. Optim. 6, 497–510 (2010)


  4. Charitha, C., Dutta, J., Lalitha, C.S.: Gap functions for vector variational inequalities. Optimization 64, 1499–1520 (2015)


  5. Chen, X.J., Fukushima, M.: Expected residual minimization method for stochastic linear complementarity problems. Math. Oper. Res. 30, 1022–1038 (2005)


  6. Daniele, P., Maugeri, A.: Vector variational inequalities and modelling of a continuum traffic equilibrium problem. In: Giannessi, F. (ed.) Vector Variational Inequalities and Vector Equilibria, Mathematical Theories, pp. 97–111. Kluwer Academic, Dordrecht (2000)


  7. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. I. Springer, New York (2003)


  8. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. II. Springer, New York (2003)


  9. Giannessi, F.: Theorems of alternative, quadratic programs and complementarity problems. In: Cottle, R.W., Giannessi, F., Lions, J.L. (eds.) Variational Inequalities and Complementarity Problems, pp. 151–186. Wiley, New York (1980)


  10. Giannessi, F. (ed.): Vector Variational Inequalities and Vector Equilibria: Mathematical Theories. Nonconvex Optimization and Its Applications, vol. 38. Kluwer Academic, Dordrecht (2000)


  11. Gürkan, G., Özge, A.Y., Robinson, S.M.: Sample-path solution of stochastic variational inequalities. Math. Program. 84, 313–333 (1999)


  12. He, S.X., Zhang, P., Hu, X., Hu, R.: A sample average approximation method based on a D-gap function for stochastic variational inequality problems. J. Ind. Manag. Optim. 10, 977–987 (2014)


  13. Huang, N.J., Li, J., Yang, X.Q.: Weak sharpness for gap functions in vector variational inequalities. J. Math. Anal. Appl. 394, 449–457 (2012)


  14. Jiang, H.Y., Xu, H.F.: Stochastic approximation approaches to the stochastic variational inequality problem. IEEE Trans. Autom. Control 53, 1462–1475 (2008)


  15. Kanzow, C., Fukushima, M.: Theoretical and numerical investigation of the D-gap function for box constrained variational inequalities. Math. Program. 83, 55–87 (1998)


  16. Lee, G.M., Kim, D.S., Lee, B.S., Yen, N.D.: Vector variational inequality as a tool for studying vector optimization problems. Nonlinear Anal. 34, 745–765 (1998)


  17. Lee, G.M., Yen, N.D.: A result on vector variational inequalities with polyhedral constraint sets. J. Optim. Theory Appl. 109, 193–197 (2001)


  18. Li, S.J., Chen, G.Y.: On relations between multiclass, multicriteria traffic network equilibrium models and vector variational inequalities. J. Syst. Sci. Syst. Eng. 15, 284–297 (2006)


  19. Lin, G.H., Fukushima, M.: Stochastic equilibrium problems and stochastic mathematical programs with equilibrium constraints: a survey. Pac. J. Optim. 6, 455–482 (2010)


  20. Liu, J.X., Li, S.J.: Unconstrained optimization reformulation for stochastic nonlinear complementarity problems. Appl. Anal. 100, 1158–1179 (2021)


  21. Lu, F., Li, S.J.: Method of weighted expected residual for solving stochastic variational inequality problems. Appl. Math. Comput. 269, 651–663 (2015)


  22. Lu, F., Li, S.J., Yang, J.: Convergence analysis of weighted expected residual method for nonlinear stochastic variational inequality problems. Math. Methods Oper. Res. 82, 229–242 (2015)


  23. Luo, M.J., Lin, G.H.: Expected residual minimization method for stochastic variational inequality problems. J. Optim. Theory Appl. 140, 103–116 (2009)


  24. Luo, M.J., Lin, G.H.: Convergence results of ERM method for nonlinear stochastic variational inequality problems. J. Optim. Theory Appl. 142, 569–581 (2009)


  25. Luo, M.J., Lin, G.H.: Stochastic variational inequality problems with additional constraints and their applications in supply chain network equilibria. Pac. J. Optim. 7, 263–279 (2011)


  26. Luo, M.J., Zhang, K.: Convergence analysis of the approximation problems for solving stochastic vector variational inequality problems. Complexity 2020, Article ID 1203627 (2020)


  27. Ma, H.Q., Huang, N.J., Wu, M., O'Regan, D.: A new gap function for vector variational inequalities with an application. J. Appl. Math. 2013, Article ID 423040 (2013)


  28. Ma, H.Q., Wu, M., Huang, N.J.: Expected residual minimization method for stochastic variational inequality problems with nonlinear perturbations. Appl. Math. Comput. 219, 6256–6267 (2013)


  29. Niederreiter, H.: Random Number Generation and Quasi-Monte Carlo Methods. SIAM, Philadelphia (1992)


  30. Billingsley, P.: Probability and Measure, 3rd edn. Wiley, New York (1995)


  31. Shapiro, A., Xu, H.F.: Stochastic mathematical programs with equilibrium constraints, modelling and sample average approximation. Optimization 57, 395–418 (2008)


  32. Yamashita, N., Taji, K., Fukushima, M.: Unconstrained optimization reformulations of variational inequality problems. J. Optim. Theory Appl. 92, 439–456 (1997)


  33. Zhao, Y., Zhang, J., Yang, X.M., Lin, G.H.: Expected residual minimization formulation for a class of stochastic vector variational inequalities. J. Optim. Theory Appl. 175, 545–566 (2017)



Acknowledgements

The authors are grateful to the editor and referee for their valuable comments and suggestions.

Funding

This work was supported by National Natural Science Foundation of China (No. 11961006), Guangxi Natural Science Foundation (2020GXNSFAA159100) and Innovation Project of Guangxi Graduate Education (gxun-chxs2021055).

Author information


Contributions

Dan-Dan Dong contributed to methodology and writing-original draft; Guo-ji Tang contributed to conceptualization, methodology and supervision; Hui-ming Qiu contributed to the revision of the manuscript.

Corresponding author

Correspondence to Guo-ji Tang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Dong, Dd., Tang, Gj. & Qiu, Hm. On the unconstrained optimization reformulations for a class of stochastic vector variational inequality problems. J Inequal Appl 2023, 97 (2023). https://doi.org/10.1186/s13660-023-03011-2
