Smoothing sample average approximation method for solving stochastic second-order-cone complementarity problems

In this paper, we consider stochastic second-order-cone complementarity problems (SSOCCP). We first use the so-called second-order-cone complementarity function to present an expected residual minimization (ERM) model that gives reasonable solutions of SSOCCP. Then we introduce a smoothing function, by which we obtain a smoothing approximate ERM model. We further show that, as the smoothing parameter tends to zero, the global solution sequence and the weak stationary point sequence of this smoothing approximate ERM model converge to a global solution and a weak stationary point of the original ERM model, respectively. Moreover, since the ERM formulation contains an expectation, we employ a sample average approximation method to solve the smoothing ERM model. For the convergence analysis, we first show that the global optimal solutions of the smoothing sample average approximation problems converge to the global optimal solution of the ERM problem with probability one. Subsequently, we establish convergence results for the weak stationary points of the smoothing sample average approximation problems. Finally, some numerical examples are given to illustrate that the proposed methods are feasible.


Introduction
The second-order cone (SOC) in R^n (n ≥ 1) is defined as
K^n := {(x_1, x_2) ∈ R × R^{n−1} : ‖x_2‖ ≤ x_1},
where ‖·‖ denotes the Euclidean norm. The second-order-cone complementarity problems (SOCCP) are to find vectors x, y ∈ R^n and z ∈ R^l satisfying
⟨x, y⟩ = 0, x ∈ K, y ∈ K, F(x, y, z) = 0,
where ⟨·,·⟩ denotes the Euclidean inner product, K = K^{n_1} × ··· × K^{n_m} with m, n_1, ..., n_m ≥ 1 and n_1 + ··· + n_m = n, and F : R^n × R^n × R^l → R^n × R^l is a continuously differentiable mapping. In particular, if F(x, y, z) = f(x) − y with a continuously differentiable function f : R^n → R^n and K = K^n, then we can rewrite SOCCP as follows: Find (x, y) ∈ R^n × R^n such that
x ∈ K^n, y ∈ K^n, ⟨x, y⟩ = 0, y = f(x).
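For concreteness, the membership condition ‖x_2‖ ≤ x_1 defining K^n is straightforward to check numerically; the following minimal sketch (the function name is ours, not from the paper) tests whether a vector lies in the cone:

```python
import numpy as np

def in_soc(x, tol=1e-12):
    """Check whether x = (x1, x2) lies in the second-order cone K^n,
    i.e. whether ||x2|| <= x1, where x1 is the first component."""
    x = np.asarray(x, dtype=float)
    return bool(np.linalg.norm(x[1:]) <= x[0] + tol)

# (2, 1, 1): ||(1, 1)|| = sqrt(2) <= 2, so it lies in K^3.
print(in_soc([2.0, 1.0, 1.0]))   # True
# (1, 1, 1): ||(1, 1)|| = sqrt(2) > 1, so it does not.
print(in_soc([1.0, 1.0, 1.0]))   # False
```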
If n_i = 1 for all i, l = 0, and the mapping F has the form F(x, y, z) = F_0(x) − y for some F_0 : R^n → R^n, then the SOCCP become
x ≥ 0, F_0(x) ≥ 0, ⟨x, F_0(x)⟩ = 0,
which are the well-known nonlinear complementarity problems (NCP). In fact, many engineering and practical problems, such as three-dimensional frictional contact problems [1] and robust Nash equilibria [2], can be reformulated directly as SOCCP. In addition, for complementarity problems, which are a special case of SOCCP, basic theory, effective algorithms, and important applications in engineering and economics have been developed. However, in practice, several types of uncertain data, such as weather, supply, demand, and cost, may be involved in SOCCP. Stochastic formulations aim at a practical treatment of such problems under uncertainty. It is well known that stochastic second-order-cone complementarity problems (SSOCCP) are more complicated than SOCCP, and they have found applications in more fields. Therefore, it is meaningful and interesting to study the general SSOCCP.
In this paper, we consider the following SSOCCP: Find vectors x, y ∈ R^n and z ∈ R^l satisfying
⟨x, y⟩ = 0, x ∈ K, y ∈ K, F(x, y, z, ξ) = 0, a.s., (1.1)
where ⟨·,·⟩ denotes the Euclidean inner product, ξ ∈ Ω is a random variable, Ω is the underlying sample space, F : R^n × R^n × R^l × Ω → R^n × R^l is a continuously differentiable mapping, and a.s. is the abbreviation for "almost surely" under the given probability measure. In particular, if F(x, y, z, ξ) = f(x, ξ) − y with a continuously differentiable function f : R^n × Ω → R^n and K = K^n, then we can rewrite SSOCCP as follows: Find (x, y) ∈ R^n × R^n such that
x ∈ K^n, y ∈ K^n, ⟨x, y⟩ = 0, y = f(x, ξ), a.s. (1.2)
If n_i = 1 for all i, l = 0, and the mapping F has the form F(x, y, z, ξ) = F_0(x, ξ) − y for some F_0 : R^n × Ω → R^n, the SSOCCP become
x ≥ 0, F_0(x, ξ) ≥ 0, ⟨x, F_0(x, ξ)⟩ = 0, a.s., (1.3)
which are the well-known stochastic nonlinear complementarity problems (SNCP). Unless otherwise specified, in the following analysis we assume that K = K^n. This, however, does not lose any generality, because our analysis can easily be extended to the general case. Note that problem (1.3) may not have a solution in general. Three ways have been proposed to deal with (1.3). One way was suggested by Gürkan et al. [3], who used an expectation of F_0 instead of F_0 to give a simple nonlinear complementarity reformulation. Another way was presented by Chen and Fukushima [4], who made use of the so-called NCP function to present the expected residual minimization (ERM) formulation for SNCP. The last was proposed by Lin and Fukushima [5], who formulated SNCP as a special here-and-now model of a stochastic mathematical program with equilibrium constraints. Moreover, Luo and Wang [6] presented ERM and CVaR reformulations for solving stochastic generalized complementarity problems.
Motivated by the above work, we propose a reformulation of SSOCCP that gives reasonable solutions of SSOCCP. In the rest of this paper, we assume that Ω is a nonempty compact set. We often write x = (x_1, x_2) for (x_1^T, x_2^T)^T and (x, y, z) for (x^T, y^T, z^T)^T for simplicity. In addition, for any x = (x_1, x_2) ∈ R × R^{n−1} and y = (y_1, y_2) ∈ R × R^{n−1}, we define their Jordan product as x ∘ y = (x^T y, y_1 x_2 + x_1 y_2). We write x² to mean x ∘ x and x + y to mean the usual componentwise addition of vectors. Moreover, if x ∈ K^n, then there exists a unique vector in K^n, denoted by x^{1/2}, such that (x^{1/2})² = x^{1/2} ∘ x^{1/2} = x. We assume that F(·, ·, ·, ξ) is twice continuously differentiable with respect to (x, y, z) and that F(x, y, z, ·) is continuously differentiable with respect to ξ ∈ Ω. For each t = 1, ..., n + l, ∇_{(x,y,z)} F_t(x, y, z, ξ) denotes the gradient of F_t with respect to (x, y, z), ∇²_{(x,y,z)} F_t(x, y, z, ξ) denotes the Hessian matrix of F_t with respect to (x, y, z), and ∂F_t/∂x_j denotes the partial derivative with respect to x_j. We set ∇_{(x,y,z)} F(x, y, z, ξ) = [∇_{(x,y,z)} F_1(x, y, z, ξ), ..., ∇_{(x,y,z)} F_{n+l}(x, y, z, ξ)]^T. For an m × n matrix A, ‖A‖_F = (Σ_{i=1}^m Σ_{j=1}^n |a_{ij}|²)^{1/2} denotes its Frobenius norm. Moreover, I and O denote the identity matrix and the null matrix of suitable dimensions, respectively.
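The Jordan product and the square root x^{1/2} can be computed via the spectral decomposition x = λ_1 u_1 + λ_2 u_2 with λ_{1,2} = x_1 ∓ ‖x_2‖, a standard fact from the Jordan-algebra literature (see, e.g., [9]) that is not stated explicitly above. A hedged Python sketch, with function names of our own choosing:

```python
import numpy as np

def jordan_prod(x, y):
    """Jordan product x ∘ y = (x^T y, y1*x2 + x1*y2) associated with K^n."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.concatenate(([x @ y], y[0] * x[1:] + x[0] * y[1:]))

def soc_sqrt(x):
    """Square root x^{1/2} of x ∈ K^n via the spectral decomposition
    x = l1*u1 + l2*u2 with l_{1,2} = x1 -/+ ||x2||, u_{1,2} = (1, -/+ w)/2,
    where w = x2/||x2|| (w = 0 if x2 = 0)."""
    x = np.asarray(x, float)
    x2 = x[1:]
    n2 = np.linalg.norm(x2)
    w = x2 / n2 if n2 > 0 else np.zeros_like(x2)
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return np.sqrt(max(x[0] - n2, 0.0)) * u1 + np.sqrt(x[0] + n2) * u2

x = np.array([3.0, 1.0, 2.0])              # in K^3 since ||(1, 2)|| < 3
s = soc_sqrt(x)
print(np.allclose(jordan_prod(s, s), x))   # (x^{1/2})^2 = x, prints True
```

The check at the end verifies the defining property (x^{1/2})² = x stated in the text.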
The remainder of the paper is organized as follows. In Sect. 2, some preliminaries are given and a smoothing sample average approximation problem is presented. The convergence analyses of the global solutions and weak stationary points of this approximation problem are established in Sect. 3 and Sect. 4, respectively. Conclusions are given in Sect. 5.

Deterministic reformulation and its approximation problems
Because of the random element ξ, we cannot in general expect a vector (x, y, z) to satisfy (1.1) or (1.2) for almost all ξ ∈ Ω; that is, both (1.1) and (1.2) may fail to have a solution. Therefore, an important issue in the study of SSOCCP is to present an appropriate deterministic formulation of the considered problem. Before giving the reformulation, we first introduce some related functions and their properties.
A mapping φ : R^n × R^n → R^n is called an SOC complementarity function associated with K^n if
⟨x, y⟩ = 0, x ∈ K^n, y ∈ K^n ⇐⇒ φ(x, y) = 0.
One popular SOC complementarity function, called the natural residual function, was presented by Fukushima et al. in [7]. This function associated with K^n is defined as
φ_NR(x, y) := x − [x − y]_+,
where [x]_+ denotes the projection of x onto the second-order cone K^n, that is, [x]_+ = argmin_{x' ∈ K^n} ‖x − x'‖. For simplicity, we set φ_0(x, y) := φ_NR(x, y) in this paper. Note that this function is nondifferentiable but strongly semismooth.
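The projection [x]_+ has the closed form max(λ_1, 0)u_1 + max(λ_2, 0)u_2 in terms of the spectral decomposition of x with respect to K^n (a standard result, see, e.g., [7]); the sketch below implements the natural residual under that assumption, with function names of our own:

```python
import numpy as np

def proj_soc(z):
    """Projection of z onto K^n: clamp the spectral values at zero."""
    z = np.asarray(z, float)
    z2 = z[1:]
    n2 = np.linalg.norm(z2)
    w = z2 / n2 if n2 > 0 else np.zeros_like(z2)
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return max(z[0] - n2, 0.0) * u1 + max(z[0] + n2, 0.0) * u2

def phi_nr(x, y):
    """Natural residual SOC complementarity function: x - [x - y]_+."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return x - proj_soc(x - y)

# A complementary pair on the boundary of K^2: x = (1, 1), y = (1, -1)
# satisfy <x, y> = 0, x, y ∈ K^2, so phi_nr vanishes.
print(np.allclose(phi_nr([1.0, 1.0], [1.0, -1.0]), 0.0))   # True
```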
In reference [8], Chen et al. presented a continuously differentiable smoothing function φ_μ : R^n × R^n → R^n of φ_0(x, y),
φ_μ(x, y) := (1/2)(x + y − ((x − y)² + 4μ²e)^{1/2}),
where μ > 0 is a smoothing parameter and e = (1, 0, ..., 0)^T ∈ R^n, such that the pointwise limit lim_{μ→0+} φ_μ(x, y) = φ_0(x, y) holds. In particular, by Proposition 6.2 of [9], we know that φ_μ(x, y) is globally Lipschitz continuous in μ, which means that there exists a positive constant ν > 0 such that
‖φ_μ(x, y) − φ_0(x, y)‖ ≤ νμ for all (x, y) ∈ R^n × R^n.
Motivated by the work of Chen and Fukushima [4] for the special case of SSOCCP, we propose the following deterministic formulation for SSOCCP, called the ERM problem below, in which we try to find a vector (x, y, z) ∈ R^n × R^n × R^l that minimizes an expected residual for φ_0 and F(x, y, z, ξ), that is,
min θ(x, y, z) := E[‖φ_0(x, y)‖² + ‖F(x, y, z, ξ)‖²]. (2.1)
Here E stands for the expectation with respect to the random variable ξ ∈ Ω. It is well known that φ_0 is not differentiable with respect to (x, y). Hence, to solve the ERM model, we consider the following smoothing approximation of (2.1):
min θ_μ(x, y, z) := E[‖φ_μ(x, y)‖² + ‖F(x, y, z, ξ)‖²]. (2.2)
Since an expectation function is generally difficult to evaluate exactly, we employ a sample average approximation (SAA) method for numerical integration to solve (2.2). Thus, by taking independent identically distributed (i.i.d.) samples ξ_1, ..., ξ_{N_k}, one may construct, for each k, a smoothing sample average approximation to problem (2.2) as follows:
min θ_μ^k(x, y, z) := (1/N_k) Σ_{ξ_i ∈ Ω_k} [‖φ_μ(x, y)‖² + ‖F(x, y, z, ξ_i)‖²], (2.3)
where Ω_k := {ξ_i | i = 1, ..., N_k} is a set of observations generated by the sample average approximation method such that Ω_k ⊂ Ω and N_k → ∞ as k → +∞. For the sample average approximation method, the strong law of large numbers guarantees that the following result holds with probability one (abbreviated "w.p.1").
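The pointwise limit φ_μ → φ_0 can be checked numerically. The sketch below assumes the form φ_μ(x, y) = ½(x + y − ((x − y)² + 4μ²e)^{1/2}) (our reading of the smoothing function of [8]; squares and square roots are taken in the Jordan algebra), computed via the standard spectral decomposition with respect to K^n:

```python
import numpy as np

def soc_apply(f, z):
    """Apply a scalar function f to z through the spectral decomposition
    z = l1*u1 + l2*u2 with respect to K^n, l_{1,2} = z1 -/+ ||z2||."""
    z = np.asarray(z, float)
    z2 = z[1:]
    n2 = np.linalg.norm(z2)
    w = z2 / n2 if n2 > 0 else np.zeros_like(z2)
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return f(z[0] - n2) * u1 + f(z[0] + n2) * u2

def phi_0(x, y):
    """Natural residual x - [x - y]_+; the projection clamps spectral values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return x - soc_apply(lambda t: max(t, 0.0), x - y)

def phi_mu(x, y, mu):
    """Assumed smoothing 0.5*(x + y - ((x - y)^2 + 4 mu^2 e)^{1/2});
    the Jordan square of d is (||d||^2, 2*d1*d2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = x - y
    sq = np.concatenate(([d @ d + 4.0 * mu**2], 2.0 * d[0] * d[1:]))
    return 0.5 * (x + y - soc_apply(np.sqrt, sq))

# The discrepancy ||phi_mu - phi_0|| shrinks as mu tends to zero.
x, y = np.array([2.0, 1.0]), np.array([1.5, -0.5])
for mu in (1e-1, 1e-3, 1e-6):
    print(mu, np.linalg.norm(phi_mu(x, y, mu) - phi_0(x, y)))
```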
Lemma 2.1 ([10, 11]) Let η : Ω → R be integrable over Ω. Then we have
lim_{k→+∞} (1/N_k) Σ_{ξ_i ∈ Ω_k} η(ξ_i) = E[η(ξ)] w.p.1.
It is generally known that the boundedness of the iteration sequence is a basic requirement in iterative methods for solving various optimization problems. To ensure the boundedness of the iteration sequence for problem (2.1), it is essential to take the level sets of θ(x, y, z) into account. It is worth noting that if θ(x, y, z) is coercive, i.e., lim_{‖(x,y,z)‖→∞} θ(x, y, z) = +∞, then the level sets of θ(x, y, z) are bounded. Next, we study the coerciveness of θ(x, y, z) under appropriate conditions.
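Lemma 2.1 is just the strong law of large numbers applied to the i.i.d. samples ξ_i. A quick numerical illustration (the test function η and the uniform distribution of ξ are our own choices, purely for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)

def eta(xi):
    """An integrable test function of the random variable xi."""
    return xi**2 + np.sin(xi)

# For xi ~ Uniform(0, 1): E[eta(xi)] = 1/3 + (1 - cos 1).
exact = 1.0 / 3.0 + (1.0 - np.cos(1.0))

# The sample average approaches the expectation as N_k grows.
for N in (10**2, 10**4, 10**6):
    samples = rng.uniform(0.0, 1.0, size=N)
    print(N, abs(np.mean(eta(samples)) - exact))
```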
Proof It is easy to obtain the conclusion from Lemma 3.5 of [9]. More details about the Jordan algebra associated with SOCCP can also be found in [9].

Convergence of global optimal solution
In this section, we show that the global solution sequence of the smoothing approximation problem (2.2) converges to a global solution of the true ERM problem (2.1). We then show that the global solution sequence of the smoothing SAA problem (2.3) converges to a global solution of the true problem (2.1) as the smoothing parameter tends to 0+ and the sample size increases to +∞.
From now on, we denote by S(μ) the set of global optimal solutions of problem (2.2).
Proof Let D be a compact convex set containing the sequence {(x(μ), y(μ), z(μ))}. By the continuous differentiability of F on the compact set D × Ω and Theorem 16.8 of [12], we obtain (3.1). Next, note that (3.2) holds, where the first inequality follows from the definition of φ_0 and the second inequality follows from the nonexpansiveness of the projection. It then follows from (3.2) and the fact that φ_0 = lim_{μ→0+} φ_μ that (3.3) holds. Since (x(μ), y(μ), z(μ)) is a global optimal solution of problem (2.2), there holds (3.4). Letting μ → 0+ in (3.4), we get from (3.1), (3.3), and the fact that φ_0 = lim_{μ→0+} φ_μ that
θ(x̂, ŷ, ẑ) ≤ θ(x, y, z), ∀(x, y, z) ∈ R^n × R^n × R^l,
which indicates that (x̂, ŷ, ẑ) is a global optimal solution of problem (2.1).
Let μ vary as k increases, that is, let μ = μ k → 0 + as k → +∞. In the following, we discuss the convergence for this case.

Theorem 3.2 Suppose that, for each k, (x^k(μ_k), y^k(μ_k), z^k(μ_k)) is a global optimal solution of problem (2.3) and that lim_{k→+∞} (x^k(μ_k), y^k(μ_k), z^k(μ_k)) = (x̄, ȳ, z̄) w.p.1. Then (x̄, ȳ, z̄) is a global optimal solution of problem (2.1) w.p.1.
Proof Let A be a compact convex set containing the sequence {(x^k(μ_k), y^k(μ_k), z^k(μ_k))}. By the continuous differentiability of F on the compact set A × Ω, there exist constants C_1 > 0 and C_2 > 0 such that (3.5) holds. Moreover, by the mean-value theorem, for each k and each ξ_i there exist intermediate points such that (3.6) holds. Then we obtain (3.8), where the second inequality follows from (3.5), and the third inequality follows from (3.6) and the definition of ‖·‖_F. It then follows from Lemma 2.1 and (3.8) that (3.9) holds. Similar to the derivation of (3.3), we obtain (3.10), and hence from (3.9) and (3.10) we obtain (3.11). On the other hand, since (x^k(μ_k), y^k(μ_k), z^k(μ_k)) is a global optimal solution of problem (2.3), (3.12) holds. Letting k → +∞ in (3.12), we get from (3.11), Lemma 2.1, and the limit φ_0 = lim_{μ→0+} φ_μ that
θ(x̄, ȳ, z̄) ≤ θ(x, y, z), ∀(x, y, z) ∈ R^n × R^n × R^l,
which indicates that (x̄, ȳ, z̄) is a global optimal solution of problem (2.1).

Convergence of weak stationary point
We first give some definitions associated with weak stationary points of problem (2.1). We then show that the sequence of weak stationary points of (2.2) converges to a weak stationary point of (2.1) as μ → 0+. Finally, we consider the convergence of the stationary points of (2.3) when μ = μ_k → 0+ as k → +∞.

Definition 4.1
Let g : R^n → R be locally Lipschitz continuous. The Clarke generalized gradient of g at x_0 is defined as
∂g(x_0) := Co{ lim_{x_i → x_0} ∇g(x_i) : g is differentiable at x_i },
where Co{X} denotes the convex hull of a set X.

Definition 4.3
For each fixed μ, a point (x(μ), y(μ), z(μ)) satisfying the following equation is called a weak stationary point of (2.2), where 0 ∈ R^l is a zero vector.

Definition 4.4
For each fixed μ and k, a point (x^k(μ), y^k(μ), z^k(μ)) satisfying the following equation is called a stationary point of (2.3). More details about weak stationary points can be found in [13].
Proof Let D be a compact convex set containing the sequence {(x(μ), y(μ), z(μ))}. By the twice continuous differentiability of F on the compact set D × Ω and Theorem 16.8 of [12], we may differentiate the expectation under the integral sign. Since ∇_{(x,y)} φ_μ(x, y) is continuous on D, there exists C_3 > 0 such that (4.5) holds. Noting (4.6) and the gradient consistency of ‖φ_μ(x, y)‖², we know that the conditions of Theorem 3.1 in [13] hold. Taking the limit in (4.3), by (4.5) and the results of Theorem 3.1 in [13], we obtain (4.2) immediately. That is, (x̃, ỹ, z̃) is a weak stationary point of problem (2.1).
Remark 4.1 The gradient consistency of ‖φ_μ(x, y)‖² has been proved in [14]. In fact, the smoothing function φ_μ used in this paper is a special case of the smoothing functions considered in [14].

Lemma 4.1
Suppose that lim_{k→+∞} x^k(μ_k) = x̃, lim_{k→+∞} y^k(μ_k) = ỹ, and lim_{k→+∞} z^k(μ_k) = z̃ w.p.1. Then the following limit holds with probability one:
Proof Let A be a compact convex set containing the sequence {(x^k(μ_k), y^k(μ_k), z^k(μ_k))}. By the twice continuous differentiability of F on the compact set A and the continuity of F with respect to ξ on the compact set Ω, there exist constants C_4 > 0, C_5 > 0, C_6 > 0 such that (4.8) and (4.9) hold. Similar to the derivation of (3.7), taking the twice continuous differentiability of F and (4.9) into account, we obtain (4.10). Then, for j = 1, ..., n, the desired estimate holds, where the second inequality follows from (4.8), (4.9), (4.10), and (4.11), and the last limit follows from Lemma 2.1, (4.7), and (4.8). This means that the conclusion holds.

Numerical examples
In this section, we use the Matlab random number generator rand to generate sample sequences {ξ_1, ..., ξ_{N_k}} of ξ and employ the Matlab unconstrained optimization routine fminunc to solve the corresponding problems (2.1) and (2.3).
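The experiments themselves are run in Matlab, which we do not reproduce here. As a rough open-source analogue, the sketch below solves a small instance of the smoothing SAA problem (2.3) with scipy.optimize.minimize standing in for fminunc. The test instance (K = K², l = 0, f(x, ξ) = x + (ξ, 0) with ξ uniform) and the assumed form of φ_μ are illustrative choices of ours, not the paper's examples:

```python
import numpy as np
from scipy.optimize import minimize

def soc_apply(f, z):
    """Apply a scalar function f to z via the spectral decomposition
    of z with respect to K^n."""
    z2 = z[1:]
    n2 = np.linalg.norm(z2)
    w = z2 / n2 if n2 > 0 else np.zeros_like(z2)
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return f(z[0] - n2) * u1 + f(z[0] + n2) * u2

def phi_mu(x, y, mu):
    """Assumed smoothing of the natural residual (see Sect. 2)."""
    d = x - y
    sq = np.concatenate(([d @ d + 4.0 * mu**2], 2.0 * d[0] * d[1:]))
    return 0.5 * (x + y - soc_apply(np.sqrt, sq))

# Illustrative SSOCCP instance: find x, y in K^2 with <x, y> = 0 and
# y = f(x, xi) := x + (xi, 0) a.s., xi ~ Uniform(0, 1).
rng = np.random.default_rng(1)
xs = rng.uniform(0.0, 1.0, size=1000)      # i.i.d. samples of xi

def theta_saa(v, mu=1e-4):
    """Smoothing SAA objective: ||phi_mu||^2 + sample-average residual."""
    x, y = v[:2], v[2:]
    residual = np.mean((y[0] - x[0] - xs) ** 2 + (y[1] - x[1]) ** 2)
    return np.sum(phi_mu(x, y, mu) ** 2) + residual

sol = minimize(theta_saa, np.ones(4), method="BFGS")
x, y = sol.x[:2], sol.x[2:]
print("objective:", sol.fun)
print("x =", np.round(x, 3), " y =", np.round(y, 3))
```

Since y − x must track the random vector (ξ, 0), the residual cannot vanish; the minimizer balances the complementarity term against the variance of ξ, which is the trade-off the ERM model is designed to make.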

Results and discussion
In this paper, we give a reasonable reformulation of SSOCCP, called the ERM model. To solve this ERM model, we address two difficulties: how to handle the nondifferentiability of the objective function, and how to evaluate the expectation in the objective function. We employ a smoothing technique and a sample average approximation method to deal with these difficulties, respectively. Further, we show that the global solutions and weak stationary points of the smoothing sample average approximation of the ERM problem converge to those of the true ERM model.