An algorithm for solving a multi-valued variational inequality
Journal of Inequalities and Applications volume 2013, Article number: 218 (2013)
We propose a new extragradient method for solving a multi-valued variational inequality. It is showed that the method converges globally to a solution of the multi-valued variational inequality, provided the multi-valued mapping is continuous with nonempty compact convex values. Preliminary computational experience is also reported.
MSC: 47H04, 47H10, 47J20, 47J25.
We consider the following multi-valued variational inequality, denoted by : to find and such that
where C is a nonempty closed convex set in , F is a multi-valued mapping from C into with nonempty values, and and denote the inner product and the norm in , respectively.
Extragradient-type algorithms have been extensively studied in the literature; see [1–3]. Various algorithms for solving the multi-valued variational inequality have been extensively studied in the literature [4–15]. The well-known proximal point algorithm  requires the multi-valued mapping F to be monotone.  proposes a projection algorithm for solving the multi-valued variational inequality with a pseudomonotone mapping. In , choosing needs solving a single-valued variational inequality; see the expression (2.1) in .  presents a double projection algorithm, which is an improvement of , so that can be taken arbitrarily. In , however, choosing the hyperplane needs computing the supremum and hence is computationally expensive. To overcome this difficulty,  introduces an extragradient algorithm for solving the multi-valued variational inequality in which computing the supremum is avoided. In this paper, we present a new extragradient method for solving the multi-valued variational inequality. In our method, can be taken arbitrarily. Moreover, the main difference of our method from those of [6, 7, 11] is the procedure of Armijo-type linesearch. We also present numerical tests to compare our Algorithm 2.2 with those in [6, 11].
This paper is organized as follows. In Section 2, we present the algorithm details. We prove the preliminary results for convergence analysis in Section 3. Numerical results are reported in the last section.
Let us recall the definition of a continuous multi-valued mapping. F is said to be upper semicontinuous at if for every open set V containing , there is an open set U containing x such that for all . F is said to be lower semicontinuous at if given any sequence converging to x and any , there exists a sequence that converges to y. F is said to be continuous at if it is both upper semicontinuous and lower semicontinuous at x. If F is single-valued, then both upper semicontinuity and lower semicontinuity reduce to the continuity of F.
F is called pseudomonotone on C in the sense of Karamardian  if for any ,
Let S be the solution set of (1.1), that is, those points satisfying (1.1). Throughout this paper, we assume that the solution set S of problem (1.1) is nonempty and F is continuous on C with nonempty compact convex values satisfying the following property:
The property (2.2) holds if F is pseudomonotone on C.
Let denote the projector onto C, and let be a parameter.
Proposition 2.1 and solve problem (1.1) if and only if
Algorithm 2.2 Choose and two parameters . Set .
Step 1. Choose and let be the smallest nonnegative integer satisfying
Set . If , stop.
Step 2. Compute , where
Let and go to Step 1.
Remark 2.3 Let us compare the above algorithm with those in [6, 7, 11]. First, Aimijo-type linesearch procedures in the four algorithms are different. [6, 7, 11] use different procedures which replace (2.4) by the following ones:
where μ is required to be strictly less than 1 or , and . In our algorithm, μ can change according to the value of in each iteration and . Secondly, the way to generate the next iterate is different. In [6, 11], the next iterate is a projection of the current iterate onto the intersection of the feasible set C and a hyperplane, while in our algorithm as well as in  the next iterate is a projection onto the feasible set C. In addition, the searching directions in  and our algorithm are also different.
Lemma 2.4 Let C be a closed convex subset of . For any and , the following statements hold.
Proof See . □
The proof of the following lemma is easy and we omit it (see Lemma 3.1 in  for example).
Lemma 2.5 For any , and ,
We first show that Algorithm 2.2 is well defined.
Proposition 2.6 If is not a solution of problem (1.1), then there exists a nonnegative integer satisfying (2.3) and (2.4).
Proof Suppose that for all k and all , we have
where the second inequality follows from Lemma 2.5 and the equality follows from and . Since is continuous and , . Since F is lower semicontinuous, and , there is such that . Therefore,
Let in (2.8), we have
This contradiction completes the proof. □
3 Main results
Now we obtain the following auxiliary result that will be used for proving the convergence of Algorithm 2.2.
Theorem 3.1 If the assumption (2.2) holds and , then for any ,
Proof Let . Since and , it follows from (2.2) that
Similarly, we have
because . Since , from Lemma 2.4(i) we have
It follows from (3.2), (3.3) and (3.4) that
where the second inequality follows from (2.4). This completes the proof. □
Theorem 3.2 If is continuous with nonempty compact convex values on C and the assumption (2.2) holds, then the sequence generated by Algorithm 2.2 converges to a solution of (1.1).
Proof Let . It follows from Lemma 2.4(ii), Lemma 2.5, (2.5), (2.6) and (3.6) that
It follows that the sequence is nonincreasing, and hence is a convergent sequence. Therefore, is bounded. Since F is continuous with compact values, Proposition 3.11 in  implies that is a bounded set, and so are , and . Thus, is bounded. Then there exists a positive number M such that
It follows from (3.7) that
By the boundedness of , there exists a convergent subsequence converging to .
If is a solution of problem (1.1), we show next that the whole sequence converges to . Replacing by in the preceding argument, we obtain that the sequence is nonincreasing and hence converges. Since is an accumulation point of , some subsequence of converges to zero. This shows that the whole sequence converges to zero, hence .
Suppose now that is not a solution of problem (1.1). We show first that in Algorithm 2.2 cannot tend to ∞. Since F is continuous with compact values, Proposition 3.11 in  implies that is a bounded set, and so the sequence is bounded. Therefore, there exists a subsequence converging to . Since F is upper semicontinuous with compact values, Proposition 3.7 in  implies that F is closed, and so . By the definition of , we have
where the second inequality follows from Lemma 2.5 and the equality follows from .
If , then . The lower continuity of F, in turn, implies the existence of such that converges to . Therefore,
Letting , we obtain the contradiction
being continuous. Therefore, is bounded and so is .
By the boundedness of , it follows from (3.9) that . Since is continuous and the sequences and are bounded, there exists an accumulation point of such that . This implies that solves the variational inequality (1.1). Similar to the preceding proof, we obtain that .
Now we provide a result on the convergence rate of the iterative sequence generated by Algorithm 2.2. To establish this result, we need a certain error bound to hold locally (see (3.15) below). The research on an error bound is a large topic in mathematical programming. One can refer to the survey  for the roles played by error bounds in the convergence analysis of iterative algorithms; more recent developments on this topic are included in Chapter 6 in . A condition similar to (3.15) has also been used in  (see expression (10) therein) to analyze the convergence rate in a very general framework.
For any , define
We say that F is Lipschitz continuous on C if there exists a constant such that, for all , , where H denotes the Hausdorff metric. □
Theorem 3.3 In addition to the assumptions in Theorem 3.2, if F is Lipschitz continuous with modulus and if there exist positive constants c and δ such that
then there is a constant such that for sufficiently large i,
Proof Put . We first prove that for all i. By the construction of , we have . If , then clearly . Now we assume that . Since , it follows that the nonnegative integer . Thus the construction of implies that
and hence, as ,
Since and F is compact-valued, the definition of the Hausdorff metric implies the existence of such that
Let . By (3.8) and (3.15), we obtain that for sufficiently large i,
where the second inequality follows from .
Write α for . Applying Lemma 6 in Chapter 2 of , we have
This completes the proof. □
4 Numerical experiments
In this section, we present some numerical experiments for the proposed algorithm. The MATLAB codes are run on a PC (with CPU Intel P-T2390) under MATLAB Version 22.214.171.12404(R14) Service Pack 1. We compare the performance of our Algorithm 2.2, , Algorithm 1] and , Algorithm 1]. In Tables 1 and 2, ‘It.’ denotes the number of iteration and ‘CPU’ denotes the CPU time in seconds. The tolerance ε means when , the procedure stops.
Example 4.1 Let ,
and be defined by
Then the set C and the mapping F satisfy the assumptions of Theorem 3.2 and is a solution of the multi-valued variational inequality. Example 4.1 is tested in [6, 11]. We choose , for our algorithm; , , for Algorithm 1 in ; , , for Algorithm 1 in . We use as the initial point (Table 1 and Table 2).
Iusem AN, Svaiter BF: A variant of Korpelevich’ method for variational inequalities with a new search strategy. Optimization 1997, 42: 309–321. doi:10.1080/02331939708844365
Korpelevich GM: The extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 1976, 12: 747–756.
Wang YJ, Xiu NH, Zhang JZ: Modified extragradient method for variational inequalities and verification of solution existence. J. Optim. Theory Appl. 2003, 119(1):167–183. doi:10.1023/B:JOTA.0000005047.30026.b8
Auslender A, Teboulle M: Lagrangian duality and related multiplier methods for variational inequality problems. SIAM J. Optim. 2000, 10(4):1097–1115. doi:10.1137/S1052623499352656
Bao TQ, Khanh PQ: A projection-type algorithm for pseudomonotone nonlipschitzian multivalued variational inequalities. In Generalized Convexity, Generalized Monotonicity and Applications. Springer, New York; 2005:113–129.
Fang CJ, He YR: A double projection algorithm for multi-valued variational inequalities and a unified framework of the method. Appl. Math. Comput. 2011, 217: 9543–9551. doi:10.1016/j.amc.2011.04.009
Fang CJ, He YR: An extragradient method for generalized variational inequality. Pac. J. Optim. 2013, 9(1):47–59.
Fang SC, Peterson EL: Generalized variational inequalities. J. Optim. Theory Appl. 1982, 38(3):363–383. doi:10.1007/BF00935344
Fukushima M: The primal Douglas-Rachford splitting algorithm for a class of monotone mappings with application to the traffic equilibrium problem. Math. Program. 1996, 72(1, Ser. A):1–15. doi:10.1007/BF02592328
Konnov IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin; 2001.
Li FL, He YR: An algorithm for generalized variational inequality with pseudomonotone mapping. J. Comput. Appl. Math. 2009, 228: 212–218. doi:10.1016/j.cam.2008.09.014
Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14(5):877–898. doi:10.1137/0314056
Saigal R: Extension of the generalized complementarity problem. Math. Oper. Res. 1976, 1(3):260–266. doi:10.1287/moor.1.3.260
Salmon G, Strodiot JJ, Nguyen VH: A bundle method for solving variational inequalities. SIAM J. Optim. 2003, 14(3):869–893. doi:10.1137/S1052623401384096
Xia FQ, Huang NJ: A projection-proximal point algorithm for solving generalized variational inequalities. J. Optim. Theory Appl. 2011, 150: 98–117. doi:10.1007/s10957–011–9825–3
Karamardian S: Complementarity problems over cones with monotone and pseudomonotone maps. J. Optim. Theory Appl. 1976, 18(4):445–454. doi:10.1007/BF00932654
Zarantonello EH: Projections on convex sets in Hilbert space and spectral theory. In Contributions to Nonlinear Functional Analysis. Academic Press, New York; 1971.
Bnouhachem A: A self-adaptive method for solving general mixed variational inequalities. J. Math. Anal. Appl. 2005, 309: 136–150. doi:10.1016/j.jmaa.2004.12.023
Aubin JP, Ekeland I: Applied Nonlinear Analysis. Wiley, New York; 1984.
Pang JS: Error bounds in mathematical programming. Math. Program. 1997, 79: 299–332. doi:10.1007/BF02614322
Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementary Problems. Springer, New York; 2003.
Solodov MV: Convergence rate analysis of iterative algorithms for solving variational inequality problems. Math. Program. 2003, 96: 513–528. doi:10.1007/s10107–002–0369-z
Polyak BT: Introduction to Optimization. Optimization Software Inc., Publications Division, New York; 1987.
This work is partially supported by Natural Science Foundation Project of CQ CSTC (No. 2010BB9401), Science and Technology Project of Chongqing Municipal Education Committee of China (Nos. KJ110509 and KJ100513) and Foundation of Chongqing University of Posts and Telecommunications for the Scholars with Doctorate (No. A2012-04).
The authors declare that they have no competing interests.
All the authors contributed equally to the writing of the present article. And they also read and approved the final manuscript.
About this article
Cite this article
Fang, C., Chen, S. & Yang, C. An algorithm for solving a multi-valued variational inequality. J Inequal Appl 2013, 218 (2013). https://doi.org/10.1186/1029-242X-2013-218