Efficient implementation of a modified and relaxed hybrid steepest-descent method for a type of variational inequality
Haiwen Xu
https://doi.org/10.1186/1029-242X-2012-93
© Xu; licensee Springer. 2012
Received: 1 October 2011
Accepted: 19 April 2012
Published: 19 April 2012
Abstract
To reduce the difficulty and complexity in computing the projection from a real Hilbert space onto a nonempty closed convex subset, researchers have provided a hybrid steepest-descent method for solving VI(F, K) and a subsequent three-step relaxed version of this method. In a previous study, the latter was used to develop a modified and relaxed hybrid steepest-descent (MRHSD) method. However, choosing an efficient and implementable nonexpansive mapping is still a difficult problem. We first establish the strong convergence of the MRHSD method for variational inequalities under different conditions that simplify the proof, which differs from previous studies. Second, we design an efficient implementation of the MRHSD method for a type of variational inequality problem based on the approximate projection contraction method. Finally, we design a set of practical numerical experiments. The results demonstrate that this is an efficient implementation of the MRHSD method.
1 Introduction
Variational inequality problems were introduced by Hartman and Stampacchia and were subsequently expanded in several classic articles [1, 2]. Variational inequality theory provides a method for unifying the treatment of equilibrium problems encountered in areas as diverse as economics, optimal control, game theory, transportation science, and mechanics. Variational inequality problems have many applications, such as in mathematical optimization problems, complementarity problems and fixed point problems [3–7]. Thus, it is important to solve variational inequality problems and much research has been devoted to this topic [8–12].
Recall that x* ∈ K solves VI(F, K), that is, ⟨F(x*), x − x*⟩ ≥ 0 for all x ∈ K, if and only if x* = P_K[x* − βF(x*)] for any β > 0, where P_K denotes the projection from H onto K. Thus, we can solve a variational inequality problem via an equivalent fixed-point problem under appropriate conditions. For example, if F is an η-strongly monotone and κ-Lipschitzian mapping on K and β > 0 is small enough, then the mapping x ↦ P_K[x − βF(x)] is a contraction; hence, Banach's fixed point theorem guarantees convergence of the Picard iterates x_{k+1} = P_K[x_k − βF(x_k)]. Such a method is called a projection method, as described elsewhere [13–17].
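As a concrete illustration (not taken from the paper), the following MATLAB sketch runs the basic projection method under simple assumed choices: an affine mapping F(x) = Mx + q with M symmetric positive definite, and a box constraint set K, so that P_K reduces to componentwise clipping.

n = 50;
A = 0.1*rand(n);
M = (A + A')/2 + eye(n);            % symmetric positive definite, so F is strongly monotone
q = rand(n, 1) - 0.5;
F = @(x) M*x + q;                   % illustrative affine mapping F(x) = M*x + q
lb = zeros(n, 1); ub = ones(n, 1);
projK = @(x) min(max(x, lb), ub);   % P_K for the box K = {x : lb <= x <= ub}
beta = 1/norm(M);                   % step size small enough to give a contraction
x = zeros(n, 1);
for k = 1:1000
    xnew = projK(x - beta*F(x));    % Picard iterate x_{k+1} = P_K[x_k - beta*F(x_k)]
    if norm(xnew - x) <= 1e-10
        x = xnew; break;
    end
    x = xnew;
end

In this illustrative setting, with β = 1/∥M∥ the iteration map is a contraction with factor at most 1 − λ_min(M)/λ_max(M), so the loop terminates after a modest number of iterations.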
To reduce the complexity of computing the projection P_K, Yamada and Deutsch developed a hybrid steepest-descent method for solving VI(F, K) [7, 8], but choosing an efficient and implementable nonexpansive mapping is still a difficult problem. Subsequently, Xu and Kim [9] and Zeng et al. [10] proved the convergence of the hybrid steepest-descent method. Noor introduced new approximation schemes after analyzing several three-step iterative methods [18]. Ding et al. provided a three-step relaxed hybrid steepest-descent method for variational inequalities [11], and Yao et al. [19] gave a simple proof of its convergence under different conditions. Our group has described a modified and relaxed hybrid steepest-descent (MRHSD) method that makes greater use of historical information and minimizes information loss [20].
This article makes three new contributions compared with previous results. First, we prove strong convergence of the MRHSD method under different and suitable restrictions imposed on the parameters (Condition 3.2); the proof of strong convergence differs from the previous proof [20]. Second, based on the approximate projection contraction method, we design an efficient implementation of the MRHSD method for a type of variational inequality problem. Third, we design practical numerical experiments, and the results verify that the implementation is efficient. Furthermore, the MRHSD method under Condition 3.2 is more efficient than under Condition 3.1.
The remainder of the article is organized as follows. In Section 2, we review several lemmas and preliminaries. In Section 3, we prove the convergence theorem. We discuss an implementation of the MRHSD method for a type of variational inequality problem in Section 4. Section 5 presents numerical experiments and results applicable to finance and statistics. Section 6 concludes.
2 Preliminaries
To prove the convergence theorem, we first introduce several lemmas and the main results reported by others [10, 11, 21].
Lemma 1 Let {x_n} and {y_n} be bounded sequences in a Banach space X and let {ζ_n} be a sequence in [0, 1] with 0 < liminf_{n→∞} ζ_n ≤ limsup_{n→∞} ζ_n < 1. Suppose that x_{n+1} = (1 − ζ_n)y_n + ζ_n x_n for all n ≥ 0 and limsup_{n→∞}(∥y_{n+1} − y_n∥ − ∥x_{n+1} − x_n∥) ≤ 0.
Then lim_{n→∞} ∥y_n − x_n∥ = 0.
Lemma 2 Let {a_n} be a sequence of nonnegative real numbers satisfying a_{n+1} ≤ (1 − α_n)a_n + α_n β_n + γ_n, n ≥ 0, where {α_n}, {β_n}, and {γ_n} satisfy the following conditions:
- (1)
{α_n} ⊂ [0, 1], Σ_{n=0}^∞ α_n = ∞, or, equivalently, Π_{n=0}^∞ (1 − α_n) = 0;
- (2)
limsup_{n→∞} β_n ≤ 0;
- (3)
{γ_n} ⊂ [0, ∞), Σ_{n=0}^∞ γ_n < ∞.
Then lim_{n→∞} a_n = 0.
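As a quick illustration of how Lemma 2 is applied (the sequences below are chosen purely for illustration and do not come from the paper), suppose a nonnegative sequence satisfies

a_{n+1} ≤ (1 − 1/n)a_n + (1/n)(1/√n) + 1/n², n ≥ 1.

Here α_n = 1/n gives Σ α_n = ∞, β_n = 1/√n gives limsup_{n→∞} β_n = 0 ≤ 0, and γ_n = 1/n² gives Σ γ_n < ∞, so Lemma 2 yields a_n → 0. This is the pattern in which the lemma is applied several times in Step 5 of the proof of Theorem 2 below.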
Lemma 3 (Demiclosedness principle) Assume that T is a nonexpansive self-mapping on a nonempty closed convex subset K of a Hilbert space H. If T has a fixed point, then (I - T) is demiclosed; that is, whenever {x n } is a sequence in K weakly converging to some x ∈ K and the sequence {(I - T)x n } strongly converges to some y ∈ H, it follows that (I - T)x = y, where I is the identity operator of H.
The following inequality is an immediate consequence of the inner product on a Hilbert space: ∥x + y∥² ≤ ∥x∥² + 2⟨y, x + y⟩ for all x, y ∈ H.
A basic property of the projection mapping onto a closed convex subset K of a Hilbert space is given in the following lemma: for any x, y ∈ H and z ∈ K,
- (1)
⟨P_K(x) − x, z − P_K(x)⟩ ≥ 0;
- (2)
∥P_K(x) − P_K(y)∥² ≤ ∥x − y∥² − ∥P_K(x) − x + y − P_K(y)∥².
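As a quick one-dimensional check of property (1) (an illustrative example, not from the paper), take H = ℝ, K = [0, 1], and x = 2. Then P_K(2) = 1 and ⟨P_K(2) − 2, z − P_K(2)⟩ = (1 − 2)(z − 1) = 1 − z ≥ 0 for every z ∈ K, as the lemma asserts.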
For λ ∈ (0, 1] and μ ∈ (0, 2η/κ²), the mapping T^λ x := Tx − λμF(Tx) satisfies the following contraction property under these conditions: ∥T^λ x − T^λ y∥ ≤ (1 − λτ)∥x − y∥ for all x, y ∈ H,
where τ = 1 − √(1 − μ(2η − μκ²)) ∈ (0, 1].
3 Convergence theorem
Before analysis and proof, we first review the MRHSD method and related results [20].
Algorithm [20]
Take three fixed numbers t, ρ, γ ∈ (0, 2η/κ²), and let {α_n} ⊂ [0, 1), {β_n}, {γ_n} ⊂ [0, 1] and {λ_n}, , . Starting with an arbitrarily chosen initial point x_0 ∈ H, compute the sequences {x_n}, , such that
Step 1: ,
Step 2: ,
Step 3: ,
where T: H → H is a nonexpansive mapping. However, choosing an efficient and implementable nonexpansive mapping T is a difficult problem, and previous studies neither designed numerical experiments nor described such a mapping [8–11, 19, 20]. In Section 4, we design an efficient and implementable nonexpansive mapping T for a type of variational inequality problem based on the approximate projection contraction method. We then review the conditions and the theorem presented by Xu et al. [20].
Condition 3.1 [20]
- (1)
, , ;
- (2)
, , ;
- (3)
, , ;
- (4)
, ∀ n ≥ 1.
Theorem 1 [20] Under Condition 3.1, the sequence {x_n} generated by algorithm [20] converges strongly to x* ∈ K, and x* is the unique solution of VI(F, K).
We provide different conditions and establish a strong convergence theorem for the MRHSD method for variational inequalities under these conditions. Note that Condition 3.2 and a strong convergence theorem (Theorem 2) are the first contributions of the article.
Condition 3.2
- (1)
, , ;
- (2)
, ;
- (3)
, ∀n ≥ 1.
Theorem 2 Assume that the parameter sequences {α_n}, {β_n}, {γ_n}, and {λ_n} satisfy Condition 3.2. Then the sequence {x_n} generated by algorithm [20] converges strongly to x* ∈ K, and x* is the unique solution of VI(F, K).
Proof. We divide the proof into several steps.
Step 1. [20] The sequences generated by the algorithm, including {x_n}, are bounded.
where M_0 = max{3∥x_0 − x*∥, 3(ρ + γ + t)∥F(x*)∥/τ}.
Step 2. ∥x_{n+1} − x_n∥ → 0.
Step 3. ∥x_{n+1} − Tx_n∥ → 0.
Corollary 1 ∥x_n − Tx_n∥ → 0.
Step 4. .
Step 5. ∥x_n − x*∥ → 0. To prove this conclusion, we apply Lemma 2 several times.
and M0 ≪ M < ∞.
which completes the proof.
The following section presents the second contribution of this article.
4 Implementation of the MRHSD method for a type of variational inequality
where K1 and K2 are nonempty closed convex subsets of H.
Assuming that the projections P_{K1} and P_{K2} can be computed without much difficulty, we can compute Tx efficiently. Since Tx ≈ P_K[x], we partly retain the efficiency of the projection contraction method. Clearly, the fixed-point set is Fix(T) = K, and T is a nonexpansive mapping.
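To make this concrete, the following MATLAB sketch builds one implementable nonexpansive mapping with fixed-point set K1 ∩ K2 by composing the two individual projections. This is only an illustration under assumed sets (K1 a box of matrices, K2 the cone of symmetric positive semidefinite matrices); it is not the approximate projection contraction routine used in the paper, which approximates P_K itself.

function TX = T_example(X, HL, HU)
% Illustrative nonexpansive mapping T = P_{K2} o P_{K1}, where
% K1 = {X : HL <= X <= HU} (componentwise box) and
% K2 = {X : X symmetric positive semidefinite} are assumed choices.
Y = min(max(X, HL), HU);        % P_{K1}: componentwise clipping onto the box
Y = (Y + Y')/2;                 % P_{K2}, part 1: take the symmetric part
[V, D] = eig(Y);
TX = V*max(D, 0)*V';            % P_{K2}, part 2: drop the negative eigenvalues
end

Since both projections are firmly nonexpansive, their composition is nonexpansive, and its fixed-point set equals K1 ∩ K2 whenever the intersection is nonempty.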
5 Numerical experiments
To solve variational inequality problem (14) by the MRHSD method, we take one set of parameter sequences satisfying Condition 3.1.
Furthermore, we take two different parameter sequences satisfying Condition 3.2 to demonstrate the effect of different choices of α_n.
which can easily be computed, and the fixed-point set is Fix(T) = K. Moreover, according to Theorems 1 and 2, the sequences generated by algorithm [20] under Conditions 3.1 and 3.2 are convergent.
The computation started from ones(n, n) in MATLAB and stopped when ∥x_{k+1} − x_k∥ ≤ 10^-4 or 10^-5. All codes were implemented in MATLAB 7.0 and run on an ASUS notebook with a 1.70 GHz Pentium processor and 768 MB of RAM.
Numerical results for a tolerance of 10^-4 (γ = ρ = t = c0 = 0.01; It = number of iterations, cpu = CPU time)

| Matrix size n | Condition 3.1 It | Condition 3.1 cpu | Condition 3.2a It | Condition 3.2a cpu | Condition 3.2b It | Condition 3.2b cpu |
|---|---|---|---|---|---|---|
| 100 | 24 | 1.04 | 22 | 0.98 | 20 | 0.84 |
| 200 | 34 | 8.35 | 30 | 7.48 | 22 | 5.25 |
| 300 | 41 | 29.93 | 36 | 27.48 | 26 | 19.10 |
| 400 | 46 | 72.46 | 42 | 67.63 | 30 | 51.84 |
| 500 | 52 | 149.64 | 48 | 142.07 | 34 | 98.93 |
Numerical results for a tolerance of 10^-5 (γ = ρ = t = c0 = 0.01; It = number of iterations, cpu = CPU time)

| Matrix size n | Condition 3.1 It | Condition 3.1 cpu | Condition 3.2a It | Condition 3.2a cpu | Condition 3.2b It | Condition 3.2b cpu |
|---|---|---|---|---|---|---|
| 100 | 75 | 3.59 | 68 | 2.83 | 46 | 2.37 |
| 200 | 106 | 25.94 | 96 | 22.71 | 66 | 15.53 |
| 300 | 128 | 94.37 | 116 | 89.52 | 78 | 58.30 |
| 400 | 147 | 236.59 | 132 | 205.70 | 88 | 138.28 |
| 500 | 165 | 529.44 | 148 | 420.73 | 100 | 285.24 |
Test examples
In this example, we generate the data in a manner similar to that in [12]. Note that it is very difficult to compute the examples using the extended contraction method [12] when C is asymmetric. However, the MRHSD method can efficiently compute the examples when C is asymmetric.
MATLAB code:
C = zeros(n, n); HU = ones(n, n)*0.1; HL = -HU;   % n is the matrix dimension (100-500 in the experiments)
t = 0;                                            % seed of the linear congruential generator (assumed; not specified in the original)
for i = 1:n
    for j = 1:n
        t = mod(t*42108 + 13846, 46273);          % linear congruential pseudo-random number
        C(i, j) = t*2/46273 - 1;                  % entries of C spread over (-1, 1)
    end;
end;
for i = 1:n
    C(i, i) = abs(C(i, i))*2;                     % strengthen the diagonal of C
    HU(i, i) = 1; HL(i, i) = 1;                   % fix the diagonal bounds to 1
end;
The numerical results demonstrate that this implementation of the MRHSD method is efficient. Furthermore, the MRHSD method under Condition 3.2 is more efficient than under Condition 3.1. These numerical experiments and results are the third contribution of the article.
6 Conclusions and discussions
We have proved strong convergence of the MRHSD method under Condition 3.2, which differs from Condition 3.1. The proof can be simplified using Condition 3.2, which imposes suitable restrictions on the parameters. The result can be considered an improvement and refinement of previous results [20]. In particular, we designed an efficient implementation of the MRHSD method based on the approximate projection contraction method. Numerical experiments demonstrated that this is an efficient implementation and that the MRHSD method under Condition 3.2 is more efficient than under Condition 3.1. However, choosing an efficient and implementable nonexpansive mapping for a general VI(F, K) is still a difficult problem.
Declarations
Acknowledgements
This research was supported by the National Science and Technology Support Program (Grant No. 2011BAH24B06) and the Science Foundation of the Civil Aviation Flight University of China (Grant No. J2010-45).
References
- Hartman P, Stampacchia G: On some nonlinear elliptic differential functional equations. Acta Math 1966, 115: 153–188.
- Stampacchia G: Variational inequalities in theory and applications of monotone operators. In Proceedings of the NATO Advanced Study Institute, Venice, Italy. Edizioni Oderisi, Gubbio, Italy; 1968:102–192.
- Harker PT, Pang JS: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math Program 1990, 48: 161–220.
- Smith MJ: The existence, uniqueness and stability of traffic equilibria. Transport Res 1979, 13B: 295–304.
- Mathiesen L: Computation of economic equilibria by a sequence of linear complementarity problems. Math Program Stud 1985, 23: 144–162.
- Huang NJ, Li J, Wu SY: Optimality conditions for vector optimization problems. J Optim Theory Appl 2009, 142: 323–342.
- Deutsch F, Yamada I: Minimizing certain convex functions over the intersection of the fixed-point sets of nonexpansive mappings. Numer Funct Anal Optim 1998, 19: 33–56.
- Yamada I: The hybrid steepest-descent method for variational inequality problems over the intersection of the fixed-point sets of nonexpansive mappings. Stud Comput Math 2001, 8: 473–504.
- Xu HK, Kim TH: Convergence of hybrid steepest-descent methods for variational inequalities. J Optim Theory Appl 2003, 119: 185–201.
- Zeng LC, Wong NC, Yao JC: Convergence analysis of modified hybrid steepest-descent methods with variable parameters for variational inequalities. J Optim Theory Appl 2007, 132: 51–69.
- Ding XP, Lin YC, Yao JC: Three-step relaxed hybrid steepest-descent methods for variational inequalities. Appl Math Mech 2007, 28: 1029–1036.
- He BS, Liao LZ, Wang X: Proximal-like contraction methods for monotone variational inequalities in a unified framework. 2009. [http://www.optimization-online.org/DB-HTML/2008/12/2163.html]
- He BS, Yang ZH, Yuan XM: An approximate proximal-extragradient type method for monotone variational inequalities. J Math Anal Appl 2004, 300: 362–374.
- He BS: A new method for a class of linear variational inequalities. Math Program 1994, 66: 137–144.
- He BS: A class of projection and contraction methods for monotone variational inequalities. Appl Math Optim 1997, 35: 69–76.
- Li M, Bnouhachem A: A modified inexact operator splitting method for monotone variational inequalities. J Glob Optim 2008, 41: 417–426.
- Sun DF: A projection and contraction method for the nonlinear complementarity problem and its extensions. Math Numer Sin 1994, 16: 183–194.
- Noor MA: New approximation schemes for general variational inequalities. J Math Anal Appl 2000, 251: 217–229.
- Yao YH, Noor MA, Chen RD, Liou YC: Strong convergence of three-step relaxed hybrid steepest-descent methods for variational inequalities. Appl Math Comput 2008, 201: 175–183.
- Xu HW, Song EB, Pan HP, Shao H, Sun LM: The modified and relaxed hybrid steepest-descent methods for variational inequalities. In Proceedings of the 1st International Conference on Modelling and Simulation, Volume II. World Academic Press; 2008:169–174.
- Suzuki T: Strong convergence of Krasnoselskii and Mann's type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J Math Anal Appl 2005, 305: 227–239.
- Gao Y, Sun DF: Calibrating least squares covariance matrix problems with equality and inequality constraints. 2008. [http://www.math.nus.edu.sg/matsundf/CaliMat_June_2008.pdf]
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.