An algorithm for solving a multi-valued variational inequality

Abstract

We propose a new extragradient method for solving a multi-valued variational inequality. It is shown that the method converges globally to a solution of the multi-valued variational inequality, provided that the multi-valued mapping is continuous with nonempty compact convex values. Preliminary computational experience is also reported.

MSC: 47H04, 47H10, 47J20, 47J25.

1 Introduction

We consider the following multi-valued variational inequality, denoted by MVI(F,C): to find $x^* \in C$ and $\xi \in F(x^*)$ such that

$$\langle \xi, y - x^* \rangle \ge 0, \quad \forall y \in C,$$
(1.1)

where C is a nonempty closed convex set in $\mathbb{R}^n$, F is a multi-valued mapping from C into $\mathbb{R}^n$ with nonempty values, and $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$ denote the inner product and the norm in $\mathbb{R}^n$, respectively.

Extragradient-type algorithms have been extensively studied in the literature; see [1–3]. Various algorithms for solving the multi-valued variational inequality have also been studied extensively [4–15]. The well-known proximal point algorithm [12] requires the multi-valued mapping F to be monotone. A projection algorithm for solving the multi-valued variational inequality with a pseudomonotone mapping is proposed in [11]; there, choosing $u_i \in F(x_i)$ requires solving a single-valued variational inequality (see expression (2.1) in [11]). A double projection algorithm improving on [11] is presented in [6], so that $u_i \in F(x_i)$ can be taken arbitrarily; in [6], however, choosing the hyperplane requires computing a supremum and is therefore computationally expensive. To overcome this difficulty, [7] introduces an extragradient algorithm for solving the multi-valued variational inequality in which computing the supremum is avoided. In this paper, we present a new extragradient method for solving the multi-valued variational inequality. In our method, $u_i \in F(x_i)$ can be taken arbitrarily. Moreover, the main difference between our method and those of [6, 7, 11] is the Armijo-type linesearch procedure. We also present numerical tests comparing our Algorithm 2.2 with the algorithms in [6, 11].

This paper is organized as follows. In Section 2, we present the algorithm details and some preliminary results. The main convergence results are proved in Section 3. Numerical results are reported in the last section.

2 Algorithms

Let us recall the definition of a continuous multi-valued mapping. F is said to be upper semicontinuous at $x \in C$ if for every open set V containing F(x), there is an open set U containing x such that $F(y) \subset V$ for all $y \in C \cap U$. F is said to be lower semicontinuous at $x \in C$ if, given any sequence $x_k$ converging to x and any $y \in F(x)$, there exists a sequence $y_k \in F(x_k)$ that converges to y. F is said to be continuous at $x \in C$ if it is both upper semicontinuous and lower semicontinuous at x. If F is single-valued, then both upper semicontinuity and lower semicontinuity reduce to the continuity of F.

F is called pseudomonotone on C in the sense of Karamardian [16] if for any $x, y \in C$,

$$\langle v, x - y \rangle \ge 0 \ \text{for some}\ v \in F(y) \quad\Longrightarrow\quad \langle u, x - y \rangle \ge 0 \ \text{for all}\ u \in F(x).$$
(2.1)

Let S be the solution set of (1.1), that is, the set of points $x^* \in C$ satisfying (1.1). Throughout this paper, we assume that the solution set S of problem (1.1) is nonempty and that F is continuous on C with nonempty compact convex values satisfying the following property:

$$\langle \zeta, y - x \rangle \ge 0, \quad \forall y \in C, \forall \zeta \in F(y), \forall x \in S.$$
(2.2)

The property (2.2) holds if F is pseudomonotone on C. Indeed, if $x \in S$, then $\langle \xi, y - x \rangle \ge 0$ for some $\xi \in F(x)$ and all $y \in C$, and (2.1) then gives $\langle \zeta, y - x \rangle \ge 0$ for all $\zeta \in F(y)$.

Let $P_C$ denote the projector onto C, and let $\mu > 0$ be a parameter.

Proposition 2.1 $x \in C$ and $\xi \in F(x)$ solve problem (1.1) if and only if

$$r_\mu(x, \xi) := x - P_C(x - \mu\xi) = 0.$$
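
For concreteness, the stopping test can be implemented directly from this residual map. Below is a minimal sketch in Python/NumPy (the paper's own experiments use MATLAB); the names `residual` and `proj_C` are ours, and `proj_C` stands for any user-supplied Euclidean projector onto C:

```python
import numpy as np

def residual(x, xi, mu, proj_C):
    """Natural residual r_mu(x, xi) = x - P_C(x - mu * xi).

    By Proposition 2.1, x in C together with xi in F(x) solves
    MVI(F, C) exactly when this vector is zero.
    """
    return x - proj_C(x - mu * xi)
```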

Algorithm 2.2 Choose $x_0 \in C$ and two parameters $\gamma, \sigma \in (0,1)$. Set $i = 0$.

Step 1. Choose $u_i \in F(x_i)$ and let $k_i$ be the smallest nonnegative integer satisfying

$$v_i \in F\bigl(P_C(x_i - \gamma^{k_i} u_i)\bigr),$$
(2.3)

$$\gamma^{k_i} \bigl\langle u_i - v_i, r_{\gamma^{k_i}}(x_i, u_i) \bigr\rangle \le \sigma \bigl\| r_{\gamma^{k_i}}(x_i, u_i) \bigr\|^2.$$
(2.4)

Set $\eta_i = \gamma^{k_i}$. If $r_{\eta_i}(x_i, u_i) = 0$, stop.

Step 2. Compute $x_{i+1} := P_C(x_i - \alpha_i d_i)$, where

$$d_i = r_{\eta_i}(x_i, u_i) + \eta_i v_i,$$
(2.5)

$$\alpha_i = \frac{(1-\sigma) \| r_{\eta_i}(x_i, u_i) \|^2}{\| d_i \|^2}.$$
(2.6)

Let i:=i+1 and go to Step 1.
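
For readers who wish to experiment, here is a minimal sketch of Algorithm 2.2 in Python/NumPy (the paper's experiments use MATLAB; all identifiers here are ours). It assumes that `F` returns one arbitrarily chosen element of F(x), which the method permits, and that `proj_C` is a Euclidean projector onto C; the exact test $r_{\eta_i}(x_i, u_i) = 0$ in Step 1 is replaced by a small numerical tolerance:

```python
import numpy as np

def extragradient_mvi(x0, F, proj_C, gamma=0.9, sigma=0.5, tol=1e-6, max_iter=10000):
    """Sketch of Algorithm 2.2 for MVI(F, C)."""
    x = np.asarray(x0, dtype=float)
    for i in range(max_iter):
        u = F(x)                        # Step 1: any selection u_i in F(x_i)
        k = 0
        while True:                     # Armijo-type linesearch (2.3)-(2.4);
            eta = gamma ** k            # terminates for suitable v by Proposition 2.6
            y = proj_C(x - eta * u)
            r = x - y                   # r_eta(x_i, u_i)
            v = F(y)                    # v_i in F(P_C(x_i - gamma^k u_i))
            if eta * np.dot(u - v, r) <= sigma * np.dot(r, r):
                break
            k += 1
        if np.linalg.norm(r) <= tol:    # stop: residual is (numerically) zero
            return x, i
        d = r + eta * v                 # search direction (2.5)
        alpha = (1.0 - sigma) * np.dot(r, r) / np.dot(d, d)   # steplength (2.6)
        x = proj_C(x - alpha * d)       # Step 2: next iterate
    return x, max_iter
```

With a continuous single-valued selection from F, the inner loop terminates whenever $x_i$ is not a solution, by essentially the argument of Proposition 2.6 below.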

Remark 2.3 Let us compare the above algorithm with those in [6, 7, 11]. First, the Armijo-type linesearch procedures in the four algorithms are different. The procedures in [6, 7, 11] replace (2.4) by one of the following:

$$\bigl\langle v_i, r_\mu(x_i, u_i) \bigr\rangle \ge \sigma \bigl\| r_\mu(x_i, u_i) \bigr\|^2 \quad\text{or}\quad \bigl\langle u_i - v_i, r_\mu(x_i, u_i) \bigr\rangle \le \sigma \bigl\| r_\mu(x_i, u_i) \bigr\|^2,$$

where μ is required to be strictly less than 1 or $1/\sigma$, and $v_i \in F(x_i - \gamma^{k_i} r_\mu(x_i, u_i))$. In our algorithm, μ can change according to the value of $\eta_i$ at each iteration, and $v_i \in F(P_C(x_i - \gamma^{k_i} u_i))$. Secondly, the next iterate is generated differently. In [6, 11], the next iterate is the projection of the current iterate onto the intersection of the feasible set C and a hyperplane, while in our algorithm, as in [7], the next iterate is a projection onto the feasible set C alone. In addition, the search directions in [7] and in our algorithm also differ.

Lemma 2.4 Let C be a closed convex subset of $\mathbb{R}^n$. For any $x, y \in \mathbb{R}^n$ and $z \in C$, the following statements hold.

(i) $\langle x - P_C(x), z - P_C(x) \rangle \le 0$.

(ii) $\| P_C(x) - P_C(y) \|^2 \le \| x - y \|^2 - \| P_C(x) - x + y - P_C(y) \|^2$.

Proof See [17]. □

The proof of the following lemma is straightforward and we omit it (see, e.g., Lemma 3.1 in [18]).

Lemma 2.5 For any $x \in \mathbb{R}^n$, $\xi \in F(x)$ and $\mu > 0$,

$$\min\{1, \mu\} \bigl\| r_1(x, \xi) \bigr\| \le \bigl\| r_\mu(x, \xi) \bigr\| \le \max\{1, \mu\} \bigl\| r_1(x, \xi) \bigr\|.$$

We first show that Algorithm 2.2 is well defined.

Proposition 2.6 If $x_i$ is not a solution of problem (1.1), then there exists a nonnegative integer $k_i$ satisfying (2.3) and (2.4).

Proof Suppose that for all k and all $v \in F(P_C(x_i - \gamma^k u_i))$, we have

$$\gamma^k \bigl\langle u_i - v, r_{\gamma^k}(x_i, u_i) \bigr\rangle > \sigma \bigl\| r_{\gamma^k}(x_i, u_i) \bigr\|^2,$$

and hence, by the Cauchy-Schwarz inequality (note that $r_{\gamma^k}(x_i, u_i) \neq 0$ because $x_i$ is not a solution),

$$\gamma^k \| u_i - v \| > \sigma \bigl\| r_{\gamma^k}(x_i, u_i) \bigr\|.$$

Therefore,

$$\| u_i - v \| > \frac{\sigma}{\gamma^k} \bigl\| r_{\gamma^k}(x_i, u_i) \bigr\| \ge \frac{\sigma}{\gamma^k} \min\bigl\{1, \gamma^k\bigr\} \bigl\| r_1(x_i, u_i) \bigr\| = \sigma \bigl\| r_1(x_i, u_i) \bigr\|,$$
(2.7)

where the second inequality follows from Lemma 2.5 and the equality follows from $\gamma \in (0,1)$ and $k \ge 0$. Since $P_C(\cdot)$ is continuous and $x_i \in C$, we have $P_C(x_i - \gamma^k u_i) \to x_i$ as $k \to \infty$. Since F is lower semicontinuous, $u_i \in F(x_i)$ and $P_C(x_i - \gamma^k u_i) \to x_i$, there exist $v_k \in F(P_C(x_i - \gamma^k u_i))$ such that $v_k \to u_i$ as $k \to \infty$. Therefore,

$$\| u_i - v_k \| > \sigma \bigl\| r_1(x_i, u_i) \bigr\|, \quad \forall k.$$
(2.8)

Letting $k \to \infty$ in (2.8), we obtain

$$0 = \| u_i - u_i \| \ge \sigma \bigl\| r_1(x_i, u_i) \bigr\| > 0,$$

where the strict inequality holds because $x_i$ is not a solution, so that $r_1(x_i, u_i) \neq 0$ by Proposition 2.1.

This contradiction completes the proof. □

3 Main results

Now we obtain the following auxiliary result that will be used for proving the convergence of Algorithm 2.2.

Theorem 3.1 If the assumption (2.2) holds and $x_i \notin S$, then for any $x^* \in S$,

$$\bigl\langle d_i, x_i - x^* \bigr\rangle \ge (1-\sigma) \bigl\| r_{\eta_i}(x_i, u_i) \bigr\|^2 > 0.$$
(3.1)

Proof Let $x^* \in S$. Since $u_i \in F(x_i)$ and $\eta_i > 0$, it follows from (2.2) that

$$\bigl\langle \eta_i u_i, x_i - x^* \bigr\rangle \ge 0.$$
(3.2)

Similarly, we have

$$\bigl\langle \eta_i v_i, P_C(x_i - \eta_i u_i) - x^* \bigr\rangle \ge 0,$$
(3.3)

because $v_i \in F(P_C(x_i - \eta_i u_i))$. Since $x^* \in C$, from Lemma 2.4(i) we have

$$\bigl\langle x_i - \eta_i u_i - P_C(x_i - \eta_i u_i), P_C(x_i - \eta_i u_i) - x^* \bigr\rangle \ge 0.$$
(3.4)

It follows from (3.2), (3.3) and (3.4) that

$$\begin{aligned}
\bigl\langle d_i, x_i - x^* \bigr\rangle &= \bigl\langle r_{\eta_i}(x_i, u_i) + \eta_i v_i, x_i - x^* \bigr\rangle \\
&= \bigl\langle r_{\eta_i}(x_i, u_i) + \eta_i (v_i - u_i), x_i - x^* \bigr\rangle + \bigl\langle \eta_i u_i, x_i - x^* \bigr\rangle \\
&\ge \bigl\langle r_{\eta_i}(x_i, u_i) + \eta_i (v_i - u_i), x_i - x^* \bigr\rangle \\
&= \bigl\langle r_{\eta_i}(x_i, u_i) + \eta_i (v_i - u_i), r_{\eta_i}(x_i, u_i) \bigr\rangle \\
&\quad + \bigl\langle x_i - \eta_i u_i - P_C(x_i - \eta_i u_i), P_C(x_i - \eta_i u_i) - x^* \bigr\rangle + \bigl\langle \eta_i v_i, P_C(x_i - \eta_i u_i) - x^* \bigr\rangle \\
&\ge \bigl\langle r_{\eta_i}(x_i, u_i) + \eta_i (v_i - u_i), r_{\eta_i}(x_i, u_i) \bigr\rangle.
\end{aligned}$$
(3.5)

Therefore,

$$\begin{aligned}
\bigl\langle d_i, x_i - x^* \bigr\rangle &\ge \bigl\langle r_{\eta_i}(x_i, u_i) + \eta_i (v_i - u_i), r_{\eta_i}(x_i, u_i) \bigr\rangle \\
&= \bigl\| r_{\eta_i}(x_i, u_i) \bigr\|^2 - \eta_i \bigl\langle u_i - v_i, r_{\eta_i}(x_i, u_i) \bigr\rangle \\
&\ge \bigl\| r_{\eta_i}(x_i, u_i) \bigr\|^2 - \sigma \bigl\| r_{\eta_i}(x_i, u_i) \bigr\|^2 \\
&= (1-\sigma) \bigl\| r_{\eta_i}(x_i, u_i) \bigr\|^2,
\end{aligned}$$
(3.6)

where the second inequality follows from (2.4). This completes the proof. □

Theorem 3.2 If $F: C \to 2^{\mathbb{R}^n}$ is continuous with nonempty compact convex values on C and the assumption (2.2) holds, then the sequence $\{x_i\}$ generated by Algorithm 2.2 converges to a solution $\bar{x}$ of (1.1).

Proof Let $x^* \in S$. It follows from Lemma 2.4(ii), Lemma 2.5, (2.5), (2.6) and (3.6) that

$$\begin{aligned}
\| x_{i+1} - x^* \|^2 &\le \| x_i - x^* - \alpha_i d_i \|^2 = \| x_i - x^* \|^2 - 2\alpha_i \bigl\langle d_i, x_i - x^* \bigr\rangle + \alpha_i^2 \| d_i \|^2 \\
&\le \| x_i - x^* \|^2 - \frac{(1-\sigma)^2 \| r_{\eta_i}(x_i, u_i) \|^4}{\| d_i \|^2} \\
&\le \| x_i - x^* \|^2 - \frac{(1-\sigma)^2 \bigl(\min\{\eta_i, 1\} \| r_1(x_i, u_i) \|\bigr)^4}{\| d_i \|^2} \\
&= \| x_i - x^* \|^2 - \frac{(1-\sigma)^2 \eta_i^4 \| r_1(x_i, u_i) \|^4}{\| r_{\eta_i}(x_i, u_i) + \eta_i v_i \|^2},
\end{aligned}$$
(3.7)

where the last equality uses $\min\{\eta_i, 1\} = \eta_i$, since $\eta_i = \gamma^{k_i} \le 1$.

It follows that the sequence $\{\| x_i - x^* \|^2\}$ is nonincreasing, and hence convergent. Therefore, $\{x_i\}$ is bounded. Since F is continuous with compact values, Proposition 3.11 in [19] implies that $\{F(x_i) : i \in \mathbb{N}\}$ is a bounded set, and so are $\{u_i\}$, $\{r_{\eta_i}(x_i, u_i)\}$ and $\{v_i\}$. Thus, $\{r_{\eta_i}(x_i, u_i) + \eta_i v_i\}$ is bounded, and there exists a positive number M such that

$$\bigl\| r_{\eta_i}(x_i, u_i) + \eta_i v_i \bigr\| \le M.$$

It follows from (3.7) that

$$\| x_{i+1} - x^* \|^2 \le \| x_i - x^* \|^2 - (1-\sigma)^2 M^{-2} \eta_i^4 \bigl\| r_1(x_i, u_i) \bigr\|^4.$$
(3.8)

Therefore,

$$\lim_{i \to \infty} \eta_i \bigl\| r_1(x_i, u_i) \bigr\| = 0.$$
(3.9)

By the boundedness of $\{x_i\}$, there exists a subsequence $\{x_{i_j}\}$ converging to some $\bar{x}$.

If $\bar{x}$ is a solution of problem (1.1), we show next that the whole sequence $\{x_i\}$ converges to $\bar{x}$. Replacing $x^*$ by $\bar{x}$ in the preceding argument, we obtain that the sequence $\{\| x_i - \bar{x} \|\}$ is nonincreasing and hence converges. Since $\bar{x}$ is an accumulation point of $\{x_i\}$, some subsequence of $\{\| x_i - \bar{x} \|\}$ converges to zero. This shows that the whole sequence $\{\| x_i - \bar{x} \|\}$ converges to zero, and hence $\lim_{i \to \infty} x_i = \bar{x}$.

Suppose now that $\bar{x}$ is not a solution of problem (1.1). We show first that $k_i$ in Algorithm 2.2 cannot tend to ∞. Since F is continuous with compact values, Proposition 3.11 in [19] implies that $\{F(x_i) : i \in \mathbb{N}\}$ is a bounded set, and so the sequence $\{u_i\}$ is bounded. Therefore, there exists a subsequence $\{u_{i_j}\}$ converging to some $\bar{u}$. Since F is upper semicontinuous with compact values, Proposition 3.7 in [19] implies that F is closed, and so $\bar{u} \in F(\bar{x})$. By the definition of $k_i$, we have

$$\gamma^{k_i - 1} \bigl\langle u_i - v, r_{\gamma^{k_i-1}}(x_i, u_i) \bigr\rangle > \sigma \bigl\| r_{\gamma^{k_i-1}}(x_i, u_i) \bigr\|^2, \quad \forall v \in F\bigl(P_C(x_i - \gamma^{k_i-1} u_i)\bigr),$$
(3.10)

and hence

$$\gamma^{k_i-1} \| u_i - v \| > \sigma \bigl\| r_{\gamma^{k_i-1}}(x_i, u_i) \bigr\|, \quad \forall v \in F\bigl(P_C(x_i - \gamma^{k_i-1} u_i)\bigr).$$
(3.11)

Therefore,

$$\| u_i - v \| > \frac{\sigma}{\gamma^{k_i-1}} \bigl\| r_{\gamma^{k_i-1}}(x_i, u_i) \bigr\| \ge \frac{\sigma}{\gamma^{k_i-1}} \min\bigl\{1, \gamma^{k_i-1}\bigr\} \bigl\| r_1(x_i, u_i) \bigr\| = \sigma \bigl\| r_1(x_i, u_i) \bigr\|, \quad \forall v \in F\bigl(P_C(x_i - \gamma^{k_i-1} u_i)\bigr), \forall k_i \ge 1,$$
(3.12)

where the second inequality follows from Lemma 2.5 and the equality follows from $\gamma \in (0,1)$.

If $k_{i_j} \to \infty$, then $P_C(x_{i_j} - \gamma^{k_{i_j}-1} u_{i_j}) \to \bar{x}$. The lower semicontinuity of F, in turn, implies the existence of $\bar{u}_{i_j} \in F(P_C(x_{i_j} - \gamma^{k_{i_j}-1} u_{i_j}))$ such that $\bar{u}_{i_j}$ converges to $\bar{u}$. Therefore,

$$\| u_{i_j} - \bar{u}_{i_j} \| > \sigma \bigl\| r_1(x_{i_j}, u_{i_j}) \bigr\|.$$
(3.13)

Letting $j \to \infty$ and using the continuity of $r_1(\cdot, \cdot)$, we obtain the contradiction

$$0 \ge \sigma \bigl\| r_1(\bar{x}, \bar{u}) \bigr\| > 0,$$
(3.14)

where the strict inequality holds because $\bar{x}$ is not a solution. Therefore, $\{k_i\}$ is bounded, and hence $\{\eta_i\}$ is bounded away from zero.

Since $\{\eta_i\}$ is bounded away from zero, it follows from (3.9) that $\lim_{i \to \infty} \| r_1(x_i, u_i) \| = 0$. Since $r_1(\cdot, \cdot)$ is continuous and the sequences $\{x_i\}$ and $\{u_i\}$ are bounded, there exists an accumulation point $(\bar{x}, \bar{u})$ of $\{(x_i, u_i)\}$ such that $r_1(\bar{x}, \bar{u}) = 0$. This implies that $\bar{x}$ solves the variational inequality (1.1). As in the first part of the proof, we conclude that $\lim_{i \to \infty} x_i = \bar{x}$. □

Now we provide a result on the convergence rate of the iterative sequence generated by Algorithm 2.2. To establish this result, we need a certain error bound to hold locally (see (3.15) below). Research on error bounds is a large topic in mathematical programming. One can refer to the survey [20] for the roles played by error bounds in the convergence analysis of iterative algorithms; more recent developments on this topic are included in Chapter 6 of [21]. A condition similar to (3.15) has also been used in [22] (see expression (10) therein) to analyze the convergence rate in a very general framework.

For any δ>0, define

$$P(\delta) := \bigl\{ (x, \xi) \in C \times \mathbb{R}^n : \xi \in F(x), \| r_1(x, \xi) \| \le \delta \bigr\}.$$

We say that F is Lipschitz continuous on C if there exists a constant L > 0 such that, for all $x, y \in C$, $H(F(x), F(y)) \le L \| x - y \|$, where H denotes the Hausdorff metric.
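
As a quick illustration of this metric (a sketch of ours, not part of the paper, and restricted to finite point sets, whereas the values F(x) need not be finite):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between finite point sets A and B (rows are points)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distance matrix
    return max(D.min(axis=1).max(), D.min(axis=0).max())       # larger of the two one-sided deviations
```

Lipschitz continuity thus bounds how far the whole set F(x) can move, in this set-to-set sense, when x moves.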

Theorem 3.3 In addition to the assumptions in Theorem 3.2, if F is Lipschitz continuous with modulus L > 0 and if there exist positive constants c and δ such that

$$\operatorname{dist}(x, S) \le c \bigl\| r_1(x, \xi) \bigr\|, \quad \forall (x, \xi) \in P(\delta),$$
(3.15)

then there is a constant α > 0 such that for sufficiently large i,

$$\operatorname{dist}(x_i, S) \le \frac{1}{\sqrt{\alpha i + \operatorname{dist}^{-2}(x_0, S)}}.$$

Proof Put $\eta := \min\{1/2, L^{-1}\gamma\sigma\}$. We first prove that $\eta_i > \eta$ for all i. By the construction of $\eta_i$, we have $\eta_i \in (0, 1]$. If $\eta_i = 1$, then clearly $\eta_i > \frac{1}{2} \ge \eta$. Now we assume that $\eta_i < 1$. Since $\eta_i = \gamma^{k_i}$, it follows that the nonnegative integer $k_i \ge 1$. Thus the construction of $k_i$ implies that

$$\gamma^{k_i - 1} \bigl\langle u_i - v, r_{\gamma^{k_i-1}}(x_i, u_i) \bigr\rangle > \sigma \bigl\| r_{\gamma^{k_i-1}}(x_i, u_i) \bigr\|^2, \quad \forall v \in F\bigl(P_C(x_i - \gamma^{k_i-1} u_i)\bigr),$$
(3.16)

and hence, as k i ≥1,

$$\| u_i - v \| > \frac{\sigma}{\gamma^{k_i-1}} \bigl\| r_{\gamma^{k_i-1}}(x_i, u_i) \bigr\|, \quad \forall v \in F\bigl(P_C(x_i - \gamma^{k_i-1} u_i)\bigr).$$

Since $u_i \in F(x_i)$ and F is compact-valued, the definition of the Hausdorff metric implies the existence of $v_i \in F(P_C(x_i - \gamma^{k_i-1} u_i))$ such that

$$\frac{\sigma}{\gamma^{k_i-1}} \bigl\| r_{\gamma^{k_i-1}}(x_i, u_i) \bigr\| < \| u_i - v_i \| \le H\bigl( F(x_i), F(P_C(x_i - \gamma^{k_i-1} u_i)) \bigr) \le L \bigl\| r_{\gamma^{k_i-1}}(x_i, u_i) \bigr\|.$$

Therefore, $\eta_i > L^{-1}\gamma\sigma \ge \eta$.

Let $x^* \in P_S(x_i)$. By (3.8) and (3.15), we obtain that for sufficiently large i,

$$\begin{aligned}
\operatorname{dist}^2(x_{i+1}, S) &\le \| x_{i+1} - x^* \|^2 \le \| x_i - x^* \|^2 - (1-\sigma)^2 M^{-2} \eta_i^4 \bigl\| r_1(x_i, u_i) \bigr\|^4 \\
&\le \| x_i - x^* \|^2 - (1-\sigma)^2 M^{-2} \eta^4 \bigl\| r_1(x_i, u_i) \bigr\|^4 \\
&\le \operatorname{dist}^2(x_i, S) - (1-\sigma)^2 M^{-2} \eta^4 c^{-4} \operatorname{dist}^4(x_i, S),
\end{aligned}$$

where the third inequality follows from $\eta_i > \eta$ and the last from (3.15).

Write α for $(1-\sigma)^2 M^{-2} \eta^4 c^{-4}$. Applying Lemma 6 in Chapter 2 of [23], we have

$$\operatorname{dist}(x_i, S) \le \frac{\operatorname{dist}(x_0, S)}{\sqrt{\alpha i \operatorname{dist}^2(x_0, S) + 1}} = \frac{1}{\sqrt{\alpha i + \operatorname{dist}^{-2}(x_0, S)}}.$$

This completes the proof. □
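
To make the role of the cited lemma explicit, set $a_i := \operatorname{dist}^2(x_i, S)$; the last display in the proof is the scalar recursion $a_{i+1} \le a_i - \alpha a_i^2$. Assuming $\alpha a_i < 1$ (which holds for all large i, since $\{a_i\}$ is nonincreasing and the recursion forces $a_i \to 0$), a short derivation of the stated rate, equivalent to the cited lemma when the recursion holds from $i = 0$, reads

$$\frac{1}{a_{i+1}} \ge \frac{1}{a_i(1 - \alpha a_i)} \ge \frac{1 + \alpha a_i}{a_i} = \frac{1}{a_i} + \alpha \quad\Longrightarrow\quad \frac{1}{a_i} \ge \frac{1}{a_0} + \alpha i,$$

which is exactly the bound $\operatorname{dist}(x_i, S) \le \bigl(\alpha i + \operatorname{dist}^{-2}(x_0, S)\bigr)^{-1/2}$ of Theorem 3.3.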

4 Numerical experiments

In this section, we present some numerical experiments for the proposed algorithm. The MATLAB codes are run on a PC (with CPU Intel P-T2390) under MATLAB Version 7.0.1.24704 (R14) Service Pack 1. We compare the performance of our Algorithm 2.2, Algorithm 1 in [6] and Algorithm 1 in [11]. In Tables 1 and 2, 'It.' denotes the number of iterations and 'CPU' denotes the CPU time in seconds. The tolerance ε means that the procedure stops as soon as $\| r_\mu(x, \xi) \| \le \varepsilon$.

Table 1 Example 4.1
Table 2 Example 4.1

Example 4.1 Let n = 3,

$$C := \Bigl\{ x \in \mathbb{R}^n_+ : \sum_{i=1}^n x_i = 1 \Bigr\}$$

and let $F: C \to 2^{\mathbb{R}^n}$ be defined by

$$F(x) := \bigl\{ (t, t - x_1, t - x_2) : t \in [0, 1] \bigr\}.$$

Then the set C and the mapping F satisfy the assumptions of Theorem 3.2, and (0, 0, 1) is a solution of the multi-valued variational inequality. Example 4.1 is also tested in [6, 11]. We choose σ = 0.5, γ = 0.9 for our algorithm; σ = 0.1, γ = 0.8, μ = 1 for Algorithm 1 in [6]; and σ = 0.9, γ = 0.4, μ = 1 for Algorithm 1 in [11]. We use $x_0 = (0, 0.5, 0.5)$ as the initial point (Table 1 and Table 2). A sketch of this setup is given below.
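
The following sketch (ours, reusing the `extragradient_mvi` routine given after Algorithm 2.2) shows one way to set up this example; the sort-based simplex projection is a standard routine, and the fixed choice t = 0.5 is just one admissible selection from F(x):

```python
import numpy as np

def proj_simplex(y):
    """Euclidean projection onto the unit simplex {x >= 0, sum_i x_i = 1}."""
    u = np.sort(y)[::-1]                      # coordinates in decreasing order
    css = np.cumsum(u)
    idx = np.arange(1, len(y) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(y + theta, 0.0)

def F_select(x):
    """One admissible selection from F(x) = {(t, t - x_1, t - x_2) : t in [0, 1]}."""
    t = 0.5                                   # arbitrary; Algorithm 2.2 allows any u_i in F(x_i)
    return np.array([t, t - x[0], t - x[1]])

x0 = np.array([0.0, 0.5, 0.5])
x, iters = extragradient_mvi(x0, F_select, proj_simplex, gamma=0.9, sigma=0.5)
print(x, iters)                               # the iterates should approach the solution (0, 0, 1)
```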

References

  1. Iusem AN, Svaiter BF: A variant of Korpelevich's method for variational inequalities with a new search strategy. Optimization 1997, 42: 309–321. doi:10.1080/02331939708844365

  2. Korpelevich GM: The extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 1976, 12: 747–756.

  3. Wang YJ, Xiu NH, Zhang JZ: Modified extragradient method for variational inequalities and verification of solution existence. J. Optim. Theory Appl. 2003, 119(1):167–183. doi:10.1023/B:JOTA.0000005047.30026.b8

  4. Auslender A, Teboulle M: Lagrangian duality and related multiplier methods for variational inequality problems. SIAM J. Optim. 2000, 10(4):1097–1115. doi:10.1137/S1052623499352656

  5. Bao TQ, Khanh PQ: A projection-type algorithm for pseudomonotone nonlipschitzian multivalued variational inequalities. In Generalized Convexity, Generalized Monotonicity and Applications. Springer, New York; 2005:113–129.

  6. Fang CJ, He YR: A double projection algorithm for multi-valued variational inequalities and a unified framework of the method. Appl. Math. Comput. 2011, 217: 9543–9551. doi:10.1016/j.amc.2011.04.009

  7. Fang CJ, He YR: An extragradient method for generalized variational inequality. Pac. J. Optim. 2013, 9(1):47–59.

  8. Fang SC, Peterson EL: Generalized variational inequalities. J. Optim. Theory Appl. 1982, 38(3):363–383. doi:10.1007/BF00935344

  9. Fukushima M: The primal Douglas-Rachford splitting algorithm for a class of monotone mappings with application to the traffic equilibrium problem. Math. Program. 1996, 72(1, Ser. A):1–15. doi:10.1007/BF02592328

  10. Konnov IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin; 2001.

  11. Li FL, He YR: An algorithm for generalized variational inequality with pseudomonotone mapping. J. Comput. Appl. Math. 2009, 228: 212–218. doi:10.1016/j.cam.2008.09.014

  12. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14(5):877–898. doi:10.1137/0314056

  13. Saigal R: Extension of the generalized complementarity problem. Math. Oper. Res. 1976, 1(3):260–266. doi:10.1287/moor.1.3.260

  14. Salmon G, Strodiot JJ, Nguyen VH: A bundle method for solving variational inequalities. SIAM J. Optim. 2003, 14(3):869–893. doi:10.1137/S1052623401384096

  15. Xia FQ, Huang NJ: A projection-proximal point algorithm for solving generalized variational inequalities. J. Optim. Theory Appl. 2011, 150: 98–117. doi:10.1007/s10957-011-9825-3

  16. Karamardian S: Complementarity problems over cones with monotone and pseudomonotone maps. J. Optim. Theory Appl. 1976, 18(4):445–454. doi:10.1007/BF00932654

  17. Zarantonello EH: Projections on convex sets in Hilbert space and spectral theory. In Contributions to Nonlinear Functional Analysis. Academic Press, New York; 1971.

  18. Bnouhachem A: A self-adaptive method for solving general mixed variational inequalities. J. Math. Anal. Appl. 2005, 309: 136–150. doi:10.1016/j.jmaa.2004.12.023

  19. Aubin JP, Ekeland I: Applied Nonlinear Analysis. Wiley, New York; 1984.

  20. Pang JS: Error bounds in mathematical programming. Math. Program. 1997, 79: 299–332. doi:10.1007/BF02614322

  21. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York; 2003.

  22. Solodov MV: Convergence rate analysis of iterative algorithms for solving variational inequality problems. Math. Program. 2003, 96: 513–528. doi:10.1007/s10107-002-0369-z

  23. Polyak BT: Introduction to Optimization. Optimization Software Inc., Publications Division, New York; 1987.

Acknowledgements

This work is partially supported by Natural Science Foundation Project of CQ CSTC (No. 2010BB9401), Science and Technology Project of Chongqing Municipal Education Committee of China (Nos. KJ110509 and KJ100513) and Foundation of Chongqing University of Posts and Telecommunications for the Scholars with Doctorate (No. A2012-04).

Author information

Correspondence to Changjie Fang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of the present article and read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Fang, C., Chen, S. & Yang, C. An algorithm for solving a multi-valued variational inequality. J Inequal Appl 2013, 218 (2013). https://doi.org/10.1186/1029-242X-2013-218
