
A Korpelevich-like algorithm for variational inequalities

Abstract

A Korpelevich-like algorithm is introduced for solving a generalized variational inequality in a Banach space. It is shown that the proposed algorithm converges strongly to a particular solution of the generalized variational inequality.

MSC: 47H05, 47J25.

1 Introduction

It is well known that the variational inequality of finding $x^* \in C$ such that
\[
\langle A x^*, x - x^* \rangle \ge 0, \quad \forall x \in C,
\tag{1.1}
\]

where $C$ is a nonempty closed convex subset of a real Hilbert space $H$ and $A : C \to H$ is a given mapping, is a fundamental problem in variational analysis and, in particular, in optimization theory. For related works, please see [1–20] and the references contained therein. In particular, Yao, Marino and Muglia [21] presented the following modified Korpelevich method for solving (1.1):

\[
\begin{cases}
y_n = P_C[x_n - \lambda A x_n - \alpha_n x_n],\\
x_{n+1} = P_C[x_n - \lambda A y_n + \mu(y_n - x_n)],
\end{cases}
\quad n \ge 0.
\tag{1.2}
\]
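To make the scheme concrete, here is a minimal numerical sketch of iteration (1.2) in the Euclidean (Hilbert-space) setting, with $P_C$ the metric projection onto a box. The operator, the feasible set and the parameter values below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Illustrative strongly monotone affine operator A(x) = M x + q
# and feasible set C = [0, 1]^2 with its metric projection P_C.
M = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
q = np.array([-1.0, 2.0])

def A(x):
    return M @ x + q

def P_C(x):
    return np.clip(x, 0.0, 1.0)          # projection onto the box [0, 1]^2

lam, mu = 0.1, 0.5                        # step sizes (assumed small enough)
x = np.array([0.5, 0.5])                  # starting point x_0 in C

for n in range(2000):
    alpha_n = 1.0 / (n + 2)               # alpha_n -> 0, sum alpha_n = infinity
    y = P_C(x - lam * A(x) - alpha_n * x)
    x = P_C(x - lam * A(y) + mu * (y - x))

print("approximate solution:", x)
print("residual ||x - P_C(x - A(x))||:", np.linalg.norm(x - P_C(x - A(x))))
```

The printed residual $\|x - P_C(x - Ax)\|$ vanishes exactly at solutions of (1.1), so it serves as a simple convergence check for the sketch.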

Recently, Aoyama, Iiduka and Takahashi [22] extended the variational inequality (1.1) to Banach spaces as follows:

\[
\text{Find } x^* \in C \text{ such that } \langle A x^*, J(x - x^*) \rangle \ge 0, \quad \forall x \in C,
\tag{1.3}
\]

where $C$ is a nonempty closed convex subset of a real Banach space $E$. We use $S(C,A)$ to denote the solution set of (1.3). The generalized variational inequality (1.3) is connected with the fixed point problem for nonlinear mappings. For solving the generalized variational inequality (1.3), Aoyama, Iiduka and Takahashi [22] introduced the iterative algorithm

\[
x_{n+1} = \alpha_n x_n + (1 - \alpha_n) Q_C[x_n - \lambda_n A x_n], \quad n \ge 0,
\tag{1.4}
\]

where $Q_C$ is a sunny nonexpansive retraction from $E$ onto $C$ and $\{\alpha_n\} \subset (0,1)$, $\{\lambda_n\} \subset (0,\infty)$ are two sequences of real numbers. Motivated by (1.4), Yao and Maruster [23] presented the following modification of (1.4):

\[
x_{n+1} = \beta_n x_n + (1 - \beta_n) Q_C\bigl[(1 - \alpha_n)(x_n - \lambda A x_n)\bigr], \quad n \ge 0.
\tag{1.5}
\]

Motivated and inspired by algorithms (1.2), (1.4) and (1.5), in this paper we suggest an extragradient-type method based on the sunny nonexpansive retraction for solving the variational inequality (1.3) in Banach spaces. It is shown that the presented algorithm converges strongly to a particular solution of (1.3).

2 Preliminaries

Let $C$ be a nonempty closed convex subset of a real Banach space $E$. Recall that a mapping $A$ of $C$ into $E$ is said to be accretive if there exists $j(x-y) \in J(x-y)$ (where $J$ denotes the normalized duality mapping from $E$ into $2^{E^*}$) such that
\[
\langle Ax - Ay, j(x-y) \rangle \ge 0
\]
for all $x, y \in C$. A mapping $A$ of $C$ into $E$ is said to be $\alpha$-strongly accretive if, for $\alpha > 0$,
\[
\langle Ax - Ay, j(x-y) \rangle \ge \alpha \|x - y\|^2
\]
for all $x, y \in C$. A mapping $A$ of $C$ into $E$ is said to be $\alpha$-inverse-strongly accretive if, for $\alpha > 0$,
\[
\langle Ax - Ay, j(x-y) \rangle \ge \alpha \|Ax - Ay\|^2
\]
for all $x, y \in C$.
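Two elementary consequences of these definitions are worth recording (both are standard; the first is used in Section 3): an $\alpha$-strongly accretive and $L$-Lipschitz continuous mapping is $\frac{\alpha}{L^2}$-inverse-strongly accretive, and an $\alpha$-inverse-strongly accretive mapping is $\frac{1}{\alpha}$-Lipschitz continuous, since
\[
\langle Ax - Ay, j(x-y)\rangle \ge \alpha\|x-y\|^2 \ge \frac{\alpha}{L^2}\|Ax - Ay\|^2
\quad\text{and}\quad
\alpha\|Ax - Ay\|^2 \le \langle Ax - Ay, j(x-y)\rangle \le \|Ax - Ay\|\,\|x-y\|.
\]
In a Hilbert space, $j$ is the identity mapping and accretivity reduces to monotonicity.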

Let $U = \{x \in E : \|x\| = 1\}$. A Banach space $E$ is said to be uniformly convex if for each $\epsilon \in (0,2]$, there exists $\delta > 0$ such that for any $x, y \in U$,
\[
\|x - y\| \ge \epsilon \quad \text{implies} \quad \Bigl\| \frac{x+y}{2} \Bigr\| \le 1 - \delta.
\]

It is known that a uniformly convex Banach space is reflexive and strictly convex. A Banach space $E$ is said to be smooth if the limit
\[
\lim_{t \to 0} \frac{\|x + ty\| - \|x\|}{t}
\tag{2.1}
\]
exists for all $x, y \in U$. It is said to be uniformly smooth if the limit (2.1) is attained uniformly for $x, y \in U$. The norm of $E$ is said to be Fréchet differentiable if, for each $x \in U$, the limit (2.1) is attained uniformly for $y \in U$. The modulus of smoothness of $E$ is the function $\rho : [0,\infty) \to [0,\infty)$ defined by
\[
\rho(\tau) = \sup \Bigl\{ \tfrac{1}{2}\bigl(\|x + y\| + \|x - y\|\bigr) - 1 : x, y \in E, \|x\| = 1, \|y\| = \tau \Bigr\}.
\]

It is known that $E$ is uniformly smooth if and only if $\lim_{\tau \to 0} \rho(\tau)/\tau = 0$. Let $q$ be a fixed real number with $1 < q \le 2$. A Banach space $E$ is said to be $q$-uniformly smooth if there exists a constant $c > 0$ such that $\rho(\tau) \le c\tau^q$ for all $\tau > 0$.

We need the following lemmas for the proof of our main results.

Lemma 2.1 [24]

Let $q$ be a given real number with $1 < q \le 2$ and let $E$ be a $q$-uniformly smooth Banach space. Then
\[
\|x + y\|^q \le \|x\|^q + q\langle y, J_q(x) \rangle + 2\|Ky\|^q
\]
for all $x, y \in E$, where $K$ is the $q$-uniform smoothness constant of $E$ and $J_q$ is the generalized duality mapping from $E$ into $2^{E^*}$ defined by
\[
J_q(x) = \bigl\{ f \in E^* : \langle x, f \rangle = \|x\|^q, \ \|f\| = \|x\|^{q-1} \bigr\}, \quad \forall x \in E.
\]
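For orientation, in a Hilbert space ($q = 2$, $J_2 = I$) the inequality of Lemma 2.1 holds with $K = \frac{\sqrt{2}}{2}$, and then as an identity, since
\[
\|x + y\|^2 = \|x\|^2 + 2\langle y, x\rangle + \|y\|^2 = \|x\|^2 + 2\langle y, x\rangle + 2\Bigl\|\frac{\sqrt{2}}{2}\,y\Bigr\|^2.
\]
This is the value $K^2 = \frac{1}{2}$ assumed in the Euclidean numerical sketches below.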

Let $D$ be a subset of $C$ and let $Q$ be a mapping of $C$ into $D$. Then $Q$ is said to be sunny if
\[
Q\bigl(Qx + t(x - Qx)\bigr) = Qx,
\]
whenever $Qx + t(x - Qx) \in C$ for $x \in C$ and $t \ge 0$. A mapping $Q$ of $C$ into itself is called a retraction if $Q^2 = Q$. If a mapping $Q$ of $C$ into itself is a retraction, then $Qz = z$ for every $z \in R(Q)$, where $R(Q)$ is the range of $Q$. A subset $D$ of $C$ is called a sunny nonexpansive retract of $C$ if there exists a sunny nonexpansive retraction from $C$ onto $D$. We will use the following lemma concerning sunny nonexpansive retractions.

Lemma 2.2 [25]

Let $C$ be a closed convex subset of a smooth Banach space $E$, let $D$ be a nonempty subset of $C$ and let $Q$ be a retraction from $C$ onto $D$. Then $Q$ is sunny and nonexpansive if and only if
\[
\langle u - Qu, j(y - Qu) \rangle \le 0
\]
for all $u \in C$ and $y \in D$.
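In a real Hilbert space $H$, $j$ is the identity and the metric projection $P_C$ is a sunny nonexpansive retraction onto $C$; Lemma 2.2 then reduces to the familiar variational characterization of the projection,
\[
\langle u - P_C u, \ y - P_C u \rangle \le 0 \quad \text{for all } u \in H, \ y \in C.
\]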

Lemma 2.3 [22]

Let $C$ be a nonempty closed convex subset of a smooth Banach space $X$. Let $Q_C$ be a sunny nonexpansive retraction from $X$ onto $C$ and let $A$ be an accretive operator of $C$ into $X$. Then, for all $\lambda > 0$,
\[
S(C,A) = F\bigl(Q_C(I - \lambda A)\bigr),
\]
where $S(C,A) = \{x^* \in C : \langle Ax^*, J(x - x^*) \rangle \ge 0, \ \forall x \in C\}$.

Lemma 2.4 [26]

Let $C$ be a nonempty closed convex subset of a real 2-uniformly smooth Banach space $X$. Let the mapping $A : C \to X$ be $\alpha$-inverse-strongly accretive. Then we have
\[
\bigl\|(I - \lambda A)x - (I - \lambda A)y\bigr\|^2 \le \|x - y\|^2 + 2\lambda\bigl(K^2\lambda - \alpha\bigr)\|Ax - Ay\|^2.
\]
In particular, if $0 < \lambda \le \frac{\alpha}{K^2}$, then $I - \lambda A$ is nonexpansive.

Proof Indeed, for all $x, y \in C$, from Lemma 2.1 we have
\[
\begin{aligned}
\bigl\|(I - \lambda A)x - (I - \lambda A)y\bigr\|^2 &= \bigl\|(x - y) - \lambda(Ax - Ay)\bigr\|^2\\
&\le \|x - y\|^2 - 2\lambda\langle Ax - Ay, j(x - y)\rangle + 2K^2\lambda^2\|Ax - Ay\|^2\\
&\le \|x - y\|^2 - 2\lambda\alpha\|Ax - Ay\|^2 + 2K^2\lambda^2\|Ax - Ay\|^2\\
&= \|x - y\|^2 + 2\lambda\bigl(K^2\lambda - \alpha\bigr)\|Ax - Ay\|^2.
\end{aligned}
\]
It is clear that if $0 < \lambda \le \frac{\alpha}{K^2}$, then $I - \lambda A$ is nonexpansive. □
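The following small numerical check illustrates Lemma 2.4 in the Euclidean case, where one may take $K^2 = \frac{1}{2}$ so that the admissible range becomes $0 < \lambda \le 2\alpha$; the operator and the sample points below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A(x) = M x with M symmetric positive definite is alpha-inverse-strongly
# monotone (accretive) with alpha = 1 / lambda_max(M).
M = np.array([[3.0, 1.0], [1.0, 2.0]])
alpha = 1.0 / np.linalg.eigvalsh(M).max()

def A(x):
    return M @ x

lam = 2.0 * alpha          # boundary of the admissible range when K^2 = 1/2
worst = 0.0
for _ in range(10000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    lhs = np.linalg.norm((x - lam * A(x)) - (y - lam * A(y)))
    rhs = np.linalg.norm(x - y)
    worst = max(worst, lhs - rhs)

# Nonexpansiveness predicts lhs <= rhs, so this maximum is <= 0 up to rounding.
print("max of ||(I - lam A)x - (I - lam A)y|| - ||x - y|| over samples:", worst)
```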

Lemma 2.5 [27]

Let $C$ be a nonempty bounded closed convex subset of a uniformly convex Banach space $E$ and let $T$ be a nonexpansive mapping of $C$ into itself. If $\{x_n\}$ is a sequence in $C$ such that $x_n \to x$ weakly and $x_n - Tx_n \to 0$ strongly, then $x$ is a fixed point of $T$.

Lemma 2.6 [28]

Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
\[
a_{n+1} \le (1 - \gamma_n)a_n + \delta_n, \quad n \ge 0,
\]
where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that

(a) $\sum_{n=0}^{\infty} \gamma_n = \infty$;

(b) $\limsup_{n \to \infty} \delta_n/\gamma_n \le 0$ or $\sum_{n=0}^{\infty} |\delta_n| < \infty$.

Then $\lim_{n \to \infty} a_n = 0$.
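A quick numerical illustration of Lemma 2.6, with sequences chosen (as an assumption for the demo) to satisfy (a) and the first alternative in (b):

```python
# Lemma 2.6 demo: a_{n+1} <= (1 - gamma_n) a_n + delta_n, with
# sum gamma_n = infinity and delta_n / gamma_n -> 0, forces a_n -> 0.
a = 5.0
for n in range(200000):
    gamma_n = 1.0 / (n + 2)          # in (0, 1), and the series diverges
    delta_n = gamma_n / (n + 2)      # delta_n / gamma_n = 1/(n+2) -> 0
    a = (1.0 - gamma_n) * a + delta_n

print("a_n after 200000 iterations:", a)   # decays to 0, roughly like log(n)/n here
```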

3 Main results

In this section, we present our Korpelevich-like algorithm and prove its strong convergence.

3.1 Assumptions

(A1) $E$ is a uniformly convex and 2-uniformly smooth Banach space with a weakly sequentially continuous duality mapping;

(A2) $C$ is a nonempty closed convex subset of $E$;

(A3) $A : C \to E$ is an $\alpha$-strongly accretive and $L$-Lipschitz continuous mapping with $S(C,A) \ne \emptyset$;

(A4) $Q_C$ is a sunny nonexpansive retraction from $E$ onto $C$.

3.2 Parameter restrictions

(P1) $\lambda$, $\mu$ and $\gamma$ are three positive constants satisfying:

(i) $\gamma \in (0,1)$ and $\lambda \in [a,b]$ for some $a$, $b$ with $0 < a < b < \frac{\alpha}{K^2 L^2}$;

(ii) $\frac{\lambda}{\mu} < \frac{\alpha}{K^2 L^2}$, where $K$ is the 2-uniform smoothness constant of $E$.

(P2) $\{\alpha_n\}$ is a sequence in $(0,1)$ such that $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$.

Algorithm 3.1 For given $x_0 \in C$, define a sequence $\{x_n\}$ iteratively by
\[
\begin{cases}
y_n = Q_C\bigl[(1 - \alpha_n)x_n - \lambda A x_n\bigr],\\
x_{n+1} = (1 - \gamma)x_n + \gamma Q_C\bigl[x_n - \lambda A y_n + \mu(y_n - x_n)\bigr],
\end{cases}
\quad n \ge 0.
\tag{3.1}
\]
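To make the scheme concrete, here is a minimal numerical sketch of Algorithm 3.1 in the Euclidean (Hilbert-space) setting, where the sunny nonexpansive retraction $Q_C$ is the metric projection onto $C$. The operator, the set $C$ and the parameter values below are illustrative assumptions chosen to satisfy (P1)–(P2), not data from the paper.

```python
import numpy as np

# Illustrative data: A(x) = M x + q is alpha-strongly monotone and L-Lipschitz,
# C = [0, 2]^2, and Q_C is the metric projection (a sunny nonexpansive
# retraction in the Hilbert-space case).
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
alpha = np.linalg.eigvalsh(M).min()        # strong monotonicity constant
L = np.linalg.eigvalsh(M).max()            # Lipschitz constant

def A(x):
    return M @ x + q

def Q_C(x):
    return np.clip(x, 0.0, 2.0)

K2 = 0.5                                    # K^2 = 1/2 is admissible in the Hilbert case
lam = 0.5 * alpha / (K2 * L**2)             # lambda < alpha / (K^2 L^2)
mu, gamma = 0.9, 0.5                        # so that lam / mu < alpha / (K^2 L^2)

x = np.array([2.0, 2.0])                    # x_0 in C
for n in range(5000):
    alpha_n = 1.0 / (n + 2)                 # alpha_n -> 0, sum alpha_n = infinity
    y = Q_C((1.0 - alpha_n) * x - lam * A(x))
    x = (1.0 - gamma) * x + gamma * Q_C(x - lam * A(y) + mu * (y - x))

print("approximate solution:", x)
print("residual ||x - Q_C(x - A(x))||:", np.linalg.norm(x - Q_C(x - A(x))))
```

Theorem 3.2 below identifies the limit as the image of $0$ under a sunny nonexpansive retraction onto $S(C,A)$, which in the Hilbert-space case is the minimum-norm solution; since $A$ is strongly monotone in this sketch, $S(C,A)$ is a singleton and the limit is simply the unique solution.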

Theorem 3.2 The sequence $\{x_n\}$ generated by (3.1) converges strongly to $Q'(0)$, where $Q'$ is a sunny nonexpansive retraction of $E$ onto $S(C,A)$.

Proof Let $p \in S(C,A)$. First, from Lemma 2.2, we have $p = Q_C[p - \delta A p]$ for all $\delta > 0$. In particular, $p = Q_C[p - \lambda A p] = Q_C[\alpha_n p + (1 - \alpha_n)(p - \frac{\lambda}{1-\alpha_n} A p)]$ for all $n \ge 0$.

Since $A : C \to E$ is $\alpha$-strongly accretive and $L$-Lipschitzian, it is $\frac{\alpha}{L^2}$-inverse-strongly accretive. Thus, by Lemma 2.4, we have
\[
\bigl\|(I - \lambda A)x - (I - \lambda A)y\bigr\|^2 \le \|x - y\|^2 + 2\lambda\Bigl(K^2\lambda - \frac{\alpha}{L^2}\Bigr)\|Ax - Ay\|^2.
\]
Since $\alpha_n \to 0$ and $\lambda \in [a,b] \subset (0, \frac{\alpha}{K^2 L^2})$, we get $\alpha_n < 1 - \frac{K^2 L^2 \lambda}{\alpha}$ for all sufficiently large $n$. Without loss of generality, we may assume that $\alpha_n < 1 - \frac{K^2 L^2 \lambda}{\alpha}$ for all $n \in \mathbb{N}$, i.e., $\frac{\lambda}{1-\alpha_n} \in (0, \frac{\alpha}{K^2 L^2})$. Hence, $I - \frac{\lambda}{1-\alpha_n} A$ is nonexpansive.

From (3.1), we have

\[
\begin{aligned}
\|y_n - p\| &= \Bigl\| Q_C\bigl[(1-\alpha_n)x_n - \lambda A x_n\bigr] - Q_C\Bigl[\alpha_n p + (1-\alpha_n)\Bigl(p - \frac{\lambda}{1-\alpha_n} A p\Bigr)\Bigr] \Bigr\|\\
&\le \Bigl\| \alpha_n(-p) + (1-\alpha_n)\Bigl[\Bigl(x_n - \frac{\lambda}{1-\alpha_n} A x_n\Bigr) - \Bigl(p - \frac{\lambda}{1-\alpha_n} A p\Bigr)\Bigr] \Bigr\|\\
&\le \alpha_n\|p\| + (1-\alpha_n)\Bigl\| \Bigl(I - \frac{\lambda}{1-\alpha_n} A\Bigr)x_n - \Bigl(I - \frac{\lambda}{1-\alpha_n} A\Bigr)p \Bigr\|\\
&\le \alpha_n\|p\| + (1-\alpha_n)\|x_n - p\|.
\end{aligned}
\tag{3.2}
\]

By (3.1) and (3.2), we have

\[
\begin{aligned}
\|x_{n+1} - p\| &\le (1-\gamma)\|x_n - p\| + \gamma\bigl\| Q_C[x_n - \lambda A y_n + \mu(y_n - x_n)] - p \bigr\|\\
&= (1-\gamma)\|x_n - p\| + \gamma\Bigl\| Q_C\Bigl[(1-\mu)x_n + \mu\Bigl(y_n - \frac{\lambda}{\mu} A y_n\Bigr)\Bigr] - Q_C\Bigl[(1-\mu)p + \mu\Bigl(p - \frac{\lambda}{\mu} A p\Bigr)\Bigr] \Bigr\|\\
&\le (1-\gamma)\|x_n - p\| + \gamma\Bigl\| (1-\mu)(x_n - p) + \mu\Bigl[\Bigl(y_n - \frac{\lambda}{\mu} A y_n\Bigr) - \Bigl(p - \frac{\lambda}{\mu} A p\Bigr)\Bigr] \Bigr\|\\
&\le (1-\gamma)\|x_n - p\| + (1-\mu)\gamma\|x_n - p\| + \mu\gamma\Bigl\| \Bigl(y_n - \frac{\lambda}{\mu} A y_n\Bigr) - \Bigl(p - \frac{\lambda}{\mu} A p\Bigr) \Bigr\|\\
&\le (1-\mu\gamma)\|x_n - p\| + \mu\gamma\|y_n - p\|\\
&\le (1-\mu\gamma)\|x_n - p\| + \mu\gamma\alpha_n\|p\| + \mu\gamma(1-\alpha_n)\|x_n - p\|\\
&= (1-\mu\gamma\alpha_n)\|x_n - p\| + \mu\gamma\alpha_n\|p\|\\
&\le \max\bigl\{\|x_n - p\|, \|p\|\bigr\} \le \cdots \le \max\bigl\{\|x_0 - p\|, \|p\|\bigr\}.
\end{aligned}
\tag{3.3}
\]

Hence, $\{x_n\}$ is bounded.

Set $z_n = Q_C[x_n - \lambda A y_n + \mu(y_n - x_n)]$. From (3.1), we have $x_{n+1} = (1-\gamma)x_n + \gamma z_n$ for all $n \ge 0$. Then we have

\[
\begin{aligned}
\|y_n - y_{n-1}\| &= \bigl\| Q_C[(1-\alpha_n)x_n - \lambda A x_n] - Q_C[(1-\alpha_{n-1})x_{n-1} - \lambda A x_{n-1}] \bigr\|\\
&\le \Bigl\| (1-\alpha_n)\Bigl(x_n - \frac{\lambda}{1-\alpha_n} A x_n\Bigr) - (1-\alpha_{n-1})\Bigl(x_{n-1} - \frac{\lambda}{1-\alpha_{n-1}} A x_{n-1}\Bigr) \Bigr\|\\
&\le (1-\alpha_n)\Bigl\| \Bigl(x_n - \frac{\lambda}{1-\alpha_n} A x_n\Bigr) - \Bigl(x_{n-1} - \frac{\lambda}{1-\alpha_n} A x_{n-1}\Bigr) \Bigr\| + |\alpha_n - \alpha_{n-1}|\,\|x_{n-1}\|\\
&\le (1-\alpha_n)\|x_n - x_{n-1}\| + |\alpha_n - \alpha_{n-1}|\,\|x_{n-1}\|,
\end{aligned}
\]

and thus

\[
\begin{aligned}
\|z_n - z_{n-1}\| &= \bigl\| Q_C[x_n - \lambda A y_n + \mu(y_n - x_n)] - Q_C[x_{n-1} - \lambda A y_{n-1} + \mu(y_{n-1} - x_{n-1})] \bigr\|\\
&\le (1-\mu)\|x_n - x_{n-1}\| + \mu\Bigl\| \Bigl(y_n - \frac{\lambda}{\mu} A y_n\Bigr) - \Bigl(y_{n-1} - \frac{\lambda}{\mu} A y_{n-1}\Bigr) \Bigr\|\\
&\le (1-\mu)\|x_n - x_{n-1}\| + \mu\|y_n - y_{n-1}\|\\
&\le (1 - \mu\alpha_n)\|x_n - x_{n-1}\| + |\alpha_n - \alpha_{n-1}|\,\|x_{n-1}\|.
\end{aligned}
\]

It follows that

\[
\limsup_{n\to\infty}\bigl( \|z_n - z_{n-1}\| - \|x_n - x_{n-1}\| \bigr) \le 0.
\]

This together with Lemma 2.6 implies that

\[
\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0.
\]

From (3.2), we have

\[
\begin{aligned}
\|y_n - p\|^2 &\le \Bigl\| \alpha_n(-p) + (1-\alpha_n)\Bigl[\Bigl(x_n - \frac{\lambda}{1-\alpha_n} A x_n\Bigr) - \Bigl(p - \frac{\lambda}{1-\alpha_n} A p\Bigr)\Bigr] \Bigr\|^2\\
&\le \alpha_n\|p\|^2 + (1-\alpha_n)\Bigl\| \Bigl(x_n - \frac{\lambda}{1-\alpha_n} A x_n\Bigr) - \Bigl(p - \frac{\lambda}{1-\alpha_n} A p\Bigr) \Bigr\|^2\\
&\le \alpha_n\|p\|^2 + (1-\alpha_n)\|x_n - p\|^2 + 2\lambda\Bigl(\frac{K^2\lambda}{1-\alpha_n} - \frac{\alpha}{L^2}\Bigr)\|Ax_n - Ap\|^2.
\end{aligned}
\tag{3.4}
\]

From (3.1), (3.3) and (3.4), we obtain
\[
\begin{aligned}
\|x_{n+1} - p\|^2 &\le (1-\gamma)\|x_n - p\|^2 + \gamma\|z_n - p\|^2\\
&\le (1-\gamma)\|x_n - p\|^2 + \gamma\Bigl[(1-\mu)\|x_n - p\|^2 + \mu\|y_n - p\|^2 + 2\lambda\Bigl(\frac{K^2\lambda}{\mu} - \frac{\alpha}{L^2}\Bigr)\|Ay_n - Ap\|^2\Bigr]\\
&\le (1 - \gamma\mu\alpha_n)\|x_n - p\|^2 + \gamma\mu\alpha_n\|p\|^2 + 2\gamma\lambda\mu\Bigl(\frac{K^2\lambda}{1-\alpha_n} - \frac{\alpha}{L^2}\Bigr)\|Ax_n - Ap\|^2\\
&\quad + 2\gamma\lambda\Bigl(\frac{K^2\lambda}{\mu} - \frac{\alpha}{L^2}\Bigr)\|Ay_n - Ap\|^2.
\end{aligned}
\]

Therefore, we have
\[
\begin{aligned}
0 &\le 2\gamma\lambda\mu\Bigl(\frac{\alpha}{L^2} - \frac{K^2\lambda}{1-\alpha_n}\Bigr)\|Ax_n - Ap\|^2 + 2\gamma\lambda\Bigl(\frac{\alpha}{L^2} - \frac{K^2\lambda}{\mu}\Bigr)\|Ay_n - Ap\|^2\\
&\le \alpha_n\gamma\mu\|p\|^2 + \|x_n - p\|^2 - \|x_{n+1} - p\|^2\\
&= \alpha_n\gamma\mu\|p\|^2 + \bigl(\|x_n - p\| + \|x_{n+1} - p\|\bigr)\bigl(\|x_n - p\| - \|x_{n+1} - p\|\bigr)\\
&\le \alpha_n\gamma\mu\|p\|^2 + \bigl(\|x_n - p\| + \|x_{n+1} - p\|\bigr)\|x_n - x_{n+1}\|.
\end{aligned}
\]

Since $\alpha_n \to 0$ and $\|x_n - x_{n+1}\| \to 0$, we obtain

\[
\lim_{n\to\infty}\|Ax_n - Ap\| = \lim_{n\to\infty}\|Ay_n - Ap\| = 0.
\]

It follows that

\[
\lim_{n\to\infty}\|Ay_n - Ax_n\| = 0.
\]

Since $A$ is $\alpha$-strongly accretive, we deduce

\[
\|Ay_n - Ax_n\| \ge \alpha\|y_n - x_n\|,
\]

which implies that

\[
\lim_{n\to\infty}\|y_n - x_n\| = 0,
\]

that is,

\[
\lim_{n\to\infty}\bigl\| Q_C[(1-\alpha_n)x_n - \lambda A x_n] - x_n \bigr\| = 0.
\]

It follows that

\[
\lim_{n\to\infty}\bigl\| Q_C[x_n - \lambda A x_n] - x_n \bigr\| = 0.
\tag{3.5}
\]

Next, we show that

\[
\limsup_{n\to\infty}\bigl\langle -Q'(0), j\bigl(x_n - Q'(0)\bigr)\bigr\rangle \le 0.
\tag{3.6}
\]

To show (3.6), since $\{x_n\}$ is bounded, we can choose a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ converging weakly to $z$ such that

\[
\limsup_{n\to\infty}\bigl\langle -Q'(0), j\bigl(x_n - Q'(0)\bigr)\bigr\rangle = \limsup_{i\to\infty}\bigl\langle -Q'(0), j\bigl(x_{n_i} - Q'(0)\bigr)\bigr\rangle.
\tag{3.7}
\]

We first prove that $z \in S(C,A)$. From (3.5), it follows that

\[
\lim_{i\to\infty}\bigl\| Q_C(I - \lambda A)x_{n_i} - x_{n_i} \bigr\| = 0.
\tag{3.8}
\]

By Lemma 2.5 and (3.8), we have $z \in F(Q_C(I - \lambda A))$; it then follows from Lemma 2.3 that $z \in S(C,A)$.

Now, from (3.7) and Lemma 2.2, we have

\[
\begin{aligned}
\limsup_{n\to\infty}\bigl\langle -Q'(0), j\bigl(x_n - Q'(0)\bigr)\bigr\rangle &= \limsup_{i\to\infty}\bigl\langle -Q'(0), j\bigl(x_{n_i} - Q'(0)\bigr)\bigr\rangle\\
&= \bigl\langle -Q'(0), j\bigl(z - Q'(0)\bigr)\bigr\rangle \le 0.
\end{aligned}
\]

Noticing that $\|x_n - y_n\| \to 0$, we deduce that

\[
\limsup_{n\to\infty}\bigl\langle -Q'(0), j\bigl(y_n - Q'(0)\bigr)\bigr\rangle \le 0.
\]

Since $y_n = Q_C[(1-\alpha_n)(x_n - \frac{\lambda}{1-\alpha_n} A x_n)]$ and $Q'(0) = Q_C[\alpha_n Q'(0) + (1-\alpha_n)(Q'(0) - \frac{\lambda}{1-\alpha_n} A Q'(0))]$ for all $n \ge 0$, we can deduce from Lemma 2.2 that
\[
\Bigl\langle Q_C\Bigl[(1-\alpha_n)\Bigl(x_n - \frac{\lambda}{1-\alpha_n} A x_n\Bigr)\Bigr] - (1-\alpha_n)\Bigl(x_n - \frac{\lambda}{1-\alpha_n} A x_n\Bigr),\ j\bigl(y_n - Q'(0)\bigr) \Bigr\rangle \le 0
\]

and
\[
\Bigl\langle Q'(0) - \Bigl[\alpha_n Q'(0) + (1-\alpha_n)\Bigl(Q'(0) - \frac{\lambda}{1-\alpha_n} A Q'(0)\Bigr)\Bigr],\ j\bigl(Q'(0) - y_n\bigr) \Bigr\rangle \le 0.
\]

Therefore, we have
\[
\begin{aligned}
\|y_n - Q'(0)\|^2 &\le \Bigl\langle (1-\alpha_n)\Bigl[\Bigl(x_n - \frac{\lambda}{1-\alpha_n} A x_n\Bigr) - \Bigl(Q'(0) - \frac{\lambda}{1-\alpha_n} A Q'(0)\Bigr)\Bigr] - \alpha_n Q'(0),\ j\bigl(y_n - Q'(0)\bigr) \Bigr\rangle\\
&\le (1-\alpha_n)\|x_n - Q'(0)\|\,\|y_n - Q'(0)\| + \alpha_n\bigl\langle -Q'(0), j\bigl(y_n - Q'(0)\bigr)\bigr\rangle\\
&\le \frac{1-\alpha_n}{2}\bigl(\|x_n - Q'(0)\|^2 + \|y_n - Q'(0)\|^2\bigr) + \alpha_n\bigl\langle -Q'(0), j\bigl(y_n - Q'(0)\bigr)\bigr\rangle\\
&\le \frac{1}{2}\|y_n - Q'(0)\|^2 + \frac{1-\alpha_n}{2}\|x_n - Q'(0)\|^2 + \alpha_n\bigl\langle -Q'(0), j\bigl(y_n - Q'(0)\bigr)\bigr\rangle,
\end{aligned}
\]

which implies that
\[
\|y_n - Q'(0)\|^2 \le (1-\alpha_n)\|x_n - Q'(0)\|^2 + 2\alpha_n\bigl\langle -Q'(0), j\bigl(y_n - Q'(0)\bigr)\bigr\rangle.
\tag{3.9}
\]

Finally, we prove that $x_n \to Q'(0)$. Indeed, from (3.1), (3.9) and the estimate in (3.3) (with $p = Q'(0)$), we have
\[
\begin{aligned}
\|x_{n+1} - Q'(0)\|^2 &\le (1-\gamma)\|x_n - Q'(0)\|^2 + \gamma\|z_n - Q'(0)\|^2\\
&\le (1-\gamma)\|x_n - Q'(0)\|^2 + \gamma\bigl[(1-\mu)\|x_n - Q'(0)\|^2 + \mu\|y_n - Q'(0)\|^2\bigr]\\
&\le (1-\gamma\mu)\|x_n - Q'(0)\|^2 + \gamma\mu\bigl[(1-\alpha_n)\|x_n - Q'(0)\|^2 + 2\alpha_n\bigl\langle -Q'(0), j\bigl(y_n - Q'(0)\bigr)\bigr\rangle\bigr]\\
&= (1-\gamma\mu\alpha_n)\|x_n - Q'(0)\|^2 + 2\gamma\mu\alpha_n\bigl\langle -Q'(0), j\bigl(y_n - Q'(0)\bigr)\bigr\rangle.
\end{aligned}
\]

Applying Lemma 2.6 to the last inequality, we conclude that $x_n$ converges strongly to $Q'(0)$. This completes the proof. □

References

1. Korpelevich GM: An extragradient method for finding saddle points and for other problems. Ekon. Mat. Metod. 1976, 12: 747–756.

2. Iusem AN, Svaiter BF: A variant of Korpelevich’s method for variational inequalities with a new search strategy. Optimization 1997, 42: 309–321. 10.1080/02331939708844365

3. Iusem AN, Lucambio Pérez LR: An extragradient-type algorithm for non-smooth variational inequalities. Optimization 2000, 48: 309–332. 10.1080/02331930008844508

4. Solodov MV, Tseng P: Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 1996, 34: 1814–1830. 10.1137/S0363012994268655

5. Lions JL, Stampacchia G: Variational inequalities. Commun. Pure Appl. Math. 1967, 20: 493–517. 10.1002/cpa.3160200302

6. He BS, Yang ZH, Yuan XM: An approximate proximal-extragradient type method for monotone variational inequalities. J. Math. Anal. Appl. 2004, 300: 362–374. 10.1016/j.jmaa.2004.04.068

7. Bello Cruz JY, Iusem AN: A strongly convergent direct method for monotone variational inequalities in Hilbert space. Numer. Funct. Anal. Optim. 2009, 30(1–2): 23–36. 10.1080/01630560902735223

8. Glowinski R: Numerical Methods for Nonlinear Variational Problems. Springer, New York; 1984.

9. Yao Y, Noor MA, Liou YC: Strong convergence of a modified extra-gradient method to the minimum-norm solution of variational inequalities. Abstr. Appl. Anal. 2012, 2012: Article ID 817436. doi:10.1155/2012/817436

10. Xu HK, Kim TH: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119: 185–201.

11. Yamada I: The hybrid steepest descent for the variational inequality problems over the intersection of fixed points sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. Elsevier, New York; 2001: 473–504.

12. Yao Y, Liou YC, Kang SM: Two-step projection methods for a system of variational inequality problems in Banach spaces. J. Glob. Optim. 2013, 55: 801–811. 10.1007/s10898-011-9804-0

13. Yao Y, Chen R, Liou YC: A unified implicit algorithm for solving the triple-hierarchical constrained optimization problem. Math. Comput. Model. 2012, 55: 1506–1515. 10.1016/j.mcm.2011.10.041

14. Yao Y, Noor MA, Noor KI, Liou YC, Yaqoob H: Modified extragradient method for a system of variational inequalities in Banach spaces. Acta Appl. Math. 2010, 110: 1211–1224. 10.1007/s10440-009-9502-9

15. Cho YJ, Yao Y, Zhou H: Strong convergence of an iterative algorithm for accretive operators in Banach spaces. J. Comput. Anal. Appl. 2008, 10: 113–125.

16. Yao Y, Cho YJ, Liou YC: Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems. Eur. J. Oper. Res. 2011, 212: 242–250. 10.1016/j.ejor.2011.01.042

17. Cho YJ, Kang SM, Qin X: On systems of generalized nonlinear variational inequalities in Banach spaces. Appl. Math. Comput. 2008, 206: 214–220. 10.1016/j.amc.2008.09.005

18. Cho YJ, Qin X: Systems of generalized nonlinear variational inequalities and its projection methods. Nonlinear Anal. 2008, 69: 4443–4451. 10.1016/j.na.2007.11.001

19. Ceng LC, Ansari QH, Yao JC: Mann type steepest-descent and modified hybrid steepest-descent methods for variational inequalities in Banach spaces. Numer. Funct. Anal. Optim. 2008, 29: 987–1033. 10.1080/01630560802418391

20. Sahu DR, Wong NC, Yao JC: A unified hybrid iterative method for solving variational inequalities involving generalized pseudo-contractive mappings. SIAM J. Control Optim. 2012, 50: 2335–2354. 10.1137/100798648

21. Yao Y, Marino G, Muglia L: A modified Korpelevich’s method convergent to the minimum norm solution of a variational inequality. Optimization (in press). doi:10.1080/02331934.2013.764522

22. Aoyama K, Iiduka H, Takahashi W: Weak convergence of an iterative sequence for accretive operators in Banach spaces. Fixed Point Theory Appl. 2006, 2006: Article ID 35390. doi:10.1155/FPTA/2006/35390

23. Yao Y, Maruster S: Strong convergence of an iterative algorithm for variational inequalities in Banach spaces. Math. Comput. Model. 2011, 54: 325–329. 10.1016/j.mcm.2011.02.016

24. Xu HK: Inequalities in Banach spaces with applications. Nonlinear Anal. 1991, 16: 1127–1138. 10.1016/0362-546X(91)90200-K

25. Bruck RE Jr.: Nonexpansive retracts of Banach spaces. Bull. Am. Math. Soc. 1970, 76: 384–386. 10.1090/S0002-9904-1970-12486-7

26. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560

27. Browder FE: Nonlinear operators and nonlinear equations of evolution in Banach spaces. In Nonlinear Functional Analysis. Am. Math. Soc., Rhode Island; 1976: 1–308. (Proc. Sympos. Pure Math., vol. XVIII, Part 2, Chicago, Ill., 1968)

28. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332


Acknowledgements

Yonghong Yao was supported in part by NSFC 11071279 and NSFC 71161001-G0105. Yeong-Cheng Liou was supported in part by NSC 101-2628-E-230-001-MY3 and NSC 101-2622-E-230-005-CC3.

Author information


Corresponding author

Correspondence to Yonghong Yao.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Wu, Z., Yao, Y., Liou, YC. et al. A Korpelevich-like algorithm for variational inequalities. J Inequal Appl 2013, 76 (2013). https://doi.org/10.1186/1029-242X-2013-76
