  • Research Article
  • Open access

Some Iterative Methods for Solving Equilibrium Problems and Optimization Problems

Abstract

We introduce a new iterative scheme for finding a common element of the set of solutions of the equilibrium problems, the set of solutions of variational inequality for a relaxed cocoercive mapping, and the set of fixed points of a nonexpansive mapping. The results presented in this paper extend and improve some recent results of Ceng and Yao (2008), Yao (2007), S. Takahashi and W. Takahashi (2007), Marino and Xu (2006), Iiduka and Takahashi (2005), Su et al. (2008), and many others.

1. Introduction

Throughout this paper, we always assume that $H$ is a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively, that $C$ is a nonempty closed convex subset of $H$, and that $P_C$ is the metric projection of $H$ onto $C$. In the following, we denote by "$\to$" strong convergence, by "$\rightharpoonup$" weak convergence, and by "$\mathbb{R}$" the set of real numbers. Recall that a mapping $S : C \to C$ is called nonexpansive if

$\|Sx - Sy\| \le \|x - y\|$ for all $x, y \in C$. (1.1)

We denote by $F(S)$ the set of fixed points of the mapping $S$.

For a given nonlinear operator $A : C \to H$, consider the problem of finding $u \in C$ such that

$\langle Au, v - u\rangle \ge 0$ for all $v \in C$, (1.2)

which is called the variational inequality. For the recent applications, sensitivity analysis, dynamical systems, numerical methods, and physical formulations of the variational inequalities, see [1–24] and the references therein.

For a given $z \in H$, $u \in C$ satisfies the inequality

$\langle u - z, v - u\rangle \ge 0$ for all $v \in C$, (1.3)

if and only if $u = P_C z$, where $P_C$ is the projection of the Hilbert space $H$ onto the closed convex set $C$.

It is known that the projection operator $P_C$ is nonexpansive. It is also known that $P_C$ satisfies

$\langle x - y, P_C x - P_C y\rangle \ge \|P_C x - P_C y\|^2$ for all $x, y \in H$. (1.4)

Moreover, $P_C x$ is characterized by the properties $P_C x \in C$ and $\langle x - P_C x, P_C x - y\rangle \ge 0$ for all $y \in C$.

Using this characterization of the projection operator, one can easily show that the variational inequality (1.2) is equivalent to the fixed point problem of finding $u \in C$ which satisfies the relation

$u = P_C(u - \rho Au)$, (1.5)

where $\rho > 0$ is a constant.

This fixed-point formulation has been used to suggest the following iterative scheme. For a given $x_0 \in C$,

$x_{n+1} = P_C(x_n - \rho Ax_n)$, $n = 0, 1, 2, \ldots$, (1.6)

which is known as the projection iterative method for solving the variational inequality (1.2). The convergence of this iterative method requires that the operator $A$ be strongly monotone and Lipschitz continuous. These strict conditions rule out its application to many important problems arising in the physical and engineering sciences. To overcome these drawbacks, Noor [2, 3] used the technique of updating the solution to suggest the following two-step (or predictor-corrector) method for solving the variational inequality (1.2). For a given $x_0 \in C$,

$y_n = P_C(x_n - \rho Ax_n)$, $x_{n+1} = P_C(y_n - \rho Ay_n)$, $n = 0, 1, 2, \ldots$, (1.7)

which is also known as the modified double-projection method. For the convergence analysis and applications of this method, see the works of Noor [3] and Y. Yao and J.-C. Yao [16].
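In finite dimensions, the projection method (1.6) and the two-step method (1.7) are easy to sketch numerically. The following Python snippet is an illustration only; the concrete choices of $C$ (the closed unit ball), $A$ (a strongly monotone affine operator), and the step size are ours, not from the paper:

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Projection P_C onto the closed ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else radius * x / norm

# A strongly monotone, Lipschitz operator A(x) = M x + q (illustrative choice).
M = np.array([[2.0, 0.5], [0.5, 2.0]])
q = np.array([1.0, -1.0])
A = lambda x: M @ x + q

rho = 0.1           # step size, small enough for convergence
x = np.zeros(2)     # iterate for the projection method (1.6)
y2 = np.zeros(2)    # iterate for the two-step method (1.7)
for _ in range(500):
    x = project_ball(x - rho * A(x))        # one-step projection, (1.6)
    t = project_ball(y2 - rho * A(y2))      # predictor step of (1.7)
    y2 = project_ball(t - rho * A(t))       # corrector step of (1.7)

# Both iterates approximate the solution u of the variational inequality,
# which satisfies the fixed-point relation u = P_C(u - rho * A(u)).
residual = np.linalg.norm(x - project_ball(x - rho * A(x)))
print(residual < 1e-8, np.allclose(x, y2, atol=1e-6))
```

Both methods converge here because the chosen operator is strongly monotone and Lipschitz continuous; the two iterates agree at the unique solution.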

Numerous problems in physics, optimization, and economics reduce to finding a solution of the equilibrium problem (2.12). Some methods have been proposed to solve the equilibrium problem; see [4, 5]. Combettes and Hirstoaga [4] introduced an iterative scheme for finding the best approximation to the initial data when the solution set $EP(F)$ is nonempty and proved a strong convergence theorem. Very recently, S. Takahashi and W. Takahashi [6] also introduced a new iterative scheme,

$F(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0$ for all $y \in C$, $x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)Su_n$, (1.8)

for approximating a common element of the set of fixed points of a nonexpansive nonself mapping and the set of solutions of the equilibrium problem and obtained a strong convergence theorem in a real Hilbert space.

Iterative methods for nonexpansive mappings have recently been applied to solve convex minimization problems; see [7–11] and the references therein. A typical problem is to minimize a quadratic function over the set of fixed points of a nonexpansive mapping on a real Hilbert space $H$:

$\min_{x \in F(S)}\frac{1}{2}\langle Ax, x\rangle - \langle x, b\rangle$, (1.9)

where $A$ is a linear bounded operator, $F(S)$ is the fixed point set of a nonexpansive mapping $S$, and $b$ is a given point in $H$. In [10, 11], it is proved that the sequence $\{x_n\}$ defined by the iterative method below, with the initial guess $x_0 \in H$ chosen arbitrarily,

$x_{n+1} = (I - \alpha_n A)Sx_n + \alpha_n b$, $n \ge 0$, (1.10)

converges strongly to the unique solution of the minimization problem (1.9) provided the sequence $\{\alpha_n\}$ satisfies certain conditions. Recently, Marino and Xu [8] introduced a new iterative scheme by the viscosity approximation method [12]:

$x_{n+1} = \alpha_n\gamma f(x_n) + (I - \alpha_n A)Sx_n$, $n \ge 0$. (1.11)

They proved that the sequence $\{x_n\}$ generated by the above iterative scheme converges strongly to the unique solution of the variational inequality

$\langle (A - \gamma f)x^*, x - x^*\rangle \ge 0$ for all $x \in F(S)$, (1.12)

which is the optimality condition for the minimization problem

$\min_{x \in F(S)}\frac{1}{2}\langle Ax, x\rangle - h(x)$, (1.13)

where $F(S)$ is the fixed point set of a nonexpansive mapping $S$ and $h$ is a potential function for $\gamma f$ (i.e., $h'(x) = \gamma f(x)$ for $x \in H$).
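The scheme (1.11) can be sketched in finite dimensions. In the snippet below, every concrete choice ($S$, $A$, $f$, $\gamma$, the step sizes) is ours, picked so the limit can be computed by hand; it is not data from the paper:

```python
import numpy as np

# Marino-Xu viscosity scheme (1.11): x_{n+1} = a_n*g*f(x_n) + (I - a_n*A) S x_n.
# Illustrative choices: H = R^2, S = projection onto the diagonal {(t, t)}
# (nonexpansive, F(S) = diagonal), A = 2I (strongly positive, coefficient 2),
# f(x) = 0.5 x + b (a contraction with coefficient 0.5).
b = np.array([1.0, 2.0])
S = lambda x: np.full(2, x.mean())
A = lambda x: 2.0 * x
f = lambda x: 0.5 * x + b
gamma = 1.0  # requires 0 < gamma < coeff(A)/contraction = 2/0.5 = 4

x = np.zeros(2)
for n in range(1, 20000):
    a_n = 1.0 / (n + 1)          # a_n -> 0 and sum a_n = infinity
    s = S(x)
    x = a_n * gamma * f(x) + (s - a_n * A(s))

# The limit x* solves <(A - gamma*f)x*, z - x*> >= 0 over z in F(S);
# for these choices that gives x* = (1, 1).
print(np.round(x, 3))
```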

For finding a common element of the set of fixed points of nonexpansive mappings and the set of solutions of variational inequalities for an $\alpha$-cocoercive map, Takahashi and Toyoda [13] introduced the following iterative process:

$x_{n+1} = \alpha_n x_n + (1 - \alpha_n)SP_C(x_n - \lambda_n Ax_n)$, (1.14)

for every $n \ge 0$, where $x_0 \in C$, $A$ is $\alpha$-cocoercive, $\{\alpha_n\}$ is a sequence in (0,1), and $\{\lambda_n\}$ is a sequence in $(0, 2\alpha)$. They showed that, if $F(S) \cap VI(C, A)$ is nonempty, then the sequence $\{x_n\}$ generated by (1.14) converges weakly to some $z \in F(S) \cap VI(C, A)$. Recently, Iiduka and Takahashi [14] proposed another iterative scheme as follows:

$x_1 = x \in C$, $x_{n+1} = \alpha_n x + (1 - \alpha_n)SP_C(x_n - \lambda_n Ax_n)$, (1.15)

for every $n \ge 1$, where $A$ is $\alpha$-cocoercive, $\{\alpha_n\}$ is a sequence in (0,1), and $\{\lambda_n\}$ is a sequence in $(0, 2\alpha)$. They proved that the sequence $\{x_n\}$ converges strongly to $P_{F(S) \cap VI(C,A)}x$.
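A Halpern-type scheme of the form (1.15) can also be sketched numerically; the concrete data below (the set $C$, the cocoercive operator $A$, and $S$) are our own illustrative choices:

```python
import numpy as np

# Halpern-type scheme: x_{n+1} = a_n*x + (1 - a_n) S P_C(x_n - lam*A(x_n)).
# Illustrative choices: C = [0,1]^2, A(x) = x - a (1-cocoercive),
# S = P_C (so F(S) = C), and anchor point x = 0.
a = np.array([2.0, 0.5])
P_C = lambda x: np.clip(x, 0.0, 1.0)
A = lambda x: x - a
S = P_C

anchor = np.zeros(2)
xn = anchor.copy()
lam = 0.5  # step size in (0, 2*alpha) with alpha = 1
for n in range(1, 5000):
    a_n = 1.0 / (n + 1)
    xn = a_n * anchor + (1 - a_n) * S(P_C(xn - lam * A(xn)))

# Strong limit: the projection of the anchor onto
# F(S) ∩ VI(C, A) = {(1, 0.5)} for these choices.
print(np.round(xn, 2))
```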

Recently, Chen et al. [15] studied the following iterative process:

$x_0 \in C$, $x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)SP_C(x_n - \lambda_n Ax_n)$, (1.16)

and also obtained a strong convergence theorem by the viscosity approximation method.

Inspired and motivated by the ideas and techniques of Noor [2, 3], Y. Yao and J.-C. Yao [16] introduced the following iterative scheme.

Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping of $C$ into $H$, and let $S$ be a nonexpansive mapping of $C$ into itself such that $F(S) \cap VI(C, A) \ne \emptyset$. Suppose that $x_1 = u \in C$ and $\{x_n\}$, $\{y_n\}$ are given by

$y_n = P_C(x_n - \lambda_n Ax_n)$, $x_{n+1} = \alpha_n u + \beta_n x_n + \gamma_n SP_C(y_n - \lambda_n Ay_n)$, (1.17)

where $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ are sequences in $[0, 1]$ and $\{\lambda_n\}$ is a sequence in $[0, 2\alpha]$. They proved that the sequence $\{x_n\}$ defined by (1.17) converges strongly to a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of the variational inequality for $\alpha$-inverse-strongly monotone mappings under suitable conditions controlling the parameters.

In this paper, motivated by the iterative schemes considered in [6, 15, 16], we introduce a general iterative process as follows:

(1.18)

where $A$ is a linear bounded operator and $B$ is relaxed cocoercive. We prove that the sequence $\{x_n\}$ generated by the above iterative scheme converges strongly to a common element of the set of fixed points of a nonexpansive mapping, the set of solutions of the variational inequality for a relaxed cocoercive mapping, and the set of solutions of the equilibrium problem (2.12), which solves another variational inequality:

$\langle (A - \gamma f)q, p - q\rangle \ge 0$ for all $p \in F(S) \cap VI(C, B) \cap EP(F)$, (1.19)

where $q \in F(S) \cap VI(C, B) \cap EP(F)$; this is also the optimality condition for the minimization problem $\min_{x \in F(S) \cap VI(C, B) \cap EP(F)}\frac{1}{2}\langle Ax, x\rangle - h(x)$, where $h$ is a potential function for $\gamma f$ (i.e., $h'(x) = \gamma f(x)$ for $x \in H$). The results obtained in this paper improve and extend the recent ones announced by S. Takahashi and W. Takahashi [6], Iiduka and Takahashi [14], Marino and Xu [8], Chen et al. [15], Y. Yao and J.-C. Yao [16], Ceng and Yao [22], Su et al. [17], and many others.

2. Preliminaries

For solving the equilibrium problem for a bifunction $F : C \times C \to \mathbb{R}$, let us assume that $F$ satisfies the following conditions:

(A1) $F(x, x) = 0$ for all $x \in C$;

(A2) $F$ is monotone, that is, $F(x, y) + F(y, x) \le 0$ for all $x, y \in C$;

(A3) for each $x, y, z \in C$, $\lim_{t\downarrow 0}F(tz + (1 - t)x, y) \le F(x, y)$;

(A4) for each $x \in C$, $y \mapsto F(x, y)$ is convex and lower semicontinuous.

Recall the following.

(1) $B$ is called $\nu$-strongly monotone if for all $x, y \in C$, we have

$\langle Bx - By, x - y\rangle \ge \nu\|x - y\|^2$ (2.1)

for a constant $\nu > 0$. This implies that

$\|Bx - By\| \ge \nu\|x - y\|$, (2.2)

that is, $B$ is $\nu$-expansive, and when $\nu = 1$, it is expansive.

(2) $B$ is said to be $\mu$-cocoercive [2, 3] if for all $x, y \in C$, we have

$\langle Bx - By, x - y\rangle \ge \mu\|Bx - By\|^2$ for a constant $\mu > 0$. (2.3)

Clearly, every $\mu$-cocoercive map $B$ is $(1/\mu)$-Lipschitz continuous.

(3) $B$ is called relaxed $\mu$-cocoercive if there exists a constant $\mu > 0$ such that

$\langle Bx - By, x - y\rangle \ge -\mu\|Bx - By\|^2$ for all $x, y \in C$. (2.4)

(4) $B$ is said to be relaxed $(\mu, \nu)$-cocoercive if there exist two constants $\mu, \nu > 0$ such that

$\langle Bx - By, x - y\rangle \ge -\mu\|Bx - By\|^2 + \nu\|x - y\|^2$ for all $x, y \in C$; (2.5)

for $\mu = 0$, $B$ is $\nu$-strongly monotone. This class of maps is more general than the class of strongly monotone maps. It is easy to see that we have the following implication: $\nu$-strong monotonicity implies relaxed $(\mu, \nu)$-cocoercivity.

We now give a practical example of a relaxed $(\mu, \nu)$-cocoercive and Lipschitz continuous operator.

Example 2.1.

Let $Bx = kx$ for all $x \in H$, for a constant $k > 0$; then $B$ is relaxed $(\mu, \nu)$-cocoercive and Lipschitz continuous. In particular, $B$ is $k$-strongly monotone.

Proof.

(1) Since $Bx = kx$ for all $x \in H$, we have $Bx - By = k(x - y)$. For all $x, y \in H$ and all $\mu > 0$, we also have

$\langle Bx - By, x - y\rangle = k\|x - y\|^2 = -\mu k^2\|x - y\|^2 + (k + \mu k^2)\|x - y\|^2 = -\mu\|Bx - By\|^2 + (k + \mu k^2)\|x - y\|^2$. (2.6)

Taking $\nu = k + \mu k^2$, it is clear that $B$ is relaxed $(\mu, \nu)$-cocoercive.

(2) Obviously, for all $x, y \in H$,

$\|Bx - By\| = k\|x - y\|$. (2.7)

Then $B$ is Lipschitz continuous.

In particular, taking $\mu = 0$, we observe that

$\langle Bx - By, x - y\rangle = k\|x - y\|^2$. (2.8)

Obviously, $B$ is $k$-strongly monotone.

The proof is completed.
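Relaxed cocoercivity is also easy to check numerically. The sketch below verifies the relaxed $(\mu, \nu)$-cocoercive inequality of definition (4) on random samples for a linear operator of our own choosing (not the operator of Example 2.1):

```python
import numpy as np

# Numerical check of the relaxed (mu, nu)-cocoercive inequality
#   <Bx - By, x - y> >= -mu*||Bx - By||^2 + nu*||x - y||^2
# for B(x) = M x, where the symmetric part of M has minimum eigenvalue 2.
rng = np.random.default_rng(0)
M = np.array([[3.0, 1.0], [-1.0, 2.0]])  # sym. part diag(3, 2) -> nu = 2 works
B = lambda x: M @ x
mu, nu = 0.1, 2.0

ok = True
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    d, Bd = x - y, B(x) - B(y)
    ok &= Bd @ d >= -mu * (Bd @ Bd) + nu * (d @ d) - 1e-9
print(ok)
```

Since $\langle Bd, d\rangle \ge 2\|d\|^2$ for this $M$, the inequality holds with any $\mu > 0$, so every sample passes.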

(5) A mapping $f : C \to C$ is said to be a contraction if there exists a coefficient $\alpha$ $(0 < \alpha < 1)$ such that

$\|f(x) - f(y)\| \le \alpha\|x - y\|$ for all $x, y \in C$. (2.9)

(6) An operator $A$ is strongly positive if there exists a constant $\bar\gamma > 0$ with the property

$\langle Ax, x\rangle \ge \bar\gamma\|x\|^2$ for all $x \in H$. (2.10)

(7) A set-valued mapping $T : H \to 2^H$ is called monotone if for all $x, y \in H$, $f \in Tx$ and $g \in Ty$ imply $\langle x - y, f - g\rangle \ge 0$. A monotone mapping $T : H \to 2^H$ is maximal if the graph $G(T)$ of $T$ is not properly contained in the graph of any other monotone mapping. It is well known that a monotone mapping $T$ is maximal if and only if for $(x, f) \in H \times H$, $\langle x - y, f - g\rangle \ge 0$ for every $(y, g) \in G(T)$ implies $f \in Tx$.

Let $B$ be a monotone map of $C$ into $H$ and let $N_C v$ be the normal cone to $C$ at $v \in C$, that is, $N_C v = \{w \in H : \langle v - u, w\rangle \ge 0 \text{ for all } u \in C\}$, and define

$Tv = Bv + N_C v$ if $v \in C$, and $Tv = \emptyset$ if $v \notin C$. (2.11)

Then $T$ is maximal monotone and $0 \in Tv$ if and only if $v \in VI(C, B)$; see [1].

Related to the variational inequality problem (1.2), we consider the equilibrium problem, which was introduced by Blum and Oettli [19] and Noor and Oettli [20] in 1994. To be more precise, let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$, where $\mathbb{R}$ is the set of real numbers.

For a given bifunction $F : C \times C \to \mathbb{R}$, we consider the problem of finding $x \in C$ such that

$F(x, y) \ge 0$ for all $y \in C$, (2.12)

which is known as the equilibrium problem. The set of solutions of (2.12) is denoted by $EP(F)$. Given a mapping $T : C \to H$, let $F(x, y) = \langle Tx, y - x\rangle$ for all $x, y \in C$. Then $z \in EP(F)$ if and only if $\langle Tz, y - z\rangle \ge 0$ for all $y \in C$, that is, $z$ is a solution of the variational inequality. In other words, the variational inequality problem is included in the equilibrium problem as a special case.
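This inclusion can be checked directly. The following short derivation (ours) verifies that the choice $F(x, y) = \langle Tx, y - x\rangle$ satisfies conditions (A1) and (A2) above whenever $T$ is monotone:

```latex
\begin{align*}
\text{(A1):}\quad & F(x,x) = \langle Tx,\, x - x\rangle = 0, \\
\text{(A2):}\quad & F(x,y) + F(y,x)
   = \langle Tx,\, y - x\rangle + \langle Ty,\, x - y\rangle \\
 &= -\,\langle Tx - Ty,\, x - y\rangle \;\le\; 0
   \quad\text{by the monotonicity of } T.
\end{align*}
```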

Assume that $h$ is a potential function for $\gamma f$ (i.e., $h'(x) = \gamma f(x)$ for all $x \in H$). It is well known that $x^*$ solves the minimization problem $\min_{x \in C}\frac{1}{2}\langle Ax, x\rangle - h(x)$ if and only if $x^*$ satisfies the optimality condition

$\langle (A - \gamma f)x^*, x - x^*\rangle \ge 0$ for all $x \in C$. (2.13)

We can rewrite the variational inequality (2.13) as, for any $\rho > 0$,

$\langle x^* - (x^* - \rho(A - \gamma f)x^*), x - x^*\rangle \ge 0$ for all $x \in C$. (2.14)

If we introduce the nearest point projection $P_C$ from $H$ onto $C$,

$P_C x = \arg\min_{y \in C}\|x - y\|$, $x \in H$, (2.15)

which is characterized by the inequality

$\langle x - P_C x, P_C x - y\rangle \ge 0$ for all $y \in C$, (2.16)

then we see from (2.14) that the minimization (2.13) is equivalent to the fixed point problem

$x^* = P_C(x^* - \rho(A - \gamma f)x^*)$. (2.17)

Therefore, these problems are related as follows:

minimization (2.13) $\Longleftrightarrow$ variational inequality (2.14) $\Longleftrightarrow$ fixed point problem (2.17). (2.18)

In addition, by result (3) of Lemma 2.7, $F(T_r) = EP(F)$, so if the element $x^*$ belongs to $F(S) \cap VI(C, B) \cap EP(F)$, then $x^*$ is a solution of the nonlinear equation

(2.19)

where $T_r$ is defined as in Lemma 2.7. Once we have a solution of (2.19), it simultaneously solves the fixed point problem, the equilibrium problem, and the variational inequality. Therefore, the constrained set $F(S) \cap VI(C, B) \cap EP(F)$ is very important and applicable.

We now recall some well-known concepts and results. It is well known that for all $x, y \in H$ and $\lambda \in [0, 1]$ there holds

$\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2$. (2.20)

A space $X$ is said to satisfy Opial's condition [18] if for each sequence $\{x_n\}$ in $X$ which converges weakly to a point $x \in X$, we have

$\liminf_{n\to\infty}\|x_n - x\| < \liminf_{n\to\infty}\|x_n - y\|$ for all $y \in X$ with $y \ne x$. (2.21)

Lemma 2.2 (see [9, 10]).

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that

$a_{n+1} \le (1 - \gamma_n)a_n + \delta_n$, (2.22)

where $\{\gamma_n\}$ is a sequence in (0,1) and $\{\delta_n\}$ is a sequence such that

(i) $\sum_{n=1}^{\infty}\gamma_n = \infty$;

(ii) $\limsup_{n\to\infty}\delta_n/\gamma_n \le 0$ or $\sum_{n=1}^{\infty}|\delta_n| < \infty$.

Then $\lim_{n\to\infty}a_n = 0$.
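Lemma 2.2 can be illustrated numerically. The sketch below (with sequence choices of our own) runs the recursion with equality and checks that $a_n$ decays to zero:

```python
# Numerical illustration of Lemma 2.2: a_{n+1} <= (1 - g_n) a_n + d_n
# with g_n = 1/(n+1) (so sum g_n diverges) and d_n = g_n/(n+1)
# (so d_n/g_n -> 0) forces a_n -> 0; here the recursion holds with equality.
a = 1.0
for n in range(1, 200000):
    g = 1.0 / (n + 1)
    d = g / (n + 1)
    a = (1 - g) * a + d
print(a < 1e-3)
```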

Lemma 2.3.

In a real Hilbert space $H$, the following inequality holds:

$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle$ for all $x, y \in H$. (2.23)

Lemma 2.4 (Marino and Xu [8]).

Assume that $A$ is a strongly positive linear bounded operator on a Hilbert space $H$ with coefficient $\bar\gamma > 0$ and $0 < \rho \le \|A\|^{-1}$. Then $\|I - \rho A\| \le 1 - \rho\bar\gamma$.

Lemma 2.5 (see [21]).

Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose $x_{n+1} = (1 - \beta_n)z_n + \beta_n x_n$ for all integers $n \ge 0$ and $\limsup_{n\to\infty}(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|) \le 0$. Then $\lim_{n\to\infty}\|z_n - x_n\| = 0$.

Lemma 2.6 (Blum and Oettli [19]).

Let $C$ be a nonempty closed convex subset of $H$ and let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$ satisfying (A1)–(A4). Let $r > 0$ and $x \in H$. Then, there exists $z \in C$ such that

$F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0$ for all $y \in C$. (2.24)

Lemma 2.7 (Combettes and Hirstoaga [4]).

Assume that $F : C \times C \to \mathbb{R}$ satisfies (A1)–(A4). For $r > 0$ and $x \in H$, define a mapping $T_r : H \to C$ as follows:

$T_r(x) = \{z \in C : F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0 \text{ for all } y \in C\}$ (2.25)

for all $x \in H$. Then, the following hold:

(1) $T_r$ is single-valued;

(2) $T_r$ is firmly nonexpansive, that is, for any $x, y \in H$, $\|T_r x - T_r y\|^2 \le \langle T_r x - T_r y, x - y\rangle$;

(3) $F(T_r) = EP(F)$;

(4) $EP(F)$ is closed and convex.
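The resolvent $T_r$ of Lemma 2.7 admits a closed form for simple bifunctions. A 1-D sketch, assuming the toy choice $F(x, y) = y^2 - x^2$ (ours, not from the paper), which satisfies (A1)–(A4) on $C = \mathbb{R}$:

```python
# For F(x, y) = y^2 - x^2 on C = R, the defining inequality
#   F(z, y) + (1/r)(y - z)(z - x) >= 0 for all y
# says z minimizes the convex quadratic y -> y^2 + (1/r) y (z - x) + const,
# so 2z + (1/r)(z - x) = 0, giving the closed form T_r(x) = x / (1 + 2r).
def T(r, x):
    return x / (1.0 + 2.0 * r)

r = 0.5
# (2) firm nonexpansiveness: |T x - T y|^2 <= (T x - T y)(x - y)
x, y = 3.0, -1.0
lhs = (T(r, x) - T(r, y)) ** 2
rhs = (T(r, x) - T(r, y)) * (x - y)
print(lhs <= rhs)

# (3) F(T_r) = EP(F): the only fixed point of T_r is 0, and EP(F) = {0},
# since y^2 - x^2 >= 0 for all y forces x = 0.
print(T(r, 0.0) == 0.0)
```

For nontrivial bifunctions no such closed form exists, which is the point of Remark 3.3 below: $T_r$ is well defined but can be hard to compute.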

3. Main Results

Theorem 3.1.

Let $C$ be a nonempty closed convex subset of a Hilbert space $H$. Let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$ which satisfies (A1)–(A4), let $S$ be a nonexpansive mapping of $C$ into $H$, and let $B$ be an $L$-Lipschitzian, relaxed $(\mu, \nu)$-cocoercive map of $C$ into $H$ such that $F(S) \cap VI(C, B) \cap EP(F) \ne \emptyset$. Let $A$ be a strongly positive linear bounded operator with coefficient $\bar\gamma > 0$. Assume that $0 < \gamma < \bar\gamma/\alpha$. Let $f$ be a contraction of $H$ into itself with a coefficient $\alpha$ $(0 < \alpha < 1)$ and let $\{x_n\}$ and $\{u_n\}$ be sequences generated by $x_1 \in H$ and

(3.1)

for all $n$, where $\{\alpha_n\}$, $\{\beta_n\}$, $\{r_n\}$, and $\{\lambda_n\}$ satisfy

(C1) $\lim_{n\to\infty}\alpha_n = 0$;

(C2) $\sum_{n=1}^{\infty}\alpha_n = \infty$;

(C3) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;

(C4) $\liminf_{n\to\infty}r_n > 0$, and $\lim_{n\to\infty}(r_{n+1} - r_n) = 0$;

(C5) $\lim_{n\to\infty}(\lambda_{n+1} - \lambda_n) = 0$;

(C6) $\lambda_n \in [a, b]$ for some $a, b$ with $0 < a < b < 2(\nu - \mu L^2)/L^2$.

Then, both $\{x_n\}$ and $\{u_n\}$ converge strongly to a point $q \in F(S) \cap VI(C, B) \cap EP(F)$, which solves the following variational inequality:

$\langle (A - \gamma f)q, p - q\rangle \ge 0$ for all $p \in F(S) \cap VI(C, B) \cap EP(F)$. (3.2)

Proof.

Note that from condition (C1), we may assume, without loss of generality, that $\alpha_n \le \|A\|^{-1}$. Since $A$ is a strongly positive bounded linear operator on $H$, we have

(3.3)

observe that

(3.4)

that is to say, the operator is positive. It follows that

(3.5)

First, we show that is nonexpansive. Indeed, from the relaxed -cocoercive and -Lipschitzian definition on and condition (C6), we have

(3.6)

which implies that the mapping is nonexpansive.

Now, we observe that is bounded. Indeed, take , since , we have

(3.7)

Put , since , we have . Therefore, we have

(3.8)

Due to (3.5), it follows that

(3.9)

It follows from (3.9) that

(3.10)

Hence, is bounded, so are , , and .

Next, we show that

(3.11)

Observing that and , we have

(3.12)
(3.13)

Putting in (3.12) and in (3.13), we have

(3.14)

It follows from (A2) that

(3.15)

That is,

(3.16)

Without loss of generality, let us assume that there exists a real number such that for all . It follows that

(3.17)

It follows that

(3.18)

where is an appropriate constant such that . Note that

(3.19)

Substituting (3.18) into (3.19) yields that

(3.20)

where is an appropriate constant such that .

Define

(3.21)

Observe that from the definition of , we obtain

(3.22)

It follows that with

(3.23)

This together with (C1), (C3), and (C4) implies that

(3.24)

Hence, by Lemma 2.5, we obtain as .

Consequently,

(3.25)

Note that

(3.26)

This together with (3.25) implies that

(3.27)

For , we have

(3.28)

and hence

(3.29)

Set as a constant such that

(3.30)

By (3.29) and (3.30), we have

(3.31)

It follows that

(3.32)

By and , as , and is bounded, we obtain that

(3.33)

For , we have

(3.34)

Observe (3.31) that

(3.35)

Substituting (3.34) into (3.35), we have

(3.36)

It follows from condition (C6) that

(3.37)

From condition (C1) and (3.25), we have that

(3.38)

On the other hand, we have

(3.39)

which yields that

(3.40)

Substituting (3.40) into (3.35) yields that

(3.41)

It follows that

(3.42)

From condition (C1), (3.25), and (3.38), we have that

(3.43)

Observe that

(3.44)

From (3.27), (3.33), and (3.43), we have

(3.45)

Observe that is a contraction. Indeed, for all , we have

(3.46)

Banach's Contraction Mapping Principle guarantees that has a unique fixed point, say , that is, .

Next, we show that

(3.47)

To see this, we choose a subsequence of such that

(3.48)

Correspondingly, there exists a subsequence of . Since is bounded, there exists a subsequence of which converges weakly to . Without loss of generality, we can assume that .

Next, we show that . First, we prove . Since , we have

(3.49)

It follows from (A2) that,

(3.50)

It follows that

(3.51)

Since , , and (A4), we have for all . For with and , let . Since and , we have and hence . So, from (A1) and (A4), we have

(3.52)

That is, . It follows from (A3) that for all and hence . Since Hilbert spaces satisfy Opial's condition, from (3.43), suppose ; we have

(3.53)

which is a contradiction. Thus, we have .

Next, let us show that . Put

(3.54)

Since is relaxed -cocoercive and from condition (C6), we have

(3.55)

which yields that is monotone. Thus is maximal monotone. Let . Since and , we have

(3.56)

On the other hand, from , we have

(3.57)

and hence

(3.58)

It follows that

(3.59)

which implies that , we have and hence . That is, .

Since , we have

(3.60)

That is, (3.47) holds.

Finally, we show that , where , which solves the following variational inequality:

(3.61)

We consider

(3.62)

which implies that

(3.63)

Since , , and are bounded, we can take a constant such that

(3.64)

for all . It then follows that

(3.65)

where

(3.66)

From (3.27) and (3.47), we also have

(3.67)

By (C1), (3.47), and (3.67), we get . Now applying Lemma 2.2 to (3.65) concludes that .

This completes the proof.

Remark 3.2.

Some iterative algorithms were presented in Yamada [11], Combettes [24], and Iiduka and Yamada [25], for example, the steepest descent method, the hybrid steepest descent method, and the conjugate gradient method; these methods have the common form

$x_{n+1} = x_n + \lambda_n d_n$, (3.68)

where $x_n$ is the $n$th approximation to the solution, $\lambda_n$ is a step size, and $d_n$ is a search direction. In this paper, we define ; the method (3.1) can then be written as

(3.69)

Taking , the method (3.1) reduces to (3.68).

Remark 3.3.

The resolvent $T_r$ of $F$ in Lemma 2.7 and Theorem 3.1 is mathematically well defined, but, in general, the computation of $T_r$ is very difficult in large-scale finite-dimensional spaces and in infinite-dimensional spaces.

4. Applications

Theorem 4.1.

Let $C$ be a nonempty closed convex subset of a Hilbert space $H$. Let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$ which satisfies (A1)–(A4), and let $S$ be a nonexpansive mapping of $C$ into $H$ such that $F(S) \cap EP(F) \ne \emptyset$. Let $A$ be a strongly positive linear bounded operator with coefficient $\bar\gamma > 0$. Assume that $0 < \gamma < \bar\gamma/\alpha$. Let $f$ be a contraction of $H$ into itself with a coefficient $\alpha$ $(0 < \alpha < 1)$ and let $\{x_n\}$ and $\{u_n\}$ be sequences generated by $x_1 \in H$ and

(4.1)

for all , where and satisfy

(C1);

(C2);

(C3);

(C4) and ;

(C5).

Then, both $\{x_n\}$ and $\{u_n\}$ converge strongly to a point $q \in F(S) \cap EP(F)$, which solves the following variational inequality:

$\langle (A - \gamma f)q, p - q\rangle \ge 0$ for all $p \in F(S) \cap EP(F)$. (4.2)

Proof.

Taking $B = 0$ in Theorem 3.1, we can get the desired conclusion easily.

Theorem 4.2.

Let $C$ be a nonempty closed convex subset of a Hilbert space $H$, let $S$ be a nonexpansive mapping of $C$ into $H$, and let $B$ be an $L$-Lipschitzian, relaxed $(\mu, \nu)$-cocoercive map of $C$ into $H$ such that $F(S) \cap VI(C, B) \ne \emptyset$. Let $A$ be a strongly positive linear bounded operator with coefficient $\bar\gamma > 0$. Assume that $0 < \gamma < \bar\gamma/\alpha$. Let $f$ be a contraction of $H$ into itself with a coefficient $\alpha$ $(0 < \alpha < 1)$ and let $\{x_n\}$ and $\{y_n\}$ be sequences generated by $x_1 \in H$ and

(4.3)

for all , where and satisfy

(C1);

(C2);

(C3);

(C4) and ;

(C5);

(C6) for some , with .

Then, both $\{x_n\}$ and $\{y_n\}$ converge strongly to a point $q \in F(S) \cap VI(C, B)$, which solves the following variational inequality:

$\langle (A - \gamma f)q, p - q\rangle \ge 0$ for all $p \in F(S) \cap VI(C, B)$. (4.4)

Proof.

Put $F(x, y) = 0$ for all $x, y \in C$ and $r_n = 1$ for all $n$ in Theorem 3.1. Then we have $u_n = P_C x_n$. We can obtain the desired conclusion easily.

References

  1. Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Transactions of the American Mathematical Society 1970, 149:75–88.

  2. Noor MA: New approximation schemes for general variational inequalities. Journal of Mathematical Analysis and Applications 2000, 251(1):217–229.

  3. Noor MA: Some developments in general variational inequalities. Applied Mathematics and Computation 2004, 152(1):199–277.

  4. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. Journal of Nonlinear and Convex Analysis 2005, 6(1):117–136.

  5. Flåm SD, Antipin AS: Equilibrium programming using proximal-like algorithms. Mathematical Programming 1997, 78(1):29–41.

  6. Takahashi S, Takahashi W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. Journal of Mathematical Analysis and Applications 2007, 331(1):506–515.

  7. Deutsch F, Yamada I: Minimizing certain convex functions over the intersection of the fixed point sets of nonexpansive mappings. Numerical Functional Analysis and Optimization 1998, 19(1–2):33–56.

  8. Marino G, Xu H-K: A general iterative method for nonexpansive mappings in Hilbert spaces. Journal of Mathematical Analysis and Applications 2006, 318(1):43–52.

  9. Xu H-K: Iterative algorithms for nonlinear operators. Journal of the London Mathematical Society 2002, 66(1):240–256.

  10. Xu H-K: An iterative approach to quadratic optimization. Journal of Optimization Theory and Applications 2003, 116(3):659–678.

  11. Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In: Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, Studies in Computational Mathematics, Volume 8. Edited by: Butnariu D, Censor Y, Reich S. North-Holland, Amsterdam, The Netherlands; 2001:473–504.

  12. Moudafi A: Viscosity approximation methods for fixed-points problems. Journal of Mathematical Analysis and Applications 2000, 241(1):46–55.

  13. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. Journal of Optimization Theory and Applications 2003, 118(2):417–428.

  14. Iiduka H, Takahashi W: Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings. Nonlinear Analysis: Theory, Methods & Applications 2005, 61(3):341–350.

  15. Chen J, Zhang L, Fan T: Viscosity approximation methods for nonexpansive mappings and monotone mappings. Journal of Mathematical Analysis and Applications 2007, 334(2):1450–1461.

  16. Yao Y, Yao J-C: On modified iterative method for nonexpansive mappings and monotone mappings. Applied Mathematics and Computation 2007, 186(2):1551–1558.

  17. Su Y, Shang M, Qin X: An iterative method of solution for equilibrium and optimization problems. Nonlinear Analysis: Theory, Methods & Applications 2008, 69(8):2709–2719.

  18. Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bulletin of the American Mathematical Society 1967, 73:591–597.

  19. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. The Mathematics Student 1994, 63(1–4):123–145.

  20. Noor MA, Oettli W: On general nonlinear complementarity problems and quasi-equilibria. Le Matematiche 1994, 49(2):313–331.

  21. Suzuki T: Strong convergence of Krasnoselskii and Mann's type sequences for one-parameter nonexpansive semigroups without Bochner integrals. Journal of Mathematical Analysis and Applications 2005, 305(1):227–239.

  22. Ceng L-C, Yao J-C: Hybrid viscosity approximation schemes for equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. Applied Mathematics and Computation 2008, 198(2):729–741.

  23. Noor MA, Noor KI, Yaqoob H: On general mixed variational inequalities. Acta Applicandae Mathematicae 2010, 110(1):227–246.

  24. Combettes PL: A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Transactions on Signal Processing 2003, 51(7):1771–1782.

  25. Iiduka H, Yamada I: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM Journal on Optimization 2008, 19(4):1881–1893.


Acknowledgments

This work is supported by the Fundamental Research Funds for the Central Universities, no. JY10000970006 and National Science Foundation of China, no. 60974082.

Corresponding author

Correspondence to Huimin He.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

He, H., Liu, S. & Fan, Q. Some Iterative Methods for Solving Equilibrium Problems and Optimization Problems. J Inequal Appl 2010, 943275 (2010). https://doi.org/10.1155/2010/943275
