Strong convergence theorems for equilibrium problems and weak Bregman relatively nonexpansive mappings in Banach spaces

Abstract

In this paper, a shrinking projection algorithm based on the prediction correction method for equilibrium problems and weak Bregman relatively nonexpansive mappings is introduced and investigated in Banach spaces, and then the strong convergence of the sequence generated by the proposed algorithm is derived under some suitable assumptions. These results are new and develop some recent results in this field.

MSC:26B25, 47H09, 47J05, 47J25.

1 Introduction

In this paper, unless otherwise specified, let R be the set of real numbers, C be a nonempty, closed and convex subset of a real reflexive Banach space E with the dual space E ∗ . The norm and the duality pairing between E ∗ and E are denoted by ∥⋅∥ and 〈⋅,⋅〉, respectively. Let f:E→R∪{+∞} be a proper, convex and lower semicontinuous function. Denote the domain of f by domf, i.e., domf={x∈E:f(x)<+∞}. The Fenchel conjugate of f is the function f ∗ : E ∗ →(−∞,+∞] defined by f ∗ (ξ)=sup{〈ξ,x〉−f(x):x∈E}. Let T:C→C be a nonlinear mapping. Denote by F(T)={x∈C:Tx=x} the set of fixed points of T. A mapping T is said to be nonexpansive if ∥Tx−Ty∥≤∥x−y∥ for all x,y∈C.
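
As a simple illustration of the Fenchel conjugate, consider the following worked example, added here for concreteness and assuming that E is a Hilbert space (so that E ∗ is identified with E).

```latex
% Worked example: the Fenchel conjugate of f(x) = \tfrac12\|x\|^2 on a Hilbert space E.
% The supremum defining f^* is attained at x = \xi, hence
\[
f^*(\xi) = \sup_{x \in E}\bigl\{\langle \xi, x\rangle - \tfrac12\|x\|^2\bigr\}
         = \|\xi\|^2 - \tfrac12\|\xi\|^2
         = \tfrac12\|\xi\|^2 ,
\]
% so f^* = f; this f is a coercive Legendre function with \nabla f = \nabla f^* = I.
```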

In 1994, Blum and Oettli [1] first studied the equilibrium problem: find x ¯ ∈C such that

H( x ¯ ,y)≥0,∀y∈C,
(1.1)

where H:C×C→R is a given bifunction. Denote the set of solutions of problem (1.1) by EP(H). Since then, various equilibrium problems have been investigated. It is well known that equilibrium problems and their generalizations are important tools for solving problems arising in linear and nonlinear programming, variational inequalities, complementarity problems, optimization problems and fixed point problems, and they have been widely applied to physics, structural analysis, management science, economics, etc. One of the most important and interesting topics in the theory of equilibria is to develop efficient and implementable algorithms for solving equilibrium problems and their generalizations (see, e.g., [2–8] and the references therein). Since equilibrium problems are closely connected with both fixed point problems and variational inequality problems, finding common elements of their solution sets has attracted much attention and has become an active research topic in the past few years (see, e.g., [9–16] and the references therein).
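
Two standard specializations of problem (1.1), recorded here only as an illustration, indicate the scope of this formulation.

```latex
% Illustration: special cases of the equilibrium problem (1.1).
% (i) Variational inequality: for a mapping A : C \to E^*, put H(x,y) = \langle Ax,\, y - x\rangle;
%     then \bar{x} \in EP(H) iff \langle A\bar{x},\, y - \bar{x}\rangle \ge 0 for all y \in C.
% (ii) Convex minimization: for a convex function g : C \to \mathbb{R}, put H(x,y) = g(y) - g(x);
%     then \bar{x} \in EP(H) iff \bar{x} minimizes g over C.
```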

In 1967, Bregman [17] discovered an elegant and effective technique based on the so-called Bregman distance function D f (see Section 2, Definition 2.1) for designing and analyzing feasibility and optimization algorithms. This opened a growing area of research in which Bregman's technique has been applied in various ways in order to design and analyze not only iterative algorithms for solving feasibility and optimization problems, but also algorithms for solving variational inequalities, approximating equilibria, computing fixed points of nonlinear mappings and so on (see, e.g., [18–24] and the references therein). In 2005, Butnariu and Resmerita [25] presented Bregman-type iterative algorithms and studied the convergence of the Bregman-type iterative method for solving some nonlinear operator equations.

Recently, by using the Bregman projection, Reich and Sabach [26] presented the following two algorithms for finding common zeroes of finitely many maximal monotone operators A i :E→ 2 E ∗ (i=1,2,…,N) in a reflexive Banach space E:

x 0 ∈ E ,
y n i = Res λ n i f ( x n + e n i ) ,
C n i = { z ∈ E : D f ( z , y n i ) ≤ D f ( z , x n + e n i ) } ,
C n = ⋂ i = 1 N C n i ,
Q n = { z ∈ E : 〈 ∇ f ( x 0 ) − ∇ f ( x n ) , z − x n 〉 ≤ 0 } ,
x n + 1 = proj C n ∩ Q n f x 0 , ∀ n ≥ 0

and

x 0 ∈ E ,
η n i = ξ n i + ( 1 / λ n i ) ( ∇ f ( y n i ) − ∇ f ( x n ) ) , ξ n i ∈ A i y n i ,
ω n i = ∇ f ∗ ( λ n i η n i + ∇ f ( x n ) ) ,
C n i = { z ∈ E : D f ( z , y n i ) ≤ D f ( z , ω n i ) } ,
C n = ⋂ i = 1 N C n i ,
Q n = { z ∈ E : 〈 ∇ f ( x 0 ) − ∇ f ( x n ) , z − x n 〉 ≤ 0 } ,
x n + 1 = proj C n ∩ Q n f x 0 , ∀ n ≥ 0 ,

where { λ n i } i = 1 N ⊆(0,+∞), { e n i } i = 1 N are error sequences in E with e n i →0, and proj C f is the Bregman projection with respect to f from E onto a closed and convex subset C. Further, under some suitable conditions, they obtained two strong convergence theorems for maximal monotone operators in a reflexive Banach space. Reich and Sabach [7] also studied the convergence of two iterative algorithms for finitely many Bregman strongly nonexpansive operators in a Banach space. In [15], Reich and Sabach proposed the following algorithm for finding common fixed points of finitely many Bregman firmly nonexpansive operators T i :C→C (i=1,2,…,N) in a reflexive Banach space E, provided that ⋂ i = 1 N F( T i )≠∅:

x 0 ∈ E , Q 0 i = E , i = 1 , 2 , … , N ,
y n i = T i ( x n + e n i ) ,
Q n + 1 i = { z ∈ Q n i : 〈 ∇ f ( x n + e n i ) − ∇ f ( y n i ) , z − y n i 〉 ≤ 0 } ,
Q n = ⋂ i = 1 N Q n i ,
x n + 1 = proj Q n + 1 f x 0 , ∀ n ≥ 0 .
(1.2)

Under some suitable conditions, they proved that the sequence { x n } generated by (1.2) converges strongly to an element of ⋂ i = 1 N F( T i ) and applied the result to the solution of convex feasibility and equilibrium problems.

Very recently, Chen et al. [27] introduced the concept of weak Bregman relatively nonexpansive mappings in a reflexive Banach space and gave an example to illustrate the existence of a weak Bregman relatively nonexpansive mapping and the difference between a weak Bregman relatively nonexpansive mapping and a Bregman relatively nonexpansive mapping. They also proved the strong convergence of the sequences generated by the constructed algorithms with errors for finding a fixed point of weak Bregman relatively nonexpansive mappings and Bregman relatively nonexpansive mappings under some suitable conditions.

This paper is devoted to investigating a shrinking projection algorithm based on the prediction correction method for finding a common element of the set of solutions to the equilibrium problem (1.1) and the set of fixed points of a weak Bregman relatively nonexpansive mapping in Banach spaces; the strong convergence of the sequence generated by the proposed algorithm is then derived under some suitable assumptions.

2 Preliminaries

Let T:C→C be a nonlinear mapping. A point ω∈C is called an asymptotic fixed point of T (see [28]) if C contains a sequence { x n } which converges weakly to ω such that lim n → ∞ ∥T x n − x n ∥=0. A point ω∈C is called a strong asymptotic fixed point of T (see [28]) if C contains a sequence { x n } which converges strongly to ω such that lim n → ∞ ∥T x n − x n ∥=0. We denote the sets of asymptotic fixed points and strong asymptotic fixed points of T by F ˆ (T) and F ˜ (T), respectively.

Let { x n } be a sequence in E; we denote the strong convergence of { x n } to x∈E by x n →x. For any x∈int(domf), the right-hand derivative of f at x in the direction y∈E is defined by

f ′ (x,y):= lim t ↘ 0 [ f ( x + t y ) − f ( x ) ] / t .

f is called Gâteaux differentiable at x if, for all y∈E, the limit lim t ↘ 0 [ f ( x + t y ) − f ( x ) ] / t exists. In this case, the gradient of f at x is the linear functional ∇f(x)∈ E ∗ defined by 〈∇f(x),y〉:= f ′ (x,y) for all y∈E. f is called Gâteaux differentiable if it is Gâteaux differentiable at any x∈int(domf). f is called Fréchet differentiable at x if this limit is attained uniformly for ∥y∥=1. We say that f is uniformly Fréchet differentiable on a subset C of E if the limit is attained uniformly for x∈C and ∥y∥=1.

Legendre functions f:E→(−∞,+∞] are defined in [18]. By [18], if E is a reflexive Banach space, then f is a Legendre function if and only if it satisfies the following conditions (L1) and (L2):

(L1) The interior of the domain of f, int(domf), is nonempty, f is Gâteaux differentiable on int(domf) and domf=int(domf);

(L2) The interior of the domain of f ∗ , int(dom f ∗ ), is nonempty, f ∗ is Gâteaux differentiable on int(dom f ∗ ) and dom f ∗ =int(dom f ∗ ).

Since E is reflexive, we know that ( ∇ f ) − 1 =∇ f ∗ (see [29]). This, by (L1) and (L2), implies the following equalities:

∇f= ( ∇ f ∗ ) − 1 ,ran∇f=dom∇ f ∗ =int ( dom f ∗ )

and

ran∇ f ∗ =dom∇f=int(domf).

By Bauschke et al. [[18], Theorem 5.4], the conditions (L1) and (L2) also yield that the functions f and f ∗ are strictly convex on the interior of their respective domains. From now on we assume that the convex function f:E→(−∞,+∞] is Legendre.

We first recall some definitions and lemmas which are needed in our main results.

Assumption 2.1 Let C be a nonempty, closed convex subset of a uniformly convex and uniformly smooth Banach space E, and let H:C×C→R satisfy the following conditions (C1)-(C4):

(C1) H(x,x)=0 for all x∈C;

(C2) H is monotone, i.e., H(x,y)+H(y,x)≤0 for all x,y∈C;

(C3) for all x,y,z∈C,

lim sup t → 0 + H ( t z + ( 1 − t ) x , y ) ≤H(x,y);

(C4) for all x∈C, H(x,⋅) is convex and lower semicontinuous.

Definition 2.1 [3, 17]

Let f:E→(−∞,+∞] be a Gâteaux differentiable and convex function. The function D f :domf×int(domf)→[0,+∞) defined by

D f (y,x):=f(y)−f(x)− 〈 ∇ f ( x ) , y − x 〉

is called the Bregman distance with respect to f.
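
Two familiar instances, included here for illustration, show how D f generalizes the squared distance and that it need not be symmetric.

```latex
% Illustration: two common Bregman distances.
% (i) E a Hilbert space, f(x) = \tfrac12\|x\|^2, \nabla f(x) = x:
\[
D_f(y,x) = \tfrac12\|y\|^2 - \tfrac12\|x\|^2 - \langle x,\, y - x\rangle = \tfrac12\|y - x\|^2 .
\]
% (ii) E = \mathbb{R}^n, f(x) = \sum_{i=1}^{n} x_i \log x_i (negative entropy),
%      \nabla f(x)_i = 1 + \log x_i; for x, y with positive entries,
\[
D_f(y,x) = \sum_{i=1}^{n}\Bigl( y_i \log\frac{y_i}{x_i} - y_i + x_i \Bigr),
\]
% the (generalized) Kullback--Leibler divergence, which is neither symmetric nor a metric.
```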

Remark 2.1 [15]

The Bregman distance has the following properties:

  1. (1)

    the three point identity, for any x∈domf and y,z∈int(domf),

    D f (x,y)+ D f (y,z)− D f (x,z)= 〈 ∇ f ( z ) − ∇ f ( y ) , x − y 〉 ;
  2. (2)

    the four point identity, for any y,ω∈domf and x,z∈int(domf),

    D f (y,x)− D f (y,z)− D f (ω,x)+ D f (ω,z)= 〈 ∇ f ( z ) − ∇ f ( x ) , y − ω 〉 .
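
The three point identity can be verified directly from Definition 2.1; the short computation is recorded here for the reader's convenience.

```latex
% Verification of the three point identity by expanding Definition 2.1:
\[
\begin{aligned}
D_f(x,y) + D_f(y,z) - D_f(x,z)
  &= \bigl[f(x) - f(y) - \langle \nabla f(y),\, x - y\rangle\bigr]
   + \bigl[f(y) - f(z) - \langle \nabla f(z),\, y - z\rangle\bigr] \\
  &\quad - \bigl[f(x) - f(z) - \langle \nabla f(z),\, x - z\rangle\bigr] \\
  &= -\langle \nabla f(y),\, x - y\rangle + \langle \nabla f(z),\, (x - z) - (y - z)\rangle \\
  &= \langle \nabla f(z) - \nabla f(y),\, x - y\rangle .
\end{aligned}
\]
% The four point identity follows similarly: expand D_f(y,x) - D_f(y,z) and
% D_f(\omega,x) - D_f(\omega,z) and subtract; all terms involving values of f cancel.
```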

Definition 2.2 [17]

Let f:E→(−∞,+∞] be a Gâteaux differentiable and convex function. The Bregman projection of x∈int(domf) onto the nonempty, closed and convex set C⊂domf is the necessarily unique vector proj C f (x)∈C satisfying the following:

D f ( proj C f ( x ) , x ) =inf { D f ( y , x ) : y ∈ C } .
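
To see Definition 2.2 at work in finite dimensions, the following sketch (an illustration added here, not taken from the paper; it assumes E = R n and relies on a generic SciPy solver) computes the Bregman projection onto the probability simplex for two choices of f: the energy 1 2 ∥ x ∥ 2 , which yields the metric projection, and the negative entropy, for which the projection of a positive vector is simply its normalization.

```python
import numpy as np
from scipy.optimize import minimize

# Illustration only (not from the paper): Bregman projections onto the probability
# simplex C = {y >= 0, sum(y) = 1} in E = R^n, for two choices of the Legendre function f.

def bregman_dist(f, grad_f, y, x):
    """D_f(y, x) = f(y) - f(x) - <grad f(x), y - x>."""
    return f(y) - f(x) - grad_f(x) @ (y - x)

def bregman_proj_simplex(f, grad_f, x, n):
    """Numerically minimize y -> D_f(y, x) over the simplex with a generic solver."""
    cons = ({'type': 'eq', 'fun': lambda y: np.sum(y) - 1.0},)
    bnds = [(1e-9, None)] * n            # keep y inside the domain of f
    y0 = np.full(n, 1.0 / n)
    res = minimize(lambda y: bregman_dist(f, grad_f, y, x), y0,
                   bounds=bnds, constraints=cons)
    return res.x

n, x = 3, np.array([0.2, 1.5, 0.8])

# (i) f(x) = 0.5*||x||^2: the Bregman projection is the metric projection onto the simplex.
f_energy = lambda z: 0.5 * z @ z
g_energy = lambda z: z
print(bregman_proj_simplex(f_energy, g_energy, x, n))

# (ii) f(x) = sum_i x_i*log(x_i): D_f is the Kullback-Leibler divergence and the Bregman
#      projection of a positive vector x onto the simplex is x / sum(x).
f_ent = lambda z: np.sum(z * np.log(z))
g_ent = lambda z: 1.0 + np.log(z)
print(bregman_proj_simplex(f_ent, g_ent, x, n))
print(x / x.sum())                       # closed form; should agree with the previous line
```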

Remark 2.2

  1. (1)

    If E is a Hilbert space and f(x)= 1 2 ∥ x ∥ 2 for all x∈E, then the Bregman projection proj C f (x) is reduced to the metric projection of x onto C;

  2. (2)

    If E is a smooth Banach space and f(x)= 1 2 ∥ x ∥ 2 for all x∈E, then the Bregman projection proj C f (x) is reduced to the generalized projection Π C (x) (see [11, 28]), which is defined by

    ϕ ( Π C ( x ) , x ) = min y ∈ C ϕ(y,x),

where ϕ(y,x)= ∥ y ∥ 2 −2〈y,J(x)〉+ ∥ x ∥ 2 and J is the normalized duality mapping from E to 2 E ∗ .
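
The reductions stated in Remark 2.2 follow from the short computation below: for f(x)= 1 2 ∥ x ∥ 2 on a smooth Banach space, ∇f coincides with the normalized duality mapping J and the Bregman distance is half of the Lyapunov functional ϕ.

```latex
% With f(x) = \tfrac12\|x\|^2 on a smooth Banach space E one has \nabla f = J, hence
\[
\begin{aligned}
D_f(y,x) &= \tfrac12\|y\|^2 - \tfrac12\|x\|^2 - \langle J(x),\, y - x\rangle \\
         &= \tfrac12\|y\|^2 - \langle y,\, J(x)\rangle + \tfrac12\|x\|^2
            \qquad\text{(since } \langle J(x),\, x\rangle = \|x\|^2\text{)} \\
         &= \tfrac12\,\phi(y,x),
\end{aligned}
\]
% so the minimizers of D_f(., x) and \phi(., x) over C coincide; that is, proj_C^f equals the
% generalized projection \Pi_C, and equals the metric projection P_C when E is a Hilbert
% space (where J is the identity).
```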

Definition 2.3 [21, 26, 27]

Let C be a nonempty, closed and convex subset of domf. The operator T:C→int(domf) with F(T)≠∅ is called:

  1. (1)

    quasi-Bregman nonexpansive if

    D f (u,Tx)≤ D f (u,x),∀x∈C,u∈F(T);
  2. (2)

    Bregman relatively nonexpansive if F ˆ (T)=F(T) and

    D f (u,Tx)≤ D f (u,x),∀x∈C,u∈F(T);
  3. (3)

    Bregman firmly nonexpansive if

    〈 ∇ f ( T x ) − ∇ f ( T y ) , T x − T y 〉 ≤ 〈 ∇ f ( x ) − ∇ f ( y ) , T x − T y 〉 ,∀x,y∈C,

or, equivalently,

D f (Tx,Ty)+ D f (Ty,Tx)+ D f (Tx,x)+ D f (Ty,y)≤ D f (Tx,y)+ D f (Ty,x),∀x,y∈C;
  4. (4)

    weak Bregman relatively nonexpansive if F ˜ (T)=F(T) and

    D f (u,Tx)≤ D f (u,x),∀x∈C,u∈F(T).

Definition 2.4 [4]

Let H:C×C→R be a bifunction. The resolvent of H is the operator Res H f :E→ 2 C defined by

Res H f (x)= { z ∈ C : H ( z , y ) + 〈 ∇ f ( z ) − ∇ f ( x ) , y − z 〉 ≥ 0 , ∀ y ∈ C } .
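
As an illustration (this remark is ours and assumes a separable bifunction): if H(x,y)=g(y)−g(x) for a proper, convex and lower semicontinuous function g, then the defining inequality of Res H f is the optimality condition of a convex minimization problem, so the resolvent becomes a Bregman proximal mapping.

```latex
% Illustration under the assumption H(x,y) = g(y) - g(x) with g proper, convex and lsc:
% z = Res_H^f(x) means g(y) - g(z) + \langle \nabla f(z) - \nabla f(x),\, y - z\rangle \ge 0
% for all y \in C, which is precisely the optimality condition of the convex problem below, so
\[
\operatorname{Res}_H^f(x) \;=\; \operatorname*{arg\,min}_{y \in C}\ \bigl\{\, g(y) + D_f(y,x) \,\bigr\}.
\]
% For f(x) = \tfrac12\|x\|^2 on a Hilbert space this is the proximal mapping of g (restricted
% to C), and for g \equiv 0 it reduces to the Bregman projection proj_C^f of Definition 2.2.
```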

Definition 2.5 [21]

Let f:E→(−∞,+∞] be a convex and Gâteaux differentiable function. f is called:

  1. (1)

    totally convex at x∈int(domf) if its modulus of total convexity at x, that is, the function ν f :int(domf)×[0,+∞)→[0,+∞) defined by

    ν f (x,t):=inf { D f ( y , x ) : y ∈ dom f , ∥ y − x ∥ = t } ,

is positive whenever t>0;

  2. (2)

    totally convex if it is totally convex at every point x∈int(domf);

  3. (3)

    totally convex on bounded sets if ν f (B,t) is positive for any nonempty bounded subset B of E and t>0, where the modulus of total convexity of the function f on the set B is the function ν f :int(domf)×[0,+∞)→[0,+∞) defined by

    ν f (B,t):=inf { ν f ( x , t ) : x ∈ B ∩ dom f } .

Definition 2.6 [21, 26]

The function f:E→(−∞,+∞] is called:

  1. (1)

    cofinite if dom f ∗ = E ∗ ;

  2. (2)

    coercive if lim ∥ x ∥ → + ∞ (f(x)/∥x∥)=+∞;

  3. (3)

    sequentially consistent if for any two sequences { x n } and { y n } in E such that { x n } is bounded,

    lim n → ∞ D f ( y n , x n )=0⇒ lim n → ∞ ∥ y n − x n ∥=0.

Lemma 2.1 [[26], Proposition 2.3]

If f:E→(−∞,+∞] is Fréchet differentiable and totally convex, then f is cofinite.

Lemma 2.2 [[25], Theorem 2.10]

Let f:E→(−∞,+∞] be a convex function whose domain contains at least two points. Then the following statements hold:

  1. (1)

    f is sequentially consistent if and only if it is totally convex on bounded sets;

  2. (2)

    If f is lower semicontinuous, then f is sequentially consistent if and only if it is uniformly convex on bounded sets;

  3. (3)

    If f is uniformly strictly convex on bounded sets, then it is sequentially consistent and the converse implication holds when f is lower semicontinuous, Fréchet differentiable on its domain and the Fréchet derivative ∇f is uniformly continuous on bounded sets.

Lemma 2.3 [[30], Proposition 2.1]

Let f:E→R be uniformly Fréchet differentiable and bounded on bounded subsets of E. Then ∇f is uniformly continuous on bounded subsets of E from the strong topology of E to the strong topology of E ∗ .

Lemma 2.4 [[26], Lemma 3.1]

Let f:E→R be a Gâteaux differentiable and totally convex function. If x 0 ∈E and the sequence { D f ( x n , x 0 )} is bounded, then the sequence { x n } is also bounded.

Lemma 2.5 [[26], Proposition 2.2]

Let f:E→R be a Gâteaux differentiable and totally convex function, x 0 ∈E and let C be a nonempty, closed convex subset of E. Suppose that the sequence { x n } is bounded and any weak subsequential limit of { x n } belongs to C. If D f ( x n , x 0 )≤ D f ( proj C f ( x 0 ), x 0 ) for any n∈N, then { x n } n = 1 ∞ converges strongly to proj C f ( x 0 ).

Lemma 2.6 [[27], Proposition 2.17]

Let f:E→(−∞,+∞] be the Legendre function. Let C be a nonempty, closed convex subset of int(domf) and T:C→C be a quasi-Bregman nonexpansive mapping with respect to f. Then F(T) is closed and convex.

Lemma 2.7 [[27], Lemma 2.18]

Let f:E→(−∞,+∞] be a proper, convex, lower semicontinuous and Gâteaux differentiable function. Then, for all z∈E,

D f ( z , ∇ f ∗ ( ∑ i = 1 N t i ∇ f ( x i ) ) ) ≤ ∑ i = 1 N t i D f (z, x i ),

where { x i } i = 1 N ⊂E and { t i } i = 1 N ⊂(0,1) with ∑ i = 1 N t i =1.

Lemma 2.8 [[25], Corollary 4.4]

Let f:E→(−∞,+∞] be Gâteaux differentiable and totally convex on int(domf). Let x∈int(domf) and C⊂int(domf) be a nonempty, closed convex set. If x ˆ ∈C, then the following statements are equivalent:

  1. (1)

    the vector x ˆ is the Bregman projection of x onto C with respect to f;

  2. (2)

    the vector x ˆ is the unique solution of the variational inequality:

    〈 ∇ f ( x ) − ∇ f ( z ) , z − y 〉 ≥0,∀y∈C;
  3. (3)

    the vector x ˆ is the unique solution of the inequality:

    D f (y,z)+ D f (z,x)≤ D f (y,x),∀y∈C.
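
For orientation, the following illustrative specialization (added here) may be kept in mind: with f(x)= 1 2 ∥ x ∥ 2 on a Hilbert space, ∇f is the identity and proj C f is the metric projection P C , and statements (2) and (3) of Lemma 2.8 become the classical characterizations of P C .

```latex
% Illustration: Lemma 2.8 for f(x) = \tfrac12\|x\|^2 on a Hilbert space
% (\nabla f = I, D_f(y,x) = \tfrac12\|y - x\|^2, and proj_C^f = P_C, the metric projection).
% Statement (2) becomes the obtuse-angle characterization of P_C:
\[
\hat{x} = P_C(x) \iff \langle x - \hat{x},\, \hat{x} - y\rangle \ge 0 \quad \forall y \in C;
\]
% statement (3) becomes the strengthened Pythagoras inequality:
\[
\|y - \hat{x}\|^2 + \|\hat{x} - x\|^2 \le \|y - x\|^2 \quad \forall y \in C.
\]
```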

Lemma 2.9 [[7], Lemmas 1 and 2]

Let f:E→(−∞,+∞] be a coercive Legendre function. Let C be a nonempty, closed and convex subset of int(domf). Assume that H:C×C→R satisfies Assumption  2.1. Then the following results hold:

  1. (1) Res H f is single-valued and dom( Res H f )=E;

  2. (2) Res H f is Bregman firmly nonexpansive;

  3. (3) EP(H) is a closed and convex subset of C and EP(H)=F( Res H f );

  4. (4) for all x∈E and for all u∈F( Res H f ),

     D f ( u , Res H f ( x ) ) + D f ( Res H f ( x ) , x ) ≤ D f (u,x).
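
In the Hilbert space case f(x)= 1 2 ∥ x ∥ 2 (again an added illustration), property (4) of Lemma 2.9 reads as the familiar inequality satisfied by the resolvent of a monotone bifunction [4].

```latex
% Illustration: property (4) of Lemma 2.9 for f(x) = \tfrac12\|x\|^2 on a Hilbert space.
% Writing R = Res_H^f and taking u \in F(R) = EP(H), the inequality becomes
\[
\|u - R(x)\|^2 + \|R(x) - x\|^2 \le \|u - x\|^2 \qquad \forall x \in E,
\]
% i.e., R is firmly quasi-nonexpansive (equivalently \langle x - R(x),\, u - R(x)\rangle \le 0),
% the standard property of the resolvent of a monotone bifunction in Hilbert spaces (cf. [4]).
```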

Lemma 2.10 [[31], Proposition 5]

Let f:E→R be a Legendre function such that ∇ f ∗ is bounded on bounded subsets of intdom f ∗ . Let x∈E. If { D f (x, x n )} is bounded, then the sequence { x n } is bounded too.

3 Main results

In this section, we introduce a new shrinking projection algorithm based on the prediction correction method for finding a common element of the set of solutions to the equilibrium problem (1.1) and the set of fixed points of a weak Bregman relatively nonexpansive mapping in Banach spaces, and we prove the strong convergence of the sequence generated by the proposed algorithm under some suitable conditions.

Let { α n } and { β n } be sequences in [0,1] such that lim n → ∞ α n =0 and lim inf n → ∞ (1− α n ) β n >0. We propose the following shrinking projection algorithm based on the prediction correction method.

Algorithm 3.1 Step 1: Select an arbitrary starting point x 0 ∈C, let Q 0 =C and C 0 ={z∈C: D f (z, u 0 )≤ D f (z, x 0 )}.

Step 2: Given the current iterate x n , calculate the next iterate as follows:

z n = ∇ f ∗ ( β n ∇ f ( T ( x n ) ) + ( 1 − β n ) ∇ f ( x n ) ) ,
y n = ∇ f ∗ ( α n ∇ f ( x 0 ) + ( 1 − α n ) ∇ f ( z n ) ) ,
u n = Res H f ( y n ) ,
C n = { z ∈ C n − 1 ∩ Q n − 1 : D f ( z , u n ) ≤ α n D f ( z , x 0 ) + ( 1 − α n ) D f ( z , x n ) } ,
Q n = { z ∈ C n − 1 ∩ Q n − 1 : 〈 ∇ f ( x 0 ) − ∇ f ( x n ) , z − x n 〉 ≤ 0 } ,
x n + 1 = proj C n ∩ Q n f x 0 , ∀ n ≥ 0 .
(3.1)
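
To illustrate the structure of Algorithm 3.1, the following finite-dimensional sketch (our own illustration, not part of the paper's analysis) runs the iteration in E = R n with f(x)= 1 2 ∥ x ∥ 2 , so that ∇f=∇ f ∗ is the identity, D f (z,x)= 1 2 ∥ z − x ∥ 2 and proj f is the metric projection. The bifunction H(z,y)=〈Az,y−z〉 with A symmetric positive semidefinite and the mapping T(x)=x/2 are toy choices made only so that the resolvent and the fixed point set have closed forms; the sets C n and Q n are then half-spaces, and the projection onto their accumulated intersection is computed with a generic SciPy solver.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch of Algorithm 3.1 in E = R^n with f(x) = 0.5*||x||^2, so that
# grad f = grad f* = identity and D_f(z, x) = 0.5*||z - x||^2.  Toy data (not from the
# paper): H(z, y) = <Az, y - z> with A symmetric PSD and T(x) = x/2, so that
# EP(H) ∩ F(T) = {0} and every step of the scheme has a closed or tractable form.

rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((n, n))
A = M.T @ M                                   # symmetric positive semidefinite (H is monotone)

def resolvent(y):
    # Res_H^f(y): find z with <Az, w - z> + <z - y, w - z> >= 0 for all w in R^n,
    # i.e. Az + z - y = 0, hence z = (I + A)^{-1} y.
    return np.linalg.solve(np.eye(n) + A, y)

def T(x):
    return 0.5 * x                            # toy weak (Bregman) relatively nonexpansive map

def project(x0, halfspaces):
    # Metric projection of x0 onto the polyhedron {v : <a, v> <= b for all (a, b)}.
    cons = [{'type': 'ineq', 'fun': (lambda v, a=a, b=b: b - a @ v)} for a, b in halfspaces]
    return minimize(lambda v: 0.5 * np.sum((v - x0) ** 2), x0, constraints=cons).x

x0 = rng.standard_normal(n)
x = x0.copy()
halfspaces = []                               # accumulated constraints describing C_n ∩ Q_n

for k in range(30):
    alpha, beta = 1.0 / (k + 2), 0.5          # alpha_n -> 0, liminf (1 - alpha_n) beta_n > 0
    z = beta * T(x) + (1 - beta) * x          # z_n
    y = alpha * x0 + (1 - alpha) * z          # y_n
    u = resolvent(y)                          # u_n = Res_H^f(y_n)
    # C_n: D_f(v, u_n) <= alpha*D_f(v, x0) + (1 - alpha)*D_f(v, x_n); the quadratic terms
    # cancel, leaving the half-space
    # <2*(alpha*x0 + (1-alpha)*x - u), v> <= alpha*||x0||^2 + (1-alpha)*||x||^2 - ||u||^2.
    w = alpha * x0 + (1 - alpha) * x
    halfspaces.append((2 * (w - u), alpha * x0 @ x0 + (1 - alpha) * x @ x - u @ u))
    # Q_n: <x0 - x_n, v - x_n> <= 0.
    halfspaces.append((x0 - x, (x0 - x) @ x))
    x = project(x0, halfspaces)               # x_{n+1} = proj_{C_n ∩ Q_n}(x0)

print(x)    # should be close to 0 = proj_{EP(H) ∩ F(T)}(x0), as the theorem below predicts
```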

Theorem 3.1 Let C be a nonempty, closed and convex subset of a real reflexive Banach space E, f:E→R be a coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E, and ∇ f ∗ be bounded on bounded subsets of E ∗ . Let H:C×C→R satisfy Assumption 2.1 and T:C→C be a weak Bregman relatively nonexpansive mapping such that EP(H)∩F(T)≠∅. Then the sequence { x n } generated by Algorithm 3.1 converges strongly to the point proj EP ( H ) ∩ F ( T ) f ( x 0 ), where proj EP ( H ) ∩ F ( T ) f ( x 0 ) is the Bregman projection of x 0 onto EP(H)∩F(T).

To prove Theorem 3.1, we need the following lemmas.

Lemma 3.1 Assume that EP(H)∩F(T)⊆ C n ∩ Q n for all n≥0. Then the sequence { x n } is bounded.

Proof Since 〈∇f( x 0 )−∇f( x n ),v− x n 〉≤0 for all v∈ Q n , it follows from Lemma 2.8 that x n = proj Q n f ( x 0 ) and so, by x n + 1 = proj C n ∩ Q n f ( x 0 )∈ Q n , we have

D f ( x n , x 0 )≤ D f ( x n + 1 , x 0 ).
(3.2)

Let ω∈EP(H)∩F(T). It follows from Lemma 2.8 that

D f ( ω , proj Q n f ( x 0 ) ) + D f ( proj Q n f ( x 0 ) , x 0 ) ≤ D f (ω, x 0 )

and so

D f ( x n , x 0 )≤ D f (ω, x 0 )− D f (ω, x n )≤ D f (ω, x 0 ).

Therefore, { D f ( x n , x 0 )} is bounded. Moreover, by Lemma 2.4, { x n } is bounded and so are {T( x n )}, { y n }, { z n }. This completes the proof. □

Lemma 3.2 Assume that EP(H)∩F(T)⊆ C n ∩ Q n for all n≥0. Then the sequence { x n } is a Cauchy sequence.

Proof By the proof of Lemma 3.1, we know that { D f ( x n , x 0 )} is bounded. It follows from (3.2) that lim n → ∞ D f ( x n , x 0 ) exists. From x m ∈ Q m − 1 ⊆ Q n for all m>n and Lemma 2.8, one has

D f ( x m , proj Q n f ( x 0 ) ) + D f ( proj Q n f ( x 0 ) , x 0 ) ≤ D f ( x m , x 0 )

and so D f ( x m , x n )≤ D f ( x m , x 0 )− D f ( x n , x 0 ). Therefore, we have

lim m , n → ∞ D f ( x m , x n )≤ lim m , n → ∞ ( D f ( x m , x 0 ) − D f ( x n , x 0 ) ) =0.
(3.3)

Since f is totally convex on bounded subsets of E, by Definition 2.6, Lemma 2.2 and (3.3), we obtain

lim m , n → ∞ ∥ x m − x n ∥=0.
(3.4)

Thus { x n } is a Cauchy sequence and so lim n → ∞ ∥ x n + 1 − x n ∥=0. This completes the proof. □

Lemma 3.3 Assume that EP(H)∩F(T)⊆ C n ∩ Q n for all n≥0. Then the sequence { x n } converges strongly to a point in EP(H)∩F(T).

Proof From Lemma 3.2, the sequence { x n } is a Cauchy sequence and so it converges strongly to some ω ˆ ∈C. Since f is uniformly Fréchet differentiable and bounded on bounded subsets of E, it follows from Lemma 2.3 that ∇f is norm-to-norm uniformly continuous on bounded subsets of E. Hence, by (3.4), we have

lim m , n → ∞ ∥ ∇ f ( x m ) − ∇ f ( x n ) ∥ =0

and so

lim n → ∞ ∥ ∇ f ( x n + 1 ) − ∇ f ( x n ) ∥ =0.
(3.5)

Since x n + 1 ∈ C n , we have

D f ( x n + 1 , u n )≤ α n D f ( x n + 1 , x 0 )+(1− α n ) D f ( x n + 1 , x n ).

It follows from lim n → ∞ α n =0 and lim n → ∞ D f ( x n + 1 , x n )=0 that { D f ( x n + 1 , u n )} is bounded and

lim n → ∞ D f ( x n + 1 , u n )=0.

By Lemma 2.10, { u n } is bounded. Hence, lim n → ∞ ∥ x n + 1 − u n ∥=0 and so

lim n → ∞ ∥ ∇ f ( x n + 1 ) − ∇ f ( u n ) ∥ =0.
(3.6)

Taking into account ∥ x n − u n ∥≤∥ x n − x n + 1 ∥+∥ x n + 1 − u n ∥, we obtain

lim n → ∞ ∥ x n − u n ∥=0

and so u n → ω ˆ as n→∞. For any ω∈EP(H)∩F(T), from Lemma 2.9(4) and the estimate D f (ω, y n )≤ α n D f (ω, x 0 )+(1− α n ) D f (ω, x n ) (which follows from Lemma 2.7 and the definition of a weak Bregman relatively nonexpansive mapping, as in the proof of Theorem 3.1 below), we get

D f ( u n , y n ) ≤ D f ( ω , y n ) − D f ( ω , u n ) ≤ α n [ D f ( ω , x 0 ) − D f ( ω , x n ) ] + D f ( ω , x n ) − D f ( ω , u n ) .

By the three point identity of the Bregman distance, one has

D f ( ω , x n ) − D f ( ω , u n ) = D f ( u n , x n ) − 〈 ∇ f ( x n ) − ∇ f ( u n ) , ω − u n 〉 .

Since f is a coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E, it follows from Lemma 2.3 that f is continuous on E and ∇f is uniformly continuous on bounded subsets of E from the strong topology of E to the strong topology of E ∗ . Therefore, since lim n → ∞ ∥ x n − u n ∥=0 and { x n }, { u n } are bounded, we have

lim n → ∞ D f ( u n , x n )=0, lim n → ∞ 〈 ∇ f ( x n ) − ∇ f ( u n ) , ω − u n 〉 =0,

which, together with lim n → ∞ α n =0 and the two preceding displays, yields lim n → ∞ D f ( u n , y n )=0. Furthermore, by Lemma 2.2, one has lim n → ∞ ∥ u n − y n ∥=0 and thus

lim n → ∞ ∥ ∇ f ( u n ) − ∇ f ( y n ) ∥ =0.

Since u n → ω ˆ as n→∞, we have y n → ω ˆ as n→∞. Further, in the light of u n = Res H f ( y n ) and Definition 2.4, it follows that, for each y∈C,

H( u n ,y)+ 〈 ∇ f ( u n ) − ∇ f ( y n ) , y − u n 〉 ≥0

and hence, combining this with Assumption 2.1,

〈 ∇ f ( u n ) − ∇ f ( y n ) , y − u n 〉 ≥−H( u n ,y)≥H(y, u n ).

Consequently, one can conclude that

H ( y , ω ˆ ) ≤ lim inf n → ∞ H ( y , u n ) ≤ lim inf n → ∞ 〈 ∇ f ( u n ) − ∇ f ( y n ) , y − u n 〉 ≤ lim inf n → ∞ ∥ ∇ f ( u n ) − ∇ f ( y n ) ∥ ⋅ ∥ y − u n ∥ = 0 .

For any y∈C and t∈(0,1], let y t =ty+(1−t) ω ˆ ∈C. It follows from Assumption 2.1 that H( y t , ω ˆ )≤0 and

0=H( y t , y t )≤tH( y t ,y)+(1−t)H( y t , ω ˆ )≤tH( y t ,y)

and so H( y t ,y)≥0. Moreover, one has

0≤ lim sup t → 0 + H( y t ,y)= lim sup t → 0 + H ( t y + ( 1 − t ) ω ˆ , y ) ≤H( ω ˆ ,y),∀y∈C,

which shows that ω ˆ ∈EP(H).

Next, we prove that ω ˆ ∈F(T). Note that, by the definitions of y n and z n in (3.1) and the relation ( ∇ f ) − 1 =∇ f ∗ ,

∇ f ( x n ) − ∇ f ( y n ) = α n ( ∇ f ( x n ) − ∇ f ( x 0 ) ) + ( 1 − α n ) β n ( ∇ f ( x n ) − ∇ f ( T ( x n ) ) ) .

This implies that

(1− α n ) β n ∥ ∇ f ( x n ) − ∇ f ( T ( x n ) ) ∥ ≤ ∥ ∇ f ( x n ) − ∇ f ( y n ) ∥ + α n ∥ ∇ f ( x n ) − ∇ f ( x 0 ) ∥ .
(3.7)

Since ∥ ∇ f ( x n ) − ∇ f ( y n ) ∥ ≤ ∥ ∇ f ( x n ) − ∇ f ( x n + 1 ) ∥ + ∥ ∇ f ( x n + 1 ) − ∇ f ( u n ) ∥ + ∥ ∇ f ( u n ) − ∇ f ( y n ) ∥ →0 by (3.5), (3.6) and lim n → ∞ ∥ ∇ f ( u n ) − ∇ f ( y n ) ∥ =0, letting n→∞ in (3.7), it follows from lim inf n → ∞ (1− α n ) β n >0, lim n → ∞ α n =0 and the boundedness of { ∇ f ( x n ) } that

lim n → ∞ ∥ ∇ f ( x n ) − ∇ f ( T ( x n ) ) ∥ =0.

Moreover, we have that lim n → ∞ ∥ x n −T( x n )∥=0. This together with x n → ω ˆ implies that ω ˆ ∈ F ˜ (T). In view of F ˜ (T)=F(T), one has ω ˆ ∈EP(H)∩F(T). Therefore, the sequence { x n } generated by Algorithm 3.1 converges strongly to a point ω ˆ in EP(H)∩F(T). This completes the proof. □

Now, we prove Theorem 3.1 by using the above lemmas.

Proof of Theorem 3.1 From Lemmas 2.6 and 2.9, it follows that EP(H)∩F(T) is a nonempty, closed and convex subset of E. Clearly, C n and Q n are closed and convex, and so C n ∩ Q n is closed and convex for all n≥0.

Now, we show that EP(H)∩F(T)⊆ C n ∩ Q n for all n≥0. Take ω∈EP(H)∩F(T) arbitrarily. Then

D f ( ω , u n ) = D f ( ω , Res H f ( y n ) )
≤ D f ( ω , y n ) − D f ( Res H f ( y n ) , y n )
≤ D f ( ω , y n )
= D f ( ω , ∇ f ∗ ( α n ∇ f ( x 0 ) + ( 1 − α n ) ∇ f ( z n ) ) )
≤ α n D f ( ω , x 0 ) + ( 1 − α n ) D f ( ω , z n )
= α n D f ( ω , x 0 ) + ( 1 − α n ) D f ( ω , ∇ f ∗ ( β n ∇ f ( T ( x n ) ) + ( 1 − β n ) ∇ f ( x n ) ) )
≤ α n D f ( ω , x 0 ) + ( 1 − α n ) [ β n D f ( ω , T ( x n ) ) + ( 1 − β n ) D f ( ω , x n ) ]
≤ α n D f ( ω , x 0 ) + ( 1 − α n ) D f ( ω , x n ) ,

which implies that ω∈ C n and so EP(H)∩F(T)⊆ C n for all n≥0.

Next, we prove that EP(H)∩F(T)⊆ Q n for all n≥0. Obviously, EP(H)∩F(T)⊆ Q 0 ( Q 0 =C). Suppose that EP(H)∩F(T)⊆ Q k for some k≥0. In view of x k + 1 = proj C k ∩ Q k f ( x 0 ), it follows from Lemma 2.8 that

〈 ∇ f ( x 0 ) − ∇ f ( x k + 1 ) , x k + 1 − v 〉 ≥0,∀v∈ C k ∩ Q k .

Moreover, one has

〈 ∇ f ( x 0 ) − ∇ f ( x k + 1 ) , x k + 1 − ω 〉 ≥0,∀ω∈EP(H)∩F(T)

and so, for each ω∈EP(H)∩F(T),

〈 ∇ f ( x 0 ) − ∇ f ( x k + 1 ) , ω − x k + 1 〉 ≤0.

This implies that EP(H)∩F(T)⊆ Q k + 1 . By induction, we have EP(H)∩F(T)⊆ Q n for all n≥0 and so

EP(H)∩F(T)⊆ C n ∩ Q n ,∀n≥0.

This together with EP(H)∩F(T)≠∅ yields that C n ∩ Q n is a nonempty, closed convex subset of C for all n≥0. Thus { x n } is well defined and, from both Lemmas 3.2 and 3.3, the sequence { x n } is a Cauchy sequence and converges strongly to a point ω ˆ of EP(H)∩F(T).

Finally, we prove that ω ˆ = proj EP ( H ) ∩ F ( T ) f ( x 0 ). Since proj EP ( H ) ∩ F ( T ) f ( x 0 )∈EP(H)∩F(T)⊆ C n ∩ Q n , it follows from x n + 1 = proj C n ∩ Q n f ( x 0 ) that

D f ( x n + 1 , x 0 )≤ D f ( proj EP ( H ) ∩ F ( T ) f ( x 0 ) , x 0 ) .

Thus, by Lemma 2.5, we have x n → proj EP ( H ) ∩ F ( T ) f ( x 0 ) as n→∞. Therefore, the sequence { x n } converges strongly to the point proj EP ( H ) ∩ F ( T ) f ( x 0 ). This completes the proof. □

Remark 3.1 (1) If f(x)= 1 2 ∥ x ∥ 2 for all x∈E, then the weak Bregman relatively nonexpansive mapping is reduced to the weak relatively nonexpansive mapping defined by Su et al. [32], that is, T is called a weak relatively nonexpansive mapping if the following conditions are satisfied:

F ˜ (T)=F(T)≠∅,ϕ(u,Tx)≤ϕ(u,x),∀x∈C,u∈F(T),

where ϕ(y,x)= ∥ y ∥ 2 −2〈y,J(x)〉+ ∥ x ∥ 2 for all x,y∈E and J is the normalized duality mapping from E to 2 E ∗ ;

  2. (2)

    If f(x)= 1 2 ∥ x ∥ 2 for all x∈E, then Algorithm 3.1 is reduced to the following iterative algorithm.

Algorithm 3.2 Step 1: Select an arbitrary starting point x 0 ∈C, let Q 0 =C and C 0 ={z∈C:ϕ(z, u 0 )≤ϕ(z, x 0 )}.

Step 2: Given the current iterate x n , calculate the next iterate as follows:

z n = J − 1 ( β n J ( T ( x n ) ) + ( 1 − β n ) J ( x n ) ) ,
y n = J − 1 ( α n J ( x 0 ) + ( 1 − α n ) J ( z n ) ) ,
u n = Res H f ( y n ) ,
C n = { z ∈ C n − 1 ∩ Q n − 1 : ϕ ( z , u n ) ≤ α n ϕ ( z , x 0 ) + ( 1 − α n ) ϕ ( z , x n ) } ,
Q n = { z ∈ C n − 1 ∩ Q n − 1 : 〈 J ( x 0 ) − J ( x n ) , z − x n 〉 ≤ 0 } ,
x n + 1 = Π C n ∩ Q n x 0 , ∀ n ≥ 0 .
(3.8)
  3. (3)

    Particularly, if EP(H)=C, then Algorithm 3.2 is reduced to the following iterative algorithm.

Algorithm 3.3 Step 1: Select an arbitrary starting point x 0 ∈C, let Q 0 =C and C 0 ={z∈C:ϕ(z, u 0 )≤ϕ(z, x 0 )}.

Step 2: Given the current iterate x n , calculate the next iterate as follows:

z n = J − 1 ( β n J ( T ( x n ) ) + ( 1 − β n ) J ( x n ) ) ,
y n = J − 1 ( α n J ( x 0 ) + ( 1 − α n ) J ( z n ) ) ,
C n = { z ∈ C n − 1 ∩ Q n − 1 : ϕ ( z , y n ) ≤ α n ϕ ( z , x 0 ) + ( 1 − α n ) ϕ ( z , x n ) } ,
Q n = { z ∈ C n − 1 ∩ Q n − 1 : 〈 J ( x 0 ) − J ( x n ) , z − x n 〉 ≤ 0 } ,
x n + 1 = Π C n ∩ Q n x 0 , ∀ n ≥ 0 .
(3.9)
  4. (4)

    If Tx=x for all x∈C, then, by Algorithm 3.2, we can get the following modified Mann iteration algorithm for the equilibrium problem (1.1).

Algorithm 3.4 Step 1: Select an arbitrary starting point x 0 ∈C, let Q 0 =C and C 0 ={z∈C:ϕ(z, u 0 )≤ϕ(z, x 0 )}.

Step 2: Given the current iterate x n , calculate the next iterate as follows:

y n = J − 1 ( α n J ( x 0 ) + ( 1 − α n ) J ( x n ) ) ,
u n = Res H f ( y n ) ,
C n = { z ∈ C n − 1 ∩ Q n − 1 : ϕ ( z , u n ) ≤ α n ϕ ( z , x 0 ) + ( 1 − α n ) ϕ ( z , x n ) } ,
Q n = { z ∈ C n − 1 ∩ Q n − 1 : 〈 J ( x 0 ) − J ( x n ) , z − x n 〉 ≤ 0 } ,
x n + 1 = Π C n ∩ Q n x 0 , ∀ n ≥ 0 .
(3.10)

If f(x)= 1 2 ∥ x ∥ 2 for all x∈E, then, by Theorem 3.1 and Remark 3.1, the following results hold.

Corollary 3.1 Let C be a nonempty, closed convex subset of a real reflexive Banach space E. Suppose that H:C×C→R satisfies Assumption 2.1 and T:C→C is a weak relatively nonexpansive mapping such that EP(H)∩F(T)≠∅. Then the sequence { x n } generated by Algorithm 3.2 converges strongly to the point Π EP ( H ) ∩ F ( T ) ( x 0 ), where Π EP ( H ) ∩ F ( T ) ( x 0 ) is the generalized projection of x 0 onto EP(H)∩F(T).

Corollary 3.2 Let C be a nonempty, closed convex subset of a real reflexive Banach space E. Let T:C→C be a weak relatively nonexpansive mapping such that F(T)≠∅. Then the sequence { x n } generated by Algorithm 3.3 converges strongly to the point Π F ( T ) ( x 0 ), where Π F ( T ) ( x 0 ) is the generalized projection of x 0 onto F(T).

Corollary 3.3 Let C be a nonempty, closed convex subset of a real reflexive Banach space E. Suppose that H:C×C→R satisfies Assumption 2.1 such that EP(H)≠∅. Then the sequence { x n } generated by Algorithm 3.4 converges strongly to the point Π EP ( H ) ( x 0 ), where Π EP ( H ) ( x 0 ) is the generalized projection of x 0 onto EP(H).

Remark 3.2

  1. (1)

    It is well known that any closed and firmly nonexpansive-type mapping (see [11, 33]) is a weak Bregman relatively nonexpansive mapping whenever f(x)= 1 2 ∥ x ∥ 2 for all x∈E. If β n ≡1 for all n≥0 and E is a uniformly convex and uniformly smooth Banach space, then Corollary 3.2 improves [[11], Corollary 3.1];

  2. (2)

    If α n ≡0 for all n≥0 and E is a uniformly convex and uniformly smooth Banach space, then Corollary 3.2 is reduced to [[32], Theorem 3.1];

  3. (3)

    If β n ≡1− β n ′ for all n≥0, β n ′ ∈[0,1], f(x)= 1 2 ∥ x ∥ 2 for all x∈E and E is a uniformly convex and uniformly smooth Banach space, then Corollary 3.1 improves [[11], Theorem 4.1].

References

  1. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.

  2. Agarwal RP, Chen JW, Cho YJ, Wan Z: Stability analysis for parametric generalized vector quasi-variational-like inequality problems. J. Inequal. Appl. 2012., 2012: Article ID 57

  3. Butnariu D, Iusem AN, Zalinescu C: On uniform convexity, total convexity and convergence of the proximal point and outer Bregman projection algorithms in Banach spaces. J. Convex Anal. 2003, 10: 35–61.

  4. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.

  5. Eckstein J: Nonlinear proximal point algorithms using Bregman functions, with applications to convex programming. Math. Oper. Res. 1993, 18: 202–226. 10.1287/moor.18.1.202

  6. Kiwiel KC: Proximal minimization methods with generalized Bregman functions. SIAM J. Control Optim. 1997, 35: 1142–1168. 10.1137/S0363012995281742

  7. Reich S, Sabach S: Two strong convergence theorems for Bregman strongly nonexpansive operators in reflexive Banach spaces. Nonlinear Anal. 2010, 73: 122–135. 10.1016/j.na.2010.03.005

  8. Reich S, Sabach S: Existence and approximation of fixed points of Bregman firmly nonexpansive mappings in reflexive Banach spaces. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer, New York; 2011.

  9. Chen JW, Cho YJ, Wan Z: Shrinking projection algorithms for equilibrium problems with a bifunction defined on the dual space of a Banach space. Fixed Point Theory Appl. 2011., 2011: Article ID 91

  10. Chen JW, Wan Z, Cho YJ: Levitin-Polyak well-posedness by perturbations for systems of set-valued vector quasi-equilibrium problems. Math. Methods Oper. Res. 2013, 77: 33–64. 10.1007/s00186-012-0414-5

  11. Chen JW, Wan Z, Zou Y: Strong convergence theorems for firmly nonexpansive-type mappings and equilibrium problems in Banach spaces. Optimization 2011. doi:10.1080/02331934.2011.626779

  12. Cho YJ, Kang JI, Sadaati R: Fixed points and stability of additive functional equations on Banach algebras. J. Comput. Anal. Appl. 2012, 14: 1103–1111.

  13. Cho YJ, Kim JK, Dragomir SS (eds.): Inequality Theory and Applications. Vol. 1. Nova Science Publishers, New York; 2002.

  14. Cho YJ, Qin XL: Systems of generalized nonlinear variational inequalities and its projection methods. Nonlinear Anal. 2008, 69: 4443–4451. 10.1016/j.na.2007.11.001

  15. Reich S, Sabach S: A projection method for solving nonlinear problems in reflexive Banach spaces. J. Fixed Point Theory Appl. 2011. doi:10.1007/s11784-010-0037-5

  16. Yao Y, Cho YJ, Liou YC: Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems. Eur. J. Oper. Res. 2011, 212: 242–250. 10.1016/j.ejor.2011.01.042

  17. Bregman LM: The relaxation method for finding common points of convex sets and its application to the solution of problems in convex programming. U.S.S.R. Comput. Math. Math. Phys. 1967, 7: 200–217.

  18. Bauschke HH, Borwein JM, Combettes PL: Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces. Commun. Contemp. Math. 2001, 3: 615–647. 10.1142/S0219199701000524

  19. Bauschke HH, Borwein JM, Combettes PL: Bregman monotone optimization algorithms. SIAM J. Control Optim. 2003, 42: 596–636. 10.1137/S0363012902407120

  20. Bauschke HH, Combettes PL: Construction of best Bregman approximation in reflexive Banach spaces. Proc. Am. Math. Soc. 2003, 131: 3757–3766. 10.1090/S0002-9939-03-07050-3

  21. Butnariu D, Iusem AN: Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization. Applied Optimization 40. Kluwer Academic, Dordrecht; 2000.

  22. Reich S: A weak convergence theorem for the alternating method with Bregman distances. In: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, Lect. Notes Pure Appl. Math. 178, 1996, 313–318.

  23. Resmerita E: On total convexity, Bregman projections and stability in Banach spaces. J. Convex Anal. 2004, 11: 1–16.

  24. Solodov MV, Svaiter BF: An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. Math. Oper. Res. 2000, 25: 214–230. 10.1287/moor.25.2.214.12222

  25. Butnariu D, Resmerita E: Bregman distances, totally convex functions, and a method for solving operator equations in Banach spaces. Abstr. Appl. Anal. 2006, 2006: 1–39.

  26. Reich S, Sabach S: Two strong convergence theorems for a proximal method in reflexive Banach spaces. Numer. Funct. Anal. Optim. 2010, 31: 22–44. 10.1080/01630560903499852

  27. Chen JW, Wan Z, Yuan L, et al.: Approximation of fixed points of weak Bregman relatively nonexpansive mappings in Banach spaces. Int. J. Math. Math. Sci. 2011, 2011: 1–23.

  28. Alber YI: Generalized projection operators in Banach spaces: properties and applications. In: Functional Differential Equations, Proceedings of the Israel Seminar, Ariel, Israel, Vol. 1, 1993, 1–21.

  29. Bonnans JF, Shapiro A: Perturbation Analysis of Optimization Problems. Springer, New York; 2000.

  30. Reich S, Sabach S: A strong convergence theorem for a proximal-type algorithm in reflexive Banach spaces. J. Nonlinear Convex Anal. 2009, 10: 471–485.

  31. Kassay G, Reich S, Sabach S: Iterative methods for solving systems of variational inequalities in reflexive Banach spaces. SIAM J. Optim. 2011, 21: 1319–1344. 10.1137/110820002

  32. Su Y, Gao J, Zhou H: Monotone CQ algorithm of fixed points for weak relatively nonexpansive mappings and applications. J. Math. Res. Expo. 2007, 28: 957–967.

  33. Takahashi W: Nonlinear Functional Analysis-Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2000.

Acknowledgements

The authors are indebted to the referees and the associate editor for their insightful and pertinent comments on an earlier version of the work. The second author (Jiawei Chen) was supported by the Natural Science Foundation of China and the Fundamental Research Fund for the Central Universities, the third author (Yeol Je Cho) was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant Number: 2012-0008170).

Author information

Correspondence to Jia-Wei Chen or Yeol Je Cho.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors jointly worked on the results and they read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Agarwal, R.P., Chen, JW. & Cho, Y.J. Strong convergence theorems for equilibrium problems and weak Bregman relatively nonexpansive mappings in Banach spaces. J Inequal Appl 2013, 119 (2013). https://doi.org/10.1186/1029-242X-2013-119
