Multi-step extragradient method with regularization for triple hierarchical variational inequalities with variational inclusion and split feasibility constraints
© Ceng et al.; licensee Springer. 2014
Received: 9 October 2014
Accepted: 25 November 2014
Published: 12 December 2014
By combining Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method, Mann’s iteration method, and the gradient-projection method with regularization, a hybrid multi-step extragradient algorithm with regularization for finding a solution of a triple hierarchical variational inequality problem is introduced and analyzed. It is proven that, under appropriate assumptions, the proposed algorithm converges strongly to the unique solution of a triple hierarchical variational inequality problem defined over the set of solutions of a hierarchical variational inequality problem, which is in turn defined over the set of common solutions of finitely many generalized mixed equilibrium problems (GMEPs), finitely many variational inclusions, fixed point problems, and the split feasibility problem (SFP). We also prove strong convergence of the proposed algorithm to a common solution of the SFP, finitely many GMEPs, finitely many variational inclusions, and the fixed point problem of a strict pseudocontraction. The results presented in this paper improve and extend corresponding results announced by several others.
MSC: 49J30, 47H09, 47J20, 49M05.
Keywords: triple hierarchical variational inequalities; hierarchical variational inequalities; variational inclusions; split feasibility problems; fixed point problems; iterative algorithm; strong convergence result
The following problems have their own importance because of their applications in diverse areas of science, engineering, social sciences, and management:
Equilibrium problems including variational inequalities.
Variational inclusion problems.
Split feasibility problems.
Fixed point problems.
In one way or another, these problems are related to each other. They are described as follows.
The set of solutions of EP is denoted by . It includes several problems, namely, variational inequality problems, optimization problems, saddle point problems, fixed point problems, etc., as special cases. For further details on EP, we refer to [1–6] and the references therein.
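For reference, the equilibrium problem (EP) in the sense of Blum and Oettli takes the following standard form, where $\Theta : C \times C \to \mathbb{R}$ is a bifunction (the notation matches the operator Θ named below):

```latex
\text{EP: find } \bar{x} \in C \text{ such that } \Theta(\bar{x}, y) \ge 0 \quad \text{for all } y \in C.
```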
where is a real-valued function. The set of solutions of GMEP is denoted by . For different choices of operators/functions Θ, φ, and A, we get different forms of equilibrium problems. For applications of GMEP, we refer to [14, 15] and the references therein.
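The generalized mixed equilibrium problem is commonly formulated as follows (a standard statement, consistent with the operators Θ, φ, and A mentioned above):

```latex
\text{GMEP: find } \bar{x} \in C \text{ such that } \Theta(\bar{x}, y) + \varphi(y) - \varphi(\bar{x}) + \langle A\bar{x},\, y - \bar{x}\rangle \ge 0 \quad \text{for all } y \in C.
```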
Variational inclusion problem
Huang  studied problem (1.2) in the case where R is maximal monotone and B is strongly monotone and Lipschitz continuous with . Zeng et al.  further studied problem (1.2) in a more general setting than in . They gave the geometric convergence rate estimate for approximate solutions. Various types of iterative algorithms for solving variational inclusions have been further studied and developed in the literature; see, for example, [19–22] and the references therein.
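In standard form, with B single-valued and R set-valued maximal monotone as above, the variational inclusion problem reads:

```latex
\text{Find } \bar{x} \in H \text{ such that } 0 \in B\bar{x} + R\bar{x}.
```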
Split feasibility problem
where is a bounded linear operator from to . We denote by Γ the solution set of the SFP. The SFP models an inverse problem arising in phase retrieval and in medical image reconstruction. A number of image reconstruction problems can be formulated as an SFP; see, for example,  and the references therein. Recently, it has been found that the SFP can also be applied to study intensity-modulated radiation therapy (IMRT); see, for example, [24, 25] and the references therein. In the recent past, a wide variety of iterative methods have been proposed to solve the SFP; see, for example, [24–28] and the references therein.
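The SFP asks for a point $x \in C$ with $Ax \in Q$, where C and Q are closed convex sets. As an illustration of the projection-type iterations discussed here, the following minimal sketch implements Byrne's CQ algorithm on a toy instance; the sets, matrix, and step size below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_box(y, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (componentwise clipping)."""
    return np.clip(y, lo, hi)

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, iters=200):
    """Byrne's CQ iteration for the SFP: find x in C with A x in Q.

    x_{k+1} = P_C( x_k - gamma * A^T (A x_k - P_Q(A x_k)) ),
    with step size 0 < gamma < 2 / ||A||^2.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

# Toy instance: C = [-1, 1]^2, Q = [2, 3]^2, A = 2I.
# The only feasible point is x* = (1, 1), since A x* = (2, 2) lies in Q.
A = 2.0 * np.eye(2)
gamma = 0.4  # < 2 / ||A||^2 = 0.5
x = cq_algorithm(A,
                 lambda v: project_box(v, -1.0, 1.0),
                 lambda v: project_box(v, 2.0, 3.0),
                 np.zeros(2), gamma)
print(x)  # -> [1. 1.]
```

Here the iteration reaches the feasible point after a single step because the residual $A x - P_Q(Ax)$ vanishes at $x^* = (1, 1)$.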
Fixed point problem
Let C be a nonempty subset of a real Hilbert space H and be a mapping. The fixed point problem is to find an element such that .
It is a well-known problem and has tremendous applications in different branches of science, engineering, social sciences, and management.
The following proposition provides some relations among the above mentioned problems.
solves the SFP;
- (b)solves the fixed point equation
where , , is the projection operator and is the adjoint of A;
- (c)solves the variational inequality problem (VIP) of finding such that
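Taken together, parts (a)-(c) reflect the well-known characterization of SFP solutions (stated here in standard notation, with $\lambda > 0$ a suitable parameter):

```latex
x^{*} \text{ solves the SFP}
\;\Longleftrightarrow\;
x^{*} = P_{C}\bigl(x^{*} - \lambda A^{*}(I - P_{Q})Ax^{*}\bigr)
\;\Longleftrightarrow\;
\langle A^{*}(I - P_{Q})Ax^{*},\, x - x^{*}\rangle \ge 0 \quad \text{for all } x \in C.
```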
A variational inequality problem defined over the set of fixed points of a mapping is called a hierarchical variational inequality problem; that is, the set C in the variational inequality formulation equals the set of fixed points of a mapping. A variational inequality problem defined over the set of solutions of a hierarchical variational inequality problem is called a triple hierarchical variational inequality problem. For further details on hierarchical and triple hierarchical variational inequality problems, we refer to , a recent survey on these problems.
Very recently, Kong et al.  considered the following triple hierarchical variational inequality problem (THVIP).
In this paper, we consider the following triple hierarchical variational inequality problem (THVIP).
is κ-Lipschitzian and η-strongly monotone with positive constants such that and where ;
for each , satisfies conditions (A1)-(A4) and is a proper lower semicontinuous and convex function with restriction (B1) or (B2) (conditions (A1)-(A4) and (B1)-(B2) are given in the next section);
for each and , is a maximal monotone mapping, and and are -inverse strongly monotone and -inverse strongly monotone, respectively;
is a ξ-strict pseudocontraction, is a nonexpansive mapping and is a ρ-contraction with coefficient ;
By combining Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method, Mann’s iteration method, and the gradient-projection method (GPM) with regularization, we introduce and analyze a hybrid multi-step extragradient algorithm with regularization in the setting of Hilbert spaces. It is proven that under appropriate assumptions, the proposed algorithm converges strongly to a unique solution of THVIP (1.6). The algorithm and convergence result of this paper extend and generalize several existing algorithms and results, respectively, in the literature.
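As background for the extragradient component of the algorithm, the following minimal sketch shows Korpelevich's two-projection iteration on a toy monotone VIP; the operator, constraint set, and step size are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def extragradient(F, proj_C, x0, tau, iters=300):
    """Korpelevich's extragradient method for the VIP
    <F(x*), x - x*> >= 0 for all x in C:

        y_k     = P_C(x_k - tau * F(x_k))   (prediction step)
        x_{k+1} = P_C(x_k - tau * F(y_k))   (correction step)

    with tau in (0, 1/L), where L is the Lipschitz constant of F.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj_C(x - tau * F(x))
        x = proj_C(x - tau * F(y))
    return x

# Toy monotone (but not strongly monotone) operator: a 90-degree rotation.
# Plain projected-gradient steps cycle on this operator; the extra
# "correction" projection makes the iterates contract toward x* = (0, 0).
F = lambda v: np.array([v[1], -v[0]])
proj_C = lambda v: np.clip(v, -1.0, 1.0)   # C = [-1, 1]^2
x = extragradient(F, proj_C, np.array([1.0, 0.0]), tau=0.5)
print(np.linalg.norm(x))  # a value near 0: the iterates contract to x*
```

The key design point is the two evaluations of F per step: evaluating F at the predicted point y rather than at x is what yields convergence for merely monotone Lipschitz operators.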
- (a)nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$;
- (b)firmly nonexpansive if $2T - I$ is nonexpansive, or equivalently, if T is 1-inverse strongly monotone (1-ism),
that is, $T = \tfrac{1}{2}(I + S)$, where S is nonexpansive; projections are firmly nonexpansive.
It can easily be seen that if T is nonexpansive, then the complement $I - T$ is monotone.
where $T = (1 - \alpha)I + \alpha S$ with $\alpha \in (0, 1)$ and S nonexpansive. More precisely, when the last equality holds, we say that T is α-averaged. Thus firmly nonexpansive mappings (in particular, projections) are $\tfrac{1}{2}$-averaged mappings.
Proposition 2.1 
T is nonexpansive if and only if the complement $I - T$ is $\tfrac{1}{2}$-ism.
If T is ν-ism, then for $\gamma > 0$, γT is $\tfrac{\nu}{\gamma}$-ism.
T is averaged if and only if the complement $I - T$ is ν-ism for some $\nu > \tfrac{1}{2}$. Indeed, for $\alpha \in (0, 1)$, T is α-averaged if and only if $I - T$ is $\tfrac{1}{2\alpha}$-ism.
If $T = (1 - \gamma)S + \gamma V$ for some $\gamma \in (0, 1)$ and if S is averaged and V is nonexpansive, then T is averaged.
T is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive.
If $T = (1 - \gamma)S + \gamma V$ for some $\gamma \in (0, 1)$ and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.
The composite of finitely many averaged mappings is averaged; that is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1 T_2$ is α-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.
- (e)If the mappings $\{T_i\}_{i=1}^{N}$ are averaged and have a common fixed point, then $\bigcap_{i=1}^{N}\operatorname{Fix}(T_i) = \operatorname{Fix}(T_1 \cdots T_N)$.
The notation $\operatorname{Fix}(T)$ denotes the set of all fixed points of the mapping T, that is, $\operatorname{Fix}(T) = \{x \in H : Tx = x\}$.
In this case, we also say that T is a ξ-strict pseudocontraction. We denote by $\operatorname{Fix}(T)$ the set of fixed points of T. In particular, if $\xi = 0$, then T is a nonexpansive mapping.
This immediately implies that if T is a ξ-strictly pseudocontractive mapping, then $I - T$ is $\tfrac{1-\xi}{2}$-inverse strongly monotone; for further details, we refer to  and the references therein. It is well known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings and that the class of pseudocontractions strictly includes the class of strict pseudocontractions.
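For completeness, the standard defining inequality (due to Browder and Petryshyn) is: a mapping $T : C \to C$ is a ξ-strict pseudocontraction if there exists $\xi \in [0, 1)$ such that

```latex
\|Tx - Ty\|^{2} \le \|x - y\|^{2} + \xi\,\|(I - T)x - (I - T)y\|^{2} \quad \text{for all } x, y \in C.
```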
Lemma 2.1 [, Proposition 2.1]
- (a)If T is a ξ-strictly pseudocontractive mapping, then T satisfies the Lipschitzian condition $\|Tx - Ty\| \le \frac{1+\xi}{1-\xi}\|x - y\|$ for all $x, y \in C$.
If T is a ξ-strictly pseudocontractive mapping, then the mapping $I - T$ is demiclosed at 0; that is, if $\{x_n\}$ is a sequence in C such that $x_n \rightharpoonup \tilde{x}$ and $(I - T)x_n \to 0$, then $(I - T)\tilde{x} = 0$.
If T is a ξ-(quasi-)strict pseudocontraction, then the fixed point set $\operatorname{Fix}(T)$ of T is closed and convex, so that the projection $P_{\operatorname{Fix}(T)}$ is well defined.
Lemma 2.2 
Lemma 2.3 (Demiclosedness principle)
Let C be a nonempty closed convex subset of a real Hilbert space H. Let S be a nonexpansive self-mapping on C with $\operatorname{Fix}(S) \neq \emptyset$. Then $I - S$ is demiclosed. That is, whenever $\{x_n\}$ is a sequence in C weakly converging to some $x \in C$ and the sequence $\{(I - S)x_n\}$ strongly converges to some y, it follows that $(I - S)x = y$, where I is the identity operator of H.
- (a)monotone if $\langle Ax - Ay,\, x - y\rangle \ge 0$ for all $x, y \in C$;
- (b)β-strongly monotone if there exists a constant $\beta > 0$ such that $\langle Ax - Ay,\, x - y\rangle \ge \beta\|x - y\|^{2}$ for all $x, y \in C$;
- (c)ν-inverse strongly monotone if there exists a constant $\nu > 0$ such that $\langle Ax - Ay,\, x - y\rangle \ge \nu\|Ax - Ay\|^{2}$ for all $x, y \in C$.
So, if $\lambda \in (0, 2\nu]$, then $I - \lambda A$ is a nonexpansive mapping.
Some important properties of projections are gathered in the following proposition.
Consequently, $P_C$ is nonexpansive and monotone.
for all .
Lemma 2.4 [, Lemma 3.1]
(A1) $\Theta(x, x) = 0$ for all $x \in C$;
(A2) Θ is monotone, that is, $\Theta(x, y) + \Theta(y, x) \le 0$ for all $x, y \in C$;
(A3) Θ is upper-hemicontinuous, that is, for each $x, y, z \in C$, $\limsup_{t \to 0^{+}} \Theta(tz + (1-t)x,\, y) \le \Theta(x, y)$;
(A4) $\Theta(x, \cdot)$ is convex and lower semicontinuous for each $x \in C$.
(B1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that, for any $z \in C \setminus D_x$, $\Theta(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r}\langle y_x - z,\, z - x\rangle < 0$;
(B2) C is a bounded set.
Next we list some elementary conclusions for the MEP.
Proposition 2.4 
for each $x \in H$, $T_r^{(\Theta,\varphi)}x$ is nonempty and single-valued;
- (ii)$T_r^{(\Theta,\varphi)}$ is firmly nonexpansive, that is, for any $x, y \in H$, $\|T_r^{(\Theta,\varphi)}x - T_r^{(\Theta,\varphi)}y\|^{2} \le \langle T_r^{(\Theta,\varphi)}x - T_r^{(\Theta,\varphi)}y,\, x - y\rangle$;
$\operatorname{Fix}(T_r^{(\Theta,\varphi)}) = \operatorname{MEP}(\Theta, \varphi)$ is closed and convex;
$\|T_s^{(\Theta,\varphi)}x - T_t^{(\Theta,\varphi)}x\|^{2} \le \frac{s-t}{s}\langle T_s^{(\Theta,\varphi)}x - T_t^{(\Theta,\varphi)}x,\, T_s^{(\Theta,\varphi)}x - x\rangle$, for all $s, t > 0$ and $x \in H$.
We need some facts and tools in a real Hilbert space H which are listed as lemmas below.
$\|x + y\|^{2} \le \|x\|^{2} + 2\langle y,\, x + y\rangle$, for all $x, y \in H$;
$\|\lambda x + \mu y\|^{2} = \lambda\|x\|^{2} + \mu\|y\|^{2} - \lambda\mu\|x - y\|^{2}$, for all $x, y \in H$ and $\lambda, \mu \in [0, 1]$ with $\lambda + \mu = 1$;
- (c)if $\{x_n\}$ is a sequence in H such that $x_n \rightharpoonup x$, it follows that $\limsup_{n\to\infty}\|x_n - y\|^{2} = \limsup_{n\to\infty}\|x_n - x\|^{2} + \|x - y\|^{2}$, for all $y \in H$.
Lemma 2.8 
either or ;
where , for all .
Recall that a set-valued mapping $T : D(T) \subseteq H \to 2^{H}$ is called monotone if, for all $x, y \in D(T)$, $f \in Tx$ and $g \in Ty$ imply $\langle f - g,\, x - y\rangle \ge 0$. A set-valued mapping T is called maximal monotone if T is monotone and $(I + \lambda T)D(T) = H$, for each $\lambda > 0$, where I is the identity mapping of H. We denote by $G(T)$ the graph of T. It is well known that a monotone mapping T is maximal if and only if, for $(x, f) \in H \times H$, $\langle f - g,\, x - y\rangle \ge 0$ for every $(y, g) \in G(T)$ implies $f \in Tx$.
Next we provide an example to illustrate the concept of maximal monotone mapping.
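A classical example: by Rockafellar's theorem, the subdifferential ∂φ of a proper lower semicontinuous convex function $\varphi : H \to (-\infty, +\infty]$ is maximal monotone, where

```latex
\partial\varphi(x) = \{\, z \in H : \varphi(y) \ge \varphi(x) + \langle z,\, y - x\rangle \ \text{for all } y \in H \,\}.
```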
Let $R : D(R) \subseteq H \to 2^{H}$ be a maximal monotone mapping. Let λ and μ be two positive numbers.
Lemma 2.9 
Consequently, $J_{R,\lambda}$ is nonexpansive and monotone.
Lemma 2.11 
Lemma 2.12 
Let R be a maximal monotone mapping with $D(R) = C$ and let $B : C \to H$ be a strongly monotone, continuous, and single-valued mapping. Then, for each $z \in H$, the equation $z \in (B + \lambda R)x$ has a unique solution $x_{\lambda}$ for $\lambda > 0$.
Lemma 2.13 
Let R be a maximal monotone mapping with $D(R) = C$ and let $B : C \to H$ be a monotone, continuous, and single-valued mapping. Then $(I + \lambda(R + B))C = H$, for each $\lambda > 0$. In this case, $R + B$ is maximal monotone.
3 Algorithms and convergence results
is -Lipschitz continuous.
Throughout the paper, unless otherwise specified, M and N are positive integers and C is a nonempty closed convex subset of a real Hilbert space H.
The following result provides the strong convergence of the sequence generated by Algorithm 3.1.
(C1) , , and ;
(C2) or ;
(C3) or ;
(C4) or ;
(C5) or ;
(C6) or ;
(C7) , and ;
(C8) for each , or ;
(C9) for each , or ;
(C10) there exist positive constants such that and , for sufficiently large .
converges strongly to a point provided , which is the unique solution of Problem 1.2.
for all , , and , where I is the identity mapping on H. Then we have and .
it may be confirmed that is nonexpansive, for each .
We divide the rest of the proof into several steps.
Step 1. We prove that is bounded.
Thus, is bounded since , and so are the sequences , , , and .
Step 2. We prove that .
for some and for some .
where for some .
Step 3. We prove that , , and .