Regularized hybrid iterative algorithms for triple hierarchical variational inequalities
Journal of Inequalities and Applications volume 2014, Article number: 490 (2014)
In this paper, we introduce and study a triple hierarchical variational inequality (THVI) with constraints of minimization and equilibrium problems. More precisely, the constraint set is the intersection of the fixed point set of a nonexpansive mapping, the solution set of a mixed equilibrium problem (MEP), and the solution set Γ of a minimization problem (MP) for a convex and continuously Fréchet differentiable functional in a Hilbert space. We want to find a solution of a variational inequality with a variational inequality constraint over this intersection. We propose a hybrid iterative algorithm with regularization to compute approximate solutions of the THVI, and we present the convergence analysis of the proposed iterative algorithm.
MSC:49J40, 47J20, 47H10, 65K05, 47H09.
Let H be a Hilbert space with inner product ⟨·,·⟩ and norm ∥·∥ over the real scalar field ℝ. Let C be a nonempty closed convex subset of H, and let P_C be the metric projection of H onto C. Let T : C → C be a self-mapping on C. Denote by Fix(T) the set of fixed points of T. We say that T is L-Lipschitzian if there exists a constant L > 0 such that

∥Tx − Ty∥ ≤ L∥x − y∥ for all x, y ∈ C.

When L = 1 or L ∈ [0, 1), we call T a nonexpansive or a contractive mapping, respectively. We say that a mapping A : C → H is α-inverse strongly monotone if there exists a constant α > 0 such that

⟨Ax − Ay, x − y⟩ ≥ α∥Ax − Ay∥² for all x, y ∈ C,

and that A is η-strongly monotone (resp. monotone) if there exists a constant η > 0 (resp. η = 0) such that

⟨Ax − Ay, x − y⟩ ≥ η∥x − y∥² for all x, y ∈ C.
It is known that T is nonexpansive if and only if the complement I − T is 1/2-inverse strongly monotone. Moreover, η-strongly monotone and L-Lipschitz continuous mappings are η/L²-inverse strongly monotone (see, e.g., ).
Let f : C → ℝ be a convex and continuously Fréchet differentiable functional. Consider the minimization problem (MP):

min_{x ∈ C} f(x)   (1.1)

(assuming the existence of minimizers). We denote by Γ the set of minimizers of problem (1.1). The gradient-projection algorithm (GPA) generates a sequence {x_n} determined by the gradient ∇f and the metric projection P_C:

x_{n+1} := P_C(x_n − λ∇f(x_n)), n ≥ 0,   (1.2)

where λ > 0 is the step size.
The convergence of algorithm (1.2) depends on the behavior of the gradient ∇f. It is known that if ∇f is η-strongly monotone and L-Lipschitz continuous, then for 0 < λ < 2η/L², the operator

P_C(I − λ∇f)

is a contraction. Hence, the sequence {x_n} defined by the GPA (1.2) converges in norm to the unique solution of (1.1). If the gradient ∇f is only assumed to be Lipschitz continuous, then {x_n} can, in general, only converge weakly when H is infinite-dimensional (a counterexample to the norm convergence of {x_n} is given by Xu [, Section 5]).
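To make the scheme concrete, here is a minimal numerical sketch of the GPA (1.2) in Python on a toy instance (an assumed example, not from the paper): f(x) = ½∥x − b∥² over the box C = [0, 1]², so that ∇f(x) = x − b is 1-Lipschitz and 1-strongly monotone and any step size λ ∈ (0, 2η/L²) = (0, 2) yields norm convergence.

```python
import numpy as np

# Toy instance of the gradient-projection algorithm (1.2):
# minimize f(x) = 0.5*||x - b||^2 over the box C = [0, 1]^2.
# grad f(x) = x - b is L-Lipschitz and eta-strongly monotone with L = eta = 1,
# so any step size lam in (0, 2*eta/L**2) = (0, 2) gives norm convergence.
b = np.array([2.0, -1.0])           # chosen so the minimizer lies on the boundary of C

def proj_C(x):
    """Metric projection P_C onto the box [0, 1]^2."""
    return np.clip(x, 0.0, 1.0)

x = np.array([0.5, 0.5])
lam = 1.0
for _ in range(100):
    x = proj_C(x - lam * (x - b))   # x_{n+1} = P_C(x_n - lam * grad f(x_n))

# The constrained minimizer is P_C(b) = (1, 0).
print(x)
```

Here the limit is simply the projection of the unconstrained minimizer b onto C, which is the expected behavior for a quadratic objective.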
Regularization, in particular the traditional Tikhonov regularization, is usually used to solve ill-posed optimization problems. Consider the regularized minimization problem

min_{x ∈ C} f_α(x) := f(x) + (α/2)∥x∥²,

where α > 0 is the regularization parameter, and again f is convex with an L-Lipschitz continuous gradient ∇f. While a regularization method provides possible strong convergence to the minimum-norm solution, its disadvantage is that it is an implicit method. Hence explicit iterative methods seem to be attractive. See, e.g., Xu [2, 3].
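The effect of Tikhonov regularization can be seen on a toy ill-posed problem (an assumed example, not from the paper): f(x) = ½(x₁ + x₂ − 1)² has a whole line of minimizers, while the regularized problem min f(x) + (α/2)∥x∥² has a unique solution that tends to the minimum-norm minimizer (½, ½) as α → 0.

```python
import numpy as np

# f(x) = 0.5*(a.x - 1)^2 with a = (1, 1): every point on x1 + x2 = 1 minimizes f.
# The Tikhonov-regularized problem min f(x) + (alpha/2)*||x||^2 has the unique
# solution of the normal equations (a a^T + alpha I) x = a, namely
# x_alpha = a/(||a||^2 + alpha), and x_alpha -> (0.5, 0.5) (the minimum-norm
# minimizer of f) as alpha -> 0.
a = np.array([1.0, 1.0])

def x_alpha(alpha):
    return a / (a @ a + alpha)

for alpha in [1.0, 0.1, 1e-3]:
    print(alpha, x_alpha(alpha))
```

The regularized solutions trace a path that selects, among all minimizers of f, the one of minimum norm.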
On the other hand, for a given mapping A : C → H, we consider the variational inequality problem (VIP) of finding x* ∈ C such that

⟨Ax*, x − x*⟩ ≥ 0 for all x ∈ C.   (1.3)
The solution set of VIP (1.3) is denoted by VI(C, A). It is well known that, when A is monotone and continuous, x* solves VIP (1.3) if and only if x* solves the Minty variational inequality ⟨Ax, x − x*⟩ ≥ 0 for all x ∈ C.
When C is the fixed point set Fix(T) of a nonexpansive mapping T and A = I − S for a mapping S, VIP (1.3) becomes the variational inequality problem of finding x* ∈ Fix(T) such that

⟨(I − S)x*, x − x*⟩ ≥ 0 for all x ∈ Fix(T).   (1.4)
This problem, introduced by Moudafi and Maingé [9, 10], is called the hierarchical fixed point problem. It is clear that if S has fixed points in Fix(T), then they are solutions of VIP (1.4). If S is a contraction, the solution set of VIP (1.4) is a singleton, and the problem is well known as a viscosity problem. This was previously introduced by Moudafi and further developed by Xu. In this case, solving VIP (1.4) is equivalent to finding a fixed point of the nonexpansive mapping P_{Fix(T)}S, where P_{Fix(T)} is the metric projection onto the closed and convex set Fix(T). Yao et al. introduced a two-step algorithm to solve VIP (1.4).
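When S is a contraction, the equivalence between VIP (1.4) and the fixed point problem for P_{Fix(T)}S can be tried out numerically; the sketch below is a toy instance (assumed, not from the paper) with H = ℝ, Fix(T) = [0, 1], and the 0.5-contraction S(x) = 0.5x + 0.3.

```python
# Toy hierarchical fixed point problem: Fix(T) = [0, 1] and the 0.5-contraction
# S(x) = 0.5*x + 0.3.  The unique solution of VIP (1.4) is the fixed point of
# the contraction P_{Fix(T)} o S, found here by Banach iteration.
def proj_fix(x):
    return min(max(x, 0.0), 1.0)   # metric projection onto Fix(T) = [0, 1]

def S(x):
    return 0.5 * x + 0.3

x = 5.0
for _ in range(60):
    x = proj_fix(S(x))

print(x)   # converges to x* = 0.6, which solves (1.4) in this toy setting
```

Since P_{Fix(T)}S inherits the contraction constant 0.5, the iterates converge geometrically to the unique solution.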
Let Θ : C × C → ℝ be a bifunction and φ : C → ℝ be a function. Consider the mixed equilibrium problem (MEP) of finding x* ∈ C such that

Θ(x*, y) + φ(y) − φ(x*) ≥ 0 for all y ∈ C,   (1.5)

which was studied by Ceng and Yao . The solution set of MEP (1.5) is denoted by MEP(Θ, φ). The MEP (1.5) is very general in the sense that it includes, as special cases, fixed point problems, optimization problems, variational inequality problems, minimax problems, Nash equilibrium problems in noncooperative games, and others; see, e.g., [13–15].
Recently, Iiduka [16, 17] considered a variational inequality with a variational inequality constraint over the set of fixed points of a nonexpansive mapping. Since this problem has a triple structure in contrast with bilevel programming problems or hierarchical constrained optimization problems or hierarchical fixed point problems, it is referred to as a triple hierarchical constrained optimization problem (THCOP). He presented some examples of THCOP and developed iterative algorithms to find the solution of such a problem. Since the original problem is a variational inequality, in this paper, we call it a triple hierarchical variational inequality (THVI). Ceng et al. introduced and considered some THVI in . A nice survey article on THVI is . See also [20–22].
Extending the works done in , we introduce and study in this paper the following triple hierarchical variational inequality with constraints of minimization and equilibrium problems.
The problem to study
Let C be a nonempty closed convex subset of a real Hilbert space H. Let f : C → ℝ be convex and continuously Fréchet differentiable, with Γ the set of its minimizers. Let T and S be nonexpansive self-mappings. Let V be ρ-contractive with ρ ∈ [0, 1), and let F be κ-Lipschitzian and η-strongly monotone with constants κ, η > 0. Suppose 0 < μ < 2η/κ² and 0 < γ ≤ τ, where τ = 1 − √(1 − μ(2η − μκ²)).
Let Ξ denote the solution set of the following hierarchical variational inequality (HVI): find such that
where the solution set Ξ is assumed to be nonempty. Consider the following triple hierarchical variational inequality (THVI).
Find such that
Based on the iterative schemes provided by Xu and the two-step iterative scheme provided by Yao et al., and by virtue of the viscosity approximation method, the hybrid steepest-descent method, and the regularization method, we propose the following hybrid iterative algorithm with regularization:
Here, , , and . It is shown that under appropriate assumptions, the two iterative sequences and converge strongly to the unique solution of the THVI (1.6).
Let K be a nonempty closed convex subset of a real Hilbert space H. We write x_n ⇀ x and x_n → x to indicate that the sequence {x_n} converges weakly and strongly to x, respectively. The weak ω-limit set of the sequence {x_n} is denoted by

ω_w(x_n) := { x ∈ H : x_{n_i} ⇀ x for some subsequence {x_{n_i}} of {x_n} }.
The metric (or nearest point) projection from H onto K is the mapping P_K : H → K which assigns to each point x ∈ H the unique point P_K x ∈ K satisfying the property

∥x − P_K x∥ = inf_{y ∈ K} ∥x − y∥.
Proposition 2.1 For given x ∈ H and z ∈ K:
(i) z = P_K x if and only if ⟨x − z, y − z⟩ ≤ 0 for all y ∈ K;
(ii) z = P_K x if and only if ∥x − z∥² ≤ ∥x − y∥² − ∥y − z∥² for all y ∈ K;
(iii) ⟨P_K x − P_K y, x − y⟩ ≥ ∥P_K x − P_K y∥² for all x, y ∈ H.
Hence, P_K is nonexpansive and monotone.
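The characterization in Proposition 2.1(i) can be checked numerically. The sketch below (an assumed example) uses the closed unit ball of ℝ³, whose projection has the closed form P_K x = x/max(1, ∥x∥), and samples test points y ∈ K.

```python
import numpy as np

# Check of the characterization z = P_K(x)  <=>  <x - z, y - z> <= 0 for all y in K,
# with K the closed unit ball of R^3 and P_K(x) = x / max(1, ||x||).
rng = np.random.default_rng(0)

def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = 3.0 * rng.normal(size=3)   # a point (generally) outside K
z = proj_ball(x)

# Sample many points y in K and verify the variational inequality.
ok = all((x - z) @ (proj_ball(rng.normal(size=3)) - z) <= 1e-12
         for _ in range(1000))
print(ok)
```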
Definition 2.2 A mapping T : H → H is said to be firmly nonexpansive if 2T − I is nonexpansive, or equivalently,

⟨x − y, Tx − Ty⟩ ≥ ∥Tx − Ty∥² for all x, y ∈ H.

Alternatively, T is firmly nonexpansive if and only if T can be expressed as

T = ½(I + S),

where S : H → H is nonexpansive. Projections are firmly nonexpansive. We call T an averaged mapping if T can be expressed as a proper convex combination of the identity map I and a nonexpansive mapping; that is, T = (1 − α)I + αS for some α ∈ (0, 1) and nonexpansive S, in which case T is said to be α-averaged. In particular, firmly nonexpansive mappings are ½-averaged.
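The representation T = ½(I + S) can be probed numerically: for the (firmly nonexpansive) projection onto a box, the reflection S = 2T − I should be nonexpansive. A toy check (assumed example):

```python
import numpy as np

# If T is firmly nonexpansive, then S = 2T - I is nonexpansive.
# Here T = projection onto the box [-1, 1]^2, a firmly nonexpansive mapping.
rng = np.random.default_rng(1)

def T(x):
    return np.clip(x, -1.0, 1.0)

def S(x):
    return 2.0 * T(x) - x          # the reflection 2T - I

ok = True
for _ in range(1000):
    x, y = 4.0 * rng.normal(size=2), 4.0 * rng.normal(size=2)
    ok = ok and np.linalg.norm(S(x) - S(y)) <= np.linalg.norm(x - y) + 1e-12
print(ok)
```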
Proposition 2.3 (see )
Let T : H → H be a given mapping.
(i) T is nonexpansive if and only if the complement I − T is ½-inverse strongly monotone.
(ii) If T is ν-inverse strongly monotone, then for any γ > 0, γT is (ν/γ)-inverse strongly monotone.
(iii) T is averaged if and only if the complement I − T is ν-inverse strongly monotone for some ν > ½. Indeed, for α ∈ (0, 1), T is α-averaged if and only if I − T is 1/(2α)-inverse strongly monotone.
Proposition 2.4 (see )
Let S, T, V : H → H be given mappings.
(i) If T = (1 − α)S + αV for some α ∈ (0, 1), and if S is averaged and V is nonexpansive, then T is averaged.
(ii) T is firmly nonexpansive if and only if the complement I − T is firmly nonexpansive.
(iii) If T = (1 − α)S + αV for some α ∈ (0, 1), and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.
(iv) The composition of finitely many averaged mappings is averaged. In particular, if T₁ is α₁-averaged and T₂ is α₂-averaged, where α₁, α₂ ∈ (0, 1), then the composition T₁T₂ is α-averaged with α = α₁ + α₂ − α₁α₂.
(v) If the mappings {T_i}_{i=1}^N are averaged and have a common fixed point, then ⋂_{i=1}^N Fix(T_i) = Fix(T₁ ⋯ T_N).
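The composition rule for two averaged mappings can be verified numerically on toy linear maps in ℝ² (an assumed example): build T₁ = (1 − a₁)I + a₁N₁ and T₂ = (1 − a₂)I + a₂N₂ with N₁ a rotation and N₂ = −I (both nonexpansive), and check that N = (T₁T₂ − (1 − a)I)/a is nonexpansive for a = a₁ + a₂ − a₁a₂.

```python
import numpy as np

# T1 is a1-averaged and T2 is a2-averaged; the claim is that T1 o T2 is
# a-averaged with a = a1 + a2 - a1*a2, i.e. N = (T1T2 - (1 - a)I)/a is
# nonexpansive.
rng = np.random.default_rng(2)
a1, a2 = 0.5, 0.25
th = 0.7
R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])   # rotation: nonexpansive N1

def T1(x):
    return (1 - a1) * x + a1 * (R @ x)

def T2(x):
    return (1 - a2) * x + a2 * (-x)         # N2 = -I is nonexpansive

a = a1 + a2 - a1 * a2

def N(x):
    return (T1(T2(x)) - (1 - a) * x) / a

ok = True
for _ in range(500):
    x, y = rng.normal(size=2), rng.normal(size=2)
    ok = ok and np.linalg.norm(N(x) - N(y)) <= np.linalg.norm(x - y) + 1e-12
print(ok)
```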
For solving the equilibrium problem for a bifunction Θ : C × C → ℝ and a function φ : C → ℝ, let us consider the following conditions:
(A1) Θ(x, x) = 0 for all x ∈ C;
(A2) Θ is monotone, that is, Θ(x, y) + Θ(y, x) ≤ 0 for all x, y ∈ C;
(A3) for each x, y, z ∈ C, lim sup_{t→0⁺} Θ(tz + (1 − t)x, y) ≤ Θ(x, y);
(A4) for each x ∈ C, y ↦ Θ(x, y) is convex and lower semicontinuous;
(A5) for each y ∈ C, x ↦ Θ(x, y) is weakly upper semicontinuous;
(B1) for each x ∈ H and r > 0, there exist a bounded subset D_x ⊆ C and y_x ∈ C such that for any z ∈ C \ D_x,

Θ(z, y_x) + φ(y_x) − φ(z) + (1/r)⟨y_x − z, z − x⟩ < 0;

(B2) C is a bounded set.
Lemma 2.5 (see )
Let C be a nonempty closed convex subset of a real Hilbert space H and let Θ : C × C → ℝ be a bifunction satisfying (A1)-(A4). Let r > 0 and x ∈ H. Then there exists z ∈ C such that

Θ(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ C.
Lemma 2.6 (see )
Let C be a nonempty closed convex subset of a real Hilbert space H. Let Θ : C × C → ℝ be a bifunction satisfying (A1)-(A5) and let φ : C → ℝ be a proper lower semicontinuous and convex function. For r > 0 and x ∈ H, define a mapping T_r : H → C as follows:

T_r(x) := { z ∈ C : Θ(z, y) + φ(y) − φ(z) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ C }

for all x ∈ H. Assume that either (B1) or (B2) holds. Then T_r is a single-valued firmly nonexpansive map on H, Fix(T_r) = MEP(Θ, φ), and MEP(Θ, φ) is closed and convex.
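A concrete special case of the resolvent T_r (an assumed example, chosen so everything is computable in closed form): take C = H = ℝ, Θ(z, y) = f(y) − f(z) with f(y) = |y|, and φ ≡ 0. Then T_r reduces to soft-thresholding, and the defining inequality can be checked directly.

```python
import numpy as np

# For Theta(z, y) = |y| - |z| and phi = 0, the resolvent is
# T_r(x) = argmin_y |y| + (1/(2r))*(y - x)^2, i.e. the soft-thresholding map,
# and z = T_r(x) satisfies Theta(z, y) + (1/r)*(y - z)*(z - x) >= 0 for all y.
def T_r(x, r):
    return np.sign(x) * max(abs(x) - r, 0.0)

r, x = 0.5, 1.3
z = T_r(x, r)                       # z = 0.8
ok = all(abs(y) - abs(z) + (y - z) * (z - x) / r >= -1e-12
         for y in np.linspace(-5.0, 5.0, 2001))
print(z, ok)
```

This matches the general theory: the proximal mapping of a convex function is single-valued and firmly nonexpansive.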
Lemma 2.7 (see )
Let {a_n} be a sequence of nonnegative real numbers such that

a_{n+1} ≤ (1 − s_n)a_n + s_n t_n + ε_n, n ≥ 0.

Here {s_n} ⊂ [0, 1], {t_n} ⊂ ℝ, and ε_n ≥ 0 for all n ≥ 0, such that
(i) Σ_{n=0}^∞ s_n = ∞;
(ii) either lim sup_{n→∞} t_n ≤ 0 or Σ_{n=0}^∞ s_n|t_n| < ∞;
(iii) Σ_{n=0}^∞ ε_n < ∞.
Then lim_{n→∞} a_n = 0.
Lemma 2.8 (Demiclosedness principle; see )
Let C be a nonempty closed convex subset of a real Hilbert space H and let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. If {x_n} is a sequence in C converging weakly to x and if {(I − T)x_n} converges strongly to y, then (I − T)x = y; in particular, if y = 0, then x ∈ Fix(T).
Lemma 2.9 (see )
Let T : H → H be a nonexpansive mapping and V : H → H be a ρ-contraction with ρ ∈ [0, 1).
(i) I − T is monotone, i.e.,

⟨(I − T)x − (I − T)y, x − y⟩ ≥ 0 for all x, y ∈ H;

(ii) I − V is (1 − ρ)-strongly monotone, i.e.,

⟨(I − V)x − (I − V)y, x − y⟩ ≥ (1 − ρ)∥x − y∥² for all x, y ∈ H.
Lemma 2.10 ()
Let H be a real Hilbert space. Then, for all x, y ∈ H and λ ∈ [0, 1],

∥λx + (1 − λ)y∥² = λ∥x∥² + (1 − λ)∥y∥² − λ(1 − λ)∥x − y∥².
Lemma 2.11 We have the following inequality in an inner product space X:

∥x + y∥² ≤ ∥x∥² + 2⟨y, x + y⟩ for all x, y ∈ X.
Notations Let λ be a number in (0, 1] and let μ > 0. Let F : H → H be κ-Lipschitzian and η-strongly monotone. Associated with a nonexpansive mapping T : H → H, we define the mapping T^λ : H → H by

T^λ x := Tx − λμF(Tx), x ∈ H.
Lemma 2.12 (see [, Lemma 3.1])
The map T^λ is a contraction provided 0 < μ < 2η/κ², that is,

∥T^λ x − T^λ y∥ ≤ (1 − λτ)∥x − y∥ for all x, y ∈ H,

where τ = 1 − √(1 − μ(2η − μκ²)) ∈ (0, 1]. In particular, if T = I is the identity mapping, then

∥(I − λμF)x − (I − λμF)y∥ ≤ (1 − λτ)∥x − y∥ for all x, y ∈ H.
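The contraction estimate of Lemma 2.12 can be checked numerically on a toy operator (an assumed example): F = diag(1, 2), so κ = 2 and η = 1, with T = I.

```python
import numpy as np

# Check of ||(I - lam*mu*F)x - (I - lam*mu*F)y|| <= (1 - lam*tau)*||x - y||
# with tau = 1 - sqrt(1 - mu*(2*eta - mu*kappa**2)), for F = diag(1, 2):
# eta = 1 (strong monotonicity), kappa = 2 (Lipschitz constant).
rng = np.random.default_rng(3)
A = np.diag([1.0, 2.0])
eta, kappa = 1.0, 2.0
mu, lam = 0.4, 0.5                  # mu in (0, 2*eta/kappa**2) = (0, 0.5), lam in (0, 1]
tau = 1.0 - np.sqrt(1.0 - mu * (2.0 * eta - mu * kappa**2))

def G(x):
    return x - lam * mu * (A @ x)   # the map I - lam*mu*F

ok = True
for _ in range(500):
    x, y = rng.normal(size=2), rng.normal(size=2)
    ok = ok and np.linalg.norm(G(x) - G(y)) <= (1.0 - lam * tau) * np.linalg.norm(x - y) + 1e-12
print(ok)
```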
A set-valued mapping T : H → 2^H is called monotone if ⟨x − y, u − v⟩ ≥ 0 for all x, y ∈ H, u ∈ Tx, and v ∈ Ty. A monotone set-valued mapping T is called maximal if its graph G(T) is not properly contained in the graph of any other monotone set-valued mapping. It is known that a monotone set-valued mapping T is maximal if and only if, for (x, u) ∈ H × H, ⟨x − y, u − v⟩ ≥ 0 for every (y, v) ∈ G(T) implies u ∈ Tx.
Let A : C → H be a monotone and Lipschitz continuous mapping and let N_C(v) be the normal cone to C at v ∈ C, namely

N_C(v) := { w ∈ H : ⟨v − u, w⟩ ≥ 0 for all u ∈ C }.

Define the set-valued mapping T̃ by

T̃v = Av + N_C(v) if v ∈ C, and T̃v = ∅ if v ∉ C.

Lemma 2.13 (see )
Let A : C → H be a monotone mapping. Then:
T̃ is maximal monotone;
0 ∈ T̃v if and only if v ∈ VI(C, A).
3 Main results
Let us consider the following three-step iterative scheme with regularization:
is a ρ-contraction;
and are nonexpansive mappings;
is a κ-Lipschitzian and η-strongly monotone mapping;
and are real-valued functions;
is L-Lipschitz continuous with ;
and are sequences in with and ;
and are sequences in ;
and , where .
Theorem 3.1 Suppose that satisfies (A1)-(A5) and that (B1) or (B2) holds. Let be the bounded sequence generated from any given by (3.1). Assume that
(H1) , ;
(H2) and ;
(H3) , , and ;
(H4) and .
Then we have the following:
if held in addition, i.e., .
Proof First, let us show that P_C(I − λ∇f_α) is ξ-averaged for each λ ∈ (0, 2/(α + L)), where

ξ = (2 + λ(α + L))/4 ∈ (0, 1).

Indeed, the Lipschitz condition implies that the gradient ∇f is 1/L-inverse strongly monotone , that is,

⟨∇f(x) − ∇f(y), x − y⟩ ≥ (1/L)∥∇f(x) − ∇f(y)∥².

Hence, ∇f_α = αI + ∇f is 1/(α + L)-inverse strongly monotone. Thus, λ∇f_α is 1/(λ(α + L))-inverse strongly monotone by Proposition 2.3(ii). By Proposition 2.3(iii), the complement I − λ∇f_α is (λ(α + L)/2)-averaged. Noting that P_C is 1/2-averaged and utilizing Proposition 2.4(iv), we know that for each λ ∈ (0, 2/(α + L)), the map P_C(I − λ∇f_α) is ξ-averaged with

ξ = 1/2 + λ(α + L)/2 − (1/2)·(λ(α + L)/2) = (2 + λ(α + L))/4 ∈ (0, 1).

In particular, P_C(I − λ∇f_α) is nonexpansive. Furthermore, since {λ_n} ⊂ (0, 2/L) and α_n → 0, we may assume, without loss of generality, that

λ_n < 2/(α_n + L) for all n ≥ 0.

Consequently, for each integer n ≥ 0, P_C(I − λ_n∇f_{α_n}) is ξ_n-averaged with

ξ_n = (2 + λ_n(α_n + L))/4 ∈ (0, 1).

This immediately implies that P_C(I − λ_n∇f_{α_n}) is nonexpansive for all n ≥ 0.
We divide the proof into several steps.
Step 1. .
For simplicity, put . Then and for every . We observe that
Moreover, from (3.1) we have
Utilizing Lemma 2.12 from (3.2) we deduce that
where . Taking into consideration that and , we have
Putting in (3.4) and in (3.5), we obtain
Adding the last two inequalities, by (A2) we get
Since , we may assume, without loss of generality, that there exists a positive number c such that for all . Thus we have
Substituting (3.6) into (3.3) we derive
Here, for some .
On the other hand, from (3.1) we have
Simple calculations show that
Utilizing Lemma 2.12 from (3.2), (3.6), and (3.7) we deduce that
where , for some . Therefore,
where , for some . From (H1), (H2), and (H4), it follows that and
Applying Lemma 2.7 to (3.8), we immediately conclude that
In particular, from (H3) it follows that
Step 2. and .
By the firm nonexpansivity of , if , we have
This immediately yields
Let . We have
Hence we have
By Lemmas 2.10 and 2.12, we have from (3.9) and (3.10) that
Furthermore, utilizing Lemmas 2.11 and 2.12 we have from (3.9) and (3.10) that
It turns out therefore that
Then it is clear that
Since , , , and , we conclude that
Since , , and as , we have
Therefore, from the last inequality we have
Step 3. and .
Let . Utilizing Lemmas 2.6 and 2.11 we have from (3.12) that
Since , , , and , it follows from that , and hence
Furthermore, from the firm nonexpansiveness of we obtain
Thus, from (3.12) we have
This implies that
Since , , , and , it follows that
This, together with and (due to Step 2), implies that
Step 4. ; moreover, if in addition, then .
Let . Then there exists a subsequence of such that . Since
Hence from , , , and , we get
Since and , we have . Utilizing Lemma 2.8 we derive .
Let us show that . As a matter of fact, since , for any we have
It follows from (A2) that
Replacing n by , we have
Since and , it follows from (A4) that
Put for all and . We have and
Utilizing (A1), (A4), and (3.13), we have
Letting in (3.14) and utilizing (A3), we get, for each ,
Let us show that . From and , we know that and . Define
Then is maximal monotone and if and only if ; see  for more details. Let . Then we have
On the other hand, from
Hence, we obtain
Since is maximal monotone, we have , and hence, , which leads to . Consequently, . This shows that .
Utilizing Lemmas 2.11 and 2.12, we have for every ,
Suppose now that in addition. It follows from (3.15) that