An intermixed iteration for constrained convex minimization problem and split feasibility problem
Journal of Inequalities and Applications volume 2019, Article number: 269 (2019)
Abstract
In this paper, we first introduce a two-step intermixed iteration for finding common solutions of two constrained convex minimization problems, and we prove a strong convergence theorem for the intermixed algorithm. Using our main theorem, we prove a strong convergence theorem for the split feasibility problem. Finally, we apply our main theorem to a numerical example.
1 Introduction
Let H be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and norm \(\|\cdot \|\), and let C be a nonempty, closed, and convex subset of H.
We denote the fixed point set of a mapping T by \(F(T)\). Fixed point theory can be applied to variational inequality problems, equilibrium problems, split feasibility problems, optimization problems, etc. These problems are encountered in various fields such as engineering, physics, game theory, and economics.
A mapping T of C into itself is called nonexpansive if
$$ \Vert Tx-Ty \Vert \leq \Vert x-y \Vert ,\quad \forall x,y\in C. $$
In mathematics, optimization problems arise whenever one seeks to make a system as effective as possible, and they are usually stated as minimization problems. In this paper, we give a new iteration for solving two constrained convex minimization problems.
The constrained convex minimization problem is important in various branches of physics, engineering, and economics, e.g., for finding the minimum travel distance or the lowest cost. Consider the constrained convex minimization problem
$$ \min_{x\in C} f(x), $$
(1)
where \(f:C\rightarrow \mathbb{R}\) is a real-valued convex function. If f is (Fréchet) differentiable, then the gradient-projection algorithm (GPA) generates a sequence \(\{x_{n}\}\) using the recursive formula
$$ x_{n+1}=P_{C}\bigl(x_{n}-\lambda \nabla f(x_{n})\bigr),\quad n\geq 0, $$
(2)
or, more generally,
$$ x_{n+1}=P_{C}\bigl(x_{n}-\lambda _{n}\nabla f(x_{n})\bigr),\quad n\geq 0, $$
(3)
where in both (2) and (3) the initial guess \(x_{0}\) is taken from C arbitrarily, and the parameters λ or \(\lambda _{n}\) are positive real numbers satisfying certain conditions. The convergence of algorithms (2) and (3) depends on the behavior of the gradient ∇f. In fact, it is known that if ∇f is α-strongly monotone and L-Lipschitz with constants \(\alpha ,L>0\), then, for \(0<\lambda <\frac{2\alpha }{L^{2}}\), the operator
$$ T:=P_{C}(I-\lambda \nabla f) $$
(4)
is a contraction; hence, the sequence \(\{x_{n}\}\) defined by algorithm (2) converges in norm to the unique minimizer of (1). However, if the gradient ∇f fails to be strongly monotone, the operator T defined by (4) may fail to be contractive; consequently, the sequence \(\{x_{n}\}\) generated by algorithm (2) may fail to converge strongly [1]. If ∇f is Lipschitz, then algorithms (2) and (3) can still converge in the weak topology under certain conditions [2,3,4].
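To make scheme (2) concrete, here is a minimal numerical sketch (our own illustration, not taken from the paper): a projected gradient iteration for a strongly convex quadratic over a box, where the objective, step size, and iteration count are assumptions made for demonstration.

```python
import numpy as np

# Hedged sketch of the gradient-projection algorithm (2):
# minimize f(x) = ||x - b||^2 / 2 over the box C = [-1, 1]^2.
# Here grad f(x) = x - b is 1-strongly monotone and 1-Lipschitz (alpha = L = 1).
b = np.array([2.0, 0.5])                  # illustrative data, not from the paper

def grad_f(x):
    return x - b

def P_C(x):
    """Metric projection onto the box [-1, 1]^2 (coordinatewise clamp)."""
    return np.clip(x, -1.0, 1.0)

x = np.zeros(2)                           # arbitrary initial guess x_0 in C
lam = 1.0                                 # fixed step size, within (0, 2*alpha/L^2)
for _ in range(100):
    x = P_C(x - lam * grad_f(x))          # x_{n+1} = P_C(x_n - lam * grad f(x_n))

print(x)                                  # converges to the constrained minimizer (1.0, 0.5)
```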
The variational inequality problem is to find a point \(u\in C\) such that
$$ \langle Au,v-u\rangle \geq 0,\quad \forall v\in C, $$
(5)
where \(A:C\rightarrow H\) is a nonlinear mapping.
We denote the set of solutions of the variational inequality by \(\operatorname{VI}(C,A)\). Variational inequalities are used in many practical models; the theory has interesting connections to numerous disciplines and a wide range of important applications in engineering, physics, optimization, minimax problems, game theory, and economics; for more details, see [5, 6].
Su and Xu [3] established the relation between solutions of the minimization problem (1) and solutions of the variational inequality (5), as stated in the following Lemma 1; this lemma helps to prove theorems about the minimization problem more effectively; for more details, see [7,8,9].
Lemma 1
(Optimality condition, [3])
A necessary condition for a point \(x^{*}\in C\) to be a solution of the minimization problem (1) is that \(x^{*}\) solves the variational inequality
$$ \bigl\langle \nabla f\bigl(x^{*}\bigr),x-x^{*}\bigr\rangle \geq 0,\quad \forall x\in C. $$
(6)
Equivalently, \(x^{*}\in C\) solves the fixed point equation
$$ x^{*}=P_{C}(I-\lambda \nabla f)x^{*} $$
for every constant \(\lambda >0\). If, in addition, f is convex, then the optimality condition (6) is also sufficient.
By \(U_{f}\) we denote the set of solutions of (1).
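As a sanity check of Lemma 1, the following hedged snippet (reusing the illustrative quadratic from the sketch above, an assumption rather than the paper's data) verifies numerically that the constrained minimizer satisfies the fixed point equation \(x^{*}=P_{C}(I-\lambda \nabla f)x^{*}\) for several constants \(\lambda >0\).

```python
import numpy as np

b = np.array([2.0, 0.5])                       # same illustrative problem as above
grad_f = lambda x: x - b                       # gradient of ||x - b||^2 / 2
P_C = lambda x: np.clip(x, -1.0, 1.0)          # projection onto C = [-1, 1]^2

x_star = P_C(b)                                # constrained minimizer of f over C
for lam in (0.1, 0.5, 1.0, 5.0):               # Lemma 1: any constant lam > 0 works
    assert np.allclose(P_C(x_star - lam * grad_f(x_star)), x_star)
print("x* = P_C(I - lam * grad f)x* holds for every tested lam")
```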
In 2011, Ceng et al. [10] introduced the following iterative scheme that generates a sequence \(\{x_{n}\}\) in an explicit way:
where \(s_{n}=\frac{2-\lambda _{n}L}{4}\) and \(P_{C}(I-\lambda _{n}\nabla f)=s_{n}I+(1-s_{n})T_{n}\) for each \(n\geq 0\). They proved that the sequence \(\{x_{n}\}\) converges strongly to a minimizer \(x^{*}\) of (1).
In 2014, Tian and Liu [11] introduced an explicit composite iterative method for finding a common element of the set of solutions to an equilibrium problem and the solution set of a constrained convex minimization problem, and proved a strong convergence theorem, as follows:
Algorithm 1
Given \(x_{1}\in C\), let the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) be generated iteratively by
where \(T_{n}\) is the nonexpansive mapping given by \(P_{C}(I-\lambda _{n}\nabla f)=s_{n}I+(1-s_{n})T_{n}\), with \(P_{C}(I-\lambda _{n}\nabla f)\) being \(\frac{2+\lambda _{n}L}{4}\)-averaged and \(s_{n}=\frac{2-\lambda _{n}L}{4}\); ∇f is an L-Lipschitz mapping with \(L>0\); \(V:C\rightarrow C\) is an l-Lipschitz mapping with constant \(l\geq 0\); \(A:C\rightarrow C\) is a strongly positive bounded linear operator with coefficient \(\bar{\gamma }>0\) and \(0<\gamma <\frac{\bar{\gamma }}{l}\); \(u_{n}=Q_{\beta _{n}}x_{n}\); and \(\{\lambda _{n}\}\subset (0,\frac{2}{L})\), \(\{\alpha _{n}\}\subset (0,1)\), \(\{\beta _{n}\}\subset (0,\infty )\), \(\{s_{n}\}\subset (0,\frac{1}{2})\).
In 2015, Yao et al. [12] introduced the intermixed algorithm for two strict pseudocontractions S and T as follows:
Algorithm 2
For arbitrarily given \(x_{0}\in C\), \(y_{0}\in C\), let the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) be generated iteratively by
where \(S,T:C\rightarrow C\) are λ-strictly pseudocontractions, \(f:C\rightarrow H\) is a \(\rho _{1}\)-contraction, and \(g:C\rightarrow H\) is a \(\rho _{2}\)-contraction, \(k\in (0,1-\lambda )\) is a constant, and \(\{\alpha _{n}\}\), \(\{\beta _{n}\}\) are two real number sequences in \((0,1)\).
Furthermore, under some control conditions, they proved that the iterative sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) defined by (7) converge independently to \(P_{F(T)}f(y^{*})\) and \(P_{F(S)}g(x^{*})\), respectively, where \(x^{*}\in F(T)=\{z\in C:Tz=z\}\) and \(y^{*}\in F(S)=\{z^{*}\in C:Sz^{*}=z^{*}\}\).
Motivated by Yao et al. [12] and Tian and Liu [11], we introduce a new iterative method as follows:
Algorithm 3
Given \(x_{1},y_{1}\in C\), let the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) be defined by
where \(f,g:H\rightarrow H\) are \(a_{f}\)- and \(a_{g}\)-contraction mappings with \(a_{f},a_{g}\in (0,1)\) and \(a=\max \{a_{f},a_{g}\}\); \(\nabla \widetilde{f_{i}}\) is \(\frac{1}{L_{i}}\)-inverse strongly monotone with \(L_{i}>0\) for \(i=1,2\); \(\{\mu _{n}\},\{\alpha _{n}\}\subseteq [0,1]\); \(P_{C}(I-\lambda _{n}^{i}\nabla \widetilde{f}_{i})=s_{n}^{i}I+(1-s_{n}^{i})T_{n}^{\widetilde{f}_{i}}\) with \(s_{n}^{i}=\frac{2-\lambda _{n}^{i}L_{i}}{4}\) for \(i=1,2\); \(\{\lambda _{n}^{i}\}\subset (0,\frac{2}{L_{i}})\); and \(0<\overline{\theta }\leq \mu _{n}\leq \theta \) for all \(n\in \mathbb{N}\) and for some \(\overline{\theta },\theta >0\).
The purpose of this article is to combine the GPA and the averaged mapping approach to design a two-step intermixed iteration for finding common solutions of two constrained convex minimization problems, and to prove a strong convergence theorem for the intermixed algorithm generated by (8). Applying our main result, we prove a strong convergence theorem for the split feasibility problem. Moreover, we illustrate our main theorem with a numerical example.
2 Preliminaries
Throughout this article, we always assume that C is a nonempty, closed, and convex subset of a real Hilbert space H. We use “⇀” for weak convergence and “→” for strong convergence. For every \(x\in H\), there is a unique nearest point \(P_{C}x\) in C such that
$$ \Vert x-P_{C}x \Vert \leq \Vert x-y \Vert ,\quad \forall y\in C. $$
Such an operator \(P_{C}\) is called the metric projection of H onto C.
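For common constraint sets, \(P_{C}\) has a closed form. The sketch below (an assumed illustration) shows two such formulas: the coordinatewise clamp used for the box \([-10,10]\times [-10,10]\) in Example 1 of Sect. 5, and radial scaling for a Euclidean ball.

```python
import numpy as np

def project_box(x, lo=-10.0, hi=10.0):
    """P_C for the box C = [lo, hi]^d: clamp each coordinate (cf. Example 1)."""
    return np.clip(x, lo, hi)

def project_ball(x, radius=1.0):
    """P_C for the closed Euclidean ball of the given radius centered at 0."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

print(project_box(np.array([12.0, -3.0])))   # [10. -3.]
print(project_ball(np.array([3.0, 4.0])))    # [0.6 0.8]
```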
Assume that C is a nonempty, closed, and convex subset of H. A mapping \(V:C \rightarrow C\) is said to be l-Lipschitz if there exists a constant \(l \geq 0\) such that
$$ \Vert Vx-Vy \Vert \leq l \Vert x-y \Vert ,\quad \forall x,y\in C. $$
If \(l \in [0,1)\), then V is called a contraction. Obviously, if \(l=1\), V is a nonexpansive mapping.
Definition 1
A mapping \(T:H \rightarrow H\) is said to be firmly nonexpansive if and only if \(2T-I\) is nonexpansive, or equivalently,
$$ \langle x-y,Tx-Ty\rangle \geq \Vert Tx-Ty \Vert ^{2},\quad \forall x,y\in H. $$
Alternatively, T is firmly nonexpansive if and only if T can be expressed as
$$ T=\frac{1}{2}(I+S), $$
where \(S:H \rightarrow H\) is nonexpansive.
Definition 2
(Positive operator)
An operator A is called positive if it is self-adjoint and \(\langle Ax,x\rangle \geq 0\) for all \(x \in H\).
An operator A on H is strongly positive if there exists a constant \(\overline{\gamma } > 0\) such that
$$ \langle Ax,x\rangle \geq \overline{\gamma } \Vert x \Vert ^{2},\quad \forall x\in H. $$
Lemma 2
([13])
For a given \(z\in H\) and \(u\in C\),
$$ u=P_{C}z \quad \Longleftrightarrow \quad \langle u-z,v-u\rangle \geq 0,\quad \forall v\in C. $$
Furthermore, \(P_{C}\) is a firmly nonexpansive mapping of H onto C.
Lemma 3
([14])
Let H be a real Hilbert space. Then the following results hold:
- (i) For all \(x,y\in H\) and \(\alpha \in [0,1]\),
$$ \bigl\Vert \alpha x+(1-\alpha )y \bigr\Vert ^{2} = \alpha \Vert x \Vert ^{2}+(1-\alpha ) \Vert y \Vert ^{2}- \alpha (1- \alpha ) \Vert x-y \Vert ^{2}; $$
- (ii) \(\|x+y\|^{2} \leq \|x\|^{2}+2\langle y,x+y\rangle \) for each \(x,y\in H\).
Lemma 4
([4])
Let \(\{s_{n}\}\) be a sequence of nonnegative real numbers satisfying
$$ s_{n+1}\leq (1-\alpha _{n})s_{n}+\delta _{n},\quad n\geq 1, $$
where \(\{\alpha _{n}\}\) is a sequence in \((0,1)\) and \(\{\delta _{n}\}\) is a sequence such that
- (1) \(\sum_{n=1}^{\infty }\alpha _{n}=\infty \);
- (2) \(\limsup_{n\rightarrow \infty }\frac{\delta _{n}}{\alpha _{n}}\leq 0\) or \(\sum_{n=1}^{\infty }|\delta _{n}|<\infty \).
Then \(\lim_{n\rightarrow \infty }s_{n}=0\).
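Lemma 4 drives all of the strong convergence arguments below. A toy run (our own illustration, with \(\alpha _{n}=1/n\) and \(\delta _{n}=1/n^{2}\), which satisfy conditions (1) and (2)) shows the recursion forcing \(s_{n}\rightarrow 0\):

```python
# Toy illustration of Lemma 4: s_{n+1} <= (1 - alpha_n) s_n + delta_n.
# With alpha_n = 1/n (condition (1)) and delta_n = 1/n^2 (condition (2)),
# one can show s_n = O(log(n)/n), so s_n -> 0.
s = 1.0
for n in range(1, 200_001):
    s = (1 - 1.0 / n) * s + 1.0 / n**2
print(s)   # about 6e-5 after 200,000 steps, still decreasing
```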
Definition 3
A mapping \(T:H \rightarrow H\) is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping, that is,
$$ T=(1-\alpha )I+\alpha S, $$
(9)
where α is a number in \((0,1)\) and \(S: H \rightarrow H\) is nonexpansive. More precisely, when (9) holds, we say that T is α-averaged.
Clearly, a firmly nonexpansive mapping is a \(\frac{1}{2}\)-averaged mapping.
Proposition 1
For given operators \(S, T, V:H \rightarrow H\):
- (i) If \(T=(1-\alpha )S+\alpha V\) for some \(\alpha \in (0,1)\) and if S is averaged and V is nonexpansive, then T is averaged.
- (ii) T is firmly nonexpansive if and only if the complement \(I-T\) is firmly nonexpansive.
- (iii) If \(T=(1-\alpha )S+\alpha V\) for some \(\alpha \in (0,1)\), S is firmly nonexpansive, and V is nonexpansive, then T is averaged.
- (iv) The composition of finitely many averaged mappings is averaged. That is, if each of the mappings \(\{T_{i}\}_{i=1}^{N}\) is averaged, then so is the composition \(T_{1}\circ T_{2} \circ \cdots \circ T_{N}\). In particular, if \(T_{1}\) is \(\alpha _{1}\)-averaged and \(T_{2}\) is \(\alpha _{2}\)-averaged, where \(\alpha _{1},\alpha _{2} \in (0,1)\), then the composition \(T_{1}\circ T_{2}\) is α-averaged, where \(\alpha =\alpha _{1}+\alpha _{2}-\alpha _{1}\alpha _{2}\).
Lemma 5
([11])
For a given \(x \in H\), let \(P_{C}:H \rightarrow C\) be the metric projection. Then
- (a) \(z=P_{C}x\) if and only if \(\langle x-z,y-z\rangle \leq 0\), \(\forall y\in C\);
- (b) \(z=P_{C}x\) if and only if \(\| x-z\|^{2} \leq \| x-y\|^{2}-\| y-z\|^{2}\), \(\forall y \in C\);
- (c) \(\langle P_{C}x-P_{C}y,x-y\rangle \geq \| P_{C}x-P_{C}y\| ^{2}\), \(\forall x,y\in H\).
Consequently, \(P_{C}\) is nonexpansive and monotone.
Lemma 6
([15])
Each Hilbert space H satisfies Opial’s condition, i.e., for any sequence \(\{ u_{n} \} \subset H\) with \(u_{n} \rightharpoonup u\), the inequality
$$ \liminf_{n\rightarrow \infty } \Vert u_{n}-u \Vert < \liminf_{n\rightarrow \infty } \Vert u_{n}-v \Vert $$
holds for every \(v \in H\) with \(v \neq u\).
Definition 4
A nonlinear operator T whose domain \(D(T)\subseteq H\) and range \(R(T)\subseteq H\) is said to be:
- (a) monotone if
$$ \langle x-y,Tx-Ty \rangle \geq 0, \quad \forall x,y \in D(T); $$
- (b) β-strongly monotone if there exists \(\beta > 0\) such that
$$ \langle x-y,Tx-Ty \rangle \geq \beta \Vert x-y \Vert ^{2}, \quad \forall x,y \in D(T); $$
- (c) v-inverse strongly monotone (for short, v-ism) if there exists \(v>0\) such that
$$ \langle x-y,Tx-Ty \rangle \geq v \Vert Tx-Ty \Vert ^{2}, \quad \forall x,y \in D(T). $$
Proposition 2
Let T be an operator from H to itself. Then
- (a) T is nonexpansive if and only if the complement \(I-T\) is \(\frac{1}{2}\)-ism;
- (b) if T is v-ism, then, for \(\gamma >0\), γT is \(\frac{v}{\gamma }\)-ism;
- (c) T is averaged if and only if the complement \(I-T\) is v-ism for some \(v > \frac{1}{2}\). Indeed, for \(\alpha \in (0,1)\), T is α-averaged if and only if \(I-T\) is \(\frac{1}{2\alpha }\)-ism.
Lemma 7
([16])
Assume \(A:H \rightarrow H\) is a strongly positive bounded linear operator with coefficient \(\overline{\gamma }>0\) and \(0< t\leq \|A\| ^{-1}\). Then \(\|I-tA\|\leq 1-t\overline{\gamma }\).
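A quick numeric spot-check of Lemma 7, using an illustrative symmetric positive definite matrix on \(\mathbb{R}^{2}\) (our assumption, not data from the paper):

```python
import numpy as np

# Check ||I - t*A|| <= 1 - t*gamma_bar for a strongly positive A and 0 < t <= ||A||^{-1}.
A = np.array([[2.0, 0.5], [0.5, 1.0]])           # symmetric with positive eigenvalues
gamma_bar = np.linalg.eigvalsh(A).min()          # strong positivity coefficient
for t in (0.1, 0.2, 1.0 / np.linalg.norm(A, 2)): # sampled t with 0 < t <= ||A||^{-1}
    assert np.linalg.norm(np.eye(2) - t * A, 2) <= 1 - t * gamma_bar + 1e-12
print("Lemma 7 inequality verified for the sampled t")
```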
3 Main results
Let \(V:C\rightarrow C\) be l-Lipschitz with coefficient \(l > 0\), and let \(A:C\rightarrow C\) be a strongly positive bounded linear operator with coefficient \(\overline{\gamma }>0\) and \(0<\gamma <\frac{\overline{\gamma }}{l}\). Let \(f:C\rightarrow \mathbb{R}\) be a real-valued convex function and assume that ∇f is an L-Lipschitz mapping with \(L > 0\). From Xu [1], we have that \(P_{C}(I-\lambda \nabla f)\) is \(\frac{2+\lambda L}{4}\)-averaged for \(0<\lambda < \frac{2}{L}\), that is, for each \(n\in \mathbb{N}\) we can write
$$ P_{C}\bigl(I-\lambda _{n}\nabla f\bigr)=s_{n}I+(1-s_{n})T_{n}^{f}, $$
where \(T_{n}^{f}\) is nonexpansive and \(s_{n}=\frac{2-\lambda _{n}L}{4}\).
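To see this decomposition concretely, the hedged sketch below recovers the nonexpansive part \(T_{n}^{f}\) by solving \(s_{n}I+(1-s_{n})T_{n}^{f}=P_{C}(I-\lambda _{n}\nabla f)\) for an illustrative objective \(f(x)=\frac{1}{2}\|x\|^{2}\) (so \(\nabla f=I\), \(L=1\)) over the box \([-1,1]^{2}\), and spot-checks that \(T_{n}^{f}\) is indeed nonexpansive; the objective and set are assumptions, not the paper's.

```python
import numpy as np

# Decomposition P_C(I - lam*grad f) = s*I + (1 - s)*T with s = (2 - lam*L)/4,
# for the illustrative choice f(x) = ||x||^2 / 2 (grad f = I, L = 1), C = [-1, 1]^2.
L, lam = 1.0, 0.5                          # lam in (0, 2/L)
s = (2 - lam * L) / 4                      # s = 0.375 here

P_C = lambda x: np.clip(x, -1.0, 1.0)      # metric projection onto the box
G = lambda x: P_C(x - lam * x)             # G = P_C(I - lam*grad f)
T = lambda x: (G(x) - s * x) / (1 - s)     # recovered nonexpansive part T_n^f

rng = np.random.default_rng(0)
for _ in range(1000):                      # spot-check nonexpansiveness of T
    x, y = rng.uniform(-5, 5, 2), rng.uniform(-5, 5, 2)
    assert np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-12
print("T is nonexpansive on all sampled pairs")
```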
Theorem 1
Let C be a nonempty closed convex subset of a real Hilbert space H. For every \(i=1,2\), let \(\widetilde{f}_{i}:C \rightarrow \mathbb{R}\) be a real-valued convex function, and assume that \(\nabla \widetilde{f}_{i}\) is \(\frac{1}{L_{i}}\)-inverse strongly monotone with \(L_{i}> 0\) and \(U_{\widetilde{f}_{i}}\neq \emptyset \). Let \(f,g:H \rightarrow H\) be \(a_{f}\)- and \(a_{g}\)-contraction mappings, respectively, with \(a_{f},a_{g}\in (0,1)\) and \(a=\max \{a_{f},a_{g}\}\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1},y_{1}\in C\) and
where \(\{\mu _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(P_{C}(I-\lambda _{n}^{i}\nabla \widetilde{f}_{i})=s_{n}^{i}I+(1-s_{n}^{i})T_{n}^{ \widetilde{f}_{i}}\), \(s_{n}^{i}=\frac{2-\lambda _{n}^{i}L_{i}}{4}\) and \(\{\lambda _{n}^{i}\}\subset (0,\frac{2}{L_{i}})\) for all \(i=1,2\). Assume that the following conditions hold:
- (i) \(\lim_{n\rightarrow \infty }\alpha _{n}=0\) and \(\sum_{n=1}^{\infty }\alpha _{n}=\infty \);
- (ii) \(0<\overline{\theta }\leq \mu _{n} \leq \theta \) for all \(n\in \mathbb{N}\) and for some \(\overline{\theta },\theta >0\);
- (iii) \(\sum_{n=1}^{\infty }|\alpha _{n+1}-\alpha _{n}|<\infty \) and \(\sum_{n=1}^{\infty }|\mu _{n+1}-\mu _{n}|<\infty \).
Then \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly, as \(s_{n}^{i}\rightarrow 0\) (\(\Longleftrightarrow \lambda _{n}^{i}\rightarrow \frac{2}{L_{i}}\)) for \(i=1,2\), to \(x^{*}=P_{U_{\widetilde{f}_{1}}}f(y^{*})\) and \(y^{*}=P_{U_{\widetilde{f}_{2}}}g(x^{*})\), respectively.
Proof
First, we show that \(\{x_{n}\}\) and \(\{y_{n}\}\) are bounded. Assume that \(\widetilde{x}\in U_{\widetilde{f}_{1}}\) and \(\widetilde{y}\in U_{ \widetilde{f}_{2}}\). Then we have
Similarly, we get
Combining (11) and (12), we have
By induction, we can derive that
for every \(n \in \mathbb{N}\). This implies that \(\{x_{n}\}\) and \(\{y_{n}\}\) are bounded.
Next, we show that \(\|x_{n+1}-x_{n}\|\rightarrow 0\) and \(\|y_{n+1}-y _{n}\|\rightarrow 0\). Observe that
for some \(M_{1}>0\) such that \(M_{1}\geq L_{1}\|P_{C}(I-\lambda _{n} ^{1}\nabla \widetilde{f}_{1})x_{n-1}\|+4\|\nabla \widetilde{f}_{1}x _{n-1}\|+L_{1}\|x_{n-1}\|\), \(\forall n\geq 1 \).
From the definition of \(x_{n}\) and (13), we have
Using the same method as derived in (14), we have
for some \(M_{2}>0\) such that \(M_{2}\geq L_{2}\|P_{C}(I-\lambda _{n} ^{2}\nabla \widetilde{f}_{2})y_{n-1}\|+4\|\nabla \widetilde{f}_{2}y _{n-1}\|+L_{2}\|y_{n-1}\|\), \(\forall n\geq 1\).
Applying Lemma 4 and condition (iii), we can conclude that
Next, we show that \(\|x_{n}-W_{n}\|\rightarrow 0\) where \(W_{n}=\alpha _{n}f(y_{n})+(1-\alpha _{n})T_{n}^{\widetilde{f}_{1}}x_{n}\) and \(\|y_{n}-V_{n}\|\rightarrow 0\) where \(V_{n}=\alpha _{n}g(x_{n})+(1- \alpha _{n})T_{n}^{\widetilde{f}_{2}}y_{n}\). Let \(\widetilde{x}\in U _{\widetilde{f}_{1}}\) and \(\widetilde{y}\in U_{\widetilde{f}_{2}}\). Then we derive that
which implies that
By (16), as well as conditions (i) and (ii), we get
From the definition of \(x_{n}\), applying the same method as in (17), we have
Considering
implies that
Observe that
implying that
From \(\|x_{n+1}-x_{n}\|\rightarrow 0\) as \(n\rightarrow \infty \) and condition (i), we have
From the definition of \(V_{n}\), applying the same argument as in (21), we also obtain
Since
From the definition of \(y_{n}\), applying the same method as in (23), we also have
Next, we show that \(\|W_{n}-P_{C}(I-\frac{2}{L_{1}}\nabla \widetilde{f}_{1})W_{n}\|\rightarrow 0\) as \(n\rightarrow \infty \) and \(\|V_{n}-P_{C}(I-\frac{2}{L_{2}}\nabla \widetilde{f}_{2})V_{n}\| \rightarrow 0\) as \(n\rightarrow \infty \). Observe that
which yields
From (23) and condition (i), we have
Since
Observe that
where \(s_{n}^{1}=\frac{2-\lambda _{n}^{1}L_{1}}{4}\in (0,\frac{1}{2})\).
From (27), we have
From the boundedness of \(\{W_{n}\}\), \(s_{n}^{1}\rightarrow 0\) (\(\Longleftrightarrow \lambda _{n}^{1}\rightarrow \frac{2}{L_{1}}\)) and (26), we conclude that
Applying the same method as for (28), we also have
Next, we show that \(\limsup_{n\rightarrow \infty }\langle f(y^{*})-x^{*},W_{n}-x^{*} \rangle \leq 0\), where \(x^{*}=P_{U_{\widetilde{f}_{1}}}f(y^{*})\) and \(\limsup_{n\rightarrow \infty }\langle g(x^{*})-y^{*},V_{n}-y^{*} \rangle \leq 0\), where \(y^{*}=P_{U_{\widetilde{f}_{2}}}g(x^{*})\).
Indeed, take a subsequence \(\{W_{n_{k}}\}\) of \(\{W_{n}\}\) such that
Since \(\{x_{n}\}\) is bounded, without loss of generality, we may assume that \(x_{n_{k}}\rightharpoonup \widehat{x}\) as \(k\rightarrow \infty \). From (23), we obtain \(W_{n_{k}}\rightharpoonup \widehat{x}\) as \(k\rightarrow \infty \). Assume that \(\widehat{x} \neq P_{C}(I-\frac{2}{L_{1}}\nabla \widetilde{f}_{1}) \widehat{x}\). By nonexpansiveness of \(P_{C}(I-\frac{2}{L_{1}}\nabla \widetilde{f}_{1})\), (28) and Opial’s property, we have
This is a contradiction, thus we have
Since \(W_{n_{k}}\rightharpoonup \widehat{x}\) as \(k\rightarrow \infty \), due to (30) and Lemma 2, we can derive that
Similarly, take a subsequence \(\{V_{n_{k}}\}\) of \(\{V_{n}\}\) such that
Since \(\{y_{n}\}\) is bounded, without loss of generality, we may assume that \(y_{n_{k}}\rightharpoonup \widehat{y}\) as \(k\rightarrow \infty \). From (24), we obtain \(V_{n_{k}}\rightharpoonup \widehat{y}\) as \(k\rightarrow \infty \). Following the same method as for (31), we easily obtain that
Finally, we show that \(\{x_{n}\}\) converges strongly to \(x^{*}\), where \(x^{*}=P_{U_{\widetilde{f}_{1}}}f(y^{*})\) and \(\{y_{n}\}\) converges strongly to \(y^{*}\), where \(y^{*}=P_{U_{\widetilde{f}_{2}}}g(x^{*})\).
Let \(W_{n}=\alpha _{n}f(y_{n})+(1-\alpha _{n})T_{n}^{\widetilde{f}_{1}}x _{n}\) and \(V_{n}=\alpha _{n}g(x_{n})+(1-\alpha _{n})T_{n}^{ \widetilde{f}_{2}}y_{n}\). From the definition of \(x_{n}\), we get
which yields
Similarly, as derived above, we also have
From (33) and (34), we deduce that
By (16), (23), (24), (31), (32), condition (i) and Lemma 4, we have \(\lim_{n\rightarrow \infty }(\|x_{n}-x^{*}\|+\|y_{n}-y^{*}\|)=0\). It implies that the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) converge to \(x^{*}=P_{U_{\widetilde{f}_{1}}}f(y^{*})\), \(y^{*}=P_{U_{\widetilde{f} _{2}}}g(x^{*})\), respectively. This completes the proof. □
Corollary 1
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(\widetilde{f}:C \rightarrow \mathbb{R}\) be a real-valued convex function and assume that ∇f̃ is \(\frac{1}{L}\)-inverse strongly monotone with \(L> 0\) and \(U_{ \widetilde{f}}\neq \emptyset \). Let \(f:H \rightarrow H\) be an a-contraction mapping with \(a\in (0,1)\). Let the sequence \(\{x_{n}\}\) be generated by \(x_{1}\in C\) and
where \(\{\mu _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(P_{C}(I-\lambda _{n}\nabla \widetilde{f})=s_{n}I+(1-s_{n})T_{n}^{\widetilde{f}}\), \(s_{n}=\frac{2-\lambda _{n}L}{4}\), and \(\{\lambda _{n}\}\subset (0,\frac{2}{L})\). Assume that the following conditions hold:
- (i) \(\lim_{n\rightarrow \infty }\alpha _{n}=0\) and \(\sum_{n=1}^{\infty }\alpha _{n}=\infty \);
- (ii) \(0<\overline{\theta }\leq \mu _{n} \leq \theta \) for all \(n\in \mathbb{N}\) and for some \(\overline{\theta },\theta >0\);
- (iii) \(\sum_{n=1}^{\infty }|\alpha _{n+1}-\alpha _{n}|<\infty \) and \(\sum_{n=1}^{\infty }|\mu _{n+1}-\mu _{n}|<\infty \).
Then \(\{x_{n}\}\) converges strongly, as \(s_{n}\rightarrow 0\) (\(\Longleftrightarrow \lambda _{n}\rightarrow \frac{2}{L}\)), to \(x^{*}=P_{U_{\widetilde{f}}}f(x ^{*})\).
Proof
If we put \(\widetilde{f}_{1}\equiv \widetilde{f}_{2}\equiv \widetilde{f}\), \(f\equiv g\), and \(x_{n}=y_{n}\) in Theorem 1, we obtain the desired conclusion. □
4 Application
Let \(H_{1}\), \(H_{2}\) be two real Hilbert spaces. Let C, Q be nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively.
In 1994, Censor and Elfving [17] introduced the split feasibility problem (SFP), which is to find a point x such that
$$ x\in C \quad \text{and}\quad Ax\in Q, $$
where \(A:H_{1}\rightarrow H_{2}\) is a bounded linear operator.
Throughout this paper, we assume that the SFP is consistent, that is, the solution set Γ of the SFP is nonempty. Let \(f:H_{1}\rightarrow \mathbb{R}\) be the continuously differentiable function \(f(x)=\frac{1}{2}\|(I-P_{Q})Ax\|^{2}\). The minimization problem
$$ \min_{x\in C} f(x) $$
is ill-posed.
Before proving Theorem 2, we need the following:
Proposition 3
([18])
Given \(x^{*}\in H_{1}\), the following statements are equivalent:
- (i) \(x^{*}\) solves the SFP;
- (ii) \(P_{C}(I-\lambda \nabla f)x^{*}=P_{C}(I-\lambda A^{*}(I-P_{Q})A)x^{*}=x^{*}\);
- (iii) \(x^{*}\) solves the variational inequality problem of finding \(x^{*}\in C\) such that
$$ \bigl\langle \nabla f\bigl(x^{*}\bigr), x-x^{*}\bigr\rangle \geq 0, \quad \forall x\in C, $$
(38)
where \(\nabla f=A^{*}(I-P_{Q})A\) and \(A^{*}\) is the adjoint of A.
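Proposition 3(ii) suggests the classical fixed point iteration for the SFP, \(x_{n+1}=P_{C}(x_{n}-\lambda A^{*}(I-P_{Q})Ax_{n})\), essentially the CQ algorithm of Byrne. Below is a minimal sketch with illustrative sets C, Q and a fixed matrix A (all assumptions for demonstration, not data from the paper).

```python
import numpy as np

# CQ-type iteration from Proposition 3(ii):
# x_{n+1} = P_C(x_n - lam * A^T (I - P_Q) A x_n), with illustrative boxes C, Q.
A = np.array([[2.0, 0.0], [1.0, 1.0]])
P_C = lambda x: np.clip(x, -1.0, 1.0)     # C = [-1, 1]^2
P_Q = lambda y: np.clip(y, -2.0, 2.0)     # Q = [-2, 2]^2

L = np.linalg.norm(A.T @ A, 2)            # spectral radius of A^T A
lam = 1.0 / L                             # step size in (0, 2/L)

x = np.array([5.0, -5.0])                 # arbitrary starting point
for _ in range(500):
    Ax = A @ x
    x = P_C(x - lam * A.T @ (Ax - P_Q(Ax)))

print(x, A @ x)                           # x lies in C and A x lies in Q
```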
Theorem 2
Let C and Q be nonempty, closed, and convex subsets of \(H_{1}\) and \(H_{2}\), respectively, and let \(A_{i}:H_{1}\rightarrow H_{2}\) be bounded linear operators with \(L_{i}\) being the spectral radius of \(A_{i}^{*}A_{i}\) and \(\varGamma _{i}\neq \emptyset \) for \(i=1,2\). Let \(f,g:H_{1}\rightarrow H_{1}\) be \(a_{f}\)- and \(a_{g}\)-contraction mappings with \(a_{f},a_{g}\in (0,1)\) and \(a=\max \{a_{f},a_{g}\}\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1},y_{1}\in C\) and
where \(\{\mu _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(P_{C}(I-\lambda _{n}^{i}(A_{i}^{*}(I-P_{Q})A_{i}))=s_{n}^{i}I+(1-s_{n}^{i})T_{n}^{a _{i}}\), \(\forall i=1,2\) and \(s_{n}^{i}= \frac{2-\lambda _{n}^{i}L_{i}}{4}\), \(\{\lambda _{n}^{i}\}\subset (0,\frac{2}{L _{i}})\). Assume that the following conditions hold:
- (i) \(\lim_{n\rightarrow \infty }\alpha _{n}=0\) and \(\sum_{n=1}^{\infty }\alpha _{n}=\infty \);
- (ii) \(0<\overline{\theta }\leq \mu _{n} \leq \theta \) for all \(n\in \mathbb{N}\) and for some \(\overline{\theta },\theta >0\);
- (iii) \(\sum_{n=1}^{\infty }|\alpha _{n+1}-\alpha _{n}|<\infty \) and \(\sum_{n=1}^{\infty }|\mu _{n+1}-\mu _{n}|<\infty \).
Then \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly, as \(s_{n}^{i} \rightarrow 0\) (\(\Longleftrightarrow \lambda _{n}^{i}\rightarrow \frac{2}{L _{i}}\)) \(\forall i=1,2\), to \(x^{*}=P_{\varGamma _{1}}f(y^{*})\) with \(\varGamma _{1}=\{x\in C;A_{1}x\in Q\}\) and \(y^{*}=P_{\varGamma _{2}}g(x^{*})\) with \(\varGamma _{2}=\{\bar{x}\in C;A_{2}\bar{x}\in Q\}\), respectively.
Proof
Letting \(x,y\in C\) and \(\nabla f_{i}=A_{i}^{*}(I-P_{Q})A_{i}\) for all \(i=1,2\), we have
From the property of \(P_{C}\), we have
Substituting (41) into (40), we have
It follows that
Then \(\nabla f_{i}\) is \(\frac{1}{L_{i}}\)-inverse strongly monotone, for all \(i=1,2\).
From Proposition 3 and Theorem 1, we can conclude that Theorem 2 is true. □
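The inverse strong monotonicity established in this proof can be spot-checked numerically. The snippet below (reusing the illustrative A and Q from the sketch after Proposition 3, which are our assumptions) samples random pairs and verifies \(\langle x-y,\nabla f x-\nabla f y\rangle \geq \frac{1}{L}\|\nabla f x-\nabla f y\|^{2}\) with \(\nabla f=A^{*}(I-P_{Q})A\):

```python
import numpy as np

# Spot-check: grad f = A^T (I - P_Q) A is (1/L)-ism, L = spectral radius of A^T A.
A = np.array([[2.0, 0.0], [1.0, 1.0]])
P_Q = lambda y: np.clip(y, -2.0, 2.0)
grad = lambda x: A.T @ (A @ x - P_Q(A @ x))
L = np.linalg.norm(A.T @ A, 2)

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.uniform(-10, 10, 2), rng.uniform(-10, 10, 2)
    g = grad(x) - grad(y)
    assert np.dot(x - y, g) >= np.dot(g, g) / L - 1e-9
print("(1/L)-ism inequality holds on all sampled pairs")
```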
Corollary 2
Let C and Q be nonempty, closed, and convex subsets of \(H_{1}\) and \(H_{2}\), respectively, and let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator with L being the spectral radius of \(A^{*}A\) and \(\varGamma \neq \emptyset \). Let \(f:H_{1}\rightarrow H_{1}\) be an a-contraction mapping with \(a\in (0,1)\). Let the sequence \(\{x_{n}\}\) be generated by \(x_{1}\in C\) and
where \(\{\mu _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(P_{C}(I-\lambda _{n}(A^{*}(I-P_{Q})A))=s_{n}I+(1-s_{n})T_{n}^{a_{1}}\) and \(s_{n}=\frac{2- \lambda _{n}L}{4}\), \(\{\lambda _{n}\}\subset (0,\frac{2}{L})\). Assume that the following conditions hold:
- (i) \(\lim_{n\rightarrow \infty }\alpha _{n}=0\) and \(\sum_{n=1}^{\infty }\alpha _{n}=\infty \);
- (ii) \(0<\overline{\theta }\leq \mu _{n} \leq \theta \) for all \(n\in \mathbb{N}\) and for some \(\overline{\theta },\theta >0\);
- (iii) \(\sum_{n=1}^{\infty }|\alpha _{n+1}-\alpha _{n}|<\infty \) and \(\sum_{n=1}^{\infty }|\mu _{n+1}-\mu _{n}|<\infty \).
Then \(\{x_{n}\}\) converges strongly, as \(s_{n}\rightarrow 0\) (\(\Longleftrightarrow \lambda _{n}\rightarrow \frac{2}{L}\)), to \(x^{*}=P_{\varGamma }f(x^{*})\) with \(\varGamma =\{x\in C;Ax\in Q\}\).
Proof
If we put \(A_{1}=A_{2}=A\), \(f \equiv g\), and \(x_{n}=y_{n}\) in Theorem 2, then the conclusion follows. □
5 Numerical examples
Example 1
Let \(C=[-10,10]\times [-10,10]\) and let \(\langle \cdot , \cdot \rangle : \mathbb{R}^{2} \times \mathbb{R}^{2}\rightarrow \mathbb{R}\) be the inner product defined by \(\langle \mathbf{x},\mathbf{y} \rangle = \mathbf{x}\cdot \mathbf{y}=x_{1}y_{1}+x_{2}y_{2}\) for all \(\mathbf{x}=(x_{1},x_{2})\in \mathbb{R}^{2}\) and \(\mathbf{y}=(y_{1},y_{2})\in \mathbb{R}^{2}\). For \(i=1,2\), let \(\widetilde{f_{i}}:C \rightarrow \mathbb{R}\) be defined by \(\widetilde{f_{1}}(x_{1},x_{2})=2x_{1}^{2}+x_{2}\) and \(\widetilde{f_{2}}(x_{1},x_{2})=(x_{1}-1)+x_{2}^{2}\), \(\forall x_{1},x_{2}\in \mathbb{R}\). Let \(f,g:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}\), defined by \(f(x_{1},x_{2})=(\frac{x_{1}}{3},\frac{x_{2}}{3})\) and \(g(x_{1},x_{2})=(\frac{x_{1}}{4},\frac{x_{2}}{4})\), be \(\frac{1}{3}\)- and \(\frac{1}{4}\)-contraction mappings, respectively, so that \(a=\max \{\frac{1}{3},\frac{1}{4}\}=\frac{1}{3}\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1},y_{1}\in C\). Putting \(\alpha _{n}=\frac{1}{4n}\) and \(\mu _{n}=\frac{3n+1}{7n}\), we can rewrite (10) as follows:
where \(P_{C}(x_{1},x_{2})=(\max \{\min \{x_{1},10\},-10\},\max \{ \min \{x_{2},10\},-10\})\) and also \(P_{C}(I-\lambda _{n}^{i}\nabla \widetilde{f_{i}})=s_{n}^{i}I+(1-s_{n}^{i})T_{n}^{\widetilde{f_{i}}}\) and \(s_{n}^{i}=\frac{2-\lambda _{n}^{i}(16)}{4}\), where \(\lambda _{n} ^{i}=\frac{n^{2}}{8n^{2}+1}\) \(\forall i=1,2\).
Then, since \(\widetilde{f_{1}}(x_{1},x_{2})=2x_{1}^{2}+x_{2}\) and \(\widetilde{f_{2}}(x_{1},x_{2})=(x_{1}-1)+x_{2}^{2}\), we have
$$ \nabla \widetilde{f_{1}}(x_{1},x_{2})=(4x_{1},1) \quad \text{and}\quad \nabla \widetilde{f_{2}}(x_{1},x_{2})=(1,2x_{2}). $$
It is easy to check that \(\nabla \widetilde{f_{i}}\) is \(\frac{1}{16}\)-inverse strongly monotone for \(i=1,2\).
Consider \((0,-10),(-10,0)\in [-10,10]\times [-10,10]\). For every \((x_{1},x_{2})\in C\) we have
$$ \widetilde{f_{1}}(0,-10)=-10\leq 2x_{1}^{2}+x_{2}=\widetilde{f_{1}}(x_{1},x_{2}), $$
thus \((0,-10)\in U_{\widetilde{f_{1}}}\). Similarly,
$$ \widetilde{f_{2}}(-10,0)=-11\leq (x_{1}-1)+x_{2}^{2}=\widetilde{f_{2}}(x_{1},x_{2}), $$
thus \((-10,0)\in U_{\widetilde{f_{2}}}\).
It is clear that the sequences \(\{\alpha _{n}\}\), \(\{\mu _{n}\}\) satisfy all the conditions of Theorem 1, so we can conclude that the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to \((0,-10)\) and \((-10,0)\), respectively. Table 1 shows the values of \(\{x_{n}\}\) and \(\{y_{n}\}\) with initial points \(x_{1}=(x_{1}^{1},x_{1}^{2})=(-10,10)\) and \(y_{1}=(y_{1}^{1},y_{1}^{2})=(10,-10)\) and \(n=N=400\).
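Since the display of scheme (10) is not reproduced above, the sketch below checks only its gradient-projection component: iterating \(x\mapsto P_{C}(x-\lambda _{n}\nabla \widetilde{f_{i}}x)\) with the \(\lambda _{n}\) of Example 1 already drives the iterates to the minimizers \((0,-10)\) and \((-10,0)\) found above. The full intermixed scheme additionally mixes in the contractions f, g with the weights \(\mu _{n}\), \(\alpha _{n}\), which we omit here.

```python
import numpy as np

# Gradient-projection component of Example 1 (the intermixed terms of scheme (10)
# are omitted here): x -> P_C(x - lam_n * grad f_i(x)) on C = [-10, 10]^2.
P_C = lambda x: np.clip(x, -10.0, 10.0)
grad_f1 = lambda x: np.array([4.0 * x[0], 1.0])   # gradient of 2*x1^2 + x2
grad_f2 = lambda x: np.array([1.0, 2.0 * x[1]])   # gradient of (x1 - 1) + x2^2

x, y = np.array([-10.0, 10.0]), np.array([10.0, -10.0])   # initial points from Table 1
for n in range(1, 401):                                   # N = 400 as in Example 1
    lam = n**2 / (8.0 * n**2 + 1.0)                       # lam_n from Example 1
    x = P_C(x - lam * grad_f1(x))
    y = P_C(y - lam * grad_f2(y))

print(x)   # approaches (0, -10), the minimizer of f1 over C
print(y)   # approaches (-10, 0), the minimizer of f2 over C
```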
Remark 1
If we choose \(f\equiv g\) and \(x_{n}=y_{n}\) in Example 1, we can rewrite (36) as follows:
where \(P_{C}(x_{1},x_{2})=(\max \{\min \{x_{1},10\},-10\},\max \{\min \{x_{2},10\},-10\})\), \(P_{C}(I-\lambda _{n}\nabla \widetilde{f})=s_{n}I+(1-s_{n})T_{n}^{\widetilde{f}}\), and \(s_{n}=\frac{2-\lambda _{n}(16)}{4}\), where \(\lambda _{n}=\frac{n^{2}}{8n^{2}+1}\). From Corollary 1, we can conclude that the sequence \(\{x_{n}\}\) converges strongly to \((0,-10)\). Table 2 shows the values of \(\{x_{n}\}\) with initial point \(x_{1}=(x_{1}^{1},x_{1}^{2})=(-10,10)\) and \(n=N=400\).
Conclusion
- 1. Theorem 1 guarantees the convergence of \(\{x_{n}\}\) and \(\{y_{n}\}\) in Example 1.
- 2. Corollary 1 guarantees the convergence of \(\{x_{n}\}\) in Remark 1.
- 3. By using the concepts of the intermixed algorithm and the gradient-projection algorithm (GPA), we give a new iteration for solving two constrained convex minimization problems.
References
1. Xu, H.K.: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
2. Bertsekas, D.P., Gafni, E.M.: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 17, 139–159 (1982)
3. Su, M., Xu, H.K.: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 1, 35–43 (2010)
4. Xu, H.K.: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659–678 (2003)
5. Lions, J.L., Stampacchia, G.: Variational inequalities. Commun. Pure Appl. Math. 20, 493–517 (1967)
6. Kangtunyakarn, A.: A new iterative algorithm for the set of fixed-point problems of nonexpansive mappings and the set of equilibrium problem and variational inequalities problem. Abstr. Appl. Anal. 2011, Article ID 562689 (2011). https://doi.org/10.1155/2011/562689
7. Ke, Y., Ma, C.: Iterative algorithm of common solutions for a constrained convex minimization problem, a quasi-variational inclusion problem and the fixed point problem of a strictly pseudo-contractive mapping. Fixed Point Theory Appl. 2014, 54 (2014)
8. Chahn, Y.-J., Nazeer, W., Naqvi, S.-A., Shin, M.-K.: An implicit viscosity technique of nonexpansive mappings in Hilbert spaces. Int. J. Pure Appl. Math. 108(3), 635–650 (2016)
9. Nazeer, W., Munir, M.: Strong convergence of new viscosity rules of nonexpansive mappings. J. Appl. Math. 35(5–6), 423–438 (2017)
10. Ceng, L.-C., Ansari, Q.H., Yao, J.-C.: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286–5302 (2011)
11. Tian, M., Liu, L.: General iterative methods for equilibrium and constrained convex minimization problem. Optimization 63, 1367–1385 (2014)
12. Yao, Z., Kang, S.M., Li, H.J.: An intermixed algorithm for strict pseudo-contractions in Hilbert spaces. Fixed Point Theory Appl. 2015, 206 (2015)
13. Takahashi, W.: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
14. Osilike, M.O., Isiogugu, F.O.: Weak and strong convergence theorems for nonspreading-type mappings in Hilbert spaces. Nonlinear Anal. 74, 1814–1822 (2011)
15. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591–597 (1967)
16. Marino, G., Xu, H.-K.: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 318, 43–52 (2006)
17. Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
18. Ceng, L.-C., Ansari, Q.H., Yao, J.-C.: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 64, 633–642 (2012)
Acknowledgements
This work is supported by King Mongkut’s Institute of Technology Ladkrabang.
Availability of data and materials
Not applicable.
Funding
Not applicable.
Author information
Contributions
The two authors contributed equally to the writing of this paper. Both authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Saechou, K., Kangtunyakarn, A. An intermixed iteration for constrained convex minimization problem and split feasibility problem. J Inequal Appl 2019, 269 (2019). https://doi.org/10.1186/s13660-019-2222-4