Adapting step size algorithms for solving split equilibrium problems with applications to signal recovery
Journal of Inequalities and Applications volume 2022, Article number: 125 (2022)
Abstract
Recent developments in split equilibrium problems (SEPs) have found practical applications in convex optimization, information theory, and signal processing. In this paper, we present three novel algorithms that require no prior knowledge of the norm of a bounded linear operator to approximate solutions of SEPs. Strong convergence results are established under appropriate conditions. In addition, we illustrate our main results by providing various numerical examples. The computational performance of the algorithms is compared with that of previously studied methods in the literature, and the results are demonstrated through a numerical implementation of the sparse sensor signal recovery problem.
1 Introduction
Throughout this work, let C and D be nonempty closed convex subsets of two real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. This paper focuses on the split equilibrium problems (SEPs) introduced by He [1], which consist in finding a point \(w^{*}\in C\) such that
and a point \(v^{*}=Aw^{*}\in D\) solves
where f and g are nonlinear bi-functions on \(C\times C\) and \(D\times D\), respectively, and \(A:H_{1}\to H_{2}\) is a bounded linear operator with adjoint \(A^{*}\). The equilibrium problem plays a vital role in various branches of science, optimization, and economics (see [2–6] for details). We denote the set of all solutions of the split equilibrium problems (SEPs) (1)–(2) as follows:
where \(\mathrm{EP}(f)\) and \(\mathrm{EP}(g)\) denote the solution sets of classical equilibrium problems (1) and (2), respectively.
Over the past decade, many methods have been constructed and developed to find solutions of SEPs (1)–(2). The development of such algorithms proceeded as follows. In 2013, Kazmi and Rizvi [7] devised an iterative technique to find a solution of the SEPs. Let \(L_{A}\) be the spectral radius of \(A^{*}A\), let \(P_{C}\) be the orthogonal projection onto C, and let \(E:C\to H_{1}\) be an α-inverse strongly monotone map. Their algorithm was defined as follows:
where \(\gamma \in (0,\frac{1}{L_{A}})\), \(r_{n} >0\), \(\lambda _{n}\in (0,2\alpha )\), and the real sequences \(\{\mu _{n} \}, \{\rho _{n} \}\), and \(\{\sigma _{n} \}\) are in \((0,1)\). Under suitable conditions on all parameters, the iterative method (4) converges strongly to a solution of the SEPs.
In 2016, Suantai [8] presented an iterative algorithm for solving the SEPs as follows:
where W is a \(\frac{1}{2}\)-nonspreading multivalued mapping, \(\gamma \in (0,\frac{1}{L_{A}})\), \(\{r_{n}\}\subseteq (0,\infty )\), and \(\{\mu _{n} \}\subseteq (0,1)\). Suantai also provided a weak convergence result.
Building on Suantai’s ideas, Onjai-uea and Phuengrattana [9] constructed an algorithm to solve the SEPs in 2017 as follows:
where W is a multivalued λ-hybrid mapping, \(\gamma \in (0,\frac{1}{L_{A}})\), \(\{r_{n}\}\subseteq (0,\infty )\), and \(\{\alpha _{n} \} , \{\mu _{n} \} \subseteq (0,1)\). Under appropriate conditions, Onjai-uea and Phuengrattana showed that algorithm (6) converges weakly to a solution of the SEPs. Many other methods for solving the SEPs are not mentioned above; the reader can refer to [1, 10–12] for details.
Note that the iterative methods (4), (5), and (6) all use a step size γ that depends on the operator norm \(\|A\|\), which is generally difficult to compute. Hence this paper focuses on overcoming this difficulty by constructing algorithms that require no prior knowledge of the operator norm \(\|A\|\). For more details about these ideas, the reader is directed to [13, 14]. For any \(r_{n}>0\), we define
In the case that f and g are the indicator functions of the closed convex subsets C and Q of the Hilbert spaces, we have \(T_{r_{n}}^{f}=P_{C}\) and \(T_{r_{n}}^{g}=P_{Q}\). Since the squared distance functions \(\|x-P_{C}x\|^{2}\) and \(\|x-P_{Q}x\|^{2}\) are differentiable, the gradients ∇ω and ∇ψ of the functions ω and ψ, respectively, are given as follows:
However, formulas (9) and (10) do not hold in general.
By using operators (8)–(9), we construct a new step size via a self-adaptive method that avoids computing the operator norm \(\|A\|\):
where \(0<\rho _{n}<4\).
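The step-size construction above can be sketched numerically. The sketch below is illustrative only: it assumes the indicator-function case of (8)–(9), so that the resolvents reduce to projections onto boxes, a concrete operator A, and the common self-adaptive form \(\tau _{n}=\rho _{n}\,\omega (x_{n})/(\|\nabla \omega (x_{n})\|^{2}+\|\nabla \psi (x_{n})\|^{2})\) with \(\rho _{n}=2\); these choices are assumptions for illustration, not the exact formula of this paper.

```python
import numpy as np

# Illustration: C and D are boxes, so T^f = P_C and T^g = P_D are clips.
def proj_box(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)

A = np.array([[2.0, 0.0], [0.0, 3.0]])   # assumed bounded linear operator

def omega(x):                            # omega(x) = (1/2)||(I - P_D)Ax||^2
    r = A @ x - proj_box(A @ x)
    return 0.5 * r @ r

def grad_omega(x):                       # grad omega(x) = A^T (I - P_D) A x
    return A.T @ (A @ x - proj_box(A @ x))

def grad_psi(x):                         # grad psi(x) = (I - P_C) x
    return x - proj_box(x)

def step_size(x, rho=2.0, eps=1e-12):
    """Self-adaptive step: computed from omega and its gradients only,
    with no knowledge of the operator norm ||A||."""
    denom = grad_omega(x) @ grad_omega(x) + grad_psi(x) @ grad_psi(x)
    return rho * omega(x) / max(denom, eps)

tau = step_size(np.array([3.0, -2.0]))   # positive step at an infeasible point
```

At a point already in C with its image in D, ω and both gradients vanish, so the step is zero, which is consistent with having reached a solution.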
Inspired by the above works, the aims of this paper are as follows: (1) to construct and propose three modified step size algorithms by using new self-adaptive techniques; (2) to obtain theoretical strong convergence results for the SEPs based on the proposed algorithms; (3) to show the performance of our proposed algorithms by comparing them with the algorithms previously developed by Kazmi and Rizvi [7], Suantai [8], and Onjai-uea and Phuengrattana [9]; and (4) to demonstrate the presented results through a numerical implementation of the sparse sensor signal recovery problem.
The remainder of this paper is organized as follows. Section 2 establishes the preliminaries used in this work and presents related definitions and lemmas. Section 3 introduces the proposed step size algorithms for solving the SEPs in three modifications and proves the main theorems. Section 4 demonstrates numerical examples of the proposed algorithms and shows theoretical applications to the split feasibility problem and the optimization problem. Section 5 concludes this work.
2 Preliminaries
In this section, we give preliminary results that are necessary for our subsequent analysis. For a sequence \(\{u_{n}\}\) in a real Hilbert space, the set of all its weak limit points is denoted by \(\omega _{w}(u_{n})\). Let \(f:C\times C\to \mathbb{R}\) and \(g:D\times D\to \mathbb{R}\) be nonlinear bi-functions. Suppose that a bi-function f satisfies the following assumptions:
-
(A1)
\(f(s,s)\geq 0\);
-
(A2)
\(f(s,t)+f(t,s)\leq 0\) (f is monotone);
-
(A3)
\(\limsup_{\zeta \to 0} f(\zeta s+(1-\zeta )t,w)\leq f(t,w)\);
-
(A4)
\(s\mapsto f(t,s)\) is a lower semicontinuous and convex function;
for all \(s,t,w\in C\). Let \(T_{r}^{f}:H_{1}\to C\) be a mapping given by
\(T_{r}^{f}(w)= \{ s\in C : f(s,t)+\frac{1}{r}\langle t-s, s-w\rangle \geq 0 \text{ for all } t\in C \} \)
for \(r>0\) and for all \(w\in H_{1}\).
Definition 2.1
For any \(v,w \in C\), the mapping \(S: C\to H_{1}\) is said to be
-
1.
monotone if \(\langle v-w, Sv-Sw \rangle \geq 0\);
-
2.
ϑ-inverse strongly monotone (for short ϑ-ism) if \(\langle w-v, Sw-Sv \rangle \geq \vartheta \|Sw-Sv\|^{2} \text{ for some } \vartheta > 0\);
-
3.
L-Lipschitz continuous if \(\|Sw-Sv\|\leq L\|w-v\| \text{ for some } L\geq 0\);
-
4.
nonexpansive if \(\|Sw-Sv\|\leq \|w-v\|\);
-
5.
firmly nonexpansive if \(\|Sw-Sv\|^{2}\leq \langle Sw-Sv, w-v\rangle \).
Recall that every nonexpansive mapping S satisfies the following inequalities:
-
1.
\(\langle (v-Sv)-(w-Sw),Sw-Sv\rangle \leq \frac{1}{2}\|(Sv-v)-(Sw-w)\|^{2}\);
-
2.
\(\langle v-Sv,w-Sv\rangle \leq \frac{1}{2}\|Sv-v\|^{2}\).
Lemma 2.2
For any \(w, u\in H_{1}\) and \(\theta \in [0,1]\), the following relationships hold:
-
1.
\(\|w+u\|^{2}\leq \|w\|^{2}+2\langle u,w+u \rangle \);
-
2.
\(\|w-u\|^{2}=\|w\|^{2}-\|u\|^{2}-2\langle w-u,u \rangle \);
-
3.
\(\|\theta w+(1-\theta )u\|^{2}=\theta \|w\|^{2}+(1-\theta )\|u\|^{2}- \theta (1-\theta )\|w-u\|^{2}\).
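Since sign slips in these identities are easy to make, here is a quick numerical confirmation with random vectors (a sanity check only; the vectors and θ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
w, u = rng.normal(size=5), rng.normal(size=5)
theta = 0.3

# (1)  ||w+u||^2 <= ||w||^2 + 2<u, w+u>
lhs1 = np.dot(w + u, w + u)
rhs1 = np.dot(w, w) + 2 * np.dot(u, w + u)
assert lhs1 <= rhs1 + 1e-12

# (2)  ||w-u||^2 = ||w||^2 - ||u||^2 - 2<w-u, u>
lhs2 = np.dot(w - u, w - u)
rhs2 = np.dot(w, w) - np.dot(u, u) - 2 * np.dot(w - u, u)
assert abs(lhs2 - rhs2) < 1e-9

# (3)  ||theta w + (1-theta)u||^2
#      = theta||w||^2 + (1-theta)||u||^2 - theta(1-theta)||w-u||^2
z = theta * w + (1 - theta) * u
lhs3 = np.dot(z, z)
rhs3 = (theta * np.dot(w, w) + (1 - theta) * np.dot(u, u)
        - theta * (1 - theta) * np.dot(w - u, w - u))
assert abs(lhs3 - rhs3) < 1e-9
```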
Lemma 2.3
For the metric projection \(P_{C}\), the following relationships hold:
-
1.
\(P_{C}\) is a nonexpansive mapping;
-
2.
\(\|P_{C}v-P_{C}w\|^{2}\leq \langle v-w, P_{C}v-P_{C}w \rangle \);
-
3.
\(\langle v-P_{C}v, w-P_{C}v\rangle \leq 0\) for all \(w\in C\);
-
4.
\(\|v-P_{C}v\|^{2}+\|w-P_{C}v\|^{2}\leq \|v-w\|^{2}\) for all \(w\in C\);
-
5.
\(\|(v-w)-(P_{C}v-P_{C}w)\|^{2}\leq \|v-w\|^{2}-\|P_{C}v-P_{C}w\|^{2}\);
for any \(v, w\in H_{1}\).
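For a concrete check, assume C is the box \([-1,1]^{n}\), so that \(P_{C}\) is a coordinatewise clip (an illustrative choice, not the general setting); properties 3 and 4 are tested with \(w\in C\), as required:

```python
import numpy as np

def P(x):
    """Orthogonal projection onto C = [-1, 1]^n (a closed convex box)."""
    return np.clip(x, -1.0, 1.0)

rng = np.random.default_rng(1)
v = rng.normal(scale=3.0, size=4)
z = rng.normal(scale=3.0, size=4)
w = P(rng.normal(scale=3.0, size=4))   # a point of C, needed for properties 3-4

# 1-2: nonexpansive and firmly nonexpansive
assert np.linalg.norm(P(v) - P(z)) <= np.linalg.norm(v - z) + 1e-12
assert np.linalg.norm(P(v) - P(z)) ** 2 <= np.dot(v - z, P(v) - P(z)) + 1e-12
# 3: variational characterization of the projection (w in C)
assert np.dot(v - P(v), w - P(v)) <= 1e-12
# 4: Pythagorean-type inequality (w in C)
assert (np.linalg.norm(v - P(v)) ** 2 + np.linalg.norm(w - P(v)) ** 2
        <= np.linalg.norm(v - w) ** 2 + 1e-9)
# 5: decomposition inequality for firmly nonexpansive mappings
assert (np.linalg.norm((v - z) - (P(v) - P(z))) ** 2
        <= np.linalg.norm(v - z) ** 2 - np.linalg.norm(P(v) - P(z)) ** 2 + 1e-9)
```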
Lemma 2.4
([3])
Assume that a bi-function f satisfies assumptions (A1)–(A4) and the mapping \(T_{r}^{f}\) is defined by equation (12). Then:
-
1.
\(T_{r}^{f}\) is nonempty, single-valued, and firmly nonexpansive;
-
2.
\(\mathrm{EP}(f) = \mathrm{Fix}(T_{r}^{f})\);
-
3.
\(\mathrm{EP}(f)\) is a closed convex set.
Moreover, as shown in [15], the mapping \(T_{r}^{f}\) satisfies the following inequality:
for any \(v,w\in H_{1}\) and \(r,s>0\).
Lemma 2.5
([16])
Let \(\{w_{n}\}\) and \(\{v_{n}\}\) be two bounded sequences, and let \(\{\zeta _{n}\} \subseteq [0,1]\) be a real sequence such that
If the following statements hold:
-
1.
\(\limsup_{n\to \infty}(\|v_{n+1}-v_{n}\|-\|w_{n+1}-w_{n}\|)\leq 0\);
-
2.
\(0<\liminf_{n\to \infty}\zeta _{n}\leq \limsup_{n\to \infty}\zeta _{n}<1\).
Then \(\lim_{n\to \infty}\|v_{n}-w_{n}\|=0\).
Lemma 2.6
([17])
Every Hilbert space H satisfies the Opial condition: if \(w_{n}\rightharpoonup w\) for a sequence \(\{w_{n}\} \subseteq H\), then for any element \(v\in H\) with \(v\neq w\),
Lemma 2.7
([18])
Let \(\{w_{n}\} \subseteq (0 , \infty )\) be a sequence of real numbers with
for all \(n\geq 1\), where
-
1.
\(\{\mu _{n}\}\subset [0,1]\) and \(\sum \mu _{n}=\infty \),
-
2.
\(\limsup \delta _{n}\leq 0\),
-
3.
\(\xi _{n}\geq 0\) and \(\sum \xi _{n}<\infty \).
Then \(\lim_{n\to \infty}w_{n}=0\).
Lemma 2.8
([19])
Suppose that a sequence of real numbers \(\{\Gamma _{m}\}\) does not decrease at infinity, in the sense that there is a subsequence \(\{\Gamma _{m_{i}}\}\) of \(\{\Gamma _{m}\}\) with \(\Gamma _{m_{i}}<\Gamma _{m_{i}+1}\) for all \(i\geq 0\). Let a sequence of integers \(\{\sigma (m)\}_{m\geq m_{0}}\) be given by
Then \(\{\sigma (m)\}_{m\geq m_{0}}\) is a nondecreasing sequence satisfying \(\lim_{m\to \infty}\sigma (m)=\infty \) and
for all \(m\geq m_{0}\).
Lemma 2.9
Assume that ω and ψ are the two functions given by equations (7) and (8), respectively. Then the gradients ∇ω and ∇ψ of the functions ω and ψ are Lipschitz continuous.
Proof
Since \(\nabla \omega (p)=A^{*}(I-T_{r_{n}}^{g})Ap\), we have
where \(L_{A}=\|A\|^{2}\). On the other hand,
By combining (14) with (15), we get that
Thus ∇ω is \(\frac{1}{L_{A}}\)-inverse strongly monotone. Moreover,
Hence \(\|\nabla \omega (p)-\nabla \omega (q)\|\leq L_{A}\|p-q\|\). Similarly, ∇ψ is Lipschitz continuous. □
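A numerical illustration of Lemma 2.9 (a sketch under assumptions: \(T_{r_{n}}^{g}\) is replaced by the projection onto a box, as in the indicator-function case of Sect. 1, and A is a randomly drawn matrix) verifies the bound \(\|\nabla \omega (p)-\nabla \omega (q)\|\leq L_{A}\|p-q\|\) with \(L_{A}=\|A\|^{2}\):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 4))
L_A = np.linalg.norm(A, 2) ** 2          # squared spectral norm ||A||^2

def grad_omega(p):
    """grad omega(p) = A^T (I - P_D) A p with D = [-1, 1]^3 (assumed box)."""
    Ap = A @ p
    return A.T @ (Ap - np.clip(Ap, -1.0, 1.0))

# I - P_D is (firmly) nonexpansive, so the Lipschitz constant is at most L_A.
for _ in range(100):
    p = rng.normal(scale=5.0, size=4)
    q = rng.normal(scale=5.0, size=4)
    lhs = np.linalg.norm(grad_omega(p) - grad_omega(q))
    assert lhs <= L_A * np.linalg.norm(p - q) + 1e-9
```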
3 Results
This section presents and analyzes our three iterative algorithms, whose generated sequences converge strongly to a solution of SEPs (1)–(2) and to a fixed point of a nonexpansive mapping. In the sequel, the solution set is given by
Before stating the results, assume that the two bi-functions f and g satisfy assumptions (A1)–(A4) and that g is upper semicontinuous. Let S be a nonexpansive mapping on C and \(\Psi \neq \emptyset \). We now construct and analyze three algorithms as follows.
Condition 3.1
Suppose that \(\{r_{n}\}\subseteq (0,\infty )\), a is a constant, \(\{\rho _{n} \}\) is any positive sequence, and real parameter sequences \(\{\alpha _{n} \}, \{\beta _{n} \}, \{\gamma _{n} \}\), and \(\{\kappa _{n} \}\) satisfy \(\alpha _{n}, \beta _{n}, \gamma _{n}, \kappa _{n} \in (0, 1)\), and the following conditions hold:
-
(C1)
\(\alpha _{n}+\beta _{n}+\gamma _{n}=1\);
-
(C2)
\(\sum_{n=0}^{\infty}\alpha _{n}=\infty , \lim_{n\to \infty}\alpha _{n}=0\);
-
(C3)
\(\lim_{n\to \infty}\kappa _{n}=0 \textit{ and } \lim_{n\to \infty} \frac{\kappa _{n}}{\alpha _{n}}=0\);
-
(C4)
\(\lim_{n\to \infty}|r_{n+1}-r_{n}|=0 \textit{ and } 0< a\leq r_{n} \);
-
(C5)
\(0<\lim \inf_{n\to \infty}\beta _{n}\leq \lim \sup_{n\to \infty} \beta _{n}<1\);
-
(C6)
\(\lim_{n\to \infty} ( \frac {\gamma _{n+1}}{1-\beta _{n+1}}- \frac {\gamma _{n}}{1-\beta _{n}} ) =0\);
-
(C7)
\(\inf \rho _{n}(4-\rho _{n})>0\) and \(0<\rho _{n}<4\).
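As a numerical sanity check (finite sampling, not a proof of the limits), the parameter choices used later in Example 4.1 can be tested against conditions (C1)–(C3) and (C5)–(C7); condition (C4) concerns \(\{r_{n}\}\) and must be checked separately:

```python
import numpy as np

n = np.arange(1.0, 2.0e5 + 1.0)
alpha = 1.0 / (2.0 * n + 2.0)          # alpha_n = 1/(2n+2)
beta = n / (2.0 * n + 2.0)             # beta_n  = n/(2n+2)
gamma = np.full_like(n, 0.5)           # gamma_n = 1/2, so alpha+beta+gamma = 1
kappa = 1.0 / (2.0 * n + 2.0) ** 2     # kappa_n = 1/(2n+2)^2
rho = np.full_like(n, 2.0)             # 0 < rho_n < 4

assert np.allclose(alpha + beta + gamma, 1.0)      # (C1)
assert alpha[-1] < 1e-5 and alpha.sum() > 5.0      # (C2): alpha_n -> 0, partial sums grow
assert (kappa / alpha)[-1] < 1e-5                  # (C3): kappa_n / alpha_n -> 0
assert 0.0 < beta.min() <= beta.max() < 1.0        # (C5)
q = gamma / (1.0 - beta)                           # (C6): successive differences vanish
assert abs(q[-1] - q[-2]) < 1e-9
assert (rho * (4.0 - rho)).min() >= 4.0 - 1e-12    # (C7): inf rho_n(4 - rho_n) > 0
```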
Theorem 3.1
The sequence \(\{x_{n}\}\) generated by Algorithm 1 converges strongly to a solution \(s\in \Psi \), where \(s=P_{\Psi }u\).
Proof
Since Ψ is a nonempty set, take any \(c\in \Psi \); this implies \(c=T_{r_{n}}^{f}(c)\) and \(Ac=T_{r_{n}}^{g}(Ac)\). Thus \((I-T_{r_{n}}^{g})Ac=Ac-Ac=0\). Since \(\nabla \omega (x_{n})=A^{*}(I-T_{r_{n}}^{g})Ax_{n}\) and \(I-T_{r_{n}}^{g}\) is a firmly nonexpansive mapping, we find that
By using the fact that \(T_{r_{n}}^{f}(I+\tau _{n}A^{*}(T_{r_{n}}^{g}-I)A)\) is a nonexpansive mapping, we can easily verify that
This implies that
Now, let \(v\in C\) be fixed and \(y_{n}=\kappa _{n} v+(1-\kappa _{n})u_{n}\). We find that
Also, we find that
By setting \(\varrho _{n}=(\alpha _{n}+\kappa _{n}\gamma _{n})\) and using (18) and (20), we obtain
This shows that \(\{x_{n}\}\) is bounded, and the two sequences \(\{u_{n}\}\) and \(\{y_{n} \}\) are also bounded. Because \(c=T_{r_{n}}^{f}c\) and \(T_{r_{n}}^{f}\) is a firmly nonexpansive mapping, we have
Thus, by using inequalities (21) and (23), we get that
This implies that
By setting \(s=P_{\Psi }u\), we next prove that \(\|x_{n+1}-x_{n}\|\to 0\) and \(x_{n}\to s\). By using inequality (18), we note that
This implies that
Next, we consider two possible cases to show \(\|x_{n}-s\|\to 0\) as \(n\to \infty \).
Case 1. Assume that the sequence \(\{\|x_{n}-s\|^{2} \}\) is eventually nonincreasing. Then \(\lim_{n\to \infty}\|x_{n}-s\|\) exists and
Since the sequences \(\{\alpha _{n}\}\) and \(\{\kappa _{n}\}\) tend to 0, inequality (27) yields
Since ∇ω and ∇ψ are Lipschitz continuous, \(\liminf \gamma _{n}>0\), and \(\inf \rho _{n}(4-\rho _{n})>0\), we obtain \(\lim_{n\to \infty}\omega ^{2}(x_{n})=0\), and hence \(\omega (x_{n})\to 0\) as \(n\to \infty \).
Next we prove \(\|x_{n}-s\|\to 0\) as \(n\to \infty \) by dividing the proof into three steps as follows.
Step 1. First, we must show that \(\lim_{n\to \infty}\|Sy_{n}-y_{n}\|=0\). By using the fact that \(T_{r_{n+1}}^{f}\) and \(T_{r_{n+1}}^{g}\) are firmly nonexpansive mappings, and the mapping \(T_{r_{n+1}}^{f}(I+\tau _{n}A^{*}(T_{r_{n+1}}^{g}-I)A)\) is a nonexpansive mapping, we find that
where
and \(B=\sup_{n\geq 1}(|\tau _{n}|\|A\|\sigma _{n}+\zeta _{n})\). Next, by using inequality (28), we get that
where \(K=\sup_{n\geq 1}\|v-u_{n}\|\). If we put \(x_{n+1}=\beta _{n}x_{n}+(1-\beta _{n})d_{n}\), then by using Algorithm 1, we get that \(d_{n}=(\alpha _{n} u+\gamma _{n} Sy_{n})/(1-\beta _{n})\). Further, we have that
Thus, by combining the above inequality with (29), we find that
This implies that
Thus, by using conditions (C2)–(C6), we have
Using Lemma 2.5, and from inequality (33), we find that \(\lim_{n\to \infty}\|d_{n}-x_{n}\|=0\). Moreover, we can conclude that
Now, by combining the above information with inequality (25), we have
Since \(\|y_{n}-u_{n}\| = \kappa _{n}\|u_{n}-v\|\) and \(\kappa _{n}\to 0\) as \(n\to \infty \), we find that
We next consider
Then it is easy to see that
Since \(\|x_{n+1}-x_{n}\|\to 0\) and \(\alpha _{n}\to 0\) as \(n\to \infty \), condition (C5) gives
Note that
Thus, by using inequalities (35), (36), and (39), we obtain
Step 2. We will prove that \(\lim \sup_{n\to \infty}\langle u-s, s-Sy_{n} \rangle \geq 0\) with \(s=P_{\Psi }u\). Then we choose \(\{y_{n_{l}}\} \subseteq \{y_{n}\}\) such that
Since \(\{y_{n_{l}}\}\) is a bounded sequence, we can find a subsequence \(\{y_{n_{l_{j}}}\} \subseteq \{y_{n_{l}}\}\) that converges weakly to y. Thus we can assume without loss of generality that \(y_{n_{l}}\rightharpoonup y\). Moreover, \(Sy_{n_{l}}\rightharpoonup y\) since \(\|Sy_{n}-y_{n}\|\to 0\) as \(n\to \infty \).
Next, we claim that \(y\in \mathrm{Fix}(S)\). Suppose by contradiction that \(y\notin \mathrm{Fix}(S)\). By using Lemma 2.6, we find that
which is a contradiction. Hence \(y\in \mathrm{Fix}(S)\).
Next, we prove that \(y\in \Omega _{\mathrm{SEP}s}\). Because y is a weak limit point of the sequence \(\{ y_{n}\}\), by using inequalities (39) and (41), there is a subsequence \(\{x_{n_{k}}\} \subseteq \{x_{n}\}\) such that \(x_{n_{k}}\rightharpoonup y\). Since ω is lower semicontinuous, we find that
This implies that \(\omega (y)=\frac{1}{2}\|(I-T_{r_{n}}^{g})Ay\|^{2}=0\). Therefore \(Ay\in \mathrm{EP}(g)\). We then consider
and
Since \(\omega (x_{n})\to 0\) as \(n\to \infty \), inequality (43) gives \(\|u_{n}-T_{r_{n}}^{f}x_{n}\|\to 0\) as \(n\to \infty \). Moreover, applying (35) and \(\|u_{n}-T_{r_{n}}^{f}x_{n}\|\to 0\) in (43), we obtain \(\|x_{n}-T_{r_{n}}^{f}x_{n}\|\to 0 \). Since ψ is lower semicontinuous, we find that
This implies that \(\frac{1}{2}\|(I-T_{r_{n}}^{f})y\|^{2}=0\). Therefore \(y\in \mathrm{EP}(f)\). Thus \(y\in \Omega _{\mathrm{SEP}s}\). We now conclude that \(y\in \Psi \).
Next, we use the facts that \(y\in \Psi \), \(s=P_{\Psi }u\), and \(\langle u-P_{\Psi }u,y-P_{\Psi }u \rangle \leq 0\), together with inequality (39). Thus
Step 3. We show that \(x_{n} \to s=P_{\Psi }u\). We observe that
We also observe that
Thus, by using inequality (45), conditions (C2)–(C3), the boundedness of \(\{y_{n}\}\), and Lemma 2.7, we find that \(x_{n}\to s=P_{\Psi }u\).
Case 2. Assume that \(\{\|x_{n}-s\|^{2}\}\) is an increasing sequence. From inequality (25), \(\kappa _{n}\to 0\), and \(\alpha _{n}\to 0\), it is easy to see that \(\omega (x_{n})\to 0\). By arguing as in Case 1, we get \(\|x_{n+1}-x_{n}\|\to 0\) as \(n\to \infty \).
Next, let a mapping σ be defined on integers \(n>n_{0}\) (where \(n_{0}\) is large enough) by
Then we see that \(\sigma (n)\to \infty \) as \(n\to \infty \) and
Thus \(\{\|x_{\sigma (n)}-s\|\}\) is a nondecreasing sequence. By virtue of Case 1, we find that
Consequently, we have \(\lim_{n\to \infty}\omega (x_{\sigma (n)})=0\). Moreover, we find that
and
From (47) we get that
This inequality yields
By using inequality (47), we have
Therefore \(\lim_{n\to \infty}\|x_{\sigma (n)+1}-s\|=0\). Next, by using Lemma 2.8, we have
which implies that the sequence \(x_{n}\to s=P_{\Psi }u\). □
Now, let us set the following conditions for some parameters used in Algorithm 2 and Algorithm 3.
Condition 3.2
Suppose that \(\{r_{n}\}\subseteq (0,\infty )\), a is a constant, \(\{\rho _{n} \}\) is any positive sequence, and \(\{\alpha _{n} \}\) and \(\{\kappa _{n} \}\) are real sequences with \(0< \alpha _{n}, \kappa _{n} <1\) satisfying:
-
(C1)
\(\sum_{n=0}^{\infty}\alpha _{n}=\infty , \lim_{n\to \infty}\alpha _{n}=0\);
-
(C2)
\(\lim_{n\to \infty}\kappa _{n}=0\) and \(\lim_{n\to \infty}\frac{\kappa _{n}}{\alpha _{n}}=0\);
-
(C3)
\(\lim_{n\to \infty}|r_{n+1}-r_{n}|=0 \textit{ and } 0< a\leq r_{n} \);
-
(C4)
\(\inf \rho _{n}(4-\rho _{n})>0\) and \(0<\rho _{n}<4\).
Theorem 3.2
The sequence \(\{x_{n}\}\) generated by Algorithm 2 converges strongly to a solution \(s\in \Psi \), where \(s=P_{\Psi }u\).
Proof
Since Ψ is a nonempty set, take any \(c\in \Psi \). By using inequalities (19) and (20) and arguing as for inequality (22), we obtain that the three sequences \(\{x_{n}\}\), \(\{u_{n} \}\), and \(\{y_{n}\}\) are bounded. Next, we note that
Since \(\kappa _{n}\to 0\) as \(n\to \infty \), we have
By putting \(s=P_{\Psi }u\) and using inequality (21), we find that
This implies that
Next, to show that \(\lim_{n\to \infty}\|x_{n}-s\|=0\), we divide it into two possible cases.
Case 1. Let a sequence \(\{\|x_{n}-s\|^{2} \}\) be nonincreasing. By virtue of Theorem 3.1 (Case 1), we obtain
By applying \(\lim_{n\to \infty}\alpha _{n}=0\) and \(\lim_{n\to \infty}\kappa _{n}=0\) into inequality (55), we have
Since \(\inf \rho _{n}(4-\rho _{n})>0\), and ∇ω and ∇ψ are Lipschitz continuous, then \(\omega (x_{n})\to 0\) as \(n\to \infty \). Observe that
where \(M:=\sup_{n\geq 1}\|u-Sy_{n-1}\|\). We observe that
where \(K:=\sup_{n\geq 1}\|v-u_{n-1} \|\). By virtue of inequality (28), we find that
where
By combining inequality (56) with inequalities (57) and (58), we find that
Thus, by using Lemma 2.7, we get that
Moreover, by virtue of formula (39), we conclude that
as \(n\to \infty \). Also,
as \(n\to \infty \). By following the proof of Theorem 3.1 (Case 1), we have \(\omega _{w}(x_{n})\subseteq \Psi \). Moreover, by the property of the metric projection \(P_{\Psi}\),
Next, by following inequality (54), we find that
By applying Lemma 2.7, inequality (63), and condition (C2) into inequality (64), we find that \(x_{n}\to s=P_{\Psi }u\).
Case 2. Assume that \(\{\|x_{n}-s\|^{2} \}\) is an increasing sequence. From inequality (55), \(\alpha _{n}\to 0\) and \(\kappa _{n}\to 0\), it is easy to see that \(\omega (x_{n})\to 0\) as \(n\to \infty \). By following the proof in Case 1, we find that
Suppose that the map σ is given by Theorem 3.1. By using formula (64), we complete the proof for this case in the same way as in the proof of Case 2 in Theorem 3.1. □
Theorem 3.3
The sequence \(\{x_{n}\}\) generated by Algorithm 3 converges strongly to a solution \(s\in \Psi \), where \(s=P_{\Psi }v\).
Proof
By using the fact that Ψ is a nonempty set, we then take each \(c\in \Psi \). Moreover, by using inequality (20), we find that
This inequality implies \(\{x_{n}\}\) is bounded. Thus, from inequalities (19) and (20), we then find that \(\{u_{n} \}\) and \(\{y_{n}\}\) are also bounded.
Setting \(s=P_{\Psi }v\), we now show \(\|x_{n+1}-x_{n}\|\to 0\) and \(x_{n}\to s\) as \(n\to \infty \). By inequality (21), we obtain
This inequality implies that
Next, to prove that \(\lim_{n\to \infty}\|x_{n}-s\|=0\), we divide it into two possible cases.
Case 1. Assume that a sequence \(\{\|x_{n}-s\|^{2} \}\) is nonincreasing. By virtue of Theorem 3.1 (Case 1), we have
Since \(\lim_{n\to \infty}\alpha _{n}=0\) and \(\lim_{n\to \infty}\kappa _{n}=0\), by following inequality (55), we find that
Since ∇ω and ∇ψ are Lipschitz continuous and \(\inf \rho _{n}(4-\rho _{n})>0\), we obtain that \(\omega (x_{n})\to 0\) as \(n\to \infty \). We then observe
where \(K=\sup_{n\geq 1}\{\|x_{n}\|+\|Sy_{n-1}\|\}\). By a similar argument to the proof of inequality (59), we combine the above inequality with inequalities (57) and (58). Then
Next, consider
Consequently,
By a similar argument to the proof of Theorem 3.1, we have \(y\in \Psi \) and
By applying the above inequality, Lemma 2.7, conditions (C1)–(C2) into inequality (70), we find that \(x_{n} \to s=P_{\Psi }v\).
Case 2. Assume that \(\{\|x_{n}-s\|^{2} \}\) is an increasing sequence. From inequality (67), \(\alpha _{n}\to 0\), and \(\kappa _{n}\to 0\), it is easy to see that \(\omega (x_{n})\to 0\) as \(n\to \infty \). By following the proof in Case 1, we find that
Suppose that the map σ is given by Theorem 3.1. By using formula (70), we complete the proof for this case in the same way as in the proof of Case 2 in Theorem 3.1. □
Remark 3.3
Because the conditions imposed on the parameters differ, the three proposed theorems are distinct results.
4 Numerical examples and applications
In this section, we implement our proposed algorithms to show their computational performance in three different numerical examples. The proposed Algorithms 1, 2, and 3 are coded in the MATLAB (R2021a) programming language. CPU time and the number of iterations are used to measure the computational performance of the iterative algorithms.
Example 4.1
In this example, we illustrate our main results. We consider the numerical behavior of the proposed algorithms by comparing them with different \(\rho _{n}\).
Let \(H_{1}=H_{2}=\mathbb{R}^{N}\) and let \(A:\mathbb{R}^{N}\to \mathbb{R}^{N}\) be defined by \(Ax:=\frac{1}{2}(x_{1},x_{2},\ldots ,x_{N})\) with its adjoint operator \(A^{*}x:=\frac{1}{2}(x_{1},x_{2},\ldots ,x_{N})\) for all \(x=(x_{1},x_{2},\ldots ,x_{N})\in \mathbb{R}^{N}\). We define the sets C and D. Suppose that \(f(p,q)=p^{2}-pq\) for all \(p,q\in C\). Then we derive the resolvent function \(T_{r_{n}}^{f}\) as follows: find \(s\in C\) such that \(f(s,q)+\frac{1}{r_{n}}\langle q-s, s-x\rangle \geq 0\) for all \(q\in C\) and \(x\in H_{1}\). We observe that
It follows from Lemma 2.4 that \(T_{r_{n}}^{f}\) is single-valued; therefore we get that \(s=\frac{x}{1-r_{n}}\). This implies that
Let \(g(v,w)=v^{2}+vw-2w^{2}\) for all \(v,w\in D\). Then we derive the resolvent function \(T_{r_{n}}^{g}\) as follows: find \(u\in D\) such that \(g(u,w)+\frac{1}{r_{n}}\langle w-u, u-y\rangle \geq 0\) for all \(w\in D\) and \(y\in H_{2}\). We observe that
Let \(M(w)=-2r_{n}w^{2}+(r_{n}u+u-y)w+r_{n}u^{2}-u^{2}+uy\). Then M is a quadratic function of w with coefficient \(a=-2r_{n}, b=r_{n}u+u-y, c=r_{n}u^{2}-u^{2}+uy\). We observe that the discriminant of \(M(w)\) can be computed as follows:
Thus \(b^{2}-4ac\geq 0\) for all \(y\in H_{2}\). It follows from Lemma 2.4 that \(T_{r_{n}}^{g}\) is single-valued; therefore we get that
It is easy to check that assumptions (A1)–(A4) are satisfied. We take \(S:C\to C\) defined by \(Sx=\frac {1}{2}(x_{1}, x_{2},\ldots , x_{N})\) for all \(x=(x_{1},\ldots ,x_{N})\in C\) and choose \(\alpha _{n}=\frac {1}{2n+2}\), \(\kappa _{n}=\frac {1}{(2n+2)^{2}}\), \(\beta _{n}=\frac{n}{2n+2}\), \(\gamma _{n}=\frac {1}{2}\), \(r_{n}=\frac {1}{2^{n}}\), \(u=0.1\), \(v=0.5\).
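The closed form \(s=\frac{x}{1-r_{n}}\) derived above can be sanity-checked in the scalar case: since \(s-x=r_{n}s\), the equilibrium inequality holds with equality for every q. A short check (the values of x and \(r_{n}\) below are arbitrary illustrative choices):

```python
import numpy as np

def T_f(x, r):
    """Resolvent of f(p, q) = p^2 - p*q derived in the text (valid for r < 1)."""
    return x / (1.0 - r)

r, x = 0.5, 0.7            # e.g. r_n = 1/2^n with n = 1
s = T_f(x, r)

# Verify f(s, q) + (1/r) * (q - s) * (s - x) >= 0 for many test points q.
rng = np.random.default_rng(3)
for qq in rng.uniform(-5.0, 5.0, size=50):
    val = (s * s - s * qq) + (1.0 / r) * (qq - s) * (s - x)
    assert val >= -1e-9    # the inequality holds (in fact val = 0 identically)
```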
We test our proposed algorithms for different values of the parameter \(\rho _{n}\) by using the stopping criterion \(E(n)=\frac {\|x_{n+1}-x_{n}\|}{\|x_{2}-x_{1}\|}<10^{-5}\), \(N=3\), and a randomly generated initial vector \(x_{1}\in \mathbb{R}^{N}\) with entries in \([-5,5]\) as follows:
-
Case I:
\(\rho _{n}=0.01\);
-
Case II:
\(\rho _{n}=2.00\);
-
Case III:
\(\rho _{n}=3.99\).
In addition, Fig. 1–Fig. 3 show the error \(E(n)\) and the behavior of \(\|x_{n+1}-x_{n}\|\), and Table 1 presents the CPU times and the numbers of iterations in the three cases for testing suitable values of the parameter \(\rho _{n}\).
Remark 4.2
From the results of Example 4.1, we observed that:
-
(a)
the third algorithm was more efficient than other proposed algorithms regarding CPU time;
-
(b)
in case II (\(\rho _{n}=2.00\)), the first algorithm was more efficient than other proposed algorithms regarding the number of iterations.
Example 4.3
In this example, we show the performance of our proposed algorithms by comparing them with algorithms (4)–(6) from the literature.
Let \(H_{1}=H_{2}=l_{2}\) and let \(A:l_{2}\to l_{2}\) be defined by \(Ax:=\frac{1}{4} (x_{1},x_{2},\ldots ,x_{i},\ldots , )\) with its adjoint operator \(A^{*}x:=\frac{1}{4} (x_{1},x_{2},\ldots ,x_{i},\ldots , )\) for all \(x=\{x_{i}\}_{i=1}^{\infty}\in l_{2}\). We define the sets \(C= D:=\{z\in l_{2} : \|z\|\leq 1\}\). Suppose that \(f(p,q)=-11p^{2}+pq+10q^{2}\) for all \(p,q\in C\). Then we derive the resolvent function \(T_{r_{n}}^{f}\) as follows: find \(s\in C\) such that \(f(s,q)+\frac{1}{r_{n}}\langle q-s, s-x\rangle \geq 0\) for all \(q\in C\) and \(x\in H_{1}\). We observe that
Let \(P(q)=10r_{n}q^{2}+(r_{n}s+s-x)q+(-11r_{n}s^{2}-s^{2}+sx )\). Then P is a quadratic function of q with coefficient \(a=10r_{n}, b=r_{n}s+s-x, c=-11r_{n}s^{2}-s^{2}+sx\). We observe that the discriminant of \(P(q)\) can be computed as follows:
Thus \(b^{2}-4ac\geq 0\) for all \(x\in H_{1}\). It follows from Lemma 2.4 that \(T_{r_{n}}^{f}\) is single-valued; therefore we get that
Let \(g(v,w)=-15v^{2} + vw + 14w^{2}\) for all \(v,w\in D\). Then we derive the resolvent function \(T_{r_{n}}^{g}\) as follows: find \(u\in D\) such that \(g(u,w)+\frac{1}{r_{n}}\langle w-u, u-y\rangle \geq 0\) for all \(w\in D\) and \(y\in H_{2}\). We observe that
Let \(M(w)=14r_{n}w^{2}+(r_{n}u+u-y)w+(-15r_{n}u^{2}-u^{2}+uy)\). Then M is a quadratic function of w with coefficient \(a=14r_{n}, b=r_{n}u+u-y, c=-15r_{n}u^{2}-u^{2}+uy\). We observe that the discriminant of \(M(w)\) can be computed as follows:
Thus \(b^{2}-4ac\geq 0\) for all \(y\in H_{2}\). It follows from Lemma 2.4 that \(T_{r_{n}}^{g}\) is single-valued; therefore we get that
It is easy to check that assumptions (A1)–(A4) are satisfied. We take \(S:C\to C\) by \(\frac{1}{2}(x_{1}, x_{2},\ldots , x_{i},\ldots )\) for all \(x=\{x_{i}\}_{i=1}^{\infty}\in l_{2}\) and choose \(\alpha _{n}=\frac{1}{2n+2}\), \(\kappa _{n}=\frac{1}{(2n+2)^{2}}\), \(\beta _{n}=\frac{n}{2n+2}\), \(\gamma _{n}=\frac{1}{2}\), \(r_{n}=\frac{1}{2^{n}}\), \(u=0.1, v=0.5\), \(\rho _{n}=2.00\). For algorithms (4)–(6), we take \(D\equiv I\), \(Sx=\{\frac{1}{2}(x_{1}, x_{2},\ldots , x_{i},\ldots )\}\) for all \(x=\{x_{i}\}_{i=1}^{\infty}\in l_{2}\), \(\gamma =0.01\), \(\alpha _{n}=\frac{1}{2n+2}\), and \(\beta _{n}=\frac{n}{2n+2}\).
We test all of the algorithms for different values of initial values \(x_{1}\) by using \(E(n)=\frac{\|x_{n+1}-x_{n}\|}{\|x_{2}-x_{1}\|}<10^{-5}\) as follows:
-
Case I:
\(x_{1} = (1,1,1,\ldots,1,0_{200},\ldots)\);
-
Case II:
\(x_{1} = (1.8, -2.56, 0.6, 0.6, \ldots ,0.6_{100},0,\ldots )\);
-
Case III:
\(x_{1}=(5.3,1,1.02,0,0,\ldots )\);
-
Case IV:
\(x_{1}=(4,-2,6,-1,1,1,1,\ldots ,1_{100},0,\ldots )\).
In addition, Fig. 4–Fig. 7 show the error \(E(n)\) and the behavior of \(\|x_{n+1}-x_{n}\|\), and Table 2 presents the CPU times and the numbers of iterations in the four cases, comparing the three proposed algorithms with algorithms (4)–(6).
Remark 4.4
By examining the outcomes of Example 4.3, we found that our proposed algorithms were more efficient than other algorithms in the literature.
4.1 Application to the optimization problem
Let \(P:C\to \mathbb{R}\) and \(Q:D\to \mathbb{R}\) be two functions. Given \(f(v,w)=P(w)-P(v)\) for all \(v,w \in C\) and \(g(y,z)=Q(z)-Q(y)\) for all \(y, z \in D\), the optimization problem is to find a point \(s\in C\) such that
and a point \(t=As\in D\) solves
The solution set of the optimization problem (75)–(76) is denoted by Φ, and we assume that \(\Phi \neq \emptyset \). It is easy to check that assumptions (A1)–(A4) are satisfied. Clearly, \(\Phi = \Psi \). Hence Theorems 3.1–3.3 reduce to strong convergence theorems for approximating a common solution of split minimization problems and fixed point problems of a nonexpansive mapping.
4.2 Application to the split feasibility problem
The well-known split feasibility problem (SFP) was introduced in 1994 by Censor and Elfving [20]. This problem was defined as follows: find a point
The SFP has attracted many researchers due to its applications in a large variety of problems, such as image reconstruction, signal processing, and intensity-modulated radiation therapy (IMRT) treatment planning. The reader can refer to [21–24] for details.
Denote the solution set of (77) by Φ and assume that \(\Phi \neq \emptyset \). Let \(f(p,q)=i_{C}(q)-i_{C}(p)\) for all \(p,q \in C\) and \(g(v,w)=i_{D}(w)-i_{D}(v)\) for all \(v, w \in D\), where \(i_{C}\) and \(i_{D}\) are the indicator functions of the subsets C and D, respectively. It is easy to see that all assumptions are satisfied. Clearly, \(\Phi = \Psi \). By putting \(T_{r_{n}}^{f}=P_{C}\) and \(T_{r_{n}}^{g}=P_{D}\), Theorems 3.1–3.3 reduce to strong convergence theorems for approximating a common solution of split feasibility problems and fixed point problems of a nonexpansive mapping. Moreover, with these settings, we present the following numerical implementation to solve the LASSO problem.
Example 4.5
In this example, we show the performance of our algorithms on the sparse signal recovery problem in compressed sensing. This problem can be stated as the following linear inverse problem:
where A is a bounded linear observation operator, y is the observed data contaminated by noise ϵ, and x is the signal to be recovered. The linear inverse problem (78) can be solved by transforming it into the following LASSO problem:
where \(q>0\) is a constant. The LASSO problem (79) can be viewed as the split feasibility problem (SFP) (77) with the sets \(C=\{x : \|x\|_{1}\leq q\}\) and \(D=\{b \}\). Thus we can apply our algorithms to solve (79).
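A minimal end-to-end sketch of this reduction follows. It is an illustration under assumptions (the problem sizes, sparsity, random seed, and \(\rho _{n}=2\) are chosen here, and the \(\ell _{1}\)-ball projection uses the standard sort-based method); it is not the MATLAB implementation used in the experiments. With \(D=\{b\}\), \(P_{D}\) is constantly b, so \(\omega (x)=\frac{1}{2}\|Ax-b\|^{2}\) and the iteration is a projected gradient step with the self-adaptive step size.

```python
import numpy as np

def proj_l1_ball(x, q):
    """Euclidean projection onto C = {x : ||x||_1 <= q} (sort-based method)."""
    if np.abs(x).sum() <= q:
        return x.copy()
    u = np.sort(np.abs(x))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, x.size + 1) > css - q)[0][-1]
    theta = (css[k] - q) / (k + 1.0)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

# Toy consistent instance: b = A x_true with x_true sparse and ||x_true||_1 = q.
rng = np.random.default_rng(4)
M, N, k_sp = 20, 50, 3
A = rng.normal(size=(M, N))
x_true = np.zeros(N)
x_true[rng.choice(N, k_sp, replace=False)] = rng.uniform(-1.0, 1.0, k_sp)
b = A @ x_true
q = np.abs(x_true).sum()                       # radius of the l1 ball C

x = np.zeros(N)
omega0 = 0.5 * np.sum((A @ x - b) ** 2)        # omega at the starting point
best = omega0
for _ in range(3000):
    res = A @ x - b                            # (I - P_D) A x
    omega = 0.5 * res @ res
    best = min(best, omega)
    grad = A.T @ res                           # gradient of omega
    tau = 2.0 * omega / (grad @ grad + 1e-12)  # self-adaptive step, rho_n = 2
    x = proj_l1_ball(x - tau * grad, q)

mse = np.mean((x - x_true) ** 2)
```

Because x_true solves the SFP, the iterates are Fejér monotone with respect to the solution set, so the MSE never exceeds that of the zero initial point and the residual ω decreases toward zero.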
In all algorithms, we take \(S\equiv I\) and choose \(\alpha _{n}=\frac{1}{2n+2}\), \(\kappa _{n}=\frac{1}{(2n+2)^{2}}\), \(\beta _{n}=\frac{n}{2n+2}\), \(\gamma _{n}=\frac{1}{2}\), \(r_{n}=\frac{1}{2^{n}}\), \(u=0.1\), \(v=0.5\), and \(\rho _{n}=2\).
In this example, the matrix A is generated from the normal distribution with \(\mu =0\) and \(\sigma =1\), the sparse vector is generated from the uniform distribution on the interval \([-1, 1]\) with k nonzero entries (\(0< k\ll N\)), and the observation y is contaminated by white Gaussian noise with \(\mathrm{SNR}=40\).
We test all of the algorithms for different dimension N and different sparsity k with \(q = k\), the initial point \(x_{1}=\mathrm{zeros}(N,1) \), and the restoration accuracy as the mean squared error \(\mathrm{MSE}=\frac{1}{N}\|x^{*}-x_{n}\|^{2}<10^{-4}\), where \(x^{*}\) is the original signal, as follows:
-
Case I:
\(N=400\), \(k=10\);
-
Case II:
\(N=400\), \(k=30\);
-
Case III:
\(N=1000\), \(k=50\);
-
Case IV:
\(N=1000\), \(k=100\).
The MSE values against the number of iterations are presented in Fig. 8–Fig. 12. From these numerical results, we can conclude that the proposed algorithms outperform the previous algorithms in [7–9], with greater efficiency and faster computation. Among the proposed algorithms, the implemented results show that Algorithm 3 solves the sparse sensor signal recovery problem with the fastest computation time, as shown in Fig. 12.
Remark 4.6
By observing the outcomes of Example 4.5, we obtain the following:
(a) As shown in Figs. 8–11, the signal x can be recovered by all of our proposed algorithms. However, among these methods, Algorithm 3 requires the smallest number of iterations and the shortest CPU time in all cases;
(b) In Fig. 12, we plot the mean squared error (MSE) value per iteration. It is evident that the errors obtained by Algorithm 3 decrease faster than those of the other proposed algorithms.
5 Conclusions
This work proposed three modified step size algorithms to solve the SEPs. These iterative algorithms approximate solutions for the SEPs with no prior knowledge of the operator norm of the bounded linear operator. Strong convergence theorems were established and proved under appropriate conditions. We gave applications to the split feasibility problem and an optimization problem. Numerical examples were presented to show the usefulness of our main results. Moreover, we compared the computational performance of the proposed algorithms with existing algorithms; the comparisons show that the proposed algorithms outperform the state-of-the-art algorithms.
Availability of data and materials
Not applicable.
References
He, Z.: The split equilibrium problem and its convergence algorithms. J. Inequal. Appl. 2012(1), 162 (2012)
Ceng, L.C., Yao, J.C.: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 214(1), 186–201 (2008)
Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6(1), 117–136 (2005)
Peng, J.W., Liou, Y.C., Yao, J.C.: An iterative algorithm combining viscosity method with parallel method for a generalized equilibrium problem and strict pseudocontractions. Fixed Point Theory Appl. 2009(1), 794178 (2009)
Tada, A., Takahashi, W.: Strong convergence theorem for an equilibrium problem and a nonexpansive mapping. J. Nonlinear Convex Anal., 609–617 (2007)
He, Z.H., Sun, J.T.: The problem of split convex feasibility and its alternating approximation algorithms. Acta Math. Sin. Engl. Ser. 31(12), 2353 (2015)
Kazmi, K., Rizvi, S.: Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem. J. Egypt. Math. Soc. 21(1), 44–51 (2013)
Suantai, S., Cholamjiak, P., Cho, Y.J., Cholamjiak, W.: On solving split equilibrium problems and fixed point problems of nonspreading multi-valued mappings in Hilbert spaces. Fixed Point Theory Appl. 2016(1), 1 (2016)
Onjai-Uea, N., Phuengrattana, W.: On solving split mixed equilibrium problems and fixed point problems of hybrid-type multivalued mappings in Hilbert spaces. J. Inequal. Appl. 2017(1), 1 (2017)
Deepho, J., Martínez-Moreno, J., Sitthithakerngkiet, K., Kumam, P.: Convergence analysis of hybrid projection with Cesàro mean method for the split equilibrium and general system of finite variational inequalities. J. Comput. Appl. Math. 318, 658–673 (2017)
Hieu, D.V.: Parallel extragradient-proximal methods for split equilibrium problems. Math. Model. Anal. 21(4), 478–501 (2016)
Moudafi, A.: Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 15(1–2), 91–100 (1999)
López, G., Martín Márquez, V., Wang, F., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28(8), 085004 (2012)
Yang, Q.: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 302(1), 166–179 (2005)
Cianciaruso, F., Marino, G., Muglia, L., Yao, Y.: A hybrid projection algorithm for finding solutions of mixed equilibrium problem and variational inequality problem. Fixed Point Theory Appl. 2010(1), 383740 (2009)
Suzuki, T.: Strong convergence of Krasnoselskii and Mann’s type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 305(1), 227–239 (2005)
Opial, Z.: Weak convergence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 531–537 (1967)
Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(1), 240–256 (2002)
Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16(7–8), 899–912 (2008)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Ansari, Q.H., Rehan, A.: Split feasibility and fixed point problems. In: Nonlinear Analysis, pp. 281–322. Springer, Berlin (2014)
Ansari, Q.H., Rehan, A.: Iterative methods for generalized split feasibility problems in Banach spaces. Carpath. J. Math. 33(1), 9–26 (2017)
Ceng, L.C., Ansari, Q.H., Yao, J.C.: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 64(4), 633–642 (2012)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51(10), 2353 (2006)
Acknowledgements
The authors would like to thank the Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok.
Funding
This research project was supported by the Thailand Science Research and Innovation Fund, the University of Phayao (Grant No. FF65-RIM041); National Science, Research and Innovation Fund (NSRF), and King Mongkut’s University of Technology North Bangkok with Contract no. KMUTNB-FF-66-05.
Author information
Contributions
Conceptualization, KS, NK; Formal analysis, KS, NK; Investigation, KS, AJ; Software, SM, NK; Validation, SM, KS; Writing the original draft, KS, NK, AJ. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Mekruksavanich, S., Kaewyong, N., Jitpattanakul, A. et al. Adapting step size algorithms for solving split equilibrium problems with applications to signal recovery. J Inequal Appl 2022, 125 (2022). https://doi.org/10.1186/s13660-022-02860-7