 Research
 Open access
An accelerated common fixed point algorithm for a countable family of G-nonexpansive mappings with applications to image recovery
Journal of Inequalities and Applications volume 2022, Article number: 68 (2022)
Abstract
In this paper, we define a new concept of left and right coordinate affine of a directed graph and then employ it to introduce a new accelerated common fixed point algorithm for a countable family of Gnonexpansive mappings in a real Hilbert space with a graph. We prove, under certain conditions, weak convergence theorems for the proposed algorithm. As applications, we also apply our results to solve convex minimization and image restoration problems. Moreover, we show that our algorithm provides better convergence behavior than other methods in the literature.
1 Introduction
Let C be a nonempty closed convex subset of a real Hilbert space H with norm \(\Vert \cdot \Vert \). A mapping T of C into itself is said to be

(i)
Lipschitzian if there exists \(\gamma \geq 0\) such that
$$ \Vert Tx-Ty \Vert \leq \gamma \Vert x-y \Vert $$ for all \(x,y\in C\), where γ is called the coefficient of T;

(ii)
nonexpansive if T is Lipschitzian with \(\gamma =1\).
The element \(x\in C\) is a fixed point of T if \(Tx=x\), and \(F(T):=\{x\in C : x=Tx\}\) denotes the set of all fixed points of T.
For the past seven decades, several iterative methods have been proposed for approximating fixed points of nonexpansive mappings; see, for instance, [1, 2].
One of the most famous and well-known iterative methods, the Picard iteration process, is defined by
$$ x_{n+1} = Tx_{n} $$
for \(n\geq 1\), and the initial point \(x_{1}\) is chosen arbitrarily. The Picard iteration process was subsequently improved and studied extensively by many mathematicians, as follows.
The Mann iteration process [3] is defined by
$$ x_{n+1} = (1-\rho _{n})x_{n} + \rho _{n}Tx_{n} $$
for \(n\geq 1\), the initial point \(x_{1}\) is chosen arbitrarily, and \(\{\rho _{n}\}\) is a sequence in \([0,1]\).
The Ishikawa iteration process [4] is defined by
$$ y_{n} = (1-\beta _{n})x_{n} + \beta _{n}Tx_{n}, \qquad x_{n+1} = (1-\rho _{n})x_{n} + \rho _{n}Ty_{n} $$
for \(n\geq 1\), the initial point \(x_{1}\) is chosen arbitrarily, and \(\{\beta _{n}\}\) and \(\{\rho _{n}\}\) are sequences in \([0,1]\).
The S-iteration process [5] is defined by
$$ y_{n} = (1-\beta _{n})x_{n} + \beta _{n}Tx_{n}, \qquad x_{n+1} = (1-\rho _{n})Tx_{n} + \rho _{n}Ty_{n} $$
for \(n \geq 1\), the initial point \(x_{1}\) is chosen arbitrarily, and \(\{\beta _{n}\}\) and \(\{\rho _{n}\}\) are sequences in \([0,1]\).
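For concreteness, the four iteration schemes above can be sketched in a few lines of Python. The contraction \(T(x)=x/2+1\) (with fixed point \(x^{*}=2\)) and the constant choices \(\beta _{n}=\rho _{n}=0.5\) are our own illustrative assumptions, not taken from the sources cited here.

```python
# Illustrative sketch: Picard, Mann, Ishikawa, and S-iteration applied to
# the contraction T(x) = x/2 + 1 on R, whose unique fixed point is x* = 2.

def picard(T, x, n_iter=60):
    # x_{n+1} = T x_n
    for _ in range(n_iter):
        x = T(x)
    return x

def mann(T, x, n_iter=60, rho=0.5):
    # x_{n+1} = (1 - rho_n) x_n + rho_n T x_n
    for _ in range(n_iter):
        x = (1 - rho) * x + rho * T(x)
    return x

def ishikawa(T, x, n_iter=60, beta=0.5, rho=0.5):
    # y_n = (1 - beta_n) x_n + beta_n T x_n
    # x_{n+1} = (1 - rho_n) x_n + rho_n T y_n
    for _ in range(n_iter):
        y = (1 - beta) * x + beta * T(x)
        x = (1 - rho) * x + rho * T(y)
    return x

def s_iteration(T, x, n_iter=60, beta=0.5, rho=0.5):
    # y_n = (1 - beta_n) x_n + beta_n T x_n
    # x_{n+1} = (1 - rho_n) T x_n + rho_n T y_n
    for _ in range(n_iter):
        y = (1 - beta) * x + beta * T(x)
        x = (1 - rho) * T(x) + rho * T(y)
    return x

T = lambda x: 0.5 * x + 1.0
approx = [f(T, 10.0) for f in (picard, mann, ishikawa, s_iteration)]
```

All four runs converge to the same fixed point; only their speed differs, which is the point of comparing such schemes.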
In 2007, Agarwal, O’Regan, and Sahu [5] proved that the S-iteration process is independent of the Mann and Ishikawa iteration processes and converges faster. In 2012, Aleomraninejad et al. [6] combined fixed point theory and graph theory to prove a convergence theorem for G-nonexpansive mappings in a Banach space. In 2015, Tiammee et al. [7] proved the Browder convergence theorem and a strong convergence theorem of the Halpern iterative scheme for G-nonexpansive mappings in a Hilbert space endowed with a graph. Later, Tripak [8], using the Ishikawa iteration, proved weak and strong convergence theorems for finding a common fixed point of two G-nonexpansive mappings in a Banach space. In 2019, Sridarat et al. [9], using the SP-iteration, proved weak and strong convergence theorems for finding a common fixed point of three G-nonexpansive mappings in a uniformly convex Banach space with a graph. In 2020, Yambangwai et al. [10], using a modified three-step iteration method, proved weak and strong convergence theorems for three G-nonexpansive mappings in a uniformly convex Banach space with a graph. They also applied their results to find solutions of constrained minimization problems and split feasibility problems. Recently, Suantai et al. [11] modified the shrinking projection method with the parallel monotone hybrid method for approximating common fixed points of a finite family of G-nonexpansive mappings. They proved a strong convergence theorem under suitable conditions in Hilbert spaces endowed with graphs and applied it to signal recovery.
The main objectives of this paper are to introduce an iterative method for finding a common fixed point of a countable family of G-nonexpansive mappings, to analyze the convergence behavior of the proposed algorithm in comparison with other methods, and to give some applications to the image restoration problem.
2 Preliminaries
In what follows, X is a real normed space. Let C be a nonempty subset of X. Let \(G=(V(G),E(G))\) be a directed graph with \(V(G)=C\) and \(E(G)\supseteq \triangle \), where \(\triangle =\{(u,u):u\in C\}\). Assume that G has no parallel edges. We denote by \(G^{-1}\) the graph obtained from G by reversing the direction of its edges. Then
$$ E\bigl(G^{-1}\bigr)=\bigl\{ (u,v)\in C\times C : (v,u)\in E(G)\bigr\} . $$
Recall that a graph G is said to be connected if there is a path between any two vertices of the graph G. For more detail on some basic notions of the graphs, we refer the readers to [12].
A mapping \(T:C\to C\) is said to be

(i)
G-contraction [13] if

(a)
T is edge-preserving, i.e., \((Tu,Tv)\in E(G)\) for all \((u,v)\in E(G)\), and

(b)
there exists \(\rho \in [0,1)\) such that \(\Vert Tu-Tv \Vert \leq \rho \Vert u-v \Vert \) for all \((u,v)\in E(G)\), where ρ is called a contraction factor;


(ii)
G-nonexpansive [7] if

(a)
T is edge-preserving, and

(b)
\(\Vert Tu-Tv \Vert \leq \Vert u-v \Vert \) for all \((u,v)\in E(G)\).

If \(\{u_{n}\}\) is a sequence in X, then \(u_{n}\rightharpoonup u\) denotes weak convergence of the sequence \(\{u_{n}\}\) to u. For \(v\in C\), if there is a subsequence \(\{u_{n_{k}}\}\) of \(\{u_{n}\}\) such that \(u_{n_{k}}\rightharpoonup v\), then v is called a weak cluster point of \(\{u_{n}\}\). By \(\omega _{w}(u_{n})\) we denote the set of all weak cluster points of \(\{u_{n}\}\).
Let \(\{T_{n}\}\) and ψ be families of nonexpansive mappings of C into itself such that \(\varnothing \neq F(\psi )\subset \Lambda :=\bigcap _{n=1}^{\infty}F(T_{n})\), where \(F(\psi )\) is the set of all common fixed points of all \(T\in \psi \). A sequence \(\{T_{n}\}\) satisfies the NST-condition (I) with ψ [14] if for any bounded sequence \(\{u_{n}\}\) in C,
$$ \lim_{n\to \infty} \Vert u_{n}-T_{n}u_{n} \Vert =0 \quad \text{implies}\quad \lim_{n\to \infty} \Vert u_{n}-Tu_{n} \Vert =0 $$
for all \(T\in \psi \). If \(\psi =\{T\}\), then \(\{T_{n}\}\) satisfies the NST-condition (I) with T. In 2009, Nakajo et al. [15] gave the definition of the \(NST^{*}\)-condition: a sequence \(\{T_{n}\}\) satisfies the \(NST^{*}\)-condition if
$$ \lim_{n\to \infty} \Vert u_{n}-T_{n}u_{n} \Vert =0 \quad \text{and}\quad \lim_{n\to \infty} \Vert u_{n}-u_{n+1} \Vert =0 \quad \text{imply}\quad \omega _{w}(u_{n})\subset \Lambda $$
for every bounded sequence \(\{u_{n}\}\) in C.
We recall the definition of the forward–backward operator of lower semicontinuous convex functions \(f, g:\mathbb{R}^{n} \to (-\infty ,+\infty ]\) as follows: a forward–backward operator T is defined by \(T:=\operatorname{prox}_{\lambda g}(I - \lambda \nabla f)\) for \(\lambda >0\), where ∇f is the gradient operator of the function f and \(\operatorname{prox}_{\lambda g}x := \arg \min_{y\in H} \{g(y) + \frac{1}{2\lambda} \Vert y-x \Vert ^{2} \}\) (see [16, 17]). The operator \(\operatorname{prox}_{\lambda g}\) was defined by Moreau [18], who called it the proximity operator with respect to λ and the function g. We know that T is a nonexpansive mapping whenever \(\lambda \in (0,2/L)\), where L is a Lipschitz constant of ∇f.
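As a minimal numerical sketch of the forward–backward operator (a toy instance of our own, not taken from the references): take \(f(x)=(x-3)^{2}/2\), so that ∇f is 1-Lipschitz (\(L=1\)), and let g be the indicator function of \([0,\infty )\), whose proximity operator is the projection onto \([0,\infty )\). Iterating T with \(\lambda \in (0,2/L)\) then converges to the minimizer of \(f+g\).

```python
def grad_f(x):
    return x - 3.0                    # gradient of f(x) = (x - 3)^2 / 2

def prox_g(x, lam):
    # prox of the indicator of [0, inf) is the projection, for any lam > 0
    return max(0.0, x)

def forward_backward(x, lam):
    # T = prox_{lam g}(I - lam * grad f): forward gradient step, then prox step
    return prox_g(x - lam * grad_f(x), lam)

x = -5.0
for _ in range(200):                  # lam = 0.5 lies in (0, 2/L) = (0, 2)
    x = forward_backward(x, lam=0.5)
```

The iterates approach the constrained minimizer \(x^{*}=3\) of \(f+g\), which is also the fixed point of T, illustrating the equivalence used throughout the paper.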
Remark 2.1
([19])
Let \(g:\mathbb{R}^{n} \to \mathbb{R}\) be given by \(g(x)=\lambda \Vert x \Vert _{1}\). The proximity operator of g is given componentwise by the soft-thresholding formula
$$ \bigl(\operatorname{prox}_{g}(x)\bigr)_{i} = \operatorname{sign}(x_{i})\max \bigl\{ \vert x_{i} \vert -\lambda , 0 \bigr\} ,\quad i=1,\dots ,n, $$
where \(x= (x_{1},x_{2}, \dots ,x_{n})\) and \(\Vert x \Vert _{1} = \sum_{i=1}^{n} \vert x_{i} \vert \).
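A short sanity check of the soft-thresholding formula (the numbers are our own): the ith component of the proximity operator minimizes \(\lambda \vert y \vert + \frac{1}{2}(y-x_{i})^{2}\), so a brute-force grid search over y should land on the same value.

```python
import math

def soft_threshold(x, lam):
    # componentwise prox of g(x) = lam * ||x||_1:
    # sign(x_i) * max(|x_i| - lam, 0)
    return [math.copysign(max(abs(xi) - lam, 0.0), xi) for xi in x]

def prox_objective(y, x, lam):
    # the function minimized by the prox in one coordinate
    return lam * abs(y) + 0.5 * (y - x) ** 2

x0, lam = 1.7, 0.4
best = min((k / 1000.0 for k in range(-3000, 3001)),
           key=lambda y: prox_objective(y, x0, lam))
```

Both the closed form and the grid search return \(1.7-0.4=1.3\), and inputs with \(\vert x_{i}\vert \leq \lambda \) are thresholded to zero.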
Lemma 2.2
([20])
Let g be a lower semicontinuous and proper convex function from a Hilbert space H into \(\mathbb{R}\cup \{\infty \}\), and let f be a convex differentiable function from H into \(\mathbb{R}\) with L-Lipschitz gradient ∇f for some \(L > 0\). Let T be the forward–backward operator of g and f. A sequence \(\{T_{n}\}\) satisfies the NST-condition (I) with T if each \(T_{n}\) is the forward–backward operator of g and f with respect to \(a_{n}\) such that \(a_{n} \to a\) with \(a,a_{n} \in (0,2/L)\).
Lemma 2.3
([21])
For a real Hilbert space H, we have:

(i)
For all \(u,v \in H\) and \(\gamma \in [0,1]\),
$$ \bigl\Vert \gamma u+(1-\gamma )v \bigr\Vert ^{2}=\gamma \Vert u \Vert ^{2}+(1-\gamma ) \Vert v \Vert ^{2}-\gamma (1-\gamma ) \Vert u-v \Vert ^{2};$$ 
(ii)
For any \(u,v \in H\),
$$ \Vert u\pm v \Vert ^{2}= \Vert u \Vert ^{2}\pm 2 \langle u,v \rangle + \Vert v \Vert ^{2}.$$
Lemma 2.4
([22])
Let \(\{u_{n}\}\), \(\{v_{n}\}\), and \(\{\vartheta _{n}\}\) be sequences of nonnegative real numbers such that
$$ u_{n+1}\leq (1+\vartheta _{n})u_{n}+v_{n} $$
for \(n\in \mathbb{N}\). If \(\sum_{n=1}^{\infty}\vartheta _{n}<\infty \) and \(\sum_{n=1}^{\infty}v_{n}<\infty \), then \(\lim_{n\to \infty}u_{n}\) exists.
Lemma 2.5
([23])
Let H be a real Hilbert space, and let \(\{u_{n}\}\) be a sequence in H such that there exists a nonempty set \(\Lambda \subset H\) satisfying the following conditions:

(i)
For any \(p\in \Lambda \), \(\lim_{n\to \infty} \Vert u_{n}-p \Vert \) exists;

(ii)
Every weak cluster point of \(\{u_{n}\}\) belongs to Λ.
Then there exists \(q^{*}\in \Lambda \) such that \(u_{n}\rightharpoonup q^{*}\).
Lemma 2.6
([24])
Let \(\{u_{n}\}\) and \(\{\mu _{n}\}\) be sequences of nonnegative real numbers such that
$$ u_{n+1}\leq (1+\mu _{n})u_{n}+\mu _{n}u_{n-1} $$
for \(n\in \mathbb{N}\). Then
$$ u_{n+1}\leq M\cdot \prod_{j=1}^{n}(1+2\mu _{j}), $$
where \(M=\max \{u_{1},u_{2}\}\). Moreover, if \(\sum_{n=1}^{\infty}\mu _{n}<\infty \), then \(\{u_{n}\}\) is bounded.
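A numerical illustration of Lemma 2.6 (our own example; we assume the recurrence \(u_{n+1}\leq (1+\mu _{n})u_{n}+\mu _{n}u_{n-1}\) as in [24]): running the worst case of the recurrence with the summable sequence \(\mu _{n}=1/n^{2}\) stays below the product bound \(M\prod_{j=1}^{n}(1+2\mu _{j})\) and remains bounded.

```python
def check_bound(u1, u2, mu, n_max):
    # Worst case u_{n+1} = (1 + mu_n) u_n + mu_n u_{n-1}, compared against
    # the claimed bound M * prod_{j=1}^{n} (1 + 2 mu_j), M = max(u_1, u_2).
    M = max(u1, u2)
    u_prev, u_cur = u1, u2
    prod = 1.0 + 2.0 * mu(1)
    ok = u_cur <= M * prod
    for n in range(2, n_max + 1):
        u_prev, u_cur = u_cur, (1.0 + mu(n)) * u_cur + mu(n) * u_prev
        prod *= 1.0 + 2.0 * mu(n)
        ok = ok and u_cur <= M * prod + 1e-9   # small tolerance for rounding
    return ok, u_cur

# mu_n = 1/n^2 is summable, so by the lemma the sequence must stay bounded.
holds, u_last = check_bound(2.0, 1.0, lambda n: 1.0 / (n * n), 200)
```

The bound holds at every step, and the sequence stays bounded, matching the "Moreover" part of the lemma.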
3 Main results
In this section, by using the inertial technique we prove a weak convergence theorem for a new accelerated algorithm for a countable family of G-nonexpansive mappings in a real Hilbert space with a directed graph.
Let C be a nonempty closed and convex subset of a real Hilbert space H with a directed graph \(G = (V(G),E(G))\) such that \(V(G)=C\). Let \(\{T_{n}\}\) be a family of G-nonexpansive mappings of C into itself such that \(\emptyset \neq \Lambda :=\bigcap _{n=1}^{\infty}F(T_{n})\).
Algorithm 3.1 Choose initial points \(x_{0},x_{1}\in C\) and, for \(n\geq 1\), compute
$$ y_{n} = x_{n}+\mu _{n}(x_{n}-x_{n-1}),\qquad x_{n+1} = T_{n}y_{n}, $$
where \(\{\mu _{n}\}\) is a sequence of nonnegative real numbers with \(\sum_{n=1}^{\infty}\mu _{n}<\infty \). The sequence \(\{\mu _{n}\}\) is called an inertial step size. Before giving a weak convergence theorem for Algorithm 3.1 for a family of G-nonexpansive mappings, we need to introduce a concept of coordinate affine of the graph \(G = (V(G),E(G))\).
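The inertial step can be sketched numerically as follows (a toy sketch under our own assumptions: an update of the form \(y_{n}=x_{n}+\mu _{n}(x_{n}-x_{n-1})\), \(x_{n+1}=T_{n}y_{n}\), with every \(T_{n}\) equal to one fixed contraction T and the summable inertial sequence \(\mu _{n}=1/n^{2}\)):

```python
def inertial_iterate(T, x0, x1, mu, n_iter=80):
    # y_n = x_n + mu_n (x_n - x_{n-1})  (inertial / extrapolation step)
    # x_{n+1} = T y_n                   (fixed-point step)
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        y = x + mu(n) * (x - x_prev)
        x_prev, x = x, T(y)
    return x

T = lambda x: 0.5 * x + 1.0          # contraction with fixed point x* = 2
mu = lambda n: 1.0 / n ** 2          # summable: sum_n mu_n < infinity
x_star = inertial_iterate(T, x0=10.0, x1=9.0, mu=mu)
```

Because the inertial terms are summable, the extrapolation does not destroy convergence, and the iterates still approach the fixed point.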
Definition 3.1
Assume that \(\Lambda :=\bigcap _{n=1}^{\infty}F(T_{n})\neq \emptyset \) and \(\Lambda \times \Lambda \subseteq E(G)\). Then \(E(G)\) is said to be

(i)
left coordinate affine if \(\alpha (x,y)+\beta (u,y)\in E(G)\) for all \((x,y)\), \((u,y) \in E(G)\) and all α, \(\beta \in \mathbb{R}\) such that \(\alpha +\beta = 1\).

(ii)
right coordinate affine if \(\alpha (x,y)+\beta (x,z)\in E(G)\) for all \((x,y)\), \((x,z) \in E(G)\) and all α, \(\beta \in \mathbb{R}\) such that \(\alpha +\beta = 1\).
We say that \(E(G)\) is coordinate affine if \(E(G)\) is both left and right coordinate affine.
We start with some properties of the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) generated by Algorithm 3.1 related to \(E(G)\).
Example 3.2
Let \(X=\mathbb{R}^{2}\) and \(C=\mathbb{R}\times \{1\}\). Let \(G = (V(G), E(G))\) be a directed graph defined by \(V(G) = C\) and \((x, y) \in E(G)\) if \(x,y \in \mathbb{R} \times \{1\}\). We will show that \(E(G)\) is left coordinate affine. To see this, let \((x,y), (z,y) \in E(G)\) be such that \(x=(x_{1},1)\), \(y= (y_{1},1)\), and \(z=(z_{1},1)\). For all α, \(\beta \in \mathbb{R}\) with \(\alpha +\beta = 1\),
$$ \alpha x+\beta z = \alpha (x_{1},1)+\beta (z_{1},1) = (\alpha x_{1}+\beta z_{1}, \alpha +\beta ) = (\alpha x_{1}+\beta z_{1},1)\in C. $$
Then \(\alpha (x,y)+\beta (z,y) = (\alpha x+\beta z, y) \in E(G)\), and hence \(E(G)\) is left coordinate affine.
Proposition 3.3
Let \(\breve{q}\in \Lambda \) and \(x_{0},x_{1} \in C\) be such that \((x_{0},\breve{q})\), \((x_{1},\breve{q}) \in E(G)\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1. Suppose \(E(G)\) is left coordinate affine. Then \((x_{n},\breve{q})\) and \((y_{n},\breve{q}) \in E(G)\) for all \(n \in \mathbb{N}\).
Proof
We prove the results by mathematical induction. From Algorithm 3.1 we obtain
$$ y_{1} = x_{1}+\mu _{1}(x_{1}-x_{0}) = (1+\mu _{1})x_{1}-\mu _{1}x_{0}. $$
Since \((x_{0},\breve{q})\), \((x_{1},\breve{q}) \in E(G)\) and \(E(G)\) is left coordinate affine, we get \((y_{1},\breve{q}) \in E(G)\). Next, suppose that \((x_{k},\breve{q})\), \((y_{k},\breve{q}) \in E(G)\). Notice that \(x_{k+1}=T_{k}y_{k}\). Since \((y_{k},\breve{q})\in E(G)\) and \(T_{k}\) is edge-preserving, we obtain \((x_{k+1},\breve{q}) \in E(G)\). Then
$$ y_{k+1} = x_{k+1}+\mu _{k+1}(x_{k+1}-x_{k}) = (1+\mu _{k+1})x_{k+1}-\mu _{k+1}x_{k}. $$
Since \((x_{k+1},\breve{q})\), \((x_{k},\breve{q}) \in E(G)\) and \(E(G)\) is left coordinate affine, we have that \((y_{k+1},\breve{q}) \in E(G)\). By mathematical induction we obtain \((x_{n},\breve{q})\), \((y_{n},\breve{q})\in E(G)\) for all \(n \in \mathbb{N}\). □
Theorem 3.4
Let C be a nonempty closed and convex subset of a real Hilbert space H with a directed graph \(G=(V(G),E(G))\) with \(V(G)=C\) and left coordinate affine \(E(G)\). Let \(x_{0},x_{1} \in C\), and let \(\{x_{n}\}\) be the sequence in H defined by Algorithm 3.1. Suppose that \(\{T_{n}\}\) satisfies the \(\textit{NST}^{*}\)-condition, \(\Lambda \neq \emptyset \), and \((x_{0}, \breve{q})\), \((x_{1},\breve{q}) \in E(G)\) for all \(\breve{q}\in \Lambda \). Then \(\{x_{n}\}\) converges weakly to a common fixed point in Λ.
Proof
Let \(\breve{q}\in \Lambda \). By Algorithm 3.1 we obtain
and
Then we have
Applying Lemma 2.6, we get \(\Vert x_{n+1}-\breve{q} \Vert \leq M \cdot \prod^{n}_{j=1}(1+2\mu _{j})\), where \(M = \max \{ \Vert x_{1}-\breve{q} \Vert , \Vert x_{2}-\breve{q} \Vert \}\). Since \(\sum_{n=1}^{\infty}\mu _{n} < \infty \), we obtain that \(\{x_{n}\}\) is bounded. Thus
By Lemma 2.4 and (3.3) we get that \(\lim_{n\to \infty} \Vert x_{n}-\breve{q} \Vert \) exists. By Algorithm 3.1 and Lemma 2.3(i) we obtain
From (3.5) and (3.6) we obtain
Since
it follows that
we obtain
Next, we will show that \(\Vert y_{n}-y_{n+1} \Vert \to 0\). By Algorithm 3.1 we obtain
Since \(\{T_{n}\}\) satisfies the \(\text{NST}^{*}\)-condition, by (3.7) and (3.11) we obtain
Finally, we will show that \(\omega _{w}(x_{n})\subset \Lambda \). To see this, let \(x\in \omega _{w}(x_{n})\). By the definition of \(\omega _{w}(x_{n})\) there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}} \rightharpoonup x\). From (3.8) we obtain that \(y_{n_{k}} \rightharpoonup x\). Then \(x\in \omega _{w}(y_{n})\). It follows that \(\omega _{w}(x_{n})\subset \omega _{w}(y_{n})\subset \Lambda \). Thus \(\omega _{w}(x_{n}) \subset \Lambda \). By Lemma 2.5 we get \(x_{n} \rightharpoonup \breve{q}\) in Λ. The proof is now complete. □
4 Application on convex minimization problems
In this section, we are interested in applying our proposed method to solving a convex minimization problem for the sum of two convex lower semicontinuous functions \(f, g:\mathbb{R}^{n} \to (-\infty ,+\infty ]\). So we consider the following convex minimization problem: \(\min_{x\in \mathbb{R}^{n}} ( f(x)+g(x) )\) (4.1). Combettes and Wajs [17] proved that q̆ is a minimizer of (4.1) if and only if \(\breve{q} = T\breve{q}\), where \(T= \operatorname{prox}_{\rho g}(I-\rho \nabla f )\); see [17, Prop. 3.1(iii)]. It is also known that T is nonexpansive if \(\rho \in (0,2 / L)\), where L is a Lipschitz constant of ∇f. Over the past two decades, several algorithms have been introduced for solving problem (4.1). A simple and classical algorithm is the forward–backward algorithm (FBA) introduced by Lions and Mercier [25].
The forward–backward algorithm (FBA) is defined by
$$ x_{n+1} = x_{n}+\rho _{n} \bigl(\operatorname{prox}_{\gamma g} \bigl(x_{n}-\gamma \nabla f(x_{n}) \bigr)-x_{n} \bigr), $$
where \(n\geq 1\), \(x_{0} \in H\), L is a Lipschitz constant of ∇f, \(\gamma \in (0, 2/L)\), \(\delta = 2 - (\gamma L/2)\), and \(\{\rho _{n}\}\) is a sequence in \([0,\delta ]\) such that \(\sum_{n\in \mathbb{N}} \rho _{n}(\delta - \rho _{n}) = +\infty \). The technique for improving the speed and convergence behavior of such algorithms was first introduced by Polyak [26] by adding an inertial step. Since then, many authors have employed the inertial technique to accelerate their algorithms for various problems [20, 24, 27–31]. The following iterative method with an inertial step can be used to improve the performance of the FBA.
The fast iterative shrinkage-thresholding algorithm (FISTA) [30] is defined by
$$ t_{n+1} = \frac{1+\sqrt{1+4t_{n}^{2}}}{2},\qquad \mu _{n} = \frac{t_{n}-1}{t_{n+1}},\qquad y_{n} = x_{n}+\mu _{n}(x_{n}-x_{n-1}),\qquad x_{n+1} = Ty_{n}, $$
where \(n\geq 1\), \(x_{1} = y_{0} \in \mathbb{R}^{n}\), \(t_{1} = 1\), \(T := \operatorname{prox}_{\frac{1}{L}g}(I - \frac{1}{L}\nabla f) \), and \(\mu _{n}\) is a so-called inertial step size. The FISTA was suggested by Beck and Teboulle [30], who proved its convergence rate and applied it to the image restoration problem [30]. The inertial step size \(\mu _{n}\) of the FISTA was first introduced by Nesterov [32].
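The FISTA step can be sketched on a one-dimensional lasso instance \(\min_{x} \frac{1}{2}(ax-c)^{2}+\lambda \vert x \vert \) (the instance and all numbers below are our own illustrative assumptions):

```python
import math

def fista_1d(a, c, lam, n_iter=200):
    # FISTA for min 0.5*(a*x - c)^2 + lam*|x| in one dimension.
    L = a * a                                  # Lipschitz constant of grad f

    def T(y):
        # forward-backward operator: prox_{g/L}(y - grad f(y)/L),
        # where the prox of lam*|.|/L is soft thresholding at lam/L
        z = y - a * (a * y - c) / L
        return math.copysign(max(abs(z) - lam / L, 0.0), z)

    x_prev = y = 0.0                           # x_1 = y_0 = 0, t_1 = 1
    t = 1.0
    for _ in range(n_iter):
        x = T(y)                               # proximal gradient step
        t_next = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # inertial extrapolation
        x_prev, t = x, t_next
    return x_prev
```

For \(a=2\), \(c=3\), \(\lambda =0.5\), the minimizer solves \(4x-5.5=0\), i.e., \(x^{*}=1.375\), and the iterates reach it.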
A new accelerated proximal gradient algorithm (nAGA) [31] is defined by
where \(n\geq 1\), \(T_{n}\) is the forward–backward operator of f and g with respect to \(a_{n} \in (0,2/L)\), \(\{\mu _{n}\}\) and \(\{\rho _{n}\}\) are sequences in \((0,1)\), and \(\frac{\Vert x_{n}- x_{n-1} \Vert _{2}}{\mu _{n}} \to 0 \). The nAGA was introduced by Verma and Shukla [31], who proved a convergence theorem for it and applied the method to solving nonsmooth convex minimization problems with sparsity-inducing regularizers in the multi-task learning framework.
Theorem 4.1
Let \(f,g:\mathbb{R}^{n} \rightarrow (-\infty ,\infty ]\) be such that g is a convex function and f is a smooth convex function whose gradient is Lipschitz continuous with constant L. Let \(a_{n} \in (0, 2/L)\) be such that \(\{a_{n}\}\) converges to a, let \(T:=\operatorname{prox}_{ag}(I - a\nabla f)\) and \(T_{n} := \operatorname{prox}_{a_{n} g} (I - a_{n}\nabla f)\), and let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1. Then:

(i)
\(\Vert x_{n+1} - \breve{q} \Vert \leq K\cdot \prod^{n}_{j=1}(1 + 2\mu _{j})\), where \(K = \max \{ \Vert x_{1}-\breve{q} \Vert , \Vert x_{2}-\breve{q} \Vert \}\) and \(\breve{q} \in \operatorname{Argmin}(f+g)\);

(ii)
\(\{x_{n}\}\) converges weakly to a point in \(\operatorname{Argmin} (f+g)\).
Proof
It is known that T and all \(T_{n}\) are nonexpansive operators and that \(F(T) = \bigcap _{n=1}^{\infty}F(T_{n}) = \operatorname{Argmin}(f + g)\); see [16, Prop. 26.1]. By Lemma 2.2 we obtain that \(\{T_{n}\}\) satisfies the \(\text{NST}^{*}\)-condition. The required result then follows directly from Theorem 3.4 by taking \(E(G)=\mathbb{R}^{n} \times \mathbb{R}^{n}\), that is, G is the complete graph on \(\mathbb{R}^{n}\). □
5 Application on the image restoration problem
We can describe the image restoration problem by the simple linear model
$$ c = Ax+u, $$
where \(A \in \mathbb{R}^{m\times n}\) is the blurring operator, \(x \in \mathbb{R}^{n\times 1}\) is the original image, \(c \in \mathbb{R}^{m\times 1}\) is the observed image, and u is additive noise. The image restoration problem is to find the original image \(x^{*} \in \mathbb{R}^{n\times 1}\) satisfying (5.1). To solve problem (5.1), we minimize the additive noise, approximating the original image by the method known as the least squares (LS) problem
$$ \min_{x} \Vert c-Ax \Vert _{2}^{2}, $$
where \(\Vert \cdot \Vert _{2}\) is the \(l_{2}\)-norm. The solution of (5.2) can be estimated by many iterative methods, such as the Richardson iteration; see [33] for details. However, the number of unknown variables is much greater than the number of observations, which makes (5.2) an ill-posed problem: the computed solution typically has a huge norm and is thus meaningless; see [34] and [35]. Therefore, to improve the ill-conditioned least squares problem, several regularization methods were introduced. One of the most popular is the Tikhonov regularization [36], which solves the following minimization problem:
$$ \min_{x} \bigl\{ \Vert c-Ax \Vert _{2}^{2}+\lambda \Vert Lx \Vert _{2}^{2} \bigr\} , $$
where \(\lambda > 0\) is called a regularization parameter, and \(L \in \mathbb{R}^{m\times n}\) is called the Tikhonov matrix. In the standard form, L is set to be the identity. In statistics, (5.3) is known as a ridge regression.
A new method for estimating a solution of (5.1), called the least absolute shrinkage and selection operator (LASSO), was proposed by Tibshirani [37] as follows:
$$ \min_{x} \bigl\{ \Vert c-Ax \Vert _{2}^{2}+\lambda \Vert x \Vert _{1} \bigr\} , $$
where \(\Vert \cdot \Vert _{1}\) is the \(l_{1}\)-norm defined by \(\Vert x \Vert _{1} = \sum_{i=1}^{n} \vert x_{i} \vert \). This method improves on the original LS problem (5.2) and classical regularizations such as subset selection and ridge regression (5.3). Moreover, the LASSO can also be applied to image and regression problems [30, 37], etc.
For solving the image restoration problem, especially for true RGB images, the model (5.4) is highly costly: computing the product Ax and the norm \(\Vert x \Vert _{1}\) is expensive because of the sizes of the matrices A and x. To overcome this, most researchers in this area employ the 2D fast Fourier transform of the true RGB images, and the model (5.4) is slightly modified to the following form:
$$ \min_{x} \bigl\{ \Vert \mathcal{A}x-\mathcal{C} \Vert _{2}^{2}+\lambda \Vert Wx \Vert _{1} \bigr\} , $$
where \(\mathcal{A}\) is the blurring operator, often chosen as \(\mathcal{A} = RW\), R is the blurring matrix, W is the 2D fast Fourier transform, \(\mathcal{C} \in \mathbb{R}^{m\times n}\) is the observed blurred and noisy image of size \(m \times n\), and λ is a positive regularization parameter.
In this section, we apply Algorithm 3.1 to solve the image restoration problem (5.5) by using Theorem 4.1 with \(f(x) = \Vert \mathcal{A}x - \mathcal{C} \Vert _{2}^{2}\) and \(g(x)=\lambda \Vert Wx \Vert _{1}\) and compare the deblurring efficiency of Algorithm 3.1 with FISTA and FBA. In this experiment the true RGB images Wat Chedi Luang and Wat Boonyawad of size \(256 \times 256\) are considered as the original images. We blur the images with a Gaussian blur of size \(9 \times 9\) and standard deviation \(\sigma = 4\). After that, we use the peak signal-to-noise ratio (PSNR) [38] to measure the performance of the three algorithms, where PSNR(\(x_{n}\)) is defined by
$$ \operatorname{PSNR}(x_{n}) = 10\log_{10} \biggl(\frac{255^{2}}{\mathit{MSE}} \biggr), $$
where 255 is the maximum gray level of an 8 bits/pixel monochrome image, \(\mathit{MSE} = \frac{1}{N} \Vert x_{n}-x^{*} \Vert _{2}^{2}= \frac{1}{N}\sum_{i=1}^{N} \vert x_{n}(i)- x^{*}(i) \vert ^{2}\), \(x_{n}(i)\) and \(x^{*}(i)\) are the ith samples of the images \(x_{n}\) and \(x^{*}\), respectively, N is the number of image samples, and \(x^{*}\) is the original image. We note that a higher PSNR indicates a higher-quality deblurred image. For these experiments, we set \(\lambda = 5 \times 10^{-5}\), and the initial image was the blurred image. Then we compute the Lipschitz constant L by using the maximum eigenvalue of the matrix \(A^{T}A\).
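The PSNR measurement above can be mirrored directly in code (a hedged helper of our own; images are flattened to lists of samples):

```python
import math

def psnr(restored, original, peak=255.0):
    # PSNR = 10 * log10(peak^2 / MSE), MSE = (1/N) * sum |x_n(i) - x*(i)|^2
    n = len(restored)
    mse = sum((r - o) ** 2 for r, o in zip(restored, original)) / n
    if mse == 0:
        return float("inf")       # identical images: infinite PSNR
    return 10.0 * math.log10(peak * peak / mse)

# Example: a uniform error of 5 gray levels gives MSE = 25.
val = psnr([100.0, 150.0, 200.0], [105.0, 145.0, 205.0])
```

A restoration that halves the per-pixel error raises the PSNR by about 6 dB, which is why PSNR curves are a convenient way to compare the three algorithms.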
The parameters for Algorithm 3.1, FISTA and FBA, are set as in Table 1.
Note that all parameters in Table 1 satisfy the conditions of the corresponding convergence theorems. Theorem 4.1 guarantees the convergence of the sequence \(\{x_{n}\}\) generated by Algorithm 3.1 to the original image \(x^{*}\). The convergence behavior of this sequence is measured by the value of PSNR, which is known to be an appropriate measurement for image restoration problems.
The following experiments show the performance of the proposed algorithm and compare efficiency of deblurring images with FISTA and FBA by using PSNR as our measurement.
The results of a deblurring image of Wat Chedi Luang and Wat Boonyawad with the 300th iteration of the proposed algorithm, FISTA, and FBA are shown in Figs. 1, 2, 3, 4 and Tables 2, 3, 4.
We observe from Figs. 1 and 2 that the graph of PSNR of our proposed algorithm (Algorithm 3.1) is higher than that of FISTA and FBA, which shows that our proposed algorithm gives a better performance than the others.
Tables 2 and 3 show the efficiency of each algorithm for restoring images under different numbers of iterations. We found that Algorithm 3.1 gives a higher PSNR than FISTA and FBA. Thus our algorithm has better convergence behavior than the others.
We observed from Table 4 that at the 300th iteration, the value of PSNR of our proposed algorithm is higher than that of FISTA and FBA. This shows that the performance of Algorithm 3.1 is better than the others.
In Figs. 3 and 4, we present the original images, blurred images, and deblurring images by Algorithm 3.1, FISTA, and FBA.
6 Conclusion
In this study, we introduced a new concept of left and right coordinate affine of a directed graph and used it to introduce a new accelerated common fixed point algorithm for a countable family of G-nonexpansive mappings in a real Hilbert space with a directed graph. The weak convergence theorem for the proposed method, Theorem 3.4, was established under certain reasonable conditions. We then applied our results to image restoration problems and compared the convergence behavior of our algorithm with that of FISTA and FBA. We found that Algorithm 3.1 gives better results.
Availability of data and materials
Not applicable.
References
Bin Dehaish, B.A., Khamsi, M.A.: Mann iteration process for monotone nonexpansive mappings. Fixed Point Theory Appl. 2015, 177 (2015)
Dong, Y.: New inertial factors of the Krasnosel’skii–Mann iteration. SetValued Var. Anal. 29, 145–161 (2021)
Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506–510 (1953)
Ishikawa, S.: Fixed points by a new iteration method. Proc. Am. Math. Soc. 44, 147–150 (1974)
Agarwal, R.P., O’Regan, D., Sahu, D.R.: Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 8, 61–79 (2007)
Aleomraninejad, S.M.A., Rezapour, S., Shahzad, N.: Some fixed point result on a metric space with a graph. Topol. Appl. 159, 659–663 (2012)
Tiammee, J., Kaewkhao, A., Suantai, S.: On Browder’s convergence theorem and Halpern iteration process for Gnonexpansive mappings in Hilbert spaces endowed with graphs. Fixed Point Theory Appl. 2015, 187, 1–12 (2015)
Tripak, O.: Common fixed points of Gnonexpansive mappings on Banach spaces with a graph. Fixed Point Theory Appl. 2016, 87 (2016)
Sridarat, P., Suparaturatorn, R., Suantai, S., Cho, Y.J.: Convergence analysis of SPiteration for Gnonexpansive mappings with directed graphs. Bull. Malays. Math. Sci. Soc. 42, 2361–2380 (2019)
Yambangwai, D., Aunruean, S., Thianwan, T.: A new modified threestep iteration method for Gnonexpansive mappings in Banach spaces with a graph. Numer. Algorithms 84, 537–565 (2020)
Suantai, S., Kankam, K., Cholamjiak, P., et al.: A parallel monotone hybrid algorithm for a finite family of Gnonexpansive mappings in Hilbert spaces endowed with a graph applicable in signal recovery. Comput. Appl. Math. 40, 145 (2021)
Johnsonbaugh, R.: Discrete Mathematics. Prentice Hall, New Jersey (1997)
Jachymski, J.: The contraction principle for mappings on a metric space with a graph. Proc. Am. Math. Soc. 136(4), 1359–1373 (2008)
Nakajo, K., Shimoji, K., Takahashi, W.: Strong convergence to a common fixed point of families of nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 8, 11–34 (2007)
Nakajo, K., Shimoji, K., Takahashi, W.: On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonlinear Anal., Theory Methods Appl. 71(1–2), 112–119 (2009)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. Springer, Berlin (2017)
Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)
Moreau, J.J.: Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci., Sér. 1 Math. 255, 2897–2899 (1962)
Beck, A.: FirstOrder Methods in Optimization, pp. 129–177. TelAviv University, TelAviv (2017). ISBN 9781611974980
Bussaban, L., Suantai, S., Kaewkhao, A.: A parallel inertial Siteration forwardbackward algorithm for regression and classification problems. Carpath. J. Math. 36, 21–30 (2020)
Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
Tan, K., Xu, H.K.: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 178, 301–308 (1993)
Moudafi, A., AlShemas, E.: Simultaneous iterative methods for split equality problem. Trans. Math. Program. Appl. 1, 1–11 (2013)
Hanjing, A., Suantai, S.: A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 8(3), 378 (2020)
Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)
Polyak, B.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4, 1–17 (1964)
Janngam, K., Suantai, S.: An accelerated forward–backward algorithm with applications to image restoration problems. Thai J. Math. 19(2), 325–339 (2021)
Alakoya, T.O., Jolaoso, L.O., Mewomo, O.T.: Two modifications of the inertial Tseng extragradient method with selfadaptive step size for solving monotone variational inequality problems. Demonstr. Math. 53, 208–224 (2020)
Gebrie, A.G., Wangkeeree, R.: Strong convergence of an inertial extrapolation method for a split system of minimization problems. Demonstr. Math. 53, 332–351 (2020)
Beck, A., Teboulle, M.: A fast iterative shrinkagethresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)
Verma, M., Shukla, K.: A new accelerated proximal gradient technique for regularized multitask learning framework. Pattern Recognit. Lett. 95, 98–103 (2017)
Nesterov, Y.: A method for solving the convex programming problem with convergence rate \(O(1/k^{2})\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983)
Vogel, C.R.: Computational Methods for Inverse Problems. SIAM, Philadelphia (2002)
Eldén, L.: Algorithms for the regularization of illconditioned least squares problems. BIT Numer. Math. 17(2), 134–145 (1977)
Hansen, P.C., Nagy, J.G., O’Leary, D.P.: Deblurring Images: Matrices, Spectra, and Filtering (Fundamentals of Algorithms 3) (Fundamentals of Algorithms). SIAM, Philadelphia (2006)
Tikhonov, A.N., Arsenin, V.Y.: Solutions of Ill-Posed Problems. W.H. Winston (1977)
Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc., Ser. B, Stat. Methodol. 58, 267–288 (1996)
Thung, K., Raveendran, P.: A survey of image quality measures. In: Proceedings of the International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, Malaysia, 14–15 December, pp. 1–4 (2009)
Acknowledgements
The first author was supported by Fundamental Fund 2022, Chiang Mai University, Thailand, under the supervision of Suthep Suantai. The second author would like to thank Ubon Ratchathani University, Thailand. This research has also received funding support from the NSRF via the program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F640183].
Funding
NSRF [grant number B05F640183].
Author information
Contributions
K.J. and R.W. contributed equally in this research paper. Both authors have read and agreed to the published version of the manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wattanataweekul, R., Janngam, K. An accelerated common fixed point algorithm for a countable family of Gnonexpansive mappings with applications to image recovery. J Inequal Appl 2022, 68 (2022). https://doi.org/10.1186/s1366002202796y
DOI: https://doi.org/10.1186/s1366002202796y