
An accelerated common fixed point algorithm for a countable family of G-nonexpansive mappings with applications to image recovery

Abstract

In this paper, we define a new concept of left and right coordinate affine of a directed graph and then employ it to introduce a new accelerated common fixed point algorithm for a countable family of G-nonexpansive mappings in a real Hilbert space with a graph. We prove, under certain conditions, weak convergence theorems for the proposed algorithm. As applications, we also apply our results to solve convex minimization and image restoration problems. Moreover, we show that our algorithm provides better convergence behavior than other methods in the literature.

Introduction

Let C be a nonempty closed convex subset of a real Hilbert space H with norm \(\|\cdot \|\). A mapping T of C into itself is said to be

  1. (i)

    Lipschitzian if there exists \(\gamma \geq 0\) such that

    $$ \Vert Tx-Ty \Vert \leq \gamma \Vert x-y \Vert $$

    for all \(x,y\in C\), where γ is called the coefficient of T;

  2. (ii)

    nonexpansive if T is Lipschitzian with \(\gamma =1\).

The element \(x\in C\) is a fixed point of T if \(Tx=x\), and \(F(T):=\{x\in C : x=Tx\}\) denotes the set of all fixed points of T.

For the past seven decades, several iterative methods have been proposed for approximating fixed points of nonexpansive mappings; see, for instance, [1, 2].

One of the famous and well-known iterative methods, the Picard iteration process, is defined by

$$ x_{n+1}=Tx_{n}$$

for \(n\geq 1\), and the initial point \(x_{1}\) is chosen arbitrarily. The Picard iteration process was subsequently improved and studied extensively by many mathematicians, as follows.
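As a minimal illustration (with the assumed mapping \(T(x)=\cos x\), a contraction near its unique fixed point \(x^{*}\approx 0.7391\), not an example from this paper), the Picard iteration can be sketched as:

```python
import math

def picard(T, x1, n_iter=100):
    """Picard iteration: x_{n+1} = T(x_n), starting from x1."""
    x = x1
    for _ in range(n_iter):
        x = T(x)
    return x

# T(x) = cos(x) is a contraction near its unique fixed point, so the
# Picard iterates converge to it from any real starting point.
x_star = picard(math.cos, x1=0.0)
print(abs(math.cos(x_star) - x_star) < 1e-12)  # x_star is (numerically) fixed
```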

The Mann iteration process [3] is defined by

$$\begin{aligned} x_{n+1}=(1-\rho _{n})x_{n}+\rho _{n}Tx_{n} \end{aligned}$$
(1.1)

for \(n\geq 1\), the initial point \(x_{1}\) is chosen arbitrarily, and \(\{\rho _{n}\}\) is a sequence in \([0,1]\).

The Ishikawa iteration process [4] is defined by

$$\begin{aligned} \textstyle\begin{cases} y_{n} =(1-\beta _{n})x_{n}+\beta _{n}Tx_{n}, \\ x_{n+1}=(1-\rho _{n})x_{n} +\rho _{n}Ty_{n} \end{cases}\displaystyle \end{aligned}$$
(1.2)

for \(n\geq 1\), the initial point \(x_{1}\) is chosen arbitrarily, and \(\{\beta _{n}\}\) and \(\{\rho _{n}\}\) are sequences in \([0,1]\).

The S-iteration process [5] is defined by

$$\begin{aligned} \textstyle\begin{cases} y_{n} = (1-\beta _{n})x_{n} + \beta _{n}Tx_{n}, \\ x_{n+1} = (1-\rho _{n})Tx_{n} + \rho _{n}Ty_{n} \end{cases}\displaystyle \end{aligned}$$
(1.3)

for \(n \geq 1\), the initial point \(x_{1}\) is chosen arbitrarily, and \(\{\beta _{n}\}\) and \(\{\rho _{n}\}\) are sequences in \([0,1]\).
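The Mann (1.1) and Ishikawa (1.2) schemes above can be sketched as follows; the mapping \(T(x)=\cos x\) and the constant parameter choices are illustrative assumptions, not taken from the paper:

```python
import math

def mann(T, x1, rho=0.5, n_iter=200):
    """Mann iteration (1.1): x_{n+1} = (1 - rho_n) x_n + rho_n T(x_n),
    here with a constant step rho_n = rho in [0, 1]."""
    x = x1
    for _ in range(n_iter):
        x = (1 - rho) * x + rho * T(x)
    return x

def ishikawa(T, x1, beta=0.5, rho=0.5, n_iter=200):
    """Ishikawa iteration (1.2): an inner step y_n precedes the Mann-type update."""
    x = x1
    for _ in range(n_iter):
        y = (1 - beta) * x + beta * T(x)
        x = (1 - rho) * x + rho * T(y)
    return x

# Both schemes drive x_n to the unique fixed point of T(x) = cos(x).
print(round(mann(math.cos, 0.0), 6), round(ishikawa(math.cos, 0.0), 6))
```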

In 2007, Agarwal, O’Regan, and Sahu [5] proved that the iteration process (1.3) is independent of the Mann and Ishikawa iteration processes and converges faster. In 2012, Aleomraninejad et al. [6] combined fixed point theory and graph theory to prove a convergence theorem for G-nonexpansive mappings in a Banach space. In 2015, Tiammee et al. [7] proved the Browder convergence theorem and a strong convergence theorem of the Halpern iterative scheme for G-nonexpansive mappings in a Hilbert space endowed with a graph. Later, Tripak [8], using the Ishikawa iteration, proved weak and strong convergence theorems for finding a common fixed point of two G-nonexpansive mappings in a Banach space. In 2019, Sridarat et al. [9], using the SP-iteration, proved weak and strong convergence theorems for finding a common fixed point of three G-nonexpansive mappings in a uniformly convex Banach space with a graph. In 2020, Yambangwai et al. [10], using a modified three-step iteration method, proved weak and strong convergence theorems for three G-nonexpansive mappings in a uniformly convex Banach space with a graph. They also applied their results to constrained minimization and split feasibility problems. Recently, Suantai et al. [11] modified the shrinking projection method with the parallel monotone hybrid method for approximating common fixed points of a finite family of G-nonexpansive mappings. They proved a strong convergence theorem under suitable conditions in Hilbert spaces endowed with graphs and applied it to signal recovery.

The main objectives of this paper are introducing an iterative method for finding a common fixed point of a countable family of G-nonexpansive mappings, analyzing the convergence behavior of the recommended algorithm in comparison with the others, and giving some applications to solve the image restoration problem.

Preliminaries

In what follows, X is a real normed space. Let C be a nonempty subset of X. Let \(G=(V(G),E(G))\) be a directed graph with \(V(G)=C\) and \(E(G)\supseteq \triangle \), where \(\triangle =\{(u,u):u\in C\}\). Assume that G has no parallel edges. We denote by \(G^{-1}\) the graph obtained from G by reversing the direction of edges. Then

$$ E\bigl(G^{-1}\bigr)=\bigl\{ (u,v)\in C\times C: (v,u)\in E(G)\bigr\} .$$

Recall that a graph G is said to be connected if there is a path between any two vertices of the graph G. For more detail on some basic notions of the graphs, we refer the readers to [12].

A mapping \(T:C\to C\) is said to be

  1. (i)

    G-contraction [13] if

    1. (a)

      T is edge-preserving, i.e., \((Tu,Tv)\in E(G)\) for all \((u,v)\in E(G)\), and

    2. (b)

      there exists \(\rho \in [0,1)\) such that \(\|Tu-Tv\|\leq \rho \|u-v\|\) for all \((u,v)\in E(G)\), where ρ is called a contraction factor;

  2. (ii)

    G-nonexpansive [7] if

    1. (a)

      T is edge-preserving, and

    2. (b)

      \(\|Tu-Tv\|\leq \|u-v\|\) for all \((u,v)\in E(G)\).

If \(\{u_{n}\}\) is a sequence in X, then \(u_{n}\rightharpoonup u\) denotes weak convergence of the sequence \(\{u_{n}\}\) to u. For \(v\in C\), if there is a subsequence \(\{u_{n_{k}}\}\) of \(\{u_{n}\}\) such that \(u_{n_{k}}\rightharpoonup v\), then v is called a weak cluster point of \(\{u_{n}\}\). By \(\omega _{w}(u_{n})\) we denote the set of all weak cluster points of \(\{u_{n}\}\).

Let \(\{T_{n}\}\) and ψ be families of nonexpansive mappings of C into itself such that \(\varnothing \neq F(\psi )\subset \Lambda :=\bigcap _{n=1}^{\infty}F(T_{n})\), where \(F(\psi )\) is the set of all common fixed points of all \(T\in \psi \). A sequence \(\{T_{n}\}\) satisfies the \(NST\)-condition (I) with ψ [14] if for any bounded sequence \(\{u_{n}\}\) in C,

$$ \lim_{n\to \infty} \Vert T_{n}u_{n}-u_{n} \Vert =0 \quad \text{implies} \quad \lim_{n \to \infty} \Vert Tu_{n}-u_{n} \Vert =0$$

for all \(T\in \psi \). If \(\psi =\{T\}\), then \(\{T_{n}\}\) is said to satisfy the \(NST\)-condition (I) with T. In 2009, Nakajo et al. [15] gave the definition of the \(NST^{*}\)-condition: a sequence \(\{T_{n}\}\) satisfies the \(NST^{*}\)-condition if

$$ \lim_{n\to \infty} \Vert T_{n}u_{n}-u_{n} \Vert =\lim_{n\to \infty} \Vert u_{n+1}-u_{n} \Vert =0 \quad \text{implies} \quad \omega _{w}(u_{n})\subset \Lambda $$

for every bounded sequence \(\{u_{n}\}\) in C.

We recall the definition of the forward–backward operator of proper lower semicontinuous convex functions \(f, g:\mathbb{R}^{n} \to (-\infty ,+\infty ]\) with f differentiable: the forward–backward operator T is defined by \(T:=prox_{\lambda g}(I - \lambda \nabla f)\) for \(\lambda >0\), where \(\nabla f\) is the gradient of f and \(prox_{\lambda g}x := \arg \min_{y\in H} \{g(y) + \frac{1}{2\lambda}\|y-x\|^{2} \}\) (see [16, 17]). The operator \(prox_{\lambda g}\) was defined by Moreau [18], who called it the proximity operator with respect to λ and the function g. It is known that T is a nonexpansive mapping whenever \(\lambda \in (0,2/L)\), where L is a Lipschitz constant of \(\nabla f\).

Remark 2.1

([19])

Let \(g:\mathbb{R}^{n} \to \mathbb{R}\) be given by \(g(x)=\lambda \|x\|_{1}\). The proximity operator of g is defined by the formula

$$ prox_{\lambda \Vert \cdot \Vert _{1}}(x) = \bigl(sign(x_{i})\max \bigl( \vert x_{i} \vert -\lambda ,0\bigr)\bigr)_{i=1}^{n},$$

where \(x= (x_{1},x_{2}, \dots ,x_{n})\) and \(\|x\|_{1} = \sum_{i=1}^{n}|x_{i}|\).
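Remark 2.1 translates directly into code (the soft-thresholding operator); the test vector below is an illustrative assumption:

```python
import numpy as np

def prox_l1(x, lam):
    """Proximity operator of g = lam * ||.||_1 (soft-thresholding):
    each coordinate is shrunk toward 0 by lam and clipped at 0."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([3.0, -0.5, 1.2, 0.0])
print(prox_l1(x, 1.0))  # coordinates with |x_i| <= 1 are zeroed out
```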

Lemma 2.2

([20])

Let g be a lower semicontinuous and proper convex function from a Hilbert space H into \(\mathbb{R}\cup \{\infty \}\), and let f be a convex differentiable function from H into \(\mathbb{R}\) with L-Lipschitz gradient \(\nabla f\) for some \(L > 0\). Let T be the forward–backward operator of g and f with respect to a. A sequence \(\{T_{n}\}\) satisfies the \(NST\)-condition (I) with T if each \(T_{n}\) is the forward–backward operator of g and f with respect to \(a_{n}\) such that \(a_{n} \to a\), where \(a,a_{n} \in (0,2/L)\).

Lemma 2.3

([21])

For a real Hilbert space H, we have:

  1. (i)

    For all \(u,v \in H\) and \(\gamma \in [0,1]\),

    $$ \bigl\Vert \gamma u+(1-\gamma )v \bigr\Vert ^{2}=\gamma \Vert u \Vert ^{2}+(1-\gamma ) \Vert v \Vert ^{2}- \gamma (1- \gamma ) \Vert u-v \Vert ^{2};$$
  2. (ii)

    For any \(u,v \in H\),

    $$ \Vert u\pm v \Vert ^{2}= \Vert u \Vert ^{2}\pm 2 \langle u,v \rangle + \Vert v \Vert ^{2}.$$

Lemma 2.4

([22])

Let \(\{u_{n}\}\), \(\{v_{n}\}\), and \(\{\vartheta _{n}\}\) be sequences of nonnegative real numbers such that

$$ u_{n+1}\leq (1+\vartheta _{n})u_{n}+v_{n}$$

for \(n\in \mathbb{N}\). If \(\sum_{n=1}^{\infty}\vartheta _{n}<\infty \) and \(\sum_{n=1}^{\infty}v_{n}<\infty \), then \(\lim_{n\to \infty}u_{n}\) exists.

Lemma 2.5

([23])

Let H be a real Hilbert space, and let \(\{u_{n}\}\) be a sequence in H such that there exists a nonempty set \(\Lambda \subset H\) satisfying the following conditions:

  1. (i)

    For any \(p\in \Lambda , \lim_{n\to \infty}\|u_{n}-p\|\) exists;

  2. (ii)

Every weak cluster point of \(\{u_{n}\}\) belongs to Λ.

Then there exists \(q^{*}\in \Lambda \) such that \(u_{n}\rightharpoonup q^{*}\).

Lemma 2.6

([24])

Let \(\{u_{n}\}\) and \(\{\mu _{n}\}\) be sequences of nonnegative real numbers such that

$$ u_{n+1}\leq (1+\mu _{n})u_{n}+\mu _{n}u_{n-1}$$

for \(n\in \mathbb{N}\). Then

$$ u_{n+1}\leq M\cdot \prod^{n}_{j=1}(1+2 \mu _{j}),$$

where \(M=\max \{u_{1},u_{2}\}\). Moreover, if \(\sum_{n=1}^{\infty}\mu _{n}<\infty \), then \(\{u_{n}\}\) is bounded.
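As a quick numerical sanity check of Lemma 2.6 (not part of the original text), the sketch below runs the equality case of the recursion with the summable sequence \(\mu _{n}=1/n^{2}\) and verifies the product bound at every step:

```python
# Equality case of Lemma 2.6: u_{n+1} = (1 + mu_n) u_n + mu_n u_{n-1},
# with the summable choice mu_n = 1/n^2. Lemma 2.6 then gives
# u_{n+1} <= M * prod_{j=1}^{n} (1 + 2 mu_j), where M = max{u_1, u_2}.
u = [1.0, 2.0]              # u_1, u_2
M = max(u)
prod = 1.0
for n in range(1, 200):
    mu = 1.0 / n ** 2
    u.append((1 + mu) * u[-1] + mu * u[-2])
    prod *= 1 + 2 * mu
    assert u[-1] <= M * prod   # the bound of Lemma 2.6 holds

print(u[-1] < 100)  # since sum(mu_n) < infinity, {u_n} stays bounded
```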

Main results

In this section, by using the inertial technique we prove a weak convergence theorem for a new accelerated algorithm for a countable family of G-nonexpansive mappings in a real Hilbert space with a directed graph.

Let C be a nonempty closed and convex subset of a real Hilbert space H with a directed graph \(G = (V(G),E(G))\) such that \(V(G)=C\). Let \(\{T_{n}\}\) be a family of G-nonexpansive mappings of C into itself such that \(\emptyset \neq \Lambda :=\bigcap _{n=1}^{\infty}F(T_{n})\).

The sequence \(\mu _{n}\) is called an inertial step size. Before giving a weak convergence theorem for Algorithm 3.1 for a family of G-nonexpansive mappings, we need to introduce a concept of coordinate affine of the graph \(G = (V(G),E(G))\).

Algorithm 3.1
(An inertial Mann algorithm) Choose \(x_{0},x_{1}\in C\). For \(n\geq 1\), compute

$$\begin{aligned} \textstyle\begin{cases} y_{n} = x_{n}+\mu _{n}(x_{n}-x_{n-1}), \\ x_{n+1} = (1-\rho _{n})y_{n}+\rho _{n}T_{n}y_{n}, \end{cases}\displaystyle \end{aligned}$$

where \(\{\rho _{n}\}\) is a sequence in \((0,1)\), and \(\{\mu _{n}\}\) is a nonnegative sequence with \(\sum_{n=1}^{\infty}\mu _{n}<\infty \).
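A minimal sketch of the inertial Mann update used in Algorithm 3.1, i.e., the steps \(y_{n}=x_{n}+\mu _{n}(x_{n}-x_{n-1})\) and \(x_{n+1}=(1-\rho _{n})y_{n}+\rho _{n}T_{n}y_{n}\) appearing in the proofs below; the constant mapping \(T_{n}=\cos \) and the parameter sequences are illustrative assumptions:

```python
import math

def inertial_mann(T_seq, x0, x1, mu, rho, n_iter=100):
    """Inertial Mann iteration (Algorithm 3.1):
        y_n     = x_n + mu_n (x_n - x_{n-1})        (inertial step)
        x_{n+1} = (1 - rho_n) y_n + rho_n T_n(y_n)  (Mann step)
    T_seq(n) returns the mapping T_n; mu(n), rho(n) give the step sizes."""
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        y = x + mu(n) * (x - x_prev)
        x_prev, x = x, (1 - rho(n)) * y + rho(n) * T_seq(n)(y)
    return x

# Illustrative run with a single nonexpansive map T(x) = cos(x) and a
# summable inertial sequence mu_n = 1/n^2 (so that sum mu_n < infinity).
x = inertial_mann(lambda n: math.cos, x0=0.0, x1=0.5,
                  mu=lambda n: 1.0 / n ** 2, rho=lambda n: 0.5)
print(abs(math.cos(x) - x) < 1e-9)  # converged to the fixed point
```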

Definition 3.1

Assume that \(\Lambda :=\bigcap _{n=1}^{\infty}F(T_{n})\neq \emptyset \) and \(\Lambda \times \Lambda \subseteq E(G)\). Then \(E(G)\) is said to be

  1. (i)

    left coordinate affine if \(\alpha (x,y)+\beta (u,y)\in E(G)\) for all \((x,y)\), \((u,y) \in E(G)\) and all α, \(\beta \in \mathbb{R}\) such that \(\alpha +\beta = 1\).

  2. (ii)

    right coordinate affine if \(\alpha (x,y)+\beta (x,z)\in E(G)\) for all \((x,y)\), \((x,z) \in E(G)\) and all α, \(\beta \in \mathbb{R}\) such that \(\alpha +\beta = 1\).

We say that \(E(G)\) is coordinate affine if \(E(G)\) is both left and right coordinate affine.

We start with some properties of the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) generated by Algorithm 3.1 related to \(E(G)\).

Example 3.2

Let \(X=\mathbb{R}^{2}\) and \(C=\mathbb{R}\times \{1\}\). Let \(G = (V(G), E(G))\) be the directed graph defined by \(V(G) = C\) and \((x, y) \in E(G)\) whenever \(x,y \in \mathbb{R} \times \{1\}\). We show that \(E(G)\) is left coordinate affine. To see this, let \((x,y), (z,y) \in E(G)\) with \(x=(x_{1},1)\), \(y= (y_{1},1)\), and \(z=(z_{1},1)\), and let α, \(\beta \in \mathbb{R}\) be such that \(\alpha +\beta = 1\). Then

$$\begin{aligned} \alpha (x,y)+\beta (z,y) &=\alpha \bigl( (x_{1},1),(y_{1},1) \bigr) + \beta \bigl( (z_{1},1),(y_{1},1) \bigr) \\ &= \bigl( (\alpha x_{1},\alpha ),(\alpha y_{1},\alpha ) \bigr) + \bigl( ( \beta z_{1},\beta ),(\beta y_{1},\beta ) \bigr) \\ &= \bigl( (\alpha x_{1}+\beta z_{1},\alpha +\beta ),( \alpha y_{1}+ \beta y_{1},\alpha +\beta ) \bigr) \\ &= \bigl( (\alpha x_{1}+\beta z_{1},1),(y_{1},1) \bigr). \end{aligned}$$

Then \(\alpha (x,y)+\beta (z,y) \in E(G)\), and hence \(E(G)\) is left coordinate affine.

Proposition 3.3

Let \(\breve{q}\in \Lambda \) and \(x_{0},x_{1} \in C\) be such that \((x_{0},\breve{q})\), \((x_{1},\breve{q}) \in E(G)\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1. Suppose \(E(G)\) is left coordinate affine. Then \((x_{n},\breve{q})\) and \((y_{n},\breve{q}) \in E(G)\) for all \(n \in \mathbb{N}\).

Proof

We will prove the results by using mathematical induction. From Algorithm 3.1 we obtain

$$\begin{aligned} (y_{1},\breve{q}) &= \bigl( x_{1}+\mu _{1}(x_{1}-x_{0}), \breve{q} \bigr) \\ &= \bigl( (1+\mu _{1})x_{1}-\mu _{1} x_{0},\breve{q} \bigr) \\ &=(1+\mu _{1}) (x_{1},\breve{q})-\mu _{1}(x_{0}, \breve{q}). \end{aligned}$$

Since \((x_{0},\breve{q})\), \((x_{1},\breve{q}) \in E(G)\) and \(E(G)\) is left coordinate affine, we get \((y_{1},\breve{q}) \in E(G)\). Next, suppose that \((x_{k},\breve{q})\), \((y_{k},\breve{q}) \in E(G)\). Notice that

$$\begin{aligned} (x_{k+1},\breve{q}) &= \bigl( (1-\rho _{k})y_{k}+ \rho _{k}T_{k}y_{k}, \breve{q} \bigr) \\ &=(1-\rho _{k}) (y_{k},\breve{q})+\rho _{k}(T_{k}y_{k},\breve{q}). \end{aligned}$$

Since \((y_{k},\breve{q})\in E(G)\) and \(T_{k}\) is edge-preserving, we obtain \((x_{k+1},\breve{q}) \in E(G)\). Then

$$\begin{aligned} (y_{k+1},\breve{q}) &= \bigl( x_{k+1}+\mu _{k+1}(x_{k+1}-x_{k}), \breve{q} \bigr) \\ &=(1+\mu _{k+1}) (x_{k+1},\breve{q})-\mu _{k+1}(x_{k}, \breve{q}). \end{aligned}$$

Since \((x_{k+1},\breve{q})\), \((x_{k},\breve{q}) \in E(G)\) and \(E(G)\) is left coordinate affine, we have that \((y_{k+1},\breve{q}) \in E(G)\). By mathematical induction we obtain \((x_{n},\breve{q})\), \((y_{n},\breve{q})\in E(G)\) for all \(n \in \mathbb{N}\). □

Theorem 3.4

Let C be a nonempty closed and convex subset of a real Hilbert space H with a directed graph \(G=(V(G),E(G))\) such that \(V(G)=C\) and \(E(G)\) is left coordinate affine. Let \(x_{0},x_{1} \in C\), and let \(\{x_{n}\}\) be the sequence defined by Algorithm 3.1. Suppose that \(\{T_{n}\}\) satisfies the \(\textit{NST}^{*}\)-condition, \(\Lambda \neq \emptyset \), and \((x_{0}, \breve{q})\), \((x_{1},\breve{q}) \in E(G)\) for all \(\breve{q}\in \Lambda \). Then \(\{x_{n}\}\) converges weakly to an element of Λ.

Proof

Let \(\breve{q}\in \Lambda \). By Algorithm 3.1 we obtain

$$\begin{aligned} \Vert y_{n} - \breve{q} \Vert &= \bigl\Vert x_{n} +\mu _{n}(x_{n}-x_{n-1} ) - \breve{q} \bigr\Vert \\ &\leq \Vert x_{n}-\breve{q} \Vert + \mu _{n} \Vert x_{n} -x_{n-1} \Vert \end{aligned}$$
(3.1)

and

$$\begin{aligned} \Vert x_{n+1}-\breve{q} \Vert &= \bigl\Vert (1-\rho _{n})y_{n} -\breve{q} +\rho _{n} \breve{q} +\rho _{n} T_{n}y_{n}-\rho _{n} \breve{q} \bigr\Vert \\ &= \bigl\Vert (1-\rho _{n}) (y_{n}-\breve{q})+\rho _{n}(T_{n}y_{n}-\breve{q}) \bigr\Vert \\ &\leq (1-\rho _{n}) \Vert y_{n}-\breve{q} \Vert +\rho _{n} \Vert T_{n}y_{n}- \breve{q} \Vert \\ &= (1-\rho _{n}) \Vert y_{n}-\breve{q} \Vert +\rho _{n} \Vert T_{n}y_{n}-T_{n} \breve{q} \Vert \\ &\leq (1-\rho _{n}) \Vert y_{n}-\breve{q} \Vert +\rho _{n} \Vert y_{n}-\breve{q} \Vert \\ &= \Vert y_{n}-\breve{q} \Vert . \end{aligned}$$
(3.2)

From (3.1) and (3.2) we get

$$\begin{aligned} \Vert x_{n+1}-\breve{q} \Vert \leq \Vert x_{n}- \breve{q} \Vert + \mu _{n} \Vert x_{n} -x_{n-1} \Vert . \end{aligned}$$
(3.3)

Then we have

$$\begin{aligned} \Vert x_{n+1}-\breve{q} \Vert \leq (1+\mu _{n}) \Vert x_{n}-\breve{q} \Vert +\mu _{n} \Vert x_{n-1}- \breve{q} \Vert . \end{aligned}$$
(3.4)

Applying Lemma 2.6, we get \(\|x_{n+1}-\breve{q}\| \leq M \cdot \prod^{n}_{j=1}(1+2\mu _{j})\), where \(M = \max \{\|x_{1}-\breve{q}\|, \|x_{2}-\breve{q}\|\}\). Since \(\sum_{n=1}^{\infty}\mu _{n} < \infty \), we obtain that \(\{x_{n}\}\) is bounded. Thus

$$\begin{aligned} \sum_{n=1}^{\infty}\mu _{n} \Vert x_{n} -x_{n-1} \Vert < \infty . \end{aligned}$$
(3.5)

By Lemma 2.4, (3.3), and (3.5) we get that \(\lim_{n\to \infty} \|x_{n}-\breve{q}\|\) exists. By Algorithm 3.1 and Lemma 2.3(i) we obtain

$$\begin{aligned} \Vert x_{n+1}-\breve{q} \Vert ^{2} &= \bigl\Vert (1- \rho _{n}) (y_{n}-\breve{q})+\rho _{n}(T_{n}y_{n}- \breve{q}) \bigr\Vert ^{2} \\ &= (1-\rho _{n}) \Vert y_{n}-\breve{q} \Vert ^{2}+\rho _{n} \Vert T_{n}y_{n}- \breve{q} \Vert ^{2} -(1-\rho _{n})\rho _{n} \Vert y_{n}-T_{n}y_{n} \Vert ^{2} \\ &\leq (1-\rho _{n}) \Vert y_{n}-\breve{q} \Vert ^{2} +\rho _{n} \Vert y_{n}- \breve{q} \Vert ^{2}-(1-\rho _{n})\rho _{n} \Vert y_{n}-T_{n}y_{n} \Vert ^{2} \\ &= \Vert y_{n}-\breve{q} \Vert ^{2}-(1-\rho _{n})\rho _{n} \Vert y_{n}-T_{n}y_{n} \Vert ^{2} \\ &\leq \bigl( \Vert x_{n}-\breve{q} \Vert + \mu _{n} \Vert x_{n} -x_{n-1} \Vert \bigr)^{2} - (1- \rho _{n})\rho _{n} \Vert y_{n}-T_{n}y_{n} \Vert ^{2} \\ &= \Vert x_{n}-\breve{q} \Vert ^{2} +2\mu _{n} \Vert x_{n}-\breve{q} \Vert \Vert x_{n}-x_{n-1} \Vert +\mu _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2} \\ &\quad {} -(1-\rho _{n})\rho _{n} \Vert y_{n}-T_{n}y_{n} \Vert ^{2}. \end{aligned}$$
(3.6)

From (3.5) and (3.6) we obtain

$$\begin{aligned} \Vert y_{n}-T_{n}y_{n} \Vert \to 0. \end{aligned}$$
(3.7)

Since

$$\begin{aligned} \Vert x_{n}-y_{n} \Vert = \mu _{n} \Vert x_{n}-x_{n-1} \Vert , \end{aligned}$$

it follows that

$$\begin{aligned} \Vert x_{n}-y_{n} \Vert \to 0. \end{aligned}$$
(3.8)

By (3.7) and (3.8) from

$$\begin{aligned} \Vert x_{n+1}-x_{n} \Vert \leq \Vert y_{n}-x_{n} \Vert + \rho _{n} \Vert T_{n}y_{n}-y_{n} \Vert \to 0 \end{aligned}$$
(3.9)

we obtain

$$\begin{aligned} \Vert x_{n+1}-x_{n} \Vert \to 0. \end{aligned}$$
(3.10)

Next, we will show that \(\|y_{n}-y_{n+1}\| \to 0\). By Algorithm 3.1 we obtain

$$\begin{aligned} \Vert y_{n}-y_{n+1} \Vert &= \bigl\Vert x_{n}+\mu _{n}(x_{n}-x_{n-1})-x_{n+1}- \mu _{n+1}(x_{n+1}-x_{n}) \bigr\Vert \\ &\leq \Vert x_{n}-x_{n+1} \Vert +\mu _{n} \Vert x_{n}-x_{n-1} \Vert +\mu _{n+1} \Vert x_{n}-x_{n+1} \Vert . \end{aligned}$$

From (3.5) and (3.10) we get

$$\begin{aligned} \Vert y_{n}-y_{n+1} \Vert \to 0. \end{aligned}$$
(3.11)

Since \(\{T_{n}\}\) satisfies the \(\text{NST}^{*}\)-condition, by (3.7) and (3.11) we obtain

$$ \omega _{w}(y_{n})\subset \Lambda .$$

Finally, we will show that \(\omega _{w}(x_{n})\subset \Lambda \). To this end, let \(x\in \omega _{w}(x_{n})\). By the definition of \(\omega _{w}(x_{n})\) there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}} \rightharpoonup x\). From (3.8) we obtain \(y_{n_{k}} \rightharpoonup x\), so \(x\in \omega _{w}(y_{n})\). It follows that \(\omega _{w}(x_{n})\subset \omega _{w}(y_{n})\subset \Lambda \). By Lemma 2.5 there exists \(q^{*}\in \Lambda \) such that \(x_{n} \rightharpoonup q^{*}\). The proof is now complete. □

Application on convex minimization problems

In this section, we apply our proposed method to the convex minimization problem for the sum of two convex lower semicontinuous functions \(f, g:\mathbb{R}^{n} \to (-\infty ,+\infty ]\):

$$ \min_{x\in \mathbb{R}^{n}} \bigl( f(x)+g(x) \bigr). $$

Combettes and Wajs [17] proved that \(\breve{q}\) is a minimizer of this problem if and only if \(\breve{q} = T\breve{q}\), where \(T= prox_{\rho g}(I-\rho \nabla f )\); see [17, Prop. 3.1(iii)]. It is also known that T is nonexpansive if \(\rho \in (0,2 / L)\), where L is a Lipschitz constant of \(\nabla f\). Over the past two decades, several algorithms have been introduced for solving this problem. A simple and classical one is the forward–backward algorithm (FBA) introduced by Lions and Mercier [25].

The forward–backward algorithm (FBA) is defined by

$$\begin{aligned} \textstyle\begin{cases} y_{n} =x_{n} -\gamma \nabla fx_{n}, \\ x_{n+1} = x_{n} + \rho _{n}(J_{\gamma \partial g}y_{n}-x_{n}), \end{cases}\displaystyle \end{aligned}$$
(4.1)

where \(n\geq 1\), \(x_{0} \in H\), L is a Lipschitz constant of \(\nabla f\), \(\gamma \in (0, 2/L)\), \(\delta = 2 - (\gamma L/2)\), and \(\{\rho _{n}\}\) is a sequence in \([0,\delta ]\) such that \(\sum_{n\in \mathbb{N}} \rho _{n}(\delta - \rho _{n}) = +\infty \). The technique of adding an inertial step to improve the speed and convergence behavior of such algorithms was first introduced by Polyak [26]. Since then, many authors have employed the inertial technique to accelerate their algorithms for various problems [20, 24, 27–31]. The following iterative methods with an inertial step can be used to improve the performance of the FBA.
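For concreteness, here is a minimal sketch of a forward–backward scheme for the LASSO-type objective \(\|Ax-c\|_{2}^{2}+\lambda \|x\|_{1}\), combining a gradient step on f with the soft-thresholding prox of Remark 2.1; the tiny matrix and parameter values are assumptions for illustration only:

```python
import numpy as np

def fba_lasso(A, c, lam, n_iter=500):
    """Forward-backward iteration for min ||Ax - c||_2^2 + lam * ||x||_1.
    Forward step: gradient of f(x) = ||Ax - c||^2 is 2 A^T (Ax - c).
    Backward step: prox of lam * ||.||_1, i.e., soft-thresholding."""
    L = 2 * np.linalg.norm(A.T @ A, 2)           # Lipschitz constant of grad f
    gamma = 1.0 / L                              # step size gamma in (0, 2/L)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        y = x - gamma * 2 * A.T @ (A @ x - c)    # forward (gradient) step
        x = np.sign(y) * np.maximum(np.abs(y) - gamma * lam, 0.0)  # backward (prox)
    return x

A = np.array([[1.0, 0.0], [0.0, 2.0]])
c = np.array([1.0, 2.0])
x = fba_lasso(A, c, lam=0.1)
# With a small lam, x stays close to the least-squares solution [1, 1],
# each coordinate shrunk slightly by the l1 penalty.
print(np.round(x, 4))
```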

A fast iterative shrinkage-thresholding algorithm (FISTA) [30] is defined by

$$\begin{aligned} \textstyle\begin{cases} y_{n} = Tx_{n}, \\ t_{n+1} = \frac{1+\sqrt{1+4t_{n}^{2}}}{2}, \\ \mu _{n} = \frac{t_{n}-1}{t_{n+1}}, \\ x_{n+1}=y_{n}+\mu _{n}(y_{n}-y_{n-1}), \end{cases}\displaystyle \end{aligned}$$
(4.2)

where \(n\geq 1\), \(x_{1} = y_{0} \in \mathbb{R}^{n}\), \(t_{1} = 1\), \(T := prox_{\frac{1}{L}g}(I - \frac{1}{L}\nabla f) \), and \(\mu _{n}\) is the so-called inertial step size. The FISTA was suggested by Beck and Teboulle [30], who established its convergence rate and applied it to image restoration problems. The inertial step size \(\mu _{n}\) of the FISTA was first introduced by Nesterov [32].
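The inertial step sizes \(t_{n}\) and \(\mu _{n}\) of the FISTA can be generated by a direct transcription of the recursion in (4.2):

```python
import math

def fista_inertia(n_terms):
    """Nesterov/FISTA inertial step sizes: t_1 = 1,
    t_{n+1} = (1 + sqrt(1 + 4 t_n^2)) / 2, and mu_n = (t_n - 1) / t_{n+1}."""
    t = 1.0
    mus = []
    for _ in range(n_terms):
        t_next = (1 + math.sqrt(1 + 4 * t * t)) / 2
        mus.append((t - 1) / t_next)
        t = t_next
    return mus

mus = fista_inertia(5)
print(mus[0])         # mu_1 = 0 since t_1 = 1
print(mus[-1] < 1.0)  # mu_n increases toward 1 but stays below it
```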

A new accelerated proximal gradient algorithm (nAGA) [31] is defined by

$$\begin{aligned} \textstyle\begin{cases} y_{n} = x_{n} + \mu _{n}(x_{n} -x_{n-1}), \\ x_{n+1} = T_{n}[(1-\rho _{n})y_{n} + \rho _{n}T_{n}y_{n}], \end{cases}\displaystyle \end{aligned}$$
(4.3)

where \(n\geq 1\), \(T_{n}\) is the forward–backward operator of f and g with respect to \(a_{n} \in (0,2/L)\), \(\{\mu _{n}\}\) and \(\{\rho _{n}\}\) are sequences in \((0,1)\), and \(\frac{\|x_{n} -x_{n-1}\|_{2}}{\mu _{n}} \to 0 \). The nAGA and its convergence theorem are due to Verma and Shukla [31], who also applied the method to nonsmooth convex minimization problems with sparsity-inducing regularizers in the multitask learning framework.

Theorem 4.1

Let \(f,g:\mathbb{R}^{n} \rightarrow (-\infty ,\infty ]\) be such that g is a convex function and f is a smooth convex function with a gradient having a Lipschitz constant L. Let \(a_{n} \in (0, 2/L)\) be such that \(\{a_{n}\}\) converges to a, let \(T:=prox_{ag}(I - a\nabla f)\) and \(T_{n} := prox_{a_{n} g} (I - a_{n}\nabla f)\), and let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1. Then:

  1. (i)

\(\|x_{n+1} - \breve{q}\| \leq K\cdot \prod^{n}_{j=1}(1 + 2\mu _{j})\), where \(K = \max \{\|x_{1}-\breve{q}\|,\|x_{2}-\breve{q}\|\}\) and \(\breve{q} \in \operatorname{Argmin}(f+g)\);

  2. (ii)

    \(\{x_{n}\}\) converges weakly to a point in \(\operatorname{Argmin} (f+g)\).

Proof

It is known that T and all \(T_{n}\) are nonexpansive and that \(F(T) = \bigcap _{n=1}^{\infty}F(T_{n}) = \operatorname{Argmin}(f + g)\); see [16, Prop. 26.1]. By Lemma 2.2, \(\{T_{n}\}\) satisfies the \(NST\)-condition (I) with T and hence the \(\text{NST}^{*}\)-condition [15]. The required result then follows directly from Theorem 3.4 by taking \(E(G)=\mathbb{R}^{n} \times \mathbb{R}^{n}\), that is, G the complete graph on \(\mathbb{R}^{n}\). □

Application on the image restoration problem

We can describe the image restoration problem as a simple linear model

$$\begin{aligned} Ax=c+u, \end{aligned}$$
(5.1)

where \(A \in \mathbb{R}^{m\times n}\) is the blurring operation, \(x \in \mathbb{R}^{n\times 1}\) is an image, \(c \in \mathbb{R}^{m\times 1}\) is the observed image, and u is additive noise. The image restoration problem consists in finding the original image \(x^{*} \in \mathbb{R}^{n\times 1}\) satisfying (5.1). To approximate the original image, we minimize the additive noise by solving the least squares (LS) problem

$$\begin{aligned} \min_{x} \Vert Ax-c \Vert _{2}^{2}, \end{aligned}$$
(5.2)

where \(\|\cdot \|_{2}\) is the \(l_{2}\)-norm. The solution of (5.2) can be estimated by many iterative methods, such as the Richardson iteration; see [33] for details. However, the number of unknown variables is much greater than the number of observations, which makes (5.2) an ill-posed problem: the computed solution can have a huge norm and is thus meaningless; see [34] and [35]. Therefore, to deal with this ill-conditioned least squares problem, several regularization methods have been introduced. One of the most popular is the Tikhonov regularization [36], defined by the following minimization problem:

$$\begin{aligned} \min_{x} \bigl\{ \Vert Ax-c \Vert _{2}^{2} +\lambda \Vert Lx \Vert _{2}^{2} \bigr\} , \end{aligned}$$
(5.3)

where \(\lambda > 0\) is called a regularization parameter, and \(L \in \mathbb{R}^{m\times n}\) is called the Tikhonov matrix. In the standard form, L is set to be the identity. In statistics, (5.3) is known as a ridge regression.
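In the standard form \(L=I\), problem (5.3) has the closed-form solution \(x=(A^{T}A+\lambda I)^{-1}A^{T}c\). A minimal sketch with assumed toy data illustrates how the regularization tames an ill-conditioned A:

```python
import numpy as np

def ridge(A, c, lam):
    """Tikhonov regularization (5.3) with L = I: the minimizer of
    ||Ax - c||_2^2 + lam * ||x||_2^2 solves (A^T A + lam I) x = A^T c."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ c)

# An ill-conditioned A makes the plain least-squares solution blow up;
# a small lam keeps the regularized estimate at a reasonable scale.
A = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-8]])
c = np.array([1.0, 2.0])
x_ls = np.linalg.lstsq(A, c, rcond=None)[0]
x_ridge = ridge(A, c, lam=1e-3)
print(np.linalg.norm(x_ls) > 1e6, np.linalg.norm(x_ridge) < 10)
```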

A new method for estimating a solution of (5.1), called the least absolute shrinkage and selection operator (LASSO), was proposed by Tibshirani [37] as follows:

$$\begin{aligned} \min_{x} \bigl\{ \Vert Ax-c \Vert _{2}^{2} +\lambda \Vert x \Vert _{1} \bigr\} , \end{aligned}$$
(5.4)

where \(\|\cdot \|_{1}\) is the \(l_{1}\)-norm defined by \(\|x\|_{1} = \sum_{i=1}^{n}|x_{i}|\). This method improves on the original LS problem (5.2) and on classical regularizations such as subset selection and ridge regression (5.3). Moreover, the LASSO can also be applied to image and regression problems [30, 37].

For solving the image restoration problem, especially for true RGB images, the model (5.4) is computationally expensive: the multiplication Ax and the norm \(\|x\|_{1}\) are costly because of the sizes of the matrix A and the vector x. To overcome this, most researchers in this area employ the 2-D fast Fourier transform of the true RGB images and slightly modify the model (5.4) to the following form:

$$\begin{aligned} \min_{x} \bigl\{ \Vert \mathcal{A}x- \mathcal{C} \Vert _{2}^{2} +\lambda \Vert Wx \Vert _{1} \bigr\} , \end{aligned}$$
(5.5)

where \(\mathcal{A}\) is the blurring operation, often chosen as \(\mathcal{A} = RW\), R is the blurring matrix, W is the 2-D fast Fourier transform, \(\mathcal{C} \in \mathbb{R}^{m\times n}\) is the observed blurred and noisy image of size \(m \times n\), and λ is a positive regularization parameter.

In this section, we apply Algorithm 3.1 to the image restoration problem (5.5) via Theorem 4.1 with \(f(x) = \|\mathcal{A}x - \mathcal{C}\|_{2}^{2}\) and \(g (x)=\lambda \|Wx\|_{1}\) and compare the deblurring efficiency of Algorithm 3.1 with those of the FISTA and FBA. In this experiment, the true RGB images Wat Chedi Luang and Wat Boonyawad of size \(256\times 256\) are considered as the original images. We blur the images with a Gaussian blur of size \(9\times 9\) and standard deviation \(\sigma = 4\). After that, we use the peak signal-to-noise ratio (PSNR) [38] to measure the performance of the three algorithms, where PSNR(\(x_{n}\)) is defined by

$$\begin{aligned} PSNR(x_{n})=10\log _{10} \biggl(\frac{255^{2}}{MSE} \biggr), \end{aligned}$$

where 255 is the maximum gray level of an 8 bits/pixel monochromatic image, \(MSE = \frac{1}{N}\|x_{n}-x^{*}\|_{2}^{2}= \frac{1}{N}\sum_{i=1}^{N}|x_{n}(i) -x^{*}(i)|^{2}\), \(x_{n}(i)\) and \(x^{*}(i)\) are the ith samples of the images \(x_{n}\) and \(x^{*}\), respectively, N is the number of image samples, and \(x^{*}\) is the original image. Note that a higher PSNR indicates a higher-quality deblurred image. For these experiments, we set \(\lambda = 5 \times 10^{-5}\) and took the blurred image as the initial image. The Lipschitz constant L is computed as the maximum eigenvalue of the matrix \(A^{T}A\).
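The PSNR formula above can be computed directly; the toy \(4\times 4\) image below is an assumption for illustration:

```python
import numpy as np

def psnr(x_n, x_star):
    """Peak signal-to-noise ratio (in dB) between a restored image x_n
    and the original x_star, with 8-bit peak value 255."""
    mse = np.mean((x_n.astype(float) - x_star.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

x_star = np.full((4, 4), 100.0)
x_noisy = x_star + 5.0          # constant error of 5 gray levels, MSE = 25
print(round(psnr(x_noisy, x_star), 2))  # → 34.15
```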

The parameters for Algorithm 3.1, FISTA and FBA, are set as in Table 1.

Table 1 Algorithms and their setting controls

Note that all parameters in Table 1 satisfy the assumptions of the corresponding convergence theorems. Theorem 4.1 guarantees the convergence of the sequence \(\{x_{n}\}\) generated by Algorithm 3.1 to the original image \(x^{*}\). The convergence behavior of this sequence is measured by the PSNR, which is known to be an appropriate measure for image restoration problems.

The following experiments show the performance of the proposed algorithm and compare efficiency of deblurring images with FISTA and FBA by using PSNR as our measurement.

The results of a deblurring image of Wat Chedi Luang and Wat Boonyawad with the 300th iteration of the proposed algorithm, FISTA, and FBA are shown in Figs. 1, 2, 3, 4 and Tables 2, 3, 4.

Figure 1

The graphs of PSNR of each algorithm for Wat Chedi Luang

Figure 2

The graphs of PSNR of each algorithm for Wat Boonyawad

Figure 3

Results for Wat Chedi Luang’s deblurring image

Figure 4

Results for Wat Boonyawad’s deblurring image

Table 2 The values of PSNR at \(x_{1}\), \(x_{5}\), \(x_{10}\), \(x_{25}\), \(x_{50}\), \(x_{100}\), \(x_{175}\), \(x_{300}\) (Wat Chedi Luang)
Table 3 The values of PSNR at \(x_{1}\), \(x_{5}\), \(x_{10}\), \(x_{25}\), \(x_{50}\), \(x_{100}\), \(x_{175}\), \(x_{300}\) (Wat Boonyawad)
Table 4 Comparison of image restorations at the 300th iteration of Algorithm 3.1, FISTA, and FBA

We observe from Figs. 1 and 2 that the graph of PSNR of our proposed algorithm (Algorithm 3.1) is higher than that of FISTA and FBA, which shows that our proposed algorithm gives a better performance than the others.

Tables 2 and 3 show the image restoration efficiency of each algorithm for different numbers of iterations. We find that Algorithm 3.1 gives a higher PSNR than the FISTA and FBA, so our algorithm has better convergence behavior than the others.

We also observe from Table 4 that at the 300th iteration the PSNR of our proposed algorithm is higher than those of the FISTA and FBA, which again shows that Algorithm 3.1 performs better than the others.

In Figs. 3 and 4, we present the original images, blurred images, and deblurring images by Algorithm 3.1, FISTA, and FBA.

Conclusion

In this study, we introduced a new concept of left and right coordinate affine of a directed graph and used it to introduce a new accelerated common fixed point algorithm for a countable family of G-nonexpansive mappings in a real Hilbert space with a directed graph. The weak convergence theorem for our suggested method, Theorem 3.4, was established under certain reasonable conditions. We then applied our results to image restoration problems and compared the convergence behavior of our proposed algorithm with those of the FISTA and FBA. We found that Algorithm 3.1 gives better results.

Availability of data and materials

Not applicable.

References

  1. Bin Dehaish, B.A., Khamsi, M.A.: Mann iteration process for monotone nonexpansive mappings. Fixed Point Theory Appl. 2015, 177 (2015)


  2. Dong, Y.: New inertial factors of the Krasnosel’skii–Mann iteration. Set-Valued Var. Anal. 29, 145–161 (2021)


  3. Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506–510 (1953)


  4. Ishikawa, S.: Fixed points by a new iteration method. Proc. Am. Math. Soc. 44, 147–150 (1974)


  5. Agarwal, R.P., O’Regan, D., Sahu, D.R.: Iterative construction of fixed point of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 8, 61–79 (2007)

    MathSciNet  MATH  Google Scholar 

  6. Aleomraninejad, S.M.A., Rezapour, S., Shahzad, N.: Some fixed point result on a metric space with a graph. Topol. Appl. 159, 659–663 (2012)

    MathSciNet  MATH  Article  Google Scholar 

  7. Tiammee, J., Kaewkhao, A., Suantai, S.: On Browder’s convergence theorem and Halpern iteration process for G-nonexpansive mappings in Hilbert spaces endowed with graphs. Fixed Point Theory Appl. 2015, 187, 1–12 (2015)

    MathSciNet  MATH  Article  Google Scholar 

  8. Tripak, O.: Common fixed points of G-nonexpansive mappings on Banach spaces with a graph. Fixed Point Theory Appl. 2016, 87 (2016)

    MathSciNet  MATH  Article  Google Scholar 

  9. Sridarat, P., Suparaturatorn, R., Suantai, S., Cho, Y.J.: Convergence analysis of SP-iteration for G-nonexpansive mappings with directed graphs. Bull. Malays. Math. Sci. Soc. 42, 2361–2380 (2019)

    MathSciNet  MATH  Article  Google Scholar 

  10. Yambangwai, D., Aunruean, S., Thianwan, T.: A new modified three-step iteration method for G-nonexpansive mappings in Banach spaces with a graph. Numer. Algorithms 84, 537–565 (2020)

    MathSciNet  MATH  Article  Google Scholar 

  11. Suantai, S., Kankam, K., Cholamjiak, P., et al.: A parallel monotone hybrid algorithm for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with a graph applicable in signal recovery. Comput. Appl. Math. 40, 145 (2021)

    MathSciNet  MATH  Article  Google Scholar 

  12. Johnsonbaugh, R.: Discrete Mathematics, New Jersey (1997)

    MATH  Google Scholar 

  13. Jachymski, J.: The contraction principle for mappings on a metric space with a graph. Proc. Am. Math. Soc. 136(4), 1359–1373 (2008)

    MathSciNet  MATH  Article  Google Scholar 

  14. Nakajo, K., Shimoji, K., Takahashi, W.: Strong convergence to a common fixed point of families of nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 8, 11–34 (2007)

    MathSciNet  MATH  Google Scholar 

  15. Nakajo, K., Shimoji, K., Takahashi, W.: On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonlinear Anal., Theory Methods Appl. 71(1–2), 112–119 (2009)

    MathSciNet  MATH  Article  Google Scholar 

  16. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. Springer, Berlin (2017)

    MATH  Book  Google Scholar 

  17. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)

    MathSciNet  MATH  Article  Google Scholar 

  18. Moreau, J.J.: Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci., Sér. 1 Math. 255, 2897–2899 (1962)

    MathSciNet  MATH  Google Scholar 

  19. Beck, A.: First-Order Methods in Optimization, pp. 129–177. Tel-Aviv University, Tel-Aviv (2017). ISBN 978-1-61197-498-0

    MATH  Book  Google Scholar 

  20. Bussaban, L., Suantai, S., Kaewkhao, A.: A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpath. J. Math. 36, 21–30 (2020)

    MathSciNet  MATH  Google Scholar 

  21. Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)

    MATH  Google Scholar 

  22. Tan, K., Xu, H.K.: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 178, 301–308 (1993)

    MathSciNet  MATH  Article  Google Scholar 

  23. Moudafi, A., Al-Shemas, E.: Simultaneous iterative methods for split equality problem. Trans. Math. Program. Appl. 1, 1–11 (2013)

    Google Scholar 

  24. Hanjing, A., Suantai, S.: A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 8(3), 378 (2020)

    Article  Google Scholar 

  25. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)

    MathSciNet  MATH  Article  Google Scholar 

  26. Polyak, B.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4, 1–17 (1964)

    Article  Google Scholar 

  27. Janngam, K., Suantai, S.: An accelerated forward–backward algorithm with applications to image restoration problems. Thai J. Math. 19(2), 325–339 (2021)

    MathSciNet  MATH  Google Scholar 

  28. Alakoya, T.O., Jolaoso, L.O., Mewomo, O.T.: Two modifications of the inertial Tseng extragradient method with self-adaptive step size for solving monotone variational inequality problems. Demonstr. Math. 53, 208–224 (2020)

    MathSciNet  MATH  Article  Google Scholar 

  29. Gebrie, A.G., Wangkeeree, R.: Strong convergence of an inertial extrapolation method for a split system of minimization problems. Demonstr. Math. 53, 332–351 (2020)

    MathSciNet  Article  Google Scholar 

  30. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)

    MathSciNet  MATH  Article  Google Scholar 

  31. Verma, M., Shukla, K.: A new accelerated proximal gradient technique for regularized multitask learning framework. Pattern Recognit. Lett. 95, 98–103 (2017)

    Article  Google Scholar 

  32. Nesterov, Y.: A method for solving the convex programming problem with convergence rate \(O(1/k^{2})\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983)

    MathSciNet  Google Scholar 

  33. Vogel, C.R.: Computational Methods for Inverse Problems. SIAM, Philadelphia (2002)

    MATH  Book  Google Scholar 

  34. Eldén, L.: Algorithms for the regularization of ill-conditioned least squares problems. BIT Numer. Math. 17(2), 134–145 (1977)

    MathSciNet  MATH  Article  Google Scholar 

  35. Hansen, P.C., Nagy, J.G., O’Leary, D.P.: Deblurring Images: Matrices, Spectra, and Filtering (Fundamentals of Algorithms 3) (Fundamentals of Algorithms). SIAM, Philadelphia (2006)

    Book  Google Scholar 

  36. Tikhonov, A.N., Arsenin, V.Y.: Solutions of Ill-Posed Problems. W.H. Winston (1997)

    MATH  Google Scholar 

  37. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc., Ser. B, Stat. Methodol. 58, 267–288 (1996)

    MathSciNet  MATH  Google Scholar 

  38. Thung, K., Raveendran, P.: A survey of image quality measures. In: Proceedings of the International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, Malaysia, 14–15 December, pp. 1–4 (2009)

    Google Scholar 

Acknowledgements

The first author was supported by Fundamental Fund 2022, Chiang Mai University, Thailand, under the supervision of Suthep Suantai. The second author would like to thank Ubon Ratchathani University, Thailand. This research has also received funding support from the NSRF via the program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F640183].

Funding

NSRF [grant number B05F640183].

Author information

Contributions

K.J. and R.W. contributed equally in this research paper. Both authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Rattanakorn Wattanataweekul.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Wattanataweekul, R., Janngam, K. An accelerated common fixed point algorithm for a countable family of G-nonexpansive mappings with applications to image recovery. J Inequal Appl 2022, 68 (2022). https://doi.org/10.1186/s13660-022-02796-y


Keywords

  • Convex minimization
  • Coordinate affine
  • G-nonexpansive
  • Image restoration problem
  • Inertial techniques
  • Weak convergence