
An accelerated viscosity forward-backward splitting algorithm with the linesearch process for convex minimization problems

Abstract

In this paper, we consider a convex minimization problem of the sum of two convex functions in a Hilbert space. The forward-backward splitting algorithm is one of the popular optimization methods for approximating a minimizer of such a function; however, its stepsize depends on the Lipschitz constant of the gradient of the function, which is generally difficult to determine in practice. By using a new modification of the linesearches of Cruz and Nghia [Optim. Methods Softw. 31:1209–1238, 2016] and Kankam et al. [Math. Methods Appl. Sci. 42:1352–1362, 2019] together with an inertial technique, we introduce an accelerated viscosity-type algorithm without any Lipschitz continuity assumption on the gradient. A strong convergence result of the proposed algorithm is established under some control conditions. As applications, we apply our algorithm to image and signal recovery problems. Numerical experiments show that our method is more efficient than well-known methods in the literature.

1 Introduction

The convex minimization problem is one of the important problems in mathematical optimization. It has been widely studied because of its many applications in various branches of science and in real-world problems such as image and signal processing, data classification, and regression, see [3, 5, 8, 10, 12, 13] and the references therein. Various optimization methods for solving the convex minimization problem have been introduced and developed by many researchers, see [1, 3–5, 7–9, 11, 14, 16–19, 23, 26, 28] for instance. In this work, we are interested in the following unconstrained convex minimization problem of the sum of two functions:

$$ \mathop {\operatorname {minimize}}_{x \in \mathcal{X}} h_{1}(x)+h_{2}(x), $$
(1)

where \(\mathcal{X}\) is a Hilbert space, \(h_{1} : \mathcal{X} \to \mathbb{R} \) is a convex and differentiable function, and \(h_{2} : \mathcal{X}\to \mathbb{R}\cup \{\infty \}\) is a proper, lower semi-continuous, and convex function.

It is known that if a minimizer \(p^{*}\) of \(h_{1}+h_{2}\) exists, then \(p^{*}\) is a fixed point of the forward-backward operator

$$ FB_{\alpha } := \underbrace{\operatorname {prox}_{\alpha h_{2}}}_{ \text{backward step}} \underbrace{(I_{d} - \alpha \nabla h_{1})}_{ \text{forward step}}, $$

where \(\alpha >0\), \(\operatorname {prox}_{h_{2}}\) is the proximity operator of \(h_{2}\), and \(\nabla h_{1}\) stands for the gradient of \(h_{1}\), that is, \(p^{*} = FB_{\alpha }(p^{*})\). If \(\nabla h_{1} \) is Lipschitz continuous with a coefficient \(L>0\) and \(\alpha \in (0, 2/L)\), then the forward-backward operator \(FB_{\alpha }\) is nonexpansive. In this case, we can employ fixed point approximation methods for the class of nonexpansive operators to solve (1). One of the popular methods is known as the forward-backward splitting (FBS) algorithm [8, 18].

Method FBS

Let \(x_{1} \in \mathcal{X}\). For \(k \geq 1\), let

$$ x_{k+1}=\operatorname {prox}_{\alpha _{k}h_{2}} \bigl(x_{k}- \alpha _{k}\nabla h_{1}(x_{k}) \bigr), $$

where \(0 < \alpha _{k} < 2/L\).
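To make the forward and backward steps concrete, the following Python sketch (not part of the original article) implements Method FBS for the LASSO-type objective used later in Sect. 4, namely \(h_{1}(x) = \frac{1}{2}\|y - Tx\|_{2}^{2}\) and \(h_{2}(x) = \lambda \|x\|_{1}\), whose proximity operator is componentwise soft-thresholding; the names fbs_lasso and soft_threshold, as well as the fixed stepsize, are our own illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t*||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbs_lasso(T, y, lam, alpha, n_iter=500):
    """Method FBS for (1/2)||y - Tx||_2^2 + lam*||x||_1 with a fixed stepsize.

    alpha must lie in (0, 2/L), where L = ||T||_2^2 is the Lipschitz
    constant of the gradient of the smooth part.
    """
    x = np.zeros(T.shape[1])
    for _ in range(n_iter):
        grad = T.T @ (T @ x - y)                            # forward (gradient) step on h1
        x = soft_threshold(x - alpha * grad, alpha * lam)   # backward (proximal) step on h2
    return x
```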

This method includes the proximal point algorithm [19, 26], the gradient method [4, 11], and the CQ algorithm [6] as special cases. It can be seen from Method FBS that we need to assume the Lipschitz continuity condition on the gradient of \(h_{1}\), and the stepsize \(\alpha _{k}\) depends on the Lipschitz constant L. However, finding such a Lipschitz constant is not an easy task in general practice. This leads to the natural question:

Question: How can we construct an algorithm whose stepsize does not depend on any Lipschitz constant of the gradient for solving Problem (1)?

In the sequel, we set the standing hypotheses on Problem (1) as follows:

  1. (AI)

    \(h_{1} : \mathcal{X} \to \mathbb{R} \) is a convex and differentiable function and the gradient \(\nabla h_{1} \) is uniformly continuous on \(\mathcal{X}\);

  2. (AII)

    \(h_{2} : \mathcal{X}\to \mathbb{R}\cup \{\infty \}\) is a proper, lower semi-continuous, and convex function.

We see that the second part of (AI) is a weaker condition than the Lipschitz continuity condition on \(\nabla h_{1}\).

In 2016, Cruz and Nghia [9] suggested a way to select the stepsize \(\alpha _{k}\) independently of the Lipschitz constant L by using the following linesearch process.

It was proved that Linesearch A is well defined, that is, it stops after finitely many steps, see [9, Lemma 3.1] and [32, Theorem 3.4(a)]. Linesearch A is a special case of the linesearch proposed in [32] for inclusion problems. Cruz and Nghia [9] employed the forward-backward splitting method with the stepsize \(\alpha _{k}\) generated by Linesearch A.

Linesearch A

Fix \(x \in \mathcal{X}\), \(\sigma >0\), \(\delta >0\), and \(\theta \in (0,1)\)
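The pseudocode of Linesearch A appears only as a figure above; the following Python sketch reconstructs it from the description in [9]: starting from \(\alpha = \sigma \), the stepsize is shrunk by the factor θ until \(\alpha \|\nabla h_{1}(FB_{\alpha }(x)) - \nabla h_{1}(x)\| \leq \delta \|FB_{\alpha }(x) - x\|\) holds. The callables grad_h1 and prox_h2 are placeholders to be supplied by the user, and the reconstruction should be checked against [9].

```python
import numpy as np

def linesearch_A(x, sigma, theta, delta, grad_h1, prox_h2):
    """Backtracking stepsize rule of Cruz and Nghia [9] (our reconstruction).

    grad_h1(x)        : gradient of h1 at x
    prox_h2(v, alpha) : proximity operator of alpha*h2 evaluated at v
    """
    alpha = sigma
    while True:
        fb = prox_h2(x - alpha * grad_h1(x), alpha)  # forward-backward point FB_alpha(x)
        if alpha * np.linalg.norm(grad_h1(fb) - grad_h1(x)) \
                <= delta * np.linalg.norm(fb - x):
            return alpha
        alpha *= theta                               # shrink the stepsize and retry
```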

Method 1

Let \(x_{1} \in \mathcal{X}\), \(\sigma >0\), \(\delta \in (0, 1/2)\), and \(\theta \in (0,1)\). For \(k \geq 1\), let

$$ x_{k+1}=\operatorname {prox}_{\alpha _{k}h_{2}} \bigl(x_{k} -\alpha _{k}\nabla h_{1}(x_{k}) \bigr), $$

where \(\alpha _{k}:= \text{Linesearch A}(x_{k},\sigma , \theta , \delta ) \).

In optimization theory, to speed up the convergence of iterative procedures, many researchers use inertial-type extrapolation [15, 22, 24] by adding the term \(\beta _{k}(x_{k}-x_{k-1})\). The parameter \(\beta _{k}\), called an inertial parameter, controls the momentum \(x_{k}-x_{k-1}\). Based on Method 1, Cruz and Nghia [9] also proposed an accelerated algorithm with an inertial term as follows.

Method 2

Let \(x_{0}=x_{1} \in \mathcal{X}\), \(\alpha _{0}=\sigma > 0\), \(\delta \in (0, 1/2)\), \(\theta \in (0,1)\), and \(t_{1}=1\). For \(k \geq 1\), let

$$\begin{aligned} &t_{k+1}= \frac{1 + \sqrt{1+4t_{k}^{2}}}{2},\qquad \beta _{k}= \frac{t_{k}-1}{t_{k+1}}, \\ &y_{k}= x_{k} + \beta _{k}(x_{k}-x_{k-1}), \\ &x_{k+1}=\operatorname {prox}_{\alpha _{k}h_{2}} \bigl(y_{k} - \alpha _{k}\nabla h_{1}(y_{k}) \bigr), \end{aligned}$$

where \(\alpha _{k}:= \mbox{Linesearch A} (y_{k},\alpha _{k-1}, \theta , \delta ) \).

The technique of selecting \(\beta _{k}\) in Method 2 was first defined in the fast iterative shrinkage-thresholding algorithm (FISTA) by Beck and Teboulle [3].
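For reference, the momentum parameters of Method 2 can be generated on the fly as in the short snippet below; the function name is ours, and the formulas are exactly those displayed in Method 2.

```python
def fista_momentum(t_k):
    # t_{k+1} = (1 + sqrt(1 + 4*t_k^2)) / 2 and beta_k = (t_k - 1) / t_{k+1},
    # as in Method 2 (the FISTA rule of Beck and Teboulle [3]).
    t_next = (1.0 + (1.0 + 4.0 * t_k ** 2) ** 0.5) / 2.0
    beta_k = (t_k - 1.0) / t_next
    return t_next, beta_k

# Usage inside the iteration: y_k = x_k + beta_k * (x_k - x_prev)
```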

In 2019, Kankam et al. [16] introduced a modification of Linesearch A as follows.

Using Linesearch B, they proposed the following double forward-backward splitting algorithm.

Linesearch B

Fix \(x \in \mathcal{X}\), \(\sigma >0\), \(\delta >0\), and \(\theta \in (0,1)\)

Method 3

Let \(x_{1} \in \mathcal{X}\), \(\sigma >0\), \(\delta \in (0, 1/8)\), and \(\theta \in (0,1)\). For \(k \geq 1\), let

$$\begin{aligned} &y_{k}=\operatorname {prox}_{\alpha _{k}h_{2}} \bigl(x_{k}-\alpha _{k}\nabla h_{1}(x_{k}) \bigr), \\ &x_{k+1}=\operatorname {prox}_{\alpha _{k}h_{2}} \bigl(y_{k}-\alpha _{k}\nabla h_{1}(y_{k}) \bigr), \end{aligned}$$

where \(\alpha _{k}:= \mbox{Linesearch B}(x_{k}, \sigma , \theta , \delta )\).
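Since Linesearch B is shown only as a figure, the sketch below treats the stepsize as given and illustrates just the double forward-backward structure of Method 3; grad_h1 and prox_h2 are user-supplied placeholders as before.

```python
def double_fb_step(x_k, alpha_k, grad_h1, prox_h2):
    # One pass of Method 3: two consecutive forward-backward steps with the
    # same stepsize alpha_k (which Method 3 obtains from Linesearch B).
    y_k = prox_h2(x_k - alpha_k * grad_h1(x_k), alpha_k)
    return prox_h2(y_k - alpha_k * grad_h1(y_k), alpha_k)
```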

We note that Methods 1–3, under some mild conditions, guarantee only weak convergence for Problem (1); strong convergence, however, is a more desirable theoretical property. To obtain strong convergence, we focus on the forward-backward splitting algorithm based on the viscosity approximation method [21, 34] as follows.

Method 4

Let \(x_{1} \in \mathcal{X}\). For \(k \geq 1\), let

$$ x_{k+1}=\gamma _{k}f(x_{k})+ (1-\gamma _{k})\operatorname {prox}_{\alpha _{k}h_{2}} \bigl(x_{k}- \alpha _{k}\nabla h_{1}(x_{k}) \bigr), $$

where \(f : \mathcal{X} \rightarrow \mathcal{X}\) is a contraction, \(\gamma _{k} \in (0, 1)\) and \(\alpha _{k} >0\).
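A single iteration of Method 4 can be sketched as follows, assuming (as in the experiments of Sect. 4) a contraction of the form \(f(x) = \eta x\) with \(\eta \in (0,1)\); the stepsize alpha_k and the oracles grad_h1, prox_h2 are supplied by the user.

```python
def viscosity_fbs_step(x_k, gamma_k, alpha_k, eta, grad_h1, prox_h2):
    # One step of Method 4 with the contraction f(x) = eta * x, eta in (0, 1).
    fb = prox_h2(x_k - alpha_k * grad_h1(x_k), alpha_k)   # forward-backward point
    return gamma_k * (eta * x_k) + (1.0 - gamma_k) * fb   # viscosity combination
```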

In this work, inspired and motivated by the results of Cruz and Nghia [9] and Kankam et al. [16] and the above-mentioned research, we aim to improve Linesearches A and B and to introduce a new accelerated algorithm, based on the proposed linesearch, that converges strongly for the convex minimization problem of the sum of two convex functions in a Hilbert space. This paper is organized as follows. The notation, basic definitions, and some useful lemmas for proving our main result are given in Sect. 2. Our main result is in Sect. 3. In this section, we introduce a new modification of Linesearches A and B and present a double forward-backward algorithm based on the viscosity approximation method by using an inertial technique for solving Problem (1) with Assumptions (AI) and (AII). Subsequently, we prove a strong convergence theorem for the proposed method under some suitable control conditions. In Sect. 4, we apply our results to image and signal recovery problems. We analyze and illustrate the convergence behavior of our method and compare its efficiency with Methods 1–4.

2 Basic definitions and lemmas

The mathematical symbols adopted throughout this article are as follows. \(\mathbb{R}\), \(\mathbb{R}_{+}\), and \(\mathbb{R}_{++}\) are the set of real numbers, the set of nonnegative real numbers, and the set of positive real numbers, respectively, and \(\mathbb{N}\) stands for the set of positive integers. We suppose that \(\mathcal{X}\) is a real Hilbert space with an inner product \(\langle \cdot , \cdot \rangle \) and the induced norm \(\| \cdot \|\). Let \(I_{d}\) denote the identity operator on \(\mathcal{X}\). Weak and strong convergence of a sequence \(\{x_{k}\} \subset \mathcal{X}\) to \(p \in \mathcal{X}\) are denoted by \(x_{k} \rightharpoonup p\) and \(x_{k} \rightarrow p\), respectively.

Let E be a nonempty closed convex subset of \(\mathcal{X}\). An operator \(A : E \rightarrow \mathcal{X}\) is said to be Lipschitz continuous if there exists \(L > 0\) such that

$$ \Vert Ax - Ay \Vert \leq L \Vert x - y \Vert ,\quad \forall x, y \in E. $$

If A is Lipschitz continuous with a coefficient \(L \in (0, 1)\), then A is called a contraction. The metric projection from \(\mathcal{X}\) onto E, denoted by \(P_{E}\), assigns to each \(x \in \mathcal{X}\) the unique element \(P_{E}x \in E\) such that \(\| x - P_{E}x \| = \inf_{y \in E} \| x - y \|\). It is known that

$$ p^{*} = P_{E}x\quad \Longleftrightarrow \quad \bigl\langle x - p^{*}, y - p^{*} \bigr\rangle \leq 0,\quad \forall y \in E. $$

The following definition extends the concept of the metric projection.

Definition 2.1

([2, 20])

Let \(h : \mathcal{X} \to \mathbb{R}\cup \{\infty \} \) be a proper, lower semi-continuous, and convex function. The proximity (or proximal) operator of h, denoted by \(\operatorname {prox}_{h}\), assigns to each \(x \in \mathcal{X}\) the unique solution \(\operatorname {prox}_{h}x\) of the minimization problem

$$ \mathop {\operatorname {minimize}}_{y\in \mathcal{X}} h(y) + \frac{1}{2} \Vert x - y \Vert ^{2}. $$

In particular, if \(h := i_{E}\) is the indicator function of E (defined by \(i_{E}(x)=0\) if \(x \in E\) and \(i_{E}(x) = \infty \) otherwise), then \(\operatorname {prox}_{h} = P_{E}\).
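As a concrete instance (added here for illustration, not taken from the article), take \(h := \lambda \|\cdot \|_{1}\) on \(\mathbb{R}^{N}\) with \(\lambda > 0\). The minimization in Definition 2.1 separates over the coordinates, and each one-dimensional problem yields the soft-thresholding formula used in the LASSO experiments of Sect. 4:

$$ \bigl(\operatorname {prox}_{\alpha \lambda \Vert \cdot \Vert _{1}}(x) \bigr)_{i} = \mathop {\operatorname {argmin}}_{t \in \mathbb{R}} \biggl\{ \alpha \lambda \vert t \vert + \frac{1}{2}(t-x_{i})^{2} \biggr\} = \operatorname {sign}(x_{i})\max \bigl\{ \vert x_{i} \vert - \alpha \lambda , 0 \bigr\} ,\quad i = 1,\dots ,N. $$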

Let \(h : \mathcal{X} \to \mathbb{R}\cup \{\infty \} \) be a proper, lower semi-continuous, and convex function. The subdifferential ∂h of h is defined by

$$ \partial h(x) := \bigl\{ p \in \mathcal{X} : h(x) + \langle p, y - x \rangle \leq h(y) , \forall y \in \mathcal{X} \bigr\} ,\quad \forall x \in \mathcal{X}. $$

Here, we give some relationships between the proximity operator and the subdifferential operator. For \(\alpha >0\) and \(x \in \mathcal{X}\), we have

$$\begin{aligned}& \operatorname {prox}_{\alpha h} = (I_{d} + \alpha \partial h)^{-1} : \mathcal{X} \rightarrow \operatorname {dom}h, \end{aligned}$$
(2)
$$\begin{aligned}& \frac{x - \operatorname {prox}_{\alpha h}(x) }{\alpha }\in \partial h \bigl(\operatorname {prox}_{ \alpha h}(x) \bigr). \end{aligned}$$
(3)

We end this section by giving useful lemmas for proving our main result.

Lemma 2.2

([25])

Let \(h : \mathcal{X} \to \mathbb{R}\cup \{\infty \} \) be a proper, lower semi-continuous, and convex function. Let \(\{x_{k}\}\) and \(\{y_{k}\}\) be two sequences in \(\mathcal{X}\) such that \(y_{k} \in \partial h(x_{k})\) for all \(k \in \mathbb{N}\). If \(x_{k} \rightharpoonup x \) and \(y_{k} \rightarrow y\), then \(y \in \partial h(x)\).

Lemma 2.3

([29])

Let \(x, y \in \mathcal{X}\) and \(\xi \in [0, 1]\). Then the following properties hold on \(\mathcal{X}\):

  1. (i)

    \(\|\xi x +(1-\xi )y\|^{2}= \xi \|x\|^{2}+(1-\xi )\|y\|^{2}-\xi (1- \xi )\|x-y\|^{2}\);

  2. (ii)

    \(\|x \pm y\|^{2}=\|x\|^{2}\pm 2\langle x, y\rangle +\|y\|^{2} \);

  3. (iii)

    \(\|x + y\|^{2} \leq \|x\|^{2}+2\langle y, x + y\rangle \).

Lemma 2.4

([27])

Let \(\{a_{k}\} \subset \mathbb{R}_{+}\), \(\{b_{k}\} \subset \mathbb{R}\), and \(\{\xi _{k}\} \subset (0,1)\) be such that \(\sum_{k=1}^{\infty }\xi _{k}= \infty \) and

$$ a_{k+1}\leq (1-\xi _{k})a_{k} + \xi _{k} b_{k},\quad \forall k \in \mathbb{N}. $$

If \(\limsup_{i \to \infty }b_{k_{i}}\leq 0 \) for every subsequence \(\{a_{k_{i}}\} \) of \(\{a_{k}\} \) satisfying \(\liminf_{i \to \infty }(a_{k_{i}+1}-a_{k_{i}})\geq 0\), then \(\lim_{k\to \infty }a_{k}=0 \).

3 Method and convergence result

In this section, by modifying Linesearches A and B, we introduce a new linesearch and present an inertial double forward-backward splitting algorithm based on the viscosity approximation method for solving the convex minimization problem of the sum of two convex functions without any Lipschitz continuity assumption on the gradient. A strong convergence result of our proposed algorithm is analyzed and established.

We now focus on Problem (1) with Assumptions (AI) and (AII). For simplicity, let \(\textbf{h} := h_{1} + h_{2}\) and denote \(FB_{\alpha } := \operatorname {prox}_{\alpha h_{2}}(I_{d}-\alpha \nabla h_{1})\) for \(\alpha >0\). The set of minimizers of h is denoted by Γ, and we assume that \(\Gamma \neq \emptyset \). We begin by designing the following linesearch.

In other words, if \(\alpha := \mbox{Linesearch C}(x, \sigma , \theta , \delta )\), then \(\alpha = \sigma \theta ^{m}\), where m is the smallest nonnegative integer such that

$$\begin{aligned} &\frac{\alpha }{2} \bigl\{ \bigl\Vert \nabla h_{1} \bigl(FB_{\alpha }^{2}(x) \bigr)- \nabla h_{1} \bigl(FB_{\alpha }(x) \bigr) \bigr\Vert + \bigl\Vert \nabla h_{1} \bigl(FB_{ \alpha }(x) \bigr) - \nabla h_{1}(x) \bigr\Vert \bigr\} \\ &\quad \leq \delta \bigl( \bigl\Vert FB_{\alpha }^{2}(x)-FB_{\alpha }(x) \bigr\Vert + \bigl\Vert FB_{\alpha }(x)-x \bigr\Vert \bigr). \end{aligned}$$

It can be seen that the terminating condition of the while loop in Linesearch C is somewhat weaker than that in Linesearch B. So, it follows from the well-definedness of Linesearch B that our linesearch also stops after finitely many steps, see [16, Lemma 3.2].

Linesearch C

Fix \(x \in \mathcal{X}\), \(\sigma >0\), \(\delta >0\), and \(\theta \in (0,1)\)
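Since Linesearch C is likewise displayed only as a figure, the following Python sketch implements exactly the stopping rule spelled out above: \(\alpha = \sigma \theta ^{m}\) for the smallest nonnegative integer m satisfying the displayed inequality. The callables grad_h1 and prox_h2 are placeholders for \(\nabla h_{1}\) and \(\operatorname {prox}_{\alpha h_{2}}\).

```python
import numpy as np

def linesearch_C(x, sigma, theta, delta, grad_h1, prox_h2):
    """Linesearch C: shrink alpha = sigma * theta^m until the criterion holds."""
    alpha = sigma
    while True:
        fb1 = prox_h2(x - alpha * grad_h1(x), alpha)       # FB_alpha(x)
        fb2 = prox_h2(fb1 - alpha * grad_h1(fb1), alpha)   # FB_alpha^2(x)
        lhs = 0.5 * alpha * (np.linalg.norm(grad_h1(fb2) - grad_h1(fb1))
                             + np.linalg.norm(grad_h1(fb1) - grad_h1(x)))
        rhs = delta * (np.linalg.norm(fb2 - fb1) + np.linalg.norm(fb1 - x))
        if lhs <= rhs:
            return alpha
        alpha *= theta
```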

Using Linesearch C, we introduce a new viscosity forward-backward splitting algorithm with an inertial term as follows.

To show a strong convergence result of Method 5, the following tool is needed.

Method 5

An accelerated viscosity forward-backward algorithm with Linesearch C
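Because the full pseudocode of Method 5 is given only as a figure, the sketch below reconstructs the iteration from the relations (4)–(8) used in the proofs: an inertial point \(w_{k} = x_{k} + \beta _{k}(x_{k}-x_{k-1})\), the stepsize \(\alpha _{k}:= \mbox{Linesearch C}(w_{k}, \sigma , \theta , \delta )\), two forward-backward steps \(z_{k} = \operatorname {prox}_{\alpha _{k}h_{2}}(w_{k}-\alpha _{k}\nabla h_{1}(w_{k}))\) and \(y_{k} = \operatorname {prox}_{\alpha _{k}h_{2}}(z_{k}-\alpha _{k}\nabla h_{1}(z_{k}))\), and the viscosity step \(x_{k+1} = \gamma _{k}f(x_{k}) + (1-\gamma _{k})y_{k}\). The cap \(\beta _{k}\|x_{k}-x_{k-1}\| \leq \tau _{k}\) below is one standard way to realize the inertial rule (4) and is our assumption, not necessarily the authors' exact formulation; linesearch_C is the sketch given earlier.

```python
import numpy as np

def method5(x1, f, grad_h1, prox_h2, mu, gamma, tau,
            sigma, theta, delta, n_iter=500):
    """Sketch of Method 5 reconstructed from relations (5)-(8).

    f              : contraction, e.g. f = lambda x: 0.99 * x
    mu, gamma, tau : callables k -> mu_k, gamma_k, tau_k
    """
    x_prev, x = x1.copy(), x1.copy()
    for k in range(1, n_iter + 1):
        diff = x - x_prev
        nrm = np.linalg.norm(diff)
        # Inertial parameter, capped so that beta_k * ||x_k - x_{k-1}|| <= tau_k.
        beta = mu(k) if nrm == 0 else min(mu(k), tau(k) / nrm)
        w = x + beta * diff                                     # (5)
        alpha = linesearch_C(w, sigma, theta, delta, grad_h1, prox_h2)
        z = prox_h2(w - alpha * grad_h1(w), alpha)              # (6)
        y = prox_h2(z - alpha * grad_h1(z), alpha)              # (7)
        x_prev, x = x, gamma(k) * f(x) + (1.0 - gamma(k)) * y   # (8)
    return x
```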

Lemma 3.1

Let \(\{x_{k}\} \) be a sequence generated by Method 5 and \(p \in \mathcal{X}\). Then the following inequality holds:

$$\begin{aligned} \Vert w_{k}-p \Vert ^{2}- \Vert y_{k}-p \Vert ^{2}\geq{}& 2\alpha _{k} \bigl[\textbf{h}(y_{k})+ \textbf{h}(z_{k}) - 2 \textbf{h}(p) \bigr] \\ &{} +(1-8\delta ) \bigl( \Vert w_{k}-z_{k} \Vert ^{2} + \Vert z_{k}-y_{k} \Vert ^{2} \bigr),\quad \forall k \in \mathbb{N}. \end{aligned}$$

Proof

From (3), (6), and (7), we get

$$ \frac{w_{k}-z_{k}}{\alpha _{k}} -\nabla h_{1}(w_{k})\in \partial h_{2}(z_{k})\quad \text{and}\quad \frac{z_{k}-y_{k}}{\alpha _{k}} - \nabla h_{1}(z_{k}) \in \partial h_{2}(y_{k}). $$

Let \(p\in \mathcal{X}\). By the definition of the subdifferential of \(h_{2}\), the above expressions give

$$\begin{aligned} h_{2}(p)-h_{2}(z_{k}) & \geq \biggl\langle \frac{w_{k}-z_{k}}{\alpha _{k}} -\nabla h_{1}(w_{k}), p - z_{k} \biggr\rangle \\ &= \frac{1}{\alpha _{k}}\langle w_{k} - z_{k}, p - z_{k} \rangle + \bigl\langle \nabla h_{1}(w_{k}), z_{k} - p \bigr\rangle \end{aligned}$$
(9)

and

$$\begin{aligned} h_{2}(p)-h_{2}(y_{k}) & \geq \biggl\langle \frac{z_{k}-y_{k}}{\alpha _{k}} -\nabla h_{1}(z_{k}), p - y_{k} \biggr\rangle \\ &= \frac{1}{\alpha _{k}}\langle z_{k} - y_{k}, p - y_{k} \rangle + \bigl\langle \nabla h_{1}(z_{k}), y_{k} - p \bigr\rangle . \end{aligned}$$
(10)

Since \(h_{1}\) is convex and differentiable (Assumption (AI)), we have

$$ h_{1}(x)-h_{1}(y) \geq \bigl\langle \nabla h_{1}(y), x-y \bigr\rangle , \quad \forall x, y \in \mathcal{X}. $$
(11)

From (11), we get

$$ h_{1}(p)-h_{1}(w_{k}) \geq \bigl\langle \nabla h_{1}(w_{k}), p- w_{k} \bigr\rangle $$
(12)

and

$$ h_{1}(p)-h_{1}(z_{k}) \geq \bigl\langle \nabla h_{1}(z_{k}), p-z_{k} \bigr\rangle . $$
(13)

Combining (9), (10), (12), and (13), we have

$$\begin{aligned} & 2\textbf{h}(p) - \textbf{h}(z_{k}) - h_{2}(y_{k}) - h_{1}(w_{k}) \\ &\quad \geq \bigl\langle \nabla h_{1}(w_{k}), z_{k}-p \bigr\rangle + \bigl\langle \nabla h_{1}(z_{k}), y_{k}-p \bigr\rangle + \bigl\langle \nabla h_{1}(w_{k}), p-w_{k} \bigr\rangle \\ &\qquad {} + \bigl\langle \nabla h_{1}(z_{k}),p-z_{k} \bigr\rangle + \frac{1}{\alpha _{k}} \bigl[\langle w_{k}-z_{k}, p - z_{k} \rangle + \langle z_{k}-y_{k}, p-y_{k} \rangle \bigr] \\ &\quad = \bigl\langle \nabla h_{1}(w_{k}), z_{k} - w_{k} \bigr\rangle + \bigl\langle \nabla h_{1}(z_{k}), y_{k}-z_{k} \bigr\rangle \\ &\qquad {} + \frac{1}{\alpha _{k}} \bigl[\langle w_{k} - z_{k}, p - z_{k} \rangle +\langle z_{k}-y_{k}, p-y_{k} \rangle \bigr] \\ &\quad = \bigl\langle \nabla h_{1}(w_{k})-\nabla h_{1}(z_{k}), z_{k} - w_{k} \bigr\rangle + \bigl\langle \nabla h_{1}(z_{k}), z_{k} - w_{k} \bigr\rangle + \bigl\langle \nabla h_{1}(y_{k}), y_{k}-z_{k} \bigr\rangle \\ &\qquad {} + \bigl\langle \nabla h_{1}(z_{k})-\nabla h_{1}(y_{k}), y_{k} - z_{k} \bigr\rangle + \frac{1}{\alpha _{k}} \bigl[\langle w_{k} - z_{k}, p - z_{k} \rangle +\langle z_{k}-y_{k}, p-y_{k} \rangle \bigr] \\ &\quad \geq \bigl\langle \nabla h_{1}(z_{k}),z_{k}-w_{k} \bigr\rangle + \bigl\langle \nabla h_{1}(y_{k}),y_{k}-z_{k} \bigr\rangle - \bigl\Vert \nabla h_{1}(w_{k})- \nabla h_{1}(z_{k}) \bigr\Vert \Vert z_{k}-w_{k} \Vert \\ &\qquad {} - \bigl\Vert \nabla h_{1}(z_{k})-\nabla h_{1}(y_{k}) \bigr\Vert \Vert y_{k}-z_{k} \Vert \\ &\qquad {} + \frac{1}{\alpha _{k}} \bigl[\langle w_{k}-z_{k}, p-z_{k} \rangle +\langle z_{k}-y_{k}, p-y_{k} \rangle \bigr]. \end{aligned}$$

Again, applying (11), the above inequality becomes

$$\begin{aligned} & 2\textbf{h}(p) - \textbf{h}(z_{k}) - h_{2}(y_{k}) - h_{1}(w_{k}) \\ &\quad \geq h_{1}(y_{k}) - h_{1}(w_{k}) - \bigl\Vert \nabla h_{1}(w_{k})-\nabla h_{1}(z_{k}) \bigr\Vert \Vert z_{k}-w_{k} \Vert \\ & \qquad {}- \bigl\Vert \nabla h_{1}(z_{k})-\nabla h_{1}(y_{k}) \bigr\Vert \Vert y_{k}-z_{k} \Vert \\ &\qquad {} + \frac{1}{\alpha _{k}} \bigl[\langle w_{k}-z_{k}, p-z_{k} \rangle +\langle z_{k}-y_{k}, p-y_{k} \rangle \bigr] \\ & \quad \geq h_{1}(y_{k}) - h_{1}(w_{k}) - \bigl\Vert \nabla h_{1}(w_{k}) - \nabla h_{1}(z_{k}) \bigr\Vert \bigl( \Vert y_{k}-z_{k} \Vert + \Vert z_{k}-w_{k} \Vert \bigr) \\ &\qquad {} - \bigl\Vert \nabla h_{1}(z_{k})-\nabla h_{1}(y_{k}) \bigr\Vert \bigl( \Vert y_{k}-z_{k} \Vert + \Vert z_{k}-w_{k} \Vert \bigr) \\ &\qquad {} + \frac{1}{\alpha _{k}} \bigl[\langle w_{k}-z_{k}, p-z_{k} \rangle +\langle z_{k}-y_{k}, p-y_{k} \rangle \bigr] \\ &\quad = h_{1}(y_{k}) - h_{1}(w_{k})+ \frac{1}{\alpha _{k}} \bigl[\langle w_{k}-z_{k}, p-z_{k} \rangle +\langle z_{k}-y_{k}, p-y_{k} \rangle \bigr] \\ & \qquad {}- \bigl( \bigl\Vert \nabla h_{1}(w_{k})- \nabla h_{1}(z_{k}) \bigr\Vert + \bigl\Vert \nabla h_{1}(z_{k}) -\nabla h_{1}(y_{k}) \bigr\Vert \bigr) \bigl( \Vert y_{k}-z_{k} \Vert + \Vert z_{k} - w_{k} \Vert \bigr). \end{aligned}$$
(14)

Since \(\alpha _{k}:= \mbox{Linesearch C}(w_{k}, \sigma ,\theta , \delta )\), we have

$$\begin{aligned} &\frac{\alpha _{k}}{2} \bigl\{ \bigl\Vert \nabla h_{1}(y_{k})-\nabla h_{1}(z_{k}) \bigr\Vert + \bigl\Vert \nabla h_{1}(z_{k}) - \nabla h_{1}(w_{k}) \bigr\Vert \bigr\} \\ &\quad \leq \delta \bigl( \Vert y_{k}-z_{k} \Vert + \Vert z_{k}-w_{k} \Vert \bigr). \end{aligned}$$
(15)

From (14) and (15), we have

$$\begin{aligned} & \frac{1}{\alpha _{k}} \bigl[\langle w_{k}-z_{k}, z_{k}-p \rangle + \langle z_{k}-y_{k}, y_{k}-p \rangle \bigr] \\ &\quad \geq \textbf{h}(y_{k}) + \textbf{h}(z_{k}) - 2 \textbf{h}(p) \\ & \qquad {}- \bigl( \bigl\Vert \nabla h_{1}(w_{k})- \nabla h_{1}(z_{k}) \bigr\Vert + \bigl\Vert \nabla h_{1}(z_{k})-\nabla h_{1}(y_{k}) \bigr\Vert \bigr) \bigl( \Vert y_{k}-z_{k} \Vert + \Vert z_{k}-w_{k} \Vert \bigr) \\ &\quad \geq \textbf{h}(y_{k}) + \textbf{h}(z_{k}) - 2 \textbf{h}(p) - \frac{2 \delta }{ \alpha _{k}} \bigl( \Vert y_{k}-z_{k} \Vert + \Vert z_{k}-w_{k} \Vert \bigr)^{2} \\ &\quad \geq \textbf{h}(y_{k}) + \textbf{h}(z_{k}) - 2 \textbf{h}(p) - \frac{4\delta }{\alpha _{k}} \bigl( \Vert y_{k}-z_{k} \Vert ^{2} + \Vert z_{k}-w_{k} \Vert ^{2} \bigr). \end{aligned}$$
(16)

By Lemma 2.3(ii), we get

$$ \langle w_{k}-z_{k}, z_{k}-p \rangle = \frac{1}{2} \bigl( \Vert w_{k}-p \Vert ^{2}- \Vert w_{k}-z_{k} \Vert ^{2}- \Vert z_{k}-p \Vert ^{2} \bigr), $$
(17)

and

$$ \langle z_{k}-y_{k}, y_{k}-p \rangle = \frac{1}{2} \bigl( \Vert z_{k}-p \Vert ^{2}- \Vert z_{k}-y_{k} \Vert ^{2}- \Vert y_{k}-p \Vert ^{2} \bigr). $$
(18)

Hence, we can conclude from (16)–(18) that

$$\begin{aligned} \Vert w_{k}-p \Vert ^{2}- \Vert y_{k}-p \Vert ^{2}\geq{}& 2\alpha _{k} \bigl[\textbf{h}(y_{k})+ \textbf{h}(z_{k}) - 2 \textbf{h}(p) \bigr] \\ &{} +(1-8\delta ) \bigl( \Vert w_{k}-z_{k} \Vert ^{2} + \Vert z_{k}-y_{k} \Vert ^{2} \bigr), \quad \forall k \in \mathbb{N} . \end{aligned}$$

 □

Now we are in a position to prove our main theorem.

Theorem 3.2

Let \(\{x_{k}\} \subset \mathcal{X}\) be a sequence generated by Method 5. Then:

  1. (i)

    For \(p \in \Gamma \), we have

    $$ \Vert x_{k+1}-p \Vert \leq \max \biggl\{ \Vert x_{k} -p \Vert , \frac{\frac{\beta _{k}}{\gamma _{k}} \Vert x_{k}-x_{k-1} \Vert + \Vert f(p) -p \Vert }{1-\eta } \biggr\} ,\quad \forall k \in \mathbb{N}. $$
  2. (ii)

    If the sequences \(\{\alpha _{k}\}\), \(\{\gamma _{k}\}\), and \(\{\tau _{k}\}\) satisfy the following conditions:

    1. (Ci)

      \(\alpha _{k}\geq a\) for some \(a \in \mathbb{R}_{++}\);

    2. (Cii)

      \(\gamma _{k} \in (0, 1)\) such that \(\lim_{k\to \infty } \gamma _{k}=0\) and \(\sum_{k=1}^{\infty }\gamma _{k} =\infty \);

    3. (Ciii)

      \(\lim_{k\to \infty }\tau _{k}/\gamma _{k}=0\),

    then \(\{x_{k}\}\) converges strongly to a point \(p^{*} \in \Gamma \), where \(p^{*}=P_{\Gamma }f(p^{*})\).

Proof

Let \(p \in \Gamma \). Applying Lemma 3.1, we have

$$\begin{aligned} \Vert w_{k}-p \Vert ^{2}- \Vert y_{k}-p \Vert ^{2}\geq {}&2\alpha _{k} \bigl[\textbf{h}(y_{k}) - \textbf{h}(p) + \textbf{h}(z_{k}) - \textbf{h}(p) \bigr] \\ &{} +(1-8\delta ) \bigl( \Vert w_{k}-z_{k} \Vert ^{2} + \Vert z_{k}-y_{k} \Vert ^{2} \bigr) \\ \geq {}&(1-8\delta ) \bigl( \Vert w_{k}-z_{k} \Vert ^{2} + \Vert z_{k}-y_{k} \Vert ^{2} \bigr) \end{aligned}$$
(19)
$$\begin{aligned} \geq{}& 0 . \end{aligned}$$
(20)

From (19) and (5) and by Lemma 2.3(ii), we get

$$\begin{aligned} \Vert y_{k}-p \Vert ^{2}\leq{}& \Vert w_{k}-p \Vert ^{2} -(1-8\delta ) \bigl( \Vert w_{k}-z_{k} \Vert ^{2} + \Vert z_{k}-y_{k} \Vert ^{2} \bigr) \\ ={}& \Vert x_{k}-p \Vert ^{2} + \beta _{k}^{2} \Vert x_{k}-x_{k-1} \Vert ^{2} + 2\beta _{k} \langle x_{k} - p, x_{k}-x_{k-1} \rangle \\ &{} -(1-8\delta ) \bigl( \Vert w_{k}-z_{k} \Vert ^{2} + \Vert z_{k}-y_{k} \Vert ^{2} \bigr). \\ \leq{}& \Vert x_{k}-p \Vert ^{2} + \beta _{k}^{2} \Vert x_{k}-x_{k-1} \Vert ^{2} + 2\beta _{k} \Vert x_{k}-p \Vert \Vert x_{k}-x_{k-1} \Vert \\ &{} -(1-8\delta ) \bigl( \Vert w_{k}-z_{k} \Vert ^{2} + \Vert z_{k}-y_{k} \Vert ^{2} \bigr). \end{aligned}$$
(21)

From (20) and (5), we get

$$\begin{aligned} \Vert y_{k}-p \Vert &\leq \Vert w_{k}-p \Vert \leq \Vert x_{k}-p \Vert + \beta _{k} \Vert x_{k}-x_{k-1} \Vert . \end{aligned}$$
(22)

By (8) and (22), we have

$$\begin{aligned} \Vert x_{k+1}-p \Vert \leq {}&\gamma _{k} \bigl\Vert f(x_{k}) - f(p) \bigr\Vert + \gamma _{k} \bigl\Vert f(p) -p \bigr\Vert + (1-\gamma _{k}) \Vert y_{k}-p \Vert \\ \leq{}& \gamma _{k} \eta \Vert x_{k} - p \Vert + \gamma _{k} \bigl\Vert f(p) -p \bigr\Vert + (1- \gamma _{k}) \Vert y_{k}-p \Vert \\ \leq{}& \bigl(1-\gamma _{k}(1-\eta ) \bigr) \Vert x_{k} -p \Vert + \gamma _{k} \bigl\Vert f(p) -p \bigr\Vert \\ &{} + (1-\gamma _{k})\beta _{k} \Vert x_{k}-x_{k-1} \Vert \\ \leq {}& \bigl(1-\gamma _{k}(1-\eta ) \bigr) \Vert x_{k} -p \Vert + \gamma _{k} \biggl( \frac{\beta _{k}}{\gamma _{k}} \Vert x_{k}-x_{k-1} \Vert + \bigl\Vert f(p) -p \bigr\Vert \biggr) \\ \leq {}&\max \biggl\{ \Vert x_{k} -p \Vert , \frac{\frac{\beta _{k}}{\gamma _{k}} \Vert x_{k}-x_{k-1} \Vert + \Vert f(p) -p \Vert }{1-\eta } \biggr\} . \end{aligned}$$

Therefore, we obtain (i). By (4) and using (Ciii), we have \(\frac{\beta _{k}}{\gamma _{k}}\|x_{k}-x_{k-1}\| \rightarrow 0\) as \(k \rightarrow \infty \), and so there exists \(M>0 \) such that \(\frac{\beta _{k}}{\gamma _{k}}\|x_{k}-x_{k-1}\|\leq M \) for all \(k \in \mathbb{N}\). Thus,

$$ \Vert x_{k+1}-p \Vert \leq \max \biggl\{ \Vert x_{k} -p \Vert , \frac{M + \Vert f(p) -p \Vert }{1-\eta } \biggr\} . $$

By mathematical induction, we deduce that

$$ \Vert x_{k}-p \Vert \leq \max \biggl\{ \Vert x_{1}-p \Vert , \frac{M+ \Vert f(p)-p \Vert }{1-\eta } \biggr\} , \quad \forall k \in \mathbb{N}. $$

Hence, \(\{x_{k}\} \) is bounded. One can see that the operator \(P_{\Gamma }f\) is a contraction. By the Banach contraction principle, there is a unique point \(p^{*} \in \Gamma \) such that \(p^{*}=P_{\Gamma }f(p^{*})\). It follows from the characterization of \(P_{\Gamma }\) that

$$ \bigl\langle f \bigl(p^{*} \bigr) - p^{*}, p - p^{*} \bigr\rangle \leq 0,\quad \forall p \in \Gamma . $$
(23)

Using Lemma 2.3(i), (iii) and (21), we have

$$\begin{aligned} \bigl\Vert x_{k+1}-p^{*} \bigr\Vert ^{2} \leq& \bigl\Vert (1-\gamma _{k}) \bigl(y_{k}-p^{*} \bigr) + \gamma _{k} \bigl(f(x_{k}) -f \bigl(p^{*} \bigr) \bigr) \bigr\Vert ^{2} \\ & {}+ 2\gamma _{k} \bigl\langle f \bigl(p^{*} \bigr)-p^{*}, x_{k+1}-p^{*} \bigr\rangle \\ \leq{}& (1-\gamma _{k}) \bigl\Vert y_{k}-p^{*} \bigr\Vert ^{2} + \gamma _{k} \bigl\Vert f(x_{k}) - f \bigl(p^{*} \bigr) \bigr\Vert ^{2} \\ &{} + 2\gamma _{k} \bigl\langle f \bigl(p^{*} \bigr)-p^{*}, x_{k+1}-p^{*} \bigr\rangle \\ \leq{}& (1-\gamma _{k}) \bigl\Vert x_{k}-p^{*} \bigr\Vert ^{2} + \beta _{k}^{2} \Vert x_{k}-x_{k-1} \Vert ^{2} \\ &{} + 2\beta _{k} \bigl\Vert x_{k}- p^{*} \bigr\Vert \Vert x_{k}-x_{k-1} \Vert \\ &{} + \gamma _{k}\eta \bigl\Vert x_{k} -p^{*} \bigr\Vert ^{2} + 2\gamma _{k} \bigl\langle f \bigl(p^{*} \bigr)-p^{*}, x_{k+1}-p^{*} \bigr\rangle \\ &{} -(1-\gamma _{k}) (1-8\delta ) \bigl( \Vert w_{k}-z_{k} \Vert ^{2} + \Vert z_{k}-y_{k} \Vert ^{2} \bigr) \\ ={}& \bigl(1-\gamma _{k}(1-\eta ) \bigr) \bigl\Vert x_{k}-p^{*} \bigr\Vert ^{2} + \gamma _{k}(1-\eta )b_{k} \\ &{} -(1-\gamma _{k}) (1-8\delta ) \bigl( \Vert w_{k}-z_{k} \Vert ^{2} + \Vert z_{k}-y_{k} \Vert ^{2} \bigr), \end{aligned}$$
(24)

where

$$b_{k} := \frac{1}{1-\eta } \biggl(2\bigl\langle f \bigl(p^{*}\bigr)-p^{*}, x_{k+1}-p^{*} \bigr\rangle + \frac{\beta _{k}^{2}}{\gamma _{k}}\|x_{k}-x_{k-1} \|^{2} + 2 \frac{\beta _{k}}{\gamma _{k}}\|x_{k}-p^{*} \|\|x_{k}-x_{k-1}\| \biggr). $$

It follows that

$$\begin{aligned} (1-\gamma _{k}) (1-8\delta ) \bigl( \Vert w_{k}-z_{k} \Vert ^{2} + \Vert z_{k}-y_{k} \Vert ^{2} \bigr)\leq{}& \bigl\Vert x_{k}-p^{*} \bigr\Vert ^{2} - \bigl\Vert x_{k+1}-p^{*} \bigr\Vert ^{2} \\ & {}+ \gamma _{k}(1-\eta )M^{\prime }, \end{aligned}$$
(25)

where \(M^{\prime } = \sup \{b_{k} : k \in \mathbb{N}\}\).

Let us show that \(\{x_{k}\} \) converges to \(p^{*}\). Set \(a_{k} := \|x_{k}-p^{*}\|^{2}\) and \(\xi _{k} := \gamma _{k}(1-\eta )\). From (24), we have the following inequality:

$$ a_{k+1}\leq (1-\xi _{k})a_{k} + \xi _{k} b_{k}. $$

To apply Lemma 2.4, we have to show that \(\limsup_{i \to \infty }b_{k_{i}}\leq 0 \) whenever a subsequence \(\{a_{k_{i}}\} \) of \(\{a_{k}\} \) satisfies

$$ \liminf_{i \to \infty }(a_{k_{i}+1}-a_{k_{i}}) \geq 0. $$
(26)

To do this, suppose that \(\{a_{k_{i}}\} \subseteq \{a_{k}\} \) is a subsequence satisfying (26). Then, by (25) and (Cii), we have

$$\begin{aligned} &\limsup_{i\to \infty }(1-\gamma _{k_{i}}) (1-8\delta ) \bigl( \Vert w_{k_{i}}-z_{k_{i}} \Vert ^{2} + \Vert z_{k_{i}}-y_{k_{i}} \Vert ^{2} \bigr) \\ &\quad \leq \limsup_{i\to \infty }(a_{k_{i}}-a_{k_{i}+1}) + (1-\eta )M^{ \prime }\lim_{i\to \infty } \gamma _{k_{i}} \\ &\quad =-\liminf_{i\to \infty }(a_{k_{i}+1}-a_{k_{i}}) \\ &\quad \leq 0, \end{aligned}$$

which implies

$$ \lim_{i\to \infty } \Vert w_{k_{i}}-z_{k_{i}} \Vert =\lim_{i\to \infty } \Vert z_{k_{i}}-y_{k_{i}} \Vert =0. $$
(27)

Using (Cii), (Ciii), and (27), we have

$$\begin{aligned} \Vert x_{k_{i}+1} -x_{k_{i}} \Vert \leq{}& \gamma _{k_{i}} \bigl\Vert f(x_{k_{i}})-y_{k_{i}} \bigr\Vert + \Vert y_{k_{i}}-x_{k_{i}} \Vert \\ \leq{}& \gamma _{k_{i}} \bigl\Vert f(x_{k_{i}})-y_{k_{i}} \bigr\Vert + \Vert y_{k_{i}}- w_{k_{i}} \Vert + \Vert w_{k_{i}}-x_{k_{i}} \Vert \\ \leq{}& \gamma _{k_{i}} \bigl\Vert f(x_{k_{i}})-y_{k_{i}} \bigr\Vert + \Vert y_{k_{i}}- z_{k_{i}} \Vert + \Vert z_{k_{i}} - w_{k_{i}} \Vert \\ & {}+ \frac{\beta _{k_{i}}}{\gamma _{k_{i}}} \Vert x_{k_{i}}-x_{k_{i}-1} \Vert \\ \to{}& 0 \end{aligned}$$
(28)

as \(i \to \infty \). We next show that \(\limsup_{i\to \infty }b_{k_{i}}\leq 0\). Clearly, it suffices to show that

$$ \limsup_{i\to \infty } \bigl\langle f \bigl(p^{*} \bigr)-p^{*}, x_{k_{i}+1}-p^{*} \bigr\rangle \leq 0. $$

Let \(\{ x_{k_{i_{j}}} \} \) be a subsequence of \(\{ x_{k_{i}} \} \) such that

$$ \lim_{j\to \infty } \bigl\langle f \bigl(p^{*} \bigr)-p^{*}, x_{k_{i_{j}}}-p^{*} \bigr\rangle = \limsup_{i\to \infty } \bigl\langle f \bigl(p^{*} \bigr)-p^{*}, x_{k_{i}}-p^{*} \bigr\rangle . $$

Since \(\{ x_{k_{i_{j}}} \} \) is bounded, there exists a subsequence \(\{ x_{k_{i_{j_{p}}}} \} \) of \(\{ x_{k_{i_{j}}} \} \) such that \(x_{k_{i_{j_{p}}}}\rightharpoonup \bar{p}\in \mathcal{X}\). Without loss of generality, we may assume that \(x_{k_{i_{j}}}\rightharpoonup \bar{p}\). Since \(\|w_{k_{i_{j}}}-x_{k_{i_{j}}}\| = \beta _{k_{i_{j}}}\|x_{k_{i_{j}}}-x_{k_{i_{j}}-1}\| \rightarrow 0\) by (4), (Cii), and (Ciii), and \(\|w_{k_{i_{j}}}-z_{k_{i_{j}}}\| \rightarrow 0\) by (27), we also have \(z_{k_{i_{j}}}\rightharpoonup \bar{p}\). From (AI), we have \(\|\nabla h_{1}(w_{k_{i_{j}}})-\nabla h_{1}(z_{k_{i_{j}}})\| \rightarrow 0\) as \(j \rightarrow \infty \). This together with (27) and (Ci) yields

$$\begin{aligned} \lim_{j \to \infty } \biggl\Vert \frac{w_{k_{i_{j}}}-z_{k_{i_{j}}}}{\alpha _{k_{i_{j}}}} +\nabla h_{1}(z_{k_{i_{j}}})- \nabla h_{1}(w_{k_{i_{j}}}) \biggr\Vert = 0. \end{aligned}$$
(29)

By (3), we get

$$\begin{aligned} \frac{w_{k_{i_{j}}}-z_{k_{i_{j}}}}{\alpha _{k_{i_{j}}}} +\nabla h_{1}(z_{k_{i_{j}}})- \nabla h_{1}(w_{k_{i_{j}}})\in \partial h_{2}(z_{k_{i_{j}}}) +\nabla h_{1}(z_{k_{i_{j}}})= \partial \textbf{h}(z_{k_{i_{j}}}). \end{aligned}$$
(30)

Now, by (29), (30), and \(z_{k_{i_{j}}}\rightharpoonup \bar{p}\), it follows from Lemma 2.2 that \(0\in \partial \textbf{h}(\bar{p})\). Hence, \(\bar{p} \in \Gamma \). From (28) and (23), we have

$$\begin{aligned} \limsup_{i\to \infty } \bigl\langle f \bigl(p^{*} \bigr)-p^{*}, x_{k_{i}+1}-p^{*} \bigr\rangle \leq{}& \limsup_{i\to \infty } \bigl\langle f \bigl(p^{*} \bigr)-p^{*}, x_{k_{i}+1}-x_{k_{i}} \bigr\rangle \\ &{} +\limsup_{i\to \infty } \bigl\langle f \bigl(p^{*} \bigr)-p^{*}, x_{k_{i}}-p^{*} \bigr\rangle \\ ={}& \lim_{j\to \infty } \bigl\langle f \bigl(p^{*} \bigr)-p^{*}, x_{k_{i_{j}}}-p^{*} \bigr\rangle \\ ={}& \bigl\langle f \bigl(p^{*} \bigr)-p^{*}, \bar{p}-p^{*} \bigr\rangle \\ \leq{}& 0. \end{aligned}$$

By Lemma 2.4, we can conclude that \(\{x_{k}\}\) converges to \(p^{*}\). The proof is complete. □

Note that the stepsize condition on \(\{\alpha _{k}\}\) in Theorem 3.2 requires \(\{\alpha _{k}\}\) to be bounded below by a positive real number. Next, we show that this condition is ensured under the Lipschitz continuity assumption on \(\nabla h_{1}\).

Proposition 3.3

Let \(\{\alpha _{k}\} \) be the sequence generated by Linesearch C of Method 5. If \(\nabla h_{1} : \mathcal{X} \rightarrow \mathcal{X}\) is Lipschitz continuous with a constant \(L>0 \), then \(\alpha _{k}\geq \min \{ \sigma , 2\delta \theta /L \} \) for all \(k \in \mathbb{N}\).

Proof

Let \(\nabla h_{1} \) be L-Lipschitz continuous on \(\mathcal{X}\). Since \(\alpha _{k}:= \mbox{Linesearch C}(w_{k}, \sigma ,\theta , \delta )\), we have \(\alpha _{k} \leq \sigma \) for all \(k \in \mathbb{N}\). If \(\alpha _{k} < \sigma \), then \(\alpha _{k} = \sigma \theta ^{m_{k}}\), where \(m_{k}\) is the smallest positive integer such that

$$\begin{aligned} &\frac{\alpha _{k}}{2} \bigl\{ \bigl\Vert \nabla h_{1} \bigl(FB_{\alpha _{k}}^{2}(w_{k}) \bigr)- \nabla h_{1} \bigl(FB_{\alpha _{k}}(w_{k}) \bigr) \bigr\Vert + \bigl\Vert \nabla h_{1} \bigl(FB_{ \alpha _{k}}(w_{k}) \bigr) - \nabla h_{1}(w_{k}) \bigr\Vert \bigr\} \\ &\quad \leq \delta \bigl( \bigl\Vert FB_{\alpha _{k}}^{2}(w_{k})-FB_{\alpha _{k}}(w_{k}) \bigr\Vert + \bigl\Vert FB_{\alpha _{k}}(w_{k})-w_{k} \bigr\Vert \bigr). \end{aligned}$$

Set \(\hat{\alpha _{k}}:= \alpha _{k}/\theta \). Since \(m_{k}\) is the smallest such integer, \(\hat{\alpha _{k}}\) does not satisfy the stopping criterion of Linesearch C. Combining this with the Lipschitz continuity of \(\nabla h_{1}\), we have

$$\begin{aligned} &\frac{\hat{\alpha _{k}}L}{2} \bigl( \bigl\Vert FB_{\hat{\alpha _{k}}}^{2}(w_{k})-FB_{ \hat{\alpha _{k}}}(w_{k}) \bigr\Vert + \bigl\Vert FB_{\hat{\alpha _{k}}}(w_{k})-w_{k} \bigr\Vert \bigr) \\ &\quad \geq \frac{\hat{\alpha _{k}}}{2} \bigl( \bigl\Vert \nabla h_{1} \bigl(FB_{ \hat{\alpha _{k}}}^{2}(w_{k}) \bigr)-\nabla h_{1} \bigl(FB_{\hat{\alpha _{k}}}(w_{k}) \bigr) \bigr\Vert + \bigl\Vert \nabla h_{1} \bigl(FB_{\hat{\alpha _{k}}}(w_{k}) \bigr) - \nabla h_{1}(w_{k}) \bigr\Vert \bigr) \\ &\quad > \delta \bigl( \bigl\Vert FB_{\hat{\alpha _{k}}}^{2}(w_{k})-FB_{ \hat{\alpha _{k}}}(w_{k}) \bigr\Vert + \bigl\Vert FB_{\hat{\alpha _{k}}}(w_{k})-w_{k} \bigr\Vert \bigr), \end{aligned}$$

it follows that \(\alpha _{k} > 2\delta \theta /L\). Therefore, \(\alpha _{k}\geq \min \{ \sigma , 2\delta \theta /L \} \) for all \(k \in \mathbb{N}\). □

Remark 3.4

It is worth mentioning that the Lipschitz continuity of the gradient of \(h_{1}\) is sufficient for Assumption (AI). However, even under this stronger assumption, the computation of the stepsize \(\alpha _{k}\) generated by Linesearch C remains independent of the Lipschitz constant.

4 Numerical experiments in image and signal recovery

In this section, we apply the convex minimization problem, Problem (1), to image and signal recovery problems. We analyze and illustrate the convergence behavior of Method 5 for recovering images and signals, and compare its efficiency with Methods 1–4. All experiments and visualizations are performed on a laptop computer (Intel Core-i5/4.00 GB RAM/Windows 8/64-bit) with MATLAB.

Many problems in image and signal processing, in particular image/signal recovery, amount to inferring an image/signal \(x \in \mathbb{R}^{N}\) from an observed image/signal \(y \in \mathbb{R}^{M}\) via the linear model

$$ y = Tx + \varepsilon , $$
(31)

where \(T : \mathbb{R}^{N} \rightarrow \mathbb{R}^{M}\) is a bounded linear operator and ε is additive noise. To approximate the original image/signal in (31), we minimize the effect of the noise ε by solving the LASSO problem [31]

$$ \min_{x \in \mathbb{R}^{N}} \biggl\{ \frac{1}{2} \Vert y - Tx \Vert _{2}^{2} + \lambda \Vert x \Vert _{1} \biggr\} , $$
(32)

where λ is a positive parameter, \(\|\cdot \|_{1} \) is the \(l_{1} \)-norm, and \(\|\cdot \|_{2}\) is the Euclidean norm. It is worth noting that the LASSO problem (32) is a special case of Problem (1) with

$$ h_{1}(x) = \frac{1}{2} \Vert y - Tx \Vert _{2}^{2} \quad \text{and}\quad h_{2}(x) = \lambda \Vert x \Vert _{1}. $$
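In code, the ingredients the algorithms need from this splitting are the gradient \(\nabla h_{1}(x) = T^{T}(Tx - y)\) and the proximity operator of \(\alpha h_{2}\) (soft-thresholding with threshold αλ). The snippet below builds these two oracles and some illustrative data; the dimensions follow Example 4.3, while the noise level and random seed are arbitrary assumptions of ours.

```python
import numpy as np

def make_lasso_oracles(T, y, lam):
    # h1(x) = (1/2)||y - Tx||_2^2,  h2(x) = lam * ||x||_1
    grad_h1 = lambda x: T.T @ (T @ x - y)
    prox_h2 = lambda v, a: np.sign(v) * np.maximum(np.abs(v) - a * lam, 0.0)
    return grad_h1, prox_h2

# Illustrative data (dimensions as in Example 4.3; the noise level is an assumption).
rng = np.random.default_rng(0)
N, M, m = 512, 256, 15
T = rng.standard_normal((M, N))
x_true = np.zeros(N)
idx = rng.choice(N, size=m, replace=False)
x_true[idx] = rng.uniform(-2.0, 2.0, size=m)
y = T @ x_true + 0.01 * rng.standard_normal(M)

grad_h1, prox_h2 = make_lasso_oracles(T, y, lam=1.0)
# grad_h1 and prox_h2 can now be passed to the linesearch_C / method5 sketches above.
```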

4.1 Image recovery

In the following two examples, we set the regularization parameter in the LASSO problem (32) to \(\lambda := 10^{-5}\). The peak signal-to-noise ratio (PSNR) in decibels (dB) [30] and the structural similarity index measure (SSIM) [33] are used as image quality metrics. The maximum iteration number for all deblurring methods is fixed at 500.

Example 4.1

Consider a prototype image (Lenna) of size \(256\times 256 \), which is contaminated by a Gaussian blur of filter size \(7 \times 7 \) with standard deviation \(\hat{\sigma } = 6\) and noise \(10^{-5}\), see the original image (a) and the blurred image (b) in Fig. 1. The PSNR and SSIM values of the blurred image are 24.6547 dB and 0.4770, respectively. The parameters of our method (Method 5) are chosen as follows:

$$\begin{aligned}& \sigma =2,\qquad \theta =0.9,\qquad \delta =0.1,\qquad \tau _{k}= \frac{10^{50}}{k^{2}},\qquad \gamma _{k}= \frac{1}{50k},\qquad \mu _{k}= \frac{t_{k}-1}{t_{k+1}}, \\& t_{k+1}= \frac{1 + \sqrt{1+4t_{k}^{2}}}{2},\quad t_{1}=1. \end{aligned}$$

We consider a contraction f of the form \(f(x)= \eta x \), where \(0 < \eta < 1\), and take the parameter η in the following five cases:

$$\begin{aligned}& \textit{Case 1: } \eta = 0.1, \qquad \textit{Case 2: } \eta = 0.3,\qquad \textit{Case 3: } \eta = 0.5, \qquad \textit{Case 4: } \eta = 0.8,\\& \textit{Case 5: } \eta = 0.99. \end{aligned}$$
Figure 1

Restoration for the Lenna image at the 500th iteration. (a) Original image; (b) Blurry image contaminated by Gaussian blur; (c)–(g) Restored images by Method 5 with different parameters η

Now, the experiments for recovering the Lenna image by Method 5 with Cases 1–5 are shown in Figs. 1 and 2. It is observed from Fig. 2 that Case 5 gives higher values of PSNR and SSIM than the other cases.

Figure 2

Plot of PSNR and SSIM of restored images by Method 5

Example 4.2

Consider a prototype image (hall) of size \(256\times 256 \), which is contaminated by a Gaussian blur of filter size \(9 \times 9 \) with standard deviation \(\hat{\sigma } = 4\) and noise \(10^{-5}\), see the original image (a) and the blurred image (b) in Fig. 3. The parameters for each deblurring method are set as in Table 1.

Figure 3

Restoration for the hall image at the 500th iteration. (a) Original image; (b) Blurry image contaminated by Gaussian blur; (c)–(g) Restored images by Methods 1–5

Figure 4

The comparison of PSNR and SSIM values for the blurred image and restored images by Methods 1–5 at the 500th iteration

Table 1 The parameters for the deblurring methods

Also, we define a contraction f by \(f(x)= 0.99x\) for Methods 4 and 5.

The comparative experiments for recovering the hall image by Methods 1–5 are shown in Figs. 3–5. It can be seen that Method 5 gives higher values of PSNR and SSIM than the other tested methods, so our method has the highest image recovery efficiency among the tested methods.

Figure 5

Plot of PSNR and SSIM of restored images by Methods 1–5

4.2 Signal recovery

Example 4.3

In the LASSO problem (32), the matrix \(T \in \mathbb{R}^{M \times N}\) is generated by the normal distribution with mean zero and variance one. The vector \(x\in \mathbb{R}^{N} \) is generated by a uniform distribution in \([-2, 2]\) with m nonzero elements. The vector y is generated from (31) with Gaussian noise at a signal-to-noise ratio (SNR) of 40 dB. The regularization parameter is taken as \(\lambda = 1\). The parameters of Methods 1–5 are set as in Table 1 of Example 4.2. We use the mean squared error (MSE) as the stopping criterion, defined by

$$ \operatorname{MSE}(k) := \frac{1}{N} \bigl\Vert x_{k}-p^{*} \bigr\Vert _{2}^{2}\leq 10^{-5}, $$

where \(p^{*}\) is an original signal.

Now, the experiments for recovering two signals by Methods 1–5 are shown in Figs. 6 and 7, and the graphs of the MSE for the two cases are shown in Fig. 8. It is observed from Figs. 6–8 that the convergence speed of Method 5 is better than that of Methods 1–4, and hence our method has a better convergence behavior than the other tested methods in terms of the number of iterations.

Figure 6

Signal recovery in case of \(N=512\), \(M=256\), \(m=15\). (a) Original signal; (b) Observed data; (c)–(g) Recovered signals by Methods 1–5

Figure 7

Signal recovery in case of \(N=1024\), \(M=512\), \(m=40\). (a) Original signal; (b) Observed data; (c)–(g) Recovered signals by Methods 1–5

Figure 8

The MSE versus the number of iterations for recovering the signals by Methods 1–5

5 Conclusion

In this work, we discuss the convex minimization problem of the sum of two convex functions in a Hilbert space. The challenge of removing the Lipschitz continuity assumption on the gradient of the function motivated us to study linesearch methods. We introduce a new linesearch and propose an inertial viscosity forward-backward algorithm whose stepsize does not depend on any Lipschitz constant of the gradient. We prove that the sequence generated by the proposed method converges strongly to a minimizer of the sum of the two convex functions under some mild control conditions. As applications, we apply our method to image and signal recovery problems. The comparative experiments show that our method is more efficient than the well-known methods in [9, 16, 18].

Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

References

  1. Aremu, K.O., Izuchukwu, C., Grace, O.N., Mewomo, O.T.: Multi-step iterative algorithm for minimization and fixed point problems in p-uniformly convex metric spaces. J. Ind. Manag. Optim. 13(5) (2020). https://doi.org/10.3934/jimo.2020063

  2. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)


  3. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)


  4. Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Belmont (1997)


  5. Bussaban, L., Suantai, S., Kaewkhao, A.: A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpath. J. Math. 36, 35–44 (2020)


  6. Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)


  7. Combettes, P.L., Pesquet, J.C.: A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 1, 564–574 (2007)


  8. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)


  9. Cruz, J.Y.B., Nghia, T.T.A.: On the convergence of the forward-backward splitting method with linesearches. Optim. Methods Softw. 31, 1209–1238 (2016)


  10. Daubechies, I., Defrise, M., Mol, C.D.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57, 1413–1457 (2004)


  11. Dunn, J.C.: Convexity, monotonicity, and gradient processes in Hilbert space. J. Math. Anal. Appl. 53, 145–158 (1976)


  12. Figueiredo, M., Nowak, R.: An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 12, 906–916 (2003)


  13. Hale, E., Yin, W., Zhang, Y.: A fixed-point continuation method for \(l_{1}\)-regularized minimization with applications to compressed sensing. Rice University, Department of Computational and Applied Mathematics (2007)

  14. Hanjing, A., Suantai, S.: A fast image restoration algorithm based on a fixed point and optimization. Mathematics 8, 378 (2020). https://doi.org/10.3390/math8030378


  15. Izuchukwu, C., Grace, O.N., Mewomo, O.T.: An inertial method for solving generalized split feasibility problems over the solution set of monotone variational inclusions. Optimization (2020). https://doi.org/10.1080/02331934.2020.1808648


  16. Kankam, K., Pholasa, N., Cholamjiak, P.: On convergence and complexity of the modified forward-backward method involving new linesearches for convex minimization. Math. Methods Appl. Sci. 42, 1352–1362 (2019)


  17. Lin, L.J., Takahashi, W.: A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 16, 429–453 (2012)


  18. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)


  19. Martinet, B.: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 4, 154–158 (1970)


  20. Moreau, J.J.: Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris, Sér. A Math. 255, 2897–2899 (1962)


  21. Moudafi, A.: Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 241, 46–55 (2000)


  22. Nesterov, Y.: A method for solving the convex programming problem with convergence rate \(O(1/k^{2})\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983)


  23. Okeke, C.C., Izuchukwu, C.: A strong convergence theorem for monotone inclusion and minimization problems in complete CAT(0) spaces. Optim. Methods Softw. 34(6), 1168–1183 (2019)


  24. Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4, 1–17 (1964)


  25. Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)


  26. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 17, 877–898 (1976)


  27. Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 75, 724–750 (2012)


  28. Suantai, S., Kankam, K., Cholamjiak, P.: A novel forward-backward algorithm for solving convex minimization problem in Hilbert spaces. Mathematics 8, 42 (2020). https://doi.org/10.3390/math8010042


  29. Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)


  30. Thung, K., Raveendran, P.: A survey of image quality measures. In: Proceedings of the International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, 14–15 December, pp. 1–4. IEEE Comput. Soc., Los Alamitos (2009)


  31. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc., Ser. B, Methodol. 58, 267–288 (1996)


  32. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)


  33. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004)


  34. Xu, H.K.: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279–291 (2004)



Acknowledgements

This work was supported by Chiang Mai University and Thailand Science Research and Innovation under the project IRN62W0007.

Funding

Chiang Mai University and Thailand Science Research and Innovation under the project IRN62W0007.

Author information


Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Adisak Hanjing.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Suantai, S., Jailoka, P. & Hanjing, A. An accelerated viscosity forward-backward splitting algorithm with the linesearch process for convex minimization problems. J Inequal Appl 2021, 42 (2021). https://doi.org/10.1186/s13660-021-02571-5
