An accelerated viscosity forward-backward splitting algorithm with the linesearch process for convex minimization problems
Journal of Inequalities and Applications volume 2021, Article number: 42 (2021)
Abstract
In this paper, we consider a convex minimization problem of the sum of two convex functions in a Hilbert space. The forward-backward splitting algorithm is one of the popular optimization methods for approximating a minimizer of the function; however, the stepsize of this algorithm depends on the Lipschitz constant of the gradient of the function, which is in general not easy to find in practice. By using a new modification of the linesearches of Cruz and Nghia [Optim. Methods Softw. 31:1209–1238, 2016] and Kankam et al. [Math. Methods Appl. Sci. 42:1352–1362, 2019] and an inertial technique, we introduce an accelerated viscosity-type algorithm without any Lipschitz continuity assumption on the gradient. A strong convergence result of the proposed algorithm is established under some control conditions. As applications, we apply our algorithm to image and signal recovery problems. Numerical experiments show that our method is more efficient than well-known methods in the literature.
Introduction
The convex minimization problem is one of the important problems in mathematical optimization. It has been widely studied because of its desirable applications in many branches of science and in various real-world settings such as image and signal processing, data classification, and regression problems; see [3, 5, 8, 10, 12, 13] and the references therein. Various optimization methods for solving the convex minimization problem have been introduced and developed by many researchers; see [1, 3–5, 7–9, 11, 14, 16–19, 23, 26, 28] for instance. In this work, we are interested in studying the unconstrained convex minimization problem
$$ \min_{x \in \mathcal{X}} \bigl[ h_{1}(x) + h_{2}(x) \bigr], $$
where \(\mathcal{X}\) is a Hilbert space, \(h_{1} : \mathcal{X} \to \mathbb{R} \) is a convex and differentiable function, and \(h_{2} : \mathcal{X}\to \mathbb{R}\cup \{\infty \}\) is a proper, lower semicontinuous, and convex function.
It is known that if a minimizer \(p^{*}\) of \(h_{1}+h_{2}\) exists, then \(p^{*}\) is a fixed point of the forward-backward operator
$$ FB_{\alpha } := \operatorname{prox}_{\alpha h_{2}}(I_{d}-\alpha \nabla h_{1}), $$
where \(\alpha >0\), \(\operatorname {prox}_{h_{2}}\) is the proximity operator of \(h_{2}\), and \(\nabla h_{1}\) stands for the gradient of \(h_{1}\); that is, \(p^{*} = FB_{\alpha }(p^{*})\). If \(\nabla h_{1} \) is Lipschitz continuous with a coefficient \(L>0\) and \(\alpha \in (0, 2/L)\), then the forward-backward operator \(FB_{\alpha }\) is nonexpansive. In this case, we can employ fixed point approximation methods for the class of nonexpansive operators to solve (1). One of the popular methods is known as the forward-backward splitting (FBS) algorithm [8, 18].
Method FBS
Let \(x_{1} \in \mathcal{X}\). For \(k \geq 1\), let
$$ x_{k+1} = \operatorname{prox}_{\alpha _{k} h_{2}} \bigl( x_{k} - \alpha _{k} \nabla h_{1}(x_{k}) \bigr), $$
where \(0 < \alpha _{k} < 2/L\).
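For concreteness, Method FBS specialized to the LASSO objective \(\frac{1}{2}\|Tx-y\|_{2}^{2}+\lambda \|x\|_{1}\), where the proximal step reduces to soft thresholding, can be sketched as follows. This is a minimal sketch: the function names and the fixed stepsize are illustrative choices, not part of the method's statement.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbs(T, y, lam, alpha, iters=200):
    """Forward-backward splitting for min 0.5*||Tx - y||_2^2 + lam*||x||_1.
    alpha must lie in (0, 2/L), where L is the largest eigenvalue of T^T T."""
    x = np.zeros(T.shape[1])
    for _ in range(iters):
        grad = T.T @ (T @ x - y)                            # forward (gradient) step on h1
        x = soft_threshold(x - alpha * grad, alpha * lam)   # backward (proximal) step on h2
    return x
```

For example, with \(T = I\) the minimizer is exactly the soft thresholding of y, which the iteration reaches immediately.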
This method includes the proximal point algorithm [19, 26], the gradient method [4, 11], and the CQ algorithm [6] as special cases. It can be seen from Method FBS that we need to assume Lipschitz continuity of the gradient of \(h_{1}\) and that the stepsize \(\alpha _{k}\) depends on the Lipschitz constant L. However, finding such a Lipschitz constant is not an easy task in practice. This leads to the natural question:
Question: How can we construct an algorithm whose stepsize does not depend on any Lipschitz constant of the gradient for solving Problem (1)?
In the sequel, we set the standing hypotheses on Problem (1) as follows:

(AI)
\(h_{1} : \mathcal{X} \to \mathbb{R} \) is a convex and differentiable function and the gradient \(\nabla h_{1} \) is uniformly continuous on \(\mathcal{X}\);

(AII)
\(h_{2} : \mathcal{X}\to \mathbb{R}\cup \{\infty \}\) is a proper, lower semicontinuous, and convex function.
We see that the second part of (AI) is a weaker condition than the Lipschitz continuity condition on \(\nabla h_{1}\).
In 2016, Cruz and Nghia [9] suggested a way to select the stepsize \(\alpha _{k}\) independently of the Lipschitz constant L by using the following linesearch process.
It was proved that Linesearch A is well defined; that is, it stops after finitely many steps, see [9, Lemma 3.1] and [32, Theorem 3.4(a)]. Linesearch A is a special case of the linesearch proposed in [32] for inclusion problems. Cruz and Nghia [9] employed the forward-backward splitting method with the stepsize \(\alpha _{k}\) generated by Linesearch A.
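A backtracking rule of this kind can be sketched as follows. The sketch assumes the stopping condition of Linesearch A in [9], namely \(\alpha \|\nabla h_{1}(FB_{\alpha }(x))-\nabla h_{1}(x)\| \leq \delta \|FB_{\alpha }(x)-x\|\); the function names are ours.

```python
import numpy as np

def linesearch_A(x, sigma, theta, delta, grad_h1, prox_h2):
    """Backtracking linesearch in the spirit of [9]: start from alpha = sigma and
    shrink by theta until
        alpha * ||grad_h1(FB_alpha(x)) - grad_h1(x)|| <= delta * ||FB_alpha(x) - x||,
    where FB_alpha(x) = prox_h2(x - alpha * grad_h1(x), alpha)."""
    alpha = sigma
    g = grad_h1(x)
    while True:
        z = prox_h2(x - alpha * g, alpha)   # forward-backward point FB_alpha(x)
        if alpha * np.linalg.norm(grad_h1(z) - g) <= delta * np.linalg.norm(z - x):
            return alpha
        alpha *= theta
```

Since the right-hand side is zero only at a fixed point, the loop terminates after finitely many shrinkages whenever \(\nabla h_{1}\) is uniformly continuous, in line with [9, Lemma 3.1].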
Method 1
Let \(x_{1} \in \mathcal{X}\), \(\sigma >0\), \(\delta \in (0, 1/2)\), and \(\theta \in (0,1)\). For \(k \geq 1\), let
$$ x_{k+1} = \operatorname{prox}_{\alpha _{k} h_{2}} \bigl( x_{k} - \alpha _{k} \nabla h_{1}(x_{k}) \bigr), $$
where \(\alpha _{k}:= \text{Linesearch A}(x_{k},\sigma , \theta , \delta ) \).
In optimization theory, to speed up the convergence of iterative procedures, many mathematicians use inertial-type extrapolation [15, 22, 24] by adding the technical term \(\beta _{k}(x_{k}-x_{k-1})\). We call the parameter \(\beta _{k}\) an inertial parameter; it controls the momentum \(x_{k}-x_{k-1}\). Based on Method 1, Cruz and Nghia [9] also proposed an accelerated algorithm with an inertial technical term as follows.
Method 2
Let \(x_{0}=x_{1} \in \mathcal{X}\), \(\alpha _{0}=\sigma > 0\), \(\delta \in (0, 1/2)\), \(\theta \in (0,1)\), and \(t_{1}=1\). For \(k \geq 1\), let
where \(\alpha _{k}:= \mbox{Linesearch A} (y_{k},\alpha _{k1}, \theta , \delta ) \).
The technique of selecting \(\beta _{k}\) in Method 2 was first defined in the fast iterative shrinkage-thresholding algorithm (FISTA) by Beck and Teboulle [3].
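The FISTA rule generates \(t_{1}=1\), \(t_{k+1}=(1+\sqrt{1+4t_{k}^{2}})/2\), and \(\beta _{k}=(t_{k}-1)/t_{k+1}\), which can be sketched as follows (the function name is ours):

```python
import math

def fista_betas(n):
    """First n inertial parameters beta_k = (t_k - 1)/t_{k+1}, where t_1 = 1 and
    t_{k+1} = (1 + sqrt(1 + 4*t_k^2))/2, as in FISTA [3]."""
    betas, t = [], 1.0
    for _ in range(n):
        t_next = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        betas.append((t - 1.0) / t_next)
        t = t_next
    return betas
```

The resulting sequence starts at \(\beta _{1}=0\) and increases toward 1, so the momentum term grows as the iteration proceeds.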
In 2019, Kankam et al. [16] introduced a modification of Linesearch A as follows.
Using Linesearch B, they proposed the following double forward-backward splitting algorithm.
Method 3
Let \(x_{1} \in \mathcal{X}\), \(\sigma >0\), \(\delta \in (0, 1/8)\), and \(\theta \in (0,1)\). For \(k \geq 1\), let
where \(\alpha _{k}:= \mbox{Linesearch B}(x_{k}, \sigma , \theta , \delta )\).
We note that Methods 1–3 under some mild conditions guarantee only weak convergence for Problem (1); however, strong convergence is a more desirable theoretical result. To obtain strong convergence, we focus on the forward-backward splitting algorithm based on the viscosity approximation method [21, 34] as follows.
Method 4
Let \(x_{1} \in \mathcal{X}\). For \(k \geq 1\), let
where \(f : \mathcal{X} \rightarrow \mathcal{X}\) is a contraction, \(\gamma _{k} \in (0, 1)\) and \(\alpha _{k} >0\).
In this work, inspired and motivated by the results of Cruz and Nghia [9] and Kankam et al. [16] and the above-mentioned research, we aim to improve Linesearches A and B and to introduce a new accelerated algorithm, using our proposed linesearch, with strong convergence for a convex minimization problem of the sum of two convex functions in a Hilbert space. This paper is organized as follows. The notation, basic definitions, and some useful lemmas for proving our main result are given in Sect. 2. Our main result is in Sect. 3: we introduce a new modification of Linesearches A and B and present a double forward-backward algorithm based on the viscosity approximation method, using an inertial technique, for solving Problem (1) under Assumptions (AI) and (AII); subsequently, we prove a strong convergence theorem for the proposed method under some suitable control conditions. In Sect. 4, we apply the convex minimization problem to image and signal recovery problems. We analyze and illustrate the convergence behavior of our method and compare its efficiency with Methods 1–4.
Basic definitions and lemmas
The mathematical symbols adopted throughout this article are as follows. \(\mathbb{R}\), \(\mathbb{R}_{+}\), and \(\mathbb{R}_{++}\) are the set of real numbers, the set of nonnegative real numbers, and the set of positive real numbers, respectively, and \(\mathbb{N}\) stands for the set of positive integers. We suppose that \(\mathcal{X}\) is a real Hilbert space with an inner product \(\langle \cdot , \cdot \rangle \) and the induced norm \(\|\cdot \|\). Let \(I_{d}\) denote the identity operator on \(\mathcal{X}\). Weak and strong convergence of a sequence \(\{x_{k}\} \subset \mathcal{X}\) to \(p \in \mathcal{X}\) are denoted by \(x_{k} \rightharpoonup p\) and \(x_{k} \rightarrow p\), respectively.
Let E be a nonempty closed convex subset of \(\mathcal{X}\). An operator \(A : E \rightarrow \mathcal{X}\) is said to be Lipschitz continuous if there exists \(L > 0\) such that
$$ \|Ax - Ay\| \leq L \|x - y\|, \quad \forall x, y \in E. $$
If A is Lipschitz continuous with a coefficient \(L \in (0, 1)\), then A is called a contraction. The metric projection from \(\mathcal{X}\) onto E, denoted by \(P_{E}\), is defined as follows: for each \(x \in \mathcal{X}\), \(P_{E}x\) is the unique element in E such that \(\|x - P_{E}x\| = \inf_{y \in E} \|x - y\|\). It is known that
The following definition extends the concept of the metric projection.
Definition 2.1
Let \(h : \mathcal{X} \to \mathbb{R}\cup \{\infty \} \) be a proper, lower semicontinuous, and convex function. The proximity (or proximal) operator of h, denoted by \(\operatorname {prox}_{h}\), is defined as follows: for each \(x \in \mathcal{X}\), \(\operatorname {prox}_{h}x\) is the unique solution of the minimization problem
$$ \min_{y \in \mathcal{X}} \biggl\{ h(y) + \frac{1}{2}\|x - y\|^{2} \biggr\} . $$
In particular, if \(h := i_{E}\) is an indicator function on E (defined by \(i_{E}(x)=0\) if \(x \in E\); otherwise \(i_{E}(x) = \infty \)), then \(\operatorname {prox}_{h} = P_{E}\).
Let \(h : \mathcal{X} \to \mathbb{R}\cup \{\infty \} \) be a proper, lower semicontinuous, and convex function. The subdifferential ∂h of h is defined by
$$ \partial h(x) := \bigl\{ u \in \mathcal{X} : \langle u, y - x\rangle + h(x) \leq h(y), \ \forall y \in \mathcal{X} \bigr\} . $$
Here, we give a relationship between the proximity operator and the subdifferential operator: for \(\alpha >0\) and \(x \in \mathcal{X}\),
$$ p = \operatorname{prox}_{\alpha h}(x) \quad \Longleftrightarrow \quad \frac{x - p}{\alpha } \in \partial h(p). $$
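As a quick numerical check of the standard relation \(p=\operatorname{prox}_{\alpha h}(x) \Leftrightarrow (x-p)/\alpha \in \partial h(p)\) in the case \(h = \|\cdot \|_{1}\), where the proximity operator is soft thresholding, one can verify the subdifferential inclusion componentwise (a sketch; the helper names are ours):

```python
import numpy as np

def prox_l1(x, t):
    # soft thresholding: the proximity operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def in_l1_subdiff(g, p, tol=1e-12):
    # g belongs to the subdifferential of ||.||_1 at p iff
    # g_i = sign(p_i) when p_i != 0, and |g_i| <= 1 when p_i = 0
    return all((abs(gi - np.sign(pi)) <= tol) if pi != 0 else (abs(gi) <= 1.0 + tol)
               for gi, pi in zip(g, p))

x = np.array([2.0, 0.3, -1.5])
p = prox_l1(x, 1.0)            # p = prox_{alpha*h}(x) with alpha = 1, h = ||.||_1
```

Here \((x-p)/\alpha = x - p\) indeed lies in \(\partial \|\cdot \|_{1}(p)\), as the characterization predicts.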
We end this section by giving useful lemmas for proving our main result.
Lemma 2.2
([25])
Let \(h : \mathcal{X} \to \mathbb{R}\cup \{\infty \} \) be a proper, lower semicontinuous, and convex function. Let \(\{x_{k}\}\) and \(\{y_{k}\}\) be two sequences in \(\mathcal{X}\) such that \(y_{k} \in \partial h(x_{k})\) for all \(k \in \mathbb{N}\). If \(x_{k} \rightharpoonup x \) and \(y_{k} \rightarrow y\), then \(y \in \partial h(x)\).
Lemma 2.3
([29])
Let \(x, y \in \mathcal{X}\) and \(\xi \in [0, 1]\). Then the following properties hold on \(\mathcal{X}\):

(i)
\(\|\xi x +(1-\xi )y\|^{2}= \xi \|x\|^{2}+(1-\xi )\|y\|^{2}-\xi (1-\xi )\|x-y\|^{2}\);

(ii)
\(\|x \pm y\|^{2}=\|x\|^{2}\pm 2\langle x, y\rangle +\|y\|^{2} \);

(iii)
\(\|x + y\|^{2} \leq \|x\|^{2}+2\langle y, x + y\rangle \).
Lemma 2.4
([27])
Let \(\{a_{k}\} \subset \mathbb{R}_{+}\), \(\{b_{k}\} \subset \mathbb{R}\), and \(\{\xi _{k}\} \subset (0,1)\) be such that \(\sum_{k=1}^{\infty }\xi _{k}= \infty \) and
If \(\limsup_{i \to \infty }b_{k_{i}}\leq 0 \) for every subsequence \(\{a_{k_{i}}\} \) of \(\{a_{k}\} \) satisfying \(\liminf_{i \to \infty }(a_{k_{i}+1}-a_{k_{i}})\geq 0\), then \(\lim_{k\to \infty }a_{k}=0 \).
Method and convergence result
In this section, by modifying Linesearches A and B, we introduce a new linesearch and present an inertial double forward-backward splitting algorithm based on the viscosity approximation method for solving the convex minimization problem of the sum of two convex functions without any Lipschitz continuity assumption on the gradient. A strong convergence result for the proposed algorithm is analyzed and established.
We now focus on Problem (1) under Assumptions (AI) and (AII). For simplicity, let \(\textbf{h} := h_{1} + h_{2}\) and denote \(FB_{\alpha } := \operatorname {prox}_{\alpha h_{2}}(I_{d}-\alpha \nabla h_{1})\) for \(\alpha >0\). The set of minimizers of \(\textbf{h}\) is denoted by Γ, and we assume that \(\Gamma \neq \emptyset \). We begin by designing the following linesearch.
In other words, if \(\alpha := \mbox{Linesearch C}(x, \sigma , \theta , \delta )\), then \(\alpha = \sigma \theta ^{m}\), where m is the smallest nonnegative integer such that
It can be seen that the terminating condition of the while loop in Linesearch C is somewhat weaker than that in Linesearch B. It therefore follows from the well-definedness of Linesearch B that our linesearch also stops after finitely many steps, see [16, Lemma 3.2].
Using Linesearch C, we introduce a new viscosity forwardbackward splitting algorithm with the inertial technical term as follows.
To show a strong convergence result of Method 5, the following tool is needed.
Lemma 3.1
Let \(\{x_{k}\} \) be a sequence generated by Method 5 and \(p \in \mathcal{X}\). Then the following inequality holds:
Proof
From (3), (6), and (7), we get
Let \(p\in \mathcal{X}\). By the definition of subdifferential of \(h_{2}\), the above expressions give
and
By (AI), we obtain the fact
From (11), we get
and
Combining (9), (10), (12), and (13), we have
Again, applying (11), the above inequality becomes
Since \(\alpha _{k}:= \mbox{Linesearch C}(w_{k}, \sigma ,\theta , \delta )\), we have
By Lemma 2.3(ii), we get
and
Hence, we can conclude from (16)–(18) that
□
Now we are in a position to prove our main theorem.
Theorem 3.2
Let \(\{x_{k}\} \subset \mathcal{X}\) be a sequence generated by Method 5. Then:

(i)
For \(p \in \Gamma \), we have
$$ \Vert x_{k+1}-p \Vert \leq \max \biggl\{ \Vert x_{k}-p \Vert , \frac{\frac{\beta _{k}}{\gamma _{k}} \Vert x_{k}-x_{k-1} \Vert + \Vert f(p)-p \Vert }{1-\eta } \biggr\} ,\quad \forall k \in \mathbb{N}. $$ 
(ii)
If the sequences \(\{\alpha _{k}\}\), \(\{\gamma _{k}\}\), and \(\{\tau _{k}\}\) satisfy the following conditions:

(Ci)
\(\alpha _{k}\geq a \) for some \(a \in \mathbb{R}_{++}\);

(Cii)
\(\gamma _{k} \in (0, 1)\) such that \(\lim_{k\to \infty } \gamma _{k}=0\) and \(\sum_{k=1}^{\infty }\gamma _{k} =\infty \);

(Ciii)
\(\lim_{k\to \infty }\tau _{k}/\gamma _{k}=0\),
then \(\{x_{k}\}\) converges strongly to a point \(p^{*} \in \Gamma \), where \(p^{*}=P_{\Gamma }f(p^{*})\).

Proof
Let \(p \in \Gamma \). Applying Lemma 3.1, we have
From (19) and (5) and by Lemma 2.3(ii), we get
From (20) and (5), we get
By (8) and (22), we have
Therefore, we obtain (i). By (4) and using (Ciii), we have \(\frac{\beta _{k}}{\gamma _{k}}\|x_{k}-x_{k-1}\| \rightarrow 0\) as \(k \rightarrow \infty \), and so there exists \(M>0 \) such that \(\frac{\beta _{k}}{\gamma _{k}}\|x_{k}-x_{k-1}\|\leq M \) for all \(k \in \mathbb{N}\). Thus,
By mathematical induction, we deduce that
Hence, \(\{x_{k}\} \) is bounded. One can see that the operator \(P_{\Gamma }f\) is a contraction. By the Banach contraction principle, there is a unique point \(p^{*} \in \Gamma \) such that \(p^{*}=P_{\Gamma }f(p^{*})\). It follows from the characterization of \(P_{\Gamma }\) that
Using Lemma 2.3(i), (iii) and (21), we have
where
It follows that
where \(M^{\prime } = \sup \{b_{k} : k \in \mathbb{N}\}\).
Let us show that \(\{x_{k}\} \) converges to \(p^{*}\). Set \(a_{k} := \|x_{k}-p^{*}\|^{2}\) and \(\xi _{k} := \gamma _{k}(1-\eta )\). From (24), we have the following inequality:
To apply Lemma 2.4, we have to show that \(\limsup_{i \to \infty }b_{k_{i}}\leq 0 \) whenever a subsequence \(\{a_{k_{i}}\} \) of \(\{a_{k}\} \) satisfies
To do this, suppose that \(\{a_{k_{i}}\} \subseteq \{a_{k}\} \) is a subsequence satisfying (26). Then, by (25) and (Cii), we have
which implies
Using (Cii), (Ciii), and (27), we have
as \(i \to \infty \). We next show that \(\limsup_{i\to \infty }b_{k_{i}}\leq 0\). Clearly, it suffices to show that
Let \(\{ x_{k_{i_{j}}} \} \) be a subsequence of \(\{ x_{k_{i}} \} \) such that
Since \(\{ x_{k_{i_{j}}} \} \) is bounded, there exists a subsequence \(\{ x_{k_{i_{j_{p}}}} \} \) of \(\{ x_{k_{i_{j}}} \} \) such that \(x_{k_{i_{j_{p}}}}\rightharpoonup \bar{p}\in \mathcal{X}\). Without loss of generality, we may assume that \(x_{k_{i_{j}}}\rightharpoonup \bar{p}\). Thus, we also have \(z_{k_{i_{j}}}\rightharpoonup \bar{p}\). From (AI), we have \(\|\nabla h_{1}(w_{k_{i_{j}}})-\nabla h_{1}(z_{k_{i_{j}}})\| \rightarrow 0\) as \(j \rightarrow \infty \). This together with (27) and (Ci) yields
By (3), we get
Now, by (29), (30), and \(z_{k_{i_{j}}}\rightharpoonup \bar{p}\), it follows from Lemma 2.2 that \(0\in \partial \textbf{h}(\bar{p})\). Hence, \(\bar{p} \in \Gamma \). From (28) and (23), we have
By Lemma 2.4, we can conclude that \(\{x_{k}\}\) converges to \(p^{*}\). The proof is complete. □
Note that the stepsize condition on \(\{\alpha _{k}\}\) in Theorem 3.2 requires boundedness from below by a positive real number. Next, we show that this condition is ensured by the Lipschitz continuity assumption on \(\nabla h_{1}\).
Proposition 3.3
Let \(\{\alpha _{k}\} \) be the sequence generated by Linesearch C of Method 5. If \(\nabla h_{1} : \mathcal{X} \rightarrow \mathcal{X}\) is Lipschitz continuous with a constant \(L>0 \), then \(\alpha _{k}\geq \min \{ \sigma , 2\delta \theta /L \} \) for all \(k \in \mathbb{N}\).
Proof
Let \(\nabla h_{1} \) be L-Lipschitz continuous on \(\mathcal{X}\). Since \(\alpha _{k}:= \mbox{Linesearch C}(w_{k}, \sigma ,\theta , \delta )\), we have \(\alpha _{k} \leq \sigma \) for all \(k \in \mathbb{N}\). If \(\alpha _{k} < \sigma \), then \(\alpha _{k} = \sigma \theta ^{m_{k}}\), where \(m_{k}\) is the smallest positive integer such that
Set \(\hat{\alpha }_{k}:= \alpha _{k}/\theta \). By the Lipschitz continuity of \(\nabla h_{1}\) and the above expression, we have
it follows that \(\alpha _{k} > 2\delta \theta /L\). Therefore, \(\alpha _{k}\geq \min \{ \sigma , 2\delta \theta /L \} \) for all \(k \in \mathbb{N}\). □
Remark 3.4
It is worth mentioning that the Lipschitz continuity assumption on the gradient of \(h_{1}\) is sufficient for Assumption (AI). However, even under this stronger assumption, the computation of the stepsize \(\alpha _{k}\) generated by Linesearch C remains independent of the Lipschitz constant.
Numerical experiments in image and signal recovery
In this section, we apply the convex minimization problem, Problem (1), to image and signal recovery problems. We analyze and illustrate the convergence behavior of Method 5 for recovering images and signals, and compare its efficiency with Methods 1–4. All experiments and visualizations are performed in MATLAB on a laptop computer (Intel Core i5, 4.00 GB RAM, Windows 8, 64-bit).
Many problems in image and signal processing, especially the image/signal recovery, are the problems of inferring an image/signal \(x \in \mathbb{R}^{N}\) from the observation of an image/signal \(y \in \mathbb{R}^{M}\) via the linear equation
where \(T : \mathbb{R}^{N} \rightarrow \mathbb{R}^{M}\) is a bounded linear operator and ε is an additive noise. To approximate the original image/signal in (31), we need to minimize the value of ε by using the LASSO problem [31]
where λ is a positive parameter, \(\|\cdot \|_{1} \) is the \(l_{1} \)-norm, and \(\|\cdot \|_{2}\) is the Euclidean norm. It is worth noting that Problem (1) can be applied to the LASSO problem (32) by setting
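This correspondence can be sketched as follows, with \(h_{1}(x) = \frac{1}{2}\|Tx-y\|_{2}^{2}\) and \(h_{2}(x) = \lambda \|x\|_{1}\); the synthetic data and function names are illustrative, and a finite-difference test checks the gradient formula:

```python
import numpy as np

# LASSO as an instance of Problem (1):
#   h1(x) = 0.5 * ||Tx - y||_2^2  (smooth part),  h2(x) = lam * ||x||_1
rng = np.random.default_rng(0)
T = rng.standard_normal((5, 8))
y = rng.standard_normal(5)
lam = 1.0

def h1(x):
    return 0.5 * np.linalg.norm(T @ x - y) ** 2

def grad_h1(x):
    return T.T @ (T @ x - y)          # gradient of the smooth part

def prox_h2(x, alpha):
    # prox of alpha * h2: soft thresholding at level alpha * lam
    return np.sign(x) * np.maximum(np.abs(x) - alpha * lam, 0.0)
```

Since \(h_{1}\) is quadratic, the central finite difference matches \(\langle \nabla h_{1}(x), d\rangle \) up to rounding error.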
Image recovery
In the following two examples, we set the regularization parameter in the LASSO problem (32) to \(\lambda := 10^{-5}\). The peak signal-to-noise ratio (PSNR) in decibels (dB) [30] and the structural similarity index metric (SSIM) [33] are used as image quality metrics. The maximum iteration number for all deblurring methods is fixed at 500.
Example 4.1
Consider a prototype image (Lenna) of size \(256\times 256 \), contaminated by a Gaussian blur of filter size \(7 \times 7 \) with standard deviation \(\hat{\sigma } = 6\) and noise \(10^{-5}\); see the original image (a) and the blurred image (b) in Fig. 1. The PSNR and SSIM values of the blurred image are 24.6547 dB and 0.4770, respectively. The parameters of our method (Method 5) are chosen as follows:
Consider a contraction f of the form \(f(x)= \eta x \), where \(0 < \eta < 1\). We take the parameter η in the following five cases:
The experiments for recovering the Lenna image by Method 5 with Cases 1–5 are shown in Figs. 1 and 2. It is observed from Fig. 2 that Case 5 gives higher PSNR and SSIM values than the other cases.
Example 4.2
Consider a prototype image (hall) of size \(256\times 256 \), contaminated by a Gaussian blur of filter size \(9 \times 9 \) with standard deviation \(\hat{\sigma } = 4\) and noise \(10^{-5}\); see the original image (a) and the blurred image (b) in Fig. 3. The parameters for each deblurring method are set as in Table 1.
Also, we define a contraction f by \(f(x)= 0.99x\) for Methods 4 and 5.
The comparative experiments for recovering the hall image by Methods 1–5 are shown in Figs. 3–5. It can be seen that Method 5 gives higher PSNR and SSIM values than the other tested methods, so our method has the highest image recovery efficiency among the methods compared.
Signal recovery
Example 4.3
In the LASSO problem (32), the matrix \(T \in \mathbb{R}^{M \times N}\) is generated by the normal distribution with mean zero and variance one. The vector \(x\in \mathbb{R}^{N} \) is generated by a uniform distribution in \([-2, 2]\) with m nonzero elements. The vector y is generated by adding Gaussian noise with a signal-to-noise ratio (SNR) of 40 dB. The regularization parameter is taken as \(\lambda = 1\). The parameters of Methods 1–5 are set as in Table 1 in Example 4.2. We use the mean squared error (MSE) as the stopping criterion, defined by
where \(p^{*}\) is an original signal.
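Assuming the standard form \(\mathrm{MSE}_{k} = \frac{1}{N}\|x_{k}-p^{*}\|_{2}^{2}\) (the displayed formula is not reproduced above), a minimal helper for this criterion is:

```python
import numpy as np

def mse(xk, p_star):
    # assumed form of the criterion: MSE_k = (1/N) * ||x_k - p*||_2^2
    return float(np.mean((xk - p_star) ** 2))
```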
The experiments for recovering two signals by Methods 1–5 are shown in Figs. 6–7, and the graphs of the MSE for the two cases are shown in Fig. 8. It is observed from Figs. 6–8 that the convergence speed of Method 5 is better than that of Methods 1–4; hence our method has a better convergence behavior than the other tested methods in terms of the number of iterations.
Conclusion
In this work, we discuss the convex minimization problem of the sum of two convex functions in a Hilbert space. The challenge of removing the Lipschitz continuity assumption on the gradient of the function led us to study linesearch methods. We introduce a new linesearch and propose an inertial viscosity forward-backward algorithm whose stepsize does not depend on any Lipschitz constant for solving the considered problem without any Lipschitz continuity condition on the gradient. We prove that the sequence generated by the proposed method converges strongly to a minimizer of the sum of the two convex functions under some mild control conditions. As applications, we apply our method to image and signal recovery problems. Comparative experiments show that our method is more efficient than the well-known methods in [9, 16, 18].
Availability of data and materials
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
References
 1.
Aremu, K.O., Izuchukwu, C., Grace, O.N., Mewomo, O.T.: Multistep iterative algorithm for minimization and fixed point problems in puniformly convex metric spaces. J. Ind. Manag. Optim. 13(5) (2020). https://doi.org/10.3934/jimo.2020063
 2.
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
 3.
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)
 4.
Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Belmont (1997)
 5.
Bussaban, L., Suantai, S., Kaewkhao, A.: A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpath. J. Math. 36, 35–44 (2020)
 6.
Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)
 7.
Combettes, P.L., Pesquet, J.C.: A Douglas–Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 1, 564–574 (2007)
 8.
Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)
 9.
Cruz, J.Y.B., Nghia, T.T.A.: On the convergence of the forward-backward splitting method with linesearches. Optim. Methods Softw. 31, 1209–1238 (2016)
 10.
Daubechies, I., Defrise, M., Mol, C.D.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57, 1413–1457 (2004)
 11.
Dunn, J.C.: Convexity, monotonicity, and gradient processes in Hilbert space. J. Math. Anal. Appl. 53, 145–158 (1976)
 12.
Figueiredo, M., Nowak, R.: An EM algorithm for waveletbased image restoration. IEEE Trans. Image Process. 12, 906–916 (2003)
 13.
Hale, E., Yin, W., Zhang, Y.: A fixed-point continuation method for \(l_{1}\)-regularized minimization with applications to compressed sensing. Technical report, Department of Computational and Applied Mathematics, Rice University (2007)
 14.
Hanjing, A., Suantai, S.: A fast image restoration algorithm based on a fixed point and optimization. Mathematics 8, 378 (2020). https://doi.org/10.3390/math8030378
 15.
Izuchukwu, C., Grace, O.N., Mewomo, O.T.: An inertial method for solving generalized split feasibility problems over the solution set of monotone variational inclusions. Optimization (2020). https://doi.org/10.1080/02331934.2020.1808648
 16.
Kankam, K., Pholasa, N., Cholamjiak, P.: On convergence and complexity of the modified forward-backward method involving new linesearches for convex minimization. Math. Methods Appl. Sci. 42, 1352–1362 (2019)
 17.
Lin, L.J., Takahashi, W.: A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 16, 429–453 (2012)
 18.
Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
 19.
Martinet, B.: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 4, 154–158 (1970)
 20.
Moreau, J.J.: Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris, Sér. A Math. 255, 2897–2899 (1962)
 21.
Moudafi, A.: Viscosity approximation method for fixedpoints problems. J. Math. Anal. Appl. 241, 46–55 (2000)
 22.
Nesterov, Y.: A method for solving the convex programming problem with convergence rate \(O(1/k^{2})\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983)
 23.
Okeke, C.C., Izuchukwu, C.: A strong convergence theorem for monotone inclusion and minimization problems in complete CAT(0) spaces. Optim. Methods Softw. 34(6), 1168–1183 (2019)
 24.
Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4, 1–17 (1964)
 25.
Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
 26.
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 17, 877–898 (1976)
 27.
Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 75, 724–750 (2012)
 28.
Suantai, S., Kankam, K., Cholamjiak, P.: A novel forwardbackward algorithm for solving convex minimization problem in Hilbert spaces. Mathematics 8, 42 (2020). https://doi.org/10.3390/math8010042
 29.
Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
 30.
Thung, K., Raveendran, P.: A survey of image quality measures. In: Proceedings of the International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, 14–15 December, pp. 1–4. IEEE Comput. Soc., Los Alamitos (2009)
 31.
Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc., Ser. B, Methodol. 58, 267–288 (1996)
 32.
Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)
 33.
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004)
 34.
Xu, H.K.: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279–291 (2004)
Acknowledgements
This work was supported by Chiang Mai University and Thailand Science Research and Innovation under the project IRN62W0007.
Funding
Chiang Mai University and Thailand Science Research and Innovation under the project IRN62W0007.
Author information
Contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Suantai, S., Jailoka, P. & Hanjing, A. An accelerated viscosity forwardbackward splitting algorithm with the linesearch process for convex minimization problems. J Inequal Appl 2021, 42 (2021). https://doi.org/10.1186/s13660021025715
MSC
 65K05
 90C25
 90C30
Keywords
 Convex minimization problems
 Forwardbackward splitting
 Linesearch
 Inertial techniques
 Viscosity approximation
 Strong convergence