An accelerated viscosity forward-backward splitting algorithm with the linesearch process for convex minimization problems

In this paper, we consider and investigate a convex minimization problem of the sum of two convex functions in a Hilbert space. The forward-backward splitting algorithm is one of the popular optimization methods for approximating a minimizer of such a function; however, the stepsize of this algorithm depends on the Lipschitz constant of the gradient, which is, in general, not easy to compute in practice. By using a new modification of the linesearches of Cruz and Nghia [Optim. Methods Softw. 31:1209–1238, 2016] and Kankam et al. [Math. Methods Appl. Sci. 42:1352–1362, 2019] together with an inertial technique, we introduce an accelerated viscosity-type algorithm that requires no Lipschitz continuity assumption on the gradient. A strong convergence result for the proposed algorithm is established under some control conditions. As applications, we apply our algorithm to image and signal recovery problems. Numerical experiments show that our method is more efficient than well-known methods in the literature.


Introduction
The convex minimization problem is one of the important problems in mathematical optimization. It has been widely studied because of its many applications in branches of science and in various real-world settings such as image and signal processing, data classification, and regression problems; see [3,5,8,10,12,13] and the references therein. Various optimization methods for solving the convex minimization problem have been introduced and developed by many researchers; see [1, 3-5, 7-9, 11, 14, 16-19, 23, 26, 28], for instance. In this work, we are interested in the following unconstrained convex minimization problem of the sum of two functions:

min_{x ∈ X} h_1(x) + h_2(x),   (1)

where X is a Hilbert space, h_1 : X → R is a convex and differentiable function, and h_2 : X → R ∪ {∞} is a proper, lower semi-continuous, and convex function. It is known that if a minimizer p* of h_1 + h_2 exists, then p* is a fixed point of the forward-backward operator

FB_α := prox_{αh_2} ∘ (I_d − α∇h_1),

where prox_{αh_2} is the backward (proximal) step and I_d − α∇h_1 is the forward (gradient) step, α > 0, prox_{h_2} is the proximity operator of h_2, and ∇h_1 stands for the gradient of h_1; that is, p* = FB_α(p*). If ∇h_1 is Lipschitz continuous with a coefficient L > 0 and α ∈ (0, 2/L), then the forward-backward operator FB_α is nonexpansive. In this case, we can employ fixed point approximation methods for the class of nonexpansive operators to solve (1). One of the popular methods is known as the forward-backward splitting (FBS) algorithm [8,18].
Method FBS. Let x_1 ∈ X. For k ≥ 1, let

x_{k+1} := prox_{α_k h_2}(x_k − α_k ∇h_1(x_k)),

where 0 < α_k < 2/L. This method includes the proximal point algorithm [19,26], the gradient method [4,11], and the CQ algorithm [6] as special cases. It can be seen from Method FBS that we need to assume the Lipschitz continuity of the gradient of h_1, and that the stepsize α_k depends on the Lipschitz constant L. However, finding such a Lipschitz constant is not an easy task in general practice. This leads to the natural question:

Question: How can we construct an algorithm for solving Problem (1) whose stepsize does not depend on any Lipschitz constant of the gradient?
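Before turning to that question, the following is a minimal Python sketch of Method FBS for the special case h_1(x) = (1/2)‖Tx − y‖² and h_2 = λ‖·‖_1, in which the proximity operator reduces to componentwise soft-thresholding. The helper name soft_threshold, the fixed stepsize α = 1/L, and the iteration count are illustrative choices; note that the stepsize below explicitly requires the Lipschitz constant L, which is exactly the dependence the rest of the paper removes.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximity operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fbs(T, y, lam, n_iter=500):
    # Forward-backward splitting for min 0.5*||Tx - y||^2 + lam*||x||_1.
    # Here grad h1(x) = T^T (T x - y) and L = ||T||^2 (largest squared singular value).
    L = np.linalg.norm(T, 2) ** 2
    alpha = 1.0 / L                      # any fixed alpha in (0, 2/L) is admissible
    x = np.zeros(T.shape[1])
    for _ in range(n_iter):
        grad = T.T @ (T @ x - y)                            # forward (gradient) step
        x = soft_threshold(x - alpha * grad, alpha * lam)   # backward (proximal) step
    return x
```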
In the sequel, we set the standing hypotheses on Problem (1) as follows:
(AI) h_1 : X → R is a convex and differentiable function, and the gradient ∇h_1 is uniformly continuous on X;
(AII) h_2 : X → R ∪ {∞} is a proper, lower semi-continuous, and convex function.
We see that the second part of (AI) is a weaker condition than Lipschitz continuity of ∇h_1.
In 2016, Cruz and Nghia [9] suggested a way to select the stepsize α_k that is independent of the Lipschitz constant L, using the following linesearch process.
It was proved that Linesearch A is well defined, that is, it stops after finitely many steps; see [9, Lemma 3.1] and [32, Theorem 3.4(a)]. Linesearch A is a special case of the linesearch proposed in [32] for inclusion problems. Cruz and Nghia [9] employed the forward-backward splitting method in which the stepsize α_k is generated by Linesearch A.
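As an illustration only, a backtracking rule of this kind can be sketched as follows. The acceptance test used below, α‖∇h_1(FB_α(x)) − ∇h_1(x)‖ ≤ δ‖FB_α(x) − x‖, is our reading of Linesearch A in [9]; the callables grad_h1 and prox_h2 and the cap max_backtracks are assumptions, so this is a sketch rather than a verbatim transcription of Linesearch A.

```python
import numpy as np

def backtracking_linesearch(x, grad_h1, prox_h2, sigma, theta, delta, max_backtracks=50):
    # Shrink alpha = sigma * theta**m until the forward-backward point z satisfies
    # alpha * ||grad_h1(z) - grad_h1(x)|| <= delta * ||z - x||  (our reading of Linesearch A).
    alpha = sigma
    gx = grad_h1(x)
    for _ in range(max_backtracks):
        z = prox_h2(x - alpha * gx, alpha)       # forward-backward point FB_alpha(x)
        if alpha * np.linalg.norm(grad_h1(z) - gx) <= delta * np.linalg.norm(z - x):
            return alpha, z
        alpha *= theta                           # backtrack
    return alpha, z
```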
Method 1. Let x_1 ∈ X, σ > 0, δ ∈ (0, 1/2), and θ ∈ (0, 1). For k ≥ 1, let

α_k := Linesearch A(x_k, σ, θ, δ)  and  x_{k+1} := prox_{α_k h_2}(x_k − α_k ∇h_1(x_k)).

In optimization theory, to speed up the convergence of iterative procedures, many mathematicians use inertial-type extrapolation [15,22,24] by adding the technical term β_k(x_k − x_{k−1}). We call the parameter β_k an inertial parameter; it controls the momentum x_k − x_{k−1}. Based on Method 1, Cruz and Nghia [9] also proposed an accelerated algorithm with an inertial term as follows.
The technique of selecting β k in Method 2 was first defined in the fast iterative shrinkage-thresholding algorithm (FISTA) by Beck and Teboulle [3].
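For reference, the FISTA rule from [3] generates β_k through an auxiliary sequence t_k with t_1 = 1, t_{k+1} = (1 + √(1 + 4t_k²))/2, and β_k = (t_k − 1)/t_{k+1}; a short sketch follows (the function name is ours).

```python
def fista_inertial_parameters(n_iter):
    # Inertial parameters beta_k from FISTA [3]: t_1 = 1,
    # t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2,  beta_k = (t_k - 1) / t_{k+1}.
    t = 1.0
    betas = []
    for _ in range(n_iter):
        t_next = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        betas.append((t - 1.0) / t_next)
        t = t_next
    return betas
```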
In 2019, Kankam et al. [16] introduced a modification of Linesearch A as follows.
We note that Methods 1-3, under some mild conditions, guarantee only weak convergence for Problem (1); however, strong convergence is a more desirable theoretical result. To obtain strong convergence, we focus on the forward-backward splitting algorithm based on the viscosity approximation method [21,34] as follows.
In this work, inspired and motivated by the results of Cruz and Nghia [9] and Kankam et al. [16] and the research mentioned above, we aim to improve Linesearches A and B and to introduce a new accelerated algorithm, based on our proposed linesearch, that converges strongly for the convex minimization problem of the sum of two convex functions in a Hilbert space. This paper is organized as follows. The notation, basic definitions, and some useful lemmas for proving our main result are given in Sect. 2. Our main result is in Sect. 3: we introduce a new modification of Linesearches A and B and present a double forward-backward algorithm based on the viscosity approximation method with an inertial technique for solving Problem (1) under Assumptions (AI) and (AII). Subsequently, we prove a strong convergence theorem for the proposed method under suitable control conditions. In Sect. 4, we apply the convex minimization problem to image and signal recovery problems; we analyze and illustrate the convergence behavior of our method and compare its efficiency with Methods 1-4.

Basic definitions and lemmas
The mathematical symbols adopted throughout this article are as follows. R, R_+, and R_{++} are the set of real numbers, the set of nonnegative real numbers, and the set of positive real numbers, respectively, and N stands for the set of positive integers. We suppose that X is a real Hilbert space with inner product ⟨·, ·⟩ and induced norm ‖·‖. Let I_d denote the identity operator on X. Weak and strong convergence of a sequence {x_k} ⊂ X to p ∈ X are denoted by x_k ⇀ p and x_k → p, respectively. Let E be a nonempty closed convex subset of X. An operator A : E → X is said to be Lipschitz continuous if there exists L > 0 such that

‖Ax − Ay‖ ≤ L‖x − y‖ for all x, y ∈ E.

The following definition extends the concept of the metric projection. Let h : X → R ∪ {∞} be a proper, lower semi-continuous, and convex function. The proximity operator of h is defined by

prox_h(x) := argmin_{y ∈ X} {h(y) + (1/2)‖y − x‖²},  x ∈ X.

In particular, if h is the indicator function of E, then prox_h coincides with the metric projection onto E. The subdifferential ∂h of h is defined by

∂h(x) := {u ∈ X : h(y) ≥ h(x) + ⟨u, y − x⟩ for all y ∈ X}.

The proximity operator and the subdifferential operator are related as follows: for α > 0 and x ∈ X,

p = prox_{αh}(x) if and only if x − p ∈ α∂h(p).

We end this section by giving useful lemmas for proving our main result.
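As a concrete illustration of this relationship, for h = λ‖·‖_1 the proximity operator of αh is componentwise soft-thresholding, and the inclusion (x − prox_{αh}(x))/α ∈ ∂h(prox_{αh}(x)) can be verified numerically; the following sketch, with helper names of our own choosing, does exactly that.

```python
import numpy as np

def prox_l1(x, tau):
    # prox of tau*||.||_1: componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def in_subdifferential_l1(u, p, lam, tol=1e-10):
    # Check u in lam * d||.||_1(p): u_i = lam*sign(p_i) if p_i != 0, |u_i| <= lam otherwise.
    on_support = np.abs(p) > tol
    return (np.all(np.abs(u[on_support] - lam * np.sign(p[on_support])) <= tol)
            and np.all(np.abs(u[~on_support]) <= lam + tol))

x, lam, alpha = np.array([1.5, -0.2, 0.0, 3.0]), 0.5, 1.0
p = prox_l1(x, alpha * lam)
print(in_subdifferential_l1((x - p) / alpha, p, lam))   # expected: True
```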

Lemma 2.2 ([25])
Let h : X → R ∪ {∞} be a proper, lower semi-continuous, and convex function. Let {x_k} and {y_k} be two sequences in X such that y_k ∈ ∂h(x_k) for all k ∈ N. If x_k ⇀ x and y_k → y, then y ∈ ∂h(x).

Method and convergence result
In this section, by modifying Linesearches A and B, we introduce a new linesearch and present an inertial double forward-backward splitting algorithm based on the viscosity approximation method for solving the convex minimization problem of the sum of two convex functions without any Lipschitz continuity assumption on the gradient. A strong convergence result for our proposed algorithm is analyzed and established. We now focus on Problem (1) with Assumptions (AI) and (AII). For simplicity, let h := h_1 + h_2 and denote FB_α := prox_{αh_2}(I_d − α∇h_1) for α > 0. The set of minimizers of h is assumed to be nonempty. We begin by designing the following linesearch.
In other words, if α := Linesearch C(x, σ, θ, δ), then α = σθ^m, where m is the smallest nonnegative integer for which the terminating condition of the while loop in Linesearch C is satisfied. It can be seen that this terminating condition is somewhat weaker than that of Linesearch B. Hence, it follows from the well-definedness of Linesearch B that our linesearch also stops after finitely many steps; see [16, Lemma 3.2].
Using Linesearch C, we introduce a new viscosity forward-backward splitting algorithm with the inertial technical term as follows.
Step 3. Compute the viscosity step: Set k := k + 1 and return to Step 1.
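Since the full statement of Method 5 is not reproduced above, the following is only a schematic sketch of the general shape of an inertial viscosity double forward-backward iteration as described in this section: an inertial extrapolation, two forward-backward evaluations with a linesearch-generated stepsize, and a viscosity step driven by a contraction f. Every name and the exact placement of the linesearch are assumptions made for illustration and should not be read as the authors' Method 5.

```python
def inertial_viscosity_dfb(x0, grad_h1, prox_h2, f, linesearch, beta, gamma, n_iter=100):
    # Schematic shape only: y_k = x_k + beta_k (x_k - x_{k-1})            (inertial step)
    #                        z_k = FB_alpha(FB_alpha(y_k))                 (double forward-backward)
    #                        x_{k+1} = gamma_k f(z_k) + (1 - gamma_k) z_k  (viscosity step)
    x_prev, x = x0.copy(), x0.copy()
    for k in range(1, n_iter + 1):
        y = x + beta(k) * (x - x_prev)                       # inertial extrapolation
        alpha, w = linesearch(y)                             # stepsize from the linesearch, w = FB_alpha(y)
        z = prox_h2(w - alpha * grad_h1(w), alpha)           # second forward-backward step
        x_prev, x = x, gamma(k) * f(z) + (1 - gamma(k)) * z  # viscosity step
    return x
```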
To show a strong convergence result of Method 5, the following tool is needed.

By Lemma 2.3(ii), we get

and

Hence, we can conclude from (16)-(18) that

Now we are in a position to prove our main theorem.
Proof. Let p be a minimizer of h. Applying Lemma 3.1, we have

From (19) and (5) and by Lemma 2.3(ii), we get

From (20) and (5), we get

By (8) and (22), we have

Therefore, we obtain (i). By (4) and using (Ciii), we have

Thus,

By mathematical induction, we deduce that

Hence, {x_k} is bounded. One can see that the operator P ∘ f, where P denotes the metric projection onto the set of minimizers of h, is a contraction. By the Banach contraction principle, there is a unique point p* in that set such that p* = P(f(p*)). It follows from the characterization of P that

Using Lemma 2.3(i), (iii) and (21), we have

It follows that

where M = sup{b_k : k ∈ N}. Let us show that {x_k} converges to p*. Set a_k := ‖x_k − p*‖² and ξ_k := γ_k(1 − η). From (24), we have the following inequality:

To apply Lemma 2.4, we have to show that lim sup_{i→∞} b_{k_i} ≤ 0 whenever a subsequence {a_{k_i}} of {a_k} satisfies (26). To do this, suppose that {a_{k_i}} ⊆ {a_k} is a subsequence satisfying (26). Then, by (25) and (Cii), we have

Using (Cii), (Ciii), and (27), we have

Set α̂_k := α_k/θ. By the Lipschitz continuity of ∇h_1 and the above expression, we have

It follows that α_k > 2δθ/L. Therefore, α_k ≥ min{σ, 2δθ/L} for all k ∈ N.
Remark 3.4 It is worth mentioning that the Lipschitz continuity assumption on the gradient of h_1 is sufficient for Assumption (AI). However, even under this stronger assumption, the computation of the stepsize α_k generated by Linesearch C remains independent of the Lipschitz constant.

Numerical experiments in image and signal recovery
In this section, we apply the convex minimization problem, Problem (1), to image and signal recovery problems. We analyze and illustrate the convergence behavior of Method 5 for recovering images and signals, and also compare its efficiency with Methods 1-4. All experiments and visualizations are performed on a laptop computer (Intel Core-i5/4.00 GB RAM/Windows 8/64-bit) with MATLAB. Many problems in image and signal processing, especially image/signal recovery, amount to inferring an image/signal x ∈ R^N from the observation of an image/signal y ∈ R^M via the linear equation

y = Tx + ε,   (31)

where T : R^N → R^M is a bounded linear operator and ε is an additive noise. To approximate the original image/signal in (31), we minimize the effect of ε by solving the LASSO problem [31]

min_{x ∈ R^N} (1/2)‖Tx − y‖_2² + λ‖x‖_1,   (32)

where λ is a positive parameter, ‖·‖_1 is the l_1-norm, and ‖·‖_2 is the Euclidean norm. It is worth noting that Problem (1) can be applied to the LASSO problem (32) by setting h_1(x) = (1/2)‖Tx − y‖_2² and h_2(x) = λ‖x‖_1.
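With this identification, the forward step uses ∇h_1(x) = Tᵀ(Tx − y) and the backward step is soft-thresholding with threshold αλ. A minimal sketch of this setup follows; the problem sizes, the sparsity of the test signal, and the noise level are our own illustrative choices, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, lam = 128, 512, 1e-5
T = rng.standard_normal((M, N))                      # sensing/blurring matrix (illustrative)
x_true = np.zeros(N)
x_true[rng.choice(N, 20, replace=False)] = rng.uniform(-2, 2, 20)   # sparse test signal (assumed)
y = T @ x_true + 0.01 * rng.standard_normal(M)       # observation with additive noise

grad_h1 = lambda x: T.T @ (T @ x - y)                # gradient of h1(x) = 0.5*||Tx - y||^2
prox_h2 = lambda v, a: np.sign(v) * np.maximum(np.abs(v) - a * lam, 0.0)  # prox of a*lam*||.||_1
```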

Image recovery
In the following two examples, we set the regularization parameter in the LASSO problem (32) to λ := 10^{-5}. The peak signal-to-noise ratio (PSNR) in decibels (dB) [30] and the structural similarity index measure (SSIM) [33] are used as image quality metrics. The maximum number of iterations for all deblurring methods is fixed at 500.
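For completeness, PSNR in dB can be computed as 10 log_10(MAX²/MSE), where MAX is the maximum possible pixel value; a short sketch follows (SSIM is more involved and is typically taken from a library implementation such as scikit-image's structural_similarity).

```python
import numpy as np

def psnr(original, recovered, max_value=255.0):
    # Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE).
    mse = np.mean((np.asarray(original, float) - np.asarray(recovered, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)
```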
Consider a contraction f of the form f(x) = ηx, where 0 < η < 1. We take the parameter η in five different cases. The parameters for each deblurring method are set as in Table 1. Also, we define the contraction f by f(x) = 0.99x for Methods 4 and 5.
Let us now look at the comparative experiments for recovering the hall images by Methods 1-5, shown in Figs. 3-5. It can be seen that Method 5 gives higher PSNR and SSIM values than the other tested methods. Thus, our method has the highest image recovery efficiency among the compared methods.

Signal recovery
Example 4.3 In the LASSO problem (32), the matrix T ∈ R^{M×N} is generated by the normal distribution with mean zero and variance one. The vector x ∈ R^N is generated by a uniform distribution. The parameters for each method are set as in Table 1 in Example 4.2. We use the mean squared error (MSE) as the stopping criterion, defined by

MSE_k := (1/N)‖x_k − p*‖_2²,

where p* is an original signal. Now, the experiments for recovering two signals by Methods 1-5 are shown in Figs. 6-7, and the graphs of the MSE for the two cases are shown in Fig. 8. It is observed from Figs. 6-8 that Method 5 converges faster than Methods 1-4, and hence our method has better convergence behavior than the other tested methods in terms of the number of iterations.

Conclusion
In this work, we discuss the convex minimization problem of the sum of two convex functions in a Hilbert space. The challenge of removing the Lipschitz continuity assumption on the gradient motivated us to study linesearch techniques.
We introduce a new linesearch and propose an inertial viscosity forward-backward algorithm whose stepsize does not depend on any Lipschitz constant of the gradient. We prove that the sequence generated by the proposed method converges strongly to a minimizer of the sum of the two convex functions under mild control conditions. As applications, we apply our method to image and signal recovery problems. The comparative experiments show that our method is more efficient than the well-known methods in [9,16,18].