Convergence analysis of a variable metric forward–backward splitting algorithm with applications

The forward–backward splitting algorithm is a popular operator-splitting method for solving monotone inclusion problems involving the sum of a maximal monotone operator and an inverse strongly monotone operator. In this paper, we present a new convergence analysis of a variable metric forward–backward splitting algorithm with extended relaxation parameters in real Hilbert spaces. We prove that this algorithm is weakly convergent under weak conditions imposed on the relaxation parameters. Consequently, we recover the forward–backward splitting algorithm with variable step sizes. As an application, we obtain a variable metric forward–backward splitting algorithm for solving the minimization problem of the sum of two convex functions, one of which is differentiable with a Lipschitz continuous gradient. Furthermore, we discuss applications of this algorithm to the variational inequality problem, the constrained convex minimization problem, and the split feasibility problem. Numerical results on the LASSO problem in statistical learning demonstrate the effectiveness of the proposed iterative algorithm.


Introduction
Let H be a real Hilbert space. The forward-backward splitting algorithm is a classical operator-splitting algorithm for solving the monotone inclusion problem: find x ∈ H such that

0 ∈ Ax + Bx, (1.1)

where A : H → 2^H is a maximal monotone operator and B : H → H is a β-cocoercive operator, for some β > 0. The forward-backward splitting algorithm, which dates back to the original work of Lions and Mercier [1], has been studied and reported extensively in the literature, for example [2][3][4][5][6].
The emergence of compressive sensing theory and of large-scale optimization problems in signal and image processing has brought the forward-backward splitting algorithm much attention in recent years. A forward-backward splitting algorithm with relaxation and errors in Hilbert spaces was proposed by Combettes [4]. More precisely, let x_0 ∈ H, and set

x_{k+1} = x_k + λ_k ( J_{γ_k A}( x_k − γ_k (Bx_k + b_k) ) + a_k − x_k ), (1.2)

where {γ_k} ⊂ (0, 2β), {λ_k} ⊂ (0, 1], and {a_k} and {b_k} are absolutely summable sequences in H. In addition, J_{γ_k A} := (I + γ_k A)^{−1} denotes the resolvent of the operator A with index γ_k > 0. Combettes [4] proved the convergence of the iterative scheme (1.2) when certain conditions are imposed upon the parameters. Jiao and Wang [7] generalized the iterative scheme (1.2) by extending the work of Combettes [4]. They proved the convergence of (1.2) by requiring the parameters {λ_k} to satisfy {λ_k} ⊂ (0, 4β/(2β + γ_k)) when b_k = 0. It is easy to see that 4β/(2β + γ_k) is strictly larger than one when {γ_k} ⊂ (0, 2β). Further, Combettes and Yamada [8] enlarged the range of the relaxation parameters {λ_k} in (1.2) to (0, (4β − γ_k)/(2β)). After a simple calculation, we find that (4β − γ_k)/(2β) > 4β/(2β + γ_k). Therefore, the range of {λ_k} in the work of Combettes and Yamada [8] is larger than that of Jiao and Wang [7].
In the case where γ_k = γ and a_k = b_k = 0, the iterative scheme (1.2) reduces to the forward-backward splitting algorithm with a constant step size [9],

x_{k+1} = x_k + λ_k ( J_{γA}( x_k − γ Bx_k ) − x_k ), (1.3)

where γ ∈ (0, 2β) and {λ_k} ⊂ (0, (4β − γ)/(2β)). Bauschke and Combettes [9] obtained the convergence of the iterative algorithm (1.3) by adopting the Krasnosel'skiĭ-Mann iteration for computing fixed points of nonexpansive operators. The forward-backward splitting algorithm with constant step size (1.3) is usually considered to be stationary, whereas the forward-backward splitting algorithm with variable step sizes (1.2) is referred to as non-stationary.
It is worth mentioning that by letting λ_k = 1, (1.3) reduces to the classical forward-backward splitting algorithm. More precisely, the iterative sequence {x_k} is defined by

x_{k+1} = J_{γA}( x_k − γ Bx_k ). (1.4)

In the context of convex optimization, the forward-backward splitting algorithm is equivalent to the so-called proximal gradient algorithm (PGA) applied to the convex minimization problem

min_{x ∈ H} f(x) + g(x), (1.5)

where f : H → R is convex and differentiable with an L-Lipschitz continuous gradient for some L > 0 and g : H → (−∞, +∞] is a proper, lower semicontinuous, convex function. The convex optimization problem (1.5) has found widespread application in signal and image processing, for example [10][11][12]. As a consequence of [4], Combettes and Wajs [13] employed the forward-backward splitting algorithm (1.2) to solve the minimization problem (1.5). The resulting iterative algorithm is defined as

x_{k+1} = x_k + λ_k ( prox_{γ_k g}( x_k − γ_k ( ∇f(x_k) + b_k ) ) + a_k − x_k ), (1.6)

where {γ_k} ⊂ (0, 2/L), {λ_k} ⊂ (0, 1], and {a_k}, {b_k} are absolutely summable sequences in H. Here prox_{γg} denotes the proximity operator of g with index γ > 0. In addition, Combettes and Wajs [13] presented applications of this algorithm to many concrete convex optimization problems. The iterative algorithm (1.6) was subsequently improved by Combettes and Yamada [8], who extended the range of the relaxation parameters {λ_k}.
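To make the structure of scheme (1.6) concrete, the following minimal sketch implements the relaxed proximal gradient iteration with zero error terms (a_k = b_k = 0); the callables grad_f and prox_g and the constant parameter choices are our illustrative assumptions, not part of the original analysis.

```python
import numpy as np

def relaxed_proximal_gradient(x0, grad_f, prox_g, L, n_iter=500):
    """Sketch of scheme (1.6) with a_k = b_k = 0.

    grad_f : callable x -> gradient of f at x (assumed L-Lipschitz)
    prox_g : callable (x, gamma) -> prox of gamma * g at x
    """
    x = np.asarray(x0, dtype=float).copy()
    gamma = 1.9 / L      # step size gamma_k in (0, 2/L), held constant here
    lam = 1.0            # relaxation parameter lambda_k
    for _ in range(n_iter):
        y = prox_g(x - gamma * grad_f(x), gamma)  # forward step, then backward step
        x = x + lam * (y - x)                     # relaxation
    return x
```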
Inspired by large-scale convex optimization problems arising in image processing, machine learning, and economic management, many efficient primal-dual splitting algorithms have been proposed for structured monotone inclusions involving maximal monotone operators and single-valued Lipschitz or cocoercive monotone operators, for example [14,15]. Although these monotone inclusions are more complicated than the monotone inclusion problem (1.1), they can be transformed into the form of this problem in a suitable product space. Therefore, it is natural to consider using the forward-backward splitting algorithm (e.g., (1.2) or (1.3)) to solve the equivalent monotone inclusion problem. Because the backward steps cannot be decomposed, direct use of the forward-backward splitting algorithm often fails to yield a completely split algorithm. Many researchers have attempted to overcome this difficulty by investigating variable metric operator-splitting algorithms. The use of a suitable variable metric enables the implicit backward step to be easily decomposed. For example, the primal-dual hybrid gradient algorithm [16] (also known as the primal-dual Chambolle-Pock algorithm [17]) is equivalent to a variable metric proximal point algorithm [18,19]. We refer the readers to a subsequent paper [20] for more details. Vũ [21] proposed a variable metric extension of the forward-backward-forward splitting algorithm [3] for solving monotone inclusions involving the sum of a maximal monotone operator and a monotone Lipschitzian operator in Hilbert spaces. Liang [22] proposed a variable metric multi-step inertial operator-splitting algorithm for solving the monotone inclusion problem (1.1). Bonettini et al. [23] developed a scaled inertial forward-backward splitting algorithm for solving (1.1) in the context of convex minimization. Neither the algorithm of Liang [22] nor that of Bonettini et al. [23] is compatible with the relaxation strategy. The variable metric forward-backward splitting algorithm was originally studied in finite-dimensional Hilbert spaces [2,24]; however, the methods in these studies either required strong monotonicity to study the convergence rate or did not make use of the cocoercive property of B in (1.1). For infinite-dimensional Hilbert spaces, Combettes and Vũ [25] proposed a variable metric forward-backward splitting algorithm to solve (1.1) and analyzed its weak and strong convergence. This algorithm is defined as follows. Let x_0 ∈ H, and set

x_{k+1} = x_k + λ_k ( J_{γ_k U_k A}( x_k − γ_k U_k ( Bx_k + b_k ) ) + a_k − x_k ), (1.7)

where {U_k} is a sequence of self-adjoint strongly positive bounded linear operators, {γ_k} are step sizes, {λ_k} ⊂ (0, 1] are relaxation parameters, and {a_k}, {b_k} are error sequences; see also [8].
While preparing this manuscript, we discovered that in Chapter 5 of the dissertation [26], Simões generalized the variable metric forward-backward splitting algorithm by replacing the relaxation parameters {λ_k} in (1.7) with self-adjoint, strongly positive linear operators. However, this approach still requires the maximum eigenvalue of the operators to be smaller than one. The purpose of this paper is to introduce a new convergence analysis of the variable metric forward-backward splitting algorithm (1.7) with an extended range of relaxation parameters. We prove the weak convergence of the variable metric forward-backward splitting algorithm in real Hilbert spaces while allowing the relaxation parameters {λ_k} to be larger than one. To achieve this goal, we make full use of the averagedness and firm nonexpansiveness of the operators J_{γ_k U_k A}(I − γ_k U_k B) and their relaxations, where λ_k > 0 and U_k ∈ P_α(H). In contrast, existing analyses mainly rely on J_{γ_k U_k A} being firmly nonexpansive. Consequently, we obtain the convergence of the forward-backward splitting algorithm with variable step sizes. Moreover, we impose a slightly weaker condition on the relaxation parameters to ensure the convergence of this algorithm. The results we obtained complement and extend those of Combettes and Yamada [8]. As an application, we obtain a variable metric forward-backward splitting algorithm for solving the minimization problem (1.5). We also present applications of this algorithm to the variational inequality problem, the constrained convex minimization problem, and the split feasibility problem. To the best of our knowledge, the iterative algorithms we obtained are the most general ones for solving these problems. Finally, we conduct numerical experiments on the LASSO problem to validate the effectiveness of the proposed iterative algorithm.
The remainder of this paper is organized as follows. Section 2 reviews selected notations and lemmas on monotone operator theory and presents some technical lemmas. In Section 3, we prove the main convergence results of the variable metric forward-backward splitting algorithm with relaxation in real Hilbert spaces, and we obtain several corollaries for special cases. Section 4 presents the use of the proposed iterative algorithm to solve three typical optimization problems, including the variational inequality problem, the constrained convex minimization problem, and the split feasibility problem. In Section 5, we present preliminary numerical results on the LASSO problem to illustrate the performance of the proposed iterative algorithm. Finally, we provide our conclusions.

Preliminaries
In this section, we recall selected concepts and lemmas that are commonly used in convex analysis and monotone operator theory. Most of them can be found in [9,27]. Throughout this paper, H is a real Hilbert space. The inner product and the associated norm of H are denoted by ⟨·, ·⟩ and ‖·‖, respectively. I denotes the identity operator, and the symbols ⇀ and → denote weak and strong convergence, respectively.
We first recall selected basic notations and definitions. Let A : H → 2^H be a set-valued operator. We denote its domain, range, graph, and set of zeros by dom A = {x ∈ H | Ax ≠ ∅}, ran A = {u ∈ H | (∃ x ∈ H) u ∈ Ax}, gra A = {(x, u) ∈ H × H | u ∈ Ax}, and zer A = {x ∈ H | 0 ∈ Ax}, respectively.

Definition 2.1. An operator A : H → 2^H is said to be monotone if ⟨x − y, u − v⟩ ≥ 0 for all (x, u), (y, v) ∈ gra A. Moreover, A is said to be maximal monotone if its graph is not strictly contained in the graph of any other monotone operator on H.

A well-known example of a maximal monotone operator is the subgradient mapping of a proper, lower semicontinuous convex function f : H → (−∞, +∞], defined by ∂f : H → 2^H : x ↦ {u ∈ H | f(y) ≥ f(x) + ⟨u, y − x⟩, ∀y ∈ H}.

Definition 2.2. Let A : H → 2^H be a maximal monotone operator. The resolvent operator of A with index λ > 0 is defined as J_{λA} := (I + λA)^{−1}. According to the Minty theorem, the resolvent operator J_{λA} is defined everywhere on the Hilbert space H, and J_{λA} is firmly nonexpansive.
Let us recall the definition of the proximity operator, which was first introduced by Moreau [28]. Let f ∈ Γ_0(H), where Γ_0(H) denotes the set of all proper, lower semicontinuous convex functions f : H → (−∞, +∞]. The proximity operator of f with index λ > 0 is defined by

prox_{λf}(x) := argmin_{y ∈ H} { f(y) + (1/(2λ)) ‖x − y‖² }, ∀x ∈ H.

In fact, the resolvent operator of the subdifferential of any f ∈ Γ_0(H) with index λ > 0 is the proximity operator of f with index λ > 0, that is, J_{λ∂f} = prox_{λf}. Therefore, proximity operators enjoy the same properties as resolvent operators.
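As a concrete illustration (our example, not part of the original text), for f = ‖·‖_1 the proximity operator has the well-known closed form of componentwise soft-thresholding; a minimal sketch:

```python
import numpy as np

def prox_l1(x, lam):
    """prox_{lam * ||.||_1}(x): componentwise soft-thresholding.

    Solves min_y ||y||_1 + (1 / (2 * lam)) * ||x - y||^2.
    """
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

By the identity J_{λ∂f} = prox_{λf}, the same function also evaluates the resolvent of λ∂‖·‖_1.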
Definition 2.3. An operator B : H → H is said to be β-cocoercive for some β > 0 if ⟨x − y, Bx − By⟩ ≥ β ‖Bx − By‖², ∀x, y ∈ H.

A β-cocoercive operator is also known as a β-inverse strongly monotone operator (β-ism), for example [29]. It is easy to see from the above definition that a β-cocoercive operator is (1/β)-Lipschitz continuous, i.e., ‖Bx − By‖ ≤ (1/β) ‖x − y‖. Next, we recall the definitions of nonexpansive and related mappings, which often appear in the convergence analysis of optimization algorithms.

Definition 2.4. Let C be a nonempty subset of H and let T : C → H. Then

(i) T is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖, ∀x, y ∈ C;

(ii) T is said to be firmly nonexpansive if ‖Tx − Ty‖² ≤ ⟨x − y, Tx − Ty⟩, ∀x, y ∈ C;

(iii) T is said to be α-averaged, α ∈ (0, 1), if there exists a nonexpansive mapping S such that T = (1 − α)I + αS.
It follows immediately that a firmly nonexpansive mapping is a nonexpansive mapping and an α-averaged mapping is also nonexpansive.
We denote by Fix(T) the set of fixed points of a mapping T, that is, Fix(T) := {x ∈ C | Tx = x}. The following proposition provides some equivalent characterizations of firmly nonexpansive mappings; it can be found in Proposition 4.2 of [27].
Proposition 2.1. Let C be a nonempty subset of H and let T : C → H. Then the following are equivalent:

(i) T is firmly nonexpansive;

(ii) I − T is firmly nonexpansive;

(iii) 2T − I is nonexpansive;

(iv) ‖Tx − Ty‖² ≤ ⟨x − y, Tx − Ty⟩, ∀x, y ∈ C.

From Proposition 2.1 (iii) and (iv), we know that if T is firmly nonexpansive, then T is 1/2-averaged, and that a 1-cocoercive operator is firmly nonexpansive.
The following proposition is taken from Proposition 4.25 of [27].

Proposition 2.2. Let C be a nonempty subset of H, let T : C → H be nonexpansive, and let α ∈ (0, 1). Then T is α-averaged if and only if, for all x, y ∈ C,

‖Tx − Ty‖² ≤ ‖x − y‖² − ((1 − α)/α) ‖(I − T)x − (I − T)y‖².
The following relation between an operator T and its complement I − T will also be used: a mapping T is α-averaged, α ∈ (0, 1), if and only if its complement I − T is 1/(2α)-cocoercive. We refer interested readers to [27] for further properties of nonexpansive, firmly nonexpansive, and α-averaged nonlinear mappings.
We now recall a result on the composition of two averaged operators. The following lemma first appeared in [30] and was later extended to compositions of finite families of averaged operators [8].

Lemma 2.3. Let α_1, α_2 ∈ (0, 1), let T_1 : H → H be α_1-averaged, and let T_2 : H → H be α_2-averaged. Then T_1 T_2 is α-averaged with

α = (α_1 + α_2 − 2α_1α_2)/(1 − α_1α_2).
Remark 2.1. (i) It is worth mentioning that two other averagedness constants for the composition of two averaged operators have been reported: one in Proposition 4.32 of [27] and one by Byrne [29]. The constant α in Lemma 2.3 is smaller than both of these constants. (ii) The constant α is used in [7] to obtain the upper bound of the relaxation parameter λ_k, namely λ_k < 1/α.
We employ the following previously used notation [25]. Let B(H, G) be the space of bounded linear operators from a Hilbert space H to a Hilbert space G, with norm ‖L‖ := sup{ ‖Lx‖ : x ∈ H, ‖x‖ ≤ 1 }, and set B(H) := B(H, H). We write S(H) := {L ∈ B(H) | L = L*}, where L* denotes the adjoint of L. The Loewner partial ordering on S(H) is defined by, for any U, V ∈ S(H), U ≽ V ⟺ ⟨Ux, x⟩ ≥ ⟨Vx, x⟩, ∀x ∈ H. For α ∈ [0, +∞), we set P_α(H) := {U ∈ S(H) | U ≽ αI}, and we denote by √U the square root of U ∈ P_α(H). Moreover, for every U ∈ P_α(H), we define a semi-scalar product and a semi-norm (a scalar product and a norm if α > 0) by

⟨x, y⟩_U := ⟨Ux, y⟩ and ‖x‖_U := √⟨Ux, x⟩, ∀x, y ∈ H.

We borrow the following results on monotone operators in a variable metric setting from the work of Combettes and Vũ [25].

Lemma 2.4. Let A : H → 2^H be maximal monotone, let α ∈ (0, +∞), let U ∈ P_α(H), and let H_{U^{−1}} be the real Hilbert space with the scalar product ⟨x, y⟩_{U^{−1}} = ⟨U^{−1}x, y⟩, ∀x, y ∈ H. Then the following hold: (i) UA : H_{U^{−1}} → 2^{H_{U^{−1}}} is maximal monotone; (ii) J_{UA} := (I + UA)^{−1} : H_{U^{−1}} → H_{U^{−1}} is firmly nonexpansive.

Let U ∈ P_α(H) for some α > 0. The proximity operator of f ∈ Γ_0(H) relative to the metric induced by U is defined by

prox^U_f(x) := argmin_{y ∈ H} { f(y) + (1/2) ‖x − y‖²_U }, ∀x ∈ H.

We have prox^U_f = J_{U^{−1}∂f}, and we write prox^I_f = prox_f.
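As an illustration of the variable-metric proximity operator (our example, assuming a diagonal metric), for U = diag(u) with u_i > 0 and the separable function f = ‖·‖_1, the problem min_y ‖y‖_1 + (1/2)‖x − y‖²_U decouples componentwise into soft-thresholding with thresholds 1/u_i:

```python
import numpy as np

def prox_l1_metric(x, u):
    """prox of ||.||_1 in the metric U = diag(u), u_i > 0.

    Minimizes sum_i ( |y_i| + 0.5 * u_i * (x_i - y_i)^2 ),
    whose solution is soft-thresholding with threshold 1 / u_i.
    """
    thresh = 1.0 / np.asarray(u, dtype=float)
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
```

This decoupling is what makes variable metric methods attractive in practice: a well-chosen metric keeps the backward step as cheap as in the unweighted case.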
We make full use of the following two lemmas to obtain the weak convergence of the considered iterative sequence; both were previously reported in [31]. In the following, we denote by ℓ¹₊(N) the set of summable sequences in [0, +∞), where N is the set of nonnegative integers.

Lemma 2.5. Let α ∈ (0, +∞), let {W_k} be in P_α(H) with sup_k ‖W_k‖ < +∞, and let {η_k} ∈ ℓ¹₊(N) be such that (1 + η_k) W_k ≽ W_{k+1} for every k. Let C be a nonempty subset of H, let {ε_k} ∈ ℓ¹₊(N), and let {x_k} be a sequence in H such that

‖x_{k+1} − z‖_{W_{k+1}} ≤ (1 + η_k) ‖x_k − z‖_{W_k} + ε_k, ∀z ∈ C, ∀k ∈ N. (2.2)

Then {x_k} is bounded and, for every z ∈ C, the limit lim_{k→+∞} ‖x_k − z‖_{W_k} exists.

Lemma 2.6. Let α ∈ (0, +∞), and let {W_k} and W be in P_α(H) such that W_k → W pointwise as k → +∞, as is the case when sup_k ‖W_k‖ < +∞ and (1 + η_k) W_k ≽ W_{k+1} with {η_k} ∈ ℓ¹₊(N). Let C be a nonempty subset of H, and let {x_k} be a sequence in H satisfying (2.2). Then {x_k} converges weakly to a point in C if and only if every weak sequential cluster point of {x_k} lies in C.

The following lemma can be found in Corollary 2.14 of the book by Bauschke and Combettes [27].
Lemma 2.7. Let x ∈ H, y ∈ H, and α ∈ R. Then

‖αx + (1 − α)y‖² + α(1 − α) ‖x − y‖² = α ‖x‖² + (1 − α) ‖y‖².


Variable metric forward-backward splitting algorithm

In this section, we study the convergence of the variable metric forward-backward splitting algorithm. First, we prove the following useful lemmas.
Lemma 3.1. Let B : H → H be a β-cocoercive operator, let α > 0, and let U ∈ P_α(H). Let H_{U^{−1}} be the real Hilbert space with the scalar product ⟨x, y⟩_{U^{−1}} = ⟨U^{−1}x, y⟩, ∀x, y ∈ H. Then UB is (β/‖U‖)-cocoercive on H_{U^{−1}}.

Proof. For any x, y ∈ H, we have

⟨x − y, UBx − UBy⟩_{U^{−1}} = ⟨x − y, Bx − By⟩ ≥ β ‖Bx − By‖². (3.1)

On the other hand, we obtain

‖UBx − UBy‖²_{U^{−1}} = ⟨U(Bx − By), Bx − By⟩ ≤ ‖U‖ ‖Bx − By‖². (3.2)

From (3.1) and (3.2), we obtain ⟨x − y, UBx − UBy⟩_{U^{−1}} ≥ (β/‖U‖) ‖UBx − UBy‖²_{U^{−1}}, that is, UB is (β/‖U‖)-cocoercive on H_{U^{−1}}.

Lemma 3.2. Let A : H → 2^H be maximal monotone. Let α ∈ (0, +∞), and let U ∈ P_α(H). Let H_{U^{−1}} be the real Hilbert space with the scalar product ⟨x, y⟩_{U^{−1}} = ⟨U^{−1}x, y⟩, ∀x, y ∈ H. Let B : H → H be a β-cocoercive operator. Then, for any γ ∈ (0, 2β/‖U‖), the operator J_{γUA}(I − γUB) is 2β/(4β − γ‖U‖)-averaged on H_{U^{−1}}.

Proof. Because A is maximal monotone, for any γ > 0, γUA is maximal monotone on H_{U^{−1}}. According to Lemma 2.4 (ii), J_{γUA} is firmly nonexpansive on H_{U^{−1}} and hence 1/2-averaged. By Lemma 3.1, γUB is (β/(γ‖U‖))-cocoercive on H_{U^{−1}}, so its complement I − γUB is γ‖U‖/(2β)-averaged. Therefore, we apply Lemma 2.3, from which we know that J_{γUA}(I − γUB) is 2β/(4β − γ‖U‖)-averaged.
Lemma 3.3. Let H be a real Hilbert space, let A : H → 2^H be a maximal monotone operator, and let B : H → H be a β-cocoercive operator, for some β > 0. Suppose that Ω := zer(A + B) ≠ ∅. Let γ_k > 0, α > 0, and {U_k} ⊂ P_α(H). Then the following are equivalent: (i) x* ∈ zer(A + B); (ii) x* = J_{γ_k U_k A}( x* − γ_k U_k Bx* ) for every k.

Lemma 3.4. Let H be a real Hilbert space, let A : H → 2^H be a maximal monotone operator, and let B : H → H be a β-cocoercive operator, for some β > 0. Let r > 0 and s > 0, and let U, V ∈ P_α(H). Define the variable metric forward-backward operator T_{rU} := J_{rUA}(I − rUB). Then, for any x ∈ H, the deviation ‖T_{rU}x − T_{sV}x‖ can be bounded in terms of |r − s| and ‖U − V‖, where λ_min(U^{−1}) denotes the minimum eigenvalue of U^{−1} appearing in the bound.
Proof. Let x ∈ H, and consider the inclusions defining T_{rU}x and T_{sV}x. It then follows from the monotonicity of the operator A, the Cauchy-Schwarz inequality, and the fact that λ_min(U^{−1}) ‖z‖² ≤ ‖z‖²_{U^{−1}} for all z ∈ H that the claimed estimate holds.

We are now ready to state our main theorems and present their convergence analysis.
Proof. According to condition (3.5), the corresponding bounds on the parameters hold. For the sake of convenience, we introduce shorthand operators so that the iterative scheme (3.6) can be rewritten in the relaxed fixed-point form (3.9) with the operators defined in (3.10); consequently, the iterative sequence {x_{k+1}} in (3.9) is equivalent to (3.12).

(i) Let x* ∈ zer(A + B). According to Lemma 3.3, (3.10), and (3.12), we obtain the quasi-Fejér inequality (3.14). On the basis of Lemma 2.5, we conclude that lim_{k→+∞} ‖x_k − x*‖_{U_k^{−1}} exists.

(ii) With the help of the inequality ‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩, ∀x, y ∈ H, together with Lemma 2.7 and (3.9), we derive (3.17). Combining (3.17) with (3.14), and observing that lim_{k→+∞} ‖x_k − x*‖_{U_k^{−1}} exists and that Σ_{k=0}^{+∞} λ_k e_k < +∞, we let k → +∞ and use the condition on {λ_k} to obtain x_k − J_{γ_k U_k A}( x_k − γ_k U_k Bx_k ) → 0 as k → +∞.

(iii) In this part, we prove that the sequence {x_k} converges weakly to a point in Ω. Let x̄ be a weak sequential cluster point of {x_k}; then there exists a subsequence {x_{k_n}} of {x_k} such that x_{k_n} ⇀ x̄. Because {γ_k} ⊂ (γ, 2β/α) is bounded, there exists a subsequence of {γ_k} converging to some γ̄ ∈ (γ, 2β/α). Without loss of generality, we may assume that γ_{k_n} → γ̄. According to condition (3.5), it follows from Lemma 2.6 that there exists U ∈ P_α(H) such that U_k → U pointwise. With the help of Lemma 3.4, we obtain the estimate (3.22). Because {‖x_{k_n} − J_{γ̄UA}( x_{k_n} − γ̄UBx_{k_n} )‖} is bounded, it follows from the conditions above and (3.22) that

‖x_{k_n} − J_{γ̄UA}( x_{k_n} − γ̄UBx_{k_n} )‖ → 0 as n → +∞. (3.23)

As J_{γ̄UA}(I − γ̄UB) is nonexpansive, the demiclosedness principle for nonexpansive mappings yields x̄ = J_{γ̄UA}( x̄ − γ̄UBx̄ ), which means that x̄ ∈ zer(A + B). Because x̄ is an arbitrary weak sequential cluster point, together with conclusion (i), we conclude from Lemma 2.6 that {x_k} converges weakly to a point in zer(A + B).
(iv) On the other hand, because J_{γ_k U_k A} is firmly nonexpansive and B is β-cocoercive, we obtain two estimates whose combination with (3.15) yields (3.27)-(3.28). Further, on the basis of (3.28) and (3.14), we obtain inequality (3.30), from which the assertion follows by letting k → +∞.

Remark 3.2. If we assume that λ_k ∈ (λ, 1] for some λ > 0, then we recover the conclusion that Σ_{k=0}^{+∞} ‖Bx_k − Bx*‖² < +∞, as in Theorem 4.1 of Combettes and Vũ [25]. In fact, this follows from inequality (3.30) by summing from zero to infinity. Because the proof is the same as that in [25], we omit it here.
Next, we impose a slightly weaker condition on the iterative parameters λ_k than in Theorem 3.1 to ensure the weak convergence of the iterative sequence {x_k}.

Theorem 3.2. Let H be a real Hilbert space, let A : H → 2^H be maximal monotone, and let B : H → H be β-cocoercive, for some β > 0. Suppose that Ω := zer(A + B) ≠ ∅, and let the iterative sequence {x_k} be defined by (3.6). Then conclusions (i)-(iii) of Theorem 3.1 remain valid. Further, suppose that λ_k ≥ λ > 0 for some constant λ. Then (iv) Bx_k → Bx* as k → +∞, where x* ∈ Ω.
Proof. (i) Let x* ∈ Ω. It follows from the same argument as in the proof of Theorem 3.1 (i) that the corresponding quasi-Fejér inequality holds. Because Σ_{k=0}^{+∞} η_k < +∞ and Σ_{k=0}^{+∞} λ_k e_k < +∞, the assumptions of Lemma 2.5 are satisfied. Let R_{k+1} be defined as in (3.10). Using formulation (3.10) and the fact that R_{k+1} is nonexpansive on H_{U_{k+1}^{−1}}, we obtain (3.35). On the other hand, using the relation (3.36), the combination of (3.36) with (3.35) yields (3.37). With the help of Lemma 2.5, we conclude from (3.37) that lim_{k→+∞} ‖x_k − x*‖_{U_k^{−1}} exists. (iii) and (iv) can be proven in the same way as in Theorem 3.1.
Corollary 3.4. Let f : H → R be convex and differentiable with an L-Lipschitz continuous gradient for some L > 0, and let g ∈ Γ_0(H). Suppose that the solution set of problem (1.5) is nonempty. Let x_0 ∈ H, and set

x_{k+1} = x_k + λ_k ( prox^{U_k^{−1}}_{γ_k g}( x_k − γ_k U_k ( ∇f(x_k) + b_k ) ) + a_k − x_k ), (3.39)

where {U_k}, {γ_k}, {λ_k}, {a_k}, and {b_k} satisfy the same conditions as in Theorem 3.1 or Theorem 3.2 with β = 1/L. Then the following hold: (i) for any x_0 ∈ H, the sequence {x_k} converges weakly to a solution of (1.5).

Proof. Because f is convex and differentiable with an L-Lipschitz continuous gradient, according to the Baillon-Haddad theorem, ∇f is β-cocoercive with β = 1/L. From the definition of the proximity operator on the Hilbert space H_{U^{−1}}, we know that prox^{U_k^{−1}}_{γ_k g} = J_{γ_k U_k ∂g}.

Applications
In this section, we present several applications of the variable metric forward-backward splitting algorithm.

Application to variational inequality problem
Consider the following variational inequality problem (VIP): find x* ∈ C such that

⟨Bx*, x − x*⟩ ≥ 0, ∀x ∈ C, (4.1)

where C is a nonempty closed convex subset of H and B : H → H is a nonlinear operator.
Recall the indicator function δ_C, defined as δ_C(x) = 0 if x ∈ C and δ_C(x) = +∞ otherwise. The proximity operator of δ_C is well known to be the metric projection onto C, defined by P_C(x) := argmin_{y ∈ C} ‖x − y‖. The normal cone operator N_C of C is defined by N_C(x) := {u ∈ H | ⟨u, y − x⟩ ≤ 0, ∀y ∈ C} if x ∈ C, and N_C(x) := ∅ otherwise. Then, VIP (4.1) is equivalent to the following monotone inclusion problem: find x* ∈ H such that

0 ∈ Bx* + N_C(x*). (4.4)

Assuming that B is β-cocoercive, (4.4) is a special case of the monotone inclusion problem (1.1). Let A = N_C; then we know that J_{γUA} = P_C^{U^{−1}}, where P_C^{U^{−1}} denotes the projector onto the nonempty closed convex subset C of H relative to the norm ‖·‖_{U^{−1}}. More precisely, P_C^{U^{−1}}(x) := argmin_{y ∈ C} ‖x − y‖_{U^{−1}}.
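For intuition (our illustration, assuming a diagonal metric and a box constraint), the projection P_C^{U^{−1}} is an ordinary weighted least-squares problem; for a separable set such as a box C = [lo, hi]^n and a diagonal U^{−1}, it decouples componentwise and reduces to plain clipping:

```python
import numpy as np

def proj_box_metric(x, lo, hi):
    """Projection onto the box C = [lo, hi]^n in any diagonal metric.

    min_{y in C} sum_i w_i * (x_i - y_i)^2 with weights w_i > 0
    decouples over coordinates, so the minimizer is independent of
    the (diagonal) weights: clip each component to [lo_i, hi_i].
    """
    return np.clip(x, lo, hi)
```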
On the basis of Theorems 3.1 and 3.2, we obtain the following convergence theorem for solving VIP (4.1).

Theorem 4.1. Let H be a real Hilbert space and let B : H → H be a β-cocoercive operator. We denote by Ω the solution set of VIP (4.1) and assume that Ω ≠ ∅. Let x_0 ∈ H, and set

x_{k+1} = x_k + λ_k ( P_C^{U_k^{−1}}( x_k − γ_k U_k ( Bx_k + b_k ) ) + a_k − x_k ),

where {U_k}, {γ_k}, {λ_k}, {a_k}, and {b_k} satisfy the same conditions as in Theorem 3.1 or Theorem 3.2.
Then the following hold: (i) for any x_0 ∈ H, the sequence {x_k} converges weakly to a solution of VIP (4.1).

Application to constrained convex minimization problem
Consider the following constrained convex minimization problem:

min_{x ∈ C} f(x), (4.6)

where C is a nonempty closed convex subset of H and f : H → R is a proper, closed, convex, differentiable function with a Lipschitz continuous gradient. It follows from the definition of the indicator function that the constrained convex minimization problem (4.6) is equivalent to the following unconstrained minimization problem:

min_{x ∈ H} f(x) + δ_C(x). (4.7)

Obviously, problem (4.7) is a special case of (1.5). Therefore, by taking g(x) = δ_C(x), we obtain the following convergence theorem for solving the constrained convex minimization problem (4.6).
Theorem 4.2. Let H be a real Hilbert space. Let f : H → R be a proper, closed convex function that is differentiable with an L-Lipschitz continuous gradient. We denote by Ω the solution set of the constrained convex minimization problem (4.6) and assume that Ω ≠ ∅. Let x_0 ∈ H, and set

x_{k+1} = x_k + λ_k ( P_C^{U_k^{−1}}( x_k − γ_k U_k ( ∇f(x_k) + b_k ) ) + a_k − x_k ),

where {U_k}, {γ_k}, {λ_k}, {a_k}, and {b_k} satisfy the same conditions as in Theorem 3.1 or Theorem 3.2. Then the sequence {x_k} converges weakly to a point in Ω.
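A minimal sketch of this scheme in the unweighted, unrelaxed case U_k = I, λ_k = 1 with zero errors (a plain projected gradient step; all names are our placeholders):

```python
import numpy as np

def projected_gradient_step(x, grad_f, proj_C, gamma):
    """One iteration of the scheme in Theorem 4.2 with U_k = I, lambda_k = 1.

    grad_f : callable x -> gradient of f (assumed L-Lipschitz)
    proj_C : callable implementing the projection onto C
    gamma  : step size, assumed to lie in (0, 2 / L)
    """
    return proj_C(x - gamma * grad_f(x))

# Example: minimize 0.5 * ||x - c||^2 over the nonnegative orthant.
c = np.array([1.0, -2.0, 3.0])
x = np.zeros(3)
for _ in range(50):
    x = projected_gradient_step(x, lambda z: z - c, lambda z: np.maximum(z, 0.0), 1.0)
# x approaches max(c, 0) = [1, 0, 3]
```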

Application to split feasibility problem
Consider the split feasibility problem (SFP):

find x* ∈ C such that Lx* ∈ Q, (4.9)

where C and Q are nonempty, closed convex subsets of the Hilbert spaces H and G, respectively, and L : H → G is a bounded linear operator. SFP (4.9) was first introduced by Censor and Elfving [32] in finite-dimensional Hilbert spaces and has since been studied extensively by many authors, see, for example, [33,34] and the references therein. SFP (4.9) is closely related to the constrained convex minimization problem (4.6). More precisely, the constrained convex minimization problem corresponding to SFP (4.9) is

min_{x ∈ C} (1/2) ‖Lx − P_Q(Lx)‖². (4.10)

Let x* be a solution of SFP (4.9); then x* is a solution of (4.10). Conversely, let x* be a solution of (4.10) with f(x*) := (1/2) ‖Lx* − P_Q(Lx*)‖² = 0; then x* is a solution of SFP (4.9). Under the assumption that the solution set of SFP (4.9) is nonempty, SFP (4.9) and the constrained convex minimization problem (4.10) are equivalent.
The function f(x) := (1/2) ‖Lx − P_Q(Lx)‖² is convex and differentiable with ∇f = L*(I − P_Q)L, which is Lipschitz continuous with constant ‖L‖². Therefore, we obtain the following theorem for solving SFP (4.9).

Theorem 4.3. Let H and G be real Hilbert spaces, let L : H → G be a bounded linear operator, and let C and Q be nonempty closed convex subsets of H and G, respectively. We denote by Ω the solution set of SFP (4.9) and assume that Ω ≠ ∅. Let x_0 ∈ H, and set

x_{k+1} = x_k + λ_k ( P_C^{U_k^{−1}}( x_k − γ_k U_k ( L*( Lx_k − P_Q(Lx_k) ) + b_k ) ) + a_k − x_k ),

where {U_k}, {γ_k}, {λ_k}, {a_k}, and {b_k} satisfy the same conditions as in Theorem 3.1 or Theorem 3.2. Then the sequence {x_k} converges weakly to a solution of SFP (4.9).
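A minimal sketch of one iteration of this scheme in the unweighted case U_k = I, λ_k = 1 and without error terms (the function names are our placeholders); since ∇f = L*(I − P_Q)L has Lipschitz constant ‖L‖², the step size should satisfy γ ∈ (0, 2/‖L‖²):

```python
import numpy as np

def sfp_step(x, L, proj_C, proj_Q, gamma):
    """One forward-backward step for SFP (4.9) with U_k = I, lambda_k = 1.

    L              : 2-D array representing the bounded linear operator
    proj_C, proj_Q : callables implementing the projections onto C and Q
    gamma          : step size in (0, 2 / ||L||^2)
    """
    Lx = L @ x
    grad = L.T @ (Lx - proj_Q(Lx))   # gradient of 0.5 * ||(I - P_Q) L x||^2
    return proj_C(x - gamma * grad)
```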

Numerical experiments
In this section, we apply the proposed iterative algorithm (3.39) to solve the well-known LASSO problem [37]. All the experiments are performed on a Lenovo laptop with an Intel(R) Core(TM) i7-4712MQ 2.3 GHz CPU and 4 GB of RAM, running MATLAB 2014a.
Let us recall the LASSO problem:

min_{x ∈ R^n} (1/2) ‖Ax − b‖²_2 subject to ‖x‖_1 ≤ t, (5.1)

where A ∈ R^{m×n}, b ∈ R^m, and t > 0. Define C := {x | ‖x‖_1 ≤ t}; by using the indicator function, we see that (5.1) is equivalent to the following unconstrained optimization problem:

min_{x ∈ R^n} (1/2) ‖Ax − b‖²_2 + δ_C(x), (5.2)

which is a special case of the general optimization problem (1.5). Let f(x) = (1/2) ‖Ax − b‖²_2 and g(x) = δ_C(x); then we can apply the iterative algorithm (3.39) to solve (5.2). Notice that the gradient of f(x) is ∇f(x) = A^T(Ax − b) and the Lipschitz constant of ∇f is L := ‖A‖². Besides, the proximity operator of the indicator function δ_C(x) is the orthogonal projection onto the closed convex set C. Although it has no closed-form solution, it can be computed in polynomial time.
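For concreteness, the following sketch shows how such an experiment can be set up in the simplified case U_k = I with zero error terms; the sort-based ℓ1-ball projection is the standard routine of Duchi et al., and all names and defaults are our illustrative choices (the paper's experiments used MATLAB and do not specify an implementation):

```python
import numpy as np

def proj_l1_ball(x, t):
    """Euclidean projection onto {x : ||x||_1 <= t} (sort-based routine)."""
    x = np.asarray(x, dtype=float)
    if np.abs(x).sum() <= t:
        return x.copy()
    u = np.sort(np.abs(x))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    idx = np.nonzero(u * np.arange(1, x.size + 1) > css - t)[0]
    rho = idx[-1]
    theta = (css[rho] - t) / (rho + 1.0)  # soft-threshold level
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lasso_fbs(A, b, t, c=1.9, lam=1.0, n_iter=1000):
    """Relaxed forward-backward iteration for (5.2) with U_k = I."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    gamma = c / L                          # step size gamma_k = c / L, c in (0, 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        y = proj_l1_ball(x - gamma * (A.T @ (A @ x - b)), t)
        x = x + lam * (y - x)              # relaxation step
    return x
```

The default parameters mirror those examined in Table 1; with c = 1.9, over-relaxation up to λ_k = 1.05 is admissible.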
In the tests, the true signal x ∈ R^n has k nonzero elements generated from the uniform distribution on the interval [−2, 2]. The system matrix A ∈ R^{m×n} is generated from the standard Gaussian distribution. The observed signal b is given by b = Ax. In the experiment, we set m = 240, n = 1024, and k = 40. The stopping criterion is defined in terms of a small constant ε > 0. We test the performance of the proposed iterative algorithm with different choices of the step size γ_k and the relaxation parameter λ_k. For simplicity, we keep them constant during the iteration process. According to Corollary 3.4, we know that γ_k ∈ (0, 2/L) and λ_k ∈ (0, (4 − γ_k L)/2). The obtained numerical results are listed in Table 1, in which we report the number of iterations ("Iter"), the objective function value ("Obj"), and the error between the recovered signal and the true signal ("Err"). We can see from Table 1 that when the step size γ_k is fixed, a larger relaxation parameter λ_k leads to faster convergence. At the same time, the larger the step size, the faster the algorithm converges. To better visualize the effect of the iterative parameters on the objective value, Figure 1 plots the objective function value against the number of iterations. Further, we plot the true signal and the recovered signal in Figure 2 for the parameters γ_k = 1.9/L and λ_k = 1.05 and the stopping criterion ε = 10^{−8}. We can see from Figure 2 that the true signal is successfully reconstructed.

Conclusions

In this paper, we presented a new convergence analysis of the variable metric forward-backward splitting algorithm with extended relaxation parameters and proved the weak convergence of this algorithm. Compared to existing work, we imposed a slightly weaker condition on the relaxation parameters to ensure the convergence of the forward-backward splitting algorithm when using a variable metric and variable step sizes. Our results complemented and extended the corresponding results of Combettes and Yamada [8]. Furthermore, we obtained several general iterative algorithms for solving the variational inequality problem, the constrained convex minimization problem, and the split feasibility problem, respectively. These results generalized and improved known results in the literature. Numerical results on the LASSO problem showed that the step size γ_k and the relaxation parameter λ_k have a significant impact on the convergence speed of the proposed iterative algorithm. The larger the step size, the faster the algorithm converged, and the over-relaxation parameters (λ_k > 1) performed better than the under-relaxation parameters (λ_k ≤ 1).

By the conditions on {γ_k} and {λ_k}, together with conclusions (i) and (ii) and the fact that Σ_{k=0}^{+∞} λ_k e_k < +∞, letting k → +∞ in the above inequality, we obtain

Bx_k → Bx* as k → +∞. (3.31)

This completes the proof.

Remark 3.1. Because the upper bound of the relaxation parameters {λ_k} in Theorem 3.1 is governed by the averagedness constant of the variable metric forward-backward operator, Theorem 3.1 provides a larger selection of relaxation parameters and errors than Theorem 4.1 of Combettes and Vũ [25].

Remark 3.3. In view of Theorem 3.1 (iii), the iterative sequence generated by (3.6) converges weakly to a point in Ω. The strong convergence of {x_k} requires x_k → x*, x* ∈ Ω. Similar to Theorem 4.1 of Combettes and Vũ [25], we need to assume that one of the following conditions holds: (i) lim inf_{k→+∞} d_Ω(x_k) = 0; (ii) A or B is demiregular at every point in Ω; (iii) int Ω ≠ ∅ and there exists a suitable sequence {v_k} as in (3.40).

Setting A = ∂g and B = ∇f in Theorem 3.1 or Theorem 3.2 enables us to confirm the conclusions of Corollary 3.4.

Table 1: Numerical results for different choices of γ_k and λ_k for solving the LASSO problem (5.1).