
Solving total-variation image super-resolution problems via proximal symmetric alternating direction methods

Abstract

The single image super-resolution (SISR) problem arises in many computer vision applications. In this paper, we focus on designing a proximal symmetric alternating direction method of multipliers (SADMM) for the SISR problem. By fully exploiting the special structure of the problem, the method is easy to implement: the quadratic term of each subproblem is linearized, so the resulting subproblems admit closed-form solutions. A global convergence result is established for the proposed method. Preliminary numerical results demonstrate that the proposed method is efficient, saving nearly 40% of the computing time compared with several state-of-the-art methods.

1 Introduction

SISR is a technique that aims at restoring a high-resolution (HR) image from a single degraded low-resolution (LR) image. Since the HR image contains more details than the LR one, HR images are preferred in many practical cases, such as video surveillance, hyperspectral imaging, and remote sensing. Super-resolution (SR) is a typical ill-posed inverse problem, since a multiplicity of solutions exists for any input LR pixel [1]. To tackle such a problem, most SR methods reduce the size of the solution space by incorporating strong prior information, which can be obtained by training data, using various regularization methods, and capturing specific image features [2]. Motivated by those ideas, SISR schemes can be broadly divided into three categories: interpolation-based methods, learning-based methods, and reconstruction-based methods.

Interpolation-based methods such as the bicubic approach are simple and easy to implement but tend to blur high-frequency details and produce aliasing artifacts at salient edges [3]. Learning-based algorithms recover the missing high-frequency details by learning the relations between LR and HR image patches from a given database [4]. However, they rely heavily on the similarity between the training set and the test images and generally have high computational complexity. Reconstruction-based methods, which are considered in this paper, form the third category of SISR schemes [5–7]. As SISR is essentially a highly ill-posed problem, the performance of such approaches mainly relies on the prior knowledge. Among all the priors, smoothness priors such as the total variation (TV) prior have been widely used in many image processing applications [8]. To reduce the computational complexity, a fast SR alternating direction method of multipliers (FSR-ADMM) based on the TV model was proposed in [9]. Considering the efficiency of the symmetric alternating direction method of multipliers (SADMM) [10], this paper aims at constructing a fast SR symmetric alternating direction method of multipliers (FSR-SADMM).

In the SISR problem, the observed LR image is modeled as a noisy version of the blurred and downsampled HR image estimated as follows:

$$ y=\Phi Hx+\nu, $$
(1.1)

where the vector \(y \in\mathbb{R}^{N_{l}}\) (\(N_{l}=m_{l}\times n_{l}\)) corresponds to the observed LR image and \(x \in\mathbb{R}^{N_{h}}\) (\(N_{h}= m_{h}\times n_{h}\)) denotes the HR image with \(N_{h}> N_{l}\); \(\nu\in\mathbb{R}^{N_{l}}\) represents zero-mean additive white Gaussian noise (AWGN), and \(\Phi \in\mathbb{R}^{N_{l}\times N_{h}}\) and \(H \in\mathbb {R}^{N_{h}\times N_{h}}\) stand for the downsampling and the blurring operations, respectively.
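For concreteness, the following is a minimal MATLAB sketch of the degradation model (1.1). It assumes a periodic-boundary 3×3 average blur and an integer decimation factor d, and it uses the demo image peppers.png; all of these choices are illustrative rather than prescribed by the model.

```matlab
% Sketch of y = Phi*H*x + nu from (1.1); periodic boundaries are assumed
% so that the blur H acts as a circular convolution (computable by FFT).
x  = mean(double(imread('peppers.png')), 3);   % HR image, single channel
d  = 2;                                        % assumed decimation factor
h  = ones(3)/9;                                % assumed 3x3 average blur kernel
Hx = real(ifft2(fft2(x) .* fft2(h, size(x,1), size(x,2))));  % H: blurring
y0 = Hx(1:d:end, 1:d:end);                     % Phi: downsampling
y  = y0 + 1e-2*randn(size(y0));                % nu: zero-mean AWGN
```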

Concisely, the SISR TV model corresponds to solving the following optimization problem:

$$ \min_{x} \frac{1}{2} \underbrace{\|y - \Phi Hx \|_{2}^{2}}_{\text{data fidelity}} + \alpha \underbrace{\|x \|_{\mathrm{TV}}}_{\text{TV regularization}}, $$
(1.2)

where \(\|y - \Phi Hx\|_{2}^{2}\) stands for the data fidelity term, \(\|x\|_{\mathrm{TV}}=\phi(Ax)= \sqrt{\|\nabla_{h}x\| _{2}^{2}+\|\nabla_{\nu}x\|_{2}^{2}}\) represents the regularization prior (see Footnote 1) with \(A=[\nabla _{h},\nabla_{\nu}]^{\mathsf{T}}\in \mathbb{R}^{2N_{h}\times N_{h}}\), and α denotes the regularization parameter balancing the regularization term and the data fidelity term. The direct ADMM in [6], given hereinafter, first rewrites problem (1.2) as

$$\begin{aligned} &\min_{x,z,u} \frac{1}{2} \|y - \Phi {z} \|_{2}^{2} + \alpha \phi(u) \\ &\quad \text{s.t. } H x = z, \\ &\hphantom{\quad \text{s.t. }}A x = u, \end{aligned}$$

and adopts the following iterative scheme:

$$\begin{aligned}& x^{k+1} = \mathop{\operatorname {argmin}}_{x} \frac{\mu}{2}\bigl\Vert {H x}- z^{k} +d^{k}_{1}\bigr\Vert _{2}^{2} + \frac{\mu}{2}\bigl\Vert Ax - u^{k} +d^{k}_{2}\bigr\Vert _{2}^{2}, \end{aligned}$$
(1.3a)
$$\begin{aligned}& z^{k+1} = \mathop{\operatorname {argmin}}_{z} \frac{1}{2} \Vert y - \Phi z \Vert _{2}^{2} +\frac{\mu}{2}\bigl\Vert Hx^{k+1} - z +d^{k}_{1} \bigr\Vert _{2}^{2}, \end{aligned}$$
(1.3b)
$$\begin{aligned}& u^{k+1} = \mathop{\operatorname {argmin}}_{u} \alpha \phi(u) + \frac{\mu}{2}\bigl\Vert Ax^{k+1} - u +d^{k}_{2} \bigr\Vert _{2}^{2}, \end{aligned}$$
(1.3c)
$$\begin{aligned}& d^{k+1}_{1}= d^{k}_{1} +\bigl(H x^{k+1} - z^{k+1}\bigr), \end{aligned}$$
(1.3d)
$$\begin{aligned}& d^{k+1}_{2}= d^{k}_{2} + \bigl(Ax^{k+1} - u^{k+1}\bigr). \end{aligned}$$
(1.3e)

Recently, Zhao et al. [9] proposed a fast single image super-resolution method that adopts a new efficient analytical solution for \(\ell_{2}\)-norm regularized problems; it reduces the number of steps per iteration from five to three by tackling the downsampling operator Φ and the blurring operator H simultaneously. By rewriting (1.2) as the following constrained optimization problem:

$$ \begin{aligned} &\min_{x,u} \frac{1}{2} \|y - \Phi Hx \|_{2}^{2} + \alpha \phi(u) \\ &\quad \text{s.t. } A x = u, \end{aligned} $$
(1.4)

they proposed the simpler ADMM-type iterative scheme, i.e., FSR-ADMM:

$$\begin{aligned}& x^{k+1}=\mathop{\operatorname {argmin}}_{x} \|y-\Phi Hx \|_{2}^{2}+\mu\bigl\| Ax-u^{k}+d^{k} \bigr\| _{2}^{2}, \end{aligned}$$
(1.5a)
$$\begin{aligned}& u^{k+1} = \mathop{\operatorname {argmin}}_{u} \alpha \phi(u) + \frac{\mu}{2}\bigl\| Ax^{k+1}-u+d^{k} \bigr\| _{2}^{2}, \end{aligned}$$
(1.5b)
$$\begin{aligned}& d^{k+1} = d^{k} + \bigl(Ax^{k+1}-u^{k+1} \bigr). \end{aligned}$$
(1.5c)

Note that (1.5a) is a classical least-squares problem whose solution is given by

$$ x^{k+1}=\bigl(H^{\mathsf{T}}\Phi^{\mathsf{T}}\Phi H+ \mu A^{\mathsf{T}}A\bigr)^{-1}\bigl(H^{\mathsf{T}}\Phi^{\mathsf{T}}y+\mu A^{\mathsf{T}}\bigl(u^{k}-d^{k} \bigr)\bigr). $$
(1.6)

Regarding this expensive inversion, the methods to alleviate the computational burden fall roughly into two categories. One is to ideally assume \(A^{\mathsf{T}}A=I\); the resulting system can then be solved efficiently by the Thomas algorithm in \(32N_{h}^{2}\) flops [11]. The other is to assume that A is a block circulant matrix with circulant blocks (BCCB). Under this condition, the Woodbury formula can be utilized to decrease the computational complexity from \(\mathcal{O} (8N_{h}^{3})\) to \(\mathcal{O}(2N_{h}\log2N_{h})\) [9]. Nevertheless, in realistic settings one does not necessarily know in advance whether the BCCB assumption is suitable for SISR, because it may lead to ringing artifacts emanating from the boundaries, a well-known boundary-mismatch degradation of such deconvolved images [12].
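When neither assumption holds, the normal equations behind (1.6) can still be solved iteratively without ever forming the inverse. Below is a minimal sketch using MATLAB's conjugate-gradient solver, where PhiH, PhiHt, A, and At are hypothetical function handles for \(\Phi H\), its adjoint, A, and \(A^{\mathsf{T}}\), all acting on vectorized images, and dk denotes the current dual variable \(d^{k}\):

```matlab
% Matrix-free CG solve of (H'*Phi'*Phi*H + mu*A'*A) x = rhs from (1.6).
Aop = @(v) PhiHt(PhiH(v)) + mu*At(A(v));    % normal-equation operator
rhs = PhiHt(y) + mu*At(u - dk);             % right-hand side of (1.6)
x   = pcg(Aop, rhs, 1e-8, 200, [], [], x);  % warm start from previous iterate
```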

To alleviate the above dilemma and to apply FSR-ADMM in wider scenarios with a general matrix A, we propose a new method with a two-fold solution: (1) compute \((H^{\mathsf{T}}\Phi^{\mathsf{T}}\Phi H+\frac{\mu}{\tau }I_{N_{h}})^{-1}\) instead of \((H^{\mathsf{T}}\Phi^{\mathsf{T}}\Phi H+ {\mu} A^{\mathsf{T}}A)^{-1}\), so as to bypass any special condition on A; (2) accelerate FSR-ADMM by introducing a new dual multiplier \(\lambda ^{k+\frac{1}{2}} \).

In light of the above analysis, this paper proposes the following iterative scheme based on the semiproximal symmetric ADMM (FSR-SADMM) (see Footnote 2):

$$\begin{aligned}& x^{k+1}= \biggl(H^{\mathsf{T}}\Phi^{\mathsf{T}}\Phi H+ \frac{\mu}{\tau}I_{N_{h}}\biggr)^{-1} \biggl(H^{\mathsf{T}}\Phi^{\mathsf{T}}y+\frac{\mu}{\tau}x^{k}-\mu A^{\mathsf{T}}\biggl(Ax^{k}-u^{k}-\frac {\lambda^{k}}{\mu}\biggr) \biggr), \end{aligned}$$
(1.7a)
$$\begin{aligned}& \lambda^{k+\frac{1}{2}} = \lambda^{k} - r\mu\bigl(Ax^{k+1}- u^{k}\bigr), \end{aligned}$$
(1.7b)
$$\begin{aligned}& u^{k+1} = \mathop{\operatorname {argmin}}_{u} \alpha \phi(u) + \frac{\mu}{2}\bigl\Vert Ax^{k+1}-u-\lambda^{k+\frac{1}{2}}/\mu \bigr\Vert _{2}^{2}, \end{aligned}$$
(1.7c)
$$\begin{aligned}& \lambda^{k+1} = \lambda^{k+\frac{1}{2}}- s\mu\bigl(Ax^{k+1}- u^{k+1}\bigr). \end{aligned}$$
(1.7d)
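The following is a minimal MATLAB sketch of one possible implementation of (1.7a)-(1.7d). It assumes periodic-boundary forward differences for A, hypothetical handles PhiH/PhiHt for \(\Phi H\) and its adjoint (mapping between HR- and LR-sized images), and a pointwise (group soft-thresholding) evaluation of the TV proximal step (1.7c); the linear system in (1.7a) is solved by conjugate gradients instead of a precomputed inverse. The parameters mu, tau, r, s, alpha, and maxit are assumed given.

```matlab
% Sketch of FSR-SADMM; x0 is an initial HR estimate (e.g., bicubic upsampling).
[mh, nh] = size(x0);
Ah  = @(z) z - circshift(z, [0 1]);          % horizontal differences
Av  = @(z) z - circshift(z, [1 0]);          % vertical differences
Aht = @(g) g - circshift(g, [0 -1]);         % corresponding adjoints
Avt = @(g) g - circshift(g, [-1 0]);
x = x0; uh = Ah(x); uv = Av(x);
lh = zeros(mh, nh); lv = zeros(mh, nh);      % dual variable, split like A
for k = 1:maxit
    % (1.7a): linearized x-update; CG replaces the explicit inverse
    rhs = PhiHt(y) + (mu/tau)*x ...
          - mu*(Aht(Ah(x) - uh - lh/mu) + Avt(Av(x) - uv - lv/mu));
    Aop = @(v) reshape(PhiHt(PhiH(reshape(v, mh, nh))), [], 1) + (mu/tau)*v;
    x   = reshape(pcg(Aop, rhs(:), 1e-8, 100, [], [], x(:)), mh, nh);
    % (1.7b): first (under-relaxed) dual update
    lh = lh - r*mu*(Ah(x) - uh);   lv = lv - r*mu*(Av(x) - uv);
    % (1.7c): isotropic shrinkage, applied pointwise
    gh = Ah(x) - lh/mu;   gv = Av(x) - lv/mu;
    mg = max(sqrt(gh.^2 + gv.^2), eps);
    sc = max(mg - alpha/mu, 0) ./ mg;
    uh = sc .* gh;   uv = sc .* gv;
    % (1.7d): second dual update
    lh = lh - s*mu*(Ah(x) - uh);   lv = lv - s*mu*(Av(x) - uv);
end
```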

In a nutshell, the contributions of this article can be summarized as follows:

  1. 1.

We propose a customized semiproximal term especially suitable for the SR imaging system, which avoids computing expensive matrix inverses such as \((H^{\mathsf{T}}\Phi^{\mathsf{T}}\Phi H+\mu A^{\mathsf{T}}A)^{-1}\) appearing in (1.5a). Consequently, FSR-SADMM can be applied in wider scenarios without a strict condition on A.

  2. 2.

Based on the iterative scheme of the strictly contractive semiproximal Peaceman-Rachford splitting method, our proposed FSR-SADMM significantly reduces the total iteration count while only involving an additional dual update (i.e., \(\lambda^{k+\frac {1}{2}}\)), which adds negligible computational burden per iteration. As a result, FSR-SADMM maintains a better convergence rate and experimentally reduces the computing time by about 40%.

  3. 3.

    We prove that the FSR-SADMM is convergent under mild conditions.

The rest of this paper is organized as follows. In Section 2, we present some preliminaries that are useful for the subsequent analysis. In Section 3, we present the FSR-SADMM for single image super-resolution reconstruction. Section 4 provides the convergence analysis of the proposed method. Section 5 presents extensive numerical results that evaluate the performance of the proposed reconstruction algorithm in comparison with some state-of-the-art algorithms. Finally, concluding remarks are provided in Section 6.

2 Preliminaries

2.1 Variational reformulation of (1.4)

In this section, following He and Yuan’s approach [13], we reformulate the convex minimization model (1.4) in variational form, which is useful for the subsequent algorithmic illustration and convergence analysis. Let us denote \(z_{1}=x\), \(z_{2}=u\), \(B_{1}=A\), \(B_{2}=-I_{2N_{h}}\); then (1.4) becomes

$$ \begin{aligned} &\min_{z_{1}\in \mathbb{R}^{N_{h}},z_{2}\in \mathbb{R}^{2N_{h}}} \theta_{1}(z_{1})+ \theta_{2}(z_{2}) \\ &\quad \text{s.t. } B_{1}z_{1}+B_{2}z_{2}=0, \end{aligned} $$
(2.1)

where \(\theta_{1}(z_{1})=\frac{1}{2}\|y-\Phi H z_{1}\|_{2}^{2}\) and \(\theta _{2}(z_{2})=\alpha\phi(z_{2})\). The Lagrangian function and augmented Lagrangian function of (2.1) can be written as

$$ \mathcal{L}(z_{1},z_{2},\lambda)= \theta_{1}(z_{1})+\theta_{2}(z_{2})- \lambda^{\mathsf{T}}(B_{1}z_{1}+B_{2}z_{2}) $$
(2.2)

and

$$ \mathcal{L}_{\mu}(z_{1},z_{2},\lambda)= \theta_{1}(z_{1})+\theta_{2}(z_{2})- \lambda ^{\mathsf{T}}(B_{1}z_{1}+B_{2}z_{2})+ \frac{\mu}{2}\|B_{1}z_{1}+B_{2}z_{2} \|^{2}, $$
(2.3)

respectively, where \(\lambda\in \mathbb{R}^{2N_{h}}\) is a Lagrangian multiplier. Then seeking a saddle point of \(\mathcal{L}(z_{1},z_{2},\lambda)\) is to find \((z_{1}^{*},z_{2}^{*},\lambda^{*})\) such that

$$ \mathcal{L}\bigl(z_{1}^{*},z_{2}^{*}, \lambda\bigr)\leq\mathcal {L}\bigl(z_{1}^{*},z_{2}^{*},\lambda^{*} \bigr)\leq\mathcal{L}\bigl(z_{1},z_{2}, \lambda^{*}\bigr),\quad \forall\lambda\in \mathbb{R}^{2N_{h}}, z_{1}\in \mathbb{R}^{N_{h}}, z_{2}\in \mathbb{R}^{2N_{h}}. $$
(2.4)

That is, for any \((z_{1},z_{2},\lambda)\), we have

$$\begin{aligned}& \theta_{1}(z_{1})+\theta_{2}(z_{2})- \bigl(\theta_{1}\bigl(z_{1}^{*}\bigr)+\theta _{2} \bigl(z_{2}^{*}\bigr)\bigr)-\bigl(z_{1}-z_{1}^{*} \bigr)^{\mathsf{T}}B_{1}^{\mathsf{T}}\lambda^{*}- \bigl(z_{2}-z_{2}^{*}\bigr)^{\mathsf{T}}B_{2}^{\mathsf{T}} \lambda^{*}\geq0, \end{aligned}$$
(2.5a)
$$\begin{aligned}& \bigl(\lambda-\lambda^{*}\bigr)^{\mathsf{T}}\bigl(B_{1}z_{1}^{*}+B_{2}z_{2}^{*}\bigr) \geq0. \end{aligned}$$
(2.5b)

Therefore, solving (1.4) is equivalent to finding \(w^{*}=(z_{1}^{*},z_{2}^{*},\lambda^{*})\) such that

$$ \operatorname{VI}(\Omega,F,\theta)\mbox{:}\quad \theta(u)-\theta \bigl(u^{*}\bigr)+\bigl(w-w^{*}\bigr)^{\mathsf{T}}F\bigl(w^{*}\bigr)\geq0,\quad \forall w\in\Omega, $$
(2.6)

where

$$ u := \left ( \textstyle\begin{array}{@{}c@{}}z_{1}\\ z_{2} \end{array}\displaystyle \right ), \qquad w := \left ( \textstyle\begin{array}{@{}c@{}}z_{1}\\ z_{2}\\ \lambda \end{array}\displaystyle \right ), \qquad \theta(u):= \theta_{1}(z_{1})+\theta_{2}(z_{2}), $$
(2.7)

and

$$ F(w) := \left ( \textstyle\begin{array}{@{}c@{}} -B_{1}^{\mathsf{T}}\lambda\\ -B_{2}^{\mathsf{T}}\lambda\\ B_{1}z_{1} + B_{2}z_{2} \end{array}\displaystyle \right ), \qquad \Omega:=\mathbb{R}^{N_{h}}\times \mathbb{R}^{2N_{h}}\times \mathbb{R}^{2N_{h}}. $$
(2.8)

Note that the mapping \(F(w)\) defined in (2.8) is affine with a skew-symmetric coefficient matrix; hence it is monotone. We denote by \(\Omega^{*}\) the solution set of \(\operatorname{VI}(\Omega,F,\theta)\).
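For completeness, the monotonicity can be verified in one line. Writing \(F(w)=\mathcal{M}w\) with

$$ \mathcal{M}=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 0&0&-B_{1}^{\mathsf{T}}\\ 0&0&-B_{2}^{\mathsf{T}}\\ B_{1}&B_{2}&0 \end{array}\displaystyle \right ), $$

we have \(\mathcal{M}^{\mathsf{T}}=-\mathcal{M}\), so \((w-\bar{w})^{\mathsf{T}}(F(w)-F(\bar{w}))=(w-\bar{w})^{\mathsf{T}}\mathcal{M}(w-\bar{w})=0\geq0\) for all \(w,\bar{w}\in\Omega\).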

2.2 Some notations

We use \(\|\cdot\|\) to denote the 2-norm of a vector and let \(\|z\| _{G}^{2} = z^{\mathsf{T}} Gz\) for \(z \in \mathbb{R}^{N}\) and \(G \in \mathbb{R}^{N \times N}\). For a real symmetric matrix S, we write \({S}\succeq0\) (\({S}\succ0\)) if S is positive semidefinite (positive definite). For the convenience of the analysis, we define the following matrices (with a slight abuse of notation, the matrix H below is reserved for the convergence analysis and should not be confused with the blurring operator in (1.1)):

$$\begin{aligned}& H =\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} R&0&0\\ 0&\frac{r+s-rs}{r+s}\mu B_{2}^{\mathsf{T}}B_{2} &-\frac {r}{r+s}B_{2}^{\mathsf{T}}\\ 0&-\frac{r}{r+s}B_{2}&\frac{1}{(r+s)\mu}{I_{2N_{h}}} \end{array}\displaystyle \right ), \end{aligned}$$
(2.9)
$$\begin{aligned}& M=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} {I_{N_{h}}}&0&0\\ 0& {I_{2N_{h}}}&0\\ 0& -s\mu B_{2} & (r+s) {I_{2N_{h}}} \end{array}\displaystyle \right ), \end{aligned}$$
(2.10)

and

$$ Q=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} R&0&0\\ 0&\mu {B_{2}^{\mathsf{T}}B_{2}}&-r B_{2}^{\mathsf{T}}\\ 0& -B_{2} & \frac{1}{\mu} {I_{2N_{h}}} \end{array}\displaystyle \right ). $$
(2.11)

Below we prove three assertions regarding the matrices just defined. These assertions allow us to present the convergence analysis of the new algorithm compactly.

Lemma 2.1

If \(R\succ0\), \(\mu>0\), \(r\in(0,1)\), \(s\in(0,1]\), and \(B_{2}\) has full column rank, then the matrices H, M, and Q defined, respectively, in (2.9), (2.10), and (2.11) satisfy

$$ H\succ0,\qquad HM=Q, $$
(2.12)

and

$$ G:=Q^{\mathsf{T}}+Q-M^{\mathsf{T}}HM\succeq0. $$
(2.13)

Proof

We consider two cases.

(I) \(r\in(0,1)\), \(s\in(0,1)\). Since \(R\succ0\) and \(B_{2}\) is assumed to have full column rank, we only need to check that

$$ \bar{H}=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} \mu(r+s-rs) &-r\\ -r&\frac{1}{\mu} \end{array}\displaystyle \right )\succ0. $$

Note that

$$\left \{ \textstyle\begin{array}{l} r+s-rs=r+s(1-r)>0, \\ r+s-rs-r^{2}=(r+s)(1-r)>0. \end{array}\displaystyle \right . $$

Then we have

$$\bar{H}\succ0. $$

Therefore, the assertion \(H\succ0\) is verified.

Using the definitions of the matrices H, M, and Q, a simple manipulation gives

$$\begin{aligned} HM&=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}}R&0&0\\ 0&\frac{r+s-rs}{r+s}\mu B_{2}^{\mathsf{T}}B_{2}&-\frac {r}{r+s}B_{2}^{\mathsf{T}}\\ 0&-\frac{r}{r+s}B_{2}&\frac{1}{(r+s)\mu } {I_{2N_{h}}} \end{array}\displaystyle \right )\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} {I_{N_{h}}}&0&0\\ 0& {I_{2N_{h}}}&0\\ 0&-s\mu B_{2}&(r+s) {I_{2N_{h}}} \end{array}\displaystyle \right ) \\ &=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} R&0&0\\ 0&\mu {B_{2}^{\mathsf{T}}B_{2}}&-r B_{2}^{\mathsf{T}}\\ 0& -B_{2} & \frac{1}{\mu} {I_{2N_{h}}} \end{array}\displaystyle \right )=Q. \end{aligned}$$
(2.14)

The second assertion \(HM=Q\) is proved. Consequently, we have

$$\begin{aligned} M^{\mathsf{T}}HM&=M^{\mathsf{T}}Q=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} {I_{N_{h}}}&0&0\\ 0& {I_{2N_{h}}}&-s\mu B_{2}^{\mathsf{T}}\\ 0&0&(r+s) {I_{2N_{h}}} \end{array}\displaystyle \right )\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} R&0&0\\ 0&\mu {B_{2}^{\mathsf{T}}B_{2}}&-r B_{2}^{\mathsf{T}}\\ 0& -B_{2} & \frac {1}{\mu} {I_{2N_{h}}} \end{array}\displaystyle \right ) \end{aligned}$$
(2.15)
$$\begin{aligned} &=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}}R&0&0\\ 0&(1+s)\mu B_{2}^{\mathsf{T}}B_{2}&-(r+s)B_{2}^{\mathsf{T}}\\ 0&-(r+s)B_{2}&\frac{r+s}{\mu} {I_{2N_{h}}} \end{array}\displaystyle \right ). \end{aligned}$$
(2.16)

Using (2.10), (2.11), and the above equation, we get

$$\begin{aligned} G =&Q^{\mathsf{T}}+Q-M^{\mathsf{T}}HM \\ =&\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 2R&0&0\\ 0&2\mu {B_{2}^{\mathsf{T}}B_{2}}&-(1+r )B_{2}^{\mathsf{T}}\\ 0&-(1+r ) B_{2} & \frac{2}{\mu} {I_{2N_{h}}} \end{array}\displaystyle \right ) \\ &{}-\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}}R&0&0\\ 0&(1+s)\mu B_{2}^{\mathsf{T}}B_{2}&-(r+s)B_{2}^{\mathsf{T}}\\ 0&-(r+s)B_{2}&\frac{r+s}{\mu} {I_{2N_{h}}} \end{array}\displaystyle \right ) \\ =&\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}}R&0&0\\ 0&(1-s)\mu B_{2}^{\mathsf{T}}B_{2}&(s-1)B_{2}^{\mathsf{T}}\\ 0&(s-1)B_{2}&\frac{2-r-s}{\mu} {I_{2N_{h}}} \end{array}\displaystyle \right ). \end{aligned}$$

Similarly, note that

$$\left \{ \textstyle\begin{array}{l} 1-s>0, \\ (1-s)(2-r-s)-(1-s)^{2}=(1-s) (1-r)>0. \end{array}\displaystyle \right . $$

Therefore, the matrix G is positive definite.

(II) \(r\in(0,1)\) and \(s=1\). Note that

$$ H =\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} R&0&0\\ 0&\frac{\mu}{r+1} B_{2}^{\mathsf{T}}B_{2} &-\frac {r}{r+1}B_{2}^{\mathsf{T}}\\ 0&-\frac{r}{r+1}B_{2}&\frac{1}{(r+1)\mu} {I_{2N_{h}}} \end{array}\displaystyle \right ) $$

and

$$G=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}}R&0&0\\ 0&0&0\\ 0&0&\frac{1-r}{\mu} {I_{2N_{h}}} \end{array}\displaystyle \right ). $$

Thus H is positive definite and G is positive semidefinite. We emphasize that we do not require positive definiteness of G; positive semidefiniteness of G suffices for our algorithmic analysis. □

3 Algorithm

3.1 FSR-SADMM

In this section, we present our new algorithm for solving (1.4). We first recall the iterative scheme of the standard strictly contractive Peaceman-Rachford splitting method with two different relaxation factors:

$$\begin{aligned}& z_{1}^{k+1}=\mathop{\operatorname {argmin}}_{z_{1}\in \mathbb{R}^{N_{h}}} \mathcal{L}_{\mu}\bigl(z_{1},z_{2}^{k}, \lambda ^{k}\bigr), \end{aligned}$$
(3.1a)
$$\begin{aligned}& \lambda ^{k+\frac{1}{2}}=\lambda ^{k}-r\mu\bigl(B_{1} z_{1}^{k+1}+B_{2} {z_{2}^{k}} \bigr), \end{aligned}$$
(3.1b)
$$\begin{aligned}& z_{2}^{k+1}=\mathop{\operatorname {argmin}}_{z_{2}\in \mathbb{R}^{2N_{h}}} \mathcal{L}_{\mu}\bigl( {z_{1}^{k+1}},z_{2}, \lambda^{k+\frac{1}{2}}\bigr) , \end{aligned}$$
(3.1c)
$$\begin{aligned}& \lambda ^{k+1}=\lambda ^{k+\frac{1}{2}}-s\mu\bigl(B_{1}z_{1}^{k+1}+B_{2}z_{2}^{k+1} \bigr). \end{aligned}$$
(3.1d)

By introducing a customized semiproximal term especially for TV super-resolution imaging, our FSR-SADMM has the iterative scheme

$$\begin{aligned}& z_{1}^{k+1}=\mathop{\operatorname {argmin}}_{z_{1}\in \mathbb{R}^{N_{h}}} \mathcal{L}_{\mu}\bigl(z_{1},z_{2}^{k}, \lambda ^{k}\bigr)+\frac{1}{2}\bigl\Vert z_{1}-z_{1}^{k} \bigr\Vert ^{2}_{R}, \end{aligned}$$
(3.2a)
$$\begin{aligned}& \lambda ^{k+\frac{1}{2}}=\lambda ^{k}-r\mu\bigl(B_{1} z_{1}^{k+1}+B_{2} {z_{2}^{k}} \bigr), \end{aligned}$$
(3.2b)
$$\begin{aligned}& z_{2}^{k+1}=\mathop{\operatorname {argmin}}_{z_{2}\in \mathbb{R}^{2N_{h}}} \mathcal {L}_{\mu}\bigl( {z_{1}^{k+1}},z_{2}, \lambda^{k+\frac{1}{2}}\bigr) , \end{aligned}$$
(3.2c)
$$\begin{aligned}& \lambda ^{k+1}=\lambda ^{k+\frac{1}{2}}-s\mu\bigl(B_{1} z_{1}^{k+1}+B_{2} z_{2}^{k+1} \bigr), \end{aligned}$$
(3.2d)

where \(R=\frac{\mu}{\tau}I_{N_{h}}-\mu B_{1}^{\mathsf{T}} B_{1}\) is a customized semiproximal matrix; it is positive definite whenever \(\tau\rho(B_{1}^{\mathsf{T}}B_{1})<1\), where \(\rho(\cdot)\) denotes the spectral radius.
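In the SISR setting \(B_{1}=A\) stacks the two first-order difference operators, and a crude spectral bound suffices for choosing τ: since \(\|(I-S)x\|\leq2\|x\|\) for any shift operator S,

$$ \|Ax\|_{2}^{2}=\|\nabla_{h}x\|_{2}^{2}+\|\nabla_{v}x\|_{2}^{2}\leq4\|x\|_{2}^{2}+4\|x\|_{2}^{2}=8\|x\|_{2}^{2}, $$

so \(\rho(A^{\mathsf{T}}A)\leq8\) and any \(\tau<1/8\) guarantees \(R\succ0\).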

3.2 Related methods

The methods proposed in [14] and [15] belong to the category of SADMM-type approaches with logarithmic-quadratic proximal regularization, which allow larger step sizes than SADMM. The former uses step sizes \(r\in(0,1)\), \(s\in(0,1)\), while the latter uses \(s\in(0,2)\), \(r\in(0,2-s)\); both differ from the \(r\in(0,1)\), \(s\in(0,1]\) considered in this paper. The parameters r and s in SADMM have also been studied intensively by He et al. [16] and Gu et al. [15]. However, neither fully utilizes the special structure of the SISR scenarios involved in this paper, and computing \((H^{\mathsf{T}}\Phi^{\mathsf{T}}\Phi H+\mu A^{\mathsf{T}} A)^{-1}\) cannot be avoided.

4 Global convergence

To make the analysis more elegant, we reformulate FSR-SADMM (3.2a)-(3.2d) into the following form (see Footnote 3):

$$\begin{aligned}& z_{1}^{k+1}=\mathop{\operatorname {argmin}}_{z_{1}\in \mathbb{R}^{N_{h}}}\biggl\{ \theta_{1}(z_{1})-\bigl(\lambda^{k} \bigr)^{\mathsf{T}}B_{1}z_{1}+\frac{\mu}{2}\bigl\Vert B_{1}z_{1}+B_{2}z_{2}^{k} \bigr\Vert ^{2}+\frac{1}{2}\bigl\Vert z_{1}-z_{1}^{k} \bigr\Vert ^{2}_{R}\biggr\} , \end{aligned}$$
(4.1a)
$$\begin{aligned}& \lambda ^{k+\frac{1}{2}}=\lambda ^{k}-r\mu \bigl(B_{1}z_{1}^{k+1}+B_{2}z_{2}^{k} \bigr), \end{aligned}$$
(4.1b)
$$\begin{aligned}& z_{2}^{k+1}=\mathop{\operatorname {argmin}}_{z_{2}\in \mathbb{R}^{2N_{h}}} \biggl\{ \theta _{2}(z_{2})-\bigl(\lambda^{k+\frac{1}{2}} \bigr)^{\mathsf{T}}B_{2} z_{2}+\frac{\mu}{2}\bigl\Vert B_{1}z_{1}^{k+1}+B_{2}z_{2} \bigr\Vert ^{2} \biggr\} , \end{aligned}$$
(4.1c)
$$\begin{aligned}& \lambda ^{k+1}=\lambda ^{k+\frac{1}{2}}-s\mu \bigl(B_{1}z_{1}^{k+1}+B_{2}z_{2}^{k+1} \bigr). \end{aligned}$$
(4.1d)

Now we analyze the convergence of our proposed FSR-SADMM (4.1a)-(4.1d). We prove its global convergence from the contraction perspective. To further alleviate the notation in our analysis, we define an auxiliary sequence \(\tilde{w}^{k}\) as

$$ \tilde{w}^{k}= \left ( \textstyle\begin{array}{@{}c@{}}\tilde{z}_{1}^{k} \\ \tilde{z}_{2}^{k}\\ \tilde{\lambda}^{k} \end{array}\displaystyle \right )=\left ( \textstyle\begin{array}{@{}c@{}}{z}_{1}^{k+1} \\ {z}_{2}^{k+1}\\ {\lambda}^{k}-\mu (B_{1}z_{1}^{k+1}+B_{2}z_{2}^{k}) \end{array}\displaystyle \right ), $$
(4.2)

where \((z_{1}^{k+1},z_{2}^{k+1})\) is produced by (4.1a) and (4.1c), and we immediately get

$$z_{1}^{k+1}=\tilde{z}_{1}^{k}, \qquad z_{2}^{k+1}=\tilde{z}_{2}^{k}, \qquad \lambda^{k+\frac{1}{2}}=\lambda^{k}-r\bigl(\lambda^{k}-\tilde{ \lambda }^{k}\bigr), $$

and

$$\begin{aligned} {\lambda}^{k+1}&=\lambda^{k+\frac{1}{2}}-s\mu\bigl(B_{1} \tilde {z}_{1}^{k}+B_{2}\tilde{z}_{2}^{k} \bigr) \\ &=\lambda^{k}-r\bigl(\lambda^{k}-\tilde{ \lambda}^{k}\bigr)-s\bigl[ \mu\bigl(B_{1}\tilde {z}_{1}^{k}+B_{2}z_{2}^{k} \bigr)-\mu B_{2}\bigl(z_{2}^{k}- \tilde{z}_{2}^{k}\bigr) \bigr] \\ &=\lambda^{k}-r\bigl(\lambda^{k}-\tilde{ \lambda}^{k}\bigr)-s\bigl[ \lambda^{k}-\tilde{\lambda }^{k}-\mu B_{2}\bigl(z_{2}^{k}- \tilde{z}_{2}^{k}\bigr) \bigr] \\ &=\lambda^{k}- \bigl[ (r+s) \bigl(\lambda^{k}-\tilde{ \lambda}^{k}\bigr)-s\mu B_{2}\bigl(z_{2}^{k}- \tilde{z}_{2}^{k}\bigr)\bigr]. \end{aligned}$$

Moreover, we have the following relationship:

$$\left ( \textstyle\begin{array}{@{}c@{}}{z}_{1}^{k+1} \\ {z}_{2}^{k+1}\\ {\lambda}^{k+1} \end{array}\displaystyle \right )=\left ( \textstyle\begin{array}{@{}c@{}}{z}_{1}^{k} \\ {z}_{2}^{k}\\ {\lambda}^{k} \end{array}\displaystyle \right )-\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}}I_{N_{h}} &0&0\\ 0&I_{2N_{h}}&0\\ 0 &-s\mu B_{2}&(r+s) {I_{2N_{h}}} \end{array}\displaystyle \right )\left ( \textstyle\begin{array}{@{}c@{}}z_{1}^{k}-\tilde{z}_{1}^{k}\\ z_{2}^{k}-\tilde{z}_{2}^{k}\\ \lambda ^{k}-\tilde{\lambda}^{k} \end{array}\displaystyle \right ), $$

which can be written in compact form using the notation \(w^{k}\) and \(\tilde{w}^{k}\):

$$ w^{k+1}=w^{k}-M\bigl(w^{k}- \tilde{w}^{k}\bigr), $$
(4.3)

where M is defined in (2.10).

Now we start to prove some properties of the sequence \(\{\tilde{w}^{k}\}\) defined in (4.2). We are interested in estimating how close the point \(\tilde{w}^{k}\) is to a solution point \(w^{*}\) of \(\operatorname{VI}(\Omega,F,\theta)\). The main result is proved in Theorem 4.1. To prove this main result, we require two lemmas. The first key lemma provides a lower bound on a specially constructed functional in terms of a quadratic term involving the matrix Q defined in (2.11).

Lemma 4.1

For given \(w^{k}\in\Omega\), let \(w^{k+1}\) be generated by FSR-SADMM (4.1a)-(4.1d) and \(\tilde{w}^{k}\) be defined in (4.2). Then we have \(\tilde{w}^{k}\in\Omega\) and

$$ \theta(u)-\theta\bigl(\tilde{u}^{k}\bigr)+\bigl(w- \tilde{w}^{k}\bigr)^{\mathsf{T}}F\bigl(\tilde{w}^{k}\bigr) \geq\bigl(w- {\tilde{w}^{k}}\bigr)^{\mathsf{T}}Q\bigl(w^{k}- \tilde{w}^{k}\bigr),\quad \forall w\in\Omega, $$
(4.4)

where Q is defined in (2.11).

Proof

From the first-order optimality condition of the \(z_{1}\)-subproblem in (4.1a), for any \(z_{1}\in{ \mathbb{R}^{N_{h}}}\), we obtain

$$ \theta_{1}(z_{1})-\theta_{1} \bigl(z_{1}^{k+1}\bigr)+\bigl(z_{1}-z_{1}^{k+1} \bigr)^{\mathsf{T}}\bigl\{ -B_{1}^{\mathsf{T}}\lambda ^{k}+\mu B_{1}^{\mathsf{T}}\bigl(B_{1}z_{1}^{k+1}+B_{2}z_{2}^{k} \bigr)+R\bigl(z_{1}^{k+1}-z_{1}^{k}\bigr) \bigr\} \geq0. $$
(4.5)

By the definition of \(\tilde{z}_{1}^{k}\) and \(\tilde{\lambda}^{k}\) in (4.2), the above inequality can be written as

$$ \theta_{1}(z_{1})-\theta_{1} \bigl(\tilde{z}_{1}^{k}\bigr)+\bigl(z_{1}-\tilde {z}_{1}^{k}\bigr)^{\mathsf{T}}\bigl\{ - {B_{1}^{\mathsf{T}}}\tilde{\lambda}^{k}+R\bigl( \tilde{z}_{1}^{k}-z_{1}^{k}\bigr)\bigr\} \geq0. $$
(4.6)

Similarly, from the first-order optimality condition of the \(z_{2}\)-subproblem in (4.1c), we obtain

$$\begin{aligned}& \theta_{2}(z_{2})-\theta_{2} \bigl(z_{2}^{k+1}\bigr)+\bigl(z_{2}-z_{2}^{k+1} \bigr)^{\mathsf{T}}\bigl\{ -B_{2}^{\mathsf{T}}\lambda^{k+\frac{1}{2}}{+} \mu B_{2}^{\mathsf{T}}\bigl(B_{1}z_{1}^{k+1}+B_{2}z_{2}^{k+1} \bigr)\bigr\} \\& \quad \geq0, \quad \forall z_{2}\in R^{2N_{h}}. \end{aligned}$$
(4.7)

From the definitions of \(\lambda^{k+\frac{1}{2}},\tilde{z}_{2}^{k}\) and \(\tilde{\lambda}^{k}\), we get

$$\begin{aligned}& {\lambda}^{k+\frac{1}{2}} -\mu\bigl(B_{1}z_{1}^{k+1}+B_{2}z_{2}^{k+1} \bigr) \\& \quad = \lambda^{k}-r\mu \bigl(B_{1}z_{1}^{k+1}+B_{2}z_{2}^{k} \bigr)-\mu \bigl(B_{1}z_{1}^{k+1}+B_{2}z_{2}^{k} \bigr)-\mu B_{2}\bigl(z_{2}^{k+1}-z_{2}^{k} \bigr) \\& \quad = \lambda^{k}-\mu \bigl(B_{1}z_{1}^{k+1}+B_{2}z_{2}^{k} \bigr)-r\bigl(\lambda^{k}-\tilde{\lambda }^{k}\bigr)-\mu B_{2}\bigl(\tilde{z}_{2}^{k}-z_{2}^{k} \bigr) \\& \quad = \tilde{\lambda}^{k}-r\bigl(\lambda^{k}-\tilde{ \lambda}^{k}\bigr)-\mu B_{2}\bigl(\tilde{z}_{2}^{k}-z_{2}^{k} \bigr). \end{aligned}$$
(4.8)

Substituting this into (4.7) and considering the definition of \(\tilde{z}_{2}^{k}\) in (4.2), we have

$$ \theta_{2}(z_{2})-\theta_{2}\bigl( \tilde{z}_{2}^{k}\bigr)+\bigl(z_{2}-\tilde {z}_{2}^{k}\bigr)^{\mathsf{T}}\bigl\{ -B_{2}^{\mathsf{T}}\tilde{\lambda}^{k}+rB_{2}^{\mathsf{T}}\bigl( \lambda^{k}-\tilde{\lambda }^{k}\bigr)+\mu B_{2}^{\mathsf{T}}B_{2} \bigl(\tilde{z}_{2}^{k}-z_{2}^{k} \bigr)\bigr\} \geq0. $$
(4.9)

In addition, it follows from (4.2) again that

$$ \bigl(B_{1}\tilde{z}_{1}^{k}+B_{2} \tilde{z}_{2}^{k}\bigr)-B_{2}\bigl(\tilde {z}_{2}^{k}-z_{2}^{k}\bigr)+ \frac{1}{\mu}\bigl(\tilde{\lambda}^{k}-\lambda^{k} \bigr)=0. $$
(4.10)

Combining (4.6), (4.9), and (4.10), we obtain

$$\begin{aligned}& \theta(u)-\theta\bigl(\tilde{u}^{k}\bigr)+\left ( \textstyle\begin{array}{@{}c@{}}z_{1}-\tilde{z}_{1}^{k}\\ z_{2}-\tilde{z}_{2}^{k}\\ \lambda-\tilde {\lambda}^{k} \end{array}\displaystyle \right )^{\mathsf{T}}\left \{\left ( \textstyle\begin{array}{@{}c@{}}-B_{1}^{\mathsf{T}}\tilde{\lambda}^{k}\\ -B_{2}^{\mathsf{T}}\tilde{\lambda}^{k}\\ B_{1}\tilde{z}_{1}^{k}+B_{2}\tilde{z}_{2}^{k} \end{array}\displaystyle \right )-\left ( \textstyle\begin{array}{@{}c@{}}R(z_{1}^{k}-\tilde{z}_{1}^{k})\\ \mu B_{2}^{\mathsf{T}}B_{2}(z_{2}^{k}-\tilde{z}_{2}^{k})-r B_{2}^{\mathsf{T}}(\lambda^{k}-\tilde{\lambda}^{k})\\ -B_{2}(z_{2}^{k}-\tilde{z}_{2}^{k})+\frac{1}{\mu}(\lambda^{k}-\tilde{\lambda}^{k}) \end{array}\displaystyle \right ) \right \} \\& \quad \geq0. \end{aligned}$$
(4.11)

By using the notation of Q in (2.11), and w and F in (2.8), the compact form of the above inequality is exactly (4.4). □

In the next lemma, we further analyze the right-hand side of the inequality (4.4) and reformulate it as the sum of some quadratic terms. This new form is more convenient for our further analysis.

Lemma 4.2

For given \(w^{k}\in\Omega\), let \(w^{k+1}\) be generated by FSR-SADMM (4.1a)-(4.1d) and \(\tilde{w}^{k}\) be defined in (4.2). Then for any \(w\in\Omega\), we get

$$ \bigl(w-\tilde{w}^{k}\bigr)^{\mathsf{T}}Q \bigl(w^{k}-\tilde{w}^{k}\bigr)=\frac{1}{2}\bigl(\bigl\Vert w-w^{k+1}\bigr\Vert _{H}^{2}-\bigl\Vert w-w^{k}\bigr\Vert ^{2}_{H}\bigr)+ \frac{1}{2}\bigl\Vert w^{k}-\tilde{w}^{k}\bigr\Vert ^{2}_{G}. $$
(4.12)

Proof

By using \(Q=HM\) and \(M(w^{k}-\tilde{w}^{k})=(w^{k}-w^{k+1})\) (see (4.3)), we have

$$ \bigl(w-\tilde{w}^{k}\bigr)^{\mathsf{T}}Q \bigl(w^{k}-\tilde{w}^{k}\bigr)=\bigl(w-\tilde{w}^{k} \bigr)^{\mathsf{T}}HM\bigl(w^{k}-\tilde{w}^{k}\bigr)= \bigl(w-\tilde{w}^{k}\bigr)^{\mathsf{T}}H\bigl(w^{k}-{w}^{k+1} \bigr). $$
(4.13)

For any vectors \(a,b,c,d\in \mathbb{R}^{n}\) and any symmetric matrix \(H\in \mathbb{R}^{n\times n}\), it follows that

$$ (a-b)^{\mathsf{T}}H(c-d)=\frac{1}{2}\bigl(\|a-d\|^{2}_{H}- \|a-c\|_{H}^{2}\bigr)+\frac{1}{2}\bigl(\| c-b \|^{2}_{H}-\|d-b\|_{H}^{2}\bigr). $$
(4.14)
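This four-point identity can be checked directly: expanding the right-hand side and using the symmetry of H, all squared terms cancel and

$$ \frac{1}{2}\bigl(\|a-d\|^{2}_{H}-\|a-c\|_{H}^{2}\bigr)+\frac{1}{2}\bigl(\|c-b\|^{2}_{H}-\|d-b\|_{H}^{2}\bigr)=a^{\mathsf{T}}Hc-a^{\mathsf{T}}Hd-b^{\mathsf{T}}Hc+b^{\mathsf{T}}Hd=(a-b)^{\mathsf{T}}H(c-d). $$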

Applying the above identity with \(a=w\), \(b=\tilde{w}^{k}\), \(c=w^{k}\), and \(d={w}^{k+1}\) gives

$$\begin{aligned} \bigl(w-\tilde{w}^{k}\bigr)^{\mathsf{T}}H \bigl(w^{k}-{w}^{k+1}\bigr) =&\frac{1}{2}\bigl(\bigl\Vert w-w^{k+1}\bigr\Vert ^{2}_{H}-\bigl\Vert w-{w}^{k}\bigr\Vert ^{2}_{H}\bigr) \\ &{}+ \frac{1}{2}\bigl(\bigl\Vert w^{k}-\tilde{w}^{k}\bigr\Vert _{H}^{2}-\bigl\Vert w^{k+1}-\tilde {w}^{k}\bigr\Vert ^{2}_{H}\bigr). \end{aligned}$$
(4.15)

Rearranging the last term in the above identity by using (2.12), (2.13), and (4.3), we obtain

$$\begin{aligned} \bigl\Vert w^{k}-\tilde{w}^{k}\bigr\Vert ^{2}_{H}-\bigl\Vert w^{k+1}-\tilde{w}^{k} \bigr\Vert ^{2}_{H} =&\bigl\Vert w^{k}- \tilde{w}^{k}\bigr\Vert ^{2}_{H}-\bigl\Vert \bigl(w^{k}-\tilde{w}^{k}\bigr)-\bigl(w^{k}-w^{k+1} \bigr)\bigr\Vert ^{2}_{H} \\ \stackrel{\text{(4.3)}}{=}&\bigl\Vert w^{k}- \tilde{w}^{k}\bigr\Vert ^{2}_{H}-\bigl\Vert \bigl(w^{k}-\tilde {w}^{k}\bigr)-M\bigl(w^{k}- \tilde{w}^{k}\bigr)\bigr\Vert ^{2}_{H} \\ =&2\bigl(w^{k}-\tilde{w}^{k}\bigr)^{\mathsf{T}}HM \bigl(w^{k}-\tilde{w}^{k}\bigr) \\ &{}-\bigl(w^{k}- \tilde{w}^{k}\bigr)^{\mathsf{T}}M^{\mathsf{T}}HM \bigl(w^{k}-\tilde{w}^{k}\bigr) \\ \stackrel{\text{(2.12)}}{=}&\bigl(w^{k}- \tilde{w}^{k}\bigr)^{\mathsf{T}}\bigl(Q^{\mathsf{T}}+Q-M^{\mathsf{T}}HM\bigr) \bigl(w^{k}-\tilde{w}^{k}\bigr) \\ \stackrel{\text{(2.13)}}{=}&\bigl\Vert w^{k}- \tilde{w}^{k}\bigr\Vert _{G}^{2}. \end{aligned}$$

Substituting it in (4.15) and considering (4.13), we immediately obtain the assertion (4.12). The proof is complete. □

Now we bound the term \(\theta(\tilde{u}^{k})-\theta(u)+(\tilde{w}^{k}-w)^{\mathsf{T}}F(w)\) by the discrepancy between \(\|w-w^{k+1}\|_{H}^{2}\) and \(\|w-w^{k}\| _{H}^{2}\) for any \(w\in\Omega\).

Theorem 4.1

For given \(w^{k}\in\Omega\), let \(w^{k+1}\) be generated by FSR-SADMM (4.1a)-(4.1d) and \(\tilde{w}^{k}\) be defined in (4.2). Then for any \(w\in\Omega\), we have

$$ \theta\bigl(\tilde{u}^{k}\bigr)-\theta(u)+\bigl( \tilde{w}^{k}-w\bigr)^{\mathsf{T}}F(w)\leq\frac {1}{2}\bigl( \bigl\Vert w-w^{k}\bigr\Vert _{H}^{2}-\bigl\Vert w-w^{k+1}\bigr\Vert _{H}^{2}\bigr)- \frac{1}{2}\bigl\Vert w^{k}-\tilde{w}^{k}\bigr\Vert _{G}^{2}. $$
(4.16)

Proof

Since F is monotone, we obtain

$$ \bigl(\tilde{w}^{k}-w\bigr)^{\mathsf{T}}F(w)\leq\bigl( \tilde{w}^{k}-w\bigr)^{\mathsf{T}}F\bigl(\tilde{w}^{k} \bigr). $$
(4.17)

From the above inequality and (4.4), we have

$$ \theta\bigl(\tilde{u}^{k}\bigr)-\theta(u)+\bigl( \tilde{w}^{k}-w\bigr)^{\mathsf{T}}F(w)\leq-\bigl(w-\tilde {w}^{k}\bigr)^{\mathsf{T}}Q\bigl(w^{k}- \tilde{w}^{k}\bigr),\quad \forall w\in\Omega. $$
(4.18)

The assertion (4.16) holds immediately from (4.12) and (4.18). The proof is complete. □

The next lemma demonstrates the contraction property of the sequence \((w^{k})_{k=0}^{\infty}\) generated by FSR-SADMM (4.1a)-(4.1d).

Lemma 4.3

Let \((w^{k})_{k=0}^{\infty}\) be the sequence generated by the FSR-SADMM (4.1a)-(4.1d) and \(\{\tilde{w}^{k}\}\) be defined in (4.2). Then for any \(w^{*}\in\Omega^{*}\), we have

$$ \bigl\Vert w^{k+1}-w^{*}\bigr\Vert _{H}^{2}\leq\bigl\Vert w^{k}-w^{*}\bigr\Vert _{H}^{2}-\bigl\Vert w^{k}-\tilde{w}^{k} \bigr\Vert _{G}^{2}. $$
(4.19)

Proof

Setting \(w=w^{*}\) in (4.16), we have

$$\begin{aligned} \bigl\Vert w^{k}-w^{*}\bigr\Vert _{H}^{2}-\bigl\Vert w^{k+1}-w^{*}\bigr\Vert ^{2}_{H}&\geq\bigl\Vert w^{k}-\tilde{w}^{k}\bigr\Vert ^{2}_{G}+2 \bigl[\theta\bigl(\tilde{u}^{k}\bigr)-\theta\bigl(u^{*}\bigr)+\bigl( \tilde{w}^{k}-w^{*}\bigr)^{\mathsf{T}}F\bigl(w^{*}\bigr)\bigr] \\ &\geq\bigl\Vert w^{k}-\tilde{w}^{k}\bigr\Vert ^{2}_{G}, \end{aligned}$$
(4.20)

where the last inequality follows from the fact that \(w^{*}\in\Omega^{*}\) (see (2.6)). The proof is complete. □

Recall that for the case \(0< r<1\), \(0< s<1\), the matrix G is positive definite, while for \(0< r<1\), \(s=1\), G may not be positive definite. We therefore need to further investigate the term \(\|w^{k}-\tilde{w}^{k}\|^{2}_{G}\). For the case \(0< r<1\), \(s=1\), we have

$$ \bigl\Vert w^{k}-\tilde{w}^{k}\bigr\Vert ^{2}_{G}=\bigl\Vert z_{1}^{k}- \tilde{z}_{1}^{k}\bigr\Vert ^{2}_{R}+ \frac{1-r}{\mu}\bigl\Vert \lambda ^{k}-\tilde{\lambda }^{k}\bigr\Vert ^{2}. $$
(4.21)

Notice that

$$\begin{aligned} \lambda^{k+1} \stackrel{\text{(4.1d)}}{=}& \lambda^{k+\frac {1}{2}}-\mu\bigl(B_{1}z_{1}^{k+1}+B_{2}z_{2}^{k+1} \bigr) \\ \stackrel{\text{(4.2)}}{=} & \lambda^{k}-r\bigl( \lambda^{k}-\tilde{\lambda }^{k}\bigr) -\mu \bigl(B_{1}z_{1}^{k+1}+B_{2}z_{2}^{k+1} \bigr) \\ = & \lambda^{k}-r\bigl(\lambda^{k}-\tilde{ \lambda}^{k}\bigr) -\mu \bigl(B_{1}z_{1}^{k+1}+B_{2}z_{2}^{k} \bigr)-\mu B_{2}\bigl(z_{2}^{k+1}-z_{2}^{k} \bigr) \\ = & \lambda^{k}-(1+r) \bigl(\lambda^{k}-\tilde{ \lambda}^{k}\bigr) -\mu B_{2}\bigl(z_{2}^{k+1}-z_{2}^{k} \bigr). \end{aligned}$$

Thus, we have

$$ \lambda^{k}-\tilde{\lambda}^{k}= \frac{1}{1+r}\bigl(\lambda^{k}-\lambda ^{k+1}\bigr)+ \frac{1}{1+r}\mu B_{2}\bigl(z_{2}^{k}-z_{2}^{k+1} \bigr). $$
(4.22)

Then we get

$$\begin{aligned} \bigl\Vert \lambda^{k}-\tilde{\lambda}^{k} \bigr\Vert ^{2} = & \frac{1}{(1+r)^{2}} \bigl\Vert \bigl( \lambda^{k}-\lambda^{k+1}\bigr)+\mu B_{2} \bigl(z_{2}^{k}-z_{2}^{k+1}\bigr)\bigr\Vert ^{2} \\ = & \frac{1}{(1+r)^{2}} \bigl\Vert \bigl(\lambda^{k}- \lambda^{k+1}\bigr)\bigr\Vert ^{2}+\frac{\mu ^{2}}{(1+r)^{2}} \bigl\Vert B_{2}\bigl(z_{2}^{k}-z_{2}^{k+1} \bigr)\bigr\Vert ^{2} \\ &{}+\frac{2\mu }{(1+r)^{2}}\bigl(\lambda^{k}-\lambda^{k+1} \bigr)^{\mathsf{T}}B_{2}\bigl(z_{2}^{k}-z_{2}^{k+1} \bigr). \end{aligned}$$
(4.23)

Now we treat the cross term in the above equation. For the previous iteration, the first-order optimality condition of the \(z_{2}\)-subproblem gives

$$ \theta_{2}(z_{2})-\theta_{2} \bigl(z_{2}^{k}\bigr)+\bigl(z_{2}-z_{2}^{k} \bigr)^{\mathsf{T}}\bigl\{ -B_{2}^{\mathsf{T}}\lambda^{k-\frac {1}{2}}+ \mu B_{2}^{\mathsf{T}}\bigl(B_{1}z_{1}^{k}{+}B_{2}z_{2}^{k} \bigr)\bigr\} \geq0, \quad \forall z_{2}\in{ \mathbb{R}^{2N_{h}}}. $$
(4.24)

Since \(s=1\), (4.1d) gives \(\lambda^{k}=\lambda^{k-\frac{1}{2}}-\mu(B_{1}z_{1}^{k}+B_{2}z_{2}^{k})\); substituting this into (4.24), we obtain

$$ \theta_{2}(z_{2})-\theta_{2} \bigl(z_{2}^{k}\bigr)+\bigl(z_{2}-z_{2}^{k} \bigr)^{\mathsf{T}}\bigl\{ -B_{2}^{\mathsf{T}}\lambda^{k} \bigr\} \geq0, \quad \forall z_{2}\in{ \mathbb{R}^{2N_{h}}}. $$
(4.25)

Similarly, we get

$$ \theta_{2}(z_{2})-\theta_{2} \bigl(z_{2}^{k+1}\bigr)+\bigl(z_{2}-z_{2}^{k+1} \bigr)^{\mathsf{T}}\bigl\{ -B_{2}^{\mathsf{T}}\lambda ^{k+1} \bigr\} \geq0,\quad \forall z_{2}\in{ \mathbb{R}^{2N_{h}}}. $$
(4.26)

Setting \(z_{2}=z_{2}^{k+1}\) and \(z_{2}=z_{2}^{k}\) in (4.25) and (4.26), respectively, and then adding them, we get

$$ \bigl(\lambda^{k}-\lambda^{k+1} \bigr)^{\mathsf{T}}B_{2}\bigl(z_{2}^{k}-z_{2}^{k+1} \bigr)\geq0. $$
(4.27)

Combining (4.21), (4.23), and (4.27), we obtain

$$\begin{aligned} \bigl\Vert w^{k}-\tilde{w}^{k}\bigr\Vert _{G}^{2} \ge& \bigl\Vert z_{1}^{k}- z_{1}^{k+1} \bigr\Vert ^{2}_{R}+ \frac {1-r}{(1+r)^{2}\mu} \bigl\Vert \lambda^{k}-\lambda^{k+1}\bigr\Vert ^{2} \\ &{}+\frac{(1-r)\mu }{(1+r)^{2}} \bigl\Vert z_{2}^{k}-z_{2}^{k+1} \bigr\Vert ^{2}_{B_{2}^{\mathsf{T}}B_{2}}. \end{aligned}$$
(4.28)

Recalling \(\tilde{w}^{k}\) defined in (4.2) and combining (4.28) with (4.19), we obtain the following lemma, which is important for the convergence proof.

Lemma 4.4

Let \((w^{k})_{k=0}^{\infty}\) be the sequence generated by the FSR-SADMM (4.1a)-(4.1d) with \(r\in (0,1)\), \(s=1\), and \(\{\tilde{w}^{k}\}\) be defined in (4.2). Then we have

$$\begin{aligned} \bigl\Vert w^{k+1}-w^{*}\bigr\Vert _{H}^{2} \leq&\bigl\Vert w^{k}-w^{*}\bigr\Vert _{H}^{2}- \biggl\{ \bigl\Vert z_{1}^{k}-\tilde{z}_{1}^{k} \bigr\Vert ^{2}_{R}+\frac{1-r}{(1+r)^{2}\mu}\bigl\Vert \lambda^{k}-\lambda^{k+1} \bigr\Vert ^{2} \\ &{}+\frac {(1-r)\mu}{(1+r)^{2}} \bigl\Vert z_{2}^{k}- \tilde{z}_{2}^{k}\bigr\Vert ^{2}_{B_{2}^{\mathsf{T}}B_{2}} \biggr\} . \end{aligned}$$
(4.29)

With the above lemmas, we are now ready to establish the global convergence of FSR-SADMM for solving \(\operatorname{VI}(\Omega,F,\theta)\).

Theorem 4.2

The sequence \((w^{k})_{k=0}^{\infty}\) generated by FSR-SADMM (4.1a)-(4.1d) converges to some \(w^{\infty}\) that is a solution of \(\operatorname{VI}(\Omega,F,\theta)\).

Proof

(I) For the case \(r\in(0,1)\), \(s\in(0,1)\). Summing (4.19) over \(k = 0, \ldots,\infty\) yields

$$ \sum_{k=0}^{\infty}\bigl\Vert w^{k}-\tilde{w}^{k}\bigr\Vert _{G}^{2} \leq\bigl\Vert w^{0}-w^{*}\bigr\Vert _{H}^{2}, $$
(4.30)

which implies that

$$ \lim_{k\rightarrow\infty}\bigl\Vert w^{k}- \tilde{w}^{k}\bigr\Vert _{G}=0. $$
(4.31)

Recall that the sequence \(\{w^{k}\}\) is bounded (see Lemma 4.3); hence the sequence \(\{\tilde{w}^{k}\}\) is also bounded and has at least one cluster point. Let \(w^{\infty}\) be a cluster point of \(\{\tilde {w}^{k}\}\) and let the subsequence \(\{\tilde{w}^{k_{j}}\}\) converge to \(w^{\infty }\). Combining (4.4) and (4.31), we get

$$ \theta(u)-\theta\bigl(\tilde{u}^{k_{j}}\bigr)+\bigl(w- \tilde{w}^{k_{j}}\bigr)^{\mathsf{T}}F\bigl(\tilde {w}^{k_{j}} \bigr)\geq0,\quad \forall w\in\Omega. $$
(4.32)

Letting \(j\rightarrow\infty\) in the above inequality yields

$$ \theta(u)-\theta\bigl({u}^{\infty}\bigr)+\bigl(w-{w}^{\infty}\bigr)^{\mathsf{T}}F\bigl({w}^{\infty}\bigr)\geq 0,\quad \forall w\in \Omega, $$
(4.33)

which implies that \(w^{\infty}\in\Omega^{*}\). From \(\lim_{k\rightarrow\infty}\|w^{k}-\tilde{w}^{k}\|_{G}=0\), we can deduce \(\lim_{k\rightarrow\infty}\|w^{k}-\tilde{w}^{k}\|_{{H}}=0\). Recall that \(\{\tilde{w}^{k_{j}}\}\rightarrow w^{\infty}\); thus for any given \(\epsilon>0\), there exists an integer l such that

$$ \bigl\Vert w^{k_{l}}-\tilde{w}^{k_{l}}\bigr\Vert _{H}\leq\frac{\epsilon}{2}\quad \mbox{and} \quad \bigl\Vert \tilde{w}^{k_{l}}-w^{\infty}\bigr\Vert _{H}\leq \frac{\epsilon}{2}. $$
(4.34)

Thus, for any \(k\geq k_{l}\), it follows from the above two inequalities and (4.19) that

$$ \bigl\Vert w^{k}-w^{\infty}\bigr\Vert _{H}\leq \bigl\Vert w^{k_{l}}-w^{\infty}\bigr\Vert _{H}\leq \bigl\Vert {w}^{k_{l}}-\tilde {w}^{k_{l}} \bigr\Vert _{H}+\bigl\Vert \tilde{w}^{k_{l}}-w^{\infty}\bigr\Vert _{H}< \epsilon. $$
(4.35)

This implies that the sequence \(\{w^{k}\}\) converges to \(w^{\infty}\in\Omega ^{*}\). This completes the proof.

(II) For the case \(r\in(0,1)\), \(s=1\). Summing (4.29) over \(k = 0, \ldots,\infty\) yields

$$\begin{aligned}& \sum_{k=0}^{\infty}\bigl\Vert z_{1}^{k}-\tilde{z}_{1}^{k}\bigr\Vert _{R}^{2}+\frac{(1-r)\mu }{(1+r)^{2}}\sum _{k=0}^{\infty}\bigl\Vert z_{2}^{k}- \tilde{z}_{2}^{k}\bigr\Vert ^{2}_{B_{2}^{\mathsf{T}}B_{2}}+ \frac{1-r}{(1+r)^{2}\mu}\sum_{k=0}^{\infty}\bigl\Vert \lambda^{k}-\lambda ^{k+1}\bigr\Vert ^{2} \\& \quad \leq\bigl\Vert w^{0}-w^{*}\bigr\Vert _{H}^{2}, \end{aligned}$$
(4.36)

which implies that (see Footnote 4)

$$ \lim_{k\rightarrow\infty}\bigl\Vert z_{1}^{k}- \tilde{z}_{1}^{k}\bigr\Vert _{R}=0, \qquad \lim _{k\rightarrow\infty}\bigl\Vert z_{2}^{k}- \tilde{z}_{2}^{k}\bigr\Vert =0 \quad \mbox{and}\quad \lim _{k\rightarrow\infty}\bigl\Vert \lambda^{k}-\tilde{ \lambda}^{k}\bigr\Vert =0. $$
(4.37)

Recalling the boundedness of the sequence \(\{w^{k}\}\) again, we see that the sequence \(\{\tilde{w}^{k}\}\) is also bounded and has at least one cluster point. The remaining proof is similar to case (I) and is omitted. □

5 Numerical results

In this section, we study the performance of FSR-SADMM for solving (1.2). Our codes were written in MATLAB R2014a, and all of the experiments were performed on a laptop with an Intel Core 2 Duo CPU at 2.2 GHz and 2 GB memory (see Footnote 5).

Experiments were conducted on three test images: Peppers, Lena, and Baboon, all of which are \(256\times256\times3\) RGB images. Color images were processed using the luminance channel only, as in [9]: the RGB images were transformed into luminance/chrominance coordinates, and the chrominance channels (Cb, Cr) were upsampled using bicubic interpolation. The results were quantitatively evaluated using the standard peak signal-to-noise ratio (PSNR) [17], defined as

$$ \mathrm{PSNR}=10\log_{10}\frac{256^{2}L^{2}}{\|x-\hat{x}\|^{2}}, $$
(5.1)

where x and \(\hat{x}\) are the original and reconstructed images and L denotes the maximum intensity value in x. We compare direct ADMM, FSR-ADMM, and FSR-SADMM under the stopping criterion \(\frac{\|x^{k}-x^{k-1}\|}{\|x^{k-1}\|}<\mathrm{Tol}\) with \(\mathrm{Tol}\in\{10^{-6},10^{-5},10^{-4}\}\).
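In code, the evaluation and stopping logic amount to a few lines. A minimal sketch, where x, xk, and xkm1 are hypothetical variables holding the reference image and the current and previous iterates:

```matlab
% PSNR as in (5.1) and the relative-change stopping criterion.
L        = max(x(:));                                    % peak intensity of x
psnr_val = 10*log10(256^2 * L^2 / norm(x(:) - xk(:))^2); % PSNR of iterate xk
tol      = 1e-6;                                         % also 1e-5 and 1e-4
stop     = norm(xk(:) - xkm1(:)) / norm(xkm1(:)) < tol;
```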

Figures 1-3 compare the PSNR outputs of our FSR-SADMM with direct ADMM and FSR-ADMM, while Figure 4 depicts the total computational time of the three methods. Direct ADMM splits the super-resolution imaging problem into three subproblems using two dual multipliers, while FSR-ADMM reduces the per-iteration cost by lowering the computational complexity of its subproblems from \(\mathcal {O} (N_{h}^{3})\) to \(\mathcal{O}(N_{h}\log N_{h})\). Our FSR-SADMM has the same \(\mathcal{O}(N_{h}\log N_{h})\) complexity but enjoys much better convergence behavior. In our experiments, we set \(\mu =0.005\) for all three methods. In all figures, we list the PSNR below the reconstructed image for each compared method. As the reconstructed images in Figures 1-3 show, there are no noticeable differences in the texture of the Peppers, the wrinkles around Lena’s eyes, or the hair on Baboon’s nose; FSR-SADMM attains only slightly higher PSNR than direct ADMM and FSR-ADMM for the various tolerances. However, the results in Figure 4 indicate that, compared with FSR-ADMM, our FSR-SADMM saves roughly 45% of the computational time at the high precision \(\mathrm{Tol}=10^{-6}\) and still saves roughly 20% at the low precision \(\mathrm{Tol}=10^{-4}\). On average, about 30% of the computational time is saved, which efficiently accelerates the super-resolution imaging process.

Figure 1

From left to right: super-resolution Peppers by direct ADMM, FSR-ADMM, and FSR-SADMM.

Figure 2

From left to right: super-resolution Lena by direct ADMM, FSR-ADMM, and FSR-SADMM.

Figure 3

From left to right: super-resolution Baboon by direct ADMM, FSR-ADMM, and FSR-SADMM.

Figure 4

The original signal, noisy measurement, and reconstruction results by using different methods.

We also test the influence of the relaxation parameter r on FSR-SADMM. We fix \(s=1\), \(\mu=0.005\) and choose different values of r in the interval \([0.3,1]\) (specifically, \(r\in\{0.3,0.5,0.7,0.9,1\}\)). For comparison purposes, we also plot the results of FSR-ADMM with \(\mu=0.005\). As shown in Figure 5, the relaxation parameter r works well for a wide range of values. In particular, when \(r\geq0.9\), the PSNR cannot be improved and even decreases in the first 20 iterations, while the improvement of PSNR is not obvious when r is less than or equal to 0.3. As such, values of r in the interval \([0.7,0.9]\) are preferred.

Figure 5

Evolution of PSNR with respect to iterations for Peppers under different values of r.

From Table 1, our proposed FSR-SADMM outperforms direct ADMM and FSR-ADMM for all tolerances. In particular, FSR-SADMM costs much less computing time and requires fewer iterations at higher-precision tolerances such as \(10^{-6}\).

Table 1 The performance of different SADMM-type algorithms in terms of PSNR and SSIM

6 Conclusions

For solving the SISR problem (1.2), we proposed a new algorithm based on the strictly contractive semiproximal Peaceman-Rachford splitting method. The global convergence of the algorithm is established. The computational results indicate that our algorithm achieves better performance than state-of-the-art methods, including direct ADMM and FSR-ADMM. More specifically, with \(\mathrm{Tol}=10^{-6}\), our algorithm saves about 80% and 45% of the computing time compared with direct ADMM and FSR-ADMM, respectively.

Notes

  1. Note that the TV model can be defined anisotropically or isotropically, i.e.,

    $$ \|x\|_{\mathrm{TV}} := \left \{ \textstyle\begin{array}{l@{\quad}l} \|\nabla_{h} x\|_{1} + \|\nabla_{v} x\|_{1} & (\mbox{anisotropic}); \\ \sqrt{\|\nabla_{h}x\|_{2}^{2} + \|\nabla_{v}x\|_{2}^{2}} & (\mbox{isotropic}). \end{array}\displaystyle \right . $$

    As our proposed method can be applied to the two TV models in a similar way, we consider only the isotropic model and omit the anisotropic case.

  2. The symmetric ADMM is also known as the Peaceman-Rachford splitting method [10].

  3. Note that the proximal term \(\frac{1}{2}\|z_{1}-z_{1}^{k}\|^{2}_{R}\) plays an important role: it linearizes the term \(-(\lambda^{k})^{\mathsf{T}}B_{1}z_{1}+\frac{\mu }{2}\|B_{1}z_{1}+B_{2}z_{2}^{k} \|_{2}^{2}\) so as to avoid computing \((H^{\mathsf{T}}\Phi ^{\mathsf{T}}\Phi H+\mu B_{1}^{\mathsf{T}}B_{1})^{-1}\) when finding an analytical solution of (1.7a). In fact, many works use an approximate linearization process [18]: they first denote \(h(z_{1})=\frac {1}{2}\|B_{1}z_{1}+B_{2}z_{2}^{k}-\lambda^{k}/\mu\|_{2}^{2}\) and, given a Lipschitz constant \(1/\tau\), use the Taylor approximation \(h(z_{1})\approx h(z_{1}^{k})+\nabla h(z_{1}^{k})^{\mathsf{T}}(z_{1}-z_{1}^{k})+\frac{1}{2\tau}\|z_{1}-z_{1}^{k}\|^{2}_{2}\); this increases the difficulty of the convergence analysis and needs a specific customized stopping criterion [19].

  4. Note that in this paper \(B_{2}^{\mathsf{T}}B_{2}=I_{2N_{h}}\), R is assumed to be positive semidefinite, and the last limit in (4.37) is deduced with the aid of (4.22).

  5. The MATLAB code has been released on Github https://github.com/gaobingaobingaobin/SISR.

References

  1. Dodgson, NA: Quadratic interpolation for image resampling. IEEE Trans. Image Process. 6, 1322-1326 (1997)

  2. Lu, X, Yuan, Y, Yan, P: Image super-resolution via double sparsity regularized manifold learning. IEEE Trans. Circuits Syst. Video Technol. 23, 2022-2033 (2013)

  3. Liu, X, Zhao, D, Xiong, R, Ma, S, Gao, W, Sun, H: Image interpolation via regularized local linear regression. IEEE Trans. Image Process. 20, 3455-3469 (2011)

  4. Yang, J, Wright, J, Huang, TS, Ma, Y: Image super-resolution via sparse representation. IEEE Trans. Image Process. 19, 2861-2873 (2010)

  5. Chan, TF, Ng, MK, Yau, AC, Yip, AM: Superresolution image reconstruction using fast inpainting algorithms. Appl. Comput. Harmon. Anal. 23, 3-24 (2007)

  6. Ng, MK, Weiss, P, Yuan, X: Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods. SIAM J. Sci. Comput. 32, 2710-2736 (2010)

  7. Sun, J, Sun, J, Xu, Z, Shum, H-Y: Image super-resolution using gradient profile prior. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-8. IEEE Press, New York (2008)

  8. Marquina, A, Osher, SJ: Image super-resolution by TV-regularization and Bregman iteration. J. Sci. Comput. 37, 367-382 (2008)

  9. Zhao, N, Wei, Q, Basarab, A, Kouame, D, Tourneret, J-Y: Fast single image super-resolution (2015). arXiv:1510.00143

  10. He, B, Liu, H, Wang, Z, Yuan, X: A strictly contractive Peaceman-Rachford splitting method for convex programming. SIAM J. Optim. 24, 1011-1040 (2014)

  11. Golub, GH, Van Loan, CF: Matrix Computations, vol. 3. Johns Hopkins University Press, Baltimore (2012)

  12. Almeida, MS, Figueiredo, MA: Deconvolving images with unknown boundaries using the alternating direction method of multipliers. IEEE Trans. Image Process. 22, 3074-3086 (2013)

  13. He, B, Yuan, X: On the \(O(1/n)\) convergence rate of the Douglas-Rachford alternating direction method. SIAM J. Numer. Anal. 50, 700-709 (2012)

  14. Li, M, Yuan, X: A strictly contractive Peaceman-Rachford splitting method with logarithmic-quadratic proximal regularization for convex programming. Math. Oper. Res. 40, 842-858 (2015)

  15. Gu, Y, Jiang, B, Han, D: A semi-proximal-based strictly contractive Peaceman-Rachford splitting method (2015). arXiv:1506.02221

  16. He, B, Ma, F, Yuan, X: Convergence study on the symmetric version of ADMM with larger step sizes. SIAM J. Imaging Sci. (to appear)

  17. Hore, A, Ziou, D: Image quality metrics: PSNR vs. SSIM. In: 20th International Conference on Pattern Recognition (ICPR), pp. 2366-2369. IEEE Press, New York (2010)

  18. Chen, Z, Basarab, A, Kouamé, D: Joint compressive sampling and deconvolution in ultrasound medical imaging. In: IEEE International Ultrasonics Symposium (IUS), pp. 1-4 (2015)

  19. Yang, J, Yuan, X: Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Math. Comput. 82, 301-329 (2013)


Acknowledgements

This research was supported by Jiangsu Key Laboratory of Meteorological Observation and Information Processing Foundation (KDXS1503).

Author information


Corresponding author

Correspondence to Bin Gao.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

BG proposed the main algorithm and drafted the manuscript. FS participated in the design of the method. YT participated in the design of the study and performed the experiment part. SX conceived of the study and helped to draft the manuscript. All authors read and approved the final manuscript.

Appendix: Proof of the equivalence of (3.2a) and (1.7a)

Proof

From (3.2a) and the definitions of \(B_{1}\) and \(B_{2}\) above (2.1), it is not difficult to verify that

$$\begin{aligned} x^{k+1}&=\mathop{\operatorname {argmin}}_{x\in \mathbb{R}^{N_{h}}}\frac{1}{2}\Vert y-\Phi Hx\Vert _{2}^{2}-\bigl(\lambda ^{k}\bigr)^{\mathsf{T}}\bigl(A x-u^{k}\bigr)+\frac{\mu}{2}\bigl\Vert A x -u^{k}\bigr\Vert _{2}^{2}+\frac{\mu}{2}\bigl\Vert x-x^{k}\bigr\Vert ^{2}_{\frac{1}{\tau}I_{N_{h}}- A^{\mathsf{T}}A} \\ &= \mathop{\operatorname {argmin}}_{x\in \mathbb{R}^{N_{h}}}\frac{1}{2}\Vert y-\Phi Hx\Vert _{2}^{2}+\frac{\mu }{2}\biggl\Vert A x-u^{k}- \frac{\lambda^{k}}{\mu}\biggr\Vert ^{2}_{2}+\frac{\mu}{2} \bigl\Vert x-x^{k}\bigr\Vert ^{2}_{\frac{1}{\tau}I_{N_{h}}- A^{\mathsf{T}}A} \\ &= \mathop{\operatorname {argmin}}_{x\in \mathbb{R}^{N_{h}}}\frac{1}{2}\Vert y-\Phi Hx\Vert _{2}^{2}+\frac{\mu }{2\tau}\bigl\Vert x-x^{k} \bigr\Vert ^{2}_{2}-\frac{\mu}{2}\bigl\Vert A x- A x^{k}\bigr\Vert _{2}^{2}+\frac{\mu }{2}\biggl\Vert A x-u^{k}-\frac{\lambda^{k}}{\mu}\biggr\Vert ^{2}_{2} \\ &= \mathop{\operatorname {argmin}}_{x\in \mathbb{R}^{N_{h}}}\frac{1}{2}\Vert y-\Phi Hx\Vert _{2}^{2}+\frac{\mu }{2\tau}\bigl\Vert x-x^{k} \bigr\Vert ^{2}_{2}+\mu x^{\mathsf{T}}A^{\mathsf{T}}\biggl( A x^{k}-u^{k}-\frac{\lambda^{k}}{\mu }\biggr) \\ &= \mathop{\operatorname {argmin}}_{x\in \mathbb{R}^{N_{h}}}\frac{1}{2}\Vert y-\Phi Hx\Vert _{2}^{2}+\frac{\mu }{2\tau}\biggl\Vert x-x^{k}+ \tau A^{\mathsf{T}}\biggl( A x^{k}-u^{k}-\frac{\lambda^{k}}{\mu} \biggr)\biggr\Vert ^{2}_{2} \\ &= \biggl(H^{\mathsf{T}}\Phi^{\mathsf{T}}\Phi H+\frac{\mu}{\tau}I_{N_{h}} \biggr)^{-1} \biggl(H^{\mathsf{T}}\Phi^{\mathsf{T}}y+ \frac{\mu}{\tau}x^{k}-\mu A^{\mathsf{T}}\biggl(Ax^{k}-u^{k}- \frac{\lambda ^{k}}{\mu}\biggr) \biggr). \end{aligned}$$

The above equation is exactly (1.7a); the proof is complete. □

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Gao, B., Sun, F., Tong, Y. et al. Solving total-variation image super-resolution problems via proximal symmetric alternating direction methods. J Inequal Appl 2016, 197 (2016). https://doi.org/10.1186/s13660-016-1136-7
