
Strictly contractive Peaceman–Rachford splitting method to recover the corrupted low rank matrix

Abstract

The strictly contractive Peaceman–Rachford splitting method (SC-PRSM) has attracted much attention for solving separable convex programming. In this paper, the SC-PRSM is applied for the first time to recover a corrupted low rank matrix, which extends the range of applications of the SC-PRSM. At each iteration, we solve only two easy subproblems: one has a closed-form solution, and the other requires solving a linear system by the conjugate gradient method. Finally, numerical comparisons with existing alternating direction method of multipliers type algorithms show that the SC-PRSM is efficient and competitive for recovering low rank matrices.

1 Introduction

Recovering a corrupted low rank matrix from a small number of observations has attracted much attention owing to its many applications, such as online recommendation systems and collaborative filtering [1], the Jester joke data [2], DNA data [3] and the famous Netflix problem [4]. This is the well-known matrix completion (MC) problem, whose mathematical formulation can be expressed as

$$ \min_{X\in \mathbb{R}^{m\times n}} \Vert X \Vert _{*}, \quad\text{s.t. } X _{i,j}= M_{i,j},\quad {i,j}\in \varOmega , $$
(1)

where Ω is a given set of index pairs \((i,j)\) and \(\|X\|_{*}\) is the nuclear norm, defined as the sum of the singular values of X. Assuming the matrix X has r positive singular values \(\sigma _{1} \geq \sigma _{2} \geq \cdots \geq \sigma _{r} > 0\), we have \(\|X\|_{*}=\sum_{i=1}^{r}\sigma _{i}(X)\). It is well known that the nuclear norm is the best convex approximation of the rank function over the unit ball of matrices with spectral norm at most one [5]. Furthermore, a general form of the MC problem is nuclear norm minimization with an affine constraint, which can be written as

$$ \min_{X\in \mathbb{R}^{m\times n}} \Vert X \Vert _{*}, \quad\text{s.t. } \mathcal{A}(X)= b, $$
(2)

where \(\mathcal{A}: \mathbb{R}^{m\times n}\rightarrow \mathbb{R}^{p}\) is a linear map and \(b\in \mathbb{R}^{p}\) is a given measurement vector. In practical applications b is often contaminated by noise, so the problem (2) can be relaxed to the following regularized nuclear norm minimization problem:

$$ \min_{X\in \mathbb{R}^{m\times n}} \Vert X \Vert _{*}+ \frac{\gamma }{2} \bigl\Vert \mathcal{A}(X)-b \bigr\Vert _{2} ^{2}, $$
(3)

where γ is the regularization parameter balancing the two terms.
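To make the objects above concrete, the nuclear norm appearing in (1)–(3) can be evaluated directly from the singular values. The following minimal numpy sketch (the function name is ours) illustrates this:

```python
import numpy as np

def nuclear_norm(X):
    """||X||_*: the sum of the singular values of X."""
    return np.linalg.svd(X, compute_uv=False).sum()

# A rank-1 example: the only positive singular value is ||u|| * ||v||.
u, v = np.array([3.0, 4.0]), np.array([1.0, 2.0, 2.0])
print(nuclear_norm(np.outer(u, v)))   # 5 * 3 = 15
```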

Recently, many efficient algorithms have been proposed to solve problem (3), such as SDPT3 [6], singular value thresholding (SVT) [7], fixed point continuation with approximate SVD (FPCA) [5], a proximal point algorithm [8], an accelerated proximal gradient (APG) algorithm [9], alternating direction method of multipliers (ADMM) type algorithms [10,11,12], etc. To the best of our knowledge, however, there are few studies on the application of the SC-PRSM to recovering corrupted low rank matrices. In this paper, we further study the SC-PRSM for the problem (3).

The strictly contractive Peaceman–Rachford splitting method (SC-PRSM) was proposed in [13] by attaching an underdetermined relaxation factor α to the penalty parameter in the Lagrange multiplier updating steps, which guarantees the convergence of the PRSM proposed in [14] without restrictive assumptions. It is worth mentioning that the difference between the PRSM and the ADMM is the additional intermediate update of the multipliers, which treats the two variables symmetrically. Recently, the SC-PRSM has been widely applied to many important problems; see [15,16,17,18]. In this paper, we focus on applying the SC-PRSM to the problem (3), building on the IADM-CG method proposed in [12]. Moreover, we report numerical comparisons that illustrate the advantages of the SC-PRSM for the problem (3).

The rest of this paper is organized as follows. In Sect. 2, we give some preliminaries. In Sect. 3, we construct the SC-PRSM for the problem (3). Numerical experiments are reported in Sect. 4. Finally, some conclusions are given.

2 Preliminaries

In this section, we review some preliminaries on the ADMM and the SC-PRSM for the subsequent application of the SC-PRSM. Both methods are applied to the following convex minimization model with linear constraints and a separable objective function:

$$ \min \bigl\{ \theta _{1}(x)+\theta _{2}(y)\mid Ax+By=b,x\in \mathcal{X},y \in \mathcal{Y}\bigr\} , $$
(4)

where \(A\in \mathbb{R}^{m\times n_{1}}\), \(B\in \mathbb{R}^{m\times n_{2}}\), \(b\in \mathbb{R}^{m}\), \(\mathcal{X}\subset \mathbb{R}^{n_{1}}\) and \(\mathcal{Y}\subset \mathbb{R}^{n_{2}}\) are closed convex sets, and \(\theta _{1}: \mathcal{X}\rightarrow \mathbb{R}\) and \(\theta _{2}: \mathcal{Y}\rightarrow \mathbb{R}\) are convex functions.

The iterative scheme of ADMM [19, 20] for (4) reads

$$\begin{aligned} \textstyle\begin{cases} x_{k+1}= \mathop{\operatorname{argmin}} \{\theta _{1}(x)-(\lambda _{k})^{T}(Ax+By _{k}-b)+\frac{\beta }{2} \Vert Ax+By_{k}-b \Vert ^{2},x\in \mathcal{X} \}, \\ y_{k+1}= \mathop{\operatorname{argmin}} \{ \theta _{2}(y)-(\lambda _{k})^{T}(Ax _{k+1}+By-b)+\frac{\beta }{2} \Vert Ax_{k+1}+By-b \Vert ^{2},y\in \mathcal{Y} \}, \\ \lambda _{k+1} = \lambda _{k}-\beta [Ax_{k+1}+By_{k+1}-{ {b}}], \end{cases}\displaystyle \end{aligned}$$
(5)

where λ is the Lagrangian multiplier associated with the linear constraints and \(\beta >0\) is a penalty parameter. Moreover, we note that the ADMM can be viewed as an application of the Douglas–Rachford splitting method to the dual of (4), as analyzed in [21]; for its convergence properties one may refer to [22, 23].

Applying the PRSM [24] to the dual problem of (4) [21], we can obtain the iterative schemes as follows:

$$\begin{aligned} \textstyle\begin{cases} x_{k+1}= \mathop{\operatorname{argmin}} \{\theta _{1}(x)-(\lambda _{k})^{T}(Ax+By _{k}-b)+\frac{\beta }{2} \Vert Ax+By_{k}-b \Vert ^{2},x\in \mathcal{X} \}, \\ \lambda _{k+\frac{1}{2}} = \lambda _{k}-\beta [Ax_{k+1}+By_{k}-{ {b}}], \\ y_{k+1}= \mathop{\operatorname{argmin}} \{ \theta _{2}(y)-( \lambda _{k+\frac{1}{2}})^{T}(Ax_{k+1}+By-b)+\frac{\beta }{2} \Vert Ax_{k+1}+By-b \Vert ^{2},y\in \mathcal{Y} \}, \\ \lambda _{k+1} = \lambda _{k+\frac{1}{2}}-\beta [Ax_{k+1}+By_{k+1}- {{b}}], \end{cases}\displaystyle \end{aligned}$$
(6)

where the update of \(\lambda _{k+\frac{1}{2}}\) is the only difference between the PRSM and the ADMM; it treats the two variables symmetrically. However, the PRSM needs more restrictive assumptions than the ADMM to guarantee convergence [21]. The reader may refer to [24, 25] for more numerical studies of the PRSM.

To overcome the lack of strict contraction of the PRSM iterative sequence, He et al. [13] found that when an underdetermined relaxation factor \(\alpha \in (0,1)\) is attached to the penalty parameter β in the Lagrange multiplier updating steps of (6), the resulting sequence becomes strictly contractive with respect to the solution set of (4). They also pointed out that, thanks to this strict contraction property, the PRSM with an underdetermined relaxation factor admits a worst-case \(\mathcal{O}(1/t)\) convergence rate in a nonergodic sense. They therefore named this method SC-PRSM in [13] and gave its iterative scheme for (4) as follows:

$$\begin{aligned} \textstyle\begin{cases} x_{k+1}= \mathop{\operatorname{argmin}} \{\theta _{1}(x)-(\lambda _{k})^{T}(Ax+By _{k}-b)+\frac{\beta }{2} \Vert Ax+By_{k}-b \Vert ^{2},x\in \mathcal{X} \}, \\ \lambda _{k+\frac{1}{2}} = \lambda _{k}-\alpha \beta [Ax_{k+1}+By_{k}- {{b}}], \\ y_{k+1}= \mathop{\operatorname{argmin}} \{ \theta _{2}(y)-( \lambda _{k+\frac{1}{2}})^{T}(Ax_{k+1}+By-b)+\frac{\beta }{2} \Vert Ax_{k+1}+By-b \Vert ^{2},y\in \mathcal{Y} \}, \\ {\lambda _{k+1} = \lambda _{k+\frac{1}{2}}-\alpha \beta [Ax_{k+1}+By _{k+1}-{{b}}],} \end{cases}\displaystyle \end{aligned}$$
(7)

where \(\alpha \in (0,1)\). As shown in [13], the SC-PRSM ensures that the sequence generated by (7) is strictly contractive with respect to the solution set of (4). Without any further assumption on the model (4), they established a worst-case \(\mathcal{O}(1/t)\) convergence rate for (7). Moreover, applications of the SC-PRSM in machine learning and image processing showed that it is numerically efficient. Therefore, in this paper, we further study the SC-PRSM for recovering a corrupted low rank matrix.
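For reference, the scheme (7) translates directly into a short program. The sketch below is schematic: the subproblem solvers argmin_x and argmin_y and the problem data are placeholders to be supplied by the user, not part of the method of [13] itself.

```python
import numpy as np

def sc_prsm(argmin_x, argmin_y, A, B, b, beta, alpha, x0, y0, iters=100):
    """Schematic SC-PRSM iteration (7) for model (4).

    argmin_x(y, lam) and argmin_y(x, lam) solve the two subproblems for a
    fixed value of the other variable and the current multiplier.
    """
    x, y = x0, y0
    lam = np.zeros_like(b)
    for _ in range(iters):
        x = argmin_x(y, lam)                               # x-subproblem
        lam = lam - alpha * beta * (A @ x + B @ y - b)     # intermediate multiplier update
        y = argmin_y(x, lam)                               # y-subproblem
        lam = lam - alpha * beta * (A @ x + B @ y - b)     # final multiplier update
    return x, y, lam
```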

3 Algorithm

In this section, we extend the SC-PRSM to the problem (3) based on the IADM-CG method [12]. Firstly, by introducing an auxiliary variable Y, the problem (3) can be reformulated as follows:

$$\begin{aligned} \min_{X, Y} \Vert X \Vert _{*}+ \frac{\gamma }{2} \bigl\Vert \mathcal{A}(Y)-b \bigr\Vert _{2}^{2}, \quad\text{s.t. } X=Y . \end{aligned}$$
(8)

Then its corresponding augmented Lagrangian function can be obtained as follows:

$$ \mathcal{L}_{\mathcal{A}}(X,Y,Z)= \Vert X \Vert _{*}+ \frac{\gamma }{2} \bigl\Vert \mathcal{A}(Y)-b \bigr\Vert ^{2}_{2}- \langle Z, X-Y\rangle +\frac{\mu }{2} \Vert X-Y \Vert ^{2}_{F}, $$
(9)

where \(Z\in \mathbb{R}^{m\times n}\) is the Lagrangian multiplier, \(\mu >0\) is the penalty parameter, and \(\langle \cdot ,\cdot \rangle \) denotes the inner product of matrices or vectors.

Given \(\{X_{k},Y_{k},Z_{k}\}\), applying the SC-PRSM for the problem of (8), we can obtain the following iteration equations:

$$\begin{aligned} &X_{k+1}=\mathop{\operatorname{argmin}}_{X} \mathcal{L}_{\mathcal{A}}(X,Y _{k},Z_{k}), \end{aligned}$$
(10)
$$\begin{aligned} &Z_{k+\frac{1}{2}}=Z_{k}-\alpha \mu (X_{k+1}-Y_{k}), \end{aligned}$$
(11)
$$\begin{aligned} &Y_{k+1}=\mathop{\operatorname{argmin}}_{Y} \mathcal{L}_{\mathcal{A}}(X_{k+1},Y,Z _{k+\frac{1}{2}}), \end{aligned}$$
(12)
$$\begin{aligned} &Z_{k+1}=Z_{k+\frac{1}{2}}-\alpha \mu (X_{k+1}-Y_{k+1}), \end{aligned}$$
(13)

where \(\alpha \in (0,1)\) is a relaxation factor. Observing the above iteration scheme, we note that only the two subproblems with respect to X and Y need to be solved. The X-subproblem can be reformulated as

$$\begin{aligned} X_{k+1} &=\mathop{\operatorname{argmin}}_{X} \Vert X \Vert _{*}- \langle Z_{k}, X-Y _{k}\rangle + \frac{\mu }{2} \Vert X-Y_{k} \Vert ^{2}_{F} \\ &=\mathop{\operatorname{argmin}}_{X} \Vert X \Vert _{*}+\frac{\mu }{2} \biggl\Vert X-\biggl(Y_{k}+ \frac{1}{ \mu }Z_{k}\biggr) \biggr\Vert ^{2}_{F}. \end{aligned}$$
(14)

Using the singular value thresholding operator proposed in [7], we get

$$ X_{k+1}= \mathcal{S}_{1/\mu }\biggl(Y_{k}+ \frac{1}{\mu }Z_{k}\biggr). $$
(15)

Given \(X_{k+1}\), we update the Lagrangian multiplier \(Z_{k+\frac{1}{2}}\) by (11). Given \(\{X_{k+1}, Z_{k+\frac{1}{2}}\}\), we next compute the iterate \(Y_{k+1}\).

Firstly, we denote the augmented Lagrangian function with respect to Y by \(Q(Y)\), which is expressed as

$$\begin{aligned} Q(Y) &=\mathcal{L}_{\mathcal{A}}(X_{k+1},Y,Z_{k+\frac{1}{2}}) \\ &= \frac{\gamma }{2} \bigl\Vert \mathcal{A}(Y)-b \bigr\Vert ^{2}_{2}- \langle Z_{k+ \frac{1}{2}}, X_{k+1}-Y\rangle +\frac{\mu }{2} \Vert X_{k+1}-Y \Vert ^{2}_{F}. \end{aligned}$$

It is not hard to see that \(Q(Y)\) is a convex quadratic function, whose gradient is

$$ G(Y)=Z_{k+\frac{1}{2}}-\mu (X_{k+1}-Y)+\gamma \mathcal{A}^{*} \bigl( \mathcal{A}(Y)-b\bigr). $$

Setting \(G(Y)=0\), we obtain

$$ \bigl(\mu I+\gamma \bigl(\mathcal{A}^{*}\mathcal{A} \bigr) \bigr)Y= \mu X_{k+1}-Z _{k+\frac{1}{2}}+\gamma \mathcal{A}^{*}b, $$
(16)

so the \(Y_{k+1}\) can be expressed as

$$ Y_{k+1}= \bigl(\mu I+\gamma \mathcal{A}^{*} \mathcal{A} \bigr)^{-1} \bigl( \mu X_{k+1}-Z_{k+\frac{1}{2}}+ \gamma \mathcal{A}^{*}b \bigr), $$
(17)

where I is the identity matrix and \(\mathcal{A}^{*}\) is the adjoint of \(\mathcal{A}\). Although this linear system looks easy to solve, solving it directly may be expensive in practice when the scale is large. Thus we apply the linear conjugate gradient method [12] to solve it iteratively. Let \(C=\mu I+\gamma \mathcal{A}^{*}\mathcal{A}\) and \(D_{k}= \mu X_{k+1}-Z_{k+\frac{1}{2}}+\gamma \mathcal{A}^{*}b\). Let \(\widehat{Y}_{0}=Y_{k}\), \(\widehat{R}_{0}=C\widehat{Y}_{0}-D_{k}\) and \(\widehat{P}_{0}=-\widehat{R}_{0}\); then the sequence \(\{\widehat{Y}_{i}\}\) is generated iteratively as follows:

$$ \textstyle\begin{cases} \alpha _{i}= -\frac{\langle \widehat{R}_{i}, \widehat{P}_{i}\rangle }{ \langle \widehat{P}_{i}, C \widehat{P}_{i}\rangle } , \\ \widehat{Y}_{i+1}= \widehat{Y}_{i}+\alpha _{i}\widehat{P}_{i} , \\ \widehat{R}_{i+1}=C\widehat{Y}_{i+1}-D_{k}, \\ \beta _{i+1}= \frac{\langle \widehat{R}_{i+1}, C\widehat{P}_{i}\rangle }{\langle \widehat{P}_{i}, C\widehat{P}_{i}\rangle }, \\ \widehat{P}_{i+1}= -\widehat{R}_{i+1}+\beta _{i+1}\widehat{P}_{i}, \end{cases} $$
(18)

and we set \(Y_{k+1}=\widehat{Y}_{i}\). It is worth mentioning that the linear conjugate gradient method is very efficient for solving linear systems and has good convergence properties; see [26].
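The recursion (18) translates almost line by line into code. In the sketch below, apply_C stands for a matrix-free evaluation of \(C=\mu I+\gamma \mathcal{A}^{*}\mathcal{A}\) (an assumption about how the operator is supplied), and the stopping constants eps and max_iter play the roles of ϵ and ī in Remark 1 below.

```python
import numpy as np

def cg_solve(apply_C, D, Y0, eps=1e-2, max_iter=5):
    """Conjugate gradient recursion (18) for C Y = D, where apply_C(Y)
    evaluates (mu*I + gamma*A^*A)(Y) without forming C explicitly."""
    Y = Y0
    R = apply_C(Y) - D          # residual R_0 = C Y_0 - D
    P = -R                      # first search direction
    for _ in range(max_iter):
        if np.linalg.norm(R) <= eps:
            break
        CP = apply_C(P)
        a = -np.vdot(R, P) / np.vdot(P, CP)       # step length alpha_i
        Y = Y + a * P
        R = apply_C(Y) - D                        # recompute residual
        beta = np.vdot(R, CP) / np.vdot(P, CP)    # conjugacy coefficient beta_{i+1}
        P = -R + beta * P
    return Y
```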

In the last step, we update the Lagrangian multiplier \(Z_{k+1}\) by (13).

To sum up, the extension of the SC-PRSM to problem (3) solves the two subproblems in X and Y in the same way as the IADM-CG. One difference is the additional intermediate update of the multipliers \(Z_{k+\frac{1}{2}}\); the other is the relaxation factor \(\alpha \in (0,1)\), which guarantees the convergence of the SC-PRSM. We now give the iteration scheme of the SC-PRSM for solving (3) as follows:

[Algorithm SC-PRSM for problem (3): given \((Y_{0},Z_{0})\), repeat steps (15), (11), (18) and (13) until convergence, with inner tolerance ϵ and inner iteration bound ī.]
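As a complement to the scheme, the following sketch assembles one outer iteration from the pieces derived above, reusing the svt and cg_solve helpers from the earlier sketches; apply_A and apply_At are assumed handles for \(\mathcal{A}\) and \(\mathcal{A}^{*}\).

```python
def sc_prsm_step(X, Y, Z, apply_A, apply_At, b, mu, gamma, alpha):
    """One outer SC-PRSM iteration for problem (3): steps (15), (11), (18), (13)."""
    X = svt(Y + Z / mu, 1.0 / mu)                 # (15): closed-form X-subproblem
    Z = Z - alpha * mu * (X - Y)                  # (11): intermediate multiplier update
    apply_C = lambda V: mu * V + gamma * apply_At(apply_A(V))
    D = mu * X - Z + gamma * apply_At(b)          # right-hand side of (16)
    Y = cg_solve(apply_C, D, Y)                   # (18): Y-subproblem by CG
    Z = Z - alpha * mu * (X - Y)                  # (13): multiplier update
    return X, Y, Z
```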

Remark 1

In the above scheme, the constants ϵ and ī control the accuracy of \(Y_{k+1}\). In our numerical experiments, we set \(\epsilon =10^{-2}\) and \(\bar{i}=5\) to balance the accuracy and efficiency of the proposed algorithm.

At the end of this section, we state the convergence of the SC-PRSM without proof; one may refer to [13] or [15] for details.

Theorem 1

Let \(\{(X_{k},Y_{k},Z_{k})\}\) be the sequence generated by the SC-PRSM. Then \(\{(X_{k},Y_{k},Z_{k})\}\) converges to some \((X^{*},Y^{*},Z^{*})\).

4 Numerical experiments

In this section, we report numerical results of the SC-PRSM on matrix nuclear norm minimization problems, including matrix completion problems and regularized least squares nuclear norm minimization in both the noisy and the noiseless case. We first explain the notation and parameter settings. The quantities m and n denote the dimensions of the matrix, r is its rank, and p is the number of measurements. Given \(r\leq \min (m,n)\), we generate \(M= M_{L}M_{R}^{T}\), where the matrices \(M_{L}\in \mathbb{R}^{m\times r} \) and \(M_{R}\in \mathbb{R}^{n\times r}\) have independent identically distributed Gaussian entries [12]. The subset Ω of p index pairs is selected uniformly at random from \(\{(i, j): i=1,\ldots , m, j= 1, \ldots , n \}\). The partial discrete cosine transform (DCT) matrix is chosen as the linear map \(\mathcal{A}\); since DCT matrix–vector multiplications can be implemented implicitly by the FFT, the numerical experiments run more efficiently [27]. We take the linear measurements \(b=\mathcal{A}(M)+\omega \), where ω is additive Gaussian noise of zero mean and standard deviation σ, which varies between experiments.

Here \(sr=p/(mn)\) denotes the sampling ratio, and \(dr=r(m+n-r)\) is the number of degrees of freedom of a real-valued rank-r matrix. As mentioned in [28, 29], the problem can be viewed as easy when the ratio \(p/dr\) is greater than 3, and as hard otherwise. Another important ratio for successful recovery of M is \(FR= r(m+n-r)/p\). If \(FR> 1\), recovery is impossible because there are infinitely many matrices X of rank r consistent with the given entries [5]. Therefore FR varies in \((0,1)\) in this paper. In addition, we take \(\mu =2.5/\min (m,n)\), \(\gamma =2^{4}\) and \(\alpha =0.5\).
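The test matrices and the ratios above can be generated as in the following sketch (the seed and sizes are illustrative, and a plain random sample stands in for the partial DCT measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 200; r = 5; sr = 0.5
p = int(sr * m * n)                  # number of measurements
dr = r * (m + n - r)                 # degrees of freedom of a rank-r matrix
FR = dr / p                          # must stay below 1 for recoverability

# M = M_L M_R^T with i.i.d. Gaussian factors, as in [12].
M_L = rng.standard_normal((m, r))
M_R = rng.standard_normal((n, r))
M = M_L @ M_R.T

print(f"p/dr = {p / dr:.2f} (easy if > 3), FR = {FR:.2f}")
```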

In all tests, the optimal solution produced by the proposed method is denoted by \(X^{*}\). The relative error is used to measure the quality of \(X^{*}\) with respect to the original M, i.e.

$$ \mathrm{RelErr}= \frac{ \Vert X^{*}-M \Vert _{F}}{ \Vert M \Vert _{F}}. $$
(19)

If the corresponding RelErr is less than 10−3, M is regarded as successfully recovered by \(X^{*}\), as in [5, 7]. In all tests, we take \(\mathrm{RelErr}= 10^{-4}\) as the stopping criterion. In addition, we apply the same technique for computing the matrix singular value decomposition (SVD) as in [10, 12, 27, 30].
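For completeness, the stopping quantity (19) is simply a ratio of Frobenius norms:

```python
import numpy as np

def rel_err(X_star, M):
    """Relative error (19) between the computed solution and the original M."""
    return np.linalg.norm(X_star - M, 'fro') / np.linalg.norm(M, 'fro')
```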

All numerical experiments were performed under Windows 7 Premium and MATLAB v7.8 (2009a) on a Lenovo laptop with an Intel Core CPU at 2.4 GHz and 2 GB of memory.

4.1 Tests on the nuclear norm minimization problems

In this subsection, we solve the problem (3) in different settings to illustrate the efficiency of the SC-PRSM.

In the first test, we apply the SC-PRSM to the problem (3) and report the relative error and the estimated rank; see Fig. 1. Here we set \(m= n= 1000\), \(r= 50\), \(sr= 0.5\). Observing Fig. 1, the estimated rank is close to the true rank by the third iteration, and the relative error falls below the given tolerance in fewer than 20 iterations. So we conclude that the SC-PRSM is efficient for nuclear norm minimization.

Figure 1: The convergence result of the SC-PRSM (\(m= n= 1000\), \(r= 50\), \(sr= 0.5\)). Upper panel: the rank estimate. Lower panel: the relative error between the optimal solution and the original low rank matrix.

In the second experiment, we compare the SC-PRSM with the IADM-CG [12] for solving the problem (3) with \(m=n= 500\), \(r=50\), \(sr= 0.5\); the results are shown in Fig. 2. From Fig. 2, we see that the SC-PRSM needs less running time than the IADM-CG and attains better solution accuracy. Although the SC-PRSM updates the multiplier twice at each iteration, this improves the accuracy of each iterate produced by the primal algorithm. Most of the running time is spent on the SVD of the matrix, so the improvement of the SC-PRSM is noticeable but not dramatic. In the next test, we compare the SC-PRSM with the IADM-CG, IADM_BB [10] and IADM_NNLS [11] for nuclear norm minimization with different settings; the results are displayed in Table 1. Observing Table 1, the SC-PRSM and the IADM-CG need less computing time than the IADM_BB for the same solution accuracy. Compared with the IADM_NNLS, the SC-PRSM needs more running time but solves the case \(sr= 0.8\) successfully, whereas the IADM_NNLS cannot attain high solution accuracy within the given maximum number of iterations. So, in this sense, the SC-PRSM is more efficient.

Figure 2: SC-PRSM and IADM-CG for solving the nuclear norm minimization (\(m=n= 500\), \(r=50\), \(sr= 0.5\)).

Table 1 SC-PRSM, IADM-CG, IADM_BB, IADM_NNLS for solving the easy nuclear norm minimization, \(m= n\)

In the fourth test, we compare the SC-PRSM with the IADM-CG, IADM_BB and IADM_NNLS on hard instances of nuclear norm minimization. From Table 2, it is easy to see that the SC-PRSM is more efficient than the IADM_NNLS and the IADM_BB in terms of running time and solution accuracy, and it needs slightly less running time than the IADM-CG for similar solution accuracy. These limited numerical tests illustrate that the SC-PRSM is promising and efficient for solving hard nuclear norm minimization problems.

Table 2 SC-PRSM, IADM-CG, IADM_BB, IADM_NNLS for solving the hard nuclear norm minimization, \(m= n\)

In the last numerical test of this subsection, we apply the SC-PRSM to nuclear norm minimization with different noise levels. We set \(m=n= 200\) and \(\sigma = 10^{-1}, 10^{-2}, 10^{-4}\), respectively. It can be observed from Fig. 3 that the SC-PRSM handles the problem with \(\sigma =0.1\) successfully. As expected, the lower the noise level, the easier the problem: when σ is 10−4, the solution accuracy reaches 10−4. This test illustrates that the SC-PRSM is efficient for solving low rank minimization problems with Gaussian noise.

Figure 3: SC-PRSM for the nuclear norm minimization with noise (\(m=n= 200\), \(r=5\), \(sr= 0.5\)); σ is 1e–1, 1e–2 and 1e–4, respectively.

4.2 Tests on the matrix completion problems

In this subsection, we apply the SC-PRSM to matrix completion problems to further verify the proposed method. The matrix completion problem can be reformulated as follows:

$$ \min_{X\in \mathbb{R}^{m\times n}} \Vert X \Vert _{*}+ \frac{\gamma }{2} \sum_{(i,j)\in \varOmega } \vert X_{i,j}- M_{i,j} \vert ^{2} . $$
(20)
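Note that for model (20) the linear map reduces to a sampling operator, so \(\mathcal{A}^{*}\mathcal{A}\) merely zeroes the unobserved entries and the system (16) becomes diagonal; the Y-subproblem then has the entrywise closed form sketched below (Omega is a boolean mask of observed entries, our notation), and no inner CG iteration is needed in this special case.

```python
import numpy as np

def solve_Y_completion(X, Z, M, Omega, mu, gamma):
    """Y-subproblem of (20): A^*A zeroes the unobserved entries, so
    C = mu*I + gamma*A^*A acts entrywise and (16) solves in closed form."""
    D = mu * X - Z + gamma * np.where(Omega, M, 0.0)   # right-hand side of (16)
    return D / (mu + gamma * Omega)                    # entrywise division
```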

Firstly, we use the SC-PRSM, IADM_NNLS, IADM_BB and IADM-CG to solve low rank matrix completion problems in the noiseless case. In this test, the SC-PRSM, IADM-CG and IADM_BB are terminated when the tolerance reaches 1e–3. For the IADM_NNLS, all parameters take their default values except \(\mathrm{opts}.\mathrm{tol}\_\mathrm{relchg} = 1\mathrm{e}\text{--}3\).

Observing Table 3, the SC-PRSM outperforms the IADM-CG and the IADM_BB in terms of running time and solution accuracy, but it needs more running time than the IADM_NNLS. From the last line of Table 3, we see that the SC-PRSM is more efficient than the others for solving hard problems. Thus, within these limited tests, the SC-PRSM is more efficient than the IADM-CG and the IADM_BB and comparable with the IADM_NNLS.

Table 3 SC-PRSM, IADM-CG, IADM_BB and IADM_NNLS for solving the matrix completion problems, \(m= n, sr= 0.5\)

In the last test, we apply the SC-PRSM to recover corrupted images; the results are shown in Fig. 4. The dimensions of “boat” and “pentagon” are \(512\times 512\) and \(450\times 450\), respectively. Firstly, we apply the singular value decomposition to the original image (a) to obtain a low rank image (b) of rank 40. We then keep 50% of the entries as known observations to obtain the corrupted image (c). Finally, we use the SC-PRSM to recover (c), obtaining the completed low rank image (d). Comparing (b) with (d), we find that the SC-PRSM recovers the corrupted low rank image successfully, which further illustrates its practicability.

Figure 4: (a) Original image; (b) low rank image with \(r=40\); (c) the corrupted image with \(sr=50\%\); (d) the recovered image by SC-PRSM.

5 Conclusions

In this paper, we proposed an SC-PRSM method based on the IADM-CG for solving the nuclear norm minimization problem, extending the SC-PRSM to the recovery of corrupted low rank matrices. The classical ADMM solves the X-subproblem and the Y-subproblem in order and then updates the Lagrange multiplier, whereas the SC-PRSM updates the Lagrange multiplier once after each of the X- and Y-subproblems. We introduced the relaxation factor α into the multiplier updates to improve the convergence of the proposed method. Numerical results illustrate that the SC-PRSM solves nuclear norm minimization problems successfully, and the final test on recovering corrupted low rank images further demonstrates its efficiency and practicability.

References

  1. Srebro, N.: Learning with matrix factorizations. PhD thesis, Massachusetts Institute of Technology (2004)

  2. Goldberg, K., Roeder, T., Gupta, D., Perkins, C.: Eigentaste: a constant time collaborative filtering algorithm. Inf. Retr. 4(2), 133–151 (2001)

  3. Spellman, P.T., Sherlock, G., Zhang, M.Q., Iyer, V.R., Anders, K., Eisen, M.B., Brown, P.O., Botstein, D., Futcher, B.: Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol. Biol. Cell 9(12), 3273–3297 (1998)

  4. Netflix prize website. http://www.netflixprize.com

  5. Ma, S., Goldfarb, D., Chen, L.: Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program. 128(1–2), 321–353 (2011)

  6. Tütüncü, R.H., Toh, K.-C., Todd, M.J.: Solving semidefinite-quadratic-linear programs using SDPT3. Math. Program. 95(2), 189–217 (2003)

  7. Cai, J.-F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)

  8. Liu, Y.-J., Sun, D., Toh, K.-C.: An implementable proximal point algorithmic framework for nuclear norm minimization. Math. Program. 133(1–2), 399–436 (2012)

  9. Toh, K.-C., Yun, S.: An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac. J. Optim. 6(3), 615–640 (2010)

  10. Xiao, Y.-H., Jin, Z.-F.: An alternating direction method for linear-constrained matrix nuclear norm minimization. Numer. Linear Algebra Appl. 19(3), 541–554 (2012)

  11. Yang, J., Yuan, X.: Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Math. Comput. 82(281), 301–329 (2013)

  12. Jin, Z.-F., Wang, Q., Wan, Z.: Recovering low-rank matrices from corrupted observations via the linear conjugate gradient algorithm. J. Comput. Appl. Math. 256, 114–120 (2014)

  13. He, B., Liu, H., Wang, Z., Yuan, X.: A strictly contractive Peaceman–Rachford splitting method for convex programming. SIAM J. Optim. 24, 1011–1040 (2014)

  14. Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press, San Diego (1982)

  15. Li, X., Yuan, X.: A proximal strictly contractive Peaceman–Rachford splitting method for convex programming with applications to imaging. SIAM J. Imaging Sci. 8, 1332–1365 (2015)

  16. Gu, Y., Jiang, B., Han, D.: A semi-proximal-based strictly contractive Peaceman–Rachford splitting method (2015). arXiv preprint arXiv:1506.02221

  17. Li, M., Yuan, X.: A strictly contractive Peaceman–Rachford splitting method with logarithmic-quadratic proximal regularization for convex programming. Math. Oper. Res. 40, 842–858 (2015)

  18. Sun, M., Liu, J.: A proximal Peaceman–Rachford splitting method for compressive sensing. J. Appl. Math. Comput. 50, 349–363 (2016)

  19. Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 2, 17–40 (1976)

  20. Glowinski, R., Marrocco, A.: Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité, d’une classe de problèmes de Dirichlet non linéaires. Rev. Fr. Autom. Inform. Rech. Opér. 9, 41–76 (1975)

  21. Gabay, D.: Applications of the method of multipliers to variational inequalities. In: Fortin, M., Glowinski, R. (eds.) Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems, pp. 299–331. North-Holland, Amsterdam (1983)

  22. He, B., Yuan, X.: On the \(O(1/n)\) convergence rate of the Douglas–Rachford alternating direction method. SIAM J. Numer. Anal. 50, 700–709 (2012)

  23. Boyd, S.P., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3, 1–122 (2011)

  24. Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York (1982)

  25. Glowinski, R., Le Tallec, P.: Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics. SIAM, Philadelphia (1989)

  26. Kelley, C.T.: Iterative Methods for Linear and Nonlinear Equations. SIAM, Philadelphia (1995)

  27. Jin, Z.-F., Wan, Z., Jiao, Y., Lu, X.: An alternating direction method with continuation for nonconvex low rank minimization. J. Sci. Comput. 66(2), 849–869 (2016)

  28. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)

  29. Malek-Mohammadi, M., Babaie-Zadeh, M., Amini, A., Jutten, C.: Recovery of low-rank matrices under affine constraints via a smoothed rank function. IEEE Trans. Signal Process. 62(4), 981–992 (2014)

  30. Jin, Z.-F., Wan, Z., Zhao, X., Xiao, Y.: A penalty decomposition method for rank minimization problem with affine constraints. Appl. Math. Model. 39, 4859–4870 (2015)


Acknowledgements

Not applicable.

Funding

The work of Z. Wan is supported by the National Natural Science Foundation of China Grant No. 11871383. The work of Z. Jin is supported by the Plan for Scientific Innovation Talent of Henan Province Grant No. 174200510011. The work of Z. Zhang is supported by the Program for Innovative Research Team (in Science and Technology) in University of Henan Province Grant No. 15IRTSTHN010.

Author information


Contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Zhongping Wan.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Jin, ZF., Wan, Z. & Zhang, Z. Strictly contractive Peaceman–Rachford splitting method to recover the corrupted low rank matrix. J Inequal Appl 2019, 147 (2019). https://doi.org/10.1186/s13660-019-2091-x
