Research | Open Access
A semismoothing augmented Lagrange multiplier algorithm for low-rank Toeplitz matrix completion
Journal of Inequalities and Applications volume 2019, Article number: 83 (2019)
Abstract
The smoothing augmented Lagrange multiplier (SALM) algorithm is a generalization of the augmented Lagrange multiplier algorithm for completing a Toeplitz matrix; it saves the computational cost of the singular value decomposition (SVD) and approximates the solution well. However, the communication of numerous data at each iteration step is computationally demanding. In this paper, we propose an accelerated scheme for the SALM algorithm applied to Toeplitz matrix completion (TMC), which reduces the extra load caused by data communication while retaining reasonable smoothing. The result is a semismoothing augmented Lagrange multiplier (SSALM) algorithm. Meanwhile, we establish the convergence theory of the new algorithm. Finally, numerical experiments show that the new algorithm is more effective and economical than the original algorithm.
Introduction
Completing a low-rank matrix from a subset of its entries, a problem first introduced in [8], has recently attracted intense interest and arises in a wide variety of practical contexts across engineering and computational science, such as model reduction [19], machine learning [1, 2], control [22], pattern recognition [12], image inpainting [3], video denoising [16], computer vision [28], and so on. Although matrix completion (MC) requires the global solution of a nonconvex objective, there are many computationally efficient algorithms that are effective for a broad class of matrices. The problem has received intensive research from both theoretical and algorithmic aspects; see, e.g., [4,5,6,7,8,9,10,11, 13,14,15,16,17,18, 21, 23, 26, 27, 29,30,31,32, 34, 35], and the references therein for a partial review. It is well known that the mathematical model of the MC problem is of the following form:
where the matrix \(M\in \mathbb{R}^{m\times n}\) is an underlying matrix to be completed, Ω is a random subset of indices for the available entries, and \(\mathcal{P}_{\varOmega }\) is the associated sampling orthogonal projection operator which acquires only the entries indexed by \(\varOmega \subset \{1,2,\ldots ,m\}\times \{1,2,\ldots ,n\}\).
In current MC problems, the matrix M generally has special structure. Therefore, much attention has been paid to the completion of Toeplitz and Hankel matrices in recent years [20, 24, 25, 30, 32]. Many scholars have conducted in-depth research on the special structure, properties, and applications of Toeplitz and Hankel matrices; for example, nuclear norm minimization for the low-rank Hankel matrix reconstruction problem under the random Gaussian sampling model is investigated in [7]. In addition, Hankel matrix reconstruction in the sense of minimizing the nuclear norm with nonuniform sampling of entries is studied in [11]. To make full use of the special structure of a Toeplitz matrix, a mean value algorithm is presented in [30]; the modified augmented Lagrange multiplier (MALM) algorithm [31] and the smoothing augmented Lagrange multiplier (SALM) algorithm [34] have also been proposed. Hence, Toeplitz matrix completion (TMC) is one of the most important MC problems and has attracted a large amount of attention recently. As is well known, an \(n\times n\) Toeplitz matrix is of the following form:
which is determined by \(2n-1\) entries, say, the first row and the first column. Explicitly seeking the lowest-rank Toeplitz matrix consistent with the known entries is mathematically formulated as
where “∘” is the Hadamard product, \(H=(H_{ij})\in \mathbb{R}^{n\times n}\) is the weighted matrix with entries \(H_{ij}=1\) for \(j-i\in \varOmega \subset \{-n+1,\ldots ,n-1\}\) and \(H_{ij}=0\) for any other \((i,j)\), and \(M=(M_{ij})\in \mathbb{T}^{n\times n}\) is the underlying Toeplitz matrix to be completed, namely \(M_{ij}=0\) for \(j-i\notin \varOmega \).
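As an illustration, the weighted matrix H of this model can be built directly from the set Ω of observed diagonal offsets; the sketch below (our own Python/NumPy rendering, with helper names of our choosing, not code from the paper) constructs H and applies the Hadamard product to a small Toeplitz matrix.

```python
import numpy as np

def diagonal_mask(n, omega):
    """Weighted matrix H: H[i, j] = 1 iff the diagonal
    offset j - i belongs to the observed index set omega."""
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j - i in omega:
                H[i, j] = 1.0
    return H

# Example: a 4x4 Toeplitz matrix whose diagonal alpha holds the value
# alpha + 5, with only the diagonals -1, 0, and 2 observed.
n, omega = 4, {-1, 0, 2}
M = sum(np.diag(np.full(n - abs(a), a + 5.0), a) for a in range(-n + 1, n))
D = diagonal_mask(n, omega) * M  # Hadamard product H ∘ M keeps observed diagonals
```

Only the three observed diagonals of M survive in D; the rest are zeroed out, exactly as \(\mathcal{P}_{\varOmega }\) prescribes.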
The SALM algorithm switches the iteration matrix into Toeplitz structure at each iteration step by a smoothing operator, which saves the computational cost of the singular value decomposition and approximates the solution well. Unfortunately, numerous data have to be shifted at each iteration step when implementing this algorithm, and moving data incurs a cost that can be substantial. The control of memory traffic is crucial to performance on many computers.
These factors motivated us to reduce the data traffic, resulting in a semismoothing augmented Lagrange multiplier (SSALM) algorithm based on the technique of selecting the optimal parameter \(\omega ^{(k)}\) at each of the five iteration steps in [33]. Compared with the SALM algorithm, the new algorithm either saves computational cost or reduces data communication; taking both aspects into account yields a more practical and economical implementation. The new algorithm not only overcomes the slowness of the SVD in the original ALM algorithm, but also reduces the heavy data communication of the SALM algorithm. The numerical experiments show that the CPU time of the SSALM algorithm is reduced by up to 30.44%.
The rest of this paper is organized as follows. Some preliminaries are provided in Sect. 2. Section 3 presents the semismoothing augmented Lagrange multiplier (SSALM) algorithm after giving an outline of the ALM algorithm, the dual approach, and the SALM algorithm. The convergence property of the SSALM algorithm is established in Sect. 4. We report numerical results indicating the effectiveness of the SSALM algorithm in Sect. 5. Finally, we end the paper with concluding remarks in Sect. 6.
Preliminaries
This section is devoted to the necessary notation and preliminaries. \(\mathbb{R}^{m\times n}\) denotes the set of \(m\times n\) real matrices, and \(\mathbb{T}^{n\times n}\) is the set of \(n\times n\) real Toeplitz matrices. The nuclear norm of a matrix A is denoted by \(\|A\|_{\ast }\), and the Frobenius norm by \(\|A\|_{F}=(\sum_{i,j}a_{ij}^{2})^{1/2}\). \(A^{T}\) denotes the transpose of a matrix \(A\in \mathbb{R}^{n\times n}\), \(\operatorname{rank}(A)\) is the rank of A, and \(\operatorname{tr}(A)\) represents the trace of A. The standard inner product between two matrices is denoted by \(\langle X,Y\rangle = \operatorname{tr}(X^{T}Y)\). For \(A=(a_{ij}) \in \mathbb{R}^{m\times n}\) and \(B=(b_{ij})\in \mathbb{R}^{m\times n}\), their Hadamard product \(A\circ B\) is the \(m\times n\) matrix whose \((i,j)\) entry is \(a_{ij}b_{ij}\), i.e., \(A\circ B=(a_{ij}b_{ij})\in \mathbb{R}^{m\times n}\).
The singular value decomposition (SVD) of a matrix \(A\in \mathbb{R}^{m\times n}\) of rank r is defined by
where \(U\in \mathbb{R}^{m\times r}\) and \(V \in \mathbb{R}^{n\times r}\) are column orthonormal matrices, and \(\sigma _{1} \geq \sigma _{2} \geq \cdots \geq \sigma _{r}>0\).
Definition 2.1
(Singular value thresholding operator [6])
For each \(\tau \geq 0\), the singular value thresholding operator \(\mathcal{D}_{\tau }\) is defined as follows:
where
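Concretely, \(\mathcal{D}_{\tau }\) shrinks every singular value by τ and truncates at zero. A minimal sketch of this operator (our own rendering with a dense SVD, not the authors' code):

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding operator D_tau of [6]:
    shrink each singular value by tau and clamp at zero."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

For example, applying `svt` with \(\tau =2\) to a matrix with singular values 3 and 1 leaves singular values 1 and 0, reducing the rank by one.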
\(I_{n}=(e_{1},e_{2},\ldots ,e_{n})\in \mathbb{R}^{n\times n}\) denotes the \(n\times n\) identity matrix and \(Z_{n}=(e_{2},e_{3},\ldots ,e_{n},0) \in \mathbb{R}^{n\times n}\) is called the shift matrix. It is clear that
where “O” stands for the zero matrix. Thus, a Toeplitz matrix \(T \in \mathbb{T}^{n\times n}\), shown in (1.2), can be written as a linear combination of these shift matrices, that is,
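This linear-combination identity, \(T=\sum_{l=1}^{n-1}a_{-l}Z_{n}^{l}+\sum_{l=0}^{n-1}a_{l}(Z_{n}^{T})^{l}\), can be checked numerically; the sketch below (illustrative Python with a helper name of our own) rebuilds a Toeplitz matrix from its \(2n-1\) diagonal values via powers of the shift matrix.

```python
import numpy as np

def toeplitz_from_diagonals(a, n):
    """Rebuild the n x n Toeplitz matrix T with T[i, j] = a[j - i]
    from its 2n-1 diagonal values a[-(n-1)], ..., a[n-1]."""
    Z = np.eye(n, k=-1)                 # Z_n: ones on the first subdiagonal
    T = np.zeros((n, n))
    for l in range(1, n):               # subdiagonal part: a_{-l} * Z^l
        T += a[-l] * np.linalg.matrix_power(Z, l)
    for l in range(n):                  # diagonal/superdiagonal part: a_l * (Z^T)^l
        T += a[l] * np.linalg.matrix_power(Z.T, l)
    return T
```

Since \(Z_{n}^{l}\) has its ones on the lth subdiagonal (offset \(-l\)) and \((Z_{n}^{T})^{l}\) on the lth superdiagonal (offset \(l\)), each diagonal value lands exactly once.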
\(\varOmega \subset \{-n+1,\ldots ,n-1\}\) is an index set of observed diagonals of a Toeplitz matrix \(M\in \mathbb{T}^{n\times n}\), and Ω̄ is the complementary set of Ω. For any Toeplitz matrix \(A\in \mathbb{T}^{n\times n}\), the vector \(\operatorname{vec}(A,\alpha )\) denotes the αth diagonal of A, \(\alpha =-n+1, -n+2,\ldots ,n-1\), that is to say,
Definition 2.2
(Toeplitz structure smoothing operator [34])
For any matrix \(A=(a_{ij})\in \mathbb{R}^{n\times n}\), the Toeplitz structure smoothing operator \(\mathcal{T}\) is defined as follows:
where \(\tilde{a}_{\alpha }=\frac{{}_{\alpha }A^{\min }+{}_{\alpha }A^{\max }}{2}\), \(\alpha =-n+1,-n+2,\ldots , n-1\), with
It is clear that \(\mathcal{T}(A)\) is a Toeplitz matrix derived from the matrix A. Namely, any \(A\in \mathbb{R}^{n\times n}\) can be changed into a Toeplitz structure via the smoothing operator \(\mathcal{T}( \cdot )\).
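In other words, \(\mathcal{T}\) replaces every diagonal of A by the midpoint between that diagonal's smallest and largest entries. A small sketch of Definition 2.2 (our own Python rendering, not the authors' implementation):

```python
import numpy as np

def toeplitz_smoothing(A):
    """Operator T of Definition 2.2: replace each diagonal of A by
    (min + max) / 2 of its entries, producing a Toeplitz matrix."""
    n = A.shape[0]
    T = np.zeros((n, n))
    for alpha in range(-n + 1, n):
        d = np.diagonal(A, alpha)                    # the alpha-th diagonal
        mid = (d.min() + d.max()) / 2.0              # tilde-a_alpha
        T += np.diag(np.full(n - abs(alpha), mid), alpha)
    return T
```

Note that if A is already Toeplitz, every diagonal is constant (so min = max) and \(\mathcal{T}(A)=A\), i.e., \(\mathcal{T}\) acts as the identity on \(\mathbb{T}^{n\times n}\).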
Algorithms
First of all in this section, for completeness as well as for the purpose of comparison, we briefly review some related algorithms for approximately minimizing the nuclear norm of a matrix under convex constraints.
Since the matrix completion problem is closely connected to the robust principal component analysis (RPCA) problem, it can be formulated in the same way as RPCA; an equivalent form of problem (1.1) can be considered as follows.
Suppose that the given data are arranged as the columns of a large matrix \(M\in \mathbb{R}^{m\times n}\). The mathematical model for estimating the low-dimensional subspace is to find a low-rank matrix \(A\in \mathbb{R}^{m\times n}\) such that the discrepancy between A and M is minimized, leading to the following constrained optimization:
where E compensates for the unknown entries of M, the unknown entries of \(M\in \mathbb{R}^{m\times n}\) are simply set to zeros, and \(\mathcal{P}_{\varOmega }: \mathbb{R}^{m\times n}\rightarrow \mathbb{R}^{m\times n}\) is a linear operator that keeps the entries in Ω unchanged and sets those outside Ω (say, in Ω̄) to zero.
The dual algorithm
The dual algorithm proposed in [10] tackles problem (3.1) via its dual. That is, one first solves the dual problem
for the optimal Lagrange multiplier Y, where
A steepest ascent algorithm constrained on the surface \(\{Y\mid J(Y)=1\}\) can be adopted to solve (3.2), where the constrained steepest ascent direction is obtained by projecting M onto the tangent cone of the convex body \(\{Y\mid J(Y)\leq 1\}\). It turns out that the optimal solution to the primal problem (3.1) can be obtained during the process of finding the constrained steepest ascent direction.
The augmented Lagrange multiplier algorithm
The augmented Lagrange multiplier (ALM) algorithm was proposed in [18] for solving the convex optimization problem (1.1). We describe it below.
Recall that the partial augmented Lagrangian function of problem (3.1) is
Hence, the augmented Lagrange multiplier algorithm is designed as follows.
Algorithm 3.1
([18])
Given a sampled set Ω, a sampled matrix \(D=\mathcal{P}_{\varOmega }(M)\), \(\mu _{0}>0\), \(\rho >1\). Given also two initial matrices \(Y_{0}=0\), \(E_{0}=0\). \(k:=0\).

1.
Compute the SVD of the matrix \((D-E_{k}+\mu _{k}^{-1}Y_{k})\):
$$ [U_{k},\varSigma _{k},V_{k}]=\operatorname{svd}\bigl(D-E_{k}+\mu _{k}^{-1}Y_{k}\bigr); $$ 
2.
Set
$$ A_{k+1}=U_{k}\mathcal{D}_{\mu _{k}^{-1}}(\varSigma _{k})V_{k}^{T}, $$solve \(E_{k+1}=\arg \min_{\mathcal{P}_{\varOmega }(E)=0} \mathcal{L}(A_{k+1},E,Y_{k},\mu _{k})\),
$$ E_{k+1}=\mathcal{P}_{\bar{\varOmega }}\bigl(D-A_{k+1}+\mu _{k}^{-1}Y_{k}\bigr); $$ 
3.
If \(\|D-A_{k+1}-E_{k+1}\|_{F}/\|D\|_{F}<\epsilon _{1}\) and \(\mu _{k}\|E_{k+1}-E_{k}\|_{F}/\|D\|_{F}<\epsilon _{2}\), stop; otherwise, go to the next step;

4.
Set \(Y_{k+1}=Y_{k}+\mu _{k}(D-A_{k+1}-E_{k+1})\).
If \(\mu _{k}\|E_{k+1}-E_{k}\|_{F}/\|D\|_{F}<\epsilon _{2}\), set \(\mu _{k+1}=\rho \mu _{k}\); otherwise, go to Step 1.
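The four steps above can be sketched as follows. This is a minimal dense-SVD prototype under our own parameter defaults, not the authors' implementation; `mask` marks the observed entries and `D` is the sampled matrix with zeros elsewhere.

```python
import numpy as np

def svt(A, tau):
    """Singular value shrinkage D_tau used in Step 2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def alm_completion(D, mask, mu=1.0, rho=1.5, eps1=1e-7, eps2=1e-5, max_iter=500):
    """ALM iteration of Algorithm 3.1 (illustrative defaults)."""
    Y = np.zeros_like(D)
    E = np.zeros_like(D)
    nD = np.linalg.norm(D)
    A = D.copy()
    for _ in range(max_iter):
        A = svt(D - E + Y / mu, 1.0 / mu)            # Steps 1-2: shrink the SVD
        E_new = np.where(mask, 0.0, D - A + Y / mu)  # Step 2: E lives off the sample set
        done1 = np.linalg.norm(D - A - E_new) / nD < eps1
        done2 = mu * np.linalg.norm(E_new - E) / nD < eps2
        Y = Y + mu * (D - A - E_new)                 # Step 4: multiplier update
        if done2:
            mu *= rho                                # grow the penalty parameter
        E = E_new
        if done1 and done2:
            break                                    # Step 3: stopping test
    return A
```

On a fully observed low-rank matrix the iteration reduces to repeated shrinkage with a growing penalty, and the iterate converges to D itself.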
Note that the ALM algorithm performs better, both in theory and in practice, than other algorithms, with a globally Q-linear convergence speed; it achieves better numerical behavior and higher accuracy. The ALM algorithm has been applied to the MC problem (1.1). The algorithm, however, has the usual disadvantage of penalty methods, namely the matrix sequence \(\{A_{k}\}\) generated by the ALM algorithm consists of acceptable solutions that need not be feasible.
The smoothing augmented Lagrange multiplier algorithm
In this subsection, we recall a stepped-up scheme for the TMC problem. The smoothing augmented Lagrange multiplier (SALM) approach employs the smoothing operator \(\mathcal{T}\) (see (2.1)) to approximate the matrix generated at the kth iteration so that the current approximation has Toeplitz structure.
Then our problem can be expressed as the following convex programming:
where \(M\in \mathbb{T}^{n\times n}\) is a real Toeplitz matrix, and \(\varOmega \subset \{-n+1,\ldots ,n-1\}\).
Let \(D=\mathcal{P}_{\varOmega }(M)\). Then the partial augmented Lagrangian function is
where \(Y\in \mathbb{R}^{n\times n}\).
Algorithm 3.2
([34])
Given a sampled set Ω, a sampled matrix D, \(\mu _{0}>0\), \(\rho >1\). Given also two initial matrices \(Y_{0}=0\), \(E_{0}=0\). \(k:=0\).

1.
Compute the SVD of the matrix \((D-E_{k}+\mu _{k}^{-1}Y_{k})\) using the Lanczos method
$$ [U_{k},\varSigma _{k},V_{k}]=\operatorname{lansvd}\bigl(D-E_{k}+\mu _{k}^{-1}Y_{k}\bigr); $$ 
2.
Set
$$ X_{k+1}=U_{k}\mathcal{D}_{\mu _{k}^{-1}}(\varSigma _{k})V_{k}^{T}, $$compute the smoothing factors \(\tilde{a}_{\alpha }=\frac{{}_{\alpha }X_{k+1}^{\min }+{}_{\alpha }X_{k+1}^{\max }}{2}\), \(\alpha \in \{-n+1,-n+2,\ldots ,n-1\}\), and
$$\begin{aligned}& A_{k+1}= \mathcal{T}(X_{k+1})=\sum _{l=1}^{n-1}\tilde{a}_{-l}Z_{n}^{l}+\sum_{l=0}^{n-1} \tilde{a}_{l}\bigl(Z_{n}^{T}\bigr)^{l}, \\& E_{k+1}=\mathcal{P}_{\bar{\varOmega }}\bigl(D-A_{k+1}+\mu _{k}^{-1}Y_{k}\bigr); \end{aligned}$$ 
3.
If \(\|D-A_{k+1}-E_{k+1}\|_{F}/\|D\|_{F}<\epsilon _{1}\) and \(\mu _{k}\|E_{k+1}-E_{k}\|_{F}/\|D\|_{F}<\epsilon _{2}\), stop; otherwise, go to the next step;

4.
Set \(Y_{k+1}=Y_{k}+\mu _{k}(D-A_{k+1}-E_{k+1})\).
If \(\mu _{k}\|E_{k+1}-E_{k}\|_{F}/\|D\|_{F}<\epsilon _{2}\), set \(\mu _{k+1}=\rho \mu _{k}\); otherwise, go to Step 1.
It is reported that the convergence speed of the SALM algorithm is faster than that of the ALM and APG algorithms. A merit of smoothing is that a fast SVD procedure can be utilized to reduce the computation.
As we know, however, the SVD time is saved at the expense of data communication, and sometimes this trade is not worthwhile. This motivated us to come up with the following algorithm.
The semismoothing augmented Lagrange multiplier algorithm
In this subsection, we propose a semismoothing augmented Lagrange multiplier algorithm based on the ALM and SALM algorithms for the TMC problem. The new algorithm consists of two stages: the first performs \(\ell -1\) iterations of the ALM scheme, which involve no data movement; the second performs the ℓth-step smoothing of the SALM procedure, which restores the Toeplitz structure of the iteration matrix.
Now, the semismoothing augmented Lagrange multiplier (SSALM) algorithm will be presented in the following.
Algorithm 3.3
(SSALM algorithm)
Input: A sampled set Ω, a sampled matrix D, \(Y_{0,0}=0\), \(E_{0,0}=0\); parameters \(\mu _{0}>0\), \(\rho >1\), \(\ell >1\), \(\epsilon _{1}\), \(\epsilon _{2}\).
Let \(k:=0\), \(q:=1\), \(q=1,2,\ldots ,\ell -1\).
Repeat:

1.
\(\ell -1\) iterations.
 (1) :

Compute the SVD of the matrix \((D-E_{k,q}+\mu _{k,q}^{-1}Y_{k,q})\):
$$ [U_{k,q},\varSigma _{k,q},V_{k,q}]=\operatorname{lansvd}\bigl(D-E_{k,q}+\mu _{k,q}^{-1}Y_{k,q} \bigr); $$  (2) :

Set
$$\begin{aligned}& X_{k+1,q+1}=U_{k,q}\mathcal{D}_{\mu _{k,q}^{-1}}(\varSigma _{k,q})V_{k,q}^{T}, \\& E_{k+1,q+1}=\mathcal{P}_{\bar{\varOmega }}\bigl(D-X_{k+1,q+1}+\mu _{k,q}^{-1}Y_{k,q}\bigr); \end{aligned}$$  (3) :

If \(\|D-X_{k+1,q+1}-E_{k+1,q+1}\|_{F}/\|D\|_{F}<\epsilon _{1}\) and \(\mu _{k,q}\|E_{k+1,q+1}-E_{k,q}\|_{F}/\|D\|_{F}<\epsilon _{2}\), stop; otherwise, go to the next step;
 (4) :

Set \(Y_{k+1,q+1}=Y_{k,q}+\mu _{k,q}(D-X_{k+1,q+1}-E_{k+1,q+1})\), \(\mu _{k+1,q+1}=\rho \mu _{k,q}\), and go to step (1);

2.
ℓth smoothing.
 (1) :

Compute the SVD of the matrix \((D-E_{k,\ell }+\mu _{k,\ell }^{-1}Y_{k,\ell })\):
$$ [U_{k,\ell },\varSigma _{k,\ell },V_{k,\ell }]=\operatorname{svd}\bigl(D-E_{k,\ell }+\mu _{k,\ell }^{-1}Y_{k,\ell }\bigr); $$  (2) :

Set
$$ X_{k+1,\ell }=U_{k,\ell }\mathcal{D}_{\mu _{k,\ell }^{-1}}(\varSigma _{k,\ell })V_{k,\ell }^{T}, $$compute the factors of smoothing \(\tilde{a}_{\alpha }=\frac{{}_{\alpha }X_{k+1,\ell }^{\min }+{}_{\alpha }X_{k+1,\ell }^{\max }}{2}\), \(\alpha =-n+1,-n+2,\ldots , n-1\),
$$ A_{k+1,\ell }= \mathcal{T}(X_{k+1,\ell })=\sum _{l=1}^{n-1}\tilde{a}_{-l}Z_{n}^{l}+ \sum_{l=0}^{n-1}\tilde{a}_{l} \bigl(Z_{n}^{T}\bigr)^{l}. $$Update
$$ E_{k+1,\ell }=\mathcal{P}_{\bar{\varOmega }}\bigl(D-A_{k+1,\ell }+\mu _{k,\ell }^{-1}Y_{k+1,\ell }\bigr); $$ 
3.
If \(\|D-A_{k+1,\ell }-E_{k+1,\ell }\|_{F}/\|D\|_{F}<\epsilon _{1}\) and \(\mu _{k,\ell }\|E_{k+1,\ell }-E_{k,\ell }\|_{F}/\|D\|_{F}< \epsilon _{2}\), stop; otherwise, go to the next step;

4.
Set \(Y_{k+1,q+1}=Y_{k,q}+\mu _{k,q}(D-A_{k+1,q+1}-E_{k+1,q+1})\).
If \(\mu _{k,q}\|E_{k+1,q+1}-E_{k,q}\|_{F}/\|D\|_{F}< \epsilon _{2}\), set \(\mu _{k+1,q+1}=\rho \mu _{k,q}\); otherwise, go to Step 1.
In fact, Algorithm 3.3 includes Algorithm 3.2 as a special case when \(\ell =1\). Obviously, the algorithm is an acceleration of the SALM algorithm in [34].
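The two-stage structure can be summarized in the following sketch: \(\ell -1\) plain ALM steps followed by one smoothing step that restores the Toeplitz structure. This is an illustrative Python prototype under our own defaults, using a dense SVD in place of `lansvd`; it is not the authors' code.

```python
import numpy as np

def svt(A, tau):
    """Singular value shrinkage D_tau."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def smooth(X):
    """Toeplitz smoothing operator T: each diagonal -> (min + max) / 2."""
    n = X.shape[0]
    T = np.zeros((n, n))
    for a in range(-n + 1, n):
        d = np.diagonal(X, a)
        T += np.diag(np.full(n - abs(a), (d.min() + d.max()) / 2.0), a)
    return T

def ssalm(D, mask, ell=3, mu=1.0, rho=1.5, eps1=1e-7, eps2=1e-5, max_outer=200):
    """SSALM sketch: ell - 1 plain ALM steps (no data movement), then one
    smoothing step that switches the iterate back to Toeplitz structure."""
    Y = np.zeros_like(D)
    E = np.zeros_like(D)
    nD = np.linalg.norm(D)
    A = D.copy()
    for _ in range(max_outer):
        for q in range(ell):
            X = svt(D - E + Y / mu, 1.0 / mu)
            A = smooth(X) if q == ell - 1 else X   # smooth only on the ell-th step
            E_new = np.where(mask, 0.0, D - A + Y / mu)
            done1 = np.linalg.norm(D - A - E_new) / nD < eps1
            done2 = mu * np.linalg.norm(E_new - E) / nD < eps2
            Y = Y + mu * (D - A - E_new)
            if done2:
                mu *= rho
            E = E_new
            if done1 and done2:
                return A
    return A
```

With \(\ell =1\) every step is smoothed and the loop reduces to the SALM scheme; larger ℓ skips the smoothing (and its data movement) on all but every ℓth step.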
Convergence analysis
This section will analyze the convergence of Algorithm 3.3.
Let \((\ddot{A},\ddot{E})\) be the solution of model (3.5) and Ÿ be that of the optimal problem (3.2). We first provide some lemmas.
Lemma 4.1
([8])
Let \(A\in \mathbb{R}^{m\times n}\) be an arbitrary matrix and \(U\varSigma V^{T}\) be its SVD. Then the set of subgradients of the nuclear norm of A is given by
Lemma 4.2
([18])
If \(\mu _{k}\) is nondecreasing, then each term of the following series is nonnegative and the sum is finite, i.e.,
Lemma 4.3
([18])
The sequences \(\{\ddot{Y}_{k}\}\), \(\{Y_{k}\}\), and \(\{\hat{Y}_{k}\}\) are all bounded, where \(\hat{Y}_{k}=Y_{k-1}+\mu _{k-1}(D-A_{k}-E_{k-1})\).
Lemma 4.4
The sequence \(\{Y_{k,q}\}\) generated by Algorithm 3.3 is bounded.
Proof
Let \(B=\mu _{k,q}(D-E_{k,q}+\mu _{k,q}^{-1}Y_{k,q}-X_{k+1,q+1})\) and \(\mathcal{T}(B)=\sum_{l=1}^{n-1}\tilde{b}_{-l}Z_{n}^{l}+\sum_{l=0}^{n-1}\tilde{b}_{l}(Z_{n}^{T})^{l}\), defined as in (2.1).
First of all, we show that \(Y_{k,q}\), \(E_{k,q}\), \(k=1,2,\ldots \) , \(q=1,2,\ldots ,\ell -1\), are all Toeplitz matrices. Clearly, \(Y_{0,0}=0\), \(E_{0,0}=0\) are both of Toeplitz structure. Suppose that \(Y_{k,q}\), \(E_{k,q}\) are both Toeplitz matrices; then so is \(E_{k+1,q+1}=\mathcal{P}_{\bar{\varOmega }}(D-X_{k+1,q+1}+\mu _{k,q}^{-1}Y_{k,q})\). Thus, \(Y_{k+1,q+1}\) is also a Toeplitz matrix because of Step 4 in Algorithm 3.3.
Moreover,
It is clear that by Steps 1–2 in Algorithm 3.3 we have
where \(\check{U}_{k,q}\), \(\check{V}_{k,q}\) are the singular vectors associated with singular values greater than \(\frac{1}{\mu _{k,q}}\), and \(\tilde{U}_{k,q}\), \(\tilde{V}_{k,q}\) are those associated with singular values less than or equal to \(\frac{1}{\mu _{k,q}}\); the elements of the diagonal matrix \(\check{\varSigma }_{k,q}\) are greater than \(\frac{1}{\mu _{k,q}}\), and those of the diagonal matrix \(\tilde{\varSigma }_{k,q}\) are less than or equal to \(\frac{1}{\mu _{k,q}}\). Hence, it follows that \(A_{k+1,q+1}=\check{U}_{k,q}(\check{\varSigma }_{k,q}-\mu _{k,q}^{-1}I)\check{V}_{k,q}^{T}\), and
We can hence obtain that \(Y_{k,q}+\mu _{k,q}(D-A_{k+1,q+1}-E_{k,q}) \in \partial \|A_{k+1,q+1}\|_{*}\) from Lemmas 4.2 and 4.3.
It is known that, for \(A_{k+1,q+1}=U\varSigma V^{T}\), by Lemma 4.1,
We also have
Therefore, the following inequalities can be obtained:
and
It is evident that the sequence \(\{Y_{k,q}\}\) is bounded. □
Theorem 4.1
Suppose that \(\langle A_{k+1,q+1}-A_{k,q},D-A_{k+1,q+1}-E_{k,q}\rangle \geq 0\); then the sequence \(\{A_{k,q}\}\) converges to the solution of (3.5) when \(\mu _{k,q}\rightarrow \infty \) and \(\sum_{k=1}^{+\infty }\mu _{k,q}^{-1}=+\infty \).
Proof
It is true that
since \(\mu _{k,q}^{-1}(Y_{k+1,q+1}-Y_{k,q})=D-A_{k+1,q+1}-E_{k+1,q+1}\) and Lemma 4.4 holds.
Let \((\ddot{A},\ddot{E})\) be the solution of (3.5). Then \(A_{k+1,q+1}\), \(Y_{k+1,q+1}\), \(E_{k+1,q+1}\), \(k=1,2,\ldots \) , are all Toeplitz matrices since \(\ddot{A}+\ddot{E}=D\). We prove first that
where \(\hat{Y}_{k+1,q+1}=Y_{k,q}+\mu _{k,q}(D-A_{k+1,q+1}-E_{k,q})\) and Ÿ is the optimal solution to the dual problem (3.2).
We obtain the following result with the same analysis:
Then
holds true.
The sum \(\sum_{k=1}^{\infty }\mu _{k,q}^{-1}\langle A_{k,q}-\ddot{A},\hat{Y}_{k,q}-\ddot{Y}\rangle <+\infty \), and \(\|E_{k,q}-\ddot{E}\|^{2}+\mu _{k,q}^{-2}\|Y_{k,q}-\ddot{Y}\|^{2}\) is nonincreasing, since \(\|A\|_{\ast }\) is a convex function, \(\hat{Y}_{k+1,q+1}\in \partial \|A\|_{\ast }\), and \(\langle A_{k+1,q+1}-\ddot{A}, \hat{Y}_{k+1,q+1}-\ddot{Y}\rangle \geq 0\). On the other hand, the following are true by Algorithm 3.3:
Therefore, by the same argument as in Theorem 2 of [18], we obtain that Ä is the solution of (3.5). □
Theorem 4.2
Let \(X=(x_{ij})\in \mathbb{R}^{n\times n}\) and let \(\mathcal{T}(X)=(\tilde{x}_{ij})\in \mathbb{T}^{n\times n}\) be the Toeplitz matrix derived from X, introduced in (2.1). Then, for any Toeplitz matrix \(Y=(y_{ij})\in \mathbb{T}^{n\times n}\), \(\langle X-\mathcal{T}(X),Y\rangle =0\).
Proof
By the definition of \(\mathcal{T}(X)\), we have \(\sum_{i,j}(x_{ij}-\tilde{x}_{ij})=0\), \(i,j=1,2,\ldots ,n\). Since Y is a Toeplitz matrix and \(y_{\alpha }=y_{ij}\), \(\alpha =j-i\), \(i,j=1,2,\ldots , n\), then
□
Theorem 4.3
In Algorithm 3.3, if \(A_{k,q}\) is a Toeplitz matrix generated by \(X_{k,q}\), then
is true with Ä being the solution of (3.5).
Proof
Thus, \(\|A_{k,q}-\ddot{A}\|_{F}<\|X_{k,q}-\ddot{A}\|_{F}\) holds true. □
Numerical experiments
In this section we report some numerical results for two algorithms (SALM, SSALM) on \(n\times n\) matrices with different ranks. All the experiments were conducted on the same workstation with an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz, 16 GB of memory, and a 64-bit operating system, running Windows 7 and Matlab (version 2016a). We analyze and compare iteration numbers (IT), computing time in seconds (time(s)), deviations (error 1, error 2), and the ratio, which are defined in the following. It can be seen that the SSALM algorithm proposed in this study is highly effective compared with the SALM algorithm.
In the experiments, \(M\in \mathbb{T}^{n\times n} \) represents an uncompleted Toeplitz matrix, and the sampling density is \(p= m/(2n-1)\), where m is the number of known diagonals; here \(p\in \{0.3,0.4,0.5,0.6\}\). Due to the special structure of a Toeplitz matrix, we have \(0\leq m \leq 2n-1\). The SALM and SSALM algorithms share the same values of all parameters, say, \(\tau _{0}=1/\|D\|_{2}\), \(\delta =1.2172 + {1.8588m}/{n^{2}}\), \(\epsilon _{1}=10^{-9}\), \(\epsilon _{2}=5\times 10^{-6}\). In the tests of the SSALM algorithm, \(\ell =2\) or \(\ell =3\) as a rule of semismoothing.
The experimental results of the two algorithms are presented in Tables 1–6. From the tables, both algorithms successfully compute an approximate solution satisfying the prescribed stopping condition for all the test matrices M, and the computing time of our SSALM algorithm is far less than that of the SALM algorithm. In particular, compared with the cost of the SALM algorithm, the cost of the SSALM algorithm is decreased by up to 30.44%. The “ratio” in Tables 5–6 shows this effectiveness.
Concluding remarks
As is well known, matrix completion means reconstructing a matrix from a subset of its entries by taking advantage of the low-rank structure and the interdependencies between the entries; the problem is NP-hard in general. In recent years, Toeplitz matrix completion has attracted widespread attention and has become one of the most important completion problems. In order to solve such problems, we proposed an accelerated variant of the SALM algorithm in this study, namely the SSALM algorithm, and established its convergence theory. Theoretical analysis and numerical results have shown that the SSALM scheme is an effective algorithm for solving the TMC problem. In particular, the CPU time of the SSALM algorithm is consistently reduced, by up to 30.44%, in all cases. The SSALM algorithm both overcomes the SVD complexity of the original ALM algorithm and avoids the extra data-communication load of the SALM algorithm; the reason is that data communication congestion is far more expensive than computing on many computers. Therefore, the SSALM algorithm has better numerical behavior for solving the TMC problem than the SALM algorithm (Tables 1–6).
Abbreviations
 MC:

matrix completion
 TMC:

Toeplitz matrix completion
 SVD:

singular value decomposition
 SVT:

singular value thresholding
 APG:

accelerated proximal gradient
 ALM:

augmented Lagrange multiplier
 MALM:

modified augmented Lagrange multiplier
 SALM:

smoothing augmented Lagrange multiplier
 SSALM:

semismoothing augmented Lagrange multiplier
 RPCA:

robust principal component analysis
 IT:

iteration number
 CPU:

computing time
References
 1.
Amit, Y., Fink, M., Srebro, N., Ullman, S.: Uncovering shared structures in multiclass classification. In: Proceeding of the 24th International Conference on Machine Learning, pp. 17–24. ACM, New York (2007)
 2.
Argyriou, A., Evgeniou, T., Pontil, M.: Multitask feature learning. Adv. Neural Inf. Process. Syst. 19, 41–48 (2007)
 3.
Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. Comput. Graph. 34, 417–424 (2000)
 4.
Blanchard, J., Tanner, J., Wei, K.: CGIHT: conjugate gradient iterative hard thresholding for compressed sensing and matrix completion. In: Numerical Analysis Group (2014) Preprint 14/08
 5.
Boumal, N., Absil, P.A.: RTRMC: a Riemannian trustregion method for lowrank matrix completion. In: ShaweTaylor, J., Zemel, R.S., Bartlett, P., Pereira, F.C.N., Weinberger, K.Q. (eds.) Advances in Neural Inf. Processing Systems, NIPS, vol. 24, pp. 406–414 (2011)
 6.
Cai, J.F., Candès, E.J., Shen, Z.: A singular value thresholding method for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)
 7.
Cai, J.F., Qu, X., Xu, W., Ye, G.B.: Robust recovery of complex exponential signals from random Gaussian projections via low rank Hankel matrix reconstruction. Appl. Comput. Harmon. Anal. 41(2), 470–490 (2016)
 8.
Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)
 9.
Chen, C., He, B., Yuan, X.: Matrix completion via an alternating direction method. IMA J. Numer. Anal. 32, 227–245 (2012)
 10.
Chen, M., Ganesh, A., Lin, Z., Ma, Y., Wright, J., Wu, L.: Fast convex optimization algorithms for exact recovery of a corrupted lowrank matrix. J. Mar. Biol. Assoc. UK 56(3), 707–722 (2009)
 11.
Chen, Y., Chi, Y.: Robust spectral compressed sensing via structured matrix completion. IEEE Trans. Inf. Theory 60(10), 6576–6601 (2014)
 12.
Eldén, L.: Matrix Methods in Data Mining and Pattern Recognition. Society for Industrial and Applied Mathematics, Philadelphia (2007)
 13.
Hu, Y., Zhang, D.B., Liu, J., Ye, J.P., He, X.F.: Accelerated singular value thresholding for matrix completion. In: KDD’12, Beijing, August 12–16, 2012 (2012)
 14.
Jain, P., Meka, R., Dhillon, I.: Guaranteed rank minimization via singular value projection. In: Proceeding of the Neural Information Processing Systems Conf., NIPS, pp. 937–945 (2010)
 15.
Jain, P., Netrapalli, P., Sanghavi, S.: Lowrank matrix completion using alternating minimization. In: Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), pp. 665–674 (2013)
 16.
Ji, H., Liu, C., Shen, Z., Xu, Y.: Robust video denoising using low rank matrix completion. In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition CVPR (2010)
 17.
Kyrillidis, A., Cevher, V.: Matrix recipes for hard thresholding methods. J. Math. Imaging Vis. 48(2), 235–265 (2013)
 18.
Lin, Z., Chen, M., Wu, L., Ma, Y.: The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. UIUC Technical Report UILU-ENG-09-2214 (2010)
 19.
Liu, Z., Vandenberghe, L.: Interiorpoint method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. Appl. 31(3), 1235–1256 (2009)
 20.
Luk, F.T., Qiao, S.: A fast singular value algorithm for Hankel matrices. In: Olshevsky, V. (ed.) Fast Algorithms for Structured Matrices: Theory and Applications. Contemporary Mathematics, vol. 323. Am. Math. Soc., Providence (2003)
 21.
Ma, S., Goldfarb, D., Chen, L.: Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program. 128, 321–353 (2011)
 22.
Mesbahi, M., Papavassilopoulos, G.P.: On the rank minimization problem over a positive semidefinite linear matrix inequality. IEEE Trans. Autom. Control 42(2), 239–243 (1997)
 23.
Mishra, B., Apuroop, K.A., Sepulchre, R.: A Riemannian geometry for lowrank matrix completion (2013) arXiv:1306.2672
 24.
Sebert, F., Zou, Y.M., Ying, L.: Toeplitz block matrices in compressed sensing and their applications in imaging. IEEE, Inf. Tech. Appl. in Biomedicine, 47–50 (2008)
 25.
Shaw, A.K., Pokala, S., Kumaresan, R.: Toeplitz and Hankel approximation using structured approach. Acoustics. Speech and signal processing. Proc. IEEE Int. Conf. 2349, 12–15 (1998)
 26.
Tanner, J., Wei, K.: Low rank matrix completion by alternating steepest descent methods. Appl. Comput. Harmon. Anal. 40, 417–429 (2016)
 27.
Toh, K.C., Yun, S.: An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac. J. Optim. 6, 615–640 (2010)
 28.
Tomasi, C., Kanade, T.: Shape and motion from image streams under orthography: a factorization method. Int. J. Comput. Vis. 9(2), 137–154 (1992)
 29.
Vandereycken, B.: Low rank matrix completion by Riemannian optimization. SIAM J. Optim. 23(2), 1214–1236 (2013)
 30.
Wang, C.L., Li, C.: A mean value algorithm for Toeplitz matrix completion. Appl. Math. Lett. 41, 35–40 (2015)
 31.
Wang, C.L., Li, C., Wang, J.: A modified augmented Lagrange multiplier algorithm for Toeplitz matrix completion. Adv. Comput. Math. 42, 1209–1224 (2016)
 32.
Wang, C.L., Li, C., Wang, J.: Comparisons of several algorithms for Toeplitz matrix recovery. Comput. Math. Appl. 71(1), 133–146 (2016)
 33.
Wen, R.P., Li, S.D., Meng, G.Y.: SORlike methods with optimization model for augmented linear systems. East Asian J. Appl. Math. 7(1), 101–115 (2017)
 34.
Wen, R.P., Li, S.Z., Zhou, F.: Toeplitz matrix completion via smoothing augmented Lagrange multiplier algorithm. Appl. Math. Comput. 355, 299–310 (2019)
 35.
Wen, R.P., Yan, X.H.: A new gradient projection method for matrix completion. Appl. Math. Comput. 258, 537–544 (2015)
Acknowledgements
The authors are very much indebted to the editor and anonymous referees for their helpful comments and suggestions which greatly improved the original manuscript of this paper. The authors also are thankful for the support from the NSF of Shanxi Province (201601D011004) and the SYYJSKC1803.
Availability of data and materials
Please contact author for data requests.
Funding
It is not applicable.
Author information
Contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Wen, R., Li, S. & Duan, Y. A semismoothing augmented Lagrange multiplier algorithm for low-rank Toeplitz matrix completion. J Inequal Appl 2019, 83 (2019). https://doi.org/10.1186/s13660-019-2033-7
Keywords
 Toeplitz matrix
 Completion
 Augmented Lagrange multiplier
 Data communication