• Research
• Open Access

# Structure method for solving the nearest Euclidean distance matrix problem

Journal of Inequalities and Applications 2014, 2014:491

https://doi.org/10.1186/1029-242X-2014-491

• Received: 2 September 2014
• Accepted: 24 November 2014
• Published:

## Abstract

A matrix with zero diagonal is called a Euclidean distance matrix when the matrix values are measurements of distances between points in a Euclidean space. Because of data errors such a matrix may not be exactly Euclidean and it is desirable in many applications to find the best Euclidean matrix which approximates the non-Euclidean matrix. In this paper the problem is formulated as a smooth unconstrained minimization problem, for which rapid convergence can be obtained. Comparative numerical results are reported.

## Keywords

• Euclidean distance matrix
• positive semidefinite matrix
• Newton method
• BFGS method

## 1 Introduction

Symmetric matrices with non-negative off-diagonal elements and zero diagonal elements arise as data in many experimental sciences. This occurs when the values are measurements of squared distances between points in a Euclidean space (e.g. atoms, stars, cities). Such a matrix is referred to as a Euclidean distance matrix. Because of data errors such a matrix may not be exactly Euclidean, and it is desirable to find the best Euclidean matrix which approximates the non-Euclidean matrix. The aim of this paper is to study a new method for solving the Euclidean distance matrix problem and to compare it with older methods.

An important application arises in the conformation of molecular structures from nuclear magnetic resonance data. Here a Euclidean distance matrix is used to represent the squares of distances between the atoms of a molecular structure. An attempt to determine such a structure by nuclear magnetic resonance experiments gives rise to a distance matrix F which, because of data errors, may not be Euclidean. There are many other applications in subjects as diverse as archeology, cartography, genetics, geography, and multivariate analysis. Pertinent references are given by Al-Homidan [4, 5].

Characterization theorems for the Euclidean distance matrix have been given in many forms. In Section 2 we show a very important characterization which brings out the underlying structure and is readily applicable to the algorithms that follow.

This paper addresses a non-smooth optimization problem in which some matrix, defined in terms of the problem variables, has to be positive semidefinite. One way to handle this problem is to impose a functional constraint in which the least eigenvalue of the matrix is non-negative. However, if there are multiple eigenvalues at the solution, which is usually the case, such a constraint is non-smooth, and this non-smoothness cannot be modeled by a convex polyhedral composite function. An important factor is the determination of the multiplicity of the zero eigenvalues, or alternatively the rank of the matrix at the solution. If this rank is known it is usually possible to solve the problem by conventional techniques.

Glunt et al. formulate the Euclidean distance matrix problem as a constrained least distance problem in which the constraint is the intersection of two convex sets. The Dykstra-Han alternating projection algorithm can then be used to solve the problem. This method is globally convergent, but the rate of convergence is very slow. However, the method does have the capability to determine the correct rank of the solution matrix.

Recently, there has been much interest in interior point methods applied to problems with semidefinite matrix constraints (see, e.g., the survey papers and the references therein). Semidefinite programming optimizes a linear function subject to the constraint that a symmetric matrix is positive semidefinite. It is a convex programming problem since the objective and constraints are convex. In this paper we deal with a problem that differs slightly, since the objective is quadratic; also an additional rank constraint is added, which makes the problem non-convex and harder to solve. Here we use a different approach from interior point methods. If the correct rank of the solution matrix is known, it is shown in Section 3 how to formulate the problem as a smooth unconstrained minimization problem, for which rapid convergence can be obtained by, for example, the BFGS method. We give expressions for the objective function and its first derivatives.

A hybrid method combining a projection method and a quasi-Newton method has been studied previously; a similar study can be carried out for all the features of the present method. Finally, in Section 4, numerical comparisons are carried out.

## 2 The Euclidean distance matrix problem

In this section the definition of the Euclidean distance matrix is given, and the relationship between points and distances is summarized. A characterization theorem for the Euclidean distance matrix is proved in a concise way that brings out the underlying structure and is readily applicable to the algorithms that follow.

It is necessary to distinguish between distance matrices that are obtained in practice and those that can be derived exactly from n vectors in an affine subspace.

Definition 2.1 A matrix $F\in {\mathbb{R}}^{n\times n}$ is called a distance matrix iff it is symmetric, the diagonal elements are zero,
${f}_{ii}=0,\quad i=1,\dots ,n,$
and the off-diagonal entries are non-negative,
${f}_{ij}\ge 0,\quad \forall i\ne j.$
Definition 2.2 A matrix $D\in {\mathbb{R}}^{n\times n}$ is called a Euclidean distance matrix iff there exist n vectors ${\mathbf{x}}_{1},\dots ,{\mathbf{x}}_{n}$ in an affine subspace of dimension $r$ ($r\le n-1$) such that
${d}_{ij}={\parallel {\mathbf{x}}_{i}-{\mathbf{x}}_{j}\parallel }_{2}^{2},\quad \forall i,j.$
(2.1)
The Euclidean distance problem can now be stated as follows: Given a distance matrix $F\in {\mathbb{R}}^{n×n}$, find the Euclidean distance matrix $D\in {\mathbb{R}}^{n×n}$ that minimizes
${\parallel F-D\parallel }_{F},$
(2.2)

where ${\parallel \cdot \parallel }_{F}$ denotes the Frobenius norm.

The following theorem is essentially due to Schoenberg.

Theorem 2.3 The distance matrix $D\in {\mathbb{R}}^{n\times n}$ is a Euclidean distance matrix if and only if the $\left(n-1\right)\times \left(n-1\right)$ symmetric matrix A defined by
${a}_{ij}=\frac{1}{2}\left[{d}_{s1}+{d}_{t1}-{d}_{st}\right]\quad \left(1\le i,j\le n-1\right)$
(2.3)
is positive semidefinite, where $s=i+1$ and $t=j+1$; in that case D is irreducibly embeddable in ${\mathbb{R}}^{r}$ ($r\le n-1$), where $r=\operatorname{rank}\left(A\right)$. Moreover, consider the spectral decomposition
$A=U\mathrm{\Lambda }{U}^{T}.$
(2.4)
Let ${\mathrm{\Lambda }}_{r}$ be the matrix of non-zero eigenvalues in Λ and define X by
(2.5)

where ${\mathrm{\Lambda }}_{r}\in {\mathbb{R}}^{r×r}$ is a diagonal matrix and ${U}_{r}\in {\mathbb{R}}^{\left(n-1\right)×r}$.
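Theorem 2.3 is easy to check numerically. The sketch below is our own illustration, not the paper's code (the names `schoenberg_A` and `is_euclidean` are hypothetical): it builds the matrix A of (2.3) from a candidate distance matrix D and tests whether its smallest eigenvalue is non-negative.

```python
import numpy as np

def schoenberg_A(D):
    """Matrix A of (2.3): a_ij = (d_{s1} + d_{t1} - d_{st})/2 with
    s = i+1, t = j+1 (0-based indexing below)."""
    n = D.shape[0]
    s = np.arange(1, n)
    return 0.5 * (D[s, 0][:, None] + D[0, s][None, :] - D[np.ix_(s, s)])

def is_euclidean(D, tol=1e-10):
    """Theorem 2.3: D is a Euclidean distance matrix iff A is PSD."""
    return bool(np.linalg.eigvalsh(schoenberg_A(D)).min() >= -tol)

# Squared distances among the collinear points 0, 1, 2 on a line.
P = np.array([[0.0], [1.0], [2.0]])
D = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
print(is_euclidean(D))                                              # True
# A matrix whose distances violate the triangle inequality fails the test.
print(is_euclidean(np.array([[0, 9, 1], [9, 0, 1], [1, 1, 0.0]])))  # False
```

For the collinear example, A = [[1, 2], [2, 4]] has eigenvalues 0 and 5, so D is Euclidean with rank $r=1$, matching the embedding dimension.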

## 3 The method

In this section we consider a different approach to the Euclidean distance matrix problem (2.2). The main idea is to replace (2.2) by a smooth unconstrained optimization problem in order to use superlinearly convergent quasi-Newton methods. To do this it is necessary to estimate the rank r, since this piece of information is not generally known. Once a value of r is chosen, the problem (2.2) is solved by the BFGS method. We give the relevant formulas for the derivatives. At the end of the section we discuss details of the initialization and implementation.

If the rank r is known, it is possible to express (2.2) as a smooth unconstrained optimization problem in the following way. The unknowns in the problem are chosen to be the elements of the matrices X and ${\mathrm{\Lambda }}_{r}$ introduced in (2.5). We take X to have r columns and ${\mathrm{\Lambda }}_{r}$ to be a diagonal matrix as shown below. This gives an unconstrained optimization problem in $r\left(n-1\right)-\frac{r\left(r+1\right)}{2}$ unknowns. We therefore parametrize X and ${\mathrm{\Lambda }}_{r}$ in the following way:
(3.1)
The objective function $\varphi \left(X,{\mathrm{\Lambda }}_{r}\right)$ is readily calculated by first forming D from X and ${\mathrm{\Lambda }}_{r}$ as indicated by (2.1), after which ϕ is given by $\varphi \left(X,{\mathrm{\Lambda }}_{r}\right)={\parallel D-F\parallel }_{F}^{2}$. When $s=t$ we have ${d}_{st}=0$, so (2.3) gives ${a}_{ii}=\frac{1}{2}\left[{d}_{s1}+{d}_{s1}-0\right]={d}_{s1}$; the elements of the matrix D then take the form
where $t=j+1$. Hence
$\varphi =\sum _{s,t=1}^{n}{\left({d}_{st}-{f}_{st}\right)}^{2}=2\sum _{s=2}^{n}{\left({d}_{s1}-{f}_{s1}\right)}^{2}+2\sum _{2\le s<t\le n}{\left({d}_{st}-{f}_{st}\right)}^{2}.$
(3.2)
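For illustration, ϕ can be evaluated directly from point coordinates: form D from the points via (2.1) and take the squared Frobenius norm of $D-F$. The sketch below uses a plain $n\times r$ coordinate matrix rather than the structured parametrization (3.1); the function name `phi` is our own.

```python
import numpy as np

def phi(P, F):
    """phi = ||D - F||_F^2, where d_ij = ||p_i - p_j||_2^2 as in (2.1).
    P is an n x r matrix whose rows are point coordinates (a plain
    parametrization, not the structured X, Lambda_r of (3.1))."""
    G = P @ P.T                               # Gram matrix of the points
    g = np.diag(G)
    D = g[:, None] + g[None, :] - 2.0 * G     # squared pairwise distances
    return ((D - F) ** 2).sum()

F = np.array([[0, 1, 4], [1, 0, 1], [4, 1, 0.0]])
P = np.array([[0.0], [1.0], [2.0]])           # these points realize F exactly
print(phi(P, F))                              # 0.0
```

Using the Gram-matrix identity $d_{ij}=g_{ii}+g_{jj}-2g_{ij}$ keeps the cost of one evaluation at $O(n^{2}r)$.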
Our chosen method to minimize $\varphi \left(X\right)$ is the BFGS quasi-Newton method (see for example ). This requires expressions for the first partial derivatives of ϕ, which are given from (3.2) by
$\begin{array}{rcl}\frac{\partial \varphi }{\partial {\lambda }_{i}}&=&2\left\{2\sum _{l=1}^{i}\sum _{k=l+i}^{n-1}\left({d}_{k+1,l}-{f}_{k+1,l}\right){x}_{ik}^{2}+2\sum _{k=i+1}^{n-1}\left({d}_{k+1,i+1}-{f}_{k+1,i+1}\right)\left(1+{x}_{ik}^{2}-2{x}_{ik}\right)\right.\\ &&{}+\left.2\sum _{l=i+2}^{r}\sum _{k=l}^{n-1}\left({d}_{k+1,l}-{f}_{k+1,l}\right)\left({x}_{i,l-1}^{2}+{x}_{ik}^{2}-2{x}_{i,l-1}{x}_{ik}\right)\right\},\end{array}$
(3.3)
for all $i=1,\dots ,r$. For $j=1,\dots ,r$, and $i=j+1,\dots ,n-1$:
$\frac{\partial \varphi }{\partial {x}_{ij}}=4\sum _{k=0}^{i-1}\left({d}_{j+1,k+1}-{f}_{j+1,k+1}\right)\left(2{x}_{ij}{\lambda }_{j}\right)+4\sum _{k=i}^{n-1}\left({d}_{j+1,k+1}-{f}_{j+1,k+1}\right)\left(2{x}_{ij}{\lambda }_{j}-2{x}_{kj}{\lambda }_{j}\right).$
(3.4)

The BFGS method also requires an initial Hessian approximation; where necessary, we use the identity matrix.

Some care has to be taken when choosing the initial values of the matrices X and ${\mathrm{\Lambda }}_{r}$; in particular, the initial X must have rank r. If not, the minimization method may not be able to increase the rank of X. An extreme case occurs when the initial matrices $X=0$ and ${\mathrm{\Lambda }}_{r}=0$ are chosen and $F\ne 0$. It can be seen from (3.3) and (3.4) that the components of the gradient vector are then all zero, so that $X=0$ and ${\mathrm{\Lambda }}_{r}=0$ give a stationary point, but not a minimizer. A gradient method will usually terminate in this situation and so fail to find the solution.
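The stationarity of the zero starting point is easy to verify numerically. For illustration we again use the plain point-coordinate parametrization rather than (3.1); the gradient formula below follows from the chain rule for that parametrization and is an assumption of this sketch, not the paper's (3.3)-(3.4).

```python
import numpy as np

def grad_phi(P, F):
    """Gradient of phi(P) = sum_{i,j} (||p_i - p_j||^2 - f_ij)^2 with
    respect to the point coordinates (chain rule; both ordered pairs
    (i,j) and (j,i) contribute):
        d(phi)/d(p_i) = 8 * sum_j (d_ij - f_ij) (p_i - p_j)."""
    G = P @ P.T
    g = np.diag(G)
    R = g[:, None] + g[None, :] - 2.0 * G - F     # residual D - F
    return 8.0 * (R.sum(1)[:, None] * P - R @ P)

F = np.array([[0, 1, 4], [1, 0, 1], [4, 1, 0.0]])
# At P = 0 every gradient component vanishes even though F != 0,
# so the all-zero start is a stationary point but not a minimizer.
print(np.abs(grad_phi(np.zeros((3, 2)), F)).max())   # 0.0
```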

A reliable method for initializing X and ${\mathrm{\Lambda }}_{r}$ is to use the construction suggested by (3.1) and (2.3). Thus we define the elements of A in terms of those of F by
${a}_{ij}=\frac{1}{2}\left({f}_{1i}+{f}_{1j}-{f}_{ij}\right),\quad i\ge 2,j\ge 2.$
(3.5)

The first row and column of A are zero and are ignored. We then find the spectral decomposition $U\mathrm{\Sigma }{U}^{T}$ of the nontrivial part of A. Finally the nontrivial part of X and ${\mathrm{\Lambda }}_{r}$ in (3.1) is initialized to the matrix ${\mathrm{\Sigma }}_{r}^{1/2}{U}_{r}^{T}$, where ${\mathrm{\Sigma }}_{r}=\operatorname{diag}\left({\sigma }_{i}\right)$, $i=1,\dots ,r$, is composed of the r largest eigenvalues in Σ, and the columns of ${U}_{r}$ are the corresponding eigenvectors. When ${\mathrm{\Sigma }}_{r}$ is positive definite, this procedure ensures that the initialization has the correct rank r. Otherwise the process must be modified in some way, for example by ensuring that the diagonal elements of ${\mathrm{\Sigma }}_{r}$ lie above a positive threshold.
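A minimal sketch of this initialization (our own illustration, assuming the construction of (2.3) applied to F): build A from F, keep the r largest eigenpairs, and clip the retained eigenvalues to a small positive threshold so that the initial rank is exactly r. The name `initial_embedding` is hypothetical.

```python
import numpy as np

def initial_embedding(F, r):
    """Form A via the construction (2.3) applied to F, take its spectral
    decomposition, and keep the r largest eigenvalues, clipped to a small
    positive threshold so the initial rank is exactly r."""
    n = F.shape[0]
    s = np.arange(1, n)
    A = 0.5 * (F[s, 0][:, None] + F[0, s][None, :] - F[np.ix_(s, s)])
    w, U = np.linalg.eigh(A)                  # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:r]             # indices of the r largest
    lam = np.maximum(w[idx], 1e-8)            # enforce a positive threshold
    return U[:, idx], np.diag(lam)            # nontrivial parts of X, Lambda_r

# F is exactly Euclidean (points 0, 1, 2 on a line), so with r = 1 the
# initialization reproduces A = U_r Lambda_r U_r^T exactly.
F = np.array([[0, 1, 4], [1, 0, 1], [4, 1, 0.0]])
Ur, Lr = initial_embedding(F, 1)
print(Ur @ Lr @ Ur.T)          # approximately [[1, 2], [2, 4]]
```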

An advantage of this method is that it allows the spatial dimension to be chosen by the user. This is useful when the rank is already known. For example, if the entries in F are derived from distances between cities then the dimension will be no higher than $r=2$. Likewise, if the entries are derived from distances between atoms in a molecule or stars in space, then the maximum dimension is $r=3$.

In general, however, the rank is not known; for example, the atoms in a molecule may turn out to be collinear or coplanar. We therefore consider an algorithm in which we are prepared to revise our estimate of r. A simple strategy is to repeat the entire method for different values of r. If ${r}^{\ast }$ denotes the correct value of r which solves (2.2), then it is observed that the BFGS method converges rapidly, indeed superlinearly, if $r\le {r}^{\ast }$. On the other hand, if $r>{r}^{\ast }$ then slow convergence is observed. One reason is that there are more variables in the problem; redundancy in the parameter space may also have an effect. Thus it makes sense to start with a small value of r and increase it by one until the solution is recognized. One way to recognize termination is when ${D}^{\left(r\right)}$ agrees sufficiently well with ${D}^{\left(r+1\right)}$, where ${D}^{\left(r\right)}$ denotes the Euclidean distance matrix obtained by minimizing ϕ when ${\mathrm{\Lambda }}_{r}$ in (3.1) has r diagonal elements. Numerical experience with other methods on various test problems has been reported in the literature; these methods are compared with the present method in Section 4.
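The rank-increasing strategy can be sketched as follows. This is an illustration only, not the paper's implementation: it uses `scipy.optimize.minimize` with the BFGS method, a plain point-coordinate parametrization instead of (3.1), finite-difference gradients instead of (3.3)-(3.4), and the spectral initialization described above; the function names are our own.

```python
import numpy as np
from scipy.optimize import minimize

def nearest_edm(F, r_max=None, tol=1e-5):
    """Rank-increasing strategy: minimize phi by BFGS for r = 1, 2, ...
    and stop when D^(r) agrees with D^(r+1) to within tol."""
    n = F.shape[0]
    r_max = r_max if r_max is not None else n - 1

    def dist(P):                              # D from the points, as in (2.1)
        g = (P * P).sum(1)
        return g[:, None] + g[None, :] - 2.0 * (P @ P.T)

    def solve(r):                             # minimize phi for a fixed rank r
        s = np.arange(1, n)
        A = 0.5 * (F[s, 0][:, None] + F[0, s][None, :] - F[np.ix_(s, s)])
        w, U = np.linalg.eigh(A)
        idx = np.argsort(w)[::-1][:r]
        P0 = np.vstack([np.zeros((1, r)),     # spectral initialization
                        U[:, idx] * np.sqrt(np.maximum(w[idx], 1e-8))])
        res = minimize(lambda x: ((dist(x.reshape(n, r)) - F) ** 2).sum(),
                       P0.ravel(), method='BFGS')
        return dist(res.x.reshape(n, r))

    D_prev = solve(1)
    for r in range(2, r_max + 1):
        D = solve(r)
        if np.linalg.norm(D - D_prev) < tol:  # D^(r-1) agrees with D^(r)
            break
        D_prev = D
    return D_prev

F = np.array([[0, 1, 4], [1, 0, 1], [4, 1, 0.0]])  # already Euclidean
print(np.allclose(nearest_edm(F), F, atol=1e-4))   # True
```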

An obvious alternative to using the BFGS method is to evaluate the Hessian matrix of second derivatives of $\varphi \left(X\right)$ and use Newton’s method. This would likely reduce the number of iterations required. However, there is also the disadvantage of increased complexity, and increased housekeeping at each iteration. Moreover, it is possible that the Hessian has some negative eigenvalues so a modified form of Newton’s method would be required. A simple example serves to illustrate the possibility of a negative eigenvalue. Take $n=2$, $r=1$, and let $F=\left[\begin{array}{cc}0& -1\\ -1& 0\end{array}\right]$, $X=\left[1\right]$, and ${\mathrm{\Lambda }}_{r}=\left[{\lambda }_{1}\right]$. Then $\varphi =2{\left(1-{\lambda }_{1}^{2}\right)}^{2}$. This has global minimizers at ${\lambda }_{1}=±1$, a local maximizer at ${\lambda }_{1}=0$, and the Hessian is negative for all ${\lambda }_{1}$ such that $3{\lambda }_{1}^{2}<1$.
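The claims about this toy example are easy to verify: for $\varphi \left({\lambda }_{1}\right)=2{\left(1-{\lambda }_{1}^{2}\right)}^{2}$ the second derivative is ${\varphi }^{\mathrm{\prime }\mathrm{\prime }}=8\left(3{\lambda }_{1}^{2}-1\right)$, which is negative exactly when $3{\lambda }_{1}^{2}<1$.

```python
# Toy example from the text: phi(lambda_1) = 2*(1 - lambda_1^2)^2.
phi = lambda lam: 2.0 * (1.0 - lam ** 2) ** 2
d2phi = lambda lam: 8.0 * (3.0 * lam ** 2 - 1.0)   # second derivative

print(phi(1.0), phi(-1.0))   # 0.0 0.0  -- the global minimizers
print(phi(0.0))              # 2.0      -- the local maximizer
print(d2phi(0.0))            # -8.0     -- negative curvature, since 3*0^2 < 1
```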

This method has entirely different features from projection methods, some good, some bad, which suggests that a combination of the two might be successful. Projection methods are globally convergent and hence potentially reliable, but the rate of convergence is first order or slower, which can be very inefficient. Quasi-Newton methods are reliable and locally superlinearly convergent, but they require that the correct rank ${r}^{\ast }$ is known. Therefore hybrid methods should be established in which the projection algorithm is used sparingly as a way of establishing the correct rank, while the BFGS method is used to provide rapid convergence.

## 4 Numerical results

In this section we compare three methods: our method, a hybrid method, and an unconstrained method from the same earlier work. The algorithms have been tested on randomly generated distance matrices F with values distributed between $10^{-3}$ and $10^{3}$. All calculations were performed with Matlab 8. Figure 1 compares the line searches and CPU time of the three methods. The termination criterion for all three methods is $\parallel {D}^{\left(k\right)}-{D}^{\left(k-1\right)}\parallel <{10}^{-5}$. All methods converge to essentially the same values.

Figure 1 Comparing the line searches and CPU time of the three methods for the Euclidean distance matrix problem.

In Figure 1, the upper plot shows that the number of line searches for our method is slightly lower than for the unconstrained method and higher than for the hybrid method. However, the lower plot makes clear that our method is much faster; this is because it works with $\frac{r\left(r+1\right)}{2}$ fewer unknowns, which reduces the CPU time. The hybrid method requires far fewer line searches than either of the other two, but it consumes much more time than our method because it starts with a projection method. This makes our method the more efficient and faster choice.

The housekeeping associated with each line search is $O\left({n}^{2}\right)$. Also, if care is taken, it is possible to calculate $\varphi \left(X\right)$ and $\mathrm{\nabla }\varphi \left(X\right)$ in $O\left({n}^{2}\right)$ operations. The initial value ${r}^{\left(0\right)}$ is tabulated, and r is increased by one until the solution is found. The total number of line searches is also tabulated, and it is found that fewer line searches are required as r increases. The initial value ${r}^{\left(0\right)}=6$ is rather arbitrary: a smaller value of ${r}^{\left(0\right)}$ would have given an even larger number of line searches.

## Declarations

### Acknowledgements

The author is grateful to King Fahd University of Petroleum & Minerals for providing excellent research facilities.

## Authors’ Affiliations

(1)
Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia

## References
