On stability of generalized (affine) phase retrieval in the complex case

Abstract

In this paper, we discuss the stability of generalized phase retrieval and generalized affine phase retrieval in the complex case. Using the realification method, we obtain the bi-Lipschitz property in the noiseless case and the Cramer–Rao lower bound in the noisy case.

1 Introduction

The phase retrieval problem aims to recover a signal from its intensity measurements; it naturally arises in various fields of physics, such as crystallography [14], radar [12], and electron microscopy [11]. Most of the successful instances correspond to two-dimensional signals. For one-dimensional signals, the uniqueness and algorithms of Fourier phase retrieval problems are discussed in [5]. Frame-based phase retrieval, which addresses signal reconstruction from the absolute values of frame coefficients, was introduced by Balan et al. [2]. It is crucial that the measurement mapping has the bi-Lipschitz property, since bi-Lipschitz maps preserve density points and dispersion points [6]. The Cramer–Rao lower bound (CRLB) is a lower bound on the variance of any unbiased estimator, which allows us to assert that an estimator is a minimum variance unbiased estimator. The bi-Lipschitz property and CRLB of frame-based phase retrieval are given in [1, 3, 4]. Generalized phase retrieval (GPR) was introduced by Yang Wang and Zhiqiang Xu [16]; it unifies and extends results on standard phase retrieval, phase retrieval by projections, and low-rank matrix recovery. Affine phase retrieval aims to recover signals from the magnitudes of affine measurements [8]. Necessary and sufficient conditions as well as the minimal number of measurements are given in [10]. The bi-Lipschitz property and CRLB of GPR and generalized affine phase retrieval (GAPR) for real signals are discussed in [17]. However, GPR and GAPR problems with complex signals are also encountered frequently in fields such as optics [15], quantum information [9], and interferometry [7], which leads us to address the complex case in this paper.

Let \(H_{n}(\mathbb {C}) \) denote the set of \(n \times n \) Hermitian matrices over complex field \(\mathbb {C}\). For any given matrix sequence \(A = \{A_{j}\}^{m}_{j =1}\subset H_{n}(\mathbb {C}) \), define the map \(M_{A} : \mathbb {C}^{n}\rightarrow\mathbb{R}^{m} \) by

$$M_{A}(x) = \bigl(x^{*}A_{1}x, \dots, x^{*}A_{m}x \bigr), $$

where \(x^{*} \) denotes the conjugate transpose of x. We say that A is generalized phase retrievable if \(M_{A} \) is injective up to a global phase, which means that, for every \(x_{0}\in \mathbb {C}^{n} \),

$$\bigl\{ x\in \mathbb {C}^{n}:M_{A}(x)=M_{A}(x_{0}) \bigr\} = \bigl\{ cx_{0}: \vert c \vert =1,c\in \mathbb {C}\bigr\} . $$

Similarly, if \(A_{j} \) is positive semidefinite for \(j=1,\dots,m \), we can define the map \(\sqrt{M_{A}} : \mathbb {C}^{n}\rightarrow\mathbb{R}^{m} \) by

$$\sqrt{M_{A}}(x) = \bigl(\sqrt{x^{*}A_{1}x}, \dots, \sqrt{x^{*}A_{m}x} \bigr). $$
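
For illustration, the following NumPy sketch (not part of the paper; the matrices, dimensions, and helper names are placeholders) evaluates \(M_{A} \) and \(\sqrt{M_{A}} \) and checks that \(M_{A} \) cannot distinguish x from cx with \(|c|=1 \):

```python
import numpy as np

def M_A(A_list, x):
    # M_A(x) = (x* A_1 x, ..., x* A_m x); real-valued because each A_j is Hermitian
    return np.array([np.real(np.vdot(x, A @ x)) for A in A_list])

def sqrt_M_A(A_list, x):
    # defined when every A_j is positive semidefinite
    return np.sqrt(np.maximum(M_A(A_list, x), 0.0))

rng = np.random.default_rng(0)
n, m = 3, 8
A_list = []
for _ in range(m):
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A_list.append((G + G.conj().T) / 2)              # random Hermitian matrix
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
# M_A cannot distinguish x from cx with |c| = 1:
print(np.allclose(M_A(A_list, x), M_A(A_list, np.exp(0.7j) * x)))   # True
```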

Let \(B(\mathbb {C}^{n}) \) and \(B(\mathbb {R}^{2n}) \) denote the sets of bounded linear operators on \(\mathbb {C}^{n} \) and \(\mathbb {R}^{2n} \) respectively. For any \(T\in B(\mathbb {C}^{n}) \), the nuclear norm of T denoted by \(\lVert{T} \rVert_{1} \) is given by the 1-norm of its singular values. We still denote the operator norm of T by \(\lVert{T} \rVert\). Given two vectors \(x,y \in \mathbb {C}^{n}\), we define metrics \(d(x,y)= \lVert{x-y} \rVert\), \(d_{1}(x,y)= \min_{|\alpha|=1} \lVert{x-\alpha y} \rVert\) and matrix metric

$$d_{2}(x,y)= \bigl\lVert {xx^{*}-yy^{*}} \bigr\rVert _{1}= \sqrt{ \bigl( \lVert{x} \rVert^{2}+ \lVert{y} \rVert^{2} \bigr)^{2}- 4 \bigl\vert \langle x, y \rangle \bigr\vert ^{2}}, $$

which corresponds to the nuclear norm.
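
A short NumPy sketch of the metrics (illustrative only; the closed form used for \(d_{1} \) is the standard minimum over the unit circle and is not spelled out until Lemma 2.1). The last line checks the stated closed form for \(d_{2} \) against the nuclear norm of \(xx^{*}-yy^{*} \):

```python
import numpy as np

def d(x, y):
    return np.linalg.norm(x - y)

def d1(x, y):
    # min_{|alpha|=1} ||x - alpha y||; the minimizer is alpha = <x,y>/|<x,y>|
    ip = abs(np.vdot(y, x))
    return np.sqrt(max(np.linalg.norm(x)**2 + np.linalg.norm(y)**2 - 2 * ip, 0.0))

def d2(x, y):
    # ||xx* - yy*||_1 via the closed form from Sect. 1
    ip = abs(np.vdot(y, x))
    return np.sqrt((np.linalg.norm(x)**2 + np.linalg.norm(y)**2)**2 - 4 * ip**2)

rng = np.random.default_rng(1)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
T = np.outer(x, x.conj()) - np.outer(y, y.conj())
print(np.isclose(d2(x, y), np.linalg.svd(T, compute_uv=False).sum()))   # True
```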

Let \(B_{j} \in \mathbb {C}^{r_{j}\times n} \) and \(b_{j} \in \mathbb {C}^{r_{j}} \), where \(r_{j} \) is a positive integer. The GAPR problem aims to recover a signal \(x\in \mathbb {C}^{n} \) from the norms of the affine linear measurements \(\{ \lVert{B_{j}x+b_{j}} \rVert\}_{j=1}^{m} \). Let \(B=\{B_{j}\}_{j=1}^{m} \) and \(b=\{b_{j}\}_{j=1}^{m} \). We define the map \(M_{B,b}:\mathbb {C}^{n} \rightarrow \mathbb {R}^{m} \) by

$$M_{B,b}(x)= \bigl( \lVert{B_{1}x+b_{1}} \rVert^{2}, \lVert {B_{2}x+b_{2}} \rVert^{2} ,\dots, \lVert{B_{m}x+b_{m}} \rVert^{2} \bigr). $$

The pair \((B,b) \) is said to be generalized affine phase retrievable for \(\mathbb {C}^{n} \) if \(M_{B,b} \) is injective on \(\mathbb {C}^{n} \). Note that if \((B,b) \) is generalized affine phase retrievable, one can recover the signal x exactly, rather than merely up to a global phase.
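
A minimal NumPy sketch of the affine measurement map (the matrices \(B_{j} \) and vectors \(b_{j} \) below are random placeholders); note that, unlike \(M_{A} \), it is in general not invariant under a global phase, which is consistent with exact recovery:

```python
import numpy as np

def M_Bb(B_list, b_list, x):
    # M_{B,b}(x) = (||B_1 x + b_1||^2, ..., ||B_m x + b_m||^2)
    return np.array([np.linalg.norm(B @ x + b)**2 for B, b in zip(B_list, b_list)])

rng = np.random.default_rng(2)
n, m, r = 3, 12, 2
B_list = [rng.standard_normal((r, n)) + 1j * rng.standard_normal((r, n)) for _ in range(m)]
b_list = [rng.standard_normal(r) + 1j * rng.standard_normal(r) for _ in range(m)]
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.allclose(M_Bb(B_list, b_list, x), M_Bb(B_list, b_list, np.exp(0.7j) * x)))  # False
```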

Our study mainly focuses on the stability of GPR and GAPR in the complex case. In Sect. 2, we establish bi-Lipschitz inequalities for GPR and GAPR with respect to appropriate metrics. In Sect. 3, we present the Cramer–Rao lower bounds for noisy GPR and GAPR.

2 Bi-Lipschitz stability

In this section, we discuss the bi-Lipschitz property of GPR and GAPR in the complex case. Realification of the complex vectors and operators is our main method to deal with the phase retrieval problems in the complex case. We consider the \(\mathbb{R} \)-linear map \(\mathbf{j} : \mathbb{C}^{n} \rightarrow\mathbb{R}^{2 n} \) defined by

$$\mathbf{j}(z)=\left [ \textstyle\begin{array}{c}{\Re(z)} \\ {\Im(z)} \end{array}\displaystyle \right ],\quad z\in \mathbb {C}^{n}, $$

where \(\Re(z) \) and \(\Im(z) \) are the real part and the imaginary part of z respectively. Then, for any \(x, y \in\mathbb{R}^{n} \), the inverse operator of j is given by

$$\mathbf {j}^{-1}\left [ \textstyle\begin{array}{l}{x} \\ {y} \end{array}\displaystyle \right ] =x+iy. $$
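
Both maps admit a direct implementation, as in the following NumPy sketch (illustrative only; the names jmap and jinv are chosen here and are not from the paper):

```python
import numpy as np

def jmap(z):
    # j(z) = [Re(z); Im(z)] in R^{2n}
    return np.concatenate([z.real, z.imag])

def jinv(v):
    # inverse map: (x, y) in R^{2n}  ->  x + i y in C^n
    n = v.size // 2
    return v[:n] + 1j * v[n:]

z = np.array([1.0 + 2.0j, -3.0j, 0.5])
print(np.allclose(jinv(jmap(z)), z))   # True
```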

We transfer operators in \(B(\mathbb {C}^{n}) \) to operators in \(B(\mathbb {R}^{2n}) \) by

$$\tau: B\bigl(\mathbb {C}^{n}\bigr) \rightarrow B \bigl( \mathbb {R}^{2n} \bigr), \quad\tau(T)= \mathbf {j}T\mathbf {j}^{-1}. $$

As in reference [1], for any two vectors \(u, v \in \mathbb {C}^{n} \), we define their symmetric outer product by

$$[\![ u, v ]\! ]= \frac{1}{2} \bigl(u v^{*}+v u^{*} \bigr). $$

Let \(I_{n} \) be the \(n \times n \) identity matrix and

$$J=\left [ \textstyle\begin{array}{c@{\quad}c}{0} & {-I_{n}} \\ {I_{n}} & {0} \end{array}\displaystyle \right ]. $$

Then the transpose of J is \(J^{T}=-J \), and by direct computation we have

$$\tau\bigl([\![ u, v ]\! ]\bigr)=[\![ \xi, \eta]\! ]+J[\![ \xi, \eta]\! ]J^{T}, $$

where \(\xi=\mathbf {j}(u) \), \(\eta=\mathbf {j}(v) \), and \([\![ \xi, \eta]\!]=\frac{1}{2}(\xi\eta^{T}+\eta\xi^{T}) \).

Denote the real part and the imaginary part of a complex matrix \(A_{j}\in H_{n}(\mathbb {C}) \) by \(D_{j} \) and \(C_{j} \) respectively. Then we have \(A_{j}=D_{j}+iC_{j} \), \(D_{j}^{T}=D_{j} \), \(C_{j}^{T}=-C_{j} \), and

$$\tau (A_{j} )= \left [ \textstyle\begin{array}{c@{\quad}c}{D_{j}} & {-C_{j}} \\ {C_{j}} & {D_{j}} \end{array}\displaystyle \right ]. $$
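
In code, the realification of a complex matrix is exactly this block matrix; the NumPy sketch below (illustrative only) also checks the defining relation \(\tau(A)\mathbf {j}(z)=\mathbf {j}(Az) \):

```python
import numpy as np

def tau(A):
    # tau(A) = [[D, -C], [C, D]] for A = D + iC
    D, C = A.real, A.imag
    return np.block([[D, -C], [C, D]])

rng = np.random.default_rng(3)
n = 3
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (G + G.conj().T) / 2                              # random Hermitian matrix
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
jz = np.concatenate([z.real, z.imag])                 # realification of z
jAz = np.concatenate([(A @ z).real, (A @ z).imag])    # realification of Az
print(np.allclose(tau(A) @ jz, jAz))                  # True: tau(A) j(z) = j(Az)
```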

Let \(\operatorname {Tr}(A_{j}) \) be the trace of \(A_{j} \). It depends only on the real part of \(A_{j} \), since the antisymmetry of \(C_{j} \) gives \(\operatorname {Tr}(A_{j})=\operatorname {Tr}(D_{j})+i\operatorname {Tr}(C_{j})=\operatorname {Tr}(D_{j}) \). Furthermore, the trace of the realification of \(A_{j} \) can be computed by \(\operatorname {Tr}(\tau(A_{j}))=2\operatorname {Tr}(D_{j})=2\operatorname {Tr}(A_{j}) \). Let \(\langle T, G \rangle_{\mathrm{HS}}=\operatorname {Tr}(TG^{*}) \) be the Hilbert–Schmidt inner product of T and G. Since the trace of the product of a symmetric matrix and an antisymmetric matrix equals zero, one can easily prove that

$$ \bigl\langle \tau(T), \tau(G) \bigr\rangle _{\mathrm{HS}}= 2 \langle T, G \rangle_{\mathrm{HS}},\quad\forall T,G\in H_{n}(\mathbb {C}). $$
(2.1)

Since \(J^{T}\tau(A_{j})J=\tau(A_{j}) \), we have

$$\begin{aligned} \bigl\langle \tau (A_{j} ), J[\![ \xi, \eta]\! ] J^{T} \bigr\rangle _{\mathrm{HS}} =& \operatorname {Tr}\bigl(\tau(A_{j})J [\![ \xi, \eta]\! ] J^{T} \bigr) \\ =& \operatorname {Tr}\bigl(J^{T}\tau(A_{j})J[\![ \xi, \eta]\! ] \bigr) \\ =& \bigl\langle \tau (A_{j} ), [\![ \xi, \eta]\! ] \bigr\rangle _{\mathrm{HS}}. \end{aligned}$$

Therefore, we obtain the relationship before and after the realification:

$$\bigl\langle \tau(A_{j}), \tau[\![ u, v ]\!] \bigr\rangle _{\mathrm{HS}}= \bigl\langle \tau (A_{j} ), [\![ \xi, \eta]\!] \bigr\rangle _{\mathrm{HS}}+ \bigl\langle \tau (A_{j} ), J[\![ \xi, \eta]\!] J^{T} \bigr\rangle _{\mathrm{HS}}= 2 \bigl\langle \tau (A_{j} ), [\![ \xi, \eta]\!] \bigr\rangle _{\mathrm{HS}}. $$

Since \(\tau(A_{j}) \) is symmetric, the Hilbert–Schmidt inner product can be simplified as follows:

$$\bigl\langle \tau (A_{j} ), [\![ \xi, \eta]\!] \bigr\rangle _{\mathrm{HS}}= \eta^{T}\tau( A_{j})\xi= \bigl\langle \tau( A_{j})\xi, \eta \bigr\rangle . $$

It follows that

$$\bigl\langle \tau(A_{j}), \tau[\![ u, v ]\!] \bigr\rangle _{\mathrm{HS}}= 2 \bigl\langle \tau( A_{j})\xi, \eta \bigr\rangle . $$

Let

$$R(\xi)= \sum_{j=1}^{m} \tau(A_{j})\xi\xi^{T}\tau(A_{j}). $$

The summation can be written as

$$ \sum_{j=1}^{m} \bigl\vert \bigl\langle \tau(A_{j}), \tau[\![ u, v ]\!] \bigr\rangle _{\mathrm{HS}} \bigr\vert ^{2} =4 \bigl\langle R(\xi)\eta, \eta \bigr\rangle . $$
(2.2)
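
Identity (2.2) can be checked numerically. The following NumPy sketch (with random placeholder Hermitian matrices; the helper names tau, jmap, and R are chosen for illustration) compares both sides for random u and v:

```python
import numpy as np

def tau(A):
    D, C = A.real, A.imag
    return np.block([[D, -C], [C, D]])

def jmap(z):
    return np.concatenate([z.real, z.imag])

def R(A_list, xi):
    # R(xi) = sum_j tau(A_j) xi xi^T tau(A_j)
    return sum(tau(A) @ np.outer(xi, xi) @ tau(A) for A in A_list)

rng = np.random.default_rng(4)
n, m = 3, 7
A_list = []
for _ in range(m):
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A_list.append((G + G.conj().T) / 2)
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
xi, eta = jmap(u), jmap(v)
suv = (np.outer(u, v.conj()) + np.outer(v, u.conj())) / 2        # symmetric outer product [[u, v]]
lhs = sum(np.trace(tau(A) @ tau(suv).T)**2 for A in A_list)       # sum_j |<tau(A_j), tau([[u,v]])>|^2
rhs = 4 * eta @ R(A_list, xi) @ eta
print(np.isclose(lhs, rhs))                                       # True
```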

The following theorem gives an equivalent condition for a set of matrices to be phase retrievable.

Theorem 2.1

([16])

Let \(A= \{A_{j} \}_{j=1}^{m} \subset\mathbf {H}_{n}(\mathbb{C}) \). The following are equivalent:

  1. (1)

    A has the phase retrieval property.

  2. (2)

There exist no \(v, u \neq0 \) in \(\mathbb{C}^{n} \) with \(u \neq i c v \) for any \(c \in\mathbb{R} \) such that \(\Re (v^{*} A_{j} u )=0 \) for all \(1 \leq j \leq m \).

2.1 Stability of GPR

In this subsection, we discuss the bi-Lipschitz property of GPR in the complex case. For a set \(A= \{A_{j} \}_{j=1}^{m} \subset\mathbf{H}_{n}(\mathbb{C})\), we define the map \(\mathcal{A }\) on the set \(B(\mathbb {C}^{n}) \) by

$$\mathcal{A} : B\bigl(\mathbb {C}^{n}\bigr) \rightarrow \mathbb{C}^{m}, \quad \bigl(\mathcal{A}(T)\bigr)_{j}= \operatorname {Tr}\bigl(T A_{j}^{*} \bigr), \quad1 \leq j \leq m. $$

We denote by \(S^{1,1} \) the set of Hermitian matrices that have at most one positive eigenvalue and at most one negative eigenvalue. Then we have the following equivalent conditions for the set A to be phase retrievable.

Theorem 2.2

Let \(A= \{A_{j} \}_{j=1}^{m} \subset\mathbf{H}_{n}(\mathbb{C}) \). Then the following are equivalent:

  1. (1)

    A has the phase retrieval property.

  2. (2)

    \(\ker(\mathcal{A}) \cap S^{1,1}=\{0\} \).

  3. (3)

    There is a constant \(a_{0}>0 \) such that, for every \(u,v \in \mathbb {C}^{n} \),

    $$ \sum_{j=1}^{m} \bigl( \Re \bigl(v^{*} A_{j} u \bigr) \bigr)^{2}\geq a_{0} \bigl[ \Vert u \Vert ^{2} \Vert v \Vert ^{2} -\bigl( \Im\langle u, v\rangle\bigr)^{2} \bigr] =a_{0} \bigl\lVert { [\![ u, v ]\!]} \bigr\rVert _{1}^{2}. $$
    (2.3)
  4. (4)

    There is a constant \(a_{0}>0 \) such that, for all nonzero \(\xi\in \mathbb {R}^{2n} \),

    $$ R(\xi) \geq a_{0} \Vert \xi \Vert ^{2} P_{J \xi}^{\perp}, $$
    (2.4)

    where the inequality is in the sense of quadratic forms and \(P_{J \xi}^{\perp}=I-\frac{1}{\|\xi\|^{2}} J \xi\xi^{T} J^{T} \).

  5. (5)

    For any nonzero \(\xi\in \mathbb {R}^{2n} \), \(\operatorname {rank}(R(\xi))=2n-1 \).

Proof

(1) \(\Leftrightarrow\) (2). If there exists a nonzero operator \(T\in \operatorname{ker}(\mathcal{A}) \cap S^{1,1} \), by [1, Lemma 3.7], there exist nonzero vectors \(u,v \) such that \(T=[\![ u, v ]\!] \). Since \(T\ne0 \), we have \(u\ne icv \) for any \(c\in \mathbb {R}\). Furthermore, \(T\in\ker(\mathcal{A}) \) implies

$$\bigl\lVert {\mathcal{A}(T)} \bigr\rVert ^{2}= \sum _{j=1}^{m} \bigl\vert \bigl\langle A_{j}, [\![ u, v ]\!] \bigr\rangle _{\mathrm{HS}} \bigr\vert ^{2} = \sum_{j=1}^{m}\bigl(\Re\bigl(v^{*}A_{j}u \bigr)\bigr)^{2}=0, $$

which contradicts condition (2) of Theorem 2.1. The converse implication can be proved similarly.

(3) \(\Rightarrow\) (2). If \(T\in\operatorname{ker}(\mathcal{A}) \cap S^{1,1} \), then there exist vectors u, v such that \(T=[\![ u, v ]\!]\) and

$$0=\sum_{j=1}^{m} \bigl\vert \bigl\langle A_{j}, [\![ u, v ]\!] \bigr\rangle _{\mathrm{HS}} \bigr\vert ^{2} \geq a_{0} \bigl\lVert {[\![ u, v ]\!]} \bigr\rVert _{1}^{2}, $$

which means \(T=0 \).

(2) \(\Rightarrow\) (3). Since \(\operatorname{ker}(\mathcal{A}) \cap S^{1,1}=\{0\} \), we have \(\lVert{\mathcal{A}(T)} \rVert^{2}= \sum_{j=1}^{m}| \langle A_{j}, [\![ u, v ]\!] \rangle_{\mathrm{HS}}|^{2} >0 \) for all nonzero \(T=[\![ u, v ]\!]\in S^{1,1} \). Let

$$a_{0}= \min_{T\in S^{1,1}, \lVert{T} \rVert_{1}=1}\sum_{j=1}^{m} \bigl\vert \langle A_{j}, T \rangle_{\mathrm{HS}} \bigr\vert ^{2}. $$

The set \(\{T\in S^{1,1}: \lVert{T} \rVert_{1}=1\} \) is compact, so the minimum is attained and \(a_{0}>0 \). Since \(S^{1,1} \) is a cone in \(B(\mathbb {C}^{n}) \), the homogeneity and continuity of the mapping \(T\mapsto\sum_{j=1}^{m} | \langle A_{j}, T \rangle_{\mathrm{HS}}|^{2} \) imply

$$ \sum_{j=1}^{m} \bigl\vert \bigl\langle A_{j}, [\![ u, v ]\!] \bigr\rangle _{\mathrm{HS}} \bigr\vert ^{2} \geq a_{0} \bigl\lVert {[\![ u, v ]\!]} \bigr\rVert _{1}^{2}. $$
(2.5)

By [1, Lemma 3.8], we have

$$ \bigl\lVert {[\![ u, v ]\!]} \bigr\rVert _{1}^{2}= \bigl[ \Vert u \Vert ^{2} \Vert v \Vert ^{2} - \bigl(\Im\langle u, v\rangle\bigr)^{2} \bigr]. $$
(2.6)

Substituting \(\langle A_{j}, [\![ u, v ]\!] \rangle_{\mathrm{HS}}= \Re(v^{*}A_{j}u) \) and (2.6) into (2.5), we get the desired inequality.

(3) \(\Leftrightarrow\) (4). By formula (2.1), we have

$$\bigl\langle \tau(A_{j}), \tau[\![ u, v ]\!] \bigr\rangle _{\mathrm{HS}} =2 \bigl\langle A_{j}, [\![ u, v ]\!] \bigr\rangle _{\mathrm{HS}} =2\Re\bigl(v^{*}A_{j}u\bigr). $$

Hence (2.3) is equivalent to

$$ \frac{1}{4}\sum_{j=1}^{m} \bigl\vert \bigl\langle \tau (A_{j} ), \tau\bigl([\![ u, v ]\!]\bigr) \bigr\rangle _{\mathrm{HS}} \bigr\vert ^{2} \geq a_{0} \bigl[ \Vert u \Vert ^{2} \Vert v \Vert ^{2}- \bigl(\Im\langle u, v\rangle\bigr)^{2} \bigr]. $$
(2.7)

Since \(\|u\|=\|\xi\|,\|v\|=\|\eta\| \), and \((\Im \langle u, v \rangle)^{2}=\eta^{T} J\xi\xi^{T}J^{T}\eta\), we have

$$ \Vert u \Vert ^{2} \Vert v \Vert ^{2}-\bigl(\Im\langle u, v\rangle\bigr)^{2} = \lVert{\xi} \rVert^{2}\eta^{T}\eta-\eta^{T} J\xi\xi ^{T}J^{T}\eta = \bigl\langle \bigl( \Vert \xi \Vert ^{2}I_{2n}-J \xi\xi^{T} J^{T} \bigr) \eta, \eta \bigr\rangle . $$
(2.8)

Substituting (2.2) and (2.8) into (2.7), we obtain the desired inequality.

(4) \(\Leftrightarrow\) (5). The symmetry of \(D_{j} \) and the antisymmetry of \(C_{j} \) imply \(\xi^{T}\tau(A_{j})J\xi=0 \). As a result, we have

$$R(\xi) J\xi=0. $$

Considering that \(R(\xi) \) is a \(2n\times2n \) real matrix, the rank of \(R(\xi) \) can not be greater than \(2n-1 \). If it is less than \(2n-1 \), then there exists a nonzero vector η such that \(\langle\eta, J\xi \rangle=0 \) and \(R(\xi)\eta=0 \). Consequently, we have \(\eta^{T}R(\xi)\eta=0 \) and

$$ a_{0}\eta^{T} \lVert{\xi} \rVert^{2}P_{J\xi}^{\perp} \eta= a_{0}\bigl( \lVert{\xi} \rVert^{2} \lVert{\eta} \rVert^{2}-\eta^{T}J\xi\xi^{T}J^{T}\eta \bigr)= a_{0} \lVert{\xi} \rVert^{2} \lVert{\eta} \rVert^{2}>0, $$

which contradicts (2.4). This proves (5). Conversely, assume \(\operatorname {rank}(R(\xi))=2n-1 \) for all \(\xi\neq0 \). Let \(a(\xi) \) be the smallest nonzero eigenvalue of \(R(\xi) \). Then we have \(R(\xi)\geq a(\xi)P_{J\xi}^{\perp}\). Define \(a_{0}=\min_{ \lVert{\xi} \rVert=1}a(\xi) \). Then the constant \(a_{0}>0 \). Since \(R(\xi) \) is homogeneous of degree two in ξ, we have \(a(\xi) \geq a_{0} \lVert{\xi } \rVert^{2}\), and hence \(R(\xi)\geq a_{0} \lVert{\xi} \rVert^{2}P_{J\xi}^{\perp}\). This proves (4). □
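
Condition (5) is convenient for a quick numerical sanity check, as in the NumPy sketch below (a finite random test is only evidence of, not a proof of, phase retrievability; the matrices and sample sizes are placeholders):

```python
import numpy as np

def tau(A):
    D, C = A.real, A.imag
    return np.block([[D, -C], [C, D]])

def R(A_list, xi):
    return sum(tau(A) @ np.outer(xi, xi) @ tau(A) for A in A_list)

rng = np.random.default_rng(5)
n, m = 3, 4 * n            # a comfortably large number of measurements (see [16] for sharp counts)
A_list = []
for _ in range(m):
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A_list.append((G + G.conj().T) / 2)
ranks = {np.linalg.matrix_rank(R(A_list, rng.standard_normal(2 * n))) for _ in range(50)}
print(ranks)               # expected {2n - 1} = {5} when A has the phase retrieval property
```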

Theorem 2.3

Let \(\{A_{j}\}_{j=1}^{m} \subset H_{n} (\mathbb {C}) \) be generalized phase retrievable. Then \(M_{A} \) is bi-Lipschitz with respect to the metric \(d_{2}(x,y)= \lVert{xx^{*}-yy^{*}} \rVert_{1} \), with the upper Lipschitz bound

$$B_{0}=\sqrt{\max_{\xi\in \mathbb {R}^{2n}, \Vert \xi \Vert =1} \bigl\Vert R(\xi) \bigr\Vert } $$

and the lower Lipschitz bound

$$A_{0}= \sqrt{\min_{\xi\in \mathbb {R}^{2n},\|\xi\|=1} a_{2 n-1}\bigl(R( \xi)\bigr)}, $$

where \(a_{2n-1}(R(\xi)) \) is the smallest nonzero eigenvalue of \(R(\xi) \).

Proof

For any \(x,y\in \mathbb {C}^{n} \), the definition of \(M_{A} \) gives

$$\bigl\Vert M_{A}(x)-M_{A}(y) \bigr\Vert ^{2}= \sum_{j=1}^{m} \bigl\vert x^{*} A_{j} x-y^{*} A_{j} y \bigr\vert ^{2}= \sum_{j=1}^{m} \bigl\vert \bigl\langle A_{j}, x x^{*}-y y^{*} \bigr\rangle _{\mathrm{HS}} \bigr\vert ^{2}. $$

Substituting \(u=x+y \) and \(v=x-y \) (so that \(xx^{*}-yy^{*}=[\![ u, v ]\!] \)) into the above equation and applying (2.1) and (2.2), we have

$$\bigl\Vert M_{A}(x)-M_{A}(y) \bigr\Vert ^{2}= \sum_{j=1}^{m} \bigl\vert \bigl\langle A_{j},[\![ u, v ]\!] \bigr\rangle _{\mathrm{HS}} \bigr\vert ^{2}= \frac{1}{4} \sum _{j=1}^{m} \bigl\vert \bigl\langle \tau (A_{j} ), \tau[\![ u, v ]\!] \bigr\rangle _{\mathrm{HS}} \bigr\vert ^{2} =\bigl\langle R(\xi)\eta, \eta\bigr\rangle , $$

where \(\xi=\mathbf {j}(u) \), \(\eta=\mathbf {j}(v) \). As shown in (2.8), \(\|[\![ u, v ]\!]\|_{1}^{2}= \|\xi\|^{2}\langle P_{J \xi}^{\perp} \eta, P_{J \xi}^{\perp} \eta\rangle\) holds true. This implies that, for \(\eta\in\{ \operatorname {span} \{J\xi\} \}^{\perp}\), we have

$$\sup_{\xi, \eta\neq0} \frac{\langle R(\xi) \eta, \eta\rangle }{ \Vert [\![ u, v ]\!] \Vert _{1}^{2}} =\sup_{\xi, \eta\neq0} \frac{\langle R(\xi) \eta, \eta\rangle}{ \Vert \xi \Vert ^{2} \langle\eta, \eta \rangle} =\sup_{\xi\neq0} \frac{ \Vert R(\xi) \Vert }{ \Vert \xi \Vert ^{2}} =\max _{ \Vert \xi \Vert =1} \bigl\Vert R(\xi) \bigr\Vert =B_{0}^{2}. $$

Since \(\langle R(\xi)\eta, \eta \rangle=0 \) and \(\|[\![ u, v ]\!]\|_{1}^{2}=0 \) for \(\eta\in \operatorname {span} \{J\xi\} \), we conclude that

$$\bigl\lVert {M_{A}(x)-M_{A}(y)} \bigr\rVert \leq B_{0} \bigl\lVert {[\![ u, v ]\!]} \bigr\rVert _{1}=B_{0} \bigl\lVert {xx^{*}-yy^{*}} \bigr\rVert _{1}. $$

Similarly, we have

$$\bigl\lVert {M_{A}(x)-M_{A}(y)} \bigr\rVert \geq A_{0} \bigl\lVert {xx^{*}-yy^{*}} \bigr\rVert _{1}. $$

 □
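
The bounds \(A_{0} \) and \(B_{0} \) of Theorem 2.3 can be approximated by sampling ξ on the unit sphere, as in the following NumPy sketch (a Monte-Carlo approximation with random placeholder matrices, not the exact extrema):

```python
import numpy as np

def tau(A):
    D, C = A.real, A.imag
    return np.block([[D, -C], [C, D]])

def R(A_list, xi):
    return sum(tau(A) @ np.outer(xi, xi) @ tau(A) for A in A_list)

rng = np.random.default_rng(6)
n, m = 3, 12
A_list = []
for _ in range(m):
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A_list.append((G + G.conj().T) / 2)

B0_sq, A0_sq = 0.0, np.inf
for _ in range(500):
    xi = rng.standard_normal(2 * n)
    xi /= np.linalg.norm(xi)
    eigs = np.sort(np.linalg.eigvalsh(R(A_list, xi)))   # ascending; eigs[0] ~ 0 (direction J xi)
    B0_sq = max(B0_sq, eigs[-1])                        # ||R(xi)||
    A0_sq = min(A0_sq, eigs[1])                         # a_{2n-1}(R(xi)), smallest nonzero eigenvalue
print(np.sqrt(A0_sq), np.sqrt(B0_sq))                   # approximate A_0 and B_0
```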

The following lemma states the relationship between the metrics \(d_{1}\) and \(d_{2} \) defined in Sect. 1.

Lemma 2.1

For any \(x,y \in \mathbb {C}^{n} \) with \(\lVert{x} \rVert+ \lVert{y} \rVert\neq0 \), the metrics \(d_{1} \) and \(d_{2} \) have the relationship

$$ d_{1}^{2}(x, y) \leq \frac{d_{2}^{2}(x, y)}{ \Vert x \Vert ^{2}+ \Vert y \Vert ^{2}}. $$

Proof

Take \(\alpha_{0}= \alpha_{0}(x,y)= \frac{\langle x, y\rangle}{ |\langle x, y \rangle|}\) if \(\langle x, y \rangle\neq0 \), and \(\alpha_{0}=1 \) otherwise. Then the modulus of \(\alpha_{0} \) equals one and

$$ d^{2}_{1}(x,y)= \min_{ \vert \alpha \vert =1} \Vert x- \alpha y \Vert ^{2} \leq\min \bigl\{ \Vert x-\alpha_{0} y \Vert ^{2}, \Vert x+\alpha_{0} y \Vert ^{2} \bigr\} . $$

The parallelogram law gives \(\Vert x-\alpha_{0} y \Vert ^{2}+ \Vert x+\alpha_{0} y \Vert ^{2}=2 (\|x\|^{2}+\| y\|^{2} )\), so \(\max \{ \Vert x-\alpha_{0} y \Vert ^{2}, \Vert x+\alpha_{0} y \Vert ^{2} \} \geq \Vert x \Vert ^{2}+ \Vert y \Vert ^{2} \). It follows that

$$ \min \bigl\{ \Vert x-\alpha_{0} y \Vert ^{2}, \Vert x+\alpha_{0} y \Vert ^{2} \bigr\} \leq\frac{ \Vert x-\alpha_{0} y \Vert ^{2} \Vert x+\alpha_{0} y \Vert ^{2}}{ \Vert x \Vert ^{2}+ \Vert y \Vert ^{2}}. $$

Since \(\langle x, \alpha_{0} y \rangle =|\langle x, y \rangle|\), direct computation gives

$$\begin{aligned} \Vert x-\alpha_{0} y \Vert ^{2} \Vert x+ \alpha_{0} y \Vert ^{2}& = \bigl( \Vert x \Vert ^{2}+ \Vert y \Vert ^{2} \bigr)^{2}- 4 \bigl(\Re\langle x, \alpha_{0} y\rangle \bigr)^{2} \\ &= \bigl( \Vert x \Vert ^{2}+ \Vert y \Vert ^{2} \bigr)^{2} -4 \bigl\vert \langle x, y\rangle \bigr\vert ^{2} \\ &=d^{2}_{2}(x,y). \end{aligned}$$

Combining the above inequalities, we get the relationship of two metrics:

$$ d_{1}^{2}(x, y)= \min_{ \vert \alpha \vert =1} \Vert x- \alpha y \Vert ^{2} \leq \frac{d_{2}^{2}(x, y)}{ \Vert x \Vert ^{2}+ \Vert y \Vert ^{2}}. $$

 □
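
A quick numerical check of Lemma 2.1 on random complex vectors (NumPy sketch; the closed forms for \(d_{1}^{2} \) and \(d_{2}^{2} \) are those discussed in Sect. 1 and in the proof above):

```python
import numpy as np

rng = np.random.default_rng(7)
for _ in range(1000):
    x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    s = np.linalg.norm(x)**2 + np.linalg.norm(y)**2
    ip = abs(np.vdot(y, x))
    d1_sq = s - 2 * ip          # d_1^2(x, y) = min_{|alpha|=1} ||x - alpha y||^2
    d2_sq = s**2 - 4 * ip**2    # d_2^2(x, y)
    assert d1_sq <= d2_sq / s + 1e-9
print("Lemma 2.1 holds on all sampled pairs")
```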

With Lemma 2.1 in hand, one can prove the bi-Lipschitz property of the map \(\sqrt{M_{A}} \) by an argument similar to that of Theorem 2.4 in [17].

Theorem 2.4

Let \(\{A_{j}\}_{j=1}^{m} \subset H_{n}(\mathbb {C}) \) be generalized phase retrievable and let all \(A_{j} \) be positive semidefinite. Then \(\sqrt{ M_{A}} \) is bi-Lipschitz with respect to the metric \(d_{1}(x,y)=\min_{|\alpha|=1} \lVert{x-\alpha y} \rVert \) as follows:

$$\frac{a_{0}}{2C}d_{1}^{2}(x,y)\leq \bigl\lVert { \sqrt{M_{A}}(x)-\sqrt {M_{A}}(y)} \bigr\rVert ^{2} \leq\lambda_{1} d_{1}^{2}(x,y), $$

where C is a uniform upper bound for the operator norms of \(\{A_{j}\}_{j=1}^{m} \) and \(\lambda_{1} \) is the maximum eigenvalue of the matrix \(\sum_{j=1}^{m}A_{j} \).

2.2 Stability of GAPR

In this subsection, we discuss the bi-Lipschitz property of GAPR in the complex case. We show that the bi-Lipschitz bounds involve two different metrics.

Theorem 2.5

Let \(\tilde{A}_{j}=(B_{j},b_{j})^{*}(B_{j},b_{j})\), where \((B_{j},b_{j}) \) denotes the \(r_{j}\times(n+1) \) matrix obtained by appending \(b_{j} \) to \(B_{j} \) as a last column. Suppose that \(\tilde{A}=\{\tilde{A}_{j}\}_{j=1}^{m} \) is a generic set with \(m\geq4n \). Then \((B, b) \) is generalized affine phase retrievable. Furthermore, there exist positive constants \(c_{0}\), \(c_{1}\), \(C_{0}\), \(C_{1} \) depending on \((B, b) \) such that, for any \(x,y \in \mathbb {C}^{n} \),

$$\begin{aligned}& c_{0}\bigl(d^{2}_{2}(x,y)+d^{2}(x,y) \bigr) \leq \bigl\lVert {M_{B,b}(x)-M_{B,b}(y)} \bigr\rVert ^{2} \leq c_{1}\bigl(d^{2}_{2}(x,y)+d^{2}(x,y) \bigr), \end{aligned}$$
(2.9)
$$\begin{aligned}& C_{0} d_{1}^{2}(x,y)\leq \bigl\lVert {\sqrt{M_{B,b}}(x)- \sqrt{M_{B,b}}(y)} \bigr\rVert ^{2}\leq C_{1} d^{2}(x,y). \end{aligned}$$
(2.10)

Proof

Since à is generic, so is \((B,b) \). By Theorem 4.3 in [16], the pair \((B,b) \) is generalized affine phase retrievable. Notice that the identity \(\lVert{B_{j}x+b_{j}} \rVert^{2}= {\tilde{x}}^{*}\tilde{A}_{j}\tilde{x}\) implies that \(M_{B,b}(x)=M_{\tilde{A}}(\tilde{x}) \) holds for \(\tilde{x}=(x^{*},1)^{*} \) with \(x\in \mathbb {C}^{n} \). Combining this with Theorem 2.3, we have

$$ \bigl\lVert {M_{B,b}(x)-M_{B,b}(y)} \bigr\rVert ^{2} \simeq d_{2}^{2}(\tilde{x},\tilde{y}), $$

where the symbol “\(\simeq\)” denotes the bi-Lipschitz relationship. Direct computation of the metric yields

$$d_{2}^{2}(\tilde{x},\tilde{y})= \bigl( \lVert{\tilde{x}} \rVert^{2}+ \lVert{\tilde {y}} \rVert^{2} \bigr)^{2}- 4 \bigl\vert \langle\tilde{x}, \tilde{y} \rangle \bigr\vert ^{2} =d_{2}^{2}(x,y)+4d^{2}(x,y). $$

Therefore, there exist constants \(c_{0}\), \(c_{1} \) such that (2.9) holds. Similarly, by Theorem 2.4, we have

$$ \bigl\lVert {\sqrt{M_{B,b}}(x)-\sqrt{M_{B,b}}(y)} \bigr\rVert ^{2} \simeq d_{1}^{2}(\tilde{x},\tilde{y}). $$

Since \(d_{1}^{2}(\tilde{x},\tilde{y})= \min_{|\alpha|=1} \{ \lVert{x-\alpha y} \rVert^{2}+|1-\alpha|^{2}\} \), we have the inequalities

$$d_{1}^{2}(x,y) \leq d_{1}^{2}( \tilde{x},\tilde{y})\leq d^{2}(x,y). $$

Therefore, there exist constants \(C_{0}\), \(C_{1} \) such that (2.10) holds. □
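
The lifting used in this proof is easy to verify numerically; the NumPy sketch below (with a single random placeholder measurement) checks \(\lVert{B_{j}x+b_{j}} \rVert^{2}= {\tilde{x}}^{*}\tilde{A}_{j}\tilde{x} \):

```python
import numpy as np

rng = np.random.default_rng(8)
n, r = 3, 2
B = rng.standard_normal((r, n)) + 1j * rng.standard_normal((r, n))
b = rng.standard_normal(r) + 1j * rng.standard_normal(r)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

Bb = np.hstack([B, b.reshape(-1, 1)])        # (B_j, b_j) as an r x (n+1) matrix
A_tilde = Bb.conj().T @ Bb                   # Hermitian, positive semidefinite, size (n+1) x (n+1)
x_tilde = np.concatenate([x, [1.0]])         # lifted signal (x, 1)
print(np.isclose(np.linalg.norm(B @ x + b)**2,
                 np.real(np.vdot(x_tilde, A_tilde @ x_tilde))))   # True
```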

3 Cramer–Rao stability

In this section, we discuss the stability of GPR and GAPR in the case of noisy measurements. Let \(x\in \mathbb {R}^{n} \) be a signal and let \(\varphi(x) \) be a real differentiable vector-valued function of x. Assume that the measurement has the form \(Y=\varphi(x)+Z \), where the entries of Z are independent Gaussian random variables with mean 0 and variance \(\sigma^{2} \). The noisy generalized phase retrieval problem is to estimate x from the measurement Y. In this scenario, we apply Fisher information theory to evaluate the Cramer–Rao lower bound for any unbiased estimator of the signal x. The Fisher information matrix is defined entrywise by

$$\bigl(\mathbb {I}(x)\bigr)_{k,\ell} =\mathbb {E}\biggl[ \frac{\partial\log p(y;x)}{ \partial x_{k}} \frac{\partial\log p(y;x)}{ \partial x_{\ell}} \biggr], $$

where \(p(y;x) \) is the probability density function of the random vector Y with vector parameter x, and the expectation \(\mathbb {E}\) is taken with respect to \(p(y;x) \), resulting in a function of x only. Under the above assumption, Y is a random vector with probability density function

$$p(y;x)=\frac{1}{(2\pi\sigma^{2})^{m/2}} e^{-\frac{1}{2\sigma^{2}} \lVert{y-\varphi(x)} \rVert^{2}},\quad y\in\mathfrak{X}, x\in\varTheta. $$

By a routine computation, the Fisher information matrix entry \((\mathbb {I}(x))_{k,\ell} \) equals

$$ \bigl(\mathbb {I}(x)\bigr)_{k,\ell}= \frac{1}{\sigma^{2}} \sum _{j=1}^{m} \frac{\partial \varphi_{j}(x)}{\partial x_{k}} \frac{\partial \varphi_{j}(x)}{\partial x_{\ell}} , $$
(3.1)

where \(\varphi_{j}(x) \) is the jth element of \(\varphi(x) \).
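
Formula (3.1) says that, for the Gaussian model, \(\mathbb {I}(x)=\frac{1}{\sigma^{2}}J_{\varphi}(x)^{T}J_{\varphi}(x) \), where \(J_{\varphi} \) is the Jacobian of φ. The following NumPy sketch (illustrative only; the finite-difference Jacobian is an approximation used here, not part of the paper) computes it for a generic measurement map:

```python
import numpy as np

def fisher_information(phi, x, sigma, h=1e-6):
    # Fisher information (3.1) for Y = phi(x) + Z with i.i.d. N(0, sigma^2) noise:
    # I(x) = (1/sigma^2) J^T J, with J the Jacobian of phi, approximated by central differences.
    p = len(phi(x))
    J = np.zeros((p, x.size))
    for k in range(x.size):
        e = np.zeros(x.size)
        e[k] = h
        J[:, k] = (phi(x + e) - phi(x - e)) / (2 * h)
    return J.T @ J / sigma**2
```

For the GPR model of the next subsection, \(\varphi_{j}(\xi)=\xi^{T}\tau(A_{j})\xi \) has gradient \(2\tau(A_{j})\xi \), and this generic computation reproduces \(\mathbb {I}(\xi)=\frac{4}{\sigma^{2}}R(\xi) \).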

We are now ready to state the Cramer–Rao lower bound theorem.

Theorem 3.1

Assume that the probability density function \(p(y;x) \) satisfies the following conditions on an open set \(\varTheta\subset \mathbb {R}^{n} \):

  1. (1)

    For any \(y\in\mathfrak{X} \) and \(x\in\varTheta\), the probability density function \(p(y;x)>0 \);

  2. (2)

    The derivative \(\partial p(y;x)/\partial x_{j} \) exists and

    $$\int_{\mathfrak{X}}\frac{\partial p(y;x)}{\partial x_{j}}\,dy=0,\quad 1\le j \le n; $$
  3. (3)

    The expectation \(\mathbb {E}\vert \frac{\partial\log p(y;x)}{\partial x_{i}} \frac{\partial \log p(y;x)}{\partial x_{j}} \vert \) is finite and the Fisher information matrix is positive semidefinite.

For any unbiased estimator \(\hat{\delta}(y) \) of a differentiable function \(g(x) \), if integration and differentiation by x can be interchanged in \(\int_{\mathfrak{X}}\hat{\delta}_{j}(y)p(y;x)\,dy \), then we have

$$ \operatorname {Cov}\bigl(\hat{\delta}(y)\bigr)\geq D(x)\mathbb {I}^{\dagger}(x)D(x)^{T}, $$
(3.2)

where \(\mathbb {I}^{\dagger}(x) \) is the Moore–Penrose inverse of the Fisher information matrix \(\mathbb {I}(x) \) and \(D(x) \) is the matrix with elements \(\partial g_{i}(x)/\partial x_{j} \).

Proof

Define the vector-valued function \(S=(S_{1},S_{2},\ldots,S_{n})^{T} \), where \(S_{j}= \partial \log p(y;x)/\partial x_{j} \). Condition (2) implies that \(\mathbb {E}[S]=0 \). By condition (3), the covariance matrix \(\operatorname {Cov}(S) \) exists and is exactly the Fisher information matrix \(\mathbb {I}(x) \). The condition assumed on the estimator δ̂ gives

$$\operatorname {Cov}\biggl(\hat{\delta}_{i}(y), \frac{\partial \log p(y;x)}{\partial x_{j} } \biggr) = \frac{\partial g_{i}(x)}{\partial x_{j}}. $$

Therefore, we have

$$ 0\leq \operatorname {Cov}\begin{pmatrix} \hat{\delta} \\ S \end{pmatrix} = \begin{pmatrix} \operatorname {Cov}(\hat{\delta}) & D(x)\\ D(x)^{T} & \mathbb {I}(x) \end{pmatrix} . $$

Since \(\mathbb {I}(x) \) is positive semidefinite and the set Θ is open, we have the desired inequality (3.2). In fact, if this is not the case, there exists a nonzero vector ξ such that

$$\xi^{T}\operatorname {Cov}(\hat{\delta})\xi- \xi^{T} D(x) \mathbb {I}^{\dagger}(x)D(x)^{T} \xi< 0. $$

By taking \(\eta=(\xi^{T}, (-\mathbb {I}^{\dagger}(x)D(x)^{T}\xi)^{T})^{T} \), matrix multiplication shows that

$$\eta^{T} \operatorname {Cov}\begin{pmatrix} \hat{\delta}\\ S \end{pmatrix} \eta= \xi^{T}\operatorname {Cov}(\hat{\delta})\xi- \xi^{T} D(x) \mathbb {I}^{\dagger}(x)D(x)^{T} \xi< 0. $$

This contradicts the positive semidefiniteness of \(\operatorname {Cov}\begin{pmatrix} \hat{\delta}\\ S \end{pmatrix} \). □

3.1 Cramer–Rao lower bound of GPR

Since we consider phase retrieval of complex signals, the above theorem cannot be used directly. We again deal with this by realification of complex vectors and matrices. Let \(\xi=\mathbf {j}(x) \) be the realification of x. Then we have \(\varphi_{j}(x)=x^{*}A_{j}x=\xi^{T}\tau(A_{j})\xi\). We replace x by ξ in formula (3.1) and consequently get the Fisher information matrix

$$\mathbb {I}(\xi)= \frac{4}{\sigma^{2}} \sum _{j=1}^{m}\tau(A_{j})\xi\xi^{T} \tau(A_{j})=\frac{4}{\sigma^{2}}R(\xi). $$

In order to obtain a unique solution, we make an assumption on the signal x introduced in [1]: the signal x satisfies \(\langle x, z_{0} \rangle>0 \) for a fixed normalized vector \(z_{0}\in \mathbb {C}^{n} \). Define \(H_{z_{0}}=\{\xi=\mathbf {j}(x): \langle x, z_{0} \rangle >0,x\in \mathbb {C}^{n}\} \). We have the following Cramer–Rao lower bound for the noisy GPR problem.

Theorem 3.2

Assume that the nonlinear map \(M_{A} \) is injective up to a global phase and fix a normalized vector \(z_{0} \in \mathbb {C}^{n}\). For any vector \(x \in \mathbb {C}^{n} \) with \(\langle x, z_{0} \rangle> 0 \), the covariance of any unbiased estimator δ̂ of x is bounded below by the Cramer–Rao lower bound given by

$$ \operatorname {Cov}\bigl(\mathbf {j}( \hat{\delta} )\bigr) \geq\frac{\sigma^{2}}{4} \Biggl(\sum_{j=1}^{m} \tau(A_{j}) \xi\xi^{T}\tau(A_{j}) \Biggr)^{\dagger},\quad \xi\in H_{z_{0}}, $$
(3.3)

where \(\dagger \) denotes the Moore–Penrose pseudoinverse operator. In particular, the mean-square error of δ̂ is bounded below by

$$ \operatorname{MSE}(\hat{\delta})=\mathbb {E}\bigl[ \Vert x-\hat{\delta} \Vert ^{2} \bigr] \geq\frac{\sigma^{2}}{4} \operatorname {Tr}\Biggl\{ \Biggl(\sum _{j=1}^{m} \tau(A_{j})\xi\xi^{T} \tau(A_{j}) \Biggr)^{\dagger} \Biggr\} . $$
(3.4)

Proof

By taking \(g(\xi)=\xi\), inequality (3.3) is the direct result of applying Theorem 3.1 to \(\mathbf {j}(\hat{\delta}) \), the realification of δ̂. Taking the trace of both sides of inequality (3.3) and noting that \(\|x-\hat{\delta}\|= \|\xi-\mathbf {j}(\hat{\delta})\| \), we get inequality (3.4). □
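
A NumPy sketch of the bound (3.4) (illustrative only; the matrices are random placeholders, and np.linalg.pinv plays the role of the Moore–Penrose pseudoinverse):

```python
import numpy as np

def tau(A):
    D, C = A.real, A.imag
    return np.block([[D, -C], [C, D]])

rng = np.random.default_rng(9)
n, m, sigma = 3, 12, 0.1
A_list = []
for _ in range(m):
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A_list.append((G + G.conj().T) / 2)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
xi = np.concatenate([x.real, x.imag])

R_xi = sum(tau(A) @ np.outer(xi, xi) @ tau(A) for A in A_list)
crlb_mse = sigma**2 / 4 * np.trace(np.linalg.pinv(R_xi))   # lower bound (3.4) on E||x - delta_hat||^2
print(crlb_mse)
```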

3.2 Cramer–Rao lower bound of GAPR

In this subsection, no extra assumption is needed to guarantee uniqueness, since the signal can be recovered exactly when the pair \((B,b) \) is generalized affine phase retrievable. Furthermore, one can prove that if the pair \((B,b) \) is generalized affine phase retrievable, then the collection \(\{B^{T}_{j} B_{j}\}^{m}_{ j=1} \) is a g-frame for \(\mathbb {C}^{n} \). We denote the upper frame bound by Δ, which means

$$\sum_{j=1}^{m} \bigl\lVert {B_{j}^{T}B_{j}x} \bigr\rVert ^{2}\leq \Delta \lVert{x} \rVert^{2}. $$

In the noisy GAPR problem, we have

$$ \varphi(x)= \bigl( \lVert{B_{j}x+b_{j}} \rVert^{2} \bigr)_{j=1}^{m}= \bigl( \bigl\lVert {\tau(B_{j})\xi+ \mathbf {j}(b_{j})} \bigr\rVert ^{2} \bigr)_{j=1}^{m}. $$

Define \(R_{x}^{a}=\sum_{j=1}^{m} B_{j}^{T} (B_{j} x+b_{j} ) (B_{j} x+b_{j} )^{T} B_{j} \). By direct computation, the Fisher information matrix of the noisy GAPR problem is given by

$$\mathbb {I}^{a}(\xi)= \frac{4}{\sigma^{2}}\tau\bigl(R_{x}^{a} \bigr) =\frac{4}{\sigma^{2}}\sum_{j=1}^{m}\tau \bigl(B_{j}^{T}\bigr) \bigl(\tau(B_{j}) \xi+\mathbf {j}(b_{j})\bigr) \bigl(\tau(B_{j})\xi+\mathbf {j}(b_{j}) \bigr)^{T} \tau(B_{j}). $$

Theorem 3.3

The Fisher information matrix for the noisy generalized affine phase retrieval model is \(\frac{4}{\sigma^{2}}\tau(R_{x}^{a}) \). Consequently, for any unbiased estimator δ̂ of x, the covariance matrix is bounded below by the Cramer–Rao lower bound as follows:

$$ \operatorname{Cov}\bigl[\mathbf {j}(\hat{\delta})\bigr]\geq \bigl(\mathbb {I}^{a}(\xi) \bigr)^{-1} =\frac{\sigma^{2}}{4} \bigl(\tau\bigl(R_{x}^{a} \bigr)\bigr)^{-1}. $$
(3.5)

Therefore, the mean square error of any unbiased estimator δ̂ is bounded below by

$$ \mathbb {E}\bigl[ \Vert x-\hat{\delta} \Vert ^{2}\bigr] \geq \frac{\sigma^{2}n^{2}}{16(\Delta \lVert{x} \rVert^{2}+C)} . $$

Proof

By applying Theorem 3.2 in [13] to the realification model \(Y=\varphi(x)+Z \), we get inequality (3.5) directly. Taking the trace of both sides of inequality (3.5) yields

$$\mathbb {E}\bigl[ \Vert x-\hat{\delta} \Vert ^{2}\bigr] \geq \frac{\sigma^{2}}{4}\operatorname {Tr}\bigl(\bigl(\tau\bigl(R_{x}^{a}\bigr) \bigr)^{-1}\bigr). $$

Since \(\operatorname {Tr}(Q) \cdot \operatorname {Tr}(Q^{-1}) \geq k^{2}\) holds for any \(k\times k \) positive definite matrix Q, applying this to the \(2n\times2n \) matrix \(\tau(R_{x}^{a}) \) gives in particular

$$ \operatorname {Tr}\bigl(\bigl(\tau\bigl(R_{x}^{a}\bigr) \bigr)^{-1}\bigr)\geq \frac{n^{2}}{\operatorname {Tr}(\tau(R_{x}^{a}))}. $$
(3.6)

Furthermore, the g-frame property of the collection \(\{B^{T}_{j} B_{j}\}^{m}_{ j=1} \) gives

$$\operatorname {Tr}\tau\bigl(R_{x}^{a}\bigr)=2\operatorname {Tr}\bigl(R_{x}^{a} \bigr)=2\sum_{j=1}^{m} \bigl\lVert {B_{j}^{T}B_{j}x+B_{j}^{T}b_{j}} \bigr\rVert ^{2} \leq 4\sum_{j=1}^{m} \bigl( \bigl\lVert {B_{j}^{T}B_{j}x} \bigr\rVert ^{2}+ \bigl\lVert {B_{j}^{T}b_{j}} \bigr\rVert ^{2} \bigr) \leq 4\Delta \lVert{x} \rVert^{2}+4C, $$

where \(C=\sum_{j=1}^{m} \lVert{B_{j}^{T}b_{j}} \rVert^{2} \). Substituting the above inequality into (3.6), we have

$$\mathbb {E}\bigl[ \Vert x-\hat{\delta} \Vert ^{2}\bigr] \geq \frac{\sigma^{2}}{4} \cdot \frac{n^{2}}{4(\Delta \lVert{x} \rVert^{2}+C)} =\frac{\sigma^{2}n^{2}}{16(\Delta \lVert{x} \rVert^{2}+C)}. $$

 □
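
For completeness, here is a NumPy sketch (random placeholder measurements, illustrative only) that evaluates the Fisher information for the noisy GAPR model directly from the general formula (3.1), with \(\varphi_{j}(\xi)= \lVert{\tau(B_{j})\xi+\mathbf {j}(b_{j})} \rVert^{2} \), and the resulting CRLB on the mean square error:

```python
import numpy as np

def tau_rect(B):
    # realification of a complex r x n matrix as a real 2r x 2n matrix
    D, C = B.real, B.imag
    return np.block([[D, -C], [C, D]])

rng = np.random.default_rng(10)
n, m, r, sigma = 2, 10, 2, 0.05
B_list = [rng.standard_normal((r, n)) + 1j * rng.standard_normal((r, n)) for _ in range(m)]
b_list = [rng.standard_normal(r) + 1j * rng.standard_normal(r) for _ in range(m)]
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
xi = np.concatenate([x.real, x.imag])

fisher = np.zeros((2 * n, 2 * n))
for B, b in zip(B_list, b_list):
    w = tau_rect(B) @ xi + np.concatenate([b.real, b.imag])   # realification of B_j x + b_j
    grad = 2 * tau_rect(B).T @ w                              # gradient of phi_j at xi
    fisher += np.outer(grad, grad) / sigma**2                 # accumulate (3.1)
print(np.trace(np.linalg.inv(fisher)))   # CRLB on the mean square error E||x - delta_hat||^2
```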

References

  1. Balan, R.: Reconstruction of signals from magnitudes of redundant representations: the complex case. Found. Comput. Math. 16(3), 677–721 (2016)

  2. Balan, R., Casazza, P., Edidin, D.: On signal reconstruction without phase. Appl. Comput. Harmon. Anal. 20(3), 345–356 (2006)

  3. Balan, R., Wang, Y.: Invertibility and robustness of phaseless reconstruction. Appl. Comput. Harmon. Anal. 38(3), 469–488 (2015)

  4. Bandeira, A.S., Cahill, J., Mixon, D.G., Nelson, A.A.: Saving phase: injectivity and stability for phase retrieval. Appl. Comput. Harmon. Anal. 37(1), 106–125 (2014)

  5. Bendory, T., Beinert, R., Eldar, Y.C.: Fourier phase retrieval: uniqueness and algorithms. In: Compressed Sensing and Its Applications, Applied and Computational Harmonic Analysis, pp. 55–91. Springer, Cham (2017)

  6. Buczolich, Z.: Density points and bi-Lipschitz functions in \(\textbf{R}^{m}\). Proc. Am. Math. Soc. 116(1), 53–59 (1992)

  7. Demanet, L., Jugnon, V.: Convex recovery from interferometric measurements. IEEE Trans. Comput. Imaging 3(2), 282–295 (2017)

  8. Gao, B., Sun, Q., Wang, Y., Xu, Z.: Phase retrieval from the magnitudes of affine linear measurements. Adv. Appl. Math. 93, 121–141 (2018)

  9. Heinosaari, T., Mazzarella, L., Wolf, M.M.: Quantum tomography under prior information. Commun. Math. Phys. 318(2), 355–374 (2013)

  10. Huang, M., Xu, Z.: Phase retrieval from the norms of affine transformations (2018). arXiv:1805.07899

  11. Huiser, A.M.J., Drenth, A.J.J., Ferwerda, H.A.: Phase retrieval in electron-microscopy from image and diffraction pattern. Optik 45(4), 303–316 (1976)

  12. Jaming, P.: Phase retrieval techniques for radar ambiguity problems. J. Fourier Anal. Appl. 5(4), 309–329 (1999)

  13. Kay, S.M.: Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice-Hall, New York (1993)

  14. Millane, R.P.: Phase retrieval in crystallography and optics. J. Opt. Soc. Am. A 7(3), 394–411 (1990)

  15. Walther, A.: The question of phase retrieval in optics. Opt. Acta 10, 41–49 (1963)

  16. Wang, Y., Xu, Z.: Generalized phase retrieval: measurement number, matrix recovery and beyond. Appl. Comput. Harmon. Anal. 47(2), 423–446 (2019)

  17. Zhuang, Z.: On stability of generalized phase retrieval and generalized affine phase retrieval. J. Inequal. Appl. 2019, Article ID 14 (2019)

Acknowledgements

The authors would like to thank the referees for their useful comments and remarks.

Availability of data and materials

Not applicable.

Funding

This study was partially supported by the National Natural Science Foundation of China (Grant No. 11601152).

Author information

Contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Zhitao Zhuang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Zhuang, Z. On stability of generalized (affine) phase retrieval in the complex case. J Inequal Appl 2019, 316 (2019). https://doi.org/10.1186/s13660-019-2270-9

Download citation

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s13660-019-2270-9

MSC

Keywords