Revisiting the minimum-norm problem

Abstract

The design of optimal Magnetic Resonance Imaging (MRI) coils is modeled as a minimum-norm problem (MNP), that is, as an optimization problem of the form \(\min_{x\in\mathcal{R}}\|x\|\), where \(\mathcal{R}\) is a closed and convex subset of a normed space X. This manuscript is aimed at revisiting MNPs from the perspective of Functional Analysis, Operator Theory, and Banach Space Geometry in order to provide an analytic solution to the following MRI problem: \(\min_{\psi\in\mathcal{R}}\|\psi\|_{2}\), where \(\mathcal{R}:=\{\psi\in \mathbb{R}^{n}:\frac{\|A\psi-b\|_{\infty}}{\|b\|_{\infty}} \leq D\}\), with \(A\in\mathcal{M}_{m\times n}(\mathbb{R})\), \(D>0\), and \(b\in\mathbb{R}^{m}\setminus\{0\}\).

1 Introduction

There is a specific type of optimization problem that arises very often in Bioengineering [22, 23]. This problem can be generally formulated as follows:

$$ \textstyle\begin{cases} \min \Vert x \Vert , \\ x\in \mathcal{R}, \end{cases} $$
(1)

where \(\mathcal{R}\) is a closed and convex subset of a normed space X. The optimization problem given in (1) is known as a minimum-norm problem (MNP) since its solutions, sol(1), are the elements of \(\mathcal{R}\) of minimum norm. The literature on MNPs from the perspective of Functional Analysis can be summarized by the following facts [24]:

  • If \(\mathrm{sol}\text{(1)}\neq \emptyset \) and \(0\notin\mathcal{R}\), then sol(1) is a bounded, closed, and convex subset of the boundary of \(\mathcal{R}\).

  • If X is strictly convex, then sol(1) is either empty or a singleton.

  • If X is reflexive, then sol(1) is non-empty.

In particular, if X is a Hilbert space, then sol(1) is a singleton. We refer the reader to [1] for a wider perspective on these geometrical concepts. Observe that (1) has the trivial solution 0 if and only if \(0\in \mathcal{R}\). This is trivially equivalent to \(\mathrm{dist}(\mathcal{R},0)=\inf_{x\in \mathcal{R}} \|x\|=0\) due to the closedness of \(\mathcal{R}\). As a consequence, we will be assuming throughout the rest of this manuscript that \(0\notin \mathcal{R}\). Also, note that if \(\mathrm{sol}\text{(1)}\neq \emptyset \), then \(\min_{x\in \mathcal{R}}\|x\|=\inf_{x\in \mathcal{R}} \|x\|=\mathrm{dist}(\mathcal{R},0)\) and \(\mathrm{sol}\text{(1)}=\mathrm{arg}\min_{x\in \mathcal{R}}\|x\|\).
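
To illustrate the Hilbert space case, the following short numerical sketch (purely illustrative, with hypothetical data) computes the minimum-norm element of the closed convex set \(\mathcal{R}=\{x\in \mathbb{R}^{3}: c^{\top }x\geq 1\}\), whose unique minimum-norm element is \(c/\|c\|_{2}^{2}\); a general-purpose solver is used only as a cross-check.

    import numpy as np
    from scipy.optimize import minimize

    # Closed convex region R = {x in R^3 : c.x >= 1}, which does not contain 0.
    c = np.array([3.0, -1.0, 2.0])

    # In a Hilbert space the minimum-norm element is unique; for this half-space
    # it is the projection of 0 onto the hyperplane {x : c.x = 1}, namely c/||c||^2.
    x_analytic = c / np.dot(c, c)

    # Cross-check with a general-purpose solver: minimize ||x||_2 subject to c.x >= 1.
    res = minimize(lambda x: np.linalg.norm(x),
                   x0=np.ones_like(c),
                   constraints=[{"type": "ineq", "fun": lambda x: np.dot(c, x) - 1.0}])

    print(x_analytic, np.linalg.norm(x_analytic))  # analytic minimizer and dist(R, 0)
    print(res.x, res.fun)                          # numerical minimizer (approximately equal)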

Given a vector space X and a convex subset M of X with at least two points, the set of inner points of M is defined as

$$ \mathrm{inn}(M):= \bigl\{ x\in X: \forall m\in M\setminus \{x\}\; \exists n\in M \setminus \{m,x\}\text{ such that }x\in (m,n) \bigr\} .$$

The previous set was originally introduced in [17] for not necessarily convex sets. However, the notion of inner point had already been coined implicitly for convex sets in [4, 10]. A geometrical and topological study of the inner structure can be found in [11, 12, 15]. In particular, in [17], it is shown that every non-singleton convex subset of any finite-dimensional vector space has inner points. Nevertheless, in [17], it was also proved that every infinite-dimensional vector space possesses a non-singleton convex subset lacking inner points. Notice that \(\mathrm{inn}(M)\subseteq M\). According to [11], if \(x\in \mathrm{inn}(M)\), then \([x,m)\subseteq \mathrm{inn}(M)\) for all \(m \in M\). As a consequence, \(\mathrm{inn}(M)\) is convex and \(\mathrm{cl}(\mathrm{inn}(M))=\mathrm{cl}(M)\). Finally, when M is a singleton, we agree by definition that \(\mathrm{inn}(M)=\varnothing \).

The optimal design of MRI coils is modeled as an MNP [5, 22, 23], more precisely, as

$$ \textstyle\begin{cases} \min \Vert \psi \Vert _{2}, \\ \frac{ \Vert A\psi -b \Vert _{\infty }}{ \Vert b \Vert _{\infty }} \leq D, \end{cases} $$
(2)

where \(\psi \in \mathbb{R}^{n}\), \(A\in \mathcal{M}_{m\times n}(\mathbb{R})\) is a matrix, \(D>0\), and \(b\in \mathbb{R}^{m}\setminus \{0\}\). The purpose of this manuscript is to revisit MNPs from the perspective of Functional Analysis, Operator Theory, and Banach Space Geometry in order to provide an analytic solution to (2).

2 Results

Throughout this manuscript, we will only work with real vector spaces. Standard notation from Metric Space Theory will be employed, such as \(\mathsf{B}_{X}(x,t)\), \(\mathsf{U}_{X}(x,t)\), and \(\mathsf{S}_{X}(x,t)\), which stand for the closed ball, the open ball, and the sphere of center x and radius t, respectively, where \(x\in X\), \(t>0\), and X is a metric space. Also, \(\mathsf{B}_{X}\), \(\mathsf{U}_{X}\), and \(\mathsf{S}_{X}\) denote the closed unit ball, the open unit ball, and the unit sphere, respectively, in case X is a vector space endowed with a norm.

2.1 Elements of minimum norm

When formulating MNPs, the constraint set \(\mathcal{R}\) is often required to be bounded. However, this is not a necessary requirement, as the following lemma shows.

Lemma 1

Let X be a normed space and \(\mathcal{R}\subseteq X\) closed and convex. Consider the MNPs given by (1) and by

$$ \textstyle\begin{cases} \min \Vert x \Vert , \\ x\in \mathcal{R} \cap \mathsf{B}_{X}(0,\tau ),\end{cases} $$
(3)

where \(\tau >\inf_{x\in \mathcal{R}}\|x\|\). Then \(\mathrm{sol}\textit{(1)}=\mathrm{sol}\textit{(3)}\).

Proof

Take \(x_{0}\in \mathrm{sol}\text{(1)}\). Then \(\|x_{0}\|=\min_{x\in \mathcal{R}}\|x\|=\inf_{x\in \mathcal{R}}\|x\|<\tau \). Thus, \(x_{0}\in \mathrm{sol}\text{(3)}\). Conversely, let \(x_{0}\in \mathrm{sol}\text{(3)}\) and suppose that there exists \(x\in \mathcal{R}\) with \(\|x\|<\|x_{0}\|\). Then \(\|x\|<\|x_{0}\|\leq \tau \), which means that \(x\in \mathcal{R}\cap \mathsf{B}_{X}(0,\tau )\). As a consequence, \(\|x_{0}\|\leq \|x\|\) because \(x_{0}\in \mathrm{sol}\text{(3)}\). This is a contradiction. □
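
In practice, Lemma 1 is what licenses adding an artificial ball constraint to obtain a bounded feasible region before handing an MNP to a numerical solver. The minimal sketch below (hypothetical data, reusing the half-space \(\{x: c^{\top }x\geq 1\}\) from the previous snippet) illustrates that the computed minimizer does not change.

    import numpy as np
    from scipy.optimize import minimize

    c = np.array([3.0, -1.0, 2.0])
    dist = 1.0 / np.linalg.norm(c)       # inf_{x in R} ||x||_2 for R = {x : c.x >= 1}
    tau = 10.0 * dist                    # any tau strictly larger than the infimum

    in_R = {"type": "ineq", "fun": lambda x: np.dot(c, x) - 1.0}          # x in R
    in_ball = {"type": "ineq", "fun": lambda x: tau - np.linalg.norm(x)}  # ||x|| <= tau

    x0 = np.ones_like(c)
    sol_R = minimize(lambda x: np.linalg.norm(x), x0, constraints=[in_R])
    sol_R_ball = minimize(lambda x: np.linalg.norm(x), x0, constraints=[in_R, in_ball])

    # Lemma 1: both problems share the same solution set, here the single point c/||c||^2.
    print(sol_R.x, sol_R_ball.x, c / np.dot(c, c))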

The following example shows that the fact that \(\mathrm{sol}\text{(1)}\cap \mathrm{int}(\mathcal{R})=\varnothing \) does not hold in general if we replace \(\mathrm{int}(\mathcal{R})\) with \(\mathrm{inn}(\mathcal{R})\).

Example 1

Let X be a normed space with \(\dim (X)\geq 2\). Let \(x\in \mathsf{S}_{X}\) and \(x^{*}\in \mathsf{S}_{X^{*}}\) with \(x^{*}(x)=1\). Consider \(\mathcal{R}:= (x^{*} )^{-1} (\{1\} )\). According to [17], \(\mathrm{inn}(\mathcal{R})=\mathcal{R}\). On the other hand, x is trivially a minimum-norm element of \(\mathcal{R}\). As a consequence, x is an inner point of \(\mathcal{R}\) and a minimum-norm element of \(\mathcal{R}\).

The next technical lemma will serve to characterize minimum-norm elements as supporting vectors.

Lemma 2

Let X be a normed space. Let \(\mathcal{R}\) be a closed and convex subset of X such that \(0\notin \mathcal{R}\). There exists \(f\in \mathsf{S}_{X^{*}}\) such that

$$ \sup_{x\in \mathsf{U}_{X} (0,\inf _{x\in \mathcal{R}} \|x\| )} f(x)\leq \inf_{x\in \mathcal{R}} f(x). $$
(4)

Proof

Notice that \(\inf_{x\in \mathcal{R}} \|x\|>0\) since \(\mathcal{R}\) is closed and \(0\notin \mathcal{R}\). Then the existence of \(f\in \mathsf{S}_{X^{*}}\) satisfying (4) follows directly from the Hahn-Banach Separation Theorem applied to the non-empty open convex set \(\mathsf{U}_{X} (0,\inf_{x\in \mathcal{R}} \|x\| )\) and to the non-empty convex set \(\mathcal{R}\) due to the fact that \(\mathsf{U}_{X} (0,\inf_{x\in \mathcal{R}} \|x\| )\cap \mathcal{R}=\emptyset \). □

Definition 1

Let X be a normed space. Let \(\mathcal{R}\) be a closed and convex subset of X such that \(0\notin \mathcal{R}\). Any functional \(f\in \mathsf{S}_{X^{*}}\) satisfying (4) will be called a minimum functional for \(\mathcal{R}\).

Our next result in this subsection assures that the minimum-norm elements of a closed and convex subset of a normed space are always, after normalization, supporting vectors of a certain functional. Recall that if X, Y are normed spaces and \(T:X\to Y\) is a continuous linear operator, then the set of supporting vectors of T is defined as

$$ \operatorname{suppv}(T):=\bigl\{ x\in \mathsf{S}_{X}: \bigl\Vert T(x) \bigr\Vert = \Vert T \Vert \bigr\} .$$

Supporting vectors appear implicitly in the literature on Operator Theory and Banach Space Geometry [13, 18–21]. However, they have been formally introduced only very recently. We refer the reader to [8, 13, 16, 26] for a topological and geometrical study of the above set. On the other hand, \(X^{*}\) stands for the normed space of linear and continuous functionals from X to \(\mathbb{K}\) (\(\mathbb{R}\) or \(\mathbb{C}\)). If \(x^{*}\in X^{*}\setminus \{0\}\), then the exposed face determined by \(x^{*}\) is defined as

$$ \operatorname{suppv}_{1}\bigl(x^{*}\bigr):= \bigl\{ x\in \mathsf{S}_{X}: x^{*}(x)= \bigl\Vert x^{*} \bigr\Vert \bigr\} .$$

Successful real-life applications of supporting vectors to multioptimization in Bioengineering can be found in [6, 7, 9, 14, 25].
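
As a concrete illustration (a sketch under the assumption that both spaces carry the Euclidean norm, which is only one of the settings considered in this manuscript), when T is induced by a matrix A, the supporting vectors of T can be read off the singular value decomposition: \(\|T\|\) is the largest singular value and \(\operatorname{suppv}(T)\) consists of the norm-one elements of the corresponding right singular subspace; likewise, for a functional \(f(x)=\langle c,x\rangle \) on \(\ell _{2}^{n}\), one has \(\operatorname{suppv}_{1}(f)=\{\frac{c}{\|c\|_{2}}\}\).

    import numpy as np

    # T : l2^3 -> l2^2 given by a hypothetical matrix A. Then suppv(T) is the set of
    # unit vectors x with ||Ax||_2 = ||T|| = sigma_max(A).
    A = np.array([[2.0, 0.0, 1.0],
                  [0.0, 1.0, -1.0]])

    U, s, Vt = np.linalg.svd(A)
    operator_norm = s[0]   # ||T|| for the l2 -> l2 operator norm
    x_support = Vt[0]      # a unit right singular vector attaining the norm, so x_support in suppv(T)

    print(operator_norm, np.linalg.norm(A @ x_support))  # both values coincide

    # For the functional f(x) = <c, x> on l2^3, suppv_1(f) reduces to the single point c/||c||_2.
    c = np.array([1.0, 2.0, 2.0])
    u = c / np.linalg.norm(c)
    print(np.dot(c, u), np.linalg.norm(c))               # f(u) = ||f||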

Theorem 3

Let X be a normed space. Let \(\mathcal{R}\) be a closed and convex subset of X such that \(0\notin \mathcal{R}\). Let \(f\in \mathsf{S}_{X^{*}}\) be a minimum functional for \(\mathcal{R}\). The following conditions are equivalent for an element \(x_{0}\in \mathcal{R}\):

  1.

    \(x_{0}\) is an element of the minimum norm of \(\mathcal{R}\).

  2.

    \(\frac{x_{0}}{\|x_{0}\|}\in \operatorname{suppv}_{1}(f)\) and \(x_{0}\in \mathrm{sol}\textit{(5)}\), where

    $$ \textstyle\begin{cases} \min f(x), \\ x\in \mathcal{R}.\end{cases} $$
    (5)

Proof

Suppose first that \(x_{0}\) is an element of the minimum norm of \(\mathcal{R}\). Since \(\|x_{0}\|=\min_{x\in \mathcal{R}}\|x\|=\inf_{x\in \mathcal{R}}\|x\|\), from Equation (4), we obtain that

$$\begin{aligned} \Vert x_{0} \Vert =\sup_{x\in \mathsf{U}_{X}(0, \Vert x_{0} \Vert )} f(x) \leq \inf_{x\in \mathcal{R}} f(x)\leq f (x_{0} )\leq \Vert x_{0} \Vert . \end{aligned}$$

This shows that \(f(x_{0})=\|x_{0}\|\), meaning that x 0 suppv 1 (f), and that \(\inf_{x\in \mathcal{R}} f(x)= f (x_{0} )\), implying that \(x_{0}\in \mathrm{sol}\text{(5)}\). Conversely, suppose that Simply observe that x 0 x 0 suppv 1 (f) and \(x_{0}\in \mathrm{sol}\text{(5)}\). Note that it only suffices to observe that \(\|x_{0}\|=f(x_{0})\leq f(x)\leq \|x\|\) for all \(x\in \mathcal{R}\). □

2.2 The operator minimum-norm problem (OMNP)

We now take care of the operator minimum-norm problem (OMNP):

$$ \textstyle\begin{cases} \min \Vert T(x) \Vert , \\ x\in \mathcal{R}, \end{cases} $$
(6)

where \(T:X\to Y\) is a continuous linear operator between normed spaces X, Y, and \(\mathcal{R}\) is a bounded, closed, and convex subset of X. Under not excessively restrictive conditions, this optimization problem can be reduced to an MNP.

Theorem 4

Let \(T:X\to Y\) be a continuous linear operator between normed spaces X, Y and \(\mathcal{R}\) a bounded, closed, and convex subset of X. Consider the optimization problem

$$ \textstyle\begin{cases} \min \Vert y \Vert , \\ y\in T(\mathcal{R}), \end{cases} $$
(7)

Then:

  1.

    \(T(\mathrm{sol}\textit{(6)})=\mathrm{sol}\textit{(7)}\).

  2.

    \(\mathrm{sol}\textit{(6)}=T^{-1}(\mathrm{sol}\textit{(7)})\cap \mathcal{R}\).

  3.

    If \(\ker (T)=\{0\}\), then \(\mathrm{sol}\textit{(6)}=T^{-1}(\mathrm{sol}\textit{(7)})\).

  4.

    If X is reflexive, then \(T(\mathcal{R})\) is closed in Y, hence (7) is an MNP.

Proof

  1.

    We will prove first that \(T(\mathrm{sol}\text{(6)})\subseteq \mathrm{sol}\text{(7)}\). Indeed, let \(x_{0}\in \mathrm{sol}\text{(6)}\). Note that \(T(x_{0})\in T(\mathcal{R})\). If \(y\in T(\mathcal{R})\), then there exists \(x\in \mathcal{R}\) such that \(T(x)=y\). Then \(\|T(x_{0})\|\leq \|T(x)\|=\|y\|\). This shows that \(T(x_{0})\in \mathrm{sol}\text{(7)}\). Now the reverse inclusion. If \(y_{0}\in \mathrm{sol}\text{(7)}\), then \(y_{0}\in T(\mathcal{R})\), and there exists \(x_{0}\in \mathcal{R}\) such that \(T(x_{0})=y_{0}\). For every \(x\in \mathcal{R}\), we have that \(\|T(x_{0})\|=\|y_{0}\|\leq \|T(x)\|\), which shows that \(x_{0}\in \mathrm{sol}\text{(6)}\).

  2.

    By Theorem 4(1), \(\mathrm{sol}\text{(6)}\subseteq T^{-1}(\mathrm{sol}\text{(7)})\). It is clear by definition that \(\mathrm{sol}\text{(6)}\subseteq \mathcal{R}\). Next, if \(x_{0}\in T^{-1}(\mathrm{sol}\text{(7)})\cap \mathcal{R}\), then following a similar argument as above, we deduce that \(x_{0}\in \mathrm{sol}\text{(6)}\).

  3.

    By Theorem 4(2), \(\mathrm{sol}\text{(6)}= T^{-1}(\mathrm{sol}\text{(7)})\cap \mathcal{R}\). Since T is injective, every \(x\in T^{-1}(\mathrm{sol}\text{(7)})\) satisfies \(T(x)\in \mathrm{sol}\text{(7)}\subseteq T(\mathcal{R})\), and hence \(x\in \mathcal{R}\). We deduce that \(\mathrm{sol}\text{(6)}= T^{-1}(\mathrm{sol}\text{(7)})\).

  4.

    If X is reflexive, then the James Theorem [20] assures that \(\mathsf{B}_{X}\) is weakly compact. Since \(\mathcal{R}\) is bounded, there exists \(K>0\) such that \(\mathcal{R}\subseteq \mathsf{B}_{X}(0,K)\). Then \(\mathsf{B}_{X}(0,K)\) is also weakly compact. Next, \(\mathcal{R}\) is closed and convex, so it is weakly closed. Therefore, \(\mathcal{R}\) is weakly compact. Since \(T:X\to Y\) is weakly continuous (being linear and norm continuous), we have that \(T(\mathcal{R})\) is also weakly compact and thus closed in Y. □

2.3 Minimum-norm problems for closed balls

A specific type of MNP will be treated in this subsection. These results will later be applied to solve (2). We will first need the following remark and the next technical lemma.

Remark 1

Let X be a topological vector space, and let M be a convex subset of X. If \(m\in \mathrm{int}(M)\) and \(x\in \mathrm{cl}(M)\), then \([m,x)\subseteq \mathrm{int}(M)\).

Lemma 5

Let X be a normed space. Let \(\mathcal{R}\) be a closed and convex subset of X such that \(0\notin \mathcal{R}\). If \(a\in \mathrm{int}(\mathcal{R})\), then there exists a unique \(0<\lambda <1\) such that λa is in the boundary of \(\mathcal{R}\).

Proof

Consider the continuous function

$$ \begin{aligned} {}[0,1]&\to X \\ t&\mapsto ta.\end{aligned} $$

Notice that for \(t=1\), \(ta\in \mathrm{int}(\mathcal{R})\), and for \(t=0\), \(ta\in X\setminus \mathcal{R}\). The image of the above function is a connected subset of X (it is, in fact, the segment \([0,a]\)). If \([0,a]\) does not intersect the boundary of \(\mathcal{R}\), then

$$ [0,a]= \bigl([0,a]\cap \mathrm{int}(\mathcal{R}) \bigr) \cup \bigl([0,a] \cap (X \setminus \mathcal{R}) \bigr),$$

a decomposition of \([0,a]\) into two disjoint, non-empty, relatively open subsets, which contradicts the connectedness of \([0,a]\). Thus, there must exist \(\lambda \in (0,1)\) such that λa is in the boundary of \(\mathcal{R}\). Finally, if \(\gamma \in (0,1)\setminus \{\lambda \}\) also verifies that γa is in the boundary of \(\mathcal{R}\), then we may assume without any loss of generality that \(\gamma <\lambda \) to reach the contradiction that \(\lambda a\in (\gamma a,a]\subseteq \mathrm{int}(\mathcal{R})\) in view of Remark 1. □

Lemma 6

Let X be a normed space. Let \(a\in X\) and \(0< r<\|a\|\). Then

$$ \min_{x\in \mathsf{B}_{X}(a,r)} \Vert x \Vert = \Vert a \Vert -r \textit{ and } \frac{ \Vert a \Vert -r}{ \Vert a \Vert }a\in \mathrm{arg}\min_{x\in \mathsf{B}_{X}(a,r)} \Vert x \Vert .$$

Proof

Fix an arbitrary \(x\in \mathsf{B}_{X}(a,r)\). Then

$$ \Vert x \Vert = \bigl\Vert a-(a-x) \bigr\Vert \geq \bigl\lvert \Vert a \Vert - \Vert a-x \Vert \bigr\rvert \geq \Vert a \Vert - \Vert a-x \Vert \geq \Vert a \Vert -r. $$

This proves that

$$ \min_{x\in \mathsf{B}_{X}(a,r)} \Vert x \Vert \geq \Vert a \Vert -r.$$

Notice that

$$ \biggl\Vert \frac{ \Vert a \Vert -r}{ \Vert a \Vert }a-a \biggr\Vert =\frac{1}{ \Vert a \Vert } \bigl\Vert \bigl( \Vert a \Vert -r\bigr)a- \Vert a \Vert a \bigr\Vert =\frac{r}{ \Vert a \Vert } \Vert a \Vert = r,$$

which implies that \(\frac{\|a\|-r}{\|a\|}a\in \mathsf{B}_{X}(a,r)\). Finally,

$$ \biggl\Vert \frac{ \Vert a \Vert -r}{ \Vert a \Vert }a \biggr\Vert = \Vert a \Vert -r,$$

meaning that

$$ \frac{ \Vert a \Vert -r}{ \Vert a \Vert }a\in \mathrm{arg}\min_{x\in \mathsf{B}_{X}(a,r)} \Vert x \Vert .$$

Notice that \(\frac{\|a\|-r}{\|a\|}\) is precisely the λ provided by Lemma 5. □

Lemma 6 can be restated as:

Lemma 7

Let X be a normed space. Let \(a\in X\) and \(0< r<\|a\|\). Then \(\frac{\|a\|-r}{\|a\|}a\in \mathrm{sol}\textit{(8)}\), where

$$ \textstyle\begin{cases} \min \Vert x \Vert , \\ x\in \mathsf{B}_{X}(a,r). \end{cases} $$
(8)
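
The conclusion of Lemma 6 does not depend on the particular norm. The following sketch (a numerical illustration only, with hypothetical data) samples the ball \(\mathsf{B}_{X}(a,r)\) for the \(\ell _{2}\) and \(\ell _{\infty }\) norms on \(\mathbb{R}^{3}\) and compares the sampled minimum with the value \(\|a\|-r\) and the minimizer \(\frac{\|a\|-r}{\|a\|}a\) given by Lemma 6.

    import numpy as np

    rng = np.random.default_rng(0)
    a = np.array([3.0, -1.0, 2.0])

    for p in (2, np.inf):                      # check the l2 and l_infty norms
        norm_a = np.linalg.norm(a, ord=p)
        r = 0.5 * norm_a                       # any 0 < r < ||a||
        x_star = (norm_a - r) / norm_a * a     # minimizer predicted by Lemma 6

        # Random points of B_X(a, r): keep the samples within distance r of a.
        cloud = a + r * rng.uniform(-1.0, 1.0, size=(50000, 3))
        cloud = cloud[np.linalg.norm(cloud - a, ord=p, axis=1) <= r]

        sampled_min = np.linalg.norm(cloud, ord=p, axis=1).min()
        print(p,
              norm_a - r,                      # value predicted by Lemma 6
              np.linalg.norm(x_star, ord=p),   # norm of the predicted minimizer
              sampled_min)                     # sampled minimum, always >= ||a|| - r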

The following theorem constitutes the main result of this subsection and serves to directly solve (2).

Theorem 8

Let X, Y be normed spaces. Let \(T:X\to Y\) be a continuous linear operator, \(b\in Y\), and \(0< s<\|b\|\). If there exists \(a\in X\) such that \(T(a)=b\) and \(\frac{a}{\|a\|}\in \operatorname{suppv}(T)\), then

$$ \min_{x\in T^{-1} (\mathsf{B}_{Y}(b,s) )} \Vert x \Vert = \frac{ \Vert b \Vert -s}{ \Vert T \Vert }$$

and

$$ \frac{ \Vert b \Vert -s}{ \Vert b \Vert }a\in \mathrm{arg}\min_{x\in T^{-1} ( \mathsf{B}_{Y}(b,s) )} \Vert x \Vert .$$

Proof

In the first place, according to Lemma 6, we have that

$$ T \biggl(\frac{ \Vert b \Vert -s}{ \Vert b \Vert }a \biggr)=\frac{ \Vert b \Vert -s}{ \Vert b \Vert }b\in \mathsf{B}_{Y}(b,s).$$

Therefore, \(\frac{\|b\|-s}{\|b\|}a\in T^{-1} (\mathsf{B}_{Y}(b,s) )\). Now, fix an arbitrary \(x\in T^{-1} (\mathsf{B}_{Y}(b,s) )\). We will prove that

$$ \biggl\Vert \frac{ \Vert b \Vert -s}{ \Vert b \Vert }a \biggr\Vert \leq \Vert x \Vert .$$

Indeed, by Lemma 6, we know that

$$ \biggl\Vert \frac{ \Vert b \Vert -s}{ \Vert b \Vert }b \biggr\Vert \leq \bigl\Vert T(x) \bigr\Vert $$
(9)

because \(T(x)\in \mathsf{B}_{Y}(b,s)\). On the other hand, \(\frac{a}{\|a\|}\in \operatorname{suppv}(T)\) which means that

$$ \biggl\Vert T \biggl(\frac{a}{ \Vert a \Vert } \biggr) \biggr\Vert = \Vert T \Vert ,$$

in other words,

$$ \Vert b \Vert = \bigl\Vert T(a) \bigr\Vert = \Vert T \Vert \Vert a \Vert .$$

Finally, if we get back to Equation (9), then we obtain that

$$\begin{aligned} \biggl\Vert \frac{ \Vert b \Vert -s}{ \Vert b \Vert }a \biggr\Vert =&\frac{ \Vert b \Vert -s}{ \Vert b \Vert } \Vert a \Vert = \frac{ \Vert b \Vert -s}{ \Vert T \Vert \Vert a \Vert } \Vert a \Vert =\frac{ \Vert b \Vert -s}{ \Vert b \Vert } \Vert b \Vert \frac{1}{ \Vert T \Vert } \\ =& \biggl\Vert \frac{ \Vert b \Vert -s}{ \Vert b \Vert }b \biggr\Vert \frac{1}{ \Vert T \Vert }\leq \bigl\Vert T(x) \bigr\Vert \frac{1}{ \Vert T \Vert } \leq \Vert T \Vert \Vert x \Vert \frac{1}{ \Vert T \Vert }= \Vert x \Vert . \end{aligned}$$

 □

Observe that Theorem 8 can be restated as follows:

Theorem 9

Let X, Y be normed spaces. Let \(T:X\to Y\) be a continuous linear operator, \(b\in Y\), and \(0< s<\|b\|\). If there exists \(a\in X\) such that \(T(a)=b\) and \(\frac{a}{\|a\|}\in \operatorname{suppv}(T)\), then \(\frac{\|b\|-s}{\|b\|}a\in \mathrm{sol}\textit{(10)}\), where

$$ \textstyle\begin{cases} \min \Vert x \Vert , \\ x\in T^{-1} (\mathsf{B}_{Y}(b,s) ). \end{cases} $$
(10)
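
The hypotheses of Theorem 8 can be manufactured, and its conclusion checked, numerically. The sketch below is illustrative only: both spaces carry the Euclidean norm, the matrix A is hypothetical, a is chosen along a top right singular direction so that \(\frac{a}{\|a\|}\in \operatorname{suppv}(T)\), and a general-purpose solver is used merely as a cross-check of the predicted value \(\frac{\|b\|-s}{\|T\|}\).

    import numpy as np
    from scipy.optimize import minimize

    A = np.array([[2.0, 0.0, 1.0],
                  [0.0, 1.0, -1.0],
                  [1.0, 1.0, 0.0]])
    U, sv, Vt = np.linalg.svd(A)
    T_norm = sv[0]                   # ||T|| for the l2 -> l2 operator norm

    a = 2.0 * Vt[0]                  # a/||a||_2 is a supporting vector of T
    b = A @ a                        # hence ||b||_2 = ||T|| ||a||_2
    s = 0.4 * np.linalg.norm(b)      # any 0 < s < ||b||

    predicted_min = (np.linalg.norm(b) - s) / T_norm
    predicted_argmin = (np.linalg.norm(b) - s) / np.linalg.norm(b) * a

    # Cross-check: minimize ||x||_2 subject to ||Ax - b||_2 <= s.
    res = minimize(lambda x: np.linalg.norm(x), x0=1.2 * a,
                   constraints=[{"type": "ineq",
                                 "fun": lambda x: s - np.linalg.norm(A @ x - b)}])

    print(predicted_min, np.linalg.norm(predicted_argmin), res.fun)  # all approximately equal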

We will finalize this subsection with the following result.

Proposition 10

Let X be a normed space, \(f\in X^{*}\), \(a\in X\), and \(0< r<\|a\|\). Then

$$ \inf_{x\in \mathsf{B}_{X}(a,r)}f(x)=f(a)- \Vert f \Vert r.$$

Even more, if \(\operatorname{suppv}_{1}(f)\neq \varnothing \), then

$$ \min_{x\in \mathsf{B}_{X}(a,r)}f(x)=f(a)- \Vert f \Vert r$$

and

$$ \mathrm{arg}\min_{x\in \mathsf{B}_{X}(a,r)}f(x)=a-r \operatorname{suppv}_{1}(f).$$

Proof

Fix any arbitrary \(x\in \mathsf{B}_{X}(a,r)\). Then

$$ f(x)=f(a)+f(x-a)\geq f(a)- \Vert f \Vert \Vert x-a \Vert =f(a)- \Vert f \Vert r.$$

This shows that

$$ \inf_{x\in \mathsf{B}_{X}(a,r)}f(x)\geq f(a)- \Vert f \Vert r.$$

Fix an arbitrary \(\varepsilon >0\). There exists \(x\in \mathsf{B}_{X}\) such that \(f(x)>\|f\|-\frac{\varepsilon }{r}\). Observe that \(a-rx\in \mathsf{B}_{X}(a,r)\) and

$$ f(a-rx)=f(a)-rf(x)< f(a)-r \Vert f \Vert +\varepsilon .$$

The arbitrariness of ε forces that

$$ \inf_{x\in \mathsf{B}_{X}(a,r)}f(x)= f(a)- \Vert f \Vert r.$$

Next, suppose that \(\operatorname{suppv}_{1}(f)\neq \varnothing \). Take any \(u\in \operatorname{suppv}_{1}(f)\). Then

$$ f(a-ru)=f(a)-rf(u)=f(a)-r \Vert f \Vert =\inf_{x\in \mathsf{B}_{X}(a,r)}f(x),$$

meaning that \(\inf_{x\in \mathsf{B}_{X}(a,r)}f(x)\) is attained at any element of \(a-r\operatorname{suppv}_{1}(f)\). As a consequence,

$$ \min_{x\in \mathsf{B}_{X}(a,r)}f(x)=f(a)- \Vert f \Vert r$$

and

$$ \mathrm{arg}\min_{x\in \mathsf{B}_{X}(a,r)}f(x)\supseteq a-r \operatorname{suppv}_{1}(f).$$

Finally, take any \({y\in \mathrm{arg}\min_{x\in \mathsf{B}_{X}(a,r)}f(x)}\). Then \(\frac{a-y}{r}\in \mathsf{B}_{X}\) and

$$ f \biggl(\frac{a-y}{r} \biggr)=\frac{f(a)-f(y)}{r}= \frac{f(a)-f(a)+r \Vert f \Vert }{r}= \Vert f \Vert . $$

That is, \(\frac{a-y}{r}\in \operatorname{suppv}_{1}(f)\), hence \(y=a-r\frac{a-y}{r}\in a-r\operatorname{suppv}_{1}(f)\). □

Notice that Proposition 10 can be restated as follows:

Proposition 11

Let X be a normed space, \(f\in X^{*}\), \(a\in X\), and \(0< r<\|a\|\). If \(\operatorname{suppv}_{1}(f)\neq \varnothing \), then \(\mathrm{sol}\textit{(11)}=a-r\operatorname{suppv}_{1}(f)\), where

$$ \textstyle\begin{cases} \min f(x), \\ x\in \mathsf{B}_{X}(a,r). \end{cases} $$
(11)
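
For instance (a simple worked case included only for illustration), if \(X=\ell _{2}^{n}\) and \(f(x)=\langle c,x\rangle \) with \(c\neq 0\), then \(\operatorname{suppv}_{1}(f)= \{\frac{c}{\|c\|_{2}} \}\), so Proposition 11 gives

$$ \mathrm{sol}\textit{(11)}= \biggl\{ a-r\frac{c}{ \Vert c \Vert _{2}} \biggr\} .$$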

3 Discussion and conclusions

Applying the previous results to the optimal design of MRI coils is the main goal of this section. Our first step is to verify that (2) is, indeed, an MNP.

Proposition 12

Consider the optimization problem (2), whose region of constraints can be expressed by

$$ \mathcal{R}:= \biggl\{ \psi \in \mathbb{R}^{n}: \frac{ \Vert A\psi -b \Vert _{\infty }}{ \Vert b \Vert _{\infty }} \leq D \biggr\} .$$

Then:

  1.

    \(\mathcal{R}\) is a closed and convex subset of \(\mathbb{R}^{n}\).

  2.

    \(D\geq 1\) if and only if \(\ker (A)\subseteq \mathcal{R}\). In this situation, \(\mathrm{sol}\textit{(2)}=\{0\}\).

  3.

    If \(\psi _{0}\in \mathrm{sol}\textit{(2)}\), then \(\frac{\|A\psi _{0}-b\|_{\infty }}{\|b\|_{\infty }} = D\).

  4.

    If \(\psi \in \mathcal{R}\), then \(\psi +\ker (A)\subseteq \mathcal{R}\).

  5.

    \(\mathcal{R}\) is bounded if and only if \(\ker (A)=\{0\}\).

Proof

  1.

    It is an easy exercise to check the convexity of \(\mathcal{R}\). Now if \((\psi _{n})_{n\in \mathbb{N}}\subseteq \mathcal{R}\) converges to \(\psi _{0}\in \mathbb{R}^{n}\), then \((A\psi _{n})_{n\in \mathbb{N}}\) converges to \(A\psi _{0}\), so

    $$ \frac{ \Vert A\psi _{n}-b \Vert _{\infty }}{ \Vert b \Vert _{\infty }} \stackrel{n\to \infty }{\longrightarrow } \frac{ \Vert A\psi _{0}-b \Vert _{\infty }}{ \Vert b \Vert _{\infty }}$$

    which implies that

    $$ \frac{ \Vert A\psi _{0}-b \Vert _{\infty }}{ \Vert b \Vert _{\infty }}\leq D.$$
  2.

    This is obvious.

  3.

    Notice that

    $$ \biggl\{ \psi \in \mathbb{R}^{n}: \frac{ \Vert A\psi -b \Vert _{\infty }}{ \Vert b \Vert _{\infty }}< D \biggr\} \subseteq \mathrm{int}(\mathcal{R}). $$
    (12)

    Since \(0\notin \mathcal{R}\), the first fact recalled in the Introduction assures that \(\psi _{0}\) is in the boundary of \(\mathcal{R}\); therefore, by bearing in mind Equation (12), we conclude that

    $$ \frac{ \Vert A\psi _{0}-b \Vert _{\infty }}{ \Vert b \Vert _{\infty }}=D.$$
  4.

    This is trivial.

  5.

    If \(\ker (A)\neq \{0\}\), then by Proposition 12(4), we have that \(\mathcal{R}+\ker (A)\subseteq \mathcal{R}\), which means that \(\mathcal{R}\) is not bounded. Conversely, suppose that \(\ker (A)=\{0\}\). Consider the continuous linear operator

    $$ \begin{aligned} T:\ell _{2}^{n}& \to \ell _{\infty }^{m} \\ \psi &\mapsto T(\psi ):=A\psi .\end{aligned} $$
    (13)

    Let \(S:T(\ell _{2}^{n})\to \ell _{2}^{n}\) be the linear inverse of T. Notice that S is continuous because \(T(\ell _{2}^{n})\) is finite-dimensional. Next,

    $$ \mathcal{R}= T^{-1} \bigl(\mathsf{B}_{\ell _{\infty }^{m}} \bigl(b,D \Vert b \Vert _{\infty } \bigr) \bigr)=S \bigl(\mathsf{B}_{\ell _{\infty }^{m}} \bigl(b,D \Vert b \Vert _{\infty } \bigr)\cap T\bigl(\ell _{2}^{n} \bigr) \bigr).$$

    The last set in the previous chain of equalities is bounded in \(\ell _{2}^{n}\) because of the continuity of S. □

Now that we know that (2) is an MNP, we will search for a solution of (2). For this, we will rely on Theorem 8.

Corollary 13

Consider the optimization problem (2). Suppose that \(0< D<1\). Consider the continuous linear operator T given in (13). If there exists \(\psi _{0}\in \mathbb{R}^{n}\) such that \(A\psi _{0}=b\) and \(\frac{\psi _{0}}{\| \psi _{0}\|_{2}}\in \operatorname{suppv} (T )\), then \((1-D)\psi _{0}\in \mathrm{sol}\textit{(2)}\).

Proof

Consider the MNP

$$ \textstyle\begin{cases} \min \Vert \psi \Vert _{2}, \\ \psi \in T^{-1} (\mathsf{B}_{\ell _{\infty }^{m}} (b, \Vert b \Vert _{\infty }D ) ).\end{cases} $$
(14)

Notice that the MNP (14) is precisely the MRI problem (2). Observe also that \(0<\|b\|_{\infty }D < \|b\|_{\infty }\). According to Theorem 8,

$$ (1-D)\psi _{0}=\frac{ \Vert b \Vert _{\infty }- \Vert b \Vert _{\infty }D}{ \Vert b \Vert _{\infty }} \psi _{0}\in \mathrm{sol}\text{(14)}=\mathrm{sol}\text{(2)}.$$

 □
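
To close, here is a small numerical sketch of Corollary 13 with a hypothetical matrix A rather than actual MRI coil data. For \(T:\ell _{2}^{n}\to \ell _{\infty }^{m}\) induced by A, the operator norm is the largest Euclidean norm of a row of A, and that row, normalized, is a supporting vector of T; choosing \(\psi _{0}\) along this direction and setting \(b:=A\psi _{0}\) places us exactly under the hypotheses of Corollary 13, so \((1-D)\psi _{0}\) solves (2).

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(6, 4))             # hypothetical system matrix

    # ||T|| for T : l2^n -> l_infty^m is the largest l2 norm of a row of A,
    # and the corresponding normalized row is a supporting vector of T.
    row_norms = np.linalg.norm(A, axis=1)
    i = int(np.argmax(row_norms))
    T_norm = row_norms[i]

    psi0 = 3.0 * A[i] / T_norm              # psi0 / ||psi0||_2 lies in suppv(T)
    b = A @ psi0                            # so that A psi0 = b
    D = 0.2                                 # relative tolerance, 0 < D < 1

    psi_star = (1.0 - D) * psi0             # analytic solution given by Corollary 13

    # Minimum value predicted by Theorem 8: (||b||_inf - D ||b||_inf) / ||T||.
    predicted_min = (1.0 - D) * np.linalg.norm(b, np.inf) / T_norm
    print(np.linalg.norm(psi_star), predicted_min)  # these two values coincide

    # The constraint of (2) is active at the solution, as Proposition 12(3) predicts:
    print(np.linalg.norm(A @ psi_star - b, np.inf) / np.linalg.norm(b, np.inf))  # equals D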

Availability of data and materials

Not applicable.

References

  1. Aizpuru, A., García-Pacheco, F.J.: A short note about exposed points in real Banach spaces. Acta Math. Sci. Ser. B Engl. Ed. 28(4), 797–800 (2008)

  2. Bishop, E., Phelps, R.R.: A proof that every Banach space is subreflexive. Bull. Am. Math. Soc. 67, 97–98 (1961)

  3. Bishop, E., Phelps, R.R.: The support functionals of a convex set. In: Proc. Sympos. Pure Math., vol. VII, pp. 27–35. Am. Math. Soc., Providence (1963)

  4. Bourbaki, N.: Elements of Mathematics. Topological Vector Spaces. Chapters 1–5, p. viii+364. Springer, Berlin (1987)

  5. Cobos Sanchez, C., et al.: Forward electric field calculation using BEM for time-varying magnetic field gradients and motion in strong static fields. Eng. Anal. Bound. Elem. 33(8–9), 1074–1088 (2009)

  6. Cobos-Sánchez, C., García-Pacheco, F.J., Guerrero-Rodríguez, J.M., García-Barrachina, L.: Solving an IBEM with supporting vector analysis to design quiet TMS coils. Eng. Anal. Bound. Elem. 117, 1–12 (2020)

  7. Cobos-Sánchez, C., García-Pacheco, F.J., Guerrero-Rodríguez, J.M., Hill, J.R.: An inverse boundary element method computational framework for designing optimal TMS coils. Eng. Anal. Bound. Elem. 88, 156–169 (2018)

  8. Cobos-Sánchez, C., García-Pacheco, F.J., Moreno-Pulido, S., Sáez-Martínez, S.: Supporting vectors of continuous linear operators. Ann. Funct. Anal. 8(4), 520–530 (2017)

  9. Cobos-Sánchez, C., Vilchez-Membrilla, J.A., Campos-Jiménez, A., García-Pacheco, F.J.: Pareto optimality for multioptimization of continuous linear operators. Symmetry 13, 2073–8994 (2021)

  10. Dubins, L.E.: On extreme points of convex sets. J. Math. Anal. Appl. 5, 237–244 (1962)

  11. García-Pacheco, F.J.: A solution to the faceless problem. J. Geom. Anal. 30, 3859–3871 (2020)

  12. García-Pacheco, F.J.: Relative interior and closure of the set of inner points. Quaest. Math. 43, 761–772 (2020)

  13. García-Pacheco, F.J.: Lineability of the set of supporting vectors. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 115(2), 41 (2021) 32 pp.

  14. García-Pacheco, F.J., Cobos-Sánchez, C., Moreno-Pulido, S., Sánchez-Alzola, S.: Exact solutions to \({\max_{\|x\|_{2}=1} \sum_{i=1}^{\infty }\|A_{i}x\|_{2}^{2}}\) with applications to bioengineering and statistics. Commun. Nonlinear Sci. Numer. Simul. 82, 105054 (2020)

  15. García-Pacheco, F.J., Moreno-Pulido, S., Naranjo-Guerra, E., Sánchez-Alzola, S.: Non-linear inner structure of topological vector spaces. Mathematics 9(5), 466 (2021)

  16. García-Pacheco, F.J., Naranjo-Guerra, E.: Supporting vectors of continuous linear projections. Int. J. Funct. Anal. Oper. Theory Appl. 9(3), 85–95 (2017)

  17. García-Pacheco, F.J., Naranjo-Guerra, E.: Inner structure in real vector spaces. Georgian Math. J. 27, 361–366 (2020)

  18. García-Pacheco, F.J., Puglisi, D.: Lineability of functionals and operators. Stud. Math. 201(1), 37–47 (2010)

  19. García-Pacheco, F.J., Rambla-Barreno, F., Seoane-Sepúlveda, J.B.: Q-Linear functions, functions with dense graph, and everywhere surjectivity. Math. Scand. 102(1), 156–160 (2008)

  20. James, R.C.: Characterizations of reflexivity. Stud. Math. 23, 205–216 (1963/64)

  21. Lindenstrauss, J.: On operators which attain their norm. Isr. J. Math. 1, 139–148 (1963)

  22. Marin, L., Power, H., Bowtell, R.W., Cobos-Sánchez, C., Becker, A.A., Glover, P., Jones, I.A.: Numerical solution of an inverse problem in magnetic resonance imaging using a regularized higher-order boundary element method. In: Boundary Elements and Other Mesh Reduction Methods XXIX. WIT Trans. Model. Simul., vol. 44, pp. 323–332. WIT Press, Southampton (2007)

  23. Marin, L., Power, H., Bowtell, R.W., Cobos-Sánchez, C., Becker, A.A., Glover, P., Jones, I.A.: Boundary element method for an inverse problem in magnetic resonance imaging gradient coils. Comput. Model. Eng. Sci. 23(3), 149–173 (2008)

  24. Megginson, R.E.: An Introduction to Banach Space Theory. Graduate Texts in Mathematics, vol. 183, xx+596 pp. Springer, New York (1998)

  25. Moreno-Pulido, S., García-Pacheco, F.J., Cobos-Sánchez, C., Sánchez-Alzola, A.: Exact solutions to the maxmin problem \(\max \|Ax\|\) subject to \(\|Bx\| \leq 1\). Mathematics 8(1), 85 (2020)

  26. Sánchez-Alzola, A., García-Pacheco, F.J., Naranjo-Guerra, E., Moreno-Pulido, S.: Supporting vectors for the \(\ell _{1}\)-norm and the \(\ell _{\infty }\)-norm and an application. Math. Sci. 15(2), 173–187 (2021)


Acknowledgements

This manuscript is dedicated to the beloved memory of Prof. María de los Santos Bruzón Gallego.

Funding

This work has been supported by the Research Grant PGC-101514-B-I00 awarded by the Spanish Ministry of Science, Innovation and Universities and partially funded by ERDF, by the Andalusian Research, Development and Innovation Programme (PAIDI 2020) under the Research Grant \(\text{PY}20\_01295\), and by the Research Grant FEDER-UCA18-105867 awarded by the 2014-2020 ERDF Operational Programme of the Department of Economy, Knowledge, Business and University of the Regional Government of Andalusia. The APCs have been paid by the Mathematics Department of the University of Cadiz.

Author information

Contributions

FJGP and SMP stated and proved the lemmas, theorems, and propositions of this manuscript. ASA conducted the research, proposed the problems to work on, and applied the results. All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Francisco Javier García-Pacheco.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Moreno-Pulido, S., Sánchez-Alzola, A. & García-Pacheco, F.J. Revisiting the minimum-norm problem. J Inequal Appl 2022, 22 (2022). https://doi.org/10.1186/s13660-022-02757-5
