In this section, we consider constrained extremal problems for linear inclusions restricted to a hyperplane in a Banach space. Using Proposition 2.3, we establish several equivalent characterizations of the constrained extremal solutions of linear inclusions in Banach spaces in terms of algebraic operator parts, the metric generalized inverse of a multivalued linear operator, and the dual mappings of the spaces. It follows from these results that a constrained extremal solution of a multivalued linear inclusion can be obtained from the extremal solutions of certain related multivalued linear inclusions in the same space, which are well investigated in [5] and [6].
Theorem 3.1
Let
X
and
Y
be Banach spaces, \(L\subset X\times Y\)
be a multivalued linear operator from
X
to
Y, \(N\subset X\)
be a subspace, P
be an algebraic projector from
Y
onto the subspace
\(L ( \theta)\), \(L_{S,P}\)
be any fixed algebraic operator part of
L
with respect to the projector
P. Let
$$ S=g+N\quad\textit{and}\quad A:=L_{N}, $$
where
\(g\in D ( L )\). Suppose that
\(N ( A ) \)
and
\(R ( A ) \)
are Chebyshev subspaces in
X
and
Y, respectively. Then the following are equivalent:

(1)
\(w\in D ( L ) \cap S\)
is a constrained extremal solution of the linear inclusion
\(y\in L ( x ) \)
with respect to
S;

(2)
\(k:=g-w\in D ( L ) \cap N\)
is an extremal solution of the linear inclusion
\(L_{S,P} ( g ) -y\in A ( x )\);

(3)
\(w\in D ( L ) \cap S\)
and
$$ L_{S,P} ( w ) -y\in L ( \theta ) +F_{Y}^{-1} \bigl( R ( A ) ^{\bot} \bigr); $$
(3.1)

(4)
\(g\in D ( L ) \)
such that
$$ L_{S,P} ( g ) -y\in R ( A ) \dotplus F_{Y}^{-1} \bigl( R ( A ) ^{\bot} \bigr). $$
(3.2)
Proof
(1) ⇔ (2) From Definition 2.2, \(w\in D ( L ) \cap S\) is a constrained extremal solution of the linear inclusion \(y\in L ( x )\) if and only if there exists \(z\in Y\) such that
$$ z\in L_{S} ( w ) $$
(3.3)
and
$$ \Vert y-z\Vert =\operatorname{dist} \bigl( y, R ( L_{S} ) \bigr) . $$
(3.4)
We will characterize (3.3) and (3.4).
Now, by definition, (3.3) holds if and only if
$$ w\in D ( L ) \cap S\quad\mbox{and}\quad \{ w,z \} \in L. $$
Equivalently, we see that (3.3) holds if and only if
$$ w=g-k\in D ( L ) \quad\mbox{for some }k\in N\quad\mbox{and}\quad z=L_{S,P} ( w ) +s \quad\mbox{for some }s\in L ( \theta ). $$
Since \(g\in D ( L )\) and \(w\in D ( L )\), \(k=g-w\in D ( L ) \cap N\). Hence, we see that (3.3) holds if and only if \(w=g-k\) for some \(k\in D ( L ) \cap N\) such that
$$ z=L_{S,P} ( g ) -L_{S,P} ( k ) +s\quad\mbox{for some }s\in L ( \theta ). $$
(3.5)
Next, we characterize (3.4). Note first that
$$ \Vert y-z\Vert =\bigl\Vert L_{S,P} ( g ) -y-L_{S,P} ( k ) +s\bigr\Vert $$
(3.6)
and
$$\begin{aligned} \operatorname{dist} \bigl( y, R ( L_{S} ) \bigr) =& \inf \bigl\{ \Vert y-z\Vert :z\in L ( w ), w\in D ( L ) \cap S \bigr\} \\ =&\inf \bigl\{ \bigl\Vert L_{S, P} ( g ) -y-L_{S, P} ( k ) +s \bigr\Vert :k\in D ( A ) , s\in L ( \theta ) \bigr\} \\ =&\operatorname{dist} \bigl( L_{S, P} ( g ) -y, R ( A ) \bigr). \end{aligned}$$
(3.7)
Note that, from \(s\in L ( \theta ) \) and \(k=g-w\in D ( L ) \cap N\), we have \(L ( k ) =L_{N} ( k ) =A ( k )\) and
$$ L_{S,P} ( k ) -s\in L ( \theta ) +L_{S,P} ( k ) =L_{N} ( k ) =A ( k ). $$
(3.8)
From (3.6) and (3.7), we see that (3.4) holds if and only if (3.8) holds and
$$ \bigl\Vert L_{S, P} ( g ) -y- \bigl(L_{S,P} ( k ) -s \bigr)\bigr\Vert = \operatorname{dist} \bigl( L_{S, P} ( g ) -y, R ( A ) \bigr) . $$
(3.9)
Hence, it follows that (3.4) holds if and only if k is an extremal solution of the linear inclusion
$$ L_{S,P} ( g ) -y\in A ( x ) . $$
Consequently, it follows from (3.3) and (3.4) that \(w\in D ( L ) \cap S\) is a constrained extremal solution of the linear inclusion \(y\in L ( x ) \) with respect to S if and only if \(w=g-k\) for some \(k\in D ( L ) \cap N\) such that k is an extremal solution of \(L_{S, P} ( g ) -y\in A ( x )\). This proves that (1) and (2) are equivalent.
(2) ⇒ (3). Assume that \(k:=g-w\in D ( L ) \cap N\) is an extremal solution of the linear inclusion \(L_{S,P} ( g ) -y \in A ( x )\). From (i) ⇔ (ii) in Proposition 2.9, we have \(w=g-k\) with \(k\in D ( L ) \cap N\) and
$$ L_{S,P} ( g ) -y \in A ( k ) \dotplus F_{Y}^{-1} \bigl( R ( A ) ^{\bot} \bigr) . $$
(3.10)
Let us write
$$ L_{S,P} ( g ) -y=x+z, $$
where \(x\in A ( k )\), \(z\in F_{Y}^{-1} ( R ( A ) ^{\bot} )\). Since \(A=L_{N}\), it follows that \(k\in D ( L ) \cap N\) and \(\{ k,x \} \in L\). Thus we obtain
$$ x=L_{S,P} ( k ) +s\quad\mbox{for some }s\in L ( \theta ) . $$
Consequently, we have \(w\in D ( L ) \cap S\) and
$$\begin{aligned} L_{S,P} ( w ) -y =&L_{S,P} ( g ) -L_{S,P} ( k ) -y \\ =&x+z-L_{S,P} ( k ) \\ =&s+z \\ \in&L ( \theta ) \dotplus F_{Y}^{-1} \bigl( R ( A ) ^{\bot} \bigr). \end{aligned}$$
It follows that (2) ⇒ (3).
Next, we prove that (3) ⇒ (2). Assume that (3) holds; we want to show that there exists \(k\in D ( L ) \cap N\) such that
$$ L_{S,P} ( g ) -y\in A ( k ) \dotplus F_{Y}^{-1} \bigl( R ( A ) ^{\bot} \bigr); $$
then (2) holds by (i) ⇔ (iii) in Proposition 2.9. Indeed, from (3), we write
$$ L_{S,P} ( w ) -y=s+z \quad\mbox{where }s\in L ( \theta ) \mbox{ and }z \in F_{Y}^{-1} \bigl( R ( A ) ^{\bot} \bigr) . $$
Note that \(w\in D ( L ) \cap S\) and \(w=g-k\) for some \(k\in D ( L ) \cap N=D ( A )\). Then
$$\begin{aligned} L_{S,P} ( g ) -y =&L_{S,P} ( w ) +L_{S,P} ( k ) -y \\ =&s+L_{S,P} ( k ) +z\in A ( k ) \dotplus F_{Y}^{-1} \bigl( R ( A ) ^{\bot} \bigr) \end{aligned}$$
since \(s+L_{S,P} ( k ) \in L ( \theta ) +L_{S,P} ( k ) =L_{N} ( k ) =A ( k )\). Thus (3) ⇒ (2), and it follows that (2) ⇔ (3).
(2) ⇔ (4) Assume (2) is true, i.e.
\(k:=g-w\in D ( L ) \cap N\) is an extremal solution of the linear inclusion \(L_{S,P} ( g ) -y\in A ( x )\). By (i) ⇔ (iii) in Proposition 2.9, we have \(L_{S,P} ( g ) -y\in A ( k ) \dotplus F_{Y}^{-1} ( R ( A ) ^{\bot} ) \subset R ( A ) \dotplus F_{Y}^{-1} ( R ( A ) ^{\bot} )\). This proves that (2) ⇒ (4). Conversely, assume (4) is true, i.e.
\(L_{S,P} ( g ) -y\in R ( A ) \dotplus F_{Y}^{-1} ( R ( A ) ^{\bot} )\); then there exist \(y_{1}\in R ( A )\) and \(y_{2}\in F_{Y}^{-1} ( R ( A ) ^{\bot} ) \) such that \(L_{S,P} ( g ) -y=y_{1}+y_{2}\). Since \(y_{1}\in R ( A )\), there exists \(k\in D ( A ) =D ( L ) \cap N\) such that \(y_{1}\in A ( k )\); hence
$$ L_{S,P} ( g ) -y=y_{1}+y_{2}\in A ( k ) \dotplus F_{Y}^{-1} \bigl( R ( A ) ^{\bot} \bigr). $$
Again, by (i) ⇔ (iii) in Proposition 2.9, \(k\in D ( L ) \cap N\) is an extremal solution of the linear inclusion \(L_{S,P} ( g ) -y\in A ( x ) \). Thus (4) ⇒ (2). □
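The core of the equivalence (1) ⇔ (2) can be illustrated numerically in the special case where X and Y are finite-dimensional Hilbert spaces and L is single-valued, so that \(L ( \theta ) = \{ \theta \}\), \(L_{S,P}=L\), and \(F_{Y}\) is the identity: minimizing \(\Vert y-L ( w ) \Vert\) over \(w\in S=g+N\) is the same problem as minimizing \(\Vert ( L ( g ) -y ) -A ( k ) \Vert\) over \(k\in N\) with \(A=L_{N}\), via \(k=g-w\). The following is a minimal sketch under these simplifying assumptions; the matrices, the basis B for N, and the use of NumPy are illustrative choices, not part of the theorem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: a single-valued L : R^4 -> R^3 given by a matrix,
# a subspace N = range(B), and the coset S = g + N.
L = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))  # columns span N
g = rng.standard_normal(4)
y = rng.standard_normal(3)

# (1): minimize ||y - L w|| over w in S, parametrizing w = g + B c.
c = np.linalg.lstsq(L @ B, y - L @ g, rcond=None)[0]
w = g + B @ c
res1 = np.linalg.norm(y - L @ w)

# (2): k := g - w lies in N and minimizes ||(L g - y) - L k|| over k in N,
# i.e. k is an extremal solution for L(g) - y and A = L_N.
k = g - w
res2 = np.linalg.norm((L @ g - y) - L @ k)

print(np.isclose(res1, res2))
```

The two extremal values coincide, reflecting the substitution \(w=g-k\) used in the proof.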
Theorem 3.2
Let the assumptions of Theorem
3.1
hold. Then, for any
\(y\in Y\), the set of all constrained extremal solutions of the linear inclusion
\(y\in L ( x ) \)
with respect to
S, denoted by
\(\Omega_{y}\), is not empty and is given by
$$ \Omega_{y}= \bigl\{ g-A^{\#} \bigl[ L_{S,P} ( g ) -y \bigr] \bigr\} \dotplus N ( A ) , $$
(3.11)
where
\(A^{\#}\)
is the metric generalized inverse of the multivalued linear operator
A, \(A=L_{N}\), \(S=g+N\).
Proof
(i) Since \(R ( A )\) is a Chebyshev subspace in Y, by Lemma 2.4, we have
$$ Y=R ( A ) \dotplus F_{Y}^{-1} \bigl( R ( A ) ^{\bot } \bigr). $$
For any \(y\in Y\), we therefore have \(L_{S,P} ( g ) -y\in R ( A ) \dotplus F_{Y}^{-1} ( R ( A ) ^{\bot} )\). From (1) ⇔ (4) in Theorem 3.1, we see that \(\Omega_{y}\neq\emptyset\).
(ii) For any \(y\in Y\), \(\Omega_{y}\neq\emptyset\) by (i).
Let \(w\in\Omega_{y}\); that is,
\(w\in D ( L ) \cap S\) is a constrained extremal solution of the linear inclusion \(y\in L ( x ) \) with respect to S. By (1) ⇔ (2) in Theorem 3.1, \(w\in\Omega_{y}\) if and only if \(k:=g-w\in D ( L ) \cap N\) is an extremal solution of the linear inclusion \(L_{S,P} ( g ) -y\in A ( x )\). By (ii) in Proposition 2.10, we see that \(w\in\Omega_{y}\) if and only if
$$\begin{aligned} w \in& \bigl\{ g-k:k\in D ( A ) \mbox{ is an extremal solution of } L_{S,P} ( g ) -y\in A ( x ) \bigr\} \\ =& \bigl\{ g-k:k\in A^{\#} \bigl[ L_{S,P} ( g ) -y \bigr] +N ( A ) \bigr\} \\ =& \bigl\{ g-A^{\#} \bigl[L_{S,P} ( g ) -y \bigr] \bigr\} \dotplus N ( A ), \end{aligned}$$
where \(A^{\#}\) is the metric generalized inverse of the multivalued linear operator A. Hence, it follows that
$$ \Omega_{y}= \bigl\{ g-A^{\#} \bigl[L_{S,P} ( g ) -y \bigr] \bigr\} \dotplus N ( A ). $$
□
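Formula (3.11) can likewise be sanity-checked in the finite-dimensional Hilbert setting, where the metric generalized inverse \(A^{\#}\) reduces to the Moore–Penrose inverse: the element \(g-A^{\#} [ L_{S,P} ( g ) -y ]\) should coincide with the constrained least-squares solution computed directly. A hedged sketch under these assumptions (random matrices and an orthonormal basis B for N are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

L = rng.standard_normal((3, 4))
B, _ = np.linalg.qr(rng.standard_normal((4, 2)))  # orthonormal basis of N
g = rng.standard_normal(4)
y = rng.standard_normal(3)

# A = L_N acts as L @ B in the coordinates of N; with B orthonormal,
# A^# applied to v yields the minimal-norm k in N minimizing ||v - L k||.
k = B @ np.linalg.pinv(L @ B) @ (L @ g - y)
w_star = g - k                     # candidate from formula (3.11)

# Direct constrained least squares over S = g + N for comparison.
c = np.linalg.lstsq(L @ B, y - L @ g, rcond=None)[0]
w_direct = g + B @ c

print(np.allclose(w_star, w_direct))
```

Here N(A) is trivial for generic data, so the coset in (3.11) collapses to the single point `w_star`.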
Corollary 3.3
Let
X, Y, and
Z
be reflexive strictly convex Banach spaces. Let
\(A\subset X\times Z\)
be a linear relation and
\(L\subset X\times Y\)
be a single-valued linear operator. Assume that
\(N ( A )\)
is closed in
X. Let
\(z\in R ( A ) \dotplus F_{Z}^{-1} ( R ( A ) ^{\bot} ) \)
and
\(y\in Y\)
be given. Define
$$ S:=A^{\#} ( z ) +N ( A ),\qquad T:=L_{N ( A )}. $$
Assume that
\(A^{\#} ( z ) \in D ( L )\). Then we have the following:
(I) The following statements are equivalent:

(i)
\(w \in D ( L ) \cap S\)
is a constrained extremal solution of the linear operator equation
\(L(x)=y\)
with respect to
S;

(ii)
\(k:=A^{\#} ( z ) -w \in D ( L ) \cap N ( A ) \)
is an extremal solution of the linear operator equation
$$ L \bigl(A^{\#} ( z ) \bigr)-y=T ( x ); $$

(iii)
\(w\in D ( L ) \cap S\)
and
$$ L ( w ) -y\in F_{Y}^{-1} \bigl( R ( T ) ^{\bot } \bigr). $$
(II) \(L ( x ) =y\)
has a constrained extremal solution with respect to
S
if and only if
$$ L \bigl(A^{\#} ( z ) \bigr)-y\in R ( T ) \dotplus F_{Y}^{-1} \bigl( R ( T ) ^{\bot} \bigr). $$
(3.12)
In particular, if
\(R ( T ) \)
is closed in
Y, then
\(L ( x ) =y\)
always has a constrained extremal solution with respect to
S.
(III) Assume that (3.12) holds and that
\(N ( T ) =N ( A ) \cap N ( L ) \)
is closed in
X. Then the set of all constrained extremal solutions of
\(L ( x ) =y\)
with respect to
S
is given by
$$ \Omega_{y}= \bigl\{ A^{\#} ( z ) -T^{M} \bigl[L \bigl( A^{\#} ( z ) \bigr) -y \bigr] \bigr\} \dotplus \bigl( N ( A ) \cap N ( L ) \bigr) . $$
The main results, (i)–(iii) in Theorem 3.1 and (i)–(ii) in Theorem 3.2 of [7], are special cases of Theorem 3.1 and Theorem 3.2 above. We state them as the following corollary.
Corollary 3.4
[7]
Let
\(H_{1}\)
and
\(H_{2}\)
be Hilbert spaces. Let
\(L\subset H_{1}\times H_{2}\)
and
\(N\subset H_{1}\)
be linear manifolds. Let
\(L_{S,P}\)
be an arbitrary but fixed algebraic operator part of L corresponding to an algebraic projector P of
\(H_{2}\)
onto
\(L ( \theta )\). Let
$$ S:=g\dotplus N\quad\textit{and}\quad M:=L_{N}, $$
where
\(g\in D ( L )\). Then we have:
(I) for fixed
\(h\in H_{2}\), the following statements are equivalent:

(i)
w
is a restricted least-squares solution (LSS) of the linear inclusion
\(h\in L ( x ) \)
with respect to
S.

(ii)
\(k:=g-w\)
is an LSS of
$$ L_{S,P} ( g ) -h\in M ( x ). $$

(iii)
\(w\in S\cap D ( L ) \)
and
$$ L_{S,P} ( g ) -h\in L ( \theta ) +N \bigl( M^{\ast } \bigr), $$
where
\(M^{\ast}:= \{ ( x,y ) : ( y,-x ) \in M^{\bot } \} \)
is the adjoint subspace of the linear manifold
\(M\subset H_{1}\times H_{2}\), and
\(M^{\bot}\)
is the orthogonal complement of
M
in Hilbert space
\(H_{1}\times H_{2}\).

(iv)
\(g\in D ( L ) \)
such that
$$ L_{S,P} ( g ) -h\in R ( M ) \dotplus N \bigl( M^{\ast } \bigr). $$
In particular, if
\(R ( M )\)
is closed, then a restricted LSS exists for each
\(h\in H_{2}\).
(II) The set of all restricted LSS of the linear inclusion
\(h\in L ( x ) \)
with respect to
S, denoted by
\(\Omega_{h}\), is not empty and is given by
$$ \Omega_{h}= \bigl\{ g-M^{\#} \bigl[ L_{S,P} ( g ) -h \bigr] \bigr\} \dotplus N ( M ). $$
Proof
In Theorem 3.1, take \(X=H_{1}\), \(Y=H_{2}\), and \(A=M=L_{N}\). Since \(H_{1}\), \(H_{2}\), and \(H_{1}\times H_{2}\) are Hilbert spaces, \(F_{Y}=I\) is the identity operator of \(H_{2}\), and \(N ( M^{\ast} ) =R ( M ) ^{\bot}=R ( A ) ^{\bot}\) (see [13]). Hence (I) in Corollary 3.4 follows from Theorem 3.1, and (II) follows from Theorem 3.2. □
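The identity \(N ( M^{\ast} ) =R ( M ) ^{\bot}\) used in this proof is easy to verify numerically for a single-valued operator between finite-dimensional Hilbert spaces, where the adjoint is the matrix transpose. The following sketch is only an illustration under these simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 3))  # single-valued M : R^3 -> R^4 as a matrix

# Orthonormal basis of N(M^T) from the SVD: the left singular vectors
# beyond the rank span the null space of the adjoint.
U, s, Vt = np.linalg.svd(M)
r = int(np.sum(s > 1e-12))
null_adj = U[:, r:]

# N(M^T) = R(M)^perp: each such vector is orthogonal to every column of M.
print(np.allclose(null_adj.T @ M, 0.0))
```

For generic data the rank is 3, so `null_adj` is a single unit vector spanning the one-dimensional orthogonal complement of R(M).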
Remark 3.5
In [7], the authors applied Theorem 3.1 there (i.e. Corollary 3.4) to concrete singular optimal control problems involving ordinary differential equations with general boundary conditions, where the control space and the state space are the Hilbert spaces \(L_{2}^{m}=L_{2}([a,b],\mathbb{C}^{m})\) and \(L_{2}^{n}=L_{2}([a,b],\mathbb{C}^{n})\). For the same problem with the control space \(L_{p}^{m}=L_{p}([a,b],\mathbb{C}^{m})\) and the state space \(L_{p}^{n}=L_{p}([a,b],\mathbb{C}^{n})\) (\(1< p<\infty\)), Theorem 3.1 of [7] no longer applies, whereas Theorem 3.1 and Theorem 3.2 of this paper do.
Remark 3.6
In Theorem 3.1, the three equivalent characterizations of a constrained extremal solution of the linear inclusion \(y\in L ( x ) \) with respect to S are expressed in terms of algebraic operator parts and the generalized orthogonal complement of \(R ( A )\). In characterization (2) of Theorem 3.1, the constrained extremal solution is equivalent to an unconstrained, but modified extremal solution. Characterization (3) is a generalized form of the normal equation; we call
$$ L_{S,P} ( w ) -y\in L ( \theta ) +F_{Y}^{-1} \bigl( R ( A ) ^{\bot} \bigr) $$
the ‘normal inclusion’ for the given inclusion \(y\in L ( x )\). In the case of a single-valued operator with domain \(H_{1}\) (\(X=H_{1}\) and \(Y=H_{2}\) are Hilbert spaces with \(( H_{2} ) ^{\ast }=H_{2}\)), take \(g=\theta\), \(N=H_{1}\), \(A=L\), and suppose that \(R ( L ) \) is closed in \(H_{2}\). Then \(F_{Y}^{-1} ( R ( A ) ^{\bot} ) =R ( A ) ^{\bot }=N ( A^{\ast} ) \) by the Banach closed range theorem (see the theorem in [13]) and \(L ( \theta ) = \{ \theta \} \); hence the ‘normal inclusion’ reduces to
$$ A ( w ) -y\in N \bigl( A^{\ast} \bigr) ,\quad \textit{i.e. } A^{\ast} \bigl[ A ( w ) -y \bigr] =\theta, $$
which gives the normal equation.
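For a concrete instance of this reduction, take A to be a full-column-rank matrix between Euclidean spaces; the minimizer of \(\Vert A ( w ) -y\Vert\) is then characterized exactly by \(A^{\ast} [ A ( w ) -y ] =\theta\), i.e. by the normal equation \(A^{\ast}A ( w ) =A^{\ast} ( y )\). A minimal sketch (the matrix and data are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))  # full column rank almost surely, so R(A) is closed
y = rng.standard_normal(5)

# Solve the normal equation A^T A w = A^T y.
w = np.linalg.solve(A.T @ A, A.T @ y)

# The residual A w - y lies in N(A^*) = R(A)^perp.
print(np.allclose(A.T @ (A @ w - y), 0.0))

# Agreement with a generic least-squares solver.
w_ls = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.allclose(w, w_ls))
```

Both checks print True: the normal-equation solution is the least-squares solution, and its residual is orthogonal to the range of A.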