On local spectral properties of operator matrices

In this paper, we focus on a 2 × 2 operator matrix T_{ϵ_k} of the form

T_{ϵ_k} = ( A      C
            ϵ_k D  B ),

where {ϵ_k} is a positive sequence such that lim_{k→∞} ϵ_k = 0.
We first explore when T_{ϵ_k} has several local spectral properties, such as the single-valued extension property, property (β), and decomposability. We next study the relationship between some spectra of T_{ϵ_k} and the spectra of its diagonal entries, and we find hypotheses under which T_{ϵ_k} satisfies Weyl's theorem and a-Weyl's theorem. Finally, we give some conditions under which such an operator matrix T_{ϵ_k} has a nontrivial hyperinvariant subspace.


Introduction
Let L(H) denote the algebra of bounded linear operators on a separable complex Hilbert space H. Let {T}', the commutant of T, be the collection of all bounded linear operators that commute with T. A subspace G ⊂ H is invariant for T ∈ L(H) if the inclusion TG ⊂ G holds, and is hyperinvariant for T if the inclusion SG ⊂ G holds for all S ∈ {T}'. The hyperinvariant subspace problem asks whether every operator on a separable complex Hilbert space has a nontrivial hyperinvariant subspace. This is one of the long-standing unresolved problems in operator theory, and it has attracted the interest of many authors.
For the study of this problem, in 2011, H. J. Kim [8] proved that, for a 2 × 2 upper triangular operator matrix T whose entries T_1, T_2, and T_3 are arbitrary operators in L(H) such that T_1 is either a compact operator with T_1 ≠ 0 or a normal operator with T_1 ≠ λI, at least one of T and T_2 has a nontrivial hyperinvariant subspace. As this result indicates, in the case of a 2 × 2 upper triangular operator matrix some results are known, but in the case of a full 2 × 2 operator matrix it is very difficult to settle the invariant subspace problem. So we focus on the matrix T_{ϵ_k} as a variation of the 2 × 2 upper triangular operator matrix, and we study some conditions under which a 2 × 2 operator matrix T_{ϵ_k} has a nontrivial hyperinvariant subspace.
We now provide a simple outline of the paper. We first study the local spectral theory of operator matrices (cf. [3] and [9]). In particular, we consider the case when the (2, 1)-entry of a 2 × 2 operator matrix approaches zero. In addition, we give the relationship between some spectra of 2 × 2 operator matrices and the spectra of their diagonal entries, and we find some hypotheses under which such operator matrices T_{ϵ_k} satisfy Weyl's theorem and a-Weyl's theorem.

Preliminaries
We briefly review some notions of local spectral theory which are used in this paper; we refer to [10] for more detailed information. In general, it is known that T is decomposable if and only if both T and its adjoint T* possess property (β) [1, 10]. Now we introduce some Weyl type theorems related to the definitions of various spectra (see [12] for more details). For these, we first fix some notation needed in this paper. If T ∈ L(H), we shall write ker(T) (or N(T)) and ran(T) (or R(T)) for the null space and the range of T, respectively. The family {ker(T^k)} forms an ascending sequence of subspaces for T ∈ L(H) and k ∈ N, and we call the smallest nonnegative integer k for which ker(T^k) = ker(T^{k+1}) holds the ascent of T. Likewise, the family {ran(T^k)} forms a descending sequence for k ∈ N, and the smallest nonnegative integer k for which ran(T^k) = ran(T^{k+1}) is called the descent of T. An operator T ∈ L(H) is called upper semi-Fredholm (resp., lower semi-Fredholm) if it has both finite-dimensional kernel and closed range (resp., both finite-dimensional cokernel and closed range). An operator T ∈ L(H) that is either upper or lower semi-Fredholm is called semi-Fredholm, and its index is given by ind(T) := dim ker(T) − dim ker(T*). When both dim ker(T) and dim ker(T*) are finite, T is called Fredholm. If T ∈ L(H) is a Fredholm operator satisfying ind(T) = 0, then it is called Weyl, and if T is a Fredholm operator with finite ascent and descent, then it is called Browder.
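The ascent, descent, and index above can be made concrete in finite dimensions, where dim ker(T^k) = n − rank(T^k). The following sketch (our own illustration with a nilpotent Jordan block, not an operator from the paper; the helper name `ascent_descent` is ours) watches these quantities stabilize:

```python
import numpy as np

def ascent_descent(T, max_k=10, tol=1e-10):
    """Find the ascent and descent of a finite matrix T by watching
    when dim ker(T^k) = n - rank(T^k) and rank(T^k) stabilize."""
    n = T.shape[0]
    ranks = [n]                       # rank(T^0) = rank(I) = n
    P = np.eye(n)
    for _ in range(max_k + 1):
        P = P @ T
        ranks.append(np.linalg.matrix_rank(P, tol=tol))
    kernels = [n - r for r in ranks]  # dim ker(T^k) for k = 0, 1, ...
    ascent = next(k for k in range(max_k) if kernels[k] == kernels[k + 1])
    descent = next(k for k in range(max_k) if ranks[k] == ranks[k + 1])
    return ascent, descent

# A 3x3 nilpotent Jordan block: ker(N^k) strictly grows until N^3 = 0,
# so both the ascent and the descent equal 3.
N = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
print(ascent_descent(N))              # (3, 3)

# On a finite-dimensional space every operator is Fredholm with
# ind(T) = dim ker(T) - dim ker(T*); here 1 - 1 = 0.
ind = (3 - np.linalg.matrix_rank(N)) - (3 - np.linalg.matrix_rank(N.conj().T))
print(ind)                            # 0
```

On an infinite-dimensional space the index can of course be nonzero (the unilateral shift has index −1), which is where the Weyl and Browder spectra become nontrivial.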
If T ∈ L(H), we shall write σ_p(T), σ_s(T), σ_a(T), σ(T), σ_e(T), σ_le(T), and σ_re(T) for the point spectrum, the surjective spectrum, the approximate point spectrum, the spectrum, the essential spectrum, the left essential spectrum, and the right essential spectrum of T, respectively. The Weyl spectrum is σ_w(T) := {μ ∈ C : T − μI is not Weyl} and the Browder spectrum is σ_b(T) := {μ ∈ C : T − μI is not Browder}, where I is the identity operator on H. We write K(H) for the set of all compact operators on H and recall two further spectra: the Weyl essential approximate point spectrum σ_ea(T) := {μ ∈ C : T + C − μI is not bounded below for all C ∈ K(H)} and the Browder essential approximate point spectrum σ_ab(T) := {μ ∈ C : T + C − μI is not bounded below for all C ∈ K(H) with TC = CT}. Evidently, we get the inclusions σ_ea(T) ⊆ σ_ab(T) ⊆ σ_a(T). Let iso K be the collection of all isolated points of a subset K of C. We write π_00(T) := {λ ∈ iso σ(T) : 0 < dim ker(T − λ) < ∞}, and we denote by p_00(T) := σ(T) \ σ_b(T) the collection of Riesz points of T. We say that Weyl's theorem is obeyed for T provided σ(T) \ σ_w(T) = π_00(T), and that Browder's theorem is obeyed for T provided σ(T) \ σ_w(T) = p_00(T), equivalently, if σ_w(T) = σ_b(T). We say that a-Weyl's theorem is obeyed for T provided σ_a(T) \ σ_ea(T) = π^a_00(T), and that a-Browder's theorem is obeyed for T provided σ_a(T) \ σ_ea(T) = p^a_00(T), where π^a_00(T) := {λ ∈ iso σ_a(T) : 0 < dim ker(T − λ) < ∞} and p^a_00(T) := σ_a(T) \ σ_ab(T). Then it is well known that

a-Weyl's theorem  ⇒  a-Browder's theorem
       ⇓                      ⇓
 Weyl's theorem   ⇒   Browder's theorem

Main results
In this section, we study 2 × 2 operator matrices; in particular, we consider the case when their (2, 1)-entry approaches zero. We begin our program with the following theorem. Then, since A has the single-valued extension property, f_1(λ) = 0. Hence T_{ϵ_k} has the single-valued extension property.
(ii) Let T_{ϵ_k} have the single-valued extension property and (B − λ)f_2(λ) = 0, where f_2 is an analytic function. Since T_{ϵ_k} has the single-valued extension property, it follows from (3) that C^{m−2}f_2(λ) = 0. By induction, we have f_2(λ) = 0. Hence B has the single-valued extension property.

Corollary 3.2 Let

T_{ϵ_k} = ( A      C
            ϵ_k D  B ),

where A, B, C, D ∈ L(H) and {ϵ_k} is a positive sequence such that lim_{k→∞} ϵ_k = 0. If A and B have the single-valued extension property, then the following inclusions hold.
Proof (i) We know that T_{ϵ_k} has the single-valued extension property from Theorem 3.1.
Then there exists a neighborhood D of λ_0 and an analytic function on D; similarly, there exists a neighborhood G of λ_0 and an analytic function on G, and we get the following inclusions for all x_1, x_2 ∈ ℓ^2(N).
We next investigate some relations among the spectra, the point spectra, and the approximate point spectra of A, B, and T_{ϵ_k}, respectively.
(ii) If both A* and B* have the single-valued extension property, then σ_a(A) ∪ σ_a(B) = σ_a(T_{ϵ_k}). Proof (i) From Theorem 3.1, we know that T_{ϵ_k} has the single-valued extension property. For the converse, we suppose that γ ∉ σ(A) ∪ σ(B) and that lim_{n→∞}(T_{ϵ_k} − γ)(x_n ⊕ y_n) = 0. Since B − γ is invertible, lim sup_{n→∞} ‖y_n‖ ≤ ϵ_k ‖D‖ ‖(B − γ)^{-1}‖ (lim sup_{n→∞} ‖x_n‖). Since lim_{k→∞} ϵ_k = 0, we have lim sup_{n→∞} ‖y_n‖ = 0, and so lim_{n→∞} y_n = 0.
From these arguments, the second equality trivially holds.
(ii) Suppose that both A* and B* have the single-valued extension property. Then we can prove that T*_{ϵ_k} also has the single-valued extension property by a method similar to the proof of Theorem 3.1. It is known that σ_a(T) = σ(T) for every T ∈ L(H) whose adjoint T* has the single-valued extension property. This means that the equality σ_a(A) ∪ σ_a(B) = σ_a(T_{ϵ_k}) holds by (i).
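In finite dimensions, where every spectrum reduces to the set of eigenvalues, the relation between σ(T_{ϵ_k}) and σ(A) ∪ σ(B) can be watched numerically: the spectrum of the upper triangular limit ( A C ; 0 B ) is exactly eig(A) ∪ eig(B), and the eigenvalues of T_{ϵ_k} approach it as ϵ_k → 0. A minimal sketch with randomly chosen blocks (our own illustration; the helper `max_dist` is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))

# sigma(T_0) for the upper triangular limit T_0 = [[A, C], [0, B]]
# is exactly eig(A) union eig(B).
target = np.concatenate([np.linalg.eigvals(A), np.linalg.eigvals(B)])

def max_dist(ev, ref):
    """Largest distance from a point of ev to the set ref."""
    return max(min(abs(e - t) for t in ref) for e in ev)

for eps in [1e-1, 1e-3, 1e-6]:
    T = np.block([[A, C], [eps * D, B]])
    d = max_dist(np.linalg.eigvals(T), target)
    print(f"eps = {eps:.0e}: max distance to eig(A) ∪ eig(B) = {d:.1e}")
```

The rate of convergence depends on the conditioning of the eigenvalues of T_0; for defective eigenvalues it can be as slow as ϵ^{1/m}.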
It is well known that, if A and B have property (β), then ( A C ; 0 B ) has property (β) without any further conditions. However, a 2 × 2 operator matrix all of whose entries are nonzero and whose (2, 1)-entry is either μI for some nonzero constant μ, or ϵ_k I for a positive sequence {ϵ_k} with lim_{k→∞} ϵ_k = 0, may fail to have property (β) even though its diagonal entries have property (β) (see (8)). We now study property (β) and the decomposability of such a 2 × 2 operator matrix T_{ϵ_k}.

Theorem 3.5 Let

T_{ϵ_k} = ( A      C
            ϵ_k D  B ),

where A, B, C, D ∈ L(H) and {ϵ_k} is a positive sequence such that lim_{k→∞} ϵ_k = 0, and suppose that sup_n ‖f_{n,1}‖_K < ∞. We observe that, since sup_n ‖f_{n,1}‖_K < ∞, letting ϵ_k → 0 yields the required estimate. Hence T_{ϵ_k} has property (β).
(ii) If A and B are decomposable, then A, A*, B, and B* have property (β). Since A and B have property (β), it follows from (1) that T_{ϵ_k} has property (β). Note that

( B*      C*
  ϵ_k D*  A* )

is unitarily equivalent to T*_{ϵ_k}. Since A* and B* also have property (β), it follows from (i) that this matrix has property (β), and hence T*_{ϵ_k} has property (β). Therefore T_{ϵ_k} is decomposable.
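The unitary equivalence invoked here is implemented by the flip U = ( 0 I ; I 0 ), and in a finite-dimensional model it can be verified directly; a quick numerical check (our own illustration, with arbitrary blocks):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
eps = 0.5
I = np.eye(n)
Z = np.zeros((n, n))

T = np.block([[A, C], [eps * D, B]])
U = np.block([[Z, I], [I, Z]])       # self-adjoint unitary flip

# The block adjoint is T* = [[A*, eps D*], [C*, B*]]; conjugating by U
# swaps its rows and columns, giving [[B*, C*], [eps D*, A*]].
lhs = U @ T.conj().T @ U
rhs = np.block([[B.conj().T, C.conj().T],
                [eps * D.conj().T, A.conj().T]])
print(np.allclose(lhs, rhs))         # True
```

Since U is unitary, lhs and T* share all spectral data, which is exactly what the proof uses.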
From these arguments on the local spectral properties of the operator matrices T_{ϵ_k}, we obtain further corollaries. Proof (i) If A and B have property (β), then it follows from Theorem 3.5 that T_{ϵ_k} has property (β). Since σ(T_{ϵ_k}) has nonempty interior, T_{ϵ_k} has a nontrivial invariant subspace by [5, Theorem 2.1].
(ii) If A and B are hyponormal, then they are subscalar by [14], and hence they have property (β). It is then obvious from Theorem 3.5 that T_{ϵ_k} has property (β).
(iii) If A and B are compact or normal, then they are decomposable (see [10]), and it follows from Theorem 3.5 that T_{ϵ_k} is also decomposable.
From [2], if T_{ϵ_k} = ( A C ; ϵ_k D B ) on H ⊕ H and R(C) is closed, then we have the matrix representation (9) with respect to the decomposition induced by N(C) and N(C)^⊥. Here, P_{N(C)} (resp., P_{N(C)^⊥}) denotes the projection of K onto N(C) (resp., N(C)^⊥). We now study the next theorem in the sense of the representation (9), and we mention that here the sequence {ϵ_k} need not converge to 0.
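In a finite-dimensional model, the projections P_{N(C)} and P_{N(C)^⊥} appearing in (9) can be computed from an orthonormal basis of the null space obtained via the SVD; a minimal sketch (the helper name `null_projection` is ours):

```python
import numpy as np

def null_projection(C, tol=1e-10):
    """Orthogonal projection onto N(C), built from the SVD of C."""
    _, s, Vh = np.linalg.svd(C)
    rank = int(np.sum(s > tol))
    V_null = Vh[rank:].conj().T      # orthonormal basis of N(C)
    return V_null @ V_null.conj().T

# A rank-1 matrix on R^3, so N(C) is 2-dimensional.
C = np.outer([1., 2., 0.], [1., 0., 1.])
P = null_projection(C)               # P_{N(C)}
Q = np.eye(3) - P                    # P_{N(C)^perp}

print(np.allclose(C @ P, 0))         # C annihilates N(C): True
print(np.allclose(P @ P, P))         # P is idempotent: True
print(int(round(np.trace(P))))       # dim N(C) = 2
```

Compressing the entries of an operator matrix by P and Q then produces exactly the kind of refined block representation used in (9).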

Let {ϵ_k} be a bounded sequence. If C = ϵ_k I and A is self-adjoint, then T_{ϵ_k} is decomposable.
Proof Since A is self-adjoint, so is A_1 = P_{R(C)^⊥} A|_H. Thus it has property (β). Since {ϵ_k} is a bounded sequence, B_1 = B|_{N(C)} is decomposable, and it follows from Theorem 3.7 that T_{ϵ_k} is decomposable.
where {ϵ_k} is a bounded sequence and U is the unilateral shift given by Ue_n = e_{n+1} on ℓ^2(N) for n ∈ N. Then B_1 = B|_{N(U)} is decomposable, and this is equivalent to T_{ϵ_k} being decomposable by Theorem 3.7. Now we address Weyl type theorems for T_{ϵ_k}; we start with the following lemma. Proof By Theorem 3.1, we know that T_{ϵ_k} has the single-valued extension property. Then it is obvious that σ_w(T_{ϵ_k}) = σ_b(T_{ϵ_k}) and σ_ea(T_{ϵ_k}) = σ_ab(T_{ϵ_k}) (see [1]). Hence (a-)Browder's theorem holds for T_{ϵ_k}. Proof (i) If A and B have the single-valued extension property, then it follows from Lemma 3.10 that σ(T_{ϵ_k}) \ σ_w(T_{ϵ_k}) = p_00(T_{ϵ_k}) ⊆ π_00(T_{ϵ_k}). To show the reverse inclusion, we suppose that 0 ∈ π_00(T_{ϵ_k}) without loss of generality. Since Weyl's theorem holds for both A and B, it follows from Theorem 3.4 that 0 ∉ σ_w(A) ∪ σ_w(B). Set T_0 := ( A C ; 0 B ). Then T_0 is Weyl by [11, Lemma 3]. This implies, by [13, Theorem 1], that if T_{ϵ_k} → T_0 in norm, then lim sup_{k→∞} σ_w(T_{ϵ_k}) ⊂ σ_w(T_0). Hence 0 ∉ lim sup_{k→∞} σ_w(T_{ϵ_k}). So there exists δ_1 > 0 such that T_{ϵ_k} − μI is Weyl for every μ ∈ D(0, δ_1/2), the open disc with center 0 and radius δ_1/2. Since the Weyl operators form an open set, there exists δ_2 > 0 such that ‖T_{ϵ_k} − μI − T_0‖ < δ_2/2. We choose δ := min{δ_1, δ_2} > 0. Then T_{ϵ_k} is Weyl but not invertible. Consequently, Weyl's theorem holds for T_{ϵ_k}.
It is known that σ(S) = σ_a(S) and σ_w(S) = σ_ea(S) provided S* ∈ L(H) has the single-valued extension property, by [1]. On the other hand, A* and B* have the single-valued extension property and satisfy Weyl's theorem. This implies that 0 ∉ σ_w(A) ∪ σ_w(B). Then T_0 is Weyl.
We say that T ∈ L(H) is normal if T*T = TT*, hyponormal if T*T ≥ TT*, and algebraically hyponormal if there exists a nonconstant polynomial p such that p(T) is hyponormal. It is known that every normal operator is hyponormal, and every hyponormal operator is algebraically hyponormal. From these notions, we have the following corollary. (i) If A and B ..., where γ is any scalar in C. Let {T_γ}' be the collection of operators commuting with T_γ. We recall that a transitive subalgebra of L(H) is one having no nontrivial invariant subspace.

Theorem 3.13 Let

T_{ϵ_k} = ( A      C
            ϵ_k I  B ),

where A, B, C ∈ L(H), {ϵ_k} is a positive sequence such that lim_{k→∞} ϵ_k = 0, and there is X ∈ L(H) such that AX = XB. If there exists a nontrivial hyperinvariant subspace N for B such that N ⊄ ker X, then S ∈ {T_{ϵ_k}}_0 has a nontrivial invariant subspace.
Proof Assume that there exists a nontrivial hyperinvariant subspace N for B such that N ⊄ ker X. Let S ∈ {T_{ϵ_n}}_0 and put

S = ( L_σ  M_σ
      N_σ  P_σ ),

where σ ∈ Σ and sup_{σ∈Σ} ‖L_σ − P_σ‖ < ∞. Since S ∈ {T_{ϵ_n}}_0, we get BN_σ − N_σ A = ϵ_n(P_σ − L_σ). Since lim_{n→∞} ϵ_n = 0, BN_σ = N_σ A for σ ∈ Σ. Hence BN_σ X = N_σ AX = N_σ XB for σ ∈ Σ, so N_σ XN ⊂ N for σ ∈ Σ. On the other hand, assume, to obtain a contradiction, that {T_{ϵ_n}}_0 is transitive. Then, for arbitrary z ∈ H, it follows from [7, Proposition 2.2] and the hypothesis that there exist σ_0 ∈ Σ and y ∈ N with Xy ≠ 0 such that z is approximated by vectors N_σ Xy within every ε > 0, which means that {N_σ Xy : σ ∈ Σ} is dense in H. But this is a contradiction, since N_σ Xy ∈ N for all σ ∈ Σ. Hence {T_{ϵ_n}}_0 is not transitive, and thus S ∈ {T_{ϵ_n}}_0 has a nontrivial invariant subspace.
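The identity BN_σ − N_σ A = ϵ_n(P_σ − L_σ) extracted in the proof is just the (2, 1)-entry of the commutation relation ST_{ϵ_n} = T_{ϵ_n}S. The block computation behind it can be checked numerically with arbitrary finite blocks (our own illustration, not operators from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
L, M, N, P = (rng.standard_normal((n, n)) for _ in range(4))
eps = 0.01

T = np.block([[A, C], [eps * np.eye(n), B]])   # T with (2,1)-entry eps*I
S = np.block([[L, M], [N, P]])

# The (2,1)-block of ST - TS is (N A + eps P) - (eps L + B N), so
# demanding ST = TS forces B N - N A = eps (P - L).
block21 = (S @ T - T @ S)[n:, :n]
expected = (N @ A + eps * P) - (eps * L + B @ N)
print(np.allclose(block21, expected))          # True
```

Letting eps → 0 while ‖L_σ − P_σ‖ stays uniformly bounded is what kills the right-hand side and yields the intertwining BN_σ = N_σ A used above.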
We easily see that there exists a nontrivial hyperinvariant subspace N for B such that N ⊄ ker X, as in the following example. Since W ∈ {R_{δ_k}}_0 has a nontrivial invariant subspace, we conclude that the corresponding operator matrix has a nontrivial invariant subspace.