Open Access

The binomial sequence spaces of nonabsolute type

Journal of Inequalities and Applications 2016, 2016:309

https://doi.org/10.1186/s13660-016-1256-0

Received: 13 August 2016

Accepted: 22 November 2016

Published: 29 November 2016

Abstract

In this paper, we introduce the binomial sequence spaces \(b^{r,s}_{0}\) and \(b^{r,s}_{c}\) of nonabsolute type which include the spaces \(c_{0}\) and c, respectively. Also, we prove that the spaces \(b^{r,s}_{0}\) and \(b^{r,s}_{c}\) are linearly isomorphic to the spaces \(c_{0}\) and c, in turn, and we investigate some inclusion relations. Moreover, we obtain the Schauder bases of those spaces and determine their α-, β-, and γ-duals. Finally, we characterize some matrix classes related to those spaces.

Keywords

matrix transformations; matrix domain; Schauder basis; α-, β- and γ-duals; matrix classes

MSC

40C05; 40H05; 46B45

1 The basic definitions and notations

Let w be the set of all real (or complex) valued sequences. Then w becomes a vector space under point-wise addition and scalar multiplication. A sequence space is a vector subspace of w. We use the notations of \(\ell_{\infty}, c_{0}, c\), and \(\ell_{p}\) for the spaces of all bounded, null, convergent, and absolutely p-summable sequences, respectively, where \(1\leq p <\infty\).

A Banach sequence space X is called a BK-space provided each of the coordinate maps \(p_{n}:X\longrightarrow\mathbb{C}\) defined by \(p_{n}(x)=x_{n}\) is continuous for all \(n\in\mathbb{N}\) [1]. By this definition, the sequence spaces \(\ell_{\infty}, c_{0}\), and c are BK-spaces with their usual sup-norm defined by \(\Vert x\Vert _{\infty}=\sup_{k \in\mathbb{N}}\vert x_{k} \vert \), and \(\ell_{p}\) is a BK-space with its \(\ell_{p}\)-norm defined by
$$ \Vert x \Vert _{\ell_{p}}= \Biggl(\sum_{k=0}^{\infty}{ \vert x_{k} \vert }^{p} \Biggr)^{\frac{1}{p}}, $$
where \(p \in[1,\infty)\).
Let \(A=(a_{nk})\) be an infinite matrix with complex entries and \(x \in w\), then the A-transform of x is defined by
$$ (Ax)_{n}=\sum_{k=0}^{\infty}{a_{nk}x_{k}} $$
(1.1)
and is assumed to be convergent for all \(n \in\mathbb{N}\) [2]. For brevity, a summation without limits henceforth runs from 0 to ∞.
Let X and Y be two arbitrary sequence spaces and \(A=(a_{nk})\) be an infinite matrix. Then the domain of A is denoted by \(X_{A}\) and defined by
$$ X_{A}= \bigl\{ x=(x_{k}) \in w : Ax \in X \bigr\} , $$
(1.2)
which is also a sequence space. The class of all matrices A such that \(X\subset Y_{A}\) is denoted by \((X:Y)\). Moreover, \(A=(a_{nk})\) is called conservative if \(c\subset c_{A}\). Furthermore, \(A=(a_{nk})\) is called m-multiplicative if \(\lim_{n\rightarrow\infty }(Ax)_{n}=m\lim_{n\rightarrow\infty}x_{n}\) for all \(x\in c\), and the class of all m-multiplicative matrices is denoted by \((X:Y)_{m}\). In particular, \(A=(a_{nk})\) is called regular if it is 1-multiplicative.
The spaces of all bounded and convergent series are denoted by bs and cs and are defined by aid of the matrix domain of the summation matrix \(S=(s_{nk})\) such that \(bs=(\ell_{\infty})_{S}\) and \(cs=c_{S}\), respectively, where \(S=(s_{nk})\) is defined by
$$ s_{nk}= \textstyle\begin{cases} 1, & 0\leq k\leq n, \\ 0, & k>n, \end{cases} $$
for all \(k,n\in\mathbb{N}\). A matrix \(A=(a_{nk})\) is said to be a triangle if \(a_{nk}=0\) for \(k>n\) and \(a_{nn}\neq0\) for all \(n \in \mathbb{N}\). Furthermore, every triangle matrix has a unique inverse, which is also a triangle matrix.
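The A-transform (1.1) and the summation matrix S above are easy to experiment with. The following minimal Python sketch (not part of the paper; names such as `a_transform` are ours) applies a finite section of S to a sequence, so the output consists of partial sums, illustrating that \(cs=c_{S}\):

```python
# Illustrative sketch of the A-transform (1.1) for a triangle matrix,
# applied here to the summation matrix S, whose transform gives partial sums.
from fractions import Fraction

def a_transform(A, x):
    """(Ax)_n = sum_k a_nk * x_k for a triangle A stored as lower-triangular rows."""
    return [sum(A[n][k] * x[k] for k in range(n + 1)) for n in range(len(A))]

n_max = 5
S = [[Fraction(1)] * (n + 1) for n in range(n_max)]   # s_nk = 1 for 0 <= k <= n
x = [Fraction(1, 2) ** k for k in range(n_max)]       # x_k = 2^{-k}
# the partial sums 1, 3/2, 7/4, 15/8, 31/16 (printed as Fractions)
print(a_transform(S, x))
```

Exact rational arithmetic via `fractions` avoids floating-point noise in such checks.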

The theory of matrix transformations is of great importance in summability theory, which originated with the work of Cesàro, Borel, Riesz, and others. Therefore, many authors have defined new sequence spaces by means of this theory; for example, \((\ell_{\infty})_{N_{q}}\) and \(c_{N_{q}}\) in [3], \(X_{p}\) and \(X_{\infty}\) in [4], \(\tilde{c}_{0}\) in [5], and \(a_{0}^{r}\) and \(a_{c}^{r}\) in [6]. Moreover, many authors have constructed new sequence spaces by using the Euler matrix in particular. For instance, \(e_{0}^{r}\) and \(e_{c}^{r}\) in [7], \(e_{p}^{r}\) and \(e_{\infty}^{r}\) in [8] and [9], \(e_{0}^{r}(\Delta), e_{c}^{r}(\Delta)\), and \(e_{\infty}^{r}(\Delta)\) in [10], \(e_{0}^{r}(\Delta^{(m)}), e_{c}^{r}(\Delta^{(m)})\), and \(e_{\infty}^{r}(\Delta^{(m)})\) in [11], \(e_{0}^{r}(B^{(m)}), e_{c}^{r}(B^{(m)})\), and \(e_{\infty}^{r}(B^{(m)})\) in [12], \(e_{0}^{r}(\Delta, p), e_{c}^{r}(\Delta, p)\), and \(e_{\infty }^{r}(\Delta, p)\) in [13], and \(e_{0}^{r}(u , p)\) and \(e_{c}^{r}(u , p)\) in [14].

In this paper, we introduce the binomial sequence spaces \(b^{r,s}_{0}\) and \(b^{r,s}_{c}\) of nonabsolute type which include the spaces \(c_{0}\) and c, respectively. Also, we prove that the spaces \(b^{r,s}_{0}\) and \(b^{r,s}_{c}\) are linearly isomorphic to the spaces \(c_{0}\) and c, in turn and investigate some inclusion relations. Moreover, we obtain the Schauder basis of those spaces and determine their α-, β-, and γ-duals. Finally, we characterize some matrix classes related to those spaces.

2 The binomial sequence spaces of nonabsolute type

In this section, we introduce the binomial sequence spaces \(b^{r,s}_{0}\) and \(b^{r,s}_{c}\) of nonabsolute type and prove that they are linearly isomorphic to the spaces \(c_{0}\) and c, respectively. Moreover, we deal with some inclusion relations related to those spaces.

Let \(r,s \in\mathbb{R}\) and \(r+s \neq0\). Then the binomial matrix \(B^{r,s}=(b^{r,s}_{nk})\) is defined by
$$ b^{r,s}_{nk}= \textstyle\begin{cases} \frac{1}{(s+r)^{n}}\binom{n}{k}s^{n-k}r^{k}, & 0\leq k\leq n, \\ 0, & k>n, \end{cases} $$
for all \(k,n \in\mathbb{N}_{0}\). For \(sr>0\) we have the following properties of the binomial matrix \(B^{r,s}=(b^{r,s}_{nk})\):
  1. (i)

    \(\sup_{n \in\mathbb{N}}\sum_{k=0}^{n} \vert \frac {1}{(s+r)^{n}}\binom{n}{k}s^{n-k}r^{k} \vert <\infty\),

     
  2. (ii)

    \(\lim_{n \rightarrow\infty}\frac{1}{(s+r)^{n}}\binom {n}{k}s^{n-k}r^{k}=0\),

     
  3. (iii)

    \(\lim_{n\rightarrow\infty}\sum_{k=0}^{n}\frac {1}{(s+r)^{n}}\binom{n}{k}s^{n-k}r^{k}=1\).

     
Therefore, the binomial matrix \(B^{r,s}=(b^{r,s}_{nk})\) is regular for \(sr>0\). Unless stated otherwise, we assume that \(sr>0\).
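Conditions (i)-(iii) are the Silverman-Toeplitz regularity conditions, and they can be checked numerically on a finite section of \(B^{r,s}\). The sketch below (our own illustration, with sample values \(r=2\), \(s=3\), so \(sr>0\)) does so in exact arithmetic:

```python
# Numerical check (not part of the paper's proof) of the three regularity
# conditions for B^{r,s}, with sample values r = 2, s = 3 (so sr > 0).
from fractions import Fraction
from math import comb

r, s = Fraction(2), Fraction(3)

def b_row(n):
    """Row n of the binomial matrix B^{r,s}."""
    return [comb(n, k) * s**(n - k) * r**k / (s + r)**n for k in range(n + 1)]

# (i) uniformly bounded absolute row sums (each row sums to exactly 1 here,
#     since all entries are positive and rows sum to 1 by the binomial theorem)
assert all(sum(abs(v) for v in b_row(n)) == 1 for n in range(20))
# (ii) each fixed column tends to 0: entry (n, 2) for growing n
print([float(b_row(n)[2]) for n in (5, 20, 80)])   # decreasing toward 0
# (iii) row sums tend to 1 (they are identically 1)
assert sum(b_row(10)) == 1
```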

Here, we would like to emphasize that if we take \(r+s=1\), we obtain the Euler matrix \(E^{r}=(e_{nk}^{r})\). So the binomial matrix \(B^{r,s}=(b^{r,s}_{nk})\) generalizes the Euler matrix \(E^{r}=(e_{nk}^{r})\).

By considering the definition of the binomial matrix \(B^{r,s}=(b^{r,s}_{nk})\), we define the binomial sequence spaces \(b^{r,s}_{0}\) and \(b^{r,s}_{c}\) as follows:
$$ b^{r,s}_{0}= \Biggl\{ x=(x_{k}) \in w :\lim _{n \rightarrow\infty}\frac {1}{(s+r)^{n}}\sum_{k=0}^{n} \binom{n}{k}s^{n-k}r^{k}x_{k}=0 \Biggr\} $$
and
$$ b^{r,s}_{c}= \Biggl\{ x=(x_{k}) \in w :\lim _{n \rightarrow\infty}\frac {1}{(s+r)^{n}}\sum_{k=0}^{n} \binom{n}{k}s^{n-k}r^{k}x_{k} \mbox{ exists} \Biggr\} . $$
The sequence spaces \(b^{r,s}_{0}\) and \(b^{r,s}_{c}\) can be redefined by using the notion of (1.2) as follows:
$$ b^{r,s}_{0}=(c_{0})_{B^{r,s}} \quad \mbox{and} \quad b^{r,s}_{c}=c_{B^{r,s}} . $$
(2.1)
It is clear that \(b^{r,s}_{0}\subset b^{r,s}_{c}\). Let \(x=(x_{k})\) be an arbitrary sequence. Then the \(B^{r,s}\)-transform of \(x=(x_{k})\) is defined by
$$ \bigl(B^{r,s}x \bigr)_{k}=y_{k}= \frac{1}{(s+r)^{k}}\sum_{j=0}^{k} \binom {k}{j}s^{k-j}r^{j}x_{j} $$
(2.2)
for all \(k\in\mathbb{N}\).

Now, we want to start with the following theorem related to the theory of BK-spaces, which is of great importance in the characterization of matrix transformations between sequence spaces.

Theorem 2.1

The binomial sequence spaces \(b^{r,s}_{0}\) and \(b^{r,s}_{c}\) are BK-spaces with their sup-norms defined by
$$ \Vert x \Vert _{b^{r,s}_{0}}=\Vert x \Vert _{b^{r,s}_{c}}= \bigl\Vert B^{r,s}x \bigr\Vert _{\infty}=\sup_{n \in\mathbb{N}} \bigl\vert \bigl(B^{r,s}x \bigr)_{n} \bigr\vert . $$

Proof

The sequence spaces \(c_{0}\) and c are BK-spaces with respect to their sup-norms. Moreover, the binomial matrix \(B^{r,s}=(b^{r,s}_{nk})\) is a triangle and (2.1) holds. By combining these facts with Theorem 4.3.12 of Wilansky [2], we deduce that the binomial sequence spaces \(b^{r,s}_{0}\) and \(b^{r,s}_{c}\) are BK-spaces. This completes the proof. □

Let \(\vert x\vert =(\vert x_{k}\vert )\) for all \(k \in\mathbb{N}\). Since \(\Vert x\Vert _{b^{r,s}_{0}} \neq \Vert \vert x\vert \Vert _{b^{r,s}_{0}}\) and \(\Vert x\Vert _{b^{r,s}_{c}} \neq \Vert \vert x\vert \Vert _{b^{r,s}_{c}}\) for at least one sequence in each space, \(b^{r,s}_{0}\) and \(b^{r,s}_{c}\) are sequence spaces of nonabsolute type.

Theorem 2.2

The binomial sequence spaces \(b^{r,s}_{0}\) and \(b^{r,s}_{c}\) are linearly isomorphic to the sequence spaces \(c_{0}\) and c, respectively.

Proof

To avoid a repetition of similar statements, we give the proof only for the space \(b^{r,s}_{c}\). For this purpose, we must exhibit a linear bijection between the spaces \(b^{r,s}_{c}\) and c. Consider the transformation L defined by \(L:b^{r,s}_{c}\longrightarrow c\), \(L(x)=B^{r,s}x\). Then, for all \(x \in b^{r,s}_{c}\), \(L(x)=B^{r,s}x \in c\). Moreover, it is clear that L is linear and that \(x=0\) whenever \(L(x)=0\). Hence, L is injective.

Let \(y=(y_{k}) \in c\) be given. We define a sequence \(x=(x_{k})\) by
$$ x_{k}=\frac{1}{r^{k}}\sum_{j=0}^{k} \binom{k}{j}(-s)^{k-j}(s+r)^{j}y_{j} $$
for all \(k \in\mathbb{N}\). Then we obtain
$$\begin{aligned} \bigl(B^{r,s}x \bigr)_{n} =&\frac{1}{(s+r)^{n}}\sum _{k=0}^{n}\binom{n}{k}s^{n-k}r^{k}x_{k} \\ =&\frac{1}{(s+r)^{n}}\sum_{k=0}^{n} \binom{n}{k}s^{n-k}\sum_{j=0}^{k} \binom{k}{j}(-s)^{k-j}(s+r)^{j}y_{j} \\ =&y_{n} \end{aligned}$$
for all \(n \in\mathbb{N}\). So \(B^{r,s}x=y\), and since \(y \in c\), we conclude that \(B^{r,s}x \in c\). Hence \(x \in b_{c}^{r,s}\) and \(L(x)=y\); that is, L is surjective.
Moreover, we have for all \(x \in b_{c}^{r,s}\)
$$ \bigl\Vert L(x) \bigr\Vert _{\infty}= \bigl\Vert B^{r,s}x \bigr\Vert _{\infty}=\Vert x \Vert _{b_{c}^{r,s}}. $$
So L is norm preserving. Consequently, L is a linear bijection. Then we obtain the fact that the spaces \(b_{c}^{r,s}\) and c are linearly isomorphic, that is, \(b_{c}^{r,s} \cong c\). This completes the proof. □
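The inverse formula for \(x=(x_{k})\) used in this proof can be verified numerically. The following sketch (sample values \(r=1\), \(s=2\); function names are ours) reconstructs x from a given y and confirms \(B^{r,s}x=y\) in exact arithmetic:

```python
# Sketch verifying (numerically, not as a proof) that the sequence x built
# from y in Theorem 2.2 satisfies B^{r,s} x = y; r = 1, s = 2 is a sample
# choice with sr > 0.
from fractions import Fraction
from math import comb

r, s = Fraction(1), Fraction(2)

def x_from_y(y):
    """x_k = (1/r^k) * sum_j C(k,j) (-s)^{k-j} (s+r)^j y_j."""
    return [sum(comb(k, j) * (-s)**(k - j) * (s + r)**j * y[j]
                for j in range(k + 1)) / r**k for k in range(len(y))]

def binomial_transform(x):
    """(B^{r,s} x)_n computed entrywise on a finite section."""
    return [sum(comb(n, k) * s**(n - k) * r**k * x[k]
                for k in range(n + 1)) / (s + r)**n for n in range(len(x))]

y = [Fraction(1, k + 1) for k in range(8)]   # a sample convergent sequence
assert binomial_transform(x_from_y(y)) == y  # round trip recovers y exactly
```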

Theorem 2.3

The inclusions \(e_{0}^{r} \subset b_{0}^{r,s}\) and \(e_{c}^{r} \subset b_{c}^{r,s}\) strictly hold, where \(e_{0}^{r}\) and \(e_{c}^{r}\) are Euler sequence spaces of nonabsolute type.

Proof

If \(r+s=1\), we obtain \(E^{r}=B^{r,s}\). So, the inclusion \(e_{0}^{r} \subset b_{0}^{r,s}\) holds. Assume that \(0< r<1\) and \(s=4\). Now, we define a sequence \(x=(x_{k})\) such that \(x_{k}= (-\frac{3}{r} )^{k}\) for all \(k \in\mathbb{N}\). Then it is obvious that \(x= ( (-\frac{3}{r} )^{k} ) \notin c_{0}\) and \(E^{r}x= ( (-2-r )^{k} ) \notin c_{0}\). On the other hand, \(B^{r,s}x= ( (\frac {1}{4+r} )^{k} )\in c_{0}\). As a consequence, \(x=(x_{k}) \in b^{r,s}_{0}\setminus e_{0}^{r}\).

This shows that the inclusion \(e_{0}^{r} \subset b_{0}^{r,s}\) strictly holds. Another part of the theorem can be proved in a similar way. This completes the proof. □

Theorem 2.4

The inclusions \(c_{0} \subset b_{0}^{r,s}\) and \(c \subset b_{c}^{r,s}\) strictly hold. But the sequence spaces \(b_{0}^{r,s}\) and \(\ell_{\infty }\) do not include each other.

Proof

If we take the regularity of the binomial matrix \(B^{r,s}\) into account, we can easily conclude that \(B^{r,s}x\in c_{0}\) whenever \(x \in c_{0}\). This means that \(x \in b_{0}^{r,s}\) for all \(x \in c_{0}\), namely \(c_{0} \subset b_{0}^{r,s}\). Now we define a sequence \(u=(u_{k})\) by \(u_{k}=(-1)^{k}\) for all \(k \in\mathbb{N}\). Then we obtain \(B^{r,s}u= ( (\frac{s-r}{s+r} )^{k} ) \in c_{0}\). As a consequence, u is in \(b_{0}^{r,s}\) but not in \(c_{0}\). So, the inclusion \(c_{0} \subset b_{0}^{r,s}\) is strict. In a similar way, one can show that the inclusion \(c \subset b_{c}^{r,s}\) is strict.

To prove the second part of the theorem, we consider the sequences \(e=(1,1,1,\ldots)\) and \(v=(v_{k})\) defined by \(v_{k}= (-\frac{s}{r} )^{k}\) for all \(k \in\mathbb{N}\), where \(\vert \frac{s}{r} \vert >1\). Then we obtain \(B^{r,s}e=e\) and \(B^{r,s}v=(1,0,0,\ldots)\). Hence, e is in \(\ell_{\infty}\) but not in \(b_{0}^{r,s}\) and v is in \(b_{0}^{r,s}\) but not in \(\ell_{\infty}\). This shows that the sequence spaces \(b_{0}^{r,s}\) and \(\ell_{\infty}\) overlap but these spaces do not include each other. This completes the proof. □
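The first example in this proof is simple to confirm by computation. The sketch below (our illustration, with sample values \(r=1\), \(s=3\)) checks that \(u_{k}=(-1)^{k}\), which is not null, has the null \(B^{r,s}\)-transform \(( (\frac{s-r}{s+r} )^{k} )\):

```python
# Sketch (sample values, not from the paper) checking that u_k = (-1)^k,
# which does not converge to 0, nevertheless has a null B^{r,s}-transform,
# as in the proof of Theorem 2.4.
from fractions import Fraction
from math import comb

r, s = Fraction(1), Fraction(3)

def transform(x, n_max):
    """(B^{r,s} x)_n for n = 0, ..., n_max - 1."""
    return [sum(comb(n, k) * s**(n - k) * r**k * x[k] for k in range(n + 1))
            / (s + r)**n for n in range(n_max)]

u = [Fraction(-1) ** k for k in range(10)]
# (B^{r,s} u)_n = ((s - r)/(s + r))^n, which is (1/2)^n here and tends to 0
assert transform(u, 10) == [((s - r) / (s + r)) ** n for n in range(10)]
```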

Definition 2.1

see [2]

An infinite matrix \(A=(a_{nk})\) is called coregular, if \(A=(a_{nk})\) is conservative and \(\chi(A)=\lim_{n\rightarrow\infty}\sum_{k}a_{nk}-\sum_{k}\lim_{n\rightarrow\infty}a_{nk}\neq0\).

By taking into account the regularity of the binomial matrix \(B^{r,s}=(b_{nk}^{r,s})\), we obtain \(\chi(B^{r,s})=1\neq0\). So, the binomial matrix \(B^{r,s}=(b_{nk}^{r,s})\) is coregular.

Definition 2.2

see [15]

Let \(A=(a_{nk})\) be an infinite matrix with bounded columns. Then A is said to be of type M if \(tA=0\) implies \(t=0\) for every \(t \in \ell_{1}\).

Definition 2.3

see [2]

For a conservative triangle \(A=(a_{nk})\) we have \(c\subset c_{A}\). The closure of c in \(c_{A}\) is called the perfect part of \(c_{A}\). If c is dense in \(c_{A}\), then \(A=(a_{nk})\) is called perfect.

Now we give the following two theorems, which are needed.

Theorem 2.5

see [15]

A regular triangle \(A=(a_{nk})\) is of type M if there exists a \((z_{i}) \in\ell_{\infty}\) with \(z_{i} \neq z_{j}\ (i \neq j)\) and \(\Vert (z_{i}) \Vert _{\infty}<1\) such that
$$ \forall i \in\mathbb{N}_{0} , \exists(x_{k_{i}})_{k} \in\ell_{\infty } , \quad \forall n \in\mathbb{N} : z_{i}^{n}= \sum_{k=0}^{n}a_{nk}x_{k_{i}}. $$

Theorem 2.6

see [2]

A coregular triangle is perfect if and only if it is of type M.

Theorem 2.7

Each regular binomial matrix \(B^{r,s}=(b^{r,s}_{nk})\) is perfect.

Proof

We know that the regular binomial matrix \(B^{r,s}=(b^{r,s}_{nk})\) is coregular. So, for the proof, we should show that \(B^{r,s}=(b^{r,s}_{nk})\) is of type M.

Let \(D^{r,s}=(d^{r,s}_{nk})\) be the inverse of \(B^{r,s}=(b^{r,s}_{nk})\). Then we have for every \(z \in\mathbb{C}\)
$$\begin{aligned} u_{k}(z) =&\sum_{v=0}^{k}d^{r,s}_{kv}z^{v}= \frac{1}{r^{k}}\sum_{v=0}^{k} \binom{k}{v}(-s)^{k-v}(s+r)^{v}z^{v} \\ =& \frac{1}{r^{k}} \bigl((s+r)z-s \bigr)^{k}. \end{aligned}$$
One can easily verify that \(\sup_{k \in\mathbb{N}}\sup \{\vert u_{k}(z) \vert : \vert (s+r)z-s\vert <\vert r\vert \}\leq1\) and that
$$ \sum_{k=0}^{n}b^{r,s}_{nk}u_{k}(z)= \sum_{k=0}^{n}b^{r,s}_{nk} \sum_{v=0}^{k}d^{r,s}_{kv}z^{v}=z^{n} $$
for all \(z \in\mathbb{C}\). Therefore, if we choose a sequence \((z_{i})\) with \(z_{i} \neq z_{j}\ (i \neq j)\), \(\Vert (z_{i}) \Vert _{\infty}<1\), and \(\vert (s+r)z_{i}-s\vert <\vert r\vert \ (i \in\mathbb {N}_{0})\), then \(x_{k_{i}}=u_{k}(z_{i})\ (i,k \in\mathbb{N}_{0})\) satisfy the hypotheses of Theorem 2.5. Thus, the regular binomial matrix \(B^{r,s}=(b^{r,s}_{nk})\) is perfect. This completes the proof. □
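The identity \(\sum_{k=0}^{n}b^{r,s}_{nk}u_{k}(z)=z^{n}\) at the heart of this proof can be checked directly. The sketch below (sample values \(r=1\), \(s=2\), and a sample rational z with \(\vert (s+r)z-s\vert <\vert r\vert \)) verifies it in exact arithmetic:

```python
# Sketch verifying the identity sum_k b_{nk} u_k(z) = z^n used in the
# type-M argument; the values of r, s, z are sample choices of ours.
from fractions import Fraction
from math import comb

r, s = Fraction(1), Fraction(2)
z = Fraction(3, 5)            # |(s+r)z - s| = |9/5 - 2| = 1/5 < 1 = |r|

def u(k):
    """u_k(z) = ((s+r)z - s)^k / r^k, from the inverse matrix D^{r,s}."""
    return ((s + r) * z - s) ** k / r ** k

for n in range(6):
    lhs = sum(comb(n, k) * s**(n - k) * r**k * u(k)
              for k in range(n + 1)) / (s + r)**n
    assert lhs == z ** n      # exact equality, row by row
```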

3 The Schauder basis and α-, β-, γ- and continuous duals

In this section, we construct the Schauder basis and determine the α-, β-, γ-, and continuous duals of the binomial sequence spaces \(b^{r,s}_{0}\) and \(b^{r,s}_{c}\).

Given a normed space \((X,\Vert \cdot \Vert _{X})\), a set \(\{x_{k}:x_{k} \in X, k \in\mathbb{N} \}\) is called a Schauder basis for X if for all \(x \in X\) there exist unique scalars \(\lambda_{k}, k \in\mathbb {N}\), such that \(x=\sum_{k}\lambda_{k}x_{k}\); i.e.,
$$ \Biggl\Vert x-\sum_{k=0}^{n} \lambda_{k}x_{k} \Biggr\Vert _{X} \longrightarrow0 $$
as \(n \rightarrow\infty\) [1].

Theorem 3.1

Let \(\mu_{k}= \{B^{r,s}x \}_{k}\) for all \(k \in\mathbb{N}\) and \(l=\lim_{k \rightarrow\infty}\mu_{k}\). We define a sequence \(g^{(k)}(r,s)= \{g^{(k)}_{n}(r,s) \}_{n \in\mathbb{N}}\) as follows:
$$ g^{(k)}_{n}(r,s)= \textstyle\begin{cases} 0, &0 \leq n < k,\\ \frac{1}{r^{n}}\binom{n}{k}(-s)^{n-k}(s+r)^{k}, &n\geq k, \end{cases} $$
for all fixed \(k \in\mathbb{N}\). Then the following statements hold.
  1. (a)
    The sequence \(\{g^{(k)}(r,s) \}_{k \in\mathbb {N}}\) is a Schauder basis for the binomial sequence space \(b_{0}^{r,s}\), and every \(x \in b_{0}^{r,s}\) has a unique representation of the form
    $$ x=\sum_{k}\mu_{k}g^{(k)}(r,s). $$
     
  2. (b)
    The set \(\{e, g^{(0)}(r,s), g^{(1)}(r,s),\ldots \}\) is a Schauder basis for the binomial sequence space \(b_{c}^{r,s}\), and any \(x \in b_{c}^{r,s}\) has a unique representation of the form
    $$ x=le+\sum_{k} [\mu_{k}-l ]g^{(k)}(r,s). $$
     

Proof

(a) Obviously we have
$$ B^{r,s}g^{(k)}(r,s)=e^{(k)}\in c_{0} ,\quad k\in \mathbb{N}, $$
where \(e^{(k)}\) is a sequence with 1 in the kth place and zeros elsewhere. So, the inclusion \(\{g^{(k)}(r,s) \}\subset b_{0}^{r,s}\) holds.
Given a sequence \(x=(x_{k})\in b_{0}^{r,s}\) and \(m \in\mathbb{N}\), we define
$$ x^{[m]}=\sum_{k=0}^{m} \mu_{k}g^{(k)}(r,s). $$
Then, if we apply the binomial matrix \(B^{r,s}=(b^{r,s}_{nk})\) to \(x^{[m]}\), we have
$$ B^{r,s}x^{[m]}=\sum_{k=0}^{m} \mu_{k}B^{r,s}g^{(k)}(r,s)=\sum _{k=0}^{m} \bigl(B^{r,s}x \bigr)_{k}e^{(k)} $$
and
$$ \bigl\{ B^{r,s} \bigl(x-x^{[m]} \bigr) \bigr\} _{n}= \textstyle\begin{cases} 0, &0 \leq n \leq m,\\ (B^{r,s}x )_{n}, &n > m, \end{cases} $$
for all \(m,n \in\mathbb{N}\).
Now, let \(\epsilon>0\) be given. Then we may choose an \(m_{0}\in\mathbb{N}\) such that
$$ \bigl\vert \bigl(B^{r,s}x \bigr)_{m} \bigr\vert < \frac{\epsilon}{2} $$
for all \(m \geq m_{0}\). Therefore,
$$ \bigl\Vert x-x^{[m]} \bigr\Vert _{b_{0}^{r,s}}=\sup_{n> m} \bigl\vert \bigl(B^{r,s}x \bigr)_{n} \bigr\vert \leq\sup_{n\geq m_{0}} \bigl\vert \bigl(B^{r,s}x \bigr)_{n} \bigr\vert \leq\frac{\epsilon}{2}< \epsilon $$
for all \(m \geq m_{0}\). This shows that
$$ x=\sum_{k}\mu_{k}g^{(k)}(r,s). $$
To complete the proof of part (a), we should show the uniqueness of this representation.
We assume that there exists a representation
$$ x=\sum_{k}\lambda_{k}g^{(k)}(r,s). $$
Since the transformation L defined in the proof of Theorem 2.2 is continuous, we can write
$$ \bigl(B^{r,s}x \bigr)_{n}=\sum_{k} \lambda_{k} \bigl\{ B^{r,s}g^{(k)}(r,s) \bigr\} _{n}=\sum_{k}\lambda_{k}e^{(k)}_{n}= \lambda_{n} $$
for all \(n \in\mathbb{N}\). Since \( (B^{r,s}x )_{n}=\mu_{n}\) for all \(n \in\mathbb{N}\), we get \(\lambda_{n}=\mu_{n}\) for all \(n \in\mathbb{N}\). Hence, every \(x \in b_{0}^{r,s}\) has a unique representation, as desired.

(b) We know that \(\{g^{(k)}(r,s) \}\subset b_{0}^{r,s}\) and \(B^{r,s}e=e \in c\). So, the inclusion \(\{e, g^{(k)}(r,s) \} \subset b_{c}^{r,s}\) trivially holds.

For a given arbitrary sequence \(x=(x_{k}) \in b_{c}^{r,s}\), we define a sequence \(y=(y_{k})\) such that \(y=x-le\), where \(l=\lim_{k \rightarrow \infty}\mu_{k}\). Then it is obvious that \(y=(y_{k}) \in b_{0}^{r,s}\). By considering the part (a), one can say that \(y=(y_{k})\) has a unique representation. This implies that \(x=(x_{k})\) has a unique representation, as desired in part (b). This completes the proof. □
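The key computation behind Theorem 3.1, namely \(B^{r,s}g^{(k)}(r,s)=e^{(k)}\), can also be confirmed numerically. The following sketch (sample values \(r=2\), \(s=1\); finite truncation to N terms; names are ours) checks it for the first few basis sequences:

```python
# Numerical sketch (sample r, s) that B^{r,s} g^{(k)}(r,s) = e^{(k)},
# the key step behind the Schauder basis of Theorem 3.1.
from fractions import Fraction
from math import comb

r, s = Fraction(2), Fraction(1)
N = 8                                  # truncation length

def g(k):
    """First N entries of the basis sequence g^{(k)}(r,s)."""
    return [Fraction(0) if n < k else
            comb(n, k) * (-s)**(n - k) * (s + r)**k / r**n for n in range(N)]

def B_row(n):
    """Row n of the binomial matrix B^{r,s}."""
    return [comb(n, j) * s**(n - j) * r**j / (s + r)**n for j in range(n + 1)]

for k in range(4):
    gk = g(k)
    Bgk = [sum(B_row(n)[j] * gk[j] for j in range(n + 1)) for n in range(N)]
    # B^{r,s} g^{(k)} equals the unit sequence e^{(k)} on this section
    assert Bgk == [Fraction(1) if n == k else Fraction(0) for n in range(N)]
```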

By taking into account the results of Theorems 2.1 and 3.1, we give the following result.

Corollary 3.2

The binomial sequence spaces \(b_{0}^{r,s}\) and \(b_{c}^{r,s}\) are separable.

Let X and Y be two arbitrary sequence spaces. The multiplier space of X and Y is symbolized with \(M(X,Y)\) and defined by
$$ M(X,Y)= \bigl\{ y=(y_{k}) \in w : xy=(x_{k}y_{k}) \in Y\text{ for all }x=(x_{k}) \in X \bigr\} . $$
By considering the definition of \(M(X,Y)\), the α-, β-, and γ-duals of a sequence space X are defined by
$$ X^{\alpha}=M(X,\ell_{1}),\qquad X^{\beta}=M(X,cs)\quad \text{and}\quad X^{\gamma}=M(X,bs), $$
respectively.

By \(X^{*}\), we denote the space of all bounded linear functionals on a normed space X; \(X^{*}\) is called the continuous dual of X.

Let us give some properties to use in the next lemma:
$$\begin{aligned} &\sup_{K\in\mathcal{F}}\sum_{n} \biggl\vert \sum_{k\in K}a_{nk} \biggr\vert ^{p} < \infty , \end{aligned}$$
(3.1)
$$\begin{aligned} & \sup_{n\in\mathbb{N}}\sum_{k} \vert a_{nk}\vert < \infty , \end{aligned}$$
(3.2)
$$\begin{aligned} &\lim_{n\rightarrow\infty}a_{nk}=\alpha_{k}\quad \text{for each } k\in \mathbb{N} , \end{aligned}$$
(3.3)
$$\begin{aligned} & \lim_{n\rightarrow\infty}\sum_{k}a_{nk}= \alpha, \end{aligned}$$
(3.4)
where \(\mathcal{F}\) is the collection of all finite subsets of \(\mathbb{N}\) and \(1\leq p <\infty\).

Lemma 3.3

see [16]

Let \(A=(a_{nk})\) be an infinite matrix. Then the following statements hold:
  1. (i)

    \(A=(a_{nk}) \in(c_{0}:\ell_{1})=(c:\ell_{1}) \Leftrightarrow \) (3.1) holds with \(p=1\);

     
  2. (ii)

    \(A=(a_{nk}) \in(c_{0}:c) \Leftrightarrow\) (3.2) and (3.3) hold;

     
  3. (iii)

    \(A=(a_{nk}) \in(c:c) \Leftrightarrow\) (3.2), (3.3) and (3.4) hold;

     
  4. (iv)

    \(A=(a_{nk}) \in(c_{0}:\ell_{\infty})=(c:\ell_{\infty}) \Leftrightarrow\) (3.2) holds;

     
  5. (v)

    \(A=(a_{nk}) \in(c:\ell_{p}) \Leftrightarrow\) (3.1) holds with \(1 \leq p < \infty\).

     

Theorem 3.4

The α-dual of each of the binomial sequence spaces \(b_{0}^{r,s}\) and \(b_{c}^{r,s}\) is the set
$$ v_{1}^{r,s}= \biggl\{ a=(a_{k}) \in w : \sup _{K\in\mathcal{F}}\sum_{n} \biggl\vert \sum _{k\in K}\binom {n}{k}(-s)^{n-k}r^{-n}(r+s)^{k}a_{n} \biggr\vert < \infty \biggr\} . $$

Proof

Let us consider the sequence \(x=(x_{n})\) that is defined in the proof of Theorem 2.2. Then, for given \(a=(a_{n}) \in w\), we obtain
$$ a_{n}x_{n}=\sum_{k=0}^{n} \binom {n}{k}(-s)^{n-k}r^{-n}(r+s)^{k}a_{n}y_{k}= \bigl(U^{r,s}y \bigr)_{n} $$
for all \(n \in\mathbb{N}\). By considering the equality above, we deduce that \(ax=(a_{n}x_{n}) \in\ell_{1}\) whenever \(x=(x_{k}) \in b_{0}^{r,s}\) or \(b_{c}^{r,s}\) if and only if \(U^{r,s}y \in\ell_{1}\) whenever \(y=(y_{k}) \in c_{0}\) or c. This shows that \(a=(a_{n}) \in \{b_{0}^{r,s} \}^{\alpha}= \{b_{c}^{r,s} \}^{\alpha }\) if and only if \(U^{r,s} \in(c_{0}:\ell_{1})=(c:\ell_{1})\). If we combine this and Lemma 3.3(i), we obtain
$$ a=(a_{n}) \in \bigl\{ b_{0}^{r,s} \bigr\} ^{\alpha}= \bigl\{ b_{c}^{r,s} \bigr\} ^{\alpha} \quad\Leftrightarrow\quad\sup_{K\in\mathcal{F}}\sum_{n} \biggl\vert \sum_{k\in K}\binom {n}{k}(-s)^{n-k}r^{-n}(r+s)^{k}a_{n} \biggr\vert < \infty. $$
Therefore, \(\{b_{0}^{r,s} \}^{\alpha}= \{b_{c}^{r,s} \}^{\alpha}=v_{1}^{r,s}\). This completes the proof. □

Theorem 3.5

Define the sets \(v_{2}^{r,s}\), \(v_{3}^{r,s}\), and \(v_{4}^{r,s}\) by
$$\begin{aligned} &v_{2}^{r,s}= \Biggl\{ a=(a_{k}) \in w : \sup _{n\in\mathbb{N}}\sum_{k=0}^{n} \Biggl\vert \sum_{j=k}^{n}\binom {j}{k}(-s)^{j-k}r^{-j}(r+s)^{k}a_{j} \Biggr\vert < \infty \Biggr\} , \\ &v_{3}^{r,s}= \Biggl\{ a=(a_{k}) \in w : \sum _{j=k}^{\infty}\binom {j}{k}(-s)^{j-k}r^{-j}(r+s)^{k}a_{j} \textit{ exists for each }k\in\mathbb {N} \Biggr\} , \\ &v_{4}^{r,s}= \Biggl\{ a=(a_{k}) \in w :\lim _{n\rightarrow\infty}\sum_{k=0}^{n} \sum _{j=k}^{n}\binom{j}{k}(-s)^{j-k}r^{-j}(r+s)^{k}a_{j} \textit{ exists}\Biggr\} . \end{aligned}$$
Then the following statements hold.
  1. (I)

    \(\{b_{0}^{r,s} \}^{\beta}=v_{2}^{r,s}\cap v_{3}^{r,s}\),

     
  2. (II)

    \(\{b_{c}^{r,s} \}^{\beta}=v_{2}^{r,s}\cap v_{3}^{r,s}\cap v_{4}^{r,s}\),

     
  3. (III)

    \(\{b_{0}^{r,s} \}^{\gamma}= \{ b_{c}^{r,s} \}^{\gamma}=v_{2}^{r,s}\).

     

Proof

Let \(a=(a_{n}) \in w\). If we bear in mind the sequence \(x=(x_{k})\) that is defined in the proof of Theorem 2.2, we have
$$\begin{aligned} \sum_{k=0}^{n}a_{k}x_{k} = &\sum_{k=0}^{n} \Biggl[\frac{1}{r^{k}} \sum_{j=0}^{k} \binom{k}{j}(-s)^{k-j}(r+s)^{j}y_{j} \Biggr]a_{k} \\ = &\sum_{k=0}^{n} \Biggl[ \sum _{j=k}^{n}\binom {j}{k}(-s)^{j-k}r^{-j}(r+s)^{k}a_{j} \Biggr]y_{k} \\ = & \bigl(H^{r,s}y \bigr)_{n} \end{aligned}$$
for all \(n \in\mathbb{N}\), where the matrix \(H^{r,s}=(h_{nk}^{r,s})\) is defined by
$$ h^{r,s}_{nk}= \textstyle\begin{cases} \sum_{j=k}^{n}\binom{j}{k}(-s)^{j-k}r^{-j}(r+s)^{k}a_{j}, & 0\leq k\leq n, \\ 0, & k>n, \end{cases} $$
for all \(k,n \in\mathbb{N}\). Then:
(I) \(ax=(a_{k}x_{k}) \in cs\) whenever \(x=(x_{k}) \in b_{0}^{r,s}\) if and only if \(H^{r,s}y \in c\) whenever \(y=(y_{k}) \in c_{0}\). This shows that \(a=(a_{k}) \in \{b_{0}^{r,s} \}^{\beta}\) if and only if \(H^{r,s} \in(c_{0}:c)\). If we combine this and Lemma 3.3(ii), we conclude that
$$\begin{aligned} & \sup_{n\in\mathbb{N}}\sum_{k=0}^{n} \bigl\vert h_{nk}^{r,s} \bigr\vert < \infty, \end{aligned}$$
(3.5)
$$\begin{aligned} & \lim_{n\rightarrow\infty}h_{nk}^{r,s}\quad \mbox{exists for each }k\in\mathbb{N.} \end{aligned}$$
(3.6)

These results show that \(\{b_{0}^{r,s} \}^{\beta }=v_{2}^{r,s}\cap v_{3}^{r,s}\).

(II) In a similar way, we obtain \(a=(a_{k}) \in \{ b_{c}^{r,s} \}^{\beta}\) if and only if \(H^{r,s} \in(c:c)\). If we combine this and Lemma 3.3(iii), we deduce that (3.5), (3.6) hold and
$$ \lim_{n \rightarrow\infty}\sum_{k=0}^{n}h_{nk}^{r,s}\quad \mbox{exists}, $$
(3.7)
which shows that \(\{b_{c}^{r,s} \}^{\beta}=v_{2}^{r,s}\cap v_{3}^{r,s}\cap v_{4}^{r,s}\).

(III) \(ax=(a_{k}x_{k}) \in bs\) whenever \(x=(x_{k}) \in b_{0}^{r,s}\) or \(b_{c}^{r,s}\) if and only if \(H^{r,s}y \in\ell_{\infty}\) whenever \(y=(y_{k}) \in c_{0}\) or c. This means that \(a=(a_{k}) \in \{ b_{0}^{r,s} \}^{\gamma}= \{b_{c}^{r,s} \}^{\gamma}\) if and only if \(H^{r,s} \in(c_{0}:\ell_{\infty})=(c:\ell_{\infty})\). By combining this and Lemma 3.3(iv), we deduce that (3.5) holds. Hence, \(\{b_{0}^{r,s} \}^{\gamma}= \{ b_{c}^{r,s} \}^{\gamma}=v_{2}^{r,s}\). This completes the proof. □
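The summation-by-parts identity \(\sum_{k=0}^{n}a_{k}x_{k}= (H^{r,s}y )_{n}\) underlying this proof amounts to interchanging the order of a double sum, which the following sketch (sample values \(r=s=1\), arbitrary sample data; names are ours) verifies exactly:

```python
# Sketch checking the double-sum interchange behind the β-dual computation:
# sum_{k<=n} a_k x_k = (H^{r,s} y)_n, with sample values r = s = 1.
from fractions import Fraction
from math import comb

r, s = Fraction(1), Fraction(1)
N = 6
y = [Fraction(k + 1, k + 2) for k in range(N)]   # arbitrary sample y
a = [Fraction(1, 2) ** k for k in range(N)]      # arbitrary sample a
# x_k built from y as in the proof of Theorem 2.2
x = [sum(comb(k, j) * (-s)**(k - j) * (r + s)**j * y[j]
         for j in range(k + 1)) / r**k for k in range(N)]

for n in range(N):
    lhs = sum(a[k] * x[k] for k in range(n + 1))
    # h[k] = h_{nk}^{r,s} = sum_{j=k}^{n} C(j,k)(-s)^{j-k} r^{-j} (r+s)^k a_j
    h = [sum(comb(j, k) * (-s)**(j - k) * (r + s)**k * a[j] / r**j
             for j in range(k, n + 1)) for k in range(n + 1)]
    assert lhs == sum(h[k] * y[k] for k in range(n + 1))
```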

Theorem 3.6

The continuous duals \(\{b_{0}^{r,s} \}^{*}\) and \(\{b_{c}^{r,s} \}^{*}\) are isometrically isomorphic to \(\ell_{1}\).

Proof

To avoid a repetition of similar statements, we give the proof only for the binomial sequence space \(b_{c}^{r,s}\). For the proof, we must exhibit a linear, surjective, norm-preserving mapping \(L : \{b_{c}^{r,s} \}^{*}\longrightarrow\ell_{1}\).

Suppose that \(f \in \{b_{c}^{r,s} \}^{*}\). Now from Theorem 3.1(b) we know that \(\{e, g^{(0)}(r,s), g^{(1)}(r,s),\ldots \}\) is a basis for \(b_{c}^{r,s}\), and each \(x \in b_{c}^{r,s}\) has a unique representation of the form
$$ x=le+\sum_{k} [\mu_{k}-l ]g^{(k)}(r,s). $$
By the linearity and continuity of f, we get
$$ f(x)=lf(e)+\sum_{k} [ \mu_{k}-l ]f \bigl(g^{(k)}(r,s) \bigr) $$
(3.8)
for all \(x \in b_{c}^{r,s}\). Now define a sequence \(x=(x_{k}) \in b_{c}^{r,s}\) such that \(\Vert x\Vert _{b_{c}^{r,s}}=1\) as follows:
$$ x_{k}= \textstyle\begin{cases} \frac{1}{r^{k}}\sum_{j=0}^{k}\binom{k}{j}(-s)^{k-j}(r+s)^{j}\operatorname{sgn} f (g^{(j)}(r,s) ), &0 \leq k \leq n,\\ \frac{1}{r^{k}}\sum_{j=0}^{n}\binom{k}{j}(-s)^{k-j}(r+s)^{j}\operatorname{sgn} f (g^{(j)}(r,s) ), &k >n, \end{cases} $$
for all \(k,n \in\mathbb{N}\). Hence,
$$ \bigl\vert f(x) \bigr\vert =\sum _{k=0}^{n} \bigl\vert f \bigl(g^{(k)}(r,s) \bigr) \bigr\vert \leq \Vert f \Vert $$
(3.9)
since \(\vert f(x)\vert \leq \Vert f\Vert \cdot \Vert x\Vert \) on \(b_{c}^{r,s}\). It follows from (3.9) that
$$ \sum_{k} \bigl\vert f \bigl(g^{(k)}(r,s) \bigr) \bigr\vert =\sup_{n\in\mathbb {N}}\sum _{k=0}^{n} \bigl\vert f \bigl(g^{(k)}(r,s) \bigr) \bigr\vert \leq \Vert f\Vert . $$
Now we write (3.8) as
$$ f(x)=al+\sum_{k}a_{k}\mu_{k}, $$
where
$$ a=f(e)-\sum_{k} f \bigl(g^{(k)}(r,s) \bigr),\qquad a_{k}=f \bigl(g^{(k)}(r,s) \bigr) $$
and the series \(\sum_{k} f (g^{(k)}(r,s) )\) is absolutely convergent.
By taking into account \(\vert \lim_{k\rightarrow\infty} (B^{r,s}x )_{k} \vert \leq \Vert x\Vert _{b_{c}^{r,s}}=1\), we get
$$ \bigl\vert f(x) \bigr\vert \leq \Vert x\Vert _{b_{c}^{r,s}} \biggl( \vert a\vert +\sum_{k}\vert a_{k} \vert \biggr) $$
whence
$$ \Vert f \Vert \leq \vert a\vert +\sum _{k}\vert a_{k} \vert $$
(3.10)
and \(\vert f(x)\vert \leq \Vert f\Vert \), so we define for all \(n\geq0\),
$$ x_{k}= \textstyle\begin{cases} \frac{1}{r^{k}}\sum_{j=0}^{k}\binom{k}{j}(-s)^{k-j}(r+s)^{j}\operatorname{sgn} a_{j}, &0 \leq k \leq n,\\ \frac{1}{r^{k}}\sum_{j=0}^{n}\binom {k}{j}(-s)^{k-j}(r+s)^{j}\operatorname{sgn} a_{j}+\frac{1}{r^{k}}\sum_{j=n+1}^{k}\binom {k}{j}(-s)^{k-j}(r+s)^{j}\operatorname{sgn} a, &k >n. \end{cases} $$
Then \(x \in b_{c}^{r,s}\), \(\Vert x\Vert _{b_{c}^{r,s}}=1\), \(\lim_{k\rightarrow\infty}(B^{r,s}x)_{k}=\operatorname{sgn} a\), and so
$$ \bigl\vert f(x) \bigr\vert = \Biggl\vert \vert a\vert +\sum _{k=0}^{n} \vert a_{k} \vert +\operatorname{sgn} a \sum _{k=n+1}^{\infty}a_{k} \Biggr\vert \leq \Vert f\Vert . $$
We know that \(\lim_{n\rightarrow\infty}\sum_{k=n+1}^{\infty}a_{k}=0\) whenever \(a=(a_{k})\in\ell_{1}\). Then, if we pass to the limit as \(n\rightarrow\infty\) in the last inequality, we have
$$ \vert a\vert +\sum_{k}\vert a_{k} \vert \leq \Vert f\Vert . $$
(3.11)
By combining (3.10) and (3.11), we conclude that
$$ \Vert f\Vert =\vert a\vert +\sum_{k}\vert a_{k} \vert , $$
which is the norm on \(\ell_{1}\).
Let us define a transformation L such that \(L : \{ b_{c}^{r,s} \}^{*}\longrightarrow\ell_{1}, L(f)=(a, a_{0}, a_{1}, \ldots)\). Then we have
$$ \bigl\Vert L(f) \bigr\Vert =\vert a\vert +\vert a_{0} \vert +\vert a_{1} \vert +\cdots=\Vert f\Vert $$
where \(\Vert L(f)\Vert \) is the \(\ell_{1}\)-norm. Therefore, L is norm preserving. It is clear that L is linear and surjective. This completes the proof. □

4 Some matrix classes related to the binomial sequence spaces

In this section, we characterize some matrix classes related to the binomial sequence spaces \(b_{0}^{r,s}\) and \(b_{c}^{r,s}\). We start with two lemmas which are required in the proofs of the theorems.

Lemma 4.1

see [2]

Each matrix map between BK-spaces is continuous.

Lemma 4.2

see [17]

Let \(X, Y\) be any two sequence spaces, A be an infinite matrix and B be a triangle matrix. Then \(A\in(X:Y_{B})\) if and only if \(BA \in(X:Y)\).

For brevity of notations, here and in the following, we prefer to use
$$ t_{nk}^{r,s}=\sum_{j=k}^{n} \binom{j}{k}(-s)^{j-k}r^{-j}(r+s)^{k}a_{nj} $$
for all \(n,k \in\mathbb{N}\).
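In matrix terms, \(t_{nk}^{r,s}\) is the \((n,k)\) entry of the product \(AD^{r,s}\), where \(D^{r,s}\) is the inverse of \(B^{r,s}\) from the proof of Theorem 2.7. The sketch below (sample values and a sample matrix A of our own choosing; finite truncation) confirms this on a finite section:

```python
# Sketch checking that t_{nk}^{r,s} coincides entrywise with the matrix
# product A * D^{r,s}, where D^{r,s} is the inverse of B^{r,s}.
# The matrix A and the values of r, s are sample choices of ours.
from fractions import Fraction
from math import comb

r, s = Fraction(2), Fraction(1)
N = 5
A = [[Fraction(1, n + k + 1) for k in range(N)] for n in range(N)]  # sample A
# D^{r,s}: d_{jk} = C(j,k)(-s)^{j-k}(s+r)^k / r^j for k <= j, else 0
D = [[comb(j, k) * (-s)**(j - k) * (s + r)**k / r**j if k <= j else Fraction(0)
      for k in range(N)] for j in range(N)]
# t_{nk} = sum_{j=k}^{N-1} C(j,k)(-s)^{j-k} r^{-j} (r+s)^k a_{nj}  (truncated)
T = [[sum(comb(j, k) * (-s)**(j - k) * (r + s)**k * A[n][j] / r**j
          for j in range(k, N)) for k in range(N)] for n in range(N)]
# entrywise matrix product A * D
AD = [[sum(A[n][j] * D[j][k] for j in range(N)) for k in range(N)]
      for n in range(N)]
assert T == AD
```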

Theorem 4.3

Let \(A=(a_{nk})\) be an infinite matrix with complex entries. Then the following statements hold.
  1. (I)
    \(A \in(b_{c}^{r,s}:\ell_{p})\) if and only if
    $$\begin{aligned} &\sup_{K\in\mathcal{F}}\sum_{n} \biggl\vert \sum_{k\in K}t_{nk}^{r,s} \biggr\vert ^{p} < \infty, \end{aligned}$$
    (4.1)
    $$\begin{aligned} &t_{nk}^{r,s} \quad\textit{exists for all } k,n\in\mathbb{N} , \end{aligned}$$
    (4.2)
    $$\begin{aligned} &\sum_{k}t_{nk}^{r,s} \quad\textit{converges for all }n\in\mathbb{N} , \end{aligned}$$
    (4.3)
    $$\begin{aligned} & \sup_{m \in\mathbb{N}}\sum_{k=0}^{m} \Biggl\vert \sum_{j=k}^{m} \binom {j}{k}(-s)^{j-k}r^{-j}(r+s)^{k}a_{nj} \Biggr\vert < \infty,\quad n\in\mathbb{N} , \end{aligned}$$
    (4.4)
    where \(1\leq p <\infty\).
     
  2. (II)
    \(A \in(b_{c}^{r,s}:\ell_{\infty})\) if and only if (4.2) and (4.4) hold, and
    $$\begin{aligned} \sup_{n \in\mathbb{N}}\sum_{k} \bigl\vert t_{nk}^{r,s}\bigr\vert < \infty . \end{aligned}$$
    (4.5)
     

Proof

Given a sequence \(x=(x_{k}) \in b_{c}^{r,s}\), we suppose that the conditions (4.1)-(4.4) hold. Then, by taking into account Theorem 3.5(II), we conclude that \(\{a_{nk} \} _{k\in\mathbb{N}} \in \{b_{c}^{r,s} \}^{\beta}\) for all \(n \in\mathbb{N}\). Thus, the A-transform of x exists. Let us consider a matrix \(U^{r,s}=(u_{nk}^{r,s})\) defined by \(u_{nk}^{r,s}=t_{nk}^{r,s}\) for all \(n,k \in\mathbb{N}\). Since \(U^{r,s}=(u_{nk}^{r,s})\) satisfies Lemma 3.3(v), we deduce that \(U^{r,s}=(u_{nk}^{r,s}) \in(c:\ell_{p})\).

Now, we consider the following equality:
$$ \sum_{k=0}^{m}a_{nk}x_{k}= \sum_{k=0}^{m}\sum _{j=k}^{m} \binom {j}{k}(-s)^{j-k}r^{-j}(r+s)^{k}a_{nj}y_{k} $$
(4.6)
for all \(n,m \in\mathbb{N}\). Letting \(m\rightarrow\infty\) in (4.6), we obtain
$$ \sum_{k}a_{nk}x_{k}= \sum_{k}t_{nk}^{r,s}y_{k} . $$
(4.7)
Taking the \(\ell_{p}\)-norm of both sides of (4.7), we have
$$ \Vert Ax \Vert _{\ell_{p}}= \bigl\Vert U^{r,s}y \bigr\Vert _{\ell_{p}}< \infty. $$
Therefore \(Ax\in\ell_{p}\) and so \(A \in(b_{c}^{r,s}:\ell_{p})\).
Conversely, suppose that \(A \in(b_{c}^{r,s}:\ell_{p})\). It is known that \(b_{c}^{r,s}\) and \(\ell_{p}\) are BK-spaces. Combining this fact with Lemma 4.1, we deduce that there is a constant \(M>0\) such that
$$ \Vert Ax \Vert _{\ell_{p}}\leq M\Vert x\Vert _{b_{c}^{r,s}} $$
(4.8)
for all \(x \in b_{c}^{r,s}\).
Now, fix \(K \in\mathcal{F}\) and define the sequence \(x=(x_{k})\) by \(x=\sum_{k\in K}g^{(k)}(r,s)\), where \(g^{(k)}(r,s)= \{ g^{(k)}_{n}(r,s) \}_{n\in\mathbb{N}}\). Since inequality (4.8) holds for all \(x \in b_{c}^{r,s}\), we have
$$ \Vert Ax \Vert _{\ell_{p}}= \biggl( \sum_{n} \biggl\vert \sum_{k\in K}t_{nk}^{r,s} \biggr\vert ^{p} \biggr)^{\frac{1}{p}}\leq M\Vert x\Vert _{b_{c}^{r,s}}=M. $$
Therefore (4.1) holds.

By assumption, A is applicable to the binomial sequence space \(b_{c}^{r,s}\), so \(\{a_{nk} \}_{k\in\mathbb{N}} \in \{b_{c}^{r,s} \}^{\beta}\) for all \(n \in\mathbb{N}\), and hence the conditions (4.2)-(4.4) hold by Theorem 3.5(II). This completes the proof of part (I).

If we take Lemma 3.3(iv) instead of Lemma 3.3(v), then part (II) can be proved in a similar way. □
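For concreteness, the transform \(t_{nk}^{r,s}\) appearing in conditions (4.1)-(4.5) can be explored numerically. The Python sketch below is illustrative only: the formula for \(b_{nk}^{r,s}\) is assumed from Section 2 (which lies outside this excerpt), and all function names are our own. It truncates the inner sum of condition (4.4) at \(j=M\) and checks that applying the transform to \(B^{r,s}\) itself recovers the identity matrix, since the transform amounts to multiplying A on the right by the inverse of \(B^{r,s}\).

```python
from math import comb

def b_rs(r, s, n, k):
    # Assumed entries of the binomial matrix B^{r,s} from Section 2 (not shown
    # in this excerpt): b_{nk}^{r,s} = (r+s)^{-n} C(n,k) s^{n-k} r^k, 0 <= k <= n.
    return comb(n, k) * s**(n - k) * r**k / (r + s)**n if 0 <= k <= n else 0.0

def t_rs(a, r, s, n, k, M):
    # Truncation at j = M of the inner sum in condition (4.4):
    # sum_{j=k}^{M} C(j,k) (-s)^{j-k} r^{-j} (r+s)^k * a(n, j).
    return sum(comb(j, k) * (-s)**(j - k) * (r + s)**k / r**j * a(n, j)
               for j in range(k, M + 1))

r, s, M = 2.0, 1.0, 12
a = lambda n, j: b_rs(r, s, n, j)  # test matrix: A = B^{r,s} itself

# With A = B^{r,s}, the transform should recover the identity matrix:
# the alternating binomial sums telescope to Kronecker deltas.
for n in range(6):
    for k in range(6):
        expected = 1.0 if n == k else 0.0
        assert abs(t_rs(a, r, s, n, k, M) - expected) < 1e-9
```

The truncation level M is harmless here because \(b_{nj}^{r,s}=0\) for \(j>n\); for a general matrix A, condition (4.4) is precisely the requirement that these truncated sums stay uniformly bounded.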

Theorem 4.4

Let \(A=(a_{nk})\) be an infinite matrix with complex entries. Then, \(A \in(b_{c}^{r,s}:c)\) if and only if (4.2), (4.4), and (4.5) hold, and
$$\begin{aligned} &\lim_{n\rightarrow\infty}t_{nk}^{r,s}= \alpha_{k}\quad \textit{for all }k\in \mathbb{N} , \end{aligned}$$
(4.9)
$$\begin{aligned} & \lim_{n\rightarrow\infty}\sum_{k}t_{nk}^{r,s}= \alpha . \end{aligned}$$
(4.10)

Proof

Suppose that A satisfies the conditions (4.2), (4.4), (4.5), (4.9), and (4.10). Let \(x=(x_{k}) \in b_{c}^{r,s}\) be arbitrary with \(\lim_{k\rightarrow\infty}x_{k}=l\); then Ax exists. Since \(B^{r,s}=(b_{nk}^{r,s})\) is regular and \(y=(y_{k})\) is connected with the sequence \(x=(x_{k})\) by equation (2.2), we obtain \(y=(y_{k})\in c\) with \(\lim_{k\rightarrow\infty}y_{k}=l\).

By considering the conditions (4.5) and (4.9), we have
$$ \sum_{j=0}^{k}\vert \alpha_{j} \vert \leq\sup_{n \in\mathbb{N}}\sum_{j} \bigl\vert t_{nj}^{r,s} \bigr\vert < \infty $$
for all \(k \in\mathbb{N}\). This shows that \((\alpha_{k})\in\ell_{1}\). Using equality (4.7), we have
$$ \sum_{k}a_{nk}x_{k}= \sum_{k}t_{nk}^{r,s}(y_{k}-l)+l \sum_{k}t_{nk}^{r,s} ,\quad n\in \mathbb{N} . $$
(4.11)
By the conditions (4.5), (4.9), and (4.10), letting \(n\rightarrow\infty\) in (4.11) yields
$$ \lim_{n\rightarrow\infty}(Ax)_{n}=\sum _{k}\alpha_{k}(y_{k}-l)+l\alpha, $$
(4.12)
which means that \(A \in(b_{c}^{r,s}:c)\).

Conversely, suppose that \(A \in(b_{c}^{r,s}:c)\). Every convergent sequence is bounded, that is, \(c\subset\ell_{\infty}\). Combining this fact with Theorem 4.3(II), we deduce that the conditions (4.2), (4.4), and (4.5) are necessary. Since Ax exists and belongs to c for every \(x \in b_{c}^{r,s}\), taking \(x=g^{(k)}(r,s)= \{ g^{(k)}_{n}(r,s) \}_{n\in\mathbb{N}}\) in place of an arbitrary sequence gives \(Ag^{(k)}(r,s)= \{ t^{r,s}_{nk} \} _{n\in\mathbb{N}}\in c\) for all \(k \in\mathbb{N}\), which establishes the necessity of (4.9).

Moreover, taking \(x=e\) in (4.7), we obtain \(Ax= \{ \sum_{k}t^{r,s}_{nk} \}_{n\in\mathbb{N}}\in c\), which establishes the necessity of (4.10). This completes the proof. □

Corollary 4.5

Let \(A=(a_{nk})\) be an infinite matrix with complex entries. Then \(A \in(b_{c}^{r,s}:c)_{m}\) if and only if the conditions (4.2), (4.4), and (4.5) hold, and the conditions (4.9) and (4.10) hold with \(\alpha_{k}=0\) for all \(k \in\mathbb{N}\) and \(\alpha=m\), in turn.

Lemma 4.6 (see [18])

Let \(A=(a_{nk})\) be an infinite matrix with complex entries. Then \(A \in(b_{\infty}^{r,s}:c)\) if and only if (4.5) and (4.9) hold, and
$$\begin{aligned} & \lim_{n\rightarrow\infty}\sum_{k} \bigl\vert t_{nk}^{r,s}\bigr\vert =\sum _{k} \Bigl\vert \lim_{n\rightarrow\infty} t_{nk}^{r,s} \Bigr\vert , \end{aligned}$$
(4.13)
$$\begin{aligned} &\lim_{m\rightarrow\infty}\sum_{k} \Biggl\vert \sum_{j=k}^{m} \binom {j}{k}(-s)^{j-k}r^{-j}(r+s)^{k}a_{nj} \Biggr\vert =\sum_{k}\bigl\vert t_{nk}^{r,s} \bigr\vert ,\quad n \in\mathbb{N} . \end{aligned}$$
(4.14)

Theorem 4.7

\((b_{c}^{r,s}:c)_{m}\cap(b_{\infty}^{r,s}:c)=\emptyset\).

Proof

Suppose that \((b_{c}^{r,s}:c)_{m}\cap(b_{\infty}^{r,s}:c)\neq \emptyset\). Then there is at least one matrix \(A=(a_{nk})\) satisfying the conditions of both Corollary 4.5 and Lemma 4.6. By Corollary 4.5, the condition (4.9) holds with \(\alpha_{k}=0\) for all \(k\in\mathbb{N}\); combining this with the condition (4.13), we conclude that
$$ \lim_{n\rightarrow\infty}\sum_{k}\bigl\vert t_{nk}^{r,s}\bigr\vert =0. $$
This contradicts the condition (4.10), which holds with \(\alpha=m\) by Corollary 4.5. So the classes \((b_{c}^{r,s}:c)_{m}\) and \((b_{\infty}^{r,s}:c)\) are disjoint, which completes the proof. □

Now, by using Lemma 4.2, we can give some more results.

Corollary 4.8

Given an infinite matrix \(A=(a_{nk})\) with complex entries, we define a matrix \(E^{u,v}=(e_{nk}^{u,v})\) as follows:
$$ e_{nk}^{u,v}=\frac{1}{(u+v)^{n}}\sum _{j=0}^{n}\binom{n}{j}v^{n-j}u^{j}a_{jk} $$
for all \(n,k \in\mathbb{N}\), where \(u, v \in\mathbb{R}\) and \(uv>0\). Then the necessary and sufficient conditions in order that A belongs to any of the classes \((b_{c}^{r,s}:b_{\infty}^{u,v})\), \((b_{c}^{r,s}:b_{p}^{u,v})\), \((b_{c}^{r,s}:b_{c}^{u,v})\), and \((b_{c}^{r,s}:b_{c}^{u,v})_{m}\) are obtained by taking \(E^{u,v}=(e_{nk}^{u,v})\) instead of \(A=(a_{nk})\) in the required ones in Theorems 4.3, 4.4 and Corollary 4.5, where \(b_{p}^{u,v}\) and \(b_{\infty}^{u,v}\) are defined in [18].
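The matrix \(E^{u,v}\) of Corollary 4.8 is simply the binomial matrix \(B^{u,v}\) applied to the rows of A. A minimal Python sketch follows; the function name and the identity-matrix example are our own illustrative choices, not part of the paper.

```python
from math import comb

def binomial_transform(a, u, v, N):
    # e_{nk}^{u,v} = (u+v)^{-n} * sum_{j=0}^{n} C(n,j) v^{n-j} u^j * a[j][k],
    # computed for the top-left N x N block of the matrix A.
    return [[sum(comb(n, j) * v**(n - j) * u**j * a[j][k] for j in range(n + 1))
             / (u + v)**n
             for k in range(N)]
            for n in range(N)]

# With A the identity matrix, E^{u,v} is exactly the binomial matrix B^{u,v};
# by the binomial theorem each of its rows sums to 1.
N, u, v = 5, 3.0, 1.0
identity = [[1.0 if i == k else 0.0 for k in range(N)] for i in range(N)]
E = binomial_transform(identity, u, v, N)
for row in E:
    assert abs(sum(row) - 1.0) < 1e-12
```

The same pattern underlies Corollaries 4.9-4.12: each pre- or post-composes A with an explicit triangle (Cesàro means, differences, partial sums) and then invokes Theorems 4.3, 4.4 and Corollary 4.5.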

Corollary 4.9

Given an infinite matrix \(A=(a_{nk})\) with complex entries, we define a matrix \(C=(c_{nk})\) as follows:
$$ c_{nk}=\frac{1}{n+1}\sum_{j=0}^{n}a_{jk} $$
for all \(n,k \in\mathbb{N}\). Then the necessary and sufficient conditions in order that A belongs to any of the classes \((b_{c}^{r,s}:X_{\infty})\), \((b_{c}^{r,s}:X_{p})\), \((b_{c}^{r,s}:\tilde {c})\), and \((b_{c}^{r,s}:\tilde{c})_{m}\) are obtained by taking \(C=(c_{nk})\) instead of \(A=(a_{nk})\) in the required ones in Theorems 4.3, 4.4 and Corollary 4.5, where \(X_{p}\) and \(X_{\infty}\) are defined in [4] and \(\tilde{c}\) is defined in [5].

Corollary 4.10

Given an infinite matrix \(A=(a_{nk})\) with complex entries, we define two matrices \(C=(c_{nk})\) and \(E=(e_{nk})\) as follows:
$$ c_{nk}=a_{nk}-a_{n+1,k}\quad \textit{and} \quad e_{nk}=a_{nk}-a_{n-1,k} $$
for all \(n,k \in\mathbb{N}\). Then the necessary and sufficient conditions in order that A belongs to any of the classes \((b_{c}^{r,s}:\ell_{\infty}(\Delta))\), \((b_{c}^{r,s}:c(\Delta))\), \((b_{c}^{r,s}:\ell_{p}(\Delta))\), and \((b_{c}^{r,s}:c(\Delta))_{m}\) are obtained by taking \(C=(c_{nk})\) or \(E=(e_{nk})\) instead of \(A=(a_{nk})\) in the required ones in Theorems 4.3, 4.4 and Corollary 4.5, where \(\ell_{\infty}(\Delta)\) and \(c(\Delta)\) are defined in [19] and \(\ell_{p}(\Delta)\) is defined in [17].

Corollary 4.11

Given an infinite matrix \(A=(a_{nk})\) with complex entries, we define a matrix \(E=(e_{nk})\) as follows:
$$ e_{nk}=\frac{1}{n+1}\sum_{j=0}^{n} \bigl(1+t^{j} \bigr)a_{jk} $$
for all \(n,k \in\mathbb{N}\), where \(0< t<1\). Then the necessary and sufficient conditions in order that A belongs to any of the classes \((b_{c}^{r,s}:a_{\infty}^{t})\), \((b_{c}^{r,s}:a_{p}^{t})\), \((b_{c}^{r,s}:a_{c}^{t})\), and \((b_{c}^{r,s}:a_{c}^{t})_{m}\) are obtained by taking \(E=(e_{nk})\) instead of \(A=(a_{nk})\) in the required ones in Theorems 4.3, 4.4 and Corollary 4.5, where \(a_{\infty}^{t}\) and \(a_{p}^{t}\) are defined in [20] and \(a_{c}^{t}\) is defined in [6].

Corollary 4.12

Given an infinite matrix \(A=(a_{nk})\) with complex entries, we define a matrix \(E=(e_{nk})\) as follows:
$$ e_{nk}=\sum_{j=0}^{n}a_{jk} $$
for all \(n,k \in\mathbb{N}\). Then the necessary and sufficient conditions in order that A belongs to any of the classes \((b_{c}^{r,s}:bs)\), \((b_{c}^{r,s}:cs)\), and \((b_{c}^{r,s}:cs)_{m}\) are obtained by taking \(E=(e_{nk})\) instead of \(A=(a_{nk})\) in the required ones in Theorems 4.3, 4.4 and Corollary 4.5.

5 Conclusion

By the definition of the binomial matrix \(B^{r,s}=(b_{nk}^{r,s})\), in the case \(r+s=1\) the matrix \(B^{r,s}\) reduces to \(E^{r}=(e_{nk}^{r})\), the method of Euler means of order r. Hence the results obtained from the matrix domain of the binomial matrix \(B^{r,s}=(b_{nk}^{r,s})\) are more general and more extensive than the results on the matrix domain of the Euler means of order r. Moreover, the binomial matrix \(B^{r,s}=(b_{nk}^{r,s})\) is not a special case of the weighted mean matrices. So, this paper fills a gap in the existing literature.
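The reduction to Euler means can be checked entry by entry. In the sketch below the formulas for \(b_{nk}^{r,s}\) and for the Euler means \(e_{nk}^{r}\) are assumptions taken from Section 2 and the Euler-means literature, respectively, since neither definition appears in this section.

```python
from math import comb

def b_rs(r, s, n, k):
    # Assumed binomial-matrix entries:
    # b_{nk}^{r,s} = (r+s)^{-n} C(n,k) s^{n-k} r^k for 0 <= k <= n.
    return comb(n, k) * s**(n - k) * r**k / (r + s)**n if 0 <= k <= n else 0.0

def euler(r, n, k):
    # Euler means of order r: e_{nk}^{r} = C(n,k) (1-r)^{n-k} r^k for 0 <= k <= n.
    return comb(n, k) * (1 - r)**(n - k) * r**k if 0 <= k <= n else 0.0

# With r + s = 1, the normalizing factor (r+s)^n equals 1 and s = 1 - r,
# so B^{r,s} coincides with E^{r} entry by entry.
r = 0.4
s = 1.0 - r
for n in range(8):
    for k in range(8):
        assert abs(b_rs(r, s, n, k) - euler(r, n, k)) < 1e-12
```

For \(r+s\neq 1\) the two matrices differ, which is why the binomial results properly extend the Euler ones.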

Declarations

Acknowledgements

We would like to express our thanks to the anonymous reviewers for their valuable comments.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Faculty of Arts and Sciences, Department of Mathematics, Recep Tayyip Erdoğan University

References

  1. Choudhary, B, Nanda, S: Functional Analysis with Applications. Wiley, New Delhi (1989)
  2. Wilansky, A: Summability Through Functional Analysis. North-Holland Mathematics Studies, vol. 85. Elsevier, Amsterdam (1984)
  3. Wang, C-S: On Nörlund sequence spaces. Tamkang J. Math. 9, 269-274 (1978)
  4. Ng, P-N, Lee, P-Y: Cesàro sequence spaces of non-absolute type. Comment. Math. (Prace Mat.) 20(2), 429-433 (1978)
  5. Şengönül, M, Başar, F: Some new Cesàro sequence spaces of non-absolute type which include the spaces \(c_{0}\) and c. Soochow J. Math. 31(1), 107-119 (2005)
  6. Aydın, C, Başar, F: On the new sequence spaces which include the spaces \(c_{0}\) and c. Hokkaido Math. J. 33(2), 383-398 (2004)
  7. Altay, B, Başar, F: Some Euler sequence spaces of non-absolute type. Ukr. Math. J. 57(1), 1-17 (2005)
  8. Altay, B, Başar, F, Mursaleen, M: On the Euler sequence spaces which include the spaces \(\ell _{p}\) and \(\ell_{\infty}\) I. Inf. Sci. 176(10), 1450-1462 (2006)
  9. Mursaleen, M, Başar, F, Altay, B: On the Euler sequence spaces which include the spaces \(\ell _{p}\) and \(\ell_{\infty}\) II. Nonlinear Anal. 65(3), 707-717 (2006)
  10. Altay, B, Polat, H: On some new Euler difference sequence spaces. Southeast Asian Bull. Math. 30(2), 209-220 (2006)
  11. Polat, H, Başar, F: Some Euler spaces of difference sequences of order m. Acta Math. Sci. Ser. B, Engl. Ed. 27B(2), 254-266 (2007)
  12. Kara, EE, Başarır, M: On compact operators and some Euler \(B^{(m)}\)-difference sequence spaces. J. Math. Anal. Appl. 379(2), 499-511 (2011)
  13. Karakaya, V, Polat, H: Some new paranormed sequence spaces defined by Euler difference operators. Acta Sci. Math. (Szeged) 76, 87-100 (2010)
  14. Demiriz, S, Çakan, C: On some new paranormed Euler sequence spaces and Euler core. Acta Math. Sin. Engl. Ser. 26(7), 1207-1222 (2010)
  15. Boos, J: Classical and Modern Methods in Summability. Oxford University Press, New York (2000)
  16. Stieglitz, M, Tietz, H: Matrixtransformationen von Folgenräumen. Eine Ergebnisübersicht. Math. Z. 154, 1-16 (1977)
  17. Başar, F, Altay, B: On the space of sequences of p-bounded variation and related matrix mappings. Ukr. Math. J. 55(1), 136-147 (2003)
  18. Bisgin, MC: Matrix transformations and compact operators on the binomial sequence spaces (under review)
  19. Kızmaz, H: On certain sequence spaces. Can. Math. Bull. 24(2), 169-176 (1981)
  20. Aydın, C, Başar, F: Some new sequence spaces which include the spaces \(\ell_{p}\) and \(\ell_{\infty}\). Demonstr. Math. 38(3), 641-656 (2005)

Copyright

© The Author(s) 2016