• Review
• Open Access

# Statistical convergence in a paranormed space

Journal of Inequalities and Applications 2012, 2012:39

https://doi.org/10.1186/1029-242X-2012-39

• Received: 14 August 2011
• Accepted: 21 February 2012

## Abstract

In this article, we define the notions of statistical convergence, statistical Cauchy sequence, and strong p-Cesàro summability in a paranormed space, and we establish some relations between them.

AMS subject classification (2000): 41A10; 41A25; 41A36; 40A05; 40A30.

## Keywords

• density
• statistical convergence
• statistical Cauchy
• paranormed space
• strongly p-Cesàro summability

## 1 Introduction and preliminaries

The concept of statistical convergence for sequences of real numbers was introduced by Fast and Steinhaus independently in the same year, 1951, and since then several generalizations and applications of this notion have been investigated by various authors. This notion was defined in normed spaces by Kolk and in locally convex Hausdorff topological spaces by Maddox. Çakalli extended this notion to topological Hausdorff groups. Recently, in [15, 16], the concept of statistical convergence was studied in probabilistic normed spaces and in intuitionistic fuzzy normed spaces, respectively. In this article, we study the concepts of statistical convergence, statistical Cauchy sequence, and strong p-Cesàro summability in a paranormed space.

Let K be a subset of the set of natural numbers ℕ. Then the asymptotic density of K, denoted by δ(K), is defined as $\delta \left(K\right)={\text{lim}}_{n}\frac{1}{n}\left|\left\{k\le n:k\in K\right\}\right|$, where the vertical bars denote the cardinality of the enclosed set.
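As a quick numerical illustration (ours, not part of the original article), the finite ratios $\frac{1}{n}\left|\left\{k\le n:k\in K\right\}\right|$ can be computed directly. For the set of perfect squares the count up to n is ⌊√n⌋, so the ratio behaves like 1/√n and δ(K) = 0:

```python
def density_upto(K, n):
    """Finite approximation of the asymptotic density:
    (1/n) * |{k <= n : k in K}|."""
    return sum(1 for k in range(1, n + 1) if k in K) / n

# K = set of perfect squares up to 10**6;
# |{k <= n : k in K}| = floor(sqrt(n)), so the ratio ~ 1/sqrt(n) -> 0
squares = {i * i for i in range(1, 1001)}

for n in (100, 10_000, 1_000_000):
    print(n, density_upto(squares, n))
```

The printed ratios shrink like 1/√n, consistent with δ(K) = 0.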

A number sequence x = (x k ) is said to be statistically convergent to the number L if for each ϵ > 0, the set K(ϵ) = {k ≤ n: |x k - L| > ϵ} has asymptotic density zero, i.e.,
$\underset{n}{\text{lim}}\frac{1}{n}\left|\left\{k\le n:\left|{x}_{k}-L\right|>\epsilon \right\}\right|=0.$

In this case we write st-lim x = L.
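For instance (an illustrative sketch of ours, not from the article), the unbounded sequence with x k = k when k is a perfect square and x k = 1/k otherwise has no ordinary limit, yet st-lim x = 0, because for each ϵ > 0 the exceptional set has density zero:

```python
import math

def st_exceptional_density(x, L, eps, n):
    """(1/n) * |{k <= n : |x(k) - L| >= eps}| for the first n terms."""
    return sum(1 for k in range(1, n + 1) if abs(x(k) - L) >= eps) / n

def x(k):
    # x_k = k on the perfect squares (a density-zero set), x_k = 1/k otherwise
    return float(k) if math.isqrt(k) ** 2 == k else 1.0 / k

# the exceptional sets thin out, so st-lim x = 0 even though x is unbounded
for n in (100, 10_000, 1_000_000):
    print(n, st_exceptional_density(x, 0.0, 0.5, n))
```

This also shows that a statistically convergent sequence need not be bounded.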

A number sequence x = (x k ) is said to be a statistically Cauchy sequence if for every ϵ > 0, there exists a number N = N(ϵ) such that
$\underset{n}{\text{lim}}\frac{1}{n}\left|\left\{j\le n:\left|{x}_{j}-{x}_{N}\right|\ge \epsilon \right\}\right|=0.$

The concept of a paranorm is a generalization of that of an absolute value.

A paranorm is a function g: X → ℝ defined on a linear space X such that for all x, y ∈ X:

(P 1) g(x) = 0 if x = θ

(P 2) g(-x) = g(x)

(P 3) g(x + y) ≤ g(x) + g(y)

(P 4) If (α n ) is a sequence of scalars with α n → α0 (n → ∞) and x n , a ∈ X with x n → a (n → ∞) in the sense that g(x n - a) → 0 (n → ∞), then α n x n → α0a (n → ∞), in the sense that g(α n x n - α0a) → 0 (n → ∞).

A paranorm g for which g(x) = 0 implies x = θ is called a total paranorm on X, and the pair (X, g) is called a total paranormed space.

Note that each seminorm (norm) p on X is a paranorm (total paranorm), but the converse need not be true.
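A concrete check (our illustration; the function g below is a standard textbook example, not taken from this article): g(x) = |x|/(1 + |x|) on the real line is a total paranorm satisfying (P 1)-(P 4), but it is not a norm, since it fails absolute homogeneity:

```python
def g(x):
    """g(x) = |x| / (1 + |x|): a total paranorm on the real line."""
    return abs(x) / (1 + abs(x))

pts = [-3.0, -1.5, -0.25, 0.0, 0.7, 2.0]

# (P2): g(-x) = g(x)
assert all(g(-x) == g(x) for x in pts)

# (P3): g(x + y) <= g(x) + g(y), since t -> t/(1+t) is increasing and subadditive
assert all(g(x + y) <= g(x) + g(y) + 1e-12 for x in pts for y in pts)

# not a norm: g(2 * 1) = 2/3, while |2| * g(1) = 1
print(g(2.0), 2 * g(1.0))
```

The last line exhibits the failure of homogeneity that separates paranorms from norms.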

In this article, we define and study the notion of convergence, statistical convergence, statistical Cauchy, and strong summability by a modulus function in a paranormed space.

Let (X, g) be a paranormed space.

Definition 1.1. A sequence x = (x k ) is said to be convergent (or g-convergent) to the number ξ in (X, g) if for every ε > 0, there exists a positive integer k0 such that g(x k - ξ) < ε whenever k ≥ k0. In this case we write g-lim x = ξ, and ξ is called the g-limit of x.

Definition 1.2. A sequence x = (x k ) is said to be statistically convergent to the number ξ in (X, g) (or g(st)-convergent) if for each ϵ > 0,
$\underset{n}{\text{lim}}\frac{1}{n}\left|\left\{k\le n:g\left({x}_{k}-\xi \right)>\epsilon \right\}\right|=0.$

In this case, we write g(st)-lim x = ξ. We denote the set of all g(st)-convergent sequences by S g .

Definition 1.3. A number sequence x = (x k ) is said to be a statistically Cauchy sequence in (X, g) (or g(st)-Cauchy) if for every ϵ > 0 there exists a number N = N(ϵ) such that
$\underset{n}{\text{lim}}\frac{1}{n}\left|\left\{j\le n:g\left({x}_{j}-{x}_{N}\right)\ge \epsilon \right\}\right|=0.$

## 2 Main results

Theorem 2.1. If a sequence x = (x k ) is statistically convergent in (X, g), then its g(st)-limit is unique.

Proof. Suppose that g(st)-lim x = ξ1 and g(st)-lim x = ξ2. Given ε > 0, define the following sets:
$\begin{array}{c}{K}_{1}\left(\epsilon \right)=\left\{n\in ℕ:g\left({x}_{n}-{\xi }_{1}\right)\ge \epsilon /2\right\},\\ {K}_{2}\left(\epsilon \right)=\left\{n\in ℕ:g\left({x}_{n}-{\xi }_{2}\right)\ge \epsilon /2\right\}.\end{array}$

Since g(st)-lim x = ξ1, we have δ(K1(ε)) = 0. Similarly, g(st)-lim x = ξ2 implies that δ(K2(ε)) = 0. Now, let K(ε) = K1(ε) ∪ K2(ε). Then δ(K(ε)) = 0, and hence the complement K C (ε) is a nonempty set with δ(K C (ε)) = 1. Now if n ∈ ℕ \ K(ε), then we have g(ξ1 - ξ2) ≤ g(x n - ξ1) + g(x n - ξ2) < ε/2 + ε/2 = ε.

Since ε > 0 was arbitrary, we get g(ξ1 - ξ2) = 0 and hence ξ1 = ξ2.

Theorem 2.2. If g-lim x = ξ, then g(st)-lim x = ξ; but the converse need not be true in general.

Proof. Let g-lim x = ξ. Then for every ε > 0, there is a positive integer N such that
$g\left({x}_{n}-\xi \right)<\epsilon$

for all n ≥ N. Since the set A(ϵ) := {k ∈ ℕ : g(x k - ξ) ≥ ε} is contained in the finite set {1, 2, ..., N - 1}, we have δ(A(ϵ)) = 0. Hence g(st)-lim x = ξ.

The following example shows that the converse need not be true.

Example 2.1. Let X = ℓ(1/k) := {x = (x k ): ∑ k |x k |1/k< ∞} with the paranorm g(x) = ∑ k |x k |1/k. Define a sequence x = (x k ) by
${x}_{k}:=\left\{\begin{array}{cc}\hfill k,\hfill & \hfill \text{if}\phantom{\rule{2.77695pt}{0ex}}k={n}^{2},n\in ℕ;\hfill \\ \hfill 0,\hfill & \hfill \mathsf{\text{otherwise}};\hfill \end{array}\right\$
and write
$K\left(\epsilon \right):=\left\{k\le n:g\left({x}_{k}\right)\ge \epsilon \right\},0<\epsilon <1.$
We see that
$g\left({x}_{k}\right):=\left\{\begin{array}{cc}\hfill {k}^{1/k},\hfill & \hfill \text{if}\phantom{\rule{2.77695pt}{0ex}}k={n}^{2},n\in ℕ;\hfill \\ \hfill 0,\hfill & \hfill \mathsf{\text{otherwise}};\hfill \end{array}\right\$
and hence g(x k ) → 1 along the subsequence k = n², n ∈ ℕ, while g(x k ) = 0 for all other k. Therefore g-lim x does not exist. On the other hand, the set of perfect squares has asymptotic density zero, so δ(K(ε)) = 0; that is, g(st)-lim x = 0.
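The computation in the example above can be checked numerically (a sketch of ours, identifying each x k with its paranorm value g(x k ) = k^{1/k} on the squares):

```python
import math

def g_xk(k):
    """g(x_k) from the example: k**(1/k) if k is a perfect square, else 0."""
    return k ** (1.0 / k) if math.isqrt(k) ** 2 == k else 0.0

# along the squares g(x_k) -> 1, so an ordinary g-limit cannot exist
print([round(g_xk(n * n), 4) for n in (2, 10, 100)])

# yet for 0 < eps < 1 the set K(eps) has density at most floor(sqrt(n))/n -> 0,
# which is exactly why g(st)-lim x = 0
eps = 0.5
for n in (100, 10_000):
    print(n, sum(1 for k in range(1, n + 1) if g_xk(k) >= eps) / n)
```

The two printouts show the two halves of the example: g(x k ) clusters at both 1 and 0, while the density of the exceptional set vanishes.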

Theorem 2.3. Let g(st)-lim x = ξ1 and g(st) - lim y = ξ2. Then

(i) g(st)-lim(x ± y) = ξ1 ± ξ2,

(ii) g(st)-lim αx = αξ1 for any scalar α.

Proof. Straightforward; we omit the details.

Theorem 2.4. A sequence x = (x k ) in (X, g) is statistically convergent to ξ if and only if there exists a set K = {k1 < k2 < ⋯ < k n < ⋯} with δ(K) = 1 such that $g\left({x}_{{k}_{n}}-\xi \right)\to 0\left(n\to \infty \right)$.

Proof. Suppose that g(st)-lim x = ξ. For r = 1, 2, ..., write
${K}_{r}:=\left\{n\in ℕ:g\left({x}_{{k}_{n}}-\xi \right)\ge \frac{1}{r}\right\},$
and
${M}_{r}:=\left\{n\in ℕ:g\left({x}_{{k}_{n}}-\xi \right)<\frac{1}{r}\right\}.$
Then δ(K r ) = 0,
${M}_{1}\supset {M}_{2}\supset \dots \supset {M}_{i}\supset {M}_{i+1}\supset \dots ,$
(2.4.1)
and
$\delta \left({M}_{r}\right)=1,r=1,2,\dots$
(2.4.2)
Now we show that, for n ∈ M r , $\left({x}_{{k}_{n}}\right)$ is g-convergent to ξ. Suppose, on the contrary, that $\left({x}_{{k}_{n}}\right)$ is not g-convergent to ξ. Then there is ε > 0 such that $g\left({x}_{{k}_{n}}-\xi \right)\ge \epsilon$ for infinitely many terms. Let ${M}_{\epsilon }:=\left\{n\in ℕ:g\left({x}_{{k}_{n}}-\xi \right)<\epsilon \right\}$ and choose $r\in ℕ$ with $\epsilon >\frac{1}{r}$. Then
$\delta \left({M}_{\epsilon }\right)=0,$
(2.4.3)

and by (2.4.1), M r ⊂ M ε . Hence δ(M r ) = 0, which contradicts (2.4.2), and we get that $\left({x}_{{k}_{n}}\right)$ is g-convergent to ξ.

Conversely, suppose that there exists a set K = {k1 < k2 < k3 < ⋯ < k n < ⋯} with δ(K) = 1 such that $g-{\text{lim}}_{n\to \infty }{x}_{{k}_{n}}=\xi$. Then for each ε > 0 there is a positive integer N such that $g\left({x}_{{k}_{n}}-\xi \right)<\epsilon$ for n > N. Put K ε := {n ∈ ℕ : g(x n - ξ) ≥ ε} and K' := {kN+1, kN+2, ...}. Then δ(K') = 1 and K ε ⊂ ℕ \ K', which implies that δ(K ε ) = 0. Hence g(st)-lim x = ξ.
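Theorem 2.4 can be seen at work on the example sequence above (an illustrative sketch of ours): the non-squares form a set K of density 1 along which g(x k - 0) is identically 0, hence g-convergent to 0:

```python
import math

def g_xk(k):
    # g(x_k) = k**(1/k) on the perfect squares, 0 otherwise (example above)
    return k ** (1.0 / k) if math.isqrt(k) ** 2 == k else 0.0

n = 10_000
K = [k for k in range(1, n + 1) if math.isqrt(k) ** 2 != k]  # non-squares

print(len(K) / n)                 # finite density of K: tends to 1
print(max(g_xk(k) for k in K))    # g(x_k) vanishes identically along K
```

So the density-1 set demanded by the theorem can be taken to be the complement of the squares here.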

Theorem 2.5. Let (X, g) be a complete paranormed space. Then a sequence x = (x k ) of points in (X, g) is statistically convergent if and only if it is statistically Cauchy.

Proof. Suppose that g(st)-lim x = ξ. Then, we get
$\delta \left(A\left(\epsilon \right)\right)=0,$
(2.5.1)
where A(ε) := {n ∈ ℕ : g(x n - ξ) ≥ ε/2}. This implies that
$\delta \left({A}^{C}\left(\epsilon \right)\right)=\delta \left(\left\{n\in ℕ:g\left({x}_{n}-\xi \right)<\epsilon /2\right\}\right)=1.$
Let m ∈ A C (ε). Then g(x m - ξ) < ε/2. Now, let B(ε) := {n ∈ ℕ : g(x n - x m ) ≥ ε}. We show that B(ε) ⊂ A(ε). Let n ∈ B(ε). Then g(x n - x m ) ≥ ε, and hence g(x n - ξ) ≥ ε/2, i.e., n ∈ A(ε). Indeed, if g(x n - ξ) < ε/2, then
$\epsilon \le g\left({x}_{n}-{x}_{m}\right)\le g\left({x}_{n}-\xi \right)+g\left({x}_{m}-\xi \right)<\frac{\epsilon }{2}+\frac{\epsilon }{2}=\epsilon ,$

which is not possible. Hence B(ε) ⊂ A(ε), so that δ(B(ε)) = 0; that is, x = (x k ) is g(st)-Cauchy.

Conversely, suppose that x = (x k ) is g(st)-Cauchy but not g(st)-convergent. Then there exists M ∈ ℕ such that δ(G(ε)) = 0, where G(ε) := {n ∈ ℕ : g(x n - x M ) ≥ ε}, and δ(D(ε)) = 0, where D(ε) := {n ∈ ℕ : g(x n - ξ) < ε/2}, i.e., δ(D C (ε)) = 1. Since
$g\left({x}_{n}-{x}_{M}\right)\le g\left({x}_{n}-\xi \right)+g\left({x}_{M}-\xi \right)<\epsilon$
whenever g(x n - ξ) < ε/2 and g(x M - ξ) < ε/2, we obtain δ(G C (ε)) = 0, i.e., δ(G(ε)) = 1. This contradicts δ(G(ε)) = 0, since x = (x k ) was g(st)-Cauchy. Hence x = (x k ) must be g(st)-convergent.

## 3 Strong summability

In this section, we define the notion of strong summability by a modulus function and establish its relation with statistical convergence in a paranormed space.

Definition 3.1. A sequence x = (x k ) is said to be strongly p-Cesàro summable (0 < p < ∞) to the limit ξ in (X, g) if
$\underset{n}{\text{lim}}\frac{1}{n}\sum _{j=1}^{n}{\left(g\left({x}_{j}-\xi \right)\right)}^{p}=0,$

and we write it as x k → ξ [C1, g] p . In this case, ξ is called the [C1, g] p -limit of x.
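For instance (our numerical sketch), the Cesàro means of the example sequence from Section 2 tend to 0 for p = 1, so that sequence is strongly 1-Cesàro summable to ξ = 0:

```python
import math

def g_xk(k):
    # g(x_k - 0) for the example of Section 2: k**(1/k) on squares, else 0
    return k ** (1.0 / k) if math.isqrt(k) ** 2 == k else 0.0

def cesaro_mean(n, p):
    """(1/n) * sum_{j<=n} g(x_j - xi)**p with xi = 0."""
    return sum(g_xk(j) ** p for j in range(1, n + 1)) / n

# roughly sqrt(n) nonzero terms, each close to 1, so the mean ~ 1/sqrt(n) -> 0
means = [cesaro_mean(n, p=1) for n in (100, 10_000, 1_000_000)]
print(means)
```

The decreasing means illustrate Theorem 3.1(a): strong Cesàro summability forces the exceptional set of indices to have density zero.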

Theorem 3.1. (a) If 0 < p < ∞ and x k → ξ [C1, g] p , then x = (x k ) is statistically convergent to ξ in (X, g).

(b) If x = (x k ) is bounded and statistically convergent to ξ in (X, g), then x k → ξ [C1, g] p .

Proof. (a) Let x k → ξ [C1, g] p . Then
$\begin{array}{c}0←\frac{1}{n}\sum _{k=1}^{n}{\left(g\left({x}_{k}-\xi \right)\right)}^{p}\ge \frac{1}{n}\sum _{\begin{array}{c}\hfill k=1\hfill \\ \hfill {\left(g\left({x}_{k}-\xi \right)\right)}^{p}\ge \epsilon \hfill \end{array}}^{n}{\left(g\left({x}_{k}-\xi \right)\right)}^{p}\\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\ge \frac{{\epsilon }^{p}}{n}\left|{K}_{\epsilon }\right|,\end{array}$

as n → ∞. That is, ${\text{lim}}_{n\to \infty }\frac{1}{n}\left|{K}_{\epsilon }\right|=0$ and so δ(K ε ) = 0, where K ε := {k ≤ n : (g(x k - ξ)) p ≥ ε}. Hence x = (x k ) is statistically convergent to ξ in (X, g).

(b) Suppose that x = (x k ) is bounded and statistically convergent to ξ in (X, g). Then for ε > 0, we have δ(K ε ) = 0. Since x is bounded, there exists M > 0 such that g(x k - ξ) ≤ M (k = 1, 2, ...). We have
$\frac{1}{n}\sum _{k=1}^{n}{\left(g\left({x}_{k}-\xi \right)\right)}^{p}=\frac{1}{n}\sum _{\begin{array}{c}\hfill k=1\hfill \\ \hfill k\notin {K}_{\epsilon }\hfill \end{array}}^{n}{\left(g\left({x}_{k}-\xi \right)\right)}^{p}+\frac{1}{n}\sum _{\begin{array}{c}\hfill k=1\hfill \\ \hfill k\in {K}_{\epsilon }\hfill \end{array}}^{n}{\left(g\left({x}_{k}-\xi \right)\right)}^{p}={S}_{1}\left(n\right)+{S}_{2}\left(n\right),$
where
${S}_{1}\left(n\right)=\frac{1}{n}\sum _{\begin{array}{c}\hfill k=1\hfill \\ \hfill k\notin {K}_{\epsilon }\hfill \end{array}}^{n}{\left(g\left({x}_{k}-\xi \right)\right)}^{p}\mathsf{\text{and}}\phantom{\rule{2.77695pt}{0ex}}{S}_{2}\left(n\right)=\frac{1}{n}\sum _{\begin{array}{c}\hfill k=1\hfill \\ \hfill k\in {K}_{\epsilon }\hfill \end{array}}^{n}{\left(g\left({x}_{k}-\xi \right)\right)}^{p}.$
Now if k ∉ K ε , i.e., g(x k - ξ) < ε, then S1(n) < ε p . For k ∈ K ε , we have
${S}_{2}\left(n\right)\le {\left(\underset{k}{\text{sup}}\phantom{\rule{0.2em}{0ex}}g\left({x}_{k}-\xi \right)\right)}^{p}\left(\left|{K}_{\epsilon }\right|/n\right)\le {M}^{p}\left|{K}_{\epsilon }\right|/n\to 0,$

as n → ∞, since δ(K ε ) = 0. Hence x k ξ[C1, g] p .

This completes the proof of the theorem.

Recall that a modulus f is a function from [0, ∞) to [0, ∞) such that (i) f(x) = 0 if and only if x = 0, (ii) f(x + y) ≤ f(x) + f(y) for all x, y ≥ 0, (iii) f is increasing, and (iv) f is continuous from the right at 0.

Now we define the following:

Definition 3.2. Let f be a modulus. We say that a sequence x = (x k ) is strongly p-Cesàro summable with respect to f to the limit ξ in (X, g) if
$\underset{n}{\text{lim}}\frac{1}{n}\sum _{j=1}^{n}f\left({\left(g\left({x}_{j}-\xi \right)\right)}^{p}\right)=0,$

(0 < p < ∞). In this case we write x k → ξ (w(f, g, p)).
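A small numerical sketch (ours; the bounded modulus f(t) = t/(1 + t) is a standard example, not taken from this article) of the means in Definition 3.2 for the example sequence of Section 2, with ξ = 0 and p = 1:

```python
import math

def f(t):
    """A bounded modulus: f(0) = 0, f subadditive, increasing, continuous at 0."""
    return t / (1 + t)

def g_xk(k):
    # g(x_j - 0) for the example of Section 2
    return k ** (1.0 / k) if math.isqrt(k) ** 2 == k else 0.0

def w_mean(n, p):
    """(1/n) * sum_{j<=n} f(g(x_j - xi)**p) with xi = 0."""
    return sum(f(g_xk(j) ** p) for j in range(1, n + 1)) / n

# f is bounded by 1, so the means are controlled by the density of the squares
for n in (100, 10_000):
    print(n, w_mean(n, p=1))
```

The boundedness of this particular f is exactly the hypothesis that makes Theorem 3.2(b) bite: a bounded modulus lets the means be dominated by the density of the exceptional set.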

It is easy to prove the following:

Theorem 3.2. (a) Let f be any modulus and x k → ξ (w(f, g, p)). Then x = (x k ) is statistically convergent to ξ in (X, g).

(b) S g = w(f, g, p) if and only if f is bounded.

## Declarations

### Acknowledgements

The authors would like to thank the Deanship of Scientific Research at King Abdulaziz University for its financial support under a grant with number 156/130/1431.

## Authors’ Affiliations

(1)
Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah, 21589, Saudi Arabia

## References
