
An approximation to the subfractional Brownian sheet using martingale differences

Journal of Inequalities and Applications 2015, 2015:102

https://doi.org/10.1186/s13660-015-0625-4

• Accepted: 7 March 2015

Abstract

In this paper, we obtain an approximation in law of the subfractional Brownian sheet using martingale differences.

Keywords

• subfractional Brownian sheet
• martingale differences
• weak convergence

MSC

• 60G15
• 60G18
• 60F05

1 Introduction

As an extension of Brownian motion, Bojdecki et al.  introduced and studied the subfractional Brownian motion, a class of self-similar Gaussian processes preserving many properties of the fractional Brownian motion (self-similarity, long-range dependence, Hölder paths). However, in comparison with fractional Brownian motion, the subfractional Brownian motion has non-stationary increments. The subfractional Brownian motion arises from occupation time fluctuations of branching particle systems with a Poisson initial condition, and it also appeared independently in a different context in Dzhaparidze and Van Zanten .

Recall that the subfractional Brownian motion $$S^{H}$$ with index $$H\in(0,1)$$ is a mean zero Gaussian process $$S^{H}=\{S_{t}^{H}, t\geq0\}$$ with $$S^{H}_{0}=0$$ and the covariance
$$R_{H}(t,s)\equiv E \bigl[S^{H}_{t}S^{H}_{s} \bigr]=s^{2H}+t^{2H}- \frac{1}{2} \bigl[ (s+t)^{2H}+|t-s|^{2H} \bigr]$$
(1.1)
for all $$s,t\geq0$$. For $$H=1/2$$, $$S^{H}$$ coincides with the standard Brownian motion W. $$S^{H}$$ is neither a semimartingale nor a Markov process unless $$H=1/2$$, so many of the powerful techniques from stochastic analysis are not available when dealing with $$S^{H}$$. By Dzhaparidze and Van Zanten , Tudor , the subfractional Brownian motion has the following integral representation with respect to the standard Brownian motion W:
$$S^{H}_{t}=\int^{t}_{0}K_{H}(t,s)\,dW_{s},\quad t\geq0,$$
(1.2)
where
$$K_{H}(t,s)=\frac{c_{H}\sqrt{\pi}}{2^{H-1}\Gamma(H-\frac{1}{2})}s^{\frac{3}{2}-H}\int^{t}_{s}\bigl(u^{2}-s^{2}\bigr)^{H-\frac{3}{2}}\,du\,1_{(0,t)}(s),$$
(1.3)
with
$$c^{2}_{H}=\frac{\Gamma(1+2H)\sin\pi H}{\pi}, \quad H>\frac{1}{2}.$$
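As a quick numerical sanity check of (1.1) (illustrative only, not part of the argument; the function name is ours), one can verify that $$R_{H}$$ reduces to the Brownian-motion covariance $$\min(s,t)$$ at $$H=\frac{1}{2}$$ and that the variance is $$R_{H}(t,t)=(2-2^{2H-1})t^{2H}$$:

```python
def sub_fbm_cov(H, t, s):
    """Covariance R_H(t,s) of subfractional Brownian motion, eq. (1.1)."""
    return s**(2*H) + t**(2*H) - 0.5*((s + t)**(2*H) + abs(t - s)**(2*H))

# For H = 1/2 the covariance reduces to min(s, t), the Brownian-motion covariance.
for t, s in [(0.3, 0.7), (1.0, 2.0), (0.5, 0.5)]:
    assert abs(sub_fbm_cov(0.5, t, s) - min(t, s)) < 1e-9

# Variance on the diagonal: R_H(t,t) = (2 - 2^{2H-1}) t^{2H}.
H, t = 0.7, 1.5
assert abs(sub_fbm_cov(H, t, t) - (2 - 2**(2*H - 1)) * t**(2*H)) < 1e-9
```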

Convergence to subfractional Brownian motion has been studied since the work of Delgado and Jolis . Recently, many authors have obtained new results on approximations of subfractional Brownian motion. For example, Bardina and Bascompte  constructed two independent Gaussian processes from a single Poisson process; as an application of this result, they obtained families of processes that converge in law to subfractional Brownian motion. Garzón et al.  proved a strong uniform approximation with a rate of convergence for subfractional Brownian motion by means of transport processes. Harnett and Nualart  proved weak convergence of the Stratonovich integral with respect to a class of Gaussian processes which includes subfractional Brownian motion with $$H=\frac{1}{6}$$. Shen and Yan  obtained an approximation theorem for subfractional Brownian motion using martingale differences (Nieminen , Chen et al. , and Wang et al.  obtained analogous approximation theorems for fractional Brownian motion, the Rosenblatt process, and the fractional Brownian sheet, respectively). Dai  showed a result of approximation in law to subfractional Brownian motion in the Skorohod topology. The construction of these approximations is based on a sequence of i.i.d. random variables.

There are two possible multiparameter extensions of the subfractional Brownian motion. The first one is Lévy’s subfractional Brownian random field, and the second one is the anisotropic subfractional Brownian random field. In this paper, we will consider the second one in the two-parameter case and call it a subfractional Brownian sheet. This is a centered Gaussian process
$$S^{\alpha,\beta}=\bigl\{ S^{\alpha,\beta}(t,s), (t,s)\in\mathbb{R}^{2}_{+} \bigr\}$$
such that
\begin{aligned} E\bigl[S^{\alpha,\beta}(t,s)S^{\alpha,\beta}(u,v)\bigr] =&\biggl\{ t^{2\alpha}+u^{2\alpha}- \frac{1}{2} \bigl[ (t+u)^{2\alpha}+|t-u|^{2\alpha} \bigr]\biggr\} \\ &{} \times\biggl\{ s^{2\beta}+v^{2\beta}- \frac{1}{2} \bigl[ (s+v)^{2\beta}+|s-v|^{2\beta} \bigr]\biggr\} , \end{aligned}
where $$\alpha,\beta\in(0,1)$$. For $$\alpha=\beta=\frac{1}{2}$$, $$S^{\alpha ,\beta}$$ coincides with the standard Brownian sheet. Recall that a subfractional Brownian sheet with parameters $$\alpha>\frac{1}{2}$$, $$\beta>\frac{1}{2}$$ admits an integral representation of the form, for $$(t,s)\in[0,T]\times[0,S]$$,
$$S^{\alpha,\beta}(t,s) =\int^{t}_{0} \int^{s}_{0}K_{\alpha}(t,x_{1})K_{\beta}(s,x_{2})B(dx_{1},dx_{2}),$$
where B is a standard Brownian sheet and $$K_{\cdot}$$ is the deterministic kernel given by (1.3).
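Since the sheet covariance factorizes into a product of two one-parameter subfractional covariances, a short numerical illustration is easy to write down (illustrative only; the function names are ours). At $$\alpha=\beta=\frac{1}{2}$$ it recovers the Brownian-sheet covariance $$\min(t,u)\min(s,v)$$:

```python
def r(H, a, b):
    """One-parameter subfractional covariance, eq. (1.1)."""
    return a**(2*H) + b**(2*H) - 0.5*((a + b)**(2*H) + abs(a - b)**(2*H))

def sheet_cov(alpha, beta, t, s, u, v):
    """Covariance of the subfractional Brownian sheet: a product of two
    one-parameter subfractional covariances with indices alpha and beta."""
    return r(alpha, t, u) * r(beta, s, v)

# alpha = beta = 1/2 recovers the Brownian-sheet covariance min(t,u)*min(s,v).
assert abs(sheet_cov(0.5, 0.5, 1.0, 2.0, 1.5, 0.4) - min(1.0, 1.5)*min(2.0, 0.4)) < 1e-9
```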

To the best of our knowledge, no work has been done on weak or strong convergence to the subfractional Brownian sheet. Motivated by the aforementioned works, as a first attempt, in this paper we will prove a weak convergence result to the subfractional Brownian sheet for processes constructed from a martingale differences sequence in the Skorohod space. It can be seen as an extension of the previous results of Shen and Yan  to the two-parameter case.

The rest of this paper is organized as follows. Section 2 contains some preliminaries on the stochastic process in the plane. In Section 3, we prove the main weak convergence Theorem 3.1 using the criterion given by Bickel and Wichura  to check the tightness of the approximated processes and the convergence of the finite-dimensional distributions.

2 Preliminaries

In this section, we will use the definitions and notations introduced in the basic work of Cairoli and Walsh  on stochastic calculus in the plane. Let $$(\Omega,\mathcal{F},P)$$ be a complete probability space and let $$\{\mathcal{F}_{t,s}; (t,s)\in[0,T]\times[0,S]\}$$ be a family of sub-σ-fields of $$\mathcal{F}$$ such that:
1. (i)

$$\mathcal{F}_{t,s}\subseteq\mathcal{F}_{t',s'}$$ for any $$t\leq t'$$, $$s\leq s'$$;

2. (ii)

$$\mathcal{F}_{0,0}$$ contains all P-null sets of $$\mathcal{F}$$;

3. (iii)

for each $$z\in[0,T]\times[0,S]$$, $$\mathcal{F}_{z}=\bigcap_{z< z'}\mathcal{F}_{z'}$$, where $$z=(t,s)< z'=(t',s')$$ denotes the partial order on $$[0,T]\times[0,S]$$, meaning that $$t< t'$$ and $$s< s'$$.

Given a real-valued process X defined on $$[0,T]\times[0,S]$$ and $$(t,s)<(t',s')$$, we denote by $$\bigtriangleup_{t,s}X(t',s')$$ the increment of X over the rectangle $$((t,s),(t',s')]$$, that is,
$$\bigtriangleup_{t,s}X\bigl(t',s'\bigr)=X \bigl(t',s'\bigr)-X\bigl(t,s'\bigr)-X \bigl(t',s\bigr)+X(t,s).$$

Definition 2.1

An integrable process $$X=\{X(t,s),(t,s)\in[0,T]\times[0,S]\}$$ is called a strong martingale if:

• X vanishes on the axes;

• $$E[\bigtriangleup_{t,s}X(t',s')|\mathcal{G}_{t,s}]=0$$ for any $$(t,s)<(t',s')$$, where $$\mathcal{G}_{t,s}:=\mathcal{F}_{t,S}\vee\mathcal{F}_{T,s}$$.

Denote $$\mathcal{G}^{(n)}_{i,j}:=\mathcal{F}^{(n)}_{i,n}\vee\mathcal{F}^{(n)}_{n,j}$$, where $$\mathcal{F}^{(n)}_{i,n}$$ denotes the σ-field generated by $$\xi^{(n)}_{i,n}$$ and $$\mathcal{F}^{(n)}_{n,j}$$ denotes the σ-field generated by $$\xi^{(n)}_{n,j}$$ for $$i,j=1,2,\ldots,n$$ and $$n\geq1$$. Let $$\{\xi^{(n)}\}_{n\geq1}:=\{\xi^{(n)}_{i,j},\mathcal{G}^{(n)}_{i,j}\}_{n\geq1}$$, $$i,j=1,2,\ldots,n$$, be a sequence such that
$$E\bigl[\xi^{(n)}_{i+1,j+1}|\mathcal{G}^{(n)}_{i,j} \bigr]=0$$
for all $$n\geq1$$, and $$\xi^{(n)}_{k,l}\in\mathcal{F}^{(n)}_{i,n}\wedge\mathcal{F}^{(n)}_{n,j}$$ for $$i>k$$ or $$j>l$$. Then we call it a martingale differences sequence. Obviously, for a martingale differences sequence $$\xi^{(n)}$$,
$$X_{n}(i,j)=\sum^{i}_{k=1}\sum ^{j}_{l=1}\xi^{(n)}_{k,l}$$
is a strong martingale.
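The strong-martingale property of these partial sums rests on the elementary identity that a rectangular increment of $$X_{n}$$ equals the sum of the $$\xi^{(n)}_{k,l}$$ over the corresponding block, so it is conditionally centered. A small numerical illustration of that identity (arbitrary Gaussian entries, purely illustrative):

```python
import random

# For the partial-sum field X(i,j) = sum_{k<=i} sum_{l<=j} xi_{k,l}, the
# rectangular increment X(i',j') - X(i,j') - X(i',j) + X(i,j) is exactly the
# sum of xi over the block (i,i'] x (j,j'].
random.seed(2)
n = 8
xi = [[random.gauss(0.0, 1.0) for _ in range(n + 1)] for _ in range(n + 1)]

def X(i, j):
    """Partial sum over the discrete rectangle [1,i] x [1,j]."""
    return sum(xi[k][l] for k in range(1, i + 1) for l in range(1, j + 1))

i, j, ip, jp = 2, 3, 6, 7
inc = X(ip, jp) - X(i, jp) - X(ip, j) + X(i, j)
block = sum(xi[k][l] for k in range(i + 1, ip + 1) for l in range(j + 1, jp + 1))
assert abs(inc - block) < 1e-9

# X vanishes on the axes, as required of a strong martingale.
assert X(0, 5) == 0 and X(5, 0) == 0
```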
Let Λ be the group of all mappings $$\lambda: [0,T]\times[0,S]\rightarrow[0,T]\times[0,S]$$ of the form $$\lambda(t,s)=(\lambda_{1}(t),\lambda_{2}(s))$$, where each $$\lambda_{i}$$ is continuous and strictly increasing. Denote by $$\mathcal{D}=\mathcal{D}([0,T]\times[0,S])$$ the Skorohod space of functions on $$[0,T]\times[0,S]$$ that are continuous from above with limits from below, and equip $$\mathcal{D}$$ with the metric
$$d(x,y):=\inf\bigl\{ \max\bigl(\|x-y\lambda\|,\|\lambda\|\bigr) :\lambda\in\Lambda\bigr\} ,$$
where $$\|x-y\lambda\|:=\operatorname{sup}\{|x(t,s)-y(\lambda(t,s))|:(t,s)\in[0,T]\times [0,S]\}$$ and
$$\|\lambda\|=\operatorname{sup}\bigl\{ \bigl|\lambda(t,s)-(t,s)\bigr| :(t,s)\in[0,T]\times[0,S]\bigr\} .$$
Under this metric, $$\mathcal{D}$$ is a separable and complete metric space.
For $$n\geq1$$ and $$(t,s)\in[0,T]\times[0,S]$$, define
$$B_{n}(t,s)=\sum^{[nt]}_{i=1}\sum ^{[ns]}_{j=1}\xi^{(n)}_{i,j},$$
and
\begin{aligned} S_{n}(t,s) :=&\int ^{t}_{0}\int^{s}_{0}K^{(n)}_{\alpha}(t,x_{1})K^{(n)}_{\beta}(s,x_{2})B_{n}(dx_{1},dx_{2}) \\ =&n^{2}\sum^{[nt]}_{i=1}\sum ^{[ns]}_{j=1}\xi^{(n)}_{i,j} \int^{\frac{i}{n}}_{\frac{i-1}{n}}\int^{\frac{j}{n}}_{\frac {j-1}{n}}K_{\alpha}\biggl(\frac{[nt]}{n},x_{1}\biggr) \cdot K_{\beta}\biggl( \frac{[ns]}{n},x_{2}\biggr)\,dx_{1}\,dx_{2}, \end{aligned}
(2.1)
where the sequence $$K^{(n)}_{H}(t,v)$$, an approximation of $$K_{H}(t,v)$$, is given by
$$K^{(n)}_{H}(t,x)=n\int^{x}_{x-\frac{1}{n}}K_{H} \biggl(\frac{[nt]}{n},y\biggr)\,dy,\quad n=1,2,\ldots$$
and $$[x]$$ denotes the greatest integer not exceeding x.
It is well known that if the martingale differences sequence $$\xi ^{(n)}$$ satisfies the following condition:
$$\sum^{[nt]}_{i=1}\sum ^{[ns]}_{j=1}\bigl(\xi^{(n)}_{i,j} \bigr)^{2}\rightarrow t\cdot s$$
in the sense of $$L^{1}$$, then $$B_{n}(t,s)$$ converges weakly to the Brownian sheet $$B(t,s)$$ in $$\mathcal{D}$$ as n tends to infinity (see, for instance, Morkvenas ).
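A simple concrete array satisfying this $$L^{1}$$ condition is $$\xi^{(n)}_{i,j}=\varepsilon_{i,j}/n$$ with i.i.d. Rademacher signs $$\varepsilon_{i,j}$$ (an illustrative choice, not the only admissible one). The following sketch checks the normalization exactly and estimates $$\operatorname{Var}B_{n}(t,s)\approx ts$$ by Monte Carlo:

```python
import random

# Martingale-differences array xi_{i,j} = eps_{i,j}/n with Rademacher signs.
random.seed(0)
n, t, s = 100, 0.8, 0.6
I, J = int(n*t), int(n*s)          # [nt] and [ns]

xi = [[random.choice((-1.0, 1.0))/n for _ in range(J)] for _ in range(I)]

# Normalization: sum of xi^2 over i<=[nt], j<=[ns] equals [nt][ns]/n^2,
# which converges to t*s (here it equals 0.48 = t*s exactly).
sum_sq = sum(x*x for row in xi for x in row)
assert abs(sum_sq - t*s) < 1e-9

# Monte Carlo: Var B_n(t,s) = [nt][ns]/n^2 should be close to t*s.
reps = 200
vals = []
for _ in range(reps):
    g = [[random.choice((-1.0, 1.0))/n for _ in range(J)] for _ in range(I)]
    vals.append(sum(x for row in g for x in row))
var = sum(v*v for v in vals)/reps
assert abs(var - t*s) < 0.2
```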

3 Main results

In this section, we will extend the result of Morkvenas  to the subfractional Brownian sheet with $$\alpha,\beta>\frac{1}{2}$$ in the following theorem.

Theorem 3.1

Let $$\alpha>\frac{1}{2}$$, $$\beta>\frac{1}{2}$$, and let $$\{\xi^{(n)}_{i,j}, i,j=1,2,\ldots,n\}$$ be a square-integrable martingale differences sequence such that for all $$1\leq i,j\leq n$$
$$\lim_{n\rightarrow\infty}n\xi^{(n)}_{i,j}=1\quad \textit{a.s.}$$
(3.1)
and
$$\max_{1\leq i,j\leq n}\bigl|\xi^{(n)}_{i,j}\bigr|\leq \frac{C}{n}\quad \textit{a.s.}$$
(3.2)
for some $$C>1$$. Then $$\{S_{n}\}$$ defined in (2.1) converges weakly to the subfractional Brownian sheet $$S^{\alpha,\beta}$$ in the Skorohod space $$\mathcal{D}$$ as n tends to infinity.

In order to prove Theorem 3.1, we have to check the tightness of the processes $$S_{n}$$; the following lemmas will be needed.

Lemma 3.1

Let $$S_{n}(t,s)$$ be the family of processes defined by (2.1). Then for any $$(t,s)<(t',s')$$, there exists a constant C such that
$$\sup_{n}E\bigl[\bigl(\bigtriangleup_{t,s}S_{n} \bigl(t',s'\bigr)\bigr)^{2}\bigr]\leq C \bigl(t'-t\bigr)^{2\alpha }\bigl(s'-s \bigr)^{2\beta}.$$

Proof

Notice that
\begin{aligned} \bigtriangleup_{t,s}S_{n} \bigl(t',s'\bigr) =&\int^{t'}_{0} \int^{s'}_{0}\bigl[K^{(n)}_{\alpha}\bigl(t',x_{1}\bigr)-K^{(n)}_{\alpha}(t,x_{1}) \bigr] \\ &{} \times\bigl[K^{(n)}_{\beta}\bigl(s',x_{2} \bigr)-K^{(n)}_{\beta}(s,x_{2})\bigr]B_{n}(dx_{1},dx_{2}) \\ =& \sum^{[nt']}_{i=1}\sum ^{[ns']}_{j=1}n^{2}\int^{\frac{i}{n}}_{\frac{i-1}{n}} \int^{\frac{j}{n}}_{\frac{j-1}{n}}\biggl[K_{\alpha}\biggl( \frac {[nt']}{n},x_{1}\biggr)-K_{\alpha}\biggl( \frac{[nt]}{n},x_{1}\biggr)\biggr] \\ &{} \times\biggl[K_{\beta}\biggl(\frac{[ns']}{n},x_{2}\biggr)- K_{\beta}\biggl(\frac{[ns]}{n},x_{2}\biggr)\biggr] \xi^{(n)}_{i,j}\,dx_{1}\,dx_{2}. \end{aligned}
By (3.2) and the orthogonality $$E\xi^{(n)}_{i,j}\xi^{(n)}_{k,l}=0$$ for $$i\neq k$$ or $$j\neq l$$ [in fact, we may suppose $$i>k$$; then $$E\xi^{(n)}_{i,j}\xi^{(n)}_{k,l}=E(\xi^{(n)}_{k,l}E(\xi ^{(n)}_{i,j}|\mathcal{G}^{(n)}_{i-1,j-1}))=0$$, since $$\xi^{(n)}_{k,l}\in \mathcal{F}^{(n)}_{i,n}\wedge\mathcal{F}^{(n)}_{n,j}$$], we have
\begin{aligned} E\bigl[\bigtriangleup_{t,s}S_{n}\bigl(t',s' \bigr)\bigr]^{2} =& E \Biggl[ \sum^{[nt']}_{i=1} \sum^{[ns']}_{j=1}n^{2}\int ^{\frac{i}{n}}_{\frac{i-1}{n}} \int^{\frac{j}{n}}_{\frac{j-1}{n}} \biggl[K_{\alpha}\biggl(\frac {[nt']}{n},x_{1} \biggr)-K_{\alpha}\biggl(\frac{[nt]}{n},x_{1}\biggr)\biggr] \\ &{} \times\biggl[K_{\beta}\biggl(\frac{[ns']}{n},x_{2}\biggr)- K_{\beta}\biggl(\frac {[ns]}{n},x_{2}\biggr)\biggr] \xi^{(n)}_{i,j}\,dx_{1}\,dx_{2} \Biggr]^{2} \\ \leq& C\sum^{[nt']}_{i=1}n \biggl(\int ^{\frac{i}{n}}_{\frac {i-1}{n}}\biggl[K_{\alpha}\biggl( \frac{[nt']}{n},x_{1}\biggr)-K_{\alpha}\biggl( \frac {[nt]}{n},x_{1}\biggr)\biggr]\,dx_{1} \biggr)^{2} \\ &{} \times \sum^{[ns']}_{j=1}n \biggl(\int ^{\frac{j}{n}}_{\frac{j-1}{n}} \biggl[K_{\beta}\biggl( \frac{[ns']}{n},x_{2}\biggr)- K_{\beta}\biggl( \frac{[ns]}{n},x_{2}\biggr)\biggr]\,dx_{2} \biggr)^{2} \\ \leq& C\int^{t'}_{0}\biggl[K_{\alpha}\biggl(\frac{[nt']}{n},x_{1}\biggr)-K_{\alpha}\biggl( \frac {[nt]}{n},x_{1}\biggr)\biggr]^{2}\,dx_{1} \\ &{} \times\int^{s'}_{0}\biggl[K_{\beta}\biggl(\frac{[ns']}{n},x_{2}\biggr)- K_{\beta}\biggl( \frac{[ns]}{n},x_{2}\biggr)\biggr]^{2}\,dx_{2} \\ =&C E \bigl(S^{\alpha}_{\frac{[nt']}{n}}-S^{\alpha}_{\frac{[nt]}{n}} \bigr)^{2}E \bigl(S^{\beta}_{\frac{[ns']}{n}}-S^{\beta}_{\frac{[ns]}{n}} \bigr)^{2} \\ \leq& C\biggl\vert \frac{[nt']-[nt]}{n}\biggr\vert ^{2\alpha}\biggl\vert \frac {[ns']-[ns]}{n}\biggr\vert ^{2\beta}. \end{aligned}
For any $$0< t<t'$$ and $$\frac{1}{2}<\alpha<1$$, we see that if $$nt'-nt\geq1$$, then $$\vert \frac{[nt']-[nt]}{n}\vert ^{2\alpha}\leq |2(t'-t)|^{2\alpha}$$. Otherwise, if $$nt'-nt<1$$, then $$t'$$ and t belong to the same subinterval $$[\frac{m}{n},\frac{m+1}{n})$$ for some integer m, which implies $$\vert \frac{[nt']-[nt]}{n}\vert ^{2\alpha}=0$$. So we get
$$\biggl\vert \frac{[nt']-[nt]}{n}\biggr\vert ^{2\alpha}\leq \bigl|2 \bigl(t'-t\bigr)\bigr|^{2\alpha}.$$
The bound on the second factor follows from a similar argument. This completes the proof. □

Lemma 3.2

Let $$S_{n}(t,s)$$ be the family of processes defined by (2.1). Then for any $$(t,s)<(t',s')$$, there exists a constant C such that
$$\sup_{n}E\bigl[\bigl(\bigtriangleup_{t,s}S_{n} \bigl(t',s'\bigr)\bigr)^{4}\bigr]\leq C \bigl(t'-t\bigr)^{4\alpha }\bigl(s'-s \bigr)^{4\beta}.$$

Proof

We have
\begin{aligned} E\bigl[\bigtriangleup_{t,s}S_{n}\bigl(t',s' \bigr)\bigr]^{4} =&E \Biggl[ \sum^{[nt']}_{i=1} \sum^{[ns']}_{j=1}n^{2}\int ^{\frac {i}{n}}_{\frac{i-1}{n}} \int^{\frac{j}{n}}_{\frac{j-1}{n}} \biggl[K_{\alpha}\biggl(\frac {[nt']}{n},x_{1} \biggr)-K_{\alpha}\biggl(\frac{[nt]}{n},x_{1}\biggr)\biggr] \\ &{} \times\biggl[K_{\beta}\biggl(\frac{[ns']}{n},x_{2}\biggr)- K_{\beta}\biggl(\frac {[ns]}{n},x_{2}\biggr)\biggr] \xi^{(n)}_{i,j}\,dx_{1}\,dx_{2} \Biggr]^{4} \\ \leq& C\sum^{[nt']}_{i=1}\sum ^{[ns']}_{j=1}\sum^{[nt']}_{k=1} \sum^{[ns']}_{l=1}n^{4} \biggl(\int ^{\frac{i}{n}}_{\frac{i-1}{n}}\biggl(K_{\alpha}\biggl( \frac {[nt']}{n},x_{1}\biggr)-K_{\alpha}\biggl( \frac{[nt]}{n},x_{1}\biggr)\biggr)\,dx_{1} \biggr)^{2} \\ &{} \times \biggl(\int^{\frac{j}{n}}_{\frac{j-1}{n}} \biggl(K_{\beta}\biggl(\frac{[ns']}{n},x_{2} \biggr)-K_{\beta}\biggl(\frac{[ns]}{n},x_{2} \biggr)\biggr)\,dx_{2} \biggr)^{2} \\ &{} \times \biggl(\int^{\frac{k}{n}}_{\frac{k-1}{n}} \biggl(K_{\alpha}\biggl(\frac{[nt']}{n},x_{1} \biggr)-K_{\alpha}\biggl(\frac{[nt]}{n},x_{1} \biggr)\biggr)\,dx_{1} \biggr)^{2} \\ &{}\times \biggl(\int^{\frac{l}{n}}_{\frac{l-1}{n}} \biggl(K_{\beta}\biggl(\frac{[ns']}{n},x_{2} \biggr)-K_{\beta}\biggl(\frac{[ns]}{n},x_{2} \biggr)\biggr)\,dx_{2} \biggr)^{2} \\ =&C \Biggl(\sum^{[nt']}_{i=1}n \biggl(\int ^{\frac{i}{n}}_{\frac {i-1}{n}}\biggl(K_{\alpha}\biggl( \frac{[nt']}{n},x_{1}\biggr)-K_{\alpha}\biggl( \frac {[nt]}{n},x_{1}\biggr)\biggr)\,dx_{1} \biggr)^{2} \Biggr)^{2} \\ &{} \times \Biggl(\sum^{[ns']}_{j=1}n \biggl( \int^{\frac{j}{n}}_{\frac {j-1}{n}}\biggl(K_{\beta}\biggl( \frac{[ns']}{n},x_{2}\biggr)-K_{\beta}\biggl( \frac {[ns]}{n},x_{2}\biggr)\biggr)\,dx_{2} \biggr)^{2} \Biggr)^{2} \\ \leq& C \Biggl(\sum^{[nt']}_{i=1}\int ^{\frac{i}{n}}_{\frac{i-1}{n}} \biggl(K_{\alpha}\biggl( \frac{[nt']}{n},x_{1}\biggr)-K_{\alpha}\biggl( \frac{[nt]}{n},x_{1}\biggr) \biggr)^{2}\,dx_{1} \Biggr) ^{2} \\ &{}\times \Biggl(\sum^{[ns']}_{j=1}\int ^{\frac{j}{n}}_{\frac{j-1}{n}} \biggl(K_{\beta}\biggl( \frac{[ns']}{n},x_{2}\biggr)-K_{\beta}\biggl( \frac{[ns]}{n},x_{2}\biggr) \biggr)^{2}\,dx_{2} \Biggr) ^{2} \\ \leq& C\biggl\vert \frac{[nt']-[nt]}{n}\biggr\vert ^{4\alpha}\biggl\vert \frac {[ns']-[ns]}{n}\biggr\vert ^{4\beta} \\ \leq& C\bigl(t'-t\bigr)^{4\alpha}\bigl(s'-s \bigr)^{4\beta}. \end{aligned}
□

Lemma 3.3

Let $$\frac{1}{2}<\alpha,\beta<1$$, $$(t_{k},s_{k}),(t_{l},s_{l})\in[0,T]\times[0,S]$$, and $$\{\xi^{(n)}_{i,j},i,j=1,2,\ldots,n\}$$ be a martingale differences sequence satisfying (3.1) and (3.2). Then
\begin{aligned}& n^{4}\sum^{[nT]}_{i=1}\sum ^{[nS]}_{j=1}\int^{\frac{i}{n}}_{\frac{i-1}{n}} \int^{\frac{j}{n}}_{\frac{j-1}{n}}K_{\alpha}\biggl( \frac{[nt_{k}]}{n},x_{1}\biggr)K_{\beta}\biggl( \frac{[ns_{k}]}{n},x_{2}\biggr)\,dx_{2}\,dx_{1} \\& \quad{} \times\int^{\frac{i}{n}}_{\frac{i-1}{n}} \int^{\frac{j}{n}}_{\frac{j-1}{n}} K_{\alpha}\biggl(\frac{[nt_{l}]}{n},x_{1}\biggr)K_{\beta}\biggl(\frac{[ns_{l}]}{n},x_{2}\biggr)\,dx_{2}\,dx_{1} \bigl(\xi ^{(n)}_{i,j}\bigr)^{2} \end{aligned}
converges to
$$\int^{T}_{0}K_{\alpha}(t_{k},x_{1})K_{\alpha}(t_{l},x_{1})\,dx_{1} \int^{S}_{0}K_{\beta}(s_{k},x_{2})K_{\beta}(s_{l},x_{2})\,dx_{2}$$
as n tends to infinity.

Proof

The proof follows from an argument similar to that of Lemma 3.1 in Shen and Yan . □

Proof of Theorem 3.1

Note that the processes $$S_{n}$$ are null on the axes and that the tightness of the laws of the family $$S_{n}$$ follows from Lemma 3.2. Then, in order to prove Theorem 3.1, by the criterion given by Bickel and Wichura  it suffices to show that the family of processes $$S_{n}(t,s)$$ converges, as n tends to infinity, to the subfractional Brownian sheet in the sense of finite-dimensional distributions.

Let $$\alpha_{1},\alpha_{2},\ldots,\alpha_{d}\in\mathbb{R}$$ and $$(t_{1},s_{1}),\ldots,(t_{d},s_{d})\in[0,T]\times[0,S]$$. We want to show that $$X_{n}$$,
$$X_{n}:=\sum^{d}_{m=1} \alpha_{m}S_{n}(t_{m},s_{m}),$$
converges in distribution, as n tends to infinity, to a normal random variable with zero mean and variance
$$E \Biggl(\sum^{d}_{m=1} \alpha_{m} S^{\alpha,\beta}(t_{m},s_{m}) \Biggr)^{2}.$$
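Because the sheet covariance is positive semidefinite, this limiting variance is nonnegative for any coefficients $$\alpha_{m}$$; expanding the square by bilinearity gives a double sum over the covariances. A small numerical expansion of that quadratic form (sample points and coefficients are hypothetical, for illustration only):

```python
def r(H, a, b):
    """One-parameter subfractional covariance, eq. (1.1)."""
    return a**(2*H) + b**(2*H) - 0.5*((a + b)**(2*H) + abs(a - b)**(2*H))

def sheet_cov(al, be, t, s, u, v):
    """Product-form covariance of the subfractional Brownian sheet."""
    return r(al, t, u) * r(be, s, v)

al, be = 0.7, 0.8
pts = [(0.4, 0.9), (1.0, 0.5)]   # hypothetical (t_m, s_m)
coef = [1.0, -2.0]               # hypothetical alpha_m

# E(sum_m alpha_m S(t_m,s_m))^2 = sum_{m,h} alpha_m alpha_h Cov(...).
var = sum(coef[m]*coef[h]*sheet_cov(al, be, *pts[m], *pts[h])
          for m in range(2) for h in range(2))
assert var >= 0.0   # a variance must be nonnegative
```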
Indeed, the zero mean is trivial. Let us write $$X_{n}$$ as
\begin{aligned} X_{n} =&\sum^{[nT]}_{i=1}\sum ^{[nS]}_{j=1}n^{2} \xi^{(n)}_{i,j} \sum^{d}_{m=1} \alpha_{m}\int^{\frac{i}{n}}_{\frac{i-1}{n}} K_{\alpha}\biggl(\frac {[nt_{m}]}{n},x_{1}\biggr)\,dx_{1} \int ^{\frac{j}{n}}_{\frac{j-1}{n}}K_{\beta}\biggl( \frac{[ns_{m}]}{n},x_{2}\biggr)\,dx_{2} \\ :=&\sum^{[nT]}_{i=1}\sum ^{[nS]}_{j=1}X^{(n)}_{i,j}. \end{aligned}
Then
\begin{aligned} \sum^{[nT]}_{i=1} \sum^{[nS]}_{j=1}{\bigl(X^{(n)}_{i,j} \bigr)}^{2} =&\sum^{d}_{m,h=1} \alpha_{m}\alpha_{h}n^{4}\sum ^{[nT]}_{i=1}\sum^{[nS]}_{j=1} \int^{\frac{i}{n}}_{\frac{i-1}{n}}\int^{\frac{j}{n}}_{\frac{j-1}{n}} K_{\alpha}\biggl(\frac{[nt_{m}]}{n},x_{1}\biggr)K_{\beta}\biggl(\frac{[ns_{m}]}{n},x_{2}\biggr)\,dx_{2}\,dx_{1} \\ &{} \times \int^{\frac{i}{n}}_{\frac{i-1}{n}}\int^{\frac{j}{n}}_{\frac{j-1}{n}} K_{\alpha}\biggl(\frac{[nt_{h}]}{n},x_{1}\biggr)K_{\beta}\biggl(\frac{[ns_{h}]}{n},x_{2}\biggr)\,dx_{2}\,dx_{1} \bigl(\xi ^{(n)}_{i,j}\bigr)^{2}. \end{aligned}
By Lemma 3.3, the above equation converges to
\begin{aligned}& \sum^{d}_{m,h=1} \alpha_{m}\alpha_{h}\int^{T}_{0}K_{\alpha}(t_{m},x_{1})K_{\alpha}(t_{h},x_{1})\,dx_{1}\int^{S}_{0}K_{\beta}(s_{m},x_{2})K_{\beta}(s_{h},x_{2})\,dx_{2} \\& \quad=E \Biggl(\sum^{d}_{m=1} \alpha_{m}S^{\alpha,\beta}(t_{m},s_{m}) \Biggr)^{2}, \end{aligned}
since $$K_{\alpha}(t,s)=0$$ for $$s\geq t$$. Therefore, in order to end the proof, we just need to prove that the following Lindeberg condition holds: for any $$\varepsilon>0$$,
$$\sum^{[nT]}_{i=1}\sum ^{[nS]}_{j=1}E\bigl[\bigl(X^{(n)}_{i,j} \bigr)^{2} I_{(|X^{(n)}_{i,j}|>\varepsilon)}|\mathcal{G}^{n}_{i-1,j-1} \bigr]\stackrel{\rm P}{\rightarrow}0.$$
Consider the set
$$\bigl\{ \bigl|X^{(n)}_{i,j}\bigr|>\varepsilon\bigr\} =\bigl\{ \bigl(X^{(n)}_{i,j}\bigr)^{2}>\varepsilon^{2} \bigr\} .$$
We obtain an upper bound for $$X^{(n)}_{i,j}$$ by noticing that $$K_{H}(t,u)$$ is increasing with respect to t and decreasing with respect to u:
\begin{aligned} \bigl(X^{(n)}_{i,j}\bigr)^{2} =& \Biggl(n^{2}\xi^{(n)}_{i,j} \sum ^{d}_{m=1}\alpha_{m}\int ^{\frac{i}{n}}_{\frac{i-1}{n}} \int^{\frac{j}{n}}_{\frac{j-1}{n}} K_{\alpha}\biggl(\frac{[nt_{m}]}{n},x_{1}\biggr)K_{\beta}\biggl(\frac{[ns_{m}]}{n},x_{2}\biggr)\,dx_{1}\,dx_{2} \Biggr)^{2} \\ \leq& n^{4}\bigl(\xi^{(n)}_{i,j}\bigr)^{2} \Biggl(\sum^{d}_{m=1}|\alpha_{m}| \Biggr)^{2} \biggl[\int^{\frac{i}{n}}_{\frac{i-1}{n}}K_{\alpha}(T,x_{1})\,dx_{1} \biggr]^{2} \cdot \biggl[\int^{\frac{j}{n}}_{\frac{j-1}{n}}K_{\beta}(S,x_{2})\,dx_{2} \biggr]^{2} \\ \leq& n^{2}\bigl(\xi^{(n)}_{i,j}\bigr)^{2} A\int^{\frac{i}{n}}_{\frac{i-1}{n}} K^{2}_{\alpha}(T,x_{1})\,dx_{1} \cdot\int^{\frac{j}{n}}_{\frac{j-1}{n}}K^{2}_{\beta}(S,x_{2})\,dx_{2} \\ \leq& n^{2}\bigl(\xi^{(n)}_{i,j}\bigr)^{2}A \delta^{n}, \end{aligned}
where $$A:=(\sum^{d}_{m=1}|\alpha_{m}|)^{2}$$, and
$$\delta^{n}:=\int^{\frac{1}{n}}_{0}K^{2}_{\alpha}(T,x_{1})\,dx_{1} \cdot\int^{\frac{1}{n}}_{0}K^{2}_{\beta}(S,x_{2})\,dx_{2}.$$
Thus, we get
$$\bigl\{ \bigl|X^{(n)}_{i,j}\bigr|>\varepsilon\bigr\} \subset \bigl\{ n^{2}\bigl(\xi^{(n)}_{i,j}\bigr)^{2}A \delta ^{n}>\varepsilon^{2}\bigr\} .$$
(3.3)
Using the inclusion (3.3) and the Cauchy-Schwarz inequality we get, a.s.,
\begin{aligned} E \bigl[\bigl(X^{(n)}_{i,j}\bigr)^{2}I_{(|X^{(n)}_{i,j}|>\varepsilon)}| \mathcal {G}^{n}_{i-1,j-1} \bigr] \leq& E \bigl[n^{2}\bigl(\xi^{(n)}_{i,j} \bigr)^{2}A\delta^{n}I_{(n^{2}(\xi ^{(n)}_{i,j})^{2}A\delta^{n}>\varepsilon^{2})}|\mathcal{G}^{n}_{i-1,j-1} \bigr] \\ \leq& CA\delta^{n}E \bigl[I_{(n^{2}(\xi^{(n)}_{i,j})^{2}A\delta^{n}>\varepsilon ^{2})}|\mathcal{G}^{n}_{i-1,j-1} \bigr]. \end{aligned}
Hence, for some constant $$K>0$$
\begin{aligned} \sum^{[nT]}_{i=1}\sum ^{[nS]}_{j=1}E\bigl[\bigl(X^{(n)}_{i,j} \bigr)^{2} I_{(|X^{(n)}_{i,j}|>\varepsilon)}|\mathcal{G}^{n}_{i-1,j-1} \bigr] \leq&\sum^{[nT]}_{i=1}\sum ^{[nS]}_{j=1}CA\delta^{n} E \bigl[I_{(n^{2}(\xi^{(n)}_{i,j})^{2}A\delta^{n}>\varepsilon^{2})}|\mathcal {G}^{n}_{i-1,j-1} \bigr] \quad \mbox{a.s.} \\ \leq& CA\delta^{n}\sum^{[nT]}_{i=1} \sum^{[nS]}_{j=1}E [I_{(K^{2}A\delta ^{n}>\varepsilon^{2})} ] \rightarrow0,\quad n\rightarrow\infty, \end{aligned}
because $$\delta^{n}\rightarrow0$$ implies $$I_{(K^{2}A\delta^{n}>\varepsilon ^{2})}\rightarrow0$$. This completes the proof. □

Declarations

Acknowledgements

The authors would like to thank the anonymous referees for their careful reading of the manuscript and helpful comments. This work was supported by the National Natural Science Foundation of China (11271020).

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Authors’ Affiliations

(1)
Department of Mathematics, Anhui Normal University, Wuhu, 241000, China

References 