# Hajek–Renyi-type inequality for $$(\alpha , \beta )$$-mixing sequences and its application to change-point model

## Abstract

In this paper, we investigate the Hajek–Renyi-type inequalities for $$(\alpha ,\beta )$$-mixing sequences. As an important application of the results, we further study the CUSUM-type estimator of mean change-point models for $$(\alpha , \beta )$$-mixing sequences. We establish the convergence rate of weak consistency for the CUSUM estimator. Moreover, we also present some simulation studies and a real data example to support the theoretical results.

## 1 Introduction

Inequalities play an important role in almost all branches of mathematical science, and probability inequalities are important tools in probability theory and mathematical statistics. In probability limit theory and statistical large-sample theory, many key results are proved by establishing powerful probability inequalities. Let $$\{X_{n}, n\ge 1\}$$ be a sequence of random variables defined on a probability space $$(\Omega , \mathscr{F},P)$$. If $$\{b_{n}, n \ge 1\}$$ is a positive nondecreasing sequence of real numbers and $$\{X_{n}, n\ge 1\}$$ is a sequence of independent random variables with $$EX_{n}=0$$, then for any $$\varepsilon > 0$$ and positive integers $$m \le n$$, Hajek and Renyi [7] obtained the following important inequality:

\begin{aligned} P \Biggl(\max_{m\le k\le n} \Biggl\vert \frac{1}{b_{k}}\sum _{i=1}^{k} X_{i} \Biggr\vert > \varepsilon \Biggr)\le \varepsilon ^{-2} \Biggl(\sum _{i=m+1}^{n} \frac{E{X_{i}}^{2}}{b_{i}^{2}}+\frac{1}{b_{m}^{2}}\sum _{i=1}^{m}E{X_{i}}^{2} \Biggr). \end{aligned}
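As a quick numerical illustration (not part of the original derivation), the following Python sketch compares a Monte Carlo estimate of the left-hand side with the Hajek–Renyi bound for i.i.d. standard normal summands and $$b_{k}=k$$; the function name and the parameter values are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def hajek_renyi_check(n=200, m=50, eps=0.3, reps=2000):
    """Monte Carlo estimate of P(max_{m<=k<=n} |S_k/b_k| > eps) versus the
    Hajek-Renyi bound, for i.i.d. N(0, 1) summands and b_k = k."""
    b = np.arange(1, n + 1, dtype=float)      # b_k = k: positive, nondecreasing
    exceed = 0
    for _ in range(reps):
        x = rng.standard_normal(n)            # EX_i = 0, EX_i^2 = 1
        if np.max(np.abs(np.cumsum(x) / b)[m - 1:]) > eps:
            exceed += 1
    lhs = exceed / reps
    # bound: eps^{-2} ( sum_{i=m+1}^{n} EX_i^2/b_i^2 + b_m^{-2} sum_{i=1}^{m} EX_i^2 )
    rhs = (np.sum(1.0 / b[m:] ** 2) + m / b[m - 1] ** 2) / eps ** 2
    return lhs, rhs

lhs, rhs = hajek_renyi_check()
print(lhs, rhs)
```

With these settings the empirical probability stays well below the bound, as the inequality guarantees.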

Many scholars have shown a strong interest in Hajek–Renyi-type inequality since it was proposed. For example, Liu et al. [15] extended the Hajek–Renyi inequality to negatively associated (NA) random variables and used the inequality to prove the Marcinkiewicz strong law of large numbers for NA random variables. Prakasa Rao [18] obtained a Hajek–Renyi-type inequality for associated sequences and presented some applications. The inequality of Prakasa Rao [18] was improved by Sung [23], and some applications were also presented for associated random variables. Hu et al. [11] improved the results of Sung [23] by the demimartingale method, which was different from that of Sung. Yang et al. [27] extended the Hajek–Renyi-type inequality to a pairwise negatively quadrant dependent (NQD) sequence, an $$L^{r}$$ ($$r>1$$) mixingale, and a linear process and obtained the strong law of large numbers for these sequences by the established inequality. Wan [24] established the Hajek–Renyi inequality for ρ-mixing sequences. Deng et al. [4] studied the Hajek–Renyi-type inequality for $$0< p\le 2$$ for a sequence of extended negatively dependent (END) random variables.

In this paper, we further study the Hajek–Renyi-type inequality for $$( \alpha , \beta )$$-mixing random variables for $$1< p\le 2$$. Let us recall the notion of $$(\alpha , \beta )$$-mixing random variables, which was first introduced by Bradley and Bryc [1].

Let $$\{X_{n}, n\ge 1\}$$ be a sequence of random variables defined on a probability space $$(\Omega , \mathscr{F},P)$$. Denote $$S_{n}=\sum_{i=1}^{n}X_{i}$$, $$n\ge 1$$, and $$S_{0}=0$$. Let n and m be positive integers, and let $$\mathscr{F}_{n}^{m}=\sigma (X_{i}, n\le i\le m)$$. Given σ-algebras $$\mathscr{A}$$ and $$\mathscr{B}$$ in $$\mathscr{F}$$, let

\begin{aligned} \lambda (\mathscr{A},\mathscr{B})=\sup_{X\in L_{1/\alpha}( \mathscr{A}),Y\in L_{1/\beta}(\mathscr{B})} \frac{ \vert EXY-EXEY \vert }{ \Vert X \Vert _{1/\alpha} \Vert Y \Vert _{1/\beta}}, \end{aligned}

where $$0<\alpha$$, $$\beta <1$$, $$\alpha +\beta =1$$, and $$\|X\|_{p}=(E|X|^{p})^{1/p}$$. Define the $$(\alpha , \beta )$$-mixing coefficients by

\begin{aligned} \lambda (n)=\sup_{k\ge 1}\lambda \bigl(\mathscr{F}_{1}^{k}, \mathscr{F}_{k+n}^{ \infty}\bigr), \quad n\ge 1. \end{aligned}

### Definition 1.1

A sequence $$\{X_{n}, n\ge 1\}$$ of random variables is said to be $$(\alpha , \beta )$$-mixing if $$\lambda (n)\downarrow 0$$ as $$n\rightarrow \infty$$.

Since the notion of $$(\alpha , \beta )$$-mixing was proposed, it has been studied by many scholars, and many limit theorems have been obtained. For example, Bradley and Bryc [1] obtained the central limit theorem for $$(\alpha , \beta )$$-mixing sequences under absolutely regular conditions; Shao [20] further studied the limiting properties of $$(\alpha , \beta )$$-mixing sequences; Cai [2] studied the strong consistency and almost sure convergence rates for recursive estimators of joint and conditional probability density functions for stationary $$(\alpha , \beta )$$-mixing processes; Lu and Lin [16] established the bound of the covariance for $$(\alpha , \beta )$$-mixing sequences; Shen et al. [21] gave the moment inequality for $$(\alpha , \beta )$$-mixing sequences on the basis of Lu and Lin [16] and obtained the convergence theorem for $$(\alpha , \beta )$$-mixing sequences; Gao [6] obtained the result on the strong stability for stochastically dominated $$(\alpha , \beta )$$-mixing sequences; Yu [28] proved the Rosenthal-type inequalities by using the moment inequalities of $$( \alpha , \beta )$$-mixing sequences and studied the strong convergence; Samura et al. [19] investigated the strong consistency, complete consistency, and mean consistency for the estimators of partially linear regression models under $$(\alpha , \beta )$$-mixing errors.

Inspired by the literature above, we further investigate the Hajek–Renyi-type inequalities for $$(\alpha ,\beta )$$-mixing random variables by adopting the Marcinkiewicz–Zygmund-type maximum inequality. As an application, we investigate the cumulative sum (CUSUM) type estimator of mean change-point models based on $$(\alpha , \beta )$$-mixing sequences. In addition, we provide some simulation studies and a real data example to verify the theoretical results.

The rest of this paper is organized as follows. The main results are presented in Sect. 2. As an application, a change-point model is studied in Sect. 3. Some numerical analysis and a real data example are provided in Sects. 4 and 5, respectively. Throughout this paper, C, $$C_{1},C_{2},\dots$$ denote positive constants whose values may vary from place to place, $$\log x=\ln \max \{e,x\}$$, and $$\lfloor x\rfloor$$ denotes the largest integer not exceeding x.

## 2 Hajek–Renyi-type inequalities

We first present the Marcinkiewicz–Zygmund-type maximum inequality of $$(\alpha ,\beta )$$-mixing random variables, which will be used to prove our main results.

### Lemma 2.1

(Samura et al. [19])

Let $$1< p\le 2$$. Let $$\{X_{n},n\ge 1\}$$ be a sequence of $$(\alpha ,\beta )$$-mixing random variables with $$EX_{n}=0$$, $$E|X_{n}|^{p}<\infty$$, and $$\sum_{n=1}^{\infty}(\lambda (n))^{\frac{1}{2\alpha}\wedge \frac{1}{2\beta}}<\infty$$, where $$0<\alpha , \beta <1$$ and $$\alpha +\beta =1$$. Let $$\{a_{ni}, 1\le i\le n, n\ge 1\}$$ be an array of real numbers. Then there exists a positive constant C depending only on α, β, and $$\lambda (\cdot )$$ such that

\begin{aligned} E\max_{1\le k\le n} \Biggl\vert \sum_{i=1}^{k}a_{ni}X_{i} \Biggr\vert ^{p}\le C( \log n)^{p}\sum _{i=1}^{n} \vert a_{ni} \vert ^{p} E\vert X_{i} \vert ^{p}. \end{aligned}
(2.1)

Taking $$a_{ni}=1$$ in (2.1), we have that

\begin{aligned} E\max_{1\le k\le n} \Biggl\vert \sum_{i=1}^{k}X_{i} \Biggr\vert ^{p}\le C( \log n)^{p}\sum _{i=1}^{n} E\vert X_{i} \vert ^{p}. \end{aligned}
(2.2)

Now we will state and prove our main results of this paper. The first one is the Hajek–Renyi-type inequality for $$(\alpha ,\beta )$$-mixing sequences.

### Theorem 2.1

(Hajek–Renyi-type inequality)

Let $$1< p\le 2$$. Let $$\{X_{n},n\ge 1\}$$ be a sequence of $$(\alpha ,\beta )$$-mixing random variables with $$EX_{n}=0$$, $$E|X_{n}|^{p}<\infty$$, and $$\sum_{n=1}^{\infty}(\lambda (n))^{\frac{1}{2\alpha}\wedge \frac{1}{2\beta}}<\infty$$, where $$0<\alpha , \beta <1$$ with $$\alpha +\beta =1$$. Let $$\{b_{n},n\ge 1\}$$ be a sequence of nondecreasing positive numbers. Then for any $$\varepsilon >0$$ and $$n\ge 1$$,

\begin{aligned} P \Biggl(\max_{1\le k\le n} \Biggl\vert \frac{1}{b_{k}}\sum _{i=1}^{k} X_{i} \Biggr\vert \ge \varepsilon \Biggr)\le \frac{C{(\log n)^{p}}}{\varepsilon ^{p}}\sum_{i=1}^{n} \frac{E \vert X_{i} \vert ^{p}}{b_{i}^{p}}, \end{aligned}

where C depends only on α, β, $$\lambda (\cdot )$$, and p.

### Proof of Theorem 2.1

Set $$S_{k}=\sum_{i=1}^{k}X_{i}$$ and $$b_{0}=0$$. Assume that $$\sum_{i=1}^{0}X_{i}/b_{i}=0$$. It is easy to verify that

\begin{aligned} \frac{S_{k}}{b_{k}} =&\frac{1}{b_{k}}\sum_{i=1}^{k} X_{i}= \frac{1}{b_{k}}\sum_{i=1}^{k} \sum_{j=1}^{i}(b_{j}-b_{j-1}) \frac{X_{i}}{b_{i}} =\frac{1}{b_{k}}\sum_{j=1}^{k}(b_{j}-b_{j-1}) \sum_{i=j}^{k} \frac{X_{i}}{b_{i}}. \end{aligned}

Since $$b_{j}-b_{j-1}\ge 0$$ for each j and $$\sum_{j=1}^{k}(b_{j}-b_{j-1})=b_{k}$$, this representation of $$S_{k}/b_{k}$$ as a convex combination yields

\begin{aligned} \biggl\vert \frac{S_{k}}{b_{k}} \biggr\vert \le \max_{1\le j\le k} \Biggl\vert \sum _{i=j}^{k}\frac{X_{i}}{b_{i}} \Biggr\vert . \end{aligned}

Therefore

\begin{aligned} \biggl\{ \biggl\vert \frac{S_{k}}{b_{k}} \biggr\vert \ge \varepsilon \biggr\} \subset \Biggl\{ \max_{1\le j\le k} \Biggl\vert \sum _{i=j}^{k} \frac{X_{i}}{b_{i}} \Biggr\vert \ge \varepsilon \Biggr\} , \end{aligned}

and thus

\begin{aligned} \biggl\{ \max_{1\le k\le n} \biggl\vert \frac{S_{k}}{b_{k}} \biggr\vert \ge \varepsilon \biggr\} \subset & \Biggl\{ \max_{1\le k\le n} \max_{1 \le j\le k} \Biggl\vert \sum_{i=j}^{k} \frac{X_{i}}{b_{i}} \Biggr\vert \ge \varepsilon \Biggr\} \\ =& \Biggl\{ \max_{1\le j\le k \le n} \Biggl\vert \sum _{i=1}^{k} \frac{X_{i}}{b_{i}}-\sum _{i=1}^{j-1}\frac{X_{i}}{b_{i}} \Biggr\vert \ge \varepsilon \Biggr\} \\ \subset & \Biggl\{ \max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j} \frac{X_{i}}{b_{i}} \Biggr\vert \ge \frac{\varepsilon}{2} \Biggr\} . \end{aligned}
(2.3)

From (2.3) we have

\begin{aligned} P \biggl\{ \max_{1\le k\le n} \biggl\vert \frac{S_{k}}{b_{k}} \biggr\vert \ge \varepsilon \biggr\} \le P \Biggl\{ \max_{1\le j\le n} \Biggl\vert \sum_{i=1}^{j} \frac{X_{i}}{b_{i}} \Biggr\vert \ge \frac{\varepsilon}{2} \Biggr\} . \end{aligned}
(2.4)

By Markov’s inequality and (2.2) we have

\begin{aligned} P \Biggl\{ \max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}\frac{X_{i}}{b_{i}} \Biggr\vert \ge \frac{\varepsilon}{2} \Biggr\} \le & \frac{2^{p}}{\varepsilon ^{p}}E \Biggl(\max _{1\le j\le n} \Biggl\vert \sum_{i=1}^{j} \frac{X_{i}}{b_{i}} \Biggr\vert ^{p} \Biggr) \\ \le &\frac{C{(\log n)^{p}}}{\varepsilon ^{p}}\sum_{i=1}^{n} \frac{E \vert X_{i} \vert ^{p}}{b_{i}^{p}}. \end{aligned}
(2.5)

Finally, the desired result of Theorem 2.1 immediately follows from (2.4) and (2.5). □

By Theorem 2.1 we can further obtain the following result.

### Theorem 2.2

(Generalized Hajek–Renyi-type inequality)

Let $$1< p\le 2$$. Let $$\{X_{n},n\ge 1\}$$ be a sequence of $$(\alpha ,\beta )$$-mixing random variables with $$EX_{n}=0$$, $$E|X_{n}|^{p}<\infty$$, and $$\sum_{n=1}^{\infty}(\lambda (n))^{\frac{1}{2\alpha}\wedge \frac{1}{2\beta}}<\infty$$, where $$0<\alpha , \beta <1$$ with $$\alpha +\beta =1$$. Let $$\{b_{n},n\ge 1\}$$ be a nondecreasing sequence of positive numbers. Then for any $$\varepsilon >0$$ and positive integers $$m>n$$, we have

\begin{aligned} P \Biggl(\max_{n\le k\le m} \Biggl\vert \frac{1}{b_{k}}\sum _{j=1}^{k} X_{j} \Biggr\vert \ge \varepsilon \Biggr)\le \frac{C{(\log m)^{p}}}{\varepsilon ^{p}} \Biggl( \frac{1}{b_{n}^{p}}\sum _{j=1}^{n}E \vert X_{j} \vert ^{p}+\sum_{j=n+1}^{m} \frac{E \vert X_{j} \vert ^{p}}{b_{j}^{p}} \Biggr), \end{aligned}

where C depends only on α, β, $$\lambda (\cdot )$$, and p.

### Proof of Theorem 2.2

Note that

\begin{aligned} \max_{n\le k\le m} \Biggl\vert \frac{1}{b_{k}}\sum _{j=1}^{k} X_{j} \Biggr\vert \le \frac{1}{b_{n}} \Biggl\vert \sum_{j=1}^{n} X_{j} \Biggr\vert +\max_{n+1 \le k\le m} \Biggl\vert \frac{1}{b_{k}}\sum_{j=n+1}^{k} X_{j} \Biggr\vert . \end{aligned}

It follows that

\begin{aligned} \Biggl(\max_{n\le k\le m} \Biggl\vert \frac{1}{b_{k}}\sum _{j=1}^{k} X_{j} \Biggr\vert \ge \varepsilon \Biggr)\subset \Biggl(\frac{1}{b_{n}} \Biggl\vert \sum _{j=1}^{n} X_{j} \Biggr\vert \ge \frac{\varepsilon}{2} \Biggr)\cup \Biggl(\max_{n+1\le k\le m} \Biggl\vert \frac{1}{b_{k}}\sum_{j=n+1}^{k} X_{j} \Biggr\vert \ge \frac{\varepsilon}{2} \Biggr). \end{aligned}

Therefore

\begin{aligned} &P \Biggl(\max_{n\le k\le m} \Biggl\vert \frac{1}{b_{k}}\sum _{j=1}^{k} X_{j} \Biggr\vert \ge \varepsilon \Biggr) \\ &\quad \le P \Biggl(\frac{1}{b_{n}} \Biggl\vert \sum _{j=1}^{n} X_{j} \Biggr\vert \ge \frac{\varepsilon}{2} \Biggr)+P \Biggl(\max_{n+1\le k\le m} \Biggl\vert \frac{1}{b_{k}}\sum_{j=n+1}^{k} X_{j} \Biggr\vert \ge \frac{\varepsilon}{2} \Biggr). \end{aligned}
(2.6)

Now we consider the first term of (2.6). By Markov’s inequality and Lemma 2.1 we have

\begin{aligned} P \Biggl(\frac{1}{b_{n}} \Biggl\vert \sum_{j=1}^{n} X_{j} \Biggr\vert \ge \frac{\varepsilon}{2} \Biggr)\le \frac{2^{p}}{\varepsilon ^{p}b_{n}^{p}}E \Biggl\vert \sum_{j=1}^{n} X_{j} \Biggr\vert ^{p}\le \frac{C(\log n)^{p}}{\varepsilon ^{p}b_{n}^{p}}\sum _{j=1}^{n} E \vert X_{j} \vert ^{p}. \end{aligned}
(2.7)

For the second term, applying Theorem 2.1 to $$\{X_{n+i},1\le i\le m-n\}$$, we obtain

\begin{aligned} P \Biggl(\max_{n+1\le k\le m} \Biggl\vert \frac{1}{b_{k}}\sum _{j=n+1}^{k} X_{j} \Biggr\vert \ge \frac{\varepsilon}{2} \Biggr) =&P \Biggl(\max_{1\le k\le m-n} \Biggl\vert \frac{1}{b_{n+k}}\sum_{j=1}^{k} X_{n+j} \Biggr\vert \ge \frac{\varepsilon}{2} \Biggr) \\ \le &\frac{C{(\log (m-n))^{p}}}{(\varepsilon /2)^{p}} \sum_{j=1}^{m-n} \frac{E \vert X_{n+j} \vert ^{p}}{b_{n+j}^{p}} \\ \le &\frac{C{(\log m)^{p}}}{\varepsilon ^{p}}\sum_{j=n+1}^{m} \frac{E \vert X_{j} \vert ^{p}}{b_{j}^{p}}. \end{aligned}
(2.8)

Combining (2.6), (2.7), and (2.8), we obtain the desired result. This completes the proof of the theorem. □

## 3 Application to change-point model

First, we give a brief introduction to the change-point model. For some $$0<\tau ^{*}<1$$, let $$k^{*}=\lfloor n\tau ^{*}\rfloor$$. For $$n\geq 1$$, suppose that the observations $$X_{1},\dots ,X_{n}$$ satisfy the model

\begin{aligned} X_{i}=\mu +\delta _{n}I\bigl(k^{*}+1\le i \le n \bigr)+Z_{i}, \quad 1\le i\le n, \end{aligned}
(3.1)

where the mean parameter μ, change amount $$\delta _{n}$$, and change-point location $$k^{*}$$ are unknown, and $$\{Z_{i},1\leq i\leq n\}$$ is a sequence of random variables with zero means. We denote by $$\tau ^{*}=k^{*}/n$$ the ratio of the change point.

The estimators of $$k^{*}$$ and $$\tau ^{*}$$ in model (3.1) based on the CUSUM method (see Csörgő and Horváth [3]) are defined respectively as

\begin{aligned} \hat{k}_{n}(\theta )=\min \Bigl\{ k: \vert U_{k} \vert = \max_{1 \leq j\leq n} \vert U_{j} \vert \Bigr\} \quad \text{{and}}\quad \hat{\tau}_{n}(\theta )=\hat{k}_{n}/n, \end{aligned}
(3.2)

where

\begin{aligned} U_{k}= \biggl(\frac{k(n-k)}{n} \biggr)^{1-\theta} \Biggl( \frac{1}{k} \sum_{i=1}^{k}X_{i}- \frac{1}{n-k}\sum_{i=k+1}^{n}X_{i} \Biggr) \end{aligned}
(3.3)

with some $$0\leq \theta <1$$.
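For concreteness, the estimator (3.2)–(3.3) can be implemented in a few lines. The sketch below is our own illustration (the function name `cusum_changepoint` is not from the source); it takes the maximum over $$1\le k\le n-1$$, where $$U_{k}$$ is well defined, and `np.argmax` returns the first maximizer, matching the minimum in (3.2).

```python
import numpy as np

def cusum_changepoint(x, theta=0.0):
    """CUSUM change-point estimator (3.2)-(3.3) with 0 <= theta < 1.
    Returns (k_hat, tau_hat); ties in max |U_k| resolve to the smallest k."""
    x = np.asarray(x, dtype=float)
    n = x.size
    k = np.arange(1, n)                          # k = 1, ..., n-1
    s = np.cumsum(x)
    mean_left = s[:-1] / k                       # (1/k) sum_{i<=k} X_i
    mean_right = (s[-1] - s[:-1]) / (n - k)      # (1/(n-k)) sum_{i>k} X_i
    u = (k * (n - k) / n) ** (1 - theta) * (mean_left - mean_right)
    k_hat = int(k[np.argmax(np.abs(u))])         # first (smallest) maximizer of |U_k|
    return k_hat, k_hat / n

# a noiseless mean shift of size 1 at k* = 70 out of n = 100
k_hat, tau_hat = cusum_changepoint(np.concatenate([np.zeros(70), np.ones(30)]))
print(k_hat, tau_hat)                            # 70 0.7
```

For a pure step signal, $$|U_{k}|$$ increases up to $$k^{*}$$ and decreases afterwards, so the argmax recovers the shift exactly.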

Over the past few decades, many statisticians have studied the large-sample properties of CUSUM-type estimators. For example, Horváth and Kokoszka [9] studied the asymptotic distribution of the CUSUM estimator of the change point for stationary Gaussian random variables with long-range dependence; Lavielle [14] presented some convergence results for the minimum contrast estimator in a change-point estimation problem for strongly mixing and strongly dependent processes; Hariz and Wylie [8] established a unified framework for estimating the change point in the mean of stationary sequences and gave the rate of convergence of the CUSUM estimator; Shi et al. [22] investigated the strong convergence rate of the CUSUM estimator of the change point under NA sequences and proposed an iterative algorithm for locating a change point. For more recent studies on change-point analysis, we refer to Horváth and Rice [10], Jin et al. [12], Messer et al. [17], Xu et al. [25], Yang et al. [26], Ding et al. [5], and the references therein. In this section, we investigate the CUSUM-type estimator of mean change-point models based on $$(\alpha , \beta )$$-mixing sequences by using the Hajek–Renyi-type inequalities established above. We list some assumptions.

### Assumption A.1

For some $$1< p\le 2$$, let $$\{Z_{n},n\ge 1\}$$ be a sequence of $$(\alpha , \beta )$$-mixing random variables with $$EZ_{i}=0$$, $$\sup_{i\ge 1}E|Z_{i}|^{p}<\infty$$, and the mixing coefficients satisfying $$\sum_{n=1}^{\infty}(\lambda (n))^{\frac{1}{2\alpha}\wedge \frac{1}{2\beta}}<\infty$$, where $$0<\alpha , \beta <1$$ with $$\alpha +\beta =1$$.

### Assumption A.2

For $$0\le \theta <1$$ and $$1< p\le 2$$, let

\begin{aligned} \delta _{n}\neq 0 \quad \text{and} \quad g_{n}(\theta , p)/ \delta _{n} \rightarrow 0 \quad \text{as } n\rightarrow \infty , \end{aligned}

where

$$g_{n}(\theta , p)=\textstyle\begin{cases} n^{1/p-1}\log n &\text{{if }} {0\le \theta < \frac{1}{p}}, \\ n^{1/p-1}\log ^{1/p+1}n &\text{{if }} {\theta =\frac{1}{p}}, \\ n^{\theta -1}\log n &\text{{if }} {\frac{1}{p}< \theta < 1}. \end{cases}$$
(3.4)
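The piecewise rate (3.4) is straightforward to evaluate. The following helper (our own naming, not from the source) is a minimal sketch using the convention $$\log x=\ln \max \{e,x\}$$ from Sect. 1.

```python
import math

def g_n(n, theta, p):
    """Rate g_n(theta, p) of (3.4), for 0 <= theta < 1 and 1 < p <= 2."""
    log_n = math.log(max(math.e, n))             # log x = ln max{e, x}
    if theta < 1.0 / p:
        return n ** (1.0 / p - 1.0) * log_n
    if theta == 1.0 / p:
        return n ** (1.0 / p - 1.0) * log_n ** (1.0 / p + 1.0)
    return n ** (theta - 1.0) * log_n
```

For instance, with $$p=2$$ the rate is $$n^{-1/2}\log n$$ for $$\theta <1/2$$, with an extra $$\log ^{1/2}n$$ factor at the boundary $$\theta =1/2$$.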

### Theorem 3.1

Let Assumptions A.1 and A.2 be satisfied. Then for any $$0\le \theta <1$$,

\begin{aligned} \hat{\tau}_{n}(\theta )-\tau ^{*}=O_{P} \bigl(g_{n}(\theta , p)/ \vert \delta _{n} \vert \bigr). \end{aligned}

### Proof of Theorem 3.1

Let $$\tau _{n}=k/n$$. For any $$0\le \theta <1$$, by (3.1) and (3.3) we have

\begin{aligned} EU_{k}(\theta ) =& \biggl(\frac{k(n-k)}{n} \biggr)^{1-\theta} \Biggl( \frac{1}{k}\sum_{i=1}^{k}EX_{i}- \frac{1}{n-k}\sum_{i=k+1}^{n}EX_{i} \Biggr) \\ =&\textstyle\begin{cases} -\delta _{n}n^{1-\theta}\tau _{n}^{1-\theta}{(1-\tau _{n})^{-\theta}{(1- \tau ^{*})}}, &{k\le k^{*}}, \\ -\delta _{n}n^{1-\theta}{(1-\tau _{n})}^{1-\theta}{\tau _{n}}^{- \theta}{\tau ^{*}}, &{k> k^{*}}, \end{cases}\displaystyle \end{aligned}
(3.5)

and

\begin{aligned} EU_{k^{*}}(\theta )=-\delta _{n}n^{1-\theta}\bigl(\tau ^{*}\bigr)^{1-\theta}{\bigl(1- \tau ^{*} \bigr)^{1-\theta}}. \end{aligned}
(3.6)

Therefore by (3.11) of Kokoszka and Leipus [13] we have

\begin{aligned} \vert \delta _{n} \vert \overline{\tau}n^{1-\theta} \bigl\vert {\hat{\tau}}_{n}-\tau ^{*} \bigr\vert \le 2\max _{1\le k\le n-1} \bigl\vert U_{k}(\theta )-EU_{k}( \theta ) \bigr\vert , \end{aligned}
(3.7)

where $$\overline{\tau}:=(1-\theta )(\tau ^{*})^{-\theta}(1-\tau ^{*})^{- \theta}\min \{\tau ^{*}, 1-\tau ^{*}\}$$. By (3.1) and (3.3) we easily see that

\begin{aligned} & \vert \delta _{n} \vert ^{-1}n^{\theta -1}\max _{1\le k\le n-1} \bigl\vert U_{k}(\theta )-EU_{k}( \theta ) \bigr\vert \\ &\quad \le \vert \delta _{n} \vert ^{-1}n^{\theta -1}\max _{1\le k\le n-1} \frac{1}{k^{\theta}} \Biggl\vert \sum _{i=1}^{k}(X_{i}-EX_{i}) \Biggr\vert + \vert \delta _{n} \vert ^{-1}n^{\theta -1} \max_{1\le k\le n-1} \frac{1}{(n-k)^{\theta}} \Biggl\vert \sum _{i=k+1}^{n}(X_{i}-EX_{i}) \Biggr\vert \\ &\quad = \vert \delta _{n} \vert ^{-1}n^{\theta -1}\max _{1\le k\le n-1} \frac{1}{k^{\theta}} \Biggl\vert \sum _{i=1}^{k}Z_{i} \Biggr\vert + \vert \delta _{n} \vert ^{-1}n^{ \theta -1}\max _{1\le k\le n-1}\frac{1}{(n-k)^{\theta}} \Biggl\vert \sum _{i=k+1}^{n}Z_{i} \Biggr\vert \\ &\quad =I_{n1}+I_{n2}. \end{aligned}
(3.8)

By (3.4), (3.7), and (3.8), to prove Theorem 3.1, we have to prove that

\begin{aligned} I_{ni}=O_{P}\bigl(g_{n}(\theta , p)/ \vert \delta _{n} \vert \bigr),\quad i=1, 2, \end{aligned}

where $$g_{n}(\theta , p)$$ is defined in (3.4). For any $$M>0$$, it follows from Theorem 2.1 with $$\sup_{i\ge 1}E|Z_{i}|^{p}<\infty$$ that

\begin{aligned} &P \Biggl( \vert \delta _{n} \vert ^{-1}n^{\theta -1} \max_{1\le k\le n-1} \frac{1}{k^{\theta}} \Biggl\vert \sum _{i=1}^{k}Z_{i} \Biggr\vert >M g_{n}( \theta , p)/ \vert \delta _{n} \vert \Biggr) \\ &\quad \le \frac{C_{1} n^{\theta p-p}{(\log n)^{p}}}{M^{p} g_{n}^{p}(\theta , p)} \sum_{i=1}^{n} \frac{E \vert Z_{i} \vert ^{p}}{i^{\theta p}} \\ &\quad \le \textstyle\begin{cases} C_{2}M^{-p}n^{1-p}(\log n)^{p}g_{n}^{-p}(\theta , p), &{0\le \theta < \frac{1}{p}}, \\ C_{3}M^{-p}n^{1-p}(\log n)^{p+1}g_{n}^{-p}(\theta , p), &{\theta = \frac{1}{p}}, \\ C_{4}M^{-p}n^{p(\theta -1)}(\log n)^{p}g_{n}^{-p}(\theta , p), &{ \frac{1}{p}< \theta < 1}. \end{cases}\displaystyle \\ &\quad \le C_{5}M^{-p}. \end{aligned}
(3.9)

Thus by taking M large enough in (3.9) we have $$I_{n1}=O_{P}(g_{n}(\theta , p)/|\delta _{n}|)$$. In addition, we can see that

\begin{aligned} I_{n2}= \vert \delta _{n} \vert ^{-1}n^{\theta -1} \max_{1\le k\le n-1} \frac{1}{(n-k)^{\theta}} \Biggl\vert \sum _{i=k+1}^{n}Z_{i} \Biggr\vert = \vert \delta _{n} \vert ^{-1}n^{\theta -1}\max _{1\le k< n}\frac{1}{k^{\theta}} \Biggl\vert \sum _{i=1}^{k}Z_{n-i+1} \Biggr\vert . \end{aligned}

Therefore, analogously to (3.9), we can obtain that $$I_{n2}=O_{P}(g_{n}(\theta , p)/|\delta _{n}|)$$. The proof of the theorem is completed. □

### Remark 3.1

In the mean change-point model (3.1), if $$\delta _{n}=\delta _{0}\neq 0$$ and $$\{Z_{n}, n\ge 1\}$$ is a sequence of independent identically distributed random variables with $$EZ_{1}=0$$ and $$\operatorname{Var}(Z_{1})=\sigma ^{2}>0$$, then Kokoszka and Leipus [13] obtained the following result:

$$\hat{\tau}_{n}(\theta )-\tau ^{*}=\textstyle\begin{cases} O_{p}(n^{-1/2}) &\text{{if }} {0\le \theta < \frac{1}{2}}, \\ O_{p}(n^{-1/2}\log ^{1/2}n) &\text{{if }} {\theta =\frac{1}{2}}, \\ O_{p}(n^{\theta -1}) &\text{{if }} {\frac{1}{2}< \theta < 1}. \end{cases}$$
(3.10)

In this work, Theorem 3.1 improves and extends the corresponding result of Kokoszka and Leipus [13] from independent and identically distributed sequences to $$(\alpha , \beta )$$-mixing sequences without the identical distribution condition. The condition $$\operatorname{Var}(Z_{1})=\sigma ^{2}>0$$ in Kokoszka and Leipus [13] is weakened to $$\sup_{i\ge 1}E|Z_{i}|^{p}<\infty$$. More recently, Ding et al. [5] obtained the convergence rate of the CUSUM estimator of the mean change-point model for m-asymptotically almost negatively associated (m-AANA) sequences; Theorem 3.1 also extends the corresponding result of Ding et al. [5].

## 4 Numerical analysis

In this section, we conduct some simple simulations to examine the numerical performance of the CUSUM estimator of the change point based on $$(\alpha , \beta )$$-mixing random variables. In the mean change-point model (3.1), we assume that there exists a mean change-point location $$k^{*}$$ such that

\begin{aligned} X_{i}=\mu +\delta _{n}I\bigl(k^{*}+1\le i \le n \bigr)+Z_{i}, \quad 1\le i\le n, \end{aligned}

where $$Z_{i}=\sum_{k=0}^{m}e_{i+k}$$ for $$i\ge 1$$ and a fixed positive integer m. Let $$e_{i}\stackrel{\text{i.i.d.}}{\sim}N(0, \sigma _{0}^{2})$$, where $$\sigma _{0}^{2}=1/(m+1)$$. It is easy to verify that $$\{Z_{1}, Z_{2},\ldots , Z_{n}\}$$ is an $$(\alpha , \beta )$$-mixing sequence.

For simplicity, we take $$\mu =1$$, $$\tau _{0}=0.35$$, and $$m=10$$ for the simulation with 500 replications. For $$n=50, 150, 300, 600$$, we use Python to compute $${\hat{\tau}}_{n}(\theta )$$ with $$\theta =0.1, 0.3, 0.5$$ and $$\delta _{n}=n^{-0.2}, n^{-0.1}, n^{0}$$, where $${\hat{\tau}}_{n}(\theta )$$ is defined by (3.2). The resulting boxplots are shown in Fig. 1. Figure 1 reveals that as n increases, the CUSUM estimator $${\hat{\tau}}_{n}(\theta )$$ converges to the true parameter $$\tau _{0}$$. On the other hand, the estimator $${\hat{\tau}}_{n}(\theta )$$ has lower bias and variability for larger values of $$\delta _{n}$$.
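A minimal, self-contained sketch of one replication of this design is given below (the function name, seed, and the illustrative choices $$\delta _{n}=1$$ and $$\theta =0.1$$ are ours, not the exact simulation code of the paper).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_tau_hat(n, tau0=0.35, mu=1.0, delta=1.0, m=10, theta=0.1):
    """One replication of the Sect. 4 design: (m+1)-term moving-average
    (alpha, beta)-mixing errors, a mean shift of size delta at
    k* = floor(n * tau0), and the CUSUM estimate tau-hat_n(theta)."""
    sigma0 = np.sqrt(1.0 / (m + 1))
    e = rng.normal(0.0, sigma0, n + m)            # e_i i.i.d. N(0, 1/(m+1))
    z = np.array([e[i:i + m + 1].sum() for i in range(n)])  # Z_i = sum_{k=0}^m e_{i+k}
    k_star = int(np.floor(n * tau0))
    x = mu + delta * (np.arange(1, n + 1) > k_star) + z
    # CUSUM statistic (3.3) over k = 1, ..., n-1
    k = np.arange(1, n)
    s = np.cumsum(x)
    u = (k * (n - k) / n) ** (1 - theta) * (s[:-1] / k - (s[-1] - s[:-1]) / (n - k))
    return int(k[np.argmax(np.abs(u))]) / n       # tau-hat_n(theta)

est = [simulate_tau_hat(300) for _ in range(100)]
print(np.mean(est))
```

Averaged over replications, the estimates concentrate near the true ratio $$\tau _{0}=0.35$$, in line with Theorem 3.1.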

To further examine the performance of the CUSUM estimator for different change-point locations, we also consider the two cases $$\tau _{0}=0.55$$ and $$\tau _{0}=0.75$$, with all other settings the same as in the case $$\tau _{0}=0.35$$. The results are presented in Figs. 2 and 3, respectively. The two figures lead to conclusions similar to those drawn from Fig. 1.

We also compute the mean squared error (MSE) of the estimator $${\hat{\tau}}_{n}(\theta )$$ in the case $$\tau _{0}=0.35$$, as presented in Table 1, for different θ and $$\delta _{n}$$ under different sample sizes. For each value of θ, the larger the sample size, the smaller the MSE. In addition, the MSE decreases as $$\delta _{n}$$ increases. The results in Table 1 are consistent with the conclusions drawn from the boxplots in Fig. 1. These simulation results support the theoretical results established in Sect. 3.

## 5 A real data example

In this section, we conduct a real data analysis for the mean change-point model based on a financial time series. Let $$P_{t}$$ be the closing price of the Beijing Carbon Emission Quota (BEA), so the log return is defined as $$r_{t}=\log (P_{t})-\log (P_{t-1})$$. The dataset consists of the daily returns of BEA from 1 June 2020 to 31 May 2021 and can be found at http://www.tanpaifang.com/. Since there are no transactions on some trading days, we removed those days from the sample.
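In code, the returns are just first differences of the log prices. The price vector below is made up purely for illustration; the real BEA series must be downloaded from the source above.

```python
import numpy as np

prices = np.array([50.0, 51.2, 50.8, 52.0, 52.5])   # hypothetical closing prices P_t
returns = np.diff(np.log(prices))                   # r_t = log(P_t) - log(P_{t-1})
print(returns.round(4))
```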

First, we plot the sample autocorrelation function (ACF) and partial autocorrelation function (PACF) in Fig. 4. The trend of the series in Fig. 5, together with its ACF and PACF, suggests a stationary process. We then conduct a unit root test, which confirms that this time series of returns is stationary. Moreover, Fig. 4 and the unit root test suggest that the series of returns follows a moving average process.

Next, we use the CUSUM estimator to detect the change points of the time series of returns from 1 June 2020 to 31 May 2021. For $$\theta =0,0.1,0.2, \dots ,0.9$$, the values of $$\hat{k}_{n}(\theta )$$ are presented in Table 2. By Fig. 5 and Table 2 the location 197 should not be a true change-point location. Therefore we detect a mean change-point location 171 (14 April 2021), which is marked in Fig. 5. Finally, we give an economic and policy explanation of this change point. The Beijing Municipal Bureau of Ecology and Environment issued a notice on the management of key carbon emission units and the pilot work of carbon emission trading on 12 April 2021. The notice points out that it is necessary to maximize the role of the market mechanism in the control of greenhouse gas emissions, effectively reduce greenhouse gas emissions, and improve the carbon emission trading mechanism. The mean returns changed significantly from negative to positive after 12 April 2021.


## References

1. Bradley, R.C., Bryc, W.: Multilinear forms and measures of dependence between random variables. J. Multivar. Anal. 16, 335–367 (1985)

2. Cai, Z.W.: Strong consistency and rates for recursive nonparametric conditional probability density estimates under $$(\alpha , \beta )$$-mixing conditions. Stoch. Process. Appl. 38, 323–333 (1991)

3. Csörgő, M., Horváth, L.: Limit Theorems in Change-Point Analysis. Wiley, Chichester, pp. 170–181 (1997)

4. Deng, X., Wang, X., Xia, F.: Hajek–Renyi-type inequality and strong law of large numbers for END sequences. Commun. Stat., Theory Methods 46(2), 672–682 (2016)

5. Ding, S.S., Fang, H.Y., Dong, X., Yang, W.Z.: The CUSUM statistics of change-point models based on dependent sequences. J. Appl. Stat. 49(10), 2593–2611 (2022). https://doi.org/10.1080/02664763.2021.1913104

6. Gao, P.: Strong stability of $$(\alpha , \beta )$$-mixing sequences. Appl. Math. J. Chin. Univ. 31(4), 405–412 (2016)

7. Hajek, J., Renyi, A.: A generalization of an inequality of Kolmogorov. Acta Math. Acad. Sci. Hung. 6, 281–284 (1955)

8. Hariz, S.B., Wylie, J.J.: Rates of convergence for the change-point estimator for long-range dependent sequences. Stat. Probab. Lett. 73(2), 155–164 (2005)

9. Horváth, L., Kokoszka, P.: The effect of long-range dependence on change-point estimators. J. Stat. Plan. Inference 64, 57–81 (1997)

10. Horváth, L., Rice, G.: Extensions of some classical methods in change point analysis. Test 23, 219–255 (2014)

11. Hu, S.H., et al.: The Hajek–Renyi-type inequality for associated random variables. Stat. Probab. Lett. 79, 884–888 (2009)

12. Jin, B.S., Dong, C.L., Tan, C.C., Miao, B.Q.: Estimator of a change point in single index models. Sci. China Math. 57(8), 1701–1712 (2014)

13. Kokoszka, P., Leipus, R.: Change-point in the mean of dependent observations. Stat. Probab. Lett. 40(4), 385–393 (1998)

14. Lavielle, M.: Detection of multiple changes in a sequence of dependent variables. Stoch. Process. Appl. 83, 79–102 (1999)

15. Liu, J.J., Gan, S.X., Chen, P.Y.: The Hajek–Renyi inequality for NA random variables and its application. Stat. Probab. Lett. 43, 99–105 (1999)

16. Lu, C.R., Lin, Z.Y.: Limit Theory for Mixed Dependent Variables. Science Press of China, Beijing (1997)

17. Messer, M., Albert, S., Schneider, G.: The multiple filter test for change point detection in time series. Metrika 81, 589–607 (2018)

18. Prakasa Rao, B.L.S.: Hajek–Renyi-type inequality for associated sequences. Stat. Probab. Lett. 57, 139–143 (2002)

19. Samura, S.K., Wang, X.J., Wu, Y.: Consistency properties for the estimators of partially linear regression model under dependent errors. J. Stat. Comput. Simul. 89(3), 1–24 (2019)

20. Shao, Q.M.: Limit Theorems for the Partial Sums of Dependent and Independent Random Variable. University of Science and Technology of China, Hefei, pp. 1–309 (1989)

21. Shen, Y., Zhang, Y.J., Wang, X.J., et al.: Strong limit theorems for $$(\alpha , \beta )$$-mixing random variable sequences. J. Univ. Sci. Technol. China 41(9), 778–795 (2011)

22. Shi, X.P., Wu, Y.H., Miao, B.Q.: Strong convergence rate of estimators of change point and its application. Comput. Stat. Data Anal. 53, 990–998 (2009)

23. Sung, H.S.: A note on the Hajek–Renyi inequality for associated random variables. Stat. Probab. Lett. 78, 885–889 (2008)

24. Wan, Y.: Hajek–Renyi inequality for ρ-mixing sequences. J. Jianghan Univ. Nat. Sci. 41(1), 43–46 (2013)

25. Xu, M., Wu, Y.H., Jin, B.S.: Detection of a change-point in variance by a weighted sum of powers of variances test. J. Appl. Stat. 46(4), 664–679 (2019)

26. Yang, Q., Li, Y.N., Zhang, Y.: Change point detection for nonparametric regression under strongly mixing process. Stat. Pap. 61, 1465–1506 (2020)

27. Yang, W.Z., Shen, Y., Hu, S.H., Wang, X.J.: Hajek–Renyi inequality and strong law of large numbers for some dependent sequences. Acta Math. Appl. Sin. Engl. Ser. 28(3), 495–504 (2012)

28. Yu, C.Q.: Convergence theorems of weighted sum for $$(\alpha , \beta )$$-mixing sequences. J. Hubei Univ. Nat. Sci. 38(6), 477–487 (2016)

## Acknowledgements

The authors are most grateful to the Editor-in-Chief and anonymous referees for carefully reading the manuscript and valuable suggestions, which helped in improving an earlier version of this paper.

## Funding

The research is supported by the Provincial Humanities and Social Science Research Project of Anhui Colleges (SK2021A0739), the Natural Science Foundation of Anhui Province (2108085QA15), the Key Research Project of Chaohu University (XLZ-201903), Anhui Province Curriculum Ideological and Political Teaching Team (2020kcszjxtd57), and the National Fund Cultivation Project of Chizhou University (CZ2021GP04).

## Author information


### Contributions

WW derived and proved the inequality and was a major contributor in writing the manuscript. KC and YW applied the inequality to the change-point model, WQW and KZ did numerical simulation and real data analysis. XRT made significant contributions to data acquisition, analysis, and English writing. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Kan Chen.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.


Wang, W., Wu, Y., Wang, W. et al. Hajek–Renyi-type inequality for $$(\alpha , \beta )$$-mixing sequences and its application to change-point model. J Inequal Appl 2022, 130 (2022). https://doi.org/10.1186/s13660-022-02867-0