
Asymptotic properties of conditional value-at-risk estimate for asymptotic negatively associated samples

Abstract

This article examines the strong consistency of the conditional value-at-risk (CVaR) estimate for asymptotic negatively associated (ANA or \(\rho ^{-}\), for short) random samples under mild conditions. It is demonstrated that the optimal rate can achieve nearly \(O (n^{-1/2})\) under certain appropriate conditions. Furthermore, we present numerical simulations and a real data example to corroborate our theoretical results based on finite samples.

1 Introduction

The introduction of the value-at-risk (VaR) model has revolutionised the investment, management and governance landscape. In the investment arena, VaR enables individuals to assess the risk associated with investment assets, enabling them to formulate investment strategies based on risk level and risk tolerance, thereby reducing investment uncertainty. At the operational level, VaR allows for the continuous monitoring of potential fluctuations in order to avoid significant losses due to adverse changes in certain factors. In management, the VaR model plays a key role in the internal management of institutions, including the development of investment strategies, the assessment and supervision of traders and the prudent allocation of resources, while also serving as a valuable tool for market regulators. Market regulators are tasked with preventing adverse effects on the overall market and economic system arising from excessive market risk accumulation and concentrated risk release, with the VaR model serving as the primary tool for quantifying market risk accumulation. This innovative VaR technology and risk management framework underpinned by the VaR model will improve the operations of financial institutions in China, promote more rational investment behaviour among investors and provide regulators with an effective mechanism for market supervision.

Let X be a random cost variable with cumulative distribution function \(F(u)=P(X\leq u)\). Let \(F^{-1}(v)\) be its right continuous inverse, defined as \(F^{-1}(v)=\inf \{u:~F(u)>v\}\). For a fixed level \(\alpha \in (0,1)\), the value-at-risk \(\mathrm{VaR}_{\alpha}\) is defined as the α-quantile, represented as

$$\begin{aligned} \mathrm{VaR}_{\alpha}(X)=F^{-1}(\alpha ). \end{aligned}$$
(1.1)
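For a finite sample, the right continuous inverse of the empirical distribution function can be computed directly. The sketch below is our own illustration (function name and sample values are not from the paper):

```python
import numpy as np

def var_alpha(x, alpha):
    """Empirical VaR_alpha via the right continuous inverse
    F^{-1}(v) = inf{u : F(u) > v} of the empirical cdf F."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    # F(xs[k-1]) = k/n, so the smallest sample point u with F(u) > alpha
    # is the order statistic with 0-based index floor(n * alpha).
    k = int(np.floor(n * alpha))
    return xs[min(k, n - 1)]

print(var_alpha([1, 2, 3, 4], 0.75))  # prints 4.0
```

Note that the strict inequality in \(\inf \{u: F(u)>v\}\) matters here: \(F(3)=0.75\) is not greater than 0.75, so the 0.75-quantile of this sample is 4.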

VaR is commonly used by financial institutions to quantify and mitigate financial risk, as required by the Basel III framework. Despite its widespread adoption, VaR is considered insufficient as a comprehensive risk measure due to inherent mathematical limitations, such as non-subadditivity and non-convexity. Additionally, scenario-based optimisation of VaR presents challenges.

To address the limitations of VaR, some authors advocate the adoption of CVaR as a more comprehensive risk metric. Regarded as a coherent alternative, CVaR is gaining traction in the realm of financial risk management. For risk measurement, CVaR has been shown to possess better properties than VaR, where \(\mathrm{CVaR}_{\alpha}\) is defined by

$$\begin{aligned} \mathrm{CVaR}_{\alpha}(X)=E\left (X|X\geq \mathrm{VaR}_{\alpha}(X) \right ). \end{aligned}$$
(1.2)

That is, \(\mathrm{CVaR}_{\alpha}\) can be thought of as the conditional expectation of losses that exceed the \(\mathrm{VaR}_{\alpha}(X)\) level. Pflug [1] put forward that \(\mathrm{CVaR}_{\alpha}(X)\) can be viewed as the solution of an optimisation problem, namely,

$$\begin{aligned} \mathrm{CVaR}_{\alpha}(X)=\inf _{x\in \mathrm{R}}\left \{x+ \frac{1}{1-\alpha}E[X-x]^{+}\right \}:=\theta ^{*}, \end{aligned}$$
(1.3)

where \([a]^{+}:=\max \{0,a\}\) denotes the positive part of \(a \in \mathrm{R}\).

The CVaR model, a prevalent financial risk metric, enjoys broad support and acceptance within the international financial community. Its optimised version is recognised as a refined certainty equivalent risk measure, garnering increasing attention from practitioners and academics alike. Scholars have identified several key properties of CVaR as a coherent risk measure, including translation equivariance, convexity, and positive homogeneity. For further insights, references such as Pflug [1], Artzner et al. [2], Embrechts et al. [3], Bodnar et al. [4], Pavlikov and Uryasev [5], Wang et al. [6], Luo [7], among others, offer detailed discussions on the subject.

In equation (1.3), write \(h_{\alpha}(X, x) =x+\frac{1}{1-\alpha}[X-x]^{+}\) and \(\theta ^{\ast}=\mathrm{CVaR}_{\alpha}(X)\), so that \(\mathrm{CVaR}_{\alpha}(X)= \inf _{x\in \mathrm{R}} E h_{\alpha}(X, x)\). If \(X_{1},\ldots ,X_{n}\) are n realisations of the random variable X, then \(\theta ^{\ast}\) can be estimated by

$$\begin{aligned} \mathrm{CVaR}_{n,\alpha}(X)=\inf _{x\in \mathrm{R}} n^{-1}\sum _{i=1}^{n}h_{ \alpha}(X_{i}, x):=\hat{\theta}_{n}. \end{aligned}$$
(1.4)
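Estimator (1.4) requires no numerical optimiser: for a finite sample, the infimum over x is attained at one of the sample points, so it suffices to scan the observed values. A minimal sketch (the function name is our own):

```python
import numpy as np

def cvar_estimate(x, alpha):
    """Empirical CVaR estimator (1.4):
    inf_x { x + (1 / (n * (1 - alpha))) * sum_i max(X_i - x, 0) }.
    The infimum is attained at a sample point, so scanning suffices."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def objective(t):
        return t + np.maximum(x - t, 0.0).sum() / (n * (1.0 - alpha))

    return min(objective(t) for t in x)

print(cvar_estimate([1, 2, 3, 4], 0.5))  # prints 3.5, the mean of {3, 4}
```

Here the estimate agrees with the direct definition (1.2): \(\mathrm{VaR}_{0.5}=3\) for this sample, and the average of the losses at or above it is 3.5.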

Trindade et al. [8] examined the consistency of \(\hat{\theta}_{n}\) for independently and identically distributed samples as well as for stationary processes. However, time series data in fields like finance and economics typically exhibit interdependencies, making sample dependence inherent. Recently, Xing et al. [9] demonstrated the strong consistency of conditional value-at-risk estimation for φ-mixing samples under mild assumptions. Luo and Ou [10] delved into exponential inequalities, the strong laws of large numbers, and convergence rates for CVaR estimators in α-mixing sequences. Ding et al. [11] explored Berry-Esseen type bounds for conditional value-at-risk estimators in ψ-mixing sequences, among other contributions in this area.

Next, we review several different dependence structures. Negatively associated (NA) random variables were proposed by Joag-Dev and Proschan [12] and have a wide range of applications in multivariate statistical analysis and system reliability. Another important dependence structure is the \(\rho ^{*}\)-mixing random variable proposed by Bradley [13]. Because \(\rho ^{*}\)-mixing random variables contain certain moving average processes and specific categories of Markov chains, their importance extends to fields such as economics, finance, and other scientific disciplines. Zhang and Wang [14] introduced the concept of asymptotically negatively associated random variables, building on NA sequences and \(\rho ^{*}\)-mixing sequences. As is well known, ANA random variables include \(\rho ^{*}\)-mixing and NA random variables as special instances; see Examples 2.2 and 2.3 in Zhang [15]. The definition of an ANA random variable is as follows.

Definition 1.1

A sequence \(\{X_{n}, n\geq 1\}\) of random variables is said to be asymptotically negatively associated (ANA or \(\rho ^{-}\), for short) if

$$\begin{aligned} \rho ^{-}(s)=\sup \{\rho ^{-}(S,T): S,T\subset \mathbb{N},~\mathrm{dist}(S,T) \geq s\}\rightarrow 0, \quad \text{as } s\rightarrow \infty , \end{aligned}$$
(1.5)

where

$$\begin{aligned} \rho ^{-}(S,T)=0\vee \sup \left \{ \frac{Cov\left (f_{1}(X_{i}, i\in S), f_{2}(X_{j}, j\in T)\right )}{\sqrt{Var(f_{1}(X_{i}, i\in S))\cdot Var(f_{2}(X_{j}, j\in T))}}: f_{1},f_{2}\in \mathcal{C}\right \}, \end{aligned}$$
(1.6)

and \(\mathcal{C}\) is the set of coordinatewise nondecreasing functions.
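The coefficient in (1.6) takes a supremum over all pairs of nondecreasing functions; any single pair therefore yields a lower bound on it. The sketch below is our own illustration, taking \(f_{1}\) and \(f_{2}\) to be coordinate sums and estimating the resulting correlation by Monte Carlo:

```python
import numpy as np

def neg_assoc_coeff(samples, S, T):
    """Monte Carlo lower bound for rho^-(S, T) in (1.6), using one pair
    of nondecreasing functions: f1 = sum over S, f2 = sum over T.
    `samples` is an (m, n) array: m replications of (X_1, ..., X_n)."""
    f1 = samples[:, S].sum(axis=1)
    f2 = samples[:, T].sum(axis=1)
    corr = np.corrcoef(f1, f2)[0, 1]
    return max(0.0, corr)  # the 0-vee in (1.6)

# Example: the third coordinate is the negative of the first, so the
# blocks {0} and {2} are perfectly negatively correlated and the
# 0-vee clips the coefficient to zero.
rng = np.random.default_rng(1)
z = rng.standard_normal((1000, 3))
z[:, 2] = -z[:, 0]
print(neg_assoc_coeff(z, [0], [2]))  # prints 0.0
```

This only bounds \(\rho ^{-}(S,T)\) from below; computing the actual supremum over \(\mathcal{C}\times \mathcal{C}\) is not feasible by sampling a single pair.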

Since the inception of ANA random variables by Zhang and Wang [14], a plethora of noteworthy theoretical findings has emerged. For instance, Zhang and Wang [14] explored moment inequalities and complete convergence for partial sums of ANA random fields, while Zhang [16] derived central limit theorems. Yuan and Wu [17] delved into the limiting behaviour of the maximum of partial sums under residual Cesàro alpha-integrability assumptions. Tang et al. [18] established a Berry–Esseen type bound for wavelet estimators in a nonparametric regression model with ANA errors. Wu et al. [19] established a general result on complete moment convergence and the Marcinkiewicz–Zygmund-type strong law of large numbers for weighted sums of asymptotically negatively associated random variables. Additionally, Ko [20] elucidated the limiting behaviour of the maximum of partial sums in Hilbert space, among other notable contributions in this domain.

Building upon the insights from the aforementioned articles, this paper explores the strong consistency of the estimate \(\mathrm{CVaR}_{n,\alpha}(X)\) when the sample follows an ANA sequence, and provides its convergence rate. Furthermore, the theoretical findings are validated on finite samples through numerical simulations and a real data example.

The paper is structured as follows: Main results and their proofs are detailed in Sect. 2. Section 3 includes numerical simulations, while Sect. 4 presents a real data example. Finally, the Appendix contains the necessary lemmas.

Throughout this paper, the symbol C represents a positive constant, which may vary across different instances. Let \(I(A)\) be the indicator function of the set A, \(\lfloor x\rfloor \) be the integer part of x, and \(a\ll b\) mean that \(a\leq C b\).

2 Main result and proof

In this section, we give our main theorem as follows.

Theorem 2.1

Suppose that \(\{X_{i},~1\leq i\leq n\}\) is a sequence of identically distributed ANA random variables with \(E\left |X_{i}\right |^{r}<\infty \) for some \(r >1\). Then we have

$$\begin{aligned} \left |\mathrm{CVaR}_{n,\alpha}(X)-\mathrm{CVaR}_{\alpha}(X)\right |=O \left (n^{-\mu}\right ) \textit{ a.s., } \end{aligned}$$
(2.1)

where (i) if \(1< r\leq 2\), then \(0\leq \mu <1-\frac{1}{r}\); (ii) if \(r> 2\), then \(0\leq \mu <\frac{1}{2}\).

Remark 2.2

Let \(\mu >0\) in (2.1); then \(\hat{\theta}_{n}\) is a strongly consistent estimator of \(\theta ^{*}\). Moreover, by (2.1), the strong consistency rate is \(n^{-\mu}\) with \(0<\mu <1-1 / r\). In particular, by selecting appropriate parameters, the convergence rate can be made arbitrarily close to \(n^{-1/2}\) in order.

Remark 2.3

Since ANA sequences include NA (in particular, independent) and \(\rho ^{*}\)-mixing sequences, Theorem 2.1 also applies to NA and \(\rho ^{*}\)-mixing sequences. Furthermore, Theorem 2.1 relaxes the constraint \(1< r\leq 2\) in Theorem 2.1 of Xing et al. [9] to \(r>1\), and expands the scope of the sample from φ-mixing sequences to ANA sequences.

To prove the main results of the paper, the following lemma plays a crucial role.

Lemma 2.4

Suppose that \(\{X_{i},~1\leq i\leq n\}\) is a sequence of identically distributed ANA random variables with \(E\left |X_{i}\right |^{r}<\infty \) for some \(r >1\). Then we have

$$\begin{aligned} \frac{1}{n^{1-\mu}} \sum _{i=1}^{n}\left [h_{\alpha}\left (X_{i}, x \right )-E h_{\alpha}\left (X_{i}, x\right )\right ] \rightarrow 0 \textit{ a.s., } \end{aligned}$$
(2.2)

where (i) if \(1< r\leq 2\), then \(0\leq \mu <1-\frac{1}{r}\); (ii) if \(r> 2\), then \(0\leq \mu <\frac{1}{2}\).

Proof

(i) Since \(0\leq \mu <1-1 / r\) for \(1< r \leq 2\), we have \(r(1-\mu )>1\), and hence \(\mu /(r-1)<1-\mu \). Therefore, there exist \(\delta >0\) and \(s>0\) such that

$$\begin{aligned} \frac{s+\mu}{r-1}< \delta < 1-\mu . \end{aligned}$$
(2.3)

Set

$$\begin{aligned}& h_{\alpha}^{(1)}\left (X_{i}, x\right )=-n^{\delta }I(h_{\alpha} \left (X_{i}, x\right )< -n^{\delta})+h_{\alpha}\left (X_{i}, x\right )I(|h_{ \alpha}\left (X_{i}, x\right )|\leq n^{\delta})+n^{\delta }I(h_{ \alpha}\left (X_{i}, x\right )>n^{\delta}),\\& h_{\alpha}^{(2)}\left (X_{i}, x\right )=(h_{\alpha}\left (X_{i}, x \right )+n^{\delta })I(h_{\alpha}\left (X_{i}, x\right )< -n^{\delta})+(h_{ \alpha}\left (X_{i}, x\right )-n^{\delta })I(h_{\alpha}\left (X_{i}, x \right )>n^{\delta}), \end{aligned}$$

and

$$ H_{\alpha}^{(1)}\left (X_{i}, x\right )=h_{\alpha}^{(1)}\left (X_{i}, x \right )-Eh_{\alpha}^{(1)}\left (X_{i}, x\right ),~~H_{\alpha}^{(2)} \left (X_{i}, x\right )=h_{\alpha}^{(2)}\left (X_{i}, x\right )-Eh_{ \alpha}^{(2)}\left (X_{i}, x\right ). $$
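By construction, the two truncation parts always recombine to the original variable, \(h_{\alpha}=h_{\alpha}^{(1)}+h_{\alpha}^{(2)}\): \(h_{\alpha}^{(1)}\) clips \(h_{\alpha}\) to \([-n^{\delta}, n^{\delta}]\) and \(h_{\alpha}^{(2)}\) carries the exceedance. A quick numerical check of this identity (our own sketch; `level` plays the role of \(n^{\delta}\)):

```python
import numpy as np

def truncate_parts(h, level):
    """Split h into h1 (h clipped to [-level, level]) and h2 (the
    exceedance beyond +/- level), mirroring h^(1) and h^(2) above."""
    h1 = np.clip(h, -level, level)
    h2 = h - h1
    return h1, h2

h = np.array([-5.0, -0.3, 0.5, 3.0])
h1, h2 = truncate_parts(h, level=2.0)
# h1 = [-2, -0.3, 0.5, 2] and h2 = [-3, 0, 0, 1]; h1 + h2 recovers h.
print(h1, h2)
```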

Without loss of generality, assume that \(E h_{\alpha}\left (X_{i}, x\right )=0\). We can get

$$\begin{aligned}& \frac{1}{n^{1-\mu}} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x \right ) \\& = \frac{1}{n^{1-\mu}} \sum _{i=1}^{n}\left (h_{\alpha}\left (X_{i}, x \right )-E h_{\alpha}\left (X_{i}, x\right )\right ) \\& = \frac{1}{n^{1-\mu}} \sum _{i=1}^{n}\left (h_{\alpha}^{(1)}\left (X_{i}, x\right )-Eh_{\alpha}^{(1)}\left (X_{i}, x\right )\right )+ \frac{1}{n^{1-\mu}} \sum _{i=1}^{n}\left (h_{\alpha}^{(2)}\left (X_{i}, x\right )-Eh_{\alpha}^{(2)}\left (X_{i}, x\right )\right ) \\& = \frac{1}{n^{1-\mu}} \sum _{i=1}^{n}H_{\alpha}^{(1)}\left (X_{i}, x \right )+\frac{1}{n^{1-\mu}} \sum _{i=1}^{n}H_{\alpha}^{(2)}\left (X_{i}, x\right ) \\& := S_{n 1}+S_{n 2} . \end{aligned}$$
(2.4)

From (2.4) it follows that to prove (2.2), it suffices to show \(S_{n 1} \stackrel{a.s.}{\longrightarrow} 0\) and \(S_{n 2} \stackrel{a.s.}{\longrightarrow} 0\) for each \(x \in \mathrm{R}\).

First, we prove \(S_{n 1} \stackrel{a.s.}{\longrightarrow} 0\) for \(x \in \mathrm{R}\). By \(C_{r}\) inequality and Lemma A.3, we have for \(x \in \mathrm{R}\) and \(p>2\),

$$\begin{aligned} &E\left |h_{\alpha}^{(1)}\left (X_{i}, x\right )-E h_{\alpha}^{(1)} \left (X_{i}, x\right )\right |^{p} \\ \leq & 2^{p-1}\left (E\left |h_{\alpha}^{(1)}\left (X_{i}, x\right ) \right |^{p}+\left |E h_{\alpha}^{(1)}\left (X_{i}, x\right )\right |^{p} \right ) \\ \ll & E\left |h_{\alpha}^{(1)}\left (X_{i}, x\right )\right |^{p}+ \left (E\left |h_{\alpha}^{(1)}\left (X_{i}, x\right )\right |\right )^{p} \\ \ll & E\left |h_{\alpha}^{(1)}\left (X_{i}, x\right )\right |^{p} \\ \ll & E\left |-n^{\delta }I(h_{\alpha}\left (X_{i}, x\right )< -n^{ \delta})+h_{\alpha}\left (X_{i}, x\right )I(|h_{\alpha}\left (X_{i}, x \right )|\leq n^{\delta})+n^{\delta }I(h_{\alpha}\left (X_{i}, x \right )>n^{\delta})\right |^{p} \\ \ll & n^{\delta (p-r)} E\left |h_{\alpha}\left (X_{i}, x\right ) \right |^{r} I\left (\left |h_{\alpha}\left (X_{i}, x\right )\right | \leq n^{\delta}\right )+n^{\delta p}P\left (\left |h_{\alpha}\left (X_{i}, x\right )\right | > n^{\delta}\right ) \\ \ll & n^{\delta (p-r)} . \end{aligned}$$
(2.5)

Similarly, we can prove that

$$\begin{aligned} E\left (h_{\alpha}^{(1)}\left (X_{i}, x\right )-E h_{\alpha}^{(1)} \left (X_{i}, x\right )\right )^{2} \ll n^{\delta (2-r)} . \end{aligned}$$
(2.6)

Thus, by Markov’s inequality, Lemma A.1, and Lemma A.2, combined with (2.5) and (2.6), it follows that for any \(\varepsilon >0\),

$$\begin{aligned} P\left (\left |S_{n 1}\right |>\varepsilon \right ) \ll & n^{-(1- \mu ) p} E\left |\sum _{i=1}^{n}H_{\alpha}^{(1)}\left (X_{i}, x \right )\right |^{p} \\ \ll & n^{-(1-\mu ) p}\left \{\sum _{i=1}^{n}E\left |\left (h_{ \alpha}^{(1)}\left (X_{i}, x\right )-E h_{\alpha}^{(1)}\left (X_{i}, x \right )\right )\right |^{p}\right . \\ &\left .~~~~~~~~~~~~~+\left (\sum _{i=1}^{n} E\left |\left (h_{ \alpha}^{(1)}\left (X_{i}, x\right )-E h_{\alpha}^{(1)}\left (X_{i}, x \right )\right )\right |^{2}\right )^{p / 2}\right \} \\ \ll & n^{-(1-\mu ) p}\left \{n^{\delta (p-r)+1}+n^{p / 2+\delta (2-r) p / 2}\right \} \\ =& n^{-(1-\mu -\delta ) p+1-r \delta}+n^{-[1-2 \mu -\delta (2-r)] p/2+ \delta} . \end{aligned}$$
(2.7)

Then, from \(1< r\leq 2\) and \(\mu <1-\frac{1}{r}\) we have \(1-2\mu -(1-\mu )(2-r)>0\). Moreover, by \(\delta <1-\mu \), we can get \(1-\mu -\delta >0\) and \(1-2\mu -\delta (2-r)>0\). Note that \(1-\delta r<1-\delta \) and \(0<\delta <1\), we now choose p such that

$$\begin{aligned} (1-\mu -\delta )p-1+\delta r>1,~~[1-2\mu -\delta (2-r)]p/2-\mu >1. \end{aligned}$$
(2.8)

This, together with (2.7) and (2.8), implies that \(\sum P\left (\left |S_{n 1}\right |>\varepsilon \right )<\infty \) for sufficiently large p. It follows by the Borel–Cantelli lemma that

$$\begin{aligned} S_{n 1}\stackrel{a.s.}{\longrightarrow} 0 \quad \text{for } x\in \mathrm{R}. \end{aligned}$$
(2.9)

Next, we prove \(S_{n 2} \stackrel{a.s.}{\longrightarrow} 0\) for \(x \in \mathrm{R}\). For this purpose, set \(S_{n2}^{(2)}=\frac{1}{n^{1-\mu}} \sum _{i=1}^{n} h_{\alpha}^{(2)} \left (X_{i}, x\right )\). Then, \(S_{n 2}=S_{n2}^{(2)}-E S_{n2}^{(2)}\).

Note that (2.3) implies \(\mu -\delta (r-1)<-s<0\). Hence, applying Lemma A.3, we obtain for \(x \in \mathrm{R}\),

$$\begin{aligned}& \left |E S_{n2}^{(2)}\right | \\& \quad \leq \frac{1}{n^{1-\mu}} \sum _{i=1}^{n} E\left \{\left |h_{\alpha}\left (X_{i}, x\right )\right | I\left ( \left |h_{\alpha}\left (X_{i}, x\right )\right |>n^{\delta}\right )+n^{ \delta }I\left (\left |h_{\alpha}\left (X_{i}, x\right )\right |>n^{ \delta}\right )\right \} \\& \quad \ll \frac{1}{n^{1-\mu}}\left \{ n^{\delta (1-r)} \sum _{i=1}^{n} \left [E\left |h_{\alpha}\left (X_{i}, x\right )\right |^{r} I\left ( \left |h_{\alpha}\left (X_{i}, x\right )\right |>n^{\delta}\right )\right ]+n^{ \delta +1}P\left (\left |h_{\alpha}\left (X_{i}, x\right )\right |>n^{\delta} \right )\right \} \\& \quad \ll n^{\mu -\delta (r-1)} \rightarrow 0, ~~~~n \rightarrow \infty , \end{aligned}$$
(2.10)

which means that

$$\begin{aligned} E S_{n2}^{(2)} \rightarrow 0 . \end{aligned}$$
(2.11)

Moreover,

$$\begin{aligned} \left |S_{n2}^{(2)}\right | \leq & \frac{1}{n^{1-\mu}} \sum _{i=1}^{n} \left \{\left |h_{\alpha}\left (X_{i}, x\right )\right | I\left ( \left |h_{\alpha}\left (X_{i}, x\right )\right |>n^{\delta}\right )+n^{ \delta }I\left (\left |h_{\alpha}\left (X_{i}, x\right )\right |>n^{ \delta}\right )\right \} \\ \ll &\frac{1}{n^{1-\mu}} \sum _{i=1}^{n}\left |h_{\alpha}\left (X_{i}, x\right )\right | I\left (\left |h_{\alpha}\left (X_{i}, x\right ) \right |>n^{\delta}\right )+\frac{1}{n^{1-\mu}} \sum _{i=1}^{n}n^{ \delta }I\left (\left |h_{\alpha}\left (X_{i}, x\right )\right |>n^{ \delta}\right ) \\ \ll &\frac{1}{n^{1-\mu}} \sum _{i=1}^{n}\left |h_{\alpha}\left (X_{i}, x\right )\right | I\left (\left |h_{\alpha}\left (X_{i}, x\right ) \right |>i^{\delta}\right )+\frac{1}{n^{1-\mu}} \sum _{i=1}^{n}n^{ \delta }I\left (\left |h_{\alpha}\left (X_{i}, x\right )\right |>i^{ \delta}\right ) \\ \ll & \frac{1}{n^{1-\mu}} \sum _{i=1}^{n} \xi _{i}+ \frac{n^{\delta}}{n^{1-\mu}} \sum _{i=1}^{n} \eta _{i}, \end{aligned}$$
(2.12)

where \(\xi _{i}=\left |h_{\alpha}\left (X_{i}, x\right )\right | I\left ( \left |h_{\alpha}\left (X_{i}, x\right )\right |>i^{\delta}\right )\) and \(\eta _{i}=I\left (\left |h_{\alpha}\left (X_{i}, x\right )\right |>i^{ \delta}\right )\). Since \(S_{n 2}=S_{n2}^{(2)}-E S_{n2}^{(2)}\) and \(E S_{n2}^{(2)}\rightarrow 0\) by (2.11), to prove \(S_{n2}\stackrel{a.s.}{\longrightarrow}0\) it suffices to show \(S_{n2}^{(2)}\stackrel{a.s.}{\longrightarrow} 0\). By (2.12), it is in turn sufficient to show

$$ n^{-(1-\mu )} \sum _{i=1}^{n} \xi _{i} \stackrel{a.s.}{\longrightarrow} 0 ,~~n^{-(1-\mu -\delta )} \sum _{i=1}^{n} \eta _{i} \stackrel{a.s.}{\longrightarrow} 0. $$

Set

$$ J_{n}^{(1)}=\sum _{i=1}^{n} i^{-(1-\mu )} \xi _{i},~~~~J_{n}^{(2)}= \sum _{i=1}^{n} i^{-(1-\mu -\delta )} \eta _{i}. $$

In what follows, we prove that \(J_{n}^{(1)}\) and \(J_{n}^{(2)}\) converge almost surely by the subsequence method.

By (2.3), there exists \(s>0 \) such that \((s+\mu )/(r-1)<\delta <1-\mu \). Then, for all \(m \geq n \geq 1\),

$$\begin{aligned} E\left |J_{m}^{(1)}-J_{n}^{(1)}\right | \leq & \sum _{i=n+1}^{m} i^{-(1- \mu )} E \xi _{i} \\ \ll & \sum _{i=n+1}^{m} i^{-(1-\mu )} E\left |h_{\alpha}\left (X_{i}, x\right )\right | I\left (\left |h_{\alpha}\left (X_{i}, x\right ) \right |>i^{\delta}\right ) \\ \ll & \sum _{i=n+1}^{m} i^{\delta (1-r)+\mu -1} \ll \sum _{i=n+1}^{ \infty} i^{-(1+s)} \\ \ll &n^{-s} \rightarrow 0, \quad n \rightarrow \infty . \end{aligned}$$
(2.13)

By (2.13), the random sequence \(\{J_{n}^{(1)}\}\) is a Cauchy sequence in \(L_{1}\). Hence, there exists a random variable \(J^{(1)}\) such that \(E|J^{(1)}|<\infty \) and \(E\left |J_{n}^{(1)}-J^{(1)}\right | \rightarrow 0\). Then, for any \(\varepsilon >0\),

$$\begin{aligned} P\left (\left |J_{2^{k}}^{(1)}-J^{(1)}\right |>\varepsilon \right ) \ll & E\left |J_{2^{k}}^{(1)}-J^{(1)}\right | \leq \limsup _{n\rightarrow \infty} E\left |J_{2^{k}}^{(1)}-J_{n}^{(1)} \right | \\ \ll & \sum _{i=2^{k}+1}^{\infty} i^{-(1+s)}\ll 2^{-k s}, \end{aligned}$$
(2.14)

which implies that \(\sum _{k=1}^{\infty} P\left (\left |J_{2^{k}}^{(1)}-J^{(1)}\right |> \varepsilon \right )<\infty \), that is to say that \(J_{2^{k}}^{(1)}\stackrel{a.s.}{\longrightarrow} J^{(1)}\). Also,

$$\begin{aligned} P\left (\max _{2^{k-1}< n \leq 2^{k}}\left |J_{n}^{(1)}-J_{2^{k-1}}^{(1)} \right |>\varepsilon \right ) \ll \sum _{i=2^{k-1}+1}^{2^{k}} i^{-(1+s)} \ll 2^{-k s}, \end{aligned}$$
(2.15)

which means that \(\sum \limits _{k=1}^{\infty} P\left (\max \limits _{2^{k-1}< n \leq 2^{k}} \left |J_{n}^{(1)}-J_{2^{k-1}}^{(1)}\right |>\varepsilon \right )< \infty \), and thus \(\max \limits _{2^{k-1}< n \leq 2^{k}}\left |J_{n}^{(1)}-J_{2^{k-1}}^{(1)} \right |\stackrel{a.s.}{\longrightarrow} 0\) as \(k \rightarrow \infty \). Combining this with \(J_{2^{k}}^{(1)}\stackrel{a.s.}{\longrightarrow} J^{(1)}\), we have

$$\begin{aligned} J_{n}^{(1)} \stackrel{a.s.}{\longrightarrow} J^{(1)}. \end{aligned}$$
(2.16)

By (2.16) and the Kronecker lemma,

$$\begin{aligned} \frac{1}{n^{(1-\mu )}}\sum _{i=1}^{n} \xi _{i} \stackrel{a.s.}{\longrightarrow} 0 \end{aligned}$$
(2.17)

is obtained.

Similarly to the proof of \(J_{n}^{(1)} \stackrel{a.s.}{\longrightarrow} J^{(1)}\), for \(m^{\prime }\geq n \geq 1\), Lemma A.3 yields that

$$\begin{aligned} E\left |J_{m^{\prime }}^{(2)}-J_{n}^{(2)}\right | \leq & \sum _{i=n+1}^{m^{\prime }} i^{-(1-\mu -\delta )} E \eta _{i} \\ \ll & \sum _{i=n+1}^{m^{\prime }} i^{-(1-\mu -\delta )} P\left (\left |h_{ \alpha}\left (X_{i}, x\right )\right |>i^{\delta}\right ) \\ \ll & \sum _{i=n+1}^{m^{\prime }} i^{\delta (1-r)+\mu -1} \ll \sum _{i=n+1}^{ \infty} i^{-(1+s)} \\ \ll &n^{-s} \rightarrow 0, ~~~~~~~~~~~n \rightarrow \infty . \end{aligned}$$
(2.18)

By (2.18), the random sequence \(\{J_{n}^{(2)}\}\) is a Cauchy sequence in \(L_{1}\). Hence, there exists a random variable \(J^{(2)}\) such that \(E|J^{(2)}|<\infty \) and \(E\left |J_{n}^{(2)}-J^{(2)}\right | \rightarrow 0\). Similarly to the derivation of (2.14)–(2.17), it is obtained that

$$\begin{aligned} \frac{1}{n^{(1-\mu -\delta )}}\sum _{i=1}^{n} \eta _{i} \stackrel{a.s.}{\longrightarrow} 0. \end{aligned}$$
(2.19)

From (2.17) and (2.19) it follows that \(S_{n2}^{(2)} \stackrel{a.s.}{\longrightarrow} 0\), which together with (2.11) implies \(S_{n 2} \stackrel{a.s.}{\longrightarrow} 0\) for \(x \in \mathrm{R}\). Therefore, combining this with (2.9), equation (2.2) holds for \(1< r\leq 2\) and \(0\leq \mu <1-1/r\).

(ii) When \(r>2\), it follows from \(0\leq \mu <1/2\) that there exist \(\delta >0\) and \(s>0\) such that \((2s+\mu )/(r-1)<\delta <1-\mu \). Similarly to the proof for \(1< r\leq 2\), equation (2.2) still holds.

The proof is completed. □

In the following, we give the proof of Theorem 2.1.

Proof of Theorem 2.1

It is easy to observe that

$$\begin{aligned} & n^{\mu}\left |\mathrm{CVaR}_{n,\alpha}(X)-\mathrm{CVaR}_{\alpha}(X) \right | \\ = & n^{\mu}\left |\left (\hat{\theta}_{n}-\theta ^{*}\right ) I \left (\hat{\theta}_{n}-\theta ^{*} \geq 0\right )+\left ( \hat{\theta}_{n}-\theta ^{*}\right ) I\left (\hat{\theta}_{n}-\theta ^{*}< 0 \right )\right | \\ \leq & n^{\mu}\left |\left (\hat{\theta}_{n}-\theta ^{*}\right ) I \left (\hat{\theta}_{n}-\theta ^{*} \geq 0\right )\right |+n^{\mu} \left |\left (\hat{\theta}_{n}-\theta ^{*}\right ) I\left ( \hat{\theta}_{n}-\theta ^{*}< 0\right )\right | \\ :=& I_{n 1}+I_{n 2} . \end{aligned}$$
(2.20)

By Lemma 2.4, it follows that for any \(\varepsilon >0\), there exists \(N>0\) such that

$$\begin{aligned} n^{\mu} \left |\frac{1}{n} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x \right )-E h_{\alpha}\left (X_{1}, x\right )\right |\leq \frac{\varepsilon}{2}~~~~~ \text{ a.s. ,} \end{aligned}$$
(2.21)

for \(n>N\) and \(x \in \mathrm{R}\). Hence, we have

$$\begin{aligned} I_{n 1} =& n^{\mu}\left |\inf _{x \in \mathrm{R}} \frac{1}{n} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x\right )-\inf _{x \in \mathrm{R}} E h_{ \alpha}\left (X_{1}, x\right )\right | I\left (\hat{\theta}_{n}- \theta ^{*} \geq 0\right ) \\ = & n^{\mu }\left |\inf _{x \in \mathrm{R}}\left \{\frac{1}{n} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x\right )-\inf _{x \in \mathrm{R}} E h_{ \alpha}\left (X_{1}, x\right )\right \} \right | I\left (\hat{\theta}_{n}- \theta ^{*} \geq 0\right ) \\ = & n^{\mu }\inf _{x \in \mathrm{R}}\left |\frac{1}{n} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x\right )-\inf _{x \in \mathrm{R}} E h_{ \alpha}\left (X_{1}, x\right )\right | I\left (\hat{\theta}_{n}- \theta ^{*} \geq 0\right ) \\ \leq & n^{\mu }\inf _{x \in \mathrm{R}} \left \{\left |\frac{1}{n} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x\right )-E h_{\alpha}\left (X_{1}, x\right )\right | +\left |E h_{\alpha}\left (X_{1}, x\right )-\inf _{x \in \mathrm{R}} E h_{\alpha}\left (X_{1}, x\right ) \right |\right \} \\ \leq & \inf _{x \in \mathrm{R}}\left \{\frac{\varepsilon}{2}+n^{\mu} \left |E h_{\alpha}\left (X_{1}, x\right )-\inf _{x \in \mathrm{R}} E h_{ \alpha}\left (X_{1}, x\right )\right |\right \} \\ =&\frac{\varepsilon}{2}+n^{\mu}\inf _{x \in \mathrm{R}}\left \{ \left |E h_{\alpha}\left (X_{1}, x\right )-\inf _{x \in \mathrm{R}} E h_{ \alpha}\left (X_{1}, x\right )\right |\right \}=\frac{\varepsilon}{2}~~~~~ \text{ a.s., } \end{aligned}$$
(2.22)

as \(n>N\). Similarly, we obtain

$$\begin{aligned} I_{n 2} =& n^{\mu}\left |\inf _{x \in \mathrm{R}} \frac{1}{n} \sum _{i=1}^{n} h_{ \alpha}\left (X_{i}, x\right )-\inf _{x \in \mathrm{R}} E h_{\alpha}\left (X_{1}, x\right )\right | I\left (\hat{\theta}_{n}-\theta ^{*}< 0\right ) \\ =&n^{\mu }\left | \inf _{x \in \mathrm{R}}\left \{E h_{\alpha}\left (X_{1}, x \right )-\inf _{x \in \mathrm{R}} \frac{1}{n} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x\right )\right \}\right | I\left (\hat{\theta}_{n}-\theta ^{*}< 0 \right ) \\ =&n^{\mu }\inf _{x \in \mathrm{R}}\left |E h_{\alpha}\left (X_{1}, x\right )- \inf _{x \in \mathrm{R}} \frac{1}{n} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x \right )\right | I\left (\hat{\theta}_{n}-\theta ^{*}< 0\right ) \\ \leq & n^{\mu }\inf _{x \in \mathrm{R}} \left \{ \left |E h_{\alpha}\left (X_{1}, x\right )-\frac{1}{n} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x \right )\right |+\left |\frac{1}{n} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x\right )-\inf _{x \in \mathrm{R}} \frac{1}{n} \sum _{i=1}^{n} h_{\alpha} \left (X_{i}, x\right )\right | \right \} \\ \leq & \inf _{x \in \mathrm{R}}\left \{\frac{\varepsilon}{2}+n^{\mu}\left | \frac{1}{n} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x\right )-\inf _{x \in \mathrm{R}} \frac{1}{n} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x\right ) \right |\right \} \\ =&\frac{\varepsilon}{2}+ n^{\mu }\inf _{x \in \mathrm{R}}\left \{\left | \frac{1}{n} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x\right )-\inf _{x \in \mathrm{R}} \frac{1}{n} \sum _{i=1}^{n} h_{\alpha}\left (X_{i}, x\right ) \right |\right \} = \frac{\varepsilon}{2}~~~~~ \text{ a.s., } \end{aligned}$$
(2.23)

for \(n>N\). A combination of (2.20)–(2.23) yields that for any \(\varepsilon >0\), there exists \(N>0\) such that

$$\begin{aligned} n^{\mu}\left |\mathrm{CVaR}_{n,\alpha}(X)-\mathrm{CVaR}_{\alpha}(X) \right |=n^{\mu}\left |\hat{\theta}_{n}-\theta ^{*}\right | \leq \varepsilon \text{ a.s., } \end{aligned}$$
(2.24)

for \(n>N\) and \(x \in \mathrm{R}\), which implies that Theorem 2.1 holds. The proof is complete. □

3 Numerical simulation

In this section, we carry out simulations to study the finite-sample performance of the strong consistency of conditional value-at-risk estimators based on ANA samples. The simulations are conducted for the following two cases.

Case I MA(1) process.

The following MA(1) process is considered:

$$\begin{aligned} X_{t}=\varepsilon _{t}-0.5\varepsilon _{t-1}, \end{aligned}$$
(3.1)

where \(\{\varepsilon _{t}\}\) is a white noise sequence, with the properties:

$$ E(X_{t})=0,~~~~D(X_{t})=1.25,~~~~Cov(X_{t},X_{t-1})=-0.5. $$

For \(k\geq 2\), \(Cov(X_{t},X_{t-k})=0\). Therefore, this MA(1) process forms a \(\rho ^{*}\)-mixing sequence, consequently qualifying as an ANA sequence. Utilising this MA(1) process to generate random numbers, we consider probability levels \(\alpha = 0.01, 0.02, \ldots , 0.1\) with sample sizes \(n=25, 50, 100, 200, 300, 500\). Initially, we compute the true value of CVaR based on the generated ANA samples. Subsequently, we estimate the CVaR values using the probability levels α and sample sizes n specified above. Finally, we take the average over 1000 simulation repetitions to obtain the true and estimated CVaR values, resulting in the CVaR dataset (refer to Table 1).

Table 1 The estimated values of CVaR data under MA(1) process

Based on the aforementioned MA(1) model, simulations were conducted 1000 times for sample sizes \(n=25, 50, 100, 200, 300, 500\). The average of these simulations was taken to represent the true and estimated values of CVaR, resulting in the curves for the true and estimated values (refer to Fig. 1). In the graph, the estimated values of CVaR are depicted by the blue dashed line, while the true values of CVaR are represented by the red solid line.
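A condensed version of this simulation can be sketched as follows. This is our own illustration: the seed, the standard-normal white noise, and the number of repetitions are assumptions, and the paper's exact settings may differ.

```python
import numpy as np

rng = np.random.default_rng(2024)

def ma1_sample(n, theta=0.5):
    """One path of the MA(1) process X_t = eps_t - 0.5*eps_{t-1} in (3.1)."""
    eps = rng.standard_normal(n + 1)
    return eps[1:] - theta * eps[:-1]

def cvar_hat(x, alpha):
    """Estimator (1.4), scanning the sample points for the infimum."""
    n = len(x)
    return min(t + np.maximum(x - t, 0.0).sum() / (n * (1.0 - alpha))
               for t in x)

# Average the estimate over repetitions, as in Table 1; the averages
# should stabilise as n grows, reflecting Theorem 2.1.
for n in (25, 100, 500):
    est = np.mean([cvar_hat(ma1_sample(n), alpha=0.05) for _ in range(200)])
    print(n, round(est, 3))
```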

Figure 1

CVaR of true value curve and estimated value curve under MA(1) model

In the MA(1) model, based on the true and estimated values in Table 1 and the curves in Fig. 1, it can be observed that as the sample size n increases, the estimated values of CVaR approach the true values more closely.

Case II ARMA(1,1) process.

The following ARMA(1,1) process is selected

$$\begin{aligned} X_{t}=0.3X_{t-1}+\varepsilon _{t}-0.7\varepsilon _{t-1}, \end{aligned}$$
(3.2)

where \(\{\varepsilon _{t}\}\) is a white noise sequence, noting that

$$ E(X_{t})=0,~~~~D(X_{t})=1.175824,~~~~Cov(X_{t},X_{t-1})=-0.3472527. $$

When \(k\geq 2\), \(Cov(X_{t},X_{t-k})=0.3\,Cov(X_{t},X_{t-k+1})\), so the covariances at all lags \(k\geq 1\) are negative. Thus this ARMA(1,1) process is an NA sequence, and it follows that it is also an ANA sequence. Other settings are the same as in Case I. As for the MA(1) model, simulations were conducted 1000 times for sample sizes \(n=25, 50, 100, 200, 300, 500\). The average of these simulations was taken as the true and estimated values of CVaR. The estimated values of CVaR under the ARMA(1,1) model are presented in Table 2. Furthermore, we obtained the corresponding true value curve and estimated value curve for CVaR under the ARMA(1,1) model (refer to Fig. 2). In the graph, the estimated values of CVaR are depicted by the blue dashed line, while the true values of CVaR are represented by the red solid line.
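The ARMA(1,1) path can be generated recursively; the sketch below (our own, with an assumed seed and burn-in) also checks the stationary variance \(D(X_{t})=(1+0.49-0.42)/0.91\approx 1.1758\) empirically:

```python
import numpy as np

rng = np.random.default_rng(7)

def arma11_sample(n, phi=0.3, theta=0.7, burn=200):
    """One path of X_t = 0.3 X_{t-1} + eps_t - 0.7 eps_{t-1} in (3.2),
    discarding a burn-in period so the path is near-stationary."""
    m = n + burn
    eps = rng.standard_normal(m + 1)
    x = np.zeros(m + 1)
    for t in range(1, m + 1):
        x[t] = phi * x[t - 1] + eps[t] - theta * eps[t - 1]
    return x[burn + 1:]

x = arma11_sample(20000)
print(round(np.var(x), 3))                # close to 1.07 / 0.91 = 1.176
print(round(np.mean(x[1:] * x[:-1]), 3))  # lag-1 autocovariance, negative
```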

Figure 2

CVaR of true value curve and estimated value curve under ARMA(1,1) model

Table 2 The estimated values of CVaR data under ARMA(1,1) process

In the ARMA(1,1) model, based on the true and estimated values in Table 2 and the curves in Fig. 2, it can likewise be observed that as the sample size n increases, the estimated values of CVaR approach the true values more closely.

4 Real data exercises

In this section, we apply the optimised CVaR estimation method discussed in this article to a dataset consisting of the closing prices of Hongta Securities and Central China Securities on trading days from May 6, 2021, to May 5, 2023. Days without transactions were excluded, resulting in a sample size of \(n=481\); each time series thus comprises \(n=481\) data points obtained from the CSMAR database. Previous studies have utilised multivariate regression models or time-series models to analyse similar stock data; see Wang et al. [6], Luo [7], and others for further details.

Taking logarithms of the closing price sequences of the stocks, we consider the yield variable Y and the loss variable X, where \(Y=-X\). Consequently, CVaR is computed from the yield variable, for which the logarithmic yield is adopted. The formula for calculating the logarithmic return of a stock is given by:

$$ R_{t}=\ln \left (\frac{P_{t}}{P_{t-1}}\right ). $$

Here, \(P_{t}\) and \(P_{t-1}\) denote the closing prices of the stock on days t and \(t-1\), respectively.
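As a minimal sketch, the log-returns follow directly from a closing-price series; the prices below are made-up numbers for illustration only:

```python
import numpy as np

# Log-returns R_t = ln(P_t / P_{t-1}) from a closing-price series.
prices = np.array([10.00, 10.20, 10.10, 10.35])   # hypothetical prices
log_returns = np.diff(np.log(prices))             # length n-1 for n prices
print(log_returns)
```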

Subsequently, we depict the time series of log-returns in Fig. 3. The autocorrelation function (ACF) and partial autocorrelation function (PACF) of the two samples are illustrated in Figs. 4 and 5, respectively; both suggest that the two series are stationary. We then conduct the Augmented Dickey-Fuller (ADF) test, whose null hypothesis is that the time series is non-stationary. The p-values obtained are less than 0.05, so the null hypothesis is rejected and the two log-return series can be regarded as weakly stationary. Using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), we find that both series are well fitted by ARMA(1,1) models. Consequently, these two sets of log-return time series data can be regarded as ANA random samples.

Figure 3

Time series of Hongta Securities (left) and Central China Securities (right) log-returns

Figure 4

ACF of Hongta Securities (left) and Central China Securities (right) log-returns

Figure 5

PACF of Hongta Securities (left) and Central China Securities (right) log-returns
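The diagnostics above would typically be produced with standard time-series routines (e.g. `acf`, `pacf`, and `adfuller` from Python's statsmodels, or their R counterparts; these tool names are our assumption, not taken from the paper). As a self-contained illustration, the sample ACF used in Figs. 4 and 5 can be computed directly:

```python
import numpy as np

def sample_acf(x, nlags):
    """Sample autocorrelation rho(k) = gamma(k)/gamma(0), where
    gamma(k) = (1/n) * sum_{t=k+1}^{n} (x_t - xbar)(x_{t-k} - xbar)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    gamma0 = np.dot(xc, xc) / n
    return np.array([np.dot(xc[k:], xc[:n - k]) / (n * gamma0)
                     for k in range(nlags + 1)])

rng = np.random.default_rng(0)
white_noise = rng.standard_normal(2000)
acf = sample_acf(white_noise, nlags=5)
# For white noise, the ACF at lags >= 1 should be close to zero.
print(acf)
```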

Finally, we employ the optimisation technique to estimate the CVaR of the log-returns. As illustrated in Fig. 6, at each probability level the CVaR of Hongta Securities is lower than that of Central China Securities, suggesting that the risk associated with Hongta Securities is comparatively lower.

Figure 6

CVaR of Hongta Securities (left) and Central China Securities (right) log-returns

Data Availability

The actual data used in this article come from the CSMAR database and comprise the closing prices of Hongta Securities and Central China Securities from May 6, 2021 to May 5, 2023. The simulation data were generated with R.

References

  1. Pflug, G.: Some remarks on the value-at-risk and the conditional value-at-risk. In: Uryasev, S. (ed.) Probabilistic Constrained Optimization: Methodology and Applications, pp. 272–277. Kluwer Academic, Dordrecht (2000)

  2. Artzner, P., Delbaen, F., Eber, J., Heath, D.: Thinking coherently. Risk 10, 68–71 (1997)

  3. Embrechts, P., Resnick, S., Samorodnitsky, G.: Extreme value theory as a risk management tool. N. Am. Actuar. J. 3(2), 32–41 (1999)

  4. Bodnar, T., Schmid, W., Zabolotskyy, T.: Asymptotic behavior of the estimated weights and of the estimated performance measures of the minimum VaR and the minimum CVaR optimal portfolios for dependent data. Metrika 76(8), 1105–1134 (2013)

  5. Pavlikov, K., Uryasev, S.: CVaR norm and applications in optimization. Optim. Lett. 8(7), 1999–2020 (2014)

  6. Wang, X., Wu, Y., Yu, W., Yang, W., Hu, S.: Asymptotics for the linear kernel quantile estimator of value-at-risk. Test 28(4), 1144–1174 (2019)

  7. Luo, Z.: Nonparametric kernel estimation of CVaR under α-mixing sequences. Stat. Pap. 61(2), 615–643 (2020)

  8. Trindade, A., Uryasev, S., Shapiro, A., Zrazhevsky, G.: Financial prediction with constrained tail risk. J. Bank. Finance 31(11), 3524–3538 (2007)

  9. Xing, G., Yang, S., Li, Y.: Strong consistency of conditional value-at-risk estimate for ϕ-mixing samples. Commun. Stat., Theory Methods 43(23), 5105–5113 (2014)

  10. Luo, Z., Ou, S.: The almost sure convergence rate of the estimator of optimized certainty equivalent risk measure under α-mixing sequences. Commun. Stat., Theory Methods 46(16), 8166–8177 (2017)

  11. Ding, L., Chen, P., Li, Y.: On some inequalities for ψ-mixing sequences and its applications in conditional value-at-risk estimate. Commun. Stat., Theory Methods 49(22), 5455–5467 (2020)

  12. Joag-Dev, K., Proschan, F.: Negative association of random variables with applications. Ann. Stat. 11(1), 286–295 (1983)

  13. Bradley, R.: On the spectral density and asymptotic normality of weakly dependent random fields. J. Theor. Probab. 5, 355–373 (1992)

  14. Zhang, L., Wang, X.: Convergence rates in the strong laws of asymptotically negatively associated random fields. Appl. Math. J. Chin. Univ. Ser. B 14(4), 406–416 (1999)

  15. Zhang, L.: A functional central limit theorem for asymptotically negatively dependent random fields. Acta Math. Hung. 86(3), 237–259 (2000)

  16. Zhang, L.: Central limit theorems for asymptotically negatively associated random fields. Acta Math. Sin. Engl. Ser. 16(4), 691–710 (2000)

  17. Yuan, D., Wu, X.: Limiting behavior of the maximum of the partial sum for asymptotically negatively associated random variables under residual Cesàro alpha-integrability assumption. J. Stat. Plan. Inference 140, 2395–2402 (2010)

  18. Tang, X., Wang, X., Wu, Y., Zhang, F.: The Berry-Esseen type bound of wavelet estimator in a non-randomly designed nonparametric regression model based on ANA errors. ESAIM Probab. Stat. 24, 21–38 (2020)

  19. Wu, Y., Wang, X., Shen, A.: Strong convergence properties for weighted sums of m-asymptotic negatively associated random variables and statistical applications. Stat. Pap. 62, 2169–2194 (2021)

  20. Ko, M.: Some limiting behavior of the maximum of the partial sum for asymptotically negatively associated random vectors in Hilbert space. Commun. Stat., Theory Methods 52(11), 3598–3611 (2023)


Acknowledgements

The authors greatly appreciate the constructive comments and suggestions of the Editor and referee.

Funding

This research was supported by the University Key Project of the Natural Science Foundation of Anhui Province (grant no. 2022AH051723, 2023AH052106, KJ2021A1032, KJ2019A0683, KJ2020A0679 and KJ2021A1031), Key Project of Natural Science Foundation of Chaohu University (grant no. XLZ202201), Key Construction Discipline of Chaohu University (grant no. kj22zdjsxk01, kj22yjzx05, and kj22xjzz01).

Author information


Contributions

RJ derived and proved the inequality and was a major contributor in writing the manuscript. XFT did numerical simulation and real data analysis. KC made significant contributions to data acquisition, analysis, and English writing. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kan Chen.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix

To prove our main results, the following lemmas are indispensable. The first one can be found in Zhang and Wang [14].

Lemma A.1

Let \(\{X_{n};~n\geq 1\}\) be a sequence of ANA random variables. If \(\{g_{n}(\cdot );~n\geq 1\}\) are all increasing or all decreasing functions, then \(\{g_{n}(X_{n});~n\geq 1\}\) is still a sequence of ANA random variables.

The next one is the Rosenthal type inequality for ANA random variables, which was established by Zhang [16].

Lemma A.2

Suppose that \(\{X_{k};~k\in N^{d}\}\) is a field of ANA random variables with \(EX_{k}=0\) and \(\|X_{k}\|_{p}<\infty \) for some \(p\geq 2\) and all k. Then there exists a positive constant \(C_{p}\), depending only on p and \(\rho ^{-}(\cdot )\), such that for any finite set \(S\subset N^{d}\),

$$\begin{aligned} E\left |\sum _{k\in S}X_{k}\right |^{p}\leq C_{p}\left \{\sum _{k\in S}E|X_{k}|^{p}+ \left (\sum _{k\in S}E|X_{k}|^{2}\right )^{p/2}\right \}. \end{aligned}$$

When \(d=1\), the Rosenthal-type inequality remains true for the maximal partial sums, provided some additional conditions on \(\rho ^{-}(\cdot )\) are imposed.

The third lemma is derived from Xing et al. [9].

Lemma A.3

If \(E|X|^{r}<\infty \) for \(r>0\), then there exists a positive constant C such that for \(x \in \mathrm{R}\),

$$\begin{aligned} E\left |h_{\alpha}(X, x)-E h_{\alpha}(X, x)\right |^{r} \leq C E|X|^{r}, \end{aligned}$$
(A.1)

where \(h_{\alpha}(X, x)=x+\frac{1}{1-\alpha}[X-x]^{+}\).
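The function \(h_{\alpha}(X,x)\) of Lemma A.3 is the objective of the optimisation representation of CVaR (cf. Pflug [1]): minimising \(E h_{\alpha}(X,x)\) over x yields \(\mathrm{CVaR}_{\alpha}(X)\), with the minimiser at \(\mathrm{VaR}_{\alpha}(X)\). A small numerical sanity check in Python, using a crude grid search on a standard normal sample (an illustration only, not the estimation method of the paper):

```python
import numpy as np

def h_alpha(X, x, alpha):
    """h_alpha(X, x) = x + (1/(1-alpha)) * max(X - x, 0), as in Lemma A.3."""
    return x + np.maximum(X - x, 0.0) / (1.0 - alpha)

alpha = 0.95
rng = np.random.default_rng(1)
X = rng.standard_normal(10000)

# Grid-search the sample mean of h_alpha over x; the minimiser should sit
# near VaR_0.95 of N(0,1) (~1.645) and the minimum near CVaR_0.95 (~2.063).
grid = np.linspace(-1.0, 4.0, 2001)
objective = np.array([h_alpha(X, x, alpha).mean() for x in grid])
x_star = grid[np.argmin(objective)]
print(x_star, objective.min())
```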

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Jin, R., Tang, X. & Chen, K. Asymptotic properties of conditional value-at-risk estimate for asymptotic negatively associated samples. J Inequal Appl 2024, 118 (2024). https://doi.org/10.1186/s13660-024-03191-5

