• Research
• Open Access

# On locally convex probabilistic normed spaces

Journal of Inequalities and Applications 2016, 2016:319

https://doi.org/10.1186/s13660-016-1263-1

• Received: 18 April 2016
• Accepted: 18 November 2016

## Abstract

In this paper, we give the notion of locally convex probabilistic seminormed spaces and discuss some properties of such spaces.

## Keywords

• probabilistic normed space
• locally convex probabilistic seminormed space
• local base

## MSC

• 4E70
• 46S50

## 1 Introduction

Locally convex probabilistic normed spaces are an interesting topic that several papers have already discussed, and one that we enjoy as well. On the basis of these papers, we search for further concepts and properties of locally convex probabilistic normed spaces. In this article, we present our results.

Probabilistic normed spaces (briefly, PN spaces) were introduced by Šerstnev  by means of a definition that was closely modeled on the theory of normed spaces. Here we consistently adopt the newer and, in our opinion, convincing definition of a PN space given by Alsina, Schweizer, and Sklar , whose notation and concepts we use in what follows. On the basis of this classical work, continuity properties, linear operators, and nonlinear operators on PN spaces have been studied in detail , and contraction maps, boundedness properties, finite and countably infinite products, and probabilistic quasi-normed spaces have been discussed in depth . For the recent advances on PN spaces, we refer to .

We recall the definition, properties, and examples of probabilistic normed spaces. Let Δ be the space of distribution functions, and $$\Delta^{+}:=\{F\in \Delta:F(0)=0\}$$ be the subset of distance distribution functions . The space Δ can be metrized in several equivalent ways so that the metric topology coincides with the topology of weak convergence for distribution functions. Here, we assume that Δ is metrized by the Sibley metric $$d_{S}$$, which is the same metric denoted by $$d_{L}$$ in . We also consider the subset $$D^{+}\subset \Delta^{+}$$ of the proper distance distribution functions, that is, those $$F\in \Delta^{+}$$ for which $$\lim_{x\longrightarrow +\infty} F(x)=1$$.

A triangle function is a mapping $$\tau: \Delta^{+}\times \Delta^{+}\longrightarrow \Delta^{+}$$ that is commutative, associative, nondecreasing in each variable and has $$\varepsilon_{0}$$ as the identity, where $$\varepsilon_{a}$$ $$(a\leq +\infty)$$ is the distribution function defined by
$$\varepsilon_{a}(t):= \textstyle\begin{cases} 0, & t\leq a,\\ 1, & t>a. \end{cases}$$
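As a minimal executable sketch (illustration only), $$\varepsilon_{a}$$ for finite a is the step function below:

```python
# Minimal sketch of the step distribution function eps_a (finite a >= 0):
# eps_a(t) = 0 for t <= a and 1 for t > a; eps_0 is the identity for
# triangle functions.

def eps(a):
    return lambda t: 0.0 if t <= a else 1.0

e0 = eps(0.0)
assert e0(0.0) == 0.0 and e0(1e-9) == 1.0   # eps_0 jumps immediately after 0
e2 = eps(2.0)
assert e2(2.0) == 0.0 and e2(2.5) == 1.0    # eps_2 jumps at t = 2
```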
Given a nonempty set S, a mapping $$\mathcal {F}$$ from $$S\times S$$ into $$\Delta^{+}$$ and a triangle function τ, a probabilistic metric space (briefly a PM space) is the triple $$(S, \mathcal {F}, \tau)$$ with the following properties, where we set $$F_{p, q}:=\mathcal {F}_{p, q}$$:
1. (PM1)

$$F_{p, q}=\varepsilon_{0}$$ if and only if $$p=q$$;

2. (PM2)

$$F_{p, q}=F_{q, p}$$ for all p and $$q \in S$$;

3. (PM3)

$$F_{p, r}\geq \tau(F_{p, q}, F_{q, r})$$ for all p, q, $$r \in S$$.

A probabilistic normed space (briefly, a PN space) is a quadruple $$(\mathcal{V}, \upsilon, \tau, \tau^{*})$$, where $$\mathcal{V}$$ is a vector space, τ and $$\tau^{*}$$ are continuous triangle functions such that $$\tau\leq \tau^{*}$$, and υ is a mapping from $$\mathcal{V}$$ into $$\Delta^{+}$$, called the probabilistic norm, such that for every choice of p and q in $$\mathcal{V}$$, the following conditions hold:
1. (PN1)

$$\upsilon_{p}=\varepsilon_{0}$$ if and only if $$p=\theta$$ (θ is the null vector in $$\mathcal{V}$$);

2. (PN2)

$$\upsilon_{-p}=\upsilon_{p}$$;

3. (PN3)

$$\upsilon_{p+q}\geq\tau(\upsilon_{p}, \upsilon_{q})$$;

4. (PN4)

$$\upsilon_{p}\leq \tau^{*}(\upsilon_{\lambda p}, \upsilon_{(1-\lambda)p})$$ for every $$\lambda \in [0, 1]$$.

When there is a continuous t-norm T (see [7, 15]) such that $$\tau=\tau_{T}$$ and $$\tau^{*}=\tau_{T^{*}}$$, where
\begin{aligned} &{T^{*}(x, y):=1-T(1-x, 1-y),} \\ &{\tau_{T}(F, G) (x):=\sup_{s+t=x} T \bigl(F(s), G(t) \bigr),\quad\mbox{and}\quad \tau_{T^{*}}(F, G) (x):=\inf _{s+t=x} T^{*} \bigl(F(s), G(t) \bigr),} \end{aligned}
the PN space $$(\mathcal{V}, \upsilon, \tau_{T}, \tau_{T^{*}})$$ is called a Menger PN space and is denoted by $$(\mathcal{V}, \upsilon, T)$$.
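As a minimal numerical sketch (the distribution functions F and G below are illustrative choices, not from the paper), $$\tau_{T}$$ and $$\tau_{T^{*}}$$ can be approximated by discretizing the splits $$s+t=x$$; for $$T=M=\min$$ (so that $$T^{*}=\max$$), the two convolutions agree, in line with the equality $$\tau_{M}=\tau_{M^{*}}$$ invoked later in the text:

```python
# Numerical sketch: approximate tau_T(F, G)(x) = sup_{s+t=x} T(F(s), G(t)) and
# tau_{T*}(F, G)(x) = inf_{s+t=x} T*(F(s), G(t)) on a grid of splits s + t = x.
# Here T = min (the t-norm M), so T* = max; F, G are illustrative uniform d.f.s.

def tau(T, F, G, x, n=2000):
    # sup over the grid of splits s = x*k/n, t = x - s
    return max(T(F(x * k / n), G(x - x * k / n)) for k in range(n + 1))

def tau_star(T_star, F, G, x, n=2000):
    # inf over the same grid of splits
    return min(T_star(F(x * k / n), G(x - x * k / n)) for k in range(n + 1))

F = lambda t: min(max(t, 0.0), 1.0)        # uniform d.f. on (0, 1)
G = lambda t: min(max(t / 2, 0.0), 1.0)    # uniform d.f. on (0, 2)

x = 1.5
a = tau(min, F, G, x)          # sup-convolution under T = M
b = tau_star(max, F, G, x)     # inf-convolution under T* = M*
assert abs(a - x / 3) < 1e-2   # closed form: tau_M(F, G)(x) = x/3 on [0, 3]
assert abs(a - b) < 1e-2       # tau_M = tau_{M*} (Cor. 7.5.8 cited below)
```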

A PN space is called a Šerstnev space if it satisfies (PN1), (PN3), and the following condition, which implies both (PN2) and (PN4):

For any $$p\in\mathcal{V}$$, $$\alpha \in \mathbb{R}\backslash\{0\}$$, and $$x>0$$, $$\upsilon_{\alpha p}(x)=\upsilon_{p}(\frac{x}{|\alpha|})$$.

If $$(\mathcal{V}, \upsilon, \tau, \tau^{*})$$ is a PN space and a mapping $$\mathcal {F}: \mathcal{V}\times \mathcal{V}\longrightarrow \Delta^{+}$$ is defined as
$$\mathcal {F}(p, q):=\upsilon_{p-q}$$
(1)
then $$(\mathcal{V}, \mathcal {F}, \tau)$$ is a probabilistic metric space. Every PM space can be endowed with the strong topology, which is generated by the strong neighborhoods defined as follows: for every $$t>0$$, the neighborhood $$N_{p}(t)$$ at a point p of $$\mathcal{V}$$ is defined by
$$N_{p}(t):= \bigl\{ q\in \mathcal{V}:d_{S}( \upsilon_{p-q}, \varepsilon_{0})< t \bigr\} = \bigl\{ q\in \mathcal{V}:\upsilon_{p-q}(t)>1-t \bigr\} .$$
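The membership test defining $$N_{p}(t)$$ can be sketched numerically; here we assume the simple-space probabilistic norm $$\upsilon_{p}(t)=G(t/|p|)$$ on $$\mathbb{R}$$ with G uniform on $$(0, 1)$$, which is an illustration rather than a construction from this section:

```python
# Illustrative sketch (assumed simple-space norm on R): take
# upsilon_p(t) = G(t/|p|) with G the uniform d.f. on (0, 1), and test
# membership in the strong neighborhood N_p(t) via upsilon_{p-q}(t) > 1 - t.

def G(x):
    return min(max(x, 0.0), 1.0)

def upsilon(p):
    if p == 0.0:
        return lambda t: 1.0 if t > 0 else 0.0   # eps_0
    return lambda t: G(t / abs(p))

def in_neighborhood(p, q, t):
    return upsilon(p - q)(t) > 1.0 - t

# with t = 0.5: q is in N_p(0.5) iff 0.5/|p-q| > 0.5, i.e. |p - q| < 1
assert in_neighborhood(3.0, 2.5, 0.5)      # |p - q| = 0.5 < 1
assert not in_neighborhood(3.0, 1.0, 0.5)  # |p - q| = 2 >= 1
```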

### Definition 1.1

Let $$\mathcal {V}$$ be a topological space. If for any $$p\in\mathcal {V}$$, $$\mathcal{W}_{p}$$ is a neighborhood system of p, then a subset $$\mathcal {U}_{p}\subset\mathcal {W}_{p}$$ is a neighborhood base of p if for any $$W\in \mathcal{W}_{p}$$, there exists $$U\in\mathcal{U}_{p}$$ such that $$U\subset W$$.

A local base of $$\mathcal{V}$$ is a neighborhood base at a point p of $$\mathcal{V}$$. For a topological linear space, a neighborhood base of θ can be translated to a neighborhood base of any point p in $$\mathcal{V}$$. Thus, we always call a neighborhood base of θ the local base.

### Definition 1.2

Let $$\mathcal{V}$$ be a real linear space, and let $$W\subset\mathcal{V}$$.
1. (1)

W is a convex set if for any $$t\in(0, 1)$$, $$tW+(1-t)W\subset W$$.

2. (2)

W is a balanced set if for any $$\alpha\in\mathbb {R}$$ such that $$|\alpha|\leq1$$, $$\alpha W\subset W$$.

3. (3)

W is a symmetrical set if $$W=-W$$.

4. (4)

W is an absorbing set if for any $$p\in\mathcal{V}$$, there exists $$\lambda >0$$ such that $$\mu p\in W$$ whenever $$|\mu|\leq\lambda$$ ($$\mu\in\mathbb{R}$$).

### Definition 1.3



Let $$\mathcal{V}$$ be a real linear space. A real function p on $$\mathcal{V}$$ is called a seminorm if it has the following properties:
1. (1)

$$p(x+y)\leq p(x)+p(y)$$ and

2. (2)

$$p(\alpha x)=|\alpha|p(x)$$ for all $$x, y\in\mathcal{V}$$ and $$\alpha\in \mathbb {R}$$.

## 2 Main Results

### Lemma 2.1



Let $$\mathcal{W}$$ be a family of subsets of a linear space $$\mathcal{V}$$ having the following properties:
1. (a)

For any $$W_{1}, W_{2}\in\mathcal{W}$$, there exists $$W_{3}\in \mathcal{W}$$ such that $$W_{3}\subset W_{1}\cap W_{2}$$;

2. (b)

Every $$W\in\mathcal{W}$$ is a balanced set;

3. (c)

For any $$W\in\mathcal{W}$$ and $$p\in\mathcal{V}$$, there exists $$\alpha \in \mathbb{R}$$, $$\alpha\neq 0$$, such that $$\alpha p\in W$$;

4. (d)

For any $$W\in \mathcal{W}$$, there exists $$W_{0}\in\mathcal{W}$$ such that $$W_{0}+W_{0}\subset W$$;

5. (e)

If $$W\in\mathcal{W}$$ and $$0\neq\alpha\in \mathbb{R}$$, then $$\alpha W\in\mathcal{W}$$.

Then there exists a linear topology τ on $$\mathcal{V}$$ such that $$\mathcal{W}$$ is a neighborhood base of zero in the topology τ. Conversely, every topological linear space $$\mathcal{V}$$ has a neighborhood base of zero satisfying properties (a)-(e).

(I) Š-probabilistic seminorm and locally convex Š-probabilistic semi-normed spaces

### Definition 2.1

A Šerstnev probabilistic seminorm υ with τ (briefly, a Š-probabilistic seminorm) is a mapping from $$\mathcal {V}$$ into $$\Delta^{+}$$, where $$\mathcal {V}$$ is a real vector space and τ is a continuous triangle function, such that for all p, q in $$\mathcal {V}$$ the following conditions hold:
1. (ŠPSN1)

$$\upsilon_{p+q}(t)\geq\tau(\upsilon_{p}, \upsilon_{q})(t)$$;

2. (ŠPSN2)

$$\upsilon_{\alpha p}(t)=\upsilon_{p}(\frac{t}{|\alpha|})$$ for all $$\alpha\in\mathbb{R}$$.

We adopt the convention that $$\upsilon_{p}(\frac{t}{|{0}|})=\varepsilon_{0}(t)$$.

By this convention, $$\upsilon_{\theta}=\upsilon_{0\cdot p}=\varepsilon_{0}$$, and by (ŠPSN2) every Š-probabilistic seminorm υ satisfies $$\upsilon_{-p}=\upsilon_{p}$$.

If υ satisfies (ŠPSN1) and (ŠPSN2), then $$(\mathcal{V}, \upsilon , \tau)$$ is said to be a Š-probabilistic seminormed space with τ (briefly, a Š-PSN space).

Obviously, every Šerstnev probabilistic norm is a Šerstnev probabilistic seminorm, and a Šerstnev space is a Š-PSN space.

### Definition 2.2

A linear topological space $$\mathcal{V}$$ is called a locally convex space if it has a convex neighborhood base of zero.

### Theorem 2.1

Let $$\mathcal{V}$$ be a vector space, and let υ be a Š-probabilistic seminorm with τ satisfying the following condition:

(ŠPSN3) For any $$p, q \in \mathcal{V}$$, $$t_{1}, t_{2}>0$$, if $$\upsilon_{p}(t_{1})>1-\lambda$$ and $$\upsilon_{q}(t_{2})>1-\lambda$$, then $$\upsilon_{p+q}(t_{1}+t_{2})>1-\lambda$$.

Then we have:

(1) For each $$\lambda \in(0, 1]$$, the function $$\mathscr{P}_{\lambda}$$ defined by
$$\mathscr{P}_{\lambda}(p):=\inf \bigl\{ t\geq 0;\upsilon_{p}(t)>1- \lambda \bigr\}$$
is a seminorm.
(2) $$(\mathcal{V}, \upsilon)$$ is a locally convex topological linear space induced by the family of seminorms $$\{\mathscr{P}_{\lambda}, \lambda \in(0, 1]\}$$, and for any positive integer n and each $$p\in \mathcal{V}$$,
$$\mathcal{W}(p)= \bigl\{ W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda): \lambda>0, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}\in (0, 1] \bigr\}$$
is the basis of neighborhoods of zero, where
$$W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda)= \bigl\{ q\in\mathcal{V}:\mathscr{P}_{\lambda_{i}}(q)< \lambda, \lambda_{i}\in(0, 1], i=1, 2, \ldots, n \bigr\} .$$
(3) The topology induced by the basis $$\mathcal{W}(p)$$ of neighborhoods of zero coincides with the topology induced by the following basis of neighborhoods of zero:
$$\mathcal{N}_{0}= \bigl\{ U(\varepsilon, \lambda);\lambda\in(0, 1] \bigr\} ,$$
where
$$U(\varepsilon, \lambda)= \bigl\{ p\in\mathcal{V};\upsilon_{p}( \varepsilon)>1-\lambda, \lambda\in(0, 1], \varepsilon>0 \bigr\} .$$

### Proof

(1) For any $$\alpha \in\mathbb{R}$$, $$\alpha\neq 0$$, we have
\begin{aligned} \mathscr{P}_{\lambda}(\alpha p) =&\inf \bigl\{ t\geq 0;\upsilon_{\alpha p}(t)>1- \lambda \bigr\} \\ =&\inf \biggl\{ t\geq 0;\upsilon_{p} \biggl(\frac{t}{|\alpha|} \biggr)>1- \lambda \biggr\} \\ =& |\alpha|\inf \bigl\{ t\geq 0;\upsilon_{p}(t)>1-\lambda \bigr\} \\ =& |\alpha|\mathscr{P}_{\lambda}(p). \end{aligned}
It is obvious that for $$\alpha=0$$ we get $$\mathscr{P}_{\lambda}(0\cdot p)=0=0 \cdot \mathscr{P}_{\lambda}(p)$$, and clearly $$\mathscr{P}_{\lambda}(p)\geq 0$$. According to the definition of $$\mathscr{P}_{\lambda}$$, for any $$\varepsilon>0$$, we have $$\upsilon_{p}(\mathscr{P}_{\lambda}(p)+\frac{\varepsilon}{2})>1-\lambda$$ and $$\upsilon_{q}(\mathscr{P}_{\lambda}(q)+\frac{\varepsilon}{2})>1-\lambda$$. By condition (ŠPSN3), $$\upsilon_{p+q}(\mathscr{P}_{\lambda}(p)+\mathscr{P}_{\lambda}(q)+\varepsilon)>1-\lambda$$. Therefore,
\begin{aligned} \mathscr{P}_{\lambda}(p+q) =&\inf \bigl\{ t\geq 0;\upsilon_{p+q}(t)>1- \lambda \bigr\} \\ \leq& \mathscr{P}_{\lambda}(p)+\mathscr{P}_{\lambda}(q)+\varepsilon. \end{aligned}
Letting $$\varepsilon\to0$$, we have $$\mathscr{P}_{\lambda}(p+q)\leq\mathscr{P}_{\lambda}(p)+\mathscr{P}_{\lambda}(q)$$, $$\lambda\in(0, 1]$$.

Conclusion (1) is proved.
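As a numerical illustration of conclusion (1) (the probabilistic seminorm below, $$\upsilon_{p}(t)=G(t/|p|)$$ on $$\mathbb{R}$$ with G uniform on $$(0, 1)$$, is an assumed example, not a construction from the proof), $$\mathscr{P}_{\lambda}(p)=(1-\lambda)|p|$$ in closed form, and absolute homogeneity and subadditivity can be checked directly:

```python
# Numerical sketch (assumed example, not from the paper): compute
# P_lam(p) = inf{ t >= 0 : upsilon_p(t) > 1 - lam } for the probabilistic
# seminorm upsilon_p(t) = G(t/|p|) on R, G uniform on (0, 1); here
# P_lam(p) = (1 - lam)*|p| in closed form.

def upsilon(p):
    if p == 0:
        return lambda t: 1.0 if t > 0 else 0.0       # eps_0
    return lambda t: min(max(t / abs(p), 0.0), 1.0)  # G(t/|p|)

def P(lam, p, step=1e-4):
    # crude grid search for the infimum
    t = 0.0
    while upsilon(p)(t) <= 1.0 - lam:
        t += step
    return t

lam = 0.25
assert abs(P(lam, 2.0) - (1 - lam) * 2.0) < 1e-2           # closed form
assert abs(P(lam, -3.0) - 3 * P(lam, 1.0)) < 1e-2          # absolute homogeneity
assert P(lam, 5.0) <= P(lam, 2.0) + P(lam, 3.0) + 1e-3     # subadditivity
```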

(2) Firstly, it is easy to show that $$W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda)$$ is convex. In fact, for any $$p, q\in W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda)$$,
$$\mathscr{P}_{\lambda_{i}}(p)< \lambda\quad\mbox{and}\quad \mathscr{P}_{\lambda_{i}}(q)< \lambda.$$
Then, for every $$t \in [0, 1]$$,
\begin{aligned} \mathscr{P}_{\lambda_{i}} \bigl(tp+(1-t)q \bigr) \leq& \mathscr{P}_{\lambda_{i}}(tp)+ \mathscr{P}_{\lambda_{i}} \bigl((1-t)q \bigr)= t\mathscr{P}_{\lambda_{i}}(p)+(1-t)\mathscr{P}_{\lambda_{i}}(q) < t\lambda+(1-t)\lambda = \lambda. \end{aligned}
Thus,
$$tp+(1-t)q\in W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda).$$
Secondly, we consider the system
$$\mathcal{W}(p)= \bigl\{ W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda):\lambda>0, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}\in(0, 1] \bigr\} ,$$
in which $$W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda)=\{q\in\mathcal{V}:\mathscr{P}_{\lambda_{i}}(q)<\lambda, \lambda_{i}\in(0, 1], i=1, 2, \ldots, n\}$$. By Lemma 2.1 we know that if $$W_{1}=W(p, \lambda_{1}^{\prime}, \lambda_{2}^{\prime}, \ldots, \lambda_{n}^{\prime}, \lambda^{\prime})$$, $$W_{2}=W(p, \lambda_{1}^{\prime\prime}, \lambda_{2}^{\prime\prime}, \ldots, \lambda_{m}^{\prime\prime}, \lambda^{\prime\prime})$$, $$\lambda=\min (\lambda^{\prime}, \lambda^{\prime\prime})$$, and $$W_{3}=W(p, \lambda_{1}^{\prime}, \lambda_{2}^{\prime}, \ldots, \lambda_{n}^{\prime}, \lambda_{1}^{\prime\prime}, \lambda_{2}^{\prime\prime}, \ldots, \lambda_{m}^{\prime\prime}, \lambda)$$, then $$W_{3}\subset W_{1}\cap W_{2}$$, so that property (a) is satisfied.

If $$\alpha\in \mathbb{R}$$ and $$|\alpha|\leq 1$$, then from $$\mathscr{P}_{\lambda_{i}}(p)<\lambda$$, we get $$\mathscr{P}_{\lambda_{i}}(\alpha p)<\lambda$$, that is, the set $$W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda)$$ is balanceable, so that property (b) is satisfied.

Let $$q\in \mathcal{V}$$, and denote $$W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda)$$ by $$W_{0}$$. Let $$\mu\in \mathbb{R}$$ be such that $$0<|\mu|<\lambda$$, and let $$\sigma=\max_{1\leq i\leq n} \mathscr{P}_{\lambda_{i}}(q)$$. If $$q \notin W_{0}$$ (so that $$\sigma\geq\lambda>0$$) and $$\alpha=\mu\sigma^{-1}$$, then $$\mathscr{P}_{\lambda_{i}}(\alpha q)=|\mu|\sigma^{-1}\mathscr{P}_{\lambda_{i}}(q)\leq|\mu|<\lambda$$, that is, $$\alpha q\in W_{0}$$, so that property (c) is satisfied.

Let $$W_{1}=W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, 2^{-1}\lambda)$$. Then $$W_{1}+W_{1}=\frac{1}{2} W_{0}+\frac{1}{2} W_{0}=W_{0}$$, and we see that $$\mathcal{W}(p)$$ satisfies property (d).

Since $$W_{0}=W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda)$$ and $$\alpha \in \mathbb{R}$$, $$\alpha\neq 0$$, we have
\begin{aligned} \alpha W_{0} =& \bigl\{ \alpha p|\mathscr{P}_{\lambda_{i}}(p)< \lambda, i=1, 2, \ldots, n \bigr\} \\ =& \bigl\{ p|\mathscr{P}_{\lambda_{i}}(p)< |\alpha|\lambda, i=1, 2, \ldots, n \bigr\} \\ =& W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, |\alpha|\lambda), \end{aligned}
that is, $$\alpha W_{0}\in \mathcal{W}(p)$$, so $$\mathcal{W}(p)$$ satisfies property (e).

Conclusion (2) is proved.

(3) Next, we prove that $$U(\lambda, \lambda_{i})=W(p, \lambda_{i}, \lambda)$$ ($$i=1, 2, \ldots, n$$). Let $$p\in U(\lambda, \lambda_{i})$$. Then $$\upsilon_{p}(\lambda)>1-\lambda_{i}$$. Since the distribution function $$\upsilon_{p}$$ is left continuous, there exists $$\lambda^{\prime}\in (0, \lambda)$$ such that, for each $$i=1, 2, \ldots, n$$,
$$\upsilon_{p}(\lambda)\geq \upsilon_{p} \bigl( \lambda^{\prime} \bigr)>1-\lambda_{i}.$$
Hence,
$$\inf \bigl\{ t\geq 0;\upsilon_{p}(t)>1-\lambda_{i} \bigr\} \leq \lambda^{\prime}< \lambda,$$
which implies that $$p\in W(p, \lambda_{i}, \lambda)$$. Conversely, let $$p\in W(p, \lambda_{i}, \lambda)=\{q\in\mathcal{V}:\mathscr{P}_{\lambda_{i}}(q)<\lambda\}$$. Then $$\mathscr{P}_{\lambda_{i}}(p)=\inf\{t\geq 0;\upsilon_{p}(t)>1-\lambda_{i}\}<\lambda$$, so that $$\upsilon_{p}(\lambda)>1-\lambda_{i}$$ since $$\upsilon_{p}$$ is nondecreasing, that is, $$p\in U(\lambda, \lambda_{i})$$. Thus, we get the conclusion $$U(\lambda, \lambda_{i})=W(p, \lambda_{i}, \lambda)$$ ($$i=1, 2, \ldots, n$$).
On the other hand, for each $$W_{0}=W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda) \in\mathcal{W}(p)$$, we have
\begin{aligned} W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda) =& \bigl\{ p\in\mathcal{V}:\mathscr{P}_{\lambda_{i}}(p)< \lambda, \lambda_{i}\in(0, 1], i=1, 2, \ldots, n \bigr\} \\ =&\bigcap_{i=1}^{n} W(p, \lambda_{i}, \lambda) = \bigcap_{i=1}^{n} U(\lambda, \lambda_{i}), \end{aligned}
which implies that $$\mathcal{W}(p)$$ coincides with $$\mathcal{N}_{0}$$. Therefore, the topologies induced by them are equivalent. This completes the proof. □

### Theorem 2.2

A Šerstnev space $$(\mathcal{V}, \upsilon, \tau)$$ is a locally convex Š-probabilistic normed space.

### Proof

By Corollary 7.5.8 , $$\tau_{M}=\tau_{M^{*}}$$. It suffices to consider the family of neighborhoods of the origin θ, $$\mathcal{N}_{\theta}=\{U(\varepsilon, \lambda);\lambda\in(0, 1]\}$$. Let $$\varepsilon>0$$, $$\lambda\in(0, 1]$$, $$p, q\in U(\varepsilon, \lambda)$$, and $$\alpha\in[0, 1]$$. Then
\begin{aligned} \upsilon_{ \alpha p+(1-\alpha)q}(\varepsilon) \geq&\tau_{M}( \upsilon_{\alpha p}, \upsilon_{(1-\alpha)q}) (\varepsilon) \\ =&\sup_{\beta\in[0, 1]} T_{M} \bigl(\upsilon_{\alpha p}( \beta \varepsilon), \upsilon_{(1-\alpha)q} \bigl((1-\beta)\varepsilon \bigr) \bigr) \\ \geq& T_{M} \bigl(\upsilon_{\alpha p}(\alpha \varepsilon), \upsilon_{(1-\alpha)q} \bigl((1-\alpha)\varepsilon \bigr) \bigr) \\ =& T_{M} \bigl(\upsilon_{p}(\varepsilon), \upsilon_{q}(\varepsilon) \bigr) \\ >& 1-\lambda. \end{aligned}
Thus, for every $$\alpha\in[0, 1]$$,
$$\alpha p+(1-\alpha)q\in {U(\varepsilon, \lambda)}.$$
This completes the proof. □

(II) Probabilistic seminorm and locally convex probabilistic seminormed spaces

### Definition 2.3

A probabilistic seminorm υ with τ and $$\tau^{*}$$ is a mapping from $$\mathcal {V}$$ into $$\Delta^{+}$$, where $$\mathcal {V}$$ is a real vector space and τ and $$\tau^{*}$$ are continuous triangle functions, such that for all p, q in $$\mathcal {V}$$ the following conditions hold:
1. (PSN1)

$$\upsilon_{-p}(t)=\upsilon_{p}(t)$$;

2. (PSN2)

$$\upsilon_{p+q}(t)\geq\tau(\upsilon_{p}, \upsilon_{q})(t)$$;

3. (PSN3)

$$\upsilon_{p}(t)\leq\tau^{*}(\upsilon_{\alpha p}, \upsilon_{(1-\alpha)p})(t)$$ for all $$\alpha\in[0, 1]$$.

If $$\mathcal{V}$$ satisfies (PSN1), (PSN2), and (PSN3), then $$(\mathcal{V}, \upsilon, \tau, \tau^{*})$$ is said to be a probabilistic seminorm space (briefly, a PSN space).

Similarly, when there is a continuous t-norm T (see [7, 15]) such that $$\tau=\tau_{T}$$ and $$\tau^{*}=\tau_{T^{*}}$$, where
\begin{aligned} &{ T^{*}(x, y): = 1-T(1-x, 1-y),} \\ &{\tau_{T}(F, G) (x) := \sup_{s+t=x} T \bigl(F(s), G(t) \bigr),\quad\mbox{and}\quad \tau_{T^{*}}(F, G) (x):=\inf _{s+t=x} T^{*} \bigl(F(s), G(t) \bigr),} \end{aligned}
the PSN space $$(\mathcal{V}, \upsilon, \tau_{T}, \tau_{T^{*}})$$ is called a Menger PSN space and is denoted by $$(\mathcal{V}, \upsilon, T)$$.

Obviously, a probabilistic norm is a probabilistic seminorm, and a PN space is a PSN space.

It is easy to prove the following lemma.

### Lemma 2.2

Let υ be a probabilistic seminorm with τ and $$\tau^{*}$$. Then for any $$\alpha, \beta\in\mathbb{R}$$ such that $$|\alpha|<|\beta|$$ and any $$p\in \mathcal{V}$$,
$$\upsilon_{\beta p}\leq\upsilon_{\alpha p}.$$

### Definition 2.4

A linear space $$\mathcal{V}$$ is called a locally convex probabilistic seminormed (normed) space if it has a convex neighborhood base of zero induced by a probabilistic seminorm (norm).

### Theorem 2.3

For each $$\lambda\in(0, 1]$$, let $$\mathscr{P}_{\lambda}(p):=\inf\{t\geq 0;\upsilon_{p}(t)>1-\lambda\}$$, where $$\upsilon_{p}(t)$$ is a probabilistic seminorm satisfying the following conditions:

(1) $$\upsilon_{p}(t)=\tau(\upsilon_{\alpha p}, \upsilon_{(1-\alpha)p})(t)$$;

(2) For any $$p, q \in \mathcal{V}$$ and any $$t_{1}, t_{2}>0$$, the inequalities $$\upsilon_{p}(t_{1})>1-\lambda$$ and $$\upsilon_{q}(t_{2})>1-\lambda$$ imply $$\upsilon_{p+q}(t_{1}+t_{2})>1-\lambda$$.

Then:

(1) $$\mathscr{P}_{\lambda}$$ is a seminorm;

(2) $$(\mathcal{V}, \upsilon)$$ is a locally convex topological space induced by the family of seminorms $$\{\mathscr{P}_{\lambda}, \lambda \in(0, 1]\}$$, and, for any positive integer n and each $$p\in \mathcal{V}$$,
$$\mathcal{W}(p)= \bigl\{ W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda): \lambda>0, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}\in (0, 1] \bigr\}$$
is a basis of neighborhoods of zero, where
$$W(p, \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}, \lambda)= \bigl\{ q\in\mathcal{V}:\mathscr{P}_{\lambda_{i}}(q)< \lambda, \lambda_{i}\in(0, 1], i=1, 2, \ldots, n \bigr\} ;$$
(3) The topology induced by the basis $$\mathcal{W}(p)$$ of neighborhoods of zero coincides with the topology induced by the following basis of neighborhoods of zero:
$$\mathcal{N}_{0}= \bigl\{ U(\varepsilon, \lambda);\lambda\in(0, 1] \bigr\} ,$$
where
$$U(\varepsilon, \lambda)= \bigl\{ p\in\mathcal{V};\upsilon_{p}( \varepsilon)>1-\lambda, \lambda\in(0, 1] \bigr\} .$$

### Proof

(1) By Theorem 2 of  we know that a PN space with $$\tau=\tau_{M}$$ is a Šerstnev PN space. So the probabilistic seminorm includes the Šerstnev example as a particular case.

For any $$F\in \Delta^{+}$$, let $$F^{\wedge}$$ denote the left-continuous quasi-inverse of F, that is, the function defined for all $$t\in [0, 1]$$ by
$$F^{\wedge}(t)=\sup \bigl\{ x|F(x)< t \bigr\} .$$
It is known from , Section 7.7, that, for any F, G, H in $$\Delta^{+}$$, $$H=\tau_{M}(F, G)$$ if and only if
$$H^{\wedge}=F^{\wedge}+G^{\wedge}.$$
Thus, we get
$$\upsilon^{\wedge}_{p}=\upsilon^{\wedge}_{\alpha p}+ \upsilon^{\wedge}_{(1-\alpha)p}\quad\mbox{for all }p\in \mathcal{V}\mbox{ and } \alpha \in [0, 1].$$
It follows from $$H^{\wedge}=F^{\wedge}+G^{\wedge}$$ that the function $$f_{t}:\mathcal{V}\longrightarrow R^{+}$$ defined for a fixed $$t\in[0, 1]$$ by $$f_{t}(p)=\upsilon^{\wedge}_{p}(t)$$ satisfies $$f_{t}(-p)=f_{t}(p)$$ and $$f_{t}(p)=f_{t}(\alpha p)+f_{t}((1-\alpha) p)$$. Therefore, for all $$\alpha\in R$$ and all $$t\in[0, 1]$$,
$$\upsilon^{\wedge}_{\alpha p}(t)=f_{t}(\alpha p)=|\alpha|f_{t}(p)=|\alpha|\upsilon^{\wedge}_{p}(t),$$
whence $$\upsilon^{\wedge}_{\alpha p}=|\alpha|\upsilon^{\wedge}_{p}$$, which is equivalent to $$\mathscr{P}_{\lambda}(\alpha p)=|\alpha| \mathscr{P}_{\lambda}(p)$$ (see , Thm. 1).
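The quasi-inverse identity $$H^{\wedge}=F^{\wedge}+G^{\wedge}$$ used above can be checked numerically; the d.f.s below are illustrative choices, not from the paper:

```python
# Numerical sketch (illustrative d.f.s): the quasi-inverse
# F^(t) = sup{ x : F(x) < t } and the identity H^ = F^ + G^
# characterizing H = tau_M(F, G).

def quasi_inverse(F, t, hi=10.0, n=20000):
    # approximate sup{ x in [0, hi] : F(x) < t } on a grid
    return max((hi * k / n for k in range(n + 1) if F(hi * k / n) < t), default=0.0)

F = lambda x: min(max(x, 0.0), 1.0)       # uniform on (0, 1):  F^(t) = t
G = lambda x: min(max(x / 2, 0.0), 1.0)   # uniform on (0, 2):  G^(t) = 2t
H = lambda x: min(max(x / 3, 0.0), 1.0)   # H = tau_M(F, G):    H^(t) = 3t

for t in (0.2, 0.5, 0.9):
    assert abs(quasi_inverse(H, t) - (quasi_inverse(F, t) + quasi_inverse(G, t))) < 1e-2
```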
By the definition of $$\mathscr{P}_{\lambda}$$, for any $$\varepsilon>0$$, we have $$\upsilon_{p}(\mathscr{P}_{\lambda}(p)+\frac{\varepsilon}{2})>1-\lambda$$ and $$\upsilon_{q}(\mathscr{P}_{\lambda}(q)+\frac{\varepsilon}{2})>1-\lambda$$. By condition (2), $$\upsilon_{p+q}(\mathscr{P}_{\lambda}(p)+\mathscr{P}_{\lambda}(q)+\varepsilon)>1-\lambda$$. Therefore,
\begin{aligned} \mathscr{P}_{\lambda}(p+q) =&\inf_{t} \bigl\{ t\geq 0; \upsilon_{p+q}(t)>1-\lambda \bigr\} \\ \leq& \mathscr{P}_{\lambda}(p)+\mathscr{P}_{\lambda}(q)+\varepsilon. \end{aligned}
Letting $$\varepsilon\to0$$, we get $$\mathscr{P}_{\lambda}(p+q)\leq\mathscr{P}_{\lambda}(p)+\mathscr{P}_{\lambda}(q)$$ for $$\lambda\in(0, 1]$$.

Conclusion (1) is proved.

(2) and (3). The proof is similar to that of (2) and (3) of Theorem 2.1.

This completes the proof. □

### Definition 2.5



A PN space $$(\mathcal{V}, \upsilon, \tau, \tau^{*})$$ is called a characteristic space if $$\lim_{t\to +\infty} \upsilon_{p}(t)=1$$ for every $$p\in\mathcal{V}$$, that is, if every $$\upsilon_{p}$$ belongs to $$D^{+}$$.

### Theorem 2.4

Let $$(\mathcal{V}, \upsilon, T)$$ be a characteristic Menger PSN space. Then every neighborhood $$U(\varepsilon, \lambda)=\{p\in\mathcal{V};\upsilon_{p}(\varepsilon)>1-\lambda, \lambda\in (0, 1]\}$$ of the origin θ is a balanced and absorbing set.

### Proof

Firstly, we show that $$U(\varepsilon, \lambda)$$ is a balanced set.

For any $$p\in U(\varepsilon, \lambda)$$ and $$|\alpha|\leq 1$$, by Lemma 2.2, $$\upsilon_{\alpha p}(\varepsilon)\geq \upsilon_{p}(\varepsilon)>1-\lambda$$. Thus, $$\alpha p \in U(\varepsilon, \lambda)$$.

Now we show that $$U(\varepsilon, \lambda)$$ is an absorbing set.

Since $$\mathcal{V}$$ is a characteristic space, we have $$\lim_{t\to+\infty} \upsilon_{p}(t)=1$$ for any $$p\in \mathcal{V}$$. Thus, for $$\lambda \in (0, 1]$$ and $$\varepsilon>0$$, taking $$t_{2}$$ such that $$0< t_{2}<\lambda$$, there exists $$t_{1}>\varepsilon$$ such that $$\upsilon_{p}(t_{1})>1-t_{2}$$. Letting $$\delta_{0}=\frac{\varepsilon}{t_{1}}<1$$, by (PSN3) we have
\begin{aligned} \upsilon_{p}(t_{1}) \leq& \tau^{*}( \upsilon_{\delta_{0} p}, \upsilon_{(1-\delta_{0})p}) (t_{1}) \\ =& \inf_{0\leq k\leq 1} T^{*} \bigl(\upsilon_{\delta_{0} p}(kt_{1}), \upsilon_{(1-\delta_{0})p} \bigl((1-k)t_{1} \bigr) \bigr) \\ \leq& T^{*} \bigl(\upsilon_{\delta_{0} p}(\delta_{0}t_{1}), \upsilon_{(1-\delta_{0})p} \bigl((1-\delta_{0})t_{1} \bigr) \bigr) \\ \leq& \upsilon_{\delta_{0} p}(\delta_{0}t_{1}) \\ =&\upsilon_{\delta_{0} p}(\varepsilon)\quad (t_{1}>\varepsilon). \end{aligned}
Therefore, we have
$$\upsilon_{\delta_{0}p}(\varepsilon)\geq\upsilon_{p}(t_{1})>1-t_{2}>1- \lambda.$$
Thus, when $$|\mu|<\delta_{0}$$, by Lemma 2.2 we have
$$\upsilon_{\mu p}(\varepsilon)\geq\upsilon_{\delta_{0}p}(\varepsilon)>1- \lambda.$$
Then,
$$\mu p\in U(\varepsilon, \lambda)\quad\mbox{whenever }|\mu|< \delta_{0}.$$
This completes the proof. □

(III) The cases of simple spaces and α-simple spaces

### Definition 2.6

Let $$\mathcal{V}$$ be a locally convex space, and let $$\mathcal{W}$$ be a balanced convex neighborhood base of zero. Then, for all $$W\in \mathcal{W}$$ and $$x\in \mathcal{V}$$, we define the Minkowski functional $$p_{w}(x)$$ as follows:
$$p_{w}(x)=\inf \{\mu|x\in \mu W, \mu >0\}.$$
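A minimal numerical sketch of the Minkowski functional (for an assumed balanced convex set in $$R^{2}$$, where $$p_{w}$$ recovers the $$\ell_{1}$$ norm):

```python
# Illustrative sketch: the Minkowski functional p_W(x) = inf{ mu > 0 : x in mu*W }
# for the balanced convex set W = { (x1, x2) : |x1| + |x2| <= 1 } in R^2,
# which recovers the l1 norm.

def in_W(x):
    return abs(x[0]) + abs(x[1]) <= 1.0

def minkowski(x, lo=0.0, hi=1e6, iters=60):
    # bisection on mu: x in mu*W  iff  (x1/mu, x2/mu) in W
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid > 0 and in_W((x[0] / mid, x[1] / mid)):
            hi = mid
        else:
            lo = mid
    return hi

assert abs(minkowski((3.0, -1.0)) - 4.0) < 1e-6   # l1 norm of (3, -1)
assert abs(minkowski((0.5, 0.25)) - 0.75) < 1e-6
```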

### Lemma 2.3

The Minkowski functional $$p_{w}$$ is a seminorm.

### Proof

Let $$x, y\in \mathcal{V}$$, and for any $$\varepsilon>0$$, let $$\lambda=\frac{p_{w}(x)+\varepsilon}{p_{w}(x)+p_{w}(y)+2\varepsilon}$$. Then we have $$0<\lambda<1$$ and $$1-\lambda=\frac{p_{w}(y)+\varepsilon}{p_{w}(x)+p_{w}(y)+2\varepsilon}$$. By the definition of $$p_{w}$$ we have
$$x\in \bigl[p_{w}(x)+\varepsilon \bigr]W,\qquad y\in \bigl[p_{w}(y)+\varepsilon \bigr]W$$
or
$$\frac{1}{p_{w}(x)+\varepsilon}x \in W,\qquad \frac{1}{p_{w}(y)+\varepsilon}y \in W.$$
Since the set W is convex, we have
$$\lambda \frac{1}{p_{w}(x)+\varepsilon}x+(1-\lambda)\frac{1}{p_{w}(y)+\varepsilon}y \in W$$
or
$$\frac{1}{p_{w}(x)+p_{w}(y)+2\varepsilon}(x+y) \in W.$$
Therefore,
$$x+y\in \bigl[p_{w}(x)+p_{w}(y)+2\varepsilon \bigr]W.$$
According to the definition of $$p_{w}$$, we get
$$p_{w}(x+y)\leq p_{w}(x)+p_{w}(y)+2\varepsilon.$$
Since ε is arbitrary, we have
$$p_{w}(x+y)\leq p_{w}(x)+p_{w}(y).$$
Now let $$x\in \mathcal{V}$$. For any $$\alpha >0$$, we have
\begin{aligned} p_{w}(\alpha x) =&\inf\{\mu|\alpha x\in \mu W, \mu>0\} \\ =&\inf \biggl\{ \mu |x\in \frac{\mu}{\alpha}W, \mu>0 \biggr\} \\ =&\alpha \inf \biggl\{ \frac{\mu}{\alpha}\Big|x\in \frac{\mu}{\alpha}W, \frac{\mu}{\alpha}>0 \biggr\} \\ =&\alpha p_{w}(x). \end{aligned}
If $$0\neq \alpha \in\mathbb {R}$$, then $$\alpha=|\alpha|\beta$$ with $$|\beta|=1$$. Since the set W is balanced, we have $$\beta ^{-1}W=W$$ and $$p_{w}(\beta x)=p_{w}(x)$$, so that
$$p_{w}(\alpha x)=p_{w}\bigl(\vert \alpha \vert \beta x \bigr)=|\alpha| p_{w}(\beta x)=|\alpha| p_{w}(x).$$
It is obvious that $$p_{w}(0)=0$$. This completes the proof. □

### Definition 2.7

Let $$(V, \|\cdot\|)$$ be a normed space, and let $$G\in \Delta^{+}$$ be different from $$\varepsilon_{0}$$ and from $$\varepsilon_{\infty}$$. We define $$\upsilon:V\longrightarrow\Delta^{+}$$ by
$$\upsilon_{p}(t):=G\biggl(\frac{t}{\|p\|}\biggr)\quad (p\neq\theta, t>0).$$
The pair $$(V, \upsilon)$$ is called the simple space generated by $$(V, \|\cdot\|)$$ and G.

### Theorem 2.5

Let $$p_{w}$$ be the Minkowski functional of Definition 2.6, and let $$G\in \Delta^{+}$$. Then $$\upsilon_{p}(t):=G(\frac{t}{p_{w}(p)})$$ is a Š-probabilistic seminorm with $$\tau_{M}$$ generated by the Minkowski functional (briefly, a Minkowski Š-probabilistic seminorm).

### Proof

It is obvious that $$\upsilon_{p}(t)=\upsilon_{-p}(t)$$ is satisfied. Given a d.f. F, its quasi-inverse $$F^{\wedge}$$ is defined by
$$F^{\wedge}(x)=\sup\bigl\{ t:F(t)< x\bigr\} .$$
Since $$\upsilon_{p}^{\wedge}=p_{w}(p)G^{\wedge}$$ for all $$p, q \in\mathcal{V}$$, we have
\begin{aligned} \bigl[\tau_{M}(\upsilon_{p}, \upsilon_{q}) \bigr]^{\wedge} =&\upsilon_{p}^{\wedge}+ \upsilon_{q}^{\wedge} \\ =&p_{w}(p)G^{\wedge}+p_{w}(q)G^{\wedge} \\ =&\bigl(p_{w}(p)+p_{w}(q)\bigr)G^{\wedge} \\ \geq& p_{w}(p+q)G^{\wedge} \\ =&\upsilon_{p+q}^{\wedge}. \end{aligned}
Thus,
$$\upsilon_{p+q}\geq\tau_{M}(\upsilon_{p}, \upsilon_{q}).$$
By the equality $$\tau_{M}=\tau_{M^{*}}$$ , Cor. 7.5.8, we have
\begin{aligned} \bigl[\tau_{M^{*}}(\upsilon_{\lambda p}, \upsilon_{(1-\lambda)p}) \bigr]^{\wedge} =&\bigl[\tau_{M}(\upsilon_{\lambda p}, \upsilon_{(1-\lambda)p})\bigr]^{\wedge} \\ =&\lambda p_{w}(p)G^{\wedge}+(1-\lambda)p_{w}(p)G^{\wedge} \\ =& p_{w}(p)G^{\wedge} \\ =&\upsilon_{p}^{\wedge}, \end{aligned}
and thus
$$\upsilon_{p}=\tau_{M^{*}}(\upsilon_{\lambda p}, \upsilon_{(1-\lambda)p}).$$
By Lemma 1 of , condition (ŠPSN2) holds. This completes the proof. □

In view of Theorem 2.5, we easily get the following corollary.

### Corollary 2.1

A simple space is a locally convex Š-probabilistic seminormed space.

### Definition 2.8

A PSN space $$(V, \upsilon)$$ is said to be equilateral if there is a d.f. $$F\in \Delta^{+}$$, different from $$\varepsilon_{0}$$ and from $$\varepsilon_{\infty}$$, such that, for every $$p\neq\theta$$, $$\upsilon_{p}=F$$.

### Theorem 2.6

An equilateral space is a locally convex Š-probabilistic seminormed space.

### Proof

According to the definition of an equilateral space, we know that $$\upsilon_{p}=F$$ for every $$p\neq \theta$$, and we easily get that the following two conditions are satisfied:
1. (1)

$$\upsilon_{p+q}(t)\geq\tau(\upsilon_{p}, \upsilon_{q})(t)$$;

2. (2)

$$\upsilon_{\alpha p}(t)=\upsilon_{p}(\frac{t}{|\alpha|})$$.

By Theorem 2.1 we know that an equilateral space is a locally convex Š-probabilistic seminormed space but not a TV space (topological vector space); likewise, it is a PN space but not a TV space. This completes the proof. □

### Definition 2.9

Let $$(V, \|\cdot\|)$$ be a normed space, and let $$G\in \Delta^{+}$$ be different from $$\varepsilon_{0}$$ and $$\varepsilon_{\infty}$$. Define $$\upsilon:V\longrightarrow \Delta^{+}$$ by
$$\upsilon_{p}(t):=G\biggl(\frac{t}{\|p\|^{\alpha}}\biggr)\quad (p\neq\theta, t>0),$$
where $$\alpha>0$$. Then the pair $$(V, \upsilon)$$ is called the α-simple space generated by $$(V, \|\cdot\|)$$ and G.

The following theorem shows that, generally, an α-simple space need not be a locally convex probabilistic seminormed space.

### Theorem 2.7

Let U be the d.f. of the uniform law on $$(0, 1)$$. Then the α-simple space $$(V, \|\cdot\|, U;\alpha)$$ with $$\alpha\in (0, 1)$$ is not a locally convex probabilistic seminormed space: condition (PSN3) fails for $$\lambda=1/2$$.

### Proof

It is easy to check that $$\upsilon_{p}(\|p\|^{\alpha})=U(1)=1$$. On the other hand, $$\tau_{M^{*}}=\tau_{M}$$ and $$\tau_{M}(F, F)(x)=F(x/2)$$ for all $$F\in\Delta^{+}$$ and $$x\geq0$$. Therefore,
\begin{aligned} \tau_{M^{*}}(\upsilon_{p/2}, \upsilon_{p/2}) \bigl(\|p \|^{\alpha}\bigr) =&U\biggl(\frac{\|p\|^{\alpha}}{2\|p/2\|^{\alpha}}\biggr) \\ =&\frac{2^{\alpha}}{\|p\|^{\alpha}}\frac{\|p\|^{\alpha}}{2} \\ =&2^{\alpha-1} \\ < &1. \end{aligned}
Thus,
$$\upsilon_{p}\bigl(\|p\|^{\alpha}\bigr)>\tau_{M^{*}}( \upsilon_{p/2}, \upsilon_{p/2}) \bigl(\|p\|^{\alpha}\bigr).$$
Obviously, condition (PSN3) is not satisfied, and this α-simple space is not a locally convex probabilistic seminormed space. This completes the proof. □
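The computation above can be verified numerically (a sketch assuming the absolute-value norm on $$\mathbb{R}$$ and $$\alpha=1/2$$):

```python
# Numerical check of the computation above (assuming the norm |.| on R and
# alpha = 1/2): upsilon_p(|p|^alpha) = 1, while
# tau_{M*}(upsilon_{p/2}, upsilon_{p/2})(|p|^alpha) = 2**(alpha - 1) < 1.

U = lambda x: min(max(x, 0.0), 1.0)   # uniform d.f. on (0, 1)

def upsilon(p, alpha):
    return lambda t: U(t / abs(p) ** alpha)

alpha, p = 0.5, 3.0
x = abs(p) ** alpha
lhs = upsilon(p, alpha)(x)          # upsilon_p(|p|^alpha)
rhs = upsilon(p / 2, alpha)(x / 2)  # tau_M(F, F)(x) = F(x/2), and tau_{M*} = tau_M
assert lhs == 1.0
assert abs(rhs - 2 ** (alpha - 1)) < 1e-12
assert lhs > rhs                    # condition (PSN3) fails
```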

### Theorem 2.8

Let $$(V, \|\cdot\|)$$ be a normed space, $$G\in D^{+}$$ be a strictly increasing continuous d.f., and T be a strict t-norm with additive generator f. Then, for every $$\alpha>0$$ with $$\alpha\neq 1$$, $$(V, \|\cdot\|, G, \alpha)$$ is a locally convex Š-probabilistic seminormed space under T if the following inequalities hold for all $$u, v \in (0, +\infty)$$, $$\lambda\in[0, 1]$$, and every pair of points p and q in V with $$p\neq\theta$$, $$q\neq\theta$$, and $$p+q\neq\theta$$:
1. (1)

$$(f\circ G)(\frac{u+v}{\|p+q\|^{\alpha}})\leq(f\circ G)(\frac{u}{\|p\|^{\alpha}})+(f\circ G)(\frac{v}{\|q\|^{\alpha}})$$ and

2. (2)

$$(f\circ G^{*})(\frac{u+v}{\|p\|^{\alpha}})\leq(f\circ G^{*})(\frac{u}{\lambda^{\alpha}\|p\|^{\alpha}})+(f\circ G^{*})(\frac{v}{(1-\lambda)^{\alpha}\|p\|^{\alpha}})$$,

where $$G^{*}(x):=1-G(x)$$.

### Proof

Setting $$h:=f\circ G$$ and $$h^{*}:=f\circ G^{*}$$, we get

(a) $$h, h^{*} :[0, +\infty]\longrightarrow [0, +\infty]$$, $$h(0)=h^{*}(+\infty)=+\infty$$, $$h(+\infty)=h^{*}(0)=0$$;

(b) both h and $$h^{*}$$ are continuous;

(c) h is strictly decreasing, and $$h^{*}$$ is strictly increasing.

Therefore, their inverses $$h^{-1}$$ and $$(h^{*})^{-1}$$ satisfy the same properties as h and $$h^{*}$$, respectively.

Let p and q be in V with $$p\neq\theta$$, $$q\neq\theta$$, and $$p+q\neq\theta$$, and let $$\lambda\in(0, 1)$$. For $$u, v>0$$, let
$$s:=h\biggl(\frac{u}{\|p\|^{\alpha}}\biggr),\qquad t:=h\biggl(\frac{v}{\|q\|^{\alpha}} \biggr).$$
Thus, $$h^{-1}(s)=\frac{u}{\|p\|^{\alpha}}$$ and $$h^{-1}(t)=\frac{v}{\|q\|^{\alpha}}$$. Now an easy calculation shows that (1) is equivalent to
$$\|p+q\|^{\alpha}h^{-1}(s+t)\leq \|p\|^{\alpha}h^{-1}(s)+ \|q\|^{\alpha}h^{-1}(t).$$
In a similar way, setting $$s:=h^{*}(\frac{u}{\lambda^{\alpha}\|p\|^{\alpha}})$$ and $$t:=h^{*}(\frac{v}{(1-\lambda)^{\alpha}\|p\|^{\alpha}})$$, we see that (2) is equivalent to
$$\lambda ^{\alpha}\bigl(h^{*}\bigr)^{-1}(s)+(1- \lambda)^{\alpha}\bigl(h^{*}\bigr)^{-1}(t)\leq \bigl(h^{*}\bigr)^{-1}(s+t).$$
Thus, $$\upsilon_{p}$$ is a Š-probabilistic seminorm, and by Theorem 2.1 we know that $$(V, \|\cdot\|, G, \alpha)$$ is a locally convex Š-probabilistic seminormed space. This completes the proof. □
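As a numerical sanity check of the equivalence used in the proof, one can instantiate the abstract data with concrete choices: the product t-norm, whose additive generator is $$f(x)=-\ln x$$, and $$G(x)=x/(1+x)$$, for which $$h(x)=\ln(1+1/x)$$ has the closed-form inverse $$h^{-1}(y)=1/(e^{y}-1)$$. All of these choices, and the sampled norms and points below, are illustrative assumptions; the sketch verifies that inequality (1) and its rewritten form agree at each sampled point.

```python
import math

# Assumed concrete instances: product t-norm generator f(x) = -ln x and
# a strictly increasing continuous d.f. G in D+.
def f(x):
    return -math.log(x) if x > 0 else math.inf

def G(x):
    return x / (1.0 + x) if x > 0 else 0.0

def h(x):
    """h = f o G; here h(x) = ln(1 + 1/x), strictly decreasing on (0, inf)."""
    return f(G(x))

def h_inv(y):
    """Closed-form inverse of h for this choice of f and G: 1 / (e^y - 1)."""
    return 1.0 / math.expm1(y)

alpha = 0.5
norm_p, norm_q, norm_pq = 5.0, 2.0, 6.0   # ||p||, ||q||, ||p+q||
for u, v in [(0.3, 0.7), (1.0, 2.0), (4.0, 0.5)]:
    s, t = h(u / norm_p ** alpha), h(v / norm_q ** alpha)
    # inequality (1):  h((u+v)/||p+q||^a) <= h(u/||p||^a) + h(v/||q||^a)
    ineq1 = h((u + v) / norm_pq ** alpha) <= s + t
    # its rewritten form from the proof
    rewritten = (norm_pq ** alpha * h_inv(s + t)
                 <= norm_p ** alpha * h_inv(s) + norm_q ** alpha * h_inv(t))
    assert ineq1 == rewritten
```

The agreement holds because $$h^{-1}(s)=u/\|p\|^{\alpha}$$ and $$h^{-1}(t)=v/\|q\|^{\alpha}$$ exactly, so applying the strictly decreasing map $$h^{-1}$$ to both sides of (1) reverses the inequality into the rewritten form.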

## 3 Conclusion

This work is motivated by the growing interest in research on best approximation in statistics. The concept of a locally convex probabilistic normed space, introduced following Šerstnev's definition, lies at the intersection of two notions: locally convex spaces and probabilistic normed spaces. In this paper, we discussed conditions under which a vector space is in fact a locally convex space and the relation between seminorms and locally convex spaces, and we presented some particular cases of locally convex spaces. We gave examples of locally convex probabilistic Šerstnev seminormed spaces (Theorems 2.1 and 2.2) and established properties of locally convex probabilistic seminormed spaces (Theorems 2.3 and 2.4). In the final part of this work, we proved that certain kinds of simple spaces, including α-simple spaces, are locally convex PN spaces (Theorems 2.5-2.8). The theory of locally convex probabilistic normed spaces can also be applied to fuzzy optimization problems and probabilistic models, and many innovative methods based on it may be developed further in fascinating areas of stochastic optimal control theory.

## Declarations

### Acknowledgements

The research was supported by the Fundamental Research Funds for National Science and Technology Major Projects (2016ZX05011-002) and the Fundamental Research Funds for the Central Universities (2652015142). The authors would like to thank the associated editors and anonymous referees for their valuable comments and suggestions, which have helped to improve the paper.

## Authors’ Affiliations

(1)
School of Energy Resources, China University of Geosciences (Beijing), Xueyuan Road No. 29, Haidian District, Beijing, 100083, China
(2)
Key Laboratory of Marine Reservoir Evolution and Hydrocarbon Accumulation Mechanism, Ministry of Education, China University of Geosciences (Beijing), Beijing, 100083, China
(3)
Key Laboratory for Shale Gas Exploration and Assessment, Ministry of Land and Resources, China University of Geosciences (Beijing), Beijing, 100083, China
(4)
Department of Mathematics and Physics, Hebei University of Architecture (Zhang Jiakou), Zhang Jiakou, Hebei, 075000, China
(5)
College of Mathematics, Chengdu University of Information Technology, Chengdu, Sichuan, 610225, China

## References 