Throughout this paper, let H be a real Hilbert space, whose inner product and norm are denoted by \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|\), respectively. Let \({\mathcal{F}}(H)\) be the collection of all fuzzy sets over H, that is, of all mappings from H into \([0,1]\). A mapping \(T:H\to{\mathcal{F}}(H)\) is called a fuzzy mapping on H. If T is a fuzzy mapping on H, then \(T(x)\) (denoted by \(T_{x}\) in the sequel) is a fuzzy set on H and \(T_{x}(y)\) is the degree of membership of y in \(T_{x}\). Let \(A \in{\mathcal{F}}(H)\) and \(\alpha\in[0, 1]\). Then the set \((A)_{\alpha}= \{x\in H : A(x)\geq\alpha\}\) is called the α-cut set of A.
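As a concrete illustration (not from the paper), the α-cut of a fuzzy set given by a discrete membership function can be computed directly from the definition above; the sample points and membership values below are hypothetical.

```python
# Illustrative sketch: alpha-cut of a fuzzy set represented by a discrete
# membership function A: H -> [0, 1] on a finite sample of points.
# The points and membership values are hypothetical.

def alpha_cut(membership, alpha):
    """Return the alpha-cut {x : A(x) >= alpha} as a set of points."""
    return {x for x, mu in membership.items() if mu >= alpha}

# A fuzzy set sampled at five points (here represented by reals).
A = {0.0: 0.2, 1.0: 0.5, 2.0: 0.9, 3.0: 0.5, 4.0: 0.1}

print(alpha_cut(A, 0.5))   # points with membership at least 0.5
```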
In this paper, we denote by \((\Omega,\Sigma)\) a measurable space, where Ω is a set and Σ is a σ-algebra of subsets of Ω, and we denote by \({\mathcal{B}}(H)\), \(2^{H}\), \(\operatorname{CB}(H)\), and \({\hat{H}}({\cdot}, {\cdot})\) the Borel σ-field on H, the family of all nonempty subsets of H, the family of all nonempty closed bounded subsets of H, and the Hausdorff metric on \(\operatorname{CB}(H)\), respectively.
Definition 2.1
[27]
A mapping \(x :\Omega\to H\) is said to be measurable if for any \(B\in{\mathcal{B}}(H)\), \(\{t\in\Omega:x(t)\in B\}\in \Sigma\).
Definition 2.2
[27]
A mapping \(f :\Omega\times H\to H\) is called a random operator if for any \(x\in H\), \(f (t, x) = x(t)\) is measurable. A random operator f is said to be continuous if for any \(t\in\Omega\), the mapping \(f (t,\cdot) : H\to H\) is continuous.
Definition 2.3
[27]
A multi-valued mapping \(T:\Omega\to2^{H}\) is said to be measurable if for any \(B\in{\mathcal{B}}(H)\), \(T^{-1}(B)=\{ t\in\Omega:T(t)\cap B\neq\emptyset\}\in\Sigma\).
Definition 2.4
[27]
A mapping \(w :\Omega\to H\) is called a measurable selection of a multi-valued measurable mapping \(T :\Omega\to 2^{H}\) if w is measurable and for any \(t\in\Omega\), \(w(t)\in T (t)\).
Definition 2.5
[27]
A mapping \(T :\Omega\times H\to2^{H}\) is called a random multi-valued mapping if for any \(x\in H\), \(T (\cdot, x)\) is measurable. A random multi-valued mapping \(T :\Omega\times H\to \operatorname{CB}(H)\) is said to be H-continuous if for any \(t\in\Omega\), \(T (t,\cdot)\) is continuous in the Hausdorff metric.
Definition 2.6
[27]
A fuzzy mapping \(T : \Omega\to{\mathcal {F}}(H)\) is called measurable, if for any \(\alpha\in(0, 1]\), \((T(\cdot))_{\alpha}:\Omega\to2^{H}\) is a measurable multi-valued mapping.
Definition 2.7
[27]
A fuzzy mapping \(T : \Omega\times H\to {\mathcal{F}}(H)\) is called a random fuzzy mapping, if for any \(x\in H\), \(T(\cdot, x) :\Omega\to{\mathcal{F}}(H)\) is a measurable fuzzy mapping.
Clearly, random fuzzy mappings include multi-valued mappings, random multi-valued mappings, and fuzzy mappings as special cases.
Let \(\hat{T}:\Omega\times H\to{\mathcal{F}}(H)\) be a random fuzzy mapping satisfying the following condition:
(I): there exists a mapping \(a: H\to[0,1]\) such that
$$({\hat{T}}_{t,x})_{a(x)}\in \operatorname{CB}(H),\quad \forall(t,x)\in\Omega\times H. $$
By using the random fuzzy mapping T̂, we can define a random multi-valued mapping T as follows:
$$T:\Omega\times H\to \operatorname{CB}(H), \qquad (t,x)\to({\hat{T}}_{t,x})_{a(x)}, \quad \forall (t,x)\in\Omega\times H. $$
T is called the random multi-valued mapping induced by the random fuzzy mapping T̂.
Given a mapping \(a:H\to[0,1]\), a random fuzzy mapping \(\hat{T}:\Omega \times H\to{\mathcal{F}}(H)\) satisfying the condition (I), and a random operator \(h:\Omega\times H\to H\) with \(\operatorname{Im}(h)\cap \operatorname{dom}(\partial\phi )\neq\emptyset\), we consider the following random generalized variational inequality problem (for short, RGVIP): Find measurable mappings \(x,w:\Omega\to H\) such that for all \(t\in\Omega\) and all \(y(t)\in H\),
$$ {\hat{T}}_{t,x(t)}\bigl(w(t)\bigr)\geq a\bigl(x(t)\bigr), \qquad \bigl\langle w(t),y(t)-h\bigl(t,x(t)\bigr) \bigr\rangle +\phi\bigl(y(t)\bigr)-\phi \bigl(h\bigl(t,x(t)\bigr)\bigr)\geq0, $$
(2.1)
where ∂ϕ denotes the sub-differential of a proper, convex, and lower semicontinuous function \(\phi:H\to\mathbb{R}\cup\{+\infty\}\) with its effective domain being closed.
The pair of measurable mappings \((x,w)\) is called a random solution of the RGVIP (2.1).
Special cases:
(I) If a is the zero mapping and \(T:H\to H \) is a single-valued mapping, then the RGVIP (2.1) reduces to the generalized mixed variational inequality problem, denoted by GMVIP, which consists in finding \(x\in H\) such that
$$ \bigl\langle Tx,y-h(x) \bigr\rangle +\phi(y)-\phi\bigl(h(x)\bigr)\geq0,\quad \forall y\in H, $$
(2.2)
which was studied by Solodov [14], who introduced three gap functions for the GMVIP (2.2) and used them to obtain error bounds.
(II) If \(h(x)=x\) for all \(x\in H\), then the GMVIP (2.2) reduces to the mixed variational inequality problem, denoted by MVIP, which consists in finding \(x\in H\) such that
$$ \langle Tx, y-x \rangle+\phi(y)-\phi(x)\geq0, \quad \forall y\in H, $$
(2.3)
which was considered by Tang and Huang [15], who introduced two regularized gap functions for the MVIP (2.3) and studied their differentiability properties.
(III) If \(\phi(\cdot)\) is the indicator function of a closed set K in H, then the MVIP (2.3) reduces to the classical variational inequality problem, denoted by VIP, which consists in finding \(x\in K\) such that
$$ \langle Tx, y-x\rangle\geq0,\quad \forall y\in K, $$
(2.4)
which was studied in [1, 3, 4, 6, 11, 15], where local and global error bounds for the VIP (2.4) were derived in terms of the regularized gap functions and the D-gap functions.
First of all, we recall the following well-known results and concepts.
For the VIP (2.4), it is well known that \(x\in K\) is a solution if and only if
$$0 =x-P_{K}\bigl[x-\theta T(x)\bigr], $$
where \(P_{K}\) is the orthogonal projector onto K and \(\theta>0\) is arbitrary. The right-hand side of the above equation is commonly known as the natural residual vector.
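The projection characterization above can be checked numerically. The following hedged sketch (the box K, the affine operator T, and the step size θ are illustrative choices, not from the paper) shows the natural residual vanishing at a solution of the VIP (2.4) and not at a non-solution.

```python
import numpy as np

# Illustrative sketch of the natural residual x - P_K[x - theta*T(x)]
# for the VIP (2.4). The box K, the operator T, and theta are hypothetical.

lo, hi = np.zeros(2), np.ones(2)  # K = [0, 1]^2

def project_K(z):
    """Orthogonal projection onto the box K."""
    return np.clip(z, lo, hi)

def T(x):
    """A strongly monotone affine operator (hypothetical)."""
    return x - np.array([2.0, -1.0])

def natural_residual(x, theta=0.5):
    """Natural residual vector x - P_K[x - theta*T(x)]."""
    return x - project_K(x - theta * T(x))

# x_star solves the VIP: <T(x_star), y - x_star> >= 0 for all y in K,
# so the residual vanishes there; it is nonzero at a non-solution.
x_star = np.array([1.0, 0.0])
print(natural_residual(x_star))
print(natural_residual(np.array([0.5, 0.5])))
```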
Throughout this paper, unless otherwise stated, the set of minimizers of a function \(g :H\to\mathbb{R}\cup\{+\infty\}\) over a set \(Y\subset H\) is denoted by \(\arg\min_{y\in Y}g(y)\). For a convex function \(\phi: H \to\mathbb{R}\cup\{+\infty\}\) with effective domain \(\operatorname{dom} \phi=\{x\in H: \phi(x)<+\infty\}\), the sub-differential at \(x\in H\) is \(\partial\phi(x) = \{w\in H: \phi(y)\geq\phi(x) +\langle w, y - x\rangle, \forall y\in H\}\), and a point \(w\in\partial\phi(x)\) is called a sub-gradient of ϕ at x.
Motivated by the proximal map given in [32], we give a similar characterization for the RGVIP (2.1) in the random fuzzy environment by defining the mapping \(P_{\theta(t)}^{\phi,z}:\Omega\times H\to\operatorname{dom} \phi\) as
$$P_{\theta(t)}^{\phi,z}(t,z) =\arg\min_{y(t)\in H}\biggl\{ \phi\bigl(y(t)\bigr)+{1\over 2\theta(t)}\bigl\Vert y(t)-z(t)\bigr\Vert ^{2}\biggr\} ,\quad z(t)\in H, t\in\Omega, $$
where \(\theta:\Omega\to(0,+\infty)\) is a measurable function; \(P_{\theta(t)}^{\phi,z}\) is the so-called proximal mapping in H for a random fuzzy mapping. Note that the objective function above is proper and strongly convex. Since \(\operatorname{dom}(\phi)\) is closed, \(P_{\theta(t)}^{\phi,z}(t,z)\) is well defined and single-valued. For any measurable function \(\theta:\Omega\to (0,+\infty)\), define the residual vector
$$R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr)=h\bigl(t,x(t)\bigr) -P_{\theta(t)}^{\phi ,x}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], \quad x(t)\in H. $$
We next show that \(R_{\theta(t)}^{\phi}(t,x(t))\) plays the role of the natural residual vector for the RGVIP (2.1) in the random fuzzy setting.
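As a hedged sketch of the residual just defined (all concrete choices below are illustrative, not from the paper), consider a deterministic, single-valued specialization with \(\phi(y)=\|y\|_{1}\): its proximal map is the classical soft-thresholding operator, and the residual vanishes exactly at a solution.

```python
import numpy as np

# Hedged sketch of the proximal mapping and of the residual
# R(x) = h(x) - prox[h(x) - theta*w(x)] in a deterministic, single-valued
# specialization of (2.1). Here phi(y) = ||y||_1, whose proximal map is
# soft-thresholding; the choices of w, h, and theta are hypothetical.

def prox_l1(z, theta):
    """argmin_y { ||y||_1 + (1/(2*theta)) * ||y - z||^2 }, componentwise."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def residual(x, w, theta=0.5, h=lambda v: v):
    """Residual vector h(x) - prox[h(x) - theta*w(x)]."""
    return h(x) - prox_l1(h(x) - theta * w(x), theta)

w = lambda x: x                      # single-valued w(x) = x (hypothetical)
print(residual(np.array([3.0, -0.2]), w))   # nonzero away from a solution
print(residual(np.zeros(2), w))             # zero at the solution x = 0
```

Here x = 0 solves the specialized problem because \(-w(0)=0\in\partial\|\cdot\|_{1}(0)\), in line with the sub-gradient characterization used in Lemma 2.1 below.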
Lemma 2.1
For any measurable function
\(\theta:\Omega\to (0,+\infty)\)
and for each
\(t\in\Omega\), the measurable mapping
\(x:\Omega \to H\)
is a solution of the RGVIP (2.1) if and only if
\(R_{\theta (t)}^{\phi}(t,x(t))=0\).
Proof
Let \(R_{\theta(t)}^{\phi}(t,x(t))=0\), which implies that \(h(t,x)=P_{\theta(t)}^{\phi,x}[h(t,x)-\theta(t) w(t)]\). It is equivalent to \(h(t,x)=\arg\min_{y(t)\in H}\{\phi(y(t))+{1\over 2\theta (t)}\|y(t)-(h(t,x)-\theta(t) w(t))\|^{2}\}\). By the optimality conditions (which are necessary and sufficient, by convexity), the latter is equivalent to
$$0\in\partial\phi\bigl(h(t,x)\bigr)+{1\over \theta(t)}\bigl(h(t,x)- \bigl(h(t,x)-\theta(t) w(t)\bigr)\bigr)=\partial\phi\bigl(h(t,x)\bigr)+w(t), $$
which implies \(-w(t)\in\partial\phi(h(t,x))\). This in turn is equivalent, by the definition of the sub-gradient, to
$$\phi\bigl(y(t)\bigr)\geq\phi\bigl(h(t,x)\bigr)- \bigl\langle w(t), y(t)-h(t,x) \bigr\rangle ,\quad \forall y(t)\in H, t\in\Omega, $$
that is, \(x(t)\) solves the RGVIP (2.1). Since each step above is an equivalence, the converse implication also holds.
This completes the proof. □
Definition 2.8
A random multi-valued operator \(T: \Omega \times H \to \operatorname{CB}(H)\) is said to be strongly h-monotone, if there exists a measurable function \(\alpha:\Omega\to(0,+\infty)\) such that
$$\begin{aligned}& \bigl\langle w_{1}(t)-w_{2}(t), h\bigl(t,x_{1}(t) \bigr)-h\bigl(t,x_{2}(t)\bigr)\bigr\rangle \\& \quad \geq\alpha(t)\bigl\Vert x_{1}(t)-x_{2}(t)\bigr\Vert ^{2},\quad \forall w_{i}(t)\in T\bigl(t,x_{i}(t)\bigr), \forall x_{i}(t)\in H, i=1,2, \forall t\in\Omega. \end{aligned}$$
Definition 2.9
A random operator \(h:\Omega\times H\to H\) is said to be Lipschitz continuous, if there exists a measurable function \(L:\Omega\to(0,+\infty)\) such that
$$\bigl\Vert h\bigl(t,x_{1}(t)\bigr)-h\bigl(t,x_{2}(t) \bigr)\bigr\Vert \leq L(t)\bigl\Vert x_{1}(t)-x_{2}(t) \bigr\Vert ,\quad \forall x_{i}(t)\in H, i=1,2, \forall t\in\Omega. $$
Definition 2.10
A random multi-valued mapping \(T:\Omega \times H\to \operatorname{CB}(H)\) is said to be Ĥ-Lipschitz continuous, if there exists a measurable function \(\lambda:\Omega\to(0,+\infty)\) such that
$${\hat{H}}\bigl(T\bigl(t,x(t)\bigr),T\bigl(t,x_{0}(t)\bigr)\bigr)\leq \lambda(t)\bigl\Vert x(t)-x_{0}(t)\bigr\Vert ,\quad \forall x(t),x_{0}(t)\in H. $$
Now, we give the following lemmas.
Lemma 2.2
[21]
Let
\(T:\Omega\times H\to \operatorname{CB}(H)\)
be a
Ĥ-Lipschitz continuous random multi-valued mapping, then for measurable mapping
\(x:\Omega\to H\), the multi-valued mapping
\(T(\cdot,x(\cdot)):\Omega\to \operatorname{CB}(H)\)
is measurable.
Lemma 2.3
[21]
Let
\(T_{1}, T_{2}:\Omega\to \operatorname{CB}(H)\)
be two measurable multi-valued mappings, \(\epsilon> 0\)
be a constant, and
\(w_{1}:\Omega\to H\)
be a measurable selection of
\(T_{1}\), then there exists a measurable selection
\(w_{2}:\Omega\to H\)
of
\(T_{2}\)
such that for all
\(t\in\Omega\),
$$\bigl\Vert w_{1}(t)-w_{2}(t)\bigr\Vert \leq(1+ \epsilon){\hat{H}}\bigl(T_{1}(t),T_{2}(t)\bigr). $$
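Lemma 2.3 can be illustrated on finite sets, where the selection reduces to a nearest-point choice; the sets \(T_{1}\), \(T_{2}\) and the constant ε below are hypothetical.

```python
# Finite-set illustration of Lemma 2.3 (the sets T1, T2 and eps are
# hypothetical): for any w1 in T1, the nearest point w2 in T2 satisfies
# |w1 - w2| <= H(T1, T2) <= (1 + eps) * H(T1, T2), H the Hausdorff metric.

def hausdorff(S1, S2):
    """Hausdorff distance between two finite nonempty sets of reals."""
    d = lambda a, S: min(abs(a - b) for b in S)
    return max(max(d(a, S2) for a in S1), max(d(b, S1) for b in S2))

def select(w1, S2):
    """Nearest-point selection from S2 (finite analogue of the lemma's w2)."""
    return min(S2, key=lambda b: abs(w1 - b))

T1, T2, eps = {0.0, 2.0, 5.0}, {0.5, 2.2, 4.0}, 0.1
H = hausdorff(T1, T2)
for w1 in T1:
    w2 = select(w1, T2)
    assert abs(w1 - w2) <= (1 + eps) * H
print(H)
```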
Definition 2.11
A function \(G: H\to\mathbb{R}\) is said to be a gap function for the RGVIP (2.1), if it satisfies the following properties:
(i) \(G(x)\geq0\), \(\forall x\in H\);
(ii) \(G(x^{*}) = 0\) if and only if \(x^{*}\in H\) solves the RGVIP (2.1).
Lemma 2.4
Let
\(x:\Omega\to H\)
be a measurable mapping and
\(\theta:\Omega\to(0,+\infty)\)
be a measurable function, then for all
\(x(t)\in H\)
and for each
\(t\in\Omega\), \(\Vert R_{\theta(t)}^{\phi}(t,x(t))\Vert \)
is a gap function for the RGVIP (2.1).
Proof
It is clear that \(\|R_{\theta(t)}^{\phi}(t,x(t))\|\geq0\) for each \(t\in\Omega\), \(x(t)\in H\) and \(\|R_{\theta(t)}^{\phi}(t,x^{*}(t))\|= 0\) if and only if \(h(t,x^{*})=P_{\theta(t)}^{\phi,x^{*}}[h(t,x^{*})-\theta(t) w(t)]\). By the definition of \(P_{\theta(t)}^{\phi,x^{*}}\), \(P_{\theta (t)}^{\phi,x^{*}}[h(t,x^{*})-\theta(t) w(t)]\) satisfies
$$\begin{aligned} 0 \in&\partial\phi\bigl(P_{\theta(t)}^{\phi,x^{*}}\bigl[h\bigl(t,x^{*}\bigr)- \theta(t) w(t)\bigr]\bigr) \\ &{}+{1\over \theta(t)}\bigl(P_{\theta(t)}^{\phi,x^{*}} \bigl[h\bigl(t,x^{*}\bigr)-\theta(t) w(t)\bigr]-\bigl(h(t,x)-\theta(t) w(t)\bigr) \bigr). \end{aligned}$$
Hence,
$$-w(t)+{1\over \theta(t)}\bigl(h\bigl(t,x^{*}\bigr)-P_{\theta(t)}^{\phi ,x^{*}} \bigl[h\bigl(t,x^{*}\bigr)-\theta(t) w(t)\bigr]\bigr)\in\partial\phi \bigl(P_{\theta(t)}^{\phi ,x^{*}}\bigl[h\bigl(t,x^{*}\bigr)-\theta(t) w(t)\bigr] \bigr). $$
It follows that, for all \(y(t)\in H\),
$$\begin{aligned}& \phi\bigl(y(t)\bigr)-\phi\bigl(P_{\theta(t)}^{\phi,x^{*}}\bigl[h\bigl(t,x^{*} \bigr)-\theta(t) w(t)\bigr]\bigr) \\& \quad {}+ \biggl\langle w(t)-{1\over \theta(t)}\bigl(h\bigl(t,x^{*} \bigr)-P_{\theta(t)}^{\phi,x^{*}}\bigl[h\bigl(t,x^{*}\bigr)-\theta(t) w(t) \bigr]\bigr), \\& \quad y(t)-P_{\theta(t)}^{\phi,x^{*}}\bigl[h\bigl(t,x^{*}\bigr)-\theta (t) w(t)\bigr] \biggr\rangle \geq0. \end{aligned}$$
Since \(h(t,x^{*})=P_{\theta(t)}^{\phi,x^{*}}[h(t,x^{*})-\theta(t) w(t)]\), the above inequality reduces to \(\langle w(t), y(t)-h(t,x^{*})\rangle+\phi(y(t))-\phi(h(t,x^{*}))\geq0\) for all \(y(t)\in H\); thus \(x^{*}(t)\) solves the RGVIP (2.1).
This completes the proof. □
Next we study those conditions under which the RGVIP (2.1) has a unique solution.
Lemma 2.5
Let
\((\Omega, \Sigma)\)
be a measurable space and
H
be a real Hilbert space. Let
\(h:\Omega\times H\to H\)
be a random mapping and
\(\phi:H\to\mathbb{R}\cup\{+\infty\}\)
be an extended real-valued function. Let the random fuzzy mapping
\({\hat{T}}:\Omega \times H\to{\mathcal{F}}(H)\)
satisfy the condition (I), and a random multi-valued mapping
\(T:\Omega\times H\to \operatorname{CB}(H)\)
induced by the random fuzzy mapping
T̂
be strongly
h-monotone with the measurable function
\(\alpha:\Omega\to(0,+\infty)\), then the RGVIP (2.1) has a unique solution.
Proof
Suppose that two measurable mappings \(x_{1},x_{2}:\Omega\to H\) are both solutions of the RGVIP (2.1) with \(x_{1}(t)\neq x_{2}(t)\) for some \(t\in\Omega\). Then we have
$$\begin{aligned}& \bigl\langle w_{1}(t), y(t)-h\bigl(t,x_{1}(t)\bigr) \bigr\rangle +\phi\bigl(y(t)\bigr)-\phi \bigl(h\bigl(t,x_{1}(t)\bigr)\bigr) \geq0, \end{aligned}$$
(2.5)
$$\begin{aligned}& \bigl\langle w_{2}(t), y(t)-h\bigl(t,x_{2}(t)\bigr) \bigr\rangle +\phi\bigl(y(t)\bigr)-\phi \bigl(h\bigl(t,x_{2}(t)\bigr)\bigr) \geq0. \end{aligned}$$
(2.6)
Taking \(y(t) =h(t,x_{2}(t))\) in (2.5) and \(y(t) =h(t,x_{1}(t))\) in (2.6) and adding the resulting inequalities, we obtain
$$\bigl\langle w_{1}(t)-w_{2}(t), h\bigl(t,x_{2}(t) \bigr)-h\bigl(t,x_{1}(t)\bigr) \bigr\rangle \geq0. $$
Since T is strongly h-monotone with the measurable function \(\alpha :\Omega\to(0,+\infty)\), we have
$$0\leq \bigl\langle w_{1}(t)-w_{2}(t), h \bigl(t,x_{2}(t)\bigr)-h\bigl(t,x_{1}(t)\bigr) \bigr\rangle \leq -\alpha(t)\bigl\Vert x_{1}(t)-x_{2}(t)\bigr\Vert ^{2}, $$
which implies that \(x_{1}(t) =x_{2}(t)\) for all \(t\in\Omega\); that is, the solution of the RGVIP (2.1) is unique.
This completes the proof. □
Now, by using the natural residual vector \(R_{\theta(t)}^{\phi}(t,x(t))\), we derive error bounds for the solution of the RGVIP (2.1).
Theorem 2.1
Suppose for each
\(t\in\Omega\), \(x_{0}(t)\in H\)
is a solution of the RGVIP (2.1). Let
\((\Omega, \Sigma)\)
be a measurable space, and
H
be a real Hilbert space. Let the random fuzzy mapping
\({\hat{T}}:\Omega\times H\to{\mathcal{F}}(H)\)
satisfy the condition (I) and
\(T:\Omega\times H\to \operatorname{CB}(H)\)
be the random multi-valued mapping induced by the random fuzzy mapping
T̂. Let
\(h:\Omega\times H\to H\)
be a random mapping and
\(\phi:H\to\mathbb{R}\cup\{+\infty\}\)
be an extended real-valued function such that
(i)
for each
\(t\in\Omega\), the measurable mapping
T
is strongly
h-monotone and
Ĥ-Lipschitz continuous with the measurable functions
\(\alpha, \lambda:\Omega\to(0,+\infty)\), respectively;
(ii)
for each
\(t\in\Omega\), the mapping
\(h(t,\cdot)\)
is Lipschitz continuous with the measurable function
\(L:\Omega\to(0,+\infty)\);
(iii)
if there exists a measurable function
\(K:\Omega\to (0,+\infty)\)
such that
$$\bigl\Vert P_{\theta(t)}^{\phi,x}(z)-P_{\theta(t)}^{\phi,x_{0}}(z) \bigr\Vert \leq K(t)\bigl\Vert x(t)-x_{0}(t)\bigr\Vert ,\quad \forall x(t),x_{0}(t),z(t)\in H, $$
then for any
\(x(t)\in H\), \(t\in\Omega\)
and
\(\theta(t)>{K(t)L(t)\over \alpha(t)-K(t)\lambda(t)(1+\epsilon)}\) (with \(\alpha(t)>K(t)\lambda(t)(1+\epsilon)\)), we have
$$\bigl\Vert x(t)-x_{0}(t)\bigr\Vert \leq{L(t)+\theta(t)\lambda(t)(1+\epsilon)\over \alpha (t)\theta(t)-K(t) (L(t)+\theta(t)\lambda(t)(1+\epsilon) )} \bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr)\bigr\Vert . $$
Proof
Let, for each \(t\in\Omega\), \(x_{0}(t)\in H\) be a solution of the RGVIP (2.1); then
$$\bigl\langle w_{0}(t), y(t)-h(t,x_{0})\bigr\rangle +\phi \bigl(y(t)\bigr)-\phi\bigl(h\bigl(t,x_{0}(t)\bigr)\bigr)\geq0,\quad \forall y(t)\in H. $$
Substituting \(y(t)=P_{\theta(t)}^{\phi,x_{0}}[h(t,x(t))-\theta(t) w(t)]\) in the above inequality, we have
$$\begin{aligned}& \bigl\langle w_{0}(t), P_{\theta(t)}^{\phi,x_{0}}\bigl[h \bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x_{0}(t)\bigr) \bigr\rangle \\& \quad {}+\phi \bigl(P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)- \theta(t) w(t)\bigr] \bigr)-\phi\bigl(h\bigl(t,x_{0}(t)\bigr)\bigr) \geq0. \end{aligned}$$
(2.7)
For any fixed \(x(t)\in H\) and measurable function \(\theta:\Omega\to (0,+\infty)\), we observe that
$$\begin{aligned} h\bigl(t,x(t)\bigr)-\theta(t) w(t) \in&\bigl(I+\theta(t)\partial\phi\bigr) \bigl(I+\theta (t)\partial\phi\bigr)^{-1} \bigl(h\bigl(t,x(t)\bigr)- \theta(t) w(t) \bigr) \\ =&\bigl(I+\theta (t)\partial\phi\bigr)P_{\theta(t)}^{\phi,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], \end{aligned}$$
which is equivalent to
$$\begin{aligned} \begin{aligned} &{-}w(t)+{1\over \theta(t)} \bigl[h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi ,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr] \bigr] \\ &\quad \in\partial\phi \bigl(P_{\theta (t)}^{\phi,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr] \bigr). \end{aligned} \end{aligned}$$
By the definition of a sub-differential, we have
$$\begin{aligned}& \biggl\langle w(t)-{1\over \theta(t)} \bigl(h\bigl(t,x(t) \bigr)-P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t) \bigr] \bigr), y(t)-P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)- \theta(t) w(t)\bigr] \biggr\rangle \\& \quad {}+\phi\bigl(y(t)\bigr)-\phi \bigl(P_{\theta(t)}^{\phi,x_{0}}\bigl[h \bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr] \bigr)\geq0. \end{aligned}$$
Taking \(y(t)=h(t,x_{0}(t))\) in the above we get
$$\begin{aligned}& \biggl\langle w(t)-{1\over \theta(t)} \bigl(h\bigl(t,x(t) \bigr)-P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t) \bigr] \bigr), \\& \quad h\bigl(t,x_{0}(t)\bigr)-P_{\theta(t)}^{\phi ,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr] \biggr\rangle \\& \quad {}+\phi\bigl(h\bigl(t,x_{0}(t)\bigr)\bigr)-\phi \bigl(P_{\theta(t)}^{\phi,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta (t) w(t) \bigr] \bigr)\geq0. \end{aligned}$$
This implies that
$$\begin{aligned}& \biggl\langle -w(t)+{1\over \theta(t)} \bigl(h\bigl(t,x(t) \bigr)-P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t) \bigr] \bigr), \\& \quad P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x_{0}(t)\bigr) \biggr\rangle \\& \quad {}+\phi\bigl(h\bigl(t,x_{0}(t)\bigr)\bigr)-\phi \bigl(P_{\theta(t)}^{\phi,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta (t) w(t) \bigr] \bigr)\geq0. \end{aligned}$$
(2.8)
Adding (2.7) and (2.8), we get
$$\begin{aligned}& \biggl\langle w_{0}(t)-w(t)+{1\over \theta(t)} \bigl(h \bigl(t,x(t)\bigr)-P_{\theta (t)}^{\phi,x_{0}}\bigl[h\bigl(t,x(t)\bigr)- \theta(t) w(t)\bigr] \bigr), \\& \quad P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t) \bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x_{0}(t)\bigr) \biggr\rangle \geq0. \end{aligned}$$
This can also be written as
$$\begin{aligned}& \theta(t) \bigl\langle w_{0}(t)-w(t), P_{\theta(t)}^{\phi ,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x(t)\bigr) \bigr\rangle \\& \quad {}+ \theta(t) \bigl\langle w_{0}(t)-w(t), h\bigl(t,x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle \\& \quad {}+ \bigl\langle h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}}\bigl[h \bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], P_{\theta(t)}^{\phi,x_{0}}\bigl[h \bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x(t)\bigr) \bigr\rangle \\& \quad {}+ \bigl\langle h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}}\bigl[h \bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], h\bigl(t,x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle \geq0, \end{aligned}$$
which implies that
$$\begin{aligned}& \theta(t) \bigl\langle w_{0}(t)-w(t), P_{\theta(t)}^{\phi ,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x(t)\bigr) \bigr\rangle \\& \qquad {}+ \bigl\langle h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], h\bigl(t,x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle \\& \quad \geq\theta(t) \bigl\langle w_{0}(t)-w(t), h \bigl(t,x_{0}(t)\bigr)-h\bigl(t,x(t)\bigr) \bigr\rangle \\& \qquad {}+ \bigl\langle h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr] \bigr\rangle . \end{aligned}$$
By using the strong h-monotonicity of T, we get
$$\begin{aligned}& \theta(t) \bigl\langle w_{0}(t)-w(t), P_{\theta(t)}^{\phi ,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x(t)\bigr) \bigr\rangle \\& \qquad {}+ \bigl\langle h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], h\bigl(t,x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle \\& \quad \geq\alpha(t)\theta(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert ^{2}+\bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr) \bigr\Vert ^{2}. \end{aligned}$$
The above inequality can also be written as
$$\begin{aligned}& \theta(t) \bigl\langle w_{0}(t)-w(t), P_{\theta(t)}^{\phi ,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-h\bigl(t,x(t)\bigr) \\& \qquad {}-P_{\theta(t)}^{\phi ,x} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]+P_{\theta(t)}^{\phi,x} \bigl[h\bigl(t,x(t)\bigr)-\theta (t) w(t)\bigr] \bigr\rangle \\& \qquad {}+ \bigl\langle h\bigl(t,x(t)\bigr)-P_{\theta(t)}^{\phi,x_{0}} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]-P_{\theta(t)}^{\phi,x} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr] \\& \qquad {}+P_{\theta (t)}^{\phi,x} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr], h\bigl(t,x(t)\bigr)-h \bigl(t,x_{0}(t)\bigr) \bigr\rangle \\& \quad \geq\alpha(t)\theta(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert ^{2}+\bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr) \bigr\Vert ^{2}. \end{aligned}$$
By using the Cauchy-Schwarz inequality together with the triangle inequality, we have
$$\begin{aligned}& \theta(t)\bigl\Vert w_{0}(t)-w(t)\bigr\Vert \bigl\Vert P_{\theta(t)}^{\phi ,x_{0}}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t) \bigr]-P_{\theta(t)}^{\phi,x}\bigl[h\bigl(t,x(t)\bigr)-\theta (t) w(t) \bigr]\bigr\Vert \\& \qquad {}+\theta(t)\bigl\Vert w_{0}(t)-w(t)\bigr\Vert \bigl\Vert P_{\theta(t)}^{\phi ,x}\bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t) \bigr]-h\bigl(t,x(t)\bigr)\bigr\Vert \\& \qquad {}+\bigl\Vert h\bigl(t,x(t)\bigr)- P_{\theta(t)}^{\phi,x} \bigl[h\bigl(t,x(t)\bigr)-\theta(t) w(t)\bigr]\bigr\Vert \bigl\Vert h \bigl(t,x_{0}(t)\bigr)-h\bigl(t,x(t)\bigr)\bigr\Vert \\& \qquad {}+ \bigl\Vert P_{\theta(t)}^{\phi,x}\bigl[h\bigl(t,x(t)\bigr)- \theta(t) w(t)\bigr]-P_{\theta (t)}^{\phi,x_{0}}\bigl[h\bigl(t,x(t)\bigr)- \theta(t) w(t)\bigr]\bigr\Vert \\& \qquad {}\times\bigl\Vert h\bigl(t,x_{0}(t)\bigr)-h \bigl(t,x(t)\bigr)\bigr\Vert \\& \quad \geq\alpha(t)\theta(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert ^{2}+\bigl\Vert R_{\theta (t)}^{\phi}\bigl(t,x(t)\bigr) \bigr\Vert ^{2}. \end{aligned}$$
Now using the Ĥ-Lipschitz continuity of T, the Lipschitz continuity of h, and assumption (iii) on \(P_{\theta(t)}^{\phi,x}(\cdot)\), we have
$$\begin{aligned}& \theta(t)\lambda(t) (1+\epsilon)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert K(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert \\& \qquad {}+\theta(t)\lambda(t) (1+\epsilon)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert \bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr)\bigr\Vert \\& \qquad {}+\bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr)\bigr\Vert L(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert + K(t)L(t)\bigl\Vert x(t)-x_{0}(t)\bigr\Vert ^{2} \\& \quad \geq\alpha(t)\theta(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert ^{2}+\bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr) \bigr\Vert ^{2}. \end{aligned}$$
The above can be rewritten as
$$\begin{aligned}& K(t)\theta(t)\lambda(t) (1+\epsilon)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert ^{2}+\theta(t)\lambda (t) (1+\epsilon)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert \bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr)\bigr\Vert \\& \qquad {}+L(t)\bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr) \bigr\Vert \bigl\Vert x_{0}(t)-x(t)\bigr\Vert + K(t)L(t)\bigl\Vert x(t)-x_{0}(t)\bigr\Vert ^{2} \\& \quad \geq\alpha(t)\theta(t)\bigl\Vert x_{0}(t)-x(t)\bigr\Vert ^{2}. \end{aligned}$$
Therefore, we have
$$\begin{aligned} \bigl\Vert x(t)-x_{0}(t)\bigr\Vert \leq&{L(t)+\theta(t)\lambda(t)(1+\epsilon)\over \alpha (t)\theta(t)-K(t) (L(t)+\theta(t)\lambda(t)(1+\epsilon) )} \\ &{}\times \bigl\Vert R_{\theta(t)}^{\phi}\bigl(t,x(t)\bigr)\bigr\Vert , \quad \forall x(t)\in H, t\in\Omega, \end{aligned}$$
where \(\theta(t)>{K(t)L(t)\over \alpha(t)-K(t)\lambda(t)(1+\epsilon )}\).
This completes the proof. □
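The error bound of Theorem 2.1 can be checked numerically in its simplest deterministic specialization: single-valued \(T(x)=Ax\) with A symmetric positive definite, \(h\) the identity (so \(L=1\)), and \(\phi\equiv0\) (so the proximal map is the identity and any small \(K>0\) satisfies assumption (iii)). All constants in the sketch below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Hedged numerical check of Theorem 2.1 in a deterministic, single-valued
# specialization: T(x) = A@x with A symmetric positive definite, h = id,
# phi = 0 (prox = identity, any small K > 0 works). Constants hypothetical.

A = np.diag([2.0, 3.0])
alpha, lam, L, K, eps, theta = 2.0, 3.0, 1.0, 0.01, 0.01, 1.0
# Step-size condition of the theorem: theta > K*L / (alpha - K*lam*(1+eps)).
assert theta > K * L / (alpha - K * lam * (1 + eps))

def residual(x):
    # R(x) = h(x) - prox[h(x) - theta*T(x)] = theta * A@x, since prox = h = id.
    return theta * (A @ x)

coeff = (L + theta * lam * (1 + eps)) / (
    alpha * theta - K * (L + theta * lam * (1 + eps)))

x0 = np.zeros(2)                       # unique solution: A@x = 0 iff x = 0
rng = np.random.default_rng(0)
for _ in range(100):                   # bound ||x - x0|| <= coeff * ||R(x)||
    x = rng.normal(size=2)
    assert np.linalg.norm(x - x0) <= coeff * np.linalg.norm(residual(x))
print(round(coeff, 3))
```

With these constants the bound holds with room to spare, since \(\|Ax\|\geq2\|x\|\) for this A; the sketch only illustrates the inequality's shape, not its sharpness.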