Modified forward-backward splitting midpoint method with superposition perturbations for the sum of two kinds of infinite accretive mappings and its applications
Journal of Inequalities and Applications volume 2017, Article number: 227 (2017)
Abstract
In a real uniformly convex and p-uniformly smooth Banach space, a modified forward-backward splitting iterative algorithm is presented in which computational errors and the superposition of perturbed operators are taken into account. The iterative sequence is proved to converge strongly to a zero point of the sum of infinitely many m-accretive mappings and infinitely many \(\theta_{i}\)-inversely strongly accretive mappings, which is also the unique solution of a certain variational inequality. Some new proof techniques are introduced; in particular, a new inequality is employed compared with some recent work. Moreover, applications of the new iterative algorithm to integro-differential systems and convex minimization problems are presented.
1 Introduction and preliminaries
Let X be a real Banach space with norm \(\Vert \cdot \Vert \) and \(X^{*}\) be its dual space. ‘→’ denotes strong convergence and \(\langle x,f \rangle\) is the value of \(f \in X^{*}\) at \(x \in X\).
The modulus of smoothness of X is the function \(\rho_{X}: [0,+\infty) \rightarrow [0,+\infty)\) defined by
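$$\rho_{X}(t): = \sup \biggl\{ \frac{ \Vert x+y \Vert + \Vert x-y \Vert }{2}-1: \Vert x \Vert = 1, \Vert y \Vert \leq t \biggr\} . $$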
A Banach space X is said to be uniformly smooth if \(\frac{\rho_{X}(t)}{t} \rightarrow0\) as \(t \rightarrow0\). Let \(p > 1\) be a real number. A Banach space X is said to be p-uniformly smooth with constant \(K_{p}\) if there exists \(K_{p} > 0\) such that \(\rho_{X}(t)\leq K_{p}t^{p}\) for \(t > 0\). It is well known that every p-uniformly smooth Banach space is uniformly smooth. For \(p> 1\), the generalized duality mapping \(J_{p}: X \rightarrow2^{X^{*}}\) is defined by
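$$J_{p}(x): = \bigl\{ f\in X^{*}: \langle x, f\rangle = \Vert x \Vert ^{p}, \Vert f \Vert = \Vert x \Vert ^{p-1} \bigr\} ,\quad x \in X. $$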
In particular, \(J: = J_{2}\) is called the normalized duality mapping.
For a mapping \(T: D(T) \subseteq X \rightarrow X\), we use \(F(T)\) and \(N(T)\) to denote its fixed point set and zero point set, respectively; that is, \(F(T): = \{x\in D(T): Tx = x\}\) and \(N(T): = \{x \in D(T): Tx = 0\}\). The mapping \(T: D(T) \subseteq X \rightarrow X\) is said to be
(1) non-expansive if $$\Vert Tx - Ty \Vert \leq \Vert x-y \Vert \quad \text{for all } x,y \in D(T); $$
(2) a contraction with coefficient \(k \in(0,1)\) if $$\Vert Tx - Ty \Vert \leq k \Vert x - y \Vert \quad \text{for all } x,y \in D(T); $$
(3) accretive [1, 2] if for all \(x, y \in D(T)\), \(\langle Tx - Ty, j(x-y)\rangle\geq0\), where \(j(x-y) \in J(x-y)\); m-accretive if T is accretive and \(R(I+\lambda T) = X\) for all \(\lambda> 0\);
(4) θ-inversely strongly accretive [3] if, for \(\theta> 0\) and all \(x,y \in D(T)\), there exists \(j_{p}(x - y) \in J_{p}(x-y)\) such that $$\bigl\langle Tx - Ty, j_{p}(x-y) \bigr\rangle \geq\theta \Vert Tx - Ty \Vert ^{p}; $$
(5) γ-strongly accretive [2, 3] if for each \(x, y \in D(T)\), there exists \(j(x-y) \in J(x-y)\) such that $$\bigl\langle Tx - Ty, j(x-y) \bigr\rangle \geq\gamma \Vert x - y \Vert ^{2} $$ for some \(\gamma\in(0,1)\);
(6) μ-strictly pseudo-contractive [4] if for each \(x, y \in X\), there exists \(j(x-y) \in J(x-y)\) such that $$\bigl\langle Tx - Ty, j(x-y) \bigr\rangle \leq \Vert x - y \Vert ^{2}-\mu \bigl\Vert x - y - (Tx - Ty) \bigr\Vert ^{2} $$ for some \(\mu\in(0,1)\).
If T is accretive, then for each \(r>0\), the non-expansive single-valued mapping \(J_{r}^{T}: R(I+rT)\rightarrow D(T)\) defined by \(J_{r}^{T}: = (I+rT)^{-1}\) is called the resolvent of T [1]. Moreover, \(N(T) = F(J_{r}^{T})\).
Let D be a nonempty closed convex subset of X and Q be a mapping of X onto D. Then Q is said to be sunny [5] if \(Q(Q(x)+t(x-Q(x))) = Q(x)\) for all \(x \in X\) and \(t \geq 0\). A mapping Q of X into X is said to be a retraction [5] if \(Q^{2} = Q\). If a mapping Q is a retraction, then \(Q(z) = z\) for every \(z \in R(Q)\), where \(R(Q)\) is the range of Q. A subset D of X is said to be a sunny non-expansive retract of X [5] if there exists a sunny non-expansive retraction of X onto D and it is called a non-expansive retract of X if there exists a non-expansive retraction of X onto D.
Finding zero points of the sum of two accretive mappings, namely, solving the following inclusion problem, is a topic of great interest in applied mathematics:
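$$0 \in(A+B)x. \quad (1.1) $$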
For example, a stationary solution to the initial value problem of the evolution equation
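$$0 \in\frac{du}{dt}+(A+B)u,\qquad u(0) = u_{0}, $$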
can be recast as (1.1). A forward-backward splitting iterative method for (1.1) means that each iteration involves only A in the forward step and only B in the backward step, rather than the sum \(A+B\). The classical forward-backward splitting algorithm is given in the following way:
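The display omitted here is, in its standard form (see, e.g., [6–8, 12]),

$$x_{n+1} = J_{r_{n}}^{B}(x_{n}-r_{n}Ax_{n}) = (I+r_{n}B)^{-1}(x_{n}-r_{n}Ax_{n}),\quad n \geq0. $$

As an illustration only (not taken from the paper), the following minimal Python sketch runs this iteration on the real line with \(A = \nabla g\) for a smooth convex g and \(B = \partial h\) for \(h(x)= \vert x \vert \), whose resolvent is the soft-thresholding map; the function names are ours.

```python
def soft_threshold(x, r):
    """Resolvent (I + r*B)^{-1} of B = subdifferential of |x|: soft-thresholding."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

def forward_backward(x0, grad_g, r=0.1, n_iter=200):
    """Classical forward-backward iteration x_{n+1} = (I + r*B)^{-1}(x_n - r*A(x_n))."""
    x = x0
    for _ in range(n_iter):
        x = soft_threshold(x - r * grad_g(x), r)  # forward step with A, backward step with B
    return x

# Example: g(x) = 0.5*(x - 3)**2, so A(x) = x - 3; the zero of A + B is x = 2.
print(forward_backward(10.0, grad_g=lambda x: x - 3.0))  # approximately 2.0
```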
Some of the related work can be seen in [6–8] and the references therein.
In 2015, Wei et al. [9] extended the related work on (1.1) from a Hilbert space to a real smooth and uniformly convex Banach space and from two accretive mappings to two finite families of accretive mappings:
where D is a nonempty, closed and convex sunny non-expansive retract of X, \(Q_{D}\) is the sunny non-expansive retraction of X onto D, \(\{e_{n}\}\) is the error sequence, \(A_{i}\) and \(B_{i}\) are m-accretive mappings and θ-inversely strongly accretive mappings, respectively, for \(i = 1,2,\ldots, N\), \(T: X \rightarrow X\) is a strongly positive linear bounded operator with coefficient γ̅, \(f: X \rightarrow X\) is a contraction, \(\sum_{m = 0}^{N}a_{m} = 1\) and \(0 < a_{m} < 1\). The iterative sequence \(\{x_{n}\}\) is proved to converge strongly to \(p_{0} \in\bigcap_{i = 1}^{N} N(A_{i}+B_{i})\), which solves the variational inequality
for \(\forall z\in\bigcap_{i = 1}^{N} N(A_{i}+B_{i})\) under some conditions.
The implicit midpoint rule is one of the powerful numerical methods for solving ordinary differential equations, and it has also been studied for fixed point problems. In [10], Alghamdi et al. presented the following implicit midpoint rule for approximating a fixed point of a non-expansive mapping in a Hilbert space H:
where T is a non-expansive mapping from H to H. If \(F(T) \neq \emptyset\), they proved that \(\{x_{n}\}\) converges weakly to a point \(p_{0} \in F(T)\) under some conditions.
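The rule in [10] is of the form \(x_{n+1} = (1-t_{n})x_{n}+t_{n}T(\frac{x_{n}+x_{n+1}}{2})\), so each step is itself an implicit equation in \(x_{n+1}\). The following minimal Python sketch (ours, with a constant step t for simplicity) resolves this implicit equation by an inner fixed-point loop, using the non-expansive map \(T = \cos\) on the real line.

```python
import math

def implicit_midpoint(T, x0, t=0.5, n_iter=50, inner_iter=100):
    """Implicit midpoint iteration x_{n+1} = (1-t)*x_n + t*T((x_n + x_{n+1})/2).

    The implicit equation for x_{n+1} is solved by an inner fixed-point loop;
    the inner map has Lipschitz constant t/2 < 1 because T is non-expansive."""
    x = x0
    for _ in range(n_iter):
        y = x  # initial guess for x_{n+1}
        for _ in range(inner_iter):
            y = (1 - t) * x + t * T((x + y) / 2.0)
        x = y
    return x

print(implicit_midpoint(math.cos, x0=1.0))  # approaches the fixed point of cos, about 0.739
```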
Combining the ideas of forward-backward method and midpoint method, Wei et al. extended the study of two finite families of accretive mappings to two infinite families of accretive mappings [3] in a real q-uniformly smooth and uniformly convex Banach space:
where \(\{e'_{n}\}\), \(\{e''_{n}\}\) and \(\{e'''_{n}\}\) are three error sequences, \(A_{i}: D \rightarrow X\) and \(B_{i}: D \rightarrow X\) are m-accretive mappings and \(\theta_{i}\)-inversely strongly accretive mappings, respectively, where \(i \in N\). \(T: X \rightarrow X\) is a strongly positive linear bounded operator with coefficient γ̅, \(f: X \rightarrow X \) is a contraction, \(\sum_{n = 1}^{\infty}a_{n} = 1\), \(0 < a_{n} < 1\), \(\delta_{n} + \beta_{n} +\zeta_{n} \equiv1\) for \(n \in N\cup\{ 0\}\). The iterative sequence \(\{x_{n}\}\) is proved to converge strongly to \(p_{0} \in\bigcap_{i = 1}^{\infty} N(A_{i}+B_{i})\), which solves the following variational inequality:
In 2012, Ceng et al. [11] presented the following iterative algorithm to approximate a zero point of an m-accretive mapping:
where \(T: X \rightarrow X\) is a γ-strongly accretive and μ-strictly pseudo-contractive mapping with \(\gamma+ \mu> 1\), \(f: X \rightarrow X \) is a contraction and \(A: X \rightarrow X\) is m-accretive. Under some assumptions, \(\{x_{n}\}\) is proved to converge strongly to the unique element \(p_{0}\in N(A)\), which solves the following variational inequality:
The mapping T in (1.9) is called a perturbed operator; it plays a role only in the construction of the iterative algorithm for selecting a particular zero of A and is not involved in the variational inequality (1.10).
Inspired by the work mentioned above, in Section 2 we construct a new modified forward-backward splitting midpoint iterative algorithm to approximate zero points of the sum of infinitely many m-accretive mappings and infinitely many \(\theta_{i}\)-inversely strongly accretive mappings. New proof techniques are employed, the superposition of perturbed operators is considered, and the restrictions on the parameters are milder than those in existing similar work. In Section 3, we discuss applications of the new iterative algorithm to integro-differential systems and convex minimization problems.
We need the following preliminaries in our paper.
Lemma 1.1
[12]
Let X be a real uniformly convex and p-uniformly smooth Banach space with constant \(K_{p}\) for some \(p \in(1,2]\). Let D be a nonempty closed convex subset of X. Let \(A: D\rightarrow X\) be an m-accretive mapping and \(B: D\rightarrow X\) be a θ-inversely strongly accretive mapping. Then, given \(s > 0\), there exists a continuous, strictly increasing and convex function \(\varphi_{p}: R^{+} \rightarrow R^{+}\) with \(\varphi_{p}(0) = 0\) such that for all \(x, y \in D\) with \(\Vert x \Vert \leq s\) and \(\Vert y \Vert \leq s\),
In particular, if \(0 < r \leq(\frac{p \theta}{K_{p}})^{\frac{1}{p-1}}\), then \(J_{r}^{A}(I-rB)\) is non-expansive.
Lemma 1.2
[13]
Let X be a real smooth Banach space and \(B: X \rightarrow X\) be a μ-strictly pseudo-contractive mapping and also be a γ-strongly accretive mapping with \(\mu+ \gamma> 1\). Then, for any fixed number \(\delta\in(0,1)\), \(I-\delta B\) is a contraction with coefficient \(1-\delta(1-\sqrt{\frac{1-\gamma }{\mu}})\).
Lemma 1.3
[2]
Let X be a real Banach space and D be a nonempty closed and convex subset of X. Let \(f: D \rightarrow D\) be a contraction. Then f has a unique fixed point.
Lemma 1.4
[14]
Let X be a real strictly convex Banach space, and let D be a nonempty closed and convex subset of X. Let \(T_{m}: D \rightarrow D\) be a non-expansive mapping for each \(m \in N\). Let \(\{a_{m}\}\) be a real number sequence in (0,1) such that \(\sum_{m = 1}^{\infty}a_{m} = 1\). Suppose that \(\bigcap_{m=1}^{\infty}F(T_{m}) \neq\emptyset\). Then the mapping \(\sum_{m = 1}^{\infty}a_{m} T_{m}\) is non-expansive and \(F(\sum_{m = 1}^{\infty}a_{m}T_{m}) = \bigcap_{m = 1}^{\infty}F(T_{m})\).
Lemma 1.5
[12]
In a real Banach space X, for \(p > 1\), the following inequality holds:
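$$\Vert x+y \Vert ^{p}\leq \Vert x \Vert ^{p}+p \bigl\langle y, j_{p}(x+y) \bigr\rangle \quad\text{for all } x, y \in X \text{ and } j_{p}(x+y)\in J_{p}(x+y). $$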
Lemma 1.6
[15]
Let X be a real Banach space, and let D be a nonempty closed and convex subset of X. Suppose \(A: D \rightarrow X\) is a single-valued mapping and \(B: X \rightarrow 2^{X}\) is m-accretive. Then
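$$N(A+B) = F \bigl(J_{r}^{B}(I-rA) \bigr)\quad \text{for all } r > 0. $$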
Lemma 1.7
[16]
Let \(\{a_{n}\}\) be a real sequence that does not decrease at infinity, in the sense that there exists a subsequence \(\{a_{n_{k}}\}\) so that \(a_{n_{k}}\leq a_{n_{k}+1}\) for all \(k \in N\cup\{0\}\). For every \(n > n_{0}\), define an integer sequence \(\{ \tau(n)\}\) as
Then \(\tau(n) \rightarrow\infty\) as \(n \rightarrow\infty\) and for all \(n > n_{0}\), \(\max\{a_{\tau(n)}, a_{n}\} \leq a_{\tau(n)+1}\).
Lemma 1.8
[17]
For \(p > 1\), the following inequality holds:
for any positive real numbers a and b.
Lemma 1.9
[18]
The Banach space X is uniformly smooth if and only if the duality mapping \(J_{p}\) is single-valued and norm-to-norm uniformly continuous on bounded subsets of X.
2 Strong convergence theorems
Theorem 2.1
Let X be a real uniformly convex and p-uniformly smooth Banach space with constant \(K_{p}\) where \(p \in(1,2]\) and D be a nonempty closed and convex sunny non-expansive retract of X. Let \(Q_{D}\) be the sunny non-expansive retraction of X onto D. Let \(f: X \rightarrow X\) be a contraction with coefficient \(k \in(0,1)\), \(A_{i}: D \rightarrow X\) be m-accretive mappings, \(C_{i}: D \rightarrow X\) be \(\theta_{i}\)-inversely strongly accretive mappings, \(W_{i}: X \rightarrow X\) be \(\mu_{i}\)-strictly pseudo-contractive mappings and \(\gamma_{i}\)-strongly accretive mappings with \(\mu_{i}+\gamma_{i} > 1\) for \(i\in N\). Suppose \(\{\omega_{i}^{(1)}\}\) and \(\{\omega_{i}^{(2)}\}\) are real number sequences in \((0,1)\) for \(i \in N\). Suppose \(0 < r_{n,i}\leq(\frac{p\theta_{i}}{K_{p}})^{\frac{1}{p-1}}\) for \(i \in N\) and \(n \in N\), \(\kappa_{t} \in(0,1)\) for \(t \in(0,1)\), \(\sum_{i = 1}^{\infty }\omega_{i}^{(1)} \Vert W_{i} \Vert <+\infty\), \(\sum_{i = 1}^{\infty} \omega_{i}^{(1)} = \sum_{i = 1}^{\infty} \omega_{i}^{(2)}= 1\) and \(\bigcap_{i = 1}^{\infty }N(A_{i}+C_{i})\neq\emptyset\). If, for each \(t \in(0,1)\), we define \(Z_{t}^{n}: X \rightarrow X\) by
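$$Z_{t}^{n}x: = t f(x) + (1-t) \Biggl(I-\kappa_{t} \sum_{i = 1}^{\infty} \omega_{i}^{(1)}W_{i}\Biggr) \Biggl(\sum_{i = 1}^{\infty} \omega_{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})Q_{D}x\Biggr),\quad x \in X, $$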
then \(Z_{t}^{n}\) has a fixed point \(u_{t}^{n}\). Moreover, if \(\frac{\kappa _{t}}{t} \rightarrow0\), then \(u_{t}^{n}\) converges strongly to the unique solution \(q_{0}\) of the following variational inequality, as \(t \rightarrow0\):
Proof
We split the proof into five steps.
Step 1. \(Z_{t}^{n}: X \rightarrow X\) is a contraction for \(t \in(0,1)\), \(\kappa_{t} \in(0,1)\) and \(n \in N\).
In fact, for \(\forall x,y \in X\), using Lemmas 1.1 and 1.2, we have
which implies that \(Z_{t}^{n}\) is a contraction. By Lemma 1.3, there exists \(u_{t}^{n}\) such that \(Z_{t}^{n}u_{t}^{n} = u_{t}^{n}\). That is, \(u_{t}^{n} = t f(u_{t}^{n}) + (1-t)(I-\kappa_{t} \sum_{i = 1}^{\infty} \omega_{i}^{(1)}W_{i})(\sum_{i = 1}^{\infty} \omega _{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})Q_{D}u_{t}^{n})\).
Step 2. If \(\lim_{t \rightarrow0}\frac{\kappa_{t}}{t} = 0\), then \(\{ u_{t}^{n}\}\) is bounded for \(n \in N\), \(0 < t \leq\overline{a}\), where a̅ is a sufficiently small positive number and \(u_{t}^{n}\) is the same as that in Step 1.
For \(\forall u \in\bigcap_{i = 1}^{\infty}N(A_{i}+C_{i})\), using Lemmas 1.1, 1.2 and 1.6, we know that
Then
Since \(\lim_{t \rightarrow0}\frac{\kappa_{t}}{t} = 0\), then there exists a sufficiently small positive number a̅ such that \(0 < \frac {\kappa_{t}}{t}< 1\) for \(0 < t \leq\overline{a}\). Thus \(\{u_{t}^{n}\}\) is bounded for \(n \in N\) and \(0 < t \leq\overline{a}\).
Step 3. If \(\lim_{t \rightarrow0}\frac{\kappa_{t}}{t} = 0\), then \(u_{t}^{n} - \sum_{i = 1}^{\infty} \omega _{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})Q_{D}u_{t}^{n} \rightarrow0\), as \(t \rightarrow0\), for \(n \in N\).
Noticing Step 2, we have
as \(t \rightarrow0\).
Step 4. If the variational inequality (2.1) has solutions, the solution must be unique.
Suppose \(u_{0} \in\bigcap_{i = 1}^{\infty}N(A_{i}+C_{i})\) and \(v_{0} \in \bigcap_{i = 1}^{\infty}N(A_{i}+C_{i})\) are two solutions of (2.1), then
and
Adding up (2.2) and (2.3), we get
Since
then (2.4) implies that \(u_{0} = v_{0}\).
Step 5. If \(\lim_{t \rightarrow0}\frac{\kappa_{t}}{t} = 0\), then \(u_{t}^{n} \rightarrow q_{0} \in\bigcap_{i = 1}^{\infty}N(A_{i}+C_{i})\), as \(t \rightarrow0\), which solves the variational inequality (2.1).
Assume \(t_{m} \rightarrow0\). Set \(u_{m}^{n}: = u_{t_{m}}^{n}\) and define \(\mu: X \rightarrow R\) by
where LIM is the Banach limit on \(l^{\infty}\). Let
It is easily seen that K is a nonempty closed convex bounded subset of X. Since \(u_{m}^{n} - \sum_{i = 1}^{\infty} \omega _{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})Q_{D}u_{m}^{n} \rightarrow0\) from Step 3, then for \(u \in K\),
it follows that \(\sum_{i = 1}^{\infty}\omega _{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})Q_{D}(K) \subset K\); that is, K is invariant under \(\sum_{i = 1}^{\infty} \omega _{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})Q_{D}\). Since a uniformly smooth Banach space has the fixed point property for non-expansive mappings, \(\sum_{i = 1}^{\infty} \omega _{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})Q_{D}\) has a fixed point, say \(q_{0}\), in K. That is, \(\sum_{i = 1}^{\infty} \omega _{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})Q_{D}q_{0} = q_{0} \in D\), which ensures from Lemmas 1.4 and 1.6 that \(q_{0} \in\bigcap_{i = 1}^{\infty}N(A_{i}+C_{i})\). Since \(q_{0}\) is also a minimizer of μ over X, it follows that, for \(t \in(0,1)\),
Since X is uniformly smooth, by letting \(t \rightarrow0\) we find that the two limits above can be interchanged, and we obtain
Since \(u_{m}^{n} - q_{0} = t_{m}(f(u_{m}^{n})-q_{0})+(1-t_{m})[(I-\kappa_{t_{m}} \sum_{i = 1}^{\infty} \omega_{i}^{(1)}W_{i})(\sum_{i = 1}^{\infty}\omega _{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I- r_{n,i}C_{i})Q_{D}u_{m}^{n})-q_{0}]\), then
Therefore,
Since \(\frac{\kappa_{t_{m}}}{t_{m}} \rightarrow0\), then from (2.5), (2.6) and the result of Step 2, we have \(\operatorname{LIM} \Vert u_{m}^{n} - q_{0} \Vert ^{2} \leq0\), which implies that \(\operatorname{LIM} \Vert u_{m}^{n} - q_{0} \Vert ^{2}=0\), and then there exists a subsequence which is still denoted by \(\{u_{m}^{n}\}\) such that \(u_{m}^{n} \rightarrow q_{0}\).
Next, we shall show that \(q_{0}\) solves the variational inequality (2.1).
Note that \(u_{m}^{n} = t_{m}f(u_{m}^{n})+(1-t_{m})(I-\kappa_{t_{m}}\sum_{i = 1}^{\infty}\omega_{i}^{(1)}W_{i})(\sum_{i = 1}^{\infty} \omega _{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})Q_{D}u_{m}^{n})\), then for \(\forall v \in\bigcap_{i = 1}^{\infty}N(A_{i}+C_{i})\),
as \(t_{m} \rightarrow0\). Since \(u_{m}^{n} \rightarrow q_{0}\) and J is uniformly continuous on each bounded subset of X, taking the limits on both sides of the above inequality gives \(\langle q_{0} - f(q_{0}), J(q_{0} - v)\rangle\leq0\), which implies that \(q_{0}\) satisfies the variational inequality (2.1).
Next, to prove that the net \(\{u_{t}^{n}\}\) converges strongly to \(q_{0}\) as \(t \rightarrow0\), suppose that there is another subsequence \(\{u_{t_{k}}^{n}\} \) of \(\{u_{t}^{n}\}\) satisfying \(u_{t_{k}}^{n} \rightarrow v_{0}\) as \(t_{k} \rightarrow0\). Denote \(u_{t_{k}}^{n} \) by \(u_{k}^{n}\). Then the result of Step 3 implies that \(0 = \lim_{t_{k} \rightarrow0}(u_{k}^{n} - \sum_{i = 1}^{\infty} \omega _{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})Q_{D}u_{k}^{n}) = v_{0} - \sum_{i = 1}^{\infty} \omega_{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})Q_{D}v_{0}\), which ensures that \(v_{0}\in\bigcap_{i = 1}^{\infty}N(A_{i}+C_{i})\) in view of Lemmas 1.4 and 1.6. Repeating the above proof, we can also show that \(v_{0}\) solves the variational inequality (2.1). Thus \(q_{0} = v_{0}\) by the result of Step 4.
Hence \(u_{t}^{n} \rightarrow q_{0}\), as \(t \rightarrow0\), which is the unique solution of the variational inequality (2.1).
This completes the proof. □
Theorem 2.2
Let X be a real uniformly convex and p-uniformly smooth Banach space with constant \(K_{p}\) where \(p \in(1,2]\) and D be a nonempty closed and convex sunny non-expansive retract of X. Let \(Q_{D}\) be the sunny non-expansive retraction of X onto D. Let \(f: X \rightarrow X\) be a contraction with coefficient \(k \in(0,1)\), \(A_{i}: D \rightarrow X\) be m-accretive mappings, \(C_{i}: D \rightarrow X\) be \(\theta_{i}\)-inversely strongly accretive mappings, and \(W_{i}: X \rightarrow X\) be \(\mu_{i}\)-strictly pseudo-contractive mappings and \(\gamma_{i}\)-strongly accretive mappings with \(\mu_{i}+\gamma_{i} > 1\) for \(i\in N\). Suppose \(\{\omega_{i}^{(1)}\}\), \(\{\omega_{i}^{(2)}\}\), \(\{\alpha_{n}\}\), \(\{ \beta_{n}\}\), \(\{\vartheta_{n}\}\), \(\{\nu_{n}\}\), \(\{\xi_{n}\}\), \(\{\delta_{n}\}\) and \(\{\zeta_{n}\}\) are real number sequences in \((0,1)\), \(\{r_{n,i}\} \subset(0,+\infty)\), \(\{a_{n}\}\subset X\) and \(\{b_{n}\}\subset D\) are error sequences, where \(n \in N\) and \(i \in N\). Suppose \(\bigcap_{i = 1}^{\infty}N(A_{i}+C_{i})\neq\emptyset\). Let \(\{x_{n}\} \) be generated by the following iterative algorithm:
Under the following assumptions:
(i) \(\alpha_{n}+\beta_{n} \leq1\), \(\vartheta_{n} + \nu_{n}+\xi_{n} \equiv1\) for \(n \in N\);
(ii) \(\sum_{i = 1}^{\infty} \omega_{i}^{(1)} = \sum_{i = 1}^{\infty} \omega_{i}^{(2)}= 1\);
(iii) \(\sum_{n = 1}^{\infty} \Vert a_{n} \Vert <+ \infty\), \(\sum_{n = 1}^{\infty} \Vert b_{n} \Vert <+ \infty\), \(\sum_{n = 1}^{\infty} (1-\alpha_{n}) <+ \infty\), \(\sum_{n = 1}^{\infty} \xi_{n} <+ \infty\), \(\lim_{n \rightarrow\infty} \sum_{i = 1}^{\infty}r_{n,i} = 0\);
(iv) \(\lim_{n \rightarrow\infty}\delta_{n} = 0\), \(\sum_{n = 1}^{\infty } \delta_{n} = + \infty\);
(v) \(1-\alpha_{n}+ \Vert a_{n} \Vert = o(\delta_{n})\), \(\xi_{n} = o(\delta_{n})\), \(\zeta_{n} = o(\xi_{n})\), \(\nu_{n} \nrightarrow 0\), as \(n \rightarrow \infty\);
(vi) \(\sum_{i = 1}^{\infty}\omega_{i}^{(1)} \Vert W_{i} \Vert <+ \infty\), \(0 < r_{n,i}\leq(\frac{p\theta_{i}}{K_{p}})^{\frac{1}{p-1}}\) for \(i \in N\), \(n \in N\),
the iterative sequence \(x_{n} \rightarrow q_{0}\in\bigcap_{i = 1}^{\infty }N(A_{i}+C_{i})\), which is the unique solution of the variational inequality (2.1).
Proof
We split the proof into four steps.
Step 1. \(\{v_{n}\}\) is well defined and so is \(\{x_{n}\}\).
For \(s, t\in(0,1)\), define \(H_{s,t}: D \rightarrow D\) by \(H_{s,t} x: = su+tH(\frac{u+x}{2}) + (1-s-t)v\) for \(x \in D\), where \(H: D \rightarrow D\) is non-expansive and \(u,v \in D\) are fixed. Then, for \(\forall x,y \in D\),
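$$\Vert H_{s,t}x - H_{s,t}y \Vert = t \biggl\Vert H\biggl(\frac{u+x}{2}\biggr)-H\biggl(\frac{u+y}{2}\biggr) \biggr\Vert \leq\frac{t}{2} \Vert x - y \Vert . $$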
Thus \(H_{s,t}\) is a contraction, which ensures from Lemma 1.3 that there exists \(x_{s,t}\in D\) such that \(H_{s,t} x_{s,t} = x_{s,t}\). That is, \(x_{s,t} = su+tH(\frac{u+x_{s,t}}{2}) + (1-s-t)v\).
Since \(\sum_{i = 1}^{\infty} \omega_{i}^{(2)} = 1\) and \(J^{A_{i}}_{r_{n,i}}(I-r_{n,i}C_{i})\) is non-expansive for \(n \in N\) and \(i \in N\), \(\{v_{n}\}\) is well defined, which implies that \(\{x_{n}\}\) is well defined.
Step 2. \(\{x_{n}\}\) is bounded.
For \(\forall p \in\bigcap_{i = 1}^{\infty}N(A_{i}+C_{i})\), we can easily see that
And
Thus
Using Lemma 1.2 and (2.8), we have, for \(n \in N\),
By induction, we can easily obtain the following result from (2.9):
Therefore, from assumptions (iii) and (vi), we know that \(\{x_{n}\}\) is bounded.
Step 3. There exists \(q_{0} \in\bigcap_{i = 1}^{\infty}N(A_{i}+C_{i})\), which solves the variational inequality (2.1).
Using Theorem 2.1, we know that there exists \(u_{t}^{n}\) such that \(u_{t}^{n} = tf(u_{t}^{n})+(1-t)(I-\kappa_{t}\sum_{i = 1 }^{\infty}\omega_{i}^{(1)}W_{i})(\sum_{i = 1}^{\infty}\omega _{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})Q_{D}u_{t}^{n})\) for \(t \in(0,1)\). Moreover, under the assumption that \(\frac{\kappa_{t}}{t}\rightarrow0\), \(u_{t}^{n} \rightarrow q_{0} \in\bigcap_{i = 1}^{\infty}N(A_{i}+C_{i})\), as \(t \rightarrow0\), which is the unique solution of the variational inequality (2.1).
Step 4. \(x_{n} \rightarrow q_{0}\), as \(n \rightarrow\infty\), where \(q_{0}\) is the same as that in Step 3.
Set \(C_{1}:= \operatorname{sup}\{2 \Vert \alpha_{n}x_{n}+\beta_{n}a_{n}-q_{0} \Vert ^{p-1}, 2 \Vert q_{0} \Vert \Vert \alpha_{n}x_{n}+\beta _{n}a_{n}-q_{0} \Vert ^{p-1}: n \in N\}\), then from Step 2 and assumption (iii), \(C_{1}\) is a positive constant. Using Lemma 1.5, we have
Using Lemma 1.1, we know that
Therefore,
Now, from (2.10)–(2.11) and Lemmas 1.4 and 1.5, we know that for \(n\in N\),
which implies that
From Step 2, if we set \(C_{2} = \operatorname{sup} \{\sum_{i = 1}^{\infty}\omega _{i}^{(1)} \Vert W_{i}(\sum_{i = 1}^{\infty}\omega _{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})(\frac{u_{n}+v_{n}}{2})) \Vert , \Vert x_{n}-q_{0} \Vert ^{p-1}: n \in N\}\), then \(C_{2}\) is a positive constant.
Let \(\varepsilon_{n}^{(1)} = \frac{\delta_{n}(1-2k)}{1-\delta_{n}k}\), \(\varepsilon_{n}^{(2)} = \frac{1}{\delta_{n}(1-2k)}[C_{1}(1-\alpha_{n}+ \Vert a_{n} \Vert )+\frac{\xi_{n}}{2-\nu_{n}} \Vert b_{n} - q_{0} \Vert ^{p} +p\delta_{n}\langle f(q_{0})-q_{0}, J_{p}(x_{n+1}-q_{0})\rangle+2\zeta _{n}C_{2}^{2}]\) and \(\varepsilon_{n}^{(3)} = (1-\delta_{n})\frac{\nu_{n}}{2-\nu_{n}}\frac {1}{1-\delta_{n}k}\sum_{i = 1}^{\infty} \omega_{i}^{(2)}\varphi_{p}(\Vert (I-J^{A_{i}}_{r_{n,i}})(\frac{u_{n}+v_{n}}{2}-r_{n,i}C_{i}(\frac {u_{n}+v_{n}}{2}))-(I-J^{A_{i}}_{r_{n,i}})(q_{0} -r_{n,i}C_{i}q_{0})\Vert)\).
Then
Our next discussion will be divided into two cases.
Case 1. \(\{ \Vert x_{n} - q_{0} \Vert \}\) is decreasing.
If \(\{ \Vert x_{n} - q_{0} \Vert \}\) is decreasing, we know from (2.12) and assumptions (iv) and (v) that
which ensures that \(\sum_{i = 1}^{\infty} \omega_{i}^{(2)}\varphi_{p}(\Vert (I-J^{A_{i}}_{r_{n,i}})(\frac{u_{n}+v_{n}}{2}-r_{n,i}C_{i}(\frac {u_{n}+v_{n}}{2}))-(I-J^{A_{i}}_{r_{n,i}})(q_{0} -r_{n,i}C_{i}q_{0})\Vert)\rightarrow0\), as \(n \rightarrow+\infty\). Then, from the property of \(\varphi_{p}\), we know that \(\sum_{i = 1}^{\infty} \omega_{i}^{(2)}\Vert(I-J^{A_{i}}_{r_{n,i}})(\frac {u_{n}+v_{n}}{2}-r_{n,i}C_{i}(\frac{u_{n}+v_{n}}{2}))-(I-J^{A_{i}}_{r_{n,i}})(q_{0} -r_{n,i}C_{i}q_{0})\Vert\rightarrow0\), as \(n \rightarrow+\infty\).
Note that \(\lim_{n \rightarrow\infty}\sum_{i= 1}^{\infty}r_{n,i} = 0\), then
as \(n \rightarrow\infty\).
Now, our purpose is to show that \(\operatorname{limsup}_{n \rightarrow\infty }\varepsilon_{n}^{(2)}\leq0\), which reduces to showing that \(\operatorname{limsup}_{n \rightarrow\infty}\langle f(q_{0}) - q_{0}, J_{p}(x_{n+1}-q_{0})\rangle\leq0\).
Let \(u_{t}^{n}\) be the same as that in Step 3. Since \(\Vert u_{t}^{n} \Vert \leq \Vert u_{t}^{n} - q_{0} \Vert + \Vert q_{0} \Vert \), then \(\{u_{t}^{n}\}\) is bounded, as \(t \rightarrow0\). Using Lemma 1.5 again, we have
which implies that
So, \(\lim_{t \rightarrow0}\operatorname{limsup}_{n\rightarrow+\infty}\langle\sum_{i = 1}^{\infty} \omega_{i}^{(2)}J^{A_{i}}_{r_{n,i}}(I-r_{n,i}C_{i})Q_{D}u_{t}^{n} - f(u_{t}^{n})+\frac{\kappa_{t}}{t}(1-t)\sum_{i = 1}^{\infty} \omega _{i}^{(1)}W_{i}(\sum_{i = 1}^{\infty} \omega _{i}^{(2)}J^{A_{i}}_{r_{n,i}}(I-r_{n,i}C_{i})Q_{D}u_{t}^{n}), J_{p}(u_{t}^{n} - \sum_{i = 1}^{\infty} \omega_{i}^{(2)}J^{A_{i}}_{r_{n,i}}(I-r_{n,i}C_{i})(\frac {u_{n}+v_{n}}{2})) \rangle \leq0\).
Since \(u_{t}^{n} \rightarrow q_{0}\), then \(\sum_{i = 1}^{\infty} \omega _{i}^{(2)}J^{A_{i}}_{r_{n,i}}(I-r_{n,i}C_{i})Q_{D}u_{t}^{n} \rightarrow\sum_{i = 1}^{\infty} \omega_{i}^{(2)}J^{A_{i}}_{r_{n,i}}(I-r_{n,i}C_{i})Q_{D}q_{0} = q_{0}\), as \(t \rightarrow0\).
Noticing that
we have \(\operatorname{limsup}_{n\rightarrow+\infty}\langle q_{0}- f(q_{0}), J_{p}(q_{0} - \sum_{i = 1}^{\infty} \omega _{i}^{(2)}J^{A_{i}}_{r_{n,i}}(I-r_{n,i}C_{i})(\frac{u_{n}+v_{n}}{2}))\rangle\leq0\).
From assumptions (iv) and (v) and Step 2, we know that \(x_{n+1}-\sum_{i = 1}^{\infty} \omega_{i}^{(2)} J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})(\frac {u_{n}+v_{n}}{2})\rightarrow0\) and then \(\operatorname{limsup}_{n\rightarrow+\infty}\langle q_{0}- f(q_{0}), J_{p}(q_{0} - x_{n+1})\rangle\leq0\). Thus \(\operatorname{limsup}_{n \rightarrow\infty}\varepsilon _{n}^{(2)}\leq0\).
Employing (2.12) again, we have
Assumption (iv) implies that \(\operatorname{liminf}_{n \rightarrow\infty}\frac{ \Vert x_{n} - q_{0} \Vert ^{p} - \Vert x_{n+1} - q_{0} \Vert ^{p} }{\varepsilon_{n}^{(1)}} = 0\). Then
Then the result that \(x_{n} \rightarrow q_{0}\) follows.
Case 2. If \(\{ \Vert x_{n} - q_{0} \Vert \}\) is not eventually decreasing, then we can find a subsequence \(\{ \Vert x_{n_{k}} - q_{0} \Vert \}\) so that \(\Vert x_{n_{k}} - q_{0} \Vert \leq \Vert x_{n_{k}+1} - q_{0} \Vert \) for all \(k \geq1\). From Lemma 1.7, we can define a subsequence \(\{ \Vert x_{\tau (n)} - q_{0} \Vert \}\) so that \(\max\{ \Vert x_{\tau(n)} - q_{0} \Vert , \Vert x_{n} - q_{0} \Vert \}\leq \Vert x_{\tau(n)+1} - q_{0} \Vert \) for all \(n > n_{1}\). This enables us to deduce (similarly to Case 1) that
and then, arguing as in Case 1, we obtain \(\lim_{n \rightarrow\infty} \Vert x_{\tau(n)} - q_{0} \Vert = 0\). Thus \(0 \leq \Vert x_{n} - q_{0} \Vert \leq \Vert x_{\tau(n)+1} - q_{0} \Vert \rightarrow0\), as \(n \rightarrow\infty\).
This completes the proof. □
Remark 2.3
Theorem 2.2 is reasonable: for example, we may take \(X = D = (-\infty, +\infty)\), \(f(x) = \frac{x}{4}\), \(A_{i} x = C_{i} x = \frac{x}{2^{i}}\), \(W_{i}x = \frac{x}{2^{i+1}}\), \(\theta_{i} = 2^{i}\), \(\omega_{i}^{(1)} = \omega _{i}^{(2)} = \frac{1}{2^{i}}\), \(\alpha_{n} = 1-\frac{1}{n^{2}}\), \(\beta_{n} = \frac{1}{n^{3}}\), \(\vartheta_{n} = \delta_{n} = \frac{1}{n}\), \(\xi_{n} = \zeta _{n} = a_{n} = b_{n} = \frac{1}{n^{2}}\), \(\gamma_{i} = \frac{1}{2^{i+2}}\), \(\mu_{i} = \frac{2^{i+1}-\frac{3}{2}+\frac{1}{2^{i+1}}}{2^{i+1}-1}\), \(r_{n,i} = \frac{1}{2^{n+i}}\) for \(n \in N\) and \(i \in N\).
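For concreteness, a small Python sketch (ours, not part of the paper and not the full iterative scheme of Theorem 2.2) evaluates a truncation of the building block \(\sum_{i = 1}^{\infty}\omega_{i}^{(2)}J_{r_{n,i}}^{A_{i}}(I-r_{n,i}C_{i})\) for the data of this remark; on the real line the resolvent is \(J_{r}^{A_{i}}x = x/(1+r/2^{i})\), and the map fixes the common zero 0 and is non-expansive.

```python
def resolvent_A(i, r, x):
    """J_r^{A_i} x = (I + r*A_i)^{-1} x for A_i x = x/2**i on the real line, i.e. x/(1 + r/2**i)."""
    return x / (1.0 + r / 2.0**i)

def weighted_fb_map(x, n, imax=40):
    """Truncation of sum_i w_i^(2) * J_{r_{n,i}}^{A_i}((I - r_{n,i}*C_i) x) with the data of
    Remark 2.3: w_i^(2) = 1/2**i, r_{n,i} = 1/2**(n+i), C_i x = x/2**i."""
    s = 0.0
    for i in range(1, imax + 1):
        r = 1.0 / 2.0**(n + i)
        s += (1.0 / 2.0**i) * resolvent_A(i, r, x - r * x / 2.0**i)
    return s

# The map fixes 0 (the common zero of every A_i + C_i) and is non-expansive:
print(weighted_fb_map(0.0, n=1))                                          # 0.0
x, y = 3.0, -1.5
print(abs(weighted_fb_map(x, 1) - weighted_fb_map(y, 1)) <= abs(x - y))   # True
```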
Remark 2.4
Our differences from the main references are:
(i) the normalized duality mapping \(J: X \rightarrow X^{*}\) is no longer required to be weakly sequentially continuous at zero, as it is in [9];
(ii) the parameter \(\{r_{n,i}\}\) in the resolvent \(J_{r_{n,i}}^{A_{i}}\) does not need to satisfy the condition ‘\(\sum_{n=1}^{\infty} \vert r_{n+1,i} - r_{n,i} \vert < +\infty\) and \(r_{n,i}\geq\varepsilon > 0\) for \(i \in N\) and some \(\varepsilon> 0\)’ imposed in [3] or [9];
(iii) Lemma 1.7 plays an important role in the proof of strong convergence of the iterative sequence, which leads to different restrictions on the parameters and different proof techniques compared with the existing similar works.
3 Applications
3.1 Integro-differential systems
In Section 3.1, we shall investigate the following nonlinear integro-differential systems involving the generalized \(p_{i}\)-Laplacian, which have been studied in [3]:
where Ω is a bounded conical domain of a Euclidean space \(R^{N}\) (\(N\geq1\)), Γ is the boundary of Ω with \(\Gamma\in C^{1}\) and ϑ denotes the exterior normal derivative to Γ. \(\langle\cdot,\cdot\rangle\) and \(\vert \cdot \vert \) denote the Euclidean inner-product and the Euclidean norm in \(R^{N}\), respectively. T is a positive constant. \(\nabla u^{(i)} = (\frac{\partial u^{(i)}}{\partial x_{1}}, \frac{\partial u^{(i)}}{\partial x_{2}}, \ldots, \frac{\partial u^{(i)}}{\partial x_{N}})\) and \(x = (x_{1}, x_{2}, \ldots, x_{N}) \in\Omega\). \(\beta_{x}\) is the subdifferential of \(\varphi_{x}\), where \(\varphi_{x}= \varphi(x,\cdot):R\rightarrow R\) for \(x\in\Gamma\). a and ε are nonnegative constants, \(0 \leq C(x,t) \in \bigcap_{i = 1}^{\infty}V_{i}: = \bigcap_{i = 1}^{\infty} L^{p_{i}}(0, T; W^{1,p_{i}}(\Omega))\), \(f(x,t)\in\bigcap_{i = 1}^{\infty} W_{i}: = \bigcap_{i = 1}^{\infty} L^{\max\{p_{i},p_{i}'\}}(0,T; L^{\max\{p_{i},p_{i}'\}}(\Omega))\) and \(g:\Omega\times R^{N+1} \rightarrow R\) are given functions.
Just like [3], we need the following assumptions to discuss (3.1).
Assumption 1
\(\{p_{i}\}_{i=1}^{\infty}\) is a real number sequence with \(\frac{2N}{N+1} < p_{i} < +\infty\), \(\{\theta_{i}\}_{i=1}^{\infty}\) is any real number sequence in \((0,1]\) and \(\{r_{i}\}_{i=1}^{\infty}\) is a real number sequence satisfying \(\frac{2N}{N+1} < r_{i} \leq \min\{p_{i},p_{i}'\} < +\infty\). \(\frac{1}{p_{i}}+\frac{1}{p'_{i}} = 1\) and \(\frac{1}{r_{i}}+\frac{1}{r'_{i}} = 1\) for \(i \in N\).
Assumption 2
Green’s formula is available.
Assumption 3
For each \(x\in\Gamma\), \(\varphi_{x}= \varphi(x,\cdot):R\rightarrow R\) is a proper, convex and lower-semicontinuous function and \(\varphi_{x}(0)=0\).
Assumption 4
\(0 \in\beta_{x}(0)\) and for each \(t \in R\), the function \(x \in\Gamma\rightarrow(I+\lambda\beta_{x})^{-1}(t)\in R\) is measurable for \(\lambda > 0\).
Assumption 5
Suppose that \(g:\Omega\times R^{N+1} \rightarrow R\) satisfies the following conditions:
(a) Carathéodory’s conditions;
(b) growth condition: $$\bigl\vert g(x,r_{1},\ldots,r_{N+1}) \bigr\vert ^{\max\{p_{i},p_{i}'\}}\leq \bigl\vert h_{i}(x,t) \bigr\vert ^{p_{i}}+ b_{i} \vert r_{1} \vert ^{p_{i}}, $$ where \((r_{1}, r_{2}, \ldots, r_{N+1})\in R^{N+1} \), \(h_{i}(x,t)\in W_{i}\) and \(b_{i}\) is a positive constant for \(i \in N\);
(c) monotone condition: g is monotone in the following sense: $$\bigl(g(x,r_{1},\ldots,r_{N+1})-g(x,t_{1}, \ldots,t_{N+1}) \bigr)\geq(r_{1} - t_{1}) $$ for all \(x \in\Omega\) and \((r_{1},\ldots,r_{N+1}),(t_{1},\ldots,t_{N+1})\in R^{N+1}\).
Assumption 6
For \(i \in N\), let \(V^{*}_{i}\) denote the dual space of \(V_{i}\). The norm in \(V_{i}\), \(\Vert \cdot \Vert _{V_{i}}\), is defined by
Definition 3.1
[3]
For \(i \in N\), define the operator \(B_{i}: V_{i} \rightarrow V^{*}_{i}\) by
for \(u,w \in V_{i}\).
Definition 3.2
[3]
For \(i \in N\), define the function \(\Phi_{i}: V_{i} \rightarrow R \) by
for \(u(x,t) \in V_{i}\).
Definition 3.3
[3]
For \(i \in N\), define \(S_{i}: D(S_{i}) = \{u(x,t) \in V_{i}: \frac{\partial u }{\partial t} \in V^{*}_{i}, u(x,0) = u(x,T)\} \rightarrow V^{*}_{i}\) by
Lemma 3.4
[3]
For \(i \in N\), define a mapping \(A_{i}: W_{i} \rightarrow2^{W_{i}}\) as follows:
where \(\partial\Phi_{i}: V_{i}\rightarrow V^{*}_{i}\) is the subdifferential of \(\Phi _{i}\). For \(u \in D(A_{i})\), we set \(A_{i}u =\{f\in W_{i} \vert f \in B_{i}u + \partial \Phi_{i}(u) + S_{i}u\}\). Then \(A_{i}: W_{i} \rightarrow2^{W_{i}}\) is m-accretive, where \(i \in N\).
Lemma 3.5
[3]
Define \(C_{i}: D(C_{i}) = L^{\max\{p_{i},p'_{i}\}}(0,T;W^{1,\max\{p_{i},p'_{i}\}}(\Omega))\subset W_{i} \rightarrow W_{i}\) by
for \(\forall u(x,t) \in D(C_{i})\) and \(f(x,t)\) is the same as that in (3.1), where \(i \in N\). Then \(C_{i}: D(C_{i})\subset W_{i} \rightarrow W_{i}\) is continuous and strongly accretive. If we further assume that \(g(x,r_{1},\ldots,r_{N+1}) \equiv r_{1}\), then \(C_{i}\) is \(\theta_{i}\)-inversely strongly accretive, where \(i \in N\).
Lemma 3.6
[3]
For \(f(x,t)\in\bigcap_{i = 1}^{\infty}W_{i}\), integro-differential systems (3.1) have a unique solution \(u^{(i)}(x,t) \in W_{i}\) for \(i \in N\).
Lemma 3.7
[3]
If \(\varepsilon\equiv0\), \(g(x,r_{1},\ldots,r_{N+1}) \equiv r_{1}\) and \(f(x,t) \equiv k \), where k is a constant, then \(u(x,t)\equiv k\) is the unique solution of integro-differential systems (3.1). Moreover, \(\{u(x,t)\in\bigcap_{i = 1}^{\infty }W_{i}\vert u(x,t)\equiv k \textit{ satisfying }\text{(3.1)}\} = \bigcap_{i = 1}^{\infty}N(A_{i}+C_{i})\).
Remark 3.8
[3]
Set \(p:= \inf_{i \in N}(\min\{ p_{i},p_{i}'\})\) and \(q:= \operatorname{sup}_{i \in N}(\max\{p_{i},p_{i}'\})\).
Let \(X:= L^{\min\{p,p'\}}(0,T; L^{\min\{p,p'\}}(\Omega))\), where \(\frac{1}{p}+\frac{1}{p'} = 1\).
Let \(D:= L^{\max\{q,q'\}}(0,T; W^{1,\max\{q,q'\}}(\Omega))\), where \(\frac{1}{q}+\frac{1}{q'} = 1\).
Then \(X = L^{p}(0,T;L^{p}(\Omega))\), \(D = L^{q}(0,T;W^{1,q}(\Omega))\) and \(D \subset W_{i} \subset X\), \(\forall i \in N\).
Theorem 3.9
Let D and X be the same as those in Remark 3.8. Suppose \(A_{i}\) and \(C_{i}\) are the same as those in Lemmas 3.4 and 3.5, respectively. Let \(f: X\rightarrow X\) be a fixed contractive mapping with coefficient \(k \in(0,1)\) and \(W_{i}: X \rightarrow X\) be \(\mu_{i}\)-strictly pseudo-contractive mappings and \(\gamma_{i}\)-strongly accretive mappings with \(\mu_{i}+\gamma_{i} > 1\) for \(i\in N\). Suppose that \(\{\omega_{i}^{(1)}\}\), \(\{\omega_{i}^{(2)}\}\), \(\{\alpha_{n}\}\), \(\{ \beta_{n}\}\), \(\{\vartheta_{n}\}\), \(\{\nu_{n}\}\), \(\{\xi_{n}\}\), \(\{\delta_{n}\} \), \(\{\zeta_{n}\}\), \(\{r_{n,i}\}\), \(\{a_{n}\}\subset X\) and \(\{b_{n}\} \subset D\) satisfy the same conditions as those in Theorem 2.2, where \(n \in N\) and \(i \in N\). Let \(\{x_{n}\}\) be generated by the following iterative algorithm:
If, in integro-differential systems (3.1), \(\varepsilon\equiv0\), \(g(x,r_{1},\ldots,r_{N+1})\equiv r_{1}\) and \(f(x,t)\equiv k\), then, under the assumptions of Theorem 2.2, the iterative sequence \(x_{n} \rightarrow q_{0}\in\bigcap_{i = 1}^{\infty}N(A_{i}+C_{i})\), which is the unique solution of integro-differential systems (3.1) and which satisfies the following variational inequality: for \(\forall y \in \bigcap_{i=1}^{\infty}N(A_{i}+C_{i})\),
3.2 Convex minimization problems
Let H be a real Hilbert space. Suppose that \(h_{i}: H \rightarrow(-\infty, +\infty)\) are proper, convex, lower-semicontinuous and nonsmooth functions [2] and that \(g_{i}: H \rightarrow(-\infty, +\infty)\) are convex and smooth functions for \(i \in N\). We use \(\nabla g_{i}\) to denote the gradient of \(g_{i}\) and \(\partial h_{i}\) the subdifferential of \(h_{i}\) for \(i \in N\).
The convex minimization problems are to find \(x^{*} \in H\) such that
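$$h_{i} \bigl(x^{*} \bigr)+g_{i} \bigl(x^{*} \bigr)\leq h_{i}(x)+g_{i}(x),\quad i \in N, \quad (3.3) $$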
for \(\forall x \in H\).
By Fermat’s rule, (3.3) is equivalent to finding \(x^{*} \in H\) such that
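$$0 \in\partial h_{i} \bigl(x^{*} \bigr)+\nabla g_{i} \bigl(x^{*} \bigr),\quad i \in N. $$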
Theorem 3.10
Let H be a real Hilbert space and D be the nonempty closed convex sunny non-expansive retract of H. Let \(Q_{D}\) be the sunny non-expansive retraction of H onto D. Let \(f: H \rightarrow H\) be a contraction with coefficient \(k \in(0,1)\). Let \(h_{i}: H \rightarrow(-\infty, +\infty)\) be proper convex, lower-semicontinuous and nonsmooth functions and \(g_{i}: H \rightarrow (-\infty, +\infty)\) be convex and smooth functions for \(i \in N\). Let \(W_{i}: H \rightarrow H\) be \(\mu_{i}\)-strictly pseudo-contractive mappings and \(\gamma_{i}\)-strongly accretive mappings with \(\mu_{i}+\gamma_{i} > 1\) for \(i \in N\). Suppose \(\{\omega_{i}^{(1)}\}\), \(\{\omega_{i}^{(2)}\}\), \(\{\alpha_{n}\}\), \(\{ \beta_{n}\}\), \(\{\vartheta_{n}\}\), \(\{\nu_{n}\}\), \(\{\xi_{n}\}\), \(\{\delta_{n}\} \), \(\{\zeta_{n}\}\), \(\{r_{n,i}\}\subset(0,+\infty)\), \(\{a_{n}\}\subset H\) and \(\{b_{n}\}\subset D\) satisfy the same conditions as those in Theorem 2.2, where \(n \in N\) and \(i \in N\). Let \(\{x_{n}\}\) be generated by the following iterative algorithm:
If, further, \(\nabla g_{i}\) is \(\frac{1}{\theta_{i}}\)-Lipschitz continuous and \(h_{i}+g_{i}\) attains a minimizer, then \(\{x_{n}\}\) converges strongly to the minimizer of \(h_{i}+g_{i}\) for \(i\in N\).
Proof
It follows from [2] that \(\partial h_{i}\) is m-accretive. Since \(\nabla g_{i}\) is \(\frac{1}{\theta_{i}}\)-Lipschitz continuous, it follows from [19] (the Baillon-Haddad theorem) that \(\nabla g_{i}\) is \(\theta_{i}\)-inversely strongly accretive. Thus Theorem 2.2 ensures the result.
This completes the proof. □
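In the Hilbert-space setting the resolvent \(J_{r}^{\partial h_{i}}\) coincides with the proximal mapping of \(h_{i}\), so each backward step in Theorem 3.10 is a proximal step. The following minimal Python sketch (ours; a single pair h, g rather than the full scheme, with h the indicator function of [0,1], whose proximal mapping is the projection onto [0,1]) illustrates the fixed-point characterization \(x^{*} = \operatorname{prox}_{rh}(x^{*}-r\nabla g(x^{*}))\) of a minimizer of \(h+g\).

```python
def prox_indicator_01(x):
    """Proximal mapping of the indicator of [0,1] = projection onto [0,1]
    (the resolvent of its subdifferential)."""
    return min(1.0, max(0.0, x))

def grad_g(x):
    """g(x) = 0.5*(x - 2)**2, so grad g(x) = x - 2 (1-Lipschitz, i.e. theta = 1)."""
    return x - 2.0

r = 0.5   # step size within the usual Hilbert-space range (0, 2*theta)
x = -3.0  # arbitrary starting point
for _ in range(100):
    x = prox_indicator_01(x - r * grad_g(x))  # forward step with grad g, backward (proximal) step with h
print(x)                                                      # 1.0, the minimizer of h + g
print(abs(x - prox_indicator_01(x - r * grad_g(x))) < 1e-12)  # fixed-point characterization holds
```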
References
Barbu, V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff, Leyden (1976)
Agarwal, RP, O’Regan, D, Sahu, DR: Fixed Point Theory for Lipschitz-Type Mappings with Applications. Springer, Berlin (2008)
Wei, L, Agarwal, RP: A new iterative algorithm for the sum of infinite m-accretive mappings and infinite \(\mu_{i}\)-inversely strongly accretive mappings and its applications to integro-differential systems. Fixed Point Theory Appl. 2016, 7 (2016)
Browder, FE, Petryshyn, WV: Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 20, 197-228 (1967)
Takahashi, W: Proximal point algorithms and four resolvents of nonlinear operators of monotone type in Banach spaces. Taiwan. J. Math. 12(8), 1883-1910 (2008)
Lions, PL, Mercier, B: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964-979 (1979)
Combettes, PL, Wajs, VR: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 1168-1200 (2005)
Tseng, P: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38(2), 431-446 (2000)
Wei, L, Duan, LL: A new iterative algorithm for the sum of two different types of finitely many accretive operators in Banach space and its connection with capillarity equation. Fixed Point Theory Appl. 2015, 25 (2015)
Alghamdi, MA, Alghamdi, MA, Shahzad, N, Xu, H: The implicit midpoint rule for nonexpansive mappings. Fixed Point Theory Appl. 2014, 96 (2014)
Ceng, LC, Ansari, QH, Schaible, S, Yao, JC: Hybrid viscosity approximation method for zeros of m-accretive operators in Banach spaces. Numer. Funct. Anal. Optim. 33(2), 142-165 (2012)
Lopez, G, Martin-Marquez, V, Wang, FH, Xu, HK: Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, Article ID 109236 (2012)
Ceng, LC, Ansari, QH, Yao, YC: Mann-type steepest descent and modified hybrid steepest-descent methods for variational inequalities in Banach spaces. Numer. Funct. Anal. Optim. 29(9-10), 987-1033 (2008)
Bruck, RE: Properties of fixed-point sets of nonexpansive mappings in Banach spaces. Trans. Am. Math. Soc. 179, 251-262 (1973)
Aoyama, K, Kimura, Y, Takahashi, W, Toyoda, M: On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 8, 471-489 (2007)
Mainge, PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899-912 (2008)
Mitrinovic, DS: Analytic Inequalities. Springer, New York (1970)
Cioranescu, I: Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems. Kluwer Academic, Dordrecht (1990)
Baillon, JB, Haddad, G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26, 137-150 (1977)
Acknowledgements
Supported by the National Natural Science Foundation of China (11071053), Natural Science Foundation of Hebei Province (A2014207010), Key Project of Science and Research of Hebei Educational Department (ZD2016024), Key Project of Science and Research of Hebei University of Economics and Business (2016KYZ07), Youth Project of Science and Research of Hebei Educational Department (QN2017328) and Science and Technology Foundation of Agricultural University of Hebei (LG201612).
Contributions
All authors contributed equally to the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Li Wei, Liling Duan, Ravi P Agarwal, Rui Chen and Yaqin Zheng contributed equally to this work.