An intermixed method for solving the combination of mixed variational inequality problems and fixed-point problems
Journal of Inequalities and Applications volume 2023, Article number: 1 (2023)
Abstract
In this paper, we introduce an intermixed algorithm with viscosity technique for finding a common solution of the combination of mixed variational inequality problems and the fixed-point problem of a nonexpansive mapping in a real Hilbert space. Moreover, we propose the mathematical tools related to the combination of mixed variational inequality problems in the second section of this paper. Utilizing our mathematical tools, a strong convergence theorem is established for the proposed algorithm. Furthermore, we establish additional conclusions concerning the split-feasibility problem and the constrained convex-minimization problem utilizing our main result. Finally, we provide numerical experiments to illustrate the convergence behavior of our proposed algorithm.
1 Introduction
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\rightarrow C \) be a nonlinear mapping. A point \(x\in C\) is called a fixed point of T if \(Tx=x\). The set of fixed points of T is the set \(\mathrm{Fix}(T):= \{x\in C: Tx=x\}\). A mapping T of C into itself is called nonexpansive if
$$\begin{aligned} \Vert Tx-Ty \Vert \leq \Vert x-y \Vert \quad \text{for all } x,y \in C. \end{aligned}$$
Note that the mapping \(I-T\) is demiclosed at zero iff \(x \in \mathrm{Fix}(T)\) whenever \(x_{n} \rightharpoonup x\) and \(x_{n}-Tx_{n} \to 0\) (see [1]). It is widely known that if \(T:H \to H\) is nonexpansive, then \(I-T\) is demiclosed at zero. A mapping \(g:C \rightarrow C\) is said to be a contraction if there exists a constant \(\alpha \in (0,1)\) such that
$$\begin{aligned} \bigl\Vert g(x)-g(y) \bigr\Vert \leq \alpha \Vert x-y \Vert \quad \text{for all } x,y \in C. \end{aligned}$$
Let \(A: C \to H\) be a mapping and \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function on H. Now, we consider the mixed variational inequality problem: Find a point \(x^{*}\in C\) such that
$$\begin{aligned} \bigl\langle Ax^{*}, y-x^{*} \bigr\rangle +f(y)-f \bigl(x^{*} \bigr)\geq 0 \end{aligned}$$
(1.1)
for all \(y\in C\). The set of solutions of problem (1.1) is denoted by \(VI(C,A,f)\). Problem (1.1) was originally considered by Lescarret [2] and Browder [3] in relation to its various applications in mathematical physics. General equilibrium and oligopolistic equilibrium problems, which can be stated as mixed variational inequality problems, were studied by Konnov and Volotskaya [4]. Fixed-point problems and resolvent equations are well known to be equivalent to mixed variational inequality problems. In 1997, Noor [5] proposed and analyzed a new iterative method for solving mixed variational inequality problems using the resolvent equations technique as follows:
where A is a monotone and Lipschitz continuous operator, \(\rho >0\) is a constant, \(J_{\rho f}=(I+\rho \partial f )^{-1}\) is the resolvent operator and I is the identity operator. In 2008, Noor et al. [6] introduced an iterative algorithm to solve the mixed variational inequalities as follows:
where \(0\leq \alpha _{n} \leq 1\) and A is strongly monotone and Lipschitz continuous. In recent years, several researchers have increasingly investigated the problem (1.1) in various directions, for example [5, 7–16] and the references therein.
Note that if C is a closed convex subset of H and \(f(x)=\delta _{C}(x)\), for all \(x\in C\), where \(\delta _{C}\) is the indicator function of C defined by \(\delta _{C}(x) = 0\) if \(x \in C\), and \(\delta _{C}(x)=\infty \) otherwise, then the mixed variational inequality problem (1.1) reduces to the following classical variational inequality problem: find a point \(x^{*}\in C\) such that
$$\begin{aligned} \bigl\langle Ax^{*}, y-x^{*} \bigr\rangle \geq 0 \quad \text{for all } y\in C. \end{aligned}$$
(1.4)
The set of solutions of problem (1.4) is denoted by \(VI(C,A)\). The variational inequality problem was introduced and studied by Stampacchia in 1966 [17]. The solution of the variational inequality problem is well known to be equivalent to the following fixed-point equation for finding a point \(x^{*}\in C\) such that
$$\begin{aligned} x^{*}=P_{C} \bigl(x^{*}-\gamma Ax^{*} \bigr), \end{aligned}$$
where \(\gamma > 0\) is an arbitrary constant and \(P_{C}\) is the metric projection from H onto C (see [18]). This problem is useful in economics, engineering, and mathematics. Many nonlinear analysis problems, such as optimization, optimal control problems, saddle-point problems, and mathematical programming, are included as special cases; see, for example, [19–22]. Furthermore, there have been various methods invented for solving the problem (1.4) and fixed-point problems, for example [23–33] and the references therein.
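To make the fixed-point characterization concrete, the following sketch (a toy one-dimensional instance of our own, not taken from the paper) iterates \(x_{n+1}=P_{C}(x_{n}-\gamma Ax_{n})\) with \(C=[1,10]\) and \(A(x)=x-3\); the unique solution of the corresponding variational inequality is \(x^{*}=3\).

```python
# Illustrative sketch (our own toy instance): solving the 1-D variational
# inequality VI(C, A) with C = [1, 10] and A(x) = x - 3 by iterating the
# fixed-point map x -> P_C(x - gamma * A(x)).  Here A is 1-inverse strongly
# monotone, so any step size gamma in (0, 2) works; the solution is
# x* = P_C(3) = 3.

def project_box(x, lo=1.0, hi=10.0):
    """Metric projection of x onto the interval [lo, hi]."""
    return max(lo, min(x, hi))

def A(x):
    return x - 3.0

x = 10.0          # arbitrary starting point in C
gamma = 0.5       # step size in (0, 2)
for _ in range(200):
    x = project_box(x - gamma * A(x))

print(round(x, 6))  # 3.0
```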
The intermixed algorithm introduced by Yao et al. [34] is currently one of the most effective methods for solving the fixed-point problem of a nonlinear mapping. This algorithm has the following feature: the definition of the sequence \(\{x_{n}\}\) involves the sequence \(\{y_{n}\}\), and the definition of \(\{y_{n}\}\) involves \(\{x_{n}\}\). They studied the intermixed algorithm for two strict pseudocontractions S and T as follows: For arbitrarily given \(x_{1}\in C\), \(y_{1}\in C\), let the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) be generated iteratively by
where \(S,T:C\rightarrow C\) are λ-strictly pseudocontraction mappings, \(f:C\rightarrow H\) is a \(\rho _{1}\)-contraction and \(g:C\rightarrow H\) is a \(\rho _{2}\)-contraction, \(k\in (0,1-\lambda )\) is a constant and \(\{\alpha _{n}\}\), \(\{\beta _{n}\}\) are two real-number sequences in \((0,1)\). They also proved that the proposed algorithms independently converge strongly to the fixed points of two strict pseudocontractions.
In 2012, Kangtunyakarn [35] modified the set of variational inequality problems as follows:
$$\begin{aligned} VI \bigl(C, aA+(1-a)B \bigr)= \bigl\{ x^{*}\in C : \bigl\langle \bigl(aA+(1-a)B \bigr)x^{*}, y-x^{*} \bigr\rangle \geq 0 \text{ for all } y\in C \bigr\}, \end{aligned}$$
(1.6)
where A and B are the mappings of C into H. If \(A=B\), then the problem (1.6) reduces to the classical variational inequality problem. Moreover, he also gave a new iterative method for solving the proposed problem in Hilbert spaces.
In this article, motivated and inspired by Kangtunyakarn [35], we introduce a problem that modifies the mixed variational inequality problem as follows: The combination of mixed variational inequality problems is to find \(x^{*}\in C\) such that
$$\begin{aligned} \bigl\langle \bigl(aA+(1-a)B \bigr)x^{*}, y-x^{*} \bigr\rangle +f(y)-f \bigl(x^{*} \bigr)\geq 0 \end{aligned}$$
(1.7)
for all \(y\in C\) and \(a\in (0,1)\), where \(A,B: C\to H\) are mappings. The set of all solutions to this problem is denoted by \(VI(C,aA+(1-a)B,f)\). In particular, if \(A=B\), then the problem (1.7) reduces to the mixed variational inequality problem (1.1).
Question. Can we design an intermixed algorithm for solving the combination of mixed variational inequality problems (1.7) above?
In this paper, we give a positive answer to this question. Motivated and inspired by the works in the literature, and by the ongoing research in these directions, we introduce a new intermixed algorithm with viscosity technique for finding a solution of the combination of mixed variational inequality problems and the fixed-point problem of a nonexpansive mapping in a real Hilbert space. Moreover, we propose the mathematical tools related to the combination of mixed variational inequality problems (1.7) in the second section of this paper. Utilizing our mathematical tools, a strong convergence theorem is established for the proposed algorithm. Furthermore, we establish additional conclusions concerning the split-feasibility problem and the constrained convex-minimization problem utilizing our main result. Finally, we provide numerical experiments to illustrate the convergence behavior of our proposed algorithm.
This paper is organized as follows. In Sect. 2, we first recall some basic definitions and lemmas. In Sect. 3, we prove and analyze the strong convergence of the proposed algorithm. In Sect. 4, we also consider the relaxation version of the proposed method. In Sect. 5, some numerical experiments are provided.
2 Preliminaries
Let C be a nonempty, closed, and convex subset of a Hilbert space H. The notation I stands for the identity operator on a Hilbert space. Let \(\{x_{n}\}\) be a sequence in H. Weak and strong convergence of \(\{x_{n}\}\) to \(x \in H\) are denoted by \(x_{n} \rightharpoonup x\) and \(x_{n} \rightarrow x\), respectively.
Definition 2.1
A mapping \(A:C \to H\) is called
- (i) monotone if $$\begin{aligned} \langle Ax-Ay,x-y \rangle \geq 0\quad \text{for all } x,y \in C; \end{aligned}$$
- (ii) L-Lipschitz continuous if there exists \(L > 0\) such that $$\begin{aligned} \Vert Ax-Ay \Vert \leq L \Vert x-y \Vert \quad \text{for all } x,y \in C; \end{aligned}$$
- (iii) α-inverse strongly monotone if there exists \(\alpha > 0\) such that $$\begin{aligned} \langle Ax-Ay, x-y \rangle \geq \alpha \Vert Ax-Ay \Vert ^{2}\quad \text{for all } x,y \in C; \end{aligned}$$
- (iv) firmly nonexpansive if $$\begin{aligned} \Vert Ax-Ay \Vert ^{2}\leq \langle x-y, Ax-Ay \rangle \quad \text{for all } x,y \in C. \end{aligned}$$
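As a quick numerical illustration of item (iii) (a toy instance of our own, not from the paper): a symmetric positive semidefinite linear map \(A(x)=Mx\) is \(\frac{1}{L}\)-inverse strongly monotone, where L is the largest eigenvalue of M. The sketch below checks the inequality on random pairs for \(M=\operatorname{diag}(2,1)\).

```python
# Numerical sanity check (our own toy instance): for the symmetric positive
# semidefinite linear map A(x) = Mx with M = diag(2, 1), A is
# (1/L)-inverse strongly monotone with L = 2 (the largest eigenvalue of M).

import random

def A(x):
    return (2.0 * x[0], 1.0 * x[1])

L = 2.0
random.seed(0)
ok = True
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    dA = (A(x)[0] - A(y)[0], A(x)[1] - A(y)[1])
    dx = (x[0] - y[0], x[1] - y[1])
    inner = dA[0] * dx[0] + dA[1] * dx[1]          # <Ax - Ay, x - y>
    ok = ok and inner >= (1.0 / L) * (dA[0] ** 2 + dA[1] ** 2) - 1e-12
print(ok)  # True
```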
Throughout this paper, the domain of any function \(f: H \to \mathbb{R}\cup \{+\infty \}\), denoted by \(\operatorname{dom} f\), is defined as \(\operatorname{dom} f:= \{x \in H: f(x) < + \infty \}\). The domain of continuity of f is \(\operatorname{cont} f = \{ x\in H: f(x)\in \mathbb{R} \text{ and } f \text{ is continuous at } x \}\).
Definition 2.2
([36])
Let \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a function. Then,
- (i) f is proper if \(\{ x\in H: f(x) < \infty \}\neq \emptyset \);
- (ii) f is lower semicontinuous if \(\{ x\in H: f(x) \leq a\}\) is closed for each \(a \in \mathbb{R}\);
- (iii) f is convex if \(f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)\) for every \(x,y\in H\) and \(t\in [0,1]\);
- (iv) f is Gâteaux differentiable at \(x \in H\) if there is \(\nabla f(x)\in H\) such that $$\begin{aligned} \lim_{t \to 0}\frac {f(x+ty)-f(x)}{t}= \bigl\langle y,\nabla f(x) \bigr\rangle \end{aligned}$$ for each \(y \in H\);
- (v) f is Fréchet differentiable at \(x \in H\) if there is \(\nabla f(x)\) such that $$\begin{aligned} \lim_{y \to 0} \frac {f(x+y)-f(x)-\langle \nabla f(x), y \rangle}{ \Vert y \Vert }= 0. \end{aligned}$$
Let \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function on H. The subset
$$\begin{aligned} \partial f(x):= \bigl\{ u\in H: f(y)\geq f(x)+\langle u, y-x \rangle \text{ for all } y\in H \bigr\} \end{aligned}$$
is called the subdifferential of f at \(x\in H\). The function f is said to be subdifferentiable at x if \(\partial f(x) \neq \emptyset \). An element of \(\partial f(x)\) is called a subgradient of f at x. It is well known that the subdifferential ∂f is a maximal monotone operator.
Proposition 2.1
([37], Proposition 17.31)
Let \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a proper and convex function, and let \(x \in \operatorname{dom} f\). Then, the following hold:
- (i) Suppose that f is Gâteaux differentiable at x. Then, \(\partial f(x) = \{ \nabla f(x) \}\).
- (ii) Suppose that \(x \in \operatorname{cont} f\) and that \(\partial f(x)\) consists of a single element u. Then, f is Gâteaux differentiable at x and \(u = \nabla f(x)\).
Definition 2.3
([38])
For any maximal monotone operator A, the resolvent operator associated with A, for any \(\gamma >0\), is defined as
$$\begin{aligned} J_{\gamma A}(x):= (I+\gamma A )^{-1}(x), \quad x\in H, \end{aligned}$$
where I is the identity operator.
It is well known that an operator A is maximal monotone if and only if its resolvent operator \(J_{\gamma A}\) is defined everywhere; in this case, \(J_{\gamma A}\) is single valued and nonexpansive. If f is a proper, convex, and lower-semicontinuous function, then its subdifferential ∂f is a maximal monotone operator. In this case, we can define the resolvent operator
$$\begin{aligned} J_{\gamma f}:= (I+\gamma \partial f )^{-1} \end{aligned}$$
associated with the subdifferential ∂f, where \(\gamma >0\) is a constant.
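For a concrete instance of the resolvent (the quadratic \(f(u)=cu^{2}\) is our own choice), \(J_{\gamma f}\) has the closed form \(J_{\gamma f}(x)=x/(1+2\gamma c)\), which can be cross-checked against the equivalent proximal characterization \(J_{\gamma f}(x)=\operatorname{argmin}_{u} \{f(u)+\frac{1}{2\gamma}\Vert u-x\Vert ^{2}\}\):

```python
# Illustrative sketch (our own instance): for the quadratic f(u) = c*u**2
# the resolvent J = (I + gamma * df)^{-1} has the closed form
# J(x) = x / (1 + 2*gamma*c), since df(u) = 2*c*u.  The resolvent is also
# the proximal point argmin_u { f(u) + (u - x)**2 / (2*gamma) }; we check
# that both agree by a crude grid search.

def resolvent_closed_form(x, gamma, c):
    return x / (1.0 + 2.0 * gamma * c)

def resolvent_grid(x, gamma, c, lo=-20.0, hi=20.0, steps=400001):
    h = (hi - lo) / (steps - 1)
    best_u, best_v = lo, float("inf")
    for k in range(steps):
        u = lo + k * h
        v = c * u * u + (u - x) ** 2 / (2.0 * gamma)
        if v < best_v:
            best_u, best_v = u, v
    return best_u

x, gamma, c = 3.0, 0.5, 1.0
print(resolvent_closed_form(x, gamma, c))             # 1.5 (i.e., x/2)
print(abs(resolvent_grid(x, gamma, c) - 1.5) < 1e-3)  # True
```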
Recall that the (nearest point) projection \(P_{C}\) from H onto C assigns to each \(x \in H\) the unique point \(P_{C}x \in C\) satisfying the property
$$\begin{aligned} \Vert x-P_{C}x \Vert \leq \Vert x-y \Vert \quad \text{for all } y\in C. \end{aligned}$$
Lemma 2.2
([39])
For a given \(z\in H\) and \(u\in C\),
$$\begin{aligned} u=P_{C}z \quad \Longleftrightarrow \quad \langle u-z,v-u \rangle \geq 0 \quad \text{for all } v\in C. \end{aligned}$$
Furthermore, \(P_{C}\) is a firmly nonexpansive mapping of H onto C.
Lemma 2.3
([40])
For given \(x \in H\) let \(P_{C}: H \to C\) be a metric projection. Then,
- (a) \(z=P_{C}x\) if and only if \(\langle x-z,y-z \rangle \leq 0, \forall y \in C\);
- (b) \(z=P_{C}x\) if and only if \(\|x-z\|^{2}\leq \|x-y\|^{2}-\|y-z\|^{2}, \forall y \in C\);
- (c) \(\langle P_{C}x-P_{C}y,x-y \rangle \geq \|P_{C}x-P_{C}y\|^{2}, \forall x,y \in H\).
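Properties (a) and (c) of Lemma 2.3 can be checked numerically for a box in \(\mathbb{R}^{2}\) (a toy instance of our own):

```python
# Numerical check (our own toy instance): properties (a) and (c) of the
# metric projection onto the box C = [1, 10] x [1, 10].

import random

def proj(p):
    return tuple(max(1.0, min(v, 10.0)) for v in p)

random.seed(1)
ok_a = ok_c = True
for _ in range(500):
    x = (random.uniform(-20, 20), random.uniform(-20, 20))
    y = proj((random.uniform(-20, 20), random.uniform(-20, 20)))  # y in C
    z = proj(x)
    # (a): <x - z, y - z> <= 0 for every y in C
    ok_a = ok_a and sum((xi - zi) * (yi - zi)
                        for xi, zi, yi in zip(x, z, y)) <= 1e-12
for _ in range(500):
    x = (random.uniform(-20, 20), random.uniform(-20, 20))
    y = (random.uniform(-20, 20), random.uniform(-20, 20))
    px, py = proj(x), proj(y)
    # (c): <P_C x - P_C y, x - y> >= ||P_C x - P_C y||^2
    lhs = sum((a - b) * (c - d) for a, b, c, d in zip(px, py, x, y))
    rhs = sum((a - b) ** 2 for a, b in zip(px, py))
    ok_c = ok_c and lhs >= rhs - 1e-12
print(ok_a, ok_c)  # True True
```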
Lemma 2.4
([41])
Let \(\{s_{n}\}\) be a sequence of nonnegative real numbers satisfying
$$\begin{aligned} s_{n+1}\leq (1-\alpha _{n})s_{n}+ \delta _{n},\quad n\geq 1, \end{aligned}$$
where \(\{\alpha _{n}\}\) is a sequence in \((0,1)\) and \(\{\delta _{n}\}\) is a sequence such that
- (i) \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);
- (ii) \(\limsup_{n\rightarrow \infty} \frac{\delta _{n}}{\alpha _{n}}\leq 0 \textit{ or } \sum_{n=1}^{ \infty}|\delta _{n}|<\infty \).
Then, \(\lim_{n\rightarrow \infty}s_{n}=0\).
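A small numerical demonstration of Lemma 2.4 (the sequences below are our own choices, using the recurrence in the form \(s_{n+1}=(1-\alpha _{n})s_{n}+\delta _{n}\)):

```python
# Illustrative demo (our own choice of sequences) of Lemma 2.4: with
# alpha_n = 1/(n+1) (so sum alpha_n = infinity) and delta_n = alpha_n/(n+1)
# (so delta_n / alpha_n -> 0), the recurrence
#     s_{n+1} = (1 - alpha_n) * s_n + delta_n
# drives s_n to 0 regardless of the starting value.

s = 100.0
for n in range(1, 200001):
    alpha = 1.0 / (n + 1)
    delta = alpha / (n + 1)
    s = (1.0 - alpha) * s + delta
print(s < 1e-3)  # True
```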
Lemma 2.5
Let C be a nonempty, closed, and convex subset of H, let \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function, and let \(A,B: C\to H\) be α- and β-inverse strongly monotone operators with \(\varepsilon = \min \{ \alpha, \beta \}\) and \(VI(C,A,f)\cap VI(C,B,f) \neq \emptyset \). Then,
$$\begin{aligned} VI \bigl(C,aA+(1-a)B,f \bigr)=VI(C,A,f)\cap VI(C,B,f) \end{aligned}$$
for all \(a \in (0,1)\).
Proof
Clearly,
$$\begin{aligned} VI(C,A,f)\cap VI(C,B,f) \subseteq VI \bigl(C,aA+(1-a)B,f \bigr). \end{aligned}$$
(2.2)
Let \(x_{0}\in VI(C,aA+(1-a)B,f)\) and \(x^{*}\in VI(C,A,f)\cap VI(C,B,f)\). Hence, we have
It follows from \(x^{*}\in VI(C,aA+(1-a)B,f)\) that
From (2.3), (2.4), and the definition of \(x^{*}, x_{0}\), we have
and
By combining (2.5), (2.6), and the definition of \(A,B\), we obtain
which implies that
Let \(y\in C\). From \(x^{*}\in VI(C,A,f)\) and \(Ax_{0}=Ax^{*}\), we have
From \(Bx_{0}=Bx^{*}\), \(x_{0}\in VI(C,aA+(1-a)B,f)\), \(x^{*}\in VI(C,B,f)\), we obtain
Since \(a\in (0,1)\), we have
This implies that
Using the same method as (2.10), we have
From (2.10) and (2.11), we obtain \(x_{0}\in VI(C,A,f)\cap VI(C,B,f)\). Hence, we can conclude that
From (2.2) and (2.12), we obtain
□
Lemma 2.6
Let \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function on H. Let \(A: C\to H\) be a mapping. Then, \(\mathrm{Fix}(J_{\gamma f}(I-\gamma A))=VI(C,A,f)\), where \(J_{\gamma f}:H \to H\) defined as \(J_{\gamma f}=(I+\gamma \partial f )^{-1}\) is the resolvent operator, I is the identity operator and \(\gamma >0\) is a constant.
Proof
Let \(z\in H\), then
Next, we will show that \(J_{\gamma f}\) is a firmly nonexpansive mapping.
Let \(p=J_{\gamma f}(x)=(I+\gamma \partial f )^{-1}x\) and \(q=J_{\gamma f}(y)=(I+\gamma \partial f )^{-1}y\). It follows that \(x\in (I+\gamma \partial f )p\) and \(y \in (I+\gamma \partial f )q\).
From the definition of \(\partial f(p)\) and \(\partial f(q)\), we have
This implies that
for all \(c\in H\). Then,
and
By combining (2.15) and (2.16), we obtain
which implies that
Then, we have
From the definition of \(p,q\), we have
Therefore, \(J_{\gamma f}\) is a firmly nonexpansive mapping. □
Remark 2.7
From Lemma 2.5 and Lemma 2.6, we have
for all \(\gamma >0\) and \(a \in (0,1)\).
3 Main results
In this section, using Lemmas 2.5 and 2.6 as important tools, we introduce a new intermixed algorithm with a viscosity technique for finding a common solution of the combination of mixed variational inequality problems and the fixed-point problem of a nonexpansive mapping in a real Hilbert space, and we establish its strong convergence under some mild conditions.
Theorem 3.1
Let C be a nonempty, closed, and convex subset of H. For every \(i=1,2\), let \(f_{i}: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function, let \(A_{i},B_{i}: C\to H\) be \(\delta ^{A}_{i}\)- and \(\delta ^{B}_{i}\)-inverse strongly monotone operators, respectively, with \(\delta _{i}= \min \{\delta ^{A}_{i}, \delta ^{B}_{i} \}\) and let \(T_{i}: C\to C\) be nonexpansive mappings. Assume that \(\Omega _{i}=\mathrm{Fix}(T_{i})\cap VI(C,A_{i},f_{i})\cap VI(C,B_{i},f_{i}) \neq \emptyset \), for all \(i=1,2\). Let \(g_{1},g_{2}:H \rightarrow H\) be \(\sigma _{1}\)- and \(\sigma _{2}\)-contraction mappings with \(\sigma _{1},\sigma _{2}\in (0,1)\) and \(\sigma =\max \{\sigma _{1},\sigma _{2}\}\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1},y_{1}\in C\) and
where \(\{\beta _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(\gamma _{i} \in (0,2\delta _{i})\), \(a_{i},b_{i} \in (0,1)\), and \(J^{i}_{\gamma f}:H \to H\), defined by \(J_{\gamma f}^{i}=(I+\gamma _{i} \partial f_{i} )^{-1}\), is the resolvent operator for all \(i=1,2\). Assume that the following conditions hold:
- (i) \(\lim_{n\rightarrow \infty}\alpha _{n}=0\) and \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);
- (ii) \(0<\overline{l}\leq \beta _{n} \leq l\) for all \(n\in \mathbb{N}\) and for some \(\overline{l},l>0\);
- (iii) \(\sum_{n=1}^{\infty}|\alpha _{n+1}-\alpha _{n}|< \infty \) and \(\sum_{n=1}^{\infty}|\beta _{n+1}-\beta _{n}|< \infty \).
Then, \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to \(x^{*}=P_{\Omega _{1}}g_{1}(y^{*})\) and \(y^{*}=P_{\Omega _{2}}g_{2}(x^{*})\), respectively.
Proof
First, we show that \(\{x_{n}\}\) and \(\{y_{n}\}\) are bounded.
We claim that \(J_{\gamma f}^{i}(I-\gamma _{i} (a_{i}A_{i}+(1-a_{i})B_{i}))\) is nonexpansive for all \(i=1,2\). To show this, let \(x,y \in C\); then
Assume that \(x^{*}\in \Omega _{1}\) and \(y^{*}\in \Omega _{2}\).
From the definition of \(z_{n}\) and the nonexpansiveness of \(T_{1}\), we have
Similarly, we have \(\|w_{n}-x^{*}\| \leq \|y_{n}-x^{*}\|\).
Putting \(K_{i}=J_{\gamma f}^{i}(I-\gamma _{i} (a_{i}A_{i}+(1-a_{i})B_{i}))\) for all \(i=1,2\), from the definition of \(x_{n}\), the nonexpansiveness of \(K_{i}\) for all \(i=1,2\), and (3.3), we have
Similarly, we obtain
Combining (3.4) and (3.5), we have
We can deduce from induction that
for every \(n \in \mathbb{N}\). This implies that \(\{x_{n}\}\) and \(\{y_{n}\}\) are bounded; consequently, \(\{z_{n}\}\) and \(\{w_{n}\}\) are also bounded.
Next, we show that \(\|x_{n+1}-x_{n}\|\rightarrow 0\) and \(\|y_{n+1}-y_{n}\|\rightarrow 0\) as \(n \to \infty \).
Set \(Q_{n}=P_{C}(\alpha _{n}g_{1}(y_{n})+(1-\alpha _{n})K_{1}x_{n})\) and \(Q_{n}^{*}=P_{C}(\alpha _{n}g_{2}(x_{n})+(1-\alpha _{n})K_{2}y_{n})\). By the nonexpansiveness of \(K_{i}\) for \(i=1,2\), we have
From the definition of \(z_{n}\) and the nonexpansiveness of \(T_{1}\), we have
Similarly, we obtain
From the definition of \(x_{n}\), (3.6), and (3.7), we have
Using the same method as derived in (3.9), we have
From (3.9) and (3.10), we have
Applying Lemma 2.4 and the condition (iii), we can conclude that
Next, we show that \(\|x_{n}-U_{n}\|\rightarrow 0\), \(\|y_{n}-V_{n}\|\rightarrow 0\), \(\|x_{n}-T_{1}x_{n}\|\rightarrow 0\), and \(\|y_{n}-T_{2}y_{n}\|\rightarrow 0\) as \(n\to \infty \), where \(U_{n}=\alpha _{n}g_{1}(y_{n})+(1-\alpha _{n})K_{1}x_{n}\) and \(V_{n}=\alpha _{n}g_{2}(x_{n})+(1-\alpha _{n})K_{2}y_{n}\).
Let \(x^{*}\in \Omega _{1}\) and \(y^{*}\in \Omega _{2}\). From the definition of \(z_{n}\), we obtain
In a similar way, we have
From the definition of \(x_{n}\), (3.3), and (3.12), we obtain
It follows from (3.14) that
By (3.11) and the conditions (i) and (ii), we obtain
From the definition of \(y_{n}\) and applying the same method as (3.15), we have
From Lemma 2.3, we obtain
From the definition of \(U_{n}\), we obtain
From (3.3), (3.17), and (3.18), we obtain
from which it follows that
From \(\|x_{n+1}-x_{n}\|\rightarrow 0 \text{ as } n\rightarrow \infty \) and the conditions (i) and (ii), we have
From the definition of \(V_{n}\) and applying the same argument as (3.19), we also obtain
Observe that
From (3.15) and (3.21), we obtain
Similarly, we also have
Consider
From (3.15) and (3.19), we have
From the definition of \(y_{n}\) and applying the same method as (3.24), we also have
Next, we show that \(\|x_{n}-K_{1}x_{n}\|\rightarrow 0\) and \(\|y_{n}-K_{2}y_{n}\|\rightarrow 0\) as \(n\rightarrow \infty \), where \(K_{i}=J_{\gamma f}^{i}(I-\gamma _{i} (a_{i}A_{i}+(1-a_{i})B_{i}))\) for all \(i=1,2\).
Observe that
from which it follows that
From (3.24) and the condition (i), we have
Applying the same argument as (3.26), we also obtain
Next, we show that \(\limsup_{n\rightarrow \infty}\langle g_{1}(y^{*})-x^{*},U_{n}-x^{*} \rangle \leq 0\), where \(x^{*}=P_{\Omega _{1}}g_{1}(y^{*})\) and \(\limsup_{n\rightarrow \infty}\langle g_{2}(x^{*})-y^{*},V_{n}-y^{*} \rangle \leq 0\), where \(y^{*}=P_{\Omega _{2}}g_{2}(x^{*})\).
Indeed, take a subsequence \(\{U_{n_{k}}\}\) of \(\{U_{n}\}\) such that
Since \(\{x_{n}\}\) is bounded, without loss of generality, we may assume that \(x_{n_{k}}\rightharpoonup p \text{ as } k\rightarrow \infty \). From (3.24), we obtain \(U_{n_{k}}\rightharpoonup p \text{ as } k\rightarrow \infty \).
Next, we show that \(p\in \Omega _{1}=\mathrm{Fix}(T_{1})\cap VI(C,A_{1},f_{1})\cap VI(C,B_{1},f_{1})\).
Since \(K_{1}\) is nonexpansive, \(I-K_{1}\) is demiclosed at zero. From (3.26) and the demiclosedness of \(I-K_{1}\) at zero, we obtain \(p\in \mathrm{Fix}(K_{1})=\mathrm{Fix}(J_{\gamma f}^{1}(I-\gamma _{1} (a_{1}A_{1}+(1-a_{1})B_{1})))\). By Remark 2.7, we have \(p\in VI(C,A_{1},f_{1})\cap VI(C,B_{1},f_{1})\).
Since \(T_{1}\) is nonexpansive, \(I-T_{1}\) is demiclosed at zero. From (3.15) and the demiclosedness of \(I-T_{1}\) at zero, we obtain \(p \in \mathrm{Fix}(T_{1})\). Therefore, \(p\in \Omega _{1}=\mathrm{Fix}(T_{1})\cap VI(C,A_{1},f_{1})\cap VI(C,B_{1},f_{1})\).
Since \(U_{n_{k}}\rightharpoonup p\) as \(k\rightarrow \infty \) and \(p\in \Omega _{1}\), Lemma 2.2 yields
Similarly, take a subsequence \(\{V_{n_{k}}\}\) of \(\{V_{n}\}\) such that
Since \(\{y_{n}\}\) is bounded, without loss of generality, we may assume that \(y_{n_{k}}\rightharpoonup q \text{ as } k\rightarrow \infty \). From (3.25), we obtain \(V_{n_{k}}\rightharpoonup q \text{ as } k\rightarrow \infty \).
Following the same method as (3.28), we easily obtain that
Finally, we show that \(\{x_{n}\}\) converges strongly to \(x^{*}\), where \(x^{*}=P_{\Omega _{1}}g_{1}(y^{*})\) and \(\{y_{n}\}\) converges strongly to \(y^{*}\), where \(y^{*}=P_{\Omega _{2}}g_{2}(x^{*})\).
Let \(U_{n}=\alpha _{n}g_{1}(y_{n})+(1-\alpha _{n})K_{1}x_{n}\) and \(V_{n}=\alpha _{n}g_{2}(x_{n})+(1-\alpha _{n})K_{2}y_{n}\).
From the definition of \(x_{n}\), we obtain
which yields that
Similarly, we have
From (3.30) and (3.31), we deduce that
By (3.11), (3.24), (3.25), (3.28), (3.29), the condition (i), and Lemma 2.4, we have \(\lim_{n\rightarrow \infty}(\|x_{n}-x^{*}\|+\|y_{n}-y^{*} \|)=0\). This implies that the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge to \(x^{*}=P_{\Omega _{1}}g_{1}(y^{*})\) and \(y^{*}=P_{\Omega _{2}}g_{2}(x^{*})\), respectively.
This completes the proof. □
As direct consequences of Theorem 3.1, we obtain the following results.
Corollary 3.2
Let C be a nonempty, closed, and convex subset of H. For every \(i=1,2\), let \(f_{i}: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function, let \(A_{i},B_{i}: C\to H\) be \(\delta ^{A}_{i}\)- and \(\delta ^{B}_{i}\)-inverse strongly monotone operators, respectively, with \(\delta _{i}= \min \{\delta ^{A}_{i}, \delta ^{B}_{i} \}\). Assume that \(VI(C,A_{i},f_{i})\cap VI(C,B_{i},f_{i}) \neq \emptyset \), for all \(i=1,2\). Let \(g_{1},g_{2}:H \rightarrow H\) be \(\sigma _{1}\)- and \(\sigma _{2}\)-contraction mappings with \(\sigma _{1},\sigma _{2}\in (0,1)\) and \(\sigma =\max \{\sigma _{1},\sigma _{2}\}\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1},y_{1}\in C\) and
where \(\{\beta _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(\gamma _{i} \in (0,2\delta _{i})\), \(a_{i}\in (0,1)\), and \(J_{\gamma f}^{i}=(I+\gamma _{i} \partial f_{i} )^{-1}\) is the resolvent operator for all \(i=1,2\). Assume that the following conditions hold:
- (i) \(\lim_{n\rightarrow \infty}\alpha _{n}=0\) and \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);
- (ii) \(0<\overline{l}\leq \beta _{n} \leq l\) for all \(n\in \mathbb{N}\) and for some \(\overline{l},l>0\);
- (iii) \(\sum_{n=1}^{\infty}|\alpha _{n+1}-\alpha _{n}|< \infty \) and \(\sum_{n=1}^{\infty}|\beta _{n+1}-\beta _{n}|< \infty \).
Then, \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to \(x^{*}=P_{VI(C,A_{1},f_{1})\cap VI(C,B_{1},f_{1}) }g_{1}(y^{*})\) and \(y^{*}= P_{VI(C,A_{2},f_{2})\cap VI(C,B_{2},f_{2}) }g_{2}(x^{*})\), respectively.
Proof
Take \(T_{1} \equiv T_{2} \equiv I\) in Theorem 3.1. Hence, from Theorem 3.1, we obtain the desired result. □
Corollary 3.3
Let C be a nonempty, closed, and convex subset of H. Let \(f: H \to \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semicontinuous function. Let \(A,B: C\to H\) be \(\delta ^{A}\)- and \(\delta ^{B}\)-inverse strongly monotone operators, respectively, with \(\delta = \min \{\delta ^{A}, \delta ^{B} \}\), and let \(T: C\to C\) be a nonexpansive mapping. Assume that \(\Omega =\mathrm{Fix}(T)\cap VI(C,A,f)\cap VI(C,B,f) \neq \emptyset \). Let \(g:H \rightarrow H\) be a σ-contraction mapping with \(\sigma \in (0,1)\). Let the sequence \(\{x_{n}\}\) be generated by \(x_{1} \in C\) and
where \(\{\beta _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(\gamma \in (0,2\delta )\), \(a,b \in (0,1)\), and \(J_{\gamma f}=(I+\gamma \partial f )^{-1}\) is the resolvent operator. Assume that the following conditions hold:
- (i) \(\lim_{n\rightarrow \infty}\alpha _{n}=0\) and \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);
- (ii) \(0<\overline{l}\leq \beta _{n} \leq l\) for all \(n\in \mathbb{N}\) and for some \(\overline{l},l>0\);
- (iii) \(\sum_{n=1}^{\infty}|\alpha _{n+1}-\alpha _{n}|< \infty \) and \(\sum_{n=1}^{\infty}|\beta _{n+1}-\beta _{n}|< \infty \).
Then, \(\{x_{n}\}\) converges strongly to \(x^{*}=P_{\Omega}g(x^{*})\).
Proof
Take \(g\equiv g_{1} \equiv g_{2}\), \(f \equiv f_{1} \equiv f_{2}\), \(T \equiv T_{1} \equiv T_{2}\), \(A \equiv A_{1} \equiv A_{2}\), and \(B \equiv B_{1} \equiv B_{2}\) in Theorem 3.1; then \(w_{n}=z_{n}\) and \(x_{n}=y_{n}\). Hence, from Theorem 3.1, we obtain the desired result. □
Remark 3.4
We remark here that Corollary 3.3 is modified from Algorithm 3.2 in [6] in the following aspects:
- 1. The operator assumption is relaxed from a strongly monotone and Lipschitz continuous operator to two inverse strongly monotone operators.
- 2. We add a nonexpansive mapping and a contraction mapping to our iterative algorithm.
4 Applications
In this section, we apply our main result to the split-feasibility problem and the constrained convex-minimization problem.
4.1 The split-feasibility problem
Let C and Q be nonempty, closed, and convex subsets of Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. The split-feasibility problem (SFP) is to find a point
$$\begin{aligned} x^{*}\in C \quad \text{such that}\quad Ax^{*}\in Q, \end{aligned}$$
(4.1)
where \(A: H_{1}\rightarrow H_{2}\) is a bounded linear operator. The set of all solutions of the (SFP) is denoted by \(\Gamma = \{x \in C: Ax \in Q\}\). The split-feasibility problem is the first example of the split-inverse problem, which was first introduced by Censor and Elfving [42] in Euclidean spaces. Many mathematical problems, such as the constrained least-squares problem, the linear split-feasibility problem, and the linear programming problem, can be solved within the split-feasibility paradigm, and it has real-world applications, for example, in signal processing, image recovery, intensity-modulated radiation therapy, and pattern recognition; see [43–46]. Consequently, the split-feasibility problem has been widely studied by many authors; see [47–52] and the references therein.
Proposition 4.1
([48])
Given \(x^{*}\in H_{1}\), the following statements are equivalent.
- (i) \(x^{*}\) solves the (SFP);
- (ii) \(P_{C}(I-\lambda A^{*}(I-P_{Q})A)x^{*}=x^{*}\), where \(A^{*}\) is the adjoint of A;
- (iii) \(x^{*}\) solves the variational inequality problem of finding \(x^{*}\in C\) such that
$$\begin{aligned} \bigl\langle \nabla \mathcal{G} \bigl(x^{*} \bigr), x-x^{*} \bigr\rangle \geq 0,\quad \forall x\in C, \end{aligned}$$
(4.2)
where \(\nabla \mathcal{G}=A^{*}(I-P_{Q})A\).
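Proposition 4.1(ii) suggests the classical CQ-type iteration \(x_{n+1}=P_{C}(I-\lambda A^{*}(I-P_{Q})A)x_{n}\). The sketch below illustrates it on a toy one-dimensional instance of our own (not from the paper), with \(A(x)=2x\), \(C=[1,10]\), and \(Q=[0,4]\), so that \(\Gamma =[1,2]\):

```python
# Hedged sketch of a CQ-type iteration suggested by Proposition 4.1(ii):
#     x_{n+1} = P_C(x_n - lam * A^T (I - P_Q) A x_n).
# Toy instance of our own: A(x) = 2x, C = [1, 10], Q = [0, 4], so the
# solution set is Gamma = [1, 2].  We take lam = 0.2 in (0, 2/L) with
# L = ||A||^2 = 4.

def P(x, lo, hi):
    return max(lo, min(x, hi))

lam = 0.2
x = 10.0
for _ in range(100):
    Ax = 2.0 * x
    grad = 2.0 * (Ax - P(Ax, 0.0, 4.0))   # A^T (I - P_Q) A x
    x = P(x - lam * grad, 1.0, 10.0)

print(round(x, 6), 0.0 <= 2.0 * x <= 4.0 + 1e-9)  # 2.0 True
```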
If C is a closed and convex subset of H and the function f is the indicator function of C, then it is well known that \(J_{\gamma f}=P_{C}\), the projection operator of H onto the closed convex set C. Putting \(A_{i}=B_{i}\) for all \(i=1,2\) in Theorem 3.1, we obtain the following result.
Theorem 4.2
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces and let C, Q be nonempty, closed, and convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Let \(A_{1},A_{2}:H_{1} \to H_{2}\) be bounded linear operators with adjoints \(A_{1}^{*}\) and \(A_{2}^{*}\), and let \(L_{1}\) and \(L_{2}\) be the spectral radii of \(A_{1}^{*}A_{1}\) and \(A_{2}^{*}A_{2}\), respectively. Let \(T_{i}: C\to C\) be nonexpansive mappings. Assume that \(\Xi _{i}=\mathrm{Fix}(T_{i})\cap \Gamma _{i} \neq \emptyset \), for all \(i=1,2\). Let \(g_{1},g_{2}:H_{1} \rightarrow H_{1}\) be \(\sigma _{1}\)- and \(\sigma _{2}\)-contraction mappings with \(\sigma _{1},\sigma _{2}\in (0,1)\) and \(\sigma =\max \{\sigma _{1},\sigma _{2}\}\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1},y_{1}\in C\) and
where \(\nabla \mathcal{G}_{i}=A_{i}^{*}(I-P_{Q})A_{i}\), \(\gamma _{i} \in (0,\frac{2}{L_{i}})\), \(\{\beta _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(b_{i} \in (0,1)\) for all \(i=1,2\). Assume that the following conditions hold:
- (i) \(\lim_{n\rightarrow \infty}\alpha _{n}=0\) and \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);
- (ii) \(0<\overline{l}\leq \beta _{n} \leq l\) for all \(n\in \mathbb{N}\) and for some \(\overline{l},l>0\);
- (iii) \(\sum_{n=1}^{\infty}|\alpha _{n+1}-\alpha _{n}|< \infty \) and \(\sum_{n=1}^{\infty}|\beta _{n+1}-\beta _{n}|< \infty \).
Then, \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to \(x^{*}=P_{\Xi _{1}}g_{1}(y^{*})\) and \(y^{*}=P_{\Xi _{2}}g_{2}(x^{*})\), respectively.
Proof
Let \(x,y\in C\) and \(\nabla \mathcal{G}_{i}=A_{i}^{*}(I-P_{Q})A_{i}\), for all \(i=1,2\). First, we show that \(\nabla \mathcal{G}_{i}\) is \(\frac {1}{L_{i}}\)-inverse strongly monotone for all \(i=1,2\).
Consider,
From the property of \(P_{C}\), we have
Substituting (4.5) into (4.4), we have
It follows that
Then, \(\nabla \mathcal{G}_{i}\) is \(\frac{1}{L_{i}}\)-inverse strongly monotone, for all \(i=1,2\). Hence, we can conclude Theorem 4.2 from Proposition 4.1 and Theorem 3.1. □
4.2 The constrained convex-minimization problem
Let C be a nonempty, closed, and convex subset of H. The constrained convex-minimization problem is to find \(x^{*}\in C\) such that
$$\begin{aligned} \mathcal{Q} \bigl(x^{*} \bigr)=\min_{x\in C} \mathcal{Q}(x), \end{aligned}$$
(4.6)
where \(\mathcal{Q}:H\to \mathbb{R}\) is a continuously differentiable function. Assume that (4.6) is consistent (i.e., it has a solution) and we use Ψ to denote its solution set. It is known that the gradient-projection algorithm (GPA) plays an important role in solving constrained convex-minimization problems. It is well known that a necessary condition of optimality for a point \(x^{*} \in C\) to be a solution of the minimization problem (4.6) is that \(x^{*}\) solves the variational inequality
$$\begin{aligned} \bigl\langle \nabla \mathcal{Q} \bigl(x^{*} \bigr), x-x^{*} \bigr\rangle \geq 0 \quad \text{for all } x\in C. \end{aligned}$$
That is, \(\Psi = VI(C,\nabla \mathcal{Q})\), where \(\Psi \neq \emptyset \). The following theorem is derived from these results.
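A minimal gradient-projection sketch for problem (4.6), on a toy instance of our own choosing (not from the paper):

```python
# Hedged gradient-projection sketch for a constrained minimization problem
# like (4.6), on a toy instance of our own: minimize Q(x) = (x - 7)**2 over
# C = [1, 5].  grad Q(x) = 2*(x - 7) is 2-Lipschitz, hence (1/2)-inverse
# strongly monotone, so any step gamma in (0, 1) works; the constrained
# minimizer is x* = 5.

def P_C(x):
    return max(1.0, min(x, 5.0))

gamma = 0.4
x = 1.0
for _ in range(50):
    x = P_C(x - gamma * 2.0 * (x - 7.0))

print(x)  # 5.0
```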
Theorem 4.3
Let C be a nonempty, closed, and convex subset of H. For every \(i=1,2\), let \(\mathcal{Q}_{i}:H\to \mathbb{R}\) be a continuously differentiable function whose gradient \(\nabla \mathcal{Q}_{i}\) is \(\frac{1}{L_{\mathcal{Q}_{i}}}\)-inverse strongly monotone. Let \(T_{i}: C\to C\) be nonexpansive mappings. Assume that \(\Theta _{i}=\mathrm{Fix}(T_{i})\cap \Psi _{i}\neq \emptyset \), for all \(i=1,2\), where \(\Psi _{i}\) denotes the solution set of (4.6) for \(\mathcal{Q}_{i}\). Let \(g_{1},g_{2}:H \rightarrow H\) be \(\sigma _{1}\)- and \(\sigma _{2}\)-contraction mappings with \(\sigma _{1},\sigma _{2}\in (0,1)\) and \(\sigma =\max \{\sigma _{1},\sigma _{2}\}\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1},y_{1}\in C\) and
where \(\{\beta _{n}\},\{\alpha _{n}\}\subseteq [0,1]\), \(\gamma _{i} \in (0,\frac{2}{L_{\mathcal{Q}_{i}}})\), \(b_{i} \in (0,1)\) for all \(i=1,2\). Assume that the following conditions hold:
- (i) \(\lim_{n\rightarrow \infty}\alpha _{n}=0\) and \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);
- (ii) \(0<\overline{l}\leq \beta _{n} \leq l\) for all \(n\in \mathbb{N}\) and for some \(\overline{l},l>0\);
- (iii) \(\sum_{n=1}^{\infty}|\alpha _{n+1}-\alpha _{n}|< \infty \) and \(\sum_{n=1}^{\infty}|\beta _{n+1}-\beta _{n}|< \infty \).
Then, \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to \(x^{*}=P_{\Theta _{1}}g_{1}(y^{*})\) and \(y^{*}=P_{\Theta _{2}}g_{2}(x^{*})\), respectively.
Proof
By using Theorem 4.2, we obtain the conclusion. □
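As an illustration of the gradient projection step underlying this theorem, the iteration \(x_{n+1}=P_{C}(x_{n}-\gamma \nabla \mathcal{Q}(x_{n}))\) can be sketched as follows. The quadratic objective, the box constraint \(C\), and the step size are our own toy choices, not data from the paper; convergence requires \(\gamma \in (0,\frac{2}{L})\) when \(\nabla \mathcal{Q}\) is \(L\)-Lipschitz.

```python
import numpy as np

# Minimal sketch of the gradient projection algorithm (GPA):
#   x_{n+1} = P_C(x_n - gamma * grad Q(x_n)).
# Objective, constraint set, and step size are illustrative assumptions.

def gpa(grad, project, x0, gamma, tol=1e-10, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_new = project(x - gamma * grad(x))   # one projected gradient step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# Example: Q(x) = 0.5*||x - b||^2, so grad Q(x) = x - b and L = 1,
# minimized over the box C = [0, 1]^2.
b = np.array([3.0, -2.0])
x_star = gpa(grad=lambda x: x - b,
             project=lambda x: np.clip(x, 0.0, 1.0),
             x0=np.zeros(2), gamma=1.0)
# The minimizer is the projection of b onto the box, i.e., [1, 0].
assert np.allclose(x_star, [1.0, 0.0])
```

For this strongly convex example a single projected step already lands on the minimizer; in general the loop runs until successive iterates stop moving.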
5 Numerical experiments
In this section, we give examples to support our main theorem. In the following examples, we choose \(\alpha _{n}=\frac {1}{3n}\), \(\beta _{n}=\frac {n+1}{6n}\), \(a_{1}=0.50\), \(a_{2}= 0.25\), \(b_{1}=0.40\), and \(b_{2}=0.45\). The stopping criterion used for our computation is \(\|x_{n+1}-x_{n} \|< 10^{-5}\) and \(\|y_{n+1}-y_{n}\|< 10^{-5}\).
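The stopping rule above can be expressed as a generic loop; here `step` is only a placeholder for one pass of the intermixed update, not the paper's actual algorithm, and the contractions in the toy illustration are our own choices.

```python
# Skeleton of the stopping criterion used in the experiments: iterate
# until both successive differences fall below the tolerance 1e-5.
# `step` stands in for one pass of the intermixed algorithm.

def run(step, x, y, tol=1e-5, max_iter=100_000):
    for n in range(1, max_iter + 1):
        x_new, y_new = step(x, y, n)
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new, n
        x, y = x_new, y_new
    return x, y, max_iter

# Toy illustration: two contractions with common limit 1.
x, y, n = run(lambda x, y, n: (0.5 * x + 0.5, 0.5 * y + 0.5), 10.0, 8.0)
assert abs(x - 1.0) < 1e-4 and abs(y - 1.0) < 1e-4
```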
Example 5.1
Let \(\mathbb{R}\) be the set of real numbers and let \(C=[1,10]\). Then, we obtain \(P_{C}x=\max \{\min \{x,10\},1 \}\), for all \(x\in \mathbb{R}\). For every \(i=1,2\), let \(A_{i},B_{i}:C \to \mathbb{R}\) be defined by \(A_{1}(x)= \frac {3x}{5}-\frac {3}{5}\), \(A_{2}(x)= \frac {2x}{5}-\frac {2}{5}\), \(B_{1}(x)= \frac {2x}{3}-\frac {2}{3}\), and \(B_{2}(x)= \frac {x}{6}-\frac {1}{6}\), for all \(x\in C\). For every \(i=1,2\), let \(f_{i}:\mathbb{R}\to \mathbb{R}\) be defined by \(f_{1}(x)=x^{2}\) and \(f_{2}(x)=2x^{2}\), for all \(x\in \mathbb{R}\). Then, we have \(J_{\gamma f}^{1}, J_{\gamma f}^{2}:\mathbb{R} \to \mathbb{R}\) defined by \(J_{\gamma f}^{1}(x)=\frac {x}{2}\) and \(J_{\gamma f}^{2}(x)=\frac {5x}{9}\), respectively. For every \(i=1,2\), let \(T_{i}:C \to C\) be defined by \(T_{1}(x)= \frac {x}{2}+\frac {1}{2}\) and \(T_{2}(x)= \frac {x}{3}+\frac {2}{3}\), for all \(x\in C\). For every \(i=1,2\), let \(g_{i}:\mathbb{R} \to \mathbb{R}\) be defined by \(g_{1}(x)= \frac {x}{5}\) and \(g_{2}(x)= \frac {x}{4}\), for all \(x\in \mathbb{R}\). Let the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) be generated by \(x_{1},y_{1}\in C\) and
According to the definitions of \(A_{i}\), \(B_{i}\), \(T_{i}\), and \(f_{i}\) for all \(i=1,2\), we obtain \(1\in \mathrm{Fix}(T_{i})\cap VI(C,A_{i},f_{i})\cap VI(C,B_{i},f_{i})\). From Theorem 3.1, we can conclude that the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to 1.
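The data of this example can be checked numerically. We use the resolvent \(J_{\gamma f}(x)=\operatorname{argmin}_{y}\{f(y)+\frac{1}{2\gamma}(y-x)^{2}\}\), which for \(f(y)=cy^{2}\) has the closed form \(x/(1+2c\gamma)\); the \(\gamma\) values below are our inference from the stated formulas, since \(\gamma\) is not given explicitly in the example.

```python
# Numerical check of the data in Example 5.1.  The gamma values are
# inferred (an assumption), chosen so the closed-form resolvent
# x / (1 + 2*c*gamma) reproduces the formulas stated in the example.

P_C = lambda x: max(min(x, 10.0), 1.0)      # projection onto C = [1, 10]
A1 = lambda x: 3*x/5 - 3/5
A2 = lambda x: 2*x/5 - 2/5
B1 = lambda x: 2*x/3 - 2/3
B2 = lambda x: x/6 - 1/6
T1 = lambda x: x/2 + 1/2
T2 = lambda x: x/3 + 2/3

def resolvent(c, gamma, x):
    """Closed-form prox of f(y) = c*y^2 at x."""
    return x / (1 + 2*c*gamma)

# J^1(x) = x/2 matches c = 1, gamma = 1/2; J^2(x) = 5x/9 matches c = 2, gamma = 1/5.
assert abs(resolvent(1, 0.5, 3.0) - 1.5) < 1e-12
assert abs(resolvent(2, 0.2, 9.0) - 5.0) < 1e-12

# x* = 1 is a common solution: T_i(1) = 1, A_i(1) = B_i(1) = 0, and P_C(1) = 1.
assert all(abs(F(1.0)) < 1e-12 for F in (A1, A2, B1, B2))
assert all(abs(T(1.0) - 1.0) < 1e-12 for T in (T1, T2))
assert P_C(1.0) == 1.0
```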
The numerical and graphical results of Example 5.1 are shown in Table 1 and Figs. 1 and 2.
The convergence behavior of \(\{x_{n}\}\) and \(\{y_{n}\}\) with \(x_{1}=10\), \(y_{1}=8\) in Example 5.1
Error plot of \(\|x_{n+1}-x_{n} \|\) and \(\|y_{n+1}-y_{n}\|\) in Example 5.1; the y-axis is shown on a log scale
Next, we consider the problem in the infinite-dimensional Hilbert space.
Example 5.2
Let \(H=L_{2}([0,1])\) with the inner product defined by \(\langle x,y\rangle =\int _{0}^{1}x(t)y(t)\,dt\), for all \(x,y\in H\), and the induced norm \(\|x\|= (\int _{0}^{1}x(t)^{2}\,dt )^{\frac{1}{2}}\).
Let \(C:= \{x\in L_{2}([0,1]): \|x\|\leq 1 \}\) be the unit ball. Then, we have \(P_{C}x=\frac{x}{\max \{1,\|x\|\}}\), for all \(x\in H\).
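This projection onto the unit ball can be sketched on a uniform grid; the grid size and the Riemann-sum approximation of the \(L_{2}\) norm are our own discretization choices.

```python
import numpy as np

# Sketch of the metric projection onto the unit ball C of L2([0,1]),
# P_C x = x / max{1, ||x||}, on a uniform grid (our own discretization).

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

def l2_norm(x):
    """Riemann-sum approximation of the L2([0,1]) norm."""
    return np.sqrt(np.sum(x**2) * dt)

def project_unit_ball(x):
    return x / max(1.0, l2_norm(x))

x = 3.0 * t                         # ||3t|| = sqrt(3) > 1, so x lies outside C
p = project_unit_ball(x)
assert abs(l2_norm(p) - 1.0) < 1e-12       # projection lands on the unit sphere

y = 0.5 * np.sin(t)                 # ||y|| < 1, so y is already in C
assert np.allclose(project_unit_ball(y), y)
```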
For every \(i=1,2\), let \(A_{i},B_{i}:C \to H\) be defined by \(A_{1}(x(t))= x(t)\), \(A_{2}(x(t))= \frac {3x(t)}{2}\), \(B_{1}(x(t))= 2x(t)\), and \(B_{2}(x(t))= \frac {5x(t)}{3}\), for all \(t \in [0,1]\), \(x\in C\). For every \(i=1,2\), let \(f_{i}:H\to \mathbb{R}\) be defined by \(f_{1}(x(t))=\frac {3x(t)^{2}}{2}\), \(f_{2}(x(t))=\frac {x(t)^{2}}{2}\) for all \(t \in [0,1]\), \(x\in H\). Then, we have \(J_{\gamma f}^{1}, J_{\gamma f}^{2}:H \to H\) defined by \(J_{\gamma f}^{1}(x(t))=\frac {4x(t)}{7}\) and \(J_{\gamma f}^{2}(x(t))=\frac {5x(t)}{6}\), for all \(t \in [0,1]\), respectively. For every \(i=1,2\), let \(T_{i}:C \to C\) be defined by \(T_{1}(x(t))= \frac {x(t)}{2}\) and \(T_{2}(x(t))= \frac {x(t)}{3}\), for all \(t \in [0,1]\), \(x\in C\). For every \(i=1,2\), let \(g_{i}:H \to H\) be defined by \(g_{1}(x(t))= \frac {x(t)}{9}\) and \(g_{2}(x(t))= \frac {x(t)}{16}\), for all \(t \in [0,1]\), \(x\in H\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) be generated by \(x_{1}\), \(y_{1}\in C\) and
According to the definitions of \(A_{i}\), \(B_{i}\), \(T_{i}\), and \(f_{i}\) for all \(i=1,2\), the solution of this problem is \(x(t)=\mathbf{0}\), where \(\mathbf{0}\in \mathrm{Fix}(T_{i})\cap VI(C,A_{i},f_{i})\cap VI(C,B_{i},f_{i})\). From Theorem 3.1, we can conclude that the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to \(x(t)=\mathbf{0}\).
We test the algorithms for three different starting points and use \(\|x_{n+1}-x_{n} \|< 10^{-5}\) and \(\|y_{n+1}-y_{n}\|< 10^{-5}\) as the stopping criterion.
Case 1: \(x_{1}=0.2t\) and \(y_{1}=0.8t\);
Case 2: \(x_{1}=e^{-2t}\) and \(y_{1}=t^{2}\);
Case 3: \(x_{1}=\sin (t)\) and \(y_{1}=\cos (t)\).
The computational and graphical results of Example 5.2 are shown in Tables 2, 3, and 4 and Figs. 3, 4, 5, and 6.
The convergence behavior of \(\{x_{n}(t)\}\) and \(\{y_{n}(t)\}\) with \(x_{1}=0.2t\) and \(y_{1}=0.8t\) (Case 1) in Example 5.2; the y-axis is shown on a log scale
The convergence behavior of \(\{x_{n}(t)\}\) and \(\{y_{n}(t)\}\) with \(x_{1}=e^{-2t}\) and \(y_{1}=t^{2}\) (Case 2) in Example 5.2; the y-axis is shown on a log scale
The convergence behavior of \(\{x_{n}(t)\}\) and \(\{y_{n}(t)\}\) with \(x_{1}=\sin (t)\) and \(y_{1}=\cos (t)\) (Case 3) in Example 5.2; the y-axis is shown on a log scale
Error plot of \(\|x_{n+1}-x_{n} \|\) and \(\|y_{n+1}-y_{n}\|\) in Example 5.2; the y-axis is shown on a log scale
We next give a comparison between Algorithm (5.3) in Corollary 3.3 and Algorithm 3.2 in [6].
Example 5.3
In this example, we use the same mappings and parameters as in Example 5.2. Setting \(\{x_{n}\}=\{y_{n}\}\) and \(\{w_{n}\}=\{z_{n}\}\), and taking \(A_{1}\equiv A_{2}\equiv B_{1} \equiv B_{2}\), \(f_{1} \equiv f_{2}\), \(g_{1} \equiv g_{2}\), and \(T_{1} \equiv T_{2}\equiv I\), we can rewrite (3.34) as follows:
Also, we modify Algorithm 3.2 in [6] by setting \(A\equiv A_{1}\), which is an inverse strongly monotone operator, and choosing the same mappings and parameters as in Example 5.2. Hence, it can be rewritten as follows:
The comparison of Algorithm (5.3) and Algorithm (5.4), which is modified from Algorithm 3.2 in [6], in terms of the CPU time and the number of iterations with different starting points, is reported in Table 5.
Remark 5.4
From our numerical experiments in Examples 5.1, 5.2, and 5.3, we make the following observations.
1. Table 1 and Figs. 1 and 2 show that \(\{x_{n}\}\) and \(\{y_{n}\}\) converge to 1, where \(1\in \mathrm{Fix}(T_{i})\cap VI(C,A_{i},f_{i})\cap VI(C,B_{i},f_{i})\) for all \(i=1,2\). The convergence of \(\{x_{n}\}\) and \(\{y_{n}\}\) in Example 5.1 is guaranteed by Theorem 3.1.
2. Tables 2, 3, and 4 and Figs. 3, 4, 5, and 6 show that \(\{x_{n}\}\) and \(\{y_{n}\}\) converge to \(x(t)=\mathbf{0}\), where \(\mathbf{0}\in \mathrm{Fix}(T_{i})\cap VI(C,A_{i},f_{i})\cap VI(C,B_{i},f_{i})\) for all \(i=1,2\). The convergence of \(\{x_{n}\}\) and \(\{y_{n}\}\) in Example 5.2 is guaranteed by Theorem 3.1.
3. From Table 5, we see that the sequence generated by our Algorithm (5.3) converges faster than Algorithm (5.4), which is modified from Algorithm 3.2 in [6], in terms of both the number of iterations and the CPU time.
6 Conclusion
In this paper, we have proposed a new problem, called the combination of mixed variational inequality problems (1.7). This problem can be reduced to the classical variational inequality problem (1.4). Using the intermixed method with the viscosity technique, we have introduced a new intermixed algorithm for finding a common solution of the combination of mixed variational inequality problems and the fixed-point problem of a nonexpansive mapping in a real Hilbert space. Moreover, we have proposed Lemmas 2.5 and 2.6, related to the combination of mixed variational inequality problems (1.7), in Sect. 2. Under suitable conditions, a strong convergence theorem (Theorem 3.1) is established for the proposed Algorithm (3.1). We apply our theorem to solve the split-feasibility problem and the constrained convex-minimization problem. The effectiveness of the proposed method is illustrated by numerical results for several examples in Hilbert spaces (see Tables 1, 2, 3, 4, and 5 and Figs. 1, 2, 3, 4, 5, and 6). The obtained results improve and extend several previously published results in this field.
Availability of data and materials
Not applicable.
References
Browder, F.E.: Semicontractive and semiaccretive nonlinear mappings in Banach spaces. Bull. Am. Math. Soc. 74, 660–665 (1968)
Lescarret, C.: Cas d’addition des applications monotones maximales dans un espace de Hilbert. C. R. Acad. Sci. Paris, Ser. I 261, 1160–1163 (1965). (French)
Browder, F.E.: On the unification of the calculus of variations and the theory of monotone nonlinear operators in Banach spaces. Proc. Natl. Acad. Sci. USA 56, 419–425 (1966)
Konnov, I.V., Volotskaya, E.O.: Mixed variational inequalities and economic equilibrium problems. J. Appl. Math. 2(6), 289–314 (2002)
Noor, M.A.: A new iterative method for monotone mixed variational inequalities. Math. Comput. Model. 26(7), 29–34 (1997)
Noor, M.A., Noor, K.I., Yaqoob, H.: On general mixed variational inequalities. Acta Appl. Math. 110, 227–246 (2010)
Noor, M.A.: An implicit method for mixed variational inequalities. Appl. Math. Lett. 11, 109–113 (1998)
Noor, M.A.: Mixed quasi variational inequalities. Appl. Math. Comput. 146, 553–578 (2003)
Noor, M.A.: Proximal methods for mixed quasi variational inequalities. J. Optim. Theory Appl. 115, 447–451 (2002)
Noor, M.A.: Fundamentals of mixed quasi variational inequalities. Int. J. Pure Appl. Math. 15, 137–258 (2004)
Noor, M.A., Bnouhachem, A.: Self-adaptive methods for mixed quasi variational inequalities. J. Math. Anal. Appl. 312, 514–526 (2005)
Bnouhachem, A., Noor, M.A., Rassias, T.M.: Three-steps iterative algorithms for mixed variational inequalities. Appl. Math. Comput. 183, 436–446 (2006)
Noor, M.A.: Numerical methods for monotone mixed variational inequalities. Adv. Nonlinear Var. Inequal. 1, 51–79 (1998)
Bnouhachem, A.: A self-adaptive method for solving general mixed variational inequalities. J. Math. Anal. Appl. 309(1), 136–150 (2005)
Wang, Z.B., Chen, Z.Y., Xiao, Y.B., Zhang, C.: A new projection-type method for solving multi-valued mixed variational inequalities without monotonicity. Appl. Anal. 99(9), 1453–1466 (2020)
Jolaoso, L.O., Shehu, Y., Yao, J.C.: Inertial extragradient type method for mixed variational inequalities without monotonicity. Math. Comput. Simul. 192, 353–369 (2022)
Stampacchia, G.: Formes bilineaires coercitives sur les ensembles convexes. C. R. Acad. Sci. 258, 4413–4416 (1964)
Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. SIAM, Philadelphia (2000)
Aubin, J.P., Ekeland, I.: Applied Nonlinear Analysis. Wiley, New York (1984)
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Program. 63, 123–145 (1994)
Dafermos, S.C.: Traffic equilibrium and variational inequalities. Transp. Sci. 14, 42–54 (1980)
Dafermos, S.C., Mckelvey, S.C.: Partitionable variational inequalities with applications to network and economic equilibrium. J. Optim. Theory Appl. 73, 243–268 (1992)
Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 21, 93–108 (2020)
Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: Pseudomonotone variational inequalities and fixed points. Fixed Point Theory 22, 543–558 (2021)
Ceng, L.C., Latif, A., Al-Mazrooei, A.E.: Composite viscosity methods for common solutions of general mixed equilibrium problem, variational inequalities and common fixed points. J. Inequal. Appl. 2015, Article ID 217 (2015)
Zhao, T.Y., Wang, D.Q., Ceng, L.C., et al.: Quasi-inertial Tseng’s extragradient algorithms for pseudomonotone variational inequalities and fixed point problems of quasi-nonexpansive operators. Numer. Funct. Anal. Optim. 42, 69–90 (2020)
Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints. Optimization 70, 1337–1358 (2021)
Ceng, L.C., Shang, M.: Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 70, 715–740 (2021)
Ceng, L.C., Yuan, Q.: Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems. J. Inequal. Appl. 2019, Article ID 274 (2019)
Ceng, L.C., Köbis, E., Zhao, X.: On general implicit hybrid iteration method for triple hierarchical variational inequalities with hierarchical variational inequality constraints. Optimization 69, 1961–1986 (2020)
Ceng, L.C., Yao, J.C., Shehu, Y.: On Mann implicit composite subgradient extragradient methods for general systems of variational inequalities with hierarchical variational inequality constraints. J. Inequal. Appl. 2022, Article ID 78 (2022)
Ceng, L.C., Latif, A., Al-Mazrooei, A.E.: Hybrid viscosity methods for equilibrium problems, variational inequalities, and fixed point problems. Appl. Anal. 95, 1088–1117 (2016)
Ceng, L.C., Coroian, I., Qin, X., Yao, J.C.: A general viscosity implicit iterative algorithm for split variational inclusions with hierarchical variational inequality constraints. Fixed Point Theory 20, 469–482 (2019)
Yao, Z., Kang, S.M., Li, H.J.: An intermixed algorithm for strict pseudocontractions in Hilbert spaces. Fixed Point Theory Appl. 2015, Article ID 206 (2015)
Kangtunyakarn, A.: An iterative algorithm to approximate a common element of the set of common fixed points for a finite family of strict pseudocontractions and of the set of solutions for a modified system of variational inequalities. Fixed Point Theory Appl. 2013, Article ID 143 (2013)
Chuang, C.S.: Algorithms and convergence theorems for mixed equilibrium problems in Hilbert spaces. Numer. Funct. Anal. Optim. 40(8), 953–979 (2019)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operators Theory in Hilbert Spaces, 2nd edn. CMS Books in Mathematics. Springer, Berlin (2017)
Brezis, H.: Operateurs Maximaux Monotone et Semigroupes de Contractions dans les Espace d’Hilbert. North-Holland, Amsterdam (1973)
Takahashi, W.: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
Ming, T., Liu, L.: General iterative methods for equilibrium and constrained convex minimization problem. J. Optim. Theory Appl. 63, 1367–1385 (2014)
Xu, H.K.: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659–678 (2003)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Kotzer, T., Cohen, N., Shamir, J.: Extended and alternative projections onto convex sets: theory and applications, Technical Report No. EE 900, Dept. of Electrical Engineering, Technion, Haifa, Israel (1993)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2003)
Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
Censor, Y., Elfving, T., Kopf, N., Bortfeld, T.: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071–2084 (2005)
Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)
Ceng, L.C., Ansari, Q.H., Yao, J.C.: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 64, 633–642 (2012)
López, G., Martín-Márquez, V., Wang, F., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28(8), 085004 (2012)
Vinh, N.T., Hoai, P.T.: Some subgradient extragradient type algorithms for solving split feasibility and fixed point problems. Math. Methods Appl. Sci. 39, 3808–3823 (2016)
Gibali, A., Mai, D.T., Vinh, N.T.: A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. 15(2), 963–984 (2019)
Vinh, V.T., Cholamjiak, P., Suantai, S.: A new CQ algorithm for solving split feasibility problems in Hilbert spaces. Bull. Malays. Math. Sci. Soc. 42, 2517–2534 (2019)
Acknowledgements
The authors would like to thank the referees for valuable comments and suggestions for improving this work. The first author would like to thank Rajamangala University of Technology Thanyaburi (RMUTT) under The Science, Research and Innovation Promotion Funding (TSRI) (Contract No. FRB660012/0168 and under project number FRB66E0635) for financial support.
Funding
This research was supported by The Science, Research and Innovation Promotion Funding (TSRI) (Grant No. FRB660012/0168). This research block grant was managed under the Rajamangala University of Technology Thanyaburi (FRB66E0635).
Author information
Contributions
AK dealt with the conceptualization, formal analysis, supervision, writing—review and editing. WK writing—original draft, formal analysis, computation. Both authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Khuangsatung, W., Kangtunyakarn, A. An intermixed method for solving the combination of mixed variational inequality problems and fixed-point problems. J Inequal Appl 2023, 1 (2023). https://doi.org/10.1186/s13660-022-02908-8
Keywords
- Mixed variational inequality problems
- Intermixed algorithm
- Strong convergence