The forward–backward splitting method for finding the minimum like-norm solution of the mixed variational inequality problem
Journal of Inequalities and Applications volume 2023, Article number: 126 (2023)
Abstract
We consider a general class of convex optimization problems in which one seeks to minimize a strongly convex function over a closed and convex set that is itself the solution set of a mixed variational inequality problem in a Hilbert space. A regularized forward–backward splitting method is applied to find the minimum like-norm solution of the mixed variational inequality problem under investigation.
1 Introduction
Let H be a real Hilbert space. Consider the mixed variational inequality problem: find \(\bar{x}\in H\) such that
$$ f(x)-f(\bar{x})+\bigl\langle A(\bar{x}),x-\bar{x}\bigr\rangle \geq 0,\quad \forall x\in H, \qquad (\mathrm{MVI}) $$
where the following assumptions are made throughout the paper:
- \(f:H\rightarrow (-\infty ,+\infty ]\) is proper, lower semicontinuous, and convex.
- \(A:H\rightarrow H\) is a nonlinear monotone mapping.
- The set of solutions to problem (MVI), denoted by \(\operatorname{Sol}(\mathrm{MVI})\), is nonempty.
Mixed variational inequalities are general problems that encompass as special cases several problems from continuous optimization and variational analysis, such as minimization problems, linear complementarity problems, vector optimization problems, and variational inequalities, with applications in economics, engineering, physics, mechanics, and electronics (see [6, 7, 12, 13, 19] among others).
We note that if f is the indicator function of a closed convex set C in H, then the mixed variational inequality problem (MVI) is equivalent to finding \(\bar{x}\in C\) such that
$$ \bigl\langle A(\bar{x}),x-\bar{x}\bigr\rangle \geq 0,\quad \forall x\in C, \qquad (1) $$
which is called the standard variational inequality problem. On the other hand, if \(A=0\), then the mixed variational inequality problem (MVI) reduces to the unconstrained optimization problem of minimizing f over H:
$$ \min_{x\in H} f(x). $$
For mixed variational inequalities, one can find various algorithms in the literature, for instance, in [3, 11, 15, 17, 20]. It is known that problem (MVI) is characterized by the fixed point equation
$$ \bar{x}=\operatorname{prox}_{f}^{t}\bigl(\bar{x}-tA(\bar{x})\bigr), $$
where \(t>0\) and \(\operatorname{prox}_{f}^{t}\) is the proximity operator recalled in Sect. 2. This equation suggests the possibility of iterating (see [4])
$$ x_{n+1}=\operatorname{prox}_{f}^{t}\bigl(x_{n}-tA(x_{n})\bigr). $$
This method is called the forward–backward splitting method. Forward–backward methods belong to the class of proximal splitting methods. These methods require the computation of the proximity operator and the approximation of proximal points (see [9]).
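To make the forward–backward step concrete, the following minimal sketch runs the iteration \(x_{n+1}=\operatorname{prox}_{f}^{t}(x_{n}-tA(x_{n}))\) for the illustrative choices \(f=\Vert \cdot \Vert _{1}\) (whose proximity operator is the soft-thresholding map) and the affine monotone operator \(A(x)=Qx-b\) with Q positive semidefinite; these choices and all names are ours, not the paper's.

```python
import numpy as np

def prox_l1(x, t):
    """Proximity operator of f = ||.||_1 with parameter t (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(x0, Q, b, t, iters=500):
    """Iterate x_{n+1} = prox_f^t(x_n - t*A(x_n)) with A(x) = Qx - b."""
    x = x0.copy()
    for _ in range(iters):
        x = prox_l1(x - t * (Q @ x - b), t)
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
Q = M.T @ M                       # positive semidefinite, hence A is monotone
b = rng.standard_normal(10)
t = 0.9 / np.linalg.norm(Q, 2)    # step size below 1/(Lipschitz constant of A)
print(forward_backward(np.zeros(10), Q, b, t))
```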
Problem (MVI) might have multiple solutions, and in this case it is natural to consider the minimal like-norm solution problem, in which one seeks the solution of (MVI) with the minimal like-norm:
$$ \min \bigl\{ \omega (x):x\in \operatorname{Sol}(\mathrm{MVI})\bigr\}. \qquad (\mathrm{MLN}) $$
The function \(\omega :H\rightarrow \mathbb{R}\) is assumed to satisfy the following:
- ω is a strongly convex function over H with parameter \(t>0\) (see Definition 2.1).
- ω is continuously differentiable.
If \(\operatorname{Sol}(\mathrm{MVI})\) is a nonempty closed convex set, then by the strong convexity of ω, problem (MLN) has a unique solution. For simplicity, problem (MVI) will be called the core problem, problem (MLN) will be called the outer problem, and correspondingly, ω will be called the outer objective function.
When \(A=0\) and \(\omega (x)=\frac{1}{2}\|x\|^{2}\), the best known indirect method for solving problem (MLN) is the well-known Tikhonov regularization [18], which suggests solving the following alternative regularized problem for some \(\lambda >0\):
$$ \min_{x\in H} \biggl\{ f(x)+\frac{\lambda}{2} \Vert x \Vert ^{2} \biggr\}. \qquad (Q_{\lambda}) $$
In [5], the authors treat the case where f is the indicator function of a closed and convex set C and show that under some restrictive conditions, including C being a polyhedron, there exists a small enough \(\lambda ^{*}>0\) such that the optimal solution of problem \(Q_{\lambda ^{*}}\) is the optimal solution of problem (MLN). In [16], Solodov showed that if \(\sum_{k=1}^{\infty}\lambda _{k}=\infty \) and f is again the indicator function of a closed and convex set, then there is no need to find the optimal solution of problem \(Q_{\lambda _{k}}\): it suffices to approximate its solution by performing a single projected gradient step on \(Q_{\lambda _{k}}\). In [2], a first-order method for solving problem (MLN), called the minimal norm gradient, was proposed, for which the authors proved an \(O(\frac{1}{\sqrt{k}})\) rate of convergence in terms of the inner objective function values. The minimal norm gradient method is based on the cutting plane idea: at each iteration of the algorithm two specific half-spaces are constructed, and a minimization of the outer objective function ω over the intersection of these half-spaces is solved.
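The behavior of Tikhonov regularization is easy to observe numerically: for an underdetermined least-squares problem, the solution of the regularized problem approaches the minimum-norm least-squares solution as \(\lambda \downarrow 0\). The snippet below is our own illustration of this standard fact, not code from the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 10))    # underdetermined: infinitely many solutions
c = rng.standard_normal(5)

x_min_norm = np.linalg.pinv(M) @ c  # minimum-norm least-squares solution

for lam in (1.0, 1e-2, 1e-4, 1e-6):
    # Optimality condition of min ||Mx - c||^2 + lam*||x||^2
    x_lam = np.linalg.solve(M.T @ M + lam * np.eye(10), M.T @ c)
    gap = np.linalg.norm(x_lam - x_min_norm)
    print(f"lambda = {lam:.0e}:  distance to min-norm solution = {gap:.2e}")
```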
In [21], for finding the minimum-norm solution to the standard monotone variational inequality problem (1), Zhou et al. proposed an iterative method built upon the metric projection \(P_{C}\) from H onto C and two parameter sequences \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\). They proved that the proposed iterative sequences converge strongly to the minimum-norm solution of the variational inequality provided \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) satisfy certain conditions. In [8], when A is pseudo-monotone and Lipschitz continuous, Linh et al. introduced an inertial projection algorithm for finding the minimum-norm solutions of the variational inequality problem. In [14], Ogwo et al. introduced an inertial method for finding minimum-norm solutions of the split variational inequality problem.
Our interest in this paper is to study a regularized forward–backward splitting method for finding the minimum like-norm solution of the mixed variational inequality problem in infinite-dimensional real Hilbert spaces when the operator A is monotone and hemicontinuous.
2 Mathematical toolbox
Let \(f:H\rightarrow (-\infty ,+\infty ]\) be an extended real-valued function. The subdifferential of f is the set-valued operator \(\partial f:H\rightarrow 2^{H}\), the value of which at \(x\in H\) is
$$ \partial f(x)=\bigl\{ u\in H:f(y)\geq f(x)+\langle u,y-x\rangle ,\ \forall y\in H\bigr\}. $$
Consider the Moreau envelope \(\operatorname{env}_{f}^{\alpha}(x)\) and the set-valued proximal mapping \(\operatorname{prox}_{f}^{\alpha}(x)\) defined by
$$ \operatorname{env}_{f}^{\alpha}(x)=\inf_{u\in H} \biggl\{ f(u)+\frac{1}{2\alpha} \Vert u-x \Vert ^{2} \biggr\},\qquad \operatorname{prox}_{f}^{\alpha}(x)=\operatorname*{argmin}_{u\in H} \biggl\{ f(u)+\frac{1}{2\alpha} \Vert u-x \Vert ^{2} \biggr\}. \qquad (3) $$
The operator \(\operatorname{prox}_{f}^{\alpha}\) is called the proximity operator. For every \(x\in H\), the infimum in (3) is achieved at a unique point \(\operatorname{prox}_{f}^{\alpha}(x)\), which is characterized by the inclusion
$$ x-\operatorname{prox}_{f}^{\alpha}(x)\in \alpha \partial f\bigl(\operatorname{prox}_{f}^{\alpha}(x)\bigr). $$
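As a concrete instance (a standard fact recorded here for illustration, not taken from the paper), for \(f=\Vert \cdot \Vert _{1}\) on \(H=\mathbb{R}^{m}\) the inclusion above decouples coordinatewise and yields the soft-thresholding formula
$$ \bigl(\operatorname{prox}_{f}^{\alpha}(x)\bigr)_{i}=\operatorname{sgn}(x_{i})\max \bigl\{ \vert x_{i} \vert -\alpha ,0\bigr\} ,\quad i=1,\dots ,m. $$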
The proximity operator possesses several important properties, three of which will be useful in our analysis and are thus recalled here.
(i) Variational inequality:
$$ \bigl\langle x-\operatorname{prox}_{f}^{\alpha}(x),y- \operatorname{prox}_{f}^{\alpha}(x) \bigr\rangle \leq \alpha f(y)-\alpha f\bigl(\operatorname{prox}_{f}^{\alpha}(x)\bigr),\quad \forall x,y\in H. $$
(ii) Nonexpansiveness:
$$ \Vert x-y \Vert \geq \bigl\Vert \operatorname{prox}_{f}^{\alpha}(x)- \operatorname{prox}_{f}^{\alpha}(y) \bigr\Vert , \quad \forall x,y \in H. $$
(iii) Firm nonexpansiveness:
$$ \bigl\langle x-y,\operatorname{prox}_{f}^{\alpha}(x)- \operatorname{prox}_{f}^{\alpha}(y) \bigr\rangle \geq \bigl\Vert \operatorname{prox}_{f}^{\alpha}(x)- \operatorname{prox}_{f}^{\alpha}(y) \bigr\Vert ^{2} ,\quad \forall x,y\in H. $$
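These properties can be sanity-checked numerically; the following snippet (our illustration) verifies firm nonexpansiveness (iii) for the soft-thresholding proximity operator on random pairs of points.

```python
import numpy as np

def prox_l1(x, a):
    """Proximity operator prox_f^a for f = ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - a, 0.0)

rng = np.random.default_rng(2)
a = 0.3
for _ in range(5):
    x, y = rng.standard_normal(8), rng.standard_normal(8)
    px, py = prox_l1(x, a), prox_l1(y, a)
    lhs = np.dot(x - y, px - py)        # <x - y, prox(x) - prox(y)>
    rhs = np.dot(px - py, px - py)      # ||prox(x) - prox(y)||^2
    assert lhs >= rhs - 1e-12           # property (iii)
print("firm nonexpansiveness verified on sampled pairs")
```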
Definition 2.1
A strictly convex and Gâteaux differentiable function \(h:H\rightarrow \mathbb{R}\) is said to be strongly convex with parameter \(t>0\) if
$$ h(y)\geq h(x)+\bigl\langle \nabla h(x),y-x\bigr\rangle +\frac{t}{2} \Vert y-x \Vert ^{2},\quad \forall x,y\in H. $$
Definition 2.2
A mapping \(T:H\rightarrow H \) is called monotone if
$$ \bigl\langle T(x)-T(y),x-y\bigr\rangle \geq 0,\quad \forall x,y\in H. $$
Definition 2.3
A mapping \(T:H\rightarrow H\) is called strongly monotone if there exists \(t>0\) such that
$$ \bigl\langle T(x)-T(y),x-y\bigr\rangle \geq t \Vert x-y \Vert ^{2},\quad \forall x,y\in H. $$
If a strictly convex and Gâteaux differentiable function \(h:H\rightarrow R \) is strongly convex with parameter \(t>0\), then ∇h is strongly monotone with parameter \(t>0\).
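This implication follows by writing the strong-convexity inequality of Definition 2.1 at the pairs \((x,y)\) and \((y,x)\) and adding:
$$ h(y)\geq h(x)+\bigl\langle \nabla h(x),y-x\bigr\rangle +\frac{t}{2} \Vert y-x \Vert ^{2},\qquad h(x)\geq h(y)+\bigl\langle \nabla h(y),x-y\bigr\rangle +\frac{t}{2} \Vert x-y \Vert ^{2}, $$
which gives
$$ \bigl\langle \nabla h(x)-\nabla h(y),x-y\bigr\rangle \geq t \Vert x-y \Vert ^{2},\quad \forall x,y\in H. $$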
Definition 2.4
A mapping \(T:H\rightarrow H\) is called Lipschitz continuous if there exists \(L>0\) such that
$$ \bigl\Vert T(x)-T(y) \bigr\Vert \leq L \Vert x-y \Vert ,\quad \forall x,y\in H. $$
If a mapping \(T:H\rightarrow H\) is strongly monotone with parameter \(t>0\) and Lipschitz continuous with constant L, then \(L\geq t\).
Remark 2.1
In fact, it is known that:
(i) If ∇ω is strongly monotone with constant t and A is monotone, then \(A+\alpha \nabla \omega \) is strongly monotone with constant αt.
(ii) If ∇ω is Lipschitz continuous with constant \(L_{\omega}\) and A is Lipschitz continuous with constant \(L_{A}\), then \(A+\alpha \nabla \omega \) is also Lipschitz continuous with constant \(L_{A}+\alpha L_{\omega}\).
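Both claims follow directly from the definitions; for (i), the monotonicity of A and the strong monotonicity of ∇ω give
$$ \bigl\langle (A+\alpha \nabla \omega )(x)-(A+\alpha \nabla \omega )(y),x-y\bigr\rangle \geq 0+\alpha t \Vert x-y \Vert ^{2},\quad \forall x,y\in H. $$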
Definition 2.5
[1] A function f is called lower semicontinuous at the point \(x_{0}\in \operatorname{dom}f\) if, for any sequence \(x_{n}\in \operatorname{dom}f\) such that \(x_{n}\rightarrow x_{0}\), the following inequality holds:
$$ f(x_{0})\leq \liminf_{n\rightarrow \infty} f(x_{n}). \qquad (4) $$
If inequality (4) holds whenever the convergence of \(\{x_{n}\}\) to \(x_{0}\) is weak, then the function f is called weakly lower semicontinuous at \(x_{0}\).
Lemma 2.1
[1] Let f be a convex and lower semicontinuous function. Then it is weakly lower semicontinuous.
Definition 2.6
[21] A mapping T is said to be hemicontinuous if convergence of a sequence \(\{x_{n}\}\) to \(x_{0}\in H\) along a line implies \(T(x_{n})\rightharpoonup T(x_{0})\), i.e., \(T(x_{n})=T(x_{0}+t_{n}x)\rightharpoonup T(x_{0})\) as \(t_{n}\rightarrow 0\) for all \(x\in H\).
Lemma 2.2
[21] Let \(\{\alpha _{n}\}\) be a sequence of nonnegative real numbers satisfying
$$ \alpha _{n+1}\leq (1-\gamma _{n})\alpha _{n}+\gamma _{n}\beta _{n},\quad n\geq 0, $$
where \(\{\gamma _{n}\}\subseteq (0,1)\) and \(\{\beta _{n}\}\) satisfy
(i) \(\sum_{n=0}^{\infty} \gamma _{n}=\infty \);
(ii) either \(\limsup_{n\rightarrow \infty} \beta _{n}\leq 0\) or \(\sum_{n=0}^{\infty} |\gamma _{n}\beta _{n}|<\infty \).
Then \(\lim_{n\rightarrow \infty}\alpha _{n}=0\).
Lemma 2.3
[10] Let \(A:H\rightarrow H\) be a hemicontinuous monotone operator. Assume that the following coercivity condition holds: there exists \(v\in \operatorname{dom}f\) such that
$$ \frac{f(x)-f(v)+\langle A(x),x-v\rangle}{ \Vert x \Vert }\rightarrow +\infty \quad \text{as } \Vert x \Vert \rightarrow \infty ,\ x\in \operatorname{dom}f. $$
Then \(\operatorname{Sol}(\mathrm{MVI})\) is a nonempty set.
3 Main result
Before describing the algorithms, we require the following notation for the optimal solution of the problem consisting of minimizing ω over a given closed and convex set C:
$$ \Omega (C):=\operatorname*{argmin}_{x\in C}\omega (x). \qquad (5) $$
By the optimality condition in problem (5), it follows that
$$ \bigl\langle \nabla \omega \bigl(\Omega (C)\bigr),x-\Omega (C)\bigr\rangle \geq 0,\quad \forall x\in C. \qquad (6) $$
Lemma 3.1
Let \(A:H\rightarrow H\) be a hemicontinuous monotone operator. Then, for a fixed element \(\bar{x}\in H\), the following mixed variational inequalities are equivalent:
(i) \(f(x)-f(\bar{x})+\langle A(x),x-\bar{x}\rangle \geq 0\), \(\forall x\in H \).
(ii) \(f(x)-f(\bar{x})+\langle A(\bar{x}),x-\bar{x}\rangle \geq 0\), \(\forall x \in H \).
Proof
\({\mathrm{(ii)}}\Rightarrow{\mathrm{(i)}}\) Since A is a monotone operator, for any \(x\in H\) we have
$$ \bigl\langle A(x)-A(\bar{x}),x-\bar{x}\bigr\rangle \geq 0. $$
Hence,
$$ f(x)-f(\bar{x})+\bigl\langle A(x),x-\bar{x}\bigr\rangle \geq f(x)-f(\bar{x})+\bigl\langle A(\bar{x}),x-\bar{x}\bigr\rangle \geq 0. $$
\({\mathrm{(i)}}\Rightarrow{\mathrm{(ii)}}\) Let \(x_{t}=\bar{x}+t(x-\bar{x})\), \(0< t<1\). Then we have
$$ f(x_{t})-f(\bar{x})+\bigl\langle A(x_{t}),x_{t}-\bar{x}\bigr\rangle \geq 0 $$
and, by the convexity of f,
$$ f(x_{t})\leq tf(x)+(1-t)f(\bar{x}). $$
Hence,
$$ t\bigl(f(x)-f(\bar{x})\bigr)+t\bigl\langle A(x_{t}),x-\bar{x}\bigr\rangle \geq 0. $$
Dividing by t and letting \(t\rightarrow 0^{+}\), the hemicontinuity of A yields (ii).
This completes the proof. □
Lemma 3.2
Let \(A:H\rightarrow H\) be a hemicontinuous monotone operator. Then \(\operatorname{Sol}(\mathrm{MVI})\) is a closed convex set.
Proof
Let \(\bar{x},\hat{x}\in \operatorname{Sol}(\mathrm{MVI})\), and let \(0< t<1\). Then, for any \(x\in H \), by Lemma 3.1 we have
$$ f(x)-f(\bar{x})+\bigl\langle A(x),x-\bar{x}\bigr\rangle \geq 0 $$
and
$$ f(x)-f(\hat{x})+\bigl\langle A(x),x-\hat{x}\bigr\rangle \geq 0. $$
Hence, multiplying the first inequality by t and the second by \(1-t\), adding, and using the convexity of f,
$$ f(x)-f\bigl(t\bar{x}+(1-t)\hat{x}\bigr)+\bigl\langle A(x),x-\bigl(t\bar{x}+(1-t)\hat{x}\bigr)\bigr\rangle \geq 0. $$
Then we have \(t\bar{x}+(1-t)\hat{x}\in \operatorname{Sol}(\mathrm{MVI})\), that is, \(\operatorname{Sol}(\mathrm{MVI})\) is a convex set.
Let \(\{x_{n}\}\subseteq\operatorname{Sol}(\mathrm{MVI})\) and \(x_{n}\rightarrow \bar{x}\). Then, for any \(x\in H \), by Lemma 3.1, we have
$$ f(x)-f(x_{n})+\bigl\langle A(x),x-x_{n}\bigr\rangle \geq 0. $$
By the weak lower semicontinuity of f, we have
$$ \limsup_{n\rightarrow \infty}\bigl(-f(x_{n})\bigr)\leq -f(\bar{x}). $$
Then we have
$$ f(x)-f(\bar{x})+\bigl\langle A(x),x-\bar{x}\bigr\rangle \geq \limsup_{n\rightarrow \infty}\bigl(f(x)-f(x_{n})+\bigl\langle A(x),x-x_{n}\bigr\rangle \bigr)\geq 0. $$
Hence, \(\bar{x}\in \operatorname{Sol}(\mathrm{MVI})\), that is, \(\operatorname{Sol}(\mathrm{MVI})\) is a closed set. □
In this section, we use the idea of regularization to attack the general case. For given \(\gamma >0\), we consider the following regularized mixed variational inequality problem: find \(\bar{x}\in H\) such that
$$ f(x)-f(\bar{x})+\bigl\langle A(\bar{x})+\gamma \nabla \omega (\bar{x}),x-\bar{x}\bigr\rangle \geq 0,\quad \forall x\in H, \qquad (7) $$
where \(\gamma >0\) is the regularization parameter.
Lemma 3.3
Let \(A:H\rightarrow H\) be a hemicontinuous monotone operator. Then the regularized mixed variational inequality problem (7) has a unique solution.
Proof
For any \(v\in \operatorname{dom}f\) and \(v^{*}\in \partial f(v)\), by Remark 2.1 and the subgradient inequality \(f(x)-f(v)\geq \langle v^{*},x-v\rangle \), we have
$$ f(x)-f(v)+\bigl\langle A(x)+\gamma \nabla \omega (x),x-v\bigr\rangle \geq \bigl\langle v^{*}+A(v)+\gamma \nabla \omega (v),x-v\bigr\rangle +\gamma t \Vert x-v \Vert ^{2}. $$
Hence,
$$ \frac{f(x)-f(v)+\langle A(x)+\gamma \nabla \omega (x),x-v\rangle}{ \Vert x \Vert }\rightarrow +\infty \quad \text{as } \Vert x \Vert \rightarrow \infty . $$
Then, by Lemma 2.3, the set of solutions to the regularized mixed variational inequality problem (7) is nonempty. Next, we show that problem (7) has a unique solution.
Assume that x̄ and x̂ are solutions of the regularized mixed variational inequality problem (7). Then we have
$$ f(\hat{x})-f(\bar{x})+\bigl\langle A(\bar{x})+\gamma \nabla \omega (\bar{x}),\hat{x}-\bar{x}\bigr\rangle \geq 0 $$
and
$$ f(\bar{x})-f(\hat{x})+\bigl\langle A(\hat{x})+\gamma \nabla \omega (\hat{x}),\bar{x}-\hat{x}\bigr\rangle \geq 0. $$
Hence, adding these two inequalities and using Remark 2.1, we have
$$ 0\leq \bigl\langle A(\bar{x})+\gamma \nabla \omega (\bar{x})-A(\hat{x})-\gamma \nabla \omega (\hat{x}),\hat{x}-\bar{x}\bigr\rangle \leq -\gamma t \Vert \hat{x}-\bar{x} \Vert ^{2}. $$
Therefore, \(\hat{x}=\bar{x}\). This completes the proof. □
Remark 3.1
For any \(\gamma >0\) and \(\beta >0\), the unique solution \(\bar{x}_{\gamma}\) of (7) is characterized by the fixed point equation
$$ \bar{x}_{\gamma}=\operatorname{prox}_{f}^{\beta}\bigl(\bar{x}_{\gamma}-\beta A(\bar{x}_{\gamma})-\beta \gamma \nabla \omega (\bar{x}_{\gamma})\bigr). $$
In this section, we will introduce two iterative methods (one implicit and the other explicit). First, by Remark 3.1, we introduce the implicit one:
$$ y_{n}=\operatorname{prox}_{f}^{\beta _{n}}\bigl(y_{n}-\alpha _{n}\nabla \omega (y_{n})-\beta _{n}A(y_{n})\bigr), \qquad (10) $$
where \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) are two sequences in \((0,1)\) that satisfy the following condition: \(\frac{\alpha _{n}}{\beta _{n}}\rightarrow 0\) as \(n\rightarrow \infty \).
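Each step of the implicit method requires solving a fixed point equation in \(y_{n}\). The following sketch (our illustration; the operators, parameter sequences, and inner-loop length are assumptions, and the inner Picard iteration contracts only for small \(\alpha _{n}\), \(\beta _{n}\)) approximates one implicit step at a time for \(f=\Vert \cdot \Vert _{1}\), \(\omega (x)=\frac{1}{2}\Vert x\Vert ^{2}\), and an affine monotone A.

```python
import numpy as np

def prox_l1(x, a):
    return np.sign(x) * np.maximum(np.abs(x) - a, 0.0)

def implicit_step(y, A, grad_w, alpha, beta, inner=200):
    """Approximate the solution of y = prox_f^beta(y - alpha*grad_w(y) - beta*A(y))
    by Picard iteration (a contraction for small alpha, beta)."""
    for _ in range(inner):
        y = prox_l1(y - alpha * grad_w(y) - beta * A(y), beta)
    return y

Q = np.array([[1.0, 0.25], [0.25, 0.5]])  # positive definite, so A is monotone
A = lambda x: Q @ x
grad_w = lambda x: x                      # omega(x) = ||x||^2 / 2
y = np.ones(2)
for n in range(1, 50):
    alpha, beta = (n + 1) ** -0.9, (n + 1) ** -0.5   # alpha_n/beta_n -> 0
    y = implicit_step(y, A, grad_w, alpha, beta)
print(y)
```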
Theorem 3.1
Let A be a hemicontinuous monotone operator. Then the sequence \(\{y_{n}\}\) generated by implicit method (10) converges to \(\bar{x}=\Omega (\operatorname{Sol}(\mathrm{MVI}))\), which is the minimum like-norm solution of (MVI).
Proof
Put \(z_{n}=y_{n}-\alpha _{n}\nabla \omega (y_{n})-\beta _{n}A(y_{n})\). For any \(p\in\operatorname{Sol}(\mathrm{MVI})\), we have
By using (10) and (11), we get
It follows from the property of \(\operatorname{prox}_{f}^{\beta _{n}}\) that
which simplifies to
and then
Setting \(\gamma _{n}=\frac{\alpha _{n}}{\beta _{n}}\), we have
Since A is a monotone operator and \(p\in\operatorname{Sol}(\mathrm{MVI})\), we know
and
Combining the above three relations yields
Then we have
from which it turns out that
Hence, by the strong monotonicity of ∇ω and the Cauchy–Schwarz inequality, we have
Therefore, \(\{y_{n}\}\) is bounded. Then we know that \(\{y_{n}\}\) has a subsequence \(\{y_{n_{k}}\}\) such that \(y_{n_{k}}\rightharpoonup \bar{x}\) as \(k\rightarrow \infty \). Furthermore, without loss of generality, we may assume that \(\{y_{n}\}\) converges weakly to a point \(\bar{x}\in H\). We show that x̄ is a solution to (MVI). For any \(x\in H\), by Remark 2.1, we have
Combining (15) and (10), we get
Taking the limit as \(n\rightarrow \infty \) in (16) yields
By Lemma 3.1, we get
that is, \(\bar{x}\in\operatorname{Sol}(\mathrm{MVI})\). Therefore, we can substitute p by x̄ in (14) to obtain
Since \(y_{n}\rightharpoonup \bar{x}\) as \(n\rightarrow \infty \), by (17) we get \(y_{n}\rightarrow \bar{x}\) as \(n\rightarrow \infty \). Moreover, from (13) we get
from which we know that x̄ is the minimum like-norm solution of (MVI). This completes the proof. □
Now, we introduce an explicit method (regularized forward–backward splitting) and establish its strong convergence. Motivated by the implicit method, it is natural to consider the following iterative method, which generates a sequence \(\{x_{n}\}\) according to the recursion below.
Algorithm 3.1
Given \(x_{0}\in H\), for every \(n\in \mathbb{N}\), set
where \(\{s_{n}\}\), \(\{\alpha _{n}\}\), and \(\{\beta _{n}\}\) are three sequences in \((0,1)\) that satisfy the following conditions (a numerical sketch follows the list):
(i) \(\alpha _{n}ts_{n}\leq 1\), \(0<\bar{s}<s_{n}\);
(ii) \(\frac{\alpha _{n}}{\beta _{n}}\rightarrow 0\), \(\frac{\beta _{n}^{2}}{\alpha _{n}}\rightarrow 0\) as \(n\rightarrow \infty \);
(iii) \(\alpha _{n}\rightarrow 0\) as \(n\rightarrow \infty \), \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);
(iv) \(\frac{|\alpha _{n}-\alpha _{n-1}|+|\beta _{n}-\beta _{n-1}|}{\alpha _{n}^{2}}\rightarrow 0\) as \(n\rightarrow \infty \).
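The display of the recursion in Algorithm 3.1 did not survive extraction; the sketch below therefore assumes an update of the form \(x_{n+1}=\operatorname{prox}_{f}^{\beta _{n}}(x_{n}-s_{n}(\alpha _{n}\nabla \omega (x_{n})+\beta _{n}A(x_{n})))\), which is consistent with the roles that \(s_{n}\), \(\alpha _{n}\), and \(\beta _{n}\) play in conditions (i)–(iv) and in the proofs below, but should not be read as the paper's exact formula. The choices \(\alpha _{n}=(n+1)^{-0.6}\), \(\beta _{n}=(n+1)^{-0.5}\), \(s_{n}\equiv 0.9\) satisfy conditions (i)–(iv) when \(t=1\).

```python
import numpy as np

def prox_l1(x, a):
    return np.sign(x) * np.maximum(np.abs(x) - a, 0.0)

def algorithm_3_1(x0, A, grad_w, N=2000, s=0.9):
    """Assumed update: x <- prox_f^{beta_n}(x - s*(alpha_n*grad_w(x) + beta_n*A(x)))."""
    x = x0.copy()
    for n in range(1, N + 1):
        alpha = (n + 1) ** -0.6   # conditions (i)-(iv) hold for these choices
        beta = (n + 1) ** -0.5
        x = prox_l1(x - s * (alpha * grad_w(x) + beta * A(x)), beta)
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
A = lambda x: Q @ x               # monotone and Lipschitz (Q positive definite)
grad_w = lambda x: x              # omega = ||.||^2/2 is strongly convex, t = 1
print(algorithm_3_1(np.ones(2), A, grad_w))
```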
Proposition 3.1
Let A be a hemicontinuous monotone operator. Let \(\{x_{n}\}\) be defined by Algorithm 3.1. Assume that \(\{\beta _{n}A(x_{n})\}\) is bounded. Then the iterative sequence \(\{x_{n}\}\) is bounded.
Proof
For any \(p\in \operatorname{Sol}(\mathrm{MVI}) \), from the property of \(\operatorname{prox}_{f}^{\beta _{n}}\), we know
Then, by the monotonicity of A, strong monotonicity and Lipschitz continuity of ∇ω, we have
Suppose that \(\{x_{n}\}\) is unbounded; then there exists a subsequence \(\{x_{n_{k}+1}\}\subseteq \{x_{n}\}\) such that
Hence,
Then, by (19) and the boundedness of \(\{\beta _{n}A(x_{n})\}\), we have
Using inequality (19) again, we get
Then we have
Since \(\alpha _{n}\rightarrow 0\) and \(\{\beta _{n}A(x_{n})\}\) is bounded, by (20) the subsequence \(\{x_{n_{k}}\}\) is also bounded. This contradicts \(\|x_{n_{k}}-p\|\rightarrow +\infty \). Hence, the iterative sequence \(\{x_{n}\}\) is bounded. □
Theorem 3.2
Let A be a hemicontinuous monotone operator. Let \(\{x_{n}\}\) be defined by Algorithm 3.1. Assume that both \(\{\beta _{n}A(x_{n})\}\) and \(\{A(y_{n})\}\) are bounded. Then the iterative sequence \(\{x_{n}\}\) converges to x̄, which is the minimum like-norm solution of (MVI).
Proof
By using Theorem 3.1, we know that \(\{y_{n}\}\) converges strongly to x̄. Therefore, it is sufficient to show that \(x_{n+1}-y_{n}\rightarrow 0\) as \(n\rightarrow \infty \). By using (10) and Algorithm 3.1, we get
Hence, by the monotonicity of A, we have
and then, by the strong monotonicity and Lipschitz continuity of ∇ω, we have
Since \(\operatorname{prox}_{f}^{\beta _{n}}\) is a firmly nonexpansive mapping, we have
Then we have
and thus
Hence, we get
Since \(\{y_{n}\}\) and \(\{A(y_{n})\}\) are two bounded sequences, there exists \(M_{1}>0\) such that \(\sup \{\|\nabla \omega (y_{n-1})\| ,\|A(y_{n-1})\|\}\leq M_{1}\) for any \(n\geq 1\). Then we have
From conditions (ii) and (iv) we know that \(\frac{|\alpha _{n}-\alpha _{n-1}|+|\beta _{n}-\beta _{n-1}|}{\alpha _{n}}=o( \alpha _{n})\) and \(\beta _{n}^{2}=o(\alpha _{n})\). Then (21) turns out to be
By Lemma 2.2 and condition (iii), we have \(\|x_{n+1}-y_{n}\|\rightarrow 0\), as \(n\rightarrow \infty \). It follows that \(\{x_{n}\}\) converges strongly to \(\bar{x}=\operatorname*{argmin}_{x \in \operatorname{Sol}(\mathrm{MVI}) } \omega (x)\). This completes the proof. □
If \(A:H\rightarrow H\) is an \(L_{A}\)-Lipschitz continuous and monotone operator, then we have the following convergence result.
Theorem 3.3
Let A be an \(L_{A}\)-Lipschitz continuous and monotone operator. Let \(\{x_{n}\}\) be defined by Algorithm 3.1. Then the iterative sequence \(\{x_{n}\}\) converges to x̄, which is the minimum like-norm solution of (MVI).
Proof
From Theorem 3.1, we know that \(y_{n}\rightarrow \bar{x}=\operatorname*{argmin}_{x \in \operatorname{Sol}(\mathrm{MVI}) } \omega (x)\). Therefore, it is sufficient to show that \(x_{n+1}-y_{n}\rightarrow 0\) as \(n\rightarrow \infty \). In view of the conditions on \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\), without loss of generality, we may assume that
By using (10) and Algorithm 3.1, we get
Hence, by the monotonicity of A and the strong monotonicity of ∇ω, we have
Then, by (24) and (23), we have
From (25), (22) and condition (iv), we obtain
By condition (iii) and Lemma 2.2, we deduce that \(x_{n+1}-y_{n}\rightarrow 0\) as \(n\rightarrow \infty \). This completes the proof. □
Corollary 3.1
Let A be a hemicontinuous monotone operator. Let \(\omega (x)=\frac{1}{2}\|x\|^{2}\). Let \(\{x_{n}\}\) be defined by Algorithm 3.1. Assume that both \(\{\beta _{n}A(x_{n})\}\) and \(\{A(y_{n})\}\) are bounded. Then the iterative sequence \(\{x_{n}\}\) converges to \(\bar{x}=P_{\operatorname{Sol}(\mathrm{MVI})}(0)\), which is the minimum norm solution of (MVI).
Corollary 3.2
Let A be an \(L_{A}\)-Lipschitz continuous and monotone operator. Let \(\omega (x)=\frac{1}{2}\|x\|^{2}\). Let \(\{x_{n}\}\) be defined by Algorithm 3.1. Then the iterative sequence \(\{x_{n}\}\) converges to \(\bar{x}=P_{\operatorname{Sol}(\mathrm{MVI})}(0)\), which is the minimum norm solution of (MVI).
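For orientation, note that with \(\omega (x)=\frac{1}{2}\Vert x\Vert ^{2}\) one has \(\nabla \omega (x)=x\), and when f is the indicator function of a closed convex set C, \(\operatorname{prox}_{f}^{\beta}=P_{C}\); the outer problem then reads
$$ \min \Bigl\{ \tfrac{1}{2} \Vert x \Vert ^{2}:x\in \operatorname{Sol}(\mathrm{MVI})\Bigr\},\qquad \bar{x}=P_{\operatorname{Sol}(\mathrm{MVI})}(0), $$
which is why the corollaries recover the minimum-norm solution.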
4 Application
Let \(f:H\rightarrow (-\infty ,+\infty ]\) be a proper, lower semicontinuous, and convex function, and let \(g:H\rightarrow (-\infty ,+\infty )\) be a convex and Gâteaux differentiable function. Consider the optimization problem
$$ \min_{x\in H} \bigl\{ f(x)+g(x)\bigr\}. \qquad (\mathrm{P}) $$
We denote by \(\operatorname{Sol}(\mathrm{P})\) the solution set of problem (P). Notice that \(\bar{x}\in \operatorname{Sol}(\mathrm{P})\) if and only if x̄ solves problem (MVI) with \(A=\nabla g\).
Note that if g is convex and Gâteaux differentiable, then ∇g is norm-to-weak continuous and monotone. Hence, ∇g is a hemicontinuous monotone operator. On the other hand, when \(A=\nabla g\), the minimization problem corresponding to the regularized mixed variational inequality problem (7) becomes
$$ \min_{x\in H} \bigl\{ f(x)+g(x)+\gamma \omega (x)\bigr\}. \qquad (26) $$
Since \(\omega (x)\) is a strongly convex function, the minimization problem (26) has a unique solution. Therefore, as an application of Theorem 3.2, we have the following result.
Algorithm 4.1
Given \(x_{0}\in H\), for every \(n\in \mathbb{N}\), set
where \(\{s_{n}\}\), \(\{\alpha _{n}\}\), and \(\{\beta _{n}\}\) are three sequences in \((0,1)\) that satisfy the following conditions:
(i) \(\alpha _{n}ts_{n}\leq 1\), \(0<\bar{s}<s_{n}\);
(ii) \(\frac{\alpha _{n}}{\beta _{n}}\rightarrow 0\), \(\frac{\beta _{n}^{2}}{\alpha _{n}}\rightarrow 0\) as \(n\rightarrow \infty \);
(iii) \(\alpha _{n}\rightarrow 0\) as \(n\rightarrow \infty \), \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);
(iv) \(\frac{|\alpha _{n}-\alpha _{n-1}|+|\beta _{n}-\beta _{n-1}|}{\alpha _{n}^{2}}\rightarrow 0\) as \(n\rightarrow \infty \).
Theorem 4.1
Let the sequence \(\{x_{n}\}\) be generated by Algorithm 4.1. Assume that both \(\{\beta _{n}\nabla g(x_{n})\}\) and \(\{\nabla g(y_{n})\}\) are bounded. Assume that \(\operatorname{Sol}(\mathrm{P})\) is nonempty. Then the iterative sequence \(\{x_{n}\}\) converges to \(\bar{x}=\operatorname*{argmin}_{x \in \operatorname{Sol}(\mathrm{P})} \omega (x)\), which is the minimum like-norm solution of (P).
5 Numerical experiment
Example 5.1
Let \(H=\mathbb{R}\). Let
and let \(\omega (x)=\frac{1}{2}x^{2}\). It is clear that \(A=\nabla g\) is a hemicontinuous monotone operator and \(\omega (\cdot )\) is a strongly convex function with parameter 1. Choose the sequences \(\{\alpha _{n}\}\), \(\{\beta _{n}\}\), and \(\{s_{n}\}\) such that
Then it is clear that conditions (i)–(iv) of Algorithm 3.1 and Algorithm 4.1 are satisfied.
In Fig. 1, we present the numerical results of Algorithm 4.1. If \(x_{0}=10\), the optimal solution is reached within 8 iterations; if \(x_{0}=50\), within 19 iterations.
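The definitions of f and g in Example 5.1 did not survive extraction, so the following runnable sketch is purely illustrative: on \(H=\mathbb{R}\) with \(\omega (x)=\frac{1}{2}x^{2}\) it takes \(f(x)=\vert x\vert \) and \(g(x)=\frac{1}{2}(x-2)^{2}\) (our choices, not the paper's) and uses the same assumed update form as the sketch after Algorithm 3.1; the unique minimizer of \(f+g\) is \(x=1\).

```python
def prox_abs(x, a):
    """prox of f = |.| with parameter a (scalar soft-thresholding)."""
    return (abs(x) - a if abs(x) > a else 0.0) * (1 if x >= 0 else -1)

def algorithm_4_1(x0, grad_g, N=5000, s=0.9):
    """Assumed update: x <- prox_f^{beta_n}(x - s*(alpha_n*x + beta_n*grad_g(x)))."""
    x = x0
    for n in range(1, N + 1):
        alpha, beta = (n + 1) ** -0.6, (n + 1) ** -0.5
        x = prox_abs(x - s * (alpha * x + beta * grad_g(x)), beta)
    return x

grad_g = lambda x: x - 2.0        # g(x) = (x - 2)^2 / 2, an illustrative choice
for x0 in (10.0, 50.0):
    print(x0, "->", algorithm_4_1(x0))
# 0 in sign(x) + x - 2 forces x = 1, the unique (hence minimum-norm) solution.
```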
6 Concluding remarks
In this paper, we considered a class of regularized forward–backward splitting methods for finding the minimum like-norm solution of mixed variational inequalities and of a convex minimization problem in a Hilbert space. Strong convergence results were obtained for the forward–backward splitting method under a hemicontinuity assumption.
Availability of data and materials
Not applicable.
References
Alber, Y., Ryazantseva, I.: Nonlinear Ill-Posed Problems of Monotone Type. Springer, Dordrecht (2006)
Beck, A., Sabach, S.: A first order method for finding minimal norm-like solutions of convex optimization problems. Math. Program., Ser. A 147, 25–46 (2014)
Chen, C.H., Ma, S.Q., Yang, J.F.: A general inertial proximal point algorithm for mixed variational inequality problem. SIAM J. Optim. 25, 2120–2142 (2015)
Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)
Ferris, M.C., Mangasarian, O.L.: Finite perturbation of convex programs. Appl. Math. Optim. 23, 263–273 (1991)
Goeleven, D.: Existence and uniqueness for a linear mixed variational inequality arising in electrical circuits with transistors. J. Optim. Theory Appl. 138, 397–406 (2008)
Konnov, I.V., Volotskaya, E.O.: Mixed variational inequalities and economic equilibrium problems. J. Appl. Math. 6, 289–314 (2002)
Linh, H.M., Reich, S., Thong, D.V., Dung, V.T., Lan, N.P.H.: Analysis of two variants of an inertial projection algorithm for finding the minimum-norm solutions of variational inequality and fixed point problems. Numer. Algorithms 89, 1695–1721 (2022)
Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
Liu, Z.H., Migorski, S., Zeng, S.D.: Partial differential variational inequalities involving nonlocal boundary conditions in Banach spaces. J. Differ. Equ. 263, 3989–4006 (2017)
Malitsky, Y.: Golden ratio algorithms for variational inequalities. Math. Program. 184, 383–410 (2020)
Noor, M.A.: Proximal methods for mixed variational inequalities. J. Optim. Theory Appl. 115, 447–452 (2002)
Noor, M.A., Huang, Z.Y.: Some proximal methods for solving mixed variational inequalities. Abstr. Appl. Anal. 2012, Article ID 610852 (2012)
Ogwo, G.N., Izuchukwu, C., Mewomo, O.T.: Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algorithms 88, 1419–1456 (2021)
Quoc, T.D., Muu, L.D., Hien, N.V.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–766 (2008)
Solodov, M.: An explicit descent method for bilevel convex optimization. J. Convex Anal. 14, 227–237 (2007)
Thakur, B.S., Varghese, S.: Approximate solvability of general strongly mixed variational inequalities. Tbil. Math. J. 6, 13–20 (2013)
Tikhonov, A.N., Arsenin, V.Y.: Solutions of Ill-Posed Problems. Winston, Washington (1977)
Wang, M.: The existence results and Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. Ann. Math. Phys. 7, 151–163 (2017)
Xia, F.Q., Huang, N.J.: An inexact hybrid projection-proximal point algorithm for solving generalized mixed variational inequalities. Comput. Math. Appl. 62, 4596–4604 (2011)
Zhou, Y., Zhou, H.Y., Wang, P.Y.: Iterative methods for finding the minimum-norm solution of the standard monotone variational inequality problems with applications in Hilbert spaces. J. Inequal. Appl. 2015, 135 (2015)
Funding
The work was partially supported by the Heilongjiang Provincial Natural Sciences Grant (No. LH2022A017) and the National Natural Sciences Grant (No. 11871182).
Author information
Contributions
All authors contributed equally to this work. All authors reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Guan, WB., Song, W. The forward–backward splitting method for finding the minimum like-norm solution of the mixed variational inequality problem. J Inequal Appl 2023, 126 (2023). https://doi.org/10.1186/s13660-023-03039-4