
The forward–backward splitting method for finding the minimum like-norm solution of the mixed variational inequality problem

Abstract

We consider a general class of convex optimization problems in which one seeks to minimize a strongly convex function over a closed and convex set that is itself the solution set of a mixed variational inequality problem in a Hilbert space. A regularized forward–backward splitting method is applied to find the minimum like-norm solution of the mixed variational inequality problem under investigation.

1 Introduction

Let H be a real Hilbert space. Consider the mixed variational inequality problem: find \(\bar{x}\in H\) such that

$$ f(x)-f(\bar{x})+\bigl\langle A(\bar{x}),x-\bar{x}\bigr\rangle \geq 0,\quad \forall x \in H, $$
(MVI)

where the following assumptions are made throughout the paper:

  • \(f:H\rightarrow (-\infty ,+\infty ]\) is proper, lower semicontinuous, and convex.

  • \(A:H\rightarrow H\) is a nonlinear monotone mapping.

  • The set of solutions to problem (MVI), denoted by \(\operatorname{Sol}(\mathrm{MVI})\), is nonempty.

Mixed variational inequalities are general problems that encompass as special cases several problems from continuous optimization and variational analysis, such as minimization problems, linear complementarity problems, vector optimization problems, and variational inequalities, with applications in economics, engineering, physics, mechanics, and electronics (see [6, 7, 12, 13, 19] among others).

We note that if f is the indicator function of a closed convex set C in H, then the mixed variational inequality problem (MVI) is equivalent to finding \(\bar{x}\in C\) such that

$$ \bigl\langle A(\bar{x}),x-\bar{x}\bigr\rangle \geq 0,\quad \forall x \in C, $$
(1)

which is called the standard variational inequality problem. On the other hand, if \(A=0\), then the mixed variational inequality problem (MVI) reduces to the unconstrained optimization problem of minimizing f over H:

$$ \min_{x\in H}f(x). $$
(2)

For mixed variational inequalities, one can find various algorithms in the literature, for instance, in [3, 11, 15, 17, 20]. It is known that problem (MVI) is characterized by the fixed point equation

$$ x= \operatorname{prox}_{f}^{t}\bigl(x-tA(x) \bigr),$$

where \(t>0\) and

$$ \operatorname{prox}_{f}^{t}(x)=\operatorname*{argmin}_{y\in H} \biggl\{ tf(y)+\frac{1}{2} \Vert x-y \Vert ^{2} \biggr\} . $$

This equation suggests the possibility of iterating (see [4])

$$ x_{n+1} = \operatorname{prox}_{f}^{t_{n}} \bigl(x_{n}-t_{n}A(x_{n})\bigr).$$

This method is called the forward–backward splitting method. Forward–backward methods belong to the class of proximal splitting methods. These methods require the computation of the proximity operator and the approximation of proximal points (see [9]).
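To make the iteration concrete, the following is a minimal numerical sketch of the forward–backward step for the illustrative choices \(A(x)=Mx+q\) with M positive semidefinite and \(f=\lambda \Vert \cdot \Vert _{1}\) (the data and the step size are our assumptions for this example; the soft-thresholding formula for the proximity operator of the \(\ell _{1}\)-norm is standard). It illustrates only the basic recursion above, not the regularized method analyzed later.

```python
import numpy as np

def prox_l1(x, t):
    # proximity operator of t*||.||_1: componentwise soft thresholding
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, prox, x0, t, n_iter=500):
    # x_{n+1} = prox_f^t(x_n - t * A(x_n))  with a constant step size t > 0
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        x = prox(x - t * A(x), t)
    return x

# illustrative data (our choice): A(x) = M x + q with M positive semidefinite, f = 0.1 * ||.||_1
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3))
M = B.T @ B                                   # positive semidefinite, so A is monotone
q = rng.standard_normal(3)
lam = 0.1

x_bar = forward_backward(A=lambda x: M @ x + q,
                         prox=lambda z, t: prox_l1(z, lam * t),
                         x0=np.zeros(3),
                         t=1.0 / np.linalg.norm(M, 2))
# x_bar approximately satisfies the fixed point equation x = prox_f^t(x - t*A(x))
```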

Problem (MVI) might have multiple solutions, and in this case it is natural to consider the minimal like-norm solution problem (MLN), in which one seeks a solution of (MVI) with minimal like-norm:

$$ \min_{x \in \operatorname{Sol}(\mathrm{MVI}) } \omega (x). $$
(MLN)

The function \(\omega :H\rightarrow R\) is assumed to satisfy the following:

  • ω is a strongly convex function over H with parameter \(t>0\) (see Definition 2.1).

  • ω is continuously differentiable.

If \(\operatorname{Sol}(\mathrm{MVI})\) is a nonempty closed convex set, then by the strong convexity of ω, problem (MLN) has a unique solution. For simplicity, problem (MVI) will be called the core problem, problem (MLN) will be called the outer problem, and correspondingly, ω will be called the outer objective function.

When \(A=0\) and \(\omega (x)=\frac{1}{2}\|x\|^{2}\), the best known indirect method for solving problem (MLN) is the well-known Tikhonov regularization [18], which suggests solving the following regularized problem for some \(\lambda >0\):

$$ \min_{x\in H} \biggl\{ f(x)+\frac{\lambda}{2} \Vert x \Vert ^{2} \biggr\} . $$
(\(Q_{\lambda}\))

In [5], the authors treat the case in which f is the indicator function of a closed and convex set C and show that, under some restrictive conditions, including C being a polyhedron, there exists a small enough \(\lambda ^{*}>0\) such that the optimal solution of problem \(Q_{\lambda ^{*}}\) is the optimal solution of problem (MLN). In [16], Solodov showed that if \(\sum_{k=1}^{\infty}\lambda _{k}=\infty \) and f is again the indicator function of a closed and convex set, then there is no need to find the optimal solution of problem \(Q_{\lambda _{k}}\); it is sufficient to approximate it by performing a single projected gradient step on \(Q_{\lambda _{k}}\). In [2], a first-order method for solving problem (MLN), called the minimal norm gradient method, was proposed, for which the authors proved an \(O(\frac{1}{\sqrt{k}})\) rate of convergence in terms of the inner objective function values. The minimal norm gradient method is based on the cutting plane idea: at each iteration of the algorithm two specific half-spaces are constructed, and then a minimization of the outer objective function ω over the intersection of these half-spaces is solved.
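For intuition, here is a minimal sketch of the Tikhonov approach in a smooth special case (our illustrative choices: \(f(x)=\frac{1}{2}\Vert Bx-b \Vert ^{2}\) with an underdetermined matrix B, \(A=0\), and \(\omega (x)=\frac{1}{2}\|x\|^{2}\)). The solution of \(Q_{\lambda}\) is then the ridge estimate, and it approaches the minimum-norm least-squares solution as \(\lambda \downarrow 0\).

```python
import numpy as np

# Illustrative instance (our choice): core problem min_x (1/2)||Bx - b||^2 (A = 0),
# outer objective omega(x) = (1/2)||x||^2, regularized problem
# Q_lambda: min_x (1/2)||Bx - b||^2 + (lambda/2)||x||^2.
rng = np.random.default_rng(1)
B = rng.standard_normal((3, 6))     # underdetermined: infinitely many least-squares solutions
b = rng.standard_normal(3)

x_min_norm = np.linalg.pinv(B) @ b  # minimum-norm least-squares solution of the core problem
for lam in [1.0, 1e-2, 1e-4, 1e-6]:
    x_lam = np.linalg.solve(B.T @ B + lam * np.eye(6), B.T @ b)  # unique solution of Q_lambda
    print(f"lambda = {lam:.0e},  distance to min-norm solution = {np.linalg.norm(x_lam - x_min_norm):.3e}")
```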

In [21], for finding the minimum-norm solution to the standard monotone variational inequality problem (1), Zhou et al. proposed the following iterative method:

$$ x_{n+1}=P_{C}\bigl(x_{n}-\alpha _{n}x_{n}-\beta _{n}A(x_{n}) \bigr), $$

where \(P_{C}\) stands for the metric projection from H onto C. They proved that the proposed iterative sequences converge strongly to the minimum-norm solution of the variational inequality provided \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) satisfy certain conditions. In [8], when A is pseudo-monotone and Lipschitz continuous, Linh et al. introduced an inertial projection algorithm for finding the minimum-norm solutions of the variational inequality problem. In [14], Ogwo et al. introduced inertial methods for finding minimum-norm solutions of the split variational inequality problem.

Our interest in this paper is to study a regularized forward–backward splitting method for finding the minimum like-norm solution of the mixed variational inequality problem in infinite-dimensional real Hilbert spaces when the operator A is monotone and hemicontinuous.

2 Mathematical toolbox

Let \(f:H\rightarrow (-\infty ,+\infty ]\) be an extended real-valued function. The subdifferential of f is the set-valued operator \(\partial f:H\rightarrow 2^{H}\), the value of which at \(x\in H\) is

$$ \partial f(x)=\bigl\{ x^{*}\in H:\bigl\langle x^{*},y-x \bigr\rangle \leq f(y)-f(x), \forall y\in H\bigr\} . $$

Consider the Moreau envelope \(\operatorname{env}_{f}^{\alpha}(x)\) and the proximal mapping \(\operatorname{prox}_{f}^{\alpha}(x)\) defined by

$$\begin{aligned}& \operatorname{env}_{f}^{\alpha}(x)=\inf _{y\in H}\biggl\{ \alpha f(y)+\frac{1}{2} \Vert x-y \Vert ^{2}\biggr\} , \\& \operatorname{prox}_{f}^{\alpha}(x) = \operatorname*{argmin}_{y\in H} \biggl\{ \alpha f(y)+ \frac{1}{2} \Vert x-y \Vert ^{2} \biggr\} . \end{aligned}$$
(3)

The operator \(\operatorname{prox}_{f}^{\alpha}\) is called the proximity operator. For every \(x\in H\), the infimum in (3) is achieved at a unique point \(\operatorname{prox}_{f}^{\alpha}(x)\) that is characterized by the inclusion

$$ x-\operatorname{prox}_{f}^{\alpha}(x)\in \partial (\alpha f) \bigl(\operatorname{prox}_{f}^{ \alpha}(x)\bigr). $$
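As a simple illustration (a standard example, not used in the sequel), take \(H=\mathbb{R}\) and \(f(y)=|y|\); then

$$ \operatorname{prox}_{f}^{\alpha}(x)= \begin{cases} x-\alpha ,& x>\alpha , \\ 0,& \vert x \vert \leq \alpha , \\ x+\alpha ,& x< -\alpha , \end{cases} $$

and the inclusion above is easily verified: for \(|x|\leq \alpha \) we have \(x-\operatorname{prox}_{f}^{\alpha}(x)=x\in [-\alpha ,\alpha ]=\partial (\alpha f)(0)\), while for \(x>\alpha \) we have \(x-\operatorname{prox}_{f}^{\alpha}(x)=\alpha \in \partial (\alpha f)(x-\alpha )=\{\alpha \}\), and symmetrically for \(x<-\alpha \).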

The proximity operator possesses several important properties; three of them will be useful in our analysis and are thus recalled here.

  1. (i)

    Variational inequality:

    $$ \bigl\langle x-\operatorname{prox}_{f}^{\alpha}(x),y- \operatorname{prox}_{f}^{\alpha}(x) \bigr\rangle \leq \alpha f(y)-\alpha f\bigl(\operatorname{prox}_{f}^{\alpha}(x)\bigr),\quad \forall x,y\in H. $$
  2. (ii)

    Nonexpansive:

    $$ \Vert x-y \Vert \geq \bigl\Vert \operatorname{prox}_{f}^{\alpha}(x)- \operatorname{prox}_{f}^{\alpha}(y) \bigr\Vert , \quad \forall x,y \in H. $$
  3. (iii)

    Firmly nonexpansive:

    $$ \bigl\langle x-y,\operatorname{prox}_{f}^{\alpha}(x)- \operatorname{prox}_{f}^{\alpha}(y) \bigr\rangle \geq \bigl\Vert \operatorname{prox}_{f}^{\alpha}(x)- \operatorname{prox}_{f}^{\alpha}(y) \bigr\Vert ^{2} ,\quad \forall x,y\in H. $$

Definition 2.1

A strictly convex and Gâteaux differentiable function \(h:H\rightarrow R \) is said to be strongly convex with parameter \(t>0\) if

$$ h(x)-h(y)-\bigl\langle \nabla h(y), x-y \bigr\rangle \geq \frac{t}{2} \Vert x-y \Vert ^{2},\quad \forall x,y\in H. $$

Definition 2.2

A mapping \(T:H\rightarrow H \) is called monotone if

$$ \bigl\langle T(x)-T(y),x-y\bigr\rangle \geq 0,\quad \forall x,y\in H. $$

Definition 2.3

A mapping \(T:H\rightarrow H\) is called strongly monotone if there exists \(t>0\) such that

$$ \bigl\langle T(x)-T(y),x-y\bigr\rangle \geq t \Vert x-y \Vert ^{2}, \quad \forall x,y\in H. $$

If a strictly convex and Gâteaux differentiable function \(h:H\rightarrow R \) is strongly convex with parameter \(t>0\), then ∇h is strongly monotone with parameter \(t>0\).
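Indeed, writing the inequality of Definition 2.1 at the pairs \((x,y)\) and \((y,x)\) and adding the two inequalities gives

$$ \bigl\langle \nabla h(x)-\nabla h(y),x-y\bigr\rangle \geq t \Vert x-y \Vert ^{2},\quad \forall x,y\in H. $$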

Definition 2.4

A mapping \(T:H\rightarrow H\) is called Lipschitz continuous if there exists \(L>0\) such that

$$ \bigl\Vert T(x)-T(y) \bigr\Vert \leq L \Vert x-y \Vert ,\quad \forall x,y\in H. $$

If a mapping \(T:H\rightarrow H\) is strongly monotone with parameter \(t>0\) and Lipschitz continuous with constant L, then \(L\geq t\).

Remark 2.1

As a matter of fact, it is known that:

  1. (i)

If ∇ω is strongly monotone with constant t and A is monotone, then \(A+\alpha \nabla \omega \) is strongly monotone with constant \(\alpha t\).

  2. (ii)

If ∇ω is Lipschitz continuous with constant \(L_{\omega}\) and A is Lipschitz continuous with constant \(L_{A}\), then \(A+\alpha \nabla \omega \) is also Lipschitz continuous with constant \(L_{A}+\alpha L_{\omega}\).

Definition 2.5

[1] A function f is called lower semicontinuous at the point \(x_{0}\in {\mathrm{dom}} f\) if for any sequence \(x_{n}\in {\mathrm{dom}} f\) such that \(x_{n}\rightarrow x_{0}\) there holds the inequality

$$ f(x_{0})\leq \liminf_{n\rightarrow \infty} f(x_{n}).$$
(4)

If inequality (4) holds whenever the convergence of \(\{x_{n}\}\) to \(x_{0}\) is weak, then the function f is called weakly lower semicontinuous at \(x_{0}\).

Lemma 2.1

[1] Let f be a convex and lower semicontinuous function. Then it is weakly lower semicontinuous.

Definition 2.6

[21] A mapping T is said to be hemicontinuous if convergence of a sequence \(\{x_{n}\}\) to \(x_{0}\in H\) along a line implies \(T(x_{n})\rightharpoonup T(x_{0})\), i.e., \(T(x_{n})=T(x_{0}+t_{n}x)\rightharpoonup T(x_{0})\) as \(t_{n}\rightarrow 0\) for all \(x\in H\).

Lemma 2.2

[21] Let \(\{\alpha _{n}\}\) be a sequence of nonnegative real numbers satisfying

$$ \alpha _{n+1}\leq (1-\gamma _{n})\alpha _{n}+ \gamma _{n}\beta _{n},\quad n \geq 0, $$

where \(\{\gamma _{n}\}\subseteq (0,1)\) and \(\{\beta _{n}\}\) satisfy

  1. (i)

    \(\sum_{n=0}^{\infty} \gamma _{n}=\infty \);

  2. (ii)

    either \(\limsup_{n\rightarrow \infty} \beta _{n}\leq 0\) or \(\sum_{n=0}^{\infty} |\gamma _{n}\beta _{n}|<\infty \).

Then \(\lim_{n\rightarrow \infty}\alpha _{n}=0\).

Lemma 2.3

[10] Let \(A:H\rightarrow H\) be a hemicontinuous monotone operator. Assume that the following coercivity condition holds: there exists \(v\in \operatorname{dom}f\) such that

$$ \lim_{ \Vert u \Vert \rightarrow +\infty} \frac{f(u)-f(v)+\langle A(u),u-v\rangle}{ \Vert u \Vert }= +\infty . $$

Then \(\operatorname{Sol}(\mathrm{MVI})\) is a nonempty set.

3 Main result

Before describing the algorithms, we require the following notation for the optimal solution of the problem consisting of minimizing ω over a given closed and convex set C:

$$ \Omega (C)\equiv \operatorname*{argmin}_{x\in C}\omega (x).$$
(5)

By the optimality condition in problem (5), it follows that

$$ \bar{x}=\Omega (C)\quad \Leftrightarrow \quad \bigl\langle \nabla \omega ( \bar{x}),x- \bar{x}\bigr\rangle \geq 0,\quad \forall x\in C.$$
(6)

Lemma 3.1

Let \(A:H\rightarrow H\) be a hemicontinuous monotone operator. Then, for a fixed element \(\bar{x}\in H\), the following mixed variational inequalities are equivalent:

  1. (i)

    \(f(x)-f(\bar{x})+\langle A(x),x-\bar{x}\rangle \geq 0\), \(\forall x\in H \).

  2. (ii)

    \(f(x)-f(\bar{x})+\langle A(\bar{x}),x-\bar{x}\rangle \geq 0\), \(\forall x \in H \).

Proof

\({\mathrm{(ii)}}\Rightarrow{\mathrm{(i)}}\) Since A is a monotone operator, then for any \(x\in H\), we have

$$ \bigl\langle A(x),x-\bar{x}\bigr\rangle \geq \bigl\langle A(\bar{x}),x-\bar{x} \bigr\rangle . $$

Hence,

$$ f(x)-f(\bar{x})+\bigl\langle A(x),x-\bar{x}\bigr\rangle \geq f(x)-f(\bar{x})+ \bigl\langle A(\bar{x}),x-\bar{x}\bigr\rangle \geq 0. $$

\({\mathrm{(i)}}\Rightarrow{\mathrm{(ii)}}\) Let \(x_{t}=\bar{x}+t(x-\bar{x})\), \(0< t<1\). Then we have

$$ f(x_{t})-f(\bar{x})+\bigl\langle A(x_{t}),x_{t}- \bar{x}\bigr\rangle \geq 0 $$

and

$$\begin{aligned} &f(x_{t})-f(\bar{x})+\bigl\langle A(x_{t}),x_{t}- \bar{x}\bigr\rangle \\ &\quad \leq tf(x)+(1-t)f(\bar{x})-f(\bar{x})+\bigl\langle A\bigl(\bar{x}+t(x-\bar{x}) \bigr), \bar{x}+t(x-\bar{x})-\bar{x}\bigr\rangle \\ &\quad = tf(x)-tf(\bar{x})+\bigl\langle A\bigl(\bar{x}+t(x-\bar{x})\bigr),t(x- \bar{x}) \bigr\rangle . \end{aligned}$$

Hence, dividing by \(t>0\) and letting \(t\rightarrow 0\), by the hemicontinuity of A we obtain

$$ 0\leq \lim_{t\rightarrow 0}f(x)-f(\bar{x})+\bigl\langle A\bigl( \bar{x}+t(x- \bar{x})\bigr),x-\bar{x}\bigr\rangle =f(x)-f(\bar{x})+\bigl\langle A( \bar{x}),x- \bar{x}\bigr\rangle . $$

This completes the proof. □

Lemma 3.2

Let \(A:H\rightarrow H\) be a hemicontinuous monotone operator. Then \(\operatorname{Sol}(\mathrm{MVI})\) is a closed convex set.

Proof

Let \(\bar{x},\hat{x}\in \operatorname{Sol}(\mathrm{MVI})\) and \(0< t<1\). Then, for any \(x\in H \), by Lemma 3.1 we have

$$ tf(x)-tf(\bar{x})+\bigl\langle A(x),tx-t\bar{x}\bigr\rangle \geq 0 $$

and

$$ (1-t)f(x)-(1-t)f(\hat{x})+\bigl\langle A(x),(1-t)x-(1-t)\hat{x}\bigr\rangle \geq 0. $$

Adding these two inequalities and using the convexity of f, we get

$$ f(x)-f\bigl(t\bar{x}+(1-t)\hat{x}\bigr)+\bigl\langle A(x),x-\bigl(t\bar{x}+(1-t) \hat{x}\bigr) \bigr\rangle \geq 0,\quad \forall x\in H. $$

Then, by Lemma 3.1, we have \(t\bar{x}+(1-t)\hat{x}\in \operatorname{Sol}(\mathrm{MVI})\), that is, \(\operatorname{Sol}(\mathrm{MVI})\) is a convex set.

Let \(\{x_{n}\}\subseteq\operatorname{Sol}(\mathrm{MVI})\) and \(x_{n}\rightarrow \bar{x}\). Then, for any \(x\in H \), by Lemma 3.1, we have

$$ f(x)-f(x_{n})+\bigl\langle A(x),x-x_{n}\bigr\rangle \geq 0. $$

By the lower semicontinuity of f, we have

$$ f(\bar{x})\leq \liminf_{n\rightarrow \infty} f(x_{n}). $$

Letting \(n\rightarrow \infty \), we have

$$ f(x)-f(\bar{x})+\bigl\langle A(x),x-\bar{x}\bigr\rangle \geq 0 ,\quad \forall x\in H. $$

Hence, by Lemma 3.1, \(\bar{x}\in \operatorname{Sol}(\mathrm{MVI})\), that is, \(\operatorname{Sol}(\mathrm{MVI})\) is a closed set. □

In this section, we use the idea of regularization to attack the general case. For a given \(\gamma >0\), we consider the following regularization mixed variational inequality problem: find \(\bar{x}\in H\) such that

$$ f(x)-f(\bar{x})+\bigl\langle \gamma \nabla \omega (\bar{x})+ A( \bar{x}),x- \bar{x}\bigr\rangle \geq 0,\quad \forall x\in H, $$
(7)

where \(\gamma >0\) is the regularization parameter.

Lemma 3.3

Let \(A:H\rightarrow H\) be a hemicontinuous monotone operator. Then the regularization mixed variational inequality problem (7) has a unique solution.

Proof

For any \(v\in \operatorname{dom}f\) and \(v^{*}\in \partial f(v)\), by Remark 2.1, we have

$$\begin{aligned} &\frac{f(u)-f(v)+\langle \gamma \nabla \omega (u)+ A(u),u-v\rangle}{ \Vert u \Vert } \\ &\quad \geq \frac{\langle v^{*},u-v\rangle +\langle (\gamma \nabla \omega + A)(u)-(\gamma \nabla \omega + A)(v),u-v\rangle +\langle (\gamma \nabla \omega + A)(v),u-v\rangle}{ \Vert u \Vert } \\ &\quad \geq \frac{\langle v^{*},u-v\rangle +t\gamma \Vert u-v \Vert ^{2}+\langle (\gamma \nabla \omega + A)(v),u-v\rangle}{ \Vert u \Vert }. \end{aligned}$$

Hence,

$$ \lim_{ \Vert u \Vert \rightarrow +\infty} \frac{f(u)-f(v)+\langle \gamma \nabla \omega (u)+ A(u),u-v\rangle}{ \Vert u \Vert }=+ \infty . $$

Then, by Lemma 2.3, the set of solutions to the regularization mixed variational inequality problem (7) is nonempty. Next, we show that problem (7) has a unique solution.

Assume that \(\bar{x}\) and \(\hat{x}\) are solutions of the regularization mixed variational inequality problem (7). Then we have

$$ f(\hat{x})-f(\bar{x})+\bigl\langle \gamma \nabla \omega ( \bar{x})+ A( \bar{x}),\hat{x}-\bar{x}\bigr\rangle \geq 0 $$
(8)

and

$$ f(\bar{x})-f(\hat{x})+\bigl\langle \gamma \nabla \omega ( \hat{x})+ A( \hat{x}),\bar{x}-\hat{x}\bigr\rangle \geq 0. $$
(9)

Combining (8) and (9), we get

$$ \bigl\langle \bigl( \gamma \nabla \omega (\hat{x})+ A(\hat{x})\bigr) -\bigl( \gamma \nabla \omega (\bar{x})+ A(\bar{x})\bigr),\bar{x}-\hat{x}\bigr\rangle \geq 0. $$

Hence, by Remark 2.1, we have

$$ t\gamma \Vert \hat{x}-\bar{x} \Vert ^{2}\leq \bigl\langle \bigl( \gamma \nabla \omega ( \hat{x})+ A(\hat{x})\bigr) -\bigl(\gamma \nabla \omega ( \bar{x})+ A(\bar{x})\bigr), \hat{x}-\bar{x}\bigr\rangle \leq 0. $$

Therefore, \(\hat{x}=\bar{x}\). This completes the proof. □

Remark 3.1

For any \(\gamma >0\) and \(\beta >0\), we have

$$\begin{aligned} &\bar{x} \text{ is a solution of problem (7)} \\ &\quad \Leftrightarrow \quad f(x)-f(\bar{x})+\bigl\langle \gamma \nabla \omega ( \bar{x})+ A(\bar{x}),x-\bar{x}\bigr\rangle \geq 0,\quad \forall x\in H \\ &\quad \Leftrightarrow \quad {-}\gamma \nabla \omega (\bar{x}) -A(\bar{x})\in \partial f( \bar{x}) \\ &\quad \Leftrightarrow\quad \bar{x}- \gamma \beta \nabla \omega (\bar{x})- \beta A( \bar{x})-\bar{x}\in \partial (\beta f) (\bar{x}) \\ &\quad \Leftrightarrow\quad \bar{x}=\operatorname{prox}_{f}^{\beta} \bigl(\bar{x}-\gamma \beta \nabla \omega (\bar{x})-\beta A(\bar{x})\bigr). \end{aligned}$$

In this section, we will introduce two iterative methods (one implicit and the other explicit). First, by Remark 3.1, we introduce the implicit one:

$$ y_{n}=\operatorname{prox}_{f}^{\beta _{n}} \bigl(y_{n}-\alpha _{n}\nabla \omega (y_{n})- \beta _{n}A(y_{n})\bigr), $$
(10)

where \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) are two sequences in \((0,1)\) that satisfy the following condition:

$$ \frac{\alpha _{n}}{\beta _{n}}\rightarrow 0,\quad \text{as } n\rightarrow + \infty . $$

Theorem 3.1

Let A be a hemicontinuous monotone operator. Then the sequence \(\{y_{n}\}\) generated by implicit method (10) converges to \(\bar{x}=\Omega (\operatorname{Sol}(\mathrm{MVI}))\), which is the minimum like-norm solution of (MVI).

Proof

Put \(z_{n}=y_{n}-\alpha _{n}\nabla \omega (y_{n})-\beta _{n}A(y_{n})\). For any \(p\in\operatorname{Sol}(\mathrm{MVI})\), we have

$$ \Vert y_{n}-p \Vert ^{2}=\langle y_{n}-z_{n},y_{n}-p\rangle +\langle z_{n}-p,y_{n}-p \rangle . $$
(11)

By using (10) and (11), we get

$$ \langle y_{n}-z_{n},y_{n}-p\rangle =\bigl\langle \operatorname{prox}_{f}^{\beta _{n}}(z_{n})-z_{n},{ \mathrm{prox}}_{f}^{\beta _{n}}(z_{n})-p\bigr\rangle . $$

It follows from the property of \(\operatorname{prox}_{f}^{\beta _{n}}\) that

$$ \bigl\langle z_{n}-\operatorname{prox}_{f}^{\beta _{n}}(z_{n}),p- \operatorname{prox}_{f}^{ \beta _{n}}(z_{n})\bigr\rangle \leq \beta _{n} f(p)-\beta _{n} f\bigl( \operatorname{prox}_{f}^{ \beta _{n}}(z_{n})\bigr). $$
(12)

By (11) and (12), we have

$$\begin{aligned} & \Vert y_{n}-p \Vert ^{2} \\ &\quad = \langle y_{n}-z_{n},y_{n}-p\rangle + \langle z_{n}-p,y_{n}-p \rangle \\ &\quad \leq \beta _{n} f(p)-\beta _{n} f\bigl( \operatorname{prox}_{f}^{\beta _{n}}(z_{n})\bigr)+ \langle z_{n}-p,y_{n}-p\rangle \\ &\quad = \beta _{n} f(p)-\beta _{n} f\bigl( \operatorname{prox}_{f}^{\beta _{n}}(z_{n})\bigr)+ \bigl\langle y_{n}-\alpha _{n}\nabla \omega (y_{n})-\beta _{n}A(y_{n})-p,y_{n}-p \bigr\rangle \\ &\quad \leq \beta _{n} f(p)-\beta _{n} f\bigl( \operatorname{prox}_{f}^{\beta _{n}}(z_{n})\bigr)+ \Vert y_{n}-p \Vert ^{2}-\bigl\langle \alpha _{n}\nabla \omega (y_{n})+\beta _{n}A(y_{n}),y_{n}-p \bigr\rangle , \end{aligned}$$

which simplifies to

$$ \beta _{n} f(p)-\beta _{n} f\bigl(\operatorname{prox}_{f}^{\beta _{n}}(z_{n}) \bigr)\geq \bigl\langle \alpha _{n}\nabla \omega (y_{n})+ \beta _{n}A(y_{n}),y_{n}-p \bigr\rangle , $$

and then

$$ f(p)- f\bigl(\operatorname{prox}_{f}^{\beta _{n}}(z_{n}) \bigr)\geq \biggl\langle \frac{\alpha _{n}}{\beta _{n}}\nabla \omega (y_{n})+A(y_{n}),y_{n}-p \biggr\rangle . $$

Setting \(\gamma _{n}=\frac{\alpha _{n}}{\beta _{n}}\), we have

$$\begin{aligned} &f(p)- f\bigl(\operatorname{prox}_{f}^{\beta _{n}}(z_{n}) \bigr) \\ &\quad \geq \bigl\langle \gamma _{n}\nabla \omega (y_{n})+A(y_{n}),y_{n}-p \bigr\rangle \\ &\quad = \bigl\langle \gamma _{n}\nabla \omega (y_{n}),y_{n}-p \bigr\rangle +\bigl\langle A(y_{n})-A(p),y_{n}-p \bigr\rangle +\bigl\langle A(p),y_{n}-p\bigr\rangle . \end{aligned}$$

Since A is a monotone operator and \(p\in\operatorname{Sol}(\mathrm{MVI})\), we know

$$ \bigl\langle A(y_{n})-A(p),y_{n}-p\bigr\rangle \geq 0 $$

and

$$ f\bigl(\operatorname{prox}_{f}^{\beta _{n}}(z_{n}) \bigr)-f(p)+\bigl\langle A(p),y_{n}-p \bigr\rangle =f(y_{n})-f(p)+\bigl\langle A(p),y_{n}-p\bigr\rangle \geq 0. $$

Combining the above three relations yields

$$ \bigl\langle \nabla \omega (y_{n}),y_{n}-p \bigr\rangle \leq 0. $$
(13)

Then we have

$$ \bigl\langle \nabla \omega (y_{n})-\nabla \omega (p)+\nabla \omega (p),y_{n}-p \bigr\rangle =\bigl\langle \nabla \omega (y_{n}),y_{n}-p\bigr\rangle \leq 0, $$

from which it turns out that

$$ \bigl\langle \nabla \omega (y_{n})-\nabla \omega (p),y_{n}-p\bigr\rangle \leq \bigl\langle -\nabla \omega (p),y_{n}-p\bigr\rangle . $$

Hence, by the strong monotonicity of ω and the Cauchy–Schwarz inequality, we have

$$ t \Vert y_{n}-p \Vert ^{2}\leq \bigl\langle \nabla \omega (y_{n})-\nabla \omega (p),y_{n}-p \bigr\rangle \leq \bigl\langle -\nabla \omega (p),y_{n}-p\bigr\rangle \leq \bigl\Vert \nabla \omega (p) \bigr\Vert \Vert y_{n}-p \Vert . $$
(14)

Therefore, \(\{y_{n}\}\) is bounded. Then we know that \(\{y_{n}\}\) has a subsequence \(\{y_{n,k}\}\) such that \(y_{n,k}\rightharpoonup \bar{x}\) as \(k\rightarrow \infty \). Furthermore, without loss of generality, we may assume that \(\{y_{n}\}\) converges weakly to a point \(\bar{x}\in H\). We show that \(\bar{x}\) is a solution to (MVI). For any \(x\in H\), by Remark 2.1, we have

$$\begin{aligned} &\bigl\langle \gamma _{n}\nabla \omega (x)+A(x),x-y_{n}\bigr\rangle -\bigl\langle \gamma _{n} \nabla \omega (y_{n})+A(y_{n}),x-y_{n}\bigr\rangle \\ &\quad = \bigl\langle (\gamma _{n}\nabla \omega +A) (x)-(\gamma _{n}\nabla \omega +A) (y_{n}),x-y_{n}\bigr\rangle \\ &\quad \geq t\gamma _{n} \Vert x-y_{n} \Vert ^{2} \\ &\quad \geq 0. \end{aligned}$$
(15)

Combining (15) and (10), we get

$$\begin{aligned} &f(x)-f(y_{n})+\bigl\langle \gamma _{n} \nabla \omega (x)+A(x),x-y_{n} \bigr\rangle \\ &\quad \geq f(x)-f(y_{n})+\bigl\langle \gamma _{n}\nabla \omega (y_{n})+A(y_{n}),x-y_{n} \bigr\rangle \\ &\quad \geq 0. \end{aligned}$$
(16)

Taking the limit as \(n\rightarrow \infty \) in (16), using \(\gamma _{n}\rightarrow 0\), the weak convergence of \(\{y_{n}\}\), and the weak lower semicontinuity of f, yields

$$ f(x)-f(\bar{x})+\bigl\langle A(x),x-\bar{x}\bigr\rangle \geq 0, \quad \forall x\in H. $$

By Lemma 3.1, we get

$$ f(x)-f(\bar{x})+\bigl\langle A(\bar{x}),x-\bar{x}\bigr\rangle \geq 0,\quad \forall x \in H, $$

that is, \(\bar{x}\in\operatorname{Sol}(\mathrm{MVI})\). Therefore, we can substitute p by \(\bar{x}\) in (14) to obtain

$$ t \Vert y_{n}-\bar{x} \Vert ^{2} \leq \bigl\langle -\nabla \omega (\bar{x}),y_{n}- \bar{x}\bigr\rangle . $$
(17)

Since \(y_{n}\rightharpoonup \bar{x}\) as \(n\rightarrow \infty \), by (17) we get \(y_{n}\rightarrow \bar{x}\) as \(n\rightarrow \infty \). Moreover, from (13) we get

$$ \bigl\langle \nabla \omega (\bar{x}),\bar{x}-p\bigr\rangle \leq 0,\quad \forall p\in{ \mathrm{Sol}}(\mathrm{MVI}), $$

from which, in view of (6), we know that \(\bar{x}\) is the minimum like-norm solution of (MVI). This completes the proof. □

Now, we introduce an explicit method (regularization forward–backward splitting) and establish its strong convergence. From the implicit method, it is natural to consider the following iterative method, which generates a sequence \(\{x_{n}\}\) according to the recursion below.

Algorithm 3.1

Given \(x_{0}\in H\), for every \(n\in \mathbb{N}\), set

$$ x_{n+1}=(1-s_{n})x_{n}+s_{n} \operatorname{prox}_{f}^{\beta _{n}}\bigl(x_{n}- \alpha _{n}\nabla \omega (x_{n})-\beta _{n}A(x_{n})\bigr), $$
(18)

where \(\{s_{n}\}\), \(\{\alpha _{n}\}\), and \(\{\beta _{n}\}\) are three sequences in \((0,1)\) that satisfy the following conditions:

  1. (i)

    \(\alpha _{n}ts_{n}\leq 1\), \(0<\bar{s}<s_{n}\);

  2. (ii)

    \(\frac{\alpha _{n}}{\beta _{n}}\rightarrow 0\), \(\frac{\beta _{n}^{2}}{\alpha _{n}}\rightarrow 0\) as \(n\rightarrow \infty \);

  3. (iii)

\(\alpha _{n}\rightarrow 0\) as \(n\rightarrow \infty \), \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);

  4. (iv)

    \(\frac{|\alpha _{n}-\alpha _{n-1}|+|\beta _{n}-\beta _{n-1}|}{\alpha _{n}^{2}}\rightarrow 0\) as \(n\rightarrow \infty \).
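For concreteness, the following is a minimal sketch of recursion (18), in which \(\operatorname{prox}_{f}\), ∇ω, and A are supplied as callables; the step sequences shown are illustrative choices satisfying (ii)–(iv) (cf. Example 5.1 below), and the fixed iteration count is our assumption rather than a stopping rule coming from the analysis.

```python
import numpy as np

def regularized_forward_backward(prox_f, grad_omega, A, x0,
                                 alpha, beta, s, n_iter=500):
    """Recursion (18): x_{n+1} = (1 - s_n) * x_n
       + s_n * prox_f^{beta_n}(x_n - alpha_n * grad_omega(x_n) - beta_n * A(x_n)).

    prox_f(z, beta) must return prox_f^{beta}(z); alpha, beta, s map n >= 1 to the
    step sequences of Algorithm 3.1 (their admissibility is the caller's responsibility)."""
    x = np.asarray(x0, dtype=float).copy()
    for n in range(1, n_iter + 1):
        a_n, b_n, s_n = alpha(n), beta(n), s(n)
        z = x - a_n * grad_omega(x) - b_n * A(x)
        x = (1.0 - s_n) * x + s_n * prox_f(z, b_n)
    return x

# illustrative step sequences satisfying (ii)-(iv) (see Example 5.1)
alpha = lambda n: n ** (-2.0 / 3.0)
beta  = lambda n: n ** (-0.5)
s     = lambda n: 0.5
```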

Proposition 3.1

Let A be a hemicontinuous monotone operator. Let \(\{x_{n}\}\) be defined by Algorithm 3.1. Assume that \(\{\beta _{n}A(x_{n})\}\) is bounded. Then the iterative sequence \(\{x_{n}\}\) is bounded.

Proof

For any \(p\in \operatorname{Sol}(\mathrm{MVI}) \), noting that \(p=\operatorname{prox}_{f}^{\beta _{n}}(p-\beta _{n}A(p))\) and using the nonexpansiveness of \(\operatorname{prox}_{f}^{\beta _{n}}\), we know

$$\begin{aligned} & \Vert x_{n+1}-p \Vert ^{2} \\ &\quad = \bigl\Vert (1-s_{n})x_{n}+s_{n} \operatorname{prox}_{f}^{\beta _{n}}\bigl(x_{n}- \alpha _{n} \nabla \omega (x_{n})-\beta _{n}A(x_{n})\bigr) \\ &\quad \quad{} - (1-s_{n})p-s_{n}\operatorname{prox}_{f}^{\beta _{n}} \bigl(p-\beta _{n}A(p)\bigr) \bigr\Vert ^{2} \\ &\quad = \bigl\Vert (1-s_{n}) (x_{n}-p)+s_{n} \bigl(\operatorname{prox}_{f}^{\beta _{n}}\bigl(x_{n}- \alpha _{n}\nabla \omega (x_{n})-\beta _{n}A(x_{n})\bigr) \\ &\quad \quad{} - \operatorname{prox}_{f}^{\beta _{n}}\bigl(p-\beta _{n}A(p)\bigr)\bigr) \bigr\Vert ^{2} \\ &\quad \leq (1-s_{n}) \Vert x_{n}-p \Vert ^{2}+s_{n} \bigl\Vert \operatorname{prox}_{f}^{\beta _{n}} \bigl(x_{n}- \alpha _{n}\nabla \omega (x_{n})- \beta _{n}A(x_{n})\bigr)-\operatorname{prox}_{f}^{ \beta _{n}} \bigl(p-\beta _{n}A(p)\bigr) \bigr\Vert ^{2} \\ &\quad \leq (1-s_{n}) \Vert x_{n}-p \Vert ^{2}+s_{n} \bigl\Vert \bigl(x_{n}-\alpha _{n}\nabla \omega (x_{n})-\beta _{n}A(x_{n}) \bigr)-\bigl(p-\beta _{n}A(p)\bigr) \bigr\Vert ^{2} \\ &\quad = (1-s_{n}) \Vert x_{n}-p \Vert ^{2} \\ &\quad \quad{} + s_{n} \bigl\Vert (x_{n}-p)-\alpha _{n}\bigl(\nabla \omega (x_{n})-\nabla \omega (p)\bigr)- \alpha _{n}\nabla \omega (p)-\beta _{n} \bigl(A(x_{n})-A(p)\bigr) \bigr\Vert ^{2} \\ &\quad = \Vert x_{n}-p \Vert ^{2}+\alpha _{n}^{2}s_{n} \bigl\Vert \nabla \omega (x_{n})- \nabla \omega (p) \bigr\Vert ^{2}+s_{n} \bigl\Vert \alpha _{n}\nabla \omega (p)+\beta _{n} \bigl(A(x_{n})-A(p)\bigr) \bigr\Vert ^{2} \\ &\quad \quad{} - 2\alpha _{n}s_{n}\bigl\langle \nabla \omega (x_{n})-\nabla \omega (p),x_{n}-p \bigr\rangle -2s_{n}\bigl\langle \alpha _{n}\nabla \omega (p)+ \beta _{n}\bigl(A(x_{n})-A(p)\bigr),x_{n}-p \bigr\rangle \\ &\quad \quad{} + 2\alpha _{n}s_{n}\bigl\langle \alpha _{n}\nabla \omega (p)+ \beta _{n}\bigl(A(x_{n})-A(p) \bigr), \nabla \omega (x_{n})-\nabla \omega (p)\bigr\rangle . \end{aligned}$$

Then, by the monotonicity of A, strong monotonicity and Lipschitz continuity of ω, we have

$$\begin{aligned} & \Vert x_{n+1}-p \Vert ^{2} \\ &\quad \leq \Vert x_{n}-p \Vert ^{2}+\alpha _{n}^{2}L_{\omega}^{2}s_{n} \Vert x_{n}-p \Vert ^{2}+s_{n} \bigl\Vert \alpha _{n}\nabla \omega (p)+\beta _{n} \bigl(A(x_{n})-A(p)\bigr) \bigr\Vert ^{2} \\ &\quad \quad{} - 2\alpha _{n}ts_{n} \Vert x_{n}-p \Vert ^{2}-2s_{n}\bigl\langle \alpha _{n}\nabla \omega (p),x_{n}-p\bigr\rangle \\ &\quad \quad{} + 2\alpha _{n}s_{n}\bigl\langle \alpha _{n}\nabla \omega (p)+ \beta _{n}\bigl(A(x_{n})-A(p) \bigr), \nabla \omega (x_{n})-\nabla \omega (p)\bigr\rangle \\ &\quad \leq \bigl(1+\alpha _{n}^{2}L_{\omega}^{2}s_{n}-2 \alpha _{n}ts_{n}\bigr) \Vert x_{n}-p \Vert ^{2}+s_{n} \bigl\Vert \alpha _{n} \nabla \omega (p)+\beta _{n}\bigl(A(x_{n})-A(p)\bigr) \bigr\Vert ^{2} \\ &\quad \quad{} + 2\alpha _{n}s_{n} \bigl\Vert \nabla \omega (p) \bigr\Vert \Vert x_{n}-p \Vert +2\alpha _{n}^{2}L_{ \omega}s_{n} \bigl\Vert \nabla \omega (p) \bigr\Vert \Vert x_{n}-p \Vert \\ &\quad \quad{} + 2\alpha _{n}L_{\omega} \beta _{n}s_{n} \bigl\Vert A(x_{n})-A(p) \bigr\Vert \Vert x_{n}-p \Vert . \end{aligned}$$
(19)

Suppose that \(\{x_{n}\}\) is unbounded. Then there exists a subsequence \(\{x_{(n,k)+1}\}\subseteq \{x_{n}\}\) such that

$$ \Vert x_{(n,k)+1}-p \Vert \geq \max \bigl\{ \Vert x_{1}-p \Vert , \Vert x_{2}-p \Vert ,\dots , \Vert x_{n,k}-p \Vert \bigr\} . $$

Hence,

$$ \Vert x_{(n,k)+1}-p \Vert \rightarrow +\infty . $$

Then, by (19) and the boundedness of \(\{\beta _{n}A(x_{n})\}\), we have

$$ \Vert x_{n,k}-p \Vert \rightarrow +\infty . $$

Using inequality (19) again, we get

$$\begin{aligned} & \Vert x_{n,k}-p \Vert ^{2} \\ &\quad \leq \bigl(1+\alpha _{n,k}^{2}L_{\omega}^{2}s_{n,k}-2 \alpha _{n,k}ts_{n,k}\bigr) \Vert x_{n,k}-p \Vert ^{2}+s_{n,k} \bigl\Vert \alpha _{n,k} \nabla \omega (p)+\beta _{n,k}\bigl(A(x_{n,k})-A(p)\bigr) \bigr\Vert ^{2} \\ &\quad \quad{} + 2\alpha _{n,k}s_{n,k} \bigl\Vert \nabla \omega (p) \bigr\Vert \Vert x_{n,k}-p \Vert +2\alpha _{n,k}^{2}L_{ \omega}s_{n,k} \bigl\Vert \nabla \omega (p) \bigr\Vert \Vert x_{n,k}-p \Vert \\ &\quad \quad{} + 2\alpha _{n,k}L_{\omega} \beta _{n,k}s_{n,k} \bigl\Vert A(x_{n,k})-A(p) \bigr\Vert \Vert x_{n,k}-p \Vert . \end{aligned}$$

Then we have

$$\begin{aligned} &\bigl(2t-\alpha _{n,k}L_{\omega}^{2} \bigr) \Vert x_{n,k}-p \Vert ^{2} \\ &\quad \leq \alpha _{n,k} \biggl\Vert \nabla \omega (p)+ \frac{\beta _{n,k}}{\alpha _{n,k}}\bigl(A(x_{n,k})-A(p)\bigr) \biggr\Vert ^{2}+2L_{\omega} \beta _{n,k} \bigl\Vert A(x_{n,k})-A(p) \bigr\Vert \Vert x_{n,k}-p \Vert \\ &\quad \quad{} + 2 \bigl\Vert \nabla \omega (p) \bigr\Vert \Vert x_{n,k}-p \Vert +2\alpha _{n,k}L_{\omega} \bigl\Vert \nabla \omega (p) \bigr\Vert \Vert x_{n,k}-p \Vert . \end{aligned}$$
(20)

Since \(\alpha _{n}\rightarrow 0\) and \(\{\beta _{n}A(x_{n})\}\) is bounded, it follows from (20) that \(\{x_{n,k}\}\) is bounded. This contradicts \(\|x_{n,k}-p\|\rightarrow +\infty \). Hence, the iterative sequence \(\{x_{n}\}\) is bounded. □

Theorem 3.2

Let A be a hemicontinuous monotone operator. Let \(\{x_{n}\}\) be defined by Algorithm 3.1. Assume that both \(\{\beta _{n}A(x_{n})\}\) and \(\{A(y_{n})\}\) are bounded. Then the iterative sequence \(\{x_{n}\}\) converges to \(\bar{x}=\Omega (\operatorname{Sol}(\mathrm{MVI}))\), which is the minimum like-norm solution of (MVI).

Proof

By Theorem 3.1, we know that \(\{y_{n}\}\) converges strongly to \(\bar{x}\). Therefore, it is sufficient to show that \(x_{n+1}-y_{n}\rightarrow 0\) as \(n\rightarrow \infty \). By using (10) and Algorithm 3.1, we get

$$\begin{aligned} & \Vert x_{n+1}-y_{n} \Vert ^{2} \\ &\quad = \bigl\Vert (1-s_{n})x_{n}+s_{n} \operatorname{prox}_{f}^{\beta _{n}}\bigl(x_{n}- \alpha _{n} \nabla \omega (x_{n})-\beta _{n}A(x_{n})\bigr) \\ &\quad \quad{} - (1-s_{n})y_{n}-s_{n} \operatorname{prox}_{f}^{\beta _{n}}\bigl(y_{n}- \alpha _{n} \nabla \omega (y_{n})-\beta _{n}A(y_{n})\bigr) \bigr\Vert ^{2} \\ &\quad = \bigl\Vert (1-s_{n}) (x_{n}-y_{n})+s_{n} \bigl(\operatorname{prox}_{f}^{\beta _{n}}\bigl(x_{n}- \alpha _{n}\nabla \omega (x_{n})-\beta _{n}A(x_{n})\bigr) \\ &\quad \quad{} - \operatorname{prox}_{f}^{\beta _{n}} \bigl(y_{n}-\alpha _{n}\nabla \omega (y_{n})- \beta _{n}A(y_{n})\bigr)\bigr) \bigr\Vert ^{2} \\ &\quad \leq (1-s_{n}) \Vert x_{n}-y_{n} \Vert ^{2}+s_{n} \bigl\Vert \operatorname{prox}_{f}^{\beta _{n}} \bigl(x_{n}- \alpha _{n}\nabla \omega (x_{n})- \beta _{n}A(x_{n})\bigr) \\ &\quad \quad{} - \operatorname{prox}_{f}^{\beta _{n}} \bigl(y_{n}-\alpha _{n}\nabla \omega (y_{n})- \beta _{n}A(y_{n})\bigr) \bigr\Vert ^{2} \\ &\quad \leq (1-s_{n}) \Vert x_{n}-y_{n} \Vert ^{2}+s_{n} \bigl\Vert \bigl(x_{n}-\alpha _{n}\nabla \omega (x_{n})-\beta _{n}A(x_{n}) \bigr) \\ &\quad \quad{} - \bigl(y_{n}-\alpha _{n}\nabla \omega (y_{n})-\beta _{n}A(y_{n})\bigr) \bigr\Vert ^{2} \\ &\quad = (1-s_{n}) \bigl\Vert (x_{n}-y_{n}) \bigr\Vert ^{2} \\ &\quad \quad{} + s_{n} \bigl\Vert (x_{n}-y_{n})- \alpha _{n}\bigl(\nabla \omega (x_{n})-\nabla \omega (y_{n})\bigr)-\beta _{n}\bigl(A(x_{n})-A(y_{n}) \bigr) \bigr\Vert ^{2} \\ &\quad = \Vert x_{n}-y_{n} \Vert ^{2}+\alpha _{n}^{2}s_{n} \bigl\Vert \nabla \omega (x_{n})- \nabla \omega (y_{n}) \bigr\Vert ^{2}+\beta _{n}^{2}s_{n} \bigl\Vert A(x_{n})-A(y_{n}) \bigr\Vert ^{2} \\ &\quad \quad{} - 2\alpha _{n}s_{n}\bigl\langle \nabla \omega (x_{n})-\nabla \omega (y_{n}),x_{n}-y_{n} \bigr\rangle -2\beta _{n}s_{n}\bigl\langle A(x_{n})-A(y_{n}),x_{n}-y_{n} \bigr\rangle \\ &\quad \quad{} + 2\alpha _{n}\beta _{n}s_{n}\bigl\langle A(x_{n})-A(y_{n}),\nabla \omega (x_{n})-\nabla \omega (y_{n})\bigr\rangle . \end{aligned}$$

Hence, by the monotonicity of A, we have

$$\begin{aligned} & \Vert x_{n+1}-y_{n} \Vert ^{2} \\ &\quad \leq \Vert x_{n}-y_{n} \Vert ^{2}+ \alpha _{n}^{2}s_{n} \bigl\Vert \nabla \omega (x_{n})- \nabla \omega (y_{n}) \bigr\Vert ^{2}-2\alpha _{n}s_{n}\bigl\langle \nabla \omega (x_{n})- \nabla \omega (y_{n}),x_{n}-y_{n} \bigr\rangle \\ &\quad \quad{} + \beta _{n}^{2}s_{n} \bigl\Vert A(x_{n})-A(y_{n}) \bigr\Vert ^{2}+2\alpha _{n}\beta _{n}s_{n} \bigl\langle A(x_{n})-A(y_{n}),\nabla \omega (x_{n})- \nabla \omega (y_{n}) \bigr\rangle , \end{aligned}$$

and then, by the strong monotonicity and Lipschitz continuity of ω, we have

$$\begin{aligned} & \Vert x_{n+1}-y_{n} \Vert ^{2} \\ &\quad \leq \Vert x_{n}-y_{n} \Vert ^{2}+ \alpha _{n}^{2}L_{\omega}^{2}s_{n} \Vert x_{n}-y_{n} \Vert ^{2}-2\alpha _{n}ts_{n} \Vert x_{n}-y_{n} \Vert ^{2} \\ &\quad \quad{} + \beta _{n}^{2}s_{n} \bigl\Vert A(x_{n})-A(y_{n}) \bigr\Vert ^{2}+2\alpha _{n}\beta _{n}s_{n} \bigl\langle A(x_{n})-A(y_{n}),\nabla \omega (x_{n})- \nabla \omega (y_{n}) \bigr\rangle \\ &\quad = \Vert x_{n}-y_{n} \Vert ^{2}+\alpha _{n}^{2}t^{2}s_{n}^{2} \Vert x_{n}-y_{n} \Vert ^{2}-2 \alpha _{n}ts_{n} \Vert x_{n}-y_{n} \Vert ^{2}+\alpha _{n}^{2} \bigl(L_{\omega}^{2}s_{n}-t^{2}s_{n}^{2} \bigr) \Vert x_{n}-y_{n} \Vert ^{2} \\ &\quad \quad{} + \beta _{n}^{2}s_{n} \bigl\Vert A(x_{n})-A(y_{n}) \bigr\Vert ^{2}+2\alpha _{n}\beta _{n}s_{n} \bigl\langle A(x_{n})-A(y_{n}),\nabla \omega (x_{n})- \nabla \omega (y_{n}) \bigr\rangle \\ &\quad = (1-\alpha _{n}ts_{n})^{2} \Vert x_{n}-y_{n} \Vert ^{2}+\alpha _{n}^{2}\bigl(L_{ \omega}^{2}s_{n}-t^{2}s_{n}^{2} \bigr) \Vert x_{n}-y_{n} \Vert ^{2} \\ &\quad \quad{} + \beta _{n}^{2}s_{n} \bigl\Vert A(x_{n})-A(y_{n}) \bigr\Vert ^{2}+2\alpha _{n}\beta _{n}s_{n} \bigl\langle A(x_{n})-A(y_{n}),\nabla \omega (x_{n})- \nabla \omega (y_{n}) \bigr\rangle \\ &\quad \leq (1-\alpha _{n}ts_{n})^{2}\bigl( \Vert x_{n}-y_{n-1} \Vert ^{2}+2 \Vert x_{n}-y_{n-1} \Vert \Vert y_{n}-y_{n-1} \Vert + \Vert y_{n}-y_{n-1} \Vert ^{2} \bigr) \\ &\quad \quad{} + \alpha _{n}^{2}\bigl(L_{\omega}^{2}s_{n}-t^{2}s_{n}^{2} \bigr) \Vert x_{n}-y_{n} \Vert ^{2}+ \beta _{n}^{2}s_{n} \bigl\Vert A(x_{n})-A(y_{n}) \bigr\Vert ^{2} \\ &\quad \quad{} + 2\alpha _{n}\beta _{n}s_{n}\bigl\langle A(x_{n})-A(y_{n}),\nabla \omega (x_{n})-\nabla \omega (y_{n})\bigr\rangle \\ &\quad \leq (1-\alpha _{n}ts_{n}) \Vert x_{n}-y_{n-1} \Vert ^{2}+2 \Vert x_{n}-y_{n-1} \Vert \Vert y_{n}-y_{n-1} \Vert + \Vert y_{n}-y_{n-1} \Vert ^{2} \\ &\quad \quad{} + \alpha _{n}^{2}\bigl(L_{\omega}^{2}s_{n}-t^{2}s_{n}^{2} \bigr) \Vert x_{n}-y_{n} \Vert ^{2}+ \beta _{n}^{2}s_{n} \bigl\Vert A(x_{n})-A(y_{n}) \bigr\Vert ^{2} \\ &\quad \quad{} + 2\alpha _{n}\beta _{n}s_{n} \bigl\Vert A(x_{n})-A(y_{n}) \bigr\Vert \bigl\Vert \nabla \omega (x_{n})- \nabla \omega (y_{n}) \bigr\Vert . \end{aligned}$$
(21)

Since \(\operatorname{prox}_{f}^{\beta _{n}}\) is a firmly nonexpansive mapping, we have

$$\begin{aligned} & \Vert y_{n}-y_{n-1} \Vert ^{2} \\ &\quad \leq \bigl\langle y_{n}-y_{n-1}, \bigl(y_{n}- \alpha _{n}\nabla \omega (y_{n})- \beta _{n}A(y_{n})\bigr)- \bigl(y_{n-1}-\alpha _{n-1}\nabla \omega (y_{n-1})- \beta _{n-1}A(y_{n-1}) \bigr)\bigr\rangle . \end{aligned}$$

Then we have that

$$\begin{aligned} & \Vert y_{n}-y_{n-1} \Vert ^{2} \\ &\quad \leq \bigl\langle y_{n}-y_{n-1}, y_{n}-y_{n-1}- \alpha _{n}\nabla \omega (y_{n}) +\alpha _{n} \nabla \omega (y_{n-1})-\alpha _{n}\nabla \omega (y_{n-1})+ \alpha _{n-1}\nabla \omega (y_{n-1}) \\ &\quad \quad{} - \beta _{n}A(y_{n})+\beta _{n}A(y_{n-1})- \beta _{n}A(y_{n-1})+ \beta _{n-1}A(y_{n-1}) \bigr\rangle \\ &\quad = \Vert y_{n}-y_{n-1} \Vert ^{2} \\ &\quad \quad{} - \alpha _{n}\bigl\langle y_{n}-y_{n-1}, \nabla \omega (y_{n})-\nabla \omega (y_{n-1})\bigr\rangle -(\alpha _{n}-\alpha _{n-1})\bigl\langle y_{n}-y_{n-1}, \nabla \omega (y_{n-1})\bigr\rangle \\ &\quad \quad{} - \beta _{n}\bigl\langle y_{n}-y_{n-1},A(y_{n})-A(y_{n-1}) \bigr\rangle -( \beta _{n}-\beta _{n-1})\bigl\langle y_{n}-y_{n-1},A(y_{n-1})\bigr\rangle \\ &\quad \leq \Vert y_{n}-y_{n-1} \Vert ^{2}+ \vert \beta _{n}-\beta _{n-1} \vert \Vert y_{n}-y_{n-1} \Vert \bigl\Vert A(y_{n-1}) \bigr\Vert \\ &\quad \quad{} - \alpha _{n}t \Vert y_{n}-y_{n-1} \Vert ^{2}+ \vert \alpha _{n}-\alpha _{n-1} \vert \Vert y_{n}-y_{n-1} \Vert \bigl\Vert \nabla \omega (y_{n-1}) \bigr\Vert , \end{aligned}$$

and so that

$$\begin{aligned} &\alpha _{n}t \Vert y_{n}-y_{n-1} \Vert ^{2} \\ & \quad \leq \vert \beta _{n}-\beta _{n-1} \vert \Vert y_{n}-y_{n-1} \Vert \bigl\Vert A(y_{n-1}) \bigr\Vert + \vert \alpha _{n}-\alpha _{n-1} \vert \Vert y_{n}-y_{n-1} \Vert \bigl\Vert \nabla \omega (y_{n-1}) \bigr\Vert . \end{aligned}$$

Hence, we get

$$ t \Vert y_{n}-y_{n-1} \Vert \leq \biggl\vert \frac{\beta _{n}-\beta _{n-1}}{\alpha _{n}} \biggr\vert \bigl\Vert A(y_{n-1}) \bigr\Vert + \biggl\vert \frac{\alpha _{n}-\alpha _{n-1}}{\alpha _{n}} \biggr\vert \bigl\Vert \nabla \omega (y_{n-1}) \bigr\Vert . $$

Since \(\{y_{n}\}\) and \(\{A(y_{n})\}\) are two bounded sequences, there exists \(M_{1}>0\) such that \(\sup \{\|\nabla \omega (y_{n-1})\| ,\|A(y_{n-1})\|\}\leq M_{1}\) for any \(n\geq 1\). Then we have

$$ t \Vert y_{n}-y_{n-1} \Vert \leq \biggl( \biggl\vert \frac{\beta _{n}-\beta _{n-1}}{\alpha _{n}} \biggr\vert + \biggl\vert \frac{\alpha _{n}-\alpha _{n-1}}{\alpha _{n}} \biggr\vert \biggr)M_{1}. $$
(22)

From conditions (ii) and (iv) we know that \(\frac{|\alpha _{n}-\alpha _{n-1}|+|\beta _{n}-\beta _{n-1}|}{\alpha _{n}}=o( \alpha _{n})\) and \(\beta _{n}^{2}=o(\alpha _{n})\). Then (21) turns out to be

$$\begin{aligned} & \Vert x_{n+1}-y_{n} \Vert ^{2} \\ &\quad \leq (1-\alpha _{n}ts_{n}) \Vert x_{n}-y_{n-1} \Vert ^{2}+2 \Vert x_{n}-y_{n-1} \Vert \Vert y_{n}-y_{n-1} \Vert + \Vert y_{n}-y_{n-1} \Vert ^{2} \\ &\quad \quad{} + \alpha _{n}^{2}\bigl(L_{\omega}^{2}s_{n}-t^{2}s_{n}^{2} \bigr) \Vert x_{n}-y_{n} \Vert ^{2}+ \beta _{n}^{2}s_{n} \bigl\Vert A(x_{n})-A(y_{n}) \bigr\Vert ^{2} \\ &\quad \quad{} + 2\alpha _{n}\beta _{n}s_{n} \bigl\Vert A(x_{n})-A(y_{n}) \bigr\Vert \bigl\Vert \nabla \omega (x_{n})- \nabla \omega (y_{n}) \bigr\Vert \\ &\quad \leq (1-\alpha _{n}ts_{n}) \Vert x_{n}-y_{n-1} \Vert ^{2} \\ &\quad \quad{} + \alpha _{n}ts_{n} \biggl( \frac{\alpha _{n}(L_{\omega}^{2}-t^{2}s_{n})}{t} \Vert x_{n}-y_{n} \Vert ^{2}+ \frac{\beta _{n}^{2}}{\alpha _{n}t} \bigl\Vert A(x_{n})-A(y_{n}) \bigr\Vert ^{2} \\ &\quad \quad{} + \bigl(2 \Vert x_{n}-y_{n-1} \Vert + \Vert y_{n}-y_{n-1} \Vert \bigr) \frac{ \vert \beta _{n}-\beta _{n-1} \vert + \vert \alpha _{n}-\alpha _{n-1} \vert }{\alpha _{n}^{2}} \frac{M_{1}}{s_{n}t^{2}} \\ &\quad \quad{} + \frac{2\beta _{n}}{t} \bigl\Vert A(x_{n})-A(y_{n}) \bigr\Vert \bigl\Vert \nabla \omega (x_{n})- \nabla \omega (y_{n}) \bigr\Vert \biggr) \\ &\quad = (1-\alpha _{n}ts_{n}) \Vert x_{n}-y_{n-1} \Vert ^{2}+o(\alpha _{n}ts_{n}). \end{aligned}$$

By Lemma 2.2 and condition (iii), we have \(\|x_{n+1}-y_{n}\|\rightarrow 0\), as \(n\rightarrow \infty \). It follows that \(\{x_{n}\}\) converges strongly to \(\bar{x}=\operatorname*{argmin}_{x \in \operatorname{Sol}(\mathrm{MVI}) } \omega (x)\). This completes the proof. □

If \(A:H\rightarrow H\) is an \(L_{A}\)-Lipschitz continuous and monotone operator, then we have the following convergence result.

Theorem 3.3

Let A be an \(L_{A}\)-Lipschitz continuous and monotone operator. Let \(\{x_{n}\}\) be defined by Algorithm 3.1. Then the iterative sequence \(\{x_{n}\}\) converges to \(\bar{x}=\Omega (\operatorname{Sol}(\mathrm{MVI}))\), which is the minimum like-norm solution of (MVI).

Proof

From Theorem 3.1, we know that \(y_{n}\rightarrow \bar{x}=\operatorname*{argmin}_{x \in \operatorname{Sol}(\mathrm{MVI}) } \omega (x)\). Therefore, it is sufficient to show that \(x_{n+1}-y_{n}\rightarrow 0\) as \(n\rightarrow \infty \). In view of the conditions on \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\), without loss of generality, we may assume that

$$ 0< \lambda _{n}=2\alpha _{n}ts_{n}-L_{\omega}^{2} \alpha _{n}^{2}s_{n}- \beta _{n}^{2}L_{A}^{2}s_{n}-2 \alpha _{n}\beta _{n}L_{\omega}L_{A}s_{n}< 1. $$
(23)

By using (10) and Algorithm 3.1, we get

$$\begin{aligned} & \Vert x_{n+1}-y_{n} \Vert ^{2} \\ &\quad = \bigl\Vert (1-s_{n})x_{n}+s_{n} \operatorname{prox}_{f}^{\beta _{n}}\bigl(x_{n}- \alpha _{n} \nabla \omega (x_{n})-\beta _{n}A(x_{n})\bigr) \\ &\quad \quad{} - (1-s_{n})y_{n}-s_{n} \operatorname{prox}_{f}^{\beta _{n}}\bigl(y_{n}- \alpha _{n} \nabla \omega (y_{n})-\beta _{n}A(y_{n})\bigr) \bigr\Vert ^{2} \\ &\quad = \bigl\Vert (1-s_{n}) (x_{n}-y_{n})+s_{n} \bigl(\operatorname{prox}_{f}^{\beta _{n}}\bigl(x_{n}- \alpha _{n}\nabla \omega (x_{n})-\beta _{n}A(x_{n})\bigr) \\ &\quad \quad{} - \operatorname{prox}_{f}^{\beta _{n}} \bigl(y_{n}-\alpha _{n}\nabla \omega (y_{n})- \beta _{n}A(y_{n})\bigr)\bigr) \bigr\Vert ^{2} \\ &\quad \leq (1-s_{n}) \Vert x_{n}-y_{n} \Vert ^{2}+s_{n} \bigl\Vert \operatorname{prox}_{f}^{\beta _{n}} \bigl(x_{n}- \alpha _{n}\nabla \omega (x_{n})- \beta _{n}A(x_{n})\bigr) \\ &\quad \quad{} - \operatorname{prox}_{f}^{\beta _{n}} \bigl(y_{n}-\alpha _{n}\nabla \omega (y_{n})- \beta _{n}A(y_{n})\bigr) \bigr\Vert ^{2} \\ &\quad \leq (1-s_{n}) \Vert x_{n}-y_{n} \Vert ^{2} \\ &\quad \quad{} + s_{n} \bigl\Vert \bigl(x_{n}-\alpha _{n}\nabla \omega (x_{n})-\beta _{n}A(x_{n}) \bigr)-\bigl(y_{n}- \alpha _{n}\nabla \omega (y_{n})-\beta _{n}A(y_{n})\bigr) \bigr\Vert ^{2} \\ &\quad = (1-s_{n}) \Vert x_{n}-y_{n} \Vert ^{2} \\ &\quad \quad{} + s_{n} \bigl\Vert (x_{n}-y_{n})- \alpha _{n}\bigl(\nabla \omega (x_{n})-\nabla \omega (y_{n})\bigr)-\beta _{n}\bigl(A(x_{n})-A(y_{n}) \bigr) \bigr\Vert ^{2} \\ &\quad \leq \Vert x_{n}-y_{n} \Vert ^{2}+ \alpha _{n}^{2}s_{n} \bigl\Vert \nabla \omega (x_{n})- \nabla \omega (y_{n}) \bigr\Vert ^{2}+\beta _{n}^{2}s_{n} \bigl\Vert A(x_{n})-A(y_{n}) \bigr\Vert ^{2} \\ &\quad \quad{} - 2\alpha _{n}s_{n}\bigl\langle \nabla \omega (x_{n})-\nabla \omega (y_{n}),x_{n}-y_{n} \bigr\rangle -2\beta _{n}s_{n}\bigl\langle A(x_{n})-A(y_{n}),x_{n}-y_{n} \bigr\rangle \\ &\quad \quad{} + 2\alpha _{n}\beta _{n}s_{n}\bigl\langle A(x_{n})-A(y_{n}),\nabla \omega (x_{n})-\nabla \omega (y_{n})\bigr\rangle . \end{aligned}$$

Hence, by the monotonicity of A and the strong monotonicity of ω, we have

$$\begin{aligned} \Vert x_{n+1}-y_{n} \Vert ^{2} & \leq \Vert x_{n}-y_{n} \Vert ^{2}+\alpha _{n}^{2}L_{ \omega}^{2}s_{n} \Vert x_{n}-y_{n} \Vert ^{2}+\beta _{n}^{2}L_{A}^{2}s_{n} \Vert x_{n}-y_{n} \Vert ^{2} \\ &\quad{} - 2\alpha _{n}ts_{n} \Vert x_{n}-y_{n} \Vert ^{2}+2\alpha _{n}\beta _{n}L_{ \omega}L_{A}s_{n} \Vert x_{n}-y_{n} \Vert ^{2}. \end{aligned}$$
(24)

Then, by (24) and (23), we have

$$\begin{aligned} \Vert x_{n+1}-y_{n} \Vert ^{2} & \leq \bigl(1-\bigl(2\alpha _{n}ts_{n}-\alpha _{n}^{2}L_{\omega}^{2}s_{n}- \beta _{n}^{2}L_{A}^{2}s_{n}-2 \alpha _{n}\beta _{n}L_{\omega}L_{A}s_{n} \bigr)\bigr) \Vert x_{n}-y_{n} \Vert ^{2} \\ & = (1-\lambda _{n}) \Vert x_{n}-y_{n} \Vert ^{2}. \end{aligned}$$
(25)

From (25), (22) and condition (iv), we obtain

$$\begin{aligned} \Vert x_{n+1}-y_{n} \Vert & \leq \biggl(1-\frac{1}{2}\lambda _{n}\biggr) \Vert x_{n}-y_{n} \Vert \\ & \leq \biggl(1-\frac{1}{2}\lambda _{n}\biggr) \bigl( \Vert x_{n}-y_{n-1} \Vert + \Vert y_{n}-y_{n-1} \Vert \bigr) \\ & \leq \biggl(1-\frac{1}{2}\lambda _{n}\biggr) \Vert x_{n}-y_{n-1} \Vert +o(\lambda _{n}). \end{aligned}$$

By condition (iii) and Lemma 2.2, we deduce that \(x_{n+1}-y_{n}\rightarrow 0\) as \(n\rightarrow \infty \). This completes the proof. □

Corollary 3.1

Let A be a hemicontinuous monotone operator. Let \(\omega (x)=\frac{1}{2}\|x\|^{2}\). Let \(\{x_{n}\}\) be defined by Algorithm 3.1. Assume that both \(\{\beta _{n}A(x_{n})\}\) and \(\{A(y_{n})\}\) are bounded. Then the iterative sequence \(\{x_{n}\}\) converges to \(\bar{x}=P_{\operatorname{Sol}(\mathrm{MVI})}(0)\), which is the minimum norm solution of (MVI).

Corollary 3.2

Let A be an \(L_{A}\)-Lipschitz continuous and monotone operator. Let \(\omega (x)=\frac{1}{2}\|x\|^{2}\). Let \(\{x_{n}\}\) be defined by Algorithm 3.1. Then the iterative sequence \(\{x_{n}\}\) converges to \(\bar{x}=P_{\operatorname{Sol}(\mathrm{MVI})}(0)\), which is the minimum norm solution of (MVI).

4 Application

Let \(f:H\rightarrow (-\infty ,+\infty ]\) be a proper, lower semicontinuous, and convex function, and let \(g:H\rightarrow (-\infty ,+\infty )\) be a convex and Gâteaux differentiable function. Consider the optimization problem

$$ \min_{x\in H}f(x)+g(x). $$
(P)

We denote by \(\operatorname{Sol}(\mathrm{P})\) the solution set of problem (P). Notice that

$$\begin{aligned} \bar{x} \in \operatorname{Sol}(\mathrm{P}) &\quad \Leftrightarrow\quad 0\in \partial f(\bar{x})+ \nabla g(\bar{x}) \\ &\quad \Leftrightarrow\quad {-}\nabla g(\bar{x})\in \partial f(\bar{x}) \\ &\quad \Leftrightarrow\quad f(y)-f(\bar{x})+\bigl\langle \nabla g(\bar{x}),y-\bar{x} \bigr\rangle \geq 0,\quad \forall y\in H. \end{aligned}$$

Note that if g is convex and Gâteaux differentiable, then ∇g is norm-to-weak continuous and monotone. Hence, ∇g is a hemicontinuous monotone operator. On the other hand, when \(A=\nabla g\), the minimization problem corresponding to the regularization mixed variational inequality problem (7) becomes

$$ \min_{x\in H}f(x)+g(x)+\gamma _{n} \omega (x). $$
(26)

Since \(\omega (x)\) is a strongly convex function, the minimization problem (26) has a unique solution. Therefore, as an application of Theorem 3.2, we have the following result.

Algorithm 4.1

Given \(x_{0}\in H\), for every \(n\in \mathbb{N}\), set

$$ x_{n+1}=(1-s_{n})x_{n}+s_{n} \operatorname{prox}_{f}^{\beta _{n}}\bigl(x_{n}- \alpha _{n}\nabla \omega (x_{n})-\beta _{n} \nabla g(x_{n})\bigr), $$

where \(\{s_{n}\}\), \(\{\alpha _{n}\}\), and \(\{\beta _{n}\}\) are three sequences in \((0,1)\) that satisfy the following conditions:

  1. (i)

    \(\alpha _{n}ts_{n}\leq 1\), \(0<\bar{s}<s_{n}\);

  2. (ii)

    \(\frac{\alpha _{n}}{\beta _{n}}\rightarrow 0\), \(\frac{\beta _{n}^{2}}{\alpha _{n}}\rightarrow 0\) as \(n\rightarrow \infty \);

  3. (iii)

\(\alpha _{n}\rightarrow 0\) as \(n\rightarrow \infty \), \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \);

  4. (iv)

    \(\frac{|\alpha _{n}-\alpha _{n-1}|+|\beta _{n}-\beta _{n-1}|}{\alpha _{n}^{2}}\rightarrow 0\) as \(n\rightarrow \infty \).

Theorem 4.1

Let the sequence \(\{x_{n}\}\) be generated by Algorithm 4.1. Assume that both \(\{\beta _{n}\nabla g(x_{n})\}\) and \(\{\nabla g(y_{n})\}\) are bounded. Assume that \(\operatorname{Sol}(\mathrm{P})\) is nonempty. Then the iterative sequence \(\{x_{n}\}\) converges to \(\bar{x}=\operatorname*{argmin}_{x \in \operatorname{Sol}(\mathrm{P})} \omega (x)\), which is the minimum like-norm solution of (P).

5 Numerical experiment

Example 5.1

Let \(H=\mathbb{R}\). Let

$$ f(x)= \textstyle\begin{cases} x,& x\geq 0, \\ 0,& x< 0, \end{cases}\displaystyle \qquad g(x)= \textstyle\begin{cases} \frac{2}{3}x^{\frac{3}{2}},&x\geq 0, \\ 0,& x< 0, \end{cases}\displaystyle \qquad A(x)=\nabla g(x)=\textstyle\begin{cases} x^{\frac{1}{2}},&x\geq 0, \\ 0,& x< 0, \end{cases} $$

and let \(\omega (x)=\frac{1}{2}x^{2}\). It is clear that \(A=\nabla g\) is a hemicontinuous monotone operator and \(\omega (\cdot )\) is a strongly convex function with parameter 1. Choose the sequences \(\{\alpha _{n}\}\), \(\{\beta _{n}\}\), and \(\{s_{n}\}\) such that

$$ \alpha _{n}=n^{-\frac{2}{3}},\qquad \beta _{n}=n^{-\frac{1}{2}},\qquad s_{n}= \frac{1}{2}. $$
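A quick check of the decay rates (using \(|\alpha _{n}-\alpha _{n-1}|=O(n^{-5/3})\) and \(|\beta _{n}-\beta _{n-1}|=O(n^{-3/2})\)) gives

$$ \frac{\alpha _{n}}{\beta _{n}}=n^{-\frac{1}{6}}\rightarrow 0,\qquad \frac{\beta _{n}^{2}}{\alpha _{n}}=n^{-\frac{1}{3}}\rightarrow 0,\qquad \frac{ \vert \alpha _{n}-\alpha _{n-1} \vert + \vert \beta _{n}-\beta _{n-1} \vert }{\alpha _{n}^{2}}=O\bigl(n^{-\frac{1}{6}}\bigr)\rightarrow 0, $$

and \(\sum_{n=1}^{\infty}\alpha _{n}=\sum_{n=1}^{\infty}n^{-2/3}=\infty \).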

Then it is clear that conditions (i)–(iv) of Algorithm 3.1 and Algorithm 4.1 are satisfied.
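A minimal sketch of Algorithm 4.1 for this example is given below; the stopping test and tolerance are our choices, so the iteration counts it produces need not coincide exactly with those reported for Fig. 1.

```python
import numpy as np

def prox_f(z, beta):
    # prox of beta*f with f(x) = max(x, 0): z - beta if z > beta, 0 if 0 <= z <= beta, z if z < 0
    if z > beta:
        return z - beta
    if z >= 0.0:
        return 0.0
    return z

grad_g     = lambda x: np.sqrt(x) if x > 0 else 0.0   # g'(x) = x^{1/2} for x >= 0, 0 otherwise
grad_omega = lambda x: x                              # omega(x) = x^2 / 2

def algorithm_4_1(x0, tol=1e-6, max_iter=1000):
    x = float(x0)
    for n in range(1, max_iter + 1):
        a_n, b_n, s_n = n ** (-2.0 / 3.0), n ** (-0.5), 0.5
        x_new = (1 - s_n) * x + s_n * prox_f(x - a_n * grad_omega(x) - b_n * grad_g(x), b_n)
        if abs(x_new - x) < tol:   # stopping tolerance: our assumption, not part of the algorithm
            return x_new, n
        x = x_new
    return x, max_iter

print(algorithm_4_1(10.0))   # converges to the minimum-norm solution x = 0
print(algorithm_4_1(50.0))
```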

In Fig. 1, we present the numerical results of Algorithm 4.1. If \(x_{0}=10\), then the optimal solution is obtained after 8 iterations. If \(x_{0}=50\), then the optimal solution is obtained after 19 iterations.

Figure 1: Numerical results of Algorithm 4.1

6 Concluding remarks

In this paper, we considered a class of regularized forward–backward splitting methods for finding the minimum like-norm solution of mixed variational inequalities and convex minimization problems in a Hilbert space. Strong convergence results were obtained for the forward–backward splitting method under the hemicontinuity assumption.

Availability of data and materials

Not applicable.

References

  1. Alber, Y., Ryazantseva, I.: Nonlinear Ill-Posed Problems of Monotone Type. Springer, New York (2006)


  2. Beck, A., Sabach, S.: A first order method for finding minimal norm-like solutions of convex optimization problems. Math. Program., Ser. A 147, 25–46 (2014)


  3. Chen, C.H., Ma, S.Q., Yang, J.F.: A general inertial proximal point algorithm for mixed variational inequality problem. SIAM J. Optim. 25, 2120–2142 (2015)


  4. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)


  5. Ferris, M.C., Mangasarian, O.L.: Finite perturbation of convex programs. Appl. Math. Optim. 23, 263–273 (1991)


  6. Goeleven, D.: Existence and uniqueness for a linear mixed variational inequality arising in electrical circuits with transistors. J. Optim. Theory Appl. 138, 397–406 (2008)


  7. Konnov, I.V., Volotskaya, E.O.: Mixed variational inequalities and economic equilibrium problems. J. Appl. Math. 6, 289–314 (2002)


  8. Linh, H.M., Reich, S., Thong, D.V., Dung, V.T., Lan, N.P.H.: Analysis of two variants of an inertial projection algorithm for finding the minimum-norm solutions of variational inequality and fixed point problems. Numer. Algorithms 89, 1695–1721 (2022)


  9. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)


  10. Liu, Z.H., Migorski, S., Zeng, S.D.: Partial differential variational inequalities involving nonlocal boundary conditions in Banach space. J. Differ. Equ. 7, 3989–4006 (2017)


  11. Malitsky, Y.: Golden ratio algorithms for variational inequalities. Math. Program. 184, 383–410 (2020)


  12. Noor, M.A.: Proximal methods for mixed variational inequalities. J. Optim. Theory Appl. 115, 447–452 (2002)


  13. Noor, M.A., Huang, Z.Y.: Some proximal methods for solving mixed variational inequalities. Appl. Anal. 2012, Article ID 610852 (2012)


  14. Ogwo, G.N., Izuchukwu, C., Mewomo, O.T.: Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algorithms 88, 1419–1456 (2021)


  15. Quoc, T.D., Muu, L.D., Hien, N.V.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–766 (2008)


  16. Solodov, M.: An explicit descent method for bilevel convex optimization. J. Convex Anal. 14, 227–237 (2007)


  17. Thakur, B.S., Varghese, S.: Approximate solvability of general strongly mixed variational inequalities. Tbil. Math. J. 6, 13–20 (2013)


  18. Tikhonov, A.N., Arsenin, V.Y.: Solutions of ill-posed problems. Scr. Ser. Math. Comp. (1977)

  19. Wang, M.: The existence results and Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. Ann. Math. Phys. 7, 151–163 (2017)


  20. Xia, F.Q., Huang, N.J.: An inexact hybrid projection-proximal point algorithm for solving generalized mixed variational inequalities. Comput. Math. Appl. 62, 4596–4604 (2011)


  21. Zhou, Y., Zhou, H.Y., Wang, P.Y.: Iterative methods for finding the minimum-norm solution of the standard monotone variational inequality problems with applications in Hilbert spaces. J. Inequal. Appl. 2015, 135 (2015)



Funding

The work was partially supported by the Heilongjiang Provincial Natural Sciences Grant (No. LH2022A017) and the National Natural Sciences Grant (No. 11871182).

Author information


Contributions

All authors contributed equally to this work. All authors reviewed the manuscript.

Corresponding author

Correspondence to Wen Song.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Guan, WB., Song, W. The forward–backward splitting method for finding the minimum like-norm solution of the mixed variational inequality problem. J Inequal Appl 2023, 126 (2023). https://doi.org/10.1186/s13660-023-03039-4
