The forward–backward splitting method for finding the minimum like-norm solution of the mixed variational inequality problem

We consider a general class of convex optimization problems in which one seeks to minimize a strongly convex function over a closed convex set that is itself the solution set of a mixed variational inequality problem in a Hilbert space. A regularized forward-backward splitting method is applied to find the minimum like-norm solution of the mixed variational inequality problem under investigation.


Introduction
Let H be a real Hilbert space. Consider the mixed variational inequality problem: find x̄ ∈ H such that

f(x) − f(x̄) + ⟨A(x̄), x − x̄⟩ ≥ 0, ∀x ∈ H, (MVI)

where the following assumptions are made throughout the paper:
• f : H → (−∞, +∞] is proper, lower semicontinuous, and convex.
• A : H → H is a nonlinear monotone mapping.
• The set of solutions to problem (MVI), denoted by Sol(MVI), is nonempty.

Mixed variational inequalities are general problems that encompass as special cases several problems from continuous optimization and variational analysis, such as minimization problems, linear complementarity problems, vector optimization problems, and variational inequalities, with applications in economics, engineering, physics, mechanics, and electronics (see [6,7,12,13,19] among others).
We note that if f is the indicator function of a closed convex set C in H, then the mixed variational inequality problem (MVI) is equivalent to finding x̄ ∈ C such that

⟨A(x̄), x − x̄⟩ ≥ 0, ∀x ∈ C, (1)

which is called the standard variational inequality problem. On the other hand, if A = 0, then the mixed variational inequality problem (MVI) reduces to the unconstrained optimization problem of minimizing f over H. For mixed variational inequalities, one can find various algorithms in the literature, for instance, in [3,11,15,17,20]. It is known that problem (MVI) is characterized by the fixed point equation

x̄ = prox_{tf}(x̄ − tA(x̄)),

where t > 0 and prox_{tf} is the proximity operator of tf (see Section 2). This equation suggests the possibility of iterating (see [4])

x_{n+1} = prox_{tf}(x_n − tA(x_n)).

This method is called the forward-backward splitting method. Forward-backward methods belong to the class of proximal splitting methods. These methods require the computation of the proximity operator and the approximation of proximal points (see [9]). Problem (MVI) might have multiple solutions, and in this case it is natural to consider the minimum like-norm solution problem, in which one seeks the solution of (MVI) that minimizes a given function ω:

min ω(x) subject to x ∈ Sol(MVI). (MLN)

The function ω : H → R is assumed to satisfy the following:
• ω is a strongly convex function over H with parameter t > 0 (see Definition 2.1).
If Sol(MVI) is a nonempty closed convex set, then by the strong convexity of ω, problem (MLN) has a unique solution. For simplicity, problem (MVI) will be called the core problem, problem (MLN) will be called the outer problem, and correspondingly, ω will be called the outer objective function.
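To make the forward-backward iteration concrete, the following minimal Python sketch applies it to a toy instance of (MVI). The specific choices f(x) = λ‖x‖₁ (whose proximity operator is the componentwise soft-thresholding map) and an affine monotone A(x) = Bx + b with positive definite B are ours, not from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # proximity operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
B = M.T @ M + 0.1 * np.eye(10)    # positive definite => A is monotone
b = rng.standard_normal(10)
A = lambda x: B @ x + b

lam = 0.5                          # weight of the l1 term f
t = 1.0 / np.linalg.norm(B, 2)     # step size, a common heuristic choice
x = np.zeros(10)
for _ in range(500):               # x_{n+1} = prox_{t f}(x_n - t A(x_n))
    x = soft_threshold(x - t * A(x), t * lam)

# a solution of (MVI) is exactly a fixed point of the forward-backward map
print("fixed-point residual:",
      np.linalg.norm(x - soft_threshold(x - t * A(x), t * lam)))
```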
When A = 0 and ω(x) = ½‖x‖², the best known indirect method for solving problem (MLN) is the well-known Tikhonov regularization [18], which suggests solving the following alternative regularized problem for some λ > 0:

min_{x∈H} f(x) + (λ/2)‖x‖². (Q_λ)

In [5], the authors treat the case in which f is an indicator function of a closed convex set C and show that, under some restrictive conditions, including C being a polyhedron, there exists a small enough λ* > 0 such that the optimal solution of problem Q_{λ*} is the optimal solution of problem (MLN). In [16], Solodov showed that if Σ_{k=1}^∞ λ_k = ∞ and f is again an indicator function of a closed convex set, there is no need to find the optimal solution of problem Q_{λ_k}: it is sufficient to approximate its solution by performing a single projected gradient step on Q_{λ_k}. In [2], a first-order method for solving problem (MLN), called the minimal norm gradient method, was proposed, for which the authors proved an O(1/√k) rate of convergence in terms of the inner objective function values. The minimal norm gradient method is based on the cutting plane idea: at each iteration two specific half-spaces are constructed, and the outer objective function ω is minimized over their intersection.
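Solodov's observation can be illustrated numerically. The hedged sketch below uses a toy instance of our own, not from [16]: minimizing ½‖Mx − c‖² over a box with an underdetermined M, so that the solution set is large; one projected gradient step is taken on each Q_{λ_k}, with λ_k → 0 and Σλ_k = ∞.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 8))            # underdetermined: many minimizers
c = M @ np.clip(rng.standard_normal(8), 0.0, 1.0)
grad = lambda x: M.T @ (M @ x - c)         # gradient of the inner objective
proj = lambda x: np.clip(x, 0.0, 1.0)      # projection onto the box C = [0,1]^8

Lg = np.linalg.norm(M, 2) ** 2             # Lipschitz constant of grad
x = np.zeros(8)
for k in range(1, 5000):
    lam = 1.0 / k                          # lam_k -> 0, sum lam_k = inf
    s = 1.0 / (Lg + lam)                   # step size for Q_lam's gradient
    x = proj(x - s * (grad(x) + lam * x))  # one projected gradient step on Q_lam
print("approximate minimum-norm minimizer:", x)
```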
In [21], for finding the minimum-norm solution of the standard monotone variational inequality problem (1), Zhou et al. proposed an iterative method built from the metric projection P_C of H onto C. They proved that the proposed iterative sequences converge strongly to the minimum-norm solution of the variational inequality provided {α_n} and {β_n} satisfy certain conditions. In [8], for A pseudomonotone and Lipschitz continuous, Linh et al. introduced an inertial projection algorithm for finding minimum-norm solutions of the variational inequality problem. In [14], Linh et al. introduced an inertial method for finding minimum-norm solutions of the split variational inequality problem.
Our interest in this paper is to study a regularized forward-backward splitting method for finding the minimum like-norm solution of the mixed variational inequality problem in infinite-dimensional real Hilbert spaces when the operator A is monotone and hemicontinuous.

Mathematical toolbox
Let f : H → (−∞, +∞] be an extended real-valued function. The subdifferential of f is the set-valued operator ∂f : H → 2^H, the value of which at x ∈ H is

∂f(x) = { v ∈ H : f(y) ≥ f(x) + ⟨v, y − x⟩, ∀y ∈ H }.

For α > 0, consider the Moreau envelope env_{αf}(x) and the proximal mapping prox_{αf}(x) defined by

env_{αf}(x) = inf_{u∈H} { f(u) + (1/(2α))‖u − x‖² }, (2)
prox_{αf}(x) = argmin_{u∈H} { f(u) + (1/(2α))‖u − x‖² }. (3)

The operator prox_{αf} is called the proximity operator. For every x ∈ H, the infimum in (3) is achieved at a unique point prox_{αf}(x), which is characterized by the inclusion

x − prox_{αf}(x) ∈ α∂f(prox_{αf}(x)).

The proximity operator possesses several important properties; the one used most often in our analysis is that it is firmly nonexpansive:

⟨prox_{αf}(x) − prox_{αf}(y), x − y⟩ ≥ ‖prox_{αf}(x) − prox_{αf}(y)‖², ∀x, y ∈ H.

Definition 2.1 A function h : H → (−∞, +∞] is called strongly convex with parameter t > 0 if h − (t/2)‖·‖² is convex. If h is strongly convex with parameter t > 0 and Gâteaux differentiable, then ∇h is strongly monotone with parameter t > 0, that is, ⟨∇h(x) − ∇h(y), x − y⟩ ≥ t‖x − y‖² for all x, y ∈ H.

Definition 2.4 A mapping T : H → H is called Lipschitz continuous if there exists L > 0 such that ‖T(x) − T(y)‖ ≤ L‖x − y‖ for all x, y ∈ H.

If a mapping T : H → H is strongly monotone with parameter t > 0 and Lipschitz continuous with constant L, then L ≥ t.

Remark 2.1 As a matter of fact, it is known that:
(i) If ∇ω is strongly monotone with constant t and A is monotone, then A + α∇ω is strongly monotone with constant αt.
(ii) If ∇ω is Lipschitz continuous with constant L_ω and A is Lipschitz continuous with constant L_A, then A + α∇ω is also Lipschitz continuous with constant L_A + αL_ω.
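The characterization and the firm nonexpansiveness above can be checked numerically. A small sketch, assuming f = ‖·‖₁ so that prox_{αf} is the componentwise soft-thresholding map (a standard fact, not specific to this paper):

```python
import numpy as np

def prox_l1(v, alpha):
    # prox_{alpha * ||.||_1}: componentwise soft thresholding
    return np.sign(v) * np.maximum(np.abs(v) - alpha, 0.0)

rng = np.random.default_rng(2)
x, y, alpha = rng.standard_normal(6), rng.standard_normal(6), 0.3
px, py = prox_l1(x, alpha), prox_l1(y, alpha)

# firm nonexpansiveness: <px - py, x - y> >= ||px - py||^2
print(np.dot(px - py, x - y) >= np.dot(px - py, px - py) - 1e-12)

# inclusion x - prox(x) in alpha * subdifferential of ||.||_1 at prox(x):
# componentwise, (x - px)/alpha lies in [-1, 1] and equals sign(px) where px != 0
g = (x - px) / alpha
ok = np.all(np.abs(g) <= 1 + 1e-12) and np.allclose(
    g[px != 0], np.sign(px[px != 0]))
print("subgradient inclusion holds:", ok)
```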

Definition 2.5 [1]
A function f is called lower semicontinuous at a point x_0 ∈ dom f if for any sequence {x_n} ⊆ dom f such that x_n → x_0 there holds the inequality

f(x_0) ≤ liminf_{n→∞} f(x_n). (4)

If inequality (4) holds under the weaker condition that x_n converges weakly to x_0, then f is called weakly lower semicontinuous at x_0.
Lemma 2.1 [1] Let f be a convex and lower semicontinuous function. Then it is weakly lower semicontinuous.

Definition 2.6 [21]
A mapping T is said to be hemicontinuous if convergence of a sequence {x_n} to x_0 ∈ H along a line implies the weak convergence T(x_n) ⇀ T(x_0), that is, T(x_n) = T(x_0 + t_n x) ⇀ T(x_0) as t_n → 0 for every x ∈ H.
Lemma 2.2 [21] Let {a_n} be a sequence of nonnegative real numbers satisfying

a_{n+1} ≤ (1 − γ_n)a_n + γ_n δ_n, n ≥ 0,

where {γ_n} ⊆ (0, 1) and {δ_n} are such that Σ_{n=1}^∞ γ_n = ∞ and limsup_{n→∞} δ_n ≤ 0. Then lim_{n→∞} a_n = 0.

Lemma 2.3 [10] Let A : H → H be a hemicontinuous monotone operator. Assume that the following coercivity condition holds: there exists v ∈ dom f such that

( f(x) − f(v) + ⟨A(x), x − v⟩ ) / ‖x − v‖ → +∞ as ‖x‖ → ∞, x ∈ dom f.

Then Sol(MVI) is a nonempty set.
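A quick numerical illustration of Lemma 2.2 as reconstructed above, iterating the borderline equality case with our own choices γ_n = 1/(n+1) and δ_n = 1/√n:

```python
import numpy as np

a = 1.0
for n in range(1, 200_000):
    g = 1.0 / (n + 1)        # gamma_n in (0, 1), sum gamma_n diverges
    d = 1.0 / np.sqrt(n)     # delta_n -> 0, so limsup delta_n <= 0
    a = (1 - g) * a + g * d  # equality case of the lemma's recursion
print("a_n after 2e5 steps:", a)  # tends to 0, as the lemma predicts
```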

Main result
Before describing the algorithms, we require the following notation for the optimal solution of the problem consisting of minimizing ω over a given closed and convex set C:

Ω(C) := argmin_{x∈C} ω(x). (5)

By the optimality condition in problem (5), it follows that

x̂ = Ω(C) if and only if ⟨∇ω(x̂), x − x̂⟩ ≥ 0 for all x ∈ C. (6)

Lemma 3.1 Let A : H → H be a hemicontinuous monotone operator. Then, for a fixed element x̄ ∈ H, the following mixed variational inequalities are equivalent:
(a) f(x) − f(x̄) + ⟨A(x̄), x − x̄⟩ ≥ 0, ∀x ∈ H;
(b) f(x) − f(x̄) + ⟨A(x), x − x̄⟩ ≥ 0, ∀x ∈ H.

Proof Since A is a monotone operator, for any x ∈ H we have ⟨A(x), x − x̄⟩ ≥ ⟨A(x̄), x − x̄⟩. Hence (a) implies (b). Conversely, suppose (b) holds, fix x ∈ H, and set x_s = x̄ + s(x − x̄) for s ∈ (0, 1). Applying (b) at x_s and using the convexity of f gives

0 ≤ f(x_s) − f(x̄) + ⟨A(x_s), x_s − x̄⟩ ≤ s( f(x) − f(x̄) + ⟨A(x_s), x − x̄⟩ ).

Dividing by s and letting s → 0⁺, the hemicontinuity of A yields f(x) − f(x̄) + ⟨A(x̄), x − x̄⟩ ≥ 0, so (b) implies (a). This completes the proof.

Lemma 3.2
Let A : H → H be a hemicontinuous monotone operator. Then Sol(MVI) is a closed convex set.
Proof Let x̄, ȳ ∈ Sol(MVI) and let 0 < s < 1. Then, for any x ∈ H, by Lemma 3.1 we have

f(x) − f(x̄) + ⟨A(x), x − x̄⟩ ≥ 0 and f(x) − f(ȳ) + ⟨A(x), x − ȳ⟩ ≥ 0.

Multiplying the first inequality by s, the second by 1 − s, adding, and using the convexity of f, we obtain

f(x) − f(sx̄ + (1 − s)ȳ) + ⟨A(x), x − (sx̄ + (1 − s)ȳ)⟩ ≥ 0, ∀x ∈ H.

Hence, by Lemma 3.1, sx̄ + (1 − s)ȳ ∈ Sol(MVI), so Sol(MVI) is convex. Now let {x_n} ⊆ Sol(MVI) and x_n → x̄. Then, for any x ∈ H, by Lemma 3.1 we have

f(x) − f(x_n) + ⟨A(x), x − x_n⟩ ≥ 0.

By the weak lower semicontinuity of f (Lemma 2.1), liminf_{n→∞} f(x_n) ≥ f(x̄). Then, passing to the limit, we have

f(x) − f(x̄) + ⟨A(x), x − x̄⟩ ≥ 0, ∀x ∈ H.

Hence, by Lemma 3.1, x̄ ∈ Sol(MVI), that is, Sol(MVI) is a closed set.
In this section, we use the idea of regularization to attack the general case. For given γ > 0, we consider the following regularized mixed variational inequality problem: find x_γ ∈ H such that

f(x) − f(x_γ) + ⟨A(x_γ) + γ∇ω(x_γ), x − x_γ⟩ ≥ 0, ∀x ∈ H, (7)

where γ > 0 is the regularization parameter.

Lemma 3.3
Let A : H → H be a hemicontinuous monotone operator. Then the regularized mixed variational inequality problem (7) has a unique solution.
Proof For any v ∈ dom f and v* ∈ ∂f(v), by Remark 2.1, we have

f(x) − f(v) + ⟨A(x) + γ∇ω(x), x − v⟩ ≥ ⟨v* + A(v) + γ∇ω(v), x − v⟩ + γt‖x − v‖².

Hence,

( f(x) − f(v) + ⟨A(x) + γ∇ω(x), x − v⟩ ) / ‖x − v‖ ≥ −‖v* + A(v) + γ∇ω(v)‖ + γt‖x − v‖ → +∞ as ‖x‖ → ∞.

Then, by Lemma 2.3, the set of solutions to the regularized mixed variational inequality problem (7) is nonempty. Next, we show that problem (7) has a unique solution.
Assume that x̄ and ȳ are solutions of the regularized mixed variational inequality problem (7). Then we have

f(ȳ) − f(x̄) + ⟨A(x̄) + γ∇ω(x̄), ȳ − x̄⟩ ≥ 0 (8)

and

f(x̄) − f(ȳ) + ⟨A(ȳ) + γ∇ω(ȳ), x̄ − ȳ⟩ ≥ 0. (9)

Combining (8) and (9), we get

⟨A(x̄) + γ∇ω(x̄) − A(ȳ) − γ∇ω(ȳ), x̄ − ȳ⟩ ≤ 0.

Hence, by Remark 2.1, we have γt‖x̄ − ȳ‖² ≤ 0. Therefore, x̄ = ȳ. This completes the proof.
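The content of Lemma 3.3 can be observed numerically: for fixed γ > 0, a forward-backward iteration applied to A + γ∇ω locates the unique solution of (7) from any starting point. A hedged Python sketch with toy data of our own (f = λ‖·‖₁, ω = ½‖·‖², A affine with a singular positive semidefinite matrix, so that the unregularized (MVI) has many solutions):

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(5)
M = rng.standard_normal((5, 10))
B = M.T @ M / 10.0                  # PSD and singular: Sol(MVI) is large
b = -B @ rng.standard_normal(10)
lam, gamma, beta = 0.1, 0.05, 0.3   # f-weight, regularization, step size

def solve_regularized(x, iters=5000):
    # forward-backward applied to A + gamma * grad(omega), omega = 0.5||.||^2
    for _ in range(iters):
        x = soft_threshold(x - beta * (B @ x + b + gamma * x), beta * lam)
    return x

xa = solve_regularized(np.zeros(10))
xb = solve_regularized(10 * rng.standard_normal(10))
print("gap between limits from two starts:", np.linalg.norm(xa - xb))
```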
Remark 3.1 For any γ > 0 and β > 0, x̄ is a solution of problem (7) if and only if x̄ = prox_{βf}( x̄ − βγ∇ω(x̄) − βA(x̄) ).

In this section, we introduce two iterative methods (one implicit and the other explicit). First, by Remark 3.1, we introduce the implicit one:

y_n = prox_{β_n f}( y_n − α_n∇ω(y_n) − β_n A(y_n) ), (10)

where {α_n} and {β_n} are two sequences in (0, 1) that satisfy the following condition: α_n/β_n → 0 as n → ∞.

Theorem 3.1 Let A be a hemicontinuous monotone operator. Then the sequence {y_n} generated by the implicit method (10) converges strongly to x̄ = Ω(Sol(MVI)), which is the minimum like-norm solution of (MVI).
Proof Put z_n = y_n − α_n∇ω(y_n) − β_n A(y_n), so that y_n = prox_{β_n f}(z_n) by (10). From the characterization of the proximity operator, z_n − y_n ∈ β_n∂f(y_n), that is,

f(x) − f(y_n) ≥ (1/β_n)⟨z_n − y_n, x − y_n⟩, ∀x ∈ H. (11)

Setting γ_n = α_n/β_n and using z_n − y_n = −α_n∇ω(y_n) − β_n A(y_n), inequality (11) becomes

f(x) − f(y_n) + ⟨A(y_n) + γ_n∇ω(y_n), x − y_n⟩ ≥ 0, ∀x ∈ H. (12)

Let p ∈ Sol(MVI). Since A is a monotone operator and p ∈ Sol(MVI), we know

f(y_n) − f(p) + ⟨A(y_n), y_n − p⟩ ≥ f(y_n) − f(p) + ⟨A(p), y_n − p⟩ ≥ 0.

Combining this with (12) at x = p yields

⟨∇ω(y_n), y_n − p⟩ ≤ 0. (13)

Then we have ⟨∇ω(y_n) − ∇ω(p), y_n − p⟩ + ⟨∇ω(p), y_n − p⟩ = ⟨∇ω(y_n), y_n − p⟩ ≤ 0, from which it turns out that ⟨∇ω(y_n) − ∇ω(p), y_n − p⟩ ≤ −⟨∇ω(p), y_n − p⟩.

Hence, by the strong monotonicity of ∇ω and the Cauchy-Schwarz inequality, we have

t‖y_n − p‖² ≤ ⟨∇ω(p), p − y_n⟩ ≤ ‖∇ω(p)‖ ‖y_n − p‖. (14)

Therefore, {y_n} is bounded. Then we know that {y_n} has a subsequence {y_{n_k}} such that y_{n_k} ⇀ x̄ as k → ∞. Furthermore, without loss of generality, we may assume that {y_n} converges weakly to a point x̄ ∈ H. We show that x̄ is a solution to (MVI). For any x ∈ H, by Remark 2.1, we have

⟨A(x) + γ_n∇ω(x) − A(y_n) − γ_n∇ω(y_n), x − y_n⟩ ≥ 0. (15)

Combining (15) and (12), we get

f(x) − f(y_n) + ⟨A(x) + γ_n∇ω(x), x − y_n⟩ ≥ 0, ∀x ∈ H. (16)

Since γ_n = α_n/β_n → 0 and f is weakly lower semicontinuous, taking the limit as n → ∞ in (16) yields

f(x) − f(x̄) + ⟨A(x), x − x̄⟩ ≥ 0, ∀x ∈ H.

By Lemma 3.1, we get f(x) − f(x̄) + ⟨A(x̄), x − x̄⟩ ≥ 0 for all x ∈ H, that is, x̄ ∈ Sol(MVI). Therefore, we can substitute p by x̄ in (14) to obtain

t‖y_n − x̄‖² ≤ ⟨∇ω(x̄), x̄ − y_n⟩. (17)

Since y_n ⇀ x̄ as n → ∞, by (17) we get y_n → x̄ as n → ∞. Moreover, passing to the limit in (13) gives

⟨∇ω(x̄), x̄ − p⟩ ≤ 0, ∀p ∈ Sol(MVI),

which, by the characterization (6), means that x̄ is the minimum like-norm solution of (MVI). This completes the proof.
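Each step of the implicit method (10) requires solving a fixed point equation in y_n. A minimal sketch, assuming ω = ½‖·‖² (so ∇ω is the identity), f = λ‖·‖₁, an affine monotone A of our own choosing, and an inner fixed-point loop to compute y_n; none of these concrete choices come from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(3)
M = rng.standard_normal((20, 10))
B = M.T @ M / 20.0 + 0.05 * np.eye(10)
b = rng.standard_normal(10)
A = lambda x: B @ x + b
lam = 0.2

def implicit_step(y, alpha, beta, inner=200):
    # solve y = prox_{beta f}(y - alpha*grad_omega(y) - beta*A(y)) by an
    # inner fixed-point iteration (a contraction for this data and step sizes)
    for _ in range(inner):
        y = soft_threshold((1 - alpha) * y - beta * A(y), beta * lam)
    return y

y = np.zeros(10)
for n in range(1, 200):
    alpha, beta = 1.0 / (n + 1), 0.5   # alpha_n / beta_n -> 0
    y = implicit_step(y, alpha, beta)
print("implicit iterate y_n:", np.round(y, 4))
```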
Now, we introduce an explicit method (regularized forward-backward splitting) and establish its strong convergence. From the implicit method, it is natural to consider the following iteration, which generates a sequence {x_n} according to the recursion below.

Algorithm 3.1 Given x_0 ∈ H, for every n ∈ N, set

x_{n+1} = (1 − s_n)x_n + s_n prox_{β_n f}( x_n − α_n∇ω(x_n) − β_n A(x_n) ),

where {s_n}, {α_n}, and {β_n} are three sequences in (0, 1) that satisfy the following conditions (i)-(iv):

Proposition 3.1
Let A be a hemicontinuous monotone operator. Let {x_n} be defined by Algorithm 3.1. Assume that {β_n A(x_n)} is bounded. Then the iterative sequence {x_n} is bounded.
Proof For any p ∈ Sol(MVI), from the characterization of the proximity operator, and then by the monotonicity of A together with the strong monotonicity and Lipschitz continuity of ∇ω, we obtain a recursive estimate (19) for ‖x_{n+1} − p‖ in terms of ‖x_n − p‖. Suppose that {x_n} is unbounded. Then there exists a subsequence {x_{n_k+1}} of {x_n} such that ‖x_{n_k} − p‖ → +∞ and ‖x_{n_k+1} − p‖ → +∞ as k → ∞. By (19) and the boundedness of {β_n A(x_n)}, and then using inequality (19) again, we arrive at an estimate (20) for ‖x_{n_k} − p‖. Since α_n → 0 and {β_n A(x_n)} is bounded, (20) shows that {x_{n_k}} is bounded. This contradicts ‖x_{n_k} − p‖ → +∞. Hence, the iterative sequence {x_n} is bounded.

Theorem 3.2
Let A be a hemicontinuous monotone operator. Let {x_n} be defined by Algorithm 3.1. Assume that both {β_n A(x_n)} and {A(y_n)} are bounded. Then the iterative sequence {x_n} converges strongly to x̄, which is the minimum like-norm solution of (MVI).
Proof By Theorem 3.1, we know that {y_n} converges strongly to x̄. Therefore, it is sufficient to show that ‖x_{n+1} − y_n‖ → 0 as n → ∞. Using (10) and Algorithm 3.1, then the monotonicity of A, and then the strong monotonicity and Lipschitz continuity of ∇ω, we obtain an estimate of ‖x_{n+1} − y_n‖; since prox_{β_n f} is a firmly nonexpansive mapping, this estimate can be rearranged into a recursion of the form required by Lemma 2.2. Since {y_n} and {A(y_n)} are bounded (and hence, by the Lipschitz continuity of ∇ω, so is {∇ω(y_n)}), there exists M_1 > 0 such that sup{‖∇ω(y_{n−1})‖, ‖A(y_{n−1})‖} ≤ M_1 for all n ≥ 1. From conditions (ii) and (iv) we conclude that the perturbation terms in this recursion vanish in the limit. By Lemma 2.2 and condition (iii), we have ‖x_{n+1} − y_n‖ → 0 as n → ∞. It follows that {x_n} converges strongly to x̄ = argmin_{x∈Sol(MVI)} ω(x). This completes the proof.
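Putting the pieces together, here is a hedged end-to-end sketch of the explicit regularized forward-backward method (Algorithm 3.1 as reconstructed above), with parameter sequences of our own choosing that are not taken from the paper's conditions (i)-(iv):

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(4)
M = rng.standard_normal((5, 10))
B = M.T @ M / 10.0                 # PSD, rank-deficient: many MVI solutions
b = -B @ rng.standard_normal(10)
A = lambda x: B @ x + b
lam = 0.1                          # f = lam * ||.||_1, omega = 0.5 * ||.||^2

x = 10 * rng.standard_normal(10)
for n in range(1, 20_000):
    s = 0.9                            # relaxation s_n
    alpha = 1.0 / np.sqrt(n + 1)       # alpha_n -> 0
    beta = 0.3                         # beta_n, so alpha_n / beta_n -> 0
    u = soft_threshold(x - alpha * x - beta * A(x), beta * lam)
    x = (1 - s) * x + s * u
print("norm of the limit point:", np.linalg.norm(x))
print("forward-backward residual:",
      np.linalg.norm(x - soft_threshold(x - 0.3 * A(x), 0.3 * lam)))
```

Among all solutions of this toy (MVI), the iterates are driven toward the one of smallest norm by the vanishing −α_n∇ω(x_n) term.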
If A : H → H is an L_A-Lipschitz continuous and monotone operator, then we have the following convergence result.

Theorem 3.3
Let A be an L_A-Lipschitz continuous and monotone operator. Let {x_n} be defined by Algorithm 3.1. Then the iterative sequence {x_n} converges strongly to x̄, which is the minimum like-norm solution of (MVI).
Proof From Theorem 3.1, we know that y_n → x̄ = argmin_{x∈Sol(MVI)} ω(x). Therefore, it is sufficient to show that ‖x_{n+1} − y_n‖ → 0 as n → ∞. In view of the conditions on {α_n} and {β_n}, without loss of generality, we may assume that the step sizes are small enough for the estimates below. Using (10) and Algorithm 3.1, we get an estimate of ‖x_{n+1} − y_n‖; by the monotonicity of A and the strong monotonicity of ∇ω, we obtain inequality (24); then, by (24) and (23), we derive (25). From (25), (22), and condition (iv), we obtain a recursion to which Lemma 2.2 applies. By condition (iii) and Lemma 2.2, we deduce that ‖x_{n+1} − y_n‖ → 0 as n → ∞. This completes the proof.

Corollary 3.1
Let A be a hemicontinuous monotone operator. Let ω(x) = ½‖x‖². Let {x_n} be defined by Algorithm 3.1. Assume that both {β_n A(x_n)} and {A(y_n)} are bounded. Then the iterative sequence {x_n} converges strongly to x̄ = P_{Sol(MVI)}(0), which is the minimum-norm solution of (MVI).

Corollary 3.2
Let A be an L_A-Lipschitz continuous and monotone operator. Let ω(x) = ½‖x‖². Let {x_n} be defined by Algorithm 3.1. Then the iterative sequence {x_n} converges strongly to x̄ = P_{Sol(MVI)}(0), which is the minimum-norm solution of (MVI).

Application
Let f : H → (−∞, +∞] be a proper, lower semicontinuous, and convex function, and let g : H → (−∞, +∞) be a convex and Gâteaux differentiable function. Consider the optimization problem

min_{x∈H} f(x) + g(x). (P)

We denote by Sol(P) the solution set of problem (P). Notice that x̄ ∈ Sol(P) if and only if x̄ solves (MVI) with A = ∇g. Note that if g is convex and Gâteaux differentiable, then ∇g is norm-to-weak continuous and monotone. Hence, ∇g is a hemicontinuous monotone operator. On the other hand, when A = ∇g, the minimization problem corresponding to the regularized mixed variational inequality problem (7) becomes

min_{x∈H} f(x) + g(x) + γω(x). (26)

Since ω is a strongly convex function, the minimization problem (26) has a unique solution. Therefore, as an application of Theorem 3.2, we have the following result.
Then it is clear that conditions (i)-(iv) of Algorithm 3.1 and Algorithm 4.1 are satisfied.
In Fig. 1, we present the numerical results of Algorithm 4.1. If x_0 = 10, then the optimal solution is obtained after 8 iterations; if x_0 = 50, it is obtained after 19 iterations.
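The example behind Fig. 1 is not reproduced above, so the following hedged one-dimensional sketch is a stand-in of our own: f = 0, and g vanishing on [−1, 1] and quadratic outside, so that Sol(P) = [−1, 1] and the minimum-norm solution is 0; the iteration is the explicit method with the prox of f = 0 equal to the identity.

```python
def g_prime(x):
    # g(x) = 0.5*max(x-1, 0)^2 + 0.5*min(x+1, 0)^2, zero on [-1, 1]
    return max(x - 1.0, 0.0) + min(x + 1.0, 0.0)

def run(x0, tol=1e-6, max_iter=100_000):
    x = x0
    for n in range(1, max_iter + 1):
        alpha, beta = 1.0 / (n + 1), 0.5      # alpha_n / beta_n -> 0
        x_new = x - alpha * x - beta * g_prime(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

for x0 in (10.0, 50.0):
    x, n = run(x0)
    print(f"x0 = {x0}: stopped at x = {x:.4f} after {n} iterations")
```

The iterates settle near 0, the minimum-norm element of Sol(P) = [−1, 1]; the iteration counts here are for this toy problem only and are unrelated to Fig. 1.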

Concluding remarks
In this paper, we considered a class of regularized forward-backward splitting methods for finding the minimum like-norm solution of mixed variational inequalities and of a convex minimization problem in a Hilbert space. Strong convergence results were obtained for the forward-backward splitting method under the hemicontinuity assumption.

Funding
The work was partially supported by the Heilongjiang Provincial Natural Sciences Grant (No. LH2022A017) and the National Natural Sciences Grant (No. 11871182).
