
# Regularization of ill-posed mixed variational inequalities with non-monotone perturbations

Journal of Inequalities and Applications 2011, 2011:25

https://doi.org/10.1186/1029-242X-2011-25

• Received: 10 February 2011
• Accepted: 21 July 2011

## Abstract

In this paper, we study a regularization method for ill-posed mixed variational inequalities with non-monotone perturbations in Banach spaces. The convergence and convergence rates of the regularized solutions are established using both an a priori and an a posteriori regularization parameter choice, the latter based on the generalized discrepancy principle.

## Keywords

• monotone mixed variational inequality
• non-monotone perturbations
• regularization
• convergence rate

## 1 Introduction

Variational inequality problems in finite-dimensional and infinite-dimensional spaces appear in many fields of applied mathematics such as convex programming, nonlinear equations, equilibrium models in economics, and engineering (see ). Therefore, methods for solving variational inequalities and related problems have wide applicability. In this paper, we consider the mixed variational inequality: for a given f ∈ X*, find an element x0 ∈ X such that

〈A(x0) − f, x − x0〉 + φ(x) − φ(x0) ≥ 0, ∀x ∈ X, (1)
where A : X → X* is a monotone-bounded hemicontinuous operator with domain D(A) = X, φ : X → ℝ ∪ {+∞} is a proper convex lower semicontinuous functional, and X is a real reflexive Banach space with dual space X*. For the sake of simplicity, the norms of X and X* are denoted by the same symbol || · ||. We write 〈x*, x〉 instead of x*(x) for x* ∈ X* and x ∈ X.
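For intuition, problem (1) can be checked directly in a one-dimensional toy case. The following sketch is our illustration, not part of the paper: it takes X = ℝ, A(x) = x, φ(x) = |x| and f = 0.3, for which the solution of (1) is given by soft-thresholding, and verifies the inequality on a grid of test points.

```python
# Toy mixed variational inequality in X = R (illustration only):
# find x0 with <A(x0) - f, x - x0> + phi(x) - phi(x0) >= 0 for all x in X,
# where A(x) = x (monotone), phi(x) = |x| (proper convex l.s.c.), f = 0.3.
# The optimality condition 0 in x0 - f + d|x0| gives soft-thresholding:
# x0 = sign(f) * max(|f| - 1, 0).

def soft_threshold(f, lam=1.0):
    sign = 1.0 if f >= 0 else -1.0
    return sign * max(abs(f) - lam, 0.0)

def vi_residual(x0, f, test_points):
    """Smallest value of <A(x0) - f, x - x0> + phi(x) - phi(x0) over test points."""
    return min((x0 - f) * (x - x0) + abs(x) - abs(x0) for x in test_points)

f = 0.3
x0 = soft_threshold(f)                      # |f| <= 1, hence x0 = 0
grid = [i / 100.0 for i in range(-300, 301)]
print(x0, vi_residual(x0, f, grid))         # residual stays >= 0
```

The grid check confirms that the defining inequality of (1) holds at x0 for every test point.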

By S0 we denote the solution set of the problem (1). It is easy to see that S0 is closed and convex whenever it is not empty. For the existence of a solution to (1), we have the following well-known result (see ):

Theorem 1.1. If there exists u ∈ dom φ satisfying the coercive condition

(〈A(x) − f, x − u〉 + φ(x) − φ(u)) / ||x|| → +∞ as ||x|| → +∞, (2)
then (1) has at least one solution.

Many standard extremal problems can be considered as special cases of (1). Denote by φ the indicator function of a closed convex set K in X,

φ(x) = I_K(x) = 0 if x ∈ K, and +∞ otherwise.

Then, the problem (1) is equivalent to that of finding x0 ∈ K such that

〈A(x0) − f, x − x0〉 ≥ 0, ∀x ∈ K. (3)

In the case K is the whole space X, the latter variational inequality takes the form of the operator equation

A(x) = f. (4)

When A is the Gâteaux derivative of a finite-valued convex function F defined on X, the problem (1) becomes the nondifferentiable convex optimization problem (see ):

F(x) + φ(x) − 〈f, x〉 → min over x ∈ X. (5)
Some methods have been proposed for solving problem (1), for example, the proximal point method (see ) and the auxiliary subproblem principle (see ). However, the problem (1) is in general ill-posed: its solutions do not depend continuously on the data (A, f, φ), so stable methods are needed for solving it. A widely used and efficient method is the regularization method introduced by Liskovets, based on the perturbed mixed variational inequality of finding x_τ^α ∈ X such that

〈A_h(x_τ^α) + αU(x_τ^α − x*) − f_δ, x − x_τ^α〉 + φ_ε(x) − φ_ε(x_τ^α) ≥ 0, ∀x ∈ X, (6)

where A_h is a monotone operator, α is a regularization parameter, U is the duality mapping of X, x* ∈ X, and (A_h, f_δ, φ_ε) are approximations of (A, f, φ), τ = (h, δ, ε). The convergence rates of the regularized solutions defined by (6) were considered by Buong and Thuy.
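The stabilizing effect of a regularization of this type can be seen in a small numerical sketch. It is entirely our illustration, not the paper's example: the singular matrix, the data f, and the selection element x_star are arbitrary choices. The unregularized equation has a whole line of solutions; the regularized one, as α → 0, selects the solution nearest to x_star.

```python
# Regularization sketch in R^2 (illustration only).  A = diag(1, 0) is
# monotone but singular: A x = f with f = (1, 0) has the solutions (1, t)
# for every real t, so the problem is ill-posed.  The regularized equation
#     A x + alpha * (x - x_star) = f_delta
# is uniquely solvable for alpha > 0 and, as alpha -> 0 with delta/alpha -> 0,
# its solution tends to the solution of A x = f closest to x_star.

def regularized_solution(f_delta, x_star, alpha):
    # Solve (A + alpha I) x = f_delta + alpha * x_star componentwise.
    x1 = (f_delta[0] + alpha * x_star[0]) / (1.0 + alpha)
    x2 = (f_delta[1] + alpha * x_star[1]) / alpha
    return (x1, x2)

x_star = (0.5, 0.7)
f = (1.0, 0.0)
x_alpha = regularized_solution(f, x_star, alpha=1e-3)
print(x_alpha)    # approaches (1, 0.7), the x_star-nearest solution
```

Note how the second component, undetermined by the data, is filled in by the selection element: this is precisely the role the criterion of selection plays below.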

In this paper, we do not require A_h to be monotone. In this case, the regularized variational inequality (6) may be unsolvable. In order to avoid this fact, we introduce the regularized problem of finding x_τ^α ∈ X such that

〈A_h(x_τ^α) + αU_s(x_τ^α − x*) − f_δ, x − x_τ^α〉 + φ_ε(x) − φ_ε(x_τ^α) ≥ −μ g(||x_τ^α||) ||x − x_τ^α||, ∀x ∈ X, (7)

where μ > 0 is small enough, U_s is the generalized duality mapping of X (see Definition 1.3), x* ∈ X plays the role of a criterion of selection, and g is defined below.

Assume that the solution set S0 of the inequality (1) is non-empty, and its data A, f, φ are given approximately by A_h, f_δ, φ_ε satisfying the conditions:

(1) ||f − f_δ|| ≤ δ, δ → 0;

(2) A_h : X → X* are bounded hemicontinuous operators, not necessarily monotone, with D(A_h) = X and

||A_h(x) − A(x)|| ≤ h g(||x||), ∀x ∈ X, h → 0,

where g(t) is a non-negative bounded function (i.e., it maps bounded sets onto bounded sets);

(3) φ_ε : X → ℝ ∪ {+∞} is a proper convex lower semicontinuous functional for which there exist positive numbers c_ε and r_ε such that

φ_ε(x) ≥ −c_ε ||x|| for ||x|| > r_ε, c_ε ≤ C0, and |φ_ε(x) − φ(x)| ≤ ε d(||x||), ∀x ∈ X,

where C0 is some positive constant and d(t) has the same properties as g(t).

In the next section, we consider the existence and uniqueness of solutions of (7) for every α > 0. Then, we show that the regularized solutions converge to x0 ∈ S0, the x*-minimal norm solution defined by

||x0 − x*|| = min_{x ∈ S0} ||x − x*||.
The convergence rate of the regularized solutions to x0 will be established under the condition of inverse-strong monotonicity of A and a regularization parameter choice based on the generalized discrepancy principle.

We now recall some known definitions (see ).

Definition 1.1. An operator A : D(A) = X → X* is said to be

(a) hemicontinuous if A(x + t_n y) ⇀ A(x) as t_n → 0+ for all x, y ∈ X, and demicontinuous if x_n → x implies A(x_n) ⇀ A(x);

(b) monotone if 〈Ax − Ay, x − y〉 ≥ 0, ∀x, y ∈ X;

(c) inverse-strongly monotone if 〈Ax − Ay, x − y〉 ≥ m_A ||Ax − Ay||², ∀x, y ∈ X, where m_A is a positive constant.

It is well known that a monotone and hemicontinuous operator is demicontinuous, and that a convex and lower semicontinuous functional is weakly lower semicontinuous (see ). Note that an inverse-strongly monotone operator need not be strongly monotone (see ).
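A concrete scalar instance of the last remark (our illustration, not from the paper): A(x) = tanh(x) is monotone and 1-Lipschitz on ℝ, hence inverse-strongly monotone with m_A = 1, yet the monotonicity quotient decays at infinity, so A is not strongly monotone.

```python
import math
import random

# A(x) = tanh(x) is monotone and 1-Lipschitz on R, hence inverse-strongly
# monotone with constant m_A = 1:
#     <Ax - Ay, x - y> = |Ax - Ay| * |x - y| >= |Ax - Ay|^2.
# It is not strongly monotone: the quotient <Ax - Ay, x - y> / |x - y|^2
# can be made arbitrarily small by moving x, y far from the origin.

A = math.tanh
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10.0, 10.0), random.uniform(-10.0, 10.0)
    lhs = (A(x) - A(y)) * (x - y)
    assert lhs >= (A(x) - A(y)) ** 2 - 1e-12    # Definition 1.1(c) with m_A = 1

# Strong monotonicity fails far from the origin:
print((A(100.0) - A(101.0)) * (100.0 - 101.0))  # essentially 0
```

The same argument shows that every monotone L-Lipschitz map of a Hilbert space into itself is inverse-strongly monotone with constant 1/L.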

Definition 1.2. An operator A : X → X* is said to have the S-property if the weak convergence x_n ⇀ x and 〈Ax_n − Ax, x_n − x〉 → 0 imply the strong convergence x_n → x as n → ∞.

Definition 1.3. The operator U_s : X → X* is called the generalized duality mapping of X if

〈U_s(x), x〉 = ||x||^s and ||U_s(x)|| = ||x||^{s−1}, s ≥ 2.

When s = 2, we obtain the duality mapping U. If X and X* are strictly convex spaces, then U_s is single-valued, strictly monotone, coercive, and demicontinuous (see ).

Assume that the generalized duality mapping U_s satisfies the condition

〈U_s(x) − U_s(y), x − y〉 ≥ m_s ||x − y||^s, ∀x, y ∈ X, (13)

where m_s is a positive constant. It is well known that when X is a Hilbert space, then U_s = I with s = 2 and m_s = 1, where I denotes the identity operator of the space (see ).
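In ℝⁿ with the Euclidean norm (a Hilbert space), U_s has the explicit form U_s(x) = ||x||^{s−2} x, and the defining relations of Definition 1.3 together with (13) for s = 2, m_2 = 1 can be verified numerically. This check is our illustration and not from the paper.

```python
import math
import random

# Generalized duality mapping on R^n with the Euclidean norm:
#     U_s(x) = ||x||^(s-2) * x,
# so that <U_s(x), x> = ||x||^s and ||U_s(x)|| = ||x||^(s-1); U_2 = I.

def norm(x):
    return math.sqrt(sum(t * t for t in x))

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def U(s, x):
    n = norm(x)
    c = n ** (s - 2) if n > 0 else 0.0
    return tuple(c * t for t in x)

random.seed(1)
for _ in range(100):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    for s in (2, 3, 4):
        assert abs(dot(U(s, x), x) - norm(x) ** s) < 1e-9
    # Condition (13) for s = 2 with m_2 = 1 (equality in a Hilbert space):
    d = tuple(p - q for p, q in zip(x, y))
    du = tuple(p - q for p, q in zip(U(2, x), U(2, y)))
    assert abs(dot(du, d) - norm(d) ** 2) < 1e-9
```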

## 2 Main result

Lemma 2.1. Let X* be a strictly convex Banach space. Assume that A is a monotone-bounded hemicontinuous operator with D(A) = X and that conditions (2) and (3) are satisfied. Then, the inequality (7) has a non-empty solution set S_ε for each α > 0 and f_δ ∈ X*.

Proof. Let x_ε ∈ dom φ_ε. The monotonicity of A and assumption (3) imply that, for ||x|| > r_ε, the coercive condition (2) is fulfilled for the pair (A + αU_s, φ_ε). Thus, for each α > 0 and f_δ ∈ X*, there exists a solution of the inequality

〈A(y) + αU_s(y − x*) − f_δ, x − y〉 + φ_ε(x) − φ_ε(y) ≥ 0, ∀x ∈ X. (14)

Observe that the unique solvability of this inequality follows from the monotonicity of A and the strict monotonicity of U_s. Indeed, let x1 and x2 be two different solutions of (14). Writing (14) for x1 with x = x2, for x2 with x = x1, and adding the two inequalities, we obtain

〈A(x1) − A(x2) + α(U_s(x1 − x*) − U_s(x2 − x*)), x1 − x2〉 ≤ 0.

Due to the monotonicity of A and the strict monotonicity of U_s, the last inequality occurs only if x1 = x2.

Let x̄ be the solution of (14), that is,

〈A(x̄) + αU_s(x̄ − x*) − f_δ, x − x̄〉 + φ_ε(x) − φ_ε(x̄) ≥ 0, ∀x ∈ X.

By condition (2), 〈A_h(x̄) − A(x̄), x − x̄〉 ≥ −h g(||x̄||) ||x − x̄||. Since μ ≥ h, we can conclude that x̄ is a solution of (7). □

Let x_τ^α be a solution of (7). We have the following result.

Theorem 2.1. Let X and X* be strictly convex Banach spaces and let A be a monotone-bounded hemicontinuous operator with D(A) = X. Assume that conditions (1)-(3) are satisfied, that the operator U_s satisfies condition (13) and, in addition, that the operator A has the S-property. Let α → 0 be chosen such that

(δ + ε + μ)/α → 0 as δ, ε, μ → 0.

Then x_τ^α converges strongly to the x*-minimal norm solution x0 ∈ S0.

Proof. The monotonicity of A, assumption (1), and the inequalities (8), (9), (13) and (20) yield a relation which, since μ/α → 0 as α → 0 (and consequently h/α → 0), together with (19) shows that the set {x_τ^α} is bounded. Therefore, there exists a subsequence of {x_τ^α}, denoted again by {x_τ^α}, which converges weakly to some element x̄.

We now prove the strong convergence of x_τ^α to x̄. The monotonicity of A and of U_s implies that

〈A(x_τ^α) − A(x̄), x_τ^α − x̄〉 → 0.

In view of the weak convergence of x_τ^α to x̄, and since φ_ε is proper, convex and weakly lower semicontinuous, we have from (25) that x̄ ∈ dom φ. Finally, the S-property of A implies the strong convergence of x_τ^α to x̄.

We show that x̄ ∈ S0. By (8), taking into account (7), we obtain an estimate for 〈A(x_τ^α) − f_δ, x − x_τ^α〉 + φ_ε(x) − φ_ε(x_τ^α). Since {x_τ^α} is bounded, by (9) there exists a positive constant c2 such that ||A_h(x_τ^α) − A(x_τ^α)|| ≤ c2 h. Letting α → 0 in the inequality (7), and using the demicontinuity of A together with (8), (9), (28), (29) and condition (1), we obtain

〈A(x̄) − f, x − x̄〉 + φ(x) − φ(x̄) ≥ 0, ∀x ∈ X.

This means that x̄ ∈ S0.

We show that x̄ = x0. Applying the monotonicity of U_s and the inequalities (8), (9) and (13), we can rewrite (17) as an inequality for 〈U_s(x − x*), x − x_τ^α〉 with x ∈ S0. Replacing x by x_t = tx̄ + (1 − t)y, y ∈ S0, t ∈ (0, 1), in the last inequality, dividing by (1 − t) and then letting t tend to 1, we get

〈U_s(x̄ − x*), y − x̄〉 ≥ 0, ∀y ∈ S0.

Using the property of U_s, we have that ||x̄ − x*|| ≤ ||y − x*|| for all y ∈ S0. Because of the convexity and the closedness of S0, and the strict convexity of X, we can conclude that x̄ = x0. The proof is complete.

Now, we consider the problem of choosing the regularization parameter a posteriori, α = α(μ, δ, ε), such that

α(μ, δ, ε) → 0 and x_τ^α → x0 as μ, δ, ε → 0.

To solve this problem, we use the function ρ(α) for selecting α by the generalized discrepancy principle, i.e., α = α(μ, δ, ε) is constructed on the basis of the equation

α^q ρ(α) = c(δ + ε)^p, p, q > 0, (30)

with ρ(α) = ||x_τ^α − x*||^s, where x_τ^α is the solution of (7), and c is some positive constant.
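An equation of the type (30) can be solved numerically, for instance by bisection, once ρ(α) is computable. The sketch below is entirely our illustration under simplifying assumptions (a diagonal singular model problem, s = 2, p = 1, q = 3, c = 1): it is not the paper's construction, but it shows the mechanism of the generalized discrepancy principle.

```python
# Bisection solve of  alpha^q * rho(alpha) = c * delta^p  for a toy model:
# A = diag(1, 0), f = (1, 0), x_star = (0.5, 0.7),
# rho(alpha) = ||x_alpha - x_star||^2  (s = 2), p = 1, q = 3, c = 1.

def x_alpha(alpha, f_delta=(1.0, 0.0), x_star=(0.5, 0.7)):
    # Regularized solution of (A + alpha I) x = f_delta + alpha * x_star.
    return ((f_delta[0] + alpha * x_star[0]) / (1.0 + alpha),
            (f_delta[1] + alpha * x_star[1]) / alpha)

def rho(alpha, x_star=(0.5, 0.7)):
    x = x_alpha(alpha)
    return sum((a - b) ** 2 for a, b in zip(x, x_star))

def discrepancy_alpha(delta, p=1, q=3, c=1.0, lo=1e-6, hi=10.0, iters=200):
    g = lambda a: a ** q * rho(a) - c * delta ** p
    assert g(lo) < 0.0 < g(hi)          # sign change brackets a root
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

delta = 1e-2
a = discrepancy_alpha(delta)
print(a)    # the a posteriori parameter alpha(delta)
```

Continuity of ρ(α), established in Lemma 2.2 below, is exactly what makes such a root-finding procedure well defined.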

Lemma 2.2. Let X and X* be strictly convex Banach spaces and let A : X → X* be a monotone-bounded hemicontinuous operator with D(A) = X. Assume that conditions (1), (2) are satisfied and that the operator U_s satisfies condition (13). Then, the function ρ(α) is single-valued and continuous for α ≥ α0 > 0, where x_τ^α is the solution of (7).

Proof. The unique solvability of the inequality (7) implies that ρ(α) is single-valued. Let α1, α2 ≥ α0 be arbitrary (α0 > 0), and let x1 and x2 be the solutions of (7) with α = α1 and α = α2, respectively. Using (7), the condition (2) and the monotonicity of A, we obtain an estimate which shows that ||x1 − x2|| → 0 as μ → 0 and α1 → α2. It means that the map α ↦ x_τ^α is continuous on [α0, +∞). Therefore, ρ(α) is also continuous on [α0, +∞).

Theorem 2.2. Let X and X* be strictly convex Banach spaces and let A : X → X* be a monotone-bounded hemicontinuous operator with D(A) = X. Assume that conditions (1)-(3) are satisfied and that the operator U_s satisfies condition (13). Then

(i) there exists at least one solution of the equation (30);

(ii) let μ, δ, ε → 0; then

(1) α(μ, δ, ε) → 0;

(2) if 0 < p < q, then x_τ^α converges to the x*-minimal norm solution x0, and there exist constants C1, C2 > 0 such that for sufficiently small μ, δ, ε > 0 the relation

C1 ≤ α(μ, δ, ε)(δ + ε)^{−p/q} ≤ C2 (32)

holds.

Proof. (i) We invoke the condition (1), the monotonicity of A, (8), (10), (12), and the last inequality to deduce that α^q ρ(α) is bounded above by a quantity that tends to zero as α → +0. Therefore, lim_{α→+0} α^q ρ(α) = 0. Since ρ(α) is continuous, there exists at least one α = α(μ, δ, ε) which satisfies (30).

(ii) It follows from (30) and the form of ρ(α) that

α(μ, δ, ε)^q = c(δ + ε)^p / ρ(α(μ, δ, ε)).

Therefore, α(μ, δ, ε) → 0 as μ, δ, ε → 0.

By Theorem 2.1, the sequence x_τ^α converges to the x*-minimal norm solution x0 ∈ S0 as μ, δ, ε → 0.

Consequently, ρ(α) stays bounded for small μ, δ, ε; therefore, there exists a positive constant C2 such that the upper bound in (32) holds. On the other hand, because c > 0, there exists a positive constant C1 such that the lower bound in (32) holds. This finishes the proof.

Theorem 2.3. Let X be a strictly convex Banach space and let A be a monotone-bounded hemicontinuous operator with D(A) = X. Suppose that

(i) for each h, δ, ε > 0 conditions (1)-(3) are satisfied;

(ii) U_s satisfies condition (13);

(iii) A is an inverse-strongly monotone operator from X into X*, Fréchet differentiable in some neighborhood of x0 ∈ S0, and satisfies

||A(x) − A(x0) − A′(x0)(x − x0)|| ≤ σ ||A(x) − A(x0)|| (34)

for all x in this neighborhood and some σ > 0.

Then the convergence rate estimate (35) holds.

Proof. By an argument analogous to that used for the proof of the first part of Theorem 2.1, we have (21). The boundedness of the sequence {x_τ^α} follows from (21) and the properties of g(t), d(t) and α. On the other hand, based on (20), the property of U_s and the inverse-strong monotonicity of A, we get an estimate of the form

m_s α ||x_τ^α − x0||^s ≤ c̃1(μ + δ + ε) + c̃2 α ||x_τ^α − x0|| + c̃3(μ + δ + ε)||x_τ^α − x0||,

where c̃_i, i = 1, 2, 3, are positive constants. Using the implication

a, b, c ≥ 0, s > t, a^s ≤ b a^t + c ⟹ a^s = O(b^{s/(s−t)} + c),

we obtain the convergence rate (35). □
Remark 2.1. If α is chosen a priori such that α ~ (μ + δ + ε)^η, 0 < η < 1, then (35) yields a convergence rate for x_τ^α to x0 in powers of μ + δ + ε.
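The trade-off behind such an a priori choice can be illustrated numerically. The sketch below is ours, for an assumed diagonal model problem; the constants and the rate O(δ^{1/2}) belong to this toy example, not to the estimate (35).

```python
# A priori choice alpha = delta^eta for the toy model A = diag(1, 0),
# f = (1, 0), x_star = (0.5, 0.7), noisy data f_delta = (1 + delta, 0).
# The error splits as O(alpha) + O(delta / alpha); eta = 1/2 balances the
# two terms and yields the rate O(delta^(1/2)) for this example.

def error(delta, eta):
    alpha = delta ** eta
    f_delta = (1.0 + delta, 0.0)
    x_star = (0.5, 0.7)
    x1 = (f_delta[0] + alpha * x_star[0]) / (1.0 + alpha)
    x2 = (f_delta[1] + alpha * x_star[1]) / alpha
    x0 = (1.0, 0.7)     # the x_star-minimal norm solution of A x = f
    return ((x1 - x0[0]) ** 2 + (x2 - x0[1]) ** 2) ** 0.5

for delta in (1e-2, 1e-4, 1e-6):
    print(delta, error(delta, eta=0.5))   # decays like sqrt(delta)
```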

Remark 2.2. Condition (34) was proposed in the literature for studying the convergence of the Landweber iteration method for a class of nonlinear operators. This condition has also been used to estimate convergence rates of regularized solutions of ill-posed variational inequalities.

Remark 2.3. The generalized discrepancy principle for the regularization parameter choice was presented for the ill-posed operator equation (4) when A is a linear and bounded operator in a Hilbert space. It has also been applied to estimating convergence rates of the regularized solutions of equation (4) involving an accretive operator.

## Authors’ Affiliations

(1)
College of Sciences, Thainguyen University, Thainguyen, Vietnam
