 Research
 Open Access
Regularization of ill-posed mixed variational inequalities with nonmonotone perturbations
Journal of Inequalities and Applications volume 2011, Article number: 25 (2011)
Abstract
In this paper, we study a regularization method for ill-posed mixed variational inequalities with nonmonotone perturbations in Banach spaces. The convergence and convergence rates of the regularized solutions are established using an a priori and an a posteriori regularization parameter choice, the latter based on the generalized discrepancy principle.
1 Introduction
Variational inequality problems in finite-dimensional and infinite-dimensional spaces appear in many fields of applied mathematics such as convex programming, nonlinear equations, equilibrium models in economics, and engineering (see [1–3]). Therefore, methods for solving variational inequalities and related problems have wide applicability. In this paper, we consider the mixed variational inequality: for a given f ∈ X*, find an element x_{0} ∈ X such that
where A : X → X* is a monotone, bounded, hemicontinuous operator with domain D(A) = X, φ : X → ℝ is a proper convex lower semicontinuous functional, and X is a real reflexive Banach space with dual space X*. For simplicity, the norms of X and X* are denoted by the same symbol ‖·‖. We write 〈x*, x〉 instead of x*(x) for x* ∈ X* and x ∈ X.
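The display of inequality (1) was lost in extraction. Given the data just described, the standard form of the mixed variational inequality studied here (cf. [4]) is the following reconstruction:

```latex
\langle A(x_{0}) - f,\; x - x_{0}\rangle + \varphi(x) - \varphi(x_{0}) \ge 0,
\qquad \forall x \in X. \tag{1}
```

The exact notation of the original display may differ, but all later references to (1) are consistent with this form.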
By S_{0} we denote the solution set of problem (1). It is easy to see that S_{0} is closed and convex whenever it is nonempty. For the existence of a solution to (1), we have the following well-known result (see [4]):
Theorem 1.1. If there exists u ∈ dom φ satisfying the coercive condition
then (1) has at least one solution.
Many standard extremal problems can be considered as special cases of (1). Let φ be the indicator function of a closed convex set K in X,
Then, the problem (1) is equivalent to that of finding x_{0} ∈ K such that
When K is the whole space X, the latter variational inequality takes the form of the operator equation:
When A is the Gâteaux derivative of a finite-valued convex function F defined on X, problem (1) becomes the nondifferentiable convex optimization problem (see [4]):
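The displays for the special cases above were lost in extraction. Their standard forms are sketched below; only the tag (4) for the operator equation is confirmed by a later cross-reference (Remark 2.3), the remaining tags are omitted:

```latex
% Indicator function of the closed convex set K:
\varphi(x) = \begin{cases} 0, & x \in K,\\ +\infty, & x \notin K. \end{cases}

% Problem (1) then reduces to: find x_0 \in K such that
\langle A(x_{0}) - f,\; x - x_{0}\rangle \ge 0, \qquad \forall x \in K.

% When K = X, this is the operator equation
A(x) = f. \tag{4}

% When A = F' (Gateaux derivative of a convex F), (1) becomes
\min_{x \in X}\,\bigl\{F(x) - \langle f, x\rangle + \varphi(x)\bigr\}.
```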
Some methods have been proposed for solving problem (1), for example, the proximal point method (see [5]) and the auxiliary subproblem principle (see [6]). However, problem (1) is in general ill-posed: its solutions need not depend continuously on the data (A, f, φ), so stable methods are required for solving it. A widely used and efficient method is the regularization method introduced by Liskovets [7], which uses the perturbed mixed variational inequality:
where A_{ h } is a monotone operator, α is a regularization parameter, U is the duality mapping of X, x_{ * } ∈ X, and (A_{ h }, f_{ δ }, φ_{ ε }) are approximations of (A, f, φ), τ = (h, δ, ε). The convergence rates of the regularized solutions defined by (6) were considered by Buong and Thuy [8].
In this paper, we do not require A_{ h } to be monotone. In this case, the regularized variational inequality (6) may be unsolvable. In order to avoid this, we introduce the regularized problem of finding an element such that
where μ is a sufficiently small positive number, U^{s} is the generalized duality mapping of X (see Definition 1.3), the fixed element of X appearing in (7) plays the role of a selection criterion, and g is defined below.
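The display of (7) was lost in extraction. Based on the surrounding description (the term αU^{s}, the role of g, and the fact, used at the end of the proof of Lemma 2.1, that μ ≥ h turns solutions of the inequality with the unperturbed operator A into solutions of (7)), a plausible reconstruction in the spirit of Liskovets [7] is: find x ∈ X such that

```latex
\langle A_{h}(x) + \alpha U^{s}(x - x_{*}) - f_{\delta},\; z - x\rangle
+ \varphi_{\varepsilon}(z) - \varphi_{\varepsilon}(x)
\ge -\mu\, g(\|x\|)\,\|z - x\|, \qquad \forall z \in X, \tag{7}
```

where writing the selection element as x_{*}, as in (6), is an assumption; the symbol used in the original was lost.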
Assume that the solution set S_{0} of the inequality (1) is nonempty, and its data A, f, φ are given by A_{ h } , f_{ δ } , φ_{ ε } satisfying the conditions:
(1) ‖f − f_{ δ }‖ ≤ δ, δ → 0;
(2) A_{ h } : X → X* is not necessarily monotone, D(A_{ h } ) = D(A) = X, and
with a nonnegative function g(t) satisfying the condition
(3) φ_{ ε } : X → ℝ is a proper convex lower semicontinuous functional for which there exist positive numbers c_{ ε } and r_{ ε } such that
and
where C_{0} is some positive constant, d(t) has the same properties as g(t).
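The displays accompanying conditions (2) and (3) were lost in extraction. From the way they are invoked in the proofs ((8) together with μ ≥ h in Lemma 2.1, and (9) for bounds involving d), plausible reconstructions of (8) and (9) are:

```latex
\|A_{h}x - Ax\| \le h\, g(\|x\|), \qquad \forall x \in X, \tag{8}

|\varphi_{\varepsilon}(x) - \varphi(x)| \le \varepsilon\, d(\|x\|), \qquad \forall x \in X. \tag{9}
```

The precise form of (10), involving c_{ ε }, r_{ ε } and C_{0}, could not be recovered and is not reconstructed here.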
In the next section, we consider the existence and uniqueness of solutions of (7) for every α > 0. Then, we show that the regularized solutions converge to x_{0} ∈ S_{0}, the minimal-norm solution defined by
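The display defining the minimal-norm solution was lost in extraction. Its standard characterization is:

```latex
\|x_{0} - x_{*}\| = \min_{x \in S_{0}} \|x - x_{*}\|,
```

where x_{*} denotes the selection element (the assumption that it is written x_{*}, as in (6), is ours); when x_{*} = 0 this reduces to ‖x_{0}‖ = min_{x ∈ S_{0}} ‖x‖, matching the phrase "minimal norm solution" used in the text.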
The convergence rate of the regularized solutions to x_{0} will be established under an inverse-strong monotonicity condition on A and a regularization parameter choice based on the generalized discrepancy principle.
We now recall some known definitions (see [9–11]).
Definition 1.1. An operator A : D(A) = X → X* is said to be

(a)
hemicontinuous if A(x + t_{ n }y) ⇀ Ax as t_{ n } → 0^{+}, x, y ∈ X, and demicontinuous if x_{ n } → x implies Ax_{ n } ⇀ Ax;

(b)
monotone if 〈Ax − Ay, x − y〉 ≥ 0, ∀x, y ∈ X;

(c)
inverse-strongly monotone if
〈Ax − Ay, x − y〉 ≥ m_{ A }‖Ax − Ay‖^{2}, ∀x, y ∈ X, (11)
where m_{ A } is a positive constant.
It is well-known that a monotone and hemicontinuous operator is demicontinuous, and that a convex lower semicontinuous functional is weakly lower semicontinuous (see [9]). Note that an inverse-strongly monotone operator need not be strongly monotone (see [10]).
Definition 1.2. An operator A : X → X* is said to have the S-property if the weak convergence x_{ n } ⇀ x and 〈Ax_{ n } − Ax, x_{ n } − x〉 → 0 imply the strong convergence x_{ n } → x as n → ∞.
Definition 1.3. The operator U^{s} : X → X* is called the generalized duality mapping of X if
〈U^{s}x, x〉 = ‖x‖^{s} and ‖U^{s}x‖ = ‖x‖^{s−1}.
When s = 2, we have the duality mapping U. If X and X* are strictly convex spaces, U^{s} is single-valued, strictly monotone, coercive, and demicontinuous (see [9]).
Let X = L^{p} (Ω) with p ∈ (1, ∞) and Ω ⊂ ℝ ^{m} measurable; then we have
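The explicit formula, lost in extraction, is presumably the classical expression for the generalized duality mapping in L^{p} spaces (see, e.g., [9, 12]):

```latex
U^{s}x = \|x\|_{L^{p}}^{\,s-p}\,|x(t)|^{p-2}\,x(t), \qquad t \in \Omega.
```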
Assume that the generalized duality mapping U^{s} satisfies the following condition:
where m_{ s } is a positive constant. It is wellknown that when X is a Hilbert space, then U^{s} = I, s = 2 and m_{ s } = 1, where I denotes the identity operator in the setting space (see [12]).
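Condition (13) itself was lost in extraction. In this literature (see [9, 12]) it is typically the following lower estimate for U^{s}, which is consistent with every later use of (13) in the proofs:

```latex
\langle U^{s}x - U^{s}y,\; x - y\rangle \ge m_{s}\,\|x - y\|^{s},
\qquad \forall x, y \in X. \tag{13}
```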
2 Main result
Lemma 2.1. Let X* be a strictly convex Banach space. Assume that A is a monotone, bounded, hemicontinuous operator with D(A) = X and that conditions (2) and (3) are satisfied. Then, inequality (7) has a nonempty solution set S_{ ε } for each α > 0 and f_{ δ } ∈ X*.
Proof. Let x_{ ε } ∈ dom φ_{ ε } . The monotonicity of A and assumption (3) imply the following inequality:
for ‖x‖ > r_{ ε }. Consequently, the coercivity condition (2) is fulfilled for the pair (A + αU^{s} , φ_{ ε } ). Thus, for each α > 0 and f_{ δ } ∈ X*, there exists a solution of the following inequality:
Observe that the unique solvability of this inequality follows from the monotonicity of A and the strict monotonicity of U^{s} . Indeed, let x_{1} and x_{2} be two different solutions of (14). Then,
and
Putting z = x_{2} in (15) and z = x_{1} in (16) and adding the resulting inequalities, we obtain
Due to the monotonicity of A and the strict monotonicity of U^{s}, the last inequality can hold only if x_{1} = x_{2}.
Let be a solution of (14), that is,
For all h > 0, making use of (8), from (17) one gets
Since μ ≥ h, we can conclude that each such element is a solution of (7).
□
Let be a solution of (7). We have the following result.
Theorem 2.1. Let X and X* be strictly convex Banach spaces and let A be a monotone, bounded, hemicontinuous operator with D(A) = X. Assume that conditions (1)–(3) are satisfied, that the operator U^{s} satisfies condition (13) and, in addition, that the operator A has the S-property. Let
Then the regularized solutions converge strongly to the minimal-norm solution x_{0} ∈ S_{0}.
Proof. By (1) and (7), we obtain
This inequality is equivalent to the following
The monotonicity of A, assumption (1), and the inequalities (8), (9), (13) and (20) yield the relation
Since μ/α → 0 as α → 0 (and consequently h/α → 0), it follows from (19) and the last inequality that the set of regularized solutions is bounded. Therefore, there exists a subsequence, denoted in the same way, that converges weakly.
We now prove that the subsequence converges strongly to its weak limit. The monotonicity of A and U^{s} implies that
In view of the weak convergence established above, we have
By virtue of (8),
Using further (7), we deduce
Using the weak convergence above and the fact that φ_{ ε } is proper, convex and weakly lower semicontinuous, we have from (25) that
By (22)–(24) and (26), it follows that
Finally, the S-property of A implies the desired strong convergence.
We now show that the limit belongs to S_{0}. By (8), taking (7) into account, we obtain
Since the functional φ is weakly lower semicontinuous,
Since the sequence is bounded, by (9) there exists a positive constant c_{2} such that
Letting α → 0 in inequality (7) and using the demicontinuity of A, relations (8), (9), (28), (29) and condition (1) imply that
This means that the weak limit belongs to S_{0}.
We now show that the limit has minimal norm. Applying the monotonicity of U^{s} and the inequalities (8), (9) and (13), we can rewrite (17) as
Since α → 0, ε/α, δ/α, μ/α → 0 (and h/α → 0), the last inequality becomes
Replacing x by the convex combination with parameter t ∈ (0, 1) in the last inequality, dividing by (1 − t) and then letting t tend to 1, we get
or
Using the property of U^{s}, we obtain the corresponding inequality for all x ∈ S_{0}. Because of the convexity and closedness of S_{0}, and the strict convexity of X, we can conclude that the limit coincides with x_{0}. The proof is complete.
□
Now, we consider the problem of choosing an a posteriori regularization parameter such that
To solve this problem, we select the parameter by the generalized discrepancy principle, i.e., the choice is based on the following equation:
where the corresponding solution of (7) is used and c is some positive constant.
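The display of equation (30) was lost in extraction. Consistent with the limits established in the proof of Theorem 2.2 (α^{q}ρ(α) → 0 as α → +0, with the opposite behavior for large α) and with the exponents p, q used below, a plausible form of the generalized discrepancy equation is

```latex
\alpha^{q}\,\rho(\alpha) = c\,(\mu + \delta + \varepsilon)^{p},
\qquad p, q > 0, \tag{30}
```

where the precise definition of ρ(α) in terms of the solution of (7) was also lost and is not reconstructed here.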
Lemma 2.2. Let X and X* be strictly convex Banach spaces and let A : X → X* be a monotone, bounded, hemicontinuous operator with D(A) = X. Assume that conditions (1) and (2) are satisfied and that the operator U^{s} satisfies condition (13). Then, the function ρ(α), defined via the solution of (7), is single-valued and continuous for α ≥ α_{0} > 0.
Proof. The unique solvability of inequality (7) implies that the function ρ(α) is single-valued. Let α_{1}, α_{2} ≥ α_{0} be arbitrary (α_{0} > 0). It follows from (7) that
where the two elements are the solutions of (7) with α = α_{1} and α = α_{2}, respectively. Using condition (2) and the monotonicity of A, we have
It follows from (13) and the last inequality that
Obviously, the right-hand side tends to zero as μ → 0 and α_{1} → α_{2}. This means that the solution of (7) depends continuously on α on [α_{0}, +∞). Therefore, ρ(α) is also continuous on [α_{0}, +∞).
Theorem 2.2. Let X and X* be strictly convex Banach spaces and let A : X → X* be a monotone, bounded, hemicontinuous operator with D(A) = X. Assume that conditions (1)–(3) are satisfied and the operator U^{s} satisfies condition (13). Then
(i) there exists at least one solution of equation (30);
(ii) let μ, δ, ε → 0. Then
(1);
(2) if 0 < p < q, then the regularized solutions converge to the solution with minimal norm, and there exist constants C_{1}, C_{2} > 0 such that for sufficiently small μ, δ, ε > 0 the relation
holds.
Proof.
(i) For 0 < α < 1, it follows from (7) that
Hence,
We invoke the condition (1), the monotonicity of A, (8), (10), (12), and the last inequality to deduce that
It follows from (33) and the form of ρ(α) that
Therefore, lim_{α→+0}α^{q}ρ(α) = 0.
On the other hand,
Since ρ(α) is continuous, there exists at least one value of α which satisfies (30).
(ii) It follows from (30) and the form of ρ(α) that
Therefore, as μ, δ, ε → 0.
If 0 < p < q, it follows from (30) and (32) that
So,
By Theorem 2.1, the sequence of regularized solutions converges to the minimal-norm solution x_{0} ∈ S_{0} as μ, δ, ε → 0.
Clearly,
therefore, there exists a positive constant C_{2} such that (32) holds. On the other hand, because c > 0, there exists a positive constant C_{1} satisfying (32). This finishes the proof.
□
Theorem 2.3. Let X be a strictly convex Banach space and let A be a monotone, bounded, hemicontinuous operator with D(A) = X. Suppose that
(i) for each h, δ, ε > 0, conditions (1)–(3) are satisfied;
(ii) U^{s} satisfies condition (13);
(iii) A is an inverse-strongly monotone operator from X into X*, Fréchet differentiable in some neighborhood of x_{0} ∈ S_{0}, and satisfies
(iv) there exists z ∈ X such that
Then, if the parameter α = α(μ, δ, ε) is chosen by (30) with 0 < p < q, we have
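The displays in conditions (iii) and (iv) were lost in extraction. Since Remark 2.2 attributes (34) to the Landweber convergence analysis in [13], condition (34) is plausibly the tangential cone condition from that paper, and (iv) a standard source condition; both reconstructions below are assumptions, not confirmed by the source:

```latex
% Plausible form of (34) (tangential cone condition, cf. [13]),
% for x in a neighborhood of x_0 and some constant \tilde\tau > 0:
\|A(x) - A(x_{0}) - A'(x_{0})(x - x_{0})\|
\le \tilde{\tau}\,\|A(x) - A(x_{0})\|, \tag{34}

% Plausible form of the source condition (iv),
% with A'(x_0)^* the adjoint of the Frechet derivative:
U^{s}(x_{0}) = A'(x_{0})^{*} z.
```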
Proof. By an argument analogous to that used in the first part of the proof of Theorem 2.1, we have (21). The boundedness of the sequence of regularized solutions follows from (21) and the properties of g(t), d(t) and α. On the other hand, based on (20), the property of U^{s} and the inverse-strong monotonicity of A, we get that
Hence,
Further, by virtue of conditions (iii), (iv) and the last estimate, we obtain
Consequently, (21) has the form
When α is chosen by (30), it follows from Theorem 2.1 that
and
Therefore, it follows from (35) that
where the constants indexed by i = 1, 2, 3 are positive. Using the implication
we obtain
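The implication invoked here (its display was lost) is presumably the standard elementary estimate used to derive convergence rates from inequalities of the form a^{s} ≤ b·a + c:

```latex
a, b, c \ge 0,\; s > 1,\quad
a^{s} \le b\,a + c
\;\Longrightarrow\;
a \le \max\bigl\{(2b)^{1/(s-1)},\,(2c)^{1/s}\bigr\}.
```

Indeed, if ba ≥ c then a^{s} ≤ 2ba gives a ≤ (2b)^{1/(s−1)}, and otherwise a^{s} ≤ 2c gives a ≤ (2c)^{1/s}.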
Remark 2.1 If α is chosen a priori such that α ~ (μ + δ + ε) ^{η} , 0 < η < 1, it follows from (35) that
Therefore,
Remark 2.2 Condition (34) was proposed in [13] for the convergence analysis of the Landweber iteration method for a class of nonlinear operators. It was used to estimate convergence rates of regularized solutions of ill-posed variational inequalities in [14].
Remark 2.3 The generalized discrepancy principle for the regularization parameter choice was presented in [15] for the ill-posed operator equation (4) with A a linear bounded operator in a Hilbert space. It was applied to estimating convergence rates of the regularized solutions for equation (4) involving an accretive operator in [16].
References
Badriev IB, Zadvornov OA, Ismagilov LN: On iterative regularization methods for variational inequalities of the second kind with pseudomonotone operators. Comput Meth Appl Math 2003,3(2):223–234.
Konnov IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin; 2001.
Konnov IV, Volotskaya EO: Mixed variational inequalities and economic equilibrium problems. J Appl Math 2002,2(6):289–314. 10.1155/S1110757X02106012
Ekeland I, Temam R: Convex Analysis and Variational Problems. North-Holland Publ. Company, Amsterdam; 1970.
Noor MA: Proximal methods for mixed variational inequalities. J Opt Theory Appl 2002,115(2):447–452. 10.1023/A:1020848524253
Cohen G: Auxiliary problem principle extended to variational inequalities. J Opt Theory Appl 1988,59(2):325–333.
Liskovets OA: Regularization for ill-posed mixed variational inequalities. Soviet Math Dokl 1991, 43: 384–387. (in Russian)
Buong Ng, Thuy NgTT: On regularization parameter choice and convergence rates in regularization for ill-posed mixed variational inequalities. Int J Contemporary Math Sci 2008,4(3):181–198.
Alber YaI, Ryazantseva IP: Nonlinear Ill-Posed Problems of Monotone Type. Springer, New York; 2006.
Liu F, Nashed MZ: Regularization of nonlinear ill-posed variational inequalities and convergence rates. Set-Valued Anal 1998, 6: 313–344. 10.1023/A:1008643727926
Zeidler E: Nonlinear Functional Analysis and Its Applications. Springer, New York; 1985.
Alber YaI, Notik AI: Geometric properties of Banach spaces and approximate methods for solving nonlinear operator equations. Soviet Math Dokl 1984, 29: 611–615.
Hanke M, Neubauer A, Scherzer O: A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numer Math 1995, 72: 21–37. 10.1007/s002110050158
Buong Ng: Convergence rates in regularization for ill-posed variational inequalities. CUBO, Math J 2005,21(3):87–94.
Engl HW: Discrepancy principles for Tikhonov regularization of ill-posed problems leading to optimal convergence rates. J Opt Theory Appl 1987, 52: 209–215. 10.1007/BF00941281
Buong Ng: Generalized discrepancy principle and ill-posed equation involving accretive operators. J Nonlinear Funct Anal Appl Korea 2004, 9: 73–78.
Additional information
Competing interests
The author declares that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Thuy, N.T. Regularization of ill-posed mixed variational inequalities with nonmonotone perturbations. J Inequal Appl 2011, 25 (2011). https://doi.org/10.1186/1029-242X-2011-25
DOI: https://doi.org/10.1186/1029-242X-2011-25
Keywords
 monotone mixed variational inequality
 nonmonotone perturbations
 regularization
 convergence rate