Perturbed projection and iterative algorithms for a system of general regularized nonconvex variational inequalities

Abstract

The purpose of this paper is to introduce a new system of general nonlinear regularized nonconvex variational inequalities and to verify the equivalence between the proposed system and fixed point problems. By using the equivalent formulation, the existence and uniqueness theorems for solutions of the system are established. Applying two nearly uniformly Lipschitzian mappings S1 and S2 and using the equivalent alternative formulation, we suggest and analyze a new perturbed p-step projection iterative algorithm with mixed errors for finding an element of the set of the fixed points of the nearly uniformly Lipschitzian mapping Q = (S1, S2) which is the unique solution of the system of general nonlinear regularized nonconvex variational inequalities. We also discuss the convergence analysis of the proposed iterative algorithm under some suitable conditions.

MSC : Primary 47H05; Secondary 47J20, 49J40, 90C33.

1 Introduction

The theory of variational inequalities, introduced by Stampacchia [1] in the early 1960s, has enjoyed vigorous growth for the last 30 years. Variational inequality theory describes a broad spectrum of interesting and important developments involving a link among various fields of mathematics, physics, economics, and engineering sciences. The ideas and techniques of this theory are being used in a variety of diverse areas and have proved to be productive and innovative (see [27]). One of the most interesting and important problems in variational inequality theory is the development of an efficient numerical method. There is a substantial number of numerical methods including the projection method and its variant forms, Wiener-Hopf (normal) equations, the auxiliary principle, and the descent framework for solving variational inequalities and complementarity problems. For the applications, physical formulations, numerical methods and other aspects of variational inequalities, see [152] and the references therein. The projection method and its variant forms represent an important tool for finding the approximate solution of various types of variational and quasi-variational inequalities, the origin of which can be traced back to Lions and Stampacchia [21]. The projection type methods were developed in the 1970s and 1980s. The main idea in this technique is to establish the equivalence between the variational inequalities and the fixed point problem using the concept of projection. This alternative formulation enables us to suggest some iterative methods for computing the approximate solution (see [36, 42, 43]).

It is worth mentioning that most of the results regarding the existence and iterative approximation of solutions to variational inequality problems have been investigated and considered so far for the case where the underlying set is a convex set. Recently, the concept of convex set has been generalized in many directions, which has potential and important applications in various fields. It is well known that the uniformly prox-regular sets are nonconvex and include the convex sets as special cases (see, for example, [11, 12, 17, 45, 46, 30, 31] for more details). In recent years, Bounkhel et al. [17], Cho et al. [40], Moudafi [24], Noor [25, 26] and Pang et al. [30] have considered variational inequalities and equilibrium problems in the context of uniformly prox-regular sets. They suggested and analyzed some projection type iterative algorithms by using the prox-regular technique and the auxiliary principle technique.

On the other hand, related to the variational inequalities, we have the problem of finding the fixed points of the nonexpansive mappings, which is the subject of current interest in functional analysis. It is natural to consider a unified approach to these two different problems. Motivated and inspired by the problems, Noor and Huang [27] considered the problem of finding the common element of the set of the solutions of variational inequalities and the set of the fixed points of the nonexpansive mappings. It is well known that every nonexpansive mapping is a Lipschitzian mapping. Lipschitzian mappings have been generalized by various authors. Sahu [50] introduced and investigated nearly uniformly Lipschitzian mappings as a generalization of Lipschitzian mappings.

Motivated and inspired by the recent results in this area, in this paper, we introduce and consider a new system of general nonlinear regularized nonconvex variational inequalities involving four different nonlinear operators. We first establish the equivalence between the system of general nonlinear regularized nonconvex variational inequalities and fixed point problems and, by the equivalent formulation, we discuss the existence and uniqueness of solution of the proposed system. By using two nearly uniformly Lipschitzian mappings S1 and S2 and the equivalent alternative formulation, we suggest and analyze a new perturbed p-step iterative algorithm with mixed errors for finding an element of the set of the fixed points of the nearly uniformly Lipschitzian mapping Q= ( S 1 , S 2 ) which is the unique solution of the system of general nonlinear regularized nonconvex variational inequalities. We also discuss the convergence analysis of the proposed iterative algorithm under some suitable conditions.

2 Preliminaries

Throughout this paper, let H be a real Hilbert space with the inner product 〈·,·〉 and the norm || · ||, and let K be a nonempty convex subset of H. We denote by d_K(·) or d(·, K) the usual distance function to the subset K, i.e., d_K(u) = inf_{v ∈ K} ||u - v||. Let us recall the following well-known definitions and some auxiliary results of nonlinear convex analysis and nonsmooth analysis [31, 44-46].

Definition 2.1. Let u ∈ H be a point not lying in K. A point v ∈ K is called a closest point or a projection of u onto K if d_K(u) = ||u - v||. The set of all such closest points is denoted by P_K(u), i.e.,

P_K(u) := {v ∈ K : d_K(u) = ||u - v||}.
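As a quick numerical illustration (ours, not part of the paper), the projection set P_K(u) can be approximated for a closed set K ⊂ ℝ by minimizing the distance over a fine discretization of K; for a nonconvex K the projection need not be a singleton:

```python
import numpy as np

# Illustrative sketch: approximate the projection set P_K(u) for a closed
# subset K of the real line, given as a fine grid of points.  For nonconvex
# K the set of closest points may contain more than one element.

def proj_set(u, K, tol=1e-9):
    """Return all grid points of K closest to u, and the distance d_K(u)."""
    K = np.asarray(K, dtype=float)
    d = np.abs(K - u)
    dK = d.min()                      # d_K(u) = inf_{v in K} ||u - v||
    return K[np.abs(d - dK) <= tol], dK

# K = [0,1] ∪ [3,4], discretized; u = 2 sits midway between the two
# intervals, so its projection onto K is not a singleton.
K = np.concatenate([np.linspace(0, 1, 101), np.linspace(3, 4, 101)])
points, dK = proj_set(2.0, K)
print(sorted(points.tolist()), dK)    # two closest points, 1.0 and 3.0
```

This also previews why nonconvexity matters in the sequel: uniqueness of the projection fails exactly on a "ridge" of points equidistant from several components of K.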

Definition 2.2. The proximal normal cone of K at a point u K is given by

N_K^P(u) := {ξ ∈ H : u ∈ P_K(u + αξ) for some α > 0}.

Clarke et al. [45], in Proposition 1.1.5, give the following characterization of N_K^P(u):

Lemma 2.3. Let K be a nonempty closed subset in H. Then ξ ∈ N_K^P(u) if and only if there exists a constant α = α(ξ, u) > 0 such that 〈ξ, v - u〉 ≤ α||v - u||^2 for all v ∈ K.

The above inequality is called the proximal normal inequality. The special case in which K is closed and convex is an important one. In Proposition 1.1.10 of [45], the authors give the following characterization of the proximal normal cone of a closed and convex subset K ⊂ H:

Lemma 2.4. Let K be a nonempty, closed and convex subset in H. Then ξ ∈ N_K^P(u) if and only if 〈ξ, v - u〉 ≤ 0 for all v ∈ K.

Definition 2.5. Let X be a real Banach space and f : X → ℝ be Lipschitz with constant τ near a given point x ∈ X; that is, for some ε > 0, we have |f(y) - f(z)| ≤ τ||y - z|| for all y, z ∈ B(x; ε), where B(x; ε) denotes the open ball of radius ε > 0 centered at x. The generalized directional derivative of f at x in the direction v, denoted f°(x; v), is defined as follows:

f ( x ; v ) = lim sup y x , t 0 f ( y + t v ) - f ( y ) t ,

where y is a vector in X and t is a positive scalar.

The generalized directional derivative defined earlier can be used to develop a notion of tangency that does not require K to be smooth or convex.

Definition 2.6. The tangent cone T K (x) to K at a point x in K is defined as follows:

T_K(x) := {v ∈ H : d_K°(x; v) = 0}.

Having defined a tangent cone, the likely candidate for the normal cone is the one obtained from T K (x) by polarity. Accordingly, we define the normal cone of K at x by polarity with T K (x) as follows:

N_K(x) := {ξ ∈ H : 〈ξ, v〉 ≤ 0, ∀v ∈ T_K(x)}.

Definition 2.7. The Clarke normal cone, denoted by N K C ( x ) , is given by N K C ( x ) = c o ¯ [ N K P ( x ) ] , where c o ¯ [ S ] means the closure of the convex hull of S.

It is clear that one always has N_K^P(x) ⊂ N_K^C(x). The converse is not true in general. Note that N_K^C(x) is always a closed and convex cone, whereas N_K^P(x) is always convex, but may not be closed (see [31, 44, 45]).

In 1995, Clarke et al. [46] introduced and studied a new class of nonconvex sets, called proximally smooth sets; subsequently, Poliquin et al. [31] investigated the aforementioned sets under the name of uniformly prox-regular sets. These have been successfully used in many nonconvex applications in areas such as optimization, economic models, dynamical systems, differential inclusions, etc. For such applications, see [14-16, 18]. This class seems particularly well suited to overcome the difficulties which arise due to the nonconvexity assumptions on K. We take the following characterization, proved in [46], as a definition of this class. We point out that the original definition was given in terms of the differentiability of the distance function (see [46]).

Definition 2.8. For any r ∈ (0, +∞], a subset K_r of H is called normalized uniformly prox-regular (or uniformly r-prox-regular [46]) if every nonzero proximal normal to K_r can be realized by an r-ball.

This means that, for all x ̄ K r and 0ξ N K r P ( x ̄ ) with ||ξ|| = 1,

ξ , x - x ̄ 1 2 r | | x - x ̄ | | 2 , x K r .

Obviously, the class of normalized uniformly prox-regular sets is sufficiently large to include the class of convex sets, p-convex sets, C^{1,1} submanifolds (possibly with boundary) of H, the images under a C^{1,1} diffeomorphism of convex sets and many other nonconvex sets (see [19, 46]).

Lemma 2.9. [46] A closed set K ⊂ H is convex if and only if it is proximally smooth of radius r for all r > 0.

If r = +, then, in view of Definition 2.8 and Lemma 2.9, the uniform r-prox-regularity of K r is equivalent to the convexity of K r , which makes this class of great importance. For the case of that r = +, we set K r = K.

The following proposition summarizes some important consequences of uniform prox-regularity needed in the sequel. The proofs of these results can be found in [31, 46].

Proposition 2.10. Let r > 0 and K_r be a nonempty closed and uniformly r-prox-regular subset of H. Set U(r) = {u ∈ H : 0 < d_{K_r}(u) < r}. Then the following statements hold:

  (1) For all x ∈ U(r), one has P_{K_r}(x) ≠ ∅;

  (2) For all r′ ∈ (0, r), P_{K_r} is Lipschitz continuous with constant r/(r - r′) on U(r′) = {u ∈ H : 0 < d_{K_r}(u) < r′};

  (3) The proximal normal cone is closed as a set-valued mapping.

As a direct consequence of part (3) of Proposition 2.10, we have N_{K_r}^C(x) = N_{K_r}^P(x). Therefore, we define N_{K_r}(x) := N_{K_r}^C(x) = N_{K_r}^P(x) for such a class of sets.

In order to make clear the concept of r-prox-regular sets, we state the following concrete example: the union of two disjoint intervals [a, b] and [c, d] (with b < c) is r-prox-regular with r = (c - b)/2. The finite union of disjoint intervals is also r-prox-regular, where r depends on the distances between the intervals.
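The interval example can be checked numerically. The following sketch (ours, not from the paper) verifies the prox-regularity inequality of Definition 2.8 for K = [a, b] ∪ [c, d] at the boundary point x̄ = b, whose unit proximal normal pointing into the gap is ξ = +1, with r = (c - b)/2:

```python
import numpy as np

# Check <xi, x - xbar> <= ||x - xbar||^2 / (2r) for all x in K, where
# K = [a,b] ∪ [c,d], xbar = b, xi = +1, and r = (c - b)/2 is the largest
# admissible prox-regularity radius (equality is attained at x = c).
a, b, c, d = 0.0, 1.0, 3.0, 5.0
r = (c - b) / 2.0
K = np.concatenate([np.linspace(a, b, 200), np.linspace(c, d, 200)])

xbar, xi = b, 1.0
lhs = xi * (K - xbar)
rhs = (K - xbar) ** 2 / (2 * r)
ok = np.all(lhs <= rhs + 1e-12)
print(ok)                             # the inequality holds on all of K
```

For x ∈ [a, b] the left-hand side is nonpositive, while for x ∈ [c, d] one has x - b ≥ 2r, so the quadratic right-hand side dominates; any radius larger than (c - b)/2 would fail at x = c.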

Definition 2.11. Let T : H × H → H and g : H → H be two single-valued operators. Then the operator T is said to be:

  (1) monotone in the first variable if, for all x, y ∈ H,

    〈T(x, u) - T(y, v), x - y〉 ≥ 0, ∀u, v ∈ H;

  (2) r-strongly monotone in the first variable if there exists a constant r > 0 such that, for all x, y ∈ H,

    〈T(x, u) - T(y, v), x - y〉 ≥ r||x - y||^2, ∀u, v ∈ H;

  (3) κ-strongly monotone with respect to g in the first variable if there exists a constant κ > 0 such that, for all x, y ∈ H,

    〈T(x, u) - T(y, v), g(x) - g(y)〉 ≥ κ||x - y||^2, ∀u, v ∈ H;

  (4) (θ, ν)-relaxed cocoercive in the first variable if there exist constants θ, ν > 0 such that, for all x, y ∈ H,

    〈T(x, u) - T(y, v), x - y〉 ≥ -θ||T(x, u) - T(y, v)||^2 + ν||x - y||^2, ∀u, v ∈ H;

  (5) µ-Lipschitz continuous in the first variable if there exists a constant µ > 0 such that, for all x, y ∈ H,

    ||T(x, u) - T(y, v)|| ≤ µ||x - y||, ∀u, v ∈ H.

Definition 2.12. A nonlinear single-valued operator g : H → H is said to be γ-Lipschitz continuous if there exists a constant γ > 0 such that

||g(x) - g(y)|| ≤ γ||x - y||, ∀x, y ∈ H.

In the next definitions, we state several generalizations of nonexpansive mappings which have been introduced by various authors in recent years.

Definition 2.13. A nonlinear mapping T : H → H is said to be:

  (1) nonexpansive if

    ||Tx - Ty|| ≤ ||x - y||, ∀x, y ∈ H;

  (2) L-Lipschitzian if there exists a constant L > 0 such that

    ||Tx - Ty|| ≤ L||x - y||, ∀x, y ∈ H;

  (3) generalized Lipschitzian if there exists a constant L > 0 such that

    ||Tx - Ty|| ≤ L||x - y|| + 1, ∀x, y ∈ H;

  (4) generalized (L, M)-Lipschitzian [50] if there exist two constants L, M > 0 such that

    ||Tx - Ty|| ≤ L||x - y|| + M, ∀x, y ∈ H;

  (5) asymptotically nonexpansive [48] if there exists a sequence {k_n} ⊂ [1, ∞) with lim_{n→∞} k_n = 1 such that, for each n ∈ ℕ,

    ||T^n x - T^n y|| ≤ k_n ||x - y||, ∀x, y ∈ H;

  (6) pointwise asymptotically nonexpansive [49] if, for each n ≥ 1,

    ||T^n x - T^n y|| ≤ α_n(x) ||x - y||, ∀x, y ∈ H,

  where α_n → 1 pointwise on X;

  (7) uniformly L-Lipschitzian if there exists a constant L > 0 such that, for each n ∈ ℕ,

    ||T^n x - T^n y|| ≤ L||x - y||, ∀x, y ∈ H.

Definition 2.14. [50] A nonlinear mapping T : H → H is said to be:

  (1) nearly Lipschitzian with respect to the sequence {a_n} if, for each n ∈ ℕ, there exists a constant k_n > 0 such that

    ||T^n x - T^n y|| ≤ k_n ||x - y|| + a_n, ∀x, y ∈ H,
    (2.1)

where {a_n} is a fixed sequence in [0, ∞) with a_n → 0 as n → ∞.

For an arbitrary, but fixed, n ∈ ℕ, the infimum of the constants k_n in (2.1) is called the nearly Lipschitz constant and is denoted by η(T^n). Notice that

η(T^n) = sup{ ||T^n x - T^n y|| / (||x - y|| + a_n) : x, y ∈ H, x ≠ y }.

A nearly Lipschitzian mapping T with the sequence {(a_n, η(T^n))} is said to be:

  (2) nearly nonexpansive if η(T^n) = 1 for all n ∈ ℕ, that is,

    ||T^n x - T^n y|| ≤ ||x - y|| + a_n, ∀x, y ∈ H;

  (3) nearly asymptotically nonexpansive if η(T^n) ≥ 1 for all n ∈ ℕ and lim_{n→∞} η(T^n) = 1; in other words, k_n ≥ 1 for all n ∈ ℕ with lim_{n→∞} k_n = 1;

  (4) nearly uniformly L-Lipschitzian if η(T^n) ≤ L for all n ∈ ℕ; in other words, k_n = L for all n ∈ ℕ.

Remark 2.15. It should be pointed out that:

  (1) Every nonexpansive mapping is an asymptotically nonexpansive mapping, and every asymptotically nonexpansive mapping is a pointwise asymptotically nonexpansive mapping. In addition, the class of Lipschitzian mappings properly includes the class of pointwise asymptotically nonexpansive mappings.

  (2) It is obvious that every Lipschitzian mapping is a generalized Lipschitzian mapping. Furthermore, every mapping with a bounded range is a generalized Lipschitzian mapping. It is easy to see that the class of generalized (L, M)-Lipschitzian mappings is more general than the class of generalized Lipschitzian mappings.

  (3) Clearly, the class of nearly uniformly L-Lipschitzian mappings properly includes the class of generalized (L, M)-Lipschitzian mappings and that of uniformly L-Lipschitzian mappings. Note that every nearly asymptotically nonexpansive mapping is nearly uniformly L-Lipschitzian.

Now, we present some new examples to investigate relations between these mappings.

Example 2.16. Let H = ℝ and define a mapping T : H → H as follows:

T(x) = 1/γ if x ∈ [0, γ], and T(x) = 0 if x ∈ (-∞, 0) ∪ (γ, ∞),

where γ > 1 is a constant real number. Evidently, the mapping T is discontinuous at the points x = 0, γ. Since every Lipschitzian mapping is continuous, it follows that T is not Lipschitzian. For each n ∈ ℕ, take a_n = 1/γ^n. Then

|Tx - Ty| ≤ |x - y| + 1/γ = |x - y| + a_1, ∀x, y ∈ ℝ.

Since T^n z = 1/γ for all z ∈ ℝ and n ≥ 2, it follows that, for all x, y ∈ ℝ and n ≥ 2,

|T^n x - T^n y| ≤ |x - y| + 1/γ^n = |x - y| + a_n.

Hence T is a nearly nonexpansive mapping with respect to the sequence {a_n} = {1/γ^n}.
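The two displayed inequalities can be spot-checked numerically. A minimal sketch (ours, not from the paper), taking γ = 2:

```python
import itertools, random

# Example 2.16 with gamma = 2: T is discontinuous (hence not Lipschitzian),
# yet |Tx - Ty| <= |x - y| + a_1 with a_n = 1/gamma**n, and after one
# application the range collapses to {0, 1/gamma}, so T^n z = 1/gamma
# for every z and n >= 2.
gamma = 2.0

def T(x):
    return 1.0 / gamma if 0.0 <= x <= gamma else 0.0

random.seed(0)
pts = [random.uniform(-5, 5) for _ in range(200)]
a1 = 1.0 / gamma
assert all(abs(T(x) - T(y)) <= abs(x - y) + a1
           for x, y in itertools.product(pts, pts))

# T(x) is always 0 or 1/gamma, and both lie in [0, gamma], so T(T(x)) = 1/gamma
assert all(T(T(x)) == 1.0 / gamma for x in pts)
print("nearly nonexpansive check passed")
```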

The following example shows that the nearly uniformly L-Lipschitzian mappings are not necessarily continuous.

Example 2.17. Let H = [0, b], where b ∈ (0, 1] is an arbitrary constant real number, and let the self-mapping T of H be defined as follows:

T(x) = γx if x ∈ [0, b), and T(b) = 0,

where γ ∈ (0, 1) is also an arbitrary constant real number. It is plain that the mapping T is discontinuous at the point b. Hence T is not a Lipschitzian mapping. Take, for each n ∈ ℕ, a_n = γ^{n-1}. Then, for all n ∈ ℕ and x, y ∈ [0, b), we have

|T^n x - T^n y| = |γ^n x - γ^n y| = γ^n |x - y| ≤ γ^n |x - y| + γ^n ≤ γ|x - y| + γ^n = γ(|x - y| + a_n).

If x ∈ [0, b) and y = b, then, for each n ∈ ℕ, we have T^n x = γ^n x and T^n y = 0. Since 0 < |x - y| ≤ b ≤ 1, it follows that, for all n ∈ ℕ,

|T^n x - T^n y| = |γ^n x - 0| = γ^n x ≤ γ^n b ≤ γ^n < γ^n |x - y| + γ^n ≤ γ|x - y| + γ^n = γ(|x - y| + a_n).

Hence T is a nearly uniformly γ-Lipschitzian mapping with respect to the sequence {a_n} = {γ^{n-1}}.
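Again, the conclusion can be spot-checked numerically. A sketch (ours, not from the paper) with the concrete choices b = 1 and γ = 1/2:

```python
import itertools

# Example 2.17 with b = 1, gamma = 0.5: verify the nearly uniform
# gamma-Lipschitz bound |T^n x - T^n y| <= gamma*(|x - y| + a_n),
# a_n = gamma**(n-1), on a grid of [0, b] (which includes the
# discontinuity point b itself).
gamma, b = 0.5, 1.0

def T(x):
    return gamma * x if 0.0 <= x < b else 0.0   # T(b) = 0

def Tn(x, n):
    for _ in range(n):
        x = T(x)
    return x

pts = [i * b / 50 for i in range(51)]
for n in range(1, 6):
    a_n = gamma ** (n - 1)
    assert all(abs(Tn(x, n) - Tn(y, n)) <= gamma * (abs(x - y) + a_n) + 1e-12
               for x, y in itertools.product(pts, pts))
print("nearly uniformly Lipschitzian check passed")
```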

Obviously, every nearly nonexpansive mapping is a nearly uniformly Lipschitzian mapping. In the following example, we show that the class of nearly uniformly Lipschitzian mappings properly includes the class of nearly nonexpansive mappings.

Example 2.18. Let H = ℝ and let the self-mapping T of H be defined as follows:

T(x) = 1/2 if x ∈ [0, 1) ∪ {2}, T(x) = 2 if x = 1, and T(x) = 0 if x ∈ (-∞, 0) ∪ (1, 2) ∪ (2, +∞).

Evidently, the mapping T is discontinuous at the points x = 0, 1, 2. Hence T is not a Lipschitzian mapping. Take, for each n ∈ ℕ, a_n = 1/2^n. Then T is not a nearly nonexpansive mapping with respect to the sequence {1/2^n} because, taking x = 1 and y = 1/2, we have Tx = 2, Ty = 1/2 and

|Tx - Ty| > |x - y| + 1/2 = |x - y| + a_1.

However,

|Tx - Ty| ≤ 4(|x - y| + 1/2) = 4(|x - y| + a_1), ∀x, y ∈ ℝ,

and, for all n ≥ 2,

|T^n x - T^n y| ≤ 4(|x - y| + 1/2^n) = 4(|x - y| + a_n), ∀x, y ∈ ℝ,

since T^n z = 1/2 for all z ∈ ℝ and n ≥ 2. Hence, for each L ≥ 4, T is a nearly uniformly L-Lipschitzian mapping with respect to the sequence {1/2^n}.
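Both claims of this example, the failure of near nonexpansiveness at (x, y) = (1, 1/2) and the validity of the bound with constant 4, can be verified numerically. A sketch (ours, not from the paper):

```python
import itertools

# Example 2.18: T fails the nearly nonexpansive bound at (1, 1/2), but
# satisfies |T^n x - T^n y| <= 4*(|x - y| + 1/2**n) for every n.
def T(x):
    if (0.0 <= x < 1.0) or x == 2.0:
        return 0.5
    if x == 1.0:
        return 2.0
    return 0.0

def Tn(x, n):
    for _ in range(n):
        x = T(x)
    return x

# the counterexample to near nonexpansiveness: |2 - 1/2| > |1 - 1/2| + 1/2
assert abs(T(1.0) - T(0.5)) > abs(1.0 - 0.5) + 0.5

pts = [-1.0, 0.0, 0.25, 0.5, 0.99, 1.0, 1.5, 2.0, 3.0]
for n in range(1, 6):
    a_n = 0.5 ** n
    assert all(abs(Tn(x, n) - Tn(y, n)) <= 4 * (abs(x - y) + a_n)
               for x, y in itertools.product(pts, pts))
print("L = 4 works")
```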

It is clear that every uniformly L-Lipschitzian mapping is a nearly uniformly L-Lipschitzian mapping. In the next example, we show that the class of nearly uniformly L-Lipschitzian mappings properly includes the class of uniformly L-Lipschitzian mappings.

Example 2.19. Let H = ℝ and let the self-mapping T of H be defined as in Example 2.18. Then T is not a uniformly 4-Lipschitzian mapping: if x = 1 and y ∈ (1, 3/2), then |Tx - Ty| = 2 > 4|x - y| because 0 < |x - y| < 1/2. But, in view of Example 2.18, T is a nearly uniformly 4-Lipschitzian mapping.

The following example shows that the class of generalized Lipschitzian mappings properly includes the class of Lipschitzian mappings and that of mappings with bounded range.

Example 2.20. [35] Let H = ℝ and T : H → H be a mapping defined by

T(x) = x - 1 if x ∈ (-∞, -1); T(x) = x - 1 - (x + 1)^2 if x ∈ [-1, 0); T(x) = x + 1 - (x - 1)^2 if x ∈ [0, 1]; T(x) = x + 1 if x ∈ (1, ∞).

Then T is a generalized Lipschitzian mapping which is not Lipschitzian and whose range is not bounded.

3 System of general regularized nonconvex variational inequalities

In this section, we introduce a new system of general nonlinear regularized nonconvex variational inequalities and establish the existence and uniqueness theorem for a solution of the mentioned system.

Let K_r be a uniformly r-prox-regular subset of H and let T_i : H × H → H and g_i : H → H (i = 1, 2) be four nonlinear single-valued operators. For any given constants ρ, η > 0, we consider the problem of finding (x*, y*) ∈ K_r × K_r such that

〈ρT_1(y*, x*) + x* - g_1(y*), g_1(x) - x*〉 + (1/(2r))||g_1(x) - x*||^2 ≥ 0, ∀x ∈ H,
〈ηT_2(x*, y*) + y* - g_2(x*), g_2(x) - y*〉 + (1/(2r))||g_2(x) - y*||^2 ≥ 0, ∀x ∈ H.
(3.1)

The problem (3.1) is called a system of general nonlinear regularized nonconvex variational inequalities involving four different nonlinear operators (SGNRNVID).

Some special cases of the system (3.1) can be found in [1, 26, 28, 32-34, 47] and the references therein.

Lemma 3.1. Let T_i, g_i (i = 1, 2), ρ, η be the same as in the system (3.1) and suppose further that g_i(H) = K_r for each i = 1, 2. Then the system (3.1) is equivalent to that of finding (x*, y*) ∈ K_r × K_r such that

0 ∈ ρT_1(y*, x*) + x* - g_1(y*) + N_{K_r}^P(x*),
0 ∈ ηT_2(x*, y*) + y* - g_2(x*) + N_{K_r}^P(y*),
(3.2)

where N K r P ( s ) denotes the P-normal cone of K r at s in the sense of nonconvex analysis.

Proof. Let (x*, y*) ∈ K_r × K_r be a solution of the system (3.1). If ρT_1(y*, x*) + x* - g_1(y*) = 0, then, since the zero vector always belongs to any normal cone, 0 ∈ ρT_1(y*, x*) + x* - g_1(y*) + N_{K_r}^P(x*). If ρT_1(y*, x*) + x* - g_1(y*) ≠ 0, then, for all x ∈ H, we have

〈-(ρT_1(y*, x*) + x* - g_1(y*)), g_1(x) - x*〉 ≤ (1/(2r))||g_1(x) - x*||^2.

Since g 1 ( H ) = K r , by Lemma 2.3, it follows that

-(ρT_1(y*, x*) + x* - g_1(y*)) ∈ N_{K_r}^P(x*),

and so

0 ∈ ρT_1(y*, x*) + x* - g_1(y*) + N_{K_r}^P(x*).

Similarly, one can establish that

0 η T 2 ( x * , y * ) + y * - g 2 ( x * ) + N K r P ( y * ) .

Conversely, if (x*, y*) K r × K r is a solution of the system (3.2), then it follows from Definition 2.8 that (x*, y*) is a solution of the system (3.1). This completes the proof.

The problem (3.2) is called the general nonlinear nonconvex variational inclusions system associated with the system of general nonlinear regularized nonconvex variational inequalities (3.1).

Now, we prove the existence and uniqueness theorem for a solution of the system of general nonlinear regularized nonconvex variational inequalities (3.1). To this end, we need the following lemma, in which, by using the projection operator technique, we verify the equivalence between the system of general nonlinear regularized nonconvex variational inequalities (3.1) and a fixed point problem.

Lemma 3.2. Let T i , g i (i = 1, 2), ρ and η be the same as in the system (3.1) and suppose further that g i ( H ) = K r for each i = 1, 2. Then (x*, y*) K r × K r is a solution of the system (3.1) if and only if

x* = P_{K_r}(g_1(y*) - ρT_1(y*, x*)), y* = P_{K_r}(g_2(x*) - ηT_2(x*, y*)),
(3.3)

provided that ρ < r′/(1 + ||T_1(y*, x*)||) and η < r′/(1 + ||T_2(x*, y*)||), where r′ ∈ (0, r) and P_{K_r} is the projection of H onto K_r.

Proof. Let (x*, y*) ∈ K_r × K_r be a solution of the system (3.1). Since g_1(y*), g_2(x*) ∈ K_r, ρ < r′/(1 + ||T_1(y*, x*)||) and η < r′/(1 + ||T_2(x*, y*)||), it is easy to check that the two points g_1(y*) - ρT_1(y*, x*) and g_2(x*) - ηT_2(x*, y*) belong to U(r′). Therefore, the r-prox-regularity of K_r implies that the two sets P_{K_r}(g_1(y*) - ρT_1(y*, x*)) and P_{K_r}(g_2(x*) - ηT_2(x*, y*)) are nonempty and singletons, that is, the equations (3.3) are well defined. By using Lemma 3.1, we have

0 ∈ ρT_1(y*, x*) + x* - g_1(y*) + N_{K_r}^P(x*),
0 ∈ ηT_2(x*, y*) + y* - g_2(x*) + N_{K_r}^P(y*)
⇔ g_1(y*) - ρT_1(y*, x*) ∈ x* + N_{K_r}^P(x*) = (I + N_{K_r}^P)(x*),
g_2(x*) - ηT_2(x*, y*) ∈ y* + N_{K_r}^P(y*) = (I + N_{K_r}^P)(y*)
⇔ x* = P_{K_r}(g_1(y*) - ρT_1(y*, x*)), y* = P_{K_r}(g_2(x*) - ηT_2(x*, y*)),

where I is the identity operator and we have used the well-known fact that P_{K_r} = (I + N_{K_r}^P)^{-1}. This completes the proof.
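To make the fixed-point characterization (3.3) concrete, here is a minimal numerical sketch (ours, not from the paper) in the convex special case H = ℝ, K_r = [0, 2] (so P_{K_r} is the clamp to [0, 2]), with g_1 = g_2 = I and the hypothetical operators T_1(y, x) = 2y - 1, T_2(x, y) = 2x - 1; Picard iteration of (3.3) then converges to the unique solution (1/2, 1/2):

```python
# Picard iteration of the fixed-point system (3.3) on a convex toy instance.
def P(z, lo=0.0, hi=2.0):
    return min(max(z, lo), hi)        # projection onto K_r = [0, 2]

rho = eta = 0.25
x, y = 2.0, 0.0                       # arbitrary starting point
for _ in range(60):
    # x <- P(g_1(y) - rho*T_1(y, x)),  y <- P(g_2(x) - eta*T_2(x, y))
    x, y = P(y - rho * (2 * y - 1)), P(x - eta * (2 * x - 1))
print(x, y)                           # approaches the solution (0.5, 0.5)
```

With these choices the update is an affine contraction with factor 1/2 per step, which is exactly the mechanism that Theorem 3.3 below formalizes for the general nonconvex case.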

Theorem 3.3. Let T i , g i (i = 1, 2), ρ and η be the same as in the system (3.1) such that g i ( H ) = K r for each i = 1, 2. Suppose that for each i = 1, 2, T i is π i -strongly monotone with respect to g i and σ i -Lipschitz continuous in the first variable and g i is δ i -Lipschitz continuous. If the constants ρ and η satisfy the following conditions:

ρ < r 1 + | | T 1 ( y , x ) | | , η < r 1 + | | T 2 ( x , y ) | | , x , y H ,
(3.4)
|ρ - π_1/σ_1^2| < √(r^2 π_1^2 - σ_1^2 (r^2 δ_1^2 - (r - r′)^2)) / (r σ_1^2),
|η - π_2/σ_2^2| < √(r^2 π_2^2 - σ_2^2 (r^2 δ_2^2 - (r - r′)^2)) / (r σ_2^2),
r π_i > σ_i √(r^2 δ_i^2 - (r - r′)^2), r δ_i > r - r′ (i = 1, 2),
(3.5)

where r(0, r), then the system (3.1) admits a unique solution.

Proof. Define the mappings ψ, ϕ : H × H → K_r by

ψ(x, y) = P_{K_r}(g_1(y) - ρT_1(y, x)), ϕ(x, y) = P_{K_r}(g_2(x) - ηT_2(x, y))
(3.6)

for all (x, y) ∈ H × H, respectively. Since g_1(y), g_2(x) ∈ K_r for all x, y ∈ H, one can easily check that the mappings ψ and ϕ are well defined. Define the norm || · ||_* on H × H by

| | ( x , y ) | | * = | | x | | + | | y | | , ( x , y ) H × H .

It is obvious that (H × H, || · ||_*) is a Banach space. In addition, define a mapping F : K_r × K_r → K_r × K_r as follows:

F ( x , y ) = ( ψ ( x , y ) , ϕ ( x , y ) ) , ( x , y ) K r × K r .
(3.7)

Now, we verify that F is a contraction mapping. Indeed, let (x, y), (x̂, ŷ) ∈ K_r × K_r be given. Since the two points g_1(y) - ρT_1(y, x) and g_1(ŷ) - ρT_1(ŷ, x̂) belong to U(r′), by using Proposition 2.10, we have

||ψ(x, y) - ψ(x̂, ŷ)|| = ||P_{K_r}(g_1(y) - ρT_1(y, x)) - P_{K_r}(g_1(ŷ) - ρT_1(ŷ, x̂))||
≤ (r/(r - r′))||g_1(y) - g_1(ŷ) - ρ(T_1(y, x) - T_1(ŷ, x̂))||.
(3.8)

Since T1 is π1-strongly monotone with respect to g1 and σ1-Lipschitz continuous in the first variable and g1 is δ1-Lipschitz continuous, we conclude that

||g_1(y) - g_1(ŷ) - ρ(T_1(y, x) - T_1(ŷ, x̂))||^2
= ||g_1(y) - g_1(ŷ)||^2 - 2ρ〈T_1(y, x) - T_1(ŷ, x̂), g_1(y) - g_1(ŷ)〉 + ρ^2||T_1(y, x) - T_1(ŷ, x̂)||^2
≤ (δ_1^2 - 2ρπ_1 + ρ^2σ_1^2)||y - ŷ||^2.
(3.9)

Substituting (3.9) in (3.8), it follows that

| | ψ ( x , y ) - ψ ( x ^ , ŷ ) | | θ | | y - ŷ | | ,
(3.10)

where

θ = r r - r δ 1 2 - 2 ρ π 1 + ρ 2 σ 1 2 .

Since T_2 is π_2-strongly monotone with respect to g_2 and σ_2-Lipschitz continuous in the first variable and g_2 is δ_2-Lipschitz continuous, in a similar way to the proofs of (3.8)-(3.10), we can prove that

| | ϕ ( x , y ) - ϕ ( x ^ , ŷ ) | | ω | | x - x ^ | | ,
(3.11)

where

ω = r r - r δ 2 2 - 2 η π 2 + η 2 σ 2 2 .

It follows from (3.7), (3.10) and (3.11) that

||F(x, y) - F(x̂, ŷ)||_* = ||ψ(x, y) - ψ(x̂, ŷ)|| + ||ϕ(x, y) - ϕ(x̂, ŷ)|| ≤ ϑ||(x, y) - (x̂, ŷ)||_*,
(3.12)

where ϑ = max{θ, ω}. By the condition (3.5), we note that 0 ϑ < 1 and so (3.12) guarantees that F is a contraction mapping. According to Banach's fixed point theorem, there exists a unique point (x*, y*) K r × K r such that F(x*, y*) = (x*, y*). From (3.6) and (3.7), we conclude that x * = P K r ( g 1 ( y * ) - ρ T 1 ( y * , x * ) ) and y * = P K r ( g 2 ( x * ) - η T 2 ( x * , y * ) ) . Now, Lemma 3.2 guarantees that (x*, y*) K r × K r is a solution of the system (3.1). This completes the proof.
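The role of condition (3.5) can be seen numerically. In the convex limit r → ∞ the factor r/(r - r′) tends to 1 and the contraction constant reduces to θ = √(δ_1^2 - 2ρπ_1 + ρ^2σ_1^2). A sketch (ours, not from the paper) with the hypothetical toy data δ_1 = 1, π_1 = σ_1 = 2 (e.g., g_1 = I and T_1(y, x) = 2y - 1):

```python
import math

# Contraction constant theta in the convex limit r -> infinity, for the
# assumed toy data delta_1 = 1, pi_1 = sigma_1 = 2.  Here theta works out
# analytically to |1 - 2*rho|, so any rho in (0, 1) gives a contraction.
delta1, pi1, sigma1 = 1.0, 2.0, 2.0
for rho in (0.1, 0.25, 0.4):
    theta = math.sqrt(delta1**2 - 2 * rho * pi1 + rho**2 * sigma1**2)
    assert theta < 1                  # the Banach fixed point theorem applies
    print(rho, theta)
```

For finite r the prefactor r/(r - r′) > 1 inflates θ, which is why (3.5) restricts ρ to a narrower interval around π_1/σ_1^2 than in the convex case.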

4 Perturbed projection and iterative algorithms

In this section, by applying two nearly uniformly Lipschitzian mappings S1 and S2 and using the equivalent alternative formulation (3.3), we suggest and analyze a new perturbed p-step projection iterative algorithm with mixed errors for finding an element of the set of the fixed points of Q= ( S 1 , S 2 ) which is the unique solution of the system of general nonlinear regularized nonconvex variational inequalities (3.1).

Let S1 : K r K r be a nearly uniformly L1-Lipschitzian mapping with the sequence { a n } n = 1 and S2 : K r K r be a nearly uniformly L2-Lipschitzian mapping with the sequence { b n } n = 1 . We define the self-mapping of K r × K r as follows:

Q ( x , y ) = ( S 1 x , S 2 y ) , x , y K r .
(4.1)

Then Q = (S_1, S_2) : K_r × K_r → K_r × K_r is a nearly uniformly max{L_1, L_2}-Lipschitzian mapping with the sequence {a_n + b_n}_{n=1}^∞ with respect to the norm || · ||_* in H × H. Indeed, for any (x, y), (x′, y′) ∈ K_r × K_r and n ∈ ℕ, we have

||Q^n(x, y) - Q^n(x′, y′)||_* = ||(S_1^n x, S_2^n y) - (S_1^n x′, S_2^n y′)||_*
= ||S_1^n x - S_1^n x′|| + ||S_2^n y - S_2^n y′||
≤ L_1(||x - x′|| + a_n) + L_2(||y - y′|| + b_n)
≤ max{L_1, L_2}(||x - x′|| + ||y - y′|| + a_n + b_n)
= max{L_1, L_2}(||(x, y) - (x′, y′)||_* + a_n + b_n).

We denote the sets of all the fixed points of S_i (i = 1, 2) and Q by Fix(S_i) and Fix(Q), respectively, and the set of all the solutions of the system (3.1) by SGNRNVID(K_r, T_i, g_i, i = 1, 2). In view of (4.1), for any (x, y) ∈ K_r × K_r, (x, y) ∈ Fix(Q) if and only if x ∈ Fix(S_1) and y ∈ Fix(S_2), that is, Fix(Q) = Fix(S_1, S_2) = Fix(S_1) × Fix(S_2).

We now characterize the given problem. Let the operators T_i, g_i (i = 1, 2) and the constants ρ, η be the same as in the system (3.1) and, further, suppose that g_i(H) = K_r for each i = 1, 2. If (x*, y*) ∈ Fix(Q) ∩ SGNRNVID(K_r, T_i, g_i, i = 1, 2), ρ < r′/(1 + ||T_1(y*, x*)||) and η < r′/(1 + ||T_2(x*, y*)||), where r′ ∈ (0, r), then, by using Lemma 3.2, it is easy to see that, for each n ∈ ℕ,

x* = S_1^n x* = P_{K_r}(g_1(y*) - ρT_1(y*, x*)) = S_1^n P_{K_r}(g_1(y*) - ρT_1(y*, x*)),
y* = S_2^n y* = P_{K_r}(g_2(x*) - ηT_2(x*, y*)) = S_2^n P_{K_r}(g_2(x*) - ηT_2(x*, y*)).
(4.2)

The fixed point formulation (4.2) enables us to suggest the following iterative algorithm with mixed errors for finding an element of the set of the fixed points of the nearly uniformly Lipschitzian mapping Q = (S1, S2) which is the unique solution of the system of the general nonlinear regularized nonconvex variational inequalities (3.1).

Algorithm 4.1. Let T_i, g_i (i = 1, 2), ρ and η be the same as in the system (3.1) such that g_i(H) = K_r for each i = 1, 2. Furthermore, let the constants ρ and η satisfy the condition (3.4). For an arbitrarily chosen initial point (x_1, y_1) ∈ H × H, compute the iterative sequence {(x_n, y_n)}_{n=1}^∞ in H × H in the following way:

x_{n+1} = (1 - α_{n,1} - β_{n,1})x_n + α_{n,1}(S_1^n P_{K_r}(Φ(v_{n,1}, ν_{n,1})) + e_{n,1}) + β_{n,1}j_{n,1} + r_{n,1},
y_{n+1} = (1 - α_{n,1} - β_{n,1})y_n + α_{n,1}(S_2^n P_{K_r}(Ψ(v_{n,1}, ν_{n,1})) + l_{n,1}) + β_{n,1}s_{n,1} + k_{n,1},
v_{n,i} = (1 - α_{n,i+1} - β_{n,i+1})x_n + α_{n,i+1}(S_1^n P_{K_r}(Φ(v_{n,i+1}, ν_{n,i+1})) + e_{n,i+1}) + β_{n,i+1}j_{n,i+1} + r_{n,i+1},
ν_{n,i} = (1 - α_{n,i+1} - β_{n,i+1})y_n + α_{n,i+1}(S_2^n P_{K_r}(Ψ(v_{n,i+1}, ν_{n,i+1})) + l_{n,i+1}) + β_{n,i+1}s_{n,i+1} + k_{n,i+1},
v_{n,p-1} = (1 - α_{n,p} - β_{n,p})x_n + α_{n,p}(S_1^n P_{K_r}(Φ(x_n, y_n)) + e_{n,p}) + β_{n,p}j_{n,p} + r_{n,p},
ν_{n,p-1} = (1 - α_{n,p} - β_{n,p})y_n + α_{n,p}(S_2^n P_{K_r}(Ψ(x_n, y_n)) + l_{n,p}) + β_{n,p}s_{n,p} + k_{n,p},
i = 1, 2, ..., p - 2,
(4.3)

where

Φ(v_{n,i}, ν_{n,i}) = g_1(ν_{n,i}) - ρT_1(ν_{n,i}, v_{n,i}),
Ψ(v_{n,i}, ν_{n,i}) = g_2(v_{n,i}) - ηT_2(v_{n,i}, ν_{n,i}),
Φ(x_n, y_n) = g_1(y_n) - ρT_1(y_n, x_n),
Ψ(x_n, y_n) = g_2(x_n) - ηT_2(x_n, y_n),
i = 1, 2, ..., p - 2,

S_1, S_2 : K_r → K_r are two nearly uniformly Lipschitzian mappings, {α_{n,i}}_{n=1}^∞, {β_{n,i}}_{n=1}^∞ (i = 1, 2, ..., p) are 2p sequences in the interval [0, 1] such that Σ_{n=1}^∞ Π_{i=1}^p α_{n,i} = ∞, α_{n,i} + β_{n,i} ≤ 1 and Σ_{n=1}^∞ β_{n,i} < ∞, and {e_{n,i}}_{n=1}^∞, {l_{n,i}}_{n=1}^∞, {j_{n,i}}_{n=1}^∞, {s_{n,i}}_{n=1}^∞, {r_{n,i}}_{n=1}^∞, {k_{n,i}}_{n=1}^∞ (i = 1, 2, ..., p) are 6p sequences in H introduced to take into account a possible inexact computation of the projection operator points, satisfying the following conditions: {j_{n,i}}_{n=1}^∞, {s_{n,i}}_{n=1}^∞ (i = 1, 2, ..., p) are 2p bounded sequences in H and {e_{n,i}}_{n=1}^∞, {l_{n,i}}_{n=1}^∞, {r_{n,i}}_{n=1}^∞, {k_{n,i}}_{n=1}^∞ (i = 1, 2, ..., p) are 4p sequences in H such that

e_{n,i} = e′_{n,i} + e″_{n,i}, l_{n,i} = l′_{n,i} + l″_{n,i}, ∀n ≥ 1, i = 1, 2, ..., p,
lim_{n→∞} ||(e′_{n,i}, l′_{n,i})||_* = 0, i = 1, 2, ..., p,
∑_{n=1}^∞ ||(e″_{n,i}, l″_{n,i})||_* < ∞, ∑_{n=1}^∞ ||(r_{n,i}, k_{n,i})||_* < ∞, i = 1, 2, ..., p.
(4.4)

If S_i ≡ I for each i = 1, 2, then Algorithm 4.1 reduces to the following iterative algorithm for solving the system (3.1).

Algorithm 4.2. Assume that T_i, g_i (i = 1, 2), ρ and η are the same as in Algorithm 4.1. Moreover, let the constants ρ and η satisfy the condition (3.4). For an arbitrarily chosen initial point (x_1, y_1) ∈ H × H, compute the iterative sequence {(x_n, y_n)}_{n=1}^∞ in H × H in the following way:

x_{n+1} = (1 - α_{n,1} - β_{n,1}) x_n + α_{n,1}(P_{K_r}(Φ(v_{n,1}, ν_{n,1})) + e_{n,1}) + β_{n,1} j_{n,1} + r_{n,1},
y_{n+1} = (1 - α_{n,1} - β_{n,1}) y_n + α_{n,1}(P_{K_r}(Ψ(v_{n,1}, ν_{n,1})) + l_{n,1}) + β_{n,1} s_{n,1} + k_{n,1},
v_{n,i} = (1 - α_{n,i+1} - β_{n,i+1}) x_n + α_{n,i+1}(P_{K_r}(Φ(v_{n,i+1}, ν_{n,i+1})) + e_{n,i+1}) + β_{n,i+1} j_{n,i+1} + r_{n,i+1},
ν_{n,i} = (1 - α_{n,i+1} - β_{n,i+1}) y_n + α_{n,i+1}(P_{K_r}(Ψ(v_{n,i+1}, ν_{n,i+1})) + l_{n,i+1}) + β_{n,i+1} s_{n,i+1} + k_{n,i+1},
v_{n,p-1} = (1 - α_{n,p} - β_{n,p}) x_n + α_{n,p}(P_{K_r}(Φ(x_n, y_n)) + e_{n,p}) + β_{n,p} j_{n,p} + r_{n,p},
ν_{n,p-1} = (1 - α_{n,p} - β_{n,p}) y_n + α_{n,p}(P_{K_r}(Ψ(x_n, y_n)) + l_{n,p}) + β_{n,p} s_{n,p} + k_{n,p},
i = 1, 2, ..., p - 2,

where Φ(v_{n,i}, ν_{n,i}), Ψ(v_{n,i}, ν_{n,i}) (i = 1, 2, ..., p - 2), Φ(x_n, y_n), Ψ(x_n, y_n), {α_{n,i}}_{n=1}^∞, {β_{n,i}}_{n=1}^∞, {e_{n,i}}_{n=1}^∞, {l_{n,i}}_{n=1}^∞, {j_{n,i}}_{n=1}^∞, {s_{n,i}}_{n=1}^∞, {r_{n,i}}_{n=1}^∞, {k_{n,i}}_{n=1}^∞ (i = 1, 2, ..., p) are the same as in Algorithm 4.1.

Algorithm 4.3. Let T_i, g_i (i = 1, 2), ρ and η be the same as in Algorithm 4.1. Furthermore, suppose that the constants ρ and η satisfy the condition (3.4). For an arbitrarily chosen initial point (x_1, y_1) ∈ H × H, compute the iterative sequence {(x_n, y_n)}_{n=1}^∞ in H × H by the following iterative process:

x_{n+1} = (1 - α_n) x_n + α_n S_1^n P_{K_r}(g_1(y_n) - ρ T_1(y_n, x_n)),
y_{n+1} = (1 - α_n) y_n + α_n S_2^n P_{K_r}(g_2(x_n) - η T_2(x_n, y_n)),

where S_1, S_2 are the same as in Algorithm 4.1 and {α_n}_{n=1}^∞ is a sequence in [0, 1] satisfying ∑_{n=1}^∞ α_n = ∞.

If S_i ≡ I for each i = 1, 2, then Algorithm 4.3 reduces to the following iterative algorithm for solving the system (3.1).

Algorithm 4.4. Let T_i, g_i (i = 1, 2), ρ and η be the same as in Algorithm 4.1. Furthermore, assume that the constants ρ and η satisfy the condition (3.4). For an arbitrarily chosen initial point (x_1, y_1) ∈ H × H, compute the iterative sequence {(x_n, y_n)}_{n=1}^∞ in H × H in the following way:

x_{n+1} = (1 - α_n) x_n + α_n P_{K_r}(g_1(y_n) - ρ T_1(y_n, x_n)),
y_{n+1} = (1 - α_n) y_n + α_n P_{K_r}(g_2(x_n) - η T_2(x_n, y_n)),

where the sequence { α n } n = 1 is the same as in Algorithm 4.3.
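For illustration only, the following sketch runs Algorithm 4.4 in H = ℝ² with K_r taken to be the closed unit ball (a simple uniformly prox-regular set on which the projection is single-valued), g_1 = g_2 = I, α_n ≡ 1/2, and the hypothetical choices T_1(y, x) = y and T_2(x, y) = x; none of these concrete data come from the paper. With these choices the unique solution of the toy system is (0, 0).

```python
import numpy as np

def proj_ball(z, r=1.0):
    """Projection onto the closed ball of radius r centered at the origin."""
    nz = np.linalg.norm(z)
    return z if nz <= r else (r / nz) * z

def algorithm_4_4(x, y, rho=0.5, eta=0.5, iters=200):
    """Algorithm 4.4 with g_i = I and the illustrative operators
    T1(y, x) = y, T2(x, y) = x (our own toy choices)."""
    for _ in range(iters):
        a = 0.5  # any alpha_n in [0, 1] with divergent sum works here
        x_new = (1 - a) * x + a * proj_ball(y - rho * y)
        y_new = (1 - a) * y + a * proj_ball(x - eta * x)
        x, y = x_new, y_new
    return x, y

x0 = np.array([0.9, -0.3])
y0 = np.array([-0.2, 0.8])
x_star, y_star = algorithm_4_4(x0, y0)
print(np.linalg.norm(x_star), np.linalg.norm(y_star))  # both near 0
```

Since the iterates stay inside the unit ball, the projection acts as the identity and the map is a strict contraction, so the sequence converges to the fixed point (0, 0), in line with Theorem 5.2's conclusion for this special case.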

Remark 4.5. Algorithms 2.1-2.4 in [20], Algorithms 3.1-3.7 in [28], Algorithms 2.1-2.3 in [32], Algorithms 2.1 and 2.2 in [33] and Algorithms 2.1-2.4 in [34] are special cases of Algorithms 4.1-4.4. In brief, for suitable choices of the operators S_i, T_i, g_i (i = 1, 2) and the constants ρ and η, one can obtain a number of new and previously known iterative schemes for solving the system (3.1) and related problems. This clearly shows that Algorithms 4.1-4.4 are quite general and unifying.

Remark 4.6. It should be pointed out that

(1) if e″_{n,i} = l″_{n,i} = r_{n,i} = k_{n,i} = 0 for all n ≥ 1 and i = 1, 2, ..., p, then Algorithms 4.1 and 4.2 reduce to perturbed iterative processes with mean errors;

(2) when e_{n,i} = l_{n,i} = j_{n,i} = s_{n,i} = r_{n,i} = k_{n,i} = 0 for all n ≥ 1 and i = 1, 2, ..., p, Algorithms 4.1 and 4.2 reduce to perturbed iterative processes without errors.

5 Main results

In this section, we establish the strong convergence of the sequences generated by the perturbed projection iterative Algorithms 4.1 and 4.2 under some suitable conditions. We need the following lemma to verify our main results.

Lemma 5.1. Let {a n }, {b n } and {c n } be three nonnegative real sequences satisfying the following condition: There exists a positive integer n0 such that

a_{n+1} ≤ (1 - t_n) a_n + b_n t_n + c_n, ∀n ≥ n_0,

where t_n ∈ [0, 1], ∑_{n=0}^∞ t_n = ∞, lim_{n→∞} b_n = 0 and ∑_{n=0}^∞ c_n < ∞. Then lim_{n→∞} a_n = 0.

Proof. The proof directly follows from Lemma 2 in Liu [22].
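As a quick numerical sanity check (not part of the proof), the recursion of Lemma 5.1 can be simulated with the illustrative choices t_n = b_n = 1/(n + 1) and c_n = 1/(n + 1)², which satisfy all the hypotheses (∑ t_n diverges, b_n → 0, ∑ c_n converges):

```python
# Simulate a_{n+1} = (1 - t_n) a_n + b_n t_n + c_n from a large starting value
# with t_n = b_n = 1/(n+1) and c_n = 1/(n+1)^2; Lemma 5.1 predicts a_n -> 0.
a = 10.0
for n in range(1, 200001):
    t = 1.0 / (n + 1)
    b = 1.0 / (n + 1)
    c = 1.0 / (n + 1) ** 2
    a = (1 - t) * a + b * t + c
print(a)  # close to 0
```

The slow 1/n decay of t_n makes the convergence gradual, which is why the convergence theorems below only assert strong convergence without a rate.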

Theorem 5.2. Let T_i, g_i (i = 1, 2), ρ and η be the same as in Theorem 3.3. Suppose that all the conditions of Theorem 3.3 hold and the constants ρ, η satisfy the conditions (3.4) and (3.5). Assume that S_1 : K_r → K_r is a nearly uniformly L_1-Lipschitzian mapping with the sequence {a_n}_{n=1}^∞, S_2 : K_r → K_r is a nearly uniformly L_2-Lipschitzian mapping with the sequence {b_n}_{n=1}^∞, and Q is the self-mapping of K_r × K_r defined by (4.1) such that Fix(Q) ∩ SGNRNVID(K_r, T_i, g_i, i = 1, 2) ≠ ∅. Moreover, for each i = 1, 2, let L_i ϑ < 1, where ϑ is the same as in (3.12). If there exists a constant α > 0 such that ∏_{i=1}^p α_{n,i} ≥ α for each n ≥ 1, then the iterative sequence {(x_n, y_n)}_{n=1}^∞ generated by Algorithm 4.1 converges strongly to the only element of Fix(Q) ∩ SGNRNVID(K_r, T_i, g_i, i = 1, 2).

Proof. According to Theorem 3.3, the system (3.1) has a unique solution (x*, y*) in K_r × K_r. Since ρ < r′/(1 + ||T_1(y*, x*)||) and η < r′/(1 + ||T_2(x*, y*)||), it follows from Lemma 3.2 that (x*, y*) satisfies the equations (3.3). Since SGNRNVID(K_r, T_i, g_i, i = 1, 2) is a singleton set and Fix(Q) ∩ SGNRNVID(K_r, T_i, g_i, i = 1, 2) ≠ ∅, we conclude that x* ∈ Fix(S_1) and y* ∈ Fix(S_2). Hence, for each i = 1, 2, ..., p and n ≥ 1, we can write

x* = (1 - α_{n,i} - β_{n,i}) x* + α_{n,i} S_1^n P_{K_r}(g_1(y*) - ρ T_1(y*, x*)) + β_{n,i} x*,
y* = (1 - α_{n,i} - β_{n,i}) y* + α_{n,i} S_2^n P_{K_r}(g_2(x*) - η T_2(x*, y*)) + β_{n,i} y*,
(5.1)

where the sequences {α_{n,i}}_{n=1}^∞ and {β_{n,i}}_{n=1}^∞ (i = 1, 2, ..., p) are the same as in Algorithm 4.1. Let Γ = max{sup_{n≥0} ||j_{n,i} - x*||, sup_{n≥0} ||s_{n,i} - y*|| : i = 1, 2, ..., p}. Since g_1(y*), g_1(y_n) ∈ K_r, ρ < r′/(1 + ||T_1(y*, x*)||) and ρ < r′/(1 + ||T_1(y_n, x_n)||) for all n ≥ 1, we can easily check that the points g_1(y*) - ρT_1(y*, x*) and g_1(y_n) - ρT_1(y_n, x_n) (n ≥ 1) belong to U(r′). By using (4.3), (5.1), Proposition 2.10 and the assumptions, we have

||x_{n+1} - x*|| ≤ (1 - α_{n,1} - β_{n,1})||x_n - x*|| + α_{n,1}||S_1^n P_{K_r}(g_1(ν_{n,1}) - ρT_1(ν_{n,1}, v_{n,1})) - S_1^n P_{K_r}(g_1(y*) - ρT_1(y*, x*))|| + β_{n,1}||j_{n,1} - x*|| + α_{n,1}||e_{n,1}|| + ||r_{n,1}||
≤ (1 - α_{n,1} - β_{n,1})||x_n - x*|| + α_{n,1}L_1(||P_{K_r}(g_1(ν_{n,1}) - ρT_1(ν_{n,1}, v_{n,1})) - P_{K_r}(g_1(y*) - ρT_1(y*, x*))|| + a_n) + β_{n,1}||j_{n,1} - x*|| + α_{n,1}(||e′_{n,1}|| + ||e″_{n,1}||) + ||r_{n,1}||
≤ (1 - α_{n,1} - β_{n,1})||x_n - x*|| + α_{n,1}L_1 (r/(r - r′))||g_1(ν_{n,1}) - g_1(y*) - ρ(T_1(ν_{n,1}, v_{n,1}) - T_1(y*, x*))|| + α_{n,1}L_1 a_n + α_{n,1}||e′_{n,1}|| + ||e″_{n,1}|| + ||r_{n,1}|| + β_{n,1}Γ
≤ (1 - α_{n,1} - β_{n,1})||x_n - x*|| + α_{n,1}L_1 θ||ν_{n,1} - y*|| + α_{n,1}||e′_{n,1}|| + ||e″_{n,1}|| + ||r_{n,1}|| + α_{n,1}L_1 a_n + β_{n,1}Γ,
(5.2)

where θ is the same as in (3.10). In a similar way to the proof of (5.2), we can get

||y_{n+1} - y*|| ≤ (1 - α_{n,1} - β_{n,1})||y_n - y*|| + α_{n,1}L_2 ω||v_{n,1} - x*|| + α_{n,1}||l′_{n,1}|| + ||l″_{n,1}|| + ||k_{n,1}|| + α_{n,1}L_2 b_n + β_{n,1}Γ,
(5.3)

where ω is the same as in (3.11). Letting L = max{L1, L2} and using (5.2) and (5.3), we obtain

||(x_{n+1}, y_{n+1}) - (x*, y*)||_* ≤ (1 - α_{n,1} - β_{n,1})||(x_n, y_n) - (x*, y*)||_* + α_{n,1}Lϑ||(v_{n,1}, ν_{n,1}) - (x*, y*)||_* + α_{n,1}||(e′_{n,1}, l′_{n,1})||_* + ||(e″_{n,1}, l″_{n,1})||_* + ||(r_{n,1}, k_{n,1})||_* + α_{n,1}L(a_n + b_n) + 2β_{n,1}Γ,
(5.4)

where ϑ is the same as in (3.12).

As in the proof of the inequalities (5.2)-(5.4), for each i ∈ {1, 2, ..., p - 2}, we can prove that

||(v_{n,i}, ν_{n,i}) - (x*, y*)||_* ≤ (1 - α_{n,i+1} - β_{n,i+1})||(x_n, y_n) - (x*, y*)||_* + α_{n,i+1}Lϑ||(v_{n,i+1}, ν_{n,i+1}) - (x*, y*)||_* + α_{n,i+1}||(e′_{n,i+1}, l′_{n,i+1})||_* + ||(e″_{n,i+1}, l″_{n,i+1})||_* + ||(r_{n,i+1}, k_{n,i+1})||_* + α_{n,i+1}L(a_n + b_n) + 2β_{n,i+1}Γ
(5.5)

and

||(v_{n,p-1}, ν_{n,p-1}) - (x*, y*)||_* ≤ (1 - α_{n,p} - β_{n,p})||(x_n, y_n) - (x*, y*)||_* + α_{n,p}Lϑ||(x_n, y_n) - (x*, y*)||_* + α_{n,p}||(e′_{n,p}, l′_{n,p})||_* + ||(e″_{n,p}, l″_{n,p})||_* + ||(r_{n,p}, k_{n,p})||_* + α_{n,p}L(a_n + b_n) + 2β_{n,p}Γ.
(5.6)

It follows from (5.5) and (5.6) that

||(v_{n,1}, ν_{n,1}) - (x*, y*)||_*
≤ (1 - α_{n,2} - β_{n,2})||(x_n, y_n) - (x*, y*)||_* + α_{n,2}Lϑ||(v_{n,2}, ν_{n,2}) - (x*, y*)||_* + α_{n,2}||(e′_{n,2}, l′_{n,2})||_* + ||(e″_{n,2}, l″_{n,2})||_* + ||(r_{n,2}, k_{n,2})||_* + α_{n,2}L(a_n + b_n) + 2β_{n,2}Γ
≤ (1 - α_{n,2} - β_{n,2})||(x_n, y_n) - (x*, y*)||_* + α_{n,2}Lϑ[(1 - α_{n,3} - β_{n,3})||(x_n, y_n) - (x*, y*)||_* + α_{n,3}Lϑ||(v_{n,3}, ν_{n,3}) - (x*, y*)||_* + α_{n,3}||(e′_{n,3}, l′_{n,3})||_* + ||(e″_{n,3}, l″_{n,3})||_* + ||(r_{n,3}, k_{n,3})||_* + α_{n,3}L(a_n + b_n) + 2β_{n,3}Γ] + α_{n,2}||(e′_{n,2}, l′_{n,2})||_* + ||(e″_{n,2}, l″_{n,2})||_* + ||(r_{n,2}, k_{n,2})||_* + α_{n,2}L(a_n + b_n) + 2β_{n,2}Γ
= (1 - α_{n,2} - β_{n,2} + α_{n,2}(1 - α_{n,3} - β_{n,3})Lϑ)||(x_n, y_n) - (x*, y*)||_* + α_{n,2}α_{n,3}L²ϑ²||(v_{n,3}, ν_{n,3}) - (x*, y*)||_* + α_{n,2}||(e′_{n,2}, l′_{n,2})||_* + α_{n,2}α_{n,3}Lϑ||(e′_{n,3}, l′_{n,3})||_* + ||(e″_{n,2}, l″_{n,2})||_* + α_{n,2}Lϑ||(e″_{n,3}, l″_{n,3})||_* + ||(r_{n,2}, k_{n,2})||_* + α_{n,2}Lϑ||(r_{n,3}, k_{n,3})||_* + (α_{n,2}L + α_{n,2}α_{n,3}L²ϑ)(a_n + b_n) + 2(β_{n,2} + α_{n,2}β_{n,3}Lϑ)Γ
≤ ⋯
≤ (1 - α_{n,2} - β_{n,2} + α_{n,2}(1 - α_{n,3} - β_{n,3})Lϑ + α_{n,2}α_{n,3}(1 - α_{n,4} - β_{n,4})L²ϑ² + ⋯ + ∏_{i=2}^{p-1}α_{n,i}(1 - α_{n,p} - β_{n,p})L^{p-2}ϑ^{p-2} + ∏_{i=2}^{p}α_{n,i}L^{p-1}ϑ^{p-1})||(x_n, y_n) - (x*, y*)||_* + α_{n,2}||(e′_{n,2}, l′_{n,2})||_* + α_{n,2}α_{n,3}Lϑ||(e′_{n,3}, l′_{n,3})||_* + ⋯ + ∏_{i=2}^{p}α_{n,i}L^{p-2}ϑ^{p-2}||(e′_{n,p}, l′_{n,p})||_* + ||(e″_{n,2}, l″_{n,2})||_* + α_{n,2}Lϑ||(e″_{n,3}, l″_{n,3})||_* + ⋯ + ∏_{i=2}^{p-1}α_{n,i}L^{p-2}ϑ^{p-2}||(e″_{n,p}, l″_{n,p})||_* + ||(r_{n,2}, k_{n,2})||_* + α_{n,2}Lϑ||(r_{n,3}, k_{n,3})||_* + ⋯ + ∏_{i=2}^{p-1}α_{n,i}L^{p-2}ϑ^{p-2}||(r_{n,p}, k_{n,p})||_* + (α_{n,2}L + α_{n,2}α_{n,3}L²ϑ + α_{n,2}α_{n,3}α_{n,4}L³ϑ² + ⋯ + ∏_{i=2}^{p}α_{n,i}L^{p-1}ϑ^{p-2})(a_n + b_n) + 2(β_{n,2} + α_{n,2}β_{n,3}Lϑ + α_{n,2}α_{n,3}β_{n,4}L²ϑ² + ⋯ + ∏_{i=2}^{p-1}α_{n,i}β_{n,p}L^{p-2}ϑ^{p-2})Γ.
(5.7)

Applying (5.4) and (5.7), we get

||(x_{n+1}, y_{n+1}) - (x*, y*)||_*
≤ [1 - α_{n,1} - β_{n,1} + α_{n,1}(1 - α_{n,2} - β_{n,2})Lϑ + α_{n,1}α_{n,2}(1 - α_{n,3} - β_{n,3})L²ϑ² + ⋯ + ∏_{i=1}^{p-1}α_{n,i}(1 - α_{n,p} - β_{n,p})L^{p-1}ϑ^{p-1} + ∏_{i=1}^{p}α_{n,i}L^pϑ^p]||(x_n, y_n) - (x*, y*)||_* + α_{n,1}||(e′_{n,1}, l′_{n,1})||_* + α_{n,1}α_{n,2}Lϑ||(e′_{n,2}, l′_{n,2})||_* + ⋯ + ∏_{i=1}^{p}α_{n,i}L^{p-1}ϑ^{p-1}||(e′_{n,p}, l′_{n,p})||_* + ||(e″_{n,1}, l″_{n,1})||_* + α_{n,1}Lϑ||(e″_{n,2}, l″_{n,2})||_* + ⋯ + ∏_{i=1}^{p-1}α_{n,i}L^{p-1}ϑ^{p-1}||(e″_{n,p}, l″_{n,p})||_* + ||(r_{n,1}, k_{n,1})||_* + α_{n,1}Lϑ||(r_{n,2}, k_{n,2})||_* + ⋯ + ∏_{i=1}^{p-1}α_{n,i}L^{p-1}ϑ^{p-1}||(r_{n,p}, k_{n,p})||_* + (α_{n,1}L + α_{n,1}α_{n,2}L²ϑ + α_{n,1}α_{n,2}α_{n,3}L³ϑ² + ⋯ + ∏_{i=1}^{p}α_{n,i}L^pϑ^{p-1})(a_n + b_n) + 2(β_{n,1} + α_{n,1}β_{n,2}Lϑ + α_{n,1}α_{n,2}β_{n,3}L²ϑ² + ⋯ + ∏_{i=1}^{p-1}α_{n,i}β_{n,p}L^{p-1}ϑ^{p-1})Γ
≤ [1 - (1 - Lϑ)∏_{i=1}^{p}α_{n,i}L^{p-1}ϑ^{p-1}]||(x_n, y_n) - (x*, y*)||_* + ∑_{i=1}^{p}∏_{j=1}^{i}α_{n,j}L^{i-1}ϑ^{i-1}||(e′_{n,i}, l′_{n,i})||_* + ||(e″_{n,1}, l″_{n,1})||_* + ∑_{i=2}^{p}∏_{j=1}^{i-1}α_{n,j}L^{i-1}ϑ^{i-1}||(e″_{n,i}, l″_{n,i})||_* + ||(r_{n,1}, k_{n,1})||_* + ∑_{i=2}^{p}∏_{j=1}^{i-1}α_{n,j}L^{i-1}ϑ^{i-1}||(r_{n,i}, k_{n,i})||_* + ∑_{i=1}^{p}∏_{j=1}^{i}α_{n,j}L^iϑ^{i-1}(a_n + b_n) + 2(β_{n,1} + ∑_{i=2}^{p}∏_{j=1}^{i-1}α_{n,j}β_{n,i}L^{i-1}ϑ^{i-1})Γ
≤ [1 - (1 - Lϑ)∏_{i=1}^{p}α_{n,i}L^{p-1}ϑ^{p-1}]||(x_n, y_n) - (x*, y*)||_* + (1 - Lϑ)∏_{i=1}^{p}α_{n,i}L^{p-1}ϑ^{p-1} · [∑_{i=1}^{p}∏_{j=1}^{i}α_{n,j}L^{i-1}ϑ^{i-1}||(e′_{n,i}, l′_{n,i})||_* + ∑_{i=1}^{p}∏_{j=1}^{i}α_{n,j}L^iϑ^{i-1}(a_n + b_n)] / (α(1 - Lϑ)L^{p-1}ϑ^{p-1}) + ||(e″_{n,1}, l″_{n,1})||_* + ∑_{i=2}^{p}∏_{j=1}^{i-1}α_{n,j}L^{i-1}ϑ^{i-1}||(e″_{n,i}, l″_{n,i})||_* + ||(r_{n,1}, k_{n,1})||_* + ∑_{i=2}^{p}∏_{j=1}^{i-1}α_{n,j}L^{i-1}ϑ^{i-1}||(r_{n,i}, k_{n,i})||_* + 2(β_{n,1} + ∑_{i=2}^{p}∏_{j=1}^{i-1}α_{n,j}β_{n,i}L^{i-1}ϑ^{i-1})Γ.
(5.8)

Since Lϑ < 1, lim_{n→∞} a_n = lim_{n→∞} b_n = 0 and ∑_{n=1}^∞ β_{n,i} < ∞ for each i ∈ {1, 2, ..., p}, in view of (4.4), it is obvious that all the conditions of Lemma 5.1 are satisfied. Now, Lemma 5.1 and (5.8) guarantee that (x_n, y_n) → (x*, y*) as n → ∞, and so the sequence {(x_n, y_n)}_{n=1}^∞ generated by Algorithm 4.1 converges strongly to the unique solution (x*, y*) of the system (3.1). This completes the proof.

Corollary 5.3. Suppose that T_i, g_i (i = 1, 2), ρ and η are the same as in Theorem 3.3 and let all the conditions of Theorem 3.3 hold. Furthermore, assume that the constants ρ and η satisfy the conditions (3.4) and (3.5) and, for each i = 1, 2, L_i ϑ < 1, where ϑ is the same as in (3.12). If there exists a constant α > 0 such that ∏_{i=1}^p α_{n,i} ≥ α for each n ≥ 1, then the iterative sequence {(x_n, y_n)}_{n=1}^∞ generated by Algorithm 4.2 converges strongly to the unique solution of the system (3.1).

As in the proof of Theorem 5.2, one can prove the convergence of the iterative sequences generated by Algorithms 4.3 and 4.4.

6 Comments on results in the papers [20, 28, 33, 47]

In view of Definition 2.11, we note that the relaxed cocoercivity condition on an operator T is weaker than strong monotonicity; in other words, the class of relaxed cocoercive mappings is more general than the class of strongly monotone mappings. In fact, Chang et al. [20], Verma [33], Huang and Noor [47] and Noor and Noor [28] studied the convergence analysis of their proposed iterative algorithms under hypotheses that amount to strong monotonicity. In the present section, we show that the main results in the papers [20, 28, 33, 47] still hold under the mild condition of relaxed cocoercivity.

Let K be a closed convex subset of H and let T : K × K → H be a nonlinear operator. Verma [33] and Chang et al. [20] introduced and considered the following system of nonlinear variational inequalities (SNVI): Find (x*, y*) ∈ K × K such that

⟨ρT(y*, x*) + x* - y*, x - x*⟩ ≥ 0, ∀x ∈ K, ρ > 0,
⟨ηT(x*, y*) + y* - x*, x - y*⟩ ≥ 0, ∀x ∈ K, η > 0.
(6.1)

Verma [33] proposed the following two-step iterative algorithm for solving the SNVI (6.1):

Algorithm 6.1. (Algorithm 2.1 [33]) For arbitrarily chosen initial points x_0, y_0 ∈ K, compute the sequences {x_k} and {y_k} such that

x_{k+1} = (1 - a_k) x_k + a_k P_K[y_k - ρT(y_k, x_k)],
y_k = P_K[x_k - ηT(x_k, y_k)],

where ρ, η > 0 are constants, P_K is the projection of H onto K, 0 ≤ a_k ≤ 1 and ∑_{k=0}^∞ a_k = ∞.
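As an illustration (the operator and the set below are our own toy choices, not taken from [33]), Algorithm 6.1 can be run in H = ℝ with K = [1, 2] and T(y, x) = y, which is 1-strongly monotone and 1-Lipschitz continuous in the first variable, so ρ = η = 0.5 respects the bound 2ξ/μ² = 2 appearing in Theorem 6.3 below:

```python
def proj_interval(z, lo=1.0, hi=2.0):
    """Projection onto the closed interval [lo, hi]."""
    return min(max(z, lo), hi)

def verma_two_step(x, rho=0.5, eta=0.5, iters=100):
    """Algorithm 6.1 with the illustrative choice T(y, x) = y on K = [1, 2]."""
    for _ in range(iters):
        a = 0.5  # a_k in [0, 1] with divergent sum
        y = proj_interval(x - eta * x)                     # y_k step
        x = (1 - a) * x + a * proj_interval(y - rho * y)   # x_{k+1} step
    return x, y

x_star, y_star = verma_two_step(5.0)
print(x_star, y_star)  # approaches the solution (1, 1)
```

Here the projection becomes active at the lower endpoint of K, and one checks directly that (1, 1) satisfies the fixed-point equations x* = P_K[y* - ρT(y*, x*)], y* = P_K[x* - ηT(x*, y*)] for these toy data.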

He also studied the convergence analysis of the suggested iterative algorithm under certain conditions as follows:

Theorem 6.2. (Theorem 2.1 [33]) Let H be a real Hilbert space and let K be a nonempty closed convex subset of H. Let T : K × K → H be (γ, r)-relaxed cocoercive and µ-Lipschitz continuous in the first variable. Suppose that (x*, y*) ∈ K × K is a solution to the SNVI (6.1), the sequences {x_k} and {y_k} are generated by Algorithm 6.1, and

0 ≤ a_k ≤ 1, ∑_{k=0}^∞ a_k = ∞.

Then the sequences {xk} and {yk}, respectively, converge to x* and y* for

0 < ρ < 2(r - γμ²)/μ², 0 < η < 2(r - γμ²)/μ².

We note that the condition 0 < ρ < 2(r - γμ²)/μ² implies that r > γμ². Since T : K × K → H is (γ, r)-relaxed cocoercive and µ-Lipschitz continuous in the first variable, the condition r > γμ² guarantees that the operator T is (r - γμ²)-strongly monotone in the first variable. Hence, one can rewrite the statement of Theorem 6.2 as follows:
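The implication used here follows in one line from the two definitions: for all admissible first arguments u, v,

```latex
\begin{aligned}
\langle T(u,\cdot)-T(v,\cdot),\,u-v\rangle
  &\ge -\gamma\,\|T(u,\cdot)-T(v,\cdot)\|^{2}+r\,\|u-v\|^{2}
      &&\text{(relaxed cocoercivity)}\\
  &\ge -\gamma\mu^{2}\,\|u-v\|^{2}+r\,\|u-v\|^{2}
      &&\text{($\mu$-Lipschitz continuity)}\\
  &=(r-\gamma\mu^{2})\,\|u-v\|^{2},
\end{aligned}
```

so T is (r - γμ²)-strongly monotone in the first variable precisely when r > γμ². The same computation underlies Remarks 6.4, 6.8, 6.12 and 6.16 below.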

Theorem 6.3. Let H be a real Hilbert space and let K be a nonempty closed convex subset of H. Let T : K × K → H be ξ-strongly monotone and µ-Lipschitz continuous in the first variable. Suppose that (x*, y*) ∈ K × K is a solution to the SNVI (6.1), the sequences {x_k} and {y_k} are generated by Algorithm 6.1 and

0 ≤ a_k ≤ 1, ∑_{k=0}^∞ a_k = ∞.

Then the sequences {x_k} and {y_k}, respectively, converge to x* and y* for 0 < ρ < 2ξ/μ² and 0 < η < 2ξ/μ².

Remark 6.4. Theorem 2.3 in [33] has been stated under the relaxed cocoercivity condition on the operator T. Similarly, the hypotheses of Theorem 2.3 imply that T is, in fact, strongly monotone. Hence, Theorem 2.3 of [33] has been proved under strong monotonicity of the operator T, not the mild condition of relaxed cocoercivity.

Chang et al. [20] proposed the following two-step iterative method for solving the SNVI (6.1):

Algorithm 6.5. (Algorithm 2.1 [20]) For arbitrarily chosen initial points x_0, y_0 ∈ K, compute the sequences {x_n} and {y_n} such that

x_{n+1} = (1 - α_n) x_n + α_n P_K[y_n - ρT(y_n, x_n)],
y_n = (1 - β_n) x_n + β_n P_K[x_n - ηT(x_n, y_n)],

where ρ, η > 0 are two constants, P_K is the projection of H onto K and {α_n} and {β_n} are sequences in [0, 1].

They also studied the convergence analysis of the proposed iterative algorithm under certain conditions as follows:

Theorem 6.6. (Theorem 3.1 [20]) Let H be a real Hilbert space and let K be a nonempty closed convex subset of H. Let T(·,·) : K × K → H be (γ, r)-relaxed cocoercive and µ-Lipschitz continuous in the first variable. Suppose that (x*, y*) ∈ K × K is a solution to the SNVI (6.1) and that {x_n} and {y_n} are the sequences generated by Algorithm 6.5. If {α_n} and {β_n} are two sequences in [0, 1] satisfying the following conditions:

(a) ∑_{n=0}^∞ α_n = ∞;

(b) ∑_{n=0}^∞ (1 - β_n) < ∞;

(c) 0 < ρ, η < 2(r - γμ²)/μ²;

(d) r > γμ²,

then the sequences {x n } and {y n } converge strongly to x* and y*, respectively.

Similarly, since T is (γ, r)-relaxed cocoercive and µ-Lipschitz continuous in the first variable, the condition (d) implies that the operator T is (r - γμ²)-strongly monotone in the first variable. Accordingly, we can rewrite the statement of Theorem 6.6 as follows:

Theorem 6.7. Let H be a real Hilbert space and let K be a nonempty closed convex subset of H. Let T(·,·) : K × K → H be ξ-strongly monotone and µ-Lipschitz continuous in the first variable. Suppose that (x*, y*) is a solution to the SNVI (6.1) and that {x_n} and {y_n} are the sequences generated by Algorithm 6.5. If {α_n} and {β_n} are two sequences in [0, 1] satisfying the following conditions:

(a) ∑_{n=0}^∞ α_n = ∞;

(b) ∑_{n=0}^∞ (1 - β_n) < ∞;

(c) 0 < ρ, η < 2ξ/μ²,

then the sequences {x n } and {y n } converge strongly to x* and y*, respectively.

Remark 6.8. Theorems 3.2-3.4 in [20] have been stated under the relaxed cocoercivity condition on the operator T. The hypotheses of those theorems imply that T is in fact strongly monotone. Therefore, Theorems 3.2-3.4 in [20] have been proved under strong monotonicity of the operator T instead of the mild condition of relaxed cocoercivity.

For given two different nonlinear operators T_1, T_2 : K × K → H, Huang and Noor [47] introduced and considered the problem of finding (x*, y*) ∈ K × K such that

⟨ρT_1(y*, x*) + x* - y*, x - x*⟩ ≥ 0, ∀x ∈ K, ρ > 0,
⟨ηT_2(x*, y*) + y* - x*, x - y*⟩ ≥ 0, ∀x ∈ K, η > 0,
(6.2)

which is called a system of nonlinear variational inequalities involving two different nonlinear operators (SNVID).

They proposed the following two-step iterative algorithm for solving the SNVID (6.2):

Algorithm 6.9. (Algorithm 2.1 [47]) For arbitrarily chosen initial points x_0, y_0 ∈ K, compute the sequences {x_n} and {y_n} such that

x_{n+1} = (1 - a_n) x_n + a_n P_K[y_n - ρT_1(y_n, x_n)],
y_{n+1} = P_K[x_{n+1} - ηT_2(x_{n+1}, y_n)],

where a_n ∈ [0, 1] for all n ≥ 0, ρ, η > 0 are two constants and P_K is the projection of H onto K.

Meanwhile, they studied the convergence analysis of the proposed iterative algorithm under certain conditions as follows:

Theorem 6.10. (Theorem 3.1 [47]) Let K be a nonempty closed convex subset of a real Hilbert space H and let (x*, y*) be the solution of the SNVID (6.2). If T_1 : K × K → H is (γ_1, r_1)-relaxed cocoercive and µ_1-Lipschitz continuous in the first variable, and T_2 : K × K → H is (γ_2, r_2)-relaxed cocoercive and µ_2-Lipschitz continuous in the first variable with conditions

0 < ρ < min{2(r_1 - γ_1μ_1²)/μ_1², 2(r_2 - γ_2μ_2²)/μ_2²},
0 < η < min{2(r_1 - γ_1μ_1²)/μ_1², 2(r_2 - γ_2μ_2²)/μ_2²},

and r_1 > γ_1μ_1², r_2 > γ_2μ_2², μ_1 > 0, μ_2 > 0, a_n ∈ [0, 1], ∑_{n=0}^∞ a_n = ∞, then, for arbitrarily chosen initial points x_0, y_0 ∈ K, the sequences {x_n} and {y_n} obtained from the explicit Algorithm 6.9 converge strongly to x* and y*, respectively.

In a similar way, since, for each i = 1, 2, the operator T_i is (γ_i, r_i)-relaxed cocoercive and µ_i-Lipschitz continuous in the first variable, the conditions r_i > γ_iμ_i² (i = 1, 2) guarantee that, for each i = 1, 2, the operator T_i is (r_i - γ_iμ_i²)-strongly monotone in the first variable. Therefore, one can rewrite the statement of Theorem 6.10 as follows:

Theorem 6.11. Let K be a nonempty closed convex subset of a real Hilbert space H and let (x*, y*) be a solution of the SNVID (6.2). Suppose that, for each i = 1, 2, the operator T_i : K × K → H is ξ_i-strongly monotone and µ_i-Lipschitz continuous in the first variable. If the constants ρ, η > 0 satisfy the following conditions:

0 < ρ, η < min{2ξ_1/μ_1², 2ξ_2/μ_2²}, a_n ∈ [0, 1], ∑_{n=0}^∞ a_n = ∞,

then, for arbitrarily chosen initial points x0, y0 K, the sequences {x n } and {y n } obtained from explicit Algorithm 6.9 converge strongly to x* and y*, respectively.

Remark 6.12. The operator T in Theorems 3.2 and 3.3 of [47] is assumed to be relaxed cocoercive. However, the hypotheses of those theorems imply that T is in fact strongly monotone. Accordingly, Theorems 3.2 and 3.3 in [47] have been proved under strong monotonicity of the operator T, not relaxed cocoercivity.

For given different nonlinear operators T_1, T_2 : H × H → H and g, h : H → H, Noor and Noor [28] introduced and considered the problem of finding (x*, y*) ∈ K × K such that

⟨ρT_1(y*, x*) + x* - g(y*), g(x) - x*⟩ ≥ 0, ∀x ∈ H : g(x) ∈ K, ρ > 0,
⟨ηT_2(x*, y*) + y* - h(x*), h(x) - y*⟩ ≥ 0, ∀x ∈ H : h(x) ∈ K, η > 0,
(6.3)

which is called a system of general nonlinear variational inequalities involving four different nonlinear operators (SGNVID).

They proposed the following two-step iterative scheme for solving the SGNVID (6.3):

Algorithm 6.13. (Algorithm 3.1 [28]) For arbitrarily chosen initial points x_0, y_0 ∈ K, compute the sequences {x_n} and {y_n} such that

x_{n+1} = (1 - a_n) x_n + a_n P_K[g(y_n) - ρT_1(y_n, x_n)],
y_{n+1} = P_K[h(x_{n+1}) - ηT_2(x_{n+1}, y_n)],

where a_n ∈ [0, 1] for all n ≥ 0, ρ, η > 0 are two constants and P_K is the projection of H onto K.

They also studied the convergence analysis of the proposed iterative algorithm under certain conditions as follows:

Theorem 6.14. (Theorem 4.1 [28]) Let (x*, y*) be a solution of the SGNVID (6.3). Suppose that T 1 :H×HH is (γ1, r1)-relaxed cocoercive and µ1-Lipschitz continuous in the first variable and T 2 :H×HH is (γ2, r2)-relaxed cocoercive and µ2-Lipschitz continuous in the first variable. Let g be (γ3, r3)-relaxed cocoercive and µ3-Lipschitz continuous and let h be (γ4, r4)-relaxed cocoercive and µ4-Lipschitz continuous. If

|ρ - (r_1 - γ_1μ_1²)/μ_1²| < √((r_1 - γ_1μ_1²)² - μ_1²κ_1(2 - κ_1))/μ_1², r_1 > γ_1μ_1² + μ_1√(κ_1(2 - κ_1)), κ_1 < 1,
|η - (r_2 - γ_2μ_2²)/μ_2²| < √((r_2 - γ_2μ_2²)² - μ_2²κ_2(2 - κ_2))/μ_2², r_2 > γ_2μ_2² + μ_2√(κ_2(2 - κ_2)), κ_2 < 1,
(6.4)

where

κ_1 = √(1 - 2(r_3 - γ_3μ_3²) + μ_3²), κ_2 = √(1 - 2(r_4 - γ_4μ_4²) + μ_4²), a_n ∈ [0, 1], ∑_{n=0}^∞ a_n = ∞,
(6.5)

then, for arbitrarily chosen initial points x0, y0 K, the sequences {x n } and {y n } obtained from Algorithm 6.13 converge strongly to x* and y*, respectively.

The condition (6.4) implies that, for each i = 1, 2, r_i > γ_iμ_i². Since, for each i = 1, 2, the operator T_i is (γ_i, r_i)-relaxed cocoercive and µ_i-Lipschitz continuous in the first variable, the conditions r_i > γ_iμ_i² (i = 1, 2) guarantee that, for each i = 1, 2, the operator T_i is (r_i - γ_iμ_i²)-strongly monotone in the first variable. Since κ_i < 1 for each i = 1, 2, it follows from the condition (6.5) that r_i > γ_iμ_i² for each i = 3, 4.

Similarly, since g is (γ_3, r_3)-relaxed cocoercive and µ_3-Lipschitz continuous, and h is (γ_4, r_4)-relaxed cocoercive and µ_4-Lipschitz continuous, the conditions r_i > γ_iμ_i² (i = 3, 4) imply that the operator g is (r_3 - γ_3μ_3²)-strongly monotone and the operator h is (r_4 - γ_4μ_4²)-strongly monotone. Therefore, one can rewrite the statement of Theorem 6.14 as follows:

Theorem 6.15. Let (x*, y*) be a solution of the SGNVID (6.3). Suppose that for each i = 1, 2, the operator T i : H × H H is ξ i -strongly monotone and µ i -Lipschitz continuous in the first variable. Let g be ξ3-strongly monotone and µ3-Lipschitz continuous and h be ξ4-strongly monotone and µ4-Lipschitz continuous. If the constants ρ and η satisfy the following conditions:

|ρ - ξ_1/μ_1²| < √(ξ_1² - μ_1²κ_1(2 - κ_1))/μ_1², |η - ξ_2/μ_2²| < √(ξ_2² - μ_2²κ_2(2 - κ_2))/μ_2²,
ξ_i > μ_i√(κ_i(2 - κ_i)), κ_i < 1 (i = 1, 2),
κ_1 = √(1 - 2ξ_3 + μ_3²), κ_2 = √(1 - 2ξ_4 + μ_4²), 2ξ_i ≤ 1 + μ_i² (i = 3, 4),

then, for arbitrarily chosen initial points x_0, y_0 ∈ K, the sequences {x_n} and {y_n} obtained from Algorithm 6.13 converge strongly to x* and y*, respectively.

Remark 6.16. The operators T_i (i = 1, 2) in Theorems 4.2 and 4.4 of [28] are assumed to be relaxed cocoercive, while the hypotheses of those theorems guarantee that the operators T_i (i = 1, 2) are in fact strongly monotone. Therefore, Theorems 4.2 and 4.4 in [28] have been proved under strong monotonicity of the operators T_i (i = 1, 2), not the mild condition of relaxed cocoercivity. In addition, Theorem 4.3 in [28] has been stated under the relaxed cocoercivity condition on the operator T, yet its hypotheses imply that T is in fact strongly monotone. Therefore, Theorem 4.3 in [28] has likewise been proved under strong monotonicity of T instead of the mild condition of relaxed cocoercivity.

Remark 6.17. In view of the above facts, we note that Theorem 5.2 extends and improves Theorems 3.1-3.4 in [20], Theorems 4.1-4.4 in [28], Theorems 3.1-3.3 in [32] and Theorems 2.1-2.3 in [33] and [34].

7 Conclusion

In this paper, we have introduced and considered a new system of general nonlinear regularized nonconvex variational inequalities involving four different nonlinear operators and established the equivalence between the aforesaid system and a fixed point problem. By this equivalent formulation, we have discussed the existence and uniqueness of the solution of the system of general nonlinear regularized nonconvex variational inequalities. This equivalence and two nearly uniformly Lipschitzian mappings S_i (i = 1, 2) are used to suggest and analyze a new perturbed p-step iterative algorithm with mixed errors for finding an element of the set of fixed points of the nearly uniformly Lipschitzian mapping Q = (S_1, S_2) which is the unique solution of the system of general nonlinear regularized nonconvex variational inequalities. In the final section, we have presented some remarks on results obtained by Chang et al. [20], Huang and Noor [47], Noor and Noor [28] and Verma [32-34], and we have shown that their results are special cases of our results. Several special cases are also discussed. It is expected that the results proved in this paper may stimulate further research regarding the numerical methods and their applications in various fields of pure and applied sciences.

References

  1. Stampacchia G: Formes bilineaires coercitives sur les ensembles convexes. CR Acad Sci Paris 1964, 258: 4413–4416.


  2. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math Stud 1994, 63(1–4):123–145.


  3. Giannessi F, Maugeri A: Variational Inequalities and Network Equilibrium Problems. Plenum Press, New York; 1995.


  4. Giannessi F, Maugeri A, Pardalos PM: Equilibrium Problems: Non-smooth Optimization and Variational Inequality Models. Kluwer, Dordrecht; 2001.


  5. Glowinski R, Lions JL, Trémolières R: Numerical Analysis of Variational Inequalities. North-Holland, Amsterdam; 1981.


  6. Noor MA: Auxiliary principle technique for equilibrium problems. J Optim Theory Appl 2004, 122: 371–386.


  7. Patriksson M: Nonlinear Programming and Variational Inequality Problems. Kluwer, Dordrecht; 1999.


  8. Alimohammady M, Balooee J, Cho YJ, Roohi M: A new system of nonlinear fuzzy variational inclusions involving ( A , η )-accretive mappings in uniformly smooth Banach spaces. J Inequal Appl 2009, 2009: 33. Article ID 806727, doi:10.1155/2010/806727


  9. Alimohammady M, Balooee J, Cho YJ, Roohi M: Generalized nonlinear random equations with random fuzzy and relaxed cocoercive mappings in Banach spaces. Adv Nonlinear Var Inequal 2010, 13: 37–58.


  10. Alimohammady M, Balooee J, Cho YJ, Roohi M: New perturbed finite step iterative algorithms for a system of extended generalized nonlinear mixed-quasi variational inclusions. Comput Math Appl 2010, 60: 2953–2970. 10.1016/j.camwa.2010.09.055


11. Balooee J, Cho YJ, Kang MK: The Wiener-Hopf equation technique for solving general nonlinear regularized nonconvex variational inequalities. Fixed Point Theory Appl 2011: 86. doi:10.1186/1687-1812-2011-86


  12. Balooee J, Cho YJ, Kang MK: Projection methods and a new system of extended general regularized nonconvex set-valued variational inequalities. J Appl Math 2012: 18. Article ID 690648, doi:10.1155/2012/690648


  13. Bnouhachem A, Noor MA: Numerical methods for general mixed variational inequalities. Appl Math Comput 2008, 204: 27–36. 10.1016/j.amc.2008.05.134


  14. Bounkhel M: Existence results of nonconvex differential inclusions. Port Math (N.S.) 2002, 59: 283–309.


  15. Bounkhel M: General existence results for second order nonconvex sweeping process with unbounded perturbations. Port Math (N.S.) 2003, 60: 269–304.


  16. Bounkhel M, Azzam L: Existence results on the second order nonconvex sweeping processes with perturbations. Set-valued Anal 2004, 12: 291–318.


  17. Bounkhel M, Tadji L, Hamdi A: Iterative schemes to solve nonconvex variational problems. J Inequal Pure Appl Math 2003, 4: 1–14.


  18. Bounkhel M, Thibault L: Further characterizations of regular sets in Hilbert spaces and their applications to nonconvex sweeping process. Centro de Modelamiento Matematico (CMM), Universidad de Chile; 2000.


  19. Canino A: On p-convex sets and geodesics. J Differ Equ 1988, 75: 118–157. 10.1016/0022-0396(88)90132-5


  20. Chang SS, Joseph Lee HW, Chan CK: Generalized system for relaxed cocoercive variational inequalities in Hilbert spaces. Appl Math Lett 2007, 20: 329–334. 10.1016/j.aml.2006.04.017


  21. Lions JL, Stampacchia G: Variational inequalities. Commun Pure Appl Math 1967, 20: 493–512. 10.1002/cpa.3160200302


  22. Liu LS: Ishikawa and Mann iterative process with errors for nonlinear strongly accretive mappings in Banach spaces. J Math Anal Appl 1995, 194: 114–125. 10.1006/jmaa.1995.1289


  23. Lv S: Generalized systems of variational inclusions involving (A, η)-monotone mappings. Adv Fixed Point Theory 2011, 1: 1–14.


  24. Moudafi A: An algorithmic approach to prox-regular variational inequalities. Appl Math Comput 2004, 155: 845–852. 10.1016/j.amc.2003.06.003


  25. Noor MA: Iterative schemes for nonconvex variational inequalities. J Optim Theory Appl 2004, 121: 385–395.


  26. Noor MA: Variational Inequalities and Applications. Lecture Notes, Mathematics Department, COMSATS Institute of Information Technology, Islamabad, Pakistan; 2007–2009.


  27. Noor MA, Huang Z: Three-step iterative methods for nonexpansive mappings and variational inequalities. Appl Math Comput 2007, 187: 680–685. 10.1016/j.amc.2006.08.088


  28. Noor MA, Noor KI: Projection algorithms for solving a system of general variational inequalities. Nonlinear Anal 2009, 70: 2700–2706. 10.1016/j.na.2008.03.057


  29. Noor MA, Noor KI, Said EA: Iterative methods for solving general quasi-variational inequalities. Optim Lett 2010, 4: 513–530. 10.1007/s11590-010-0180-3


  30. Pang LP, Shen J, Song HS: A modified predictor-corrector algorithm for solving nonconvex generalized variational inequalities. Comput Math Appl 2007, 54: 319–325. 10.1016/j.camwa.2006.07.010


  31. Poliquin RA, Rockafellar RT, Thibault L: Local differentiability of distance functions. Trans Amer Math Soc 2000, 352: 5231–5249. 10.1090/S0002-9947-00-02550-2


  32. Verma RU: General convergence analysis for two-step projection methods and applications to variational problems. Appl Math Lett 2005, 18: 1286–1292. 10.1016/j.aml.2005.02.026


  33. Verma RU: Generalized system for relaxed cocoercive variational inequalities and projection methods. J Optim Theory Appl 2004, 121: 203–210.


  34. Verma RU: Projection methods, algorithms, and a new system of nonlinear variational inequalities. Comput Math Appl 2001, 41: 1025–1031. 10.1016/S0898-1221(00)00336-9


  35. Chang SS, Cho YJ, Zhou H: Iterative Methods for Nonlinear Operator Equations in Banach Spaces. Nova Science Publishers, Inc., Huntington; 2002: xiv+459. ISBN: 1-59033-170-2


  36. Chang SS, Cho YJ, Kim JK: On the two-step projection methods and applications to variational inequalities. Math Inequal Appl 2007, 10: 755–760.


  37. Cho YJ, Argyros IK, Petrot N: Approximation methods for common solutions of generalized equilibrium, systems of nonlinear variational inequalities and fixed point problems. Comput Math Appl 2010, 60: 2292–2301. 10.1016/j.camwa.2010.08.021


  38. Cho YJ, Fang YP, Huang NJ, Hwang HJ: Algorithms for systems of nonlinear variational inequalities. J Korean Math Soc 2004, 41: 489–499.


  39. Cho YJ, Kang SM, Qin X: On systems of generalized nonlinear variational inequalities in Banach spaces. Appl Math Comput 2008, 206: 214–220. 10.1016/j.amc.2008.09.005


  40. Cho YJ, Kim JK, Verma RU: A class of nonlinear variational inequalities involving partially relaxed monotone mappings and general auxiliary principle. Dyn Syst Appl 2002, 11: 333–338.


  41. Cho YJ, Petrot N: Regularization and iterative method for general variational inequality problem in Hilbert spaces. J Inequal Appl 2011, 2011: 21.


  42. Cho YJ, Qin X: Systems of generalized nonlinear variational inequalities and its projection methods. Nonlinear Anal 2008, 69: 4443–4451. 10.1016/j.na.2007.11.001


  43. Cho YJ, Qin X: Generalized systems for relaxed cocoercive variational inequalities and projection methods in Hilbert spaces. Math Inequal Appl 2009, 12: 365–375.


  44. Clarke FH: Optimization and Nonsmooth Analysis. Wiley, New York; 1983.


  45. Clarke FH, Ledyaev YS, Stern RJ, Wolenski PR: Nonsmooth Analysis and Control Theory. Springer, New York; 1998.


  46. Clarke FH, Stern RJ, Wolenski PR: Proximal smoothness and the lower C2 property. J Convex Anal 1995, 2: 117–144.


  47. Huang Z, Noor MA: An explicit projection method for a system of nonlinear variational inequalities with different (γ, r)-cocoercive mappings. Appl Math Comput 2007, 190: 356–361. 10.1016/j.amc.2007.01.032


  48. Goebel K, Kirk WA: A fixed point theorem for asymptotically nonexpansive mappings. Proc Amer Math Soc 1972, 35: 171–174. 10.1090/S0002-9939-1972-0298500-3


  49. Kirk WA, Xu HK: Asymptotic pointwise contractions. Nonlinear Anal 2008, 69: 4706–4712. 10.1016/j.na.2007.11.023


  50. Sahu DR: Fixed points of demicontinuous nearly Lipschitzian mappings in Banach spaces. Comment Math Univ Carolin 2005, 46: 653–666.


  51. Ye J, Huang J: Strong convergence theorems for fixed point problems and generalized equilibrium problems of three relatively quasi-nonexpansive mappings in Banach spaces. J Math Comput Sci 2011, 1: 1–18.


  52. Zhang SS, Lee JHW, Chan CK: Algorithms of common solutions for quasi variational inclusion and fixed point problems. Appl Math Mech 2008, 29: 571–581. 10.1007/s10483-008-0502-y



Acknowledgements

This research was also supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant Number: 2011-0021821).

Author information


Corresponding author

Correspondence to Yeol Je Cho.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Balooee, J., Je Cho, Y. Perturbed projection and iterative algorithms for a system of general regularized nonconvex variational inequalities. J Inequal Appl 2012, 141 (2012). https://doi.org/10.1186/1029-242X-2012-141


Keywords