
A new parallel splitting augmented Lagrangian-based method for a Stackelberg game

Abstract

This paper presents a novel solution scheme for a special class of variational inequality problems which can be used to model a Stackelberg game with one leader and three or more followers. In the scheme, the leader makes his decision first, and the followers then reveal their choices simultaneously based on the information of the leader’s strategy. Under mild conditions, we prove that the scheme converges to an equilibrium. The proposed approach is applied to a simple game and a traffic problem, and numerical results on the performance of the new method are reported.

1 Introduction

In this work, we concentrate on structured variational inequality problems described mathematically as follows: Find a point \(w^{*}\in\Omega\) such that

$$ Q\bigl(w^{*}\bigr)^{T}\bigl(w-w^{*}\bigr)\ge0,\quad \forall w\in\Omega, $$

where

$$\begin{aligned}& w= \begin{pmatrix} x_{1}\\ x_{2}\\ \vdots\\ x_{m} \end{pmatrix},\qquad Q(w)= \begin{pmatrix} f_{1}(x_{1})\\ f_{2}(x_{2})\\ \vdots\\ f_{m}(x_{m}) \end{pmatrix}, \qquad \Omega=\Biggl\{ (x_{1}, x_{2},\ldots, x_{m})\bigg|x_{i} \in\mathcal{X}_{i}, \sum_{i=1}^{m} A_{i}x_{i}=b\Biggr\} , \end{aligned}$$

\(\mathcal{X}_{i}\subseteq\mathcal{R}^{n_{i}}\), and \(f_{i}: \mathcal {X}_{i}\to\mathcal{R}^{n_{i}} \) (\(i=1,2,\ldots,m\)) are mappings. In this work, we assume that the sets \(\mathcal{X}_{i}\) (\(i=1,2,\ldots,m\)) are nonempty, closed, and convex, that the mappings \(f_{i}\) (\(i=1,2,\ldots,m\)) are monotone, and that a solution of the problem exists. By attaching a Lagrange multiplier vector \(\lambda\in\mathcal{R}^{l}\) to the linear constraint \(\sum_{i=1}^{m} A_{i}x_{i}=b\), the above problem can be converted into the following form: Find a point \(u^{*}\in\mathcal {U}=\prod_{i=1}^{m}\mathcal{X}_{i}\times\mathcal{R}^{l}\) such that

$$ F\bigl(u^{*}\bigr)^{T}\bigl(u-u^{*}\bigr)\ge0, \quad\forall u\in\mathcal{U}, $$
(1.1)

where

$$ u= \begin{pmatrix} x_{1}\\ x_{2}\\ \vdots\\ x_{m}\\ \lambda \end{pmatrix}\quad \mbox{and}\quad F(u)= \begin{pmatrix} f_{1}(x_{1})-A_{1}^{T}\lambda\\ f_{2}(x_{2})-A_{2}^{T}\lambda\\ \vdots\\ f_{m}(x_{m})-A_{m}^{T}\lambda\\ \sum_{i=1}^{m} A_{i}x_{i}-b \end{pmatrix}. $$
(1.2)

The structured system (1.1)-(1.2) can be viewed as a mathematical formulation of a one-leader-m-follower Stackelberg game in which the ith follower controls his decision \(x_{i}\), \(i=1,2,\ldots,m\). For the special case of one leader and two followers, that is, \(m=2\), there is extensive literature on numerical algorithms [1–5]. Although the one-leader-two-follower game is a helpful baseline for analyzing games and their equilibrium-seeking algorithms, many application problems involve three or more followers; that is, they are formulated as the problem (1.1)-(1.2) with \(m\geq3\). However, compared with the considerable number of studies on the special case \(m=2\), much less work focuses on designing algorithms for the general case \(m\geq3\).

For the general case of the problem (1.1)-(1.2), we can directly employ the classical augmented Lagrangian method proposed in [6]. However, the classical augmented Lagrangian method cannot exploit the separable structure of the problem (1.1)-(1.2), since the subproblems arising in its steps involve coupled variables. Decomposition techniques are frequently used and powerful tools for overcoming this difficulty. Based on these techniques, two categories of decomposition methods for the problem (1.1)-(1.2) stand out in the literature: the alternating direction method and the parallel splitting augmented Lagrangian method. The alternating direction method has been recognized as a benchmark method for solving the problem (1.1)-(1.2) ever since it was proposed by Gabay and Mercier [7], and its applications have been extended over the past decade to a variety of areas [8–10]. The basic iterative scheme of the method for solving the problem (1.1)-(1.2) is as follows: Given a decision \(u^{k}=(x_{1}^{k},x_{2}^{k},\ldots,x_{m}^{k}, \lambda^{k} )\), the method obtains \(x_{1}^{k+1},x_{2}^{k+1},\ldots,x_{m}^{k+1}\) by solving the following system:

$$ \left \{ \textstyle\begin{array}{@{}l} (x^{\prime}_{1}-x_{1})^{T} \{f_{1}(x_{1})-A_{1}^{T}[\lambda^{k}-H( A_{1}x_{1}+\sum_{j=2}^{m} A_{j}x_{j}^{k}-b)] \}\ge0,\quad \forall x^{\prime}_{1}\in\mathcal{X}_{1},\\ \cdots\\ (x^{\prime}_{i}-x_{i})^{T} \{f_{i}(x_{i})-A_{i}^{T}[\lambda ^{k}-H(\sum_{j=1}^{i-1} A_{j}x_{j}^{k+1}+A_{i}x_{i}+\sum_{j=i+1}^{m} A_{j}x_{j}^{k}-b)] \}\ge0, \\ \quad\forall x^{\prime}_{i}\in\mathcal{X}_{i},\\ \cdots\\ (x^{\prime}_{m}-x_{m})^{T} \{f_{m}(x_{m})-A_{m}^{T}[\lambda^{k}-H( \sum_{j=1}^{m-1} A_{j}x_{j}^{k+1}+A_{m}x_{m}-b)] \}\ge0,\\ \quad \forall x^{\prime}_{m}\in\mathcal{X}_{m}. \end{array}\displaystyle \right . $$
(1.3)

Then update λ by equation (1.4),

$$ \lambda^{k+1}=\lambda^{k}- H\Biggl(\sum _{i=1}^{m} A_{i}x_{i}^{k+1}-b \Biggr), $$
(1.4)

where H is a symmetric positive definite matrix.

Note that the alternating direction method is attractive because it successfully employs the Gauss-Seidel decomposition technique. However, in this method each follower reveals his strategy sequentially; that is, the decision \(x_{i}^{k+1}\) of follower i is revealed only after the strategies \(x_{1}^{k+1},\ldots,x_{i-1}^{k+1}\) of the preceding followers are available, which is not reasonable when each follower is blind to the others’ strategies.

As mentioned above, the other efficient method for the problem (1.1)-(1.2) is the so-called parallel splitting augmented Lagrangian method proposed by He [11], which is based on Jacobian decomposition. Compared to the alternating direction method, the distinguishing feature of the parallel splitting augmented Lagrangian method is that it solves the following subproblems simultaneously to obtain \(x_{i}^{k+1}\) (\(i=1,2,\ldots,m\)):

$$\begin{aligned} &\bigl(x^{\prime}_{i}-x_{i} \bigr)^{T} \biggl\{ f_{i}(x_{i})-A_{i}^{T} \biggl[\lambda^{k}-H\biggl( \sum_{j\neq i} A_{j}x_{j}^{k}+A_{i}x_{i}-b \biggr)\biggr] \biggr\} \ge0, \\ &\quad \forall x^{\prime}_{i}\in \mathcal{X}_{i}, i=1,2,\ldots,m, \end{aligned}$$
(1.5)

which means that the followers make their choices simultaneously. However, the convergence of the scheme (1.5)-(1.4) is difficult to establish, which has motivated improved methods in which the output of (1.5) is corrected by a further correction step [12–15]. All these methods add a correction step of the form

$$ u^{k+1}=u^{k}-\alpha_{k}d \bigl(u^{k}-\tilde{u}^{k}\bigr), $$
(1.6)

where \(\alpha_{k}\) is a stepsize, \(\tilde{u}^{k}\) is the output of (1.5), called a predictor, and \(-d(u^{k}-\tilde{u}^{k})\) is a descent direction at \(u^{k}\). These schemes may be interpreted in the context of a game as follows: When the leader provides a decision \(u^{k}=(x_{1}^{k},\ldots,x_{m}^{k}, \lambda^{k} )\), all followers decide their strategies \(\tilde{x}_{i}^{k}\) (\(i=1,2,\ldots,m\)) simultaneously by solving the corresponding subproblems in (1.5). Then, based on the feedback \(\tilde{x}_{i}^{k}\) (\(i=1,2,\ldots,m\)) from the followers, the leader improves his strategy through (1.6). A schematic comparison of the two decomposition schemes is given below.
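To make the contrast between the two decomposition schemes concrete, the following minimal Python sketch (our own illustration, not code from the cited works) compares the Gauss-Seidel update order of (1.3) with the Jacobi prediction of (1.5) and the correction step (1.6); solve_sub is a hypothetical placeholder for any routine that solves the ith subproblem for fixed data of the other players.

```python
def gauss_seidel_sweep(x_blocks, lam, solve_sub):
    """Scheme (1.3): follower i already sees the updated strategies of
    followers 1,...,i-1, so the subproblems must be solved sequentially."""
    x_new = list(x_blocks)
    for i in range(len(x_blocks)):
        others = x_new[:i] + x_blocks[i + 1:]      # mix of new and old blocks
        x_new[i] = solve_sub(i, others, lam)
    return x_new

def jacobi_sweep(x_blocks, lam, solve_sub):
    """Scheme (1.5): every follower sees only the previous iterate,
    so the m subproblems are independent and can be solved in parallel."""
    return [solve_sub(i, x_blocks[:i] + x_blocks[i + 1:], lam)
            for i in range(len(x_blocks))]

def correct(u, u_tilde, alpha_k, d):
    """Correction step (1.6): u^{k+1} = u^k - alpha_k * d(u^k - u~^k),
    where d(.) supplies a descent direction (u, u_tilde are numpy arrays)."""
    return u - alpha_k * d(u - u_tilde)
```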

Note that in the schemes (1.5)-(1.6) the leader controls all decision variables, which is not realistic, since in many practical problems the leader only has power over his own decision variables. Motivated by this research gap, the aim of this work is to devise a new method for the problem (1.1)-(1.2), viewed as a mathematical formulation of a one-leader-m-follower Stackelberg game in which the leader controls λ and the ith follower controls \(x_{i}\). In the correction step of our method, only the leader’s variable λ is updated. Furthermore, we establish the convergence of our method and present a computational study of its performance.

The rest of the paper is organized as follows. Section 2 gives preliminaries, such as definitions and notations, which will be useful for our analysis and ease of exposition. Section 3 presents the proposed method in detail. Section 4 conducts an analysis on the global convergence of the proposed method. In Section 5, we apply the method to solve some practical problems and report the corresponding computational results. Finally, conclusions and some future research directions are stated in Section 6.

2 Preliminaries

For the convenience of the analysis in this paper, this section provides some basic definitions and notations. The n-dimensional Euclidean space is denoted by \(\mathcal {R}^{n}\). All vectors in this paper are column vectors. For ease of exposition, we use the vector \((x_{1},\ldots,x_{m})\) to represent \((x_{1}^{T},\ldots,x_{m}^{T})^{T}\), where T denotes the transpose operator. \(\delta_{\max}(A)\) denotes the largest eigenvalue of a square matrix A. For any symmetric and positive definite matrix G, we denote by \(\|x\|_{G}:=\sqrt{x^{T}Gx}\) the G-norm. In this work, we define

$$ G= \begin{pmatrix} \alpha A_{1}^{T}HA_{1}& & & &\\ & \alpha A_{2}^{T}HA_{2} & & & \\ & & \ddots& & \\ & & & \alpha A_{m}^{T}HA_{m}& \\ & & & & H^{-1} \end{pmatrix} $$
(2.1)

such that \(\|u-u^{*}\|^{2}_{G}:=\|A_{1}x_{1}-A_{1}x_{1}^{*}\|_{\alpha H}^{2}+\cdots+\|A_{m}x_{m}-A_{m}x_{m}^{*}\|_{\alpha H}^{2}+\|\lambda-\lambda ^{*}\|_{H^{-1}}^{2}\).
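As an illustration of how this weighted norm can be evaluated, the following short Python sketch (our own, with the blocks passed in as hypothetical data) assembles \(\|u\|^{2}_{G}\) from the blocks \(A_{i}x_{i}\) and λ exactly as in the identity above.

```python
import numpy as np

def G_norm_sq(x_blocks, lam, A_blocks, H, alpha):
    """||u||_G^2 = sum_i alpha * (A_i x_i)^T H (A_i x_i) + lam^T H^{-1} lam,
    i.e. the block-diagonal weighting defined by the matrix G in (2.1)."""
    val = sum(alpha * (A @ x) @ H @ (A @ x)
              for A, x in zip(A_blocks, x_blocks))
    return val + lam @ np.linalg.solve(H, lam)
```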

Definition 2.1

(a) A mapping \(f: \mathcal {R}^{n}\to\mathcal {R}^{n}\) is said to be monotone if

$$(x_{1}-x_{2})^{T} \bigl(f(x_{1})-f(x_{2}) \bigr)\geq0,\quad \forall x_{1}, x_{2}\in \mathcal {R}^{n}. $$

(b) A mapping \(f: \mathcal {R}^{n}\to\mathcal {R}^{n}\) is said to be strongly monotone with modulus \(\mu>0\) if

$$(x_{1}-x_{2})^{T} \bigl(f(x_{1})-f(x_{2}) \bigr)\geq\mu\|x_{1}-x_{2}\|^{2},\quad \forall x_{1}, x_{2}\in\mathcal {R}^{n}. $$

Throughout the paper, we make the basic assumption that the mappings \(f_{i}\) (\(i=1,2,\ldots,m\)) are continuous and strongly monotone with modulus \(\mu _{f_{i}}\), respectively.
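As a simple illustration of this assumption (our own example, not taken from the references), consider an affine mapping \(f(x)=Mx+q\) with a symmetric positive definite matrix M. Then

$$(x_{1}-x_{2})^{T} \bigl(f(x_{1})-f(x_{2}) \bigr)=(x_{1}-x_{2})^{T}M(x_{1}-x_{2})\ge\lambda_{\min}(M)\|x_{1}-x_{2}\|^{2},\quad \forall x_{1}, x_{2}\in\mathcal {R}^{n}, $$

so f is strongly monotone with modulus \(\mu=\lambda_{\min}(M)>0\).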

3 Parallel method

In this section, we formally state the procedure of the new parallel splitting method for solving the problem (1.1)-(1.2) and provide some insights into the method’s properties.

Algorithm 3.1

(A new augmented Lagrangian-based parallel splitting method)

S0.

    Select an initial point \(u^{0}=(x_{1}^{0},\ldots,x_{m}^{0},\lambda ^{0}) \in\mathcal{U}\), a tolerance \(\epsilon>0\), a stepsize \(\alpha>0\), and a symmetric positive definite matrix H, and set \(k=0\).

S1.

    Simultaneously obtain solutions \(x_{i}^{k+1}\) (\(i=1,2,\ldots,m\)) by solving the following variational inequalities:

    $$\begin{aligned} &x_{i}\in\mathcal{X}_{i},\quad \bigl(x_{i}^{\prime}-x_{i}\bigr)^{T} \biggl\{ f_{i}(x_{i})-A_{i}^{T} \biggl[ \lambda^{k}-H \biggl(\sum_{j\neq i} A_{j}x_{j}^{k}+ A_{i}x_{i}-b \biggr) \biggr] \biggr\} \ge0, \\ &\quad \forall x_{i}^{\prime}\in \mathcal{X}_{i}, \end{aligned}$$
    (3.1)

    respectively. Then set

    $$ \tilde{\lambda}^{k}=\lambda^{k}- H \Biggl(\sum _{i=1}^{m} A_{i}x_{i}^{k+1}-b \Biggr). $$
    (3.2)
S2.

    Update \(\lambda^{k+1}\) through equation (3.3),

    $$ \lambda^{k+1}= \lambda^{k}-\alpha\bigl( \lambda^{k}-\tilde{\lambda}^{k}\bigr). $$
    (3.3)
S3.

    If

    $$ \max \Bigl\{ \max_{i}\bigl\| A_{i}x_{i}^{k}-A_{i}x_{i}^{k+1} \bigr\| , \bigl\| \lambda^{k}-\lambda^{k+1}\bigr\| \Bigr\} \leq \epsilon, $$
    (3.4)

    stop. Otherwise, set \(k=k+1\) and go to S1.
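For readers who prefer code, the following Python sketch (our own illustration of Algorithm 3.1, assuming \(H=\beta I\); solve_follower_vi is a hypothetical placeholder for any solver of the variational inequality (3.1)) mirrors steps S0-S3.

```python
import numpy as np

def parallel_splitting(x0, lam0, A, b, solve_follower_vi,
                       alpha=0.8, beta=0.9, eps=1e-6, max_iter=5000):
    """Sketch of Algorithm 3.1 with H = beta*I.
    x0: list of initial blocks x_i^0; A: list of matrices A_i; b: right-hand side.
    solve_follower_vi(i, v, beta) is assumed to return the solution of (3.1),
    i.e. the VI on X_i with mapping f_i(x_i) - A_i^T v + beta * A_i^T A_i x_i."""
    x, lam = [xi.copy() for xi in x0], lam0.copy()
    m = len(x)
    for _ in range(max_iter):
        # S1: predict all followers' strategies in parallel (Jacobi sweep).
        x_new = []
        for i in range(m):
            res_i = sum(A[j] @ x[j] for j in range(m) if j != i) - b
            x_new.append(solve_follower_vi(i, lam - beta * res_i, beta))
        # S1 (continued): auxiliary multiplier, equation (3.2).
        r = sum(A[i] @ x_new[i] for i in range(m)) - b
        lam_tilde = lam - beta * r
        # S2: correct only the leader's variable, equation (3.3).
        lam_new = lam - alpha * (lam - lam_tilde)
        # S3: stopping criterion (3.4).
        err = max(max(np.linalg.norm(A[i] @ (x[i] - x_new[i])) for i in range(m)),
                  np.linalg.norm(lam - lam_new))
        x, lam = x_new, lam_new
        if err <= eps:
            break
    return x, lam
```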

Remark 3.1

We now give some analysis of the proposed algorithm. From (3.2) and (3.3), we deduce that \(\sum_{i=1}^{m} A_{i}x_{i}^{k+1}=b\) if \(\lambda^{k+1}=\lambda^{k}\). Moreover, \(\sum_{j\neq i} A_{j}x_{j}^{k}+ A_{i}x_{i}^{k+1}-b=0\) under the condition that \(A_{i}x_{i}^{k+1}=A_{i}x_{i}^{k}\) (\(i=1,2,\ldots,m\)). Furthermore, according to (3.1), the following inequalities and equation hold:

$$ x_{i}^{k+1}\in\mathcal{X}_{i},\quad \bigl(x_{i}^{\prime}-x_{i}^{k+1} \bigr)^{T} \bigl\{ f_{i}\bigl(x_{i}^{k+1} \bigr)-A_{i}^{T}\lambda^{k+1} \bigr\} \ge0,\quad \forall x_{i}^{\prime}\in\mathcal{X}_{i}, i=1,2,\ldots, m, $$
(3.5)

and

$$\sum_{i=1}^{m} A_{i}x_{i}^{k+1}=b. $$

Thus, \((x_{1}^{k+1},\ldots,x_{m}^{k+1},\lambda^{k+1})\in \mathcal{U}\) is a solution of the one-leader-m-follower game. Based on this analysis, we conclude that, for a sufficiently small ϵ, the proposed method with termination condition (3.4) obtains an approximate solution \((x_{1}^{k+1},\ldots,x_{m}^{k+1},\lambda ^{k+1})\in\mathcal{U}\) of the game; that is, the stopping criterion (3.4) is reasonable.

Remark 3.2

Our proposed algorithm clearly falls into the class of parallel splitting methods, since all the subproblems (3.1) can be solved in parallel by many existing efficient algorithms. Moreover, the proposed algorithm makes full use of the separable structure of the problem (1.1)-(1.2), since only one function is involved in each subproblem. In addition, the proposed algorithm is a prediction-correction parallel splitting method; the most significant difference from existing methods is that only λ is corrected, which leads to lower computational cost.

4 Convergence result

The convergence property of our parallel splitting algorithm is given in this section. First, we give a lemma that is useful for the convergence result.

Lemma 4.1

Given \(\lambda^{k}\) by the leader and \(x_{i}^{k}\) by the ith follower \((i=1,2,\ldots,m)\) at iteration k, the strategy \((x_{1}^{k+1},\ldots,x_{m}^{k+1},\lambda^{k+1})\) in the next iteration satisfies

$$\begin{aligned} & \sum_{i=1}^{m}\bigl\| A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr\| _{\alpha H}^{2}+\bigl\| \lambda^{k+1}-\lambda^{*} \bigr\| _{H^{-1}}^{2} \\ &\quad\le\sum_{i=1}^{m} \bigl\| A_{i}x_{i}^{k}-A_{i}x_{i}^{*} \bigr\| _{\alpha H}^{2}+\bigl\| \lambda^{k}-\lambda^{*} \bigr\| _{H^{-1}}^{2}-\alpha\sum_{i=1}^{m} \biggl\| \sum_{j\neq i} A_{j}x_{j}^{k+1}+A_{i}x_{i}^{k}-b\biggr\| _{H}^{2} \\ &\qquad{}+\alpha\sum_{i=1}^{m} \bigl[( \alpha+m-2)\delta _{\max}\bigl(A_{i}^{T}HA_{i} \bigr)-2\mu_{f_{i}} \bigr]\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2}. \end{aligned}$$

Proof

Let \(u^{*}=(x_{1}^{*},\ldots,x_{m}^{*},\lambda^{*})\in\mathcal{U}\) be a solution of (1.1)-(1.2), which exists by assumption. Using (1.1) with \(u=u^{k+1}=(x_{1}^{k+1},\ldots,x_{m}^{k+1},\lambda^{k+1})\in\mathcal {U}\), we have

$$ F\bigl(u^{*}\bigr)^{T}\bigl(u^{k+1}-u^{*}\bigr) \ge0. $$
(4.1)

On the other hand, let \(x_{i}^{\prime}=x_{i}^{*} \) (\(i=1,2,\ldots,m\)) in each inequality of (3.1). It is easy to obtain

$$\begin{aligned} &\bigl(x_{i}^{*}-x_{i}^{k+1} \bigr)^{T} \biggl\{ f_{i}\bigl(x_{i}^{k+1} \bigr)-A_{i}^{T}\lambda ^{k}+A_{i}^{T}H \biggl(\sum_{j\neq i} A_{j}x_{j}^{k}+ A_{i}x_{i}^{k+1}-b \biggr) \biggr\} \ge0, \\ &\quad i=1,2,\ldots,m. \end{aligned}$$
(4.2)

From the summation of all the inequalities included in (4.1) and (4.2) and the equality \(\sum_{i=1}^{m} A_{i}x_{i}^{*}=b\), we have

$$\sum_{i=1}^{m}\bigl(x_{i}^{k+1}-x_{i}^{*} \bigr)^{T} \biggl\{ f_{i}\bigl(x_{i}^{*} \bigr)-f_{i}\bigl(x_{i}^{k+1}\bigr)+A_{i}^{T} \bigl(\lambda^{k}-\lambda ^{*}\bigr)-A_{i}^{T}H \biggl(\sum_{j\neq i} A_{j}x_{j}^{k}+ A_{i}x_{i}^{k+1}-b \biggr) \biggr\} \ge0. $$

Rearranging the above inequality and taking account of the strong monotonicity of all the functions \(f_{i}\) (\(i=1,2,\ldots,m\)) and \(\sum_{i=1}^{m} A_{i}x_{i}^{*}=b\), we obtain

$$\begin{aligned} & \bigl(\lambda^{k}-\lambda^{*}\bigr)^{T}\Biggl(\sum _{i=1}^{m} A_{i}x_{i}^{k+1}-b \Biggr) \\ &\quad\ge\sum_{i=1}^{m} \biggl[ \bigl(A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr)^{T}H \biggl(\sum_{j\neq i} A_{j}x_{j}^{k}+ A_{i}x_{i}^{k+1}-b \biggr) \biggr]+\sum_{i=1}^{m} \mu_{f_{i}}\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2}. \end{aligned}$$

Note that b can be replaced by \(\sum_{i=1}^{m} A_{i}x_{i}^{*}\). Then

$$\begin{aligned} & \bigl(\lambda^{k}-\lambda^{*}\bigr)^{T}\Biggl( \sum_{i=1}^{m} A_{i}x_{i}^{k+1}-b \Biggr) \\ &\quad\ge\sum_{i=1}^{m} \Biggl[ \bigl(A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr)^{T}H \Biggl(\sum_{j\neq i} A_{j}x_{j}^{k}+ A_{i}x_{i}^{k+1}- \sum_{j=1}^{m} A_{j}x_{j}^{*} \Biggr) \Biggr] \\ &\qquad{}+\sum_{i=1}^{m} \mu _{f_{i}}\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2} \\ &\quad=\sum_{i=1}^{m} \biggl[ \bigl(A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr)^{T}H \biggl(\sum_{j\neq i} \bigl(A_{j}x_{j}^{k}-A_{j}x_{j}^{*} \bigr)+ A_{i}x_{i}^{k+1}- A_{i}x_{i}^{*} \biggr) \biggr] \\ &\qquad{}+\sum_{i=1}^{m} \mu _{f_{i}}\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2} \\ &\quad=\sum_{i=1}^{m}\bigl\| A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*}\bigr\| ^{2}_{H}+ \sum_{i=1}^{m}\sum _{j\neq i}\bigl(A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr)^{T}H\bigl(A_{j}x_{j}^{k}-A_{j}x_{j}^{*} \bigr) \\ &\qquad{}+ \sum_{i=1}^{m} \mu_{f_{i}}\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2} \\ &\quad=\Biggl\| \sum_{i=1}^{m} A_{i}x_{i}^{k+1}-b\Biggr\| ^{2}_{H}+ \sum_{i=1}^{m}\sum _{j\neq i}\bigl(A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr)^{T}H\bigl(A_{j}x_{j}^{k}-A_{j}x_{j}^{k+1} \bigr) \\ &\qquad{}+ \sum_{i=1}^{m} \mu_{f_{i}}\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2}. \end{aligned}$$
(4.3)

Thus,

$$\begin{aligned} &\bigl\| \lambda^{k+1}-\lambda^{*}\bigr\| _{H^{-1}}^{2} \\ &\quad=\Biggl\| \lambda^{k}-\lambda^{*}-\alpha H\Biggl(\sum _{i=1}^{m} A_{i}x_{i}^{k+1}-b \Biggr)\Biggr\| _{H^{-1}}^{2} \\ &\quad=\bigl\| \lambda^{k}-\lambda^{*}\bigr\| _{H^{-1}}^{2}-2\alpha \bigl(\lambda^{k}-\lambda ^{*}\bigr)^{T}\Biggl(\sum _{i=1}^{m} A_{i}x_{i}^{k+1}-b \Biggr)+\alpha^{2}\Biggl\| \sum_{i=1}^{m} A_{i}x_{i}^{k+1}-b\Biggr\| _{H}^{2} \\ &\quad\le\bigl\| \lambda^{k}-\lambda^{*}\bigr\| _{H^{-1}}^{2}-2 \alpha\sum_{i=1}^{m} \mu_{f_{i}}\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2} +\alpha^{2}\Biggl\| \sum_{i=1}^{m} A_{i}x_{i}^{k+1}-b\Biggr\| _{H}^{2} \\ &\qquad{}-2\alpha \Biggl\{ \Biggl\| \sum_{i=1}^{m} A_{i}x_{i}^{k+1}-b\Biggr\| ^{2}_{H}+ \sum_{i=1}^{m}\sum _{j\neq i}\bigl(A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr)^{T}H\bigl(A_{j}x_{j}^{k}-A_{j}x_{j}^{k+1} \bigr) \Biggr\} . \end{aligned}$$
(4.4)

Now, we focus on the terms \(\|A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*}\|^{2}_{\alpha H} \) (\(i=1,2,\ldots,m\)).

Since

$$\begin{aligned} &\bigl(A_{i}x_{i}^{k}-A_{i}x_{i}^{*} \bigr)^{T}H\bigl(A_{i}x_{i}^{k}-A_{i}x_{i}^{k+1} \bigr) \\ &\quad=\bigl(A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr)^{T}H\bigl(A_{i}x_{i}^{k}-A_{i}x_{i}^{k+1} \bigr) +\bigl\| A_{i}x_{i}^{k}-A_{i}x_{i}^{k+1}\bigr\| ^{2}_{H}, \quad i=1,2,\ldots,m, \end{aligned}$$

it follows that

$$\begin{aligned} &\bigl\| A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr\| _{\alpha H}^{2} \\ &\quad=\bigl\| A_{i}x_{i}^{k}-A_{i}x_{i}^{*} \bigr\| _{\alpha H}^{2}+\bigl\| A_{i}x_{i}^{k+1}-A_{i}x_{i}^{k} \bigr\| _{\alpha H}^{2}-2\alpha \bigl\| A_{i}x_{i}^{k}-A_{i}x_{i}^{k+1}\bigr\| ^{2}_{H} \\ &\qquad{}-2\alpha \bigl(A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr)^{T}H\bigl(A_{i}x_{i}^{k}-A_{i}x_{i}^{k+1} \bigr), \quad i=1,2,\ldots,m. \end{aligned}$$
(4.5)

Adding (4.4) and the identities (4.5) for \(i=1,2,\ldots,m\), we get the following inequality:

$$\begin{aligned} &\sum_{i=1}^{m} \bigl\| A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr\| _{\alpha H}^{2} +\bigl\| \lambda^{k+1}-\lambda^{*} \bigr\| _{H^{-1}}^{2} \\ &\quad\leq\sum_{i=1}^{m} \bigl\| A_{i}x_{i}^{k}-A_{i}x_{i}^{*} \bigr\| _{\alpha H}^{2} +\bigl\| \lambda^{k}-\lambda^{*} \bigr\| _{H^{-1}}^{2} \\ &\qquad{}+\sum_{i=1}^{m} \bigl\| A_{i}x_{i}^{k+1}-A_{i}x_{i}^{k} \bigr\| _{\alpha H}^{2}+\alpha^{2}\Biggl\| \sum _{i=1}^{m} A_{i}x_{i}^{k+1}-b \Biggr\| _{H}^{2} \\ &\qquad{}-2\alpha \Biggl\{ \Biggl\| \sum_{i=1}^{m} A_{i}x_{i}^{k+1}-b\Biggr\| ^{2}_{H}+ \sum_{i=1}^{m}\sum _{j\neq i}\bigl(A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr)^{T}H\bigl(A_{j}x_{j}^{k}-A_{j}x_{j}^{k+1} \bigr) \\ &\qquad{}+\sum_{i=1}^{m} \bigl(A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr)^{T}H\bigl(A_{i}x_{i}^{k}-A_{i}x_{i}^{k+1} \bigr)+ \sum_{i=1}^{m}\bigl\| A_{i}x_{i}^{k}-A_{i}x_{i}^{k+1}\bigr\| ^{2}_{H} \Biggr\} \\ &\qquad{}-2\alpha\sum_{i=1}^{m} \mu _{f_{i}}\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2} \\ &\quad=\sum_{i=1}^{m}\bigl\| A_{i}x_{i}^{k}-A_{i}x_{i}^{*} \bigr\| _{\alpha H}^{2} +\bigl\| \lambda^{k}-\lambda^{*} \bigr\| _{H^{-1}}^{2}+\alpha^{2}\Biggl\| \sum _{i=1}^{m} A_{i}x_{i}^{k+1}-b \Biggr\| _{H}^{2} \\ &\qquad{}-2\alpha \Biggl\{ \Biggl\| \sum_{i=1}^{m} A_{i}x_{i}^{k+1}-b\Biggr\| _{H}^{2}+ \sum_{i=1}^{m}\sum _{j=1}^{m}\bigl(A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr)^{T}H\bigl(A_{j}x_{j}^{k}-A_{j}x_{j}^{k+1} \bigr) \Biggr\} \\ &\qquad{}-2\alpha\sum_{i=1}^{m} \mu _{f_{i}}\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2}- \sum_{i=1}^{m}\bigl\| A_{i}x_{i}^{k+1}-A_{i}x_{i}^{k} \bigr\| _{\alpha H}^{2} \\ &\quad=\sum_{i=1}^{m} \bigl\| A_{i}x_{i}^{k}-A_{i}x_{i}^{*} \bigr\| _{\alpha H}^{2} +\bigl\| \lambda^{k}-\lambda^{*} \bigr\| _{H^{-1}}^{2}+\alpha^{2}\Biggl\| \sum _{i=1}^{m} A_{i}x_{i}^{k+1}-b \Biggr\| _{H}^{2} \\ &\qquad{}-2\alpha\sum_{i=1}^{m} \mu _{f_{i}}\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2}- \alpha \Biggl\{ \sum_{i=1}^{m} \bigl\| A_{i}x_{i}^{k+1}-A_{i}x_{i}^{k} \bigr\| _{H}^{2} \\ &\qquad{}+2\Biggl\| \sum_{i=1}^{m} A_{i}x_{i}^{k+1}-b\Biggr\| _{H}^{2}+2 \sum_{j=1}^{m}\Biggl(\sum _{i=1}^{m} A_{i}x_{i}^{k+1}-b \Biggr)^{T}H\bigl(A_{j}x_{j}^{k}-A_{j}x_{j}^{k+1} \bigr) \Biggr\} \\ &\quad=\sum_{i=1}^{m}\bigl\| A_{i}x_{i}^{k}-A_{i}x_{i}^{*} \bigr\| _{\alpha H}^{2} +\bigl\| \lambda^{k}-\lambda^{*} \bigr\| _{H^{-1}}^{2}+\alpha(\alpha+m-2)\Biggl\| \sum _{i=1}^{m} A_{i}x_{i}^{k+1}-b \Biggr\| _{H}^{2} \\ &\qquad{}-\alpha\sum_{i=1}^{m}\Biggl\| \sum _{j\neq i}A_{j}x_{j}^{k+1}+A_{i}x_{i}^{k}-b\Biggr\| ^{2}_{H}-2 \alpha\sum_{i=1}^{m} \mu_{f_{i}}\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2}. \end{aligned}$$
(4.6)

Since

$$\begin{aligned} \Biggl\| \sum_{i=1}^{m} A_{i}x_{i}^{k+1}-b \Biggr\| _{H}^{2}&\le \sum_{i=1}^{m}\bigl\| A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*}\bigr\| _{H}^{2} \\ &\le\sum_{i=1}^{m}\delta_{\max} \bigl(A_{i}^{T}HA_{i}\bigr)\bigl\| x_{i}^{k+1}-x_{i}^{*} \bigr\| ^{2}, \end{aligned}$$
(4.7)

from (4.6), we get

$$\begin{aligned} & \sum_{i=1}^{m}\bigl\| A_{i}x_{i}^{k+1}-A_{i}x_{i}^{*} \bigr\| _{\alpha H}^{2}+\bigl\| \lambda^{k+1}-\lambda^{*} \bigr\| _{H^{-1}}^{2} \\ &\quad\le\sum_{i=1}^{m} \bigl\| A_{i}x_{i}^{k}-A_{i}x_{i}^{*} \bigr\| _{\alpha H}^{2}+\bigl\| \lambda^{k}-\lambda^{*} \bigr\| _{H^{-1}}^{2}-\alpha\sum_{i=1}^{m} \Biggl\| \sum_{j\neq i} A_{j}x_{j}^{k+1}+A_{i}x_{i}^{k}-b\Biggr\| _{H}^{2} \\ &\qquad{}+\alpha\sum_{i=1}^{m} \bigl[( \alpha+m-2)\delta _{\max}\bigl(A_{i}^{T}HA_{i} \bigr)-2\mu_{f_{i}} \bigr]\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2}. \end{aligned}$$

The proof is completed. □

Lemma 4.1 indicates that

$$\begin{aligned} \bigl\| u^{k+1}-u^{*}\bigr\| _{G}^{2} \le{}& \bigl\| u^{k}-u^{*}\bigr\| _{G}^{2}+\alpha\sum _{i=1}^{m} \bigl[(\alpha +m-2) \delta_{\max}\bigl(A_{i}^{T}HA_{i} \bigr)-2\mu_{f_{i}} \bigr]\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2} \\ &{} -\alpha\sum_{i=1}^{m} \biggl\| \sum _{j\neq i} A_{j}x_{j}^{k+1}+A_{i}x_{i}^{k}-b\biggr\| _{H}^{2}, \end{aligned}$$
(4.8)

where G is defined by (2.1).

Based on the above analysis, the global convergence of the proposed method is presented in the following theorem.

Theorem 4.2

Let m be the number of followers. Suppose that for each \(i\in\{ 1,2,\ldots,m\}\), \(f_{i}(x_{i})\) is continuous and strongly monotone on \(\mathcal {X}_{i}\subseteq\mathcal {R}^{n_{i}}\). Moreover, if

$$\mu_{f_{i}}>\frac{(m-2)\delta_{\max}(A_{i}^{T}HA_{i})}{2},\quad i=1,2,\ldots,m, $$

and

$$0< \alpha\le\min_{i} \biggl\{ \frac{2\mu_{f_{i}}}{\delta_{\max }(A_{i}^{T}HA_{i})}-(m-2) \biggr\} , $$

the sequence \(\{u^{k}\}\) generated by the proposed method converges to an optimal solution of the problem (1.1)-(1.2).

Proof

From Lemma 4.1 and (4.8), we have

$$\begin{aligned} & \bigl\| u^{k+1}-u^{*}\bigr\| _{G}^{2}-\bigl\| u^{k}-u^{*} \bigr\| _{G}^{2} \\ &\quad\le -\alpha\sum_{i=1}^{m} \bigl[2 \mu_{f_{i}}-(\alpha +m-2)\delta_{\max}\bigl(A_{i}^{T}HA_{i} \bigr) \bigr]\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2} \\ &\qquad{} -\alpha\sum_{i=1}^{m} \biggl\| \sum _{j\neq i} A_{j}x_{j}^{k+1}+A_{i}x_{i}^{k}-b\biggr\| _{H}^{2}. \end{aligned}$$
(4.9)

The two terms on the right-hand side of the above inequality are nonpositive under the conditions of the theorem. Thus,

$$\begin{aligned} \bigl\| u^{k+1}-u^{*}\bigr\| _{G}^{2} \le\bigl\| u^{k}-u^{*} \bigr\| _{G}^{2}\le\cdots\le\bigl\| u^{0}-u^{*} \bigr\| _{G}^{2}< +\infty, \end{aligned}$$

which means that the sequence \(\{u^{k}\}\) generated by the proposed method is bounded. Consequently,

$$\begin{aligned} &\alpha\sum_{k=1}^{+\infty}\sum _{i=1}^{m} \bigl[2\mu_{f_{i}}-(\alpha+m-2) \delta_{\max}\bigl(A_{i}^{T}HA_{i}\bigr) \bigr]\bigl\| x_{i}^{k+1}-x_{i}^{*}\bigr\| ^{2} \\ &\qquad{} +\alpha\sum_{k=1}^{+\infty}\sum _{i=1}^{m} \biggl\| \sum _{j\neq i} A_{j}x_{j}^{k+1}+A_{i}x_{i}^{k}-b\biggr\| _{H}^{2} < \sum_{k=1}^{+\infty}\bigl( \bigl\| u^{k}-u^{*}\bigr\| _{G}^{2}-\bigl\| u^{k+1}-u^{*}\bigr\| _{G}^{2}\bigr)< +\infty, \end{aligned}$$

which means that

$$ \begin{aligned} &\lim_{k\rightarrow+\infty}\bigl\| x_{i}^{k+1}-x_{i}^{*} \bigr\| =0, \quad i=1,2,\ldots,m, \\ &\lim_{k\rightarrow+\infty}\biggl\| \sum_{j\neq i} A_{j}x_{j}^{k+1}+A_{i}x_{i}^{k}-b\biggr\| _{H}=0, \quad i=1,2,\ldots,m. \end{aligned} $$
(4.10)

Thus,

$$\begin{aligned} \sum_{i=1}^{m} A_{i}x_{i}^{*}=b. \end{aligned}$$
(4.11)

On the other hand, for each \(i\in\{1,2,\ldots,m\}\), \(x_{i}^{k+1}\) satisfies the following inequality:

$$ \bigl(x_{i}'-x_{i}^{k+1} \bigr)^{T} \biggl\{ f_{i}\bigl(x_{i}^{k+1} \bigr)-A_{i}^{T}\lambda ^{k}+A_{i}^{T}H \biggl(\sum_{j\neq i}A_{j}x_{j}^{k}+A_{i}x_{i}^{k+1}-b \biggr) \biggr\} \ge0,\quad \forall x_{i}'\in \mathcal {X}_{i}. $$
(4.12)

Moreover, the sequence \(\{\lambda^{k}\}\) has cluster points, since it is bounded as a consequence of the boundedness of \(\{u^{k}\}\). Let \(\lambda^{*}\) be one of the cluster points, that is, the limit of a convergent subsequence \(\{\lambda^{k_{j}}\}\). Taking the limit in (4.12) along this subsequence, it follows that

$$ \bigl(x'_{i}-x_{i}^{*} \bigr)^{T} \bigl\{ f_{i}\bigl(x_{i}^{*} \bigr)-A_{i}^{T}\lambda^{*} \bigr\} \ge0, \quad\forall x'_{i}\in\mathcal {X}_{i}, i=1,2,\ldots,m. $$
(4.13)

From (4.13) and (4.11), we can assert that the sequence generated by the proposed method is globally convergent. This completes the proof. □
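To make the conditions of Theorem 4.2 concrete, the following small Python check (our own illustration with hypothetical data) computes \(\delta_{\max}(A_{i}^{T}HA_{i})\) for each block, verifies the strong monotonicity requirement, and returns the admissible upper bound on α.

```python
import numpy as np

def alpha_upper_bound(A_blocks, H, mu):
    """Step-size bound from Theorem 4.2: each modulus must satisfy
    mu_i > (m-2)*delta_max(A_i^T H A_i)/2, and then any
    alpha in (0, min_i {2*mu_i/delta_max(A_i^T H A_i) - (m-2)}] is admissible."""
    m = len(A_blocks)
    deltas = [np.linalg.eigvalsh(A.T @ H @ A).max() for A in A_blocks]
    assert all(mu_i > (m - 2) * d / 2 for mu_i, d in zip(mu, deltas)), \
        "strong monotonicity moduli too small for Theorem 4.2"
    return min(2 * mu_i / d - (m - 2) for mu_i, d in zip(mu, deltas))

# Hypothetical data: m = 3 followers, H = 0.8*I, identity coupling matrices.
H = 0.8 * np.eye(4)
A_blocks = [np.eye(4), np.eye(4), np.eye(4)]
mu = [2.0, 1.5, 1.0]
print(alpha_upper_bound(A_blocks, H, mu))   # approximately 1.5 for these data
```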

5 Numerical experiments

In this section, we present numerical results from implementing our proposed algorithm on a game and a traffic problem, which demonstrate the applicability and performance of the algorithm. All tests are performed in MATLAB on a PC with an Intel Core 2 Duo 3.10GHz CPU and 4GB of RAM. The section is organized as follows: First, we compute strategies for the players in a game to verify the applicability of our algorithm. Second, we investigate the performance of the proposed algorithm by comparing it with an existing algorithm on a generic test problem.

Example 5.1

A simple game with one leader and three followers.

We begin the computational study by solving a one-leader-three-follower game in which each follower i chooses a pure strategy \(s_{i}\) (\(i=1,2,3\)). The followers’ problems are given by the following programs:

$$ \textstyle\begin{array}{@{}l@{\quad}l@{\quad}l@{\quad}l@{\quad}l@{\quad}l@{}} \min&-s_{1} &\min&(s_{2}-0.5)^{2} &\min&(s_{3}-1.5)^{2}\\ \mbox{s.t.}& s_{1}+s_{2}+s_{3}=2 & \mbox{s.t.} &s_{1}+s_{2}+s_{3}=2& \mbox{s.t.} &s_{1}+s_{2}+s_{3}=2\\ &s_{1}\geq0 & &s_{2}\geq0& &s_{3}\geq0 \end{array} $$

The game is a revised version of a game considered in [16–18]. We solve it with the proposed algorithm and obtain a solution that matches the analytical optimal solution \(s_{1}=1\), \(s_{2}=0\), \(s_{3}=1\). Furthermore, we investigate the effect of the initial point and the optimality tolerance on the performance of our algorithm. In the tests, the initial points are \(u_{0}=(0,0,0,0)\), \(u_{0}=(1,1,1,1)\), or a random point in \((0,1)^{4}\), and the optimality tolerances are \(\epsilon=10^{-4}\), \(\epsilon=10^{-5}\), or \(\epsilon=10^{-6}\). In our experiments, the parameter setting is \(\alpha =0.8\) and \(H=0.9I\). The associated numerical results are recorded in Table 1, where Iter. denotes the number of iterations and CPU denotes the CPU time. The numerical results in Table 1 confirm the validity and efficiency of our method. Moreover, we observe that the proposed algorithm is robust with respect to the initial point. A sketch of how the subproblems specialize for this example is given after Table 1.

Table 1 Computational results for Example 5.1
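To indicate how the subproblems (3.1) specialize in Example 5.1, note that each feasible set is \([0,\infty)\), each \(A_{i}\) is the scalar 1, and, under the natural identification of \(f_{i}\) with the gradient of follower i’s objective, \(f_{1}(s_{1})=-1\), \(f_{2}(s_{2})=2(s_{2}-0.5)\), and \(f_{3}(s_{3})=2(s_{3}-1.5)\). Each subproblem then reduces to projecting the root of a scalar affine equation onto \([0,\infty)\); the Python sketch below (our own reading of the example, not the authors’ code) writes out these closed forms.

```python
# Closed-form solutions of the three scalar subproblems (3.1) for Example 5.1,
# assuming H = h*I (scalar h > 0); c denotes the sum of the other two
# followers' strategies from the previous iteration, and the coupling
# constraint is s1 + s2 + s3 = 2.

def follower1(lam, c, h):
    # root of -1 - lam + h*(c + s1 - 2) = 0, projected onto s1 >= 0
    return max(0.0, 2.0 - c + (1.0 + lam) / h)

def follower2(lam, c, h):
    # root of 2*(s2 - 0.5) - lam + h*(c + s2 - 2) = 0, projected onto s2 >= 0
    return max(0.0, (1.0 + lam + h * (2.0 - c)) / (2.0 + h))

def follower3(lam, c, h):
    # root of 2*(s3 - 1.5) - lam + h*(c + s3 - 2) = 0, projected onto s3 >= 0
    return max(0.0, (3.0 + lam + h * (2.0 - c)) / (2.0 + h))
```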

To further showcase the performance of the proposed algorithm, we use it to solve a generic test problem, a traffic equilibrium problem with fixed demand. Now, we introduce the problem briefly.

Example 5.2

A traffic equilibrium problem with fixed demand constraints.

The problem is often selected as a test case; see, for example, [13, 19, 20]. Its network, shown in Figure 1, has 25 nodes, 37 links, and 55 paths. Other parameters and notations are summarized in Table 2.

Figure 1 The network for Example 5.2.

Table 2 Parameter setting and notations for the network
Table 3 The link cost function \(\pmb{t_{a}(f)}\) for Example 5.2

We define the variable \(x_{p}\) as the traffic flow on path p. Then the arc flow vector f, the demand \(d_{w}\) of origin-destination pair w, and the demand vector d are given by the following formulas:

$$f=A^{T}x, \qquad d_{w}=\sum_{p\in P_{w}}x_{p}, \quad \mbox{and} \quad d=Bx. $$

Moreover, based on the link travel cost vector \(t(f)=\{ t_{a}, a \in L\}\), whose components are given in Table 3, the path travel cost vector θ can be formulated as follows:

$$\theta=At(f)=At\bigl(A^{T}x\bigr). $$

Hence, the problem is converted to a variational inequality as

$$\bigl(x-x^{*}\bigr)^{T}\theta(x)\geq0, \quad \forall x\in S, $$

where \(S=\{x\in\mathcal{R}^{55}|Bx=d,x\geq0\}\).
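The composition \(\theta(x)=At(A^{T}x)\) is the only problem-specific mapping the algorithm needs. A minimal Python sketch of its evaluation is given below (our own illustration; the incidence matrix A and the link cost function t would be built from Figure 1 and Table 3, which are only summarized here).

```python
import numpy as np

def path_cost(x, A, link_cost):
    """theta(x) = A t(A^T x): map path flows to path travel costs.
    x: path flows (R^55), A: path-link incidence matrix (R^{55x37}),
    link_cost: function returning the link cost vector t(f) for link flows f."""
    f = A.T @ x            # link (arc) flows
    return A @ link_cost(f)
```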

We implement our algorithm for this problem. First, the decision variable vector x is partitioned into three parts,

$$x= \begin{pmatrix} x_{1} \\ x_{2} \\ x_{3} \end{pmatrix}, $$

where \(x_{1}\in\mathcal{R}^{25}\), \(x_{2}\in\mathcal{R}^{15}\), and \(x_{3}\in\mathcal{R}^{15}\). Subsequently, the matrices A and B and the vector θ are partitioned, respectively, as follows:

$$A= \begin{pmatrix} A_{1} \\ A_{2} \\ A_{3} \end{pmatrix}, \qquad B= (B_{1}\quad B_{2}\quad B_{3}),\quad \mbox{and}\quad \theta= \begin{pmatrix} \theta_{1} \\ \theta_{2} \\ \theta_{3} \end{pmatrix}, $$

where \(A_{1}\in\mathcal{R}^{25\times37}\), \(A_{2}\in\mathcal {R}^{15\times37}\), \(A_{3}\in\mathcal{R}^{15\times37}\), \(B_{1}\in\mathcal {R}^{6\times25}\), \(B_{2}\in\mathcal{R}^{6\times15}\), \(B_{3}\in\mathcal {R}^{6\times15}\), and θ is partitioned in the same way as x.
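For completeness, the partition of x, A, B, and θ into the three blocks with the dimensions just listed can be carried out by simple slicing; the following sketch is our own illustration.

```python
def partition_blocks(x, A, B, theta):
    """Split x (R^55), A (R^{55x37}), B (R^{6x55}), and theta (R^55)
    into blocks of sizes 25, 15, and 15, as described above."""
    cuts = [(0, 25), (25, 40), (40, 55)]
    x_blocks = [x[i:j] for i, j in cuts]
    A_blocks = [A[i:j, :] for i, j in cuts]
    B_blocks = [B[:, i:j] for i, j in cuts]
    theta_blocks = [theta[i:j] for i, j in cuts]
    return x_blocks, A_blocks, B_blocks, theta_blocks
```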

Based on the above partitions, the resulting traffic problem is as follows:

$$\begin{aligned}& \bigl(x_{1}-x_{1}^{*}\bigr)^{T} \theta(x_{1})\geq0, \quad \forall x_{1}\in S, \end{aligned}$$
(5.1)
$$\begin{aligned}& \bigl(x_{2}-x_{2}^{*}\bigr)^{T} \theta(x_{2})\geq0, \quad \forall x_{2}\in S, \end{aligned}$$
(5.2)

and

$$ \bigl(x_{3}-x_{3}^{*}\bigr)^{T} \theta(x_{3})\geq0, \quad \forall x_{3}\in S, $$
(5.3)

where \(S=\{(x_{1},x_{2}, x_{3})|B_{1}x_{1}+B_{2}x_{2}+B_{3}x_{3}=d, x_{1}\geq0, x_{2}\geq0, x_{3}\geq0 \}\).

To show the efficiency and effectiveness of our algorithm, we compare the proposed algorithm with the parallel splitting augmented Lagrangian method of [11], since both methods apply to a one-leader-three-follower game. The performance of the two algorithms with different initial points (\(u^{0}=\operatorname{rand}(61,1)\), \(u^{0}=50*\operatorname{ones}(61,1)\), and \(u^{0}=100*\operatorname{ones}(61,1)\)) and optimality tolerances (\(\epsilon=10^{-4}\), \(\epsilon=10^{-5}\), and \(\epsilon=10^{-6}\)) on the traffic equilibrium problem with fixed demand constraints (Example 5.2) is reported in Table 4, where Alg denotes the algorithm and '-' denotes failure. In these tests, the common parameters of the two methods are the same, that is, \(H=\beta I\) with \(\beta=0.8\) and I the identity matrix, and the maximum number of iterations is 5,000.

Table 4 Computational results for Example 5.2

The numerical results in Table 4 demonstrate the advantage of our algorithm over the algorithm in [11]: for the random initial point, both the number of iterations and the CPU time of our algorithm are smaller than those of the algorithm in [11], and for the other initial points our algorithm solves the problem while the algorithm in [11] fails. These results again verify the efficiency and effectiveness of the proposed algorithm.

6 Conclusion

The system (1.1)-(1.2) can be regarded as a mathematical formulation of a one-leader-m-follower Stackelberg game in which the leader continually improves his strategy by choosing the value of λ from the strategy set \(\mathcal{R}^{l}\), while the ith follower determines his plan \(x_{i}\) from the set \(\mathcal{X}_{i}\) based on the value of λ. Based on this characteristic, we design an augmented Lagrangian-based parallel splitting method to solve the system, in which each player controls and improves only his own decision. We establish the global convergence of the method under suitable conditions. Finally, a computational study demonstrates the validity and efficiency of the algorithm.

To broaden the applicability of the proposed algorithm, we point out two research directions suggested by its limitations. First, the convergence of the method is proved under the condition that each player’s utility function is strongly monotone; we plan to relax this condition so that our method can be applied to more practical problems. Second, our method only handles problems with a separable structure, which is reasonable in many settings but not always the case, and we intend to extend it to more general problems.

References

  1. Eckstein, J, Bertsekas, DP: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293-318 (1992)


  2. Fukushima, M: Application of the alternating direction method of multipliers to separable convex programming problems. Comput. Optim. Appl. 1, 93-111 (1992)


  3. Chen, G, Teboulle, M: A proximal-based decomposition method for convex minimization problems. Math. Program. 64, 81-101 (1994)


  4. Tseng, P: Alternating projection-proximal methods for convex programming and variational inequalities. SIAM J. Optim. 7, 951-965 (1997)


  5. Han, D, He, H, Yang, H, Yuan, X: A customized Douglas-Rachford splitting algorithm for separable convex minimization with linear constraints. Numer. Math. 127, 167-200 (2014)


  6. Hestenes, MR: Multiplier and gradient methods. J. Optim. Theory Appl. 4, 303-320 (1969)


  7. Gabay, D, Mercier, B: A dual algorithm for the solution of nonlinear variational problems via finite element approximations. Comput. Math. Appl. 2, 17-40 (1976)


  8. Esser, E: Applications of Lagrangian-based alternating direction methods and connections to split Bregman. CAM Report 09-31, UCLA (2009)

  9. Lin, Z, Chen, M, Ma, Y: The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. UILU-ENG 09-2215, UIUC (2009)

  10. Yang, JF, Zhang, Y: Alternating direction algorithms for \(\ell_{1}\)-problems in compressive sensing. SIAM J. Sci. Comput. 33, 250-278 (2011)


  11. He, BS: Parallel splitting augmented Lagrangian methods for monotone structured variational inequalities. Comput. Math. Appl. 42, 195-212 (2009)


  12. Tao, M: Some parallel splitting methods for separable convex programming with \(O (1/t)\) convergence rate. Pac. J. Optim. 10, 359-384 (2014)


  13. Wang, K, Xu, L, Han, D: A new parallel splitting descent method for structured variational inequalities. J. Ind. Manag. Optim. 10, 461-476 (2014)


  14. Jiang, ZK, Yuan, XM: New parallel descent-like method for solving a class of variational inequalities. J. Optim. Theory Appl. 145, 311-323 (2010)


  15. Han, D, Yuan, X, Zhang, W: An augmented Lagrangian based parallel splitting method for separable convex minimization with applications to image processing. Math. Comput. 83, 2263-2291 (2014)


  16. Facchinei, F, Fischer, A, Piccialli, V: Generalized Nash equilibrium problems and Newton methods. Math. Program. 117, 163-194 (2009)


  17. Facchinei, F, Kanzow, C: Penalty methods for the solution of generalized Nash equilibrium problems. SIAM J. Optim. 20, 2228-2253 (2010)


  18. Han, D, Zhang, H, Qian, G, Xu, L: An improved two-step method for solving generalized Nash equilibrium problems. Eur. J. Oper. Res. 216, 613-623 (2012)


  19. Nagurney, A, Zhang, D: Projected Dynamical Systems and Variational Inequalities with Applications. Kluwer Academic, Boston (1996)


  20. He, BS, Xu, Y, Yuan, XM: A logarithmic-quadratic proximal prediction-correction method for structured monotone variational inequalities. Comput. Optim. Appl. 35, 19-46 (2006)



Acknowledgements

This work is supported by grants from the NSF of Shanxi Province (2014011006-1).

Author information


Corresponding author

Correspondence to Xihong Yan.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Yan, X., Wen, R. A new parallel splitting augmented Lagrangian-based method for a Stackelberg game. J Inequal Appl 2016, 108 (2016). https://doi.org/10.1186/s13660-016-1047-7
