
Lebesgue constants for Chebyshev thresholding greedy algorithms


We investigate the efficiency of the Chebyshev Thresholding Greedy Algorithm (CTGA) for n-term approximation with respect to general bases in a Banach space. We show that the convergence property of the CTGA is better than that of the TGA for non-quasi-greedy bases. Then we determine the exact rate of the Lebesgue constants \(L_{n}^{\mathrm{ch}}\) for two examples of such bases: the trigonometric system and the summing basis. We also establish upper estimates for \(L_{n}^{\mathrm{ch}}\) with respect to general bases in terms of the quasi-greedy parameter, the democracy parameter and the A-property parameter. These estimates do not involve an unconditionality parameter and are therefore better than those for the TGA. In particular, a faster convergence rate is obtained for conditional quasi-greedy bases.

1 Introduction

Nonlinear n-term approximations with respect to biorthogonal systems such as the trigonometric system and wavelet bases are frequently used in image and signal processing, PDE solvers and statistical learning (see [1]). The fundamental question of nonlinear approximation is how to construct good algorithms that realize the best n-term approximation. It turns out that the Thresholding Greedy Algorithm (TGA), proposed by Konyagin and Temlyakov in [2], is in some sense a suitable method for nonlinear n-term approximation. In this paper, we investigate the efficiency of the Chebyshev Thresholding Greedy Algorithm (CTGA), which is an enhancement of the TGA.

Throughout this paper, X is an infinite-dimensional separable Banach space (over \(\mathbb{K}=\mathbb{R}\) or \(\mathbb{C}\)) with a norm \(\Vert \cdot \Vert =\Vert \cdot \Vert _{X}\) and its dual space is denoted by \(X^{*}\). A family \(\{e_{i}, e_{i}^{*}\}_{i=1}^{\infty}\subset X \times X^{*}\) is called a bounded biorthogonal system if

  1.

    \(X=\overline{\operatorname{span}} [ e_{i}: i \in\mathbb{N} ]\).

  2.

    \(e_{i}^{*}(e_{j})=1\) if \(i=j\), \(e_{i}^{*}(e_{j})=0\) if \(i\neq j\).

  3.

    \(0< \inf_{i}\min\{ \Vert e_{i}\Vert , \Vert e_{i}^{*}\Vert \} \leq \sup_{i}\max\{ \Vert e_{i}\Vert , \Vert e_{i}^{*}\Vert \} <\infty\).

For brevity, we refer to \(( e_{i})\) as a basis, and denote it by Ψ. It is known from [1] that, for any \(c>1\), any separable Banach space has a bounded biorthogonal system (a Markushevich basis) with \(1\leq \Vert e_{i}\Vert \), \(\Vert e_{i}^{*}\Vert \leq c\), and \(X^{*}=\overline{\operatorname{span}}^{w^{*}}[e_{i}^{*}: i\in\mathbb{N}]\).

For any \(x \in X\), we have the formal expansion

$$x\sim \sum_{i=1}^{\infty}e_{i}^{*}(x)e_{i}. $$

It is easy to see that \(\lim_{i\rightarrow\infty} e_{i}^{*}(x)=0\), and \(\sup_{i}\vert e_{i}^{*}(x)\vert >0\), unless \(x=0\). For each \(n\in\mathbb{N}\), let

$$\Sigma_{n}:=\Sigma_{n}(\Psi):=\biggl\{ \sum _{i\in A}a_{i}e_{i}: A\subset \mathbb{N}, \#(A)= n, a_{i} \in\mathbb{K}\biggr\} , $$

where \(\#(A)\) denotes the cardinality of the set A. We consider the problem of approximating \(x \in X\) by the elements of \(\Sigma_{n}\) and define the best error of such an approximation as

$$\sigma_{n}(x):=\sigma_{n}(x, \Psi)= \inf _{y \in\Sigma_{n}}\Vert x-y\Vert . $$

For any finite set \(A \subset\mathbb{N}\), define the projection operator \(P_{A}x=\sum_{i \in A}e_{i}^{*}(x)e_{i}\). The n-term error of the expansion approximation with respect to Ψ is

$$\widetilde{\sigma}_{n}(x):=\widetilde{\sigma}_{n}(x, \Psi)= \inf_{\#(A)= n}\Vert x-P_{A}x\Vert . $$

It is clear that \(\widetilde{\sigma}_{n}(x) \geq {\sigma}_{n}(x)\).

A finite set Γ with \(\min_{i\in\Gamma} \vert e_{i}^{*}(x)\vert \geq \max_{i\in{\Gamma^{c}}}\vert e_{i}^{*}(x)\vert \) is called a greedy set of order n for x if \(\#(\Gamma)=n\), and we write \(\Gamma\in\mathcal{G}(x, n)\). A permutation \(\rho: \mathbb{N}\rightarrow\mathbb{N}\) is called a greedy ordering if \(\vert e^{*}_{\rho(j)}(x)\vert \geq \vert e^{*}_{\rho(i)}(x)\vert \) for \(j\leq i\). In general, a greedy ordering is not unique. In [2], Konyagin and Temlyakov defined the thresholding greedy operator of x of order n with respect to Ψ by

$$G_{n}(x):=G_{n}(x, \Psi):=\sum_{i=1}^{n}e_{\rho(i)}^{*}(x)e_{\rho(i)}. $$

We write \(\mathcal{G}_{n}(x, \Psi)\) for the set of all thresholding greedy operators of order n with respect to Ψ for x; when there is no confusion about x and Ψ, we denote it by \(\mathcal{G}_{n}\). We note that \(G_{n}\in\mathcal{G}_{n}\) is neither continuous nor bounded, but it is homogeneous, that is, \(G_{n}(\lambda x)=\lambda G_{n}(x)\) for \(\lambda\in\mathbb{K}\). We can therefore define the norm of \(G_{n}\) as \(\Vert G_{n}\Vert :=\sup_{\Vert x\Vert \leq 1}\Vert G_{n}(x)\Vert \), and we have \(\Vert G_{n}(x)\Vert \leq \Vert G_{n}\Vert \Vert x\Vert \) for any \(x\in X\).
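In concrete terms, for a finitely supported coefficient sequence \(G_{n}\) keeps the n coefficients that are largest in modulus and discards the rest. A minimal Python sketch of this (the function names are ours, not from the paper):

```python
def greedy_ordering(coeffs):
    """A greedy ordering: indices sorted by decreasing |coefficient|.
    Ties are broken by index, so the ordering need not be unique,
    as remarked in the text."""
    return sorted(range(len(coeffs)), key=lambda i: (-abs(coeffs[i]), i))

def tga(coeffs, n):
    """Thresholding greedy operator G_n: keep the n coefficients that are
    largest in absolute value, zero out the rest."""
    keep = set(greedy_ordering(coeffs)[:n])
    return [c if i in keep else 0 for i, c in enumerate(coeffs)]

print(tga([0.1, -3.0, 0.5, 2.0, -0.2], 2))  # [0, -3.0, 0, 2.0, 0]
```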

We now recall an important class of bases, the quasi-greedy bases.

Definition 1.1


We call a basis Ψ of a Banach space X a quasi-greedy basis if, for any \(G_{n} \in\mathcal{G}_{n}\), the inequality

$$\Vert G_{n}\Vert \leq C $$

holds with a constant C independent of n.

Subsequently, Wojtaszczyk in [3] proved that a basis Ψ is quasi-greedy if and only if for any \(x\in X\),

$$\lim_{n\rightarrow+\infty}G_{n}(x, \Psi)=x. $$

Examples of quasi-greedy bases can be found in the literature [4–9]. Of course, bases need not be quasi-greedy, and for a non-quasi-greedy basis the TGA may fail to converge for certain vectors \(x \in X\). For example, Temlyakov in [10] showed that the trigonometric system in Lebesgue spaces is not quasi-greedy. Now we recall this result.

Let d be a natural number, \(\mathbb{T}^{d}\) the d-dimensional torus. For \(1\leq p<\infty\), let \(L_{p}(\mathbb{T}^{d})\) denote the space of all measurable functions for which

$$\Vert f\Vert _{p}:= \biggl(\frac{1}{(2\pi)^{d}} \int_{\mathbb{T}^{d}} \bigl\vert f(x)\bigr\vert ^{p}\,dx \biggr)^{\frac{1}{p}}< \infty. $$

For convenience we also set \(L_{\infty}(\mathbb{T}^{d})=C(\mathbb{T}^{d})\), the space of continuous functions with the uniform norm

$$\Vert f\Vert _{\infty}:=\max_{x\in\mathbb{T}^{d}}\bigl\vert f(x) \bigr\vert . $$

Let \(\mathcal{T}^{d}\) denote the trigonometric system \(\{e^{i(k,x)}: k\in\mathbb{Z}^{d}\}\) of complex exponentials in \(L_{p}(\mathbb{T}^{d})\), \(1\leq p\leq\infty\). In [10] Temlyakov proved that there is a positive absolute constant C such that for each n and \(1\leq p \leq\infty\) there exists a nonzero function \(f\in L_{p}(\mathbb{T}^{d})\) for which

$$\bigl\Vert G_{n}\bigl(f, \mathcal{T}^{d}\bigr)\bigr\Vert _{p} \geq Cn^{\vert \frac {1}{2}-\frac{1}{p}\vert }\Vert f\Vert _{p}. $$

So \(\mathcal{T}^{d}\) is not a quasi-greedy basis of \(L_{p}(\mathbb{T}^{d})\) when \(p \neq2\).

Now we recall the definition of CTGA. An n-term Chebyshev thresholding greedy approximant of x with respect to Ψ is defined as \(CG_{n}(x):=CG_{n}(x, \Psi) \in \operatorname{span}[e_{\rho(i)}: 1\leq i \leq n]\) such that

$$\bigl\Vert x-CG_{n}(x)\bigr\Vert =\min\Biggl\{ \Biggl\Vert x-\sum _{i=1}^{n}a_{i}e_{\rho (i)} \Biggr\Vert : (a_{i})_{i=1}^{n}\in \mathbb{K}^{n}\Biggr\} . $$
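The difference between the two algorithms is that \(G_{n}(x)\) keeps the raw coefficients \(e_{\rho(i)}^{*}(x)\), while \(CG_{n}(x)\) re-optimizes all coefficients on the greedy support. A toy illustration of our own (not from the paper) in the Euclidean plane, where the Chebyshev approximant is simply the orthogonal projection onto the span of the greedy vectors:

```python
import math

# A non-orthogonal biorthogonal system in the Euclidean plane (toy example):
# e_i*(e_j) = 1 if i == j else 0.
e1, e2 = (1.0, 0.0), (1.0, 1.0)
def e1_star(x): return x[0] - x[1]
def e2_star(x): return x[1]

def norm(v):
    return math.hypot(v[0], v[1])

# Check biorthogonality of the toy system.
assert e1_star(e1) == 1 and e1_star(e2) == 0 and e2_star(e2) == 1

x = (3.0, 1.0)                      # e1*(x) = 2, e2*(x) = 1, greedy set {1}

# TGA of order 1 keeps the raw coefficient: G_1(x) = e1*(x) * e1.
g1 = (e1_star(x) * e1[0], e1_star(x) * e1[1])
err_tga = norm((x[0] - g1[0], x[1] - g1[1]))       # sqrt(2)

# CTGA of order 1 picks the best multiple of e1, i.e. the orthogonal
# projection of x onto span{e1} in the Euclidean norm.
c = (x[0] * e1[0] + x[1] * e1[1]) / (e1[0] ** 2 + e1[1] ** 2)
cg1 = (c * e1[0], c * e1[1])
err_ctga = norm((x[0] - cg1[0], x[1] - cg1[1]))    # 1.0

assert err_ctga <= err_tga
```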

We have the following result on the convergence of the CTGA for general bases.

Theorem 1.1

Let \(\Psi=(e_{i})\) be a basis for a Banach space X. Then for each \(x\in X\),

$$\lim_{n\rightarrow+\infty}CG_{n}(x, \Psi)=x. $$

From Theorem 1.1, we know that the convergence property of the CTGA is better than that of the TGA for non-quasi-greedy bases. As a consequence, we have the following result, which was first pointed out in [11].

Corollary 1.2

Let \(1\leq p\leq\infty\) and let \(\mathcal{T}^{d}\) be the trigonometric system in \(L_{p}(\mathbb{T}^{d})\). Then, for any \(f\in L_{p}(\mathbb{T}^{d})\),

$$\lim_{n\rightarrow+\infty}\bigl\Vert f-CG_{n}\bigl(f, \mathcal{T}^{d}\bigr)\bigr\Vert _{p}=0. $$

In view of Theorem 1.1, it is natural to study the convergence rate of the CTGA for every \(x\in X\). To this end, we will estimate the Chebyshevian Lebesgue constant and its relative:

$$L_{n}^{\mathrm{ch}}:=L_{n}^{\mathrm{ch}}(\Psi):=\sup _{x \in X, \sigma_{n}(x) \neq 0}\frac{\Vert x-CG_{n}(x)\Vert }{\sigma_{n}(x)} $$


and

$$\widetilde{L}_{n}^{\mathrm{ch}}:=\widetilde{L}_{n}^{\mathrm{ch}}( \Psi):=\sup_{x \in X, \widetilde{\sigma}_{n}(x) \neq 0}\frac{\Vert x-CG_{n}(x)\Vert }{\widetilde{\sigma}_{n}(x)}. $$

It is obvious that \(L_{n}^{\mathrm{ch}}\geq\widetilde{L}_{n}^{\mathrm{ch}}\).

In [6] and [12], the authors obtained the exact orders of \(L_{n}^{\mathrm{ch}}(\Psi)\) and \(\widetilde{L}_{n}^{\mathrm{ch}}(\Psi)\) with respect to quasi-greedy bases. To state their results, we recall some notions. For \(n\geq1\) and \(A,B \subset\mathbb{N}\), the democracy parameter \(\mu_{n}\) and the disjoint democracy parameter \(\mu_{n}^{d}\) are defined as

$$\mu_{n}=\sup_{\#(A)=\#(B)\leq n}\frac{\Vert {\mathbf{1}}_{A}\Vert }{\Vert {\mathbf{1}}_{B}\Vert }\quad \mbox{and}\quad \mu_{n}^{d}=\sup_{\#(A)=\#(B)\leq n, A\cap B=\emptyset}\frac{\Vert {\mathbf{1}}_{A}\Vert }{\Vert {\mathbf{1}}_{B}\Vert }, $$

where \({\mathbf{1}}_{C}=\sum_{i \in C}e_{i}\) for any finite set C. Clearly, \(\mu_{n}^{d}\leq\mu_{n}\). For a quasi-greedy basis, we define the quasi-greedy constant \(\mathfrak{K}\) to be the least constant such that

$$\bigl\Vert G_{n}(x, \Psi)\bigr\Vert \leq\mathfrak{K}\Vert x \Vert \quad \mbox{and}\quad \bigl\Vert x-G_{n}(x, \Psi)\bigr\Vert \leq \mathfrak{K}\Vert x\Vert \quad \text{for all } x\in X, n\geq1. $$
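As a simple illustration of the democracy parameter (a standard computation, not taken from this paper): for the canonical basis of \(\ell_{p}\), \(1\leq p<\infty\), one has

```latex
\Vert {\mathbf{1}}_{A}\Vert _{p}
   =\Bigl(\sum_{i\in A}1\Bigr)^{1/p}
   =\bigl(\#(A)\bigr)^{1/p},
\qquad\text{hence}\qquad
\mu_{n}=\sup_{\#(A)=\#(B)\leq n}
   \frac{(\#(A))^{1/p}}{(\#(B))^{1/p}}=1
   \quad\text{for all } n\geq1,
```

so \(\mu_{n}=\mu_{n}^{d}=1\): the canonical \(\ell_{p}\) bases are democratic with constant 1.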

Theorem 1.3

If Ψ is a \(\mathfrak{K}\)-quasi-greedy basis in a Banach space (over \(\mathbb{K}=\mathbb{R}\) or \(\mathbb{C}\)), then, for all \(n\geq1\),

$$\frac{\mu_{n}^{d}}{2\mathfrak{K}} \leq\widetilde{L}_{n}^{\mathrm{ch}}(\Psi) \leq L_{n}^{\mathrm{ch}}(\Psi) \leq20\mathfrak{K}^{3} \mu_{n}^{d} \quad \textit{for } \mathbb{K}=\mathbb{R} $$


and

$$\frac{\mu_{n}^{d}}{2\mathfrak{K}} \leq\widetilde{L}_{n}^{\mathrm{ch}}(\Psi) \leq L_{n}^{\mathrm{ch}}(\Psi) \leq100\mathfrak{K}^{3} \mu_{n}^{d} \quad \textit{for } \mathbb{K}=\mathbb{C}. $$

To compare Theorem 1.3 with the corresponding results of TGA, for a basis Ψ of a Banach space X, we recall the definitions of Lebesgue constants \(L_{n}\) and \(\widetilde{L}_{n}\):

$$L_{n}:=L_{n}(\Psi):=\sup_{x \in X, \sigma_{n}(x)\neq 0} \frac{\Vert x-G_{n}(x)\Vert }{\sigma_{n}(x)} $$


and

$$\widetilde{L}_{n}:=\widetilde{L}_{n}(\Psi):=\sup _{x \in X, \widetilde{\sigma}_{n}(x)\neq 0}\frac{\Vert x-G_{n}(x)\Vert }{\widetilde{\sigma}_{n}(x)}. $$

In what follows, for any two nonnegative sequences \(\{a_{n}\}\) and \(\{b_{n}\}\), the order inequality \(a_{n}\preceq b_{n}\) (resp. \(a_{n}\succeq b_{n}\)) means that there is a number C independent of n such that \(a_{n} \leq Cb_{n}\) (resp. \(a_{n} \geq Cb_{n}\)). The asymptotic relation \(a_{n}\asymp b_{n}\) means \(a_{n}\preceq b_{n}\) and \(a_{n}\succeq b_{n}\).

It is known from Theorem 1.1 in [7] that, for any quasi-greedy basis Ψ,

$$L_{n}(\Psi)\asymp k_{n}+\mu_{n}, $$

where \(k_{n}:=\sup_{\# (A)\leq n}\Vert P_{A}\Vert \) is the unconditionality parameter. So together with Theorem 1.3, we have

$$\frac{L_{n}(\Psi)}{L_{n}^{\mathrm{ch}}(\Psi)}\asymp\frac{k_{n}+\mu_{n}}{\mu_{n}}=1+\frac {k_{n}}{\mu_{n}} \preceq1+k_{n}, $$

since \(\mu_{n} \geq1\) for all \(n\geq1\). It is known from [13, Lemma 8.2], that

$$k_{n}\preceq\ln n\quad \mbox{for any } n=2, 3, \ldots. $$

So we have

$$\frac{L_{n}(\Psi)}{L_{n}^{\mathrm{ch}}(\Psi)}\preceq 1+ \ln n. $$

From the above inequalities one can see that the order of \(L_{n}^{\mathrm{ch}}\) can improve on that of \(L_{n}\) for some conditional quasi-greedy bases. In fact, it is shown in [7], Example 2, that the maximum improvement is \(\ln n\). On the other hand, for unconditional bases, the CTGA makes no essential improvement.

However, as far as we know, there is no result on the estimates of \(L_{n}^{\mathrm{ch}}(\Psi)\) for non-quasi-greedy bases. So in Sect. 2, we study the Chebyshevian Lebesgue constants for two examples of such bases, the trigonometric system and the summing basis, and determine the exact order of \(L_{n}^{\mathrm{ch}}(\Psi)\) for these bases. In Sect. 3, we obtain upper estimates for \(L_{n}^{\mathrm{ch}}(\Psi)\) with respect to general bases, which lead to an improvement of Theorem 1.3 for conditional quasi-greedy bases. In the final section, we present a short survey of results and questions on the efficiency of the CTGA.

2 Chebyshevian Lebesgue constants for two non-quasi-greedy bases

The conclusion of Theorem 1.1 is our motivation to study the efficiency of CTGA for non-quasi-greedy bases. So we begin with the proof of Theorem 1.1.

Proof of Theorem 1.1

Since \(\Psi=(e_{i})\) is a basis for a Banach space X, we have \(X=\overline{\operatorname{span}}\{e_{i}: i\in \mathbb{N}\}\). Let \(x\in X\). For any \(\epsilon>0\), there exist \(M\in \mathbb{N}\) and \((x_{i})_{i=1}^{M}\in\mathbb{K}^{M}\) such that

$$\Biggl\Vert x-\sum_{i=1}^{M}x_{i}e_{i} \Biggr\Vert < \epsilon. $$

For this M, we can take a positive integer N such that

$$\{1, 2, \ldots, M\}\subset\bigl\{ \rho(1), \rho(2), \ldots, \rho(N)\bigr\} . $$

So for a Chebyshev greedy approximant \(CG_{N}(x)=\sum_{i=1}^{N}c_{i}e_{\rho(i)}\), we have

$$\Biggl\Vert x-\sum_{i=1}^{N}c_{ i}e_{\rho(i)} \Biggr\Vert \leq \Biggl\Vert x-\sum_{i=1}^{M}x_{i}e_{i} \Biggr\Vert < \epsilon. $$

The proof is completed. □

A basic problem in approximation theory is to represent a given function approximately, and the first step in solving it is to choose a representation system. Traditionally, a representation system has natural features such as minimality, orthogonality, simple structure and nice computational characteristics. The trigonometric system is one of the most typical representation systems; a very important feature that makes it attractive for the representation of periodic functions is orthogonality.

Next we consider \(L_{n}^{\mathrm{ch}}(\Psi)\) for two non-quasi-greedy bases Ψ. The first one is the trigonometric system. We obtain the following result.

Theorem 2.1

For the trigonometric system \(\mathcal{T}^{d}=\{e^{i(k,x)}\}_{ k \in\mathbb{Z}^{d}}\) in \(L_{p}(\mathbb{T}^{d})\), \(1< p\leq\infty\), we have, for \(n\geq1\),

$$L_{n}^{\mathrm{ch}}\bigl(\mathcal{T }^{d}\bigr)\asymp n^{\vert \frac{1}{2}-\frac {1}{p}\vert }. $$

The upper bound of \(L_{n}^{\mathrm{ch}}(\mathcal{T}^{d})\) follows from known results on \(L_{n}(\mathcal{T}^{d})\); the proof of the lower bound of \(L_{n}^{\mathrm{ch}}(\mathcal{T}^{d})\) relies on a theorem giving a lower estimate of \(L_{n}^{\mathrm{ch}}(\Psi)\) for a general basis Ψ in a Banach space X.

To present this theorem we recall some concepts. For a basis \(\Psi=(e_{i})_{i=1}^{\infty}\) of X, define the partial sum operator \(S_{m}: X\rightarrow X\) by

$$ S_{m}(x):=\sum_{i=1}^{m}e_{i}^{*}(x)e_{i}. $$

Then \(S_{m}\) is a continuous linear operator. So we define \(\Vert S_{m}\Vert \) in the usual way. For finite sets \(A, B\subset\mathbb{N}\), we write \(A< B\) if \(\max\{n: n\in A\}< \min\{n: n\in B\}\). Denote by ϒ the set of \(\varepsilon=\{\varepsilon_{i}\}\) with \(\vert \varepsilon _{i}\vert =1\) for all i (where \(\varepsilon_{i}\) could be real or complex).

Theorem 2.2

Let Ψ be a basis of a Banach space X. For any \(A, B\subset\mathbb{N}\) with \(\#(A)=\#(B)=n\) and \(A< B\), we have, for all \(\varepsilon\in\Upsilon\) and \(n\geq1\),

$$ L_{n}^{\mathrm{ch}}(\Psi)\geq\max \biggl\{ \frac{1}{\Vert S_{n}\Vert } \frac{\Vert {\mathbf{1}}_{\varepsilon A}\Vert }{ \Vert {\mathbf{1}}_{B}\Vert }, \frac{1}{1+\Vert S_{n}\Vert }\frac{\Vert {\mathbf{1}}_{B}\Vert }{\Vert {\mathbf{1}}_{\varepsilon A}\Vert } \biggr\} , $$

where \({\mathbf{1}}_{\varepsilon A}=\sum_{i\in A}\varepsilon_{i} e_{i}\).


Proof

Let \(\varepsilon=(\varepsilon_{i})\) be any choice of signs from ϒ. Fix \(\epsilon>0\) and consider

$$ x=\sum_{i\in A}\varepsilon_{i}e_{i}+(1+ \epsilon)\sum_{i\in B}e_{i}, $$

then \(B \in\mathcal{G}(x, n)\) and we can find a Chebyshev greedy approximant \(CG_{n}(x)\) which is supported on B, such that

$$x-CG_{n}(x)=\sum_{i\in A } \varepsilon_{i}e_{i}+\sum_{i\in B}a_{i}e_{i} $$

for some \(\{a_{i}\}_{i\in B}\subset\mathbb{K}\). Thus

$$ \bigl\Vert x-CG_{n}(x)\bigr\Vert =\biggl\Vert \sum _{i\in A}\varepsilon_{i}e_{i}+\sum _{i\in B}a_{i}e_{i}\biggr\Vert \leq L_{n}^{\mathrm{ch}} \sigma_{n}(x). $$

Since \(A< B\) and \(\Vert S_{n}(x-CG_{n}(x))\Vert \leq \Vert S_{n}\Vert \Vert x-CG_{n}(x)\Vert \), we have

$$ \biggl\Vert \sum_{i \in A}\varepsilon_{i}e_{i} \biggr\Vert \leq \Vert S_{n}\Vert \biggl\Vert \sum _{i\in A }\varepsilon_{i}e_{i}+\sum _{i\in B}a_{i}e_{i}\biggr\Vert . $$

Combining (2.4) and (2.5), we obtain

$$\biggl\Vert \sum_{i \in A}\varepsilon_{i}e_{i} \biggr\Vert \leq \Vert S_{n}\Vert \biggl\Vert \sum _{i\in A}\varepsilon_{i}e_{i}+\sum _{i\in B}a_{i}e_{i}\biggr\Vert \leq \Vert S_{n}\Vert L_{n}^{\mathrm{ch}} \sigma_{n}(x). $$


Hence

$$ \sigma_{n}(x) \leq \bigl\Vert x-P_{B}(x)\bigr\Vert =\biggl\Vert \sum_{i \in A} \varepsilon_{i}e_{i}\biggr\Vert \leq \Vert S_{n}\Vert L_{n}^{\mathrm{ch}} \sigma_{n}(x). $$

Now we consider

$$ y=(1+\epsilon)\sum_{i \in A }\varepsilon_{i}e_{i}+ \sum_{i \in B}e_{i}, $$

then \(A \in\mathcal{G}(y,n)\) and there exists some \(\{b_{i}\}_{i\in A}\subset\mathbb{K}\) such that

$$y-CG_{n}(y)=\sum_{i\in A}b_{i}e_{i}+ \sum_{i\in B}e_{i}. $$

Since \(A< B\) and

$$\bigl\Vert y-CG_{n}(y)\bigr\Vert =\biggl\Vert \sum _{i\in A}b_{i}e_{i}+\sum _{i\in B}e_{i}\biggr\Vert \leq L_{n}^{\mathrm{ch}} \sigma_{n}(y), $$

we have, using \(S_{n}(y-CG_{n}(y))=\sum_{i\in A}b_{i}e_{i}\),

$$ \begin{aligned}\biggl\Vert \sum _{i\in B}e_{i}\biggr\Vert &= \biggl\Vert \sum _{i\in A}b_{i}e_{i}+\sum _{i\in B}e_{i}-\sum_{i\in A}b_{i}e_{i} \biggr\Vert \\ &\leq \biggl\Vert \sum_{i\in A}b_{i}e_{i}+ \sum_{i\in B}e_{i}\biggr\Vert +\biggl\Vert \sum_{i\in A}b_{i}e_{i}\biggr\Vert \\ & \leq\biggl\Vert \sum_{i\in A}b_{i}e_{i}+ \sum_{i\in B}e_{i}\biggr\Vert +\Vert S_{n}\Vert \biggl\Vert \sum_{i\in A}b_{i}e_{i}+ \sum_{i\in B}e_{i}\biggr\Vert \\ &= \bigl(1+\Vert S_{n}\Vert \bigr)\biggl\Vert \sum _{i\in A}b_{i}e_{i}+\sum _{i\in B}e_{i}\biggr\Vert \\ &\leq \bigl(1+\Vert S_{n}\Vert \bigr)L_{n}^{\mathrm{ch}} \sigma_{n}(y). \end{aligned} $$


Hence

$$ \sigma_{n}(y)\leq\bigl\Vert y-P_{A}(y)\bigr\Vert = \biggl\Vert \sum_{i\in B}e_{i}\biggr\Vert \leq \bigl(1+\Vert S_{n}\Vert \bigr)L_{n}^{\mathrm{ch}} \sigma_{n}(y). $$

For any \(z\in\Sigma_{n}\),

$$\sigma_{n}(x)\leq \Vert x-z\Vert \leq \Vert x-y\Vert +\Vert y-z \Vert . $$

Taking the infimum over all such z, and using the symmetry of x and y, we have

$$ \bigl\vert \sigma_{n}(x)-\sigma_{n}(y)\bigr\vert \leq \Vert x-y\Vert . $$

From (2.3) and (2.7),

$$\Vert x-y\Vert =\biggl\Vert \epsilon\biggl(\sum _{i \in B}e_{i}-\sum_{i\in A}\varepsilon_{i}e_{i} \biggr)\biggr\Vert \rightarrow0\quad \mbox{as } \epsilon\rightarrow 0, $$

then (2.9) and the above equality imply that

$$ \sigma_{n}(x)\asymp\sigma_{n}(y). $$

Combining (2.6), (2.8) and (2.10), we obtain

$$\begin{aligned} \frac{1}{\Vert S_{n}\Vert L_{n}^{\mathrm{ch}}}\biggl\Vert \sum_{i \in A} \varepsilon_{i}e_{i}\biggr\Vert \leq& \biggl\Vert \sum _{i\in B}e_{i}\biggr\Vert \\ \leq&\bigl(1+\Vert S_{n}\Vert \bigr)L_{n}^{\mathrm{ch}}\biggl\Vert \sum _{i \in A}\varepsilon_{i}e_{i}\biggr\Vert . \end{aligned}$$


Hence

$$L_{n}^{\mathrm{ch}}\geq \frac{1}{\Vert S_{n}\Vert }\frac{\Vert {\mathbf{1}}_{\varepsilon A}\Vert }{\Vert {\mathbf{1}}_{B}\Vert }\quad \mbox{from the first inequality of (2.11)} $$


and

$$L_{n}^{\mathrm{ch}}\geq\frac{1}{1+\Vert S_{n}\Vert }\frac{\Vert {\mathbf{1}}_{B}\Vert }{\Vert {\mathbf{1}}_{\varepsilon A}\Vert }\quad \mbox{from the second inequality of (2.11)}. $$

We complete the proof. □

If Ψ is a Schauder basis of a Banach space X, then for any \(x\in X\) there exists a unique expansion

$$x=\sum_{i=1}^{\infty}e_{i}^{*}(x)e_{i}, $$

which means that the partial sum sequence \(\{S_{m}(x)\}\) defined in (2.1) converges to x in the norm of X for every \(x\in X\). By the principle of uniform boundedness, we have \(\sup_{m}\Vert S_{m}\Vert <\infty\). The number \(\sup_{m}\Vert S_{m}\Vert \) is called the basis constant of the basis Ψ (see [14]) and is denoted by β; it is frequently used in the study of greedy algorithms (see [15]).

We have the following corollary, which is a direct consequence of Theorem 2.2.

Corollary 2.3

Let Ψ be a Schauder basis for a Banach space X. For any \(A, B\subset\mathbb{N}\) with \(\#(A)=\#(B)=n\) and \(A< B\), we have, for all \(\varepsilon\in\Upsilon\) and \(n\geq1\),

$$L_{n}^{\mathrm{ch}}(\Psi)\geq\max \biggl\{ \frac{1}{\beta} \frac{\Vert {\bf 1}_{\varepsilon A}\Vert }{\Vert {\mathbf{1}}_{B}\Vert }, \frac{1}{1+\beta}\frac{\Vert {\mathbf{1}}_{B}\Vert }{\Vert {\bf 1}_{\varepsilon A}\Vert } \biggr\} . $$

Now we prove Theorem 2.1.

Proof of Theorem 2.1

The upper bound follows from the obvious relationship \(L_{n}^{\mathrm{ch}}(\mathcal{T})\leq L_{n}(\mathcal{T})\) and the inequality

$$ L_{n}(\mathcal{T}) \leq 1+3n^{\vert \frac{1}{p}-\frac{1}{2}\vert } \quad \mbox{for } 1\leq p\leq \infty, $$

which was proved by Temlyakov in [10].

Now we turn to the proof of the lower bounds. It suffices to prove the results for \(d=1\). We consider separately two cases \(1< p <\infty\) and \(p=\infty\).

We first apply Theorem 2.2 to prove the results for \(1< p<\infty\). It is well known that \(e_{k}^{*}(f)\) is the kth Fourier coefficient of f defined by

$$e_{k}^{*}(f)=\widehat{f}(k):=\frac{1}{2\pi} \int_{\mathbb{T}}f(x)e^{-ikx}\,dx. $$

For notational convenience, we take a particular order

$$\bigl\{ 1, e^{ix}, e^{-ix}, e^{i2x}, e^{-i2x}, \ldots\bigr\} $$

of the trigonometric system \(\mathcal{T}\).

We consider two important trigonometric polynomials (see for instance [16]).

1. The Dirichlet kernel of order n is defined as

$$ \mathcal{D}_{n}(x)=\sum_{\vert k\vert \leq n}e^{ikx},\quad x\in \mathbb{T}. $$

It is well known that

$$ \Vert \mathcal{D}_{n}\Vert _{p}\asymp n^{1-\frac{1}{p}},\quad n=1,2,\ldots, 1< p\leq\infty, $$


$$ \Vert \mathcal{D}_{n}\Vert _{1} \asymp\ln n,\quad n=2, 3, \ldots. $$

2. The Rudin–Shapiro polynomial \(\mathcal{R}_{N}(x)\), which is defined recursively by pairs of trigonometric polynomials \(P_{j}(x)\) and \(Q_{j}(x)\) of order \(2^{j}-1\):

$$\begin{aligned}&P_{0}=Q_{0}=1, \\ &P_{j+1}(x)=P_{j}(x)+e^{i2^{j}x}Q_{j}(x), \qquad Q_{j+1}(x)=P_{j}(x)-e^{i2^{j}x}Q_{j}(x). \end{aligned} $$

From the definition of \(P_{n}\), it is clear that

$$P_{n}(x)=\sum_{k=0}^{2^{n}-1} \varepsilon_{k}e^{ikx},\quad \varepsilon_{k}=\pm1. $$

Let N be a natural number, and let it have the binary representation

$$N=\sum_{j=1}^{m}2^{n_{j}},\quad n_{1}>n_{2}>\cdots>n_{m}\geq0. $$

We set

$$\begin{aligned}&\widetilde{\mathcal{R}}_{N}(x)=P_{n_{1}}(x)+\sum _{j=2}^{m}P_{n_{j}}(x)e^{i(2^{n_{1}}+\cdots+2^{n_{j-1}})x}, \\ &\mathcal{R}_{N}(x)=\widetilde{\mathcal{R}}_{N}(x)+ \widetilde{\mathcal{R}}_{N}(-x)-1. \end{aligned}$$

Then \(\mathcal{R}_{N}(x)\) has the form

$$ \mathcal{R}_{N}(x)=\sum_{\vert k\vert \leq N} \varepsilon_{k}e^{ikx},\quad \varepsilon_{k}=\pm1, $$

and it is known that

$$\Vert \mathcal{R}_{N}\Vert _{\infty}\leq CN^{\frac{1}{2}}. $$

Notice that

$$ \begin{aligned} 2N+1&=\langle \mathcal{R}_{N}, \mathcal{R}_{N}\rangle \\ &\leq \Vert \mathcal{R}_{N}\Vert _{1}\cdot \Vert \mathcal{R}_{N}\Vert _{\infty} \\ &\leq \Vert \mathcal{R}_{N}\Vert _{1}\cdot CN^{\frac{1}{2}}, \end{aligned} $$

which implies \(\Vert \mathcal{R}_{N}\Vert _{1}\succeq N^{\frac{1}{2}}\). It is trivial that \(\Vert \mathcal{R}_{N}\Vert _{1}\preceq \Vert \mathcal{R}_{N}\Vert _{p}\preceq \Vert \mathcal{R}_{N}\Vert _{\infty}\). Hence, we have

$$ \Vert \mathcal{R}_{N}\Vert _{p}\asymp N^{\frac{1}{2}} \quad \mbox{for } 1\leq p\leq\infty. $$
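The Dirichlet and Rudin–Shapiro norm estimates above are easy to verify numerically. The following Python sketch is ours (the grid sizes and the constants in the assertions are ad hoc choices); it uses the closed form \(\mathcal{D}_{n}(x)=\sin((n+\frac{1}{2})x)/\sin(x/2)\) and the classical identity \(\vert P_{j}(x)\vert ^{2}+\vert Q_{j}(x)\vert ^{2}=2^{j+1}\), which yields \(\Vert P_{j}\Vert _{\infty}\leq\sqrt{2\cdot2^{j}}\).

```python
import cmath
import math

def dirichlet_L1(n, grid=8192):
    """Estimate ||D_n||_1 = (1/2pi) * int_T |D_n(x)| dx by a midpoint rule,
    using D_n(x) = sin((n + 1/2) x) / sin(x / 2)."""
    total = 0.0
    for j in range(grid):
        x = 2.0 * math.pi * (j + 0.5) / grid   # midpoints avoid x = 0
        total += abs(math.sin((n + 0.5) * x) / math.sin(x / 2.0))
    return total / grid

def rudin_shapiro_signs(j):
    """Coefficients (all +-1) of P_j from the recursion in the text:
    P_{j+1} = P_j + e^{i 2^j x} Q_j,  Q_{j+1} = P_j - e^{i 2^j x} Q_j.
    Multiplying Q_j by e^{i 2^j x} shifts its coefficients past those of P_j,
    so the recursion is pure list concatenation."""
    p, q = [1], [1]
    for _ in range(j):
        p, q = p + q, p + [-c for c in q]
    return p

# ||D_n||_1 grows like ln n (the constants below are generous brackets).
for n in (4, 16, 64):
    v = dirichlet_L1(n)
    assert 0.4 * math.log(n) <= v <= math.log(n) + 2.0

# ||P_j||_inf <= sqrt(2 * 2^j); the grid maximum is a lower bound for the sup.
for j in (4, 6):
    eps = rudin_shapiro_signs(j)
    grid_max = max(
        abs(sum(e * cmath.exp(1j * k * x) for k, e in enumerate(eps)))
        for x in (2.0 * math.pi * t / 512 for t in range(512))
    )
    assert grid_max <= math.sqrt(2 * 2 ** j) + 1e-9
```

The same recursion produces the signs \(\varepsilon_{k}\) used for \({\mathbf{1}}_{\varepsilon A}\) in the argument below.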

Firstly, we assume that n is even. Define the sets

$$A=\{k: 2\leq k\leq n+1\} \quad \mbox{and}\quad B=\{k: n+1< k\leq2n+1\}. $$

It is clear that \(\#(A)=\#(B)=n\) and \(A< B\). So from (2.13) we have

$${\mathbf{1}}_{B}(x)=\mathcal{D}_{n}(x)-\mathcal{D}_{\frac{n}{2}}(x), $$

where \({\mathbf{1}}_{B}(x)=\sum_{k\in B}e^{ikx}\), hence the triangle inequality and (2.14) give

$$n^{1-\frac{1}{p}}\succeq \Vert \mathcal{D}_{n}\Vert _{p}+ \Vert \mathcal{D}_{\frac{n}{2}}\Vert _{p}\geq \Vert { \mathbf{1}}_{B}\Vert _{p}\geq \Vert \mathcal{D}_{n} \Vert _{p}-\Vert \mathcal{D}_{\frac {n}{2}}\Vert _{p} \succeq n^{1-\frac{1}{p}}, $$

which is equivalent to

$$ \Vert {\mathbf{1}}_{B}\Vert _{p}\asymp n^{1-\frac{1}{p}}. $$

If we choose for ε the signs of the corresponding Rudin–Shapiro polynomial, then, according to (2.16) and the definition of the set A, we have

$${\mathbf{1}}_{\varepsilon A}(x)=\mathcal{R}_{\frac{n}{2}}(x)-\varepsilon_{0}, $$

where \({\mathbf{1}}_{\varepsilon A}(x)=\sum_{k\in A}\varepsilon_{k}e^{ikx}\). By (2.17) and the triangle inequality, we obtain

$$n^{\frac{1}{2}}-1 \preceq \Vert \mathcal{R}_{\frac{n}{2}} \Vert _{p}-1\leq \Vert {\mathbf{1}}_{\varepsilon A}\Vert _{p}\leq \Vert \mathcal{R}_{\frac {n}{2}}\Vert _{p}+1 \preceq n^{\frac{1}{2}}, $$

which can be rewritten as

$$ \Vert {\mathbf{1}}_{\varepsilon A}\Vert _{p}\asymp n^{\frac{1}{2}}. $$

Combining (2.2), (2.18) and (2.19), we obtain

$$L_{n}^{\mathrm{ch}}(\mathcal{T})\geq\frac{1}{1+ \Vert S_{n}\Vert _{L_{p}\rightarrow L_{p}}} \frac{\Vert {\mathbf{1}}_{B}\Vert _{p}}{\Vert {\bf 1}_{\varepsilon A}\Vert _{p}} \asymp\frac{n^{1-\frac{1}{p}}}{n^{\frac{1}{2}}}=n^{\frac{1}{2}-\frac {1}{p}} \quad \mbox{for } 2< p< \infty $$


and

$$L_{n}^{\mathrm{ch}}(\mathcal{T})\geq\frac{1}{ \Vert S_{n}\Vert _{L_{p}\rightarrow L_{p}}} \frac{\Vert {\mathbf{1}}_{\varepsilon A}\Vert _{p}}{\Vert {\mathbf{1}}_{B}\Vert _{p}} \asymp\frac{n^{\frac{1}{2}}}{n^{1-\frac{1}{p}}}=n^{\frac{1}{p}-\frac {1}{2}} \quad \mbox{for } 1< p \leq2, $$

which imply that

$$ L_{n}^{\mathrm{ch}}(\mathcal{T})\succeq n^{\vert \frac{1}{2}-\frac{1}{p}\vert } \quad \mbox{for } 1< p< \infty. $$

Here we use the fact that the trigonometric system \(\mathcal{T}^{d}=\{e^{i(k,x)}\}_{k \in\mathbb{Z}^{d}}\) is a Schauder basis for \(L_{p}(\mathbb{T}^{d})\) when \(1< p< \infty\), which is equivalent to

$$\sup_{n}\Vert S_{n}\Vert < \infty, $$

i.e., the partial sum operator \(S_{n}\) is uniformly bounded.

Secondly, assume that n is odd. For \(n>1\), we define

$$A=\{k: 1\leq k\leq n\} \quad \mbox{and}\quad B=\{k: n< k\leq2n\}. $$

An argument similar to (2.19) gives

$$ \Vert {\mathbf{1}}_{\varepsilon A}\Vert _{p}=\Vert \mathcal{R}_{\frac{n-1}{2}} \Vert _{p}\asymp (n-1)^{\frac{1}{2}}\asymp n^{\frac{1}{2}}, $$

where we choose ε as above.

On the other hand, it is clear that

$${\bf 1}_{B}(x)=\mathcal{D}_{n}(x)-\mathcal{D}_{\frac{n-1}{2}}(x), $$

and from (2.14) and the triangle inequality

$$ \begin{aligned} (n-1)^{1-\frac{1}{p}}&\preceq \Vert \mathcal {D}_{n-1}\Vert _{p}-\Vert \mathcal{D}_{\frac{n-1}{2}} \Vert _{p} \\ &\leq \Vert {\mathbf{1}}_{B}\Vert _{p} \\ &\leq \Vert \mathcal{D}_{n-1}\Vert _{p}+\Vert \mathcal{D}_{\frac {n-1}{2}}\Vert _{p} \\ &\preceq (n-1)^{1-\frac{1}{p}}, \end{aligned} $$

we have

$$ \Vert {\mathbf{1}}_{B}\Vert _{p}\asymp(n-1)^{1-\frac {1}{p}}\asymp n^{1-\frac{1}{p}}. $$

Combining (2.2), (2.21), (2.22) and using a similar argument to above, we also prove the inequality (2.20). So we complete the proof for \(1< p<\infty\).

For \(p=\infty\), if we take \(A, B\) and ε as above, then from Theorem 2.2, (2.14) and (2.17), we obtain

$$L_{n}^{\mathrm{ch}}(\mathcal{T})\geq\frac{1}{1+\Vert S_{n}\Vert _{L_{\infty }\rightarrow L_{\infty}}} \frac{\Vert {\mathbf{1}}_{B}\Vert _{\infty}}{ \Vert {\mathbf{1}}_{\varepsilon A}\Vert _{\infty}}\succeq \frac{n^{\frac{1}{2}}}{\ln n}, $$

where we use the well-known relation \(\Vert S_{n}\Vert _{L_{\infty}\rightarrow L_{\infty}}\preceq \ln n\) for all \(n\geq2\) (see [17]).

Now we adopt a different approach to remove the factor \(\ln n\). Let \(T(n)\) denote the space of trigonometric polynomials of degree n, which consists of functions of the form

$$T(x)=\sum_{\vert k\vert \leq n}c_{k}(T)e^{ikx}. $$

For \(n\geq1\), we define the class of sparse trigonometric polynomials

$$A^{1}\bigl( T(n)\bigr):=\biggl\{ T\in T(n): \sum _{\vert k\vert \leq n}\bigl\vert c_{k}(T)\bigr\vert \leq1\biggr\} $$

and the function

$$f_{n}(x):=\frac{1}{4n}\sum_{1\leq \vert k\vert \leq2n} \biggl(1-\frac {\vert k\vert }{4n} \biggr)e^{ikx}, $$

which can be rewritten as

$$f_{n}(x)=\frac{1}{2n}\sum_{k=1}^{2n} \biggl(1-\frac{k}{4n} \biggr)\cos kx. $$

It is clear that \(\{1, 2, \ldots, n\}\in\mathcal{G}(f_{n}, n)\). Let \(CG_{n}(f_{n})\) be the best approximant to \(f_{n}\) in \(L_{\infty}(\mathbb{T})\) from \(\operatorname{span}\{\cos x, \cos2x, \ldots, \cos nx\}\). From the relationship

$$\Biggl\Vert \sum_{k=n+1}^{2n}\cos kx\Biggr\Vert _{q}\asymp n^{1-\frac{1}{q}}, $$

which holds for any \(1< q<\infty\), and the Hölder inequality, we have

$$ \begin{aligned} \Biggl\langle f_{n}, \sum _{k=n+1}^{2n}\cos kx\Biggr\rangle &= \Biggl\langle f_{n}-CG_{n}(f_{n}), \sum _{k=n+1}^{2n}\cos kx\Biggr\rangle \\ &\leq \bigl\Vert f_{n}-CG_{n}(f_{n})\bigr\Vert _{q'}\cdot\Biggl\Vert \sum_{k=n+1}^{2n} \cos kx\Biggr\Vert _{q } \\ &\asymp n^{ \frac{1}{q'}} \bigl\Vert f_{n}-CG_{n}(f_{n}) \bigr\Vert _{q'}, \end{aligned} $$

where \(\frac{1}{q}+\frac{1}{q'}=1\). It is easy to check that

$$\Biggl\langle f_{n}, \sum_{k=n+1}^{2n} \cos kx\Biggr\rangle \asymp1. $$
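In fact this inner product admits a closed form: writing \(f_{n}=\frac{1}{2n}\sum_{k=1}^{2n}(1-\frac{k}{4n})\cos kx\) and using the orthogonality relation \(\frac{1}{2\pi}\int_{\mathbb{T}}\cos kx\cos jx\,dx=\frac{1}{2}\delta_{kj}\), one gets the value \(\frac{5n-1}{32n}\in[\frac{1}{8},\frac{5}{32})\). A quick exact check in Python (the function name is ours):

```python
from fractions import Fraction

def inner_product(n):
    """<f_n, sum_{k=n+1}^{2n} cos kx> computed from the cosine coefficients
    of f_n, using (1/2pi) * int cos(kx) cos(jx) dx = 1/2 if k == j else 0."""
    return sum(
        Fraction(1, 2 * n) * (1 - Fraction(k, 4 * n)) * Fraction(1, 2)
        for k in range(n + 1, 2 * n + 1)
    )

for n in (1, 5, 100):
    v = inner_product(n)
    assert v == Fraction(5 * n - 1, 32 * n)       # closed form
    assert Fraction(1, 8) <= v < Fraction(5, 32)  # uniformly of order 1
```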

Thus, we obtain

$$\bigl\Vert f_{n}-CG_{n}(f_{n})\bigr\Vert _{q'} \succeq\frac{1}{n^{\frac{1}{q'}}}. $$

Letting \(q'\rightarrow\infty\), we have

$$\bigl\Vert f_{n}-CG_{n}(f_{n})\bigr\Vert _{\infty} \succeq1. $$

To estimate the upper bound of \(\sigma_{n}(f_{n})_{\infty}\), we invoke the following inequality, which holds for any \(m>n\) and \(g\in A^{1}( T(m))\):

$$ \sigma_{n}(g, \mathcal{T})_{\infty}\leq Cn^{-\frac{1}{2}} \biggl(1+\ln\frac{m}{n} \biggr)^{\frac{1}{2}}, $$

which was proved by DeVore and Temlyakov in [18] with the help of Gluskin’s theorem [19]. Noticing that \(f_{n}\in A^{1}(T(2n))\), we have by (2.23), with \(m=2n\),

$$\sigma_{n}(f_{n}, \mathcal{T})_{\infty}\leq C n^{-\frac{1}{2}}(1+\ln 2)^{\frac{1}{2}}\preceq n^{-\frac{1}{2}}. $$


Hence

$$L_{n}^{\mathrm{ch}}(\mathcal{T})\geq \frac{\Vert f_{n}-CG_{n}(f_{n})\Vert _{\infty}}{\sigma_{n}(f_{n},\mathcal{T})_{\infty}}\succeq n^{\frac{1}{2}}. $$

Thus the proof of Theorem 2.1 is completed. □


Remark

When \(p=1\), we have, for \(n\geq2\),

$$ L_{n}^{\mathrm{ch}}(\mathcal{T})\succeq \frac{n^{\frac{1}{2}}}{(\ln n)^{2}}. $$

As in the proof of Theorem 2.1, we can derive the inequality

$$L_{n}^{\mathrm{ch}}\geq \frac{\Vert \sum_{i \in A}\varepsilon_{i}e_{i}\Vert _{1}}{\Vert S_{n}\Vert _{L_{1}\rightarrow L_{1}} \Vert \sum_{i\in B}e_{i}\Vert _{1}}. $$

Choosing A, B and \(\{\varepsilon_{i}\}\) as above, from (2.15) and (2.17) we have

$$\biggl\Vert \sum_{i \in A}\varepsilon_{i}e_{i} \biggr\Vert _{1}\asymp n^{\frac{1}{2}},\quad \mbox{and} \quad \biggl\Vert \sum _{i\in B}e_{i}\biggr\Vert _{1} \preceq\ln n. $$

Moreover it is well known that \(\Vert S_{n}\Vert _{L_{1}\rightarrow L_{1}} \preceq \ln n\), see for instance [16], Theorem 7.4.1, so we get the inequality (2.24).

An interesting and challenging problem is whether the inequality (2.24) can be improved to

$$L_{n}^{\mathrm{ch}}(\mathcal{T}) \succeq n^{\frac{1}{2}}. $$

The second example is the summing basis. Let X be the real Banach space of all sequences \({\boldsymbol{\alpha}} = (a_{n})_{n \in \mathbb{N}}\) with

$$\Vert {\boldsymbol{\alpha}} \Vert :=\sup_{M \geq1}\Biggl\vert \sum _{n=1}^{M}a_{n}\Biggr\vert < \infty. $$

The standard canonical basis \(\{e_{n}, e_{n}^{*}\}\) of X is called the summing basis; it satisfies \(\Vert e_{n}\Vert \equiv1\), \(\Vert e_{1}^{*}\Vert =1\) and \(\Vert e_{n}^{*}\Vert =2\) for \(n \geq2\). It is known from [20], Proposition 5.1, that for this basis \(g_{n}=2n\), so it is a non-quasi-greedy basis of X. For this basis, we can estimate \(L_{n}^{\mathrm{ch}}(\Psi)\) by direct computation.
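Before stating the result, the failure of the TGA and the gain from the CTGA for this basis can be seen numerically. In the sketch below (the test vector and the candidate coefficients are our own, in the spirit of the proof that follows), the TGA of order n leaves an error of size n, while a single admissible choice of Chebyshev coefficients already brings the error down to 1:

```python
def summing_norm(a):
    """||a|| = sup_M |a_1 + ... + a_M| for a finitely supported sequence."""
    best, s = 0.0, 0.0
    for t in a:
        s += t
        best = max(best, abs(s))
    return best

n = 10
delta = 1.0 / n
# Entries alternate 1 + delta and -1; the n coordinates equal to 1 + delta
# form the greedy set of order n, since |1 + delta| > |-1|.
x = [1.0 + delta if i % 2 == 0 else -1.0 for i in range(2 * n)]
assert abs(summing_norm(x) - 2.0) < 1e-9           # ||x|| = 1 + n * delta = 2

# TGA keeps the coefficients 1 + delta unchanged: the residual is
# (0, -1, 0, -1, ...), whose partial sums march down to -n.
residual_tga = [0.0 if i % 2 == 0 else -1.0 for i in range(2 * n)]
assert abs(summing_norm(residual_tga) - n) < 1e-9  # ||x - G_n(x)|| = n

# CTGA may re-choose the coefficients on the greedy set; taking them all
# equal to delta leaves the residual (1, -1, 1, -1, ...), of norm 1, so
# ||x - CG_n(x)|| <= 1 while ||x - G_n(x)|| = n.
candidate = [delta if i % 2 == 0 else 0.0 for i in range(2 * n)]
residual_ch = [a - b for a, b in zip(x, candidate)]
assert abs(summing_norm(residual_ch) - 1.0) < 1e-9
```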

Theorem 2.4

For the summing basis \(\Psi=\{e_{n}\}\) of the real Banach space X, we have, for \(n \geq1\),

$$1+2n\leq\widetilde{L}_{n}^{\mathrm{ch}}\leq1+4n \leq L_{n}^{\mathrm{ch}} \leq1+6n. $$


We begin with the lower estimate of \(\widetilde{L}_{n}^{\mathrm{ch}}\). We first pick the vector

$$x=(\overbrace{0, 2, 0}; \overbrace{0, 2, 0}; \ldots; \overbrace{0, 2, 0}; 1; \overbrace{-2, 2}, \overbrace{-2, 2}, \ldots, \overbrace{-2, 2}, 0, 0, \ldots). $$

Then \(\Gamma=\{n: x_{n}=-2\} \in\mathcal{G}(x, n)\). Let \(CG_{n}(x)\) be the Chebyshev approximant of x supported on Γ. Then, for some \(\{a_{i}\}_{i=1}^{n}\subset\mathbb{R}\),

$$x-CG_{n}(x)=(\overbrace{0, 2, 0}; \overbrace{0, 2, 0}; \ldots; \overbrace{0, 2, 0}; 1; a_{1}, 2, a_{2}, 2, \ldots, a_{n}, 2, 0, 0, \ldots). $$

Notice that

$$ \begin{aligned}\bigl\Vert x-CG_{n}(x)\bigr\Vert &= \bigl\Vert (\overbrace{0, 2, 0}; \overbrace{0, 2, 0}; \ldots; \overbrace {0, 2, 0}; 1; a_{1}, 2, a_{2}, 2, \ldots, a_{n}, 2, 0, 0,\ldots)\bigr\Vert \\ &\geq \bigl\Vert (\overbrace{0, 2, 0}; \overbrace{0, 2, 0}; \ldots; \overbrace {0, 2, 0}; 1; 0, 0, 0, 0, \ldots, 0, 0, 0, 0,\ldots)\bigr\Vert \\ & = 2n+1 \end{aligned} $$


and

$$ \begin{aligned}\widetilde{\sigma}_{n}(x)&\leq \bigl\Vert x-(\overbrace{0, 2, 0}; \ldots;\overbrace{0, 2, 0}; 0, 0, \ldots) \bigr\Vert \\ &=\bigl\Vert (0, 0, 0; \ldots; 0, 0, 0;1, \overbrace{-2, 2}, \overbrace{-2, 2}, \ldots,\overbrace{-2,2},0,0,\ldots)\bigr\Vert \\ &=1, \end{aligned} $$

we have

$$\widetilde{L}_{n}^{\mathrm{ch}}\geq\frac{\Vert x-CG_{n}(x)\Vert }{\widetilde {\sigma}_{n}(x)} \geq 2n+1. $$
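As a sanity check (our own numerical illustration, not part of the proof), the two quantities above can be evaluated with a small self-contained Python sketch, here with \(n=3\); the key point is that any choice of coefficients on Γ leaves the prefix partial sums through the middle "1" unchanged:

```python
import random

def summing_norm(a):
    # || a || = sup_M | a_1 + ... + a_M |
    s, best = 0.0, 0.0
    for t in a:
        s += t
        best = max(best, abs(s))
    return best

n = 3
# Any approximant supported on the greedy set only changes the -2's, so the
# partial sums up to the middle "1" survive and force norm >= 2n + 1:
for _ in range(100):
    a = [random.uniform(-5.0, 5.0) for _ in range(n)]
    approx_error = [0, 2, 0] * n + [1] + [c for ai in a for c in (ai, 2)]
    assert summing_norm(approx_error) >= 2 * n + 1
# while removing the n coordinates "2" inside the (0,2,0)-blocks leaves
# an n-term projection error of norm exactly 1:
residual = [0, 0, 0] * n + [1] + [-2, 2] * n
print(summing_norm(residual))  # 1.0
```

This reproduces the ratio \((2n+1)/1\) from the display above.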

For the upper estimate of \(\widetilde{L}_{n}^{\mathrm{ch}}\), we use the notation and the method in the proof of Theorem 3.5 of the next section and notice that (3.6) can be bounded by

$$\begin{aligned} \biggl\Vert \sum _{i\in A\setminus\Gamma}e_{i}^{*}(x)e_{i}\biggr\Vert &\leq \sum_{i\in A\setminus\Gamma}\bigl\vert e_{i}^{*}(x)\bigr\vert \Vert e_{i}\Vert \\ &\leq\sup_{i} \Vert e_{i}\Vert \sum _{i\in\Gamma \setminus A}\bigl\vert e_{i}^{*}(x)\bigr\vert \\ &=\sup_{i}\Vert e_{i}\Vert \sum _{i\in\Gamma\setminus A}\bigl\vert e_{i}^{*}(x-P_{A}x) \bigr\vert \\ &\leq\sup_{i}\Vert e_{i}\Vert \sup_{i}\bigl\Vert e_{i}^{*}\bigr\Vert \vert \Gamma\setminus A\vert \Vert x-P_{A}x\Vert \\ &\leq 2n\Vert x-P_{A}x\Vert , \end{aligned}$$

where the second inequality holds because Γ is a greedy set and \(\#(A\setminus\Gamma)=\#(\Gamma\setminus A)\), so each coefficient over \(A\setminus\Gamma\) is dominated by one over \(\Gamma\setminus A\). This implies \(k_{n}\leq2n\), and hence

$$\widetilde{L}_{n}^{\mathrm{ch}}\leq g_{n}^{c}+2n \leq k_{n}^{c}+2n\leq1+k_{n}+2n\leq 1+4n, $$

where \(g_{n}^{c}=\sup_{G \in\bigcup_{k \leq n}\mathcal{G}_{k}}\Vert I-G\Vert \) and \(k_{n}^{c}=\sup_{\#(A)\leq n}\Vert I-P_{A}\Vert \).

Now we turn to the estimate of \(L_{n}^{\mathrm{ch}}\). It is known from Proposition 5.1 in [20] that \(L_{n} \leq1+6n\), hence the upper bound follows from this inequality and the trivial inequality \(L_{n}^{\mathrm{ch}}\leq L_{n}\). For the estimate of the lower bound, let

$$x=\biggl(\overbrace{\frac{1}{2}, 1, \frac{1}{2}}; \overbrace{ \frac{1}{2}, 1, \frac{1}{2}}; \ldots; \overbrace{\frac{1}{2}, 1, \frac{1}{2}}; \frac{1}{2}; \overbrace{-1, 1}, \overbrace{-1, 1}, \ldots, \overbrace{-1, 1}, 0, 0, \ldots\biggr). $$

Similar to the proof of the first inequality, for \(\Gamma=\{n: x_{n}=-1\} \in\mathcal{G}(x, n)\), there exist \(\{b_{i}\}_{i=1}^{n}\subset\mathbb{R}\) such that the Chebyshev approximant \(CG_{n}(x)\) of x supported on Γ satisfies

$$x-CG_{n}(x)=\biggl(\overbrace{\frac{1}{2}, 1, \frac{1}{2}}; \overbrace{\frac{1}{2}, 1, \frac{1}{2}}; \ldots; \overbrace{\frac{1}{2}, 1, \frac{1}{2}}; \frac{1}{2}; \overbrace{b_{1}, 1}, \overbrace{b_{2}, 1}, \ldots, \overbrace{b_{n}, 1}, 0, 0, \ldots\biggr). $$

Using the definition of the norm, we have

$$\bigl\Vert x-CG_{n}(x)\bigr\Vert \geq2n+\frac{1}{2}. $$

Notice that

$$\sigma_{n}(x)\leq\bigl\Vert x-2(\overbrace{0, 1, 0}; \ldots; \overbrace {0,1, 0}; 0, 0, \ldots)\bigr\Vert =\frac{1}{2}, $$

so we conclude

$$L_{n}^{\mathrm{ch}}\geq \frac{\Vert x-CG_{n}(x)\Vert }{\sigma_{n}(x)} \geq1+4n. $$

The proof of Theorem 2.4 is completed. □

3 Chebyshevian Lebesgue constants for general bases

In this section, we obtain some upper bounds for \(L_{n}^{\mathrm{ch}}(\Psi)\) and \(\widetilde{L}_{n}^{\mathrm{ch}}(\Psi)\) with respect to a general basis Ψ in a Banach space X. These bounds are given in terms of the following quantities, which have been defined in [20].

Let \(\mathcal{G}=\bigcup_{n\geq1}\mathcal{G}_{n}\). Given \(G, G^{'}\in \mathcal{G}\) we shall write \(G^{'}< G\) whenever \(G\in\mathcal{G}_{n}\) and \(G^{'}\in\mathcal{G}_{m}\) with \(m< n\) and \(\{\rho(1), \ldots, \rho(m)\}\subset \{\rho(1), \ldots, \rho(n)\}\). Now we introduce the following parameters.

  • Quasi-greedy parameters:

    $$\widetilde{g}_{n}=\sup_{{G \in\bigcup_{k \leq n}\mathcal{G}_{k}}, G^{'}< G}\bigl\Vert G-G'\bigr\Vert . $$
  • Super-democracy parameters and their counterparts for disjoint sets:

    $$\widetilde{\mu}_{n}=\sup_{{\#(A)=\#(B)\leq n, {\boldsymbol{\varepsilon}},{\boldsymbol{\eta}}\in \Upsilon}}\frac{\Vert {\mathbf{1}}_{{\boldsymbol{\varepsilon}}A} \Vert }{\Vert {\mathbf{1}}_{{\boldsymbol{\eta}}B} \Vert } \quad \mbox{and}\quad \widetilde{\mu}_{n}^{d}=\sup_{{\#(A)=\#(B)\leq n, A\cap B=\emptyset, {\boldsymbol{\varepsilon}},{\boldsymbol{\eta}}\in \Upsilon}} \frac{\Vert {\mathbf{1}}_{{\boldsymbol{\varepsilon}}A} \Vert }{\Vert {\mathbf{1}}_{{\boldsymbol{\eta}}B} \Vert }. $$
  • A-property parameters:

    $$\nu_{n}=\sup\biggl\{ \frac{\Vert {\mathbf{1}}_{\varepsilon A}+x\Vert }{\Vert {\mathbf{1}}_{\eta B}+x\Vert }: \#(A)=\#(B)\leq n, \varepsilon, \eta\in \Upsilon, \vert x\vert _{\infty}\leq1, A\mathbin{\dot{\cup}}B \mathbin{\dot{\cup}}x\biggr\} , $$

where \(\vert x\vert _{\infty} =\sup_{i}\vert e_{i}^{*}(x)\vert \), \(\operatorname{supp}{x}=\{i\in \mathbb{N}: e_{i}^{*}(x)\neq0\}\), and \(A\mathbin{\dot{\cup}}B\mathbin{\dot{\cup}}x\) means that A, B and suppx are pairwise disjoint.

To prove our results, we shall develop the technique used in the proof of Theorem 1.3. We will make use of the properties of the truncation operators defined below.

For any \(z \in\mathbb{C}\), we set

$$\operatorname{sign}z = \textstyle\begin{cases} z/\vert z\vert & \mbox{if }\vert z\vert \neq0,\\ 0 & \mbox{if }z=0. \end{cases} $$

And for each \(\alpha>0\), we define the α-truncation of z by

$$T_{\alpha}(z) = \textstyle\begin{cases} \alpha \operatorname{sign}(z) & \mbox{if }\vert z\vert > \alpha,\\ z & \mbox{if }\vert z\vert \leq\alpha. \end{cases} $$

We extend \(T_{\alpha}\) to an operator in X by

$$T_{\alpha}(x)=\sum_{i}T_{\alpha} \bigl(e_{i}^{*}(x)\bigr)e_{i}=\sum _{i \in \Lambda_{\alpha}}\alpha \frac{e_{i}^{*}(x)}{\vert e_{i}^{*}(x)\vert }e_{i}+\sum _{i \notin\Lambda_{\alpha}}e_{i}^{*}(x)e_{i}, $$

where \(\Lambda_{\alpha}=\{i: \vert e_{i}^{*}(x)\vert >\alpha\}\). The sum above converges, since \(\Lambda_{\alpha}\) is finite. Moreover, \(T_{\alpha}(x)\) satisfies \(e_{i}^{*}(T_{\alpha}(x))=T_{\alpha}(e_{i}^{*}(x))\) for every i.
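Applied coefficientwise to a finitely supported sequence, \(T_{\alpha}\) can be sketched in a few lines of Python (our own illustration; the function names are ours):

```python
def sign(z):
    # z/|z| for z != 0, and 0 for z = 0; works for real or complex z
    return z / abs(z) if z != 0 else 0

def truncate(coeffs, alpha):
    # alpha-truncation: caps |c| at alpha while keeping the sign/phase of c,
    # so e_i^*(T_alpha(x)) = T_alpha(e_i^*(x)) holds by construction
    return [alpha * sign(c) if abs(c) > alpha else c for c in coeffs]

print(truncate([3.0, -2.0, 0.5], 1.0))  # [1.0, -1.0, 0.5]
print(truncate([3 + 4j], 1.0))          # same phase as 3+4j, modulus 1
```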

We also need the following lemmas.

Lemma 3.1


If \(x \in X\) and \({\boldsymbol{\varepsilon}}=\{\operatorname{sign}e_{i}^{*}(x)\}\), then

$$\min _{i\in\Lambda}\bigl\vert e_{i}^{*}(x)\bigr\vert \Vert {\mathbf{1}}_{\boldsymbol{\varepsilon}\Lambda} \Vert \leq\widetilde{g}_{n} \Vert x \Vert ,\quad \forall\Lambda\in\mathcal{G}(x, n). $$

Lemma 3.2


Let \(x \in X\) and \(\alpha\geq \max \vert e_{i}^{*}(x)\vert \). Then

$$\Vert x+z\Vert \leq\nu_{n} \Vert x+\alpha{\mathbf{1}}_{\eta B} \Vert $$

for all \(\eta\in\Upsilon\) and for all B and z such that \(\#(\operatorname{supp}z)\leq\#(B) \leq n\), \(B\mathbin{\dot{\cup}} x\mathbin{\dot{\cup}} z\) and \(\vert z\vert _{\infty}\leq\alpha\).

The following lemma is a special case of Lemma 3.2 for \(x=0\).

Lemma 3.3

Let \(z \in X\) and \(B \subset \mathbb{N}\) such that \(\#(\operatorname{supp}z) \leq\#(B) \leq n, B\mathbin{\dot{\cup}}z\). Then

$$\Vert z\Vert \leq\widetilde{\mu}_{n}^{d} \max\bigl\vert e_{i}^{*}(z)\bigr\vert \Vert {\mathbf{1}}_{{\boldsymbol{\eta}}B} \Vert , \quad \forall \boldsymbol{\eta} \in\Upsilon. $$

Now we present our results.

Theorem 3.4

If Ψ is a basis for a Banach space X, then for all \(n \geq1\) we have

$$L_{n}^{\mathrm{ch}}(\Psi)\leq g_{2n}^{c}+2 \nu_{n}\widetilde{g}_{n} \quad \textit{and}\quad \widetilde{L}_{n}^{\mathrm{ch}}( \Psi)\leq g_{n}^{c}+\nu_{n}\widetilde{g}_{n}. $$


For \(x \in X\) let \(a_{i}=e_{i}^{*}(x)\), and let \(\Gamma\in \mathcal{G}(x, n)\). Fix \(\epsilon>0\) and pick \(z=\sum_{A}b_{i}e_{i} \in \Sigma_{n}\) with \(\operatorname{supp}(z) \subset A\) and \(\#(A)=\#(\Gamma)=n\) such that \(\Vert x-z\Vert <\sigma_{n}(x)+\epsilon\). Set \(y:=x-z=\sum_{i}y_{i}e_{i}\), for which

$$y_{i} = \textstyle\begin{cases} a_{i}, & i\notin A,\\ a_{i}-b_{i}, & i \in A. \end{cases} $$

Let \(\alpha=\max_{\Gamma^{c}}\vert e_{i}^{*}(x)\vert \). We define

$$ \omega=x-P_{\Gamma}\bigl(x-T_{\alpha}(y)\bigr)=(I-P_{\Gamma}) \bigl(x-T_{\alpha}(y)\bigr)+T_{\alpha}(y). $$
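The second expression for ω is obtained by adding and subtracting \(T_{\alpha}(y)\):

$$x-P_{\Gamma}\bigl(x-T_{\alpha}(y)\bigr)=\bigl(x-T_{\alpha}(y)\bigr)+T_{\alpha}(y)-P_{\Gamma}\bigl(x-T_{\alpha}(y)\bigr)=(I-P_{\Gamma}) \bigl(x-T_{\alpha}(y)\bigr)+T_{\alpha}(y). $$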

For the first term of the above equality, we have

$$ \begin{aligned}(I-P_{\Gamma}) \bigl(x-T_{\alpha}(y) \bigr)&=(I-P_{\Gamma \cup A}) \bigl(x-T_{\alpha}(y)\bigr)+P_{A \backslash \Gamma} \bigl(x-T_{\alpha}(y)\bigr):=X+Z. \end{aligned} $$

Note that \(y_{i}=a_{i}\) for \(i \notin A\), and \(\vert a_{i}\vert \leq\alpha\) for \(i \notin\Gamma\). Hence, for \(i \in A^{c} \cap\Gamma^{c}\), \(T_{\alpha}(y_{i})=T_{\alpha}(a_{i})=a_{i}\), so \(\vert X\vert _{\infty }=0\). For \(Z=P_{A \backslash\Gamma}(x-T_{\alpha}(y))\), we have

$$\vert Z\vert _{\infty}=\max_{i\in A \backslash \Gamma}\bigl\vert e_{i}^{*}\bigl(x-T_{\alpha}(y)\bigr)\bigr\vert \leq\max _{i\in A \backslash \Gamma}\bigl(\vert a_{i}\vert +\bigl\vert T_{\alpha}(y_{i})\bigr\vert \bigr)\leq2\alpha $$

and \(\#(\operatorname{supp}Z)\leq\#(A\backslash\Gamma)=\#(\Gamma\backslash A)\leq n\). Applying Lemma 3.2 with \(\eta=\{\operatorname{sign}e_{i}^{*}(y)\}\), we obtain

$$\begin{aligned} \bigl\Vert (I-P_{\Gamma}) \bigl(x-T_{\alpha}(y)\bigr)\bigr\Vert \leq& \nu_{n}\bigl\Vert (I-P_{\Gamma\cup A}) \bigl(x-T_{\alpha}(y)\bigr)+2\alpha{ \mathbf{1}}_{\eta \widetilde{\Gamma}}\bigr\Vert \\ =&2\nu_{n} \max_{\Gamma^{c}}\bigl\vert e_{i}^{*}(x)\bigr\vert \Vert {\mathbf{1}}_{\eta\widetilde {\Gamma}} \Vert \\ \leq& 2\nu_{n} \min_{\Gamma\backslash A}\bigl\vert e_{i}^{*}(y)\bigr\vert \Vert {\mathbf{1}}_{\eta\widetilde{\Gamma }}\Vert \\ \leq&2\nu_{n} \min_{\widetilde{\Gamma}}\bigl\vert e_{i}^{*}(y)\bigr\vert \Vert {\mathbf{1}}_{\eta\widetilde{\Gamma}} \Vert , \end{aligned}$$

where we choose Γ̃ as a set of cardinality \(\#(\widetilde{\Gamma})=\#(\Gamma\backslash A)=\#(A \backslash \Gamma)\) in which \(x-z\) attains the largest coefficients, and hence \(\widetilde{\Gamma} \in\mathcal{G}(y, n)\). Using (3.2) and Lemma 3.1, we conclude

$$ \bigl\Vert (I-P_{\Gamma}) \bigl(x-T_{\alpha}(y)\bigr)\bigr\Vert \leq 2\nu_{n} \widetilde{g}_{n} \Vert y\Vert . $$

For the second term of (3.1), [20], Lemma 2.3, gives

$$\bigl\Vert T_{\alpha}(y)\bigr\Vert \leq g_{\#(\Lambda_{\alpha})}^{c} \Vert y\Vert , $$

where \(\Lambda_{\alpha}=\{i: \vert e_{i}^{*}(y)\vert >\alpha\}\).


Since

$$ \Lambda_{\alpha}=(\Lambda_{\alpha}\cap A) \cup \bigl( \Lambda_{\alpha}\cap A^{c}\bigr)=\bigl\{ i\in A: \vert y_{i}\vert > \alpha\bigr\} \cup \bigl\{ i \in A^{c}: \vert a_{i}\vert > \alpha\bigr\} \subset A \cup\Gamma, $$

we have

$$\Vert \omega \Vert \leq\bigl(g_{\#(A \cup\Gamma)}^{c}+2\nu_{n} \widetilde{g}_{n}\bigr) \Vert y\Vert \leq\bigl(g_{2n}^{c}+2 \nu_{n} \widetilde{g}_{n}\bigr) \Vert y\Vert . $$

Since \(x-\omega=P_{\Gamma}(x-T_{\alpha}(y))\), clearly \(\operatorname{supp}(x-\omega)\subset\Gamma\). We have

$$\bigl\Vert x-CG_{n}(x)\bigr\Vert \leq \Vert \omega \Vert \leq \bigl(g_{2n}^{c}+2\nu _{n} \widetilde{g}_{n} \bigr) \Vert x-z\Vert \leq \bigl(g_{2n}^{c}+2 \nu_{n}\widetilde{g}_{n}\bigr) \bigl(\sigma_{n}(x)+ \epsilon\bigr). $$

Letting \(\epsilon\rightarrow0\), we conclude that

$$L_{n}^{\mathrm{ch}}\leq g_{2n}^{c}+2 \nu_{n}\widetilde{g}_{n}. $$

The estimate for \(\widetilde{L}_{n}^{\mathrm{ch}}\) is similar: we choose an index set A with \(\#(A)\leq n\) such that \(\Vert x-P_{A}x\Vert \leq\widetilde{\sigma}_{n}(x)+\epsilon\), and let \(\widetilde{\Gamma}=\Gamma\backslash A\). For \(y=x-P_{A}x\),

$$y_{i} = e_{i}^{*}(y)=\textstyle\begin{cases} a_{i}, & i\notin A,\\ 0, & i \in A. \end{cases} $$

Now we estimate ω. On one hand, we have

$$\Lambda_{\alpha}=\bigl\{ i:\bigl\vert e_{i}^{*}(y)\bigr\vert > \alpha\bigr\} \subset\bigl\{ i:\vert a_{i}\vert >\alpha\bigr\} \subset \Gamma, $$


so that

$$ \bigl\Vert T_{\alpha}(y)\bigr\Vert \leq g_{\#(\Lambda_{\alpha})}^{c} \Vert y\Vert \leq g_{n}^{c} \Vert y\Vert . $$

On the other hand, note that \(\widetilde{\Gamma}=\Gamma\backslash A \in\mathcal{G}(x-P_{A}x, \#(\Gamma\backslash A))\). We have the same inequalities as (3.2) and (3.3) for \(\eta=\{\operatorname{sign}e_{i}^{*}(x-P_{A}x)\}\). Thus,

$$\widetilde{L}_{n}^{\mathrm{ch}} \leq g_{n}^{c}+ \nu_{n} \widetilde{g}_{n}. $$

The proof of Theorem 3.4 is completed. □

Next, we replace \(\nu_{n}\) by \(\widetilde{\mu}_{n}^{d}\) in Theorem 3.4 and get another estimate for the upper bounds.

Theorem 3.5

If Ψ is a basis for a Banach space X, then for all \(n \geq1\) we have

$$L_{n}^{\mathrm{ch}}(\Psi)\leq g_{2n}^{c}+2 \widetilde{\mu}_{n}^{d}\widetilde{g}_{n} \quad \textit{and}\quad \widetilde{L}_{n}^{\mathrm{ch}}(\Psi) \leq g_{n}^{c}+\widetilde{\mu}_{n}^{d} \widetilde{g}_{n}. $$


With the same notation as in Theorem 3.4, we now deal with \(P_{\Gamma^{c}}(x-T_{\alpha}(y))\). It is clear that

$$P_{\Gamma^{c}}\bigl(x-T_{\alpha}(y)\bigr)=\sum _{A\backslash\Gamma}\bigl(a_{i}-T_{\alpha}(y_{i}) \bigr)e_{i}. $$

Now choose \(\eta=\{\operatorname{sign}e_{i}^{*}(x-z)\}\), \(B=\Gamma\setminus A\). Notice that \(\operatorname{supp}(\sum_{A\backslash \Gamma}(a_{i}-T_{\alpha}(y_{i}))e_{i})\mathbin{\dot{\cup}}B\),

$$\#\biggl(\operatorname{supp}\biggl(\sum_{A\backslash\Gamma} \bigl(a_{i}-T_{\alpha}(y_{i})\bigr)e_{i} \biggr)\biggr) \leq\#(A \backslash\Gamma)=\#(B)\leq n\quad \mbox{and}\quad B \in \mathcal{G}\bigl(P_{A^{c}}(x-z), n\bigr), $$

so, by Lemma 3.3, we can continue the estimate

$$\begin{aligned} \biggl\Vert \sum _{A \backslash\Gamma}\bigl(a_{i}-T_{\alpha}(y_{i})\bigr)e_{i}\biggr\Vert \leq&\widetilde{ \mu}_{n}^{d} \max_{A \backslash\Gamma}\bigl\vert a_{i}-T_{\alpha}(y_{i})\bigr\vert \Vert {\mathbf{1}}_{\eta B}\Vert \\ \leq&\widetilde{\mu}_{n}^{d} \max_{A \backslash \Gamma} \bigl(\vert a_{i}\vert +\bigl\vert T_{\alpha}(y_{i}) \bigr\vert \bigr) \Vert {\mathbf{1}}_{\eta B}\Vert \\ \leq &2 \alpha \widetilde{\mu}_{n}^{d} \Vert { \mathbf{1}}_{\eta B}\Vert \\ \leq&2 \max_{\Gamma^{c}}\vert a_{i}\vert \widetilde{ \mu}_{n}^{d} \Vert {\mathbf{1}}_{\eta B}\Vert . \end{aligned}$$

Then inequality (3.6) and Lemma 3.1 give

$$\begin{aligned} \bigl\Vert P_{\Gamma^{c}} \bigl(x-T_{\alpha}(y)\bigr)\bigr\Vert \leq&2 \max_{\Gamma^{c}} \vert a_{i}\vert \widetilde{\mu}_{n}^{d} \Vert {\mathbf{1}}_{\eta B}\Vert \\ \leq&2 \min_{\Gamma\setminus A}\bigl\vert e_{i}^{*}(x-z)\bigr\vert \widetilde{\mu}_{n}^{d} \Vert {\mathbf{1}}_{\eta B} \Vert \\ \leq& 2 \widetilde{\mu}_{n}^{d} \widetilde{g}_{n} \bigl\Vert P_{A^{c}}(x-z)\bigr\Vert \\ \leq &2 \widetilde{\mu}_{n}^{d} \widetilde{g}_{n} \Vert x-z\Vert . \end{aligned}$$

Using (3.4), (3.5) and (3.7), we have

$$\bigl\Vert x-CG_{n}(x)\bigr\Vert \leq \Vert \omega \Vert \leq \bigl(g_{2n}^{c}+2 \widetilde{\mu}_{n}^{d} \widetilde{g}_{n}\bigr) \Vert x-z\Vert \leq\bigl(g_{2n}^{c}+2 \widetilde{\mu}_{n}^{d} \widetilde{g}_{n}\bigr) \bigl(\sigma_{n}(x)+\epsilon\bigr). $$

Letting \(\epsilon\rightarrow0\), we conclude \(L_{n}^{\mathrm{ch}}\leq g_{2n}^{c}+2 \widetilde{\mu}_{n}^{d} \widetilde{g}_{n}\). The estimate of \(\widetilde{L}_{n}^{\mathrm{ch}}\) can be obtained in a similar way. We complete the proof. □

The following corollary shows that the upper bounds in Theorem 3.5 are asymptotically optimal for quasi-greedy bases from the viewpoint of convergence rate.

Corollary 3.6

If Ψ is a quasi-greedy basis of a Banach space X with the quasi-greedy constant \(\mathfrak{K}\), then

$$L_{n}^{\mathrm{ch}}(\Psi) \leq20\kappa^{2} \mathfrak{K}^{2}\mu_{n}^{d} \quad \textit{and}\quad \widetilde{L}_{n}^{\mathrm{ch}}(\Psi) \leq12\kappa^{2} \mathfrak{K}^{2}\mu_{n}^{d}, $$

with \(\kappa=1\) or 2 for the real or complex spaces, respectively.


We need the additional inequality

$$\widetilde{\mu}_{n}^{d} \leq 4\kappa^{2} \gamma_{n} \mu_{n}^{d}, $$

where \(\gamma_{n} =\sup\{\frac{\Vert {\mathbf{1}}_{\varepsilon B}\Vert }{ \Vert {\mathbf{1}}_{\varepsilon A}\Vert }: B \subset A, \#(A) \leq n, \varepsilon\in\Upsilon\}\), to pass from \(\widetilde{\mu}_{n}^{d}\) to \(\mu_{n}^{d}\). The proof of this inequality is almost the same as that of Lemma 3.4 in [20]. It is clear that, for a quasi-greedy basis \((e_{i})\) with quasi-greedy constant \(\mathfrak{K}\), we have \(\gamma_{n} \leq g_{n} \leq\mathfrak{K}\).

By Theorem 3.5 and the above inequality, we have

$$ \begin{aligned} L_{n}^{\mathrm{ch}}&\leq g_{2n}^{c}+2\widetilde{\mu}_{n}^{d} \widetilde {g}_{n} \\ &\leq1+\mathfrak{K}+8\kappa^{2}\gamma_{n} \mu_{n}^{d} \widetilde{g}_{n} \\ &\leq 1+\mathfrak{K}+16\kappa^{2} {\mathfrak{K}}^{2} \mu_{n}^{d}, \end{aligned} $$

where we have used \(\widetilde{g}_{n} \leq2\mathfrak{K}\) in the last inequality. Note that \(1\leq\mu_{n}\leq2\mathfrak{K}\mu_{n}^{d}\) and \(1\leq\mathfrak{K}\), thus \(\mathfrak{K}\leq2\mathfrak{K}^{2}\mu_{n}^{d}\) and we get the first conclusion

$$ \begin{aligned} L_{n}^{\mathrm{ch}}&\leq 1+ \mathfrak{K}+16\kappa^{2} {\mathfrak{K}}^{2} \mu_{n}^{d} \\ &\leq 2\mathfrak{K}+16\kappa^{2} {\mathfrak{K}}^{2} \mu_{n}^{d} \\ &\leq20\kappa^{2} {\mathfrak{K}}^{2}\mu_{n}^{d}. \end{aligned} $$

Similarly for \(\widetilde{L}_{n}^{\mathrm{ch}}\), we have

$$ \begin{aligned} \widetilde{L}_{n}^{\mathrm{ch}}& \leq g_{n}^{c}+\widetilde{\mu}_{n}^{d} \widetilde{g}_{n} \\ &\leq1+\mathfrak{K}+4\kappa^{2}\gamma_{n} \mu_{n}^{d} \widetilde{g}_{n} \\ &\leq 1+\mathfrak{K}+8\kappa^{2} {\mathfrak{K}}^{2} \mu_{n}^{d} \\ &\leq12\kappa^{2} {\mathfrak{K}}^{2}\mu_{n}^{d}. \end{aligned} $$

The proof of this corollary is completed. □

From the definition of the quasi-greedy constant, we know \(\mathfrak{K}\geq1\). Moreover, it is shown in [5] that if \(\mathfrak{K}= 1\), then Ψ is an unconditional basis. So, compared to the \(O(\mathfrak{K}^{3})\) bounds in Theorem 1.3, our results improve the implicit constants for all conditional quasi-greedy bases, since in this case \(\mathfrak{K}>1\).

Using Lemma 3.3 and the approach we adopt in the proof of Theorem 3.5, we obtain the following results for TGA.

Theorem 3.7

For all \(n \geq1\),

$$L_{n}(\Psi) \leq k_{2n}^{c}+\widetilde{g}_{n} \widetilde{\mu}_{n}^{d} \quad \textit{and}\quad \widetilde{L}_{n}( \Psi)\leq g_{n}^{c}+\widetilde{g}_{n}\widetilde{ \mu}_{n}^{d}. $$

Corollary 3.8

If Ψ is a quasi-greedy basis of a Banach space X with the quasi-greedy constant \(\mathfrak{K}\), then

$$\max\bigl\{ k_{n}^{c}, \mu_{n}^{d}\bigr\} \leq L_{n}(\Psi)\leq k_{2n}^{c}+8 \kappa^{2}\mathfrak {K}^{2}\mu_{n}^{d} $$


$$\mu_{n}^{d}\leq\widetilde{L}_{n}(\Psi)\leq 12 \kappa^{2}\mathfrak{K}^{2}\mu_{n}^{d}, $$

with \(\kappa=1\) or 2 for the real or complex spaces, respectively.


In [20], Berná, Blasco and Garrigós established the inequalities:

$$L_{n}(\Psi)\leq k_{2n}^{c}+\widetilde{g}_{n} \widetilde{\mu}_{n} \quad \mbox{and} \quad \widetilde{L}_{n}(\Psi)\leq g_{n}^{c}+\widetilde{g}_{n}\widetilde{ \mu}_{n}. $$

Note that the bounds in Theorem 3.7 are slightly better since \(\widetilde{\mu}_{n}^{d}\leq \widetilde{\mu}_{n}\).

Comparing Theorem 3.5 with Theorem 3.7, we see that the unconditionality parameter \(k_{n}^{c}\) is replaced by the quasi-greedy parameter \(g_{n}^{c}\) in the first term of the additive bound. In general, \(g_{n}^{c}\) is essentially smaller than \(k_{n}^{c}\), so the estimate for \(L_{n}^{\mathrm{ch}}\) is better than that for \(L_{n}\). In particular, for some conditional quasi-greedy bases, a faster convergence rate is obtained.

4 Status of the research on CTGA

In this section, we present a short overview of some results and questions on the efficiency of CTGA. We will compare the approximation properties of CTGA with TGA.

We first consider quasi-greedy bases. It is well known that, for some conditional quasi-greedy bases, CTGA has a better convergence rate than TGA. In particular, for a democratic, conditional quasi-greedy basis Ψ, it follows from Theorem 3.5 that the inequality

$$\bigl\Vert x-CG_{n}(x,\Psi)\bigr\Vert \leq C\sigma_{n}(x, \Psi) $$

holds for any \(x\in X\), \(n\geq1\), while for TGA with respect to these bases, from Theorem 1.5 in [20], we know that, for any \(x\in X\), \(n\geq1\), the best inequality we can get is

$$\bigl\Vert x-G_{n}(x,\Psi)\bigr\Vert \leq C \ln n\, \sigma_{n}(x,\Psi). $$

On the other hand, for unconditional bases, CTGA has the same rate of convergence as TGA.

Secondly, we consider non-quasi-greedy bases. From Theorem 1.1, we know that, for these bases, CTGA has better convergence properties than TGA. However, as for the convergence rate, the results obtained so far show that CTGA does not make essential improvements over TGA. For example, we proved that the orders of the Lebesgue constants of CTGA with respect to the trigonometric system and the summing basis are the same as those of TGA. Moreover, for the sparse class \(A^{1}(\mathcal{T})\) defined by

$$A^{1}(\mathcal{T}):=\biggl\{ T: \sum_{k\in\mathbb{Z}} \bigl\vert c_{k}(T)\bigr\vert \leq 1\biggr\} , $$

the error performance of CTGA is also the same as TGA. In fact, we know from [11] and [12] that, for \(1\leq p\leq\infty\),

$$\sup_{f\in A^{1}(\mathcal{T})}\bigl\Vert f-CG_{n}(f,\mathcal{T})\bigr\Vert _{p}\asymp\sup_{f\in A^{1}(\mathcal{T})}\bigl\Vert f-G_{n}(f,\mathcal {T})\bigr\Vert _{p}. $$

For the Lebesgue constant \(L_{n}^{\mathrm{ch}}\) of CTGA with respect to non-quasi-greedy bases, many problems are worth studying. Can one establish new upper and lower estimates for general bases, and then obtain the exact rate of \(L_{n}^{\mathrm{ch}}\) for more non-quasi-greedy bases? Another interesting question is the following: can one find a non-quasi-greedy basis for which CTGA has an essentially smaller Lebesgue constant than TGA?


  1. Hajek, P., Santalucia, V.M., Vanderwerff, J., Zizler, V.: Biorthogonal Systems in Banach Spaces. Springer, New York (2008)

  2. Konyagin, S.V., Temlyakov, V.N.: A remark on greedy approximation in Banach spaces. East J. Approx. 5, 365–379 (1999)

  3. Wojtaszczyk, P.: Greedy algorithm for general biorthogonal systems. J. Approx. Theory 107, 293–314 (2000)

  4. Albiac, F., Ansorena, J.L.: Characterization of 1-almost greedy bases. Rev. Mat. Complut. 30, 13–24 (2017)

  5. Albiac, F., Ansorena, J.L.: Characterization of 1-quasi greedy bases. J. Approx. Theory 201, 7–12 (2016)

  6. Dilworth, S.J., Kutzarova, D., Oikhberg, T.: Lebesgue constants for the weak greedy algorithm. Rev. Mat. Complut. 28, 393–409 (2015)

  7. Garrigós, G., Hernández, E., Oikhberg, T.: Lebesgue-type inequalities for quasi-greedy bases. Constr. Approx. 38, 447–470 (2013)

  8. Long, J.F., Ye, P.X.: Weak greedy algorithms for nonlinear approximation with quasi-greedy bases. WSEAS Trans. Math. 13, 525–534 (2014)

  9. Temlyakov, V.N., Yang, M., Ye, P.X.: Lebesgue-type inequalities for greedy approximation with respect to quasi-greedy bases. East J. Approx. 17, 127–138 (2011)

  10. Temlyakov, V.N.: Greedy algorithm and m-term trigonometric approximation. Constr. Approx. 14, 569–587 (1998)

  11. Dilworth, S.J., Kutzarova, D., Temlyakov, V.N.: Convergence of some greedy algorithms in Banach spaces. J. Fourier Anal. Appl. 8, 489–505 (2002)

  12. Ye, P.X., Wei, X.J.: Efficiency of weak greedy algorithms for m-term approximations. Sci. China Math. 59, 697–714 (2016)

  13. Dilworth, S.J., Kalton, N.J., Kutzarova, D.: On the existence of almost greedy bases in Banach spaces. Stud. Math. 159, 67–101 (2003)

  14. Albiac, F., Kalton, N.J.: Topics in Banach Space Theory. Springer, New York (2012)

  15. Dilworth, S.J., Kutzarova, D., Schlumprecht, T., Wojtaszczyk, P.: Weak thresholding greedy algorithms in Banach spaces. J. Funct. Anal. 263, 3900–3921 (2012)

  16. Temlyakov, V.N., Tikhonov, S.: Sparse Approximation with Bases. Springer, Basel (2015)

  17. Zygmund, A.: Trigonometric Series. Cambridge University Press, Cambridge (1959)

  18. DeVore, R.A., Temlyakov, V.N.: Nonlinear approximation by trigonometric sums. J. Fourier Anal. Appl. 2, 29–48 (1995)

  19. Gluskin, E.D.: Extremal properties of orthogonal parallelepipeds and their applications to the geometry of Banach spaces. Mat. Sb. 136, 85–96 (1988)

  20. Berná, P.M., Blasco, Ó., Garrigós, G.: Lebesgue inequalities for the greedy algorithm in general bases. Rev. Mat. Complut. 30, 369–392 (2017)



This work is supported by the National Natural Science Foundation of China [No. 11671213].

Author information




All authors conceived of the study, participated in its design and coordination, drafted the manuscript, participated in the sequence alignment, and read and approved the final manuscript.

Corresponding author

Correspondence to Chunfang Shao.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Shao, C., Ye, P. Lebesgue constants for Chebyshev thresholding greedy algorithms. J Inequal Appl 2018, 102 (2018).
