
Projected Tikhonov regularization method for Fredholm integral equations of the first kind

Abstract

In this paper, we consider a variant of the projected Tikhonov regularization method for solving Fredholm integral equations of the first kind. We give a theoretical analysis of this method in the Hilbert space \(L^{2}(a,b)\) setting and establish some convergence rates under certain regularity assumptions on the exact solution and the kernel \(k(\cdot,\cdot)\). Some numerical results are also presented.

Introduction

Let \(H=L^{2}((a,b);\mathbb{R})\) and consider the Fredholm integral equation of the first kind

$$ \int_{a}^{b}k(t,s)f(s)\,ds=g(t),\quad t\in[a,b], $$
(1)

where \(k(\cdot,\cdot)\) and g are known functions, and f is the unknown function to be determined. The equation can be written as an operator equation

$$ K:H\longrightarrow H, \quad\quad f\longmapsto g=Kf. $$
(2)

Many inverse problems in applied science and engineering (see, e.g., [1–3] and references therein) lead to the solution of Fredholm integral equations of the first kind (1).
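Before turning to the theory, it is instructive to see the ill-posedness of (1) numerically. The following is a hedged sketch (the kernel \(k(t,s)=e^{ts}\), the interval \([0,1]\), and the midpoint rule are our illustrative choices, not taken from the paper): discretizing the integral operator yields a severely ill-conditioned matrix, which is the finite-dimensional face of the ill-posedness discussed below.

```python
import numpy as np

# Midpoint-rule discretization of (1) on [0,1] with the smooth
# illustrative kernel k(t,s) = exp(t*s) (an assumption for this sketch).
n = 40
s = (np.arange(n) + 0.5) / n            # midpoint nodes t_i = s_i
K = np.exp(np.outer(s, s)) / n          # K[i, j] ~ k(t_i, s_j) * w_j, w_j = 1/n

# The smooth kernel makes the singular values decay extremely fast,
# so the matrix is numerically singular.
cond = np.linalg.cond(K)
print(f"condition number for n = {n}: {cond:.3e}")
```

Already at this modest size the matrix is numerically singular, so naive inversion amplifies any data noise; this is what motivates the regularization studied below.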

Several numerical methods are available in the literature for solving linear integral equations of the first kind; we can cite, for example, multiscale methods [4–8], spectral-collocation methods [9, 10], reproducing kernel Hilbert space methods [11, 12], eigenvalue approximation methods [13–17], quadrature-based collocation methods [18, 19], projection methods [20–28], and other interesting methods presented in detail in the books [1–3, 29–36].

In regularizing procedures, several authors have studied finite-dimensional approximations obtained by projecting regularized approximations onto finite-dimensional subspaces. Such methods may be called regularization projection methods.

The main idea of regularization by projection is to project the least squares minimization on a finite-dimensional subspace to obtain a well-conditioned problem and, consequently, a stabilization of the generalized inverse of the approximate operator. We can distinguish two different cases of regularization by projection. The first one is the regularization in preimage space, and the second is regularization in image space; see, for example, [21, 22, 26, 27, 33, 37].

Following the ideas developed in [18, 19] and [26, 27], we analyze a variant of the projected Tikhonov regularization method applied to problem (2) in the Hilbert space \(L^{2}(a,b)\) setting. We develop the theoretical framework of this approximation method and give some convergence results under certain regularity conditions on the kernel \(k(\cdot,\cdot)\) and on the solution of the problem in question.

More precisely, we build a projection method using very simple mathematical tools, which can be concretized and implemented numerically. Moreover, we give natural conditions on the kernel \(k(\cdot,\cdot)\) of the operator K that enable us to establish the convergence of this approach. For the projection subspace, we use the Legendre polynomials, which are well studied in the literature compared to other classes of polynomials. This judicious choice also yields a simple and explicit formula for the approximation of \(K^{*}K\) (see (25)). It is important to note that in [27] the author gives sufficient conditions on \(\Vert A-A_{n}\Vert \) within an abstract framework to establish the convergence of this approximation, an approach that is very restrictive in practice and difficult to exploit numerically.

In this investigation, we assume that

(A1):

\(k(\cdot,\cdot)\) is nondegenerate.

(A2):

\(k(\cdot,\cdot)\in L^{2}((a,b)\times(a,b);\mathbb{R})\), that is, \(\kappa^{2}=\int_{a}^{b}\int_{a}^{b}\vert k(t,s)\vert ^{2}\,dt\,ds < +\infty\).

It is well known that under these conditions, K is a compact (Hilbert-Schmidt) integral operator with infinite-dimensional range (\(\operatorname {dim}(\mathcal{R}(K))=+\infty\)). In this case, \(\mathcal{R}(K)\) is not closed, and problem (2) belongs to the class of ill-posed problems. Ill-posedness means that \(K^{\dagger}\) (the Moore-Penrose inverse) or \(K^{-1}\) (when K is injective) is an unbounded operator. Consequently, standard numerical procedures for solving such equations are unstable and break down when the data are not exact; that is, small perturbations of the observation data may lead to large changes in the computed solution.

To overcome this difficulty and obtain stable approximate solutions of ill-posed problems, regularization procedures are employed; Tikhonov regularization is one such procedure. This method consists in minimizing over H the so-called Tikhonov functional

$$\Phi_{\alpha}(f)= \Vert Kf-g\Vert _{H}^{2}+\alpha \Vert f\Vert _{H}^{2}, $$

where \(\alpha> 0\) is the regularization parameter. The regularized solution \(f_{\alpha}= \operatorname{\arg\min}_{f\in H}\Phi_{\alpha}(f)\) is the unique minimizer of the Tikhonov functional \(\Phi_{\alpha}\) and the unique solution of the normal equation

$$\bigl(\alpha I +K^{*}K \bigr)f=K^{*}g. $$

The linear operator \(R(\alpha)= (\alpha I +K^{*}K)^{-1}K^{*}\in\mathcal {L}(H)\) is called a regularizing operator, and we have

$$\begin{aligned}& \bigl\Vert \bigl(\alpha I+ K^{*}K \bigr)^{-1}K^{*} \bigr\Vert \leq \frac{1}{2\sqrt{\alpha}}, \end{aligned}$$
(3)
$$\begin{aligned}& \Vert f-f_{\alpha} \Vert _{H} \longrightarrow0,\quad\alpha\longrightarrow0. \end{aligned}$$
(4)
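The bound in (3) is easy to check numerically. The following hedged sketch (a small synthetic matrix stands in for the discretized integral operator; its size and entries are our assumptions) builds the regularizing operator \(R(\alpha)=(\alpha I+K^{*}K)^{-1}K^{*}\) and compares its norm with \(1/(2\sqrt{\alpha})\):

```python
import numpy as np

# Synthetic stand-in for the discretized operator K (an assumption).
rng = np.random.default_rng(0)
m = 30
K = rng.standard_normal((m, m)) / m
alpha = 1e-3

# Regularizing operator R(alpha) = (alpha I + K^T K)^{-1} K^T.
R = np.linalg.solve(alpha * np.eye(m) + K.T @ K, K.T)

# Spectral calculus: ||R(alpha)|| = max_i sigma_i / (alpha + sigma_i^2),
# and s/(alpha + s^2) <= 1/(2 sqrt(alpha)) for every s >= 0.
norm_R = np.linalg.norm(R, 2)          # largest singular value
print(norm_R, 1 / (2 * np.sqrt(alpha)))
assert norm_R <= 1 / (2 * np.sqrt(alpha)) + 1e-12
```

The maximum of \(s\mapsto s/(\alpha+s^{2})\) is attained at \(s=\sqrt{\alpha}\), which is why the bound is sharp whenever \(\sqrt{\alpha}\) lies inside the singular spectrum.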

To establish the main results of our work, we introduce the following assumptions:

(H1):

The operator K is injective, that is, \(N(K)=\{0\}\).

(H2):

\(g\in\mathcal{R}(K)\).

(H3):

The kernel \(k(\cdot,\cdot)\in\mathcal{C}^{r}([a,b]\times [a,b]; \mathbb{R})\), \(r\in\mathbb{N}\).

(H4):

The operator \(K^{*}\) is injective (\(\Longleftrightarrow \overline{\mathcal{R}(K)}=H\)).

Preliminaries and notation

In this section, we present the notation and functional setting and prepare some material, which will be used in our analysis. For more details, we refer the reader to [32, 36, 38].

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces. We denote by \(\mathcal{L}(H_{1}, H_{2})\) the space of all bounded linear operators from \(H_{1}\) to \(H_{2}\) (and \(\mathcal{L}(H)\) if \(H_{1}=H_{2}=H\)) with the operator norm

$$\Vert T\Vert =\sup_{\Vert u\Vert _{H_{1}}\leq1 }\Vert Tu\Vert _{H_{2}}, \quad T\in\mathcal{L}(H_{1}, H_{2}). $$

The null-space of \(T\in\mathcal{L}(H_{1}, H_{2})\) is the set \(\mathcal {N}(T)=\{u\in H_{1}: Tu=0\}\), whereas the range of T is denoted by \(\mathcal{R}(T)=T(H_{1})=\{v= Tu, u\in H_{1}\}\).

Let \(T\in\mathcal{L}(H_{1}, H_{2})\). Recall that, for \(v\in H_{2}\), the linear operator equation

$$ Tu=v $$
(5)

has a solution if and only if \(v\in\mathcal{R}(T)\).

• If \(\mathcal{R}(T)\) is infinite-dimensional and T is injective, then \(T^{-1}:\mathcal{R}(T)\longrightarrow H_{1}\) is bounded if and only if \(\mathcal{R}(T)\) is closed.

• If \(v\notin \mathcal{R}(T)\), then we look for an element \(\hat{u}\in H_{1}\) such that \(T\hat{u}\) is “closest to” v in the sense that û minimizes the functional \(\Vert Tu-v\Vert _{H_{2}}\).

Definition 2.1

Let \(T\in\mathcal{L}(H_{1}, H_{2})\). We call \(\hat{u}\in H_{1}\) a least residual norm solution (LRN solution) of (5) if

$$\Vert T\hat{u}-v\Vert _{H_{2}}= \inf_{u\in H_{1}}\Vert Tu-v\Vert _{H_{2}}. $$

Definition 2.2

For \(v\in G= \mathcal{R}(T)+\mathcal{R}(T)^{\perp}\), we denote the set of all LRN solutions of (5) by

$$S_{v}= \Bigl\{ \hat{u}\in H_{1}: \Vert T\hat{u}-v\Vert _{H_{2}}= \inf_{u\in H_{1}}\Vert Tu-v\Vert _{H_{2}} \Bigr\} . $$

Definition 2.3

Let \(v\in G=\mathcal{R}(T)+\mathcal{R}(T)^{\perp}\). Then \(u^{\dagger}\in S_{v}\) is called a best approximate solution (generalized solution) of (5) if \(\Vert u^{\dagger} \Vert _{H_{1}}=\inf_{\hat{u}\in S_{v}} \Vert \hat{u}\Vert _{H_{1}}\).

Theorem 2.1

Let \(v\in G=\mathcal{R}(T)+\mathcal {R}(T)^{\perp}\). Then there exists a unique \(u^{\dagger}\in S_{v}\) such that

$$\bigl\Vert u^{\dagger} \bigr\Vert _{H_{1}}=\inf _{\hat{u}\in S_{v}}\Vert \hat {u}\Vert _{H_{1}}, $$

and

$$u^{\dagger}\in\mathcal{N}(T)^{\perp},\quad u^{\dagger}=P \hat{u}_{0}, $$

where \(P: H_{1}\longrightarrow\mathcal{N}(T)^{\perp}\) is the orthogonal projection onto \(\mathcal{N}(T)^{\perp}\), and \(\hat{u}_{0}\) is any element of \(S_{v}\).

Definition 2.4

The Moore-Penrose (generalized) inverse \(T^{\dagger}: D(T^{\dagger})\longrightarrow H_{1}\) of T defined on the dense domain \(D(T^{\dagger})=\mathcal{R}(T)+\mathcal{R}(T)^{\perp} \) maps \(v\in D(T^{\dagger })\) to the best-approximate solution of (5), that is, \(T^{\dagger }v= u^{\dagger}\).

Remark 2.1

  • \(T^{\dagger}= T^{-1}\) if \(\mathcal{R}(T)^{\perp}=\{0\}\) and \(\mathcal {N}(T)=\{0\}\).

  • \(T^{\dagger}\) is continuous if and only if \(\mathcal{R}(T)\) is a closed subspace of \(H_{2}\).

Theorem 2.2

([18], Thm. 2.7, p.31)

Let E, F be two Banach spaces, and \((T_{n})\subset\mathcal{L}(E, F)\). Then, \(T_{n}\longrightarrow T\in\mathcal{L}(E, F)\) pointwise (i.e., \(T_{n} x\longrightarrow Tx\) for all \(x\in E\)) if and only if the sequence \((T_{n})\) is uniformly bounded, and \(T_{n} x\longrightarrow Tx\) for all \(x\in\mathcal{D}\), where \(\mathcal{D}\subset E\) is a dense subspace of E.

We denote by \((\lambda_{i}, e_{i})_{i=1}^{\infty}\) the normalized eigensystem of the compact self-adjoint operator \(A=K^{*}K\). Then A can be diagonalized according to the following formula:

$$ h= \sum_{i=1}^{\infty}\langle h, e_{i}\rangle e_{i}, \quad Ah=\sum _{i=1}^{\infty} \lambda_{i}\langle h, e_{i}\rangle e_{i}. $$
(6)
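The diagonalization (6) can be reproduced on a small symmetric matrix standing in for \(A=K^{*}K\); this is a hedged sketch with a synthetic matrix (the size and entries are our assumptions):

```python
import numpy as np

# Synthetic stand-in for the compact self-adjoint operator A = K*K.
rng = np.random.default_rng(0)
K = rng.standard_normal((6, 6))
A = K.T @ K                                   # symmetric positive semidefinite
lam, E = np.linalg.eigh(A)                    # eigenpairs (lambda_i, e_i), columns of E

h = rng.standard_normal(6)
# h = sum_i <h, e_i> e_i   and   A h = sum_i lambda_i <h, e_i> e_i, as in (6)
h_expanded = sum((h @ E[:, i]) * E[:, i] for i in range(6))
Ah_expanded = sum(lam[i] * (h @ E[:, i]) * E[:, i] for i in range(6))
assert np.allclose(h_expanded, h)
assert np.allclose(Ah_expanded, A @ h)
```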

The classical Legendre polynomials \((L_{j} )_{j\in\mathbb{N}}\) are defined on the interval \([-1, 1]\) and can be determined with the aid of the following recurrence formulae:

$$ \left \{ \textstyle\begin{array}{l@{\quad}l} L_{0}(x)=1,\quad\quad L_{1}(x)=x, \\ L_{j+1}(x)= (\frac{2j+1}{j+1} )xL_{j}(x)- (\frac{j}{j+1} )L_{j-1}(x), & j=1,2,\ldots. \end{array}\displaystyle \right . $$
(7)

In order to use these polynomials on the interval \([a, b]\), we define the so-called normalized shifted Legendre polynomials as follows. For \(x\in[a,b]\), the transformation \(y= \frac {2}{b-a}x-\frac{a+b}{b-a}\) maps the interval \([a,b]\) onto \([-1, 1]\), and the normalized shifted Legendre polynomials are given by

$$ \hat{L}_{j}(x)=\sqrt{\frac {2}{b-a}}\sqrt{ \frac{2j+1}{2}}L_{j} \biggl(\frac{2}{b-a}x-\frac{a+b}{b-a} \biggr), \quad x\in[a,b], j\in\mathbb{N}. $$
(8)

The set \((\hat{L}_{j})_{j\in\mathbb{N}}\) is a complete orthonormal system in \(H=L^{2} ((a,b); \mathbb{R} )\), namely

$$ \langle\hat{L}_{j}, \hat{L}_{i}\rangle= \int_{a}^{b} \hat{L}_{j}(x) \hat{L}_{i}(x)\,dx=\delta_{ji}, $$
(9)

where \(\delta_{ji}\) is the Kronecker symbol.

Thus, for any function \(h\in H= L^{2} ((a,b); \mathbb{R} )\), we have the Fourier-Legendre expansion

$$ h=\sum_{j=0}^{\infty}c_{j}(h) \hat{L}_{j}, $$
(10)

where the Fourier-Legendre coefficients \(c_{j}(h)\) are given by

$$c_{j}(h) =\langle h, \hat{L}_{j}\rangle= \int_{a}^{b}\hat {L}_{j}(x)h(x)\,dx, \quad j\in\mathbb{N}. $$

Projected Tikhonov regularization method

Let \(H_{n}= \operatorname {span}\{\hat{L}_{j}, j=0,1,\ldots,n\}\) be the subspace of Legendre polynomials of degree ≤n, and let \(\Pi_{n}: H\longrightarrow H_{n}\) be the orthogonal projection defined as

$$ \Pi_{n}h=\sum_{j=0}^{n}c_{j}(h) \hat{L}_{j},\quad h\in H. $$
(11)

We quote some crucial properties of \(\Pi_{n}\) ([38], pp.283-287 and [39]).

Lemma 3.1

Let \(\Pi_{n}\) be the orthogonal projection defined in (11). Then we have

$$\begin{aligned}& \forall h\in H,\quad \bigl\Vert (I-\Pi_{n})h \bigr\Vert _{L^{2}(a,b)}\longrightarrow0, \quad n\longrightarrow\infty, \end{aligned}$$
(12)
$$\begin{aligned}& \forall u\in C^{r} \bigl([a,b];\mathbb{R} \bigr),\quad \bigl\Vert (I-\Pi_{n})u \bigr\Vert _{L^{2}(a,b)}\leq cn^{-r} \bigl\Vert u^{(r)} \bigr\Vert _{L^{2}(a,b)}, \end{aligned}$$
(13)
$$\begin{aligned}& \forall u\in C^{r} \bigl([a,b];\mathbb{R} \bigr),\quad \bigl\Vert (I-\Pi_{n})u \bigr\Vert _{\infty}\leq cn^{\frac{3}{4}-r} \bigl\Vert u^{(r)} \bigr\Vert _{L^{2}(a,b)}, \end{aligned}$$
(14)

where c is a positive constant independent of n, and r is a positive integer.
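The construction (8), the orthonormality (9), and the rapid decay of the projection error in (12)-(13) can all be checked numerically. The sketch below is hedged: the interval \([a,b]=[0,1]\), the quadrature order, and the smooth test function \(u(x)=e^{x}\) are our illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

a, b = 0.0, 1.0
nodes, wq = leggauss(80)                       # Gauss-Legendre rule on [-1, 1]
x = 0.5 * (b - a) * (nodes + 1) + a            # quadrature nodes mapped to [a, b]
w = 0.5 * (b - a) * wq                         # corresponding weights

def Lhat(j, t):
    """Normalized shifted Legendre polynomial of degree j, formula (8)."""
    c = np.zeros(j + 1); c[j] = 1.0
    y = 2 * (t - a) / (b - a) - 1              # map [a, b] -> [-1, 1]
    return np.sqrt((2 * j + 1) / (b - a)) * legval(y, c)

# Orthonormality (9): <Lhat_j, Lhat_i> = delta_ji
assert abs(np.sum(w * Lhat(3, x) * Lhat(3, x)) - 1) < 1e-10
assert abs(np.sum(w * Lhat(3, x) * Lhat(5, x))) < 1e-10

# Projection error (12): ||(I - Pi_n)u|| decreases rapidly for smooth u
u = np.exp(x)
def proj_err(n):
    Pu = sum(np.sum(w * Lhat(j, x) * u) * Lhat(j, x) for j in range(n + 1))
    return np.sqrt(np.sum(w * (u - Pu) ** 2))

errs = [proj_err(n) for n in (2, 4, 8)]
print(errs)
assert errs[0] > errs[1] > errs[2]
```

Since \(e^{x}\) is analytic, the observed decay is even faster than the \(n^{-r}\) rate guaranteed by (13) for \(C^{r}\) functions.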

Remark 3.1

Let \(\mathbb{N}=\{0,1,2,\ldots\}\) and \(r\in\mathbb{N}\).

  1.

    If \(k(\cdot,\cdot)\in\mathcal{C}^{r}([a,b]\times [a,b];\mathbb{R})\), then \(\mathcal{R}(K)\subset\mathcal {C}^{r}([a,b];\mathbb{R})\). Further, denoting \(D_{i,j}k(t,s)=\frac{\partial^{i+j}}{\partial t^{i}\,\partial s^{j}} k(t,s)\), we have

    $$ \begin{aligned}[b] \Vert D_{i,j}k\Vert _{r,\infty}& = \sup_{0\leq i+j \leq r}\Vert D_{i,j}k\Vert _{\infty} \\ & = \sup_{0\leq i+j \leq r} \Bigl\{ \sup_{(t,s)\in[a,b]\times [a,b]} \bigl\vert D_{i,j}k(t,s) \bigr\vert \Bigr\} \\ & = \sup_{0\leq i+j \leq r}M_{i,j}=M_{r}< \infty. \end{aligned} $$
    (15)
  2.

    If \(f\in L^{2}((a,b);\mathbb{R})\) and \(k(\cdot,\cdot)\in \mathcal{C}^{r}([a,b]\times[a,b];\mathbb{R})\), then we have the following estimates

    $$ \begin{aligned}[b] \bigl\vert D_{i}(Kf) (t) \bigr\vert & = \biggl\vert \frac{d^{i}}{d t^{i}} (Kf) (t) \biggr\vert = \biggl\vert \int_{a}^{b}\frac{\partial^{i}}{\partial t^{i}} k(t,s)f(s)\,ds \biggr\vert \\ & \leq M_{i,0} \int_{a}^{b} \bigl\vert f(s) \bigr\vert \,ds\leq M_{i,0} \sqrt {(b-a)}\Vert f\Vert _{L^{2}(a,b)}, \end{aligned} $$
    (16)

    which leads to

    $$\begin{aligned}& \bigl\Vert D_{i}(Kf) \bigr\Vert _{L^{2}((a,b))}\leq M_{i,0} (b-a)\Vert f\Vert _{L^{2}(a,b)},\quad i=0,1,\dots,r, \end{aligned}$$
    (17)
    $$\begin{aligned}& \bigl\Vert D_{i}(Kf) \bigr\Vert _{\infty}\leq M_{i,0} \sqrt{(b-a)}\Vert f\Vert _{L^{2}(a,b)},\quad i=0,1,\dots ,r. \end{aligned}$$
    (18)

In practice, ill-posed problems like integral equations of the first kind have to be approximated by finite-dimensional problems whose solutions can be easily calculated using standard numerical software.

In this paper, we replace the original problem \(Kf=g\) by an algebraic system \(K_{n}f^{n}=g_{n}\) posed on \(\mathbb{R}^{n+1}\), where the Moore-Penrose generalized inverse \(K_{n}^{\dagger}\) is defined for every data \(g_{n}\in\mathbb{R}^{n+1}\).

We define the linear operator \(\mathbb{Q}_{n}: H=L^{2}(a,b)\longrightarrow\mathbb{R}^{n+1}\):

$$ \forall f=\sum_{j=0}^{\infty}c_{j}(f) \hat{L}_{j}\in H,\quad\mathbb{Q}_{n}f= \bigl(c_{0}(f),c_{1}(f), \ldots, c_{n}(f) \bigr)^{T}. $$
(19)

Now, the original equation (2) is replaced by an operator equation in \(\mathbb{R}^{n+1}\), which can be written abstractly as

$$ K_{n}: H\longrightarrow\mathbb{R}^{n+1},\quad \quad K_{n}f = (\mathbb{Q}_{n}K)f = \mathbb{Q}_{n}g =g_{n}. $$
(20)

Theorem 3.1

Let \(K_{n}: H=L^{2} ((a,b);\mathbb{R} )\longrightarrow\mathbb{R}^{n+1}\) be given by formula (20). Then, \(K_{n}\) is a bounded operator, and the adjoint \(K_{n}^{*}: \mathbb{R}^{n+1} \longrightarrow H=L^{2} ((a,b);\mathbb{R} )\) of \(K_{n}\) is given by

$$ \left \{ \textstyle\begin{array}{l} \forall f\in H, \forall X=(x_{0},x_{1},\ldots,x_{n})^{T}\in\mathbb {R}^{n+1},\quad \langle K_{n}f, X\rangle_{\mathbb{R}^{n+1}}=\langle f, K_{n}^{*}X\rangle _{H},\\ (K_{n}^{*}X)(t)= \sum_{j=0}^{n}x_{j}(K^{*}\hat{L_{j}})(t), \end{array}\displaystyle \right . $$
(21)

where \(K^{*}\) is the adjoint of K:

$$\bigl(K^{*}u \bigr) (t)= \int_{a}^{b}k^{*}(t,s)u(s)\,ds = \int_{a}^{b}k(s,t)u(s)\,ds. $$

Proof

(1) For every \(f\in H\), we have

$$ \begin{aligned}[b] \Vert K_{n}f\Vert _{\mathbb{R}^{n+1}}^{2}& = \sum_{j=0}^{n} \bigl\vert c_{j}(Kf) \bigr\vert ^{2} \\ & = \sum_{j=0}^{n} \bigl\vert \langle Kf, \hat{L}_{j}\rangle \bigr\vert _{H}^{2} \\ & \leq \sum_{j=0}^{\infty} \bigl\vert \langle Kf, \hat{L}_{j}\rangle \bigr\vert _{H}^{2}= \Vert Kf\Vert _{H}^{2} \\ &\leq \kappa^{2}\Vert f\Vert _{2}^{2}, \end{aligned} $$
(22)

which implies that \(K_{n} \in\mathcal{L}(H, \mathbb{R}^{n+1})\) and \(\Vert K_{n}\Vert \leq\kappa= (\int_{a}^{b}\int_{a}^{b}\vert k(t,s)\vert ^{2}\,dt\,ds )^{\frac{1}{2}}\).

(2) By definition of \(\mathbb{Q}_{n}: H=L^{2}(a,b)\longrightarrow\mathbb{R}^{n+1}\) it is easy to check that \(\mathbb{Q}_{n} \in\mathcal{L}(H, \mathbb{R}^{n+1})\). Thus, we can define its adjoint operator \(\mathbb{Q}_{n}^{*}: \mathbb{R}^{n+1} \longrightarrow H=L^{2}(a,b)\). Now, for

$$\mathbb{Q}_{n}f= \bigl(\langle f, \hat{L}_{0} \rangle_{L^{2}(a,b)}, \ldots ,\langle f, \hat{L}_{n} \rangle_{L^{2}(a,b)} \bigr)^{T}\in\mathbb{R}^{n+1},\quad X= (x_{0},\ldots,x_{n} )^{T}\in \mathbb{R}^{n+1}, $$

from the identity

$$\langle\mathbb{Q}_{n}f, X\rangle_{\mathbb{R}^{n+1}} =\sum _{j=0}^{n}x_{j}\langle f, \hat{L}_{j}\rangle_{L^{2}(a,b)} = \Biggl\langle f, \sum _{j=0}^{n} x_{j}\hat{L}_{j} \Biggr\rangle _{L^{2}(a,b)}= \bigl\langle f, \mathbb{Q}_{n}^{*}X \bigr\rangle _{L^{2}(a,b)} $$

it follows that

$$\begin{aligned}& \mathbb{Q}_{n}^{*}X = \sum _{j=0}^{n} x_{j}\hat{L}_{j}, \end{aligned}$$
(23)
$$\begin{aligned}& K_{n}^{*}X= (\mathbb{Q}_{n} K)^{*}X= K^{*} \mathbb{Q}_{n}^{*}X = \sum_{j=0}^{n} x_{j} K^{*}\hat{L}_{j}, \end{aligned}$$
(24)

and

$$ \bigl(K_{n}^{*}K_{n} \bigr)f=\sum _{j=0}^{n} \langle Kf, \hat{L}_{j} \rangle_{L^{2}(a,b)} K^{*}\hat{L}_{j}. $$
(25)

 □

Remark 3.2

The expression \((K_{n}^{*}X)(t)= \sum_{j=0}^{n}x_{j}(K^{*}\hat{L}_{j})(t)\) allows us to conclude that

$$ \mathcal{R} \bigl(K_{n}^{*} \bigr)=\operatorname {span}\bigl\{ K^{*}\hat{L}_{j}, j=0,1,\ldots,n \bigr\} ,\quad \operatorname {dim}\bigl( \mathcal{R} \bigl(K_{n}^{*} \bigr) \bigr)\leq n+1. $$
(26)

Since \(K_{n}^{*}\) is of finite rank, H can be written as

$$ H=L^{2}(a,b)= \overline{\mathcal{R} \bigl(K_{n}^{*} \bigr)}\oplus\mathcal{N}(K_{n})= \mathcal{R} \bigl(K_{n}^{*} \bigr)\oplus\mathcal{N}(K_{n}). $$
(27)

Here and in what follows, we denote \(A= K^{*}K\) and \(A_{n} = K_{n}^{*}K_{n}\). Note that \(K^{*}\) and \(K^{*}K\) are defined by

$$\begin{aligned}& \bigl(K^{*}u \bigr) (t)= \int_{a}^{b}k^{*}(t,s)u(s)\,ds, \quad k^{*}(t,s)=\overline{k(s,t)}=k(s,t), \\& \bigl(K^{*}K u \bigr) (t)= \int_{a}^{b}\theta(t,s)u(s)\,ds, \quad \theta(t,s)= \int _{a}^{b}k(\tau,t)k(\tau,s)\,d\tau. \end{aligned}$$

Now we are in a position to prove our main results. In the following theorem, we show the convergence of \(A_{n}\) to A and also other regularizing properties of \(A_{n}\).

Theorem 3.2

Let \(A= K^{*}K\) and \(A_{n}= K_{n}^{*}K_{n}\) be given by expression (25). Then, under the assumption

$$ k(\cdot,\cdot)\in L^{2} \bigl((a,b)\times(a,b);\mathbb{R} \bigr),$$
(A2)

we have

$$ \forall h\in H, \quad \Vert Ah-A_{n}h\Vert \longrightarrow0, \quad n\longrightarrow\infty. $$
(28)

Moreover, if

$$ k(\cdot,\cdot)\in\mathcal{C}^{r}\bigl([a,b]\times[a,b];\mathbb{R} \bigr), \quad r \geq1,$$
(H3)

then we have

$$ \Vert A-A_{n}\Vert \leq\varepsilon(n) \longrightarrow0, \quad n\longrightarrow\infty. $$
(29)

Proof

We compute

$$ \begin{aligned}[b] \bigl\Vert \bigl(K^{*}K-K_{n}^{*}K_{n} \bigr)h \bigr\Vert _{L^{2}(a,b)}& = \Biggl\Vert K^{*}Kh- \sum _{j=0}^{n}c_{j}(Kh) \bigl(K^{*}\hat{L_{j}} \bigr) \Biggr\Vert _{L^{2}(a,b)} \\ & = \Biggl\Vert K^{*} \Biggl(Kh-\sum_{j=0}^{n}c_{j}(Kh) \hat{L_{j}} \Biggr) \Biggr\Vert _{L^{2}(a,b)} \\ & = \bigl\Vert K^{*} (Kh-\Pi_{n}Kh ) \bigr\Vert _{L^{2}(a,b)} \\ & \leq \bigl\Vert K^{*} \bigr\Vert \bigl\Vert (Kh- \Pi_{n}Kh ) \bigr\Vert _{L^{2}(a,b)}\longrightarrow0,\quad n \longrightarrow\infty. \end{aligned} $$
(30)

By estimate (13) of Lemma 3.1 together with (17) we can write

$$ \begin{aligned}[b] \bigl\Vert \bigl(K^{*}K-K_{n}^{*}K_{n} \bigr)h \bigr\Vert _{L^{2}(a,b)}& = \Biggl\Vert K^{*}Kh- \sum _{j=0}^{n}c_{j}(Kh) \bigl(K^{*}\hat{L_{j}} \bigr) \Biggr\Vert _{L^{2}(a,b)} \\ & = \Biggl\Vert K^{*} \Biggl(Kh-\sum_{j=0}^{n}c_{j}(Kh) \hat{L_{j}} \Biggr) \Biggr\Vert _{L^{2}(a,b)} \\ & = \bigl\Vert K^{*} (Kh-\Pi_{n}Kh ) \bigr\Vert _{L^{2}(a,b)} \\ & \leq \bigl\Vert K^{*} \bigr\Vert \bigl\Vert (Kh- \Pi_{n}Kh ) \bigr\Vert _{L^{2}(a,b)} \\ & \leq \Vert K\Vert \bigl(cn^{-r} \bigr) \bigl\Vert (Kh)^{(r)} \bigr\Vert \\ &\leq \Vert K\Vert \bigl(cn^{-r} \bigr) (b-a)M_{r,0} \Vert h\Vert _{L^{2}(a,b)} =\varepsilon(n)\Vert h\Vert _{L^{2}(a,b)}, \end{aligned} $$
(31)

which implies that

$$ \Vert A-A_{n}\Vert \leq \Vert K\Vert \bigl(cn^{-r} \bigr) (b-a)M_{r,0}=\varepsilon(n)\longrightarrow 0, \quad n\longrightarrow\infty. $$
(32)

 □

Lemma 3.2

Let \(\alpha>0\), \(\mathbb{R}_{n}(\alpha)=(\alpha I+A_{n})^{-1}A_{n}\), and \(\mathbb{R}(\alpha)=(\alpha I+A)^{-1}A\). Then

$$ \forall h\in H=L^{2}(a,b),\quad \bigl\Vert \mathbb{R}_{n}(\alpha)h-\mathbb{R}(\alpha)h \bigr\Vert _{L^{2}(a,b)} \longrightarrow0, \quad n\longrightarrow\infty. $$
(33)

Proof

Before starting the proof, we recall the following useful result.

Remark 3.3

If K is a bounded injective operator, then

$$\mathcal{N}(K)= \mathcal{N} \bigl(K^{*}K \bigr)=\{0\} \quad \text{and} \quad \overline {\mathcal{R} \bigl(K^{*}K \bigr)}=\mathcal{N} \bigl(K^{*}K \bigr)^{\perp}=\{0\}^{\perp}=H. $$

In view of Theorem 2.2 and Remark 3.3, to show the convergence result (33), it suffices to establish it for h in the dense subspace \(\mathcal{R}(K^{*}K)\) of H. Before starting the proof of the lemma, we introduce the following propositions.

Proposition 3.1

For all \(h\in H\), we have

$$ \bigl\Vert (\alpha I+A)^{-1}Ah-h \bigr\Vert _{H}\longrightarrow0, \quad \alpha\longrightarrow0. $$
(34)

Proof

If \(h=\sum_{i=1}^{\infty}h_{i}e_{i}=\sum_{i=1}^{\infty }\langle h, e_{i}\rangle e_{i}\), then

$$\bigl\Vert (\alpha I+A)^{-1}Ah-h \bigr\Vert _{H}^{2}= \sum_{i=1}^{\infty} \biggl(\frac{\alpha}{\alpha+\lambda_{i}} \biggr)^{2}\vert h_{i}\vert ^{2}. $$

For \(\varepsilon>0\), we choose \(N\in\mathbb{N}\) such that \(\sum_{i=N+1}^{\infty} \vert h_{i}\vert ^{2} \leq\frac{\varepsilon}{2}\). Thus,

$$\begin{aligned} \sum_{i=1}^{\infty} \biggl(\frac{\alpha}{\alpha+\lambda_{i}} \biggr)^{2}\vert h_{i}\vert ^{2} = & \sum _{i=1}^{N} \biggl(\frac{\alpha}{\alpha+\lambda _{i}} \biggr)^{2}\vert h_{i}\vert ^{2} +\sum _{i=N+1}^{\infty} \biggl(\frac{\alpha}{\alpha+\lambda_{i}} \biggr)^{2}\vert h_{i}\vert ^{2} \\ \leq& \sum_{i=1}^{N} \biggl( \frac{\alpha}{\alpha+\lambda_{i}} \biggr)^{2}\vert h_{i}\vert ^{2} + \frac{\varepsilon}{2} \\ \leq& \sum_{i=1}^{N} \biggl( \frac{\alpha}{\lambda_{i}} \biggr)^{2}\vert h_{i}\vert ^{2}+\frac{\varepsilon}{2} \\ \leq& \frac{\varepsilon}{2}+ \alpha^{2}\frac{1}{\lambda _{N}^{2}}\Vert h \Vert _{H}^{2}. \end{aligned}$$

If we choose the parameter α such that \(\alpha^{2}\frac {1}{\lambda_{N}^{2}}\Vert h\Vert _{H}^{2}\leq\frac{\varepsilon }{2}\), then we obtain the desired convergence. □

Proposition 3.2

We have

$$ \forall n\in\mathbb{N},\quad \bigl\Vert (\alpha I +A_{n})^{-1}A_{n} \bigr\Vert =\sup _{\lambda\in[0, \Vert A_{n}\Vert ]} \frac{\lambda}{\alpha +\lambda} \leq1, $$
(35)

that is, the sequence \((\mathbb{R}_{n}(\alpha))\) is uniformly bounded with respect to n.

We return now to the proof of Lemma 3.2. We have

$$\begin{aligned} \bigl\Vert \mathbb{R}_{n}(\alpha)h-\mathbb{R}(\alpha)h \bigr\Vert _{L^{2}(a,b)} = & \bigl\Vert (\alpha I+A_{n})^{-1} \bigl[(\alpha I+A_{n})A-A_{n}(\alpha I+A) \bigr](\alpha I+A)^{-1}h \bigr\Vert _{L^{2}(a,b)} \\ = & \bigl\Vert \alpha(\alpha I +A_{n})^{-1}(A-A_{n}) (\alpha I+A)^{-1}h \bigr\Vert _{L^{2}(a,b)} \\ \leq& \alpha \bigl\Vert (\alpha I +A_{n})^{-1} \bigr\Vert \bigl\Vert (A-A_{n}) (\alpha I+A)^{-1}h \bigr\Vert _{L^{2}(a,b)}. \end{aligned}$$

Using the fact that \(\alpha \Vert (\alpha I+A_{n})^{-1}\Vert \leq 1\), we derive

$$ \bigl\Vert \mathbb{R}_{n}(\alpha )h-\mathbb{R}( \alpha)h \bigr\Vert _{L^{2}(a,b)}\leq \bigl\Vert (A-A_{n}) ( \alpha I+A)^{-1}h \bigr\Vert _{L^{2}(a,b)}, $$
(36)

and from (30) we deduce that the right-hand side of this last inequality tends to 0 as \(n\longrightarrow\infty\) for every \(h\in H\), which proves (33).  □

Convergence and error analysis

We denote by \(R(\alpha)= (\alpha I +K^{*}K)^{-1}K^{*}\in\mathcal {L}(H)\) (resp. \(R_{n}(\alpha)= (\alpha I+ K_{n}^{*}K_{n})^{-1}K_{n}^{*}\)) the regularizing operator of K (resp. of \(K_{n}\)).

To establish the convergence results of this method, we point out the following results:

$$ \bigl\Vert \bigl(\alpha I+ K_{n}^{*}K_{n} \bigr)^{-1}K_{n}^{*} \bigr\Vert \leq \frac{1}{2\sqrt{\alpha}} \quad\text{and}\quad \bigl\Vert \bigl(\alpha I+ K^{*}K \bigr)^{-1}K^{*} \bigr\Vert \leq \frac{1}{2\sqrt{\alpha}}; $$
(37)

if \(f\in H\), then

$$ \Vert f-f_{\alpha} \Vert _{L^{2}(a,b)}\longrightarrow 0, \quad \alpha\longrightarrow0; $$
(38)

also, if \(f=Au\in \mathcal{R}(A)\), then

$$ \Vert f-f_{\alpha} \Vert _{L^{2}(a,b)}\leq \alpha \Vert u\Vert _{L^{2}(a,b)}. $$
(39)

Let us assume that \(g_{\delta}\) are observation data of g such that

$$ \Vert g-g_{\delta} \Vert _{L^{2}(a,b)}= \Biggl(\sum _{j=0}^{\infty} \bigl\vert c_{j}(g-g_{\delta}) \bigr\vert ^{2} \Biggr)^{\frac{1}{2}}\leq\delta $$
(40)

with a given noise level \(\delta>0\). Then we have

$$ \bigl\Vert g_{n}-g_{n}^{\delta} \bigr\Vert _{\mathbb{R}^{n+1}}\leq \bigl\Vert g-g^{\delta} \bigr\Vert _{L^{2}(a,b)}\leq\delta. $$
(41)

Let us consider the following equations:

$$\begin{aligned}& K_{n}f^{n}=g_{n}, \end{aligned}$$
(42)
$$\begin{aligned}& K_{n}f^{\delta,n}=g_{n}^{\delta}, \end{aligned}$$
(43)
$$\begin{aligned}& \bigl(\alpha I+K^{*}K \bigr)f=K^{*}g, \end{aligned}$$
(44)
$$\begin{aligned}& \bigl(\alpha I+K^{*}K \bigr)f=K^{*}g^{\delta}. \end{aligned}$$
(45)

Because our original problem (\(Kf=g\)) is ill-posed, the problem of finding the generalized solution \(f^{\dagger,\delta, n}= K_{n}^{\dagger }g_{n}^{\delta}\in\mathcal{N}(K_{n})^{\perp}\) of problem (43) with inexact data \(g_{n}^{\delta}\) is unstable. Regularizing equations (42) and (43) by the Tikhonov regularization method, we obtain

$$\begin{aligned}& \bigl(\alpha I+K_{n}^{*}K_{n} \bigr)f=K_{n}^{*}g_{n}, \end{aligned}$$
(46)
$$\begin{aligned}& \bigl(\alpha I+K_{n}^{*}K_{n} \bigr)f=K_{n}^{*}g_{n}^{\delta}. \end{aligned}$$
(47)

Denote by \(f =K^{-1}g\) the exact solution of (2), by \(f_{\alpha}\) (resp. \(f_{\alpha}^{\delta}\)) the regularized solution of (44) (resp. of (45)), and by \(f_{\alpha}^{n}\) (resp. \(f_{\alpha }^{n,\delta}\)) the regularized solution of (46) (resp. of (47)).

Definition 4.1

We denote by \(f_{\alpha}\) (resp. \(f_{\alpha}^{\delta}\)) the regularized solution of problem (2) for the exact data g (resp. for the inexact data \(g_{\delta}\)):

$$\begin{aligned}& f_{\alpha}=R(\alpha)g= \bigl(\alpha I +K^{*}K \bigr)^{-1}K^{*}g, \end{aligned}$$
(48)
$$\begin{aligned}& f_{\alpha}^{\delta}=R(\alpha)g_{\delta}= \bigl( \alpha I +K^{*}K \bigr)^{-1}K^{*}g_{\delta}. \end{aligned}$$
(49)

Definition 4.2

For any \(\alpha>0\), the unique solution \(f_{\alpha}^{\delta,n}\) of (47) is considered as a regularized approximation of the generalized solution \(f^{\dagger,n,\delta}\).

Remark 4.1

Without loss of generality, we can assume that \(\operatorname {dim}(\mathcal{R}( K_{n}^{*}) ) =n+1\). For example, under condition (H4), the vectors \(K^{*}\hat{L}_{j}\), \(j=0,1,\ldots,n\), are linearly independent, and consequently \(\operatorname {dim}(\mathcal{R}( K_{n}^{*}) )=n+1\).

Since \(f_{\alpha}^{\delta,n}\in\mathcal{R}( K_{n}^{*}K_{n})= \mathcal{R}( K_{n}^{*}) = \operatorname {span}\{K^{*}\hat{L}_{j}, j=0,1,\ldots,n\} \) (see (26)), \(f_{\alpha}^{\delta,n}\) can be expanded as

$$ f_{\alpha}^{\delta,n}=\sum _{j=0}^{n}a_{j}K^{*} \hat{L}_{j}. $$
(50)

Then, equation (47) takes the form

$$ \sum_{j=0}^{n} \Biggl( \alpha a_{j} +\sum_{i=0}^{n} a_{i} \bigl\langle KK^{*} \hat{L}_{i}, \hat{L}_{j} \bigr\rangle _{L^{2}(a,b)} \Biggr)K^{*}\hat{L}_{j} = \sum _{j=0}^{n} \bigl\langle g^{\delta}, \hat{L}_{j} \bigr\rangle _{L^{2}(a,b)}K^{*} \hat{L}_{j}. $$
(51)

For notational convenience and simplicity, we denote

$$\begin{aligned}& \overrightarrow{a}=(a_{0},\ldots, a_{n})^{T} \in\mathbb{R}^{n+1}, \end{aligned}$$
(52)
$$\begin{aligned}& \overrightarrow{g_{n}^{\delta}}= \biggl( \int_{a}^{b}g^{\delta}(t) \hat{L}_{0}(t)\,dt,\ldots, \int_{a}^{b}g^{\delta}(t) \hat{L}_{n}(t)\,dt \biggr)^{T}\in\mathbb{R}^{n+1}, \end{aligned}$$
(53)
$$\begin{aligned}& b_{ij}= \bigl\langle KK^{*} \hat{L}_{i}, \hat{L}_{j} \bigr\rangle _{L^{2}(a,b)}= \int_{a}^{b} \int_{a}^{b} \int_{a}^{b}k(s,\tau)k(t,\tau)\hat{L}_{i}(s) \hat{L}_{j}(t)\,ds\,dt\,d\tau,\quad i,j=0,\ldots,n, \end{aligned}$$
(54)
$$\begin{aligned}& \mathbf{B}= (b_{ij})\in\mathcal{M}_{n+1}( \mathbb{R}), \quad\quad \mathbf{A}_{n}(\alpha) =\alpha I_{n+1}+ \mathbf{B}. \end{aligned}$$
(55)

Now, to determine the unknown coefficients \((a_{j})_{j=0}^{n}\), we must solve the linear algebraic system

$$ \mathbf{A}_{n}(\alpha)\overrightarrow{a}= \overrightarrow{g_{n}^{\delta}}. $$
(56)

Proposition 4.1

The linear system (56) has a unique solution \(\overrightarrow{a_{\alpha}^{n,\delta}}\) for every \(\overrightarrow{g_{n}^{\delta}}\in\mathbb{R}^{n+1}\).

Proof

Let

$$S(f,g)= \bigl\langle K^{*}Kf,g \bigr\rangle _{L^{2}(a,b)}, \quad f,g\in L^{2}(a,b). $$

We have

$$S(f,g)= \bigl\langle K^{*}Kf,g \bigr\rangle _{L^{2}(a,b)}= \bigl\langle f, K^{*}Kg \bigr\rangle _{L^{2}(a,b)} = \bigl\langle K^{*}Kg, f \bigr\rangle _{L^{2}(a,b)} =S(g,f) $$

and

$$S(f,g)= \bigl\langle K^{*}f, K^{*}g \bigr\rangle _{L^{2}(a,b)}\geq0, $$

that is, \(S(\cdot,\cdot)\) is a positive symmetric bilinear form. Hence, \(\mathbf{B}=(b_{ij})_{0\leq i,j\leq n}=(S(\hat{L}_{i},\hat {L}_{j}))_{0\leq i,j\leq n}\) is a positive symmetric matrix, and for any \(\alpha> 0\), the matrix \(\mathbf{A}_{n}(\alpha)\) of system (56) is invertible. Therefore, this system is uniquely solvable. □
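The assembly and solution of system (56) can be sketched end to end. The code below is hedged: the kernel \(k(t,s)=e^{ts}\) on \([0,1]\), the exact solution \(f(t)=t\), noise-free data (\(\delta=0\)), and Gauss-Legendre quadrature for all integrals are our illustrative choices, not the paper's numerical experiments.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

a, b, n, alpha = 0.0, 1.0, 8, 1e-6
nodes, wq = leggauss(60)
x = 0.5 * (b - a) * (nodes + 1) + a            # quadrature nodes on [a, b]
w = 0.5 * (b - a) * wq                         # quadrature weights

def Lhat(j, t):
    """Normalized shifted Legendre polynomial (8)."""
    c = np.zeros(j + 1); c[j] = 1.0
    return np.sqrt((2 * j + 1) / (b - a)) * legval(2 * (t - a) / (b - a) - 1, c)

k = np.exp(np.outer(x, x))                     # k(t_i, s_j) on the grid
f_exact = x                                    # test solution f(t) = t
g = k @ (w * f_exact)                          # g = K f at the nodes

Phi = np.array([Lhat(j, x) for j in range(n + 1)])   # Phi[j, i] = Lhat_j(x_i)
KsL = (w[:, None] * k).T @ Phi.T               # column j: (K* Lhat_j) at the nodes
KKsL = k @ (w[:, None] * KsL)                  # column j: (K K* Lhat_j) at the nodes
B = (Phi * w) @ KKsL                           # b_ij = <K K* Lhat_i, Lhat_j> (symmetric)

gvec = Phi @ (w * g)                           # right-hand side g_n (delta = 0)
avec = np.linalg.solve(alpha * np.eye(n + 1) + B, gvec)   # system (56)
f_reg = KsL @ avec                             # f = sum_j a_j K* Lhat_j, as in (50)

err = np.sqrt(np.sum(w * (f_reg - f_exact) ** 2))
print(f"L2 error = {err:.2e}")
```

Because \(KK^{*}\) is self-adjoint, B comes out symmetric positive semidefinite, in line with Proposition 4.1.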

The aim of this part is to derive the convergence and error bound for \(\Vert f-f_{\alpha}^{n,\delta} \Vert _{L^{2}(a,b)}\). To do this, we split the error into three parts:

$$\bigl(f_{\alpha}^{n,\delta}-f \bigr) = \bigl(f_{\alpha}^{n,\delta}-f_{\alpha}^{n} \bigr)+ \bigl(f_{\alpha}^{\delta}-f \bigr)+ \bigl(f_{\alpha}^{n}-f_{\alpha}^{\delta} \bigr). $$

Using (37), (41), and the triangle inequality, we can write

$$\begin{aligned}& \Delta_{1}= \bigl\Vert f_{\alpha}^{n,\delta}-f_{\alpha}^{n} \bigr\Vert _{L^{2}(a,b)}= \bigl\Vert \bigl(\alpha I+ K_{n}^{*}K_{n} \bigr)^{-1}K_{n}^{*} \bigl(g_{n}^{\delta}-g_{n} \bigr) \bigr\Vert _{L^{2}(a,b)} \leq\frac{\delta}{2\sqrt{\alpha}}, \end{aligned}$$
(57)
$$\begin{aligned}& \begin{aligned}[b] \Delta_{2}&= \bigl\Vert f_{\alpha}^{\delta}-f \bigr\Vert _{L^{2}(a,b)} \leq \bigl\Vert f_{\alpha}^{\delta}-f_{\alpha} \bigr\Vert _{L^{2}(a,b)}+ \Vert f_{\alpha}-f\Vert _{L^{2}(a,b)} \\ & \leq \bigl\Vert \bigl(\alpha I+ K^{*}K \bigr)^{-1}K^{*} \bigl(g^{\delta}-g \bigr) \bigr\Vert _{L^{2}(a,b)}+ \Vert f_{\alpha}-f\Vert _{L^{2}(a,b)} \\ & \leq \frac{\delta}{2\sqrt{\alpha}}+\Vert f-f_{\alpha} \Vert _{L^{2}(a,b)}, \end{aligned} \end{aligned}$$
(58)
$$\begin{aligned}& \begin{aligned}[b] \Delta_{3}&= \bigl\Vert f_{\alpha}^{n}-f_{\alpha}^{\delta} \bigr\Vert _{L^{2}(a,b)} \leq \bigl\Vert f_{\alpha}^{n}-f_{\alpha} \bigr\Vert _{L^{2}(a,b)}+ \bigl\Vert f_{\alpha}-f_{\alpha}^{\delta} \bigr\Vert _{L^{2}(a,b)} \\ & \leq \frac{\delta}{2\sqrt{\alpha}}+ \bigl\Vert f_{\alpha }^{n}-f_{\alpha} \bigr\Vert _{L^{2}(a,b)}. \end{aligned} \end{aligned}$$
(59)

Now, by (36) the quantity \(\Vert f_{\alpha }^{n}-f_{\alpha} \Vert _{L(a,b)}\) can be estimated as follows:

$$ \begin{aligned}[b] \bigl\Vert f_{\alpha}^{n}-f_{\alpha} \bigr\Vert _{L^{2}(a,b)} & = \bigl\Vert \bigl(\alpha I+ K_{n}^{*}K_{n} \bigr)^{-1}K_{n}^{*}g_{n}- \bigl(\alpha I+ K^{*}K \bigr)^{-1}K^{*}g \bigr\Vert _{L^{2}(a,b)} \\ & = \bigl\Vert \bigl(\alpha I+ K_{n}^{*}K_{n} \bigr)^{-1}K_{n}^{*}K_{n}f- \bigl(\alpha I+ K^{*}K \bigr)^{-1}K^{*}Kf \bigr\Vert _{L^{2}(a,b)} \\ & = \bigl\Vert \mathbb{R}_{n}(\alpha)f-\mathbb{R}(\alpha)f \bigr\Vert _{L^{2}(a,b)} \\ & \leq \bigl\Vert (A-A_{n}) (\alpha I+A)^{-1}f \bigr\Vert _{L^{2}(a,b)}. \end{aligned} $$
(60)

Combining (57), (58), (59), and (60), we derive

$$ \bigl\Vert f_{\alpha}^{n,\delta}-f \bigr\Vert _{L^{2}(a,b)}\leq\frac{3\delta}{2\sqrt{\alpha}}+ \bigl\Vert (A-A_{n}) ( \alpha I+A)^{-1}f \bigr\Vert _{L^{2}(a,b)}+\Vert f-f_{\alpha} \Vert _{L^{2}(a,b)}. $$
(61)

Consequently, we have the following theorem.

Theorem 4.1

Let us assume that \(f=Au\in\mathcal{R}(A)\). Then, under assumptions (H1), (H2), and (H3), we have the estimate

$$ \bigl\Vert f_{\alpha}^{n,\delta}-f \bigr\Vert _{L^{2}(a,b)}\leq\frac{3\delta}{2\sqrt{\alpha}}+ \bigl(\varepsilon(n)+\alpha \bigr)\Vert u \Vert _{L^{2}(a,b)}, $$
(62)

where \(\varepsilon(n)=\frac{c}{n^{r}}\Vert K\Vert (b-a)M_{r,0}\).

An a posteriori parameter choice strategy

In this section, we consider the determination of \(\alpha(\delta)\) by Morozov's discrepancy principle. The discrepancy principle (DP) selects \(\alpha(\delta)>0\) such that

$$ \bigl\Vert Kf_{\alpha}^{n,\delta}-g^{\delta} \bigr\Vert _{L^{2}(a,b)}=\delta. $$
(63)

In this work, we consider the more general damped Morozov principle given by

$$ \bigl\Vert Kf_{\alpha}^{n,\delta}-g^{\delta} \bigr\Vert _{L^{2}(a,b)}^{2}+\alpha^{\eta} \bigl\Vert f_{\alpha}^{n,\delta} \bigr\Vert _{L^{2}(a,b)}^{2}=\delta ^{2}, $$
(64)

where \(\eta\in [ 1,\infty ] \). The classical Morozov principle (63) is the particular case of the damped principle obtained with \(\eta=\infty\).

In [40, 41], the authors propose a cubically convergent algorithm for choosing a reasonable regularization parameter. This algorithm is summarized as follows.

Algorithm of the cubic Morozov discrepancy principle (CMDP)

Step 1: Input \(\alpha_{0}>0\), \(\delta>0\), a tolerance \(\epsilon>0\), and \(l_{\max}\); set \(l:=0\).

Step 2: Compute \(f_{\alpha_{l}}^{n,\delta}\), \(\frac {d}{d\alpha}f_{\alpha_{l}}^{n,\delta}\), and \(\frac{d^{2}}{d\alpha ^{2}}f_{\alpha_{l}}^{n,\delta}\).

Step 3: Compute \(\Phi(\alpha_{l})\), \(\Phi'(\alpha_{l})\), and \(\Phi''(\alpha_{l})\) from formulas (65), (70), and (71).

Step 4: Solve for \(\alpha_{l+1}\) from iterative formula (66).

Step 5: If \(\vert \alpha_{l+1}-\alpha_{l}\vert \leq \epsilon\) or \(l=l_{\max}\), STOP; otherwise, set \(l:=l+1\) and GOTO Step 2.

Here

$$ \Phi(\alpha)= \bigl\Vert Kf_{\alpha}^{n,\delta }-g^{\delta} \bigr\Vert _{L^{2}(a,b)}^{2}+\alpha^{\eta} \bigl\Vert f_{\alpha}^{n,\delta} \bigr\Vert _{L^{2}(a,b)}^{2}-\delta ^{2} $$
(65)

and

$$ \alpha_{l+1}=\alpha_{l}-\frac{2\Phi(\alpha_{l})}{\Phi '(\alpha_{l})+(\Phi'(\alpha_{l})^{2}-2\Phi(\alpha_{l})\Phi''(\alpha _{l}))^{\frac{1}{2}}}. $$
(66)
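The following sketch illustrates the damped Morozov principle and update (66) on a small diagonal toy problem, where the Tikhonov quantities have closed spectral forms. We take \(\eta=1\) and use the squared-residual (Kunisch-Zou) form of \(\psi\); the starting value \(\alpha_{0}\) and the Newton fallback for a negative discriminant are our own implementation choices, not part of the algorithm:

```python
import numpy as np

# Toy run of the CMDP update (66) with eta = 1 on K = diag(s), so that
#   phi(a) = ||f_a||^2       = sum (s_i g_i / (a + s_i^2))^2,
#   psi(a) = ||K f_a - g||^2 = sum (a   g_i / (a + s_i^2))^2,
# and, by (70)-(71) with eta = 1, Phi'(a) = phi(a), Phi''(a) = phi'(a).
s = np.array([1.0, 0.5, 0.1, 0.01])           # singular values of the toy K
noise = np.array([0.01, -0.01, 0.01, -0.01])  # fixed perturbation
g_d = s * 1.0 + noise                         # noisy data for exact f = 1
delta = np.linalg.norm(noise)

def phi(a):  return np.sum((s * g_d / (a + s**2))**2)
def dphi(a): return -2.0 * np.sum(s**2 * g_d**2 / (a + s**2)**3)
def psi(a):  return np.sum((a * g_d / (a + s**2))**2)
def Phi(a):  return psi(a) + a * phi(a) - delta**2

alpha = 1e-4                                  # initial guess alpha_0 (ours)
for _ in range(100):
    P, dP, ddP = Phi(alpha), phi(alpha), dphi(alpha)
    disc = dP**2 - 2.0 * P * ddP
    if disc < 0.0:                            # safeguard (ours): Newton step
        disc = dP**2
    alpha -= 2.0 * P / (dP + np.sqrt(disc))   # update (66)
    if abs(Phi(alpha)) < 1e-14:
        break
```

With this data the iteration settles within a few steps on a parameter of order \(10^{-4}\) satisfying \(\psi(\alpha)+\alpha\phi(\alpha)=\delta^{2}\) to near machine precision.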

Now, we present an alternative way to calculate \(\Phi'(\alpha)\) and \(\Phi''(\alpha)\) in the CMDP algorithm. Let \(G(\alpha)\) denote the function

$$ G(\alpha)= \bigl\Vert Kf_{\alpha}^{n,\delta}-g^{\delta} \bigr\Vert _{L^{2}(a,b)}^{2}+\alpha \bigl\Vert f_{\alpha}^{n,\delta} \bigr\Vert _{L^{2}(a,b)}^{2}= \psi(\alpha)+\alpha\phi(\alpha), $$
(67)

where

$$\psi(\alpha)= \bigl\Vert Kf_{\alpha}^{n,\delta}-g^{\delta} \bigr\Vert _{L^{2}(a,b)}^{2}, \quad\quad \phi(\alpha)= \bigl\Vert f_{\alpha}^{n,\delta} \bigr\Vert _{L^{2}(a,b)}^{2}. $$

The first derivative of \(G(\alpha)\) (see [42]) is given by

$$ G'(\alpha)=\phi(\alpha). $$
(68)

Using (67) and (68), we get

$$G'(\alpha)=\phi(\alpha)= \psi'(\alpha)+ \phi(\alpha)+ \alpha\phi'(\alpha), $$

which implies that

$$ \psi'(\alpha)=-\alpha\phi'(\alpha) $$
(69)

and

$$\begin{aligned} \Phi'(\alpha) = & \frac{d}{d\alpha} \bigl( \psi(\alpha)+ \alpha^{\eta}\phi (\alpha)-\delta^{2} \bigr) \\ = & \psi'(\alpha)+ \eta\alpha^{\eta-1}\phi(\alpha)+ \alpha^{\eta}\phi '(\alpha) \\ =&-\alpha\phi'(\alpha)+\eta\alpha^{\eta-1}\phi(\alpha)+ \alpha^{\eta }\phi'(\alpha). \end{aligned}$$

Thus, it follows that

$$ \Phi'(\alpha)= \bigl(\alpha^{\eta}-\alpha \bigr)\phi'(\alpha)+\eta\alpha^{\eta-1}\phi(\alpha) $$
(70)

and

$$ \Phi''(\alpha)= \bigl(\alpha ^{\eta}-\alpha \bigr)\phi''(\alpha)+ \bigl(2\eta\alpha ^{\eta-1}-1 \bigr)\phi'(\alpha)+\eta(\eta-1)\alpha ^{\eta-2}\phi(\alpha), $$
(71)

where

$$\phi'(\alpha)=2 \biggl\langle \frac{d}{d\alpha}f_{\alpha}^{n,\delta },f_{\alpha}^{n,\delta} \biggr\rangle _{L^{2}(a,b)} $$

and

$$\phi''(\alpha)=2 \biggl( \biggl\langle \frac{d^{2}}{d\alpha^{2}}f_{\alpha }^{n,\delta},f_{\alpha}^{n,\delta} \biggr\rangle _{L^{2}(a,b)} + \biggl\Vert \frac {d}{d\alpha}f_{\alpha}^{n,\delta} \biggr\Vert _{L^{2}(a,b)}^{2} \biggr). $$

In our case, using (50), we can write

$$ \frac{d^{m}}{d\alpha^{m}}f_{\alpha}^{n,\delta}=\sum _{j=0}^{n}\frac{d^{m}}{ d\alpha^{m}}a_{j}(\alpha )K^{*}\hat{L}_{j}= \biggl\langle \frac{d^{m}}{ d\alpha^{m}} \overrightarrow{a(\alpha)},\overrightarrow{Y} \biggr\rangle ,\quad m\geq1, $$
(72)

where

$$ \begin{aligned}[b] & \frac{d^{m}}{d\alpha^{m}}\overrightarrow{a( \alpha)}= \biggl( \frac{d^{m}}{d\alpha^{m}} a_{0}(\alpha), \frac{d^{m}}{d\alpha^{m}}a_{1}( \alpha),\ldots,\frac{d^{m}}{ d\alpha^{m}}a_{n}( \alpha) \biggr)^{\top}, \\ & \overrightarrow{Y} = \bigl( K^{*}\hat{L}_{0},K^{*} \hat{L}_{1},\ldots,K^{*}\hat{L}_{n} \bigr)^{\top}. \end{aligned} $$
(73)

It is easy to check that

$$ \mathbf{A}_{n}(\alpha)\frac{d^{m}}{d\alpha^{m}} \overrightarrow{a(\alpha)} = -m \frac{d^{m-1}}{d\alpha ^{m-1}}\overrightarrow{a(\alpha)}, \quad m\geq1, $$
(74)

where the matrix \(\mathbf{A}_{n}(\alpha)\) is given by (55).

Remark 4.2

We note that formula (74) provides a practical way to compute expression (72): the successive derivatives of \(\overrightarrow{a(\alpha)}\) are obtained by repeatedly solving linear systems with the same matrix \(\mathbf{A}_{n}(\alpha)\).
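A minimal sketch of this remark: once \(\overrightarrow{a(\alpha)}\) is known, each higher derivative costs one extra solve with the matrix already factored. Here `B` and `y` are generic stand-ins for the assembled system (56):

```python
import numpy as np

# Sketch of Remark 4.2: with a(alpha) solving (alpha*I + B) a = y,
# relation (74) yields every derivative by reusing the same matrix:
#   (alpha*I + B) a^{(m)} = -m * a^{(m-1)},   m = 1, 2, ...
rng = np.random.default_rng(1)
C = rng.standard_normal((5, 5))
B = C @ C.T                          # symmetric PSD, like the matrix in (56)
y = rng.standard_normal(5)
alpha = 0.1

A = alpha * np.eye(5) + B
a0 = np.linalg.solve(A, y)           # a(alpha)
a1 = np.linalg.solve(A, -1.0 * a0)   # a'(alpha),  m = 1
a2 = np.linalg.solve(A, -2.0 * a1)   # a''(alpha), m = 2

# Closed forms for comparison: a' = -A^{-2} y and a'' = 2 A^{-3} y
Ainv = np.linalg.inv(A)
```

The quantities \(\phi'(\alpha)\) and \(\phi''(\alpha)\) needed in (70)-(71) then follow from these vectors through the inner products in (72).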

Numerical tests

The purpose of this final section is to illustrate the theoretical study with two numerical examples. All numerical experiments were performed in MATLAB.

Example 1

$$\int_{0}^{1}\exp \bigl(s^{2}t \bigr)f(t) \,dt=g(s)= \bigl(\exp \bigl(s^{2}+1 \bigr)-1 \bigr)/ \bigl(s^{2}+1 \bigr) $$

with the exact solution

$$f(t) =\exp(t). $$

Example 2

$$\int_{0}^{\pi/2}\cos \bigl(s^{2} +3t +1 \bigr)f(t)\,dt=g(s)=(1/6)\cos \bigl(s^{2}+2 \bigr)-\pi/4 \sin \bigl(s^{2} \bigr) $$

with the exact solution

$$f(t) =\sin(3t+1). $$

Let \(\{t_{i}=a+ \frac{(i-1)(b-a)}{N}, i=1,2,\ldots,N+1\}\subset[a,b]\) be the collocation points of the trapezoidal quadrature formula. The trapezoidal rule associated with these points has the weights \(\omega_{1} = \omega_{N+1}=\frac{b-a}{2N}\) and \(\omega _{i}=\frac{b-a}{N}\), \(i=2,3,\ldots,N\).
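These points and weights can be generated by a direct transcription of the definitions:

```python
import numpy as np

# Collocation points and trapezoidal weights defined above,
# for a generic interval [a, b] with N subintervals (N + 1 points).
def trapezoid_rule(a, b, N):
    t = a + (b - a) * np.arange(N + 1) / N   # t_1, ..., t_{N+1}
    w = np.full(N + 1, (b - a) / N)          # interior weights
    w[0] = w[-1] = (b - a) / (2 * N)         # endpoint weights
    return t, w

t, w = trapezoid_rule(0.0, 1.0, 10)
```

The weights sum to \(b-a\), and the rule integrates affine functions exactly.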

We denote by

$$\mathbf{g}= \bigl(g(t_{1}),\ldots, g(t_{N+1}) \bigr)^{\top} $$

the discrete data of g. Adding a normally distributed random perturbation (obtained by the MATLAB command randn) to the data vector, we obtain the noisy vector \(\mathbf{g}^{\delta}\):

$$\mathbf{g}^{\delta} =\mathbf{g}+\varepsilon \operatorname {randn}\bigl( \operatorname {size}( \mathbf{g}) \bigr), $$

where ε indicates the noise level of the measurement data, and the function “\(\operatorname {randn}(\cdot)\)” generates arrays of normally distributed random numbers with mean 0 and variance \(\sigma^{2} = 1\); “\(\operatorname {randn}(\operatorname {size}(g))\)” returns an array of random entries of the same size as g. The measurement error bound δ is measured by the root mean square error (RMSE) according to

$$\delta= \bigl\Vert \mathbf{g}^{\delta}-\mathbf{g} \bigr\Vert _{*}= \Biggl( \frac {1}{N+1}\sum_{i=1}^{N+1} \bigl(g(t_{i})-g^{\delta}(t_{i}) \bigr)^{2} \Biggr)^{1/2}. $$
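A sketch of the perturbation step and the RMSE form of δ; the data vector `g` below is a placeholder:

```python
import numpy as np

# Perturbation step g^delta = g + eps * randn(size(g)) and the
# RMSE measure of delta defined above.
rng = np.random.default_rng(2)
N = 100
t = np.linspace(0.0, 1.0, N + 1)
g = np.exp(t)                                # placeholder discrete data
eps = 1e-2                                   # noise level
g_delta = g + eps * rng.standard_normal(g.shape)

delta = np.sqrt(np.mean((g - g_delta)**2))   # RMSE form of delta
```

For standard normal noise, δ concentrates near ε, which makes the noise level ε a convenient proxy for δ.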

The discrete errors \(\Vert e_{n}\Vert _{\infty}\) and \(\Vert e_{n}\Vert _{2}\) are defined by

$$\begin{aligned}& \Vert e_{n}\Vert _{\infty}=\max_{1\leq i\leq N+1} \bigl\vert f ( t_{i} ) -f_{\alpha}^{n,\delta} ( t_{i} ) \bigr\vert \quad\mbox{and}\quad \Vert e_{n} \Vert _{2}= \Biggl[\sum_{i=1}^{N+1}w_{i} \bigl(f ( t_{i} ) -f_{\alpha}^{n,\delta} ( t_{i} ) \bigr)^{2} \Biggr]^{\frac{1}{2}}. \end{aligned}$$

Using the trapezoid rule, we compute

$$\begin{aligned}& \bigl(\mathbf{g}_{n}^{\delta} \bigr)_{j}= \bigl\langle g^{\delta}, \hat{L}_{j} \bigr\rangle \approx\sum _{i=1}^{N+1}w_{i}\sum _{k=1}^{N+1}w_{k}k(s_{k},t_{i}) \mathbf{g}^{\delta}(s_{k})\hat{L}_{j}(t_{i}), \quad \quad j=0,1,\ldots,n, \\& \mathbf{g}_{n}^{\delta}= \bigl( \bigl(g_{n}^{\delta} \bigr)_{0},\bigl(g_{n}^{\delta} \bigr)_{1}, \ldots , \bigl(g_{n}^{\delta} \bigr)_{n}\bigr)^{\top}, \end{aligned}$$

and

$$\begin{aligned} b_{ij} = & \int_{a}^{b} \int_{a}^{b} \int_{a}^{b}k(s,t)k(t,\tau)\hat{L}_{i}(s) \hat{L}_{j}(t)\,ds\,dt\,d\tau \\ \approx&\mathbf{ b}_{ij}=\sum_{m=1}^{N+1} \omega_{m} \Biggl( \sum_{r=1}^{N+1}w_{r} \Biggl(\sum_{l=1}^{N+1}\omega_{l}k(s_{l},t_{r})k(t_{r}, \tau _{m})\hat{L}_{i}(s_{l})\hat{L}_{j}(t_{r}) \Biggr) \Biggr), \quad i,j=0,\ldots,n. \end{aligned}$$

Under this notation, we obtain a discrete version of system (56) in the form

$$\mathbf{A}_{n}(\alpha)\mathbf{a} =\mathbf{g}_{n}^{\delta}, $$

and the approximate solution will be calculated by the formula

$$\mathbf{f}_{\alpha}^{n,\delta}(t_{i})\approx \sum _{j=0}^{n}a_{j}(\alpha) \Biggl(\sum _{r=1}^{N+1}\omega _{r}k(s_{r},t_{i}) \hat{L}_{j}(s_{r}) \Biggr),\quad i=1,2,\ldots,N+1, $$

where

$$\mathbf{B}=(\mathbf{ b}_{ij}), \quad\quad \mathbf{A}_{n}( \alpha)=( \alpha\mathbf {I}_{n} + \mathbf{B} ), \quad\quad \mathbf{a}= \bigl(a_{0}( \alpha),a_{1}(\alpha),\ldots,a_{n}( \alpha) \bigr)^{\top}. $$
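As an end-to-end illustration, the following sketch assembles and solves a discrete system of this type for Example 1. The concrete formulation is our assumption (a dual, image-space variant): \(f_{\alpha}=\sum_{j} a_{j}K^{*}\hat{L}_{j}\) with \(\mathbf{B}_{ij}=\langle KK^{*}\hat{L}_{j},\hat{L}_{i}\rangle\) and right-hand side \(c_{j}=\langle g,\hat{L}_{j}\rangle\), using orthonormal shifted Legendre polynomials and the trapezoidal rule above; it is not claimed to reproduce system (56) exactly.

```python
import numpy as np
from numpy.polynomial import legendre

# Assumed dual form:  f_alpha = sum_j a_j K^* Lhat_j,  (alpha*I + B) a = c,
#   B_ij = <K K^* Lhat_j, Lhat_i>,  c_j = <g, Lhat_j>,
# with Lhat_j(t) = sqrt(2j+1) P_j(2t-1) orthonormal on [0, 1].
N, kdim = 200, 3                              # grid size, n + 1 with n = 2
t = np.linspace(0.0, 1.0, N + 1)
w = np.full(N + 1, 1.0 / N); w[0] = w[-1] = 0.5 / N

Kmat = np.exp(np.outer(t**2, t))              # Kmat[i, j] = exp(s_i^2 t_j)
g = (np.exp(t**2 + 1) - 1.0) / (t**2 + 1)     # exact data of Example 1

L = np.stack([np.sqrt(2 * j + 1) * legendre.legval(2 * t - 1, [0] * j + [1])
              for j in range(kdim)])          # rows: Lhat_j on the grid

KsL = Kmat.T @ (w[:, None] * L.T)             # columns: K^* Lhat_j on the grid
B = (w[:, None] * L.T).T @ (Kmat @ (w[:, None] * KsL))
c = L @ (w * g)

def solve(alpha):
    a = np.linalg.solve(alpha * np.eye(kdim) + B, c)
    return a, KsL @ a                         # coefficients, f_alpha on grid

a_small, f_small = solve(1e-6)
a_large, f_large = solve(1e-2)
```

The checks below exercise only structural properties that hold for any such discretization: B is a weighted Gram matrix (hence symmetric), and as α grows the coefficient norm decreases while the residual grows.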

References

1. Kirsch, A: An Introduction to the Mathematical Theory of Inverse Problems. Springer, Berlin (1996)
2. Kress, R: Linear Integral Equations, 3rd edn. Springer, New York (2014)
3. Lu, Y, Shen, L, Xu, Y: Integral equation models for image restoration: high accuracy methods and fast algorithms. Inverse Probl. 26, 045006 (2010)
4. Adibi, H, Assari, P: Chebyshev wavelet method for numerical solution of Fredholm integral equations of the first kind. Math. Probl. Eng. 2010, Article ID 138408 (2010). doi:10.1155/2010/138408
5. Babaaghaie, A, Mesgarani, H: Numerical solution of Fredholm integral equations of first kind by two-dimensional trigonometric wavelets in Hölder space \(C^{\alpha}([a,b])\). Comput. Math. Math. Phys. 52(4), 601-614 (2012)
6. Chen, Z, Micchelli, CA, Xu, Y: Multiscale Methods for Fredholm Integral Equations. Cambridge University Press, Cambridge (2015)
7. Luo, X, Li, F, Yang, S: A posteriori parameter choice strategy for fast multiscale methods solving ill-posed integral equations. Adv. Comput. Math. 36, 299-314 (2012)
8. Maleknejad, K, Sohrabi, S: Numerical solution of Fredholm integral equations of the first kind by using Legendre wavelets. Appl. Math. Comput. 186, 836-843 (2007)
9. Maleknejad, K, Nouri, K, Yousefi, M: Discussion on convergence of Legendre polynomial for numerical solution of integral equations. Appl. Math. Comput. 193, 335-339 (2007)
10. Maleknejad, K, Mollapourasl, R, Alizadeh, M: Convergence analysis for numerical solution of Fredholm integral equation by Sinc approximation. Commun. Nonlinear Sci. Numer. Simul. 16, 2478-2485 (2011)
11. Du, H, Cui, M: Representation of the exact solution and a stability analysis on the Fredholm integral equation of the first kind in reproducing kernel space. Appl. Math. Comput. 182, 1608-1614 (2006)
12. Du, H, Cui, M: Approximate solution of the Fredholm integral equation of the first kind in a reproducing kernel Hilbert space. Appl. Math. Lett. 21, 617-623 (2008)
13. Ahues, M, Largillier, A, Limaye, BV: Spectral Computations for Bounded Operators. Chapman & Hall/CRC, Boca Raton (2001)
14. Ben Aouicha, H: Computation of the spectra of some integral operators and application to the numerical solution of some linear integral equations. Appl. Math. Comput. 218, 3217-3229 (2011)
15. Huang, C: Spectral collocation method for compact integral operators. PhD thesis, Wayne State University (2011)
16. Panigrahi, BL, Nelakanti, G: Superconvergence of Legendre projection methods for the eigenvalue problem of a compact integral operator. J. Comput. Appl. Math. 235, 2380-2391 (2011)
17. Panigrahi, BL, Long, G, Nelakanti, G: Legendre multi-projection methods for solving eigenvalue problems for a compact integral operator. J. Comput. Appl. Math. 239, 135-151 (2013)
18. Nair, MT: Quadrature based collocation methods for integral equations of the first kind. Adv. Comput. Math. 36, 315-329 (2012)
19. Nair, MT, Pereverzev, SV: Regularized collocation method for Fredholm integral equations of the first kind. J. Complex. 23(4-6), 454-467 (2007)
20. Abramovitz, A: A trigonometrical approach for some projection methods. Acta Appl. Math. 56, 99-117 (1999)
21. Groetsch, CW, Neubauer, A: Regularization of ill-posed problems: optimal parameter choice in finite dimensions. J. Approx. Theory 58, 184-200 (1989)
22. King, JT, Neubauer, A: A variant of finite-dimensional Tikhonov regularization with a-posteriori parameter choice. Computing 40, 91-109 (1988)
23. Kaltenbacher, B, Offtermatt, J: A convergence analysis of regularization by discretization in preimage space. Math. Comput. 81(280), 2049-2069 (2012)
24. Neubauer, A: Finite-dimensional approximation of constrained Tikhonov-regularized solutions of ill-posed linear operator equations. Math. Comput. 48(178), 565-583 (1987)
25. Pereverzev, SV, Prössdorf, S: On the characterization of self-regularization properties of a fully discrete projection method for Symm’s integral equation. J. Integral Equ. Appl. 12(2), 113-130 (2000)
26. Rajan, MP: Convergence analysis of a regularized approximation for solving Fredholm integral equations of the first kind. J. Math. Anal. Appl. 279, 522-530 (2003)
27. Rajan, MP: A modified convergence analysis for solving Fredholm integral equations of the first kind. Integral Equ. Oper. Theory 49, 511-516 (2004)
28. Vainikko, G, Hämarik, U: Projection methods and self-regularization in ill-posed problems. Sov. Math. 29, 1-20 (1985) (in Russian)
29. Atkinson, KE: The Numerical Solution of Integral Equations of the Second Kind. Cambridge University Press, Cambridge (2009)
30. Bakushinsky, A, Kokurin, MY, Smirnova, A: Iterative Methods for Ill-Posed Problems. Inverse and Ill-Posed Problems Series, vol. 54. de Gruyter, Berlin (2011)
31. Chatelin, F: Spectral Approximation of Linear Operators. Academic Press, New York (1983)
32. Engl, HW, Hanke, M, Neubauer, A: Regularization of Inverse Problems. Kluwer Academic, Dordrecht (1996)
33. Groetsch, CW: The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind. Pitman, Boston (1984)
34. Koshev, N, Beilina, L: An adaptive finite element method for Fredholm integral equations of the first kind and its verification on experimental data. Cent. Eur. J. Math. 11(8), 1489-1509 (2013)
35. Tikhonov, AN, Leonov, AS, Yagola, AG: Nonlinear Ill-Posed Problems. Appl. Math. Math. Comput., vol. 14. Chapman & Hall, London (1998)
36. Nair, MT: Linear Operator Equations: Approximation and Regularization. World Scientific, Singapore (2009)
37. Kindermann, S: Projection methods for ill-posed problems revisited. arXiv:1507.03364v1 [math.NA] (2015)
38. Canuto, C, Hussaini, MY, Quarteroni, A, Zang, TA: Spectral Methods. Springer, Berlin (2006)
39. Das, P, Sahani, MM, Nelakanti, G: Convergence analysis of Legendre spectral projection methods for Hammerstein integral equations of mixed type. J. Appl. Math. Comput. 49, 529-555 (2015)
40. Wang, YF, Xiao, TY: Fast realization algorithms for determining regularization parameters in linear inverse problems. Inverse Probl. 17, 281-291 (2001)
41. Zou, Y, Wang, L, Zhang, R: Cubically convergent methods for selecting the regularization parameters in linear inverse problems. J. Math. Anal. Appl. 356, 355-362 (2009)
42. Kunisch, K, Zou, J: Iterative choices of regularization parameters in linear inverse problems. Inverse Probl. 14, 1247-1264 (1998)


Acknowledgements

The authors thank the editor and the anonymous referees for their valuable comments and helpful suggestions that improved the quality of their article.

This work is supported by the MESRS of Algeria (CNEPRU Project B01120090003).

Author information


Correspondence to Nadjib Boussetila.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Appendix: Tables for Examples 1 and 2 (discrete data)


Figures 1-8 show the comparison between the exact solution and its computed approximation for different values of n (\(n=2,3\)).

Figure 1: Example 1, noise level \(=1/1{,}000\), \(n=2\).

Figure 2: Example 1, noise level \(=1/100\), \(n=2\).

Figure 3: Example 1, noise level \(=1/1{,}000\), \(n=3\).

Figure 4: Example 1, noise level \(=1/100\), \(n=3\).

Figure 5: Example 2, noise level \(=1/1{,}000\), \(n=2\).

Figure 6: Example 2, noise level \(=1/100\), \(n=2\).

Figure 7: Example 2, noise level \(=1/1{,}000\), \(n=3\).

Figure 8: Example 2, noise level \(=1/100\), \(n=3\).

Conclusion. From Tables 1-4 we see that the numerical results agree with the theoretical results.

Table 1: Example 1, \(n=2\)
Table 2: Example 1, \(n=3\)
Table 3: Example 2, \(n=2\)
Table 4: Example 2, \(n=3\)

The projected Tikhonov regularization method developed in this investigation for solving Fredholm integral equations of the first kind is simple and effective: the dimension of the projection subspace is very small (\(n=2,3\)), and the regularized solution remains stable for regular data even under strong noise (\(\varepsilon =1/100\)).

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Neggal, B., Boussetila, N. & Rebbani, F. Projected Tikhonov regularization method for Fredholm integral equations of the first kind. J Inequal Appl 2016, 195 (2016). https://doi.org/10.1186/s13660-016-1137-6


Keywords

  • ill-posed problems
  • integral equation of the first kind
  • projected Tikhonov regularization method