Convergence analysis of the shrinking approximants for fixed point problem and generalized split common null point problem

In this paper, we compute a common solution of the fixed point problem (FPP) and the generalized split common null point problem (GSCNPP) via the inertial hybrid shrinking approximants in Hilbert spaces. We show that the approximants can be easily adapted to various extensively analyzed theoretical problems in this framework. Finally, we furnish a numerical experiment to analyze the viability of the approximants in comparison with the results presented in (Reich and Tuyen in Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 114:180, 2020).


Introduction
The triplet (ℋ, ⟨·, ·⟩, ‖·‖) represents a real Hilbert space, the inner product, and the induced norm, respectively. For an operator U : K → K, Fix(U) denotes the set of all fixed points of the operator U, where K is a nonempty closed convex subset of ℋ. Recall that the operator U is called η-demimetric [46], where η ∈ (−∞, 1), if Fix(U) ≠ ∅ and ⟨p − q, (Id − U)p⟩ ≥ (1/2)(1 − η)‖(Id − U)p‖² for all p ∈ K and q ∈ Fix(U), where Id denotes the identity operator. The η-demimetric operator is equivalently defined by ‖Up − q‖² ≤ ‖p − q‖² + η‖(Id − U)p‖². Such operators arise in the analysis of inverse problems, signal processing, and image reconstruction [3–6, 11, 12, 14–19, 23, 24, 26, 28–30, 32, 34, 35, 37, 42, 43, 53–57]. In 2007, Aoyama et al. [2] suggested Halpern [33] type approximants for an infinite family of nonexpansive operators satisfying the AKTT-condition ∑_{k=1}^∞ sup_{p∈X} ‖U_{k+1}p − U_k p‖ < ∞ for any bounded subset X of ℋ. The following construction of the operator S_k for a countably infinite family of η-demimetric operators does not require the AKTT-condition and hence improves the performance of the approximants: where 0 ≤ λ_m ≤ 1 and Ū_m p = ρp + (1 − ρ)((1 − γ)Id + γU_m)p for all p ∈ K, with U_m being an η-demimetric operator, ρ ∈ (0, 1), and 0 < γ < 1 − η. It is well known in the context of the operator S_k that each Ū_m is nonexpansive and the limit lim_{k→∞} Q_{k,m} exists. Moreover, Sp = lim_{k→∞} S_k p = lim_{k→∞} Q_{k,1} p for all p ∈ K.
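To make the demimetric inequality concrete, the following sketch checks it numerically for the illustrative operator Up = −p/2 on ℝ³ (our choice, not from the paper), whose unique fixed point is 0; a direct computation shows it is η-demimetric with η = −1/3, and in fact the inequality is tight for this operator.

```python
import numpy as np

# Illustrative check of the eta-demimetric inequality
#   <p - q, (Id - U)p>  >=  (1/2)(1 - eta) ||(Id - U)p||^2
# for U p = -p/2 on R^3 (Fix(U) = {0}) with eta = -1/3.  Both U and
# eta are hypothetical choices for this sketch, not from the paper.
def U(p):
    return -0.5 * p

def demimetric_gap(p, eta=-1.0 / 3.0):
    q = np.zeros_like(p)               # the unique fixed point of U
    r = p - U(p)                       # (Id - U)p
    return np.dot(p - q, r) - 0.5 * (1.0 - eta) * np.dot(r, r)

rng = np.random.default_rng(0)
gaps = [demimetric_gap(rng.normal(size=3)) for _ in range(1000)]
print("inequality holds on all samples:", min(gaps) >= -1e-10)
```

For this particular U the left- and right-hand sides coincide, so η = −1/3 is the sharpest admissible constant.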
This implies that Fix(S) = ∩_{k=1}^∞ Fix(S_k) [36, 49]. The following concept of a split convex feasibility problem (SCFP) is presented in [20]: Let H and W be nonempty closed convex subsets of real Hilbert spaces ℋ₁ and ℋ₂, respectively. In the SCFP, we compute p ∈ H such that Vp ∈ W, (1.1) where V : ℋ₁ → ℋ₂ is a bounded linear operator. The SCFP is a particular case of the following split common null point problem (SCNPP) for maximal monotone operators: Let A₁ ⊆ ℋ₁ × ℋ₁ and A₂ ⊆ ℋ₂ × ℋ₂ be two monotone operators such that Ω = A₁⁻¹(0) ∩ V⁻¹(A₂⁻¹(0)) ≠ ∅. In the SCNPP, we compute p ∈ Ω. Some interesting results on the SCNPP via iterative approximants can be found in [13, 21, 22, 44]. It is worth mentioning that the concept of the SCNPP has been extended to the concept of a generalized split common null point problem (GSCNPP) in Hilbert spaces [39, 40]. In the GSCNPP, we compute a point in (1.2), where A_j : ℋ_j → 2^{ℋ_j}, j ∈ {1, 2, . . . , N}, is a finite family of maximal monotone operators, and V_j : ℋ_j → ℋ_{j+1}, j ∈ {1, 2, . . . , N − 1}, is a finite family of bounded linear operators. From the perspective of optimization, problem (1.2) has been analyzed via different iterative approximants. A variant of the classical CQ-algorithm, essentially due to Byrne [13], is employed in [39], whereas shrinking projection approximants are analyzed in [40] to obtain the strong convergence results in Hilbert spaces. It is therefore natural to ask whether we can devise strongly convergent approximants to compute a solution of the GSCNPP and the fixed point problem of an infinite family of operators without employing the AKTT-condition.
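For intuition, the SCFP (1.1) can be solved by the classical CQ-iteration of Byrne [13], p_{k+1} = P_H(p_k − γ Vᵀ(Id − P_W)Vp_k) with 0 < γ < 2/‖V‖². The sketch below runs it on a toy instance in which H and W are boxes (so the projectors are coordinatewise clips); the matrix V and the boxes are illustrative choices, not the paper's data.

```python
import numpy as np

# A minimal sketch of Byrne's CQ iteration for the SCFP (1.1):
#   p_{k+1} = P_H(p_k - gamma * V^T (Id - P_W) V p_k),  0 < gamma < 2/||V||^2.
# H = W = [-1, 1]^2 and the matrix V are illustrative, not the paper's data.
def proj_box(x):
    return np.clip(x, -1.0, 1.0)

V = np.array([[1.0, 0.0], [1.0, 1.0]])
gamma = 1.0 / np.linalg.norm(V, 2) ** 2      # safely below 2 / ||V||^2
p = np.array([5.0, -5.0])
for _ in range(2000):
    r = V @ p - proj_box(V @ p)              # residual (Id - P_W) V p
    p = proj_box(p - gamma * (V.T @ r))      # gradient step, then P_H
print("p in H:", np.all(np.abs(p) <= 1.0),
      "| Vp in W:", np.all(np.abs(V @ p) <= 1.0 + 1e-9))
```

The iterate lands in H with Vp in W, i.e. a solution of (1.1) for this toy data.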
To answer the above question, we consider the following GSCNPP and FPP: For the computation of a solution of problem (1.3), we employ hybrid shrinking approximants embedded with the inertial extrapolation technique, essentially due to Polyak [38] (see also [1, 7–10]), in Hilbert spaces.
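The inertial extrapolation technique adds a momentum term w_k = d_k + μ_k(d_k − d_{k−1}) before the main update. The sketch below shows Polyak's heavy-ball idea on a one-dimensional quadratic; the step size and inertial parameter are illustrative, not values from the paper.

```python
# Polyak-style inertial (heavy-ball) iteration on f(x) = (x - 3)^2 / 2:
#   w_k     = x_k + mu * (x_k - x_{k-1})     (inertial extrapolation)
#   x_{k+1} = w_k - alpha * f'(x_k)
# The parameters mu and alpha are illustrative, not from the paper.
mu, alpha = 0.3, 0.5
x_prev, x = 10.0, 10.0
for _ in range(200):
    w = x + mu * (x - x_prev)                # inertial step
    x_prev, x = x, w - alpha * (x - 3.0)     # gradient step at x_k
print("limit:", round(x, 8))
```

The iterates converge to the minimizer x = 3; the inertial term reuses the previous displacement d_k − d_{k−1} to accelerate convergence, which is exactly the role it plays inside the hybrid shrinking approximants.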
The rest of the paper is organized as follows: In Sect. 2, we present mathematical preliminaries. We establish strong convergence results of the approximants and their variant, namely the Halpern-type approximants, in Sect. 3. In Sect. 4, we elaborate on the adaptability of the approximants to various extensively analyzed theoretical problems in this framework. Section 5 provides a numerical experiment to analyze the viability of the approximants in comparison with the existing results.

Preliminaries
We start this section with mathematical preliminary notions. We always assume that K is a nonempty closed convex subset of a real Hilbert space ℋ₁.
Recall that the nearest point projector P_K of ℋ₁ onto K ⊆ ℋ₁ is defined so that for every p ∈ ℋ₁ there is a unique point P_K p ∈ K such that ‖p − P_K p‖ ≤ ‖p − q‖ for all q ∈ K. Note here that the nearest point projector has the following characteristic property: q = P_K p if and only if ⟨p − q, w − q⟩ ≤ 0 for all w ∈ K. In what follows, gra(A₁) and A₁⁻¹(0) denote the graph and the set of zeros of a set-valued operator A₁ ⊆ ℋ₁ × ℋ₁, respectively. If a set-valued operator A₁ satisfies ⟨p − q, t − w⟩ ≥ 0 for all (p, t), (q, w) ∈ gra(A₁), then A₁ is called monotone.
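The variational characterization q = P_K p ⇔ ⟨p − q, w − q⟩ ≤ 0 for all w ∈ K can be checked numerically. The sketch below uses the closed unit ball as an illustrative K, for which P_K p = p / max(1, ‖p‖).

```python
import numpy as np

# Numerical check of the projector characterization
#   q = P_K p  iff  <p - q, w - q> <= 0 for all w in K,
# with K the closed unit ball in R^3 (an illustrative choice), so that
# P_K p = p / max(1, ||p||).
def proj_ball(p):
    return p / max(1.0, np.linalg.norm(p))

rng = np.random.default_rng(1)
p = rng.normal(size=3) * 5.0
q = proj_ball(p)
for _ in range(1000):
    w = rng.normal(size=3)
    w = w / max(1.0, np.linalg.norm(w))   # sample a point w in K
    assert np.dot(p - q, w - q) <= 1e-10
print("characterization verified on all samples")
```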
Recall also that a monotone operator A₁ is called maximal monotone if its graph is not strictly contained in the graph of any other monotone operator on ℋ₁. The well-defined resolvent operator J_θ^{A₁} := (Id + θA₁)⁻¹, θ > 0, associated with a maximal monotone operator A₁ is single-valued and firmly nonexpansive.
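As a concrete instance, for A = ∂|·| on ℝ (a maximal monotone operator) the resolvent J_θ^A = (Id + θA)⁻¹ is the soft-thresholding map. The sketch below verifies its firm nonexpansiveness, (Jx − Jy)(x − y) ≥ (Jx − Jy)², on random pairs; the threshold θ is an illustrative value.

```python
import numpy as np

# Resolvent of A = subdifferential of |.| on R:
#   J_theta(x) = (Id + theta*A)^{-1}(x) = sign(x) * max(|x| - theta, 0)
# (soft-thresholding).  Firm nonexpansiveness means
#   (J x - J y)(x - y) >= (J x - J y)^2.
def resolvent(x, theta=0.7):
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

rng = np.random.default_rng(2)
for _ in range(1000):
    x, y = rng.normal(size=2) * 3.0
    d = resolvent(x) - resolvent(y)
    assert d * (x - y) >= d * d - 1e-12   # firm nonexpansiveness
print("firm nonexpansiveness verified")
```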

Convergence analysis of the approximants
For the computation of a solution of (1.3), we propose the following approximants:
Step 0. Choose arbitrary d_0, d_1 ∈ ℋ₁ and set k := 1; Iterative Steps: Given d_k ∈ ℋ₁, calculate a_k, b_k, and c_k as follows: The approximants abort if a_k = b_k = c_k = d_k, and then d_k is the required approximation. Otherwise, set k := k + 1 and reiterate Step 1.
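To convey the shrinking mechanism behind these steps, the following one-dimensional sketch (a deliberate simplification, not Algorithm 1 itself) shrinks an interval C_k by the half-space {z : |Ux_k − z| ≤ |x_k − z|} and projects the initial point onto C_{k+1}; the operator Ux = x/2 + 1 is an illustrative contraction with fixed point 2.

```python
# Simplified 1-D shrinking projection scheme (illustrative only, not
# the paper's Algorithm 1): given a nonexpansive U with a fixed point,
#   C_{k+1} = C_k  intersect  {z : |U x_k - z| <= |x_k - z|},
#   x_{k+1} = P_{C_{k+1}} x_1.
# In R the cut set is a half-line with endpoint (x_k + U x_k) / 2.
def U(x):
    return 0.5 * x + 1.0            # contraction, Fix(U) = {2}

lo, hi = -100.0, 100.0              # C_0
x1 = 10.0
x = x1
for _ in range(100):
    y = U(x)
    mid = 0.5 * (x + y)
    if y <= x:                      # keep the half-line {z <= mid}
        hi = min(hi, mid)
    else:                           # keep the half-line {z >= mid}
        lo = max(lo, mid)
    x = min(max(x1, lo), hi)        # x = P_{[lo, hi]} x1
print("limit:", round(x, 6), "| fixed point stays inside:", lo <= 2.0 <= hi)
```

The fixed point always survives each cut (since |Ux − 2| ≤ |x − 2|), so the shrinking intervals retain it and the projected iterates converge to it; Algorithm 1 realizes the same idea with the half-spaces H_k and W_k in ℋ₁.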
We assume the following control conditions on the approximants: Proof We divide the proof into several steps.
Step 1. We show that the approximants (d_k) defined in Algorithm 1 are stable. Claim: H_k and W_k are closed and convex subsets of ℋ₁ for all k ≥ 0. Consider, for each k ≥ 0, the following representation of the subsets H_k and W_k: The claim follows from the above representations of the closed and convex subsets H_k and W_k of ℋ₁ for all k ≥ 1. Further, the sets Ω and Fix(S) (from Lemma 2.1) are closed and convex. Hence we have that Γ = Ω ∩ Fix(S) is nonempty, closed, and convex. Let q ∈ Γ ⊂ H_0 = ℋ₁. Now it follows from Algorithm 1 that This shows that Γ is contained in H_k for all k ≥ 1. Now assume that Γ ⊂ W_k for some k ≥ 1. Using the nonexpansiveness of J_{θ_{j,k}}^{A_j}, (3.1), and (3.2), we get It follows from estimate (3.3) that Γ ⊂ W_{k+1}, and hence Γ ⊂ H_{k+1} ∩ W_{k+1}. Consequently, the projection step of Algorithm 1 is well defined, that is, the approximants (d_k) are stable.
Step 2. We next show that lim_{k→∞} ‖d_k − d_1‖ exists.
Observe that These estimates establish the boundedness of the approximants (‖d_k − d_1‖). Since This yields that the approximants (‖d_k − d_1‖) are nondecreasing, and hence lim_{k→∞} ‖d_k − d_1‖ exists. Step 3. We now show that q̄ ∈ Γ. We first compute By (3.3) the above computation yields In view of the control condition (C1), we get As a consequence of estimates (3.5) and (3.6), we also obtain that This estimate, in the light of estimate (3.5) and the control condition (C1), yields that Similarly, we infer from estimates (3.5) and (3.8) that and from estimates (3.6) and (3.9) that In view of the control condition (C2), consider the variant of estimate (3.2): Letting k → ∞ and using (3.9) and (C2), we have lim_{k→∞} ‖a_k − S_k a_k‖ = 0. (3.11) Observe that The above computation, in view of estimates (3.10) and (3.11), yields Employing estimate (3.8), the above computation yields Reasoning as above, we infer from estimates (3.8) and (3.13) that lim_{k→∞} ‖c_k − V_{j−1} b_k‖ = 0 (3.14) and from estimates (3.9) and (3.14) that Using (3.14), we estimate that for all j ∈ {1, 2, . . . , N}. Then from Lemma 2.4 and (C4) we obtain the inequality This estimate implies that for all j ∈ {1, 2, . . . , N}. By Lemma 2.5 we have V_{j−1} q̄ ∈ Fix(J_θ^{A_j}) for all j ∈ {1, 2, . . . , N}, that is, q̄ ∈ Ω. It remains to show that q̄ ∈ Fix(S). Observe that Using (3.12) and Lemma 2.6, this estimate implies that lim_{k→∞} ‖b_k − Sb_k‖ = 0. This, together with the fact that b_{k_t} ⇀ q̄, implies by Lemma 2.5 that q̄ ∈ Fix(S) = ∩_{k=1}^∞ Fix(S_k). Hence q̄ ∈ Γ.
Step 4. The final part is showing that d_k → q = P_Γ d_1.
Since q = P_Γ d_1 and q̄ ∈ Γ, Lemma 2.3 implies that Using the uniqueness of q yields the equality q̄ = q. From Step 2 it follows that ‖d_{k_t} − d_1‖ ≤ ‖q − d_1‖, and from Lemma 2.3 we obtain lim_{k→∞} d_k = q̄ = q = P_Γ d_1.
We now consider the following Halpern-type variant of Algorithm 1:

Theorem 3.3 Any approximants defined via Algorithm 2, under the control conditions (C1)–(C4), converge strongly to an element in Γ.
Proof Observe that for each k ≥ 1, the subsets H_k have the following form: Arguing similarly as in the proof of Theorem 3.1, the rest of the proof of Theorem 3.3 follows immediately and is therefore omitted.

Applications
Our main result in the previous section has various interesting applications in the field. We present some of these applications below.

Generalized split feasibility problems
In the context of generalized split feasibility problems [20], we recall that the indicator function ι_K of K ⊆ ℋ₁ is a proper convex lower semicontinuous (PCLS) function. Therefore ∂ι_K, the subdifferential of ι_K, is maximal monotone and satisfies ∂ι_K(p) = N_K(p), where N_K(p) denotes the normal cone of K at p. From this we can deduce that the resolvent of ∂ι_K coincides with the nearest point projector, that is, J_θ^{∂ι_K} = P_K for all θ > 0. Consider the generalized split feasibility problem, where K_j ⊆ ℋ_j, j ∈ {1, 2, . . . , N}. Theorem 4.1 Assume that Γ = Ω ∩ Fix(S) ≠ ∅. Then the approximants initialized by arbitrary d_0, d_1 ∈ ℋ₁ and H_0 = W_0 = ℋ₁ with the nonincreasing sequences ρ_k, τ_k ⊂ (0, 1), μ_k ∈ [0, 1), and γ_k ∈ (0, ∞) for k ≥ 1 defined as under the control conditions (C1)–(C4), converge strongly to an element in Γ.
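The identity J_θ^{∂ι_K} = P_K can be sanity-checked numerically: the resolvent is the proximal map of ι_K, i.e. the minimizer of ι_K(z) + ‖z − p‖²/(2θ), which for an interval K is just clipping, independently of θ. The interval, grid, and test points below are illustrative choices.

```python
import numpy as np

# Check that the resolvent of the subdifferential of the indicator of
# K = [-1, 1] equals the projector P_K (clipping), for any theta > 0:
#   J_theta(p) = argmin_{z in K} ||z - p||^2 / (2 theta) = P_K p.
grid = np.linspace(-1.0, 1.0, 200001)       # fine grid over K

def resolvent_indicator(p, theta):
    # brute-force prox of the indicator over the grid
    return grid[np.argmin((grid - p) ** 2 / (2.0 * theta))]

for p in [-3.0, -0.4, 0.0, 2.5]:
    for theta in [0.1, 1.0, 10.0]:
        assert abs(resolvent_indicator(p, theta) - np.clip(p, -1.0, 1.0)) < 1e-4
print("resolvent of the indicator's subdifferential matches P_K")
```

The θ-independence seen here is exactly why the generalized split feasibility problem reduces to a GSCNPP with A_j = ∂ι_{K_j}.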

Generalized split variational inequality problems
The well-known variational inequality problem deals with the computation of a point p ∈ K such that ⟨Ap, q − p⟩ ≥ 0 for all q ∈ K, where A : K → ℋ₁ is a nonlinear monotone operator defined with respect to K ⊆ ℋ₁. By Sol(K, A) we denote the set of all solutions associated with the variational inequality problem. We consider the following problem: Theorem 4.2 Assume that Γ = Ω ∩ Fix(S) ≠ ∅. Then the approximants initialized by arbitrary d_0, d_1 ∈ ℋ₁ and H_0 = W_0 = ℋ₁ with the nonincreasing sequences ρ_k, τ_k ⊂ (0, 1), μ_k ∈ [0, 1), and γ_k ∈ (0, ∞) for k ≥ 1 defined as under the control conditions (C1)–(C4), converge strongly to an element in Γ.
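A useful fact behind such results is that p ∈ Sol(K, A) if and only if p = P_K(p − λAp) for any λ > 0, which suggests the projected fixed-point iteration sketched below; the operator A, the set K, and all numbers are illustrative toy data, not the paper's.

```python
import numpy as np

# Solve the VI  <Ap, q - p> >= 0 for all q in K  with A(p) = p - b
# (strongly monotone) and K the closed unit ball, via the fixed-point
# iteration  p <- P_K(p - lam * A(p)).  For this A the solution is
# P_K(b).  All data here are illustrative.
def proj_ball(p):
    return p / max(1.0, np.linalg.norm(p))

b = np.array([3.0, 4.0])
lam = 0.5
p = np.zeros(2)
for _ in range(200):
    p = proj_ball(p - lam * (p - b))
print("solution:", np.round(p, 6))           # expect b / ||b|| = (0.6, 0.8)
```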
Note that h_{A_j} is maximal monotone [41]. The rest of the proof now follows from Theorem 3.1.

Signal processing
This subsection deals with the signal recovery problem, which we aim to solve by applying Theorem 4.1. The signal recovery problem is denoted by the following underdetermined formalism: κ = Vd + ϑ, (4.4) where κ ∈ ℝ^M is the measured data with noise ϑ, d ∈ ℝ^N is the sparse original data to be recovered, and V : ℝ^N → ℝ^M (M < N) is the bounded linear observation matrix. Formalism (4.4) is equivalent to the well-known least absolute shrinkage and selection operator (LASSO) problem [51] in the following convex constrained optimization formalism: min_{d ∈ ℝ^N} (1/2)‖Vd − κ‖² subject to ‖d‖₁ ≤ t. Here d* is called the estimated signal of d. For Theorem 4.1, we choose μ_k = 1/(100k + 1)^{1.04}, ρ_k = 1/k^{1.02}, t = m − 0.001, and ϑ = 0. We recover the signals for the following two tests: From Table 1 and Figs. 1 and 2 we conclude that IGSNPP as in Theorem 4.1 reconstructs the original signal (A) faster than the algorithm for GSNPP as in Theorem 4.4 of [40] in compressed sensing. Moreover, the error function values (B) and objective function values (C) generated by IGSNPP as in Theorem 4.1 converge faster than those generated by the algorithm for GSNPP as in Theorem 4.4 of [40].
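For orientation, the penalized form of the LASSO problem is commonly solved by the iterative shrinkage-thresholding algorithm (ISTA), d_{k+1} = soft_{τ/L}(d_k − (1/L)Vᵀ(Vd_k − κ)). The sketch below runs ISTA on a small synthetic compressed-sensing instance; the dimensions, seed, and penalty τ are illustrative and are not the paper's test data or its algorithm.

```python
import numpy as np

# ISTA for the penalized LASSO  min 0.5||V d - kappa||^2 + tau ||d||_1
# on a toy compressed-sensing instance (illustrative data only).
rng = np.random.default_rng(3)
M, N = 10, 20
V = rng.normal(size=(M, N)) / np.sqrt(M)
d_true = np.zeros(N)
d_true[[2, 11]] = [1.5, -2.0]            # sparse original signal
kappa = V @ d_true                       # noiseless measurements

tau = 0.01
L = np.linalg.norm(V, 2) ** 2            # Lipschitz constant of the gradient
soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def objective(d):
    return 0.5 * np.linalg.norm(V @ d - kappa) ** 2 + tau * np.sum(np.abs(d))

d = np.zeros(N)
f0 = objective(d)
for _ in range(500):
    d = soft(d - (V.T @ (V @ d - kappa)) / L, tau / L)
print("objective decreased:", objective(d) < f0)
```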

Numerical experiment and results
In this section, we focus on the numerical implementation of our proposed algorithm. Comparison with Reich and Tuyen [40] shows the effectiveness and efficiency of our proposed algorithm. All codes were written in MATLAB R2020a and run on a laptop with an Intel(R) Core(TM) i3-3217U CPU @ 1.80 GHz and 4.00 GB RAM.
Example 5.1 Let ℋ₁ = ℝ² and ℋ₂ = ℝ⁴ with the inner product defined by ⟨x, y⟩ = xᵀy and the induced usual norm ‖·‖.
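In such a finite-dimensional setting, a bounded linear operator V : ℝ² → ℝ⁴ is a 4×2 matrix, and its operator norm ‖V‖ (which governs admissible step sizes, e.g. γ ∈ (0, 2/‖V‖²) in CQ-type schemes) is its largest singular value. The matrix below is an illustrative placeholder, since the example's data are not reproduced here.

```python
import numpy as np

# For V : R^2 -> R^4 (a 4x2 matrix), the operator norm ||V|| is the
# largest singular value; CQ-type step sizes typically require
# gamma in (0, 2 / ||V||^2).  The matrix here is an illustrative
# placeholder, not the data of Example 5.1.
V = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0],
              [0.0, 1.0]])
op_norm = np.linalg.norm(V, 2)           # largest singular value
gamma = 1.0 / op_norm ** 2               # an admissible step size

# sanity check: ||V x|| <= ||V|| ||x|| on random vectors
rng = np.random.default_rng(4)
for _ in range(1000):
    x = rng.normal(size=2)
    assert np.linalg.norm(V @ x) <= op_norm * np.linalg.norm(x) + 1e-10
print("admissible step size:", 0.0 < gamma < 2.0 / op_norm ** 2)
```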

Conclusions
The problem of computing, via unifying approximants, a common solution of a finite family of GSCNPPs and of the FPP for a countably infinite family of nonlinear operators has its own importance in the fields of monotone operator theory and fixed point theory. We proved that the approximants perform effectively and efficiently in comparison with the existing approximants, in particular those studied in Hilbert spaces. The theoretical framework of the algorithm has been strengthened with an appropriate numerical example. Moreover, this framework has also been applied to various instances of split inverse problems. We would like to emphasize that the abovementioned problems occur naturally in many applications, so iterative algorithms are inevitable in this field of investigation. Consequently, our theoretical framework constitutes an important topic for future research.