Trigonometric F^mn-Transform of Multi-Variable Functions and Its Application to Partial Differential Equations and Image Processing

In this study, we extend the trigonometric F^m-transform technique for one-variable functions in order to improve its approximation properties at the end points of [a, b], and then generalize the extended technique to functions of more variables. The approximation and convergence properties of the direct and inverse multi-variable extended trigonometric F^m-transforms are discussed. The applicability of multi-variable trigonometric F^m-transforms to the approximation of multi-variable functions is illustrated by several examples. Moreover, direct formulas for the multi-variable extended trigonometric F^m-transforms of partial derivatives of multi-variable functions are obtained and applied to solving the Cauchy problem for the transport equation. The application of multi-variable extended trigonometric F^m-transforms to image compression is also described. Examples validating the obtained results on partial differential equations and image compression are given, and the results are compared with existing ones in the literature.


Introduction
The F-transform is an important topic in fuzzy modeling. The technique of the F-transform for functions of one variable was first introduced by I. Perfilieva [1]. It has found applications in a number of fields, such as the construction of approximate models, approximation of functions, filtering, solution of differential equations, and data compression [2,3,4,5,6]. Initially, the F^m-transform of one-variable functions was introduced as a generalization of the F-transform. In this paper, we construct orthogonal trigonometric basis functions by applying the Gram-Schmidt procedure to a linearly independent system of trigonometric functions in Section 4, and we define the direct and inverse tF^mn-transform of a two-variable function f with trigonometric components of degree mn, which generalize the direct and inverse tF^m-transform. In Section 5, some useful approximation properties of the direct and inverse tF^mn-transform components of an original function are investigated. Also, in order to illustrate the established theory of approximation properties, we report plots of the error function for some test functions. In Section 6, direct formulas for the two-variable tF^mn-transform of partial derivatives of functions and their approximation properties are derived. Then, a numerical solution of the Cauchy problem for the transport equation using the tF^mn-transform is investigated, and examples validating our method and comparing the obtained results with existing results in the literature are given. In Section 7, the tF^mn-transform is applied to image compression. We give various examples and compare the obtained results with the other cases of the F-transform. Finally, Appendix A is devoted to the details of the results obtained in Section 6.

Preliminaries
In this section, we present an overview of the necessary concepts and the main idea of the extended trigonometric F^m-transform (for short, tF^m) of one-variable functions, which is a modified version of the tF^m-transform stated in [8].

Fuzzy partition of an interval and its extension
One of the most important concepts behind the tF^m-transform is the fuzzy partition stated in [8], so this section is devoted to introducing this concept. Moreover, in order to improve the approximation properties, we state the extension of the basic functions according to [3]. Throughout this section, we consider the interval [a, b] of real numbers as the common domain of all functions.
The membership functions A 1 , ..., A p are called basic functions.
The Ruspini condition on [a, b] reads $\sum_{k=1}^{p} A_k(x) = 1$. In [3], the authors extended an h-uniform fuzzy partition by extending [a, b] to [a − h, b + h]; moreover, Ā_k(x) = A_k(x) for k = 2, ..., p − 1. Hereafter, we set x_0 = a − h and x_{p+1} = b + h.
The following example is an extended h-uniform fuzzy partition which plays an essential role in our study. These basic functions are illustrated in Fig. 1(a).

2.2 Space L_2(Ā_k) and its subspace L^m_2(Ā_k)

In this section, we modify the linear space L_2(A_k), equipped with a weighted inner product with the weight function A_k, and its subspace L^m_2(A_k) spanned by the trigonometric basis (for more details see [8]). Let us consider a function f : [a, b] → R. We can extend f from [a, b] to [a − h, b + h] in different ways. As an example, the extension of f(x) = √(1 − x²) from [−1, 1] to [−1.5, 1.5] is shown in Fig. 1(b), whereas the extension of f(x) = x² from [−1, 1] to [−1.5, 1.5] is the function itself. The extension of f is denoted by f̄ throughout this paper. Let Ā_1, ..., Ā_p be an extended h-uniform fuzzy partition of [a, b] and k ∈ {1, ..., p}. The space L_2(Ā_k) is the set of square-integrable functions f̄, ḡ : [x_{k−1}, x_{k+1}] → R. The inner product of f̄, ḡ ∈ L_2(Ā_k) is defined by
$\langle \bar{f}, \bar{g} \rangle_k = \int_{x_{k-1}}^{x_{k+1}} \bar{f}(x)\,\bar{g}(x)\,\bar{A}_k(x)\,dx.$
The following lemma is a modified version of Lemmas 2 and 3 in [8].
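To make the partition construction concrete, the following is a minimal Python sketch of an extended h-uniform fuzzy partition with sinusoidal (raised-cosine) basic functions. The node placement x_k = a + (k − 1)h with h = (b − a)/(p − 1) and the specific raised-cosine shape are standard-formula assumptions for illustration, not taken verbatim from this paper.

```python
import math

def sinusoidal_basic(x, xk, h):
    """Raised-cosine ("sinusoidal shaped") basic function centered at node xk."""
    if abs(x - xk) >= h:
        return 0.0
    return 0.5 * (math.cos(math.pi * (x - xk) / h) + 1.0)

def extended_partition(a, b, p):
    """Nodes x_1 = a, ..., x_p = b of an h-uniform partition; the extension
    only widens the support of the first and last basic functions to
    [a - h, a + h] and [b - h, b + h] (so x_0 = a - h, x_{p+1} = b + h)."""
    h = (b - a) / (p - 1)
    return h, [a + k * h for k in range(p)]

h, nodes = extended_partition(0.0, 1.0, 5)
# Ruspini condition: on [a, b] the basic functions sum to 1 at every point.
ruspini_sum = sum(sinusoidal_basic(0.37, xk, h) for xk in nodes)
```

At every interior point only two neighboring basic functions are nonzero, and their raised-cosine values add up to 1, which is the Ruspini condition.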
The orthogonal trigonometric functions u^0_k, u^1_k, ..., u^m_k, v^1_k, ..., v^m_k are linearly independent. Moreover, their norms can be computed for l = 1, ..., m and k = 1, ..., p. Definition 2.5. Let m ≥ 0; the linear subspace of L_2(Ā_k) spanned by the trigonometric basis is denoted by L^m_2(Ā_k). Remark 2.6. Since we will also deal with fuzzy partitions on the y-axis throughout this paper, we use different notations for the trigonometric basis depending on the variable y, for l = 1, ..., m and k = 1, ..., p.
Theorem 2.7. [30] Let H be a Hilbert space with the norm ‖·‖, and let B be a closed linear subspace of H. Then, for every element f ∈ H, there exists a unique best approximation g_0 ∈ B in the sense that $\|f - g_0\| = \min_{g \in B} \|f - g\|$. Moreover, f − g_0 ∈ B^⊥, and g_0 is called the orthogonal projection of f onto B.

One-variable extended direct and inverse t F m -transform
In this section, we define the extended direct and inverse tF^m-transform and some of their properties.
[8] Let Ā_1, ..., Ā_p be extended sinusoidal shaped basic functions and u^0_k, u^i_k, v^i_k (i = 1, ..., m) be the trigonometric basis of L^m_2(Ā_k). Consider f ∈ L_2(Ā_k) for k = 1, ..., p. We define the extended direct tF^m-transform of a function f with respect to Ā_1, ..., Ā_p as the vector tF^m(f) = (tF^m_1, ..., tF^m_p), where the k-th component tF^m_k is given by
$tF^m_k(x) = c_{k,0}\,u^0_k(x) + \sum_{i=1}^{m}\left(c_{k,i}\,u^i_k(x) + d_{k,i}\,v^i_k(x)\right),$
with $c_{k,0} = \langle \bar{f}, u^0_k\rangle_k / \langle u^0_k, u^0_k\rangle_k$, $c_{k,i} = \langle \bar{f}, u^i_k\rangle_k / \langle u^i_k, u^i_k\rangle_k$, and $d_{k,i} = \langle \bar{f}, v^i_k\rangle_k / \langle v^i_k, v^i_k\rangle_k$. In the following lemmas, we improve the analogous approximation results in [8].
Lemma 2.10. Let the trigonometric functions tF^m_k be the tF^m-transform components of f ∈ L_2(Ā_k), and let f̄ be four times continuously differentiable on [a − h, b + h]. Then the stated estimate holds for k = 1, ..., p. Proof. We approximate c_{k,0} by the trapezium formula (Euler-Maclaurin summation formula in [31]) with the three nodes {x_{k−1}, x_k, x_{k+1}}. In a similar way, we approximate c_{k,i} for i = 1, ..., m with the same three nodes. Moreover, we approximate d_{k,i} by the trapezium formula with five nodes. By Taylor expansion, we estimate the remainder term of f̄, and the claim follows from the above equations. The subsequent corollary follows immediately from Lemma 2.10.
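The direct tF^m components are orthogonal projections of f̄ in the weighted space L_2(Ā_k). The sketch below computes such a projection numerically. Since the raw system {1, cos(iπ(x − x_k)/h), sin(iπ(x − x_k)/h)} is generally not orthogonal under the weighted inner product (which is precisely why the paper orthogonalizes it), this sketch solves the normal equations instead of using the paper's orthogonalized basis; the function names and quadrature resolution are illustrative assumptions.

```python
import math

def wdot(f, g, xk, h, n=2000):
    """Weighted inner product <f, g>_k = integral of f*g*A_k over [xk-h, xk+h]
    (composite trapezium rule), with the raised-cosine weight A_k."""
    total = 0.0
    for i in range(n + 1):
        x = xk - h + 2.0 * h * i / n
        w = 0.5 * (math.cos(math.pi * (x - xk) / h) + 1.0)
        v = f(x) * g(x) * w
        total += v if 0 < i < n else 0.5 * v
    return total * (2.0 * h / n)

def solve(G, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(b)
    A = [G[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            fac = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= fac * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def tFm_coeffs(f, xk, h, m):
    """Coefficients of the orthogonal projection of f onto
    span{1, cos(i*pi*(x-xk)/h), sin(i*pi*(x-xk)/h) : i = 1..m} in L2(A_k)."""
    basis = [lambda x: 1.0]
    for i in range(1, m + 1):
        basis.append(lambda x, i=i: math.cos(i * math.pi * (x - xk) / h))
        basis.append(lambda x, i=i: math.sin(i * math.pi * (x - xk) / h))
    G = [[wdot(e1, e2, xk, h) for e2 in basis] for e1 in basis]
    rhs = [wdot(f, e, xk, h) for e in basis]
    return solve(G, rhs)

# A function already in the subspace is reproduced exactly (coefficients 2, 3, 0).
coeffs = tFm_coeffs(lambda x: 2.0 + 3.0 * math.cos(math.pi * x), 0.0, 1.0, 1)
```

Projecting a function that already lies in the subspace recovers its coefficients, which is a convenient sanity check for the quadrature and the linear solve.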

Fuzzy partition of a rectangle from the plane
In this section, we focus on a generalization of the fuzzy partition to two-variable functions; a special case of this generalization has been presented in [21]. We consider the rectangle D_2 = [a, b] × [c, d] as the common domain of all real-valued functions in this section.
We say that the fuzzy partition is h_x h_y-uniform if p, q ≥ 2 and the following two additional properties are fulfilled: A_{k,l}(x, y) = A_{k,l−1}(x, y − h_y) and A_{k,l+1}(x, y) = A_{k,l}(x, y − h_y) for all l = 2, ..., q − 1 and y ∈ [y_l, y_{l+1}], together with the Ruspini condition $\sum_{k=1}^{p}\sum_{l=1}^{q} A_{k,l}(x, y) = 1$.
Hereafter, we use an extended h_x h_y-uniform fuzzy partition and call it an extended fuzzy partition for short.
Example 3.5. Consider Ā_{k,l} as an extended fuzzy partition by basic functions with the following analytic representation, illustrated in Fig. 2(a); this A_{k,l} does not satisfy the Ruspini condition. Example 3.7. We consider Ā_{k,l} as an extended fuzzy partition by sinusoidal shaped basic functions with the following analytic representation, illustrated in Fig. 2(b), where k = 1, ..., p, l = 1, ..., q.
where the kl-th component F̄_{kl} is given by
$\bar{F}_{kl} = \frac{\iint \bar{f}(x, y)\,\bar{A}_{k,l}(x, y)\,dx\,dy}{\iint \bar{A}_{k,l}(x, y)\,dx\,dy}.$

Trigonometric F^mn-transform for two-variable functions
In this section, firstly, we introduce a space L 2 (Ā k,l ) and its subspace. Then utilizing this space, we define the direct and inverse trigonometricF mn -transform of two-variable functions.
4.1 Space L_2(Ā_{k,l}) and its subspace

In what follows, we construct the linear subspace of L_2(Ā_{k,l}) with the extended sinusoidal shaped basic functions. We consider Ā_{k,l}, for k = 1, ..., p, l = 1, ..., q, as an extended fuzzy partition by extended sinusoidal shaped basic functions as stated in Example 3.7. For s = 1, ..., m, t = 1, ..., n, we consider the sets of trigonometric functions given below. In the following lemmas, we apply the Gram-Schmidt process to these sets to obtain an orthogonal basis for a subspace of L_2(Ā_{k,l}).
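The Gram-Schmidt construction used here can be sketched numerically. The code below orthogonalizes {1, cos(πt/h)} under a weighted inner product with the raised-cosine weight, and also shows that the raw pair is not orthogonal, which is what motivates the orthogonalization; the weight and quadrature parameters are assumptions for illustration.

```python
import math

def wdot(f, g, h, n=4000):
    """<f, g> = integral of f*g*A over [-h, h] (composite trapezium rule),
    with the raised-cosine weight A(t) = 0.5*(cos(pi*t/h) + 1)."""
    s = 0.0
    for i in range(n + 1):
        t = -h + 2.0 * h * i / n
        v = f(t) * g(t) * 0.5 * (math.cos(math.pi * t / h) + 1.0)
        s += v if 0 < i < n else 0.5 * v
    return s * (2.0 * h / n)

def gram_schmidt(funcs, h):
    """Classical Gram-Schmidt in the weighted L2 space (no normalization)."""
    ortho = []
    for f in funcs:
        g = f
        for e in ortho:
            coef = wdot(g, e, h) / wdot(e, e, h)
            g = (lambda g, e, coef: lambda t: g(t) - coef * e(t))(g, e, coef)
        ortho.append(g)
    return ortho

h = 1.0
raw = [lambda t: 1.0, lambda t: math.cos(math.pi * t / h)]
raw_dot = wdot(raw[0], raw[1], h)    # nonzero: the raw pair is NOT orthogonal
u0, u1 = gram_schmidt(raw, h)
ortho_dot = wdot(u0, u1, h)          # vanishes after Gram-Schmidt
```

Note that the constant and the first cosine have inner product h/2 under this weight, so using the raw system's "ratio" coefficients would be wrong; the Gram-Schmidt correction (here u1(t) = cos(πt/h) − 1/2) removes exactly that overlap.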
LetĀ k,l be the extended fuzzy partition by the sinusoidal shaped basic functions as stated in Example 3.7.

1) Consider, for s = 1, ..., m, t = 1, ..., n, the linearly independent system containing the constant function 1 together with the cosine products; the orthogonal trigonometric functions U^{(s,t)}_{(k,l)} are obtained by applying the Gram-Schmidt process to this system.
2) Consider, for s = 1, ..., m, t = 1, ..., n, the linearly independent set of sine products.
Proof. Case 1. Applying the Gram-Schmidt process to the system containing 1, we use double mathematical induction to prove the stated form of U^{(s,t)}_{(k,l)}: we formulate the induction hypothesis on s, conclude the claim, and argue analogously for the induction on t. Case 2. Applying the Gram-Schmidt process to the system {sin(sπ/h_x (x − x_k)) sin(tπ/h_y (y − y_l))}, we obtain the orthogonal trigonometric functions V^{(s,t)}_{(k,l)} by recursive equations for s = 1, ..., m, t = 1, ..., n and i = 1, ..., s, j = 1, ..., t (except i = s, j = t). Again, double mathematical induction on s and t proves the stated form of V^{(s,t)}_{(k,l)}. The proofs of Cases 3 and 4 are similar.
Proof. From Lemmas 2.4 and 4.1 and the properties of the inner product, we conclude in a similar way that the set is orthogonal in L_2(Ā_{k,l}). In order to prove the linear independence of the mentioned systems, we consider the corresponding integrals over [x_k, x_{k+1}] for t = 1, ..., n. Proof. It follows from the assumption of the present theorem that f̄ − tF^{mn}_{kl} ⊥ L^{mn}_2(Ā_{k,l}). Utilizing the properties of orthogonality along with the basis functions of L^{mn}_2(Ā_{k,l}), and applying Lemmas 2.4 and 4.1 together with the definition of the inner product, we immediately complete the proof.

Direct and inverse trigonometric F^mn-transforms
Here, we generalize the direct and inverse tF^m-transform of one-variable functions to two-variable functions, calling it the tF^mn-transform. This is also a generalization of the F̄-transform of two-variable functions. Proof. Let m = n = 0, and let [tF^00_kl]_{p×q} be the tF^mn-transform of f. We claim that it coincides with the F̄-transform. We know that tF^00_kl = a^{(0,0)}_{(k,l)} for k = 1, ..., p and l = 1, ..., q. On the other hand, we have
$\frac{\iint \bar{f}(x, y)\,\bar{A}_{k,l}(x, y)\,dx\,dy}{\iint \bar{A}_{k,l}(x, y)\,dx\,dy} = \bar{F}_{kl}, \quad k = 1, ..., p,\ l = 1, ..., q.$
This completes the proof.
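The degree-(0,0) reduction above says the component is simply a weighted mean of f̄ with respect to Ā_{k,l}. A minimal numerical sketch, assuming product-type raised-cosine basic functions and a midpoint-rule quadrature (both illustrative choices):

```python
import math

def A(t, h):
    """Raised-cosine basic function with support (-h, h)."""
    return 0.5 * (math.cos(math.pi * t / h) + 1.0) if abs(t) < h else 0.0

def F00(f, xk, yl, hx, hy, n=200):
    """Degree-(0,0) component: weighted mean of f w.r.t.
    A_{k,l}(x, y) = A(x - xk)*A(y - yl), via the midpoint rule."""
    num = den = 0.0
    for i in range(n):
        x = xk - hx + hx * (2 * i + 1) / n
        for j in range(n):
            y = yl - hy + hy * (2 * j + 1) / n
            w = A(x - xk, hx) * A(y - yl, hy)
            num += f(x, y) * w
            den += w
    return num / den

const_val = F00(lambda x, y: 3.0, 0.0, 0.0, 0.5, 0.5)  # mean of a constant
odd_val = F00(lambda x, y: x, 0.0, 0.0, 0.5, 0.5)      # odd f, symmetric weight
```

A weighted mean reproduces constants exactly, and by symmetry of the weight the component of f(x, y) = x at the node (0, 0) vanishes, two quick checks that the formula behaves as a true average.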

Error analysis of the direct and inverse tF mn -transform
In this section, using the trapezium formula (Euler-Maclaurin Summation Formula in [31]) for approximating integrals, the error analysis of the direct and inverse tF mn -transform of a given function is investigated.

Error analysis of the direct tF mn -transform
Moreover, let tF^mn_kl be the tF^mn-transform components of f. Then the stated estimate holds for every k = 1, ..., p, l = 1, ..., q. Proof. We approximate the integrals (4.3)-(4.11) with the help of the trapezium formula. We only approximate Eq. (4.8); the others are approximated similarly. We set
$I(x) = \int \left[\sin\!\Big(\frac{(t+1)\pi}{h_y}(y - y_l)\Big) + \sin\!\Big(\frac{t\pi}{h_y}(y - y_l)\Big)\right] dy.$
It follows from Eq. (4.8) that the five nodes {y_{l−1}, y_l − h_y/2, y_l, y_l + h_y/2, y_{l+1}} are used in the trapezium formula to approximate I(x).
The following lemma shows that if we increase m and n, then the approximation quality of the components of the direct tF^mn-transform improves. Proof. The proof is similar to that of Lemma 2 in [32].

Error analysis of the inverse tF mn -transform
In this section, we give an error estimation of the inverse tF mn -transform in the space of continuous functions.
Proof. Utilize the proof of Theorem 3 in [8] with A_l(y) = 1. From Remark 5.2, we deduce that there exist k ∈ {1, ..., p − 1} and l ∈ {1, ..., q − 1} such that (x, y) ∈ [x_k, x_{k+1}] × [y_l, y_{l+1}], which yields the claimed bound. Proof. For brevity, we prove only Eq. (5.12); the others are proved similarly. We approximate Eq. (4.11) with the help of the trapezium formula; on the other hand, Theorem 5.1 provides the corresponding estimate. Combining these results completes the proof.

Examples
In this subsection, we give some examples and plot the error function E(x, y) = |f(x, y) − tf^mn_pq(f(x, y))| to show the accuracy of approximation by the inverse tF^mn-transform of given two-variable functions. We approximate these functions by the inverse tF^mn-transform for various values of h_x, h_y, m, and n.
Example 5.6. In this example, we consider f(x, y) = sin(xy) on [0, 1] × [0, 1]. We can find an extension of f such that f̄ is four times continuously differentiable. We illustrate E(x, y) in Figs. 3(a), 3(b), and 3(c) for various values of h_x, h_y, m, and n. These figures show that if we decrease h_x, h_y and increase m and n, then the error function decreases. It is worth noting that increasing m and n reduces the error function the most. Moreover, the approximation error of f without extending the inverse tF^mn-transform is plotted in Fig. 3(d); it shows that the error along the boundary lines is larger than in the extended case.
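The qualitative behavior reported in this example (smaller h gives a smaller error) can be reproduced for the degree-0 case with a short script. This sketch uses the ordinary inverse F-transform with raised-cosine basic functions, not the full tF^mn machinery, and the grid sizes are illustrative.

```python
import math

def A(t, h):
    """Raised-cosine basic function with support (-h, h)."""
    return 0.5 * (math.cos(math.pi * t / h) + 1.0) if abs(t) < h else 0.0

def f(x, y):
    return math.sin(x * y)

def components(p, n=60):
    """Degree-0 F-transform components of f on [0, 1]^2 with p nodes per axis.
    Boundary nodes integrate the natural extension of f outside [0, 1]^2,
    mimicking the extended partition."""
    h = 1.0 / (p - 1)
    nodes = [k * h for k in range(p)]
    F = [[0.0] * p for _ in range(p)]
    for k, xk in enumerate(nodes):
        for l, yl in enumerate(nodes):
            num = den = 0.0
            for i in range(n):
                x = xk - h + h * (2 * i + 1) / n
                for j in range(n):
                    y = yl - h + h * (2 * j + 1) / n
                    w = A(x - xk, h) * A(y - yl, h)
                    num += f(x, y) * w
                    den += w
            F[k][l] = num / den
    return nodes, h, F

def inverse(x, y, nodes, h, F):
    """Inverse F-transform: components weighted by the basic functions."""
    return sum(F[k][l] * A(x - xk, h) * A(y - yl, h)
               for k, xk in enumerate(nodes) for l, yl in enumerate(nodes))

def max_err(p):
    """Max reconstruction error on a 21x21 grid over [0, 1]^2."""
    nodes, h, F = components(p)
    return max(abs(f(i / 20.0, j / 20.0) - inverse(i / 20.0, j / 20.0, nodes, h, F))
               for i in range(21) for j in range(21))

e_coarse, e_fine = max_err(3), max_err(6)  # refining the partition shrinks the error
```

Running this shows the coarse partition error strictly dominating the fine one, mirroring the trend of the error plots described above.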
such that f̄ is four times continuously differentiable. In order to illustrate the established theory, we report the error function of this example in Fig. 6 for various values of h_x, h_y, m, and n.

6 Application of the tF^mn-transform to partial differential equations

In this section, we apply the tF^mn-transform to solve the Cauchy problem for the transport equation. For this purpose, some direct formulas for the tF^mn-transform of the partial derivatives of two-variable functions are obtained and applied to solving the transport equation.

The tF^mn-transform of the partial derivatives of a two-variable function
In this section, we obtain the tF^mn-transform of the partial derivatives of a two-variable function f using the components of tF^mn(f). For simplicity, we assume that {φ} denotes the trigonometric basis. In the following, we introduce some notation used throughout this section.
In a similar way, we can show the corresponding identities for e^{(i,j)}(x, y). This completes the claim. The remaining identities are proved similarly.
Let the trigonometric basis functions be given and let f be sufficiently differentiable. Moreover, let the tF^mn-transform of the partial derivatives of f be given as follows, where m_1, n_1 ∈ N and tF^mn_kl is applied to the corresponding derivative.
6.2 Error analysis for the partial derivatives of f using the tF^mn-transform

By approximating the tF^mn-transform of the partial derivatives of f using the components of tF^mn(f), we face two sources of error. The first comes from the tF^mn-transform of the partial derivative of f itself. The second comes from the results of Theorem 11(a), that is, the components of $\frac{\partial^{m_1+n_1} f}{\partial x^{m_1} \partial y^{n_1}}$ are determined by selecting a finite number of the terms of Eqs. (6.1)-(6.3). We estimate an error bound of $\left|\frac{\partial^{m_1+n_1} f}{\partial x^{m_1} \partial y^{n_1}} - tf^{mn}_{pq}\!\left(\frac{\partial^{m_1+n_1} f}{\partial x^{m_1} \partial y^{n_1}}\right)\right|$ as follows. Depending on the number of terms we select in Eq. (6.3), we obtain bounds in which q_1 and q_2 depend on the number of terms of Eq. (6.3). Therefore, we can conclude the stated estimate; the analogous estimate is obtained in a similar way. The details of the tF^mn-transform of the partial derivatives of f based on the forward, central, and backward differences w.r.t. x and y can be found in Tables 5, 6, and 7 in Appendix A, respectively.

6.3 The approximate solution of the Cauchy problem for the transport equation

We consider the following transport problem, and we assume the tF^11 representation of the unknown. We apply the tF^11-transform to both sides of Eq. (6.4). From the initial condition u(x, 0) = σ(x) for k = 1, ..., p, and from the boundary condition u(0, t) = δ(t) for l = 1, 2, ..., q, we obtain the corresponding component values. From Tables 5 and 7 in Appendix A and Lemma 5.5, we have the recursive equation (6.5), which we solve to obtain z^{(l)}_{(i,j)} for i, j = 0, 1 and l = 1, 2, ..., q. Similarly, by the results of Tables 5 and 7 in Appendix A and Lemma 5.5, we obtain the components indexed by (1, j) for l = 1, 2, ..., q − 1 and j = 0, 1 in Eq. (6.7). Finally, we solve the recursive equation (6.8), obtain v^{(l)} for l = 1, 2, ..., q, and approximate u(x, t) as the sum of the components tF^11_kl weighted by Ā_{k,l}(x, t).
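The tF^11 scheme above depends on the tables of Appendix A. As a self-contained, classical baseline for the same type of Cauchy problem, u_t + c u_x = 0 with u(x, 0) = σ(x) and u(0, t) = δ(t), one can use a first-order upwind scheme. This is explicitly not the paper's method, only a reference discretization one could compare against; all parameter values are illustrative.

```python
import math

def transport_upwind(c, sigma, delta, L, T, p, q):
    """First-order upwind scheme for u_t + c*u_x = 0 (c > 0) on [0, L] x [0, T]
    with u(x, 0) = sigma(x) and inflow boundary u(0, t) = delta(t).
    Stability requires the CFL number lam = c*(T/q)/(L/p) <= 1."""
    hx, ht = L / p, T / q
    lam = c * ht / hx
    u = [sigma(i * hx) for i in range(p + 1)]
    for n in range(1, q + 1):
        new = [delta(n * ht)]  # inflow value at x = 0
        for i in range(1, p + 1):
            new.append(u[i] - lam * (u[i] - u[i - 1]))
        u = new
    return u

# Check against the exact traveling-wave solution u(x, t) = g(x - c*t).
g = lambda s: math.sin(2.0 * math.pi * s)
u_num = transport_upwind(1.0, g, lambda t: g(-t), 1.0, 0.5, 100, 100)
err = max(abs(u_num[i] - g(i / 100.0 - 0.5)) for i in range(101))
```

With p = q = 100 the CFL number is 0.5, the scheme is stable, and the final-time error stays well below the wave amplitude; such a run gives a floor against which higher-order F-transform-based solvers can be compared.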
In the following, we give some examples and apply the theory established in the previous subsection to obtain approximate solutions of the given problems.
Example 6.3. Consider the following Cauchy equation with initial and boundary conditions. The exact solution is u(x, t) = (1 − x²) cos(x²t). The numerical results using the tF^11-transform for p = q = 20 and p = q = 40 are reported in Table 1 and compared with the F̄-transform for the same p and q. Our numerical results confirm that the accuracy of the solution can be increased by replacing the F̄-transform with the tF^11-transform. Moreover, Fig. 7 shows that better results can be obtained by using the higher-degree tF^mn-transform. In the following example, to compare the numerical solution obtained by the tF^11-transform with the F^1-transform (see e.g. [27]), we omit some terms of the tF^11-transform and call the result the tF^1-transform. Example 6.4. Consider the following transport equation with initial and boundary conditions. The exact solution is u(x, t) = cos³(10πx) + sin²(10πt). To illustrate, we give the numerical results using the tF^11-transform, tF^1-transform, and F^1-transform for p = q = 30 in Table 2 and Fig. 8. In [27], the authors generalized the F-transform to the F^1-transform with polynomial components. In this example, we show that our numerical results are better than those of the F^1-transform. Without any bias, we say that the accuracy of some approximate solutions of the given problems can be increased by the tF^11-transform.
The value ρ = pq/(PQ) is called the compression rate. The quality of the reconstructed image can be evaluated via the peak signal-to-noise ratio,
$PSNR = 20 \log_{10} \frac{L - 1}{RMSE},$
where L is the length of the gray scale of the image and RMSE is the root mean square error.
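The two quality measures above are easy to compute directly. The following sketch implements the compression rate ρ = pq/(PQ) and PSNR = 20 log₁₀((L − 1)/RMSE) for gray-scale images stored as nested lists, a representation chosen here purely for illustration.

```python
import math

def rmse(orig, rec):
    """Root mean square error between two equal-size gray-scale images
    given as nested lists of pixel intensities."""
    n = len(orig) * len(orig[0])
    s = sum((a - b) ** 2 for ro, rr in zip(orig, rec) for a, b in zip(ro, rr))
    return math.sqrt(s / n)

def psnr(orig, rec, L=256):
    """Peak signal-to-noise ratio: PSNR = 20*log10((L - 1) / RMSE)."""
    return 20.0 * math.log10((L - 1) / rmse(orig, rec))

def compression_rate(p, q, P, Q):
    """rho = pq / (PQ): stored F-transform components over original pixels."""
    return (p * q) / (P * Q)

# One wrong pixel out of four at full amplitude: RMSE = 255/2, PSNR = 20*log10(2).
example_psnr = psnr([[0, 0], [0, 0]], [[255, 0], [0, 0]])
```

Keeping a p × q matrix of components for a P × Q image directly gives ρ, and PSNR then quantifies how much reconstruction quality that compression cost.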
In this paper, we extract our test images from http://decsai.ugr.es/cvg/dbimagenes/. We apply our method to the gray image Zelda (Fig. 9(a)) of size 256×256; the results are shown in Fig. 9. To compare the tF^11-transform method with the F^1-transform in [27], we show our results for three gray-level images, Lena, Einstein, and Leopard, of size 256 × 256 (Fig. 10(a), 10(b), and 10(c)). For brevity, we present the mean PSNR trends obtained from the images Lena, Einstein, and Leopard in Table 3. Finally, we compare our results with the Fast MF-transform method [28]. We evaluate the PSNR and SSIM (structural similarity index measure) for the gray image Airport (Fig. 11(a)) of size 512 × 512, decoded with compression-rate values 0.06 and 0.11 under the tF^11-transform method. The obtained results, shown in Fig. 11 and Table 4, verify that, with less CPU time, we obtain a decoded image whose quality is better than that of the Fast MF-transform method; at the same PSNR value, the CPU time of our method is about half. As mentioned above, the results obtained by using the tF^11-transform for the compression of gray-scale images show the efficiency of the proposed method from the point of view of accuracy and computation time.
Table 6: The tF^mn-transform of partial derivatives of f based on the central differences w.r.t. x and y.