Numerical Approximation to Multivariate Functions Using Fluctuationlessness Theorem with a Trigonometric Basis Function to Deal with Highly Oscillatory Functions

N.A. BAYKARA, Marmara University, Department of Mathematics, Göztepe, 347, İstanbul, TÜRKİYE (Turkey), baki@be.itu.edu.tr
ERCAN GÜRVİT, Marmara University, Department of Mathematics, Göztepe, 347, İstanbul, TÜRKİYE (Turkey), ercan@be.itu.edu.tr
METİN DEMİRALP, İstanbul Technical University, Informatics Institute, Maslak, 34469, İstanbul, TÜRKİYE (Turkey), demiralp@be.itu.edu.tr

Abstract: The recently developed Fluctuation Free Matrix Representation Method can be used to approximate the integrals appearing in the multiple remainder terms of the multivariate Taylor expansion. This provides us with a new numerical approximation method for multivariate functions. In this work a trigonometric basis set, rather than a polynomial one, is chosen in order to deal with highly oscillatory functions.

Key Words: Multivariate Functions, Fluctuationlessness Theorem, Numerical Approximation, Explicit Remainder Term, Taylor Expansion, Trigonometric Basis Set

1 Introduction

We have successfully applied the fluctuation free integration method to Taylor series remainders for the approximation of certain univariate [1] or multivariate [2] functions or of their integrals [3-5], and the numerical results were very promising even in the case of slow convergence. The basic idea was to express the function under consideration via a Taylor series with remainder. In contrast to the usual practice of ignoring the remainder term and using the retained polynomial part (the Taylor polynomial) for approximation, we keep the remainder term, which can be expressed as an integral of a certain order derivative of the target function over an appropriate interval and under a convenient weight function. The general tendency is to approximate this integral by using the mean value theorems of integral calculus. However, this is done not for numerical approximation but for error estimation, since the characteristic point where the mean value theorem holds cannot be evaluated easily (it is in fact a rather formidable task). Hence we do not intend to use this approach, but instead try to approximate the integral by a recently developed method we call Fluctuation Free Integration [6-9]. We could directly use an appropriate Gauss quadrature; however, Gauss quadrature is based on orthogonal polynomials, whereas the fluctuation free integration method contains the Gauss quadratures as a particular case and does not have to use polynomials. Any basis set can be used to this end. The quality of the numerical approximation is, however, sensitive to the choice of basis set. Fluctuation free integration is based on the fluctuation free matrix representation, which is nothing other than the matrix representation on a restricted finite dimensional subspace of the considered function space, where an inner product and a norm are defined. However, this is for the matrix representation of the independent variable. If the operator under consideration is an algebraic one which multiplies its operand by a function, then the approximation goes beyond this dimensional restriction: it relates the function's matrix representation to the image of the independent variable's matrix representation under the relevant function, for finite dimensional representations.
If the dimension becomes infinite and the related basis set spans the whole function space, then this relation becomes exact; otherwise it is approximate, and the approximation quality increases as the dimension of the restricted space increases.

We start by discussing the approximation procedure using the Fluctuationlessness Theorem. Then some definitions are given for the multi-index notation. Following these definitions the algorithm is given step by step: first the restructuring of the explicit remainder term of the Taylor expansion is defined, then the basis set is formed, and finally the Fluctuationlessness Theorem is applied to that structure to obtain an approximation to the function. Concluding remarks are given in the last section.

2 Fluctuationlessness Theorem

We need to explain what fluctuation is before attempting to state the fluctuationlessness theorem, which we comprehensively use in our approximations related to the matrix representations. If {u_1(x), ..., u_n(x), ...} denotes the basis function set which spans the function space mentioned above, then any function g(x) in our function space (let us denote it by H), which is a Hilbert space of analytic, and therefore square integrable, functions, can be written as a linear combination of these functions as follows

g(x) = \sum_{i=1}^{\infty} g_i u_i(x), \qquad g_i \equiv (u_i, g) \equiv \int_a^b dx\, u_i(x)\, g(x), \qquad i = 1, 2, 3, \ldots    (1)

where we can truncate the infinite sum above at n terms to get an approximation to g(x), obtaining

g(x) \approx g^{(n)}(x) \equiv \sum_{i=1}^{n} g_i u_i(x), \qquad n = 1, 2, 3, \ldots    (2)

The infinite series remainder term ignored above is called the "fluctuation" in g(x), since it contains at least the (n+1)st basis function, which must vanish at least n times in the interval [a, b] because of the orthogonality amongst the basis functions, and therefore it has a fluctuating nature around zero. We do not intend to get into the details of this issue here, but the phrase "fluctuation free" means that all function fluctuations are ignored. What we have mentioned above is the scalar fluctuation; it is, however, possible to have certain matrix algebraic entities which can be prefixed by the word fluctuation. In particular, it is possible to talk about the so-called Fluctuation Operator. It is defined as the operator projecting from H to the subspace spanned by all basis functions except the first n of them, that is, the functions u_{n+1}(x), u_{n+2}(x), .... This operator appears when inner products or expectation values are considered. Then "fluctuation free" means that all terms in which the fluctuation operator appears at least once are ignored.

The fluctuationlessness approximation is based on a theorem which was conjectured and proven by M. Demiralp [6]. This theorem states that the matrix representation of an algebraic operator, which multiplies its argument by a scalar univariate function, is identical to the image of the independent variable's matrix representation over the same subspace, via the same basis set, under that univariate function, when the fluctuation involving terms are ignored. Letting M_f stand for the matrix representation of the function f, we can write down the approximation

M_f = \left( u, f u^T \right) \approx f\!\left( \left( u, t u^T \right) \right)    (3)

The function f = f(t) is defined over the interval [a, b] including t = 0. The u_i(t)'s constitute orthonormal basis functions of the Hilbert space from which our functions are chosen, as mentioned above, and we define u^{(n)}(t) \equiv [\, u_1(t) \;\; \ldots \;\; u_n(t) \,]^T. The inner product of two functions, say g_1(t) and g_2(t), under a weight function w(t) can be expressed as follows

(g_1, g_2) \equiv \int_a^b dt\, w(t)\, g_1(t)\, g_2(t)    (4)

as we have used above for a particular case. Now, if we expand f as

f(t) = \sum_{i=0}^{\infty} f_i t^i    (5)

and insert this expression into (3), then we obtain

M_f = \left( u, \sum_{i=0}^{\infty} f_i t^i u^T \right)    (6)

This can then be written as

M_f = \sum_{i=0}^{\infty} f_i \left( u, t^i u^T \right)    (7)

\approx \sum_{i=0}^{\infty} f_i \left( u, t u^T \right)^i = f\!\left( \left( u, t u^T \right) \right)    (8)

For the argument being the matrix representation of the variable t, we can write the above approximation as

M_f \approx f\!\left( T^{(n)} \right)    (9)

where T^{(n)} is an n x n symmetric matrix whose components are calculated by T^{(n)}_{ij} = \int_a^b dt\, u_i(t)\, t\, u_j(t). Equation (9) is the formulawise expression of the fluctuationlessness theorem.

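To make (9) concrete, here is a minimal numerical sketch (not part of the original paper): it assumes an orthonormal shifted Legendre basis on [0, 1], a sample function f(t) = e^t, and n = 8, all of which are illustrative choices, and compares the directly computed matrix representation (u_i, f u_j) with f(T^(n)).

    import numpy as np
    from numpy.polynomial import legendre
    from scipy.integrate import quad

    def u(i, t):
        # Orthonormal shifted Legendre basis on [0, 1] (an illustrative choice of basis and interval)
        c = np.zeros(i + 1)
        c[i] = 1.0
        return np.sqrt(2.0 * i + 1.0) * legendre.legval(2.0 * t - 1.0, c)

    f = lambda t: np.exp(t)   # sample univariate function

    n = 8
    # T^(n): matrix representation of the independent variable, as in eq. (9)
    T = np.array([[quad(lambda t: u(i, t) * t * u(j, t), 0, 1)[0]
                   for j in range(n)] for i in range(n)])
    # Direct matrix representation of the operator "multiply by f(t)"
    M_direct = np.array([[quad(lambda t: u(i, t) * f(t) * u(j, t), 0, 1)[0]
                          for j in range(n)] for i in range(n)])
    # Fluctuationlessness approximation M_f ~ f(T^(n)), evaluated through the spectrum of the symmetric T
    tau, V = np.linalg.eigh(T)
    M_fluc = V @ np.diag(f(tau)) @ V.T
    print(np.max(np.abs(M_direct - M_fluc)))   # small for smooth f; shrinks further as n grows

The discrepancy shrinks as the dimension grows, in line with the remark above that the relation becomes exact only when the basis spans the whole space.
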
3 Taylor's Formula for Several Variables

Taylor's theorem for several variables will be considered here together with its remainder term in explicit form. Let x = (x_1, x_2, ..., x_N) lie in the ball B with center a = (a_1, a_2, ..., a_N), and let f be a real-valued function defined on the closure of B having (k+1) continuous partial derivatives at every point. The multivariable function can be expressed as the sum of a kth order Taylor polynomial P_k(x) and a corresponding remainder term R_k(x) as

f(x) = P_k(x) + R_k(x)    (10)

where the Taylor polynomial term is

P_k(x) = f(a) + \sum_{|\sigma|=1}^{k} \frac{1}{\sigma!} \left( D^{\sigma} f \right)(a)\, (x-a)^{\sigma}    (11)

In the above formula \sigma = (\sigma_1, \sigma_2, ..., \sigma_N) is a multi-index, which is in fact an N-dimensional vector-like structure whose components are non-negative integers. Some of the operations over \sigma are defined as follows

|\sigma| = \sigma_1 + \sigma_2 + \cdots + \sigma_N    (12)

\sigma! = \sigma_1!\, \sigma_2! \cdots \sigma_N!    (13)

x^{\sigma} = x_1^{\sigma_1} x_2^{\sigma_2} \cdots x_N^{\sigma_N}    (14)

D^{\sigma} f = \frac{\partial^{|\sigma|}}{\partial x_1^{\sigma_1} \partial x_2^{\sigma_2} \cdots \partial x_N^{\sigma_N}} f    (15)

The remainder term can be expressed explicitly as

R_k(x) = \sum_{|\sigma|=k+1} \frac{k+1}{\sigma!} (x-a)^{\sigma} \int_0^1 (1-t)^k \left( D^{\sigma} f \right)\!\big( (x-a)t + a \big)\, dt    (16)

where

\left( D^{\sigma} f \right)\!\big( (x-a)t + a \big) = \left( D^{\sigma} f \right)\!\big( (x_1 - a_1)t + a_1, \ldots, (x_N - a_N)t + a_N \big)    (17)

4 Approximation of the Taylor Remainder Term

We can define a weight function

w_k(t) \equiv (k+1)(1-t)^k, \qquad k = 0, 1, 2, \ldots    (18)

and use it to rewrite the remainder term as

R_k(x) = \sum_{|\sigma|=k+1} \frac{(x-a)^{\sigma}}{\sigma!} \int_0^1 dt\, w_k(t)\, \left( D^{\sigma} f \right)\!\big( (x-a)t + a \big)    (19)

Although we have used this form of the integral in several of our applications, it is better to deal with symmetrized structures here, since we need to use certain nonpolynomial basis functions to reflect the high oscillations in the integrand, if any. The integral in the last term of (19) is between 0 and 1, and there is no noticeable symmetry in the integrand around the midpoint of the integration interval, t = 1/2. Our purpose now is to establish a symmetry in the kernel about this point. To this end we can use the following odd-even decomposition

\left( D^{\sigma} f \right)\!\big( (x-a)t + a \big) \equiv \varphi_{\sigma,+}(t) + \left( t - \tfrac{1}{2} \right) \varphi_{\sigma,-}(t)    (20)

where

\varphi_{\sigma,+}(t) \equiv \tfrac{1}{2} \left[ \left( D^{\sigma} f \right)\!\big( (x-a)t + a \big) + \left( D^{\sigma} f \right)\!\big( x - (x-a)t \big) \right]    (21)

\varphi_{\sigma,-}(t) \equiv \frac{1}{t - \tfrac{1}{2}} \cdot \tfrac{1}{2} \left[ \left( D^{\sigma} f \right)\!\big( (x-a)t + a \big) - \left( D^{\sigma} f \right)\!\big( x - (x-a)t \big) \right]    (22)

which satisfy the symmetry relations

\varphi_{\sigma,+}(1-t) = \varphi_{\sigma,+}(t), \qquad \varphi_{\sigma,-}(1-t) = \varphi_{\sigma,-}(t)    (23)

as long as the \varphi functions, and therefore the (k+1)st partial derivatives of f, remain analytic in a region which contains the interval [0, 1] as an interior line segment in the complex plane of the independent variable. Now we can write

\int_0^1 dt\, w_k(t)\, \left( D^{\sigma} f \right)\!\big( (x-a)t + a \big) = \int_0^1 dt\, w_k(t)\, \varphi_{\sigma,+}(t) + \int_0^1 dt\, w_k(t) \left( t - \tfrac{1}{2} \right) \varphi_{\sigma,-}(t)    (24)

Although the function to be integrated under the given weight has been decomposed into two even functions, these two integrals still do not have evenness in their integrands, due to the lack of some symmetry properties in their weights. If the weight functions are also decomposed into two additive terms, each of which remains symmetric around the midpoint of the integration interval and conserves the property of being a weight function, then the integrals may gain more amenable forms.

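Before carrying out this weight splitting, here is a quick numerical check of the multi-index machinery of Section 3, eqs. (10), (11) and (19). It is our own illustration, not part of the paper: the bivariate exponential, the expansion point a, the evaluation point x and the order k are arbitrary choices, selected only because the partial derivatives are known in closed form.

    import numpy as np
    from math import factorial
    from itertools import product
    from scipy.integrate import quad

    # Sample bivariate function with known partial derivatives:
    # f(x1, x2) = exp(x1 + 2*x2), so D^sigma f = 2**sigma_2 * exp(x1 + 2*x2).
    f  = lambda p: np.exp(p[0] + 2.0 * p[1])
    Df = lambda sigma, p: 2.0**sigma[1] * np.exp(p[0] + 2.0 * p[1])

    a, x = np.array([0.0, 0.0]), np.array([0.3, 0.2])
    k, N = 2, 2

    def multi_indices(order):
        # all sigma = (sigma_1, ..., sigma_N) with |sigma| = order
        return [s for s in product(range(order + 1), repeat=N) if sum(s) == order]

    # Taylor polynomial, eq. (11)
    P = f(a)
    for m in range(1, k + 1):
        for s in multi_indices(m):
            coeff = np.prod((x - a)**np.array(s)) / np.prod([factorial(si) for si in s])
            P += coeff * Df(s, a)

    # Remainder, eq. (19) with w_k(t) = (k+1)(1-t)^k
    R = 0.0
    for s in multi_indices(k + 1):
        coeff = np.prod((x - a)**np.array(s)) / np.prod([factorial(si) for si in s])
        integrand = lambda t, s=s: (k + 1) * (1.0 - t)**k * Df(s, a + (x - a) * t)
        R += coeff * quad(integrand, 0, 1)[0]

    print(f(x), P + R)   # f(x) = P_k(x) + R_k(x), eq. (10)
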

We can define

w_{k,+}(t) \equiv \frac{k+1}{2} \left[ t^k + (1-t)^k \right], \qquad w_{k,-}(t) \equiv \frac{k+1}{2} \left[ (1-t)^k - t^k \right] \left( \tfrac{1}{2} - t \right)    (25)

where each function is symmetric under the transformation replacing t by (1-t) and remains nonnegative. The existence of the symmetry is obvious for both; the nonnegativity, however, may need to be shown explicitly for the second, since the first one is a sum of nonnegative terms as long as t varies between 0 and 1 inclusive. It is not difficult to show that the expression multiplying (1/2 - t) has a root at t = 1/2. Since the first derivative of this factor with respect to t does not change sign in the interval [0, 1], this is a simple root, and therefore the factor is the product of a function symmetric about the midpoint with (1/2 - t). This means that the second function in (25) is even under the transformation replacing t by (1-t) and nonnegative. This completes the verification of the property of being a weight function. Now we can write

\int_0^1 dt\, w_k(t)\, \varphi_{\sigma,+}(t) = \int_0^1 dt\, w_{k,+}(t)\, \varphi_{\sigma,+}(t) + \int_0^1 dt\, \frac{w_{k,-}(t)}{\tfrac{1}{2} - t}\, \varphi_{\sigma,+}(t)    (26)

where the second integral vanishes, since its integrand changes its sign when t is replaced by (1-t). Hence,

\int_0^1 dt\, w_k(t)\, \varphi_{\sigma,+}(t) = \int_0^1 dt\, w_{k,+}(t)\, \varphi_{\sigma,+}(t).    (27)

A similar discussion takes us to the following equality

\int_0^1 dt\, w_k(t) \left( t - \tfrac{1}{2} \right) \varphi_{\sigma,-}(t) = -\int_0^1 dt\, w_{k,-}(t)\, \varphi_{\sigma,-}(t).    (28)

The last two equalities enable us to conclude

\int_0^1 dt\, (k+1)(1-t)^k \left( D^{\sigma} f \right)\!\big( (x-a)t + a \big) = \int_0^1 dt\, w_{k,+}(t)\, \varphi_{\sigma,+}(t) - \int_0^1 dt\, w_{k,-}(t)\, \varphi_{\sigma,-}(t)    (29)

where the integrands of the right hand side integrals are even functions of the integration variable's deviation from the midpoint. In other words, the replacement of t by (1-t) leaves the integrands unchanged. These are the desired final forms of the integrals. We use them in the implementations individually and then combine the obtained results.

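The splitting (25)-(29) can be verified numerically. In the sketch below (our own illustration), F(t) is an arbitrary smooth, oscillatory stand-in for the directional derivative (D^sigma f)((x-a)t + a), and k is an arbitrary order.

    import numpy as np
    from scipy.integrate import quad

    k = 3
    F = lambda t: np.exp(t) * np.cos(40.0 * t)   # stand-in for (D^sigma f)((x-a)t + a)

    w_k   = lambda t: (k + 1) * (1.0 - t)**k                               # eq. (18)
    w_k_p = lambda t: 0.5 * (k + 1) * (t**k + (1.0 - t)**k)                # eq. (25), w_{k,+}
    w_k_m = lambda t: 0.5 * (k + 1) * ((1.0 - t)**k - t**k) * (0.5 - t)    # eq. (25), w_{k,-}

    phi_p = lambda t: 0.5 * (F(t) + F(1.0 - t))                            # eq. (21)
    def phi_m(t):                                                          # eq. (22)
        if abs(t - 0.5) < 1e-9:            # removable singularity: the limit at t = 1/2 is F'(1/2)
            h = 1e-6
            return (F(0.5 + h) - F(0.5 - h)) / (2.0 * h)
        return 0.5 * (F(t) - F(1.0 - t)) / (t - 0.5)

    lhs = quad(lambda t: w_k(t) * F(t), 0, 1, limit=200)[0]
    rhs = (quad(lambda t: w_k_p(t) * phi_p(t), 0, 1, limit=200)[0]
           - quad(lambda t: w_k_m(t) * phi_m(t), 0, 1, limit=200)[0])
    print(lhs, rhs)   # the two sides of eq. (29) coincide
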
5 Basis Functions

We are now to construct an orthonormal set of basis functions u_i(t). The basis functions for approximating oscillating functions should not be purely polynomial, but should have some oscillatory structure, such as forms containing both trigonometric functions and polynomials in rather simple cases. We start with the following linearly independent functions in order to ultimately arrive at the u_i(t) functions,

v_i(t) \equiv \cos\!\left( \kappa \left( t - \tfrac{1}{2} \right) \right) \left( t - \tfrac{1}{2} \right)^{2(i-1)}, \qquad i = 1, 2, 3, \ldots    (30)

which are used in Filon-quadrature-like cases [10, 11]. We have chosen these functions to obtain evenness in the structure; that is, these functions remain unchanged when t is replaced by (1-t). We could provide the evenness not by using the cosine but the sine function of the same type multiplied by odd powers of (t - 1/2); in that case there would be a phase shift in the oscillations. We could even use composite structures involving both odd and even powers of (t - 1/2), by multiplying each group with a different trigonometric function composed of linear combinations of sines for the odd powers and cosines for the even powers, with different frequencies. In that case different frequencies can be used to obtain the maximum efficiency in the approximation. Regardless of which structure we choose, we obtain an orthonormal set from the set given in (30), or from one of the sets mentioned above, by using orthonormalization procedures. What we know about the resulting orthonormal basis function set is that its members satisfy a three-term linear recursion whose combination coefficients are composed of certain constants and (t - 1/2)^2, and that the matrix representation of (t - 1/2)^2 is tridiagonal with respect to this set. These facts arise from the theory of orthogonal polynomials.

Since we take nonnegative integer powers of (t - 1/2)^2 and orthogonalize them after multiplying by appropriate trigonometric structures, we produce polynomials of (t - 1/2)^2 which are mutually orthonormal under the weights appearing in the above integrals, after multiplying those weights by the square of the trigonometric factors appearing in the definition of the basis functions written or mentioned above. Hence we are now sufficiently equipped to proceed to the application of the fluctuation free integration to the latter integrals with symmetric integrands above.

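The following sketch (our own illustration; the frequency kappa, the order n and the working weight w_{k,+} are arbitrary demonstration choices) orthonormalizes the set (30) through a Cholesky factorization of its Gram matrix, which is equivalent to Gram-Schmidt, and exhibits the tridiagonal matrix representation of (t - 1/2)^2 claimed above. The raw set uses powers of (2t - 1)^2 = 4(t - 1/2)^2, which spans the same space as (30) but is better conditioned numerically.

    import numpy as np
    from scipy.integrate import quad
    from scipy.linalg import cholesky, solve_triangular

    kappa, k, n = 40.0, 3, 5
    w = lambda t: 0.5 * (k + 1) * (t**k + (1.0 - t)**k)   # w_{k,+} of eq. (25); an illustrative weight choice

    def v(i, t):
        # raw set in the spirit of eq. (30): cos(kappa*(t-1/2)) times even powers about the midpoint
        return np.cos(kappa * (t - 0.5)) * (2.0 * t - 1.0)**(2 * (i - 1))

    def wip(f, g):
        # inner product on [0, 1] under the weight w
        return quad(lambda t: w(t) * f(t) * g(t), 0, 1, limit=200)[0]

    G = np.array([[wip(lambda t: v(i, t), lambda t: v(j, t))
                   for j in range(1, n + 1)] for i in range(1, n + 1)])
    S = np.array([[wip(lambda t: v(i, t), lambda t: (t - 0.5)**2 * v(j, t))
                   for j in range(1, n + 1)] for i in range(1, n + 1)])

    L = cholesky(G, lower=True)                       # G = L L^T
    C = solve_triangular(L, np.eye(n), lower=True)    # rows of C express the orthonormal u_i over the raw v_j
    T2 = C @ S @ C.T                                  # matrix representation of (t - 1/2)^2 in the u basis
    print(np.round(T2, 4))                            # entries with |i - j| > 1 vanish: tridiagonal, as claimed
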

6 Fluctuationless Integration

The right hand side integrals in (29) are defined under weights which are symmetric under the replacement of t by (1-t). The functions to be integrated are also symmetric under the same replacement. The \varphi's, the functions to be integrated there, remain analytic in the interval [0, 1], and therefore their evenness allows us to consider them as analytic functions of (t - 1/2)^2 as well. Thus the fluctuationlessness theorem is used here not with the matrix representation of t alone, but with that of (t - 1/2)^2 in place of the independent variable operator. This may help protect the quality of the fluctuation free integration method from the negative effects of the oscillations, and hence becomes important in the analysis of highly oscillatory functions. Now the integrals at the right hand side of (29) can be considered as particular cases of the following integral

I \equiv \int_0^1 dt\, w(t)\, \varphi\!\left( \left( t - \tfrac{1}{2} \right)^2 \right)    (31)

where the weight function w(t) is assumed to satisfy w(1-t) = w(t). If we consider the basis function set u_1(t), ..., u_m(t), ... defined above, and retain its first, say n, elements to define a finite dimensional subspace of the space of square integrable functions over the interval [0, 1], under the weight w(t) multiplied by the square of the nonpolynomial common factor of the basis functions, then we can define

h_n \equiv [\, h_1 \;\; \ldots \;\; h_n \,]^T, \qquad T_2^{(n)} \equiv \begin{bmatrix} T_{2,11}^{(n)} & \cdots & T_{2,1n}^{(n)} \\ \vdots & \ddots & \vdots \\ T_{2,n1}^{(n)} & \cdots & T_{2,nn}^{(n)} \end{bmatrix}    (32)

h_i \equiv \int_0^1 dt\, w(t)\, u_i(t), \qquad i = 1, 2, \ldots, n    (33)

T_{2,ij}^{(n)} \equiv \left( u_i, \left( t - \tfrac{1}{2} \right)^2 u_j \right), \qquad i, j = 1, 2, \ldots, n    (34)

where the subindex 2 in T_2^{(n)} recalls the square nature of the operator whose matrix representation is under consideration. We can now write the following formula for the fluctuation free integration

I \approx h_n^T\, \varphi\!\left( T_2^{(n)} \right) h_n    (35)

There will be two T_2^{(n)} matrices, corresponding to the two weight functions w_{k,+}(t) and w_{k,-}(t). Their elements are explicitly given below

T_{ij,+} \equiv \int_0^1 dt\, w_{k,+}(t)\, u_i(t) \left( t - \tfrac{1}{2} \right)^2 u_j(t), \qquad T_{ij,-} \equiv \int_0^1 dt\, w_{k,-}(t)\, u_i(t) \left( t - \tfrac{1}{2} \right)^2 u_j(t), \qquad i, j = 1, 2, \ldots, n    (36)

If we apply this to (29) we obtain

\int_0^1 dt\, w_{k,+}(t)\, \varphi_{\sigma,+}(t) \approx h_n^T\, \varphi_{\sigma,+}\!\left( T_{2,+}^{(n)} \right) h_n, \qquad \int_0^1 dt\, w_{k,-}(t)\, \varphi_{\sigma,-}(t) \approx h_n^T\, \varphi_{\sigma,-}\!\left( T_{2,-}^{(n)} \right) h_n    (37)

Here \varphi_{\sigma,+} and \varphi_{\sigma,-} are regarded as functions of (t - 1/2)^2, in accordance with (31). Now, in order to obtain a scalar equivalent of the expressions above, we proceed with the eigenvalues and eigenvectors of the T_{2,+}^{(n)} and T_{2,-}^{(n)} matrices,

T_{2,+}^{(n)} t_{i,+} = \tau_{i,+} t_{i,+}, \qquad i = 1, 2, \ldots, n    (38)

T_{2,-}^{(n)} t_{i,-} = \tau_{i,-} t_{i,-}, \qquad i = 1, 2, \ldots, n    (39)

Here none of the eigenvalues is multiple, and the eigenvectors are normalized in the Frobenius sense. Continuing, we write down the spectral decompositions of T_{2,+}^{(n)} and T_{2,-}^{(n)} as follows

T_{2,+}^{(n)} = \sum_{i=1}^{n} \tau_{i,+}\, t_{i,+} t_{i,+}^T    (40)

T_{2,-}^{(n)} = \sum_{i=1}^{n} \tau_{i,-}\, t_{i,-} t_{i,-}^T    (41)

Now, substituting (29) into (16), we get

R_k(x) = \sum_{|\sigma|=k+1} \frac{(x-a)^{\sigma}}{\sigma!} \int_0^1 dt\, w_k(t)\, \left( D^{\sigma} f \right)\!\big( (x-a)t + a \big) \approx \sum_{|\sigma|=k+1} \frac{(x-a)^{\sigma}}{\sigma!} \left[ h_n^T\, \varphi_{\sigma,+}\!\left( T_{2,+}^{(n)} \right) h_n - h_n^T\, \varphi_{\sigma,-}\!\left( T_{2,-}^{(n)} \right) h_n \right]    (42)

and inserting the eigenvalues and the eigenvectors into this result we obtain

R_k(x) \approx \sum_{|\sigma|=k+1} \frac{(x-a)^{\sigma}}{\sigma!} \sum_{i=1}^{n} \varphi_{\sigma,+}(\tau_{i,+}) \left( h_n^T t_{i,+} \right)^2 - \sum_{|\sigma|=k+1} \frac{(x-a)^{\sigma}}{\sigma!} \sum_{i=1}^{n} \varphi_{\sigma,-}(\tau_{i,-}) \left( h_n^T t_{i,-} \right)^2    (43)

Thus the approximation formula becomes ready to be used.

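As an end-to-end illustration of (31)-(43) for a single weight, the sketch below (our own, not the paper's implementation) assembles h_n and T_2^(n), diagonalizes, and evaluates the scalar sum. For brevity it uses simple polynomial basis functions of (t - 1/2)^2 under the symmetric weight w_{0,+}(t) = 1 instead of the trigonometric set (30), and an arbitrary even sample integrand; n and the integrand are illustrative assumptions.

    import numpy as np
    from scipy.integrate import quad
    from scipy.linalg import cholesky, solve_triangular, eigh

    n   = 6
    w   = lambda t: 1.0                                       # w_{0,+}(t) = 1: eq. (25) with k = 0; any symmetric weight works
    phi = lambda s: np.exp(-s) * np.cos(6.0 * np.sqrt(s))     # sample even integrand phi((t - 1/2)^2), analytic in s

    s_of = lambda t: (t - 0.5)**2
    # Raw polynomial basis in (2t - 1)^2: a simplified stand-in for the trigonometric set (30)
    raw = [lambda t, j=j: (2.0 * t - 1.0)**(2 * j) for j in range(n)]
    wip = lambda f, g: quad(lambda t: w(t) * f(t) * g(t), 0, 1)[0]

    G = np.array([[wip(raw[i], raw[j]) for j in range(n)] for i in range(n)])                                # Gram matrix
    S = np.array([[wip(raw[i], lambda t, j=j: s_of(t) * raw[j](t)) for j in range(n)] for i in range(n)])    # raw matrix of (t-1/2)^2
    L = cholesky(G, lower=True)
    C = solve_triangular(L, np.eye(n), lower=True)            # rows give orthonormal u_i over the raw set

    T2 = C @ S @ C.T                                          # eq. (34): matrix of (t - 1/2)^2 in the u basis
    h  = C @ np.array([quad(lambda t: w(t) * r(t), 0, 1)[0] for r in raw])    # eq. (33)

    tau, V = eigh(T2)                                         # eqs. (38)-(41): eigenpairs of T_2^(n)
    I_fluc = sum(phi(tau[i]) * (h @ V[:, i])**2 for i in range(n))            # scalar form entering eq. (43)
    I_ref  = quad(lambda t: w(t) * phi(s_of(t)), 0, 1)[0]
    print(I_fluc, I_ref)                                      # agree closely for smooth phi

With a polynomial basis this reproduces a Gauss-type rule, consistent with the remark in the Introduction that fluctuation free integration contains Gauss quadrature as a particular case; replacing the raw set with (30) and the weight with w_{k,+} or w_{k,-} gives the trigonometric variant advocated here.
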

7 Conclusion

As can easily be observed, it does not seem easy to work with the multi-index notation, nor with the trigonometric basis set. But once the algorithm is set up, it is only a matter of first archiving the T_2^{(n)} matrices, which are independent of the function to be approximated, then calculating their eigenvectors and eigenvalues, and plugging them into the formula to obtain the desired approximation. It is wise to stay away from points where the function to be approximated is not analytic, taking into consideration that the Fluctuationlessness Theorem is valid for the analytic parts of the function. Even if analyticity is not disrupted in the domain to be analyzed, the existence of a non-analytic point near the integration interval affects the results negatively.

Acknowledgement

The third author is grateful to the Turkish Academy of Sciences, and all authors are indebted to WSEAS, for their support.

References:

[1] N. A. Baykara, E. Gürvit and M. Demiralp, A Hybridized Finite Taylor Formula by Fluctuation Free Remainder Term for Univariate Function Approximation, AIP Conf. Proc. 1048, 87 (2008).
[2] N. A. Baykara, E. Gürvit and M. Demiralp, A Hybridized Finite Taylor Formula by Fluctuation Free Remainder Term for Multivariable Function Approximation, AIP Conf. Proc. 1148 (2009).
[3] E. Gürvit, N. A. Baykara and M. Demiralp, Evaluation of Univariate Integrals via Fluctuationlessness Theorem, AIP Conf. Proc. 1048, 39 (2008).
[4] E. Gürvit, N. A. Baykara and M. Demiralp, Evaluation of Multivariate Integrals via Fluctuationlessness Theorem and Taylor's Remainder, AIP Conf. Proc. 1148, 18 (2009).
[5] E. Gürvit, N. A. Baykara and M. Demiralp, Numerical Integration of Bivariate Functions over a Non Rectangular Area by Using Fluctuationlessness Theorem, WSEAS Transactions on Mathematics, Issue 5, Volume 8 (April 2009), 193-198.
[6] M. Demiralp, Fluctuation Expansion at the Horizon as a New and Efficient Tool for Integration and ODE and PDE Solving, 2008, under review.
[7] B. Tunga and M. Demiralp, Fluctuationlessness Approximation Based Multivariate Integration in Hybrid High Dimensional Model Representation, AIP Conf. Proc. 1048, 56 (2008).
[8] S. Tuna, B. Tunga, N. A. Baykara and M. Demiralp, Fluctuation Free Matrix Representation Based Univariate Integration in Hybrid High Dimensional Model Representation (HHDMR) Over Plain and Factorized HDMR, WSEAS Transactions on Mathematics, Issue 6, Volume 8 (2009), 6-3.
[9] S. Üsküplü and M. Demiralp, Univariate Integration via Space Extension Based Fluctuation Approximation, AIP Conf. Proc. 1048, 566 (2008).
[10] A. Asheim, A Combined Filon/Asymptotic Quadrature Method for Highly Oscillatory Problems, Preprint Numerics no. 9/7, http://www.math.ntnu.no/preprint/numerics/7/N9-7.pdf
[11] S. Olver, Moment-Free Numerical Integration of Highly Oscillatory Functions, IMA Journal of Numerical Analysis, 26(2):213-227, 2006.