Scattered Data Approximation of Noisy Data via Iterated Moving Least Squares
Gregory E. Fasshauer and Jack G. Zhang

Abstract. In this paper we focus on two methods for multivariate approximation problems with non-uniformly distributed noisy data. The new approach proposed here is an iterated approximate moving least-squares method. We compare our method to ridge regression, which filters out noise by using a smoothing parameter. Our goal is to find an optimal number of iterations for the iterative method and an optimal smoothing parameter for ridge regression so that the corresponding approximants do not exactly interpolate the given data but are reasonably close. For both approaches we implement variants of leave-one-out cross-validation in order to find these optimal values. The shape parameter for the basis functions is also optimized in our algorithms.

1. Introduction

In multivariate data fitting problems we are usually given data (x_j, f_j), j = 1, ..., N, with distinct x_j in R^s and f_j in R, and we want to find a (continuous) function P_f : R^s -> R such that

    P_f(x_j) = f_j,  j = 1, ..., N.    (1)

For classical interpolation methods we assume P_f to be a linear combination of a set of basis functions φ_j, i.e.,

    P_f(x) = Σ_{j=1}^{N} c_j φ_j(x).    (2)

The coefficients c_j are determined by satisfying the constraint (1). To guarantee the existence of a unique set of c_j for arbitrary distinct x_j it is known that, when s > 1 and N > 1, the φ_j must be x_j-dependent.

Conference Title, Editors, pp. Copyright © 2005 by Nashboro Press, Brentwood, TN. ISBN x-x. All rights of reproduction in any form reserved.
In the literature, the φ_j are frequently chosen to be basis functions generated by shifts of a single strictly positive definite basic function φ, i.e.,

    φ_j(x) = φ(x - x_j).    (3)

Correspondingly, (2) is now rewritten as

    P_f(x) = Σ_{j=1}^{N} c_j φ(x - x_j).    (4)

To express the problem in matrix-vector form we let c = [c_1, ..., c_N]^T, f = [f_1, ..., f_N]^T, and Φ_ij = φ(x_i - x_j). Then, enforcing the interpolation constraint (1) with a P_f of the form (4) leads to

    Φc = f.    (5)

The fact that φ is assumed to be strictly positive definite guarantees that the interpolation matrix Φ is invertible. Therefore, c = Φ^{-1}f. Note that in this classical setup there is no restriction on the distribution of the data sites x_j except that they be pairwise distinct. However, the distribution of the x_j is an important issue in the so-called approximate moving least squares (AMLS) approximation method; more details will be given later.

The definition of the basic function φ often involves a multiplicative parameter ε, e.g., the Gaussian φ(x) = e^{-ε²||x||²}. This shape parameter can be used to control the flatness of φ, and finding a good value of ε is a major issue in data approximation (see, e.g., the recent paper [3]). Throughout the rest of this paper we frequently omit ε from our notation, but the reader should be aware that some quantities, e.g., φ, Φ and c, may be ε-dependent.

The function P_f based on (5) exactly interpolates the given data. When N is large, solving this (often dense and ill-conditioned) linear system can be rather time-consuming. Moreover, in many practical applications the data values f_j will contain a certain amount of inaccuracy such as experimental noise or computational error (e.g., if the data come from some other previous computation), so we may not want to invest too much effort into obtaining a solution that exactly interpolates the data. Usually one attempts to overcome these difficulties by giving up some of the exactness of the solution function.
For more details on scattered data approximation by positive definite functions we refer the reader to either [1] or [7].
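To make the classical setup concrete, here is a minimal NumPy sketch of Gaussian RBF interpolation following (4)-(5). The function names (gaussian, rbf_interpolate, rbf_eval) and the test function are our own illustrative choices, not from the paper, and a plain direct solve is used with no safeguards against the ill-conditioning discussed above.

```python
import numpy as np

def gaussian(r, eps):
    """Gaussian basic function phi(r) = exp(-(eps*r)^2)."""
    return np.exp(-(eps * r) ** 2)

def rbf_interpolate(centers, f, eps):
    """Solve the interpolation system Phi c = f of (5),
    where Phi_ij = phi(||x_i - x_j||)."""
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return np.linalg.solve(gaussian(r, eps), f)

def rbf_eval(x, centers, c, eps):
    """Evaluate P_f(x) = sum_j c_j phi(||x - x_j||) as in (4)."""
    r = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return gaussian(r, eps) @ c

# Interpolate samples of f(x, y) = x + y on a small grid in [0, 1]^2.
g = np.linspace(0.0, 1.0, 5)
X = np.array([(xi, yi) for xi in g for yi in g])   # 25 data sites
f = X[:, 0] + X[:, 1]
c = rbf_interpolate(X, f, eps=5.0)
residual = np.max(np.abs(rbf_eval(X, X, c, eps=5.0) - f))
# The interpolant matches the data at the sites up to round-off.
```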
2. Interpolation By Iterated AMLS Approximation

In the standard approach to moving least squares (MLS) approximation one takes a local (usually polynomial) approximation space and determines the point-wise approximation as the solution of a locally weighted norm minimization problem. This and many more details of MLS and AMLS approximation are described in [1]. The dimension of this polynomial space is usually far smaller than the data size. The polynomial approach can be equivalently interpreted as a quasi-interpolation approach, namely the Backus-Gilbert method. Like the local polynomial approach, the Backus-Gilbert method avoids solving a full-size linear system. However, its generating functions are determined point-wise; that is, unless a polynomial space of low order is used, evaluation of the Backus-Gilbert solution at each point still requires the solution of a linear system whose size is based on the dimension of the polynomial space. The advantage of the Backus-Gilbert formulation is that it suggests an extension to a continuous approximant whose evaluation no longer requires the solution of any linear systems.

2.1. From the Backus-Gilbert Method to AMLS Approximation

Assume a quasi-interpolant Q_f : R^s -> R of the form

    Q_f(x) = Σ_{j=1}^{N} f_j φ(x - x_j).    (6)

It is known that if φ is a cardinal function, then Q_f interpolates the given data and minimizes the point-wise error. In the Backus-Gilbert formulation of MLS approximation we do not intend to use a cardinal φ, but instead determine the φ(x - x_j) at a fixed x via the following setup. Assume

    φ(x - x_j) = w(x - x_j) q(x - x_j),  q ∈ Π^s_d,    (7)

with a positive weight function w : R^s -> R and a polynomial q of degree at most d in s variables which satisfies the point-wise conditions

    Σ_{j=1}^{N} (x - x_j)^α φ(x - x_j) = δ_{α,0},  0 ≤ |α| ≤ d.    (8)

It will become apparent later why we use the same notation φ for the MLS generating function as we used to define the interpolant P_f in (4).
However, at this point the Backus-Gilbert method itself does not require φ to be strictly positive definite. Condition (8) represents a set of discrete moment conditions. These moment conditions and the form (7) of the generating function can be derived
from the Gram matrix associated with the standard MLS approximation. For d > 1 it is quite complicated to obtain an explicit analytic formulation for φ. However, if the x_j are (coordinate-wise) uniformly distributed, then (8) may be approximately converted to an integral form via construction of a Riemann sum, i.e.,

    ∫_{R^s} x̃^α φ(x̃) dx̃ = δ_{α,0},  0 ≤ |α| ≤ d.    (9)

In analogy to (8), condition (9) represents a set of continuous moment conditions. These conditions are the key ingredient of AMLS approximation, a method closely related to the general approximate approximation paradigm first suggested by Maz'ya in the early 1990s (see, e.g., [5] and references therein). The advantage of the continuous formulation (9) is that we can now explicitly construct the basic function φ. In fact, it is clear that any appropriately normalized φ will satisfy (9) for d = 0. Condition (9) also implies polynomial reproduction via function convolution, i.e.,

    ∫_{R^s} x̃^α φ(x - x̃) dx̃ = ∫_{R^s} (x - x̃)^α φ(x̃) dx̃ = x^α,  0 ≤ |α| ≤ d,    (10)

and consequently

    p(x) = ∫_{R^s} p(x̃) φ(x - x̃) dx̃,  p ∈ Π^s_d.    (11)

This reproduction of degree-d polynomials is what ensures good approximation properties of the scheme. Again, in the case of a uniform center distribution, the polynomial reproduction (11) can be approximately converted to a discrete form, i.e.,

    p(x) ≈ Σ_{j=1}^{N} p(x_j) φ(x - x_j),  p ∈ Π^s_d.    (12)

This interpretation leads us to believe that the (discrete) AMLS approximation method should also retain properties of function convolution. In particular, convolution is known to be a smoothing operation when it does not exactly reconstruct the convolved function. Consequently, we expect Q_f to carry some high-level smoothing effects.

2.2. Iterated AMLS Approximation

We now choose a φ that not only meets the continuous moment conditions but is also strictly positive definite. With such a φ, the quasi-interpolant
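The discrete reproduction property above can be observed numerically. The sketch below is our own illustration, not code from the paper: it uses uniformly spaced 1D centers with the normalized Gaussian (1/sqrt(pi)) e^{-((x-x_j)/h)^2}, which satisfies the d = 0 moment condition (9), and checks the approximate reproduction of constants as in (12). The small residual that remains is the saturation error typical of approximate approximation.

```python
import numpy as np

# Uniformly spaced centers x_j = j*h on [0, 1].
h = 0.05
centers = np.arange(0.0, 1.0 + h / 2, h)

def amls_quasi_interp(x, centers, f, h):
    """Quasi-interpolant Q_f(x) = sum_j f_j phi(x - x_j), cf. (6)/(12),
    with the normalized Gaussian phi(u) = exp(-(u/h)^2)/sqrt(pi)."""
    return (f * np.exp(-((x - centers) / h) ** 2)).sum() / np.sqrt(np.pi)

# Reproduction of constants (d = 0), checked away from the boundary:
f = np.ones_like(centers)
q = amls_quasi_interp(0.5, centers, f, h)
saturation_error = abs(q - 1.0)   # roughly 2*exp(-pi^2), i.e. about 1e-4
```

Note that the quasi-interpolant does not reproduce constants exactly; the error saturates at a small level instead of converging to zero, which is precisely the "approximate" in approximate approximation.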
Q_f may in fact be pushed as close to the interpolant P_f as desired via an iterative process initialized with Q_f. Details of this approach are presented in [2]. A basic algorithm is summarized below:

    Q_f^{(0)}(x) = Σ_{j=1}^{N} f_j φ(x - x_j),    (13)

    Q_f^{(n+1)}(x) = Q_f^{(n)}(x) + Σ_{j=1}^{N} [f_j - Q_f^{(n)}(x_j)] φ(x - x_j),    (14)

with residual functions

    R^{(n)}(x) = Q_f^{(n+1)}(x) - Q_f^{(n)}(x) = Σ_{j=1}^{N} [f_j - Q_f^{(n)}(x_j)] φ(x - x_j).    (15)

According to the results presented in [2], if the matrix with entries Φ_ij = φ(x_i - x_j) satisfies ||I - Φ||_2 < 1, then Q_f^{(n)} -> P_f, i.e., R^{(n)}(x) -> 0 as n -> ∞. This iterative process will eventually reach the exact interpolant simply by reducing the residuals at the data points during each iteration. Note that the residuals themselves are quasi-interpolants of the same construction as the initial Q_f^{(0)}.

We now remind the reader of the data site distribution issue. AMLS approximation in general is best formulated for a uniform data distribution. However, since the iterates Q_f^{(n)} are guaranteed to arrive at the interpolant P_f, it is not necessary to ask for evenly spaced data in the construction of Q_f^{(0)} or of the residual functions R^{(n)}.

The initial AMLS approximant will usually be smooth but rather far away from the true function. As the iteration proceeds, the residuals decrease, but noise is also being picked up (if there is any). Thus, one might expect that during the iteration there is a moment when the approximant is relatively ideal, meaning it is smooth and also close to the true function. This optimal moment is known to be problem-dependent and also φ-dependent. We will come back to this discussion in detail later. To end this section, let us make sure that there are some basic functions φ available for this iterative AMLS method.
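A compact sketch of the residual iteration (13)-(14), evaluated at the data sites only (which is all that is needed to drive the iteration). The setup is our own illustration: 1D uniform data with the (1/sqrt(pi))-normalized Gaussian keeps the eigenvalues of Φ inside (0, 2), so ||I - Φ||_2 < 1 and the iterates converge to the interpolant values.

```python
import numpy as np

def iterated_amls(Phi, f, n_iter):
    """Residual iteration (13)-(14) at the data sites: start from the
    quasi-interpolant values Q^(0) = Phi f and repeatedly add the
    quasi-interpolant of the current residual f - Q^(n)."""
    q = Phi @ f                    # Q^(0)(x_i), cf. (13)
    for _ in range(n_iter):
        q = q + Phi @ (f - q)      # Q^(n+1)(x_i), cf. (14)
    return q

h = 0.1
x = np.arange(0.0, 1.0 + h / 2, h)                  # 11 uniform sites
Phi = np.exp(-((x[:, None] - x[None, :]) / h) ** 2) / np.sqrt(np.pi)
f = np.sin(2 * np.pi * x)                           # noise-free data
q = iterated_amls(Phi, f, 300)
# After many iterations Q^(n)(x_i) agrees with the data f_i, i.e. the
# iteration has reached the interpolant at the sites.
```

Note that each step costs only a matrix-vector product; no linear system is ever solved, which is the computational appeal of the method.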
For example, any normalized strictly positive definite function will do for d = 0, e.g., inverse multiquadrics φ(x) = (1 + ||x||²)^{-β}, β > 0, for all s, or Wendland's compactly supported functions for specific s. Normalized Laguerre-Gaussians (see [2]) are available for arbitrary d and s, e.g.,
    φ(x) = e^{-||x||²}  for d = 0 and all s,
    φ(x) = (3/2 - ||x||²) e^{-||x||²}  for d = 1 and s = 1.

3. Ridge Regression for Noise Filtering

A well-known approach to dealing with noisy data is ridge regression, or smoothing spline approximation (see, e.g., the seminal paper [4] by Kimeldorf and Wahba). The basic idea is to give up some accuracy and obtain the solution via a minimization process that balances a tradeoff between smoothness and closeness to the data. If we again assume the solution P_f to be of the form (4) (but not subject to constraint (1)), then, using a smoothing parameter γ > 0, the coefficient vector c is determined by

    min_c  c^T Φ c + γ Σ_{j=1}^{N} (P_f(x_j) - f_j)².    (16)

Here the quadratic form c^T Φ c is an indicator of the smoothness of P_f, while Σ_{j=1}^{N} (P_f(x_j) - f_j)² clearly measures the closeness to the data. It can be shown that solving (16) leads to the linear system

    (Φ + (1/γ) I) c = f.    (17)

Note that this is just a regularized version of the interpolation system (5). It is clear that (17) has a unique solution provided φ is strictly positive definite, since then the matrix Φ + (1/γ) I is also positive definite for γ > 0. The function P_f found by solving (17) will no longer exactly fit the given data, simply because of the use of γ. The parameter γ balances the tradeoff between closeness of fit and smoothness. Obviously, a larger γ makes P_f fit the data more closely; in the limit, Φ + (1/γ) I -> Φ as γ -> ∞, which recovers the standard interpolation formulation (5). We will use cross-validation to find a good value of the smoothing parameter. A different approach was suggested in the recent paper [8].

4. Leave-One-Out Cross-Validation

As mentioned earlier, the optimal number of iterations for iterated AMLS approximation and the optimal smoothing parameter γ for ridge regression are φ-dependent.
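A short sketch of the ridge regression solve (17); the code and parameter values are our own illustration, not from the paper. It also checks the claim that a larger γ weights closeness to the data more heavily: the residual at the data sites shrinks monotonically as γ grows.

```python
import numpy as np

def ridge_rbf_coeffs(x, f, eps, gamma):
    """Solve the regularized system (Phi + (1/gamma) I) c = f of (17)
    for 1D data sites x with Gaussian basic functions."""
    Phi = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    c = np.linalg.solve(Phi + np.eye(len(f)) / gamma, f)
    return Phi, c

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
f = np.sin(2 * np.pi * x) + 0.03 * rng.standard_normal(20)   # noisy data

Phi, c_smooth = ridge_rbf_coeffs(x, f, eps=4.0, gamma=1e-1)  # heavy smoothing
_, c_tight = ridge_rbf_coeffs(x, f, eps=4.0, gamma=1e4)      # near-interpolation
res_smooth = np.linalg.norm(Phi @ c_smooth - f)
res_tight = np.linalg.norm(Phi @ c_tight - f)
# res_tight < res_smooth: larger gamma fits the (noisy) data more closely.
```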
For a given problem, in addition to the optimal smoothing parameter γ used in ridge regression and the optimal number of iterations for iterated AMLS approximation, we seek to find an optimal shape parameter ε for the basic function φ in both methods. Indeed, an optimal ε is also required for standard interpolation. In order to find
these optimal values we use a variation of the so-called leave-one-out cross-validation (LOOCV) algorithm. We now describe LOOCV along with its implementation.

4.1. LOOCV for Ridge Regression

For k = 1, ..., N, denote

    x^[k] = [x_1, ..., x_{k-1}, x_{k+1}, ..., x_N]^T,
    f^[k] = [f_1, ..., f_{k-1}, f_{k+1}, ..., f_N]^T,

and correspondingly, Φ^[k]_ij = φ(ε(x^[k]_i - x^[k]_j)) and c^[k] = [c^[k]_1, ..., c^[k]_{N-1}]^T. This notation indicates that in forming the approximation we do not take the whole data set into consideration but leave out the k-th element. Hence, all involved quantities are now of size N - 1 instead of N. We implement the ridge regression method to find c^[k], that is,

    c^[k] = (Φ^[k] + (1/γ) I)^{-1} f^[k].    (18)

Correspondingly, the ridge regression approximation leaving out the k-th data value becomes

    P_f^[k](x) = Σ_{j=1}^{N-1} c^[k]_j φ(ε(x - x^[k]_j)).    (19)

In order to obtain a cost function for the optimization of γ (and ε) we compute the residual at the left-out data point x_k, i.e.,

    e_k = f_k - P_f^[k](x_k),    (20)

and denote e = [e_1, ..., e_N]^T. Judging from the formulation presented thus far, this optimization demands a large amount of computation, especially when the sets of candidate values for γ and ε are large. We therefore adapt a formula given by Rippa for standard RBF interpolation that significantly reduces the computation (see [6]). Denote Φ̃ = Φ + (1/γ) I, where Φ is the interpolation matrix based on the whole data set. Following Rippa's analysis in [6] one can show that the e_k given by (20) can actually be computed as

    e_k = (Φ̃^{-1} f)_k / (Φ̃^{-1})_{kk}.    (21)

For each pair of γ and ε, (20) requires the computation of a matrix inverse and function evaluations for every different k, whereas with (21) we need to compute and store the inverse of the full-size matrix only once for all k = 1, ..., N.
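Rippa's shortcut (21) is easy to verify numerically. The sketch below is our own code (the paper gives only the formula): it computes all N leave-one-out residuals from a single full-size inverse, and an explicit leave-one-out solve via (18)-(20) confirms the identity to round-off.

```python
import numpy as np

def loocv_errors_ridge(Phi, f, gamma):
    """All LOOCV residuals at once via Rippa's formula (21):
    e_k = (Phi_tilde^{-1} f)_k / (Phi_tilde^{-1})_kk,
    with Phi_tilde = Phi + (1/gamma) I."""
    Pt_inv = np.linalg.inv(Phi + np.eye(len(f)) / gamma)
    return (Pt_inv @ f) / np.diag(Pt_inv)

rng = np.random.default_rng(0)
x = np.sort(rng.random(8))                       # scattered 1D sites
Phi = np.exp(-(3.0 * (x[:, None] - x[None, :])) ** 2)
f = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(8)
e = loocv_errors_ridge(Phi, f, gamma=10.0)       # e_1, ..., e_N in one sweep
```

Brute-force LOOCV per (18)-(20) would instead require N solves of size N - 1, one for each left-out point, repeated for every candidate pair (γ, ε).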
4.2. LOOCV for Iterated AMLS Approximation

As mentioned earlier, the iterated AMLS approximant carries a strong smoothing property which we will use for noise-filtering purposes. Again, we employ a variant of LOOCV to find an optimal shape parameter ε and an optimal number of iterations. An interpretation of the iterated AMLS method is that the iterative approximant Q_f^{(n)} approximates the interpolant P_f by representing the inverse of the interpolation matrix Φ as a truncated Neumann series (see [2]), i.e.,

    Φ^{-1} ≈ Σ_{i=0}^{n} (I - Φ)^i.    (22)

To optimize Q_f^{(n)} with respect to n and ε using LOOCV, we again define a cost function based on Rippa's formula (21), which was originally derived for the interpolation process. This leads to the following procedure. For a fixed ε, formulate a cost function as follows: during each iteration, apply an approximate version of Rippa's formula based on (22), i.e.,

    e^{(n)}_k = (Σ_{i=0}^{n} (I - Φ)^i f)_k / (Σ_{i=0}^{n} (I - Φ)^i)_{kk},    (23)

and denote e^{(n)} = [e^{(n)}_1, ..., e^{(n)}_N]^T. If ||e^{(n)} - e^{(n-1)}|| falls below a specified tolerance, or the maximal iteration number is reached, stop the iteration and store e(ε) = e^{(n)}. An optimal ε is then obtained by minimizing e(ε) with respect to ε. Since the matrix I - Φ is non-singular (see [2]) we can perform an eigen-decomposition, I - Φ = XΛX^{-1}, so that the cost of updating (23) during the iterations can be reduced to O(N²) from the O(N³) required by direct matrix multiplication.

5. Discussion of Numerical Results

In our numerical experiments we use Gaussian basic functions. All computations are performed on the unit square [0,1]². The scaling of the basic functions is achieved by using two parameters: the shape parameter ε defines the general shape, and a second parameter h that reflects the data spacing is used to mimic stationary approximation. Thus, we end up using expansions such as

    P_f(x) = Σ_{j=1}^{N} c_j e^{-(ε²/h²)||x - x_j||²}  or  Q_f(x) = Σ_{j=1}^{N} f_j e^{-(ε²/h²)||x - x_j||²}.
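The Neumann-series cost function (23) can be sketched as follows; this is our own illustration, and for clarity the O(N²) eigen-decomposition update mentioned above is replaced by plain matrix products. On a setup with ||I - Φ||_2 < 1, the residual vector e^(n) approaches the exact interpolation LOOCV residuals (Φ^{-1}f)_k / (Φ^{-1})_kk as n grows.

```python
import numpy as np

def amls_loocv_errors(Phi, f, n_max):
    """Accumulate A_n = sum_{i=0}^{n} (I - Phi)^i, the truncated Neumann
    series (22) for Phi^{-1}, and return the LOOCV residual vectors
    e^(n)_k = (A_n f)_k / (A_n)_kk of (23) for n = 1, ..., n_max."""
    N = len(f)
    M = np.eye(N) - Phi
    A = np.eye(N)               # A_0 = I
    P = np.eye(N)               # current power (I - Phi)^i
    errs = []
    for _ in range(n_max):
        P = P @ M               # next power of (I - Phi)
        A = A + P
        errs.append((A @ f) / np.diag(A))
    return errs

h = 0.1
x = np.arange(0.0, 1.0 + h / 2, h)        # 11 uniform 1D sites
Phi = np.exp(-((x[:, None] - x[None, :]) / h) ** 2) / np.sqrt(np.pi)
f = np.sin(2 * np.pi * x)
errs = amls_loocv_errors(Phi, f, 400)
# In practice one would stop once ||e^(n) - e^(n-1)|| falls below a
# tolerance and minimize the stored cost over epsilon.
```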
Test data are taken from a modified Franke function. In Figure 1 we compare the convergence behavior of Gaussian RBF interpolation with that of the iterated AMLS algorithm for a problem without noise. The interpolation error is represented by the highly oscillatory curve caused by the associated numerical instabilities. The other two curves represent the errors for iterated AMLS approximation: the dash-dotted curve is for the initial iterate Q_f^{(0)} and the solid one corresponds to Q_f^{(10)}. When ε is small, the iterated AMLS method successfully overcomes the numerical difficulties associated with the interpolant since the residual iteration acts as a smoothing operation. The graph on the right is a zoom-in of the left graph for a small ε range.

Fig. 1. RMS-errors vs. ε for P_f, Q_f^{(0)} and Q_f^{(10)}, and illustration of instabilities; Halton points.

Results based on data with 3% noise added are summarized in Table 1. In addition to the two methods discussed in this paper we include results for an iterated Shepard approximation. Clearly, simple RBF interpolation (with a fixed ε) is not recommended for noisy data. All three methods produce similar error drops. However, searching for optimal parameters for ridge regression is considerably more time-consuming than for the two iterative methods, neither of which requires the solution of linear systems during the iterations. We note that for the two iterative methods the initial approximant based on an optimal ε is flat and does not approximate the data very well. However, convergence is rather fast during early iterations, and then slows down once the approximant starts feeding on the noise. We hope to detect this point, at which the noise is optimally filtered, with the LOOCV algorithm. In summary, both iterative methods produce reliably smooth results in significantly less time than the ridge regression method, and therefore perform quite well in this noise-filtering application.
Tab. 1. Comparison of the different methods (iterated AMLS, iterated Shepard, ridge regression) using Gaussians for noisy data sampled at Halton points: RMS-errors, optimal ε, number of iterations resp. γ, and run times for increasing N.

References

1. Fasshauer, G. E., Meshfree Approximation Methods with MATLAB, World Scientific Publishers, Singapore, to appear.
2. Fasshauer, G. E., and J. G. Zhang, Iterated approximate moving least squares approximation, Proceedings of Meshless Methods (Lisbon 2005), Springer, in press.
3. Fornberg, B., and J. Zuev, The Runge phenomenon and spatially variable shape parameters in RBF interpolation, preprint.
4. Kimeldorf, G., and G. Wahba, Some results on Tchebycheffian spline functions, J. Math. Anal. Applic. 33 (1971).
5. Maz'ya, V., and G. Schmidt, On quasi-interpolation with non-uniformly distributed centers on domains and manifolds, J. Approx. Theory 110 (2001).
6. Rippa, S., An algorithm for selecting a good value for the parameter c in radial basis function interpolation, Adv. Comput. Math. 11 (1999).
7. Wendland, H., Scattered Data Approximation, Cambridge University Press, Cambridge.
8. Wendland, H., and C. Rieger, Approximate interpolation with applications to selecting smoothing parameters, Numer. Math. 101.

Gregory E. Fasshauer and Jack G. Zhang
Illinois Institute of Technology
Chicago, IL
fasshauer@iit.edu
zhangou@iit.edu
More informationMultivariate Interpolation with Increasingly Flat Radial Basis Functions of Finite Smoothness
Multivariate Interpolation with Increasingly Flat Radial Basis Functions of Finite Smoothness Guohui Song John Riddle Gregory E. Fasshauer Fred J. Hickernell Abstract In this paper, we consider multivariate
More informationPulling by Pushing, Slip with Infinite Friction, and Perfectly Rough Surfaces
1993 IEEE International Conerence on Robotics and Automation Pulling by Pushing, Slip with Ininite Friction, and Perectly Rough Suraces Kevin M. Lynch Matthew T. Mason The Robotics Institute and School
More informationProducts and Convolutions of Gaussian Probability Density Functions
Tina Memo No. 003-003 Internal Report Products and Convolutions o Gaussian Probability Density Functions P.A. Bromiley Last updated / 9 / 03 Imaging Science and Biomedical Engineering Division, Medical
More information3. Several Random Variables
. Several Random Variables. Two Random Variables. Conditional Probabilit--Revisited. Statistical Independence.4 Correlation between Random Variables. Densit unction o the Sum o Two Random Variables. Probabilit
More informationA Comparative Study of Non-separable Wavelet and Tensor-product. Wavelet; Image Compression
Copyright c 007 Tech Science Press CMES, vol., no., pp.91-96, 007 A Comparative Study o Non-separable Wavelet and Tensor-product Wavelet in Image Compression Jun Zhang 1 Abstract: The most commonly used
More informationStability of Kernel Based Interpolation
Stability of Kernel Based Interpolation Stefano De Marchi Department of Computer Science, University of Verona (Italy) Robert Schaback Institut für Numerische und Angewandte Mathematik, University of Göttingen
More information1 Relative degree and local normal forms
THE ZERO DYNAMICS OF A NONLINEAR SYSTEM 1 Relative degree and local normal orms The purpose o this Section is to show how single-input single-output nonlinear systems can be locally given, by means o a
More informationMath 248B. Base change morphisms
Math 248B. Base change morphisms 1. Motivation A basic operation with shea cohomology is pullback. For a continuous map o topological spaces : X X and an abelian shea F on X with (topological) pullback
More informationProbabilistic Model of Error in Fixed-Point Arithmetic Gaussian Pyramid
Probabilistic Model o Error in Fixed-Point Arithmetic Gaussian Pyramid Antoine Méler John A. Ruiz-Hernandez James L. Crowley INRIA Grenoble - Rhône-Alpes 655 avenue de l Europe 38 334 Saint Ismier Cedex
More informationLecture 8 Optimization
4/9/015 Lecture 8 Optimization EE 4386/5301 Computational Methods in EE Spring 015 Optimization 1 Outline Introduction 1D Optimization Parabolic interpolation Golden section search Newton s method Multidimensional
More informationRoberto s Notes on Differential Calculus Chapter 8: Graphical analysis Section 1. Extreme points
Roberto s Notes on Dierential Calculus Chapter 8: Graphical analysis Section 1 Extreme points What you need to know already: How to solve basic algebraic and trigonometric equations. All basic techniques
More informationA new stable basis for radial basis function interpolation
A new stable basis for radial basis function interpolation Stefano De Marchi and Gabriele Santin Department of Mathematics University of Padua (Italy) Abstract It is well-known that radial basis function
More informationEstimation and detection of a periodic signal
Estimation and detection o a periodic signal Daniel Aronsson, Erik Björnemo, Mathias Johansson Signals and Systems Group, Uppsala University, Sweden, e-mail: Daniel.Aronsson,Erik.Bjornemo,Mathias.Johansson}@Angstrom.uu.se
More informationKernel B Splines and Interpolation
Kernel B Splines and Interpolation M. Bozzini, L. Lenarduzzi and R. Schaback February 6, 5 Abstract This paper applies divided differences to conditionally positive definite kernels in order to generate
More informationCircuit Complexity / Counting Problems
Lecture 5 Circuit Complexity / Counting Problems April 2, 24 Lecturer: Paul eame Notes: William Pentney 5. Circuit Complexity and Uniorm Complexity We will conclude our look at the basic relationship between
More informationA NOTE ON HENSEL S LEMMA IN SEVERAL VARIABLES
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 125, Number 11, November 1997, Pages 3185 3189 S 0002-9939(97)04112-9 A NOTE ON HENSEL S LEMMA IN SEVERAL VARIABLES BENJI FISHER (Communicated by
More informationTANGENT SPACE INTRINSIC MANIFOLD REGULARIZATION FOR DATA REPRESENTATION. Shiliang Sun
TANGENT SPACE INTRINSIC MANIFOLD REGULARIZATION FOR DATA REPRESENTATION Shiliang Sun Department o Computer Science and Technology, East China Normal University 500 Dongchuan Road, Shanghai 200241, P. R.
More informationSymbolic-Numeric Methods for Improving Structural Analysis of DAEs
Symbolic-Numeric Methods or Improving Structural Analysis o DAEs Guangning Tan, Nedialko S. Nedialkov, and John D. Pryce Abstract Systems o dierential-algebraic equations (DAEs) are generated routinely
More informationPart I: Thin Converging Lens
Laboratory 1 PHY431 Fall 011 Part I: Thin Converging Lens This eperiment is a classic eercise in geometric optics. The goal is to measure the radius o curvature and ocal length o a single converging lens
More informationDefinition: Let f(x) be a function of one variable with continuous derivatives of all orders at a the point x 0, then the series.
2.4 Local properties o unctions o several variables In this section we will learn how to address three kinds o problems which are o great importance in the ield o applied mathematics: how to obtain the
More informationRadial Basis Functions I
Radial Basis Functions I Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 14, 2008 Today Reformulation of natural cubic spline interpolation Scattered
More informationA Posteriori Error Bounds for Meshless Methods
A Posteriori Error Bounds for Meshless Methods Abstract R. Schaback, Göttingen 1 We show how to provide safe a posteriori error bounds for numerical solutions of well-posed operator equations using kernel
More informationRoot Arrangements of Hyperbolic Polynomial-like Functions
Root Arrangements o Hyperbolic Polynomial-like Functions Vladimir Petrov KOSTOV Université de Nice Laboratoire de Mathématiques Parc Valrose 06108 Nice Cedex France kostov@mathunicer Received: March, 005
More informationUltra Fast Calculation of Temperature Profiles of VLSI ICs in Thermal Packages Considering Parameter Variations
Ultra Fast Calculation o Temperature Proiles o VLSI ICs in Thermal Packages Considering Parameter Variations Je-Hyoung Park, Virginia Martín Hériz, Ali Shakouri, and Sung-Mo Kang Dept. o Electrical Engineering,
More informationAn efficient approximation method for the nonhomogeneous backward heat conduction problems
International Journal o Engineering and Applied Sciences IJEAS) ISSN: 394-3661 Volume-5 Issue-3 March 018 An eicient appromation method or the nonhomogeneous bacward heat conduction problems Jin Wen Xiuen
More informationRoot Finding and Optimization
Root Finding and Optimization Ramses van Zon SciNet, University o Toronto Scientiic Computing Lecture 11 February 11, 2014 Root Finding It is not uncommon in scientiic computing to want solve an equation
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 5 Nonlinear Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction
More informationOutline. Scientific Computing: An Introductory Survey. Nonlinear Equations. Nonlinear Equations. Examples: Nonlinear Equations
Methods for Systems of Methods for Systems of Outline Scientific Computing: An Introductory Survey Chapter 5 1 Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign
More informationMath-3 Lesson 8-5. Unit 4 review: a) Compositions of functions. b) Linear combinations of functions. c) Inverse Functions. d) Quadratic Inequalities
Math- Lesson 8-5 Unit 4 review: a) Compositions o unctions b) Linear combinations o unctions c) Inverse Functions d) Quadratic Inequalities e) Rational Inequalities 1. Is the ollowing relation a unction
More informationCONVECTIVE HEAT TRANSFER CHARACTERISTICS OF NANOFLUIDS. Convective heat transfer analysis of nanofluid flowing inside a
Chapter 4 CONVECTIVE HEAT TRANSFER CHARACTERISTICS OF NANOFLUIDS Convective heat transer analysis o nanoluid lowing inside a straight tube o circular cross-section under laminar and turbulent conditions
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 40: Symmetric RBF Collocation in MATLAB Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter
More informationTLT-5200/5206 COMMUNICATION THEORY, Exercise 3, Fall TLT-5200/5206 COMMUNICATION THEORY, Exercise 3, Fall Problem 1.
TLT-5/56 COMMUNICATION THEORY, Exercise 3, Fall Problem. The "random walk" was modelled as a random sequence [ n] where W[i] are binary i.i.d. random variables with P[W[i] = s] = p (orward step with probability
More informationReferences Ideal Nyquist Channel and Raised Cosine Spectrum Chapter 4.5, 4.11, S. Haykin, Communication Systems, Wiley.
Baseand Data Transmission III Reerences Ideal yquist Channel and Raised Cosine Spectrum Chapter 4.5, 4., S. Haykin, Communication Systems, iley. Equalization Chapter 9., F. G. Stremler, Communication Systems,
More informationGlobal Weak Solution of Planetary Geostrophic Equations with Inviscid Geostrophic Balance
Global Weak Solution o Planetary Geostrophic Equations with Inviscid Geostrophic Balance Jian-Guo Liu 1, Roger Samelson 2, Cheng Wang 3 Communicated by R. Temam) Abstract. A reormulation o the planetary
More informationAnalytical expressions for field astigmatism in decentered two mirror telescopes and application to the collimation of the ESO VLT
ASTRONOMY & ASTROPHYSICS MAY II 000, PAGE 57 SUPPLEMENT SERIES Astron. Astrophys. Suppl. Ser. 44, 57 67 000) Analytical expressions or ield astigmatism in decentered two mirror telescopes and application
More informationManometer tubes for monitoring coastal water levels: New frequency response factors
Manometer tubes or monitoring coastal water levels: New requency response actors Author Jaari, Alireza, Cartwright, Nick, Nielsen, P. Published 0 Journal Title Coastal Engineering DOI https://doi.org/0.06/j.coastaleng.0.03.00
More informationKernel-based Approximation. Methods using MATLAB. Gregory Fasshauer. Interdisciplinary Mathematical Sciences. Michael McCourt.
SINGAPORE SHANGHAI Vol TAIPEI - Interdisciplinary Mathematical Sciences 19 Kernel-based Approximation Methods using MATLAB Gregory Fasshauer Illinois Institute of Technology, USA Michael McCourt University
More informationAnalysis Scheme in the Ensemble Kalman Filter
JUNE 1998 BURGERS ET AL. 1719 Analysis Scheme in the Ensemble Kalman Filter GERRIT BURGERS Royal Netherlands Meteorological Institute, De Bilt, the Netherlands PETER JAN VAN LEEUWEN Institute or Marine
More information2 Frequency-Domain Analysis
2 requency-domain Analysis Electrical engineers live in the two worlds, so to speak, o time and requency. requency-domain analysis is an extremely valuable tool to the communications engineer, more so
More informationThe Ascent Trajectory Optimization of Two-Stage-To-Orbit Aerospace Plane Based on Pseudospectral Method
Available online at www.sciencedirect.com ScienceDirect Procedia Engineering 00 (014) 000 000 www.elsevier.com/locate/procedia APISAT014, 014 Asia-Paciic International Symposium on Aerospace Technology,
More informationPhysics 5153 Classical Mechanics. Solution by Quadrature-1
October 14, 003 11:47:49 1 Introduction Physics 5153 Classical Mechanics Solution by Quadrature In the previous lectures, we have reduced the number o eective degrees o reedom that are needed to solve
More informationROBUST STABILITY AND PERFORMANCE ANALYSIS OF UNSTABLE PROCESS WITH DEAD TIME USING Mu SYNTHESIS
ROBUST STABILITY AND PERFORMANCE ANALYSIS OF UNSTABLE PROCESS WITH DEAD TIME USING Mu SYNTHESIS I. Thirunavukkarasu 1, V. I. George 1, G. Saravana Kumar 1 and A. Ramakalyan 2 1 Department o Instrumentation
More informationCS 450 Numerical Analysis. Chapter 5: Nonlinear Equations
Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80
More informationA Particle Swarm Optimization Algorithm for Neighbor Selection in Peer-to-Peer Networks
A Particle Swarm Optimization Algorithm or Neighbor Selection in Peer-to-Peer Networks Shichang Sun 1,3, Ajith Abraham 2,4, Guiyong Zhang 3, Hongbo Liu 3,4 1 School o Computer Science and Engineering,
More informationRegularization in Reproducing Kernel Banach Spaces
.... Regularization in Reproducing Kernel Banach Spaces Guohui Song School of Mathematical and Statistical Sciences Arizona State University Comp Math Seminar, September 16, 2010 Joint work with Dr. Fred
More informationA Brief Survey on Semi-supervised Learning with Graph Regularization
000 00 002 003 004 005 006 007 008 009 00 0 02 03 04 05 06 07 08 09 020 02 022 023 024 025 026 027 028 029 030 03 032 033 034 035 036 037 038 039 040 04 042 043 044 045 046 047 048 049 050 05 052 053 A
More informationarxiv: v1 [math.gt] 20 Feb 2008
EQUIVALENCE OF REAL MILNOR S FIBRATIONS FOR QUASI HOMOGENEOUS SINGULARITIES arxiv:0802.2746v1 [math.gt] 20 Feb 2008 ARAÚJO DOS SANTOS, R. Abstract. We are going to use the Euler s vector ields in order
More information