A Guaranteed, Adaptive, Automatic Algorithm for Univariate Function Minimization
1 A Guaranteed, Adaptive, Automatic Algorithm for Univariate Function Minimization. Xin Tong, Department of Applied Mathematics, IIT. May 20, 2014.
2 Outline: Introduction; Conclusion and Future Work.
3 Problem Description. Univariate function minimization on the unit interval:
$$S(f) = \min_{0 \le x \le 1} f(x) \in \mathbb{R},$$
where the input function space is the Sobolev space $W^{2,\infty} = \{f \in C[0,1] : \|f''\|_\infty < \infty\}$.
4 Background. fminbnd in Matlab finds the minimum of a function of one variable within a fixed interval; it may return only a local solution and is not guaranteed. Clancy et al. [1]: numerical analysis on a cone of input functions; $L^\infty$ approximation of univariate functions.
5 Basic concepts and fixed-cost algorithms. The basic linear spline algorithm:
$$A_n(f)(x) = (n-1)\bigl[f(x_i)(x_{i+1}-x) + f(x_{i+1})(x-x_i)\bigr], \qquad x_i \le x \le x_{i+1}, \quad x_i = \frac{i-1}{n-1},\ i = 1,2,\dots,n.$$
Definition (Cone of input functions):
$$\mathcal{C}_\tau := \bigl\{f \in W^{2,\infty} : \|f''\|_\infty \le \tau\,\|f' - f(1) + f(0)\|_\infty\bigr\}.$$
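As a concrete illustration, here is a minimal NumPy sketch of the linear spline $A_n(f)$ on the equally spaced nodes. The name `linear_spline` is illustrative and not part of any published code; `np.interp` is used because piecewise-linear interpolation on these nodes is exactly $A_n(f)$.

```python
import numpy as np

def linear_spline(f, n):
    """Return a callable A_n(f): the piecewise-linear interpolant of f
    on the n equally spaced nodes x_i = (i-1)/(n-1), i = 1, ..., n."""
    x = np.linspace(0.0, 1.0, n)   # nodes x_1, ..., x_n
    y = f(x)                       # the only function data the algorithm uses
    return lambda t: np.interp(t, x, y)

# usage: A_9(f) for f(x) = (x - 0.3)^2
A9 = linear_spline(lambda x: (x - 0.3) ** 2, 9)
print(A9(np.array([0.0, 0.3, 1.0])))
```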
6 Basic concepts and fixed-cost algorithms, cont'd. One can approximate $\|f' - f(1) + f(0)\|_\infty$ by the following algorithm:
$$\widetilde{F}_n(f) := \|A_n(f)' - A_2(f)'\|_\infty = \sup_{i=1,\dots,n-1} \bigl|(n-1)[f(x_{i+1}) - f(x_i)] - f(1) + f(0)\bigr|.$$
The next algorithm approximates $\|f''\|_\infty$ and also serves as a lower bound for it:
$$F_n(f) := (n-1)^2 \sup_{i=1,\dots,n-2} \bigl|f(x_i) - 2f(x_{i+1}) + f(x_{i+2})\bigr| \le \|f''\|_\infty.$$
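A short sketch of how $\widetilde{F}_n(f)$ and $F_n(f)$ can be computed from the $n$ sampled values (assuming Python/NumPy; the function names are illustrative):

```python
import numpy as np

def Ftilde(y):
    """Ftilde_n(f): sup over i of |(n-1)[f(x_{i+1}) - f(x_i)] - f(1) + f(0)|,
    computed from n equally spaced values y = f(x_1), ..., f(x_n)."""
    n = len(y)
    slopes = (n - 1) * np.diff(y)                 # first divided differences
    return np.max(np.abs(slopes - (y[-1] - y[0])))

def Flower(y):
    """F_n(f): (n-1)^2 sup over i of |f(x_i) - 2 f(x_{i+1}) + f(x_{i+2})|,
    a data-based lower bound on ||f''||_inf."""
    n = len(y)
    return (n - 1) ** 2 * np.max(np.abs(y[:-2] - 2 * y[1:-1] + y[2:]))

y = np.exp(-(10 * (np.linspace(0, 1, 33) - 0.4)) ** 2)   # sample a smooth bump
print(Ftilde(y), Flower(y))
```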
7 Approximation theory from Clancy et al. [1], I. The difference between $f(x)$ and its linear spline approximation $A_n(f)(x)$ is
$$f(x) - A_n(f)(x) = (n-1)\int_{x_i}^{x_{i+1}} v_i(t,x)\, f''(t)\,dt,$$
where the continuous, piecewise differentiable kernel $v_i$ is defined as
$$v_i(t,x) := \begin{cases} (x_{i+1}-x)(x_i-t), & x_i \le t \le x,\\ (x-x_i)(t-x_{i+1}), & x < t \le x_{i+1}. \end{cases}$$
8 Approximation theory from Clancy et al. [1], II.
Bounds on $\|f' - f(1) + f(0)\|_\infty - \widetilde{F}_n(f)$:
$$0 \le \|f' - f(1) + f(0)\|_\infty - \widetilde{F}_n(f) \le \frac{\|f''\|_\infty}{2(n-1)}.$$
An upper bound on $\|f''\|_\infty$:
$$\|f''\|_\infty \le \tau\, C_n\, \widetilde{F}_n(f), \qquad \text{where } C_n = \frac{2(n-1)}{2(n-1) - \tau}.$$
A necessary condition that $f$ lies in the cone $\mathcal{C}_\tau$:
$$f \in \mathcal{C}_\tau \implies F_n(f) \le \|f''\|_\infty \le \tau\, C_n\, \widetilde{F}_n(f) \implies \tau_{\min,n} := \frac{2(n-1)\,F_n(f)}{2(n-1)\,\widetilde{F}_n(f) + F_n(f)} \le \tau.$$
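Continuing the sketch above, the necessary-condition check and the data-based bound on $\|f''\|_\infty$ translate directly into a few lines. Names are illustrative; the guard against division by zero is an implementation detail, not part of the slides.

```python
import numpy as np

def cone_diagnostics(y, tau):
    """Return (tau_min_n, fpp_bound): the smallest cone constant consistent
    with the data, and the bound tau * C_n * Ftilde_n(f) on ||f''||_inf."""
    n = len(y)
    Ft = np.max(np.abs((n - 1) * np.diff(y) - (y[-1] - y[0])))        # Ftilde_n(f)
    Fl = (n - 1) ** 2 * np.max(np.abs(y[:-2] - 2 * y[1:-1] + y[2:]))  # F_n(f)
    tau_min = 2 * (n - 1) * Fl / max(2 * (n - 1) * Ft + Fl, 1e-300)
    Cn = 2 * (n - 1) / (2 * (n - 1) - tau)        # requires 2(n-1) > tau
    return tau_min, tau * Cn * Ft

y = np.cos(4 * np.pi * np.linspace(0, 1, 65))
tau_min, fpp_bound = cone_diagnostics(y, tau=100.0)
print(tau_min <= 100.0, fpp_bound)                # does the necessary condition hold?
```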
9 How do we get the algorithm? Bounds on $f(x)$ lead to bounds on $\min_{0\le x\le 1} f(x)$, which give an estimated minimum value with an error bound and a set of possible optimal solutions; together these yield the stopping condition and the algorithm.
10 Lower bound on $f(x)$. For $x \in [x_i, x_{i+1}]$, the difference between $f(x)$ and its linear spline approximation $A_n(f)(x)$ satisfies
$$f(x) - A_n(f)(x) = (n-1)\int_{x_i}^{x_{i+1}} v_i(t,x) f''(t)\,dt \ge -(n-1)\,\|f''\|_\infty \int_{x_i}^{x_{i+1}} |v_i(t,x)|\,dt = -\|f''\|_\infty\, \frac{(x-x_i)(x_{i+1}-x)}{2} \ge -\tau C_n \widetilde{F}_n(f)\, \frac{(x-x_i)(x_{i+1}-x)}{2},$$
where Hölder's inequality [2, Theorem p. 356] is used to bound the integral.
11 Lower bound on $f(x)$, cont'd. Lower bound on $f(x)$:
$$f^i_{nl}(x) := A_n(f)(x) - \tau C_n \widetilde{F}_n(f)\,\frac{(x-x_i)(x_{i+1}-x)}{2} \le f(x), \qquad x \in [x_i, x_{i+1}].$$
Note that this lower bound $f^i_{nl}$ is a quadratic function on each $[x_i, x_{i+1}]$.
12 Lower bound on $\min_{0\le x\le 1} f(x)$. Rewrite the quadratic function $f^i_{nl}$ by completing the square and rescaling the range:
$$f^i_{nl}(x) = \frac{f(x_{i+1}) - f(x_i)}{4 C^i_n}\,(t_i + C^i_n)^2 - \frac{f(x_{i+1}) - f(x_i)}{4}\Bigl(C^i_n + \frac{1}{C^i_n}\Bigr) + \frac{1}{2}\bigl[f(x_{i+1}) + f(x_i)\bigr],$$
where
$$t_i = 2(n-1)\Bigl(x - \frac{x_i + x_{i+1}}{2}\Bigr) \in [-1,1] \quad \text{and} \quad C^i_n = \frac{2(n-1)^2\,[f(x_{i+1}) - f(x_i)]}{\tau C_n \widetilde{F}_n(f)}.$$
13 Lower bound on $\min_{0\le x\le 1} f(x)$, cont'd. The minimum value of $f^i_{nl}$ on $[x_i, x_{i+1}]$ is
$$l^i_n := \frac{1}{2}\bigl[f(x_{i+1}) + f(x_i)\bigr] - \frac{1}{4}\bigl|f(x_{i+1}) - f(x_i)\bigr|\Bigl(\min(|C^i_n|,1) + \frac{1}{\min(|C^i_n|,1)}\Bigr).$$
The lower bound on $\min_{0\le x\le 1} f(x)$ is
$$L_n := \min_{1\le i\le n-1} l^i_n.$$
14 Bounds on $\min_{0\le x\le 1} f(x)$. The upper bound on $\min_{0\le x\le 1} f(x)$ is
$$U_n := \min_{0\le x\le 1} A_n(f)(x) = \min_{1\le i\le n} f(x_i) \ge \min_{0\le x\le 1} f(x).$$
Bounds on $\min_{0\le x\le 1} f(x)$:
$$L_n \le \min_{0\le x\le 1} f(x) \le U_n.$$
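Putting the last few slides together, the bracket $[L_n, U_n]$ for $\min f$ can be computed from the same $n$ function values. The sketch below assumes $f \in \mathcal{C}_\tau$ and $2(n-1) > \tau$; the small-number guards and the function name are implementation details, not part of the talk.

```python
import numpy as np

def min_bounds(y, tau):
    """Return (L_n, U_n) with L_n <= min f <= U_n, from n equally spaced values
    y = f(x_1), ..., f(x_n), assuming f lies in the cone C_tau."""
    n = len(y)
    Ft = np.max(np.abs((n - 1) * np.diff(y) - (y[-1] - y[0])))
    Cn = 2 * (n - 1) / (2 * (n - 1) - tau)
    fpp = max(tau * Cn * Ft, 1e-300)               # data-based bound on ||f''||_inf
    df = np.diff(y)                                # f(x_{i+1}) - f(x_i)
    Ci = 2 * (n - 1) ** 2 * df / fpp               # C_n^i on each subinterval
    m = np.maximum(np.minimum(np.abs(Ci), 1.0), 1e-300)
    # l_n^i: minimum of the quadratic lower bound on [x_i, x_{i+1}];
    # when f(x_i) = f(x_{i+1}) the dip reduces to fpp / (8 (n-1)^2)
    dip = np.where(df != 0.0, 0.25 * np.abs(df) * (m + 1.0 / m),
                   fpp / (8 * (n - 1) ** 2))
    Ln = np.min(0.5 * (y[:-1] + y[1:]) - dip)
    Un = np.min(y)                                 # min of the data = min of A_n(f)
    return Ln, Un

x = np.linspace(0, 1, 257)
Ln, Un = min_bounds(np.cos(4 * np.pi * x) + x, tau=100.0)
print(Ln, Un)                                      # a bracket around min f, about -0.75
```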
15 Possible optimal solution set. Definition (Optimal solution set for an input function $f$):
$$X := \{t \in [0,1] : f(t) \le f(x),\ \forall x \in [0,1]\}.$$
Step 1: find all intervals that may contain optimal solutions. We have $L_n \le l^i_n \le f^i_{nl}(x) \le f(x)$, and if $l^i_n \le U_n$, there exist some $x^*$ in $[x_i, x_{i+1}]$ such that $L_n \le l^i_n \le f^i_{nl}(x^*) \le U_n$; these $x^*$ are the possible optimal solutions. Define the index set $J_n := \{j : l^j_n \le U_n\}$.
16 Possible optimal solution set, cont'd. Step 2: find all possible optimal solutions. Collect all the possible points on $[x_j, x_{j+1}]$, $j \in J_n$, namely $\{x : f^j_{nl}(x) \le U_n\}$. Then the possible solution set on $[0,1]$ is
$$X_n := \bigcup_{j \in J_n} \{x : f^j_{nl}(x) \le U_n\} \supseteq X.$$
17 Estimate the error and compute the volume of the possible solution set. The estimated minimum value is chosen as $\min_{0\le x\le 1} A_n(f)(x)$ (i.e., the upper bound $U_n$). The error of $\min_{0\le x\le 1} A_n(f)(x)$ is
$$\mathrm{err}_n(f) := U_n - \min_{0\le x\le 1} f(x) \le U_n - L_n.$$
The possible optimal solution set is a collection of intervals; its volume is the sum of the lengths of these intervals, i.e.,
$$\mathrm{Vol}(X_n) = \sum_{j \in J_n} \mathrm{Vol}\{x : f^j_{nl}(x) \le U_n\}.$$
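The talk computes each sub-level interval $\{x : f^j_{nl}(x) \le U_n\}$ in closed form from the quadratic. The sketch below is a cruder numerical stand-in that samples each quadratic lower bound on a fine grid and measures where it stays below $U_n$, which is enough to see $\mathrm{Vol}(X_n)$ shrink as $n$ grows; all names are illustrative.

```python
import numpy as np

def solution_set_volume(f, n, tau, grid=400):
    """Approximate Vol(X_n): the total length where the piecewise-quadratic
    lower bound on f does not exceed U_n = min_i f(x_i)."""
    x = np.linspace(0.0, 1.0, n)
    y = f(x)
    Ft = np.max(np.abs((n - 1) * np.diff(y) - (y[-1] - y[0])))
    Cn = 2 * (n - 1) / (2 * (n - 1) - tau)
    fpp = tau * Cn * Ft                      # data-based bound on ||f''||_inf
    Un = y.min()
    vol = 0.0
    for i in range(n - 1):                   # intervals with l_n^i > U_n contribute nothing
        t = np.linspace(x[i], x[i + 1], grid)
        spline = y[i] + (n - 1) * (y[i + 1] - y[i]) * (t - x[i])
        lower = spline - 0.5 * fpp * (t - x[i]) * (x[i + 1] - t)
        vol += np.mean(lower <= Un) * (x[i + 1] - x[i])
    return vol

for n in (65, 257, 1025):
    print(n, solution_set_volume(lambda t: (t - 0.37) ** 2, n, tau=50.0))
```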
18 Stopping condition. The algorithm is guaranteed to satisfy either the error tolerance $\varepsilon$ or the $X$ tolerance $\delta$. Stopping condition:
$$U_n - L_n \le \varepsilon \quad \text{or} \quad \mathrm{Vol}(X_n) \le \delta.$$
20 Algorithm: Adaptive Univariate Function Minimization, I. Let the sequences of algorithms $\{A_n\}_{n\in I}$, $\{\widetilde{F}_n\}_{n\in I}$, $\{F_n\}_{n\in I}$ be as described above. Let $\tau \ge 2$ be the cone constant. Set $k = 1$ and $n_1 = \lceil(\tau+1)/2\rceil + 1$. For any error tolerance $\varepsilon$, $X$ tolerance $\delta$, and input function $f$, do the following:
Stage 1. Estimate $\|f' - f(1) + f(0)\|_\infty$ and bound $\|f''\|_\infty$: compute $\widetilde{F}_{n_k}(f)$ and $F_{n_k}(f)$.
Stage 2. Check the necessary condition for $f \in \mathcal{C}_\tau$: compute
$$\tau_{\min,n_k} = \frac{F_{n_k}(f)}{\widetilde{F}_{n_k}(f) + F_{n_k}(f)/(2n_k - 2)}.$$
21 Algorithm: Adaptive Univariate Function Minimization, II. If $\tau \ge \tau_{\min,n_k}$, go to Stage 3. Otherwise, set $\tau = 2\tau_{\min,n_k}$. If $n_k \ge (\tau+1)/2$, go to Stage 3; otherwise, choose
$$n_{k+1} = 1 + (n_k - 1)\Bigl\lceil \frac{\tau+1}{2n_k - 2} \Bigr\rceil$$
and go to Stage 3.
Stage 3. Check for convergence: check whether $n_k$ is large enough to satisfy either the error tolerance or the $X$ tolerance, i.e.,
(a) $U_{n_k} - L_{n_k} \le \varepsilon$ or (b) $\mathrm{Vol}(X_{n_k}) \le \delta$.
22 Algorithm: Adaptive Univariate Function Minimization, III. If either condition holds, return $\min_{1\le i\le n_k} f(x_i)$ and terminate the algorithm. If not, choose
$$n_{k+1} = 1 + 2(n_k - 1)$$
and go to Stage 1.
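Below is a hedged Python sketch of the whole adaptive loop (Stages 1-3). It is not the GAIL implementation (which is in MATLAB): the X-tolerance/volume test is omitted, the sketch simply restarts the loop after enlarging $n_k$, and the numerical guards are implementation details.

```python
import numpy as np

def adaptive_univariate_min(f, eps=1e-8, tau=100.0, n_max=10**7):
    """Sketch of the adaptive minimization algorithm: returns (U_n, n, tau),
    where U_n estimates min f on [0, 1] to within eps when f is in C_tau."""
    n = int(np.ceil((tau + 1) / 2)) + 1
    while True:
        x = np.linspace(0.0, 1.0, n)
        y = f(x)
        # Stage 1: estimate ||f' - f(1) + f(0)||_inf and bound ||f''||_inf.
        Ft = np.max(np.abs((n - 1) * np.diff(y) - (y[-1] - y[0])))
        Fl = (n - 1) ** 2 * np.max(np.abs(y[:-2] - 2 * y[1:-1] + y[2:]))
        # Stage 2: necessary condition for f in C_tau; inflate tau if it fails.
        tau_min = Fl / max(Ft + Fl / (2 * n - 2), 1e-300)
        if tau < tau_min:
            tau = 2 * tau_min
            if n < (tau + 1) / 2:
                n = 1 + (n - 1) * int(np.ceil((tau + 1) / (2 * n - 2)))
                continue
        # Stage 3: convergence check via L_n <= min f <= U_n
        # (assumes 2(n-1) > tau so that C_n > 0).
        Cn = 2 * (n - 1) / (2 * (n - 1) - tau)
        fpp = max(tau * Cn * Ft, 1e-300)
        df = np.diff(y)
        m = np.maximum(np.minimum(np.abs(2 * (n - 1) ** 2 * df / fpp), 1.0), 1e-300)
        dip = np.where(df != 0.0, 0.25 * np.abs(df) * (m + 1.0 / m),
                       fpp / (8 * (n - 1) ** 2))
        Ln, Un = np.min(0.5 * (y[:-1] + y[1:]) - dip), np.min(y)
        if Un - Ln <= eps:
            return Un, n, tau
        if 1 + 2 * (n - 1) > n_max:          # cost budget reached: warn and stop
            print("warning: cost budget N_max reached")
            return Un, n, tau
        n = 1 + 2 * (n - 1)

print(adaptive_univariate_min(lambda x: np.cos(4 * np.pi * x) + x, eps=1e-6))
```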
23 Theorem. For any $n$, $L_n \le \min_{0\le x\le 1} f(x) \le U_n$ and $X_n \supseteq X$. Moreover, given the error tolerance $\varepsilon$ and the $X$ tolerance $\delta$, there exists an $N$ such that for any $n_k > N$, either $\mathrm{err}_{n_k}(f) = U_{n_k} - \min_{0\le x\le 1} f(x) \le \varepsilon$ or $\mathrm{Vol}(X_{n_k}) \le \delta$ is satisfied. Proof: see the thesis.
24 A family of bump test functions:
$$f(x) = \begin{cases} \dfrac{1}{2a^2}\bigl[-4a^2 - (x-z)^2 - (x-z-a)\,|x-z-a| + (x-z+a)\,|x-z+a|\bigr], & z-2a \le x \le z+2a,\\[4pt] 0, & \text{otherwise}, \end{cases}$$
with $\log_{10}(a) \sim \mathcal{U}[-4,-1]$ and $z \sim \mathcal{U}[2a, 1-2a]$. The minimum value of this family of functions is $-1$, attained at $x = z$. It follows that $\|f' - f(1) + f(0)\|_\infty = 1/a$ and $\|f''\|_\infty = 1/a^2$.
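A sketch of this family in Python (the minus signs matter: the bump dips down to $-1$ at $x = z$ and vanishes outside $[z-2a, z+2a]$); `bump` is an illustrative name, not part of the experiment code.

```python
import numpy as np

def bump(x, a, z):
    """Bump test function: piecewise quadratic, supported on [z-2a, z+2a],
    minimum value -1 attained at x = z, ||f''||_inf = 1/a^2."""
    u = np.asarray(x, dtype=float) - z
    core = (-4 * a**2 - u**2 - (u - a) * np.abs(u - a)
            + (u + a) * np.abs(u + a)) / (2 * a**2)
    return np.where(np.abs(u) <= 2 * a, core, 0.0)

# one random instance, drawn as in the experiments
rng = np.random.default_rng(0)
a = 10.0 ** rng.uniform(-4, -1)
z = rng.uniform(2 * a, 1 - 2 * a)
print(bump(z, a, z))          # exactly -1 at the minimizer
```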
25 An example of the bump test functions. For instance, let $a = 0.03$ and $z = 0.4$; the resulting $f$ is depicted below. Figure: One example of the bump test functions.
26 Experiment with $\varepsilon = 10^{-8}$ and $\delta = 0$: random test functions; initial $\tau$ values of 10, 100, 1000; a cost budget of $N_{\max} = 10^7$. Success: the exact and approximate minimum values agree to within $\varepsilon$. Warning: the proposed $n_{k+1}$ exceeds $N_{\max}$; the algorithm returns a warning and falls back to the previous $n_k$.
27 Experiment with $\varepsilon = 10^{-8}$ and $\delta = 0$: results.

Table: The empirical success rates with $\varepsilon = 10^{-8}$ and $\delta = 0$.

tau    Prob(f in C_tau)   Success, no warning   Success, warning   Failure, no warning   Failure, warning
10     20.58%             20.58%                0.00%              79.42%                0.00%
100    52.79%             52.77%                0.02%              47.21%                0.00%
1000   85.62%             80.96%                2.46%              14.38%                2.20%

The algorithm was successful for almost all $f$ that finally lie inside $\mathcal{C}_\tau$; it diverged for a small percentage of functions lying inside the cone.
28 Experiment with $\varepsilon = 0$ and $\delta = 10^{-6}$. Success: the exact optimal solution is contained in the possible solution set, whose volume is less than $\delta$.

Table: The empirical success rates with $\varepsilon = 0$ and $\delta = 10^{-6}$.

tau    Prob(f in C_tau)   Success, no warning   Success, warning   Failure, no warning   Failure, warning
10     20.75%             20.75%                0.00%              79.25%                0.00%
100    52.27%             52.28%                0.00%              47.72%                0.00%
1000   85.47%             85.49%                0.00%              14.51%                0.00%

The algorithm was successful without warning for all $f$ that finally lie inside $\mathcal{C}_\tau$.
29 Experiment with $\varepsilon = 10^{-8}$ and $\delta = 10^{-6}$. Success: the exact and approximate minimum values agree to within $\varepsilon$, or the exact optimal solution is contained in the possible solution set, whose volume is less than $\delta$.

Table: The empirical success rates with $\varepsilon = 10^{-8}$ and $\delta = 10^{-6}$.

tau    Prob(f in C_tau)   Success, no warning   Success, warning   Failure, no warning   Failure, warning
10     21.21%             21.21%                0.00%              78.79%                0.00%
100    53.21%             53.22%                0.00%              46.78%                0.00%
1000   85.66%             85.69%                0.00%              14.31%                0.00%

The algorithm was successful without warning for all $f$ that finally lie inside $\mathcal{C}_\tau$.
30 Functions with two local minimum points. Another family of test functions is defined as
$$f(x) = -5\exp\bigl(-[10(x-a_1)]^2\bigr) - \exp\bigl(-[10(x-a_2)]^2\bigr), \qquad 0 \le x \le 1,$$
with $a_1 \sim \mathcal{U}[0,1]$ and $a_2 \sim \mathcal{U}[0,1]$. $f$ has two local minimum points, at $a_1$ and $a_2$, and attains its global minimum at $x = a_1$.
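A short sketch of this family (illustrative name `two_well`), together with a grid check that the deeper well near $a_1$ is the global one:

```python
import numpy as np

def two_well(x, a1, a2):
    """Two local minima: a deep well (depth about 5) near a1 and a shallow one near a2."""
    return -5 * np.exp(-(10 * (x - a1)) ** 2) - np.exp(-(10 * (x - a2)) ** 2)

x = np.linspace(0.0, 1.0, 2001)
y = two_well(x, 0.3, 0.75)
print(x[np.argmin(y)])        # close to 0.3, the global minimizer
```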
31 One example of functions with two local minimum points. For instance, let $a_1 = 0.3$ and $a_2 = 0.75$; then
$$f(x) = -5\exp\bigl(-[10(x-0.3)]^2\bigr) - \exp\bigl(-[10(x-0.75)]^2\bigr), \qquad 0 \le x \le 1,$$
depicted in the figure below. Figure: One example of functions with two local minimum points.
32 Experiment using our algorithm compared to fminbnd: random test functions; $\varepsilon = 0$ and $\delta = 10^{-2}, 10^{-4}, 10^{-7}$; a cost budget of $N_{\max} = 10^7$. Success: the exact optimal solution is contained in the possible solution set, whose volume is less than $\delta$. Success for fminbnd: the difference between its solution and the exact solution is less than $\delta$.
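For readers without MATLAB, SciPy's `fminbound` (a Brent-type bounded minimizer comparable to fminbnd) makes the failure mode easy to reproduce: started on the whole interval it can converge to the shallow local minimizer, which is exactly what the success rates in the next table quantify. This is a stand-in sketch, not the experiment from the talk, and the 1e-2 miss threshold is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.optimize import fminbound

def two_well(x, a1, a2):
    return -5 * np.exp(-(10 * (x - a1)) ** 2) - np.exp(-(10 * (x - a2)) ** 2)

rng = np.random.default_rng(1)
misses = 0
for _ in range(200):                       # small re-run of the comparison idea
    a1, a2 = rng.uniform(0.0, 1.0, size=2)
    x_hat = fminbound(lambda x: two_well(x, a1, a2), 0.0, 1.0)
    if abs(x_hat - a1) > 1e-2:             # did fminbound miss the global minimizer?
        misses += 1
print(misses, "misses out of 200")
```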
33 Experiment using our algorithm compared to fminbnd: results. In this experiment, our algorithm performed much better than fminbnd at solving the global minimization problem.

Table: Success rates of our algorithm compared to fminbnd.

delta     Our algorithm: success, no warning   Our algorithm: success, warning   fminbnd: success
10^-2     76.91%                               0.00%                             0.03%
10^-4     52.95%                               0.00%                             0.00%
10^-7     0.00%                                40.58%                            0.00%
34 Conclusions. Breakthroughs: solving the global minimization problem; guaranteeing either the error tolerance $\varepsilon$ or the $X$ tolerance $\delta$. Restrictions: the input function $f$ should have a finite second-order derivative on $[0,1]$; the algorithm needs a huge number of data points when solving problems with flat functions.
35 Future work. Interval extension. Computational cost.
36 Acknowledgement. Professor Fred J. Hickernell; Professor Sou-Cheng Choi; GAIL team members: Yuhan Ding, Yizhi Zhang, and Lan Jiang.
37 Thank you! Any questions?
38 References
[1] N. Clancy, Y. Ding, C. Hamilton, F. J. Hickernell, and Y. Zhang. The cost of deterministic, adaptive, automatic algorithms: Cones, not balls. Journal of Complexity, 30:21-45, 2014.
[2] J. K. Hunter and B. Nachtergaele. Applied Analysis. World Scientific, 2001.