Complexity of Approximating Functions on Real-Life Computers

International Journal of Foundations of Computer Science, Vol. 26, No. 1 (2015) 153-157
(c) World Scientific Publishing Company. DOI: 10.1142/S0129054115500082

Complexity of Approximating Functions on Real-Life Computers

Boris Ryabko
Siberian State University of Telecommunications and Information Sciences
Institute of Computational Technology of Siberian Branch of Russian Academy of Science, Novosibirsk, Russia
boris@ryabko.net

Received 10 May 2014
Accepted 12 August 2014
Communicated by Cristian S. Calude

We address the problem of estimating the computation time necessary to approximate a function on a real computer. Our approach makes it possible to estimate the minimal time needed to compute a function up to a specified level of error. This can be explained by the following example. Consider the space of functions defined on [0,1] whose absolute value and first derivative are bounded by C. In a certain sense, for almost every such function, approximating it on its domain using an Intel x86 computer, with an error not greater than ε, takes at least k(C,ε) seconds. Here we show how to find k(C,ε).

Keywords: Computational complexity; performance evaluation; ε-entropy; information theory.

1. Introduction

The problem of estimating the difficulty of computing functions has attracted the attention of mathematicians for a long time; see, for example, [2, 7]. However, despite many efforts, many important problems remain unsolved. One practically important problem can be described as follows. A function and a computer are given. What is the time necessary for approximating the function on its domain with an error of at most ε on this computer? A more specific example: what kind of computer is required in order to calculate the change in temperature of the atmosphere over the next three days, with an error of at most 1 degree, if the computation time should not be greater than one hour?
In this paper we obtain some results which can be seen as a step towards solving such problems. Namely, we show that if one wants to approximate functions over a compact set F with an error not greater than ε on a computer comp, the computation time t for almost all functions from F satisfies the inequality

    t ≥ H_ε(F)/C(comp),
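As a back-of-envelope sketch of this inequality (the numeric values below are hypothetical, not measurements for any real processor), the minimal time follows directly once the ε-entropy and the capacity are known:

```python
def min_computation_time(h_eps_bits: float, capacity_bits_per_sec: float) -> float:
    """Lower bound on computation time: t >= H_eps(F) / C(comp).

    h_eps_bits: epsilon-entropy of the function class, in bits.
    capacity_bits_per_sec: computer capacity, in bits per second.
    """
    return h_eps_bits / capacity_bits_per_sec

# Hypothetical example: a class with H_eps(F) = 10**6 bits on a machine
# of capacity 10**9 bits/second cannot be handled faster than 1 ms.
print(min_computation_time(1e6, 1e9))  # -> 0.001
```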

where H_ε(F) is the ε-entropy of F and C(comp) is the capacity of the computer. Definitions of these quantities can be found in [3, 6] and [4, 5], respectively, and are briefly introduced below.

2. Preliminaries

2.1. ε-entropy

First we recall the definition of ε-entropy [3, 6]. Let (X, ρ) be a metric space, let F ⊂ X and ε > 0. A set Γ is called an ε-net (of F) if for any x ∈ F there exists a y ∈ Γ such that ρ(x, y) ≤ ε. The ε-entropy H_ε(F) is defined as

    H_ε(F) = log N_ε(F),   N_ε(F) = inf{|Γ| : Γ ∈ Γ̂_ε},   (1)

where Γ̂_ε is the set of all ε-nets of F and log x ≡ log_2 x.

H_ε(F) is also called the relative ε-entropy because it depends on the space (X, ρ) in which the set F is located, that is, on the metric extension of F. The infimum of H_ε(F) over all metric extensions of F is called the absolute ε-entropy of F. In order to avoid technical complications we consider only the ε-entropy (1). Informally, if one wants to approximate functions from the set F with an error not greater than some fixed ε, one cannot use fewer than 2^{H_ε(F)} approximating functions.

Let us consider an example. Let θ, c and t be positive. We denote by L_t(θ) the set of real functions f defined on [0, t], t > 0, which satisfy |f(x)| ≤ c and |f(x) − f(y)| ≤ θ|x − y| for f ∈ L_t(θ), x, y ∈ [0, t], and let ρ(f, g) = sup_{x∈[0,t]} |f(x) − g(x)|. Then

    H_ε(L_t(θ)) = tθ/ε + O(log(1/ε)),

where O(·) is taken as ε goes to 0; see [3]. For the similar set of functions defined on a cube in multidimensional space, the entropy grows as const · (1/ε)^n, where n is the dimension. It is worth noting that estimates of the ε-entropy are known for many sets of functions [3].

2.2. Computer capacity

Let us now briefly introduce the definition of computer capacity recently suggested in [4, 5]. By definition, a computer consists of a set of instructions I and an accessible memory M. Any instruction x ∈ I contains not only its name (say, JUMP), but also memory addresses and indexes of registers.
For example, all instructions JUMP which deal with different memory addresses are considered different elements of the set of instructions I. We also suppose that there are instructions for reading data from external devices. By definition, a computer task P is a sequence of instructions P = x_1 x_2 x_3 ..., where x_i ∈ I. It is not assumed that all possible sequences of instructions are allowed: in principle, some sequences of instructions may be prohibited, i.e. sequences of instructions may be required to satisfy certain rules or limitations. We denote the set of all

allowable sequences of instructions by S_C, and consider any two different sequences of computer instructions from S_C as two different computer tasks. Thus, any computer task can be represented as a sequence of instructions from S_C. Moreover, the converse is also true: any sequence of instructions from S_C can be considered as a computer task.

Let us denote the execution time of an instruction x by τ(x). Then the execution time τ(X) of a sequence of instructions X = x_1 x_2 x_3 ... x_t is given by τ(X) = Σ_{i=1}^t τ(x_i). The key observation is as follows: the total number of computer tasks that can be executed in time T is equal to

    N(T) = |{X : τ(X) = T}|.   (2)

We define the computer capacity C_T(I) and the limit computer capacity C(I) as follows:

    C_T(I) = log N(T) / T,   (3)

    C(I) = limsup_{T→∞} log N(T) / T,   (4)

where N(T) is defined in (2). In [4, 5] this notation is extended to the case of multicore computers with different kinds of memory. If we assume that all execution times τ(x), x ∈ I, are integers and the greatest common divisor of τ(x), x ∈ I, equals 1, then limsup equals lim in the definition of the computer capacity (4):

    C(I) = lim_{T→∞} log N(T) / T,   (5)

see [4]. (This assumption is valid for real computers if the time unit is determined by the so-called clock rate [4, 5].) It is worth noting that a simple method of estimating the computer capacity was suggested in [4, 5], and the capacities of many real computers have been estimated; see [1].

3. The Main Result

Let there be a computer (I) which can calculate any function from a set F in time T with an error not greater than ε > 0. The main observation is as follows: the total number of computer tasks executed in time T, N(T), cannot be less than the number of elements in the minimal ε-net of the set F, i.e.

    N(T) ≥ 2^{H_ε(F)}.   (6)
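Definitions (2)-(5) can be made concrete with a small sketch. For a toy instruction set with integer execution times and no prohibited sequences (the set below is invented for illustration, not taken from any real processor), N(T) satisfies the recursion N(t) = Σ_{x∈I} N(t − τ(x)):

```python
from math import log2

def count_tasks(tau, T):
    """N(t) for 0 <= t <= T: the number of instruction sequences whose
    total execution time is exactly t, via N(t) = sum_x N(t - tau(x)),
    N(0) = 1 (the empty task). Assumes integer execution times and that
    all instruction sequences are allowed."""
    N = [0] * (T + 1)
    N[0] = 1
    for t in range(1, T + 1):
        N[t] = sum(N[t - d] for d in tau if d <= t)
    return N

# Toy instruction set I: two instructions taking 1 time unit, one taking 2.
tau = [1, 1, 2]
T = 30
N = count_tasks(tau, T)
print(N[T])            # number of tasks executable in exactly T time units
print(log2(N[T]) / T)  # C_T(I) = log N(T) / T, approaching C(I) as T grows
```

For this toy set the recursion is N(t) = 2N(t−1) + N(t−2), whose growth rate is the largest root of x² = 2x + 1, so C(I) = log_2(1 + √2) ≈ 1.27 bits per time unit.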

From this observation we obtain the following statement:

Theorem 1. Suppose that a computer (I) can calculate every function from a set F in time T with an error not greater than ε > 0. Then

    T ≥ H_ε(F)/C_T(I),   (7)

where H_ε(F) is the ε-entropy of the set F and C_T(I) is the computer capacity (3).

This statement immediately follows from (6) and the definitions (1) and (3). In a certain sense, Theorem 1 estimates the computation time in the worst case. Theorem 2 below shows that the estimate (7) is valid for almost all functions.

First we give some definitions. Let τ_ε(f) be the time of computation of a function f with an error not greater than ε > 0 (on a computer I). From Theorem 1 we can see that there exists at least one function for which the time of calculation is not less than H_ε(F)/C_T(I). Taking this observation into account, we define

    t̂ = H_ε(F)/C(I)   (8)

and consider the set of functions for which the computation time is significantly less than t̂. This set is defined as follows:

    F_λ = {f : f ∈ F, τ_ε(f) ≤ λt̂},   (9)

where λ ∈ (0,1) is a constant. For example, if λ = 1/2, the time of calculation of functions from F_λ is not greater than half of t̂. In a certain sense, this set is quite small. More precisely, the following statement is true:

Theorem 2. Let F be a set of functions and let I be a computer that can calculate any function from F in time T with an error not greater than ε > 0. Suppose that Eq. (5) is valid for this computer. If H_ε(F) is large (H_ε(F) goes to infinity), then the proportion of functions for which the computation time is less than λt̂, λ ∈ (0,1), does not exceed 2^{−(1−λ)H_ε(F)}:

    2^{H_ε(F_λ)} / 2^{H_ε(F)} ≤ 2^{−(1−λ)H_ε(F)}.   (10)

Proof. From (5) we can see that for any δ > 0 there exists T_δ such that

    1 − δ < C(I)/C_T(I) < 1 + δ,   (11)

if T > T_δ. Let us consider a set of functions F such that H_ε(F) is large enough to have λt̂ > T_δ.
The following chain of inequalities and equalities concludes the proof:

    2^{H_ε(F_λ)} / 2^{H_ε(F)} ≤ 2^{C_T(I)·λt̂} / 2^{H_ε(F)} = 2^{C_T(I)·λ·(H_ε(F)/C(I))} / 2^{H_ε(F)} ≤ 2^{−(1−λ(1+δ))H_ε(F)},   (12)

where the first inequality follows from the definition of the set F_λ (9) together with (2) and (3), the equality from the definition (8), and the second inequality from (11). Taking into account that these inequalities hold for every δ > 0, we obtain (10).

References

[1] A. Fionov, Y. Polyakov and B. Ryabko, Application of computer capacity to evaluation of Intel x86 processors, Proc. of the 2011 2nd Int. Congress on Computer Applications and Computational Science (CACS 2011), Advances in Intelligent and Soft Computing 145 (2011), pp. 99-104.
[2] A. Kolmogorov, Various approaches to estimating the difficulty of approximate definition and computation of functions, Proc. Int. Congress of Mathematicians 14 (1963), pp. 369-376.
[3] A. Kolmogorov and V. M. Tikhomirov, ε-entropy and ε-capacity of sets in function spaces, Amer. Math. Soc. Transl. Ser. 2 17 (1961), pp. 277-364.
[4] B. Ryabko, An information-theoretic approach to estimate the capacity of processing units, Performance Evaluation 69 (2012), pp. 267-273.
[5] B. Ryabko, On the efficiency and capacity of computers, Applied Mathematics Letters 25 (2012), pp. 398-400.
[6] V. M. Tikhomirov, Epsilon-entropy, Encyclopedia of Mathematics, http://www.encyclopediaofmath.org/index.php/epsilon-entropy.
[7] J. F. Traub and H. Wozniakowski, A General Theory of Optimal Algorithms (Academic Press, 1980).