Near-Optimal conversion of Hardness into Pseudo-Randomness


Near-Optimal Conversion of Hardness into Pseudo-Randomness

Russell Impagliazzo, Computer Science and Engineering, UC San Diego, 9500 Gilman Drive, La Jolla, CA
Ronen Shaltiel, Department of Computer Science, Hebrew University, Jerusalem, Israel
Avi Wigderson, Department of Computer Science, Hebrew University, Jerusalem, Israel

February 11, 2003

Abstract

Various efforts ([?, ?, ?]) have been made in recent years to derandomize probabilistic algorithms using the complexity-theoretic assumption that there exists a problem in E = dtime(2^{O(n)}) that requires circuits of size s(n) (for some function s). These results are based on the NW-generator [?]. For the strong lower bound s(n) = 2^{ɛn}, [?], and later [?], get the optimal derandomization, P = BPP. However, for weaker lower bound functions s(n), these constructions fall far short of the natural conjecture for optimal derandomization, namely that bptime(t) ⊆ dtime(2^{O(s^{-1}(t))}). The gap in these constructions is due to an inherent limitation on efficiency in NW-style pseudo-random generators.

In this paper we are able to get derandomization in almost optimal time using any lower bound s(n). We do this by using the NW-generator in a new, more sophisticated way. We view any failure of the generator as a reduction from the given hard function to its restrictions on smaller input sizes. Thus, either the original construction works (almost) optimally, or one of the restricted functions is (almost) as hard as the original. Any such restriction can then be plugged into the NW-generator recursively. This process generates many candidate generators, all (almost) optimal, and at least one is guaranteed to be good. Then, to perform the approximation of the acceptance probability of the given circuit (which is the key to derandomization), we use ideas from [?]: we run a tournament between the candidate generators which yields an accurate estimate.

Following Trevisan, we explore information-theoretic analogs of our new construction. Trevisan [?] (and then [?]) used the NW-generator to construct efficient extractors. However, the inherent limitation of the NW-generator mentioned above makes the extra randomness required by that extractor suboptimal (for certain parameters). Applying our construction, we show how to use a weak random source with an optimal amount of extra randomness for the (simpler than extraction) task of estimating the probability of any event (which is given by an oracle).

1 Introduction

This paper addresses the question of hardness versus randomness trade-offs. Such results show that probabilistic algorithms can be efficiently simulated deterministically under some complexity-theoretic assumptions. A number of such results are known under a worst-case circuit complexity assumption:

The s-worst-case circuit complexity assumption: There exists a function f = {f_n} which is computable in time 2^{O(n)}, yet for all n, circuits of size s(n) cannot compute f_n.

The conclusion we are after is of the following type: any probabilistic algorithm that runs in time t can be simulated deterministically in time T(t). Such results were previously proven by [?, ?, ?], and our contribution is a construction that gives a better tradeoff between the simulation quality T and the assumption strength s(n).

Result Comparison: All results assume the s-worst-case complexity hardness assumption.

Reference    | Conclusion for arbitrary s                            | Conclusion for s(n) = 2^{n^ɛ}
[?]          | bptime(t) ⊆ dtime(2^{O((s^{-1}(t))^2 log t)})         | bptime(t) ⊆ dtime(2^{O(log^{2/ɛ+1} t)})
[?]          | bptime(t) ⊆ dtime(2^{O((s^{-1}(t))^4 / log^3 t)})     | bptime(t) ⊆ dtime(2^{O(log^{4/ɛ-3} t)})
[?]^a        | bptime(t) ⊆ dtime(2^{O((s^{-1}(t))^2 / log t)})       | bptime(t) ⊆ dtime(2^{O(log^{2/ɛ-1} t)})
this paper^b | bptime(t) ⊆ dtime(2^{O(s^{-1}(t^{O(log log t)}))})    | bptime(t) ⊆ dtime(2^{O(log^{1/ɛ} t · log log log t)})
optimal^c    | bptime(t) ⊆ dtime(2^{O(s^{-1}(t))})                   | bptime(t) ⊆ dtime(2^{O(log^{1/ɛ} t)})

^a Impagliazzo and Wigderson state their result only for s(n) = 2^{Ω(n)}, and their result puts BPP in P for such a lower bound.
^b Our result is a bit better, but we cannot state it in this notation.
^c The best we can hope for with current techniques.

1.1 Background

Following [?], the task of derandomizing probabilistic algorithms reduces to the problem of deterministically approximating the fraction of the inputs which a given circuit accepts. We call such machines approximators, and our task becomes constructing efficient (in terms of running time) approximators.
Previous results constructed efficient approximators by constructing pseudo-random generators.¹ Indeed, with a pseudo-random generator in hand, one can easily construct an efficient approximator: simply run the generator over all possible seeds to construct a small discrepancy set,² and then run the circuit over all the inputs in the discrepancy set. It is clear from this discussion that the main cost of this process comes from constructing the discrepancy set, which is of size exponential in the generator's seed size.³

¹ Informally, a pseudo-random generator is a machine that transforms a short seed of truly random bits into a long string of bits that appear random to small circuits.
² Informally, a discrepancy set is a small set such that no small circuit can distinguish between an element chosen uniformly from the set and a truly uniform element.
³ Another observation is that no harm is done in allowing such generators to run in time exponential in the seed length.
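The seed-enumeration step described above can be made concrete. Below is a minimal sketch of an approximator built from a generator; the particular `gen` and `circ` are toy stand-ins of our own (a real generator would stretch the seed pseudo-randomly, and a real circuit would be a boolean circuit rather than a Python function):

```python
import itertools

def approximate_acceptance(circuit, generator, seed_len):
    """Estimate Pr[circuit(w) = 1] by running the circuit on the
    generator's output for every possible seed; the resulting
    multi-set of outputs plays the role of the discrepancy set."""
    outputs = [generator(seed) for seed in itertools.product((0, 1), repeat=seed_len)]
    return sum(circuit(w) for w in outputs) / len(outputs)

# Toy stand-ins (not a real pseudo-random generator): the "generator"
# stretches a 3-bit seed to 6 bits by repetition, and the "circuit"
# reads the first output bit.
gen = lambda seed: seed + seed
circ = lambda w: w[0]
print(approximate_acceptance(circ, gen, 3))  # 0.5 for this toy pair
```

The running time is 2^{seed_len} circuit evaluations, which is why the seed size dominates the cost of the whole derandomization.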

Yao [?] used the Blum-Micali generator, on a cryptographic assumption much stronger than the corresponding worst-case circuit complexity assumption, to give the first non-trivial generator for derandomization. Nisan and Wigderson [?] weakened Yao's assumption to the following distributional circuit complexity assumption, which is still seemingly stronger than the worst-case circuit complexity assumptions above:

The v-distributional complexity hardness assumption: There exists a function h = {h_n} which is computable in time 2^{O(n)}, yet for all n, every circuit of size v(n) computes h_n correctly on at most a 1/2 + 1/v(n) fraction of the inputs.

Previous results using worst-case assumptions ([?, ?, ?]) focused on hardness amplification, that is, showing that the v-distributional complexity hardness assumption follows from the s-worst-case hardness assumption. Recently, [?] came up with an almost optimal hardness amplification scheme. Informally speaking, they show that given a function f : {0,1}^n → {0,1} that cannot be computed by circuits of size s, one can construct a function h : {0,1}^{O(n)} → {0,1} for which every circuit of size s^{Ω(1)} computes h correctly on at most a 1/2 + 1/s^{Ω(1)} fraction of the inputs. With h in hand they activate the NW-generator and build an efficient approximator, especially when s(n) is exponential. However, there are some inherent limits to the NW-generator that make these derandomizations sub-optimal for functions s(n) which are not exponential.

1.2 Our result

The main point of the previous section is that, having pushed the hardness amplification phase to the limit, the remaining inefficiency is caused by the NW-generator. When assuming the s-worst-case hardness assumption, one may hope to get a generator G : {0,1}^{O(n)} → {0,1}^{s(n)^{Ω(1)}} that fools circuits of size s(n)^{Ω(1)}. However, the best result using the NW-generator takes a larger seed, NW : {0,1}^{O(n²/log s(n))} → {0,1}^{s(n)^{Ω(1)}}, for the same task. Recall that the parameter that dominates the time of the derandomization is the seed size.
In this paper we are able to minimize the seed size to the optimal m = O(n). However, we do lose something. The first loss is that we are only able to fool circuits of size t = s(n)^{Ω(1/log log n)} rather than s(n)^{Ω(1)}.⁴ The second loss is that rather than constructing a discrepancy set, we construct 2^{O(n)} sets where at least one of them is a discrepancy set. We don't know how to find the right discrepancy set in this huge collection of sets. However, we will show that this collection is still useful for constructing an approximator. To explain the inherent inefficiency in the NW construction, and how we overcome it, we briefly describe it.

⁴ This means that in order to derandomize a probabilistic algorithm that runs in time t, we need n ≈ s^{-1}(t^{log log n}) rather than n ≈ s^{-1}(t^{O(1)}).

The NW-generator: As mentioned before, to use the NW-generator one needs a hard function. Using the optimal hardness amplification of [?], we may assume that this function h is s(n)^{Ω(1)}-distributional complexity hard. The NW-generator constructs a design, that is, t sets S_1, .., S_t of size n in {1, .., m}, where the size of the intersection of any two sets is at most k. It is an important observation, and not hard to prove, that this requirement forces m = Ω(n²/k). The NW-generator is a function NW_h : {0,1}^m → {0,1}^t. The i-th bit in the output of NW_h is simply h(x_{S_i}), where x_{S_i} stands for the n bits of x whose indices are in S_i. The main lemma of [?] says that if the generator fails then h is easy. More precisely, the statement is that if the set {NW_h(x) | x ∈ {0,1}^m} is not a discrepancy set for circuits of size t, then there exists a circuit of size roughly t·2^k which computes h. So the size of the circuits which the generator fools is t = s/2^k. Since we want t = s^{Ω(1)}, we choose k = Θ(log s). As mentioned before, small intersection size forces a large seed, and one ends up with m = Ω(n²/log s). On one hand we must have k small, because a factor of 2^k is lost when getting t from s. On the other, small k forces large m, since m = Ω(n²/k). One may hope to build designs with smaller intersections by allowing a more general concept of designs which still accommodates the NW main lemma. This possibility is ruled out in [?]. The only option left is to reduce the factor of 2^k lost in the circuit complexity, allowing one to pick k = ω(log t), and hence decreasing the value of m = Θ(n²/k). This is basically the approach we use here, which leads to some interesting complications.

The new idea: One possible view of the proof of the NW-lemma is that the NW-generator specifies a family of 2^{O(n)} functions over k bits (which are restrictions of h to k bits), such that if the set {NW_h(x) | x ∈ {0,1}^m} is not a discrepancy set for circuits of size t, then one of the specified functions requires circuit size s/poly(t). The former proof used the fact that any function over k bits can be computed by a circuit of size 2^k; choosing t = s/2^k, it concluded that the set above is indeed a discrepancy set for circuits of size t. We replace that argument by considering two cases:

1. All the functions specified by the NW-generator have small (size s/poly(t)) circuits.
In such a case we know that the NW set is a discrepancy set for circuits of size t, and we don't lose the 2^k factor.

2. At least one of the specified functions cannot be computed by a circuit of size s/poly(t). In this case it may be that the NW set is not a discrepancy set. However, we have at hand a function on much fewer bits (k instead of n) than the original hard function that is almost as hard. From the point of view of the hardness-to-input-size ratio, this function is harder than the one we started with. We can plug it into the NW-generator and enjoy the better lower bound. This approach can be used recursively until we end up with parameters that the former proof can handle.

The construction: We don't know which of the two cases happened, and even worse, in case 2 we don't know which of the 2^{O(n)} specified functions is the hard function. Thus, we try all possibilities: we construct sets (that are candidates to be discrepancy sets) from the initial function and all its specified functions. We continue this recursively until we are sure that one of the functions we consider is hard but all its specified functions are easy. This can be shown to happen after at most log log n levels, and at this point we have 2^{O(n)} candidates. This process involves some loss: at each level we lose a poly(t) factor from s, and so we end up choosing t = s^{Θ(1/log log n)}.

Our final move is approximating a given circuit C having 2^{O(n)} candidates where at least one of them is a discrepancy set. To do this we use an idea from [?]. We construct a matrix with an entry for each pair of sets. For each such pair we run C on all possible xors of elements from the two sets, and compute the fraction of inputs accepted by C. Note that if one of the two sets is a discrepancy set, then the set of xors is also a discrepancy set. This means that in the row of the discrepancy set, all entries are good approximations of the correct value, and hence lie in a small interval. For each of the other rows, the entry in the column of the discrepancy set is close to the correct value. This means that any row in which all entries lie in a small interval contains entries which are good approximations of the fraction of inputs accepted by C. Using the above process we complete the proof.

An information-theoretic analog à la Trevisan: Recently, Trevisan [?] used the NW-generator to construct an extractor. An ɛ-extractor of t bits from r bits using m bits is an efficiently computable function Ext : {0,1}^l × {0,1}^m → {0,1}^t, such that for all distributions D on {0,1}^l having min-entropy⁵ r, the distribution obtained by sampling f according to D, sampling x uniformly from {0,1}^m, and computing Ext(f, x) is at most ɛ statistical distance from the uniform distribution on t bits. Trevisan's extractor works by treating f as a function over n = log l bits, amplifying its hardness, and applying NW_f(x). If an event T distinguishes between the output of the extractor and a uniform distribution, then it must do so for many f's. For every f such that NW_f(·) is caught by T, there is a small circuit C which uses T-gates⁶ and computes f (this is the main lemma of the NW-generator). This bounds the number of f's such that NW_f(·) is caught by T, and the proof is done by realizing that high min-entropy says that there are a lot of f's.

We focus our attention on constructing extractors with minimal m. Trevisan's construction suffers from the same inefficiency of the NW-generator which we treat here. Namely, the size of the constructed circuit (t·2^k) must be small compared to the min-entropy r. As explained in the previous paragraphs, the need to decrease k is met by increasing m, and one ends up with m = O(log² l / log r). We may hope that using our technique we could do with the optimal m = O(log l) for any r. This is indeed the case. However, we don't get an extractor, since our construction is not a generator. Instead, we get the information-theoretic analog of an approximator.
This is a deterministic machine that approximates the probability of any given event T : {0,1}^t → {0,1} (which the machine accesses as an oracle), using one sample from a distribution D with min-entropy r. The machine runs in time l^{O(1)}, and t = r^{O(1/log log log l)}. To see the connection to extractors, note that with an extractor in hand one can perform this task by going over all strings in {0,1}^m and using the set {Ext(f, x) | x ∈ {0,1}^m} to approximate the given event T.

2 Definitions and History

2.1 Hard functions

We start by defining hardness in both worst-case complexity and distributional complexity settings.

Definition 1 For a function f : {0,1}^n → {0,1}, we define:

1. S(f) = min{size(C) : circuits C that compute f correctly on every input}
2. SUC_s(f) = max{Pr_{x ∈_R {0,1}^n}(C(x) = f(x)) : circuits C of size s}
3. ADV_s(f) = 2·SUC_s(f) − 1

When invoking the NW-generator against circuits of size t(n), one needs a function h = {h_n} with ADV_{t(n)}(h_n) ≤ 1/t(n). Much work has been done on building such a function from a worst-case circuit lower bound (see for example [?, ?, ?]). In this paper we use the current result by [?], which we state in our notation.

⁵ The min-entropy of a distribution D is min_x log(1/D(x)).
⁶ Note that T is simply a function T : {0,1}^t → {0,1}, and so we can think about it as a gate.
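For a single candidate circuit (rather than the maximum over all circuits of size s in Definition 1), the quantities SUC and ADV are straightforward averages. A minimal sketch, with the truth table and predictor as our own toy encodings:

```python
from itertools import product

def suc(predictor, truth_table, n):
    """Fraction of the 2^n inputs on which `predictor` agrees with the
    function given by `truth_table` (the quantity inside the max in
    the definition of SUC_s, evaluated for one candidate circuit)."""
    agree = sum(predictor(x) == truth_table[x] for x in product((0, 1), repeat=n))
    return agree / 2 ** n

def adv(predictor, truth_table, n):
    """Advantage ADV = 2*SUC - 1 of a single candidate predictor."""
    return 2 * suc(predictor, truth_table, n) - 1

# Toy example: f is XOR of 3 bits; the predictor guesses the first bit.
# A single-bit guess agrees with XOR on exactly half the inputs.
f = {x: x[0] ^ x[1] ^ x[2] for x in product((0, 1), repeat=3)}
print(adv(lambda x: x[0], f, 3))
```

Computing ADV_s itself would require maximizing over all circuits of size s, which is infeasible; this sketch only illustrates the definitions.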

Theorem 1 [?] For every function f : {0,1}^n → {0,1} and ɛ, there exists a function h : {0,1}^{4n} → {0,1} such that, for v = S(f)·(ɛ/n)^{O(1)}:

1. h can be computed in time 2^{O(n)}, given an oracle to f.
2. ADV_v(h) ≤ ɛ

2.2 Generators, Discrepancy sets and Approximators

In this section we define pseudo-random generators, and machines we call approximators. It is convenient to define both using the notion of discrepancy sets.

Definition 2 For a circuit C on t bits define: µ(C) = Pr_{w ∈_R {0,1}^t}(C(w) = 1)

Definition 3 A (t, ɛ)-discrepancy set is a multi-set D ⊆ {0,1}^t such that for all circuits C of size t: |Pr_{w ∈_R D}(C(w) = 1) − µ(C)| ≤ ɛ

In the above definition we did not specify a parameter for the input size of the circuit; as far as we are concerned, a circuit of size t may take t bits as input. We proceed and define pseudo-random generators. For the purpose of derandomizing probabilistic algorithms, generators may be allowed to run in time exponential in their input.

Definition 4 An ɛ-generator G is a family of functions⁷ G_t : {0,1}^{m(t)} → {0,1}^t such that:

1. For all t, the set D = {G_t(x) | x ∈ {0,1}^{m(t)}} is a (t, ɛ)-discrepancy set.
2. G is computable in time 2^{O(m)} (exponential in the size of the input).

The existence of good generators implies a non-trivial deterministic simulation of probabilistic algorithms. However, the proof works by building the following device (which appears implicitly since [?] and is also used implicitly in other efforts to derandomize BPP, such as [?]).

Definition 5 An ɛ-approximator is a deterministic machine that takes as input a circuit C and outputs an approximation to µ(C), that is, a number q such that |µ(C) − q| ≤ ɛ

The following two implications are standard:

Lemma 1 ([?])

1. If there exists an (m, ɛ)-generator then there exists an ɛ-approximator that (on a circuit of size t) runs in time 2^{O(m(t))}·t^{O(1)}.
2. If there exists a 1/10-approximator that runs in time p(t) on circuits of size t, then bptime(t) ⊆ dtime(p(t²)).
Proof: (sketch) Having a generator, one can run the given circuit on all possible outputs of the generator; this is indeed an efficient approximator. Having an approximator, and given a probabilistic algorithm M(x, y) (where x is the input and y is the random string), simply construct the circuit C_x(y) = M(x, y) and approximate its success probability.

As seen from Lemma ??, the task of derandomizing probabilistic algorithms reduces to constructing efficient generators, where efficiency means as small as possible a seed size.

⁷ In the next sections we will have m be a function of n (the input size of the hard function), rather than a function of t.
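The second direction of the proof sketch (approximator implies derandomization) can be sketched as follows; `M` and `exhaustive_approximator` are toy stand-ins of our own, not the paper's construction:

```python
from itertools import product

def derandomize(M, x, approximator):
    """Decide a probabilistic algorithm M(x, y) deterministically:
    build the circuit C_x(y) = M(x, y), approximate its acceptance
    probability, and accept iff the estimate exceeds 1/2."""
    return approximator(lambda y: M(x, y)) > 0.5

# Toy approximator: exhaustive enumeration over 4 random bits (a real
# approximator would use a discrepancy set instead of the whole cube).
def exhaustive_approximator(C, r=4):
    ys = list(product((0, 1), repeat=r))
    return sum(C(y) for y in ys) / len(ys)

# Toy "BPP" algorithm: accepts x iff a majority of x's bits are 1,
# consulting its random bits only when there is a tie.
def M(x, y):
    ones = sum(x)
    if 2 * ones != len(x):
        return int(2 * ones > len(x))
    return y[0]

print(derandomize(M, (1, 1, 0), exhaustive_approximator))  # True
```

Since a 1/10-approximator has error well below the 1/3-vs-2/3 gap of a BPP algorithm, comparing the estimate to 1/2 decides correctly.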

2.3 The NW-generator

In this section we present the NW-generator, its best known consequences for derandomization, and explain its inherent inefficiency when used with a sub-exponential lower bound.

Theorem 2 (Construction of nearly disjoint sets [?]) There exists an algorithm that, given numbers n, m, t such that t = 2^{O(n²/m)}, constructs an (n, m)-design, that is, sets S_1, .., S_t such that:

1. For all 1 ≤ i ≤ t, S_i ⊆ [m] and |S_i| = n.
2. For all 1 ≤ i < j ≤ t, |S_i ∩ S_j| ≤ k = c·n²/m, for some constant c.
3. The running time of the algorithm is exponential in m.

Definition 6 (The NW-generator [?]) Given some function h = {h_n} and n, m, the NW-generator works by building an (n, m)-design S_1, .., S_t. It takes as input m bits and outputs t bits:

NW_h^{n,m}(x) = (h(x_{S_1}), .., h(x_{S_t}))

The thing to do now is prove that if one plugs a hard enough h into the NW-generator, it fools circuits of some size.

Lemma 2 [?] Fix n, m, t, v, ɛ. Let S_1, .., S_t be the (n, m)-design promised by Theorem ??, and let k = c·n²/m be the promised bound on the intersection size. Let h : {0,1}^n → {0,1} be a function such that ADV_v(h) ≤ ɛ·2^{k+1}/v. The set

D = {NW_h^{n,m}(x) | x ∈ {0,1}^m}

is a (t, ɛ)-discrepancy set, with t = min(2^{O(n²/m)}, v/2^{k+1}).

The drawback in Lemma ?? is that t = O(v/2^k): a factor of 2^k is lost when getting t from v. To cope with this, k must be decreased (in particular, k must satisfy 2^k < v). Since k is roughly n²/m, m must be increased to roughly n²/log v, resulting in a generator that takes a large seed for weak lower bounds v, and a non-efficient approximator. (Recall that the running time of an approximator is exponential in the generator's seed size.) Using Theorems ??, ?? and Lemma ?? with the parameters described above, [?] prove the following theorem.

Theorem 3 ([?]) If there exists a function f = {f_n} that is computable in time 2^{O(n)} and for all n, S(f_n) ≥ s(n), then there exists an s^{Ω(1)}-generator G : {0,1}^{O(n²/log s)} → {0,1}^t which fools circuits of size t, for t = s^{Ω(1)}.
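The design-plus-evaluation structure of Definition 6 can be sketched at toy scale. The greedy set collection below is our own stand-in for the algorithm of Theorem 2 (the real construction is more careful and achieves the stated parameters), but the generator evaluation itself follows the definition exactly:

```python
from itertools import combinations

def greedy_design(n, m, k, t):
    """Greedily collect up to t subsets of [m], each of size n, with
    pairwise intersections of size at most k (a toy stand-in for the
    design construction of Theorem 2)."""
    design = []
    for cand in combinations(range(m), n):
        s = set(cand)
        if all(len(s & prev) <= k for prev in design):
            design.append(s)
            if len(design) == t:
                break
    return [sorted(s) for s in design]

def nw_generator(h, design, x):
    """Output bit i is h applied to the bits of the seed x whose
    indices lie in S_i (Definition 6)."""
    return tuple(h(tuple(x[j] for j in S)) for S in design)

# Toy run: h is parity on n = 3 bits, seed length m = 9, intersections <= 1.
h = lambda bits: bits[0] ^ bits[1] ^ bits[2]
design = greedy_design(n=3, m=9, k=1, t=4)
print(design)                                              # [[0,1,2], [0,3,4], [0,5,6], [0,7,8]]
print(nw_generator(h, design, x=(1, 0, 1, 1, 0, 0, 0, 1, 1)))  # (0, 0, 1, 1)
```

Note how each output bit reads only n of the m seed bits, and the small pairwise intersections are what make the hybrid argument of the main lemma work.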
We may still expect to have an optimal generator, that is, G : {0,1}^{O(n)} → {0,1}^{s^{Ω(1)}} which fools circuits of size s^{Ω(1)}. This cannot be achieved by improving the design, as the next lemma shows that the current construction of designs is optimal.

Lemma 3 If S_1, .., S_t ⊆ [m], and for all 1 ≤ i ≤ t, |S_i| = n, and for all 1 ≤ i < j ≤ t, |S_i ∩ S_j| ≤ k, and t ≥ n/2k, then m ≥ n²/4k.

Proof: It is enough to prove the lemma for t = n/2k. Using the first two terms of the inclusion-exclusion formula we get that:

m ≥ |∪_{1≤i≤t} S_i| ≥ Σ_{1≤i≤t} |S_i| − Σ_{1≤i<j≤t} |S_i ∩ S_j|

which is the required bound for our choice of parameters.

[?] prove an information-theoretic generalization of the inclusion-exclusion bound. This rules out the possibility of obtaining better parameters by relaxing the notion of a design to a weaker one for which Lemma ?? can still be proven.

3 The new approximator

The main result of this paper is a construction of an approximator that corresponds to an almost optimal generator:

Theorem 4 If f = {f_n} is a function computable in time 2^{O(n)} such that for all n, S(f_n) ≥ s(n), then there exists an ɛ-approximator that on circuits of size s(n)^{O(1/log log n)} runs in time 2^{O(n)}, with ɛ = s^{−Ω(1/log log n)}.

A particularly interesting case is s(n) = 2^{n^ɛ}, for which we obtain:

bptime(t) ⊆ dtime(2^{O((log t)^{1/ɛ} · log log log t)})

For more general functions s(n), the above equation does not seem to have a nice closed-form solution. However, since the derandomization takes time 2^{O(n)}, we can pick n = min{t, s^{-1}(t^{O(log log t)})}. Then, since n ≤ t, t^{log log t} > t^{log log n}. This gives:

Theorem 5 Let f = {f_n} be a function computable in time 2^{O(n)} such that for all n, S(f_n) ≥ s(n). Then bptime(t) ⊆ dtime(2^{O(s^{-1}(t^{O(log log t)}))}).

This approximator should be compared to the one of [?], which is constructed by applying Theorem ?? and Lemma ?? in sequence: [?]'s approximator runs in time 2^{O(n²/log s)} on circuits of size s^{Ω(1)}.

3.1 A new lemma

In this paper we replace Lemma ?? by a new lemma, saying that either we can build a discrepancy set for a large t, or we have at our disposal a hard function on a smaller input. We can plug this function into the NW-generator, and since its input size n is decreased, we will be able to build designs with smaller k.
Definition 7 Given a function h : {0,1}^n → {0,1}, two sets S_1, S_2 ⊆ [m] where |S_1| = |S_2| = n and k = |S_1 ∩ S_2|, and α ∈ {0,1}^{n−k}, we define a function h^{S_1,S_2,α} : {0,1}^k → {0,1} in the following way:

h^{S_1,S_2,α}(z) = h(z; α)

We think of S_1 as the n input bits to h. z is placed in the bits which correspond to S_1 ∩ S_2, and α is used to fill the remaining n − k bits. Note that the definition is not symmetric in S_1, S_2.
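The restriction in Definition 7 can be sketched directly; the positional convention (bits of S_1 taken in sorted order, α filling the free positions in order) is our own choice for illustration, since the definition leaves the encoding implicit:

```python
def restrict(h, S1, S2, alpha):
    """Return the restriction h^{S1,S2,alpha}: a function on
    k = |S1 ∩ S2| bits, obtained by fixing the inputs of h outside
    S1 ∩ S2 to alpha. Positions are relative to S1, in sorted order."""
    inter = set(S1) & set(S2)
    shared = [i for i, pos in enumerate(sorted(S1)) if pos in inter]
    free = [i for i in range(len(S1)) if i not in shared]
    def restricted(z):
        x = [None] * len(S1)
        for bit, i in zip(z, shared):   # z fills the S1 ∩ S2 positions
            x[i] = bit
        for bit, i in zip(alpha, free):  # alpha fixes the rest
            x[i] = bit
        return h(tuple(x))
    return restricted

# Toy example: h is parity on 3 bits, S1 = {0,1,2}, S2 = {2,3,4}, so k = 1.
h = lambda x: x[0] ^ x[1] ^ x[2]
g = restrict(h, [0, 1, 2], [2, 3, 4], alpha=(1, 0))
print(g((1,)))  # parity of (1, 0, 1) = 0
```

Swapping S_1 and S_2 would fix a different set of positions, which is why the definition is not symmetric.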

Lemma 4 Fix n, m, v, t, ɛ such that t < min(2v, 2^{O(n²/m)}). Let S_1, .., S_t be the (n, m)-design promised by Theorem ??, and let k = c·n²/m. Let h : {0,1}^n → {0,1} be a function such that ADV_v(h) ≤ ɛ/t. Consider the set:

D = {NW_h^{n,m}(x) | x ∈ {0,1}^m}

If D is not a (t, ɛ)-discrepancy set, then there exist some 1 ≤ i, j ≤ t and a fixing α ∈ {0,1}^{n−k} such that:

S(h^{S_i,S_j,α}) ≥ v/2t

Remark 1 It is worthwhile to notice that Lemma ?? indeed follows from Lemma ??. Simply take t = v/2^{k+1}; this matches the assumption about h. None of the restricted functions can require circuit complexity v/2t = 2^k, since they are functions over k bits. So it must be the case that D is a (t, ɛ)-discrepancy set.

The proof of Lemma ?? appears in Appendix ??.

3.2 The construction

In this section we start presenting the new approximator. The first step will be building a machine that takes a circuit size as input and constructs a collection of small sets, where at least one of them is a good discrepancy set. In the next section we deal with the problem of approximating the fraction of the inputs accepted by a given circuit with such a collection. To build the generator we need a function f = {f_n} such that:

1. f is computable in time 2^{O(n)}.
2. For all n, S(f_n) ≥ s(n). (One can replace the "for all n" by "for infinitely many n" to get a weaker result.)

Parameters for the construction:

m - the seed length.
t - the length of the pseudo-random string (which is also the size of the circuits we want to fool).
ɛ - a bound on the error of the generator.
n - an input length on which f_n is hard.
s - the lower bound known on f_n (that is, a number such that S(f_n) ≥ s).

The construction works by recursively calling the procedure construct(l, n, s, g) (where l, n, s are integers and g is a function from {0,1}^n to {0,1}, represented as a truth table). The first call is to construct(1, n, s, f_n).

construct(l, n, s, g)

1. Use Theorem ?? to create a function h : {0,1}^{4n} → {0,1} such that if S(g) ≥ s, then ADV_v(h) ≤ ɛ/t. (This can be achieved with v = s·(ɛ/tn)^{O(1)}.)
2. Use Theorem ?? to create a (4n, m)-design S_1, .., S_t.
3. Let k = c·(4n)²/m be the bound on the intersection size.

4. Output D = {NW_h^{4n,m}(x) | x ∈ {0,1}^m}.
5. If v ≥ 2^{k+1}·t, return.
6. For all i ≠ j ∈ [t] and for all α ∈ {0,1}^{n−k}, call construct(l + 1, k, v/2t, h^{S_i,S_j,α}).

Note that for each instantiation of construct, l is the level of the instantiation in the recursion tree. The values of n, s, v, k depend only on l, and so we call them n_l, s_l, v_l, k_l respectively.

Theorem 6 Under the following assumptions:

1. S(f_n) ≥ s.
2. f is computable in time 2^{O(n)}.
3. t = s^{1/(2 log log n)} > n.
4. ɛ = t^{−O(1)}.
5. m = 2cn.

the process described runs in time 2^{O(n)}, and at least one of the sets D generated during runtime is a (t, ɛ)-discrepancy set.

The proof of Theorem ?? appears in Appendix ??.

4 A tournament of generators

In the previous section we constructed a collection of 2^{O(n)} sets, where one of them is a (t, ɛ)-discrepancy set. To complete the construction of the approximator, we need to be able to approximate the success probability of a given circuit using such a collection. We achieve this using an idea from [?]. (See also [?].)

Theorem 7 There exists an algorithm for the following computational problem:

Input: A circuit C of size t, and a collection of multi-sets D_1, .., D_l such that for 1 ≤ i ≤ l, D_i ⊆ {0,1}^t and |D_i| ≤ M. Moreover, at least one of the D's is a (t, ɛ)-discrepancy set.

Output: A number α such that |α − µ(C)| ≤ 2ɛ.

The algorithm runs in time polynomial in t, l, M.

The proof of Theorem ?? appears in Appendix ??. By applying Theorems ??, ?? in sequence we get the approximator, and prove Theorem ??.

5 An information-theoretic analog à la Trevisan

Recently, Trevisan [?] used the NW-generator to construct an extractor. Trevisan's extractor suffers from the same inefficiency of the NW-generator. In this section we use our technique to build an information-theoretic analog of an approximator.
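The tournament of Theorem 7 (whose proof gives the matrix-of-pairs algorithm) can be sketched as follows; the toy circuit and candidate sets are our own illustration:

```python
from itertools import product

def tournament(C, sets, eps):
    """Tournament of candidate discrepancy sets: entry a[i][j] is the
    acceptance rate of C over all xors of elements of D_i and D_j;
    return the midpoint of a row whose entries span at most 2*eps."""
    def rate(Di, Dj):
        vals = [C(tuple(a ^ b for a, b in zip(w1, w2))) for w1 in Di for w2 in Dj]
        return sum(vals) / len(vals)
    l = len(sets)
    a = [[rate(sets[i], sets[j]) for j in range(l)] for i in range(l)]
    for row in a:
        if max(row) - min(row) <= 2 * eps:
            return (max(row) + min(row)) / 2
    return None  # cannot happen if one D_i is a discrepancy set

# Toy run: C tests the first bit (so µ(C) = 1/2); D1 is the full cube on
# 2 bits (a perfect discrepancy set), D2 is a bad, lopsided set.
C = lambda w: w[0]
D1 = list(product((0, 1), repeat=2))
D2 = [(1, 1), (1, 0)]
print(tournament(C, [D1, D2], eps=0.05))  # 0.5
```

If D_i is a discrepancy set then D_i ⊕ D_j is one too, so the good set's row is uniformly accurate; any row that is merely self-consistent must agree with the good set's column, which pins its value near µ(C).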

Definition 8 An ɛ-extractor is a function Ext : {0,1}^l × {0,1}^m → {0,1}^t which can be computed in polynomial time, such that for all distributions Source on {0,1}^l having min-entropy⁸ r, the distribution obtained by applying Ext(f, x), where f is sampled from Source and x is sampled uniformly from {0,1}^m, is ɛ-close⁹ to the uniform distribution on t bits.

Definition 9 Given an event T ⊆ {0,1}^t (we will think about such an event as a function T : {0,1}^t → {0,1}), define: µ(T) = Pr_{w ∈_R {0,1}^t}(T(w) = 1)

One possible use of an extractor is to approximate µ(T) for a given event T ⊆ {0,1}^t.

Definition 10 An (ɛ, δ)-approximator is a deterministic machine App such that for any event T ⊆ {0,1}^t and distribution Source with min-entropy r ≥ t,

Pr_{f ∈ Source}[|App^T(f) − µ(T)| > ɛ] < δ

In words, the approximator takes one sample from a distribution with min-entropy r and uses it to approximate the probability of any event T. App uses T as an oracle. The important parameter is the running time of the approximator; we stress that any call to the oracle T takes one time unit. The following lemma (which is an analog of Lemma ??) shows that this device is the analog of an approximator for our setting.

Lemma 5 If there exists an ɛ-extractor (for some parameters l, m, t, r), then for all δ < ɛ there exists an ((ɛ − δ)/(1 − δ), δ)-approximator that runs in time l^{O(1)}·2^m.

Proof: With an extractor in hand, construct the set D = {Ext(f, x) | x ∈ {0,1}^m}, check whether y ∈ T for all y ∈ D, and output the proportion of y's that are in T. The distribution induced by the extractor approximates µ(T) with error at most ɛ. Therefore, the fraction of the f's such that Ext(f, ·) gives an approximation with error greater than (ɛ − δ)/(1 − δ) is bounded by δ.

Our interest is in constructing extractors with minimal m. We cannot construct an extractor using our technique; instead, we construct efficient (in terms of running time) approximators.
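The estimation procedure in the proof of Lemma 5 is a short loop. A minimal sketch, where `toy_ext` is our own stand-in (selecting a window of bits of f, which is of course not a real extractor):

```python
from itertools import product

def approximate_with_sample(T, f, ext, m):
    """Given one sample f from a weak source and an extractor `ext`,
    estimate µ(T) by averaging T over {Ext(f, x) : x in {0,1}^m}."""
    outs = [ext(f, x) for x in product((0, 1), repeat=m)]
    return sum(T(w) for w in outs) / len(outs)

# Toy stand-in (not a real extractor): Ext outputs 3 consecutive bits
# of f starting at the position encoded by x; T tests the first bit.
def toy_ext(f, x):
    start = int("".join(map(str, x)), 2)
    return tuple(f[(start + i) % len(f)] for i in range(3))

f_sample = (1, 0, 1, 1, 0, 0, 1, 0)
print(approximate_with_sample(lambda w: w[0], f_sample, toy_ext, m=3))
```

The cost is 2^m oracle calls per sample, which is why minimizing m is the whole game in this section.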
Theorem 8 For any l, r and δ there exists an (ɛ, δ)-approximator with t = (r + log δ)^{Ω(1/log log log l)}, which runs in time l^{O(1)}, and ɛ = (r + log δ)^{−Ω(1/log log log l)}.

From the point of view of extractors, this corresponds to an extractor with m = O(log l), which is optimal. One use of an approximator is to simulate a probabilistic algorithm given a distribution with some min-entropy. For simplicity, we phrase the next theorem only for constant error.

Theorem 9 Given a distribution Source on {0,1}^l with min-entropy r, and a probabilistic algorithm A that runs in time q and uses r^{O(1/log log log l)} random bits, there exists a deterministic algorithm that runs in time l^{O(1)}·q, and given one sample from Source and an input x, outputs A(x) with arbitrary constant error.

This is somewhat better (for some choices of parameters) than the approximator constructed using Lemma ?? from Trevisan's extractor, as that approximator runs in time 2^{log² l / log r}·q. The proof of Theorem ?? is very similar to that of Theorem ?? and appears in Appendix ??.

Acknowledgements

We thank Oded Goldreich for a conversation that started us working on this paper.

⁸ The min-entropy of a distribution D is min_x log(1/D(x)).
⁹ The distance between two distributions D_1, D_2 on X is defined to be max_{A ⊆ X} |D_1(A) − D_2(A)|.

A Proofs

A.1 Proof of the main lemma

Proof: (of Lemma ??) If D is not a (t, ɛ)-discrepancy set, then there exists a circuit A of size t such that:

|Pr_{w ∈_R D}(A(w) = 1) − µ(A)| > ɛ

By using a standard hybrid argument as in [?], we get that there exists a circuit C of size t and 1 ≤ j ≤ t such that C predicts the j-th bit of the generator's output from the previous j − 1 bits, namely:

Pr_{w ∈_R D}(C(w_1, .., w_{j−1}) = w_j) > 1/2 + ɛ/2t

Choosing a random w in D amounts to choosing a random x in {0,1}^m and applying NW_h^{n,m}. This means that w_j is nothing but h(x_{S_j}). We get that:

Pr_{x ∈_R {0,1}^m}(C(NW_h^{n,m}(x)_{1..j−1}) = h(x_{S_j})) > 1/2 + ɛ/2t

There exists a fixing β ∈ {0,1}^{m−n} of the bits outside of S_j such that:

(1) Pr_{y ∈_R {0,1}^n}(C(NW_h^{n,m}(y; β)_{1..j−1}) = h(y)) > 1/2 + ɛ/2t

For i < j, if we set α_i = β_{S_i \ S_j}, we get:

NW_h^{n,m}(y; β)_i = h((y; β)_{S_i}) = h^{S_i,S_j,α_i}((y; β)_{S_i ∩ S_j})

Therefore, if it were the case that all the h^{S_i,S_j,α}'s had low circuit complexity, then there would be size-v/2t circuits which compute NW_h^{n,m}(y; β)_i for all i < j. Combining these with C, and using (??), we get a circuit D of size t + t·(v/2t) ≤ v such that:

Pr_{y ∈_R {0,1}^n}(D(y) = h(y)) > 1/2 + ɛ/2t

that is, ADV_v(h) > ɛ/t, which is a contradiction.

A.2 Proof of Theorem ??

The theorem will follow from a sequence of claims.

Claim 1 Using the conditions in Theorem ??, it is easy to get the following equations:

v_l = s_l·t^{−O(1)}.
s_l = s·t^{−O(l)}.
n_l = O(n/2^{2^{l−1}}).

Claim 2 The process described can be performed in time 2^{O(n)}.

Proof: We have already fixed m = O(n). The work done in each instantiation of construct can be done in time 2^{O(m)} = 2^{O(n)}. It remains to bound the size of the recursion tree. The degree of the recursion tree at level l is bounded by t² · 2^{n_l}. Having fixed t = s^{1/(2 log log n)}, we note that in all levels but the last one t² ≤ 2^{O(n_l)}; otherwise v_l · t > t² > 2^{n_l} ≥ 2^{k_{l+1}} and the process should stop. Using the fact that for all l, n_{l+1} ≤ n_l / 2, we can bound the degree of the recursion tree at level l by 2^{O(n_l)} = 2^{O(n/2^l)}. This means that the total number of instantiations is bounded by:

∏_l 2^{O(n/2^l)} = 2^{O(n · Σ_l 1/2^l)} = 2^{O(n)}

Claim 3 The depth of the recursion tree is bounded by O(log log n).

Proof: We simply have to estimate the least l such that v_l ≥ 2^{k_{l+1}} · t. Using our former equations this translates to:

s / s^{O(l / (2 log log n))} ≥ 2^{n/2^{2^l}} · s^{1/(2 log log n)}

which is satisfied by taking l = Θ(log log n).

Claim 4 Suppose that none of the sets D produced up to level l − 1 is a (t, ε)-discrepancy set. Then there exists a g in level l such that S(g) ≥ s_l.

Proof: The proof uses induction on l. The claim is certainly true for l = 1. For l > 1, we know that in levels 1 up to l − 1 there is no (t, ε)-discrepancy set. Using the induction hypothesis for levels 1 up to l − 1, we know that there is some function g_{l−1} in level l − 1 such that S(g_{l−1}) ≥ s_{l−1}. This means that in the same instantiation of construct, the function h_{l−1} had ADV_{v_{l−1}}(h_{l−1}) ≤ ε/t. Using Lemma ??, we get that if the D produced at the current instantiation of construct is not a (t, ε)-discrepancy set, then there exists a restriction b = b^{S_i,S_j,α} of h_{l−1} (for some choice of i, j, α) such that:

S(b) ≥ v_{l−1}/2t = s_l

Proof: (Of Theorem ??) We have already bounded the running time in Claim ??. Let d be the depth of the recursion tree. From Claim ?? we get that if none of the D's in levels 1 up to d is a (t, ε)-discrepancy set, then one of the g's in the last level has S(g) ≥ s_d, and so ADV_{v_d}(h) ≤ ε/t. At the last level we can afford the price of using Lemma ??. Using it we get that D is a (v_d/2^{k_{d+1}}, ε)-discrepancy set. Using the fact that at level d, v_d/2^{k_{d+1}} ≥ t, we get that D is a (t, ε)-discrepancy set.
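The O(log log n) depth bound can be sanity-checked numerically. The recurrence below (n_{l+1} ≈ n_l²/2n, with all constants set to 1, a simplifying assumption of mine) mirrors the doubly-exponential decay of the input lengths n_l:

```python
from math import ceil, log2

def recursion_depth(n):
    """Count levels until the input length drops to a constant under the
    update n_{l+1} = ceil(n_l**2 / (2*n)); constants are hypothetical,
    only the doubly-exponential decay rate matters."""
    n_l, depth = n, 0
    while n_l > 2:
        n_l = ceil(n_l ** 2 / (2 * n))
        depth += 1
    return depth

for n in (2 ** 10, 2 ** 16, 2 ** 20):
    print(n, recursion_depth(n), ceil(log2(log2(n))))
```

For these inputs the measured depth tracks ceil(log log n) closely, as the analysis predicts.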

A.3 Proof of Theorem ??

Proof: For y ∈ {0,1}^t, define C_y(w) = C(w ⊕ y). Note that for all y ∈ {0,1}^t, C_y is of roughly the same size as C. For 1 ≤ i ≤ l, define:

α_i(y) = Pr_{w ∈_R D_i}[C(w ⊕ y) = 1]

For i, j ∈ [l] define:

α_{ij} = E_{y ∈_R D_j}[α_i(y)]

Let k be an index such that D_k is a (t, ε)-discrepancy set. For all y ∈ {0,1}^t, µ(C) = µ(C_y), and |α_k(y) − µ(C_y)| ≤ ε. From this we have that for all 1 ≤ j ≤ l:

|α_{kj} − µ(C)| ≤ ε

Note that for all i, j ∈ [l], α_{ij} = α_{ji}. This is because both amount to taking all pairs a_1, a_2 from D_i, D_j and running C(a_1 ⊕ a_2).

The algorithm computes α_{ij} for all i, j ∈ [l], and picks a row r such that all the numbers in I = {α_{rj} : j ∈ [l]} lie on an interval of length 2ε. It then returns α, the middle of that interval. Such an r exists, because k has that property. For all i, we have that |α_{ik} − µ(C)| ≤ ε; in particular α_{rk} ∈ I, and therefore all the numbers in I are at distance at most 3ε from µ(C). From this we have that |α − µ(C)| ≤ 2ε.

A.4 Proof of Theorem ??

The following section is devoted to proving Theorem ??. The construction and proof are almost identical to those of the previous sections. Given f, which is sampled from Source, we think of it as a function f : {0,1}^n → {0,1}, where n = log l. We fix:

m = 2cn
s = (r + log δ)^{1/2}
t = s^{1/(2 log log n)}

The actual approximation is done by calling construct(1, n, s, f). We end up with some 2^{O(n)} = l^{O(1)} sets. We will then use arguments similar to those of the previous sections to approximate µ(T). We will require some new notation.

Definition 11 For functions f : {0,1}^n → {0,1} and T : {0,1}^t → {0,1} we define:

1. S_T(f) = min{size(C) : circuits C that use T-gates and compute f correctly on every input}
2. SUC_{s,T}(f) = max{Pr_{x ∈_R {0,1}^n}[C(x) = f(x)] : circuits C of size s that use T-gates}
3. ADV_{s,T}(f) = 2 · SUC_{s,T}(f) − 1

By T-gates, we mean that the circuit can compute the function T at the cost of one gate.

Definition 12 Given a function T : {0,1}^t → {0,1}, a (T, ε)-discrepancy set is a multi-set D ⊆ {0,1}^t such that for all y ∈ {0,1}^t:

|Pr_{w ∈_R D}[T(w ⊕ y) = 1] − µ(T)| ≤ ε
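The tournament described in this proof is easy to transcribe. In the sketch below (my helper names; t-bit strings encoded as Python ints, combined with XOR), each α_{ij} is computed exactly by enumerating all pairs, so the running time is polynomial in l and the set sizes, as the theorem claims:

```python
def estimate_mu(C, sets, eps):
    """Tournament from the proof: given candidate multisets D_1..D_l, at
    least one of which is assumed to be a discrepancy set for C, return
    alpha with |alpha - mu(C)| <= 2*eps."""
    # alpha[i][j] = E_{y in D_j} Pr_{w in D_i} [ C(w XOR y) = 1 ]
    alpha = [[sum(C(w ^ y) for w in Di for y in Dj) / (len(Di) * len(Dj))
              for Dj in sets] for Di in sets]
    for row in alpha:
        lo, hi = min(row), max(row)
        if hi - lo <= 2 * eps:        # whole row lies in a 2*eps interval
            return (lo + hi) / 2      # midpoint is within 2*eps of mu(C)
    raise ValueError("no candidate was a discrepancy set")

# Hypothetical example over t = 4 bits: C accepts strings with >= 2 ones,
# so mu(C) = 11/16.  The full cube is a perfect discrepancy set; the
# other candidate is deliberately bad.
C = lambda z: 1 if bin(z).count("1") >= 2 else 0
full, skewed = list(range(16)), [0, 1, 2, 4]
est = estimate_mu(C, [full, skewed], eps=0.05)
print(est)   # 0.6875
```

The row belonging to the genuine discrepancy set always passes the interval test, which is exactly the existence argument in the proof.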

The motivation for this definition is the following: when proving Theorem ?? — which enabled us to approximate a given circuit T using a collection of sets of which one is a discrepancy set — we actually used only the fact that one of the sets is a (T, ε)-discrepancy set. This is because we only used the sets D to approximate the specific circuits of the form T_y(w) = T(w ⊕ y).

Theorem 10 (Analog of Theorem ??) There exists an algorithm for the following computational problem:

Input: A function T : {0,1}^t → {0,1}, which the algorithm may use as an oracle, and a collection of multi-sets D_1, .., D_l such that for 1 ≤ i ≤ l, D_i ⊆ {0,1}^t and |D_i| ≤ M. Moreover, at least one of the D's is a (T, ε)-discrepancy set.

Output: A number α such that |α − µ(T)| ≤ 2ε.

The algorithm runs in time polynomial in l and M.

Using this notation we can rephrase Theorem ??.

Theorem 11 (Analog of Theorem ??) For every function f : {0,1}^n → {0,1} and every ε, there exists a function h : {0,1}^{4n} → {0,1} such that:

1. h can be computed in time 2^{O(n)}, given an oracle to f.
2. ADV_{v,T}(h) ≤ ε
3. v = S_T(f) · (ε/n)^{O(1)}

This theorem is essentially identical to Theorem ??. The proof works by assuming that the conclusion of the theorem is false; it then constructs a circuit that shows that the assumption is false. From our point of view, any copies of T that were in the first circuit are used just the same in the second circuit. The same idea is used to prove the analog of Lemma ??.

Lemma 6 (Analog of Lemma ??) Fix T : {0,1}^t → {0,1}. Let n, m, v, t, ε be such that t² < min(2v, 2^{O(n²/m)}). Let S_1, .., S_t be the (n, m)-design promised by Theorem ??, and let k = c · n²/m. Let h : {0,1}^n → {0,1} be a function such that ADV_{v,T}(h) ≤ ε/t. Consider the set:

D = {NW_h^{n,m}(x) : x ∈ {0,1}^m}

If D is not a (T, ε)-discrepancy set, then there exist some 1 ≤ i, j ≤ t and a fixing α ∈ {0,1}^{n−k} such that:

S_T(b^{S_i,S_j,α}) ≥ v/2t

Proof: Suppose D is not a (T, ε)-discrepancy set. Then there exists some y ∈ {0,1}^t such that T(· ⊕ y) is not fooled by D.
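For toy parameters, Definition 12 can be verified by brute force; the following sketch simply transcribes the quantifier structure (the function and sets below are hypothetical examples of mine):

```python
def is_discrepancy_set(T, D, t, eps):
    """Definition 12, by brute force over all 2^t shifts: D is a
    (T, eps)-discrepancy set iff for every y the D-average of
    T(w ^ y) is within eps of mu(T)."""
    mu = sum(T(z) for z in range(2 ** t)) / 2 ** t
    return all(abs(sum(T(w ^ y) for w in D) / len(D) - mu) <= eps
               for y in range(2 ** t))

# Hypothetical example with t = 3 and T = parity, so mu(T) = 1/2:
T = lambda z: bin(z).count("1") % 2
print(is_discrepancy_set(T, [0, 7], 3, 0.1))   # 0 and 7 have opposite parity
print(is_discrepancy_set(T, [0], 3, 0.1))      # one point cannot fool parity
```

Of course, the whole point of the construction is to certify this property without an exponential enumeration; the checker only illustrates what is being claimed.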
As in Lemma ??, the hybrid argument turns this into a predictor built from T(· ⊕ y); and this circuit has size O(t) when viewed as a circuit with T-gates. The proof of Lemma ?? constructs a circuit for h using this circuit, and can proceed unchanged from this point.

We are now ready to prove Theorem ??.

Proof: (Of Theorem ??) Consider the function f selected from the distribution Source. We claim that with probability 1 − δ, S_T(f) ≥ s. This is because the number of circuits of size s is bounded by 2^{s²}, and the fact that Source has min-entropy r says that any set of size 2^{s²} has probability at most 2^{s²} · 2^{−r} < δ. Assuming that an f such that S_T(f) ≥ s was selected from the source, we can use Lemma ?? recursively, as in Theorem ??, to conclude that one of the sets D constructed by the process is a (T, ε)-discrepancy set of size 2^{O(n)}, with element size t = s^{Ω(1/log log n)} = (r + log δ)^{Ω(1/log log log l)}. Using Theorem ?? we can then approximate µ(T).
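The counting step can be checked numerically: with at most 2^{s²} circuits of size s and a min-entropy-r source, the union bound 2^{s²} · 2^{−r} < δ is what forces s ≈ (r + log δ)^{1/2}. A small sketch with hypothetical numbers (constants ignored; the 2^{s²} circuit count is the same crude bound used above):

```python
from math import isqrt, log2

def max_hard_size(r, delta):
    """Largest s with 2**(s*s) * 2**(-r) < delta, i.e. s*s < r + log2(delta);
    constants and the exact circuit-counting bound are glossed over."""
    budget = int(r + log2(delta))     # log2(delta) is negative
    return isqrt(max(budget - 1, 0))  # keep the inequality strict

s = max_hard_size(r=10_000, delta=2.0 ** -100)
print(s)   # 99
# Sanity check in the exponent: s*s - r < log2(delta).
assert s * s - 10_000 < -100
```

So a source with min-entropy r = 10000 yields, except with probability 2^{−100}, a function requiring circuits of size about 99 in this toy accounting.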


More information

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES (Section.0: Difference Quotients).0. SECTION.0: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES Define average rate of cange (and average velocity) algebraically and grapically. Be able to identify, construct,

More information

ch (for some fixed positive number c) reaching c

ch (for some fixed positive number c) reaching c GSTF Journal of Matematics Statistics and Operations Researc (JMSOR) Vol. No. September 05 DOI 0.60/s4086-05-000-z Nonlinear Piecewise-defined Difference Equations wit Reciprocal and Cubic Terms Ramadan

More information

Precalculus Test 2 Practice Questions Page 1. Note: You can expect other types of questions on the test than the ones presented here!

Precalculus Test 2 Practice Questions Page 1. Note: You can expect other types of questions on the test than the ones presented here! Precalculus Test 2 Practice Questions Page Note: You can expect oter types of questions on te test tan te ones presented ere! Questions Example. Find te vertex of te quadratic f(x) = 4x 2 x. Example 2.

More information

Chapter 2 Limits and Continuity

Chapter 2 Limits and Continuity 4 Section. Capter Limits and Continuity Section. Rates of Cange and Limits (pp. 6) Quick Review.. f () ( ) () 4 0. f () 4( ) 4. f () sin sin 0 4. f (). 4 4 4 6. c c c 7. 8. c d d c d d c d c 9. 8 ( )(

More information

Optimal parameters for a hierarchical grid data structure for contact detection in arbitrarily polydisperse particle systems

Optimal parameters for a hierarchical grid data structure for contact detection in arbitrarily polydisperse particle systems Comp. Part. Mec. 04) :357 37 DOI 0.007/s4057-04-000-9 Optimal parameters for a ierarcical grid data structure for contact detection in arbitrarily polydisperse particle systems Dinant Krijgsman Vitaliy

More information

The Krewe of Caesar Problem. David Gurney. Southeastern Louisiana University. SLU 10541, 500 Western Avenue. Hammond, LA

The Krewe of Caesar Problem. David Gurney. Southeastern Louisiana University. SLU 10541, 500 Western Avenue. Hammond, LA Te Krewe of Caesar Problem David Gurney Souteastern Louisiana University SLU 10541, 500 Western Avenue Hammond, LA 7040 June 19, 00 Krewe of Caesar 1 ABSTRACT Tis paper provides an alternative to te usual

More information

WYSE Academic Challenge 2004 Sectional Mathematics Solution Set

WYSE Academic Challenge 2004 Sectional Mathematics Solution Set WYSE Academic Callenge 00 Sectional Matematics Solution Set. Answer: B. Since te equation can be written in te form x + y, we ave a major 5 semi-axis of lengt 5 and minor semi-axis of lengt. Tis means

More information

Homework 1 Due: Wednesday, September 28, 2016

Homework 1 Due: Wednesday, September 28, 2016 0-704 Information Processing and Learning Fall 06 Homework Due: Wednesday, September 8, 06 Notes: For positive integers k, [k] := {,..., k} denotes te set of te first k positive integers. Wen p and Y q

More information

Quantum Numbers and Rules

Quantum Numbers and Rules OpenStax-CNX module: m42614 1 Quantum Numbers and Rules OpenStax College Tis work is produced by OpenStax-CNX and licensed under te Creative Commons Attribution License 3.0 Abstract Dene quantum number.

More information

CS522 - Partial Di erential Equations

CS522 - Partial Di erential Equations CS5 - Partial Di erential Equations Tibor Jánosi April 5, 5 Numerical Di erentiation In principle, di erentiation is a simple operation. Indeed, given a function speci ed as a closed-form formula, its

More information

REVIEW LAB ANSWER KEY

REVIEW LAB ANSWER KEY REVIEW LAB ANSWER KEY. Witout using SN, find te derivative of eac of te following (you do not need to simplify your answers): a. f x 3x 3 5x x 6 f x 3 3x 5 x 0 b. g x 4 x x x notice te trick ere! x x g

More information

Chapter 5 FINITE DIFFERENCE METHOD (FDM)

Chapter 5 FINITE DIFFERENCE METHOD (FDM) MEE7 Computer Modeling Tecniques in Engineering Capter 5 FINITE DIFFERENCE METHOD (FDM) 5. Introduction to FDM Te finite difference tecniques are based upon approximations wic permit replacing differential

More information

Generic maximum nullity of a graph

Generic maximum nullity of a graph Generic maximum nullity of a grap Leslie Hogben Bryan Sader Marc 5, 2008 Abstract For a grap G of order n, te maximum nullity of G is defined to be te largest possible nullity over all real symmetric n

More information

Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals. Gary D. Simpson. rev 01 Aug 08, 2016.

Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals. Gary D. Simpson. rev 01 Aug 08, 2016. Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals Gary D. Simpson gsim1887@aol.com rev 1 Aug 8, 216 Summary Definitions are presented for "quaternion functions" of a quaternion. Polynomial

More information