
A Probabilistic Stopping Criterion for the Evaluation of Benchmarks (Extended Version)

Michael Greiner*, Manfred Schramm**

* Siemens AG, Corporate Technology ZT PP 2, Otto-Hahn-Ring 6, München, Germany. michael.greiner@mchp.siemens.de
** Fachhochschule Ravensburg-Weingarten, Fachbereich Elektrotechnik/Informatik, Weingarten, Germany. schramma@fbe.fh-weingarten.de

Keywords: Evaluation of benchmarks, heuristic comparison of algorithms, general steps for stochastic modelling, subjective probability, Principle of Indifference, Rule of Succession

Abstract: We present general steps of modelling a problem within the frame of subjective probabilities and explain some connections to the well-known rule of succession. For practical relevance we present the solution of a small example concerning the efficient testing of benchmarks, including a discussion about the advantage of producing further data. We present the choices of modelling and obtain solutions to different questions and different information states. The way the solutions are obtained is therefore not specific and can be used for very different problem situations.

Contents

1 Introduction
  1.1 Problem description
  1.2 Construction of the frame of discernment (Fr)
  1.3 Incorporation of background assumptions and knowledge (Ba)
  1.4 Incorporation of update information (Up)
  1.5 Incorporation of assumptions and knowledge about the process update information is delivered (Pr)
  1.6 Terminology (Urn)
  1.7 Goals (expressed in the Urn-Terminology)

2 Situation A (Two-valued Model)
  2.1 Describing the Situation
  2.2 Reaching the first goal
      Example 1: Worst Case ($P_{\lfloor n/2\rfloor+1,n}$ for $N = 200$)
      Example 2: Best Case ($P_{n,n}$ for $N = 200$)
      Example 3: Determining a minimal majority of red balls
  2.3 Connection with the Rule of Succession
  2.4 Reaching the second goal
      Example 5: Worst Case ($P_{\lfloor n/2\rfloor+1,n}$ for $N = 200$)
      Example 6: Best Case ($P_{n,n}$ for $N = 200$)
  2.5 The advantage of (further) drawings - an analysis by decision theoretic considerations
      Example 7: $\Delta D^m_{r,n}$ for $r = 15$, $n = 25$, and $N = 200$
      Application: A priori calculation of the necessary sample size for a given security level
3 Situation B
  3.1 Describing the situation
  3.2 Reaching the goal
      Example 8: Average Case ($P_{\mathrm{maj}}$ for $N = 200$)
      Example 9: Studying the qualitative shapes for varying values of $N$
  3.3 Considering the mean
4 Situation C (Three-valued Model)
  4.1 Describing the situation
  4.2 Reaching the goal
      Example 10: Comparison to the two-valued case
5 Conclusion/Summary
Appendix - Simulation Results
References

1 Introduction

The real problem of using probability theory for solving a (e.g. decision) problem does not consist in using ready-made statistical formulas but in answering some important questions about the construction of a corresponding model:

(Fr) What frame of discernment should we use?
(Ba) How can we encode our background assumptions and knowledge?
(Up) How can we encode new (update) information?
(Pr) How can we encode assumptions and knowledge about the process in which new (update) information is delivered?

We now briefly explain some of these ideas and apply them to our example.

1.1 Problem description

Let $N$ be the total number of available benchmarks for two algorithms, call them $A_1$ and $A_2$. Any algorithm might behave differently on a single benchmark: there might be a very useful solution (e.g. very quickly obtained), there might be a less useful solution (e.g. very slowly obtained), there might be a wrong solution, or there might be no solution at all (e.g. core dump or eternal search). Generally, we are interested in arguing for "$A_1$ is better than[1] $A_2$" or vice versa, based on the information of how the two algorithms behave on the whole set of benchmarks. As the evaluation of benchmarks may be expensive, we want to test as few benchmarks as possible in order to reach the desired conclusion with a certain security. Our questions are therefore as follows:

(1) Increasing the size $n$ of the tested sample of benchmarks, what is the value of $r$ ("$A_1$ better than $A_2$" in $r$ cases of the sample of size $n$) which is necessary to reach a specified average correctness of our decision on the total set of benchmarks?

(2) What is the relation if (only) incomplete information about a sample is available?

(3) What kind of relation holds between the accuracy of guessing the value of (number of cases where $A_1$ performs better than $A_2$)$/N$ and the sample size $n$?

(4) How can we estimate the value of further experiments (data)? How can we estimate their influence on our decision?

[1] Where "better" has to be defined by the specific application.

1.2 Construction of the frame of discernment (Fr)

Calculations of probabilities are very sensitive to the choice of the frame, i.e. the set of elementary events. For example, drawing from an urn which may contain two kinds of balls is not the same as drawing from an urn which may contain three kinds of balls (especially if the difference is not evident in some sample). We may call these states of information different states of possibility. Further, we have to justify the choice of our frame of discernment and to show how different choices of the frame might influence our results. Generally we choose the minimal frame which allows the specification of our information (background and update) and our goals.

In our case, we use (also for stochastic convenience) the model of an urn containing $N$ balls at the beginning. A single ball may be red (which means that $A_1$ performs better than $A_2$ on a specific benchmark), green (in the opposite case) or blue (if neither happens). The following Situations A and B (see Section 2 and Section 3) show the answers if only red and green balls are expected to be in the urn (two-valued model); Situation C shows that it exerts only a very small influence on our goals if blue balls are also expected (three-valued model).

1.3 Incorporation of background assumptions and knowledge (Ba)

Given a frame of discernment we have to specify our background (a priori) information. Whenever this kind of knowledge does not lead to a completely specified probability model (P-model), we might have to determine a most probable model (in analogy to calculating the center of gravity in physics) to simplify our calculations. Principles like Indifference, Independence and the method of Maximum Entropy (see [Greiner & Schramm, 1994]) can be shown to support such calculations of the most probable P-model whenever commitments to a single P-model are necessary.

In our case we make the assumption (justified by the Principle of Indifference, because we have no specific information, especially no statistics, on how tests of benchmarks are distributed) that any possible P-model (i.e. any number of red, green and blue balls in the urn) has the same possibility (weight, probability) at the beginning.

1.4 Incorporation of update information (Up)

Here we specify the (update) information which is delivered by the process. This information may contain parameters which are fixed in any specific problem situation. In our case this means getting the information of a sample of size $n$ (which is drawn without replacement) containing $r$ red, $g$ green and $b$ blue balls.

1.5 Incorporation of assumptions and knowledge about the process update information is delivered (Pr)

Identical samples can be obtained by different processes. Specifically, we may have obtained identical samples from different sets of events by different processes.

In the famous Monty Hall problem (see e.g. [Savant, 1992] or [Randow, 1992]) it makes a difference whether we assume a random experiment on the set of closed doors or on the set of closed and empty doors, where the result (update information) is in both cases an empty door. In the famous problem of the two children (see e.g. [Grinstead & Snell, 1997]) it makes a difference whether we assume (in the lack of further knowledge) the child to be selected by a random process or whether we have some information (e.g. if the family has boys they are known to be more likely presented to the visitor). In our cases we do not have such knowledge.[2] To end up with a distribution about the process by which the update information was generated, we have to use the same principles as above. In our case the use of the Principle of Indifference (no knowledge) leads to a uniform distribution.[3]

[2] Though this statement is not trivial, because any attempt at knowledge mining will very probably show that the benchmarks are somehow grouped together and include some history and relationship.
[3] This is a way to avoid the classical demand that the sample has to be representative.

1.6 Terminology (Urn)

  $N$             the known number of balls in the urn (i.e. number of available benchmarks)
  $n$             the size of the sample
  $R$ ($G$, $B$)  the unknown number of red (green, blue) balls in the urn at the beginning.
                  In the two-valued case we have $N = R + G$; in the three-valued case $N = R + G + B$.
  $r$ ($g$, $b$)  the number of red (green, blue) balls in the sample.
                  In the two-valued case we have $n = r + g$; in the three-valued case $n = r + g + b$.
  $U_{R,N}$       the urn contains $R$ red and $N-R$ green balls
  $U_{R,G,N}$     the urn contains $R$ red, $G$ green and $N-R-G$ blue balls
  $S_{r,n}$       the sample contains $r$ red and $n-r$ green balls
  $S_{r,g,n}$     the sample contains $r$ red, $g$ green and $n-r-g$ blue balls

1.7 Goals (expressed in the Urn-Terminology)

The main goal (Go) is of course the estimation of the value $R/N$ (resp. $G/N$) and the use of this for a two-valued decision ($R > G$ or $G \ge R$). More refined, our questions are as follows:

(1) What is the average correctness of the decision given a sample of size $n$ with $r$ red balls and $g$ green balls and $r > g$? Given a minimum value of an expected average correctness (e.g. 0.90 or 0.95) of our decision and a sample of size $n$, what is the smallest value of $r$ that we need to reach this correctness? Knowing this value will allow us to make a decision about $R > G$ (given a specific sample of size $n$) without further calculation.

We will answer these questions for the two-valued case in the equations and examples of Situation A, goal 1, and for the three-valued case in Situation C.

(2) Instead of using the Principle of Indifference we can obtain the same result on the questions of item (1) by consecutively using the rule of succession of Laplace. This application can be found in Section 2.3.

(3) Given a sample of size $n$ with $r$ red balls, what is the (minimal) confidence interval of level $\alpha$ for our decision? We give the answer to this question in answering goal 2 in Situation A.

(4) Given a sample of size $n$ with $r$ red balls and $g$ green balls, how will further draws increase the average correctness of our decision $R > G$? We will discuss this in Section 2.5 for Situation A.

(5) How can we determine the size $n$ of the sample a priori, if we demand a certain level of average correctness for our decision? We will discuss this at the end of Section 2.5 for Situation A.

(6) What is the correctness of our decision if only incomplete information $r > g$ about the sample is available? In Situation B (Section 3) we deal with this kind of knowledge for the confidence levels 0.90 and 0.95.

(7) Finally we expand the basic model to the three-valued case and give an answer to question (1) for this model (see Situation C, Section 4).

2 Situation A (Two-valued Model)

2.1 Describing the Situation

(Fr) $\left\{ \bigcap_{i=1}^{N} e_i \;\middle|\; e_i \in \{r_i, g_i\} \right\}$ (two-valued model), where $r_i$ = "red ball in the $i$-th draw" and $g_i$ = "green ball in the $i$-th draw".

(Ba) For convenience of notation, let $U_{\ge R,N} := U_{R,N} \cup U_{R+1,N} \cup \ldots \cup U_{N,N}$. In the beginning no situation is preferred. We therefore apply the Principle of Indifference to the different fillings of the urn. This yields
$$P(U_{I,N}) = P(U_{J,N}) \quad \forall\, I, J \in \{0, 1, \ldots, N\}.$$

(Up) $S_{r,n}$

(Pr) Random draws

(Go) First Goal: We are interested in the probability
$$P_{r,n} := P\big(U_{\ge\lfloor N/2\rfloor+1,\,N} \mid S_{r,n}\big), \qquad r = \lfloor n/2\rfloor+1, \ldots, n, \qquad (1)$$

where $\lfloor x\rfloor$ denotes the largest integer value less than or equal to $x \in \mathbb{R}$.[*]

Second Goal: Given $S_{r,n}$ we are interested in a confidence interval $I_{\alpha,r}$ of minimal length and confidence level $\alpha \in (0,1)$ for the possible values of $R$.

2.2 Reaching the first goal

In order to determine (1) we first consider, for $R = 0, \ldots, N$:[4]
$$P(U_{R,N} \mid S_{r,n}) = \frac{P(S_{r,n} \cap U_{R,N})}{P(S_{r,n})} = \frac{P(S_{r,n} \mid U_{R,N})\, P(U_{R,N})}{\sum_{J=0}^{N} P(S_{r,n} \mid U_{J,N})\, P(U_{J,N})}\,. \qquad (2)$$

Using the well-known formula for hypergeometric distributions, we can transform (2) into
$$P(U_{R,N} \mid S_{r,n}) \overset{\text{(Ba)}}{=} \frac{P(S_{r,n} \mid U_{R,N})}{\sum_{J=0}^{N} P(S_{r,n} \mid U_{J,N})} = \frac{\binom{R}{r}\binom{N-R}{n-r} \big/ \binom{N}{n}}{\sum_{J=0}^{N} \binom{J}{r}\binom{N-J}{n-r} \big/ \binom{N}{n}} \qquad (3)$$
$$\phantom{P(U_{R,N} \mid S_{r,n})} = \frac{\binom{R}{r}\binom{N-R}{n-r}}{\binom{N+1}{n+1}}\,. \qquad (4)$$

The combinatorial identity used in the last step of (4) can be found e.g. in [Jaynes, 1996]. Substituting (2) into (1) finally gives
$$P_{r,n} = \sum_{R=\lfloor N/2\rfloor+1}^{N} P(U_{R,N} \mid S_{r,n}) \overset{(4)}{=} \frac{1}{\binom{N+1}{n+1}} \sum_{R=\lfloor N/2\rfloor+1}^{N} \binom{R}{r}\binom{N-R}{n-r}\,. \qquad (5)$$

Example 1: Worst Case ($P_{\lfloor n/2\rfloor+1,n}$ for $N = 200$)

Figure 1 shows $P_{r,n}$ for the extremal value $r = \lfloor n/2\rfloor+1$, plotted versus the fraction $n/N$ (sample size over urn size). From this figure it is obvious that we should always choose an even sample size $n$ in order to get a higher probability $P_{\lfloor n/2\rfloor+1,n}$. For an explanation of this result remember that a majority of red balls in a sample with odd (even) size is achieved when the difference between red and green balls is at least 1 (2). However, we will never get $P_{\lfloor n/2\rfloor+1,n}$ to be greater than 76%, except for the uninteresting sample sizes 2, 4, 6 and 200.

[*] The Gaussian brackets $\lfloor\,\rfloor$ are used to avoid the distinction between even and odd urn or sample sizes, respectively.
[4] The values $0 \le R < r$ denote impossible events with probability 0. For simplicity of the equations these events are included.
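The closed form (5) can be evaluated exactly with integer binomial coefficients. The following Python sketch (function and variable names are ours, not from the paper) computes $P_{r,n}$ and prints the worst-case values of Example 1:

```python
from math import comb

def p_r_n(r: int, n: int, N: int) -> float:
    """P_{r,n} from eq. (5): probability of a red majority in the urn, given
    r red balls in a sample of size n drawn without replacement from an urn
    of N balls, under a uniform prior over the urn fillings."""
    numerator = sum(comb(R, r) * comb(N - R, n - r)
                    for R in range(N // 2 + 1, N + 1))
    return numerator / comb(N + 1, n + 1)

# Worst case of Example 1: r = floor(n/2) + 1 for N = 200.
N = 200
for n in (10, 11, 50, 51, 100):
    r = n // 2 + 1
    print(f"n = {n:3d}, r = {r:3d}:  P = {p_r_n(r, n, N):.4f}")
```

Note that `math.comb` returns 0 whenever the lower index exceeds the upper one, which automatically takes care of the impossible fillings mentioned in footnote [4].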

[Figure 1: $P_{\lfloor n/2\rfloor+1,n}$ for $N = 200$ (worst case), plotted versus $n/N$, with separate curves for even and odd $n$.]

[Figure 2: $P_{n,n}$ for $N = 200$ (best case), plotted versus $n/N$. Here, a distinction between even and odd $n$ is obviously not necessary.]

Let us now check the consistency of the stochastic model with qualitative considerations in the vicinity of $n/N = 1$ (again for $N = 200$ and $r = \lfloor n/2\rfloor+1$):

- $n = 198$, i.e. $r = 100$: $U_{\ge\lfloor N/2\rfloor+1,N}$ occurs if a red and a green ball, or a green and a red ball, or two red balls appear in the last two draws; it does not occur if two green balls are drawn. Therefore, the probability should be approximately $3/4 = 75\%$.
- $n = 199$, i.e. $r = 100$: $U_{\ge\lfloor N/2\rfloor+1,N}$ occurs if a red ball appears in the last draw; it does not occur if a green ball is drawn. Therefore, the probability should be approximately $1/2 = 50\%$.
- $n = 200$, i.e. $r = 101$: $P_{101,200} = 1$, as the sample contains all balls of the urn.

Example 2: Best Case ($P_{n,n}$ for $N = 200$)

Figure 2 shows $P_{r,n}$ for the extremal value $r = n$, plotted versus the fraction $n/N$ (sample size over urn size). Obviously, $P_{n,n} = 1$ for $n \ge \lfloor N/2\rfloor+1$. At first glance it is astonishing how rapidly the curve reaches the maximum value 1. From a qualitative point of view we argue as follows: the more red balls there are in the sample (in this special case only red balls were drawn), the more likely is the appearance of red balls in the following draws. A detailed and quantitative explanation of this effect can be obtained by the following considerations: In order to study the gradual behavior[5] of $P_{n,n}$ we define a family of partial sums $\{\sigma_m\}$ of $P_{n,n}$ by
$$\sigma_m := P(\text{"at least } \lfloor N/2\rfloor+1-n \text{ red balls in the following } m \text{ draws"}), \qquad m = \lfloor N/2\rfloor+1-n, \ldots, N-n, \qquad (6)$$
given that we already drew a sample of size $n$ consisting only of red balls. Obviously,
$$\sigma_{\lfloor N/2\rfloor+1-n} \le \sigma_{\lfloor N/2\rfloor+2-n} \le \ldots \le \sigma_{N-n} = P_{n,n} \qquad (7)$$
and
$$\sigma_{\lfloor N/2\rfloor+1-n} = \frac{n+1}{n+2}\cdot\frac{n+2}{n+3}\cdots\frac{\lfloor N/2\rfloor+1}{\lfloor N/2\rfloor+2} = \frac{n+1}{\lfloor N/2\rfloor+2}\,, \qquad (8)$$
$$\sigma_{\lfloor N/2\rfloor+2-n} = \frac{n+1}{\lfloor N/2\rfloor+2}\left[\,1 + \frac{\lfloor N/2\rfloor+2-n}{\lfloor N/2\rfloor+3}\,\right] \qquad (9)$$
(see Figure 3 for a visualisation; again for the urn size $N = 200$). Now it should be clear why we chose the $\sigma_m$ instead of other values: the $\sigma_m$ allow an interpretation by themselves,[6] they can easily be obtained, and they "converge" to $P_{n,n}$ [see (7)].

[5] Analogous to the construction of power-tail distributions as limits of truncated tails, see [Greiner et al., 1998].
[6] For instance, the first sum $\sigma_{\lfloor N/2\rfloor+1-n}$, i.e. the probability for a majority of red balls in the urn after the minimal number of additional draws, is almost twice as large as the fraction $n/N$, though it is only a small part of $P_{n,n}$ itself.
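The partial sums can be checked directly against (8): under the posterior (4) for an all-red sample, the number of red balls among the next $m$ draws is hypergeometric. A Python sketch of this check (our names; a direct but slow summation, intended only as a numerical verification):

```python
from math import comb

def posterior(R: int, r: int, n: int, N: int) -> float:
    """P(U_{R,N} | S_{r,n}) from eq. (4)."""
    return comb(R, r) * comb(N - R, n - r) / comb(N + 1, n + 1)

def sigma(m: int, n: int, N: int) -> float:
    """sigma_m from eq. (6): probability of at least floor(N/2)+1-n red balls
    among the next m draws, given an all-red sample of size n."""
    need = N // 2 + 1 - n
    total = 0.0
    for R in range(n, N + 1):                        # possible urn fillings
        weight = posterior(R, n, n, N)
        remaining, reds_left = N - n, R - n
        # hypergeometric tail: at least `need` reds among the m further draws
        tail = sum(comb(reds_left, k) * comb(remaining - reds_left, m - k)
                   for k in range(need, m + 1)) / comb(remaining, m)
        total += weight * tail
    return total

N, n = 200, 60
m_min = N // 2 + 1 - n                               # minimal number of further draws
print(sigma(m_min, n, N), (n + 1) / (N // 2 + 2))    # both sides of eq. (8)
```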

[Figure 3: The partial sums $\sigma_{101-n}$, $\sigma_{102-n}$, $\sigma_{103-n}$ and $P_{n,n}$ for $N = 200$, plotted versus $n/N$.]

Example 3: Determining a minimal majority of red balls for a given security level

Normally we will be interested not in the extremal values of the last examples, but in the value of
$$r_\alpha := \min_r \{\, r \mid \lfloor n/2\rfloor+1 \le r \le n,\; P_{r,n} \ge \alpha \,\}, \qquad (10)$$
i.e. the minimal majority of red balls in the sample that causes a majority of red balls in the urn with a probability equal to or higher than $\alpha \in (0,1)$.[7] From Figure 4 we see that the minimal fraction $r_\alpha/n$ is sectionally monotonically decreasing to a value greater than 0.5 (both for the even and the odd case). Further, for $n = 1, 2$ there is no $r$ with $P_{r,n} \ge 0.9$; for $n = N/2$ (the sample size is half the urn size) $P_{r,n} \ge 0.9$ if at least 55% of the balls in the sample are red. It should be noted that $r_\alpha$ is a monotonically non-decreasing function in $\alpha$, i.e. $\alpha_1 > \alpha_2 \Rightarrow r_{\alpha_1} \ge r_{\alpha_2}$ for $\alpha_1, \alpha_2 \in (0,1)$.

[Figure 4: The fraction $r_\alpha/n$ for $N = 200$ and $\alpha = 0.90$, plotted versus $n/N$, with separate curves for even and odd $n$.]

[7] For completeness of notation we set $r_\alpha := 2n\; (> n)$ if there is no $r \in \{\lfloor n/2\rfloor+1, \ldots, n\}$ fulfilling $P_{r,n} \ge \alpha$ for a given probability level, say $\alpha \in \{0.90, 0.95\}$.
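The threshold $r_\alpha$ of (10) can be obtained by scanning $r$ upward from $\lfloor n/2\rfloor+1$ with any routine for $P_{r,n}$, e.g. the one sketched after eq. (5). A Python sketch (our names), reproducing the kind of values shown in Figure 4:

```python
from math import comb

def p_r_n(r, n, N):
    """P_{r,n} from eq. (5)."""
    return sum(comb(R, r) * comb(N - R, n - r)
               for R in range(N // 2 + 1, N + 1)) / comb(N + 1, n + 1)

def r_alpha(alpha: float, n: int, N: int) -> int:
    """Minimal sample majority r_alpha from eq. (10); returns 2*n if no
    r in {floor(n/2)+1, ..., n} reaches the level alpha (cf. footnote [7])."""
    for r in range(n // 2 + 1, n + 1):
        if p_r_n(r, n, N) >= alpha:
            return r
    return 2 * n

N, alpha = 200, 0.90
for n in (2, 10, 20, 50, 100):
    r = r_alpha(alpha, n, N)
    print(f"n = {n:3d}:  r_alpha = {r:3d}  (r_alpha/n = {r / n:.2f})")
```

Since $P_{r,n}$ is increasing in $r$ for fixed $n$, the first value reaching $\alpha$ is the minimum required by (10).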

2.3 Connection with the Rule of Succession

Let us first quote Laplace's rule of succession (RoS): A sample of size $n$ is drawn out of an urn containing an unknown number of red and green balls. Then the probability $\mathrm{RoS}_{r,n}$ of getting a red ball in the following draw is
$$\mathrm{RoS}_{r,n} = \frac{r+1}{n+2}\,, \qquad (11)$$
where $r$ again denotes the number of red balls in the sample.

We will now show an alternative way of computing $P_{r,n}$ [cf. (5)] by consecutively using (RoS). Obviously, $P_{r,n} = 1$ for $r \ge \lfloor N/2\rfloor+1$. So in the following we assume $r < \lfloor N/2\rfloor+1$:
$$P_{r,n} = \sum_{R=\lfloor N/2\rfloor+1}^{N-n+r} P(\text{"exactly } R-r \text{ red balls in the following } N-n \text{ draws"})$$
$$\phantom{P_{r,n}} = \sum_{R=\lfloor N/2\rfloor+1}^{N-n+r} \underbrace{\binom{N-n}{R-r}}_{\text{number of arrangements}} \cdot \frac{(r+1)\cdots R \,\cdot\, (n-r+1)\cdots(N-R)}{(n+2)\cdots(N+1)} \qquad [\text{by using (RoS) } N-n \text{ times}]$$
$$\phantom{P_{r,n}} = \sum_{R=\lfloor N/2\rfloor+1}^{N-n+r} \frac{(N-n)!}{(R-r)!\,(N-n-R+r)!} \cdot \frac{R!\,(n+1)!\,(N-R)!}{r!\,(N+1)!\,(n-r)!}$$
$$\phantom{P_{r,n}} = \frac{1}{\binom{N+1}{n+1}} \sum_{R=\lfloor N/2\rfloor+1}^{N-n+r} \binom{R}{r}\binom{N-R}{n-r} = \frac{1}{\binom{N+1}{n+1}} \sum_{R=\lfloor N/2\rfloor+1}^{N} \binom{R}{r}\binom{N-R}{n-r}\,,$$
which is exactly (5) (the additional terms with $R > N-n+r$ are zero).
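The derivation can also be checked numerically by applying (RoS) literally one draw at a time: propagate the distribution of the number of red balls seen over the remaining $N-n$ draws and add up the mass of all outcomes with a final red majority. A Python sketch (our names; small $N$ keeps the run instantaneous):

```python
from math import comb

def p_via_ros(r: int, n: int, N: int) -> float:
    """P_{r,n} obtained by chaining the rule of succession (11) over the
    N - n remaining draws and summing the final counts R > N/2."""
    dist = {r: 1.0}                                  # reds seen so far -> probability
    for seen in range(n, N):
        step = {}
        for reds, p in dist.items():
            p_red = (reds + 1) / (seen + 2)          # rule of succession (11)
            step[reds + 1] = step.get(reds + 1, 0.0) + p * p_red
            step[reds] = step.get(reds, 0.0) + p * (1.0 - p_red)
        dist = step
    return sum(p for R, p in dist.items() if R >= N // 2 + 1)

def p_direct(r, n, N):
    """P_{r,n} from eq. (5), for comparison."""
    return sum(comb(R, r) * comb(N - R, n - r)
               for R in range(N // 2 + 1, N + 1)) / comb(N + 1, n + 1)

N, n, r = 60, 11, 7
print(p_via_ros(r, n, N), p_direct(r, n, N))         # the two numbers agree
```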

[Figure 5: A visualisation of $P_{n,n}$ for $N = 6$ and $n = 2$ as a probability tree over the successive draws.]

Note that we did not explicitly use (Ba) in this derivation. However, (Ba) is included in (RoS) itself (the total number of balls in the urn does not appear in (RoS) but in its derivation, see e.g. [Greiner & Schramm, 1994]).

2.4 Reaching the second goal

Given $S_{r,n}$ we are now interested in a confidence interval $I_{\alpha,r}$ of minimal length and confidence level $\alpha \in (0,1)$ for the possible values of $R$. As $P(U_{R,N} \mid S_{r,n})$ is unimodal [see (4)], with its maximum value at the position given in (18), $I_{\alpha,r}$ can be determined by the following simple algorithm:

  Let $P_{(0)}, P_{(1)}, \ldots, P_{(N)}$ be an ordering of $\{P(U_{R,N} \mid S_{r,n}) \mid 0 \le R \le N\}$
    such that $P_{(0)} \le P_{(1)} \le \ldots \le P_{(N)}$
  J := N
  SUM := 0
  REPEAT
    SUM := SUM + $P_{(J)}$
    J := J - 1
  UNTIL (SUM >= $\alpha$)

It is easily seen that $I_{\alpha,r} := [R^{\mathrm{lower}}_{\alpha,r},\, R^{\mathrm{upper}}_{\alpha,r}]$ is the union of all values of $R$ that were used in the construction of SUM.
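The algorithm translates directly into code: compute the posterior (4) for every filling $R$, add the probabilities from the largest downward until the accumulated mass reaches $\alpha$, and report the smallest and largest $R$ that were used. A Python sketch (our names):

```python
from math import comb

def confidence_interval(alpha: float, r: int, n: int, N: int):
    """I_{alpha,r}: interval of urn fillings R collecting a posterior mass
    of at least alpha under P(U_{R,N} | S_{r,n}) from eq. (4)."""
    post = [comb(R, r) * comb(N - R, n - r) / comb(N + 1, n + 1)
            for R in range(N + 1)]
    by_mass = sorted(range(N + 1), key=lambda R: post[R], reverse=True)
    used, total = [], 0.0
    for R in by_mass:                                # REPEAT ... UNTIL (SUM >= alpha)
        used.append(R)
        total += post[R]
        if total >= alpha:
            break
    return min(used), max(used)                      # (R_lower, R_upper)

print(confidence_interval(0.90, 12, 20, 200))
```

Because the posterior is unimodal, the values collected this way form a contiguous block, so reporting its minimum and maximum loses nothing.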

[Figure 6: The confidence interval $I_{0.9,r}$ for $r = \lfloor n/2\rfloor+1$ and $N = 200$, plotted versus $n/N$.]

Example 5: Worst Case ($P_{\lfloor n/2\rfloor+1,n}$ for $N = 200$)

In the worst case $r = \lfloor n/2\rfloor+1$, Figure 6 shows a typical confidence interval, i.e. increasing lower bounds and diminishing upper bounds.

Example 6: Best Case ($P_{n,n}$ for $N = 200$)

In the best case $r = n$, we can obtain from Figure 7 the different values for the lower confidence bound $R^{\mathrm{lower}}_{\alpha,r}$, while the upper confidence bound $R^{\mathrm{upper}}_{\alpha,r}$ is equal to $N$.

2.5 The advantage of (further) drawings - an analysis by decision theoretic considerations

Let us now consider the advantage of further drawings, given a sample $S_{r,n}$. In discussions we came across two different intuitive views:

- By drawing additional balls we increase our knowledge about the distribution in the whole urn. As we reduce the possible fillings of the urn, we therefore expect more correct decisions when guessing the content of the urn.
- If we have a sample $S_{r,n}$ with $r \ge \lfloor n/2\rfloor+1$ red balls and we draw another red ball, the confidence in our decision for $R > N-R$ increases. On the other hand, if we draw a green ball, our confidence obviously is weakened. So why should we draw another ball if we cannot guarantee that we will at least keep our current level of confidence?

[Figure 7: The confidence interval $I_{0.9,r}$ for $r = n$ and $N = 200$, plotted versus $n/N$.]

At first glance these intuitions seem to be inconsistent. We will therefore show in detail that both describe a correct aspect of the given situation. We first recall some terms from Sections 2.2 and 2.3. By the theorem of total probability and (11) we immediately get
$$P_{r,n} = P_{r+1,n+1}\,\mathrm{RoS}_{r,n} + P_{r,n+1}\,\mathrm{RoS}_{n-r,n} \qquad (12)$$
in analogy to the probability tree in Figure 5. Please note that in (12) and in the further context we implicitly extend the definition of $P_{r,n}$ [cf. (1)] to arbitrary values of $r$ between 0 and $n$. In contrast to the previous sections we are not interested in the probability $P_{r,n}$ itself but in a measure of the probability of a correct decision (more red than green balls in the urn or otherwise) based on a given sample. We chose
$$D_{r,n} := \max\{P_{r,n},\, P_{n-r,n}\} = P_{\max\{r,\,n-r\},\,n} \qquad (13)$$
as a suitable measure for that purpose, because if $P_{r,n} < 0.5$ we will of course decide that $N-R > R$ and be correct with probability $P_{n-r,n}$. Using (12) and (13) we finally get an inequality for the $D_{r,n}$,
$$D_{r,n} \le D_{r+1,n+1}\,\mathrm{RoS}_{r,n} + D_{r,n+1}\,\mathrm{RoS}_{n-r,n}\,, \qquad (14)$$
that can be regarded as a decision tree. The $\le$ sign immediately results from the maximum operator in (13).
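Relation (12) and the correctness measure (13) are easy to verify numerically; the following Python sketch (our names) checks (12) for the parameters of Example 7:

```python
from math import comb

def p_r_n(r, n, N):
    """P_{r,n} from eq. (5), extended to arbitrary 0 <= r <= n."""
    return sum(comb(R, r) * comb(N - R, n - r)
               for R in range(N // 2 + 1, N + 1)) / comb(N + 1, n + 1)

def D(r, n, N):
    """Correctness measure D_{r,n} from eq. (13)."""
    return max(p_r_n(r, n, N), p_r_n(n - r, n, N))

N, r, n = 200, 15, 25
ros_red = (r + 1) / (n + 2)                          # RoS_{r,n}, eq. (11)
ros_green = (n - r + 1) / (n + 2)                    # RoS_{n-r,n}
lhs = p_r_n(r, n, N)
rhs = p_r_n(r + 1, n + 1, N) * ros_red + p_r_n(r, n + 1, N) * ros_green
print(lhs, rhs)                                      # equal up to rounding, eq. (12)
print(D(r, n, N))                                    # correctness of deciding R > N - R
```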

[Figure 8: $\Delta D^m_{r,n}$ for $r = 15$, $n = 25$, and $N = 200$, plotted versus $m$.]

Now we are able to determine the gain of correctness $\Delta D^m_{r,n}$ for $m$ additional drawings, $m = 1, \ldots, N-n$, as [(the expected correctness after $n+m$ drawings) minus (the correctness after the initial $n$ drawings)], i.e.
$$\Delta D^m_{r,n} = \left[\, \sum_{s=0}^{m} \binom{m}{s}\, \frac{\prod_{i=1}^{s}(r+i)\,\prod_{i=1}^{m-s}(n-r+i)}{\prod_{i=1}^{m}(n+1+i)}\; D_{r+s,\,n+m} \right] - D_{r,n}\,, \qquad (15)$$
where $s$ denotes the number of red balls in the additional sample of size $m$. It is obvious that such a gain can only be achieved if $m$ is greater than or equal to $2r-n$.[8]

Example 7: $\Delta D^m_{r,n}$ for $r = 15$, $n = 25$, and $N = 200$

It can easily be seen in Figure 8 that indeed $m = 5\; (= 2r-n)$ is some sort of threshold value: only above that value is a gain achieved (as supposed in the first intuitive view), whereas below that value positive and negative effects neutralize each other, resulting in no further gain (compare the second intuitive view).

[8] As there is a majority of red balls in the first sample of size $n$, i.e. $r \ge \lfloor n/2\rfloor+1$, an additional sample of size $m < 2r-n$ would be of no influence on the decision whether there is a majority of red balls in the urn or not.
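Equation (15) and the threshold of Example 7 can be reproduced with the following Python sketch (our names; `p_r_n` and `D` are as in the previous sketch and are repeated here to keep the block self-contained):

```python
from math import comb, prod

def p_r_n(r, n, N):
    return sum(comb(R, r) * comb(N - R, n - r)
               for R in range(N // 2 + 1, N + 1)) / comb(N + 1, n + 1)

def D(r, n, N):
    return max(p_r_n(r, n, N), p_r_n(n - r, n, N))   # eq. (13)

def gain(m: int, r: int, n: int, N: int) -> float:
    """Delta D^m_{r,n} from eq. (15): expected increase of the decision
    correctness obtained from m additional draws after observing S_{r,n}."""
    expected = 0.0
    for s in range(m + 1):                           # s = reds among the m new draws
        weight = (comb(m, s)
                  * prod(r + i for i in range(1, s + 1))
                  * prod(n - r + i for i in range(1, m - s + 1))
                  / prod(n + 1 + i for i in range(1, m + 1)))
        expected += weight * D(r + s, n + m, N)
    return expected - D(r, n, N)

r, n, N = 15, 25, 200
for m in range(1, 11):
    print(f"m = {m:2d}:  gain = {gain(m, r, n, N):+.5f}")   # threshold at m = 2r - n = 5
```

Under the reading of $m_\alpha$ given in the next subsection, the same routine evaluated at $r = 0$, $n = 0$ can also be used for the a priori calculation of the necessary sample size.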

[Table 1: $m_\alpha$ for different values of $\alpha$ ($\alpha = 0.85, 0.90, 0.95$) and $N$; a "$-$" entry denotes that the demanded level of correctness $\alpha$ could not be reached.]

Application: A priori calculation of the necessary sample size for a given security level

Up to now we assumed that a sample $S_{r,n}$ was already drawn. However, it can be of considerable interest[9] to determine a priori the minimal number of balls to be drawn in order to gain a certain level of security for the chosen decision. In other words: we want to determine
$$m_\alpha := \min_{1 \le m \le N} \{\, m \mid D_{0,0} + \Delta D^m_{0,0} \ge \alpha \,\}.$$
Table 1 shows how rapidly $m_\alpha$ stabilizes for increasing $N$; for $N > 1000$ it is almost independent of $N$. In the appendix we will further present simulation results that support these analytic calculations.

3 Situation B

3.1 Describing the situation

(Fr) as in Situation A

(Ba) as in Situation A

(Up) A sample of size $n$ with $r/n > 0.5$ (incomplete knowledge about the sample)[10]

(Pr) random draws (as in Situation A)

(Go) We are interested in the probability
$$P_{\mathrm{maj}} := P\big(U_{\ge\lfloor N/2\rfloor+1,\,N} \mid S_{\ge\lfloor n/2\rfloor+1,\,n}\big), \qquad S_{\ge\lfloor n/2\rfloor+1,\,n} := S_{\lfloor n/2\rfloor+1,\,n} \cup \ldots \cup S_{n,n}\,, \qquad (16)$$
that an arbitrary majority of red balls in the sample means that we have a majority of red balls in the urn.

[9] E.g. for organizers of software or hardware competitions (see [Sutcliffe & Suttner, 1996]).
[10] For example, this question occurs when someone else draws the sample and only tells us that there was a majority of red balls in the sample.
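Under the reading of (16) as a conditional probability given only the event "red majority in the sample", $P_{\mathrm{maj}}$ can be computed from the same uniform-prior urn model by summing joint probabilities over all sample majorities. A Python sketch of this reading (our names):

```python
from math import comb

def p_maj(n: int, N: int) -> float:
    """P_maj from eq. (16): probability of a red majority in the urn, given
    only that the sample of size n contains a red majority (uniform prior)."""
    def joint(R: int, r: int) -> float:
        # P(S_{r,n} | U_{R,N}) * P(U_{R,N}): hypergeometric times uniform prior
        return comb(R, r) * comb(N - R, n - r) / comb(N, n) / (N + 1)
    majority_r = range(n // 2 + 1, n + 1)
    num = sum(joint(R, r) for r in majority_r for R in range(N // 2 + 1, N + 1))
    den = sum(joint(R, r) for r in majority_r for R in range(N + 1))
    return num / den

N = 200
for n in (10, 20, 50, 100):                          # cf. Example 8 (average case)
    print(f"n = {n:3d}:  P_maj = {p_maj(n, N):.4f}")
```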
