15-451/651: Design & Analysis of Algorithms October 23, 2018 Lecture #17: Prediction from Expert Advice last changed: October 25, 2018
1 Prediction with Expert Advice

Today we'll study the problem of making predictions based on expert advice. There are n "experts" (who are just people with opinions, the quotes suggesting that they may or may not have any expertise in the matter). Each day the following sequence of events happens:

- We see the n experts' predictions of the outcome.
- We make our own prediction about the outcome.
- In parallel, the actual outcome is revealed. We are correct if we made the right prediction, and make a mistake otherwise.

This process goes on indefinitely. Our goal: at any time (say after some T days), if the best of the experts so far has made only a few mistakes, we want to have made not too many more mistakes. We want to bound the number of mistakes, and hence this is called the mistake-bound model.

2 Warmup: Simple Strategies

To start off, say we have just Up/Down predictions. (If the market will go up or down, or if the weather will be fair or not.)

Suppose we know the best expert makes no mistakes. Can we hope to make only a few mistakes? Here's a strategy that makes only log_2 n mistakes: just predict what the majority of the remaining experts predicts. (In case of a tie, choose arbitrarily.) When an expert is incorrect, discard him. So each time we make a mistake, we discard at least half of the remaining experts. This means after log_2 n mistakes we will be left with the perfect expert.

Exercise 1: Argue that you cannot get an algorithm that makes fewer than log_2 n mistakes in this setting.

Suppose instead the best expert makes at most M mistakes on some sequence. Can we still hope to make only a few mistakes? Here's a strategy that makes only (M + 1)(log_2 n + 1) mistakes. Run the above majority-and-halving strategy, but when you have discarded all the experts, bring them all back (call this the beginning of a new "phase"), and continue.
Note that in each phase each expert makes at least one mistake, and you made at most log_2 n + 1 mistakes. Hence, if the best expert makes only M mistakes, there would be at most M finished phases (plus the last unfinished one), and hence at most (M + 1)(log_2 n + 1) mistakes in all.

Exercise 2: Suppose the predictions belonged to some set of K items. (E.g., you could say cloudy, sunny, rain, or snow.) Give algorithms showing the bound of O(M log n) mistakes still holds. Let's assume n is a power of 2.
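To make the phase strategy concrete, here is a minimal Python sketch (the function name, the "U"/"D" encoding, and the list-of-lists input format are my own choices, not from the notes):

```python
def majority_and_halving(predictions, outcomes):
    """Predict with the majority of the 'alive' experts, discard any
    expert that errs, and start a new phase (reviving everyone) once
    all experts have been discarded.

    predictions[t][i] is expert i's "U"/"D" prediction on day t;
    outcomes[t] is the true outcome. Returns our mistake count."""
    n = len(predictions[0])
    alive = set(range(n))
    mistakes = 0
    for preds, outcome in zip(predictions, outcomes):
        if not alive:                  # everyone discarded: new phase
            alive = set(range(n))
        ups = sum(1 for i in alive if preds[i] == "U")
        guess = "U" if ups >= len(alive) - ups else "D"   # tie -> "U"
        if guess != outcome:
            mistakes += 1
        alive -= {i for i in alive if preds[i] != outcome}
    return mistakes
```

If some expert is perfect we never leave the first phase and make at most log_2 n mistakes; in general each phase costs us at most log_2 n + 1 mistakes and charges every expert at least one.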
3 The Weighted Majority / Multiplicative Weights Algorithm

Throwing away an expert when he makes a mistake seems too drastic. Suppose we instead assign weights w_j to the experts, sum the weights of the experts saying Up, sum the weights of the experts saying Down, and predict the outcome with the greater weight. (This is called the weighted majority rule, since we are following the advice of the experts that form the weighted majority.) Then once we see the outcome, we can reduce the weights of the experts who were wrong. In the above algorithm we were zeroing out the weight, but suppose we are gentler?

The (basic) deterministic weighted majority algorithm does the following: Start with each expert having weight 1. Each time an expert makes a mistake, halve his weight. Output the prediction of the experts who form the weighted majority.

Remarkably, we can get a much stronger result now.

Theorem 1: If on some sequence of days the best expert makes M mistakes, then the basic deterministic weighted majority algorithm makes at most 2.41(M + log_2 n) mistakes.

Proof: Let Φ := Σ_{i=1}^n w_i be the sum of the weights of the n experts. Note that initially Φ = n. Moreover, we claim that each time we make a mistake, Φ_new ≤ (3/4) Φ_old. Indeed, at least half the weight (the part making the majority prediction) gets halved (because it made a mistake), so we lose at least a quarter of the total weight with each mistake. (Also, if we don't make a mistake, Φ does not increase.) So if we've made m mistakes at some point, the total weight is at most

  Φ_final ≤ (3/4)^m Φ_init = (3/4)^m n.

Moreover, if the best expert i has made M mistakes, then Φ_final ≥ w_i = (1/2)^M. So (1/2)^M ≤ (3/4)^m n, i.e., (4/3)^m ≤ n 2^M. Taking logs (base 2) and noting that 1/log_2(4/3) ≤ 2.41 completes the proof.

This is pretty cool! If the best expert makes mistakes 10% of the time, we err about 24% of the time (plus the O(log n) term, but since this depends only on the number of experts, it will be a negligible fraction once M ≫ log n).
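The deterministic weighted majority rule can be sketched just as briefly (a toy implementation under the same input conventions as before; names are mine):

```python
def weighted_majority(predictions, outcomes):
    """Basic deterministic weighted majority: all weights start at 1,
    we predict the side carrying the larger total weight, and every
    expert that errs has its weight halved. Returns our mistake count."""
    n = len(predictions[0])
    w = [1.0] * n
    mistakes = 0
    for preds, outcome in zip(predictions, outcomes):
        up = sum(wi for wi, p in zip(w, preds) if p == "U")
        down = sum(w) - up
        guess = "U" if up >= down else "D"    # weighted majority, tie -> "U"
        if guess != outcome:
            mistakes += 1
        # halve the weight of every expert that was wrong today
        w = [wi / 2 if p != outcome else wi for wi, p in zip(w, preds)]
    return mistakes
```

By the potential argument, this never makes more than about 2.41(M + log_2 n) mistakes, no matter the sequence.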
We can improve the 2.41 factor down to as close to 2 as we want, as the next exercise shows.

Exercise 3: Show that if we reduce the weight of a mistaken expert not by 1/2 but by a factor of (1 − ε) for some ε ≤ 1/2, then the number of mistakes is at most 2(1 + ε)M + O((log n)/ε). (Hint: check out the approximations we use in the proof of Theorem 2.)

However, you can show that no deterministic prediction algorithm can make fewer than 2M mistakes. How? Take two experts: one says Up all the time, the other Down all the time. Fix any prediction algorithm. Since this algorithm is deterministic, you know what prediction it will make on any day if you know what happened on all previous days. As the adversary, you can make the real outcome on each day be the opposite of the algorithm's prediction for that day. So the algorithm makes a mistake on every day. But one of the two experts must be right on at least 50% of the days.
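The adversary in this argument is easy to run as code: it simply flips whatever the deterministic predictor outputs. A small harness (entirely my own framing; `predictor` is any deterministic function of the outcome history):

```python
def adversary_run(predictor, days):
    """Play 'days' rounds against a deterministic predictor, always
    revealing the opposite outcome. Returns (our mistakes, mistakes of
    the always-"U" expert); the always-"D" expert errs on the rest."""
    history = []
    our_mistakes, up_expert_mistakes = 0, 0
    for _ in range(days):
        guess = predictor(history)
        outcome = "D" if guess == "U" else "U"   # adversary flips our guess
        our_mistakes += 1                        # so we are always wrong
        if outcome != "U":
            up_expert_mistakes += 1
        history.append(outcome)
    return our_mistakes, up_expert_mistakes
```

Whatever the predictor does, it errs on all `days` rounds, while one of the two constant experts errs on at most half of them.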
4 Randomized Weighted Majority

Given the lower bound above for deterministic algorithms, randomization seems like a natural source of help. (At least that example would fail.) Let's define the Randomized Weighted Majority algorithm as follows: Start with unit weights. At each time, predict Up with probability (Σ_{j says Up} w_j) / (Σ_j w_j), and Down otherwise. Whenever an expert makes a mistake, multiply its weight by (1 − ε).

Note that the weights behave pretty much as in the basic WM algorithm, just reduced more gently. (Moreover, note that the prediction we make does not alter the weight reduction step.) A slightly different, but equivalent, view is the following: Start with unit weights. At each time, pick a random expert, where expert i is picked with probability w_i / (Σ_j w_j), and predict that picked expert's prediction. And each time an expert makes a mistake, multiply its weight by (1 − ε).

The analysis is very similar to Theorem 1; we just need to handle expectations.

Theorem 2: Let ε ≤ 1/2. If on some sequence of days the best expert makes M mistakes, then the expected number of mistakes the randomized weighted majority algorithm makes is at most (1 + ε)M + (ln n)/ε.

Proof: Again, let us look at the potential Φ = Σ_j w_j, the total weight in the system. Having fixed the outcomes on all days, this potential varies deterministically. Let F_t be the fraction of the total weight on the t-th day that is on the experts who make a mistake on that day. On day t our probability of making a mistake is F_t. Hence the expected number of mistakes we make is Σ_t F_t. On the t-th day, Φ_new = Φ_old (1 − εF_t). So

  Φ_final = n Π_t (1 − εF_t) ≤ n e^{−ε Σ_t F_t},

where we used that (1 + x) ≤ e^x for all x ∈ R. Again, Φ_final ≥ (1 − ε)^M, so

  (1 − ε)^M ≤ n e^{−ε Σ_t F_t}, which gives ε Σ_t F_t ≤ M ln(1/(1 − ε)) + ln n.

We can now use that ln(1/(1 − ε)) = −ln(1 − ε) ≤ ε + ε² for ε ∈ [0, 1/2] to get that the expected number of mistakes Σ_t F_t is at most M(1 + ε) + (ln n)/ε.
Pretty sweet, right? The number of mistakes we make is almost the same as the best expert's, plus an additive term that depends only on n and ε, and is independent of the input sequence. Since the optimal number of mistakes M is at most T, the number of days, we can say

  our # of errors ≤ optimal # of errors + εT + (ln n)/ε.    (1)

Dividing both sides by T, we get the error rate (i.e., errors per unit time):

  our error rate ≤ optimal error rate + ε + (ln n)/(εT).    (2)

And now setting ε = sqrt((ln n)/T):

  our error rate ≤ optimal error rate + 2 sqrt((ln n)/T).    (3)

This last term is called the regret. So as we do the prediction for longer and longer (i.e., as T → ∞), our error rate gets as close as we want to the optimal error rate. (We have vanishing regret.)

4.1 Some Extensions (optional material)

Repeated Zero-Sum Game. The interpretation of choosing a random expert (according to the probability distribution p_i = w_i / Σ_j w_j) allows us to view the process as playing a two-player zero-sum game repeatedly. Let the cost matrix contain only 0s and 1s. Each column is an expert. Each day the adversary plays a row (which defines the costs for us), and we play a column (which is like choosing an expert). Theorem 2 says that we can play almost as well as the best column in hindsight.

Fractional Costs. Suppose each day t, instead of costs that are zero (no mistake) or one (mistake), we have costs c_i^t in [0, 1]. We can extend the result to this setting with a very slight change: update the weight of expert i by w_i ← w_i (1 − ε c_i^t). With small changes in the analysis, the guarantee of Theorem 2 goes through unchanged! (Exercise: Show this!)
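Because the outcomes, and hence the weights, evolve deterministically, the expected mistake count Σ_t F_t can be computed exactly without any sampling. Below is a sketch for the fractional-cost version just described (the function name and input conventions are mine):

```python
def rwm_expected_cost(costs, eps):
    """Expected cost of randomized weighted majority with fractional
    costs: on each day the expected cost is sum_i (w_i / W) * c_i^t,
    and afterwards each weight is updated by w_i <- w_i * (1 - eps * c_i^t).

    costs[t][i] is expert i's cost on day t, a value in [0, 1]."""
    n = len(costs[0])
    w = [1.0] * n
    expected = 0.0
    for day in costs:                     # day is the cost vector c^t
        W = sum(w)
        expected += sum(wi * c for wi, c in zip(w, day)) / W
        w = [wi * (1 - eps * c) for wi, c in zip(w, day)]
    return expected
```

For instance, with four experts where expert 0 never errs and the other three always do, the returned value stays below the Theorem 2 bound (1 + ε)·0 + (ln 4)/ε.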
4.2 A Proof of the Minimax Theorem (optional material)

Recall the minimax theorem: it says that in any zero-sum game (given by the row player's payoff matrix M), if we define the row player's guaranteed payoff (which is a lower bound on what she is guaranteed to earn regardless of what the column player does)

  V_R = max_p min_j p^T M e_j = max_p min_q p^T M q,

and the column player's guaranteed payoff (which is an upper bound on what the row player can earn, regardless of what he does)

  V_C = min_q max_i e_i^T M q = min_q max_p p^T M q,

then V_C = V_R. Clearly, V_R ≤ V_C. The tricky part is the other direction, which we now use the experts theorem to prove. Suppose not: there is some zero-sum game where V_R = V_C − δ for some δ > 0, i.e., with a strict inequality. Remember:
If the column player commits to some strategy first, there is some row that the row player can play to get payoff at least V_C. On the other hand, if the row player commits first, then the column player can play so that the row player gets at most V_R = V_C − δ.

By scaling the payoffs, imagine they are in [0, 1]. Now imagine we use Randomized Weighted Majority (RWM) to play the columns. Since the distribution from which RWM draws is updated in a deterministic fashion, let the row player play optimally against this distribution at each step. In T steps, RWM has cost at most that of the best column (expert) in hindsight, plus εT + (ln n)/ε. The best column in hindsight can ensure cost at most V_R T, since it's as if the row player had played first. But at each time, the row player knows your strategy, so his expected payoff is at least V_C T. Putting these together,

  V_C T ≤ row player's payoff ≤ V_R T + εT + (ln n)/ε,

or δT ≤ εT + (ln n)/ε. But we were free to choose ε and T as we want, so if we choose ε = δ/2 and T > (ln n)/ε², then we get a contradiction. Hence there cannot be any gap δ, and V_R = V_C.
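The proof is in fact constructive: running the multiplicative-weights update on the columns while the row player best-responds yields an algorithm that approximates the value of the game. A toy sketch (the function, its parameters, and the example matrix are my own choices):

```python
def approx_game_value(M, T=2000, eps=0.05):
    """Approximate the value of a zero-sum game with row-player payoff
    matrix M (entries in [0, 1]). Multiplicative weights maintains a
    distribution q over COLUMNS; each step the row player best-responds
    to q, and each column j then pays cost M[i][j]. The average
    best-response payoff approaches the game value up to the regret."""
    rows, cols = len(M), len(M[0])
    w = [1.0] * cols
    total = 0.0
    for _ in range(T):
        W = sum(w)
        q = [wj / W for wj in w]
        # row player's best response to the current column distribution q
        payoffs = [sum(M[i][j] * q[j] for j in range(cols)) for i in range(rows)]
        best = max(payoffs)
        i = payoffs.index(best)
        total += best
        # multiplicative-weights update on the columns
        w = [wj * (1 - eps * M[i][j]) for j, wj in enumerate(w)]
    return total / T
```

On matching pennies scaled to payoffs [[1, 0], [0, 1]] (value 1/2), the estimate should land within roughly ε + (ln 2)/(εT) of 1/2.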
More informationChapter 7 Sampling and Sampling Distributions. Introduction. Selecting a Sample. Introduction. Sampling from a Finite Population
Chater 7 and s Selecting a Samle Point Estimation Introduction to s of Proerties of Point Estimators Other Methods Introduction An element is the entity on which data are collected. A oulation is a collection
More informationEvaluating Circuit Reliability Under Probabilistic Gate-Level Fault Models
Evaluating Circuit Reliability Under Probabilistic Gate-Level Fault Models Ketan N. Patel, Igor L. Markov and John P. Hayes University of Michigan, Ann Arbor 48109-2122 {knatel,imarkov,jhayes}@eecs.umich.edu
More informationLecture 10: Hypercontractivity
CS 880: Advanced Comlexity Theory /15/008 Lecture 10: Hyercontractivity Instructor: Dieter van Melkebeek Scribe: Baris Aydinlioglu This is a technical lecture throughout which we rove the hyercontractivity
More informationSolutions to exercises on delays. P (x = 0 θ = 1)P (θ = 1) P (x = 0) We can replace z in the first equation by its value in the second equation.
Ec 517 Christohe Chamley Solutions to exercises on delays Ex 1: P (θ = 1 x = 0) = P (x = 0 θ = 1)P (θ = 1) P (x = 0) = 1 z)µ (1 z)µ + 1 µ. The value of z is solution of µ c = δµz(1 c). We can relace z
More informationFeedback-error control
Chater 4 Feedback-error control 4.1 Introduction This chater exlains the feedback-error (FBE) control scheme originally described by Kawato [, 87, 8]. FBE is a widely used neural network based controller
More informationarxiv:cond-mat/ v2 25 Sep 2002
Energy fluctuations at the multicritical oint in two-dimensional sin glasses arxiv:cond-mat/0207694 v2 25 Se 2002 1. Introduction Hidetoshi Nishimori, Cyril Falvo and Yukiyasu Ozeki Deartment of Physics,
More informationA New Perspective on Learning Linear Separators with Large L q L p Margins
A New Persective on Learning Linear Searators with Large L q L Margins Maria-Florina Balcan Georgia Institute of Technology Christoher Berlind Georgia Institute of Technology Abstract We give theoretical
More informationLecture 9: Connecting PH, P/poly and BPP
Comutational Comlexity Theory, Fall 010 Setember Lecture 9: Connecting PH, P/oly and BPP Lecturer: Kristoffer Arnsfelt Hansen Scribe: Martin Sergio Hedevang Faester Although we do not know how to searate
More informationThe Knuth-Yao Quadrangle-Inequality Speedup is a Consequence of Total-Monotonicity
The Knuth-Yao Quadrangle-Ineuality Seedu is a Conseuence of Total-Monotonicity Wolfgang W. Bein Mordecai J. Golin Lawrence L. Larmore Yan Zhang Abstract There exist several general techniues in the literature
More informationMath 5330 Spring Notes Prime Numbers
Math 5330 Sring 208 Notes Prime Numbers The study of rime numbers is as old as mathematics itself. This set of notes has a bunch of facts about rimes, or related to rimes. Much of this stuff is old dating
More informationME 375 System Modeling and Analysis. Homework 11 Solution. Out: 18 November 2011 Due: 30 November 2011 = + +
Out: 8 November Due: 3 November Problem : You are given the following system: Gs () =. s + s+ a) Using Lalace and Inverse Lalace, calculate the unit ste resonse of this system (assume zero initial conditions).
More informationStochastic integration II: the Itô integral
13 Stochastic integration II: the Itô integral We have seen in Lecture 6 how to integrate functions Φ : (, ) L (H, E) with resect to an H-cylindrical Brownian motion W H. In this lecture we address the
More informationA note on the random greedy triangle-packing algorithm
A note on the random greedy triangle-acking algorithm Tom Bohman Alan Frieze Eyal Lubetzky Abstract The random greedy algorithm for constructing a large artial Steiner-Trile-System is defined as follows.
More informationChemical Kinetics and Equilibrium - An Overview - Key
Chemical Kinetics and Equilibrium - An Overview - Key The following questions are designed to give you an overview of the toics of chemical kinetics and chemical equilibrium. Although not comrehensive,
More informationJohn Weatherwax. Analysis of Parallel Depth First Search Algorithms
Sulementary Discussions and Solutions to Selected Problems in: Introduction to Parallel Comuting by Viin Kumar, Ananth Grama, Anshul Guta, & George Karyis John Weatherwax Chater 8 Analysis of Parallel
More informationClassical gas (molecules) Phonon gas Number fixed Population depends on frequency of mode and temperature: 1. For each particle. For an N-particle gas
Lecture 14: Thermal conductivity Review: honons as articles In chater 5, we have been considering quantized waves in solids to be articles and this becomes very imortant when we discuss thermal conductivity.
More information0.1 Motivating example: weighted majority algorithm
princeton univ. F 16 cos 521: Advanced Algorithm Design Lecture 8: Decision-making under total uncertainty: the multiplicative weight algorithm Lecturer: Sanjeev Arora Scribe: Sanjeev Arora (Today s notes
More informationHomework Set #3 Rates definitions, Channel Coding, Source-Channel coding
Homework Set # Rates definitions, Channel Coding, Source-Channel coding. Rates (a) Channels coding Rate: Assuming you are sending 4 different messages using usages of a channel. What is the rate (in bits
More information