(Not) Bounding the True Error


John Langford, Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA
Rich Caruana, Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA

Abstract

We present a new approach to bounding the true error rate of a continuous valued classifier based upon PAC-Bayes bounds. The method first constructs a distribution over classifiers by determining how sensitive each parameter in the model is to noise. The true error rate of the stochastic classifier found with the sensitivity analysis can then be tightly bounded using a PAC-Bayes bound. In this paper we demonstrate the method on artificial neural networks, with results of a 2-3 order of magnitude improvement vs. the best deterministic neural net bounds.

1 Introduction

In machine learning it is important to know the true error rate a classifier will achieve on future test cases. Estimating this error rate can be surprisingly difficult. For example, all known bounds on the true error rate of artificial neural networks tend to be extremely loose and often result in the meaningless bound of "always err" (error rate = 1.0). In this paper, we do not bound the true error rate of a neural network. Instead, we bound the true error rate of a distribution over neural networks which we create by analysing one neural network. (Hence, the title.) This approach proves to be much more fruitful than trying to bound the true error rate of an individual network. The best current approaches [1][2] often require 10,000, 100,000, or more examples before producing a nontrivial bound on the true error rate. We produce nontrivial bounds on the true error rate of a stochastic neural network with only 100 examples.

Our approach uses the PAC-Bayes bound [4]. The approach can be thought of as a redivision of the work between the experimenter and the theoretician: we make the experimenter work harder so that the theoretician's true error bound becomes much tighter. This extra work on the part of the experimenter is significant but tractable, and the resulting bounds are much tighter. An alternative viewpoint is that the classification problem is finding a hypothesis with a low upper bound on the future error rate. We present a post-processing phase for neural networks which results in a classifier with a much lower upper bound on the future error rate. The post-processing can be used with any artificial neural net trained with any optimization method; it does not require that the learning procedure be modified, re-run, or even that the threshold function be differentiable. In fact, this post-processing step can easily be adapted to other learning algorithms.

The post-processing step finds a large distribution over classifiers which has a small average empirical error rate. Given the average empirical error rate, it is straightforward to apply the PAC-Bayes bound in order to find a bound on the average true error rate. We find this large distribution over classifiers by performing a simple noise sensitivity analysis on the learned model. The noise model allows us to generate a distribution of classifiers with a known, small, average empirical error rate.

In this paper we refer to the distribution of neural nets that results from this noise analysis as a stochastic neural net model.

Why do we expect the PAC-Bayes bound to be a significant improvement over standard covering number and VC bound approaches? There exist learning problems for which the difference between the lower bound and the PAC-Bayes upper bound is tight up to a logarithmic factor in $m$, where $m$ is the number of training examples. This is superior to the guarantees which can be made for typical covering number bounds, where the gap is, at best, known up to an (asymptotic) constant. The guarantee that PAC-Bayes bounds are sometimes quite tight encourages us to apply them here.

The next sections will:

1. Describe the bounds we will compare.
2. Describe our algorithm for constructing a distribution over neural networks.
3. Present experimental results.

2 Theoretical setup

We will work in the standard supervised batch learning setting. This setting starts with the assumption that all examples are drawn from some fixed (unknown) distribution, $D$, over (input, output) pairs, $(x, y)$. The output is drawn from the space $Y = \{-1, 1\}$ and the input space $X$ is arbitrary. The goal of machine learning is to use a sample set $S$ of $m$ pairs to find a classifier, $h$, which maps the input space to the output space and has a small true error, $e(h) \equiv \Pr_{(x,y) \sim D}(h(x) \neq y)$. Since the distribution $D$ is unknown, the true error rate is not observable. However, we can observe the empirical error rate, $\hat{e}(h) \equiv \Pr_{(x,y) \sim S}(h(x) \neq y) = \frac{1}{m} \sum_{i=1}^{m} I(h(x_i) \neq y_i)$.

Now that the basic quantities of interest are defined, we will first present a modern neural network bound, then specialize the PAC-Bayes bound to a stochastic neural network. A stochastic neural network is simply a neural network where each weight in the neural network is drawn from some distribution whenever it is used. We will describe our technique for constructing the distribution of the stochastic neural network.

2.1 Neural Network bound

We will compare a specialization of the best current neural network true error rate bound [2] with our approach. The neural network bound is described in terms of the following parameters:

1. A margin, $0 < \gamma \le 1$.
2. A function $\varphi$ defined by $\varphi(z) = 1$ if $z \le 0$, $\varphi(z) = 0$ if $z \ge 1$, and linear in between.
3. $A_i$, an upper bound on the sum of the magnitudes of the weights in the $i$th layer of the neural network.
4. $L_i$, a Lipschitz constant which holds for the $i$th layer of the neural network. A Lipschitz constant is a bound on the magnitude of the derivative.
5. $n$, the size of the input space.

With these parameters defined, we get the following bound.

Theorem 2.1 (2 layer feed-forward Neural Network true error bound) With probability $1 - \delta$ over the draw of the sample set $S$,

$$\forall h: \quad e(h) \le \hat{E}_{(x,y) \sim S}\, \varphi\!\left(\frac{y\,h(x)}{\gamma}\right) + \frac{c}{\sqrt{m}} \left( \frac{A_1 A_2 L_1 L_2 \sqrt{\ln n}}{\gamma} + \sqrt{\ln \frac{1}{\delta}} \right)$$

Proof: Given in [2]. The theorem is actually only given up to a universal constant; $c = 1$ might be the right choice, but this is just an educated guess. The neural network true error bound above is (perhaps) the tightest known bound for general feed-forward neural networks and so it is the natural bound to compare with.
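To make the empirical term of Theorem 2.1 concrete, here is a minimal sketch in Python of computing $\hat{E}_{(x,y) \sim S}\, \varphi(y h(x)/\gamma)$. The names are illustrative, and h is assumed to return a real-valued score whose sign is the predicted label:

    import numpy as np

    def ramp_loss(z):
        # phi(z): 1 for z <= 0, 0 for z >= 1, linear in between.
        return np.clip(1.0 - z, 0.0, 1.0)

    def empirical_margin_error(h, X, y, gamma):
        # Average of phi(y * h(x) / gamma) over the sample set S.
        # Misclassified points cost 1, correct points with margin below
        # gamma cost a fraction, confidently correct points cost 0.
        margins = np.array([yi * h(xi) for xi, yi in zip(X, y)])
        return float(ramp_loss(margins / gamma).mean())

As $\gamma$ shrinks this term approaches the ordinary empirical error $\hat{e}(h)$, while the complexity term of the bound grows as $1/\gamma$, so the margin trades the two off.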

This 2 layer feed-forward bound is not easily applied in a tight manner because we cannot calculate a priori what our weight bound $A_i$ should be. This can be patched up using the principle of structural risk minimization. In particular, we can state the bound for $A_i = c^k$, where $k$ is some non-negative integer and $c > 1$ is a constant. If the $k$th bound holds with probability $1 - \delta/2^{k+1}$, then all bounds will hold simultaneously with probability $1 - \delta$, since $\sum_{k=0}^{\infty} \delta/2^{k+1} = \delta$. Thus, we get the following theorem:

Theorem 2.2 (2 layer feed-forward Neural Network true error bound) With probability $1 - \delta$ over the draw of the sample set $S$, the bound of Theorem 2.1 holds simultaneously for all $h$ and all weight bounds $A_1 = c^{k_1}$, $A_2 = c^{k_2}$, with $\delta$ replaced by $\delta / (2^{k_1 + 1} 2^{k_2 + 1})$.

Proof: Apply the union bound to all possible values of $k_1$ and $k_2$ as discussed above. In practice, we evaluate the bound for all applicable values of $k_1$ and $k_2$ and report the value of the tightest applicable bound.

2.2 Stochastic Neural Network bound

Our approach will start with a simple refinement [3] of the original PAC-Bayes bound [4]. We will first specialize this bound to stochastic neural networks and then show that the use of this bound in conjunction with a post-processing algorithm results in a much tighter true error rate upper bound. First, we will need to define some parameters of the theorem.

1. $Q$ is a distribution over the hypotheses which can be found in an example dependent manner.
2. $P$ is a distribution over the hypotheses which is chosen a priori, without dependence on the examples.
3. $e(Q) = E_{h \sim Q}\, e(h)$ is the true error rate of the stochastic hypothesis which, in any evaluation, draws a hypothesis $h$ from $Q$, and outputs $h(x)$.
4. $\hat{e}(Q) = E_{h \sim Q}\, \hat{e}(h)$ is the average empirical error rate of the same stochastic hypothesis.

Now, we are ready to state the theorem.

Theorem 2.3 (PAC-Bayes Relative Entropy Bound) For all priors $P$,

$$\Pr_{S \sim D^m}\!\left(\forall Q: \ \mathrm{KL}(\hat{e}(Q) \,\|\, e(Q)) \le \frac{\mathrm{KL}(Q \,\|\, P) + \ln \frac{m+1}{\delta}}{m}\right) \ge 1 - \delta$$

where $\mathrm{KL}(Q \,\|\, P) = E_{h \sim Q} \ln \frac{Q(h)}{P(h)}$ is the Kullback-Leibler divergence between the distributions $Q$ and $P$, and $\mathrm{KL}(\hat{e}(Q) \,\|\, e(Q))$ is the KL divergence between a coin of bias $\hat{e}(Q)$ and a coin of bias $e(Q)$.

Proof: Given in [3].

We need to specialize this theorem for application to a stochastic neural network with a choice of the prior $P$. Our prior will be zero on all neural net structures other than the one we train, and a multidimensional isotropic gaussian on the values of the weights in our neural network. The multidimensional gaussian will have a mean of 0 and a variance in each dimension of $\sigma^2$. This choice is made for convenience and happens to work.
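Applying Theorem 2.3 in practice requires inverting the binary KL divergence: given $\hat{e}(Q)$ and the right-hand side, the bound on $e(Q)$ is the largest $p$ with $\mathrm{KL}(\hat{e}(Q) \,\|\, p)$ no larger than the right-hand side. A minimal sketch, with illustrative names, using binary search (the KL is increasing in $p$ for $p \ge \hat{e}(Q)$):

    import math

    def kl_bernoulli(q, p):
        # KL divergence between a coin of bias q and a coin of bias p.
        eps = 1e-12
        q = min(max(q, eps), 1.0 - eps)
        p = min(max(p, eps), 1.0 - eps)
        return q * math.log(q / p) + (1.0 - q) * math.log((1.0 - q) / (1.0 - p))

    def pac_bayes_true_error_bound(e_hat, rhs):
        # Largest p in [e_hat, 1] with KL(e_hat || p) <= rhs, by bisection.
        lo, hi = e_hat, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2.0
            if kl_bernoulli(e_hat, mid) <= rhs:
                lo = mid
            else:
                hi = mid
        return lo

    # Example: e_hat(Q) = 0.1, KL(Q||P) = 10 nats, m = 100, delta = 0.05.
    m, delta = 100, 0.05
    rhs = (10.0 + math.log((m + 1) / delta)) / m
    print(pac_bayes_true_error_bound(0.1, rhs))  # upper bound on e(Q)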

The optimal value of $\sigma^2$ is unknown and dependent on the learning problem, so we will wish to parameterize it in an example dependent manner. We can do this using the same trick as for the original neural net bound. Use a sequence of bounds where $\sigma = \sigma_0 b^k$ for constants $\sigma_0$ and $b$ and a nonnegative integer $k$. For the $k$th bound set $\delta_k = \delta/2^{k+1}$. Now, the union bound will imply that all bounds hold simultaneously with probability at least $1 - \delta$. Now, assuming that our posterior $Q$ is also defined by a multidimensional gaussian with the mean and variance in each dimension defined by $\mu_i$ and $\sigma_i^2$, we can specialize to the following corollary:

Corollary 2.4 (Stochastic Neural Network bound) Let $N$ be the number of weights in a neural net, $\mu_i$ be the $i$th weight, and $\sigma_i^2$ be the variance of the $i$th weight. Then, with probability at least $1 - \delta$, we have

$$\mathrm{KL}(\hat{e}(Q) \,\|\, e(Q)) \le \frac{\frac{1}{2} \sum_{i=1}^{N} \left( \ln \frac{\sigma^2}{\sigma_i^2} + \frac{\sigma_i^2 + \mu_i^2}{\sigma^2} - 1 \right) + \ln \frac{m+1}{\delta_k}}{m} \qquad (1)$$

Proof: Analytic calculation of the KL divergence between two multidimensional gaussians, and the union bound applied for each value of $k$.

We will choose reasonable default values for $b$ and $\sigma_0$. One more step is necessary in order to apply this bound. The essential difficulty is evaluating $\hat{e}(Q)$. This quantity is observable, although calculating it to high precision is difficult. We will avoid the need for a direct evaluation by a Monte Carlo evaluation and a bound on the tail of the Monte Carlo evaluation. Let $\hat{e}_n(Q)$ be the observed rate of failure of $n$ random hypotheses drawn according to $Q$ and applied to a random training example. Then, the following simple bound holds:

Theorem 2.5 (Sample Convergence Bound) For all distributions $Q$, for all sample sets $S$, with probability at least $1 - \delta$,

$$\mathrm{KL}(\hat{e}_n(Q) \,\|\, \hat{e}(Q)) \le \frac{\ln \frac{2}{\delta}}{n}$$

where $n$ is the number of evaluations of the stochastic hypothesis.

Proof: This is simply an application of the Chernoff bound for the tail of a Binomial, where a head occurs when an error is observed and the bias is $\hat{e}(Q)$.

In order to calculate a bound on the expected true error rate, we will first bound the expected empirical error rate $\hat{e}(Q)$ with confidence $\delta/2$, then bound the expected true error rate $e(Q)$ with confidence $\delta/2$, using our bound on $\hat{e}(Q)$. Since the total probability of failure is only $\delta/2 + \delta/2 = \delta$, our bound will hold with probability $1 - \delta$. In practice, we will use a large number of evaluations of the empirical error rate of the stochastic neural network.

2.3 Distribution Construction algorithm

One critical step is missing in the description: How do we calculate the multidimensional gaussian, $Q$? The variance of the posterior gaussian needs to be dependent on each weight in order to achieve a tight bound, since we want any meaningless weights to not contribute significantly to the overall sample complexity. We use a simple greedy algorithm to find the appropriate variance in each dimension (a code sketch follows this list):

1. Train a neural net on the examples.
2. For every weight, $\mu_i$, search for the variance, $\sigma_i^2$, which reduces the empirical accuracy of the trained network by 5% (for example) while holding all other weights fixed.
3. The stochastic neural network defined by $(\mu_i, \sigma_i^2)$ will generally have a too-large empirical error. Therefore, we calculate a global multiplier $\rho < 1$ such that the stochastic neural network defined by $(\mu_i, \rho \sigma_i^2)$ decreases the empirical accuracy by only 5%.
4. Then, we evaluate the empirical error rate of the resulting stochastic neural net with samples from the stochastic neural network.
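The greedy construction is easy to express in code. The following is a minimal sketch under stated assumptions: the weights are flattened into a single vector, and accuracy_of(means, variances) is a hypothetical helper that Monte-Carlo-estimates the empirical accuracy of the corresponding stochastic net on the training set:

    import numpy as np

    def construct_posterior(weights, accuracy_of, target_drop=0.05):
        # Step 1 (training) is assumed done; `weights` are the trained means.
        base_acc = accuracy_of(weights, np.zeros_like(weights))
        sigma2 = np.zeros_like(weights)
        # Step 2: per-weight variance search, all other weights held fixed.
        for i in range(len(weights)):
            trial = 1e-6
            for _ in range(40):  # doubling search; meaningless weights get huge variance
                v = np.zeros_like(weights)
                v[i] = trial
                if accuracy_of(weights, v) <= base_acc - target_drop:
                    break
                trial *= 2.0
            sigma2[i] = trial
        # Step 3: all variances at once hurt too much, so shrink globally.
        rho = 1.0
        for _ in range(40):
            if accuracy_of(weights, rho * sigma2) >= base_acc - target_drop:
                break
            rho *= 0.5
        # Step 4 then evaluates the empirical error of (weights, rho * sigma2)
        # with many draws from the stochastic net, feeding Theorem 2.5.
        return weights, rho * sigma2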

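Assembling the final bound then chains Theorem 2.5 (Monte Carlo estimate to $\hat{e}(Q)$) with Corollary 2.4 ($\hat{e}(Q)$ to $e(Q)$), spending half of the failure probability $\delta$ on each step. A sketch under the same illustrative assumptions, reusing pac_bayes_true_error_bound from the earlier snippet:

    import math
    import numpy as np

    def gaussian_kl_complexity(mu, sigma2, prior_sigma2):
        # KL( N(mu, diag(sigma2)) || N(0, prior_sigma2 * I) ): the summation
        # part of the numerator ("complexity") in equation (1).
        mu, sigma2 = np.asarray(mu), np.asarray(sigma2)
        return 0.5 * float(np.sum(np.log(prior_sigma2 / sigma2)
                                  + (sigma2 + mu ** 2) / prior_sigma2 - 1.0))

    def snn_true_error_bound(e_mc, n_mc, mu, sigma2, prior_sigma2,
                             m, delta_k, delta=0.05):
        # Theorem 2.5: bound e_hat(Q) from the n_mc-draw Monte Carlo estimate,
        # using delta/2 of the failure probability.
        e_hat_bound = pac_bayes_true_error_bound(
            e_mc, math.log(2.0 / (delta / 2)) / n_mc)
        # Corollary 2.4: bound e(Q) from e_hat(Q); delta_k accounts for the
        # union bound over the grid of prior variances.
        rhs = (gaussian_kl_complexity(mu, sigma2, prior_sigma2)
               + math.log((m + 1) / delta_k)) / m
        return pac_bayes_true_error_bound(e_hat_bound, rhs)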
Figure 1: Plot of errors and true error bounds (SNN bound, NN bound, SNN train error, NN train error, SNN test error, NN test error) versus pattern presentations, for the neural network (NN) and the stochastic neural network (SNN). The graph exhibits overfitting after approximately 6000 pattern presentations. Note that a true error bound of 100 implies that at least $100^2$ more examples are required in order to make a nonvacuous bound. The graph on the right expands the vertical scale by excluding the poor true error bound.

3 Experimental Results

How well can we bound the true error rate of a stochastic neural network? The answer is: much better than we can bound the true error rate of a neural network. Our experimental results take place on a synthetic dataset which has 25 input dimensions and one output dimension. Most of these dimensions are useless, simply random numbers drawn from a $N(0, 1)$ gaussian. One of the 25 input dimensions is dependent on the label. First, the label $y$ is drawn uniformly from $\{-1, 1\}$, then the special dimension is drawn from a $N(y, 1)$ gaussian. Note that this learning problem cannot be solved perfectly, because some examples will be drawn from the tail of the gaussian.

The ideal neural net to use in solving this problem is a single node perceptron. We will instead use a 2 layer neural net with 2 hidden nodes. This overly large neural net will result in the potential for significant overfitting, which makes the bound prediction problem interesting. It is also somewhat more realistic if the neural net structure does not exactly fit the learning problem. All of our datasets will use just 100 examples. Constructing a nonvacuous bound for a continuous hypothesis space at 100 examples is quite challenging, as indicated by figure 1. Conventional bounds are hopelessly loose, while the stochastic neural network bound is still not as tight as might be desired. There are several notable things about this figure.

1. The SNN upper bound is 2-3 orders of magnitude lower than the NN upper bound.
2. The SNN performs better than expected. In particular, the SNN true error rate is only slightly worse than the true error rate of the NN. This is surprising considering that we fixed the difference in empirical error rates at 5%.
3. The SNN bound has a minimum which weakly predicts the overfitting point of 6000 pattern presentations for both the SNN and the NN.

The comparison between the neural network bound and the stochastic neural network bound is not quite fair due to the form of the bound. In particular, the stochastic neural network bound can never return a value greater than "always err". This implies that when the bound is near the value of 1, it is difficult to judge how rapidly extra examples will improve the stochastic neural network bound. We can judge the sample complexity of the stochastic bound by plotting the value of the numerator in equation 1. Figure 2 plots this complexity versus the number of pattern presentations in training.

The stochastic bound is a radical improvement on the neural network bound, but it is not yet a perfectly tight bound. Given that we do not have a perfectly tight bound, one important consideration arises: does the minimum of the stochastic bound predict the minimum of the true error rate (as predicted by a large holdout dataset)? In particular, can we use the stochastic bound to determine when we should cease training?

Figure 2: We plot the complexity of the stochastic network model (the numerator of equation 1) vs. training epoch. Note that the complexity increases with more training, as expected, and stays below 100, implying nonvacuous bounds on a training set of size 100.

The stochastic bound depends upon (1) the complexity, which increases with training time, and (2) the training error, which decreases with training time. This dependence results in a minimum. The point of minimal true error (for the stochastic and deterministic neural networks) occurs at approximately 6000 pattern presentations, and the minimum of the stochastic bound weakly predicts this point of minimum error. The neural network bound has no such minimum.

Is the choice of 5% increased empirical error optimal? In general, the optimal choice of the extra error rate depends upon the learning problem. Since the stochastic neural network bound (corollary 2.4) holds for all multidimensional gaussian distributions, we are free to optimize the choice of distribution in any way we desire. Figure 3 shows the resulting bound for different choices of the extra empirical error. The bound has a minimum at a smaller amount of extra error, indicating that our initial choice of 5% was somewhat large. Also note that the complexity always decreases with increasing entropy in the distribution of our stochastic neural net. The existence of a minimum in Figure 3 is the right behaviour: the increased empirical error rate is significant in the calculation of the true error bound.

4 Conclusion

We have applied a PAC-Bayes bound for the true error rate of a stochastic neural network. The stochastic neural network bound results in a radically tighter (2-3 orders of magnitude) bound on the true error rate of a classifier while increasing the empirical and true error rates only a small amount. Although the stochastic neural net bound is not completely tight, it is not vacuous with just 100 examples, and the minimum of the bound weakly predicts the point where overtraining occurs.

The results with a synthetic data set are extremely promising: the bounds are orders of magnitude better. Our next step will be to test the method on a few real world datasets to ensure that the bounds remain tight. In addition, there remain many opportunities for improving the application of the bound.

Figure 3: Plot of the stochastic neural net (SNN) bound and complexity (true error bound or complexity vs. extra training error) for posterior distributions chosen according to the extra empirical error they introduce.

For example, it is possible that shifting the weights when finding a maximum acceptable variance will result in a tighter bound. Also, we have not taken into account symmetries within the network which allow for a tighter bound calculation.

References

[1] Peter Bartlett, "The Sample Complexity of Pattern Classification with Neural Networks: The Size of the Weights is More Important than the Size of the Network," IEEE Transactions on Information Theory, Vol. 44, No. 2, March 1998.

[2] V. Koltchinskii and D. Panchenko, "Empirical Margin Distributions and Bounding the Generalization Error of Combined Classifiers," preprint, http://citeseer.nj.nec.com/

[3] John Langford and Matthias Seeger, "Bounds for Averaging Classifiers," CMU tech report, 2001.

[4] David McAllester, "Some PAC-Bayes bounds," COLT 1999.
