Knowledge-Gradient Methods for Efficient Information Collection


1 Knowledge-Gradient Methods for Efficient Information Collection. Peter Frazier, presenting joint work with Warren Powell, Savas Dayanik, and Diana Negoescu. Department of Operations Research and Financial Engineering, Princeton University. Tuesday February 3, 2009. Operations Research and Information Engineering, Cornell University.

2 Outline. 1 Overview of Information Collection Applications. 2 Global Optimization of Expensive Continuous Functions: Problem Description; Knowledge-Gradient Policy; Application: Simulation Model Calibration (Schneider); Application: Drug Discovery. 3 KG Policies for General Offline Problems: Problem Description and KG Policy; Convergence.


4 Information Collection. We consider information collection problems, in which we must decide how much and of what type of information to collect. We focus our interest on sequential Bayesian information collection problems. In making such decisions we trade the benefit of information (the ability to make better decisions in the future) against its cost (money, time, or opportunity cost). We propose the knowledge-gradient (KG) method as a general way to make information collection decisions.

5 Application: Simulation Optimization. We would like to choose a staffing policy in a hospital to minimize patient waiting time, subject to a cost constraint. Hospital dynamics under a particular staffing policy cannot be evaluated analytically, but we can estimate them via simulation. To find a good staffing policy to implement in our hospital, we adaptively choose which policies to learn about with our simulator.

6 Application: AIDS Treatment and Prevention. We would like to treat and prevent AIDS in Africa. We are uncertain about the effectiveness of untried prevention methods, but we can learn about them by using them in practice. To which prevention methods should we allocate our resources? How should we balance using tried and true methods with using untried methods that may be better?

7 Application: Product Pricing. We would like to dynamically price products to maximize revenue. We learn about product demand from sales and the prices at which those sales were made. The information collected depends on the price: if we price very high, we sell nothing and learn only an upper bound on what people are willing to pay. If we price very low, we sell to every vaguely interested party, but learn little about how much they are willing to pay.

8 More Information Collection Applications. Design a sequence of focus groups to effectively choose features for a new product to be developed. Choose which items in a retail store should carry RFID tags. Decide whether to adopt a new technology now, or to wait and gather more information about how well it works. Manage a supply chain when demand distributions are uncertain, and demand lost due to stockout is unobserved. Design an adaptive data collection strategy that will quickly and effectively identify the source and extent of radiation contamination in an emergency.

9 Example: Ranking and Selection. Assume we have five choices, with uncertainty in our belief about how well each one will perform. Imagine we can make a single measurement, after which we have to make a choice about which one is best. What should we do? (Slide credit: Warren B. Powell)

10 Example: Ranking and Selection. Same setup: five choices with uncertain beliefs, a single measurement, then a final choice of the best. In this outcome the measurement does not change which choice looks best: no improvement.

11 Example: Ranking and Selection. Same setup, different outcome: the measurement produces a new solution. The value of learning is that it may change your decision.

12 The Knowledge-Gradient Policy for Ranking and Selection. The knowledge-gradient policy values each potential measurement x according to: value of measuring x = E[best we can do with the measurement] (best we can do without the measurement). We call this value the KG factor. The policy then performs the measurement with the largest KG factor. Basic principle: assume you can make only one measurement, after which you have to make a final choice (the implementation decision); what choice would you make now to maximize the expected value of the implementation decision? [Figure: the change in the estimate of an option's value due to a measurement, and the change which produces a change in the decision.]

13 Outline. 1 Overview of Information Collection Applications. 2 Global Optimization of Expensive Continuous Functions: Problem Description; Knowledge-Gradient Policy; Application: Simulation Model Calibration (Schneider); Application: Drug Discovery. 3 KG Policies for General Offline Problems: Problem Description and KG Policy; Convergence.

14 Global Optimization of Expensive Continuous Functions. We have a function whose global maximum we would like to find. We can evaluate the function with noise via some black-box, but cannot obtain gradients or other information. Evaluating the function is expensive, justifying the use of a sophisticated algorithm to choose evaluation points.

15 Bayesian Prior on the Function to be Optimized. We begin with a Gaussian process prior on f. Under this prior, our prior belief on the values that f takes on any finite set of points x_1, ..., x_M is multivariate normal: (f(x_1), ..., f(x_M)) ~ N(µ^0, Σ^0), where µ^0 and Σ^0 are functions of x_1, ..., x_M.
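The restriction of a GP prior to a finite grid can be sketched as follows. This is a minimal illustration, not the talk's implementation: the function name and the squared-exponential covariance are assumed choices.

```python
import numpy as np

# Illustrative sketch: restricting a Gaussian process prior to a finite
# set of points x_1, ..., x_M gives a multivariate normal belief
# (mu0, Sigma0) on (f(x_1), ..., f(x_M)). The squared-exponential
# covariance is one common (assumed) choice of prior.
def gp_prior(xs, mean=0.0, variance=1.0, length_scale=1.0):
    xs = np.asarray(xs, dtype=float)
    mu0 = np.full(len(xs), mean)          # prior mean at each point
    d = xs[:, None] - xs[None, :]         # pairwise signed distances
    Sigma0 = variance * np.exp(-0.5 * (d / length_scale) ** 2)
    return mu0, Sigma0
```

For example, `gp_prior(np.linspace(0.0, 1.0, 5))` returns a length-5 mean vector and a 5x5 symmetric covariance matrix.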

16 Updating the Prior. For computational reasons, we will restrict ourselves to a finite collection of points x_1, ..., x_M. The time-n posterior belief on f(x_1), ..., f(x_M) is N(µ^n, Σ^n), where µ^n and Σ^n can be computed recursively from the parameters of the previous belief µ^(n-1) and Σ^(n-1), the location of the time-n measurement x^n, and the measurement's value ŷ^n. [Figure: a sequence of posterior beliefs on f after measurements at times t = 1, 2, ..., each at a measured point x.]
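The recursive update can be sketched as the standard conjugate normal update. Notation here is an assumption matched to the slide: `lam` is the measurement noise variance, `x` the measured point's index, `y_hat` the observed value.

```python
import numpy as np

# Sketch of the recursive Bayesian update: after measuring alternative x
# with noise variance lam and observing y_hat, the belief (mu, Sigma)
# remains multivariate normal with these updated parameters.
def update(mu, Sigma, x, y_hat, lam):
    Sx = Sigma[:, x]                      # covariances with the measured point
    denom = lam + Sigma[x, x]
    mu_new = mu + (y_hat - mu[x]) / denom * Sx
    Sigma_new = Sigma - np.outer(Sx, Sx) / denom
    return mu_new, Sigma_new
```

Because the prior correlates nearby points, one measurement moves the mean (and shrinks the variance) at correlated points as well as at x itself.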

17 Measurement as a Stochastic Optimization Problem. Our goal is to choose measurements to maximize our ability to choose a high-value alternative at implementation time. The reward received is max_i µ^N_i. The optimal solution satisfies Bellman's recursion with a state variable S^n = (n, µ^n, Σ^n): V^N(S^N) = max_i µ^N_i; V^n(S^n) = max_x E^n[ V^(n+1)(S^(n+1)) | x^n = x ]. However, the state space has O(M^2) dimensions, which makes actually solving Bellman's recursion impossible and justifies the search for good heuristic policies.

18 Knowledge-Gradient Policy. The KG policy assigns a value, or KG factor, ν^n_x to each potential measurement x. It then performs the measurement with the largest KG factor. The KG factor is ν^n_x = E[best we can do with the measurement] (best we can do without the measurement). [Figure: the prior µ^n and one possible posterior over the alternatives i, with the measurement at a point x.]

19 Knowledge-Gradient Policy. The KG factor is ν^n_x = E[best we can do with the measurement] (best we can do without the measurement) = E^n[ max_i µ^(n+1)_i | x^n = x ] max_i µ^n_i. [Figure: the prior µ^n and posterior over the alternatives i.]

20 Other Approaches. Many other derivative-free noise-tolerant global optimization methods exist: pattern search (e.g., Nelder-Mead); stochastic approximation (e.g., SPSA [Spall 1992]); evolutionary algorithms; simulated annealing; tabu search; response surface methods. The KG method is a Bayesian global optimization (BGO) method because it places a Bayesian prior distribution on the underlying but unknown function. BGO methods require more computation to decide where to evaluate next, but often require fewer evaluations to find global extrema [Huang et al. 2006].

21-58 Computing the KG Factor. [Animation frames showing, over the alternatives i: the prior µ^n, an observation at the measured point x, the resulting posterior, and the best posterior value (max).]

59 Computing the KG Factor. [Figure: the time-(n+1) means plotted as lines a_1 + b_1 y, a_2 + b_2 y, a_3 + b_3 y, a_4 + b_4 y against the observation y, with their upper envelope max_i µ^(n+1)_i.] The KG factor ν^n_x for measuring alternative x is ν^n_x = E^n[ max_i µ^(n+1)_i | x^n = x ] max_i µ^n_i = Σ_i (b_(i+1) b_i) f( |a_(i+1) a_i| / (b_(i+1) b_i) ), where f(z) = φ(z) + zΦ(z), φ is the normal pdf and Φ is the normal cdf.
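The function f(z) = φ(z) + zΦ(z) is easy to compute directly. The slide's formula handles correlated beliefs through the breakpoints (a_i, b_i); the sketch below instead shows the simpler independent-normal-beliefs special case, with assumed notation: `sigma2` holds posterior variances and `lam` is the measurement noise variance.

```python
import math

# f(z) = phi(z) + z * Phi(z), with phi/Phi the standard normal pdf/cdf.
def f(z):
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi + z * Phi

# KG factor for independent normal beliefs (a special case of the
# slide's correlated-beliefs formula; notation assumed).
def kg_factor(mu, sigma2, lam, x):
    # Std. dev. of the change in mu[x] caused by one noisy measurement of x.
    sigma_tilde = sigma2[x] / math.sqrt(sigma2[x] + lam)
    best_other = max(m for i, m in enumerate(mu) if i != x)
    return sigma_tilde * f(-abs(mu[x] - best_other) / sigma_tilde)
```

When the measured alternative is tied with the best competitor, the argument of f is zero and the KG factor reduces to sigma_tilde * f(0), its maximum for that sigma_tilde.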

60 Maximizing the KG Factor. We compute the KG factor for each candidate measurement, and choose the measurement with the largest. [Figure: top, µ^n +/- sqrt(Σ^n) over the alternatives i; bottom, log(KG factor) over the alternatives i.]

61-63 Example. [Figure-only slides.]

64 Simulation Model Calibration at Schneider National. The logistics company Schneider National uses a large simulation-based optimization model to try "what if" scenarios. The model has several input parameters that must be tuned to make its behavior match reality before it can be used.

65 Simulation Model Calibration at Schneider National. Current company practice gets company drivers home 2 times per month, and independent contractors 1.7 times per month, on average. The optimization model awards a bonus to itself each time it brings a truck driver home. Goal: adjust the bonuses to make the optimal solution found by the model match current practice. Running the simulator to convergence for one set of bonuses takes 3 days, and the full calibration takes 1-2 weeks when done by hand.

66-69 Simulation Model Calibration Results. [Animation frames over iterations n, plotted against the bonus parameters: the posterior mean µ^n, the posterior standard deviation, log(KG factor), and the best fit found so far (log10 of best fit) versus n.]

70 Simulation Model Calibration Results. The KG method calibrates the model in approximately 3 days, compared to 7-14 days when tuned by hand. The calibration is automatic, freeing the human calibrator to do other work. Current practice uses the year's calibrated bonuses for each new "what if" scenario, but to enforce the constraint on driver at-home time it would be better to recalibrate the model for each scenario. Automatic calibration with the KG method makes this feasible.

71 Drug Discovery. We are working with a medical group at Georgetown University hospital to improve upon a small molecule they believe can treat Ewing's sarcoma. As test cases, we use other families of molecules for which data has been collected and published, including the benzomorphan family at right. We use the Free-Wilson model, under which a molecule's value is the sum of the values of its substituent-location pairs. [Figure: a benzomorphan molecule, with several locations (R1, R2, ...) available for substitution. Source: Katz, Osborne, Ionescu 1977.]
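The Free-Wilson model described above is additive over substitutions, which a toy sketch makes concrete. The substituent names and contribution values below are made up purely for illustration.

```python
# Toy sketch of the Free-Wilson model: a molecule is a set of
# (location, substituent) pairs, and its value is the sum of the
# contributions of those pairs. All names/values here are hypothetical.
def free_wilson_value(molecule, contributions):
    return sum(contributions[pair] for pair in molecule)

value = free_wilson_value(
    [("R1", "CH3"), ("R2", "H")],
    {("R1", "CH3"): 0.4, ("R2", "H"): -0.1},
)
```

Because the model is linear in the per-pair contributions, beliefs about a few measured molecules carry information about the many unmeasured combinations of the same substituents.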

72 Drug Discovery: Numerical Results. [Figure: quality (value) of the best molecule found versus number of measurements, comparing the KG policy against exploration and PE policies.]

73 Outline. 1 Overview of Information Collection Applications. 2 Global Optimization of Expensive Continuous Functions: Problem Description; Knowledge-Gradient Policy; Application: Simulation Model Calibration (Schneider); Application: Drug Discovery. 3 KG Policies for General Offline Problems: Problem Description and KG Policy; Convergence.

74 The General Offline Information Collection Problem. 1. We begin with a prior distribution on some unknown truth θ. 2. We make a sequence of measurements, deciding which types of measurement to make as we go. 3. After N measurements, we choose an implementation decision i and earn a reward R(θ, i). In the global optimization problem previously discussed, θ is the function whose optimum we seek. This function's domain is the same as both the space of possible measurement types x and the space of possible implementation decisions i, and R(θ, i) = θ(i).

75 The KG Policy for General Offline Problems. The KG policy for any problem from this general framework is arg max_x E^n[ max_i µ^(n+1)_i | x^n = x ] max_i µ^n_i, where µ^n_i = E^n[R(θ; i)] is the expected value of implementation decision i given what we know at time n, and µ^(n+1)_i is defined similarly. Evaluating this expression for the KG decision is often computationally intensive.
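When no closed form is available, the KG quantity can be estimated by simulation. The sketch below (assumed notation: `lam` is the observation noise variance) draws possible observations of alternative x from the predictive distribution, applies the conjugate normal mean update to each, and averages the resulting best values.

```python
import numpy as np

# Monte Carlo sketch of E^n[ max_i mu^(n+1)_i | x^n = x ] - max_i mu^n_i
# for a multivariate normal belief (mu, Sigma) and noise variance lam.
def kg_monte_carlo(mu, Sigma, x, lam, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    denom = lam + Sigma[x, x]
    Sx = Sigma[:, x]
    # Predictive distribution of the observation: y ~ N(mu[x], Sigma[x,x] + lam).
    ys = rng.normal(mu[x], np.sqrt(denom), size=n_samples)
    # Updated mean vectors for each simulated outcome (vectorized update).
    mu_next = mu[None, :] + ((ys - mu[x]) / denom)[:, None] * Sx[None, :]
    return mu_next.max(axis=1).mean() - mu.max()
```

By Jensen's inequality the true KG quantity is nonnegative, so a materially negative estimate signals a bug or too few samples.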

76 Optimality and Convergence Results. The KG policy is myopically optimal in general (optimal when N = 1). In certain special cases (e.g., independent normal ranking and selection on 2 alternatives) the KG policy is optimal for all N. In many problems, the KG policy is provably convergent. Convergence means that the alternative we think is best, arg max_i E^N[R(θ, i)], converges to the one that is actually the best, arg max_i R(θ, i). In the global optimization of expensive functions, convergence means the KG policy always finds the global maximum when given enough measurements. KG is in some sense a myopic policy, and so convergence is important because it shows that myopia does not mislead KG into getting stuck, measuring one alternative over and over.

77 Conclusion. Knowledge-gradient policies form a broadly applicable class of information collection policies with several appealing properties: KG policies are myopically optimal in general; KG policies are convergent in a broad class of problems; KG policies perform well numerically against other existing policies in several problems; KG policies are flexible and may be computed easily in a broad class of problems.

78 Thank You. Thank you. Any questions?


Credit Card Pricing and Impact of Adverse Selection Credt Card Prcng and Impact of Adverse Selecton Bo Huang and Lyn C. Thomas Unversty of Southampton Contents Background Aucton model of credt card solctaton - Errors n probablty of beng Good - Errors n

More information

1. Introduction. Consider the standard form of a linear program, given by

1. Introduction. Consider the standard form of a linear program, given by INFORMATION COLLECTION FOR LINEAR PROGRAMS WITH UNCERTAIN OBJECTIVE COEFFICIENTS ILYA O. RYZHOV AND WARREN B. POWELL Abstract. Consder a lnear program wth uncertan objectve coeffcents, for whch we have

More information

Lecture 14: Bandits with Budget Constraints

Lecture 14: Bandits with Budget Constraints IEOR 8100-001: Learnng and Optmzaton for Sequental Decson Makng 03/07/16 Lecture 14: andts wth udget Constrants Instructor: Shpra Agrawal Scrbed by: Zhpeng Lu 1 Problem defnton In the regular Mult-armed

More information

On an Extension of Stochastic Approximation EM Algorithm for Incomplete Data Problems. Vahid Tadayon 1

On an Extension of Stochastic Approximation EM Algorithm for Incomplete Data Problems. Vahid Tadayon 1 On an Extenson of Stochastc Approxmaton EM Algorthm for Incomplete Data Problems Vahd Tadayon Abstract: The Stochastc Approxmaton EM (SAEM algorthm, a varant stochastc approxmaton of EM, s a versatle tool

More information

Artificial Intelligence Bayesian Networks

Artificial Intelligence Bayesian Networks Artfcal Intellgence Bayesan Networks Adapted from sldes by Tm Fnn and Mare desjardns. Some materal borrowed from Lse Getoor. 1 Outlne Bayesan networks Network structure Condtonal probablty tables Condtonal

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

MLE and Bayesian Estimation. Jie Tang Department of Computer Science & Technology Tsinghua University 2012

MLE and Bayesian Estimation. Jie Tang Department of Computer Science & Technology Tsinghua University 2012 MLE and Bayesan Estmaton Je Tang Department of Computer Scence & Technology Tsnghua Unversty 01 1 Lnear Regresson? As the frst step, we need to decde how we re gong to represent the functon f. One example:

More information

The Study of Teaching-learning-based Optimization Algorithm

The Study of Teaching-learning-based Optimization Algorithm Advanced Scence and Technology Letters Vol. (AST 06), pp.05- http://dx.do.org/0.57/astl.06. The Study of Teachng-learnng-based Optmzaton Algorthm u Sun, Yan fu, Lele Kong, Haolang Q,, Helongang Insttute

More information

3.1 ML and Empirical Distribution

3.1 ML and Empirical Distribution 67577 Intro. to Machne Learnng Fall semester, 2008/9 Lecture 3: Maxmum Lkelhood/ Maxmum Entropy Dualty Lecturer: Amnon Shashua Scrbe: Amnon Shashua 1 In the prevous lecture we defned the prncple of Maxmum

More information

Kernel Methods and SVMs Extension

Kernel Methods and SVMs Extension Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general

More information

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS HCMC Unversty of Pedagogy Thong Nguyen Huu et al. A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS Thong Nguyen Huu and Hao Tran Van Department of mathematcs-nformaton,

More information

Some modelling aspects for the Matlab implementation of MMA

Some modelling aspects for the Matlab implementation of MMA Some modellng aspects for the Matlab mplementaton of MMA Krster Svanberg krlle@math.kth.se Optmzaton and Systems Theory Department of Mathematcs KTH, SE 10044 Stockholm September 2004 1. Consdered optmzaton

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

CSC 411 / CSC D11 / CSC C11

CSC 411 / CSC D11 / CSC C11 18 Boostng s a general strategy for learnng classfers by combnng smpler ones. The dea of boostng s to take a weak classfer that s, any classfer that wll do at least slghtly better than chance and use t

More information

Multilayer Perceptron (MLP)

Multilayer Perceptron (MLP) Multlayer Perceptron (MLP) Seungjn Cho Department of Computer Scence and Engneerng Pohang Unversty of Scence and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjn@postech.ac.kr 1 / 20 Outlne

More information
