An efficient method for computing single parameter partial expected value of perfect information


An efficient method for computing single parameter partial expected value of perfect information

Mark Strong 1, Jeremy E. Oakley 2

1. School of Health and Related Research (ScHARR), University of Sheffield, UK.
2. School of Mathematics and Statistics, University of Sheffield, UK.

Corresponding author: Mark Strong.

August 2012

Abstract

The value of learning an uncertain input in a decision model can be quantified by its partial expected value of perfect information (EVPI). This is commonly estimated via a two-level nested Monte Carlo procedure in which the parameter of interest is sampled in an outer loop, and then, conditional on this sampled value, the remaining parameters are sampled in an inner loop. This two-level method can be difficult to implement if the joint distribution of the inner-loop parameters conditional on the parameter of interest is not easy to sample from. We present a simple alternative one-level method for calculating partial EVPI that avoids the need to sample directly from the potentially problematic conditional distributions. We derive the upward bias and the variance of our estimator.

1 Introduction

The value of learning an input to a decision-analytic model can be quantified by its partial expected value of perfect information (partial EVPI) (Raiffa, 1968; Claxton and Posnett, 1996; Felli and Hazen, 1998). The standard two-level Monte Carlo approach to calculating partial EVPI is to sample a value of the input parameter of interest in an outer loop, and then to sample values from the joint conditional distribution of the remaining parameters and run the model in an inner loop (Brennan et al., 2007; Koerkamp et al., 2006). Sufficient numbers of runs of both the outer and inner loops are required to ensure that the partial EVPI is estimated with sufficient precision and with an acceptable level of bias (Oakley et al., 2010).
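To make the computational burden concrete, the two-level procedure can be sketched in a few lines of R. The sketch below assumes hypothetical helper functions nb.model(d, x), returning the net benefit of decision option d at inputs x, sample.inputs(n), drawing n samples from the joint input distribution, and sample.conditional(n, i, xi), drawing n samples of the inputs with the i-th input fixed at xi; it illustrates the general scheme rather than any particular implementation.

# Sketch of the standard two-level Monte Carlo partial EVPI estimator for input i.
# nb.model(d, x), sample.inputs(n) and sample.conditional(n, i, xi) are hypothetical
# placeholders for the user's decision model and input distributions.
two.level.partial.evpi <- function(i, N.outer, N.inner, D) {
  # Second term of the partial EVPI: max_d E_X{f(d, X)} by ordinary Monte Carlo
  x.outer <- sample.inputs(N.outer)                           # N.outer x p matrix
  baseline <- max(sapply(1:D, function(d)
    mean(apply(x.outer, 1, function(x) nb.model(d, x)))))
  # First term: outer loop over X_i, inner loop over X_{-i} given X_i
  outer.max <- numeric(N.outer)
  for (s in 1:N.outer) {
    x.inner <- sample.conditional(N.inner, i, x.outer[s, i])  # N.inner x p, column i fixed
    inner.means <- sapply(1:D, function(d)
      mean(apply(x.inner, 1, function(x) nb.model(d, x))))
    outer.max[s] <- max(inner.means)
  }
  mean(outer.max) - baseline
}

Each call to nb.model is a full model run, so the nested loops require roughly N.outer x N.inner runs per decision option for the first term alone, which is why the scheme becomes impractical for expensive models.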

We recognise two important practical limitations to the standard two-level Monte Carlo approach to calculating partial EVPI. Firstly, the nested two-level nature of the algorithm, with a model run at each inner-loop step, can be highly computationally demanding for all but very small loop sizes if the model is expensive to run. Secondly, we require a method of sampling from the joint distribution of the inputs excluding the parameter of interest, conditional on the input parameter of interest. If the input parameter of interest is independent of the remaining parameters then we can simply sample from the unconditional joint distribution of the remaining parameters. Indeed, Ades et al. (2004) show that in certain classes of model, most notably decision tree models with independent inputs, the Monte Carlo inner loop is unnecessary since the target inner expectation has a closed form solution. However, if inputs are not independent we may need to resort to Markov chain Monte Carlo (MCMC) methods if there is no closed form analytic solution for the joint conditional distribution. Including an MCMC step in the algorithm is likely to increase the computational burden considerably, as well as requiring additional programming.

We present here a simple one-level ordered input algorithm for calculating partial EVPI that takes into account any dependency in the inputs. The method avoids the need to sample directly from the conditional distributions of the inputs, and instead requires only a single set of sampled inputs and corresponding outputs in order to calculate partial EVPI values for all input parameters. We derive an expression for the sampling variation of the estimator.

2 Method

We assume we are faced with decision options indexed d = 1, ..., D, and have built a computer model y_d = f(d, x) that aims to predict the net benefit of decision option d given a vector of input parameter values x. We denote the true unknown values of the inputs X = \{X_1, \ldots, X_p\}, and the uncertain net benefit under decision option d as Y_d. We denote the parameter for which we wish to calculate the partial EVPI as X_i and the remaining parameters as X_{-i} = \{X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_p\}. We denote the expectation over the full joint distribution of X as E_X, over the marginal distribution of X_i as E_{X_i}, and over the conditional distribution of X_{-i} given X_i as E_{X_{-i}|X_i}. The partial EVPI for input X_i is

EVPI(X_i) = E_{X_i}\Big[ \max_d E_{X_{-i}|X_i}\{ f(d, X_i, X_{-i}) \} \Big] - \max_d E_X\{ f(d, X) \}.    (1)

We wish to evaluate the partial EVPI for each input X_i without sampling directly from the conditional distribution X_{-i} \mid X_i, since this may require computationally intensive numerical methods if inputs are correlated.

Briefly, the idea is as follows. We assume we have a set of samples from the joint distribution of the model input parameters, and a corresponding set of model outputs, i.e. net benefits. The net benefits for each decision option are ordered with respect to the input of interest, and then partitioned into subsets of equal size. Within each subset we calculate the mean of the net benefits for each decision option, and take the maximum across the decision options. The average of these maxima is taken as an approximation to the first term in Equation (1). The second term in Equation (1) is computed using standard Monte Carlo sampling, i.e. for each decision option we calculate the mean of the net benefits corresponding to the whole set of input samples, and then take the maximum of these means.
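The core of the one-level method can be sketched compactly in R, in the spirit of the full function given in Appendix A. In the sketch below the function name ordered.partial.evpi and the objects nb (an S x D matrix of sampled net benefits) and inputs (the corresponding S x p matrix of sampled parameter values) are placeholders.

# Sketch of the one-level ordered-input estimator of partial EVPI for input i.
# nb: S x D matrix of net benefits; inputs: S x p matrix of sampled inputs; S = J * K.
ordered.partial.evpi <- function(nb, inputs, i, J, K) {
  S <- nrow(nb); D <- ncol(nb)
  stopifnot(J * K == S)
  # Second term of Equation (1): ordinary Monte Carlo
  baseline <- max(colMeans(nb))
  # First term: order by x_i, partition into K subsets of J rows, average within
  # each subset, maximise over decision options, then average the K maxima
  nb.sorted <- nb[order(inputs[, i]), , drop = FALSE]
  nb.array <- array(nb.sorted, dim = c(J, K, D))
  subset.means <- apply(nb.array, c(2, 3), mean)       # K x D matrix
  mean(apply(subset.means, 1, max)) - baseline
}

Note that no further model runs are needed: the same S samples serve for every input parameter.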

2.1 Algorithm

In the following subsections we introduce notation and describe the algorithm in detail in a series of stages. Code for implementing the algorithm in R (R Development Core Team, 2011) is shown in Appendix A and is available for download.

2.1.1 Stage 1

We define the Monte Carlo sample of model inputs and corresponding model outputs as \{x^s, y_d^s;\ s = 1, \ldots, S,\ d = 1, \ldots, D\}, where the x^s are drawn from the joint distribution of the inputs, p(x), and y_d^s = f(d, x^s) is the evaluation of the model output at x^s for decision option d. Note the use of superscripts to index the randomly drawn sample sets. We let M be the matrix of inputs and corresponding outputs,

M = \begin{pmatrix} x_1^1 & \cdots & x_p^1 & y_1^1 & \cdots & y_D^1 \\ x_1^2 & \cdots & x_p^2 & y_1^2 & \cdots & y_D^2 \\ \vdots & & \vdots & \vdots & & \vdots \\ x_1^S & \cdots & x_p^S & y_1^S & \cdots & y_D^S \end{pmatrix}.    (2)

2.1.2 Stage 2

For parameter of interest i, we extract the x_i and y_1, \ldots, y_D columns and reorder with respect to x_i, giving

M_i = \begin{pmatrix} x_i^{(1)} & y_1^{(1)} & \cdots & y_D^{(1)} \\ x_i^{(2)} & y_1^{(2)} & \cdots & y_D^{(2)} \\ \vdots & \vdots & & \vdots \\ x_i^{(S)} & y_1^{(S)} & \cdots & y_D^{(S)} \end{pmatrix},    (3)

where x_i^{(1)} \le x_i^{(2)} \le \ldots \le x_i^{(S)}. Note the use of bracketed superscripts to denote the sample set ordered with respect to the input of interest.

2.1.3 Stage 3

We partition the resulting matrix into k = 1, \ldots, K sub-matrices M_i^{(k)} of J rows each,

M_i^{(k)} = \begin{pmatrix} x_i^{(1,k)} & y_1^{(1,k)} & \cdots & y_D^{(1,k)} \\ x_i^{(2,k)} & y_1^{(2,k)} & \cdots & y_D^{(2,k)} \\ \vdots & \vdots & & \vdots \\ x_i^{(J,k)} & y_1^{(J,k)} & \cdots & y_D^{(J,k)} \end{pmatrix},    (4)

retaining the ordering with respect to x_i, and where the row indexed (j, k) in Equation (4) is the row indexed j + (k-1)J in Equation (3). Note that J K must equal the total sample size S.

2.1.4 Stage 4

For each M_i^{(k)} we estimate, for each decision option d, the conditional expectation

\mu_d^{(k)} = E_{X_{-i}|X_i = \bar{x}_i^{(k)}}\{ f(d, \bar{x}_i^{(k)}, X_{-i}) \}

by averaging over j = 1, \ldots, J, i.e.

\hat{\mu}_d^{(k)} = \frac{1}{J} \sum_{j=1}^{J} y_d^{(j,k)},    (5)

where \bar{x}_i^{(k)} = \sum_{j=1}^{J} x_i^{(j,k)} / J. The justification for this rests on recognising that if J is small compared to S, then the ordered values of the input of interest \{x_i^{(1,k)}, \ldots, x_i^{(J,k)}\} will all be close to their mean value \bar{x}_i^{(k)}, and the corresponding values of the remaining inputs \{x_{-i}^{(1,k)}, \ldots, x_{-i}^{(J,k)}\} will be approximately a sample from the distribution of X_{-i} \mid X_i = \bar{x}_i^{(k)}. See Appendix B for a more formal justification. The maximum

m^{(k)} = \max_d E_{X_{-i}|X_i = \bar{x}_i^{(k)}}\{ f(d, \bar{x}_i^{(k)}, X_{-i}) \}

is then estimated by

\hat{m}^{(k)} = \max_d \hat{\mu}_d^{(k)},    (6)

and finally we estimate the first term on the right hand side of Equation (1) by averaging over k = 1, \ldots, K, i.e.

\hat{m} = \frac{1}{K} \sum_{k=1}^{K} \hat{m}^{(k)}.    (7)

2.1.5 Stage 5

We estimate the second term on the right hand side of Equation (1) using simple Monte Carlo sampling, i.e.

\max_d E_X\{ f(d, X) \} \approx \max_d \frac{1}{S} \sum_{n=1}^{S} y_d^n,    (8)

where the order of the x^n is irrelevant.

Stages 2 to 4 are repeated for each parameter of interest, noting that only a single set of model runs (Stage 1) is required.

2.2 Choosing values for J and K

We assume that we have a fixed number of model evaluations S and wish to choose values for J and K subject to the constraint J K = S. Firstly, we note that for small values of J the EVPI estimator is upwardly biased due to the maximisation in Equation (6) (Oakley et al., 2010).

Indeed, for J = 1 and K = S our ordered input estimator for the first term on the right hand side of Equation (1) reduces to

\frac{1}{S} \sum_{s=1}^{S} \max_d y_d^s,    (9)

which is the Monte Carlo estimator for the first term in the expression for the overall EVPI,

EVPI = E_X\big\{ \max_d f(d, X) \big\} - \max_d E_X\{ f(d, X) \}.    (10)

Secondly, we note that for very large values of J, and hence small values of K, the EVPI estimator is downwardly biased, and converges to zero when J = S. In this case our ordered input estimator for the first term on the right hand side of Equation (1) reduces to

\max_d \frac{1}{S} \sum_{s=1}^{S} y_d^s,    (11)

which is the Monte Carlo estimator for the second term on the right hand side of Equation (1).

The precision of the partial EVPI estimate depends only on S and not on J and K (see Section 2.4 for the derivation of an expression for the variance of the estimator). We therefore only need to consider the minimisation of bias in our choice of J and K when S is fixed. Because the upward bias due to small J converges to zero as J increases, a sensible choice of J is that which is just large enough such that the estimated bias \hat{b} is smaller than some constant c. Any choice of J larger than this will risk introducing a downward bias which becomes apparent at small values of K.
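The two limiting cases can be checked numerically with the hypothetical ordered.partial.evpi sketch introduced earlier: with J = 1 its first term reduces to the overall-EVPI term in Equation (9), and with J = S the estimate collapses to zero. The net benefit and input matrices below are arbitrary toy data used only to exercise these identities.

# Numerical check of the limiting cases J = 1 and J = S (toy data).
set.seed(1)
S <- 1000
inputs <- matrix(rnorm(S * 3), ncol = 3)                 # three arbitrary inputs
nb <- cbind(rnorm(S, mean = 0), rnorm(S, mean = 0.1))    # two decision options
overall.evpi <- mean(apply(nb, 1, max)) - max(colMeans(nb))
ordered.partial.evpi(nb, inputs, i = 1, J = 1, K = S)    # equals overall.evpi
ordered.partial.evpi(nb, inputs, i = 1, J = S, K = 1)    # equals 0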

2.3 Estimation of the upward bias in the first term of the partial EVPI estimator

We estimate the upward bias in the following manner, using the method proposed by Oakley et al. (2010). Firstly, we write the vector of Monte Carlo estimators for the conditional expected net benefits from Equation (5) as

\hat{\mu}^{(k)} = \big( \hat{\mu}_1^{(k)}, \ldots, \hat{\mu}_D^{(k)} \big).

If we can determine the sampling distribution of this vector of estimators then we can quantify the upward bias in \hat{m}, and hence the upward bias in the partial EVPI. Unless J is very small, \hat{\mu}^{(k)} will follow a multivariate Normal distribution with D dimensions. Thus we have

\hat{\mu}^{(k)} \sim N\big( \mu^{(k)}, \tfrac{1}{J} V^{(k)} \big),    (12)

where \mu^{(k)} = \big( \mu_1^{(k)}, \ldots, \mu_D^{(k)} \big), and where each element (p, q) of \tfrac{1}{J} V^{(k)} is estimated by

\tfrac{1}{J} \hat{V}^{(k)}_{p,q} = \widehat{\mathrm{cov}}\big( \hat{\mu}_p^{(k)}, \hat{\mu}_q^{(k)} \big).    (13)

In order to estimate the bias in \hat{m} we first draw, for each k = 1, \ldots, K, a set of N samples from a multivariate Normal distribution with mean vector \hat{\mu}^{(k)} and variance matrix \tfrac{1}{J} \hat{V}^{(k)}. We choose N to be large, say 1,000. Let us denote these samples \mu^{(k,n)} = \big( \mu_1^{(k,n)}, \ldots, \mu_D^{(k,n)} \big) for n = 1, \ldots, N and k = 1, \ldots, K. The bias in \hat{m}^{(k)} is estimated by

\hat{b}^{(k)} = \frac{1}{N} \sum_{n=1}^{N} \Big[ \max\big\{ \mu_1^{(k,n)}, \ldots, \mu_D^{(k,n)} \big\} - \max\big\{ \hat{\mu}_1^{(k)}, \ldots, \hat{\mu}_D^{(k)} \big\} \Big],    (14)

and the expected bias in \hat{m} as

\hat{b} = \frac{1}{K} \sum_{k=1}^{K} \hat{b}^{(k)}.    (15)

2.4 Estimation of the variance of the first term of the partial EVPI estimator

Here we derive an expression for the variance of \hat{m}, the first term in the estimator for the partial EVPI (Equation 1). If we denote d_k = \arg\max_d \hat{\mu}_d^{(k)}, we can rewrite Equation (7) as

\hat{m} = \frac{1}{K} \sum_{k=1}^{K} \hat{m}^{(k)} = \frac{1}{K} \sum_{k=1}^{K} \hat{\mu}_{d_k}^{(k)} = \frac{1}{K J} \sum_{k=1}^{K} \sum_{j=1}^{J} y_{d_k}^{(j,k)} = \frac{1}{S} \sum_{k=1}^{K} \sum_{j=1}^{J} y_{d_k}^{(j,k)}.    (16)

The variance of \hat{m} is

\mathrm{var}(\hat{m}) = \mathrm{var}\Big( \frac{1}{S} \sum_{k=1}^{K} \sum_{j=1}^{J} y_{d_k}^{(j,k)} \Big) = \frac{1}{S^2} \sum_{k=1}^{K} \sum_{j=1}^{J} \mathrm{var}\big( y_{d_k}^{(j,k)} \big),    (17)

since the y_{d_k}^{(j,k)} are independent. The estimator for \mathrm{var}(\hat{m}) is therefore simply

\widehat{\mathrm{var}}(\hat{m}) = \frac{1}{S(S-1)} \sum_{k=1}^{K} \sum_{j=1}^{J} \big( y_{d_k}^{(j,k)} - \hat{m} \big)^2.    (18)

We see therefore that the precision of the first term in the partial EVPI estimator does not depend on the individual choices of J and K, but only on S = J K.
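Both the bias estimate of Equations (12) to (15) and the variance estimate of Equation (18) can be computed directly from the partitioned net benefits. The sketches below assume nb.array is the J x K x D array of ordered net benefits constructed as in the Appendix A code, and use MASS::mvrnorm for the multivariate Normal draws; they are illustrations of the formulas above under those assumptions, not the authors' released code.

# Estimated upward bias in the first term of the partial EVPI estimator (Eqs 12-15).
# nb.array: J x K x D array of ordered net benefits; N: number of Normal draws per subset.
estimate.bias <- function(nb.array, N = 1000) {
  J <- dim(nb.array)[1]; K <- dim(nb.array)[2]
  b.k <- numeric(K)
  for (k in 1:K) {
    y.k <- nb.array[, k, ]                       # J x D net benefits in subset k
    mu.hat <- colMeans(y.k)                      # Equation (5)
    V.hat <- cov(y.k) / J                        # estimated variance of mu.hat, Equation (13)
    draws <- MASS::mvrnorm(N, mu = mu.hat, Sigma = V.hat)
    b.k[k] <- mean(apply(draws, 1, max)) - max(mu.hat)   # Equation (14)
  }
  mean(b.k)                                      # Equation (15)
}

# Estimated variance of m-hat (Equation 18).
estimate.var.mhat <- function(nb.array) {
  J <- dim(nb.array)[1]; K <- dim(nb.array)[2]; S <- J * K
  y.dk <- matrix(NA, J, K)
  for (k in 1:K) {
    y.k <- nb.array[, k, ]                       # J x D
    d.k <- which.max(colMeans(y.k))              # d_k = arg max_d mu-hat_d^(k)
    y.dk[, k] <- y.k[, d.k]
  }
  m.hat <- mean(y.dk)                            # Equation (16)
  sum((y.dk - m.hat)^2) / (S * (S - 1))          # Equation (18)
}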

Appendix A: R code for implementing the algorithm

The partial.evpi.function function as written below takes as inputs the costs and effects rather than the net benefits. This allows the partial EVPI to be calculated at any value of willingness to pay, λ.

partial.evpi.function <- function(inputs, input.of.interest, costs, effects, lambda, J, K)
{
  S <- nrow(inputs)                            # number of samples
  if (J * K != S) stop("the number of samples does not equal J times K")
  D <- ncol(costs)                             # number of decision options
  nb <- lambda * effects - costs               # net benefits, S x D
  baseline <- max(colMeans(nb))
  perfect.info <- mean(apply(nb, 1, max))
  evpi <- perfect.info - baseline
  sort.order <- order(inputs[, input.of.interest])
  sort.nb <- nb[sort.order, ]                  # net benefits ordered by the input of interest
  nb.array <- array(sort.nb, dim = c(J, K, D)) # partition into K subsets of J rows
  mean.k <- apply(nb.array, c(2, 3), mean)     # K x D matrix of subset means
  partial.info <- mean(apply(mean.k, 1, max))
  partial.evpi <- partial.info - baseline
  partial.evpi.index <- partial.evpi / evpi
  return(list(baseline = baseline,
              perfect.info = perfect.info,
              evpi = evpi,
              partial.info = partial.info,
              partial.evpi = partial.evpi,
              partial.evpi.index = partial.evpi.index))
}
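As a usage illustration, the function might be called as follows; the objects psa.inputs, psa.costs and psa.effects are hypothetical probabilistic sensitivity analysis outputs, each with S = J x K rows.

# Hypothetical call: S = 10000 samples partitioned into K = 100 subsets of J = 100,
# willingness to pay lambda = 20000, partial EVPI for the third input parameter.
result <- partial.evpi.function(inputs = psa.inputs, input.of.interest = 3,
                                costs = psa.costs, effects = psa.effects,
                                lambda = 20000, J = 100, K = 100)
result$partial.evpi        # partial EVPI for input 3
result$partial.evpi.index  # partial EVPI as a fraction of the overall EVPI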

Appendix B: Theoretical justification for the algorithm

The ordered algorithm is a method for efficiently computing the inner expectation in the first term on the right hand side of Equation (1). Dropping the decision option index d for clarity, but without loss of generality, our target is E_{X_{-i}|X_i = x_i}\{ f(x_i, X_{-i}) \}, where x_i is a realised value of the parameter of interest, and X_{-i} are the remaining uncertain parameters with joint conditional distribution p(x_{-i} \mid X_i = x_i). Given a sample x_{-i}^1, \ldots, x_{-i}^J from p(x_{-i} \mid X_i = x_i), the Monte Carlo estimator for E_{X_{-i}|X_i = x_i}\{ f(x_i, X_{-i}) \} is

\hat{E}_{X_{-i}|X_i = x_i}\{ f(x_i, X_{-i}) \} = \frac{1}{J} \sum_{j=1}^{J} f(x_i, x_{-i}^j).    (19)

In our ordered approximation method we replace Equation (19) with

\hat{E}_{X_{-i}|X_i = x_i}\{ f(x_i, X_{-i}) \} = \frac{1}{J} \sum_{j=1}^{J} f(x_i + \varepsilon_j, x_{-i}^j),    (20)

where \{ x_i + \varepsilon_1, \ldots, x_i + \varepsilon_J \} is an ordered sample from p(x_i \mid X_i \in [x_i \pm \zeta]) for some small \zeta (and therefore \varepsilon \approx 0), and x_{-i}^j is a sample from p(x_{-i} \mid X_i = x_i + \varepsilon_j). Equation (20) is an unbiased Monte Carlo estimator of

E_{X_i \in [x_i \pm \zeta]}\big[ E_{X_{-i}|X_i}\{ f(X_i, X_{-i}) \} \big] = \int_{X_i} \int_{X_{-i}} f(X_i, X_{-i})\, p(X_{-i} \mid X_i)\, p(X_i \mid X_i \in [x_i \pm \zeta])\, dX_{-i}\, dX_i,    (21)

which we can rewrite by introducing an importance sampling ratio as

\int_{X_i} \int_{X_{-i}} f(X_i, X_{-i})\, p(X_{-i} \mid X_i)\, p(X_i \mid X_i \in [x_i \pm \zeta])\, \frac{p(X_{-i} \mid X_i = x_i)}{p(X_{-i} \mid X_i = x_i)}\, dX_{-i}\, dX_i
= \int_{X_i} \left[ \int_{X_{-i}} f(X_i, X_{-i})\, \frac{p(X_{-i} \mid X_i)}{p(X_{-i} \mid X_i = x_i)}\, p(X_{-i} \mid X_i = x_i)\, dX_{-i} \right] p(X_i \mid X_i \in [x_i \pm \zeta])\, dX_i.    (22)

We write the term f(X_i, X_{-i})\, p(X_{-i} \mid X_i) / p(X_{-i} \mid X_i = x_i) within the inner integral as a function g, i.e.

f(X_i, X_{-i})\, \frac{p(X_{-i} \mid X_i)}{p(X_{-i} \mid X_i = x_i)} = g(X_i, x_i, X_{-i}).

If g is approximately linear in the small interval X_i \in [x_i \pm \zeta] then we can express g(X_i, x_i, X_{-i}) as a first order Taylor series expansion about g(x_i, x_i, X_{-i}), giving

g(X_i, x_i, X_{-i}) \approx g(x_i, x_i, X_{-i}) + (X_i - x_i) \left. \frac{\partial g(X_i, x_i, X_{-i})}{\partial X_i} \right|_{X_i = x_i}
= f(x_i, X_{-i}) + (X_i - x_i) \left. \frac{\partial g(X_i, x_i, X_{-i})}{\partial X_i} \right|_{X_i = x_i}.

Substituting back into Equation (22) with c = \partial g(X_i, x_i, X_{-i}) / \partial X_i \big|_{X_i = x_i} gives

\int_{X_i} \int_{X_{-i}} f(X_i, X_{-i})\, \frac{p(X_{-i} \mid X_i)}{p(X_{-i} \mid X_i = x_i)}\, p(X_i \mid X_i \in [x_i \pm \zeta])\, p(X_{-i} \mid X_i = x_i)\, dX_i\, dX_{-i}
\approx \int_{X_i} \int_{X_{-i}} \big\{ f(x_i, X_{-i}) + c (X_i - x_i) \big\}\, p(X_i \mid X_i \in [x_i \pm \zeta])\, p(X_{-i} \mid X_i = x_i)\, dX_i\, dX_{-i}.

Since

\int_{X_i} c (X_i - x_i)\, p(X_i \mid X_i \in [x_i \pm \zeta])\, dX_i = E_{X_i \in [x_i \pm \zeta]}\{ c (X_i - x_i) \} \approx 0

and

\int_{X_i} p(X_i \mid X_i \in [x_i \pm \zeta])\, dX_i = 1,

then

\int_{X_i} \int_{X_{-i}} \big\{ f(x_i, X_{-i}) + c (X_i - x_i) \big\}\, p(X_i \mid X_i \in [x_i \pm \zeta])\, p(X_{-i} \mid X_i = x_i)\, dX_i\, dX_{-i}
\approx \int_{X_{-i}} f(x_i, X_{-i})\, p(X_{-i} \mid X_i = x_i)\, dX_{-i}
= E_{X_{-i}|X_i = x_i}\{ f(x_i, X_{-i}) \}.

Hence, we have shown that as long as g(X_i, x_i, X_{-i}) = f(X_i, X_{-i})\, p(X_{-i} \mid X_i) / p(X_{-i} \mid X_i = x_i) is sufficiently smooth that it is approximately linear in some small interval X_i \in [x_i \pm \zeta], the ordered approximation method (Equation 20) will provide a good estimate of our target conditional expectation E_{X_{-i}|X_i = x_i}\{ f(x_i, X_{-i}) \}.

Acknowledgements

MS was funded by UK Medical Research Council fellowship grant G06072 during the course of this work.

References

Ades, A. E., Lu, G. and Claxton, K. (2004). Expected value of sample information calculations in medical decision modeling, Medical Decision Making, 24(2).

Brennan, A., Kharroubi, S., O'Hagan, A. and Chilcott, J. (2007). Calculating partial expected value of perfect information via Monte Carlo sampling algorithms, Medical Decision Making, 27(4).

Claxton, K. and Posnett, J. (1996). An economic approach to clinical trial design and research priority-setting, Health Economics, 5(6).

Felli, J. C. and Hazen, G. B. (1998). Sensitivity analysis and the expected value of perfect information, Medical Decision Making, 18(1).

Felli, J. C. and Hazen, G. B. (2003). Erratum: Correction: Sensitivity analysis and the expected value of perfect information, Medical Decision Making, 23(1): 97.

Koerkamp, B. G., Myriam Hunink, M. G., Stijnen, T. and Weinstein, M. C. (2006). Identifying key parameters in cost-effectiveness analysis using value of information: a comparison of methods, Health Economics, 15(4).

Oakley, J. E., Brennan, A., Tappenden, P. and Chilcott, J. (2010). Simulation sample sizes for Monte Carlo partial EVPI calculations, Journal of Health Economics, 29(3).

R Development Core Team (2011). R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria.

Raiffa, H. (1968). Decision Analysis: Introductory Lectures on Choices Under Uncertainty, Reading, Massachusetts: Addison-Wesley.
