A Note on Efficient Conditional Simulation of Gaussian Distributions. April 2010
Arnaud Doucet
Departments of Computer Science and Statistics, University of British Columbia, Vancouver, BC, Canada

Abstract. Consider a multivariate Gaussian random vector which can be partitioned into observed and unobserved components. We review a technique proposed almost twenty years ago in the astrophysics literature to sample from the posterior Gaussian distribution of the unobserved components given the observed components [6]. This technique can be computationally cheaper than the standard approach, which requires computing the Cholesky decomposition of the posterior covariance matrix. This useful method does not appear to be widely known and has been rediscovered independently in various publications.

Keywords: forward filtering backward sampling, Gaussian processes, Kalman filter and smoother, multivariate normal distribution, state-space models.

Preliminary Remark. This note contains no original material and will never be submitted anywhere for publication. However, it might be of interest to people working with Gaussian random fields/processes, so I am making it publicly available.

1 Problem Statement

Let $Z$ be an $\mathbb{R}^n$-valued Gaussian random vector such that
\[ Z = \begin{pmatrix} X \\ Y \end{pmatrix}, \]
where $X$ takes values in $\mathbb{R}^{n_x}$ and $Y$ in $\mathbb{R}^{n_y}$. We assume that $Z$ follows a multivariate normal distribution of mean $m$ and covariance $\Sigma$, i.e. $Z \sim \mathcal{N}(m, \Sigma)$, with
\[ m = \begin{pmatrix} m_x \\ m_y \end{pmatrix} \quad \text{and} \quad \Sigma = \begin{pmatrix} \Sigma_{xx} & \Sigma_{xy} \\ \Sigma_{xy}^{\mathsf{T}} & \Sigma_{yy} \end{pmatrix}, \]
where $m_x = \mathbb{E}[X]$, $m_y = \mathbb{E}[Y]$, $\Sigma_{xx} = \operatorname{cov}(X)$, $\Sigma_{yy} = \operatorname{cov}(Y)$ and $\Sigma_{xy} = \operatorname{cov}(X, Y)$.
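As a concrete illustration of this setup, the following minimal NumPy sketch builds such a partitioned Gaussian and draws one sample of $Z$. The dimensions, seed, and the way $\Sigma$ is generated are illustrative choices of mine, not taken from the note.

```python
import numpy as np

rng = np.random.default_rng(0)

n_x, n_y = 3, 2                          # dimensions of X and Y (arbitrary)
n = n_x + n_y

# Build a random mean m and a symmetric positive-definite covariance Sigma.
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)
m = rng.standard_normal(n)

# The blocks of the partition Z = (X, Y).
m_x, m_y = m[:n_x], m[n_x:]
Sigma_xx = Sigma[:n_x, :n_x]
Sigma_xy = Sigma[:n_x, n_x:]
Sigma_yy = Sigma[n_x:, n_x:]

# One draw Z ~ N(m, Sigma); its components are X and Y.
Z = rng.multivariate_normal(m, Sigma)
X, Y = Z[:n_x], Z[n_x:]
print(X.shape, Y.shape)                  # -> (3,) (2,)
```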
It is easy to establish that, given $Y = y$, we have $X \mid Y = y \sim \mathcal{N}(m_{x|y}, \Sigma_{x|y})$, where
\[ m_{x|y} = m_x + \Sigma_{xy} \Sigma_{yy}^{-1} (y - m_y), \qquad \Sigma_{x|y} = \Sigma_{xx} - \Sigma_{xy} \Sigma_{yy}^{-1} \Sigma_{xy}^{\mathsf{T}}. \]
Assume we are interested here in sampling from $\mathcal{N}(m_{x|y}, \Sigma_{x|y})$. The standard approach consists of computing the Cholesky decomposition of $\Sigma_{x|y}$, denoted here $\Sigma_{x|y}^{1/2}$, and using
\[ X = m_{x|y} + \Sigma_{x|y}^{1/2} U, \]
where $U \sim \mathcal{N}(0, I)$ is an $n_x$-dimensional vector of independent standard normal random variables. It is indeed easy to check that $X \sim \mathcal{N}(m_{x|y}, \Sigma_{x|y})$. However, it might be too expensive to compute this Cholesky decomposition if $n_x \gg 1$.

2 Method

2.1 Algorithm

The algorithm proposed in [6] to sample from $\mathcal{N}(m_{x|y}, \Sigma_{x|y})$ can be summarized as follows.

- Sample $\widetilde{Z} = (\widetilde{X}, \widetilde{Y}) \sim \mathcal{N}(m, \Sigma)$.
- Return $X = \widetilde{X} + \Sigma_{xy} \Sigma_{yy}^{-1} (y - \widetilde{Y})$.

Compared to the standard method, this algorithm bypasses the computation of the posterior covariance $\Sigma_{x|y}$ and of its Cholesky decomposition. Contrary to the standard method, it requires being able to simulate a random vector from the prior and to use a standard regression update. In many applications, it is computationally much cheaper and easier to implement this algorithm than the standard method.

2.2 Validity of the algorithm

To establish that $X \sim \mathcal{N}(m_{x|y}, \Sigma_{x|y})$, we note that, since $\mathbb{E}[\widetilde{X} \mid \widetilde{Y}] = m_x + \Sigma_{xy} \Sigma_{yy}^{-1} (\widetilde{Y} - m_y)$, the returned $X$ satisfies
\[ X = m_{x|y} + \left( \widetilde{X} - \mathbb{E}[\widetilde{X} \mid \widetilde{Y}] \right). \tag{1} \]
It follows that
\[ \mathbb{E}[X \mid \widetilde{Y}] = m_{x|y} + \mathbb{E}[\widetilde{X} \mid \widetilde{Y}] - \mathbb{E}[\widetilde{X} \mid \widetilde{Y}] = m_{x|y}. \]
Hence, we have $\mathbb{E}[X] = \mathbb{E}\big[\mathbb{E}[X \mid \widetilde{Y}]\big] = m_{x|y}$.
We also have
\[ \operatorname{cov}[X \mid \widetilde{Y}] = \operatorname{cov}\big[\widetilde{X} - \mathbb{E}[\widetilde{X} \mid \widetilde{Y}] \;\big|\; \widetilde{Y}\big] = \Sigma_{x|y}, \]
as the posterior covariance is independent of the specific realization of the observations. Hence, we obtain $\operatorname{cov}[X] = \Sigma_{x|y}$, which establishes the validity of the sampling method.

3 Applications

To the best of our knowledge, this algorithm first appeared in astrophysics, where it was applied to Gaussian random fields [6]; see [7] for a recent review. In this context, $n_x$ is so large that it is virtually impossible to compute $\Sigma_{x|y}$ and its Cholesky decomposition. This algorithm might also prove useful for Gaussian process applications arising in spatial statistics [2] and machine learning [8]. We present here two different applications of this algorithm which have been derived independently from [6].

3.1 Ensemble Kalman filter

Consider a linear Gaussian state-space model satisfying, for $n \geq 1$,
\[ X_n = A X_{n-1} + V_n, \qquad Y_n = C X_n + W_n, \]
where $X_0 \sim \mathcal{N}(0, \Sigma_0)$, $V_n \sim \mathcal{N}(0, \Sigma_v)$ and $W_n \sim \mathcal{N}(0, \Sigma_w)$. For any generic sequence $\{z_k\}_{k \geq 0}$, let us denote $z_{i:j} = (z_i, z_{i+1}, \ldots, z_j)$. We are interested in the posterior densities $\{p(x_n \mid y_{1:n})\}_{n \geq 1}$. These posterior densities are Gaussian, and their statistics $m_{x,n} = \mathbb{E}[X_n \mid y_{1:n}]$ and $\Sigma_{xx,n} = \operatorname{cov}[X_n \mid y_{1:n}]$ can be computed using the Kalman filter. However, if the dimension $n_x$ of the state $X_n$ is very high, then it is not possible to implement the Kalman filter equations. This has motivated the development of approximation techniques in geosciences. A very popular approach in this field is known as the Ensemble Kalman filter [4].

In the ensemble Kalman filter, the posterior distributions are approximated by random samples. Assume that at time $n-1$ you have $N$ samples $X_{n-1}^{(i)} \sim \mathcal{N}(m_{x,n-1|n-1}, \Sigma_{xx,n-1|n-1})$, $i = 1, \ldots, N$, where $m_{x,n-1|n-1}$, $\Sigma_{xx,n-1|n-1}$ are estimates of $m_{x,n-1}$, $\Sigma_{xx,n-1}$. Then, at time $n$, the algorithm proceeds as follows.

- Sample $\widetilde{X}_n^{(i)} \sim \mathcal{N}\big(A X_{n-1}^{(i)}, \Sigma_v\big)$ and $\widetilde{Y}_n^{(i)} \sim \mathcal{N}\big(C \widetilde{X}_n^{(i)}, \Sigma_w\big)$.
- Compute
\[ m_{x,n|n-1} = \frac{1}{N} \sum_{i=1}^{N} \widetilde{X}_n^{(i)}, \qquad m_{y,n|n-1} = \frac{1}{N} \sum_{i=1}^{N} \widetilde{Y}_n^{(i)}, \]
\[ \Sigma_{yy,n|n-1} = \frac{1}{N} \sum_{i=1}^{N} \widetilde{Y}_n^{(i)} \widetilde{Y}_n^{(i)\mathsf{T}} - m_{y,n|n-1} m_{y,n|n-1}^{\mathsf{T}}, \qquad \Sigma_{xy,n|n-1} = \frac{1}{N} \sum_{i=1}^{N} \widetilde{X}_n^{(i)} \widetilde{Y}_n^{(i)\mathsf{T}} - m_{x,n|n-1} m_{y,n|n-1}^{\mathsf{T}}. \]
- Compute $X_n^{(i)} = \widetilde{X}_n^{(i)} + \Sigma_{xy,n|n-1} \Sigma_{yy,n|n-1}^{-1} \big( y_n - \widetilde{Y}_n^{(i)} \big)$.
- Compute
\[ m_{x,n} = \frac{1}{N} \sum_{i=1}^{N} X_n^{(i)} \quad \text{and} \quad \Sigma_{xx,n} = \frac{1}{N} \sum_{i=1}^{N} X_n^{(i)} X_n^{(i)\mathsf{T}} - m_{x,n} m_{x,n}^{\mathsf{T}}. \]

As $N$ goes to infinity, it follows directly from the previous developments that $X_n^{(i)} \sim \mathcal{N}(m_{x,n}, \Sigma_{xx,n})$.

3.2 Posterior simulation in Gaussian state-space models

Consider again a linear Gaussian state-space model
\[ X_n = A X_{n-1} + V_n, \tag{2} \]
\[ Y_n = C X_n + W_n, \tag{3} \]
where $X_0 \sim \mathcal{N}(0, \Sigma_0)$, $V_n \sim \mathcal{N}(0, \Sigma_v)$ and $W_n \sim \mathcal{N}(0, \Sigma_w)$. Let us denote $y_{1:n} = (y_1, y_2, \ldots, y_n)$. When implementing a Markov chain Monte Carlo (MCMC) algorithm to estimate the hyperparameters of this model, it is usually necessary to sample from $p(x_{0:n} \mid y_{1:n})$. This is typically achieved using the Forward Filtering Backward Sampling (FFBS) technique [1], [5]. An alternative to this well-known technique is given by the following algorithm [3].

- Sample $(\widetilde{X}_{0:n}, \widetilde{Y}_{1:n})$ using Eq. (2)-(3).
- Use the Kalman smoother to compute both $\mathbb{E}[\widetilde{X}_{0:n} \mid \widetilde{Y}_{1:n}]$ and $\mathbb{E}[X_{0:n} \mid y_{1:n}]$.
- Return $X_{0:n} = \mathbb{E}[X_{0:n} \mid y_{1:n}] + \big( \widetilde{X}_{0:n} - \mathbb{E}[\widetilde{X}_{0:n} \mid \widetilde{Y}_{1:n}] \big)$.

The fact that $X_{0:n} \sim p(x_{0:n} \mid y_{1:n})$ follows directly from Eq. (1). A minor advantage of this method over the FFBS approach is that it only relies on standard Kalman smoothing code.

Actually, the algorithm discussed in [3] is slightly different. In this paper, the authors propose to sample from $p(x_0, v_{1:n} \mid y_{1:n})$ instead of $p(x_{0:n} \mid y_{1:n})$, using the disturbance smoother $\mathbb{E}[V_{1:n} \mid Y_{1:n}]$. The rationale for sampling from $p(x_0, v_{1:n} \mid y_{1:n})$ is that $\Sigma_v$ is typically a low-rank matrix.

References

[1] C.K. Carter and R. Kohn, On Gibbs sampling for state space models, Biometrika, vol. 81, pp. 541-553, 1994.
[2] N. Cressie, Statistics for Spatial Data, New York: Wiley, 1993.
[3] J. Durbin and S.J. Koopman, A simple and efficient simulation smoother for state space time series analysis, Biometrika, vol. 89, 2002.
[4] G. Evensen, Data Assimilation: The Ensemble Kalman Filter, 2nd ed., Springer, 2009.
[5] S. Frühwirth-Schnatter, Data augmentation and dynamic linear models, J. Time Series Analysis, vol. 15, 1994.
[6] Y. Hoffman and E. Ribak, Constrained realizations of Gaussian fields - a simple algorithm, The Astrophysical Journal, vol. 380, pp. L5-L8, Oct. 1991.
[7] Y. Hoffman, Gaussian fields and constrained simulations of the large-scale structure, in Data Analysis in Cosmology, Lecture Notes in Physics, Berlin: Springer-Verlag, 2009.
[8] C.E. Rasmussen and C.K.I. Williams, Gaussian Processes for Machine Learning, MIT Press, 2006.
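As a self-contained numerical check of the algorithm of Section 2.1, the following sketch verifies that draws produced from the prior plus a regression update have empirical mean and covariance matching $m_{x|y}$ and $\Sigma_{x|y}$. All dimensions, seeds, and variable names are illustrative assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(1)

n_x, n_y = 4, 2
n = n_x + n_y

# A fixed partitioned Gaussian N(m, Sigma) and an observed value y.
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)          # symmetric positive definite
m = rng.standard_normal(n)
m_x, m_y = m[:n_x], m[n_x:]
S_xx = Sigma[:n_x, :n_x]
S_xy = Sigma[:n_x, n_x:]
S_yy = Sigma[n_x:, n_x:]
y = rng.standard_normal(n_y)

# Exact posterior moments m_{x|y} and Sigma_{x|y} (Section 2).
K = np.linalg.solve(S_yy, S_xy.T).T      # K = Sigma_xy Sigma_yy^{-1}
m_post = m_x + K @ (y - m_y)
S_post = S_xx - K @ S_xy.T

# Algorithm of [6]: sample (X~, Y~) from the prior, then apply the
# regression update X = X~ + Sigma_xy Sigma_yy^{-1} (y - Y~).
L_joint = np.linalg.cholesky(Sigma)      # one factorization of the *prior*

def sample_conditional():
    Z = m + L_joint @ rng.standard_normal(n)
    X_tilde, Y_tilde = Z[:n_x], Z[n_x:]
    return X_tilde + K @ (y - Y_tilde)

draws = np.array([sample_conditional() for _ in range(100_000)])

# Both gaps are sampling error only and shrink as the number of draws grows.
print(np.abs(draws.mean(axis=0) - m_post).max())
print(np.abs(np.cov(draws.T, bias=True) - S_post).max())
```

In this toy example the standard Cholesky approach would work just as well; the appeal of the algorithm is that sampling from the prior may remain feasible (e.g. via circulant embedding and FFTs for stationary Gaussian random fields) when factorizing $\Sigma_{x|y}$ is not.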
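The per-member update in the ensemble Kalman filter of Section 3.1 is the same regression update, applied with empirical moments. A one-step sketch follows; the model matrices, noise covariances, and ensemble size are illustrative assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(2)

n_x, n_y, N = 3, 2, 50_000               # state dim, obs dim, ensemble size
A = 0.9 * np.eye(n_x)                    # illustrative model matrices
C = rng.standard_normal((n_y, n_x))
Sigma_v = 0.5 * np.eye(n_x)
Sigma_w = 0.2 * np.eye(n_y)
y_n = rng.standard_normal(n_y)           # the observation at time n

# Ensemble approximating p(x_{n-1} | y_{1:n-1}); rows are members.
X_prev = rng.standard_normal((N, n_x))

# Propagate through the dynamics and the observation equation.
X_tilde = X_prev @ A.T + rng.multivariate_normal(np.zeros(n_x), Sigma_v, N)
Y_tilde = X_tilde @ C.T + rng.multivariate_normal(np.zeros(n_y), Sigma_w, N)

# Empirical means and covariances of the predicted ensemble.
m_x = X_tilde.mean(axis=0)
m_y = Y_tilde.mean(axis=0)
Xc, Yc = X_tilde - m_x, Y_tilde - m_y
S_yy = Yc.T @ Yc / N
S_xy = Xc.T @ Yc / N

# Regression update of each ensemble member (the algorithm of [6]).
K = np.linalg.solve(S_yy, S_xy.T).T      # S_xy S_yy^{-1}
X_new = X_tilde + (y_n - Y_tilde) @ K.T

m_post = X_new.mean(axis=0)              # estimates of m_{x,n} and Sigma_{xx,n}
S_post = (X_new - m_post).T @ (X_new - m_post) / N
print(m_post.shape, S_post.shape)        # -> (3,) (3, 3)
```

By construction, the empirical covariance of the updated ensemble equals the empirical $\Sigma_{xx} - \Sigma_{xy} \Sigma_{yy}^{-1} \Sigma_{xy}^{\mathsf{T}}$ of the predicted ensemble, so no posterior covariance is ever formed explicitly.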
More information