Pseudo-measurement simulations and bootstrap for the experimental cross-section covariances estimation with quality quantification


1 Pseudo-measurement simulations and bootstrap for the experimental cross-section covariances estimation with quality quantification. S. Varet (1), P. Dossantos-Uzarralde (1), N. Vayatis (2), E. Bauge (1). (1) CEA-DAM-DIF, (2) ENS Cachan. WONDER-2012, September 2012.

2 Motivations: experimental cross-sections. [Figure: $^{55}_{25}$Mn (n,2n) cross section, EXFOR data; cross section vs energy (MeV).] Available information: the measurements and their uncertainty. Needed: an estimation of the covariance matrix and of its inverse. Example of use: evaluated cross-section uncertainty via a generalized $\chi^2$ ([VDUVB11]).

3 Motivations: experimental cross-section covariances, HOW? Empirical estimator: $\widehat{\operatorname{cov}}(F_i, F_j) = \frac{1}{n} \sum_{k=1}^{n} (F_i^{(k)} - \bar{F}_i)(F_j^{(k)} - \bar{F}_j)$, unusable as such since there is only one measurement per energy. Conventional approach: error propagation formula [IBC04], [Kes08].
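For concreteness, a minimal numpy sketch of the empirical estimator; the array layout and function name are illustrative assumptions, not from the talk:

```python
import numpy as np

# Minimal sketch of the empirical covariance estimator.
# F has shape (n, N): n repeated measurements at each of N energies.
# With n = 1 (the situation here) the estimator degenerates, which is
# what motivates the pseudo-measurement method introduced later.
def empirical_cov(F):
    n = F.shape[0]
    R = F - F.mean(axis=0)      # centred measurements F^(k) - F_bar
    return (R.T @ R) / n        # entry (i, j) is the displayed sum
```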

4 Motivations: experimental cross-section covariances, HOW? The error propagation formula [IBC04], [Kes08] reads
$\operatorname{cov}(F_i, F_j) \approx \sum_{\text{exp. param. } (q_k, q_l)} \frac{\partial F_i}{\partial q_k} \frac{\partial F_j}{\partial q_l} \operatorname{cov}(q_k, q_l)$
(experimental parameters: e.g. fission fragment yields, recoil proton yields). Drawbacks: the needed information is rarely available, and the formula rests on a linearity assumption.
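In matrix form this first-order propagation is the sandwich formula $\operatorname{cov}(F) \approx J \Sigma_q J^t$. A hedged sketch with illustrative names; in practice the Jacobian J and $\Sigma_q$ are exactly the information that is rarely available:

```python
import numpy as np

# First-order (linear) error propagation: J[i, k] = dF_i/dq_k is the
# Jacobian of the cross sections with respect to the experimental
# parameters q, and Sigma_q = cov(q). Then cov(F) ~= J Sigma_q J^T.
def propagate_cov(J, Sigma_q):
    return J @ Sigma_q @ J.T
```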

5 Motivations: experimental cross-section covariances, HOW? Neither the empirical estimator (only one measurement per energy) nor the error propagation formula [IBC04], [Kes08] is satisfactory, and a further question arises: what is the QUALITY of the covariance estimation?

6 Outline: 1. Notations; 2. Experimental covariances estimation: new method; 3. Validation; 4. Quality measure of the obtained estimation; 5. Conclusion.

7 Section 1: Notations.

8 Notations. [Figure: schematic representation of the experimental cross sections.] Throughout: N energies $e_1, \dots, e_N$; n measurements per energy; $F_i$ the measurement at $e_i$, with reported uncertainty $\sigma_i$.

9 Section 2: Experimental covariances estimation: new method.

10 Pseudo-measurements method. Given the available information (the measurements and their uncertainty), the idea is to reuse it in order to artificially increase the number of measurements. The method: 1. construction of a regression model h (SVM, polynomial, ...); 2. generation of r pseudo-measurements $(S^{(1)}, \dots, S^{(r)})$ as Gaussian noise centred on h, i.e. drawn from $\mathcal{N}\big((h(e_1), \dots, h(e_N))^t, \operatorname{diag}(\sigma_1, \dots, \sigma_N)\big)$; 3. $\widehat{\Sigma}_F$: empirical estimator of $\Sigma_F$; 4. diagonal terms are imposed. A code sketch of the four steps follows.
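A minimal Python sketch of the four steps, assuming scikit-learn's SVR for the regression model and reading the slide's diag(σ_1, ..., σ_N) as per-energy noise standard deviations; all names are illustrative, not the authors' code:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def pseudo_measurement_cov(E, F, sigma, r):
    """Sketch of the four steps (names and shapes are assumptions).

    E: (N,) energies; F: (N,) measurements (n = 1 per energy);
    sigma: (N,) reported standard deviations; r: pseudo-measurement count.
    """
    # 1. Regression model h (an SVM here; polynomial etc. would also fit).
    h = SVR(kernel="rbf").fit(E.reshape(-1, 1), F)
    hE = h.predict(E.reshape(-1, 1))           # (h(e_1), ..., h(e_N))

    # 2. r pseudo-measurements S^(k) ~ N(hE, diag(sigma_1, ..., sigma_N)).
    S = hE + sigma * rng.standard_normal((r, len(E)))

    # 3. Empirical estimator over the measurement and pseudo-measurements.
    X = np.vstack([F, S]) - hE                 # residuals about h, (1 + r, N)
    C = (X.T @ X) / (1 + r)                    # raw empirical covariance

    # 4. Diagonal terms are imposed: rescale so that C_ii = sigma_i**2,
    #    matching the sigma_i sigma_j / sqrt(V_i V_j) factor of slide 14.
    d = sigma / np.sqrt(np.diag(C))
    return C * np.outer(d, d)
```

The rescaling in step 4 is just a correlation rescaling: the empirical correlations are kept and the reported variances $\sigma_i^2$ are put back on the diagonal.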

12 The SVM regression principle [SS04], [VGS97]. Linear case: assume $F_i = h(e_i) = w \cdot e_i + b$ with $e_i \in \mathbb{R}^s$ and $w \in \mathbb{R}^s$ (here $s = 1$). The SVM regression model is a solution of the constrained optimisation problem: seek the flattest function, i.e. minimize $\frac{1}{2}\|w\|^2$, subject to $F_i - (w \cdot e_i + b) \le \varepsilon$ and $(w \cdot e_i + b) - F_i \le \varepsilon$, so that errors larger than $\varepsilon$ are not tolerated. Non-linear case: find a map of the initial energy space into a higher-dimensional space such that the problem becomes linear in the new space (mapping via a kernel).
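As an illustration of the role of ε, a toy scikit-learn fit; the data and hyperparameters are assumptions, and note that sklearn's SVR solves the usual soft-constraint variant (slack variables with a cost parameter C) rather than the hard-constraint problem displayed above:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
E = np.linspace(0.1, 20.0, 50).reshape(-1, 1)         # toy energies
F = np.sin(E).ravel() + 0.1 * rng.standard_normal(50) # noisy toy measurements

# epsilon is the half-width of the insensitivity tube; the RBF kernel plays
# the role of the non-linear map into a higher-dimensional space.
h = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(E, F)
h_of_E = h.predict(E)                                  # the regression curve h
```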

14 Pseudo-measurements method: the $\widehat{\Sigma}_F$ term at the i-th line and j-th column, for $i, j \in \{1, \dots, N\}$, is
$C_{ij} = \frac{\sigma_i \sigma_j}{n+r} \sum_{k=1}^{n} \frac{(F_i^{(k)} - h(e_i))(F_j^{(k)} - h(e_j))}{\sqrt{V_i}\,\sqrt{V_j}} + \frac{\sigma_i \sigma_j}{n+r} \sum_{k=1}^{r} \frac{(S_i^{(k)} - h(e_i))(S_j^{(k)} - h(e_j))}{\sqrt{V_i}\,\sqrt{V_j}}$,
where n = 1 and $V_i$ is the empirical variance of the (pseudo-)measurements at $e_i$; the diagonal terms are imposed as $C_{ii} = \sigma_i^2$.

15 Number r of pseudo-measurements? Constraint on r: if r is too small, $\widehat{\Sigma}_F$ is non-invertible; if r is too large, $\widehat{\Sigma}_F$ tends toward a diagonal matrix. In practice: if with r = 0 (no pseudo-measurements) $\widehat{\Sigma}_F$ is already invertible, none are needed; if it is non-invertible, take r such that $\widehat{\Sigma}_F$ becomes invertible, as in the sketch below.
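One pragmatic reading of this constraint, as a sketch; it assumes the `pseudo_measurement_cov` sketch given after slide 10:

```python
import numpy as np

# Increase r until the estimate is numerically invertible (Cholesky
# succeeds), and stop there so r stays as small as possible and the
# estimate does not degenerate toward a diagonal matrix.
def smallest_invertible_r(E, F, sigma, r_max=500):
    for r in range(1, r_max + 1):
        C = pseudo_measurement_cov(E, F, sigma, r)
        try:
            np.linalg.cholesky(C)              # fails if C is not SPD
            return r, C
        except np.linalg.LinAlgError:
            continue
    raise RuntimeError("no invertible estimate found up to r_max")
```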

16 Section 3: Validation.

17 Toy model. Goal: compare $\widehat{\Sigma}_F$ with $\Sigma_F$. In the real case $\Sigma_F$ is unknown, so we build a toy model with $\Sigma_F$ fixed, which allows a numerical validation. Toy model: $M := (M_1, \dots, M_N)^t$ with $F \sim \mathcal{N}_N(m_M, \Sigma_M)$; number of realisations of M: n = 1; N = 5. [The slide's numerical values of $m_M$ and $\Sigma_F$ were lost in transcription.] A generation sketch follows.
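A generation sketch under those assumptions; since the slide's values were lost, the mean and covariance below are placeholders of mine, chosen only to be a valid mean vector and SPD matrix:

```python
import numpy as np

rng = np.random.default_rng(2)

N = 5
m_M = np.full(N, 5.0)                       # fixed mean vector (placeholder)
A = rng.standard_normal((N, N))
Sigma_M = A @ A.T + N * np.eye(N)           # fixed SPD covariance, so the
                                            # true Sigma_F is known exactly
F = rng.multivariate_normal(m_M, Sigma_M)   # the single (n = 1) toy measurement
```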

18-21 Validation: pseudo-measurements with the toy model. [Figure, built over four slides: SVM regression on the toy model; measurements (e.g. cross sections) vs variable (e.g. energy), showing the mean $m_M$, the toy measurement, the SVM regression, and the r = 1 pseudo-measurement.]

22 Validation: pseudo-measurements with the toy model, N = 5, r = 1 (one sample of pseudo-measurements). [Figure: the real matrix $\Sigma_F$ side by side with its estimation $\widehat{\Sigma}_F$.]

23 Validation on $^{55}_{25}$Mn: (n,2n) cross section, values extracted from [MTN11]. N = 11, r = 1; $\widehat{\Sigma}_F$ is an 11x11 matrix. [Figure: SVM model, experimental cross sections and pseudo-measurements; cross section (mb) vs energy (eV).]

24 Validation on $^{55}_{25}$Mn: total cross section, values extracted from EXFOR [EXF]. N = 399, r = 1; $\widehat{\Sigma}_F$ is a 399x399 matrix. [Figure: SVM model, experimental data and pseudo-measurements; cross section (mb) vs energy (eV).] How far is the estimation from the real matrix?

25 Section 4: Quality measure of the obtained estimation.

26 Estimator quality. Criterion choice: quality of $\widehat{\Sigma}_F$ and/or quality of $\widehat{\Sigma}_F^{-1}$? Two distance criteria: 1. distance between $\widehat{\Sigma}_F$ and $\Sigma_F$, measured by the Frobenius norm, $\mathbb{E}(\|\widehat{\Sigma}_F - \Sigma_F\|_{fro}) = \mathbb{E}\Big[\big(\sum_{i=1}^{N} \sum_{j=1}^{N} (\widehat{C}_{ij} - C_{ij})^2\big)^{1/2}\Big]$; 2. distance between $\widehat{\Sigma}_F^{-1}$ and $\Sigma_F^{-1}$, measured by the Kullback-Leibler distance [LRZ08], $KL(\widehat{\Sigma}_F, \Sigma_F) = \operatorname{trace}(\widehat{\Sigma}_F^{-1} \Sigma_F) - \log \det(\widehat{\Sigma}_F^{-1} \Sigma_F) - N$. Criterion estimation: parametric bootstrap. Both criteria are sketched in code below.
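Both criteria in a few lines of numpy; a sketch, with names that are assumptions:

```python
import numpy as np

def frobenius_distance(Sigma_hat, Sigma):
    # Frobenius norm of the difference, the first criterion above.
    return np.linalg.norm(Sigma_hat - Sigma, "fro")

def kl_distance(Sigma_hat, Sigma):
    # Gaussian Kullback-Leibler distance as written on the slide ([LRZ08]):
    # trace(Sigma_hat^{-1} Sigma) - log det(Sigma_hat^{-1} Sigma) - N.
    N = Sigma.shape[0]
    M = np.linalg.solve(Sigma_hat, Sigma)   # Sigma_hat^{-1} Sigma
    _, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - N
```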

27 Bootstrap principle. The law of F, with parameters $(\theta_F, \Sigma_F)$, is unknown; from the sample of measurements $(F^{(1)}, \dots, F^{(n)})$ one builds $h$ and $\widehat{\Sigma}_F$. Resampling produces B new samples $(F^{(1)}, \dots, F^{(n)})^1, \dots, (F^{(1)}, \dots, F^{(n)})^B$, and re-estimation on each gives $\widehat{\Sigma}_F^1, \dots, \widehat{\Sigma}_F^B$. Resampling thus allows statistics on $\widehat{\Sigma}_F$: $\widehat{\mathbb{E}}(\widehat{\Sigma}_F)$, $\widehat{\operatorname{Var}}(\widehat{\Sigma}_F)$, $\widehat{\operatorname{Bias}}(\widehat{\Sigma}_F)$, ... Resample strategy: classical (draw with replacement in the initial sample $(F^{(1)}, \dots, F^{(n)})$) or parametric (simulations with a fixed law, here $\mathcal{N}(h, \widehat{\Sigma}_F)$); both are sketched below.
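The two resample strategies as minimal sketches; names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def classical_resample(sample):
    """Draw with replacement in the initial sample (F^(1), ..., F^(n))."""
    idx = rng.integers(0, len(sample), size=len(sample))
    return sample[idx]

def parametric_resample(H, Sigma_hat, n):
    """Simulate n new measurement vectors from the fixed law N(h, Sigma_hat)."""
    return rng.multivariate_normal(H, Sigma_hat, size=n)
```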

28 Estimation of $\mathbb{E}(\|\widehat{\Sigma}_F - \Sigma_F\|_{fro})$ and $KL(\widehat{\Sigma}_F, \Sigma_F)$. 1st case: with r = 0, $\widehat{\Sigma}_F$ is invertible. Usual parametric bootstrap: $(F^{(1)}, \dots, F^{(n)})^1, \dots, (F^{(1)}, \dots, F^{(n)})^B$ where $F^{(i)} \sim \mathcal{N}(H, \widehat{\Sigma}_F)$ with $H = (h(e_1), \dots, h(e_N))^t$. The $\widehat{\Sigma}_F^b$ term at the i-th line and j-th column, for $i, j \in \{1, \dots, N\}$ and $b = 1, \dots, B$:
$C_{ij}^b = \frac{\sigma_i \sigma_j}{n} \sum_{k=1}^{n} \frac{(F_i^{(k)b} - h(e_i))(F_j^{(k)b} - h(e_j))}{\sqrt{\widehat{V}_i}\,\sqrt{\widehat{V}_j}}$ and $C_{ii}^b = \sigma_i^2$.
Then $\mathbb{E}(\|\widehat{\Sigma}_F - \Sigma_F\|) \approx \frac{1}{B} \sum_{b=1}^{B} \|\widehat{\Sigma}_F^b - \widehat{\Sigma}_F\|$ and $KL(\widehat{\Sigma}_F, \Sigma_F) \approx \frac{1}{B} \sum_{b=1}^{B} KL(\widehat{\Sigma}_F^b, \widehat{\Sigma}_F)$. A bootstrap sketch follows.
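A sketch of this first-case bootstrap, reusing the `kl_distance` sketch from slide 26; it presumes, as the slide does, that the re-estimated matrices are invertible, and all names are assumptions:

```python
import numpy as np

def bootstrap_quality(H, sigma, Sigma_hat, n, B=30):
    rng = np.random.default_rng(4)
    fro, kl = [], []
    for _ in range(B):
        Fb = rng.multivariate_normal(H, Sigma_hat, size=n)  # resample, (n, N)
        Xb = Fb - H                                         # residuals about h
        Vb = (Xb**2).mean(axis=0)                           # empirical V_i
        # Re-estimation with the slide's formula for C^b_ij ...
        Cb = np.outer(sigma, sigma) * (Xb.T @ Xb) / n
        Cb /= np.sqrt(np.outer(Vb, Vb))
        np.fill_diagonal(Cb, sigma**2)                      # ... and C^b_ii
        # Monte Carlo contributions to the two criteria.
        fro.append(np.linalg.norm(Cb - Sigma_hat, "fro"))
        kl.append(kl_distance(Cb, Sigma_hat))
    return {"E_fro": np.mean(fro), "E_KL": np.mean(kl),
            "Var_fro": np.var(fro), "Var_KL": np.var(kl)}
```

The second case (slide 29) differs only in drawing the pseudo-measurement samples $S^{(k)b}$ alongside $F^{(k)b}$ and adding the corresponding second sum.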

29 Estimation of $\mathbb{E}(\|\widehat{\Sigma}_F - \Sigma_F\|_{fro})$ and $KL(\widehat{\Sigma}_F, \Sigma_F)$. 2nd case: with r = 0, $\widehat{\Sigma}_F$ is non-invertible. Two simultaneous parametric bootstraps: $(F^{(1)}, \dots, F^{(n)})^1, \dots, (F^{(1)}, \dots, F^{(n)})^B$ where $F^{(i)} \sim \mathcal{N}(H, \widehat{\Sigma}_F)$, and $(S^{(1)}, \dots, S^{(r)})^1, \dots, (S^{(1)}, \dots, S^{(r)})^B$ where $S^{(i)} \sim \mathcal{N}(H, \operatorname{diag}(\widehat{\Sigma}_F))$. The $\widehat{\Sigma}_F^b$ term at the i-th line and j-th column, for $i, j \in \{1, \dots, N\}$ and $b = 1, \dots, B$:
$C_{ij}^b = \frac{\sigma_i \sigma_j}{n+r} \sum_{k=1}^{n} \frac{(F_i^{(k)b} - h(e_i))(F_j^{(k)b} - h(e_j))}{\sqrt{\widehat{V}_i}\,\sqrt{\widehat{V}_j}} + \frac{\sigma_i \sigma_j}{n+r} \sum_{k=1}^{r} \frac{(S_i^{(k)b} - h(e_i))(S_j^{(k)b} - h(e_j))}{\sqrt{\widehat{V}_i}\,\sqrt{\widehat{V}_j}}$ and $C_{ii}^b = \sigma_i^2$,
with the same Monte Carlo averages over b as in the first case.

30 Bootstrap on the toy model: N = 5, n = 1, B = 30, r = 1 (the displayed $\Sigma_F$ values were lost in transcription). [Figure: empirical mean and variance of the distance criterion $\|\widehat{\Sigma}_F - \Sigma_F\|$ together with its bootstrap estimation $\frac{1}{B} \sum_{b=1}^{B} \|\widehat{\Sigma}_F^b - \widehat{\Sigma}_F\|$ (B = 30), plotted against the index of the sample of measurements.]

31 Bootstrap on the toy model (continued). [Figures: empirical mean and variance of $\|\widehat{\Sigma}_F - \Sigma_F\|$ and of KL, each together with its bootstrap estimation (B = 30), plotted against the index of the sample of measurements.]

32 $^{55}_{25}$Mn: (n,2n) cross section [MTN11]. [Figure: SVM model, experimental cross sections and pseudo-measurements; cross section (mb) vs energy (eV). Table: bootstrap estimates of $\mathbb{E}(\|\widehat{\Sigma}_F - \Sigma_F\|)$, $\mathbb{E}(KL(\widehat{\Sigma}_F))$, $\operatorname{Var}(\|\widehat{\Sigma}_F - \Sigma_F\|)$ and $\operatorname{Var}(KL(\widehat{\Sigma}_F))$; the numerical values were lost in transcription.]

33 $^{55}_{25}$Mn: total cross section. [Figure: SVM model, experimental data and pseudo-measurements; cross section (mb) vs energy (eV). Table: bootstrap estimates of $\mathbb{E}(\|\widehat{\Sigma}_F - \Sigma_F\|)$, $\mathbb{E}(KL(\widehat{\Sigma}_F))$, $\operatorname{Var}(\|\widehat{\Sigma}_F - \Sigma_F\|)$ and $\operatorname{Var}(KL(\widehat{\Sigma}_F))$; of the numerical values only 6.19, apparently $\operatorname{Var}(KL(\widehat{\Sigma}_F))$, survives transcription.]

34 Section 5: Conclusion.

35-38 Conclusion. $\Sigma_F$ estimation: 1. classical approaches suffer from having only one measurement per energy, from experimental information that is rarely available, and from quite unrealistic assumptions; 2. in our approach only the measurements are needed, and a quality measure of the estimation is provided. Estimator quality: estimation of $\|\widehat{\Sigma}_F - \Sigma_F\|$ and of $KL(\widehat{\Sigma}_F)$ with the bootstrap. Numerical validation: toy model. Application: application to $^{55}_{25}$Mn.

39 Future work. New approach: optimisation via a shrinkage approach (in progress). Other approaches: Kriging, cf. the poster session.

40 References.
[EXF] Experimental Nuclear Reaction Data (EXFOR).
[IBC04] M. Ionescu-Bujor and D. G. Cacuci, A comparative review of sensitivity and uncertainty analysis of large-scale systems I: deterministic methods, Nuclear Science and Engineering 147 (2004).
[Kes08] G. Kessedjian, Mesures de sections efficaces d'actinides mineurs d'intérêt pour la transmutation (measurements of minor-actinide cross sections of interest for transmutation), Ph.D. thesis, Université Bordeaux 1, 2008.
[LRZ08] E. Levina, A. Rothman, and J. Zhu, Sparse estimation of large covariance matrices via a nested lasso penalty, The Annals of Applied Statistics 2 (2008), no. 1.
[MTN11] A. Milocco, A. Trkov, and R. Capote Noy, Nuclear data evaluation of 55Mn by the EMPIRE code with emphasis on the capture cross-section, Nuclear Engineering and Design 241 (2011), no. 4.
[SS04] A. J. Smola and B. Schölkopf, A tutorial on support vector regression, Statistics and Computing 14 (2004).
[VDUVB11] S. Varet, P. Dossantos-Uzarralde, N. Vayatis, and E. Bauge, Experimental covariances contribution to cross-section uncertainty determination, NCSC2, Vienna, 2011.
[VGS97] V. Vapnik, S. Golowich, and A. Smola, Support vector method for function approximation, regression estimation, and signal processing, Advances in Neural Information Processing Systems 9 (1997).
