Modeling biological systems - The Pharmaco-Kinetic-Dynamic paradigm


1 Modeling biological systems - The Pharmaco-Kinetic-Dynamic paradigm
Main features:
1. Time profiles and complex systems
2. Disturbed and sparse measurements
3. Mixed variabilities
"One of the most highly developed skills in contemporary Western civilization is dissection: the split-up of problems into their smallest possible components. We are good at it. So good, we often forget to put the pieces back together again."
Alvin Toffler, Foreword "Science and change", in: Ilya Prigogine (1977 Nobel Prize in Chemistry), Isabelle Stengers, Order out of Chaos. Man's New Dialogue with Nature, Bantam, NY, 1984

2 CHAPT I: Processes and models
Real process and mathematical model - The functional scheme. Modeling in modern PKs.
Structure and parameters in models - Computational tools: matrix calculus. Linear and nonlinear operators.
The parsimony principle - Redundancy and misspecification. Indexes for goodness of fit: RMSE and determination coefficient. The fotemustine example: modeling ANC toxicity.
Dynamic functional scheme - Initialization, iterations, parameter convergence. Residual error.
Real-time processing of data - Bayesian estimation, the Bayesian attractor. Geometric analysis. Influence of the noise in parameter estimates.
Model validation - State and parameter spaces. Checking structural and parametric identifiability. Optimize PK experiments.
Building and using models - Identification, simulation, dosage adjustment. Synthesis of main tasks. Controlled systems, cybernetics.

3 Real process and mathematical model
The real process is observed as a time profile y [ng/ml] vs. t [h]; the fitted model smooths these observations.
Mathematical model: y(t) = (D / V) exp(-k t), with V = 6 L and k = 0.1 1/h.
[Figure: real-process observations, the fitted model, and the mathematical model curve, all as y [ng/ml] vs. t [h].]
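A minimal sketch of evaluating this model in Python; only V = 6 L and k = 0.1 1/h come from the slide, while the dose and sampling times below are assumptions for illustration:

```python
import numpy as np

# One-compartment bolus model from the slide: y(t) = (D / V) * exp(-k * t).
def one_cpt(t, D, V, k):
    return (D / V) * np.exp(-k * t)

t = np.linspace(0.0, 24.0, 9)       # sampling times [h] (hypothetical design)
D = 100.0                           # dose [mg] (assumed, not given on the slide)
print(one_cpt(t, D, V=6.0, k=0.1))  # predicted concentrations
```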

4 Standardize observations
Ex: Administer the same dose to 2 different subjects and compare kinetics.
The real sampling times [h] differ between subject no. 1 and subject no. 2, so their measured concentrations are not aligned in time.
Heterogeneity in real sampling: the comparison is not possible. Compare the kinetics outside the "world" of concentrations.
Transformation: use a mathematical model smoothing the observations and computing PK parameters; then compare the PK parameters.

5 Compare PKs for 3 subjects
Dataset: in general, each subject j yields an estimated parameter vector x̂_j = [V̂ k̂]^T, j = 1:n, with V̂ [L] and k̂ [1/h].
Subjects are statistically distinguishable: thanks to V for subjects 2 and 3, thanks to k for subjects 1 and 2.
[Figure: estimated (V, k) points with their dispersion for the 3 subjects in the k [1/h] vs. V [L] plane.]

6 Functional scheme
Administration protocol -> PK process (disturbed by measurement noise) -> Observation.
Administration protocol -> PK model (initialized with prior information) -> Prediction.
Observation and prediction are compared through the equivalence criterion; nonlinear programming updates the PK model.

7 Topics in modern PKs
PK process - Classical PKs: analyze physiology, drug characteristics, biopharmacy, etc.
PK mathematical models - Structural and parametric aspects. Error variance models.
Equivalence criterion - Bayesian, maximum likelihood estimators.
Nonlinear programming - Iterative methods, heuristic algorithms, quasi-Newton methods.
Prior information - Population approaches, parametric and nonparametric methods.

8 Modeling
Models - Dialectic, mental, physical, mathematical, etc.
Elements - Structural and parametric knowledge, state variable.
Goals - A model is proposed to be efficient for a specified action: recognition, control, simulation, etc.
Validation - Experimental and statistical tools.
Applications in: engineering (fast dynamics); biology, sociology, economics, PKs (slow dynamics); physics, astronomy, PDs (mixed dynamics).
Ex: Tourist map of the city of Marseille.

9 Species of models
Static / dynamic (time dimension, differential equations).
Discrete time / continuous time (computer control).
Linear / nonlinear (which is the reference?).
Deterministic / statistic (precision of estimates), stochastic models (dispersion of parameters).
Input-Output form / state-space model: representation (macro-rates) / knowledge (micro-rates) models.
Lumped / distributed parameter models (space-time differential equations).
Time invariant / time varying (parameters).
In PKs: { dynamic, continuous time, [nonlinear w.r.t. PK parameters, linear w.r.t. drug amounts], statistical and stochastic, I-O form and state-space, lumped, time invariant }.

10 Mathematical modeling
Models are defined by:
- their structure (number and connectivity of compartments, etc.), expressed by mathematical operations involving adjustable parameters. Ex: the 1-cpt model y(t) = (D / V) exp(-k t), an exponential structure with parameters x = [V k]^T;
- the numerical value of the parameters used, e.g. V = 6 L, k = 0.1 1/h.
CHARACTERIZATION (structure) + ESTIMATION (parameters) = MODELING (system identification).

11 Matrix definitions
An m-by-n array of numbers a_ij, i = 1:m, j = 1:n, is called a matrix of order m-by-n.
Square matrix if m = n. Column vector if n = 1. Row vector if m = 1. Scalar if m = n = 1.
A square matrix may be: symmetric if a_ij = a_ji; diagonal if a_ij = 0 for i != j; etc.
A diagonal matrix with a_ii = 1, i = 1:n, is the identity matrix I.
Notation: matrix A (capital), matrix element a_ij (lower case), vector a (lower case, underlined).

12 Matrix algebra
Transposition: A (m-by-n) -> C = A^T (n-by-m), with c_ij = a_ji.
Addition: A, B (m-by-n) -> C = A + B (m-by-n), with c_ij = a_ij + b_ij.
Multiplication: A (m-by-n), B (n-by-q) -> C = A B (m-by-q), with c_ij = sum_{k=1}^{n} a_ik b_kj.
Square matrix: determinant |A| (characteristic scalar); ex. |A| = a_11 a_22 - a_12 a_21 for n = 2.
Inversion: C = A^-1 such that C A = A C = I, with c_ij = (-1)^(i+j) |A_ji| / |A|.
Rules: (A B)^T = B^T A^T, (A B)^-1 = B^-1 A^-1, (A^T)^-1 = (A^-1)^T.
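These operations and rules are easy to check numerically; a small NumPy sketch (the matrices A and B are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 1.0]])

C = A @ B                      # multiplication: c_ij = sum_k a_ik * b_kj
detA = np.linalg.det(A)        # determinant: a11*a22 - a12*a21 = -2
Ainv = np.linalg.inv(A)        # inversion: A^-1 such that A^-1 A = I

assert np.allclose(Ainv @ A, np.eye(2))        # C A = A C = I
assert np.allclose((A @ B).T, B.T @ A.T)       # (A B)^T = B^T A^T
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ Ainv)    # (A B)^-1 = B^-1 A^-1
```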

13 Operators: Analytic geometry - 1
Rotation of a rectangular coordinate system by angle omega, linking old coordinates [X, Y] and new coordinates [x, y]:
x = X cos(omega) + Y sin(omega)
y = -X sin(omega) + Y cos(omega)
Matrix form: c_new = Omega c_old, with c_old = [X Y]^T, c_new = [x y]^T and
Omega = [ cos(omega)  sin(omega) ; -sin(omega)  cos(omega) ].
Matrix inversion recovers the old coordinates: c_old = Omega^-1 c_new.

14 Operators: Analytic geometry - 2
Conversion of rectangular to polar coordinates:
r = sqrt(X^2 + Y^2), xi = arctan(Y / X).
Inversely: X = r cos(xi), Y = r sin(xi).
It is impossible to link [r, xi] and [X, Y] by a matrix operator: the conversion relations are nonlinear.
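A short sketch contrasting the two conversions (the angle and point are arbitrary): the rotation is a matrix operator and is invertible as such, while the polar conversion violates superposition:

```python
import numpy as np

omega = np.deg2rad(30.0)                       # rotation angle
Omega = np.array([[ np.cos(omega), np.sin(omega)],
                  [-np.sin(omega), np.cos(omega)]])

c_old = np.array([2.0, 1.0])                   # [X, Y]
c_new = Omega @ c_old                          # c_new = Omega c_old (linear)
assert np.allclose(np.linalg.inv(Omega) @ c_new, c_old)  # inversion recovers [X, Y]

# Rectangular-to-polar conversion admits no such matrix form:
X, Y = c_old
r, xi = np.hypot(X, Y), np.arctan2(Y, X)       # r = sqrt(X^2+Y^2), xi = arctan(Y/X)
# Doubling [X, Y] doubles r but leaves xi unchanged -- superposition fails.
r2, xi2 = np.hypot(2 * X, 2 * Y), np.arctan2(2 * Y, 2 * X)
assert np.isclose(r2, 2 * r) and np.isclose(xi2, xi)
```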

15 Linear operators
Let:
1. e_1(t) -T-> s_1(t)
2. e_2(t) -T-> s_2(t)
For the input combination g e_1(t) + h e_2(t) -T-> ??
The operator T is linear if and only if ?? = g s_1(t) + h s_2(t).

16 PK example - 1
For the 1-cpt model y(t) = (D / V) exp(-k t), associate the input e(t) to D and the output s(t) to y(t).
The model is a linear operator with respect to D because when
y_1(t) = (D_1 / V) exp(-k t) and y_2(t) = (D_2 / V) exp(-k t),
then for the dose g D_1 + h D_2:
((g D_1 + h D_2) / V) exp(-k t) = g y_1(t) + h y_2(t).
PK linearity (proportionality).

17 PK example - 2
For the 1-cpt model y(t) = (D / V) exp(-k t), associate the input e(t) to k and the output s(t) to y(t).
The model is a nonlinear operator with respect to k because when
y_1(t) = (D / V) exp(-k_1 t) and y_2(t) = (D / V) exp(-k_2 t),
then for the combination g k_1 + h k_2:
(D / V) exp(-(g k_1 + h k_2) t) != g y_1(t) + h y_2(t).
Parametric nonlinearity.
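A numerical check of both statements (all numeric values below are arbitrary illustrations):

```python
import numpy as np

def y(t, D, V, k):                  # 1-cpt model of slides 16-17
    return (D / V) * np.exp(-k * t)

t, V, g, h = 5.0, 6.0, 2.0, 3.0     # arbitrary time, volume and weights

# Linearity with respect to the dose D (superposition holds):
D1, D2, k0 = 50.0, 80.0, 0.1
assert np.isclose(y(t, g * D1 + h * D2, V, k0),
                  g * y(t, D1, V, k0) + h * y(t, D2, V, k0))

# Nonlinearity with respect to the rate constant k (superposition fails):
D0, k1, k2 = 100.0, 0.1, 0.3
assert not np.isclose(y(t, D0, V, g * k1 + h * k2),
                      g * y(t, D0, V, k1) + h * y(t, D0, V, k2))
```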

18 Choose the best model
Rule 1: The model should be a necessary and sufficient description of the real process:
necessary - good fitting of the observed data;
sufficient - no redundancy of parameters in the structure.
Rule 2: The model structure should be parsimonious: it adequately represents the real process with the smallest number of parameters.
Ex: the 1-cpt model is misspecified, the 3-cpt model is redundant; the 2-cpt model is the best model.

19 Fotemustine dataset
Rectangular array, one row per individual i = 1:n, n = 29. Columns: ID, baseline count ANC_BASE, nadir count ANC_Nadir, dose D, clearance CL, body surface area BSA, weight W.
Output / observations: relative ANC count y_i = (ANC_BASE,i - ANC_Nadir,i) / ANC_BASE,i.

20 Modeling neutrophil toxicity
Logit model: y_Mi = e^(z_i) / (1 + e^(z_i)), with z_i = x_1 + D_i x_2 + CL_i x_3 + BSA_i x_4 + ... + W_i x_p,
written compactly z_i = g_i^T x, where g_i = [1 D_i CL_i BSA_i ... W_i]^T gathers the data for individual i and x = [x_1 ... x_p]^T the constants / parameters.
Problem: obtain numeric values x̂ so that the predictions y_Mi fit the observations y_i.
Application: predict toxicity for a new individual with given data g_0: y_M0 = logit^-1( g_0^T x̂ ).
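A sketch of this fit; only the logit structure and the prediction step come from the slide, while the covariate values and parameters below are synthetic assumptions (the fotemustine data are not given in the transcription):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n = 29                                           # as in the fotemustine dataset
G = np.column_stack([np.ones(n),                 # g_i = [1, D_i, CL_i, BSA_i, W_i]^T
                     rng.uniform(80, 120, n),    # dose D       (synthetic values)
                     rng.uniform(5, 15, n),      # clearance CL (synthetic values)
                     rng.uniform(1.5, 2.1, n),   # BSA          (synthetic values)
                     rng.uniform(50, 90, n)])    # weight W     (synthetic values)
x_true = np.array([-3.0, 0.04, -0.10, 0.50, 0.01])
y = 1 / (1 + np.exp(-(G @ x_true))) + rng.normal(0, 0.05, n)  # observed y_i

def residuals(x):                                # fit y_M to y by least squares
    return 1 / (1 + np.exp(-(G @ x))) - y

x_hat = least_squares(residuals, x0=np.zeros(G.shape[1])).x

g0 = np.array([1.0, 100.0, 10.0, 1.8, 70.0])     # a new individual's data
y_M0 = 1 / (1 + np.exp(-(g0 @ x_hat)))           # predicted relative toxicity
```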

21 Indexes for the goodness of fit
Regression line y_i = a y_Mi + b + e_i, with correlation coefficient R.
Coefficient of determination: R^2 = 1 - var(y_i - y_Mi) / var(y_i) is the part of the data variability (with respect to the average y) explained by the model.
Root Mean Squared Error: RMSE = sqrt( sum_{i=1}^{n} w_i (y_i - y_Mi)^2 / (n - p) ).
RMSE takes into account the sample size n, the number of model parameters p, and the closeness of the predictions y_Mi to the observations y_i.
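Both indexes are straightforward to compute; a small helper following the slide's definitions, assuming unit weights by default:

```python
import numpy as np

def goodness_of_fit(y, y_M, p, w=None):
    """Determination coefficient R^2 and weighted RMSE as defined on the slide."""
    y, y_M = np.asarray(y, float), np.asarray(y_M, float)
    n = y.size
    w = np.ones(n) if w is None else np.asarray(w, float)
    r2 = 1.0 - np.var(y - y_M) / np.var(y)                 # variability explained
    rmse = np.sqrt(np.sum(w * (y - y_M) ** 2) / (n - p))   # penalizes large p
    return r2, rmse

# Usage on a toy fit with p = 2 parameters:
r2, rmse = goodness_of_fit([15.0, 11.2, 9.1], [14.6, 11.5, 9.0], p=2)
```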

22 Reduce the model structure
Stepwise reduction: at each step, estimate the coefficient of variation CV of every remaining parameter, remove the parameter with the largest CV, and refit while monitoring the RMSE (%).
[Table: the subsets of parameters x_1 ... x_p retained (*) at each reduction step, the largest coefficient of variation CV at each step, and the corresponding RMSE (%).]

23 Redundancy, misspecification
Misspecification: high RMSE, the necessary condition (NC) is not satisfied.
Redundancy: increasing RMSE as p grows, the sufficient condition (SC) is not satisfied.
[Figure: RMSE (%) and R^2 vs. the number of parameters p.]
Warning: R^2, the squared correlation coefficient, does not detect redundancy!

24 Functional scheme (dynamic)
Model linear in the parameters: no loop, one-stage estimation. Model nonlinear in the parameters: many loops until convergence.
Administration protocol -> PK process (disturbed by measurement noise) -> observation; administration protocol -> PK model (arbitrary initial values, prior information) -> prediction; the equivalence criterion drives nonlinear programming, which updates the PK model.

25 Iterations, parameter convergence
Ex: Fotemustine neutrophil toxicity, nonlinear modeling with n = 29, p = 8.
[Figure: the 1st, 2nd, 3rd, ... parameter values and the RMSE vs. the iteration number, from arbitrary initial values to the optimized final values.]
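A hedged re-creation of such an iteration trace on simulated 1-cpt data (the sampling design, dose and noise level are assumptions), using a quasi-Newton method as mentioned on slide 7:

```python
import numpy as np
from scipy.optimize import minimize

t = np.array([1.0, 2.0, 4.0, 6.0, 8.0])            # sampling times [h] (assumed)
D = 100.0                                          # dose [mg] (assumed)
rng = np.random.default_rng(1)
y_obs = (D / 6.0) * np.exp(-0.1 * t) * (1 + rng.normal(0, 0.05, t.size))

def rmse(theta):                                   # theta = [ln V, ln k] keeps V, k > 0
    V, k = np.exp(theta)
    y_M = (D / V) * np.exp(-k * t)
    return np.sqrt(np.sum((y_obs - y_M) ** 2) / (t.size - 2))

trace = []                                         # RMSE after each iteration
res = minimize(rmse, x0=np.log([1.0, 1.0]),        # arbitrary initial values
               method='BFGS',                      # quasi-Newton iterations
               callback=lambda th: trace.append(rmse(th)))
print(np.exp(res.x), trace)                        # optimized (V, k), falling RMSE
```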

26 Errors in the functional scheme
The existing errors: experimental, structural, parametric.
Residual error: experimental + structural (model misspecification).
The initial parametric error is canceled at convergence; the RMSE decreases with the iterations down to the residual error.

27 The system identification loop
Experimental Design -> Dataset; Prior Knowledge -> Choose Model Structure; Dataset + Structure -> Calculate Model; Choose Criterion of Fit; Validate Model.
Not OK: revise. OK: use it!

28 Real-time processing of data
In batch or off-line: wait until the last observation, then process all data at once ({ m observations -> 1 data processing }).
Sequentially or in real-time: perform modeling while the observations become available ({ 1 observation -> 1 datum processing }, m times).
Warning: in the first stages, the lack of individual information should be offset by the prior information (attractor - population studies). As the number of individual samples increases, the prior information can be forgotten. Bayesian estimation.

29 Estimates in real-time processing
Only one observation (t_1, y_1): y_1 = (D / V) exp(-k t_1) -> infinite pairs of values for (V, k).
Two observations: y_1 = (D / V) exp(-k t_1) and y_2 = (D / V) exp(-k t_2) -> exact solution
k = ln(y_1 / y_2) / (t_2 - t_1), V = (D / y_1) exp(-k t_1).
More than two observations: no exact solution; find (V, k) minimizing the total discrepancy between the observations (t_i, y_i), i = 1:m, and the predictions, i.e. minimize the RMSE.
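A sketch of both cases; the observed pairs are hypothetical values roughly consistent with the reference V = 6 L, k = 0.1 1/h:

```python
import numpy as np
from scipy.optimize import least_squares

D = 100.0                                        # dose [mg] (assumed)

# m = 2: exact solution from the two formulas above.
(t1, y1), (t2, y2) = (2.0, 13.6), (6.0, 9.1)     # hypothetical observed pairs
k = np.log(y1 / y2) / (t2 - t1)                  # k = ln(y1/y2) / (t2 - t1)
V = (D / y1) * np.exp(-k * t1)                   # V = (D/y1) exp(-k t1)

# m > 2: no exact solution, minimize the total discrepancy instead.
t = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
y = np.array([15.0, 13.6, 11.2, 9.1, 7.6])
fit = least_squares(lambda x: (D / x[0]) * np.exp(-x[1] * t) - y, x0=[V, k])
print(V, k, fit.x)                               # exact 2-point vs. least-squares fit
```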

30 Equivalence criterion, real-time
Indexing observation times i = 1:m; individual j is associated with parameters x_j.
Equivalence criterion with intra-kinetic weighting: SE = sum_{i=1}^{m} w_i ( obs_i - pred_i(x_j) )^2.
In real-time, when m < p, SE alone leads to infinite solutions.
Bayesian attractor (inter-kinetic weighting): select the most appealing solution by using an attractor located at x_A and associated with force of attraction D_A:
prior = (x_j - x_A)^T D_A^-1 (x_j - x_A).
Combine the criteria to obtain ln(SE) + ln(prior).
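The slide combines the two criteria in log form; the sketch below uses the standard Gaussian (MAP) sum of the SE and prior terms instead, which plays the same role of selecting a unique, attractor-regularized solution when m < p (all numeric values are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

D, sigma2 = 100.0, 0.5 ** 2                    # dose and error variance (assumed)
t_obs = np.array([2.0])                        # m = 1 observation, p = 2 parameters:
y_obs = np.array([13.6])                       # SE alone has infinite minimizers

x_A = np.array([6.0, 0.1])                     # attractor location (population V, k)
D_A_inv = np.linalg.inv(np.diag([2.0 ** 2, 0.03 ** 2]))  # force of attraction

def criterion(x):
    pred = (D / x[0]) * np.exp(-x[1] * t_obs)
    se = np.sum((y_obs - pred) ** 2) / sigma2          # individual (likelihood) term
    prior = (x - x_A) @ D_A_inv @ (x - x_A)            # Bayesian attractor term
    return se + prior                                  # Gaussian MAP combination

x_hat = minimize(criterion, x0=x_A, method='Nelder-Mead').x
print(x_hat)                                   # unique, attractor-regularized solution
```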

31 Concept of simulation
Administration protocol -> PK model (no structural error, known parameter values x_0) -> prediction y_M(t_i, x_0); a random generator of measurement noise disturbs the prediction to mimic the observation of a PK process.
The simulated observations can then be fed through the equivalence criterion and nonlinear programming, exactly as real data.

32 Geometric analysis
Model y(t) = (D / V) exp(-k t). Each observed pair (t_i, y_i) is represented by a geometric locus in the parameter space:
k = (1 / t_i) ln( D / (V y_i) ).
Reference values: V = 6 L, k = 0.1 1/h. Experimental design: dose D [mg], measurement error 5%.
[Figure: concentration profile y [ng/ml] vs. t [h] and the corresponding loci in the k [1/h] vs. V [L] parameter space.]

33 nbr. samples < nbr. parameters
Infinite solutions lie on the geometric locus of the 1st observed pair.
Set the solution using the Bayesian attractor x_A with dispersion D_A.
[Figure: locus of the first pair in the k [1/h] vs. V [L] plane, the attractor, and the reference (Ref) vs. estimated (Est) values of V [L] and k [1/h].]

34 nbr. samples = nbr. parameters
The exact solution is biased (Ref vs. Calc values of V [L] and k [1/h]). Remove the bias by intensive sampling.
[Figure: intersecting loci in the k [1/h] vs. V [L] plane for sparse and intensive sampling designs.]

35 nbr. samples > nbr. parameters
The calculated parameters are the coordinates of the intersecting loci.
The intersections are the combinations of p observations among m: C(m, p) = C(5, 2) = 10.
Summarize the cloud of intersections by med and iqr of V [L] and k [1/h]; compare with the MLE.
[Figure: loci of the pairs observed at 1, 2, 4, 6 and 8 h and their intersections in the k [1/h] vs. V [L] plane.]
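A sketch of this construction (the observed pairs are hypothetical): each pair of loci intersects at the exact two-observation solution of slide 29, and the C(5, 2) = 10 intersections are summarized by med and iqr:

```python
import numpy as np
from itertools import combinations

D = 100.0                                       # dose [mg] (assumed)
t = np.array([1.0, 2.0, 4.0, 6.0, 8.0])         # m = 5 observed pairs (hypothetical)
y = np.array([15.0, 13.6, 11.2, 9.1, 7.6])

sols = []
for i, j in combinations(range(t.size), 2):     # C(5, 2) = 10 intersections
    k = np.log(y[i] / y[j]) / (t[j] - t[i])     # each pair of loci intersects at the
    V = (D / y[i]) * np.exp(-k * t[i])          # exact 2-observation solution
    sols.append((V, k))

sols = np.array(sols)
med = np.median(sols, axis=0)                   # med / iqr summarize the (V, k) cloud
iqr = np.subtract(*np.percentile(sols, [75, 25], axis=0))
print(med, iqr)
```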

36 Sensitivity w.r.t. noise
[Figure and table: reference values of V [L] and k [1/h], and the med / iqr of the estimates at two measurement-noise levels; the dispersion of the loci intersections in the k [1/h] vs. V [L] plane grows with the noise.]

37 Without measurement error
Accurate experimental devices, undisturbed observations: all loci intersect at a single point, and m = p samples suffice.
[Figure: loci in the k [1/h] vs. V [L] plane intersecting at the true parameter values.]
When?

38 Bayesian estimation
It analyzes only a few samples per patient: respect of the ethical constraints, reduction of the cost of the PK analysis.
It requires prior information: an attractor or a density f(x).
It balances the population and the individual information: balance the prior and likelihood terms.
It operates with staff having good experience and highly specialized health professionals.
It may be used in a feedback loop: efficient data analyses.

39 State and parameter spaces
State space: the real PK process yields observations; the PK model (an artificial mechanism) yields predictions; observation and prediction are compared, the measurement error acting as the random component.
Parameter space: the comparison propagates the measurement error into the precision of the estimates.

40 State vs. parameter space
Model y(t) = (D / V) exp(-k t). A 90% confidence region in the parameter space k [1/h] vs. V [L] corresponds to a 90% confidence corridor around the predicted profile y [µg/ml] vs. t [h].
[Figure: confidence corridors in the state space and the corresponding confidence region in the parameter space (the likelihoodists' view).]

41 Checking identifiability
Structural identifiability - Given a hypothetical structure with unknown parameters, and a set of proposed experiments (not measurements!), would the unknown parameters be uniquely determinable from these experiments?
Parametric identifiability - Estimating the parameters from measurements with errors, and optimizing the sampling designs.
[Figure: a 2-cpt model with input u(t), volume V and rate constants k_12, k_21, k_p, k_e that is structurally non-identifiable, yielding a non-consistent estimate of V in the k [1/h] vs. V [L] plane.]

42 Overall notes
Modeling extensions in PKs - Repeated dosing (multi-input) and simultaneous observation of several kinetics (multi-output systems); model the error process.
Validation - Check whether the model is "good enough", whether it is valid for its purpose: { statistical tests, confidence intervals, sensitivity analysis, analysis of residuals, information concepts, new experiments }.
Remember - The model never explains, it only describes the available observations. The model is not an exact description of the real process. The success of modeling depends on the richness of the available knowledge.

43 Building and using models
Real process: time schedule + drug amounts (input), functional properties, observations or predictions or clinical constraints (output).
Model: a sum of exponentials y(t) = sum_{j=1}^{L} A_ij exp(-a_j t).
Tasks: identification (observations -> model), simulation (model + protocol -> predictions), dosage adjustment (model + clinical constraints -> protocol).

44 Devil's triangle
Three vertices: the process (model), the input (administration protocol) and the output (concentrations).
Identification: from input and output, recover the model. Simulation: from model and input, compute the output. Control: from model and output, adjust the input.

45 Dosage adjustment flowchart
Individual dosage adjustment: invert the model.
Identify the individual PK parameters by using Bayesian estimation, combining the individual observations (under ethical constraints) with the prior information (compiled knowledge).

46 Processes and models
Biological real processes - Preserve the integral functionality of the real process. Avoid experiments that break down the feedback mechanisms or excessively stimulate the process in order to record quantifiable observations.
Analysis by modeling - Obtain a reliable and efficient (simple) tool complying with the functional properties of the real process. Manage complex situations: optimize experiments (administration and sampling); individualize pharmaco-therapy (adjust dosage regimens, predict and control the real process).

47 Modeling pros vs. cons
Pros:
Discovery of fundamental properties of the PK process by assaying biological samples.
Data reduction: large datasets are summarized by a few PK parameters.
Standardization of observations: by means of mechanistic models, the overall observed profile is decomposed into its constitutive [administration + system + drug] components.
Description of mixed variabilities (population studies): compare, discriminate and classify individuals.
Ethical individual recognition: combine population information and incomplete individual information in a Bayesian estimator.
Cons:
Pluridisciplinarity: with emphasis on applied mathematics and engineering techniques.
Long training course: [undergraduate + graduate] studies followed by a Ph.D. thesis.

48 Optimize experiments
Administration protocol: administration site, schedule + amounts -> dosage adjustment.
Sampling protocol: sampling site + schedule -> structural identifiability, parametric identifiability.

49 Controlled systems
Old: transfer and use of energy; human operator; management of simple situations.
New: information transmission and data processing; computer help, communication theory; control devices allowing rapid actions, independent of the nature of the support; complex but optimized situations; observation devices; remote control.
The operator acts through the control channel and observes through the feedback channel.

50 N. Wiener, Cybernetics, 1948-

51 W. von Braun, NASA, 1960-

52 R. Jelliffe, Clinical PKs, 1970-
[Photos: R. Bellman, R. Jelliffe]

53 System identification
Bayesian estimator: x_B = arg max f(x | y). Maximum likelihood estimator: x_L = arg max f(y | x) (R.A. Fisher). Least squares (C.F. Gauss).
All fit the model y(t) = (D / V) exp(-k t) to the observed concentrations y [ng/ml] vs. t [h].
