2 Introduction to Response Surface Methodology
2.1 Goals of Response Surface Methods

The experimenter is often interested in

1. Finding a suitable approximating function for the purpose of predicting a future response.

2. Determining what values of $x_1, x_2, \ldots, x_k$ are optimum with respect to the response $y$. "Optimum" is used in the sense of finding the $x_1, x_2, \ldots, x_k$ which would yield one of three experimental goals: a maximum, a minimum, or a specific (target or aim) value of the response.

Response surface procedures are primarily used either (i) to determine the optimum operating conditions which minimize or maximize a response, or (ii) to determine an operating region in the design-variable space for which certain operating specifications are met.

Although the eventual goal is usually to answer certain questions regarding operating conditions, it is extremely important that a decision be made at the outset regarding what experimental design to use for data collection, that is, what factor levels should be considered in the experimental process. Because model coefficients are estimated from experimental data, this estimation can be accomplished effectively only if proper thought is given to the choice of experimental design.

It is assumed that the experimenter is concerned with a system involving a response $y$ which depends on the input variables $\xi_1, \xi_2, \ldots, \xi_k$, and that there exists an exact functional relationship
$$E(y) = \eta(\boldsymbol{\xi})$$
where $E(y)$ is the expected response at $\boldsymbol{\xi} = (\xi_1, \xi_2, \ldots, \xi_k)$. This relationship, however, is usually unknown. The $\xi_i$'s are also called the natural or uncoded variables, and it is assumed that they can be controlled by the experimenter with negligible error. The $\xi_i$'s are usually transformed using a linear transformation which centers and scales each of them. The transformed variables $x_1, x_2, \ldots, x_k$ are called the design or coded variables:
$$\xi_i \longrightarrow x_i \quad \text{where} \quad x_i = \frac{\xi_i - [\max(\xi_i) + \min(\xi_i)]/2}{[\max(\xi_i) - \min(\xi_i)]/2},$$
the usual coding, which maps the low and high levels of $\xi_i$ to $-1$ and $+1$.

Because the true form of $\eta$ is unknown and perhaps extremely complicated, the success of RSM depends on the approximation of $\eta$ by a simpler function restricted to some region of the independent $\xi_i$ variables. Low-order polynomials are most often used because of their local smoothness properties. Mathematically, we are assuming that over a limited region of interest $R(\xi)$, the main characteristics of the underlying true function $\eta(\xi)$ are adequately represented by the low-order terms in a Taylor series approximation of $\eta(\xi)$.

1. If the approximating model is a linear model with only first-order design-variable terms (first-order model), then
$$f(\mathbf{x}) = b_0 + \sum_{i=1}^{k} b_i x_i, \quad \text{estimating} \quad \beta_0 + \sum_{i=1}^{k} \beta_i x_i.$$
This model may be useful when the experimenter is interested in studying $\eta$ in narrow regions of $\mathbf{x} = (x_1, x_2, \ldots, x_k)$ where little or no curvature or interaction is present. This model is also used in situations where small pilot experiments serve as preliminary experiments leading to a more extensive experimental exploration. These experiments are often sequential in nature, where the procedure leads the experimenter toward the general region containing optimal values of $(x_1, x_2, \ldots, x_k)$. One such procedure is the path of steepest ascent (descent); a short code sketch of this idea is given at the end of this section.

2. If the approximating model is a linear model with first-order design-variable terms and their pairwise interactions (interaction model), then
$$f(\mathbf{x}) = b_0 + \sum_{i=1}^{k} b_i x_i + \sum_{i<j} b_{ij} x_i x_j, \quad \text{estimating} \quad \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i<j} \beta_{ij} x_i x_j.$$
This model may be useful when the experimenter is interested in studying $\eta$ in narrow regions of $x_1, x_2, \ldots, x_k$, that is, where little or no curvature is present.

3. If the approximating model is a linear model with all first- and second-order design-variable terms (second-order or quadratic model), then
$$f(\mathbf{x}) = b_0 + \sum_{i=1}^{k} b_i x_i + \sum_{i<j} b_{ij} x_i x_j + \sum_{i=1}^{k} b_{ii} x_i^2,$$
with corresponding parameters $\beta_0$, $\beta_i$, $\beta_{ij}$, and $\beta_{ii}$. This model may be useful when the experimenter is interested in studying $\eta$ in wider regions of $x_1, x_2, \ldots, x_k$ where curvature is expected to be present.

4. Occasionally, models of order greater than 2 (such as cubic models) are used.

It should be understood that the region of interest $R(\xi)$ lies within a larger region called the region of operability $O(\xi)$, defined as the region over which experiments could be conducted.

Example: You are interested in determining the water temperature $\xi$ which extracts the maximum amount of caffeine $\eta$ from a particular brand of coffee. The region of operability $O(\xi)$ is between 0°C and 100°C. The region of interest $R(\xi)$ is limited to temperatures near boiling, say between 95°C and 100°C.

The fact that the polynomial is an approximation does not necessarily detract from its usefulness. If a model provides an adequate representation in the region of interest, then analysis of the model fitted from data will approximate an analysis of the physical system.
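To make the sequential idea concrete, here is a minimal Python sketch (not part of the original notes) that fits a first-order model to a hypothetical 2² factorial with two center runs and then steps along the path of steepest ascent. The design and all response values are invented for illustration.

```python
import numpy as np

# Hypothetical 2^2 factorial in coded units plus two center runs;
# the response values below are invented for illustration.
x1 = np.array([-1., -1., 1., 1., 0., 0.])
x2 = np.array([-1., 1., -1., 1., 0., 0.])
y = np.array([54.3, 60.1, 64.6, 68.0, 60.3, 61.2])

# Least-squares fit of the first-order model f(x) = b0 + b1*x1 + b2*x2.
X = np.column_stack([np.ones_like(x1), x1, x2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"fitted: f(x) = {b0:.2f} + {b1:.2f}*x1 + {b2:.2f}*x2")

# The path of steepest ascent leaves the design center in the direction
# of the estimated gradient (b1, b2).
g = np.array([b1, b2]) / np.hypot(b1, b2)
for r in [0.5, 1.0, 1.5, 2.0]:
    print(f"radius {r:.1f}: x1 = {r*g[0]:.2f}, x2 = {r*g[1]:.2f}")
```

In practice one would run new experiments at a few points along this path and continue until the response stops improving, then fit a higher-order model in that region.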
2.2 Approximating Function Example

A two-factor ($3 \times 3$, or $3^2$) factorial experiment with $n = 2$ replicates was run. Factor $x_1$ represents temperature and $x_2$ represents process time. The factor levels for $x_1$ and $x_2$ were coded as $-1$, $0$, and $1$. The process yield is labelled $y$. The true functional relationship between the response $y$ and the variables $x_1$ and $x_2$ is $y = \eta(x_1, x_2)$.

An experiment was run, yielding data with one row per run: $x_1$, $x_2$, Rep, the true response $\eta(x_1, x_2)$, and the observed response $\eta(x_1, x_2) + \epsilon$. [Data table omitted.]

A second-order regression model $f(x_1, x_2)$ was fit to the data. The fitted model was
$$f(x_1, x_2) = b_0 + b_1 x_1 + b_2 x_2 + b_{12} x_1 x_2 + b_{11} x_1^2 + b_{22} x_2^2,$$
with the least-squares estimates used below in the critical-point analysis.

The plots show the relationship between the fitted regression model $f(x_1, x_2)$ and the true functional form $\eta(x_1, x_2)$. The first set of plots contains the three-dimensional surface plots of the true function $\eta(x_1, x_2)$ and the approximating function $f(x_1, x_2)$; the second set contains contour plots of the same two surfaces. [Plots omitted.]
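Since the original data table did not survive transcription, here is a hedged Python sketch of the same kind of fit: a 3² factorial with $n = 2$ replicates, with responses simulated from an assumed true surface (the coefficients in the simulation are invented and only loosely echo the example).

```python
import numpy as np

# 3^2 factorial in coded levels -1, 0, 1 with n = 2 replicates.
levels = np.array([-1., 0., 1.])
g1, g2 = np.meshgrid(levels, levels)
x1 = np.tile(g1.ravel(), 2)          # two replicates of each design point
x2 = np.tile(g2.ravel(), 2)

# Simulated observations: an assumed true surface plus random error.
rng = np.random.default_rng(1)
y = (80 + 1.0*x1 - 2.4*x2 - 1.1*x1*x2 + 0.3*x1**2 + 1.4*x2**2
     + rng.normal(0, 0.5, x1.size))

# Design matrix for the full second-order model
# f = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2.
X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
b = np.linalg.lstsq(X, y, rcond=None)[0]
for name, est in zip(["b0", "b1", "b2", "b12", "b11", "b22"], b):
    print(f"{name} = {est:.3f}")
```

With only six parameters and eighteen runs, the estimates should land close to the simulating coefficients, which is exactly the sense in which the fitted polynomial approximates the true surface over the design region.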
Example: Taylor Series Approximation

For the function $\eta(x, y) = 5 + e^{.5x - 1.5y}$ and $p_0 = (0, 0)$, in some open ball $B$ containing $p_0$, define $p = (x, y)$ and
$$P_2(p, p_0) = \eta(0,0) + \frac{x}{1!}\frac{\partial \eta}{\partial x} + \frac{y}{1!}\frac{\partial \eta}{\partial y} + \frac{x^2}{2!}\frac{\partial^2 \eta}{\partial x^2} + \frac{xy}{1!\,1!}\frac{\partial^2 \eta}{\partial x \partial y} + \frac{y^2}{2!}\frac{\partial^2 \eta}{\partial y^2}$$
(with all partial derivatives evaluated at $p_0$), and
$$R_2(p, p^*) = \sum_{k=0}^{3} \frac{x^k y^{3-k}}{k!\,(3-k)!}\, \frac{\partial^3 \eta}{\partial x^k \, \partial y^{3-k}}\bigg|_{p^*}$$
where $p^*$ is a point on the line segment joining $p$ and $(0, 0)$.

Taking the partial derivatives of $\eta(x, y) = 5 + e^{.5x - 1.5y}$ and evaluating them at $(0, 0)$ yields
$$\frac{\partial \eta}{\partial x} = .5e^0 = .5 \qquad\qquad \frac{\partial \eta}{\partial y} = -1.5e^0 = -1.5$$
$$\frac{\partial^2 \eta}{\partial x^2} = .25e^0 = .25 \qquad \frac{\partial^2 \eta}{\partial y^2} = 2.25e^0 = 2.25 \qquad \frac{\partial^2 \eta}{\partial x \partial y} = -.75e^0 = -.75$$

Thus, the second-order Taylor series approximation of $\eta(x, y) = 5 + e^{.5x - 1.5y}$ about the point $(0, 0)$ is
$$f(x, y) = 6 + .5x - 1.5y - .75xy + \frac{.25}{2}x^2 + \frac{2.25}{2}y^2 = 6 + .5x - 1.5y - .75xy + .125x^2 + 1.125y^2.$$

But this function, too, is unknown. So what do we do as statisticians? Although $f(x, y)$ is unknown, we know a second-order Taylor series approximation exists. We then use our data to estimate the intercept and coefficients of the terms in the full second-order (quadratic) regression model above.
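As a quick check of this expansion (not in the notes), the following sketch builds the second-order Taylor polynomial from the partial derivatives using SymPy, assuming it is available.

```python
import sympy as sp

x, y = sp.symbols("x y")
eta = 5 + sp.exp(sp.Rational(1, 2)*x - sp.Rational(3, 2)*y)

# Second-order Taylor polynomial of eta about (0, 0), built term by term
# from the partial derivatives, exactly as in the expansion above.
P2 = sum(sp.diff(eta, x, i, y, j).subs({x: 0, y: 0})
         * x**i * y**j / (sp.factorial(i) * sp.factorial(j))
         for i in range(3) for j in range(3) if i + j <= 2)
print(sp.expand(P2))
# matches 6 + .5x - 1.5y - .75xy + .125x^2 + 1.125y^2
```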
Recall: For our data example, the fitted model using least squares was
$$f(x, y) = b_0 + b_1 x - 2.38y - 1.09xy + .28x^2 + 1.41y^2.$$

To find the critical point, take the first partial derivatives, equate them to zero, and solve:
$$f_x = b_1 + .56x - 1.09y \qquad\qquad f_y = -2.38 - 1.09x + 2.82y$$
Setting $f_x = 0$ and $f_y = 0$ produces two equations in two unknowns. The solution is the critical point $(x_0, y_0) = (-.50, .65)$.

Next, perform the second-derivative test. We need to evaluate the second partial derivatives at the critical point $(x_0, y_0) = (-.50, .65)$ (for a quadratic they are constant):
$$\left(\frac{\partial^2 f}{\partial x \partial y}\right)_{(-.50,\,.65)} = -1.09 \qquad \left(\frac{\partial^2 f}{\partial x^2}\right)_{(-.50,\,.65)} = .56 \qquad \left(\frac{\partial^2 f}{\partial y^2}\right)_{(-.50,\,.65)} = 2.82$$

Evaluating the discriminant yields
$$D(-.50, .65) = \left(\frac{\partial^2 f}{\partial x \partial y}\right)^2 - \left(\frac{\partial^2 f}{\partial x^2}\right)\left(\frac{\partial^2 f}{\partial y^2}\right)\bigg|_{(-.50,\,.65)} = (-1.09)^2 - (.56)(2.82) = -.39.$$

Because $D < 0$ and $\partial^2 f / \partial x^2 > 0$, $(-.50, .65)$ is a minimum for the fitted model (response surface).
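The same calculation can be done in matrix form: write the quadratic as $f = b_0 + \mathbf{x}'\mathbf{b} + \mathbf{x}'B\mathbf{x}$ and solve the linear system from the gradient. A hedged Python sketch follows; note that $b_0$ drops out of the gradient, and the value $b_1 = 0.99$ below is an assumption chosen to be consistent with the critical point above, not a number from the notes.

```python
import numpy as np

# Quadratic f = b0 + x'b + x'Bx with the coefficients recovered above.
# b1 = 0.99 is an ASSUMED value (consistent with the critical point);
# b0 is not needed to locate or classify the stationary point.
b = np.array([0.99, -2.38])           # linear coefficients (b1, b2)
B = np.array([[0.28, -1.09/2],        # [[b11, b12/2],
              [-1.09/2, 1.41]])       #  [b12/2, b22]]

# Gradient of f is b + 2Bx; setting it to zero gives the stationary point.
x_s = np.linalg.solve(-2*B, b)
print("stationary point:", np.round(x_s, 2))   # ~ (-0.50, 0.65)

# Classify via the eigenvalues of the Hessian 2B: all positive => minimum.
print("Hessian eigenvalues:", np.round(np.linalg.eigvalsh(2*B), 2))
```

The eigenvalue check is equivalent to the two-variable second-derivative test used above, and it generalizes directly to $k > 2$ design variables.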
2.3 Bias

In practice, the polynomial response surface model is an inadequate approximation of the true model. As a result, the model coefficients are biased by terms that are of order higher than the order of the assumed model. For example, if a first-order model is fit when curvature exists, the coefficients are seriously affected by the exclusion of important interaction and squared terms.

Let $\mathbf{y} = (y_1, y_2, \ldots, y_n)'$ be a vector of $n$ responses. Let $\boldsymbol{\epsilon} = (\epsilon_1, \epsilon_2, \ldots, \epsilon_n)'$ be a vector of $n$ random errors having zero mean vector. Let $f(\boldsymbol{\xi})$ be the approximating function of $\eta(\boldsymbol{\xi})$. The true functional relationship can be represented by
$$\mathbf{y} = \eta(\boldsymbol{\xi}) + \boldsymbol{\epsilon}$$
or, using the approximating model, by
$$\mathbf{y} = f(\boldsymbol{\xi}) + \delta(\boldsymbol{\xi}) + \boldsymbol{\epsilon}$$
where $\delta(\boldsymbol{\xi}) = \eta(\boldsymbol{\xi}) - f(\boldsymbol{\xi})$ is the difference between the actual and approximating models. We would like this difference to be very small over the region of interest $R(\xi)$.

Thus, there are two types of errors which must be taken into account: (i) systematic, or bias, errors $\delta(\boldsymbol{\xi}) = \eta(\boldsymbol{\xi}) - f(\boldsymbol{\xi})$, and (ii) random errors $\boldsymbol{\epsilon}$. Although this implies that systematic errors $\delta(\boldsymbol{\xi})$ are unavoidable, there has been a tendency since the time of Gauss (1777-1855) to ignore them and to concentrate only on the random errors $\boldsymbol{\epsilon}$. This has been done because nice mathematical and statistical results are then possible, in particular, hypothesis testing results that rely on normality of the random errors. When choosing an experimental design, ignoring systematic errors can be a serious problem, because the design-point selection is then based on an inadequate approximating model, which, in turn, can lead to misleading results.

2.4 Fitted Model Example Using SAS

The following experimental data (columns $x$ and $y$) were used to fit the model $y = b_0 + b_1 x + b_2 x^2$. [Data table and SAS output omitted.] We will review the SAS output and what it represents.

Note that $\hat{\sigma}^2 = MS_E = .15417$, which is our estimate of $\sigma^2$. Then form $\hat{\sigma}^2 (X'X)^{-1}$. The square roots of its diagonal entries are the standard errors of the model parameter estimates:
$$s.e.(b_0) = \sqrt{(.15417)(238.62)} \approx 6.07$$
$$s.e.(b_1) = \sqrt{(.15417)(.01723)} = .0515$$
$$s.e.(b_2) = \sqrt{(.15417)\,(X'X)^{-1}_{33}}$$
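The same quantities the SAS output reports can be reproduced by hand. Below is a minimal Python sketch of the computation; since the original data table did not survive transcription, the $x$ and $y$ values are invented, so the numbers will not match the ones quoted above.

```python
import numpy as np

# Invented data for a quadratic fit y = b0 + b1*x + b2*x^2.
x = np.array([1., 2., 3., 4., 5., 6., 7., 8.])
y = np.array([1.1, 1.9, 3.4, 5.6, 8.4, 12.1, 16.3, 21.2])

X = np.column_stack([np.ones_like(x), x, x**2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# MSE = SSE / (n - p) estimates sigma^2; the standard errors of the
# coefficient estimates are the square roots of the diagonal entries
# of MSE * (X'X)^{-1}, exactly as in the hand computation above.
n, p = X.shape
resid = y - X @ b
mse = resid @ resid / (n - p)
se = np.sqrt(np.diag(mse * np.linalg.inv(X.T @ X)))
print("estimates: ", np.round(b, 4))
print("std errors:", np.round(se, 4))
```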