Department of Chemical Engineering ChE-101: Approaches to Chemical Engineering Problem Solving MATLAB Tutorial VII


Linear Regression Using the Least Square Method (last updated 5/8/06 by G.G. Botte)

Objectives: These tutorials are designed to show the introductory elements for any of the topics discussed. In almost all cases there are other ways to accomplish the same objective, or higher-level features that can be added to the commands below. Any text below appearing after the double prompt (>>) can be entered in the Command Window directly or in an m-file.

The following topics are covered in this tutorial:
Introduction
Procedure to perform linear regression in Matlab
Solved Problem using Matlab (guided tour)
Solved Problem using Excel (guided tour)

Introduction: Regression of data consists of finding a mathematical expression that best fits all the data. That is, given a set of experimental data in which the dependent variable y is a function of x, the intention of regression is to determine an expression for:

y = f(x)    (1)

For example, a set of experimental data could be predicted by using the following expression:

y = a x + b    (2)

The objective of regression is to determine the values of the parameters a and b. Notice that in this case the unknowns (the variables to calculate) are a and b. Because the unknown variables (coefficients) appear linearly, the determination of the coefficients is known as Linear Regression. There are different methods to perform linear regression; the most common one is known as the Least Square Method. As shown in the diagram below, the least squares method minimizes the sum of the squared distances between the points and the fitted line.

[Diagram: experimental data points plotted as y vs. x with the fitted straight line through them]
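As a minimal sketch of this idea (not part of the original tutorial; the x and y values below are hypothetical and used purely for illustration), the straight-line fit of Eq. (2) can be obtained in Matlab by building a two-column matrix [x 1] and applying the backslash operator, which returns the least squares solution of the overdetermined system:

>> x = [1; 2; 3; 4; 5];            % hypothetical independent variable (column vector)
>> y = [2.1; 3.9; 6.2; 8.1; 9.8];  % hypothetical measured dependent variable
>> X = [x ones(size(x))];          % each row is [x_i 1]
>> p = X\y;                        % least squares solution: p(1) = a, p(2) = b
>> a = p(1); b = p(2);
>> yfit = a*x + b;                 % fitted line evaluated at the data points

The same pattern (independent-variable columns plus a column of ones, then backslash) is what the general procedure below builds up step by step.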

Procedure to Perform Linear Regression in Matlab:

The objective is to determine the m parameters a1, a2, ..., am in the equation

y = a1 x1 + a2 x2 + ... + am

given a set of n data points (x1, x2, ..., xm-1, y). This is done by writing out the equation for each data point, which results in a set of n equations in m unknowns:

a1 x11 + a2 x21 + a3 x31 + ... + am = y1
a1 x12 + a2 x22 + a3 x32 + ... + am = y2
a1 x13 + a2 x23 + a3 x33 + ... + am = y3
  :
a1 x1n + a2 x2n + a3 x3n + ... + am = yn

where the first subscript on x identifies the independent variable and the second subscript signifies the data point. In matrix notation this is expressed as

[x]{a} = {y}    (3)

where the vector {a} contains the unknowns. In order to perform linear regression in Matlab, the objective is to determine the vector {a} from Eq. (3). This is done by using the formula given below:

{a} = [x]\{y}    (4)
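For context (an aside that is not in the original tutorial): when [x] has more rows than columns, the backslash operator in Eq. (4) does not solve the system exactly; it returns the coefficients that minimize the sum of squared residuals, which for a full-rank [x] is the same answer given by the normal equations. A quick check with hypothetical numbers:

>> x1 = [1; 2; 3; 4];  x2 = [2; 1; 4; 3];  % hypothetical independent variables
>> y  = [5.1; 4.2; 9.8; 8.9];              % hypothetical dependent variable
>> X  = [x1 x2 ones(size(x1))];            % matrix [x] with the unit column
>> a_backslash = X\y                       % Eq. (4): least squares coefficients
>> a_normal    = (X'*X)\(X'*y)             % normal equations give the same coefficients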

Notice that the matrix [x] is built by combining each of the individual independent-variable column vectors {x1}, {x2}, {x3}, ..., and a unit column vector (a vector whose entries are all 1), as shown in the schematic representation given below:

    | x11  x21  x31  ...  1 |   | a1 |     | y1 |
    | x12  x22  x32  ...  1 |   | a2 |     | y2 |
    | x13  x23  x33  ...  1 | * | a3 |  =  | y3 |
    |  :    :    :        : |   |  : |     |  : |
    | x1n  x2n  x3n  ...  1 |   | am |     | yn |
                         ^
                         unit vector

The procedure to perform linear regression in Matlab is summarized below:

1. Input the experimental data in the m-file. For example, input the vectors {y}, {x1}, {x2}, {x3}, etc. Make sure that the vectors are column vectors. If you input the vectors as row vectors, use the transpose (See Tutorial III p. 6).
2. Create the unit column vector.
3. Create the matrix [x] by combining each of the individual column vectors and the unit vector (See Tutorial III p. 4).
4. Apply Eq. (4) to calculate the coefficient vector {a}. These are the parameters for your equation.
5. Determine how good your fit is (a short sketch of this step follows the list) by:
   a. Calculating the predicted values
   b. Calculating the difference between the predicted value and the experimental value
   c. Making a table that shows the differences (experimental data, predicted value, and difference between experimental data and predicted value)
   d. Plotting the experimental data (using plot, see Tutorial V.b)
   e. Plotting the equation (using fplot, see Tutorial V.b)
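The following is a minimal sketch of step 5, not part of the original tutorial; the data are hypothetical, and the fitted values are plotted with plot rather than fplot:

>> x1 = [1; 2; 3; 4; 5];               % hypothetical independent variable
>> y  = [2.3; 4.1; 6.0; 8.2; 9.9];     % hypothetical experimental data
>> X  = [x1 ones(size(x1))];           % step 3: matrix [x] with the unit column
>> a  = X\y;                           % step 4: Eq. (4)
>> ypred = X*a;                        % step 5a: predicted values
>> dev   = y - ypred;                  % step 5b: deviations
>> disp([y ypred dev]);                % step 5c: comparison table
>> plot(x1, y, 'o', x1, ypred, '-');   % steps 5d-5e: data (circles) and fitted line
>> xlabel('x1'); ylabel('y');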

Solved Problem using Matlab:

Develop a linear correlation to predict the final weight of an animal based on the initial weight and the amount of feed eaten:

(final weight) = a1*(initial weight) + a2*(feed eaten) + a3

The following data are given:

final weight (lb)   initial weight (lb)   feed eaten (lb)
95 4 7 77 6 80 59 00 45 9 97 9 70 6 8 50 7 80 4 6 9 40 0 84 8 5

Solution: The m-file is shown below:

% This program shows an example of linear regression in Matlab
% Developed by Gerardine Botte
% Created on: 05/8/06
% Last modified on: 05/8/06
% ChE-101 Spring 06
% Solution to Solved Problem, Tutorial VII
% The program calculates the best fit parameters for a correlation
% representing the final weight of animals given the initial weight
% and the amount of food eaten:
% fw = a1*initwgt + a2*feed + a3
%--------------------------------
clear;
clc;
fprintf('This program shows an example of linear regression in Matlab\n');
fprintf('Developed by Gerardine Botte\n');
fprintf('Created on: 05/8/06\n');
fprintf('Last modified on: 05/8/06\n');
fprintf('ChE-101 Spring 06\n');
fprintf('Solution to Solved Problem, Tutorial VII\n');
fprintf('The program calculates the best fit parameters for a correlation\n');
fprintf('representing the final weight of animals given the initial weight\n');
fprintf('and the amount of food eaten\n');

fprintf('fw = a1*initwgt + a2*feed + a3\n');

% Step 1 of Procedure (see p. 3, TVII): input the data into vectors.
initwgt = [ 4 45 9 6 4 40 8]; % in lbs (independent variable)
feed = [ 7 6 59 9 8 7 6 0 5]; % in lbs (independent variable)
fw = [95; 77; 80; 00; 97; 70; 50; 80; 9; 84]; % in lbs (dependent variable)
% because the data is given as row vectors it needs to be transformed into column vectors
initwgt = initwgt';
feed = feed';

% Step 2 of Procedure (see p. 3, TVII): create the unit column vector
for i = 1:10
    unit(i) = 1;
end
unit = unit';

% Step 3 of Procedure (see p. 3, TVII): create the matrix [x] by combining each of
% the individual column vectors and the unit vector (See Tutorial III p. 4)
x = [initwgt feed unit];

% Step 4 of Procedure (see p. 3, TVII): apply Eq. (4) to calculate the coefficient
% vector {a}. These are the parameters for your equation.
a = x\fw;

% Make sure to bring all the vectors back into row vectors so that you can use for loops
% for printing and performing vector operations
% printing the parameters
initwgt = initwgt';
feed = feed';
a = a';
fw = fw';
fprintf('The coefficients for the regression are\n');
for i = 1:3
    fprintf('a(%i) = %4.3f\n', i, a(i));
end
% you can also print the equation by using fprintf:
fprintf('fw = %4.3f * initwgt + %4.3f * feed + %4.3f\n', a(1), a(2), a(3));

% Calculating the values predicted by the equation and the differences
for i = 1:10
    fwp(i) = initwgt(i)*a(1) + feed(i)*a(2) + a(3); % this is the predicted final weight, lbs
    dev(i) = fw(i) - fwp(i);                        % this is the deviation, lbs
end

% Making the comparison table:

fprintf('------------------------------------------------------------------\n');
fprintf('experimental final weight   predicted final weight   deviation\n');
fprintf('          lbs                         lbs                lbs\n');
fprintf('------------------------------------------------------------------\n');
for i = 1:10
    fprintf('        %5.2f                     %5.2f             %5.2f\n', fw(i), fwp(i), dev(i));
end
fprintf('------------------------------------------------------------------\n');

This is what you will see on the screen:

[Screenshot: Command Window output listing the regression coefficients, the fitted equation, and the comparison table of experimental final weight, predicted final weight, and deviation]

Procedure to perform linear regression in Excel:

Excel can do single or multiple linear regression through the data analysis toolbox. This toolbox needs to be added as an add-in. To illustrate how to perform linear regression in Excel, let us solve the same problem:

1. Write your data into an Excel spreadsheet as shown below:

[Screenshot: spreadsheet with the final weight, initial weight, and feed eaten columns entered]

2. Load the data analysis toolbox:

[Screenshot: Excel Add-Ins dialog; click on Analysis ToolPak, then press OK]

3. Go to Data Analysis and find the Regression tool:

[Screenshot: Data Analysis dialog with Regression highlighted]

4. Click OK and you will be prompted with the Regression analysis dialog:

[Screenshot: Regression dialog; select the Y range, and select the range where the independent variables are (both columns selected simultaneously)]

5. Make the additional selections and press OK.

[Screenshot: Regression dialog output options]

6. This is what you will see on the screen:

SUMMARY OUTPUT

Regression Statistics
Multiple R           0.9449
R Square             0.875798     <- The closer this value is to 1, the better the fit is
Adjusted R Square    0.86974
Standard Error       6.05078864
Observations         10

ANOVA
             df    SS         MS        F        Significance F
Regression   2     764.5698   88.078    4.098    0.00077
Residual     7     56.840     6.604
Total        9     00.5

               Coefficients   Standard Error   t Stat    P-value    Lower 95%   Upper 95%   Lower 95.0%   Upper 95.0%
Intercept      -.9964         7.7654           -.94475   0.6565     -64.9949    9.00858     -64.9949      9.00858
X Variable 1   .95679         0.585466         .9584     0.047758   0.088       .7765       0.088         .7765
X Variable 2   0.764          0.05776696       .76709    0.0070     0.0806      0.54        0.0806        0.54

(The Coefficients column contains the fitting parameters of the correlation.)

RESIDUAL OUTPUT
Observation   Predicted Y   Residuals
1             94.85945      0.840548
2             7.4467        4.7557774
3             79.45946      0.5740855
4             0.55          -.5506
5             99.5849       -.584997
6             67.07445      .9568558
7             59.54888      -9.548875
8             85.586896     -5.586896
9             8.88486       9.5676
10            8.85575       .884454

(Predicted Y lists the predicted values; Residuals is the difference between the experimental and the predicted value.)

7. You will learn how to interpret more of the statistical results in the Experimental Design Course.
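If you want to reproduce the headline statistics of the Excel report inside Matlab, they can be computed directly from the deviations. The short sketch below is not part of the original tutorial; it assumes it is appended to the end of the solved-problem m-file, so that fw, fwp, and dev already exist as computed above, and it uses the standard definitions of R Square, Adjusted R Square, and the regression Standard Error:

% Optional cross-check of Excel's summary statistics (append to the m-file).
% fw, fwp, and dev are the row vectors computed above; n is the number of data
% points and k is the number of independent variables in the correlation.
n  = length(fw);
k  = 2;
SSres = sum(dev.^2);                        % residual sum of squares
SStot = sum((fw - mean(fw)).^2);            % total sum of squares
R2    = 1 - SSres/SStot;                    % "R Square" in the Excel report
R2adj = 1 - (1 - R2)*(n - 1)/(n - k - 1);   % "Adjusted R Square"
se    = sqrt(SSres/(n - k - 1));            % "Standard Error" of the regression
fprintf('R Square = %6.4f, Adjusted R Square = %6.4f, Std Error = %6.4f\n', R2, R2adj, se);

An R Square close to 1 together with small deviations indicates a good fit, consistent with the note on the Excel output above.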