Inferential Analysis with NIR and Chemometrics

Inferential Analysis with NIR and Chemometrics. Santanu Talukdar, Manager, Engineering Services

Part 2: NIR Spectroscopic Data with Chemometrics. A Tutorial Presentation.

References. This tutorial is based on the following references: 1. R. De Maesschalck, F. Estienne, J. Verdu-Andres, A. Candolfi, V. Centner, F. Despagne, D. Jouan-Rimbaud, B. Walczak, D. L. Massart, S. de Jong, O. E. de Noord, C. Puel, B. M. G. Vandeginste, The Development of Calibration Models for Spectroscopic Data using Principal Component Regression. 2. Chemometrics software Unscrambler, by CAMO.

Acknowledgements: Yokogawa Corporation, India & Japan.

NIR Gasoline Spectrum [figure: absorbance vs. wavenumber]. The spectrum file is converted into a *.jdx file; importing the *.jdx file into the chemometrics software yields a data matrix.

Data Matrix. The data matrix X(mxp) has one row per sample (S1 ... Sm) and one column per X variable (x1 ... xp), where the X variables are the wavenumbers and aij is the absorbance of sample i at wavenumber j measured by the NIR analyzer. Alongside the X columns, each sample carries known physical properties Y1 ... Y3 (RON/MON/etc.) determined by the QC lab.
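
As a minimal sketch of this layout (assuming NumPy; the sizes and values below are purely illustrative, since the real absorbances come from the imported *.jdx spectra):

```python
import numpy as np

# Illustrative data matrix: m samples x p wavenumbers of NIR absorbances.
# In practice the a_ij values come from the imported *.jdx spectra.
m, p = 5, 100                                  # hypothetical sizes
rng = np.random.default_rng(0)
X = rng.random((m, p))                         # a_ij = absorbance of sample i at wavenumber j
Y = np.array([91.2, 92.5, 90.8, 93.1, 91.9])   # known property per sample (e.g. RON from the QC lab)

print(X.shape, Y.shape)                        # (5, 100) (5,)
```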

Data Matrix. The data matrix is treated to enhance spectral differences through a Norris second-derivative transformation, giving a transformed data matrix. The resulting data structure is multivariate.
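
The Norris gap-segment derivative is implemented in chemometrics packages such as Unscrambler; as a rough stand-in, a Savitzky-Golay second derivative from SciPy performs a comparable spectral-difference enhancement. A minimal sketch (window length and polynomial order are illustrative choices):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
X = rng.random((5, 100))        # raw absorbance data matrix, samples x wavenumbers (illustrative)

# Second-derivative transform along the wavenumber axis to enhance spectral differences.
# Savitzky-Golay is used here as a stand-in for the Norris gap-segment derivative.
X_d2 = savgol_filter(X, window_length=11, polyorder=3, deriv=2, axis=1)

print(X_d2.shape)               # (5, 100): same layout as the original data matrix
```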

NIR Gasoline Spectrum, Norris Derivative [figure: absorbance vs. wavenumber]. The change in the spectrum after the second-order Norris derivative transformation.

Representation of a Sample Point in a Vector Space. With three variables X1, X2, X3, two samples are written as S1 = a11 x1 + a12 x2 + a13 x3 and S2 = a21 x1 + a22 x2 + a23 x3, where aij is the cell absorbance. S1 and S2 are two distinct samples in a 3-dimensional vector space; their coordinates along X1, X2, X3 are the rows (a11, a12, a13) and (a21, a22, a23).

Principal Component [figure: samples S1, S2, S3 and their residuals about the PC in the X1, X2, X3 space]. A Principal Component (PC) is a projection drawn through the space of the variables and sample points such that the residual variance of the samples about the PC is minimized. The loading is the cosine of the angle between the PC and each of the variables. A PC is a linear combination of the variables; the number of PCs equals the number of variables, and all PCs are orthogonal to each other. Here the PC is a projection in a 3-dimensional vector space.

Score Matrix. The score matrix T(mxp) has one row per sample (S1 ... Sm) and one column per PC (PC1 ... PCp); tij is the score of sample i on PC j.

Score Plot in the PC1, PC2 axes. In the PC space, S1 = t11 PC1 + t12 PC2 + t13 PC3 and S2 = t21 PC1 + t22 PC2 + t23 PC3; the rows (t11, t12, t13) and (t21, t22, t23) form T(2x3), a score matrix.

Actual Score Plot in the PC1, PC2 axes. [figure]

Loading Matrix. The loading matrix P(pxp) has one row per variable (X1 ... Xp) and one column per PC (PC1 ... PCp); pij is the loading of variable i on PC j.

Transpose of Loading Matrix. The transpose P'(pxp) has one row per PC (PC1 ... PCp) and one column per variable (X1 ... Xp); pij is the loading.

Loading Plot in the PC1, PC2 axes. X1 = p11 PC1 + p12 PC2 + p13 PC3 and X2 = p21 PC1 + p22 PC2 + p23 PC3; the rows (p11, p12, p13) and (p21, p22, p23) form P(2x3), a loading matrix.

Actual Loading Plot in the PC1, PC2 axes. [figure]

Residual Variance. The residual variance of a sample is the sum of squares of its residuals over all the principal components. Geometrically, it is the squared distance between the original location of the sample and its projection onto the principal component [figure: samples S1, S2 and their residuals about the PC in the X1, X2, X3 space]. When an unknown sample S is matched against a known PC model and its residual is high, the sample is an outlier.
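
A sketch of this residual computation for a model with the first r PCs (NumPy, illustrative data; r = 3 and mean-centring of the calibration data are assumptions of the sketch, not of the slide):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 50))                # calibration data matrix (illustrative)
x_mean = X.mean(axis=0)

# Loadings of the first r PCs from the SVD of the mean-centred calibration data.
r = 3
U, s, Vt = np.linalg.svd(X - x_mean, full_matrices=False)
P = Vt[:r].T                            # p x r loading matrix

def residual_variance(x_sample):
    """Squared distance between a sample and its projection onto the first r PCs."""
    xc = x_sample - x_mean
    proj = P @ (P.T @ xc)               # projection onto the PC subspace
    return float(np.sum((xc - proj) ** 2))

# A sample whose residual is large relative to the calibration samples is flagged as an outlier.
print(residual_variance(X[0]), residual_variance(5.0 * rng.random(50)))
```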

Principal Component Analysis. X(mxp) = T(mxp) P'(pxp) + E, i.e. Data Matrix = (Score Matrix) x (Loading Matrix, transposed) + Error. The error term E is ignored for now.
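
A minimal sketch of this decomposition using NumPy's SVD (illustrative random data standing in for the absorbance matrix; mean-centring is an assumption of the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((10, 40))                   # m x p data matrix (illustrative)
Xc = X - X.mean(axis=0)                    # mean-centred

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = U * s                                  # score matrix
P = Vt.T                                   # loading matrix (columns are the PCs)

# X is recovered exactly as T P' here because every component is kept, so E vanishes.
print(np.allclose(Xc, T @ P.T))            # True
```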

Principal Component Regression (PCR). PCR consists of the linear regression of the scores on the Y property of interest: Y hat (predicted) = T(1xr) b(rx1), i.e. (row vector with r PC scores) x (column vector with r regression coefficients). In PCR, only the first r PCs are calculated, where r < min(m, p).

Principal Component Regression (PCR). How is the column vector b calculated? Y(mx1) = T(mxr) b(rx1), where the column vector Y holds the known responses from the QC lab and T(mxr) is the score matrix. Solving this equation gives b(rx1) = inverse(T'(rxm) T(mxr)) T'(rxm) Y(mx1). Inputting the known values of the column vector Y, the column vector b is calculated. This is the Multiple Linear Regression (MLR) solution; the condition is that all variables are linearly independent.

Principal Component Regression (PCR). PCR is therefore a two-step process. Step 1: decompose the X matrix by PCA. Step 2: fit an MLR model using the PCs instead of the raw data as variables. Since the PCs are orthogonal, all variables in the MLR step are linearly independent.
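
A sketch of the two-step procedure (NumPy, illustrative random data; mean-centring of X and Y is an assumption of the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, r = 30, 80, 4                         # samples, wavenumbers, retained PCs (r < min(m, p))
X = rng.random((m, p))
Y = rng.random(m)                           # known property values from the QC lab (illustrative)

# Step 1: decompose the (centred) X matrix by PCA and keep the scores on the first r PCs.
x_mean, y_mean = X.mean(axis=0), Y.mean()
U, s, Vt = np.linalg.svd(X - x_mean, full_matrices=False)
T = (U * s)[:, :r]                          # m x r score matrix
P = Vt[:r].T                                # p x r loading matrix

# Step 2: MLR of the centred Y on the scores, b = (T'T)^-1 T'Y.
b = np.linalg.solve(T.T @ T, T.T @ (Y - y_mean))

y_fit = T @ b + y_mean                      # fitted Y for the calibration samples
print(b.shape, y_fit.shape)                 # (4,) (30,)
```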

Prediction. A new sample S(m+1), with absorbances a(m+1,1) ... a(m+1,p) measured by the NIR analyzer, is appended as a new row vector X(1xp) to the existing data matrix X(mxp), giving X((m+1)xp). S1 ... Sm are the existing samples and S(m+1) is the new sample added to the data matrix.

Prediction. For the new sample S(m+1), the row vector a(m+1,1), a(m+1,2), ..., a(m+1,p) holds the new absorbances generated by the NIR analyzer. The score row vector for the new sample is calculated as T(1xp) = X(1xp) P(pxp). Selecting only the first r PCs, where r < min(m, p), gives T(1xr) = X(1xp) P(pxr). The predicted Y, known as Y hat, is then Y hat (predicted) = T(1xr) b(rx1).

Prediction. Also, Y hat (predicted) = T(1xr) b(rx1) = X(1xp) P(pxr) b(rx1) = X(1xp) B(px1). The column vector B(px1) holds the new regression coefficients, expressed directly in terms of the X variables. The response Y hat is therefore directly affected by the nature of these regression coefficients: large positive or negative coefficients strongly influence Y hat. Since the column vector B(px1) is constant in any prediction, large variance in Y hat comes from variance in the absorbances of the row vector X(1xp).
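
A sketch showing that predicting through the scores and predicting through B = P b give the same Y hat (illustrative data; the calibration quantities are rebuilt so the example runs on its own):

```python
import numpy as np

# Rebuild the PCR calibration quantities (x_mean, y_mean, P, b) as in the sketch above.
rng = np.random.default_rng(1)
X, Y, r = rng.random((30, 80)), rng.random(30), 4
x_mean, y_mean = X.mean(axis=0), Y.mean()
U, s, Vt = np.linalg.svd(X - x_mean, full_matrices=False)
T, P = (U * s)[:, :r], Vt[:r].T
b = np.linalg.solve(T.T @ T, T.T @ (Y - y_mean))

# New sample: project onto the loadings to get its scores, then apply the regression vector.
x_new = rng.random(80)                       # new absorbance row vector from the NIR analyzer
t_new = (x_new - x_mean) @ P                 # score row vector, T = X P
y_hat = t_new @ b + y_mean                   # prediction via the scores

B = P @ b                                    # regression coefficient vector in the original X variables
y_hat_direct = (x_new - x_mean) @ B + y_mean
print(np.isclose(y_hat, y_hat_direct))       # True: both routes give the same prediction
```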

Prediction. Large variances imply that the samples are of a different type. If such extreme samples belong to the same population, a new or updated model has to be prepared so that the training set accommodates them; this implies that new regression coefficients are generated for the updated model. Otherwise, prepare different models for different sample populations. This problem is known as sample clustering and is handled through outlier management, by proper selection and representation of the calibration sample set.

Prediction Summary. If the new sample comes from a homogeneous sample population, its absorbances are similar to those in the old data matrix and the prediction error will be minimal. If the new sample comes from a heterogeneous sample population, its absorbances will differ substantially from the old data matrix; the sample will appear as an outlier in the score plot, the outlier alarm will be triggered, and the prediction error will be high.

Training Set, Calibration. The training set consists of the initial m samples whose Y responses are known from prior primary measurements. This training set is used to develop a model by the PCR method; this is known as calibration. The regression coefficients are calculated with the help of the known Y responses. Finally, for any unknown sample S, Y is predicted.

RMSEP, Calibration. Each sample in the training set is predicted by the developed model, treating it in turn as an unknown sample. ei = Yi - Yi hat. Prediction Error Sum of Squares (PRESS) = sum of the squared ei. Mean Square Error of Prediction (MSEP) = PRESS / m. Root Mean Square Error of Prediction (RMSEP) = square root of MSEP.
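
These definitions translate directly into a few lines (the measured and predicted values below are illustrative):

```python
import numpy as np

# Illustrative measured and predicted property values for m calibration samples.
y_measured  = np.array([91.2, 92.5, 90.8, 93.1, 91.9])
y_predicted = np.array([91.0, 92.8, 90.5, 93.0, 92.1])

e = y_measured - y_predicted        # e_i = Y_i - Y_i hat
press = np.sum(e ** 2)              # Prediction Error Sum of Squares
msep = press / len(e)               # Mean Square Error of Prediction
rmsep = np.sqrt(msep)               # Root Mean Square Error of Prediction

print(round(rmsep, 3))
```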

Predicted vs Measured Plot. The predicted Y hat for both calibration and validation samples is plotted against the measured Y. [figure]

Cross Validation. The data are randomly divided into d cancellation groups. Suppose there are 15 objects and 3 cancellation groups consisting of objects 1-5, 6-10 and 11-15. The b coefficients of the model being evaluated are first determined for the training set consisting of objects 6-15, and objects 1-5 act as the test set, i.e. they are predicted with this model. Then a model is made with objects 1-5 and 11-15 as the training set and 6-10 as the test set. Finally a model is made with objects 1-10 as the training set and 11-15 as the test set.

RMSEP, Validation. PRESS is determined for each of the d cancellation groups. The d PRESS values are then summed to give a final PRESS, and RMSEP is calculated from this final PRESS.
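
A sketch of this procedure with d = 3 cancellation groups (illustrative random data; the helper names pcr_fit and pcr_predict are just for this sketch, and the groups are assigned randomly rather than as the fixed 1-5 / 6-10 / 11-15 split of the example):

```python
import numpy as np

def pcr_fit(X, Y, r):
    """Fit a PCR model with r PCs; return the quantities needed for prediction."""
    x_mean, y_mean = X.mean(axis=0), Y.mean()
    U, s, Vt = np.linalg.svd(X - x_mean, full_matrices=False)
    T, P = (U * s)[:, :r], Vt[:r].T
    b = np.linalg.solve(T.T @ T, T.T @ (Y - y_mean))
    return x_mean, y_mean, P, b

def pcr_predict(model, X_new):
    x_mean, y_mean, P, b = model
    return (X_new - x_mean) @ P @ b + y_mean

rng = np.random.default_rng(2)
X, Y, r, d = rng.random((15, 60)), rng.random(15), 3, 3

# Each cancellation group is predicted once by a model built on the remaining objects,
# and the group PRESS values are accumulated into a final PRESS.
groups = np.array_split(rng.permutation(len(Y)), d)
press = 0.0
for test_idx in groups:
    train_idx = np.setdiff1d(np.arange(len(Y)), test_idx)
    model = pcr_fit(X[train_idx], Y[train_idx], r)
    e = Y[test_idx] - pcr_predict(model, X[test_idx])
    press += np.sum(e ** 2)

rmsep_cv = np.sqrt(press / len(Y))              # RMSEP from the final PRESS
print(round(rmsep_cv, 3))
```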

Optimization. Optimization consists of comparing different models and deciding which one gives the best prediction. In PCR, the usual procedure is to determine the predictive power of models with 1, 2, 3, ... PCs and to retain the best one. For example: select the first PC, perform PCR and calculate RMSEP1; select PCs 1 and 2, perform PCR and calculate RMSEP2; and so on up to PCs 1, 2, 3, ..., r, performing PCR and calculating RMSEPr.

Optimization. The result is presented as a plot of RMSEP as a function of the number of components, called the RMSEP curve. This curve often shows an intermediate minimum (the first local minimum) or a deflection point, and the number of PCs at which this occurs is taken as the optimal complexity of the model. A robust model uses this first local minimum or deflection point rather than the global minimum.
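
A sketch of building the RMSEP curve and picking the first local minimum, falling back to the last point if the curve keeps decreasing (illustrative data; the cross-validation helper repeats the PCR steps shown earlier):

```python
import numpy as np

def cv_rmsep(X, Y, r, d=3, seed=0):
    """Cross-validated RMSEP of a PCR model with r PCs and d cancellation groups."""
    rng = np.random.default_rng(seed)
    groups = np.array_split(rng.permutation(len(Y)), d)
    press = 0.0
    for test in groups:
        train = np.setdiff1d(np.arange(len(Y)), test)
        x_mean, y_mean = X[train].mean(axis=0), Y[train].mean()
        U, s, Vt = np.linalg.svd(X[train] - x_mean, full_matrices=False)
        T, P = (U * s)[:, :r], Vt[:r].T
        b = np.linalg.solve(T.T @ T, T.T @ (Y[train] - y_mean))
        e = Y[test] - ((X[test] - x_mean) @ P @ b + y_mean)
        press += np.sum(e ** 2)
    return np.sqrt(press / len(Y))

rng = np.random.default_rng(3)
X, Y = rng.random((20, 50)), rng.random(20)

# RMSEP curve: cross-validated RMSEP for models with 1, 2, ..., 8 PCs.
rmsep_curve = [cv_rmsep(X, Y, r) for r in range(1, 9)]

# Optimal complexity: the number of PCs at the first local minimum of the curve.
opt = next((r for r in range(1, len(rmsep_curve)) if rmsep_curve[r] > rmsep_curve[r - 1]),
           len(rmsep_curve))
print(opt, [round(v, 3) for v in rmsep_curve])
```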

RMSE Plot. [figure: RMSEP as a function of the number of PCs]

External Validation. External validation uses a group of samples for prediction (the test set) that is completely different from the one used for building the model (the training set). Both sample sets are obtained in such a way that they are representative of the data being investigated. With an external test set, the prediction error obtained may depend to a large extent on how the objects are situated in space relative to each other. Uncertainty in prediction can be represented as prediction +/- 2*RMSEP. This measure is valid provided the new samples are similar to the ones used for calibration; otherwise the prediction error may be much higher.

Selection and Representation of the Calibration Sample Set. All possible sources of variation that may be encountered must be included in the calibration set: sources of variation such as different origins or different batches, as well as possible physical variations among samples (e.g. different temperatures, pressures, flows, etc.). One approach to selecting a representative calibration set is based on knowledge of process operational changes. Another approach is based on the D-optimal concept.

Selection and Representation of the Calibration Sample Set. The D-optimal criterion minimizes the variance of the regression coefficients. This is equivalent to selecting samples such that the variance of the selected samples is maximized. Variance maximization leads to the selection of samples with relatively extreme characteristics, located on the borders of the calibration domain.

Example. Let the two points already selected (as values of the response variable Y) be 87 and 89, and let the two new candidate points be 87.5 and 90. First find the minimum distance of each candidate point from the selected points: for 87.5 the distances are d1 = 87.5 - 87 = 0.5 and d3 = 89 - 87.5 = 1.5, and for 90 they are d2 = 90 - 87 = 3 and d4 = 90 - 89 = 1, so the minimum distances are d1 (for 87.5) and d4 (for 90). Then take the maximum of d1 and d4: d4 wins, so 90 is the new selected point. The selected samples are now 87, 89 and 90.
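
The same selection logic as a small sketch (the numbers are those of the example above):

```python
import numpy as np

selected   = [87.0, 89.0]            # response values already in the calibration set
candidates = [87.5, 90.0]            # response values of the new candidate points

# For each candidate, take its minimum distance to the already-selected points,
# then keep the candidate whose minimum distance is largest (a maximin selection).
min_dists = [min(abs(c - s) for s in selected) for c in candidates]
best = candidates[int(np.argmax(min_dists))]
selected.append(best)

print(min_dists, selected)           # [0.5, 1.0] [87.0, 89.0, 90.0]
```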

Quantitative Evaluation. Yi is the set of known response variables. Each response variable for point i is measured 3 times over the total span, and the average standard deviation (SD) of each variable is calculated. Historical SD of the variable = 2.7 * average SD. Repeatability = historical SD. Select samples such that the minimum range of Y is 5 * repeatability, but not less than 3 * repeatability. Reference: ASTM

PLS: Partial Least Squares. Partial Least Squares, or Projection to Latent Structures, models both the X and Y matrices simultaneously to find the latent variables in X that best predict the latent variables in Y. These PLS components are similar to principal components and will also be referred to as PCs. [figure: latent variables t in the X space (X1, X2, X3) and u in the Y space (Y1, Y2, Y3)] PCy = f(PCx), i.e. u = f(t). PLS1 deals with only one response variable at a time (like PCR); PLS2 handles several responses simultaneously.
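
A minimal sketch using scikit-learn's PLSRegression as one readily available implementation (illustrative random data; the number of components is an arbitrary choice):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
X = rng.random((30, 80))                   # NIR absorbance matrix (illustrative)
y = rng.random(30)                         # a single response  -> PLS1
Y = rng.random((30, 3))                    # several responses  -> PLS2

pls1 = PLSRegression(n_components=4).fit(X, y)
pls2 = PLSRegression(n_components=4).fit(X, Y)

# X-scores t and Y-scores u: the latent variables in X are chosen so that u = f(t)
# predicts Y as well as possible.
print(pls1.x_scores_.shape, pls2.y_scores_.shape)   # (30, 4) (30, 4)
print(pls1.predict(X[:2]).shape)                    # (2, 1)
```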

Prediction Outlier. [figure: predicted value vs. reference value (QC lab), with the deviation shown for each predicted Y value] This is a plot of the predicted Y value for all prediction samples; boxes around each predicted value indicate the deviation. A large deviation indicates that the sample used for prediction is not similar to the samples used to make the calibration model: this is a prediction outlier. The conclusion is that the prediction sample does not belong to the same sample population as the samples the model is based upon.

Outlier in Score Plot. The score is the coordinate of a sample along the PC axes. [figure: score plot in the PC1, PC2 axes showing a sample cluster and an outlier] The outlying sample is clearly distinct from the sample cluster.

Loading. [figure: angle θ between the PC and a variable in the x1, x2, x3 space] Loading = cos θ, where θ is the angle between the PC and variable Xi, so -1 <= loading <= +1. Two variables whose loadings are both positive are positively correlated; a variable with a positive loading and one with a negative loading are negatively correlated.

Cluster in Score Plot. [figure: score plot in the PC1, PC2 axes showing sample clusters a and b] Different sample clusters a and b arise from a change in sample population type, although the variables are positive.

Clusters in Data Structure. Different sample clusters lead to greater inaccuracy in the models. This can happen due to different recipes, i.e. different sample populations.

Regression. Yi = Σj Bij Xij + constant. In a data structure with known Yi (lab values) and known variables Xij, the coefficients Bij are determined. In the unknown case, with a similar data structure, Yi is predicted from the Xij responses. The Bij are known as regression coefficients.

Regression. Blend recipe -> data structure homogeneity -> finding regression coefficients. Regression coefficients + Xij responses -> prediction of Yi. Hence: blend recipe change -> different sample clusters -> outliers -> change in regression coefficients.

Thank you very much for your attention. Comments? E-mail: Santanu.Talukdar@in.yokogawa.com