
INDIAN INSTITUTE OF SCIENCE
STOCHASTIC HYDROLOGY
Lecture 30
Course Instructor: Prof. P. P. MUJUMDAR
Department of Civil Engg., IISc.

Summary of the previous lecture
- IDF relationship
- Procedure for developing IDF curves
- Empirical equations for IDF relationships

IDF Curves
Design precipitation hyetographs can be developed from IDF relationships, which relate rainfall intensity, duration, and frequency. The alternating block method is a simple way of developing a design hyetograph from an IDF curve. It specifies the precipitation depth occurring in n successive time intervals of duration Δt over a total duration T_d = n Δt.

IDF Curves
Procedure:
- Read the rainfall intensity i from the IDF curve for the specified return period and duration t_d.
- The precipitation depth is P = i × t_d.
- The amount of precipitation to be added for each additional interval Δt is the difference between successive cumulative depths:

P_Δt = P(t_d2) - P(t_d1),  where t_d2 = t_d1 + Δt

IDF Curves
The increments are rearranged into a time sequence with the maximum intensity occurring at the center of the duration and the remaining blocks arranged in descending order alternately to the right and left of the central block to form the design hyetograph.
[Figure: schematic of the alternating block arrangement over the total rainfall duration, with the maximum block at the center and the values decreasing towards both ends]

Example 1
Obtain the design precipitation hyetograph for a 2-hour storm in 10-minute increments in Bangalore with a 10-year return period.

Solution: The design rainfall intensity for a given duration and a 10-year return period is calculated using the IDF formula of Rambabu et al. (1979):

i = K T^a / (t + b)^n

where i is the intensity (cm/hr), T is the return period (years), t is the duration (hr), and K, a, b and n are constants for the region.

Example 1 (Contd.)
For Bangalore, the constants are K = 6.275, a = 0.126, b = 0.5 and n = 1.128.
For T = 10 years and duration t = 10 min = 0.167 hr,

i = 6.275 × 10^0.126 / (0.167 + 0.5)^1.128 = 13.251 cm/hr
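As a quick numerical check (not part of the original lecture), the Rambabu et al. form can be evaluated directly; a minimal Python sketch:

```python
# Rambabu et al. (1979) IDF form: i = K * T**a / (t + b)**n
# Bangalore constants as quoted in the lecture.
K, a, b, n = 6.275, 0.126, 0.5, 1.128

def idf_intensity(T_years, t_hr):
    """Rainfall intensity (cm/hr) for return period T (years) and duration t (hr)."""
    return K * T_years**a / (t_hr + b)**n

print(round(idf_intensity(10, 10 / 60), 3))  # 13.251 cm/hr for a 10-min duration
```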

Example 1 (Contd.)
The values for the other durations, at intervals of 10 minutes, are calculated similarly. The cumulative precipitation depth is obtained by multiplying the intensity by the duration:

Precipitation depth = 13.251 × 0.167 = 2.208 cm

The 10-minute precipitation depth is 2.208 cm, compared with 3.434 cm for the 20-minute duration. Hence 2.208 cm falls in the first 10 minutes and the remaining 1.226 cm (= 3.434 - 2.208) falls in the next 10 minutes. The other values are calculated in the same way and tabulated below.

Example 1 (Contd.)

Duration  Intensity  Cumulative  Incremental   Time      Precipitation
(min)     (cm/hr)    depth (cm)  depth (cm)    (min)     (cm)
10        13.251     2.208       2.208         0-10      0.069
20        10.302     3.434       1.226         10-20     0.112
30         8.387     4.194       0.760         20-30     0.191
40         7.049     4.699       0.505         30-40     0.353
50         6.063     5.052       0.353         40-50     0.760
60         5.309     5.309       0.256         50-60     2.208
70         4.714     5.499       0.191         60-70     1.226
80         4.233     5.644       0.145         70-80     0.505
90         3.838     5.756       0.112         80-90     0.256
100        3.506     5.844       0.087         90-100    0.145
110        3.225     5.913       0.069         100-110   0.087
120        2.984     5.967       0.055         110-120   0.055
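Since the construction is mechanical, the whole table and the alternating-block rearrangement can be reproduced in a few lines. The sketch below is illustrative only (not the lecture's own code) and is sized for this 12-block example:

```python
# Example 1 end to end: cumulative depths, 10-min increments, and the
# alternating block arrangement (largest block at the center, the rest
# placed alternately to its right and left in descending order).
K, a, b, n = 6.275, 0.126, 0.5, 1.128   # Bangalore constants, Rambabu et al. (1979)
T, dt, total = 10, 10, 120              # return period (yr), step (min), storm length (min)

durations = list(range(dt, total + dt, dt))
depths = [K * T**a / (t / 60 + b) ** n * (t / 60) for t in durations]  # cumulative, cm
increments = [depths[0]] + [d2 - d1 for d1, d2 in zip(depths, depths[1:])]

blocks = [0.0] * len(increments)
centre = (len(blocks) - 1) // 2
offsets = [0] + [sign * k for k in range(1, len(blocks)) for sign in (1, -1)]
for value, off in zip(sorted(increments, reverse=True), offsets):
    blocks[centre + off] = value

for t, p in zip(durations, blocks):
    print(f"{t - dt:3d}-{t:3d} min: {p:5.3f} cm")  # matches the table to rounding
```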

Example 1 (Contd.)
[Figure: design precipitation hyetograph; bar chart of precipitation (cm) against time (min) in 10-minute blocks, peaking at 2.208 cm in the 50-60 min interval]

MULTIPLE LINEAR REGRESSION

Multiple Linear Regression
A variable y may depend on several independent variables x_1, x_2, x_3, x_4 and so on. For example, the runoff from a watershed depends on many factors such as rainfall, catchment slope, catchment area, and moisture characteristics; any model for predicting runoff should contain all these variables.
[Figure: schematic showing y influenced by x_1, x_2, x_3, x_4]

Simple Linear Regression
(x_i, y_i) are the observed values. The best-fit line is ŷ_i = a + b x_i, where ŷ_i is the predicted value at x_i. The error is e_i = y_i - ŷ_i. The parameters a and b are estimated such that the sum of squared errors

M = Σ_{i=1}^{n} e_i² = Σ_{i=1}^{n} (y_i - ŷ_i)² = Σ_{i=1}^{n} {y_i - (a + b x_i)}²

is minimum.

Simple Linear Regression
Setting ∂M/∂a = 0 and ∂M/∂b = 0 gives

a = ȳ - b x̄

b = Σ x'_i y'_i / Σ (x'_i)²

where x'_i = x_i - x̄ and y'_i = y_i - ȳ. The fitted line is ŷ_i = a + b x_i.
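The closed-form estimates translate directly into code; a minimal Python sketch (illustrative, with made-up toy data):

```python
# Least-squares estimates for the line y = a + b*x from paired observations.
def simple_linear_regression(x, y):
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # b = sum(x'_i * y'_i) / sum(x'_i^2), primes denoting deviations from the mean
    b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
        sum((xi - x_bar) ** 2 for xi in x)
    a = y_bar - b * x_bar
    return a, b

a, b = simple_linear_regression([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(a, b)  # a ~ 0.15, b ~ 1.94 for this toy data
```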

Multiple Linear Regression
A general linear model is of the form

y = β_1 x_1 + β_2 x_2 + β_3 x_3 + ... + β_p x_p

where y is the dependent variable, x_1, x_2, x_3, ..., x_p are independent variables, and β_1, β_2, β_3, ..., β_p are unknown parameters. n observations are required on y, with the corresponding n observations on each of the p independent variables.

Multiple Linear Regression
n equations are written, one for each observation:

y_1 = β_1 x_1,1 + β_2 x_1,2 + ... + β_p x_1,p
y_2 = β_1 x_2,1 + β_2 x_2,2 + ... + β_p x_2,p
...
y_n = β_1 x_n,1 + β_2 x_n,2 + ... + β_p x_n,p

The n equations are solved to obtain the p parameters. n must be equal to or greater than p; in practice, n should be at least 3 to 4 times as large as p.

Multiple Linear Regression
If y_i is the i-th observation on y and x_i,j is the i-th observation on the j-th independent variable, the generalized form of the equations can be written as

y_i = Σ_{j=1}^{p} β_j x_i,j

The equations can be written in matrix notation as

Y = X Β
(n×1) (n×p) (p×1)

Multiple Linear Regression

[ y_1 ]   [ x_1,1  x_1,2  x_1,3  ...  x_1,p ] [ β_1 ]
[ y_2 ] = [ x_2,1  x_2,2  x_2,3  ...  x_2,p ] [ β_2 ]
[  :  ]   [   :      :      :           :   ] [  :  ]
[ y_n ]   [ x_n,1  x_n,2  x_n,3  ...  x_n,p ] [ β_p ]
  n×1                   n×p                     p×1

Y is an n×1 vector of observations on the dependent variable, X is an n×p matrix with n observations on each of the p independent variables, and Β is a p×1 vector of unknown parameters.

Multiple Linear Regression
If x_i,1 = 1 for all i, then β_1 is the intercept. The parameters β_j, j = 1...p, are estimated by minimizing the sum of squared errors e_i, where

e_i = y_i - ŷ_i,  ŷ_i = Σ_{j=1}^{p} β_j x_i,j

Multiple Linear Regression
In matrix notation,

Σ e_i² = E′E = (Y - XΒ̂)′(Y - XΒ̂)

Setting d(Σ e_i²)/dβ_j = 0 for all j gives

X′(Y - XΒ̂) = 0, i.e. X′Y = X′XΒ̂

Multiple Linear Regression
Premultiplying both sides by (X′X)⁻¹,

(X′X)⁻¹X′Y = (X′X)⁻¹(X′X)Β̂

Β̂ = (X′X)⁻¹X′Y

X′X is a p×p matrix, and its rank must be p for it to be invertible.
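In code, the normal equations amount to one linear solve. A NumPy sketch (an illustration, not the lecture's method); Example 2 below applies exactly this computation:

```python
import numpy as np

def ols(X, Y):
    """B_hat = (X'X)^-1 X'Y for an n x p design matrix X and n-vector Y."""
    XtX = X.T @ X   # p x p; must have rank p to be invertible
    XtY = X.T @ Y   # p-vector
    # Solving the system is numerically safer than forming the inverse explicitly.
    return np.linalg.solve(XtX, XtY)
```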

Multiple Linear Regression
Suppose the number of regression coefficients is 3. Then the matrix X′X is

        [ Σ x_i,1²        Σ x_i,1 x_i,2   Σ x_i,1 x_i,3 ]
X′X =   [ Σ x_i,1 x_i,2   Σ x_i,2²        Σ x_i,2 x_i,3 ]
        [ Σ x_i,1 x_i,3   Σ x_i,2 x_i,3   Σ x_i,3²      ]

with all sums taken over i = 1 to n.

Multiple Linear Regression
A multiple coefficient of determination, R² (as in the case of simple linear regression), is defined as

R² = (sum of squares due to regression) / (sum of squares about the mean)
   = (Β̂′X′Y - n ȳ²) / (Y′Y - n ȳ²)

Example 2
In a watershed, the mean annual flood (Q) is considered to be dependent on the area of the watershed (A) and rainfall (R). The table gives the observations for 12 years. Obtain the regression coefficients and the R² value.

Q (cumec):    0.44  0.24  2.41  2.97  0.70  0.11  0.05  0.51  0.25  0.23  0.10  0.054
A (hectares): 324   226   1474  2142  420   45    38    363   77    84    46    38
Rainfall (cm): 43    53    48    50    43    61    81    68    74    71    71    69

Example 2 (Contd.)
The regression model is

Q = β_1 + β_2 A + β_3 R

where Q is the mean annual flood in m³/s, A is the watershed area in hectares, and R is the rainfall in cm. This is represented in matrix form as

Y = X Β
(12×1) (12×3) (3×1)

Example 2 (Contd.)
To obtain the coefficients, the following system is solved:

[ 0.44  ]   [ 1  324   43 ]
[ 0.24  ]   [ 1  226   53 ]
[ 2.41  ]   [ 1  1474  48 ]
[ 2.97  ]   [ 1  2142  50 ] [ β_1 ]
[ 0.70  ]   [ 1  420   43 ]
[ 0.11  ] = [ 1  45    61 ] [ β_2 ]
[ 0.05  ]   [ 1  38    81 ]
[ 0.51  ]   [ 1  363   68 ] [ β_3 ]
[ 0.25  ]   [ 1  77    74 ]
[ 0.23  ]   [ 1  84    71 ]
[ 0.10  ]   [ 1  46    71 ]
[ 0.054 ]   [ 1  38    69 ]
  12×1           12×3         3×1

Example 2 (Contd.)
The coefficients are obtained from Β̂ = (X′X)⁻¹X′Y. With x_i,1 = 1 for all i,

        [ n          Σ x_i,2         Σ x_i,3       ]
X′X =   [ Σ x_i,2    Σ x_i,2²        Σ x_i,2 x_i,3 ]
        [ Σ x_i,3    Σ x_i,2 x_i,3   Σ x_i,3²      ]

Example 2 (Contd.)

        [ 12     5277      732    ]
X′X =   [ 5277   7245075   269879 ]
        [ 732    269879    46536  ]

The inverse of this matrix is

           [  3.35        -6.1×10⁻⁴    -0.05      ]
(X′X)⁻¹ =  [ -6.1×10⁻⁴     2.9×10⁻⁷     7.9×10⁻⁶  ]
           [ -0.05         7.9×10⁻⁶     7.5×10⁻⁴  ]

Example 2 (Contd.)

        [ Σ y_i        ]   [ 8.06  ]
X′Y =   [ Σ x_i,2 y_i  ] = [ 10642 ]
        [ Σ x_i,3 y_i  ]   [ 417   ]

Example 2 (Contd.)

Β̂ = (X′X)⁻¹ X′Y

   [  3.35        -6.1×10⁻⁴    -0.05      ] [ 8.06  ]
 = [ -6.1×10⁻⁴     2.9×10⁻⁷     7.9×10⁻⁶  ] [ 10642 ]
   [ -0.05         7.9×10⁻⁶     7.5×10⁻⁴  ] [ 417   ]

   [ 0.0351      ]
 = [ 0.0014      ]
   [ 5.0135×10⁻⁵ ]

Example 2 (Contd.)
Therefore the regression equation is

Q = 0.0351 + 0.0014 A + 5.0135×10⁻⁵ R

From this equation, the estimated Q̂ and the corresponding errors are tabulated.

Example 2 (Contd.)

Q      A     R    Q̂      e
0.44   324   43   0.49   -0.05
0.24   226   53   0.35   -0.11
2.41   1474  48   2.10    0.31
2.97   2142  50   3.04   -0.07
0.70   420   43   0.63    0.07
0.11   45    61   0.10    0.01
0.05   38    81   0.09   -0.04
0.51   363   68   0.55   -0.04
0.25   77    74   0.15    0.10
0.23   84    71   0.16    0.07
0.10   46    71   0.10    0.00
0.054  38    69   0.09   -0.04

Example 2 (Contd.)
Multiple coefficient of determination, R²:

R² = (Β̂′X′Y - n ȳ²) / (Y′Y - n ȳ²)
   = (15.64 - 5.42) / (15.77 - 5.42)
   = 0.99

with n = 12, ȳ = 0.672, Β̂′ = [0.0351  0.0014  5.0135×10⁻⁵], X′Y = [8.06  10642  417]′, and Y′Y = 15.77.
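The figures above are easy to verify numerically; a short NumPy script (a sketch, not from the lecture) reproduces the coefficients and R²:

```python
import numpy as np

Q = np.array([0.44, 0.24, 2.41, 2.97, 0.7, 0.11, 0.05, 0.51, 0.25, 0.23, 0.1, 0.054])
A = np.array([324, 226, 1474, 2142, 420, 45, 38, 363, 77, 84, 46, 38], dtype=float)
R = np.array([43, 53, 48, 50, 43, 61, 81, 68, 74, 71, 71, 69], dtype=float)

X = np.column_stack([np.ones_like(A), A, R])   # column of ones gives the intercept
B = np.linalg.solve(X.T @ X, X.T @ Q)          # B_hat = (X'X)^-1 X'Y
print(B)                                       # ~ [0.0351, 0.0014, 5.01e-05]

n, y_bar = len(Q), Q.mean()
r2 = (B @ (X.T @ Q) - n * y_bar**2) / (Q @ Q - n * y_bar**2)
print(round(r2, 2))                            # ~ 0.99
```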

PRINCIPAL COMPONENT ANALYSIS

Principal Component Analysis
PCA is a powerful tool for analyzing data. It identifies patterns in the data and expresses the data in a way that highlights their similarities and differences. Once the patterns are found, the data can be compressed (the number of dimensions reduced) with little loss of information. Eigenvectors and eigenvalues are discussed first, to understand the mechanics of PCA.

Matrix Algebra
Eigenvectors and eigenvalues: Let A be a complex square matrix. If λ is a complex number and X a non-zero complex column vector satisfying AX = λX, then X is called an eigenvector of A, and λ the corresponding eigenvalue. Eigenvalues and eigenvectors are defined only for square matrices. For a symmetric matrix (such as a covariance matrix), eigenvectors corresponding to distinct eigenvalues are orthogonal.

Matrix Algebra
If λ is an eigenvalue of an n×n matrix A with corresponding eigenvector X, then (A - λI)X = 0 with X ≠ 0, so det(A - λI) = 0, and there are at most n distinct eigenvalues of A. Conversely, if det(A - λI) = 0, then (A - λI)X = 0 has a non-trivial solution X, and so λ is an eigenvalue of A with X a corresponding eigenvector.

Example 3
Obtain the eigenvalues and eigenvectors of the matrix

A = [ 1  2 ]
    [ 2  1 ]

The eigenvalues are obtained from |A - λI| = 0:

| 1-λ    2  |
|  2    1-λ | = 0

Example 3 (Contd.)

(1 - λ)(1 - λ) - 4 = 0
λ² - 2λ - 3 = 0

Solving the equation, λ = 3, -1. Therefore the eigenvalues of matrix A are 3 and -1. The eigenvectors are obtained from

(A - λI)X = 0

Example 3 (Contd.)
For λ_1 = 3:

(A - λ_1 I)X_1 = 0

[ -2   2 ] [ x_1 ]   [ 0 ]
[  2  -2 ] [ y_1 ] = [ 0 ]

-2x_1 + 2y_1 = 0
 2x_1 - 2y_1 = 0

which has the solution x_1 = y_1, with y_1 arbitrary. The eigenvectors corresponding to λ = 3 are therefore the vectors [y  y]′ with y ≠ 0.

Example 3 (Contd.)
For λ_2 = -1:

(A - λ_2 I)X_2 = 0

[ 2  2 ] [ x_2 ]   [ 0 ]
[ 2  2 ] [ y_2 ] = [ 0 ]

2x_2 + 2y_2 = 0
2x_2 + 2y_2 = 0

which has the solution x_2 = -y_2, with y_2 arbitrary. The eigenvectors corresponding to λ = -1 are therefore the vectors [-y  y]′ with y ≠ 0.
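As a cross-check (not part of the lecture), NumPy returns the same eigenpairs, normalized to unit length:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
vals, vecs = np.linalg.eig(A)
print(vals)   # [ 3. -1.]
print(vecs)   # columns ~ [0.707, 0.707] and [-0.707, 0.707] (x = y and x = -y),
              # up to ordering and sign
```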

Principal Component Analysis
Principal Component Analysis (PCA): When data are collected on p variables, these variables are generally correlated. Correlation indicates that information contained in one variable is also contained in some of the other p-1 variables. PCA transforms the p original, correlated variables into p uncorrelated components (also called orthogonal components). These components are linear functions of the original variables.

Principal Component Analysis
The transformation is written as

Z = X A

where X is the n×p matrix of n observations on p variables, Z is the n×p matrix of n values for each of the p components, and A is the p×p matrix of coefficients defining the linear transformation. All X values are assumed to be deviations from their respective means; hence X is a matrix of deviations from the mean.

Principal Component Analysis
Steps for PCA:
1. Get the data for n observations on p variables.
2. Form a matrix of deviations from the mean.
3. Calculate the covariance matrix.
4. Calculate the eigenvalues and eigenvectors of the covariance matrix.
5. Choose components and form a feature vector.
6. Derive the new data set.
(A short code sketch applying Steps 2-4 to the example data follows Step 4 below.)

Principal Component Analysis
The procedure is explained with a simple data set of the yearly rainfall and yearly runoff of a catchment for 15 years.

Year:           1    2    3    4    5    6    7    8    9    10   11   12   13   14   15
Rainfall (cm):  105  115  103  94   95   104  120  121  127  79   133  111  127  108  85
Runoff (cm):    42   46   26   39   29   33   48   58   45   20   54   37   39   34   25

Principal Component Analysis
Step 2: Form a matrix of deviations from the mean. For the ten years shown below, the means are x̄ = 106.3 cm (rainfall) and ȳ = 38.6 cm (runoff).

Original matrix      Deviations from mean
105   42             -1.3     3.4
115   46              8.7     7.4
103   26             -3.3   -12.6
 94   39            -12.3     0.4
 95   29            -11.3    -9.6
104   33             -2.3    -5.6
120   48             13.7     9.4
121   58             14.7    19.4
127   45             20.7     6.4
 79   20            -27.3   -18.6

Principal Component Analysis
Step 3: Calculate the covariance matrix

cov(X, Y) = s_X,Y = Σ_{i=1}^{n} (x_i - x̄)(y_i - ȳ) / (n - 1)

[ cov(X,X)  cov(X,Y) ]   [ 216.67  141.35 ]
[ cov(Y,X)  cov(Y,Y) ] = [ 141.35  133.38 ]

Principal Component Analysis
Step 4: Calculate the eigenvalues and eigenvectors of the covariance matrix

[ cov(X,X)  cov(X,Y) ]   [ 216.67  141.35 ]
[ cov(Y,X)  cov(Y,Y) ] = [ 141.35  133.38 ]
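As noted after the step list, a minimal NumPy sketch (illustrative, not the lecture's code) carries out Steps 2-4 on the ten years used above:

```python
import numpy as np

# Rainfall and runoff (cm) for the ten years used in the worked example
rain = np.array([105, 115, 103, 94, 95, 104, 120, 121, 127, 79], dtype=float)
runoff = np.array([42, 46, 26, 39, 29, 33, 48, 58, 45, 20], dtype=float)

# Step 2: deviations from the mean
D = np.column_stack([rain - rain.mean(), runoff - runoff.mean()])

# Step 3: covariance matrix with the (n - 1) divisor
C = D.T @ D / (len(rain) - 1)
print(C)      # ~ [[216.68, 141.36], [141.36, 133.38]]

# Step 4: eigenvalues and eigenvectors of the (symmetric) covariance matrix
vals, vecs = np.linalg.eigh(C)
print(vals)   # the larger eigenvalue identifies the first principal component
print(vecs)   # columns are the orthogonal eigenvectors
```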

Principal Component Analysis
Step 5: Choose components and form a feature vector.