Efficient Observation of Random Phenomena


1 Lecture 9: Efficient Observation of Random Phenomena
Tokyo Polytechnic University, The 21st Century Center of Excellence Program
Yukio Tamura

POD: Proper Orthogonal Decomposition
- Stochastic representation of factor analysis
- Karhunen-Loève decomposition

2 POD
Find a deterministic coordinate function $\phi(x,y)$ that best correlates with all the elements of a set of randomly fluctuating wind pressure fields $p(x,y,t)$. $\phi(x,y)$ is derived so as to maximize the projection of the wind pressure field $p(x,y,t)$ onto the deterministic coordinate function $\phi(x,y)$.

Maximization of Projection
Realized from the probabilistic standpoint:
$$\iint p(x,y,t)\,\phi(x,y)\,dx\,dy \rightarrow \max$$
and, after normalizing,
$$\frac{\iint p(x,y,t)\,\phi(x,y)\,dx\,dy}{\left\{\iint \phi^2(x,y)\,dx\,dy\right\}^{1/2}} \rightarrow \max$$

3 Maximization of Projection
Since $p(x,y,t)$ can take positive or negative values, the maximization is carried out in the mean-square sense:
$$\frac{\overline{\iint p(x,y,t)\,\phi(x,y)\,dx\,dy \iint p(x',y',t)\,\phi(x',y')\,dx'\,dy'}}{\iint \phi^2(x,y)\,dx\,dy} \rightarrow \max$$

Eigenvalue Problem
$$\iint R_p(x,y,x',y')\,\phi(x',y')\,dx'\,dy' = \lambda\,\phi(x,y)$$
where $R_p(x,y,x',y')$ is the spatial correlation of $p(x,y,t)$.

Matrix Form
For uniformly distributed measurement points, $dx\,dy$ is constant and the integral equation reduces to a matrix eigenvalue problem:
$$[R_p]\{\phi\} = \lambda\{\phi\}$$
$[R_p]$: spatial correlation matrix ($M \times M$ square matrix)
$\{\phi\}$: eigenvector of the spatial correlation matrix $[R_p]$
$\lambda$: eigenvalue of the spatial correlation matrix $[R_p]$
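As a minimal numerical sketch of this discretized eigenvalue problem (the data here are synthetic stand-ins, not the lecture's measurements; all names are mine), $M$ pressure taps sampled $T$ times give the $M \times M$ matrix $[R_p]$, whose eigenpairs a symmetric eigensolver returns directly:

```python
import numpy as np

# Synthetic stand-in for the discretized field p(x, y, t):
# rows are pressure taps, columns are time samples.
rng = np.random.default_rng(0)
M, T = 8, 20000
p = rng.standard_normal((M, T))
p -= p.mean(axis=1, keepdims=True)       # fluctuating component only

Rp = p @ p.T / T                         # spatial correlation matrix [Rp], M x M
lam, phi = np.linalg.eigh(Rp)            # symmetric eigenproblem [Rp]{phi} = lam {phi}
lam, phi = lam[::-1], phi[:, ::-1]       # reorder so lambda_1 >= lambda_2 >= ...
```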

4 Fluctuating Pressure Field $p(x,y,t)$
$$p(x,y,t) = \sum_{m=1}^{M} a_m(t)\,\phi_m(x,y)$$
$$a_m(t) = \frac{\iint p(x,y,t)\,\phi_m(x,y)\,dx\,dy}{\iint \phi_m^2(x,y)\,dx\,dy} \;:\; m\text{-th principal coordinate}$$
$\phi_m(x,y)$: $m$-th eigenvector (eigenmode)

Correlation Between Principal Coordinates
No correlation between different modes: $\overline{a_m(t)\,a_n(t)} = 0$ for $m \neq n$
Mean square of the $m$-th principal coordinate: $\overline{a_m^2(t)} = \lambda_m$ = eigenvalue
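These two properties can be checked numerically; a sketch with the same synthetic setup as above, where the sample covariance of the principal coordinates comes out diagonal with the eigenvalues on the diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)
M, T = 8, 20000
p = rng.standard_normal((M, T))
p -= p.mean(axis=1, keepdims=True)

lam, phi = np.linalg.eigh(p @ p.T / T)
lam, phi = lam[::-1], phi[:, ::-1]          # descending eigenvalues

a = phi.T @ p                               # principal coordinates a_m(t)
C = a @ a.T / T                             # sample <a_m(t) a_n(t)>
assert np.allclose(C, np.diag(lam))         # zero cross-correlation, <a_m^2> = lam_m
```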

5 Mean Square of Wind Pressure
Mean square of wind pressure at point $(x,y)$:
$$\overline{p^2(x,y,t)} = \sum_{m=1}^{M} \lambda_m\,\phi_m^2(x,y)$$
Field-total sum of mean-square wind pressures:
$$\iint \overline{p^2(x,y,t)}\,dx\,dy = \sum_{m=1}^{M} \lambda_m = \sum_{m=1}^{M} \overline{a_m^2(t)}$$

Reconstruction in Lower Modes
$$\hat{p}(x,y,t) = \sum_{m=1}^{N} a_m(t)\,\phi_m(x,y), \qquad N \ll M$$
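A sketch of the truncated reconstruction, using a synthetic spatially correlated field (the exponential correlation model is my assumption; on such a field a few low-order modes carry most of $\sum \lambda_m$):

```python
import numpy as np

rng = np.random.default_rng(0)
M, T = 8, 20000
# Spatially correlated stand-in field: nearby taps fluctuate together.
R_true = np.exp(-np.abs(np.subtract.outer(np.arange(M), np.arange(M))) / 2.0)
p = np.linalg.cholesky(R_true) @ rng.standard_normal((M, T))
p -= p.mean(axis=1, keepdims=True)

lam, phi = np.linalg.eigh(p @ p.T / T)
lam, phi = lam[::-1], phi[:, ::-1]
a = phi.T @ p

N = 3                                       # keep N << M lowest-order modes
p_hat = phi[:, :N] @ a[:N]                  # truncated reconstruction of p(x, y, t)
share = lam[:N].sum() / lam.sum()           # captured share of the total mean square
print(f"{N}/{M} modes carry {100 * share:.1f}% of the total mean square")
```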

6 Fluctuating Pressure Field (Fluctuating Component Only)
[Figure: time histories $p_1(t)$, $p_2(t)$, $p_3(t)$ marked at instants $t_1, \dots, t_5$]
State Locus of Fluctuating Pressure Field
[Figure: locus of the state point $(p_1(t), p_2(t), p_3(t))$ passing through $t_1, \dots, t_5$]

7 State Locus on Same Plane
[Figure: the state locus lying in a plane, with in-plane coordinates $\xi(t)$, $\eta(t)$ and axis $a_1(t)$]
State Locus on Same Line
[Figure: the state locus collapsed onto a single line, coordinate $a_1(t)$]

8 Projection of State Locus onto Principal Coordinate
[Figure: projections $a_{11}, \dots, a_{15}$ and $a_{21}, \dots, a_{25}$ of the state points $t_1, \dots, t_5$ onto the axes $a_1(t)$ and $a_2(t)$]

Maximization of Projection
What is maximized is the variance
$$\sigma_{a_1}^2 = \frac{1}{M}\sum_{j=1}^{M} a_{1j}^2$$
not the mean square: only the fluctuating component (zero mean) is analyzed.
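As a numerical illustration of this maximization (a sketch with an assumed two-signal covariance, not the lecture's data): scanning every unit direction $\phi$ and computing the variance of the projection $a_1(t) = \phi^T p(t)$ recovers the leading eigenvector of $[R_p]$:

```python
import numpy as np

rng = np.random.default_rng(2)
R_true = np.array([[2.0, 0.9],
                   [0.9, 1.0]])             # assumed covariance of two signals
p = np.linalg.cholesky(R_true) @ rng.standard_normal((2, 50000))
p -= p.mean(axis=1, keepdims=True)
Rp = p @ p.T / p.shape[1]

theta = np.linspace(0.0, np.pi, 1801)
dirs = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # candidate unit vectors
var = np.einsum('di,ij,dj->d', dirs, Rp, dirs)           # phi^T [Rp] phi per direction
best = dirs[var.argmax()]                                # variance-maximizing direction

_, vecs = np.linalg.eigh(Rp)
print(best, vecs[:, -1])     # same direction up to sign and grid resolution
```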

9 Principal Coordinates $a_1(t)$, $a_2(t)$, $a_3(t)$
Linear combinations of the original (physical) coordinates $p_1(t)$, $p_2(t)$, $p_3(t)$:
$$a_1(t) = \phi_{11}\,p_1(t) + \phi_{12}\,p_2(t) + \phi_{13}\,p_3(t)$$
$$a_2(t) = \phi_{21}\,p_1(t) + \phi_{22}\,p_2(t) + \phi_{23}\,p_3(t)$$
$$a_3(t) = \phi_{31}\,p_1(t) + \phi_{32}\,p_2(t) + \phi_{33}\,p_3(t)$$
$$\{a\} = [\phi]\{p\}$$

Coordinate Transformation Matrix $[\phi]$
$[\phi] = [\phi_{ij}]$; its $m$-th row vector $\{\phi_m\} = \{\phi_{m1}, \phi_{m2}, \phi_{m3}\}^T$ is the $m$-th eigenvector.

10 Maximization of Variance of the 1st-Mode Principal Coordinate $a_1(t)$
Assumptions:
- no correlation between the principal coordinates $a_1(t)$, $a_2(t)$, $a_3(t)$
- unit norm of each eigenvector: $\phi_{m1}^2 + \phi_{m2}^2 + \phi_{m3}^2 = 1$

Variance of the 1st principal coordinate $a_1(t)$:
$$\sigma_{a_1}^2 = \overline{\{a_1(t)\}^2} = \overline{\{\phi_{11}\,p_1(t) + \phi_{12}\,p_2(t) + \phi_{13}\,p_3(t)\}^2}$$
$$= \phi_{11}^2\sigma_1^2 + \phi_{12}^2\sigma_2^2 + \phi_{13}^2\sigma_3^2 + 2\phi_{11}\phi_{12}\sigma_{12} + 2\phi_{11}\phi_{13}\sigma_{13} + 2\phi_{12}\phi_{13}\sigma_{23}$$
$\sigma_{a_1}^2$: variance of the 1st principal coordinate $a_1(t)$
$\sigma_m^2$: variance of the fluctuating pressure $p_m(t)$
$\sigma_{mn}$: covariance of $p_m(t)$ and $p_n(t)$

11 Lagrange's Method of Undetermined Multipliers
Maximizing the variance $\sigma_{a_1}^2$ of $a_1(t)$ is equivalent to maximizing the value
$$L = \sigma_{a_1}^2 - \lambda\,(\phi_{11}^2 + \phi_{12}^2 + \phi_{13}^2 - 1)$$
where $\lambda$ is a constant and the constraint is $\phi_{11}^2 + \phi_{12}^2 + \phi_{13}^2 = 1$.

Setting the partial derivatives $\partial L / \partial \phi_{1m} = 0$:
$$(\sigma_1^2 - \lambda)\,\phi_{11} + \sigma_{12}\,\phi_{12} + \sigma_{13}\,\phi_{13} = 0 \quad \text{(a)}$$
$$\sigma_{21}\,\phi_{11} + (\sigma_2^2 - \lambda)\,\phi_{12} + \sigma_{23}\,\phi_{13} = 0 \quad \text{(b)}$$
$$\sigma_{31}\,\phi_{11} + \sigma_{32}\,\phi_{12} + (\sigma_3^2 - \lambda)\,\phi_{13} = 0 \quad \text{(c)}$$
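The differentiation step can be checked symbolically; the following SymPy sketch (symbol names are mine) confirms that $\partial L / \partial \phi_{1m} = 0$ reproduces equations (a), (b) and (c) in the compact form $2[(R_p - \lambda I)]\{\phi\} = 0$:

```python
import sympy as sp

# Symbolic sketch of the Lagrange step for the three-tap case.
f11, f12, f13, lam = sp.symbols('phi11 phi12 phi13 lam', real=True)
s11, s22, s33, s12, s13, s23 = sp.symbols('s11 s22 s33 s12 s13 s23', real=True)

S = sp.Matrix([[s11, s12, s13],
               [s12, s22, s23],
               [s13, s23, s33]])                    # covariance matrix [Rp]
phi = sp.Matrix([f11, f12, f13])
L = (phi.T * S * phi)[0] - lam * (phi.dot(phi) - 1) # L = sigma_a1^2 - lam (|phi|^2 - 1)

grad = sp.Matrix([sp.diff(L, v) for v in (f11, f12, f13)])
assert sp.simplify(grad - 2 * (S - lam * sp.eye(3)) * phi) == sp.zeros(3, 1)
```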

12 Condition for a Non-trivial Solution
Determinant of coefficients = 0:
$$\begin{vmatrix} \sigma_1^2 - \lambda & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_2^2 - \lambda & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_3^2 - \lambda \end{vmatrix} = 0 \quad \text{(d)}$$

This is the eigenvalue problem of the covariance matrix $[R_p]$:
$$[R_p] = \begin{bmatrix} \sigma_1^2 & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_2^2 & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_3^2 \end{bmatrix}$$
$\sigma_i^2$: variance, $\sigma_{ij} = \sigma_{ji}$: covariance
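Numerically, the roots of (d) are exactly what a symmetric eigensolver returns; a small check with an assumed covariance matrix (the numbers are illustrative only):

```python
import numpy as np

Rp = np.array([[4.0, 1.2, 0.5],
               [1.2, 3.0, 0.8],
               [0.5, 0.8, 2.0]])                   # illustrative variances/covariances
for lam in np.linalg.eigvalsh(Rp):                 # lambda_1, lambda_2, lambda_3
    d = np.linalg.det(Rp - lam * np.eye(3))        # determinant of coefficients
    print(f"lambda = {lam:.4f}   det = {d: .2e}")  # vanishes up to round-off
```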

13 Equation (d) has three roots: $\lambda = \lambda_1, \lambda_2, \lambda_3$ ($\lambda_1 \geq \lambda_2 \geq \lambda_3$).

Forming Eq.(a)$\times\phi_{11}$ + Eq.(b)$\times\phi_{12}$ + Eq.(c)$\times\phi_{13}$:
$$\phi_{11}^2\sigma_1^2 + \phi_{12}^2\sigma_2^2 + \phi_{13}^2\sigma_3^2 + 2\phi_{11}\phi_{12}\sigma_{12} + 2\phi_{11}\phi_{13}\sigma_{13} + 2\phi_{12}\phi_{13}\sigma_{23} - \lambda\,(\phi_{11}^2 + \phi_{12}^2 + \phi_{13}^2) = 0$$
$$\therefore \sigma_{a_1}^2 = \lambda$$

Maximum variance of the principal coordinate = eigenvalue:
$$\max \sigma_{a_1}^2 = \lambda_1$$

14 Orthogonality of Eigenvectors of Matrix $[R_p]$
$$\{\phi_m\}^T\{\phi_n\} = \sum_j \phi_{mj}\phi_{nj} = \phi_{m1}\phi_{n1} + \phi_{m2}\phi_{n2} + \phi_{m3}\phi_{n3} = \delta_{mn}$$
$\delta_{mn}$: Kronecker's delta, $\delta_{mn} = 1$ for $m = n$ and $0$ for $m \neq n$.

Reconstruction of Pressure Field
Reconstruction from the principal coordinates $a_1(t)$, $a_2(t)$, $a_3(t)$ and the eigenvectors:
$$p_1(t) = \phi_{11}\,a_1(t) + \phi_{21}\,a_2(t) + \phi_{31}\,a_3(t)$$
$$p_2(t) = \phi_{12}\,a_1(t) + \phi_{22}\,a_2(t) + \phi_{32}\,a_3(t)$$
$$p_3(t) = \phi_{13}\,a_1(t) + \phi_{23}\,a_2(t) + \phi_{33}\,a_3(t)$$
$$\{p\} = [\phi]^T\{a\}$$
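A round-trip sketch (synthetic three-signal data): because the rows of $[\phi]$ are orthonormal, the inverse transform is simply the transpose, and the field is recovered exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.standard_normal((3, 5000))             # stand-ins for p_1, p_2, p_3
p -= p.mean(axis=1, keepdims=True)

_, vecs = np.linalg.eigh(p @ p.T / p.shape[1])
Phi = vecs.T                                   # m-th row is the eigenvector {phi_m}

assert np.allclose(Phi @ Phi.T, np.eye(3))     # {phi_m}^T {phi_n} = delta_mn
a = Phi @ p                                    # {a} = [phi]{p}
assert np.allclose(Phi.T @ a, p)               # {p} = [phi]^T {a}: exact round trip
```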

15 Proportion of m-th Mode
Proportion of the $m$-th principal coordinate = variance of the $m$-th principal coordinate / variance of the original pressure field:
$$c_m = \frac{\sigma_{a_m}^2}{\sigma_1^2 + \sigma_2^2 + \sigma_3^2} = \frac{\sigma_{a_m}^2}{\sigma_{a_1}^2 + \sigma_{a_2}^2 + \sigma_{a_3}^2} = \frac{\lambda_m}{\lambda_1 + \lambda_2 + \lambda_3}$$

Proportion and Cumulative Proportion
Proportion of the $m$-th principal coordinate:
$$c_m = \lambda_m \Big/ \sum_{m=1}^{M} \lambda_m$$
Cumulative proportion up to the $N$-th mode:
$$C_N = \sum_{m=1}^{N} c_m$$
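A two-line sketch of these quantities (the eigenvalues here are illustrative, not the lecture's):

```python
import numpy as np

lam = np.array([5.6, 2.1, 0.9, 0.4])   # illustrative eigenvalues, descending
c = lam / lam.sum()                    # proportion c_m = lambda_m / sum(lambda)
C = np.cumsum(c)                       # cumulative proportion C_N
for m, (cm, CN) in enumerate(zip(c, C), start=1):
    print(f"mode {m}: c_m = {100 * cm:5.1f}%   C_N = {100 * CN:5.1f}%")
```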

16 Fluctuating Pressure Field
[Figure: time histories $P_1(t)$, $P_2(t)$, $P_3(t)$ marked at instants $t_1, \dots, t_5$]
Low-rise Building Model
[Figure: wind-tunnel model of a low-rise building]

17 Randomly Fluctuating Pressures
[Figure: randomly fluctuating pressure records]
Mean and Standard Deviation of Fluctuating Pressures
[Figure: wind pressures on the surfaces of a low-rise building model: mean pressure coefficient $C_{p,\mathrm{mean}}$ and standard deviation $C_{p'}$]

18 Eigenvectors of the Lowest Three Modes
[Figure: 1st, 2nd and 3rd mode shapes]

Eigenvalues, Proportions and Cumulative Proportions
Wind pressures on the surfaces of a low-rise building model:
[Table: eigenvalue, proportion (%) and cumulative proportion (%) for the 1st to 10th modes]

19 Pressure distributions on a low-rise building model ($\theta$ = 45°, D : B : H = 4 : 4 : 1)
[Figure: (a) mean pressure coefficient $C_p$, (b) fluctuating pressure coefficient $C_{p'}$]
Lowest Three Eigenvectors
[Figure: 1st, 2nd and 3rd mode shapes]

20 High-rise Building Model
Pressure measurement model and analytical lumped-mass model
- Length scale: 1/400
- Mean wind speed profile: $\alpha$ = 1/6
- 500 pressure taps
- Sampling: $\Delta t$ = … s, T = 4 s (3,768 samples)

21 Mean and Standard Deviation of Fluctuating Pressures
[Figure: wind pressures on the surfaces of a high-rise building model: mean pressure and standard deviation]
Eigenvectors of the Lowest Two Modes
[Figure: fluctuating pressures acting on a high-rise building model: 1st and 2nd mode shapes]

22 Eigenvectors of the 3rd, 4th and 5th Modes
[Figure: fluctuating pressures acting on a high-rise building model: 3rd, 4th and 5th mode shapes]
4th Mode
[Figure: along-wind force coefficients at 0.2H and 0.8H contributed by the 4th mode (suburban flow, $\alpha$ = 1/6)]

23 5th Mode
[Figure: along-wind force coefficients at 0.2H and 0.8H contributed by the 5th mode (suburban flow, $\alpha$ = 1/6)]

Eigenvalues, Proportions and Cumulative Proportions
Wind pressures on the surfaces of a high-rise building model:
[Table: eigenvalue, proportion (%) and cumulative proportion (%) for the 1st to 10th modes]

24 Power Spectral Densities of Wind Forces
[Figure: generalized wind forces and the lowest five principal coordinates]
Fluctuating Wind Force Coefficients by Each Mode ($\alpha$ = 1/6)
[Figure: rms $C_{Fx}$ (along-wind force) and rms $C_{Fy}$ (across-wind force) by mode]

25 Fluctuating Wind Force Coefficients by Each Mode ($\alpha$ = 1/6)
[Figure: rms $C_{Mz}$ (torsional moment) by mode]
RMS Force Coefficients Reconstructed from Selected Dominant Modes ($\alpha$ = 1/6)
[Figure: reconstruction from the 1st, 5th, 10th, 11th, 13th, 14th, 16th, 18th, 21st and 31st modes]

26 Power Spectra of Generalized Wind Forces Reconstructed from Selected Dominant Modes ($\alpha$ = 1/6)
[Figure: generalized along-wind force, across-wind force and torsional moment reconstructed from the selected dominant modes]

27 Response Analysis in Time Domain
Analytical conditions:
- Coupled oscillation of the along-wind, across-wind and torsional components
- Newmark β method: β = 1/4 (a generic sketch of this scheme follows below)
- Time interval: $\Delta t$ = 0.71 s
- Calculation length: T = 600 s
- Tip mean wind speed (H = 200 m): $V_H$ = 55 m/s (100-year recurrence in Tokyo)

Analytical model and forces:
- 25 lumped masses, 3 DOF each
- Fundamental natural periods: $T_{1X}$ = $T_{1Y}$ = 5 s, $T_{1\theta}$ = 1.3 s
- Damping ratios: 2% of critical
- Wind forces based on the original pressures and on pressures reconstructed from selected dominant modes:
  - Along-wind: 2nd, 3rd and 4th modes
  - Across-wind: 1st and 5th modes
  - Torsional: 1st, 5th, ..., 31st modes (10 modes)

Responses due to Wind Forces Reconstructed from Selected Dominant Modes
[Figure: along-wind displacement, across-wind displacement and torsional angle]
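The slide names the Newmark β method with β = 1/4 (the constant-average-acceleration variant, unconditionally stable for linear systems). As a minimal sketch of that time-stepping scheme, the following generic linear integrator is mine; the lecture's 25-mass, 3-DOF model, its M, C, K matrices and the wind-force histories are not reproduced, so every input below is a placeholder:

```python
import numpy as np

def newmark_beta(M, C, K, F, dt, beta=0.25, gamma=0.5):
    """Linear Newmark-beta time stepping (beta=1/4, gamma=1/2: average acceleration).

    M, C, K: (n, n) mass, damping and stiffness matrices
    F:       (steps, n) external force history
    Returns the (steps, n) displacement history for zero initial conditions.
    """
    steps, n = F.shape
    u = np.zeros((steps, n))
    v = np.zeros(n)
    a = np.linalg.solve(M, F[0])                       # initial acceleration
    Keff = K + (gamma / (beta * dt)) * C + M / (beta * dt ** 2)
    for i in range(steps - 1):
        # Effective force from the current state, solved implicitly for u[i+1].
        rhs = (F[i + 1]
               + M @ (u[i] / (beta * dt ** 2) + v / (beta * dt)
                      + (0.5 / beta - 1.0) * a)
               + C @ ((gamma / (beta * dt)) * u[i] + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        u[i + 1] = np.linalg.solve(Keff, rhs)
        v_next = ((gamma / (beta * dt)) * (u[i + 1] - u[i])
                  + (1.0 - gamma / beta) * v
                  + dt * (1.0 - 0.5 * gamma / beta) * a)
        a = ((u[i + 1] - u[i]) / (beta * dt ** 2) - v / (beta * dt)
             - (0.5 / beta - 1.0) * a)
        v = v_next
    return u

# Tiny usage sketch: one DOF, free decay after an initial force pulse.
u = newmark_beta(np.eye(1), 0.1 * np.eye(1), 4.0 * np.eye(1),
                 np.vstack([[1.0], np.zeros((999, 1))]), dt=0.05)
```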

28 Responses due to Wind Forces Reconstructed from Selected Dominant Modes
[Figure: displacement and shear force responses]

Response at top (H = 200 m):
[Table: maximum and standard deviation of along-wind displacement (cm), across-wind displacement (cm) and angular displacement, for wind forces based on the original pressures and on pressures reconstructed from selected dominant modes (along-wind: 2nd, 3rd and 4th; across-wind: 1st and 5th; torsion: 10 selected modes*), together with the error (%)]
* 1st, 5th, 10th, 11th, 13th, 14th, 16th, 18th, 21st and 31st modes

29 Coordinate Transformation Matrix [A]
Matrix [A] acts as an operator transforming coordinates:
$$\{b\} = [A]\{a\}$$
Vector $\{a\}$ is transformed into another vector $\{b\}$ of a different magnitude and a different direction by the operation of matrix [A].

Eigenvalue and Eigenvector
Example: $[A] = \begin{bmatrix} 2 & 1 \\ 1 & 4 \end{bmatrix}$
Eigenvalues: $\lambda$ = 4.41, 1.59
Eigenvectors: $\{1, 2.41\}^T$, $\{1, -0.41\}^T$
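A quick NumPy check of this example. Note the matrix entries are illegible in the transcription and are reconstructed here from the stated eigenpairs (trace 4.41 + 1.59 = 6, determinant 4.41 × 1.59 ≈ 7, eigenvectors (1, 2.41) and (1, -0.41)), which determine them uniquely; still treat them as an inference:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 4.0]])           # reconstructed from the stated eigenpairs
lam, vecs = np.linalg.eigh(A)        # ascending: 3 -/+ sqrt(2)
print(lam.round(2))                  # [1.59 4.41]
print((vecs / vecs[0]).round(2))     # columns scaled to (1, -0.41) and (1, 2.41)
```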

30 Coordinate Transformation Matrix and Eigenvalue / Eigenvector
[Figure: unit vectors $\{a\}$ ($|a| = 1$) and their images $\{b\} = [A]\{a\}$, with the 1st and 2nd eigenvector directions marked]
For $\{a\}$ along an eigenvector, $\{b\} = [A]\{a\} = \lambda\{a\}$: the magnification factor equals the eigenvalue, $\lambda_1$ = 4.41, $\lambda_2$ = 1.59.

31 Coordinate Transformation Matrix and Eigenvalue / Eigenvector
[Figure: locus of $\{b\} = [A]\{a\}$ over all unit vectors $\{a\}$, with the 1st and 2nd eigenvector directions and their magnifications $\lambda_1$, $\lambda_2$ marked]

32 POD of a Random Field with a Singular Condition
Example: $[R_p] = \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix}$
Eigenvalues: $\lambda = a$ (multiple root). Eigenvectors: indeterminate!
(Uncorrelated components with the same variances.)

Example of Eigenvectors: Sample A
[Figure: 1st mode ($c_1$ = 29.2%) and 2nd mode ($c_2$ = 25.4%)]
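A sketch of this degeneracy (synthetic data, mine): two uncorrelated, equal-variance signals give $[R_p] \approx a I$, so the sample eigenvectors come out essentially arbitrary from one sample to the next, exactly as the two samples on these slides illustrate:

```python
import numpy as np

rng = np.random.default_rng()          # deliberately unseeded: differs per run
for label in ("A", "B"):
    p = rng.standard_normal((2, 2000)) # equal variances, zero correlation
    p -= p.mean(axis=1, keepdims=True)
    lam, phi = np.linalg.eigh(p @ p.T / p.shape[1])
    print(f"sample {label}: lambda = {lam.round(3)}, 1st mode = {phi[:, 1].round(3)}")
```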

33 Example of Eigenvectors: Sample B
[Figure: 1st mode ($c_1$ = 30.6%) and 2nd mode ($c_2$ = 24.7%)]
State Locus by $a_1(t)$ and $a_2(t)$
[Figure: state loci in the plane of the 1st and 2nd principal coordinates for Sample A and Sample B]

34 Eigenvalues and Proportions with/without Inclusion of Mean Value
[Table: eigenvalue and proportion (%) of the 1st and 2nd modes, computed without and with the mean value included]
[Figure: state locus in the $(p_1(t), p_2(t))$ plane, with principal axes $a_1(t)$, $a_2(t)$ and the mean point $(p_{1m}, p_{2m})$ with coordinates $a_{1m}(t)$, $a_{2m}(t)$]

Merits of POD
- Observe phenomena in the most efficient coordinates
- Extract hidden systematic structures from random information
- Significantly reduce the amount of information that needs to be stored (see the sketch below)
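A back-of-the-envelope sketch of the storage reduction: keeping N modes stores the N eigenvectors (N·M numbers) plus the N principal-coordinate histories (N·T numbers) instead of the full M·T field. The tap and sample counts below are assumptions for illustration, not the lecture's exact figures:

```python
# Storage comparison for a POD truncation (illustrative counts only).
M, T, N = 500, 32768, 10             # taps, time steps, retained modes (assumed)
full = M * T                         # store every tap at every time step
pod = N * (M + T)                    # store N eigenvectors + N coordinate histories
print(f"full field: {full:,} values   POD: {pod:,} values   ~{full / pod:.0f}x smaller")
```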
