ECEN 667 Power System Stability, Lecture 23: Measurement Based Modal Analysis, FFT
Slide 1: ECEN 667 Power System Stability, Lecture 23: Measurement Based Modal Analysis, FFT
Prof. Tom Overbye, Dept. of Electrical and Computer Engineering, Texas A&M University
Slide 2: Announcements
- Read Chapter 8
- Homework 7 is posted; it is due on Thursday, Nov 30 (extended two days due to the break)
- The final is as per the TAMU schedule, that is, Friday, Dec 8 from 3 to 5 pm
Slide 3: Measurement Based Modal Analysis
- With the advent of large numbers of PMUs, measurement based small signal analysis (SSA) is increasingly used
- The goal is to determine the damping associated with the dominant oscillatory modes in the system
- Approaches seek to approximate a sampled signal by a series of exponential functions (usually damped sinusoids)
- Several techniques are available, with Prony analysis the oldest; the method, developed by Gaspard Riche de Prony, dates to 1795, with power system applications from about the 1980s
- Here we'll consider a newer alternative, based on the variable projection method
Slide 4: Some Useful References
- J.F. Hauer, C.J. Demeure, and L.L. Scharf, "Initial results in Prony analysis of power system response signals," IEEE Trans. Power Systems, vol. 5, pp. 80-89, Feb. 1990
- D.J. Trudnowski, J.M. Johnson, and J.F. Hauer, "Making Prony analysis more accurate using multiple signals," IEEE Trans. Power Systems, vol. 14, Feb. 1999
- A.R. Borden, B.C. Lesieutre, and J. Gronquist, "Power System Modal Analysis Tool Developed for Industry Use," Proc. North American Power Symposium, Manhattan, KS, Sept. 2013
Slide 5: Variable Projection Method (VPM)
- The idea of all these techniques is to approximate a signal, y_org(t), by the sum of other, simpler signals (basis functions)
- Basis functions are usually exponentials, with linear and quadratic functions also added to detrend the signal
- Properties of the original signal can be quantified from the basis function properties (such as frequency and damping)
- The signal is considered over an interval with t = 0 at the beginning
- Approaches work by sampling the original signal y_org(t): the vector y consists of m uniformly sampled points from y_org(t), at a sampling interval of ΔT, starting with t = 0, with values y_j for j = 1...m; the sample times are then t_j = (j-1)ΔT
Slide 6: Variable Projection Method (VPM)
At each time point j, where t_j = (j-1)ΔT, the approximation of y_j is

  \hat{y}_j(\alpha) = \sum_{i=1}^{n} b_i \phi_i(t_j, \alpha)

where \alpha \in R^n is a vector with the real and imaginary eigenvalue components. For an \alpha_i corresponding to a real eigenvalue,

  \phi_i(t_j, \alpha) = e^{\alpha_i t_j}

and for a complex eigenvalue with real part \alpha_i and imaginary part \alpha_{i+1},

  \phi_i(t_j, \alpha) = e^{\alpha_i t_j} \cos(\alpha_{i+1} t_j)  and  \phi_{i+1}(t_j, \alpha) = e^{\alpha_i t_j} \sin(\alpha_{i+1} t_j)
Slide 7: Variable Projection Method (VPM)
The error (residual) value at each point j is

  r_j(t_j, \alpha) = y_j - \hat{y}_j(t_j, \alpha)

where \alpha is the vector containing the optimization variables. The function being minimized is

  \frac{1}{2} \sum_{j=1}^{m} (\hat{y}_j(t_j, \alpha) - y_j)^2 = \frac{1}{2} \| r(\alpha) \|_2^2

where r(\alpha) is the residual vector. The method iteratively changes \alpha to reduce the minimization function.
Slide 8: Variable Projection Method (VPM)
A key insight of the variable projection method is that

  \hat{y}(\alpha) = \Phi(\alpha) b

and then the residual is minimized by selecting

  b = \Phi(\alpha)^{+} y

where \Phi(\alpha) is the m by n matrix with entries

  \phi_{ji}(\alpha) = e^{\alpha_i t_j}  if \alpha_i corresponds to a real eigenvalue, and
  \phi_{ji}(\alpha) = e^{\alpha_i t_j} \cos(\alpha_{i+1} t_j),  \phi_{j,i+1}(\alpha) = e^{\alpha_i t_j} \sin(\alpha_{i+1} t_j)  for a complex eigenvalue pair

with t_j = (j-1)ΔT. Finally, \Phi(\alpha)^{+} is the pseudoinverse of \Phi(\alpha).
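The construction of \Phi(\alpha) and the recovery of b can be sketched in a few lines of NumPy; the signal here (one damped-sinusoid mode with damping -0.5 and frequency 3 rad/s, coefficients 2 and 1.5) is illustrative, not the lecture's example:

```python
import numpy as np

# Sample a signal with one damped-sinusoid mode (illustrative values).
dt = 0.05
t = np.arange(40) * dt            # t_j = (j-1)*dt, m = 40 samples
y = np.exp(-0.5 * t) * (2.0 * np.cos(3.0 * t) + 1.5 * np.sin(3.0 * t))

# With alpha = [-0.5, 3] (real and imaginary eigenvalue parts), build the
# m-by-2 matrix Phi(alpha): columns e^{a1 t} cos(a2 t) and e^{a1 t} sin(a2 t).
alpha = np.array([-0.5, 3.0])
Phi = np.column_stack([np.exp(alpha[0] * t) * np.cos(alpha[1] * t),
                       np.exp(alpha[0] * t) * np.sin(alpha[1] * t)])

# b = pinv(Phi) @ y minimizes ||y - Phi b||_2; with the exact alpha it
# recovers the coefficients b = [2, 1.5].
b = np.linalg.pinv(Phi) @ y
```

With the exact \alpha, the residual is zero and b matches the generating coefficients; in the VPM iteration, \alpha itself is still unknown and must be refined.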
Slide 9: Pseudoinverse of a Matrix
- The pseudoinverse of a matrix generalizes the concept of a matrix inverse to an m by n matrix, with m >= n; specifically we are talking about the Moore-Penrose matrix inverse
- The notation for the pseudoinverse of A is A^{+}
- It satisfies A A^{+} A = A, and if A is a square, nonsingular matrix then A^{+} = A^{-1}
- It is quite useful for solving the least squares problem, since the least squares solution of Ax = b is x = A^{+} b
- It can be calculated using an SVD: if A = U \Sigma V^T then A^{+} = V \Sigma^{+} U^T
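The SVD-based construction above can be checked numerically; this sketch assumes A has full column rank, so every singular value is nonzero and \Sigma^{+} simply inverts them (the particular 3-by-2 matrix is illustrative):

```python
import numpy as np

# Build the pseudoinverse from an economy SVD: A = U S V^T  =>  A+ = V S+ U^T,
# where S+ inverts the (here all nonzero) singular values.
A = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [6.0, 1.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T   # assumes full column rank
```

The result satisfies the Moore-Penrose condition A A^{+} A = A and agrees with NumPy's built-in `np.linalg.pinv`.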
Slide 10: Simple Least Squares Example
Assume we wish to fit a line (mx + b = y) to three data points: (1,1), (2,4), (6,4). There are two unknowns, m and b; hence x = [m, b]^T. Setting this up in the form Ax = b:

  A = [1 1; 2 1; 6 1],  b = [1; 4; 4]

We are trying to select m and b to minimize the error in this overdetermined problem.
Slide 11: Simple Least Squares Example
Doing an economy SVD, A = U \Sigma V^T, and computing the pseudoinverse, A^{+} = V \Sigma^{+} U^T. In an economy SVD the \Sigma matrix has dimensions of m by m if m < n, or n by n if n < m.
Slide 12: Simple Least Squares Example
Computing x = [m, b]^T = A^{+} b gives m = 3/7 and b = 12/7. With the pseudoinverse approach we immediately see the sensitivity of the elements of x to the elements of b, so new values of m and b can be readily calculated if the measurement vector changes. Computationally, the SVD is of order m^2 n + n^3 (with n < m).
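The three-point line fit on this slide can be reproduced directly with the pseudoinverse; the least squares solution is m = 3/7 and b = 12/7:

```python
import numpy as np

# Least squares fit of the line m*x + b = y through (1,1), (2,4), (6,4)
# via the pseudoinverse: x = A+ y for the overdetermined system A x = y.
A = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [6.0, 1.0]])
y = np.array([1.0, 4.0, 4.0])
m_fit, b_fit = np.linalg.pinv(A) @ y
```

As a check, the normal equations A^T A x = A^T y give 14m = 6, so m = 3/7 and b = 3 - 3m = 12/7.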
Slide 13: VPM Example
Assume we'd like to determine the characteristics of the SMIB rotor angle response shown in the figure (Gen Bus 4 #1 Rotor Angle versus time). The signal itself is given from 0 to 5 seconds; we will be sampling it over a shorter time period. For simplicity we'll just consider this signal from 1 to 2 seconds, and work with m = 6 samples (ΔT = 0.2, from 1 to 2 seconds); hence we'll set our t = 0 as 1.0 seconds.
Slide 14: VPM Example
Assume we know a good approximation of this signal (over the desired range) is

  y_org(t) = e^{\alpha_1 t} (13.58 \cos(11.97 t) + 8.34 \sin(11.97 t)) + 24.42 e^{\alpha_3 t}

Hence the zero error values would be \alpha = [\alpha_1, 11.97, \alpha_3] and b = [13.58, 8.34, 24.42]. With ΔT = 0.2 and m = 6, y is the corresponding six-sample vector. How we got the initial \alpha will be shown in a few slides.
Slide 15: VPM Example
To verify, form \Phi(\alpha) from the six sample times (the first row is t = 0, the second t = 0.2, etc.; for example, \phi_{2,1}(\alpha) = e^{0.2 \alpha_1} \cos(11.97 \times 0.2)). Then computing

  b = \Phi(\alpha)^{+} y

matches the known values of b.
Slide 16: VPM, cont.
This is an iterative process, requiring an initial guess of \alpha, and then a method to update \alpha until the residual vector, r, is minimized:

  \min_{\alpha} \| r(\alpha) \|_2 = \min_{\alpha} \| (I - \Phi(\alpha) \Phi(\alpha)^{+}) y \|_2

This is solved with a gradient method, with the details on finding the gradient direction given in the Borden, Lesieutre, Gronquist 2013 NAPS paper; the iteration continues until a minimum is reached. Like any iterative method, its convergence depends on the initial guess, in this case of \alpha.
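The equivalence underlying this objective, that the projected residual (I - \Phi \Phi^{+}) y equals y - \Phi b once b = \Phi^{+} y has been chosen, is easy to confirm numerically; the random \Phi and y below are purely illustrative:

```python
import numpy as np

# Check that the variable-projection residual (I - Phi Phi+) y equals
# y - Phi b with b = Phi+ y, for an arbitrary small Phi and y.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((6, 3))
y = rng.standard_normal(6)

b = np.linalg.pinv(Phi) @ y
r_direct = y - Phi @ b
r_proj = (np.eye(6) - Phi @ np.linalg.pinv(Phi)) @ y
```

This is why b can be eliminated from the optimization, leaving only \alpha as the iteration variable.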
Slide 17: VPM: Initial Guess of \alpha
The initial guesses for \alpha are calculated using a Matrix Pencil method.
- First, with m samples, let L = m/2
- Then form a Hankel matrix Y such that

  Y = [ y_1 y_2 ... y_{L+1} ; y_2 y_3 ... y_{L+2} ; ... ; y_{m-L} ... y_m ]

- And calculate its singular values with an economy SVD, Y = U \Sigma V^T
The computational complexity increases with the cube of the number of measurements!
Slide 18: VPM: Initial Guess of \alpha
- The ratio of each singular value to the largest singular value \sigma_c is computed; retain the ones with a ratio greater than a threshold (e.g., 0.16). This determines the modal order, M
- Assuming V is ordered by singular values (highest to lowest), let V_p be the matrix with the first M columns of V
- Then form the matrices V_1 and V_2 such that V_1 is the matrix consisting of all but the last row of V_p, and V_2 is the matrix consisting of all but the first row of V_p (the NAPS paper equation is incorrect on this)
- The discrete-time poles are found as the generalized eigenvalues of the pair {V_2^T V_1, V_1^T V_1}
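The steps above can be sketched end-to-end for a single noiseless damped sinusoid; the signal parameters (damping -0.3, frequency 1.5 Hz, ΔT = 0.05 s) are illustrative, and in the noiseless case the pencil recovers the mode essentially exactly:

```python
import numpy as np

# Matrix Pencil initial guess for one noiseless damped sinusoid
# (illustrative parameters, not the lecture's example).
dt, m = 0.05, 200
t = np.arange(m) * dt
y = np.exp(-0.3 * t) * np.cos(2 * np.pi * 1.5 * t)

# Hankel matrix with L = m/2: rows are length-(L+1) windows of the signal.
L = m // 2
Y = np.array([y[i:i + L + 1] for i in range(m - L)])

# Economy SVD; retain singular values above a ratio threshold (e.g., 0.16).
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
M = int(np.sum(s / s[0] > 0.16))          # modal order (here M = 2)
Vp = Vt.T[:, :M]                          # first M right singular vectors
V1, V2 = Vp[:-1, :], Vp[1:, :]            # drop last / first row

# Generalized eigenvalues of {V2^T V1, V1^T V1}; V1^T V1 is nonsingular,
# so these are the eigenvalues of (V1^T V1)^{-1} (V2^T V1).
z = np.linalg.eigvals(np.linalg.solve(V1.T @ V1, V2.T @ V1))

# Discrete-time poles -> continuous eigenvalues via ln(z)/dt.
lam = np.log(z.astype(complex)) / dt
```

Sorting `lam` by imaginary part gives the conjugate pair -0.3 ± j2π(1.5), the eigenvalue used to generate the signal.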
Slide 19: Generalized Eigenvalues
The generalized eigenvalue problem for a matrix pair (A, B) consists of determining values \lambda_k, \beta_k, and x_k such that

  \beta_k A x_k = \lambda_k B x_k

- If B = I then this gives the regular eigenvalues
- The generalized eigenvalues are then \lambda_k / \beta_k
- If B is nonsingular then these are the eigenvalues of B^{-1} A; that is the situation here
These eigenvalues are the discrete-time poles, z_i, with the modal eigenvalues then

  \lambda_i = \ln(z_i) / ΔT

Recall the log of a complex number z = r e^{j\theta} is \ln(r) + j\theta.
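The discrete-to-continuous pole conversion is a one-liner with the complex logarithm; the mode value here is illustrative, and the principal branch of the log recovers the eigenvalue exactly as long as |ω ΔT| < π, i.e., the signal is sampled above the Nyquist rate:

```python
import numpy as np

# Convert a discrete-time pole z back to a continuous-time eigenvalue
# lambda = ln(z)/dt, using ln(r e^{j theta}) = ln(r) + j theta.
dt = 0.2
lam_true = -0.5 + 1j * 12.0          # illustrative damped mode
z = np.exp(lam_true * dt)            # discrete-time pole
lam = np.log(z) / dt                 # recovered eigenvalue
```

Here ω ΔT = 12 × 0.2 = 2.4 < π, so the recovery is exact; a higher frequency or slower sampling would wrap the angle and alias the mode.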
Slide 20: Returning to the Example
With m = 6 and L = 3, form the Hankel matrix Y from the six samples of y, and take its economy SVD, Y = U \Sigma V^T. In this example we retain all three singular values.
Slide 21: Example
This gives V_p (here all of V), from which V_1 and V_2 are formed, along with the products V_2^T V_1 and V_1^T V_1. The generalized eigenvalues of this pair (one real value plus a complex conjugate pair) are the discrete-time poles; then with ΔT = 0.2 the modal eigenvalues are obtained as \alpha_i = \ln(z_i) / 0.2.
Slide 22: Example
This initial guess of \alpha is very close to the final solution. The iteration works by calculating the Jacobian, J(\alpha) (with details in the paper), and the gradient

  \nabla \frac{1}{2} \| r(\alpha) \|_2^2 = J(\alpha)^T r(\alpha)

A gradient search optimization (such as Golden Section) is used to determine the distance to move in the negative gradient direction.
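A minimal golden-section line search can be sketched as below; in the VPM iteration the 1-D function f(s) would be the residual norm along the negative gradient direction, but the quadratic used here is purely illustrative:

```python
# Golden-section search: minimize a unimodal 1-D function on [a, b] by
# shrinking the bracket by the inverse golden ratio each step.
def golden_section(f, a, b, tol=1e-8):
    inv_phi = (5 ** 0.5 - 1) / 2          # ~0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                   # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                             # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Illustrative step-length function with its minimum at s = 2.
s_star = golden_section(lambda s: (s - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

This version re-evaluates both interior points each pass for clarity; a production implementation would reuse one evaluation per iteration.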
Slide 23: Comments
- These techniques only match the signals at the sampled time points
- The sampling frequency must be at least twice the highest frequency of interest; a higher sampling rate is generally better, but there is a computational limitation associated with the size of the Hankel matrix
- Aliasing is a concern since we are dealing with a time-limited signal
- Detrending can be used to remove a polynomial offset
- The method can be extended to multiple signals
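The sampling-rate requirement can be seen directly: a sinusoid above half the sampling rate produces samples identical to those of a lower-frequency alias, so the identified mode would be wrong. The frequencies below are illustrative:

```python
import numpy as np

# Aliasing sketch: a 2 Hz sinusoid sampled at only 3 Hz produces exactly
# the same samples as its -1 Hz (= 2 - 3 Hz) alias.
fs = 3.0                                   # below the needed 4 Hz rate
t = np.arange(30) / fs
x_true = np.sin(2 * np.pi * 2.0 * t)       # 2 Hz signal
x_alias = np.sin(2 * np.pi * (2.0 - fs) * t)  # its -1 Hz alias
```

Since the two sample sequences are identical, no modal identification method can distinguish the 2 Hz mode from the 1 Hz alias at this sampling rate.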
Slide 24: VPM: Example 2
Do VPM on the speed for generator 2 from the previous three-bus small signal analysis case. The calculated modes were at 1.51 and 2.02 Hz. The input data here is the red curve (figure: Speed_Gen Bus 1 #1, Speed_Gen Bus 2 #1, Speed_Gen 3 #1).
Slide 25: VPM: Example 2
The results below were obtained by sampling the input data every 0.1 seconds (10 Hz). The calculated frequencies were 2.03 and 1.51 Hz with a dc offset; the 2.03 Hz mode has a magnitude almost 4 times that of the 1.51 Hz mode. (Figure: original versus reproduced signal values.)
Slide 26: Example using the PowerWorld Modal Analysis Dialog
The case is the earlier B3_CLS_3Gen_SSA.
Slide 27: Example using the PowerWorld Modal Analysis Dialog
Details of the solution process are available by selecting Show Solution Details.
Slide 28: VPM: Example 2
The results are quite poor if the maximum frequency of interest is reduced to 1.5 Hz. Sampling is done at twice this frequency, so with 1.5 Hz we would sample three times per second, which is too slow. (Figure: original versus reproduced signal values.)
Slide 29: Moving Forward with VPM
Not all signals exhibit such straightforward oscillations, since there can be other, slower dynamics superimposed. How can this method be extended to handle these situations? (Figure: Line (COUL1418) TO (COULEES3) CKT 1 MW To End versus time.)
Slide 30: Moving Forward with VPM
Here are the results. This signal is made up of five components with frequencies ranging up to 1.1 Hz. (Figure: original versus reproduced signal values.)
More informationAnnouncements. Topics: Homework:
Announcements Topics: - sections 7.3 (the definite integral +area), 7.4 (FTC), 7.5 (additional techniques of integration) * Read these sections and study solved examples in your textbook! Homework: - review
More informationCheat Sheet for MATH461
Cheat Sheet for MATH46 Here is the stuff you really need to remember for the exams Linear systems Ax = b Problem: We consider a linear system of m equations for n unknowns x,,x n : For a given matrix A
More informationMATH 5524 MATRIX THEORY Problem Set 4
MATH 5524 MATRIX THEORY Problem Set 4 Posted Tuesday 28 March 217. Due Tuesday 4 April 217. [Corrected 3 April 217.] [Late work is due on Wednesday 5 April.] Complete any four problems, 25 points each.
More informationAnnouncements. Topics: Homework:
Announcements Topics: - sections 7.5 (additional techniques of integration), 7.6 (applications of integration), * Read these sections and study solved examples in your textbook! Homework: - review lecture
More informationEven-Numbered Homework Solutions
-6 Even-Numbered Homework Solutions Suppose that the matric B has λ = + 5i as an eigenvalue with eigenvector Y 0 = solution to dy = BY Using Euler s formula, we can write the complex-valued solution Y
More informationTransient Stability Analysis with PowerWorld Simulator
Transient Stability Analysis with PowerWorld Simulator T11: Single Machine Infinite Bus Modeling (SMIB) 21 South First Street Champaign, Illinois 6182 +1 (217) 384.633 support@powerworld.com http://www.powerworld.com
More informationTheory and Practice of Rotor Dynamics Prof. Dr. Rajiv Tiwari Department of Mechanical Engineering Indian Institute of Technology Guwahati
Theory and Practice of Rotor Dynamics Prof. Dr. Rajiv Tiwari Department of Mechanical Engineering Indian Institute of Technology Guwahati Module - 2 Simpul Rotors Lecture - 2 Jeffcott Rotor Model In the
More information3 What You Should Know About Complex Numbers
3 What You Should Know About Complex Numbers Life is complex it has a real part, and an imaginary part Andrew Koenig. Complex numbers are an extension of the more familiar world of real numbers that make
More informationESE319 Introduction to Microelectronics. Feedback Basics
Feedback Basics Stability Feedback concept Feedback in emitter follower One-pole feedback and root locus Frequency dependent feedback and root locus Gain and phase margins Conditions for closed loop stability
More informationMITOCW ocw f99-lec23_300k
MITOCW ocw-18.06-f99-lec23_300k -- and lift-off on differential equations. So, this section is about how to solve a system of first order, first derivative, constant coefficient linear equations. And if
More informationAnnouncements Wednesday, November 01
Announcements Wednesday, November 01 WeBWorK 3.1, 3.2 are due today at 11:59pm. The quiz on Friday covers 3.1, 3.2. My office is Skiles 244. Rabinoffice hours are Monday, 1 3pm and Tuesday, 9 11am. Section
More informationStatistical Signal Processing Detection, Estimation, and Time Series Analysis
Statistical Signal Processing Detection, Estimation, and Time Series Analysis Louis L. Scharf University of Colorado at Boulder with Cedric Demeure collaborating on Chapters 10 and 11 A TT ADDISON-WESLEY
More informationPartial Fractions. (Do you see how to work it out? Substitute u = ax + b, so du = a dx.) For example, 1 dx = ln x 7 + C, x x (x 3)(x + 1) = a
Partial Fractions 7-9-005 Partial fractions is the opposite of adding fractions over a common denominator. It applies to integrals of the form P(x) dx, wherep(x) and Q(x) are polynomials. Q(x) The idea
More informationLecture 5 Singular value decomposition
Lecture 5 Singular value decomposition Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University, tieli@pku.edu.cn
More information