ECEN 615 Methods of Electric Power Systems Analysis Lecture 18: Least Squares, State Estimation


ECEN 615 Methods of Electric Power Systems Analysis
Lecture 18: Least Squares, State Estimation
Prof. Tom Overbye
Dept. of Electrical and Computer Engineering
Texas A&M University
overbye@tamu.edu

Announcements
Read Chapter 9. We'll just briefly cover state estimation since it is covered by ECEN 614, but we will use it as an example for least squares and QR factorization. Homework 4 is due on Thursday, Nov 1.

Least Squares
So far we have considered the solution of Ax = b in which A is a square matrix; as long as A is nonsingular there is a single solution. That is, we have the same number of equations (m) as unknowns (n). Many problems are overdetermined, in which there are more equations than unknowns (m > n). Overdetermined systems are usually inconsistent, in which no value of x exactly solves all the equations. Underdetermined systems have more unknowns than equations (m < n); they never have a unique solution but are usually consistent.

Method of Least Squares
The least squares method is a solution approach for determining an approximate solution for an overdetermined system. If the system is inconsistent, then not all of the equations can be exactly satisfied. The difference for each equation between its exact solution and the estimated solution is known as the error. Least squares seeks to minimize the sum of the squares of the errors. Weighted least squares allows different weights for the equations.

Least Squares Solution History
The method of least squares developed from trying to estimate actual values from a number of measurements. Several persons in the 1700s, starting with Roger Cotes in 1722, presented methods for trying to decrease model errors from using multiple measurements. Legendre presented a formal description of the method in 1805; evidently Gauss claimed he did it in 1795. The method is widely used in power systems, with state estimation the best known application, dating from Fred Schweppe's work in 1970.

Least Squares and Sparsity
In many contexts least squares is applied to problems that are not sparse, for example, using a number of measurements to optimally determine a few values. Regression analysis is a common example, in which a line or other curve is fit to potentially many points. Each measurement impacts each model value. In the classic power system application of state estimation the system is sparse, with measurements only directly influencing a few states. Power system analysis classes have tended to focus on solution methods aimed at sparse systems; we'll consider both sparse and nonsparse solution methods.

Least Squares Problem
Consider $Ax \simeq b$, where $A \in \mathbb{R}^{m \times n}$, $x \in \mathbb{R}^{n}$, $b \in \mathbb{R}^{m}$:

$$\begin{bmatrix} (a^1)^T \\ (a^2)^T \\ \vdots \\ (a^m)^T \end{bmatrix} x = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & & & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}$$

Least Squares Solution
We write $(a^i)^T$ for row $i$ of $A$; $a^i$ is a column vector. Here $m \ge n$, and the solution we are seeking is the $x$ that minimizes $\|Ax - b\|_p$, where $\|\cdot\|_p$ denotes some norm. Since usually an overdetermined system has no exact solution, the best we can do is determine an $x$ that minimizes the desired norm.

Choice of p
We discuss the choice of $p$ in terms of a specific example. Consider the equation $Ax = b$ with

$$A = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \qquad b = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}, \quad \text{with } b_1 \le b_2 \le b_3$$

(hence three equations and one unknown). We consider three possible choices for $p$:

Choice of p
(i) $p = 1$: $\|Ax - b\|_1$ is minimized by $x^* = b_2$ (the median)
(ii) $p = 2$: $\|Ax - b\|_2$ is minimized by $x^* = \dfrac{b_1 + b_2 + b_3}{3}$ (the mean)
(iii) $p = \infty$: $\|Ax - b\|_\infty$ is minimized by $x^* = \dfrac{b_1 + b_3}{2}$ (the midrange)
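These three minimizers are easy to verify numerically; a small Python check (the values in b and the grid resolution are our own choices for illustration):

import numpy as np

# Check the three minimizers for A = [1, 1, 1]^T by brute-force grid search.
b = np.array([1.0, 2.0, 6.0])            # any b1 <= b2 <= b3 works
xs = np.linspace(-10.0, 10.0, 200001)    # candidate values of the scalar x
res = xs[:, None] - b[None, :]           # residuals Ax - b for every candidate

for p, expected in [(1, b[1]), (2, b.mean()), (np.inf, (b[0] + b[2]) / 2)]:
    norms = np.linalg.norm(res, ord=p, axis=1)
    print(f"p = {p}: argmin = {xs[np.argmin(norms)]:.3f}, formula = {expected:.3f}")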

The Least Squares Problem
In general, $\|Ax - b\|_p$ is not differentiable for $p = 1$ or $p = \infty$. The choice of $p = 2$ (the Euclidean norm) has become well established given its least-squares fit interpretation. The problem $\min_x \|Ax - b\|_2$ is tractable for two major reasons. First, the function

$$\phi(x) = \frac{1}{2}\|Ax - b\|_2^2 = \frac{1}{2}\sum_{i=1}^{m} \left((a^i)^T x - b_i\right)^2$$

is differentiable.

The Least Squares Problem, cont.
Second, the Euclidean norm is preserved under orthogonal transformations:

$$\left\|Q^T A x - Q^T b\right\|_2 = \|Ax - b\|_2$$

with $Q$ an arbitrary orthogonal matrix; that is, $Q$ satisfies $Q Q^T = Q^T Q = I$.
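This invariance is what makes QR-based least squares work: with $A = QR$, minimizing $\|Ax - b\|_2$ reduces to the triangular system $Rx = Q^T b$. A minimal sketch (the random A and b are placeholders for any overdetermined system):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))          # any full-rank overdetermined system
b = rng.standard_normal(8)

Q, R = np.linalg.qr(A)                   # reduced QR: Q is 8x3, R is 3x3
x_qr = np.linalg.solve(R, Q.T @ b)       # solve R x = Q^T b

x_ne = np.linalg.solve(A.T @ A, A.T @ b) # normal equations give the same x
print(np.allclose(x_qr, x_ne))           # True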

The Least Squares Problem, cont.
We introduce next the basic underlying assumption: $A$ is full rank, i.e., the columns of $A$ constitute a set of linearly independent vectors. This assumption implies that the rank of $A$ is $n$, because $n \le m$ since we are dealing with an overdetermined system. Fact: the least squares solution $x^*$ satisfies

$$A^T A x^* = A^T b$$

Proof of Fact
Since by definition the least squares solution $x^*$ minimizes $\phi$, at the optimum the derivative of this function is zero:

$$\phi(x) = \frac{1}{2}\|Ax - b\|_2^2 = \frac{1}{2}\left(x^T A^T A x - x^T A^T b - b^T A x + b^T b\right)$$

$$\nabla \phi(x) = A^T A x - A^T b = 0 \;\Rightarrow\; A^T A x^* = A^T b$$

Implications
The underlying assumption that $A$ is full rank implies $x \ne 0 \Rightarrow Ax \ne 0$. Therefore, the fact that $A^T A$ is positive definite (p.d.) follows from considering any $x \ne 0$ and evaluating $x^T A^T A x = \|Ax\|_2^2 > 0$, which is the definition of a p.d. matrix. We use the shorthand $A^T A > 0$ for $A^T A$ being a symmetric, positive definite matrix.

Implications
The underlying assumption that $A$ is full rank, and therefore $A^T A$ is p.d., implies that there exists a unique least squares solution

$$x^* = \left(A^T A\right)^{-1} A^T b$$

Note: we use the inverse in a conceptual, rather than a computational, sense. The formulation $A^T A x = A^T b$ is known as the normal equations, with the solution conceptually straightforward.

Example: Curve Fitting
Say we wish to fit five points to a polynomial curve of the form

$$f(t, x) = x_1 + x_2 t + x_3 t^2$$

This can be written as $Ax \simeq y$:

$$\begin{bmatrix} 1 & t_1 & t_1^2 \\ 1 & t_2 & t_2^2 \\ 1 & t_3 & t_3^2 \\ 1 & t_4 & t_4^2 \\ 1 & t_5 & t_5^2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \approx \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \end{bmatrix}$$

Example: Curve Fitting
Say the points are $t = [0, 1, 2, 3, 4]$ and $y = [0, 2, 4, 5, 4]$. Then

$$A = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 1 & 2 & 4 \\ 1 & 3 & 9 \\ 1 & 4 & 16 \end{bmatrix}$$

$$x^* = \left(A^T A\right)^{-1} A^T y = \begin{bmatrix} 0.886 & 0.257 & -0.086 & -0.143 & 0.086 \\ -0.771 & 0.186 & 0.571 & 0.386 & -0.371 \\ 0.143 & -0.071 & -0.143 & -0.071 & 0.143 \end{bmatrix} \begin{bmatrix} 0 \\ 2 \\ 4 \\ 5 \\ 4 \end{bmatrix} = \begin{bmatrix} -0.2 \\ 3.1 \\ -0.5 \end{bmatrix}$$
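The same numbers fall out of a few lines of numpy; this is only a check of the slide's arithmetic:

import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 2.0, 4.0, 5.0, 4.0])
A = np.column_stack([np.ones_like(t), t, t**2])   # columns 1, t, t^2

x_star = np.linalg.solve(A.T @ A, A.T @ y)        # normal equations
print(x_star)                                     # [-0.2  3.1 -0.5]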

Implications
An important implication of positive definiteness is that we can factor $A^T A$: since $A^T A > 0$,

$$A^T A = U^T D U = \left(U^T D^{1/2}\right)\left(D^{1/2} U\right) = G^T G$$

The expression $A^T A = G^T G$ is called the Cholesky factorization of the symmetric positive definite matrix $A^T A$.

A Least Squares Solution Algorithm
Step 1: Compute the lower triangular part of $A^T A$.
Step 2: Obtain the Cholesky factorization $A^T A = G^T G$.
Step 3: Compute $A^T b = \hat{b}$.
Step 4: Solve for $y$ using forward substitution in $G^T y = \hat{b}$, and for $x$ using backward substitution in $G x = y$.
Note, our standard LU factorization approach would work; we can just solve it twice as fast by taking advantage of the matrix being symmetric.
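A minimal Python sketch of these four steps, assuming $A$ has full column rank so that $A^T A$ is positive definite (the function name is our own):

import numpy as np
from scipy.linalg import cholesky, solve_triangular

def least_squares_cholesky(A, b):
    M = A.T @ A                          # Step 1: form A^T A (symmetric)
    G = cholesky(M, lower=False)         # Step 2: A^T A = G^T G, G upper triangular
    b_hat = A.T @ b                      # Step 3: b_hat = A^T b
    y = solve_triangular(G.T, b_hat, lower=True)   # Step 4a: forward solve G^T y = b_hat
    return solve_triangular(G, y, lower=False)     # Step 4b: backward solve G x = y

# The curve-fitting example from the earlier slide:
A = np.array([[1.0, 0, 0], [1, 1, 1], [1, 2, 4], [1, 3, 9], [1, 4, 16]])
y = np.array([0.0, 2, 4, 5, 4])
print(least_squares_cholesky(A, y))      # [-0.2  3.1 -0.5]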

Practical Considerations
The two key problems that arise in practice with the triangularization procedure are: First, while $A$ may be sparse, $A^T A$ is much less sparse and consequently requires more computing resources for the solution. In particular, with $A^T A$ second neighbors are now connected! Large networks are still sparse, just not as sparse. Second, $A^T A$ may actually be numerically less well-conditioned than $A$.

Loss of Sparsity Example
Assume the B matrix for a network is

$$B = \begin{bmatrix} -1 & 1 & 0 & 0 \\ 1 & -2 & 1 & 0 \\ 0 & 1 & -2 & 1 \\ 0 & 0 & 1 & -1 \end{bmatrix}$$

Then $B^T B$ is

$$B^T B = \begin{bmatrix} 2 & -3 & 1 & 0 \\ -3 & 6 & -4 & 1 \\ 1 & -4 & 6 & -3 \\ 0 & 1 & -3 & 2 \end{bmatrix}$$

Second neighbors are now connected!
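The fill-in is easy to see by comparing the nonzero patterns; a short sketch:

import numpy as np

B = np.array([[-1.,  1.,  0.,  0.],
              [ 1., -2.,  1.,  0.],
              [ 0.,  1., -2.,  1.],
              [ 0.,  0.,  1., -1.]])
print((B != 0).astype(int))              # tridiagonal: only first neighbors coupled
print((B.T @ B != 0).astype(int))        # pentadiagonal: second neighbors now coupled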

Numerical Conditioning
To understand the point on numerical ill-conditioning, we need to introduce terminology. We define the norm of a matrix $B \in \mathbb{R}^{m \times n}$ to be

$$\|B\| = \max_{x \ne 0} \frac{\|Bx\|}{\|x\|} = \text{maximum stretching of the matrix } B$$

This is the maximum singular value of $B$.

Numerical Conditioning Example
Say we have the matrix

$$B = \begin{bmatrix} 10 & 0 \\ 0 & 0.1 \end{bmatrix}$$

What value of $x$ with a norm of 1 maximizes $\|Bx\|$? What value of $x$ with a norm of 1 minimizes $\|Bx\|$?

$$\|B\| = \max_{x \ne 0} \frac{\|Bx\|}{\|x\|} = \text{maximum stretching of the matrix } B$$

Numerical Conditioning

$$\|B\| = \sqrt{\max_i \lambda_i}, \quad \lambda_i \text{ an eigenvalue of } B^T B$$

i.e., $\lambda_i$ is a root of the polynomial $p(\lambda) = \det\left(B^T B - \lambda I\right)$. In other words, the norm of $B$ is the square root of the largest eigenvalue of $B^T B$. Keep in mind the eigenvalues of a p.d. matrix are positive.

Numerical Conditioning
The condition number of a matrix $B$ is defined as

$$\kappa(B) = \|B\|\,\left\|B^{-1}\right\| = \frac{\sigma_{\max}}{\sigma_{\min}}$$

that is, the max/min stretching ratio of the matrix $B$. A well conditioned matrix has a small value of $\kappa(B)$, close to 1; the larger the value of $\kappa(B)$, the more pronounced is the ill-conditioning.
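For the diagonal example two slides back the stretching directions are the coordinate axes, so the condition number is simply 10/0.1 = 100:

import numpy as np

B = np.diag([10.0, 0.1])
sigma = np.linalg.svd(B, compute_uv=False)   # singular values: [10, 0.1]
print(sigma.max() / sigma.min())             # 100.0
print(np.linalg.cond(B, p=2))                # same value from numpy directly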

Power System State Estimation
The overall goal is to come up with a power flow model for the present "state" of the power system based on the actual system measurements. SE assumes the topology and parameters of the transmission network are mostly known. Measurements come from SCADA, and increasingly, from PMUs.

Power System State Estimation
The problem can be formulated in a nonlinear, weighted least squares form as

$$\min_x J(x) = \sum_{i=1}^{m} \frac{\left(z_i - f_i(x)\right)^2}{\sigma_i^2}$$

where $J(x)$ is the scalar cost function, $x$ are the state variables (primarily bus voltage magnitudes and angles), $z_i$ are the $m$ measurements, $f(x)$ relates the states to the measurements, and $\sigma_i$ is the assumed standard deviation for each measurement.

Assumed Error
Hence the goal is to decrease the error between the measurements and the assumed model states $x$. The $\sigma_i$ term weighs the various measurements, recognizing that they can have vastly different assumed errors:

$$\min_x J(x) = \sum_{i=1}^{m} \frac{\left(z_i - f_i(x)\right)^2}{\sigma_i^2}$$

Measurement error is assumed Gaussian (whether it is or not is another question); outliers (bad measurements) are often removed.

State Estimation for Linear Functions
First we'll consider the linear problem, that is, where $z^{meas} = f(x) = Hx$. Let $R$ be defined as the diagonal matrix of the variances (squares of the standard deviations) for each of the measurements:

$$R = \begin{bmatrix} \sigma_1^2 & & & \\ & \sigma_2^2 & & \\ & & \ddots & \\ & & & \sigma_m^2 \end{bmatrix}$$

State Estimation for Linear Functions
We then differentiate $J(x)$ with respect to $x$ to determine the value of $x$ that minimizes this function:

$$J(x) = \left(z^{meas} - Hx\right)^T R^{-1} \left(z^{meas} - Hx\right)$$

$$\nabla J(x) = -2 H^T R^{-1} z^{meas} + 2 H^T R^{-1} H x$$

At the minimum we have $\nabla J(x) = 0$, so solving for $x$ gives

$$x = \left(H^T R^{-1} H\right)^{-1} H^T R^{-1} z^{meas}$$

Simple DC System Example
Say we have a two bus power system that we are solving using the dc approximation, and say the line's per unit reactance is $j0.1$. Say we have power measurements at both ends of the line; for simplicity assume $R = I$. We would then like to estimate the bus angles. Then

$$z_1 = P_{12} = \frac{\theta_1 - \theta_2}{0.1} = 2.0, \qquad z_2 = P_{21} = \frac{\theta_2 - \theta_1}{0.1} = -2.0$$

$$x = \begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}, \qquad H = \begin{bmatrix} 10 & -10 \\ -10 & 10 \end{bmatrix}, \qquad H^T H = \begin{bmatrix} 200 & -200 \\ -200 & 200 \end{bmatrix}$$

We have a problem since $H^T H$ is singular. This is because of the lack of an angle reference.

Simple DC System Example, cont.
Say we directly measure $\theta_1$ (with a PMU) to be zero; set this as the third measurement. Then

$$z_1 = P_{12} = 2.0, \qquad z_2 = P_{21} = -2.0, \qquad z_3 = \theta_1 = 0$$

$$H = \begin{bmatrix} 10 & -10 \\ -10 & 10 \\ 1 & 0 \end{bmatrix}, \qquad H^T H = \begin{bmatrix} 201 & -200 \\ -200 & 200 \end{bmatrix}$$

$$x = \left(H^T R^{-1} H\right)^{-1} H^T R^{-1} z^{meas} = \begin{bmatrix} 201 & -200 \\ -200 & 200 \end{bmatrix}^{-1} \begin{bmatrix} 40 \\ -40 \end{bmatrix} = \begin{bmatrix} 0 \\ -0.2 \end{bmatrix}$$

Note that the angles are in radians.
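A quick numpy verification of this example using the weighted least squares formula from the linear SE slide (with $R = I$ as assumed above):

import numpy as np

H = np.array([[10.0, -10.0],
              [-10.0, 10.0],
              [1.0, 0.0]])               # third row: direct theta_1 measurement
z = np.array([2.0, -2.0, 0.0])
R_inv = np.eye(3)                        # R = I

x = np.linalg.solve(H.T @ R_inv @ H, H.T @ R_inv @ z)
print(x)                                 # [ 0.  -0.2]  (radians)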

Nonlinear Formulation
A regular ac power system is nonlinear, so we need to use an iterative solution approach. This is similar to the Newton power flow. Here assume $m$ measurements and $n$ state variables (usually bus voltage magnitudes and angles). Then the Jacobian is the $H$ matrix:

$$H(x) = \frac{\partial f(x)}{\partial x} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & & \vdots \\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix}$$

Measurement Example
Assume we measure the real and reactive power flowing into one end of a transmission line; then the $z_i - f_i(x)$ functions for these two measurements are

$$P_{ij}^{meas} = V_i^2 G_{ij} - V_i V_j \left(G_{ij} \cos\theta_{ij} + B_{ij} \sin\theta_{ij}\right)$$

$$Q_{ij}^{meas} = -V_i^2 \left(B_{ij} + B_{ij}^{cap}\right) - V_i V_j \left(G_{ij} \sin\theta_{ij} - B_{ij} \cos\theta_{ij}\right)$$

Two measurements for four unknowns. Other measurements, such as the flow at the other end, and voltage magnitudes, add redundancy.

SE Iterative Solution Algorithm
We then make an initial guess of $x$, $x^{(0)}$, and iterate, calculating $\Delta x$ each iteration:

$$\Delta x = \left(H^T R^{-1} H\right)^{-1} H^T R^{-1} \begin{bmatrix} z_1 - f_1(x) \\ \vdots \\ z_m - f_m(x) \end{bmatrix}, \qquad x^{(k+1)} = x^{(k)} + \Delta x$$

Keep in mind that $H$ is no longer constant, but varies as $x$ changes. This is exactly the least squares form developed earlier, with $H^T R^{-1} H$ an $n$ by $n$ matrix. This could be solved with Gaussian elimination, but this isn't preferred because the problem is often ill-conditioned.
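A generic sketch of this iteration in Python (the function name, convergence test, and use of a dense inverse of R are our own simplifications, not the book's):

import numpy as np

def gauss_newton_se(f, H, z, R, x0, tol=1e-6, max_iter=20):
    """Iterate x <- x + (H^T R^-1 H)^-1 H^T R^-1 (z - f(x)).
    f(x): predicted measurements; H(x): m x n Jacobian; R: covariance."""
    x = np.asarray(x0, dtype=float).copy()
    R_inv = np.linalg.inv(R)
    for _ in range(max_iter):
        Hx = H(x)
        G = Hx.T @ R_inv @ Hx            # gain matrix, rebuilt each iteration
        dx = np.linalg.solve(G, Hx.T @ R_inv @ (z - f(x)))
        x += dx
        if np.max(np.abs(dx)) < tol:
            break
    return x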

Nonlinear SE Solution Algorithm (book Figure 9.11)

Example: Two Bus Case
Assume a two bus case with a generator supplying a load through a single line with $x = 0.1$ pu. Assume measurements of the P/Q flow on both ends of the line (flow into the line is positive), and the voltage magnitude at both the generator and the load end. So $B_{12} = B_{21} = 10.0$:

$$P_{ij}^{meas} = V_i V_j B_{ij} \sin\left(\theta_i - \theta_j\right)$$

$$Q_{ij}^{meas} = V_i^2 B_{ij} - V_i V_j B_{ij} \cos\left(\theta_i - \theta_j\right)$$

$$V_i^{meas} = V_i$$

We need to assume a reference angle unless we are directly measuring phase.

Example: Two Bus Case
Let

$$z^{meas} = \begin{bmatrix} P_{12} \\ Q_{12} \\ P_{21} \\ Q_{21} \\ V_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} 2.0 \\ 1.15 \\ -1.98 \\ -1.2 \\ 1.01 \\ 0.87 \end{bmatrix}, \qquad \sigma_i = 0.01$$

We assume an angle reference of $\theta_1 = 0$. With $x = (\theta_2, V_1, V_2)$ and $\theta_{12} = \theta_1 - \theta_2$, the Jacobian is

$$H(x) = \begin{bmatrix} -10 V_1 V_2 \cos\theta_{12} & 10 V_2 \sin\theta_{12} & 10 V_1 \sin\theta_{12} \\ -10 V_1 V_2 \sin\theta_{12} & 20 V_1 - 10 V_2 \cos\theta_{12} & -10 V_1 \cos\theta_{12} \\ 10 V_1 V_2 \cos\theta_{12} & -10 V_2 \sin\theta_{12} & -10 V_1 \sin\theta_{12} \\ -10 V_1 V_2 \sin\theta_{12} & -10 V_2 \cos\theta_{12} & 20 V_2 - 10 V_1 \cos\theta_{12} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
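Putting the pieces together with the gauss_newton_se sketch from the algorithm slide (measurement ordering and values as reconstructed above; treat them as illustrative):

import numpy as np

B = 10.0   # B12 = B21 from the slide

def f(x):
    th2, v1, v2 = x                      # state: theta_2 (theta_1 = 0), V1, V2
    s, c = np.sin(-th2), np.cos(-th2)    # sin, cos of theta_12 = theta_1 - theta_2
    return np.array([B*v1*v2*s,                  # P12
                     B*v1**2 - B*v1*v2*c,        # Q12
                     -B*v1*v2*s,                 # P21
                     B*v2**2 - B*v1*v2*c,        # Q21
                     v1, v2])                    # voltage magnitudes

def H(x):
    th2, v1, v2 = x
    s, c = np.sin(-th2), np.cos(-th2)
    return np.array([
        [-B*v1*v2*c,  B*v2*s,           B*v1*s],           # P12 row
        [-B*v1*v2*s,  2*B*v1 - B*v2*c, -B*v1*c],           # Q12 row
        [ B*v1*v2*c, -B*v2*s,          -B*v1*s],           # P21 row
        [-B*v1*v2*s, -B*v2*c,           2*B*v2 - B*v1*c],  # Q21 row
        [0.0, 1.0, 0.0],                                   # V1 row
        [0.0, 0.0, 1.0]])                                  # V2 row

z = np.array([2.0, 1.15, -1.98, -1.2, 1.01, 0.87])   # P12, Q12, P21, Q21, V1, V2
R = np.diag([0.01**2] * 6)
x = gauss_newton_se(f, H, z, R, x0=[0.0, 1.0, 1.0])
print(x)                                 # estimated [theta_2, V1, V2]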