5. Surface Temperatures


For this case we are given a thin rectangular sheet of metal, whose temperature we can conveniently measure at any point along its four edges. If the edge temperatures are held steady, then the temperatures within the sheet will settle down to a steady state. We want to determine those steady-state interior temperatures, without having to undertake the difficult job of physically measuring temperatures away from the edges.

To make this problem tractable, we can impose an imaginary square grid on the rectangular sheet, and can then focus on estimating the temperatures at the grid points. Suppose that we let t_ij represent the known temperature, or x_ij the unknown temperature, at the grid point in the ith row and jth column of the grid. For example, in a grid of 1 × 1 meter squares on a 3 × 5 meter sheet, the known temperatures at grid points on the edges would be as follows:

        t12   t13   t14   t15
  t21                           t26
  t31                           t36
        t42   t43   t44   t45

The unknown temperatures x_ij at the grid points in the interior would be these:

        x22   x23   x24   x25
        x32   x33   x34   x35

In general, for an m × n meter sheet, there will be 2(m-1) + 2(n-1) known temperatures and (m-1)(n-1) unknown temperatures.

A computational scheme for estimating the interior temperatures can be based on the following very simple observation:

   Averaging rule: The temperature at any interior grid point is approximately the average of the temperatures at the four nearest grid points.

For example, the 4 grid points nearest to x34 are x33, x24, x35, and t44. Since the average of 4 temperatures is 1/4 their sum, the averaging rule states that

   x34 ≈ 1/4 x33 + 1/4 x24 + 1/4 x35 + 1/4 t44.
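To see the rule in action with some made-up numbers (these are not the temperatures of the example sheet): if the current estimates around grid point (3,4) were x33 = 10, x24 = 20, x35 = 30 and the edge value were t44 = 40, the rule would put x34 near (10 + 20 + 30 + 40)/4 = 25. The same computation in Matlab:

>> x33 = 10;  x24 = 20;  x35 = 30;  t44 = 40;   % illustrative values only
>> x34 = 0.25 * (x33 + x24 + x35 + t44)         % displays 25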

The averaging rule gives only an estimate, but we can treat it as an equality in order to get an approximation to the temperatures at the interior grid points. We then have a linear equation corresponding to each interior point:

   x22 = 1/4 t12 + 1/4 t21 + 1/4 x23 + 1/4 x32
   x32 = 1/4 t31 + 1/4 x22 + 1/4 x33 + 1/4 t42
   x23 = 1/4 x22 + 1/4 t13 + 1/4 x24 + 1/4 x33
   x33 = 1/4 x32 + 1/4 x23 + 1/4 x34 + 1/4 t43
   x24 = 1/4 x23 + 1/4 t14 + 1/4 x25 + 1/4 x34
   x34 = 1/4 x33 + 1/4 x24 + 1/4 x35 + 1/4 t44
   x25 = 1/4 x24 + 1/4 t15 + 1/4 t26 + 1/4 x35
   x35 = 1/4 x34 + 1/4 x25 + 1/4 t36 + 1/4 t45

Altogether, we have 8 equations in 8 variables. We'll first investigate the direct solution of these equations by elimination. Then we'll look at a different, iterative approach to the solution.

Direct solution. Sorting out the variables from the constants in the expressions above, we arrive at the following matrix form for our equations:

   [x22]   [ 0   1/4  1/4   0    0    0    0    0 ] [x22]         [t12 + t21]
   [x32]   [1/4   0    0   1/4   0    0    0    0 ] [x32]         [t42 + t31]
   [x23]   [1/4   0    0   1/4  1/4   0    0    0 ] [x23]         [t13      ]
   [x33] = [ 0   1/4  1/4   0    0   1/4   0    0 ] [x33] + 1/4 · [t43      ]
   [x24]   [ 0    0   1/4   0    0   1/4  1/4   0 ] [x24]         [t14      ]
   [x34]   [ 0    0    0   1/4  1/4   0    0   1/4] [x34]         [t44      ]
   [x25]   [ 0    0    0    0   1/4   0    0   1/4] [x25]         [t15 + t26]
   [x35]   [ 0    0    0    0    0   1/4  1/4   0 ] [x35]         [t45 + t36]

Calling the vector of variables x, the matrix A, and the vector of constants b, we can describe this system more concisely as x = Ax + b, or in standard equation form with all the variables on the left, as (I - A)x = b. (Recall that I is the identity matrix, so that Ix is the same as x.)
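Once A, b, and the directly computed x are in the Matlab workspace (as in the session that follows), a quick sanity check, not part of the original notes, is to confirm that x really does satisfy x = Ax + b up to rounding error:

>> max(abs(x - (A*x + b)))   % largest violation of the averaging equations; should be essentially zero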

Given any particular boundary temperatures, it is straightforward (if a bit tedious) to set up and solve the equations in Matlab. As an example, if the boundary temperatures are the ones shown in the figure [edge-temperature figure not reproduced here], then the interior temperatures may be calculated in Matlab as follows:

>> A = [  0   .25  .25   0    0    0    0    0  ;
         .25   0    0   .25   0    0    0    0  ;
         .25   0    0   .25  .25   0    0    0  ;
          0   .25  .25   0    0   .25   0    0  ;
          0    0   .25   0    0   .25  .25   0  ;
          0    0    0   .25  .25   0    0   .25 ;
          0    0    0    0   .25   0    0   .25 ;
          0    0    0    0    0   .25  .25   0  ];
>> b = 0.25 * [t12+t21; t42+t31; t13; t43; t14; t44; t15+t26; t45+t36];   % numerical edge values from the figure go here
>> x = (eye(8) - A) \ b;
>> reshape (x, [2 4])

The reshape function is used at the end to display the column vector of 8 variables in a 2 × 4 table, such that each row (or column) of result values corresponds to a row (or column) of interior grid points. The value in the first interior row and third interior column of the table, for example, is x24.

More grid points would give a better approximation, but you can see that a manual approach to entering the matrix and boundary temperatures will become too tedious and error-prone as the grid is refined. To carry out the approximation on a practical scale, we'll need to write a program that sets up the equations automatically for a specified grid size. Such a program is possible because our averaging rules produce equations that are all very similar in structure. This similarity carries over into a very regular structure for the equations' coefficient matrix.
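One way to see that regular structure, assuming A has been entered as above, is to display its pattern of nonzero entries; spy is a standard Matlab command for exactly this purpose:

>> spy(A)    % plots the positions of the nonzero entries of A
>> A ~= 0    % prints the same pattern as a matrix of 0s and 1s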

In the case of our 3 × 5 grid, the pattern of the nonzero elements in our matrix A is easy to see. The diagonals immediately above and below the main diagonal have an entry of 1/4 in every other position. The diagonals immediately above and below those are filled with entries of 1/4. All other entries are zeros.

There's also a pattern to the placement of the edge temperatures into the constant term b of our equation system. The left and right edge temperatures go into the constants of the first two and last two equations, respectively. The top and bottom edge temperatures appear in the constants of the odd-numbered and even-numbered equations, respectively.

Given this information, we can proceed to write an M-file grid.m that does all the setup work for the temperature problem. It takes as arguments four arrays of temperatures along the four edges, and returns the appropriate matrix A and vector b:

function [A,b] = grid (Tlft, Trgt, Ttop, Tbot)
for i = 1:2:7
   A(i+1,i) = .25;  A(i,i+1) = .25;
end
for i = 1:6
   A(i+2,i) = .25;  A(i,i+2) = .25;
end
for i = 1:2
   b(i,1)   = .25 * Tlft(i);
   b(i+6,1) = .25 * Trgt(i);
end
for i = 1:4
   b(2*i-1,1) = b(2*i-1,1) + .25 * Ttop(i);
   b(2*i,1)   = b(2*i,1)   + .25 * Tbot(i);
end

The first two for loops generate the two diagonal patterns of .25 entries in A, and the two for loops after that build up the vector b. For the edge temperatures in our example, the grid function can be applied directly to the appropriate arrays, after which the solution x is determined from A and b as before:

>> [A,b] = grid ([-4 -4], [8 7], [ ], [ ]);   % last two arguments: top and bottom edge temperatures from the figure
>> x = (eye(8) - A) \ b;
>> reshape (x, [2 4])

An extension of grid to the 3 × n case is easily made. The patterns are the same, only longer, so it is sufficient to generalize the loop ranges (like 1:2:7 and 1:6) and a few other numbers that refer specifically to the 3 × 5 example. The same M-function can handle n × 3 grids, by simply exchanging the left/right edge temperatures with the top/bottom ones in the arguments. The general m × n case is somewhat harder, but there are still only five nonzero diagonals in the coefficient matrix.
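The general case is not worked out in these notes, but the following sketch suggests one way it could go. The function name gridmn, its argument list, and its loop structure are illustrative assumptions, not part of the original example; it visits each interior point of an m × n sheet and sends each of the point's four neighbors either into A (if the neighbor is interior) or into b (if it lies on an edge), numbering the unknowns column by column as before.

function [A,b] = gridmn (Tlft, Trgt, Ttop, Tbot, m, n)
% Sketch of a generalization of grid.m to an m-by-n meter sheet.
% Tlft and Trgt hold the m-1 left and right edge temperatures;
% Ttop and Tbot hold the n-1 top and bottom edge temperatures.
N = (m-1)*(n-1);                    % number of interior grid points
A = zeros(N,N);
b = zeros(N,1);
idx = @(i,j) (j-2)*(m-1) + (i-1);   % interior point (i,j) -> unknown number
for j = 2:n
   for i = 2:m
      k = idx(i,j);
      if j == 2, b(k) = b(k) + .25*Tlft(i-1); else, A(k,idx(i,j-1)) = .25; end   % left neighbor
      if j == n, b(k) = b(k) + .25*Trgt(i-1); else, A(k,idx(i,j+1)) = .25; end   % right neighbor
      if i == 2, b(k) = b(k) + .25*Ttop(j-1); else, A(k,idx(i-1,j)) = .25; end   % neighbor above
      if i == m, b(k) = b(k) + .25*Tbot(j-1); else, A(k,idx(i+1,j)) = .25; end   % neighbor below
   end
end

With m = 3 and n = 5 this would rebuild the same A and b as grid.m above, since the unknowns are numbered in the same column-by-column order.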

Iterative solution. A different, iterative method of solving these equations is motivated by the physical process of temperature diffusion in the metal plate. We imagine that the edge temperatures are at their fixed values, but that the interior temperatures are initially at some values x^(0) that have not yet reached their steady state. The diffusion of heat then causes the interior temperatures to start changing. As a rough approximation, we can say that, after some small step of time, the temperature of each interior point is adjusted according to our averaging rule: it becomes the average of the temperatures at the point's four nearest neighbors. Because we are using the same averaging rule, we have the same matrix expression for the new temperatures, x^(1), in terms of the initial ones:

   x^(1) = A x^(0) + b.

After another small step of time, the heat diffuses some more, and we use the same rule to estimate the further interior temperature change as x^(2) = A x^(1) + b. Subsequent steps lead to temperatures x^(3) = A x^(2) + b, x^(4) = A x^(3) + b, and so forth. In general, the formula for an iteration of temperature change is

   x^(k+1) = A x^(k) + b,   k = 0, 1, 2, ...

The iterations could go on indefinitely, but if the temperatures behave as we intuitively expect, then they will eventually settle down to their steady-state values. As a practical matter, we stop when x^(k+1) appears to equal x^(k) to the accuracy we desire (a variant that stops in exactly this way is sketched after the listing below). Then, writing x for either of the last, equal iterates, we have approximately x = Ax + b, which is the system we wanted to solve.

A simple M-file griditer1.m suffices to carry out these iterations. After calling grid.m to set up A and b, and initializing x to a vector of all zeroes (for lack of a better choice), the temperature vector is updated and displayed a fixed number of times:

function x = griditer1 (A, b)
x = [0; 0; 0; 0; 0; 0; 0; 0];
for step = 1:50
   x = A * x + b;
   fprintf ('%3d\n', step);
   disp (reshape(x,2,4));
end
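Rather than always running 50 steps, the loop could stop as soon as successive iterates agree to the accuracy desired, as described above. The following variant is only a sketch; the name griditer1tol and the tolerance argument tol are assumptions for illustration, not part of the original notes:

function x = griditer1tol (A, b, tol)
% Iterate x = A*x + b until successive iterates agree to within tol.
x = zeros(8,1);
for step = 1:1000                     % safety cap on the number of iterations
   xnew = A * x + b;
   if max(abs(xnew - x)) < tol        % the iterates have essentially stopped changing
      x = xnew;
      fprintf ('converged after %3d iterations\n', step);
      return
   end
   x = xnew;
end

A call such as x = griditer1tol(A, b, 0.00005) asks for agreement out to roughly the fourth decimal place, the precision displayed by griditer1.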

The first several iterations of griditer1, and some later ones, look like this:

>> [A,b] = grid ([-4 -4], [8 7], [ ], [ ]);   % last two arguments: top and bottom edge temperatures from the figure
>> x = griditer1 (A, b);
   [iteration-by-iteration output not reproduced]

To the four decimal places displayed, there are no changes after iteration 28, at which point we have the same solution as before.

It might occur to an engineer that one need not take the trouble of generating A and b just to carry out such a simple series of iterations. A simplified M-file griditer2.m could take as input a 4 × 6 matrix G of grid points, both edge and interior, that have been set to their initial temperatures. It could then perform the iterations directly, modifying the values of the interior grid points by applying the simple averaging rule explicitly:

function G = griditer2 (G)
for step = 1:50
   for j = 2:5
      for i = 2:3
         G(i,j) = .25 * (G(i,j-1) + G(i-1,j) + G(i,j+1) + G(i+1,j));
      end
   end
   fprintf ('%3d\n', step);
   disp (G(2:3,2:5));
end

Then all that's necessary is to initialize G and run the M-file on it. The first several iterations are as follows:

>> G(1:4,1:6) = 0;
>> G(2:3,1) = [-4; -4];
>> G(2:3,6) = [ 8; 7];
>> G(1,2:5) = [ ];   % top edge temperatures from the figure
>> G(4,2:5) = [ ];   % bottom edge temperatures from the figure
>> G = griditer2 (G);
   [iteration-by-iteration output not reproduced]

Comparing with the previous version, we see that the iterates of the interior temperatures start off quite differently! They do eventually converge to the same values as before, however, and the number of iterations to convergence is seen to drop to 16, in comparison to the 28 we saw before.

How can two implementations of the same iterative scheme yield such different results? The explanation must be that they're not computing the iterates in precisely the same way. A perceptive engineer would notice that, in griditer1.m, we ask Matlab to compute the whole vector A*x + b from x and then to assign the whole result back to x. In griditer2.m, however, new values of G(i,j) are used immediately in the computations for other grid points

during the same iteration. The new value of G(2,2), for example, is used immediately in the next pass through the innermost loop as part of the computation of G(2,3).

The mixing of new and old values in the same iteration seems like a mistake, yet it clearly speeds up computation of the solution in our particular example. Further experiments would show, moreover, that the "mistaken" implementation is quite consistently reliable, and faster in the speed with which the interior temperatures approach their steady-state values. In fact, it is simply a special case of an alternative iterative solution method (the Gauss-Seidel method, as opposed to the Jacobi-style iteration of griditer1.m) whose reliability and speed have been proven mathematically. Other ideas have been found to speed the convergence even more. As a result, the iterative approach is attractive for solving large systems of very simple or regular equations.
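One well-known example of such a convergence-speeding idea is successive over-relaxation (SOR), which pushes each update a little beyond the plain average. The sketch below is not from the original notes; it simply modifies griditer2 with a hypothetical relaxation factor w, where w = 1 reproduces griditer2 and values between 1 and 2 typically converge faster on problems of this kind:

function G = griditer3 (G, w)
% Gauss-Seidel sweep with over-relaxation factor w (w = 1 gives griditer2).
for step = 1:50
   for j = 2:5
      for i = 2:3
         avg = .25 * (G(i,j-1) + G(i-1,j) + G(i,j+1) + G(i+1,j));
         G(i,j) = (1 - w) * G(i,j) + w * avg;   % step past the plain average when w > 1
      end
   end
   fprintf ('%3d\n', step);
   disp (G(2:3,2:5));
end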
