Simultaneous Linear Equations


"A farmer had 196 cows. When he rounded them up, he had 200 cows." (Reader's Digest)

The name of the person in the picture is
A. A$AP Rocky  B. Kid Cudi  C. MC Hammer  D. T.I.  E. Vanilla Ice

The size of the matrix is
A.  B.  C.  D.

The c_32 entity of the matrix is
A. 2  B. 3  C. 6.3  D. does not exist

Given [A] and [B], if [C] = [A] - [B], then c_23 =
A. -3  B. 3  C. 9

Given [A] and [B], if [C] = [A][B], then c_31 =
A. -57  B. -45  C. 57  D. does not exist

A square matrix [A] is upper triangular if
1.  2.  3.  4.

An identity matrix [I] needs to satisfy the following
A.  B.  C. matrix is square  D. all of the above

The following system of equations
x + y = 2
6x + 6y = 12
has _____ solution(s).
1. no  2. one  3. more than one but a finite number of  4. infinite
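One way to see why x + y = 2 and 6x + 6y = 12 have infinitely many solutions is to compare the rank of the coefficient matrix with the rank of the augmented matrix. The sketch below is not part of the original slides; it assumes NumPy is available:

```python
import numpy as np

# Coefficient matrix and right-hand side for x + y = 2, 6x + 6y = 12
A = np.array([[1.0, 1.0],
              [6.0, 6.0]])
b = np.array([2.0, 12.0])

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
n       = A.shape[1]

if rank_A < rank_Ab:
    print("no solution")
elif rank_A == n:
    print("exactly one solution")
else:
    print("infinitely many solutions")   # this branch is taken here
```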

PHYSICAL PROBLEMS

Truss problem

Pressure vessel problem

Polynomial regression: we are to fit the data to the polynomial regression model.

END

Simultaneous Linear Equations: Gaussian Elimination (Naïve and the Not-So-Innocent Also)

You know Lady Gaga; who is Shady Gaga?
A. Lady Gaga's sister!
B. A person who looks bad with their sunglasses on.
C. A person who looks good with sunglasses on but bad once he/she takes the sunglasses off.
D. That is what Alejandro calls Lady Gaga.
E. Slim Shady's sister!

The goal of the forward elimination steps in the Naïve Gauss elimination method is to reduce the coefficient matrix to a(n) _____ matrix.
A. diagonal  B. identity  C. lower triangular  D. upper triangular
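To make the forward-elimination question concrete, here is a minimal sketch of the Naïve Gauss elimination method in Python/NumPy: forward elimination reduces the coefficient matrix to an upper triangular matrix, and back substitution then recovers the unknowns. The example system is arbitrary, and the code assumes no pivot ever becomes zero, which is exactly the assumption the naïve method makes:

```python
import numpy as np

def naive_gauss(A, b):
    """Solve A x = b by Naive Gauss elimination (no pivoting)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    # Forward elimination: zero out entries below the diagonal, column by column
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]      # fails with division by zero if the pivot is 0
            A[i, k:] -= factor * A[k, k:]
            b[i]     -= factor * b[k]

    # Back substitution on the resulting upper triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Arbitrary 3x3 example system (illustrative only)
A = np.array([[ 25.0,  5.0, 1.0],
              [ 64.0,  8.0, 1.0],
              [144.0, 12.0, 1.0]])
b = np.array([106.8, 177.2, 279.2])
print(naive_gauss(A, b))   # should agree with np.linalg.solve(A, b)
```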

One of the pitfalls of the Naïve Gauss elimination method is
A. large truncation error
B. large round-off error
C. not being able to solve equations with a noninvertible coefficient matrix

Increasing the precision of numbers from single to double in the Naïve Gauss elimination method
A. avoids division by zero
B. decreases round-off error
C. allows equations with a noninvertible coefficient matrix to be solved

Division by zero during the forward elimination steps in Naïve Gaussian elimination for [A][X] = [C] implies the coefficient matrix [A]
1. is invertible
2. is not invertible
3. cannot be determined to be invertible or not

Division by zero during the forward elimination steps in Gaussian elimination with partial pivoting of the set of equations [A][X] = [C] implies the coefficient matrix [A]
1. is invertible
2. is not invertible
3. cannot be determined to be invertible or not
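For comparison with the two division-by-zero questions, this is a sketch of forward elimination with partial pivoting: before eliminating column k, the row with the largest-magnitude entry in that column (from row k down) is swapped into the pivot position, so a zero pivot can only occur when the entire remaining column is zero, which in exact arithmetic does signal a noninvertible [A]. Illustrative code, not from the original slides:

```python
import numpy as np

def forward_elimination_partial_pivoting(A, b):
    """Forward elimination with partial pivoting for A x = b.

    Returns the upper triangular A and the correspondingly updated b.
    Raises an error if the matrix is (numerically) singular.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    for k in range(n - 1):
        # Partial pivoting: pick the largest |A[i, k]| for i >= k as the pivot row
        p = k + np.argmax(np.abs(A[k:, k]))
        if A[p, k] == 0.0:
            raise ValueError("zero pivot even after pivoting: [A] is singular")
        if p != k:
            A[[k, p]] = A[[p, k]]   # swap rows k and p
            b[[k, p]] = b[[p, k]]

        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i]     -= factor * b[k]
    return A, b
```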

Using 3 significant digits with chopping at all stages, the result for the following calculation is
A. -0.0988  B. -0.0978  C. -0.0969  D. -0.0962

Using 3 significant digits with rounding off at all stages, the result for the following calculation is
A. -0.0988  B. -0.0978  C. -0.0969  D. -0.0962
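The expression being evaluated on these two slides did not survive transcription, but the distinction the questions test can be reproduced with a small helper: chopping simply drops the digits beyond the retained significant digits, while rounding adjusts the last retained digit. The function and the sample value below are hypothetical stand-ins, not the slide's actual calculation:

```python
import math

def to_sig(x, digits, mode="round"):
    """Keep `digits` significant digits of x, either rounding or chopping."""
    if x == 0.0:
        return 0.0
    exp = math.floor(math.log10(abs(x)))       # exponent of the leading digit
    scale = 10.0 ** (digits - 1 - exp)
    if mode == "chop":
        return math.trunc(x * scale) / scale   # drop the discarded digits outright
    return round(x * scale) / scale            # round the last kept digit

x = 0.9877                      # hypothetical intermediate value
print(to_sig(x, 3, "chop"))     # 0.987  (extra digits chopped off)
print(to_sig(x, 3, "round"))    # 0.988  (last kept digit rounded up)
```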

Determinants

If a multiple of one row of the n×n matrix [A] is added to or subtracted from another row of [A] to give [B], then det(A) = det(B).

The determinant of an upper triangular n×n matrix [A] is given by the product of its diagonal entries: det(A) = a_11 * a_22 * ... * a_nn.

Therefore, use forward elimination to transform the n×n matrix [A] to an upper triangular matrix [U]; then det(A) = u_11 * u_22 * ... * u_nn.

Simultaneous Linear Equations: LU Decomposition
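Before moving on to LU decomposition, here is a sketch of the determinant procedure from the slide just above: forward elimination uses only the row operation that leaves the determinant unchanged, so det(A) equals the product of the diagonal entries of the resulting upper triangular matrix [U]. A minimal sketch, assuming NumPy and an arbitrary example matrix:

```python
import numpy as np

def det_by_forward_elimination(A):
    """det(A) via naive forward elimination (no row swaps).

    Only "subtract a multiple of one row from another" is used, which leaves
    the determinant unchanged, so det(A) is the product of the diagonal of
    the resulting upper triangular matrix [U]. Assumes no zero pivots.
    """
    U = A.astype(float).copy()
    n = U.shape[0]
    for k in range(n - 1):
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    return np.prod(np.diag(U))

# Arbitrary example matrix (illustrative only)
A = np.array([[ 25.0,  5.0, 1.0],
              [ 64.0,  8.0, 1.0],
              [144.0, 12.0, 1.0]])
print(det_by_forward_elimination(A))   # compare with np.linalg.det(A)
```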

You thought you had parking problems: Frank Ocean is scared to park when _____ is around.
A. A$AP Rocky  B. Adele  C. Chris Brown  D. Hillary Clinton

Truss problem

If you have n equations and n unknowns, the computation time for forward substitution is approximately proportional to
A. 4n  B. 4n^2  C. 4n^3

If you have an n×n matrix, the computation time for decomposing the matrix into [L][U] is approximately proportional to
A. 8n/3  B. 8n^2/3  C. 8n^3/3

The LU decomposition method is computationally more efficient than Naïve Gauss elimination for solving
A. a single set of simultaneous linear equations
B. multiple sets of simultaneous linear equations with different coefficient matrices and the same right-hand side vector
C. multiple sets of simultaneous linear equations with the same coefficient matrix and different right-hand side vectors

For a given 1700 x 1700 matrix [A], assume that it takes about 16 seconds to find the inverse of [A] by the use of the [L][U] decomposition method. The approximate time, in seconds, that all the forward substitutions take out of the 16 seconds is
A. 4  B. 6  C. 8  D. 12
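Here is a sketch of the Doolittle-style [L][U] decomposition these questions refer to, together with the forward- and back-substitution steps. The decomposition costs on the order of n^3 operations, while each forward/back substitution pair costs only on the order of n^2, which is why reusing [L] and [U] pays off when the same coefficient matrix appears with many right-hand side vectors. Illustrative Python with an arbitrary example system, not from the slides:

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU decomposition without pivoting: A = L @ U, diag(L) = 1."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]       # multiplier stored in L
            U[i, k:] -= L[i, k] * U[k, k:]    # elimination recorded in U
    return L, U

def forward_substitution(L, b):
    """Solve L z = b (L lower triangular with unit diagonal)."""
    n = len(b)
    z = np.zeros(n)
    for i in range(n):
        z[i] = b[i] - L[i, :i] @ z[:i]
    return z

def back_substitution(U, z):
    """Solve U x = z (U upper triangular)."""
    n = len(z)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Arbitrary example system (illustrative only)
A = np.array([[ 25.0,  5.0, 1.0],
              [ 64.0,  8.0, 1.0],
              [144.0, 12.0, 1.0]])
L, U = lu_decompose(A)                          # done once, O(n^3)
for b in (np.array([106.8, 177.2, 279.2]),
          np.array([1.0, 0.0, 0.0])):           # reuse L, U per right-hand side, O(n^2)
    print(back_substitution(U, forward_substitution(L, b)))
```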

The following data are given for the velocity of a rocket as a function of time. To find the velocity at t = 21 s, you are asked to use a quadratic polynomial v(t) = at^2 + bt + c to approximate the velocity profile.

t (s)     0    14       15       20       30       35
v (m/s)   0    227.04   362.78   517.35   602.97   901.67

1.  2.  3.  4.

THE END
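Returning to the rocket-velocity question above: since the answer choices (the candidate systems of equations) were lost in transcription, the sketch below shows one plausible set-up. It assumes the three data points bracketing t = 21 s, namely t = 15, 20, 30 s, are used to determine a, b, and c; the original slide may have prescribed a different triple.

```python
import numpy as np

# Three data points bracketing t = 21 s (an assumed choice; the slide's
# answer options, which fixed the points, were lost in transcription)
t = np.array([15.0, 20.0, 30.0])
v = np.array([362.78, 517.35, 602.97])

# v(t) = a t^2 + b t + c  ->  one equation per data point
A = np.column_stack([t**2, t, np.ones_like(t)])
a, b, c = np.linalg.solve(A, v)

print(a, b, c)
print("v(21) approx.", a * 21**2 + b * 21 + c)
```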

4.09 Adequacy of Solutions

The well or ill conditioning of a system of equations [A][X] = [C] depends on the
A. coefficient matrix only
B. right-hand side vector only
C. number of unknowns
D. coefficient matrix and the right-hand side vector

The condition number of an n×n diagonal matrix [A] is
A.  B.  C.  D.

The adequacy of a simultaneous linear system of equations [A][X] = [C] depends on (choose the most appropriate answer)
A. the condition number of the coefficient matrix
B. the machine epsilon
C. the product of the condition number of the coefficient matrix and the machine epsilon
D. the norm of the coefficient matrix
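A short sketch connecting the last few questions: the condition number based on the row-sum (infinity) norm is ||A||_inf * ||A^(-1)||_inf, which for a diagonal matrix reduces to max|d_i| / min|d_i|, and a common rule of thumb (apparently the one the final question relies on) is that at least m significant digits can be trusted in the solution when Cond(A) * machine epsilon <= 0.5 * 10^(-m). Illustrative NumPy, not from the slides:

```python
import numpy as np

def cond_inf(A):
    """Condition number based on the row-sum (infinity) norm."""
    return np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)

# For a diagonal matrix the result is max|d_i| / min|d_i| = 200 here
A = np.diag([100.0, 2.0, 0.5])
print(cond_inf(A))

# Rule of thumb (as assumed above): at least m significant digits are trustworthy
# when Cond(A) * machine_epsilon <= 0.5 * 10**(-m)
eps = np.finfo(float).eps
m = int(np.floor(-np.log10(cond_inf(A) * eps / 0.5)))
print("about", m, "significant digits can be trusted")
```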

If _____, then in [A][X] = [C], at least this many significant digits are correct in your solution:
A. 0  B. 1  C. 2  D. 3

THE END