
Engineering Analysis ENG 3420 Fall 2009 Dan C. Marinescu Office: HEC 439 B Office hours: Tu-Th 11:00-12:00

Lecture 13

Last time: problem solving in preparation for the quiz; linear algebra concepts: vector spaces, linear independence, orthogonal vectors, bases, matrices.
Today: solving systems of linear equations (Chapter 9); graphical methods.
Next time: Gauss elimination.

Solving systems of linear equations

Matrices provide a concise notation for representing and solving simultaneous linear equations:

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

| a11 a12 a13 | | x1 |   | b1 |
| a21 a22 a23 | | x2 | = | b2 |
| a31 a32 a33 | | x3 |   | b3 |

[A]{x} = {b}

Solving systems of linear equations in MATLAB

Two ways to solve systems of linear algebraic equations [A]{x} = {b}:
Left division: x = A\b
Matrix inversion: x = inv(A)*b
Matrix inversion only works for square, non-singular systems; it is less efficient than left division.
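For instance (the values below are chosen only for illustration, not from the lecture), both approaches give the same answer on a small 3x3 system:

% an arbitrary 3x3 system used only to illustrate the two solution commands
A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];
b = [7.85; -19.3; 71.4];
x1 = A\b         % left division (preferred)
x2 = inv(A)*b    % matrix inversion (less efficient)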

Solving systems of linear equations graphically

For small sets of simultaneous equations, graphing the straight line that represents each equation and determining the location of their intersection provides a solution. There is no guarantee that one can find the solution of a system of linear equations this way:
(a) no solution exists;
(b) infinitely many solutions exist;
(c) the system is ill-conditioned.
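As an illustration (the two equations below are made up for this sketch), a pair of equations in two unknowns can be plotted in MATLAB and the solution read off the intersection:

% graphical solution of  3*x1 + 2*x2 = 18  and  -x1 + 2*x2 = 2
x1 = 0:0.1:8;
x2a = (18 - 3*x1)/2;          % first equation solved for x2
x2b = (2 + x1)/2;             % second equation solved for x2
plot(x1, x2a, x1, x2b), grid on
xlabel('x_1'), ylabel('x_2')
% the two lines intersect at x1 = 4, x2 = 3, which is the solution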

Determinant of the square matrix A = [aij]

        | a11      a12      ...  a1,n-1      a1,n   |
        | a21      a22      ...  a2,n-1      a2,n   |
|A| =   | ...      ...      ...  ...         ...    |
        | an-1,1   an-1,2   ...  an-1,n-1    an-1,n |
        | an,1     an,2     ...  an,n-1      an,n   |

det(A) = ai1*Ai1 + ai2*Ai2 + ... + ain*Ain

Here the coefficient Aij of aij is called the cofactor of aij. A cofactor is a polynomial in the remaining rows of A and can be described as the partial derivative of det(A) with respect to aij. The cofactor polynomial contains only entries from an (n-1) x (n-1) matrix Mij, called a minor, obtained from A by eliminating row i and column j; specifically, Aij = (-1)^(i+j) * det(Mij).

Determinants of several matrices

Determinants for 1x1, 2x2, and 3x3 matrices are:

1x1: det([a11]) = a11
2x2: det([a11 a12; a21 a22]) = a11*a22 - a12*a21
3x3: det([a11 a12 a13; a21 a22 a23; a31 a32 a33])
     = a11*det([a22 a23; a32 a33]) - a12*det([a21 a23; a31 a33]) + a13*det([a21 a22; a31 a32])

Determinants for square matrices larger than 3x3 are more complicated.
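The same cofactor expansion can be coded directly; the sketch below (the function name cofactorDet is ours, not part of the lecture material) expands along the first row and agrees with MATLAB's built-in det for small matrices:

function d = cofactorDet(A)
% cofactorDet: determinant by cofactor expansion along the first row
n = size(A,1);
if n == 1
    d = A(1,1);
else
    d = 0;
    for j = 1:n
        M = A(2:n, [1:j-1 j+1:n]);               % minor M1j: drop row 1, column j
        d = d + (-1)^(1+j)*A(1,j)*cofactorDet(M);
    end
end

For example, cofactorDet([1 2; 3 4]) returns -2, the same value as det([1 2; 3 4]).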

Properties of determinants

If we permute two rows of the square matrix A then the sign of the determinant det(A) changes.
The determinant of the transpose of a matrix A is equal to the determinant of the original matrix: det(A') = det(A).
If two rows of A are identical then |A| = 0.
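These properties are easy to verify numerically on an arbitrary matrix (the values below are illustrative only):

A = [1 2 3; 4 5 6; 7 8 10];
det(A')                         % equals det(A)
det(A([2 1 3], :))              % swapping rows 1 and 2 flips the sign of det(A)
det([1 2 3; 1 2 3; 7 8 10])     % two identical rows: determinant is 0 (up to round-off)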

Cramer's Rule

Consider the system of linear equations [A]{x} = {b}. Each unknown in a system of linear algebraic equations may be expressed as a fraction of two determinants: the denominator is D = det(A), and the numerator is obtained from D by replacing the column of coefficients of the unknown in question by the vector {b} of constants b1, b2, ..., bn.

Example of Cramer's Rule

Find x2 in the following system of equations:

0.3 x1 + 0.52 x2 +     x3 = -0.01
0.5 x1 +      x2 + 1.9 x3 =  0.67
0.1 x1 +  0.3 x2 + 0.5 x3 = -0.44

Find the determinant D:

D = det([0.3 0.52 1; 0.5 1 1.9; 0.1 0.3 0.5])
  = 0.3*det([1 1.9; 0.3 0.5]) - 0.52*det([0.5 1.9; 0.1 0.5]) + 1*det([0.5 1; 0.1 0.3])
  = -0.0022

Find the determinant D2 by replacing the second column of D with {b}:

D2 = det([0.3 -0.01 1; 0.5 0.67 1.9; 0.1 -0.44 0.5])
   = 0.3*det([0.67 1.9; -0.44 0.5]) - (-0.01)*det([0.5 1.9; 0.1 0.5]) + 1*det([0.5 0.67; 0.1 -0.44])
   = 0.0649

Divide:

x2 = D2/D = 0.0649 / (-0.0022) = -29.5
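A short MATLAB sketch of Cramer's rule (the helper name cramer is ours, not from the lecture) reproduces this example:

function x = cramer(A, b)
% cramer: solve A*x = b by Cramer's rule (practical only for small systems)
n = length(b);
D = det(A);
x = zeros(n,1);
for j = 1:n
    Aj = A;
    Aj(:,j) = b;                % replace column j with the right-hand side
    x(j) = det(Aj)/D;
end

Calling cramer([0.3 0.52 1; 0.5 1 1.9; 0.1 0.3 0.5], [-0.01; 0.67; -0.44]) gives x(2) of approximately -29.5.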

Gauss Elimination

Gauss elimination: a sequential process of removing unknowns from equations using forward elimination followed by back substitution.
Naïve Gauss elimination: the process does not check for potential problems resulting from division by zero.

Naïve Gauss Elimination (cont.)

Forward elimination: Starting with the first row, add or subtract multiples of that row to eliminate the first coefficient from the second row and beyond. Continue this process with the second row to remove the second coefficient from the third row and beyond. Stop when an upper triangular matrix remains.
Back substitution: Starting with the last row, solve for the unknown, then substitute that value into the next highest row. Because of the upper-triangular nature of the matrix, each row will contain only one more unknown.

function x = GaussNaive(A,b)
% GaussNaive: solve [A]{x} = {b} by naive Gauss elimination
ExA = [A b];                                   % augmented matrix
[m,n] = size(A);
q = length(b);
if (m ~= n)
    fprintf('Error: input matrix is not square; n = %3.0f, m = %3.0f \n', n, m);
end
if (n ~= q)
    fprintf('Error: vector b has a different dimension than n; q = %2.0f \n', q);
end
n1 = n + 1;
% forward elimination
for k = 1:n-1
    for i = k+1:n
        factor = ExA(i,k)/ExA(k,k);
        ExA(i,k:n1) = ExA(i,k:n1) - factor*ExA(k,k:n1);
    end
end
% back substitution
x = zeros(n,1);
x(n) = ExA(n,n1)/ExA(n,n);
for i = n-1:-1:1
    x(i) = (ExA(i,n1) - ExA(i,i+1:n)*x(i+1:n))/ExA(i,i);
end

>> C = [150 -100 0; -100 150 -50; 0 -50 50]
C =
   150  -100     0
  -100   150   -50
     0   -50    50
>> d = [588.6; 686.7; 784.8]
d =
  588.6000
  686.7000
  784.8000
>> x = GaussNaive(C,d)
x =
   41.2020
   55.9170
   71.6130

>> A = [1 1 1 0 0 0; 0 -1 0 1 -1 0; 0 0 -1 0 0 1; 0 0 0 0 1 -1; 0 10 -10 0 -15 -5; 5 -10 0 -20 0 0]
A =
     1     1     1     0     0     0
     0    -1     0     1    -1     0
     0     0    -1     0     0     1
     0     0     0     0     1    -1
     0    10   -10     0   -15    -5
     5   -10     0   -20     0     0
>> b = [0 0 0 0 0 200]
b =
     0     0     0     0     0   200
>> b = b'
b =
     0
     0
     0
     0
     0
   200
>> x = GaussNaive(A,b)
x =
   NaN
   NaN
   NaN
   NaN
   NaN
   NaN

Naive Gauss elimination fails on this system because a zero pivot appears during forward elimination; the division by zero produces NaN values that propagate through the solution.

>> x = A\b
x =
    6.1538
   -4.6154
   -1.5385
   -6.1538
   -1.5385
   -1.5385
>> x = inv(A)*b
x =
    6.1538
   -4.6154
   -1.5385
   -6.1538
   -1.5385
   -1.5385

Complexity of Gauss elimination

To solve an n x n system of linear equations by Gauss elimination we carry out the following numbers of operations:

Forward elimination:  2n^3/3 + O(n^2) flops
Back substitution:    n^2 + O(n) flops
Total:                2n^3/3 + O(n^2) flops

Flops: floating-point operations. Mflops/sec: number of floating-point operations executed by the processor per second.

Conclusions: As the system gets larger, the computation time increases greatly. Most of the effort is incurred in the elimination step.
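The cubic growth can be observed by timing the built-in solver on random systems of increasing size (the sizes below are arbitrary and the actual times depend on the machine):

for n = [200 400 800 1600]
    A = rand(n); b = rand(n,1);
    t = tic; x = A\b;
    fprintf('n = %5d   time = %.3f s\n', n, toc(t));
end
% doubling n should increase the solution time by roughly a factor of 8 (2^3)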

Pivoting

If a coefficient along the diagonal is 0 (problem: division by 0) or close to 0 (problem: round-off error), then Gauss elimination runs into trouble.
Partial pivoting: determine the coefficient with the largest absolute value in the column below the pivot element; the rows can then be switched so that this largest element becomes the pivot element.
Complete pivoting: the elements to the right of the pivot (the remaining columns) are also checked, and columns are switched as well as rows.

Partial Pivoting Program
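A minimal sketch of such a program, following the same structure as GaussNaive above (the name GaussPivot and its details are illustrative, not necessarily the original program):

function x = GaussPivot(A, b)
% GaussPivot: Gauss elimination with partial pivoting
[m,n] = size(A);
if m ~= n, error('input matrix is not square'); end
n1 = n + 1;
ExA = [A b(:)];                          % augmented matrix
for k = 1:n-1
    % partial pivoting: find the largest |element| in column k, rows k..n
    [~, p] = max(abs(ExA(k:n,k)));
    p = p + k - 1;
    if p ~= k
        ExA([k p], :) = ExA([p k], :);   % swap rows k and p
    end
    for i = k+1:n
        factor = ExA(i,k)/ExA(k,k);
        ExA(i,k:n1) = ExA(i,k:n1) - factor*ExA(k,k:n1);
    end
end
x = zeros(n,1);                          % back substitution
x(n) = ExA(n,n1)/ExA(n,n);
for i = n-1:-1:1
    x(i) = (ExA(i,n1) - ExA(i,i+1:n)*x(i+1:n))/ExA(i,i);
end

With pivoting, GaussPivot(A,b) applied to the circuit example above should return the same solution as A\b instead of NaN.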

Tridiagonal systems of linear equations

A tridiagonal system of linear equations is a banded system with a bandwidth of 3:

| f1   g1                        | | x1   |   | r1   |
| e2   f2   g2                   | | x2   |   | r2   |
|      e3   f3   g3              | | x3   | = | r3   |
|           ...  ...   ...       | | ...  |   | ...  |
|           en-1 fn-1  gn-1      | | xn-1 |   | rn-1 |
|                en    fn        | | xn   |   | rn   |

Such a system can be solved using the same method as Gauss elimination, but with much less effort because most of the matrix elements are already 0.

Tridiagonal system solver
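A standard Thomas-algorithm sketch for the tridiagonal system above (vectors e, f, g, r as defined on the previous slide; the function name Tridiag is illustrative):

function x = Tridiag(e, f, g, r)
% Tridiag: solve a tridiagonal system
%   e - subdiagonal (e(1) unused), f - diagonal,
%   g - superdiagonal (g(n) unused), r - right-hand side
n = length(f);
for k = 2:n                       % forward elimination
    factor = e(k)/f(k-1);
    f(k) = f(k) - factor*g(k-1);
    r(k) = r(k) - factor*r(k-1);
end
x = zeros(n,1);                   % back substitution
x(n) = r(n)/f(n);
for k = n-1:-1:1
    x(k) = (r(k) - g(k)*x(k+1))/f(k);
end

Because only the three bands are touched, the cost is O(n) flops rather than the O(n^3) of full Gauss elimination.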