Errors for Linear Systems
- Gervais Spencer
When we solve a linear system $Ax = b$ we often do not know $A$ and $b$ exactly, but have only approximations $\hat A$ and $\hat b$ available. Then the best thing we can do is to solve $\hat A \hat x = \hat b$ exactly, which gives a different solution vector $\hat x$. We would like to know how the errors of $\hat A$ and $\hat b$ influence the error in $\hat x$.

Example: Consider the linear system $Ax = b$ with

$$A = \begin{pmatrix} 1.01 & 0.99 \\ 0.99 & 1.01 \end{pmatrix}, \qquad b = \begin{pmatrix} 2 \\ 2 \end{pmatrix}.$$

We can easily see that the solution is $x = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Now let us use the slightly different right hand side vector $\hat b = \begin{pmatrix} 2.02 \\ 1.98 \end{pmatrix}$ and solve the linear system $A\hat x = \hat b$. This gives the solution vector $\hat x = \begin{pmatrix} 2 \\ 0 \end{pmatrix}$. In this case a small change in the right hand side vector has caused a large change in the solution vector.

Vector norms

In order to measure errors in vectors by a single number we use a so-called vector norm. A vector norm measures the size of a vector $x \in \mathbb{R}^n$ by a nonnegative number $\|x\|$ and has the following properties:

$$\|x\| = 0 \iff x = 0, \qquad \|\alpha x\| = |\alpha|\,\|x\|, \qquad \|x + y\| \le \|x\| + \|y\|$$

for any $x, y \in \mathbb{R}^n$, $\alpha \in \mathbb{R}$. There are many possible vector norms. We will use the three norms $\|\cdot\|_1$, $\|\cdot\|_2$, $\|\cdot\|_\infty$ defined by

$$\|x\|_1 = |x_1| + \cdots + |x_n|, \qquad \|x\|_2 = \left(|x_1|^2 + \cdots + |x_n|^2\right)^{1/2}, \qquad \|x\|_\infty = \max\{|x_1|, \ldots, |x_n|\}.$$

If we write $\|\cdot\|$ in an equation without any subscript, then the equation is valid for all three norms (using the same norm everywhere). If the exact vector is $x$ and the approximation is $\hat x$, we can define the relative error with respect to a vector norm as $\|\hat x - x\| / \|x\|$.

Example: Note that in the above example we have $\|\hat b - b\|_\infty / \|b\|_\infty = 0.01$, but $\|\hat x - x\|_\infty / \|x\|_\infty = 1$. That means that the relative error of the solution is 100 times as large as the relative error in the given data, i.e., the condition number of the problem is at least 100.

Matrix norms

A matrix norm measures the size of a matrix $A \in \mathbb{R}^{n \times n}$ by a nonnegative number $\|A\|$. We would like to have the property

$$\|Ax\| \le \|A\|\,\|x\| \quad \text{for all } x \in \mathbb{R}^n \qquad (1)$$

where $\|\cdot\|$ is one of the above vector norms $\|\cdot\|_1$, $\|\cdot\|_2$, $\|\cdot\|_\infty$. We define $\|A\|$ as the smallest number satisfying (1):

$$\|A\| := \sup_{x \in \mathbb{R}^n,\; x \ne 0} \frac{\|Ax\|}{\|x\|} = \max_{x \in \mathbb{R}^n,\; \|x\| = 1} \|Ax\|.$$

By using the $\|\cdot\|_1$, $\|\cdot\|_2$, $\|\cdot\|_\infty$ vector norm in this definition we obtain the matrix norms $\|A\|_1$, $\|A\|_2$, $\|A\|_\infty$ (which are in general different numbers). It turns out that $\|A\|_1$ and $\|A\|_\infty$ are easy to compute, as the following theorem shows.
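To make the example concrete, here is a quick pure-Python check of the numbers above (a sketch only: the helper `solve2` solves the tiny 2×2 system by Cramer's rule, which is fine at this size but not how one solves larger systems):

```python
# Pure-Python check of the 2x2 example: a 1% change in b
# produces a 100% change in the solution x.

def norm_inf(v):
    """Infinity norm: largest absolute entry."""
    return max(abs(t) for t in v)

def solve2(A, b):
    """Solve a 2x2 system by Cramer's rule (fine for this tiny example)."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return [(b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - b[0] * a21) / det]

A = [[1.01, 0.99], [0.99, 1.01]]
b = [2.0, 2.0]
b_hat = [2.02, 1.98]

x = solve2(A, b)          # approximately [1.0, 1.0]
x_hat = solve2(A, b_hat)  # approximately [2.0, 0.0]

rel_b = norm_inf([b_hat[i] - b[i] for i in range(2)]) / norm_inf(b)
rel_x = norm_inf([x_hat[i] - x[i] for i in range(2)]) / norm_inf(x)
print(rel_b, rel_x)  # relative data error about 0.01, relative solution error about 1
```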
Theorem:

$$\|A\|_\infty = \max_{i=1,\ldots,n} \sum_{j=1}^n |a_{ij}| \quad \text{(maximum of row sums of absolute values)}$$

$$\|A\|_1 = \max_{j=1,\ldots,n} \sum_{i=1}^n |a_{ij}| \quad \text{(maximum of column sums of absolute values)}$$

Proof: For the infinity norm we have

$$\|Ax\|_\infty = \max_i \Big| \sum_j a_{ij} x_j \Big| \le \max_i \sum_j |a_{ij}|\,|x_j| \le \Big( \max_i \sum_j |a_{ij}| \Big) \|x\|_\infty,$$

implying $\|A\|_\infty \le \max_i \sum_j |a_{ij}|$. Let $i^*$ be the index where the maximum occurs and define $x_j = \operatorname{sign} a_{i^* j}$; then $\|x\|_\infty = 1$ and $\|Ax\|_\infty = \max_i \sum_j |a_{ij}|$.

For the 1-norm we have

$$\|Ax\|_1 = \sum_i \Big| \sum_j a_{ij} x_j \Big| \le \sum_j \Big( \sum_i |a_{ij}| \Big) |x_j| \le \Big( \max_j \sum_i |a_{ij}| \Big) \|x\|_1,$$

implying $\|A\|_1 \le \max_j \sum_i |a_{ij}|$. Let $j^*$ be the index where the maximum occurs and define $x_{j^*} = 1$ and $x_j = 0$ for $j \ne j^*$; then $\|x\|_1 = 1$ and $\|Ax\|_1 = \max_j \sum_i |a_{ij}|$.

We will not use $\|A\|_2$ since it is more complicated to compute (it involves eigenvalues).

Note that for $A, B \in \mathbb{R}^{n \times n}$ we have $\|AB\| \le \|A\|\,\|B\|$ since $\|ABx\| \le \|A\|\,\|Bx\| \le \|A\|\,\|B\|\,\|x\|$.

The following results about matrix norms will be useful later:

Lemma 1: Let $A \in \mathbb{R}^{n \times n}$ be nonsingular and $E \in \mathbb{R}^{n \times n}$. Then $\|E\| < \dfrac{1}{\|A^{-1}\|}$ implies that $A + E$ is nonsingular.

Proof: Assume that $A + E$ is singular. Then there exists a nonzero $x \in \mathbb{R}^n$ such that $(A + E)x = 0$ and hence

$$\|x\| = \|A^{-1}Ax\| = \|A^{-1}Ex\| \le \|A^{-1}\|\,\|E\|\,\|x\|.$$

As $\|x\| > 0$ we obtain $1 \le \|A^{-1}\|\,\|E\|$, i.e., $\|E\| \ge 1/\|A^{-1}\|$, a contradiction.

Lemma 2: For given vectors $x, y \in \mathbb{R}^n$ with $x \ne 0$ there exists a matrix $E \in \mathbb{R}^{n \times n}$ with $Ex = y$ and $\|E\| = \dfrac{\|y\|}{\|x\|}$.

Proof: For the infinity-norm we have $|x_{i^*}| = \|x\|_\infty$ for some $i^*$. Let $a \in \mathbb{R}^n$ be the vector with $a_{i^*} = 1/x_{i^*}$, $a_k = 0$ for $k \ne i^*$, and let $E = y a^\top$. Then $a^\top x = 1$ implies $Ex = y$, and $\|(y a^\top) v\|_\infty = |a^\top v|\,\|y\|_\infty$ with $|a^\top v| \le \|v\|_\infty / \|x\|_\infty$ implies $\|E\|_\infty = \|y\|_\infty / \|x\|_\infty$. For the 1-norm we use $a_i = \operatorname{sign}(x_i)/\|x\|_1$ since $a^\top x = 1$ and $|a^\top v| \le \|v\|_1 / \|x\|_1$. For the 2-norm we use $a = x / \|x\|_2^2$ since $a^\top x = 1$ and $|a^\top v| \le \|a\|_2 \|v\|_2 = \|v\|_2 / \|x\|_2$.

Condition numbers

Let $x$ denote the solution vector of the linear system $Ax = b$. If we choose a slightly different right hand side vector $\hat b$ then we obtain a different solution vector $\hat x$ satisfying $A\hat x = \hat b$. We want to know how the relative error $\|\hat b - b\| / \|b\|$ influences the relative error $\|\hat x - x\| / \|x\|$ (error propagation). We have $A(\hat x - x) = \hat b - b$ and therefore

$$\hat x - x = A^{-1}(\hat b - b), \qquad \|\hat x - x\| \le \|A^{-1}\|\,\|\hat b - b\|.$$
On the other hand we have $\|b\| = \|Ax\| \le \|A\|\,\|x\|$, i.e., $\dfrac{1}{\|x\|} \le \dfrac{\|A\|}{\|b\|}$. Combining this we obtain

$$\frac{\|\hat x - x\|}{\|x\|} \le \|A\|\,\|A^{-1}\|\, \frac{\|\hat b - b\|}{\|b\|}.$$

The number $\operatorname{cond}(A) := \|A\|\,\|A^{-1}\|$ is called the condition number of the matrix $A$. It determines how much the relative error of the right hand side vector can be amplified. The condition number depends on the choice of the matrix norm: in general $\operatorname{cond}_1(A) := \|A\|_1 \|A^{-1}\|_1$ and $\operatorname{cond}_\infty(A) := \|A\|_\infty \|A^{-1}\|_\infty$ are different numbers.

Example: In the above example we have $A^{-1} = \begin{pmatrix} 25.25 & -24.75 \\ -24.75 & 25.25 \end{pmatrix}$, so

$$\operatorname{cond}_\infty(A) = \|A\|_\infty \|A^{-1}\|_\infty = 2 \cdot 50 = 100$$

and therefore $\dfrac{\|\hat x - x\|_\infty}{\|x\|_\infty} \le 100\, \dfrac{\|\hat b - b\|_\infty}{\|b\|_\infty}$, which is consistent with our results above ($b$ and $\hat b$ were chosen so that the worst possible error magnification occurs).

The fact that the matrix $A$ in our example has a large condition number is related to the fact that $A$ is close to the singular matrix $B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$. The following result shows that $\operatorname{cond}(A)$ indicates how close $A$ is to a singular matrix:

Theorem:

$$\min_{B \in \mathbb{R}^{n \times n},\; B \text{ singular}} \frac{\|A - B\|}{\|A\|} = \frac{1}{\operatorname{cond}(A)}$$

Proof: (1) Lemma 1 shows: $B$ singular implies $\|A - B\| \ge 1/\|A^{-1}\|$, hence $\dfrac{\|A - B\|}{\|A\|} \ge \dfrac{1}{\operatorname{cond}(A)}$. (2) By the definition of $\|A^{-1}\|$ there exist $x, y \in \mathbb{R}^n$ such that $x = A^{-1} y$ and $\|x\| = \|A^{-1}\|\,\|y\|$. By Lemma 2 there exists a matrix $E \in \mathbb{R}^{n \times n}$ such that $Ex = -y$ and $\|E\| = \dfrac{\|y\|}{\|x\|} = \dfrac{1}{\|A^{-1}\|}$. Then $B := A + E$ satisfies $Bx = Ax + Ex = y - y = 0$, hence $B$ is singular, and $\|A - B\| = \|E\| = \dfrac{1}{\|A^{-1}\|}$.

Example: The matrix $A = \begin{pmatrix} 1.01 & 0.99 \\ 0.99 & 1.01 \end{pmatrix}$ is close to the singular matrix $B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$, with $\dfrac{\|A - B\|_\infty}{\|A\|_\infty} = \dfrac{0.02}{2} = 0.01$. By the theorem we have $0.01 \ge \dfrac{1}{\operatorname{cond}_\infty(A)}$, or $\operatorname{cond}_\infty(A) \ge 100$. As we saw above we have $\operatorname{cond}_\infty(A) = 100$, i.e., the matrix $B$ is really the closest singular matrix to the matrix $A$.

When we solve a linear system $Ax = b$ we have to store the entries of $A$ and $b$ in the computer, yielding a matrix $\hat A$ with rounded entries $\hat a_{ij} = \operatorname{fl}(a_{ij})$ and a rounded right hand side vector $\hat b$. If the original matrix $A$ is singular then the linear system has no solution or infinitely many solutions, so that any computed solution is meaningless. How can we recognize this on a computer? Note that the matrix $\hat A$ which the computer uses may no longer be singular. Answer: We should compute (or at least estimate) $\operatorname{cond}(\hat A)$. If $\operatorname{cond}(\hat A) < \dfrac{1 - \varepsilon_M}{\varepsilon_M}$ then we can guarantee that any matrix $A$ which is rounded to $\hat A$ must be nonsingular: $|\hat a_{ij} - a_{ij}| \le \varepsilon_M |a_{ij}|$ implies $\|\hat A - A\| \le \varepsilon_M \|A\|$ for the infinity- or 1-norm. Therefore $\|\hat A - A\| \le \varepsilon_M (\|\hat A\| + \|\hat A - A\|)$, i.e.,

$$\frac{\|\hat A - A\|}{\|\hat A\|} \le \frac{\varepsilon_M}{1 - \varepsilon_M},$$

and $\operatorname{cond}(\hat A) < \dfrac{1 - \varepsilon_M}{\varepsilon_M}$ implies $\dfrac{\|\hat A - A\|}{\|\hat A\|} < \dfrac{1}{\operatorname{cond}(\hat A)}$. Hence the matrix $A$ must be nonsingular by the theorem.

Now we assume that we perturb both the right hand side vector $b$ and the matrix $A$:
Theorem: Assume $Ax = b$ and $\hat A \hat x = \hat b$. If $A$ is nonsingular and $\operatorname{cond}(A)\, \dfrac{\|\hat A - A\|}{\|A\|} < 1$, there holds

$$\frac{\|\hat x - x\|}{\|x\|} \le \frac{\operatorname{cond}(A)}{1 - \operatorname{cond}(A)\, \dfrac{\|\hat A - A\|}{\|A\|}} \left( \frac{\|\hat b - b\|}{\|b\|} + \frac{\|\hat A - A\|}{\|A\|} \right).$$

Proof: Let $E = \hat A - A$, hence $A \hat x = \hat b - E \hat x$. Subtracting $Ax = b$ gives $A(\hat x - x) = (\hat b - b) - E \hat x$ and therefore

$$\|\hat x - x\| \le \|A^{-1}\| \left( \|\hat b - b\| + \|E\|\,\|\hat x\| \right).$$

Dividing by $\|x\|$ and using $1/\|x\| \le \|A\|/\|b\|$ gives

$$\frac{\|\hat x - x\|}{\|x\|} \le \operatorname{cond}(A) \left( \frac{\|\hat b - b\|}{\|b\|} + \frac{\|E\|}{\|A\|} \cdot \frac{\|\hat x\|}{\|x\|} \right).$$

Now we have $\dfrac{\|\hat x\|}{\|x\|} \le \dfrac{\|x\| + \|\hat x - x\|}{\|x\|} = 1 + \dfrac{\|\hat x - x\|}{\|x\|}$. By putting the $\dfrac{\|\hat x - x\|}{\|x\|}$ terms on the left hand side and solving for $\dfrac{\|\hat x - x\|}{\|x\|}$ we obtain the assertion.

If $\operatorname{cond}(A)\, \dfrac{\|\hat A - A\|}{\|A\|} \ll 1$ we have that both the relative error in the right hand side vector and in the matrix are magnified by about $\operatorname{cond}(A)$. If $\dfrac{\|\hat A - A\|}{\|A\|} \ge \dfrac{1}{\operatorname{cond}(A)}$, then by the theorem about the distance to the nearest singular matrix the matrix $\hat A$ may actually be singular, so that the solution $\hat x$ is no longer well defined.

Computing the condition number

We have seen that the condition number is very useful: it tells us what accuracy we can expect for the solution, and how close our matrix is to a singular matrix. In order to compute the condition number we have to find $\|A^{-1}\|$. Computing $A^{-1}$ takes $n^3 + O(n^2)$ operations, compared with $\frac{n^3}{3} + O(n^2)$ operations for the LU decomposition. Therefore the computation of the condition number would make the solution of a linear system about 3 times as expensive. For large problems this is not reasonable. However, we do not need to compute the condition number with full machine accuracy; just knowing the order of magnitude is sufficient.

Assume that we pick a vector $c$ and solve the linear system $Az = c$. Then $z = A^{-1}c$ and $\|z\| \le \|A^{-1}\|\,\|c\|$, or

$$\|A^{-1}\| \ge \frac{\|z\|}{\|c\|}.$$

This gives us a lower bound for $\|A^{-1}\|$, and the cost of this operation is only $n^2 + O(n)$ if the LU decomposition is known. The trick is to pick $c$ such that $\|z\|/\|c\|$ becomes as large as possible, so that the lower bound is close to $\|A^{-1}\|$. There are a number of heuristic methods available which achieve fairly good lower bounds: (i) pick $c = (\pm 1, \ldots, \pm 1)$ and choose the signs so that the forward substitution gives a large vector; (ii) picking the new right hand side $c := z/\|z\|$ and solving the system again often improves the lower bound. The Matlab functions condest(A) and 1/rcond(A) use similar ideas to give lower bounds for $\operatorname{cond}_1(A)$. Typically they give an estimated condition number $c$ with $c \le \operatorname{cond}_1(A) \le 3c$, and they require the solution of 2 or 3 linear systems, which costs $O(n^2)$ operations if the LU decomposition is known.
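As an illustration, here is a pure-Python sketch for the 2×2 example: the exact $\operatorname{cond}_\infty(A)$ via the explicit inverse (affordable only because the matrix is tiny; one would never form the inverse in practice), and the cheap lower bound obtained from solves $Az = c$ with sign vectors $c$:

```python
# Condition number of the 2x2 example, and the cheap lower bound for
# ||A^{-1}||_inf obtained from solves A z = c (pure-Python sketch).

def norm_inf(v):
    return max(abs(t) for t in v)

def norm_inf_mat(A):
    """||A||_inf = maximum absolute row sum (theorem above)."""
    return max(sum(abs(a) for a in row) for row in A)

def inv2(A):
    """Explicit inverse of a 2x2 matrix (only for this tiny illustration)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def solve2(A, c):
    Ainv = inv2(A)
    return [Ainv[0][0] * c[0] + Ainv[0][1] * c[1],
            Ainv[1][0] * c[0] + Ainv[1][1] * c[1]]

A = [[1.01, 0.99], [0.99, 1.01]]

# Exact condition number: cond_inf(A) = ||A||_inf * ||A^{-1}||_inf
cond_inf = norm_inf_mat(A) * norm_inf_mat(inv2(A))
print(cond_inf)  # approximately 100

# Lower bound: try sign vectors c = (+-1, +-1) and keep the best ratio ||z||/||c||.
best = max(norm_inf(solve2(A, c)) / norm_inf(c)
           for c in ([1, 1], [1, -1], [-1, 1], [-1, -1]))
print(norm_inf_mat(A) * best)  # lower bound for cond_inf(A); here it reaches about 100
```

For this matrix the sign vector $c = (1, -1)$ already attains $\|z\|_\infty/\|c\|_\infty = 50 = \|A^{-1}\|_\infty$, so the heuristic lower bound is sharp.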
(However, the Matlab commands condest and rcond only use the matrix $A$ as an input value, so they have to compute the LU decomposition of $A$ first and need $\frac{n^3}{3} + O(n^2)$ operations.)

Computation in machine arithmetic and residuals

When we run Gaussian elimination on a computer each single operation causes some roundoff error, and instead of the exact solution $x$ of a linear system we only get an approximation $\hat x$. As explained above we should select the pivot candidate with the largest absolute value to avoid unnecessary subtractive cancellation, and this usually gives a numerically stable algorithm. However, there is no theorem which guarantees this for partial pivoting (row interchanges). (For full pivoting with row and column interchanges some theoretical results exist. However, this algorithm is more expensive, and for all practical examples partial pivoting seems to work fine.)
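For reference, the algorithm discussed here can be sketched in a few lines of pure Python; this is a minimal Gaussian elimination with partial pivoting (choosing the pivot candidate of largest absolute value), not an optimized LU routine:

```python
# Minimal Gaussian elimination with partial pivoting (pure-Python sketch).
# At each step the pivot candidate with the largest absolute value is
# chosen, as recommended above; returns the solution of A x = b.

def solve_gepp(A, b):
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n - 1):
        # partial pivoting: pick the row with the largest |a_ik|, i >= k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # eliminate below the pivot
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = solve_gepp([[1.01, 0.99], [0.99, 1.01]], [2.0, 2.0])
print(x)  # approximately [1.0, 1.0]
```

Note that the row interchange also protects against a zero pivot: for $A = \begin{pmatrix} 0 & 2 \\ 1 & 1 \end{pmatrix}$ the first elimination step would otherwise divide by zero.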
Question 1: How much error do we have to accept for $\hat x$? This is the unavoidable error which occurs even for an ideal algorithm where we only round the input values and the output value to machine accuracy, and use infinite accuracy for all computations. When we want to solve $Ax = b$ we have to store the entries of $A$, $b$ in the computer, yielding a matrix $\hat A$ and a right hand side vector $\hat b$ of machine numbers so that

$$\frac{\|\hat A - A\|}{\|A\|} \le \varepsilon_M, \qquad \frac{\|\hat b - b\|}{\|b\|} \le \varepsilon_M.$$

An ideal algorithm would then try to solve this linear system exactly, i.e., compute a vector $\hat x$ such that $\hat A \hat x = \hat b$. Then we have

$$\frac{\|\hat x - x\|}{\|x\|} \lesssim \operatorname{cond}(A)\,(\varepsilon_M + \varepsilon_M) = 2 \operatorname{cond}(A)\, \varepsilon_M \qquad \text{if } \operatorname{cond}(A) \ll 1/\varepsilon_M.$$

Therefore the unavoidable error is about $2 \operatorname{cond}(A)\, \varepsilon_M$.

Question 2: After we have computed $\hat x$, how can we check how good our computation was? The obvious thing is to compute $\hat b := A \hat x$ and to compare it with $b$. The difference $r = \hat b - b$ is called the residual. As $Ax = b$ and $A\hat x = \hat b$ we have

$$\frac{\|\hat x - x\|}{\|x\|} \le \operatorname{cond}(A)\, \frac{\|\hat b - b\|}{\|b\|},$$

where $\|\hat b - b\| / \|b\|$ is called the relative residual. We can compute (or at least estimate) $\operatorname{cond}(A)$, and therefore can obtain an upper bound for the error $\|\hat x - x\| / \|x\|$. Actually, we can obtain a slightly better estimate by using $\hat x - x = A^{-1}(\hat b - b)$:

$$\frac{\|\hat x - x\|}{\|\hat x\|} \le \frac{\|A^{-1}\|\,\|\hat b - b\|}{\|\hat x\|} = \operatorname{cond}(A)\, \frac{\|\hat b - b\|}{\|A\|\,\|\hat x\|} = \operatorname{cond}(A)\, \rho$$

with the weighted residual $\rho := \dfrac{\|\hat b - b\|}{\|A\|\,\|\hat x\|}$. Note that $\|\hat x - x\| / \|\hat x\| \le \delta$ implies for $\delta < 1$

$$\|\hat x - x\| \le \delta \|\hat x\| \le \delta \left( \|x\| + \|\hat x - x\| \right), \qquad \frac{\|\hat x - x\|}{\|x\|} \le \frac{\delta}{1 - \delta},$$

which is the same as $\delta$ up to higher order terms $O(\delta^2)$.

If $\|\hat b - b\| / \|b\|$ is not much larger than $\varepsilon_M$ then the computation was numerically stable: just perturbing the input slightly from $b$ to $\hat b$ and then doing everything else exactly would give the same result $\hat x$. But it can happen that the relative residual is much larger than $\varepsilon_M$, and yet the computation is numerically stable. We obtain a better way to measure numerical stability by considering perturbations of the matrix $A$: Assume we have a computed solution $\hat x$. If we can find a slightly perturbed matrix $\tilde A$ such that

$$\frac{\|\tilde A - A\|}{\|A\|} \le \varepsilon, \qquad \tilde A \hat x = b \qquad (2)$$

where $\varepsilon$ is not much larger than $\varepsilon_M$, then the computation is numerically stable: just perturbing the matrix within the roundoff error and then doing everything exactly gives the same result as our computation. How can we check whether such a matrix $\tilde A$ exists?
We again use the weighted residual $\rho := \dfrac{\|\hat b - b\|}{\|A\|\,\|\hat x\|}$ where $\hat b := A \hat x$. Then:

1. If $\hat x$ is the solution of a slightly perturbed problem (2), we have $\rho \le \varepsilon$.
2. If $\rho \le \varepsilon$ then $\hat x$ is the solution of a slightly perturbed problem (2).

Proof: 1. Let $E = \tilde A - A$. Then $(A + E)\hat x = b$, or $\hat b - b = -E \hat x$, yielding $\|\hat b - b\| \le \|E\|\,\|\hat x\|$, so

$$\rho = \frac{\|\hat b - b\|}{\|A\|\,\|\hat x\|} \le \frac{\|E\|}{\|A\|} \le \varepsilon.$$

2. Let $y := b - \hat b$. Using Lemma 2 we get a matrix $E$ with $E\hat x = y$ and $\|E\| = \dfrac{\|y\|}{\|\hat x\|}$. Then $\tilde A := A + E$ satisfies $\tilde A \hat x = (A + E)\hat x = \hat b + (b - \hat b) = b$ and

$$\frac{\|E\|}{\|A\|} = \frac{\|b - \hat b\|}{\|A\|\,\|\hat x\|} = \rho \le \varepsilon.$$

Summary

Recommended method for solving linear systems on a computer:

1. Given $A$ find $L$, $U$, $p$ using Gaussian elimination with pivoting, choosing the pivot candidate with the largest absolute value.
2. Solve $Ly = \tilde b$ (where $\tilde b_i = b_{p_i}$) by forward substitution and $Ux = y$ by back substitution.

- Do not compute the inverse matrix $A^{-1}$. This takes about 3 times as long as computing the LU decomposition.
- The condition number $\operatorname{cond}(A) = \|A\|\,\|A^{-1}\|$ characterizes the sensitivity of the linear system: if $Ax = b$ and $A\hat x = \hat b$ we have $\dfrac{\|\hat x - x\|}{\|x\|} \le \operatorname{cond}(A)\, \dfrac{\|\hat b - b\|}{\|b\|}$.
- The unavoidable error due to the rounding of $A$ and $b$ is approximately $2 \operatorname{cond}(A)\, \varepsilon_M$.
- If $\operatorname{cond}(\hat A) \gtrsim 1/\varepsilon_M$ then the matrix $\hat A$ could be the machine representation of a singular matrix $A$, and the computed solution is usually meaningless.
- You should compute an approximation to the condition number $\operatorname{cond}(A) = \|A\|\,\|A^{-1}\|$. Here $\|A^{-1}\|$ can be approximated by solving a few linear systems with the existing LU decomposition (condest in Matlab).
- In order to check the accuracy of a computed solution $\hat x$, compute the residual $r := A\hat x - b$ and the weighted residual $\rho := \dfrac{\|r\|}{\|A\|\,\|\hat x\|}$:
  - we get an error bound $\dfrac{\|\hat x - x\|}{\|\hat x\|} \le \operatorname{cond}(A)\, \rho$;
  - we should have that $\rho$ is not much larger than $\varepsilon_M$; otherwise the computation was not numerically stable: the error is much larger than the errors resulting from rounding $A$ and $b$.
- Gaussian elimination with the pivoting strategy of choosing the largest absolute value is in almost all cases numerically stable. We can check this by computing the weighted residual $\rho$. If $\rho$ is much larger than $\varepsilon_M$ we can obtain a better result by iterative improvement: let $r := A\hat x - b$ and solve $Ae = r$ using the existing LU decomposition. Then let $\hat x_{\text{new}} := \hat x - e$.
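The residual check and the iterative improvement step from the summary can be sketched in pure Python. Here a direct 2×2 solve stands in for the solves one would do with the existing LU decomposition, and $\hat x$ is a deliberately perturbed approximation rather than the output of a real solver:

```python
# Weighted residual rho = ||A xh - b||_inf / (||A||_inf ||xh||_inf) and one
# step of iterative improvement: solve A e = r, then update xh := xh - e.
# The direct 2x2 solve stands in for solves with an existing LU decomposition.

def norm_inf(v):
    return max(abs(t) for t in v)

def norm_inf_mat(A):
    return max(sum(abs(a) for a in row) for row in A)

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def solve2(A, c):
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return [(c[0] * a22 - a12 * c[1]) / det,
            (a11 * c[1] - c[0] * a21) / det]

A = [[1.01, 0.99], [0.99, 1.01]]
b = [2.0, 2.0]
x_hat = [1.001, 0.999]   # a deliberately perturbed approximation of x = (1, 1)

r = [ai - bi for ai, bi in zip(matvec(A, x_hat), b)]   # residual A xh - b
rho = norm_inf(r) / (norm_inf_mat(A) * norm_inf(x_hat))
print(rho)               # weighted residual of the perturbed solution

e = solve2(A, r)         # correction step: A e = r
x_new = [xi - ei for xi, ei in zip(x_hat, e)]
print(x_new)             # approximately [1.0, 1.0] after one improvement step
```

Because the stand-in solver is essentially exact, one improvement step recovers the true solution here; with an LU decomposition computed in floating point, each step instead reduces the error by a factor roughly $\operatorname{cond}(A)\,\varepsilon_M$.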
More informationMath 217 Fall 2013 Homework 2 Solutions
Math 17 Fall 013 Homework Solutons Due Thursday Sept. 6, 013 5pm Ths homework conssts of 6 problems of 5 ponts each. The total s 30. You need to fully justfy your answer prove that your functon ndeed has
More informationLINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity
LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have
More informationOn a direct solver for linear least squares problems
ISSN 2066-6594 Ann. Acad. Rom. Sc. Ser. Math. Appl. Vol. 8, No. 2/2016 On a drect solver for lnear least squares problems Constantn Popa Abstract The Null Space (NS) algorthm s a drect solver for lnear
More informationTransfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system
Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 17. a ij x (k) b i. a ij x (k+1) (D + L)x (k+1) = b Ux (k)
STAT 309: MATHEMATICAL COMPUTATIONS I FALL 08 LECTURE 7. sor method remnder: n coordnatewse form, Jacob method s = [ b a x (k) a and Gauss Sedel method s = [ b a = = remnder: n matrx form, Jacob method
More informationCSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography
CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve
More informationLecture 4: Universal Hash Functions/Streaming Cont d
CSE 5: Desgn and Analyss of Algorthms I Sprng 06 Lecture 4: Unversal Hash Functons/Streamng Cont d Lecturer: Shayan Oves Gharan Aprl 6th Scrbe: Jacob Schreber Dsclamer: These notes have not been subjected
More informationCOS 511: Theoretical Machine Learning. Lecturer: Rob Schapire Lecture # 15 Scribe: Jieming Mao April 1, 2013
COS 511: heoretcal Machne Learnng Lecturer: Rob Schapre Lecture # 15 Scrbe: Jemng Mao Aprl 1, 013 1 Bref revew 1.1 Learnng wth expert advce Last tme, we started to talk about learnng wth expert advce.
More informationNotes on Frequency Estimation in Data Streams
Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to
More informationEffects of Ignoring Correlations When Computing Sample Chi-Square. John W. Fowler February 26, 2012
Effects of Ignorng Correlatons When Computng Sample Ch-Square John W. Fowler February 6, 0 It can happen that ch-square must be computed for a sample whose elements are correlated to an unknown extent.
More informationLearning Theory: Lecture Notes
Learnng Theory: Lecture Notes Lecturer: Kamalka Chaudhur Scrbe: Qush Wang October 27, 2012 1 The Agnostc PAC Model Recall that one of the constrants of the PAC model s that the data dstrbuton has to be
More informationThe Minimum Universal Cost Flow in an Infeasible Flow Network
Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran
More informationHongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)
ISSN 1749-3889 (prnt), 1749-3897 (onlne) Internatonal Journal of Nonlnear Scence Vol.17(2014) No.2,pp.188-192 Modfed Block Jacob-Davdson Method for Solvng Large Sparse Egenproblems Hongy Mao, College of
More informationModule 9. Lecture 6. Duality in Assignment Problems
Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept
More informationGeorgia Tech PHYS 6124 Mathematical Methods of Physics I
Georga Tech PHYS 624 Mathematcal Methods of Physcs I Instructor: Predrag Cvtanovć Fall semester 202 Homework Set #7 due October 30 202 == show all your work for maxmum credt == put labels ttle legends
More information10-701/ Machine Learning, Fall 2005 Homework 3
10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40
More informationVapnik-Chervonenkis theory
Vapnk-Chervonenks theory Rs Kondor June 13, 2008 For the purposes of ths lecture, we restrct ourselves to the bnary supervsed batch learnng settng. We assume that we have an nput space X, and an unknown
More informationSIO 224. m(r) =(ρ(r),k s (r),µ(r))
SIO 224 1. A bref look at resoluton analyss Here s some background for the Masters and Gubbns resoluton paper. Global Earth models are usually found teratvely by assumng a startng model and fndng small
More informationa b a In case b 0, a being divisible by b is the same as to say that
Secton 6.2 Dvsblty among the ntegers An nteger a ε s dvsble by b ε f there s an nteger c ε such that a = bc. Note that s dvsble by any nteger b, snce = b. On the other hand, a s dvsble by only f a = :
More informationHashing. Alexandra Stefan
Hashng Alexandra Stefan 1 Hash tables Tables Drect access table (or key-ndex table): key => ndex Hash table: key => hash value => ndex Man components Hash functon Collson resoluton Dfferent keys mapped
More informationLINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity
LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 31 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 6. Rdge regresson The OLSE s the best lnear unbased
More informationLOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin
Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence
More information