1 GSW Iterative Techniques for y = Ax


I'm going to cheat here. There are a lot of iterative techniques that can be used to solve the general case of a set of simultaneous equations (written in the matrix form as y = Ax), but this chapter isn't going to talk about all of the ones in this book. This chapter only talks about the Jacobi and Gauss-Seidel methods, and variations on them. For the steepest descent method and the related method of conjugate gradients, see the next chapter.

1.1 The Jacobi Iterative Method

The Jacobi method is an iterative technique for solving a square set of simultaneous equations y = Ax. The idea is to take the matrix A and express it as the sum of a matrix D containing just the diagonal elements and a matrix E containing all the other elements:

A = D + E    (0.1)

It is very easy to invert a matrix that consists only of diagonal elements: all you have to do is take the reciprocal of each of the diagonal elements:

D^{-1} = diag( 1/A_11, 1/A_22, ..., 1/A_nn )    (0.2)

With this decomposition (of A into D + E), we can write:

y = Ax = (D + E) x
y - Ex = Dx
x = D^{-1} ( y - Ex )    (0.3)

and note that D^{-1}y and D^{-1}E are both easy to calculate, and only have to be calculated once, at the start of the iteration process. The Jacobi iteration starts with an initial guess at the value of x, written as x[0], and then iterates according to:

x[n+1] = D^{-1} ( y - E x[n] )    (0.4)

and these iterations are easy to calculate, since D is easy to invert. For a wide range of matrices A, this iteration converges to the answer. The problem is that it doesn't always work, and it doesn't always converge very quickly.
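Before looking at an example, here is a minimal NumPy sketch of the iteration in equation (0.4). It is only an illustration: the function name, the default starting guess, the tolerance and the iteration limit are my own choices, not anything specified in this chapter.

import numpy as np

def jacobi(A, y, x0=None, max_iter=100, tol=1e-10):
    """Minimal Jacobi iteration x[n+1] = D^{-1} (y - E x[n]) for a square matrix A."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    D_inv = np.diag(1.0 / np.diag(A))   # trivial inverse: reciprocal of each diagonal entry
    E = A - np.diag(np.diag(A))         # matrix of all the other (off-diagonal) elements
    x = np.zeros_like(y) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = D_inv @ (y - E @ x)     # one Jacobi step, equation (0.4)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

Note that D_inv and E only need to be formed once before the loop starts, exactly as the text describes.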

1.1.1 Example of the Jacobi Method

Consider a set of three simultaneous linear equations written in matrix form as:

y = A x    (0.5)

Then start with a guess at x, perhaps x[0] = [0; 0; 0]. This is the same matrix A as discussed above, so splitting A into its diagonal part D and off-diagonal part E (0.6), the matrix D^{-1}E (0.7) and the vector D^{-1}y (0.8) can be pre-computed before the iteration starts. The first iteration then produces:

x[1] = D^{-1} ( y - E x[0] )    (0.9)

the second iteration then gives:

x[2] = D^{-1} ( y - E x[1] )    (0.10)

the third gives:

x[3] = D^{-1} ( y - E x[2] )    (0.11)

and so on.

The exact answer to this problem is:

x = A^{-1} y    (0.12)

and knowing this, it is possible to keep track of how well the algorithm is doing by calculating the square error ||x[n] - x||^2 at each step. For the first eight iterations, the results are:

[Table: the estimate x[n]^T (written as a row)¹ and the square error ||x[n] - x||^2 for iterations n = 0 to 8, starting from x[0] = [0 0 0].]

¹ Note I'm writing this as the transpose of a row vector, since this format is easier to fit on the page. All vectors in this book, unless otherwise stated, are column vectors, so the transpose of a vector is a row vector.
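A table of this kind is easy to reproduce numerically. The short script below runs the Jacobi update and prints the square error at each step; the 3-by-3 system in it is an arbitrary diagonally dominant stand-in rather than the chapter's own example, so the printed values illustrate the behaviour rather than matching the table above.

import numpy as np

# Illustrative diagonally dominant system (stand-in values, not the chapter's example).
A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 1.0],
              [1.0, 2.0, 4.0]])
y = np.array([1.0, 2.0, 3.0])

x_exact = np.linalg.solve(A, y)        # exact answer, used only to measure the error
D_inv = np.diag(1.0 / np.diag(A))
E = A - np.diag(np.diag(A))

x = np.zeros(3)                        # starting guess x[0] = [0 0 0]
for n in range(9):
    err = np.sum((x - x_exact) ** 2)
    print(f"x[{n}] = {x},  square error = {err:.2e}")
    x = D_inv @ (y - E @ x)            # Jacobi update, equation (0.4)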

1.1.2 Convergence of the Jacobi Method

One problem with the Jacobi method is that it doesn't always work. To see why, and to understand how the method works at all, first express the result after n iterations, x[n], as the sum of the real answer x and an error term e[n], so that:

x[n] = x + e[n]    (0.13)

Then, after the next iteration, the error term will be:

e[n+1] = x[n+1] - x = D^{-1} ( y - E x[n] ) - x = D^{-1} ( y - E x - E e[n] ) - x    (0.14)

but if x is the real solution, then:

x = D^{-1} ( y - E x )    (0.15)

so:

e[n+1] = x - D^{-1} E e[n] - x = -D^{-1} E e[n]    (0.16)

In other words, the effect of each iteration is to pre-multiply the error vector e[n] by the matrix -D^{-1}E. If the modulus of the largest eigenvalue of D^{-1}E is less than one, then this iteration will converge², and if all the eigenvalues are very much smaller than one, then the iteration will converge very rapidly. However, if at least one eigenvalue of D^{-1}E has modulus equal to or greater than one, then this method will fail.

There is a simple way to tell whether this method will work for any given matrix A: if, in every row, the modulus of the term on the main diagonal is greater than the sum of the moduli of the other terms in the row, then the iteration will converge³. In maths, this criterion is:

|A_ii| > Σ_{j≠i} |A_ij|    (0.17)

and a matrix that satisfies this criterion is known as row diagonally dominant. The matrix A we used in the example above (0.18) meets this criterion (the eigenvalues of D^{-1}E in this case are 0.68, 0.5 and 0.18, all of which have magnitudes less than one).

² Think of the error term as being composed of components parallel to the eigenvectors of D^{-1}E. Then each iteration results in multiplying each of these components by the corresponding eigenvalue. If all the eigenvalues have magnitude less than one, then this will result in a smaller error term, since all the components are now smaller than they were.

³ This condition is sufficient, but not necessary. There are matrices that do not meet this criterion, but which still converge to a solution using the Jacobi method. The necessary condition is that the moduli of all the eigenvalues of D^{-1}E are less than one.
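For small matrices, both conditions can be checked numerically. The sketch below (the function names are mine) tests row diagonal dominance and computes the largest eigenvalue magnitude of D^{-1}E with NumPy's general-purpose eigenvalue routine; for the very large matrices that iterative methods are really aimed at, this check would typically cost as much as solving the system directly, so treat it as a diagnostic for small examples only.

import numpy as np

def is_row_diagonally_dominant(A):
    """Check |A_ii| > sum over j != i of |A_ij| for every row (sufficient condition)."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(diag > off))

def jacobi_spectral_radius(A):
    """Largest eigenvalue magnitude of D^{-1} E: the Jacobi iteration converges iff this is < 1."""
    A = np.asarray(A, dtype=float)
    D_inv = np.diag(1.0 / np.diag(A))
    E = A - np.diag(np.diag(A))
    return float(np.max(np.abs(np.linalg.eigvals(D_inv @ E))))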

Now, suppose you were given a set of three simultaneous equations (0.19) and asked to solve them. Could you use Jacobi iteration? Well, at first glance you might think not, since written in matrix notation (0.20) the matrix is not row diagonally dominant: for example, the largest value in the second row is in the third column. And indeed, it doesn't work: the magnitude of the largest eigenvalue of D^{-1}E is greater than one. However, at second glance, you might spot that all you have to do is swap the second and third equations over (0.21), and the matrix is now row diagonally dominant. Jacobi will work.

Unfortunately, this trick doesn't always work. For example, consider the set of equations in (0.22): the eigenvalues of D^{-1}E in this case are 0.8 and a complex-conjugate pair whose magnitudes are (just) greater than one, so the Jacobi iteration technique will fail to converge. We'll need to use something more powerful.

1.2 The Gauss-Seidel Iterative Method

Slightly more complicated than the Jacobi method, but very much along the same lines, is the Gauss-Seidel method. The Gauss-Seidel method expresses the matrix A as the sum of a diagonal matrix, an upper triangular matrix and a lower triangular matrix:

A = D - U - L    (0.23)

(Note that the triangular matrices U and L are conventionally defined as the negative of the corresponding elements of A: the previous equation is the normal way of specifying this method.) With this decomposition (of A into D - U - L), we can write:

y = Ax = (D - U - L) x
y + Ux = (D - L) x
x = (D - L)^{-1} ( y + Ux )    (0.24)

The Gauss-Seidel iteration starts with an initial guess x[0] at the value of x, and then iterates according to:

x[n+1] = (D - L)^{-1} ( y + U x[n] )    (0.25)

This is a similar approach to the Jacobi method, except that now it appears we have to invert the matrix (D - L), which is not diagonal, rather than the diagonal matrix D. Surely that is much more difficult to do? So what is the advantage of this method? To answer that question, I'll have to talk about the Jacobi method in a bit more detail.

1.2.1 The Jacobi Method Step-by-Step

Consider what happens at each iteration step of the Jacobi method:

x[n+1] = D^{-1} ( y - E x[n] )    (0.26)

To calculate the new vector x[n+1], each element must be worked out in turn. For the i-th element, this means calculating:

x_i[n+1] = (D^{-1})_ii ( y_i - Σ_j E_ij x_j[n] )    (0.27)

which is actually a lot easier than it looks. Firstly, since D is a diagonal matrix, the inverse of D is also diagonal, and has elements that are the reciprocals of the elements in D. This leads to:

x_i[n+1] = ( y_i - Σ_j E_ij x_j[n] ) / D_ii    (0.28)

and this can be further simplified by noting that D contains the diagonal elements of A and E the non-diagonal elements, so D_ii = A_ii, and E_ij = A_ij except for E_ii, which is zero. So, we could write this in terms of the original matrix A as:

x_i[n+1] = ( y_i - Σ_{j≠i} A_ij x_j[n] ) / A_ii    (0.29)

Now, back to the Gauss-Seidel.

1.2.2 The Gauss-Seidel Method Step-by-Step

Iterations usually converge faster if the most recent values are used whenever possible. So, after calculating the new first element x_1[n+1], why not use this value immediately to calculate the next element x_2[n+1], rather than waiting for the whole of the vector x[n+1] to be calculated using only the terms in the previous vector x[n]? That is exactly what the Gauss-Seidel method does.

After calculating the first element x_1[n+1], we replace the value of x_1[n] with this new value and carry on, using this new vector with a first element of x_1[n+1] and all the other elements those of x[n]. Then, we use this vector to calculate the second element x_2[n+1]. (The result won't be the same as if the Jacobi method were being used, since we're no longer starting with the vector x[n], we're using this strange new hybrid vector.) That'll give us a new second element.

Replace the second element with this new value, and use this new vector to calculate the third element in x[n+1]. Repeat for all the elements, and for the next iteration go back to the start and start over. In effect, we're doing:

x_i[n+1] = ( y_i - Σ_{j<i} E_ij x_j[n+1] - Σ_{j≥i} E_ij x_j[n] ) / D_ii    (0.30)

at every step, where E is the matrix of the non-diagonal elements in A, and D is the matrix of just the diagonal elements, just like before. Since E_ii is zero, we don't need to consider the term in the second summation when j = i, and we could just write this as:

x_i[n+1] = ( y_i - Σ_{j<i} A_ij x_j[n+1] - Σ_{j>i} A_ij x_j[n] ) / A_ii    (0.31)

This is the only difference between the Gauss-Seidel algorithm and the Jacobi algorithm: with the Gauss-Seidel algorithm, the new values of x[n+1] are used as soon as they become available; you don't wait for all of the vector x[n+1] to be calculated before starting to use any of the terms.

If you're going to derive the form of the Gauss-Seidel in terms of triangular and diagonal matrices, then it is much easier to start with equation (0.31) and note that the terms of A in which j < i are just all the terms in the lower triangular matrix, the one we've called L. Likewise, the terms of A in which j > i are the terms in the upper triangular matrix, the one we've called U. So:

D_ii x_i[n+1] = y_i + Σ_j L_ij x_j[n+1] + Σ_j U_ij x_j[n]    (0.32)

and since we're now summing over the full range of j in both cases, we can write this in matrix form:

D x[n+1] = y + L x[n+1] + U x[n]    (0.33)

which with a bit of algebraic manipulation gives:

x[n+1] = (D - L)^{-1} ( y + U x[n] )    (0.34)

which is identical to equation (0.25). Although it looks more complicated, in fact it doesn't require any more calculations than the Jacobi method.
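Written as code, the element-by-element update of equation (0.31) is only a small change from the Jacobi sketch earlier: each new value is written straight back into x, so later elements in the same sweep use it immediately. As before, the function name and stopping rule are my own choices.

import numpy as np

def gauss_seidel(A, y, x0=None, max_iter=100, tol=1e-10):
    """Gauss-Seidel sweep: overwrite x element-by-element, using new values as soon as available."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_old = x.copy()                 # kept only to test for convergence
        for i in range(n):
            # j < i uses the already-updated entries, j > i still uses the old ones
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (y[i] - s) / A[i, i]  # equation (0.31)
        if np.linalg.norm(x - x_old) < tol:
            break
    return x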

1.2.3 Example of the Gauss-Seidel Method

Doing the same example as in section 1.1.1 results in the following series of approximations and square errors. Note that the Gauss-Seidel in this case converges much faster than the Jacobi, due to the use of more recent information in the algorithm.

[Table: the estimate x[n]^T and the square error ||x[n] - x||^2 for the first eight Gauss-Seidel iterations, starting from x[0] = [0 0 0].]

1.2.4 Convergence of the Gauss-Seidel Method

To work out when the Gauss-Seidel method converges, we can proceed in the same way as for the simpler Jacobi case above. First, express x[n] as the sum of the actual answer x and an error term e[n], so that:

x[n] = x + e[n]    (0.35)

Then, after the next iteration, the error term will be:

e[n+1] = x[n+1] - x = (D - L)^{-1} ( y + U x[n] ) - x = (D - L)^{-1} ( y + U x + U e[n] ) - x    (0.36)

but if x is the real solution, then:

x = (D - L)^{-1} ( y + U x )    (0.37)

so:

e[n+1] = x + (D - L)^{-1} U e[n] - x = (D - L)^{-1} U e[n]    (0.38)

So in this case, the error term is pre-multiplied by the matrix (D - L)^{-1} U after each iteration. This implies that if the magnitude of the largest eigenvalue of (D - L)^{-1} U is less than one, then this iteration will converge, and again, if all the eigenvalues are very much smaller than one, then the iteration will converge very rapidly. However, if at least one eigenvalue has magnitude equal to or greater than one, then this method will fail. Again, a sufficient (but not necessary) condition for convergence is that the matrix A is row diagonally dominant.
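The convergence test is therefore the same as before, just with a different iteration matrix. The sketch below (the function name is mine) returns the largest eigenvalue magnitudes of D^{-1}E and (D - L)^{-1}U for a small test matrix, so the two methods can be compared directly; the smaller value indicates the method whose error shrinks faster per iteration.

import numpy as np

def iteration_spectral_radii(A):
    """Return (rho_jacobi, rho_gauss_seidel) for a square matrix A = D - L - U."""
    A = np.asarray(A, dtype=float)
    D = np.diag(np.diag(A))
    L = -np.tril(A, k=-1)            # strictly lower triangular part, negated as in equation (0.23)
    U = -np.triu(A, k=1)             # strictly upper triangular part, negated
    rho_j = np.max(np.abs(np.linalg.eigvals(np.linalg.inv(D) @ (A - D))))
    rho_gs = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(D - L, U))))
    return float(rho_j), float(rho_gs)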

1.2.5 Comparison of the Gauss-Seidel and Jacobi Methods

The Gauss-Seidel is the more common and usually favoured method, for a number of reasons.

Firstly, and most importantly, it tends to converge faster. This is a result of using the new information (the updated values of the elements of the next estimate of x) as soon as it becomes available, and not waiting for all the elements in the vector to be updated before using any of them.

Secondly, it doesn't require as much storage space: the elements of the vector can be replaced one-by-one.

Thirdly, there are some matrices for which Gauss-Seidel converges, but Jacobi does not. (On the other hand, there are some matrices for which Jacobi converges and Gauss-Seidel doesn't, but there are fewer of these. You've got a better chance with the Gauss-Seidel.)

1.3 Preconditioning

Both the Jacobi and the Gauss-Seidel methods only converge when the eigenvalues of certain matrices (D^{-1}E in the case of Jacobi and (D - L)^{-1}U in the case of Gauss-Seidel) all have magnitudes less than one. What if this is not true? And how do you know, before you start the iterations, whether it is true or not? In general, finding the eigenvalues of a matrix is not a trivial task.

One solution is to first pre-multiply the equations by some other matrix B, so we're actually trying to solve the equations:

B y = B A x    (0.39)

If we know B, then calculating By is easy, and this leaves us with a similar problem to before, except that we've now replaced the matrix A with BA. If we choose B so that BA has the required properties for convergence, then we can ensure that the iteration is successful. Unfortunately, finding a suitable matrix B to use is not always easy. One possibility is using B = A^H. Then, we're trying to solve the equations:

A^H y = A^H A x    (0.40)

For an invertible A, the matrix A^H A is positive definite, which among other things means that the Gauss-Seidel iteration method is guaranteed to converge⁴ (although it might not converge very quickly).

⁴ The proof of this is beyond the scope of this chapter. See the chapter on Matrix Properties, under positive definite matrices, for more information about this.
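As a sketch of this idea, assuming a real-valued, invertible A (so that A^H is just the transpose) and re-using the gauss_seidel function sketched earlier:

import numpy as np

def precondition_and_solve(A, y, solver, **kwargs):
    """Pre-multiply by B = A^H (the transpose, for real A) and solve A^H A x = A^H y."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    AH = A.T                          # A^H reduces to A^T for a real-valued matrix
    return solver(AH @ A, AH @ y, **kwargs)

For example, precondition_and_solve(A, y, gauss_seidel) should converge even when A itself is not row diagonally dominant. The catch, in line with the warning above, is that forming A^H A roughly squares the condition number of the problem, which is one reason the resulting iteration can be slow.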

1.4 Over-Relaxation Methods

When the eigenvalues of the matrix that multiplies the error term at each iteration are all positive, these methods converge to the correct solution in a series of steps, getting closer to the solution at each step. That means that the correct solution is always slightly further away than the next step. So: why not jump a little further each time? Instead of an iteration like:

x[n+1] = (D - L)^{-1} ( y + U x[n] )    (0.41)

why not use:

x[n+1] = w [ (D - L)^{-1} ( y + U x[n] ) - x[n] ] + x[n]    (0.42)

where the parameter w is greater than one? Then you're moving further than you would with the Gauss-Seidel, and that might get you closer to the correct solution with each iteration. This is in general called over-relaxation, and in this particular case, the successive over-relaxation (SOR) method. It can increase the speed of the convergence, at the expense of potentially making the iterations unstable and causing divergence.

Actually, equation (0.42) is not the form most often used, because it is harder to calculate: it requires that all the elements of the vector x[n] are available when trying to calculate the vector x[n+1], and one of the main advantages of the Gauss-Seidel method over the Jacobi is that it doesn't have to keep the last iteration; it can replace terms one-by-one. It is more usual to go back to the iteration form:

D_ii x_i[n+1] = y_i + Σ_j L_ij x_j[n+1] + Σ_j U_ij x_j[n]    (0.43)

which does not require the entire vector x[n] to be available to calculate x[n+1], and multiply the difference between each term x_i[n+1] and x_i[n], as they are calculated, by w:

x_i[n+1] = w [ ( y_i + Σ_j L_ij x_j[n+1] + Σ_j U_ij x_j[n] ) / D_ii - x_i[n] ] + x_i[n]    (0.44)

which with a bit of algebraic manipulation gives:

x[n+1] = (D - wL)^{-1} ( w y + w U x[n] + (1 - w) D x[n] )    (0.45)

How to choose the best possible value of w is again beyond the scope of this chapter; however, it is worth noting that values outside the range 0 < w < 2 do not converge, and using w = 1 is just equivalent to using the Gauss-Seidel method.
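Here is a minimal sketch of the element-by-element form in equation (0.44), again with my own choice of names, default w and stopping rule. Setting w = 1 reduces it to the plain Gauss-Seidel sweep, in line with the remark above.

import numpy as np

def sor(A, y, w=1.2, x0=None, max_iter=200, tol=1e-10):
    """Successive over-relaxation: a Gauss-Seidel sweep whose updates are scaled by w."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            gs_value = (y[i] - s) / A[i, i]        # the plain Gauss-Seidel update for element i
            x[i] = x[i] + w * (gs_value - x[i])    # step w times further, equation (0.44)
        if np.linalg.norm(x - x_old) < tol:
            break
    return x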

1.5 Tutorial Questions

1) Try and solve the following systems of equations using the Jacobi and Gauss-Seidel iteration techniques (MATLAB or some other mathematical program would be of great use for this question, but you could just calculate the exact solution and the result from the first iteration step, and see if the error term gets smaller). Which of them converge?

a) - d) [four small systems of simultaneous equations; the coefficients are not reproduced here]

2) In cases where the Jacobi and Gauss-Seidel approaches both converge, does the Gauss-Seidel always converge faster?

3) Prove that the over-relaxation form of the Gauss-Seidel equation:

x_i[n+1] = w [ ( y_i + Σ_j L_ij x_j[n+1] + Σ_j U_ij x_j[n] ) / D_ii - x_i[n] ] + x_i[n]

can be simplified to:

x[n+1] = (D - wL)^{-1} ( w y + w U x[n] + (1 - w) D x[n] )

4) The over-relaxation form in equation (0.45) was derived from the Gauss-Seidel method. Can you use the same over-relaxation idea, only starting with the Jacobi method? If so, derive the equivalent equation for this case.

5) The Gauss-Seidel method is guaranteed to converge if the matrix A is positive definite. What about the Jacobi method? Does it always converge if A is positive definite? (Either a proof, or an example of a positive definite A that does not converge using the Jacobi method, will be fine.)

6) For over-relaxation methods, when might a value of w of less than one be a good idea? How could you calculate the optimum value to use, and is it worth calculating?

© 2007 Dave Pearce
