A CHOLESKY LR ALGORITHM FOR THE POSITIVE DEFINITE SYMMETRIC DIAGONAL-PLUS-SEMISEPARABLE EIGENPROBLEM


BOR PLESTENJAK, ELLEN VAN CAMP, AND MARC VAN BAREL

Abstract. We present a Cholesky LR algorithm with Laguerre's shift for computing the eigenvalues of a positive definite symmetric diagonal-plus-semiseparable matrix. By exploiting the semiseparable structure, each step of the method can be performed in linear time.

Key words. diagonal-plus-semiseparable matrix, LR algorithm, Laguerre's method, Cholesky decomposition

AMS subject classifications. 65F15

1. Introduction. The symmetric eigenvalue problem is a well studied topic in numerical linear algebra. When the original matrix is a symmetric matrix, very often an orthogonal transformation into a similar tridiagonal one is applied, because the eigendecomposition of a tridiagonal matrix can be computed in O(n) and such an orthogonal similarity transformation always exists (see, for example, [5, 9]). In [14], an orthogonal similarity reduction is presented that reduces any symmetric matrix into a diagonal-plus-semiseparable (from now on denoted by DPSS) one with free choice of the diagonal. This transformation has the same order of computational complexity as the reduction into tridiagonal form; only the second highest order term is a little bit larger. A good choice of the diagonal, however, can compensate for this small delay when computing the eigenvalues and eigenvectors afterwards.

Several algorithms are known for computing the eigendecomposition of symmetric DPSS matrices; for example, in [2] and [8] divide and conquer techniques are used. The authors of [1, 4, and references therein] focus on QR algorithms, and in [10] an implicit QR algorithm is presented. When the symmetric DPSS matrix is positive definite, an LR algorithm, based on the Cholesky decomposition, can also be applied in order to compute the eigenvalues. Such a Cholesky LR algorithm is constructed in this paper. To that end, we show that the DPSS structure is preserved by the Cholesky decomposition and the LR algorithm.
As a shift, Laguerre's shifts (also used for symmetric positive definite tridiagonal matrices in [6]) are used, because one has to be sure that the shifted matrix is positive definite again. Exploiting the DPSS structure, one step of the Cholesky LR algorithm, including the computation of the shift, has a computational cost of order O(n). Because two steps of the LR algorithm are equivalent to one step of the QR algorithm (see, for example, [5]), there will be convergence towards the eigenvalues. In contrast to the QR algorithm with shifts, where the eigenvalues are not computed in any particular order, the eigenvalues in the LR algorithm are computed from the smallest to the largest one.

Version: March 8. The research of the first author was partially supported by the Ministry of Higher Education, Science and Technology of Slovenia, project P. The research of the second and third author was partially supported by the Research Council K.U.Leuven, project OT/00/16 (SLAP: Structured Linear Algebra Package), G (ANCILA: Asymptotic analysis of the Convergence behavior of Iterative methods in numerical Linear Algebra), G (CORFU: Constructive study of Orthogonal Functions) and G (RHPH: Riemann-Hilbert problems, random matrices and Padé-Hermite approximation), and by the Belgian Programme on Interuniversity Poles of Attraction, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture, project IUAP V-22 (Dynamical Systems and Control: Computation, Identification and Modelling). The scientific responsibility rests with the authors.

This makes it a very suitable algorithm for those applications where the smallest eigenvalues are needed.

The paper is organized as follows. In Section 2 the concepts used are explained. The preservation of the DPSS structure under the Cholesky decomposition and the Cholesky LR algorithm is proven in Section 3, where explicit fast algorithms for the Cholesky decomposition and the LR algorithm are also constructed. In Section 4 a fast computation of Laguerre's shifts is studied. Section 5 focuses on the implementation, while numerical results are discussed in Section 6, followed by conclusions.

2. Preliminaries. In this section we recall the definition of DPSS matrices and the Givens-vector representation that we will use. The idea of the LR algorithm based on the Cholesky decomposition is recalled, as well as Laguerre's method.

DEFINITION 2.1. A matrix S is called a lower- (upper-) semiseparable matrix if every submatrix that can be taken out of the lower (upper) triangular part of the matrix S has rank at most 1. If a matrix is both lower- and upper-semiseparable, it is called a semiseparable matrix. The sum D + S of a diagonal matrix D and a semiseparable matrix S is called a diagonal-plus-semiseparable matrix, or shortly a DPSS matrix.

To represent a symmetric DPSS matrix, we use the Givens-vector representation based on a vector $f = [f_1, \ldots, f_n]^T$, $n-1$ Givens rotations
$$G_i = \begin{bmatrix} c_i & s_i \\ -s_i & c_i \end{bmatrix}, \qquad i = 1, \ldots, n-1,$$
and a diagonal $d = [d_1, \ldots, d_n]^T$ (for more details see, e.g., [13]):
$$D + S = \begin{bmatrix}
c_1 f_1 + d_1 & c_2 s_1 f_1 & \cdots & c_{n-1} s_{n-2:1} f_1 & s_{n-1:1} f_1 \\
c_2 s_1 f_1 & c_2 f_2 + d_2 & \cdots & c_{n-1} s_{n-2:2} f_2 & s_{n-1:2} f_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
c_{n-1} s_{n-2:1} f_1 & c_{n-1} s_{n-2:2} f_2 & \cdots & c_{n-1} f_{n-1} + d_{n-1} & s_{n-1} f_{n-1} \\
s_{n-1:1} f_1 & s_{n-1:2} f_2 & \cdots & s_{n-1} f_{n-1} & f_n + d_n
\end{bmatrix},$$
where $s_{a:b} = s_a s_{a-1} \cdots s_b$. We will denote $D + S = \mathrm{diag}(d) + \mathrm{Giv}(c, s, f)$.

The above representation of a DPSS matrix is not unique. One can see that the parameters $d_1$ and $d_n$ can be chosen arbitrarily. If we change $d_n$ into $\tilde d_n$, then we can change $f_n$ into $\tilde f_n = f_n + d_n - \tilde d_n$ and get the same matrix.
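To make the representation concrete, the matrix above can be assembled entry by entry. The following Python sketch is our illustration, not part of the paper; the helper name `dpss_matrix` is ours, and the convention $c_n = 1$ is used:

```python
def dpss_matrix(c, s, f, d):
    """Assemble the symmetric DPSS matrix diag(d) + Giv(c, s, f).

    c and s hold the cosines and sines of the n-1 Givens rotations,
    f is the generator vector and d the diagonal.  Entry (i, j) with
    i >= j equals c_i * s_{i-1} * ... * s_j * f_j (plus d_i on the
    diagonal), with the convention c_n = 1.
    """
    n = len(f)
    cc = list(c) + [1.0]                  # convention: c_n = 1
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j, n):
            entry = cc[i] * f[j]
            for l in range(j, i):         # the product s_{i-1} ... s_j
                entry *= s[l]
            if i == j:
                entry += d[i]
            A[i][j] = A[j][i] = entry     # A is symmetric
    return A

# a 3 x 3 example; each rotation satisfies c_k^2 + s_k^2 = 1
c, s = [0.8, 0.6], [0.6, 0.8]
f, d = [1.0, 2.0, 3.0], [2.0, 2.0, 2.0]
A = dpss_matrix(c, s, f, d)
```

Every entry of the lower triangular part is built from the single generator vector f scaled by products of sines, which is precisely where the rank-at-most-1 (semiseparable) property comes from.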
If we change $d_1$ into $\tilde d_1$, then one can check that by taking
$$\tilde f_1 = \sqrt{(c_1 f_1 + d_1 - \tilde d_1)^2 + s_1^2 f_1^2}, \qquad
\tilde c_1 = (c_1 f_1 + d_1 - \tilde d_1)/\tilde f_1, \qquad
\tilde s_1 = s_1 f_1 / \tilde f_1,$$
we get the same matrix $D + S$. Most often, however, the diagonal $d$ is known, so $d_1$ and $d_n$ are fixed.

Next we recall another important concept, the Cholesky LR algorithm. Let $A$ be a symmetric positive definite (from now on denoted by s.p.d.) matrix. Starting from the matrix $A_0 = A$, a Cholesky LR algorithm generates a sequence of similar matrices
$$A_{k+1} = V_k^{-1} A_k V_k = V_k^T V_k, \qquad k = 0, 1, \ldots,$$

where $V_k V_k^T = A_k$ is the Cholesky decomposition of $A_k$ with $V_k$ a lower triangular matrix. The use of a shift at each step can speed up the convergence of the sequence $A_k$, $k = 0, 1, \ldots$, towards the Schur decomposition of $A$. When applying the Cholesky LR algorithm to an s.p.d. DPSS matrix $D + S$, the shift can be included in the diagonal part and hence, when we are able to construct the Cholesky decomposition $VV^T$ of an arbitrary s.p.d. DPSS matrix and the corresponding product $V^T V$, we can apply a step of the shifted Cholesky LR algorithm to an s.p.d. DPSS matrix. One important remark, however, is that the shift $\sigma$ should be chosen such that $D + S - \sigma I$ is still positive definite or, in other words, the shift $\sigma$ should be smaller than the smallest eigenvalue of $D + S$. To fulfill this requirement, Laguerre's shifts are used.

Let $A$ be an s.p.d. matrix with eigenvalues $0 < \lambda_n \le \lambda_{n-1} \le \cdots \le \lambda_1$. Let $f(\lambda) = \det(A - \lambda I)$ be the characteristic polynomial of $A$. If $x$ is an approximation for an eigenvalue of $A$ and we define
$$S_1(x) = \sum_{i=1}^n \frac{1}{\lambda_i - x} = -\frac{f'(x)}{f(x)}, \qquad
S_2(x) = \sum_{i=1}^n \frac{1}{(\lambda_i - x)^2} = \frac{f'^2(x) - f(x) f''(x)}{f^2(x)},$$
then the next approximation $\overline{x}$ by Laguerre's method is given by the equation
(2.1) $$\overline{x} = x + \frac{n}{S_1(x) + \sqrt{(n-1)\left(n\, S_2(x) - S_1^2(x)\right)}}.$$
Two important properties of Laguerre's method are that, if $\lambda_n$ is a simple eigenvalue and if $x < \lambda_n$, then $x < \overline{x} < \lambda_n$ and the convergence towards $\lambda_n$ is cubic. For multiple eigenvalues the convergence is linear. More details on Laguerre's method and its properties can be found in, e.g., [15].

3. Cholesky decomposition. In this section we show that the DPSS structure is preserved by both the Cholesky decomposition and the LR algorithm. Even better, if we use the Givens-vector representation, then we can show that some vectors from the representation are invariant under the Cholesky decomposition and the Cholesky LR algorithm. This enables us to produce a fast algorithm for the Cholesky LR step.

THEOREM 3.1. Let $A$ be a symmetric positive definite diagonal-plus-semiseparable matrix in the Givens-vector representation $A = \mathrm{Giv}(c,s,f) + \mathrm{diag}(d)$.

1.
If $V$ is a lower triangular matrix such that $A = VV^T$ is the Cholesky decomposition of $A$, then $V$ can be represented in the Givens-vector representation as
$$V = \mathrm{tril}(\mathrm{Giv}(c, s, \tilde f)) + \mathrm{diag}(\tilde d).$$
2. If $B = V^T V$, where $V$ is the lower triangular Cholesky factor of $A$ from 1., then $B$ is again a symmetric positive definite diagonal-plus-semiseparable matrix with the same diagonal part as the original matrix $A$:
$$B = \mathrm{Giv}(\hat c, \hat s, \hat f) + \mathrm{diag}(d).$$

Proof. 1. We use induction. As we generate $A$ from the top to the bottom, the following relation holds between $A_k$ and $A_{k+1}$, where $A_1 = [f_1 + d_1]$ and $A_n = A$. If we write
$$A_k = \begin{bmatrix} B_k & a_k \\ a_k^T & f_k + d_k \end{bmatrix},$$
where $B_k \in \mathbb{R}^{(k-1)\times(k-1)}$ and $a_k \in \mathbb{R}^{k-1}$, then
$$A_{k+1} = \begin{bmatrix} B_k & c_k a_k & s_k a_k \\ c_k a_k^T & c_k f_k + d_k & s_k f_k \\ s_k a_k^T & s_k f_k & f_{k+1} + d_{k+1} \end{bmatrix}.$$
If
$$V_k = \begin{bmatrix} W_k & 0 \\ v_k^T & \ast \end{bmatrix},$$
where $W_k \in \mathbb{R}^{(k-1)\times(k-1)}$ and $v_k \in \mathbb{R}^{k-1}$, is the Cholesky factor of $A_k$, then one can see that the Cholesky factor of $A_{k+1}$ has the form
$$V_{k+1} = \begin{bmatrix} W_k & 0 & 0 \\ c_k v_k^T & \ast & 0 \\ s_k v_k^T & \ast & \ast \end{bmatrix}.$$
It is easy to see that there exist $\tilde f_k$ and $\tilde d_k$ such that
$$V_k = \begin{bmatrix} W_k & 0 \\ v_k^T & \tilde f_k + \tilde d_k \end{bmatrix}
\quad\text{and}\quad
V_{k+1} = \begin{bmatrix} W_k & 0 & 0 \\ c_k v_k^T & c_k \tilde f_k + \tilde d_k & 0 \\ s_k v_k^T & s_k \tilde f_k & \ast \end{bmatrix}.$$
In the last step, when $k = n-1$, one can also choose appropriate $\tilde f_n$ and $\tilde d_n$ for the right bottom element of $V$. Hence, the Givens transformations from $A$ appear in the Givens-vector representation of the Cholesky factor $V$ as well.

2. From 1. we know that $A = VV^T$ with $V$ a nonsingular, lower-semiseparable and lower triangular matrix. $A$ is also an s.p.d. DPSS matrix, so $A = D + S$. Hence,
$$D + S = VV^T.$$
This implies
$$V^T V = V^T (D + S) V^{-T} = V^T D V^{-T} + V^T S V^{-T} = D_1 + S_1.$$
The matrix $D_1$ is an upper triangular matrix with the diagonal $D$ as diagonal elements. All the submatrices of the lower triangular part of $S_1$ have rank at most 1. So, $D_1 + S_1$ can be rewritten as
$$D_1 + S_1 = D + \hat S,$$

where all the submatrices of the lower triangular part of $\hat S$ have rank at most 1. Because of symmetry, the submatrices of the upper triangular part of $\hat S$ also have rank at most 1 and hence $\hat S$ is a semiseparable matrix. This finishes the proof that $V^T V = D + \hat S$ is again a symmetric DPSS matrix with the same diagonal part as the original matrix $A$.

The fact that the Givens transformations used in $A$ and in the Cholesky factor $V$ are the same simplifies the computation of $V$. The same is true for the fact that the diagonal part of $A$ is invariant under the LR algorithm. This will be exploited now. The Cholesky factor of $A$ has the form
$$V = \begin{bmatrix}
c_1 \tilde f_1 + \tilde d_1 & & & & \\
c_2 s_1 \tilde f_1 & c_2 \tilde f_2 + \tilde d_2 & & & \\
\vdots & & \ddots & & \\
c_{n-1} s_{n-2:1} \tilde f_1 & c_{n-1} s_{n-2:2} \tilde f_2 & \cdots & c_{n-1} \tilde f_{n-1} + \tilde d_{n-1} & \\
s_{n-1:1} \tilde f_1 & s_{n-1:2} \tilde f_2 & \cdots & s_{n-1} \tilde f_{n-1} & \tilde f_n + \tilde d_n
\end{bmatrix},$$
where $s_{a:b} = s_a s_{a-1} \cdots s_b$. By comparing the elements of $A$ and $VV^T$ we get equations for the vectors $\tilde f$ and $\tilde d$. As we know all Givens rotations, it is enough to compare the elements on the diagonal and the main subdiagonal. Hence, we get the following equations:
(3.1) $c_k f_k + d_k = \sum_{j=1}^{k-1} (c_k s_{k-1:j} \tilde f_j)^2 + (c_k \tilde f_k + \tilde d_k)^2$, $k = 1, \ldots, n$,
(3.2) $c_{k+1} s_k f_k = \sum_{j=1}^{k-1} c_k c_{k+1} s_k (s_{k-1:j} \tilde f_j)^2 + c_{k+1} s_k \tilde f_k (c_k \tilde f_k + \tilde d_k)$, $k = 1, \ldots, n-1$,
where we assume that $c_n = 1$. If we denote
$$q_k := \sum_{j=1}^{k-1} (s_{k-1} s_{k-2} \cdots s_j \tilde f_j)^2,$$
then we can write (3.1) and (3.2) as
(3.3) $c_k f_k + d_k = c_k^2 q_k + (c_k \tilde f_k + \tilde d_k)^2$, $k = 1, \ldots, n$,
(3.4) $c_{k+1} s_k f_k = c_k c_{k+1} s_k q_k + c_{k+1} s_k \tilde f_k (c_k \tilde f_k + \tilde d_k)$, $k = 1, \ldots, n-1$.
The solution of (3.3) and (3.4) for $\tilde f_k$ and $\tilde d_k$ is
(3.5) $\tilde f_k = \dfrac{f_k - c_k q_k}{\sqrt{d_k + c_k (f_k - c_k q_k)}}$, $k = 1, \ldots, n$,
(3.6) $\tilde d_k = \dfrac{d_k}{\sqrt{d_k + c_k (f_k - c_k q_k)}}$, $k = 1, \ldots, n$,
where we assume that $c_n = 1$ and $q_1 = 0$.

For later use, let us define the common factors in the numerator and the denominator of (3.5) and (3.6) as
$$z_k = f_k - c_k q_k, \qquad y_k = \sqrt{d_k + c_k z_k}.$$
One can see from (3.1) that $y_k$ is in fact the diagonal element of $V$, because
(3.7) $c_k \tilde f_k + \tilde d_k = \sqrt{d_k + c_k (f_k - c_k q_k)} = \sqrt{d_k + c_k z_k} = y_k$.
As in the standard Cholesky algorithm, a negative or zero value under the square root appears if $A$ is not positive definite, so this is a way to check whether $A$ is positive definite or not. Let us remark that $\tilde f_n$ and $\tilde d_n$ are not uniquely determined. We choose the values (3.5) and (3.6) because of consistency.

From the above equations we can obtain an algorithm that computes the Cholesky factorization of an s.p.d. DPSS matrix in $11n + O(1)$ flops.

ALGORITHM 3.2. An algorithm for the Cholesky decomposition $VV^T = A$ of an s.p.d. DPSS matrix $A = \mathrm{Giv}(c,s,f) + \mathrm{diag}(d)$. The result are vectors $\tilde f$ and $\tilde d$ such that $V = \mathrm{tril}(\mathrm{Giv}(c,s,\tilde f)) + \mathrm{diag}(\tilde d)$. In the algorithm we assume that $c_n = 1$.

function [f~, d~] = Cholesky(c, s, f, d)
  c_n = 1
  q_1 = 0
  for k = 1, ..., n:
    z_k = f_k - c_k q_k
    y_k = sqrt(d_k + c_k z_k)
    f~_k = z_k / y_k
    d~_k = d_k / y_k
    q_{k+1} = s_k^2 (q_k + f~_k^2)

Next we study how to construct the product $V^T V$ in an efficient way. The product $B = V^T V$ is again an s.p.d. DPSS matrix. A short calculation shows that the diagonal and subdiagonal elements of $B$ are equal to
(3.8) $b_{kk} = (c_k \tilde f_k + \tilde d_k)^2 + (s_k \tilde f_k)^2$,
(3.9) $b_{jk} = s_k s_{k+1} \cdots s_{j-1} \tilde f_k (\tilde f_j + c_j \tilde d_j)$,
where $k = 1, \ldots, n$, $j > k$, and we assume that $c_n = 1$, which implies that $s_n = 0$. Let us denote $B = \mathrm{Giv}(\hat c, \hat s, \hat f) + \mathrm{diag}(d)$. From the equality
$$\hat s_k^2 \hat f_k^2 = \sum_{j=k+1}^n b_{jk}^2$$
and (3.9) it follows that
(3.10) $\hat s_k^2 \hat f_k^2 = \tilde f_k^2\, p_k$, $k = 1, \ldots, n-1$,
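Algorithm 3.2 can be transcribed directly into code. The following Python sketch is ours (the paper gives only the pseudocode above); the dense helper is also ours and is included solely to verify $VV^T = A$ on a small example:

```python
import math

def dpss(c, s, f, d, lower_only=False):
    """Dense matrix diag(d) + Giv(c,s,f): entry (i,j) with i >= j is
    c_i s_{i-1} ... s_j f_j, plus d_i on the diagonal (convention c_n = 1)."""
    n = len(f)
    cc = list(c) + [1.0]
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j, n):
            v = cc[i] * f[j]
            for l in range(j, i):
                v *= s[l]
            if i == j:
                v += d[i]
            A[i][j] = v
            if not lower_only:
                A[j][i] = v
    return A

def cholesky_dpss(c, s, f, d):
    """Algorithm 3.2: O(n) Cholesky factorization of the s.p.d. DPSS matrix
    A = Giv(c,s,f) + diag(d).  Returns ft, dt so that
    V = tril(Giv(c,s,ft)) + diag(dt), plus the auxiliary z and y = diag(V)."""
    n = len(f)
    cc = list(c) + [1.0]                       # convention c_n = 1
    ft, dt, z, y = [0.0] * n, [0.0] * n, [0.0] * n, [0.0] * n
    q = 0.0                                    # q_1 = 0
    for k in range(n):
        z[k] = f[k] - cc[k] * q
        t = d[k] + cc[k] * z[k]
        if t <= 0.0:                           # detects a non-s.p.d. matrix
            raise ValueError("matrix is not positive definite")
        y[k] = math.sqrt(t)
        ft[k], dt[k] = z[k] / y[k], d[k] / y[k]
        if k < n - 1:
            q = s[k] ** 2 * (q + ft[k] ** 2)
    return ft, dt, z, y

# verify V V^T = A on a 3 x 3 example (each rotation has c_k^2 + s_k^2 = 1)
c, s = [0.8, 0.6], [0.6, 0.8]
f, d = [1.0, 2.0, 3.0], [2.0, 2.0, 2.0]
ft, dt, z, y = cholesky_dpss(c, s, f, d)
A = dpss(c, s, f, d)
V = dpss(c, s, ft, dt, lower_only=True)
n = len(f)
VVT = [[sum(V[i][k] * V[j][k] for k in range(n)) for j in range(n)] for i in range(n)]
err = max(abs(VVT[i][j] - A[i][j]) for i in range(n) for j in range(n))
```

The loop touches each index once, so the cost is O(n), in line with the $11n + O(1)$ flop count stated above.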

where
$$p_k = \sum_{j=k+1}^n (s_k s_{k+1} \cdots s_{j-1})^2 (\tilde f_j + c_j \tilde d_j)^2.$$
For $p_k$, $k = n-1, \ldots, 1$, we can apply the recursion
$$p_k = s_k^2 \left(p_{k+1} + (\tilde f_{k+1} + c_{k+1} \tilde d_{k+1})^2\right)$$
that starts with $p_n = 0$. From (3.8) we obtain
(3.11) $\hat c_k \hat f_k = (c_k \tilde f_k + \tilde d_k)^2 + (s_k \tilde f_k)^2 - d_k$.
By applying the relation (3.7) we simplify (3.11) into
(3.12) $\hat c_k \hat f_k = c_k z_k + (s_k \tilde f_k)^2$
and reduce the possibility of cancellation. From (3.10) and (3.12) we can compute the vectors $\hat c$, $\hat s$, and $\hat f$.

ALGORITHM 3.3. An algorithm for the product $B = V^T V$, where $V = \mathrm{tril}(\mathrm{Giv}(c,s,\tilde f)) + \mathrm{diag}(\tilde d)$ is the lower triangular Cholesky factor of an s.p.d. DPSS matrix $A = \mathrm{Giv}(c,s,f) + \mathrm{diag}(d)$. The vector $z$ was already computed in Algorithm 3.2. The result are vectors $\hat c$, $\hat s$, and $\hat f$ such that $B = \mathrm{Giv}(\hat c,\hat s,\hat f) + \mathrm{diag}(d)$.

function [c^, s^, f^] = VTV(c, s, f~, z)
  c_n = 1
  f^_n = (f~_n + d~_n)^2 - d_n
  p_n = 0
  for k = n-1, ..., 2, 1:
    p_k = s_k^2 (p_{k+1} + (f~_{k+1} + c_{k+1} d~_{k+1})^2)
    [c^_k, s^_k, f^_k] = Givens(c_k z_k + s_k^2 f~_k^2, f~_k sqrt(p_k))

The function [c, s, f] = Givens(x, y) in Algorithm 3.3 returns the Givens transformation such that
$$\begin{bmatrix} c & s \\ -s & c \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} f \\ 0 \end{bmatrix}.$$
A stable implementation that guards against overflow requires 7 flops (see, for example, [5]). Note that some quantities such as $\tilde f_k^2$ and $s_k^2$ already appear in Algorithm 3.2, so we have to compute them only once. As a result, an efficient implementation of Algorithm 3.3 requires $16n + O(1)$ flops, and one step of the Cholesky LR algorithm without shifts can be performed in $27n + O(1)$ flops. Let us remark that in Algorithm 3.3 we do not care about the sign of $\hat s_k$, as the eigenvalues are invariant to the sign of $\hat s_k$, $k = 1, \ldots, n-1$.
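Algorithms 3.2 and 3.3 together form one unshifted Cholesky LR step. The following Python sketch (ours, not from the paper) performs one step and compares the representation returned by Algorithm 3.3 with a densely formed $V^T V$; since the signs of $\hat s_k$ are irrelevant, off-diagonal entries are compared in absolute value:

```python
import math

def dpss(c, s, f, d, lower_only=False):
    """Dense matrix diag(d) + Giv(c,s,f): entry (i,j), i >= j, is
    c_i s_{i-1} ... s_j f_j, plus d_i on the diagonal (c_n = 1)."""
    n = len(f)
    cc = list(c) + [1.0]
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j, n):
            v = cc[i] * f[j]
            for l in range(j, i):
                v *= s[l]
            if i == j:
                v += d[i]
            A[i][j] = v
            if not lower_only:
                A[j][i] = v
    return A

def cholesky_dpss(c, s, f, d):
    """Algorithm 3.2: O(n) Cholesky factorization; returns ft, dt, z, y."""
    n = len(f)
    cc = list(c) + [1.0]
    ft, dt, z, y = [0.0] * n, [0.0] * n, [0.0] * n, [0.0] * n
    q = 0.0
    for k in range(n):
        z[k] = f[k] - cc[k] * q
        y[k] = math.sqrt(d[k] + cc[k] * z[k])   # fails if A is not s.p.d.
        ft[k], dt[k] = z[k] / y[k], d[k] / y[k]
        if k < n - 1:
            q = s[k] ** 2 * (q + ft[k] ** 2)
    return ft, dt, z, y

def vtv(c, s, ft, dt, z, d):
    """Algorithm 3.3: representation B = Giv(chat,shat,fhat) + diag(d) of V^T V."""
    n = len(ft)
    cc = list(c) + [1.0]
    chat, shat, fhat = [0.0] * (n - 1), [0.0] * (n - 1), [0.0] * n
    fhat[n - 1] = (ft[n - 1] + dt[n - 1]) ** 2 - d[n - 1]
    p = 0.0                                          # p_n = 0
    for k in range(n - 2, -1, -1):
        p = s[k] ** 2 * (p + (ft[k + 1] + cc[k + 1] * dt[k + 1]) ** 2)
        x = cc[k] * z[k] + s[k] ** 2 * ft[k] ** 2    # chat_k * fhat_k, eq. (3.12)
        w = ft[k] * math.sqrt(p)                     # shat_k * fhat_k, eq. (3.10)
        fhat[k] = math.hypot(x, w)                   # Givens(x, w)
        chat[k], shat[k] = x / fhat[k], w / fhat[k]
    return chat, shat, fhat

# one unshifted LR step on a 3 x 3 example, checked densely
c, s = [0.8, 0.6], [0.6, 0.8]
f, d = [1.0, 2.0, 3.0], [2.0, 2.0, 2.0]
ft, dt, z, y = cholesky_dpss(c, s, f, d)
chat, shat, fhat = vtv(c, s, ft, dt, z, d)

n = len(f)
V = dpss(c, s, ft, dt, lower_only=True)
B = [[sum(V[k][i] * V[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
Brep = dpss(chat, shat, fhat, d)
# entries agree up to the (irrelevant) signs carried by shat
err = max(abs(abs(Brep[i][j]) - abs(B[i][j])) for i in range(n) for j in range(n))
```

The diagonal of `Brep` equals the diagonal of the dense product exactly, confirming that the diagonal part diag(d) is preserved by the LR step, as Theorem 3.1 asserts.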

4. Computation of Laguerre's shift. As indicated in (2.1), for Laguerre's shift we need to compute $S_1$ and $S_2$. It is easy to see that
$$S_1(\sigma) = \sum_{i=1}^n \frac{1}{\lambda_i - \sigma} = \mathrm{Tr}\left((A - \sigma I)^{-1}\right)$$
and
$$S_2(\sigma) = \sum_{i=1}^n \frac{1}{(\lambda_i - \sigma)^2} = \mathrm{Tr}\left((A - \sigma I)^{-2}\right).$$
So, if $A - \sigma I = VV^T$ is the Cholesky decomposition of the s.p.d. DPSS matrix $A - \sigma I$ and $W = V^{-1}$, then
$$S_1(\sigma) = \mathrm{Tr}(W^T W) = \|W\|_F^2$$
and
$$S_2(\sigma) = \mathrm{Tr}(W^T W W^T W) = \mathrm{Tr}(W W^T W W^T) = \|W W^T\|_F^2.$$
The aim is to compute $S_1$ and $S_2$ in a stable and efficient way.

Let us assume that $W = \mathrm{tril}(\mathrm{Giv}(\bar c, \bar s, \bar f)) + \mathrm{diag}(\bar d)$. We will later show that the algorithm derived under the above assumption is correct also when $W$ is not DPSS. One can check that $W$ is not DPSS when $\tilde d_i = 0$ for some $i = 2, \ldots, n-1$. In the next lemmas and remark, we will show that $S_1$ and $S_2$ can be computed in an efficient way.

LEMMA 4.1. If $A = \mathrm{Giv}(c,s,f) + \mathrm{diag}(d)$ is a symmetric diagonal-plus-semiseparable matrix, then
$$\|A\|_F^2 = \sum_{k=1}^n (c_k f_k + d_k)^2 + 2 \sum_{k=1}^{n-1} s_k^2 f_k^2,$$
where we assume that $c_n = 1$.

Proof. As $A$ is symmetric,
$$\|A\|_F^2 = \sum_{k=1}^n a_{kk}^2 + 2 \sum_{k=1}^{n-1} \sum_{j=k+1}^n a_{jk}^2.$$
It follows from the structure of $A$ that $a_{kk} = c_k f_k + d_k$ and $\sum_{j=k+1}^n a_{jk}^2 = s_k^2 f_k^2$.

Based on Lemma 4.1, we can derive the following expressions for $S_1$ and $S_2$.

LEMMA 4.2. If $W = \mathrm{tril}(\mathrm{Giv}(\bar c,\bar s,\bar f)) + \mathrm{diag}(\bar d)$ is a nonsingular lower triangular matrix such that $\bar c_k \ne 0$ for $k = 2, \ldots, n-1$, then
(4.1) $\|WW^T\|_F^2 = \sum_{k=1}^n (WW^T)_{kk}^2 + 2 \sum_{k=1}^{n-1} \left(\dfrac{(WW^T)_{k+1,k}}{\bar c_{k+1}}\right)^2$

and
(4.2) $\|W\|_F^2 = \sum_{k=1}^n (WW^T)_{kk}$,
where we assume that $\bar c_n = 1$.

Proof. $WW^T$ is an s.p.d. DPSS matrix. As a consequence of point 1. of Theorem 3.1, the Givens transformations of the representation of $W$ are preserved in the product $WW^T$. Hence, there exist two vectors $x, y \in \mathbb{R}^n$ such that
$$WW^T = \mathrm{Giv}(\bar c, \bar s, x) + \mathrm{diag}(y).$$
Applying Lemma 4.1 and the relations
$$\bar s_k x_k = \frac{(WW^T)_{k+1,k}}{\bar c_{k+1}} \quad \text{for } k = 1, \ldots, n-1, \qquad
\bar c_k x_k + y_k = (WW^T)_{kk} \quad \text{for } k = 1, \ldots, n$$
finishes the proof.

REMARK 4.3. The formula (4.1) of Lemma 4.2 can be generalized such that the condition $\bar c_k \ne 0$ for $k = 2, \ldots, n-1$ is no longer required. If we denote by $t(k)$ the smallest index $j$, $j > k$, such that $\bar c_j \ne 0$, then
(4.3) $\|WW^T\|_F^2 = \sum_{k=1}^n (WW^T)_{kk}^2 + 2 \sum_{k=1}^{n-1} \left(\dfrac{(WW^T)_{t(k),k}}{\bar c_{t(k)}}\right)^2.$
Since $\bar c_n = 1$, we always have $k < t(k) \le n$ and (4.3) is well defined.

In addition to $\tilde d_i \ne 0$ for $i = 1, \ldots, n$, such that $W$ is a DPSS matrix, let us assume from now on also that $c_k \ne 0$ for $k = 2, \ldots, n-1$ in the Cholesky factor $V$. Under these assumptions it follows from Lemma 4.2 that only the Givens transformations of $W$ and the diagonal and subdiagonal elements of $WW^T$ are required for computing $S_1$ and $S_2$. One can check that
$$(WW^T)_{kk} = \bar c_k^2 \sum_{i=1}^{k-1} (\bar s_{k-1} \cdots \bar s_i \bar f_i)^2 + (\bar c_k \bar f_k + \bar d_k)^2$$
and
$$(WW^T)_{k+1,k} = \bar c_{k+1} \bar c_k \bar s_k \sum_{i=1}^{k-1} (\bar s_{k-1} \cdots \bar s_i \bar f_i)^2 + \bar c_{k+1} \bar s_k \bar f_k (\bar c_k \bar f_k + \bar d_k).$$
Because $V$ is a lower triangular matrix and $W = V^{-1}$, the diagonal and subdiagonal elements of $W$ are of the form
(4.4) $w_{kk} = \bar c_k \bar f_k + \bar d_k = y_k^{-1}$, $k = 1, \ldots, n$,
(4.5) $w_{k+1,k} = \bar c_{k+1} \bar s_k \bar f_k = -\dfrac{c_{k+1} s_k \tilde f_k}{y_k y_{k+1}}$, $k = 1, \ldots, n-1$,
where $y_k = c_k \tilde f_k + \tilde d_k$ is the diagonal element of $V$ computed in Algorithm 3.2. If we define
$$r_k = \sum_{i=1}^{k-1} (\bar s_{k-1} \cdots \bar s_i \bar f_i)^2,$$
then we can write
$$(WW^T)_{kk} = \bar c_k^2 r_k + y_k^{-2}$$

and
$$\frac{(WW^T)_{k+1,k}}{\bar c_{k+1}} = \bar c_k \bar s_k r_k + \bar s_k \bar f_k\, y_k^{-1}.$$
For $r_k$, $k = 1, \ldots, n$, we use the recursion
$$r_{k+1} = \bar s_k^2 r_k + \bar s_k^2 \bar f_k^2$$
that starts with $r_1 = 0$. From the relations (4.4) and (4.5) it follows that, in order to compute the diagonal and the subdiagonal elements of $WW^T$, it is enough to know the Givens rotations and the diagonal and the subdiagonal elements of $W$. The following lemma, which follows from the results in [3], helps us to compute the necessary elements of $W$.

LEMMA 4.4. Let $V = \mathrm{tril}(\mathrm{Giv}(c,s,\tilde f)) + \mathrm{diag}(\tilde d)$ be a nonsingular lower triangular matrix such that $\tilde d_i \ne 0$ for $i = 1, \ldots, n$. Then $W = V^{-1}$ can be represented in the Givens-vector representation as $W = \mathrm{tril}(\mathrm{Giv}(\bar c, \bar s, \bar f)) + \mathrm{diag}(\bar d)$, where $\bar d_i = \tilde d_i^{-1}$ for $i = 1, \ldots, n$.

Hence, the diagonal elements of $W$ can be written as
(4.6) $w_{kk} = \bar c_k \bar f_k + \tilde d_k^{-1} = y_k^{-1}$, $k = 1, \ldots, n$.
If we rearrange the equations (4.5) and (4.6) into
$$\bar c_k \bar f_k = \frac{\tilde d_k - y_k}{\tilde d_k y_k}$$
and
(4.7) $\bar s_k \bar f_k = -\frac{c_{k+1} s_k \tilde f_k}{\bar c_{k+1} y_k y_{k+1}},$
then it follows that $\bar c_k$ and $\bar s_k$ form a Givens transformation such that
$$\begin{bmatrix} \bar c_k & \bar s_k \\ -\bar s_k & \bar c_k \end{bmatrix}
\begin{bmatrix} c_k \bar c_{k+1} y_{k+1} \\ c_{k+1} s_k \tilde d_k \end{bmatrix} = \begin{bmatrix} \ast \\ 0 \end{bmatrix}.$$
Again, for $k = n-1$ we assume that $c_n = \bar c_n = 1$. One can see by induction that $\bar c_k \ne 0$ for $k = n-1, \ldots, 2$, because we assumed that $c_k \ne 0$ for $k = 2, \ldots, n-1$, and $y_{k+1} = 0$ would contradict the fact that $A$ is s.p.d.

Now we can write an algorithm for the computation of $\|WW^T\|_F^2$ and $\|W\|_F^2$. In the algorithm, $\xi_k$ denotes $(WW^T)_{k+1,k}/\bar c_{k+1}$ and $\omega_k$ denotes the diagonal element $(WW^T)_{kk}$. These are the values that appear in equations (4.1) and (4.2) for $S_1$ and $S_2$. We use $\beta_k$ for the intermediate result (4.7). A careful implementation of the algorithm, where the values that appear in Algorithms 3.2 and 3.3 are computed only once, requires $31n + O(1)$ flops.

ALGORITHM 4.5. An algorithm that computes $S_1 = \|W\|_F^2$ and $S_2 = \|WW^T\|_F^2$, where $W = V^{-1}$ and $V = \mathrm{tril}(\mathrm{Giv}(c,s,\tilde f)) + \mathrm{diag}(\tilde d)$ is the Cholesky factor of an s.p.d. DPSS matrix $A = \mathrm{Giv}(c,s,f) + \mathrm{diag}(d)$, and $y = \mathrm{diag}(V)$. In the algorithm we assume $c_k \ne 0$ for $k = 2, \ldots, n-1$ and $c_n = \bar c_n = 1$.
function [S_1, S_2] = invtrace(c, s, f~, d~, y)
  c_n = c--_n = 1
  for k = n-1, ..., 2, 1:
    [c--_k, s--_k] = Givens(c_k c--_{k+1} y_{k+1}, c_{k+1} s_k d~_k)

  r_1 = 0
  for k = 1, ..., n-1:
    beta_k = -c_{k+1} s_k f~_k / (c--_{k+1} y_k y_{k+1})
    omega_k = c--_k^2 r_k + y_k^{-2}
    xi_k = c--_k s--_k r_k + beta_k / y_k
    r_{k+1} = s--_k^2 r_k + beta_k^2
  omega_n = r_n + y_n^{-2}
  S_1 = sum_{k=1}^n omega_k
  S_2 = sum_{k=1}^n omega_k^2 + 2 sum_{k=1}^{n-1} xi_k^2

What remains to be considered is the case where $W$ is not a DPSS matrix. If $\tilde d_k = 0$ for some $k = 2, \ldots, n-1$, then $W$ has a zero block $W(k+1:n, 1:k-1)$, see, e.g., [7, Lemma 2.5], and it is not a DPSS matrix anymore. However, Algorithm 4.5, which was derived under the assumption that $W$ is a DPSS matrix, returns correct values for $\|W\|_F^2$ and $\|WW^T\|_F^2$ in such a case as well. There are no divisions by $\tilde d_k$ in the algorithm that could cause problems; we only use $\tilde d_k$ to compute $y_k$. If we change $\tilde d_k$ in $V$, then one can see that, as long as $V$ is nonsingular, $\|W\|_F^2$ and $\|WW^T\|_F^2$ are continuous functions of $\tilde d_k$. So, the algorithm is correct also in the limit when $\tilde d_k = 0$.

Another restriction in Algorithm 4.5 is the assumption $c_k \ne 0$ for $k = 2, \ldots, n-1$. When this assumption is not valid, we can still compute $S_1$ and $S_2$ if we apply formula (4.3) from Remark 4.3. One can see that in the $k$th column of $W$ we need the elements $w_{kk}$ and $w_{t(k),k}$. Because $w_{jk} = 0$ for $k < j < t(k)$, Laguerre's shift can still be computed in O(n) flops.

5. Implementation. In this section we discuss some details of the implementation of the algorithm presented in the previous sections. The software can be downloaded freely at:

First we discuss how to deflate. If $s_k$ is small enough for some $k = 1, \ldots, n-1$, then we decouple the problem into two smaller problems with matrices $A(1:k, 1:k)$ and $A(k+1:n, k+1:n)$. In the special case when $s_{n-1}$ is small enough, we take $f_n + d_n$ as an approximation of an eigenvalue of $A$ and continue with vectors $c(1:n-2)$, $s(1:n-2)$, $f(1:n-1)$, and $d(1:n-1)$. As initial shift for the smaller problem we take $f_n + d_n$.

Another important problem that can appear during the implementation is the shift. If a shift in the QR algorithm is by chance an exact eigenvalue, then we can immediately extract this eigenvalue and continue with the smaller problem.
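The trace identities of Section 4 that underpin the shift computation can be sanity-checked on a 2 x 2 example, where the eigenvalues are available in closed form. This Python sketch is ours and uses $\sigma = 0$:

```python
import math

# Check S_1(0) = tr(A^{-1}) = ||W||_F^2 and S_2(0) = tr(A^{-2}) = ||W W^T||_F^2
# on the 2 x 2 s.p.d. matrix A = [[4, 2], [2, 3]], with A = V V^T, W = V^{-1}.
a11, a21, a22 = 4.0, 2.0, 3.0

# eigenvalues from the characteristic polynomial
tr, det = a11 + a22, a11 * a22 - a21 ** 2
disc = math.sqrt(tr ** 2 - 4.0 * det)
lam = [(tr + disc) / 2.0, (tr - disc) / 2.0]
S1_exact = sum(1.0 / l for l in lam)            # sum of 1/lambda_i
S2_exact = sum(1.0 / l ** 2 for l in lam)       # sum of 1/lambda_i^2

# Cholesky factor V of A and W = V^{-1} by forward substitution
v11 = math.sqrt(a11)
v21 = a21 / v11
v22 = math.sqrt(a22 - v21 ** 2)
w11, w22 = 1.0 / v11, 1.0 / v22
w21 = -v21 / (v11 * v22)                        # subdiagonal of the inverse

S1 = w11 ** 2 + w21 ** 2 + w22 ** 2             # ||W||_F^2
WWT = [[w11 ** 2, w11 * w21],
       [w21 * w11, w21 ** 2 + w22 ** 2]]
S2 = sum(x ** 2 for row in WWT for x in row)    # ||W W^T||_F^2
```

Both quantities come out equal to $\sum_i \lambda_i^{-1}$ and $\sum_i \lambda_i^{-2}$, as claimed; Algorithm 4.5 obtains the same numbers in O(n) flops without ever forming $W$ explicitly.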
This is not true in the Cholesky LR algorithm, where shifts $\sigma_k$ have to be strictly below the smallest eigenvalue $\lambda_n$; otherwise the Cholesky factorization does not exist. Without the Cholesky factorization we cannot compute $A_{k+1} = V_k^{-1} A_k V_k$ and deflate. In numerical computations, even when $\sigma_k < \lambda_n$, the Cholesky factorization can fail if the difference is too small. This can cause a problem, as usually Laguerre's shifts converge faster to the smallest eigenvalue than the elements $A_k(n,n)$. A good strategy is to insert a factor $\tau$ close to, but smaller than, 1 into (2.1) and use
$$\sigma_{k+1} = \sigma_k + \tau\, \frac{n}{S_1(\sigma_k) + \sqrt{(n-1)\left(n\, S_2(\sigma_k) - S_1^2(\sigma_k)\right)}}$$
as a shift in the new iteration. Based on our numerical experiments we suggest a value of $\tau$ slightly below 1. If it happens anyway that the shift is so large that the Cholesky factorization fails, we first reduce the shift by the factor $\tau$, and if the new shift is still too large, we start again with the shift 0.
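The damped update can be sketched as follows (Python, ours). The value tau = 0.999 is a placeholder, since the paper's tuned value is not reproduced here, and $S_1$, $S_2$ are evaluated directly from a known spectrum instead of via Algorithm 4.5:

```python
import math

def laguerre_shift(sigma, S1, S2, n, tau=0.999):
    """Damped Laguerre update for the next shift.  tau = 0.999 is our
    placeholder for the experimentally tuned factor slightly below 1."""
    return sigma + tau * n / (S1 + math.sqrt((n - 1) * (n * S2 - S1 ** 2)))

# model problem: S_1 and S_2 come from a known spectrum here; in the
# algorithm proper they are produced by Algorithm 4.5 in O(n) flops
lam = [1.0, 4.0, 9.0, 16.0]
n = len(lam)
sigma = 0.0
shifts = [sigma]
for _ in range(3):
    S1 = sum(1.0 / (l - sigma) for l in lam)
    S2 = sum(1.0 / (l - sigma) ** 2 for l in lam)
    sigma = laguerre_shift(sigma, S1, S2, n)
    shifts.append(sigma)
# the shifts increase monotonically toward the smallest eigenvalue
# lambda_n = 1 and, thanks to tau < 1, stay strictly below it
```

Because tau < 1, the shifts stay strictly below the smallest eigenvalue, so the shifted matrix remains positive definite and the Cholesky factorization cannot fail in exact arithmetic.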

The computation of Laguerre's shift requires more than half of the operations in one step of the Cholesky LR algorithm. We can save work by reusing the same shift once the shift improvement is small enough. Our numerical experiments show a speed-up of up to 15% if we stop improving the shift once the relative improvement $(\sigma_{k+1} - \sigma_k)/\sigma_k$ is small enough.

The eigenvalues should be computed from the smallest to the largest one; however, it might happen that $s_{n-1}$ is so small that we deflate, and the extracted eigenvalue is not the smallest one. This causes a problem in the next phase, as we use the extracted eigenvalue as initial shift and this shift is too large. The strategy from the previous paragraph overcomes this problem, and the shift goes to zero after two unsuccessful Cholesky factorizations.

At the end of Section 4 we proposed a modification of Algorithm 4.5 that handles the case $c_k = 0$ for some $k = 2, \ldots, n-1$. Without this modification we get zero divided by zero in such a situation. In practice we can implement a simpler solution. If we perturb $c_k$ into a tiny nonzero value whenever $c_k = 0$, then a small $c_k$ results in a small $\bar c_k$. These two quantities avoid the zero divided by zero problem in Algorithm 4.5 and we end up with accurate results.

6. Numerical results. The following numerical results were obtained with Matlab 7.0 running on a Pentium4 2.6 GHz Windows XP operating system. We compared the Cholesky LR algorithm with a Matlab implementation of the implicit QR algorithm for DPSS matrices [10] and with the Matlab function eig. Exact eigenvalues were computed in Mathematica 5 using variable precision. For all numerical examples in this section the same cutoff criterion is used for both Cholesky LR and implicit QR. By the maximum relative error we denote
$$\max_{1 \le i \le n} \frac{|\lambda_i - \tilde \lambda_i|}{\lambda_i},$$
where $\lambda_i$, $i = 1, \ldots, n$, are the exact eigenvalues of the test matrix and $\tilde \lambda_i$, $i = 1, \ldots, n$, the computed ones.

EXAMPLE 6.1. In our first example we use random s.p.d.
DPSS matrices of the form
$$A = \mathrm{diag}(1, \ldots, n) + \mathrm{triu}(uv^T, 1) + \mathrm{triu}(uv^T, 1)^T + \alpha I,$$
where $u$ and $v$ are vectors of uniformly distributed random entries on $[0,1]$, obtained by the Matlab function rand, and the shift $\alpha$ is such that the smallest eigenvalue of $A$ is 1. The condition numbers of these matrices are therefore approximately $n$. The exact eigenvalues of $A$ are computed in Mathematica using variable precision. Before using eig we compute all the elements of $A$ accurately in double precision, so that the initial data for all three methods are of full precision. The comparison is not completely fair, as in eig we first have to reduce the matrix to tridiagonal form, where additional numerical errors can occur.

The results in Table 6.1 show that the Cholesky LR method is competitive in accuracy with the other two methods. In most cases, especially for larger matrices, it is slightly more accurate than the implicit QR method. The comparison with eig shows that by exploiting the structure we can get more accurate results; in eig some accuracy is lost in the reduction to tridiagonal form. One step of the Cholesky LR method has approximately the same complexity as one step of the implicit QR method, but although Cholesky LR requires roughly 3.5 times more steps than the implicit QR method, it runs much faster. This is due to a more efficient Matlab implementation. The same holds for eig, which runs faster than Cholesky LR although it has $O(n^3)$ complexity, while the complexity of Cholesky LR is $O(n^2)$. The difference in the number of steps is also due to the fact that in implicit QR we may choose the shift more freely than in Cholesky LR, where the shifted matrix must remain positive definite.

EXAMPLE 6.2. We use the same construction of the test matrices as in Example 6.1. For $n = 200$ we generate 25 random matrices and compare the accuracy of the eigenvalues computed by the

TABLE 6.1. Comparison of the Cholesky LR method, implicit QR for DPSS matrices, and eig from Matlab on random s.p.d. DPSS matrices of sizes n = 50 to n = 500 and small condition numbers. The columns are: t: running time in seconds; steps: number of LR (QR) steps; error: the maximum relative error of the computed eigenvalues.

Cholesky LR method, implicit QR for DPSS matrices, and eig. Again, the exact eigenvalues of $A$ are computed in Mathematica using variable precision.

FIG. 6.1. Comparison of the Cholesky LR method, implicit QR for s.p.d. DPSS matrices, and eig from Matlab on 25 random s.p.d. matrices of size n = 200 (log10 of the maximum relative error per matrix index).

Results, ordered by the maximum relative error of the Cholesky LR method, are shown in Figure 6.1. We can see that the most accurate method for this particular class of matrices is the Cholesky

LR algorithm. The results from eig are comparable, while the results of the implicit QR are in general slightly worse.

EXAMPLE 6.3. In this example we use s.p.d. matrices $A = Q\, \mathrm{diag}(1:n)\, Q^T$, where $Q$ is a random orthogonal matrix, obtained in Matlab as orth(rand(n)). As in the previous examples, we compare the Cholesky LR method, implicit QR for DPSS matrices, and eig from Matlab. The difference from the previous examples is that now we have to reduce the matrix into a similar DPSS matrix before we can apply Cholesky LR or implicit QR. We do this using the algorithm of [14], where we choose the diagonal elements as random numbers distributed uniformly on $[0,1]$. There is a connection between the Lanczos method and the reduction into a similar DPSS matrix [11], which causes the largest eigenvalues of $A$ to be approximated by the lower right diagonal elements of the DPSS matrix. This is not good for the Cholesky LR method, where the smallest eigenvalues are computed first. Therefore, we apply a method that reverses the direction of the columns and rows of the DPSS matrix in linear time [12, Chapter 2, Section 8.1].

TABLE 6.2. Comparison of the Cholesky LR method, implicit QR for DPSS matrices, and eig from Matlab on random s.p.d. matrices of sizes n = 500 to n = 2000 with the exact eigenvalues 1, ..., n. The columns are: t: running time in seconds (time for LR and QR does not include the reduction into a DPSS matrix); steps: number of LR (QR) steps; error: the maximum relative error of the computed eigenvalues.

The results in Table 6.2 show that the eigenvalues of an s.p.d. matrix can be computed accurately using a reduction into a DPSS matrix followed by the Cholesky LR method or the implicit QR method. For larger matrices, the Cholesky LR algorithm tends to be slightly more accurate than the implicit QR. Since both methods use the same reduced DPSS matrices, this implies that Cholesky LR itself is more accurate than implicit QR.
The computational times are hard to compare because of the different implementations, and because the time for eig includes the reduction to tridiagonal form while the reduction to DPSS matrices is excluded from the times of the Cholesky LR and the implicit QR method.

EXAMPLE 6.4. We use the same construction of the test matrices as in Example 6.3. For $n = 1000$ we generate 25 random matrices and compare the accuracy of the eigenvalues computed by the Cholesky LR method, implicit QR for DPSS matrices, and eig. For the reduction into a similar DPSS matrix we use the same approach as in Example 6.3. Results are shown in Figure 6.2. Similar to the previous examples, the Cholesky LR method is comparable with eig and usually gives slightly better results than the implicit QR method.

Similar tests were performed on matrices with multiple eigenvalues and with eigenvalues $\lambda_i = 2^{-i}$, $i = 1, \ldots, n$. The results obtained by the three algorithms are comparable in these cases as well.

7. Conclusions. We have presented a version of the Cholesky LR algorithm that exploits the structure of positive definite DPSS matrices. We propose to combine the method with Laguerre's

FIG. 6.2. Comparison of the Cholesky LR method, implicit QR for DPSS matrices, and eig from Matlab on 25 random s.p.d. matrices of size n = 1000 with the exact eigenvalues 1, ..., n (log10 of the maximum relative error per matrix index).

shifts. It seems natural to compare the method to the implicit QR for DPSS matrices [10]. In Cholesky LR the eigenvalues are computed from the smallest to the largest eigenvalue; therefore, the method is very appropriate for applications where one is interested in a few of the smallest or the largest eigenvalues. If the complete spectrum is computed, Cholesky LR is more expensive than implicit QR but, as it tends to be slightly more accurate, it presents an alternative. The proposed method combined with the reduction to DPSS matrices [14] can also be applied to a general s.p.d. matrix.

REFERENCES

[1] Bini, D.A., Gemignani, L., Pan, V.: QR-like algorithms for generalized semiseparable matrices. Tech. Report 1470, Department of Mathematics, University of Pisa, 2003
[2] Chandrasekaran, S., Gu, M.: A divide and conquer algorithm for the eigendecomposition of symmetric block-diagonal plus semi-separable matrices. Numer. Math. 96 (2004)
[3] Delvaux, S., Van Barel, M.: Structures preserved by matrix inversion. Report TW 414, Department of Computer Science, K.U.Leuven, Leuven, Belgium, December 2004
[4] Fasino, D.: Rational Krylov matrices and QR-steps on Hermitian diagonal-plus-semiseparable matrices. To appear in Numer. Linear Algebra Appl.; also available from ftp://ftp.dimi.uniud.it/pub/fasino/bari.ps
[5] Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd Edition. The Johns Hopkins University Press, Baltimore, 1996
[6] Grad, J., Zakrajšek, E.: LR algorithm with Laguerre shifts for symmetric tridiagonal matrices. Comput. J. 15 (1972)
[7] Fiedler, M., Vavřín, Z.: Generalized Hessenberg matrices. Linear Algebra Appl. 380 (2004)
[8] Mastronardi, N., Van Camp, E., Van Barel, M.: Divide and conquer type algorithms for computing the eigendecomposition of diagonal plus semiseparable matrices.
Techical Report 7 (5/2003), Istituto per le Applicazioi del 15

16 Calcolo M. Picoe, Cosiglio Nazioale delle Ricerche, Rome, Italy, To appear i Numerical Algorithms. [9] Parlett, B.N.: The symmetric eigevalue problem. Classics i Applied Mathematics, Pretice-Hall, Eglewood Cliffs, N.J., 1980 [10] Va Camp, E., Delvaux, S., Va Barel, M., Vadebril, R., Mastroardi, N.: A implicit QR-algorithm for symmetric diagoal-plus-semiseparable matrices, Report TW 419, Departmet of Computer Sciece, K.U.Leuve, Leuve, Belgium, March [11] Va Camp, E., Va Barel, M., Vadebril, R., Mastroardi, N.: Orthogoal similarity trasformatio of a symmetric matrix ito a diagoal-plus-semiseparable oe with free choice of the diagoal. Structured Numerical Liear Algebra Problems: Algorithms ad Applicatios, Cortoa, Italy, September 19-24, cortoa04/program.htm [12] Vadebril, R.: Semiseparable matrices ad the symmetric eigevalue problem. PhD, K.U.Leuve, Leuve, May 2004 [13] Vadebril, R., Va Barel, R., Mastroardi, N.: A ote o the represetatio ad defiitio of semiseparable matrices. Report TW 393, Departmet of Computer Sciece, K.U.Leuve, Leuve, Belgium, May To appear i Numer. Liear Algebra Appl. [14] Vadebril, R., Va Camp, R., Va Barel, M., Mastroardi, N.: Orthogoal similarity trasformatio of a symmetric matrix ito a diagoal-plus-semiseparable oe with free choice of the diagoal. Report TW 398, Departmet of Computer Sciece, K.U.Leuve, Leuve, Belgium, August 2004 [15] Wilkiso, J.: Algebraic eigevalue problem. Numerical Mathematics ad Scietific Computatio, Oxford Uiversity Press, Oxford,
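The basic iteration underlying the method can be illustrated on a plain dense matrix. The following sketch (in Python/NumPy rather than the Matlab used in the experiments) applies the unshifted Cholesky LR step A_k = L_k L_k^T, A_{k+1} = L_k^T L_k, which preserves the spectrum because each step is a similarity transformation; it deliberately omits both the Laguerre shifts and the O(n)-per-step exploitation of the DPSS structure that are the contributions of the paper, and the function name `cholesky_lr` is ours, not the authors'.

```python
import numpy as np

def cholesky_lr(A, iters=200):
    """Unshifted dense Cholesky LR iteration.

    Each step factors A_k = L L^T (L lower triangular) and forms
    A_{k+1} = L^T L = L^{-1} A_k L, a similarity transformation,
    so all iterates share the eigenvalues of A.
    """
    Ak = A.copy()
    for _ in range(iters):
        L = np.linalg.cholesky(Ak)  # A_k = L @ L.T
        Ak = L.T @ L                # A_{k+1} = L.T @ L
    return Ak

# Small s.p.d. test matrix with prescribed spectrum {1, ..., 5},
# built as Q diag(1..5) Q^T with a random orthogonal Q
# (the same construction idea as in the experiments above).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = Q @ np.diag([1.0, 2.0, 3.0, 4.0, 5.0]) @ Q.T

Ak = cholesky_lr(A)
# For well-separated eigenvalues the off-diagonal entries decay and the
# diagonal of A_k converges to the eigenvalues.
print(np.sort(np.diag(Ak)))  # approx. [1. 2. 3. 4. 5.]
```

On a dense matrix each step costs O(n^3); the point of the DPSS structure preservation shown in the paper is that the same iteration can be carried out in linear time per step.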


Inverse Matrix. A meaning that matrix B is an inverse of matrix A. Iverse Matrix Two square matrices A ad B of dimesios are called iverses to oe aother if the followig holds, AB BA I (11) The otio is dual but we ofte write 1 B A meaig that matrix B is a iverse of matrix

More information

Lecture 8: October 20, Applications of SVD: least squares approximation

Lecture 8: October 20, Applications of SVD: least squares approximation Mathematical Toolkit Autum 2016 Lecturer: Madhur Tulsiai Lecture 8: October 20, 2016 1 Applicatios of SVD: least squares approximatio We discuss aother applicatio of sigular value decompositio (SVD) of

More information

The picture in figure 1.1 helps us to see that the area represents the distance traveled. Figure 1: Area represents distance travelled

The picture in figure 1.1 helps us to see that the area represents the distance traveled. Figure 1: Area represents distance travelled 1 Lecture : Area Area ad distace traveled Approximatig area by rectagles Summatio The area uder a parabola 1.1 Area ad distace Suppose we have the followig iformatio about the velocity of a particle, how

More information

Lainiotis filter implementation. via Chandrasekhar type algorithm

Lainiotis filter implementation. via Chandrasekhar type algorithm Joural of Computatios & Modellig, vol.1, o.1, 2011, 115-130 ISSN: 1792-7625 prit, 1792-8850 olie Iteratioal Scietific Press, 2011 Laiiotis filter implemetatio via Chadrasehar type algorithm Nicholas Assimais

More information

ESTIMATES OF THE NORM OF THE ERROR IN FOM AND GMRES. Ax = b

ESTIMATES OF THE NORM OF THE ERROR IN FOM AND GMRES. Ax = b ESTIMATES OF THE NORM OF THE ERROR IN FOM AND GMRES GÉRARD MEURANT 1. Itroductio. We cosider solvig a liear system Ax = b where A is a o sigular matrix of order with the full orthogoalizatio method (FOM)

More information

Discrete Orthogonal Moment Features Using Chebyshev Polynomials

Discrete Orthogonal Moment Features Using Chebyshev Polynomials Discrete Orthogoal Momet Features Usig Chebyshev Polyomials R. Mukuda, 1 S.H.Og ad P.A. Lee 3 1 Faculty of Iformatio Sciece ad Techology, Multimedia Uiversity 75450 Malacca, Malaysia. Istitute of Mathematical

More information

Hoggatt and King [lo] defined a complete sequence of natural numbers

Hoggatt and King [lo] defined a complete sequence of natural numbers REPRESENTATIONS OF N AS A SUM OF DISTINCT ELEMENTS FROM SPECIAL SEQUENCES DAVID A. KLARNER, Uiversity of Alberta, Edmoto, Caada 1. INTRODUCTION Let a, I deote a sequece of atural umbers which satisfies

More information

f(x) dx as we do. 2x dx x also diverges. Solution: We compute 2x dx lim

f(x) dx as we do. 2x dx x also diverges. Solution: We compute 2x dx lim Math 3, Sectio 2. (25 poits) Why we defie f(x) dx as we do. (a) Show that the improper itegral diverges. Hece the improper itegral x 2 + x 2 + b also diverges. Solutio: We compute x 2 + = lim b x 2 + =

More information