A Ferris-Mangasarian Technique Applied to Linear Least Squares Problems

J. E. Dennis, Trond Steihaug

CRPC-TR98740
May 1998

Center for Research on Parallel Computation
Rice University
6100 South Main Street
CRPC - MS 41
Houston, TX

Submitted May 1998

A Ferris-Mangasarian Technique Applied to Linear Least Squares Problems

J. E. Dennis*
Computational and Applied Mathematics
Rice University
Houston TX

Trond Steihaug†
Department of Informatics
University of Bergen
Høyteknologisenteret
N-5020 Bergen, Norway

May 1, 1998

Abstract

This note specializes to linear least squares problems an approach suggested by Ferris and Mangasarian [4] for solving constrained optimization problems on parallel computers. It will be shown here that this specialization leads to an algorithm which is mathematically equivalent to an acceleration and convergence forcing modification of the block Jacobi iteration applied to the normal equations. The resulting algorithm is a promising way to speed up a parallel multisplitting algorithm of Renaut [9] for linear least squares. Renaut's algorithm is related to a specialization of part of the Ferris and Mangasarian approach.

* Research supported by DOE FG03-93ER25178, CRPC CCR, AFOSR F, The Boeing Company, and the REDI Foundation.
† Research supported by The Research Council of Norway and VISTA, a research cooperation between the Norwegian Academy of Science and Den norske stats oljeselskap a.s (Statoil).

1 Introduction

This note specializes to linear least squares problems an approach suggested by Ferris and Mangasarian [4] for solving constrained optimization problems on parallel computers. It will be shown here that this specialization leads to an acceleration and convergence forcing mechanism for the block Jacobi iteration applied to the normal equations. We do not form the full normal equations, but our numerical results hint that the condition number of the normal equations affects the number of iterations required for a given problem.

The target problems are assumed to be large. The technique suggested has for each iteration two basic stages, both of which involve the solution of smaller linear least squares problems. The first stage is to partition the optimization variables for the problem and then on separate processors to solve the smaller least squares problems in which only those variables in a single partition are allowed to move. The partitioning of the variables is at the discretion of the user, and hence it can be used to select the size of the problem to be solved on each processor. The user may make other considerations in partitioning the variables, such as consistent scaling of the partitioned problems. It is well known that the iteration defined by incrementing each variable by the amount indicated in this first stage and then updating the residual for the next iteration is the block Jacobi iteration applied to the normal equations for the original linear least squares problem. See Björck [1] and the references therein. The classical Jacobi iteration alone may not converge [5, 8].

The second stage of each iteration is to compute a new iterate by a synchronization step that involves solving a least squares problem in a smaller space of surrogate variables identified by the first stage. If the surrogate variables are taken only to be the increments produced by solving the subspace least squares problems, then we will call this the Jacobi method with subspace correction, and it always converges for full rank problems. It turns out that this method has already been considered by Renaut [9]. She calls it Optimal Recombination Least Squares Multisplitting (ORLSMS), and she gives further developments based on the multisplitting method of O'Leary and White [7].

For simplicity, we will restrict ourselves here to the case that at each of the two stages we accurately solve the smaller minimization problems, but this is neither necessary to the theory nor even possible in the general setting considered in [4].

Mangasarian [6] and Ferris and Mangasarian [4] introduced in stage one auxiliary variables, which they called "forget-me-not" variables. The contribution of this paper is to show how to use the "forget-me-not" variables to make some very promising reductions in the number of iterations needed if only the ORLSMS or the Jacobi method with subspace correction is used. Our results hint that perhaps the choice we advocate for least squares may be useful back in the general minimization setting considered in [4].

If the size of the problem and the number of processors so indicate, the problem in the surrogate variables can in turn be attacked by the partitioning technique of the first stage, and this can be continued to reduce the dimension of the problem to be solved in the synchronization step until the number of surrogate variables is manageable.

In the next section, we present some preliminaries to set the stage for the new Jacobi-Ferris-Mangasarian algorithm in Section 3. Section 4 presents some promising numerical results, and Section 5 is devoted to a discussion and conclusions concerning this approach.

2 Preliminaries

2.1 The linear least squares problem

Let A be an m × n real matrix, m ≥ n, and b ∈ R^m. Let M be an m × m positive definite weighting matrix. The weighted linear least-squares problem is:

    min_{x ∈ R^n} ||Ax − b||_M,   where ||y||_M^2 = y^T M y.    (1)

Let the columns of A be partitioned into g blocks, A = [A_1 A_2 … A_g], where A_i is m × n_i. Further, let x be partitioned consistently into blocks x_1, x_2, …, x_g. The least squares problem (1) is equivalent to

    min { || sum_{i=1}^g A_i x_i − b ||_M : x_i ∈ R^{n_i}, i = 1, …, g }.    (2)
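To make the block structure concrete, here is a minimal NumPy sketch (our own illustration, not the authors' code; the helper name partition_columns is ours) of the natural-order column partitioning that is also used in the experiments of Section 4. The sketches in this note all take M = I, as in those experiments, so every weighted subproblem is an ordinary least squares solve.

```python
import numpy as np

def partition_columns(n, g):
    """Split column indices 0..n-1 into g contiguous blocks (natural order)."""
    # np.array_split allows g not to divide n evenly.
    return np.array_split(np.arange(n), g)

# Example: a 10 x 8 matrix A split into g = 4 column blocks A_1, ..., A_4.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 8))
blocks = partition_columns(A.shape[1], 4)
A_blocks = [A[:, idx] for idx in blocks]   # A = [A_1 A_2 ... A_g]
```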

We want to distribute the variables to the available processors and solve a smaller subproblem on each processor in parallel.

2.2 Variable distribution

To solve the weighted linear least squares problem (2), we distribute the variables among the available processors. In this section we will assume that each group is assigned to its own processor. Let x^k be an approximation to the solution x* of (1), and partition x^k into x_1^k, x_2^k, …, x_g^k.

Parallelization: for i = 1, 2, …, g, solve for x_i^{k+1} ∈ R^{n_i}:

    min_{x ∈ R^{n_i}} || A_i x − (b − sum_{j=1, j≠i}^g A_j x_j^k) ||_M.    (3)

Following the notation and derivation in [3], we introduce the direction d_i^k = x_i^{k+1} − x_i^k, and note that successive residuals satisfy

    r^{k+1} = sum_{j=1}^g A_j x_j^{k+1} − b = r^k + sum_{j=1}^g A_j d_j^k.

Then the i-th least squares subproblem (3) is: solve for d_i^k ∈ R^{n_i}:

    min_{d ∈ R^{n_i}} || A_i d + r^k ||_M,   i = 1, 2, …, g,    (4)

and the i-th block of the new approximate solution is

    x_i^{k+1} = x_i^k + d_i^k,   i = 1, 2, …, g.    (5)

For d_i ∈ R^{n_i}, introduce the vector \bar{d}_i ∈ R^n which is obtained by starting with a zero vector and placing the nonzero entries of d_i in the positions corresponding to the column indices in A of A_i. Define the direction d^k:

    d^k = sum_{j=1}^g \bar{d}_j^k.
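A minimal sketch of one sweep of (4)-(5), again with M = I (the function name and the use of np.linalg.lstsq for the per-processor solves are our illustration):

```python
import numpy as np

def block_jacobi_sweep(A, blocks, x, r):
    """One block Jacobi sweep: solve subproblem (4) for each block (in
    parallel in the intended setting) and assemble d^k from the embedded
    steps \bar{d}_i."""
    d = np.zeros(A.shape[1])
    for idx in blocks:
        d_i, *_ = np.linalg.lstsq(A[:, idx], -r, rcond=None)  # (4) with M = I
        d[idx] = d_i                                          # embed as \bar{d}_i
    return x + d, r + A @ d, d     # update (5) and the residual r^{k+1}
```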

Then (5) can be written as x^{k+1} = x^k + d^k. This is [1] the classical block Jacobi method on the normal equations

    A^T M A x = A^T M b.    (6)

Assume that A has full rank. Then the following result says that the block Jacobi method converges if A^T M A is sufficiently 'block diagonally dominant'.

Theorem 1 Let A have full rank, and let C be a block diagonal matrix with i-th block A_i^T M A_i. The corresponding block Jacobi method will converge to x*, a solution of (1), if 2C − A^T M A is positive definite.

Proof: Corollary 2.1 of [5].

Even when 2C − A^T M A is not positive definite, we can force convergence by the following two small modifications of the block Jacobi method. Let f : R^n → R be defined by

    f(x) = ||Ax − b||_M^2 = x^T A^T M A x − 2 (A^T M b)^T x + b^T M b.    (7)

We can force convergence by introducing a simple linesearch:

    α_k = argmin_α f(x^k + α d^k).    (8)

Theorem 2 Let A have full rank. Given x^k, choose x^{k+1} = x^k + α_k d^k, where α_k is defined by (8). Then lim_{k→∞} x^k = x*.

Proof: Chapter 6 of [2].

Of course, this linesearch functions as a synchronization step, and so the attractive parallelism in the Jacobi iteration is compromised. Note that α_k is the easy solution of the 1-dimensional least squares problem min_α ||(A d^k) α + r^k||_M. We will introduce a more general linesearch in the next section.

Finally, we end this section with a simple convergence result that follows from the application of a degenerate form of the Ferris-Mangasarian 2nd stage.

Theorem 3 Let A have full rank. Given x^k, choose

    x^{k+1} = argmin { f(x^k + d^k), f(x^k + \bar{d}_i^k), i = 1, …, g };    (9)

then lim_{k→∞} x^k = x*, where x* is the unique solution of (1).

Proof: Since A is full rank, f is strongly convex and the result follows from Theorem 2.3 of [4] or Theorem 5 of the next section.
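Both convergence-forcing devices are cheap to write down. The sketch below (ours, with M = I; the closed form for (8) is the one-variable normal equation) implements the exact linesearch (8) and the degenerate selection step (9):

```python
import numpy as np

def exact_linesearch(A, d, r):
    """Solve min_a ||(A d) a + r||, the 1-D least squares problem behind (8)."""
    Ad = A @ d
    return -(Ad @ r) / (Ad @ Ad)      # one-variable normal equation

def degenerate_fm_step(A, b, x, d, dbar_list):
    """Selection step (9): the best of the full Jacobi step x + d^k and the
    g single-block steps x + \bar{d}_i^k, measured by f(y) = ||Ay - b||^2."""
    f = lambda y: float(np.sum((A @ y - b) ** 2))
    candidates = [x + d] + [x + db for db in dbar_list]
    return min(candidates, key=f)
```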

3 The Ferris-Mangasarian Correction Step

In the last section, we saw that the Jacobi iteration converges when the block cross terms in the coefficient matrix A^T M A are weak enough to be neglected. This section will introduce simple techniques for incorporating the influence of these cross terms into the iteration. Unfortunately, these all take the form of a synchronization step, and so parallelism will be compromised.

Mangasarian [6] and Ferris and Mangasarian [4] introduced a synchronization step in which the step Δx^k = x^{k+1} − x^k is chosen by approximately minimizing f(x^k + Δx) in the subspace spanned by \bar{d}_i^k, i = 1, …, g. In the following, we call this a subspace-correction step. Thus, the block Jacobi iteration can be seen as choosing adaptively a single surrogate variable \bar{d}_i^k to represent the subspace spanned by the i-th block of variables x_i^k in the correction step. The subspace-correction iteration step is then chosen to be the step that provides approximately the most decrease from x^k for f in the space of Jacobi-surrogate variables. This sort of dimensional reduction is common in engineering design through so-called surrogate variable or reduced basis techniques. The difference here is that the surrogate variables are being chosen adaptively by the Jacobi iteration rather than chosen a priori by engineering judgement.
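As a sketch (ours, M = I), the subspace-correction step recombines the g embedded Jacobi steps optimally; when the \bar{d}_i come from (4) this is exactly the Jacobi method with subspace correction, i.e. Renaut's ORLSMS:

```python
import numpy as np

def subspace_corrected_step(A, r, dbar_list):
    """Recombine the surrogate directions \bar{d}_1, ..., \bar{d}_g:
    solve min_s ||(A D) s + r|| with D = [\bar{d}_1 ... \bar{d}_g]."""
    D = np.column_stack(dbar_list)
    AD = A @ D
    s, *_ = np.linalg.lstsq(AD, -r, rcond=None)
    return D @ s, AD @ s            # step in x and the residual increment
```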

3.1 Supplementary variables

The subspaces spanned by the column blocks A_i can be supplemented by what Ferris and Mangasarian call "forget-me-not" variables. For our setting, a more appropriate name would be "look-ahead" variables. Thus, we will use the more neutral designation "supplementary variables".

Beginning with a single full space vector, the procedures of the previous section are used to obtain supplementary variables to expand each subspace. Unfortunately, this requires us to introduce still more complicated notation, which we will give now. Then we will discuss strategies for choosing supplementary variables that we have found to be so advantageous as to justify the added fuss.

Let I be the n × n identity matrix and let I_i be the n × n_i matrix formed from columns of the n × n identity matrix so that A I_i = A_i. Let the supplementary vector p ∈ R^n be partitioned accordingly, and define the n × (g − 1 + n_i) matrix P_i:

    P_i = [ \bar{p}_1 … \bar{p}_{i-1}  I_i  \bar{p}_{i+1} … \bar{p}_g ].    (10)

For \tilde{n}_i = n_i + g − 1, define the m × \tilde{n}_i matrix

    \tilde{A}_i = A P_i = [ A\bar{p}_1 … A\bar{p}_{i-1}  A_i  A\bar{p}_{i+1} … A\bar{p}_g ].    (11)

For a given supplementary vector p^k ∈ R^n, the g subproblems (3) are replaced by: solve for \tilde{d}_i^k ∈ R^{\tilde{n}_i}:

    min_{\tilde{d}} || \tilde{A}_i^k \tilde{d} + r^k ||_M,    (12)

where \tilde{A}_i^k is defined in (11) for the given vector p^k. The step d^k ∈ R^n is

    d^k = sum_{i=1}^g P_i^k \tilde{d}_i^k.    (13)
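A sketch of the assembly of P_i and \tilde{A}_i from (10)-(11) (the helper name build_P_i is ours and is reused in the later sketches):

```python
import numpy as np

def build_P_i(n, blocks, p, i):
    """Assemble P_i of (10): \bar{p}_j for each j != i, and I_i for block i."""
    cols = []
    for j, idx in enumerate(blocks):
        if j == i:
            I_i = np.zeros((n, len(idx)))
            I_i[idx, np.arange(len(idx))] = 1.0    # columns of I selecting block i
            cols.append(I_i)
        else:
            p_bar = np.zeros((n, 1))
            p_bar[idx, 0] = p[idx]                 # \bar{p}_j: block j of p, embedded
            cols.append(p_bar)
    return np.hstack(cols)                          # n x (n_i + g - 1)

# \tilde{A}_i of (11) is then simply A @ build_P_i(n, blocks, p, i).
```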

Of course, the inclusion of p in the algorithmic mix raises the question of how to choose an ideal p for the iteration. That question turns out to have a simple answer, which we give in the following theorem and then follow with some algorithmic modifications aimed at approximating the ideal p.

Theorem 4 Let x^k ∈ R^n be arbitrary, and set p^k = e^k = x* − x^k. Then each P_i \tilde{d}_i^k = e^k, and x* = x^k + (1/g) d^k.

Proof: To simplify notation, we will consider the case i = 1. Let v = ((e_1^k)^T, 1, 1, …, 1)^T ∈ R^{\tilde{n}_1}. First we will show that \tilde{d}_1^k = v solves (12). Notice that

    \tilde{A}_1^k = A P_1 = [ A_1  A_2 e_2^k … A_g e_g^k ],

and P_1 v = e^k. Thus,

    \tilde{A}_1^k v + r^k = A P_1 v + r^k = A e^k + r^k = A x* − b,

and if \tilde{d}_1^k is any other solution, then it must give the same residual. Thus, by the unicity of x*,

    P_1 \tilde{d}_1^k = P_1 v = e^k

is unique.

So, if we could choose p^k = x* − x^k = e^k, then each P_i \tilde{d}_i^k would be e^k. Of course, if we knew e^k, we would be finished, but this points to taking p^k to be our best estimate of e^k. The best way we have thought of to do this at a particular iteration is by taking p^k = x^k − x^{k−1}, and even this crude approximation to e^k leads to a significant reduction in iterations.

However, a more elaborate scheme is reasonable, because if p^k does not depend on k, and if the subproblems are solved using a Cholesky factorization of the \tilde{n}_i × \tilde{n}_i matrix \tilde{A}_i^T M \tilde{A}_i, then the Cholesky factors are saved and solving the subproblems requires only a back substitution (forward and backward substitution). This suggests that we might profitably exploit the linear algebra savings to try a predictor/corrector scheme defined by keeping p = p^{k−1} fixed for several predictor iterations to obtain, say, x_pred without having to redo any factorizations. The sole purpose of these predictor iterations is to obtain a better approximation x_pred − x^k ≈ e^k to use as p^k in a corrector iteration to obtain x^{k+1}. We will give numerical results supporting this procedure.
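A minimal sketch of the reuse pattern just described (ours; M = I and full column rank of \tilde{A}_i assumed): factor once while p is frozen, then each subsequent subproblem costs only two triangular solves.

```python
import numpy as np

def factor_once(A_tilde):
    """Cholesky factor L with L L^T = \tilde{A}_i^T \tilde{A}_i, computed once
    while the supplementary vector p (hence \tilde{A}_i) is held fixed."""
    return np.linalg.cholesky(A_tilde.T @ A_tilde)

def solve_again(L, A_tilde, r):
    """min ||\tilde{A}_i d + r|| via the saved factor: a forward and a backward
    substitution (np.linalg.solve is used here for brevity; a dedicated
    triangular solver would do)."""
    y = np.linalg.solve(L, -(A_tilde.T @ r))    # L y = -\tilde{A}_i^T r
    return np.linalg.solve(L.T, y)              # L^T d = y
```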

3.2 The complete algorithm

At this point, we have obtained the full set of supplementary variables from the block Jacobi subproblems supplemented by the projections of p. To finish specializing the Ferris-Mangasarian technique to linear least squares, we will explain the subspace-correction step, and then we will give the complete algorithm.

For a given \bar{d}_i ∈ R^n, define the n × g matrix

    D = [ \bar{d}_1 … \bar{d}_g ].    (14)

Consider the m × g matrix

    \hat{A} = A D = [ A\bar{d}_1 … A\bar{d}_g ].

Then the columns of \hat{A} are the full set of surrogate variables. We solve the least squares problem in this set of variables to get the subspace corrected step, which is given by: solve for s^k ∈ R^g:

    min || \hat{A} s^k + r^k ||_M.    (15)

We use the vector d^k defined by (13), and the new iterate is x^{k+1} = x^k + D^k s^k, where D^k is defined in (14).

Before we give the Ferris-Mangasarian convergence theorem for the more general nonlinear optimization algorithm, we pause to sum up all the specializations we have suggested in the following

Algorithm: Jacobi-Ferris-Mangasarian

    Subdivide A into g blocks. Choose x^0. Compute r^0 = A x^0 − b.
    for k = 0 step 1 until convergence do
        Choose vector p^k. This may involve several predictor iterations.
        for i = 1, …, g in parallel
            Compute P_i^k in (10). Let \tilde{A}_i^k = A P_i^k.
            Solve for \tilde{d}_i^k: min || \tilde{A}_i^k \tilde{d}_i^k + r^k ||_M.
        Compute d^k = sum_{i=1}^g P_i^k \tilde{d}_i^k.
        Compute D^k in (14) (with columns \bar{d}_i^k = P_i^k \tilde{d}_i^k) and \hat{A}^k = A D^k.
        Solve for s^k: min || \hat{A}^k s^k + r^k ||_M.
        x^{k+1} = x^k + D^k s^k.
        r^{k+1} = r^k + \hat{A}^k s^k.
        Check for convergence.
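Putting the pieces together, here is a dense, single-process sketch of the whole iteration (ours, not the authors' Matlab code): M = I, the subproblems solved by np.linalg.lstsq, the start-up choice p^0 = (1, …, 1)^T, and thereafter p^k = x^k − x^{k−1}. It reuses build_P_i from the sketch in Section 3.1.

```python
import numpy as np

def jfm_solve(A, b, g, tol=1e-10, max_iter=500):
    """Jacobi-Ferris-Mangasarian sketch with M = I and p^k = x^k - x^{k-1}."""
    n = A.shape[1]
    blocks = np.array_split(np.arange(n), g)
    x = np.zeros(n)
    r = A @ x - b
    p = np.ones(n)                            # start-up choice p^0 = (1, ..., 1)^T
    for k in range(max_iter):
        D = np.zeros((n, g))                  # columns P_i^k \tilde{d}_i^k, cf. (14)
        for i in range(g):
            P_i = build_P_i(n, blocks, p, i)                    # (10)
            d_t, *_ = np.linalg.lstsq(A @ P_i, -r, rcond=None)  # (11)-(12)
            D[:, i] = P_i @ d_t
        A_hat = A @ D
        s, *_ = np.linalg.lstsq(A_hat, -r, rcond=None)          # correction (15)
        step = D @ s
        x, r, p = x + step, r + A_hat @ s, step   # p^{k+1} = x^{k+1} - x^k
        if np.linalg.norm(A.T @ r) <= tol * np.linalg.norm(A.T @ b):
            return x, k + 1
    return x, max_iter
```

For example, x, iters = jfm_solve(A, b, g=8) runs the iteration until the normal-equations residual A^T r^k is small relative to A^T b.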

Algorithm: Predictor iterations

    Let \tilde{A}_i = A P_i^{k−1} and P_i = P_i^{k−1} for i = 1, 2, …, g.
    Let z^0 = x^k − x^{k−1}; v^0 = r^k + A z^0.
    for j = 0, 1, …, l − 1 do
        for i = 1, …, g in parallel
            Solve for \tilde{d}_i^j: min || \tilde{A}_i \tilde{d}_i^j + v^j ||_M.
        Compute d^j = sum_{i=1}^g P_i \tilde{d}_i^j.
        Compute D^j and \hat{A}^j = A D^j.
        Solve for s^j: min || \hat{A}^j s^j + v^j ||_M.
        z^{j+1} = z^j + D^j s^j.
        v^{j+1} = v^j + \hat{A}^j s^j.
    Let p^k = z^l.
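A sketch of this predictor loop (ours; M = I, full column rank of the frozen \tilde{A}_i assumed, and build_P_i reused from Section 3.1). The point of the construction shows up in the inner loop: the factors are computed once, and every later subproblem is a triangular solve.

```python
import numpy as np

def predictor_p(A, blocks, x, x_prev, r, l):
    """Run l predictor iterations with P_i frozen at p = x^k - x^{k-1};
    return z^l, the improved estimate of e^k to be used as p^k."""
    n = A.shape[1]
    g = len(blocks)
    p = x - x_prev
    Ps = [build_P_i(n, blocks, p, i) for i in range(g)]
    QRs = [np.linalg.qr(A @ P_i) for P_i in Ps]    # factor each \tilde{A}_i once
    z = p.copy()                                   # z^0 = x^k - x^{k-1}
    v = r + A @ z                                  # v^0 = r^k + A z^0
    for _ in range(l):
        D = np.zeros((n, g))
        for i, (P_i, (Q, R)) in enumerate(zip(Ps, QRs)):
            d_t = np.linalg.solve(R, -(Q.T @ v))   # reuse the saved factors
            D[:, i] = P_i @ d_t
        A_hat = A @ D
        s, *_ = np.linalg.lstsq(A_hat, -v, rcond=None)
        z, v = z + D @ s, v + A_hat @ s
    return z                                       # p^k = z^l
```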

The following result follows from Ferris and Mangasarian [4], Theorem 2.3, by noticing that if A has full rank then the function f in (7) is strongly convex.

Theorem 5 Assume that {p^k} is bounded independent of k. If A has full rank, then lim_{k→∞} x^k = x*, where x* is the unique solution of (1).

4 Numerical results

The convergence of the methods will be illustrated on a class of randomly generated least squares problems (1). The m × n coefficient matrix is A = QD + εR, where Q is m × n with orthonormal columns, D is an n × n diagonal matrix, and R is an m × n matrix. The elements in R and on the diagonal of D are randomly distributed. For small values of ε the matrix A^T A is diagonally dominant. The elements in the m-vector b in the least squares problem (1) are either random (for 'non-zero residual' problems), or the vector is chosen to be b = Ac, where c is a random n-vector, for 'zero residual' problems. The weight matrix M is the identity matrix. All tests are run on a SPARC with Sun-4 floating-point, using Matlab.
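A sketch of such a test problem generator (ours; the parameters follow the Table 1 configuration described below, where D = 0 and ε = 1 so that A reduces to the uniform random matrix R):

```python
import numpy as np

def table1_problem(m=280, n=256, zero_residual=True, seed=0):
    """Random least squares problem in the style of Section 4, Table 1:
    A = Q D + eps R with D = 0 and eps = 1, i.e. A = R, uniform on [-1, 1]."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-1.0, 1.0, (m, n))
    if zero_residual:
        b = A @ rng.standard_normal(n)     # b = A c: zero residual problem
    else:
        b = rng.standard_normal(m)         # random b: non-zero residual problem
    return A, b
```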

In Table 1, variations of the Ferris and Mangasarian technique are compared to the Gauss-Seidel iteration and the Jacobi iteration with the subspace-corrected step on the normal equations (6). For this problem D = 0, ε = 1, and the elements in R are uniformly distributed in [−1, 1]. Note that this problem gets more diagonally dominant as m increases. For the particular case reported here, m = 280 and n = 256. The condition number (the square of the ratio of the largest and smallest singular values of A) is … The block Jacobi method does not converge without a linesearch or subspace correction for the reported values of g. The stopping criterion is that the ℓ2 difference between the exact and approximate solutions is not more than 10^{−6}. In all numerical experiments the columns of the matrix are partitioned into g groups based on the natural order: the first n/g columns form the first group, the second n/g columns form the next group, etc. The starting point is x^0 = 0.

Table 1: Number of iterations. (The columns are: g; Gauss-Seidel; Jacobi with subspace correction; and Jacobi-Ferris-Mangasarian with the four choices of supplementary variables "p=1", "F&M", "p=ds", and "Pred", reported separately for the zero residual and non-zero residual cases. Most entries did not survive the source extraction; the only legible ones show counts exceeding 20000 in both cases.)

The columns in Table 1 give the number of iterations to achieve the desired accuracy. The Gauss-Seidel method is applied to the normal equations without forming the normal equations (see for example [1, 3]). For the Jacobi method we use the subspace corrected step (15) to guarantee convergence. The column marked "p=1" gives the iterations using the supplementary variables defined by p^k = (1, …, 1)^T ∈ R^n.

Ferris and Mangasarian [4] suggest using the vector p^k given by

    [p_i^k]_j = (1/2) [ (∇f(x^k + \bar{e}_i) − ∇f(x^k))_i ]_j = [ A_i^T M A \bar{e}_i ]_j,   for j = 1, 2, …, n_i; i = 1, 2, …, g,    (16)

where e ∈ R^{n_i} is a vector with all ones, \bar{e}_i ∈ R^n is its embedding into block i, and [·]_j denotes the j-th component of an n_i-vector. Note that this p^k does not depend on the iteration index k. The column marked "F&M" gives the iterations using the supplementary variables defined by (16). All tests indicate very little difference between the Ferris and Mangasarian choice of p^k and p^k = (1, …, 1)^T.
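For M = I, the choice (16) is easy to compute once and for all; a sketch (ours, reusing the natural-order blocks):

```python
import numpy as np

def fm_supplementary_p(A, blocks):
    """The Ferris-Mangasarian choice (16) with M = I: block i of p is
    A_i^T A \bar{e}_i, where \bar{e}_i is all ones on block i, zero elsewhere."""
    n = A.shape[1]
    p = np.zeros(n)
    for idx in blocks:
        e_bar = np.zeros(n)
        e_bar[idx] = 1.0
        p[idx] = A[:, idx].T @ (A @ e_bar)   # independent of the iterate x^k
    return p
```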

The number of iterations for the algorithm that uses supplementary variables defined by choosing p^k = x^k − x^{k−1} = D^k s^k is in the column marked "p=ds". In the column marked "Pred" the supplementary variables are determined using one predictor step to compute the new p^k.

All tests indicate very small variations between the zero and non-zero residual cases. This small difference indicates that the governing condition number for the methods is the condition number of the normal equations (6). For p^k = x^k − x^{k−1} the number of iterations does not increase with the number of groups g on most problems. However, for the method that uses one predictor step, we see that the number of iterations in some cases increases with the number of groups. This is investigated further in Table 2. Here we have chosen ε = 1, the diagonal elements in D are uniformly distributed in [1, 2], and the elements of R are in the interval [0, 1]. Further, m = 26 and n = 24.

Table 2: Number of iterations and predictor iterations. (The columns are: g; "p=ds"; and the predictor variant for several values of the number l of predictor iterations, l = 1, 2, 3, … The individual entries did not survive the source extraction.)

A predictor iteration has the same cost in terms of arithmetic operations as one iteration of the algorithm with p^k independent of the iteration index. If the cost of computing the QR factorizations of the smaller systems is neglected, then using l predictor iterations has the same cost as l + 1 iterations using "p=ds". If we consider g = 8 in Table 2, we see that it is more efficient to use 2 or 3 predictor iterations than to use p^k = x^k − x^{k−1}. If we use l = 2 predictor iterations and compare with the results in Table 1 for "Pred" (l = 1), the number of iterations decreases from 2955 to 306 for the zero residual case and g = 32. For the non-zero residual case the number of iterations decreased from 3000 to 444 when l = 2.

5 Conclusions

This paper raises more questions than it answers, and we hope to pursue some of these questions soon. We believe that we have found a valuable choice of the "forget-me-not" variables of Ferris and Mangasarian, and the behavior of the predictor/corrector fairly cries out for an adaptive way to decide when, and how many, predictor iterations to do. At this point, we can only say that more groups means more predictor iterations. It would also be interesting to know whether our choice would be useful in the general nonlinear optimization case. We suspect it would. These questions will have to wait in order that we can make the deadline to have our paper considered for the issue to honor Olvi Mangasarian on his 65th birthday. We join all of Olvi's friends, not just contributors to this volume, in wishing Olvi and Claire many more happy and healthy years.

References

[1] Å. Björck, Numerical methods for least squares problems, SIAM, Philadelphia, 1996.

[2] J. E. Dennis, Jr. and R. B. Schnabel, Numerical methods for unconstrained optimization and nonlinear equations, Prentice-Hall, Englewood Cliffs, NJ, 1983.

[3] J. E. Dennis, Jr. and T. Steihaug, On the successive projections approach to least squares problems, SIAM J. Numer. Anal. 23 (1986) 717-733.

[4] M. C. Ferris and O. L. Mangasarian, Parallel variable distribution, SIAM J. Optimization 4 (1994) 815-832.

[5] H. B. Keller, On the solution of singular and semidefinite linear systems by iteration, SIAM J. Numer. Anal. 2 (1965) 281-290.

[6] O. L. Mangasarian, Parallel gradient distribution in unconstrained optimization, SIAM J. Control and Optimization 33 (1995) 1916-1925.

[7] D. P. O'Leary and R. E. White, Multi-splittings of matrices and parallel solution of linear systems, SIAM J. Algebraic Discrete Methods 6 (1985) 630-640.

[8] J. M. Ortega, Introduction to parallel and vector solution of linear systems, Plenum Press, New York, 1988.

[9] R. A. Renaut, A parallel multisplitting solution of the least squares problem, Numerical Linear Algebra with Applications 5 (1998) 11-31.
