
On Bivariate Hensel Lifting and its Parallelization

Laurent Bernardin
Institut für Wissenschaftliches Rechnen, ETH Zürich

Abstract

We present a new parallel algorithm for performing linear Hensel lifting of bivariate polynomials over a finite field. The sequential version of our algorithm has a running time of O(mn^4) for lifting m univariate polynomials of degree n with respect to a bivariate polynomial of degree n in both variables, assuming that we use classical polynomial multiplication. Our parallel algorithm further reduces this complexity to O(mn^4/s) on s processing nodes, assuming that s < n. We also present an asymptotically faster algorithm, which has a complexity of O((ln m) n^2 ln n) operations in the coefficient field, using fast polynomial multiplication and O(n ln m) processors. Experimental results on a massively parallel, distributed memory machine confirm that our algorithm scales well on high numbers of processing nodes.

1 Introduction

Given polynomials f_1, ..., f_m in F_p[x], pairwise relatively prime, and a primitive, square-free polynomial f in F_p[x, y] such that

    f ≡ prod_{i=1}^m f_i   (mod y),

bivariate Hensel lifting aims to construct polynomials f_1^{(k)}, ..., f_m^{(k)} in F_p[x, y] such that

    ∀i:  f_i^{(k)} ≡ f_i   (mod y)
    f ≡ prod_{i=1}^m f_i^{(k)}   (mod y^k)

If k is sufficiently large, the f_i^{(k)} obtained can be used to compute a factorization of f over F_p.

We restrict ourselves to the case of bivariate polynomials over a finite field. In practice, bivariate polynomials are common. Moreover, state-of-the-art algorithms for factoring polynomials in more than two variables rely on multiple factorizations of bivariate polynomials [4, 5]. For these reasons, it is important to have a fast way of lifting bivariate polynomials.

Parallel factorization algorithms for sparse multivariate polynomials have been presented in [9]. We use a dense lifting approach, which is most effective for bivariate polynomials. Only as the number of variables increases does it become more and more important to use sparse techniques to prevent exponential behavior in the number of variables. As mentioned above, we restrict ourselves to polynomials over finite fields, although the same ideas can be applied over any ring.
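
To fix notation, here is a minimal sketch of the input to this problem (ours, not part of the paper's Maple implementation; the prime p = 13 and the example polynomial are arbitrary). A bivariate polynomial over F_p is stored as a dictionary mapping exponent pairs to coefficients, and the starting congruence f ≡ f_1 f_2 (mod y) is checked directly:

    # A bivariate polynomial over F_p stored as {(i, j): c}, meaning c * x^i * y^j.
    p = 13  # toy prime, chosen arbitrarily

    def reduce_mod_y(f):
        """Return f(x, 0) as a dict {i: c} of univariate coefficients."""
        return {i: c % p for (i, j), c in f.items() if j == 0 and c % p}

    def mul_univariate(a, b):
        """Classical product of two univariate polynomials over F_p."""
        out = {}
        for i, ca in a.items():
            for j, cb in b.items():
                out[i + j] = (out.get(i + j, 0) + ca * cb) % p
        return {i: c for i, c in out.items() if c}

    # f = (x^2 + y + 1)(x + y + 2), expanded over F_13: the polynomial to be lifted.
    f = {(3, 0): 1, (2, 0): 2, (2, 1): 1, (1, 1): 1, (1, 0): 1,
         (0, 2): 1, (0, 1): 3, (0, 0): 2}
    f1 = {2: 1, 0: 1}                       # image x^2 + 1 of the first factor mod y
    f2 = {1: 1, 0: 2}                       # image x + 2 of the second factor mod y
    assert reduce_mod_y(f) == mul_univariate(f1, f2)   # f = f1 * f2 (mod y)

The lifting then reconstructs the y-dependence of the two factors from these univariate images.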

There are two approaches to Hensel lifting. Linear lifting starts with polynomials f_i = f_i^{(0)} and iteratively constructs polynomials f_i^{(j)} such that

    f_i^{(j)} ≡ f_i^{(j-1)}   (mod y^j)
    f ≡ prod_{i=1}^m f_i^{(j)}   (mod y^{j+1})

The bound k is reached after k lifting steps. Quadratic lifting also starts with f_i = f_i^{(0)} but constructs polynomials f_i^{(2^j)} such that

    f_i^{(2^j)} ≡ f_i^{(2^{j-1})}   (mod y^{2^{j-1}})
    f ≡ prod_{i=1}^m f_i^{(2^j)}   (mod y^{2^j})

The bound k is reached after log_2 k lifting steps. If classical multiplication is used, the asymptotic complexity of both approaches is equivalent [7]. Parallelizing the quadratic algorithm is tempting, as it involves large polynomial multiplications that can easily be parallelized using Karatsuba's algorithm. However, in practice, the sequential quadratic lifting algorithm is not able to compete with the linear algorithm, at least for bivariate polynomials of degree up to 1000 in both variables. For this reason we will concentrate on a parallel version of linear Hensel lifting.

Above we assumed that we can evaluate f(x, y) at y = 0 such that deg_x(f(x, y)) = deg_x(f(x, 0)) and such that f(x, 0) is square-free. If this does not hold, we compute the translated polynomial f(x, y + α), which satisfies the above conditions at y = 0. It is shown in [6] that this translation can be done using O(n^3) operations in the coefficient field (with n a bound on the degree of f in both variables). We assume in the following that the coefficient field F_p contains such an α. For more details on this selection process, and on the case where F_p does not contain a suitable evaluation value, see [2].

2 The Sequential Lifting Algorithm

Linear lifting algorithms for dense bivariate polynomials over finite fields given in [3, 10, 8] need O(mn^5) operations in F_p, with m the number of factors and n a bound on the degree in each variable of the polynomial to factor. We present a sequential algorithm that is an order of magnitude faster than these, needing only O(mn^4) coefficient operations. We then describe our parallel version of this algorithm.

We are given polynomials f in F_p[x, y] and f_i^{(0)} in F_p[x], i = 1..m, such that

    f ≡ prod_{i=1}^m f_i^{(0)}   (mod y)                                        (1)

with deg_x(f) ≤ n and deg_y(f) ≤ n. Assume we want to lift the f_i^{(0)}, i = 1..m, up to degree n in y, i.e. compute f_i^{(n)}, i = 1..m, such that

    f ≡ prod_{i=1}^m f_i^{(n)}   (mod y^{n+1})                                  (2)

and

    ∀i = 1..m:  f_i^{(0)} ≡ f_i^{(n)}   (mod y)                                 (3)
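
The assumption underlying (1) is precisely the evaluation-point condition discussed above: f(x, 0) must retain the full degree in x and be square-free, which can be tested with one evaluation and one gcd computation. The following sketch (ours; dense little-endian coefficient lists over the toy prime p = 13, with ad-hoc helper names) lists the admissible evaluation points α for the example polynomial introduced earlier:

    p = 13

    def trim(a):
        while a and a[-1] % p == 0:
            a.pop()
        return [c % p for c in a]

    def poly_mod(a, b):
        """Remainder of a divided by b over F_p (b must be non-zero)."""
        a, b = trim(list(a)), trim(list(b))
        inv_lead = pow(b[-1], p - 2, p)
        while len(a) >= len(b):
            q = a[-1] * inv_lead % p
            shift = len(a) - len(b)
            for i, c in enumerate(b):
                a[shift + i] = (a[shift + i] - q * c) % p
            a = trim(a)
        return a

    def poly_gcd(a, b):
        while b:
            a, b = b, poly_mod(a, b)
        return a

    def eval_at_y(f, alpha, degx):
        """f given as {(i, j): c}; return the coefficient list of f(x, alpha)."""
        out = [0] * (degx + 1)
        for (i, j), c in f.items():
            out[i] = (out[i] + c * pow(alpha, j, p)) % p
        return trim(out)

    def good_evaluation_point(f, degx, alpha):
        fa = eval_at_y(f, alpha, degx)
        if len(fa) - 1 != degx:                 # the degree in x must be preserved
            return False
        dfa = trim([i * c for i, c in enumerate(fa)][1:])    # d/dx of f(x, alpha)
        return len(poly_gcd(fa, dfa)) == 1      # square-free iff the gcd is a unit

    f = {(3, 0): 1, (2, 0): 2, (2, 1): 1, (1, 1): 1, (1, 0): 1,
         (0, 2): 1, (0, 1): 3, (0, 0): 2}       # the same example as before
    print([a for a in range(p) if good_evaluation_point(f, 3, a)])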

Using linear Hensel lifting, we want to compute, at step k, the f_i in F_p[x, y], i = 1..m, such that

    f ≡ prod_{i=1}^m f_i   (mod y^{k+1})                                        (4)

with

    f_i ≡ f_i^{(k-1)}   (mod y^k)                                               (5)

For (5) to hold, we set

    f_i := f_i^{(k-1)} + σ_i y^k                                                (6)

with σ_i in F_p[x]. Plugging (6) into (4), we see that lifting from y^k to y^{k+1} amounts to solving for the σ_i's in

    (f - prod_{i=1}^m f_i^{(k-1)}) / y^k  ≡  sum_{i=1}^m σ_i prod_{j=1, j≠i}^m f_j^{(0)}   (mod y)        (7)

(7) is a univariate Diophantine equation in F_p[x] that can be solved by first precomputing the solutions τ_i of

    sum_{i=1}^m τ_i prod_{j=1, j≠i}^m f_j^{(0)} = 1                             (8)

Now we can easily compute the σ_i at each step by multiplying the τ_i with the left-hand side of (7) and reducing modulo f_i^{(0)}. This means that solving the Diophantine equation (7) has a cost of O(m) multiplications in the coefficient ring F_p[x] and thus a total cost of O(mM(n)) operations in F_p per step, where M(n) is the complexity of multiplying two univariate polynomials of degree n.

Before we can solve the Diophantine equation, we have to compute the left-hand side of (7):

    (f - prod_{i=1}^m f_i^{(k-1)}) / y^k   (mod y)                              (9)

We notice that only the coefficient of y^k in the numerator is needed, denoted by

    C_k = [y^k] (f - prod_{i=1}^m f_i^{(k-1)})                                  (10)

We will now discuss how to efficiently compute c_k such that

    C_k = [y^k] f - c_k                                                         (11)

Our idea is to compute the product of the f_i^{(k-1)} modulo y^{k+1} at each step, reusing sub-products already computed in the previous step. At step k we thus have to compute

    P = prod_{i=1}^m f_i^{(k-1)}   (mod y^{k+1})                                (12)

In the following we will denote the coefficient of y^j in f_i^{(k-1)} by u_i^{[j]} (noting that u_i^{[k-1]} is the correction σ_i computed at the previous lifting step).
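
Returning for a moment to the Diophantine equation: to make the precomputation (8) and the per-step reduction concrete, here is a small sketch for the case m = 2 over a toy prime (our illustration, using the extended Euclidean algorithm over F_p[x]; little-endian coefficient lists, helper names ours):

    p = 13

    def trim(a):
        a = [c % p for c in a]
        while a and a[-1] == 0:
            a.pop()
        return a

    def add(a, b):
        n = max(len(a), len(b))
        return trim([(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) for i in range(n)])

    def mul(a, b):
        out = [0] * (len(a) + len(b) - 1) if a and b else []
        for i, ca in enumerate(a):
            for j, cb in enumerate(b):
                out[i + j] = (out[i + j] + ca * cb) % p
        return trim(out)

    def divmod_poly(a, b):
        a = trim(a)
        q = [0] * max(len(a) - len(b) + 1, 0)
        inv = pow(b[-1], p - 2, p)
        while len(a) >= len(b):
            c, d = a[-1] * inv % p, len(a) - len(b)
            q[d] = c
            a = trim([a[i] - c * b[i - d] if 0 <= i - d < len(b) else a[i] for i in range(len(a))])
        return trim(q), a

    def xgcd(a, b):
        """Return (g, s, t) with s*a + t*b = g over F_p[x]."""
        r0, r1, s0, s1, t0, t1 = trim(a), trim(b), [1], [], [], [1]
        while r1:
            qq, r = divmod_poly(r0, r1)
            r0, r1 = r1, r
            s0, s1 = s1, add(s0, [(-c) % p for c in mul(qq, s1)])
            t0, t1 = t1, add(t0, [(-c) % p for c in mul(qq, t1)])
        return r0, s0, t0

    # Precomputation (8) for m = 2: tau1*f2 + tau2*f1 = 1.
    f1, f2 = [1, 0, 1], [2, 1]                   # x^2 + 1 and x + 2 over F_13
    g, s, t = xgcd(f2, f1)                       # s*f2 + t*f1 = g (a non-zero constant)
    inv_g = pow(g[0], p - 2, p)
    tau1 = [c * inv_g % p for c in s]
    tau2 = [c * inv_g % p for c in t]

    # At step k, given the left-hand side C of (7), the corrections are
    # sigma_i = C * tau_i mod f_i^(0):
    C = [5, 7]                                   # a made-up left-hand side, for illustration
    sigma1 = divmod_poly(mul(C, tau1), f1)[1]
    sigma2 = divmod_poly(mul(C, tau2), f2)[1]
    assert add(mul(sigma1, f2), mul(sigma2, f1)) == C

For m > 2 factors the τ_i can be obtained analogously (they form a partial fraction decomposition of 1 with respect to the f_i^{(0)}), and each lifting step then costs the O(m) multiplications and reductions noted above.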

We will compute the product (12) iteratively, factor by factor. We define

    P_q := prod_{i=1}^q f_i^{(k-1)}   (mod y^{k+1})                             (13)

with c_k = [y^k] P_m. The product of the first two factors gives

    P_2 = sum_{l=0}^{k} ( sum_{q=0}^{l} u_1^{[q]} u_2^{[l-q]} ) y^l

and for successive i we can compute P_i as

    P_i = sum_{l=0}^{k} ( sum_{q=0}^{l} p_{i-1}^{[q]} u_i^{[l-q]} ) y^l

with p_{i-1}^{[l]} := [y^l] P_{i-1}. Note that p is used exclusively for simplifying the notation and that the p's are different for varying i and k.
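
As a small sketch of (13) (ours; scalars modulo a toy prime stand in for the coefficients u_i^{[j]}, which in the algorithm are univariate polynomials in x), the product P_m can be built factor by factor, truncated modulo y^{k+1}. The sketch recomputes everything for each k; the whole point of the discussion that follows is to avoid exactly this recomputation by reusing the previous step's products:

    p = 101

    def mul_trunc(a, b, k):
        """Product of two truncated power series in y, keeping terms up to y^k."""
        out = [0] * (k + 1)
        for i, ca in enumerate(a[: k + 1]):
            for j, cb in enumerate(b[: k + 1 - i]):
                out[i + j] = (out[i + j] + ca * cb) % p
        return out

    def c_k(factors, k):
        """c_k = [y^k] P_m, with P_q the product of the first q factors mod y^(k+1)."""
        P = factors[0][: k + 1] + [0] * max(0, k + 1 - len(factors[0]))
        for f in factors[1:]:
            P = mul_trunc(P, f, k)          # P_q from P_(q-1), as in (13)
        return P[k]

    # three factors, with coefficient lists in y chosen arbitrarily
    factors = [[3, 1, 4], [2, 7], [5, 0, 6]]
    print([c_k(factors, k) for k in range(5)])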

Moving from step k-1 to step k, each factor f_i^{(k-1)} has exactly one coefficient, u_i^{[k-1]}, that was not available at the previous step, and each partial product P_i needs exactly one additional coefficient, p_i^{[k]}. We see that the only products we have to compute that have not already been computed in the previous step are

    u_1^{[0]} u_2^{[k-1]},  u_1^{[k-1]} u_2^{[0]}   and   u_1^{[1]} u_2^{[k-1]}, u_1^{[2]} u_2^{[k-2]}, ..., u_1^{[k-1]} u_2^{[1]}

for the first two factors and

    p_{i-1}^{[0]} u_i^{[k-1]},  p_{i-1}^{[k-1]} u_i^{[0]}   and   p_{i-1}^{[1]} u_i^{[k-1]}, p_{i-1}^{[2]} u_i^{[k-2]}, ..., p_{i-1}^{[k]} u_i^{[0]}

for each subsequent factor. Thus the total number of multiplications needed at step k of the lifting is equal to

    M_k = (k + 1) + (m - 2)(k + 2)                                              (14)

Supposing that we want to lift our univariate image polynomials up to degree n, we need a total number of

    sum_{k=1}^{n} M_k = (m - 1)(n^2 + 5n)/2 - n                                 (15)

multiplications in the coefficient ring F_p[x] for computing the left-hand sides of the arising Diophantine equations. Thus we get a total running time of O(mn^2) multiplications in F_p[x] for the linear lifting algorithm. Supposing that the degrees in x of the factors are bounded by n, the total complexity of linear Hensel lifting in terms of field operations is O(mn^2 M(n)). Assuming classical multiplication, we get a complexity of O(mn^4).

In order to store the output, i.e. the m factors lifted to degree n, we need memory for roughly mn^2 elements of F_p. In addition to these, we need to store the products P_i, which require extra memory for roughly (m - 2)n^2 elements of F_p. This means that the amount of required working memory is less than the amount required to store the result.

3 The Parallel Lifting Algorithm

We will now discuss how to implement this algorithm in parallel. Each step needs the computation of a sum of products of univariate polynomials. We will distribute this computation evenly across the available processing nodes. One node will be reserved for collecting the results from the slave nodes and for solving the Diophantine equation at each step. The parallel linear Hensel lifting algorithm is outlined in Table 1.

At steps (D) and (G), the slave nodes need to compute a convolution of the form

    sum_{q=0}^{k} a_q b_{k-q}

This is done by distributing the products from this sum evenly across the available slave nodes. The partial sums are then added together on the master node. The cost of this addition is O(ns) and, although it could be reduced to O(n ln s) using a binary tree shaped algorithm, the efficiency gain would be marginal, as the cost of this step is comparatively small.

Note that while the master node solves the Diophantine equation necessary to lift the factors at step k, the slave nodes are already working on the convolution product that the master node will need in order to lift the factors at step k+1. This gives us a nice computation overlap and prevents the master node from ever having to spin idly, waiting for results from the slave nodes.
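
The following sketch illustrates how the convolution needed at steps (D) and (G) is split across s slave nodes (ours; plain function calls stand in for the message passing, and scalars mod p stand in for the degree-n polynomials):

    def slave_partial(a, b, k, slave, s, p):
        """Partial sum of one slave node: every s-th product of the convolution."""
        return sum(a[q] * b[k - q] for q in range(slave, k + 1, s)) % p

    def master_convolution(a, b, k, s, p):
        """The master adds the s partial sums returned by the slaves."""
        return sum(slave_partial(a, b, k, slave, s, p) for slave in range(s)) % p

    p, k, s = 101, 9, 4
    a = [3 * q + 1 for q in range(k + 1)]
    b = [7 * q + 2 for q in range(k + 1)]
    assert master_convolution(a, b, k, s, p) == sum(a[q] * b[k - q] for q in range(k + 1)) % p

In the algorithm each product a_q b_{k-q} is a multiplication of two univariate polynomials of degree n, which is what makes distributing these products worthwhile.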

(A)  Master: Input f in F_p[x, y] and f_i^{(0)} in F_p[x], i = 1..m.
     Initialize f_i = f_i^{(0)} for i = 1..m and p_i = prod_{j=1}^i f_j^{(0)} for i = 2..m.
     Precompute the τ_i from (8).
(B)  Send the f_i, i = 1..m, and the p_i, i = 2..m, from the master to the slaves.
(C)  Iterate steps (C)-(J) for k from 1 to n.
     Master: compute the u_i^{[k]}, i = 1..m, from c_k = p_m^{[k]} via equation (7);
     compute u_1^{[k]} u_2^{[0]} and u_1^{[0]} u_2^{[k]}; compute u_1^{[1]} u_2^{[k]} + u_1^{[k]} u_2^{[1]}.
     Slaves: compute u_1^{[1]} u_2^{[k-1]} + u_1^{[2]} u_2^{[k-2]} + ... + u_1^{[k-1]} u_2^{[1]}.
(D)  Send u_1^{[1]} u_2^{[k-1]} + ... + u_1^{[k-1]} u_2^{[1]} from the slaves to the master.
(E)  Send the u_i^{[k]}, i = 1..m, from the master to the slaves.
(F)  Master: update p_2.
(G)  Iterate steps (G)-(J) for i from 3 to m.
     Master: compute p_{i-1}^{[0]} u_i^{[k]} and p_{i-1}^{[k]} u_i^{[0]}.
     Slaves: compute p_{i-1}^{[1]} u_i^{[k-1]} + ... + p_{i-1}^{[k-1]} u_i^{[1]}.
(H)  Send p_{i-1}^{[1]} u_i^{[k-1]} + ... + p_{i-1}^{[k-1]} u_i^{[1]} from the slaves to the master.
(I)  Master: update p_i.
(J)  Send p_i^{[k]} and p_i^{[k-1]} from the master to the slaves.

Table 1: Parallel Algorithm

At step k of each iteration, the master node has to perform m multiplications and m divisions of univariate polynomials of degree n in order to solve the Diophantine equation, plus 4 + 3(m - 2) multiplications of univariate polynomials of degree n to compute its share of the convolutions. This amounts to a total cost of O(mnM(n)) operations in F_p in order to lift the univariate image polynomials up to degree n. Assuming a number of s slave nodes, each one has to compute O(mk/s) multiplications of univariate polynomials of degree n at step k of the lifting. The total work of a single slave node sums up to O((mn^2/s) M(n)) operations in F_p, or O(mn^4/s) operations in F_p assuming classical polynomial multiplication.

4 Experimental Results

We have implemented our algorithm on a massively parallel, distributed memory machine, an Intel Paragon, using a version of Maple that has been extended with message passing primitives [1]. Table 2 summarizes the timings from lifting two degree-n image polynomials up to degree 2n in the second variable. Such a lifting is needed for the factorization of a bivariate polynomial of degree 2n in both variables. Our examples are over the coefficient field Z_3. Times are given in wall-clock (real time) seconds.

[Table 2: Paragon Timings. For each problem size n and each number of processing nodes, the table lists the wall-clock time in seconds, the speedup factor and the parallel efficiency.]

The speedup factor is computed as

    Speedup = (Time on one node) / (Time on s nodes)

and we define efficiency as

    Efficiency = Speedup / (Number of nodes)
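
In other words (a trivial sketch with made-up timings, only to make the two definitions explicit):

    def speedup(time_one_node, time_s_nodes):
        return time_one_node / time_s_nodes

    def efficiency(time_one_node, time_s_nodes, s):
        return speedup(time_one_node, time_s_nodes) / s

    print(speedup(6400.0, 400.0), efficiency(6400.0, 400.0, 16))   # 16.0 and 1.0 (i.e. 100%)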

The n = 500 example corresponds to the factorization of a dense bivariate polynomial with degree 1000 in both variables. Its expanded form would have over a million terms. On a state-of-the-art workstation, a Digital Alpha 500/333, the same lifting takes 099s, compared to 704s on 4 nodes of the Paragon. For n = 1000, we could not get sequential timings on the Paragon, as our environment imposes a limit on job run times. However, we ran it on our Digital Alpha workstation, where it took 44 hours. Using 8 nodes on the Paragon, we could reduce this time to a matter of hours.

The sequential algorithm performs slightly worse than the expected O(n^4). This is due to the overhead of Maple's garbage collecting memory manager, which increases with the memory usage. Distributing the computations across more nodes, we also distribute the memory usage. This explains the super-linear speedups that we encountered.

5 An Asymptotic Improvement

A further improvement of our lifting algorithm is to compute the product P_m in parallel. We achieve this using a binary tree structured algorithm to combine the f_i two by two. We assume that m is a power of two; if that is not the case, we pad using dummy factors. First we define T_{1,i} := f_i. Next we can compute

    T_{2,i} = T_{1,2i-1} T_{1,2i}   (mod y^{k+1})

where, at step k, only the new coefficient

    ω_{2,i} = [y^k] T_{2,i} = sum_{q=0}^{k} [y^q] T_{1,2i-1} · [y^{k-q}] T_{1,2i}

has to be computed, and of its products only [y^0] T_{1,2i-1} · [y^k] T_{1,2i} and [y^k] T_{1,2i-1} · [y^0] T_{1,2i} involve coefficients of y^k that become available only during step k. We can see that the root T_{log_2(m)+1, 1} is the product of all m factors modulo y^{k+1}. Now we can compute the successive levels

    T_{j,i} = T_{j-1,2i-1} T_{j-1,2i}   (mod y^{k+1})

with

    ω_{j,i} = [y^k] T_{j,i} = sum_{q=0}^{k} [y^q] T_{j-1,2i-1} · [y^{k-q}] T_{j-1,2i}

At each step k, similarly to our initial algorithm, the master node computes the σ_i and those products of the ω_{j,i} that involve coefficients of y^k, while the slave nodes compute the remaining products of the ω_{j,i}. As in the initial algorithm, we overlap the computation of the ω_{j,i} on the slave nodes with the computation of the σ_i and parts of the ω_{j,i} of the previous step on the master.

This algorithm reduces the overall complexity to O((ln m) ((k + ln s)/s) n M(n)). Assuming O(n ln m) processors, the running time is O((ln m) n M(n)). Further assuming that we use fast univariate polynomial multiplication (M(n) = n ln n), we can claim an asymptotic running time of O((ln m) n^2 ln n) operations in F_p using O(n ln m) processors.
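
The tree combination itself can be sketched as follows (ours; scalars mod p stand in for the F_p[x] coefficients, padding uses the constant factor 1, and the incremental per-step update of the ω_{j,i} as well as the master/slave split are omitted):

    p = 101

    def mul_trunc(a, b, k):
        out = [0] * (k + 1)
        for i, ca in enumerate(a):
            for j, cb in enumerate(b):
                if i + j <= k:
                    out[i + j] = (out[i + j] + ca * cb) % p
        return out

    def tree_product(factors, k):
        """Combine the factors pairwise, level by level, truncated mod y^(k+1)."""
        level = [f[: k + 1] + [0] * (k + 1 - len(f)) for f in factors]
        while len(level) & (len(level) - 1):       # pad to a power of two
            level.append([1] + [0] * k)            # dummy factor 1
        while len(level) > 1:                      # T_{j,i} = T_{j-1,2i-1} * T_{j-1,2i}
            level = [mul_trunc(level[i], level[i + 1], k) for i in range(0, len(level), 2)]
        return level[0]

    factors = [[2, 5, 1], [3, 0, 7], [1, 4], [6, 1, 1], [5, 9]]    # five truncated series
    k = 4
    flat = [1] + [0] * k
    for f in factors:
        flat = mul_trunc(f[: k + 1] + [0] * (k + 1 - len(f)), flat, k)
    assert tree_product(factors, k) == flat        # the root equals the flat product

The depth of the tree is log_2 m, matching the ln m factor in the complexity bounds above.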

6 Conclusions and further work

We have presented a new algorithm for parallel bivariate Hensel lifting. The new algorithm has an asymptotic complexity of O((ln m) n^2 ln n) operations in F_p on O(n ln m) processors. Additionally, it behaves well in practice, as experiments on a massively parallel machine have shown.

The more variables a polynomial involves, the sparser it will be in practice. For this reason, even if our algorithm can be generalized to polynomials in many variables, it will be less efficient, as it is inherently dense. Subject of further research will be how to parallelize sparse multivariate Hensel lifting algorithms [4, 5].

References

[1] Bernardin, L. Maple on a massively parallel, distributed memory machine. In Proceedings of PASCO '97 (1997). To appear.

[2] Bernardin, L., and Monagan, M. B. Efficient multivariate factorization over finite fields. In Proceedings of AAECC '97 (1997), Lecture Notes in Computer Science, Springer-Verlag. To appear.

[3] Geddes, K. O., Czapor, S. R., and Labahn, G. Algorithms for Computer Algebra. Kluwer Academic Publishers, Boston, 1992.

[4] Kaltofen, E. Sparse Hensel lifting. In Proceedings of Eurocal '85, Vol. II (1985), B. F. Caviness, Ed., vol. 204 of Lecture Notes in Computer Science, Springer-Verlag, pp. 4-17.

[5] Kaltofen, E., and Trager, B. M. Computing with polynomials given by black boxes for their evaluations: Greatest common divisors, factorization, separation of numerators and denominators. Journal of Symbolic Computation 9, 3 (March 1990), 301-320.

[6] Knuth, D. E. Seminumerical Algorithms, vol. 2 of The Art of Computer Programming. Addison Wesley, 1981.

[7] Mulders, T., and Bernardin, L. An analysis of linear versus quadratic Hensel lifting. In preparation, 1997.

[8] Viry, G. Factorization of multivariate polynomials with coefficients in F_p. Journal of Symbolic Computation 15, 4 (April 1993), 37-39.

[9] Wang, P. S. Parallel polynomial operations on SMPs: an overview. Journal of Symbolic Computation 21, 4 (1996), 397-410.

[10] Zippel, R. E. Effective Polynomial Computation. Kluwer Academic Publishers, Boston, 1993.
