Bi-Relational Network Analysis Using a Fast Random Walk with Restart


Jing Xia, Doina Caragea and William Hsu
Department of Computing and Information Sciences, Kansas State University, Manhattan, USA
{xiajing, dcaragea, bhsu}@ksu.edu

Abstract. Identification of nodes relevant to a given node in a relational network is a basic problem in network analysis with great practical importance. Most existing network analysis algorithms utilize one single relation to define relevancy among nodes. However, in real world applications multiple relationships exist between nodes in a network. Therefore, network analysis algorithms that can make use of more than one relation to identify the relevance set for a node are needed. In this paper, we show how the Random Walk with Restart (RWR) approach can be used to study relevancy in a bi-relational network from the bibliographic domain, and show that making use of two relations yields better results than approaches that use a single relation. As relational networks can be very large, we also propose a fast implementation for RWR by adapting an existing Iterative Aggregation and Disaggregation (IAD) approach. The IAD-based RWR exploits the block-wise structure of real world networks. Experimental results show a significant improvement in running time for the IAD-based RWR compared to the traditional power-method-based RWR.

Keywords: relational data mining; node relevancy; random walk; iterative aggregation and disaggregation approach.

I. INTRODUCTION

Identification of nodes relevant to a given node in a relational network is of significant practical importance. Node relevancy information enables the study of complex properties of a network. As an example, in a bibliographic network, information about researchers relevant to a given researcher can be used to predict potential co-author relationships or communities that a researcher should join. Among several approaches to the problem of identifying a relevancy set for a given node in a network, random walk based algorithms have proven very effective [1]. Traditionally, random walk algorithms exploit one type of relation (e.g., co-author relationships) when finding relevance scores for a node (in our example, an author). However, in real world applications, for a particular domain there always exist several types of objects (e.g., papers, authors, venues) and relations among objects of interest (e.g., co-author relationships, citation relationships, author-paper relationships). While each relation can be exploited by itself for solving a particular network analysis task, more insights into the properties of the network can be gained if multiple relations are used together. In this work, we focus on bi-relational network analysis using a Random Walk with Restart (RWR) approach and show its advantages as compared to single-relational network analysis.

Given the large scale of network data available nowadays, fast implementations of the RWR algorithms are needed even in the case of single-relational network analysis [2]. For the analysis of networks with two or more relations, time and memory efficient algorithms are imperative. To address this challenge, we propose a fast implementation of the RWR algorithm for bi-relational networks. This implementation takes advantage of a nice property that real world networks present, specifically their block-wise structure. Based on this property, an Iterative Aggregation and Disaggregation (IAD) algorithm is adapted to RWR.

The rest of the paper is organized as follows. We define relevance scores and introduce single and bi-relational networks in Section II. We introduce the RWR network analysis approach in Section III. Our adaptation of the RWR approach to bi-relational networks is presented in Section IV, while the IAD-based RWR and a discussion on the efficiency of the method are presented in Section V. Section VI describes the experimental evaluation of the proposed approach. We conclude with a summary and discussion of the related work in Section VII.

II. BACKGROUND AND MOTIVATION

A. Relevance of Nodes

Generally, a relational network can be represented as a graph G = <V, E>, where V is the set of nodes and E is the set of edges representing relationships between nodes in the network. Similar to previous work [1], the main question we address in this paper can be stated as follows: given a node a ∈ V, which nodes in V are most relevant to a? To answer this question, for each node b ∈ V, we use RWR to compute a relevance score with respect to a. All scores together form a relevance score vector with respect to the node a. We expect nodes that are highly relevant to a to have higher scores than nodes that are not relevant to a. Thus, the score vector can help identify nodes relevant to a and also quantify relevancy.

B. Bi-Relational Networks

We will use a simple academic community example to motivate and describe bi-relational networks. In a typical academic community, given a researcher a, one may be interested in finding the most relevant researchers b for a. Here, we assume that b is relevant to a if a and b share similar research interests.

The set of nodes for this example consists of researchers and papers. In principle, we can compute the relevancy score vector for researcher a based on co-author relationships (single-relational network) in a bibliographic data set. However, we can also build a bi-relational network in which papers are linked to each other based on content similarity relationships (i.e., two papers are similar if they share similar words), and authors are linked to papers through author-has-paper relationships. Figure 1 shows a view of the bi-relational network induced by these two relationships. Intuitively, authors who have similar papers share similar interests. Thus, using these relationships together to find the authors most relevant to a given author should result in a stronger notion of relevancy than the relevancy obtained using each relation independently.

Figure 1. Bi-relational network: authors are associated with papers through author-has-paper relationships; papers are linked through similarity relationships.

We will use the following definitions for single and bi-relational networks. A single-relational network is a network induced by a single relation among nodes: if G = <V, E> is the graph corresponding to the network, then E is the set of edges defined by the single relation among the nodes in V. A bi-relational network is a network induced by two types of relations among nodes. Formally, a bi-relational network is given by G = <V_1 ∪ V_2, E_1 ∪ E_2>, where E_1 ⊆ V_1 × V_2 and E_2 ⊆ V_2 × V_2.

III. RANDOM WALK WITH RESTART

The notations used in this paper are shown in Table I.

Table I. SYMBOLS AND DEFINITIONS
π        : 1 × n stationary distribution vector obtained by running RWR from starting node k
π^(t)    : distribution vector after t iterations
c        : the restart probability, 0 < c < 1
e_k      : 1 × n starting vector; the k-th element is 1 and all other elements are 0
c_k      : 1 × n restart vector, c_k = c e_k
n        : the number of nodes in the graph
N        : the number of partitions
P        : the original transition matrix
M        : the transformed transition matrix, M = (1 − c) P
A        : N × N coupling matrix of M
m_i      : the size of the sub-matrix M_ii
c_k (block form) : 1 × m_k sub-starting vector for the sub-matrix M_kk; the element corresponding to the starting node k is nonzero and all other elements are 0
π_i      : sub-eigenvector of the sub-matrix M_ii
1_N      : 1 × N vector in which the element corresponding to M_kk is 1 and the others are 0

The RWR method defines an n × n transition matrix P (where n is the number of nodes). This matrix models the probability of transition between every two nodes in the network. If P is row normalized (i.e., the sum of the elements in each row is 1) and, as we assume here, irreducible and aperiodic, then according to the Perron-Frobenius theorem there is a unique stationary distribution of the matrix P. Given the transition matrix P, a RWR can be seen as a non-homogeneous Markov chain. A RWR is defined by the following formula:

π^(t+1) = (1 − c) π^(t) P + c e_k    (1)

where π is the probability distribution of a particle starting at node k, c ∈ (0, 1) is the restart probability, and e_k is the initial vector. As can be seen in the equation, at each iteration a constant restart term c e_k is added. Eq. (1) converges as the number of iterations approaches infinity [3]. Therefore, we have:

π = (1 − c) π P + c e_k = π M + c_k    (2)

The stationary distribution π, which represents the probability distribution of reaching any node a from k, can be seen as the relevance score vector corresponding to k.

IV. RWR FOR SINGLE AND BI-RELATIONAL NETWORKS

In what follows, we explain how we apply RWR to single and bi-relational networks, respectively.

A. Single-Relational Networks

We can directly apply RWR to single-relational networks. To do that, P is constructed from the network G = <V, E> (which is a weighted graph), and e_a represents the starting vector for node a. The stationary distribution π^(a) can be used as the relevance score vector corresponding to the node a.
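As an illustration, the following is a minimal Python sketch of the power iteration in Eq. (1) for a single-relational network. The function name, the restart probability value c = 0.15, and the convergence tolerance are illustrative choices, not values taken from the paper; a sparse-matrix implementation would follow the same pattern.

import numpy as np

def rwr(P, k, c=0.15, eps=1e-8, max_iter=1000):
    """Random Walk with Restart via the power iteration of Eq. (1).

    P : (n, n) row-normalized transition matrix.
    k : index of the starting node.
    c : restart probability, 0 < c < 1.
    Returns the stationary distribution pi (relevance scores w.r.t. k).
    """
    n = P.shape[0]
    e_k = np.zeros(n)
    e_k[k] = 1.0
    pi = e_k.copy()                      # the walk starts at node k
    for _ in range(max_iter):
        pi_next = (1 - c) * pi @ P + c * e_k
        if np.linalg.norm(pi_next - pi) < eps:
            return pi_next
        pi = pi_next
    return pi

# Example: a tiny 3-node chain; scores decay with distance from node 0.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
print(rwr(P, k=0))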

B. Bi-Relational Networks

Recall that a bi-relational network is defined by G = <V_1 ∪ V_2, E_1 ∪ E_2>, where E_1 ⊆ V_1 × V_2 and E_2 ⊆ V_2 × V_2. Our goal is to use RWR to identify nodes b in V_1 relevant to a node a in V_1 by making use of both the relationships in E_1 and those in E_2. To achieve that, we construct the transition matrix P from <V_2, E_2>. Then, for the given node a ∈ V_1 and every edge (a, p) ∈ E_1 (consequently, for every node p ∈ V_2 that is linked to a), we run RWR with transition matrix P and starting vector e_p. The resulting stationary distribution represents the relevance score vector corresponding to the starting node p. Based on the relevance score vector, we choose a set of nodes V_2' ⊆ V_2 that are most relevant to node p. Finally, the nodes relevant to a are defined as those nodes b in V_1 for which there exists an edge (b, q) ∈ E_1 with q ∈ V_2'.

For example, let us assume that an author a has four papers. These papers are part of a paper similarity network. To identify authors related to a, we run RWR with a transition matrix given by the paper similarity network and starting vectors corresponding to each of the author's papers, in turn. Thus, we obtain a set of four relevance score vectors. From each vector, we choose the most related papers and infer that their authors are most relevant for the given author.
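A compact sketch of this procedure is given below. The function and argument names are illustrative, the rwr() routine is assumed to be the power-iteration sketch from Section III, and the number of top papers kept per score vector (top_papers) is an assumed cutoff, since the paper does not fix how many most-relevant papers are selected.

import numpy as np

def relevant_authors(a, author_papers, P_papers, rwr, top_papers=10):
    """Bi-relational relevance: authors related to author `a` (Section IV-B).

    a             : the query author.
    author_papers : dict mapping each author to the set of his/her paper ids
                    (the author-has-paper relation E_1).
    P_papers      : row-normalized transition matrix of the paper similarity
                    network <V_2, E_2>.
    rwr           : a function rwr(P, k) returning the RWR stationary
                    distribution started at paper k (e.g., the sketch above).
    top_papers    : assumed number of most relevant papers kept per score vector.
    """
    relevant = set()
    for p in author_papers[a]:
        scores = rwr(P_papers, p)                      # one RWR per paper of a
        top = np.argsort(scores)[::-1][:top_papers]    # most relevant papers V_2'
        # map the selected papers back to authors through E_1
        for b, papers in author_papers.items():
            if b != a and papers & set(top.tolist()):
                relevant.add(b)
    return relevant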
V. SCALING UP RWR

The straightforward implementation of RWR requires many iterations over the transition matrix or, even worse, calculating the inverse of the matrix [2]. As multi-relational networks are usually large, using this implementation is impractical for most real world applications. To address this limitation, we propose an approach for scaling up the RWR method. The theory behind our fast RWR approach and the algorithm used in our experiments are described in what follows.

A. RWR Property

In this subsection, we show a nice property of π (the stationary distribution of the RWR starting at k), assuming that the matrix M in (2) has the following diagonal block-structure:

M = [ M_11   0     ...   0
      0      M_22  ...   0
      ...    ...   ...   ...
      0      0     ...   M_NN ]    (3)

where each block sub-matrix M_ii is of size m_i, for i = 1, 2, ..., N, and M_kk contains the starting node k. Then, by replacing M with (3) in (2), we get:

(π_1, ..., π_k, ..., π_N) = (π_1, ..., π_k, ..., π_N) M + (0, ..., c_k, ..., 0)

where π_k is the sub-eigenvector for the block sub-matrix M_kk and c_k is a 1 × m_k vector corresponding to the sub-matrix M_kk (the element corresponding to the starting node k is nonzero and all other elements are 0). As a consequence, each π_i can be obtained from π_i M_ii = π_i, for i ≠ k, and π_k M_kk + c_k = π_k. Note that ρ(M_ii) < 1 (where ρ denotes the spectral radius of a matrix, i.e., its largest eigenvalue in absolute value); therefore π_i = 0, for i = 1, 2, ..., N, i ≠ k. Thus, we only need to solve the equation π_k M_kk + c_k = π_k. This property explains the observation made in [1], where the authors noticed that most elements in the distribution are close to zero and therefore proposed to perform RWR on the partitioned local block only.

In most real network applications, the network naturally forms a block-wise structure, although not necessarily a perfect diagonal block-structure like the one above. For instance, in the academic community network example, the author network has a block-wise community structure with respect to authors' interests and publications. Similarly, the paper network has a block-wise structure with respect to papers' topics and similarity. We will exploit this type of structure to scale up the RWR approach. To do that, we first construct a block-wise partition for M that looks like:

M = [ M_11  M_12  ...  M_1N
      M_21  M_22  ...  M_2N
      ...   ...   ...  ...
      M_N1  M_N2  ...  M_NN ]    (4)

where M_ii represents the links within community i and M_ij, i ≠ j, represents the links between communities i and j. To construct such a partition for M, we use the CLUTO [4] algorithm, which maximizes the edge weight within a community while minimizing the weight between communities. Once a partition of M is constructed, we can compute the left eigenvector of each diagonal sub-matrix M_ii:

u_i M_ii = λ_i u_i (for i ≠ k) and u_k M_kk + c_k = u_k    (5)

where λ_i ≤ (1 − c). We will use the eigenvectors u_i of M_ii as approximations to π_i (the sub-vector of π corresponding to M_ii), and further combine the local eigenvectors u_i into one global eigenvector for the whole matrix by adapting the Iterative Aggregation/Disaggregation (IAD) [3], [5], [6] method. This will allow us to quickly find the stationary distribution π.
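To illustrate the property above, the following small numerical sketch (with an assumed toy matrix and block assignment) compares the full solution of Eq. (2) with the solution of π_k M_kk + c_k = π_k on the starting block alone; solving the linear system π = c e_k (I − M)^(-1) is simply a direct way of solving Eq. (2) for this tiny example.

import numpy as np

# Two communities with no cross links: a perfectly block-diagonal case of Eq. (3).
# Assumed toy transition matrix; rows are normalized within each block.
P = np.zeros((5, 5))
P[0, 1] = P[1, 0] = 1.0                     # block 1: nodes {0, 1}
P[2, 3] = P[3, 4] = P[4, 2] = 1.0           # block 2: nodes {2, 3, 4}

c, k = 0.15, 0
M = (1 - c) * P                             # transformed matrix M = (1 - c) P
e_k = np.zeros(5); e_k[k] = 1.0

# Full-matrix solution of Eq. (2): pi = pi M + c e_k  =>  pi (I - M) = c e_k
pi_full = c * e_k @ np.linalg.inv(np.eye(5) - M)

# Block-only solution: solve pi_k M_kk + c_k = pi_k on block 1 alone
blk = [0, 1]
M_kk = M[np.ix_(blk, blk)]
c_k = c * e_k[blk]
pi_blk = c_k @ np.linalg.inv(np.eye(len(blk)) - M_kk)

print(pi_full)   # mass only on nodes 0 and 1; nodes 2-4 get exactly 0
print(pi_blk)    # matches the first two entries of pi_full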

B. Fast RWR Using the IAD Method

The combination of the local eigenvectors corresponding to the matrices M_ii into a global eigenvector for M needs to take into account the weight of each sub-block matrix. The first part of the IAD algorithm is used to derive this weight vector by constructing an aggregated matrix A from M, in two steps. Assuming that π_i is known for i = 1, 2, ..., N, the two steps are as follows: 1) replace each row of each sub-block matrix M_ij with the sum of the elements in that row; this results in an n × N matrix (one column for each block column of M); 2) multiply each of the resulting columns by a weight vector φ_i, where φ_i = π_i / ||π_i||_1, for i = 1, 2, ..., N; this results in an aggregated matrix A of size N × N (one element for each sub-block matrix M_ij). The elements of the matrix A can be written as:

a_ij = φ_i M_ij e_j

where φ_i is a row vector with m_i elements and e_j is a column vector of ones with m_j elements. Having constructed the aggregated matrix A, the goal is to find a weight ξ_i for each sub-block matrix by solving the equation ξ = ξ A + c 1_N. Indeed, we can show that A has such a stationary distribution. This follows from:

(||π_1||_1, ||π_2||_1, ..., ||π_N||_1) A + c 1_N = (||π_1||_1, ..., ||π_k − c_k||_1 + c, ..., ||π_N||_1) = (||π_1||_1, ||π_2||_1, ..., ||π_N||_1)

where we used the fact that π M = π − c_k (Eq. (2)) and ||π_k − c_k||_1 + c = ||π_k||_1 (which can easily be proved using the definition of the 1-norm). Thus, ξ = (||π_1||_1, ||π_2||_1, ..., ||π_k||_1, ..., ||π_N||_1) is the stationary distribution of A, and we consider ξ_i to be the weight for the sub-block matrix M_ii.

Note that φ_i (i = 1, 2, ..., N) depends on π_i, which is not known in advance; therefore, we use u_i as an approximation for π_i. For practical problems, this approximation should not result in a significant error, as the structure of M is presumably close to the structure in Eq. (3) and ||M_ii||_1 is maximized when creating the block-wise partition of M. Therefore, the approximation is:

φ'_i = u_i / ||u_i||_1 ≈ φ_i = π_i / ||π_i||_1    (6)

We use Eq. (6) to compute an approximation A' to the aggregated matrix A. Each element of A' is given by a'_ij = φ'_i M_ij e_j. Next, we determine an approximate eigenvector ξ' from ξ' A' + c 1_N = ξ' and use it to derive the stationary distribution of M:

π ≈ (ξ'_1 φ'_1, ξ'_2 φ'_2, ..., ξ'_N φ'_N)    (7)

The second part of the IAD is used to improve the approximation in Eq. (7). The simplest way to do this is to incorporate Eq. (7) back into Eq. (6) and reiterate, with the goal of obtaining a better solution. However, directly using Eq. (7) will have no effect on the approximation [3]. Therefore, similar to [3], we adapt Takahashi's approach [7] to improve the approximation before incorporating Eq. (7) back into Eq. (6). We construct a matrix W_i, for i = 1, 2, ..., N, such that:

W_i = [ M_ii    s_i
        r_i^T   q_i ]    (8)

where r_i^T is a 1 × m_i vector defined as:

r_i^T = (1 / (1 − ξ_i)) Σ_{j ≠ i} ξ_j φ_j M_ji,             if i ≠ k
r_i^T = (1 / (1 − ξ_k)) ( c_k + Σ_{j ≠ k} ξ_j φ_j M_jk ),   if i = k    (9)

s_i is an m_i × 1 vector that collects, for each node in block i, the total weight of its links to nodes outside the block, and q_i is a scalar chosen such that the exact quantities satisfy Eq. (10) below. Therefore, we obtain:

(π_i, 1 − ξ_i) W_i = (π_i, 1 − ξ_i)    (10)

With the constructed block W_i, we can obtain new values for π_i and ξ_i by solving Eq. (10). Finally, we update φ_i = π_i / ξ_i with the new values for π_i and ξ_i obtained from W_i. The steps for scaling up RWR are shown in Algorithm 1.

Algorithm 1: IAD-based RWR
Input: a normalized matrix P, the starting vector e_k and the error threshold ε.
Output: the stationary distribution π.
1. Construct the transformed matrix M from P.
2. Partition M into N partitions using CLUTO [4].
3. Let π^(0) = (π_1^(0), π_2^(0), ..., π_N^(0)) be a given initial approximation to the solution and set m = 1.
4. For i = 1, 2, ..., N, compute φ_i^(m−1) = π_i^(m−1) / ||π_i^(m−1)||_1.
5. Construct the aggregated matrix A^(m−1) whose elements are (A^(m−1))_ij = φ_i^(m−1) M_ij e_j.
6. Solve the eigenvector problem ξ^(m−1) A^(m−1) + c 1_N = ξ^(m−1).
7. For i = 1, 2, ..., N, construct W_i and derive π_i^(m) and ξ_i^(m) by solving Eq. (10); update φ_i^(m) = π_i^(m) / ξ_i^(m) and π^(m) = (ξ_1^(m−1) φ_1^(m), ξ_2^(m−1) φ_2^(m), ..., ξ_N^(m−1) φ_N^(m)).
8. Convergence test: if the difference between two consecutive estimates satisfies ||π^(m) − π^(m−1)||_2 < ε, then stop. Otherwise, set m = m + 1 and go to step 4.
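The aggregation step (step 5 of Algorithm 1) is the most mechanical part and is sketched below. The partition is assumed to be given (the paper obtains it with CLUTO); the helper name and the use of dense NumPy arrays are illustrative rather than part of the original implementation.

import numpy as np

def aggregate(M, blocks, phi):
    """Build the N x N aggregated matrix of Algorithm 1, step 5.

    M      : (n, n) transformed matrix, M = (1 - c) P.
    blocks : list of N index lists, one per partition (assumed given here).
    phi    : list of N row vectors, phi[i] = pi_i / ||pi_i||_1.
    Returns A with A[i, j] = phi_i * M_ij * e_j (the coupling matrix elements).
    """
    N = len(blocks)
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            M_ij = M[np.ix_(blocks[i], blocks[j])]   # cross-block sub-matrix
            # row sums of M_ij (step 1), weighted by phi_i (step 2)
            A[i, j] = phi[i] @ M_ij.sum(axis=1)
    return A

Solving ξ' A' + c 1_N = ξ' on the aggregated matrix is then only an N × N problem, which is small compared to the original n × n system.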

C. Efficiency of the IAD-based RWR

The IAD-based RWR is a divide-and-conquer method which takes advantage of the block-wise structure of real world networks. The running time of the algorithm depends mainly on two factors: the number of iterations and, for each iteration, the time it takes to solve Eq. (10) for the N block sub-matrices. Solving Eq. (10) takes time proportional to the size of the corresponding sub-matrix M_ii. The CLUTO algorithm that we use to partition M takes as input the number N of blocks needed and optimizes block sizes to avoid partitions with many small blocks and a few very large blocks. Thus, the resulting partitions are well suited for the IAD approach. The global convergence of the IAD method is still an open problem. However, we will show that for real world networks that have a natural block-wise structure the algorithm converges very fast. As for space, the algorithm stores the diagonal matrices and the sparse matrix of the cross-link network. The aggregated matrix requires O(N^2) space.

VI. EXPERIMENTAL EVALUATION

A. Data Sets and Questions

The data set used for the experiments in this paper (called the paper & co-author data) is constructed from the Cora data set (McCallum), which contains research papers. For each paper, the following fields are available: title, authors, topic, abstract, and venue (e.g., conference name), among others. The data set is obtained from Cora as follows: we first extract publications for which the title and the authors' names are available. From the resulting set of papers, we select those for which abstracts are available. This results in a data set that contains 4,100 papers and 10,830 authors.

Two networks are constructed from this data. First, we construct a single-relational network G = <V, E> based on the co-author relation. The weight of an edge (a, b) from author a to author b is defined as the number of publications co-authored by a and b, divided by the total number of publications authored by a. Second, we construct a bi-relational network G = <V_1 ∪ V_2, E_1 ∪ E_2> based on the author-has-paper and paper similarity relations. The weight of an edge (a, p) ∈ E_1 from an author a to a paper p is 1 if a is among paper p's authors and 0 otherwise. The weight of an edge (p, q) ∈ E_2 between two papers p and q is given by the cosine similarity between the abstracts of the two papers. To calculate cosine similarity, we build an inverted index over the merged vocabulary of all abstracts. Using the inverted index, each abstract is represented using the TF-IDF (term frequency, inverse document frequency) weighting scheme.

The questions that we want to answer about the paper & co-author data set are the following: (Q1) What are the most relevant authors to an author a, as identified through the analysis of the single and bi-relational networks, respectively? Intuitively, the more similar the papers that two authors share, the more related the authors are. (Q2) How accurate is the process of mining information from the single and bi-relational networks, respectively?
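As an illustration of how the paper similarity weights in E_2 can be obtained, the following sketch uses scikit-learn's TF-IDF vectorizer and cosine similarity. The paper itself builds an inverted index directly, and the pruning threshold below is an assumption rather than part of the original construction.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def paper_similarity_edges(abstracts, threshold=0.1):
    """Weight edges (p, q) in E_2 by the cosine similarity of the two abstracts.

    abstracts : list of abstract strings, one per paper.
    threshold : assumed cutoff below which no edge is created (the paper does
                not state whether weak similarities are pruned).
    Returns a dict mapping (p, q) pairs to similarity weights.
    """
    tfidf = TfidfVectorizer()              # TF-IDF over the merged vocabulary
    X = tfidf.fit_transform(abstracts)     # sparse document-term matrix
    S = cosine_similarity(X)               # pairwise cosine similarities
    edges = {}
    for p in range(len(abstracts)):
        for q in range(p + 1, len(abstracts)):
            if S[p, q] >= threshold:
                edges[(p, q)] = S[p, q]
    return edges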
B. Experimental Design and Results

We answer (Q1) through a case study. We run RWR on the single and bi-relational networks described in Section VI-A, respectively, to obtain relevance score vectors. We use the author Jiawei Han as the starting node. The left column of Table II shows the top 10 relevant authors for Jiawei Han, as identified from the single-relational co-author network. As expected, these are mostly his collaborators (researchers that have co-authored papers with him). The right column of Table II shows the top 10 relevant authors for Jiawei Han, as identified from the bi-relational network. These are researchers whose interests are similar to Jiawei Han's interests (specifically, databases and data mining). This case study shows the advantage of using the bi-relational network in the analysis: it produces results that can be used for predicting potential future collaborations, or even potential reviewers for a researcher.

Table II. AUTHORS RELEVANT TO JIAWEI HAN
Single-relational network: n. stefanovic, j. chang, w. gong, b. xia, o. r. zaiane, m. kamber, l. lakshmanan, k. koperski, w. wang, a. pang.
Bi-relational network: l. lakshmanan, t. topaloglou, j. mylopoulos, r. missaoui, r. ramakrishnan, h. hirsh, s. sudarshan, m. j. zaki, a. j. bonner, d. srivastava.

We conduct two experiments to answer (Q2). The first experiment (Q2.1) is designed to evaluate the accuracy of the process of labeling papers with categories using the similarity network only. The second experiment (Q2.2) is designed to test the accuracy of the process of predicting co-authors based on the bi-relational network.

(Q2.1) In the Cora data set, each paper has a label indicating the research category associated with the paper. We consider the k most related papers to a given paper in the data set (according to their relevance scores). The accuracy is defined as the number of papers categorized in the same category as the given paper, divided by k. Figure 2 shows the results. As expected, the accuracy decreases with the number of papers k considered.

(Q2.2) To test the accuracy of the process of predicting co-author relationships from the bi-relational network, we randomly select three distinct sets of author pairs. The authors in a pair have co-authored some papers, which are removed from the network. We assume that, in addition to the papers that a pair of authors have co-authored (and which were removed), the two authors might have published other similar papers. Our intuition is that if a pair of authors share similar papers, then they will be predicted to be co-authors based on the bi-relational author-paper network.
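The accuracy measure used for (Q2.1) can be computed in a few lines; in the sketch below, excluding the query paper itself from the top-k list and the argument names are assumptions, since the paper does not spell out these details.

def topk_label_accuracy(scores, labels, query, k):
    """Accuracy measure for (Q2.1), under the stated definition.

    scores : relevance score vector for the query paper (e.g., from RWR).
    labels : list of category labels, one per paper.
    query  : index of the query paper.
    k      : number of top-ranked papers to inspect.
    Returns the fraction of the k most relevant papers (excluding the query
    itself) that share the query paper's category.
    """
    ranked = sorted(range(len(scores)), key=lambda p: scores[p], reverse=True)
    top_k = [p for p in ranked if p != query][:k]
    hits = sum(1 for p in top_k if labels[p] == labels[query])
    return hits / k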

Figure 2. Paper labeling prediction accuracy, as a function of the total number k of papers for which labels are predicted, based on mining the paper similarity network.

For each pair, we run RWR starting at an arbitrary paper of one of the authors in the pair (this will not be a co-authored paper, as those have been removed) and identify the k most related authors. A pair is predicted correctly if the co-author in the pair is among the related authors. We define the accuracy as the number of co-authors identified, divided by the number of pairs in a data set. Figure 3 shows the results. As expected, the more related authors k are retrieved, the better the prediction accuracy.

Figure 3. Co-author prediction accuracy, as a function of the number k of authors retrieved for each pair of potential co-authors, based on mining the bi-relational network.

C. Efficiency of Random Walk

Table III shows a comparison between the traditional power method (which multiplies the transition matrix with π until the L_2 norm of the difference between successive estimates of π drops below the threshold ε) and the IAD-based RWR, in terms of the ||π^(1) − π^(0)||_2 values after the first iteration and the number of iterations needed for convergence. Results for three partitionings are shown.

Table III. Values of ||π^(1) − π^(0)||_2 after the first iteration and number of iterations for convergence when ε = 10^−5, for the traditional power method vs. the IAD approach.

VII. SUMMARY AND RELATED WORK

In this paper, we have shown how to use the RWR approach to analyze a bi-relational network. A similar analysis has been performed for networks with three relations, but was omitted here due to space limitations (to be published as a technical report). Generalization of our approach to multi-relational networks is possible, according to the semantics of the relations in a particular network. We have also proposed an IAD-based fast RWR implementation. This implementation makes use of the block-wise structure that many networks present. Experimental results on a data set from the bibliographic domain show the benefits of using bi-relational networks as opposed to single-relational networks.

The relevance scores defined by RWR have many useful properties. Compared with other pairwise metrics, the relevance scores can capture the global structure of the graph as well as the multi-faceted relationships between nodes. RWR is a popular method for network analysis, and many applications use random walks and related methods as a building block. Tong et al. [2] provide an excellent review of RWR. An exact solution for RWR usually requires the inversion of a large matrix; therefore, fast approximate solutions to the problem have been proposed before [1]. Similar to our proposed approach, other existing solutions make use of the block-wise structure of real world networks. Tong et al. [2] approximate the stationary distribution of RWR by a heuristic-based low-rank approximation. Sun et al. [1] perform RWR only on the partition that contains the starting point; their method outputs a local estimation of the stationary distribution.

REFERENCES

[1] J. Sun, H. Qu, D. Chakrabarti, and C. Faloutsos, "Neighborhood formation and anomaly detection in bipartite graphs," in Proc. of the IEEE Int. Conf. on Data Mining (ICDM '05), 2005.
[2] H. Tong, C. Faloutsos, and J.-Y. Pan, "Fast random walk with restart and its applications," in Proc. of the Sixth IEEE Int. Conf. on Data Mining (ICDM '06), 2006.
[3] W. J. Stewart, Introduction to the Numerical Solution of Markov Chains. Princeton University Press, 1996.
[4] G. Karypis and V. Kumar, "Multilevel k-way partitioning scheme for irregular graphs," J. Parallel Distrib. Comput., vol. 48, no. 1, 1998.
[5] W. J. Stewart, "Numerical methods for computing stationary distributions of finite irreducible Markov chains," in Advances in Computational Probability, 1998.
[6] D. P. O'Leary, "Iterative methods for finding the stationary vector for Markov chains," in Linear Algebra, Markov Chains, and Queueing Models, 1992.
[7] Y. Takahashi, "A lumping method for numerical calculations of stationary distributions of Markov chains," Technical Report B-18, Tokyo Institute of Technology, 1975.
