Conjugate Gradient Projection Approach for Multi-Antenna Gaussian Broadcast Channels


Jia Liu, Y. Thomas Hou, and Hanif D. Sherali
Department of Electrical and Computer Engineering, Department of Industrial and Systems Engineering
Virginia Polytechnic Institute and State University, Blacksburg, VA
Email: {kevinlau, thou,
arXiv:cs/ v1 [cs.IT] 9 Jan 2007

Abstract—THIS PAPER IS ELIGIBLE FOR THE STUDENT PAPER AWARD. It has been shown recently that dirty-paper coding is the optimal strategy for maximizing the sum rate of multiple-input multiple-output Gaussian broadcast channels (MIMO BC). Moreover, by channel duality, the nonconvex MIMO BC sum rate problem can be transformed into the convex dual MIMO multiple-access channel (MIMO MAC) problem with a sum power constraint. In this paper, we design an efficient algorithm based on conjugate gradient projection (CGP) to solve the MIMO BC maximum sum rate problem. Our proposed CGP algorithm solves the dual sum power MAC problem by utilizing the powerful concept of Hessian conjugacy. We also develop a rigorous algorithm to solve the projection problem. We show that CGP enjoys provable convergence, nice scalability, and great efficiency for large MIMO BC systems.

I. INTRODUCTION

Recently, researchers have shown great interest in characterizing the capacity region of multiple-input multiple-output broadcast channels (MIMO BC) and MIMO multiple-access channels (MIMO MAC). In particular, although the general capacity region of MIMO BC remains an open problem [1], the sum rate region has been shown to be achievable by the dirty-paper coding strategy [2], [3]. Moreover, by the remarkable channel duality between MIMO BC and MIMO MAC established in [4]-[6], the nonconvex MIMO BC sum rate problem can be transformed into the convex dual MIMO MAC problem with a sum power constraint. However, although standard interior-point convex optimization methods can be used to solve the sum power MIMO MAC problem, their complexity is considerably higher than that of methods exploiting the special structure of the sum power MIMO MAC problem.
Such specifically designed algorithms include the minimax method (MM) by Lan and Yu [7], the steepest descent (SD) method by Viswanathan et al. [8], the dual decomposition (DD) method by Yu [9], and two iterative water-filling methods (IWFs) by Jindal et al. [10]. Among these algorithms, MM is more complex than the others, which have linear complexity. SD and DD have longer running times per iteration than the IWFs, due to line searches and the inner optimization, respectively. Both IWFs in [10], however, do not scale well as the number of users, denoted by K, increases. The reason is that in each iteration of the IWFs, the most recently updated solution only accounts for a fraction of 1/K in the effective channels computation. The authors of [10] proposed a hybrid algorithm as a remedy, but the hybrid algorithm introduces additional implementation complexity, and its performance depends upon the empirical switch timing, which, in turn, depends upon the specific problem. In addition, one of the IWFs in [10], although it converges relatively faster than the other, requires total storage for K^2 input covariance matrices. These limitations of the existing algorithms motivate us to design an efficient and scalable algorithm with a modest storage requirement for solving large MIMO BC systems.

Our major contribution in this paper is the design of a fast algorithm based on the Conjugate Gradient Projection (CGP) approach. Our algorithm is inspired by [11], where a gradient projection method was used to heuristically solve another nonconvex maximum sum rate problem, for single-hop MIMO-based ad hoc networks with mutual interference. However, unlike [11], we use conjugate gradient directions instead of gradient directions to eliminate the zigzagging phenomenon encountered in [11]. Also, we develop a rigorous algorithm to exactly solve the projection problem (in [11], the gradient projection is handled heuristically: the authors simply set the first derivative to zero to obtain a solution when solving the constrained Lagrangian dual of the projection problem).
The attractive features of our proposed CGP are as follows: 1) CGP is extremely fast, and enjoys provable convergence as well as nice scalability. As opposed to the IWFs, the number of iterations required for convergence in CGP is very insensitive to an increase in the number of users. 2) CGP has the desirable linear complexity. By adopting the inexact line search method called Armijo's rule, we show that CGP has a per-iteration complexity comparable to that of the IWFs, while requiring many fewer iterations for convergence in large MIMO BC systems. 3) CGP has a modest memory requirement: it only needs the solution information from the previous step, as opposed to one of the IWFs, which requires the solution information from the previous K - 1 steps. Moreover, CGP is very intuitive and easy to implement.

The remainder of this paper is organized as follows. In Section II, we discuss the network model and formulation. Section III introduces the key components of our CGP framework, including the conjugate gradient computation and how to

perform the projection. We analyze and compare the complexity of CGP with other existing algorithms in Section IV. Numerical results are presented in Section V. Section VI concludes this paper.

II. SYSTEM MODEL AND PROBLEM FORMULATION

We first introduce the notation. We use boldface to denote matrices and vectors. For a complex-valued matrix A, A* and A† denote the conjugate and conjugate transpose of A, respectively. Tr{A} denotes the trace of A. We let I denote the identity matrix, with dimension determined from context. A ⪰ 0 means that A is Hermitian and positive semidefinite (PSD). Diag{A_1 ... A_n} represents the block diagonal matrix with matrices A_1, ..., A_n on its main diagonal.

Suppose that a MIMO Gaussian broadcast channel has K users, each of which is equipped with n_r antennas, and that the transmitter has n_t antennas. The channel matrix of user i is denoted by H_i ∈ C^{n_r × n_t}. In [2], [4]-[6], it has been shown that the maximum sum rate capacity of MIMO BC is equal to that of the dirty-paper coding region, which can be computed by solving the following optimization problem:

maximize    ∑_{i=1}^{K} log [ det(I + H_i (∑_{j≤i} Γ_j) H_i†) / det(I + H_i (∑_{j<i} Γ_j) H_i†) ]
subject to  Γ_i ⪰ 0, i = 1, 2, ..., K,    ∑_{i=1}^{K} Tr(Γ_i) ≤ P,        (1)

where Γ_i ∈ C^{n_t × n_t}, i = 1, ..., K, are the downlink input covariance matrices. It is evident that (1) is a nonconvex optimization problem. However, the authors of [4], [6] showed that, due to the duality between MIMO BC and MIMO MAC, (1) is equivalent to the following MIMO MAC problem with a sum power constraint:

maximize    log det(I + ∑_{i=1}^{K} H_i† Q_i H_i)
subject to  Q_i ⪰ 0, i = 1, 2, ..., K,    ∑_{i=1}^{K} Tr(Q_i) ≤ P,        (2)

where Q_i ∈ C^{n_r × n_r}, i = 1, ..., K, are the uplink input covariance matrices. For convenience, we use the matrix Q = [Q_1 Q_2 ... Q_K] to denote the collection of all uplink input covariance matrices, and let F(Q) = log det(I + ∑_{i=1}^{K} H_i† Q_i H_i) denote the objective function of (2). After solving (2), we can recover the solution of (1) through the mapping proposed in [4].

III. CONJUGATE GRADIENT PROJECTION FOR MIMO BC

In this paper, we propose an efficient algorithm based on conjugate gradient projection (CGP) to solve (2).
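As a concrete reference point, the objective F(Q) of the dual problem (2) is straightforward to evaluate. The following Python/NumPy sketch (function and variable names are our own, not from the paper) accumulates the common sum ∑_i H_i† Q_i H_i once and then takes a log-determinant:

```python
import numpy as np

def sum_rate_objective(H, Q):
    """Evaluate F(Q) = log det(I + sum_i H_i^† Q_i H_i) for problem (2).

    H, Q are length-K lists: H[i] is the n_r x n_t channel of user i,
    Q[i] is the n_r x n_r Hermitian PSD uplink covariance of user i.
    """
    n_t = H[0].shape[1]
    A = np.eye(n_t, dtype=complex)
    for Hi, Qi in zip(H, Q):
        A += Hi.conj().T @ Qi @ Hi  # accumulate H_i^† Q_i H_i once per user
    # slogdet is preferred over log(det(...)) for numerical stability
    sign, logdet = np.linalg.slogdet(A)
    return float(logdet)
```

A quick sanity check: with all Q_i = 0 the objective is log det(I) = 0.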
CGP utilizes the important and powerful concept of Hessian conjugacy to deflect the gradient direction so as to achieve a superlinear convergence rate [12] similar to that of the well-known quasi-Newton methods (e.g., the BFGS method). Gradient projection itself is a classical method, originally proposed by Rosen [13] for solving constrained nonlinear programming problems, but its convergence proof was not established until very recently [12]. The framework of CGP for solving (2) is shown in Algorithm 1.

Algorithm 1 Gradient Projection Method
Initialization: Choose the initial conditions Q^(0) = [Q_1^(0), Q_2^(0), ..., Q_K^(0)]^T. Let k = 0.
Main Loop:
1. Calculate the conjugate gradients G_i^(k), i = 1, 2, ..., K.
2. Choose an appropriate step size s_k. Let Q_i'^(k) = Q_i^(k) + s_k G_i^(k), for i = 1, 2, ..., K.
3. Let Q̄^(k) be the projection of Q'^(k) onto Ω+(P), where Ω+(P) ≜ {Q_i, i = 1, ..., K | Q_i ⪰ 0, ∑_{i=1}^{K} Tr{Q_i} ≤ P}.
4. Choose an appropriate step size α_k. Let Q_i^(k+1) = Q_i^(k) + α_k (Q̄_i^(k) − Q_i^(k)), i = 1, 2, ..., K.
5. k = k + 1. If the maximum absolute value of the elements of Q_i^(k) − Q_i^(k−1) is less than ε for i = 1, 2, ..., K, then stop; else go to Step 1.

Due to the complexity of the objective function in (2), we adopt the inexact line search method called Armijo's rule to avoid excessive objective function evaluations, while still enjoying provable convergence [12]. The basic idea of Armijo's rule is that, at each step of the line search, we sacrifice accuracy for efficiency as long as sufficient improvement is obtained. According to Armijo's rule, in the k-th iteration we choose s_k = 1 and α_k = β^{m_k} (the same as in [11]), where m_k is the first non-negative integer m that satisfies

F(Q^(k+1)) − F(Q^(k)) ≥ σ β^m ⟨G^(k), Q̄^(k) − Q^(k)⟩ = σ β^m ∑_{i=1}^{K} Tr[G_i^(k)† (Q̄_i^(k) − Q_i^(k))],        (3)

where 0 < β < 1 and 0 < σ < 1 are fixed scalars. Next, we consider the two major components of the CGP framework: 1) how to compute the conjugate gradient direction G_i, and 2) how to project Q'^(k) onto the set Ω+(P) ≜ {Q_i, i = 1, ..., K | Q_i ⪰ 0, ∑_{i=1}^{K} Tr{Q_i} ≤ P}.
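The Armijo condition (3) amounts to a simple backtracking loop. The sketch below is illustrative Python/NumPy under our own naming (the callable F, the default values of sigma and beta, and the trial cap max_m are our assumptions, not the authors' code):

```python
import numpy as np

def armijo_step(F, Q, Q_bar, G, sigma=0.1, beta=0.5, max_m=20):
    """Armijo's rule (3): find the first non-negative integer m with
    F(Q + beta^m (Q_bar - Q)) - F(Q) >= sigma * beta^m * <G, Q_bar - Q>,
    where <.,.> sums Tr[G_i^† (Q_bar_i - Q_i)] over the K users.

    F maps a list of matrices to a real number; sigma, beta lie in (0, 1).
    """
    f0 = F(Q)
    inner = sum(np.trace(Gi.conj().T @ (Qbi - Qi)).real
                for Gi, Qbi, Qi in zip(G, Q_bar, Q))
    for m in range(max_m):
        alpha = beta ** m
        Q_new = [Qi + alpha * (Qbi - Qi) for Qi, Qbi in zip(Q, Q_bar)]
        if F(Q_new) - f0 >= sigma * alpha * inner:
            return Q_new, alpha
    return Q, 0.0  # no sufficient ascent found within max_m trials
```

Each failed trial only costs one evaluation of F, which is the point of trading line-search accuracy for efficiency.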
A. Computing the Conjugate Gradients

The gradient Ḡ_i ≜ ∇_{Q_i} F(Q) depends on the partial derivatives of F(Q) with respect to Q_i. Using the formula ∂ ln det(A + BXC)/∂X = [C(A + BXC)^{-1} B]^T [11], [14], we can compute the partial derivative of F(Q) with respect to Q_i (by letting A = I + ∑_{j≠i} H_j† Q_j H_j, B = H_i†, X = Q_i, and C = H_i) as

∂F(Q)/∂Q_i = [H_i (I + ∑_{j=1}^{K} H_j† Q_j H_j)^{-1} H_i†]^T.        (4)

Further, from the definition ∇_z f(z) = 2 (∂f(z)/∂z)* [15], we have

Ḡ_i = 2 (∂F(Q)/∂Q_i)* = 2 H_i (I + ∑_{j=1}^{K} H_j† Q_j H_j)^{-1} H_i†.        (5)
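Equation (5) transcribes directly into code. A NumPy sketch with hypothetical names; note that the inverse of the common term is computed once and reused for all K gradients:

```python
import numpy as np

def gradients(H, Q):
    """Gradient (5): G_bar_i = 2 H_i (I + sum_j H_j^† Q_j H_j)^{-1} H_i^†.

    The inner sum is shared by all K gradients, so it is formed only once.
    """
    n_t = H[0].shape[1]
    A = np.eye(n_t, dtype=complex)
    for Hj, Qj in zip(H, Q):
        A += Hj.conj().T @ Qj @ Hj
    A_inv = np.linalg.inv(A)
    return [2.0 * Hi @ A_inv @ Hi.conj().T for Hi in H]
```

Each Ḡ_i produced this way is Hermitian, which is what makes the projection step of Section III-B well posed.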

Then, the conjugate gradient direction can be computed as G_i^(k) = Ḡ_i^(k) + ρ_k G_i^(k−1). In this paper, we adopt the Fletcher and Reeves choice of deflection [12], which is computed as

ρ_k = ||Ḡ^(k)||² / ||Ḡ^(k−1)||².        (6)

The purpose of deflecting the gradient using (6) is to find G^(k), the Hessian-conjugate of G^(k−1). By doing so, we eliminate the zigzagging phenomenon encountered in the conventional gradient projection method and achieve a superlinear convergence rate [12], without actually storing a Hessian approximation matrix as quasi-Newton methods do.

B. Projection onto Ω+(P)

Noting from (5) that Ḡ_i is Hermitian, we have that Q_i'^(k) = Q_i^(k) + s_k G_i^(k) is Hermitian as well. The projection problem then becomes the simultaneous projection of a set of K Hermitian matrices onto the set Ω+(P), which couples all users through the sum power constraint. This differs from [11], where the projection was performed with respect to individual power constraints. To perform this projection, we construct the block diagonal matrix D = Diag{Q_1' ... Q_K'} ∈ C^{(K n_r) × (K n_r)}. It is easy to see that if Q_i' ∈ Ω+(P), i = 1, ..., K, then Tr(D) = ∑_{i=1}^{K} Tr(Q_i') ≤ P and D ⪰ 0. In this paper, we use the Frobenius norm, denoted by ||·||_F, as the matrix distance criterion; the distance between two matrices A and B is defined as ||A − B||_F = (Tr[(A − B)†(A − B)])^{1/2}. Thus, given a block diagonal matrix D, we wish to find a matrix D̃ ∈ Ω+(P) that minimizes ||D̃ − D||_F. For more convenient algebraic manipulation, we instead study the following equivalent optimization problem:

minimize    (1/2) ||D̃ − D||_F²
subject to  Tr(D̃) ≤ P,  D̃ ⪰ 0.        (7)

In (7), the objective function is convex in D̃, the constraint D̃ ⪰ 0 defines the convex cone of positive semidefinite matrices, and Tr(D̃) ≤ P is a linear constraint. Thus, (7) is a convex minimization problem, and we can solve it exactly through its Lagrangian dual. Associating the Hermitian matrix X with the constraint D̃ ⪰ 0 and the multiplier μ with the constraint Tr(D̃) ≤ P, we can write the Lagrangian dual function as

g(X, μ) = min_{D̃} { (1/2) ||D̃ − D||_F² − Tr(X D̃) + μ (Tr(D̃) − P) }.        (8)
(8) Snce g(x, µ) s an unconstraned convex quadratc mnmzaton problem, we can compute the mnmzer of (8) by smply settng the dervatve of (8) (wth respect to D) to zero,.e., ( D D) X + µi = 0. Notng that X = X, we have D = D µi+x. Substtutng D back nto (8), we have g(x,µ) = 1 2 X µi 2 F µp +Tr[(µI X)(D+X µi)] = 1 2 D µi+x 2 F µp D 2. (9) Therefore, the Lagrangan dual problem can be wrtten as Maxmze 1 2 D µi+x 2 F µp D 2 subject to X 0,µ 0. (10) After solvng (10), we can have the optmal soluton to (7) as: D = D µ I+X, (11) where µ and X are the optmal dual solutons to Lagrangan dual problem n (10). Although the Lagrangan dual problem n (10) has a smlar structure as that n the prmal problem n (7) (havng a postve semdefntve matrx constrant), we fnd that the postve semdefnte matrx constrant can ndeed be easly handled. To see ths, we frst ntroduce Moreau Decomposton Theorem from convex analyss. Theorem 1: (Moreau Decomposton [16]) Let K be a closed convex cone. For x,x 1,x 2 C p, the two propertes below are equvalent: 1) x = x 1 +x 2 wth x 1 K, x 2 K o and x 1,x 2 = 0, 2) x 1 = p K (x) and x 2 = p K o(x), where K o {s C p : s,y 0, y K} s called the polar cone of cone K, p K ( ) represents the projecton onto cone K. In fact, the projecton onto a cone K s analogous to the projecton onto a subspace. The only dfference s that the orthogonal subspace s replaced by the polar cone. Now we consder how to project a Hermtan matrx A C n n onto the postve and negatve semdefnte cones. Frst, we can perform egenvalue decomposton on A yeldng A = UDag{λ, = 1,...,n}U, where U s the untary matrx formed by the egenvectors correspondng to the egenvalues λ, = 1,...,n. Then, we have the postve semdefnte and negatve semdefnte projectons of A as follows: A + = UDag{max{λ,0}, = 1,2,...,n}U, (12) A = UDag{mn{λ,0}, = 1,2,...,n}U. 
The proof of (12) and (13) is a straightforward application of Theorem 1, noting that A⁺ ⪰ 0, A⁻ ⪯ 0, ⟨A⁺, A⁻⟩ = 0, A⁺ + A⁻ = A, and that the positive and negative semidefinite cones are polar cones of each other. We now consider the term D − μI + X, which is the only term involving X in the dual objective function. We can rewrite it as D − μI − (−X), where we note that −X ⪯ 0. Finding a negative semidefinite matrix −X such that ||D − μI − (−X)||_F is minimized is thus equivalent to finding the projection of D − μI onto the negative semidefinite cone. From the previous discussion, we immediately have

X* = −(D − μI)⁻.        (14)

Since D − μI = (D − μI)⁺ + (D − μI)⁻, substituting (14) back into the Lagrangian dual objective function yields

min_{X ⪰ 0} ||D − μI + X||_F = ||(D − μI)⁺||_F.        (15)

Thus, the matrix variable X can be removed from the Lagrangian dual problem, which can be rewritten as

maximize    ψ(μ) ≜ −(1/2) ||(D − μI)⁺||_F² − μP + (1/2) ||D||_F²
subject to  μ ≥ 0.        (16)

Suppose that performing the eigenvalue decomposition of D gives D = U Λ U†, where Λ is the diagonal matrix formed by the eigenvalues of D, and U is the unitary matrix formed by the corresponding eigenvectors. Since U is unitary, we have (D − μI)⁺ = U (Λ − μI)⁺ U†. It then follows that

||(D − μI)⁺||_F² = ||(Λ − μI)⁺||_F².        (17)

We denote the eigenvalues in Λ by λ_j, j = 1, 2, ..., K n_r, and suppose that they are sorted in non-increasing order, i.e., Λ = Diag{λ_1, λ_2, ..., λ_{K n_r}} with λ_1 ≥ λ_2 ≥ ... ≥ λ_{K n_r}. It then follows that

||(Λ − μI)⁺||_F² = ∑_{j=1}^{K n_r} (max{0, λ_j − μ})².        (18)

From (18), we can rewrite ψ(μ) as

ψ(μ) = −(1/2) ∑_{j=1}^{K n_r} (max{0, λ_j − μ})² − μP + (1/2) ||D||_F².        (19)

It is evident from (19) that ψ(μ) is continuous and piecewise concave in μ. Generally, piecewise concave maximization problems can be solved by the subgradient method. However, due to the heuristic nature of its step size selection strategy, the subgradient method usually does not perform well. In fact, by exploiting the special structure of (16), we can search for the optimal μ efficiently as follows. Let i index the pieces of ψ(μ), i = 0, 1, ..., K n_r, where piece i corresponds to the interval [λ_{i+1}, λ_i], with the sentinels λ_0 = ∞ and λ_{K n_r + 1} = −∞. Initially, we set i = 0, let the endpoint objective value φ* = ψ(λ_0) = −∞, and let μ* = λ_0. If i > K n_r, the search stops. For a particular index i ≥ 1, setting the derivative of ψ(μ) on piece i to zero,

∂ψ(μ)/∂μ = ∂/∂μ [ −(1/2) ∑_{j=1}^{i} (λ_j − μ)² − μP ] = ∑_{j=1}^{i} (λ_j − μ) − P = 0,        (20)

we obtain

μ̂_i = ( ∑_{j=1}^{i} λ_j − P ) / i.        (21)

(On piece 0, ∂ψ/∂μ = −P ≤ 0, so ψ is non-increasing there and the search proceeds directly to the endpoint λ_1.) Now we consider the following two cases: 1) If μ̂_i ∈ [λ_{i+1}, λ_i] ∩ R₊, where R₊ denotes the set of non-negative real numbers, then we have found the optimal solution for μ: because ψ(μ) is concave in μ, the point with zero first derivative, if it exists, must be the unique global maximizer. Hence, we can let μ* = μ̂_i, and the search is done.
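Once the eigenvalues of D are in hand, the piecewise function ψ(μ) of (19) is cheap to evaluate; a small NumPy sketch (hypothetical names), which the endpoint search below relies on:

```python
import numpy as np

def psi(mu, lam, P):
    """psi(mu) from (19). lam holds the K*n_r real eigenvalues of D.
    Since D is Hermitian, (1/2)||D||_F^2 equals (1/2) * sum(lam**2)."""
    lam = np.asarray(lam, dtype=float)
    return (-0.5 * np.sum(np.maximum(lam - mu, 0.0) ** 2)
            - mu * P + 0.5 * np.sum(lam ** 2))
```

Evaluating ψ at an endpoint λ_i costs O(K n_r), which keeps the overall endpoint search polynomial.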
2) If μ̂_i ∉ [λ_{i+1}, λ_i] ∩ R₊, then the local maximum on the interval [λ_{i+1}, λ_i] ∩ R₊ must be attained at one of the two endpoints. Note that the objective value ψ(λ_i) was already computed in the previous iteration, since by the continuity of the objective function its value at λ_i viewed from piece i equals its value viewed from piece i − 1. Thus, we only need to compute the other endpoint objective value ψ(λ_{i+1}). If ψ(λ_{i+1}) < ψ(λ_i) = φ*, then we know that μ* is the optimal solution; else we let μ* = λ_{i+1}, φ* = ψ(λ_{i+1}), i = i + 1, and continue. Since there are K n_r + 1 intervals in total, the search process takes at most K n_r + 1 steps to find the optimal solution μ*. Hence, this search has polynomial-time complexity O(n_r K). After finding μ*, we can compute D̃* as

D̃* = (D − μ*I)⁺ = U (Λ − μ*I)⁺ U†.        (22)

That is, the projection D̃* is computed by adjusting the eigenvalues of D using μ* while keeping the eigenvectors unchanged. The projection of D onto Ω+(P) is summarized in Algorithm 2.

Algorithm 2 Projection onto Ω+(P)
Initialization:
1. Construct the block diagonal matrix D. Perform the eigenvalue decomposition D = U Λ U†, with the eigenvalues sorted in non-increasing order.
2. Introduce λ_0 = ∞ and λ_{K n_r + 1} = −∞. Let i = 0, φ* = ψ(λ_0) = −∞, and μ* = λ_0.
Main Loop:
1. If i > K n_r, go to the final step; else, for i ≥ 1, let μ̂_i = (∑_{j=1}^{i} λ_j − P)/i.
2. If μ̂_i ∈ [λ_{i+1}, λ_i] ∩ R₊, then let μ* = μ̂_i and go to the final step.
3. Compute ψ(λ_{i+1}). If ψ(λ_{i+1}) < φ*, go to the final step; else let μ* = λ_{i+1}, φ* = ψ(λ_{i+1}), i = i + 1, and continue.
Final Step: Compute D̃* as D̃* = U (Λ − μ*I)⁺ U†.

IV. COMPLEXITY ANALYSIS

In this section, we compare our proposed CGP with other existing methods for solving MIMO BC. Like the IWFs, SD [8], and DD [9], CGP has the desirable linear complexity property. Although CGP also needs to compute gradients in each iteration, the computation is much easier than that in SD, due to the different perspectives in handling MIMO BC. Thus, in this paper, we only compare CGP with the IWFs (Algorithms 1 and 2 in [10]), which appear to be the simplest in the literature so far.
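The whole projection condenses into a few lines of NumPy. The sketch below uses our own naming and structure: instead of walking endpoints as Algorithm 2 does, it scans the pieces directly for the stationary point μ̂_i of (21), which yields the same μ* because ψ(μ) is concave (and it returns μ* = 0 when clamping alone already satisfies the power budget):

```python
import numpy as np

def project_onto_omega(Q_list, P):
    """Project K Hermitian n x n blocks onto
    Omega+(P) = {Q_i >= 0, sum_i Tr(Q_i) <= P}, following (22)."""
    n = Q_list[0].shape[0]
    K = len(Q_list)
    # Block diagonal D of (7); Hermitian since each trial Q'_i is Hermitian
    D = np.zeros((K * n, K * n), dtype=complex)
    for i, Qi in enumerate(Q_list):
        D[i * n:(i + 1) * n, i * n:(i + 1) * n] = Qi
    lam, U = np.linalg.eigh(D)
    lam, U = lam[::-1], U[:, ::-1]          # non-increasing eigenvalue order
    N = K * n
    if np.maximum(lam, 0.0).sum() <= P:
        mu = 0.0                            # budget slack: mu* = 0 by KKT
    else:
        mu = lam[0]                         # fallback; overwritten below
        for i in range(1, N + 1):           # piece i: mu in [lam_{i+1}, lam_i]
            mu_hat = (lam[:i].sum() - P) / i            # stationary point (21)
            lo = lam[i] if i < N else -np.inf
            if max(lo, 0.0) <= mu_hat <= lam[i - 1]:
                mu = mu_hat
                break
    lam_plus = np.maximum(lam - mu, 0.0)    # eigenvalue clamp of (22)
    D_proj = (U * lam_plus) @ U.conj().T
    return [D_proj[i * n:(i + 1) * n, i * n:(i + 1) * n] for i in range(K)]
```

Because the clamp is a spectral function of the block diagonal D, the projected matrix stays block diagonal, so slicing the diagonal blocks back out recovers the per-user covariances.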
For convenience, we will refer to Algorithm 1 and Algorithm 2 in [10] as IWF1 and IWF2, respectively. To better illustrate the comparison, we list the per-iteration complexity of each component of CGP and the IWFs in Table I. In both CGP and the IWFs, the most time-consuming part (the one that grows with K) is the addition of terms of the form H_i† Q_i H_i when computing gradients and effective channels. Since the term (I + ∑_{i=1}^{K} H_i† Q_i H_i) is common to all gradients, we only need to compute this sum once in each iteration. Thus, the

number of such additions per iteration in CGP is K. In IWF1 and IWF2, the number of such additions can be reduced to 2K by cleverly maintaining a running sum of (I + ∑_{j≠i} H_j† Q_j H_j). However, the running sum, which requires K² additions for IWF1, still needs to be computed in the initialization step. Although the basic ideas behind the projection in CGP and water-filling are different, their algorithmic structures are very similar, and they have exactly the same complexity of O(n_r K). The only component unique to CGP is the line search step, which has complexity O(mK) (in terms of additions of H_i† Q_i H_i terms), where m is the number of trials in Armijo's rule. Therefore, the overall per-iteration complexities of CGP and the IWFs are O((m + 1 + n_r)K) and O((2 + n_r)K), respectively. According to our computational experience, the value of m usually lies between two and four. Thus, when n_r is large (e.g., n_r ≥ 4), the overall per-iteration complexities of CGP and the IWFs are comparable. However, as evidenced in the next section, the number of iterations required for convergence in CGP is much smaller than that of the IWFs for large MIMO BC systems, and it is very insensitive to an increase in the number of users. Moreover, CGP has a modest memory requirement: it only requires the solution information from the previous step, as opposed to IWF1, which requires that of the previous K − 1 steps.

TABLE I
PER-ITERATION COMPLEXITY COMPARISON BETWEEN CGP AND IWFS

                                 CGP                  IWFs
  Gradient / Effective Channel   K                    2K
  Line Search                    O(mK)                N/A
  Projection / Water-Filling     O(n_r K)             O(n_r K)
  Overall                        O((m + 1 + n_r)K)    O((2 + n_r)K)

V. NUMERICAL RESULTS

Due to space limitations, we only give here an example of a large MIMO BC system consisting of 100 users with n_t = n_r = 4. The convergence processes are plotted in Fig. 1. It is observed from Fig. 1 that CGP takes only 29 iterations to converge, and it outperforms both IWFs. IWF1's convergence speed drops significantly after the quick improvement in its early stage.
It is also seen in this example that IWF2's performance is inferior to IWF1's; this observation is in accordance with the results in [10]. Both IWF1 and IWF2 fail to converge within 100 iterations. The scalability problem of both IWFs is not surprising: in both IWFs, the most recently updated covariance matrices only account for a fraction of 1/K in the effective channels computation, which means the most recent solution is not used effectively. In all of our numerical examples with different numbers of users, CGP always converged within 30 iterations.

[Fig. 1. Comparison in a 100-user MIMO BC channel with n_t = n_r = 4. The plot shows sum rate (nats/s/Hz) versus number of iterations for CGP, IWF1, and IWF2.]

VI. CONCLUSION

In this paper, we developed an efficient algorithm based on conjugate gradient projection (CGP) for solving the maximum sum rate problem of MIMO BC. We theoretically and numerically analyzed its complexity and convergence behavior. The attractive features of CGP and the encouraging results show that CGP is an excellent method for solving the maximum sum rate problem of large MIMO BC systems.

REFERENCES

[1] A. Goldsmith, S. A. Jafar, N. Jindal, and S. Vishwanath, "Capacity limits of MIMO channels," IEEE J. Select. Areas Commun., vol. 21, no. 1, June.
[2] G. Caire and S. Shamai (Shitz), "On the achievable throughput of a multiantenna Gaussian broadcast channel," IEEE Trans. Inform. Theory, vol. 49, no. 7, July.
[3] M. Costa, "Writing on dirty paper," IEEE Trans. Inform. Theory, vol. 29, no. 3, May.
[4] S. Vishwanath, N. Jindal, and A. Goldsmith, "Duality, achievable rates, and sum-rate capacity of MIMO broadcast channels," IEEE Trans. Inform. Theory, vol. 49, no. 10, Oct.
[5] P. Viswanath and D. N. C. Tse, "Sum capacity of the vector Gaussian broadcast channel and uplink-downlink duality," IEEE Trans. Inform. Theory, vol. 49, no. 8, Aug.
[6] W. Yu, "Uplink-downlink duality via minimax duality," IEEE Trans. Inform. Theory, vol. 52, no. 2, Feb.
[7] T. Lan and W.
Yu, "Input optimization for multi-antenna broadcast channels and per-antenna power constraints," in Proc. IEEE GLOBECOM, Dallas, TX, USA, Nov. 2004.
[8] H. Viswanathan, S. Venkatesan, and H. Huang, "Downlink capacity evaluation of cellular networks with known-interference cancellation," IEEE J. Select. Areas Commun., vol. 21, no. 5, June.
[9] W. Yu, "A dual decomposition approach to the sum power Gaussian vector multiple-access channel sum capacity problem," in Proc. Conf. Information Sciences and Systems (CISS), Baltimore, MD, USA.
[10] N. Jindal, W. Rhee, S. Vishwanath, S. A. Jafar, and A. Goldsmith, "Sum power iterative water-filling for multi-antenna Gaussian broadcast channels," IEEE Trans. Inform. Theory, vol. 51, no. 4, Apr.
[11] S. Ye and R. S. Blum, "Optimized signaling for MIMO interference systems with feedback," IEEE Trans. Signal Processing, vol. 51, no. 11, Nov.
[12] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, 3rd ed. New York, NY: John Wiley & Sons.
[13] J. B. Rosen, "The gradient projection method for nonlinear programming, Part I: Linear constraints," SIAM J. Applied Mathematics, vol. 8.
[14] J. R. Magnus and H. Neudecker, Matrix Differential Calculus with Applications in Statistics and Econometrics. New York: Wiley.
[15] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice-Hall.
[16] J.-B. Hiriart-Urruty and C. Lemaréchal, Fundamentals of Convex Analysis. Berlin: Springer-Verlag, 2001.


Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

The Minimum Universal Cost Flow in an Infeasible Flow Network

The Minimum Universal Cost Flow in an Infeasible Flow Network Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

Lecture 20: November 7

Lecture 20: November 7 0-725/36-725: Convex Optmzaton Fall 205 Lecturer: Ryan Tbshran Lecture 20: November 7 Scrbes: Varsha Chnnaobreddy, Joon Sk Km, Lngyao Zhang Note: LaTeX template courtesy of UC Berkeley EECS dept. Dsclamer:

More information

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 )

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 ) Kangweon-Kyungk Math. Jour. 4 1996), No. 1, pp. 7 16 AN ITERATIVE ROW-ACTION METHOD FOR MULTICOMMODITY TRANSPORTATION PROBLEMS Yong Joon Ryang Abstract. The optmzaton problems wth quadratc constrants often

More information

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009 College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:

More information

Solutions HW #2. minimize. Ax = b. Give the dual problem, and make the implicit equality constraints explicit. Solution.

Solutions HW #2. minimize. Ax = b. Give the dual problem, and make the implicit equality constraints explicit. Solution. Solutons HW #2 Dual of general LP. Fnd the dual functon of the LP mnmze subject to c T x Gx h Ax = b. Gve the dual problem, and make the mplct equalty constrants explct. Soluton. 1. The Lagrangan s L(x,

More information

Grover s Algorithm + Quantum Zeno Effect + Vaidman

Grover s Algorithm + Quantum Zeno Effect + Vaidman Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

IV. Performance Optimization

IV. Performance Optimization IV. Performance Optmzaton A. Steepest descent algorthm defnton how to set up bounds on learnng rate mnmzaton n a lne (varyng learnng rate) momentum learnng examples B. Newton s method defnton Gauss-Newton

More information

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there

More information

4DVAR, according to the name, is a four-dimensional variational method.

4DVAR, according to the name, is a four-dimensional variational method. 4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

Interactive Bi-Level Multi-Objective Integer. Non-linear Programming Problem

Interactive Bi-Level Multi-Objective Integer. Non-linear Programming Problem Appled Mathematcal Scences Vol 5 0 no 65 3 33 Interactve B-Level Mult-Objectve Integer Non-lnear Programmng Problem O E Emam Department of Informaton Systems aculty of Computer Scence and nformaton Helwan

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

Lagrange Multipliers Kernel Trick

Lagrange Multipliers Kernel Trick Lagrange Multplers Kernel Trck Ncholas Ruozz Unversty of Texas at Dallas Based roughly on the sldes of Davd Sontag General Optmzaton A mathematcal detour, we ll come back to SVMs soon! subject to: f x

More information

On a direct solver for linear least squares problems

On a direct solver for linear least squares problems ISSN 2066-6594 Ann. Acad. Rom. Sc. Ser. Math. Appl. Vol. 8, No. 2/2016 On a drect solver for lnear least squares problems Constantn Popa Abstract The Null Space (NS) algorthm s a drect solver for lnear

More information

Power Allocation/Beamforming for DF MIMO Two-Way Relaying: Relay and Network Optimization

Power Allocation/Beamforming for DF MIMO Two-Way Relaying: Relay and Network Optimization Power Allocaton/Beamformng for DF MIMO Two-Way Relayng: Relay and Network Optmzaton Je Gao, Janshu Zhang, Sergy A. Vorobyov, Ha Jang, and Martn Haardt Dept. of Electrcal & Computer Engneerng, Unversty

More information

Resource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud

Resource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud Resource Allocaton wth a Budget Constrant for Computng Independent Tasks n the Cloud Wemng Sh and Bo Hong School of Electrcal and Computer Engneerng Georga Insttute of Technology, USA 2nd IEEE Internatonal

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

The Study of Teaching-learning-based Optimization Algorithm

The Study of Teaching-learning-based Optimization Algorithm Advanced Scence and Technology Letters Vol. (AST 06), pp.05- http://dx.do.org/0.57/astl.06. The Study of Teachng-learnng-based Optmzaton Algorthm u Sun, Yan fu, Lele Kong, Haolang Q,, Helongang Insttute

More information

10-701/ Machine Learning, Fall 2005 Homework 3

10-701/ Machine Learning, Fall 2005 Homework 3 10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40

More information

A New Refinement of Jacobi Method for Solution of Linear System Equations AX=b

A New Refinement of Jacobi Method for Solution of Linear System Equations AX=b Int J Contemp Math Scences, Vol 3, 28, no 17, 819-827 A New Refnement of Jacob Method for Soluton of Lnear System Equatons AX=b F Naem Dafchah Department of Mathematcs, Faculty of Scences Unversty of Gulan,

More information

On the Multicriteria Integer Network Flow Problem

On the Multicriteria Integer Network Flow Problem BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of

More information

Maximal Margin Classifier

Maximal Margin Classifier CS81B/Stat41B: Advanced Topcs n Learnng & Decson Makng Mamal Margn Classfer Lecturer: Mchael Jordan Scrbes: Jana van Greunen Corrected verson - /1/004 1 References/Recommended Readng 1.1 Webstes www.kernel-machnes.org

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus

More information

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem. prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove

More information

Solutions to exam in SF1811 Optimization, Jan 14, 2015

Solutions to exam in SF1811 Optimization, Jan 14, 2015 Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

CHAPTER 7 CONSTRAINED OPTIMIZATION 2: SQP AND GRG

CHAPTER 7 CONSTRAINED OPTIMIZATION 2: SQP AND GRG Chapter 7: Constraned Optmzaton CHAPER 7 CONSRAINED OPIMIZAION : SQP AND GRG Introducton In the prevous chapter we eamned the necessary and suffcent condtons for a constraned optmum. We dd not, however,

More information

Some modelling aspects for the Matlab implementation of MMA

Some modelling aspects for the Matlab implementation of MMA Some modellng aspects for the Matlab mplementaton of MMA Krster Svanberg krlle@math.kth.se Optmzaton and Systems Theory Department of Mathematcs KTH, SE 10044 Stockholm September 2004 1. Consdered optmzaton

More information

Alternating Optimization for Capacity Region of Gaussian MIMO Broadcast Channels with Per-antenna Power Constraint

Alternating Optimization for Capacity Region of Gaussian MIMO Broadcast Channels with Per-antenna Power Constraint Alternatng Optmzaton for Capacty Regon of Gaussan MIMO Broadcast Channels wth Per-antenna Power Constrant Thuy M. Pham, Ronan Farrell, and Le-Nam Tran Department of Electronc Engneerng, Maynooth Unversty,

More information

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence

More information

Time-Varying Systems and Computations Lecture 6

Time-Varying Systems and Computations Lecture 6 Tme-Varyng Systems and Computatons Lecture 6 Klaus Depold 14. Januar 2014 The Kalman Flter The Kalman estmaton flter attempts to estmate the actual state of an unknown dscrete dynamcal system, gven nosy

More information

Min Cut, Fast Cut, Polynomial Identities

Min Cut, Fast Cut, Polynomial Identities Randomzed Algorthms, Summer 016 Mn Cut, Fast Cut, Polynomal Identtes Instructor: Thomas Kesselhem and Kurt Mehlhorn 1 Mn Cuts n Graphs Lecture (5 pages) Throughout ths secton, G = (V, E) s a mult-graph.

More information

Support Vector Machines CS434

Support Vector Machines CS434 Support Vector Machnes CS434 Lnear Separators Many lnear separators exst that perfectly classfy all tranng examples Whch of the lnear separators s the best? + + + + + + + + + Intuton of Margn Consder ponts

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

Least squares cubic splines without B-splines S.K. Lucas

Least squares cubic splines without B-splines S.K. Lucas Least squares cubc splnes wthout B-splnes S.K. Lucas School of Mathematcs and Statstcs, Unversty of South Australa, Mawson Lakes SA 595 e-mal: stephen.lucas@unsa.edu.au Submtted to the Gazette of the Australan

More information

Statistical pattern recognition

Statistical pattern recognition Statstcal pattern recognton Bayes theorem Problem: decdng f a patent has a partcular condton based on a partcular test However, the test s mperfect Someone wth the condton may go undetected (false negatve

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

On the Interval Zoro Symmetric Single-step Procedure for Simultaneous Finding of Polynomial Zeros

On the Interval Zoro Symmetric Single-step Procedure for Simultaneous Finding of Polynomial Zeros Appled Mathematcal Scences, Vol. 5, 2011, no. 75, 3693-3706 On the Interval Zoro Symmetrc Sngle-step Procedure for Smultaneous Fndng of Polynomal Zeros S. F. M. Rusl, M. Mons, M. A. Hassan and W. J. Leong

More information

Computing Correlated Equilibria in Multi-Player Games

Computing Correlated Equilibria in Multi-Player Games Computng Correlated Equlbra n Mult-Player Games Chrstos H. Papadmtrou Presented by Zhanxang Huang December 7th, 2005 1 The Author Dr. Chrstos H. Papadmtrou CS professor at UC Berkley (taught at Harvard,

More information

On the Global Linear Convergence of the ADMM with Multi-Block Variables

On the Global Linear Convergence of the ADMM with Multi-Block Variables On the Global Lnear Convergence of the ADMM wth Mult-Block Varables Tany Ln Shqan Ma Shuzhong Zhang May 31, 01 Abstract The alternatng drecton method of multplers ADMM has been wdely used for solvng structured

More information

Lecture 17: Lee-Sidford Barrier

Lecture 17: Lee-Sidford Barrier CSE 599: Interplay between Convex Optmzaton and Geometry Wnter 2018 Lecturer: Yn Tat Lee Lecture 17: Lee-Sdford Barrer Dsclamer: Please tell me any mstake you notced. In ths lecture, we talk about the

More information

FUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM

FUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM Internatonal Conference on Ceramcs, Bkaner, Inda Internatonal Journal of Modern Physcs: Conference Seres Vol. 22 (2013) 757 761 World Scentfc Publshng Company DOI: 10.1142/S2010194513010982 FUZZY GOAL

More information

Simultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals

Simultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals Smultaneous Optmzaton of Berth Allocaton, Quay Crane Assgnment and Quay Crane Schedulng Problems n Contaner Termnals Necat Aras, Yavuz Türkoğulları, Z. Caner Taşkın, Kuban Altınel Abstract In ths work,

More information

NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS

NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS IJRRAS 8 (3 September 011 www.arpapress.com/volumes/vol8issue3/ijrras_8_3_08.pdf NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS H.O. Bakodah Dept. of Mathematc

More information

THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY. William A. Pearlman. References: S. Arimoto - IEEE Trans. Inform. Thy., Jan.

THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY. William A. Pearlman. References: S. Arimoto - IEEE Trans. Inform. Thy., Jan. THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY Wllam A. Pearlman 2002 References: S. Armoto - IEEE Trans. Inform. Thy., Jan. 1972 R. Blahut - IEEE Trans. Inform. Thy., July 1972 Recall

More information

The Prncpal Component Transform The Prncpal Component Transform s also called Karhunen-Loeve Transform (KLT, Hotellng Transform, oregenvector Transfor

The Prncpal Component Transform The Prncpal Component Transform s also called Karhunen-Loeve Transform (KLT, Hotellng Transform, oregenvector Transfor Prncpal Component Transform Multvarate Random Sgnals A real tme sgnal x(t can be consdered as a random process and ts samples x m (m =0; ;N, 1 a random vector: The mean vector of X s X =[x0; ;x N,1] T

More information

Fisher Linear Discriminant Analysis

Fisher Linear Discriminant Analysis Fsher Lnear Dscrmnant Analyss Max Wellng Department of Computer Scence Unversty of Toronto 10 Kng s College Road Toronto, M5S 3G5 Canada wellng@cs.toronto.edu Abstract Ths s a note to explan Fsher lnear

More information

Convexity preserving interpolation by splines of arbitrary degree

Convexity preserving interpolation by splines of arbitrary degree Computer Scence Journal of Moldova, vol.18, no.1(52), 2010 Convexty preservng nterpolaton by splnes of arbtrary degree Igor Verlan Abstract In the present paper an algorthm of C 2 nterpolaton of dscrete

More information

Support Vector Machines. Vibhav Gogate The University of Texas at dallas

Support Vector Machines. Vibhav Gogate The University of Texas at dallas Support Vector Machnes Vbhav Gogate he Unversty of exas at dallas What We have Learned So Far? 1. Decson rees. Naïve Bayes 3. Lnear Regresson 4. Logstc Regresson 5. Perceptron 6. Neural networks 7. K-Nearest

More information

Chapter - 2. Distribution System Power Flow Analysis

Chapter - 2. Distribution System Power Flow Analysis Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load

More information

Lecture 21: Numerical methods for pricing American type derivatives

Lecture 21: Numerical methods for pricing American type derivatives Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)

More information

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty Addtonal Codes usng Fnte Dfference Method Benamn Moll 1 HJB Equaton for Consumpton-Savng Problem Wthout Uncertanty Before consderng the case wth stochastc ncome n http://www.prnceton.edu/~moll/ HACTproect/HACT_Numercal_Appendx.pdf,

More information

Section 8.3 Polar Form of Complex Numbers

Section 8.3 Polar Form of Complex Numbers 80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng

More information

Module 9. Lecture 6. Duality in Assignment Problems

Module 9. Lecture 6. Duality in Assignment Problems Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept

More information

Perron Vectors of an Irreducible Nonnegative Interval Matrix

Perron Vectors of an Irreducible Nonnegative Interval Matrix Perron Vectors of an Irreducble Nonnegatve Interval Matrx Jr Rohn August 4 2005 Abstract As s well known an rreducble nonnegatve matrx possesses a unquely determned Perron vector. As the man result of

More information

OPTIMISATION. Introduction Single Variable Unconstrained Optimisation Multivariable Unconstrained Optimisation Linear Programming

OPTIMISATION. Introduction Single Variable Unconstrained Optimisation Multivariable Unconstrained Optimisation Linear Programming OPTIMIATION Introducton ngle Varable Unconstraned Optmsaton Multvarable Unconstraned Optmsaton Lnear Programmng Chapter Optmsaton /. Introducton In an engneerng analss, sometmes etremtes, ether mnmum or

More information

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence)

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence) /24/27 Prevew Fbonacc Sequence Longest Common Subsequence Dynamc programmng s a method for solvng complex problems by breakng them down nto smpler sub-problems. It s applcable to problems exhbtng the propertes

More information

6) Derivatives, gradients and Hessian matrices

6) Derivatives, gradients and Hessian matrices 30C00300 Mathematcal Methods for Economsts (6 cr) 6) Dervatves, gradents and Hessan matrces Smon & Blume chapters: 14, 15 Sldes by: Tmo Kuosmanen 1 Outlne Defnton of dervatve functon Dervatve notatons

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13 CME 30: NUMERICAL LINEAR ALGEBRA FALL 005/06 LECTURE 13 GENE H GOLUB 1 Iteratve Methods Very large problems (naturally sparse, from applcatons): teratve methods Structured matrces (even sometmes dense,

More information

EEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming

EEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming EEL 6266 Power System Operaton and Control Chapter 3 Economc Dspatch Usng Dynamc Programmng Pecewse Lnear Cost Functons Common practce many utltes prefer to represent ther generator cost functons as sngle-

More information

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could

More information

General theory of fuzzy connectedness segmentations: reconciliation of two tracks of FC theory

General theory of fuzzy connectedness segmentations: reconciliation of two tracks of FC theory General theory of fuzzy connectedness segmentatons: reconclaton of two tracks of FC theory Krzysztof Chrs Ceselsk Department of Mathematcs, West Vrgna Unversty and MIPG, Department of Radology, Unversty

More information