Conjugate Gradient Projection Approach for MIMO Gaussian Broadcast Channels
Jia Liu, Y. Thomas Hou, Sastry Kompella, Hanif D. Sherali
Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA 24061
Information Technology Division, U.S. Naval Research Laboratory, Washington, DC 20375
Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA 24061

Abstract — Researchers have recently shown that dirty-paper coding (DPC) is the optimal transmission strategy for multiple-input multiple-output Gaussian broadcast channels (MIMO BC). Moreover, by channel duality, the nonconvex MIMO BC sum rate problem can be transformed into the convex dual MIMO multiple-access channel (MIMO MAC) problem with a sum power constraint. In this paper, we design an efficient algorithm based on conjugate gradient projection (CGP) to solve the MIMO BC maximum sum rate problem. Our proposed CGP algorithm solves the dual sum power MAC problem by utilizing the powerful concept of Hessian conjugacy. We also develop a rigorous algorithm to solve the projection problem. We show that CGP enjoys provable convergence, scalability, and efficiency for large MIMO BC systems.

I. INTRODUCTION

Recently, there has been great interest in characterizing the capacity region of multiple-input multiple-output (MIMO) broadcast channels (MIMO BC) and MIMO multiple-access channels (MIMO MAC). Most notably, Weingarten et al. [1] proved the long-open conjecture that dirty-paper coding (DPC) is the capacity-achieving transmission strategy for MIMO BC. Moreover, by the channel duality between MIMO BC and MIMO MAC established in [2]–[4], it can be shown that the nonconvex MIMO BC sum rate problem can be transformed into the convex dual MIMO MAC problem with a sum power constraint. However, although standard interior point convex optimization methods can be used to solve the sum power MIMO MAC problem, their complexity is considerably higher than that of methods exploiting the special structure of the sum power MIMO MAC problem.
Such custom-designed algorithms include the minimax method (MM) [5], the steepest descent (SD) method [6], the dual decomposition (DD) method [7], and two iterative water-filling methods (IWFs) [8]. Among these algorithms, MM does not have linear complexity and is more complex than the others. SD and DD have longer running times per iteration than the IWFs, due to line searches and the inner optimization, respectively. Both IWFs in [8], however, do not scale well as the number of users, denoted by $K$, increases. The reason is that in each iteration of the IWFs, the most recently updated solution accounts for only a fraction $1/K$ of the effective channel computation. The authors of [8] proposed a hybrid algorithm as a remedy. But the hybrid algorithm introduces additional implementation complexity, and its performance depends on an empirical switch timing, which, in turn, is problem-specific. In addition, one of the IWFs in [8], although it converges relatively faster than the other, requires storage for 2K input covariance matrices. These limitations of the existing algorithms motivate us to design an efficient and scalable algorithm with a modest storage requirement for solving large MIMO BC systems.

Our main contribution in this paper is the design of a fast algorithm based on the conjugate gradient projection (CGP) approach. Our algorithm is inspired by [9], where a gradient projection method was used to solve another nonconvex maximum sum rate problem for single-hop MIMO-based ad hoc networks with mutual interference. However, unlike [9], we use conjugate gradient directions instead of gradient directions to eliminate the zigzagging phenomenon. Also, we develop a rigorous algorithm to solve the projection problem. This is in contrast to [9], where gradient projection is handled heuristically. Our proposed CGP has the following attractive features: 1) CGP is extremely fast, and enjoys provable convergence as well as nice scalability. As opposed to the IWFs, the number of iterations required for convergence of CGP is insensitive to the increase of the number of users.
2) CGP has linear complexity. By adopting the inexact line search method called Armijo's rule, we show that CGP has a per-iteration complexity comparable to that of the IWFs, and requires far fewer iterations for convergence in large MIMO BC systems. 3) CGP has a modest memory requirement: it needs only the solution information from the previous step, as opposed to IWF1, which requires the solution information from the previous $K-1$ steps. 4) CGP is very intuitive and easy to implement.

The remainder of this paper is organized as follows. In Section II, we discuss the network model and formulation. Section III introduces the key components of our CGP framework, including conjugate gradient computation and how to perform the projection. We analyze and compare the complexity of CGP with other algorithms in Section IV. Numerical results are presented in Section V. Section VI concludes this paper.

II. SYSTEM MODEL AND PROBLEM FORMULATION

We begin by introducing notation. We use boldface to denote matrices and vectors. For a complex-valued matrix $\mathbf{A}$, $\mathbf{A}^{*}$ and $\mathbf{A}^{\dagger}$ denote the conjugate and conjugate transpose of
$\mathbf{A}$, respectively. $\operatorname{Tr}\{\mathbf{A}\}$ denotes the trace of $\mathbf{A}$. We let $\mathbf{I}$ denote the identity matrix, with dimension determined from context. $\mathbf{A} \succeq 0$ means that $\mathbf{A}$ is Hermitian and positive semidefinite (PSD). $\operatorname{Diag}\{\mathbf{A}_1, \ldots, \mathbf{A}_n\}$ represents the block diagonal matrix with $\mathbf{A}_1, \ldots, \mathbf{A}_n$ on its main diagonal.

Suppose that a MIMO Gaussian broadcast channel has $K$ users, each equipped with $n_r$ antennas, and that the transmitter has $n_t$ antennas. The channel matrix for user $i$ is denoted by $\mathbf{H}_i \in \mathbb{C}^{n_r \times n_t}$. In [2]–[4], [10], it has been shown that the maximum sum rate capacity of the MIMO BC is equal to the maximum sum rate of the dirty-paper coding region, which can be computed by solving the following optimization problem:

$$
\begin{aligned}
\text{Maximize} \quad & \sum_{i=1}^{K} \log \frac{\det\big(\mathbf{I} + \mathbf{H}_i \big(\sum_{j \le i} \boldsymbol{\Gamma}_j\big) \mathbf{H}_i^{\dagger}\big)}{\det\big(\mathbf{I} + \mathbf{H}_i \big(\sum_{j \le i-1} \boldsymbol{\Gamma}_j\big) \mathbf{H}_i^{\dagger}\big)} \\
\text{subject to} \quad & \boldsymbol{\Gamma}_i \succeq 0, \quad i = 1, 2, \ldots, K, \\
& \textstyle\sum_{i=1}^{K} \operatorname{Tr}(\boldsymbol{\Gamma}_i) \le P,
\end{aligned}
\tag{1}
$$

where $\boldsymbol{\Gamma}_i \in \mathbb{C}^{n_t \times n_t}$, $i = 1, \ldots, K$, are the downlink input covariance matrices, and $P$ represents the maximum transmit power at the transmitter. It is evident that (1) is a nonconvex optimization problem. However, the authors of [2], [4] showed that, due to the duality between the MIMO BC and the MIMO MAC, (1) is equivalent to the following MIMO MAC problem with a sum power constraint:

$$
\begin{aligned}
\text{Maximize} \quad & \log \det\Big(\mathbf{I} + \sum_{i=1}^{K} \mathbf{H}_i^{\dagger} \mathbf{Q}_i \mathbf{H}_i\Big) \\
\text{subject to} \quad & \mathbf{Q}_i \succeq 0, \quad i = 1, 2, \ldots, K, \\
& \textstyle\sum_{i=1}^{K} \operatorname{Tr}(\mathbf{Q}_i) \le P,
\end{aligned}
\tag{2}
$$

where $\mathbf{Q}_i \in \mathbb{C}^{n_r \times n_r}$, $i = 1, \ldots, K$, are the uplink input covariance matrices. For convenience, we use $\mathbf{Q} = [\mathbf{Q}_1, \mathbf{Q}_2, \ldots, \mathbf{Q}_K]$ to denote the set of all uplink input covariance matrices, and let $F(\mathbf{Q}) = \log \det\big(\mathbf{I} + \sum_{i=1}^{K} \mathbf{H}_i^{\dagger} \mathbf{Q}_i \mathbf{H}_i\big)$ denote the objective function of (2). After solving (2), we can recover the solution of (1) via an appropriate mapping [2].

III. SOLUTION PROCEDURE

In this paper, we propose an efficient algorithm based on conjugate gradient projection (CGP) to solve (2). CGP utilizes the powerful concept of Hessian conjugacy to deflect the gradient direction appropriately so as to achieve a superlinear convergence rate [11]. This is somewhat similar to the well-known quasi-Newton methods (e.g., the BFGS method). CGP follows the same idea of gradient projection, which was originally proposed by Rosen [12].
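Before turning to the algorithm, it may help to make the dual objective concrete. The sketch below (our illustration, not part of the paper; the function names and the randomly generated test channels are ours) evaluates $F(\mathbf{Q})$ and checks feasibility for problem (2) using numpy.

```python
import numpy as np

def mac_sum_rate(H_list, Q_list):
    """Evaluate F(Q) = log det(I + sum_i H_i^dagger Q_i H_i), in nats.

    H_list[i] is the n_r x n_t channel of user i; Q_list[i] is the
    n_r x n_r uplink input covariance of user i (Hermitian PSD).
    """
    n_t = H_list[0].shape[1]
    S = np.eye(n_t, dtype=complex)
    for H, Q in zip(H_list, Q_list):
        S += H.conj().T @ Q @ H          # add H_i^dagger Q_i H_i
    sign, logdet = np.linalg.slogdet(S)  # S is Hermitian positive definite
    return logdet.real

def is_feasible(Q_list, P, tol=1e-9):
    """Check the constraints of (2): Q_i >= 0 and sum_i Tr(Q_i) <= P."""
    total = sum(np.trace(Q).real for Q in Q_list)
    psd = all(np.min(np.linalg.eigvalsh((Q + Q.conj().T) / 2)) >= -tol
              for Q in Q_list)
    return psd and total <= P + tol

# Example: K = 3 users, n_t = n_r = 2, uniform power split across users.
rng = np.random.default_rng(0)
K, n_t, n_r, P = 3, 2, 2, 10.0
H = [rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))
     for _ in range(K)]
Q = [(P / (K * n_r)) * np.eye(n_r, dtype=complex) for _ in range(K)]
print(is_feasible(Q, P), mac_sum_rate(H, Q) > 0)
```

Any candidate iterate of the algorithm below can be sanity-checked against these two functions.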
During each iteration, CGP projects the conjugate gradient direction to find an improving feasible direction. Its convergence proof can be found in [11]. The framework of CGP for solving (2) is shown in Algorithm 1. Due to the complexity of the objective function in (2), we adopt an inexact line search method called Armijo's rule to avoid excessive objective function evaluations, while still enjoying provable convergence [11]. The basic idea of Armijo's rule is that at each step of the line search, we sacrifice accuracy for efficiency as long as we achieve sufficient improvement.

Algorithm 1 Conjugate Gradient Projection Method
Initialization: Choose initial conditions $\mathbf{Q}^{(0)} = [\mathbf{Q}_1^{(0)}, \mathbf{Q}_2^{(0)}, \ldots, \mathbf{Q}_K^{(0)}]$. Let $k = 0$.
Main Loop:
1. Calculate the conjugate gradients $\mathbf{G}_i^{(k)}$, $i = 1, 2, \ldots, K$.
2. Choose an appropriate step size $s_k$. Let $\mathbf{Q}_i^{\prime(k)} = \mathbf{Q}_i^{(k)} + s_k \mathbf{G}_i^{(k)}$, for $i = 1, 2, \ldots, K$.
3. Let $\bar{\mathbf{Q}}^{(k)}$ be the projection of $\mathbf{Q}^{\prime(k)}$ onto $\Omega_{+}(P)$, where $\Omega_{+}(P) \triangleq \{\mathbf{Q}_i, i = 1, \ldots, K \mid \mathbf{Q}_i \succeq 0, \sum_{i=1}^{K} \operatorname{Tr}\{\mathbf{Q}_i\} \le P\}$.
4. Choose an appropriate step size $\alpha_k$. Let $\mathbf{Q}_i^{(k+1)} = \mathbf{Q}_i^{(k)} + \alpha_k \big(\bar{\mathbf{Q}}_i^{(k)} - \mathbf{Q}_i^{(k)}\big)$, $i = 1, 2, \ldots, K$.
5. $k = k + 1$. If the maximum absolute value of the elements in $\mathbf{Q}_i^{(k)} - \mathbf{Q}_i^{(k-1)}$ is less than $\epsilon$ for $i = 1, \ldots, K$, then stop; else go to Step 1.

According to Armijo's rule, in the $k$-th iteration, we choose $s_k = 1$ and $\alpha_k = \beta^{m_k}$ (the same as in [9]), where $m_k$ is the first non-negative integer $m$ that satisfies

$$
F(\mathbf{Q}^{(k+1)}) - F(\mathbf{Q}^{(k)}) \ge \sigma \beta^{m} \big\langle \mathbf{G}^{(k)}, \bar{\mathbf{Q}}^{(k)} - \mathbf{Q}^{(k)} \big\rangle
= \sigma \beta^{m} \sum_{i=1}^{K} \operatorname{Tr}\Big[\mathbf{G}_i^{(k)\dagger} \big(\bar{\mathbf{Q}}_i^{(k)} - \mathbf{Q}_i^{(k)}\big)\Big],
\tag{3}
$$

where $0 < \beta < 1$ and $0 < \sigma < 1$ are fixed scalars.

Next, we consider the two major components of the CGP framework: 1) how to compute the conjugate gradient direction $\mathbf{G}_i$, and 2) how to project $\mathbf{Q}^{\prime(k)}$ onto the set $\Omega_{+}(P) \triangleq \{\mathbf{Q}_i, i = 1, \ldots, K \mid \mathbf{Q}_i \succeq 0, \sum_{i=1}^{K} \operatorname{Tr}\{\mathbf{Q}_i\} \le P\}$.

A. Computing the Conjugate Gradients

The gradient $\bar{\mathbf{G}}_i \triangleq \nabla_{\mathbf{Q}_i} F(\mathbf{Q})$ depends on the partial derivative of $F(\mathbf{Q})$ with respect to $\mathbf{Q}_i$.
By using the formula $\frac{\partial \ln \det(\mathbf{A} + \mathbf{B}\mathbf{X}\mathbf{C})}{\partial \mathbf{X}} = \big[\mathbf{C}(\mathbf{A} + \mathbf{B}\mathbf{X}\mathbf{C})^{-1}\mathbf{B}\big]^{T}$ [13] and letting $\mathbf{A} = \mathbf{I} + \sum_{j \ne i} \mathbf{H}_j^{\dagger}\mathbf{Q}_j\mathbf{H}_j$, $\mathbf{B} = \mathbf{H}_i^{\dagger}$, $\mathbf{X} = \mathbf{Q}_i$, and $\mathbf{C} = \mathbf{H}_i$, we can compute the partial derivative of $F(\mathbf{Q})$ with respect to $\mathbf{Q}_i$ as

$$
\frac{\partial F(\mathbf{Q})}{\partial \mathbf{Q}_i}
= \frac{\partial}{\partial \mathbf{Q}_i} \log \det\Big(\mathbf{I} + \sum_{j=1}^{K} \mathbf{H}_j^{\dagger}\mathbf{Q}_j\mathbf{H}_j\Big)
= \bigg[\mathbf{H}_i \Big(\mathbf{I} + \sum_{j=1}^{K} \mathbf{H}_j^{\dagger}\mathbf{Q}_j\mathbf{H}_j\Big)^{-1} \mathbf{H}_i^{\dagger}\bigg]^{T}.
\tag{4}
$$

Further, since $\nabla_{\mathbf{z}} f(\mathbf{z}) = 2 \big(\partial f(\mathbf{z}) / \partial \mathbf{z}\big)^{*}$ [14], we have

$$
\bar{\mathbf{G}}_i = \nabla_{\mathbf{Q}_i} F(\mathbf{Q}) = 2\, \mathbf{H}_i \Big(\mathbf{I} + \sum_{j=1}^{K} \mathbf{H}_j^{\dagger}\mathbf{Q}_j\mathbf{H}_j\Big)^{-1} \mathbf{H}_i^{\dagger}.
\tag{5}
$$

Then, the conjugate gradient direction can be computed as $\mathbf{G}_i^{(k)} = \bar{\mathbf{G}}_i^{(k)} + \rho_k \mathbf{G}_i^{(k-1)}$. In this paper, we adopt the Fletcher and Reeves choice of deflection [11], which can be computed as

$$
\rho_k = \frac{\big\|\bar{\mathbf{G}}^{(k)}\big\|^2}{\big\|\bar{\mathbf{G}}^{(k-1)}\big\|^2}.
\tag{6}
$$
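Equations (5) and (6) translate directly into code. The following sketch is our non-authoritative rendition, not the authors' implementation: it computes all $K$ gradients from one shared matrix inverse and forms the Fletcher–Reeves deflection. The factor 2 from (5) is retained, although any positive scaling of the search direction is absorbed by the line search.

```python
import numpy as np

def gradients(H_list, Q_list):
    """Gbar_i = 2 H_i (I + sum_j H_j^dagger Q_j H_j)^{-1} H_i^dagger, per (5).

    The inner sum (and its inverse) is computed once and shared by
    all K gradients, which is what makes the per-iteration cost linear in K.
    """
    n_t = H_list[0].shape[1]
    S = np.eye(n_t, dtype=complex)
    for H, Q in zip(H_list, Q_list):
        S += H.conj().T @ Q @ H
    S_inv = np.linalg.inv(S)
    return [2.0 * H @ S_inv @ H.conj().T for H in H_list]

def conjugate_directions(Gbar, Gbar_prev, G_prev):
    """G_i^(k) = Gbar_i^(k) + rho_k G_i^(k-1), Fletcher-Reeves rho_k per (6)."""
    num = sum(np.linalg.norm(G, 'fro') ** 2 for G in Gbar)
    den = sum(np.linalg.norm(G, 'fro') ** 2 for G in Gbar_prev)
    rho = num / den
    return [G + rho * Gp for G, Gp in zip(Gbar, G_prev)]

# Tiny check: at Q = 0 with H = I, the gradient (5) reduces to 2 H H^dagger.
H = [np.eye(2, dtype=complex)]            # single user, n_r = n_t = 2
Q = [np.zeros((2, 2), dtype=complex)]
print(np.allclose(gradients(H, Q)[0], 2 * np.eye(2)))
```

Note that each $\bar{\mathbf{G}}_i$ produced here is Hermitian, which the projection step below relies on.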
The purpose of deflecting the gradient using (6) is to find $\mathbf{G}^{(k)}$, which is the Hessian-conjugate of $\mathbf{G}^{(k-1)}$. By doing so, we can eliminate the zigzagging phenomenon encountered in the conventional gradient projection method and achieve a superlinear convergence rate [11], without actually storing a large Hessian approximation matrix as in quasi-Newton methods.

B. Performing the Projection

In this section, we develop a rigorous algorithm for projecting the conjugate gradients onto the constraint set of the dual MIMO MAC problem. This is in contrast to the heuristic method in [9], where the authors simply set the first derivative to zero to obtain the solution when solving the constrained Lagrangian dual of the projection problem.

First, noting from (5) that $\bar{\mathbf{G}}_i$ is Hermitian, we have that $\mathbf{Q}_i^{\prime(k)} = \mathbf{Q}_i^{(k)} + s_k \mathbf{G}_i^{(k)}$ is Hermitian as well. Then, the projection problem becomes one of simultaneously projecting a set of $K$ Hermitian matrices onto the set $\Omega_{+}(P)$, which contains a constraint on the sum power of all users. This differs from [9], where the projection was performed with respect to each individual power constraint. Our approach is to construct a block diagonal matrix $\mathbf{D} = \operatorname{Diag}\{\mathbf{Q}_1, \ldots, \mathbf{Q}_K\} \in \mathbb{C}^{(K n_r) \times (K n_r)}$. It is easy to recognize that if $\mathbf{Q}_i \in \Omega_{+}(P)$, $i = 1, \ldots, K$, then $\operatorname{Tr}(\mathbf{D}) = \sum_{i=1}^{K} \operatorname{Tr}(\mathbf{Q}_i) \le P$ and $\mathbf{D} \succeq 0$. In this paper, we use the Frobenius norm, denoted by $\|\cdot\|_F$, as the matrix distance criterion: the distance between two matrices $\mathbf{A}$ and $\mathbf{B}$ is defined as $\|\mathbf{A} - \mathbf{B}\|_F = \big(\operatorname{Tr}\big[(\mathbf{A}-\mathbf{B})^{\dagger}(\mathbf{A}-\mathbf{B})\big]\big)^{1/2}$. Thus, given a block diagonal matrix $\mathbf{D}$, we wish to find a matrix $\tilde{\mathbf{D}} \in \Omega_{+}(P)$ that minimizes $\|\tilde{\mathbf{D}} - \mathbf{D}\|_F$. For more convenient algebraic manipulation, we instead study the following equivalent optimization problem:

$$
\begin{aligned}
\text{Minimize} \quad & \tfrac{1}{2} \big\|\tilde{\mathbf{D}} - \mathbf{D}\big\|_F^2 \\
\text{subject to} \quad & \operatorname{Tr}(\tilde{\mathbf{D}}) \le P, \quad \tilde{\mathbf{D}} \succeq 0.
\end{aligned}
\tag{7}
$$

In (7), the objective function is convex in $\tilde{\mathbf{D}}$, the constraint $\tilde{\mathbf{D}} \succeq 0$ represents the convex cone of positive semidefinite matrices, and the constraint $\operatorname{Tr}(\tilde{\mathbf{D}}) \le P$ is linear. Thus, (7) is a convex minimization problem, and we can solve it exactly by solving its Lagrangian dual problem.
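The block-diagonal stacking that reduces the $K$ simultaneous projections to the single matrix problem (7) can be sketched as follows (our illustration; the helper name is hypothetical):

```python
import numpy as np

def stack_block_diag(Q_list):
    """Build D = Diag{Q_1, ..., Q_K}, one (K n_r) x (K n_r) block diagonal
    matrix whose trace is the total transmit power sum_i Tr(Q_i)."""
    n = sum(Q.shape[0] for Q in Q_list)
    D = np.zeros((n, n), dtype=complex)
    ofs = 0
    for Q in Q_list:
        m = Q.shape[0]
        D[ofs:ofs + m, ofs:ofs + m] = Q
        ofs += m
    return D

# Example: two users with per-user traces 3 and 3, so Tr(D) = 6.
Q1 = np.array([[2.0 + 0j, 0.0], [0.0, 1.0]])
Q2 = np.array([[3.0 + 0j]])
D = stack_block_diag([Q1, Q2])
print(D.shape == (3, 3) and abs(np.trace(D).real - 6.0) < 1e-12)
```

The off-diagonal blocks of $\mathbf{D}$ are zero, so projecting $\mathbf{D}$ and then reading back its diagonal blocks recovers the projected per-user covariances.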
Associating a Hermitian matrix $\mathbf{X}$ with the constraint $\tilde{\mathbf{D}} \succeq 0$ and $\mu$ with the constraint $\operatorname{Tr}(\tilde{\mathbf{D}}) \le P$, we can write the Lagrangian dual function as

$$
g(\mathbf{X}, \mu) = \min_{\tilde{\mathbf{D}}} \Big\{ \tfrac{1}{2} \big\|\tilde{\mathbf{D}} - \mathbf{D}\big\|_F^2 - \operatorname{Tr}\big(\mathbf{X}\tilde{\mathbf{D}}\big) + \mu\big(\operatorname{Tr}(\tilde{\mathbf{D}}) - P\big) \Big\}.
\tag{8}
$$

Since (8) is an unconstrained convex quadratic minimization problem, we can compute its minimizer by simply setting the derivative of (8) with respect to $\tilde{\mathbf{D}}$ to zero, i.e., $(\tilde{\mathbf{D}} - \mathbf{D}) - \mathbf{X} + \mu\mathbf{I} = 0$. Noting that $\mathbf{X} = \mathbf{X}^{\dagger}$, we have $\tilde{\mathbf{D}} = \mathbf{D} - \mu\mathbf{I} + \mathbf{X}$. Substituting $\tilde{\mathbf{D}}$ back into (8), we obtain

$$
g(\mathbf{X}, \mu) = -\tfrac{1}{2}\big\|\mathbf{D} - \mu\mathbf{I} + \mathbf{X}\big\|_F^2 - \mu P + \tfrac{1}{2}\|\mathbf{D}\|_F^2.
\tag{9}
$$

Therefore, the Lagrangian dual problem can be written as

$$
\begin{aligned}
\text{Maximize} \quad & -\tfrac{1}{2}\big\|\mathbf{D} - \mu\mathbf{I} + \mathbf{X}\big\|_F^2 - \mu P + \tfrac{1}{2}\|\mathbf{D}\|_F^2 \\
\text{subject to} \quad & \mathbf{X} \succeq 0, \quad \mu \ge 0.
\end{aligned}
\tag{10}
$$

After solving (10), we obtain the optimal solution to (7) as

$$
\tilde{\mathbf{D}}^{*} = \mathbf{D} - \mu^{*}\mathbf{I} + \mathbf{X}^{*},
\tag{11}
$$

where $\mu^{*}$ and $\mathbf{X}^{*}$ are the optimal dual solutions to the Lagrangian dual problem (10). Although (10) has a structure similar to that of the primal problem (7) (it retains a positive semidefinite matrix constraint), we find that this constraint can in fact be handled easily. To see this, we first introduce the Moreau decomposition theorem [15] from convex analysis.

Theorem 1 (Moreau Decomposition): Let $\mathcal{K}$ be a closed convex cone. For $\mathbf{x}, \mathbf{x}_1, \mathbf{x}_2 \in \mathbb{C}^p$, the two statements below are equivalent:
1) $\mathbf{x} = \mathbf{x}_1 + \mathbf{x}_2$, with $\mathbf{x}_1 \in \mathcal{K}$, $\mathbf{x}_2 \in \mathcal{K}^{o}$, and $\langle \mathbf{x}_1, \mathbf{x}_2 \rangle = 0$;
2) $\mathbf{x}_1 = p_{\mathcal{K}}(\mathbf{x})$ and $\mathbf{x}_2 = p_{\mathcal{K}^{o}}(\mathbf{x})$,
where $\mathcal{K}^{o} \triangleq \{\mathbf{s} \in \mathbb{C}^p : \langle \mathbf{s}, \mathbf{y} \rangle \le 0, \ \forall \mathbf{y} \in \mathcal{K}\}$ is called the polar cone of $\mathcal{K}$, and $p_{\mathcal{K}}(\cdot)$ represents the projection onto cone $\mathcal{K}$.

In fact, projection onto a cone $\mathcal{K}$ is analogous to projection onto a subspace; the only difference is that the orthogonal subspace is replaced by the polar cone. Now we consider how to project a Hermitian matrix $\mathbf{A} \in \mathbb{C}^{n \times n}$ onto the positive and negative semidefinite cones. First, we perform the eigenvalue decomposition $\mathbf{A} = \mathbf{U}_{\mathbf{A}} \operatorname{Diag}\{\lambda_i, i = 1, \ldots, n\} \mathbf{U}_{\mathbf{A}}^{\dagger}$, where $\mathbf{U}_{\mathbf{A}}$ is the unitary matrix formed by the eigenvectors corresponding to the eigenvalues $\lambda_i$, $i = 1, \ldots, n$.
Then, the positive semidefinite and negative semidefinite projections of $\mathbf{A}$ are as follows:

$$
\mathbf{A}_{+} = \mathbf{U}_{\mathbf{A}} \operatorname{Diag}\{\max\{\lambda_i, 0\}, i = 1, \ldots, n\} \mathbf{U}_{\mathbf{A}}^{\dagger},
\tag{12}
$$

$$
\mathbf{A}_{-} = \mathbf{U}_{\mathbf{A}} \operatorname{Diag}\{\min\{\lambda_i, 0\}, i = 1, \ldots, n\} \mathbf{U}_{\mathbf{A}}^{\dagger}.
\tag{13}
$$

The proof of (12) and (13) is a straightforward application of Theorem 1, noting that $\mathbf{A}_{+} \succeq 0$, $\mathbf{A}_{-} \preceq 0$, $\langle \mathbf{A}_{+}, \mathbf{A}_{-} \rangle = 0$, $\mathbf{A}_{+} + \mathbf{A}_{-} = \mathbf{A}$, and that the positive semidefinite cone and the negative semidefinite cone are polar cones of each other.

We now consider the term $\|\mathbf{D} - \mu\mathbf{I} + \mathbf{X}\|_F^2$, which is the only term involving $\mathbf{X}$ in the dual objective function. We can rewrite it as $\|\mathbf{D} - \mu\mathbf{I} - (-\mathbf{X})\|_F^2$, where we note that $-\mathbf{X} \preceq 0$. Finding a negative semidefinite matrix $-\mathbf{X}$ such that $\|\mathbf{D} - \mu\mathbf{I} - (-\mathbf{X})\|_F^2$ is minimized is equivalent to finding the projection of $\mathbf{D} - \mu\mathbf{I}$ onto the negative semidefinite cone. From the previous discussion, we immediately have

$$
\mathbf{X}^{*} = -(\mathbf{D} - \mu\mathbf{I})_{-}.
\tag{14}
$$

Since $\mathbf{D} - \mu\mathbf{I} = (\mathbf{D} - \mu\mathbf{I})_{+} + (\mathbf{D} - \mu\mathbf{I})_{-}$, substituting (14) back into the Lagrangian dual objective function yields

$$
\min_{\mathbf{X} \succeq 0} \big\|\mathbf{D} - \mu\mathbf{I} + \mathbf{X}\big\|_F^2 = \big\|(\mathbf{D} - \mu\mathbf{I})_{+}\big\|_F^2.
\tag{15}
$$
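The cone projections (12)–(13) and the Moreau decomposition of Theorem 1 are easy to verify numerically. The sketch below (ours, using a randomly generated Hermitian test matrix) checks that $\mathbf{A}_{+} + \mathbf{A}_{-} = \mathbf{A}$ and $\langle \mathbf{A}_{+}, \mathbf{A}_{-} \rangle = 0$ under the trace inner product.

```python
import numpy as np

def psd_part(A):
    """A_+ = U diag(max(lam, 0)) U^dagger, per (12); A assumed Hermitian."""
    lam, U = np.linalg.eigh(A)
    return (U * np.maximum(lam, 0.0)) @ U.conj().T

def nsd_part(A):
    """A_- = U diag(min(lam, 0)) U^dagger, per (13)."""
    lam, U = np.linalg.eigh(A)
    return (U * np.minimum(lam, 0.0)) @ U.conj().T

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2                 # random Hermitian matrix
Ap, Am = psd_part(A), nsd_part(A)

# Moreau decomposition (Theorem 1): A = A_+ + A_- with <A_+, A_-> = 0,
# since max(lam, 0) * min(lam, 0) = 0 for every eigenvalue lam.
print(np.allclose(Ap + Am, A))
print(abs(np.trace(Ap.conj().T @ Am)) < 1e-9)
```

The same `psd_part` routine is exactly the clipping of eigenvalues used by the projection (22) once $\mu^{*}$ is known.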
Thus, the matrix variable $\mathbf{X}$ can be removed from the Lagrangian dual problem, which can be rewritten as

$$
\begin{aligned}
\text{Maximize} \quad & \psi(\mu) \triangleq -\tfrac{1}{2}\big\|(\mathbf{D} - \mu\mathbf{I})_{+}\big\|_F^2 - \mu P + \tfrac{1}{2}\|\mathbf{D}\|_F^2 \\
\text{subject to} \quad & \mu \ge 0.
\end{aligned}
\tag{16}
$$

Suppose that the eigenvalue decomposition of $\mathbf{D}$ is $\mathbf{D} = \mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^{\dagger}$, where $\boldsymbol{\Lambda}$ is the diagonal matrix formed by the eigenvalues of $\mathbf{D}$, and $\mathbf{U}$ is the unitary matrix formed by the corresponding eigenvectors. Since $\mathbf{U}$ is unitary, we have $(\mathbf{D} - \mu\mathbf{I})_{+} = \mathbf{U}(\boldsymbol{\Lambda} - \mu\mathbf{I})_{+}\mathbf{U}^{\dagger}$. It then follows that

$$
\big\|(\mathbf{D} - \mu\mathbf{I})_{+}\big\|_F^2 = \big\|(\boldsymbol{\Lambda} - \mu\mathbf{I})_{+}\big\|_F^2.
\tag{17}
$$

We denote the eigenvalues in $\boldsymbol{\Lambda}$ by $\lambda_j$, $j = 1, 2, \ldots, K n_r$, and suppose that they are sorted in non-increasing order, i.e., $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{K n_r}$. It then follows that

$$
\big\|(\boldsymbol{\Lambda} - \mu\mathbf{I})_{+}\big\|_F^2 = \sum_{j=1}^{K n_r} \big(\max\{0, \lambda_j - \mu\}\big)^2.
\tag{18}
$$

From (18), we can rewrite $\psi(\mu)$ as

$$
\psi(\mu) = -\frac{1}{2} \sum_{j=1}^{K n_r} \big(\max\{0, \lambda_j - \mu\}\big)^2 - \mu P + \frac{1}{2}\|\mathbf{D}\|_F^2.
\tag{19}
$$

It is evident from (19) that $\psi(\mu)$ is continuous and piecewise concave in $\mu$. Generally, piecewise concave maximization problems can be solved by the subgradient method. However, due to the heuristic nature of its step size selection strategy, the subgradient algorithm usually does not perform well. In fact, by exploiting its special structure, (16) can be solved efficiently. We can search for the optimal value of $\mu$ as follows. Let $i$ index the pieces of $\psi(\mu)$, $i = 0, 1, \ldots, K n_r$. Initially we set $i = 0$ and increase $i$ subsequently. We also introduce $\lambda_0 \triangleq \infty$ and $\lambda_{K n_r + 1} \triangleq -\infty$. We let the endpoint objective value $\psi(\lambda_0) = 0$, $\phi^{*} = \psi(\lambda_0)$, and $\mu^{*} = \lambda_0$. If $i > K n_r$, the search stops. For a particular index $i$, by setting

$$
\frac{\partial \psi_i(\mu)}{\partial \mu} = \frac{\partial}{\partial \mu}\bigg( -\frac{1}{2} \sum_{j=1}^{i} (\lambda_j - \mu)^2 - \mu P \bigg) = \sum_{j=1}^{i} (\lambda_j - \mu) - P = 0,
\tag{20}
$$

we have

$$
\mu_i^{*} = \frac{\sum_{j=1}^{i} \lambda_j - P}{i}.
\tag{21}
$$

Now we consider the following two cases. 1) If $\mu_i^{*} \in [\lambda_{i+1}, \lambda_i] \cap \mathbb{R}_{+}$, where $\mathbb{R}_{+}$ denotes the set of non-negative real numbers, then we have found the optimal solution for $\mu$: because $\psi(\mu)$ is concave in $\mu$, the point with a zero-valued first derivative, if it exists, must be the unique global maximizer. Hence, we can let $\mu^{*} = \mu_i^{*}$, and the search is done. 2) If $\mu_i^{*} \notin [\lambda_{i+1}, \lambda_i] \cap \mathbb{R}_{+}$, then the local maximum on the interval $[\lambda_{i+1}, \lambda_i] \cap \mathbb{R}_{+}$ must be achieved at one of the two endpoints.
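The piecewise search implied by (20)–(21) can be sketched as follows (our non-authoritative rendition, not the authors' code): it scans the pieces of $\psi(\mu)$ over the sorted eigenvalues, returns the interior zero-derivative point when it falls inside its piece, and otherwise tracks the best non-negative endpoint.

```python
import numpy as np

def psi(mu, lam, P):
    """psi(mu) of (19), up to the constant (1/2)||D||_F^2."""
    return -0.5 * np.sum(np.maximum(0.0, lam - mu) ** 2) - mu * P

def find_mu_star(lam, P):
    """Search the pieces of psi(mu) for the maximizer mu* >= 0.

    lam: eigenvalues of D. On piece i (mu between lam[i] and lam[i-1]),
    setting the derivative to zero as in (20) gives
    mu_i* = (sum of the i largest eigenvalues - P) / i, per (21).
    """
    lam = np.sort(np.asarray(lam, dtype=float))[::-1]  # non-increasing
    best_mu, best_val = 0.0, psi(0.0, lam, P)
    csum = 0.0
    for i in range(1, len(lam) + 1):
        csum += lam[i - 1]
        mu_i = (csum - P) / i
        lo = lam[i] if i < len(lam) else -np.inf
        if lo <= mu_i <= lam[i - 1] and mu_i >= 0.0:
            return mu_i                  # interior zero-derivative point
        # otherwise remember the best non-negative endpoint seen so far
        mu_end = max(lam[i - 1], 0.0)
        val = psi(mu_end, lam, P)
        if val > best_val:
            best_mu, best_val = mu_end, val
    return best_mu

# Sanity check: with the resulting "water level" mu*, the total clipped
# power sum_j max(0, lam_j - mu*) equals P when the constraint is active.
lam = [5.0, 3.0, 1.0, -2.0]
P = 4.0
mu = find_mu_star(lam, P)
print(round(sum(max(0.0, l - mu) for l in lam), 6) == P)
```

Because each piece is visited at most once, the scan costs $O(K n_r)$, matching the complexity claimed for the search.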
Note that the objective value $\psi(\lambda_i)$ has been computed in the previous iteration: by the continuity of the objective function, $\psi_i(\lambda_i) = \psi_{i-1}(\lambda_i)$. Thus, we only need to compute the other endpoint objective value $\psi(\lambda_{i+1})$. If $\psi(\lambda_{i+1}) < \psi(\lambda_i) = \phi^{*}$, then we know $\mu^{*}$ is the optimal solution; else we let $\mu^{*} = \lambda_{i+1}$, $\phi^{*} = \psi(\lambda_{i+1})$, $i = i + 1$, and continue. Since there are $K n_r + 1$ intervals in total, the search process takes at most $K n_r + 1$ steps to find the optimal solution $\mu^{*}$. Hence, this search has polynomial-time complexity $O(n_r K)$. After finding $\mu^{*}$, we can compute $\tilde{\mathbf{D}}^{*}$ as

$$
\tilde{\mathbf{D}}^{*} = (\mathbf{D} - \mu^{*}\mathbf{I})_{+} = \mathbf{U}(\boldsymbol{\Lambda} - \mu^{*}\mathbf{I})_{+}\mathbf{U}^{\dagger}.
\tag{22}
$$

That is, the projection $\tilde{\mathbf{D}}^{*}$ is computed by adjusting the eigenvalues of $\mathbf{D}$ using $\mu^{*}$ while keeping the eigenvectors unchanged. The projection of $\mathbf{D}$ onto $\Omega_{+}(P)$ is summarized in Algorithm 2.

Algorithm 2 Performing the Projection
Initialization:
1. Construct a block diagonal matrix $\mathbf{D}$. Perform the eigenvalue decomposition $\mathbf{D} = \mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^{\dagger}$, and sort the eigenvalues in non-increasing order.
2. Introduce $\lambda_0 = \infty$ and $\lambda_{K n_r + 1} = -\infty$. Let $i = 0$. Let the endpoint objective value $\psi(\lambda_0) = 0$, $\phi^{*} = \psi(\lambda_0)$, and $\mu^{*} = \lambda_0$.
Main Loop:
1. If $i > K n_r$, go to the final step; else let $\mu_i^{*} = \big(\sum_{j=1}^{i} \lambda_j - P\big)/i$.
2. If $\mu_i^{*} \in [\lambda_{i+1}, \lambda_i] \cap \mathbb{R}_{+}$, then let $\mu^{*} = \mu_i^{*}$ and go to the final step.
3. Compute $\psi(\lambda_{i+1})$. If $\psi(\lambda_{i+1}) < \phi^{*}$, then go to the final step; else let $\mu^{*} = \lambda_{i+1}$, $\phi^{*} = \psi(\lambda_{i+1})$, $i = i + 1$, and continue.
Final Step: Compute $\tilde{\mathbf{D}}^{*} = \mathbf{U}(\boldsymbol{\Lambda} - \mu^{*}\mathbf{I})_{+}\mathbf{U}^{\dagger}$.

IV. COMPLEXITY ANALYSIS

In this section, we compare our proposed CGP with existing methods for solving MIMO BC. Like the IWFs [8], SD [6], and DD [7], CGP has the linear complexity property. Although CGP also needs to compute gradients in each iteration, the computation is much easier than that in SD due to the different perspectives in handling MIMO BC. In this paper, we compare CGP with the IWFs (Algorithms 1 and 2 in [8]), as they have the lowest complexity among the aforementioned algorithms. For convenience, we will refer to Algorithm 1 and Algorithm 2 in [8] as IWF1 and IWF2, respectively.
TABLE I
PER-ITERATION COMPLEXITY COMPARISON BETWEEN CGP AND THE IWFS

|                              | CGP              | IWFs             |
|------------------------------|------------------|------------------|
| Gradient / Effective Channel | $K$              | $2K$             |
| Line Search                  | $O(mK)$          | N/A              |
| Projection / Water-Filling   | $O(n_r K)$       | $O(n_r K)$       |
| Overall                      | $O((m + n_r)K)$  | $O((2 + n_r)K)$  |

To better illustrate the comparison, we list in Table I the per-iteration complexity of each component of CGP and the IWFs. For both CGP and the IWFs, the most time-consuming part (growing with $K$) is the addition of terms of the form $\mathbf{H}_i^{\dagger}\mathbf{Q}_i\mathbf{H}_i$ when computing the gradients and the effective channels. Since the term
$\big(\mathbf{I} + \sum_{i=1}^{K} \mathbf{H}_i^{\dagger}\mathbf{Q}_i\mathbf{H}_i\big)$ is common to all gradients, we need to compute this sum only once in each iteration. Thus, the number of such additions per iteration for CGP is $K$. In IWF1 and IWF2, the number of such additions can be reduced to $2K$ by a clever way of maintaining a running sum of $\big(\mathbf{I} + \sum_{j} \mathbf{H}_j^{\dagger}\mathbf{Q}_j\mathbf{H}_j\big)$. However, the running sum, which requires $K$ additions for IWF1, still needs to be computed in the initialization step. On the other hand, although the basic ideas of the projection in CGP and of water-filling are different, their algorithm structures are very similar, and they have exactly the same complexity of $O(n_r K)$. The only component unique to CGP is the line search step, which has complexity $O(mK)$ (in terms of additions of $\mathbf{H}_i^{\dagger}\mathbf{Q}_i\mathbf{H}_i$ terms), where $m$ is the number of trials in Armijo's rule. Therefore, the overall per-iteration complexities of CGP and the IWFs are $O((m + n_r)K)$ and $O((2 + n_r)K)$, respectively. According to our computational experience, the value of $m$ usually lies between two and four. Thus, when $n_r$ is large (e.g., $n_r \ge 4$), the overall per-iteration complexities of CGP and the IWFs are comparable. In the next section, however, we show that the number of iterations required for convergence of CGP is much smaller than that of the IWFs for large MIMO BC systems, and that it is insensitive to the increase of the number of users. Moreover, CGP has a modest memory requirement: it requires only the solution information from the previous step, as opposed to IWF1, which requires that of the previous $K-1$ steps. This means that IWF1 requires storage for 2K input covariance matrices.

V. NUMERICAL RESULTS

We show results for a large MIMO BC system consisting of 100 users with $n_t = n_r = 4$. The convergence processes are plotted in Fig. 1. It is observed from Fig. 1 that CGP takes only 29 iterations to converge, and that it outperforms both IWFs. IWF1's convergence speed drops significantly after its initial improvement. It is also seen that IWF2's performance is inferior to IWF1's; this observation is consistent with the results in [8]. Both IWF1 and IWF2 fail to converge within 100 iterations.
The scalability problem of both IWFs is not surprising: in both algorithms, the most recently updated covariance matrices account for only a fraction $1/K$ of the effective channel computation, which means the most recent solution is not used effectively. In all of our numerical examples with different numbers of users, CGP is found to converge within 30 iterations.

VI. CONCLUSION

In this paper, we developed an efficient algorithm based on conjugate gradient projection (CGP) for solving the maximum sum rate problem of MIMO BC. We analyzed its complexity and convergence behavior, and showed that CGP enjoys provable convergence, scalability, and efficiency. The attractive features of CGP and the encouraging results indicate that CGP is an excellent method for solving the maximum sum rate problem for large MIMO BC systems.

Fig. 1. Comparison in a 100-user MIMO BC channel with $n_t = n_r = 4$ (sum rate in nats/s/Hz versus number of iterations).

ACKNOWLEDGEMENTS

The work of Y. T. Hou and J. Liu has been supported in part by the Office of Naval Research (ONR) under Grant N. The work of H. D. Sherali has been supported in part by an NSF grant.

REFERENCES

[1] H. Weingarten, Y. Steinberg, and S. Shamai (Shitz), "The capacity region of the Gaussian multiple-input multiple-output broadcast channel," IEEE Trans. Inf. Theory, vol. 52, no. 9, Sep. 2006.
[2] S. Vishwanath, N. Jindal, and A. Goldsmith, "Duality, achievable rates, and sum-rate capacity of MIMO broadcast channels," IEEE Trans. Inf. Theory, vol. 49, no. 10, Oct. 2003.
[3] P. Viswanath and D. N. C. Tse, "Sum capacity of the vector Gaussian broadcast channel and uplink-downlink duality," IEEE Trans. Inf. Theory, vol. 49, no. 8, Aug. 2003.
[4] W. Yu, "Uplink-downlink duality via minimax duality," IEEE Trans. Inf. Theory, vol. 52, no. 2, Feb. 2006.
[5] T. Lan and W. Yu, "Input optimization for multi-antenna broadcast channels with per-antenna power constraints," in Proc. IEEE GLOBECOM, Dallas, TX, USA, Nov. 2004.
[6] H. Viswanathan, S. Venkatesan, and H.
Huang, "Downlink capacity evaluation of cellular networks with known-interference cancellation," IEEE J. Sel. Areas Commun., vol. 21, no. 5, Jun. 2003.
[7] W. Yu, "A dual decomposition approach to the sum power Gaussian vector multiple-access channel sum capacity problem," in Proc. Conf. Information Sciences and Systems (CISS), Baltimore, MD, USA, 2003.
[8] N. Jindal, W. Rhee, S. Vishwanath, S. A. Jafar, and A. Goldsmith, "Sum power iterative water-filling for multi-antenna Gaussian broadcast channels," IEEE Trans. Inf. Theory, vol. 51, no. 4, Apr. 2005.
[9] S. Ye and R. S. Blum, "Optimized signaling for MIMO interference systems with feedback," IEEE Trans. Signal Process., vol. 51, no. 11, Nov. 2003.
[10] G. Caire and S. Shamai (Shitz), "On the achievable throughput of a multiantenna Gaussian broadcast channel," IEEE Trans. Inf. Theory, vol. 49, no. 7, Jul. 2003.
[11] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, 3rd ed. New York, NY: John Wiley & Sons, 2006.
[12] J. B. Rosen, "The gradient projection method for nonlinear programming, Part I: Linear constraints," SIAM J. Applied Mathematics, vol. 8, 1960.
[13] J. R. Magnus and H. Neudecker, Matrix Differential Calculus with Applications in Statistics and Econometrics. New York: Wiley.
[14] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice-Hall.
[15] J.-B. Hiriart-Urruty and C. Lemaréchal, Fundamentals of Convex Analysis. Berlin: Springer-Verlag, 2001.
More informationSingular Value Decomposition: Theory and Applications
Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real
More informationVector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence.
Vector Norms Chapter 7 Iteratve Technques n Matrx Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematcs Unversty of Calforna, Berkeley Math 128B Numercal Analyss Defnton A vector norm
More informationDifference Equations
Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1
More informationAlternating Optimization for Capacity Region of Gaussian MIMO Broadcast Channels with Per-antenna Power Constraint
Alternatng Optmzaton for Capacty Regon of Gaussan MIMO Broadcast Channels wth Per-antenna Power Constrant Thuy M. Pham, Ronan Farrell, and Le-Nam Tran Department of Electronc Engneerng, Maynooth Unversty,
More informationGrover s Algorithm + Quantum Zeno Effect + Vaidman
Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the
More informationInteractive Bi-Level Multi-Objective Integer. Non-linear Programming Problem
Appled Mathematcal Scences Vol 5 0 no 65 3 33 Interactve B-Level Mult-Objectve Integer Non-lnear Programmng Problem O E Emam Department of Informaton Systems aculty of Computer Scence and nformaton Helwan
More informationOn a direct solver for linear least squares problems
ISSN 2066-6594 Ann. Acad. Rom. Sc. Ser. Math. Appl. Vol. 8, No. 2/2016 On a drect solver for lnear least squares problems Constantn Popa Abstract The Null Space (NS) algorthm s a drect solver for lnear
More informationSome modelling aspects for the Matlab implementation of MMA
Some modellng aspects for the Matlab mplementaton of MMA Krster Svanberg krlle@math.kth.se Optmzaton and Systems Theory Department of Mathematcs KTH, SE 10044 Stockholm September 2004 1. Consdered optmzaton
More informationCSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography
CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve
More informationLecture 20: November 7
0-725/36-725: Convex Optmzaton Fall 205 Lecturer: Ryan Tbshran Lecture 20: November 7 Scrbes: Varsha Chnnaobreddy, Joon Sk Km, Lngyao Zhang Note: LaTeX template courtesy of UC Berkeley EECS dept. Dsclamer:
More informationKernel Methods and SVMs Extension
Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general
More informationSupplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso
Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed
More informationSolutions to exam in SF1811 Optimization, Jan 14, 2015
Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable
More informationThe Study of Teaching-learning-based Optimization Algorithm
Advanced Scence and Technology Letters Vol. (AST 06), pp.05- http://dx.do.org/0.57/astl.06. The Study of Teachng-learnng-based Optmzaton Algorthm u Sun, Yan fu, Lele Kong, Haolang Q,, Helongang Insttute
More informationResource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud
Resource Allocaton wth a Budget Constrant for Computng Independent Tasks n the Cloud Wemng Sh and Bo Hong School of Electrcal and Computer Engneerng Georga Insttute of Technology, USA 2nd IEEE Internatonal
More informationThe Order Relation and Trace Inequalities for. Hermitian Operators
Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence
More informationLecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.
prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove
More informationMaximal Margin Classifier
CS81B/Stat41B: Advanced Topcs n Learnng & Decson Makng Mamal Margn Classfer Lecturer: Mchael Jordan Scrbes: Jana van Greunen Corrected verson - /1/004 1 References/Recommended Readng 1.1 Webstes www.kernel-machnes.org
More informationChapter - 2. Distribution System Power Flow Analysis
Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load
More information10-701/ Machine Learning, Fall 2005 Homework 3
10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40
More informationCHAPTER 7 CONSTRAINED OPTIMIZATION 2: SQP AND GRG
Chapter 7: Constraned Optmzaton CHAPER 7 CONSRAINED OPIMIZAION : SQP AND GRG Introducton In the prevous chapter we eamned the necessary and suffcent condtons for a constraned optmum. We dd not, however,
More informationOn the Multicriteria Integer Network Flow Problem
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of
More informationReport on Image warping
Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.
More informationn α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0
MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector
More informationLecture 12: Discrete Laplacian
Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16
STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus
More informationModule 9. Lecture 6. Duality in Assignment Problems
Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept
More informationLecture 17: Lee-Sidford Barrier
CSE 599: Interplay between Convex Optmzaton and Geometry Wnter 2018 Lecturer: Yn Tat Lee Lecture 17: Lee-Sdford Barrer Dsclamer: Please tell me any mstake you notced. In ths lecture, we talk about the
More informationThe Prncpal Component Transform The Prncpal Component Transform s also called Karhunen-Loeve Transform (KLT, Hotellng Transform, oregenvector Transfor
Prncpal Component Transform Multvarate Random Sgnals A real tme sgnal x(t can be consdered as a random process and ts samples x m (m =0; ;N, 1 a random vector: The mean vector of X s X =[x0; ;x N,1] T
More informationSection 8.3 Polar Form of Complex Numbers
80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the
More information1 Convex Optimization
Convex Optmzaton We wll consder convex optmzaton problems. Namely, mnmzaton problems where the objectve s convex (we assume no constrants for now). Such problems often arse n machne learnng. For example,
More informationAdditional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty
Addtonal Codes usng Fnte Dfference Method Benamn Moll 1 HJB Equaton for Consumpton-Savng Problem Wthout Uncertanty Before consderng the case wth stochastc ncome n http://www.prnceton.edu/~moll/ HACTproect/HACT_Numercal_Appendx.pdf,
More informationSimultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals
Smultaneous Optmzaton of Berth Allocaton, Quay Crane Assgnment and Quay Crane Schedulng Problems n Contaner Termnals Necat Aras, Yavuz Türkoğulları, Z. Caner Taşkın, Kuban Altınel Abstract In ths work,
More informationAn Interactive Optimisation Tool for Allocation Problems
An Interactve Optmsaton ool for Allocaton Problems Fredr Bonäs, Joam Westerlund and apo Westerlund Process Desgn Laboratory, Faculty of echnology, Åbo Aadem Unversty, uru 20500, Fnland hs paper presents
More informationLOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin
Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence
More informationTime-Varying Systems and Computations Lecture 6
Tme-Varyng Systems and Computatons Lecture 6 Klaus Depold 14. Januar 2014 The Kalman Flter The Kalman estmaton flter attempts to estmate the actual state of an unknown dscrete dynamcal system, gven nosy
More informationSupport Vector Machines CS434
Support Vector Machnes CS434 Lnear Separators Many lnear separators exst that perfectly classfy all tranng examples Whch of the lnear separators s the best? + + + + + + + + + Intuton of Margn Consder ponts
More informationMin Cut, Fast Cut, Polynomial Identities
Randomzed Algorthms, Summer 016 Mn Cut, Fast Cut, Polynomal Identtes Instructor: Thomas Kesselhem and Kurt Mehlhorn 1 Mn Cuts n Graphs Lecture (5 pages) Throughout ths secton, G = (V, E) s a mult-graph.
More informationLeast squares cubic splines without B-splines S.K. Lucas
Least squares cubc splnes wthout B-splnes S.K. Lucas School of Mathematcs and Statstcs, Unversty of South Australa, Mawson Lakes SA 595 e-mal: stephen.lucas@unsa.edu.au Submtted to the Gazette of the Australan
More informationStatistical pattern recognition
Statstcal pattern recognton Bayes theorem Problem: decdng f a patent has a partcular condton based on a partcular test However, the test s mperfect Someone wth the condton may go undetected (false negatve
More informationOPTIMISATION. Introduction Single Variable Unconstrained Optimisation Multivariable Unconstrained Optimisation Linear Programming
OPTIMIATION Introducton ngle Varable Unconstraned Optmsaton Multvarable Unconstraned Optmsaton Lnear Programmng Chapter Optmsaton /. Introducton In an engneerng analss, sometmes etremtes, ether mnmum or
More informationDynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence)
/24/27 Prevew Fbonacc Sequence Longest Common Subsequence Dynamc programmng s a method for solvng complex problems by breakng them down nto smpler sub-problems. It s applcable to problems exhbtng the propertes
More informationCSC 411 / CSC D11 / CSC C11
18 Boostng s a general strategy for learnng classfers by combnng smpler ones. The dea of boostng s to take a weak classfer that s, any classfer that wll do at least slghtly better than chance and use t
More informationNumerical Heat and Mass Transfer
Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and
More informationEstimating the Fundamental Matrix by Transforming Image Points in Projective Space 1
Estmatng the Fundamental Matrx by Transformng Image Ponts n Projectve Space 1 Zhengyou Zhang and Charles Loop Mcrosoft Research, One Mcrosoft Way, Redmond, WA 98052, USA E-mal: fzhang,cloopg@mcrosoft.com
More informationMarkov Chain Monte Carlo Lecture 6
where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways
More informationEEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming
EEL 6266 Power System Operaton and Control Chapter 3 Economc Dspatch Usng Dynamc Programmng Pecewse Lnear Cost Functons Common practce many utltes prefer to represent ther generator cost functons as sngle-
More informationAPPENDIX A Some Linear Algebra
APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,
More informationSupport Vector Machines. Vibhav Gogate The University of Texas at dallas
Support Vector Machnes Vbhav Gogate he Unversty of exas at dallas What We have Learned So Far? 1. Decson rees. Naïve Bayes 3. Lnear Regresson 4. Logstc Regresson 5. Perceptron 6. Neural networks 7. K-Nearest
More informationOn the Global Linear Convergence of the ADMM with Multi-Block Variables
On the Global Lnear Convergence of the ADMM wth Mult-Block Varables Tany Ln Shqan Ma Shuzhong Zhang May 31, 01 Abstract The alternatng drecton method of multplers ADMM has been wdely used for solvng structured
More informationSum Capacity of Multiuser MIMO Broadcast Channels with Block Diagonalization
Sum Capacty of Multuser MIMO Broadcast Channels wth Block Dagonalzaton Zukang Shen, Runhua Chen, Jeffrey G. Andrews, Robert W. Heath, Jr., and Bran L. Evans The Unversty of Texas at Austn, Austn, Texas
More informationTransfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system
Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng
More informationA MODIFIED METHOD FOR SOLVING SYSTEM OF NONLINEAR EQUATIONS
Journal of Mathematcs and Statstcs 9 (1): 4-8, 1 ISSN 1549-644 1 Scence Publcatons do:1.844/jmssp.1.4.8 Publshed Onlne 9 (1) 1 (http://www.thescpub.com/jmss.toc) A MODIFIED METHOD FOR SOLVING SYSTEM OF
More informationGeneral theory of fuzzy connectedness segmentations: reconciliation of two tracks of FC theory
General theory of fuzzy connectedness segmentatons: reconclaton of two tracks of FC theory Krzysztof Chrs Ceselsk Department of Mathematcs, West Vrgna Unversty and MIPG, Department of Radology, Unversty
More informationA New Refinement of Jacobi Method for Solution of Linear System Equations AX=b
Int J Contemp Math Scences, Vol 3, 28, no 17, 819-827 A New Refnement of Jacob Method for Soluton of Lnear System Equatons AX=b F Naem Dafchah Department of Mathematcs, Faculty of Scences Unversty of Gulan,
More informationTHE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY. William A. Pearlman. References: S. Arimoto - IEEE Trans. Inform. Thy., Jan.
THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY Wllam A. Pearlman 2002 References: S. Armoto - IEEE Trans. Inform. Thy., Jan. 1972 R. Blahut - IEEE Trans. Inform. Thy., July 1972 Recall
More informationApplication of Nonbinary LDPC Codes for Communication over Fading Channels Using Higher Order Modulations
Applcaton of Nonbnary LDPC Codes for Communcaton over Fadng Channels Usng Hgher Order Modulatons Rong-Hu Peng and Rong-Rong Chen Department of Electrcal and Computer Engneerng Unversty of Utah Ths work
More informationSupporting Information
Supportng Informaton The neural network f n Eq. 1 s gven by: f x l = ReLU W atom x l + b atom, 2 where ReLU s the element-wse rectfed lnear unt, 21.e., ReLUx = max0, x, W atom R d d s the weght matrx to
More informationCHAPTER 4. Vector Spaces
man 2007/2/16 page 234 CHAPTER 4 Vector Spaces To crtcze mathematcs for ts abstracton s to mss the pont entrel. Abstracton s what makes mathematcs work. Ian Stewart The man am of ths tet s to stud lnear
More informationCME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13
CME 30: NUMERICAL LINEAR ALGEBRA FALL 005/06 LECTURE 13 GENE H GOLUB 1 Iteratve Methods Very large problems (naturally sparse, from applcatons): teratve methods Structured matrces (even sometmes dense,
More informationLecture 5 Decoding Binary BCH Codes
Lecture 5 Decodng Bnary BCH Codes In ths class, we wll ntroduce dfferent methods for decodng BCH codes 51 Decodng the [15, 7, 5] 2 -BCH Code Consder the [15, 7, 5] 2 -code C we ntroduced n the last lecture
More informationLinear Feature Engineering 11
Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19
More informationComputing Correlated Equilibria in Multi-Player Games
Computng Correlated Equlbra n Mult-Player Games Chrstos H. Papadmtrou Presented by Zhanxang Huang December 7th, 2005 1 The Author Dr. Chrstos H. Papadmtrou CS professor at UC Berkley (taught at Harvard,
More information