A Parallel Best-Response Algorithm with Exact Line Search for Nonconvex Sparsity-Regularized Rank Minimization
Yang Yang and Marius Pesavento
University of Luxembourg; Technische Universität Darmstadt
arXiv:704489v [cs.DC], Nov 2017

Abstract — In this paper, we propose a convergent parallel best-response algorithm with exact line search for the nondifferentiable, nonconvex sparsity-regularized rank minimization problem. On the one hand, it exhibits faster convergence than subgradient algorithms and block coordinate descent algorithms. On the other hand, its convergence to a stationary point is guaranteed, while ADMM algorithms converge only for convex problems. Furthermore, the exact line search procedure in the proposed algorithm is performed efficiently in closed form, which avoids the meticulous choice of stepsizes that is a common bottleneck in subgradient algorithms and successive convex approximation algorithms. Finally, the proposed algorithm is numerically tested.

Index Terms — Backbone Network, Big Data Analytics, Line Search, Rank Minimization, Successive Convex Approximation.

I. INTRODUCTION

In this paper, we consider the estimation of a low-rank matrix X ∈ R^{N×K} and a sparse matrix S ∈ R^{I×K} from noisy measurements Y ∈ R^{N×K} such that Y = X + DS + V, where D ∈ R^{N×I} is a known matrix. The rank of X is much smaller than N and K, i.e., rank(X) ≪ min(N, K), and the support size of S is much smaller than IK, i.e., ||S||_0 ≪ IK. A natural measure for the data mismatch is the least-square error, augmented by regularization functions that promote the rank sparsity of X and the support sparsity of S:

(SRRM):  minimize_{X,S}  (1/2)||X + DS − Y||_F^2 + λ||X||_* + μ||S||_1,

where ||X||_* is the nuclear norm of X. This sparsity-regularized rank minimization (SRRM) problem plays a fundamental role in the analysis of traffic anomalies in large-scale backbone networks [1]. In this application, X = RZ, where Z contains the unknown traffic flows over the time horizon of interest, R is a given fat routing matrix, and S contains the traffic volume anomalies. The matrix X inherits the rank sparsity from Z, because common temporal patterns among the traffic flows in addition to their periodic
behavior render most rows/columns of Z linearly dependent and thus low rank; S is assumed to be sparse because traffic anomalies are expected to happen sporadically and to last only shortly relative to the measurement interval, which is represented by the number of columns K.

Although problem (SRRM) is convex, it cannot be easily solved by standard solvers when the problem dimension is large, because the nuclear norm ||X||_* is neither differentiable nor decomposable among the blocks of X. It follows from the fact [2, 3]

||X||_* = min_{(P,Q)} (1/2)(||P||_F^2 + ||Q||_F^2),  s.t. PQ = X,

that it may be useful to consider the following optimization problem, in which the nuclear norm ||X||_* is replaced by (1/2)(||P||_F^2 + ||Q||_F^2):

minimize_{P,Q,S}  (1/2)||PQ + DS − Y||_F^2 + (λ/2)(||P||_F^2 + ||Q||_F^2) + μ||S||_1,  (1)

where P ∈ R^{N×ρ} and Q ∈ R^{ρ×K} for a ρ that is usually much smaller than N and K: ρ ≪ min(N, K). Despite the fact that problem (1) is nonconvex, it is shown in [4] that every stationary point of (1) is a globally optimal solution of (SRRM) under some mild conditions.

A block coordinate descent (BCD) algorithm is adapted in [5] to find a stationary point of the nonconvex problem (1). In the BCD algorithm, the variables are updated in a cyclic order. For example, when P (or Q) is updated, the variables (Q, S) (or (P, S)) are fixed. However, when fixing (P, Q) and updating S, the elements of S are updated element-wise in a sequential order to reduce the complexity. This is because the optimization problem w.r.t. s_{i,k}, the (i,k)-th element of S,

minimize_{s_{i,k}}  (1/2)||PQ + DS − Y||_F^2 + (λ/2)(||P||_F^2 + ||Q||_F^2) + μ||S||_1,

has a closed-form solution, while the joint optimization problem with respect to (w.r.t.) all elements of the matrix variable S does not have a closed-form solution and is thus not easy to solve. Nevertheless, a drawback of the sequential element-wise update is that it may incur a large delay, because s_{i+1,k} cannot be updated until s_{i,k} has been updated, and the delay may be very large when I is large, which is the norm rather than an exception in big data analytics [6]. The alternating direction method of multipliers (ADMM) algorithm enables the simultaneous update of all elements of S, but it does not have guaranteed convergence to a stationary point because the optimization problem (1) is nonconvex [4].
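The variational characterization of the nuclear norm invoked above can be checked numerically: for X = U diag(σ) Vᵀ, the factors P = U diag(√σ) and Q = diag(√σ) Vᵀ attain the minimum, so the right-hand side equals the sum of the singular values. A minimal NumPy sketch (not from the paper; variable names are ours):

```python
import numpy as np

# Check: ||X||_* = min_{PQ = X} (1/2)(||P||_F^2 + ||Q||_F^2).
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 8)) @ rng.standard_normal((8, 5))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
nuclear = s.sum()                 # nuclear norm = sum of singular values

P = U * np.sqrt(s)                # scale the columns of U by sqrt(sigma)
Q = (Vt.T * np.sqrt(s)).T         # scale the rows of V^T by sqrt(sigma)
assert np.allclose(P @ Q, X)      # the pair (P, Q) is a valid factorization

attained = 0.5 * (np.linalg.norm(P, 'fro')**2 + np.linalg.norm(Q, 'fro')**2)
assert np.isclose(attained, nuclear)
```

Any other factorization PQ = X gives a value at least as large, which is why replacing ||X||_* by the quadratic surrogate in (1) is lossless at the optimum.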
Note that there has been some recent development in ADMM for nonconvex problems; see [7, 8] and the references therein. The ADMM algorithm proposed in [7] is designed for nonconvex sharing/consensus problems and cannot be applied to solve problem (1). The ADMM algorithm proposed in [8] converges if the matrix D in (1) has full row rank, which is however not necessarily the case.

The nondifferentiable nonconvex problem (1) can also be solved by standard subgradient and/or successive convex approximation (SCA) algorithms [9]. However, convergence of subgradient and SCA algorithms is mostly established under diminishing stepsizes, which are sometimes difficult to deploy in practice because the convergence behavior is sensitive to the decay rate. As a matter of fact, the meticulous choice of stepsizes severely limits the applicability of subgradient and SCA algorithms in nonsmooth optimization and big data analytics [6].

In this paper, we propose a convergent parallel best-response algorithm in which all elements of P, Q and S are updated simultaneously. This is a well-known concept in optimization, sometimes listed under different names, for example, the parallel block coordinate descent algorithm (cf. [10]) and the Jacobi algorithm (cf. [11]). To accelerate the convergence, we compute the stepsize by the exact line search procedure proposed in [12]: the exact line search is performed over a properly designed differentiable function, and the resulting stepsize can be expressed in closed form, so
that the computational complexity is much lower than that of the traditional line search, which operates on the original nondifferentiable objective function. The proposed algorithm has several attractive features: i) the variables are updated simultaneously based on the best response; ii) the stepsize is computed in closed form based on the exact line search; and iii) it converges to a stationary point. Its advantages over existing algorithms are summarized as follows: Feature i) is an advantage over the BCD algorithm; Features i) and ii) are advantages over subgradient algorithms; Feature ii) is an advantage over SCA algorithms; Feature iii) is an advantage over ADMM algorithms. The above advantages will further be illustrated by numerical results.

II. THE PROPOSED PARALLEL BEST-RESPONSE ALGORITHM WITH EXACT LINE SEARCH

In this section, we propose an iterative algorithm to find a stationary point of problem (1). It consists of solving a sequence of successively refined approximate problems, which are presumably much easier to solve than the original problem. To this end, we define

f(P, Q, S) ≜ (1/2)||PQ + DS − Y||_F^2 + (λ/2)(||P||_F^2 + ||Q||_F^2),  g(S) ≜ μ||S||_1.

Although f(P, Q, S) in (1) is not jointly convex w.r.t. (P, Q, S), it is individually convex in P, Q and S; in other words, f(P, Q, S) is convex w.r.t. one variable while the other two variables are fixed. Preserving and exploiting this partial convexity considerably accelerates the convergence, and it has become the central idea in successive convex approximation and successive pseudoconvex approximation [11, 12]. To simplify the notation, we use Z as a compact notation for (P, Q, S): Z ≜ (P, Q, S); in the rest of the paper, Z and (P, Q, S) are used interchangeably.

Given Z^t = (P^t, Q^t, S^t) in iteration t, we approximate the original nonconvex function f(Z) by a convex function f̃(Z; Z^t) of the following form:

f̃(Z; Z^t) = f̃_P(P; Z^t) + f̃_Q(Q; Z^t) + f̃_S(S; Z^t),  (2)

where

f̃_P(P; Z^t) ≜ f(P, Q^t, S^t) = (1/2)||P Q^t + D S^t − Y||_F^2 + (λ/2)||P||_F^2,  (3a)

f̃_Q(Q; Z^t) ≜ f(P^t, Q, S^t) = (1/2)||P^t Q + D S^t − Y||_F^2 + (λ/2)||Q||_F^2,  (3b)

f̃_S(S; Z^t) ≜ Σ_{i,k} f(P^t, Q^t, s_{i,k}, (s^t_{j,k})_{j≠i}, (s^t_j)_{j≠i})
  = Σ_{i,k} (1/2)||P^t q^t_k + d_i s_{i,k} + Σ_{j≠i} d_j s^t_{j,k} − y_k||_2^2 + const
  = (1/2) tr(S^T diag(D^T D) S) − tr(S^T (diag(D^T D) S^t − D^T (D S^t + P^t Q^t − Y))) + const,  (3c)

with q^t_k (or y_k) and d_i
denoting the k-th and i-th columns of Q^t (or Y) and D, respectively, while diag(D^T D) denotes a diagonal matrix whose main diagonal is identical to that of the matrix D^T D. Note that in the approximate functions w.r.t. P and Q, the remaining variables (Q, S) and (P, S) are fixed, respectively. Although it is tempting to define the approximate function of f(P, Q, S) w.r.t. S by simply fixing P and Q, minimizing f(P^t, Q^t, S) w.r.t. the matrix variable S does not have a closed-form solution and must be solved iteratively. Therefore, the proposed approximate function f̃_S(S; Z^t) in (3c) consists of IK component functions, and in the (i,k)-th component function, s_{i,k} is the variable while all other variables are fixed, namely P^t, Q^t, (s^t_{j,k})_{j≠i} and (s^t_j)_{j≠i}. As we will show shortly, minimizing f̃_S(S; Z^t) w.r.t. S admits a closed-form solution.

We remark that the approximate function f̃(Z; Z^t) is a (strongly) convex function and is differentiable in both Z and Z^t. Furthermore, the gradient of the approximate function f̃(Z; Z^t) is equal to that of f(Z) at Z = Z^t. To see this:

∇_P f̃(Z; Z^t)|_{Z=Z^t} = ∇_P f̃_P(P; Z^t)|_{P=P^t} = ∇_P ((1/2)||P Q^t + D S^t − Y||_F^2 + (λ/2)||P||_F^2)|_{P=P^t} = ∇_P f(P, Q, S)|_{Z=Z^t},

∇_Q f̃(Z; Z^t)|_{Z=Z^t} = ∇_Q f̃_Q(Q; Z^t)|_{Q=Q^t} = ∇_Q ((1/2)||P^t Q + D S^t − Y||_F^2 + (λ/2)||Q||_F^2)|_{Q=Q^t} = ∇_Q f(P, Q, S)|_{Z=Z^t},

and ∇_S f̃(Z; Z^t) = (∇_{s_{i,k}} f̃(Z; Z^t))_{i,k}, while

∇_{s_{i,k}} f̃(Z; Z^t)|_{Z=Z^t} = ∇_{s_{i,k}} f̃_S(S; Z^t)|_{S=S^t} = ∇_{s_{i,k}} f(P^t, Q^t, s_{i,k}, (s^t_{j,k})_{j≠i}, (s^t_j)_{j≠i})|_{s_{i,k}=s^t_{i,k}} = ∇_{s_{i,k}} f(P, Q, S)|_{Z=Z^t}.

In iteration t, the approximate problem consists of minimizing the approximate function over the same feasible set as the original problem (1):

minimize_{Z=(P,Q,S)}  f̃(Z; Z^t) + g(S).  (4)

Since f̃(Z; Z^t) is strongly convex in Z and g(S) is a convex function w.r.t. S, the approximate problem (4) is convex and has a unique (globally optimal) solution, which we denote as BZ^t = (B_P Z^t, B_Q Z^t, B_S Z^t). The approximate problem (4) naturally decomposes into several smaller problems that can be solved in parallel:

B_P Z^t ≜ argmin_P f̃_P(P; Z^t) = (Y − D S^t)(Q^t)^T (Q^t (Q^t)^T + λI)^{−1},  (5a)

B_Q Z^t ≜ argmin_Q f̃_Q(Q; Z^t) = ((P^t)^T P^t + λI)^{−1} (P^t)^T (Y − D S^t),  (5b)

B_S Z^t ≜ argmin_S { f̃_S(S; Z^t) + g(S) } = diag(D^T D)^{−1} S_μ(diag(D^T D) S^t − D^T (D S^t + P^t Q^t − Y)),  (5c)

where S_μ(X) is an element-wise soft-threshold operator: the (i,j)-th element of S_μ(X) is [X_{ij} − μ]_+ − [−X_{ij} − μ]_+, with [x]_+ ≜ max(x, 0).
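The closed-form best responses (5a)–(5c) translate directly into a few lines of linear algebra. A minimal sketch (function names are ours; an explicit matrix inverse is shown for readability, although a linear solve is preferable in practice):

```python
import numpy as np

def soft_threshold(X, tau):
    """Element-wise soft-threshold operator S_tau(X)."""
    return np.maximum(X - tau, 0.0) - np.maximum(-X - tau, 0.0)

def best_responses(P, Q, S, D, Y, lam, mu):
    """Closed-form minimizers (5a)-(5c) of the approximate problem (4)."""
    rho = P.shape[1]
    BP = (Y - D @ S) @ Q.T @ np.linalg.inv(Q @ Q.T + lam * np.eye(rho))   # (5a)
    BQ = np.linalg.solve(P.T @ P + lam * np.eye(rho), P.T @ (Y - D @ S))  # (5b)
    dDD = np.sum(D * D, axis=0)[:, None]      # diagonal of D^T D, as a column
    G = dDD * S - D.T @ (D @ S + P @ Q - Y)   # argument of S_mu in (5c)
    BS = soft_threshold(G, mu) / dDD          # (5c)
    return BP, BQ, BS
```

Each response solves its own strongly convex subproblem, so the three computations are independent and can run in parallel.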
As we can readily see from (5), the approximate problems can be solved efficiently because the optimal solutions are given in analytical expressions. Since f̃(Z; Z^t) is convex in Z, differentiable in both Z and Z^t, and has the same gradient as f(Z) at Z = Z^t, it follows from [12] that BZ^t − Z^t is a descent direction of the original objective function f(Z) + g(S) at Z = Z^t.
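The gradient-consistency property underlying this descent-direction argument (f̃ and f share the same gradient at Z = Z^t) is easy to check numerically, here for the P-block via central finite differences. A sketch under our own naming, with random test data:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, I, rho = 6, 5, 4, 2
P = rng.standard_normal((N, rho)); Q = rng.standard_normal((rho, K))
S = rng.standard_normal((I, K)); D = rng.standard_normal((N, I))
Y = rng.standard_normal((N, K)); lam = 0.1

# the smooth part f of the objective in (1)
f = lambda P_, Q_, S_: (0.5 * np.linalg.norm(P_ @ Q_ + D @ S_ - Y)**2
                        + 0.5 * lam * (np.linalg.norm(P_)**2 + np.linalg.norm(Q_)**2))

# analytic gradient of the approximate function w.r.t. P, evaluated at Z = Z^t
gradP = (P @ Q + D @ S - Y) @ Q.T + lam * P

# finite-difference gradient of the original f w.r.t. P
eps, fd = 1e-6, np.zeros_like(P)
for i in range(N):
    for j in range(rho):
        E = np.zeros_like(P); E[i, j] = eps
        fd[i, j] = (f(P + E, Q, S) - f(P - E, Q, S)) / (2 * eps)

assert np.allclose(gradP, fd, atol=1e-4)  # gradients coincide at Z = Z^t
```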
The variable update in the t-th iteration is thus defined as follows:

P^{t+1} = P^t + γ^t (B_P Z^t − P^t),  (6a)
Q^{t+1} = Q^t + γ^t (B_Q Z^t − Q^t),  (6b)
S^{t+1} = S^t + γ^t (B_S Z^t − S^t),  (6c)

where γ^t ∈ (0, 1] is the stepsize, which should be properly selected. A natural (and traditional) choice of the stepsize γ^t is given by the exact line search:

minimize_{0≤γ≤1} { f(Z^t + γ(BZ^t − Z^t)) + g(S^t + γ(B_S Z^t − S^t)) },  (7)

in which the stepsize that yields the largest decrease in objective function value along the direction BZ^t − Z^t is selected. Nevertheless, this choice leads to a high computational complexity, because g(S) is nondifferentiable and the exact line search involves minimizing a nondifferentiable function. Alternatives include constant stepsizes and diminishing stepsizes. However, they suffer from slow convergence (cf. [12]) and parameter tuning (cf. [9]). As a matter of fact, the meticulous choice of stepsizes has become a major bottleneck for subgradient and successive convex approximation algorithms [6]. It is shown in [12, Sec. III-A] that to achieve convergence, it suffices to perform the exact line search over the following differentiable function:

f(Z^t + γ(BZ^t − Z^t)) + g(S^t) + γ(g(B_S Z^t) − g(S^t)),  (8)

which is an upper bound of the objective function in (7), obtained by applying Jensen's inequality to the convex nondifferentiable function g(S):

g(S^t + γ(B_S Z^t − S^t)) ≤ g(S^t) + γ(g(B_S Z^t) − g(S^t)).

This exact line search procedure over the differentiable function (8) achieves a good tradeoff between performance and complexity. Furthermore, after substituting the expressions of f(Z) and g(S) into (8), the exact line search boils down to minimizing a fourth-order polynomial over the interval [0, 1]:

γ^t = argmin_{0≤γ≤1} { f(Z^t + γ(BZ^t − Z^t)) + γ(g(B_S Z^t) − g(S^t)) }
    = argmin_{0≤γ≤1} { (1/4)aγ^4 + (1/3)bγ^3 + (1/2)cγ^2 + dγ },  (9)

where, for ΔP ≜ B_P Z^t − P^t, ΔQ ≜ B_Q Z^t − Q^t and ΔS ≜ B_S Z^t − S^t,

a ≜ 2||ΔP ΔQ||_F^2,
b ≜ 3 tr(ΔP ΔQ (ΔP Q^t + P^t ΔQ + D ΔS)^T),
c ≜ 2 tr(ΔP ΔQ (P^t Q^t + D S^t − Y)^T) + ||ΔP Q^t + P^t ΔQ + D ΔS||_F^2 + λ(||ΔP||_F^2 + ||ΔQ||_F^2),
d ≜ tr((ΔP Q^t + P^t ΔQ + D ΔS)(P^t Q^t + D S^t − Y)^T) + λ(tr(ΔP^T P^t) + tr(ΔQ^T Q^t)) + μ(||B_S Z^t||_1 − ||S^t||_1).

Finding the optimal points of (9) is equivalent to finding the nonnegative real roots of a third-order polynomial. Making use of Cardano's method, we can express γ^t defined in (9) in a closed-form
expression:

γ^t = [γ̄]_0^1,  (10a)
γ̄ = (Σ_1 + (Σ_1^2 + Σ_2^3)^{1/2})^{1/3} + (Σ_1 − (Σ_1^2 + Σ_2^3)^{1/2})^{1/3} − b/(3a),  (10b)

where [γ̄]_0^1 = max(min(γ̄, 1), 0) is the projection of γ̄ onto the interval [0, 1], Σ_1 ≜ −(b/(3a))^3 + bc/(6a^2) − d/(2a), and Σ_2 ≜ c/(3a) − (b/(3a))^2. Note that in (10b), the right-hand side has three values (two of them may be complex numbers), and the equality is understood as γ̄ being the smallest among the real nonnegative values.

Algorithm 1: The parallel best-response algorithm with exact line search for problem (1)
Data: t = 0, Z^0 (arbitrary but fixed, e.g., Z^0 = 0), stop criterion δ.
S1: Compute (B_P Z^t, B_Q Z^t, B_S Z^t) according to (5).
S2: Determine the stepsize γ^t by the exact line search (10).
S3: Update (P^{t+1}, Q^{t+1}, S^{t+1}) according to (6).
S4: If |tr((BZ^t − Z^t)^T ∇f(Z^t))| ≤ δ, STOP; otherwise set t ← t + 1 and go to S1.

The proposed algorithm is summarized in Algorithm 1, and we draw a few comments on its attractive features and advantages.

On the parallel best-response update: In each iteration, the variables P, Q and S are updated simultaneously based on the best response. The improvement in convergence speed w.r.t. the BCD algorithm in [5] is notable, because in the BCD algorithm the optimization w.r.t. each element of S, say s_{i,k}, is implemented in a sequential order, and the number of elements, IK, is usually very large in big data applications.

On the exact line search: To avoid the meticulous choice of stepsizes and to further accelerate the convergence, the exact line search is performed over the differentiable function f(Z^t + γ(BZ^t − Z^t)) + γ(g(B_S Z^t) − g(S^t)), and the stepsize can be computed by a closed-form expression. This yields easier implementation and faster convergence than subgradient and SCA algorithms with diminishing stepsizes.

On the complexity: The complexity of the proposed algorithm is maintained at a very low level, because both the best responses (B_P Z^t, B_Q Z^t, B_S Z^t) and the exact line search can be computed by closed-form expressions, cf. (5) and (10). Only basic linear algebraic operations are required, reducing the requirements on the hardware's computational capabilities.

On the convergence: The proposed Algorithm 1 has guaranteed convergence in the sense that every limit point of the sequence {Z^t} is a stationary point of problem (1). This claim directly follows from [12], and it serves as a certificate for the solution quality compared with ADMM algorithms.
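As a sketch of the line search (9)–(10): the coefficients a, b, c, d are assembled from the current iterate and the update direction, and the minimizer of the quartic over [0, 1] is found among the real roots of its cubic derivative (numerically via np.roots here, which is equivalent to evaluating Cardano's formula (10)). Names and the toy direction below are ours, not from the paper:

```python
import numpy as np

def exact_stepsize(P, Q, S, BP, BQ, BS, D, Y, lam, mu):
    """Minimize (a/4)g^4 + (b/3)g^3 + (c/2)g^2 + d*g over g in [0, 1], cf. (9)."""
    dP, dQ, dS = BP - P, BQ - Q, BS - S
    R  = P @ Q + D @ S - Y            # current residual
    A1 = dP @ Q + P @ dQ + D @ dS     # first-order term of the residual in gamma
    A2 = dP @ dQ                      # second-order term of the residual in gamma
    a = 2.0 * np.sum(A2 * A2)
    b = 3.0 * np.sum(A2 * A1)
    c = (np.sum(A1 * A1) + 2.0 * np.sum(A2 * R)
         + lam * (np.sum(dP * dP) + np.sum(dQ * dQ)))
    d = (np.sum(A1 * R) + lam * (np.sum(dP * P) + np.sum(dQ * Q))
         + mu * (np.abs(BS).sum() - np.abs(S).sum()))
    quartic = lambda g: (a / 4) * g**4 + (b / 3) * g**3 + (c / 2) * g**2 + d * g
    candidates = [0.0, 1.0]
    for r in np.roots([a, b, c, d]):  # stationary points of the quartic
        if abs(r.imag) < 1e-9 and 0.0 < r.real < 1.0:
            candidates.append(r.real)
    return min(candidates, key=quartic)

# toy usage with a gradient-type trial direction (illustrative only)
rng = np.random.default_rng(1)
N, K, I, rho = 8, 10, 6, 2
P = rng.standard_normal((N, rho)); Q = rng.standard_normal((rho, K))
S = rng.standard_normal((I, K)); D = rng.standard_normal((N, I))
Y = rng.standard_normal((N, K)); lam, mu = 0.1, 0.1
R0 = P @ Q + D @ S - Y
BP = P - (R0 @ Q.T + lam * P)   # one gradient step as a stand-in direction
BQ = Q - (P.T @ R0 + lam * Q)
BS = S.copy()                   # leave S unchanged in this toy example
gamma = exact_stepsize(P, Q, S, BP, BQ, BS, D, Y, lam, mu)
```

Because the quartic equals the upper bound (8) up to a constant, the returned stepsize never increases the original objective, whatever the direction.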
A. Decomposition of the Proposed Algorithm

The proposed Algorithm 1 can be further decomposed to enable parallel processing over a number of L nodes in a distributed network. To see this, we first decompose the matrix variables P, D and Y into multiple row blocks (P_l)_{l=1}^L, (D_l)_{l=1}^L and (Y_l)_{l=1}^L, where P_l ∈ R^{N_l×ρ}, D_l ∈ R^{N_l×I} and Y_l ∈ R^{N_l×K} consist of N_l rows of P, D and Y, respectively:

P = [P_1; P_2; ...; P_L],  D = [D_1; D_2; ...; D_L],  Y = [Y_1; Y_2; ...; Y_L],

and each node l has access to the variables (P_l, Q, S). The computation of B_P Z^t in (5a) can be decomposed as B_P Z^t = (B_{P,l} Z^t)_{l=1}^L, where

B_{P,l} Z^t = (Y_l − D_l S^t)(Q^t)^T (Q^t (Q^t)^T + λI)^{−1},  l = 1, ..., L.

Accordingly, the computations of B_Q Z^t and B_S Z^t in (5b) and (5c) can be rewritten as

B_Q Z^t = (Σ_{l=1}^L (P_l^t)^T P_l^t + λI)^{−1} Σ_{l=1}^L (P_l^t)^T (Y_l − D_l S^t),
B_S Z^t = diag(Σ_{l=1}^L D_l^T D_l)^{−1} S_μ(diag(Σ_{l=1}^L D_l^T D_l) S^t − Σ_{l=1}^L D_l^T (D_l S^t + P_l^t Q^t − Y_l)).
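The consistency of the decomposed Q-update with its centralized counterpart (5b) can be checked numerically: each node contributes only its local Gram matrix and right-hand side, and summing the contributions reproduces the centralized best response. A sketch (names ours):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, I, rho, L = 12, 7, 5, 3, 4
P = rng.standard_normal((N, rho)); Q = rng.standard_normal((rho, K))
S = rng.standard_normal((I, K)); D = rng.standard_normal((N, I))
Y = rng.standard_normal((N, K)); lam = 0.1

# centralized best response (5b)
BQ = np.linalg.solve(P.T @ P + lam * np.eye(rho), P.T @ (Y - D @ S))

# distributed: accumulate node-local Gram matrices and right-hand sides
blocks = np.array_split(np.arange(N), L)   # row blocks held by the L nodes
G = lam * np.eye(rho)
H = np.zeros((rho, K))
for rows in blocks:
    Pl, Dl, Yl = P[rows], D[rows], Y[rows]
    G += Pl.T @ Pl                  # node l's contribution (P_l^T P_l)
    H += Pl.T @ (Yl - Dl @ S)       # node l's contribution (P_l^T (Y_l - D_l S))
BQ_dist = np.linalg.solve(G, H)

assert np.allclose(BQ, BQ_dist)     # identical to the centralized update
```

Only the ρ×ρ and ρ×K aggregates are exchanged, so the per-iteration signaling is independent of N.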
[Figure 1. Relative error in objective function value versus iterations, for the BCD algorithm in [5], the proposed Algorithm 1, and the ADMM algorithm in [4].]
[Figure 2. Relative error in objective function value versus the CPU time (running time in seconds), for regularization scaling factors r = 0 and r = 0.5.]

Before determining the stepsize, the computation of a in (10) can also be decomposed among the nodes as a = Σ_{l=1}^L a_l, where a_l ≜ 2||ΔP_l ΔQ||_F^2. The decomposition of b, c and d is similar to that of a, where

b_l ≜ 3 tr(ΔP_l ΔQ (ΔP_l Q^t + P_l^t ΔQ + D_l ΔS)^T),
c_l ≜ 2 tr(ΔP_l ΔQ (P_l^t Q^t + D_l S^t − Y_l)^T) + ||ΔP_l Q^t + P_l^t ΔQ + D_l ΔS||_F^2 + λ||ΔP_l||_F^2 + (λ/L)||ΔQ||_F^2,
d_l ≜ tr((ΔP_l Q^t + P_l^t ΔQ + D_l ΔS)(P_l^t Q^t + D_l S^t − Y_l)^T) + λ tr(ΔP_l^T P_l^t) + (λ/L) tr(ΔQ^T Q^t) + (μ/L)(||B_S Z^t||_1 − ||S^t||_1).

To compute the stepsize as in (10), the nodes mutually exchange (a_l, b_l, c_l, d_l). This four-dimensional vector provides each node with all the necessary information to individually calculate (a, b, c, d) and (Σ_1, Σ_2), and then the stepsize γ^t according to (10). The signaling incurred by the exact line search is thus small and affordable.

III. NUMERICAL SIMULATIONS

In this section, we perform numerical tests to compare the proposed Algorithm 1 with the BCD algorithm proposed in [5] and the ADMM algorithm proposed in [4]. We start with a brief description of the ADMM algorithm: problem (1) can be rewritten as

minimize_{P,Q,A,B}  (1/2)||PQ + DA − Y||_F^2 + (λ/2)(||P||_F^2 + ||Q||_F^2) + μ||B||_1,  subject to A = B.  (11)

The augmented Lagrangian of (11) is

L_c(P, Q, A, B, Π) = (1/2)||PQ + DA − Y||_F^2 + (λ/2)(||P||_F^2 + ||Q||_F^2) + μ||B||_1 + tr(Π^T (A − B)) + (c/2)||A − B||_F^2,

where c is a positive constant. In ADMM, the variables are updated in the t-th iteration as follows:

(Q^{t+1}, B^{t+1}) = argmin_{Q,B} L_c(P^t, Q, A^t, B, Π^t),
P^{t+1} = argmin_P L_c(P, Q^{t+1}, A^t, B^{t+1}, Π^t),
A^{t+1} = argmin_A L_c(P^{t+1}, Q^{t+1}, A, B^{t+1}, Π^t),
Π^{t+1} = Π^t + c(A^{t+1} − B^{t+1}).

Note that the solutions to the above optimization problems have analytical expressions; see [4] for more details. We set c = 00 in the following simulations.
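One possible realization of a single ADMM round, under the update order stated above: the Q- and P-subproblems are ridge solves, the B-subproblem is a soft-threshold, the A-subproblem is a regularized linear solve, and the last step is the dual ascent. Function names and the test value of c are ours, not from [4]:

```python
import numpy as np

def soft(X, tau):
    """Element-wise soft-threshold."""
    return np.maximum(X - tau, 0.0) - np.maximum(-X - tau, 0.0)

def admm_step(P, Q, A, B, Pi, D, Y, lam, mu, c):
    """One iteration of the ADMM updates sketched above (our realization)."""
    rho, I = P.shape[1], D.shape[1]
    # (Q, B)-update: the two subproblems are separable given (P, A, Pi)
    Q = np.linalg.solve(P.T @ P + lam * np.eye(rho), P.T @ (Y - D @ A))
    B = soft(A + Pi / c, mu / c)
    # P-update: ridge regression in P
    P = (Y - D @ A) @ Q.T @ np.linalg.inv(Q @ Q.T + lam * np.eye(rho))
    # A-update: (D^T D + c I) A = D^T (Y - P Q) - Pi + c B
    A = np.linalg.solve(D.T @ D + c * np.eye(I), D.T @ (Y - P @ Q) - Pi + c * B)
    # dual update
    Pi = Pi + c * (A - B)
    return P, Q, A, B, Pi
```

As discussed in Section I, these iterations need not converge to a stationary point of the nonconvex problem (1), which is exactly what the experiments below probe.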
The simulation parameters are set as follows: N = 06, K = 380, I = 380, ρ = 3. The elements of D are generated randomly and are either 0 or 1. The elements of V follow the Gaussian distribution with mean 0 and variance 00. Each element of S can take three possible values, namely −1, 0 and 1, with the probabilities P(S_{i,k} = 1) = P(S_{i,k} = −1) = 0.05 and P(S_{i,k} = 0) = 0.9. We set Y = PQ + DS + V, where the elements of P (Q) are generated randomly following the Gaussian distribution with mean 0 and variance 00/I (00/K). The sparsity regularizers are λ = r||Y||_2 (||Y||_2 is the spectral norm of Y) and μ = r||D^T Y||_∞, where r is the regularization scaling factor, which is either 0 or 0.5.

In Figure 1, we show the relative error in objective function value versus the number of iterations achieved by the different algorithms, where the optimal objective function value is computed by running Algorithm 1 for a sufficient number of iterations. As we can see from Figure 1, the ADMM algorithm does not always converge for the two regularization parameters r = 0 and 0.5, as the optimization problem (1) (and its reformulation (11)) is nonconvex. Note that for the BCD algorithm in Figure 1, all elements of S are updated once, in a sequential order, in one iteration. We can see from Figure 1 that the BCD algorithm converges in a smaller number of iterations than the proposed Algorithm 1, but the incurred delay of each iteration in the BCD algorithm is typically very large, because all elements are updated sequentially. In the proposed algorithm, on the other hand, all variables are updated simultaneously, and the CPU time (in seconds) needed for each iteration is relatively small. This is illustrated numerically in Figure 2, where two regularization scaling factors are tested, namely r = 0 and r = 0.5. We see that the improvement is notable when the regularization parameter is small.

IV. CONCLUDING REMARKS

In this paper, we have proposed a parallel best-response algorithm for the nonconvex sparsity-regularized rank minimization problem. The proposed algorithm exhibits fast convergence and low complexity, because 1) the variables are updated simultaneously based on their best responses; 2) the stepsize is based on the exact line search and it is performed over a
differentiable function; and 3) both the best response and the stepsize are computed by closed-form expressions. Furthermore, the proposed algorithm has guaranteed convergence to a stationary point. The advantages of the proposed algorithm are also consolidated numerically.
REFERENCES

[1] M. Mardani, G. Mateos, and G. B. Giannakis, "Recovery of low-rank plus compressed sparse matrices with application to unveiling traffic anomalies," IEEE Transactions on Information Theory, vol. 59, no. 8, 2013.
[2] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.
[3] C. Steffens and M. Pesavento, "Block- and Rank-Sparse Recovery for Direction Finding in Partly Calibrated Arrays," 2017.
[4] M. Mardani, G. Mateos, and G. B. Giannakis, "Decentralized sparsity-regularized rank minimization: Algorithms and applications," IEEE Transactions on Signal Processing, vol. 61, 2013.
[5] ——, "Dynamic anomalography: Tracking network anomalies via sparsity and low rank," IEEE Journal of Selected Topics in Signal Processing, vol. 7, pp. 50–66, 2013.
[6] K. Slavakis, G. B. Giannakis, and G. Mateos, "Modeling and Optimization for Big Data Analytics: (Statistical) learning tools for our era of data deluge," IEEE Signal Processing Magazine, vol. 31, no. 5, pp. 18–31, Sep. 2014.
[7] M. Hong, Z.-Q. Luo, and M. Razaviyayn, "Convergence Analysis of Alternating Direction Method of Multipliers for a Family of Nonconvex Problems," SIAM Journal on Optimization, vol. 26, Jan. 2016.
[8] B. Jiang, T. Lin, S. Ma, and S. Zhang, "Structured Nonconvex and Nonsmooth Optimization: Algorithms and Iteration Complexity Analysis," 2016.
[9] F. Facchinei, G. Scutari, and S. Sagratella, "Parallel Selective Algorithms for Nonconvex Big Data Optimization," IEEE Transactions on Signal Processing, vol. 63, no. 7, Nov. 2015.
[10] M. Elad, "Why simple shrinkage is still relevant for redundant representations?" IEEE Transactions on Information Theory, vol. 52, 2006.
[11] G. Scutari, F. Facchinei, P. Song, D. P. Palomar, and J.-S. Pang, "Decomposition by Partial Linearization: Parallel Optimization of Multi-Agent Systems," IEEE Transactions on Signal Processing, vol. 62, no. 3, Feb. 2014.
[12] Y. Yang and M. Pesavento, "A Unified Successive Pseudoconvex Approximation Framework," IEEE Transactions on Signal Processing, vol. 65, 2017.
More information0.1 MAXIMUM LIKELIHOOD ESTIMATION EXPLAINED
0.1 MAXIMUM LIKELIHOOD ESTIMATIO EXPLAIED Maximum likelihood esimaion is a bes-fi saisical mehod for he esimaion of he values of he parameers of a sysem, based on a se of observaions of a random variable
More informationKriging Models Predicting Atrazine Concentrations in Surface Water Draining Agricultural Watersheds
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Kriging Models Predicing Arazine Concenraions in Surface Waer Draining Agriculural Waersheds Paul L. Mosquin, Jeremy Aldworh, Wenlin Chen Supplemenal Maerial Number
More informationNotes on online convex optimization
Noes on online convex opimizaion Karl Sraos Online convex opimizaion (OCO) is a principled framework for online learning: OnlineConvexOpimizaion Inpu: convex se S, number of seps T For =, 2,..., T : Selec
More informationDecentralized Stochastic Control with Partial History Sharing: A Common Information Approach
1 Decenralized Sochasic Conrol wih Parial Hisory Sharing: A Common Informaion Approach Ashuosh Nayyar, Adiya Mahajan and Demoshenis Tenekezis arxiv:1209.1695v1 [cs.sy] 8 Sep 2012 Absrac A general model
More information10. State Space Methods
. Sae Space Mehods. Inroducion Sae space modelling was briefly inroduced in chaper. Here more coverage is provided of sae space mehods before some of heir uses in conrol sysem design are covered in he
More informationdy dx = xey (a) y(0) = 2 (b) y(1) = 2.5 SOLUTION: See next page
Assignmen 1 MATH 2270 SOLUTION Please wrie ou complee soluions for each of he following 6 problems (one more will sill be added). You may, of course, consul wih your classmaes, he exbook or oher resources,
More informationAppendix to Creating Work Breaks From Available Idleness
Appendix o Creaing Work Breaks From Available Idleness Xu Sun and Ward Whi Deparmen of Indusrial Engineering and Operaions Research, Columbia Universiy, New York, NY, 127; {xs2235,ww24}@columbia.edu Sepember
More informationTesting for a Single Factor Model in the Multivariate State Space Framework
esing for a Single Facor Model in he Mulivariae Sae Space Framework Chen C.-Y. M. Chiba and M. Kobayashi Inernaional Graduae School of Social Sciences Yokohama Naional Universiy Japan Faculy of Economics
More informationPade and Laguerre Approximations Applied. to the Active Queue Management Model. of Internet Protocol
Applied Mahemaical Sciences, Vol. 7, 013, no. 16, 663-673 HIKARI Ld, www.m-hikari.com hp://dx.doi.org/10.1988/ams.013.39499 Pade and Laguerre Approximaions Applied o he Acive Queue Managemen Model of Inerne
More informationELE 538B: Large-Scale Optimization for Data Science. Quasi-Newton methods. Yuxin Chen Princeton University, Spring 2018
ELE 538B: Large-Scale Opimizaion for Daa Science Quasi-Newon mehods Yuxin Chen Princeon Universiy, Spring 208 00 op ff(x (x)(k)) f p 2 L µ f 05 k f (xk ) k f (xk ) =) f op ieraions converges in only 5
More informationNavneet Saini, Mayank Goyal, Vishal Bansal (2013); Term Project AML310; Indian Institute of Technology Delhi
Creep in Viscoelasic Subsances Numerical mehods o calculae he coefficiens of he Prony equaion using creep es daa and Herediary Inegrals Mehod Navnee Saini, Mayank Goyal, Vishal Bansal (23); Term Projec
More informationMorning Time: 1 hour 30 minutes Additional materials (enclosed):
ADVANCED GCE 78/0 MATHEMATICS (MEI) Differenial Equaions THURSDAY JANUARY 008 Morning Time: hour 30 minues Addiional maerials (enclosed): None Addiional maerials (required): Answer Bookle (8 pages) Graph
More informationExpert Advice for Amateurs
Exper Advice for Amaeurs Ernes K. Lai Online Appendix - Exisence of Equilibria The analysis in his secion is performed under more general payoff funcions. Wihou aking an explici form, he payoffs of he
More informationL07. KALMAN FILTERING FOR NON-LINEAR SYSTEMS. NA568 Mobile Robotics: Methods & Algorithms
L07. KALMAN FILTERING FOR NON-LINEAR SYSTEMS NA568 Mobile Roboics: Mehods & Algorihms Today s Topic Quick review on (Linear) Kalman Filer Kalman Filering for Non-Linear Sysems Exended Kalman Filer (EKF)
More informationComparing Means: t-tests for One Sample & Two Related Samples
Comparing Means: -Tess for One Sample & Two Relaed Samples Using he z-tes: Assumpions -Tess for One Sample & Two Relaed Samples The z-es (of a sample mean agains a populaion mean) is based on he assumpion
More informationR t. C t P t. + u t. C t = αp t + βr t + v t. + β + w t
Exercise 7 C P = α + β R P + u C = αp + βr + v (a) (b) C R = α P R + β + w (c) Assumpions abou he disurbances u, v, w : Classical assumions on he disurbance of one of he equaions, eg. on (b): E(v v s P,
More informationSliding Mode Extremum Seeking Control for Linear Quadratic Dynamic Game
Sliding Mode Exremum Seeking Conrol for Linear Quadraic Dynamic Game Yaodong Pan and Ümi Özgüner ITS Research Group, AIST Tsukuba Eas Namiki --, Tsukuba-shi,Ibaraki-ken 5-856, Japan e-mail: pan.yaodong@ais.go.jp
More informationAccelerated Distributed Nesterov Gradient Descent for Convex and Smooth Functions
07 IEEE 56h Annual Conference on Decision and Conrol (CDC) December -5, 07, Melbourne, Ausralia Acceleraed Disribued Neserov Gradien Descen for Convex and Smooh Funcions Guannan Qu, Na Li Absrac This paper
More informationDiebold, Chapter 7. Francis X. Diebold, Elements of Forecasting, 4th Edition (Mason, Ohio: Cengage Learning, 2006). Chapter 7. Characterizing Cycles
Diebold, Chaper 7 Francis X. Diebold, Elemens of Forecasing, 4h Ediion (Mason, Ohio: Cengage Learning, 006). Chaper 7. Characerizing Cycles Afer compleing his reading you should be able o: Define covariance
More informationDimitri Solomatine. D.P. Solomatine. Data-driven modelling (part 2). 2
Daa-driven modelling. Par. Daa-driven Arificial di Neural modelling. Newors Par Dimiri Solomaine Arificial neural newors D.P. Solomaine. Daa-driven modelling par. 1 Arificial neural newors ANN: main pes
More informationm = 41 members n = 27 (nonfounders), f = 14 (founders) 8 markers from chromosome 19
Sequenial Imporance Sampling (SIS) AKA Paricle Filering, Sequenial Impuaion (Kong, Liu, Wong, 994) For many problems, sampling direcly from he arge disribuion is difficul or impossible. One reason possible
More information14 Autoregressive Moving Average Models
14 Auoregressive Moving Average Models In his chaper an imporan parameric family of saionary ime series is inroduced, he family of he auoregressive moving average, or ARMA, processes. For a large class
More informationSuccessive Convex Approximation Algorithms for Sparse Signal Estimation with Nonconvex Regularizations
Successive Convex Approximation Algorithms for Sparse Signal Estimation with Nonconvex Regularizations Yang Yang, Marius Pesavento, Symeon Chatzinotas and Björn Ottersten Abstract In this paper, we propose
More informationAPPLICATION OF CHEBYSHEV APPROXIMATION IN THE PROCESS OF VARIATIONAL ITERATION METHOD FOR SOLVING DIFFERENTIAL- ALGEBRAIC EQUATIONS
Mahemaical and Compuaional Applicaions, Vol., No. 4, pp. 99-978,. Associaion for Scienific Research APPLICATION OF CHEBYSHEV APPROXIMATION IN THE PROCESS OF VARIATIONAL ITERATION METHOD FOR SOLVING DIFFERENTIAL-
More informationMost Probable Phase Portraits of Stochastic Differential Equations and Its Numerical Simulation
Mos Probable Phase Porrais of Sochasic Differenial Equaions and Is Numerical Simulaion Bing Yang, Zhu Zeng and Ling Wang 3 School of Mahemaics and Saisics, Huazhong Universiy of Science and Technology,
More informationA Hop Constrained Min-Sum Arborescence with Outage Costs
A Hop Consrained Min-Sum Arborescence wih Ouage Coss Rakesh Kawara Minnesoa Sae Universiy, Mankao, MN 56001 Email: Kawara@mnsu.edu Absrac The hop consrained min-sum arborescence wih ouage coss problem
More informationGMM - Generalized Method of Moments
GMM - Generalized Mehod of Momens Conens GMM esimaion, shor inroducion 2 GMM inuiion: Maching momens 2 3 General overview of GMM esimaion. 3 3. Weighing marix...........................................
More informationInventory Control of Perishable Items in a Two-Echelon Supply Chain
Journal of Indusrial Engineering, Universiy of ehran, Special Issue,, PP. 69-77 69 Invenory Conrol of Perishable Iems in a wo-echelon Supply Chain Fariborz Jolai *, Elmira Gheisariha and Farnaz Nojavan
More informationChapter 6. Systems of First Order Linear Differential Equations
Chaper 6 Sysems of Firs Order Linear Differenial Equaions We will only discuss firs order sysems However higher order sysems may be made ino firs order sysems by a rick shown below We will have a sligh
More informationThen. 1 The eigenvalues of A are inside R = n i=1 R i. 2 Union of any k circles not intersecting the other (n k)
Ger sgorin Circle Chaper 9 Approimaing Eigenvalues Per-Olof Persson persson@berkeley.edu Deparmen of Mahemaics Universiy of California, Berkeley Mah 128B Numerical Analysis (Ger sgorin Circle) Le A be
More informationPROC NLP Approach for Optimal Exponential Smoothing Srihari Jaganathan, Cognizant Technology Solutions, Newbury Park, CA.
PROC NLP Approach for Opimal Exponenial Smoohing Srihari Jaganahan, Cognizan Technology Soluions, Newbury Park, CA. ABSTRACT Esimaion of smoohing parameers and iniial values are some of he basic requiremens
More information20. Applications of the Genetic-Drift Model
0. Applicaions of he Geneic-Drif Model 1) Deermining he probabiliy of forming any paricular combinaion of genoypes in he nex generaion: Example: If he parenal allele frequencies are p 0 = 0.35 and q 0
More informationSection 3.5 Nonhomogeneous Equations; Method of Undetermined Coefficients
Secion 3.5 Nonhomogeneous Equaions; Mehod of Undeermined Coefficiens Key Terms/Ideas: Linear Differenial operaor Nonlinear operaor Second order homogeneous DE Second order nonhomogeneous DE Soluion o homogeneous
More informationStability and Bifurcation in a Neural Network Model with Two Delays
Inernaional Mahemaical Forum, Vol. 6, 11, no. 35, 175-1731 Sabiliy and Bifurcaion in a Neural Nework Model wih Two Delays GuangPing Hu and XiaoLing Li School of Mahemaics and Physics, Nanjing Universiy
More information) were both constant and we brought them from under the integral.
YIELD-PER-RECRUIT (coninued The yield-per-recrui model applies o a cohor, bu we saw in he Age Disribuions lecure ha he properies of a cohor do no apply in general o a collecion of cohors, which is wha
More informationRobust estimation based on the first- and third-moment restrictions of the power transformation model
h Inernaional Congress on Modelling and Simulaion, Adelaide, Ausralia, 6 December 3 www.mssanz.org.au/modsim3 Robus esimaion based on he firs- and hird-momen resricions of he power ransformaion Nawaa,
More informationPlanning in POMDPs. Dominik Schoenberger Abstract
Planning in POMDPs Dominik Schoenberger d.schoenberger@sud.u-darmsad.de Absrac This documen briefly explains wha a Parially Observable Markov Decision Process is. Furhermore i inroduces he differen approaches
More informationd 1 = c 1 b 2 - b 1 c 2 d 2 = c 1 b 3 - b 1 c 3
and d = c b - b c c d = c b - b c c This process is coninued unil he nh row has been compleed. The complee array of coefficiens is riangular. Noe ha in developing he array an enire row may be divided or
More informationhen found from Bayes rule. Specically, he prior disribuion is given by p( ) = N( ; ^ ; r ) (.3) where r is he prior variance (we add on he random drif
Chaper Kalman Filers. Inroducion We describe Bayesian Learning for sequenial esimaion of parameers (eg. means, AR coeciens). The updae procedures are known as Kalman Filers. We show how Dynamic Linear
More informationOn Measuring Pro-Poor Growth. 1. On Various Ways of Measuring Pro-Poor Growth: A Short Review of the Literature
On Measuring Pro-Poor Growh 1. On Various Ways of Measuring Pro-Poor Growh: A Shor eview of he Lieraure During he pas en years or so here have been various suggesions concerning he way one should check
More informationResource Allocation in Visible Light Communication Networks NOMA vs. OFDMA Transmission Techniques
Resource Allocaion in Visible Ligh Communicaion Neworks NOMA vs. OFDMA Transmission Techniques Eirini Eleni Tsiropoulou, Iakovos Gialagkolidis, Panagiois Vamvakas, and Symeon Papavassiliou Insiue of Communicaions
More informationState-Space Models. Initialization, Estimation and Smoothing of the Kalman Filter
Sae-Space Models Iniializaion, Esimaion and Smoohing of he Kalman Filer Iniializaion of he Kalman Filer The Kalman filer shows how o updae pas predicors and he corresponding predicion error variances when
More informationOrdinary dierential equations
Chaper 5 Ordinary dierenial equaions Conens 5.1 Iniial value problem........................... 31 5. Forward Euler's mehod......................... 3 5.3 Runge-Kua mehods.......................... 36
More informationSome Basic Information about M-S-D Systems
Some Basic Informaion abou M-S-D Sysems 1 Inroducion We wan o give some summary of he facs concerning unforced (homogeneous) and forced (non-homogeneous) models for linear oscillaors governed by second-order,
More informationInference of Sparse Gene Regulatory Network from RNA-Seq Time Series Data
Inference of Sparse Gene Regulaory Nework from RNA-Seq Time Series Daa Alireza Karbalayghareh and Tao Hu Texas A&M Universiy December 16, 2015 Alireza Karbalayghareh GRN Inference from RNA-Seq Time Series
More information6. 6 v ; degree = 7; leading coefficient = 6; 7. The expression has 3 terms; t p no; subtracting x from 3x ( 3x x 2x)
70. a =, r = 0%, = 0. 7. a = 000, r = 0.%, = 00 7. a =, r = 00%, = 7. ( ) = 0,000 0., where = ears 7. ( ) = + 0.0, where = weeks 7 ( ) =,000,000 0., where = das 7 = 77. = 9 7 = 7 geomeric 0. geomeric arihmeic,
More informationLearning a Class from Examples. Training set X. Class C 1. Class C of a family car. Output: Input representation: x 1 : price, x 2 : engine power
Alpaydin Chaper, Michell Chaper 7 Alpaydin slides are in urquoise. Ehem Alpaydin, copyrigh: The MIT Press, 010. alpaydin@boun.edu.r hp://www.cmpe.boun.edu.r/ ehem/imle All oher slides are based on Michell.
More informationDual Averaging Methods for Regularized Stochastic Learning and Online Optimization
Dual Averaging Mehods for Regularized Sochasic Learning and Online Opimizaion Lin Xiao Microsof Research Microsof Way Redmond, WA 985, USA lin.xiao@microsof.com Revised March 8, Absrac We consider regularized
More informationEXERCISES FOR SECTION 1.5
1.5 Exisence and Uniqueness of Soluions 43 20. 1 v c 21. 1 v c 1 2 4 6 8 10 1 2 2 4 6 8 10 Graph of approximae soluion obained using Euler s mehod wih = 0.1. Graph of approximae soluion obained using Euler
More informationTwo Popular Bayesian Estimators: Particle and Kalman Filters. McGill COMP 765 Sept 14 th, 2017
Two Popular Bayesian Esimaors: Paricle and Kalman Filers McGill COMP 765 Sep 14 h, 2017 1 1 1, dx x Bel x u x P x z P Recall: Bayes Filers,,,,,,, 1 1 1 1 u z u x P u z u x z P Bayes z = observaion u =
More information12: AUTOREGRESSIVE AND MOVING AVERAGE PROCESSES IN DISCRETE TIME. Σ j =
1: AUTOREGRESSIVE AND MOVING AVERAGE PROCESSES IN DISCRETE TIME Moving Averages Recall ha a whie noise process is a series { } = having variance σ. The whie noise process has specral densiy f (λ) = of
More informationSUFFICIENT CONDITIONS FOR EXISTENCE SOLUTION OF LINEAR TWO-POINT BOUNDARY PROBLEM IN MINIMIZATION OF QUADRATIC FUNCTIONAL
HE PUBLISHING HOUSE PROCEEDINGS OF HE ROMANIAN ACADEMY, Series A, OF HE ROMANIAN ACADEMY Volume, Number 4/200, pp 287 293 SUFFICIEN CONDIIONS FOR EXISENCE SOLUION OF LINEAR WO-POIN BOUNDARY PROBLEM IN
More informationThe General Linear Test in the Ridge Regression
ommunicaions for Saisical Applicaions Mehods 2014, Vol. 21, No. 4, 297 307 DOI: hp://dx.doi.org/10.5351/sam.2014.21.4.297 Prin ISSN 2287-7843 / Online ISSN 2383-4757 The General Linear Tes in he Ridge
More informationEchocardiography Project and Finite Fourier Series
Echocardiography Projec and Finie Fourier Series 1 U M An echocardiagram is a plo of how a porion of he hear moves as he funcion of ime over he one or more hearbea cycles If he hearbea repeas iself every
More informationTimed Circuits. Asynchronous Circuit Design. Timing Relationships. A Simple Example. Timed States. Timing Sequences. ({r 6 },t6 = 1.
Timed Circuis Asynchronous Circui Design Chris J. Myers Lecure 7: Timed Circuis Chaper 7 Previous mehods only use limied knowledge of delays. Very robus sysems, bu exremely conservaive. Large funcional
More information