Accelerated Dual Descent for Constrained Convex Network Flow Optimization


52nd IEEE Conference on Decision and Control, December 10-13, Florence, Italy

Michael Zargham, Alejandro Ribeiro, Ali Jadbabaie

Abstract— We present a fast distributed solution to the capacity constrained convex network flow optimization problem. Our solution is based on a distributed approximation of Newton's method called Accelerated Dual Descent (ADD). Our algorithm uses a parameterized approximate inverse Hessian, which is computed using matrix splitting techniques and a Neumann series truncated after N terms. The algorithm is called ADD-N because each update requires information from N-hop neighbors in the network. The parameter N characterizes an explicit trade-off between information dependence and convergence rate. Numerical experiments show that even for N = 1 and N = 2, ADD-N converges orders of magnitude faster than subgradient descent.

I. INTRODUCTION

The goal of this paper is to develop a fast distributed solution to the capacity constrained convex network flow optimization problem. Solutions to minimum cost network flow problems have long been used in operations research and transportation networks [1], [2]. The seminal results of Rockafellar in [3] tie the network flow problem to min cut and other combinatorial problems such as shortest path, [4][Chapter 1]. Network flow problems also arise in routing problems for communication networks, [5], where there is a need for distributed solutions. For example, solving a network flow problem is a key subproblem in the wireless routing and resource allocation problem in [6]. Dual subgradient descent is a distributed algorithm used to solve the convex network flow optimization problem. Analysis of subgradient methods for convex optimization problems can be found in [7] and [8], with the latter taking into account uncertainty in the network structure. However, practical applicability of subgradient-type algorithms is limited by exceedingly slow convergence rates, [9] and [10].
An alternative distributed algorithm based on the Gauss-Seidel method is presented in [11]. Like gradient descent, Gauss-Seidel is a first order method. Faster methods, such as the dual Newton's method, require global information and the existence of a dual Hessian. The Accelerated Dual Descent (ADD) method is a distributed algorithm requiring information from an N-hop neighborhood in order to update the dual variables, [12], [13]. The ADD method is based on Newton's method and therefore requires the existence of the dual Hessian. We show that capacity constraints result in a nonsmooth dual function. Therefore, there are points where the dual Hessian does not exist and Newton-type methods such as ADD cannot be directly applied. In this work we prove the existence of a generalized dual Hessian. We compute the generalized dual Hessian via local information and use it to implement the ADD-N algorithm. Using ADD-N with the generalized Hessian is proven to result in a descent direction, guaranteeing convergence of the algorithm.

This research is supported by the Army Research Lab MAST Collaborative Technology Alliance, the AFOSR complex networks program, ARO P NS, NSF CAREER CCF, NSF CCF, ONR MURI N and NSF-ECS. Michael Zargham, Alejandro Ribeiro and Ali Jadbabaie are with the Department of Electrical and Systems Engineering, University of Pennsylvania.

We begin the paper in Section II with the formal definition of the network flow optimization problem considered here. Network connectivity is modeled as a directed graph, and the goal of the network is to support a single information flow specified by incoming rates at an arbitrary number of sources and outgoing rates at an arbitrary number of sinks. Using the capacity constraints to restrict to positive valued flows recovers the standard implementation of the network flow optimization problem in [4], [14]. Each edge of the network is associated with a convex function that determines the cost of transmitting a unit of flow across it.
The objective is to find the flows that minimize the sum of link costs by solving a convex optimization problem with linear constraints. In Section II-A, we show that the dual problem is nonsmooth and we prove the existence of a generalized dual Hessian consistent with [15]. In Section III, we construct the ADD-N update using a Neumann expansion for the Hessian inverse, [16][Section 5.8], truncated after N terms. Since our inverse Hessian approximation is an N-degree matrix polynomial, our descent direction can be computed using N-hop neighbor information. In Section III-A, we demonstrate how the ADD-N direction can be implemented using N one-hop exchanges per dual update. In Section IV, the ADD algorithm is shown to exhibit a faster convergence rate relative to subgradient descent in numerical experiments.

II. CONSTRAINED NETWORK OPTIMIZATION

Consider a network represented by a directed graph G = (N, E) with node set N = {1, ..., n} and edge set E = {1, ..., E}. The network is deployed to support a single information flow specified by incoming rates b_i > 0 at source nodes and outgoing rates b_i < 0 at sink nodes. Rate requirements are collected in a vector b, which to ensure problem feasibility has to satisfy ∑_{i=1}^{n} b_i = 0. Our goal is to determine a flow vector x = [x_e]_{e∈E}, with x_e denoting the amount of flow on edge e = (i, j). Flow conservation implies that it must be Ax = b, with A the n × E node-edge incidence

matrix defined as

    [A]_{ij} = 1 if edge j leaves node i,  −1 if edge j enters node i,  0 otherwise.

We define the reward as the negative of a sum of convex cost functions φ_e(x_e) denoting the cost of x_e units of flow traversing edge e. We assume that the cost functions φ_e are strictly convex, surjective and twice continuously differentiable on the interval c_l^e ≤ x_e ≤ c_u^e, where c_l^e ≤ 0 and c_u^e ≥ 0 are the capacity constraints on edge e. The direction of flow can be forced to match the direction of the edge by choosing c_l^e = 0. The network flow optimization problem is then defined as

    min f(x) = ∑_{e=1}^{E} φ_e(x_e)   subject to:  Ax = b,  c_l ≤ x ≤ c_u.    (1)

Keeping the capacity constraints and dualizing the equality constraint, we have the partial dual

    q(λ) = min_{c_l ≤ x ≤ c_u} ∑_{e=1}^{E} φ_e(x_e) + λ^T (Ax − b).    (2)

We separate this equation into an edge dimensional sum of optimization problems

    q(λ) = −λ^T b + ∑_{e=1}^{E} min_{c_l^e ≤ x_e ≤ c_u^e} φ_e(x_e) + x_e (−Δλ^e),    (3)

where Δλ^e = λ_j − λ_i for e = (i, j). Each one of these optimization problems is a constrained convex optimization local to one edge, allowing us to write the primal optimizers explicitly,

    x_e(Δλ^e) = [ (dφ_e/dx_e)^{−1}(Δλ^e) ]_{c_l^e}^{c_u^e}.    (4)

We are therefore interested in solving

    λ* = argmax_λ q(λ)    (5)

and computing the primal variables x* = x(λ*), which are guaranteed to be optimal by strong duality via Slater's condition, [10]. Considering (2), we can compute the gradient g(λ) = ∇q(λ) in terms of the primal optimizers,

    g(λ) = A x(λ) − b,    (6)

where x(λ) denotes the vector of x_e(Δλ^e). The projection in (4) causes (6) to become a non-smooth function. This work focuses on addressing this non-smoothness.

A. Non-smooth Newton Method

Consider an index k, an arbitrary initial vector λ_0, and define iterates λ_k generated by the following recursion

    λ_{k+1} = λ_k + α_k d_k   for all k ≥ 0,    (7)

where d_k is an ascent direction satisfying g_k^T d_k > 0, with g_k = g(λ_k), and α_k is a given stepsize sequence. The Newton method is obtained by making d_k in (7) the Newton step, which solves H_k d_k = −g_k.
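As a concrete illustration of (3)-(6), the sketch below recovers the primal flows from the dual differences and evaluates the dual gradient for the cost φ_e(x_e) = exp(x_e) + exp(−x_e) used later in Section IV, for which (dφ_e/dx_e)^{−1}(y) = arcsinh(y/2). The 3-node network, the capacity box, and the dual iterate are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Node-edge incidence matrix of a 3-node path: edges e1=(1,2), e2=(2,3).
A = np.array([[1.0, 0.0],
              [-1.0, 1.0],
              [0.0, -1.0]])
b = np.array([1.0, 0.0, -1.0])       # one source, one sink (entries sum to zero)
c_l, c_u = -2.0, 2.0                 # symmetric capacity box (assumption)

def primal_from_dual(lam):
    """Eq. (4): x_e = clip((phi_e')^{-1}(dlam_e), c_l, c_u),
    with phi_e(x) = exp(x) + exp(-x), so (phi_e')^{-1}(y) = arcsinh(y/2)."""
    dlam = -A.T @ lam                # dual differences dlam_e = lam_j - lam_i
    return np.clip(np.arcsinh(dlam / 2.0), c_l, c_u)

def dual_gradient(lam):
    """Eq. (6): g(lam) = A x(lam) - b, the constraint violation."""
    return A @ primal_from_dual(lam) - b

lam = np.array([-2.0, 0.0, 2.0])     # an arbitrary dual iterate
g = dual_gradient(lam)
print(g)
```

Note that sum(g) = 0 whenever the entries of b sum to zero, reflecting the fact (used below) that the all-ones vector lies in the null space of the dual Hessian.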
The Newton step is explicitly given by d_k = −H_k^† g_k for any k where the dual Hessian H_k exists¹. To obtain an expression for the dual Hessian, consider given dual λ_k and primal x_k = x(λ_k) variables, and consider the second order approximation of the primal objective centered at the current primal iterates x_k,

    f̂(y) = f(x_k) + ∇f(x_k)^T (y − x_k) + (1/2)(y − x_k)^T ∇²f(x_k)(y − x_k).    (8)

The primal optimization problem in (1) is now replaced by the minimization of the approximating function f̂(y) in (8) subject to the constraints Ay = b and c_l ≤ y ≤ c_u. This approximated problem is a quadratic program whose dual is a piecewise quadratic function

    q̂(λ) = −(1/2) λ^T A [ ∇²f(x_k)^{−1} A^T λ ]_{c_l}^{c_u} + p^T λ + r,    (9)

where the projection [·]_{c_l}^{c_u} acts edgewise. The vector p and the constant r can be expressed in closed form as functions of ∇f(x_k) and ∇²f(x_k), but they are irrelevant for the discussion here. When f̂(y) is centered at x_k = x(λ_k), we have f̂(x(λ_k)) = f(x(λ_k)) and it follows that q̂(λ_k) = q(λ_k) for any λ_k where the implicit primal optimizers x_k = x(λ_k) are used. The important consequence of (9) is that we can compute the dual Hessian. When the projection in (9) is inactive we recover the Hessian

    H_k = −A ∇²f(x_k)^{−1} A^T    (10)

discussed in [13]. We define the set of unsaturated edges

    U_k = { e ∈ E : c_l^e < x_e(λ_k) < c_u^e }.    (11)

The dual Hessian ∇²q(λ_k) is dependent on which edges e are unsaturated, i.e., e ∈ U_k. When edge e = (i, j) saturates, the ith element of the gradient g_i(λ_k) is unaffected by changes in λ_j, so ∂g(λ_k)/∂λ_j = 0. Thus the Hessian elements [H_k]_{ij} = [H_k]_{ji} are zero. We define the off-diagonal elements of the generalized Hessian H_k = ∇²q(λ_k) according to

    ∂²q(λ_k)/∂λ_j ∂λ_i = −[A ∇²f(x_k)^{−1} A^T]_{ij} if (i, j) ∈ U_k,  and 0 if (i, j) ∉ U_k,    (12)

¹ H_k^† denotes the Moore-Penrose pseudoinverse of the dual function's Hessian H_k = H(λ_k), and g_k ⊥ 1 for all k.

and the diagonal elements are the sums of the corresponding off-diagonal elements of A ∇²f(x_k)^{−1} A^T,

    [H_k]_{ii} = ∑_{j : (i,j) ∈ U_k} [A ∇²f(x_k)^{−1} A^T]_{ij}.    (13)

From the definition of f(x) in (1) it follows that the primal Hessian ∇²f(x_k) is a diagonal matrix, which is positive definite by strict convexity of f(x). Therefore, its inverse exists and can be computed locally. Further observe that L_k = A ∇²f(x_k)^{−1} A^T is a weighted version of the network graph's Laplacian. One consequence of its Laplacian form is that H_k = −L_k is negative semi-definite. Another is that 1 is an eigenvector of H_k associated with eigenvalue 0, and that H_k is invertible on the subspace 1^⊥.

B. Existence of the Generalized Hessian

We still need to understand how the projection in (4) and the resulting non-smoothness in (6) impact the existence of this dual Hessian.

Theorem 1. Properties of the Dual Gradient:
1) The function g : R^n → R^n is Lipschitz continuous:

    ‖g(λ) − g(λ̄)‖ ≤ L ‖λ − λ̄‖   for all λ, λ̄ ∈ R^n.    (14)

2) The function g : R^n → R^n is semi-smooth:

    lim_{V ∈ ∂g(λ + t h'), h' → h, t ↓ 0} { V h' }   exists for all h ∈ R^n,    (15)

where ∂g(λ) is the generalized Hessian defined by

    ∂g(λ) = co{ lim_{λ̄ → λ} ∇g(λ̄) : λ̄ ∈ D_g },    (16)

where co{·} denotes the convex hull and D_g is the set of points at which g(λ) is differentiable.

Proof: From our assumptions on f(x) in Section II we know that φ_e(x_e) is strictly convex, surjective and twice continuously differentiable; therefore the function [dφ_e/dx_e]^{−1}(·) is continuous. We denote the vector of these functions ∇f^{−1}(·). The primal optimal variables are computed according to (4); the saturation operator [·]_{c_l^e}^{c_u^e} preserves continuity, but the function is no longer differentiable. Applying (6), we observe from

    g(λ) − g(λ̄) = A ( x(λ) − x(λ̄) )    (17)

that continuity of x(λ) is sufficient for continuity of g(λ). In order to show g(λ) is Lipschitz we substitute the definition of x(λ) from (4),

    g(λ) − g(λ̄) = A ( [∇f^{−1}(Δλ)]_{c_l}^{c_u} − [∇f^{−1}(Δλ̄)]_{c_l}^{c_u} ).    (18)

Since f(x) is twice continuously differentiable and surjective, it follows that ∇f^{−1} is continuously differentiable. Let y(x) = ∇f^{−1}(x), which means ∇f(y(x)) = x by the definition of the function inverse.
We take the gradient of this expression with respect to x, yielding ∇²f(y(x)) ∇y(x) = I by the chain rule. Since f(x) is strictly convex we have an expression for the gradient of ∇f^{−1},

    ∇y(x) = ∇[∇f^{−1}](x) = [∇²f(y(x))]^{−1}.    (19)

Fig. 1. The constant γ is the steepest gradient point of the function ∇f^{−1}(x) on its active domain [∇f(c_l), ∇f(c_u)], thus making it a property of the primal objective f(x) and the capacity constraints. This image simplifies the problem to one dimension to make it visualizable.

Now we will find the steepest possible slope of ∇f^{−1},

    γ = max ‖∇y(x)‖   s.t.  ∇f(c_l) ≤ x ≤ ∇f(c_u).    (20)

We need not consider any x outside of the set J = { x : ∇f(c_l) ≤ x ≤ ∇f(c_u) } because, using the fact that ∇f is monotone increasing, we know that any such x yields ∇f^{−1}(x) = c_l or ∇f^{−1}(x) = c_u. We can guarantee the existence of γ as defined in (20) by restricting the domain of y to the set J, because any continuous function from a compact space into a metric space is bounded. Using (20) we can conclude that

    ‖ [∇f^{−1}(Δλ)]_{c_l}^{c_u} − [∇f^{−1}(Δλ̄)]_{c_l}^{c_u} ‖ ≤ γ ‖Δλ − Δλ̄‖.    (21)

Recalling that Δλ = −A^T λ for any λ and substituting (21) back into (18), we have

    ‖g(λ) − g(λ̄)‖ ≤ γ ‖A‖² ‖λ − λ̄‖,    (22)

which completes the proof of part 1) with L = γ‖A‖². To continue with part 2), we consider the points for which g(λ) is not differentiable,

    S = { λ : [dφ_e/dx_e]^{−1}(Δλ^e) = c_l^e for some e, or [dφ_e/dx_e]^{−1}(Δλ^e) = c_u^e for some e }.    (23)

For any λ ∉ S, we have ∂g(λ) = {H(λ)} as defined in (12) and (13), because g(λ) is differentiable at those points. The limit as we approach a point in S depends on which edges are saturating and on whether the direction of the limit is from inside or outside of the saturation region. From within the saturation region, λ ∈ { λ ∈ R^n : (i, j) ∉ U },

    [ lim_{λ̄ → λ, λ̄ ∈ R^n \ S} ∇g(λ̄) ]_{ij} = 0,    (24)

but when outside the saturation region, λ ∈ { λ ∈ R^n : (i, j) ∈ U },

    [ lim_{λ̄ → λ, λ̄ ∈ R^n \ S} ∇g(λ̄) ]_{ij} = [H_k]_{ij},    (25)

with H_k as defined in (10). The set of generalized Hessians ∂g(λ) defined in (16) allows

    [∂g(λ)]_{ij} ∈ [ [H_k]_{ij}, 0 ]    (26)

for any edge e = (i, j) for which x_e(λ) = c_l^e or x_e(λ) = c_u^e. By our construction of the set ∂g(λ) in (24)-(26), for any matrix V ∈ ∂g(λ), including λ ∈ S, the expression V h exists for all h ∈ R^n. We conclude that (15) holds, completing the proof.

Theorem 1 characterizes the dual gradient g(λ), providing a set of generalized Hessian matrices at the points where g(λ) is not differentiable. In our proof, equation (26), it is shown that our choice of H(λ) defined in (12) and (13) is an extreme point of the set of generalized Hessians ∂g(λ). While we have shown that the entire set of generalized Hessians exists, our algorithm uses this extreme point for its direct relationship to the Hessian in the unconstrained (smooth) version of the network flow optimization problem analyzed in [13]. Theorem 1 also specifies that the dual gradient g(λ) is Lipschitz continuous, which is a building block for our convergence analysis.

Theorem 2. Suppose g(λ) is semi-smooth at λ, all V ∈ ∂g(λ) are non-singular, and g(λ) is Lipschitz continuous. Then the iteration

    λ_{k+1} = λ_k − V_k^{−1} g(λ_k)    (27)

is well defined and globally convergent to the unique solution λ* of

    g(λ) = 0    (28)

at a q-superlinear rate.

Theorem 2 is Newton's method for piecewise quadratic functions taken from [15][Theorem 1.1]. It states that we can solve a piecewise quadratic optimization problem such as the one we have in (9) using Newton's method as long as the generalized Hessian exists and is full rank. In our case, it converges as long as the inverse of the primal Hessian ∇²f(x_k) is full rank and the reduced graph Ĝ_k = (N, U_k) remains connected. Unfortunately, we cannot guarantee Ĝ_k will remain connected. However, the approximate Newton method called ADD, developed in [13], when applied with an enhanced splitting technique, allows us to sidestep this concern. Furthermore, unlike the true Newton method, it can be computed using only local information.

III.
NON-SMOOTH ACCELERATED DUAL DESCENT

The ADD-N algorithm uses matrix splitting to generate an approximation of the Newton direction requiring information from no more than N hops away. We consider a finite number of terms of a suitable Taylor expansion representation of the Newton direction. At iteration k, split the Hessian H_k = B_k − D_k, with the diagonal elements collected in D_k and the off-diagonal elements in B_k, where both D_k and B_k are nonnegative elementwise. To handle potential singularities we apply an enhanced splitting H_k = B̄_k − Δ_k, where

    Δ_k = 2D_k + I   and   B̄_k = D_k + I + B_k,    (29)

observing that Δ_k is still a diagonal matrix and B̄_k retains the structure of a weighted adjacency matrix. Using our splitting, we can define the approximate Hessian inverse as defined in [13],

    H̄_k = −∑_{r=0}^{N} Δ_k^{−1/2} ( Δ_k^{−1/2} B̄_k Δ_k^{−1/2} )^r Δ_k^{−1/2},    (30)

which exploits the Neumann series for the inverse of a matrix of the form I − X. Given the approximate Hessian inverse, our approximate Newton direction is

    d_k = −H̄_k g_k,    (31)

and the accelerated dual descent update is

    λ_{k+1} = λ_k + α_k d_k,    (32)

where α_k is the step size at iteration k. The Nth order approximation H̄_k adds a term of the form Δ_k^{−1/2}(Δ_k^{−1/2} B̄_k Δ_k^{−1/2})^N Δ_k^{−1/2} to the (N−1)st order approximation. The sparsity pattern of this term is that of B̄_k^N, which coincides with the N-hop neighborhood because it is the Nth power of a weighted adjacency matrix. Thus, computation of the local elements of the Newton step necessitates information from N hops away; see [13] for further details. We thus interpret (31) as a family of approximations indexed by N that yields Hessian approximations requiring information from N-hop neighbors in the network.

A. Algorithm Implementation

The direct implementation of the dual update described in (32) can be computationally cumbersome. In practice we implement ADD-N as N steps of the consensus iteration

    d_k^{(r+1)} = Δ_k^{−1} B̄_k d_k^{(r)} + Δ_k^{−1} g_k,    (33)

which is started with d_k^{(0)} = Δ_k^{−1} g_k, as follows clearly from (30). Further explanation of the equivalence of ADD-N and N-step consensus methods can be found in [13]. Our practical implementation of ADD-N is given in Algorithm 1.
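The equivalence between the truncated Neumann series (30) and the consensus recursion (33) can be checked numerically. The sketch below builds the enhanced splitting (29) from a small weighted Laplacian (the edge weights and gradient vector are arbitrary illustrative values) and verifies that N steps of (33) reproduce d_k = −H̄_k g_k.

```python
import numpy as np

# Weighted Laplacian L of a 4-node path; H = -L models the dual Hessian.
w = np.array([0.3, 0.5, 0.4])                     # illustrative edge weights
A = np.zeros((4, 3))
for e, (i, j) in enumerate([(0, 1), (1, 2), (2, 3)]):
    A[i, e], A[j, e] = 1.0, -1.0
L = A @ np.diag(w) @ A.T
H = -L

# Enhanced splitting (29): H = Bbar - Delta.
D = -np.diag(np.diag(H))                          # nonnegative diagonal part
B = H + D                                         # nonnegative off-diagonal part
Delta = 2.0 * D + np.eye(4)
Bbar = D + np.eye(4) + B
assert np.allclose(Bbar - Delta, H)

N = 2
g = np.array([1.0, -0.5, -0.25, -0.25])           # arbitrary gradient, sums to 0
Dinv = np.linalg.inv(Delta)

# Closed form (30)-(31): d = -Hbar g = sum_{r=0}^{N} (Dinv Bbar)^r Dinv g.
d_series = sum(np.linalg.matrix_power(Dinv @ Bbar, r) @ Dinv @ g
               for r in range(N + 1))

# Consensus form (33): N updates starting from d^(0) = Dinv g.
d = Dinv @ g
for _ in range(N):
    d = Dinv @ Bbar @ d + Dinv @ g

print(np.allclose(d, d_series))
```

The final check g^T d > 0 below confirms the computed direction is an ascent direction, as required in (7).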
We assume that node i keeps track of the ith element of node-dimensional variables such as λ_k, and of the nonzero elements of the ith row of matrices such as H_k and the splitting matrices. We also assume that the edge variables such as x_e are observable by both nodes i and j linked by edge e = (i, j). Within this framework, steps 3 and 10 are the only ones that require communication, and they involve one-hop neighbors only. Therefore, each full update λ_k → λ_{k+1} costs N + 1 local information exchanges.

B. Convergence

Theorem 3. Algorithm 1 with a fixed step size α_k = α > 0 small enough converges globally to the optimal dual variable λ* for the dual problem in (5), and x* = x(λ*) is the optimal solution to the original constrained network flow optimization problem, (1).
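The structural facts used in proving Theorem 3 can be spot-checked numerically: the generalized Hessian (12)-(13) remains a negated Laplacian when saturated edges are dropped, and the enhanced splitting (29) yields a negative definite approximate inverse (30). The instance below, with one saturated edge on a 4-node cycle and illustrative curvature values, is an assumption for demonstration only.

```python
import numpy as np

# 4-node cycle; edge 3 is saturated at a capacity bound and leaves U_k.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
phi_dd = np.array([2.1, 2.4, 2.2, 2.8])   # phi_e''(x_e) > 0 (illustrative)
saturated = {3}                            # edges outside U_k, eq. (11)

n = 4
H = np.zeros((n, n))
for e, (i, j) in enumerate(edges):
    if e in saturated:
        continue                           # saturated edges contribute zero
    H[i, j] = H[j, i] = 1.0 / phi_dd[e]    # off-diagonals, eq. (12)
for i in range(n):
    H[i, i] = -H[i].sum()                  # diagonals, eq. (13): rows sum to 0

# H_k = -L_k: negative semi-definite with 1 in its null space.
assert np.allclose(H @ np.ones(n), 0.0)
assert np.max(np.linalg.eigvalsh(H)) < 1e-12

# Enhanced splitting and truncated Neumann inverse, eqs. (29)-(30), N = 1.
D = -np.diag(np.diag(H))
Delta = 2.0 * D + np.eye(n)
Bbar = Delta + H
Dinv_h = np.linalg.inv(np.sqrt(Delta))     # Delta^{-1/2} (Delta is diagonal)
Hbar = -sum(Dinv_h @ np.linalg.matrix_power(Dinv_h @ Bbar @ Dinv_h, r) @ Dinv_h
            for r in range(2))
print(np.max(np.linalg.eigvalsh(Hbar)) < 0)
```

Dropping the saturated edge disconnects nothing here, but even when it would, Δ_k stays positive definite by construction, which is exactly why the enhanced splitting sidesteps the connectivity concern raised after Theorem 2.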

Algorithm 1: Accelerated Dual Descent.
1  Initialize dual: λ_0
2  for k = 0, 1, 2, ... do
3    Compute differentials: Δλ_k = −A^T λ_k
4    Update primals: x_k = [∇f^{−1}(Δλ_k)]_{c_l}^{c_u}
5    Compute constraint violation: g_k = A x_k − b
6    Compute the generalized Hessian:
       [H_k]_{ij} = 1/φ_e''(x_e) if e = (i, j) ∈ U_k, 0 else
       [H_k]_{ii} = −∑_{j≠i} [H_k]_{ij}
7    Compute splitting: Δ_k = I − 2 diag(H_k) and B̄_k = Δ_k + H_k
8    Initialize direction: d_k^{(0)} = Δ_k^{−1} g_k
9    for r = 0, 1, ..., N − 1 do
10     Update direction: d_k^{(r+1)} = Δ_k^{−1} B̄_k d_k^{(r)} + Δ_k^{−1} g_k
11   end
12   Update dual: λ_{k+1} = λ_k + α_k d_k^{(N)}
13 end

Proof: As per the Descent Lemma [17][A.24], it is sufficient to show that the approximate Hessian inverse H̄_k is negative definite for all k to guarantee that the ADD-N iteration defined in (31) and (32) converges to the optimal dual variable λ* for some fixed step size α. We consider the individual terms of H̄_k = −∑_{r=0}^{N} T_r as defined in (30). The matrix D_k is diagonal positive semi-definite, therefore Δ_k = 2D_k + I is diagonal and positive definite. Using the fact that L_k = −H_k is a Laplacian from (12) and (13) and the basic splitting L_k = D_k − B_k, the matrix B̄_k = D_k + I + B_k is strictly diagonally dominant. Furthermore, by construction B̄_k is a nonnegative symmetric matrix. A symmetric, nonnegative, strictly diagonally dominant matrix is positive definite [16][Section 6.1]. We can conclude that each term

    T_r = Δ_k^{−1/2} ( Δ_k^{−1/2} B̄_k Δ_k^{−1/2} )^r Δ_k^{−1/2}    (34)

is positive definite because it is symmetric and a product of positive definite matrices. Since each term T_r is positive definite, the finite sum ∑_{r=0}^{N} T_r is positive definite, so from (30) we get that H̄_k is negative definite, and we conclude that the dual iteration converges to λ*. The primal problem (1) satisfies Slater's condition for strong duality [10][Section 5.2.3]; therefore there is a zero duality gap, indicating that the primal optimum x* is the argument of the minimization in (2) when λ = λ*. Thus x* = x(λ*) computed according to (4) is the optimal primal variable, completing the proof.
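Putting the pieces together, the following sketch runs Algorithm 1 with N = 1 on a 3-node line network with the exponential cost of Section IV; the step size, capacity box and rate vector are illustrative choices, and the feasibility residual ‖A x_k − b‖ is driven toward zero.

```python
import numpy as np

# 3-node path: edges e1=(1,2), e2=(2,3); node 1 sources one unit, node 3 sinks it.
A = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, -1.0])
c_l, c_u = -2.0, 2.0                       # capacity box (assumption)
N, alpha, n = 1, 0.2, 3                    # ADD-1 with a tuned fixed step size

lam = np.zeros(n)                          # step 1: initialize dual
for k in range(1000):
    dlam = -A.T @ lam                      # step 3: differentials
    x = np.clip(np.arcsinh(dlam / 2.0), c_l, c_u)   # step 4: primals, eq. (4)
    g = A @ x - b                          # step 5: constraint violation
    # step 6: generalized Hessian over unsaturated edges, eqs. (12)-(13)
    H = np.zeros((n, n))
    for e in range(A.shape[1]):
        i, j = np.where(A[:, e] != 0)[0]
        if c_l < x[e] < c_u:
            H[i, j] = H[j, i] = 1.0 / (np.exp(x[e]) + np.exp(-x[e]))
    for i in range(n):
        H[i, i] = -H[i].sum()
    # step 7: enhanced splitting
    Delta = np.eye(n) - 2.0 * np.diag(np.diag(H))
    Bbar = Delta + H
    Dinv = np.diag(1.0 / np.diag(Delta))
    d = Dinv @ g                           # step 8: initialize direction
    for _ in range(N):                     # steps 9-11: consensus updates, (33)
        d = Dinv @ Bbar @ d + Dinv @ g
    lam = lam + alpha * d                  # step 12: dual update

print(np.linalg.norm(A @ x - b))           # small feasibility residual
```

At the optimum the flow is one unit on each edge, well inside the capacity box, so the run stays in the smooth regime; with a saturating b the clip in step 4 and the edge test in step 6 become active.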
Theorem 3 guarantees that the ADD-N algorithm converges to the optimal primal and dual variables given an appropriately chosen fixed step size. A relatively loose bound on the allowable step size can be determined by considering the Descent Lemma [17][A.24]. In our case such a bound will be inversely proportional to the Lipschitz constant L defined in Theorem 1. In practice, the appropriate step size scale can be determined by tuning. We also make no theoretical claim about the convergence rate, which appears in practice to be consistent with the results in [13].

Fig. 2. ADD-1 and ADD-2 outperform projected subgradient by an order of magnitude on a sparse network with 20 nodes and 35 edges with capacity constraints. (Panels: primal objective f(x); primal feasibility ‖Ax − b‖.)

IV. NUMERICAL EXPERIMENTS

Numerical experiments were run on a variety of problems ranging in scale from 10 to 100 nodes and 9 to 500 edges. Two types of networks were considered: uniformly randomly generated networks, and worst-case-scenario line graphs where one node is a sink and all other nodes are sources. The strictly convex objective function φ_e(x_e) = exp(x_e) + exp(−x_e) can be thought of as a prohibitive cost of heavily loading a single link, which encourages the network to make use of all of its resources. Across the board we observe results consistent with the unconstrained problem in [13]. The ADD-N algorithm converges one to two orders of magnitude faster than subgradient descent type algorithms for problems large and small, sparse and dense. Figure 2 is a typical example of the convergence behavior on a sparse network. Sparse problems take considerably longer to solve due to slower mixing rates for the information spreading matrix B̄_k. Mixing rates of Markov chains are discussed extensively in [18]. In Figure 3 there is a significant improvement in convergence rate: the

time required to reach a feasibility of 10^{−4} goes from 800 in the first example to only 100.

Fig. 3. ADD-1 and ADD-2 outperform projected subgradient by an order of magnitude on a denser network with 20 nodes and 100 edges with capacity constraints. (Panels: primal objective f(x); primal feasibility ‖Ax − b‖.)

V. CONCLUSION

The Accelerated Dual Descent (ADD) algorithm has been generalized to the capacity constrained network flow optimization problem through proof and computation of a generalized dual Hessian. This improvement expands the applicability of the ADD method. In particular, combining the results for the capacitated network flow problem with the stochastic generalization in [19] opens the door to networking applications via the backpressure formulation, [20], [21]. We have used ADD as a foundation for Accelerated Backpressure [22], which stabilizes queues in capacitated multi-commodity communication networks with stochastic packet arrival rates. In this work, simulations demonstrated that N = 1 and N = 2, respectively denoted ADD-1 and ADD-2, perform best in practice. ADD-1 corresponds to Newton step approximations using information from neighboring nodes only, while ADD-2 requires information from nodes two hops away. While we observe a performance improvement when increasing N, we do so at the cost of additional communication. Since further increases in N yield diminishing returns with respect to convergence rate, we recommend the use of ADD-1 and ADD-2, which outperform gradient descent by upwards of two orders of magnitude.

REFERENCES

[1] J. B. Orlin. Minimum convex cost dynamic network flows. Mathematics of Operations Research, May.
[2] E. Miandoabchi and R. Farahani. Graph Theory for Operations Research and Management. IGI Global.
[3] R. T. Rockafellar. Network Flows and Monotropic Programming. Wiley, New York, NY.
[4] D. P. Bertsekas. Network Optimization: Continuous and Discrete Models. Athena Scientific, Belmont, MA.
[5] Michal Pioro and Deepankar Medhi. Routing, Flow, and Capacity Design in Communication and Computer Networks. Elsevier.
[6] Lin Xiao, M. Johansson, and S. P. Boyd. Simultaneous routing and resource allocation via dual decomposition. IEEE Transactions on Communications, 52(7).
[7] A. Nedic and A. Ozdaglar. Distributed subgradient methods for multiagent optimization. IEEE Transactions on Automatic Control, 54.
[8] I. Lobel and A. Ozdaglar. Distributed subgradient methods for convex optimization over random networks. IEEE Transactions on Automatic Control, 56.
[9] N. Z. Shor. Minimization Methods for Non-differentiable Functions. Springer.
[10] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK.
[11] Dimitri P. Bertsekas and Didier El Baz. Distributed asynchronous relaxation methods for convex network flow problems. SIAM Journal on Control and Optimization.
[12] M. Zargham, A. Ribeiro, A. Ozdaglar, and A. Jadbabaie. Accelerated dual descent for network optimization. In Proceedings of IEEE ACC.
[13] M. Zargham, A. Ribeiro, A. Ozdaglar, and A. Jadbabaie. Accelerated dual descent for network optimization. IEEE Transactions on Automatic Control, (submitted).
[14] A. V. Goldberg, E. Tardos, and R. E. Tarjan. Network Flow Algorithms. Springer-Verlag, Berlin.
[15] J. Sun and H. Kuo. Applying a Newton method to strictly convex separable network quadratic programs. SIAM Journal on Optimization, 8.
[16] R. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, New York.
[17] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Cambridge, Massachusetts.
[18] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah. Gossip algorithms: Design, analysis, and applications. In Proceedings of IEEE INFOCOM.
[19] M. Zargham, A. Ribeiro, and A. Jadbabaie. Network optimization under uncertainty. In Proceedings of IEEE CDC.
[20] L. Tassiulas and A. Ephremides. Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. IEEE Transactions on Automatic Control, 37.
[21] M. J. Neely. Stochastic Network Optimization with Application to Communication and Queueing Systems. Morgan and Claypool.
[22] M. Zargham, A. Ribeiro, and A. Jadbabaie. Accelerated backpressure algorithm. ArXiv.


More information

Stochastic Variational Inference with Gradient Linearization

Stochastic Variational Inference with Gradient Linearization Stochastic Variationa Inference with Gradient Linearization Suppementa Materia Tobias Pötz * Anne S Wannenwetsch Stefan Roth Department of Computer Science, TU Darmstadt Preface In this suppementa materia,

More information

MARKOV CHAINS AND MARKOV DECISION THEORY. Contents

MARKOV CHAINS AND MARKOV DECISION THEORY. Contents MARKOV CHAINS AND MARKOV DECISION THEORY ARINDRIMA DATTA Abstract. In this paper, we begin with a forma introduction to probabiity and expain the concept of random variabes and stochastic processes. After

More information

Unconditional security of differential phase shift quantum key distribution

Unconditional security of differential phase shift quantum key distribution Unconditiona security of differentia phase shift quantum key distribution Kai Wen, Yoshihisa Yamamoto Ginzton Lab and Dept of Eectrica Engineering Stanford University Basic idea of DPS-QKD Protoco. Aice

More information

Uniprocessor Feasibility of Sporadic Tasks with Constrained Deadlines is Strongly conp-complete

Uniprocessor Feasibility of Sporadic Tasks with Constrained Deadlines is Strongly conp-complete Uniprocessor Feasibiity of Sporadic Tasks with Constrained Deadines is Strongy conp-compete Pontus Ekberg and Wang Yi Uppsaa University, Sweden Emai: {pontus.ekberg yi}@it.uu.se Abstract Deciding the feasibiity

More information

18-660: Numerical Methods for Engineering Design and Optimization

18-660: Numerical Methods for Engineering Design and Optimization 8-660: Numerica Methods for Engineering esign and Optimization in i epartment of ECE Carnegie Meon University Pittsburgh, PA 523 Side Overview Conjugate Gradient Method (Part 4) Pre-conditioning Noninear

More information

Equilibrium of Heterogeneous Congestion Control Protocols

Equilibrium of Heterogeneous Congestion Control Protocols Equiibrium of Heterogeneous Congestion Contro Protocos Ao Tang Jiantao Wang Steven H. Low EAS Division, Caifornia Institute of Technoogy Mung Chiang EE Department, Princeton University Abstract When heterogeneous

More information

CS229 Lecture notes. Andrew Ng

CS229 Lecture notes. Andrew Ng CS229 Lecture notes Andrew Ng Part IX The EM agorithm In the previous set of notes, we taked about the EM agorithm as appied to fitting a mixture of Gaussians. In this set of notes, we give a broader view

More information

Available online at ScienceDirect. IFAC PapersOnLine 50-1 (2017)

Available online at   ScienceDirect. IFAC PapersOnLine 50-1 (2017) Avaiabe onine at www.sciencedirect.com ScienceDirect IFAC PapersOnLine 50-1 (2017 3412 3417 Stabiization of discrete-time switched inear systems: Lyapunov-Metzer inequaities versus S-procedure characterizations

More information

Approximated MLC shape matrix decomposition with interleaf collision constraint

Approximated MLC shape matrix decomposition with interleaf collision constraint Approximated MLC shape matrix decomposition with intereaf coision constraint Thomas Kainowski Antje Kiese Abstract Shape matrix decomposition is a subprobem in radiation therapy panning. A given fuence

More information

4 Separation of Variables

4 Separation of Variables 4 Separation of Variabes In this chapter we describe a cassica technique for constructing forma soutions to inear boundary vaue probems. The soution of three cassica (paraboic, hyperboic and eiptic) PDE

More information

Methods for Ordinary Differential Equations. Jacob White

Methods for Ordinary Differential Equations. Jacob White Introduction to Simuation - Lecture 12 for Ordinary Differentia Equations Jacob White Thanks to Deepak Ramaswamy, Jaime Peraire, Micha Rewienski, and Karen Veroy Outine Initia Vaue probem exampes Signa

More information

Published in: Proceedings of the Twenty Second Nordic Seminar on Computational Mechanics

Published in: Proceedings of the Twenty Second Nordic Seminar on Computational Mechanics Aaborg Universitet An Efficient Formuation of the Easto-pastic Constitutive Matrix on Yied Surface Corners Causen, Johan Christian; Andersen, Lars Vabbersgaard; Damkide, Lars Pubished in: Proceedings of

More information

BASIC NOTIONS AND RESULTS IN TOPOLOGY. 1. Metric spaces. Sets with finite diameter are called bounded sets. For x X and r > 0 the set

BASIC NOTIONS AND RESULTS IN TOPOLOGY. 1. Metric spaces. Sets with finite diameter are called bounded sets. For x X and r > 0 the set BASIC NOTIONS AND RESULTS IN TOPOLOGY 1. Metric spaces A metric on a set X is a map d : X X R + with the properties: d(x, y) 0 and d(x, y) = 0 x = y, d(x, y) = d(y, x), d(x, y) d(x, z) + d(z, y), for a

More information

Primal and dual active-set methods for convex quadratic programming

Primal and dual active-set methods for convex quadratic programming Math. Program., Ser. A 216) 159:469 58 DOI 1.17/s117-15-966-2 FULL LENGTH PAPER Prima and dua active-set methods for convex quadratic programming Anders Forsgren 1 Phiip E. Gi 2 Eizabeth Wong 2 Received:

More information

How the backpropagation algorithm works Srikumar Ramalingam School of Computing University of Utah

How the backpropagation algorithm works Srikumar Ramalingam School of Computing University of Utah How the backpropagation agorithm works Srikumar Ramaingam Schoo of Computing University of Utah Reference Most of the sides are taken from the second chapter of the onine book by Michae Nieson: neuranetworksanddeepearning.com

More information

Integrating Factor Methods as Exponential Integrators

Integrating Factor Methods as Exponential Integrators Integrating Factor Methods as Exponentia Integrators Borisav V. Minchev Department of Mathematica Science, NTNU, 7491 Trondheim, Norway Borko.Minchev@ii.uib.no Abstract. Recenty a ot of effort has been

More information

Appendix of the Paper The Role of No-Arbitrage on Forecasting: Lessons from a Parametric Term Structure Model

Appendix of the Paper The Role of No-Arbitrage on Forecasting: Lessons from a Parametric Term Structure Model Appendix of the Paper The Roe of No-Arbitrage on Forecasting: Lessons from a Parametric Term Structure Mode Caio Ameida cameida@fgv.br José Vicente jose.vaentim@bcb.gov.br June 008 1 Introduction In this

More information

XSAT of linear CNF formulas

XSAT of linear CNF formulas XSAT of inear CN formuas Bernd R. Schuh Dr. Bernd Schuh, D-50968 Kön, Germany; bernd.schuh@netcoogne.de eywords: compexity, XSAT, exact inear formua, -reguarity, -uniformity, NPcompeteness Abstract. Open

More information

A Distributed Newton Method for Network Utility Maximization

A Distributed Newton Method for Network Utility Maximization A Distributed Newton Method for Networ Utility Maximization Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie Abstract Most existing wor uses dual decomposition and subgradient methods to solve Networ Utility

More information

MATH 172: MOTIVATION FOR FOURIER SERIES: SEPARATION OF VARIABLES

MATH 172: MOTIVATION FOR FOURIER SERIES: SEPARATION OF VARIABLES MATH 172: MOTIVATION FOR FOURIER SERIES: SEPARATION OF VARIABLES Separation of variabes is a method to sove certain PDEs which have a warped product structure. First, on R n, a inear PDE of order m is

More information

A Distributed Newton Method for Network Optimization

A Distributed Newton Method for Network Optimization A Distributed Newton Method for Networ Optimization Ali Jadbabaie, Asuman Ozdaglar, and Michael Zargham Abstract Most existing wor uses dual decomposition and subgradient methods to solve networ optimization

More information

CONJUGATE GRADIENT WITH SUBSPACE OPTIMIZATION

CONJUGATE GRADIENT WITH SUBSPACE OPTIMIZATION CONJUGATE GRADIENT WITH SUBSPACE OPTIMIZATION SAHAR KARIMI AND STEPHEN VAVASIS Abstract. In this paper we present a variant of the conjugate gradient (CG) agorithm in which we invoke a subspace minimization

More information

Throughput Optimal Scheduling for Wireless Downlinks with Reconfiguration Delay

Throughput Optimal Scheduling for Wireless Downlinks with Reconfiguration Delay Throughput Optima Scheduing for Wireess Downinks with Reconfiguration Deay Vineeth Baa Sukumaran vineethbs@gmai.com Department of Avionics Indian Institute of Space Science and Technoogy. Abstract We consider

More information

ASummaryofGaussianProcesses Coryn A.L. Bailer-Jones

ASummaryofGaussianProcesses Coryn A.L. Bailer-Jones ASummaryofGaussianProcesses Coryn A.L. Baier-Jones Cavendish Laboratory University of Cambridge caj@mrao.cam.ac.uk Introduction A genera prediction probem can be posed as foows. We consider that the variabe

More information

Robust Sensitivity Analysis for Linear Programming with Ellipsoidal Perturbation

Robust Sensitivity Analysis for Linear Programming with Ellipsoidal Perturbation Robust Sensitivity Anaysis for Linear Programming with Eipsoida Perturbation Ruotian Gao and Wenxun Xing Department of Mathematica Sciences Tsinghua University, Beijing, China, 100084 September 27, 2017

More information

Combining reaction kinetics to the multi-phase Gibbs energy calculation

Combining reaction kinetics to the multi-phase Gibbs energy calculation 7 th European Symposium on Computer Aided Process Engineering ESCAPE7 V. Pesu and P.S. Agachi (Editors) 2007 Esevier B.V. A rights reserved. Combining reaction inetics to the muti-phase Gibbs energy cacuation

More information

A Brief Introduction to Markov Chains and Hidden Markov Models

A Brief Introduction to Markov Chains and Hidden Markov Models A Brief Introduction to Markov Chains and Hidden Markov Modes Aen B MacKenzie Notes for December 1, 3, &8, 2015 Discrete-Time Markov Chains You may reca that when we first introduced random processes,

More information

HILBERT? What is HILBERT? Matlab Implementation of Adaptive 2D BEM. Dirk Praetorius. Features of HILBERT

HILBERT? What is HILBERT? Matlab Implementation of Adaptive 2D BEM. Dirk Praetorius. Features of HILBERT Söerhaus-Workshop 2009 October 16, 2009 What is HILBERT? HILBERT Matab Impementation of Adaptive 2D BEM joint work with M. Aurada, M. Ebner, S. Ferraz-Leite, P. Godenits, M. Karkuik, M. Mayr Hibert Is

More information

Optimization Based Bidding Strategies in the Deregulated Market

Optimization Based Bidding Strategies in the Deregulated Market Optimization ased idding Strategies in the Dereguated arket Daoyuan Zhang Ascend Communications, nc 866 North ain Street, Waingford, C 0649 Abstract With the dereguation of eectric power systems, market

More information

arxiv: v1 [cs.lg] 31 Oct 2017

arxiv: v1 [cs.lg] 31 Oct 2017 ACCELERATED SPARSE SUBSPACE CLUSTERING Abofaz Hashemi and Haris Vikao Department of Eectrica and Computer Engineering, University of Texas at Austin, Austin, TX, USA arxiv:7.26v [cs.lg] 3 Oct 27 ABSTRACT

More information

DIGITAL FILTER DESIGN OF IIR FILTERS USING REAL VALUED GENETIC ALGORITHM

DIGITAL FILTER DESIGN OF IIR FILTERS USING REAL VALUED GENETIC ALGORITHM DIGITAL FILTER DESIGN OF IIR FILTERS USING REAL VALUED GENETIC ALGORITHM MIKAEL NILSSON, MATTIAS DAHL AND INGVAR CLAESSON Bekinge Institute of Technoogy Department of Teecommunications and Signa Processing

More information

C. Fourier Sine Series Overview

C. Fourier Sine Series Overview 12 PHILIP D. LOEWEN C. Fourier Sine Series Overview Let some constant > be given. The symboic form of the FSS Eigenvaue probem combines an ordinary differentia equation (ODE) on the interva (, ) with a

More information

A Survey on Delay-Aware Resource Control. for Wireless Systems Large Deviation Theory, Stochastic Lyapunov Drift and Distributed Stochastic Learning

A Survey on Delay-Aware Resource Control. for Wireless Systems Large Deviation Theory, Stochastic Lyapunov Drift and Distributed Stochastic Learning A Survey on Deay-Aware Resource Contro 1 for Wireess Systems Large Deviation Theory, Stochastic Lyapunov Drift and Distributed Stochastic Learning arxiv:1110.4535v1 [cs.pf] 20 Oct 2011 Ying Cui Vincent

More information

14 Separation of Variables Method

14 Separation of Variables Method 14 Separation of Variabes Method Consider, for exampe, the Dirichet probem u t = Du xx < x u(x, ) = f(x) < x < u(, t) = = u(, t) t > Let u(x, t) = T (t)φ(x); now substitute into the equation: dt

More information

Coupling of LWR and phase transition models at boundary

Coupling of LWR and phase transition models at boundary Couping of LW and phase transition modes at boundary Mauro Garaveo Dipartimento di Matematica e Appicazioni, Università di Miano Bicocca, via. Cozzi 53, 20125 Miano Itay. Benedetto Piccoi Department of

More information

An Adaptive Opportunistic Routing Scheme for Wireless Ad-hoc Networks

An Adaptive Opportunistic Routing Scheme for Wireless Ad-hoc Networks An Adaptive Opportunistic Routing Scheme for Wireess Ad-hoc Networks A.A. Bhorkar, M. Naghshvar, T. Javidi, and B.D. Rao Department of Eectrica Engineering, University of Caifornia San Diego, CA, 9093

More information

Target Location Estimation in Wireless Sensor Networks Using Binary Data

Target Location Estimation in Wireless Sensor Networks Using Binary Data Target Location stimation in Wireess Sensor Networks Using Binary Data Ruixin Niu and Pramod K. Varshney Department of ectrica ngineering and Computer Science Link Ha Syracuse University Syracuse, NY 344

More information

Some Measures for Asymmetry of Distributions

Some Measures for Asymmetry of Distributions Some Measures for Asymmetry of Distributions Georgi N. Boshnakov First version: 31 January 2006 Research Report No. 5, 2006, Probabiity and Statistics Group Schoo of Mathematics, The University of Manchester

More information

Smoothness equivalence properties of univariate subdivision schemes and their projection analogues

Smoothness equivalence properties of univariate subdivision schemes and their projection analogues Numerische Mathematik manuscript No. (wi be inserted by the editor) Smoothness equivaence properties of univariate subdivision schemes and their projection anaogues Phiipp Grohs TU Graz Institute of Geometry

More information

Componentwise Determination of the Interval Hull Solution for Linear Interval Parameter Systems

Componentwise Determination of the Interval Hull Solution for Linear Interval Parameter Systems Componentwise Determination of the Interva Hu Soution for Linear Interva Parameter Systems L. V. Koev Dept. of Theoretica Eectrotechnics, Facuty of Automatics, Technica University of Sofia, 1000 Sofia,

More information

6 Wave Equation on an Interval: Separation of Variables

6 Wave Equation on an Interval: Separation of Variables 6 Wave Equation on an Interva: Separation of Variabes 6.1 Dirichet Boundary Conditions Ref: Strauss, Chapter 4 We now use the separation of variabes technique to study the wave equation on a finite interva.

More information

Moreau-Yosida Regularization for Grouped Tree Structure Learning

Moreau-Yosida Regularization for Grouped Tree Structure Learning Moreau-Yosida Reguarization for Grouped Tree Structure Learning Jun Liu Computer Science and Engineering Arizona State University J.Liu@asu.edu Jieping Ye Computer Science and Engineering Arizona State

More information

Sample Problems for Third Midterm March 18, 2013

Sample Problems for Third Midterm March 18, 2013 Mat 30. Treibergs Sampe Probems for Tird Midterm Name: Marc 8, 03 Questions 4 appeared in my Fa 000 and Fa 00 Mat 30 exams (.)Let f : R n R n be differentiabe at a R n. (a.) Let g : R n R be defined by

More information

BALANCING REGULAR MATRIX PENCILS

BALANCING REGULAR MATRIX PENCILS BALANCING REGULAR MATRIX PENCILS DAMIEN LEMONNIER AND PAUL VAN DOOREN Abstract. In this paper we present a new diagona baancing technique for reguar matrix pencis λb A, which aims at reducing the sensitivity

More information

Lecture 6: Moderately Large Deflection Theory of Beams

Lecture 6: Moderately Large Deflection Theory of Beams Structura Mechanics 2.8 Lecture 6 Semester Yr Lecture 6: Moderatey Large Defection Theory of Beams 6.1 Genera Formuation Compare to the cassica theory of beams with infinitesima deformation, the moderatey

More information

Explicit overall risk minimization transductive bound

Explicit overall risk minimization transductive bound 1 Expicit overa risk minimization transductive bound Sergio Decherchi, Paoo Gastado, Sandro Ridea, Rodofo Zunino Dept. of Biophysica and Eectronic Engineering (DIBE), Genoa University Via Opera Pia 11a,

More information

SVM: Terminology 1(6) SVM: Terminology 2(6)

SVM: Terminology 1(6) SVM: Terminology 2(6) Andrew Kusiak Inteigent Systems Laboratory 39 Seamans Center he University of Iowa Iowa City, IA 54-57 SVM he maxima margin cassifier is simiar to the perceptron: It aso assumes that the data points are

More information

The Group Structure on a Smooth Tropical Cubic

The Group Structure on a Smooth Tropical Cubic The Group Structure on a Smooth Tropica Cubic Ethan Lake Apri 20, 2015 Abstract Just as in in cassica agebraic geometry, it is possibe to define a group aw on a smooth tropica cubic curve. In this note,

More information

More Scattering: the Partial Wave Expansion

More Scattering: the Partial Wave Expansion More Scattering: the Partia Wave Expansion Michae Fower /7/8 Pane Waves and Partia Waves We are considering the soution to Schrödinger s equation for scattering of an incoming pane wave in the z-direction

More information

MA 201: Partial Differential Equations Lecture - 11

MA 201: Partial Differential Equations Lecture - 11 MA 201: Partia Differentia Equations Lecture - 11 Heat Equation Heat conduction in a thin rod The IBVP under consideration consists of: The governing equation: u t = αu xx, (1) where α is the therma diffusivity.

More information

Approximated MLC shape matrix decomposition with interleaf collision constraint

Approximated MLC shape matrix decomposition with interleaf collision constraint Agorithmic Operations Research Vo.4 (29) 49 57 Approximated MLC shape matrix decomposition with intereaf coision constraint Antje Kiese and Thomas Kainowski Institut für Mathematik, Universität Rostock,

More information

T.C. Banwell, S. Galli. {bct, Telcordia Technologies, Inc., 445 South Street, Morristown, NJ 07960, USA

T.C. Banwell, S. Galli. {bct, Telcordia Technologies, Inc., 445 South Street, Morristown, NJ 07960, USA ON THE SYMMETRY OF THE POWER INE CHANNE T.C. Banwe, S. Gai {bct, sgai}@research.tecordia.com Tecordia Technoogies, Inc., 445 South Street, Morristown, NJ 07960, USA Abstract The indoor power ine network

More information

Multilayer Kerceptron

Multilayer Kerceptron Mutiayer Kerceptron Zotán Szabó, András Lőrincz Department of Information Systems, Facuty of Informatics Eötvös Loránd University Pázmány Péter sétány 1/C H-1117, Budapest, Hungary e-mai: szzoi@csetehu,

More information

Expectation-Maximization for Estimating Parameters for a Mixture of Poissons

Expectation-Maximization for Estimating Parameters for a Mixture of Poissons Expectation-Maximization for Estimating Parameters for a Mixture of Poissons Brandon Maone Department of Computer Science University of Hesini February 18, 2014 Abstract This document derives, in excrutiating

More information

A Novel Learning Method for Elman Neural Network Using Local Search

A Novel Learning Method for Elman Neural Network Using Local Search Neura Information Processing Letters and Reviews Vo. 11, No. 8, August 2007 LETTER A Nove Learning Method for Eman Neura Networ Using Loca Search Facuty of Engineering, Toyama University, Gofuu 3190 Toyama

More information

Gauss Law. 2. Gauss s Law: connects charge and field 3. Applications of Gauss s Law

Gauss Law. 2. Gauss s Law: connects charge and field 3. Applications of Gauss s Law Gauss Law 1. Review on 1) Couomb s Law (charge and force) 2) Eectric Fied (fied and force) 2. Gauss s Law: connects charge and fied 3. Appications of Gauss s Law Couomb s Law and Eectric Fied Couomb s

More information

Group Sparse Precoding for Cloud-RAN with Multiple User Antennas

Group Sparse Precoding for Cloud-RAN with Multiple User Antennas entropy Artice Group Sparse Precoding for Coud-RAN with Mutipe User Antennas Zhiyang Liu ID, Yingxin Zhao * ID, Hong Wu and Shuxue Ding Tianjin Key Laboratory of Optoeectronic Sensor and Sensing Networ

More information

NEW DEVELOPMENT OF OPTIMAL COMPUTING BUDGET ALLOCATION FOR DISCRETE EVENT SIMULATION

NEW DEVELOPMENT OF OPTIMAL COMPUTING BUDGET ALLOCATION FOR DISCRETE EVENT SIMULATION NEW DEVELOPMENT OF OPTIMAL COMPUTING BUDGET ALLOCATION FOR DISCRETE EVENT SIMULATION Hsiao-Chang Chen Dept. of Systems Engineering University of Pennsyvania Phiadephia, PA 904-635, U.S.A. Chun-Hung Chen

More information

POLYNOMIAL OPTIMIZATION WITH APPLICATIONS TO STABILITY ANALYSIS AND CONTROL - ALTERNATIVES TO SUM OF SQUARES. Reza Kamyar. Matthew M.

POLYNOMIAL OPTIMIZATION WITH APPLICATIONS TO STABILITY ANALYSIS AND CONTROL - ALTERNATIVES TO SUM OF SQUARES. Reza Kamyar. Matthew M. Manuscript submitted to AIMS Journas Voume X, Number 0X, XX 200X doi:10.3934/xx.xx.xx.xx pp. X XX POLYNOMIAL OPTIMIZATION WITH APPLICATIONS TO STABILITY ANALYSIS AND CONTROL - ALTERNATIVES TO SUM OF SQUARES

More information

1 Heat Equation Dirichlet Boundary Conditions

1 Heat Equation Dirichlet Boundary Conditions Chapter 3 Heat Exampes in Rectanges Heat Equation Dirichet Boundary Conditions u t (x, t) = ku xx (x, t), < x (.) u(, t) =, u(, t) = u(x, ) = f(x). Separate Variabes Look for simpe soutions in the

More information

Mat 1501 lecture notes, penultimate installment

Mat 1501 lecture notes, penultimate installment Mat 1501 ecture notes, penutimate instament 1. bounded variation: functions of a singe variabe optiona) I beieve that we wi not actuay use the materia in this section the point is mainy to motivate the

More information

Homework #04 Answers and Hints (MATH4052 Partial Differential Equations)

Homework #04 Answers and Hints (MATH4052 Partial Differential Equations) Homework #4 Answers and Hints (MATH452 Partia Differentia Equations) Probem 1 (Page 89, Q2) Consider a meta rod ( < x < ), insuated aong its sides but not at its ends, which is initiay at temperature =

More information

How the backpropagation algorithm works Srikumar Ramalingam School of Computing University of Utah

How the backpropagation algorithm works Srikumar Ramalingam School of Computing University of Utah How the backpropagation agorithm works Srikumar Ramaingam Schoo of Computing University of Utah Reference Most of the sides are taken from the second chapter of the onine book by Michae Nieson: neuranetworksanddeepearning.com

More information

BP neural network-based sports performance prediction model applied research

BP neural network-based sports performance prediction model applied research Avaiabe onine www.jocpr.com Journa of Chemica and Pharmaceutica Research, 204, 6(7:93-936 Research Artice ISSN : 0975-7384 CODEN(USA : JCPRC5 BP neura networ-based sports performance prediction mode appied

More information

<C 2 2. λ 2 l. λ 1 l 1 < C 1

<C 2 2. λ 2 l. λ 1 l 1 < C 1 Teecommunication Network Contro and Management (EE E694) Prof. A. A. Lazar Notes for the ecture of 7/Feb/95 by Huayan Wang (this document was ast LaT E X-ed on May 9,995) Queueing Primer for Muticass Optima

More information

( ) is just a function of x, with

( ) is just a function of x, with II. MULTIVARIATE CALCULUS The first ecture covered functions where a singe input goes in, and a singe output comes out. Most economic appications aren t so simpe. In most cases, a number of variabes infuence

More information

Cryptanalysis of PKP: A New Approach

Cryptanalysis of PKP: A New Approach Cryptanaysis of PKP: A New Approach Éiane Jaumes and Antoine Joux DCSSI 18, rue du Dr. Zamenhoff F-92131 Issy-es-Mx Cedex France eiane.jaumes@wanadoo.fr Antoine.Joux@ens.fr Abstract. Quite recenty, in

More information

A BUNDLE METHOD FOR A CLASS OF BILEVEL NONSMOOTH CONVEX MINIMIZATION PROBLEMS

A BUNDLE METHOD FOR A CLASS OF BILEVEL NONSMOOTH CONVEX MINIMIZATION PROBLEMS SIAM J. OPTIM. Vo. 18, No. 1, pp. 242 259 c 2007 Society for Industria and Appied Mathematics A BUNDLE METHOD FOR A CLASS OF BILEVEL NONSMOOTH CONVEX MINIMIZATION PROBLEMS MIKHAIL V. SOLODOV Abstract.

More information

MA 201: Partial Differential Equations Lecture - 10

MA 201: Partial Differential Equations Lecture - 10 MA 201: Partia Differentia Equations Lecture - 10 Separation of Variabes, One dimensiona Wave Equation Initia Boundary Vaue Probem (IBVP) Reca: A physica probem governed by a PDE may contain both boundary

More information

A Distributed Newton Method for Network Utility Maximization, I: Algorithm

A Distributed Newton Method for Network Utility Maximization, I: Algorithm A Distributed Newton Method for Networ Utility Maximization, I: Algorithm Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie October 31, 2012 Abstract Most existing wors use dual decomposition and first-order

More information

Conditions for Saddle-Point Equilibria in Output-Feedback MPC with MHE

Conditions for Saddle-Point Equilibria in Output-Feedback MPC with MHE Conditions for Sadde-Point Equiibria in Output-Feedback MPC with MHE David A. Copp and João P. Hespanha Abstract A new method for soving output-feedback mode predictive contro (MPC) and moving horizon

More information

Absolute Value Preconditioning for Symmetric Indefinite Linear Systems

Absolute Value Preconditioning for Symmetric Indefinite Linear Systems MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.mer.com Absoute Vaue Preconditioning for Symmetric Indefinite Linear Systems Vecharynski, E.; Knyazev, A.V. TR2013-016 March 2013 Abstract We introduce

More information

Distributed Optimization With Local Domains: Applications in MPC and Network Flows

Distributed Optimization With Local Domains: Applications in MPC and Network Flows Distributed Optimization With Loca Domains: Appications in MPC and Network Fows João F. C. Mota, João M. F. Xavier, Pedro M. Q. Aguiar, and Markus Püsche Abstract In this paper we consider a network with

More information

NOISE-INDUCED STABILIZATION OF STOCHASTIC DIFFERENTIAL EQUATIONS

NOISE-INDUCED STABILIZATION OF STOCHASTIC DIFFERENTIAL EQUATIONS NOISE-INDUCED STABILIZATION OF STOCHASTIC DIFFERENTIAL EQUATIONS TONY ALLEN, EMILY GEBHARDT, AND ADAM KLUBALL 3 ADVISOR: DR. TIFFANY KOLBA 4 Abstract. The phenomenon of noise-induced stabiization occurs

More information

Efficiently Generating Random Bits from Finite State Markov Chains

Efficiently Generating Random Bits from Finite State Markov Chains 1 Efficienty Generating Random Bits from Finite State Markov Chains Hongchao Zhou and Jehoshua Bruck, Feow, IEEE Abstract The probem of random number generation from an uncorreated random source (of unknown

More information

Two Kinds of Parabolic Equation algorithms in the Computational Electromagnetics

Two Kinds of Parabolic Equation algorithms in the Computational Electromagnetics Avaiabe onine at www.sciencedirect.com Procedia Engineering 9 (0) 45 49 0 Internationa Workshop on Information and Eectronics Engineering (IWIEE) Two Kinds of Paraboic Equation agorithms in the Computationa

More information
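As a concrete illustration of the approximation described above, the sketch below computes an approximate Newton direction d = H^{-1} g from a matrix splitting H = D - B (D the diagonal of H) and a Neumann series truncated after N terms. This is a minimal centralized sketch of the series itself, not the paper's distributed ADD-N implementation; the function name and the choice of diagonal splitting are illustrative assumptions.

```python
import numpy as np

def neumann_newton_direction(H, g, N):
    """Approximate d = H^{-1} g via a Neumann series truncated after N terms.

    Uses the splitting H = D - B with D = diag(H), so that
    H^{-1} = sum_{k>=0} (D^{-1} B)^k D^{-1}; the sum is cut off at k = N.
    Converges when the spectral radius of D^{-1} B is below one
    (e.g., H strictly diagonally dominant).
    """
    d_diag = np.diag(H)
    B = np.diag(d_diag) - H          # off-diagonal part, so H = D - B
    D_inv = np.diag(1.0 / d_diag)    # inverse of the diagonal splitting
    M = D_inv @ B                    # iteration matrix D^{-1} B
    term = D_inv @ g                 # k = 0 term of the series
    d = term.copy()
    for _ in range(N):               # accumulate terms k = 1, ..., N
        term = M @ term
        d += term
    return d
```

Each extra term of the series corresponds to pulling in information from one additional hop in the network, which is the trade-off between N and convergence rate highlighted in the abstract.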