SUBMITTED

Distributed Optimal Power Flow using ALADIN

Alexander Engelmann, Yuning Jiang, Tillmann Mühlpfordt, Boris Houska and Timm Faulwasser

arXiv:1802.08603v1 [cs.DC] 23 Feb 2018

Abstract: The present paper applies the recently proposed Augmented Lagrangian Alternating Direction Inexact Newton (ALADIN) method to solve non-convex AC Optimal Power Flow (OPF) problems in a distributed fashion. In contrast to the often-used Alternating Direction of Multipliers Method (ADMM), ALADIN guarantees locally quadratic convergence for AC-OPF. Numerical results for IEEE 5- to 300-bus test cases indicate that ALADIN is able to outperform ADMM and to reduce the number of iterations by at least one order of magnitude; we compare ALADIN to numerical results for ADMM documented in the literature. The improved convergence speed comes at the cost of an increased communication effort per iteration. Therefore, we also propose a variant of ALADIN that uses inexact Hessians to reduce communication. Finally, we provide a detailed comparison of both variants of ALADIN to ADMM from an algorithmic and communication perspective.

Index Terms: Distributed Optimization, Optimal Power Flow, OPF, ALADIN, Alternating Direction of Multipliers Method, ADMM.

I. INTRODUCTION

OPTIMAL POWER FLOW (OPF) problems or variants thereof are employed in many power system contexts to ensure stable and economic system operation. In the presence of line congestions, for example, OPF problems are used to determine/re-dispatch generator set points. In the German power grid, the number of these re-dispatch events has increased drastically in recent years due to the increasing penetration of renewables, the phase-out of nuclear plants, and liberalized energy markets [7]. This trend illustrates the increasing importance of efficient and reliable OPF computation in daily grid operation.
Whereas in the past the distribution grid level was not considered in OPF computations, nowadays this may lead to problems, as renewable generation might cause violations of voltage limits and line congestions at the distribution grid level. Including the distribution grid in OPF problems under AC conditions can help to resolve this, yet doing so leads to increased problem sizes. These observations have triggered significant recent research activity on hierarchical, respectively distributed, algorithms for OPF; i.e. algorithms that split the overall problem into a number of smaller subproblems whose parallel solution may or may not be coordinated by a central entity [8].¹ Given the relevance of solving OPF problems, it is not surprising that there exists a multitude of results on distributed algorithms for OPF under AC conditions; we refer to [8] for a recent overview. In principle, one can distinguish three main lines of research: (i) (ad-hoc) application of algorithms tailored to convex Nonlinear Programs (NLPs), thus in general losing convergence properties [14, 15]; (ii) convex relaxation of OPF by either inner or outer approximation of the feasible set [11, 25]; and (iii) application of distributed algorithms tailored to non-convex NLPs [13, 18].

AE, TM and TF are with the Institute for Automation and Applied Computer Science, Karlsruhe Institute of Technology, Germany; {alexander.engelmann, tillmann.muehlpfordt, timm.faulwasser}@kit.edu. YJ and BH are with the School of Information Science and Technology, ShanghaiTech University, Shanghai, China; {borish, jiangyn}@shanghaitech.edu.cn. BH and TF acknowledge support of this joint research by the Deutsche Forschungsgemeinschaft, Grant WO 2056/4-1. TF acknowledges funding from the Baden-Württemberg Stiftung under the Elite Program for Postdocs. This work was also supported by the Helmholtz Association under the Joint Initiative "Energy System 2050 - A Contribution of the Research Field Energy". Manuscript received TBD.
The present paper advocates a novel method along line (iii). But before doing so, we concisely review existing results for items (i)-(iii). With respect to (i), the set of convex algorithms directly applied to OPF ranges from the Auxiliary Problem Principle [21, 23] and the Predictor-Corrector Proximal Multiplier Method [] to the popular Alternating Direction of Multipliers Method (ADMM) [14, ]. A number of recent works discuss ADMM in more detail and with different foci: exhaustive simulation-based convergence analysis [14], parameter update rules [15], and applicability to large-scale grids [16]. An advantage of all these methods is that they usually only exchange primal variables corresponding to linear consensus constraints between the subproblems. However, the convergence rate is at most linear [4, 6]. Recently, ADMM convergence results for problems with non-convex objective functions of a special form (consensus and sharing problems) have been presented [17]. However, so far it is unclear whether AC OPF fits into that form and, to the best of the authors' knowledge, there are no general convergence guarantees for AC OPF using ADMM. Another subbranch of (i) proposes Optimality Condition Decomposition (OCD), see [9, 10] and [,, 33], to solve AC-OPF problems. This method aims to solve the first-order necessary conditions, including the nonlinear coupling constraints, in a distributed fashion without any problem modification. In OCD, all subproblems receive primal and dual variables from neighboring regions and consider them as fixed parameters in each local optimization. In [10], a necessary condition for convergence to a first-order stationary point is discussed. However, to the best of our knowledge, it remains unclear whether this condition holds for arbitrary OPF problems [14]. Moreover, in [10], the convergence rate of this method is shown to be linear.
With respect to (ii), convex outer approximations of the feasible set via Semi-Definite Programming (SDP) are considered in [3, 11, 29, 34]; i.e. the OPF problem is mapped

¹ We remark that in the context of numerical optimization for OPF problems the notion of distributed algorithms is not unified: while in the optimization literature distributed algorithms entail a central coordinating entity [4], in the context of OPF such schemes are referred to as being hierarchical [8].

to a higher-dimensional space wherein it becomes convex whenever a specific rank constraint is dropped. This relaxed and inflated problem can be solved using the above-mentioned convex algorithms, thereby obtaining convergence guarantees. The crux of SDP relaxations of OPF problems is that the exactness of solutions (in terms of the original non-relaxed OPF problem) can so far only be guaranteed via structural assumptions: either on technical equipment, like small transformer resistances, or on the grid topology, e.g. radial grids [8, 24, 25, 26].

Finally, research line (iii) considers algorithms with certain convergence guarantees for non-convex problems. This includes approaches based on trust-region and alternating projection methods with convergence guarantees at a linear rate [18]. A distributed method based on interior-point methods is proposed in [27], where the authors (similar to works on optimal control [31, 35]) decompose certain steps of a centralized optimization method. Hence, the authors obtain, due to the equivalence to the corresponding centralized method, promising numerical results even for very large cases.

The present paper considers the recently proposed Augmented Lagrangian Alternating Direction Inexact Newton (ALADIN) method [19], which fully exploits the distributed nature of OPF problems. Similar to ADMM, ALADIN performs a sequence of local optimizations combined with a coordination step. This coordination step solves an equality-constrained Quadratic Program (QP) composed of local QP approximations in each iteration. Thereby, all computationally expensive operations (i.e. non-convex minimizations as well as function and derivative evaluations) are performed locally. Moreover, the coordination step is computationally cheap, as an equality-constrained QP amounts to solving a linear system of equations.
Motivated by the fact that ALADIN has locally quadratic convergence properties [19], we proposed its application to OPF in a preceding conference paper [13], presenting results for the IEEE 5-bus problem without line limits. The contribution of the present paper is a more detailed investigation of the prospect of ALADIN for OPF problems. We present results for ALADIN for a series of IEEE test systems ranging from 5 to 300 buses. For the sake of comparison, we explicitly compare to ADMM results presented in [15]. Moreover, we show how inexact Hessians can be used to reduce the communication effort of ALADIN.

The remainder of the paper is structured as follows: Section II states the AC-OPF problem. Section III recalls the AC-OPF problem in affinely coupled separable form and revisits the ALADIN algorithm. Extensive numerical case studies for ALADIN and ADMM are discussed in Section IV. Finally, Section V compares ALADIN and ADMM in terms of their convergence properties and their communication effort.

Notation: Subscripts (·)_{k,l} denote nodal variables, subscripts (·)_{i,j} denote local (regional) variables, and superscripts (·)^k indicate ALADIN iterates.

II. PROBLEM STATEMENT

A. AC Optimal Power Flow: Introduction

Consider an electrical grid at steady state described by the triple (N, G, Y), where N = {1, ..., N} is the bus set, G ⊆ N is the generator set, and Y = G + jB ∈ C^{N×N} is the bus admittance matrix. Neglecting shunts for simplicity,² the entries Y_kl = G_kl + jB_kl of the bus admittance matrix are given by

Y_kl = Σ_{m ∈ N\{k}} y_km  if k = l,   Y_kl = −y_kl  if k ≠ l,

where y_kl ∈ C is the admittance of the transmission line connecting buses k and l. One bus r ∈ N is specified as reference bus for the voltage angles.
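The admittance-matrix construction above can be sketched in a few lines. This is a minimal illustrative helper of our own (not part of the paper's CasADi/MATLAB implementation), assuming the grid is given as a list of line admittances:

```python
import numpy as np

def bus_admittance(n_bus, lines):
    """Bus admittance matrix Y = G + jB from line admittances y_kl,
    neglecting shunts: Y_kk sums the admittances of the lines incident
    to bus k, and Y_kl = -y_kl for k != l. Buses are 0-indexed here and
    `lines` holds (k, l, y_kl) triples."""
    Y = np.zeros((n_bus, n_bus), dtype=complex)
    for k, l, y in lines:
        Y[k, l] -= y          # off-diagonal entry
        Y[l, k] -= y
        Y[k, k] += y          # diagonal accumulates incident admittances
        Y[l, l] += y
    return Y

# hypothetical 3-bus example; the admittance values are illustrative only
Y = bus_admittance(3, [(0, 1, 1.0 - 5.0j), (1, 2, 2.0 - 8.0j)])
G, B = Y.real, Y.imag
```

Without shunts every row of Y sums to zero, which is a quick consistency check on the construction.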
The AC-OPF problem can be written as the following NLP:

min_{θ,v,p,q} Σ_{k ∈ G} c_{1,k} p_k² + c_{2,k} p_k + c_{3,k}   (1a)

subject to

Σ_{l ∈ N} v_k v_l (G_kl cos(θ_kl) + B_kl sin(θ_kl)) = p_k − p_k^d,  k ∈ N,
Σ_{l ∈ N} v_k v_l (G_kl sin(θ_kl) − B_kl cos(θ_kl)) = q_k − q_k^d,  k ∈ N,   (1b)
p̲_k ≤ p_k ≤ p̄_k,  q̲_k ≤ q_k ≤ q̄_k,  k ∈ G,   v̲_k ≤ v_k ≤ v̄_k,  k ∈ N,   (1c)
v_r = 1,  θ_r = 0,   (1d)

with c_{1,k} > 0 and θ_kl = θ_k − θ_l. In Problem (1), v_k denotes the voltage magnitude, θ_k the voltage angle, p_k and q_k the active and reactive power injections, and p_k^d and q_k^d the active and reactive power demands at bus k. Problem (1) minimizes the total generation cost subject to the power flow equations (1b), the generation and voltage bounds (1c), and the reference constraint (1d).

B. AC Optimal Power Flow: Separable Reformulation

For the sake of completeness, we recall the reformulation of the AC-OPF Problem (1) in affinely coupled separable form amenable to distributed optimization [13]. The IEEE 5-bus test case depicted in Fig. 1 serves as a tutorial example detailing our notation. Due to space limitations, we assume that auxiliary buses connecting two neighboring regions are already introduced and added to the bus set N, cf. [13] for details. By doing so, the bus set is enlarged and differs from the bus set used in the previous section. For example, the bus set of the IEEE 5-bus case shown in Fig. 1 grows from 5 to 13 buses when the auxiliary buses are included. The enlarged bus set N is partitioned into R = {1, ..., R} local bus sets N_i = {n_1^i, ..., n_{N_i}^i} ⊆ N such that ∪_{i ∈ R} N_i = N and, for all i, j ∈ R with i ≠ j, N_i ∩ N_j = ∅. Additionally, we introduce the set A ⊂ N × N of corresponding auxiliary buses that contains all auxiliary bus

² However, including shunts poses no significant challenge to the method proposed in the present paper, since shunts only change the numerical value of Y_kk, cf. [30].

pairs located at the borders between two neighboring regions.

Fig. 1. Decomposed IEEE 5-bus test case with three local bus sets N_1 = {1, 5, 6, 10, 12}, N_2 = {2, 3, 7, 8}, N_3 = {4, 9, 11, 13} (black), auxiliary bus pairs A = {(6, 7), (8, 9), (10, 11), (12, 13)} (green) and line limits depicted in red.

We assign nodal variables χ_k = [θ_k v_k p_k q_k]^⊤ ∈ R⁴ to all buses k ∈ N. All nodal variables χ_k from region i ∈ R are concatenated in local variables x_i = [χ_{n_1^i}^⊤, ..., χ_{n_{N_i}^i}^⊤]^⊤ ∈ R^{n_i} with n_i = 4N_i. Doing so, we formulate local objective functions f_i : R^{n_i} → R for all regions i ∈ R by

f_i(x_i) := Σ_{k ∈ G_i} c_{1,k} p_k² + c_{2,k} p_k + c_{3,k},

where G_i = N_i ∩ G are local generator sets. The power flow equations (1b) and the slack constraints (1d) are formulated as local nonlinear equality constraints h_i : R^{n_i} → R^{n_{h,i}}. Finally, we arrive at the affinely coupled separable reformulation of (1):

min_x Σ_{i ∈ R} f_i(x_i)   (2a)
s.t. Σ_{i ∈ R} A_i x_i = 0  | λ,   (2b)
     h_i(x_i) = 0  | κ_i,  i ∈ R,   (2c)
     x̲_i ≤ x_i ≤ x̄_i  | η_i,  i ∈ R.   (2d)

Here, x = [x_1^⊤, ..., x_R^⊤]^⊤ ∈ R^{n_x} stacks the local decision vectors x_i, and λ, κ_i, η_i denote the dual variables (multipliers) of the affine coupling constraints, the local equality constraints and the local bounds, respectively. At all auxiliary bus pairs (k, l) ∈ A, we enforce consensus of the corresponding auxiliary buses by

θ_k = θ_l,  v_k = v_l,  p_k = p_l,  q_k = q_l.   (3)

These equalities lead to the affine consensus constraint (2b). The box constraints (2d) collect the local bounds on active/reactive power injections and voltage magnitudes for all regions.

Algorithm 1: ALADIN-based Distributed OPF
Initialization: Initial guess (z⁰, λ⁰); choose Σ_i, ρ⁰, μ⁰, ɛ.
Repeat:
1) Parallelizable Step: Solve the local NLPs

x_i^k = argmin_{x_i ∈ [x̲_i, x̄_i]} f_i(x_i) + (λ^k)^⊤ A_i x_i + (ρ^k/2) ‖x_i − z_i^k‖²_{Σ_i}
        s.t. h_i(x_i) = 0  | κ_i^k.   (4)

2) Termination Criterion: If

‖Σ_{i ∈ R} A_i x_i^k‖ ≤ ɛ and ‖x^k − z^k‖ ≤ ɛ,   (5)

return x* = x^k.
3) Sensitivity Evaluations: Compute local gradients g_i^k = ∇f_i(x_i^k), Hessian approximations B_i^k ≈ ∇²{f_i(x_i^k) + (κ_i^k)^⊤ h_i(x_i^k)} and constraint Jacobians C_i^k = ∇h_i(x_i^k).
4) Consensus Step: Solve the coordination QP

min_{Δx,s} Σ_{i ∈ R} { (1/2) Δx_i^⊤ B_i^k Δx_i + (g_i^k)^⊤ Δx_i } + (λ^k)^⊤ s + (μ^k/2) ‖s‖²
s.t. Σ_{i ∈ R} A_i (x_i^k + Δx_i) = s  | λ^QP,   (6)
     C_i^k Δx_i = 0,  i ∈ R,
     (Δx_i)_j = 0,  j ∈ A_i^k,  i ∈ R,

obtaining Δx^k and λ^QP.
5) Line Search: Update the primal and dual variables by

z^{k+1} ← z^k + α_1^k (x^k − z^k) + α_2^k Δx^k,
λ^{k+1} ← λ^k + α_3^k (λ^QP − λ^k),

with α_1^k, α_2^k, α_3^k from [19]. If the full step is accepted, i.e. α_1^k = α_2^k = α_3^k = 1, update ρ^k and μ^k by

ρ^{k+1} (μ^{k+1}) = r_ρ ρ^k (r_μ μ^k)  if ρ^k < ρ̄ (μ^k < μ̄),  and ρ^k (μ^k) otherwise.

III. ALADIN-BASED DISTRIBUTED OPF

Now, we describe a variant of ALADIN for solving Problem (2) in a distributed fashion. As summarized in Algorithm 1, ALADIN consists of five main steps: 1) Solve the decoupled NLPs (4) in parallel. 2) If the solution satisfies the termination criterion (5) (ɛ is chosen by the user), the iteration terminates with the solution from the local NLPs, x* = x^k. 3) If not, we compute gradients g_i^k, Hessian approximations B_i^k and constraint Jacobians C_i^k. Note that these computations are fully parallelizable.³ 4) We construct the consensus QP (6) based on the local sensitivities and the active sets A_i^k = { j | (x_i^k)_j = (x̲_i)_j or (x̄_i)_j }

³ In case of derivative-based solvers, the sensitivities can be obtained from the local solvers for (4), avoiding explicit evaluation.

Fig. 2. Convergence of ALADIN (solid) and ADMM (dashed) for the 5-bus system with and without considering line limits.

detected by the local NLPs. Note that there are no inequality constraints in the QP (6). Thus, solving this problem is equivalent to solving a linear system of equations, a computationally cheap numerical operation [32].⁴ 5) Apply the globalization strategy proposed in [19] to update x and λ. We remark that in practice full steps are often accepted and the line search can be omitted. Finally, the parameters ρ and μ are updated.

Remark 1 (Inexact Hessians): Instead of computing exact Hessians in Step 3), one can also use approximation techniques for B_i^k based on previous gradient evaluations. Here we use the blockwise and damped Broyden-Fletcher-Goldfarb-Shanno (BFGS) update. In contrast to standard BFGS, the damped version ensures positive definiteness of the B_i^k to preserve the convergence properties of ALADIN, cf. [19, 32]. The BFGS update formula is defined by

B_i^{k+1} = B_i^k − (B_i^k s_i^k (s_i^k)^⊤ B_i^k) / ((s_i^k)^⊤ B_i^k s_i^k) + (r_i^k (r_i^k)^⊤) / ((s_i^k)^⊤ r_i^k),   (7)

with s_i^k = x_i^{k+1} − x_i^k and r_i^k = θ^k (g_i^{k+1}(λ^{k+1}) − g_i^k(λ^{k+1})) + (1 − θ^k) B_i^k s_i^k, where g_i^k(λ) = ρ^k Σ_i (z_i^k − x_i^k) − A_i^⊤ λ are the gradients of the Lagrangians [32]. The damping parameter θ^k is computed by the update rule given in [32, p. 537]. We note that employing BFGS reduces the need for communication within ALADIN: instead of the Hessian matrix, it suffices to communicate the gradient of the Lagrangian and to perform the BFGS step at the coordination entity that handles the QP (6).

IV. NUMERICAL RESULTS

The presentation of our results is divided into three parts: First, we show considerable performance differences of ALADIN and ADMM for a motivating 5-bus example depicted in Fig. 1. Second, we show that ALADIN performs well for larger grids (30-bus, 57-bus), also when inexact Hessians are used.
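The damped BFGS update (7) behind these inexact Hessians can be sketched generically. The snippet below is our own illustration of a Powell-damped BFGS step under the standard damping rule from [32]; the function name and test data are ours, not the paper's code:

```python
import numpy as np

def damped_bfgs_update(B, s, r_raw):
    """One damped BFGS step: B is the current Hessian approximation,
    s the iterate difference, r_raw the raw gradient difference of the
    Lagrangian. Damping blends r_raw with B s so that s'r stays
    positive and the update remains positive definite."""
    Bs = B @ s
    sBs = float(s @ Bs)
    sy = float(s @ r_raw)
    # damping parameter theta, standard rule as in [32, p. 537]
    theta = 1.0 if sy >= 0.2 * sBs else 0.8 * sBs / (sBs - sy)
    r = theta * r_raw + (1.0 - theta) * Bs   # damped gradient difference
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / float(s @ r)
```

With theta = 1 this reduces to the plain BFGS formula; the damped branch only activates when the curvature condition fails.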
⁴ ALADIN requires the Hessian approximations B_i^k to be positive definite to ensure convergence [19]. To ensure positive definiteness, we toggle the sign of all negative eigenvalues of all B_i^k (regularization).

Third, we apply ALADIN to the 118- and 300-bus test cases, and compare our results to variants of ADMM published in the literature. Subsequently, all units are given in p.u. with respect to the base power of 100 MVA. The following quantities are computed a posteriori in order to visualize the convergence behavior of Algorithm 1 (where the superscript k denotes the iteration index):
- The consensus violation ‖Ax^k‖ with A = [A_1, ..., A_R], indicating to which extent the voltages and powers at the auxiliary buses match.
- The distance to the minimizer ‖x^k − x*‖, indicating how far the current power/voltage iterates are from their optimal values. Thereby, x* is the true minimizer obtained by solving (2) centrally.
- The inf-norm ‖r^k‖_∞ of the dual residual r^k = {∇f_i(x_i^k) + A_i^⊤ λ^k + ∇h_i(x_i^k) κ_i^k + η_i^k}_{i ∈ R}, measuring the violation of the first-order optimality conditions.
- The cost suboptimality |f^k − f*|, illustrating performance losses.
We initialize the primal guess for the voltage magnitudes to 1 p.u.; all other values are initially set to zero (flat start). All dual variables needed in ALADIN or ADMM are initialized with zero. We compare ALADIN and ADMM in terms of the number of iterations and the communication effort.⁵ Our implementation relies on the CasADi toolbox [1] running with MATLAB R2016a and IPOPT [36] as solver for the local NLPs. The true minimizers x* are obtained by solving Problem (2) centrally with IPOPT.

A. 5-bus with line congestions

First, we consider the IEEE 5-bus case including line limits as shown in Fig. 1.
We partition the grid into three regions in a way such that the problem is expected to be hard for distributed

⁵ We omit investigating computation times since they are highly dependent on non-algorithm-specific factors like the programming language, code structure, compiler, and executing machine.

Fig. 3. Convergence behavior of ALADIN for the IEEE 30-bus and 57-bus test cases using exact Hessians (solid) and inexact Hessians (dashed, here BFGS).

algorithms like ALADIN or ADMM. Specifically, there is a generation center in the west containing cheap generators and no consumers. Hence, all power has to be transferred to the consumption centers in the east, which results in a high power exchange between the regions. Moreover, line constraints are active at the transmission lines (1, 2) and (4, 5) connecting west and east. Several works using ADMM for OPF do not consider line congestions [14, 15]. To the best of the authors' knowledge, the work [16] is one of the few that explicitly considers line constraints. In the following, line constraints are considered as limits on the magnitude of the apparent power,

p_kl² + q_kl² ≤ (s̄_kl)²,   (8)

where

p_kl = −v_k² G_kl + v_k v_l (G_kl cos(θ_k − θ_l) + B_kl sin(θ_k − θ_l)),
q_kl = v_k² B_kl − v_k v_l (B_kl cos(θ_k − θ_l) − G_kl sin(θ_k − θ_l)).

Due to the non-convexity of these constraints, they are difficult to handle, especially when they are located at lines connecting regions. Note that bounds on the apparent power (8) can also be expressed in the framework of (2d) by introducing additional decision variables. The values of the tuning parameters for ALADIN are summarized in Table III in the Appendix. We choose the weighting matrices Σ_i in Algorithm 1 as diagonal matrices to obtain similar scaling in the consensus variables. The entries for the voltage magnitudes and the voltage angles are chosen as 100, the entries for the power injections are set to 1. Fig. 2 shows the convergence behavior of ALADIN and ADMM for the IEEE 5-bus case in two settings: in the first setting, line limits are neglected, while in the second there are line congestions at the lines (1, 2) and (4, 5) with apparent power limitations of 4 MVA and 18 MVA, respectively, cf. Fig. 1.
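The line-flow limit (8) can be checked pointwise. The helper below is our own sketch (not the paper's implementation); the signs of p_kl and q_kl are chosen so that, summed over all neighbors, the flows reproduce the nodal injections in (1b):

```python
import numpy as np

def apparent_power_ok(vk, vl, theta_kl, Gkl, Bkl, s_max):
    """Evaluate the branch flows entering (8) from the admittance-matrix
    entries G_kl, B_kl and test p_kl^2 + q_kl^2 <= s_max^2."""
    p_kl = -vk**2 * Gkl + vk * vl * (Gkl * np.cos(theta_kl)
                                     + Bkl * np.sin(theta_kl))
    q_kl = vk**2 * Bkl - vk * vl * (Bkl * np.cos(theta_kl)
                                    - Gkl * np.sin(theta_kl))
    return p_kl, q_kl, p_kl**2 + q_kl**2 <= s_max**2
```

A quick sanity check: with equal voltage magnitudes and zero angle difference, both flows vanish, so no limit can be violated.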
To foster a fair comparison, the penalty parameters ρ for ADMM are chosen based on parameter sweeps aiming for fast convergence. Without line congestions, ADMM converges slower than ALADIN. However, with slight abuse of optimality and consensus, applicable solutions can be obtained within around 5 iterations, assuming that underlying frequency controllers account for the remaining power mismatch. In case of active line limits, ADMM needs at least 15 iterations even for approximate solutions, whereas ALADIN converges within 5 iterations to very high accuracy. This indicates the potential of ALADIN to outperform ADMM, especially in case of active line limits. While ALADIN converges to high accuracies with a seemingly quadratic rate, see Fig. 2, similar statements cannot be made for ADMM.

B. 30-bus and 57-bus with inexact Hessians

Next, we compare the performance of ALADIN with exact Hessians to ALADIN with inexact Hessians for larger grids. Specifically, we use approximations based on the BFGS formula (7). Inexact Hessians reduce the per-step communication effort, which is advantageous. The employed grid partitionings for the considered IEEE 30- and 57-bus test cases are taken from [14] and listed in Table IV in the Appendix for self-containment. To improve numerical convergence, we add a quadratic regularization for the reactive power injections to the local objective functions,

f̃_i(x_i) = f_i(x_i) + γ Σ_{k ∈ N_i} q_k²,

with γ nonnegative, in the rest of the paper. This regularization follows the technical motivation to keep the reactive power injections small. We choose γ = 1 $/hr (p.u.)², which is around 1 % of the quadratic coefficient c_{1,k} of the active power injections. Fig. 3 depicts the convergence behavior of ALADIN with exact and inexact Hessians.⁶ In either case, ALADIN converges

⁶ The centralized minimizer x* is computed here including the regularization in the objective of (2).

Fig. 4. Convergence behavior of ALADIN for the IEEE 118- and 300-bus test cases.

in less than 40 iterations to high accuracy (at least 10⁻⁴ for all convergence criteria). Furthermore, Fig. 3 shows that ALADIN with inexact Hessians needs just slightly more iterations than ALADIN with exact Hessians. One can observe that the convergence rate of ALADIN with inexact Hessians seems to be faster than linear. This observation is consistent with the provably superlinear convergence of other methods using BFGS [32].

C. 118-bus and 300-bus: ALADIN vs. ADMM

For the IEEE 118-bus and 300-bus test cases, we compare our ALADIN results (with exact Hessians) to ADMM results documented in the literature [14, 15]. Supposing that the authors thereof chose the parameters and update rules carefully to ensure fast convergence, this provides a useful comparison. To facilitate comparability, we adopt the grid partitioning from [14] for the 118-bus case. Unfortunately, the partitioning for the 300-bus case is not given in [14], so we choose our own partitioning, listed in Table IV. Using ALADIN, we obtain the numerical results for the 118-bus and 300-bus cases shown in Fig. 4. In either case, ALADIN shows fast convergence to a high level of accuracy for all convergence criteria. In [14, 15], the main convergence criterion is taken to be the infinity norm of the primal gap, ‖Ax^k‖_∞ < ɛ. Adopting this criterion allows a direct comparison between ALADIN and the ADMM results from [14, 15]; for ɛ = 10⁻⁴ the results are summarized in Table I.⁷ ALADIN converges around one order of magnitude faster, while much higher accuracies in optimality are obtained.

⁷ We remark that primal feasibility does not ensure convergence to a minimizer, cf. [6] for an ADMM-specific discussion. This lack of optimality guarantees can be observed in the numerical results in [14, 15]. However, in practice, small optimality gaps are often accepted.
Nonetheless, one has to be aware that using ‖Ax^k‖_∞ ≤ ɛ does not imply convergence of the reactive power injections to their optimal values, since the sensitivity of the objective function value with respect to the reactive power injections is much smaller than the sensitivity with respect to the active power injections. This can be verified by comparing the dual variables for the active and reactive power injections, cf. [5, Chap. 3.2.3].

TABLE I
COMPARISON OF TUNED ADMM FROM [15] AND ALADIN, EMPLOYING ‖Ax^k‖_∞ ≤ 10⁻⁴ AS CONVERGENCE CRITERION ONLY
(columns: test case size N; number of iterations and suboptimality f − f* in % for ADMM [14, 15] and for ALADIN)

V. DISCUSSION: ALADIN VS. ADMM

Our numerical results from Section IV using ALADIN seem promising. However, compared to ADMM, there is an increased per-step communication effort when employing ALADIN. Thus, we now turn towards discussing how to trade off convergence behavior and guarantees against per-step communication effort.

1) Convergence Properties: ADMM and ALADIN exhibit differences in their known convergence guarantees. For ADMM, in case of strictly convex problems, a linear convergence rate can be achieved under rather mild assumptions like Lipschitz continuity of the gradient and regularity assumptions on the affine constraints [12]. In case of convex problems, sublinear convergence is achieved [12]. For the non-convex case, convergence can only be guaranteed for special problem classes, and, to the best of the authors' knowledge, it is not clear whether AC OPF belongs to them [17]. However, this does not mean that ADMM does not work for non-convex OPF. Yet, one has to be aware that ADMM does not necessarily converge to a local minimizer, or converge at all. Nevertheless, in practice, ADMM works well in many cases but often exhibits slow practical convergence rates, especially if high accuracies are needed [6]. This is in accordance with the simulation results from Section IV.
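The difference between the rates can be made tangible even on a toy problem. The snippet below is our own minimal ALADIN-style iteration (full steps, exact Hessians, no inequality or nonlinear constraints) on a hypothetical two-agent consensus problem, not the paper's OPF implementation:

```python
import numpy as np

# min (x1 - a1)^2 + (x2 - a2)^2  s.t.  x1 - x2 = 0, i.e. A = [1, -1];
# the minimizer is x1 = x2 = (a1 + a2)/2 with consensus multiplier a1 - a2.
a = np.array([1.0, 3.0])
A = np.array([1.0, -1.0])
rho, mu = 10.0, 1e3
z, lam = np.zeros(2), 0.0
for _ in range(20):
    # 1) parallelizable local steps (closed form for these quadratics)
    x = (2 * a - lam * A + rho * z) / (2 + rho)
    # 3) local sensitivities: gradients and exact Hessians (here constant)
    g, B = 2 * (x - a), np.array([2.0, 2.0])
    # 4) coordination QP, solved via its (here scalar) KKT system
    lam_qp = (A @ x - A @ (g / B) + lam / mu) / (A @ (A / B) + 1.0 / mu)
    dx = -(g + lam_qp * A) / B
    # 5) full step (alpha_1 = alpha_2 = alpha_3 = 1)
    z, lam = x + dx, lam_qp
```

Since the local problems are quadratic, the Newton-type coordination step is exact and the iteration contracts very quickly, whereas a plain ADMM loop on the same problem improves the error only by a constant factor per iteration.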

For ALADIN, convergence can be guaranteed without relying on convexity assumptions on the objective or the constraints [19, Thm. 2]. Only mild assumptions w.r.t. the penalty parameter as well as Lipschitz continuity are required. In case of inexact Hessians, superlinear convergence, and in case of exact Hessians even quadratic convergence can be achieved [19, Sec. 7]. This comes at the cost of an increased per-step communication and the need for a central coordinating entity that has to solve the coupling QP.

2) Communication Effort: The main conceptual difference between ALADIN and ADMM is that ALADIN uses second-order information, whereas ADMM only communicates local primal solutions. More specifically, ALADIN relies on communicating local sensitivities, i.e. g_i, B_i, C_i, A_i for all regions i ∈ R, cf. Step 3) of Algorithm 1. The gradients g_i are of dimension n_i, the (symmetric) Hessians B_i of dimension n_i × n_i, and the Jacobians of the power flow equations collected in C_i are of dimension (n_i/2) × n_i. Note that n_i = 4N_i, where N_i is the number of buses in region i, cf. Section II-B. Additionally, the vector of binaries indicating the active bounds A_i, which is of dimension 3n_i/4, has to be communicated (bounds on the power injections and voltage magnitudes). Hence, the total forward communication need for ALADIN comprises n_i + n_i(n_i+1)/2 + n_i²/2 floats and 3n_i/4 binaries. The block BFGS update described in Section III reduces the total communication need as follows. Instead of having to communicate the Hessians B_i, which lead to the quadratic term n_i(n_i+1)/2, BFGS requires communicating only the n_i-dimensional gradients of the Lagrangian. Hence, the reduced forward communication need becomes n_i + n_i + n_i²/2 floats and 3n_i/4 binaries. After solving the QP (6), the primal and dual steps for the consensus constraint are broadcast to the subproblems.
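The float counts above can be tabulated per region. The helper below is our own (names are ours), following the dimensions stated in the text, n_i = 4N_i local variables and n_i/2 power flow equations per region:

```python
def forward_comm_floats(n_i):
    """Per-iteration forward communication in floats for a region with
    n_i local variables: gradient (n_i) + symmetric Hessian
    (n_i(n_i+1)/2) + Jacobian ((n_i/2) x n_i) for plain ALADIN; the
    BFGS variant replaces the Hessian by one extra n_i-vector; ADMM
    sends only the local minimizer."""
    aladin = n_i + n_i * (n_i + 1) // 2 + n_i**2 // 2   # = n_i(2n_i+3)/2
    bfgs = 2 * n_i + n_i**2 // 2                        # = n_i(n_i+4)/2
    admm = n_i
    return aladin, bfgs, admm
```

For a region with 10 buses (n_i = 40), the quadratic Hessian term dominates, which is exactly what the BFGS variant removes.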
The number of consensus constraints should typically be smaller than the number of decision variables, since otherwise the original problem might be infeasible. Hence, the number of Lagrange multipliers is upper bounded by n_i, and we obtain an upper bound for the backward communication effort of n_i + n_i = 2n_i floats. For ADMM, only the minimizers of the local problems have to be communicated in both directions. Hence, we obtain equal forward and backward communication needs of n_i floats each.

Remark 2 (Floats vs. Binaries): Observe that communicating a binary value is much cheaper than communicating a float (1 bit vs. 32 or 64 bits). Hence, counting only the floats is usually sufficient to approximately determine the communication effort.

Finally, Table II summarizes the results of this section, concisely comparing convergence properties, convergence speed and communication effort for ADMM and both ALADIN variants.

TABLE II
COMPARISON OF ALGORITHMIC PROPERTIES

                        ADMM      ALADIN          ALADIN (BFGS)
Conv. guarantee         no        yes             yes
Exp. conv. rate         (linear)  quadratic       superlinear
Forw. comm. (floats)    n_i       n_i(2n_i+3)/2   n_i(n_i+4)/2
Backw. comm. (floats)   n_i       2n_i            2n_i

VI. CONCLUSION & OUTLOOK

This paper has investigated the application of the Augmented Lagrangian Alternating Direction Inexact Newton (ALADIN) method to distributed AC-OPF problems. The presented numerical results for grids of different sizes underpin the potential of ALADIN for AC-OPF and experimentally validate its theoretical properties, such as a quadratic local convergence rate and convergence guarantees. Compared to ADMM, ALADIN is able to reduce the number of iterations by at least one order of magnitude. This comes at the cost of an increased per-step communication effort, which can be reduced by virtue of using inexact Hessians obtained via BFGS updates. By doing so, we slightly increase the needed number of iterations, but ALADIN remains faster and more accurate than ADMM.
While the present paper focused primarily on comparing ALADIN with ADMM, a detailed comparison with other distributed schemes would be of interest. Moreover, future work will consider multi-stage OPF problems including storage and generator ramps. From an algorithmic and communication point of view, it would be interesting to reduce the communication effort even further, e.g. by formulating the coordination QP in the coupling variables only.

APPENDIX

TABLE III
PARAMETERIZATION OF ALADIN FOR THE SHOWN IEEE TEST CASES
(columns: N, ρ⁰, ρ̄, r_ρ, μ⁰, μ̄, r_μ, γ [$/hr (p.u.)²], N_i)

TABLE IV
GRID PARTITIONING (EXCLUDING AUXILIARY BUSES)

5: {1, 5}, {2, 3}, {4}
30: {1-8, 28}, {9-11, 17, 21, 22}, {24-27, 29, 30}, {12-16, 18-20, 23}
57: {4 6, 3 33}, {1, 1, 16, 17, 51}, {8, 9, 11, 41-43, 55-57}, {13, 14, 46 5}, {34-37, 39, 4}, {7, 7 9, 5 54}, {19 3, 38, 44}, {1 6, 15, 18, 45}
118: {1 3, , 117}, {33-67}, {68-81, 116, 118}, {8, 11}
300: {1-100}, {101-200}, {201-300}

REFERENCES

[1] J. Andersson. A General-Purpose Software Framework for Dynamic Optimization. PhD thesis, Department of Electrical Engineering (ESAT/SCD) and Optimization in Engineering Center, KU Leuven, 2013.
[2] M. Arnold, S. Knopfli, and G. Andersson. Improvement of OPF Decomposition Methods Applied to Multi-Area Power Systems. In: 2007 IEEE Lausanne Power Tech.
[3] X. Bai, H. Wei, K. Fujisawa, and Y. Wang. Semidefinite programming for optimal power flow problems. In: International Journal of Electrical Power & Energy Systems 30.6 (2008).
[4] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice Hall, Englewood Cliffs, NJ, 1989.
[5] D. Bertsekas. Nonlinear Programming. Athena Scientific optimization and computation series. Athena Scientific, Belmont, MA.
[6] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. In: Foundations and Trends in Machine Learning 3.1 (2011), pp. 1–122.
[7] Bundesnetzagentur (Federal German Grid Authority). Quartalsbericht zu Netz- und Systemsicherheitsmaßnahmen, Erstes Quartal 2017. Tech. rep., 2017. url: https://www.bundesnetzagentur.de/SharedDocs/Downloads/DE/Allgemeines/Bundesnetzagentur/Publikationen/Berichte/2017/Quartalsbericht Q1 2017.pdf
[8] K. Christakou, D.-C. Tomozei, J.-Y. Le Boudec, and M. Paolone. AC OPF in radial distribution networks — Part I: On the limits of the branch flow convexification and the alternating direction method of multipliers. In: Electric Power Systems Research 143 (2017).
[9] A. J. Conejo, E. Castillo, R. Minguez, and R. Garcia-Bertrand. Decomposition Techniques in Mathematical Programming: Engineering and Science Applications. Springer, Berlin/Heidelberg, 2006.
[10] A. J. Conejo, F. J. Nogales, and F. J. Prieto. A decomposition procedure based on approximate Newton directions. In: Mathematical Programming 93.3 (2002).
[11] E. Dall'Anese, H. Zhu, and G. B. Giannakis. Distributed Optimal Power Flow for Smart Microgrids. In: IEEE Transactions on Smart Grid 4.3 (2013).
[12] W. Deng and W. Yin. On the global and linear convergence of the generalized alternating direction method of multipliers. In: Journal of Scientific Computing 66.3 (2016).
[13] A. Engelmann, T. Mühlpfordt, Y. Jiang, B. Houska, and T. Faulwasser. Distributed AC Optimal Power Flow using ALADIN. In: IFAC-PapersOnLine 50.1 (2017), 20th IFAC World Congress.
[14] T. Erseghe. Distributed Optimal Power Flow Using ADMM. In: IEEE Transactions on Power Systems 29.5 (2014).
[15] T. Erseghe. A distributed approach to the OPF problem. In: EURASIP Journal on Advances in Signal Processing 2015.1 (2015), p. 45.
[16] J. Guo, G. Hug, and O. K. Tonguz. A Case for Nonconvex Distributed Optimization in Large-Scale Power Systems. In: IEEE Transactions on Power Systems 32.5 (2017).
[17] M. Hong, Z.-Q. Luo, and M. Razaviyayn. Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems. In: SIAM Journal on Optimization 26.1 (2016).
[18] J.-H. Hours and C. N. Jones. An Alternating Trust Region Algorithm for Distributed Linearly Constrained Nonlinear Programs, Application to the Optimal Power Flow Problem. In: Journal of Optimization Theory and Applications (2017).
[19] B. Houska, J. Frasch, and M. Diehl. An augmented Lagrangian based algorithm for distributed nonconvex optimization. In: SIAM Journal on Optimization 26.2 (2016), pp. 1101–1127.
[20] G. Hug-Glanzmann and G. Andersson. Decentralized optimal power flow control for overlapping areas in power systems. In: IEEE Transactions on Power Systems 24.1 (2009).
[21] D. Hur, J. K. Park, and B. H. Kim. Evaluation of convergence rate in the auxiliary problem principle for distributed optimal power flow. In: IEE Proceedings — Generation, Transmission and Distribution (2002).
[22] B. H. Kim and R. Baldick. A comparison of distributed optimal power flow algorithms. In: IEEE Transactions on Power Systems 15.2 (2000).
[23] B. H. Kim and R. Baldick. Coarse-grained distributed optimal power flow. In: IEEE Transactions on Power Systems 12.2 (1997).
[24] J. Lavaei and S. H. Low. Zero Duality Gap in Optimal Power Flow Problem. In: IEEE Transactions on Power Systems 27.1 (2012).
[25] S. H. Low. Convex Relaxation of Optimal Power Flow — Part I: Formulations and Equivalence. In: IEEE Transactions on Control of Network Systems 1.1 (2014).
[26] S. H. Low. Convex Relaxation of Optimal Power Flow — Part II: Exactness. In: IEEE Transactions on Control of Network Systems 1.2 (2014).
[27] W. Lu, M. Liu, S. Lin, and L. Li. Fully Decentralized Optimal Power Flow of Multi-Area Interconnected Power Systems Based on Distributed Interior Point Method. In: IEEE Transactions on Power Systems 33.1 (2018).
[28] D. K. Molzahn, F. Dörfler, H. Sandberg, S. H. Low, S. Chakrabarti, R. Baldick, and J. Lavaei. A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems. In: IEEE Transactions on Smart Grid 8.6 (2017).
[29] D. K. Molzahn, J. T. Holzer, B. C. Lesieutre, and C. L. DeMarco. Implementation of a Large-Scale Optimal Power Flow Solver Based on Semidefinite Programming. In: IEEE Transactions on Power Systems 28.4 (2013).
[30] A. Monticelli. State Estimation in Electric Power Systems: A Generalized Approach. Springer, 1999.
[31] I. Necoara, C. Savorgnan, D. Q. Tran, J. Suykens, and M. Diehl. Distributed nonlinear optimal control using sequential convex programming and smoothing techniques. In: Proceedings of the 48th IEEE Conference on Decision and Control, held jointly with the 28th Chinese Control Conference (CDC/CCC), 2009.
[32] J. Nocedal and S. Wright. Numerical Optimization. Springer, New York, 2006.
[33] F. J. Nogales, F. J. Prieto, and A. J. Conejo. A decomposition methodology applied to the multi-area optimal power flow problem. In: Annals of Operations Research (2003).
[34] Q. Peng and S. H. Low. Distributed Optimal Power Flow Algorithm for Radial Networks, I: Balanced Single Phase Case. In: IEEE Transactions on Smart Grid PP.99 (2017).
[35] Q. Tran-Dinh, I. Necoara, C. Savorgnan, and M. Diehl. An inexact perturbed path-following method for Lagrangian decomposition in large-scale separable convex optimization. In: SIAM Journal on Optimization 23.1 (2013).
[36] A. Wächter and L. T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. In: Mathematical Programming 106.1 (2006), pp. 25–57.


More information

Lecture 11 and 12: Penalty methods and augmented Lagrangian methods for nonlinear programming

Lecture 11 and 12: Penalty methods and augmented Lagrangian methods for nonlinear programming Lecture 11 and 12: Penalty methods and augmented Lagrangian methods for nonlinear programming Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lecture 11 and

More information

Accelerated primal-dual methods for linearly constrained convex problems

Accelerated primal-dual methods for linearly constrained convex problems Accelerated primal-dual methods for linearly constrained convex problems Yangyang Xu SIAM Conference on Optimization May 24, 2017 1 / 23 Accelerated proximal gradient For convex composite problem: minimize

More information

A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties

A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties Xinwei Liu and Yaxiang Yuan Abstract. We present a null-space primal-dual interior-point algorithm

More information

Convex Relaxations for Optimization of AC and HVDC Grids under Uncertainty

Convex Relaxations for Optimization of AC and HVDC Grids under Uncertainty P L Power Systems Laboratory Center for Electric Power and Energy Department of Electrical Engineering Andreas Horst Venzke Convex Relaxations for Optimization of AC and HVDC Grids under Uncertainty Master

More information

PowerApps Optimal Power Flow Formulation

PowerApps Optimal Power Flow Formulation PowerApps Optimal Power Flow Formulation Page1 Table of Contents 1 OPF Problem Statement... 3 1.1 Vector u... 3 1.1.1 Costs Associated with Vector [u] for Economic Dispatch... 4 1.1.2 Costs Associated

More information

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL)

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL) Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization Nick Gould (RAL) x IR n f(x) subject to c(x) = Part C course on continuoue optimization CONSTRAINED MINIMIZATION x

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY Uplink Downlink Duality Via Minimax Duality. Wei Yu, Member, IEEE (1) (2)

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY Uplink Downlink Duality Via Minimax Duality. Wei Yu, Member, IEEE (1) (2) IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY 2006 361 Uplink Downlink Duality Via Minimax Duality Wei Yu, Member, IEEE Abstract The sum capacity of a Gaussian vector broadcast channel

More information

THE restructuring of the power industry has lead to

THE restructuring of the power industry has lead to GLOBALLY CONVERGENT OPTIMAL POWER FLOW USING COMPLEMENTARITY FUNCTIONS AND TRUST REGION METHODS Geraldo L. Torres Universidade Federal de Pernambuco Recife, Brazil gltorres@ieee.org Abstract - As power

More information

CHAPTER 2: QUADRATIC PROGRAMMING

CHAPTER 2: QUADRATIC PROGRAMMING CHAPTER 2: QUADRATIC PROGRAMMING Overview Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and linear constraints. In this sense,

More information

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Zhaosong Lu October 5, 2012 (Revised: June 3, 2013; September 17, 2013) Abstract In this paper we study

More information

Preconditioning via Diagonal Scaling

Preconditioning via Diagonal Scaling Preconditioning via Diagonal Scaling Reza Takapoui Hamid Javadi June 4, 2014 1 Introduction Interior point methods solve small to medium sized problems to high accuracy in a reasonable amount of time.

More information

2 Regularized Image Reconstruction for Compressive Imaging and Beyond

2 Regularized Image Reconstruction for Compressive Imaging and Beyond EE 367 / CS 448I Computational Imaging and Display Notes: Compressive Imaging and Regularized Image Reconstruction (lecture ) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement

More information

What s New in Active-Set Methods for Nonlinear Optimization?

What s New in Active-Set Methods for Nonlinear Optimization? What s New in Active-Set Methods for Nonlinear Optimization? Philip E. Gill Advances in Numerical Computation, Manchester University, July 5, 2011 A Workshop in Honor of Sven Hammarling UCSD Center for

More information

PDE-Constrained and Nonsmooth Optimization

PDE-Constrained and Nonsmooth Optimization Frank E. Curtis October 1, 2009 Outline PDE-Constrained Optimization Introduction Newton s method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP)

More information

Optimal Load Shedding in Electric Power Grids

Optimal Load Shedding in Electric Power Grids ARGONNE NATIONAL LABORATORY 9700 South Cass Avenue Argonne, Illinois 60439 Optimal Load Shedding in Electric Power Grids Fu Lin Chen Chen Mathematics Computer Science Division Preprint ANL/MCS-P5408-0915

More information

Numerical optimization

Numerical optimization Numerical optimization Lecture 4 Alexander & Michael Bronstein tosca.cs.technion.ac.il/book Numerical geometry of non-rigid shapes Stanford University, Winter 2009 2 Longest Slowest Shortest Minimal Maximal

More information

Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

Decentralized Quadratically Approximated Alternating Direction Method of Multipliers Decentralized Quadratically Approimated Alternating Direction Method of Multipliers Aryan Mokhtari Wei Shi Qing Ling Alejandro Ribeiro Department of Electrical and Systems Engineering, University of Pennsylvania

More information

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST) Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual

More information