ALADIN: An Algorithm for Distributed Non-Convex Optimization and Control

1 ALADIN: An Algorithm for Distributed Non-Convex Optimization and Control
Boris Houska, Yuning Jiang, Janick Frasch, Rien Quirynen, Dimitris Kouzoupis, Moritz Diehl
ShanghaiTech University, University of Magdeburg, University of Freiburg

2 Motivation: sensor network localization
Decoupled case: each sensor takes a measurement $\eta_i$ of its position $\chi_i$ and solves
$\min_{\chi_i} \tfrac{1}{2} \| \chi_i - \eta_i \|_2^2$, for all $i \in \{1, \dots, 7\}$.

3 Motivation: sensor network localization
Coupled case: sensors additionally measure the distance to their neighbors and solve
$\min_{\chi} \sum_{i=1}^{7} \left\{ \tfrac{1}{2} \| \chi_i - \eta_i \|_2^2 + \tfrac{1}{2} \left( \| \chi_i - \chi_{i+1} \|_2 - \bar\eta_i \right)^2 \right\}$ with $\chi_8 = \chi_1$,
where $\bar\eta_i$ denotes the measured distance between sensors $i$ and $i+1$.

4 Motivation: sensor network localization
Equivalent formulation: set $x_1 = (\chi_1, \zeta_1)$ with $\zeta_1 = \chi_2$, set $x_2 = (\chi_2, \zeta_2)$ with $\zeta_2 = \chi_3$, and so on.

7 Motivation: sensor network localization
Equivalent formulation (cont.):
- new variables $x_i = (\chi_i, \zeta_i)$
- separable non-convex objectives $f_i(x_i) = \tfrac{1}{2} \| \chi_i - \eta_i \|_2^2 + \tfrac{1}{2} \| \zeta_i - \eta_{i+1} \|_2^2 + \tfrac{1}{2} \left( \| \chi_i - \zeta_i \|_2 - \bar\eta_i \right)^2$
- affine coupling, $\zeta_i = \chi_{i+1}$, can be written as $\sum_{i=1}^{7} A_i x_i = 0$.

10 Motivation: sensor network localization
Optimization problem:
$\min_x \sum_{i=1}^{7} f_i(x_i)$  s.t.  $\sum_{i=1}^{7} A_i x_i = 0$.
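
How this splitting can be set up in code: a minimal sketch in Python/NumPy (an assumption; the talk's own experiments are in JULIA). The measurement arrays eta and dist are synthetic placeholders, and the index conventions for the coupling blocks are ours, not from the slides.

```python
# Sketch: encode the ring-coupled sensor-localization problem. Assumptions:
# 7 sensors with planar positions (d = 2), synthetic position measurements
# eta[i] and distance measurements dist[i].
import numpy as np

N, d = 7, 2
rng = np.random.default_rng(0)
eta = rng.normal(size=(N, d))          # noisy position measurements
dist = np.abs(rng.normal(size=N))      # noisy distance measurements

def f(i, x):
    """Separable non-convex objective f_i for x_i = (chi_i, zeta_i)."""
    chi, zeta = x[:d], x[d:]
    return (0.5*np.sum((chi - eta[i])**2)
            + 0.5*np.sum((zeta - eta[(i + 1) % N])**2)
            + 0.5*(np.linalg.norm(chi - zeta) - dist[i])**2)

# Coupling sum_i A_i x_i = 0 stacks the constraints zeta_i - chi_{i+1} = 0:
# agent i's zeta enters constraint block i, its chi enters block i-1.
As = []
for i in range(N):
    A = np.zeros((N*d, 2*d))
    A[i*d:(i+1)*d, d:] = np.eye(d)     # +zeta_i in constraint block i
    j = (i - 1) % N
    A[j*d:(j+1)*d, :d] = -np.eye(d)    # -chi_i in constraint block i-1
    As.append(A)
```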

11 Aim of distributed optimization algorithms
Find local minimizers of
$\min_x \sum_{i=1}^{N} f_i(x_i)$  s.t.  $\sum_{i=1}^{N} A_i x_i = b$.
- Functions $f_i : \mathbb{R}^n \to \mathbb{R}$ potentially non-convex.
- Matrices $A_i \in \mathbb{R}^{m \times n}$ and vector $b \in \mathbb{R}^m$ given.
- Problem: $N$ is large.

15 Overview
Theory
- Distributed optimization algorithms
- ALADIN
Applications
- Sensor network localization
- MPC with long horizons

16 Distributed optimization problem
Find local minimizers of
$\min_x \sum_{i=1}^{N} f_i(x_i)$  s.t.  $\sum_{i=1}^{N} A_i x_i = b$.
- Functions $f_i : \mathbb{R}^n \to \mathbb{R}$ potentially non-convex.
- Matrices $A_i \in \mathbb{R}^{m \times n}$ and vector $b \in \mathbb{R}^m$ given.
- Problem: $N$ is large.

17 Dual decomposition
Main idea: solve the dual problem $\max_\lambda d(\lambda)$ with
$d(\lambda) = \min_x \sum_{i=1}^{N} \left\{ f_i(x_i) + \lambda^T A_i x_i \right\} - \lambda^T b$.
- Evaluation of $d$ can be parallelized.
- Applicable if the $f_i$ are (strictly) convex.
- For non-convex $f_i$: a duality gap is possible.
H. Everett. Generalized Lagrange multiplier method for solving problems of optimum allocation of resources, 1963.
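
As an illustration, a minimal dual-decomposition sketch in Python (an assumption, not code from the talk): plain gradient ascent on $d(\lambda)$ with a hypothetical step size alpha, assuming the decoupled minimizers exist and are unique (e.g., strongly convex $f_i$).

```python
# Minimal dual decomposition sketch: gradient ascent on d(lambda).
# Assumptions: well-defined decoupled minimizers (e.g., strongly convex
# f_i) and a hand-tuned step size alpha.
import numpy as np
from scipy.optimize import minimize

def dual_decomposition(fs, As, b, lam0, alpha=0.1, iters=200):
    lam = np.asarray(lam0, dtype=float)
    for _ in range(iters):
        # evaluating d(lam): one decoupled (parallelizable) NLP per agent
        xs = [minimize(lambda x, i=i: fs[i](x) + lam @ (As[i] @ x),
                       np.zeros(As[i].shape[1])).x
              for i in range(len(fs))]
        # the gradient of d(lam) is the coupling residual sum_i A_i x_i - b
        lam = lam + alpha*(sum(A @ x for A, x in zip(As, xs)) - b)
    return xs, lam
```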

21 ADMM (consensus variant)
Alternating Direction Method of Multipliers
Input: Initial guesses $x_i \in \mathbb{R}^n$ and $\lambda_i \in \mathbb{R}^m$; $\rho > 0$, $\epsilon > 0$.
Repeat:
1. Solve decoupled NLPs
   $\min_{y_i} f_i(y_i) + \lambda_i^T A_i y_i + \tfrac{\rho}{2} \| A_i (y_i - x_i) \|_2^2$.
2. Implement dual gradient steps $\lambda_i^+ = \lambda_i + \rho A_i (y_i - x_i)$.
3. Solve the coupled QP
   $\min_{x^+} \sum_{i=1}^{N} \left\{ \tfrac{\rho}{2} \| A_i (y_i - x_i^+) \|_2^2 - (\lambda_i^+)^T A_i x_i^+ \right\}$  s.t.  $\sum_{i=1}^{N} A_i x_i^+ = b$.
4. Update the iterates $x \leftarrow x^+$ and $\lambda \leftarrow \lambda^+$.
D. Gabay, B. Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximations, 1976.
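
A compact sketch of these four steps in Python (an assumption: generic scipy solvers stand in for the decoupled NLP and coupled QP solvers, and all blocks are taken to have equal dimension so that np.split applies).

```python
# Sketch of the consensus-variant ADMM above; scipy's SLSQP stands in for
# a structure-exploiting QP solver, and equal block sizes are assumed.
import numpy as np
from scipy.optimize import minimize

def admm(fs, As, b, x0, lam0, rho=1.0, iters=50):
    N = len(fs)
    x = [np.asarray(xi, dtype=float) for xi in x0]
    lam = [np.asarray(l, dtype=float) for l in lam0]
    for _ in range(iters):
        # 1. decoupled NLPs (parallelizable across agents)
        ys = [minimize(lambda y, i=i: fs[i](y) + lam[i] @ (As[i] @ y)
                       + 0.5*rho*np.sum((As[i] @ (y - x[i]))**2), x[i]).x
              for i in range(N)]
        # 2. dual gradient steps
        lam = [lam[i] + rho*(As[i] @ (ys[i] - x[i])) for i in range(N)]
        # 3. coupled QP subject to the consensus constraint
        def obj(z):
            xs = np.split(z, N)
            return sum(0.5*rho*np.sum((As[i] @ (ys[i] - xs[i]))**2)
                       - lam[i] @ (As[i] @ xs[i]) for i in range(N))
        con = {'type': 'eq',
               'fun': lambda z: sum(As[i] @ np.split(z, N)[i]
                                    for i in range(N)) - b}
        x = np.split(minimize(obj, np.concatenate(ys), method='SLSQP',
                              constraints=[con]).x, N)
        # 4. x and lam now hold the updated iterates x+ and lam+
    return x, lam
```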

26 Limitations of ADMM
1) The convergence rate of ADMM is very scaling dependent.
2) ADMM may be divergent if the $f_i$ are non-convex.
Example: $\min_x x_1 x_2$  s.t.  $x_1 - x_2 = 0$
- unique and regular minimizer at $x_1^* = x_2^* = \lambda^* = 0$
- for $\rho = \tfrac{3}{4}$ all sub-problems are strictly convex
- ADMM is divergent: $\lambda^+ = -2\lambda$
This talk: addresses Problem 2), mitigates Problem 1).
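
The dual divergence can be reproduced numerically. The probe below (our sketch, not code from the talk) runs steps 1 and 2 of the consensus variant on this example, holding the primal iterate at the feasible consensus point $x = (0, 0)$; the coupled QP keeps the iterate consensus-feasible, and the dual recursion depends on $x$ only through $x_1 - x_2$, so this reproduces the multiplier dynamics exactly. The printed multipliers alternate in sign and double in magnitude.

```python
# Numerical probe of the divergence example: steps 1-2 of the consensus
# ADMM on min x1*x2 s.t. x1 - x2 = 0. With rho = 3/4 the decoupled
# subproblem is strictly convex, yet the multiplier obeys lam <- -2*lam.
import numpy as np
from scipy.optimize import minimize

rho, lam = 0.75, 0.1
A = np.array([1.0, -1.0])              # coupling constraint x1 - x2 = 0
x = np.zeros(2)                        # feasible primal iterate (kept fixed)

for k in range(6):
    obj = lambda y: y[0]*y[1] + lam*(A @ y) + 0.5*rho*(A @ (y - x))**2
    y = minimize(obj, np.ones(2)).x    # step 1: decoupled NLP
    lam = lam + rho*(A @ (y - x))      # step 2: dual gradient step
    print(f"iter {k}: lambda = {lam:+.4f}")
```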

32 Overview
Theory
- Distributed optimization algorithms
- ALADIN
Applications
- Sensor network localization
- MPC with long horizons

33 ALADIN (full step variant)
Augmented Lagrangian based Alternating Direction Inexact Newton method
Input: Initial guesses $x_i \in \mathbb{R}^n$ and $\lambda \in \mathbb{R}^m$; $\rho > 0$, $\epsilon > 0$.
Repeat:
1. Solve decoupled NLPs
   $\min_{y_i} f_i(y_i) + \lambda^T A_i y_i + \tfrac{\rho}{2} \| y_i - x_i \|_{\Sigma_i}^2$.
2. Compute $g_i = \nabla f_i(y_i)$, choose $H_i \approx \nabla^2 f_i(y_i)$, and solve the coupled QP
   $\min_{\Delta y} \sum_{i=1}^{N} \left\{ \tfrac{1}{2} \Delta y_i^T H_i \Delta y_i + g_i^T \Delta y_i \right\}$  s.t.  $\sum_{i=1}^{N} A_i (y_i + \Delta y_i) = b \;\mid\; \lambda^+$,
   where $\lambda^+$ denotes the multiplier of the coupling constraint.
3. Set $x \leftarrow y + \Delta y$ and $\lambda \leftarrow \lambda^+$.
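
For concreteness, a minimal full-step ALADIN sketch in Python (our assumptions: $\Sigma_i = I$, equal block dimensions, gradients and Hessians supplied by the caller, a generic unconstrained solver for the decoupled NLPs, and an invertible KKT matrix in step 2).

```python
# Minimal full-step ALADIN sketch (equality coupling only). Assumptions:
# Sigma_i = I, equal block sizes, callables grads/hessians for g_i and H_i,
# and an invertible KKT matrix for the coupled QP.
import numpy as np
from scipy.optimize import minimize

def aladin(fs, grads, hessians, As, b, x0, lam0, rho=1.0, iters=20):
    N, n, m = len(fs), x0[0].size, b.size
    x = [np.asarray(xi, dtype=float) for xi in x0]
    lam = np.asarray(lam0, dtype=float)
    for _ in range(iters):
        # 1. decoupled NLPs: min_y f_i(y) + lam^T A_i y + rho/2*||y - x_i||^2
        ys = [minimize(lambda y, i=i: fs[i](y) + lam @ (As[i] @ y)
                       + 0.5*rho*np.sum((y - x[i])**2), x[i]).x
              for i in range(N)]
        # 2. coupled equality-constrained QP, solved via its KKT system
        Hblk = np.zeros((N*n, N*n))
        for i in range(N):
            Hblk[i*n:(i+1)*n, i*n:(i+1)*n] = hessians[i](ys[i])
        g = np.concatenate([grads[i](ys[i]) for i in range(N)])
        A = np.hstack(As)
        r = b - sum(As[i] @ ys[i] for i in range(N))
        KKT = np.block([[Hblk, A.T], [A, np.zeros((m, m))]])
        sol = np.linalg.solve(KKT, np.concatenate([-g, r]))
        dy, lam = sol[:N*n], sol[N*n:]     # lam is the QP multiplier lam^+
        # 3. full step
        x = [ys[i] + dy[i*n:(i+1)*n] for i in range(N)]
    return x, lam
```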

37 Special cases
- For $\rho \to \infty$: ALADIN $\to$ SQP.
- For $0 < \rho < \infty$: if $H_i = \rho A_i^T A_i$ and $\Sigma_i = A_i^T A_i$, then ALADIN coincides with ADMM.
- For $\rho = 0$: ALADIN $\to$ Dual Decomposition (+ Inexact Newton).

40 Local convergence
Assumptions:
- the $f_i$ are twice continuously differentiable
- the minimizer $(x^*, \lambda^*)$ is a regular KKT point (LICQ + SOSC satisfied)
- $\rho$ satisfies $\nabla^2 f_i(y_i) + \rho \Sigma_i \succ 0$
Theorem: The full-step variant of ALADIN converges locally
1. with quadratic convergence rate, if $H_i = \nabla^2 f_i(y_i) + O(\| y_i - x^* \|)$
2. with linear convergence rate, if $\| H_i - \nabla^2 f_i(y_i) \|$ is sufficiently small

42 Globalization
Definition (L1-penalty function)
We say that $x^+$ is a descent step if $\Phi(x^+) < \Phi(x)$ for
$\Phi(x) = \sum_{i=1}^{N} f_i(x_i) + \bar\lambda \Big\| \sum_{i=1}^{N} A_i x_i - b \Big\|_1$,
with $\bar\lambda$ sufficiently large.
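
In code, the descent test is a one-liner once $\Phi$ is available; a short sketch, with lam_bar as the assumed, sufficiently large penalty weight:

```python
# L1-penalty function and descent test from the definition above; lam_bar
# plays the role of the sufficiently large weight lambda-bar.
import numpy as np

def phi(xs, fs, As, b, lam_bar):
    residual = sum(A @ x for A, x in zip(As, xs)) - b
    return sum(f(x) for f, x in zip(fs, xs)) + lam_bar*np.abs(residual).sum()

def is_descent_step(x_new, x_old, fs, As, b, lam_bar):
    return phi(x_new, fs, As, b, lam_bar) < phi(x_old, fs, As, b, lam_bar)
```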

43 Globalization
Rough sketch:
- As long as $\nabla^2 f_i(x_i) + \rho \Sigma_i \succ 0$, the proximal objectives $\tilde f_i(y_i) = f_i(y_i) + \tfrac{\rho}{2} \| y_i - x_i \|_{\Sigma_i}^2$ are strictly convex in a neighborhood of the current primal iterates $x_i$.
- If we do not update $x$, $y$ can be enforced to converge to the solution $z^*$ of the convex auxiliary problem $\min_z \sum_i \tilde f_i(z_i)$  s.t.  $\sum_i A_i z_i = b$.
- Strategy: if $x^+$ is not a descent step, we skip the primal update until $y$ is a descent step and set $x^+ = y$ (similar to proximal methods).
This strategy leads to a globalization routine for ALADIN if $H_i \succ 0$.

47 Inequalities
Distributed NLP with inequalities:
$\min_x \sum_{i=1}^{N} f_i(x_i)$  s.t.  $\sum_{i=1}^{N} A_i x_i = b$,  $h_i(x_i) \le 0$.
- Functions $f_i : \mathbb{R}^n \to \mathbb{R}$ and $h_i : \mathbb{R}^n \to \mathbb{R}^{n_h}$ potentially non-convex.
- Matrices $A_i \in \mathbb{R}^{m \times n}$ and vector $b \in \mathbb{R}^m$ given.
- Problem: $N$ is large.

48 ALADIN (with inequalities)
Augmented Lagrangian based Alternating Direction Inexact Newton method
Input: Initial guesses $x_i \in \mathbb{R}^n$ and $\lambda \in \mathbb{R}^m$; $\rho > 0$, $\epsilon > 0$.
Repeat:
1. Solve decoupled NLPs
   $\min_{y_i} f_i(y_i) + \lambda^T A_i y_i + \tfrac{\rho}{2} \| y_i - x_i \|_{\Sigma_i}^2$  s.t.  $h_i(y_i) \le 0 \;\mid\; \kappa_i$.
2. Set $g_i = \nabla f_i(y_i) + \nabla h_i(y_i) \kappa_i$ and $H_i \approx \nabla^2 \left( f_i(y_i) + \kappa_i^T h_i(y_i) \right)$, and solve the coupled QP
   $\min_{\Delta y} \sum_{i=1}^{N} \left\{ \tfrac{1}{2} \Delta y_i^T H_i \Delta y_i + g_i^T \Delta y_i \right\}$  s.t.  $\sum_{i=1}^{N} A_i (y_i + \Delta y_i) = b \;\mid\; \lambda^+$.
3. Set $x \leftarrow y + \Delta y$ and $\lambda \leftarrow \lambda^+$.

52 ALADIN (with inequalities)
Augmented Lagrangian based Alternating Direction Inexact Newton method
Remarks:
- If an approximation $\bar C_i \approx C_i = \nabla h_i(y_i)$ is available, solve the QP
  $\min_{\Delta y, s} \sum_{i=1}^{N} \left\{ \tfrac{1}{2} \Delta y_i^T H_i \Delta y_i + g_i^T \Delta y_i \right\} + \lambda^T s + \tfrac{\mu}{2} \| s \|_2^2$
  s.t.  $\sum_{i=1}^{N} A_i (y_i + \Delta y_i) = b + s \;\mid\; \lambda^{QP}$,  $\bar C_i \Delta y_i = 0$, $i \in \{1, \dots, N\}$,
  with $g_i = \nabla f_i(y_i) + (C_i - \bar C_i)^T \kappa_i$ and $\mu > 0$ instead.
- If $H_i$ and $\bar C_i$ are constant, pre-compute matrix decompositions.
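
The relaxed QP remains a structured linear-algebra problem. A sketch of solving it via its KKT system after eliminating the slack through the stationarity condition $\lambda + \mu s - \lambda^{QP} = 0$ (the stacked matrices H, A, C, the gradient g, and the residual $r = b - \sum_i A_i y_i$ are assumed to be assembled by the caller):

```python
# Sketch: solve the relaxed coupled QP via its KKT system, with the slack
# eliminated as s = (lam_qp - lam)/mu. Assumptions: H block-diagonal from
# the H_i, A = [A_1 ... A_N] stacked, C block-diagonal from the C_i-bar,
# and an invertible KKT matrix.
import numpy as np

def relaxed_coupled_qp(H, g, A, C, r, lam, mu):
    n, m, p = H.shape[0], A.shape[0], C.shape[0]
    KKT = np.block([[H, A.T, C.T],
                    [A, -np.eye(m)/mu, np.zeros((m, p))],
                    [C, np.zeros((p, m)), np.zeros((p, p))]])
    rhs = np.concatenate([-g, r - lam/mu, np.zeros(p)])
    sol = np.linalg.solve(KKT, rhs)
    dy, lam_qp = sol[:n], sol[n:n + m]
    s = (lam_qp - lam)/mu              # recovered slack variable
    return dy, lam_qp, s
```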

54 Overview
Theory
- Distributed optimization algorithms
- ALADIN
Applications
- Sensor network localization
- MPC with long horizons

55 Sensor network localization
Case study: sensors measure positions and distances in a graph;
additional inequality constraints, $\| \chi_i - \eta_i \|_2 \le \bar\sigma$, remove outliers.

56 ALADIN versus SQP
- $10^5$ primal and dual optimization variables
- implementation in JULIA

57 Overview
Theory
- Distributed optimization algorithms
- ALADIN
Applications
- Sensor network localization
- MPC with long horizons

58 Nonlinear MPC
Repeat:
- Wait for state measurement $\hat x$.
- Solve
  $\min_{x,u} \sum_{i=0}^{m-1} l(x_i, u_i) + M(x_m)$
  s.t.  $x_{i+1} = f(x_i, u_i)$,  $x_0 = \hat x$,  $(x_i, u_i) \in X \times U$.
- Send $u_0$ to the process.
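
The real-time loop itself is tiny. A skeleton in Python, where measure_state, solve_ocp, and apply_control are hypothetical stand-ins for the plant interface and the finite-horizon OCP solver:

```python
# Receding-horizon skeleton of the nonlinear MPC loop above; the three
# callables are hypothetical stand-ins, not part of any real library.
def nmpc_loop(measure_state, solve_ocp, apply_control):
    while True:
        x_hat = measure_state()        # wait for state measurement
        xs, us = solve_ocp(x_hat)      # solve the OCP with x_0 = x_hat
        apply_control(us[0])           # send u_0 to the process
```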

59 ACADO Toolkit

60 MPC Benchmark
Nonlinear model, 4 states, 2 controls.
Run-time of ACADO code generation (not distributed):
                                  CPU time   Percentage
  Integration & sensitivities       117 µs         65 %
  QP (condensing + qpOASES)          59 µs         33 %
  Complete real-time iteration      181 µs        100 %

62 MPC with ALADIN
ALADIN Step 1: solve decoupled NLPs
Choose a short horizon $n = m/N$ and solve
$\min_{y,v} \; \Psi_j(y_0^j) + \sum_{i=0}^{n-1} l(y_i^j, v_i^j) + \Phi_{j+1}(y_n^j)$
s.t.  $y_{i+1}^j = f(y_i^j, v_i^j)$, $i = 0, \dots, n-1$,  $y_i^j \in X$, $v_i^j \in U$.
Arrival and end costs depend on the ALADIN iterates:
$\Psi_0(y) = I(\hat x, y)$,  $\Phi_N(y) = M(y)$,
$\Psi_j(y) = \lambda_j^T y + \tfrac{\rho}{2} \| y - z_j \|_P^2$,  $\Phi_j(y) = -\lambda_j^T y + \tfrac{\rho}{2} \| y - z_j \|_P^2$.

64 MPC with ALADIN
ALADIN Step 2: solve coupled QP
- As we have no constraints, the coupled QP is an LQR problem:
  solve all matrix-valued Riccati equations offline,
  solve the online QP by a backward-forward sweep.
Code export:
- export all online operations as optimized C-code
- NLP solver: explicit MPC (rough heuristic)
- one ALADIN iteration per sampling time, skip globalization
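
A sketch of such a backward-forward sweep for a time-invariant unconstrained LQR problem (our illustration: A, B, Q, R, Qf are hypothetical model and weighting matrices; the affine terms contributed by the ALADIN iterates would add linear terms to the recursion and are omitted here):

```python
# Backward Riccati recursion (precomputable offline) plus a forward sweep
# (online), for an unconstrained time-invariant LQR problem.
import numpy as np

def riccati_gains(A, B, Q, R, Qf, horizon):
    P, Ks = Qf, []
    for _ in range(horizon):           # backward sweep
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + K.T @ R @ K + (A + B @ K).T @ P @ (A + B @ K)
        Ks.append(K)
    return Ks[::-1]                    # feedback gains for k = 0..horizon-1

def forward_sweep(A, B, Ks, x0):
    x, traj = np.asarray(x0, dtype=float), []
    for K in Ks:                       # roll out u_k = K_k x_k
        u = K @ x
        traj.append((x, u))
        x = A @ x + B @ u
    return traj
```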

66 ALADIN + code generation (results by Yuning Jiang)
Same nonlinear model as before, $n = 2$.
Run-time of real-time ALADIN, 10 processors:
                              CPU time   Percentage
  Parallel explicit MPC           6 µs         54 %
  QP sweeps                       3 µs         27 %
  Communication overhead          2 µs         18 %

68 Conclusions
ALADIN Theory
- can solve non-convex distributed optimization problems,
  $\min_x \sum_{i=1}^{N} f_i(x_i)$  s.t.  $\sum_{i=1}^{N} A_i x_i = b$,
  to local optimality
- contains SQP, ADMM, and Dual Decomposition as special cases
- local convergence analysis similar to SQP; globalization possible

71 Conclusions
ALADIN Applications
- large-scale distributed sensor network: ALADIN outperforms SQP and ADMM
- small-scale embedded MPC: ALADIN can be used to alternate between explicit & online MPC

73 References
B. Houska, J. Frasch, M. Diehl. An Augmented Lagrangian Based Algorithm for Distributed Non-Convex Optimization. SIOPT, 2016.
D. Kouzoupis, R. Quirynen, B. Houska, M. Diehl. A block based ALADIN scheme for highly parallelizable direct optimal control. ACC, 2016.
