Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma
Venkataramanan (Ragu) Balakrishnan, School of ECE, Purdue University
8 September 2003, European Union RTN Summer School on Multi-Agent Control, Hamilton Institute
Joint work with Lieven Vandenberghe (UCLA), and Anders Hansson and Ragnar Wallin (Linköping University)
Outline
- A brief introduction to semidefinite programming (SDP)
- Focus: LMIs from the Kalman-Yakubovich-Popov (KYP) Lemma
- Fast algorithms for SDPs derived from the KYP Lemma
Semidefinite Programming (SDP)

A convex optimization problem of the form

  minimize    c^T x
  subject to  F(x) = F_0 + x_1 F_1 + ... + x_p F_p ⪰ 0

where F_0, F_1, ..., F_p are given symmetric matrices, c is a given vector, and x is the vector of optimization variables.

The constraint F(x) ⪰ 0 is called a linear matrix inequality (LMI). F ⪰ 0 means F is positive semidefinite, that is, u^T F u ≥ 0 for all vectors u.

LMIs are nonlinear but convex constraints: if F(x) ⪰ 0 and F(y) ⪰ 0, then

  F(λx + (1−λ)y) = λF(x) + (1−λ)F(y) ⪰ 0   for all λ ∈ [0, 1]
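The convexity claim above is easy to spot-check numerically. A minimal sketch (not part of the original slides; random small matrices, with F_0 placed well inside the PSD cone so that nearby points are feasible):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4, 3

def sym(M):
    return (M + M.T) / 2

# F_0 well inside the PSD cone so that small x keep F(x) PSD.
F0 = 5.0 * np.eye(n)
Fi = [sym(rng.standard_normal((n, n))) for _ in range(p)]

def F(x):
    # Affine matrix function F(x) = F_0 + x_1 F_1 + ... + x_p F_p
    return F0 + sum(xj * Fj for xj, Fj in zip(x, Fi))

def is_psd(M, tol=1e-9):
    return np.linalg.eigvalsh(M)[0] >= -tol

x = 0.3 * rng.standard_normal(p)
y = 0.3 * rng.standard_normal(p)
assert is_psd(F(x)) and is_psd(F(y))

# Every convex combination stays feasible: the LMI set is convex.
for lam in np.linspace(0.0, 1.0, 11):
    assert is_psd(F(lam * x + (1 - lam) * y))
```

The key point the loop verifies is exactly the identity on the slide: F is affine in x, so F(λx + (1−λ)y) is the same convex combination of the PSD matrices F(x) and F(y).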
SDP vs. LP

  SDP:  minimize c^T x   subject to  F_0 + x_1 F_1 + ... + x_p F_p ⪰ 0
  LP:   minimize c^T x   subject to  a_i^T x ≤ b_i,   i = 1, ..., N

Same linear objective; a linear matrix inequality constraint instead of linear scalar inequalities.
More on LMIs

Matrices as variables. Example: the Lyapunov inequality

  A^T P + P A ⪯ 0

where A is given and P = P^T is the variable. This can be written as an LMI in the entries of P, but it is better to leave LMIs in this condensed form: it saves notation and leads to more efficient computation.

Multiple LMIs F^(1)(x) ⪰ 0, ..., F^(N)(x) ⪰ 0 are the same as the single LMI

  diag(F^(1)(x), ..., F^(N)(x)) ⪰ 0
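As a sketch of the Lyapunov example (NumPy only; solving the Lyapunov equation by Kronecker-product vectorization is an illustration chosen for this note, not a method from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# A stable A: shift a random matrix so all eigenvalues have negative real part.
R = rng.standard_normal((n, n))
A = R - (np.max(np.linalg.eigvals(R).real) + 1.0) * np.eye(n)

# Solve the Lyapunov equation A^T P + P A = -I by vectorization:
# with row-major vec, vec(A^T P) = kron(A.T, I) vec(P) and
# vec(P A) = kron(I, A.T) vec(P).
I = np.eye(n)
L = np.kron(A.T, I) + np.kron(I, A.T)
P = np.linalg.solve(L, -I.flatten()).reshape(n, n)
P = (P + P.T) / 2  # symmetrize against round-off

# P is positive definite and satisfies the strict Lyapunov inequality.
assert np.allclose(A.T @ P + P @ A, -np.eye(n))
assert np.linalg.eigvalsh(P)[0] > 0
```

For a stable A this P certifies stability, and the condensed form A^T P + P A ⪯ 0 is exactly the LMI the slide refers to.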
LMI examples

Many standard constraints can be written as LMIs:

- Linear constraints: Ax + b > 0 (componentwise) can be rewritten as an LMI using diagonal matrices, diag(Ax + b) ≻ 0.
- Quadratic constraints: the inequality (Ax + b)^T (Ax + b) + c^T x + d < 0 is equivalent (via a Schur complement) to the LMI

    [ I          Ax + b       ;
      (Ax + b)^T   −(c^T x + d) ]  ≻  0

- Trace constraints: P = P^T, A^T P + P A ⪯ 0, Tr P ≥ 1 is an LMI (the trace is linear in P).
- Norm constraints: σ_max(A) < 1 is equivalent to the LMI

    [ I    A ;
      A^T  I ]  ≻  0

- ... mixtures of these constraints, and many more.
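Both Schur-complement equivalences above can be spot-checked numerically. A NumPy sketch with random data (an illustration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(2)
m, p = 3, 2

def min_eig(M):
    return np.linalg.eigvalsh(M)[0]

# Norm constraint: sigma_max(A) < 1  <=>  [[I, A], [A^T, I]] > 0
A = rng.standard_normal((m, m))
A = 0.9 * A / np.linalg.svd(A, compute_uv=False)[0]  # scale so sigma_max = 0.9
blk = np.block([[np.eye(m), A], [A.T, np.eye(m)]])
assert np.linalg.svd(A, compute_uv=False)[0] < 1
assert min_eig(blk) > 0

# Quadratic constraint: (Ax+b)^T (Ax+b) + c^T x + d < 0  <=>  LMI > 0
A2 = rng.standard_normal((m, p))
b, c = rng.standard_normal(m), rng.standard_normal(p)
x = rng.standard_normal(p)
r = A2 @ x + b
d = -(r @ r) - c @ x - 1.0   # choose d so the quadratic inequality holds
lmi = np.block([[np.eye(m), r[:, None]],
                [r[None, :], -(c @ x + d) * np.eye(1)]])
assert r @ r + c @ x + d < 0
assert min_eig(lmi) > 0
```

In both cases the Schur complement of the identity block recovers the scalar or matrix inequality being encoded.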
SDP applications
- Systems and control (quite well-known)
- Circuit design
- Nonconvex optimization
- ... many others
Kalman-Yakubovich-Popov lemma

Consider a frequency-domain inequality (FDI), rational in the frequency ω and affine in a design vector x. Writing G(jω) = (jωI − A)^{-1} B, the FDI is

  [ G(jω) ; I ]^*  ( Σ_{i=1}^p x_i M_i − N )  [ G(jω) ; I ]  ⪰  0

If (A, B) is controllable, then the FDI holds for all ω ∈ R if and only if the LMI (with variables P = P^T and x)

  [ A^T P + P A   P B ;
    B^T P          0  ]  +  Σ_{i=1}^p x_i M_i  ⪰  N

is feasible.
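The algebraic fact behind the equivalence is that the P-dependent block contributes nothing to the frequency-domain quadratic form: for G(jω) = (jωI − A)^{-1}B, the quadratic form of [A^T P + P A, P B; B^T P, 0] in [G; I] vanishes at every frequency, so adding it to Σ x_i M_i − N changes the LMI but not the FDI. A NumPy spot-check of this identity (random data; a sketch added for this note, not part of the original slides):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 2

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
P = rng.standard_normal((n, n))
P = (P + P.T) / 2  # symmetric P, as in the lemma

# K(P) = [[A^T P + P A, P B], [B^T P, 0]]
K = np.block([[A.T @ P + P @ A, P @ B],
              [B.T @ P, np.zeros((m, m))]])

for w in [0.0, 0.7, 3.0, -12.5]:
    G = np.linalg.solve(1j * w * np.eye(n) - A, B)   # (jwI - A)^{-1} B
    V = np.vstack([G, np.eye(m)])                    # [G; I]
    # The K(P) term drops out of the frequency-domain inequality:
    assert np.allclose(V.conj().T @ K @ V, 0, atol=1e-8)
```

The cancellation follows from A G = jωG − B: the cross terms G*PB and B^T P G exactly cancel the contribution of G*(A^T P + P A)G.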
KYP Lemma consequences
- The semi-infinite frequency-domain inequality is exactly equivalent to an LMI (no frequency sampling needed)
- P serves as an auxiliary variable
- Of enormous importance in systems, control, and signal processing
KYP LMI applications

Linear system analysis and design: design an LTI controller for an LTI plant, with constraints specified as frequency-domain inequalities on the transfer function from the disturbance w to the controlled output z. The Youla parametrization expresses this transfer function affinely in the design variables:

  T(jω, x) = T_1(jω) + T_2(jω) ( Σ_{i=1}^p x_i Q_i(jω) ) T_3(jω)

and the KYP Lemma is then used to obtain LMIs in the variable x.
Digital filter design: an FIR (or more general) filter design problem: find x such that

  H(e^{jθ}, x) = Σ_{i=0}^{p−1} x_i H_i(e^{jθ})

satisfies frequency-domain constraints for all θ ∈ [0, 2π]. Again, the KYP Lemma is used to obtain LMIs in the variable x.
Robust control analysis: stability of interconnected systems via passivity or small-gain arguments, techniques that exploit the structure and nature of the uncertainty, and performance analysis via Lyapunov functions.
Focus: the KYP SDP

  minimize    c^T x + Tr(C P)
  subject to  [ A^T P + P A   P B ;
                B^T P          0  ]  +  Σ_{i=1}^p x_i M_i  ⪰  N

where c ∈ R^p, C ∈ S^n, A ∈ R^{n×n}, B ∈ R^{n×m}, M_i ∈ S^{n+m}, N ∈ S^{n+m}.

The extension to multiple LMIs in multiple variables is straightforward:

  minimize    c^T x + Σ_{k=1}^K Tr(C_k P_k)
  subject to  [ A_k^T P_k + P_k A_k   P_k B_k ;
                B_k^T P_k                  0  ]  +  Σ_{i=1}^p x_i M_{ki}  ⪰  N_k,   k = 1, ..., K
Numerical solution of SDPs

All SDPs are convex optimization problems: generic algorithms solve them in polynomial time, the Matlab LMI Control Toolbox is available, and moderate-size problems are solved quite easily. But KYP SDPs tend to be of very large scale, both because of the underlying problems themselves and because of the auxiliary variable P.

The rest of the talk: efficient solution of KYP SDPs using convex duality.
Convex duality

Rewrite the SDP with a slack variable S:

Primal SDP
  minimize    c^T x
  subject to  F_0 + x_1 F_1 + ... + x_p F_p − S = 0
              S ⪰ 0

Dual SDP
  maximize    −Tr(F_0 Z)
  subject to  Z ⪰ 0
              Tr(F_i Z) = c_i,   i = 1, ..., p

- If Z is dual feasible, then −Tr(F_0 Z) ≤ p* (the primal optimal value)
- If x is primal feasible, then c^T x ≥ d* (the dual optimal value)
- Under mild conditions, p* = d*
- At the optimum, S_opt Z_opt = F(x_opt) Z_opt = 0
Primal-dual algorithms

Solve the primal and dual problems together:

  minimize    c^T x + Tr(F_0 Z)   (= Tr(S Z), the duality gap)
  subject to  F_0 + x_1 F_1 + ... + x_p F_p − S = 0
              S ⪰ 0,  Z ⪰ 0
              Tr(F_i Z) = c_i,   i = 1, ..., p

The optimal value of this problem is zero.
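The identity c^T x + Tr(F_0 Z) = Tr(S Z) holds for any primal/dual feasible pair, which is what makes the combined objective a duality gap. A NumPy sketch (feasibility is arranged by construction: c is defined from a chosen PSD Z, so the dual equality constraints hold automatically):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 4, 3

def sym(M):
    return (M + M.T) / 2

F0 = sym(rng.standard_normal((n, n)))
Fi = [sym(rng.standard_normal((n, n))) for _ in range(p)]

# Dual point: any PSD Z; define c_i := Tr(F_i Z) so that
# the dual equality constraints hold by construction.
R = rng.standard_normal((n, n))
Z = R @ R.T
c = np.array([np.trace(Fj @ Z) for Fj in Fi])

# Primal point: any x, with slack S := F_0 + sum_i x_i F_i.
x = rng.standard_normal(p)
S = F0 + sum(xj * Fj for xj, Fj in zip(x, Fi))

# c^T x + Tr(F0 Z) = Tr(S Z)
gap = c @ x + np.trace(F0 @ Z)
assert np.isclose(gap, np.trace(S @ Z))
```

Since S ⪰ 0 and Z ⪰ 0 at feasible points, Tr(S Z) ≥ 0, and driving it to zero solves both problems at once.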
Why primal-dual algorithms?
- At every iteration, we have upper and lower bounds on the optimal value, hence guaranteed accuracy
- Early termination is possible
- Other advantages at the algorithmic level
Primal-dual algorithm outline

For simplicity, suppose we have a feasible point, i.e., x, Z, and S such that

  F_0 + x_1 F_1 + ... + x_p F_p − S = 0
  S ⪰ 0,  Z ⪰ 0
  Tr(F_i Z) = c_i,   i = 1, ..., p

(The more general case, with infeasible starting points, is essentially the same.)

At each iteration:
- Compute the product S Z. If it is small, stop.
- Otherwise, take steps ΔS, ΔZ, and Δx such that

    Δx_1 F_1 + ... + Δx_p F_p − ΔS = 0
    Tr(F_i ΔZ) = 0,   i = 1, ..., p        (maintain feasibility)
    S + ΔS ⪰ 0,  Z + ΔZ ⪰ 0
    (S + ΔS)(Z + ΔZ) is made smaller        (address the objective)
Solving the search equations

  1. Δx_1 F_1 + ... + Δx_p F_p − ΔS = 0
  2. Tr(F_i ΔZ) = 0,   i = 1, ..., p
  3. (S + ΔS)(Z + ΔZ) is made smaller
  4. S + ΔS ⪰ 0,  Z + ΔZ ⪰ 0

(1) and (2) are linear equations; (3) is accomplished via a Newton step, which is another linear equation.

Solution strategy:
- First, eliminate ΔS from the linear equations
- Next, eliminate ΔZ
- Solve a dense linear system in the variable Δx
- Reconstruct ΔZ and ΔS
- S + ΔS ⪰ 0, Z + ΔZ ⪰ 0 is ensured using a line search
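The elimination pattern above — reduce a structured linear system to a small dense system in one block of unknowns, then back-substitute — can be sketched on a generic 2×2 block system (NumPy; illustrative only, not the actual KYP search equations):

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2 = 6, 3

# Block system  [A11 A12; A21 A22] [u; v] = [b1; b2]
A11 = rng.standard_normal((n1, n1)) + 6 * np.eye(n1)  # keep blocks well-conditioned
A12 = rng.standard_normal((n1, n2))
A21 = rng.standard_normal((n2, n1))
A22 = rng.standard_normal((n2, n2)) + 6 * np.eye(n2)
b1, b2 = rng.standard_normal(n1), rng.standard_normal(n2)

# Eliminate u: u = A11^{-1}(b1 - A12 v), leaving a small dense
# (Schur-complement) system in v alone.
Sc = A22 - A21 @ np.linalg.solve(A11, A12)
v = np.linalg.solve(Sc, b2 - A21 @ np.linalg.solve(A11, b1))
u = np.linalg.solve(A11, b1 - A12 @ v)            # back-substitute

# Same answer as solving the full system directly.
full = np.block([[A11, A12], [A21, A22]])
uv = np.linalg.solve(full, np.concatenate([b1, b2]))
assert np.allclose(np.concatenate([u, v]), uv)
```

In the interior-point method, the payoff of this pattern depends entirely on how cheaply the elimination steps exploit structure, which is the subject of the rest of the talk.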
General-purpose implementation for KYP SDPs

  minimize    c^T x + Tr(C P)
  subject to  [ A^T P + P A   P B ;
                B^T P          0  ]  +  Σ_{i=1}^p x_i M_i  ⪰  N

with A ∈ R^{n×n}, B ∈ R^{n×1}, and (A, B) controllable; there are p + n(n+1)/2 variables.
Primal and dual KYP SDPs

Primal SDP
  minimize    c^T x + Tr(C P)
  subject to  [ A^T P + P A   P B ;
                B^T P          0  ]  +  Σ_{i=1}^p x_i M_i  ⪰  N

Dual SDP
  maximize    Tr(N Z)
  subject to  A Z_11 + Z_11 A^T + z̄ B^T + B z̄^T = C
              Tr(M_i Z) = c_i
              Z = [ Z_11   z̄ ;
                    z̄^T    2 z_{n+1} ]  ⪰  0

(For future reference, z = [ z̄^T, z_{n+1} ]^T.)
Search equations for KYP SDPs

  W ΔZ W + [ A^T ΔP + ΔP A   ΔP B ;
             B^T ΔP           0   ]  +  Σ_{i=1}^p Δx_i M_i  =  D
  A ΔZ_11 + ΔZ_11 A^T + Δz̄ B^T + B Δz̄^T = 0
  Tr(M_i ΔZ) = 0

where W ≻ 0; the values of W and D change at each iteration.

For convenience, define

  K(P) = [ A^T P + P A   P B ;
           B^T P          0  ],        M(x) = Σ_{i=1}^p x_i M_i

Then

  K_adj(ΔZ) = A ΔZ_11 + ΔZ_11 A^T + Δz̄ B^T + B Δz̄^T,        M_adj(ΔZ) = { Tr(M_i ΔZ) }
Standard method of solving the search equations

  W ΔZ W + K(ΔP) + M(Δx) = D
  K_adj(ΔZ) = 0
  M_adj(ΔZ) = 0

General-purpose solvers eliminate ΔZ from the first equation:

  K_adj( W^{-1} (K(ΔP) + M(Δx)) W^{-1} ) = K_adj( W^{-1} D W^{-1} )
  M_adj( W^{-1} (K(ΔP) + M(Δx)) W^{-1} ) = M_adj( W^{-1} D W^{-1} )

This is a dense set of linear equations in ΔP, Δx. Cost: at least O(n^6).
Alternative method: Step 1

  W ΔZ W + K(ΔP) + M(Δx) = D
  A ΔZ_11 + ΔZ_11 A^T + Δz̄ B^T + B Δz̄^T = 0
  M_adj(ΔZ) = 0

Use the second equation to express ΔZ_11 in terms of Δz̄:

  ΔZ_11 = Σ_{i=1}^n Δz̄_i X_i,   where   A X_i + X_i A^T + B e_i^T + e_i B^T = 0

Thus

  ΔZ = B(Δz) = [ Σ_{i=1}^n Δz̄_i X_i   Δz̄ ;
                 Δz̄^T                  2 Δz_{n+1} ]

Substituting in the first and third equations gives

  W B(Δz) W + K(ΔP) + M(Δx) = D
  M_adj(B(Δz)) = 0
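A NumPy sketch of this step (small n, single-input case; the Lyapunov equations for the X_i are solved here by Kronecker-product vectorization purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4

# Stable A and B in R^n (single-input case, m = 1).
R = rng.standard_normal((n, n))
A = R - (np.max(np.linalg.eigvals(R).real) + 1.0) * np.eye(n)
B = rng.standard_normal((n, 1))

# Solve A X_i + X_i A^T = -(B e_i^T + e_i B^T), i = 1..n, by
# vectorization: with row-major vec, vec(A X) = kron(A, I) vec(X)
# and vec(X A^T) = kron(I, A) vec(X).
I = np.eye(n)
L = np.kron(A, I) + np.kron(I, A)
X = []
for i in range(n):
    e = I[:, [i]]
    C = B @ e.T + e @ B.T
    X.append(np.linalg.solve(L, -C.flatten()).reshape(n, n))

# Any combination Z11 = sum_i z_i X_i then satisfies
# A Z11 + Z11 A^T + z B^T + B z^T = 0, which is the claimed parametrization.
z = rng.standard_normal((n, 1))
Z11 = sum(zi * Xi for zi, Xi in zip(z.ravel(), X))
res = A @ Z11 + Z11 @ A.T + z @ B.T + B @ z.T
assert np.allclose(res, 0, atol=1e-8)
```

By linearity, every solution of the homogeneous equation in ΔZ_11 is such a combination, which is what lets Step 1 trade the matrix unknown ΔZ_11 for the n-vector Δz̄.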
Alternative method: Step 2

  W B(Δz) W + K(ΔP) + M(Δx) = D
  M_adj(B(Δz)) = 0

Note that G = K(ΔP) for some ΔP if and only if B_adj(G) = 0. Use this to eliminate ΔP:

  B_adj( W B(Δz) W ) + B_adj( M(Δx) ) = B_adj( D )
  M_adj( B(Δz) ) = 0

This is n + p + 1 linear equations in the n + p + 1 variables Δz, Δx.
Alternative method: Summary

The reduced search equations take the form

  [ P_11    P_12 ;
    P_12^T  0    ]  [ Δz ; Δx ]  =  [ q_1 ; 0 ]

- Cost of solving: O(n^3) operations (assuming p = O(n))
- From Δz, Δx, one can find ΔZ, ΔP in O(n^3) operations
- Need to precompute the X_i (O(n^4))
- P_12 is independent of the current iterates and can be precomputed, in O(n^4)
- Constructing P_11 requires constructing terms such as Tr(X_i W_11 X_j W_11) and W_11 X_i W_12 (also O(n^4))
- Overall cost dominated by O(n^4)
Numerical example

(Table: CPU time in seconds on a 2.4 GHz Pentium IV with 1 GB of memory, for n = p; columns are KYP-IPM preparation time and time per iteration, and SeDuMi (primal) time per iteration. The numerical values were not preserved in this transcription.)

- KYP-IPM: Matlab implementation of the alternative method
- SeDuMi (primal): SeDuMi version 1.05 applied to the primal problem
- Preparation time is the time to compute the matrices X_i
- The number of iterations in both methods is comparable (7-15)
Further reduction in computation

Use a factorization of A to compute terms such as Tr(X_i W_11 X_j W_11) without computing the X_i, i.e., without explicitly solving

  A X_i + X_i A^T + B e_i^T + e_i B^T = 0,   i = 1, ..., n

Advantages: no need to store the matrices X_i, and faster construction of the reduced search equations. Possible factorizations: eigenvalue decomposition, companion form, ... For example, if A has distinct eigenvalues, A = V diag(λ) V^{-1}, it is easy to write down the search equations in O(n^3) operations in terms of V and λ.
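To illustrate why the eigenvalue decomposition helps, here is a diagonalization-based Lyapunov solve (NumPy; assumes distinct eigenvalues, and is a sketch of the underlying idea rather than the paper's algorithm): transforming A X + X A^T = −C by V reduces it to the elementwise divisions Y_ij = −C̃_ij / (λ_i + λ_j), an O(n^2) step per right-hand side once V and λ are known.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5

# Stable A with (generically) distinct eigenvalues.
R = rng.standard_normal((n, n))
A = R - (np.max(np.linalg.eigvals(R).real) + 1.0) * np.eye(n)
C = rng.standard_normal((n, n))

# Diagonalize: A = V diag(lam) V^{-1}.
lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

# A X + X A^T = -C  becomes, with Y = V^{-1} X V^{-T} and Ct = V^{-1} C V^{-T}:
#   lam_i Y_ij + Y_ij lam_j = -Ct_ij
Ct = Vinv @ C @ Vinv.T
Y = -Ct / (lam[:, None] + lam[None, :])   # safe: Re(lam) < 0, so lam_i + lam_j != 0
X = (V @ Y @ V.T).real                    # the imaginary part is round-off

# Check against the Lyapunov equation directly.
assert np.allclose(A @ X + X @ A.T, -C, atol=1e-8)
```

Stability guarantees λ_i + λ_j ≠ 0, so the divisions are well defined, and nothing the size of the X_i ever needs to be stored.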
Existence of distinct stable eigenvalues

By assumption, (A, B) is controllable; hence the eigenvalues of A + BK can be assigned arbitrarily by choosing K. Choose

  T = [ I   0 ;
        K   I ]

and replace the LMI by the equivalent (congruent) LMI

  T^T ( [ A^T P + P A   P B ;
          B^T P          0  ]  +  Σ_{i=1}^p x_i M_i ) T  ⪰  T^T N T

that is,

  [ (A + BK)^T P + P (A + BK)   P B ;
    B^T P                        0  ]  +  Σ_{i=1}^p x_i (T^T M_i T)  ⪰  T^T N T

Conclusion: without loss of generality, A can be assumed stable with distinct eigenvalues.
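The two facts used here — congruence by T replaces A with A + BK in the KYP block, and congruence by an invertible T preserves the sign of an LMI — are both easy to verify numerically. A NumPy sketch (random data; added for this note):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
K = rng.standard_normal((1, n))
P = rng.standard_normal((n, n))
P = (P + P.T) / 2

def kyp_block(A, B, P):
    # [[A^T P + P A, P B], [B^T P, 0]]
    return np.block([[A.T @ P + P @ A, P @ B],
                     [B.T @ P, np.zeros((1, 1))]])

T = np.block([[np.eye(n), np.zeros((n, 1))],
              [K, np.eye(1)]])

# Congruence by T replaces A with A + B K in the KYP block ...
lhs = T.T @ kyp_block(A, B, P) @ T
rhs = kyp_block(A + B @ K, B, P)
assert np.allclose(lhs, rhs)

# ... and, T being invertible, congruence preserves semidefiniteness
# (Sylvester's law of inertia).
S = rng.standard_normal((n + 1, n + 1))
S = S @ S.T  # PSD by construction
assert np.linalg.eigvalsh(T.T @ S @ T)[0] >= -1e-8
```

This is why the state-feedback change of variables costs nothing: the transformed problem has the same feasible set in (P, x), but with A + BK in place of A.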
Numerical example

Five randomly generated problems with p = 50 and n = 100, ..., 500.

(Table: preparation time and time per iteration for KYP-IPM (fast), KYP-IPM, and SeDuMi (primal); the numerical values were not preserved in this transcription.)

- KYP-IPM (fast) uses the eigenvalue decomposition of A to construct the reduced search equations
- Preprocessing time and time per iteration grow as O(n^3)
Conclusions

SDPs derived from the KYP lemma:
- A useful class of SDPs, widely encountered in systems, control, and signal processing
- Difficult to solve using general-purpose software: generic solvers take O(n^6) computation
- Fast solution using interior-point methods: a custom implementation based on fast solution of the search equations costs O(n^4) or O(n^3)
Robust and Optimal Control, Spring 2015 Instructor: Prof. Masayuki Fujita (S5-303B) D. Linear Matrix Inequality D.1 Convex Optimization D.2 Linear Matrix Inequality(LMI) D.3 Control Design and LMI Formulation
More information8 A First Glimpse on Design with LMIs
8 A First Glimpse on Design with LMIs 8.1 Conceptual Design Problem Given a linear time invariant system design a linear time invariant controller or filter so as to guarantee some closed loop indices
More informationFast ADMM for Sum of Squares Programs Using Partial Orthogonality
Fast ADMM for Sum of Squares Programs Using Partial Orthogonality Antonis Papachristodoulou Department of Engineering Science University of Oxford www.eng.ox.ac.uk/control/sysos antonis@eng.ox.ac.uk with
More informationHomework Set #6 - Solutions
EE 15 - Applications of Convex Optimization in Signal Processing and Communications Dr Andre Tkacenko JPL Third Term 11-1 Homework Set #6 - Solutions 1 a The feasible set is the interval [ 4] The unique
More informationFigure 3.1: Unity feedback interconnection
Chapter 3 Small Gain Theorem and Integral Quadratic Constraints This chapter presents a version of the small gain theorem for L2 gains, and then proceeds to introduce the general technique of working with
More informationInterior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems
AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss
More informationLecture 7: Convex Optimizations
Lecture 7: Convex Optimizations Radu Balan, David Levermore March 29, 2018 Convex Sets. Convex Functions A set S R n is called a convex set if for any points x, y S the line segment [x, y] := {tx + (1
More informationPOLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS
POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS Sercan Yıldız syildiz@samsi.info in collaboration with Dávid Papp (NCSU) OPT Transition Workshop May 02, 2017 OUTLINE Polynomial optimization and
More informationThe maximal stable set problem : Copositive programming and Semidefinite Relaxations
The maximal stable set problem : Copositive programming and Semidefinite Relaxations Kartik Krishnan Department of Mathematical Sciences Rensselaer Polytechnic Institute Troy, NY 12180 USA kartis@rpi.edu
More informationA Quantum Interior Point Method for LPs and SDPs
A Quantum Interior Point Method for LPs and SDPs Iordanis Kerenidis 1 Anupam Prakash 1 1 CNRS, IRIF, Université Paris Diderot, Paris, France. September 26, 2018 Semi Definite Programs A Semidefinite Program
More informationLMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009
LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix
More informationInterior-Point Methods for Linear Optimization
Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function
More informationPrimal-dual IPM with Asymmetric Barrier
Primal-dual IPM with Asymmetric Barrier Yurii Nesterov, CORE/INMA (UCL) September 29, 2008 (IFOR, ETHZ) Yu. Nesterov Primal-dual IPM with Asymmetric Barrier 1/28 Outline 1 Symmetric and asymmetric barriers
More informationA priori bounds on the condition numbers in interior-point methods
A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be
More informationEE 227A: Convex Optimization and Applications October 14, 2008
EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider
More informationDuality. Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities
Duality Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities Lagrangian Consider the optimization problem in standard form
More informationConvex relaxation. In example below, we have N = 6, and the cut we are considering
Convex relaxation The art and science of convex relaxation revolves around taking a non-convex problem that you want to solve, and replacing it with a convex problem which you can actually solve the solution
More informationHomework 4. Convex Optimization /36-725
Homework 4 Convex Optimization 10-725/36-725 Due Friday November 4 at 5:30pm submitted to Christoph Dann in Gates 8013 (Remember to a submit separate writeup for each problem, with your name at the top)
More informationSDP APPROXIMATION OF THE HALF DELAY AND THE DESIGN OF HILBERT PAIRS. Bogdan Dumitrescu
SDP APPROXIMATION OF THE HALF DELAY AND THE DESIGN OF HILBERT PAIRS Bogdan Dumitrescu Tampere International Center for Signal Processing Tampere University of Technology P.O.Box 553, 3311 Tampere, FINLAND
More informationLecture: Convex Optimization Problems
1/36 Lecture: Convex Optimization Problems http://bicmr.pku.edu.cn/~wenzw/opt-2015-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/36 optimization
More informationTutorial on Convex Optimization for Engineers Part II
Tutorial on Convex Optimization for Engineers Part II M.Sc. Jens Steinwandt Communications Research Laboratory Ilmenau University of Technology PO Box 100565 D-98684 Ilmenau, Germany jens.steinwandt@tu-ilmenau.de
More informationSolving large Semidefinite Programs - Part 1 and 2
Solving large Semidefinite Programs - Part 1 and 2 Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/34 Overview Limits of Interior
More informationAppendix A Solving Linear Matrix Inequality (LMI) Problems
Appendix A Solving Linear Matrix Inequality (LMI) Problems In this section, we present a brief introduction about linear matrix inequalities which have been used extensively to solve the FDI problems described
More informationLecture 9 Sequential unconstrained minimization
S. Boyd EE364 Lecture 9 Sequential unconstrained minimization brief history of SUMT & IP methods logarithmic barrier function central path UMT & SUMT complexity analysis feasibility phase generalized inequalities
More informationDissipative Systems Analysis and Control
Bernard Brogliato, Rogelio Lozano, Bernhard Maschke and Olav Egeland Dissipative Systems Analysis and Control Theory and Applications 2nd Edition With 94 Figures 4y Sprin er 1 Introduction 1 1.1 Example
More informationRational Covariance Extension for Boundary Data and Positive Real Lemma with Positive Semidefinite Matrix Solution
Preprints of the 18th FAC World Congress Milano (taly) August 28 - September 2, 2011 Rational Covariance Extension for Boundary Data and Positive Real Lemma with Positive Semidefinite Matrix Solution Y
More informationConvex Optimization. Newton s method. ENSAE: Optimisation 1/44
Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)
More informationLinear Matrix Inequality (LMI)
Linear Matrix Inequality (LMI) A linear matrix inequality is an expression of the form where F (x) F 0 + x 1 F 1 + + x m F m > 0 (1) x = (x 1,, x m ) R m, F 0,, F m are real symmetric matrices, and the
More informationSparse Sum-of-Squares Optimization for Model Updating through. Minimization of Modal Dynamic Residuals
Sparse Sum-of-Squares Optimization for Model Updating through Minimization of Modal Dynamic Residuals Dan Li 1, Yang Wang 1, 2 * 1 School of Civil and Environmental Engineering, Georgia Institute of Technology,
More informationReal Symmetric Matrices and Semidefinite Programming
Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many
More informationRichard Anthony Conway. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy
Discrete-Time H 2 Guaranteed Cost Control by Richard Anthony Conway A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Engineering Mechanical
More informationConvexification of Mixed-Integer Quadratically Constrained Quadratic Programs
Convexification of Mixed-Integer Quadratically Constrained Quadratic Programs Laura Galli 1 Adam N. Letchford 2 Lancaster, April 2011 1 DEIS, University of Bologna, Italy 2 Department of Management Science,
More informationInterior Point Methods: Second-Order Cone Programming and Semidefinite Programming
School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio
More informationE5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming
E5295/5B5749 Convex optimization with engineering applications Lecture 5 Convex programming and semidefinite programming A. Forsgren, KTH 1 Lecture 5 Convex optimization 2006/2007 Convex quadratic program
More informationLecture: Algorithms for LP, SOCP and SDP
1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More informationU.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 12 Luca Trevisan October 3, 2017
U.C. Berkeley CS94: Beyond Worst-Case Analysis Handout 1 Luca Trevisan October 3, 017 Scribed by Maxim Rabinovich Lecture 1 In which we begin to prove that the SDP relaxation exactly recovers communities
More information4. Convex optimization problems
Convex Optimization Boyd & Vandenberghe 4. Convex optimization problems optimization problem in standard form convex optimization problems quasiconvex optimization linear optimization quadratic optimization
More informationSparse Optimization Lecture: Basic Sparse Optimization Models
Sparse Optimization Lecture: Basic Sparse Optimization Models Instructor: Wotao Yin July 2013 online discussions on piazza.com Those who complete this lecture will know basic l 1, l 2,1, and nuclear-norm
More informationLecture: Duality of LP, SOCP and SDP
1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:
More informationProjection methods to solve SDP
Projection methods to solve SDP Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Oberwolfach Seminar, May 2010 p.1/32 Overview Augmented Primal-Dual Method
More informationMarcus Pantoja da Silva 1 and Celso Pascoli Bottura 2. Abstract: Nonlinear systems with time-varying uncertainties
A NEW PROPOSAL FOR H NORM CHARACTERIZATION AND THE OPTIMAL H CONTROL OF NONLINEAR SSTEMS WITH TIME-VARING UNCERTAINTIES WITH KNOWN NORM BOUND AND EXOGENOUS DISTURBANCES Marcus Pantoja da Silva 1 and Celso
More informationA FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS
Yugoslav Journal of Operations Research 25 (205), Number, 57 72 DOI: 0.2298/YJOR3055034A A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM FOR P (κ)-horizontal LINEAR COMPLEMENTARITY PROBLEMS Soodabeh
More information