Optimization Methods for Circuit Design


Technische Universität München
Department of Electrical Engineering and Information Technology
Institute for Electronic Design Automation

Optimization Methods for Circuit Design
Compendium

H. Graeb

Version 2.8 (WS 12/13): Michael Zwerger
Version 2.0-2.7 (WS 08/09 - SS 12): Michael Eick
Version 1.0-1.2 (SS 07 - SS 08): Husni Habal

Presentation follows:
H. Graeb, Analog Design Centering and Sizing, Springer, 2007.
R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, 2nd Edition, 2000.

Status: October 12, 2012
Copyright 2008-2012, all rights reserved.

H. Graeb
Technische Universität München, Institute for Electronic Design Automation
Arcisstr. 21, 80333 Munich, Germany
graeb@tum.de, Phone: +49-89-289-23679

Contents

1 Introduction
  1.1 Parameters, performance, simulation
  1.2 Performance specification
  1.3 Minimum, minimization
  1.4 Unconstrained optimization
  1.5 Constrained optimization
  1.6 Classification of optimization problems
  1.7 Classification of constrained optimization problems
  1.8 Structure of an iterative optimization process
    1.8.1 ... without constraints
    1.8.2 ... with constraints
    1.8.3 Trust-region approach

2 Optimality conditions
  2.1 Optimality conditions - unconstrained optimization
    2.1.1 Necessary first-order condition for a local minimum of an unconstrained optimization problem
    2.1.2 Necessary second-order condition for a local minimum of an unconstrained optimization problem
    2.1.3 Sufficient and necessary conditions for the second-order derivative ∇²f(x*) to be positive definite
  2.2 Optimality conditions - constrained optimization
    2.2.1 Feasible descent direction r
    2.2.2 Necessary first-order conditions for a local minimum of a constrained optimization problem
    2.2.3 Necessary second-order condition for a local minimum of a constrained optimization problem
    2.2.4 Sensitivity of the optimum with regard to a change in an active constraint

3 Unconstrained optimization
  3.1 Univariate unconstrained optimization, line search
    3.1.1 Wolfe-Powell conditions
    3.1.2 Backtracking line search
    3.1.3 Bracketing
    3.1.4 Sectioning
    3.1.5 Golden Sectioning
    3.1.6 Line search by quadratic model
    3.1.7 Unimodal function
  3.2 Multivariate unconstrained optimization without derivatives
    3.2.1 Coordinate search
    3.2.2 Polytope method (Nelder-Mead simplex method)
  3.3 Multivariate unconstrained optimization with derivatives
    3.3.1 Steepest descent
    3.3.2 Newton approach
    3.3.3 Quasi-Newton approach
    3.3.4 Levenberg-Marquardt approach (Newton direction plus trust region)
    3.3.5 Least-squares (plus trust-region) approach
    3.3.6 Conjugate-gradient (CG) approach

4 Constrained optimization problem formulations
  4.1 Quadratic Programming (QP)
    4.1.1 QP - linear equality constraints
    4.1.2 QP - inequality constraints
    4.1.3 Example
  4.2 Sequential Quadratic Programming (SQP), Lagrange-Newton
    4.2.1 SQP - equality constraints
    4.2.2 Penalty function

5 Statistical parameter tolerances
  5.1 Univariate Gaussian distribution (normal distribution)
  5.2 Multivariate normal distribution
  5.3 Transformation of statistical distributions
    5.3.1 Example

6 Expectation values and their estimators
  6.1 Expectation values
    6.1.1 Definitions
    6.1.2 Linear transformation of expectation value
    6.1.3 Linear transformation of variance
    6.1.4 Translation law of variances
    6.1.5 Normalizing a random variable
    6.1.6 Linear transformation of a normal distribution
  6.2 Estimation of expectation values
    6.2.1 Expectation value estimator
    6.2.2 Variance estimator
    6.2.3 Variance of the expectation value estimator
    6.2.4 Linear transformation of estimated expectation value
    6.2.5 Linear transformation of estimated variance
    6.2.6 Translation law of estimated variance

7 Worst-case analysis
  7.1 Task
  7.2 Typical tolerance regions
  7.3 Classical worst-case analysis
    7.3.1 Task
    7.3.2 Linear performance model
    7.3.3 Performance type "good", specification type f > f_L
    7.3.4 Performance type "good", specification type f < f_U
  7.4 Realistic worst-case analysis
    7.4.1 Task
    7.4.2 Performance type "good", specification type f ≥ f_L
    7.4.3 Performance type "good", specification type f ≤ f_U
  7.5 General worst-case analysis
    7.5.1 Task
    7.5.2 Performance type "good", specification type f ≥ f_L
    7.5.3 Performance type "good", specification type f ≤ f_U
    7.5.4 General worst-case analysis with tolerance box
  7.6 Input/output of worst-case analysis
  7.7 Summary of discussed worst-case analysis problems

8 Yield analysis
  8.1 Task
    8.1.1 Acceptance function
    8.1.2 Parametric yield
  8.2 Statistical yield analysis/Monte-Carlo analysis
    8.2.1 Variance of yield estimator
    8.2.2 Estimated variance of yield estimator
    8.2.3 Importance sampling
  8.3 Geometric yield analysis for linearized performance feature ("realistic geometric yield analysis")
    8.3.1 Yield partition
    8.3.2 Defining worst-case distance β_WL as difference from nominal performance to specification bound as multiple of standard deviation σ_f of linearized performance feature (β_WL-sigma design)
    8.3.3 Yield partition as a function of worst-case distance β_WL
    8.3.4 Worst-case distance β_WL defines tolerance region
    8.3.5 Specification type f ≤ f_U
  8.4 Geometric yield analysis for nonlinear performance feature ("general geometric yield analysis")
    8.4.1 Problem formulation
    8.4.2 Advantages of geometric yield analysis
    8.4.3 Lagrange function and first-order optimality conditions of problem (336)
    8.4.4 Lagrange function of problem (337)
    8.4.5 Second-order optimality condition of problem (336)
    8.4.6 Worst-case distance
    8.4.7 Remarks
    8.4.8 Overall yield
    8.4.9 Consideration of range parameters

9 Yield optimization/design centering/nominal design
  9.1 Optimization objectives
  9.2 Derivatives of optimization objectives
  9.3 Problem formulations of analog optimization
  9.4 Analysis, synthesis
  9.5 Sizing
  9.6 Nominal design, tolerance design
  9.7 Optimization without/with constraints

10 Sizing rules for analog circuit optimization
  10.1 Single (NMOS) transistor
    10.1.1 Sizing rules for single transistor that acts as a voltage-controlled current source (VCCS)
  10.2 Transistor pair: current mirror (NMOS)
    10.2.1 Sizing rules for current mirror

A Matrix and vector notations
  A.1 Vector
  A.2 Matrix
  A.3 Addition
  A.4 Multiplication
  A.5 Special cases
  A.6 Determinant of a quadratic matrix
  A.7 Inverse of a quadratic non-singular matrix
  A.8 Some properties

B Abbreviated notations of derivatives using the nabla symbol

C Norms

D Pseudo-inverse, singular value decomposition (SVD)
  D.1 Moore-Penrose conditions
  D.2 Singular value decomposition

E Linear equation system, rectangular system matrix with full rank
  E.1 Underdetermined system of equations
  E.2 Overdetermined system of equations
  E.3 Determined system of equations

F Partial derivatives of linear, quadratic terms in matrix/vector notation

G Probability space

H Convexity
  H.1 Convex set K ⊆ R^n
  H.2 Convex function

1 Introduction

1.1 Parameters, performance, simulation

design parameters:      x_d ∈ R^{n_xd},  e.g., transistor widths, capacitances
statistical parameters: x_s ∈ R^{n_xs},  e.g., oxide thickness, threshold voltage
range parameters:       x_r ∈ R^{n_xr},  e.g., operational parameters: supply voltage, temperature
(circuit) parameters:   x = [x_d^T x_s^T x_r^T]^T
performance feature:    f_i,  e.g., gain, bandwidth, slew rate, phase margin, delay, power
(circuit) performance:  f = [... f_i ...]^T ∈ R^{n_f}
(circuit) simulation:   x → f(x),  e.g., SPICE; abstraction from the physical level!

A design parameter and a statistical parameter may refer to the same physical parameter. E.g., an actual CMOS transistor width is the sum of a design parameter W_k and a statistical parameter ΔW: W_k is the specific width of transistor T_k, while ΔW is a width reduction that varies globally and equally for all transistors on a die. A design parameter and a statistical parameter may also be identical.

1.2 Performance specification

performance specification feature (upper or lower limit on a performance):

    f_i ≥ f_L,i   or   f_i ≤ f_U,i                                        (1)

number of performance specification features:

    n_f ≤ n_PSF ≤ 2 n_f                                                   (2)

performance specification:

    f_L,1 ≤ f_1(x) ≤ f_U,1
      ...                                                                  (3)
    f_L,n_f ≤ f_n_f(x) ≤ f_U,n_f
    ⟺  f_L ≤ f(x) ≤ f_U

Figure 1. Smooth function (a), i.e., continuous and differentiable at least several times on a closed region of the domain, with a strong local minimum, a weak local minimum, and the global minimum. Non-smooth continuous function (b).

1.3 Minimum, minimization

Without loss of generality, "optimum" means "minimum", because

    max f ≡ -min(-f)                                                       (4)

"min" denotes either the minimum, i.e., a result, or "minimize", i.e., a process.   (5)

    min f(x)  ⟺  f(x) → min                                               (6)
    min f(x) → x*,  f(x*) = f*                                            (7)

1.4 Unconstrained optimization

    f* = min_x f(x) ≡ min f ≡ min f(x) ≡ min {f(x)}
    x* = argmin_x f(x) ≡ argmin f ≡ argmin f(x) ≡ argmin {f(x)}           (8)

1.5 Constrained optimization

E: set of equality constraints
I: set of inequality constraints

    min f(x)  s.t.  c_i(x) = 0, i ∈ E
                    c_i(x) ≥ 0, i ∈ I                                     (9)

Alternative formulations:

    min_x f  s.t.  x ∈ Ω                                                  (10)
    min_{x ∈ Ω} f                                                         (11)
    min {f(x) | x ∈ Ω}                                                    (12)

where

    Ω = { x | c_i(x) = 0, i ∈ E;  c_i(x) ≥ 0, i ∈ I }

The Lagrange function combines objective function and constraints in a single expression:

    L(x, λ) = f(x) - Σ_{i ∈ E ∪ I} λ_i c_i(x)                             (13)

λ_i: Lagrange multiplier associated with constraint i

1.6 Classification of optimization problems

- deterministic, stochastic
- continuous, discrete
- local, global
- scalar, vector
- constrained, unconstrained
- with or without derivatives

The iterative search process is deterministic or random. Optimization variables can take an infinite number of values, e.g., the set of real numbers, or a finite set of values or states. The objective value at a local optimal point is better than the objective values of other points in its vicinity. The objective value at a global optimal point is better than the objective value of any other point. In a vector optimization problem, multiple objective functions are optimized simultaneously (multiple-criteria optimization, MCO). Usually, objectives have to be traded off against each other. A Pareto-optimal point is characterized in that one objective can only be improved at the cost of another. Pareto optimization determines the set of all Pareto-optimal points. Scalar optimization refers to a single objective. A vector optimization problem is scalarized by combining the multiple objectives into a single overall objective, e.g., by a weighted sum, least-squares, or min/max. Besides the objective function that has to be optimized, constraints on the optimization variables may be given as inequalities or equalities. The optimization process may be based on gradients (first derivative), on gradients and Hessians (second derivative), or it may not require any derivatives of the objective/constraint functions.

1.7 Classification of constrained optimization problems

objective function | constraint functions                                     | problem class
-------------------|----------------------------------------------------------|------------------------
linear             | linear                                                   | linear programming
quadratic          | linear                                                   | quadratic programming
nonlinear          | nonlinear                                                | nonlinear programming
convex             | linear equality constraints, concave inequality constraints | convex programming (local minimum ⟹ global minimum)

1.8 Structure of an iterative optimization process

1.8.1 ... without constraints

Taylor series of a function f about the iteration point x^(κ):

    f(x) = f(x^(κ)) + ∇f(x^(κ))^T (x - x^(κ))
           + 1/2 (x - x^(κ))^T ∇²f(x^(κ)) (x - x^(κ)) + ...               (14)
         = f^(κ) + g^(κ)T (x - x^(κ)) + 1/2 (x - x^(κ))^T H^(κ) (x - x^(κ)) + ...   (15)

f^(κ): value of the function f at point x^(κ)
g^(κ): gradient (first derivative, direction of steepest ascent) at point x^(κ)
H^(κ): Hessian matrix (second derivative) at point x^(κ)

Taylor series along a search direction r starting from point x^(κ):

    x(r) = x^(κ) + r                                                      (16)
    f(x^(κ) + r) ≡ f(r) = f^(κ) + g^(κ)T r + 1/2 r^T H^(κ) r + ...        (17)

Taylor series in the step length α along a search direction r^(κ) starting from point x^(κ):

    x(α) = x^(κ) + α r^(κ)                                                (18)
    f(x^(κ) + α r^(κ)) ≡ f(α)
        = f^(κ) + g^(κ)T r^(κ) α + 1/2 r^(κ)T H^(κ) r^(κ) α² + ...        (19)
        = f^(κ) + ∇f(α = 0) α + 1/2 ∇²f(α = 0) α² + ...                   (20)

∇f(α = 0):  slope of f along direction r^(κ)
∇²f(α = 0): curvature of f along r^(κ)

repeat
    determine the search direction r^(κ)
    determine the step length α^(κ) (line search)
    x^(κ+1) = x^(κ) + α^(κ) r^(κ)
    κ := κ + 1
until termination criteria are fulfilled

Steepest-descent approach: the search direction is the direction of steepest descent, i.e., r^(κ) = -g^(κ).

Figure 2. Visual illustration of the steepest-descent approach for Rosenbrock's function f(x_1, x_2) = 100 (x_2 - x_1²)² + (1 - x_1)². A backtracking line search is applied (see Sec. 3.1.2) with initial x^(0) = [-1.0, 0.8]^T and α^(0) = 1, α := c_3 α. The search terminates when the Armijo condition is satisfied, with c_1 = 0.7, c_3 = 0.6.
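The steepest-descent loop of Sec. 1.8.1 combined with the backtracking line search of Sec. 3.1.2 fits in a few lines. The following is a minimal Python sketch (numpy assumed; the function names and the stopping tolerance are illustrative choices, not from the compendium), using Rosenbrock's function and the constants c_1 = 0.7, c_3 = 0.6 of Figure 2:

```python
import numpy as np

def f(x):  # Rosenbrock's function from Figure 2
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def grad(x):  # analytic gradient of f
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

def steepest_descent(x0, c1=0.7, c3=0.6, tol=1e-6, max_iter=20000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:            # termination criterion
            break
        r = -g                                  # steepest-descent direction
        alpha = 1.0
        # backtracking: shrink alpha until the Armijo condition (58) holds
        while f(x + alpha * r) > f(x) + alpha * c1 * (g @ r):
            alpha *= c3
        x = x + alpha * r
    return x

print(steepest_descent([-1.0, 0.8]))            # approaches x* = [1, 1]
```

Progress along the curved valley is slow, which is exactly the zigzag behaviour that Figure 2 illustrates.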

1.8.2 ... with constraints

Constraint functions and objective function are combined into an unconstrained optimization problem in each iteration step:
- Lagrange formulation
- penalty function
- Sequential Quadratic Programming (SQP)

Projection onto the active constraints, i.e., into the subspace of an unconstrained optimization problem, in each iteration step:
- active-set methods

1.8.3 Trust-region approach

model of the objective function:

    f(x) ≈ m(x^(κ) + r)                                                   (21)

    min_r m(x^(κ) + r)  s.t.  r ∈ trust region                            (22)

e.g., ||r|| < Δ

- search direction and step length are computed simultaneously
- the trust region accounts for the model accuracy


2 Optimality conditions

2.1 Optimality conditions - unconstrained optimization

Taylor series of the objective function around the optimum point x*:

    f(x) = f(x*) + ∇f(x*)^T (x - x*) + 1/2 (x - x*)^T ∇²f(x*) (x - x*) + ...   (23)

f* = f(x*):   value of the function at the optimum x*
g* = ∇f(x*):  gradient at the optimum x*
H* = ∇²f(x*): Hessian matrix at the optimum x*

For x = x* + r close to the optimum:

    f(r) = f* + g*^T r + 1/2 r^T H* r + ...                               (24)

x* is optimal ⟺ there is no descent direction r such that f(r) < f*.

Figure 3. Descent directions from x^(κ) (shaded area): directions r between the contour f(x) = f(x^(κ)), which separates f(x) > f(x^(κ)) from f(x) < f(x^(κ)), on the side of the steepest-descent direction -∇f(x^(κ)).

2.1.1 Necessary first-order condition for a local minimum of an unconstrained optimization problem

x*: stationary point

descent direction r:        ∇f(x^(κ))^T r < 0
steepest-descent direction: r = -∇f(x^(κ))

At a local minimum no descent direction exists:

    ∀_{r ≠ 0}  g*^T r ≥ 0                                                 (25)
    ⟹  g* = ∇f(x*) = 0                                                   (26)

Figure 4. Quadratic functions: (a) minimum at x*, (b) maximum at x*, (c) saddle point at x*, (d) positive semidefinite with multiple minima along a trench.

2.1.2 Necessary second-order condition for a local minimum of an unconstrained optimization problem

    ∀_{r ≠ 0}  r^T ∇²f(x*) r ≥ 0
    ⟺ ∇²f(x*) is positive semidefinite
    ⟺ f has non-negative curvature                                        (27)

sufficient:

    ∀_{r ≠ 0}  r^T ∇²f(x*) r > 0
    ⟺ ∇²f(x*) is positive definite
    ⟺ f has positive curvature                                            (28)

Figure 5. Contour plots of quadratic functions that are (a), (b) positive or negative definite, (c) indefinite (saddle point), (d) positive or negative semidefinite.

2.1.3 Sufficient and necessary conditions for the second-order derivative ∇²f(x*) to be positive definite

- all eigenvalues are > 0
- a Cholesky decomposition exists:
    ∇²f(x*) = L L^T with l_ii > 0, or
    ∇²f(x*) = L D L^T with l_ii = 1 and d_ii > 0                          (29)
- all pivot elements during Gaussian elimination without pivoting are > 0
- all principal minors are > 0

Figure 6. Dark shaded area: unconstrained directions according to (35); light shaded area: descent directions according to (34); overlap: unconstrained descent directions. When no direction satisfies both (34) and (35), the intersection is empty and the current point is a local minimum of the constrained problem.

2.2 Optimality conditions - constrained optimization

2.2.1 Feasible descent direction r

descent direction:

    ∇f(x^(κ))^T r < 0                                                     (30)

feasible direction:

    c_i(x^(κ) + r) ≈ c_i(x^(κ)) + ∇c_i(x^(κ))^T r ≥ 0                     (31)

Inactive constraint: i is inactive ⟺ c_i(x^(κ)) > 0. Then each r with ||r|| < ε satisfies (31), e.g.,

    r = - (c_i(x^(κ)) / (||∇c_i(x^(κ))|| ||∇f(x^(κ))||)) ∇f(x^(κ))        (32)

Substituting (32) in (31) gives

    c_i(x^(κ)) [ 1 - ∇c_i(x^(κ))^T ∇f(x^(κ)) / (||∇c_i(x^(κ))|| ||∇f(x^(κ))||) ] ≥ 0   (33)

which holds, because the normalized scalar product satisfies

    -1 ≤ ∇c_i(x^(κ))^T ∇f(x^(κ)) / (||∇c_i(x^(κ))|| ||∇f(x^(κ))||) ≤ 1

Active constraint (Fig. 6): i is active ⟺ c_i(x^(κ)) = 0. Then (30) and (31) become:

    ∇f(x^(κ))^T r < 0                                                     (34)
    ∇c_i(x^(κ))^T r ≥ 0                                                   (35)

No feasible descent direction exists, i.e., no vector r satisfies both (34) and (35), at x*:

    ∇f(x*) = λ_i* ∇c_i(x*)  with  λ_i* ≥ 0                                (36)

There is no statement about the sign of λ_i* in the case of an equality constraint (c_i = 0 ⟺ c_i ≥ 0 ∧ -c_i ≥ 0).

2.2.2 Necessary first-order conditions for a local minimum of a constrained optimization problem

x*, f* = f(x*), λ*, L* = L(x*, λ*)

Karush-Kuhn-Tucker (KKT) conditions:

    ∇L(x*) = 0                                                            (37)
    c_i(x*) = 0,   i ∈ E                                                  (38)
    c_i(x*) ≥ 0,   i ∈ I                                                  (39)
    λ_i* ≥ 0,      i ∈ I                                                  (40)
    λ_i* c_i(x*) = 0,  i ∈ E ∪ I                                          (41)

(37) is analogous to (26); (13) and (37) give:

    ∇f(x*) - Σ_{i ∈ A(x*)} λ_i* ∇c_i(x*) = 0                              (42)

A(x*) is the set of active constraints at x*:

    A(x*) = E ∪ { i ∈ I | c_i(x*) = 0 }                                   (43)

(41) is called the complementarity condition: either the Lagrange multiplier is 0 (inactive constraint) or the constraint value c_i(x*) is 0 (active constraint).

From (41) and (13):

    L* = f*                                                               (44)

2.2.3 Necessary second-order condition for a local minimum of a constrained optimization problem

    f(x* + r) = L(x* + r, λ*)                                             (45)
              = L(x*, λ*) + r^T ∇L(x*) + 1/2 r^T ∇²L(x*) r + ...          (46)
                           (∇L(x*) = 0)
              = f* + 1/2 r^T [ ∇²f(x*) - Σ_{i ∈ A(x*)} λ_i* ∇²c_i(x*) ] r + ...   (47)

for each feasible stationary direction r at x*, i.e.,

    F_r = { r | r ≠ 0,
                ∇c_i(x*)^T r ≥ 0 for i ∈ A(x*) \ A+,
                ∇c_i(x*)^T r = 0 for i ∈ A+ }
    A+ = { j ∈ A(x*) | j ∈ E ∨ λ_j* > 0 }                                 (48)

necessary:

    ∀_{r ∈ F_r}  r^T ∇²L(x*) r ≥ 0                                        (49)

sufficient:

    ∀_{r ∈ F_r}  r^T ∇²L(x*) r > 0                                        (50)

2.2.4 Sensitivity of the optimum with regard to a change in an active constraint

perturbation of an active constraint at x* by Δ_i:

    c_i(x) ≥ 0  →  c_i(x) ≥ Δ_i                                           (51)

    L(x, λ, Δ) = f(x) - Σ_i λ_i (c_i(x) - Δ_i)                            (52)

    ∂f*/∂Δ_i = ∂L*/∂Δ_i
             = ∇L(x)^T ∂x/∂Δ_i + ∇L(λ)^T ∂λ/∂Δ_i + ∂L/∂Δ_i |_{x*,λ*}
               (the first two terms vanish, since ∇L(x*) = 0, ∇L(λ*) = 0)
             = λ_i*                                                       (53)

Lagrange multiplier: sensitivity of the optimal objective value to a change in an active constraint close to x*.

3 Unconstrained optimization

3.1 Univariate unconstrained optimization, line search

single optimization parameter, e.g., the step length α:

    f(α) ≡ f(x^(κ) + α r)
         = f(x^(κ)) + ∇f(x^(κ))^T r α + 1/2 r^T ∇²f(x^(κ)) r α² + ...     (54)
           (∇f(x^(κ))^T r = ∇f(α=0),  r^T ∇²f(x^(κ)) r = ∇²f(α=0))

error vector:

    ε^(κ) = x^(κ) - x*                                                    (55)

global convergence:

    lim_{κ→∞} ||ε^(κ)|| = 0                                               (56)

convergence rate:

    ||ε^(κ+1)|| = L ||ε^(κ)||^p                                           (57)

p = 1: linear convergence (L < 1); p = 2: quadratic convergence.

An exact line search is expensive; therefore only find an α that
- obtains a sufficient reduction in f, and
- is a big enough step length.

3.1.1 Wolfe-Powell conditions

Figure 7. Ranges of the step length α where the Armijo condition (58), the curvature condition (59), and the strong curvature condition (60) are satisfied, relative to the initial slope ∇f(α = 0) and the reduced slopes c_1 ∇f(α = 0), c_2 ∇f(α = 0), around α_opt.

Armijo condition, sufficient objective reduction:

    f(α) ≤ f(0) + α c_1 ∇f(x^(κ))^T r     (∇f(x^(κ))^T r = ∇f(α=0))       (58)

curvature condition, sufficient gradient increase:

    ∇f(x^(κ) + α r)^T r ≥ c_2 ∇f(x^(κ))^T r                               (59)

strong curvature condition, step close to the optimum:

    |∇f(α)| ≤ c_2 |∇f(α = 0)|                                             (60)

    0 < c_1 < c_2 < 1                                                     (61)

3.1.2 Backtracking line search

0 < c_3 < 1

determine a step length α_t   /* big enough, not too big */
WHILE α_t violates (58)
    α_t := c_3 α_t

3.1.3 Bracketing

finding an interval [α_lo, α_hi] that contains the minimum

/* take α_lo as the previous α_hi and find a new, larger α_hi until the minimum has been passed; if necessary exchange α_lo, α_hi such that f(α_lo) < f(α_hi) */

α_hi := 0
REPEAT
    α_lo := α_hi
    determine α_hi ∈ [α_lo, α_max]
    IF α_hi violates (58) OR f(α_hi) ≥ f(α_lo)
        /* minimum has been significantly passed, because the Armijo condition is violated or because the new objective value is larger than that at the lower border of the bracketing interval (Fig. 8) */
        GOTO sectioning
    IF α_hi satisfies (60)
        /* strong curvature condition satisfied and objective value smaller than at α_lo, i.e., step length found (Fig. 9) */
        α = α_hi; STOP
    IF ∇f(α_hi) ≥ 0
        /* α_hi has a lower objective value than α_lo and has passed the minimum, as the objective gradient has switched sign (Fig. 10) */
        exchange α_hi and α_lo
        GOTO sectioning

Figure 8. Bracketing case 1: the Armijo condition is violated or f(α_hi) ≥ f(α_lo); the minimum lies between α_lo and α_hi.

Figure 9. Bracketing case 2: α_hi satisfies the strong curvature condition (60) and is accepted as the step length.

Figure 10. Bracketing case 3: ∇f(α_hi) ≥ 0, the minimum has been passed; α_lo and α_hi are exchanged.

3.1.4 Sectioning

sectioning starts with an interval [α_lo, α_hi] with the following properties:
- the minimum is included
- f(α_lo) < f(α_hi)
- α_lo may be greater or less than α_hi
- ∇f(α_lo) (α_hi - α_lo) < 0

REPEAT
    determine α_t ∈ [α_lo, α_hi]
    IF α_t violates (58) OR f(α_t) ≥ f(α_lo)
        /* new α_hi found, α_lo remains (Fig. 11) */
        α_hi := α_t
    ELSE   /* i.e., f(α_t) < f(α_lo), new α_lo found */
        IF α_t satisfies (60)
            /* step length found */
            α = α_t; STOP
        IF ∇f(α_t) (α_hi - α_lo) ≥ 0
            /* α_t is on the same side of the minimum as α_hi; then α_lo must become the new α_hi (Fig. 12) */
            α_hi := α_lo
        α_lo := α_t   /* Fig. 13 */

Figure 11. Sectioning: α_t becomes the new α_hi; α_lo remains.

Figure 12. Sectioning: α_t lies on the same side of the minimum as α_hi; α_lo becomes the new α_hi, α_t the new α_lo.

Figure 13. Sectioning: α_t becomes the new α_lo; α_hi remains.

3.1.5 Golden Sectioning

golden section ratio:

    (1 - τ)/τ = τ  ⟺  τ² + τ - 1 = 0  :  τ = (-1 + √5)/2 ≈ 0.618          (62)

α_1 := L + (1 - τ)(R - L)   /* left inner point */
α_2 := L + τ (R - L)        /* right inner point */
REPEAT
    IF f(α_1) < f(α_2)
        /* α_opt ∈ [L, α_2]; α_1 becomes the new right inner point */
        R := α_2
        α_2 := α_1
        α_1 := L + (1 - τ)(R - L)
    ELSE
        /* α_opt ∈ [α_1, R]; α_2 becomes the new left inner point */
        L := α_1
        α_1 := α_2
        α_2 := L + τ (R - L)
UNTIL Δ < Δ_min

Figure 14. Interval reduction by golden sectioning: the interval [L, R] with inner points α_1, α_2 at fractions (1-τ), τ shrinks by the factor τ in each step, and one inner point is reused in the next step.

Accuracy after the κ-th step, interval size:

    Δ^(κ) = τ^κ Δ^(0)                                                     (63)

required number of iterations κ to reduce the interval from Δ^(0) to Δ^(κ):

    κ = (1 / log τ) log (Δ^(κ) / Δ^(0)) ≈ 4.78 log (Δ^(0) / Δ^(κ))        (64)
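A compact implementation of the golden sectioning loop above; a minimal sketch assuming a unimodal f on [L, R] (the helper name and the example function are illustrative). Note that only one new function evaluation per iteration is needed, since one inner point is reused:

```python
import math

def golden_section(f, L, R, delta_min=1e-8):
    """Golden sectioning on [L, R]; f is assumed unimodal on the interval."""
    tau = (math.sqrt(5.0) - 1.0) / 2.0          # ~0.618, from (62)
    a1 = L + (1.0 - tau) * (R - L)              # left inner point
    a2 = L + tau * (R - L)                      # right inner point
    f1, f2 = f(a1), f(a2)
    while R - L > delta_min:
        if f1 < f2:                             # minimum in [L, a2]
            R, a2, f2 = a2, a1, f1
            a1 = L + (1.0 - tau) * (R - L)
            f1 = f(a1)
        else:                                   # minimum in [a1, R]
            L, a1, f1 = a1, a2, f2
            a2 = L + tau * (R - L)
            f2 = f(a2)
    return 0.5 * (L + R)

# example: minimize (alpha - 1.3)^2 on [0, 4]
print(golden_section(lambda a: (a - 1.3)**2, 0.0, 4.0))   # ~1.3
```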

3.1.6 Line search by quadratic model

Figure 15. Quadratic model m(α) through the sampling points L^(κ), E^(κ), R^(κ) with values f(L^(κ)), f(E^(κ)), f(R^(κ)); its minimizer α_t with model value m(α_t) ≈ f(α_t) approximates the minimum of f.

quadratic model from, e.g.,
- 3 sampling points, or
- first- and second-order derivatives (univariate Newton approach):

    m(α) = f_0 + g α + 1/2 h α²                                           (65)

first-order condition for min m(α):

    ∇m(α) = g + h α_t = 0  ⟹  α_t = -g/h                                  (66)

second-order condition:

    h > 0                                                                 (67)

3.1.7 Unimodal function

f is unimodal over the interval [L, R]: there exists exactly one value α_opt ∈ [L, R] for which, for all L < α_1 < α_2 < R, holds:

    α_2 < α_opt  ⟹  f(α_1) > f(α_2)   (Fig. 16 (a))                       (68)
    α_1 > α_opt  ⟹  f(α_1) < f(α_2)   (Fig. 16 (b))                       (69)

Figure 16. Unimodal function over [L, R] with minimum at α_opt: (a) both sampling points α_1, α_2 left of α_opt, (b) both sampling points right of α_opt.

Figure 17. Univariate unimodal function (a). Univariate monotone function (b).

Remarks:
- minimization of a univariate unimodal function: interval reduction with f(L) > f(α_t) < f(R), α_t ∈ [L, R]; three sampling points
- root finding for a univariate monotone function: interval reduction with sign(f(L)) ≠ sign(f(R)); two sampling points (bisection):

    Δ^(κ) = (1/2^κ) Δ^(0)                                                 (70)

    κ = (1 / log 2) log (Δ^(0) / Δ^(κ)) ≈ 3.32 log (Δ^(0) / Δ^(κ))        (71)

3.2 Multivariate unconstrained optimization without derivatives

3.2.1 Coordinate search

optimize one parameter at a time, alternating over the coordinates:

    e_j = [0 ... 0 1 0 ... 0]^T    (1 in the j-th position)

REPEAT
    FOR j = 1, ..., n_x
        α_t := argmin_α f(x + α e_j)
        x := x + α_t e_j
UNTIL convergence or maximum allowed iterations reached

good for loosely coupled, uncorrelated parameters, e.g., ∇²f(x) close to a diagonal matrix

Figure 18. Coordinate search with exact line search for a quadratic objective function with diagonal Hessian matrix: the first run of the REPEAT loop already reaches the minimum.
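A short sketch of coordinate search in Python; scipy's minimize_scalar is assumed to be available as the univariate line search (all names and the example objective here are illustrative, not from the compendium):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def coordinate_search(f, x0, n_sweeps=50, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_sweeps):                     # REPEAT loop
        x_old = x.copy()
        for j in range(x.size):                   # FOR j = 1, ..., n_x
            e = np.zeros_like(x)
            e[j] = 1.0
            res = minimize_scalar(lambda a: f(x + a * e))  # line search
            x = x + res.x * e
        if np.linalg.norm(x - x_old) < tol:       # convergence test
            break
    return x

# quadratic objective with diagonal Hessian (cf. Figure 18)
f = lambda x: 2.0 * x[0]**2 + 0.5 * x[1]**2
print(coordinate_search(f, [3.0, -2.0]))          # ~[0, 0] after one sweep
```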

Figure 19. Coordinate search with exact line search for a general quadratic objective function (a): the first and second runs of the REPEAT loop progress in zigzag steps. Steepest-descent method for the same quadratic objective function (b).

3.2.2 Polytope method (Nelder-Mead simplex method)

Figure 20. A two-dimensional simplex with vertices x_1, x_2, x_3 on the contours f = const of the objective function.

An n_x-simplex is the convex hull of a set of n_x + 1 parameter vector vertices x_i, i = 1, ..., n_x + 1. The vertices are ordered such that f_i = f(x_i), f_1 ≤ f_2 ≤ ... ≤ f_{n_x+1}.

Cases of an iteration step, based on the objective value f_r at the reflection point x_r:
- f_r < f_1: expansion; if f_e < f_r then x_{n_x+1} := x_e, else x_{n_x+1} := x_r
- f_1 ≤ f_r < f_{n_x}: insert x_r, delete x_{n_x+1}, reorder
- f_{n_x} ≤ f_r < f_{n_x+1}: outer contraction; if f_c ≤ f_r then x_{n_x+1} := x_c, else reduction
- f_{n_x+1} ≤ f_r: inner contraction; if f_cc < f_{n_x+1} then x_{n_x+1} := x_cc, else reduction
- reduction: keep x_1 and replace the remaining vertices by v_2, ..., v_{n_x+1}

The individual operations are defined below; a code sketch follows after their definitions.

Reflection (Fig. 21):

    x_0 = (1/n_x) Σ_{i=1}^{n_x} x_i                                       (72)
    x_r = x_0 + ρ (x_0 - x_{n_x+1})                                       (73)

ρ: reflection coefficient, ρ > 0 (default: ρ = 1)

Figure 21. Reflection of the worst vertex x_3 through the centroid x_0 to x_r.

Expansion (Fig. 22):

    x_e = x_0 + χ (x_r - x_0)                                             (74)

χ: expansion coefficient, χ > 1, χ > ρ (default: χ = 2)

Figure 22. Expansion beyond the reflection point x_r to x_e.

Outer contraction (Fig. 23):

    x_c = x_0 + γ (x_r - x_0)                                             (75)

γ: contraction coefficient, 0 < γ < 1 (default: γ = 1/2)

Figure 23. Outer contraction between the centroid x_0 and the reflection point x_r.

Inner contraction (Fig. 24):

    x_cc = x_0 - γ (x_0 - x_{n_x+1})                                      (76)

Figure 24. Inner contraction between the centroid x_0 and the worst vertex.

Reduction (shrink, Fig. 25):

    v_i = x_1 + σ (x_i - x_1),  i = 2, ..., n_x + 1                       (77)

σ: reduction coefficient, 0 < σ < 1 (default: σ = 1/2)

Figure 25. Reduction of the simplex towards the best vertex x_1.
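A full polytope implementation needs the case bookkeeping above; for experiments it may be easier to call an existing implementation. The sketch below assumes scipy is available, whose 'Nelder-Mead' method implements this polytope scheme (with the same default coefficients ρ = 1, χ = 2, γ = 1/2, σ = 1/2); the test function and options are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock's function from Figure 2; no derivatives are needed
f = lambda x: 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

res = minimize(f, x0=np.array([-1.0, 0.8]), method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8})
print(res.x)   # ~[1, 1]
```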

3.3 Multivariate unconstrained optimization with derivatives

3.3.1 Steepest descent

(Sec. 1.8.1) search direction: negative of the gradient at the current point in the parameter space (i.e., a linear model of the objective function)

3.3.2 Newton approach

quadratic model of the objective function:

    f(x^(κ) + r) ≈ m(x^(κ) + r) = f^(κ) + g^(κ)T r + 1/2 r^T H^(κ) r      (78)

    g^(κ) = ∇f(x^(κ)),  H^(κ) = ∇²f(x^(κ))                                (79)

minimize m(r); first-order optimality condition:

    ∇m(r) = 0:  H^(κ) r = -g^(κ)  →  r^(κ)                                (80)

r^(κ): search direction for the line search; r^(κ) is obtained by solving a system of linear equations.

second-order optimality condition: ∇²m(r) = H^(κ) positive (semi)definite; if not:
- steepest descent: r^(κ) = -g^(κ)
- switch signs of negative eigenvalues
- Levenberg-Marquardt approach: (H^(κ) + λ I) r = -g^(κ)   (λ → ∞: steepest-descent approach)

3.3.3 Quasi-Newton approach

The second derivative is not available; build a successive approximation B ≈ H from gradients in the course of the optimization process, e.g., starting from B^(0) = I.

    ∇f(x^(κ+1)) = ∇f(x^(κ) + α^(κ) r^(κ)) = g^(κ+1)                       (81)

approximate g^(κ+1) from the quadratic model (78); Quasi-Newton condition for the approximation B:

    g^(κ+1) ≈ g^(κ) + H^(κ) α^(κ) r^(κ)                                   (82)

with y^(κ) = g^(κ+1) - g^(κ) and s^(κ) = α^(κ) r^(κ) = x^(κ+1) - x^(κ):   (83)

    y^(κ) = B^(κ+1) s^(κ)                                                 (84)

3.3.3.1 Symmetric rank-1 update (SR1)

approach:

    B^(κ+1) = B^(κ) + u v^T                                               (85)

substituting (85) in (84):

    (B^(κ) + u v^T) s^(κ) = y^(κ)
    ⟹ u v^T s^(κ) = y^(κ) - B^(κ) s^(κ)
    ⟹ u = (y^(κ) - B^(κ) s^(κ)) / (v^T s^(κ))                             (86)

substituting (86) in (85):

    B^(κ+1) = B^(κ) + (y^(κ) - B^(κ) s^(κ)) v^T / (v^T s^(κ))             (87)

because of the symmetry of B:

    B^(κ+1) = B^(κ) + (y^(κ) - B^(κ) s^(κ)) (y^(κ) - B^(κ) s^(κ))^T
                      / ((y^(κ) - B^(κ) s^(κ))^T s^(κ))                   (88)

If y^(κ) - B^(κ) s^(κ) = 0, then B^(κ+1) = B^(κ).

- SR1 does not guarantee positive definiteness of B
- alternatives: Davidon-Fletcher-Powell (DFP), Broyden-Fletcher-Goldfarb-Shanno (BFGS)
- approximation of B^{-1} instead of B, or of a decomposition of the system matrix of the linear equation system (80), to compute the search direction r^(κ)
- if s^(κ), κ = 1, ..., n_x, are linearly independent and f is quadratic with Hessian H, Quasi-Newton with SR1 terminates after at most n_x + 1 steps with B^(κ+1) = H
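A minimal sketch of the SR1 update (88), assuming numpy; the safeguard against a vanishing denominator and the example Hessian are illustrative choices, not from the compendium:

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """One SR1 update (88); skips the update when the denominator is
    near zero, in which case B^(k+1) = B^(k)."""
    w = y - B @ s                        # y - B s
    denom = w @ s
    if abs(denom) < eps * np.linalg.norm(w) * np.linalg.norm(s):
        return B                         # keep previous approximation
    return B + np.outer(w, w) / denom

# on a quadratic f with Hessian H, two updates with linearly
# independent steps reproduce H, as stated above
H = np.array([[3.0, 1.0], [1.0, 2.0]])
g = lambda x: H @ x                      # gradient of f = 1/2 x^T H x
B = np.eye(2)
x = np.array([0.0, 0.0])
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    y = g(x + s) - g(x)
    B = sr1_update(B, s, y)
    x = x + s
print(B)                                 # ~H
```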

3.3.4 Levenberg-Marquardt approach (Newton direction plus trust region)

    min m(r)  s.t.  r^T r ≤ Δ²     (r^T r = ||r||²)                       (89)

constraint: trust region of the model m(r), e.g., concerning positive definiteness of B^(κ), H^(κ)

corresponding Lagrange function:

    L(r, λ) = f^(κ) + g^(κ)T r + 1/2 r^T H^(κ) r - λ (Δ² - r^T r)         (90)

stationary point ∇L(r) = 0:

    (H^(κ) + 2λ I) r = -g^(κ)                                             (91)

Compare the Newton approach with indefinite H^(κ) (Sec. 3.3.2).

3.3.5 Least-squares (plus trust-region) approach

    min_r ||f(r) - f_target||²  s.t.  ||r||² ≤ Δ²                         (92)

linear model (Δ: trust region of the linear model):

    f(r) ≈ f^(κ)(r) = f(x^(κ)) + ∇f(x^(κ))^T r = f_0^(κ) + S^(κ) r        (93)

least-squares difference of the linearized objective value to the target value (index (κ) left out):

    ||ε(r)||² = ||f(r) - f_target||²                                      (94)
              = ε^T(r) ε(r) = (ε_0 + S r)^T (ε_0 + S r),  ε_0 = f_0 - f_target   (95)
              = ε_0^T ε_0 + 2 r^T S^T ε_0 + r^T S^T S r                   (96)

S^T S: part of the second derivative of ||ε(r)||²

    min_r ||ε(r)||²  s.t.  ||r||² ≤ Δ²                                    (97)

corresponding Lagrange function:

    L(r, λ) = ε_0^T ε_0 + 2 r^T S^T ε_0 + r^T S^T S r - λ (Δ² - r^T r)    (98)

stationary point ∇L(r) = 0:

    2 S^T ε_0 + 2 S^T S r + 2 λ r = 0                                     (99)
    (S^T S + λ I) r = -S^T ε_0                                            (100)

λ = 0: Gauss-Newton method

(100) yields r^(κ) for a given λ; determine the Pareto front ||ε^(κ)(r^(κ))|| vs. ||r^(κ)|| for 0 ≤ λ < ∞ (Figs. 26, 27); select r^(κ) with small step length and small error (iterative process): x^(κ+1) = x^(κ) + r^(κ)

Figure 26. Characteristic boundary curve ||ε^(κ)(r^(κ))|| versus ||r^(κ)||, parameterized by λ from λ → ∞ (small step length) to λ = 0 (small error, ||r^(κ)|| = Δ); typical sharp bend for ill-conditioned problems. A small step length is wanted, e.g., due to the limited trust region of the linear model.

Figure 27. Characteristic boundary curve in a two-dimensional parameter space: endpoints of r^(κ)(λ) between the current point x^(κ) (λ → ∞) and the step with ||r|| = Δ (λ = 0), on the contours ||ε^(κ)(r)|| = const.
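A sketch of the step computation (100), assuming numpy; sweeping λ traces the characteristic boundary curve of Figure 26 (the matrix and residual values below are made up for illustration):

```python
import numpy as np

def lm_step(S, eps0, lam):
    """Solve (S^T S + lambda I) r = -S^T eps0, eq. (100);
    lam = 0 gives the Gauss-Newton step."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), -S.T @ eps0)

S = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])
eps0 = np.array([1.0, -0.5, 0.2])
for lam in (0.0, 0.1, 1.0, 10.0):
    r = lm_step(S, eps0, lam)
    # larger lam -> shorter step ||r||, larger residual ||eps0 + S r||
    print(lam, np.linalg.norm(r), np.linalg.norm(eps0 + S @ r))
```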

3.3.6 Conjugate-gradient (CG) approach

3.3.6.1 Optimization problem with a quadratic objective function

    f_q(x) = b^T x + 1/2 x^T H x                                          (101)

    min f_q(x),   n_x large, H sparse                                     (102)

The first-order optimality condition leads to a linear equation system for the stationary point x*:

    g(x) = ∇f_q(x) = 0
    ⟹ (101): g(x) = b + H x = 0                                           (103)
    ⟹ H x = -b  →  x*                                                     (104)

x* is computed by solving the linear equation system, not by matrix inversion.

second-order condition:

    ∇²f_q(x*) = H positive definite                                       (105)

3.3.6.2 Eigenvalue decomposition of a symmetric positive definite matrix H

    H = U D^{1/2} D^{1/2} U^T                                             (106)

D^{1/2}: diagonal matrix of the square roots of the eigenvalues
U: columns are orthonormal eigenvectors, i.e.,

    U^{-1} = U^T,  U U^T = I                                              (107)

substituting (106) in (101):

    f_q(x) = b^T x + 1/2 (D^{1/2} U^T x)^T (D^{1/2} U^T x)                (108)

coordinate transformation:

    x' = D^{1/2} U^T x,   x = U D^{-1/2} x'                               (109)

substituting (109) in (108):

    f_q(x') = b'^T x' + 1/2 x'^T x'                                       (110)
    b' = D^{-1/2} U^T b                                                   (111)

The Hessian of the transformed quadratic function (110) is the unity matrix, H' = I.

gradient of f_q':

    g'(x') = ∇f_q(x') = b' + x'   (from (110))                            (112)

transformation of the gradient, from (103), (106), (109), and (111):

    ∇f(x) = b + H x = U D^{1/2} b' + U D^{1/2} x'                         (113)

from (112):

    g = ∇f(x) = U D^{1/2} ∇f_q(x') = U D^{1/2} g'                         (114)

    min f_q(x) ≡ min f_q(x')                                              (115)

solution in transformed coordinates (g' = 0):

    x'* = -b'                                                             (116)

3.3.6.3 Conjugate directions

    ∀_{i ≠ j}  r^(i)T H r^(j) = 0    ("H-orthogonal")                     (117)

from (106):

    ∀_{i ≠ j} (D^{1/2} U^T r^(i))^T (D^{1/2} U^T r^(j)) = 0
    ⟺ ∀_{i ≠ j} r'^(i)T r'^(j) = 0                                        (118)

i.e., transformed conjugate directions are orthogonal:

    r'^(i) = D^{1/2} U^T r^(i)                                            (119)

3.3.6.4 Step length

substitute x = x^(κ) + α r^(κ) in (101):

    f_q(α) = b^T (x^(κ) + α r^(κ))
             + 1/2 (x^(κ) + α r^(κ))^T H (x^(κ) + α r^(κ))                (120)

∇f_q(α) = 0:

    ∇f_q(α) = b^T r^(κ) + (x^(κ) + α r^(κ))^T H r^(κ)                     (121)
            = g^(κ)T r^(κ) + α r^(κ)T H r^(κ)    (using (103))            (122)

    ⟹ α^(κ) = - g^(κ)T r^(κ) / (r^(κ)T H r^(κ))                           (123)

3.3.6.5 New iterative solution

    x^(κ+1) = x^(κ) + α^(κ) r^(κ) = x^(0) + Σ_{i=0}^{κ} α^(i) r^(i)       (124)

with α^(κ) from (123), mutually conjugate r^(i) from (117), and

    r^(0) = -g^(0)   (steepest descent, (103))                            (125)

new search direction: combination of the current gradient and the previous search direction:

    r^(κ+1) = -g^(κ+1) + β^(κ+1) r^(κ)                                    (126)

substituting (126) into (117):

    -g^(κ+1)T H r^(κ) + β^(κ+1) r^(κ)T H r^(κ) = 0                        (127)

    ⟹ β^(κ+1) = g^(κ+1)T H r^(κ) / (r^(κ)T H r^(κ))                       (128)

Figure 28. CG in original coordinates: from x^(0) along r^(0) = -g^(0) to x^(1), then along r^(1) with r^(0)T H r^(1) = 0 to x*.

Figure 29. CG in transformed coordinates: r'^(0) and r'^(1) with r'^(0)T r'^(1) = 0 reach x'* in two steps.

3.3.6.6 Some properties

    g^(κ+1) = b + H x^(κ+1) = b + H x^(κ) + α^(κ) H r^(κ)
            = g^(κ) + α^(κ) H r^(κ)                                       (129)

    g^(κ+1)T r^(κ)
      = g^(κ)T r^(κ) + α^(κ) r^(κ)T H r^(κ)                 (by (129))
      = g^(κ)T r^(κ) - g^(κ)T r^(κ)                         (by (123))
      = 0                                                                 (130)

i.e., the current gradient is orthogonal to the previous search direction;

    g^(κ)T r^(κ)
      = g^(κ)T (-g^(κ) + β^(κ) r^(κ-1))                     (by (126))
      = -g^(κ)T g^(κ) + β^(κ) g^(κ)T r^(κ-1)                (second term 0 by (130))
      = -g^(κ)T g^(κ)                                                     (131)

i.e., each conjugate direction is a descent direction;

    g^(κ+1)T g^(κ)
      = g^(κ)T g^(κ) + α^(κ) r^(κ)T H g^(κ)                 (by (129))
      = -g^(κ)T r^(κ) + α^(κ) r^(κ)T H (-r^(κ) + β^(κ) r^(κ-1))   (by (131), (126))
      = -g^(κ)T r^(κ) - α^(κ) r^(κ)T H r^(κ)                (by (117))
      = -g^(κ)T r^(κ) + g^(κ)T r^(κ)                        (by (123))
      = 0                                                                 (132)

i.e., the current gradient is orthogonal to the previous gradient;

substituting (126) into (130):

    g^(κ+1)T (-g^(κ) + β^(κ) r^(κ-1)) = 0
    ⟹ (132): g^(κ+1)T r^(κ-1) = 0                                         (133)

i.e., the current gradient is orthogonal to all previous search directions;

substituting (129) into (132):

    g^(κ+1)T (g^(κ-1) + α^(κ-1) H r^(κ-1)) = 0
    ⟹ (126): g^(κ+1)T g^(κ-1) + α^(κ-1) (-r^(κ+1) + β^(κ+1) r^(κ))^T H r^(κ-1) = 0
    ⟹ (117): g^(κ+1)T g^(κ-1) = 0                                         (134)

i.e., the current gradient is orthogonal to all previous gradients.

3.3.6.7 Simplified computation of the CG step length

substituting (131) into (123):

    α^(κ) = g^(κ)T g^(κ) / (r^(κ)T H r^(κ))                               (135)

3.3.6.8 Simplified computation of the CG search direction

substituting H r^(κ) from (129) into (128):

    β^(κ+1) = g^(κ+1)T (g^(κ+1) - g^(κ)) / (α^(κ) r^(κ)T H r^(κ))         (136)

from (135) and (132):

    β^(κ+1) = g^(κ+1)T g^(κ+1) / (g^(κ)T g^(κ))                           (137)

3.3.6.9 CG algorithm

solves the optimization problem (102), or the linear equation system (103), in at most n_x steps:

g^(0) = b + H x^(0)
r^(0) = -g^(0)
κ = 0
WHILE r^(κ) ≠ 0
    step length: α^(κ) = g^(κ)T g^(κ) / (r^(κ)T H r^(κ))   /* line search in nonlinear programming */
    new point: x^(κ+1) = x^(κ) + α^(κ) r^(κ)
    new gradient: g^(κ+1) = g^(κ) + α^(κ) H r^(κ)   /* gradient computation in nonlinear programming */
    Fletcher-Reeves: β^(κ+1) = g^(κ+1)T g^(κ+1) / (g^(κ)T g^(κ))
    new search direction: r^(κ+1) = -g^(κ+1) + β^(κ+1) r^(κ)
    κ := κ + 1
END WHILE

Polak-Ribière alternative:

    β^(κ+1) = g^(κ+1)T (g^(κ+1) - g^(κ)) / (g^(κ)T g^(κ))                 (138)
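The CG algorithm translates directly into code; a minimal sketch assuming numpy, with the Fletcher-Reeves update (137) and a small tolerance in place of the exact test r^(κ) ≠ 0:

```python
import numpy as np

def cg(H, b, x0=None, tol=1e-10):
    """Linear CG for min b^T x + 1/2 x^T H x, i.e., H x = -b (103);
    H symmetric positive definite."""
    n = b.size
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    g = b + H @ x                         # gradient (103)
    r = -g                                # first direction: steepest descent
    for _ in range(n):                    # at most n_x steps in exact arithmetic
        if np.linalg.norm(r) < tol:
            break
        Hr = H @ r
        alpha = (g @ g) / (r @ Hr)        # step length (135)
        x = x + alpha * r
        g_new = g + alpha * Hr            # gradient update (129)
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves (137)
        r = -g_new + beta * r             # new conjugate direction (126)
        g = g_new
    return x

H = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
x = cg(H, b)
print(x, H @ x + b)                       # residual ~0
```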


4 Constrained optimization problem formulations

4.1 Quadratic Programming (QP)

4.1.1 QP - linear equality constraints

    min f_q(x)  s.t.  A x = c
    f_q(x) = b^T x + 1/2 x^T H x
    H: symmetric, positive definite                                       (139)

A ∈ R^{n_c × n_x}, n_c ≤ n_x, rank(A) = n_c:

    A x = c                                                               (140)

is an underdetermined linear equation system with full rank (Appendix E).

4.1.1.1 Transformation of (139) into an unconstrained optimization problem by coordinate transformation

approach:

    x = [ Y  Z ] [ c ]
                 [ y ]
      = Y c + Z y                                                         (141)

with Y ∈ R^{n_x × n_c}, Z ∈ R^{n_x × (n_x - n_c)}, [Y Z] non-singular; Y c is a feasible point of (140), Z y spans the degrees of freedom in (140):

    A x = A Y c + A Z y = c                                               (142)
    A Y = I   (n_c × n_c)                                                 (143)
    A Z = 0   (n_c × (n_x - n_c))                                         (144)

    φ(y) = f_q(x(y))                                                      (145)
         = b^T (Y c + Z y) + 1/2 (Y c + Z y)^T H (Y c + Z y)              (146)
         = (b + 1/2 H Y c)^T Y c + (b + H Y c)^T Z y + 1/2 y^T Z^T H Z y  (147)

    min f_q(x) s.t. A x = c  ≡  min φ(y)                                  (148)

y*: solution of the unconstrained optimization problem (148); Z y* lies in the kernel of A.

Stationary point of optimization problem (148), ∇φ(y) = 0:

    (Z^T H Z) y* = -Z^T (b + H Y c)  ⟹  x* = Y c + Z y*                   (149)
    (Z^T H Z: reduced Hessian; Z^T (b + H Y c): reduced gradient)

Computation of Y, Z by the QR-decomposition A^T = [Q_1 Q_2] [R; 0] from (483):

    from (143): R^T Q_1^T Y = I  ⟹  Y = Q_1 R^{-T}                        (150)
    from (144): R^T Q_1^T Z = 0  ⟹  Z = Q_2                               (151)

Figure 30. Feasible space of A x = c: Y c is the minimum-length solution, i.e., orthogonal to the plane A x = c; z_i, i = 1, ..., n_x - n_c, form an orthonormal basis of the kernel (null-space) of A, i.e., A (Z y) = 0.

4.1.1.3 Computation of the Lagrange factor λ*

required for QP with inequality constraints; Lagrange function of problem (139):

    L(x, λ) = f_q(x) - λ^T (A x - c)                                      (152)

∇L(x*) = 0:

    A^T λ* = ∇f_q(x*)   (overdetermined)                                  (153)

    ⟹ (143): λ* = Y^T ∇f_q(x*) = Y^T (b + H x*)                           (154)

or, in case of a QR-decomposition:

    A^T λ* = b + H x*                                                     (155)
    ⟹ (483): Q_1 R λ* = b + H x*                                          (156)
    R λ* = Q_1^T (b + H x*)  →  backward substitution  →  λ*              (157)

4.1.1.4 Computation of Y c in case of a QR-decomposition

from (483):

    A x = R^T Q_1^T x = c,   u := Q_1^T x
    R^T u = c  →  forward substitution  →  u
    Y c = Q_1 R^{-T} c = Q_1 u   (from (150))                             (158)

4.1.1.5 Transformation of (139) into an unconstrained optimization problem by Lagrange multipliers

first-order optimality conditions of (139) according to the Lagrange function (152):

    ∇L(x) = 0:  b + H x - A^T λ = 0
                A x - c = 0                                               (159)

    [ H  -A^T ] [ x ]   [ -b ]
    [ A    0  ] [ λ ] = [  c ]    →  x*, λ*                               (160)

The Lagrange matrix L (in its symmetric form, with -λ as variable) is symmetric and non-singular iff the reduced Hessian Z^T H Z is positive definite; then, e.g.,

    L^{-1} = [ K    T ]
             [ T^T  U ]                                                   (161)

    K = Z (Z^T H Z)^{-1} Z^T                                              (162)
    T = Y - K H Y                                                         (163)
    U = Y^T H K H Y - Y^T H Y                                             (164)

factorization of L^{-1} by factorization of Z^T H Z and computation of Y, Z
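The null-space method of 4.1.1.1-4.1.1.4 in code; a sketch assuming numpy, with the QR-decomposition of A^T providing Y and Z as in (150)-(151), the reduced system (149), and the multipliers from (157) (the function name and test data are illustrative):

```python
import numpy as np

def eq_qp(H, b, A, c):
    """Null-space method for min b^T x + 1/2 x^T H x s.t. A x = c."""
    n_c = A.shape[0]
    Q, R = np.linalg.qr(A.T, mode='complete')    # A^T = [Q1 Q2][R1; 0]
    Q1, Q2 = Q[:, :n_c], Q[:, n_c:]              # Z = Q2 spans kernel of A
    R1 = R[:n_c, :]
    Yc = Q1 @ np.linalg.solve(R1.T, c)           # feasible point, (158)
    # reduced system (149): (Z^T H Z) y = -Z^T (b + H Yc)
    y = np.linalg.solve(Q2.T @ H @ Q2, -Q2.T @ (b + H @ Yc))
    x = Yc + Q2 @ y
    lam = np.linalg.solve(R1, Q1.T @ (b + H @ x))  # multipliers, (157)
    return x, lam

H = np.array([[2.0, 0.0], [0.0, 2.0]])
b = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
c = np.array([1.0])
x, lam = eq_qp(H, b, A, c)
print(x, lam)   # minimizes ||x - [1, 2]||^2 on x1 + x2 = 1  ->  x = [0, 1]
```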

4.1.2 QP - inequality constraints

    min f_q(x)  s.t.  a_i^T x = c_i, i ∈ E
                      a_j^T x ≥ c_j, j ∈ I
    f_q(x) = b^T x + 1/2 x^T H x
    H: symmetric, positive definite                                       (165)

A ∈ R^{n_c × n_x}, n_c ≤ n_x, rank(A) = n_c, with rows

    A = [ ... a_i ... a_j ... ]^T,   i ∈ E, j ∈ I

compute an initial solution x^(0) that satisfies all constraints, e.g., by linear programming
κ := 0
REPEAT
    /* problem formulation at the current iteration point x^(κ) + δ for the active constraints A^(κ) (active-set method) */
    min_δ f_q(δ)  s.t.  a_i^T δ = 0, i ∈ A^(κ)   →   δ^(κ)

    case I: δ^(κ) = 0   /* i.e., x^(κ) optimal for the current active set A^(κ) */
        compute λ^(κ), e.g., by (154) or (157)
        case Ia: ∀_{i ∈ A^(κ) ∩ I}  λ_i^(κ) ≥ 0
            /* first-order optimality conditions satisfied */
            x* = x^(κ); STOP
        case Ib: ∃_{i ∈ A^(κ) ∩ I}  λ_i^(κ) < 0
            /* a constraint becomes inactive, further improvement of f possible */
            q = argmin_{i ∈ A^(κ) ∩ I} λ_i^(κ)
            /* q is the constraint most sensitive to becoming inactive */
            A^(κ+1) = A^(κ) \ {q}
            x^(κ+1) = x^(κ)

    case II: δ^(κ) ≠ 0
        /* f can be reduced for the current A^(κ); step-length computation: if no new constraint becomes active, then α^(κ) = 1 (case IIa); else find the inequality constraint that becomes active first and the corresponding α^(κ) < 1 (case IIb) */

        α^(κ) = min( 1,  min_{i ∉ A^(κ), a_i^T δ^(κ) < 0}  (a_i^T x^(κ) - c_i) / (-a_i^T δ^(κ)) )

        /* a_i^T x^(κ) - c_i: safety margin of inactive constraint i to its bound; -a_i^T δ^(κ): safety margin consumed at α = 1 by inactive constraints that approach their lower bound */

        q = argmin(...)
        case IIa: α^(κ) = 1
            A^(κ+1) = A^(κ)
        case IIb: α^(κ) < 1
            A^(κ+1) = A^(κ) ∪ {q}
        x^(κ+1) = x^(κ) + α^(κ) δ^(κ)
    κ := κ + 1

4.1.3 Example

Figure 31. Iterates of the active-set method for a two-dimensional QP with inequality constraints (A), (B), (C) on the contours f_q = const: x^(1) = x^(2), steps δ^(2), x^(4) = x^(3), δ^(4), x^(5), δ^(5), x^(6).

(κ) | A^(κ)  | case
(1) | {A, B} | Ib
(2) | {B}    | IIa
(3) | {B}    | Ib
(4) | { }    | IIb
(5) | {C}    | IIa
(6) | {C}    | Ia

Concerning q = argmin_{i ∈ A^(1) ∩ I} λ_i^(κ) for κ = 1 (Fig. 32):

    ∇f_q = λ_A ∇c_A + λ_B ∇c_B,   λ_A < 0, λ_B = 0

Figure 32. Gradients at x^(1): constraint A has a negative Lagrange multiplier and is removed from the active set.

4.2 Sequential Quadratic Programming (SQP), Lagrange-Newton

4.2.1 SQP - equality constraints

    min f(x)  s.t.  c(x) = 0,   c ∈ R^{n_c}, n_c ≤ n_x                    (166)

4.2.1.1 Lagrange function of (166)

    L(x, λ) = f(x) - λ^T c(x) = f(x) - Σ_i λ_i c_i(x)                     (167)

    ∇L(x, λ) = [ ∇L(x)  ]   [ ∇f(x) - A^T(x) λ ]
               [ ∇L(λ)  ] = [ -c(x)            ]                          (168)

with A(x) = [ ... ∇c_i(x) ... ]^T (rows ∇c_i(x)^T);

    ∇²L(x, λ) = [ ∇²L(x)      ∇²_{xλ}L ]
                [ ∇²_{λx}L    ∇²L(λ)   ]
              = [ ∇²f(x) - Σ_i λ_i ∇²c_i(x)   -A^T(x) ]
                [ -A(x)                        0      ]                   (169)

4.2.1.2 Newton approach to (167) in the κ-th iteration step

    ∇L(x^(κ) + δ^(κ), λ^(κ) + Δλ^(κ))
      ≈ ∇L(x^(κ), λ^(κ)) + ∇²L(x^(κ), λ^(κ)) [ δ^(κ); Δλ^(κ) ] = 0        (170)

with x^(κ+1) = x^(κ) + δ^(κ), λ^(κ+1) = λ^(κ) + Δλ^(κ); with (168) and (169):

    [ W^(κ)   -A^(κ)T ] [ δ^(κ)   ]     [ g^(κ) - A^(κ)T λ^(κ) ]
    [ -A^(κ)   0      ] [ Δλ^(κ)  ] = - [ -c^(κ)               ]          (171)

    W^(κ) = ∇²f(x^(κ)) - Σ_i λ_i^(κ) ∇²c_i(x^(κ))

    [ W^(κ)   -A^(κ)T ] [ δ^(κ)    ]   [ -g^(κ) ]
    [ A^(κ)    0      ] [ λ^(κ+1)  ] = [ -c^(κ) ]                         (172)

This corresponds to the QP problem with Lagrange function

    L(δ, λ) = g^(κ)T δ + 1/2 δ^T W^(κ) δ - λ^T (A^(κ) δ + c^(κ))          (173)

    min_δ  g^(κ)T δ + 1/2 δ^T W^(κ) δ  s.t.  A^(κ) δ = -c^(κ)             (174)

with W^(κ) according to (171) including the quadratic parts of objective and constraints, and the equality constraints from (166) linearized.

- Quasi-Newton approaches are possible
- inequality constraints, A^(κ) δ ≥ -c^(κ), i ∈ I, are treated as in QP

4.2.2 Penalty function

transformation into a penalty function of an unconstrained optimization problem with penalty parameter μ > 0:

quadratic:

    P_quad(x, μ) = f(x) + (1/(2μ)) c^T(x) c(x)                            (175)

logarithmic:

    P_log(x, μ) = f(x) - μ Σ_i log c_i(x)                                 (176)

l_1 exact:

    P_l1(x, μ) = μ f(x) + Σ_{i ∈ E} |c_i(x)| + Σ_{i ∈ I} max(-c_i(x), 0)  (177)
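A sketch of the quadratic penalty approach (175), assuming scipy for the inner unconstrained minimization; the example problem and the μ schedule are illustrative choices, not from the compendium:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] + x[1]                            # objective
c = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])    # equality constraint

def quad_penalty(x0, mus=(1.0, 0.1, 0.01, 0.001)):
    """Solve min f s.t. c = 0 by a sequence of unconstrained problems
    (175) with decreasing penalty parameter mu."""
    x = np.asarray(x0, dtype=float)
    for mu in mus:                                   # mu -> 0: constraint enforced
        P = lambda x: f(x) + 0.5 / mu * (c(x) @ c(x))
        x = minimize(P, x).x                         # warm start from previous x
    return x

print(quad_penalty([0.5, 0.5]))                      # ~[-1, -1]
```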


5 Statistical parameter tolerances

modeling of manufacturing variations through a multivariate continuous distribution function of the statistical parameters x_s

cumulative distribution function (cdf):

    cdf(x_s) = ∫_{-∞}^{x_s,1} ... ∫_{-∞}^{x_s,n_xs} pdf(t) dt,
    dt = dt_1 dt_2 ... dt_{n_xs}                                          (178)

(discrete: cumulative relative frequencies)

probability density function (pdf):

    pdf(x_s) = ∂^{n_xs} cdf(x_s) / (∂x_s,1 ... ∂x_s,n_xs)                 (179)

(discrete: relative frequencies)

x_s,i denotes the random number value of a random variable X_s,i.

5.1 Univariate Gaussian distribution (normal distribution)

    x_s ~ N(x_s,0, σ²)                                                    (180)

x_s,0: mean value;  σ²: variance;  σ: standard deviation

probability density function of the univariate normal distribution:

    pdf_N(x_s, x_s,0, σ²) = (1/(√(2π) σ)) e^{-(1/2)((x_s - x_s,0)/σ)²}    (181)

Figure 33. Probability density function pdf_N and corresponding cdf_N of a univariate Gaussian distribution over x_s - x_s,0 from -3σ to 3σ. The area of the shaded region under the pdf is the value of the cdf as shown.

x_s - x_s,0:       -3σ    -2σ    -σ      0     σ      2σ     3σ     4σ
cdf(x_s - x_s,0):   0.1%   2.2%   15.8%  50%   84.1%  97.7%  99.8%  99.99%

(standard normal distribution table)
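The table values can be reproduced with the error function from the Python standard library; a small sketch (the helper name is illustrative):

```python
import math

def cdf_normal(x, sigma=1.0):
    """cdf of N(0, sigma^2), via the error function."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

for k in range(-3, 5):                         # -3 sigma ... 4 sigma
    print(f"{k:+d} sigma: {100.0 * cdf_normal(k):.2f} %")
```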

5.2 Multivariate normal distribution

    x_s ~ N(x_s,0, C)                                                     (182)

x_s,0: vector of mean values of the statistical parameters x_s
C: covariance matrix of the statistical parameters x_s; symmetric, positive definite

probability density function of the multivariate normal distribution:

    pdf_N(x_s, x_s,0, C) = (1 / (√(2π)^{n_xs} √(det C))) e^{-(1/2) β²(x_s, x_s,0, C)}   (183)

    β²(x_s, x_s,0, C) = (x_s - x_s,0)^T C^{-1} (x_s - x_s,0)              (184)

    C = Σ R Σ                                                             (185)

    Σ = [ σ_1        0    ]        R = [ 1        ρ_1,2   ...  ρ_1,n_xs ]
        [     ...         ]            [ ρ_1,2    1       ...  ρ_2,n_xs ]
        [ 0        σ_n_xs ]            [ ...                            ]
                                       [ ρ_1,n_xs ρ_2,n_xs ...  1       ]   (186)

    C = [ σ_1²             σ_1 ρ_1,2 σ_2    ...  σ_1 ρ_1,n_xs σ_n_xs ]
        [ σ_1 ρ_1,2 σ_2    σ_2²             ...  σ_2 ρ_2,n_xs σ_n_xs ]
        [ ...                                                        ]
        [ σ_1 ρ_1,n_xs σ_n_xs      ...           σ_n_xs²             ]    (187)

R: correlation matrix of the statistical parameters
σ_k: standard deviation of component x_s,k, σ_k > 0
σ_k²: variance of component x_s,k
σ_k ρ_k,l σ_l: covariance of components x_s,k, x_s,l
ρ_k,l: correlation coefficient of components x_s,k and x_s,l, -1 < ρ_k,l < 1
ρ_k,l = 0: uncorrelated components (also independent if jointly normal)
ρ_k,l → ±1: strongly correlated components
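Samples of (182) can be drawn by transforming standard normal samples with a Cholesky factor of C, which is one instance of the distribution transformation of Sec. 5.3; a numpy sketch with made-up example numbers:

```python
import numpy as np

x_s0 = np.array([1.0, 2.0])
C = np.array([[0.04, 0.018],
              [0.018, 0.09]])             # sigma_1=0.2, sigma_2=0.3, rho=0.3
L = np.linalg.cholesky(C)                 # C = L L^T
rng = np.random.default_rng(0)
z = rng.standard_normal((10000, 2))       # z ~ N(0, I)
samples = x_s0 + z @ L.T                  # x = x_s0 + L z ~ N(x_s0, C)
print(samples.mean(axis=0))               # ~x_s0
print(np.cov(samples.T))                  # ~C
```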

Figure 34. Level sets β²(x_s) = const of a two-dimensional normal pdf around the mean x_s,0.

Figure 35. Level sets of a two-dimensional normal pdf with general covariance matrix C (a), with uncorrelated components, R = I (b), and with uncorrelated components of equal spread, C = σ² I (c).

Figure 36. Level set with β² = a² of a two-dimensional normal pdf for different values of the correlation coefficient ρ; the ellipses touch the box with half-axes a σ_1, a σ_2, and for ρ = 0 the ellipse axes align with the coordinate axes.

5.3 Transformation of statistical distributions

y ∈ R^{n_y}, z ∈ R^{n_z}, n_y = n_z, z = z(y), y = y(z), such that the mapping between y and z is smooth and bijective (precisely z = φ(y), y = φ^{-1}(z)):

    cdf_y(y) = ∫_{-∞}^{y} pdf_y(y') dy'
             = ∫_{-∞}^{z(y)} pdf_y(y'(z')) |det(∂y/∂z^T)| dz'
             = ∫_{-∞}^{z(y)} pdf_z(z') dz' = cdf_z(z)                     (188)

    ∫_{-∞}^{y} pdf_y(y') dy' = ∫_{-∞}^{z} pdf_z(z') dz'                   (189)

    pdf_z(z) = pdf_y(y(z)) |det(∂y/∂z^T)|                                 (190)

univariate case:

    pdf_z(z) = pdf_y(y(z)) |∂y/∂z|                                        (191)

In the simple univariate case, the function pdf_z has a domain that is a scaled version of the domain of pdf_y; ∂y/∂z determines the scaling factor. In the higher-dimensional case, the random variable space is scaled and rotated, with the Jacobian matrix ∂y/∂z^T determining the scaling and rotation.

Figure 37. A univariate pdf of a random number y is transformed to a new pdf of the new random number z = z(y). According to (188), the shaded areas as well as the hatched areas under the two curves are equal.

5.3.1 Example

Given: probability density function pdf_U(z), here a uniform distribution:

    pdf_U(z) = 1 for 0 < z < 1, 0 otherwise                               (192)

probability density function pdf_y(y), y ∈ R; random number z

Find: random number y

from (188):

    ∫_{0}^{z} pdf_z(z') dz' = ∫_{-∞}^{y} pdf_y(y') dy'
    (pdf_z(z') = 1 for 0 ≤ z' ≤ 1)                                        (193)

from (192):

    z = ∫_{-∞}^{y} pdf_y(y') dy' = cdf_y(y)                               (194)

hence

    y = cdf_y^{-1}(z)                                                     (195)

This example details a method to generate sample values of a random variable y with an arbitrary pdf_y if sample values are available from a uniform distribution pdf_z:
- insert pdf_y(y) in (194)
- compute cdf_y by integration
- compute the inverse cdf_y^{-1}
- create a uniform random number z and insert it into (195) to get a sample value y distributed according to pdf_y(y)
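A sketch of this recipe for a case where cdf_y^{-1} exists in closed form; the exponential distribution is an illustrative choice, not from the compendium:

```python
import math, random

# inverse-transform sampling per (195) for pdf_y(y) = lam * exp(-lam * y),
# y >= 0:  cdf_y(y) = 1 - exp(-lam * y)  =>  y = -ln(1 - z) / lam
lam = 2.0
random.seed(0)
samples = [-math.log(1.0 - random.random()) / lam for _ in range(100000)]
print(sum(samples) / len(samples))   # sample mean ~1/lam = 0.5
```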


6 Expectation values and their estimators

6.1 Expectation values

6.1.1 Definitions

h(z): function of a random number z with probability density function pdf(z)

Expectation value:

    E{h(z)} = E_{pdf(z)}{h(z)} = ∫_{-∞}^{+∞} h(z) pdf(z) dz               (196)

Moment of order κ:

    m^(κ) = E{z^κ}                                                        (197)

Mean value (first-order moment):

    m^(1) = m = E{z}                                                      (198)

    m = E{z} = [ E{z_1}, ..., E{z_n_z} ]^T                                (199)

Central moment of order κ:

    c^(κ) = E{(z - m)^κ},   c^(1) = 0                                     (200)

Variance (second-order central moment):

    c^(2) = E{(z - m)²} = σ² = V{z}                                       (201)

σ: standard deviation

Covariance:

    cov{z_i, z_j} = E{(z_i - m_i)(z_j - m_j)}                             (202)

Variance/covariance matrix:

    C = V{z} = E{(z - m)(z - m)^T}
      = [ V{z_1}          cov{z_1, z_2}  ...  cov{z_1, z_n_z} ]
        [ cov{z_2, z_1}   V{z_2}         ...  cov{z_2, z_n_z} ]
        [ ...                                                 ]
        [ cov{z_n_z, z_1} cov{z_n_z, z_2} ... V{z_n_z}        ]           (203)

    V{h(z)} = E{(h(z) - E{h(z)}) (h(z) - E{h(z)})^T}                      (204)
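In practice these expectation values are estimated from finite samples (Sec. 6.2); a numpy sketch of the standard sample estimators, with made-up example distribution parameters:

```python
import numpy as np

# Monte Carlo estimators of (196), (199), (203) from N samples z ~ N(m, C)
rng = np.random.default_rng(1)
m = np.array([0.0, 1.0])
C = np.array([[1.0, 0.4], [0.4, 2.0]])
z = rng.multivariate_normal(m, C, size=50000)

h = lambda z: z[:, 0] ** 2 + z[:, 1]       # example function of z
print(h(z).mean())                          # estimator of E{h(z)}, here ~2.0
print(z.mean(axis=0))                       # estimator of m
print(np.cov(z, rowvar=False))              # unbiased estimator of C
```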