Simulation und Optimierung analoger Schaltungen: Optimization Methods for Circuit Design
Technische Universität München
Department of Electrical Engineering and Information Technology
Institute for Electronic Design Automation

Simulation und Optimierung analoger Schaltungen
Optimization Methods for Circuit Design

Compendium

H. Graeb
Version (WS 08/09 - SS 10): Michael Eick
Version (SS 07 - SS 08): Husni Habal

Presentation follows:
H. Graeb, Analog Design Centering and Sizing, Springer.
R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, 2nd Edition.

Status: February 1, 2010

Copyright: H. Graeb
Technische Universität München
Institute for Electronic Design Automation
Arcisstr., Munich, Germany
graeb@tum.de
All rights reserved.
Contents

1 Introduction
  1.1 Parameters, performance, simulation
  1.2 Performance specification
  1.3 Minimum, minimization
  1.4 Unconstrained optimization
  1.5 Constrained optimization
  1.6 Classification of optimization problems
  1.7 Classification of constrained optimization problems
  1.8 Structure of an iterative optimization process
    1.8.1 ... without constraints
    1.8.2 ... with constraints
    1.8.3 Trust-region approach

2 Optimality conditions
  2.1 Optimality conditions: unconstrained optimization
    2.1.1 Necessary first-order condition for a local minimum of an unconstrained optimization problem
    2.1.2 Necessary second-order condition for a local minimum of an unconstrained optimization problem
    2.1.3 Sufficient and necessary conditions for the second-order derivative ∇²f(x*) to be positive definite
  2.2 Optimality conditions: constrained optimization
    2.2.1 Constrained descent direction r
    2.2.2 Necessary first-order conditions for a local minimum of a constrained optimization problem
    2.2.3 Necessary second-order condition for a local minimum of a constrained optimization problem
    2.2.4 Sensitivity of the optimum with regard to a change in an active constraint

3 Worst-case analysis
  3.1 Task
  3.2 Typical tolerance regions
  3.3 Classical worst-case analysis
  3.4 Realistic worst-case analysis
  3.5 General worst-case analysis
  3.6 Summary of discussed worst-case analysis problems

4 Statistical parameter tolerances
  4.1 Univariate Gaussian distribution (normal distribution)
  4.2 Multivariate normal distribution
  4.3 Transformation of statistical distributions

5 Expectation values and their estimators
  5.1 Expectation values
  5.2 Estimation of expectation values

6 Yield analysis
  6.1 Task
  6.2 Statistical yield analysis / Monte-Carlo analysis
  6.3 Geometric yield analysis for a linearized performance feature ("realistic geometric yield analysis")
  6.4 Geometric yield analysis for a nonlinear performance feature ("general geometric yield analysis")
  6.5 Overall yield
  6.6 Consideration of range parameters

7 Yield optimization / design centering / nominal design
  7.1 Optimization objectives
  7.2 Derivatives of optimization objectives
  7.3 Problem formulations of analog optimization

8 Unconstrained optimization
  8.1 Univariate unconstrained optimization, line search
    8.1.1 Wolfe-Powell conditions
    8.1.2 Backtracking line search
    8.1.3 Bracketing
    8.1.4 Sectioning
    8.1.5 Golden Sectioning
    8.1.6 Line search by quadratic model
    8.1.7 Unimodal function
  8.2 Multivariate unconstrained optimization without derivatives
    8.2.1 Coordinate search
    8.2.2 Polytope method (Nelder-Mead simplex method)
  8.3 Multivariate unconstrained optimization with derivatives
    8.3.1 Steepest descent
    8.3.2 Newton approach
    8.3.3 Quasi-Newton approach
    8.3.4 Levenberg-Marquardt approach (Newton direction plus trust region)
    8.3.5 Least-squares (plus trust-region) approach
    8.3.6 Conjugate-gradient (CG) approach

9 Constrained optimization: problem formulations
  9.1 Quadratic Programming (QP)
    9.1.1 QP: linear equality constraints
    9.1.2 QP: inequality constraints
    9.1.3 Example
  9.2 Sequential Quadratic Programming (SQP), Lagrange-Newton
    9.2.1 SQP: equality constraints
    9.2.2 Penalty function

10 Sizing rules for analog circuit optimization
  10.1 Single (NMOS) transistor
  10.2 Sizing rules for a single transistor that acts as a voltage-controlled current source (VCCS)
  10.3 Transistor pair: current mirror (NMOS)
  10.4 Sizing rules for the current mirror

11 Optimization of analog circuits: tasks
  11.1 Analysis, synthesis
  11.2 Sizing
  11.3 Nominal design, tolerance design
  11.4 Optimization without/with constraints

A Matrix and vector notations
  A.1 Vector
  A.2 Matrix
  A.3 Addition
  A.4 Multiplication
  A.5 Special cases
  A.6 Determinant of a quadratic matrix
  A.7 Inverse of a quadratic non-singular matrix
  A.8 Some properties

B Abbreviated notations of derivatives using the nabla symbol

C Norms

D Pseudo-inverse, singular value decomposition (SVD)
  D.1 Moore-Penrose conditions
  D.2 Singular value decomposition

E Linear equation system, rectangular system matrix with full rank
  E.1 Underdetermined system of equations
  E.2 Overdetermined system of equations
  E.3 Determined system of equations

F Partial derivatives of linear, quadratic terms in matrix/vector notation

G Probability space

H Convexity
  H.1 Convex set K ⊆ R^n
  H.2 Convex function
1 Introduction

1.1 Parameters, performance, simulation

- design parameters $x_d \in \mathbb{R}^{n_{xd}}$, e.g. transistor widths, capacitances
- statistical parameters $x_s \in \mathbb{R}^{n_{xs}}$, e.g. oxide thickness, threshold voltage
- range parameters $x_r \in \mathbb{R}^{n_{xr}}$, e.g. operational parameters: supply voltage, temperature
- (circuit) parameters $x = [\,x_d^T\ x_s^T\ x_r^T\,]^T$
- performance feature $f_i$, e.g. gain, bandwidth, slew rate, phase margin, delay, power
- (circuit) performance $f = [\cdots f_i \cdots]^T \in \mathbb{R}^{n_f}$
- (circuit) simulation $x \mapsto f(x)$, e.g. SPICE

A design parameter and a statistical parameter may refer to the same physical parameter. E.g., an actual CMOS transistor width is the sum of a design parameter $W_k$ and a statistical parameter $\Delta W$: $W_k$ is the specific width of transistor $T_k$, while $\Delta W$ is a width reduction that varies globally and equally for all transistors on a die. A design parameter and a statistical parameter may also be identical.

1.2 Performance specification

performance specification feature (upper or lower limit on a performance):

$$ f_i \ge f_{L,i} \quad \text{or} \quad f_i \le f_{U,i} \quad (1) $$

number of performance specification features:

$$ n_f \le n_{PSF} \le 2\, n_f \quad (2) $$

performance specification:

$$ f_{L,1} \le f_1(x) \le f_{U,1},\ \ \ldots,\ \ f_{L,n_f} \le f_{n_f}(x) \le f_{U,n_f} \quad\Longleftrightarrow\quad f_L \le f(x) \le f_U \quad (3) $$
Figure 1. Smooth function (a), i.e. continuous and differentiable at least several times on a closed region of the domain, with a strong local minimum, a weak local minimum, and the global minimum. Non-smooth continuous function (b).

1.3 Minimum, minimization

Without loss of generality, an optimum is a minimum, because

$$ \max f \equiv -\min(-f) \quad (4) $$

$$ \text{"min"} = \begin{cases} \text{minimum, i.e., a result} \\ \text{minimize, i.e., a process} \end{cases} \quad (5) $$

$$ \min f(x) \ \equiv\ f(x) \to \min \quad (6) $$

$$ \min f(x) \ \to\ x^*,\ \ f(x^*) = f^* \quad (7) $$

1.4 Unconstrained optimization

$$ f^* = \min f(x) \equiv \min_x f \equiv \min_x f(x) \equiv \min\{f(x)\} $$
$$ x^* = \operatorname{argmin} f(x) \equiv \operatorname{argmin}_x f \equiv \operatorname{argmin}_x f(x) \equiv \operatorname{argmin}\{f(x)\} \quad (8) $$
1.5 Constrained optimization

E: set of equality constraints; I: set of inequality constraints

$$ \min f(x) \ \ \text{s.t.}\ \ c_i(x) = 0,\ i \in E;\qquad c_i(x) \ge 0,\ i \in I \quad (9) $$

Alternative formulations:

$$ \min_x f \ \ \text{s.t.}\ \ x \in \Omega \quad (10) $$
$$ \min_{x \in \Omega} f \quad (11) $$
$$ \min\,\{ f(x) \mid x \in \Omega \} \quad (12) $$

where

$$ \Omega = \{\, x \mid c_i(x) = 0,\ i \in E;\ \ c_i(x) \ge 0,\ i \in I \,\} $$

The Lagrange function combines the objective function and the constraints in a single expression:

$$ \mathcal{L}(x, \lambda) = f(x) - \sum_{i \in E \cup I} \lambda_i\, c_i(x) \quad (13) $$

$\lambda_i$: Lagrange multiplier associated with constraint $i$
1.6 Classification of optimization problems

- deterministic, stochastic
- continuous, discrete
- local, global
- scalar, vector
- constrained, unconstrained
- with or without derivatives

The iterative search process is deterministic or random. Optimization variables can take an infinite number of values, e.g., the set of real numbers, or a finite set of values or states. The objective value at a local optimal point is better than the objective values of all other points in its vicinity. The objective value at a global optimal point is better than the objective value of any other point. In a vector optimization problem, multiple objective functions shall be optimized simultaneously (multiple-criteria optimization, MCO). Usually, objectives have to be traded off against each other. A Pareto-optimal point is characterized in that one objective can only be improved at the cost of another. Pareto optimization determines the set of all Pareto-optimal points. Scalar optimization refers to a single objective. A vector optimization problem is scalarized by combining the multiple objectives into a single overall objective, e.g., by a weighted sum, least-squares, or min/max. Besides the objective function that has to be optimized, constraints on the optimization variables may be given as inequalities or equalities. The optimization process may be based on gradients (first derivative), on gradients and Hessians (second derivative), or it may not require any derivatives of the objective/constraint functions.

1.7 Classification of constrained optimization problems

| objective function | constraint functions                                        | problem class                                        |
|--------------------|-------------------------------------------------------------|------------------------------------------------------|
| linear             | linear                                                      | linear programming                                   |
| quadratic          | linear                                                      | quadratic programming                                |
| nonlinear          | nonlinear                                                   | nonlinear programming                                |
| convex             | linear equality constraints, concave inequality constraints | convex programming (local minimum = global minimum)  |
1.8 Structure of an iterative optimization process

1.8.1 ... without constraints

Taylor series of a function f about the iteration point $x^{(\kappa)}$:

$$ f(x) = f(x^{(\kappa)}) + \nabla f(x^{(\kappa)})^T (x - x^{(\kappa)}) + \tfrac{1}{2}\, (x - x^{(\kappa)})^T\, \nabla^2 f(x^{(\kappa)})\, (x - x^{(\kappa)}) + \ldots \quad (14) $$
$$ \phantom{f(x)} = f^{(\kappa)} + g^{(\kappa)T} (x - x^{(\kappa)}) + \tfrac{1}{2}\, (x - x^{(\kappa)})^T\, H^{(\kappa)}\, (x - x^{(\kappa)}) + \ldots \quad (15) $$

$f^{(\kappa)}$: value of f at point $x^{(\kappa)}$; $g^{(\kappa)}$: gradient (first derivative, direction of steepest ascent) at $x^{(\kappa)}$; $H^{(\kappa)}$: Hessian matrix (second derivative) at $x^{(\kappa)}$

Taylor series along a search direction r starting from point $x^{(\kappa)}$:

$$ x(r) = x^{(\kappa)} + r \quad (16) $$
$$ f(r) = f^{(\kappa)} + g^{(\kappa)T} r + \tfrac{1}{2}\, r^T H^{(\kappa)} r + \ldots \quad (17) $$

Taylor series in the step length α along a search direction $r^{(\kappa)}$ starting from point $x^{(\kappa)}$:

$$ x(\alpha) = x^{(\kappa)} + \alpha\, r^{(\kappa)} \quad (18) $$
$$ f(\alpha) = f^{(\kappa)} + g^{(\kappa)T} r^{(\kappa)}\, \alpha + \tfrac{1}{2}\, r^{(\kappa)T} H^{(\kappa)} r^{(\kappa)}\, \alpha^2 + \ldots \quad (19) $$
$$ \phantom{f(\alpha)} = f^{(\kappa)} + \nabla f(\alpha = 0)\, \alpha + \tfrac{1}{2}\, \nabla^2 f(\alpha = 0)\, \alpha^2 + \ldots \quad (20) $$

$\nabla f(\alpha = 0)$: slope of f along direction $r^{(\kappa)}$; $\nabla^2 f(\alpha = 0)$: curvature of f along $r^{(\kappa)}$

repeat
  determine the search direction $r^{(\kappa)}$
  determine the step length $\alpha^{(\kappa)}$ (line search)
  $x^{(\kappa+1)} = x^{(\kappa)} + \alpha^{(\kappa)}\, r^{(\kappa)}$
  $\kappa := \kappa + 1$
until termination criteria are fulfilled

Steepest-descent approach: the search direction is the direction of steepest descent, i.e., $r^{(\kappa)} = -g^{(\kappa)}$.
Figure 2. Visual illustration of the steepest-descent approach for Rosenbrock's function $f(x_1, x_2) = 100\,(x_2 - x_1^2)^2 + (1 - x_1)^2$. A backtracking line search is applied (see the section on backtracking line search, page 67) with an initial $x^{(0)} = [-1.0,\ 0.8]^T$ and $\alpha^{(0)} = 1$, $\alpha := c_3 \cdot \alpha$. The search terminates when the Armijo condition is satisfied with $c_1 = 0.7$, $c_3 = 0.6$.
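The iteration loop and the steepest-descent rule above fit in a few lines of code. A minimal sketch in Python/NumPy, using the Rosenbrock function and the line-search constants from the caption of Figure 2; the gradient tolerance and iteration limit are assumptions:

```python
import numpy as np

def f(x):
    # Rosenbrock's function from Figure 2
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def grad_f(x):
    # analytic gradient g = nabla f(x)
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

def steepest_descent(x0, c1=0.7, c3=0.6, tol=1e-6, max_iter=50000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:                 # termination criterion
            break
        r = -g                                      # steepest-descent direction
        alpha = 1.0                                 # alpha^(0) = 1
        # backtracking line search: shrink alpha until the Armijo condition holds
        while f(x + alpha * r) > f(x) + c1 * alpha * (g @ r):
            alpha *= c3                             # alpha := c3 * alpha
        x = x + alpha * r                           # x^(kappa+1) = x^(kappa) + alpha*r
    return x

print(steepest_descent([-1.0, 0.8]))                # creeps slowly toward [1, 1]
```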
1.8.2 ... with constraints

Constraint functions and the objective function are combined into an unconstrained optimization problem in each iteration step:

- Lagrange formulation
- penalty function
- Sequential Quadratic Programming (SQP)

Projection onto the active constraints, i.e. into the subspace of an unconstrained optimization problem, in each iteration step:

- active-set methods

1.8.3 Trust-region approach

model of the objective function:

$$ f(x) \approx m\,(x^{(\kappa)} + r) \quad (21) $$

$$ \min_r\ m\,(x^{(\kappa)} + r) \ \ \text{s.t.}\ \ r \in \text{trust region} \quad (22) $$

e.g., $\lVert r \rVert \le \Delta$ (trust-region radius)

- search direction and step length are computed simultaneously
- the trust region accounts for the model accuracy
2 Optimality conditions

2.1 Optimality conditions: unconstrained optimization

Taylor series of the objective function around the optimum point x*:

$$ f(x) = \underbrace{f(x^*)}_{f^*} + \underbrace{\nabla f(x^*)^T}_{g^{*T}}\, (x - x^*) + \tfrac{1}{2}\, (x - x^*)^T\, \underbrace{\nabla^2 f(x^*)}_{H^*}\, (x - x^*) + \ldots \quad (23) $$

f*: value of the function at the optimum x*; g*: gradient at the optimum x*; H*: Hessian matrix at the optimum x*

For x = x* + r close to the optimum:

$$ f(r) = f^* + g^{*T} r + \tfrac{1}{2}\, r^T H^* r + \ldots \quad (24) $$

x* is optimal ⟺ there is no descent direction r such that f(r) < f*.

Figure 3. Descent directions from $x^{(\kappa)}$ (shaded area), bounded by the level set $f(x) = f(x^{(\kappa)})$; the gradient points toward increasing f, its negative is the steepest-descent direction.
2.1.1 Necessary first-order condition for a local minimum of an unconstrained optimization problem

x*: stationary point

descent direction r: $\nabla f(x^{(\kappa)})^T r < 0$; steepest-descent direction: $r = -\nabla f(x^{(\kappa)})$

$$ \forall_{r \ne 0}\ \ g^{*T} r \ge 0 \quad (25) $$

$$ g^* = \nabla f(x^*) = 0 \quad (26) $$

Figure 4. Quadratic functions: (a) minimum at x*, (b) maximum at x*, (c) saddle point at x*, (d) positive semidefinite with multiple minima along a trench.
2.1.2 Necessary second-order condition for a local minimum of an unconstrained optimization problem

$$ \forall_{r \ne 0}\ \ r^T\, \nabla^2 f(x^*)\, r \ge 0 \ \Longleftrightarrow\ \nabla^2 f(x^*)\ \text{is positive semidefinite} \ \Longleftrightarrow\ f\ \text{has non-negative curvature} \quad (27) $$

sufficient:

$$ \forall_{r \ne 0}\ \ r^T\, \nabla^2 f(x^*)\, r > 0 \ \Longleftrightarrow\ \nabla^2 f(x^*)\ \text{is positive definite} \ \Longleftrightarrow\ f\ \text{has positive curvature} \quad (28) $$

Figure 5. Contour plots of quadratic functions that are (a), (b) positive or negative definite, (c) indefinite (saddle point), (d) positive or negative semidefinite.
2.1.3 Sufficient and necessary conditions for the second-order derivative ∇²f(x*) to be positive definite

- all eigenvalues are > 0
- a Cholesky decomposition exists:
$$ \nabla^2 f(x^*) = L \cdot L^T \ \text{with}\ l_{ii} > 0, \qquad \nabla^2 f(x^*) = L \cdot D \cdot L^T \ \text{with}\ l_{ii} = 1 \ \text{and}\ d_{ii} > 0 \quad (29) $$
- all pivot elements during Gaussian elimination without pivoting are > 0
- all principal minors are > 0

Figure 6. Dark shaded area: unconstrained directions according to (35); light shaded area: descent directions according to (34); overlap: unconstrained descent directions. When no direction satisfies both (34) and (35), the intersection is empty and the current point is a local minimum of the function.
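The Cholesky criterion above doubles as a cheap numerical definiteness test, since the factorization exists exactly when the matrix is symmetric positive definite. A small sketch (NumPy assumed; the test matrix is a hypothetical example):

```python
import numpy as np

def is_positive_definite(H):
    """Test whether H > 0 using the Cholesky criterion of Sec. 2.1.3."""
    try:
        np.linalg.cholesky(H)        # succeeds with l_ii > 0 iff H is s.p.d.
        return True
    except np.linalg.LinAlgError:    # factorization fails otherwise
        return False

H = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # eigenvalues 1 and 3: positive definite
print(is_positive_definite(H))       # True
print(is_positive_definite(-H))      # False
```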
2.2 Optimality conditions: constrained optimization

2.2.1 Constrained descent direction r

descent direction:

$$ \nabla f(x^{(\kappa)})^T r < 0 \quad (30) $$

constrained direction:

$$ c_i(x^{(\kappa)} + r) \approx c_i(x^{(\kappa)}) + \nabla c_i(x^{(\kappa)})^T r \ \ge\ 0 \quad (31) $$

Inactive constraint: i is inactive ⟺ $c_i(x^{(\kappa)}) > 0$. Then each r with $\lVert r \rVert < \epsilon$ (ε small enough) satisfies (31), e.g., the scaled steepest-descent direction

$$ r = -\frac{c_i(x^{(\kappa)})}{\lVert \nabla f(x^{(\kappa)}) \rVert^2}\ \nabla f(x^{(\kappa)}) \quad (32) $$

Substituting (32) in (31) gives:

$$ c_i(x^{(\kappa)}) \left[ 1 - \frac{1}{\lVert \nabla f(x^{(\kappa)}) \rVert^2}\ \nabla c_i(x^{(\kappa)})^T\, \nabla f(x^{(\kappa)}) \right] \ \ge\ 0 \quad (33) $$

where $\left| \frac{1}{\lVert \nabla f(x^{(\kappa)}) \rVert^2}\, \nabla c_i(x^{(\kappa)})^T\, \nabla f(x^{(\kappa)}) \right| \le 1$.

Active constraint (Fig. 6): i is active ⟺ $c_i(x^{(\kappa)}) = 0$; then (30) and (31) become:

$$ \nabla f(x^{(\kappa)})^T r < 0 \quad (34) $$
$$ \nabla c_i(x^{(\kappa)})^T r \ge 0 \quad (35) $$

No constrained descent direction exists ⟺ no vector r satisfies both (34) and (35) ⟺ at x*:

$$ \nabla f(x^*) = \lambda_i^*\, \nabla c_i(x^*) \ \ \text{with}\ \ \lambda_i^* \ge 0 \quad (36) $$

There is no statement about the sign of $\lambda_i^*$ in case of an equality constraint ($c_i = 0 \Longleftrightarrow c_i \ge 0 \wedge -c_i \ge 0$).
2.2.2 Necessary first-order conditions for a local minimum of a constrained optimization problem

Notation: x*, f* = f(x*), λ*, L* = L(x*, λ*).

Karush-Kuhn-Tucker (KKT) conditions:

$$ \nabla \mathcal{L}(x^*) = 0 \quad (37) $$
$$ c_i(x^*) = 0,\ \ i \in E \quad (38) $$
$$ c_i(x^*) \ge 0,\ \ i \in I \quad (39) $$
$$ \lambda_i^* \ge 0,\ \ i \in I \quad (40) $$
$$ \lambda_i^* \cdot c_i(x^*) = 0,\ \ i \in E \cup I \quad (41) $$

(37) is analogous to (26). (13) and (37) give:

$$ \nabla f(x^*) - \sum_{i \in A(x^*)} \lambda_i^*\, \nabla c_i(x^*) = 0 \quad (42) $$

A(x*) is the set of active constraints at x*:

$$ A(x^*) = E \cup \{\, i \in I \mid c_i(x^*) = 0 \,\} \quad (43) $$

(41) is called the complementarity condition: either the Lagrange multiplier is 0 (inactive constraint) or the constraint value $c_i(x^*)$ is 0 (active constraint).

From (41) and (13):

$$ \mathcal{L}^* = f^* \quad (44) $$
2.2.3 Necessary second-order condition for a local minimum of a constrained optimization problem

$$ f(x^* + r) = \mathcal{L}(x^* + r, \lambda^*) \quad (45) $$
$$ = \underbrace{\mathcal{L}(x^*, \lambda^*)}_{f^*} + r^T \underbrace{\nabla \mathcal{L}(x^*)}_{0} + \tfrac{1}{2}\, r^T\, \nabla^2 \mathcal{L}(x^*)\, r + \ldots \quad (46) $$
$$ = f^* + \tfrac{1}{2}\, r^T \Big[ \nabla^2 f(x^*) - \sum_{i \in A(x^*)} \lambda_i^*\, \nabla^2 c_i(x^*) \Big]\, r + \ldots \quad (47) $$

for each feasible stationary direction r at x*, i.e.,

$$ F_r = \Big\{\, r \;\Big|\; r \ne 0;\ \ \nabla c_i(x^*)^T r \ge 0,\ i \in A(x^*) \setminus A^+;\ \ \nabla c_i(x^*)^T r = 0,\ i \in A^+ \,\Big\}, \qquad A^+ = \{\, j \in A(x^*) \mid j \in E \ \vee\ \lambda_j^* > 0 \,\} \quad (48) $$

necessary:

$$ \forall_{r \in F_r}\ \ r^T\, \nabla^2 \mathcal{L}(x^*)\, r \ge 0 \quad (49) $$

sufficient:

$$ \forall_{r \in F_r}\ \ r^T\, \nabla^2 \mathcal{L}(x^*)\, r > 0 \quad (50) $$
2.2.4 Sensitivity of the optimum with regard to a change in an active constraint

Perturbation of an active constraint at x* by $\Delta_i$:

$$ c_i(x) \ge 0 \ \ \longrightarrow\ \ c_i(x) \ge \Delta_i \quad (51) $$

$$ \mathcal{L}(x, \lambda, \Delta) = f(x) - \sum_i \lambda_i\, (c_i(x) - \Delta_i) \quad (52) $$

$$ \nabla f^*(\Delta_i) = \nabla \mathcal{L}^*(\Delta_i) = \Big( \underbrace{\frac{\partial \mathcal{L}}{\partial x^T}}_{0^T} \frac{\partial x}{\partial \Delta_i} + \underbrace{\frac{\partial \mathcal{L}}{\partial \lambda^T}}_{0^T} \frac{\partial \lambda}{\partial \Delta_i} + \frac{\partial \mathcal{L}}{\partial \Delta_i} \Big)\Big|_{x^*, \lambda^*} = \frac{\partial \mathcal{L}}{\partial \Delta_i}\Big|_{x^*, \lambda^*} = \lambda_i^* $$

$$ \nabla f^*(\Delta_i) = \lambda_i^* \quad (53) $$

The Lagrange multiplier is the sensitivity of the optimal objective value to a change in an active constraint close to x*.
3 Worst-case analysis

3.1 Task

The indexes d, s, r for the parameter types $x_d, x_s, x_r$ are left out, as is the index i for the performance feature $f_i$.

Given: tolerance region T of the parameters
Find: worst-case performance value $f_W$ that the circuit takes over T, and corresponding worst-case parameter vectors $x_W$

| optimization | specification         | good    | bad     | worst-case performance       |
|--------------|-----------------------|---------|---------|------------------------------|
| max f        | lower bound: f ≥ f_L  | f large | f small | $f_{WL} = f(x_{WL})$         |
| min f        | upper bound: f ≤ f_U  | f small | f large | $f_{WU} = f(x_{WU})$         |

3.2 Typical tolerance regions

box:

$$ T_B = \{\, x \mid x_L \le x \le x_U \,\} $$

ellipsoid:

$$ T_E = \{\, x \mid \beta^2(x) = (x - x_0)^T\, C^{-1}\, (x - x_0) \le \beta_W^2 \,\}, \qquad C\ \text{symmetric, positive definite} $$

Figure 7. Tolerance box $T_B$ (bounded by $x_L$ and $x_U$) and tolerance ellipsoid $T_E$ (level set $\beta = \beta_W$ around $x_0$).
3.3 Classical worst-case analysis

The indexes d, s, r for the parameter types and the index i for the performance feature are left out.

3.3.1 Task

Given: hyper-box tolerance region

$$ T_B = \{\, x \mid x_L \le x \le x_U \,\} \quad (54) $$

linear performance model

$$ f(x) = f_a + g^T (x - x_a) \quad (55) $$

Find: worst-case parameter vectors $x_{WL/U}$ and corresponding worst-case performance values $f_{WL/U} = f(x_{WL/U})$

Figure 8. Classical worst-case analysis with tolerance box and linear performance model: level lines $f = f_a$, $f = f_{WL} = f(x_{WL})$, $f = f_{WU} = f(x_{WU})$; gradient g; worst-case corners $x_{WL}$, $x_{WU}$ of $T_B$.
3.3.2 Linear performance model

sensitivity analysis:

$$ f_a = f(x_a) \quad (56) $$
$$ g = \nabla f(x_a) \quad (57) $$

forward finite-difference approximation:

$$ f_a = f(x_a) \quad (58) $$
$$ g_i = \frac{\partial f(x_a)}{\partial x_i} \approx \frac{f(x_a + \Delta x_i \cdot e_i) - f(x_a)}{\Delta x_i} \quad (59) $$
$$ e_i = [\,0\ \cdots\ 0\ \ 1\ \ 0\ \cdots\ 0\,]^T \quad \text{(1 at the } i\text{-th position)} \quad (60) $$

Figure 9. Linear performance model based on the gradient (a), and based on the forward finite-difference approximation of the gradient, $g = \frac{f(x_a + \Delta x) - f(x_a)}{\Delta x}$ (b).
3.3.3 Optimization type max f, specification type f ≥ f_L

$$ \min_x f(x) \ \equiv\ \min_x g^T x \ \ \text{s.t.}\ \ x_L \le x \le x_U \ \ \longrightarrow\ \ x_{WL},\ f_{WL} = f(x_{WL}) \quad (61) $$

This is a specific linear programming problem with an analytical solution.

corresponding Lagrange function:

$$ \mathcal{L}(x, \lambda_L, \lambda_U) = g^T x - \lambda_L^T (x - x_L) - \lambda_U^T (x_U - x) \quad (62) $$

first-order optimality conditions:

$$ \nabla \mathcal{L}(x) = 0:\ \ g - \lambda_L + \lambda_U = 0 \quad (63) $$

$$ \lambda_{L/U} \ge 0, \qquad x_U - x_{WL} \ge 0,\ \ x_{WL} - x_L \ge 0 $$
$$ \lambda_{L,j}\, (x_{WL,j} - x_{L,j}) = 0,\ \ \lambda_{U,j}\, (x_{U,j} - x_{WL,j}) = 0,\ \ j = 1, \ldots, n_x \ \ \Longleftrightarrow\ \ \lambda_L^T (x_{WL} - x_L) = 0 \ \wedge\ \lambda_U^T (x_U - x_{WL}) = 0 \quad (64) $$

(For $a \ge 0$, $b \ge 0$: $\forall_i\ a_i b_i = 0 \Rightarrow a^T b = \sum_i a_i b_i = 0$, and conversely $a^T b = 0 \Rightarrow \forall_i\ a_i b_i = 0$.)

The second-order optimality condition holds because $\nabla^2 \mathcal{L}(x) = 0$.

Either constraint $x_{L,j}$ or constraint $x_{U,j}$ is active, never both; therefore, from (63) and (64):

$$ \text{either:}\ \ g_j = \lambda_{L,j} > 0 \quad (65) $$
$$ \text{or:}\ \ g_j = -\lambda_{U,j} < 0 \quad (66) $$

component of the worst-case parameter vector $x_{WL}$:

$$ x_{WL,j} = \begin{cases} x_{L,j}, & g_j > 0 \\ x_{U,j}, & g_j < 0 \\ \text{undefined}, & g_j = 0 \end{cases} \quad (67) $$

worst-case performance value:

$$ f_{WL} = f_a + g^T (x_{WL} - x_a) = f_a + \sum_j g_j\, (x_{WL,j} - x_{a,j}) \quad (68) $$
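Because of (67) and (68), the classical worst-case analysis reduces to a sign inspection of the gradient. A sketch of both corners (NumPy assumed; the toy numbers at the end are hypothetical):

```python
import numpy as np

def classical_worst_case(f_a, g, x_a, x_L, x_U):
    """Worst-case corners of a linear performance model over a box T_B.
    Implements Eqs. (67), (68), (70), (71). Components with g_j = 0 are
    'undefined' and default to x_U here; they do not change f anyway."""
    x_WL = np.where(g > 0, x_L, x_U)      # Eq. (67)
    x_WU = np.where(g > 0, x_U, x_L)      # Eq. (70)
    f_WL = f_a + g @ (x_WL - x_a)         # Eq. (68)
    f_WU = f_a + g @ (x_WU - x_a)         # Eq. (71)
    return x_WL, f_WL, x_WU, f_WU

# toy example: 2 parameters, unit box around x_a
x_WL, f_WL, x_WU, f_WU = classical_worst_case(
    f_a=1.0, g=np.array([0.5, -0.2]), x_a=np.zeros(2),
    x_L=np.array([-1.0, -1.0]), x_U=np.array([1.0, 1.0]))
```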
3.3.4 Optimization type min f, specification type f ≤ f_U

$$ \max_x f(x) \ \equiv\ \max_x g^T x \ \ \text{s.t.}\ \ x_L \le x \le x_U \ \ \longrightarrow\ \ x_{WU},\ f_{WU} = f(x_{WU}) \quad (69) $$

component of the worst-case parameter vector $x_{WU}$:

$$ x_{WU,j} = \begin{cases} x_{L,j}, & g_j < 0 \\ x_{U,j}, & g_j > 0 \\ \text{undefined}, & g_j = 0 \end{cases} \quad (70) $$

worst-case performance value:

$$ f_{WU} = f_a + g^T (x_{WU} - x_a) = f_a + \sum_j g_j\, (x_{WU,j} - x_{a,j}) \quad (71) $$
3.4 Realistic worst-case analysis

The indexes d, s, r for the parameter types and the index i for the performance feature are left out.

3.4.1 Task

Given: ellipsoid tolerance region

$$ T_E = \{\, x \mid (x - x_0)^T\, C^{-1}\, (x - x_0) \le \beta_W^2 \,\} \quad (72) $$

linear performance model

$$ f(x) = f_a + g^T (x - x_a) \quad (73) $$
$$ \phantom{f(x)} = f_0 + g^T (x - x_0) \ \ \text{with}\ \ f_0 = f_a + g^T (x_0 - x_a) \quad (74) $$

Find: worst-case parameter vectors $x_{WL/U}$ and corresponding worst-case performance values $f_{WL/U} = f(x_{WL/U})$

Figure 10. Realistic worst-case analysis with tolerance ellipsoid $T_E$ around $x_0$ and linear performance model: level lines $f = f_{WL} = f(x_{WL})$, $f = f_a + g^T(x_0 - x_a)$, $f = f_{WU} = f(x_{WU})$; gradient g.
3.4.2 Optimization type max f, specification type f ≥ f_L

$$ \min_x f(x) \ \equiv\ \min_x g^T x \ \ \text{s.t.}\ \ (x - x_0)^T\, C^{-1}\, (x - x_0) \le \beta_W^2 \ \ \longrightarrow\ \ x_{WL},\ f_{WL} = f(x_{WL}) \quad (75) $$

This is a specific nonlinear programming problem (linear objective function, quadratic constraint function) with an analytical solution.

corresponding Lagrange function:

$$ \mathcal{L}(x, \lambda) = g^T x - \lambda\, \big( \beta_W^2 - (x - x_0)^T\, C^{-1}\, (x - x_0) \big) \quad (76) $$

first-order optimality conditions:

$$ \nabla \mathcal{L} = 0:\ \ g + 2\, \lambda_{WL}\, C^{-1} (x_{WL} - x_0) = 0 \quad (77) $$

Due to the linear function f, the solution is on the border of $T_E$, i.e., the constraint is active:

$$ (x_{WL} - x_0)^T\, C^{-1}\, (x_{WL} - x_0) = \beta_W^2 \quad (78) $$
$$ \lambda_{WL} > 0 \quad (79) $$

The second-order optimality condition holds because $\nabla^2 \mathcal{L}(x) = 2\, \lambda_{WL}\, C^{-1}$, $C^{-1}$ is positive definite, and because of (79).

(77) gives:

$$ x_{WL} - x_0 = -\frac{1}{2\, \lambda_{WL}}\, C\, g \quad (80) $$

substituting (80) into (78) gives:

$$ \frac{1}{4\, \lambda_{WL}^2}\, g^T C\, g = \beta_W^2 \quad (81) $$

Inserting $\lambda_{WL}$ from (81) into (80) to eliminate $\lambda_{WL}$ gives a worst-case parameter vector in terms of the performance gradient and tolerance-region constants:

$$ x_{WL} - x_0 = -\frac{\beta_W}{\sqrt{g^T C\, g}}\, C\, g = -\frac{\beta_W}{\sigma_f}\, C\, g \quad (82) $$

substituting (82) in (74) gives the corresponding worst-case performance value:

$$ f_{WL} = f(x_{WL}) = f_0 + g^T (x_{WL} - x_0) = f_0 - \beta_W \sqrt{g^T C\, g} = f_0 - \beta_W\, \sigma_f \quad (83) $$

(Gaussian error propagation; linear transformation of a normal distribution.)
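Equations (82), (83), (85), and (86) give the realistic worst-case analysis in closed form; a direct transcription (NumPy assumed):

```python
import numpy as np

def realistic_worst_case(f_0, g, x_0, C, beta_W):
    """Worst case of a linear performance model over the ellipsoid T_E:
    Eqs. (82)/(83) for the lower case, (85)/(86) for the upper case."""
    sigma_f = np.sqrt(g @ C @ g)                  # std. dev. of the linearized f
    x_WL = x_0 - (beta_W / sigma_f) * (C @ g)     # Eq. (82)
    x_WU = x_0 + (beta_W / sigma_f) * (C @ g)     # Eq. (85)
    f_WL = f_0 - beta_W * sigma_f                 # Eq. (83)
    f_WU = f_0 + beta_W * sigma_f                 # Eq. (86)
    return x_WL, f_WL, x_WU, f_WU
```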
3.4.3 Optimization type min f, specification type f ≤ f_U

(75) becomes

$$ \max_x f(x) \ \equiv\ \max_x g^T x \ \ \text{s.t.}\ \ (x - x_0)^T\, C^{-1}\, (x - x_0) \le \beta_W^2 \ \ \longrightarrow\ \ x_{WU},\ f_{WU} = f(x_{WU}) \quad (84) $$

(82) becomes

$$ x_{WU} - x_0 = +\frac{\beta_W}{\sqrt{g^T C\, g}}\, C\, g = +\frac{\beta_W}{\sigma_f}\, C\, g \quad (85) $$

(83) becomes

$$ f_{WU} = f(x_{WU}) = f_0 + g^T (x_{WU} - x_0) = f_0 + \beta_W \sqrt{g^T C\, g} = f_0 + \beta_W\, \sigma_f \quad (86) $$
3.5 General worst-case analysis

The indexes d, s, r for the parameter types and the index i for the performance feature are left out.

3.5.1 Task

Given: ellipsoid tolerance region

$$ T_E = \{\, x \mid (x - x_0)^T\, C^{-1}\, (x - x_0) \le \beta_W^2 \,\} \quad (87) $$

general smooth performance f(x)

Find: worst-case parameter vectors $x_{WL/U}$ and corresponding worst-case performance values $f_{WL/U} = f(x_{WL/U})$

Figure 11. General worst-case analysis with tolerance ellipsoid $T_E$ and nonlinear performance function: level lines $f = f_0 = f(x_0)$, $f = f_{WL} = f(x_{WL})$, $f = f_{WU} = f(x_{WU})$; gradients $\nabla f(x_{WL})$, $\nabla f(x_{WU})$; linearized level lines $f^{(WL)} = f_{WL}$ and $f^{(WU)} = f_{WU}$.
3.5.2 Optimization type max f, specification type f ≥ f_L

$$ \min_x f(x) \ \ \text{s.t.}\ \ (x - x_0)^T\, C^{-1}\, (x - x_0) \le \beta_W^2 \quad (88) $$

This is a specific nonlinear programming problem (nonlinear objective function, one quadratic inequality constraint) with a numerical solution, e.g., by Sequential Quadratic Programming (SQP), which yields $f(x_{WL})$.

assumption: unique solution on the border of $T_E$

Linearization of the objective function f at the worst-case point $x_{WL}$, i.e., after the solution of (88):

$$ f^{(WL)}(x) = f_{WL} + \nabla f(x_{WL})^T (x - x_{WL}) \quad (89) $$

substituting (89) in (88) gives:

$$ \min_x f^{(WL)}(x) \ \equiv\ \min_x \nabla f(x_{WL})^T x \ \ \text{s.t.}\ \ (x - x_0)^T\, C^{-1}\, (x - x_0) \le \beta_W^2 \quad (90) $$

The structure is identical to the realistic worst-case analysis (75); replace g by $\nabla f(x_{WL})$.

worst-case parameter vector:

$$ x_{WL} - x_0 = -\frac{\beta_W}{\sqrt{\nabla f(x_{WL})^T\, C\, \nabla f(x_{WL})}}\, C\, \nabla f(x_{WL}) = -\frac{\beta_W}{\sigma_{f^{(WL)}}}\, C\, \nabla f(x_{WL}) \quad (91) $$

worst-case performance value:

$$ f^{(WL)}(x_0) = f_0 = f_{WL} + \nabla f(x_{WL})^T (x_0 - x_{WL}) \ \ \Longleftrightarrow\ \ f_{WL} = f_0 + \nabla f(x_{WL})^T (x_{WL} - x_0) \quad (92) $$

substituting (91) in (92) gives the corresponding worst-case performance value:

$$ f_{WL} = f_0 - \beta_W \sqrt{\nabla f(x_{WL})^T\, C\, \nabla f(x_{WL})} = f_0 - \beta_W\, \sigma_{f^{(WL)}} \quad (93) $$
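Problem (88) can be handed to any SQP-type solver. A sketch using SciPy's SLSQP in place of a dedicated SQP implementation; the toy performance function is a hypothetical stand-in for a circuit simulation:

```python
import numpy as np
from scipy.optimize import minimize

def general_worst_case_lower(f, x_0, C, beta_W):
    """Solve problem (88): min f(x) s.t. (x-x_0)^T C^-1 (x-x_0) <= beta_W^2."""
    C_inv = np.linalg.inv(C)
    constraint = {'type': 'ineq',                 # beta_W^2 - beta^2(x) >= 0
                  'fun': lambda x: beta_W**2 - (x - x_0) @ C_inv @ (x - x_0)}
    res = minimize(f, x_0, method='SLSQP', constraints=[constraint])
    return res.x, res.fun                         # x_WL and f_WL = f(x_WL)

# example: a mildly nonlinear performance (hypothetical)
f = lambda x: x[0] + 0.1 * x[1]**2
x_WL, f_WL = general_worst_case_lower(f, x_0=np.zeros(2), C=np.eye(2), beta_W=3.0)
```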
3.5.3 Optimization type min f, specification type f ≤ f_U

(88) becomes

$$ \max_x f(x) \ \ \text{s.t.}\ \ (x - x_0)^T\, C^{-1}\, (x - x_0) \le \beta_W^2 \quad (94) $$

(89) becomes

$$ f^{(WU)}(x) = f_{WU} + \nabla f(x_{WU})^T (x - x_{WU}) \quad (95) $$

(90) becomes

$$ \max_x \nabla f(x_{WU})^T x \ \ \text{s.t.}\ \ (x - x_0)^T\, C^{-1}\, (x - x_0) \le \beta_W^2 \quad (96) $$

worst-case parameter vector:

$$ x_{WU} - x_0 = +\frac{\beta_W}{\sqrt{\nabla f(x_{WU})^T\, C\, \nabla f(x_{WU})}}\, C\, \nabla f(x_{WU}) = +\frac{\beta_W}{\sigma_{f^{(WU)}}}\, C\, \nabla f(x_{WU}) \quad (97) $$

worst-case performance value:

$$ f_{WU} = f_0 + \beta_W \sqrt{\nabla f(x_{WU})^T\, C\, \nabla f(x_{WU})} = f_0 + \beta_W\, \sigma_{f^{(WU)}} \quad (98) $$
3.6 Summary of discussed worst-case analysis problems

For each performance feature $f_i$ there exists a worst-case parameter vector $x_{WL,i}$ and/or $x_{WU,i}$, respectively, and a corresponding worst-case performance value $f_{WL,i}$ and/or $f_{WU,i}$. $x_{WL,i}$ and $x_{WU,i}$ are unique in the classical and realistic worst-case analysis. Several worst-case parameter vectors may exist in the general worst-case analysis.

| worst-case analysis type | feasible region | objective function | good for                                                                         |
|--------------------------|-----------------|--------------------|----------------------------------------------------------------------------------|
| classical                | hyper-box       | linear             | uniform distribution, unknown distribution, discrete circuits, range parameters  |
| realistic                | ellipsoid       | linear             | normal distribution                                                              |
| general                  | ellipsoid       | non-linear         | IC transistor parameters                                                         |

Worst-case analysis requires design technology (circuit, performance features) and process technology (statistical parameters, parameter distribution).
4 Statistical parameter tolerances

Modeling of manufacturing variations through a multivariate continuous distribution function of the statistical parameters $x_s$.

cumulative distribution function (cdf):

$$ \mathrm{cdf}(x_s) = \int_{-\infty}^{x_{s,1}} \cdots \int_{-\infty}^{x_{s,n_{xs}}} \mathrm{pdf}(t)\, dt, \qquad dt = dt_1\, dt_2 \cdots dt_{n_{xs}} \quad (99) $$

(discrete: cumulative relative frequencies)

probability density function (pdf):

$$ \mathrm{pdf}(x_s) = \frac{\partial^{\,n_{xs}}\, \mathrm{cdf}(x_s)}{\partial x_{s,1} \cdots \partial x_{s,n_{xs}}} \quad (100) $$

(discrete: relative frequencies)

$x_{s,i}$ denotes the random number value of a random variable $X_{s,i}$.
4.1 Univariate Gaussian distribution (normal distribution)

$$ x_s \sim N(x_{s,0}, \sigma^2) \quad (101) $$

$x_{s,0}$: mean value; $\sigma^2$: variance; $\sigma$: standard deviation

probability density function of the univariate normal distribution:

$$ \mathrm{pdf}_N(x_s, x_{s,0}, \sigma^2) = \frac{1}{\sqrt{2\pi}\, \sigma}\; e^{-\frac{1}{2} \left( \frac{x_s - x_{s,0}}{\sigma} \right)^2} \quad (102) $$

Figure 12. Probability density function, pdf, and corresponding cdf of a univariate Gaussian distribution over $x_s - x_{s,0}$ (axis ticks at $-3\sigma$ to $+3\sigma$). The area of the shaded region under the pdf is the value of the cdf as shown.

| $x_s - x_{s,0}$          | $-3\sigma$ | $-2\sigma$ | $-\sigma$ | 0   | $\sigma$ | $2\sigma$ | $3\sigma$ | $4\sigma$ |
|--------------------------|------------|------------|-----------|-----|----------|-----------|-----------|-----------|
| $\mathrm{cdf}(x_s - x_{s,0})$ | 0.1%  | 2.2%       | 15.8%     | 50% | 84.1%    | 97.7%     | 99.8%     | 99.99%    |
4.2 Multivariate normal distribution

$$ x_s \sim N(x_{s,0}, C) \quad (103) $$

$x_{s,0}$: vector of mean values of the statistical parameters $x_s$
C: covariance matrix of the statistical parameters $x_s$; symmetric, positive definite

probability density function of the multivariate normal distribution:

$$ \mathrm{pdf}_N(x_s, x_{s,0}, C) = \frac{1}{\sqrt{2\pi}^{\,n_{xs}}\, \sqrt{\det(C)}}\; e^{-\frac{1}{2} \beta^2(x_s, x_{s,0}, C)} \quad (104) $$

$$ \beta^2(x_s, x_{s,0}, C) = (x_s - x_{s,0})^T\, C^{-1}\, (x_s - x_{s,0}) \quad (105) $$

$$ C = \Sigma \cdot R \cdot \Sigma \quad (106) $$

$$ \Sigma = \begin{bmatrix} \sigma_1 & & 0 \\ & \ddots & \\ 0 & & \sigma_{n_{xs}} \end{bmatrix}, \qquad R = \begin{bmatrix} 1 & \rho_{1,2} & \cdots & \rho_{1,n_{xs}} \\ \rho_{1,2} & 1 & \cdots & \rho_{2,n_{xs}} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{1,n_{xs}} & \rho_{2,n_{xs}} & \cdots & 1 \end{bmatrix} \quad (107) $$

$$ C = \begin{bmatrix} \sigma_1^2 & \sigma_1 \rho_{1,2}\, \sigma_2 & \cdots & \sigma_1 \rho_{1,n_{xs}}\, \sigma_{n_{xs}} \\ \sigma_1 \rho_{1,2}\, \sigma_2 & \sigma_2^2 & \cdots & \sigma_2 \rho_{2,n_{xs}}\, \sigma_{n_{xs}} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_1 \rho_{1,n_{xs}}\, \sigma_{n_{xs}} & \sigma_2 \rho_{2,n_{xs}}\, \sigma_{n_{xs}} & \cdots & \sigma_{n_{xs}}^2 \end{bmatrix} \quad (108) $$

R: correlation matrix of the statistical parameters
$\sigma_k$: standard deviation of component $x_{s,k}$, $\sigma_k > 0$; $\sigma_k^2$: variance of component $x_{s,k}$
$\sigma_k \rho_{k,l} \sigma_l$: covariance of components $x_{s,k}$, $x_{s,l}$
$\rho_{k,l}$: correlation coefficient of components $x_{s,k}$ and $x_{s,l}$, $-1 < \rho_{k,l} < 1$
$\rho_{k,l} = 0$: uncorrelated, and also independent if jointly normal; $|\rho_{k,l}| \to 1$: strongly correlated components
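Equation (106) is how a covariance matrix is typically assembled from standard deviations and a correlation matrix in practice. A sketch with a sampling cross-check (NumPy assumed; all numbers are hypothetical):

```python
import numpy as np

sigma = np.array([0.02, 0.5])            # standard deviations sigma_k (hypothetical)
R = np.array([[1.0, 0.8],
              [0.8, 1.0]])               # correlation matrix, rho_12 = 0.8
Sigma = np.diag(sigma)
C = Sigma @ R @ Sigma                    # Eq. (106)

rng = np.random.default_rng(0)
x_s0 = np.array([1.0, 0.0])              # mean vector x_s,0
samples = rng.multivariate_normal(x_s0, C, size=100000)   # x_s ~ N(x_s0, C)
print(np.cov(samples.T))                 # approximately reproduces C
```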
Figure 13. Level sets $\beta^2(x_s) = \text{const}$ of a two-dimensional normal pdf around the mean vector $x_{s,0}$.

Figure 14. Level sets of a two-dimensional normal pdf with general covariance matrix C (a), with uncorrelated components, R = I (b), and with uncorrelated components of equal spread, $C = \sigma^2 \cdot I$ (c).
Figure 15. Level set with $\beta^2 = a^2$ of a two-dimensional normal pdf for different values of the correlation coefficient ρ (axes scaled by $a\,\sigma_1$ and $a\,\sigma_2$; ρ = 0 gives an axis-parallel ellipse).
4.3 Transformation of statistical distributions

$y \in \mathbb{R}^{n_y}$, $z \in \mathbb{R}^{n_z}$, $n_y = n_z$, $z = z(y)$, $y = y(z)$, such that the mapping from y to z is smooth and bijective (precisely: $z = \phi(y)$, $y = \phi^{-1}(z)$)

$$ \mathrm{cdf}_y(y) = \int_{-\infty}^{y} \mathrm{pdf}_y(y')\, dy' = \int_{-\infty}^{z(y)} \mathrm{pdf}_y(y(z')) \left| \det\!\left( \frac{\partial y}{\partial z^T} \right) \right| dz' = \int_{-\infty}^{z} \mathrm{pdf}_z(z')\, dz' = \mathrm{cdf}_z(z) \quad (109) $$

$$ \int_{-\infty}^{y} \mathrm{pdf}_y(y')\, dy' = \int_{-\infty}^{z} \mathrm{pdf}_z(z')\, dz' \quad (110) $$

$$ \mathrm{pdf}_z(z) = \mathrm{pdf}_y(y(z)) \left| \det\!\left( \frac{\partial y}{\partial z^T} \right) \right| \quad (111) $$

univariate case:

$$ \mathrm{pdf}_z(z) = \mathrm{pdf}_y(y(z)) \left| \frac{\partial y}{\partial z} \right| \quad (112) $$

In the simple univariate case, the function $\mathrm{pdf}_z$ has a domain that is a scaled version of the domain of $\mathrm{pdf}_y$; $\left|\frac{\partial y}{\partial z}\right|$ determines the scaling factor. In higher-order cases, the random variable space is scaled and rotated, with the Jacobian matrix $\frac{\partial y}{\partial z^T}$ determining the scaling and rotation.

Figure 16. A univariate pdf of a random number y is transformed to a new pdf of the new random number z = z(y). According to (109), the shaded areas as well as the hatched areas under the two curves are equal.
4.3.1 Example

Given: probability density function $\mathrm{pdf}_U(z)$, here a uniform distribution:

$$ \mathrm{pdf}_U(z) = \begin{cases} 1 & \text{for } 0 < z < 1 \\ 0 & \text{otherwise} \end{cases} \quad (113) $$

probability density function $\mathrm{pdf}_y(y)$, $y \in \mathbb{R}$; random number z

Find: random number y

From (109):

$$ \int_{0}^{z} \underbrace{\mathrm{pdf}_z(z')}_{1 \text{ for } 0 \le z' \le 1}\, dz' = \int_{-\infty}^{y} \mathrm{pdf}_y(y')\, dy' \quad (114) $$

hence, from (113):

$$ z = \int_{-\infty}^{y} \mathrm{pdf}_y(y')\, dy' = \mathrm{cdf}_y(y) \quad (115) $$

$$ y = \mathrm{cdf}_y^{-1}(z) \quad (116) $$

This example details a method to generate sample values of a random variable y with an arbitrary pdf $\mathrm{pdf}_y$ if sample values are available from a uniform distribution $\mathrm{pdf}_z$:

- insert $\mathrm{pdf}_y(y)$ in (115)
- compute $\mathrm{cdf}_y$ by integration
- compute the inverse $\mathrm{cdf}_y^{-1}$
- create a uniform random number z and insert it into (116) to get a sample value y distributed according to $\mathrm{pdf}_y(y)$
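A minimal sketch of this recipe (NumPy assumed). The target density is a hypothetical choice, an exponential distribution, because its cdf inverts in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 1.0, size=100000)   # z ~ pdf_U, Eq. (113)

# target: pdf_y(y) = exp(-y) for y > 0, so cdf_y(y) = 1 - exp(-y)
# inverting per Eq. (116): y = cdf_y^{-1}(z) = -log(1 - z)
y = -np.log(1.0 - z)

print(y.mean(), y.var())                 # both approach 1 for Exp(1)
```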
5 Expectation values and their estimators

5.1 Expectation values

5.1.1 Definitions

h(z): function of a random number z with probability density function pdf(z)

Expectation value:

$$ E\{h(z)\} = \mathop{E}_{\mathrm{pdf}(z)}\{h(z)\} = \int_{-\infty}^{+\infty} h(z)\, \mathrm{pdf}(z)\, dz \quad (117) $$

Moment of order κ:

$$ m^{(\kappa)} = E\{z^\kappa\} \quad (118) $$

Mean value (first-order moment):

$$ m^{(1)} = m = E\{z\} \quad (119) $$

$$ m = E\{z\} = \begin{bmatrix} E\{z_1\} \\ \vdots \\ E\{z_{n_z}\} \end{bmatrix} \quad (120) $$

Central moment of order κ:

$$ c^{(\kappa)} = E\{(z - m)^\kappa\}, \qquad c^{(1)} = 0 \quad (121) $$

Variance (second-order central moment):

$$ c^{(2)} = E\{(z - m)^2\} = \sigma^2 = V\{z\} \quad (122) $$

σ: standard deviation

Covariance:

$$ \mathrm{cov}\{z_i, z_j\} = E\{(z_i - m_i)(z_j - m_j)\} \quad (123) $$

Variance/covariance matrix:

$$ C = V\{z\} = E\{(z - m)(z - m)^T\} = \begin{bmatrix} V\{z_1\} & \mathrm{cov}\{z_1, z_2\} & \cdots & \mathrm{cov}\{z_1, z_{n_z}\} \\ \mathrm{cov}\{z_2, z_1\} & V\{z_2\} & \cdots & \mathrm{cov}\{z_2, z_{n_z}\} \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{cov}\{z_{n_z}, z_1\} & \mathrm{cov}\{z_{n_z}, z_2\} & \cdots & V\{z_{n_z}\} \end{bmatrix} \quad (124) $$

$$ V\{h(z)\} = E\{ (h(z) - E\{h(z)\})\, (h(z) - E\{h(z)\})^T \} \quad (125) $$
5.1.2 Linear transformation of the expectation value

$$ E\{A \cdot h(z) + b\} = A \cdot E\{h(z)\} + b \quad (126) $$

special cases: $E\{c\} = c$ (c a constant); $E\{c \cdot h(z)\} = c \cdot E\{h(z)\}$; $E\{h_1(z) + h_2(z)\} = E\{h_1(z)\} + E\{h_2(z)\}$

5.1.3 Linear transformation of the variance

$$ V\{A \cdot h(z) + b\} = A \cdot V\{h(z)\} \cdot A^T \quad (127) $$

special cases: $V\{a^T h(z) + b\} = a^T\, V\{h(z)\}\, a$; $V\{a \cdot h(z) + b\} = a^2\, V\{h(z)\}$

Gaussian error propagation:

$$ V\{a^T z + b\} = a^T C\, a = \sum_{i,j} a_i a_j\, \sigma_i \rho_{i,j} \sigma_j \ \overset{\rho_{i,j} = 0,\ i \ne j}{=}\ \sum_i a_i^2\, \sigma_i^2 $$

5.1.4 Translation law of variances

$$ V\{h(z)\} = E\{ (h(z) - a)(h(z) - a)^T \} - (E\{h(z)\} - a)(E\{h(z)\} - a)^T \quad (128) $$

special cases: $V\{h(z)\} = E\{(h(z) - a)^2\} - (E\{h(z)\} - a)^2$; $V\{h(z)\} = E\{h(z)\, h^T(z)\} - E\{h(z)\}\, E\{h^T(z)\}$; $V\{h(z)\} = E\{h^2(z)\} - (E\{h(z)\})^2$

5.1.5 Normalizing a random variable

$$ z' = \frac{z - E\{z\}}{\sqrt{V\{z\}}} = \frac{z - m_z}{\sigma_z} \quad (129) $$

$$ E\{z'\} = \frac{E\{z\} - m_z}{\sigma_z} = 0 \quad (130) $$

$$ V\{z'\} = E\{(z' - 0)^2\} = \frac{E\{(z - m_z)^2\}}{\sigma_z^2} = 1 \quad (131) $$
5.1.6 Linear transformation of a normal distribution

$$ x \sim N(x_0, C), \qquad f(x) = f_a + g^T (x - x_a) \quad (132) $$

mean value $\mu_f$ of f:

$$ \mu_f = E\{f\} = E\{f_a + g^T (x - x_a)\} = E\{f_a\} + g^T (E\{x\} - E\{x_a\}) $$
$$ \mu_f = f_a + g^T (x_0 - x_a) \quad (133) $$

variance $\sigma_f^2$ of f:

$$ \sigma_f^2 = E\{(f - \mu_f)^2\} = E\{(g^T (x - x_0))^2\} = E\{g^T (x - x_0)(x - x_0)^T g\} = g^T\, E\{(x - x_0)(x - x_0)^T\}\, g $$
$$ \sigma_f^2 = g^T C\, g \quad (134) $$
5.2 Estimation of expectation values

5.2.1 Expectation value estimator

$$ \hat{E}\{h(x)\} = \hat{m}_h = \frac{1}{n_{MC}} \sum_{\mu=1}^{n_{MC}} h(x^{(\mu)}) \quad (135) $$

$x^{(\mu)} \sim D(\mathrm{pdf}(x))$, $\mu = 1, \ldots, n_{MC}$: sample of the population with $n_{MC}$ sample elements, i.e., sample size $n_{MC}$

sample elements $x^{(\mu)}$, $\mu = 1, \ldots, n_{MC}$, that are independently and identically distributed, i.e.,

$$ E\{h(x^{(\mu)})\} = E\{h(x)\} = m_h \quad (136) $$
$$ V\{h(x^{(\mu)})\} = V\{h(x)\}, \qquad \mathrm{cov}\{h(x^{(\mu)}), h(x^{(\nu)})\} = 0,\ \ \mu \ne \nu \quad (137) $$

$$ \hat{\phi}(x) = \hat{\phi}(x^{(1)}, \ldots, x^{(n_{MC})}): \ \text{estimator function of } \phi(x) \quad (138) $$

5.2.2 Variance estimator

$$ \hat{V}\{h(x)\} = \frac{1}{n_{MC} - 1} \sum_{\mu=1}^{n_{MC}} \big( h(x^{(\mu)}) - \hat{m}_h \big)\big( h(x^{(\mu)}) - \hat{m}_h \big)^T \quad (139) $$

$x^{(\mu)} \sim D(\mathrm{pdf}(x))$, $\mu = 1, \ldots, n_{MC}$; if the exact mean $m_h$ is known:

$$ \hat{V}\{h(x)\} = \frac{1}{n_{MC}} \sum_{\mu=1}^{n_{MC}} \big( h(x^{(\mu)}) - m_h \big)\big( h(x^{(\mu)}) - m_h \big)^T \quad (140) $$

estimator bias:

$$ b_{\hat{\phi}} = E\{\hat{\phi}(x) - \phi(x)\} \quad (141) $$

unbiased estimator:

$$ E\{\hat{\phi}(x)\} = \phi(x) \ \Longleftrightarrow\ b_{\hat{\phi}} = 0 \quad (142) $$

consistent estimator:

$$ \lim_{n_{MC} \to \infty} P\{\, |\hat{\phi} - \phi| < \epsilon \,\} = 1 \quad (143) $$

strongly consistent:

$$ \epsilon \to 0 \quad (144) $$

variance of an estimator (quality):

$$ Q_{\hat{\phi}} = E\{ (\hat{\phi} - \phi)(\hat{\phi} - \phi)^T \} = V\{\hat{\phi}\} + b_{\hat{\phi}}\, b_{\hat{\phi}}^T \quad (145) $$

$$ b_{\hat{\phi}} = 0:\ \ Q_{\hat{\phi}} = V\{\hat{\phi}\} \quad (146) $$
5.2.3 Variance of the expectation value estimator

$$ Q_{\hat{m}_h} = V\{\hat{m}_h\} = V\{\hat{E}\{h(x)\}\} = V\Big\{ \frac{1}{n_{MC}} \sum_{\mu=1}^{n_{MC}} h(x^{(\mu)}) \Big\} $$

Writing the sum as a block matrix-vector product with $h^{(\mu)} = h(x^{(\mu)})$ and $I_{n_h,n_h}$ the identity matrix of the size $n_h$ of $h^{(\mu)}$, and applying (127):

$$ Q_{\hat{m}_h} = \frac{1}{n_{MC}^2}\, [\,I_{n_h,n_h}\ \ I_{n_h,n_h}\ \cdots\ I_{n_h,n_h}\,]\ \ V\left\{ \begin{bmatrix} h^{(1)} \\ h^{(2)} \\ \vdots \\ h^{(n_{MC})} \end{bmatrix} \right\} \begin{bmatrix} I_{n_h,n_h} \\ I_{n_h,n_h} \\ \vdots \\ I_{n_h,n_h} \end{bmatrix} $$

With (137), the covariance matrix of the stacked vector is block-diagonal with $n_{MC}$ blocks $V\{h\}$, hence:

$$ Q_{\hat{m}_h} = \frac{1}{n_{MC}^2} \cdot n_{MC} \cdot V\{h\} $$

$$ Q_{\hat{m}_h} = V\{\hat{m}_h\} = \frac{1}{n_{MC}}\, V\{h\} \quad (147) $$

Replacing $Q_{\hat{m}_h}$ by $\hat{Q}_{\hat{m}_h}$, $V\{h\}$ by $\hat{V}\{h\}$, and (127) by (150) yields the variance estimator of the expectation value estimator:

$$ \hat{Q}_{\hat{m}_h} = \hat{V}\{\hat{m}_h\} = \frac{1}{n_{MC}}\, \hat{V}\{h\} \quad (148) $$

The standard deviation of the mean estimator decreases with $1/\sqrt{n_{MC}}$; e.g., 100 times more sample elements are needed for a 10 times smaller standard deviation of the expectation value estimator.
5.2.4 Linear transformation of the estimated expectation value

$$ \hat{E}\{A \cdot h(z) + b\} = A \cdot \hat{E}\{h(z)\} + b \quad (149) $$

5.2.5 Linear transformation of the estimated variance

$$ \hat{V}\{A \cdot h(z) + b\} = A \cdot \hat{V}\{h(z)\} \cdot A^T \quad (150) $$

5.2.6 Translation law of the estimated variance

$$ \hat{V}\{h(z)\} = \frac{n_{MC}}{n_{MC} - 1} \Big[ \hat{E}\{h(z)\, h^T(z)\} - \hat{E}\{h(z)\}\, \hat{E}\{h^T(z)\} \Big] \quad (151) $$
6 Yield analysis

6.1 Task

Given: statistical parameters with a normal distribution, possibly obtained through transformation; performance specification
Find: percentage/proportion of circuits that fulfill the specification

statistical parameter distribution (manufacturing process):

$$ \mathrm{pdf}(x_s) = \mathrm{pdf}_N(x_s) = \frac{1}{\sqrt{2\pi}^{\,n_{xs}}\, \sqrt{\det(C)}}\; e^{-\frac{1}{2} \beta^2(x_s)} \quad (152) $$

$$ \beta^2(x_s) = (x_s - x_{s,0})^T\, C^{-1}\, (x_s - x_{s,0}) \quad (153) $$

performance acceptance region, performance specification (customer):

$$ A_f = \{\, f \mid f_L \le f \le f_U \,\} \quad (154) $$

The solution requires either a (generally non-normal) performance distribution $\mathrm{pdf}_f(f)$, or the (generally nonlinear) parameter acceptance region $A_s = \{\, x_s \mid f(x_s) \in A_f \,\}$ (dashed lines in Fig. 17).

Figure 17. Left: parameter space with level sets $\beta = \text{const}$ around $x_{s,0}$ and the parameter acceptance region $A_s$ (dashed border). Right: performance space with the performance acceptance region $A_f$ bounded by $f_{L,1}$, $f_{U,1}$, $f_{L,2}$, $f_{U,2}$, and the image $f(x_{s,0})$.
6.1.1 Acceptance function

$$ \delta(x_s) = \begin{cases} 1, & f(x_s) \in A_f \\ 0, & f(x_s) \notin A_f \end{cases} \;=\; \begin{cases} 1, & x_s \in A_s \quad \text{circuit functions} \\ 0, & x_s \notin A_s \quad \text{circuit malfunctions} \end{cases} \quad (155) $$

6.1.2 Parametric yield

$$ Y = \int_{A_s} \mathrm{pdf}(x_s)\, dx_s \quad (156) $$
$$ \phantom{Y} = \int_{-\infty}^{+\infty} \delta(x_s)\, \mathrm{pdf}(x_s)\, dx_s \quad (157) $$

yield: expected value of the acceptance function

$$ Y = E\{\delta(x_s)\} \quad (158) $$
6.2 Statistical yield analysis / Monte-Carlo analysis

sample of statistical parameter vectors according to the given distribution:

$$ x_s^{(\mu)} \sim N(x_{s,0}, C),\ \ \mu = 1, \ldots, n_{MC} \quad (159) $$

(numerical) circuit simulation of each sample element (simulation of the stochastic manufacturing process on circuit level):

$$ x_s^{(\mu)} \ \to\ f^{(\mu)} = f(x_s^{(\mu)}) \quad (160) $$

evaluation of the acceptance function:

$$ \delta^{(\mu)} = \delta(x_s^{(\mu)}) = \begin{cases} 1, & f^{(\mu)} \in A_f \\ 0, & f^{(\mu)} \notin A_f \end{cases} \quad (161) $$

statistical yield estimation:

$$ \hat{Y} = \hat{E}\{\delta(x_s)\} = \frac{1}{n_{MC}} \sum_{\mu=1}^{n_{MC}} \delta^{(\mu)} \quad (162) $$
$$ \phantom{\hat{Y}} = \frac{\text{number of functioning circuits}}{\text{sample size}} \quad (163) $$
$$ \phantom{\hat{Y}} = \frac{n_+}{n_{MC}} = \frac{\#\{+\}}{\#\{+\} + \#\{-\}} \quad \text{(Fig. 18)} \quad (164) $$

Figure 18. Sample elements in the parameter space (level sets $\beta = \text{const}$): elements inside the parameter acceptance region $A_s$ (+) and outside (-).
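The four steps (159)-(162) in code, with the circuit simulator replaced by an arbitrary Python function (NumPy assumed; the toy performance and bounds are hypothetical):

```python
import numpy as np

def monte_carlo_yield(f, x_s0, C, f_L, f_U, n_MC=1000, seed=0):
    """Statistical yield estimation, Eqs. (159)-(162)."""
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(x_s0, C, size=n_MC)        # sample, Eq. (159)
    perf = np.apply_along_axis(f, 1, x)                    # 'simulation', Eq. (160)
    delta = np.all((f_L <= perf) & (perf <= f_U), axis=1)  # acceptance, Eq. (161)
    return delta.mean()                                    # Y_hat, Eq. (162)

# toy performance vector f = [x1 + x2] with an upper bound of 2 (hypothetical)
f = lambda xs: np.array([xs[0] + xs[1]])
Y_hat = monte_carlo_yield(f, np.zeros(2), np.eye(2),
                          f_L=np.array([-np.inf]), f_U=np.array([2.0]))
print(Y_hat)
```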
6.2.1 Variance of the yield estimator

$$ V\{\hat{Y}\} \overset{(162)}{=} V\{\hat{E}\{\delta(x_s)\}\} \overset{(147)}{=} \frac{1}{n_{MC}}\, V\{\delta(x_s)\} = \sigma_{\hat{Y}}^2 \quad (165) $$

$$ V\{\delta(x_s)\} = \underbrace{E\{\delta^2(x_s)\}}_{E\{\delta(x_s)\}\,=\,Y} - \underbrace{(E\{\delta(x_s)\})^2}_{Y^2} = Y\,(1 - Y) \quad (166) $$

($\delta^2 = \delta$ because δ takes only the values 0 and 1.)

6.2.2 Estimated variance of the yield estimator

$$ \hat{V}\{\hat{Y}\} \overset{(162)}{=} \hat{V}\{\hat{E}\{\delta(x_s)\}\} \overset{(148)}{=} \frac{1}{n_{MC}}\, \hat{V}\{\delta(x_s)\} \quad (167) $$

$$ \overset{(151)}{=} \frac{1}{n_{MC}} \cdot \frac{n_{MC}}{n_{MC} - 1} \Big[ \underbrace{\hat{E}\{\delta^2(x_s)\}}_{\hat{Y}} - \underbrace{(\hat{E}\{\delta(x_s)\})^2}_{\hat{Y}^2} \Big] = \frac{\hat{Y}\,(1 - \hat{Y})}{n_{MC} - 1} \quad (168) $$

$$ \hat{\sigma}_{\hat{Y}}^2 = \frac{\hat{Y}\,(1 - \hat{Y})}{n_{MC} - 1} \quad (169) $$

$\hat{Y}$ is binomially distributed: probability that $n_+$ of $n_{MC}$ circuits are functioning. For $n_{MC} \to \infty$, $\hat{Y}$ is normally distributed (central limit theorem); in practice: $n_+ > 4$, $n_{MC} - n_+ > 4$, and $n_{MC} > 10$.

Figure 19. Estimated variance $\hat{\sigma}_{\hat{Y}}^2$ of the yield estimator over $\hat{Y}$ from 0% to 100%: maximum of about $0.25 / n_{MC}$ at $\hat{Y} = 50\%$.

Example for $\hat{Y} = 85\%$:

| $n_{MC}$                 | 10    | 50   | 100  | 500  | 1000 |
|--------------------------|-------|------|------|------|------|
| $\hat{\sigma}_{\hat{Y}}$ | 11.9% | 5.1% | 3.6% | 1.6% | 1.1% |
confidence interval, confidence level:

$$ P\big( Y \in \underbrace{[\hat{Y} - k_\zeta\, \hat{\sigma}_{\hat{Y}},\ \hat{Y} + k_\zeta\, \hat{\sigma}_{\hat{Y}}]}_{\text{confidence interval}} \big) = \underbrace{\int_{-k_\zeta}^{k_\zeta} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{t^2}{2}}\, dt}_{\text{confidence level}} \quad (170) $$

e.g., $n_{MC} = 1000$, $\hat{Y} = 85\%$ → $\hat{\sigma}_{\hat{Y}} = 1.1\%$; $k_\zeta = 3$: $P(Y \in [81.7\%,\ 88.3\%]) = 99.7\%$

given: yield estimator $\hat{Y}$, confidence interval $Y \in [\hat{Y} - \Delta Y,\ \hat{Y} + \Delta Y]$, confidence level ζ
find: $n_{MC}$

$$ \zeta = \mathrm{cdf}_N(\hat{Y} + k_\zeta\, \hat{\sigma}_{\hat{Y}}) - \mathrm{cdf}_N(\hat{Y} - k_\zeta\, \hat{\sigma}_{\hat{Y}}) \ \to\ k_\zeta \quad (171) $$
$$ \Delta Y = k_\zeta\, \hat{\sigma}_{\hat{Y}} \quad (172) $$
$$ \hat{\sigma}_{\hat{Y}}^2 = \frac{\hat{Y}\,(1 - \hat{Y})}{n_{MC} - 1} \quad (173) $$
$$ n_{MC} = 1 + \hat{Y}\,(1 - \hat{Y}) \cdot \frac{k_\zeta^2}{\Delta Y^2} \quad (174) $$

number $n_{MC}$ for various confidence intervals and confidence levels:

| ζ                          | 90%    | 95%    | 99%    | 99.9%   |
|----------------------------|--------|--------|--------|---------|
| $k_\zeta$                  | 1.645  | 1.960  | 2.576  | 3.29    |
| 85% ± 10%                  | 36     | 50     | 86     | 139     |
| 85% ± 5%                   | 139    | 197    | 340    | 553     |
| 85% ± 1%                   | 3,452  | 4,900  | 8,462  | 13,802  |
| 99.99% ± 0.01%             | 27,058 | 38,413 | 66,352 | 108,232 |

given: required minimum yield $\hat{Y} \overset{!}{>} Y_{min}$, significance level α
find: $n_{MC}$

null hypothesis $H_0$: $\hat{Y} < Y_{min}$; rejected if all circuits are functioning, $n_+ = n_{MC}$

Assuming $H_0$ holds, the probability of falsely rejecting $H_0$ (i.e., although $\hat{Y} < Y_{min}$) is

$$ P(\text{rejection}) \overset{\text{test definition}}{=} P(n_+ = n_{MC}) \overset{\text{binomial distribution}}{=} \hat{Y}^{\,n_{MC}} \overset{H_0}{<} Y_{min}^{\,n_{MC}} \overset{!}{<} \alpha \quad (175), (176) $$

$$ n_{MC} > \frac{\log \alpha}{\log Y_{min}} \quad (177) $$
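Both sample-size formulas, (174) and (177), are one-liners; a sketch (NumPy and SciPy assumed):

```python
import numpy as np
from scipy.stats import norm

def n_mc_confidence(Y_hat, dY, zeta):
    """Sample size for confidence interval Y_hat +/- dY at level zeta, Eq. (174)."""
    k_zeta = norm.ppf(0.5 + zeta / 2.0)      # two-sided quantile of N(0, 1)
    return int(np.ceil(1.0 + Y_hat * (1.0 - Y_hat) * k_zeta**2 / dY**2))

def n_mc_significance(Y_min, alpha):
    """Sample size so that n+ = n_MC rejects H0 at level alpha, Eq. (177)."""
    return int(np.ceil(np.log(alpha) / np.log(Y_min)))

print(n_mc_confidence(0.85, 0.01, 0.95))     # about 4,900
print(n_mc_significance(0.999, 0.05))        # about 3,000
```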
number $n_{MC}$ for a minimum yield and significance level:

| $Y_{min}$ | α = 5% | α = 1% |
|-----------|--------|--------|
| 95%       | 59     | 90     |
| 99%       | 299    | 459    |
| 99.9%     | 3,000  | 4,600  |
| 99.99%    | 30,000 | 46,000 |

6.2.3 Importance sampling

$$ Y = \int_{-\infty}^{+\infty} \delta(x_s)\, \mathrm{pdf}(x_s)\, dx_s \quad (178) $$
$$ \phantom{Y} = \int_{-\infty}^{+\infty} \delta(x_s)\, \frac{\mathrm{pdf}(x_s)}{\mathrm{pdf}_{IS}(x_s)}\, \mathrm{pdf}_{IS}(x_s)\, dx_s \quad (179) $$
$$ \phantom{Y} = \mathop{E}_{\mathrm{pdf}_{IS}} \Big\{ \delta(x_s)\, \frac{\mathrm{pdf}(x_s)}{\mathrm{pdf}_{IS}(x_s)} \Big\} = \mathop{E}_{\mathrm{pdf}_{IS}} \{ \delta(x_s)\, w(x_s) \} \quad (180) $$

The sample is created according to a separate, specific distribution $\mathrm{pdf}_{IS}$.

$$ \mathop{E}_{\mathrm{pdf}_{IS}}\{w(x_s)\} = \int \frac{\mathrm{pdf}(x_s)}{\mathrm{pdf}_{IS}(x_s)}\, \mathrm{pdf}_{IS}(x_s)\, dx_s = 1 \quad (181) $$

goal: reduction of the estimator variance; with $V\{\hat{E}\{\delta(x_s)\}\} = \frac{V\{\delta(x_s)\}}{n_{MC}}$:

$$ \frac{1}{n_{IS}} \mathop{V}_{\mathrm{pdf}_{IS}}\{\delta(x_s)\, w(x_s)\} \ \overset{!}{<}\ \frac{1}{n_{MC}}\, V\{\delta(x_s)\} \quad (182) $$

Eq. (182) is satisfied for $n_{MC} = n_{IS}$ if

$$ \mathop{V}_{\mathrm{pdf}_{IS}}\{\delta(x_s)\, w(x_s)\} = \int \delta(x_s)\, \frac{\mathrm{pdf}(x_s)}{\mathrm{pdf}_{IS}(x_s)}\, \mathrm{pdf}(x_s)\, dx_s - Y^2 \ <\ V\{\delta(x_s)\} \quad (183) $$

which holds if, e.g.,

$$ \forall_{\{x_s \mid \delta(x_s) = 1\}}\ \ \mathrm{pdf}_{IS}(x_s) > \mathrm{pdf}(x_s) \quad (184) $$
6.3 Geometric yield analysis for a linearized performance feature ("realistic geometric yield analysis")

yield in the case that $A_s$ is a half-space of $\mathbb{R}^{n_{xs}}$ defined by a hyperplane

linear model for one single performance feature (index i left out):

$$ \bar{f} = f_L + g^T (x_s - x_{s,WL}) \quad (185) $$

e.g., after solution according to (210) or (211):

$$ f_L = f(x_{s,WL}), \qquad g = \nabla f(x_{s,WL}) \quad (186) $$

specification type:

$$ f \ge f_L \quad (187) $$

Figure 20. Linearized performance feature: level sets $\beta^2(x_s) = (x_s - x_{s,0})^T C^{-1} (x_s - x_{s,0}) = \text{const}$ of the normal pdf, the worst-case level set $\beta^2(x_s) = \beta_{WL}^2$ touching the hyperplane $\bar{f} = f_L$, the half-space acceptance region $A_{s,L}$ for $\bar{f} \ge f_L$, the gradient g, the level line $\bar{f} = f_0$ through $x_{s,0}$, and the touching point $x_{s,a}$ with $\beta^2(x_{s,a})$.
6.3.1 Yield partition

$$ Y_L = \int_{x_s \in A_{s,L}} \mathrm{pdf}_N(x_s)\, dx_s = \int_{f_L}^{\infty} \mathrm{pdf}_f(\bar{f})\, d\bar{f} \quad (188) $$

The linearized performance feature is normally distributed, (133), (134):

$$ \bar{f} \sim N\big( \underbrace{f_L + g^T (x_{s,0} - x_{s,WL})}_{f_0 = \bar{f}(x_{s,0})},\ \ \underbrace{g^T C\, g}_{\sigma_f^2} \big) \quad (189) $$

yield written in terms of the pdf of the linearized performance feature:

$$ Y_L = \int_{f_L}^{\infty} \frac{1}{\sqrt{2\pi}\, \sigma_f}\; e^{-\frac{1}{2} \left( \frac{\bar{f} - f_0}{\sigma_f} \right)^2} d\bar{f} \quad \text{(Fig. 21)} \quad (190) $$

Figure 21. Gaussian pdf of the linearized performance feature $\bar{f}$; the yield partition $Y_L$ is the area under the pdf from $f_L$ to infinity ($f_0$ marked).
6.3.2 Defining the worst-case distance $\beta_{WL}$ as the difference from nominal performance to specification bound, as a multiple of the standard deviation $\sigma_f$ of the linearized performance feature

$$ f_0 - f_L = g^T (x_{s,0} - x_{s,WL}) \quad (191) $$

$$ f_0 - f_L = \begin{cases} +\beta_{WL}\, \sigma_f, & f_0 > f_L \quad \text{circuit functions} \\ -\beta_{WL}\, \sigma_f, & f_0 < f_L \quad \text{circuit malfunctions} \end{cases} \quad (192) $$

variable substitution:

$$ t = \frac{\bar{f} - f_0}{\sigma_f}, \qquad \frac{d\bar{f}}{dt} = \sigma_f \quad (193) $$

$$ Y_L = \int_{t_L}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{t^2}{2}}\, dt, \qquad t_L = \frac{f_L - f_0}{\sigma_f} = \begin{cases} -\beta_{WL}, & f_0 > f_L \\ +\beta_{WL}, & f_0 < f_L \end{cases} \quad (194) $$

6.3.3 Yield partition as a function of the worst-case distance $\beta_{WL}$

With $t' = -t$, $dt' = -dt$:

$$ Y_L = \int_{-\infty}^{\pm \beta_{WL}} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{t'^2}{2}}\, dt', \qquad \begin{cases} +\beta_{WL}, & f_0 > f_L \\ -\beta_{WL}, & f_0 < f_L \end{cases} \quad (195) $$

standard normal distribution, statistical tables; exact within the given digits, no estimation
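Equation (195) is just the standard normal cdf evaluated at $\pm\beta_{WL}$; a sketch (SciPy assumed):

```python
from scipy.stats import norm

def yield_partition(beta_WL, nominally_fulfilled=True):
    """Eq. (195): Y_L = Phi(+beta_WL) if f_0 > f_L, else Phi(-beta_WL)."""
    return norm.cdf(beta_WL if nominally_fulfilled else -beta_WL)

print(yield_partition(3.0))   # about 99.87 %, cf. the cdf table in Sec. 4.1
```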
6.3.4 Worst-case distance $\beta_{WL}$ defines the tolerance region

$$ \bar{f}(x_{s,a}) = f_L \ \overset{(185)}{\Longrightarrow}\ g^T (x_{s,a} - x_{s,WL}) = 0 \quad (196) $$

$$ \bar{f}(x_s) = f_L + g^T (x_s - x_{s,WL}) = f_L + g^T (x_s - x_{s,a}) \quad (197) $$

i.e., $x_{s,WL}$ in (185) can be replaced with any point of the level set $\bar{f} = f_L$ because of (196).

From Fig. 20, at the touching point the gradient of $\beta^2$ is (anti-)parallel to g:

$$ \nabla \beta^2(x_{s,a}) = 2\, C^{-1} (x_{s,a} - x_{s,0}) = \begin{cases} -\lambda_a\, g, & f_0 > f_L \\ +\lambda_a\, g, & f_0 < f_L \end{cases}, \qquad \lambda_a > 0 \quad (198) $$

$$ (x_{s,a} - x_{s,0}) = \mp \frac{\lambda_a}{2}\, C\, g \quad (199) $$

Multiplying (199) by $g^T$ and using (197) and (192):

$$ g^T (x_{s,a} - x_{s,0}) = \mp \frac{\lambda_a}{2}\, \sigma_f^2 \ \overset{(197)}{=}\ f_L - f_0 \ \overset{(192)}{=}\ \mp \beta_{WL}\, \sigma_f \quad (200) $$

$$ \frac{\lambda_a}{2} = \frac{\beta_{WL}}{\sigma_f} \quad (201) $$

$$ (x_{s,a} - x_{s,0}) = \mp \frac{\beta_{WL}}{\sigma_f}\, C\, g \quad (202) $$

$$ (x_{s,a} - x_{s,0})^T\, C^{-1}\, (x_{s,a} - x_{s,0}) = \beta_{WL}^2\, \underbrace{\frac{1}{\sigma_f^2}\, g^T C\, g}_{1} \quad (203) $$

$$ (x_{s,a} - x_{s,0})^T\, C^{-1}\, (x_{s,a} - x_{s,0}) = \beta_{WL}^2 \quad (204) $$

$\beta_{WL}^2$ is the level parameter ("radius") of the ellipsoid that touches the level hyperplane $\bar{f} = f_L$.
6.3.5 Specification type f ≤ f_U

(192) becomes

$$ f_U - f_0 = \begin{cases} +\beta_{WU}\, \sigma_f, & f_0 < f_U \quad \text{circuit functions} \\ -\beta_{WU}\, \sigma_f, & f_0 > f_U \quad \text{circuit malfunctions} \end{cases} \quad (205) $$

(195) becomes

$$ Y_U = \int_{-\infty}^{\pm \beta_{WU}} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{t^2}{2}}\, dt, \qquad \begin{cases} +\beta_{WU}, & f_0 < f_U \\ -\beta_{WU}, & f_0 > f_U \end{cases} \quad (206) $$

(197) becomes

$$ \bar{f}(x_s) = f_U + g^T (x_s - x_{s,a}) \quad (207) $$
6.4 Geometric yield analysis for a nonlinear performance feature ("general geometric yield analysis")

6.4.1 Problem formulation

Figure 22. Nonlinear level set $f = f_L$ (border of the acceptance region $A_{s,L}$), its linearization $f^{(WL)} = f_L$ at the worst-case point $x_{s,WL}$ with the linearized acceptance region $\bar{A}_{s,L}$, and level sets $\beta^2(x_s) = (x_s - x_{s,0})^T C^{-1} (x_s - x_{s,0}) = \text{const}$ around $x_{s,0}$.

lower bound, nominally fulfilled / upper bound, nominally violated, $f(x_{s,0}) > f_{L/U}$:

$$ \max_{x_s} \mathrm{pdf}_N(x_s) \ \ \text{s.t.}\ \ f(x_s) \le f_{L/U} \quad (208) $$

lower bound, nominally violated / upper bound, nominally fulfilled, $f(x_{s,0}) < f_{L/U}$:

$$ \max_{x_s} \mathrm{pdf}_N(x_s) \ \ \text{s.t.}\ \ f(x_s) \ge f_{L/U} \quad (209) $$

i.e., the statistical parameter vector with the highest probability density on the other side of the acceptance region border. Equivalently:

lower bound, nominally fulfilled / upper bound, nominally violated, $f(x_{s,0}) > f_{L/U}$:

$$ \min_{x_s} \beta^2(x_s) \ \ \text{s.t.}\ \ f(x_s) \le f_{L/U} \quad (210) $$

lower bound, nominally violated / upper bound, nominally fulfilled, $f(x_{s,0}) < f_{L/U}$:

$$ \min_{x_s} \beta^2(x_s) \ \ \text{s.t.}\ \ f(x_s) \ge f_{L/U} \quad (211) $$

i.e., the statistical parameter vector with the smallest distance (weighted according to the pdf) to the acceptance region border.

This is a specific form of a nonlinear programming problem (quadratic objective function, one nonlinear inequality constraint); iterative solution with SQP yields:

- the worst-case parameter set $x_{s,WL/U}$
- the worst-case distance $\beta_{WL/U} = \beta(x_{s,WL/U})$
- the performance gradient $\nabla f(x_{s,WL/U})$
6.4.2 Advantages of geometric yield analysis

$Y_{L/U} = \int_{A_{s,L/U}} \mathrm{pdf}(x_s)\, dx_s$: the larger the pdf value, the larger the error in Y if the border of $A_s$ is approximated inaccurately. The approximated region $\bar{A}_s$ is exact at the point $x_{s,WL/U}$ with the highest pdf value, and $\bar{A}_s$ differs from $A_s$ the more, the smaller the pdf value. The accuracy of $\bar{Y}_{L/U}$ involves a systematic error that depends on the curvature of the performance f at $x_{s,WL/U}$.

duality principle in minimum-norm problems: the minimum distance between a point and a convex set is equal to the maximum distance between the point and any separating hyperplane

case 1: $Y(\bar{A}_s)$ is the greatest lower bound of $Y(A_s)$ concerning any tangent hyperplane

Figure 23. Case 1: acceptance region $A_s$ and its tangent-hyperplane approximation $\bar{A}_s$, seen from $x_{s,0}$.

case 2: $Y(\bar{A}_s)$ is the least upper bound of $Y(A_s)$ concerning any tangent hyperplane

Figure 24. Case 2: acceptance region $A_s$ and its tangent-hyperplane approximation $\bar{A}_s$, seen from $x_{s,0}$.

in practice: error $|\bar{Y}_{L/U} - Y_{L/U}| \approx 1\% \ldots 2\%$
6.4.3 Lagrange function and first-order optimality conditions of problem (210)

$$ \mathcal{L}(x_s, \lambda) = \beta^2(x_s) - \lambda\, (f_L - f(x_s)) \quad (212) $$

$$ \nabla \mathcal{L}(x_s) = 0:\ \ 2\, C^{-1} (x_{s,WL} - x_{s,0}) + \lambda_{WL}\, \nabla f(x_{s,WL}) = 0 \quad (213) $$

$$ \lambda_{WL}\, (f_L - f(x_{s,WL})) = 0 \quad (214) $$

$$ \beta_{WL}^2 = (x_{s,WL} - x_{s,0})^T\, C^{-1}\, (x_{s,WL} - x_{s,0}) \quad (215) $$

Assumption: $\lambda_{WL} = 0$; then $f_L \le f(x_{s,WL})$ (i.e., the constraint is inactive); from (213): $x_{s,WL} = x_{s,0}$, and $f(x_{s,WL}) = f(x_{s,0}) > f_L$ (from (210)), which contradicts the constraint of (210); therefore:

$$ \lambda_{WL} > 0 \quad (216) $$
$$ f(x_{s,WL}) = f_L \quad (217) $$

from (213):

$$ x_{s,WL} - x_{s,0} = -\frac{\lambda_{WL}}{2}\, C\, \nabla f(x_{s,WL}) \quad (218) $$

substituting (218) in (215):

$$ \left( \frac{\lambda_{WL}}{2} \right)^2\, \nabla f(x_{s,WL})^T\, C\, \nabla f(x_{s,WL}) = \beta_{WL}^2 \quad (219) $$

$$ \frac{\lambda_{WL}}{2} = \frac{\beta_{WL}}{\sqrt{\nabla f(x_{s,WL})^T\, C\, \nabla f(x_{s,WL})}} \quad (220) $$

substituting (220) in (218):

$$ x_{s,WL} - x_{s,0} = -\frac{\beta_{WL}}{\sqrt{\nabla f(x_{s,WL})^T\, C\, \nabla f(x_{s,WL})}}\, C\, \nabla f(x_{s,WL}) \quad (221) $$

(221) corresponds to (91) of the general worst-case analysis.

6.4.4 Lagrange function of problem (211)

$$ \mathcal{L}(x_s, \lambda) = \beta^2(x_s) - \lambda\, (f(x_s) - f_L) \quad (222) $$

(221) becomes

$$ x_{s,WL} - x_{s,0} = +\frac{\beta_{WL}}{\sqrt{\nabla f(x_{s,WL})^T\, C\, \nabla f(x_{s,WL})}}\, C\, \nabla f(x_{s,WL}) \quad (223) $$
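Putting problem (210) and Eq. (195) together gives a compact geometric yield analysis for one performance feature; a sketch using SciPy's SLSQP as the iterative solver (the toy performance is a hypothetical stand-in for a circuit simulation):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def geometric_yield_lower(f, x_s0, C, f_L):
    """Problem (210) for a nominally fulfilled lower bound: min beta^2(x_s)
    s.t. f(x_s) <= f_L; then Y_L = Phi(beta_WL) via Eq. (195)."""
    C_inv = np.linalg.inv(C)
    beta2 = lambda x: (x - x_s0) @ C_inv @ (x - x_s0)
    constraint = {'type': 'ineq', 'fun': lambda x: f_L - f(x)}  # f(x_s) <= f_L
    res = minimize(beta2, x_s0, method='SLSQP', constraints=[constraint])
    beta_WL = np.sqrt(max(res.fun, 0.0))          # worst-case distance
    return res.x, beta_WL, norm.cdf(beta_WL)      # x_s,WL, beta_WL, Y_L

# toy performance, nominally above the bound f_L = 8 (hypothetical)
f = lambda x: 10.0 + x[0] - 0.2 * x[1]**2
x_WL, beta_WL, Y_L = geometric_yield_lower(f, np.zeros(2), np.eye(2), f_L=8.0)
print(beta_WL, Y_L)                               # beta_WL about 2, Y_L about 97.7 %
```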
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationAlgorithms for Constrained Optimization
1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic
More informationComputational Finance
Department of Mathematics at University of California, San Diego Computational Finance Optimization Techniques [Lecture 2] Michael Holst January 9, 2017 Contents 1 Optimization Techniques 3 1.1 Examples
More informationGeneralization to inequality constrained problem. Maximize
Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum
More informationMethods for Unconstrained Optimization Numerical Optimization Lectures 1-2
Methods for Unconstrained Optimization Numerical Optimization Lectures 1-2 Coralia Cartis, University of Oxford INFOMM CDT: Modelling, Analysis and Computation of Continuous Real-World Problems Methods
More information8 Numerical methods for unconstrained problems
8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields
More informationNumerical Optimization
Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,
More informationConstrained Optimization
1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange
More informationMiscellaneous Nonlinear Programming Exercises
Miscellaneous Nonlinear Programming Exercises Henry Wolkowicz 2 08 21 University of Waterloo Department of Combinatorics & Optimization Waterloo, Ontario N2L 3G1, Canada Contents 1 Numerical Analysis Background
More informationUnconstrained optimization
Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted
More informationOptimization Tutorial 1. Basic Gradient Descent
E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.
More informationLecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima
B9824 Foundations of Optimization Lecture 1: Introduction Fall 2009 Copyright 2009 Ciamac Moallemi Outline 1. Administrative matters 2. Introduction 3. Existence of optima 4. Local theory of unconstrained
More informationGeometry optimization
Geometry optimization Trygve Helgaker Centre for Theoretical and Computational Chemistry Department of Chemistry, University of Oslo, Norway European Summer School in Quantum Chemistry (ESQC) 211 Torre
More informationGradient Descent. Dr. Xiaowei Huang
Gradient Descent Dr. Xiaowei Huang https://cgi.csc.liv.ac.uk/~xiaowei/ Up to now, Three machine learning algorithms: decision tree learning k-nn linear regression only optimization objectives are discussed,
More informationAM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods
AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α
More informationA Trust-region-based Sequential Quadratic Programming Algorithm
Downloaded from orbit.dtu.dk on: Oct 19, 2018 A Trust-region-based Sequential Quadratic Programming Algorithm Henriksen, Lars Christian; Poulsen, Niels Kjølstad Publication date: 2010 Document Version
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More information1 Computing with constraints
Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)
More informationLecture Notes: Geometric Considerations in Unconstrained Optimization
Lecture Notes: Geometric Considerations in Unconstrained Optimization James T. Allison February 15, 2006 The primary objectives of this lecture on unconstrained optimization are to: Establish connections
More informationUnconstrained Optimization
1 / 36 Unconstrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University February 2, 2015 2 / 36 3 / 36 4 / 36 5 / 36 1. preliminaries 1.1 local approximation
More informationIntroduction to Nonlinear Stochastic Programming
School of Mathematics T H E U N I V E R S I T Y O H F R G E D I N B U Introduction to Nonlinear Stochastic Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio SPS
More informationMVE165/MMG631 Linear and integer optimization with applications Lecture 13 Overview of nonlinear programming. Ann-Brith Strömberg
MVE165/MMG631 Overview of nonlinear programming Ann-Brith Strömberg 2015 05 21 Areas of applications, examples (Ch. 9.1) Structural optimization Design of aircraft, ships, bridges, etc Decide on the material
More informationConstrained Nonlinear Optimization Algorithms
Department of Industrial Engineering and Management Sciences Northwestern University waechter@iems.northwestern.edu Institute for Mathematics and its Applications University of Minnesota August 4, 2016
More informationLecture 15: SQP methods for equality constrained optimization
Lecture 15: SQP methods for equality constrained optimization Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lecture 15: SQP methods for equality constrained
More informationCONSTRAINED NONLINEAR PROGRAMMING
149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach
More informationOn Lagrange multipliers of trust-region subproblems
On Lagrange multipliers of trust-region subproblems Ladislav Lukšan, Ctirad Matonoha, Jan Vlček Institute of Computer Science AS CR, Prague Programy a algoritmy numerické matematiky 14 1.- 6. června 2008
More informationCS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares
CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search
More informationOptimization Problems with Constraints - introduction to theory, numerical Methods and applications
Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Dr. Abebe Geletu Ilmenau University of Technology Department of Simulation and Optimal Processes (SOP)
More informationSupport Vector Machines
Support Vector Machines Sridhar Mahadevan mahadeva@cs.umass.edu University of Massachusetts Sridhar Mahadevan: CMPSCI 689 p. 1/32 Margin Classifiers margin b = 0 Sridhar Mahadevan: CMPSCI 689 p.
More informationNumerical Analysis of Electromagnetic Fields
Pei-bai Zhou Numerical Analysis of Electromagnetic Fields With 157 Figures Springer-Verlag Berlin Heidelberg New York London Paris Tokyo Hong Kong Barcelona Budapest Contents Part 1 Universal Concepts
More information, b = 0. (2) 1 2 The eigenvectors of A corresponding to the eigenvalues λ 1 = 1, λ 2 = 3 are
Quadratic forms We consider the quadratic function f : R 2 R defined by f(x) = 2 xt Ax b T x with x = (x, x 2 ) T, () where A R 2 2 is symmetric and b R 2. We will see that, depending on the eigenvalues
More informationMachine Learning. Support Vector Machines. Manfred Huber
Machine Learning Support Vector Machines Manfred Huber 2015 1 Support Vector Machines Both logistic regression and linear discriminant analysis learn a linear discriminant function to separate the data
More informationExamination paper for TMA4180 Optimization I
Department of Mathematical Sciences Examination paper for TMA4180 Optimization I Academic contact during examination: Phone: Examination date: 26th May 2016 Examination time (from to): 09:00 13:00 Permitted
More informationMore on Lagrange multipliers
More on Lagrange multipliers CE 377K April 21, 2015 REVIEW The standard form for a nonlinear optimization problem is min x f (x) s.t. g 1 (x) 0. g l (x) 0 h 1 (x) = 0. h m (x) = 0 The objective function
More informationLecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima
B9824 Foundations of Optimization Lecture 1: Introduction Fall 2010 Copyright 2010 Ciamac Moallemi Outline 1. Administrative matters 2. Introduction 3. Existence of optima 4. Local theory of unconstrained
More informationLecture 18: Optimization Programming
Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming
More informationMotivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem:
CDS270 Maryam Fazel Lecture 2 Topics from Optimization and Duality Motivation network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x s, through
More informationInverse Problems and Optimal Design in Electricity and Magnetism
Inverse Problems and Optimal Design in Electricity and Magnetism P. Neittaanmäki Department of Mathematics, University of Jyväskylä M. Rudnicki Institute of Electrical Engineering, Warsaw and A. Savini
More informationConvex optimization problems. Optimization problem in standard form
Convex optimization problems optimization problem in standard form convex optimization problems linear optimization quadratic optimization geometric programming quasiconvex optimization generalized inequality
More informationNonlinear Optimization for Optimal Control
Nonlinear Optimization for Optimal Control Pieter Abbeel UC Berkeley EECS Many slides and figures adapted from Stephen Boyd [optional] Boyd and Vandenberghe, Convex Optimization, Chapters 9 11 [optional]
More informationSubject: Optimal Control Assignment-1 (Related to Lecture notes 1-10)
Subject: Optimal Control Assignment- (Related to Lecture notes -). Design a oil mug, shown in fig., to hold as much oil possible. The height and radius of the mug should not be more than 6cm. The mug must
More informationCourse Summary Math 211
Course Summary Math 211 table of contents I. Functions of several variables. II. R n. III. Derivatives. IV. Taylor s Theorem. V. Differential Geometry. VI. Applications. 1. Best affine approximations.
More informationOptimization 2. CS5240 Theoretical Foundations in Multimedia. Leow Wee Kheng
Optimization 2 CS5240 Theoretical Foundations in Multimedia Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore Leow Wee Kheng (NUS) Optimization 2 1 / 38
More informationReview of Classical Optimization
Part II Review of Classical Optimization Multidisciplinary Design Optimization of Aircrafts 51 2 Deterministic Methods 2.1 One-Dimensional Unconstrained Minimization 2.1.1 Motivation Most practical optimization
More informationCS-E4830 Kernel Methods in Machine Learning
CS-E4830 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 27. September, 2017 Juho Rousu 27. September, 2017 1 / 45 Convex optimization Convex optimisation This
More informationStochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions
International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.
More informationIn view of (31), the second of these is equal to the identity I on E m, while this, in view of (30), implies that the first can be written
11.8 Inequality Constraints 341 Because by assumption x is a regular point and L x is positive definite on M, it follows that this matrix is nonsingular (see Exercise 11). Thus, by the Implicit Function
More informationEAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science
EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Taylor s Theorem Can often approximate a function by a polynomial The error in the approximation
More informationcomponent risk analysis
273: Urban Systems Modeling Lec. 3 component risk analysis instructor: Matteo Pozzi 273: Urban Systems Modeling Lec. 3 component reliability outline risk analysis for components uncertain demand and uncertain
More information